4,118,149
<p>Let's say I have the following matrix: <span class="math-container">\begin{bmatrix} \frac{1}{3} &amp; \frac{2}{3} &amp; 0 &amp; \frac{2}{3} \\ \frac{2}{3} &amp; -\frac{1}{3} &amp; \frac{2}{3} &amp; 0 \\ a &amp; b &amp; c &amp; d \\ e &amp; f &amp; g &amp; h \end{bmatrix}</span></p> <p>How do I find the last <span class="math-container">$2$</span> rows so that this matrix is orthogonal? I know that it's orthogonal if <span class="math-container">$MM^T = I$</span>, but this isn't helping me, and neither is the Gram–Schmidt process.</p>
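One way to see what the missing rows must look like (my own sketch, not from the post): the last two rows can be any orthonormal basis of the orthogonal complement of the first two rows, which the SVD hands you directly as the trailing right-singular vectors.

```python
import numpy as np

# The first two rows from the question (already orthonormal)
rows = np.array([[1/3,  2/3, 0,   2/3],
                 [2/3, -1/3, 2/3, 0  ]])

# The last rows of Vt form an orthonormal basis of the null space of `rows`,
# i.e. of the orthogonal complement of the first two rows.
_, _, Vt = np.linalg.svd(rows)
M = np.vstack([rows, Vt[2:]])

print(np.allclose(M @ M.T, np.eye(4)))  # True: M is orthogonal
```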
11,123
<p>Putting aside the given initial condition, the constant functions <span class="math-container">$y=1$</span> and <span class="math-container">$y=-1$</span> are solutions.</p> <p>You could not have another continuous solution that takes a value inside <span class="math-container">$(-1,1)$</span> and then also takes a value outside of that interval. If you did, by the intermediate value theorem, there would be an <span class="math-container">$x_1$</span> where <span class="math-container">$y(x_1)=1$</span> (or equals <span class="math-container">$-1$</span>) and now your proposed other solution crosses one of those constant solutions.</p> <p>There cannot be a crossing like that: the two curves would satisfy the same first order differential equation with the same &quot;initial&quot; condition <span class="math-container">$y(x_1)=1$</span>, so (assuming the right-hand side is regular enough, e.g. locally Lipschitz, for the uniqueness theorem to apply) they would have to coincide.</p> <p>So since it's given that <span class="math-container">$y$</span> takes a value inside <span class="math-container">$(-1,1)$</span> (namely <span class="math-container">$0$</span>) then the solution <span class="math-container">$y$</span> can never take a value outside <span class="math-container">$(-1,1)$</span>.</p>
3,412,418
<blockquote> <p>You have been chosen to play a game involving a 6-sided die. You get to roll the die once, see the result, and then may choose to either stop or roll again. Your payoff is the sum of your rolls, unless this sum is (strictly) greater than 6. If you go over <span class="math-container">$6$</span>, you get <span class="math-container">$0$</span>. What's the best strategy?</p> </blockquote> <p>I tried to set up equations with expected value. I think that the best strategy is to roll until you get at least some value, call it <span class="math-container">$x$</span>. But I have not been able to make too much progress. Can someone please help me?</p>
Henry
6,460
<p>Suppose <span class="math-container">$f(x,t)$</span> is your expected return if so far you have rolled <span class="math-container">$x$</span> and you have a threshold <span class="math-container">$t$</span> where you stop when <span class="math-container">$x \ge t$</span>. Then </p> <ul> <li><p><span class="math-container">$f(x,t)=0$</span> if <span class="math-container">$x\gt 6$</span> as you are bust</p></li> <li><p><span class="math-container">$f(x,t)=x$</span> if <span class="math-container">$t \le x \le 6$</span> as you stop</p></li> <li><p><span class="math-container">$f(x,t)=\frac16 \sum\limits_{y=x+1}^{x+6} f(y,t)$</span> if <span class="math-container">$0 \le x \lt t$</span> as you reroll</p></li> </ul> <p>You are interested in <span class="math-container">$f(0,t)$</span> for different <span class="math-container">$t$</span>, and you get </p> <pre><code>t f(0,t) 1 3.5 2 3.888888889 3 4.083333333 4 3.969907407 5 3.396476337 6 2.161394033 </code></pre> <p>So the optimal strategy is <span class="math-container">$t=3$</span>: reroll if you have a total of <span class="math-container">$1$</span> or <span class="math-container">$2$</span>, and otherwise stop. This makes the game worth <span class="math-container">$4.083333333$</span>, as you found.</p>
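The table above can be reproduced with a few lines of memoised recursion over exact fractions (my own sketch; `expected_return` is a name I made up, not from the answer):

```python
from fractions import Fraction
from functools import lru_cache

def expected_return(t):
    """f(0, t): value of the game when you stop as soon as the total x >= t."""
    @lru_cache(maxsize=None)
    def f(x):
        if x > 6:
            return Fraction(0)      # bust
        if x >= t:
            return Fraction(x)      # stop and collect x
        # reroll: average over the six equally likely next totals
        return sum(f(x + roll) for roll in range(1, 7)) / Fraction(6)

    return f(0)

for t in range(1, 7):
    print(t, float(expected_return(t)))

print(max(range(1, 7), key=expected_return))  # 3
```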
2,651,537
<p>Prove that $(5m+3)(3m+1)=n^2$ is not satisfied by any <strong>positive</strong> integers $m,n$.</p> <p>I have been staring at this for some time (it's the difficult part of a competition problem; I won't spoil it by naming the problem). I tried looking at it modulo 3, 4, 5, 7, 8, 16 for a contradiction, as well as looking at the discriminant with respect to $m$. I couldn't finish either way. I wonder whether it can somehow be used that $(-1,2)$ is a solution.</p> <p>There is a very small chance that there are actually positive solutions, but I've ruled out the first few million values of $m$ with a little Python script and, as I said, this was set in a competition, so presumably any solutions would be reasonably small.</p> <p>I've heard somewhere that all diophantine quadratics can be solved by some method and would be interested to read more (though not an entire book, if possible).</p>
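For reference, a brute-force check of the kind the asker mentions might look like this (my own sketch; `first_solution` is a hypothetical name):

```python
from math import isqrt

def first_solution(limit):
    """Smallest positive m <= limit with (5m+3)(3m+1) a perfect square, else None."""
    for m in range(1, limit + 1):
        v = (5*m + 3)*(3*m + 1)
        r = isqrt(v)
        if r*r == v:
            return m
    return None

print(first_solution(10**6))  # None: no positive solution in this range
```

Note that $m=-1$ does give $(5m+3)(3m+1)=(-2)(-2)=4=2^2$, matching the $(-1,2)$ solution mentioned in the question.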
Yong Hao Ng
31,788
<p><strong>1. Solving $(5m+3)(3m+1)=n^2$</strong><br> Here is an elementary approach that only uses modular arithmetic.<br> We do not assume $m\geq 0$ initially, to highlight exactly where it is used in this approach. </p> <p>Since $$3(5m+3)-5(3m+1)=4,$$ we have that $d=\gcd(5m+3,3m+1)$ divides $4$. </p> <p>If $d=1$, then $|5m+3|$ is a square, so either $5m+3$ or $-(5m+3)$ is a square. However, since squares are $0,1$ or $4$ modulo $5$, this is impossible. Hence we must have $d=2$ or $4$. In either case $m$ is odd, so $m=2k+1$. </p> <p>Now the equation reduces to $$ (10k+8)(6k+4)=n^2 $$ Dividing by $4$ gives $$ (5k+4)(3k+2) = (n/2)^2 $$ so now $d'=\gcd(5k+4,3k+2)=1$ or $2$. </p> <p>Suppose first that $d'=2$, so $k$ must be even, say $k=2s$. The new equation after dividing by $4$ is $$ (5s+2)(3s+1) = (n/4)^2 $$ Now $\gcd(5s+2,3s+1)=1$, so, as earlier, $5s+2$ or $-(5s+2)$ must be a square. However this is impossible, as squares are $0,1$ or $4$ modulo $5$. </p> <p>Therefore $d'=1$, hence $|3k+2|$ is a square, so either $3k+2$ or $-(3k+2)$ is a square. Squares are $0$ or $1$ modulo $3$, so the former is impossible. For the latter case, $k\leq -1$, or else $-(3k+2)$ is negative and cannot be a square. </p> <p>We are reduced to the necessary condition that $m=2k+1$ with $k\leq -1$. In particular $k=-1$ is possible, giving $(m,n)=(-1,2)$. This is where requiring $m$ to be positive comes into play: as $m=2k+1$, positive $m$ forces $k\geq 0$, and so there are no solutions. </p> <p><strong>2. 
Quadratic diophantine in two variables</strong><br> For two-variable quadratic diophantine equations of the form $$ax^2+bxy+cy^2+dx+ey+f=0$$ you can refer to posts like <a href="https://math.stackexchange.com/questions/1721471/transforming-diophantine-quadratic-equation-to-pells-equation">this</a> and <a href="https://math.stackexchange.com/questions/580491/general-quadratic-diophantine-equation">this.</a> </p> <p>The idea is that you can multiply by integers and use integral linear transformations to convert the equation into $$ X^2 - uY^2 = v $$ so that original integer solutions $(x,y)$ transform into integer solutions $(X,Y)$. By finding all integer solutions to the latter, we can check which ones satisfy the original. The only difficult case is when $u&gt;0$ is not a square, which leads to a series of Pell-type equations. </p> <p>In a more algebraic setting, these questions are well answered by the theory of prime norms in quadratic number fields. Books like Cox's <em>Primes of the Form $x^2+ny^2$</em> address this. </p> <p>For more than two variables <a href="https://mathoverflow.net/questions/207482/algorithmic-un-solvability-of-diophantine-equations-of-given-degree-with-given">indeed there is an algorithm</a>, but unfortunately I do not know much about it.</p>
10,722
<p>I notice that geometry students frequently have difficulty with representations of 3-dimensional objects in 2 dimensions. Today, we worked with physical manipulatives in order to help visualize where right triangles can occur in 3 dimensions in both pyramids and rectangular prisms (the focus is on fluency with the Pythagorean Theorem and noting its application in many contexts). I chose to create physical manipulatives instead of finding online 3d manipulatives because <em>I felt as if the physical manipulatives would provide more insight than simply seeing a draggable, yet still 2d, projection.</em> </p> <p>For clarity: </p> <p>Physical manipulative in conjunction with 2d drawing: <a href="https://i.stack.imgur.com/PMWeo.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/PMWeo.jpg" alt="Photo of manipulative"></a></p> <p>Some online examples of "virtual manipulatives": <a href="https://www.learner.org/interactives/geometry/3d_pyramids.html" rel="noreferrer">1</a> <a href="http://www.learnalberta.ca/content/mejhm/index.html?l=0&amp;ID1=AB.MATH.JR.SHAP&amp;ID2=AB.MATH.JR.SHAP.SURF&amp;lesson=html/object_interactives/surfaceArea/explore_it.html" rel="noreferrer">2</a></p> <p>My question is: <em>Is there research supporting my intuition?</em> Do students who have difficulty translating between 2D and 3D benefit more from a physical model than from a virtual manipulative, or the other way around?</p>
Daniel R. Collins
5,563
<p>Here's a piece comparing virtual manipulatives to traditional teaching without manipulatives, in the context of community college remedial courses:</p> <blockquote> <p>Violeta Menil and Eric Fuchs, "Teaching Pre-Algebra and Algebra Concepts to Community College Students through the Use of Virtual Manipulatives", Improving Undergraduate Mathematics Learning, CUNY Office of Academic Affairs, 2012. (<a href="http://www.cuny.edu/news/publications/imp.pdf" rel="nofollow">Link</a>)</p> </blockquote> <p>Statistically significant improvements were found in exam performance and student attitude for the prealgebra/arithmetic classes; none were found on either measure in the elementary algebra classes. </p>
637,199
<p>If $K^T=K$, $K^3=K$, $K\mathbf{1}=0$ (where $\mathbf{1}$ is the all-ones vector) and $K\left[\begin{matrix}1\\2 \\-3\end{matrix}\right]=\left[\begin{matrix}1\\2 \\-3\end{matrix}\right]$,</p> <p>how can I find the trace of $K$ and the determinant of $K$?</p> <p>I think for the determinant of $K$: since $K^3-K=(K^2-I)K=0$, then $K^2=I$ since $K$ is nonzero. Then this implies $|K|^2=1$, which implies $|K|=\pm 1$, where the two lines $|\cdot|$ denote the determinant.</p> <p>But I'm not sure if $tr(K^2)=tr(I)=3$?</p>
Cameron Williams
22,551
<p>$K$ has a nontrivial kernel so it is not one-to-one. What does this tell you about its determinant? To determine the trace, you could use the property that the trace is the sum of the eigenvalues.</p>
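The hypotheses do not pin $K$ down uniquely ($K^3=K$ together with symmetry only forces each eigenvalue into $\{-1,0,1\}$), but one concrete matrix satisfying all of them makes the hint tangible. A sketch of my own, choosing eigenvalue $-1$ on the third eigenvector:

```python
import numpy as np

# Orthonormal eigenvectors: K v1 = v1 for v1 = (1,2,-3), K 1 = 0 on the
# all-ones vector, and a third direction orthogonal to both (cross product).
v1 = np.array([1., 2., -3.]); v1 /= np.linalg.norm(v1)
v0 = np.array([1., 1., 1.]);  v0 /= np.linalg.norm(v0)
v2 = np.cross(v0, v1);        v2 /= np.linalg.norm(v2)

# Eigenvalues 1, 0, -1 respectively (the -1 is one of several valid choices)
K = np.outer(v1, v1) - np.outer(v2, v2)

assert np.allclose(K, K.T)                                  # K^T = K
assert np.allclose(K @ K @ K, K)                            # K^3 = K
assert np.allclose(K @ np.ones(3), 0)                       # K 1 = 0
assert np.allclose(K @ np.array([1., 2., -3.]), [1., 2., -3.])

print(np.linalg.det(K))  # 0: the nontrivial kernel forces det(K) = 0
```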
1,440,522
<p>For a function $f:[0,1]\to \mathbb{R}$, let $C$ be the set of points where $f$ is continuous. Prove that $C$ is in the Borel $\sigma$-algebra.</p> <p>I know that $A=\{x: f(x)&lt;a\}$ is open for each real number $a$ when $f$ is continuous, and since openness is preserved by continuity, the set $f^{-1}(A)\cap C$ should also be open. But I don't know how to write a rigorous proof for it. And I feel I need to write $C$ in such a way that it is clear that it can be written as a union or intersection of open sets.</p>
Calvin Khor
80,734
<p>Hint. A point $x$ is a continuity point if $\lim_{y→ x} f(y) = f(x)$. Hence the set of continuity points is $$ S = \left\{ x: \lim_{y→ x} f(y) = f(x) \right\} $$ We know how to express $\lim_{y→ x} f(y) = f(x)$ as a statement of the form $∀ ε&gt;0,\dots$; we can then turn '$∀$'s into '$∩$'s and '$∃$'s into '$∪$'s, and conclude with the density of the rationals.</p>
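<p>One standard way to carry out this hint (a sketch of my own, using the oscillation of $f$, which also covers points where the limit fails to exist): for $n\geq 1$ let $$ A_n=\Big\{x : \exists\,\delta&gt;0 \text{ with } \sup_{y,z\in(x-\delta,x+\delta)}|f(y)-f(z)|&lt;\tfrac1n\Big\}. $$ Each $A_n$ is open (if $\delta$ works for $x$, then $\delta/2$ works for every point within $\delta/2$ of $x$), and $C=\bigcap_{n\geq1}A_n$, so $C$ is a $G_\delta$ set, in particular Borel.</p>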
4,019,956
<p><strong>Preface</strong></p> <p>We will use the following facts:</p> <p>(i) The sequence <span class="math-container">$ \left\lbrace a_n \right\rbrace $</span> is convergent to <span class="math-container">$a$</span> if for each <span class="math-container">$ \varepsilon &gt;0$</span> there exists <span class="math-container">$ N&gt;0 $</span> such that <span class="math-container">$ \vert a_n -a \vert &lt; \varepsilon $</span> whenever <span class="math-container">$ n&gt;N$</span>.</p> <p>(ii) <span class="math-container">$ |A+B| \leqslant |A|+|B| $</span>; and <span class="math-container">$ \vert -A+B \vert = \vert A-B \vert $</span></p> <p>(iii) <span class="math-container">$ (-1)^{2n+1} =-1$</span> and <span class="math-container">$ (-1)^{2n}=1 $</span></p> <p>(iv) The statement <span class="math-container">$ 8&lt;8 $</span> is false, so deriving it yields a contradiction.</p> <p><strong>Proof</strong></p> <p>Suppose the sequence is convergent to <span class="math-container">$ \omega $</span>. Then for each <span class="math-container">$ \varepsilon &gt;0 $</span>, there exists <span class="math-container">$ N&gt;0 $</span>, such that <span class="math-container">$ \vert 4+3(-1)^n - \omega \vert &lt; \varepsilon $</span>, whenever <span class="math-container">$ n&gt;N $</span>. When <span class="math-container">$ n $</span> is even, we have <span class="math-container">$ \vert 7-\omega \vert &lt; \varepsilon $</span>, whenever <span class="math-container">$ n&gt;N $</span>. When <span class="math-container">$ n $</span> is odd, we have <span class="math-container">$ \vert 1-\omega \vert &lt; \varepsilon $</span>, whenever <span class="math-container">$ n&gt; N $</span>. 
We choose <span class="math-container">$ \varepsilon=1 $</span> and consider <span class="math-container">\begin{align*} 8 &amp;=\vert 7+1 \vert=\vert 7 -\omega + \omega +1\vert\\ &amp;=\vert (7-\omega)+4 -3+\omega \vert \\ &amp; \leqslant \vert 7 -\omega \vert + \vert 4 \vert+ \vert -3+\omega \vert \\ &amp;= \vert 7 -\omega \vert + \vert 4 \vert+ \vert -2 -1+\omega \vert \\ &amp; \leqslant \vert 7 -\omega \vert + \vert 4 \vert+\vert -2 \vert + \vert -1+\omega \vert \\ &amp;= \vert 7 -\omega \vert + 6 + \vert 1-\omega \vert \\ &amp; &lt; 2 \varepsilon +6 \\ &amp;= 8 \end{align*}</span> This is a contradiction.</p> <p>Question: Do you like this proof? Is it correct?</p>
sirous
346,566
<p>Another approach:</p> <p><span class="math-container">$p=a^2+ab+b^2-a-2b=a^2+a(b-1)+b^2-2b$</span></p> <p>We solve the polynomial $p$ for <span class="math-container">$a$</span>:</p> <p><span class="math-container">$a=\frac{-(b-1)\pm \sqrt {\Delta}}2$</span></p> <p><span class="math-container">$\Delta=(b-1)^2-4b^2+8b$</span></p> <p><span class="math-container">$a$</span> can be extremal when <span class="math-container">$\Delta=0$</span>; in this case</p> <p>we get <span class="math-container">$b\approx 1/6$</span>, which gives <span class="math-container">$a\approx 5/12$</span> and <span class="math-container">$p\approx-0.48$</span>.</p> <p>Also <span class="math-container">$b\approx -2$</span>, which gives <span class="math-container">$a\approx 0.2$</span> and <span class="math-container">$p\approx 2.2$</span>. We can see that for $b=0, 1$ and $2$, $p$ is zero, and <span class="math-container">$\Delta&lt;0$</span> for <span class="math-container">$b&lt;0$</span> and <span class="math-container">$b&gt;2$</span>.</p> <p>The value <span class="math-container">$p=-0.48$</span> suggests that <span class="math-container">$0&gt;p_{min}&gt;-1$</span>.</p>
1,998,244
<p>Given the equation of a damped pendulum:</p> <p>$$\frac{d^2\theta}{dt^2}+\frac{1}{2}\left(\frac{d\theta}{dt}\right)^2+\sin\theta=0$$</p> <p>with the pendulum starting with $0$ velocity, apparently we can derive:</p> <p>$$\frac{dt}{d\theta}=\frac{1}{\sqrt{\sqrt2\left[\cos\left(\frac{\pi}{4}+\theta\right)-e^{-(\theta+\phi)}\cos\left(\frac{\pi}{4}-\phi\right)\right]}}$$</p> <p>where $\phi$ is the initial angle from the vertical. How can we derive that? Obviously $\frac{dt}{d\theta}$ is the reciprocal of $\frac{d\theta}{dt}$, but I don't see how to deal with the second derivative.</p> <p>I've found a similar derivation at <a href="https://en.wikipedia.org/wiki/Pendulum_(mathematics)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Pendulum_(mathematics)</a>, where the formula</p> <p>$${\frac {d\theta }{dt}}={\sqrt {{\frac {2g}{\ell }}(\cos \theta -\cos \theta _{0})}}$$</p> <p>is derived in the "Energy derivation of Eq. 1" section. However, that uses a conservation of energy argument which is not applicable for a damped pendulum.</p> <p>So how can I derive that equation?</p>
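<p>One possible route (my own sketch, assuming the pendulum starts at rest at $\theta=-\phi$, which is where the quoted formula vanishes): write $v=\frac{d\theta}{dt}$ and use $\frac{d^2\theta}{dt^2}=v\frac{dv}{d\theta}=\frac12\frac{d(v^2)}{d\theta}$. With $u=v^2$ the equation becomes the linear first-order ODE $$\frac{du}{d\theta}+u=-2\sin\theta,$$ which the integrating factor $e^\theta$ solves as $$u=Ce^{-\theta}+\cos\theta-\sin\theta=Ce^{-\theta}+\sqrt2\cos\!\left(\theta+\frac\pi4\right).$$ Imposing $u=0$ at $\theta=-\phi$ gives $C=-\sqrt2\,e^{-\phi}\cos\!\left(\frac\pi4-\phi\right)$, hence $$v^2=\sqrt2\left[\cos\!\left(\frac\pi4+\theta\right)-e^{-(\theta+\phi)}\cos\!\left(\frac\pi4-\phi\right)\right],$$ and $\frac{dt}{d\theta}=1/\sqrt{v^2}$ is the quoted formula.</p>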
Deepak Suwalka
371,592
<p>We can also solve it using a Venn diagram: <a href="https://i.stack.imgur.com/jbn0N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jbn0N.png" alt="enter image description here"></a></p> <p>Now, according to the diagram,</p> <p>$B=17-6=11$</p> <p>$S=18-6=12$</p> <p>$B\&amp;S=6$</p> <p>$NO=5$</p> <p>So the total number of students is</p> <p>$B+S+B\&amp;S+NO=11+12+6+5=34$</p>
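The same count in code (my own sketch; the variable names are just labels for the diagram's regions):

```python
both = 6                # in both B and S
b_only = 17 - both      # B only: 11
s_only = 18 - both      # S only: 12
neither = 5             # NO

total = b_only + s_only + both + neither
print(total)  # 34
```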
50,362
<p>I have a question about the basic idea of singular homology. My question is best expressed in context, so consider the 1-dimensional homology group of the real line $H_1(\mathbb{R})$. This group is zero because the real line is homotopy equivalent to a point. The chain group $C_1(\mathbb{R})$ contains all finite formal linear combinations of continuous maps from the interval $[0,1]$ into $\mathbb{R}$. One such map (call it $\mu$) maps the interval along some path that begins and ends at zero. (For my purposes it doesn't matter how exactly.) This map is a cycle, i.e. is contained in the kernel of $\partial_1:C_1 \rightarrow C_0$, because it begins and ends at the same point. It must be that it is also a boundary, i.e. contained in the image of $\partial_2:C_2 \rightarrow C_1$, because otherwise it would represent a nonzero homology class in $H_1$. My question is about exactly how and why it is a boundary.</p> <p>I have an intuitive understanding of why it is a boundary that does not seem to work when I translate it into formal language, and a formal way to show it is a boundary that does not seem to capture the heart of the intuition. My reference on the formal definitions is Allen Hatcher's <i>Algebraic Topology</i>.</p> <p>Intuitively, $\mu$ maps $[0,1]$ to a loop and then smooshes it into the real line (i.e. $\mu$ factors through $S^1$). The map from the loop to the line could be extended to a disc without losing continuity, since the whole thing gets smooshed anyway. A triangle could be mapped homeomorphically to the disc, and this would give us a map $\zeta: \Delta^2 \rightarrow \mathbb{R}$ of which, intuitively anyway, $\mu$ is the boundary. 
However, formally, $\partial_2 (\zeta)$ is the formal sum of the restriction of $\zeta$ to each of its edges; it is thus a formal sum of <i>three</i> maps from the interval to the real line, and thus is not (formally) equal to $\mu$.</p> <p>Formally, I can define a map $\alpha : \Delta^2 \rightarrow \mathbb{R}$ from a triangle to the real line that does have $\mu$ as a boundary, but I am very unsatisfied with this construction because it involves details that feel essentially extrinsic to the intuition above. Let the vertices of $\Delta^2$ be labeled 0, 1, 2. Map $\Delta^2$ to a disc in the following way: map vertex 0 to the center of the disc; the edges $[0,1]$ and $[0,2]$ to a radius in the same way (so that the restrictions of $\alpha$ to the two edges are equal); the edge $[1,2]$ around the circumference; and extend the map to the interior of the triangle in the obvious way. Then map the disc to the real line as above; the restriction to the circumference is $\mu$. Now, the boundary map $\partial_2$ by definition maps $\alpha$ to $\alpha |_{[0,1]} +\alpha |_{[1,2]}-\alpha |_{[0,2]}$. But $\alpha |_{[0,1]}$ and $\alpha |_{[0,2]}$ are equal and $\alpha |_{[1,2]}$ is equal to $\mu$, so $\partial_2(\alpha)=\mu$.</p> <p>My question is this: is it correct that the intuitive construction of $\zeta$ does not provide an element of $C_2$ with $\mu$ as a boundary? Is it correct that in order to get $\mu$ as a boundary one must use a construction like that of $\alpha$ above? If so, is the intuition that $\mu$ is a boundary because it is a loop that can be extended to a disc before smooshing wrong? Does the fact that $\mu$ is a boundary really hang on the sign convention in the definition of $\partial_2$? 
If so, can you give me a reason for why this sign convention works to guarantee that such a construction will always exist when a cycle "seems like it should be" a boundary?</p> <p>EDIT:</p> <p>I should add, after reading a few very helpful but somehow-unsatisfying-to-me answers, that I am not just interested in the one-dimensional case. (See my comment on MartianInvader's answer.)</p> <p>EDIT (7/12):</p> <p>Thanks for all the help everyone. My immediate acute sense of cognitive dissonance has been addressed, so I'm marking the question answered. I have some residual sense of not getting the whole picture, but expect this to resolve itself with slow processing of more theorems (like homotopy invariance of homology, and the Hurewicz map, thank you Matt E and Dan Ramras).</p>
MartianInvader
8,309
<p>Your construction works, but I think it's a little more complicated than it needs to be. In particular, you can just define a map from a simplex directly, without having to go through a disc or anything.</p> <p>I'd do it by taking a triangle (again, let's call the vertices 0,1,2) and mapping [0,1] and [0,2] via the map that is constantly zero, and mapping [1,2] to the loop $\mu$. This can clearly be extended to the interior of the triangle ($\mathbb{R}$ is simply connected after all).</p> <p>When you take the boundary, the [0,1] and [0,2] edges map to the same constant map, and thus cancel each other out, so you're left with only $\mu$ in the boundary.</p> <p>I agree that it's a little clunky that you need a canceling trick to turn three edges into one (and you'd need to do something similar for a 1-cycle involving two pieces), but it does end up working, and the same trick shows that any nullhomotopic loop is a boundary.</p>
2,005,649
<p>I am struggling to find a parameterization for the following set: </p> <p>$$F=\left\{(x,y,z)\in\mathbb R^3\middle| \left(\sqrt{x^2+y^2}-R\right)^2 + z^2 = r^2\right\} \quad\text{with }R&gt;r$$</p> <p>I also have to calculate the area. </p> <p>I know it's a circle, so we express it in terms of the angle, but my problem is with the $x$ and $y$. They are not defined uniquely by the angle.<br> Please explain in detail, because it is more important for me to understand than the answer itself.</p>
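For what it's worth, the set is a torus, and the standard parameterization $x=(R+r\cos v)\cos u$, $y=(R+r\cos v)\sin u$, $z=r\sin v$ resolves the non-uniqueness worry: $u$ fixes the direction of $(x,y)$ and $v$ fixes the point on the circular cross-section. A numerical sketch of my own, checking the resulting surface area against the known value $4\pi^2 Rr$:

```python
import math

def surface_area(R, r, n=200):
    """Midpoint-rule integral of |P_u x P_v| over the (u, v) square [0, 2pi]^2."""
    def P(u, v):
        return ((R + r*math.cos(v))*math.cos(u),
                (R + r*math.cos(v))*math.sin(u),
                r*math.sin(v))

    h = 2*math.pi/n
    eps = 1e-6
    total = 0.0
    for i in range(n):
        for j in range(n):
            u, v = (i + 0.5)*h, (j + 0.5)*h
            p = P(u, v)
            # forward-difference approximations of the partial derivatives
            pu = [(a - b)/eps for a, b in zip(P(u + eps, v), p)]
            pv = [(a - b)/eps for a, b in zip(P(u, v + eps), p)]
            cross = (pu[1]*pv[2] - pu[2]*pv[1],
                     pu[2]*pv[0] - pu[0]*pv[2],
                     pu[0]*pv[1] - pu[1]*pv[0])
            total += math.hypot(*cross)*h*h
    return total

print(surface_area(2, 1), 4*math.pi**2*2*1)  # both ≈ 78.96
```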
Olivier Moschetta
369,174
<p>Here's a somewhat simpler approach. We need to use 1 uppercase, 1 digit and 1 special character. There are $26\cdot 10\cdot 16$ ways to choose which 3 symbols we want to use. Then we have to choose a spot for them in the password. There are 8 possible spots for the uppercase, 7 for the digit and 6 for the special character. So far we have $$26\cdot 10\cdot 16\cdot (8\cdot 7\cdot 6)$$ different ways of choosing these things. Now the other 5 symbols are completely free, they can be anything. There is a total of $26+26+10+16=78$ symbols, so we can choose these 5 symbols in $78^5$ ways. The answer is then $$26\cdot 10\cdot 16\cdot (8\cdot 7\cdot 6)\cdot 78^5\simeq 4.03\cdot 10^{15}$$</p>
CakeMaster
385,422
<p>Starting from a similar point as you, but with slightly different logic from there, I got...</p> <p>$$78^8-68^8-52^8-62^8+42^8+52^8+36^8-26^8 = 706905960284160 \approx 7.07\cdot 10^{14}$$</p> <p>I started with the total number of possibilities. $$78^8$$ Then, I subtracted all possibilities without a digit, without an upper case, and without a special, respectively. $$-68^8-52^8-62^8$$ Then, using inclusion-exclusion, I added back in all the possibilities having no digits or upper cases, no digits or specials, and no upper cases or specials. $$+42^8+52^8+36^8$$ Finally, once again because of inclusion-exclusion, I subtracted the number of possibilities that have no digits, no upper cases and no specials. $$-26^8$$</p>
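The same inclusion–exclusion sum, written as a loop over which required character classes are absent (my own sketch):

```python
from itertools import combinations

LOWER, UPPER, DIGIT, SPECIAL = 26, 26, 10, 16
TOTAL = LOWER + UPPER + DIGIT + SPECIAL  # 78 symbols in all
LENGTH = 8
required = [DIGIT, UPPER, SPECIAL]

# Inclusion-exclusion: alternate signs over subsets of missing required classes
count = 0
for k in range(len(required) + 1):
    for missing in combinations(required, k):
        count += (-1)**k * (TOTAL - sum(missing))**LENGTH

print(count)  # 706905960284160
```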
2,087,107
<p>In the following integral</p> <p>$$\int \frac {1}{\sec x+ \mathrm {cosec} x} dx $$</p> <p><strong>My try</strong>: I multiplied and divided by $\cos x$ and substituted $\sin x = t$, but this led nowhere.</p>
Community
-1
<p><strong>A big partial fraction decomposition!!</strong>:</p> <p>We use <a href="https://en.m.wikipedia.org/wiki/Tangent_half-angle_substitution" rel="nofollow noreferrer">Weierstrass substitution</a> and use $u =\tan \frac {x}{2} $ to get $$I =\int \frac {1}{\sec x+\mathrm {cosec}x } dx =\int \frac {\sin x\cos x}{\sin x+\cos x} dx= 4\int \frac {u (u^2-1)}{(u^2+1)^2 (u^2-2u-1)} du =4I_1 $$ Now we have $$I_1 =\int \frac {u (u^2-1)}{(u^2+1)^2 (u-\sqrt {2}-1)(u+\sqrt {2}-1)} = -\frac {1}{4} \int \frac {1}{u^2+1} du +\frac{1}{2} \int \frac{u+1}{(u^2+1)^2} du +\frac {4-3\sqrt {2}}{48-32\sqrt {2}} \int \frac {1}{u+\sqrt {2}-1} du +\frac {4+3\sqrt {2}}{32\sqrt {2}+48} \int \frac {1}{u-\sqrt {2}-1} du =I_{11} +I_{12} +I_{13} +I_{14} $$ Hope you can take it from here. </p>
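If you want to double-check the substitution step, a numerical spot-check works (my own sketch using SymPy; not part of the original answer):

```python
import sympy as sp

x, u = sp.symbols('x u')
integrand = 1/(sp.sec(x) + sp.csc(x))

# Weierstrass substitution x = 2*atan(u), dx = 2 du / (1 + u^2)
transformed = integrand.subs(x, 2*sp.atan(u)) * 2/(1 + u**2)
target = 4*u*(u**2 - 1)/((u**2 + 1)**2 * (u**2 - 2*u - 1))

# Compare the two expressions at a few sample points
for val in (0.3, 0.8, 2.5):
    assert abs(float((transformed - target).subs(u, val))) < 1e-12

print("substitution verified")
```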
1,548,130
<p>Let's say that $F$ is a nice well-behaved function. How would I compute the following derivative?</p> <p>$\frac{\partial}{\partial t} \left\{ \int_{0}^{t} \int_{x - t + \eta}^{x + t - \eta} F(\xi,\eta) d\xi d\eta \right\}$</p> <p>I'm guessing I need the fundamental theorem of calculus, but the double integral is REALLY throwing me off - especially that the $t$ is contained in the limits of both integrals. Can someone help me out?</p> <p>EDIT: Given the problem that this came up in, I have hunch that the above derivative is zero.</p>
Dustan Levenstein
18,966
<p>Hint: You have that $g$ is bijective. Use the following facts about bijective, injective and surjective functions:</p> <ul> <li>Every bijective function $h: C \to D$ has an inverse $h^{-1}: D \to C$ so that $h \circ h^{-1} = id_D$ and $h^{-1} \circ h = id_C$.</li> <li>If you compose a bijective function with an injective function (in either order), you get an injective function.</li> <li>If you compose a bijective function with a surjective function (in either order), you get a surjective function.</li> </ul>
1,548,130
<p>Let's say that $F$ is a nice well-behaved function. How would I compute the following derivative?</p> <p>$\frac{\partial}{\partial t} \left\{ \int_{0}^{t} \int_{x - t + \eta}^{x + t - \eta} F(\xi,\eta) d\xi d\eta \right\}$</p> <p>I'm guessing I need the fundamental theorem of calculus, but the double integral is REALLY throwing me off - especially that the $t$ is contained in the limits of both integrals. Can someone help me out?</p> <p>EDIT: Given the problem that this came up in, I have hunch that the above derivative is zero.</p>
goblin GONE
42,339
<p>Use the following facts to do your heavy lifting:</p> <blockquote> <p><strong>Proposition A.</strong> Given functions $g : Z \leftarrow Y$ and $f : Y \leftarrow X$...</p> <ol> <li><p>If $g \circ f$ is surjective, then so too is $g$.</p></li> <li><p>If $g \circ f$ is injective, then so too is $f$.</p></li> </ol> </blockquote> <p>The proof is left as an exercise for the reader.</p> <blockquote> <p><strong>Proposition B.</strong> Given a function $f : Y \leftarrow X$ and a pair of isomorphisms $$\beta : Z \leftarrow Y \qquad \alpha : X \leftarrow W,$$ we have:</p> <ol> <li><p>The following are equivalent:</p> <p>$f$ is surjective, $\qquad$ $\beta \circ f$ is surjective, $\qquad$ $f \circ \alpha$ is surjective</p></li> <li><p>The following are equivalent:</p> <p>$f$ is injective, $\qquad$ $\beta \circ f$ is injective, $\qquad$ $f \circ \alpha$ is injective</p></li> </ol> </blockquote> <p>Once again, the proof is left as an exercise for the reader.</p> <p>Let us now turn our attention to your problem.</p> <blockquote> <p><strong>Claim.</strong> Given functions $q : A \leftarrow B$ and $p : B \leftarrow A$, if $q \circ p$ is surjective and $p \circ q$ is injective, then $q$ and $p$ are isomorphisms.</p> </blockquote> <p><em>Proof.</em> From A1 and the surjectivity of $q \circ p$, we deduce that $q$ is surjective. From A2 and the injectivity of $p \circ q$, we deduce that $q$ is injective. Hence $q$ is an isomorphism.</p> <p>From B1 and the surjectivity of $q \circ p$, we deduce that $p$ is surjective. From B2 and the injectivity of $p \circ q$, we deduce that $p$ is injective. Hence $p$ is an isomorphism. QED</p>
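Proposition A is easy to sanity-check exhaustively on small finite sets (my own sketch, checking every pair of maps $2\to3\to2$):

```python
from itertools import product

def injective(f):
    return len(set(f.values())) == len(f)

def surjective(f, codomain):
    return set(f.values()) == set(codomain)

X, Y, Z = range(2), range(3), range(2)
for fv in product(Y, repeat=len(X)):          # all f : X -> Y
    for gv in product(Z, repeat=len(Y)):      # all g : Y -> Z
        f = dict(zip(X, fv))
        g = dict(zip(Y, gv))
        gof = {a: g[f[a]] for a in X}
        if surjective(gof, Z):
            assert surjective(g, Z)           # A1
        if injective(gof):
            assert injective(f)               # A2

print("Proposition A holds for all maps 2 -> 3 -> 2")
```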
1,220,502
<p>The problem is this. Given that $\int_0^a f(x) dx = \int_0^a f(a-x)dx$, evaluate $$\int_0^\pi \frac{x\sin x}{1+\cos^2x} dx$$</p> <p>I write the integral as $$\int_0^\pi \frac{(\pi-x)\sin(\pi-x)}{1+\cos^2(\pi-x)} dx$$ but I don't see how that helps to do it.</p>
Mercy King
23,304
<p>Using the property $$ \int_0^af(x)\,dx=\int_0^af(a-x)\,dx, $$ with $$ f(x)=\frac{x\sin x}{1+\cos^2x},\quad a=\pi $$ we have \begin{eqnarray} \int_0^\pi f(x)\,dx&amp;=&amp;\int_0^\pi f(\pi-x)\,dx=\int_0^\pi\frac{(\pi-x)\sin(\pi-x)}{1+\cos^2(\pi-x)}\,dx\\ &amp;=&amp;\int_0^\pi\frac{(\pi-x)\sin x}{1+\cos^2x}\,dx=\pi\int_0^\pi\frac{\sin x}{1+\cos^2x}\,dx-\int_0^\pi f(x)\,dx, \end{eqnarray} and so $$ \int_0^\pi f(x)\,dx=\frac\pi2\int_0^\pi\frac{\sin x}{1+\cos^2x}\,dx=-\frac\pi2\arctan(\cos x)\Big|_0^\pi=-\frac\pi2(\arctan(-1)-\arctan(1))=\frac{\pi^2}{4}. $$</p>
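A quick numerical cross-check of the value $\pi^2/4\approx 2.4674$ (my own sketch, using Simpson's rule):

```python
import math

def f(x):
    return x*math.sin(x)/(1 + math.cos(x)**2)

# Simpson's rule on [0, pi] with an even number of subintervals
n = 1000
h = math.pi/n
s = f(0) + f(math.pi)
for i in range(1, n):
    s += (4 if i % 2 else 2)*f(i*h)
approx = s*h/3

print(approx, math.pi**2/4)  # both ≈ 2.4674
```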
1,955,591
<p>I have to prove that '(p ⊃ q) ∨ (q ⊃ p)' is a tautology. I have to start by giving assumptions like a1 ⇒ p ⊃ q and then proceed by eliminating my assumptions; at the end I should have something like ⇒ (p ⊃ q) ∨ (q ⊃ p), but I could not figure out how to start.</p>
Nitin Uniyal
246,221
<p>$(p\rightarrow q)\lor (q\rightarrow p)\equiv (\lnot p\lor q)\lor (\lnot q\lor p)\equiv \lnot p\lor (q\lor\lnot q)\lor p\equiv \lnot p\lor T\lor p\equiv \lnot p\lor p\equiv T$</p>
816,088
<blockquote> <p>The sum of two variable positive numbers is $200$. Let $x$ be one of the numbers and let the product of these two numbers be $y$. Find the maximum value of $y$.</p> </blockquote> <p><em>NB</em>: I'm currently on the stationary points of the calculus section of a text book. I can work this out in my head as $100 \times 100$ would give the maximum value of $y$. But I need help making this into an equation and differentiating it. Thanks!</p>
lab bhattacharjee
33,337
<p>For positive real $a,b$</p> <p>$$\frac{a+b}2\ge \sqrt{ab}\implies ab\le\frac{(a+b)^2}4$$</p> <p>which can also be shown as follows $$(a+b)^2-4ab=(a-b)^2\ge0$$</p> <p>Alternatively if $\displaystyle a+b=k, ab=a(k-a)=\frac{k^2-(k-2a)^2}4\le\frac{k^2}4$</p>
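For the concrete case in the question ($a+b=200$, so $k=200$), the textbook route the asker wants can be sketched with SymPy (my own code): set $y=x(200-x)$ and find the stationary point.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = x*(200 - x)                     # product of two numbers summing to 200

crit = sp.solve(sp.Eq(sp.diff(y, x), 0), x)   # dy/dx = 200 - 2x = 0
print(crit, y.subs(x, crit[0]))               # [100] 10000
# d2y/dx2 = -2 < 0, so x = 100 is indeed a maximum
```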
67,929
<p>Joel David Hamkins in an answer to my question <a href="https://mathoverflow.net/questions/67259/countable-dense-sub-groups-of-the-reals">Countable Dense Sub-Groups of the Reals</a> points out that "one can find an uncountable chain of countable dense additive subgroups of $\mathbb{R}$ whose subset relation has the order type of the continuum $\langle \mathbb{R},&lt;\rangle$."</p> <p>I would like to know what is the cardinality of the set of countable dense additive subgroups of the reals (up to isomorphism)? Is it undecidable?</p>
André Henriques
5,690
<p>A countable abelian group can be encoded by a <i>presentation</i>. Such a presentation has countably many generators, and countably many relations (each one being a finite word in the generators).</p> <p>The isomorphism type of the group is encoded by a countably infinite word in a countable alphabet. The cardinality of those is equal to the continuum. Hence, there are <i>at most</i> continuum many isomorphism types. Joel David Hamkins has already provided a lower bound, therefore, there are <i>exactly</i> continuum many isomorphism types.</p>
1,871,103
<blockquote> <p>Given that $a_{1}=0$, $a_{2}=1$ and $$a_{n+2}=\frac{(n+2)a_{n+1}-a_{n}}{n+1}$$ prove that $\lim\limits_{n\to\infty} a_n=e$</p> </blockquote> <p>What I did:</p> <p>It was hinted to prove that $a_{n+1}-a_{n}=\frac{1}{n!}$ which I did inductively. But then using this information now I get:</p> <p>$a_{n+1}=a_{1}+\frac{n}{n!}=\frac{1}{(n-1)!}$ so $a_{n}=\frac{n-1}{n!}$.</p> <p>Now $a_{n}$ is bounded above by $\sum_{k=0}^{n}\frac{1}{k!}$ which converges to $e$. I can't find a lower bound that converges to $e$ as well. I'm starting to think I was supposed to go about this differently.</p> <p>Thanks in advance.</p>
Community
-1
<p>Instead, you should have got:</p> <p>$$a_{n+1} = a_n + \frac1{n!} = a_{n-1} + \frac1{(n-1)!} + \frac1{n!}= \cdots = a_1 + \frac1{1!} + \frac1{2!} + \cdots + \frac1{(n-1)!} + \frac1{n!}$$</p> <p>i.e.</p> <p>$$a_{n+1} = \sum_{k=1}^n \frac1{k!} = \sum_{k=0}^n \frac1{k!} - 1$$</p> <p>Hence, the limit is actually $e - 1$.</p> <p>If, instead, the sequence was defined as $a_0 = 0$ and $a_1 =1$, then the limit would have been $e$ (by the same reasoning as above).</p>
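<p>As a quick numerical illustration (not part of the proof), one can iterate the recurrence directly and compare with the closed form $\sum_{k=1}^n \frac1{k!} \to e-1$:</p>

```python
import math

# Iterate a_{n+2} = ((n+2) a_{n+1} - a_n) / (n+1) with a_1 = 0, a_2 = 1
# and compare the result with the closed form sum_{k=1}^n 1/k!  ->  e - 1.
a, b = 0.0, 1.0  # a_1, a_2
for n in range(1, 50):
    a, b = b, ((n + 2) * b - a) / (n + 1)
# b now holds a_51, which should be very close to e - 1
```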
96,211
<p>A modulus of continuity for a function $f$ is a continuous increasing function $\alpha$ such that $\alpha(0) = 0$ and $|f(x) - f(y)| &lt; \alpha(|x-y|)$ for all $x$ and $y$. I am trying to prove that an equicontinuous family $\mathcal F$ of functions has a common modulus of continuity. This seems intuitively obvious, but I am having difficulty proving continuity. So far, I have defined </p> <p>$\alpha(\delta) = \sup\{|f(x) - f(y)| : d(x,y) \leq \delta, f\in \mathcal F\} $. </p> <p>I want to show that this function is continuous in $\delta$. Any suggestions?</p>
Beni Bogosel
7,327
<p>Suppose $\delta_n$ is increasing and $\delta_n \to \delta$. Denote $\alpha(\delta)=\sup \{ |f(x)-f(y)| : d(x,y)&lt;\delta, f \in \mathcal F\}$ (with strict inequality). Then pick $x,y$ with $d(x,y)&lt;\delta$. This means that there exists $n$ such that $d(x,y)&lt;\delta_n$ and therefore $|f(x)-f(y)| \leq \alpha(\delta_n)\leq \lim_{n \to \infty}\alpha(\delta_n)\leq \alpha(\delta)$. Taking the supremum on the left side, we get $\alpha(\delta)=\lim_{n \to \infty}\alpha(\delta_n)$. This proves that $\alpha$ is left continuous. </p> <p>I'm not sure about the right-continuity right now. Maybe it works in the same way, but I didn't manage to get it just now.</p>
100,955
<p>I'm trying to find the most general harmonic polynomial of form $ax^3+bx^2y+cxy^2+dy^3$. I write this polynomial as $u(x,y)$. </p> <p>I calculate $$ \frac{\partial^2 u}{\partial x^2}=6ax+2by,\qquad \frac{\partial^2 u}{\partial y^2}=2cx+6dy $$ and conclude $3a+c=0=b+3d$. Does this just mean the most general harmonic polynomial has form $ax^3-3dx^2y-3axy^2+dy^3$ with the above condition on the coefficients? "Most general" is what my book states, and I'm not quite sure what it means.</p> <p>Also, I want to find the conjugate harmonic function, say $v$. I set $\frac{\partial v}{\partial y}=\frac{\partial u}{\partial x}$ and $\frac{\partial v}{\partial x}=-\frac{\partial u}{\partial y}$ and find $$ v=dx^3+3ax^2y+bxy^2-ay^3+K $$ for some constant $K$. By integrating $\frac{\partial v}{\partial x}$, finding $v$ as some polynomial in $x$ and $y$ plus some function in $y$, and then differentiating with respect to $y$ to determine that function in $y$ up to a constant. Is this the right approach?</p> <p>Finally, the question asks for the corresponding analytic function. Is that just $u(x,y)+iv(x,y)$? Or something else? Thanks for taking the time to read this.</p>
Semiclassical
137,524
<p>An alternative approach is to 'guess' the form of the final analytic function. We want $f(z)=u(x,y)+i v(x,y)$ where $z=x+i y$ and $u(x,y)$ is a homogeneous cubic polynomial. Since $u(x,y)$ is cubic, we infer that $f(z)$ should also be some cubic polynomial; since it is homogeneous, we further conclude that $f(z)=\alpha z^3$ for some $\alpha$. (If it contained $z^2$, for instance, $u(x,y)$ would have terms of degree 2 and wouldn't be homogeneous.) Writing $\alpha=a+i d$ then yields</p> <p>$$f(z)=(a+i d)(x+i y)^3=(ax^3-3dx^2y-3axy^2+dy^3)+i(dx^3+3ax^2y-3dxy^2-ay^3)$$ in agreement with $u(x,y)$, $v(x,y)$ as identified in the OP.</p>
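<p>A numeric spot-check of this expansion at an arbitrary sample point (purely illustrative):</p>

```python
# Evaluate f(z) = (a + i d) z^3 at z = x + i y and compare its real and
# imaginary parts with the polynomials u(x, y) and v(x, y) above.
a, d = 2.0, -3.0   # sample coefficients
x, y = 0.7, 1.3    # sample point
f = (a + 1j * d) * (x + 1j * y) ** 3
u = a * x**3 - 3 * d * x**2 * y - 3 * a * x * y**2 + d * y**3
v = d * x**3 + 3 * a * x**2 * y - 3 * d * x * y**2 - a * y**3
```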
660,461
<p>$A = \{a,b,c,d,e\}$</p> <p>$B = \{a,b,c\}$</p> <p>$C = \{0,1,2,3,4,5,6\}$</p> <p>The first few iterations are as follows:</p> <p>$1.$ $a,a,0$</p> <p>$2.$ $b,b,1$</p> <p>$3.$ $c,c,2$</p> <p>$4.$ $d,a,4$</p> <p>$5.$ $e,b,5$</p> <p>$...$</p> <p>I'm trying to figure out at which iterations we will have $x,y,z$ such that $x=y$ and $z=5$. I wrote a program to "solve" this problem for me, and I discovered that this happens at iterations $32,46,60,137,151,165...$. The problem for me is that I don't see how to derive this pattern. In particular, the following are true:</p> <p>$32.$ $c,c,5$</p> <p>$...$</p> <p>$46.$ $b,b,5$</p> <p>$...$</p> <p>$60.$ $a,a,5$</p> <p>$...$</p>
Casteels
92,730
<p>Firstly, I think your program is incorrect as the $32$nd iteration is $(b,b,3)$ and the first iteration of $(x,y,z)$ with $x=y$ and $z=5$ is $(c,c,5)$ at iteration 48. </p> <p>In any case, the iterations you desire will occur when $n$ is a solution to the system \begin{align*} n &amp;\equiv 1,2\text{ or }3\pmod{15}\\ n&amp;\equiv 6\pmod 7 \end{align*}</p> <p>The first equation comes from the iterations $(x,y,z)$ with $x=y$, and the second takes into account the iterations with $z=5$. </p>
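<p>A brute-force simulation (assuming, as this answer does, that the three lists cycle independently and that iterations are numbered from $1$) confirms both the first hit at iteration $48$ and the congruence description:</p>

```python
A = ['a', 'b', 'c', 'd', 'e']
B = ['a', 'b', 'c']
C = [0, 1, 2, 3, 4, 5, 6]

def triple(n):
    # n-th iteration (1-indexed), each list cycling independently
    return A[(n - 1) % 5], B[(n - 1) % 3], C[(n - 1) % 7]

hits = [n for n in range(1, 200)
        if triple(n)[0] == triple(n)[1] and triple(n)[2] == 5]
# hits begins 48, 62, 76, ... and every hit satisfies
# n = 1, 2 or 3 (mod 15) together with n = 6 (mod 7)
```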
458,779
<p>I need to find the volume of the solid bounded by the cylinder $x^{2}+z^{2} &lt; 8$ and the planes $y=0$ and $y=2$. It would be easy if the cylinder were "parallel" to the XY plane, because then:</p> <p>$$0 &lt; r &lt; 2\sqrt{2}$$ $$0 &lt; \phi &lt; 2\pi$$ $$0 &lt; z &lt; 2$$</p> <p>But well, how should I handle this here?</p>
Mikasa
8,581
<p>You can also use the following limits, indicating that we are using Cartesian coordinates. I use the symmetry of the solid as well.</p> <p>$$4\int_{x=0}^{\sqrt{8}}\int_{z=0}^{\sqrt{8-x^2}}\int_{y=0}^2 dy\,dz\,dx$$</p> <p><img src="https://i.stack.imgur.com/UQxeY.png" alt="enter image description here"></p>
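<p>As a numerical sanity check (not needed for the computation), a midpoint Riemann sum of this iterated integral approaches the cylinder volume $\pi\cdot 8\cdot 2=16\pi$:</p>

```python
import math

# Midpoint Riemann sum for 4 * int_0^sqrt(8) int_0^sqrt(8-x^2) int_0^2 dy dz dx.
# The exact value is the volume of a cylinder of radius sqrt(8) and height 2.
N = 200_000
h = math.sqrt(8) / N
approx = 8 * h * sum(math.sqrt(8 - ((i + 0.5) * h) ** 2) for i in range(N))
exact = 16 * math.pi
```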
525,326
<p>If all elements of $S$ are irrational and bounded from below by $\sqrt 2$ then $\inf S$ must be irrational.</p> <p>I would say this statement is true since, for $S=\{ \sqrt 2, \sqrt 3, \sqrt 5,\ldots\}$, the greatest lower bound is $\sqrt 2$, which is irrational and a lower bound for the set. </p> <p>Is this correct?</p>
Community
-1
<p>Consider $$S = \left\{2 + \frac{\sqrt{2}}{n} : n = 1, 2, 3, \dots\right\}$$</p> <p>Then every element of $S$ is larger than $\sqrt{2}$, $S$ contains no rational entries, and $\inf S = 2$ is rational.</p>
201,173
<p>I have a problem solving this equation: $$ \left(\frac{1+iz}{1-iz}\right)^4 = \frac12 + i {\sqrt{3}\over 2} $$ I know how to solve equations that are like: $$ w^4 = \frac12 + i {\sqrt{3}\over 2} $$ And I have solved it to: $$ w = \cos\left(-\frac{\pi}{12} + \frac{\pi k}{2}\right) + i\sin\left(-\frac{\pi}{12} + \frac{\pi k}{2}\right) $$ But now I have: $$ w = \frac{1+iz}{1-iz} $$ How does one get the complex $z$? Or am I solving it wrong?</p>
Michael Hardy
11,667
<p>$$ w=\frac{1+iz}{1-iz} $$ First, multiply both sides by $1-iz$: $$ w(1-iz) = 1+iz $$ Expand the left side: $$ w-wiz = 1+iz $$ Put all terms involving $z$ on one side and those not involving $z$ on the other side: $$ w-1=iz+wiz $$ Factor $$ w-1 = iz(1+w) $$ Divide both sides by $i(1+w)$: $$ z= \frac{w-1}{i(w+1)} $$ Multiply the numerator and denominator by the conjugate, $-i$: $$ z = -i\frac{w-1}{w+1} = i\frac{1-w}{1+w}. $$</p>
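<p>A quick round-trip check of the final formula at a sample value of $w$ (any $w \neq -1$ works):</p>

```python
# z = i(1 - w)/(1 + w) should invert w = (1 + i z)/(1 - i z).
w = 0.3 + 0.8j                      # sample value, w != -1
z = 1j * (1 - w) / (1 + w)
back = (1 + 1j * z) / (1 - 1j * z)  # should recover w
```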
2,919,266
<p>Let $(x_n)$ be a sequence in $(-\infty, \infty]$. </p> <p>Could we define the sequence $(x_n)$ so that limsup$(x_n) = -\infty$? </p> <p>My intuitive thought is no, but I’m not 100% sure. </p>
Nosrati
108,128
<p>Using the telescoping sum $$\sum_{n=1}^k\dfrac{n}{(n+1)(n+2)}2^n=\sum_{n=1}^k\left(\dfrac{2^{n+1}}{n+2}-\dfrac{2^{n}}{n+1}\right)=\dfrac{2^{k+1}}{k+2}-1$$</p>
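<p>The identity can be verified with exact rational arithmetic (an illustration, not a proof):</p>

```python
from fractions import Fraction

# Check sum_{n=1}^{k} n/((n+1)(n+2)) * 2^n == 2^(k+1)/(k+2) - 1 for small k.
for k in range(1, 20):
    s = sum(Fraction(n, (n + 1) * (n + 2)) * 2**n for n in range(1, k + 1))
    assert s == Fraction(2 ** (k + 1), k + 2) - 1
```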
3,337,440
<p>Suppose that, in a memoryless way, an object A can suddenly transform into object B or object C. Once it transforms, it can no longer transform again (so if it becomes B, it cannot become C, and vice versa) </p> <p>Suppose that the pdf of an object A becoming object B is </p> <p><span class="math-container">$$\lambda e^{-\lambda t}$$</span></p> <p>Where <span class="math-container">$t$</span> is the time of the transition</p> <p>And that same object A becoming object C instead has a pdf of</p> <p><span class="math-container">$$\mu e^{-\mu t}$$</span></p> <p>We can integrate over the timespan <span class="math-container">$\tau$</span> of the experiment to derive</p> <p>P(A -> B over timespan <span class="math-container">$\tau$</span>) = <span class="math-container">$1-e^{-\lambda \tau}$</span></p> <p>P(A -> C over timespan <span class="math-container">$\tau$</span>) = <span class="math-container">$1-e^{-\mu \tau}$</span></p> <p>But the events A -> B over timespan <span class="math-container">$\tau$</span> and A -> C over timespan <span class="math-container">$\tau$</span> are mutually exclusive, so in theory </p> <p>P(A transitions to B or C over timespan <span class="math-container">$\tau$</span>) = P(A -> B over timespan <span class="math-container">$\tau$</span>) + P(A -> C over timespan <span class="math-container">$\tau$</span>) = <span class="math-container">$1-e^{-\lambda \tau} + 1-e^{-\mu \tau} = 2-(e^{-\lambda \tau} + e^{-\mu \tau})$</span></p> <p>But this is clearly incorrect, because as the timespan <span class="math-container">$\tau$</span> increases without bound, the probability of transition increases to be greater than 1, which is impossible.</p> <p>Where did I go wrong? At first pass, everything I did seems correct, but it obviously isn't. </p>
Henry
6,460
<p>The way you have set up the question does not work, and you have demonstrated that it does not work.</p> <p>So let's create a system that does work involving memorylessness and your two rates of <span class="math-container">$\lambda$</span> and <span class="math-container">$\mu$</span>:</p> <ul> <li><p>Suppose you have two exponentially distributed random variables: <span class="math-container">$X$</span> with rate <span class="math-container">$\lambda$</span> and <span class="math-container">$Y$</span> with rate <span class="math-container">$\mu$</span>. Let <span class="math-container">$Z=\min(X,Y)$</span></p> <ul> <li><p><span class="math-container">$Z$</span> is exponentially distributed with rate <span class="math-container">$\lambda+\mu$</span>, so with pdf <span class="math-container">$(\lambda+\mu)e^{-(\lambda+\mu) t}$</span> and cdf <span class="math-container">$1-e^{-(\lambda+\mu) t}$</span> for <span class="math-container">$z \gt 0$</span></p></li> <li><p><span class="math-container">$\mathbb P(Z=X) = \mathbb P(X\lt Y ) = \frac{\lambda}{\lambda+\mu}$</span> and <span class="math-container">$\mathbb P(Z=Y) = \mathbb P(X\gt Y ) = \frac{\mu}{\lambda+\mu}$</span></p></li> </ul></li> <li><p>Now let's say <span class="math-container">$A$</span> transforms at time <span class="math-container">$Z$</span>, and transforms into <span class="math-container">$B$</span> if <span class="math-container">$X \lt Y$</span> but into <span class="math-container">$C$</span> if <span class="math-container">$X \gt Y$</span></p> <ul> <li>in a sense this has a memoryless property since <span class="math-container">$Z$</span> is exponentially distributed, and if <span class="math-container">$Z \gt s$</span> then the conditional distribution of <span class="math-container">$Z-s$</span> is the same as the original distribution of <span class="math-container">$Z$</span>, while the conditional probabilities of <span class="math-container">$A$</span> transforming into <span 
class="math-container">$B$</span> or into <span class="math-container">$C$</span> have not changed from the original probabilities</li> </ul></li> <li><p>In this form </p> <ul> <li>The probability <span class="math-container">$A$</span> changes into <span class="math-container">$B$</span> by time <span class="math-container">$\tau$</span> is <span class="math-container">$\frac{\lambda}{\lambda+\mu}\left(1-e^{-(\lambda+\mu) \tau}\right)$</span>, which has a derivative of <span class="math-container">$\lambda e^{-(\lambda+\mu) \tau}$</span> </li> <li>The probability <span class="math-container">$A$</span> changes into <span class="math-container">$C$</span> by time <span class="math-container">$\tau$</span> is <span class="math-container">$\frac{\mu}{\lambda+\mu}\left(1-e^{-(\lambda+\mu) \tau}\right)$</span>, which has a derivative of <span class="math-container">$\mu e^{-(\lambda+\mu) \tau}$</span> </li> <li>These are not in the usual sense cdfs (individually they do not approach <span class="math-container">$1$</span> as <span class="math-container">$\tau$</span> increases, though their sum does) and pdfs (individually they do not integrate to <span class="math-container">$1$</span> over positive <span class="math-container">$\tau$</span>, though their sum does) </li> </ul></li> </ul>
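<p>The identity $\mathbb P(Z=X)=\frac{\lambda}{\lambda+\mu}$ from the first bullet is easy to corroborate by simulation (the rates $2$ and $3$ below are arbitrary sample values):</p>

```python
import random

# Monte Carlo check that P(X < Y) is close to lam/(lam+mu)
# for X ~ Exp(lam), Y ~ Exp(mu).
random.seed(0)
lam, mu = 2.0, 3.0
trials = 200_000
wins = sum(random.expovariate(lam) < random.expovariate(mu)
           for _ in range(trials))
freq = wins / trials  # should be close to 2/5
```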
2,450,245
<p>A set $Q$ contains $0$, $1$ and the average of all elements of every finite non-empty subset of $Q$. Prove that $Q$ contains all rational numbers in $[0,1]$.</p> <p>This is the exact wording, as it was given to me. Obviously, the elements that correspond to the average are rational, since they can be expressed as $\frac{q_1+q_2+\dots+q_k}{k}$, where $q_1,\dots,q_k\in Q$ and $k\geq1$. But I don't know how to proceed. </p>
Michael Burr
86,421
<p>Let's explore the first few possibilities to see what's going on.</p> <ul> <li><p>You already know that $Q$ contains $0$ and $1$.</p></li> <li><p>The average of $0$ and $1$ is $\frac{1}{2}$, so this is in $Q$ too.</p></li> <li><p>The average of $0$ and $\frac{1}{2}$ is $\frac{1}{4}$ and the average of $\frac{1}{2}$ and $1$ is $\frac{3}{4}$.</p></li> <li><p>For $\frac{1}{3}$, we see that this is the average of $0$, $\frac{1}{4}$, and $\frac{3}{4}$. Similarly, $\frac{2}{3}$ is the average of $1$, $\frac{3}{4}$, and $\frac{1}{4}$.</p></li> </ul> <p>Now, let's prove this more completely (although somewhat sketchy because the complete rigor is somewhat messy).</p> <p>Step 1: Prove that all dyadic numbers are in this set. A dyadic number is a fraction of the form $\frac{a}{2^n}$ for integer $a$ and some natural number $n$. For our case, it is safe to assume that $a$ is odd and that $0&lt;a&lt;2^n$.</p> <p>Proof by induction on $n$, the only dyadic number (satisfying the conditions) when $n=1$ is $\frac{1}{2}$, which is the average of $0$ and $1$.</p> <p>Inductive step, consider the dyadic number $\frac{a}{2^{n+1}}$. We can observe that this is the average of $\frac{a+1}{2^{n+1}}$ and $\frac{a-1}{2^{n+1}}$. Note that both $a+1$ and $a-1$ are even, so these are actually dyadic numbers with smaller denominators, so they are in the set by the inductive hypothesis (if $a+1=2^{n+1}$ or $a-1=0$, then these numbers are $0$ or $1$, which are given to be in the set).</p> <p>Step 2: Prove that all rational numbers are in the set using dyadic numbers.</p> <p>Let $\frac{p}{q}$ be an arbitrary rational number. To get this rational number, it must be an average of $q$ numbers whose sum is $p$. We can write $p$ as a sum of dyadic numbers in the following way:</p> <p>Choose $p$ dyadic numbers that are all near $1$, for example, $\frac{2^n-1}{2^n}$. With various $n$'s large enough, the difference between the sum of these numbers and $p$ is a small (close to zero) dyadic number. 
It would be of the form $\frac{a}{2^n}$ for $a$ relatively small.</p> <p>We now need to write $\frac{a}{2^n}$ as a sum of $q-p$ dyadic numbers. We can rewrite this fraction as $\frac{a2^m}{2^{n+m}}$ so that $a2^m$ is larger than $q-p$, let this be $\frac{b}{2^{n+m}}$, observe that this is the sum of $\frac{b-1}{2^{n+m+1}}$ and $\frac{b+1}{2^{n+m+1}}$. By repeatedly splitting the smallest element of the set in this way until you have $q-p$ elements, you have a collection whose average is $\frac{p}{q}$. Note that by the choice of $b$, you'll never reach $0$ using this splitting technique (and that all of the numbers are distinct).</p>
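<p>The warm-up computations at the start of this answer can be replayed with exact rational arithmetic (illustration only):</p>

```python
from fractions import Fraction

# Reproduce the averaging steps listed above, exactly.
half = (Fraction(0) + Fraction(1)) / 2
quarter = (Fraction(0) + half) / 2
three_quarters = (half + Fraction(1)) / 2
third = (Fraction(0) + quarter + three_quarters) / 3      # average of 3 elements
two_thirds = (Fraction(1) + three_quarters + quarter) / 3
```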
2,189,123
<p>There are 36 gangsters, and several gangs these gangsters belong to. No two gangs have identical roster, and no gangster is an enemy of anyone in their gang. However, each gangster has at least one enemy in every gang they are <strong>not</strong> in. What is the greatest possible number of gangs? </p>
Aravind
4,959
<p>I show that $3^{12}$ is the optimal value as found by bof and Makholm. Let A,B be enemies. Let $N_A$ be the number of gangs containing A for which there is a gangster C such that A is the only enemy of C in that gang. Similarly, let $N_B$ be the number for B. If $N_A \leq N_B$, then replace the enemies of A with the enemies of B, remove the gangs containing A and for each gang $S$ containing B, make a gang $S \setminus \{B\} \cup \{A\}$. This can be seen to be a valid ganging without decreasing the number of gangs.</p> <p>Repeat this process until every pair of enemies is homogeneous, that is, until we have: X,Y are enemies if and only if for all gangs $S$ containing X, the set $S \setminus \{X\} \cup \{Y\}$ is a gang.</p> <p>Once this is done, let the enemy cliques be of sizes $n_1,n_2,\ldots,n_k$. In every gang, there must be exactly one gangster from each enemy clique. Thus the number of gangs is the product of $n_i$s with sum equal to 36. This is probably optimized when all $n_i$s are equal. In this case, the optimal value is easily checked to be $3^{12}$.</p>
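<p>The final optimization step — that the product of positive integers summing to $36$ is maximized at $3^{12}$ — can be checked by brute force with a small dynamic program (illustrative sketch):</p>

```python
from functools import lru_cache

# best(n) = maximum product of positive parts summing to n
# (taking n itself as a single part is allowed).
@lru_cache(maxsize=None)
def best(n):
    if n == 0:
        return 1
    return max(k * best(n - k) for k in range(1, n + 1))

# Using twelve 3s is optimal: best(36) = 3**12 = 531441.
```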
2,459,123
<p>My attempt:</p> <p>Step 1 $n=4 \quad LHS = 4! = 24 \quad RHS=4^2 = 16$</p> <p>Therefore $P(4)$ is true.</p> <p>Step 2 Assuming $P(n)$ is true for $n=k, \quad k!&gt;k^2, k&gt;3$</p> <p>Step 3 $n=k+1$</p> <p>$LHS = (k+1)! = (k+1)k! &gt; (k+1)k^2$ (follows from Step 2)</p> <p>I am getting stuck at this step.</p> <p>How do I show $(k+1)k^2 &gt; (k+1)^2$ ?</p>
Ian
83,396
<p>The Baire category theorem says that if $A=\bigcap_{n=1}^\infty A_n$ and $A_n$ are open and dense then $A$ is dense again. So it suffices to choose a sequence of open dense sets whose measure tends to zero, then take their intersection. You have basically found such a sequence: you have $U(\epsilon)$, so take $A_n=U(2^{-n})$ (or whatever other decaying sequence you like).</p>
1,929,698
<p>Let $f(x)=\chi_{[a,b]}(x)$ be the characteristic function of the interval $[a,b]\subset [-\pi,\pi]$. </p> <p>Show that if $a\neq -\pi$, or $b\neq \pi$ and $a\neq b$, then the Fourier series does not converge absolutely for any $x$. [Hint: It suffices to prove that for many values of $n$ one has $|\sin n\theta_0|\ge c \gt 0$ where $\theta_0=(b-a)/2.$]</p> <p>However, prove that the Fourier series converges at every point $x$.</p> <p>I've computed the Fourier series and got $\frac{b-a}{2\pi}+\sum_{n\neq 0}\frac{e^{-ina}-e^{-inb}}{2\pi in}e^{inx}.$</p> <p>Also, $|e^{-ina}-e^{-inb}|=2|\sin n\theta_0|$, and $\theta_0\in (0,\pi)$, so I can see that for infinitely many values of $n$, we have $|\sin n\theta_0|\ge c \gt 0$. But this does not guarantee $\sum_{n\neq 0}|\frac{e^{-ina}-e^{-inb}}{2\pi in}e^{inx}|\ge \sum \frac{c}{n}$, and in fact we might have this inequality only for the squares of integers, in which case the right hand side converges. So how does the hint solve the problem?</p> <p>Moreover, for the second problem, to show that the Fourier series converges at every point, I think I need to use Dirichlet's test, using $1/n$ as the decreasing sequence to $0$, but how can I show that $\frac{e^{-ina}-e^{-inb}}{2\pi in}e^{inx}$ has bounded partial sums?</p> <p>I would greatly appreciate any help.</p>
Disintegrating By Parts
112,478
<p>Suppose $a \ne -\pi$ or $b \ne \pi$ and $a\ne b$. Then the function you are talking about must be discontinuous.</p> <p>Suppose the series did converge absolutely for some $x$. That would mean $$ \sum_n \left|\frac{e^{-ina}-e^{-inb}}{2\pi in}\right| &lt; \infty. $$ But that would force the <em>uniform</em> convergence of the Fourier series everywhere by the Weierstrass M-test. But uniform convergence would imply that the periodic extension of the limit function $\chi_{[a,b]}$ must be continuous everywhere, which only happens in the case that $a=-\pi$ and $b=\pi$, or $a=b$.</p> <p>I'm not familiar with your text, but you should have some pointwise convergence theorem that shows the Fourier series converges to the mean of the left and right hand limits for your function.</p>
2,268,947
<p><a href="https://i.stack.imgur.com/6hCd2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6hCd2.png" alt="enter image description here"></a></p> <p>($\dot I = \{0,1\}$)</p> <p>The homotopy I've constructed is: $$G(t_1, t_2) = \begin{cases} \alpha(t_1), &amp; \text{if $(t_1,t_2) \in I \times \{0\}$} \\ \gamma * \beta* \delta^{-1}(t_1) , &amp; \text{otherwise} \end{cases}$$</p> <p>But this isn't correct since the domain of this function isn't a union of two open or closed sets as per the gluing lemma.</p> <p>Anyone have any ideas as to how to re-write the domain so that this becomes the required homotopy? </p>
De Yang
368,308
<p>You are correct that the gluing lemma doesn't work: i.e. $G$ is not continuous unless $\gamma$ and $\delta$ are constant paths. The question you ask in the bottom is 'can I do something differently to make this map continuous'. I think the answer is no - the information given $F$ is not used in constructing the homotopy and this information is critical.</p> <p>One thing related to convexity is that we have the map $r_t: I \times I \to I \times I$, $(s_1,s_2) \mapsto (s_1,ts_2)$. which is a deformation retraction of the inclusion $I \times \{0\} \hookrightarrow I \times I$. We can use this evidently continuous retraction to construct your homotopy.</p> <p>Let $\sqcap: I \to I \times I$ be the path with $F \circ (\sqcap)= \gamma * \beta * \delta^{-1}$. Then because $r_t$ is a deformation retraction $\gamma * \beta * \delta^{-1}= F \circ r_0 \circ (\sqcap) \cong^{homotopic} F \circ r_1 \circ (\sqcap) \cong^{homotopic} \alpha $.</p> <p>The homotopy of the first equivalence is $h_t=F \circ r_t \circ (\sqcap)$. The homotopy of the second equivalence is the homotopy (identity path at $(0,0)$) $* \alpha *$(identity path at $(1,0)$) $ \cong \alpha$.</p> <hr> <p>Edit: Above I wrote a crap answer. Prof. Brown has the right idea. Let $\text{_____}$ be the path given by $\text{_____}(t)=(t,0)$ in $I \times I$. Then there is a homotopy $g_t$ from $\sqcap$ to $\text{_____}$.<br> $h_t=F \circ g_t$ is a homotopy from $\gamma * \beta * \delta^{-1}$ to $\alpha$.</p>
2,012,223
<p>I've got a problem with finding main argument of these complex number. How can i evaluate this two examples?</p> <p>$$\sin \theta - i\cos \theta$$</p> <p>$$\frac{(1-i\tan \theta)}{1+\tan \theta}$$</p>
user90369
332,823
<p>It's not entirely clear what you mean; perhaps this is what is meant (where $z$ is a complex variable, here $z:=r(\cos\phi +i\sin\phi)$ with $r\in\mathbb{R}$): </p> <p>$\displaystyle \sin\phi-i\cos\phi=-i\frac{z}{|z|}$ </p> <p>$\displaystyle \frac{1-i\tan\phi}{1+\tan\phi}=\frac{2\overline{z}}{(1-i)z+(1+i)\overline{z}}$</p>
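<p>The first identity amounts to $\sin\phi - i\cos\phi = -ie^{i\phi}$ (taking $r&gt;0$), which is easy to spot-check numerically:</p>

```python
import cmath

# sin(phi) - i cos(phi) = -i e^{i phi}, i.e. -i z/|z| for z = cos(phi) + i sin(phi).
phi = 0.73  # arbitrary sample angle
lhs = cmath.sin(phi) - 1j * cmath.cos(phi)
rhs = -1j * cmath.exp(1j * phi)
```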
2,249,109
<p><strong>Question:</strong> Three digit numbers in which the middle digit is a perfect square are formed using the digits $1$ to $9$. Then their sum is?</p> <p>$A. 134055$<br> $B.270540$<br> $C.170055$<br> D. None Of The Above</p> <p>Okay, it's pretty obvious that the number is like $XYZ$ where $X,Z\in\{1,\dots,9\}$ and $Y\in\{1,4,9\}$.</p> <p>I'm facing problems in finding a method to evaluate such a sum.</p> <p>(Obviously, I can't afford to use a calculator.)</p>
lhf
589
<p>Induction is really the easiest route:</p> <p>Just sum</p> <p>$\quad E_{n}= F_{n-1}A + F_{n}B$</p> <p>$\quad E_{n+1}= F_nA + F_{n+1}B$</p> <p>to get </p> <p>$\quad E_{n+2}= F_{n+1}A + F_{n+2}B$</p>
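<p>The induction can be corroborated numerically for sample values of $A$ and $B$ (assuming the usual convention $F_0=0$, $F_1=1$ and that $E_n$ satisfies the same recurrence $E_{n+2}=E_{n+1}+E_n$):</p>

```python
# Check the closed form E_n = F_{n-1} A + F_n B along the recurrence
# E_{n+2} = E_{n+1} + E_n, with initial values E_1 = B, E_2 = A + B.
A, B = 5, 7  # arbitrary sample values
F = [0, 1]
for _ in range(30):
    F.append(F[-1] + F[-2])
E = [None, B, A + B]  # E_1 = F_0*A + F_1*B, E_2 = F_1*A + F_2*B
while len(E) < 30:
    E.append(E[-1] + E[-2])
ok = all(E[n] == F[n - 1] * A + F[n] * B for n in range(1, 30))
```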
3,796,216
<p>Use a group-theoretic proof to show that <span class="math-container">$\mathbb{Q}^*$</span> under multiplication is not isomorphic to <span class="math-container">$\mathbb{R}^*$</span> under multiplication.</p> <p><strong>I have tried this:</strong></p> <p>Suppose <span class="math-container">$$ \phi: \mathbb{Q}^*\to \mathbb{R}^* $$</span> where <span class="math-container">$\phi(x)=x^2$</span></p> <p>Now for some <span class="math-container">$3 \in \mathbb{R}^*$</span> there is no mapping in <span class="math-container">$\mathbb{Q}^*$</span> since <span class="math-container">$\sqrt3$</span> does not belong to <span class="math-container">$\mathbb{Q}^*$</span>.</p> <p>Hence <strong><span class="math-container">$\phi$</span></strong> is <strong>not an onto function.</strong> Therefore <span class="math-container">$\mathbb{Q}^* \not\cong \mathbb{R}^*$</span>.</p> <p>But I am not sure if it is correct.</p> <p>Also, What is a <strong>group-theoretic proof</strong>?</p>
tomasz
30,222
<p>The squaring map is not onto <span class="math-container">$\mathbf R^\times$</span>, so it does not quite work. However, it is pretty close: the image is a subgroup of index <span class="math-container">$2$</span>. In <span class="math-container">$\mathbf Q^\times$</span>, the index is infinite. Thus, the two groups are not isomorphic.</p> <p>(In fact, this shows that they are not even elementarily equivalent.)</p>
4,142,540
<p>Let <span class="math-container">$V$</span> a vector subspace of dimension <span class="math-container">$n$</span> on <span class="math-container">$\mathbb R$</span> and <span class="math-container">$f,g \in V^* \backslash \{0\}$</span> two linearly independent linear forms. I want to show that <span class="math-container">$\dim (\ker f \cap \ker g) = n-2$</span>.</p> <p>Since <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are linear forms, I know that dim ker <span class="math-container">$f =n-1$</span> and dim ker <span class="math-container">$g =n-1$</span>. I think I should use the fact that the two forms are linearly independent to <span class="math-container">$\dim (\ker f \cap \ker g) = n-2$</span> but I don't really see how...</p> <p>I saw a proof with scalar product but I would like to see an alternative proof or maybe an explanation of the scalar product's proof.</p>
azif00
680,927
<p>Another way:</p> <p>Consider the linear maps $h : V \to \mathbb R^2$ and $\tilde h : \mathbb R^2 \to V^*$ given by $h(v) = (f(v),g(v))$ for all $v \in V$ and $\tilde h(a,b) = af+bg$ for all $(a,b) \in \mathbb R^2$. Prove that $h$ is surjective if $\tilde h$ is injective (<em>hint</em>: find an isomorphism $\psi : (\mathbb R^2)^* \to \mathbb R^2$ such that the transpose of $h$ can be written as $\tilde h \circ \psi$). Thus, the linear independence of $f$ and $g$ shows that $h$ is surjective, and clearly $\ker h = \ker f \cap \ker g$. Finally, use the rank-nullity theorem on $h$.</p>
243,903
<p>I have posted this question on Math Stack Exchange but did not get any answer, so I am trying my luck here. </p> <p>The only simple finite groups admitting an irreducible character of degree 3 are $\mathfrak{A}_5$ and $PSL(2,7)$. That seems to be a result coming from Blichfeldt's work on $GL(3,\mathbb{C})$, which I cannot find. Is there a proof available somewhere? </p>
Geoff Robinson
14,450
<p>It depends on how much group theory you want to use. If $G$ is such a simple group and $\chi$ is a faithful complex irreducible character of degree $3$, then a Theorem of Feit and Thompson proves that $|G|$ is not divisible by any prime $p &gt; 7$. It is easy to check (since $Z(G)$ contains no element of order $3$), that $G$ has Abelian Sylow $3$-subgroups. Hence for any $x \in G$ of order $3$, we have ${\rm gcd}([G:C_{G}(x)],\chi(1)) = 1$. By a result of Burnside, we have $\chi(x) = 0$ since $G$ is simple. It then easily follows that $|G|$ is not divisible by $9$. It also follows that a Sylow $3$-subgroup of $G$ is self-centralizing ( from this, it already follows from a 1962(?) theorem of Feit and Thompson in Nagoya J. Math, that $G \cong {\rm PSL}(2,7)$ or $A_{5}$. For, more generally, we see that $\chi(g) = 0 $ whenever $g$ centralizes a Sylow $3$-subgroup $P$ of $G$, and then $|C_{G}(P)|$ divides $3$). A representation theoretic proof in this special case can be sketched as follows (using a few tricks not available to Blichfeldt):</p> <p>Now $G$ has cyclic Sylow $7$-subgroups for otherwise, $G$ contains an element of order $7$ with eigenvalues $1,\omega, \omega^{-1}$, where $\omega = e^{\frac{2 \pi i}{7}}$, which contradicts a theorem of Blichfeldt ( as $\chi$ must be primitive ( otherwise $G$ would be solvable)). Consideration of reduction (mod $7$) shows that the Sylow $7$-subgroup of $G$ has order at most $7$. Furthermore, if $7$ divides $|G|$, it follows from Burnside's normal $p$-complement theorem that a Sylow $7$-normalizer must have order $21$.</p> <p>Since $G$ has no element of order $35$, it follows from another theorem of Blichfeldt that $|G|$ can't be divisible by $35$ ( since otherwise, $\chi$ is neither $5$-rational nor $7$-rational, and would contain an element of order $35$. 
But consideration of reduction (mod $5$) shows that no element of order $5$ can commute with any element of order $7$).</p> <p>If $G$ contains an element of order $5$ with only two eigenvalues then an argument of Blichfeldt shows that ${\rm SL}(2,5)$ as a subgroups, and contains an element of order $6$ with eigenvalues $1, \alpha, {\bar \alpha}$ for a primitive $6$-th root of unity, which ( by another of his results) contradicts the primitivity of $\chi$. It follows that $G$ has cyclic Sylow $5$-subgroups which have order at most $5$ on consideration of reduction (mod $5$).</p> <p>It now follows that $|G|$ has the form $2^{a}.3.5$ or $2^{b}.3.7$ for integers $a,b$. Similarly to the argument for $7$, we may conclude that a Sylow $5$-normalizer has order $10$ if $5$ divides $|G|$. This gives $a \equiv 2$ (mod $4$) or $b \equiv 3$ (mod $6$) by Sylow's theorem. A Sylow $2$-subgroup $S$ of $G$ has an Abelian normal subgroup $A$ of index dividing $2$, and a 1965 Theorem of Brauer shows that $|A|$ divides $4$ so $|S| \leq 8$ and we do get $|G| = 60$ or $168$.</p>
3,453,175
<p>If <span class="math-container">$y=\dfrac {1}{x^x}$</span> then show that <span class="math-container">$y'' (1)=0$</span></p> <p>My Attempt:</p> <p><span class="math-container">$$y=\dfrac {1}{x^x}$$</span> Taking <span class="math-container">$\ln$</span> on both sides, <span class="math-container">$$\ln (y)= \ln \left(\dfrac {1}{x^x}\right)$$</span> <span class="math-container">$$\ln (y)=-x.\ln (x)$$</span> Differentiating both sides with respect to <span class="math-container">$x$</span> <span class="math-container">$$\dfrac {1}{y}\cdot y'=-(1+\ln (x))$$</span></p>
PierreCarre
639,238
<p>According to your own calculations, <span class="math-container">$y'(x) = - y(x)(1+ \ln x)$</span>, and, in particular, since <span class="math-container">$y(1)=1$</span>, you have that <span class="math-container">$y'(1)=-1$</span>. If you derive again,</p> <p><span class="math-container">$$ y''(x) = (-y(x)(1+ \ln x))'= -y'(x)(1+\ln x)-y(x) \cdot \frac 1x $$</span></p> <p>In particular, <span class="math-container">$y''(1)=-y'(1)(1+0)-y(1) \cdot 1= 0.$</span></p>
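<p>A numerical sanity check with central differences (illustrative only):</p>

```python
# y = x^(-x); central differences at x = 1 should give y'(1) close to -1
# and y''(1) close to 0, matching the calculation above.
y = lambda x: x ** (-x)
h = 1e-4
d1 = (y(1 + h) - y(1 - h)) / (2 * h)
d2 = (y(1 + h) - 2 * y(1) + y(1 - h)) / h**2
```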
66,009
<p>Hi I have a very simple question but I haven't been able to find a set answer. How would I draw a bunch of polygons on one graph. The following does not work:</p> <pre><code>Graphics[{Polygon[{{989, 1080}, {568, 1080}, {834, 711}}], Polygon[{{1184, 1080}, {989, 1080}, {834, 711}, {958, 541}}], Polygon[{{1379, 1080}, {1184, 1080}, {958, 541}, {1082, 370}}], Polygon[{{1470, 1080}, {1379, 1080}, {1082, 370}, {1140, 291}}], Polygon[{{1665, 1080}, {1470, 1080}, {1140, 291}, {1263, 120}}], Polygon[{{1756, 1080}, {1665, 1080}, {1263, 120}, {1321, 41}}], Polygon[{{1394, 0}, {1920, 0}, {1920, 1080}, {1846, 1080}}], Polygon[{{1352, 0}, {1394, 0}, {1846, 1080}, {1756, 1080}, {1321, 41}}], Polygon[{{931, 0}, {1352, 0}, {1084, 367}}], Polygon[{{736, 0}, {931, 0}, {1084, 367}, {961, 537}}], Polygon[{{540, 0}, {736, 0}, {961, 537}, {836, 708}}], Polygon[{{450, 0}, {540, 0}, {836, 708}, {779, 788}}], Polygon[{{255, 0}, {450, 0}, {779, 788}, {654, 958}}], Polygon[{{164, 0}, {255, 0}, {654, 958}, {597, 1038}}], Polygon[{{73, 0}, {164, 0}, {597, 1038}, {568, 1080}, {524, 1080}}], Polygon[{{0, 0}, {73, 0}, {524, 1080}, {0, 1080}}]}] </code></pre> <p>I apologize for how basic this question is. If anyone could steer me in the right direction it would be greatly appreciated.</p>
MikeLimaOscar
5,414
<p>As @Szabolcs points out <code>Dispatch</code> does not interact well with <code>SameQ</code>, etc in Mathematica 10.</p> <pre><code>Dispatch[1 -&gt; 2] === Dispatch[1 -&gt; 2] </code></pre> <blockquote> <p>False</p> </blockquote> <pre><code>Dispatch[1 -&gt; 2] == Dispatch[1 -&gt; 2] </code></pre> <blockquote> <p>False</p> </blockquote> <p>Use <code>Normal</code> to "expand" the dispatch objects and then the comparison should work.</p> <pre><code>Normal[Dispatch[1 -&gt; 2]] == Normal[Dispatch[1 -&gt; 2]] </code></pre> <blockquote> <p>True</p> </blockquote> <p><code>Normal</code> works on your complete example expression:</p> <pre><code>Normal[sdp[znz[1, 3], aut[List[Rule[znz[1, 3], znz[1, 3]]], List[Rule[znz[0, 3], znz[0, 3]], Rule[znz[1, 3], znz[1, 3]], Rule[znz[2, 3], znz[2, 3]]], Dispatch[ List[Rule[znz[0, 3], znz[0, 3]], Rule[znz[1, 3], znz[1, 3]], Rule[znz[2, 3], znz[2, 3]]]]], Function[NonCommutativeMultiply[Slot[2], Slot[1]]]]] </code></pre> <blockquote> <p>sdp[znz[1, 3], aut[{znz[1, 3] -> znz[1, 3]}, {znz[0, 3] -> znz[0, 3], znz[1, 3] -> znz[1, 3], znz[2, 3] -> znz[2, 3]}, {znz[0, 3] -> znz[0, 3], znz[1, 3] -> znz[1, 3], znz[2, 3] -> znz[2, 3]}], #2 ** #1 &amp;]</p> </blockquote> <p>PS As the documentation for <a href="http://reference.wolfram.com/language/ref/Dispatch.html" rel="noreferrer"><code>Dispatch</code></a> states "The use of Dispatch will never affect results that are obtained", this should probably be classified as a bug.</p>
491,926
<p>Let $(S,\Sigma,\mu)$ be a probability space and $X,Y \in \Sigma$. Define $\rho (X,Y)$ = correlation between the random variables $I_X$ and $I_Y$, where $I_X$ and $I_Y$ are the indicator functions of $X$ and $Y$. Express $\rho (X,Y)$ in terms of $\mu (X)$, $\mu (Y)$, $\mu(XY)$. Conclude that $\rho(X,Y)=0$ if and only if $X$ and $Y$ are independent. </p> <blockquote> <p>How to show that: $$\rho(X,Y)= \frac{\mu(XY)\,\mu(X^cY^c)-\mu(XY^c)\,\mu(X^cY)}{(\mu(X) \,\mu(X^c)\,\mu(Y) \,\mu(Y^c))^{1/2}};\quad 0&lt;\mu(X)&lt;1;\,0&lt;\mu(Y)&lt;1$$ </p> </blockquote>
triple_sec
87,778
<p>To finish @JonathanY 's answer: $$(1)\quad \mu(XY^c)=\mu(X)-\mu(XY).$$ $$(2)\quad \mu(X^c Y)=\mu(Y)-\mu(XY).$$ $$(3)\quad\mu(X^c Y^c)=\mu(Y^c)-\mu(XY^c)=1-\mu(Y)-\mu(XY^c)=1-\mu(Y)-\mu(X)+\mu(XY),$$ using (1). Now, putting these together, \begin{align*} &amp;\,\mu(XY)\mu(X^c Y^c)-\mu(XY^c)\mu(X^cY)\\ =&amp;\,\mu(XY)\left(1-\mu(Y)-\mu(X)+\mu(XY)\right)-\left(\mu(X)-\mu(XY)\right)\left(\mu(Y)-\mu(XY)\right)\\ =&amp;\,\mu(XY)-\mu(XY)\mu(Y)-\mu(XY)\mu(X)+\mu(XY)^2-\mu(X)\mu(Y)+\mu(X)\mu(XY)+\mu(Y)\mu(XY)-\mu(XY)^2\\ =&amp;\,\mu(XY)-\mu(X)\mu(Y).\quad\blacksquare \end{align*}</p>
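For a concrete sanity check of this identity (a hypothetical example — the four cell probabilities below are arbitrary choices summing to 1), one can verify numerically that the determinant form agrees with the covariance $\mu(XY)-\mu(X)\mu(Y)$:

```python
# Joint probabilities of the four cells (XY, XY^c, X^cY, X^cY^c);
# any nonnegative values summing to 1 would do.
p_xy, p_xyc, p_xcy, p_xcyc = 0.2, 0.3, 0.1, 0.4

mu_x = p_xy + p_xyc          # mu(X)
mu_y = p_xy + p_xcy          # mu(Y)

# Covariance of the indicators: mu(XY) - mu(X)mu(Y)
cov = p_xy - mu_x * mu_y

# The determinant form derived above
det = p_xy * p_xcyc - p_xyc * p_xcy

print(cov, det)  # the two agree (up to floating point)
```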
2,643,705
<p>If $A,B,C,D$ are all matrices and $A=BCD$ (with dimensions such that all matrix multiplications are defined), how does one solve for $C$? </p> <p>In the particular context I'm working in, $B$ and $D$ are both orthogonal, and $C$ is diagonal. I'm not sure if that's necessary to solve for $C$.</p>
Robert Howard
509,508
<p>You would have to multiply by $B^{-1}$ on the left on each side of the equation which would cancel $B$ on the right, and then by $D^{-1}$ on the right on each side, which would cancel $D$, like this: $$\begin{align}A&amp;=BCD\\B^{-1}A&amp;=B^{-1}BCD\\B^{-1}A&amp;=CD\\B^{-1}AD^{-1}&amp;=CDD^{-1}\\B^{-1}AD^{-1}&amp;=C\end{align}$$</p>
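A small numeric sketch of this (the rotation matrices and helper functions are my own illustrative choices, not part of the answer): since a rotation matrix is orthogonal, $B^{-1}=B^T$ and $D^{-1}=D^T$, so no actual matrix inversion is needed:

```python
import math

def matmul(P, Q):
    # Multiply two 2x2 matrices given as nested lists
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(P):
    return [[P[j][i] for j in range(2)] for i in range(2)]

# Rotation matrices are orthogonal, so their inverses are their transposes
t = math.pi / 5
B = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]
D = [[math.cos(2 * t), -math.sin(2 * t)], [math.sin(2 * t), math.cos(2 * t)]]
C = [[2.0, 0.0], [0.0, 5.0]]  # diagonal

A = matmul(matmul(B, C), D)

# C = B^{-1} A D^{-1} = B^T A D^T for orthogonal B, D
C_recovered = matmul(matmul(transpose(B), A), transpose(D))
print(C_recovered)  # numerically equal to C
```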
3,353,826
<p>All the vertices of quadrilateral <span class="math-container">$ABCD$</span> are at the circumference of a circle and its diagonals intersect at point <span class="math-container">$O$</span>. If <span class="math-container">$∠CAB = 40°$</span> and <span class="math-container">$∠DBC = 70°$</span>, <span class="math-container">$AB = BC$</span>, then find <span class="math-container">$∠DCO$</span>.</p>
leftaroundabout
11,107
<p>Although I'm not sure how common this is in pure maths settings, I would say the best notation is simply <span class="math-container">$\operatorname{round}(x)$</span>. This is easily understood, albeit not completely unambiguous – but <em>definitely</em> better than <span class="math-container">$[x]$</span> which could mean a myriad of completely unrelated things, or <span class="math-container">$\operatorname{nint}(x)$</span> which looks like “ninn-t?”</p> <p>If the ambiguity <span class="math-container">$1 \stackrel?= \operatorname{round}(1.5) \stackrel?= 2$</span> is a problem for you, make sure to explicitly discuss this. If you use the operation a lot, you could also define that you write it as <span class="math-container">$\lfloor x\rceil$</span>, but I wouldn't use that without discussion.</p> <p><code>round</code> is also the name for the rounding function in many programming languages, because what it does is it rounds a number, hence the name “round”.</p>
3,353,826
<p>All the vertices of quadrilateral <span class="math-container">$ABCD$</span> are at the circumference of a circle and its diagonals intersect at point <span class="math-container">$O$</span>. If <span class="math-container">$∠CAB = 40°$</span> and <span class="math-container">$∠DBC = 70°$</span>, <span class="math-container">$AB = BC$</span>, then find <span class="math-container">$∠DCO$</span>.</p>
John Thompson
787,805
<p>The short and sweet is that there is no short and sweet. Rounding has many different contexts and interpretations, which means that you will have to define what rounding means to you in your particular context before you use it. Now, there is standard notation for two specific types of rounding. <span class="math-container">$\lceil\text{Ceiling}\rceil$</span> brackets indicate that you always round up toward positive infinity, e.g. <span class="math-container">$\lceil1.1\rceil = 2$</span> and <span class="math-container">$\lceil-3.9\rceil = -3$</span>, and <span class="math-container">$\lfloor\text{floor}\rfloor$</span> brackets indicate that you always round down toward negative infinity, e.g. <span class="math-container">$\lfloor2.9\rfloor=2$</span> and <span class="math-container">$\lfloor-6.1\rfloor=-7$</span>. </p> <p>Now, if we wish to define the typical elementary/secondary school form of rounding, where you need to specify how many decimal places you wish to round to and halves always get rounded up, then we could do so as follows:</p> <p>For <span class="math-container">$d \in \mathbb Z$</span>, let <span class="math-container">$R_d:\mathbb R \rightarrow \mathbb R$</span> such that <span class="math-container">$$R_d(x) = \frac{\lfloor10^d x + 0.5\rfloor}{10^d}$$</span></p> <p>Note that this method creates a bias towards positive infinity, and as such the rounding behavior for positive and negative numbers seems incongruous to some. 
In response to this, some elementary/secondary classes may then adopt the following adaptation that addresses this incongruity as follows:</p> <p>For <span class="math-container">$d \in \mathbb Z$</span>, let <span class="math-container">$R_d:\mathbb R \rightarrow \mathbb R$</span> such that <span class="math-container">$$R_d(x) = \begin{cases} \dfrac{\lfloor10^d x + 0.5\rfloor}{10^d}, &amp;x\text{ is nonnegative} \\ \dfrac{\lceil10^d x - 0.5\rceil}{10^d}, &amp;x\text{ is negative} \end{cases}$$</span></p> <p>Now <span class="math-container">$R_0(4.5) = -R_0(-4.5)$</span>! Students rejoice! (Though subsequent teachers unaware of this adaptation will cringe) This even has bias eliminating implications when dealing with aggregate data, however not all bias has been eliminated. The positive bias more or less balances with the negative bias if your mean is at or close to 0, but all the rounding is still biased away from 0. If you are inclined to tackle this you could instead round halves toward the nearest even integer (often referred to as stats rounding or banker's rounding), and we can tweak our definition as follows:</p> <p>For <span class="math-container">$d \in \mathbb Z$</span>, let <span class="math-container">$R_d:\mathbb R \rightarrow \mathbb R$</span> such that <span class="math-container">$$R_d(x) = \begin{cases} \dfrac{\lfloor10^d x + 0.5\rfloor}{10^d}, &amp;10^d x - \lfloor10^d x\rfloor \neq \tfrac12 \\ \dfrac{2\left\lfloor \frac{10^d x}{2} + \frac12 \right\rfloor}{10^d}, &amp;10^d x - \lfloor10^d x\rfloor = \tfrac12 \end{cases}$$</span></p> <p>(The second case picks the even neighbor whenever <span class="math-container">$10^d x$</span> is an exact half; the first case is ordinary rounding otherwise.) Now you've eliminated the bias away from 0, but you're left with micro-biases toward even numbers compared to odd numbers. 
We could go on and on with different tweaks and variations on rounding, but the bottom line is that you will want to define a method of rounding that is most suitable for the task at hand, and then make sure that you clearly communicate that method of rounding and perhaps even include analysis of its desired advantages and known disadvantages for your particular application.</p>
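The first two variants above can be sketched in code as follows (the function names are mine; note that Python's built-in `round` uses round-half-to-even, i.e. banker's rounding, so it serves as a comparison for the third variant):

```python
import math

def round_half_up(x, d=0):
    # First definition above: halves go toward +infinity
    return math.floor(10 ** d * x + 0.5) / 10 ** d

def round_half_away_from_zero(x, d=0):
    # Sign-symmetric adaptation: halves go away from 0
    if x >= 0:
        return math.floor(10 ** d * x + 0.5) / 10 ** d
    return math.ceil(10 ** d * x - 0.5) / 10 ** d

print(round_half_up(4.5), round_half_up(-4.5))                          # 5.0 -4.0
print(round_half_away_from_zero(4.5), round_half_away_from_zero(-4.5))  # 5.0 -5.0
print(round(2.5), round(3.5))  # built-in round is half-to-even: 2 4
```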
2,756,139
<p>Let $I=[0,1]$ and $$X = \prod_{i \in I}^{} \mathbb{R}$$ That is, an element of $X$ is a function $f:I→\mathbb{R}$.</p> <p>Prove that a sequence $\{f_n\}_n ⊆ X$ of real functions converges to some $f ∈ X$ in the product topology on $X$, if and only if it converges pointwise, i.e. for every $x ∈ I$, $f_n(x) → f(x)$ in the usual sense of convergence of sequences.</p> <hr> <p>I don't understand how a product indexed by an uncountably infinite set is like. I'm guessing a single element $f$ of $X$ is an uncountable set $(y_i)_{i \in I}$ in itself. But then how is $f(x)$ different from any other $f$? Is it the cartesian product indexed by $[0,x]$?</p> <p>Note that I included the "prove" question just to show the context, but it isn't what I find problematic. It's probably easy enough if I get how the product works.</p>
Logic_Problem_42
338,002
<p>Every single element of $X$ is simply a function: $I\to \mathbb{R}$. They differ as normal functions differ. </p>
24,810
<p>The title says it all. Is there a way to take a poll on Maths Stack Exchange? Is a poll an acceptable question?</p>
Gerry Myerson
8,269
<p>Vote this answer up, if you think a poll is not an acceptable question. </p>
3,973,611
<p>Let <span class="math-container">$$F(x)=\int_{-\infty}^x f(t)dt,$$</span> where <span class="math-container">$x\in\mathbb{R}$</span>, <span class="math-container">$f\geq 0$</span> is complicated (it cannot be integrated analytically).</p> <p>Can I use Simpson's rule to approximate this integral, knowing that <span class="math-container">$f(-\infty)=0$</span>?</p>
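Yes — a standard approach is to truncate the lower limit at some finite $a$ where $f$ is negligible and apply composite Simpson's rule on $[a,x]$. A minimal sketch, using the standard normal density as a hypothetical stand-in for the complicated $f$ (so the result can be checked against the known antiderivative $\tfrac12(1+\operatorname{erf}(x/\sqrt2))$):

```python
import math

def f(t):
    # Standard normal density, a stand-in for the complicated f
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def simpson(g, a, b, n=1000):
    # Composite Simpson's rule with n (even) subintervals
    if n % 2:
        n += 1
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

x = 1.0
approx = simpson(f, -10.0, x)  # truncate -infinity at -10, where f ~ 0
exact = 0.5 * (1 + math.erf(x / math.sqrt(2)))
print(approx, exact)  # agree to many digits
```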
Especially Lime
341,019
<p>First, your evaluation of <span class="math-container">$|D|$</span> is incorrect. This is the number of ways to choose a capital letter for the first character, and then choose the remaining seven characters with no capitals. But in fact the single capital letter could be in any of <span class="math-container">$8$</span> positions, so actually <span class="math-container">$|D|=8\times 26\times 36^7$</span>.</p> <p>Secondly, the multiple events. Note that <span class="math-container">$C$</span> and <span class="math-container">$D$</span> cannot both occur, and no three events can occur (since if they include <span class="math-container">$A,B$</span> and one of <span class="math-container">$C,D$</span> then you have at most one character). So you only need to subtract the pairs of events (not including <span class="math-container">$|C\cup D|$</span>), since the other terms are all <span class="math-container">$0$</span>.</p> <p><span class="math-container">$|A\cap B|=26^8$</span>, since this means all characters are capitals, and the other double events can be calculated similarly (ones involving <span class="math-container">$D$</span> are a bit more complicated, but no more so than the calculation for <span class="math-container">$D$</span> itself).</p>
4,090,416
<blockquote> <p>Suppose <span class="math-container">$f(x)$</span> be bounded and differentiable over <span class="math-container">$\mathbb R$</span>, and <span class="math-container">$|f'(x)|&lt;1$</span> for any <span class="math-container">$x$</span>. Prove there exists <span class="math-container">$M&lt;1$</span> such that <span class="math-container">$|f(x)-f(0)|\le M|x|$</span> holding for every <span class="math-container">$x \in \mathbb{R}$</span>.</p> </blockquote> <p>Probably, we may consider applying MVT, say <span class="math-container">$$\left|\frac{f(x)-f(0)}{x-0}\right|=|f'(\xi)|&lt;1,$$</span> but which can only imply <span class="math-container">$M\le 1$</span>, not <span class="math-container">$M&lt;1$</span> we wanted. How to improve the inequality?</p>
Martin R
42,969
<p>The function <span class="math-container">$g: \Bbb R \to \Bbb R$</span>, defined as <span class="math-container">$$ g(x) = \begin{cases} \frac{f(x)-f(0)}{x-0} &amp; \text{ if } x \ne 0 \\ f'(0) &amp; \text{ if } x = 0 \end{cases} $$</span> is continuous, with <span class="math-container">$\lim_{x \to - \infty} g(x) = \lim_{x \to + \infty} g(x)= 0$</span>.</p> <p>It follows that <span class="math-container">$|g|$</span> attains its maximum at some point <span class="math-container">$x_0$</span>, so that <span class="math-container">$$ |g(x)| \le |g(x_0)| = \left|\frac{f(x_0)-f(0)}{x_0-0}\right| = |f'(\xi_0)| =: M &lt; 1 $$</span> for some fixed <span class="math-container">$\xi_0$</span>, and all <span class="math-container">$x \in \Bbb R$</span>.</p>
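A numerical illustration of this argument (with the hypothetical choice $f(x)=\frac12\sin x$, which is bounded with $|f'(x)|\le\frac12<1$): scanning the difference quotient $g$ over a wide grid, its maximum modulus is attained and stays strictly below $1$:

```python
import math

def f(x):
    return 0.5 * math.sin(x)

def g(x):
    # Difference quotient (f(x) - f(0)) / x, extended by f'(0) at x = 0
    return f(x) / x if x != 0 else 0.5

# Scan |g| over a wide grid; it peaks near x = 0 with value f'(0) = 0.5
values = [abs(g(-50 + k * 0.01)) for k in range(10001)]
M = max(values)
print(M)  # about 0.5, strictly less than 1
```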
1,027,330
<p>How does one figure out whether this series: $$\sum_{n=3}^{\infty}(-1)^{n-1}\frac{1}{\ln\ln n}$$ converges or diverges? And, what is the general approach behind solving for convergence/divergence in a series that seems to "oscillate" (thanks to the -1 in this case)? </p> <p>I have so far tried to split the function into two limits, but I am more or less stuck there. </p>
mathamphetamines
126,882
<p>$$ a_n = \frac{1}{\ln\ln (n)} $$</p> <p>For $n&gt;3$, $a_n&gt;0$. Next you must show that this sequence is decreasing everywhere, and also that $a_n$ converges to zero as $n$ approaches infinity; the alternating series test then gives convergence.</p>
1,224,180
<p>Q: evaluate $\lim_{x \to \infty} \frac{x-1}{\sqrt {2x^2-1}}$</p> <p>What I did:</p> <p>when $\lim_ {x \to \infty}$ you must put the argument in the form of $1/x$, so that you know it is equal to $0$,</p> <p>but in this exercise the farthest I got was</p> <p>$\lim_{x \to \infty} \frac{x}{x \sqrt{2}}\cdot\frac{1-(1/x)}{(1/x) - ??}$</p>
mich95
229,072
<p>$\frac{x-1}{\sqrt{2x^{2}-1}}=\frac{x-1}{x \sqrt{2-\frac{1}{x^{2}}}}=\frac{1-\frac{1}{x}}{\sqrt{2-\frac{1}{x^{2}}}}$. Hence the limit is $\frac{1}{\sqrt{2}}$</p>
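A quick numerical check of this limit (purely illustrative):

```python
import math

def h(x):
    return (x - 1) / math.sqrt(2 * x * x - 1)

# Evaluating at increasingly large x approaches 1/sqrt(2)
for x in (1e2, 1e4, 1e6):
    print(x, h(x))

print(1 / math.sqrt(2))  # the limit, about 0.7071
```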
3,551,030
<p>Let <span class="math-container">$(x_n)$</span> be a real sequence such that <span class="math-container">$\Vert x_{n+2} - x_{n+1}\Vert = M \Vert x_{n+1} - x_{n} \Vert$</span>. </p> <p>If <span class="math-container">$0&lt; \vert M \vert &lt;1$</span>, then <span class="math-container">$x_n$</span> surely converges, since it is contractive. </p> <p>But when <span class="math-container">$\vert M \vert =1 $</span>, we cannot say whether <span class="math-container">$x_n$</span> converges or not, e.g. taking the example <span class="math-container">$x_n = {1 \over n}$</span>.</p> <p>So I considered the case <span class="math-container">$\vert M \vert &gt;1$</span>. My provisional conclusion was that <span class="math-container">$x_n$</span> diverges when <span class="math-container">$\vert M \vert &gt;1 $</span>, and I started trying to prove it myself, but I have no idea how to show it. Hence, how can one show this sequence is not convergent in <span class="math-container">$\mathbb{R}$</span> when <span class="math-container">$M \in \mathbb{R}$</span> satisfies <span class="math-container">$\vert M \vert &gt;1$</span>? Any help or hint would be appreciated. </p>
Kavi Rama Murthy
142,385
<p>Suppose <span class="math-container">$x_1 \neq x_2$</span>. </p> <p><span class="math-container">$\|x_{n+2}-x_{n+1}\|=M^{n} \|x_2-x_1\|$</span> for all <span class="math-container">$n$</span>. The RHS tends to <span class="math-container">$\infty$</span>, but if the sequence were convergent the LHS would tend to <span class="math-container">$0$</span>. </p> <p>Now I will leave it to you to think about the case <span class="math-container">$x_1 = x_2$</span>. [See if <span class="math-container">$x_k \neq x_{k+1}$</span> for some <span class="math-container">$k$</span>]. Note that the hypothesis is satisfied if <span class="math-container">$x_n$</span> is a constant sequence, and the sequence converges in this case. </p>
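A small illustration of the divergence (not part of the proof; the value of $M$ and the first two terms are arbitrary choices):

```python
# Iterate a sequence whose consecutive gaps satisfy
# |x_{n+2} - x_{n+1}| = M |x_{n+1} - x_n| with M > 1
M = 1.5
x = [0.0, 1.0]  # x_1 != x_2
for _ in range(30):
    x.append(x[-1] + M * (x[-1] - x[-2]))  # each gap is M times the last

gaps = [abs(x[i + 1] - x[i]) for i in range(len(x) - 1)]
print(gaps[:4])  # 1, M, M^2, M^3 — growing geometrically
```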
2,563,303
<blockquote> <p><strong><em>Question:</em></strong> If $z_0$ and $z_1$ are real irrational numbers I write $$q=z_0+z_1\sqrt{-1}$$ Surely $q$ is just a complex number. Under what condition will the <a href="https://en.wikipedia.org/wiki/Absolute_value#Complex_numbers" rel="nofollow noreferrer">number</a> $|q|$ be an $\color{blue}{integer}$ ?</p> </blockquote> <p>I have that $$|q| = \sqrt{z_0^2+z_1^2}$$ Consequently the only exceptions I can think of are the cases where $z_0^2+z_1^2=x^2$ for some positive integer $x$. For example $z_0=z_1=\sqrt{2}$; which is irrational. Then $$|q| = \sqrt{(\sqrt{2})^2+(\sqrt{2})^2}=\sqrt{2+2}=\sqrt{4}=2$$ I denote the set of exceptional pairs $(z_0,z_1)$ by $\mathcal{E}$. I have shown that $(\sqrt{2},\sqrt{2}) \in \mathcal{E}.$ What else is in $\mathcal{E}?$</p>
Fred
380,717
<p>If $(A_n)$ is a positive sequence with $\lim_{n \to \infty}\frac{A_{n+1}}{A_n}= 0$, then by the ratio test: $\sum_{n=1}^{\infty}A_n$ converges, hence $A_n \to 0$.</p>
2,933,383
<p>Cauchy's theorem of limits states that if <span class="math-container">$\ \lim_{n \to \infty} a_n=L ,$</span> then <span class="math-container">$$ \lim_{n \to \infty} \frac{a_1+a_2+\cdots+a_n}{n}=L $$</span> If I apply this in the series <span class="math-container">$$S = \lim_{n\to\infty} \dfrac{1}{n}[e^{\frac{1}{n}} + e^{\frac{2}{n}} + e^{\frac{3}{n}} + e^{\frac{4}{n}} + e^{\frac{5}{n}} + e^{\frac{6}{n}} + ... + e^{\frac{n}{n}}]$$</span> Here <span class="math-container">$a_n=e^{n/n},$</span> Hence, <span class="math-container">$\lim_{n \to \infty} a_n=e \ $</span>, so <span class="math-container">$S=e$</span>. I know this is incorrect because the integration gives the answer <span class="math-container">$e-1$</span>. It seems to me that the reason for this is related to the terms of the sequence being dependent on <span class="math-container">$n$</span>. I can't be sure of this because the same theorem gives the correct answer for the limit of this series(below) which is given in my book as a solved example of the above theorem <span class="math-container">$$ \ \lim_{n \to \infty} \left( \frac{1}{\sqrt{n^2+1}}+\frac{1}{\sqrt{n^2+2}}+\cdots+\frac{1}{\sqrt{n^2+n}} \right)=1 $$</span></p>
asdf
436,163
<p>The main issue is that your terms up to <span class="math-container">$n$</span> are <span class="math-container">$a_i=e^{\frac{i}{n}}$</span>, but this defines the sequence only up to some fixed <span class="math-container">$n$</span>: when <span class="math-container">$n$</span> grows, every earlier term changes as well, so there is no single fixed sequence <span class="math-container">$(a_i)$</span> to which Cauchy's theorem applies.</p>
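Numerically (an illustration only), the expression is a right Riemann sum for $\int_0^1 e^x\,dx$ and indeed approaches $e-1$ rather than $e$:

```python
import math

def riemann_sum(n):
    # (1/n) * (e^(1/n) + e^(2/n) + ... + e^(n/n)):
    # a right Riemann sum for the integral of e^x over [0, 1]
    return sum(math.exp(i / n) for i in range(1, n + 1)) / n

print(riemann_sum(10), riemann_sum(1000), math.e - 1)
```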
2,664,370
<p>I don't know how to start. I've noticed that it can be written as $$\lim_{x\to 0} \frac{2^x+5^x-4^x-3^x}{5^x+4^x-3^x-2^x}=\lim_{x\to 0} \frac{(5^x-3^x)+(2^x-4^x)}{(5^x-3^x)-(2^x-4^x)}$$</p>
egreg
62,967
<p>This is a very special case of l’Hôpital’s theorem, possibly what gave him (or Bernoulli) the idea for the general case.</p> <p>If you have two functions $f$ and $g$ which are differentiable at $0$ (or any other point), then $$ \lim_{x\to0}\frac{f(x)-f(0)}{g(x)-g(0)}=\frac{f'(0)}{g'(0)} $$ provided $g'(0)\ne0$ (if $g'(0)=0$, it's another story). The link with the general theorem should be apparent.</p> <p>However, this is <em>not</em> l’Hôpital: the proof is very easy, because one notices that $$ \lim_{x\to0}\frac{f(x)-f(0)}{g(x)-g(0)}= \lim_{x\to0}\frac{\,\dfrac{f(x)-f(0)}{x}\,}{\,\dfrac{g(x)-g(0)}{x}\,}= \frac{f'(0)}{g'(0)} $$ by standard limit theorems.</p> <p>Thus your limit boils down to considering $f(x)=2^x+5^x-4^x-3^x$ and $g(x)=5^x+4^x-3^x-2^x$, since $f(0)=0$ and $g(0)=0$. Then the limit is $$ \frac{\log2+\log5-\log4-\log3}{\log5+\log4-\log3-\log2} $$</p> <p><em>But I can’t use derivatives!</em> you might object. Well, it happens that taking the derivative is considered an application of l’Hôpital, but actually it isn’t, as shown before. I usually say that when we know derivatives we are able to compute limits of several sorts, so why avoid them?</p> <p>If you can't use derivatives, just pretend you don't. Let's look at $f$; it is a sum of functions. 
How do you prove that the derivative of a sum is the sum of the derivatives?</p> <p>Suppose you have $a(x)$ and $b(x)$, with $s(x)=a(x)+b(x)$ (always at $0$, but it's the same at every point): $$ \frac{s(x)-s(0)}{x-0}=\frac{(a(x)-a(0))+(b(x)-b(0))}{x}= \frac{a(x)-a(0)}{x}+\frac{b(x)-b(0)}{x} $$ If there are four functions, say $a$, $b$, $c$ and $d$, with $s=a+b+c+d$, then $$ \frac{s(x)-s(0)}{x-0}= \frac{a(x)-a(0)}{x}+\frac{b(x)-b(0)}{x}+ \frac{c(x)-c(0)}{x}+\frac{d(x)-d(0)}{x} $$ Now you see how to hide the derivative under the carpet: $$ \lim_{x\to0}\frac{2^x+5^x-4^x-3^x}{5^x+4^x-3^x-2^x} = \lim_{x\to0}\frac{\dfrac{2^x-1}{x}+\dfrac{5^x-1}{x}-\dfrac{4^x-1}{x}-\dfrac{3^x-1}{x}}{\dfrac{5^x-1}{x}+\dfrac{4^x-1}{x}-\dfrac{3^x-1}{x}-\dfrac{2^x-1}{x}} $$</p> <p>You <em>know</em> that $$ \lim_{x\to0}\frac{r^x-1}{x}=\log r $$ is the derivative at $0$ of $x\mapsto r^x$, don't you?</p>
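A numerical check of the final value (illustrative only): evaluating the quotient at small $x$ approaches $\frac{\log2+\log5-\log4-\log3}{\log5+\log4-\log3-\log2}\approx-0.1514$:

```python
import math

def f(x):
    return 2 ** x + 5 ** x - 4 ** x - 3 ** x

def g(x):
    return 5 ** x + 4 ** x - 3 ** x - 2 ** x

# The closed-form limit from the answer above
limit = (math.log(2) + math.log(5) - math.log(4) - math.log(3)) / \
        (math.log(5) + math.log(4) - math.log(3) - math.log(2))

for x in (1e-2, 1e-4, 1e-6):
    print(f(x) / g(x))
print(limit)
```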
1,004,303
<p>Let $S=\{(x,0)\} \cup\{(x,1/x):x&gt;0\}$. Prove that $S$ is not a connected space (the topology on $S$ is the subspace topology)</p> <p>My thoughts: In the first set $x$ is any real number, and I can't see that this set is open in $S$. I can't find a suitable intersection anyhow.</p>
Crostul
160,300
<p>Call $X = \{ (x, 0) : x \in \mathbb{R} \} \subset S$. Clearly $X$ is closed in $S$ since $X$ is closed in $\mathbb{R}^2$.</p> <p>But $X$ is also open in $S$, since $X= S \cap T$, where $T= \{ (x, y) \in \mathbb{R}^2 : y&lt; \frac{1}{2x} , x&gt;0\} \cup \{ (x,y) \in \mathbb{R}^2 : y &lt; 1-x\}$ is open in $\mathbb{R}^2$.</p> <p>So $X$ is clopen, and $S$ is not connected.</p>
2,141,082
<p>Let me first put down a couple definitions, two of which have terminology I make up for this post. If you already know about sheaf theory, you can safely skip Definitions 1-3 and 7-8, and the Construction. Definitions 4-6 introduce notation and terminology that is probably nonstandard, so I recommend reading those in any case. $\DeclareMathOperator{\coker}{coker}\DeclareMathOperator{\Im}{Im} \newcommand{\~}{\sim}$</p> <p><strong>Definition 1 (Sheaf)</strong></p> <p>Let $X$ be a topological space. A <em>sheaf</em> on $X$ is a map $\mathcal{F}:\operatorname{Open}(X)\to(\operatorname{Ab})$, i.e. a map which to every open $U\subseteq X$ assigns an abelian group $\mathcal{F}(U)=\Gamma(\mathcal{F},U)$, such that:</p> <ol> <li>For all $U\subseteq V\subseteq X$ there is a group morphism $\tau_{U,V}:\mathcal{F}(V)\to\mathcal{F}(U)$, called <em>restriction morphism</em>, such that $\tau_{U,V}\circ\tau_{V,W}=\tau_{U,W}$ for all $U\subseteq V\subseteq W$;</li> <li>If $U=\bigcup_iU_i$ and $\tau_{U_i,U}(s)=\tau_{U_i,U}(t)$, then $s=t$ in $\mathcal{F}(U)$;</li> <li>If $s_i\in\mathcal{F}(U_i)$ and, for all $i,j$, $\tau_{U_i\cap U_j,U_i}(s_i)=\tau_{U_i\cap U_j,U_j}(s_j)$, then there exists $s\in\mathcal{F}(\bigcup_iU_i)$ such that $\tau_{U_i,U}(s)=s_i$ for all $i$.</li> </ol> <p>A map $\mathcal{F}:\operatorname{Open}(X)\to(\operatorname{Ab})$ satisfying only 1., and not 2. 
and 3., is a <em>presheaf</em>.</p> <p><strong>Definition 2 (sheaf morphism)</strong></p> <p>Given two sheaves $\mathcal{F},\mathcal{G}$, a <em>sheaf morphism</em> $\phi:\mathcal{F}\to\mathcal{G}$ is the specification of a group morphism $\phi_U:\mathcal{F}(U)\to\mathcal{G}(U)$ (or, we may say, an element $\phi\in\prod_{U\in\operatorname{Open}(X)}\operatorname{Hom}(\mathcal{F}(U),\mathcal{G}(U))$) such that, for all $U,V$, $\phi_U\circ\tau_{U,V}=\tau_{U,V}\circ\phi_V$, where $\tau_{U,V}$ is the restriction morphism of $\mathcal{F}$ on the LHS and of $\mathcal{G}$ on the RHS.</p> <p><strong>Definition 3 (kernel of a morphism)</strong></p> <p>Given $\phi:\mathcal{F}\to\mathcal{G}$ a sheaf morphism, $\ker\phi$ (the <em>kernel</em> of $\phi$) is defined by:</p> <p>$$(\ker\phi)(U):=\ker(\phi_U).$$</p> <p>It is easily verified that this is a sheaf.</p> <p><strong>Definition 4 (Image and Cokernel presheaves of a morphism)</strong></p> <p>Given $\phi$ as in the previous definition, I will denote by $\operatorname{Im}^p\phi$ and $\operatorname{coker}^p\phi$ the presheaves:</p> <p>$$(\operatorname{Im}^p\phi)(U):=\operatorname{Im}(\phi_U),\qquad(\operatorname{coker}^p\phi)(U):=\operatorname{coker}\phi_U.$$</p> <p>These are, in general, not sheaves. However, given any presheaf, there is a construction (explained <a href="https://rigtriv.wordpress.com/2008/01/30/morphisms-of-sheaves/" rel="noreferrer">here</a>) that turns it into a sheaf with minimal variation, in the sense that the stalks (see below) stay the same. 
This is called <em>sheafification</em>.</p> <p><strong>Definition 5 (Image and Cokernel sheaves of a morphism)</strong></p> <p>The sheafifications of $\operatorname{Im}^p\phi$ and $\coker^p\phi$ as defined above are called <em>Image</em> and <em>cokernel</em> of $\phi$, and denoted as $\operatorname{Im}\phi$ and $\coker\phi$ respectively.</p> <p><strong>Definition 6 (C-surjective, I-surjective and injective morphisms)</strong></p> <p>Given a morphism $\phi:\mathcal{F}\to\mathcal{G}$, we say it is:</p> <ol> <li><em>injective</em> if $\ker\phi=0$, i.e. $(\ker\phi)(U)=0$ for all $U$;</li> <li><em>I-surjective</em> if $\Im\phi=\mathcal{G}$;</li> <li><em>C-surjective</em> if $\coker\phi=0$.</li> </ol> <p><strong>Definition 7 (stalks)</strong></p> <p>Given $\mathcal{F}$ a (pre)sheaf, set:</p> <p>$$\mathcal{F}(x):=\{(U,s):s\in\mathcal{F}(U),x\in U\in\operatorname{Open}(X)\}.$$</p> <p>Introduce the equivalence relation:</p> <p>$$(U,s)\~(V,t)\iff\exists W\subseteq U\cap V:\tau_{U,W}(s)=\tau_{V,W}(t).$$</p> <p>The <em>stalk</em> of $\mathcal{F}$ at $x$ is the quotient:</p> <p>$$\mathcal{F}_x:=\frac{\mathcal{F}(x)}{\~}.$$</p> <p><strong>Note</strong></p> <p>I know the stalk can be denoted as:</p> <p>$$\mathcal{F}_x=\lim_{U\ni x}\mathcal{F}(U),$$</p> <p>but I avoid that notation since I have never done that much category theory and, in particular, I have never seen limits of functors in enough detail to not perceive that limit notation as foreign.</p> <p><strong>Definition 8 (germs)</strong></p> <p>Given $\mathcal{F}$ a sheaf or presheaf, let $s\in\mathcal{F}(U)$. We set:</p> <p>$$s_x:=[(U,s)],$$</p> <p>that is, $s_x$ is the equivalence class of $(U,s)$ in the stalk $\mathcal{F}_x$. 
$s_x$ is called the <em>germ</em> of $s$ at $x$.</p> <p>The "germification" map $s\mapsto s_x$ is a group homomorphism from $\mathcal{F}(U)$ to $\mathcal{F}_x$ for any $x\in X,x\in U\in\operatorname{Open}(X)$, as can easily be verified.</p> <p><strong>Construction</strong></p> <p>Let $\phi:\mathcal{F}\to\mathcal{G}$ be a sheaf morphism. For all $x\in X$, there is an induced morphism:</p> <p>$$\phi_x:\mathcal{F}_x\to\mathcal{G}_x,\qquad\phi_x(s_x):=(\phi_U(s))_x,$$</p> <p>where $(U,s)$ is any representative of the germ $s_x\in\mathcal{F}_x$. This is well-defined. Indeed, if $(U,s)\~(V,t)$, then $\tau_{U,W}(s)=\tau_{V,W}(t)$ for some $W\subseteq U\cap V$, and by definition $(U,s)\~(W,\tau_{U,W}(s)),(V,t)\~(W,\tau_{V,W}(t))$. We would need $(\phi_U(s))_x=(\phi_V(t))_x$. Then again:</p> <p>$$(U,\phi_U(s))\~(W,\tau_{U,W}(\phi_U(s)))=(W,\phi_W(\tau_{U,W}(s)))=(W,\phi_W(\tau_{V,W}(t)))=(W,\tau_{V,W}(\phi_V(t)))\~(V,\phi_V(t)).$$</p> <p><strong>Definition 9 (stalk-injectivity, stalk-I-surjectivity and stalk-C-surjectivity)</strong></p> <p>Given a sheaf morphism $\phi:\mathcal{F}\to\mathcal{G}$, we call it</p> <ol> <li>Stalk-injective if $\phi_x:\mathcal{F}_x\to\mathcal{G}_x$ is injective for all $x$;</li> <li>Stalk-I-surjective if $\Im\phi_x=\mathcal{G}_x$ for all $x$;</li> <li>Stalk-C-surjective if $\coker\phi_x=0$ for all $x$. 
</li> </ol> <p>That said, a couple WBFs (WannaBe Facts) one might like to prove.</p> <p><strong>WBF 1</strong></p> <p>$\phi$ is C-surjective iff it is I-surjective.</p> <p><strong>WBF 2</strong></p> <p>$\phi$ is stalk-C-surjective iff it is stalk-I-surjective.</p> <p><strong>WBF 3</strong></p> <p>$\phi$ is C-surjective iff it is stalk-C-surjective.</p> <p><strong>WBF 4</strong></p> <p>$\phi$ is I-surjective iff it is stalk-I-surjective.</p> <p><strong>WBF 5</strong></p> <p>$\phi$ is injective iff it is stalk-injective.</p> <p>Unfortunately, WBF 1 is, unless I'm much mistaken, not true.</p> <p><strong>Fact 1</strong></p> <p>C-surjectivity implies I-surjectivity, but not viceversa.</p> <p><em>Proof</em>.</p> <p>C-surjectivity implies $\coker^p\phi=0$, since any presheaf is a sub-presheaf of its sheafification (i.e. $\mathcal{F}(U)$ is always contained in $\mathcal{F}^+(U)$, $\mathcal{F}^+$ being the sheafification of $\mathcal F$). But then $\Im^p\phi=\mathcal G$, and $\Im^p\subseteq\Im$ just like $\coker^p\subseteq\coker$, and so we have I-surjectivity.</p> <p>Let $\Omega^p$ be the sheaf of $p$-forms on a manifold. The exterior derivative can be seen as a sheaf morphism $d:\Omega^p\to Z^{p+1}$, $Z^{p+1}$ being the closed $(p+1)$-forms. $\Im^pd=B^{p+1}$, the presheaf of exact $(p+1)$-forms, whose sheafification is just $Z^{p+1}$, since every closed form is locally exact, otherwise known as a gluing of exact forms and hence an element of the sheafification. So we have I-surjectivity. However, if the cohomology of the manifold is nontrivial, $\coker d_U=H^{p+1}(U)$ is in general nontrivial, which renders C-surjectivity impossible. $\square$</p> <p><strong>Fact 2</strong></p> <p>WBF 2 holds.</p> <p><em>Proof</em>.</p> <p>$\coker\phi_x=\frac{\mathcal G_x}{\Im\phi_x}$, which is zero iff $\phi_x$ is surjective. 
$\square$</p> <p><strong>Corollary 1</strong></p> <p>Since WBF 3 and WBF 4, along with WBF 2 (which is true), would imply WBF 1 (which is false), one of those must be false as well.</p> <p><strong>Fact 3</strong></p> <p>WBF 5 holds.</p> <p><em>Proof.</em></p> <p>Assume $\phi$ is stalk-injective. Let $s\in\mathcal{F}(U)$ satisfy $\phi_U(s)=0$. Then $\phi_x(s_x)=(\phi_U(s))_x=0_x=0$ for all $x\in U$, so $s_x=0$ for all $x\in U$, by stalk-injectivity. But this means that, for all $x\in U$, there is $V\subseteq U$ containing $x$ such that $\tau_{U,V}(s)=0$. But by axiom 2. of the definition of sheaf, this implies $s=0$ on all of $U$, proving $\phi_U$ is injective. So one direction is done.</p> <p>Viceversa, let $\phi$ be injective. Assume $\phi_x(s_x)=0$ for some $s_x\in\mathcal{F}_x$. Take a representative $(U,s)$ of $s_x$. $(\phi_U(s))_x=0$ implies there is $V\subseteq U$ such that $x\in V$ and $\tau_{U,V}(\phi_U(s))=0$. But that is $\phi_V(\tau_{U,V}(s))$, so $\tau_{U,V}(s)$ must be zero by injectivity of $\phi_V$. But then $s_y=0$ for all $y\in V$, in particular for $y=x$, proving stalk-injectivity. $\square$</p> <p><strong>Fact 4</strong></p> <p>I-surjectivity implies stalk-I-surjectivity.</p> <p><em>Proof</em>.</p> <p>Let $t_x\in\mathcal{G}_x$. Take a representative $(U,t)$ such that the germ of $t$ at $x$ is the chosen $t_x$. $\phi_U$ is surjective by hypothesis, hence there exists $(U,s)$ such that $\phi_U(s)=t$. But then $\phi_x(s_x)=(\phi_U(s))_x=t_x$, so $\phi_x$ is surjective. 
$\square$.</p> <p>Let us draw a diagram of what we have proved about the various types of surjectivity, and deduce a couple corollaries by looking at it.</p> <p>$$\begin{array}{ccc} \text{C-surjectivity} &amp; &amp; \text{stalk-C-surjectivity} \\ \not\uparrow\downarrow &amp; &amp; \uparrow\downarrow \\ \text{I-surjectivity} &amp; \rightarrow &amp; \text{stalk-I-surjectivity} \end{array}$$</p> <p><strong>Corollary 2</strong></p> <p>Stalk-C-surjectivity cannot imply C-surjectivity.</p> <p><em>Proof</em>.</p> <p>Suppose otherwise. Then I-surjectivity implies stalk-I-surjectivity (Fact 4), which implies stalk-C-surjectivity (Fact 2), which implies C-surjectivity (hypothesis), and yet I-surjectivity does not imply C-surjectivity (counterexample in Fact 1), a contradiction. $\square$</p> <p><strong>Corollary 3</strong></p> <p>C-surjectivity implies stalk-C-surjectivity.</p> <p><em>Proof</em>.</p> <p>Assume C-surjectivity. Then we have I-surjectivity (Fact 1), and hence stalk-I-surjectivity (Fact 4), and hence stalk-C-surjectivity (Fact 2). $\square$</p> <p>So we add another couple arrows to the diagram.</p> <p>$$\begin{array}{ccc} \text{C-surjectivity} &amp; ^{\not\leftarrow}_{\rightarrow} &amp; \text{stalk-C-surjectivity} \\ \not\uparrow\downarrow &amp; &amp; \uparrow\downarrow \\ \text{I-surjectivity} &amp; \rightarrow &amp; \text{stalk-I-surjectivity} \end{array}$$</p> <p>There remains therefore one last question.</p> <blockquote> <p><strong>Does stalk-I-surjectivity imply I-surjectivity?</strong></p> </blockquote> <p>And that is my question. I tried proving it, and I cannot seem to get to the end. 
In fact, there is also another question.</p> <blockquote> <p><strong>Is all the above correct?</strong></p> </blockquote> <p>I am particularly doubtful about WBF 1 being false, since my Complex Geometry teacher said that «$\phi$ […] è suriettivo se il fascio-immagine è tutto $\mathcal{G}$, oppure il cokernel è 0, è la stessa cosa» ($\phi$ […] is surjective if the image sheaf is the whole of $\mathcal{G}$, or if the cokernel is zero, it's the same thing), and I seem to have just disproven his statement.</p> <p><strong>Update</strong></p> <p>I thought I had answered the first question. I was writing a self-answer, which started by proving the following.</p> <blockquote> <p><strong>Lemma</strong></p> <p>For all $x$, we have:</p> <p>$$(\ker^p\phi)_x=\ker\phi_x,\qquad(\Im^p\phi)_x=\Im\phi_x.$$</p> <p><em>Proof</em>.</p> <p>$$((\ker^p\phi)_x\subseteq\ker\phi_x)$$</p> <p>Let $s_x\in(\ker^p\phi)_x$. This means $s_x$ is the germ at $x$ of some $s\in(\ker^p\phi)(U)$, by definition of stalks, and by definition of the kernel presheaf we have $\phi_U(s)=0$. Hence, by definition of the stalk morphism, $\phi_x(s_x)=(\phi_U(s))_x=0_x=0$.</p> <p>$$((\ker^p\phi)_x\supseteq\ker\phi_x)$$</p> <p>Let $s_x\in\ker\phi_x$, i.e. $\phi_x(s_x)=0$. By definition of the stalk morphism, $\phi_x(s_x)$ is $(\phi_U(s))_x$ for any representative $(U,s)$ of $s_x$. $(\phi_U(s))_x=0$ implies there is $V\subseteq U$ such that $\tau_{U,V}(\phi_U(s))=0$. But that is $\phi_V(\tau_{U,V}(s))$, meaning $\tau_{U,V}(s)\in\ker\phi_V$. Naturally, $s_x$ is also the germ of $\tau_{U,V}(s)$ at $x$, which gives us $s_x\in(\ker^p\phi)_x$.</p> <p>$$((\Im^p\phi)_x\subseteq\Im\phi_x)$$</p> <p>Let $s_x\in(\Im^p\phi)_x$. This means there is $s\in(\Im^p\phi)(U)$ such that its germ at $x$ is $s_x$. $s\in(\Im^p\phi)(U)$ means $s\in\Im\phi_U$, so there is $t\in\mathcal{F}(U)$ such that $\phi_U(t)=s$.
But this means $\phi_x(t_x)=s_x$, showing $s_x\in\Im\phi_x$.</p> <p>$$((\Im^p\phi)_x\supseteq\Im\phi_x)$$</p> <p>Let $s_x\in\Im\phi_x$. This means there is $t_x\in\mathcal{F}_x:\phi_x(t_x)=s_x$. Taking a representative $(U,t)$ such that $t_x$ is the germ of $t$ at $x$, we will have $(\phi_U(t))_x=s_x$. But that means $s_x$ is the germ of something in $\Im\phi_U$, and hence $s_x\in(\Im^p\phi)_x$. $\square$</p> </blockquote> <p>This should still allow me to conclude that the answer to that question is yes, since if $\phi$ is stalk-I-surjective then $(\Im\phi)_x=(\Im^p\phi)_x=\Im\phi_x=\mathcal{G}_x$, and… wait. Is it true that if $\mathcal{H}$ is a subsheaf of $\mathcal{G}$ and $\mathcal{H}_x=\mathcal{G}_x$ for all $x$, then $\mathcal{G}=\mathcal{H}$? Because if so, since $\Im\phi$ is a subsheaf of $\mathcal{G}$ (right?), I have concluded.</p> <p>Anyways, as I wrote the second $\subset$ part of the Lemma proof, Roland posted his answer, pointing out that indeed my proof of half of WBF 1 is wrong.</p> <p>He didn't say anything about Facts 2-3, so I assume I did not go wrong there. Since the first half of the Lemma is essentially equivalent to Fact 3, I guess I can safely assume the first half of my proof of the Lemma is OK.</p> <p>I will come back to Facts 1 and 4 later. Right now, let me prove the following.</p> <p><strong>Fact U1</strong></p> <p>If $\mathcal{F}$ is a subsheaf of $\mathcal{G}$ (i.e. they are both sheaves and $\mathcal{F}(U)\subseteq\mathcal{G}(U)$ for all open $U$) and $\mathcal{F}_x=\mathcal{G}_x$ for all $x\in X$, then $\mathcal{F}=\mathcal{G}$.</p> <p><em>Proof</em>.</p> <p>Assume, to the contrary, that there is an open $U$ such that $\mathcal{F}(U)\subsetneq\mathcal{G}(U)$. Take any $s\in\mathcal{G}(U)\smallsetminus\mathcal{F}(U)$. Assume, for the moment, that I can find a covering $\{U_i\}$ of $U$ with open sets such that $\mathcal{F}(U_i)=\mathcal{G}(U_i)$ for all $i$.
Then we have $\tau_{U,U_i}(s)=:s_i\in\mathcal{G}(U_i)=\mathcal{F}(U_i)$, and naturally $\tau_{U_i,U_i\cap U_j}(s_i)=\tau_{U_j,U_i\cap U_j}(s_j)$, so by condition 3. in the definition of sheaf we should have $s\in\mathcal{F}(U)$, contradiction.</p> <p>It remains to prove that I can find such a covering $\{U_i\}$. Note that condition 3 does not require the covering to be finite, so we can use the set $\{V\subseteq U:\mathcal{F}(V)=\mathcal{G}(V)\}$ as a candidate. If that does not cover $U$, then we have $x\in U$ such that, for all $V\subseteq U$ containing $x$, there is $s_V\in\mathcal{G}(V)\smallsetminus\mathcal{F}(V)$. I would like to deduce from this that $\mathcal{F}_x\neq\mathcal{G}_x$. But I'm not sure how to exclude that, for every $V$, there be $W_V\subseteq V$ such that $\tau_{V,W_V}(s_V)\in\mathcal{F}(W_V)$. Maybe I'll think about this and come back afterwards. For the time being, $\not\square$.</p> <p>It should be true that $\Im^p\phi$ is a sub-presheaf of $\Im\phi$. If "separated presheaf" means something satisfying 1. and 2., but not necessarily 3., from the definition of sheaf, then $\Im^p\phi$ is a separated presheaf, since it is a sub-presheaf of $\mathcal{G}$. I think I remember reading, on a trip on Google, that if a presheaf is separated, then it is a sub-presheaf of its sheafification, which would conclude here.</p> <p>So now we are left with these questions:</p> <ol> <li>How to prove WBFs 1, 4 and 5;</li> <li>Is the content of this update correct so far?</li> <li>How to conclude the proof of Fact U1;</li> <li>How to prove that a separated presheaf is a sub-presheaf of its sheafification.</li> </ol> <p><strong>Update 2</strong></p> <p>Let us answer question 4.</p> <p><strong>Fact U2</strong></p> <p>For every presheaf $\mathcal{F}$, there exists a natural morphism $\phi:\mathcal{F}\to\mathcal{F}^+$ ($\mathcal{F}^+$ being the sheafification). It is injective iff the presheaf is separated, and, under the hypothesis of 2., it is surjective iff 3.
holds.</p> <p><em>Proof</em>.</p> <p>Set $\phi_U(s)=\{x\mapsto s_x\}$. This $\{x\mapsto s_x\}$ is a map from $U$ to the union of the stalks at points of $U$, such that for all $x$, $s_x\in\mathcal{F}_x$. Also, by construction, this is an element of $\mathcal{F}^+(U)$, so our map is well-defined.</p> <p>The kernel $\ker\phi_U$ is precisely the set of all $s\in\mathcal{F}(U)$ such that there exists a covering $U\subseteq\bigcup_iU_i$ satisfying $\tau_{U,U_i}(s)=0$ for all $i$. Hence, it is trivial for all $U$ iff the presheaf is separated, but triviality of all $\ker\phi_U$ is precisely the injectivity of $\phi$.</p> <p>Surjectivity of $\phi$ means that, as Roland points out at the end of the answer, if $s\in\mathcal{F}^+(U)$, then there is a covering $\{U_i\}$ of $U$ such that $\tau_{U,U_i}(s)=\phi_{U_i}(t_i)$ for all $i$, where $t_i$ are some elements of $\mathcal{F}(U_i)$. If I had injectivity, I could certainly conclude that these have coinciding restrictions to the intersections, and then condition 3. would let me glue them to find a preimage, and $\phi_U$ would be surjective for all $U$, implying surjectivity.</p> <p>I cannot seem to prove the other direction, but this is all I need. $\not\,\square$</p> <p>This implies that a sheaf is isomorphic to its sheafification, and any separated presheaf is isomorphic to a sub-presheaf of its sheafification. In particular, the answer to question 4 is yes.</p>
MickG
135,592
<p>Let me make a little sum-up and (hopefully) complete the proofs of the WBFs.</p> <p>There is a big trap I fell right into: <strong>$\mathcal{F}$ (presheaf) is a sub-presheaf of $\mathcal{F}^+$ (sheafification)</strong> is <strong>false</strong>. Indeed, as shown in <strong>Fact U2</strong>, that holds iff $\mathcal{F}$ is separated. Let us recover the proof.</p> <p><em>Proof</em>.</p> <p>$$(\text{Separation implies }\mathcal{F}\leq\mathcal{F}^+)$$</p> <p>As is easily verified, the map $s\in\mathcal{F}(U)\mapsto\{x\mapsto s_x\}\in\mathcal{F}^+(U)$ is well-defined and a morphism of presheaves. We can identify $\mathcal{F}$ with the image presheaf of this morphism (and hence view $\mathcal{F}$ as a sub-presheaf of $\mathcal{F}^+$) iff the morphism is injective.</p> <p>Injectivity is characterised by injectivity of all $\phi_U$. What is $\ker\phi_U$? It is the set of $s\in\mathcal{F}(U)$ such that $s_x=0$ for all $x$. Hence, if $\mathcal{F}$ is separated, $\ker\phi_U=0$ for all $U$.</p> <p>$$(\text{Viceversa})$$</p> <p>If $\ker\phi_U=0$ for all $U$, it means there are no locally zero nonzero sections. Take $s,t\in\mathcal{F}(U)$. If $s_x=t_x$ for all $x\in U$, then $s-t$ has germ zero everywhere, hence is zero, hence $s=t$. So $\mathcal{F}$ is separated. $\square$</p> <p>While we're at it, the other half of <strong>Fact U2</strong> was that the morphism defined above is an isomorphism iff $\mathcal{F}$ is a sheaf.</p> <p><em>Proof</em>.</p> <p>$$(\text{Isomorphism implies sheaf})$$</p> <p>This is a morphism of presheaves, not sheaves, so surjectivity means $\Im^p\phi=\mathcal{F}^+$, and this is iff $\phi_U$ is surjective for all $U$. By injectivity and the above, $\mathcal{F}$ is a sub-presheaf of $\mathcal{F}^+$, and it has exactly the same sections on all open $U$, so it coincides with $\mathcal{F}^+$ and is thus a sheaf.</p> <p>$$(\text{Viceversa})$$</p> <p>By construction, $\mathcal{F}$ has the same stalks as $\mathcal{F}^+$.
Since $\mathcal{F}$ is a sheaf, it is separated, and this implies it is a sub(pre)sheaf of $\mathcal{F}^+$, as seen above. But a subsheaf of a sheaf with the same stalks as the supersheaf is indeed the supersheaf, as seen in <strong>Fact U1</strong> (proven by Roland). This shows $\phi$ is an isomorphism. $\square$</p> <p><strong>Facts 2-3</strong>, showing respectively that stalk-I-surjectivity and stalk-C-surjectivity are equivalent, and that stalk-injectivity is equivalent to injectivity, should be correctly proved. <strong>Fact 3</strong> is anyway a consequence of the first half of the <strong>Lemma</strong>, which has been OK'd by Roland, so that works.</p> <p>Now we get to <strong>WBF 4</strong>.</p> <p><em>Proof</em>.</p> <p>$$(\text{Stalk-I-surjectivity implies I-surjectivity})$$</p> <p>The <strong>Lemma</strong> tells us $(\Im\phi)_x=\Im\phi_x=\mathcal{G}_x$, but $\Im\phi\leq\mathcal{G}$ and they have the same stalks, so $\Im\phi=\mathcal{G}$, that is $\phi$ is surjective.</p> <p>$$(\text{I-surjectivity implies stalk-I-surjectivity})$$</p> <p>If $\Im\phi=\mathcal{G}$, then $\Im\phi_x=(\Im\phi)_x=\mathcal{G}_x$ for all $x$, and hence we have stalk-I-surjectivity. $\square$</p> <p>Let us remake the diagram in the question.</p> <p>$$\begin{array}{ccc} \text{C-surjectivity} &amp; &amp; \text{stalk-C-surjectivity} \\ &amp; &amp; \uparrow\downarrow \\ \text{I-surjectivity} &amp; \iff &amp; \text{stalk-I-surjectivity} \end{array}$$</p> <p>This shows us <strong>WBF 1</strong> and <strong>WBF 3</strong> (respectively the iff in the left column and the iff up top) are equivalent. So let us proceed to prove one of them.</p> <p>First, let me add a part to the <strong>Lemma</strong>.</p> <p><strong>Lemma, pt. 2</strong></p> <p>$(\coker\phi)_x\cong\coker\phi_x$.</p> <p><em>Proof</em>.</p> <p>$$(\text{Building a morphism})$$</p> <p>If $s_x\in\coker\phi_x$, this means we can find $\tilde s_x\in\mathcal{G}_x$ such that $[\tilde s_x]=s_x$.
Set $(U,\tilde s)$ a representative of $\tilde s_x$. Then we have an element $[\tilde s]\in\coker\phi_U$. We then set $f_x(s_x)=[\tilde s]_x$. So let me recap: we take $s_x\in\coker\phi_x$, take a representative in $\mathcal{G}_x$, take a representative in some $\mathcal{G}(U)$, project it onto $\coker\phi_U$, project it onto $(\coker^p\phi)_x=(\coker\phi)_x$.</p> <p>$$(\text{Independence on the choice of the section representative})$$</p> <p>Let us first show it does not depend on the $\mathcal{G}(U)$-representative. So we have chosen $\tilde s_x\in\mathcal{G}_x$ such that its equivalence class in $\coker\phi_x$ is $s_x$. We take two representatives $(U,\tilde s_1),(V,\tilde s_2)$. This means there is $W\subseteq U\cap V$ such that $\tau_{U,W}(\tilde s_1)=\tau_{V,W}(\tilde s_2)$. I would like it if $\tau_{U,W}(\pi_U(\tilde s_1))=\pi_W(\tau_{U,W}(\tilde s_1))$, for then we would have:</p> <p>$$\tau_{U,W}(\pi_U(\tilde s_1))=\pi_W(\tau_{U,W}(\tilde s_1))=\pi_W(\tau_{V,W}(\tilde s_2))=\tau_{V,W}(\pi_V(\tilde s_2)),$$</p> <p>and hence the germs of $\pi_U(\tilde s_1),\pi_V(\tilde s_2)$ would coincide, showing this first independence. So what we want is that the projection $\pi:\mathcal{G}\to\coker\phi$ be a (pre)sheaf morphism. Naturally, $\tau_{U,W}(\mathcal{G}(U))\subseteq\mathcal{G}(W)$. $\phi$ is a morphism, so $\tau_{U,W}(\Im\phi_U)\subseteq\Im\phi_W$. But then, equivalence classes are sent to equivalence classes, so indeed the above holds.</p> <p>$$(\text{Independence on the choice of the germ representative})$$</p> <p>Let us choose $\tilde s_x^1,\tilde s_x^2$ such that $\pi_x(\tilde s_x^1)=s_x=\pi_x(\tilde s_x^2)$. This means $\tilde s_x^1-\tilde s_x^2=\phi_x(t_x)$. We have already shown that the choice of section representative does not affect the outcome, so we go ahead and choose $(U,\tilde s^1),(V,\tilde s^2)$ whose germs at $x$ are $\tilde s_x^1,\tilde s_x^2$. There certainly exists some $W\subseteq U\cap V$ and a representative $(W,t)$ such that $\phi_W(t)=\tau_{U,W}(\tilde s^1)-\tau_{V,W}(\tilde s^2)$.
This means that, when we project onto $\coker\phi_W$, these coincide, and so their germs at $x$ also coincide. So this $f$ is well-defined indeed.</p> <p>$$(\text{Injectivity})$$</p> <p>What is $\ker f_x$? It consists of $s_x\in\coker\phi_x$ such that, choosing $\tilde s_x\in\mathcal{G}_x$ which projects to $s_x$, and choosing $\tilde s\in\mathcal{G}(U)$ with germ $\tilde s_x$ at $x$, the equivalence class of $\tilde s$ in $\coker\phi_U$ has zero germ. That is, there is $W\subseteq U$ and $(W,t)$ such that $\phi_W(t)=\tau_{U,W}(\tilde s)$. But then $\tilde s_x=(\tau_{U,W}(\tilde s))_x=(\phi_W(t))_x\in\Im\phi_x$, so $s_x=0$ in the first place, proving injectivity.</p> <p>$$(\text{Surjectivity})$$</p> <p>Let $s_x\in(\coker\phi)_x$. This means we have $s\in\coker\phi_U$ with germ $s_x$ at $x$. Go back up to $\tilde s\in\mathcal{G}(U)$. Project this onto the stalk and obtain the germ $\tilde s_x$. Project this onto $\coker\phi_x$, et voilà, a preimage. So the surjectivity is proved, and we are done. $\square$</p> <p>Finally, we can prove <strong>WBF 3</strong>.</p> <p><em>Proof of <strong>WBF 3</strong></em>.</p> <p>$$(\text{C-surjectivity implies stalk-C-surjectivity})$$</p> <p>If $\coker\phi=0$, its stalks are zero, but then so are $(\coker^p\phi)_x$, since the stalks of a presheaf and its sheafification are the same, as proved somewhere. But then $\coker\phi_x\cong(\coker^p\phi)_x=0$ by the above <strong>Lemma pt. 2</strong>, proving stalk-C-surjectivity.</p> <p>$$(\text{Viceversa})$$</p> <p>$0=\coker\phi_x\cong(\coker^p\phi)_x=(\coker\phi)_x$, so $\coker\phi$ has zero stalks, thus, being a sheaf, it is the zero sheaf.
$\square$</p> <p>Another pitfall I fell into is the following:</p> <blockquote> <p>Although sheaf morphisms are just presheaf morphisms between sheaves, the concepts of I- and C-surjectivity are <strong>different</strong>: for presheaf morphisms, $\phi$ is I-surjective (or equivalently C-surjective) iff $\phi_U$ is surjective for all $U$; for sheaf morphisms, it is the sheafification of $\Im^p\phi$ (resp. $\coker^p\phi$) that must be $\mathcal{G}$ (resp. 0). The former condition is <strong>not</strong> equivalent to stalk-C/I-surjectivity, since the latter is, and the former and the latter are not equivalent (example: $d$ is surjective as a sheaf morphism $d:\Omega^p\to Z^{p+1}$, but not as a presheaf morphism, in general, since $\Im^p\phi$ is the presheaf of exact forms which sheafifies to the sheaf of closed forms). The characterization given by Roland (which is easily proved) is one of sheaf morphism surjectivity, not presheaf morphism surjectivity.</p> </blockquote> <p>One last note: the <strong>Lemma</strong> allows us to conclude that a sequence $\mathcal{F}\to\mathcal{G}\to\mathcal{H}$ of sheaves is exact iff it is exact at the stalk level, i.e. the sequence $\mathcal{F}_x\to\mathcal{G}_x\to\mathcal{H}_x$ is exact for all $x$. A presheaf sequence is instead exact iff it is exact on sections. The two are not equivalent.</p>
1,685,423
<p>$$ x_n=\begin{cases}\frac{1}{n^2} &amp; \text{if } n \text{ is even} \\ \frac{1}{n} &amp; \text{if } n \text{ is odd}\end{cases}$$</p> <p>How can I show that $$ \sum x_n$$ is convergent?</p>
Ahmed S. Attaalla
229,023
<p>An even $n$ contributes a positive amount to the sum. However if $n$ is odd then $n=2k+1$ will contribute:</p> <p>$$\sum_{k=0}^{\infty} \frac{1}{2k+1}$$</p> <p>What can you say about this sum?</p>
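<p>As a numerical illustration of this point (my own addition, not part of the original answer): the partial sums of the odd-indexed contribution $\sum_{k} \frac{1}{2k+1}$ grow like $\frac12\ln N$ plus a constant, i.e. without bound, so the full series cannot converge.</p>

```python
import math

# Partial sums of sum_{k=0}^{N} 1/(2k+1), the total contribution of the
# odd terms x_n = 1/n.  They track (1/2)*ln(N) plus a constant, so they
# are unbounded: the series diverges despite the convergent even part.
def odd_part(N):
    return sum(1.0 / (2 * k + 1) for k in range(N + 1))

for N in (10**2, 10**4, 10**6):
    print(N, round(odd_part(N), 4), round(odd_part(N) - 0.5 * math.log(N), 4))
```

<p>The last column stabilizes (it tends to $\gamma/2+\ln 2\approx 0.98$), while the partial sums themselves keep growing.</p>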
964,999
<p>If $A$ and $B$ are two closed sets of $\mathbb{R}$, is $A.B$ closed? By $A.B$ I mean the set of sums $\sum_{i=1}^{n} a_ib_i$ where $a_i \in A$, $b_i\in B$, $n\in \mathbb{N}$. How can one view $A.B$ geometrically? I am new to this subject. Sorry if the question sounds wrong.</p>
John
105,625
<p>Without further clarification from the OP, I am interpreting the question as "$A,B$ are subsets of $\mathbb{R}$, and $A \cdot B =\{ab \,| \, a\in A,b \in B\}$".</p> <p>If $A,B$ are closed sets in $\mathbb{R}$ with standard topology, then $A\cdot B$ may not be closed in $\mathbb{R}$. An example is $A=\{0\} \cup \{\frac{1}{n} \,|\, n\in \mathbb{N}\}$, and $B=\mathbb{N}$,then $A \cdot B=\{0\} \cup \mathbb{Q}_{+}$, which is not closed in $\mathbb{R}$.</p>
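<p>A quick sanity check of this example with exact rational arithmetic (my own illustration, not part of the answer): every positive rational $p/q$ equals $(1/q)\cdot p$ with $1/q\in A$ and $p\in B$, so $A\cdot B\supseteq\mathbb{Q}_{+}$.</p>

```python
# A = {0} ∪ {1/n : n ∈ N}, B = N.  Every positive rational p/q equals
# (1/q) * p with 1/q ∈ A and p ∈ B, so A·B ⊇ Q_+; together with 0·b = 0
# this gives A·B = {0} ∪ Q_+, which is not closed.
from fractions import Fraction

def in_AB(r):
    """Exhibit the representation r = (1/q) * p for a positive rational r."""
    r = Fraction(r)
    a = Fraction(1, r.denominator)   # element of A
    b = r.numerator                  # element of B
    return a * b == r

print(all(in_AB(Fraction(p, q)) for p in range(1, 20) for q in range(1, 20)))
```

<p>Every element of $A\cdot B$ is rational, so any irrational number is a limit of elements of $A\cdot B$ lying outside it, which is exactly the failure of closedness.</p>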
2,950,813
<blockquote> <p>Take <span class="math-container">$G$</span> to be a group of order <span class="math-container">$600$</span>. Prove that for any element <span class="math-container">$a \in G$</span> there exists an element <span class="math-container">$b \in G$</span> such that <span class="math-container">$a = b^7$</span>. </p> </blockquote> <p>My thought process: since <span class="math-container">$a = b^7$</span> <span class="math-container">$\implies$</span> <span class="math-container">$|a| = |b^7|$</span>. Consequently <span class="math-container">$\operatorname{lcm}(1,|a|) = \dfrac{1}{7}\operatorname{lcm}(7,|b|)$</span> implies <span class="math-container">$7|a| = 7|b|$</span> so <span class="math-container">$|a| = |b|$</span>. I don't know where I would go from here or if this is even the right approach. </p>
Nicky Hekster
9,605
<p><strong>Proposition</strong> Let <span class="math-container">$G$</span> be a finite group and <span class="math-container">$n$</span> a positive integer. Then the map <span class="math-container">$f: G \to G$</span> defined by <span class="math-container">$f(g)=g^n$</span> is a bijection if and only if gcd<span class="math-container">$(|G|,n)=1$</span>.</p> <p> <strong>Proof</strong> (sketch) Bézout yields <span class="math-container">$1=k|G|+mn$</span>, for some integers <span class="math-container">$k, m$</span>. Then <span class="math-container">$g=g^{k|G|+mn}=g^{mn}$</span>. Hence if <span class="math-container">$g^n=h^n$</span>, then <span class="math-container">$g=g^{mn}=h^{mn}=h$</span>. So <span class="math-container">$f$</span> is injective and since <span class="math-container">$G$</span> is finite it must be bijective. Conversely, assume gcd<span class="math-container">$(n,|G|)\neq 1$</span>. Then we can find a prime <span class="math-container">$p$</span> with <span class="math-container">$p \mid n$</span> and <span class="math-container">$p \mid |G|$</span>. By Cauchy's Theorem there is a non-trivial <span class="math-container">$g \in G$</span> with order<span class="math-container">$(g)=p$</span>. Then <span class="math-container">$g^n=g^{p \cdot \frac{n}{p}}=1^\frac{n}{p}=1=1^n$</span>. Since <span class="math-container">$f$</span> is injective this yields <span class="math-container">$g=1$</span>, a contradiction.</p>
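<p>The proposition (and hence the original question, with $|G|=600$ and $n=7$) can be sanity-checked on a concrete group. The sketch below (my own illustration, not part of the answer) uses the cyclic group $\mathbb{Z}/600\mathbb{Z}$, written additively, where "$b^7$" becomes $7b$; the Bézout coefficient $m$ with $7m\equiv 1 \pmod{600}$ plays the role of $b=a^m$ in the proof.</p>

```python
# In the additive group Z/600Z, "seventh power" means multiplication by 7.
# gcd(7, 600) = 1, so x -> 7x mod 600 is a bijection, and b = m*a with
# m = 7^{-1} mod 600 satisfies 7*b = a, mirroring b = a^m in the proof.
n = 600
image = sorted((7 * x) % n for x in range(n))
assert image == list(range(n))          # bijectivity of x -> 7x

m = pow(7, -1, n)                       # modular inverse (Python 3.8+)
for a in range(n):
    b = (m * a) % n
    assert (7 * b) % n == a             # every a is a "seventh power"
print("ok, 7^{-1} mod 600 =", m)
```

<p>Here $m = 343 = 7^3$, so in this group every element $a$ is the seventh power of $b = a^{343}$.</p>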
302,179
<p>The question I am working on is:</p> <blockquote> <p>"Use a direct proof to show that every odd integer is the difference of two squares."</p> </blockquote> <p>Proof:</p> <p>Let $n$ be an odd integer: $n = 2k + 1$, where $k \in Z$</p> <p>Let the difference of two different squares be $a^2-b^2$, where $a,b \in Z$.</p> <p>Hence, $n=2k+1=a^2-b^2$...</p> <p>As you can see, this is a dead-end. Appealing to the answer key, I found that they let the difference of two different squares be $(k+1)^2-k^2$. I understand their use of $k$ ($k$ is one number, and $k+1$ is a different number); however, why did they choose to add $1$? Why couldn't we have added $2$?</p>
Herng Yi
34,473
<p>Another approach, a graphical proof: $$\underbrace{\begin{array}{ccccc} 1\odot &amp; 3\otimes &amp; 5\odot &amp; \cdots &amp; (2k - 1)\otimes\\\hline \bigodot &amp; \bigotimes &amp; \bigodot &amp; \cdots &amp; \bigotimes\\ \bigotimes &amp; \bigotimes &amp; \bigodot &amp; \cdots &amp; \bigotimes\\ \bigodot &amp; \bigodot &amp; \bigodot &amp; \cdots &amp; \bigotimes\\ \vdots &amp; \vdots &amp; \vdots &amp; \ddots &amp; \vdots\\[1ex] \bigotimes &amp; \bigotimes &amp; \bigotimes &amp; \cdots &amp; \bigotimes\\ \end{array}}_{\textstyle k}$$</p> <p>From this we have $k^2 = 1 + 3 + 5 + \dotsb + (2k - 1)$. Hence, $$\begin{align} 2k + 1 &amp;= \Big(1 + 3 + 5 + \dotsb + (2k - 1) + (2k + 1)\Big) - \Big(1 + 3 + 5 + \dotsb + (2k - 1)\Big)\\ &amp;= (k + 1)^2 - k^2 \end{align}$$</p>
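<p>Both identities in this picture are easy to machine-check; a short Python loop (my own addition, not part of the answer) confirms them for a range of $k$:</p>

```python
# k^2 equals the sum of the first k odd numbers, and consequently
# 2k + 1 = (k + 1)^2 - k^2: every odd integer is a difference of squares.
for k in range(1, 1000):
    assert sum(2 * j - 1 for j in range(1, k + 1)) == k ** 2
    assert (k + 1) ** 2 - k ** 2 == 2 * k + 1
print("verified for k = 1 .. 999")
```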
302,179
<p>The question I am working on is:</p> <blockquote> <p>"Use a direct proof to show that every odd integer is the difference of two squares."</p> </blockquote> <p>Proof:</p> <p>Let $n$ be an odd integer: $n = 2k + 1$, where $k \in Z$</p> <p>Let the difference of two different squares be $a^2-b^2$, where $a,b \in Z$.</p> <p>Hence, $n=2k+1=a^2-b^2$...</p> <p>As you can see, this is a dead-end. Appealing to the answer key, I found that they let the difference of two different squares be $(k+1)^2-k^2$. I understand their use of $k$ ($k$ is one number, and $k+1$ is a different number); however, why did they choose to add $1$? Why couldn't we have added $2$?</p>
Ben Millwood
29,966
<p>What's going on here, I think, is you're confused about what you're being asked to prove.</p> <p>The statement is "every odd integer is the difference of two squares", or, more precisely, "<strong>for all</strong> odd integers $n$, <strong>there exist</strong> $a$ and $b$ such that $a^2 - b^2 = n$". Think for a bit about what the difference between "for all..." and "there exists..." is, and maybe you'll realise what's going on.</p> <p>You have to come up with a proof that works for every odd integer, so you start with an arbitrary choice $n = 2k + 1$. But then you seem to go on to pick an arbitrary difference of two squares, $a^2 - b^2$. But that's not necessary – we're not trying to prove anything about <em>every</em> difference of two squares, just <em>one</em>, so you only need to show that there <em>is</em> a difference of two squares that satisfies the condition, and in particular <em>you</em> choose it, it isn't arbitrary.</p> <p>Think of the statement as "if you give me an odd integer $n$, I can give you $a$ and $b$ such that $a^2 - b^2 = n$". In particular, you can give me whatever odd integer you like, but then <em>I</em> get to choose what $a$ and $b$ are. In particular, I can choose $a$ to be $b + 1$ if I like.</p> <p>From one of your comments:</p> <blockquote> <p>We have to show that $a^2 - b^2$ is an odd number</p> </blockquote> <p>But in general that's just not true. For example, $4^2 - 2^2 = 12$. So if you've found yourself trying to prove that statement, you've clearly gone wrong somewhere.</p>
874,404
<p>I'm reading through Basic Algebra I (which I enjoy so far. Thoughts on this for self-studying?) and am having a difficult time proving surjection. I believe I understand the concept, but when it comes to the proof I'm not sure how I'm supposed to prove it. Is this an acceptable proof, or is it at least part of the final proof?</p> <p>$f$ and $g$ are surjective $\implies g \circ f$ surjective</p> <p>Known: $$\forall y\in Y, \exists x \in X \mid y = f(x)$$ $$\forall z\in Z, \exists y \in Y \mid z = g(y)$$</p> <p>Therefore: $$\forall y\in Y \texttt{ and }\forall z\in Z, \exists x \in X \mid y = f(x), z = g(y) = g(f(x)) = (g \circ f)(x)$$</p> <p>So I guess the final line should be:</p> <p>$$\forall z \in Z, \exists x \in X \mid z = (g \circ f)(x)$$</p> <p>P.S. This is my first post here, so hopefully the style isn't too bad.</p> <p>EDIT: What if I included an extra set that includes the members of Y that exist for z = g(y)? </p> <p>$$A = \{ y \in Y \mid z = g(y)\}$$ $$\forall y_A \in A, z \in Z, \exists x \in X \mid y = f(x) \texttt{ and } z = g(y_A) = g(f(x))$$</p> <p>$$\forall z \in Z, \exists x \in X \mid z = g(f(x))$$</p> <p>(Perhaps I'm not explaining it right)</p>
Magician
155,759
<p>It is not correct to say that $$\forall y\in Y \texttt{ and }\forall z\in Z, \exists x \in X \mid y = f(x), z = g(y) = g(f(x)) = (g \circ f)(x)$$ You cannot guarantee that $z=g(y),\forall y,z$. For example, let $X=Y=Z$ and $f,g$ be the identity map. It is clearly incorrect that $\forall y,z(\exists x(y=x,z=y=x))$.</p> <p>Instead, argue through inference rules of predicate logic, like <a href="https://en.wikipedia.org/wiki/Universal_instantiation" rel="nofollow">universal instantiation</a> and <a href="https://en.wikipedia.org/wiki/Existential_instantiation" rel="nofollow">existential instantiation</a>: $$(\forall z\in Z(\exists y \in Y (z = g(y)))) \\\Rightarrow (\exists y \in Y (z_v = g(y))) \\\Rightarrow z_v=g(y_0) $$ and <a href="https://en.wikipedia.org/wiki/Universal_instantiation" rel="nofollow">universal instantiation</a> and <a href="https://en.wikipedia.org/wiki/Existential_instantiation" rel="nofollow">existential instantiation</a>: $$(\exists x \in X (y_0 = f(x))) \\\Rightarrow (y_0 = f(x_0)) $$</p> <p>hence $z_v=g(f(x_0))$. Then <a href="https://en.wikipedia.org/wiki/Existential_generalization" rel="nofollow">existential generalization</a> and <a href="https://en.wikipedia.org/wiki/Universal_generalization" rel="nofollow">universal generalization</a>: $$(\exists x \in X (z_v=g(f(x)))) \\\Rightarrow (\forall z \in Z(\exists x \in X(z=g(f(x))))) $$</p> <p>This is basically a rigorous version of human intuition.</p>
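<p>The quantifier distinction at the start of this answer can be brute-forced on a small example (my own illustration, not part of the answer): with $X=Y=Z=\{0,1\}$ and $f=g=\mathrm{id}$, the over-strong claim $\forall y,z\,\exists x\,(y=f(x)\wedge z=g(y))$ fails, while the intended conclusion $\forall z\,\exists x\,(z=g(f(x)))$ holds.</p>

```python
# f = g = identity on {0, 1}.  "For all y, z there is an x with
# y = f(x) and z = g(y)" is false (take y = 0, z = 1: z = g(y) fails),
# but "for all z there is an x with z = g(f(x))" is true.
X = Y = Z = [0, 1]
f = g = lambda t: t

wrong = all(any(y == f(x) and z == g(y) for x in X) for y in Y for z in Z)
right = all(any(z == g(f(x)) for x in X) for z in Z)
print(wrong, right)
```

<p>This is the quantifier point: $x$ may be chosen after $z$, but no single $x$ need serve every pair $(y,z)$.</p>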
874,404
<p>I'm reading through Basic Algebra I (which I enjoy so far. Thoughts on this for self-studying?) and am having a difficult time proving surjection. I believe I understand the concept, but when it comes to the proof I'm not sure how I'm supposed to prove it. Is this an acceptable proof, or is it at least part of the final proof?</p> <p>$f$ and $g$ are surjective $\implies g \circ f$ surjective</p> <p>Known: $$\forall y\in Y, \exists x \in X \mid y = f(x)$$ $$\forall z\in Z, \exists y \in Y \mid z = g(y)$$</p> <p>Therefore: $$\forall y\in Y \texttt{ and }\forall z\in Z, \exists x \in X \mid y = f(x), z = g(y) = g(f(x)) = (g \circ f)(x)$$</p> <p>So I guess the final line should be:</p> <p>$$\forall z \in Z, \exists x \in X \mid z = (g \circ f)(x)$$</p> <p>P.S. This is my first post here, so hopefully the style isn't too bad.</p> <p>EDIT: What if I included an extra set that includes the members of Y that exist for z = g(y)? </p> <p>$$A = \{ y \in Y \mid z = g(y)\}$$ $$\forall y_A \in A, z \in Z, \exists x \in X \mid y = f(x) \texttt{ and } z = g(y_A) = g(f(x))$$</p> <p>$$\forall z \in Z, \exists x \in X \mid z = g(f(x))$$</p> <p>(Perhaps I'm not explaining it right)</p>
Andrew D'Addesio
61,123
<p>You write:</p> <p>$$ \text{Therefore, } \forall (y,z) \in Y \times Z, \exists x \in X \mid y = f(x), z = g(y) = g(f(x)) =: (g \circ f)(x). $$</p> <p>What this is saying is that if $f : X \to Y$ and $g : Y \to Z $ are surjective, then there always exists an $x \in X$ such that the calculation $(f(x), g(f(x)))$ will lead you to any desired $(y,z)$ combination in $Y \times Z$; however, this is incorrect, and here is a simple counterexample:</p> <p>$$ X := Y := Z := \{0,1\} \\ f(x) := \begin{cases}1 &amp; x = 0 \\ 0 &amp; \text{otherwise} \end{cases} \\ g(x) := \begin{cases}1 &amp; x = 1 \\ 0 &amp; \text{otherwise} \end{cases} \\ g(f(x)) = \begin{cases} g(1) = 1 &amp; x = 0 \\ g(0) = 0 &amp; \text{otherwise} \end{cases} =: f(x). $$</p> <p>All possible $(y,z)$ tuples selected from $Y \times Z$ are $(0,0), (0,1), (1,0), (1,1)$. But the only ones we can reach from the calculation $(f(x), g(f(x)))$ for $x \in X$ are $(0,0)$ and $(1,1)$ (for $x = 0$ and $x = 1$ respectively).</p> <p>This wasn't our goal, anyway. We would like to prove that for any surjective functions $f : X \to Y$ and $g : Y \to Z $, there always exists an $x \in X$ such that the calculation $g(f(x))$ will lead you to any desired $z$ in $Z$ (in other words, $(g \circ f) : X \to Z$ is also surjective).</p> <p>If you remove that line from your proof, then you already have a decent proof, but somewhat tautological (like "$2+2=4$ because $2+2=4$") unless you back it up with some lower level logic. Here's a proof I came up with:</p> <p>$$ \begin{align*} &amp; \forall y \in Y, \exists x \in X \mid f(x) = y, &amp; (1) \\ &amp; \forall z \in Z, \exists y \in Y \mid g(y) = z. &amp; (2) \\ \\ &amp; \text{Therefore, } \\ &amp; \forall z \in Z, \exists y \in Y \mid (\exists x \in X \mid f(x) = y), \, g(y) = z. &amp; (3) \\ &amp; \forall z \in Z, \exists y \in Y \mid (\exists x \in X \mid f(x) = y, \, g(f(x)) = z). 
&amp; (4) \\ &amp; \forall z \in Z, \exists x \in X \mid (\exists y \in Y \mid f(x) = y, \, g(f(x)) = z). &amp; (5) \\ &amp; \forall z \in Z, \exists x \in X \mid (\exists y \in Y \mid f(x) = y), \, g(f(x)) = z. &amp; (6) \\ &amp; \forall z \in Z, \exists x \in X \mid (g(f(x)) = z). &amp; (7) \\ \end{align*} $$</p> <p>(3) follows from (1) and (2) by universal instantiation ("(1) holds for all $y$ in $Y$. Therefore, it holds for this particular $y$ in $Y$.").</p> <p>(4) follows from (3) by existential generalization ("$g(y) = z$. Therefore, there exists some value of $x$ in $X$ such that $g(y) = z$.") and equality ("$y = f(x)$. Therefore, $g(y) = g(f(x))$.").</p> <p>(5) follows from (4) by quantifier rearrangement ("The existence of $x$ does not depend on the existence of $y$, nor vice versa. Therefore the quantifiers can be swapped."). I'm not sure if this has a more formal name or if it can be proved using additional logic, but here are some links I found that suggest that it's a valid mathematical argument: <a href="https://math.stackexchange.com/questions/201051/is-the-order-of-universal-existential-quantifiers-important">[1]</a> <a href="http://www.inf.ed.ac.uk/teaching/courses/dmmr/slides/Ch1b.pdf" rel="nofollow">[2]</a></p> <p>(6) follows from (5) by existential instantiation ("$g(f(x)) = z$ for some value of $y$, which we can call $y_0$. The truth of the statement $g(f(x)) = z$ does not depend on the value of $y_0$. Therefore, $g(f(x)) = z$ (in general).").</p> <p>(7) is a weaker version of (6), and it follows by simplification ("P and Q. Therefore, Q.").</p>
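<p>The two-element counterexample from the beginning of this answer is small enough to enumerate exhaustively; this check (my own addition, not part of the answer) confirms that only $(0,0)$ and $(1,1)$ arise as $(f(x), g(f(x)))$, even though the composition is still surjective onto $Z$:</p>

```python
# f swaps 0 and 1, g is the identity on {0, 1}; both are surjective,
# yet (f(x), g(f(x))) only ever hits (1, 1) and (0, 0) -- not all of Y x Z.
X = [0, 1]
f = {0: 1, 1: 0}
g = {0: 0, 1: 1}

reachable = {(f[x], g[f[x]]) for x in X}
print(sorted(reachable))          # [(0, 0), (1, 1)]
assert reachable == {(0, 0), (1, 1)}
# The composition itself is still surjective onto Z:
assert {g[f[x]] for x in X} == {0, 1}
```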
3,853,509
<blockquote> <p>prove <span class="math-container">$$\sum_{cyc}\frac{a^2}{a+2b^2}\ge 1$$</span> holds for all positives <span class="math-container">$a,b,c$</span> when <span class="math-container">$\sqrt{a}+\sqrt{b}+\sqrt{c}=3$</span> or <span class="math-container">$ab+bc+ca=3$</span></p> </blockquote> <hr /> <p><strong>Background</strong> Taking <span class="math-container">$\sqrt{a}+\sqrt{b}+\sqrt{c}=3$</span>: this was left as an exercise to the reader in the book 'Secrets in Inequalities'. It comes under the section 'Cauchy Reverse Technique', i.e. the sum is rewritten as <span class="math-container">$$\sum_{cyc}\left(a- \frac{2ab^2}{a+2b^2}\right)\ge \sum_{cyc}a-\frac{2}{3}\sum_{cyc}{(ab)}^{2/3},$$</span> which is true by AM-GM (<span class="math-container">$a+b^2+b^2\ge 3{(ab^4)}^{1/3}$</span>).</p> <p>By the QM-AM inequality, <span class="math-container">$$\sum_{cyc}a\ge \frac{{ \left(\sum \sqrt{a} \right)}^2}{3}=3,$$</span></p> <p>so we are left to prove that <span class="math-container">$$\sum_{cyc}{(ab)}^{2/3}\le 3.$$</span> But I am not able to prove this. Even the case when <span class="math-container">$ab+bc+ca=3$</span> seems difficult to me.</p> <p>Please note I am looking for a solution using this Cauchy reverse technique and AM-GM only.</p>
saulspatz
235,128
<p>Instead of calculating derivatives, try to restate it in terms of a series you know. <span class="math-container">$$\frac1{2+3x^2}=\frac12\frac1{1+\frac{3x^2}2}$$</span> Now if you set <span class="math-container">$y=\sqrt\frac32x$</span> you'll see an expression with a familiar series that has radius of convergence <span class="math-container">$1$</span>, so you just have to restate <span class="math-container">$|y|&lt;1$</span> in terms of <span class="math-container">$x$</span>.</p>
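<p>As a quick numerical sanity check (a sketch of my own, not part of the original answer): the rewriting above is a geometric series with common ratio $-\tfrac{3}{2}x^2$, so it converges exactly when $|x| &lt; \sqrt{2/3} \approx 0.816$. In Python:</p>

```python
# 1/(2 + 3x^2) = (1/2) * sum_{n>=0} (-(3/2) * x^2)^n, a geometric series
x = 0.5                      # inside the radius of convergence sqrt(2/3)
direct = 1.0 / (2.0 + 3.0 * x * x)
partial, term = 0.0, 0.5     # leading term is 1/2
for _ in range(60):
    partial += term
    term *= -1.5 * x * x     # multiply by the common ratio -(3/2) x^2
assert abs(partial - direct) < 1e-12
```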
1,055,832
<p>How can one prove distributivity of a <a href="http://ncatlab.org/nlab/show/Heyting+algebra" rel="nofollow"> Heyting Algebra</a> via the <a href="http://ncatlab.org/nlab/show/Yoneda+lemma" rel="nofollow"> Yoneda lemma</a>?</p> <p>I'm able to prove it using the Heyting algebra property $(x \wedge a) \leq b$ if and only if $x \leq (a \Rightarrow b)$. But using the Yoneda lemma, I'm unable to get it. Help greatly appreciated. Thank you.</p>
Sina Hazratpour
278,320
<p>Use the fullness and faithfulness of the Yoneda embedding to prove that every cartesian closed category C with finite coproducts must be distributive, i.e. $ a × (b + c)$ and $(a × b) + (a × c)$ are isomorphic objects in C. </p>
4,550,991
<p>This question is taken from an early round of a Norwegian national math competition where you have on average 5 minutes to solve each question.</p> <p>I tried to solve the question by writing every number with four digits, with leading zeros where needed. For example 0001 and 0101 would be the numbers 1 and 101. I then divided the different numbers into four groups based on how many times the digit 1 appeared in the number. I called these groups 1, 2, 3 and 4. 0001 would then belong to group 1 and 0101 to group 2. I first found out in how many ways I could place the digit 1 in each group, then multiplied it by the number of combinations with the other possible eight digits (0,2,3,5,6,7,8,9). This would be the number of combinations for each group, and I lastly multiplied it with the number of times 1 appeared in the number. This is done for all of the groups below:</p> <p><span class="math-container">$\binom{4}{1}\cdot8^3\cdot1$</span> times in group 1</p> <p><span class="math-container">$\binom{4}{2}\cdot8^2\cdot2$</span> times in group 2</p> <p><span class="math-container">$\binom{4}{3}\cdot8^1\cdot3$</span> times in group 3</p> <p><span class="math-container">$\binom{4}{4}\cdot8^0\cdot4$</span> times in group 4</p> <p>The sum of all these calculations will be 2916, the correct number of times 1 appears (I think). Is this calculation/way of thinking correct? And is there a more efficient way to do it?</p>
Jaap Scherphuis
362,967
<p>Here is what I would think is the quickest way to do it.</p> <p>As you already wrote, use leading zeroes so that all numbers have 4 digits. You can even include <span class="math-container">$0000$</span> to make things easier as that will not affect the result.</p> <p>Let's count how many times a 1 appears in the first 'thousands' position. The other three digits could be anything except 4, so there are <span class="math-container">$9\cdot9\cdot9=729$</span> such numbers. There will sometimes be 1s in those other positions, but we are not counting them just yet. This shows that the digit 1 appears exactly <span class="math-container">$729$</span> times in the first position.</p> <p>Using the same argument, the number of times a 1 appears in each of the other digit positions is also <span class="math-container">$729$</span>. There are four digit positions, so this leads to a total of <span class="math-container">$4\cdot729=2916$</span> times that the digit 1 appears.</p>
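<p>For anyone who wants to double-check this by brute force, here is a short Python sketch (my own addition; it assumes, as both computations above do, that the underlying competition problem counts occurrences of the digit 1 among the numbers from 0 to 9999 that contain no digit 4):</p>

```python
# count every occurrence of the digit 1 in 0000..9999,
# skipping any number that contains the digit 4
total = 0
for n in range(10000):
    s = str(n).zfill(4)      # pad with leading zeros, as in the question
    if "4" not in s:
        total += s.count("1")
assert total == 2916         # agrees with 4 * 729 and with the group-by-group sum
```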
3,060,250
<p>Let <span class="math-container">$f : \mathbb{R} \to \mathbb{R}$</span> be differentiable. Assume that <span class="math-container">$1 \le f(x) \le 2$</span> for <span class="math-container">$x \in \mathbb{R}$</span> and <span class="math-container">$f(0) = 0$</span>. Prove that <span class="math-container">$x \le f(x) \le 2x$</span> for <span class="math-container">$x \ge 0$</span>.</p>
S. Senko
361,834
<p>Presuming that you meant to write <span class="math-container">$1 \le f'(x) \le 2$</span> rather than <span class="math-container">$1 \le f(x) \le 2$</span>, you can prove it as follows. First, fix an arbitrary non-zero <span class="math-container">$x \in \Bbb{R}$</span> (the result is obvious if <span class="math-container">$x=0$</span>) and consider the secant line between <span class="math-container">$0$</span> and <span class="math-container">$x$</span>. This has slope <span class="math-container">$\frac{f(x)-f(0)}{x-0}=\frac{f(x)}{x}$</span>. Thus, by MVT, there is some <span class="math-container">$c \in (0, x)$</span> such that <span class="math-container">$\frac{f(x)}{x} = f'(c)$</span>. But then, because <span class="math-container">$1 \le f'(c) \le 2$</span>, we must have <span class="math-container">$1 \le \frac{f(x)}{x} \le 2$</span>. Multiplying by <span class="math-container">$x$</span> gives the desired result.</p>
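<p>A small numerical illustration of the squeeze (my own addition, with a hypothetical example function): take $f(x) = 1.5x + 0.2\sin x$, so that $f(0)=0$ and $f'(x) = 1.5 + 0.2\cos x$ lies in $[1.3, 1.7] \subseteq [1,2]$; then $x \le f(x) \le 2x$ holds at every sampled point:</p>

```python
from math import sin

def f(x):
    # f(0) = 0 and f'(x) = 1.5 + 0.2*cos(x) stays inside [1, 2]
    return 1.5 * x + 0.2 * sin(x)

for i in range(1, 201):
    x = 0.05 * i             # sample points in (0, 10]
    assert x <= f(x) <= 2 * x
```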
58,926
<p>Is it well known what happens if one blows up $\mathbb{P}^2$ at points in non-general position (i.e. 3 points on a line, 6 on a conic, etc.)? Are these objects isomorphic to something nice? </p>
known google
392
<p>I'll just add to Francesco's answer by saying that general position of the points on the plane is equivalent to ampleness of the anticanonical sheaf $\omega_X^{\otimes -1}$.</p> <p>The key observation is that on a del Pezzo surface, an irreducible negative curve ($C^2 &lt; 0$) must be an exceptional curve (i.e. $C^2 = C\cdot K_X = -1$). This follows from the adjunction formula and the Nakai-Moishezon criterion. </p> <p>If you blow up 3 collinear points, then the strict transform of the line containing these points will have self-intersection $-2$, which is not allowed by the key observation. Similarly for the strict transform of a conic through 6 blown-up points. There is one more condition you have to impose: if you blow up 8 points, they cannot lie on a singular cubic with one of the points at the singularity.</p> <p>If you relax the requirement that $\omega_X^{\otimes -1}$ be ample to just big and nef, then you can have some degenerate point configurations: this time $C^2 = -2$ is allowed, so 3 collinear points are OK. However, 4 collinear points would not be OK.</p>
3,335,060
<blockquote> <p>The number of possible continuous <span class="math-container">$f(x)$</span> defined on <span class="math-container">$[0,1]$</span> for which <span class="math-container">$I_1=\int_0^1 f(x)dx = 1,~I_2=\int_0^1 xf(x)dx = a,~I_3=\int_0^1 x^2f(x)dx = a^2 $</span> is/are</p> <p><span class="math-container">$(\text{A})~~1~~~(\text{B})~~2~~(\text{C})~~\infty~~(\text{D})~~0$</span></p> <p>I have tried the following: Applying ILATE (integration by parts) - nothing useful comes up, only further complications like the primitive of the primitive of f(x). No use of the given information either. Using the rule <span class="math-container">$$ \int_a^b g(x)dx = \int_a^b g(a+b-x)dx$$</span> I solved all three constraints to get <span class="math-container">$$ \int_0^1 x^2f(1-x)dx = (a-1)^2 \\ \text{or} \int_0^1 x^2[f(1-x)+f(x)]dx = (a-1)^2 +a^2 \\$$</span> Then I did the following - if f(x) + f(1-x) is constant, solve with the constraints to find possible solutions. Basically I was looking for any solutions where the function also follows the rule that f(x) + f(1-x) is constant. Solving with the other constraints, I obtained that f(x) will only follow all four constraints if the constant [= f(x) + f(1-x)] is 2, and a is <span class="math-container">$\frac{\sqrt{3}\pm1}{2}$</span>.</p> </blockquote>
Green05
650,645
<p>How about you simply use the Taylor series for <span class="math-container">$e^x$</span>? <span class="math-container">$$e^x = \sum_{r=0}^\infty \frac {x^r}{r!} \\ e^3 \gt \sum_{r=0}^{9}\frac{3^r}{r!} = 1+3+\frac{9}{2}+\frac{9}{2}+\frac{27}{8}+\frac{81}{40}+\frac{81}{80}+\frac{243}{560}+\frac{729}{4480}+\frac{243}{4480} \approx 20.0634 \gt 20 $$</span> Basically, add up the first ten terms of the Taylor series of e^x with x=3.</p>
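<p>The partial sum is quick to confirm in Python (my own addition):</p>

```python
from math import factorial

# first ten terms of the Taylor series of e^x evaluated at x = 3
partial = sum(3**r / factorial(r) for r in range(10))
assert 20 < partial < 20.07      # partial sum ≈ 20.0634, already above 20
```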
3,981,458
<p>A star graph <span class="math-container">$S_{k}$</span> is the complete bipartite graph <span class="math-container">$K_{1,k}$</span>. One bipartition contains 1 vertex and the other bipartition contains <span class="math-container">$k$</span> vertices. <a href="https://en.wikipedia.org/wiki/Star_(graph_theory)" rel="noreferrer">Wikipedia Article</a></p> <p>A graph G is balanced if the average degree of every subgraph H is less than or equal to the average degree of G. In other words <span class="math-container">$\bar{d}(H) \leq \bar{d}(G)$</span>.</p> <p>Show that a star graph is balanced.</p> <p>I have been able to prove this, however in an extremely ugly and long way using many different cases. I was wondering if there is any easy way to prove this. Any ideas?</p>
qualcuno
362,866
<p>By direct computation,</p> <p><span class="math-container">$$ \bar{d}(S_k) = \frac{k + \overbrace{1 + \cdots + 1}^{k}}{k+1} = 2k/(k+1). $$</span></p> <p>Let's enumerate the vertices of <span class="math-container">$S_k$</span> as <span class="math-container">$v_0, \ldots, v_k$</span> with <span class="math-container">$d(v_0) = k$</span> and <span class="math-container">$d(v_i) = 1$</span> otherwise.</p> <p>Pick a subgraph <span class="math-container">$H \leq S_k$</span>. Suppose that there are <span class="math-container">$v,w \in H$</span> which are not connected in <span class="math-container">$H$</span> but which are in fact connected in <span class="math-container">$S_k$</span>. Then adding such an edge to <span class="math-container">$H$</span> can only <em>increase</em> its average degree. Thus, for our objective, this means that we can wlog assume our subgraph <span class="math-container">$H$</span> is full.</p> <p>There are only two possibilities: either <span class="math-container">$H$</span> contains <span class="math-container">$v_0$</span>, and is isomorphic to <span class="math-container">$S_j$</span> for <span class="math-container">$j \leq k$</span>, or <span class="math-container">$H$</span> has average degree zero.</p> <p>Hence, we are left with showing that <span class="math-container">$2j/(j+1)$</span> is an increasing sequence: indeed,</p> <p><span class="math-container">$$ 2j/(j+1) \leq 2k/(k+1) \iff 2jk+2j \leq 2kj+2k \iff j \leq k. $$</span></p>
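<p>Since the argument reduces everything to full (induced) subgraphs, a brute-force check for a small star is cheap; this Python sketch (my own addition) verifies the claim for $k=6$:</p>

```python
from itertools import combinations

def avg_degree(vertices, edges):
    # average degree = 2|E| / |V|, with the convention 0 for no vertices
    return 2 * len(edges) / len(vertices) if vertices else 0.0

k = 6
V = range(k + 1)                      # vertex 0 is the hub v0
E = [(0, i) for i in range(1, k + 1)]
d_star = avg_degree(V, E)             # = 2k/(k+1)
assert abs(d_star - 2 * k / (k + 1)) < 1e-12

# every induced subgraph has average degree at most that of the star itself
for r in range(1, k + 2):
    for S in combinations(V, r):
        sub = [e for e in E if e[0] in S and e[1] in S]
        assert avg_degree(S, sub) <= d_star + 1e-12
```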
2,620,032
<p>Find the derivative of $y=(\tan (x))^{\log (x)}$</p> <p>I thought of using the power rule that: $$\dfrac {d}{dx} u^n = n\,u^{n-1}\,\dfrac {du}{dx}$$ Realizing that the exponent $\log(x)$ is not constant, I could not use that. </p>
user326210
326,210
<p>You can rewrite $y = (\tan{(x)})^{\log{(x)}}$ as $$y=\exp\left(\log{(\tan{(x)})}\log(x)\right)$$ using the rule $a^b = \exp(\log(a))^b = \exp{(b\cdot\log{(a)})}$. </p> <p>In this form, you can find the derivative of $y$ using the chain rule and product rule.</p>
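<p>Carrying that out (my own computation, not part of the original answer) gives $y' = (\tan x)^{\log x}\left(\frac{\log \tan x}{x} + \frac{\log x}{\sin x \cos x}\right)$, which a central-difference check confirms numerically:</p>

```python
from math import tan, log, sin, cos

def y(x):
    return tan(x) ** log(x)          # valid where tan(x) > 0

def dy(x):
    # y' = y * d/dx[log(x) * log(tan(x))], by the chain and product rules
    return y(x) * (log(tan(x)) / x + log(x) / (sin(x) * cos(x)))

x0, h = 1.2, 1e-6
numeric = (y(x0 + h) - y(x0 - h)) / (2 * h)   # central difference
assert abs(numeric - dy(x0)) < 1e-4
```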
127,225
<p>I got stuck solving the following problem:</p> <pre><code>Table[Table[ Table[ g1Size = x; g2Size = y; vals = FindInstance[(a1 - a2) - (b1 - b2) == z &amp;&amp; a1 + b1 == g1Size &amp;&amp; a2 + b2 == g2Size &amp;&amp; a1 + a2 == g1Size &amp;&amp; b1 + b2 == g2Size &amp;&amp; a1 &gt; 0 &amp;&amp; a2 &gt; 0 &amp;&amp; b1 &gt; 0 &amp;&amp; b2 &gt; 0, {a1, a2, b1, b2}, Integers, 3]; aa1 = a1 /. vals; aa2 = a2 /. vals; bb1 = b1 /. vals; bb2 = b2 /. vals; {g1Size, g2Size, z, Flatten@{aa1, aa2, bb1, bb2}} , {z, 0, 10}], {x, 1, 10}], {y, 1, 10}] </code></pre> <p>I want to loop through different values of g1Size, g2Size and z and find the first solution to the system of equations. As soon as a solution for a combination of g1Size, g2Size and z was found, I want to extract the values for a1,a2,b1,b2 and continue with the next loop. In other words, only print the values when vals is not empty and then stop the z-loop and switch to the next values of x and y.</p> <p>But my output is like this:</p> <pre><code>{{{{1, 1, 0, {a1, a2, b1, b2}}, {1, 1, 1, {a1, a2, b1, b2}}, {1, 1, 2, {a1, a2, b1, b2}}, {1, 1, 3, {a1, a2, b1, b2}}, {1, 1, 4, {a1, a2, b1, b2}}, {1, 1, 5, {a1, a2, b1, b2}}, {1, 1, 6, {a1, a2, b1, b2}}, {1, 1, 7, {a1, a2, b1, b2}}, {1, 1, 8, {a1, a2, b1, b2}}, {1, 1, 9, {a1, a2, b1, b2}}, {1, 1, 10, {a1, a2, b1, b2}}} </code></pre> <p>printing the names for a1,a2,b1,b2 when no solution was found.</p> <p>My Mathematica coding is a bit rusty and this code seems far from elegant. And I hope it is clear what I mean :).</p>
mikado
36,788
<p>The (edited) OP gives the following implementation of the time-consuming part of the calculation. (I will ignore the problem of finding the maximum as trivial).</p> <pre><code>refimplementation = Coefficient[Times @@ (1 + #1 x) // Expand, x^#2] &amp;; </code></pre> <p>I offer the following slightly different implementation</p> <pre><code>serimplementation = SeriesCoefficient[Times @@ (1 + #1 x), {x, 0, #2}] &amp;; </code></pre> <p>For the following test case</p> <pre><code>SeedRandom[1211] matsize = 100; subsize = 30; mat = RandomInteger[{-10, 10}, {matsize, matsize}]; </code></pre> <p>this implementation is better than 5 times faster on my PC, and gives the same answer</p> <pre><code>RepeatedTiming[v1 = refimplementation[mat, subsize];] RepeatedTiming[v2 = serimplementation[mat, subsize];] SameQ[v1, v2] (* {0.0871, Null} *) (* {0.016, Null} *) (* True *) </code></pre>
830,111
<p>We have the following set of lines: $$L_1: \frac{x-2}{1}=\frac{y-3}{-2}=\frac{z-1}{-3}$$ $$L_2:\frac{x-3}{1}=\frac{y+4}{3}=\frac{z-2}{-7}$$</p> <p>This leads to the following parametric equations: $$L_1:x=t+2,\space y=-2t+3,\space z=-3t+1$$ $$L_2: x=s+3,\space y=3s-4,\space z=-7s+2$$ The $x$ line looked pretty simple, so I did this: $$t+2=s+3$$$$t=s+1$$</p> <p>Then I simply substituted this into the Y equations $$-2(s+1)+3=3s-4$$ which yields $$s=5,\space t=6$$ That, in turn, gives me: $$x_1=8,x_2=8 \space z_1=-17,z_2=-38$$ forcing me to conclude the lines are skew.</p> <p><em>The lines are not skew</em> - there is an intersection at point $(4,-1,-5)$.</p> <p>Where is the flaw in my analysis?</p> <p>I suspect taking $t=s+1$ is insufficiently bounded to use in this system? If so, how do we prove $x$ equations is sufficient in an $n$-dimensional system of lines to still apply to the system?</p> <p><strong>EDIT:</strong> $5=5s$ actually does not mean that $s=5$. </p>
user2357112
91,416
<p>Your error is in going from</p> <p>$$-2(s+1)+3=3s-4$$</p> <p>to</p> <p>$$s=5,\space t=6$$</p> <p>I don't know how you got that, but it should be $s=1$, $t=2$. Maybe you dropped a term or misplaced a sign while solving the equation.</p>
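<p>Redoing the substitution correctly (a quick check of my own, using the parametrizations from the question) gives $s=1$, $t=2$ and a genuine intersection at $(4,-1,-5)$:</p>

```python
# t + 2 = s + 3  =>  t = s + 1; substitute into -2t + 3 = 3s - 4:
# -2(s + 1) + 3 = 3s - 4  =>  1 - 2s = 3s - 4  =>  5 = 5s  =>  s = 1
s = 1
t = s + 1
P1 = (t + 2, -2 * t + 3, -3 * t + 1)   # point on L1
P2 = (s + 3, 3 * s - 4, -7 * s + 2)    # point on L2
assert P1 == P2 == (4, -1, -5)         # the z-coordinates agree: not skew
```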
1,831,191
<p>I am confused about the following Theorem:</p> <p>Let <span class="math-container">$f: I \to \mathbb{R}^n$</span>, <span class="math-container">$a \in I$</span>. Then the function <span class="math-container">$f$</span> is differentiable at <span class="math-container">$a$</span> if and only if there exists a function <span class="math-container">$\varphi: I \to \mathbb{R}^n$</span> that is continuous in <span class="math-container">$a$</span>, and such that <span class="math-container">$f(x) - f(a) = (x - a)\varphi(x)$</span>, for all <span class="math-container">$x \in I$</span>; furthermore, <span class="math-container">$\varphi(a) = f'(a)$</span>.</p> <p>I understand the proof of this theorem, but something confuses me. Doesn't this theorem state that the derivative of a function in a point is always continuous in that point, since <span class="math-container">$f'(a) = \varphi(a)$</span> is continuous in <span class="math-container">$a$</span>? This would mean that the derivative of a function is always continuous on the domain of the function, but I have encountered counterexamples. I have probably misinterpreted something; any help would be welcome.</p>
anomaly
156,999
<p>The theorem is simply stating that the function \begin{align*} \varphi(x) &amp;= \begin{cases} \frac{f(x) - f(a)}{x - a} &amp; \text{if}\;x\not = a; \\ f'(a) &amp; \text{if}\;x = a \end{cases} \end{align*} is continuous. And it clearly is; the only point to check is $x = a$, and the condition $\lim_{x\to a} \varphi(x) = \varphi(a)$ is exactly the definition of $f'(a)$. The theorem is not claiming that $f = \varphi$ everywhere on $I$.</p> <p>One of the classic examples of a differentiable function $f$ with $f'$ not continuous is $f(x) = x^2\sin (1/x)$ (with $f(0) = 0$). The derivative \begin{align*} f'(x) &amp;= \begin{cases} 2x \sin (1/x) - \cos (1/x) &amp; \text{if}\; x \not = 0; \\ 0 &amp; \text{if}\; x = 0 \end{cases} \end{align*} exists everywhere, but it is not continuous at $0$.</p>
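<p>To see the failure of continuity concretely (my own addition): along the points $x_n = 1/(n\pi)$ the term $2x\sin(1/x)$ vanishes while $\cos(1/x) = (-1)^n$, so $f'(x_n)$ keeps alternating near $\pm 1$ as $x_n \to 0$:</p>

```python
from math import sin, cos, pi

def fprime(x):
    # derivative of x^2 sin(1/x) away from 0, and f'(0) = 0
    return 2 * x * sin(1 / x) - cos(1 / x) if x != 0 else 0.0

# f'(1/(n*pi)) = -(-1)^n up to a vanishing term, so f' oscillates near 0
vals = [fprime(1 / (n * pi)) for n in range(100, 104)]
assert max(vals) > 0.9 and min(vals) < -0.9
```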
227,311
<p>It is known that the roots of the Chebyshev polynomials of the second kind, denoted by $U_n(x)$, are in the interval $(-1,1)$. I have noticed, by looking at low values of $n$, that the roots of $(1-x)U_n(x)+U_{n-1}(x)$ are in the interval $(-2,2)$. However, I don't have a clear idea how to start proving this; could anyone help me please?</p> <p>PS: I have asked this question on StackExchange and set a bounty, but still have not received any answer for it: <a href="https://math.stackexchange.com/q/1583588/60099">see this link</a> </p>
John Jiang
4,923
<p>My previous answer shows that if all roots are real, then they must be contained with $[-1,\sqrt{2})$. To show all roots are real, use <a href="https://books.google.com/books?id=FzFEEVO3PXYC&amp;pg=PA199&amp;lpg=PA199#v=onepage&amp;q&amp;f=false" rel="nofollow">lemma 6.3.9 from the book Analytic Theory of Polynomials</a>, which states that </p> <p>If $P,Q$ are monic polynomials of degree $n,m$ respectively, with $n \ge 3$ and $m &lt; n$, and $R = (z-\alpha)P - \beta Q$, with $\alpha, \beta \in \mathbb{R}$ and $\beta &gt;0$. Then $P,Q$ have strictly interlacing zeros iff $P,R$ do. (Personally I don't see why $n = 2$ fails.)</p> <p>We know that consecutive Chebyshev polynomials have strictly interlacing zeros by Proposition 6.3.10 (due to recurrence relation), so the conditions are satisfied. Note the condition on $m &lt; n$ isn't really much more general, since interlacing implies $m = n-1$.</p> <p>Real stability is a powerful tool. It also appears in the proof of Gurvits' lower bound for permanent and the proof of Kadison-Singer conjecture.</p>
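<p>As a concrete sanity check (my own addition), one can count sign changes of $p_n(x) = (1-x)U_n(x) + U_{n-1}(x)$ on a fine grid; for $n = 3$ the degree-$4$ polynomial changes sign $4$ times inside $(-2, 2)$, so all of its roots are real and lie there (in fact within $[-1, \sqrt{2})$, consistent with the bound above):</p>

```python
def U(n, x):
    # Chebyshev polynomials of the second kind via the three-term recurrence
    a, b = 1.0, 2.0 * x       # U_0, U_1
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, 2.0 * x * b - a
    return b

def p(n, x):
    return (1 - x) * U(n, x) + U(n - 1, x)

n = 3
xs = [-2 + 4 * i / 4000 for i in range(4001)]      # grid step 0.001 on [-2, 2]
signs = [p(n, x) > 0 for x in xs]
changes = sum(s != t for s, t in zip(signs, signs[1:]))
assert changes == n + 1   # degree n+1 with n+1 sign changes: all roots real
```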
1,278,329
<p>Solve the recurrence $a_n = 4a_{n−1} − 2 a_{n−2}$</p> <p>I'm not sure how to solve this recurrence, as I don't know which values to plug in to solve it recursively.</p>
Demosthene
163,662
<p>You can make an "educated guess" and propose the following ansatz: $a_n=p^n$. Your recurrence relation now has the following characteristic equation: $$p^2-4p+2=0\Longleftrightarrow (p-2-\sqrt{2})(p-2+\sqrt{2})=0$$ Therefore there are two roots, at $p=2\pm\sqrt{2}$, and we get: $$a_n=\alpha\left(2+\sqrt{2}\right)^n+\beta\left(2-\sqrt{2}\right)^n$$ where $\alpha$ and $\beta$ are constants that can be obtained from the initial conditions. In particular, we have: $$a_0=\alpha+\beta$$ $$a_1=\alpha\left(2+\sqrt{2}\right)+\beta\left(2-\sqrt{2}\right)$$ This is a system of two equations with two unknowns, that we can solve (it's a bit tedious though) to get: $$\beta=\dfrac{a_0\left(2+\sqrt{2}\right)-a_1}{2\sqrt{2}}$$ $$\alpha=a_0-\beta$$ Thus, the explicit expression for $a_n$ is: $$a_n=\alpha\left(2+\sqrt{2}\right)^n+\beta\left(2-\sqrt{2}\right)^n$$ with $\alpha$ and $\beta$ defined above. (And you can check that it indeed verifies the recurrence relation.)</p>
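<p>A quick Python check (my own addition) that the closed form reproduces the recurrence for sample initial conditions:</p>

```python
from math import sqrt

def a_closed(n, a0, a1):
    # a_n = alpha*(2+sqrt(2))^n + beta*(2-sqrt(2))^n, with alpha, beta as above
    r1, r2 = 2 + sqrt(2), 2 - sqrt(2)
    beta = (a0 * r1 - a1) / (2 * sqrt(2))
    alpha = a0 - beta
    return alpha * r1**n + beta * r2**n

# compare against the recurrence itself
a0, a1 = 1.0, 3.0
seq = [a0, a1]
for _ in range(2, 12):
    seq.append(4 * seq[-1] - 2 * seq[-2])
assert all(abs(a_closed(n, a0, a1) - seq[n]) < 1e-6 * max(1.0, abs(seq[n]))
           for n in range(12))
```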
205,049
<p><a href="http://1d4chan.org/wiki/Most_Excellent_Adventure" rel="noreferrer">Most Excellent Adventure</a> is a home brew roleplaying game system based on the Bill &amp; Ted Films, <em>plays gnarly air guitar riff</em>. </p> <p>In this game system, when you draw from your dice pool you need to connect the results as a phone number on your phone pad:</p> <p><img src="https://i.stack.imgur.com/JxV2Y.png" alt="enter image description here"></p> <p>Here one person has rolled 4, 2, 8, 3, 8, and 2, dialling 3-2-2-4-8-8. The other 7, 4, 6, 3, 9, and 4, only dialling 4-4-7 or 3-6-9. And the longest number wins.</p> <p>How do I work out my chance for success (or chance of a certain phone number length) from this system?</p> <p>I've tried enumerating the chance of getting a two-digit number depending on which number you rolled first (each of the first ones is $1/10$) and I get:</p> <ol> <li>$4/10$</li> <li>$6/10$</li> <li>$4/10$</li> <li>$6/10$</li> <li>$9/10$</li> <li>$6/10$</li> <li>$5/10$</li> <li>$7/10$</li> <li>$5/10$</li> <li>$4/10$</li> </ol> <p>But then, how do I follow each different 'route' of probabilities? I could imagine drawing a probability tree with ten branches and up to 12 levels, but that seems excessive. I could draw up a table (with blanks for non-telephonable combinations), but that would get hard to follow after the first table or so.</p> <p>I've tried to consider combinatorics, but I've gotten myself confused over nPr and nCr notation and not getting the right numbers in there. Is there an easier way to calculate the probabilities? </p>
Hagen von Eitzen
39,174
<p>It is remarkable, though maybe not trivial, that for almost every connected set of phone keys, there exists a Hamiltonian path through the digits (that is, you can walk from digit to adjacent digit and reach all digits exactly once). By simply repeating multiple digits, you can thus dial a number (almost) whenever the set of keys is connected.</p> <p>There are a few exceptions to the rule above: </p> <ul> <li>If you have $4, 6, 8, 0$ but none of $2, 5, 7, 9$, there is no Hamiltonian path. You can win only if you have two $8$s.</li> <li>If you have $1, 3, 5$ and at least one of $7,8,9$, but none of $2,4,6$, a similar situation occurs. You may need (at least) two $5$s.</li> <li>The previous case rotated by 90° or 180° or 270° around the $5$ key.</li> </ul> <p>In summary: If the set of keys is connected, you have <em>almost</em> won. However, for a precise analysis, I am afraid that a computer-based enumeration of all possibilities is the only way to go.</p>
205,049
<p><a href="http://1d4chan.org/wiki/Most_Excellent_Adventure" rel="noreferrer">Most Excellent Adventure</a> is a home brew roleplaying game system based on the Bill &amp; Ted Films, <em>plays gnarly air guitar riff</em>. </p> <p>In this game system, when you draw from your dice pool you need to connect the results as a phone number on your phone pad:</p> <p><img src="https://i.stack.imgur.com/JxV2Y.png" alt="enter image description here"></p> <p>Here one person has rolled 4, 2, 8, 3, 8, and 2, dialling 3-2-2-4-8-8. The other 7, 4, 6, 3, 9, and 4, only dialling 4-4-7 or 3-6-9. And the longest number wins.</p> <p>How do I work out my chance for success (or chance of a certain phone number length) from this system?</p> <p>I've tried enumerating the chance of getting a two-digit number depending on which number you rolled first (each of the first ones is $1/10$) and I get:</p> <ol> <li>$4/10$</li> <li>$6/10$</li> <li>$4/10$</li> <li>$6/10$</li> <li>$9/10$</li> <li>$6/10$</li> <li>$5/10$</li> <li>$7/10$</li> <li>$5/10$</li> <li>$4/10$</li> </ol> <p>But then, how do I follow each different 'route' of probabilities? I could imagine drawing a probability tree with ten branches and up to 12 levels, but that seems excessive. I could draw up a table (with blanks for non-telephonable combinations), but that would get hard to follow after the first table or so.</p> <p>I've tried to consider combinatorics, but I've gotten myself confused over nPr and nCr notation and not getting the right numbers in there. Is there an easier way to calculate the probabilities? </p>
Alan Gee
43,196
<p>I'm not sure that this qualifies as a complete answer to the question, but it will certainly provide some useful ideas and information.</p> <p>I wrote a program to solve this using the following <i>brute force</i> technique.</p> <ol> <li>Create a Possible_Next_Key integer set for each of the digits 0 to 9. E.g. for 1 this is {1,2,4,5}, those being the allowable next key presses following a 1</li> <li>Make a Next_Key_Possible boolean array based on the Possible_Next_Key sets. This is a 2D array of the form [From_Key,To_Key]=true|false E.g. Next_Key_Possible[0][5]=false because there is no direct link from the 0 to the 5 key</li> <li>Iterate through all $10^{6}$ permutations, finding the longest ordered sequence of connected numbers, checking for uniqueness against previous permutations, and so updating a dictionary containing the following: {UniqueSet, Longest_Key_Sequence, Number_Of_Permutations}</li> <li>Finally - total up the Number_Of_Permutations for each Longest_Key_Sequence </li> </ol> <p>And the results I get are as follows:</p> <p>Ways of getting a maximum key-sequence of 1 key = 0</p> <p>Ways of getting a maximum key-sequence of 2 keys = 6300</p> <p>Ways of getting a maximum key-sequence of 3 keys = 76640</p> <p>Ways of getting a maximum key-sequence of 4 keys = 119340</p> <p>Ways of getting a maximum key-sequence of 5 keys = 168024</p> <p>Ways of getting a maximum key-sequence of 6 keys = 629696</p> <p>Note - their sum <i>is</i> $10^{6}$</p> <p>So if $P_{n}$ is the probability that the longest phone number that can be dialed with a roll of 6 10-faced dice is n (following the rules of the game) then</p> <p>$P_{1}$ = 0</p> <p>$P_{2}$ = 0.0063</p> <p>$P_{3}$ = 0.07664</p> <p>$P_{4}$ = 0.11934</p> <p>$P_{5}$ = 0.168024</p> <p>$P_{6}$ = 0.629696</p> <p>Assuming my program is correct, which I believe it to be.</p>
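<p>For anyone who wants to reproduce this, here is a compact Python sketch (my own addition; the adjacency sets, including each key's adjacency to itself and the diagonal moves, are read off from the list of two-digit probabilities in the question). Rather than rerunning the full $10^{6}$ enumeration, it checks the two example rolls from the question:</p>

```python
from itertools import permutations

# allowable next keys after each key (the key itself counts), per the question
adj = {1: {1, 2, 4, 5}, 2: {1, 2, 3, 4, 5, 6}, 3: {2, 3, 5, 6},
       4: {1, 2, 4, 5, 7, 8}, 5: {1, 2, 3, 4, 5, 6, 7, 8, 9},
       6: {2, 3, 5, 6, 8, 9}, 7: {0, 4, 5, 7, 8},
       8: {0, 4, 5, 6, 7, 8, 9}, 9: {0, 5, 6, 8, 9}, 0: {0, 7, 8, 9}}

def longest_number(dice):
    # every dialable number is a prefix of some ordering of the dice,
    # so trying all orderings of 6 dice (at most 720) is cheap
    best = 1
    for perm in set(permutations(dice)):
        run = 1
        for a, b in zip(perm, perm[1:]):
            if b not in adj[a]:
                break
            run += 1
        best = max(best, run)
    return best

assert longest_number((4, 2, 8, 3, 8, 2)) == 6   # dials 3-2-2-4-8-8
assert longest_number((7, 4, 6, 3, 9, 4)) == 3   # dials 4-4-7 or 3-6-9
```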
1,262,174
<p>I am currently teaching Physics in an Italian junior high school. Today, while talking about the <a href="http://en.wikipedia.org/wiki/Dipole#/media/File:Dipole_Contour.svg" rel="noreferrer">electric dipole</a> generated by two equal charges in the plane, I was wondering about the following problem:</p> <blockquote> <p>Assume that two equal charges are placed in <span class="math-container">$(-1,0)$</span> and <span class="math-container">$(1,0)$</span>.</p> <p>There is an equipotential curve through the origin, whose equation is given by: <span class="math-container">$$\frac{1}{\sqrt{(x-1)^2+y^2}}+\frac{1}{\sqrt{(x+1)^2+y^2}}=2 $$</span> and whose shape is very <a href="http://en.wikipedia.org/wiki/Lemniscate" rel="noreferrer">lemniscate</a>-like:</p> </blockquote> <p><img src="https://i.stack.imgur.com/oojJw.png" alt="enter image description here" /></p> <blockquote> <p><strong>Is there a fast&amp;tricky way to compute the area enclosed by such a curve?</strong></p> </blockquote> <p>Numerically, it is <span class="math-container">$\approx 3.09404630427286$</span>.</p>
David H
55,051
<p>Consider this post an appendix to @achillehui's brilliant solution, which arrives at the following elliptic integral for the overall area:</p> <blockquote> <p>$$\mathcal{A}=\sqrt{8}+\int_{1}^{\phi}\sqrt{\frac{1+x-x^{2}}{x\left(1+x\right)}}\,\mathrm{d}x=:\sqrt{8}+\mathcal{B},\tag{1}$$</p> </blockquote> <p>where as usual $\phi$ denotes the golden ratio. I found this problem extremely interesting and just couldn't resist leaving that last integral unevaluated. The work below addresses the nuts and bolts of evaluating this elliptic integral as a particular case of a more general integral.</p> <hr> <p>For real parameters $a,b,c,d,z\in\mathbb{R}\land d&lt;c&lt;b\le z&lt;a$, define $\mathcal{E}{\left(a,b,c,d;z\right)}$ via the integral</p> <blockquote> <p>$$\mathcal{E}{\left(a,b,c,d;z\right)}:=\int_{z}^{a}\sqrt{\frac{\left(a-x\right)\left(x-c\right)}{\left(x-b\right)\left(x-d\right)}}\,\mathrm{d}x.\tag{2}$$</p> </blockquote> <p>For convenience we define ahead of time the auxiliary parameters</p> <p>$$\begin{cases} &amp;\theta:=\arcsin{\left(\sqrt{\frac{\left(a-c\right)\left(z-b\right)}{\left(a-b\right)\left(z-c\right)}}\right)},\\ &amp;\kappa:=\sqrt{\frac{\left(a-b\right)\left(c-d\right)}{\left(a-c\right)\left(b-d\right)}},\\ &amp;\nu:=\frac{a-b}{a-c}.\tag{3}\\ \end{cases}$$</p> <p>The key to putting the elliptic integral $\mathcal{E}$ into something more closely resembling standard form the "magic" quadratic transformation,</p> <p>$$\sqrt{\frac{\left(a-c\right)\left(x-b\right)}{\left(a-b\right)\left(x-c\right)}}=y\implies x=\frac{b\left(a-c\right)-c\left(a-b\right)y^{2}}{\left(a-c\right)-\left(a-b\right)y^{2}}.\tag{4}$$</p> <p>$$\begin{align} \mathcal{E}{\left(a,b,c,d;z\right)} &amp;=\int_{z}^{a}\sqrt{\frac{\left(a-x\right)\left(x-c\right)}{\left(x-b\right)\left(x-d\right)}}\,\mathrm{d}x\\ &amp;=\int_{\sin{\theta}}^{1}\frac{2\left(a-b\right)\left(b-c\right)y}{\left(a-c\right)\left(1-\nu 
y^{2}\right)^{2}}\sqrt{\frac{\left(a-c\right)\left(1-y^{2}\right)}{\left(b-d\right)y^{2}\left(1-\kappa^{2}y^{2}\right)}}\,\mathrm{d}y\\ &amp;=\frac{2\left(a-b\right)\left(b-c\right)}{\sqrt{\left(a-c\right)\left(b-d\right)}}\int_{\sin{\theta}}^{1}\frac{\mathrm{d}y}{\left(1-\nu y^{2}\right)^{2}}\sqrt{\frac{1-y^{2}}{1-\kappa^{2}y^{2}}}.\tag{5}\\ \end{align}$$</p> <p>Defining another auxiliary function,</p> <p>$$f{\left(\kappa,\nu;\theta\right)}:=\int_{\sin{\theta}}^{1}\frac{\mathrm{d}y}{\left(1-\nu y^{2}\right)^{2}}\sqrt{\frac{1-y^{2}}{1-\kappa^{2}y^{2}}},\tag{6}$$</p> <p>we then find,</p> <p>$$\begin{align} f{\left(\kappa,\nu;\theta\right)} &amp;=\int_{\sin{\theta}}^{1}\frac{\mathrm{d}y}{\left(1-\nu y^{2}\right)^{2}}\sqrt{\frac{1-y^{2}}{1-\kappa^{2}y^{2}}}\\ &amp;=\int_{\sin{\theta}}^{1}\frac{1-y^{2}}{\left(1-\nu y^{2}\right)^{2}}\cdot\frac{\mathrm{d}y}{\sqrt{\left(1-y^{2}\right)\left(1-\kappa^{2}y^{2}\right)}}\\ &amp;=\int_{\sin{\theta}}^{1}\frac{\left(1-\nu y^{2}\right)+\left(\nu y^{2}-y^{2}\right)}{\left(1-\nu y^{2}\right)^{2}}\cdot\frac{\mathrm{d}y}{\sqrt{\left(1-y^{2}\right)\left(1-\kappa^{2}y^{2}\right)}}\\ &amp;=\int_{\sin{\theta}}^{1}\frac{\mathrm{d}y}{\left(1-\nu y^{2}\right)\sqrt{\left(1-y^{2}\right)\left(1-\kappa^{2}y^{2}\right)}}\\ &amp;~~~~~-\left(1-\nu\right)\int_{\sin{\theta}}^{1}\frac{y^{2}}{\left(1-\nu y^{2}\right)^{2}}\cdot\frac{\mathrm{d}y}{\sqrt{\left(1-y^{2}\right)\left(1-\kappa^{2}y^{2}\right)}}\\ &amp;=\int_{\sin{\theta}}^{1}\frac{\mathrm{d}y}{\left(1-\nu y^{2}\right)\sqrt{\left(1-y^{2}\right)\left(1-\kappa^{2}y^{2}\right)}}\\ &amp;~~~~~-\left(1-\nu\right)\frac{\partial}{\partial\nu}\int_{\sin{\theta}}^{1}\frac{\mathrm{d}y}{\left(1-\nu y^{2}\right)\sqrt{\left(1-y^{2}\right)\left(1-\kappa^{2}y^{2}\right)}}\\ &amp;=\left[1-\left(1-\nu\right)\frac{\partial}{\partial\nu}\right]\int_{\sin{\theta}}^{1}\frac{\mathrm{d}y}{\left(1-\nu y^{2}\right)\sqrt{\left(1-y^{2}\right)\left(1-\kappa^{2}y^{2}\right)}}\\ 
&amp;=\left[1-\left(1-\nu\right)\partial_{\nu}\right]\left[\Pi{\left(\nu,\kappa\right)}-\Pi{\left(\theta,\nu,\kappa\right)}\right].\tag{7}\\ \end{align}$$</p> <p>Caution: out of force of habit, I use the argument conventions for denoting elliptic integrals that are adopted in Gradshteyn <em>et. al</em>, which differs slightly from that of many other common sources (such as an Wolfram-related reference or software).</p> <p>Using the partial derivative formula</p> <p>$$\small{\partial_{\nu}\Pi{\left(\theta,\nu,\kappa\right)}=\frac{E{\left(\theta,\kappa\right)}+\frac{\kappa^{2}-\nu}{\nu}F{\left(\theta,\kappa\right)}+\frac{\nu^{2}-\kappa^{2}}{\nu}\Pi{\left(\theta,\nu,\kappa\right)}-\frac{\nu\sin{\left(2\theta\right)}\sqrt{1-\kappa^{2}\sin^{2}{\left(\theta\right)}}}{2\left(1-\nu\sin^{2}{\left(\theta\right)}\right)}}{2\left(\kappa^{2}-\nu\right)\left(\nu-1\right)}},\tag{8}$$</p> <p>we then find,</p> <p>$$\begin{align} f{\left(\kappa,\nu;\theta\right)} &amp;=\left[1-\left(1-\nu\right)\frac{\partial}{\partial\nu}\right]\left[\Pi{\left(\nu,\kappa\right)}-\Pi{\left(\theta,\nu,\kappa\right)}\right]\\ &amp;=\frac{E{\left(\kappa\right)}+\frac{\kappa^{2}-\nu}{\nu}K{\left(\kappa\right)}+\frac{2\nu\kappa^{2}-\nu^{2}-\kappa^{2}}{\nu}\Pi{\left(\nu,\kappa\right)}}{2\left(\kappa^{2}-\nu\right)}\\ &amp;~~~~~\small{-\frac{E{\left(\theta,\kappa\right)}+\frac{\kappa^{2}-\nu}{\nu}F{\left(\theta,\kappa\right)}+\frac{2\nu\kappa^{2}-\nu^{2}-\kappa^{2}}{\nu}\Pi{\left(\theta,\nu,\kappa\right)}-\frac{\nu\sin{\left(2\theta\right)}\sqrt{1-\kappa^{2}\sin^{2}{\left(\theta\right)}}}{2\left(1-\nu\sin^{2}{\left(\theta\right)}\right)}}{2\left(\kappa^{2}-\nu\right)}}.\\ \end{align}$$</p> <hr> <p>Suppose now that $a+c+d=0\land c\ge b-a\land b\le0$. 
Then, setting $z=a+c$ we have</p> <p>$$\begin{cases} &amp;\theta=\arcsin{\left(\sqrt{\frac{\left(a-c\right)\left(a-b+c\right)}{\left(a-b\right)a}}\right)},\\ &amp;\kappa=\sqrt{\frac{\left(a-b\right)\left(a+2c\right)}{\left(a-c\right)\left(a+b+c\right)}},\\ &amp;\nu=\frac{a-b}{a-c}.\\ \end{cases}$$</p> <p>Also setting $c=1-a\land b=0$, yields</p> <p>$$\begin{cases} &amp;\theta=\arcsin{\left(\frac{\sqrt{2a-1}}{a}\right)},\\ &amp;\kappa=\sqrt{\frac{a\left(2-a\right)}{2a-1}},\\ &amp;\nu=\frac{a}{2a-1}.\\ \end{cases}$$</p> <p>Under these conditions, we can make the simplifications</p> <p>$$\begin{align} f{\left(\kappa,\nu;\theta\right)} &amp;=\frac{E{\left(\kappa\right)}+\frac{\kappa^{2}-\nu}{\nu}K{\left(\kappa\right)}+\frac{2\nu\kappa^{2}-\nu^{2}-\kappa^{2}}{\nu}\Pi{\left(\nu,\kappa\right)}}{2\left(\kappa^{2}-\nu\right)}\\ &amp;~~~~~\small{-\frac{E{\left(\theta,\kappa\right)}+\frac{\kappa^{2}-\nu}{\nu}F{\left(\theta,\kappa\right)}+\frac{2\nu\kappa^{2}-\nu^{2}-\kappa^{2}}{\nu}\Pi{\left(\theta,\nu,\kappa\right)}-\frac{\nu\sin{\left(2\theta\right)}\sqrt{1-\kappa^{2}\sin^{2}{\left(\theta\right)}}}{2\left(1-\nu\sin^{2}{\left(\theta\right)}\right)}}{2\left(\kappa^{2}-\nu\right)}}\\ &amp;=\frac{K{\left(\kappa\right)}-F{\left(\theta,\kappa\right)}}{2\nu}+\frac{\Pi{\left(\nu,\kappa\right)}-\Pi{\left(\theta,\nu,\kappa\right)}}{a}\\ &amp;~~~~~+\frac{E{\left(\kappa\right)}-E{\left(\theta,\kappa\right)}}{2\left(\kappa^{2}-\nu\right)}+\frac{\sin{\left(2\theta\right)}\sqrt{1-\kappa^{2}\sin^{2}{\left(\theta\right)}}}{4\left(1-a\right)\left(1-\nu\sin^{2}{\left(\theta\right)}\right)}\\ &amp;=\frac{\left(2a-1\right)\left[K{\left(\kappa\right)}-F{\left(\theta,\kappa\right)}\right]}{2a}+\frac{\Pi{\left(\nu,\kappa\right)}-\Pi{\left(\theta,\nu,\kappa\right)}}{a}\\ &amp;~~~~~-\frac{\left(2a-1\right)\left[E{\left(\kappa\right)}-E{\left(\theta,\kappa\right)}\right]}{2a\left(a-1\right)}-\frac{1}{a}\sqrt{\frac{2a-1}{2a\left(a-1\right)}}.\\ \end{align}$$</p> <p>Thus, for $1&lt;a&lt;2$ we 
obtain</p> <p>$$\begin{align} \mathcal{E}{\left(a,0,1-a,-1;1\right)} &amp;=\frac{2a\left(a-1\right)}{\sqrt{2a-1}}\,f{\left(\kappa,\nu;\theta\right)}\\ &amp;=\left(a-1\right)\sqrt{2a-1}\left[K{\left(\kappa\right)}-F{\left(\theta,\kappa\right)}\right]\\ &amp;~~~~~-\sqrt{2a-1}\left[E{\left(\kappa\right)}-E{\left(\theta,\kappa\right)}\right]\\ &amp;~~~~~+\frac{2\left(a-1\right)\left[\Pi{\left(\nu,\kappa\right)}-\Pi{\left(\theta,\nu,\kappa\right)}\right]}{\sqrt{2a-1}}\\ &amp;~~~~~-\sqrt{\frac{2\left(a-1\right)}{a}}.\tag{9}\\ \end{align}$$</p> <hr> <p>Now, the particular case we're interested in is when $a$ takes the value of the golden ratio, $\phi=\frac{1+\sqrt{5}}{2}$, defined of course by the classical condition,</p> <p>$$\phi=\frac{1}{\phi-1};~~~\small{\phi&gt;0}.$$</p> <p>A constant associated with $\phi$ is the so-called golden ratio conjugate,</p> <p>$$\Phi:=\frac{1}{\phi}.$$</p> <p>We finally obtain a closed form for the quantity $\mathcal{B}$ as follows:</p> <p>$$\begin{align} \mathcal{B} &amp;=\mathcal{E}{\left(\phi,0,1-\phi,-1;1\right)}\\ &amp;=\mathcal{E}{\left(\phi,0,-\Phi,-1;1\right)}\\ &amp;=\left(\phi-1\right)\sqrt{2\phi-1}\left[K{\left(\kappa\right)}-F{\left(\theta,\kappa\right)}\right]\\ &amp;~~~~~-\sqrt{2\phi-1}\left[E{\left(\kappa\right)}-E{\left(\theta,\kappa\right)}\right]\\ &amp;~~~~~+\frac{2\left(\phi-1\right)\left[\Pi{\left(\nu,\kappa\right)}-\Pi{\left(\theta,\nu,\kappa\right)}\right]}{\sqrt{2\phi-1}}\\ &amp;~~~~~-\sqrt{\frac{2\left(\phi-1\right)}{\phi}}\\ &amp;=\Phi\sqrt[4]{5}\left[K{\left(\kappa\right)}-F{\left(\theta,\kappa\right)}\right]-\sqrt[4]{5}\left[E{\left(\kappa\right)}-E{\left(\theta,\kappa\right)}\right]\\ &amp;~~~~~+\frac{2\Phi}{\sqrt[4]{5}}\left[\Pi{\left(\nu,\kappa\right)}-\Pi{\left(\theta,\nu,\kappa\right)}\right]-\Phi\sqrt{2},\tag{10a}\\ \end{align}$$</p> <p>where</p> <p>$$\begin{cases} &amp;\theta=\arcsin{\left(\frac{\sqrt{2\phi-1}}{\phi}\right)},\\ &amp;\kappa=\sqrt{\frac{\phi\left(2-\phi\right)}{2\phi-1}},\\ 
&amp;\nu=\frac{\phi}{2\phi-1}.\tag{10b}\\ \end{cases}$$</p> <p>Result $(10)$ above is potentially a place to stop, but the six separate elliptic integral terms make the expression too cumbersome to fool with if we don't have to. Since each elliptic integral term has the same elliptic modulus $\kappa$, simplification seems very likely. I'll update my response if I make any progress towards a simplified final value.</p>
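<p>The differentiation-under-the-integral step that produces $(7)$ can be sanity-checked numerically. The sketch below uses illustrative parameter values rather than the $\phi$-derived ones, a plain composite Simpson rule after the substitution $y=\sin t$, and a central difference for the $\nu$-derivative; <code>simpson</code> and <code>g</code> are ad hoc helper names, not anything from the derivation:</p>

```python
import math

def simpson(fn, a, b, n=2000):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = fn(a) + fn(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * fn(a + i * h)
    return s * h / 3

theta, kappa, nu = 0.3, 0.5, 0.4  # illustrative values with kappa, nu in (0, 1)

def g(v):
    # g(v) = ∫_θ^{π/2} dt / ((1 - v sin²t) √(1 - κ² sin²t)), via y = sin t
    return simpson(lambda t: 1.0 / ((1 - v * math.sin(t) ** 2)
                                    * math.sqrt(1 - kappa ** 2 * math.sin(t) ** 2)),
                   theta, math.pi / 2)

# f(κ,ν;θ) = ∫_θ^{π/2} cos²t dt / ((1 - ν sin²t)² √(1 - κ² sin²t))
lhs = simpson(lambda t: math.cos(t) ** 2 / ((1 - nu * math.sin(t) ** 2) ** 2
                                            * math.sqrt(1 - kappa ** 2 * math.sin(t) ** 2)),
              theta, math.pi / 2)

h = 1e-5
rhs = g(nu) - (1 - nu) * (g(nu + h) - g(nu - h)) / (2 * h)  # [1 - (1-ν)∂ν] g
assert abs(lhs - rhs) < 1e-6
```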
4,408,507
<p>We take the definition of a Lebesgue measurable set to be the following:</p> <p>Let <span class="math-container">$A\subset \mathbb R$</span> be called Lebesgue measurable if <span class="math-container">$\exists$</span> a Borel set <span class="math-container">$B\subset A$</span> such that <span class="math-container">$|A-B|=0$</span>, where <span class="math-container">$|.|$</span> denotes the Lebesgue outer measure of a set.</p> <p>Then we have theorems like:</p> <p><span class="math-container">$A\subset \mathbb R$</span> is Lebesgue measurable iff</p> <p><span class="math-container">$(1)$</span> Given any <span class="math-container">$\epsilon&gt;0$</span> there exists <span class="math-container">$F\subset A$</span> closed such that <span class="math-container">$|A-F|&lt;\epsilon$</span>.</p> <p><span class="math-container">$(2)$</span> Given any <span class="math-container">$\epsilon&gt;0$</span> there exists <span class="math-container">$G\supset A$</span> open such that <span class="math-container">$|G-A|&lt;\epsilon$</span>.</p> <p>I have two questions here.</p> <p>First, what motivates the definition of Lebesgue measurable sets? Second, why do we approximate Lebesgue measurable sets from below by closed sets and from above by open sets? I am studying measure theory from Sheldon Axler's book, which does not give the motivation behind these definitions and theorems. Can someone explain the motivation behind them?</p>
L. F.
620,160
<p>As <a href="https://math.stackexchange.com/users/1021258">@Esgeriath</a>'s <a href="https://math.stackexchange.com/a/4523017">answer</a> mentioned, the correct inequality should be <span class="math-container">$$ \sin \frac{A}{2} \; \sin \frac{B}{2} \; \sin \frac{C}{2} \le \frac{1}{8}. $$</span> That answer used multivariable calculus methods to calculate the extremum of the left-hand side. Here is an elementary proof of this inequality.</p> <hr /> <p>You have shown that <span class="math-container">$$ \sin \frac A2 \; \sin \frac B2 \; \sin \frac C2 = \frac{(s - a)(s - b)(s - c)}{abc} $$</span> where <span class="math-container">$s = \frac{1}{2}(a + b + c)$</span>. Recall that the area of the triangle is <span class="math-container">$$ T = \sqrt{s (s - a)(s - b)(s - c)} $$</span> by <a href="https://en.wikipedia.org/wiki/Heron%27s_formula" rel="nofollow noreferrer">Heron's formula</a>, the inradius is <span class="math-container">$$ r = \frac{T}{s}, $$</span> and the circumradius is <span class="math-container">$$ R = \frac{abc}{4T}, $$</span> so <span class="math-container">$$ \frac{(s - a)(s - b)(s - c)}{abc} = \frac{T^2}{4 s R T} = \frac{r}{4R}. $$</span> But <span class="math-container">$R \ge 2r$</span> by <a href="https://en.wikipedia.org/wiki/Euler%27s_theorem_in_geometry" rel="nofollow noreferrer">Euler's inequality</a>, with equality if and only if the triangle is equilateral, so <span class="math-container">$$ \sin \frac A2 \; \sin \frac B2 \; \sin \frac C2 \le \frac{1}{8} $$</span> where equality holds if and only if <span class="math-container">$A = B = C = 60^\circ$</span>.</p>
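<p>The bound can also be spot-checked numerically with a crude grid scan over triangles $A+B+C=\pi$ (the grid resolution below is arbitrary):</p>

```python
import math

# scan triangles A + B + C = π on a grid and track the largest product
best = 0.0
N = 300
for i in range(1, N):
    for j in range(1, N - i):
        A = math.pi * i / N
        B = math.pi * j / N
        C = math.pi - A - B
        best = max(best, math.sin(A / 2) * math.sin(B / 2) * math.sin(C / 2))

assert best <= 0.125 + 1e-9       # never exceeds 1/8
assert abs(best - 0.125) < 1e-3   # 1/8 is attained at A = B = C = 60°
```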
2,278,798
<p>The converse statement, "A metric space on which every continuous, real valued function is bounded is compact" is dealt with on this site, as it is in Greene and Gamelin's monograph, "Introduction to Topology", where a hint to its proof is offered. I see no discussion of the direct statement in my title. Is it true? If so, can someone outline a proof?</p>
Henno Brandsma
4,280
<p>Suppose $f$ is continuous from $X$ to $\mathbb{R}$. Define $U_n = f^{-1}[(-n,n)]$ for $ n \in \mathbb{N}^+$; then the $U_n$ are open by continuity and $U_k \subseteq U_l$ whenever $ k \le l$, i.e. the family is increasing. Also, $\mathcal{U} = \{U_n: n \in \mathbb{N}^+ \}$ is an open cover of $X$, as every value $f(x)$ is in some $(-n,n)$.</p> <p>If $X$ is compact, this cover has a finite subcover. If $U_N$ is the element with the largest index, then $X \subseteq U_N$ and so $f[X] \subseteq [-N,N]$, so $f$ is bounded.</p>
2,150,886
<p>I want to find a first-order ODE, an initial value problem, that has the solution $$y=(1-y_0)t+y_0$$ where $y_0$ is the initial value. The ODE has to be of first order, that is: $$y'=f(y).$$ I need this to test a special solver I am building. The main objective is to find an ODE with the property that the end value moves in the direction opposite to the initial value. My thought is that the function above is the simplest one that fulfills that requirement. However, any other idea that produces my desired result is welcome.</p> <p>I got this far: since $$y'=1-y_0=f((1-y_0)t+y_0),$$ $f$ should be something like $$f(z)=\frac{z-y_0}{t},$$ so we get: $$y'=\frac{y-y_0}{t}.$$ However, I don't know how to properly get the initial condition into the equation; the $y_0$ part in the ODE itself doesn't seem right, since the initial value can't be put into the ODE itself but is a separate constraint outside the ODE. Can anyone help me get this clear?</p>
Christian Blatter
1,303
<p>This is maybe not what you intended. </p> <p>Note that you want a differential equation of the form $y'=f(y)$, and at the same time the proposed solution has constant derivative with respect to $t$. This implies that $f$ is necessarily of the form $f(y)=c$ for some constant $c$. Given $y_0$ an IVP satisfying your requirements therefore could be $$y'=1-y_0,\qquad y(0)=y_0\ .$$</p>
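<p>Since the right-hand side of this IVP is constant, even a fixed-step Euler method reproduces the linear solution exactly up to rounding, which makes the IVP a convenient solver test case. A minimal sketch, where <code>euler</code> is a stand-in for the solver being tested:</p>

```python
def euler(f, y0, t_end, n=100):
    # fixed-step forward Euler for y' = f(y), y(0) = y0
    h = t_end / n
    y = y0
    for _ in range(n):
        y = y + h * f(y)
    return y

y0 = 0.25
y1 = euler(lambda y: 1 - y0, y0, 1.0)  # the constant-RHS IVP above
# matches y(t) = (1 - y0) t + y0 at t = 1, i.e. y(1) = 1 for every y0
assert abs(y1 - ((1 - y0) * 1.0 + y0)) < 1e-12
```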
2,901,734
<p>As the title says, find the minimum value of $(1+\frac{1}{x})(1+\frac{1}{y})$ given that $x+y=8$, using the AGM inequalities: the Arithmetic Mean, Geometric Mean, and Harmonic Mean.</p>
farruhota
425,072
<p>Using all three means: $$(1+\frac{1}{x})(1+\frac{1}{y})=1+\color{red}{\frac1x+\frac1y}+\color{blue}{\frac1{xy}}\ge 1+\color{red}{\frac12}+\color{blue}{\frac1{16}}=\frac{25}{16},$$ where: $$\text{AM-HM:} \ \ \frac{x+y}{2}\ge \frac{2}{\frac1x+\frac1y} \iff \frac1x+\frac1y\ge \frac4{x+y}=\frac48=\frac12;\\ \text{AM-GM:} \ \ \frac{x+y}{2}\ge \sqrt{xy} \iff \frac{1}{xy}\ge \left(\frac{2}{x+y}\right)^2=\left(\frac{2}{8}\right)^2=\frac1{16}.$$ Equality occurs for $x=y=4$.</p>
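<p>A quick brute-force scan over the constraint line confirms the minimum value $\frac{25}{16}$ at $x=y=4$ (the grid resolution is arbitrary):</p>

```python
# brute-force scan of (1 + 1/x)(1 + 1/y) on the line x + y = 8 with x, y > 0
m = float("inf")
for i in range(1, 8000):
    x = i / 1000.0
    y = 8.0 - x
    m = min(m, (1 + 1 / x) * (1 + 1 / y))

assert abs(m - 25 / 16) < 1e-12  # minimum 25/16, attained at x = y = 4
```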
182,346
<p>Let's call a polygon $P$ <em>shrinkable</em> if any down-scaled (dilated) version of $P$ can be translated into $P$. For example, the following triangle is shrinkable (the original polygon is green, the dilated polygon is blue):</p> <p><img src="https://i.stack.imgur.com/M0LOu.png" alt="enter image description here"></p> <p>But the following U-shape is not shrinkable (the blue polygon cannot be translated into the green one):</p> <p><img src="https://i.stack.imgur.com/S30bD.png" alt="enter image description here"></p> <p>Formally, a compact $\ P\subseteq \mathbb R^n\ $ is called <em>shrinkable</em> iff:</p> <p>$$\forall_{\mu\in [0;1)}\ \exists_{q\in \mathbb R^n}\quad \mu\!\cdot\! P\, +\, q\ \subseteq\ P$$</p> <p>What is the largest group of shrinkable polygons?</p> <p>Currently I have the following sufficient condition: if $P$ is <a href="https://en.wikipedia.org/wiki/Star-shaped_polygon" rel="nofollow noreferrer">star-shaped</a> then it is shrinkable. </p> <p><em>Proof</em>: By definition of a star-shaped polygon, there exists a point $A\in P$ such that for every $B\in P$, the segment $AB$ is entirely contained in $P$. Now, for all $\mu\in [0;1)$, let $\ q := (1-\mu)\cdot A$. This effectively translates the dilated $P'$ such that $A'$ coincides with $A$. Now every point $B'\in P'$ is on a segment between $A$ and $B$, and hence contained in $P$.</p> <p><img src="https://i.stack.imgur.com/sdPRw.png" alt="enter image description here"></p> <p>My questions are:</p> <p>A. Is the condition of being star-shaped also necessary for shrinkability?</p> <p>B. Alternatively, what other condition on $P$ is necessary?</p>
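<p>The translation $q=(1-\mu)\cdot A$ used in the proof above can be checked concretely on the unit square, which is star-shaped with respect to its center $A=(0.5,0.5)$; a minimal random spot-check (the sampling scheme is just for illustration):</p>

```python
import random

# check the star-shaped shrinking recipe q = (1 - μ)·A on the unit square P,
# which is star-shaped with respect to its center A = (0.5, 0.5)
random.seed(0)
A = (0.5, 0.5)
ok = True
for _ in range(1000):
    mu = random.random()                        # μ ∈ [0, 1)
    B = (random.random(), random.random())      # arbitrary point B ∈ P
    q = ((1 - mu) * A[0], (1 - mu) * A[1])
    Bp = (mu * B[0] + q[0], mu * B[1] + q[1])   # corresponding point of μP + q
    ok = ok and 0 <= Bp[0] <= 1 and 0 <= Bp[1] <= 1

assert ok  # μP + q ⊆ P for every sampled μ and B
```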
Beni Bogosel
13,093
<p>Here is my variant, a bit more geometrical.</p> <p>Denote by $P$ the original polygon, and $P_\lambda$ the contracted polygon with a factor $\lambda \in (0,1)$ which lies inside $P$. Note that $P_\lambda$ is obtained from $P$ after a dilation and a translation, and therefore there exists a point $O_\lambda$ such that $P_\lambda$ is the image of $P$ under a homothety $h_\lambda$ of center $O_\lambda$ and factor $\lambda$.</p> <p>Now we know that $h_\lambda : P \to P_\lambda \subset P$ is well defined, continuous and a contraction. Therefore, since $P$ is closed, $h_\lambda$ has a fixed point in $P$, which can only be $O_\lambda$. As a consequence $O_\lambda \in P\cap P_\lambda$.</p> <p>Pick a sequence $\lambda_n \to 1$ and denote $O_n = O_{\lambda_n}$. Since $(O_n)$ is contained in a compact set $P$, it follows that it has at least one limit point $O$. For keeping the notations simple, we assume the whole $(O_n)$ is convergent to $O$. </p> <p>Take now $X \in P$ and assume that $[OX]$ is not contained in $P$. Then there exists $Y \in [OX] \setminus P$. Since $P$ is closed, there exists a maximal open subsegment $(X_1X_2)$ of $[OX]$ which contains $Y$ and is out of $P$ ($X_1 \in (OY)$). Obviously $X_1,X_2 \in P$. Moreover, there exists an open cone $C$ of direction given by $(X_1X_2)$, with angle and length $\varepsilon$ small enough, which contains $(X_1X_2)\cap B(X_2,\varepsilon)$ and does not intersect $P$. This happens since $P$ is a polygon and the exterior of $P$ near $X_2$ is either an angle or a half-plane. Consider now $Z_n = h_{\lambda_n}(X_2)$. By hypothesis we have $(Z_n) \subset P$. </p> <p>Since $O_n \to O$, for $n$ large enough the angle $\angle O_nX_2O$ will be smaller than $\varepsilon/2$. Since $O_nZ_n = \lambda_n O_nX_2$ and $\lambda_n \to 1$ the points $Z_n$ will lie in the cone $C$ for $n$ large enough. This contradicts the fact that $Z_n$ is in $P$.</p> <p>Therefore $P$ is star-shaped with respect to $O$.</p>
1,928,439
<p>Is there a space whose dual is $F^m$? ($F$ is the field with respect to which the original set is a vector space.)</p> <p>I'm trying to do the following exercise:</p> <p><a href="https://i.stack.imgur.com/j9WMP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j9WMP.png" alt="enter image description here"></a></p> <p>I only need help with the direction where $\Gamma$ is injective.</p> <p>I was thinking of using the theorem that states that if a dual map is injective then the linear map must be surjective, together with the fact that if a linear map $T \in \mathcal{L}(F^m,V)$ with $T(e_i)=v_i$ is surjective, then the $v_i$ span $V$.</p> <p>Let's assume that $\Gamma^*(\phi)=(\phi(v_1),...,\phi(v_m))$, where $v_i \in V$ and $\phi \in \mathcal{L}(V,F)$. Is there a way to think of $\Gamma^*$ as the dual map of a linear map $\Gamma$?</p>
Alex Ortiz
305,215
<p>Given a vector space $\mathbf F^m$, this space has its double dual space ${({\mathbf F^m})^{\ast\ast}} = \mathcal L(({\mathbf F^m})^{\ast},\mathbf F),$ the set of linear functionals on $({\mathbf F^m})^{\ast}$. This double dual is <em>naturally isomorphic</em> to $\mathbf F^m$. Let $v \in \mathbf F^m$, $\varphi \in ({\mathbf F^m})^{\ast}$ and define $\operatorname{ev}_v:({\mathbf F^m})^{\ast}\to\mathbf F$, the <em>evaluation at $v$ functional on $({\mathbf F^m})^{\ast}$</em>, by $$ \operatorname{ev}_v(\varphi) = \varphi(v) \in \mathbf F. $$ Note that $\operatorname{ev}_v$ is an element of the double dual of $\mathbf F^m$. The association of $\operatorname{ev}_v$ with $v$ is so natural, they can be thought of as the same element and $\operatorname{ev}_v\mapsto v$ is an isomorphism ${({\mathbf F^m})^{\ast\ast}}\cong \mathbf F^m$. In this sense, the dual space of $({\mathbf F^m})^{\ast}$ is $\mathbf F^m$, since the association $\operatorname{ev}_v\mapsto v$ is so natural. This isomorphism $\operatorname{ev}_v\mapsto v$ is <em>natural</em> in the sense that it does not depend on our choice of basis for $\mathbf F^m$.</p> <p>However, a vector space $V$ can never truly be <em>equal</em> to its dual for the reason that $V^\ast$ is the set of linear functionals on $V$ and $V$ is itself a set of vectors. This association of $V$ with its double dual via the evaluation map is the closest thing to a vector space <em>equalling</em> its dual.</p>
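<p>The evaluation map can be made concrete by modelling functionals on $\mathbf F^3$ as tuples acting via the dot product; a small illustrative sketch (the tuple representation of functionals is an arbitrary modelling choice):</p>

```python
# model (F^m)* as tuples acting by the dot product (illustrative, m = 3)
def ev(v):
    # ev_v : (F^m)* -> F,  ev_v(φ) = φ(v)
    return lambda phi: sum(p * x for p, x in zip(phi, v))

v, phi, psi = (1.0, 2.0, 3.0), (0.5, -1.0, 2.0), (1.0, 1.0, 1.0)
val = ev(v)(phi)
assert val == 0.5 * 1.0 - 1.0 * 2.0 + 2.0 * 3.0   # φ(v) lives in F

# ev_v is linear in φ, so ev_v is an element of the double dual:
combo = tuple(2 * p + q for p, q in zip(phi, psi))
assert ev(v)(combo) == 2 * ev(v)(phi) + ev(v)(psi)
```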
883,620
<p>Let $a,b,c \geq 0$ with $a^2+b^2+c^2+abc=4$. Prove that $ab+bc+ac-abc \leq 2$. Can anyone help me with this problem? I believe Dirichlet's theorem is the key. Sorry for making mistakes over and over again, but I'm certain that the inequality is true now.</p>
Joonas Ilmavirta
166,535
<p>I assume you want $ab+bc+ca-abc\leq2$. If this is not the case, let me know. This is not the best inequality you could get, but it's true.</p> <p>Since $(a-b)^2\geq0$, we have $a^2+b^2\geq2ab$, and similarly $b^2+c^2\geq2bc$ and $c^2+a^2\geq2ca$. Summing up the three inequalities gives $$ 2(a^2+b^2+c^2)\geq2(ab+bc+ca). $$ If you combine this with $a^2+b^2+c^2+abc=1$, you get $ab+bc+ca+abc\leq1$.</p> <p>But now $2abc\geq0$ so you can subtract it from the left side. Similarly you can add one to the right side. This leads to $ab+bc+ca-abc\leq2$.</p>
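<p>Independently of the algebra above, the inequality as stated in the question, with the constraint $a^2+b^2+c^2+abc=4$, can be spot-checked numerically by sampling nonnegative points on the constraint surface (solving the quadratic in $c$; the sampling ranges are arbitrary):</p>

```python
import random, math

# sample (a, b, c) ≥ 0 with a² + b² + c² + abc = 4 by solving for c,
# then spot-check ab + bc + ca - abc ≤ 2
random.seed(2)
checked = 0
for _ in range(20000):
    a, b = random.uniform(0, 2), random.uniform(0, 2)
    disc = a * a * b * b - 4 * (a * a + b * b - 4)
    if disc < 0:
        continue
    c = (-a * b + math.sqrt(disc)) / 2
    if c < 0:
        continue
    assert abs(a * a + b * b + c * c + a * b * c - 4) < 1e-9  # on the surface
    assert a * b + b * c + c * a - a * b * c <= 2 + 1e-9
    checked += 1

assert checked > 1000  # enough valid samples were actually tested
```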
3,891,336
<p>I have a problem with this question:</p> <p>We have a language with alphabet {a, b, c}. All strings in this language have even length and do not contain the substrings &quot;ab&quot; or &quot;ba&quot;. For example, these strings are acceptable: &quot;accb&quot;, &quot;aa&quot;, &quot;bb&quot;, &quot;bcac&quot;, and these strings are not acceptable: &quot;ccab&quot;, &quot;abca&quot;, &quot;cabc&quot;, and so on. Actually I think this is not a regular language, but I can't prove it. Can anyone help me by giving a regular expression for this language or by proving that this language is not regular? Or any help so that I can figure out how I should think about it.</p> <p>I tried <code>(aa)*(cc)*(bb)*+(bb)*(cc)*(aa)*+(ac+ca)*+(bc+cb)*</code> and <code>((a+c)(a+c))*+((b+c)(b+c))*+(ac+bc)*(ca+cb)*+(ca+cb)*(ac+bc)*</code>, but these do not work.</p>
J.-E. Pin
89,374
<p>Your language is regular. All you need to know is that regular languages are closed under Boolean operations (union, intersection, complement, set difference, ...)</p> <p>Let <span class="math-container">$A = \{a,b,c\}$</span> be the alphabet. The set <span class="math-container">$E$</span> of words of even length is <span class="math-container">$(AA)^*$</span> and hence it is regular. The set <span class="math-container">$F$</span> of words containing either <span class="math-container">$ab$</span> or <span class="math-container">$ba$</span> as a factor is <span class="math-container">$A^*(ab + ba)A^*$</span> and it is also regular. Your language is <span class="math-container">$E \setminus F$</span> and thus it is regular.</p>
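<p>A brute-force comparison over all short words confirms that $E \setminus F$ agrees with the direct definition of the language; the regex spellings below are one possible rendering of $E$ and $F$:</p>

```python
import re
from itertools import product

# E = (AA)*  and  F = A*(ab + ba)A*  over A = {a, b, c}; the language is E \ F
E = re.compile(r'(?:[abc][abc])*')
F = re.compile(r'[abc]*(?:ab|ba)[abc]*')

def in_language(w):
    # direct definition: even length, no "ab" or "ba" factor
    return len(w) % 2 == 0 and "ab" not in w and "ba" not in w

agree = all(
    in_language(w) == (E.fullmatch(w) is not None and F.fullmatch(w) is None)
    for n in range(7)
    for w in ("".join(t) for t in product("abc", repeat=n))
)
assert agree
```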
1,386,367
<p>I'm interested in the definite integral</p> <p>\begin{align} I\equiv\int_{-\infty}^{\infty} \frac{1}{x^2-b^2}\,dx=\int_{-\infty}^{\infty} \frac{1}{(x+b) (x-b)}\,dx.\tag{1} \end{align}</p> <p>Obviously, it has two poles ($x=b, x=-b$) on the real axis and is thus singular. I tried to apply the contour integration methods mentioned <a href="https://math.stackexchange.com/questions/564952/complex-integration-poles-real-axis">here</a>, where they discuss the integral \begin{align} \int_{-\infty}^{\infty}\frac{e^{iax}}{x^2 - b^2}dx = -\frac{\pi}{b}\sin(ab),\tag{2} \end{align} where the r.h.s. is the solution derivable in multiple ways as shown in the above thread (e.g. circumventing the poles with infinitesimal arcs).</p> <p>However, since in the seemingly simpler case (1) the numerator is symmetric, in contrast to the situation in (2), I obtain $$I=0,$$ as the residues are equal up to sign. E.g. consider the limit $a\rightarrow 0$ in (2), which gives $\sin(ab)\rightarrow 0$.</p> <p>Based on some literature (in the context in which the integral appears) it seems that one <em>should</em> obtain</p> <p>\begin{align} I=-\frac{i\pi}{b}. \end{align}</p> <p>Of course, this can be realized by considering the modified integral \begin{align} I_{mod}\equiv\lim_{\eta\rightarrow 0^+} \int_{-\infty}^{\infty} dx \frac{1}{x^2-b^2+i\eta}, \end{align} and closing the contour (e.g. a box closed at infinity) in the lower half plane.</p> <p>However, in this approach one seems to have some freedom (sign of the infinitesimal contribution; why shift one pole upwards and the other pole downwards and not, e.g., both upwards?).</p> <p>So let me explicitly phrase my questions:</p> <ol> <li>Is the value of the definite integral in (1) well-defined?</li> <li>Is it equal to zero?</li> <li>In any case, why would I include an infinitesimal shift as in (2) and not in another way?</li> </ol> <p>Thank you very much in advance!</p>
Emilio Novati
187,568
<p>Hint (using $P^2=P$): $$ (I+P)(P-2I)=P-2I+P^2-2P=P-2I+P-2P=-2I $$</p>
3,944,628
<p>I'm reading a book and, in its section on the definition of a stopping time (in continuous time), the author declares at the start that for the whole section every filtration will be complete and right-continuous.</p> <p>So, in the definition of a stopping time, how important are these conditions? Why would they matter?</p>
B. Goddard
362,009
<p>You can do it with Lagrange multipliers. Maximize <span class="math-container">$f=\sin (x/2) + \sin (y/2)+\sin (z/2)$</span> under the constraint <span class="math-container">$g=x+y+z = \pi$</span>.</p> <p>Then</p> <p><span class="math-container">$$\nabla f = \langle \cos(x/2)/2, \cos(y/2)/2, \cos(z/2)/2 \rangle =\lambda\langle 1,1,1 \rangle = \lambda\nabla g.$$</span></p> <p>This shows that <span class="math-container">$x=y=z$</span> and the maximal triangle is equilateral.</p>
3,944,628
<p>I'm reading a book and, in its section on the definition of a stopping time (in continuous time), the author declares at the start that for the whole section every filtration will be complete and right-continuous.</p> <p>So, in the definition of a stopping time, how important are these conditions? Why would they matter?</p>
Z Ahmed
671,540
<p>In a triangle ABC, <span class="math-container">$A+B+C=\pi$</span>. <span class="math-container">$$f(x)=\sin(x/2) \implies f''(x)=-\frac{1}{4}\sin(x/2)&lt;0,\ x\in(0,2\pi),$$</span> so <span class="math-container">$f$</span> is concave there. By Jensen's inequality <span class="math-container">$$\frac{f(A)+f(B)+f(C)}{3} \le f\left(\frac{A+B+C}{3}\right)=f\left(\frac{\pi}{3}\right).$$</span> <span class="math-container">$$\implies \frac{\sin (A/2)+\sin(B/2)+\sin(C/2)}{3} \le \sin\frac{\pi}{6}.$$</span> <span class="math-container">$$\implies \sin(A/2)+\sin(B/2)+\sin(C/2) \le \frac{3}{2}.$$</span></p>
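<p>The same bound can be spot-checked numerically with a grid scan over triangles $A+B+C=\pi$ (the grid size is arbitrary):</p>

```python
import math

# scan A + B + C = π on a grid and track the largest value of the sum
best = 0.0
N = 300
for i in range(1, N):
    for j in range(1, N - i):
        A = math.pi * i / N
        B = math.pi * j / N
        C = math.pi - A - B
        best = max(best, math.sin(A / 2) + math.sin(B / 2) + math.sin(C / 2))

assert best <= 1.5 + 1e-9      # never exceeds 3/2
assert abs(best - 1.5) < 1e-3  # 3/2 is attained at the equilateral triangle
```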
4,090,408
<p>Show that <span class="math-container">$A$</span> is a whole number: <span class="math-container">$$A=\sqrt{\left|40\sqrt2-57\right|}-\sqrt{\left|40\sqrt2+57\right|}.$$</span> I don't know if this is necessary, but we can compare <span class="math-container">$40\sqrt{2}$</span> and <span class="math-container">$57$</span>: <span class="math-container">$$40\sqrt{2}\Diamond57,\\1600\times2\Diamond 3249,\\3200\Diamond3249,\\3200&lt;3249\Rightarrow 40\sqrt{2}&lt;57.$$</span> Is this actually needed for the solution? So <span class="math-container">$$A=\sqrt{57-40\sqrt2}-\sqrt{40\sqrt2+57}.$$</span> What should I do next?</p>
José Carlos Santos
446,262
<p>That number is <span class="math-container">$-10$</span>. In fact, if you try to express <span class="math-container">$\sqrt{57-40\sqrt2}$</span> as <span class="math-container">$a+b\sqrt2$</span>, you solve the system<span class="math-container">$$\left\{\begin{array}{l}a^2+2b^2=57\\2ab=-40.\end{array}\right.$$</span>You will get that <span class="math-container">$a=5$</span> and that <span class="math-container">$b=-4$</span>; since <span class="math-container">$5-4\sqrt2&lt;0$</span> and a square root must be non-negative, this gives <span class="math-container">$\sqrt{57-40\sqrt2}=-5+4\sqrt2$</span>. By the same method, you will get that <span class="math-container">$\sqrt{57+40\sqrt2}=5+4\sqrt2$</span>. So<span class="math-container">$$\sqrt{57-40\sqrt2}-\sqrt{57+40\sqrt2}=-5+4\sqrt2-\left(5+4\sqrt2\right)=-10.$$</span></p>
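<p>A one-line numerical check agrees with this closed-form evaluation:</p>

```python
from math import sqrt

# A = √|40√2 - 57| - √|40√2 + 57|
A = sqrt(abs(40 * sqrt(2) - 57)) - sqrt(abs(40 * sqrt(2) + 57))
assert abs(A - (-10)) < 1e-9  # A = -10 up to floating-point rounding
```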
3,416,600
<p>Show that <span class="math-container">$|{\sqrt{a^2+b^2}-\sqrt{a^2+c^2}}|\le|b-c|$</span> where <span class="math-container">$a,b,c\in\mathbb{R}$</span>.</p> <p>I'd like to get a hint on how to get started. What I've thought of so far is dividing into cases to get rid of the absolute value <span class="math-container">$(++, +-, -+, --)$</span>, but it looks messy. I'm wondering if there is a nicer way to solve it.</p> <p>Would love to hear some ideas.</p> <p>Thanks in advance!</p>
nonuser
463,553
<p>Use the formula: <span class="math-container">$${\sqrt{x}-\sqrt{y}}= {x-y\over \sqrt{x}+\sqrt{y}}$$</span></p> <p>In your case you get:</p> <p><span class="math-container">$${\sqrt{a^2+b^2}-\sqrt{a^2+c^2}}= {a^2+b^2-(a^2+c^2)\over \sqrt{a^2+b^2}+\sqrt{a^2+c^2}}$$</span></p> <p>so you have to prove (if you assume <span class="math-container">$b\geq c$</span>, which you can): <span class="math-container">$$b+c\leq \sqrt{a^2+b^2}+\sqrt{a^2+c^2}$$</span></p> <p>and this is immediate, since <span class="math-container">$b\le |b| \le \sqrt{a^2+b^2}$</span> and <span class="math-container">$c\le |c| \le \sqrt{a^2+c^2}$</span>.</p>
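<p>A random numerical spot-check of the original inequality (the sampling range is arbitrary):</p>

```python
import random, math

# spot-check |√(a²+b²) - √(a²+c²)| ≤ |b - c| over random real a, b, c
random.seed(1)
ok = True
for _ in range(10000):
    a = random.uniform(-10, 10)
    b = random.uniform(-10, 10)
    c = random.uniform(-10, 10)
    lhs = abs(math.sqrt(a * a + b * b) - math.sqrt(a * a + c * c))
    ok = ok and lhs <= abs(b - c) + 1e-12

assert ok
```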
152,405
<p>This question complements a previous MO question: <a href="https://mathoverflow.net/questions/95837/examples-of-theorems-with-proofs-that-have-dramatically-improved-over-time">Examples of theorems with proofs that have dramatically improved over time</a>.</p> <p>I am looking for a list of</p> <h3>Major theorems in mathematics whose proofs are very hard but were not dramatically improved over the years.</h3> <p>(So a new dramatically simpler proof may represent a much-hoped-for breakthrough.) Cases where the original proof was very hard, dramatic improvements were found, but the proof remained very hard may also be included.</p> <p>To limit the scope of the question:</p> <p>a) Let us consider only theorems proved at least 25 years ago. (So if you have a good example from 1995 you may add a comment, but for an answer please wait until 2020.)</p> <p>b) Major results only.</p> <p>c) Results with very hard proofs.</p> <p>As usual, one example (or a few related ones) per post.</p> <p>A similar question was asked before: <a href="https://mathoverflow.net/questions/44593/still-difficult-after-all-these-years">Still Difficult After All These Years</a>. (That question referred to theorems that are 100 years old or so.)</p> <h2>Answers</h2> <p>(Updated Oct 3 '15)</p> <p><strong>1)</strong> <a href="https://mathoverflow.net/a/152457">Uniformization theorem for Riemann surfaces</a> (Koebe and Poincare, 19th century)</p> <p><strong>2)</strong> <a href="https://mathoverflow.net/a/152546">Thue-Siegel-Roth theorem</a> (Thue 1909; Siegel 1921; Roth 1955)</p> <p><strong>3)</strong> <a href="https://mathoverflow.net/a/152412">Feit-Thompson theorem</a> (1963);</p> <p><strong>4)</strong> <a href="https://mathoverflow.net/a/152419">Kolmogorov-Arnold-Moser (or KAM) theorem</a> (1954, Kolmogorov; 1962, Moser; 1963, Arnold)</p> <p><strong>5)</strong> <a href="https://mathoverflow.net/a/218043/1532">The construction of the <span class="math-container">$\Phi^4_3$</span> quantum field theory model.
This was done in the early seventies by Glimm, Jaffe, Feldman, Osterwalder, Magnen and Seneor.</a> (NEW)</p> <p><strong>6)</strong> <a href="https://mathoverflow.net/a/152752">Weil conjectures</a> (1974, Deligne)</p> <p><strong>7)</strong> <a href="https://mathoverflow.net/a/152418">The four color theorem</a> (1976, Appel and Haken);</p> <p><strong>8)</strong> <a href="https://mathoverflow.net/a/152613">The decomposition theorem for intersection homology</a> (1982, Beilinson-Bernstein-Deligne-Gabber); (<strong>Update:</strong> A new simpler proof by de Cataldo and Migliorini is now available)</p> <p><strong>9)</strong> <a href="https://mathoverflow.net/a/218061/1532">Poincare conjecture for dimension four</a>, 1982 Freedman (NEW)</p> <p><strong>10)</strong> <a href="https://mathoverflow.net/a/152416">The Smale conjecture</a> 1983, Hatcher;</p> <p><strong>11)</strong> <a href="https://mathoverflow.net/a/152741">The classification of finite simple groups</a> (1983, with some completions later)</p> <p><strong>12)</strong> <a href="https://mathoverflow.net/a/152537">The graph-minor theorem</a> (1984, Robertson and Seymour)</p> <p><strong>13)</strong> <a href="https://mathoverflow.net/a/152424">Gross-Zagier formula</a> (1986)</p> <p><strong>14)</strong> <a href="https://mathoverflow.net/a/218066/1532">Restricted Burnside conjecture</a>, Zelmanov, 1990. (NEW)</p> <p><strong>15)</strong> <a href="https://mathoverflow.net/a/219182/1532">The Benedicks-Carleson theorem</a> (1991)</p> <p><strong>16)</strong> <a href="https://mathoverflow.net/a/152454">Sphere packing problem in <span class="math-container">$\mathbb{R}^3$</span>, a.k.a.
the Kepler Conjecture</a>(1999, Hales)</p> <p><strong>For the following answers some issues were raised in the comments.</strong></p> <p><a href="https://mathoverflow.net/a/152598">The Selberg Trace Formula- general case</a> (Reference to a 1983 proof by Hejhal)</p> <p><a href="https://mathoverflow.net/a/152423">Oppenheim conjecture</a> (1986, Margulis)</p> <p><a href="https://mathoverflow.net/a/152459">Quillen equivalence</a> (late 60s)</p> <p><a href="https://mathoverflow.net/a/152434">Carleson's theorem</a> (1966) (Major simplification: 2000 proof by Lacey and Thiele.)</p> <p><a href="https://mathoverflow.net/a/152432">Szemerédi’s theorem</a> (1974) (Major simplifications: ergodic theoretic proofs; modern proofs based on hypergraph regularity, and polymath1 proof for density Hales Jewett.)</p> <p><strong>Additional answer:</strong></p> <p><a href="https://mathoverflow.net/a/152658">Answer about fully formalized proofs for 4CT and FT theorem</a>.</p>
Vít Tuček
6,818
<p>Every $L^2$ function on the circle is almost everywhere the point-wise limit of its Fourier series. These days this is known as <a href="https://en.wikipedia.org/wiki/Carleson%27s_theorem">Carleson's theorem</a>.</p>
152,405
<p>This question complements a previous MO question: <a href="https://mathoverflow.net/questions/95837/examples-of-theorems-with-proofs-that-have-dramatically-improved-over-time">Examples of theorems with proofs that have dramatically improved over time</a>.</p> <p>I am looking for a list of</p> <h3>Major theorems in mathematics whose proofs are very hard but were not dramatically improved over the years.</h3> <p>(So a new dramatically simpler proof may represent a much-hoped-for breakthrough.) Cases where the original proof was very hard, dramatic improvements were found, but the proof remained very hard may also be included.</p> <p>To limit the scope of the question:</p> <p>a) Let us consider only theorems proved at least 25 years ago. (So if you have a good example from 1995 you may add a comment, but for an answer please wait until 2020.)</p> <p>b) Major results only.</p> <p>c) Results with very hard proofs.</p> <p>As usual, one example (or a few related ones) per post.</p> <p>A similar question was asked before: <a href="https://mathoverflow.net/questions/44593/still-difficult-after-all-these-years">Still Difficult After All These Years</a>. (That question referred to theorems that are 100 years old or so.)</p> <h2>Answers</h2> <p>(Updated Oct 3 '15)</p> <p><strong>1)</strong> <a href="https://mathoverflow.net/a/152457">Uniformization theorem for Riemann surfaces</a> (Koebe and Poincare, 19th century)</p> <p><strong>2)</strong> <a href="https://mathoverflow.net/a/152546">Thue-Siegel-Roth theorem</a> (Thue 1909; Siegel 1921; Roth 1955)</p> <p><strong>3)</strong> <a href="https://mathoverflow.net/a/152412">Feit-Thompson theorem</a> (1963);</p> <p><strong>4)</strong> <a href="https://mathoverflow.net/a/152419">Kolmogorov-Arnold-Moser (or KAM) theorem</a> (1954, Kolmogorov; 1962, Moser; 1963, Arnold)</p> <p><strong>5)</strong> <a href="https://mathoverflow.net/a/218043/1532">The construction of the <span class="math-container">$\Phi^4_3$</span> quantum field theory model.
This was done in the early seventies by Glimm, Jaffe, Feldman, Osterwalder, Magnen and Seneor.</a> (NEW)</p> <p><strong>6)</strong> <a href="https://mathoverflow.net/a/152752">Weil conjectures</a> (1974, Deligne)</p> <p><strong>7)</strong> <a href="https://mathoverflow.net/a/152418">The four color theorem</a> (1976, Appel and Haken);</p> <p><strong>8)</strong> <a href="https://mathoverflow.net/a/152613">The decomposition theorem for intersection homology</a> (1982, Beilinson-Bernstein-Deligne-Gabber); (<strong>Update:</strong> A new simpler proof by de Cataldo and Migliorini is now available)</p> <p><strong>9)</strong> <a href="https://mathoverflow.net/a/218061/1532">Poincare conjecture for dimension four</a>, 1982 Freedman (NEW)</p> <p><strong>10)</strong> <a href="https://mathoverflow.net/a/152416">The Smale conjecture</a> 1983, Hatcher;</p> <p><strong>11)</strong> <a href="https://mathoverflow.net/a/152741">The classification of finite simple groups</a> (1983, with some completions later)</p> <p><strong>12)</strong> <a href="https://mathoverflow.net/a/152537">The graph-minor theorem</a> (1984, Robertson and Seymour)</p> <p><strong>13)</strong> <a href="https://mathoverflow.net/a/152424">Gross-Zagier formula</a> (1986)</p> <p><strong>14)</strong> <a href="https://mathoverflow.net/a/218066/1532">Restricted Burnside conjecture</a>, Zelmanov, 1990. (NEW)</p> <p><strong>15)</strong> <a href="https://mathoverflow.net/a/219182/1532">The Benedicks-Carleson theorem</a> (1991)</p> <p><strong>16)</strong> <a href="https://mathoverflow.net/a/152454">Sphere packing problem in <span class="math-container">$\mathbb{R}^3$</span>, a.k.a.
the Kepler Conjecture</a> (1999, Hales)</p> <p><strong>For the following answers some issues were raised in the comments.</strong></p> <p><a href="https://mathoverflow.net/a/152598">The Selberg Trace Formula (general case)</a> (Reference to a 1983 proof by Hejhal)</p> <p><a href="https://mathoverflow.net/a/152423">Oppenheim conjecture</a> (1986, Margulis)</p> <p><a href="https://mathoverflow.net/a/152459">Quillen equivalence</a> (late 60s)</p> <p><a href="https://mathoverflow.net/a/152434">Carleson's theorem</a> (1966) (Major simplification: 2000 proof by Lacey and Thiele.)</p> <p><a href="https://mathoverflow.net/a/152432">Szemerédi’s theorem</a> (1974) (Major simplifications: ergodic theoretic proofs; modern proofs based on hypergraph regularity; and the polymath1 proof for density Hales-Jewett.)</p> <p><strong>Additional answer:</strong></p> <p><a href="https://mathoverflow.net/a/152658">Answer about fully formalized proofs for 4CT and FT theorem</a>.</p>
Gil Kalai
1,532
<p><strong><a href="http://en.wikipedia.org/wiki/Classification_of_finite_simple_groups">The classification of finite simple groups</a></strong></p> <p>This theorem describes all finite simple groups completely: a finite simple group is either a cyclic group of prime order, an alternating group, a group of Lie type (including some twisted families), or one of 26 sporadic simple groups. The proof extends over many papers by many people. The completion of the project was announced in 1983, but an incomplete part was replaced by a complete proof only more recently. There was a project for a major revised and simplified proof, but it was also very hard. Here is a link to a <a href="http://www.ams.org/journals/bull/2001-38-03/S0273-0979-01-00909-0/S0273-0979-01-00909-0.pdf">review paper of Solomon</a>. (The answer was suggested by Victor Protsak.)</p>
27,490
<h2>Motivation</h2> <p>The common functors from topological spaces to other categories have geometric interpretations. For example, the fundamental group is how loops behave in the space, and higher homotopy groups are how higher dimensional spheres behave (up to homotopy in both cases, of course). Even better, for nice enough spaces the (integral) homology groups count <span class="math-container">$n$</span>-dimensional holes.</p> <hr /> <p>A groupoid is a category where all morphisms are invertible. Given a space <span class="math-container">$X$</span>, the fundamental groupoid of <span class="math-container">$X$</span>, <span class="math-container">$\Pi_1(X)$</span>, is the category whose objects are the points of <span class="math-container">$X$</span> and the morphisms are homotopy classes of maps rel end points. It's clear that <span class="math-container">$\Pi_1(X)$</span> is a groupoid and the group object at <span class="math-container">$x \in X$</span> is simply the fundamental group <span class="math-container">$\pi_1(X,x)$</span>. My question is:</p> <blockquote> <p>Is there a geometric interpretation <span class="math-container">$\Pi_1(X)$</span> analogous to the geometric interpretation of homotopy groups and homology groups explained above?</p> </blockquote>
David Roberts
4,177
<p>For an even more geometric application, the fundamental groupoid tells you all about parallel transport in bundles, as long as the transport is independent of the actual path and relies only on the homotopy class of the path. This is the case for flat connections (in particular on vector bundles). For appropriate spaces, one can define a flat connection on a line bundle as a character of the fundamental group, but it is more natural to define it as a functor from the fundamental groupoid to the groupoid $core(1dVect)$ of 1-dimensional spaces and isomorphisms between them (this latter groupoid is equivalent to $GL_1$ considered as a groupoid with one object). A point is sent to the fibre over that point, and a homotopy class of paths is sent to the isomorphism between the fibres induced by the flat connection. This has manifest extensions to vector bundles of higher rank $n$, replacing $core(1dVect)$ by the groupoid $core(ndVect)$.</p> <p>There are versions of this for covering spaces, but vector bundles are perhaps more geometric.</p>
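As a toy illustration of the functor described above (a sketch, not part of the original answer): over the circle, a homotopy class of loops is determined by its winding number, and a flat connection sends a loop of winding number $k$ to $H^k$, where $H$ is the holonomy around the generating loop. The specific rank-2 holonomy matrix below is an arbitrary choice for the sketch.

```python
import numpy as np

# Holonomy of a flat connection around the generating loop of S^1;
# this particular matrix (rotation by 90 degrees) is an arbitrary choice.
H = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def transport(winding: int) -> np.ndarray:
    """Parallel transport along a loop of the given winding number."""
    if winding >= 0:
        return np.linalg.matrix_power(H, winding)
    return np.linalg.inv(np.linalg.matrix_power(H, -winding))

# Functoriality: concatenating paths composes the transports...
assert np.allclose(transport(2) @ transport(3), transport(5))
# ...and transport depends only on the homotopy class (the winding
# number): here any loop of winding 4 acts trivially since H^4 = I.
assert np.allclose(transport(4), np.eye(2))
```

The assertions check exactly the two properties the answer emphasizes: the assignment is a functor on the fundamental groupoid, and it factors through homotopy classes of paths.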
2,115,199
<p>Let $A \in \text{End}(V)$ be an endomorphism, and let $\mathbb Q[A]$ be the subalgebra of $\text{End}(V)$ generated by $A$.</p> <p>Is $\mathbb Q[A]$ always at most $\dim V$-dimensional? How can one prove this?</p>
martini
15,379
<p>Note that $\def\QA{\mathbf Q[A]}\QA$ is generated as a vector space by the powers $$ \{ A^k: k \in \mathbf N \} $$ of $A$. Recall that by the Cayley-Hamilton theorem, we have $$ \chi_A(A) = 0$$ where $\chi_A$ is the characteristic polynomial of $A$. Writing $$ \chi_A(X) = \det(A-X) = (-1)^n X^n + \sum_{i=0}^{n-1} a_i X^i$$ where $n := \dim V$, we have $$ 0 = (-1)^n A^n + \sum_{i=0}^{n-1} a_i A^i \iff A^n = (-1)^{n-1} \sum_{i=0}^{n-1} a_i A^i $$ That is, $$ A^n \in \def\s{\mathop{\rm span}}\s\{A^i: i = 0,\ldots, n-1\}. $$ Induction now gives (see below) $$ A^k \in \s\{A^i:0 \le i \le n-1\}, \quad \text{for all $k \ge n$}. $$ That is, $$ \QA = \s\{A^k: k \in \mathbf N\} = \s\{A^i: 0 \le i \le n -1 \} $$ As $\QA$ therefore has a generating set with $n$ elements, it is at most $n = \dim V$-dimensional.</p> <hr> <p>Addendum: Suppose that $\ell \ge n$ is given and that $$ A^k \in \s\{A^i: 0 \le i \le n-1\} $$ holds for all $k &lt; \ell$. To prove that $A^\ell \in \s\{A^i: i &lt; n\}$, multiply $$ A^n = (-1)^{n-1} \sum_{i=0}^{n-1} a_iA^i $$ by $A^{\ell-n}$, giving $$ A^\ell = (-1)^{n-1} \sum_{i=0}^{n-1} a_i A^{\ell-(n-i)} $$ As $\ell - (n-i) &lt; \ell$ for each $i$, the summands on the right hand side are in $\s\{A^j: 0 \le j \le n-1\}$ by the induction hypothesis, therefore $$ A^\ell = (-1)^{n-1} \sum_{i=0}^{n-1} a_i A^{\ell-(n-i)} \in \s\{A^j: 0 \le j \le n-1\}$$</p>
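A quick numerical check of the bound (a sketch, not part of the proof; the random integer matrix and the dimension $n=4$ are arbitrary choices): flatten the powers $A^0, A^1, \dots$ into vectors of $\mathbf R^{n^2}$ and compute the dimension of their span.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                    # n = dim V; an arbitrary choice
A = rng.integers(-3, 4, size=(n, n)).astype(float)

# Flatten A^0, A^1, ..., A^(2n) into vectors in R^(n^2); the dimension
# of Q[A] is the dimension of their span, and by the argument above
# (Cayley-Hamilton) it can never exceed n.
powers = np.array([np.linalg.matrix_power(A, k).ravel()
                   for k in range(2 * n + 1)])
dim_QA = np.linalg.matrix_rank(powers)

print(dim_QA)        # at most n = 4 (and at least 1, since A^0 = I)
assert 1 <= dim_QA <= n
```

Taking powers well past $A^{n-1}$ and still seeing rank at most $n$ is exactly the content of the induction step in the addendum.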
2,069,507
<p><a href="https://i.stack.imgur.com/B4b88.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B4b88.png" alt="The image of parallelogram for help"></a></p> <p>Let's say we have a parallelogram $\text{ABCD}$.</p> <p>$\triangle \text{ADC}$ and $\triangle \text{BCD}$ are on the same base and between the two parallel lines $\text{AB}$ and $\text{CD}$, so $$ar\triangle \text{ADC}=ar\triangle \text{BCD}$$ Now the things that should be noticed are:</p> <p>In $\triangle \text{ADC}$ and $\triangle \text{BCD}$:</p> <p>$$\text{AD}=\text{BC}$$ $$\text{DC}=\text{DC}$$ $$ar\triangle \text{ADC}=ar\triangle \text{BCD}.$$</p> <p>Now in these two different triangles, two sides are equal and their areas are also equal, so the third side should also be equal, i.e. $\text{AC}=\text{BD}$, which would make this parallelogram a rectangle.</p> <p>Isn't this a claim that every parallelogram is a rectangle, or that a (non-rectangular) parallelogram cannot exist?</p>
N. S.
9,176
<p>This is a completion to the answer by TheLoneWolf.</p> <p><strong>Lemma</strong> Let $ABC$ and $DEF$ be two triangles such that $AB=DE$ and $AC=DF$. Then the two triangles have the same area if and only if $\angle A=\angle D$ or $\angle A+\angle D=180^\circ$.</p> <p><strong>Proof:</strong></p> <p>$$\mbox{Area}(ABC)= \mbox{Area}(DEF) \Leftrightarrow \\ \frac{AB \cdot AC \cdot \sin(A)}{2} =\frac{DE \cdot DF \cdot \sin(D)}{2}\Leftrightarrow \\ \sin(A) =\sin(D)\Leftrightarrow \sin(A) -\sin(D)=0 \Leftrightarrow \\ 2 \cos(\frac{A+D}{2})\sin(\frac{A-D}{2})=0\Leftrightarrow \\ \frac{A+D}{2}=90^\circ \mbox{ or } \frac{A-D}{2}=0^\circ$$ where the last line follows from the fact that all angles are between $0^\circ$ and $180^\circ$.</p>
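The lemma's "equal or supplementary" dichotomy can be sanity-checked numerically (a sketch with arbitrarily chosen side lengths), using the same area formula $\frac{1}{2}bc\sin(\theta)$ as in the proof:

```python
import math

def area(b: float, c: float, angle_deg: float) -> float:
    """Area of a triangle with sides b, c and included angle in degrees."""
    return 0.5 * b * c * math.sin(math.radians(angle_deg))

b, c = 3.0, 5.0
# Supplementary included angles give the same area, as in the lemma:
assert math.isclose(area(b, c, 50.0), area(b, c, 130.0))
# Angles that are neither equal nor supplementary give different areas:
assert not math.isclose(area(b, c, 50.0), area(b, c, 70.0))
```

This is exactly why the asker's parallelogram argument fails: the two triangles have equal sides and equal areas with supplementary (not equal) included angles, so the third sides need not match.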
1,043,266
<p>Look carefully at this problem (I have solved it on my own; I am only asking about the magical coincidence):</p> <blockquote> <p>A bag contains 6 notes of 100 Rs., 2 notes of 500 Rs., and 3 notes of 1000 Rs. Mr. A draws two notes from the bag, then Mr. B draws 2 notes from the bag.<br> (i) Find the probability that A has drawn 600 Rs.<br> (ii) Find the probability that B has drawn 600 Rs.<br> (iii) B has drawn 600 Rs.; find the probability that A has also drawn 600 Rs.<br> (iv) A has drawn 600 Rs.; find the probability that B has drawn 600 Rs.<br></p> <hr> <p>(i) $$P=\frac{\binom61\binom21}{\binom{11}2}=\frac{12}{55}$$ (ii) <strong>Total Probability Theorem:</strong> Considering the various cases, depending upon what A chooses<br> (in the order $2H,2F,2T,1H1T,1H1F,1F1T$,<br> where H=100 (<strong>H</strong>undred), F=500 (<strong>F</strong>ive-hundred), T=1000 (<strong>T</strong>housand)): $$P=\frac{\binom41\binom21}{\binom92}\frac{\binom62}{\binom{11}2} +\frac{0}{\binom92}\frac{\binom22}{\binom{11}2} +\frac{\binom61\binom21}{\binom92}\frac{\binom32}{\binom{11}2} +\frac{\binom51\binom21}{\binom92}\frac{\binom61\binom31}{\binom{11}2} +\frac{\binom51\binom11}{\binom92}\frac{\binom61\binom21}{\binom{11}2} +\frac{\binom61\binom11}{\binom92}\frac{\binom21\binom31}{\binom{11}2}=\frac{12}{55}$$ <strong>Oh my God! What's happening here?</strong><br> (iii) <strong>Bayes' Theorem:</strong> $$P=\frac{\frac{\binom51\binom11}{\binom92}\frac{\binom61\binom21}{\binom{11}2}}{\frac{12}{55}}=\frac5{36}$$ (iv) <strong>Conditional Probability:</strong> $$P(B|A)=\frac{P(AB)}{P(A)}=\frac{\frac{\binom51\binom11}{\binom92}\frac{\binom61\binom21}{\binom{11}2}}{\frac{12}{55}}=\frac5{36}$$ <strong>Not again, you must be joking!</strong></p> </blockquote> <p>Why doesn't it make any difference? Think intuitively: if A has taken some money, there are fewer notes left, so there should be a difference in probability. Why doesn't the order matter here?</p>
Graham Kemp
135,106
<blockquote> <p>Why doesn't it makes any difference? Think Intutively, if A has taken some money there must be less notes so there must be difference in probability, why doesn't order matter here?</p> </blockquote> <p>Think counterintuitively. &nbsp; A has taken two notes but we <em>don't know what they are</em> when measuring the probability that B has a certain amount.</p> <p>It's like shuffling a deck of cards. &nbsp; It should not matter if we shuffle the deck or build the deck by picking cards out of a bag one after another. &nbsp; Either way there's an <em>equal</em> probability that each of the 52 cards are in <em>any</em> particular position in the deck.</p> <p>Does your intuition suggest that because <em>some</em> card is on top of the deck it affects the probability that the second card is an ace &mdash; when we don't know what that card might be? &nbsp; Are the following answers surprising?</p> <ol><li>Find the probability that an Ace is on top of the deck.$$\mathsf P(A_1)=\frac{4}{52}$$</li><li>Find the probability that an Ace is second from the top.$$\begin{align}\mathsf P(A_2) &amp; = \mathsf P(A_1)\mathsf P(A_2\mid A_1)+\mathsf P(\neg A_1)\mathsf P(A_2\mid\neg A_1) \\ &amp; = \frac{4}{52}\frac{3}{51}+\frac{48}{52}\frac{4}{51} \\ &amp; =\frac{4}{52}\end{align}$$</li><li>An Ace is on top of the deck, find the probability that another Ace is second from the top.$$\mathsf P(A_2\mid A_1)=\frac{3}{51}$$</li><li>An Ace is second in the deck, find the probability that another Ace is on top.$$\mathsf P(A_1\mid A_2)=\frac{\mathsf P(A_1)\mathsf P(A_2\mid A_1)}{\mathsf P(A_2)}=\frac{\frac{4}{52}\frac{3}{51}}{\frac{4}{52}}=\frac{3}{51}$$</li></ol>
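The symmetry argued in the answer can be confirmed by exact enumeration (a sketch, not part of the original answer): treat the four drawn notes as an ordered sample without replacement, all $11\cdot10\cdot9\cdot8 = 7920$ of which are equally likely.

```python
from fractions import Fraction
from itertools import permutations

notes = [100] * 6 + [500] * 2 + [1000] * 3   # 6 H, 2 F, 3 T

count_A = count_B = count_AB = 0
total = 0
for draw in permutations(range(11), 4):      # ordered draws, all equally likely
    total += 1
    a = notes[draw[0]] + notes[draw[1]]      # Mr. A's two notes
    b = notes[draw[2]] + notes[draw[3]]      # Mr. B's two notes
    count_A += (a == 600)
    count_B += (b == 600)
    count_AB += (a == 600 and b == 600)

pA = Fraction(count_A, total)
pB = Fraction(count_B, total)
print(pA, pB)                          # 12/55 12/55 -- the order does not matter
print(Fraction(count_AB, count_B))     # 5/36 = P(A drew 600 | B drew 600)
print(Fraction(count_AB, count_A))     # 5/36 = P(B drew 600 | A drew 600)
```

The four printed values reproduce (i)-(iv) of the question exactly, just as the card-deck symmetry predicts.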
3,481,348
<p>Consider <span class="math-container">$H = \mathbb{Z}_{30}$</span> and <span class="math-container">$G = \mathbb{Z}_{15}$</span> as additive abelian groups. Then how do I show that <span class="math-container">${\rm Aut}(H) \cong {\rm Aut}(G)$</span>?</p> <p>By the Chinese remainder theorem, I know that <span class="math-container">$\mathbb{Z}_{30} \cong \mathbb{Z}_{2} \times \mathbb{Z}_{15}$</span>. Intuitively I know that the only automorphism of <span class="math-container">$\mathbb{Z}_{2}$</span> is the identity automorphism, so we can construct a bijective map <span class="math-container">$\psi : {\rm Aut}(G) \rightarrow {\rm Aut}(H)$</span> defined by <span class="math-container">$\psi(\phi) = \phi^{*}$</span>, where <span class="math-container">$\phi^{*}((a,b)) = (a, \phi(b))$</span>, and this <span class="math-container">$\psi$</span> is an isomorphism. (Here we identify <span class="math-container">$\mathbb{Z}_{30}$</span> with <span class="math-container">$\mathbb{Z}_{2} \times \mathbb{Z}_{15}$</span>.)</p> <p>Is the definition of this map sufficient to claim that <span class="math-container">${\rm Aut}(G) \cong {\rm Aut}(H)$</span>? Also, how does one in general find the automorphism group of <span class="math-container">$\bigoplus_{k=1}^{r} \mathbb{Z}_{n_{k}}$</span>, where the <span class="math-container">$n_{k}$</span>'s are not necessarily coprime? </p>
Shaun
104,041
<p><strong>Hint:</strong> <span class="math-container">$${\rm Aut}(\Bbb Z_n)\cong U(n),$$</span></p> <p>where <span class="math-container">$U(n)$</span> is the group of units modulo <span class="math-container">$n$</span>.</p>
3,481,348
<p>Consider <span class="math-container">$H = \mathbb{Z}_{30}$</span> and <span class="math-container">$G = \mathbb{Z}_{15}$</span> as additive abelian groups. Then how do I show that <span class="math-container">${\rm Aut}(H) \cong {\rm Aut}(G)$</span>?</p> <p>By the Chinese remainder theorem, I know that <span class="math-container">$\mathbb{Z}_{30} \cong \mathbb{Z}_{2} \times \mathbb{Z}_{15}$</span>. Intuitively I know that the only automorphism of <span class="math-container">$\mathbb{Z}_{2}$</span> is the identity automorphism, so we can construct a bijective map <span class="math-container">$\psi : {\rm Aut}(G) \rightarrow {\rm Aut}(H)$</span> defined by <span class="math-container">$\psi(\phi) = \phi^{*}$</span>, where <span class="math-container">$\phi^{*}((a,b)) = (a, \phi(b))$</span>, and this <span class="math-container">$\psi$</span> is an isomorphism. (Here we identify <span class="math-container">$\mathbb{Z}_{30}$</span> with <span class="math-container">$\mathbb{Z}_{2} \times \mathbb{Z}_{15}$</span>.)</p> <p>Is the definition of this map sufficient to claim that <span class="math-container">${\rm Aut}(G) \cong {\rm Aut}(H)$</span>? Also, how does one in general find the automorphism group of <span class="math-container">$\bigoplus_{k=1}^{r} \mathbb{Z}_{n_{k}}$</span>, where the <span class="math-container">$n_{k}$</span>'s are not necessarily coprime? </p>
Community
-1
<p>For your first question, we have <span class="math-container">$\Bbb Z_{30}^×\cong(\Bbb Z_2\times\Bbb Z_{15})^×\cong\Bbb Z_2^×\times\Bbb Z_{15}^×\cong\Bbb Z_{15}^×$</span>.</p> <p>And <span class="math-container">$\operatorname {Aut}(\Bbb Z_n)\cong\Bbb Z_n^×$</span>.</p>
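A computational sanity check of the chain of isomorphisms above (a sketch, not part of the original answer). Since <span class="math-container">$\operatorname{Aut}(\Bbb Z_n)\cong\Bbb Z_n^×$</span>, and finite abelian groups with the same multiset of element orders are isomorphic, it suffices to compare the element orders of the two unit groups:

```python
from math import gcd

def units(n: int) -> list:
    """Elements of the multiplicative group of units mod n."""
    return [a for a in range(1, n) if gcd(a, n) == 1]

def order(a: int, n: int) -> int:
    """Multiplicative order of a modulo n."""
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

# Aut(Z_n) ≅ (Z_n)^x; compare the order multisets of the two unit groups.
o30 = sorted(order(a, 30) for a in units(30))
o15 = sorted(order(a, 15) for a in units(15))
print(o30)   # [1, 2, 2, 2, 4, 4, 4, 4]
print(o15)   # [1, 2, 2, 2, 4, 4, 4, 4] -- so both Aut groups are Z_2 x Z_4
```

Both groups have 8 elements with the same order profile, matching the claimed isomorphism (both are isomorphic to <span class="math-container">$\Bbb Z_2 \times \Bbb Z_4$</span>).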
2,437,026
<p>Suppose that:</p> <p>$$ X \sim Bern(p) $$</p> <p>Then, intuitively, $X^2 = X \sim Bern(p)$. However, when I try to think of it logically, it doesn't make any sense. </p> <p>As an example, $X$ is $1$ with probability $p$ and $0$ with probability $1-p$. Then, $X^2 = X\cdot X$ is $1$ only when both $X$'s are $1$, which occurs with probability $p^2$, and so it doesn't seem like $X^2 = X$. Can someone tell me what is wrong here?</p>
dezdichado
152,744
<p>Your logic is flawed. When $X$ is Bernoulli, the events $\{\omega: X^2(\omega) = 1\}$ and $\{\omega: X(\omega) = 1\}$ are precisely the same event: since $X(\omega) \in \{0,1\}$, we have $X^2(\omega) = X(\omega)$ pointwise. The probability $p^2$ would arise only for the product of two <em>independent</em> copies of $X$, not for $X \cdot X$ itself.</p>
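A small simulation makes the distinction concrete (a sketch; the value of $p$, the seed, and the sample size are arbitrary choices): squaring the same draw leaves it unchanged, while multiplying two independent draws is what produces $p^2$.

```python
import random

p = 0.3
random.seed(1)
N = 100_000
xs = [1 if random.random() < p else 0 for _ in range(N)]

# X^2 is computed pointwise from the SAME draw, so it equals X exactly:
assert all(x * x == x for x in xs)

# Multiplying two *independent* Bern(p) draws is a different random
# variable -- that is where p^2 would come from:
ys = [1 if random.random() < p else 0 for _ in range(N)]
mean_sq = sum(x * x for x in xs) / N                 # close to p
mean_prod = sum(x * y for x, y in zip(xs, ys)) / N   # close to p^2
print(mean_sq, mean_prod)
```

The first empirical mean hovers near $p = 0.3$ and the second near $p^2 = 0.09$, which is exactly the confusion in the question: $X \cdot X$ uses one draw, not two independent ones.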