3,109,036
<p>I must prove this tautology using logical equivalences but I can't quite figure it out. I know it has something to do with the fact that not p and p have opposite truth values at all times. Any help would be appreciated.</p>
Bram28
256,001
<p>Use the fact that <span class="math-container">$p \rightarrow q \Leftrightarrow \neg p \lor q$</span></p> <p>Applied to your statement:</p> <p><span class="math-container">$(q \land (p \rightarrow \neg q)) \rightarrow \neg p \Leftrightarrow$</span></p> <p><span class="math-container">$\neg (q \land (p \rightarrow \neg q)) \lor \neg p \Leftrightarrow$</span></p> <p><span class="math-container">$\neg q \lor \neg (p \rightarrow \neg q) \lor \neg p \Leftrightarrow$</span></p> <p><span class="math-container">$\neg q \lor \neg (\neg p \lor \neg q) \lor \neg p \Leftrightarrow$</span></p> <p><span class="math-container">$\neg q \lor (p \land q) \lor \neg p \Leftrightarrow$</span></p> <p><span class="math-container">$((\neg q \lor p) \land (\neg q \lor q)) \lor \neg p \Leftrightarrow$</span></p> <p><span class="math-container">$((\neg q \lor p) \land \top) \lor \neg p \Leftrightarrow$</span></p> <p><span class="math-container">$(\neg q \lor p) \lor \neg p \Leftrightarrow$</span></p> <p><span class="math-container">$\neg q \lor p \lor \neg p$</span></p> <p>... and now you're almost there ... do you see it?</p>
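Not part of the original answer, but the claim that the statement is a tautology can also be brute-force checked with a truth table; the Python sketch below (helper names are mine) tests all four assignments of $p,q$.

```python
from itertools import product

def implies(a, b):
    # material conditional: a -> b is (not a) or b
    return (not a) or b

# Check (q AND (p -> not q)) -> not p under every truth assignment.
for p, q in product([False, True], repeat=2):
    assert implies(q and implies(p, not q), not p), (p, q)
print("tautology confirmed")
```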
2,523,000
<p>Three players are playing a game and have a fair six-sided die. This is an arbitrary game conditioned on the following rule:</p> <p>Player 1 rolls first, Player 2 rolls until he has a different number from player 1, and Player 3 rolls until he has a different number from players 1 and 2. </p> <p>$\underline{Question}$: How do I go about calculating the expected value of each of the players' rolls? </p> <p>Let $X_i =$ the number appearing for player $i=1,2,3$. I can get the first one, but after that I get stuck in setting up the equation for the next:</p> <p>$\mathbb{E}[X_1] = \frac{1+2+...+6}{6} = 3.5$,</p> <p>$\mathbb{E}[X_2 | X_2 \ne X_1]$ =? Is this what I am looking for, and if so, any help calculating it would be appreciated. </p> <p>Best wishes, I.</p>
David K
139,123
<p>You might show that the distribution of numbers that the players receive is the same as if they each (one at a time) chose a numbered ball from a bag without replacement, where the bag originally contained six balls numbered $1$ through $6.$</p> <p>Then consider whether there are any two balls such that one is more likely than the other to be selected second. Also consider the same question with "third" instead of "second."</p>
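To complement the hint (this is my addition, not the answerer's): once the re-rolling game is identified with drawing balls without replacement, the symmetry can be checked by brute-force enumeration of all $6!$ orderings.

```python
from itertools import permutations
from fractions import Fraction

# Enumerate all orderings of balls 1..6 drawn without replacement;
# by symmetry, every draw position has the same expected value, 7/2.
perms = list(permutations(range(1, 7)))
for position in range(3):
    e = Fraction(sum(p[position] for p in perms), len(perms))
    assert e == Fraction(7, 2)
print("E[X_1] = E[X_2] = E[X_3] = 7/2")
```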
147,965
<p>What is the method to count intersection multiplicities? For example, suppose we have the projective line $x=0$ in $\mathbb{P}^{2}$ and the curve $V(z^{2}y^{2}-x^{4}) \subseteq \mathbb{P}^{2}$.</p> <p>Clearly their intersection consists of two points $p=[0:1:0]$ and $r=[0:0:1]$. So, for example, Bezout's theorem says that the sum of the intersection multiplicities (at $p$ and $r$) is equal to $4$. Is there a way to know exactly what the multiplicities at $p$ and $r$ are? </p>
guaraqe
28,818
<p><strong>For the first one</strong>:</p> <p>We have a sum of the form $\sum_{t=X}^{100} t$. Now, let's write it as: $$ \frac{\sum_{t=X}^{100} t + \sum_{t=X}^{100} t}{2}=\frac{\sum_{t=X}^{100} t + \sum_{t=X}^{100} (100+X-t)}{2} $$ The second sum comes from reversing the order of summation: now the first term is $100$ and the last one is $X$, as you can verify. It is the same thing as if we took the first sequence and copied it below in reverse: $$ X,X+1,...,99,100 $$ $$ 100,99,...,X+1,X $$ As you can see, the sum in each column is constant! Now we put both sums together and we have: $$ \frac{\sum_{t=X}^{100} (100+X)}{2}=\frac{(100+X)(100-X+1)}{2} $$</p>
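As a quick sanity check (my addition), the closed form can be compared against the directly computed sum for every starting point:

```python
# Verify (100+X)(100-X+1)/2 against the direct sum for X = 1..100.
# All quantities are integers, so // is exact here.
for X in range(1, 101):
    assert sum(range(X, 101)) == (100 + X) * (100 - X + 1) // 2
print("closed form verified for X = 1..100")
```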
2,242,865
<p>I have seen proofs using the delta-epsilon definition of continuity, and they make perfect sense, but I have not found one proof using the sequential definition of continuity. </p> <p>For example, when given functions, $f$ and $g$ that are continuous on [$a,b$], prove that the function $h=f+g$ is also continuous on [$a,b$]. I also have not seen a proof of $fg$ being continuous on [$a,b]$.</p> <p>Does a proof exist that does not use the delta-epsilon definition? If so, is it a less concrete proof when using the sequential definition?</p>
helloworld112358
300,021
<p>You could certainly use the sequence definition of continuity, if you are already willing to accept the sum and product rules for limits of sequences. For instance:</p> <p>Let $f$ and $g$ be continuous at $a$ and let $a_n\to a$. Then $$\lim_{n\to\infty}f(a_n)g(a_n)=\lim_{n\to\infty}f(a_n)\lim_{n\to\infty}g(a_n)=f(a)g(a).$$ Thus, $fg$ is continuous at $a$. You can similarly show $f+g$ is continuous at $a$.</p> <p>The reason it is not typically done this way is that the $\delta$-$\epsilon$ proof of continuity of products of continuous functions is almost identical to the $\epsilon$-$N$ proof that the limit of a product of sequences is the product of the limits. One of my professors likes to refer to this sort of thing as "conservation of difficulty". Sure, we can rely on the sequence results, but all that does is hide the difficulty.</p>
261,461
<p>I would like a good <strong>hint</strong> for the following problem that takes into account the position at which I am stuck. The problem is as follows</p> <blockquote> <p>Let $\mathbb{Z}_n$ be the cyclic group of order $n.$ Find a simple graph $G$ such that $\mathrm{Aut}(G) = \mathbb{Z}_n.$ </p> </blockquote> <p>The book that I am studying suggest that somehow I get rid of the "unwanted" symmetries of the cycle graph $C_n.$ We know that $\mathrm{Aut}(C_n) = D_{2n}$ and somehow we would like to "kill" the "reflections" of $C_n.$ I don't see any way how to "kill" the reflections while "preserving" the rotational symmetries of $C_n.$ </p> <p>Any hint would be appreciated!</p>
hmakholm left over Monica
14,366
<p>Make a cycle of repeated units that are <em>not</em> individually symmetric, such as</p> <pre><code> * | [--*---*--]^n \ / * </code></pre>
114,278
<p>If I wanted to show that a group of order $66$ has an element of order $33$, could I just say that it has an element of order $3$ (by Cauchy's theorem, since $3$ is a prime and $3 \mid 66$), and similarly that there must be an element of order $11$, and then multiply these to get an element of order $33$? I'm pretty sure this is wrong, but if someone could help me out I would appreciate it. Thanks. </p>
Mariano Suárez-Álvarez
274
<p>Alternatively:</p> <ul> <li><p>show that the Sylow $11$-subgroup is normal.</p></li> <li><p>show that a cyclic group of order $11$ has no automorphisms of order $3$.</p></li> <li><p>pick an element of order $3$ in your group and conclude that it commutes with any element of order $11$.</p></li> </ul>
114,278
<p>If I wanted to show that a group of order $66$ has an element of order $33$, could I just say that it has an element of order $3$ (by Cauchy's theorem, since $3$ is a prime and $3 \mid 66$), and similarly that there must be an element of order $11$, and then multiply these to get an element of order $33$? I'm pretty sure this is wrong, but if someone could help me out I would appreciate it. Thanks. </p>
Alex Becker
8,173
<p>To complement the other answers, I will address why commutativity is necessary. Let $G$ be your group and let $o(x)$ denote the order of $x\in G$. You are making use of the statement that $o(a)o(b)=o(ab)$, but this is not necessarily true. It holds when $o(a),o(b)$ are coprime and $ab=ba$ (an interesting consequence of this is that the partial sums $s_n$ of the harmonic series are never integers for $n&gt;1$, which follows from Bertrand's postulate and $\mathbb Q/\mathbb Z$ being abelian), but there are nonabelian groups of order $66$, such as $S_3\oplus \mathbb Z_{11}$. If $ab=ba$ and $o(a),o(b)$ are coprime, then $(ab)^{o(a)o(b)}=(a^{o(a)})^{o(b)}(b^{o(b)})^{o(a)}=e\cdot e=e$, so $o(ab)\leq o(a)o(b)$. If $(ab)^n=e$ then $a^nb^n=(ab)^n=e$ so $(a^n)^{-1}=b^n$, hence $o(a^n)=o(b^n)$, and if $n&lt;o(a)o(b)$ then since $o(a),o(b)$ are coprime we have that one of $a^n,b^n\neq e$. But $o(b^n)=o(a^n)|o(a)$ since $(a^{n})^{o(a)}=(a^{o(a)})^n=e^n=e$, and similarly $o(a^n)=o(b^n)|o(b)$, so $o(a^n)=o(b^n)=1$. Thus $a^n=b^n=e$, so $n\geq o(a)o(b)$, and therefore $o(ab)=o(a)o(b)$.</p>
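A small numerical illustration of the coprime, commuting case (my sketch, using the additive cyclic group $\mathbb Z_{33}$, where everything commutes):

```python
from math import gcd

def additive_order(x, n):
    # order of x in the additive cyclic group Z_n
    return n // gcd(x, n)

n = 33
a, b = 11, 3                        # o(a) = 3, o(b) = 11, coprime orders
assert additive_order(a, n) == 3
assert additive_order(b, n) == 11
# Z_33 is abelian, so the "product" (here: sum) has order o(a)*o(b) = 33
assert additive_order((a + b) % n, n) == 33
print("order of a+b is 33")
```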
1,572,065
<p>I came across this question in Probability &amp; Statistics by Rohatgi Exercise 1.6.9.</p> <p>Let $P(A|B) = P(A|B\cap C)P(C) + P(A|B\cap C^c)P(C^c)$ with $P(A|B\cap C) \neq P(A|B)$.</p> <p>If all the three events have non-zero probability, prove that $B$ and $C$ are independent.</p> <p>After several efforts, I couldn't solve this seemingly innocuous looking question. Any idea on how to proceed will be appreciated. </p>
Varun Iyer
118,690
<p><strong>HINT</strong></p> <p>Proof by contradiction: Assume that events $B$ and $C$ are dependent events.</p> <p>Therefore we can say that:</p> <p>$$P(B \cap C) = P(B) + P(C) - P(B \cup C)$$</p> <p>Now substitute this in your formula above and if the equality doesn't hold, then the latter is true and $B$ and $C$ are independent events.</p>
3,987,416
<p>I need to prove that, for two linearly independent vectors <span class="math-container">$\mathbf{a},\mathbf{b}\in\mathbb{R}^3$</span>,</p> <p><span class="math-container">$$(\mathbf{a} · \mathbf{a}) (\mathbf{b} · \mathbf{b}) - (\mathbf{a} · \mathbf{b})(\mathbf{a} · \mathbf{b}) = (\mathbf{a} \times \mathbf{b})·(\mathbf{a} \times \mathbf{b})$$</span></p> <p>Could someone give me a demonstration of this identity? Or a hint to prove it?</p>
bonsoon
48,280
<p>Observe that <span class="math-container">$(A\times B)\cdot (A\times B) = |A\times B|^2 = |A|^2|B|^2\sin^2\theta$</span>, where <span class="math-container">$\theta$</span> is the angle between them. Now we have <span class="math-container">$\sin^2\theta = 1-\cos^2\theta$</span>, and I will remind you that <span class="math-container">$A\cdot B = |A||B|\cos\theta$</span>.</p>
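Not part of the answer, but the identity in question (Lagrange's identity in $\mathbb{R}^3$) holds exactly over the integers, so it can be spot-checked numerically; the helper functions below are my own.

```python
import random

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

# (a.a)(b.b) - (a.b)^2 == (a x b).(a x b), exactly, on integer vectors
random.seed(0)
for _ in range(1000):
    a = [random.randint(-9, 9) for _ in range(3)]
    b = [random.randint(-9, 9) for _ in range(3)]
    lhs = dot(a, a) * dot(b, b) - dot(a, b) ** 2
    rhs = dot(cross(a, b), cross(a, b))
    assert lhs == rhs
print("identity holds on 1000 random integer vector pairs")
```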
3,987,416
<p>I need to prove that, for two linearly independent vectors <span class="math-container">$\mathbf{a},\mathbf{b}\in\mathbb{R}^3$</span>,</p> <p><span class="math-container">$$(\mathbf{a} · \mathbf{a}) (\mathbf{b} · \mathbf{b}) - (\mathbf{a} · \mathbf{b})(\mathbf{a} · \mathbf{b}) = (\mathbf{a} \times \mathbf{b})·(\mathbf{a} \times \mathbf{b})$$</span></p> <p>Could someone give me a demonstration of this identity? Or a hint to prove it?</p>
Christoph
86,801
<p>The left hand side is the <a href="https://en.wikipedia.org/wiki/Gramian_matrix#Gram_determinant" rel="nofollow noreferrer">determinant of the Gramian matrix</a> of <span class="math-container">$\mathbf a$</span> and <span class="math-container">$\mathbf b$</span>: <span class="math-container">$$ \det\begin{pmatrix}\mathbf a\cdot\mathbf a &amp; \mathbf a\cdot\mathbf b \\ \mathbf b\cdot\mathbf a &amp; \mathbf b\cdot\mathbf b\end{pmatrix}. $$</span> In general, the determinant of a Gramian matrix is the square of the <span class="math-container">$n$</span>-dimensional volume of a parallelotope, in this case the square of the area of the parallelogram spanned by <span class="math-container">$\mathbf a$</span> and <span class="math-container">$\mathbf b$</span>.</p> <p>The right hand side is <span class="math-container">$\|\mathbf a\times\mathbf b\|^2$</span>, where <span class="math-container">$\|\mathbf a\times\mathbf b\|$</span> also is the area of the parallelogram spanned by <span class="math-container">$\mathbf a$</span> and <span class="math-container">$\mathbf b$</span>, hence both sides are equal.</p>
3,987,416
<p>I need to prove that, for two linearly independent vectors <span class="math-container">$\mathbf{a},\mathbf{b}\in\mathbb{R}^3$</span>,</p> <p><span class="math-container">$$(\mathbf{a} · \mathbf{a}) (\mathbf{b} · \mathbf{b}) - (\mathbf{a} · \mathbf{b})(\mathbf{a} · \mathbf{b}) = (\mathbf{a} \times \mathbf{b})·(\mathbf{a} \times \mathbf{b})$$</span></p> <p>Could someone give me a demonstration of this identity? Or a hint to prove it?</p>
robjohn
13,854
<p>For a unit vector <span class="math-container">$u$</span>, <span class="math-container">$|u\cdot v|$</span> is the length of the projection of <span class="math-container">$v$</span> onto <span class="math-container">$u$</span> (which is also the distance of <span class="math-container">$v$</span> from the plane perpendicular to <span class="math-container">$u$</span>) and <span class="math-container">$|u\times v|$</span> is the length of the projection of <span class="math-container">$v$</span> onto the plane perpendicular to <span class="math-container">$u$</span> (<span class="math-container">$u\times v$</span> is the projection of <span class="math-container">$v$</span> onto the plane rotated by <span class="math-container">$\frac\pi2=90^{\large\circ}$</span> counter-clockwise).</p> <p><a href="https://i.stack.imgur.com/4ACj1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7A95W.png" alt="enter image description here" /></a></p> <p>The Pythagorean Theorem says <span class="math-container">$$ |u\cdot v|^2+|u\times v|^2=|v|^2\tag1 $$</span> Due to the linearity of &quot;<span class="math-container">$\cdot$</span>&quot; and &quot;<span class="math-container">$\times$</span>&quot;, we can remove the restriction on <span class="math-container">$u$</span> by making the equation homogeneous; here, substitute <span class="math-container">$u\mapsto\frac{u}{|u|}$</span> (which changes nothing if <span class="math-container">$|u|=1$</span>) and multiply by <span class="math-container">$|u|^2$</span>: <span class="math-container">$$ |u\cdot v|^2+|u\times v|^2=|u|^2|v|^2\tag2 $$</span></p>
341,728
<p>Trivially, for any Lie algebra (LA) g, g':=[g,g] is an ideal. What's wrong with the following argument?</p> <p>Let g be a simple LA; then g'=g by definition of a simple LA. But [g,g]=g seems to be an alternative way of characterizing a semisimple LA. Furthermore, for sl(2) it doesn't seem to be true that $z=[x,y]$ for any $x,y,z \in$ g. Thus, the implication I'm inclined to make must be flawed, but I can't see my mistake. </p> <p>I'm following J. Fuchs' book. Thanks in advance.</p>
yatima2975
360
<p>The general method to compute the order of a permutation group is called the <a href="http://en.wikipedia.org/wiki/Schreier%E2%80%93Sims_algorithm" rel="nofollow">Schreier-Sims algorithm</a>, and involves computing a so-called <a href="http://en.wikipedia.org/wiki/Base_%28group_theory%29" rel="nofollow">base</a> and <a href="http://en.wikipedia.org/wiki/Strong_generating_set" rel="nofollow">strong generating set</a>. That's a fairly tricky procedure which is best done by a computer. In a nutshell, and in your case, it boils down to the following observations: </p> <ul> <li>The only element in your group that fixes the points 1 and 2 at the same time is the identity. That's the base.</li> <li>The pointwise stabiliser of 1 is generated by $(2 5 7)(8 6 3)$ and has order three.</li> <li>The pointwise stabiliser of 1 has index 8 in the full group (I'm not sure how to explain that nicely, or how you can figure that out, as I'm pretty new to this stuff as well).</li> </ul> <p>So: yes, there is a procedure which is a lot faster than enumerating all the possible products, but you'll probably need either brute force (the size of 24 is correct, by the way!) or some smart observations (like DonAntonio and Derek Holt provided). </p>
635,893
<p>I am trying to prove the following statement:</p> <blockquote> <p>$[m,n]$ is a set of functions defined as $f \in [m,n] \leftrightarrow f: \{1,...,m\} \rightarrow \{1,...,n\}$. The size of $[m,n]$ is $n^m$ for $m,n \in \mathbb{N}_{\gt0}$.</p> </blockquote> <p>I have tried to prove it but I am not entirely sure about its correctness:</p> <p><strong>1)</strong> For the base step, $m=n=1$.</p> <p>The size of $\{1\} \rightarrow \{1\}$ is $1$. And it equals $1^1 = 1$.</p> <p><strong>2)</strong> Then I assume that for some $m,n$ the size of $[m,n]$ is $n^m$. Now comes the first problem: should I be proving it for $[m, n+1],[m+1,n],[m+1,n+1]$, or is some of that redundant?</p> <p>When trying to prove it for $[m, n+1]$ I rewrite it as $[m,n+1] = (n+1) * (n+1) * (n+1) * ... * (n+1) = (n+1)^m$, but I don't use the induction assumption, so is that correct?</p> <p>Again $[m+1,n] = n*n*...*n = n^{m+1}$.</p> <p>Finally $[m+1,n+1] = (n+1)*(n+1)*...*(n+1) = (n+1)^{m+1}$.</p> <p>During the process I didn't really use my induction assumption, so I am worried that this wouldn't qualify as a proof by induction. So what would be the correct way to prove this? </p>
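As a sanity check on the claim itself (my addition, independent of any induction proof), the functions $\{1,\dots,m\}\to\{1,\dots,n\}$ can be enumerated directly as $m$-tuples of values:

```python
from itertools import product

# A function f: {1..m} -> {1..n} is determined by the tuple (f(1),...,f(m)),
# so counting functions is counting tuples, which product() enumerates.
for m in range(1, 5):
    for n in range(1, 5):
        count = sum(1 for _ in product(range(1, n + 1), repeat=m))
        assert count == n ** m
print("counts match n**m for all m, n in 1..4")
```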
Alec Teal
66,223
<p>Use the fundamental theorem of calculus to show it is constant.</p> <p>Imagine the next line with a strikethrough:</p> <p>Assume that $f=c$ and $c\ne 0$ and reach a contradiction, thus proving $f=0$.</p> <p><strong>Fleshing this out:</strong></p> <p>$\frac{d}{ds}(\int^s_0f(t)dt)=f(s)$</p> <p>You know that $f(s)=\frac{d}{ds}(\int^s_0f(t)dt)=\frac{d}{ds}(0) = 0$ for $s\in[0,1]$</p> <p>So you can ignore the contradiction bit; it doesn't apply. I panicked because the prospect of answering a question first made me rush :p</p>
635,893
<p>I am trying to prove the following statement:</p> <blockquote> <p>$[m,n]$ is a set of functions defined as $f \in [m,n] \leftrightarrow f: \{1,...,m\} \rightarrow \{1,...,n\}$. The size of $[m,n]$ is $n^m$ for $m,n \in \mathbb{N}_{\gt0}$.</p> </blockquote> <p>I have tried to prove it but I am not entirely sure about its correctness:</p> <p><strong>1)</strong> For the base step, $m=n=1$.</p> <p>The size of $\{1\} \rightarrow \{1\}$ is $1$. And it equals $1^1 = 1$.</p> <p><strong>2)</strong> Then I assume that for some $m,n$ the size of $[m,n]$ is $n^m$. Now comes the first problem: should I be proving it for $[m, n+1],[m+1,n],[m+1,n+1]$, or is some of that redundant?</p> <p>When trying to prove it for $[m, n+1]$ I rewrite it as $[m,n+1] = (n+1) * (n+1) * (n+1) * ... * (n+1) = (n+1)^m$, but I don't use the induction assumption, so is that correct?</p> <p>Again $[m+1,n] = n*n*...*n = n^{m+1}$.</p> <p>Finally $[m+1,n+1] = (n+1)*(n+1)*...*(n+1) = (n+1)^{m+1}$.</p> <p>During the process I didn't really use my induction assumption, so I am worried that this wouldn't qualify as a proof by induction. So what would be the correct way to prove this? </p>
Stefan Smith
55,689
<p>Define $F(x) = \int_0^x f(s)\,ds$. By the Fundamental Theorem of Calculus, $F'=f$. But by your assumption, $F(x) = 0$ for all $x \in [0,1]$. So $f \equiv 0$.</p>
1,605,100
<p>$$\lim_{n\to\infty}\sum_{k=1}^n \frac{n}{n^2+k^2}$$</p> <p>I tried this using power series, just putting $x=1$ there; I even tried thinking that subtracting $s_{n+1} - s_{n}$ would be of some help. I also thought of writing the denominator as a product of two complex numbers and then doing partial fractions, but it did not help. Any method, guys?</p> <p>Thanks in advance for guiding me on how to think about these kinds of problems.</p>
Asinomás
33,907
<p>Every such subgroup $aHa^{-1}=aHHa^{-1}$ is of the form $bH Hc$ and there are only a finite number of options for $bH$ and a finite number of options for $Hc$ (in each case there are $[G: H]$ options). So the number of subgroups is bounded by $[G:H]^2$ (this bound is most likely very weak).</p>
656,742
<p>One <a href="http://en.wikipedia.org/wiki/Nilpotent_matrix" rel="nofollow">property</a> of nilpotent matrices is that a matrix $N$ is nilpotent if and only if $\operatorname{tr}(N^k)=0$ for all $k&gt;0$. How can this property be proved?</p>
user39082
97,620
<p>This follows from Engel's theorem: a nilpotent matrix can (by a suitable base change) be brought into strictly upper triangular form (with 0's on the diagonal), from which the claim is immediate.</p>
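For the easy direction (nilpotent implies vanishing traces), the strictly triangular form makes the claim concrete; here is a small illustration of my own, with one explicit strictly upper triangular matrix:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

# A strictly upper triangular matrix is nilpotent, and every power of it is
# again strictly upper triangular, so tr(N^k) = 0 for all k >= 1.
N = [[0, 1, 2, 3],
     [0, 0, 4, 5],
     [0, 0, 0, 6],
     [0, 0, 0, 0]]
P = [row[:] for row in N]          # P = N^1
for k in range(1, 5):
    assert trace(P) == 0           # tr(N^k) = 0
    P = matmul(P, N)
assert all(x == 0 for row in P for x in row)   # N^4 (hence N^5) is zero
print("all traces vanish")
```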
1,536,632
<p>I have the following system of equations </p> <p>\begin{align} \frac{dx}{d \tau} &amp;= x \left(1-x-\frac{y}{x+b} \right) \\ \frac{dy}{d \tau} &amp;= cy \left(-1+a\frac{x}{x+b} \right) \end{align}</p> <p>I am asked to show that if $a&lt;1$, the only nonnegative equilibria are $(0,0), (1,0)$.</p> <p>So first, it is obvious that both equations vanish at $(x,y)=(0,0)$. </p> <p>Then I set $y=0$ and $1-x-\displaystyle \frac{y}{x+b}=0 \Leftrightarrow x=1$, hence $(x,y)=(1,0)$.</p> <p>In the same way I set $x=0$ and $-1+a\displaystyle \frac{x}{x+b}=0$, but there is no $y$.</p> <p>Finally I set $-1+a\displaystyle \frac{x}{x+b}=0$ and $1-x-\displaystyle \frac{y}{x+b}=0$, and this is difficult.</p> <p>I can't figure out what to do now. What about the fact that $a&lt;1$?</p> <p>Can anyone help? </p>
SchrodingersCat
278,967
<p>By Fermat's Little theorem, we have <br> $25^{60} \equiv 1 \pmod {61}$<br> $\left(25^{60}\right)^{20} \equiv 1^{20} \pmod {61}$<br> $25^{1200} \equiv 1 \pmod {61}$<br> $25^{1202} \equiv 25^2 \equiv 625 \equiv 15 \pmod {61}$<br> $25^{1202}+3 \equiv 18 \pmod {61}$<br> $\left(25^{1202}+3\right)^2 \equiv 18^2 \equiv 324 \equiv19 \pmod {61}$<br> $\left(25^{1202}+3\right)^2 \equiv19 \pmod {61}$<br></p> <p>The answer is $19$.</p>
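The chain of congruences can be double-checked with fast modular exponentiation (my addition, using Python's three-argument <code>pow</code>):

```python
# Each step of the answer, verified directly:
assert pow(25, 60, 61) == 1                    # Fermat's little theorem
assert pow(25, 1200, 61) == 1
assert pow(25, 1202, 61) == 15                 # 25^2 = 625 = 15 (mod 61)
assert (pow(25, 1202, 61) + 3) % 61 == 18
assert pow(pow(25, 1202, 61) + 3, 2, 61) == 19
print(19)
```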
4,157,288
<p>High school student here, just wanting to be better at maths, to know how to approach and solve problems, and to learn how to think like a mathematician. Please recommend a book to me; it would help a lot.</p>
Boshu
257,404
<p>Both Courant's and Polya's books suggested above are excellent books you should read anyway. Another such book is Gowers' <em>The Princeton Companion to Mathematics</em>, available <a href="https://rads.stackoverflow.com/amzn/click/com/0691118809" rel="nofollow noreferrer">here</a>. However, I fear they might not be sufficient. I'd read these books before starting college, and yet they didn't really prepare me for how vastly different the calculus in Apostol's book is from high school calculus.</p> <p>The book that actually taught me what pure mathematicians care about, and convinced me that it was worth studying, was Tao's <em>Analysis I</em>. If you're going to study math, you will do real analysis (probably pretty early into the course), and Tao's book will make life a lot easier. The MAA sells the book in the US at <a href="https://www.maa.org/publications/maa-reviews/analysis-i-0" rel="nofollow noreferrer">this site</a>, and I'm sure soft copies are not hard to come by online.</p>
1,264,353
<blockquote> <p>Evaluate the determinants given that $\begin{vmatrix} a &amp; b &amp; c \\ d &amp; e &amp; f \\ g &amp; h &amp; i \end{vmatrix}=-6.$</p> </blockquote> <ol> <li>$\begin{vmatrix} a+d &amp; b+e &amp; c+f \\ -d &amp; -e &amp; -f \\ g &amp; h &amp; i \end{vmatrix}$ </li> <li>$\begin{vmatrix} a &amp; b &amp; c \\ 2d &amp; 2e &amp; 2f \\ g+3a &amp; h+3b &amp; i+3c \end{vmatrix}$</li> </ol> <hr> <p>Here is what I have tried:</p> <p>1.</p> <p>$\begin{vmatrix} a+d &amp; b+e &amp; c+f \\ -d &amp; -e &amp; -f \\ g &amp; h &amp; i \end{vmatrix}\stackrel{\text{add row 2 to row 1}}=\begin{vmatrix} a &amp; b &amp; c \\ -d &amp; -e &amp; -f \\ g &amp; h &amp; i \end{vmatrix}\stackrel{\text{factor out $-1$}}=-\begin{vmatrix} a &amp; b &amp; c \\ d &amp; e &amp; f \\ g &amp; h &amp; i \end{vmatrix}=-(-6)=6.$ </p> <p>2.</p> <p>$\begin{vmatrix} a &amp; b &amp; c \\ 2d &amp; 2e &amp; 2f \\ g+3a &amp; h+3b &amp; i+3c \end{vmatrix}\stackrel{\text{row 1 times -3, add to row 3}}{=}\begin{vmatrix} a &amp; b &amp; c \\ 2d &amp; 2e &amp; 2f \\ g &amp; h &amp; i \end{vmatrix}\stackrel{\text{factor out 2}}{=}2\begin{vmatrix} a &amp; b &amp; c \\ d &amp; e &amp; f \\ g &amp; h &amp; i \end{vmatrix}=2(-6)=-12.$</p> <p>Did I do these correctly? </p> <p>I've tried some cases with numbers where adding a multiple of one row to another and found that it doesn't not change the value of the determinant. But I can't seem to grasp the intuition as to why this is so from numeric calculations. </p> <p>Why is this so? </p>
Frank
237,453
<p>I see what you're doing now: you are using the properties of determinants. Your work looks fine. Note that the only time the sign changes is when you swap rows; there is a theorem that states that adding a multiple of one row to another does not change the value of the determinant at all. You were also right to factor out the coefficient, but the sign does not change unless you swap rows.</p>
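One way to build intuition (my addition) is to test the two evaluations on a concrete sample matrix whose determinant is $-6$; the entries below are hypothetical, chosen only to satisfy that condition.

```python
def det3(M):
    # cofactor expansion along the first row of a 3x3 matrix
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)

# hypothetical sample entries with determinant -6
A = [[1, 0, 0], [0, 2, 0], [0, 0, -3]]
assert det3(A) == -6
(a, b, c), (d, e, f), (g, h, i) = A
M1 = [[a + d, b + e, c + f], [-d, -e, -f], [g, h, i]]
M2 = [[a, b, c], [2*d, 2*e, 2*f], [g + 3*a, h + 3*b, i + 3*c]]
assert det3(M1) == 6     # agrees with part 1
assert det3(M2) == -12   # agrees with part 2
print("both evaluations agree")
```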
104,368
<p>I can show that there infinitely many solutions to this equation. Is it possible that the set of rational solutions is dense?</p>
Thomas Riepe
451
<p>Concerning question 2, this looks to be about it: <a href="https://www.math.psu.edu/calendars/meeting.php?id=24844" rel="nofollow">https://www.math.psu.edu/calendars/meeting.php?id=24844</a></p>
343,248
<p>Let <span class="math-container">$\chi$</span> be a primitive Dirichlet character of modulus <span class="math-container">$q&gt;1$</span>. Write, as is customary, <span class="math-container">$B(\chi)$</span> for the constant in the expression <span class="math-container">$$\frac{\Lambda'(s,\chi)}{\Lambda(s,\chi)} = B(\chi) + \sum_\rho \left(\frac{1}{s-\rho} + \frac{1}{\rho}\right),$$</span> where <span class="math-container">$\Lambda(s,\chi)$</span> is a completed Dirichlet <span class="math-container">$L$</span>-function and <span class="math-container">$\sum_\rho$</span> is a sum over its zeros. Obviously, <span class="math-container">$B(\chi) = \Lambda'(0,\chi)/\Lambda(0,\chi)$</span>. Since <span class="math-container">$$\frac{L'(s,\chi)}{L(s,\chi)} = \frac{\Lambda'(s,\chi)}{\Lambda(s,\chi)} - \frac{1}{2} \frac{\Gamma'((s+\kappa)/2)}{\Gamma((s+\kappa)/2)} - \frac{1}{2} \log \frac{q}{\pi},$$</span> where <span class="math-container">$\kappa = [\chi(-1)=-1]$</span>, we see that <span class="math-container">$$B(\chi) = b(\chi) - \frac{\gamma}{2} - \kappa \log 2 + \frac{1}{2} \log \frac{q}{\pi},$$</span> where <span class="math-container">$b(\chi)$</span> is the constant term in the Laurent expansion of <span class="math-container">$L'(s,\chi)/L(s,\chi)$</span> around <span class="math-container">$s=0$</span>. We can easily show that <span class="math-container">$$b(\chi) = \log \frac{2\pi}{q} + \gamma - \frac{L'(1,\overline{\chi})}{L(1,\overline{\chi})}$$</span> by taking logarithms on both sides of the functional equation. We thus obtain that <span class="math-container">$$B(\chi) = \frac{1}{2} \log \frac{4^{1-\kappa} \pi}{q} + \frac{\gamma}{2} - \frac{L'(1,\overline{\chi})}{L(1,\overline{\chi})}.$$</span></p> <p>It seems clear to me that this expression for <span class="math-container">$B(\chi)$</span> must be (very) classical. Now, looking in Montgomery-Vaughan for something else, I see that, in section 10.3, it states that "The constant <span class="math-container">$B(\chi)$</span>... was long considered to be mysterious; the simple formula (10.39) for it [namely, the expression for <span class="math-container">$B(\chi)$</span> right here] is due to Vorhauer (2006)." Here Vorhauer (2006) is an unpublished preprint (not accessible online). I'd gladly give credit where credit is due, but I can't help thinking that this expression must have been known long before 2006. Does anybody have an earlier reference?</p> <p>(And what would be so mysterious about <span class="math-container">$B(\chi)$</span>? IMHO, it is just tricky for the same reason that <span class="math-container">$L'(1,\chi)/L(1,\chi)$</span> is, viz., the possibility of a Siegel zero. Or is it just that we don't have an expression for it as nice as the class number formula? (Do we? EDIT: for <span class="math-container">$\chi$</span> odd, we do; see Prop. 10.3.5 (due to...?) in Henri Cohen's Number Theory.) On the issue of bounding it, see <a href="https://mathoverflow.net/questions/337265/l1-chi-l1-chi?rq=1">$|L&#39;(1,\chi)/L(1,\chi)|$</a>.)</p>
Greg Martin
5,091
<p>When I was a graduate student at the University of Michigan (this would be the mid-1990s), I took an analytic number theory class from Montgomery, from notes that would eventually become his book with Vaughan. I remember learning directly from Montgomery in that class that the real part of <span class="math-container">$B(\chi)$</span> could be written in terms of the zeros of <span class="math-container">$L(s,\chi)$</span>, but that the imaginary part was indeed mysterious.</p> <p>Perhaps part of our mental block as a discipline was that the usual formula for <span class="math-container">$\Re B(\chi)$</span> contained the term <span class="math-container">$\Re \dfrac{L'}L(1,\chi)$</span>, while Vorhauer's formula for <span class="math-container">$B(\chi)$</span> turns out to contain the term <span class="math-container">$\dfrac{L'}L(1,\overline\chi)$</span> rather than <span class="math-container">$\dfrac{L'}L(1,\chi)$</span>. (Note that the formula in your post contains an omission in this regard.)</p> <p>In any case, given the timing of this information, and the fact that Montgomery is a central figure in classical analytic number theory who is also dedicated to knowing its literature, I am confident that the formula in question is indeed due to Ulrike Vorhauer as noted. I think the correct thing to do is to credit Vorhauer with the formula's discovery and cite the book of Montgomery and Vaughan as the best source we have.</p> <p><em>Edited to add</em>: I have checked Davenport's book, and the formula it gives for <span class="math-container">$B(\chi)$</span> at the top of page 83 is not the same as Vorhauer's formula (an infinite sum over zeros is still present in Davenport's formula). The quote "can be expressed in terms of the expansion of <span class="math-container">$L'/L$</span> in powers of <span class="math-container">$s$</span>" does not at all imply that Vorhauer's formula was known (for instance, it gives no hint that the distinction between <span class="math-container">$\chi$</span> and <span class="math-container">$\bar\chi$</span> is relevant); it corresponds only to one of the very first steps in the sketch from the OP. Moreover, Montgomery himself revised Davenport's book; it strains credulity that he, having carefully read page 83 of Davenport, would attribute the formula to someone other than Davenport if that page were a sufficient source for the formula.</p> <p>It's one thing to say that Davenport and those who preceded him <em>could</em> have derived the formula (that much seems clear). But what evidence we do have points to the conclusion that nobody <em>actually</em> derived Vorhauer's formula until she did. That sort of thing happens all the time. We still give credit to the actual discoverers (Vorhauer, in this case); we don't deem the result "classical" based on our feeling.</p> <p><em>Edit 2</em>: Apparently Vorhauer's paper was accepted to Acta Arithmetica, but the publication process stalled at the page-proofs stage.</p>
3,587,990
<p>I am facing the following combination problem : </p> <p>I do have <span class="math-container">$B$</span> colored boxes, each one containing <span class="math-container">$N_b$</span> balls of the given color. The total number of balls is thus <span class="math-container">$\sum_{b=1}^{B} N_b = N$</span>.</p> <p>I want to distribute the balls to P players with the two following aims:</p> <ul> <li>The number of balls received by each player must be the same (<em>ie</em> <span class="math-container">$N/P$</span>)</li> <li>The sum of the number of different colors hold by each player must be minimal : if each player get <span class="math-container">$c_p$</span> different colors, I want <span class="math-container">$\sum_{p=1}^{P} c_p$</span> as small as possible.</li> </ul> <p>Is there any formulae or algorithm to respect both constraints ?</p> <p><strong>Edit</strong></p> <p>What is this problem useful ?</p> <p>This problem arises eg in scientific computing : you have a available number of processus (<span class="math-container">$P$</span>) and you want to perform arithmetics on several (<span class="math-container">$B$</span>) arrays of different sizes (<span class="math-container">$N_b$</span>). Obviously you want the arrays to be uniformly distributed (otherwise one process will have much more work and will slow the whole procedure), but you also want to minimize the splits in the original arrays since each one implies a communication between processus, which are also time consuming. </p> <p>What to start with : </p> <p>I guess we can easily fall back to the case of a distribution <span class="math-container">$\tilde D=\{\tilde N_1, \tilde N_2, ... 
\tilde N_{B}\}$</span> to distribute on P' players, with <span class="math-container">$P' \le P$</span> and <span class="math-container">$\tilde N_b &lt; N/P$</span>: indeed, any box in the original distribution having more than <span class="math-container">$N/P$</span> balls will completely fill one (or more) players. We can then try to form pairs (or triplets, etc.) with the remaining boxes to reach <span class="math-container">$N/P$</span>, but we may not have a perfect match without cuts in the general case.</p>
Anubhab
602,341
<p>Well, it is easy to divide the balls such that <span class="math-container">$\displaystyle\sum_{p=1}^{P} c_p=B+P-1$</span>. You first give out as many balls as you can from box <span class="math-container">$1$</span> to player <span class="math-container">$1$</span>. Then if the box becomes empty, you give away from the next box. If some player receives <span class="math-container">$\frac{N}{P}$</span> balls, you move on to the next player.</p> <p>In general cases, that's the best you can do. In the next section, I tweak this strategy slightly to get the optimal distribution.</p> <p>The following is a recursive algorithm to find the distribution for given <span class="math-container">$D=\{N_1,N_2,...,N_B\}$</span>:</p> <ol> <li>Find the minimum <span class="math-container">$k$</span> such that there exists subset <span class="math-container">$S$</span> of <span class="math-container">$D$</span> with sum <span class="math-container">$k\cdot\frac{N}{P}$</span>.</li> <li>Give away the balls corresponding to <span class="math-container">$S$</span> to <span class="math-container">$k$</span> players.</li> <li>Repeat with <span class="math-container">$D$</span> replaced by <span class="math-container">$D-S$</span>.</li> </ol> <p>If this algorithm runs <span class="math-container">$t$</span> times, it is easy to see that <span class="math-container">$\displaystyle\sum_{p=1}^{P} c_p=B+P-t$</span>. This is indeed the best possible.</p>
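<p>The greedy hand-out from the first paragraph can be sketched in Python (a sketch only: <code>distribute</code> is a hypothetical helper name, and <span class="math-container">$N/P$</span> is assumed to be an integer):</p>

```python
def distribute(boxes, P):
    """Greedy hand-out: empty the boxes in order, moving to the next
    player whenever the current one has received N/P balls.
    Yields at most B + P - 1 (player, color) incidences in total."""
    N = sum(boxes)
    assert N % P == 0, "N/P must be an integer"
    cap = N // P
    players = [{} for _ in range(P)]  # per player: color -> ball count
    p, filled = 0, 0
    for color, count in enumerate(boxes):
        while count > 0:
            take = min(count, cap - filled)
            players[p][color] = players[p].get(color, 0) + take
            count -= take
            filled += take
            if filled == cap and p < P - 1:
                p, filled = p + 1, 0
    return players

players = distribute([3, 5], 2)
total_colors = sum(len(d) for d in players)  # B + P - 1 = 3 here
```

<p>The optimal refinement described in the numbered steps would replace the fixed box order by a search for a minimal subset summing to a multiple of <span class="math-container">$N/P$</span> at each round.</p>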
1,438,026
<p>Out of curiosity, are there functions whose domain is discrete but whose range is continuous? Furthermore, is there a real-world example of such a function, in physics for instance?</p>
Jerry Guern
223,953
<p>Example: A probability distribution of the number of children women have. The domain is the integer number of children. The range is the probability of a woman having that many, which is a continuous variable.</p>
3,130,749
<p>I am trying to differentiate <span class="math-container">$\frac{v^4-10v^2\sqrt{v}}{4v^2}$</span>.</p> <p>I have tried splitting the fraction and doing the division before finding the differential, but I am still not getting the right answer.</p> <p><span class="math-container">$$ \frac{v^4-10v^2\sqrt{v}}{4v^2} =\frac{v^4}{4v^2}-\frac{10v^2\sqrt{v}}{4v^2} = \frac{v^2}{4}-\frac{5\sqrt{v}}{2} $$</span></p> <p>To find the differentials:</p> <p><span class="math-container">$$\frac{d}{dv}v^2=2\cdot v^{2-1}=2v$$</span></p> <p><span class="math-container">$$\frac{d}{dv} 5\sqrt{v}=5 \cdot \frac{d}{dv}\sqrt{v}=5 \cdot \frac{1}{2}v^{\frac{1}{2}- \frac{2}{2}}=5 \frac{\frac{1}{\sqrt{v}}}{2}$$</span></p> <p>So I got:</p> <p><span class="math-container">$$2v - 5 \frac{\frac{1}{\sqrt{v}}}{2}$$</span></p> <p>but this is wrong.</p>
TonyK
1,508
<p>You are forgetting to divide the first term by <span class="math-container">$4$</span>, and the second term by <span class="math-container">$2$</span>.</p>
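<p>A quick numerical sanity check (a central-difference sketch, no CAS assumed) confirms the corrected derivative <span class="math-container">$\frac{v}{2}-\frac{5}{4\sqrt v}$</span>:</p>

```python
import math

def f(v):
    # the simplified function: v^2/4 - (5/2)*sqrt(v)
    return (v**4 - 10 * v**2 * math.sqrt(v)) / (4 * v**2)

def corrected(v):
    # v/2 - 5/(4*sqrt(v)), i.e. the asker's terms divided by 4 and 2
    return v / 2 - 5 / (4 * math.sqrt(v))

def numeric_derivative(g, v, h=1e-6):
    return (g(v + h) - g(v - h)) / (2 * h)

for v in (1.0, 4.0, 9.0):
    assert abs(numeric_derivative(f, v) - corrected(v)) < 1e-5
```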
143,173
<p>I have a small question that I think is very basic but I am unsure how to tackle since my background in computing inequalities is embarrassingly weak - </p> <p>I would like to show that, for a real number <span class="math-container">$p \geq 1$</span> and complex numbers <span class="math-container">$\alpha, \beta$</span>, I have <span class="math-container">\begin{equation} |\alpha + \beta|^p \leq 2^{p-1}(|\alpha|^p + |\beta|^p) \end{equation}</span></p> <p>I thought it would be best to rewrite this as <span class="math-container">\begin{equation} \left|\frac{\alpha + \beta}{2}\right|^p \leq \frac{|\alpha|^p + |\beta|^p}{2} \end{equation}</span></p> <p>but then I am unsure what to do next - is this a sensible start anyways ? Any help would be great !</p> <p>(P.S. this is not a homework question - I am currently trying to brush up my knowledge of <span class="math-container">$L^p$</span> spaces, and this inequality came up as a statement. I thought it might be worthwhile to make sure I can fill in the gaps to improve my skills in computing inequalities.)</p>
MathArt
319,307
<p>To generalize this <span class="math-container">$L_p$</span> (<span class="math-container">$p&gt;1$</span>) inequality, take nonnegative reals <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span> (for the complex case apply it to <span class="math-container">$|\alpha|,|\beta|$</span>) and introduce weights <span class="math-container">$q,r\ge0$</span> such that <span class="math-container">$q+r=1$</span>; then <span class="math-container">\begin{align} q\alpha+r\beta=(q\alpha^p)^{1/p}q^{1-1/p}+(r\beta^p)^{1/p}r^{1-1/p}\underset{\text{Hölder}}{\le}\left(q\alpha^p+r\beta^p\right)^{1/p}(q+r)^{1-1/p}=\left(q\alpha^p+r\beta^p\right)^{1/p}, \end{align}</span> therefore <span class="math-container">\begin{align} (q\alpha+r\beta)^p\le q\alpha^p+r\beta^p. \end{align}</span> In the case <span class="math-container">$q=r=\frac12$</span>, the originally asked inequality is obtained: <span class="math-container">\begin{align} \left|\frac{\alpha + \beta}{2}\right|^p \leq \frac{|\alpha|^p + |\beta|^p}{2} \end{align}</span> Note the sign &quot;<span class="math-container">$\le$</span>&quot; becomes &quot;<span class="math-container">$\ge$</span>&quot; for the case <span class="math-container">$0&lt;p&lt;1$</span>.</p>
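<p>A numeric spot check of the weighted inequality and its reversal for <span class="math-container">$0&lt;p&lt;1$</span> (illustration only, for nonnegative reals):</p>

```python
import random

random.seed(0)
for _ in range(1000):
    a, b = random.uniform(0, 10), random.uniform(0, 10)
    q = random.uniform(0, 1)
    r = 1 - q
    p = random.uniform(1, 5)          # p > 1: convexity direction
    assert (q * a + r * b) ** p <= q * a ** p + r * b ** p + 1e-9
    p = random.uniform(0.01, 1)       # 0 < p < 1: inequality reverses
    assert (q * a + r * b) ** p >= q * a ** p + r * b ** p - 1e-9
```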
2,634,408
<p>Here is the question:</p> <p>Define the set WFF as follows--</p> <p>a) Every element in set $A = \{s_1, s_2, ...\}$ is in WFF. $A$ is countably infinite.</p> <p>b) If $a$ is in WFF, so is $(\neg a)$. If $n$ and $m$ are in WFF, so is $(n \lor m)$.</p> <p>c) No other elements are in WFF.</p> <p>Show that WFF is countable. </p> <p>Attempt at a solution: I was thinking it might be possible to show by induction that $S_i$, the set of all expressions with $i$ symbols, is countable, and then show that the union of all such countable $S_i$'s is also countable (which I know how to do). However, I'm unsure of how to proceed for the first part -- would it even be necessary to show that the set of all expressions with $i$ symbols is countable, as per the parentheses in the rules, it isn't possible to have an expression with, say, $i = 3$ symbols, or am I confused? </p> <p>Another thing I am having trouble with is that there doesn't seem to be any restriction from the criteria against infinitely long strings/logical sequences (that would make WFF, to my knowledge, not countable). I believe this is the right approach, but I am unable to justify that to myself.</p>
hmakholm left over Monica
14,366
<p>This style of definition can be confusing if you see it for the first time in a source that does not bother to explain very carefully how it should be understood.</p> <p>What part (c) means is that $WFF$ is defined to be the <strong>smallest possible</strong> set that satisfies the conditions in parts (a) and (b).</p> <p>Before this is actually a definition we need to prove that there <em>is</em> such a unique smallest possible set:</p> <p>Let's call a set that satisfies (a) and (b) <em>good</em>. Consider now the set $$ W = \{ x \mid x\text{ is a member of every good set}\} $$ It is now easy enough to prove that $W$ is good, and that $W$ is a subset of every good set. This latter property means that $W$ is smaller than every other good set, so it is actually the set $\mathit{WFF}$ being defined!</p> <p>Since the set of all <em>finite</em> strings over $A\cup\{{\neg},{\vee},{(},{)}\}$ is good (this is easy to verify), we can conclude that it is a subset of $\mathit{WFF}$, and therefore $\mathit{WFF}$ contains only finite strings.</p> <hr> <p>If you know some axiomatic set theory, you will notice that I've swept a problem under the rug, namely showing that $W$ exists as a set at all -- above I've defined it using unrestricted comprehension. But as long as there exists <em>at least one</em> good set (and I've just argued that the set of finite strings is such a good set), then we can see that $W$ is a subclass of this set, and is therefore a set itself.</p>
57,057
<p>Let $a,b,c,d,e$ be positive real numbers which satisfy $abcde=1$. How can one prove that: $$\frac{1}{a} + \frac{1}{b} + \frac{1}{c} + \frac{1}{d} +\frac{1}{e}+ \frac{33}{2(a + b + c + d+e)} \ge{\frac{{83}}{10}}\ \ ?$$</p>
Michael Rozenberg
190,319
<p>We can use Vasc's EV Method.</p> <p>See here: <a href="https://www.emis.de/journals/JIPAM/images/059_06_JIPAM/059_06.pdf" rel="nofollow noreferrer">https://www.emis.de/journals/JIPAM/images/059_06_JIPAM/059_06.pdf</a></p> <p>Indeed, let <span class="math-container">$a+b+c+d+e=constant.$</span></p> <p>Thus, by Corollary 1.9, case 1(b) (<span class="math-container">$p=0$</span>, <span class="math-container">$q=-1$</span>),</p> <p>the expression <span class="math-container">$\sum\limits_{cyc}\frac{1}{a}$</span> attains its minimal value when four of the variables are equal.</p> <p>Id est, it remains to prove our inequality for <span class="math-container">$b=c=d=e$</span> and <span class="math-container">$a=\frac{1}{e^4}$</span>, which gives <span class="math-container">$$(e-1)^2(40e^8+80e^7+120e^6+160e^5-132e^4-89e^3-46e^2-3e+40)\geq0,$$</span> which is true because <span class="math-container">$$40e^8+80e^7+120e^6+160e^5-132e^4-89e^3-46e^2-3e+40=$$</span> <span class="math-container">$$=40(e^4+e^3+e^2-3e+1)^2+e(320e^4-12e^3+71e^2-486e+237)\geq$$</span> <span class="math-container">$$\geq e((e^2+40.5)(3e-2)^2+311e^4-297.5e^2+75)&gt;0$$</span> and the last inequality is true because <span class="math-container">$$297.5^2-4\cdot311\cdot75=297.5^2-311\cdot300&lt;0.$$</span></p>
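<p>As a numerical sanity check (not part of the proof), one can sample random points on the constraint surface <span class="math-container">$abcde=1$</span> and confirm the bound <span class="math-container">$\frac{83}{10}$</span>:</p>

```python
import random

def lhs(a, b, c, d, e):
    # left-hand side of the inequality
    s = a + b + c + d + e
    return 1/a + 1/b + 1/c + 1/d + 1/e + 33 / (2 * s)

random.seed(1)
for _ in range(2000):
    a, b, c, d = (random.uniform(0.2, 5) for _ in range(4))
    e = 1 / (a * b * c * d)          # enforce abcde = 1
    assert lhs(a, b, c, d, e) >= 83 / 10 - 1e-9
```

<p>Equality holds at <span class="math-container">$a=b=c=d=e=1$</span>, where the left side is exactly <span class="math-container">$5+\frac{33}{10}=\frac{83}{10}$</span>.</p>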
840,445
<p>The Stirling number of the second kind is the number of ways to partition a set of n objects into k non-empty subsets - S(n,k). I want to restrict/constrain this partition so I can count the ways to partition a set of n objects into k non-empty subsets while at least p subsets out of k have size r. When p=k the answer is the associated Stirling number of the second kind Sr(n,k), but I was wondering whether there is a general expression for any p. In case there isn't I will be glad to find an expression for p=1 and r=1. Thank you.</p> <p>An example: the number of partitions of 4 objects into 2 subsets is S(4,2)=7. I want to count only the partitions that contain at least p=1 subset of size r=1. So in this case the answer is 4 because I don't want to count {1 2 | 3 4}, {1 3 | 2 4} and {1 4 | 2 3}. </p>
Adi Dani
12,848
<p>The formula that counts the number of partitions of an n-set into k parts of size 1 or 2 is<br> $$\overline{p}_k(n,N_2) =\binom{k}{n-k} \frac{n!}{k!\cdot2^{n-k}}$$</p>
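<p>The formula can be cross-checked against the direct recurrence $\overline p(n,k)=\overline p(n-1,k-1)+(n-1)\,\overline p(n-2,k-1)$ (element $n$ is either a singleton block, or paired with one of the other $n-1$ elements); a sketch:</p>

```python
from functools import lru_cache
from math import comb, factorial

@lru_cache(None)
def T(n, k):
    """Partitions of an n-set into exactly k blocks of size 1 or 2."""
    if n == 0:
        return 1 if k == 0 else 0
    if k <= 0:
        return 0
    # element n: singleton block, or paired with one of the other n-1
    return T(n - 1, k - 1) + (n - 1) * (T(n - 2, k - 1) if n >= 2 else 0)

def formula(n, k):
    if not (0 <= n - k <= k):   # need n-k pairs and 2k-n singletons
        return 0
    return comb(k, n - k) * factorial(n) // (factorial(k) * 2 ** (n - k))

for n in range(0, 10):
    for k in range(0, n + 1):
        assert T(n, k) == formula(n, k)
```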
3,295,973
<p>Let <span class="math-container">$(\Omega,\mathcal A,\mu)$</span> be a measure space, <span class="math-container">$p,q\ge1$</span> with <span class="math-container">$p^{-1}+q^{-1}=1$</span> and <span class="math-container">$f:\Omega\to\mathbb R$</span> be <span class="math-container">$\mathcal A$</span>-measurable with <span class="math-container">$$\int|fg|\:{\rm d}\mu&lt;\infty\;\;\;\text{for all }g\in L^q(\mu)\tag1.$$</span> By <span class="math-container">$(1)$</span>, <span class="math-container">$$L^q(\mu)\ni g\mapsto\int fg\:{\rm d}\mu\tag2$$</span> is a bounded linear functional and hence there is a unique <span class="math-container">$\tilde f\in L^p(\mu)$</span> with <span class="math-container">$$\int(f-\tilde f)g\:{\rm d}\mu=0\;\;\;\text{for all }g\in L^q(\mu)\tag3.$$</span></p> <blockquote> <p>Can we conclude that <span class="math-container">$f=\tilde f$</span>?</p> </blockquote> <p><strong>EDIT</strong>: As we can see from <a href="https://math.stackexchange.com/a/3223523/47771">this answer</a>, we need to impose further assumptions; but which do we really need?</p>
GSofer
509,052
<p>When you square the equation, you add a solution you didn't have before. Take for example the very simple equation - <span class="math-container">$$x=1$$</span> Squaring both sides we get: <span class="math-container">$$x^2=1\Rightarrow x=\pm 1$$</span> Not all operations leave the information given in the original equation unchanged. Just like if you multiply an equation by <span class="math-container">$0$</span> you get an equation that's always true (<span class="math-container">$0=0$</span>), but you basically lose all information, when squaring an equation, you can sometimes add a new solution.</p>
647,757
<blockquote> <p>How many integers $n$ are there such that $\sqrt{n}+\sqrt{n+7259}$ is an integer? </p> </blockquote> <p>No idea on this one.</p>
Bart Michels
43,288
<p>Imranfat's answer is good, however you might be looking for a more rigorous argument to show that both $n$ and $7259+n$ have to be squares. Here it is:</p> <p>If $\sqrt n+\sqrt{7259+n}$ is an integer, then $$\sqrt n-\sqrt{7259+n}=\frac{-7259}{\sqrt n+\sqrt{7259+n}}$$ is rational.</p> <p>Adding and subtracting these two rationals, we get that both $$(\sqrt n+\sqrt{7259+n})+(\sqrt n-\sqrt{7259+n})=2\sqrt n$$ and $$(\sqrt n+\sqrt{7259+n})-(\sqrt n-\sqrt{7259+n})=2\sqrt{7259+n}$$ are rational.</p> <p>This means both $n$ and $7259+n$ are perfect squares. From here, imranfat's solution is the best way to continue.</p>
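<p>Writing $n=a^2$ and $n+7259=b^2$, the count comes from the factorizations $(b-a)(b+a)=7259=7\cdot17\cdot61$; a quick script (illustration only) enumerating the divisor pairs:</p>

```python
from math import isqrt

M = 7259
solutions = []
for d in range(1, isqrt(M) + 1):
    if M % d == 0:
        e = M // d            # d = b - a, e = b + a, d <= e
        if (e - d) % 2 == 0:  # a, b integers (automatic here: M is odd)
            a, b = (e - d) // 2, (e + d) // 2
            n = a * a
            # double-check both are perfect squares and the sum is b + a = e
            assert isqrt(n) ** 2 == n and isqrt(n + M) ** 2 == n + M
            assert isqrt(n) + isqrt(n + M) == e
            solutions.append(n)

# 7259 = 7 * 17 * 61 has 8 divisors, hence 4 factorizations d*e with d <= e
assert len(solutions) == 4
```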
3,887,526
<p>I need to prove <span class="math-container">$\frac{|x+y+z|}{1+|x+y+z|} \le \frac{|x|}{1+|y|+|z|}+\frac{|y|}{1+|x|+|z|}+\frac{|z|}{1+|x|+|y|}$</span>. I've tried to use triangle inequality or to explore the form of <span class="math-container">$(a+b+c)^2$</span> but it won't get me anywhere. I would be grateful for some suggestions.</p>
Noureddine Ouertani
348,708
<p>Let's show that <span class="math-container">$$S(N) = \frac{|\sum_{i=0}^{N} x_i|}{1+|\sum_{i=0}^{N} x_i|}\le \sum_{i=0}^{N} \frac{|x_i|}{1+ |\sum_{j=0;j \ne i}^{N} x_j|}$$</span> holds for all natural numbers <span class="math-container">$N$</span>.</p> <p>The inequality <span class="math-container">$\frac{|x+y+z|}{1+|x+y+z|} \le \frac{|x|}{1+|y|+|z|} + \frac{|y|}{1+|x|+|z|}+\frac{|z|}{1+|x|+|y|}$</span> is the special case <span class="math-container">$N=2$</span> with <span class="math-container">$x=x_0, y=x_1, z=x_2$</span>.</p> <p>For <span class="math-container">$N=0$</span> we have <span class="math-container">$\frac{|x_0|}{1+|x_0|} \le \frac{|x_0|}{1+ 0}$</span> for all <span class="math-container">$x_0$</span>.</p> <p>Suppose for a certain <span class="math-container">$N$</span> that <span class="math-container">$$S(N) = \frac{|\sum_{i=0}^{N} x_i|}{1+|\sum_{i=0}^{N} x_i|} \le \sum_{i=0}^{N} \frac{|x_i|}{1+ |\sum_{j=0;j \ne i}^{N} x_j|}$$</span> holds,</p> <p>and let's prove the corresponding inequality for <span class="math-container">$S(N+1)$</span>:</p> <p><span class="math-container">$$S(N+1) \le \sum_{i=0}^{N+1} \frac{|x_i|}{1+ |\sum_{j=0;j \ne i}^{N+1} x_j|}.$$</span></p> <p>We will first use <span class="math-container">$\frac{|a+b|}{1+|a+b|} \le \frac{|a|}{1+|a|} + \frac{|b|}{1+|b|}$</span> as proved in this link: <a href="https://math.stackexchange.com/questions/194314/prove-fracab1ab-fraca1a-fracb1b/194317#194317">Prove $\frac{|a+b|}{1+|a+b|}&lt;\frac{|a|}{1+|a|}+\frac{|b|}{1+|b|}$.</a></p> <p>for <span class="math-container">$a = x_{N+1}$</span> and <span class="math-container">$ b= \sum_{i=0}^{N} x_i$</span></p> <span 
class="math-container">$$\frac{|x_{N+1}+\sum_{i=0}^{N} x_i|}{1+|x_{N+1}+\sum_{i=0}^{N}x_i|} \le \frac{|x_{N+1}|}{1+|x_{N+1}|} + \frac{|\sum_{i=0}^{N} x_i|}{1+|\sum_{i=0}^{N} x_i|}$$</span></p> <p>means that</p> <p><span class="math-container">$$\frac{|x_{N+1}+\sum_{i=0}^{N} x_i|}{1+|x_{N+1}+\sum_{i=0}^{N}x_i|} \le \frac{|x_{N+1}|}{1+|x_{N+1}|} + S(N)$$</span></p> <p>means that <span class="math-container">$$ \frac{|x_{N+1}+\sum_{i=0}^{N} x_i|}{1+|x_{N+1}+\sum_{i=0}^{N}x_i|} \le \frac{|x_{N+1}|}{1+|x_{N+1}|} + \sum_{i=0}^{N} \frac{|x_i|}{1+ |\sum_{j=0;j \ne i}^{N} x_j|}$$</span></p> <p>means that</p> <p><span class="math-container">$$S(N+1) \le \sum_{i=0}^{N+1} \frac{|x_i|}{1+ |\sum_{j=0;j \ne i}^{N+1} x_j|}$$</span></p> <p><strong>Proved!</strong></p>
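<p>A numerical spot check of the original three-variable inequality (illustration only):</p>

```python
import random

def check(x, y, z):
    # |x+y+z|/(1+|x+y+z|) <= sum of |x_i|/(1 + sum of the other two |x_j|)
    s = abs(x + y + z)
    lhs = s / (1 + s)
    rhs = (abs(x) / (1 + abs(y) + abs(z))
           + abs(y) / (1 + abs(x) + abs(z))
           + abs(z) / (1 + abs(x) + abs(y)))
    return lhs <= rhs + 1e-12

random.seed(2)
assert all(check(random.uniform(-100, 100),
                 random.uniform(-100, 100),
                 random.uniform(-100, 100)) for _ in range(5000))
```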
2,123,926
<p>I have a triangle with the coordinates (0, 20), (15, -10), (-15, -10). So the centre of the triangle is (0, 0). </p> <p>I want to rotate this by one degree. I'm using the formula:</p> <pre><code>x' = cos(theta)*x - sin(theta)*y y' = sin(theta)*x + cos(theta)*y </code></pre> <p>Yet when I work out the coordinates for a rotation angle of 1, I get completely inexplicable coordinates:</p> <pre><code>(-16.82941969615793, -3.3554222481086295), (16.51924443610106, 8.497441825246927), (0.31017526005686946, -5.142019577138298) </code></pre> <p>Which, when plotted out, looks nothing like the original triangle rotated by 1 degree. I've made a simple demo in javascript <a href="https://jsfiddle.net/z8jemr5u/" rel="nofollow noreferrer">here</a>.</p> <p>What could I possibly be doing wrong?</p>
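<p>For reference, the same formula in Python with the angle explicitly converted from degrees to radians (<code>Math.cos</code>/<code>Math.sin</code> in JavaScript, like Python's, expect radians, so a likely pitfall is passing <code>1</code> intending one degree where one radian is read):</p>

```python
import math

def rotate(point, degrees):
    theta = math.radians(degrees)   # degrees -> radians
    x, y = point
    return (math.cos(theta) * x - math.sin(theta) * y,
            math.sin(theta) * x + math.cos(theta) * y)

triangle = [(0, 20), (15, -10), (-15, -10)]
rotated = [rotate(p, 1) for p in triangle]
# a 1-degree rotation barely moves the vertices; 1 *radian* (~57.3 deg) would not
for (x0, y0), (x1, y1) in zip(triangle, rotated):
    assert math.hypot(x1 - x0, y1 - y0) < 0.5
```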
Julio Maldonado Henríquez
406,412
<p>For the homogeneous solution you can solve the characteristic polynomial, as the ODE is linear. You replace $y^{(i)}$ by $\lambda^{i}$, where the first denotes the $i$-th derivative and the second a power. The idea is to find the roots: $$\lambda^2-2\lambda+1=0 \iff (\lambda-1)^2=0$$ For each root $r_i$ we associate a solution $A_i e^{r_ix}$, where $A_i$ can be determined by initial conditions (IC). When you have a multiple root, as in the example, each further solution is multiplied by $x,x^2,x^3,...$ depending on the multiplicity; in this case it is two, so the homogeneous solution is $A_1e^x+A_2xe^x$. To find the particular solution you need the non-homogeneous part of the ODE, but the general procedure is to propose a solution built from the functions involved in this part (which can be exponential, polynomial or sinusoidal for this method) with undetermined constants, and to insert it into the ODE to find the constants. As your particular solution has a $\ln$, I guess that's not the case here, so you should apply another method like the Laplace transform. If you post the rest of the ODE I'll be more helpful.</p>
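<p>For the repeated root $\lambda=1$ above, a quick numerical check (central differences; a sketch, not part of the method) that $xe^x$ indeed solves $y''-2y'+y=0$:</p>

```python
import math

def y(x):
    return x * math.exp(x)

def residual(x, h=1e-4):
    """Central-difference estimate of y'' - 2y' + y at x."""
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    yp = (y(x + h) - y(x - h)) / (2 * h)
    return ypp - 2 * yp + y(x)

for x in (-2.0, 0.0, 1.0, 2.0):
    assert abs(residual(x)) < 1e-4
```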
69,373
<p>Let $N$ be the number of rolls until the same number appears $k$ consecutive times. Show the expected value $E[N]=\dfrac{6^k-1}{5}$. I've tried conditioning this on the first occurrence of the expected number, but I'm having a hard time generalizing further than 2 consecutive times. I think I need to use the conditional expectation formula, $E[N]=E[E[N|Y]]$ where $Y$ is another random variable which I've previously taken to be the first appearance of the number. </p>
MAK
16,080
<p>Well, for k=1, the value is trivially 1.</p> <p>For k=2, you can find the probability of your event happening in 2, 3, 4, ... throws.</p> <p>The probability of N=2 is the probability of the first two throws having the same outcome. So the expected number of throws needed for N=2 is:</p> <p>P(Outcome aa)·2 + [P(Outcome aba) + P(Outcome baa)]·3 + ... + P(same number first appears twice consecutively after j rolls)·j + ...</p> <p>But notice that among any seven rolls, some number necessarily appears <em>at least</em> twice (though not necessarily consecutively). </p>
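<p>Beyond these special cases, the closed form $E[N]=\frac{6^k-1}{5}$ can be verified exactly by a first-step analysis: writing $e_i$ for the expected number of additional rolls given a current run of $i$ equal values, we have $e_k=0$ and $e_i = 1 + \frac16 e_{i+1} + \frac56 e_1$, with $E[N]=1+e_1$. A sketch in exact rational arithmetic:</p>

```python
from fractions import Fraction

def expected_rolls(k):
    """E[N] for a run of k equal consecutive values on a fair die."""
    # write e_i = a + b * e_1 and back-substitute from e_k = 0
    a, b = Fraction(0), Fraction(0)
    for _ in range(k - 1):                     # i = k-1 down to 1
        a, b = 1 + a / 6, b / 6 + Fraction(5, 6)
    e1 = a / (1 - b)                           # solve e_1 = a + b * e_1
    return 1 + e1

for k in range(1, 8):
    assert expected_rolls(k) == Fraction(6**k - 1, 5)
```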
2,122,539
<p>A functor sends $X$ to $R^{(X)}$; what does this notation mean? Is this functor faithful and full?</p>
egreg
62,967
<p>Usually, $R^{(X)}$ denotes the free $R$-module on the set $X$, that is, the set of “formal sums” $$ \sum_{x\in X}r_xx $$ where $\{x\in X:r_x\ne0\}$ is finite, with the obvious operations making it into an $R$-module. The elements of $X$ are those where just one coefficient is $1$ and the others are zero.</p> <p>How do we make the assignment $X\mapsto R^{(X)}$ into a functor? Use the universal property of free modules: if $Y$ is any set, then for each map $f\colon X\to Y$ there is a unique $R$-module homomorphism $\hat{f}\colon R^{(X)}\to R^{(Y)}$ such that $\hat{f}(x)=f(x)$, for every $x\in X$: $$ \hat{f}\Bigl(\,\sum_{x\in X}r_xx\Bigr)=\sum_{x\in X}r_xf(x) $$ where the sum in the right hand side is well defined because at most a finite number of terms are nonzero.</p> <p>Clearly the identity is mapped to the identity and this assignment preserves composition of morphisms.</p> <p>The functor so obtained is certainly faithful. In order to see whether it is full, look at the case when $X$ is a singleton.</p>
482,793
<p>As the title says, how does one show that a function is continuous over some interval (let us say an interval of real numbers)?</p> <p>Would (or can) this involve the derivative?</p>
Dan Brumleve
1,284
<p>Continuity has various equivalent definitions. The existence of a derivative is a stronger property than continuity, so if you already know the function is differentiable on an interval, then it is certainly continuous, but the <a href="http://en.wikipedia.org/wiki/Weierstrass_function" rel="nofollow">converse</a> does not hold.</p>
2,819,950
<p>I am a university graduate with a B.S. in mathematics who has been developing software for the past 8 years. I recently discussed a mutual interest in topology with a friend who is just about to complete his degree. Due to time constraints both of us missed our chance to take a topology course during our undergraduate studies, so we have decided to make an independent study of the subject once he finishes school, and are looking for a book to guide us.</p> <p>Ideally I am looking for a book that...</p> <ul> <li>is suitable for our math background</li> <li>is well laid out and not too difficult to follow (i.e. is suited to self study)</li> <li>does not assume prior knowledge</li> <li>is thorough enough to enable future study of topics within the field</li> <li>contains plenty of exercises</li> <li>isn't sparse on diagrams where they are appropriate</li> <li>doesn't waste much time on overly-specific material (I'm the type that likes to prove things about the determinant, <em>not calculate thousands of them</em>)</li> </ul> <h2>What topology texts would be appropriate for our self study?</h2>
Mark Fischler
150,362
<p>I had great pleasure working through Munkres, doing all the exercises. I'm not sure what you want in terms of "foundational introduction" but his (extensive) first chapter will be an excellent ramp-up getting back into shape. In particular, before even getting into topology, his exercises leading to the proof that (even without the axiom of choice) there exists an uncountable, well-ordered set were really nice, and should not be skipped. </p> <p>There are not a lot of diagrams, but there are some where needed.</p>
2,805,803
<p>The question is as follows: </p> <blockquote> <p>A typical long-playing phonograph record (once known as an LP) plays for about $24$ minutes at $33 \frac{1}{3}$ revolutions per minute while a needle traces the long groove that spirals slowly in towards the center. The needle starts $5.7$ inches from the center and finishes $2.5$ inches from the center. Estimate the length of the groove.</p> </blockquote> <p>Based on the given information, I was able to calculate the total number of revolutions -- which is $800$ revolutions ($24 \times \frac{100}{3}$). I don't know how to go further than that. Any help will be greatly appreciated. </p>
Break-in Master
953,558
<p>You can't assume that each revolution is exactly the same length! Each one will be slightly shorter than the one before. At best, you'd have to get the length of the outer-most groove and the inner most and figure out what the average of the two would be and then multiply by <span class="math-container">$800$</span>, or whatever number you came up with for the amount of grooves per side. I was going for <span class="math-container">$20:00$</span> per side (most albums barely run <span class="math-container">$18:00$</span> per side, let alone <span class="math-container">$24:00$</span>!) so, as the outer circumference is <span class="math-container">$37.700$</span> inches, I multiply that times <span class="math-container">$666$</span> because <span class="math-container">$33.3$</span> rpm <span class="math-container">$ \times 20:00$</span> is <span class="math-container">$666$</span>. But, the results from these calculations are assuming that each revolution is exactly the same length, which they aren't! Dig! Get a full reel of recording tape (<span class="math-container">$7$</span>&quot; reel or <span class="math-container">$10$</span>&quot;) and measure how much tape is used for just one revolution at the outer end of the tape. Now, get an empty reel and wrap one revolution of that same tape around the hub of the empty reel and measure that! It's nowhere NEAR as long as the first measurement you took! It's the same way with the grooves in a record.</p>
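<p>For what it's worth, the average-circumference estimate applied to the numbers in the original problem (5.7 in to 2.5 in over 800 revolutions) — a rough sketch that treats the spiral as 800 circles of the mean radius:</p>

```python
import math

r_outer, r_inner = 5.7, 2.5            # inches, from the problem statement
revolutions = 24 * (100 / 3)           # 24 min at 33 1/3 rpm = 800
mean_radius = (r_outer + r_inner) / 2  # 4.1 in
length_in = revolutions * 2 * math.pi * mean_radius
length_ft = length_in / 12

assert abs(revolutions - 800) < 1e-9
assert 20000 < length_in < 21000       # about 20,600 inches ~ 1,717 feet
```

<p>The average is exact here because the radius shrinks linearly with the turn number, so the per-revolution circumferences form an arithmetic progression.</p>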
3,929,952
<p><strong>Question:</strong> How do I estimate the following integral? <span class="math-container">$$\int_{2}^{\infty} \frac{|\log \log t|}{t\, (\log t)^2} \, dt$$</span></p> <p><strong>Attempt:</strong> Setting <span class="math-container">$u=\log t$</span>, we get</p> <p><span class="math-container">$$\int_{2}^{\infty} \frac{|\log \log t|}{t\, (\log t)^2} \, dt = \int_{\log 2}^{\infty} \frac{|\log u|}{u^2}\, du$$</span> and then I am stuck.</p>
Raffaele
83,382
<p>Integration by substitution <span class="math-container">$t=e^u;\;dt=e^u du;\;\log t = u$</span> <span class="math-container">$$\int \frac{\log (\log t)}{t \log ^2 t} \, dt=\int\frac{ \log u}{u^2}\,du=-\frac{\log u+1}{u}+C$$</span></p> <p><span class="math-container">$$\int \frac{\log (\log t)}{t \log ^2 t} \, dt=-\frac{\log (\log t)+1}{\log t}+C$$</span> <span class="math-container">$$\int_{2}^{\infty} \frac{|\log \log t|}{t\, (\log t)^2} \, dt=\int_{2}^{e} \left(-\frac{\log \log t}{t\, (\log t)^2}\right) \, dt+\int_{2}^{\infty} \frac{\log \log t}{t\, (\log t)^2} \, dt=$$</span> <span class="math-container">$$=\left[\frac{\log (\log t)+1}{\log t}\right]_2^e+\left[-\frac{\log (\log t)+1}{\log t}\right]_e^{\infty}=$$</span> <span class="math-container">$$=1-\frac{1+\log (\log 2)}{\log 2}-\underset{m\to \infty }{\text{lim}}\frac{\log (\log m)+1}{\log m}+1=$$</span> <span class="math-container">$$=2-\frac{1+\log (\log 2)}{\log 2}\approx 1.086$$</span></p>
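<p>A numerical cross-check of the value <span class="math-container">$2-\frac{1+\log\log 2}{\log 2}\approx 1.086$</span> (a sketch: after <span class="math-container">$u=\log t$</span> the tail <span class="math-container">$\int_1^\infty \frac{\log u}{u^2}\,du$</span> equals exactly <span class="math-container">$1$</span> via <span class="math-container">$u=1/v$</span>, so only the finite piece needs quadrature):</p>

```python
import math

# closed form from the computation above
closed = 2 - (1 + math.log(math.log(2))) / math.log(2)

def midpoint(f, a, b, n=100_000):
    """Simple midpoint-rule quadrature on [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# finite piece: integral of (-log u)/u^2 over [log 2, 1], plus the exact tail 1
numeric = midpoint(lambda u: -math.log(u) / u**2, math.log(2), 1.0) + 1.0
```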
2,066,765
<p>Are there any examples of subspaces of $\ell^{2}$ and $\ell^{\infty}$ which are not closed?</p>
Batman
127,428
<p>Take the subspace of $\ell^2$ (or $\ell^\infty$) consisting of sequences with finitely many non-zero coordinates.</p> <p>Clearly, this is a subspace (a sum of two sequences has at most the number of nonzeros of the first plus the number of nonzeros of the second, and scaling a sequence does not add non-zeros). However, you can take any sequence $a$ in $\ell^2$ (or in $c_0 \subset \ell^\infty$, a closed subspace) and define the sequence of sequences $\{a_n\}$, where $a_n$ consists of the first $n$ coordinates of $a$ followed by zeros, and see $a_n \to a$. </p>
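<p>To make this concrete (a numerical sketch): take $a = (1/n)_{n\ge 1} \in \ell^2$; its truncations lie in the subspace, and their $\ell^2$ distance to $a$ tends to $0$, while $a$ itself has infinitely many nonzero coordinates — so the subspace is not closed:</p>

```python
import math

M = 200_000  # approximate the infinite tail by a long finite one

def tail_norm(N):
    """||a - a_N||_2 where a = (1/n) and a_N keeps the first N coordinates."""
    return math.sqrt(sum(1 / n**2 for n in range(N + 1, M + 1)))

d10, d100, d1000 = tail_norm(10), tail_norm(100), tail_norm(1000)
assert d1000 < d100 < d10        # distances shrink ...
assert d1000 < 0.04              # ... towards 0, so a_N -> a in l^2
```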
14,828
<p>I am reading the book by Goldrei on Classic Set Theory. My question is more of a clarification. It is on if we are overloading symbols in some cases. For instance, when we define $2$ as a natural number, we define $$2_{\mathbb{N}} = \{\emptyset,\{\emptyset\} \}$$ When we define $2$ as an integer, $2_{\mathbb{Z}}$ is an equivalence class of ordered pair $$2_{\mathbb{Z}} = \{(n+_{\mathbb{N}}2_{\mathbb{N}},n):n \in \mathbb{N}\}$$ Similarly, when we define $2$ as a rational number, $2_{\mathbb{Q}}$ is an equivalence class of ordered pair $$2_{\mathbb{Q}} = \{(a \times_{\mathbb{Z}} 2_{\mathbb{Z}},a):a \in \mathbb{Z}\backslash\{0\}\}$$ and as a real number we define it as the left Dedekind cut of rationals less than $2_{\mathbb{Q}}$, i.e. $$2_{\mathbb{R}} = \{q \in \mathbb{Q}: q &lt;_{\mathbb{Q}} 2_{\mathbb{Q}}\}$$ </p> <p>The clarification is each of the above are different objects right? So when we say $2$, it depends on the context? Also, if the above is true, is it correct or incorrect to say that "The set of natural numbers is a subset of reals"? Should we take the statement with a pinch of salt and understand accordingly?</p>
yatima2975
360
<p>In addition to the other answers, it's also worth knowing that there is a number system (more-or-less) integrating N, Z, Q and R (and <em>a lot</em> more, but not C!). </p> <p>The <a href="http://en.wikipedia.org/wiki/Surreal_number">surreal numbers</a> take the basic idea from Dedekind cuts, assigning to each number a so-called <em>left set</em> ('smaller') and <em>right set</em> ('larger') of numbers (constrained by certain rules), bootstrapping the whole process from the empty set and ending up with the reals and weird infinitesimals like $\frac{1}{\sqrt{\omega - \pi}}$!</p>
192,537
<p>Can you give me an example that there is a $f \in C_0^{\infty}(\mathbb C)$, such that the equation $\bar \partial u=f$ has no $C_0^{\infty}(\mathbb C)$ solution?</p>
girianshiido
39,741
<p>If $f\in C^\infty_0(\mathbb{C})$, then $\displaystyle u:\zeta\mapsto \frac{1}{2i\pi}\int_{\mathbb{C}}\frac{f(z)}{z-\zeta}dz\wedge d\bar{z}$ is in $C^\infty(\mathbb{C})$ (though not compactly supported in general) and is a solution to $\bar{\partial}u=f$</p>
1,940,837
<p>I don't understand the difference between $\bigoplus_{i=1}^n M_i$ and $\prod_{i=1}^n M_i$ where $\{M_i\}_{i=1,...,n}$ is a collection of $R$-modules. When $I$ is not finite, $$\prod_{i\in I}M_i=\{(x_i)_{i\in I}\mid x_i\in M_i\}$$ and $$\bigoplus _{i\in I}M_i=\{(x_i)_{i\in I}\mid x_i=0\text{ except for a finite number of $i$}\}.$$</p> <p>But for $I$ finite, it looks to be the same... so what's the difference between them when $I$ is finite?</p>
HarrySmit
332,761
<p>In the case that the $M_i$ are ideals, $\prod_{i \in I} M_i$ is sometimes used for the product of the ideals. Thus, to avoid confusion, $\bigoplus_{i \in I} M_i$ is used instead.</p>
2,568,544
<p>The exercise is to construct a cube (inscribed in the unit sphere) with one of the corners at: $$ \vec{v} = \frac{1}{\sqrt{10}} (3,1,0) \in S^2 $$</p> <p>I'm a bit stuck constructing the other seven vertices. My first guess was to use the eight reflections available to me:</p> <p>$$ \square \stackrel{?}{=}\left\{ \tfrac{1}{\sqrt{10}} (\pm 3, \pm 1, \pm 0) \right\} $$</p> <p>This is not quite right since this describes the four vertices of a rectangle in 3-dimensional space. I remember the rotations of the cube form a group. Here is one of the matrices:</p> <p>$$ \left[ \begin{array}{rrr} 0 &amp; -1 &amp; 0 \\ 1 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 1 \end{array}\right] \in SO(3) $$</p> <p>This would lead to the 4 vertices of a square (up to rescaling). Then I am stuck finding the remaining four.</p> <ul> <li>$ (3,1,0)\;,\; (1,-3,0)\;,\; (-3,-1,0)\;,\; (-1,3,0) $</li> </ul> <p>The algebra would give these vertices, but having the orientation of the cube <em>slightly off</em> vertical is enough to confuse my intuition. Perhaps I could use another matrix:</p> <p>$$ \left[ \begin{array}{rrr} 1 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; -1 \\ 0 &amp; 1 &amp; 0 \end{array}\right] \in SO(3) $$</p> <p>This generates a few more vertices (including some repeats). This brings the count up to six.</p> <ul> <li>$ \color{#A0A0A0}{(3,1,0)}\;,\; (3,0,-1)\;,\; (3,-1,0)\;,\; (3,0,1) $</li> </ul> <p>The rotations of the cube are isometries, so I should be able to generate the cube as the $SO_3(\mathbb{Z})$ orbit of a single vector. It's hard to decide on paper if I'm on the right track.</p> <p><strong>Following the comments</strong> Here's a third orthogonal matrix: $$ \left[ \begin{array}{rrr} 0 &amp; 0 &amp; -1 \\ 0 &amp; 1 &amp; 0 \\ 1 &amp; 0 &amp; 0 \end{array}\right] \in SO(3) $$</p> <p>and we get (up to) three more vertices. 
So far we have 7 and we're only looking for one more.</p> <ul> <li>$ \color{#A0A0A0}{(3,1,0)}\;,\; (0,1,-3)\;,\; (-3,1,0)\;,\; (0,1,3) $</li> </ul> <p>We are up to <em>ten</em> vertices. So there is definitely a problem. There are $|SO(3, \mathbb{Z})| = 24$ rotations, and we're computing the orbit correctly, but we are off by a factor of three. We have obtained three interlocking cubes.</p> <p>What subset of $SO(3)$ should I be using?</p> <hr> <p>Generically the image of a point is the <a href="https://en.wikipedia.org/wiki/Compound_of_three_cubes" rel="nofollow noreferrer">compound of three cubes</a>. So this construction is redundant by a factor of three.</p> <p><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/8/8b/UC08-3_cubes.png/280px-UC08-3_cubes.png" alt=""></p> <p>Let's type out the possible vertices:</p> <pre><code> a  b  c | -b  a  c | -a -b  c
 a -c  b | -b -c  a | -a -c -b
 a -b -c | -b -a -c | -a  b -c
 a  c -b | -b  c -a | -a  c  b
-c  b  a |  c  a  b |  b -a  c
-c -a  b |  c  b -a |  b -c -a
-c -b -a |  c -a -b |  b  a -c
-c  a -b |  c -b  a |  b  c  a
</code></pre> <p>The group theory suggests these should split into three groups of eight, forming the three cubes. I wasn't able to find the correct permutations.</p> <hr> <p>If I had set $\vec{v}_0 = \frac{1}{\sqrt{3}}(1,1,1)$, then the vertices $\frac{1}{\sqrt{3}}(\pm 1,\pm 1,\pm 1)$ are the vertices of a cube.</p>
user
505,767
<p><strong>HINT</strong></p> <p>1) Find the opposite corner: $\vec{w}=-\vec{v} = -\frac{1}{\sqrt{10}} (3,1,0) \in S^2 $</p> <p>2) Find the vertical axis $A$ of the cube, i.e. the line through the origin which forms an angle $\alpha$ s.t. $\cos \alpha=\frac {1}{\sqrt 3}$ with the line $\vec v-\vec w$</p> <p>3) Find the other 6 corners by the rotation matrix of $\frac {\pi}{2}$ around $A$</p> <p><a href="https://i.stack.imgur.com/U0SNP.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U0SNP.jpg" alt="enter image description here"></a></p>
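As a numeric cross-check of the orbit counts discussed in the question (my own sketch, not part of the original hint; all names below are mine), one can enumerate the 24 rotations of the cube as signed permutation matrices of determinant $+1$ and compute orbits directly. The orbit of $(3,1,0)$ has 24 points (three cubes' worth), while the orbit of the diagonal point $(1,1,1)$ has only 8, matching the factor of three observed in the question.

```python
from itertools import permutations, product

def det3(m):
    # determinant of a 3x3 matrix given as nested tuples
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def act(m, v):
    # apply the matrix m to the column vector v
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

# The 24 rotations of the cube: signed permutation matrices with det +1.
rotations = []
for perm in permutations(range(3)):
    for signs in product((1, -1), repeat=3):
        m = tuple(tuple(signs[r] if c == perm[r] else 0 for c in range(3))
                  for r in range(3))
        if det3(m) == 1:
            rotations.append(m)

orbit_v = {act(m, (3, 1, 0)) for m in rotations}     # generic point: 3 cubes
orbit_diag = {act(m, (1, 1, 1)) for m in rotations}  # diagonal point: 1 cube
```

All 24 points of the first orbit still lie on the sphere of radius $\sqrt{10}$, so the redundancy really is a factor of three, not a computational slip.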
2,571,629
<blockquote> <p>Let $\mathbb Z$ denote the set of integers and $\mathbb Z_{\ge 0}$ denote the set $\{0,1,2,3,...\}$. Consider the map $f:\mathbb Z_{\ge 0}\times \mathbb Z \to \mathbb Z$ given by $f(m,n)=2^m\cdot(2n+1)$. Then the map $f$ is</p> <p>(A) injective but not surjective.</p> <p>(B) surjective but not injective.</p> <p>(C) injective and surjective.</p> <p>(D) neither injective nor surjective.</p> </blockquote> <p><strong>For injectivity,</strong> suppose $f(m_1,n_1)=f(m_2,n_2)$ with $m_1\ge m_2$. Then</p> <p>$$2^{m_1-m_2}(2n_1+1)=(1)(2n_2+1)$$ $$2^{m_1-m_2}(2n_1+1)=2^0(2n_2+1)$$ $$m_1=m_2 \land n_1=n_2 $$</p> <p><strong>For surjectivity,</strong> $m=0$, $f$ maps to odd integers. Similarly, I am getting pre-images for even integers also.</p> <p>So, (C) is the correct answer. Am I correct? But, the solution manual gives (A) as the correct one. Who is correct? Please help me.</p>
Rory Daulton
161,807
<p>Note that $f(m,n) = 2^m\cdot(2n+1)$ can never be zero, since neither factor can be zero. Therefore the mapping is not surjective.</p>
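A brute-force check of this answer (my addition, not part of the original post): over a window of targets, $0$ has no preimage while every nonzero integer has exactly one, so $f$ is injective but not surjective.

```python
def f(m, n):
    return 2**m * (2*n + 1)

# Count preimages of every integer in [-64, 64]. The search box below is
# large enough: |t| <= 64 forces m <= 6 and |n| <= 32 for any preimage.
preimages = {t: [] for t in range(-64, 65)}
for m in range(0, 8):
    for n in range(-64, 65):
        t = f(m, n)
        if -64 <= t <= 64:
            preimages[t].append((m, n))
```

The unique preimage of each nonzero $t$ is $(v_2(t), (t/2^{v_2(t)} - 1)/2)$, where $v_2$ is the 2-adic valuation; the enumeration confirms this concretely.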
2,190,394
<p><a href="https://i.stack.imgur.com/X1WFP.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X1WFP.jpg" alt="enter image description here"></a></p> <p>The required angle can be found using the cosine rule, and it comes out to be $90^{\circ}$. Can someone provide a more geometric approach? Or, if trigonometric, then at least not with any of these rules. Just plain $\sin, \cos, \tan$ would be better.</p>
Futurologist
357,211
<p>This problem does not require any formulas, calculations or even exact lengths for $AB$ and $CD$. The condition $AB = CD = \sqrt{3}$ is absolutely irrelevant information.</p> <p>1) Since $AB$ is orthogonal to the plane determined by the three points $B, C, D$, it is in fact orthogonal to any line in that plane. In particular, $AB$ is orthogonal to line $CD$. </p> <p>2) Furthermore, $CD$ is orthogonal to $BD$ by assumption. </p> <p>3) Since $CD$ is orthogonal to two non-parallel lines, $AB$ and $BD$, lying in the plane determined by the three points $A, B, D,$ one concludes that $CD$ is in fact orthogonal to the whole plane through $A, B, D$.</p> <p>4) Since $CD$ is orthogonal to the plane determined by the three points $A, B, D$, it is in fact orthogonal to any line in that plane. In particular, $CD$ is orthogonal to line $AD$. </p> <p>5) Consequently, $\angle \, ADC = 90^{\circ}$.</p>
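The synthetic argument above can be sanity-checked with explicit coordinates (my addition, not part of the original answer; the coordinate placement and the value $BD = 2$ are arbitrary choices of mine, consistent with the answer's point that only the orthogonality hypotheses matter):

```python
from math import sqrt, degrees, acos

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

s3 = sqrt(3)
B = (0, 0, 0)
A = (0, 0, s3)   # AB orthogonal to the plane z = 0 containing B, C, D
D = (2, 0, 0)    # BD = 2 is arbitrary; the angle is 90 degrees regardless
C = (2, s3, 0)   # CD has length sqrt(3) and is orthogonal to BD

DA, DC = sub(A, D), sub(C, D)
cos_ADC = dot(DA, DC) / sqrt(dot(DA, DA) * dot(DC, DC))
angle_ADC = degrees(acos(cos_ADC))
```

The dot product $\vec{DA}\cdot\vec{DC}$ vanishes identically, confirming $\angle ADC = 90^{\circ}$.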
2,793,144
<p>Consider the following Proposition: </p> <ul> <li>Let $A\subseteq B$. Also, let $B\subseteq C$. Thus, $A\subseteq C$.</li> </ul> <p>Proof: </p> <ul> <li>Let $A\subseteq B$. Also, let $B\subseteq C$. <blockquote> <p><strong>What goes here?</strong> Assume $x\in A$. As $x\in A$ and $A\subseteq B$, $x\in B$. As $x\in B$ and $B\subseteq C$, $x\in C$. </p> </blockquote></li> </ul> <p><strong>Is it that we should let $x$ be an arbitrary element in the domain? But what domain?</strong></p>
Michael Hardy
11,667
<p>$$ \text{Is } S\left( \cdots + \frac{A^n}{n!} + \cdots \right) S^{-1} \text{ equal to } \cdots+\frac{(SAS^{-1})^n}{n!} +\cdots \text{ ?} $$ $$ \text{Is } SA^n S^{-1} = (SAS^{-1})^n \text{ ?} $$ \begin{align} (SAS^{-1})^n &amp; = (SAS^{-1}) (SAS^{-1}) \cdots\cdots (SAS^{-1}) (SAS^{-1}) \\[10pt] &amp; = SA(S^{-1}S) A(S^{-1}S) \cdots\cdots (S^{-1}S)A(S^{-1}S) AS^{-1} &amp; &amp; \text{ by associativity} \\[10pt] &amp;= S(AAA\cdots A)S^{-1}. \end{align}</p>
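The telescoping identity $(SAS^{-1})^n = SA^nS^{-1}$ can be verified numerically in exact arithmetic (my addition, not part of the original answer; the particular $2\times 2$ matrices are arbitrary choices of mine):

```python
from fractions import Fraction

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(X, p):
    # identity matrix, then p successive multiplications
    R = [[Fraction(i == j) for j in range(len(X))] for i in range(len(X))]
    for _ in range(p):
        R = matmul(R, X)
    return R

A    = [[Fraction(2), Fraction(3)], [Fraction(1), Fraction(4)]]
S    = [[Fraction(1), Fraction(1)], [Fraction(0), Fraction(1)]]
Sinv = [[Fraction(1), Fraction(-1)], [Fraction(0), Fraction(1)]]  # S^(-1)

lhs = matpow(matmul(S, matmul(A, Sinv)), 5)   # (S A S^-1)^5
rhs = matmul(S, matmul(matpow(A, 5), Sinv))   # S A^5 S^-1
```

Using `Fraction` avoids floating-point noise, so the two sides agree exactly, just as the associativity argument predicts.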
184,241
<p>I am having issues with the code:</p> <pre><code>CC = p0^2/(p0 - L) Dbuy = (1 - CC)*(L/p0) + (CC - p0)*Log[CC/p0] Dwait = (1 - p1)*(L/(p1 - p0)) Dlock = (1 - p1)*(1 - L/p0 - L/(p1 - p0)) + (p1 - CC)*(1 - L/p0) - L*Log[(p1 - p0)/(CC - p0)] RevLocking1 = Table[NMaximize[{x*Dbuy*p0 + x*Dwait*0.4*p1 + x*Dlock*(p0 + 0.4*L), x*(Dbuy + Dwait + Dlock) &lt;= 1, 0 &lt;= L &lt;= p0 &lt;= p1 &lt;= 1}, {p0, p1, L}, MaxIterations -&gt; 10000], {x, 1, 6, 0.25}] </code></pre> <p>The code works, but it ignores the constraint:</p> <pre><code>0 &lt;= L &lt;= p0 &lt;= p1 &lt;= 1 </code></pre> <p>Since it reaches values like:</p> <pre><code>L -&gt; 0.303712, p0 -&gt; -0.0156877, p1 -&gt; 0.900987 </code></pre> <p>Any ideas?</p>
kglr
125
<pre><code>Merge[listData, Identity] </code></pre> <blockquote> <p>&lt;|"Input" -> {1, 2, 3}, "double" -> {2, 4, 6}, "squared" -> {1, 4, 9}|></p> </blockquote> <pre><code>listFormatData = {&lt;|"Input" -&gt; input1, "OutputKey1" -&gt; OutputKey1Input1Output|&gt;, &lt;|"Input" -&gt; input2, "OutputKey1" -&gt; OutputKey1Input2Output|&gt;}; Merge[listFormatData , Identity] </code></pre> <blockquote> <p>&lt;|"Input" -> {input1, input2}, "OutputKey1" -> {OutputKey1Input1Output, OutputKey1Input2Output}|> </p> </blockquote>
256,806
<p>How can I prove or disprove that $\lim\limits_{n\to \infty} (n+1)^{1/3}−n^{1/3}=\infty$?</p> <p>My guess is that it is false but I can't prove it.</p>
Mario Carneiro
50,776
<p>Let $x^3=n$. Then $$\lim_{n\to\infty}(n+1)^{1/3}-n^{1/3}=\lim_{x\to\infty}(x^3+1)^{1/3}-x.$$ Now for $x\geq1/3$, $$x^3+1\leq x^3+3x\leq x^3+3x+\frac3x+\frac1{x^3}=\left(x+\frac1x\right)^3,$$ so $$\lim_{x\to\infty}(x^3+1)^{1/3}-x\leq\lim_{x\to\infty}\left(\left(x+\frac1x\right)^3\right)^{1/3}-x=\lim_{x\to\infty}\frac1x=0.$$</p> <p>But $(n+1)^{1/3}-n^{1/3}\geq0$, so $\lim_{n\to\infty}(n+1)^{1/3}-n^{1/3}=0$.</p>
256,806
<p>How can I prove or disprove that $\lim\limits_{n\to \infty} (n+1)^{1/3}−n^{1/3}=\infty$?</p> <p>My guess is that it is false but I can't prove it.</p>
robjohn
13,854
<p>Since $x^3-y^3=(x-y)(x^2+xy+y^2)$, we have $$ (n+1)^{1/3}-n^{1/3}=\frac{(n+1)-n}{(n+1)^{2/3}+(n+1)^{1/3}n^{1/3}+n^{2/3}} $$ Therefore, $$ \frac1{3(n+1)^{2/3}}\le(n+1)^{1/3}-n^{1/3}\le\frac1{3n^{2/3}} $$ The limit follows by the <a href="http://en.wikipedia.org/wiki/Squeeze_theorem" rel="nofollow">Squeeze Theorem</a>.</p>
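A numeric spot-check of these squeeze bounds (my addition, not part of the original answer; the sample values of $n$ are arbitrary):

```python
# Verify 1/(3(n+1)^(2/3)) <= (n+1)^(1/3) - n^(1/3) <= 1/(3 n^(2/3))
checks = []
for n in (10, 100, 1000, 10**4):
    diff = (n + 1) ** (1/3) - n ** (1/3)
    lower = 1 / (3 * (n + 1) ** (2/3))
    upper = 1 / (3 * n ** (2/3))
    checks.append(lower <= diff <= upper)
```

Both bounds tend to $0$, so the squeeze gives the limit $0$, as the answer states.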
1,026,135
<p>Let $a, b$ be elements of a group $G$ and $H$ a normal subgroup of $G$. Is it true that if $aH = bH$, then $a^{-1}H = b^{-1}H$? How can I prove this?</p>
egreg
62,967
<p>I assume that $H$ is a subgroup of the group $G$.</p> <p>Saying $aH=bH$ is equivalent to $a^{-1}b\in H$; so $a^{-1}H=b^{-1}H$ is equivalent to $ab^{-1}\in H$ or $ba^{-1}\in H$ (because $(ab^{-1})^{-1}=ba^{-1})$.</p> <p>Are you sure that $a^{-1}b\in H$ implies $ba^{-1}\in H$?</p> <p>Check with $S_3$ and $H$ a two element subgroup.</p> <p>The assertion is however true if $H$ is normal (try your hand at it). An “abstract” proof is with the quotient group: if $aH=bH$, then $\pi(a)=\pi(b)$, where $\pi\colon G\to G/H$ is the canonical projection. Then $\pi(a)^{-1}=\pi(b)^{-1}$, so $\pi(a^{-1})=\pi(b^{-1})$ and finally $a^{-1}H=b^{-1}H$.</p>
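The suggested check with $S_3$ and a two-element subgroup can be done exhaustively (my addition, not part of the original answer; permutations are encoded as tuples $p$ with $p[i]$ the image of $i$, a convention of mine):

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    q = [0, 0, 0]
    for i in range(3):
        q[p[i]] = i
    return tuple(q)

S3 = list(permutations(range(3)))
e = (0, 1, 2)
H = {e, (1, 0, 2)}  # a two-element (non-normal) subgroup of S3

def same_left_coset(a, b, K):
    return compose(inverse(a), b) in K

# pairs with aH = bH but a^(-1)H != b^(-1)H
counterexamples = [(a, b) for a in S3 for b in S3
                   if same_left_coset(a, b, H)
                   and not same_left_coset(inverse(a), inverse(b), H)]

A3 = {e, (1, 2, 0), (2, 0, 1)}  # the normal subgroup of order 3
normal_counterexamples = [(a, b) for a in S3 for b in S3
                          if same_left_coset(a, b, A3)
                          and not same_left_coset(inverse(a), inverse(b), A3)]
```

For the two-element $H$ counterexamples do exist, while for the normal subgroup $A_3$ none exist, exactly as the answer asserts.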
1,026,135
<p>Let $a, b$ be elements of a group $G$ and $H$ a normal subgroup of $G$. Is it true that if $aH = bH$, then $a^{-1}H = b^{-1}H$? How can I prove this?</p>
Ma Ming
16,340
<p>Untrue in general (when $H$ is not assumed normal).</p> <p>Hint: $$ aH=bH\quad&lt;=&gt;\quad a^{-1}b\in H. $$ </p> <p>$$ a^{-1}H=b^{-1} H \quad&lt;=&gt;\quad ba^{-1}\in H. $$</p> <p>Pick $h\in H$ with $bhb^{-1}\not\in H$, and let $a=bh$.</p>
134,213
<p>Is there a way to simplify the following expression</p> <p>$$A = \frac{\left (a^6-b^6 \right )^2}{c^6-4a^3b^3}$$</p> <p>assuming</p> <p>\begin{eqnarray} a = &amp;\ (a-b)^2+b\ (a+1) \cr b = &amp;\ (b-c)^2+c\ (b+1) \cr c = &amp;\ (c-a)^2+a\ (c+1) \end{eqnarray}</p> <h3>Edit</h3> <p>I would like to determine which of these answers for <code>A</code> is correct: </p> <ol> <li>$ a^3b^3$ </li> <li>$b^3c^3$ </li> <li>$a^3b^6$ </li> <li>$a^3c^3$ </li> <li>$c^6$</li> </ol>
Bill
18,890
<p>I'm not certain if this is what you want</p> <pre><code>Reduce[{A == (a^6 - b^6)^2/(c^6 - 4 a^3 b^3), a == (a - b)^2 + b (a + 1), b == (b - c)^2 + c (b + 1), c == (c - a)^2 + a (c + 1)}, A] </code></pre> <p>tells you that <code>A == -3 (5 + 2 c^2 + c^4)</code> when <code>c== + or - 1</code> times the square root of any root of $15 + 6 q + 3 q^2 + q^3$. That appears to only have one real root around $-2.78$.</p>
2,416,848
<p>Unfortunately, I've no idea how to deal with this problem. The term <em>isomorphism class</em> leads me to think of the fundamental theorem of finitely generated abelian groups, but I think I'm going in the wrong direction.</p> <p>Please give me some hints, suggestions, anything!</p>
Thomas Andrews
7,933
<p>Since $(2+i)(2-i)=5$ in $\mathbb Z[i]$, with the two factors relative prime there, and $\mathbb Z[i]$ is a PID, we can conclude that $$(\mathbb Z[i])/\langle 5\rangle = \mathbb Z[i]/\langle 2+i\rangle \times \mathbb Z[i]/\langle 2-i\rangle.$$</p> <p>But, by conjugation, $\mathbb Z[i]/\langle2+i\rangle\cong \mathbb Z[i]/\langle 2-i\rangle.$ And the units of the product of two rings is the product of the units of each ring.</p> <p>So you just need to know the group of units of $\mathbb Z[i]/\langle 2+i\rangle$, and that $\mathbb Z[i]/\langle 5\rangle \cong \mathbb Z_5[i]$.</p> <p>Show that $\mathbb Z[i]/\langle 2+i\rangle\cong \mathbb Z_5$.</p>
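One can confirm the structural claims computationally (my addition, not part of the original answer): model $\mathbb Z[i]/\langle 5\rangle \cong \mathbb Z_5[i]$ as pairs $(a,b) = a+bi$ with arithmetic mod 5. The unit group has 16 elements, exponent 4, and exactly 4 solutions of $x^2 = 1$, which among abelian groups of order 16 pins down $\mathbb Z_4 \times \mathbb Z_4$, consistent with the product of two copies of $(\mathbb Z[i]/\langle 2+i\rangle)^\times \cong \mathbb Z_5^\times$.

```python
# model Z[i]/<5> = Z_5[i] as pairs (a, b) = a + b*i, arithmetic mod 5
elems = [(a, b) for a in range(5) for b in range(5)]

def mul(x, y):
    (a, b), (c, d) = x, y
    return ((a*c - b*d) % 5, (a*d + b*c) % 5)

one = (1, 0)
units = [x for x in elems if any(mul(x, y) == one for y in elems)]

def order(x):
    k, p = 1, x
    while p != one:
        p = mul(p, x)
        k += 1
    return k

orders = [order(u) for u in units]
```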
80,922
<p>I am generally confused about integer valued polynomials, and how to count them. Trying to learn the subject I started by listing permutations of the rows in a lower triangular table:</p> <p>$$\displaystyle T = \left(\begin{matrix} 1&amp;0&amp;0&amp;0&amp;0&amp;0&amp;0&amp;\cdots \\ 1&amp;1&amp;0&amp;0&amp;0&amp;0&amp;0 \\ 1&amp;1&amp;1&amp;0&amp;0&amp;0&amp;0 \\ 1&amp;1&amp;1&amp;1&amp;0&amp;0&amp;0 \\ 1&amp;1&amp;1&amp;1&amp;1&amp;0&amp;0 \\ 1&amp;1&amp;1&amp;1&amp;1&amp;1&amp;0 \\ 1&amp;1&amp;1&amp;1&amp;1&amp;1&amp;1 \\ \vdots&amp;&amp;&amp;&amp;&amp;&amp;&amp;\ddots \end{matrix}\right)$$</p> <p>with the following Mathematica program:</p> <pre><code>Table[ a = Table[1, {n, 1, k}]; nn = Length[a]; b = Flatten[ Table[Permutations[ Table[Table[If[n &gt;= k, a[[n - k + 1]], 0], {k, 1, nn}], {n, 0, nn}][[i]]], {i, 1, nn + 1}], 1]; ArrayPlot[b], {k, 0, 6}] </code></pre> <p>The last output is the following arrayplot that could be shown in matrix form too:</p> <p><img src="https://i.stack.imgur.com/2WIAd.jpg" alt="confusion"></p> <p>Is there a simpler way to program this?</p>
Bill
18,890
<p>One way you can let Mathematica know what you are assuming here is</p> <pre><code>Simplify[DSolve[{y'[x]==1/2 (1-y[x]) y[x] (a-y[x]+a y[x]), y[0]==y0}, y[x], x], 0&lt;a&lt;1/2] </code></pre> <p>One way of assigning the result and passing value of a, y0, etc is</p> <pre><code>yf = y[x]/.DSolve[{y'[x]==1/2 (1-y[x]) y[x] (a-y[x]+a y[x]), y[0]==y0}, y[x], x][[1]]; Simplify[yf/.{a-&gt;2, y0-&gt;1/2, x-&gt;4}] </code></pre> <p>That does warn you that inverse functions are being used and you should check the results carefully, but the result from DSolve is an inverse function, so I don't think that can be avoided</p>
1,341,231
<p>Let $f$ be continuous on $[0,1]$, and let $\alpha&gt;0$. Find: $\lim\limits_{x\to 0}{x^{\alpha}\int_{x}^{1}{f(t)\over t^{\alpha +1}}dt}$. I tried integration by parts, but I am not sure if $f$ is integrable and to what extent. It also gets really messy. Besides, I am not sure if I am to express the limit using $f$, or to arrive at an actual number. It would be nice if you could take a look.</p> <p>Using other questions I got: Let us denote $G(x)=\int_{x}^{1}{f(t)\over t^{\alpha +1}}dt$, so I am looking for: $\lim\limits_{x\to 0}{x^{\alpha}G(x)}=\lim\limits_{x\to 0}{G(x)\over {1\over x^{\alpha}}}$. If $g(t)=\int{f(t)\over t^{\alpha +1}}dt$ is an antiderivative, then $G(x)=g(1)-g(x)$, which means $G'(x)=-g'(x)=-{f(x)\over x^{\alpha +1}}$. Let us use L'Hôpital's rule: $\lim\limits_{x\to 0}{G(x)\over {1\over x^{\alpha}}}=\lim\limits_{x\to 0}{G'(x)\over -\alpha x^{-\alpha-1}}=\lim\limits_{x\to 0}{-{f(x)\over x^{\alpha +1}}\over {-\alpha\over x^{\alpha+1}}}=\lim\limits_{x\to 0}{f(x)\over \alpha}={f(0)\over \alpha}$.</p>
Shubham Avasthi
240,149
<p>I would recommend An Introduction to the Theory of Numbers by G.H. Hardy and E.M. Wright.</p>
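A numeric spot-check of the limit derived in the question above (my own sketch; the choices $f(t)=\cos t$, $\alpha = 1$, the substitution $t = e^u$, and the grid size are all mine): the L'Hôpital computation predicts the limit $f(0)/\alpha = 1$, and a quadrature at small $x$ agrees.

```python
from math import cos, exp, log

def limit_probe(x, n=200000):
    # x^a * integral_x^1 f(t)/t^(a+1) dt with f = cos and a = 1,
    # computed via the substitution t = e^u and the trapezoid rule
    a, b = log(x), 0.0
    h = (b - a) / n
    g = lambda u: cos(exp(u)) * exp(-u)
    s = (g(a) + g(b)) / 2 + sum(g(a + i * h) for i in range(1, n))
    return x * s * h

val = limit_probe(1e-4)
```

A short expansion shows $x^{\alpha}G(x) \approx 1 - \tfrac{3}{2}x$ for this $f$, so `val` should sit just below $1$.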
970,872
<p>There is a common brain teaser that goes like this:</p> <p>You are given two ropes and a lighter. This is the only equipment you can use. You are told that each of the two ropes has the following property: if you light one end of the rope, it will take exactly one hour to burn all the way to the other end. But it doesn't have to burn at a uniform rate. In other words, half the rope may burn in the first five minutes, and then the other half would take 55 minutes. The rate at which the two ropes burn is not necessarily the same, so the second rope will also take an hour to burn from one end to the other, but may do it at some varying rate, which is not necessarily the same as the one for the first rope. Now you are asked to measure a period of 45 minutes. How will you do it?</p> <p>Now I usually love brain teasers but this one frustrated me for a while because I could not prove that if a rope of non-uniform density is burned at both ends it burns in time $T/2$. I think I have sketched a proof by induction that shows that it's not actually true.</p> <p>Given a rope of uniform density the burn rate at either end is equal so clearly it burns in time $T/2$. Now, consider a rope of non-uniform density, the total time T for this rope to burn is the linear combination of the times of the uniform density "chunks" to burn, i.e. $T = T_1 + T_2 + \ldots + T_n$. So consider, $T/2 = T_1/2+ T_2/2 + \ldots + T_n/2$. If we look at each $T_i/2$ this is precisely the time it takes to burn the uniform segment $T_i$ if lit at both ends. Therefore, in order to arrive at a rope that burns in time $T/2$, one would need to light each uniform segment on both ends, not simply the end of both ends of the total rope. What am I doing wrong?</p>
TonyK
1,508
<p>Light both ends of rope $A$, and one end of rope $B$. When rope $A$ is burnt out, after 30 minutes, light the other end of rope $B$. Rope $B$ will be burnt out after a further 15 minutes.</p>
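A simulation of the key fact behind this answer (my addition, not part of the original post), under the puzzle's standard assumption that the burn rate at each point is position-dependent but direction-independent: a one-hour rope lit at both ends always burns out in exactly 30 minutes, whatever its density profile.

```python
import random
from bisect import bisect_right
from itertools import accumulate

random.seed(1)

# A rope as 1000 equal-length segments with random (non-uniform) burn
# times summing to 60 minutes.
weights = [random.random() for _ in range(1000)]
scale = 60 / sum(weights)
segs = [w * scale for w in weights]

cum_left = list(accumulate(segs))             # burn times from the left end
cum_right = list(accumulate(reversed(segs)))  # burn times from the right end

def frac_burned(cum, t):
    # fraction of the rope consumed after burning one end for t minutes
    k = bisect_right(cum, t)
    if k == len(cum):
        return 1.0
    prev = cum[k - 1] if k else 0.0
    return (k + (t - prev) / (cum[k] - prev)) / len(cum)

# Bisect for the moment the two fronts meet when both ends are lit.
lo, hi = 0.0, 60.0
for _ in range(80):
    mid = (lo + hi) / 2
    if frac_burned(cum_left, mid) + frac_burned(cum_right, mid) >= 1.0:
        hi = mid
    else:
        lo = mid
meet_time = hi
```

The fronts meet exactly when the combined consumed burn time equals 60 minutes, i.e. at $t = 30$, regardless of the random profile.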
970,872
<p>There is a common brain teaser that goes like this:</p> <p>You are given two ropes and a lighter. This is the only equipment you can use. You are told that each of the two ropes has the following property: if you light one end of the rope, it will take exactly one hour to burn all the way to the other end. But it doesn't have to burn at a uniform rate. In other words, half the rope may burn in the first five minutes, and then the other half would take 55 minutes. The rate at which the two ropes burn is not necessarily the same, so the second rope will also take an hour to burn from one end to the other, but may do it at some varying rate, which is not necessarily the same as the one for the first rope. Now you are asked to measure a period of 45 minutes. How will you do it?</p> <p>Now I usually love brain teasers but this one frustrated me for a while because I could not prove that if a rope of non-uniform density is burned at both ends it burns in time $T/2$. I think I have sketched a proof by induction that shows that it's not actually true.</p> <p>Given a rope of uniform density the burn rate at either end is equal so clearly it burns in time $T/2$. Now, consider a rope of non-uniform density, the total time T for this rope to burn is the linear combination of the times of the uniform density "chunks" to burn, i.e. $T = T_1 + T_2 + \ldots + T_n$. So consider, $T/2 = T_1/2+ T_2/2 + \ldots + T_n/2$. If we look at each $T_i/2$ this is precisely the time it takes to burn the uniform segment $T_i$ if lit at both ends. Therefore, in order to arrive at a rope that burns in time $T/2$, one would need to light each uniform segment on both ends, not simply the end of both ends of the total rope. What am I doing wrong?</p>
Eugene K
273,801
<p>Here's my proof why the answer is wrong as well, if you assume nothing about the rope properties. Essentially things like feedback might affect the rope burning. </p> <p>Rope = [ 1 | 2 | 3 | 4 | 5 ]</p> <p>Imagine the rope split into 5 parts.</p> <p>The rope’s units are not guaranteed to burn at the same rate depending on which side you light it from.</p> <p>From left: 1 - 50min; 2 - 2min; 3 - 5min; 4 - 1min; 5 - 2min.<br> From right: 1 - 10min; 2 - 5min; 3 - 15min; 4 - 20min; 5 - 10min.<br> Both ways add up to 1hr if lit individually.</p> <p>Light both ends: 50 minutes in and finally it’s done. Left side burnt only Unit 1 and right side burnt units 5, 4, 3, 2. </p> <p>Or after 30 minutes, unit 1 is not burnt, and only unit 5 and 4 are burnt. </p> <p>The fastest that lighting both sides can take is 0 minutes, and the slowest is 1 hour.</p>
43,690
<p>I have to apologize because this is not the normal sort of question for this site, but there have been times in the past where MO was remarkably helpful and kind to undergrads with similar types of question and since it is worrying me increasingly as of late I feel that I must ask it.</p> <p>My question is: what can one (such as myself) contribute to mathematics?</p> <p>I find that mathematics is made by people like Gauss and Euler - while it may be possible to learn their work and understand it, nothing new is created by doing this. One can rewrite their books in modern language and notation or guide others to learn it too but I never believed this was the significant part of a mathematician work; which would be the creation of original mathematics. It seems entirely plausible that, with all the tremendously clever people working so hard on mathematics, there is nothing left for someone such as myself (who would be the first to admit they do not have any special talent in the field) to do. Perhaps my value would be to act more like cannon fodder? Since just sending in <em>enough</em> men in will surely break through some barrier.</p> <p>Anyway I don't want to ramble too much but I really would like to find answers to this question - whether they come from experiences or peoples biographies or anywhere.</p> <p>Thank you.</p>
Jon Bannon
6,269
<p>I'm not really qualified to answer this...or perhaps I am, since I'm no Terry Tao! I think what we bring to mathematics is the unique perspective afforded by our own individual experience. The more of us who are trying to do mathematics, the better, because this increases the diversity of perspectives in approaching the myriad problems we face. Also, as is popular to say, there is far more mathematics to do than any handful of powerful mathematicians can do. This is a "big tent". </p> <p>Another thing that I've read someplace...someone was describing the difference between Kolmogorov and Israel Gelfand, and this person wrote/said the following: "when Kolmogorov went into a new mathematical landscape he immediately looked for the tallest mountain and climbed it, when Gelfand entered the same landscape, he immediately began building roads." (Someone please fill muad in on the proper location of this quote...I think it was the Notices...)</p> <p>Others have said that most of what we do as mathematicians is organize and clean things up to clear the way for a future polymath, like a von Neumann, to really make some progress, and <em>some</em> road builders have built some unbelievable roads. (Didn't Serre say something like he spent most of his career rewriting other people's work?) </p> <p>I don't know about you, but I'm fine with this! It's much better than having my name on the location of a transistor on some unknown circuitboard on the space shuttle...not that there's anything wrong with that!</p> <p>Anyhow, don't worry about having things to do...there are plenty!</p>
43,690
<p>I have to apologize because this is not the normal sort of question for this site, but there have been times in the past where MO was remarkably helpful and kind to undergrads with similar types of question and since it is worrying me increasingly as of late I feel that I must ask it.</p> <p>My question is: what can one (such as myself) contribute to mathematics?</p> <p>I find that mathematics is made by people like Gauss and Euler - while it may be possible to learn their work and understand it, nothing new is created by doing this. One can rewrite their books in modern language and notation or guide others to learn it too but I never believed this was the significant part of a mathematician work; which would be the creation of original mathematics. It seems entirely plausible that, with all the tremendously clever people working so hard on mathematics, there is nothing left for someone such as myself (who would be the first to admit they do not have any special talent in the field) to do. Perhaps my value would be to act more like cannon fodder? Since just sending in <em>enough</em> men in will surely break through some barrier.</p> <p>Anyway I don't want to ramble too much but I really would like to find answers to this question - whether they come from experiences or peoples biographies or anywhere.</p> <p>Thank you.</p>
drbobmeister
8,472
<p>I had the privilege of discussing similar concerns with regard to theoretical physics with the late Richard Feynman. He told me the following, which has always served me in good stead: "You keep on learning and learning, and pretty soon you learn something no one has learned before." That was his "advice"; my advice? Go for it!</p>
43,690
<p>I have to apologize because this is not the normal sort of question for this site, but there have been times in the past where MO was remarkably helpful and kind to undergrads with similar types of question and since it is worrying me increasingly as of late I feel that I must ask it.</p> <p>My question is: what can one (such as myself) contribute to mathematics?</p> <p>I find that mathematics is made by people like Gauss and Euler - while it may be possible to learn their work and understand it, nothing new is created by doing this. One can rewrite their books in modern language and notation or guide others to learn it too but I never believed this was the significant part of a mathematician work; which would be the creation of original mathematics. It seems entirely plausible that, with all the tremendously clever people working so hard on mathematics, there is nothing left for someone such as myself (who would be the first to admit they do not have any special talent in the field) to do. Perhaps my value would be to act more like cannon fodder? Since just sending in <em>enough</em> men in will surely break through some barrier.</p> <p>Anyway I don't want to ramble too much but I really would like to find answers to this question - whether they come from experiences or peoples biographies or anywhere.</p> <p>Thank you.</p>
Franz Lemmermeyer
3,503
<p>Let me add two quotations:</p> <ol> <li><p>Fermat's motto was "Multi pertransibunt et augebitur scientia" (many will pass through and knowledge will be increased). On another occasion he wrote about "passing the torch to the next generation", which I find particularly nice.</p></li> <li><p>"When kings are building, carters have work to do". Kronecker quoted this, in his letter to Cantor of September 1891.</p></li> </ol>
4,038,151
<p>In Bernt Oksendal's <em>Stochastic Differential Equations</em>, Chapter 4, one has the following stochastic differential equation (whose solution is geometric Brownian motion): <span class="math-container">$$dN_t=rN_tdt+\alpha N_tdB_t\;\;\;\text{ ie } \;\;\; N_t-N_0=r\int_0^t N_sds+\alpha\int_0^tN_sdB_s,$$</span> where <span class="math-container">$\alpha,r\in\mathbb{R}$</span> and <span class="math-container">$B_t$</span> is standard Brownian motion (ie <span class="math-container">$B_0=0$</span>). After assuming that <span class="math-container">$N_t$</span> solves the above equation, the author abruptly deduces that <span class="math-container">$$\frac{dN_t}{N_t}=rdt+\alpha dB_t \;\;\;\text{ie}\;\;\; \int_0^t\frac{1}{N_s}dN_s= rt+\alpha B_t. \;\;\;(*)$$</span></p> <p>I don't understand how he obtained this directly.</p> <p>What I understand for sure is that if we seek to compute <span class="math-container">$$\int_0^t\frac{1}{N_s}dN_s $$</span> we apply Itô's formula for <span class="math-container">$Y_t=\ln(N_t)$</span> (assuming <span class="math-container">$N_t$</span> satisfies all the needed conditions). After some computation this yields<br /> <span class="math-container">$$\frac{1}{N_t}dN_t=d\ln N_t+\frac{1}{2}\alpha^2dt \;\;\text{ i.e } \;\;\int_0^t\frac{1}{N_s}dN_s=\ln(N_t)-\ln(N_0)+\frac{1}{2}\alpha^2t.\;\;\;(**)$$</span> But at first glance, it does not seem that <span class="math-container">$(**)$</span> implies <span class="math-container">$(*)$</span>. How did he obtain <span class="math-container">$(*)$</span>? Did he use a method other than the Ito formula or am I missing something?</p> <hr /> <p>Thank you for the helpful answers! I've upvoted both and will accept whichever has more votes (in case of tie I'll just leave them be).</p> <p>It turns out that what I was missing is the definition of the <em><strong>Itô integral with respect to an Itô process</strong></em>, which I could not find in the book. 
So actually <em><strong>by definition</strong></em> one has, for any Itô process of the form <span class="math-container">$$dX_t=\alpha dt+\sigma dB_t$$</span> and any appropriate integrand <span class="math-container">$Y_t$</span>, that <span class="math-container">$$\boxed{\int_0^tY_sdX_s:=\int_0^t\alpha Y_s ds + \int_0^t\sigma Y_sdB_s}$$</span> This justifies the formal notation <span class="math-container">$$\frac{1}{N_t}dN_t=\frac{1}{N_t}(rN_tdt+\alpha N_tdB_t)= r\,dt+\alpha\, dB_t,$$</span> and automatically gives <span class="math-container">$(*)$</span> when <span class="math-container">$Y_t=1/N_t$</span>, of course, assuming <span class="math-container">$Y_t$</span> meets all the necessary requirements.</p>
Jan Stuller
503,335
<p>I find notation such as <span class="math-container">$\frac{dN_t}{N_t}=rdt+\alpha dB_t$</span> incredibly unfortunate, it leads to the type of confusion you have encountered (and that I used to encounter myself).</p> <p>That is why I advise against using short-hand notation (at least until one becomes very confident in stochastic calculus techniques).</p> <p>In long-hand notation:</p> <p><span class="math-container">$$N_t=N_0+\int_{h=0}^{h=t}rN_hdh+\int_{h=0}^{h=t}\alpha N_hdB_h$$</span></p> <p>From the above, it should be obvious that <span class="math-container">$N_h$</span> <em>cannot</em> be taken out of the integral and brought to the LHS (which is not obvious in the short-hand notation). Ito's Lemma needs to be used, applied to <span class="math-container">$ln(N_t)$</span> as you point out. Let <span class="math-container">$F(N_t,t):=ln(N_t)$</span>, then taking the derivatives:</p> <p><span class="math-container">$$\frac{\partial F}{\partial N_t}=\frac{1}{N_t}, \frac{\partial^2 F}{\partial N_t^2}=\frac{-1}{N_t^2}, \frac{\partial F}{\partial t}=0$$</span></p> <p>Then:</p> <p><span class="math-container">$$F_t=F(N_0)+\int_{h=0}^{h=t}\left(\frac{\partial F}{\partial h}+\frac{\partial F}{\partial N_h}rN_h+0.5\frac{\partial^2 F}{\partial N_h^2}\alpha^2 N_h^2\right)dh+\int_{h=0}^{h=t}\frac{\partial F}{\partial N_t}\alpha N_hdB_h=\\=ln(N_0)+\int_{h=0}^{h=t}\left(r-0.5\alpha^2 \right)dh+\int_{h=0}^{h=t}\alpha dB_h=\\=ln(N_0)+(r-0.5\alpha^2)t+\alpha B_t$$</span></p> <p>Because <span class="math-container">$F_t=ln(N_t)$</span>, the final answer is (by exponentiating both sides):</p> <p><span class="math-container">$$N_t=N_0e^{(r-0.5\alpha^2)t+\alpha B_t}$$</span></p>
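A pathwise numeric check of the closed form derived above (my addition, not part of the original answer; the parameter values, seed, and the Milstein discretization are choices of mine): simulate $dN_t = rN_t\,dt + \alpha N_t\,dB_t$ along one approximate Brownian path and compare with $N_0 e^{(r-0.5\alpha^2)t+\alpha B_t}$ evaluated on the same path.

```python
import random
from math import exp, sqrt

random.seed(7)

r, alpha, N0, T = 0.05, 0.2, 1.0, 1.0
steps = 10**5
dt = T / steps

N, W = N0, 0.0
for _ in range(steps):
    dB = random.gauss(0.0, sqrt(dt))
    # Milstein scheme for dN = r N dt + alpha N dB
    N += r * N * dt + alpha * N * dB + 0.5 * alpha**2 * N * (dB*dB - dt)
    W += dB

closed_form = N0 * exp((r - 0.5 * alpha**2) * T + alpha * W)
rel_err = abs(N - closed_form) / closed_form
```

With $10^5$ steps the strong order-1 Milstein scheme tracks the closed form to well under $0.1\%$ on this path.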
57,223
<p><strong>No error in 10.0.2</strong></p> <hr> <p>When using the <code>Correlation[]</code> function I sometimes get this strange warning:</p> <pre><code>CorrelationTest::nortst: "At least one of the p-values in {0.0527317}, resulting from a test for normality, is below 0.05`. The tests in \!\({\"PearsonCorrelation\"}\) require that the data is normally distributed." </code></pre> <p>But notice that there's only one p-value and it is actually greater than $0.05$:</p> <pre><code>0.0527317 &lt; 0.05 False </code></pre> <p>The code that causes this:</p> <pre><code>x = RandomReal[{-5, 5}, 100]; y = 2 x + 1 + RandomReal[{-0.1, 0.1}, 100]; X = Transpose[{x, y}]; ListPlot[X] Correlation[X] // MatrixForm CorrelationTest[X, 99995/100000, "PearsonCorrelation"] </code></pre> <p>Why does this happen? (To reproduce the issue, a few repetitions are usually required.)</p>
Nasser
70
<p>No error in 10.0.2, on Windows 7, 64 bit.</p> <p><img src="https://i.stack.imgur.com/3Rsxu.png" alt="Mathematica graphics"></p> <pre><code>SeedRandom[0]; x = RandomReal[{-5, 5}, 100]; y = 2 x + 1 + RandomReal[{-0.1, 0.1}, 100]; X = Transpose[{x, y}]; ListPlot[X] Correlation[X] // MatrixForm CorrelationTest[X, 99995/100000, "PearsonCorrelation"] </code></pre> <p><img src="https://i.stack.imgur.com/IXLek.png" alt="Mathematica graphics"></p> <p>No error messages.</p>
541,541
<blockquote> <p>Assume that $f_1\colon V_1\to W_1, f_2\colon V_2\to W_2$ are $k$-linear maps between $k$-vector spaces (over the same field $k$, but the dimension may be infinity). Then the tensor product $f_1\otimes f_2\colon V_1\otimes V_2\to W_1\otimes W_2$ is defined, and it's obvious that $\ker f_1\otimes V_2+ V_1\otimes \ker f_2 \subseteq \ker (f_1\otimes f_2)$. My question is whether the relation $\subseteq$ is in fact $=$. </p> </blockquote> <p>If this does not hold, how about assuming all these vector spaces are commutative associative $k$-algebras with identity and that all the maps are $k$-algebra homomorphisms? Or can you give a "right" form of the kernel $\ker (f_1\otimes f_2)$?</p>
pre-kidney
34,662
<p>The isomorphism $$ \ker (f_1\otimes f_2)=V_1\otimes \ker f_2 + \ker f_1 \otimes V_2 $$ is equivalent to the isomorphism $$ \operatorname{coker} (f_1\otimes f_2)=\operatorname{coker} f_1\otimes \operatorname{coker} f_2, $$ since for any ideals $I,J\subset R$ we have $$ R/(I+J)=R/I\otimes_R R/J $$</p>
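A finite-dimensional dimension count supporting the identity (my addition, not part of the original answer; the sample matrices and helper names are mine): since $\operatorname{rank}(f_1\otimes f_2)=\operatorname{rank}f_1\cdot\operatorname{rank}f_2$ and $\dim(\ker f_1\otimes V_2 + V_1\otimes\ker f_2)=k_1n_2+n_1k_2-k_1k_2$, the two sides must have equal dimension, which the exact-arithmetic computation below confirms.

```python
from fractions import Fraction

def kron(A, B):
    # Kronecker (tensor) product of two matrices
    rows = []
    for i in range(len(A)):
        for k in range(len(B)):
            rows.append([Fraction(A[i][j]) * B[k][l]
                         for j in range(len(A[0])) for l in range(len(B[0]))])
    return rows

def rank(M):
    # Gaussian elimination over the rationals
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def nullity_identity_holds(A, B):
    n1, n2 = len(A[0]), len(B[0])
    k1, k2 = n1 - rank(A), n2 - rank(B)
    lhs = n1 * n2 - rank(kron(A, B))   # dim ker(f1 (x) f2)
    rhs = k1 * n2 + n1 * k2 - k1 * k2  # dim(ker f1 (x) V2 + V1 (x) ker f2)
    return lhs == rhs

ok1 = nullity_identity_holds([[1, 2, 3], [2, 4, 6]], [[1, 0], [1, 1], [0, 1]])
ok2 = nullity_identity_holds([[1, 1, 0], [0, 0, 0]], [[1, 2], [2, 4]])
```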
904,547
<p>I have the sum ($M$ is any integer $&gt; 1$):</p> <p>$$ \sum_{h = 1}^{M}\left(\,\left\lfloor\, 2M + 1 \over h\,\right\rfloor -\left\lfloor\, 2M \over h\,\right\rfloor\,\right) $$</p> <p>and am looking for a way to simplify it, in the sense of either finding a <strong>simple closed form or a good approximation for $M$ large</strong>. This resembles my previous question involving the divisor summatory function. However, this is different because the sum extends to $M$ (not $2M$) and now the differences have at the numerators an odd number and the preceding even number (which is $2M$); I was hoping some good simplification could be found in this case. The first terms are integers, so they pose no problems; I was mainly looking for some way to simplify the other differences.</p> <p>A <strong>tight upper bound</strong> would also be useful (as well as references to similar well-known formulas).</p>
Ara
169,954
<p>$$ 2\pi \int_0^{\sqrt{2}} \sqrt{4-x^{2}}\,dx $$ </p> <p>The antiderivative is $$ \pi \left(x \sqrt{4-x^2} + 4 \arcsin \frac {x}{2} \right),$$ </p> <p>evaluated from $0$ to $\sqrt{2}$, which gives $\pi^{2} + 2\pi$.</p>
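A quick quadrature check of this evaluation (my addition, not part of the original answer): $\int_0^{\sqrt 2} 2\pi\sqrt{4-x^2}\,dx$ should equal $\pi^2+2\pi$.

```python
from math import sqrt, pi

def f(x):
    return 2 * pi * sqrt(4 - x * x)

# composite Simpson's rule on [0, sqrt(2)]
n = 10000  # even number of subintervals
a, b = 0.0, sqrt(2)
h = (b - a) / n
s = f(a) + f(b)
for i in range(1, n):
    s += f(a + i * h) * (4 if i % 2 else 2)
integral = s * h / 3

expected = pi**2 + 2 * pi
```

The integrand is smooth on $[0,\sqrt 2]$ (the square-root singularity sits at $x=2$, outside the interval), so Simpson's rule converges rapidly here.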
2,326,551
<p>Prove that $\varphi(n^2)=n\cdot\varphi(n)$ for $n\in \Bbb{N}$, where $\varphi$ is Euler's totient function.</p>
DonAntonio
31,254
<p><strong>Hint</strong>: Use that for any $\;n=p_1^{a_1}\cdot\ldots\cdot p_k^{a_k}\in\Bbb N\;,\;\;p_i\;$ primes, $\;a_i\in\Bbb N\;$ , we have</p> <p>$$\varphi(n)=n\prod_{i=1}^k\left(1-\frac1{p_i}\right)$$</p>
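An empirical check of the identity (my addition, not part of the original hint; the trial-division totient below is a standard textbook implementation): since $n$ and $n^2$ share the same prime divisors, the product formula gives $\varphi(n^2) = n^2\prod(1-1/p_i) = n\cdot\varphi(n)$, which the code confirms for small $n$.

```python
def phi(n):
    # Euler's totient via trial-division factorization
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        result -= result // m
    return result

ok = all(phi(n * n) == n * phi(n) for n in range(1, 301))
```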
4,312,353
<p>Let <span class="math-container">$f_n$</span> and <span class="math-container">$f$</span> be functions in <span class="math-container">$L_1(\mathbb R^n)$</span> such that <span class="math-container">$\lim_{n \rightarrow \infty} f_n = f$</span> almost everywhere in <span class="math-container">$\mathbb R^n$</span> and <span class="math-container">$\int_{\mathbb R^n} |f_n|\ dx\rightarrow \int_{\mathbb R^n} |f|\ dx$</span>. Show that for every Lebesgue measurable set <span class="math-container">$A \subset \mathbb R^n$</span> <span class="math-container">$$\int_{A} f_n\ dx\rightarrow \int_{A} f\ dx.$$</span> How can I solve this problem?</p>
operatorerror
210,391
<p>Here is a hint to get you started: Apply Fatou's lemma to <span class="math-container">$$ |f_n|+|f|-|f_n-f|\geq 0 $$</span> to find <span class="math-container">$f_n\to f$</span> in <span class="math-container">$L^1$</span>.</p>
4,312,353
<p>Let <span class="math-container">$f_n$</span> and <span class="math-container">$f$</span> functions of <span class="math-container">$L_1\mathbb R^n$</span> such that <span class="math-container">$\lim_{n \rightarrow \infty} f_n = f$</span> almost always in <span class="math-container">$\mathbb R^n$</span> and <span class="math-container">$\int_{\mathbb R^n} |f_n|\ dx\rightarrow \int_{\mathbb R^n} |f|\ dx$</span>. Show that for every set <span class="math-container">$A \subset \mathbb R^n$</span> lebesgue measurable <span class="math-container">$$\int_{A} f_n\ dx\rightarrow \int_{A} f\ dx.$$</span> How can I solve this problem?</p>
Mason
752,243
<p>This is an easy application of a &quot;generalized&quot; dominated convergence theorem. You have <span class="math-container">$|f_n\chi_{A}| \leq |f_n|$</span>, and <span class="math-container">$\int |f_n| \to \int |f|$</span>. Since <span class="math-container">$f_n\chi_{A} \to f\chi_{A}$</span> almost everywhere, the generalized dominated convergence theorem gives <span class="math-container">$\int f_n\chi_{A} \to \int f\chi_{A}$</span></p>
3,310,599
<p>Let <span class="math-container">$Y$</span> be a random variable on <span class="math-container">$[0,1]$</span> and let <span class="math-container">$X\sim U[0,1]$</span> be a uniformly distributed random variable that is independent of <span class="math-container">$Y$</span>.</p> <p>Prove that <span class="math-container">$$P(X&lt;Y\mid Y)=Y.$$</span></p> <p>Actually that looks kind of trivial, only use the independence of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> and then use the uniform distribution of <span class="math-container">$X$</span>. But I am not able to write down an intermediate step in mathematical formulas, something like <span class="math-container">$$P(X&lt;Y\mid Y)= \dots=Y.$$</span> Any help available?</p>
zoli
203,663
<p>Consider a probability space on which <span class="math-container">$X,Y$</span> are independent random variables, with <span class="math-container">$X$</span> uniformly distributed on <span class="math-container">$[0,1]$</span>. By the definition of conditional expectation one has to prove that</p> <p><span class="math-container">$$\int_A P(X&lt;Y\mid Y)dP= \int_A YdP,$$</span> for any event <span class="math-container">$A$</span>.</p> <p>Because of independence and the uniform distribution:</p> <p><span class="math-container">$$P(X&lt;Y\mid Y=y)=P(X&lt;y)=y.$$</span></p> <p>Also, <span class="math-container">$$P(X&lt;Y\mid Y=y)=0$$</span> if <span class="math-container">$Y\not=y$</span>.</p> <p>Let <span class="math-container">$B_y$</span> be the event <span class="math-container">$\{Y=y\}$</span>.</p> <p>With all this, we have</p> <p><span class="math-container">$$\int_{A} P(X&lt;Y\mid Y)dP=\int_{A\cap B_y}y dP=\int_A YdP.$$</span></p> <p><strong>Edit</strong></p> <p>Way simpler:</p> <p>If for an elementary event <span class="math-container">$\omega$</span> we have <span class="math-container">$Y(\omega)=y$</span>, then because of independence and uniformity</p> <p><span class="math-container">$$P(X&lt;Y\mid Y=y)=P(X&lt;y)=y.$$</span></p> <p>So </p> <p><span class="math-container">$$P(X&lt;Y\mid Y)=Y.$$</span></p>
1,989,532
<p>The equation I am considering is </p> <p>$$x^{3}-63x=162$$</p> <p>Here is how far I got:</p> <p>$$3st=-63$$</p> <p>$$s=\frac{-63}{3t}=-21t$$</p> <p>$$s^{3}t^{3}=-162$$</p> <p>$$(-21t)^{3}-t^{3}=-162$$</p> <p>$$-9261t^{3}-t^{3}=-162$$</p> <p>Where would I go from here?</p>
Mark Fischler
150,362
<p>Cardano's method would be to write $$ x = t - \left( \frac{-63}{3t} \right)= t+\frac{21}{t} $$ which gives $$ t^3 -162 +\frac{9261}{t^3} = 0 $$ and if you multiply through by $t^3$ and write $t^3 = p$, this gives $$p^2-162p+9261 = 0$$ where the solution is $$p=81\pm30 i \sqrt{3}$$ and to recover $t$ one has to take the cube root of $p$. </p> <p>Because $p$ is complex (and the original coefficients were real) this is the infamous "irreducible case" where all three roots are real and although you can easily express the solutions in terms of sums of the cube roots of complex quantities, finding those cube roots would be intractable. (This particular equation, as pointed out, is easily solved without the cubic formula because it has integer roots.)</p> <p>When the cubic has only one real solution, Cardano's formula is much more useful. For example, consider $$ x^3-63x=1620. $$ There, the same steps give $p = 810+3\sqrt{71871}$ and $$t = \sqrt[3]{810+3\sqrt{71871}} \approx 11.7307$$ and $$ x = t +\frac{21}{t} \approx 13.5209 $$ which indeed is the sole real root.</p>
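Both computations above can be replayed numerically (a sketch; Python's built-in complex power gives a principal cube root, which here happens to recover the real root $x=9$ of the irreducible case):

```python
import math

# x^3 - 63x = 1620: t^3 = p with p^2 - 1620 p + 9261 = 0, x = t + 21/t.
p = 810 + 3 * math.sqrt(71871)
t = p ** (1 / 3)
x = t + 21 / t
print(round(t, 4), round(x, 4))  # ~11.7307 and ~13.5209
assert abs(x ** 3 - 63 * x - 1620) < 1e-8

# Irreducible case x^3 - 63x = 162: p = 81 + 30i*sqrt(3) is complex,
# but a complex cube root still produces a real root (here x = 9).
p2 = complex(81, 30 * math.sqrt(3))
t2 = p2 ** (1 / 3)
x2 = (t2 + 21 / t2).real
assert abs(x2 ** 3 - 63 * x2 - 162) < 1e-8
```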
2,090,452
<p>I have a signature given with the real numbers as its universe and addition and multiplication as functions. I need to write the following expression in First Order Logic.</p> <ul> <li>$x$ is a rational number</li> </ul> <p>My idea: $\varphi(x) = \exists a \, \exists b \, x = \frac{a}{b}$.</p> <p>The problem is I don't have division.</p> <p>Extra question: how can I write </p> <ul> <li>$x \geq 0$?</li> </ul>
Stella Biderman
123,230
<p><strong>EDIT:</strong> I just noticed you said first order arithmetic. This answer uses second order arithmetic. It's a theorem that it is impossible to define $\mathbb{Z}$ in $(\mathbb{R},+,\cdot)$.</p> <p>As mentioned in the comments, division is easy: define $a/b$ to be the number $c$ such that $cb=a$. It's identifying integers that is hard. Notably, $\mathbb{Z}$ and $\mathbb{Z}[\pi]$ have pretty much the same arithmetic structure. However, you can identify $0$ as the only number that satisfies $$\varphi(x):=\forall a(ax=x)$$ and then you can identify $1$ as the only number that satisfies $$\varphi'(x)=\forall a(\varphi(a)\lor a=ax)$$</p> <p>Given these two constants, we can then recursively define the integers by using the fact that they are generated by $1$ as a group under addition, à la the Peano Axioms.</p> <p>For your bonus question, again the comments had the right idea.</p>
1,312,783
<p>I have come across the following formula:</p> <p>$$u(n)=\sum_{m=-\infty}^{n}\delta(m)$$</p> <p>where $u(n)$ is the Unit Step and $\delta(m)$ is the Delta Function:</p> <p>What I can't understand is how this formula "works".</p> <p>Expanding the formula we have:</p> <p>$$u(n)=...+\delta(0)+\delta(1)+\delta(2)+...+\delta(n-1)+\delta(n)=\delta(0)=1$$</p> <p>So expanding it, no matter what, gives us the same result which obviously is not a Unit Step, so I can't understand how that formula can produce the Unit Step.</p> <p>I know that I have made a mistake somewhere but I don't know where. Can someone explain to me that formula and my mistake?</p>
Teoc
190,244
<p>$$\frac{d}{dx}\left(\sqrt{x+1}-\sqrt{x}\right)=\frac1{2\sqrt{x+1}}-\frac{1}{2\sqrt x}=\frac{1}{2\sqrt{x+1}}\cdot\frac{2\sqrt x}{2\sqrt x}-\frac{1}{2\sqrt x}\cdot\frac{2\sqrt{x+1}}{2\sqrt{x+1}}$$ Can you take it from there?</p>
284,685
<p>I was wondering if there is an explicit estimate on the probability that the lowest eigenvalue of a $n \times n$ GOE matrix is larger than some number $x \in \mathbb{R}$. I am aware of the fact that there is in principle an explicit formula for that, but if $n$ becomes large, this event is really difficult to compute. </p> <p>Ideally, there should be also an error bound for that.</p> <p>Thank you.</p>
Carlo Beenakker
11,260
<p>See <A HREF="https://arxiv.org/abs/0801.1730" rel="nofollow noreferrer">Extreme Value Statistics of Eigenvalues of Gaussian Random Matrices</A> (2008), in particular the large-$n$ result:</p> <p>$$\text{Prob}(E_{\rm smallest}\geq x)\rightarrow\exp\left[-n^2\Phi\left(\frac{x+\sqrt{2n}}{\sqrt{n}}\right)\right],\;\;-\sqrt{2n}&lt;x&lt;0,$$ $$\Phi(z)=S(-\sqrt{2})-S(-\sqrt{2}-z),$$ $$S(z)= \frac{1}{216}\left[ 72 z^2 -2z^4 +(30 z + 2z^3) \sqrt{6 +z^2}+ 27\left( 3 + \ln 1296 - 4 \ln\left(-z + \sqrt{6 +z^2}\right)\right)\right].$$</p> <p>The probability that all eigenvalues are positive follows from $x\rightarrow 0$, </p> <p>$$\text{Prob}(E_{\rm smallest}\geq 0)\rightarrow3^{-n^2/4}.$$</p> <p>These are all large-$n$ results: the order $n^2$ exponents have finite-$n$ corrections of order $n$.</p>
284,685
<p>I was wondering if there is an explicit estimate on the probability that the lowest eigenvalue of a $n \times n$ GOE matrix is larger than some number $x \in \mathbb{R}$. I am aware of the fact that there is in principle an explicit formula for that, but if $n$ becomes large, this event is really difficult to compute. </p> <p>Ideally, there should be also an error bound for that.</p> <p>Thank you.</p>
Adrien Hardy
15,517
<p>By symmetry you're looking at the probability that the maximal eigenvalue is smaller than some number. Explicit inequalities for such events can be obtained by using the tridiagonal representation for the GOE, see the last 20 slides from Michel Ledoux : <a href="https://www.math.univ-toulouse.fr/~ledoux/Leipzig.pdf" rel="nofollow noreferrer">https://www.math.univ-toulouse.fr/~ledoux/Leipzig.pdf</a></p>
3,697,011
<blockquote> <p>For what value of <span class="math-container">$a$</span> would the following function have exactly one solution? <span class="math-container">$$a^2x^2+3x-5\frac{1}{a}=0$$</span></p> </blockquote> <p>I know that it needs to become <span class="math-container">$$\frac{3}{2}x^2+3x+\frac{3}{2}=0$$</span> but how can one find value of parameter <span class="math-container">$a$</span> for this to happen ? </p>
Vishu
751,311
<p>For exactly one solution, we need <span class="math-container">$D=0$</span> i.e. <span class="math-container">$$3^2 -4\cdot a^2\cdot\left(-\frac 5a\right)=0$$</span></p>
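Spelling it out, $D = 9 + 20a = 0$ gives $a = -\frac{9}{20}$; an exact-arithmetic check (a sketch):

```python
from fractions import Fraction

# D = 9 - 4 * a^2 * (-5/a) = 9 + 20a, so D = 0 forces a = -9/20.
a = Fraction(-9, 20)
D = 9 - 4 * a ** 2 * (-5 / a)
assert D == 0

# The unique (double) root is x = -3 / (2 a^2); check it solves the quadratic.
x = Fraction(-3) / (2 * a ** 2)
assert a ** 2 * x ** 2 + 3 * x - 5 / a == 0
print("a =", a, " double root x =", x)
```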
3,296,638
<p>Are two planes parallel if the magnitude of the cross product of their normal vectors is equal to <span class="math-container">$0$</span>?</p> <p><span class="math-container">$|| \vec n_1 \times \vec n_2|| = 0$</span></p>
José Carlos Santos
446,262
<p>Yes, because <span class="math-container">\begin{align}\left\lVert\vec n_1\times\vec n_2\right\rVert=0&amp;\iff\vec n_1\times\vec n_2=\vec 0\\&amp;\iff\vec n_1\text{ and }\vec n_2\text{ are parallel.}\end{align}</span></p>
1,012,652
<p>I have a problem with the following question.</p> <p>For which $n$ does the following equation have solutions in complex numbers</p> <p>$$|z-(1+i)^n|=z $$</p> <p>Progress so far.</p> <ol> <li><p>Let $z=a+bi$.</p></li> <li><p>Since modulus represents a distance, the imaginary part of RHS has to be 0. This immediately makes $b=0$.</p></li> <li><p>If solutions are in the complex domain $|a-(1+i)^n|=a $ by 2., and $a$ is Real. </p></li> <li><p>?</p></li> </ol> <p>I don't know where to go from here. </p>
inkievoyd
93,500
<p>$|z-(1+i)^n|=z$. LHS $\in \mathbb{R}$ so clearly $z \in \mathbb{R}$. I'll rewrite $z$ as $a$. $(1+i)^n =(\sqrt2)^ne^{\frac{in\pi}{4}}$. </p> <p>Thus, RHS is the square root of $(a-(\sqrt2)^n\cos(\frac{n\pi}{4}))^2+((\sqrt2)^n\sin(\frac{n\pi}{4}))^2$ (it is getting cumbersome so I didn't include the square root).</p> <p>Thus we get $a^2 -2a(\sqrt2)^n\cos(\frac{n\pi}{4})+2^n\cos^2(\frac{n\pi}{4})+2^n\sin^2(\frac{n\pi}{4})=a^2$. After some algebraic manipulation:</p> <p>$a(\sqrt2)^n\cos(\frac{n\pi}{4})=2^{n-1}$ and thus $a=\frac{2^{n/2-1}}{\cos(\frac{n\pi}{4})}$. </p> <p>By the way, $\cos(\frac{n\pi}{4})=1,\frac{1}{\sqrt2},0,\frac{-1}{\sqrt2},-1,\frac{-1}{\sqrt2},0,\frac{1}{\sqrt2}$ for $n=0,...,7$ before repeating. </p> <p>So given an $a$, you can find the zeros of $a-\frac{2^{n/2-1}}{\cos(\frac{n\pi}{4})}$. But based on what we know about $\cos(x)$, not all values of $a$ will work and also some values of $n$ will make the denominator equal to $0$.</p>
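The closed form can be checked directly for the values of $n$ with $\cos(n\pi/4)&gt;0$ (needed so that $a\geq 0$, since $z$ equals a modulus); a Python sketch:

```python
import math

# For n with cos(n*pi/4) > 0, the claimed real solution is
# a = 2**(n/2 - 1) / cos(n*pi/4); verify |a - (1+i)^n| = a directly.
for n in [0, 1, 7, 8, 9]:
    a = 2 ** (n / 2 - 1) / math.cos(n * math.pi / 4)
    assert abs(abs(a - (1 + 1j) ** n) - a) < 1e-9 * max(1.0, a)
print("formula checked for n = 0, 1, 7, 8, 9")
```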
2,029,279
<blockquote> <p>Let <span class="math-container">$T:\Bbb R^2\to \Bbb R^2$</span> be a linear transformation such that <span class="math-container">$T (1,1)=(9,2)$</span> and <span class="math-container">$T(2,-3)=(4,-1)$</span>.</p> <p>A) Determine if the vectors <span class="math-container">$(1,1)$</span> and <span class="math-container">$(2,-3)$</span> form a basis.<br> B) Calculate <span class="math-container">$T(x,y)$</span>.</p> </blockquote> <p>I need help with these, please I'm stuck, don't even know how to start them...</p>
Community
-1
<p>You can't scalar-multiply $(1,1)$ to get $(2, -3)$, so the vectors are linearly independent. Hence these two vectors form a basis for $\mathbb{R^2}$ (the dimension is also right).</p>
4,117,954
<p>Let <span class="math-container">$l^\infty$</span> be the space of all real bounded sequences equipped with supremum norm. Let <span class="math-container">$S$</span> be the shift operator defined on <span class="math-container">$l^\infty$</span> by <span class="math-container">$(Sx)_n=x_{n+1}$</span>, <span class="math-container">$n \in \mathbb{N}$</span> for all <span class="math-container">$x \in l^{\infty}$</span>.</p> <p>I proved that by using Hahn Banach theorem that there exists <span class="math-container">$L \in (l^\infty)^*$</span> such that</p> <p>i) <span class="math-container">$\liminf_{n \rightarrow \infty} x_n \leq L(x) \leq \limsup_{n \rightarrow \infty} x_n$</span></p> <p>ii) <span class="math-container">$L(Sx)=L(x)$</span></p> <p>I also proved that <span class="math-container">$l^\infty \cong (l^1)^*$</span>, how do I show <span class="math-container">$ L \not\in \widehat{(l^1)} $</span>?</p> <p>Can anyone give some hint for the last part?</p> <p><a href="https://i.stack.imgur.com/DkrHn.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DkrHn.jpg" alt="enter image description here" /></a></p>
Kavi Rama Murthy
142,385
<p>If <span class="math-container">$ L \in \widehat{(l^1)} $</span> then there exists <span class="math-container">$(a_n) \in l^{1}$</span> such that <span class="math-container">$\lim \inf x_n \leq L(x)=\sum a_nx_n$</span> for all <span class="math-container">$x=(x_n) \in l^{\infty}$</span>. Taking <span class="math-container">$x$</span> to be the indicator sequence of <span class="math-container">$\{n \geq k\}$</span>, whose <span class="math-container">$\liminf$</span> is <span class="math-container">$1$</span>, we see that <span class="math-container">$1 \leq \sum_{n \geq k} a_n$</span> for every <span class="math-container">$k$</span>. But the tails of a convergent series tend to <span class="math-container">$0$</span>, so <span class="math-container">$(a_n) \notin l^{1}$</span>.</p>
998,633
<p>I'm having trouble to understand exactly <em>how we are using Fubini's theorem</em> in the following proof involving the distribution function, since it newer explicitly involves an integral with product measure.</p> <p>The proof is given to show that we can calculate an integral over some measure space $X $ as an integral over $[0, \infty ]$ $$\int _X (\phi \circ f ) d \mu= \int _0 ^{\infty } \mu \{f &gt; t \} \phi '(t) dt$$</p> <p>Where $(X, \mu ) $ is a $\sigma $-finite measure space, and $\phi :[0, \infty ] \mapsto [0, \infty ]$ is monotonic and absolutely continuous on $[0, T ]$</p> <p>The proof consits of constructing a set $E $ consisting of all points $(x,t) $ where $f&gt; t $. It is easily shown that this set is measurable with respect to the product measure on $X \times [0, \infty ] $. Further the t-section $E^t $ is measurable with respect ot $\mu $.</p> <p>The distribution function of $f $ is $\mu(E^t )= \int _X \chi _E (x,t) d \mu (x) $</p> <p>And the right side of the top equality is equal to $\int _0 ^{\infty } \mu (E^t) \phi '(t) dt= \int _X d \mu \int \chi _E (x,t) \phi '(t) d t $</p> <p>By Fubini's theorem? </p> <hr> <p>Exactly how are we using Fubini's theorem as there is no explicit use of an integral with respect to the product measure. I can see that a part of the conclusion in Fubini's theorem is used to conclude that $\mu (E^t) $ is measurable with respect to the measure on $[0, \infty ]$. Is that all?</p> <p>Thanks in advance!</p>
Oliver Díaz
121,671
<p>The <span class="math-container">$\sigma$</span>-finiteness assumption on <span class="math-container">$(X,\mathscr{F},\mu)$</span> can be relaxed. Here is a slightly more general result:</p> <p><strong>Theorem:</strong> Suppose <span class="math-container">$\nu$</span> is a positive Radon measure on <span class="math-container">$[0,\infty)$</span>, and let <span class="math-container">$f$</span> be a nonnegative measurable function on <span class="math-container">$X$</span>. If <span class="math-container">$\{f&gt;0\}$</span> is <span class="math-container">$\sigma$</span>-finite w.r.t. <span class="math-container">$\mu$</span>, <span class="math-container">$$ \begin{align} \int_X\nu\Big(\big[0,f(x)\big)\Big)\,\mu(dx)=\int^\infty_0\mu(\{f&gt;t\})\,\nu(dt)\tag{1}\label{fubini2} \end{align} $$</span> In particular, if <span class="math-container">$\varphi$</span> is a function on <span class="math-container">$[0,\infty)$</span> with <span class="math-container">$\varphi(0)=0$</span>, that is absolutely continuous in compact subsets of <span class="math-container">$[0,\infty)$</span>, and <span class="math-container">$\nu(dx)=\varphi'(x)\,dx$</span>, then <span class="math-container">$$ \begin{align} \int_X(\varphi\circ f)\,d\mu=\int^\infty_0\mu(\{f&gt;t\})\varphi'(t)dt.\tag{2}\label{fubini3} \end{align} $$</span></p> <p>Here is a short proof:</p> <p>Since <span class="math-container">$f\in\mathscr{M}^+(\mu)$</span> and <span class="math-container">$\{f&gt;0\}$</span> is <span class="math-container">$\mu$</span> <span class="math-container">$\sigma$</span>-finite, the set <span class="math-container">$E=\{(x,t)\in X\times[0,\infty): f(x)&gt;t\}\in\mathscr{M}(\nu\otimes\mu)$</span> is <span class="math-container">$\sigma$</span>-finite.
By Fubini's theorem <span class="math-container">$$ \begin{align} \int^\infty_0\mu(E^t)\,\nu(dt)= \int_{X\times[0,\infty)}\mathbb{1}_E(x,t)\, \mu\otimes\nu(dx, dt) =\int_X\nu(E_x)\,\mu(dx) \end{align} $$</span> and <span class="math-container">$\eqref{fubini2}$</span> follows.</p> <p>In the special case where <span class="math-container">$\varphi$</span> is a function that is absolutely continuous function in compact subintervals of <span class="math-container">$[0,\infty)$</span> and <span class="math-container">$\varphi(0)=0$</span>, the fundamental theorem of Calculus implies that <span class="math-container">$\nu([0,f(x)))=\int^{f(x)}_0\varphi'(t)\,dt=\varphi(f(x))$</span>, and <span class="math-container">$\eqref{fubini3}$</span> follows.</p>
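A numerical sanity check of the second display (with $X=[0,1]$, Lebesgue measure, $f(x)=x$ and $\varphi(t)=t^2$, so both sides should equal $1/3$); a Python sketch:

```python
# Check  integral of phi(f) dmu  ==  integral of mu{f > t} * phi'(t) dt
# on X = [0, 1] with Lebesgue measure, f(x) = x, phi(t) = t^2.
# Here mu{f > t} = 1 - t on [0, 1], and both sides equal 1/3.
N = 200000
h = 1.0 / N

lhs = sum((i * h) ** 2 for i in range(N)) * h               # Riemann sum of f^2
rhs = sum((1 - i * h) * 2 * (i * h) for i in range(N)) * h  # of (1-t) * 2t

assert abs(lhs - 1 / 3) < 1e-3
assert abs(rhs - 1 / 3) < 1e-3
print(lhs, rhs)  # both approximately 1/3
```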
1,309,111
<p>I am considering a collection of function of the type, $ f:[0,2\pi]\rightarrow \mathbb{R^2}$. I want to define the $L^2$ norm of the function in that space.</p> <p>I am defining the a norm of $f(=(f_1,f_2)')$ as $\rho(f)=\int_0^{2\pi}(f^2_1(x)+f^2_2(x))^{\frac{1}{2}}dx$. </p> <p>Could anyone please suggest me how to show whether the above norm is an $L^2$-norm or not? Or any suggestion about any $L^2$ norm in such a space. </p> <p>Thanks in advance.</p>
Lucia
244,743
<p>Well, I'm not sure if we could define it, but let's try to check that $L^p$ is a vector space and $||.||_p$ is a norm for any $1\leq p \leq\infty$. The cases $p=1$ and $p=\infty$ are clear, so let's check $1&lt; p &lt;\infty$, and let $f,g \in L^p$.</p> <p>We have $$|f(x)+g(x)|^p \leq (|f(x)|+|g(x)|)^p \leq 2^{p}(|f(x)|^p + |g(x)|^p),$$ consequently $f+g \in L^p$. Moreover</p> <p>$||f+g||_p^{p}=\int |f+g|^{p-1}|f+g| \leq \int |f+g|^{p-1}|f|+\int |f+g|^{p-1}|g| $</p> <p>But $|f+g|^{p-1} \in L^{p'}$, and by Hölder's inequality we obtain $$||f+g||_{p}^p \leq ||f+g||_{p}^{p-1}(||f||_p+||g||_p),$$ in other words $||f+g||_{p}\leq ||f||_p+||g||_p$.</p>
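The resulting triangle inequality can be spot-checked numerically on random finite sequences (a Python sketch, treating vectors as simple functions):

```python
import random

# Spot-check Minkowski's inequality ||f+g||_p <= ||f||_p + ||g||_p.
random.seed(0)

def norm_p(v, p):
    return sum(abs(x) ** p for x in v) ** (1 / p)

for _ in range(100):
    f = [random.uniform(-1, 1) for _ in range(20)]
    g = [random.uniform(-1, 1) for _ in range(20)]
    s = [a + b for a, b in zip(f, g)]
    for p in (1, 1.5, 2, 3, 10):
        assert norm_p(s, p) <= norm_p(f, p) + norm_p(g, p) + 1e-12
print("Minkowski holds on all samples")
```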
546,083
<p>Thanks to internet, I found and understand how to solve diophantine $x^2 - Dy^2 = 1$. Now I would like to solve the following diophantine equation : $$x^2 - 2y^2 = x - 2y$$ but I don't know how to do it, even if I could read articles explaining how to solve the diophantine $x^2 - Dy^2 = c$.</p>
newzad
76,526
<p>$i=e^{i(\pi/2+2k\pi)}$ therefore $i^i=(e^{i(\pi/2+2k\pi)})^i=e^{-\pi/2-2k\pi}$ </p> <p>for $k=0$ you get $0.207879576350761908...$</p> <p>for $k=1$ you get $0.000388203203926766...$</p> <p>....</p> <p>Or </p> <p>$i=1\times i$, then $i^i=1^i\times i^i$: as tomasz commented, this post may be helpful: <a href="https://math.stackexchange.com/questions/3668/what-is-the-value-of-1i">What is the value of $1^i$?</a></p>
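The principal value (the $k=0$ branch) is what Python's complex power returns, and the other branches match the listed decimals (a quick sketch):

```python
import cmath, math

# Principal value: i^i = exp(-pi/2); further branches exp(-pi/2 - 2*k*pi).
principal = (1j) ** (1j)
assert abs(principal - cmath.exp(-math.pi / 2)) < 1e-12
assert abs(principal.real - 0.207879576350761908) < 1e-9
assert abs(principal.imag) < 1e-12

k1 = math.exp(-math.pi / 2 - 2 * math.pi)  # the k = 1 branch
assert abs(k1 - 0.000388203203926766) < 1e-9
print(principal.real, k1)
```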
1,139,885
<p>If $a$ and $b$ are in a group $G$ and $ab=ba$, show that $xax^{-1}$ commutes with $xbx^{-1}$ for any $x \in G$.</p> <p>So I wrote: </p> <p>WWTS: $\bf{xax^{-1} \times xbx^{-1}=xbx^{-1}\times xax^{-1} }$</p> <p>Now, the problem I have is I don't know where to start. Let's say if I start with what is given:</p> <p>ab=ba, then am I allow to multiply each side by x and x$^{-1}$ and use the associative law since this is a group. So for example:</p> <p>ab=ba </p> <p>$xabx^{-1}=xbax^{-1}$ and then by associative i can change:</p> <p>$xax^{-1} b=xbx^{-1} a$ and multiply by x and x^-1 on the right</p> <p>$xax^{-1} \times bxx^{-1}=xbx^{-1} \times axx^{-1}$</p> <p>and use associative again</p> <p>$xax^{-1} \times xbx^{-1}=xbx^{-1}\times xax^{-1}$</p> <p>Any ideas?</p>
AlexR
86,940
<p>There's not much to it and you basically found it already:</p> <p>$$\begin{align*} (x a x^{-1}) (x b x^{-1}) &amp; = x a \underbrace{(x^{-1} x)}_{=e}bx^{-1} \\ &amp; = x (ab) x^{-1} \\ &amp; = x(ba)x^{-1} \\ &amp; = xb(x^{-1} x)a x^{-1}\\ &amp; = (xbx^{-1})(xax^{-1}) \end{align*}$$</p> <p>Note that the group operation is associative, we use this a lot here.</p>
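The identity can be illustrated concretely with $2\times 2$ integer matrices (a Python sketch; $a$ and $b$ are chosen upper unitriangular so that $ab=ba$, and $x$ has determinant $1$):

```python
# a and b commute, and so do their conjugates x a x^{-1}, x b x^{-1}.
def mul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a = [[1, 1], [0, 1]]
b = [[1, 2], [0, 1]]       # upper unitriangular, so ab = ba
x = [[2, 1], [1, 1]]       # det = 1
xinv = [[1, -1], [-1, 2]]  # its inverse

assert mul(a, b) == mul(b, a)
ca = mul(mul(x, a), xinv)  # x a x^{-1}
cb = mul(mul(x, b), xinv)  # x b x^{-1}
assert mul(ca, cb) == mul(cb, ca)
print("conjugates commute")
```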
1,879,509
<p>Please forgive the crudeness of this diagram.</p> <p><a href="https://i.stack.imgur.com/AoS2Q.png" rel="noreferrer"><img src="https://i.stack.imgur.com/AoS2Q.png" alt="enter image description here"></a></p> <p>(I took an image from some psychobabble website and tried to delete the larger circle that's not relevant to my question).</p> <p>Let's say these are four unit circles joined together such that each circle shares some area with two other circles.</p> <p>Obviously the total area not shared with other circles is four times the area of this (again please forgive the crude diagram)</p> <p><a href="https://i.stack.imgur.com/O2zhZ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/O2zhZ.png" alt="enter image description here"></a></p> <p>Or I could calculate the total are of a single "petal" and multiply that by $4$. But I have truly forgotten all the calculus and trigonometry I was taught more than half a century ago.</p> <p>Am I on the right track? Is there a better way than either of the two ideas I've had so far?</p> <p>P.S. Not sure if osculating circle tag applies.</p>
Doug M
317,162
<p>Draw two lines from the center of a circle: one to the center of the diagram, the other to a tip of the petal.</p> <p>This sector has an angle of 90 degrees. The area of the sector is $\frac 14$ the area of the circle, or $\frac \pi4 r^2$. Now join the endpoints, creating an isosceles right triangle. The area of the triangle is $\frac 12 r^2.$</p> <p>$\frac 12$ petal = $(\frac \pi4 - \frac 12) r^2$</p> <p>There are 8 half petals.</p> <p>The total area then is $4 \pi r^2 - 8(\frac \pi4 - \frac 12) r^2 = (2\pi + 4) r^2$</p>
1,879,509
<p>Please forgive the crudeness of this diagram.</p> <p><a href="https://i.stack.imgur.com/AoS2Q.png" rel="noreferrer"><img src="https://i.stack.imgur.com/AoS2Q.png" alt="enter image description here"></a></p> <p>(I took an image from some psychobabble website and tried to delete the larger circle that's not relevant to my question).</p> <p>Let's say these are four unit circles joined together such that each circle shares some area with two other circles.</p> <p>Obviously the total area not shared with other circles is four times the area of this (again please forgive the crude diagram)</p> <p><a href="https://i.stack.imgur.com/O2zhZ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/O2zhZ.png" alt="enter image description here"></a></p> <p>Or I could calculate the total are of a single "petal" and multiply that by $4$. But I have truly forgotten all the calculus and trigonometry I was taught more than half a century ago.</p> <p>Am I on the right track? Is there a better way than either of the two ideas I've had so far?</p> <p>P.S. Not sure if osculating circle tag applies.</p>
Marcus Andrews
97,648
<p>Another approach:</p> <p><a href="https://i.stack.imgur.com/cpNbg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cpNbg.png" alt="enter image description here"></a> </p> <p>Consider the square enclosed in red. It has side length $r$, and therefore area $r^2$. Notice that a quarter portion of two different circles overlap in this region to create a single petal, which has area $p$.</p> <p>So we have $r^2 = 2\left(\frac{1}{4}\pi r^2\right) - p$, which rearranges to $p = (\frac{\pi}{2}-1) r^2$</p> <p>Then the unshared area belonging to a single circle has area $\pi r^2 - 2p = 2r^2$</p> <p>The area of the entire diagram is $4\pi r^2 - 4p = (2\pi+4) r^2$</p>
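Both answers can be cross-checked against the standard circle-circle intersection (lens) formula, assuming, as the figure suggests, that adjacent centers lie a distance $r\sqrt{2}$ apart (each at distance $r$ from the middle of the figure, at right angles); a Python sketch:

```python
import math

# Lens area for two circles of equal radius r with centers distance d apart:
# 2 r^2 acos(d / (2r)) - (d / 2) * sqrt(4 r^2 - d^2).
def lens_area(r, d):
    return 2 * r * r * math.acos(d / (2 * r)) - (d / 2) * math.sqrt(4 * r * r - d * d)

r = 1.0
petal = lens_area(r, r * math.sqrt(2))   # adjacent centers: d = r*sqrt(2)
assert abs(petal - (math.pi / 2 - 1) * r * r) < 1e-12

total = 4 * math.pi * r * r - 4 * petal  # area of the whole figure
assert abs(total - (2 * math.pi + 4) * r * r) < 1e-12
print(petal, total)
```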
710,374
<p>Let $A$ be an $m\times n$ matrix. If the rank of $A$ is $m$, then prove there exists a matrix $B$, wich is $n \times m$, such that $AB=\text{I}_m$</p>
mookid
131,738
<p><strong>Hint:</strong></p> <p>try to prove that there are matrices $P,Q$ such as $ A = P \bigl(\begin{smallmatrix} I_m &amp; 0 \end{smallmatrix} \bigr)Q^{-1} $.</p>
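One explicit choice (different from the factorization in the hint) is the right inverse $B = A^{T}(AA^{T})^{-1}$, which exists because $\operatorname{rank} A = m$ makes $AA^{T}$ invertible; a small exact-arithmetic sketch:

```python
from fractions import Fraction

# Right inverse B = A^T (A A^T)^{-1} for a full-row-rank 2x3 example.
A = [[Fraction(1), Fraction(2), Fraction(0)],
     [Fraction(0), Fraction(1), Fraction(1)]]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def transpose(M):
    return [list(row) for row in zip(*M)]

At = transpose(A)
G = matmul(A, At)  # 2x2 Gram matrix A A^T, invertible since rank A = 2
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
Ginv = [[G[1][1] / det, -G[0][1] / det],
        [-G[1][0] / det, G[0][0] / det]]
B = matmul(At, Ginv)

assert matmul(A, B) == [[1, 0], [0, 1]]
print("A B = I_2 with B =", B)
```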
3,588,068
<p>I have the following system of 3 equations and 3 unknowns: <span class="math-container">$$c_{0} = \frac{x_0}{x_0 + x_1},\ \ c_{1} = \frac{x_1}{x_1 + x_2},\ \ \ c_{2} = \frac{x_2}{x_2 + x_0},$$</span> where all <span class="math-container">$c_i\!\in\!(0,1)$</span> are known and all <span class="math-container">$x_i &gt; 0$</span> are unknown. Am I right in that the solution of this system is the nullspace of the following matrix? <span class="math-container">$$\mathbf{A}=\left[\begin{matrix}(c_0-1)&amp; c_0 &amp; 0 \\ 0 &amp; (c_1-1) &amp; c_1 \\ c_2 &amp; 0 &amp; (c_2-1) \end{matrix}\right].$$</span> If so, I want to find the non-trivial solution, i.e. the basis for <span class="math-container">$null(\mathbf{A})$</span>.</p> <p>p.s. I have attempted to simplify <span class="math-container">$\mathbf{A}$</span> to its reduced row echelon form <span class="math-container">$rref(\mathbf{A})$</span>. I know that <span class="math-container">$null(\mathbf{A}) = null(rref(\mathbf{A}))$</span>, but I get a diagonal matrix for <span class="math-container">$rref(\mathbf{A})$</span>. So does this mean that <span class="math-container">$null(\mathbf{A}) = \mathbf{0}$</span>, and therefore, there are no solutions to the system?</p>
Community
-1
<p>If <span class="math-container">$(x_0,x_1,x_2)$</span> is a solution, so is <span class="math-container">$\lambda(x_0,x_1,x_2)$</span>. So we may choose <span class="math-container">$x_0$</span> arbitrarily and obtain</p> <p><span class="math-container">$$x_1=\frac{1-c_0}{c_0}x_0,\\x_2=\frac{c_2}{1-c_2}x_0.$$</span></p>
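A small exact check (the $c_i$ below are generated from a known positive solution, so the three equations are consistent; for arbitrary $c_i$ they need not be):

```python
from fractions import Fraction as F

# Generate consistent c_i from x = (1, 2, 3), then recover the solution ray.
x0_, x1_, x2_ = F(1), F(2), F(3)
c0 = x0_ / (x0_ + x1_)   # 1/3
c1 = x1_ / (x1_ + x2_)   # 2/5
c2 = x2_ / (x2_ + x0_)   # 3/4

x0 = F(5)                    # arbitrary positive choice
x1 = (1 - c0) / c0 * x0
x2 = c2 / (1 - c2) * x0
assert (x1, x2) == (10, 15)  # the ray through (1, 2, 3), scaled by 5
assert x1 / (x1 + x2) == c1  # the middle equation holds automatically
print(x0, x1, x2)
```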
200,063
<p>I am looking to evaluate</p> <p>$$\int_0^1 x \sinh (x) \ \mathrm{dx}$$</p>
filmor
7,247
<p>You can also make use of the fact, that you can rewrite the integral:</p> <p>$$\int_0^1 xf(x)\,\mathrm dx = \int_0^1 f(x) \int_0^x 1 \,\mathrm dy\,\mathrm dx.$$</p> <p>This is just the integral over the right-angled triangle and using Fubini's theorem you arrive at $$\int_0^1 1 \int_y^1 f(x) \,\mathrm dx \,\mathrm dy.$$</p> <p>Inserting $f(x) = \sinh(x)$ you get $$\int_0^1 x\sinh(x)\,\mathrm dx = \int_0^1 \int_y^1 \sinh(x)\,\mathrm dx\,\mathrm dy = \int_0^1 \cosh(1) - \cosh(y)\,\mathrm dy = \cosh(1) - \sinh(1).$$</p>
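A quick midpoint-rule check of the final value $\cosh(1)-\sinh(1)=e^{-1}$ (not part of the original answer):

```python
import math

# Midpoint rule for the integral of x*sinh(x) over [0, 1].
N = 100000
h = 1.0 / N
approx = sum((i + 0.5) * h * math.sinh((i + 0.5) * h) for i in range(N)) * h

exact = math.cosh(1) - math.sinh(1)
assert abs(exact - math.exp(-1)) < 1e-12  # cosh(1) - sinh(1) = 1/e
assert abs(approx - exact) < 1e-8
print(approx, exact)  # approximately 0.367879
```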
216,055
<p>Does anyone have a suggestion for the best computer program to perform calculations in the 2nd Weyl algebra? </p>
Carlo Beenakker
11,260
<p>Weyl algebra computations are implemented in <A HREF="http://www.math.uiuc.edu/Macaulay2/doc/Macaulay2-1.8.2/share/doc/Macaulay2/Dmodules/html/" rel="noreferrer">Macaulay 2</A> (package D-modules). Here is a <A HREF="http://www.math.kobe-u.ac.jp/~taka/2007/knx/leykin-2002-m2.pdf" rel="noreferrer">manual.</A></p>
2,186,560
<p>I was hoping someone could help me with the following.</p> <p>I had to solve the following ODE and get the implicit form. </p> <p>$$\frac{dy}{dx}=\frac{1+3y}{y} \cdot x^2$$</p> <p>Separating and integrating gives $\frac19(3y-\ln(3y+1))=\frac13 x^3 +c$, which can also be written as ($c$ is an arbitrary constant): $$3y-\ln(3y+1)=3x^3+c $$</p> <p>I don't know exactly how to solve this; I saw something about the Lambert W function, but that wasn't entirely clear to me. My start was rewriting the equation as: </p> <p>$3y+1-\ln(3y+1)=3x^3+c$<br> $u-\ln(u)=3x^3+c$<br> $\ln(e^u)-\ln(u)=3x^3+c$ <strong>or</strong> $\ln(e^{-u})+\ln(u)=-3x^3+c$<br> $\ln(\frac{1}{u}\cdot e^u)=3x^3+c$ <strong>or</strong> $\ln(u\cdot e^{-u})=-3x^3+c$<br> $\frac{1}{u}\cdot e^u= e^{(3x^3+c)}$ <strong>or</strong> $u\cdot e^{-u}=e^{-3x^3+c}$<br> but that's as far as I can get, since I don't know how to apply the Lambert function there, nor any other way to solve it.</p> <p>Any help with this would be greatly appreciated.</p>
Jan Eerland
226,665
<p>Well, we have that:</p> <p>$$\text{y}'\left(x\right)=\frac{1+\text{n}\cdot\text{y}\left(x\right)}{\text{y}\left(x\right)}\cdot x^2\space\Longleftrightarrow\space\int\frac{\text{y}\left(x\right)\cdot \text{y}'\left(x\right)}{1+\text{n}\cdot\text{y}\left(x\right)}\space\text{d}x=\int x^2\space\text{d}x\tag1$$</p> <p>For a constant $\text{n}$.</p> <p>Now, we use:</p> <ul> <li>Substitute $\text{u}=\text{y}\left(x\right)$: $$\int\frac{\text{y}\left(x\right)\cdot \text{y}'\left(x\right)}{1+\text{n}\cdot\text{y}\left(x\right)}\space\text{d}x=\int\frac{\text{u}}{1+\text{n}\cdot\text{u}}\space\text{d}\text{u}=\frac{\text{n}\cdot\text{u}-\ln\left|1+\text{n}\cdot\text{u}\right|}{\text{n}^2}+\text{C}_1\tag2$$</li> <li>$$\int x^2\space\text{d}x=\frac{x^3}{3}+\text{C}_2\tag3$$</li> </ul> <p>So, we get:</p> <p>$$\frac{\text{n}\cdot\text{y}\left(x\right)-\ln\left|1+\text{n}\cdot\text{y}\left(x\right)\right|}{\text{n}^2}=\frac{x^3}{3}+\text{C}\tag4$$</p> <p>Simplifying it a bit:</p> <p>$$\text{n}\cdot\text{y}\left(x\right)-\ln\left|1+\text{n}\cdot\text{y}\left(x\right)\right|=\text{n}^2\cdot\frac{x^3}{3}+\text{C}\tag5$$</p> <p>Assuming that:</p> <p>$$\left(1+\text{n}\cdot\text{y}\left(x\right)\right)\in\mathbb{R}^+\tag6$$</p> <p>We can write:</p> <p>$$\text{n}\cdot\text{y}\left(x\right)-\ln\left(1+\text{n}\cdot\text{y}\left(x\right)\right)=\text{n}^2\cdot\frac{x^3}{3}+\text{C}\tag7$$</p> <p>And now we can use a general property:</p> <p>$$\text{a}\cdot\text{p}-\ln\left(\text{b}+\text{c}\cdot\text{p}\right)=\text{d}\space\Longleftrightarrow\space\text{p}=-\frac{\text{a}\cdot\text{b}+\text{c}\cdot\mathcal{W}\left(-\frac{\text{a}\cdot\exp\left(-\frac{\text{a}\cdot\text{b}}{\text{c}}-\text{d}\right)}{\text{c}}\right)}{\text{a}\cdot\text{c}}\tag8$$</p>
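Independently of the Lambert-$\mathcal{W}$ inversion, the separated-variables relation $(7)$ can be checked numerically: it should be constant along any solution of the ODE. A Python sketch with a classical RK4 integrator, using $\text{n}=3$ as in the original question:

```python
import math

# y' = (1 + n*y)/y * x^2; the quantity n*y - ln|1 + n*y| - n^2 x^3/3
# should be conserved along solutions.
n = 3.0

def invariant(x, y):
    return n * y - math.log(abs(1 + n * y)) - n * n * x ** 3 / 3

def f(x, y):
    return (1 + n * y) / y * x * x

x, y, h = 0.0, 1.0, 1e-4
c0 = invariant(x, y)
for _ in range(10000):  # one classical RK4 sweep from x = 0 to x = 1
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h
assert abs(invariant(x, y) - c0) < 1e-8
print(c0, invariant(x, y))
```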
14,858
<p>Let $V$ be a complex vector space of dimension 6 and let $G\subset {\mathbb P}^{14}\simeq {\mathbb P}(\Lambda^2V)$ be the image of the Plucker embedding of the Grassmannian $Gr(2, V)$.</p> <ol> <li>Why the degree of $G$ is 14? or in general, how to calculate the degree of a Plucker embedding?</li> </ol> <p>Let ${\mathbb P}^8\simeq L\subset {\mathbb P}(\Lambda^2V)$ be a generic 8-plane and $S$ be the intersection of $L$ with $G$.</p> <ol> <li>How to prove that $S$ is a K3 surface?</li> </ol> <p>Another question: the paper said that this construction depends on 19 parameters. I know that this is the dimension of the deformation family of the polarized K3 we get here. But I think that in this statement 19 is coming from varying the generic 8-plane. How can we obtain this number?</p>
Dmitri Panov
943
<p>To be able to calculate the degree it is worth reading a bit of Griffiths-Harris about Grassmannians (chapter 1 section 5). To prove that $S$ is $K3$ one needs to calculate the canonical bundle of $G$, use simple facts about the Plucker embedding, use the adjunction formula, and finally the fact that a simply connected surface with $K\cong O$ is $K3$. </p> <p>I will make the second bit of calculation, which proves that $S$ is a $K3$ (so I don't calculate the degree $14$).</p> <p>First we want to calculate the canonical bundle of $G$. Denote by $E$ the trivial $6$-dimensional bundle over $G$, and by $S$ the universal (tautological) rank $2$ sub-bundle. Then the tangent bundle to $G$ is $TG=S^* \otimes (E/S)$ . It follows from the properties of $c_1$ that $c_1(TG)=6c_1(S^*) $. Similarly for the canonical bundle $K_G$ we have the expression $K_G\cong (detS^*)^{\otimes -6}$.</p> <p>Now we will use the (simple) statement from Griffiths-Harris that under the Plucker embedding we have the isomorphism of the line bundles $det S^*=O(1)$. Using the previous calculation we see $K_G\cong O(-6)$. Finally, the surface $S$ is an iterated (6 times) hyperplane section of $G$. So by the Lefschetz theorem it has the same fundamental group as $G$, i.e., it is simply-connected. It suffices now to see that its canonical bundle is trivial. This is done using the adjunction formula $K_D=K_X+D|_D$. Every time we cut $G$ by a hyper-plane we tensor the canonical by $O(1)$, but $O(-6)\otimes O(6)\cong O$.</p> <p>Added. The calculation of the degree is done by Andrea.</p> <p>Added. The number 19 is obtained in the following way. The dimension of the Grassmannian of $8$-planes in $CP^{14}$ is $6\cdot 9=54$. At the same time the Grassmannian of $2$-planes in $\mathbb C^6$ has symmetries, given by $SL(6,\mathbb C)$, whose dimension is 35. We should quotient by these symmetries and get $54-35=19$.
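Concerning the first question, the degree of the Plucker embedding can be evaluated from the classical Schubert-calculus formula $\deg Gr(k,n)=(k(n-k))!\,\prod_{i=0}^{k-1} i!/(n-k+i)!$ (quoted here without proof); for $k=2$ this reproduces the Catalan numbers, and for $Gr(2,6)$ it gives the $14$ in the question:

```python
from math import factorial

# deg Gr(k, n) = (k(n-k))! * prod_{i=0}^{k-1} i! / (n-k+i)!
def deg_grassmannian(k, n):
    d = factorial(k * (n - k))
    for i in range(k):
        d = d * factorial(i) // factorial(n - k + i)
    return d

assert deg_grassmannian(2, 6) == 14
# For k = 2 these are the Catalan numbers C_{n-2}: 2, 5, 14, 42, 132, ...
assert [deg_grassmannian(2, n) for n in range(4, 9)] == [2, 5, 14, 42, 132]
print(deg_grassmannian(2, 6))
```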
3,949,525
<p>This question is from Ponnusamy and Silverman's complex analysis book; I am making some mistake in it and am unable to find it.</p> <blockquote> <p>Show that <span class="math-container">$\prod_{n=1}^{\infty} (1-z/n) e^{z/n +5z^2/n^2}$</span> is entire.</p> </blockquote> <p>Write <span class="math-container">$1+ f_n (z)= (1-z/n) e^{z/n +5z^2/n^2}$</span> and look for an upper bound on <span class="math-container">$f_n (z)$</span>. Then <span class="math-container">$1+ f_n (z) = \exp( \operatorname{Log}(1-z/n) + z/n+ 5 z^2 /n^2)$</span>. For <span class="math-container">$|z|\leq R$</span>, choose <span class="math-container">$N$</span> large enough that <span class="math-container">$|z/n|&lt;1$</span> for all <span class="math-container">$n \geq N$</span>, and use the identity <span class="math-container">$\operatorname{Log}(1-z/n) = -\sum_{k=1}^{\infty} \frac{1}{k} (z/n)^k$</span> together with the fact that <span class="math-container">$|R/n|&lt;1$</span>.</p> <p>So <span class="math-container">$\left| -\sum_{k=1}^{\infty} \frac{1}{k} (z/n)^k +z/n +5 z^2/n^2 \right| \leq -5 -\frac{1}{2} R^2/n^2 $</span>, where I used the formula for an infinite geometric series.</p> <p>Now <span class="math-container">$|f_n(z)| \leq \left| e^{-5 -\frac{1}{2} R^2 /n^2}-1 \right|$</span>, and using <span class="math-container">$e^x -1 \leq x e^x$</span> for all <span class="math-container">$x \geq 0 $</span> I get <span class="math-container">$|f_n(z)|\leq 5+ \frac{1}{2} R^2/ n^2 $</span>; but if I sum <span class="math-container">$|f_n (z)| $</span> from <span class="math-container">$N$</span> to <span class="math-container">$\infty$</span> I get infinity, since the constant 5 is there.</p> <p>So I think I am making some mistake while approximating, and I am unable to find it. Can you please point it out?</p> <p>If there is some other method of proving the product is entire, that is also welcome.</p> <p>Thanks!</p>
DanielWainfleet
254,665
<p>Hint.</p> <p>Theorem: Suppose <span class="math-container">$C$</span> is an open subset of <span class="math-container">$\Bbb C$</span> and <span class="math-container">$(f_n)_n$</span> is a sequence of complex functions that are analytic on <span class="math-container">$C,$</span> such that <span class="math-container">$f_n$</span> converges uniformly on <span class="math-container">$D$</span> to <span class="math-container">$f$</span> whenever <span class="math-container">$D$</span> is a compact subset of <span class="math-container">$C$</span>. Then <span class="math-container">$f$</span> is analytic on <span class="math-container">$C.$</span></p>
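<p>As a quick sanity check (an editorial addition, not part of the hint): at a fixed sample point the partial products stabilize, which is consistent with the locally uniform convergence the theorem requires. The point $z=0.5$ and the two cutoffs are arbitrary choices.</p>

```python
import math

def partial_product(z, n_max):
    """Partial product of (1 - z/n) * exp(z/n + 5*z^2/n^2) for n = 1..n_max."""
    p = 1.0
    for n in range(1, n_max + 1):
        p *= (1.0 - z / n) * math.exp(z / n + 5.0 * z * z / (n * n))
    return p

# The log of the n-th factor behaves like (9/2) * z^2 / n^2, which is
# summable, so the partial products should settle down as n_max grows.
p1 = partial_product(0.5, 5_000)
p2 = partial_product(0.5, 50_000)
print(p1, p2)  # nearly equal
```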
2,794,228
<p>If we define a sequence $a_n$, where $n\in\mathbb{N}$, by $$a_n=\frac{1}{({2n\pi})^{\frac{1}{3}}},$$ then how can one show the inequality $$a_n-a_{n+1}\leq Cn^{-\frac{4}{3}}?$$</p> <p>I have tried, and got to the point where $$a_n-a_{n+1}\leq\left(\frac{2\pi}{(2n\pi)(2(n+1)\pi)}\right)^{1/3}$$</p>
zhw.
228,045
<p>We can factor out the $(2\pi)^{-1/3}$ in each term. We're left with $n^{-1/3}-(n+1)^{-1/3}.$ By the MVT, this equals</p> <p>$$(-1/3)c^{-4/3}\cdot (-1), \, \text { for some }c\in (n,n+1).$$</p> <p>This is less than $(1/3)n^{-4/3}$ and we're done.</p>
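<p>A quick numeric check of the resulting bound (an illustration; the constant $C=\frac{1}{3}(2\pi)^{-1/3}$ comes from the MVT argument above, and the cutoff $10^4$ is arbitrary):</p>

```python
import math

C = (2.0 * math.pi) ** (-1.0 / 3.0) / 3.0  # constant from the MVT argument

def a(n):
    return (2.0 * n * math.pi) ** (-1.0 / 3.0)

# Check a_n - a_{n+1} <= C * n^(-4/3) over a range of n; the smallest
# slack should still be non-negative.
worst_slack = min(C * n ** (-4.0 / 3.0) - (a(n) - a(n + 1))
                  for n in range(1, 10_001))
print(worst_slack)  # non-negative: the bound holds on this range
```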
691,927
<p>Now before I begin, I know this question has been asked multiple times on this site, but I had so many questions of my own about the existing answers that I figured I should make a new question, as my thoughts are different from previous answers. </p> <p>Now I will ask the question first and then explain my thoughts and troubles :)</p> <p>We have a uniform distribution with the following PDF: $\frac{1}{b-a}$. So far so good. So for $n$ observations our likelihood function is $\mathcal{L}(a,b)=\frac{1}{(b-a)^n}$, provided the observations are independent and identically distributed (i.i.d.). Now, after taking the log of the likelihood and taking the derivative once with respect to $b$ and once with respect to $a$, we have the following:</p> <p>The derivative with respect to $a$ is: $$\frac{n}{b-a}$$ and the derivative with respect to $b$ is: $$\frac{-n}{b-a}$$</p> <p>Now if we try to set either of these derivatives to zero and maximize the function, it will not yield anything useful. My problem arises here. Let us just focus on estimating $b$. I have read on this website as well as in other places that to maximize $\frac{-n}{b-a}$ we have to take the maximum observation. How does that make sense? I mean, should we not use the lowest observation to maximize this function? Because in this particular case $n$ and $a$ are constants, so we can easily just plug some numbers in and see that as $b$ gets bigger, the function gets smaller, so why would we want the maximum observation? Help would be greatly appreciated, and please, I am not mathematically or statistically inclined, so please be gentle! Thanks :)</p>
Jimmy R.
128,037
<p>Mathematically you can explain it as follows. You want to take $b$ as small as possible to maximize your likelihood. OK, but can you take $b$ smaller than the largest value you observed in the sample? No! That would contradict the fact that your sample comes from the (unknown) interval $[a,b]$. Could it be bigger? Yes, but you want it as small as possible. </p> <p>So the maximization with respect to $b$ is under the constraint that $b$ should be as small as possible but at least equal to the largest value of the sample. So the smallest possible value for $b$ is the maximum of your sample, and you have it.</p>
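<p>A small simulation may make this concrete (illustrative only; the endpoints $2$ and $7$, the seed, and the sample size are arbitrary choices): the sample maximum sits just below the true $b$, and no smaller value of $b$ is admissible.</p>

```python
import random

random.seed(0)
a_true, b_true = 2.0, 7.0
sample = [random.uniform(a_true, b_true) for _ in range(10_000)]

# The likelihood (b - a)^(-n) increases as b decreases, but b cannot be
# smaller than any observation, so the MLE of b is the sample maximum
# (and, symmetrically, the MLE of a is the sample minimum).
b_hat = max(sample)
a_hat = min(sample)
print(a_hat, b_hat)  # close to 2.0 and 7.0
```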
1,416,661
<p>Consider a "spinner": an object like an unmagnetized compass needle that pivots freely around an axis and is stable pointing in any direction. You give it a spin and see where it comes to rest, measuring the resulting angle (divided by 2π) as a number from 0 to 1.</p> <p>I am a bit confused. When I look at the PDF for this distribution, once each outcome is divided by 2π, the probability of each outcome turns out to be 1; I mean, when we draw the PDF we get a horizontal straight line at 1. However, the chance of getting a value within the range 0 to 360 (2π) is 1/360, so when I plot the PDF for 0 to 360 it is a straight line at 0.0028, whereas when I divide the same by 360 it is a straight line at 1. I know the area under the curve of a PDF is 1, but just because I divided by 360, why does the chance of occurrence (or probability) for each outcome increase to 1 rather than 0.0028?</p>
BruceET
221,800
<p>If you're using radians, then you have $U_1 \sim Unif(0, 2\pi).$ Then the density function must have height $1/2\pi$ over $(0, 2\pi).$</p> <p>If you're using degrees, then you have $U_2 \sim Unif(0, 360),$ which has density $f_{U_2}(x) = 1/360,$ for $0 &lt; x &lt; 360\,$ (and $0$ elsewhere). Because the density function is positive over the (long) interval $(0, 360)$, the height of the density function must be the (small) value $1/360$ in order to enclose area 1.</p> <p>If you define $U_3 = U_1/2\pi$ then you have $U_3 \sim Unif(0,1),$ which has density $f_{U_3}(x) = 1,$ for $0 &lt; x &lt; 1$ (and $0$ elsewhere).</p> <p>And if you define $U_4 = U_2/360$ then you also have $U_4 \sim Unif(0,1).$</p> <p>Notice that each of these density functions encloses an area of 1, as must be the case for any density function. In general for $a &lt; b$, $Unif(a, b)$ has $$ \int_{- \infty}^\infty f(x)\,dx = \int_{-\infty}^a 0\,dx + \int_a^b \frac{1}{b-a}\,dx + \int_b^\infty 0\,dx = \int_a^b \frac{1}{b-a}\,dx = 1.$$</p> <p>Also, in each of your cases, the probability the spinner stops in the first quadrant is $1/4,$ whether you denote the 'first quadrant' as $(0, \pi/2)$, as $(0, 90)$, or as $(0, 1/4).$ The definition of the density function must match your definition of 'first quadrant'.</p>
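<p>A quick numeric check of the three parametrizations (illustrative): in each one the flat density's height times its interval length is 1, and the first quadrant always gets mass 1/4.</p>

```python
import math

# (interval length, density height) for radians, degrees, and fractions
# of a full turn.
cases = [
    (2.0 * math.pi, 1.0 / (2.0 * math.pi)),  # U1
    (360.0,         1.0 / 360.0),            # U2
    (1.0,           1.0),                    # U3 (and U4)
]

for length, height in cases:
    total_area = height * length          # area under the flat density
    first_quadrant = height * length / 4  # mass of the first quadrant
    print(total_area, first_quadrant)     # ≈ 1.0 and 0.25 each time
```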
549,014
<p>If $y=\sin x$, then find the value of $$\frac{d^2(\cos^7 x)}{dy^2}$$ </p> <p>I have no idea on how to proceed in this problem. Please help.</p>
Mhenni Benghorbal
35,472
<p><strong>Hint:</strong> notice that</p> <p>$$ \cos x = \sqrt{1-y^2}$$</p>
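<p>Following the hint (and assuming a branch where $\cos x \ge 0$, so that $\cos^7 x=(1-y^2)^{7/2}$), the second derivative can be checked numerically with a central finite difference. Hand differentiation gives $f''(y) = -7(1-y^2)^{5/2} + 35y^2(1-y^2)^{3/2}$, so in particular $f''(0) = -7$.</p>

```python
def f(y):
    """cos^7 x written in terms of y = sin x, on a branch with cos x >= 0."""
    return (1.0 - y * y) ** 3.5

def second_derivative(g, y, h=1e-4):
    """Central finite-difference approximation of g''(y)."""
    return (g(y + h) - 2.0 * g(y) + g(y - h)) / (h * h)

print(second_derivative(f, 0.0))  # ≈ -7
print(second_derivative(f, 0.3))  # matches -7(1-y^2)^(5/2) + 35 y^2 (1-y^2)^(3/2)
```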
2,484,027
<p>I've been trying to solve this permutation problem. I know that it's been posted on this site before, but my question is about the specific approach I take to solving it.</p> <p>Here's what I thought I'd do: I could first figure out the total number of permutations where AT LEAST $2$ girls are together, and then subtract this from the total number of permutations.</p> <p>Now, the total number of ways in which the students can be seated is obviously $^8\mathbf{P}_8$, or $8!$.</p> <p>For the total number of permutations where at least $2$ girls are together, I first figured out that if I take $2$ girls as $1$ object, I can seat them in a total of $ 7 $ ways. </p> <p>To this I multiplied the total number of ways in which $2$ out of $3$ girls can be chosen. Thus I had $ 7 \times ^3\mathbf{P}_2 $, or $ 6 \times 7$.</p> <p>Finally, to this I multiplied the total number of permutations for the seating arrangement of the remaining students, to get $7 \times 6 \times ^6\mathbf{P}_6 $, or $ 6 \times 7!$.</p> <p>Finally, I subtracted the number of permutations where AT LEAST 2 girls are seated together ($6 \times 7!$) from the total number of permutations of possible seating arrangements ($8!$).</p> <p>Thus, I have $8! - 6 \times 7!$, or $7!(8-6) = 10080$.</p> <p>BUT, the answer given in my book, and online, is 14400. </p> <p>I want to know where my problem-solving logic is wrong, and if my calculation is wrong?</p> <p>Thanks.</p>
N. F. Taussig
173,070
<p>The reason your answer is wrong is that your count of how many arrangements have at least two girls in consecutive seats is incorrect. When you subtracted those arrangements in which a pair of girls in consecutive seats, you subtracted those arrangements in which all the girls sit in consecutive seats twice, once when you designated the leftmost two girls as the pair and once when you designated the rightmost two girls as the pair.</p> <p>One way to fix this is to subtract those arrangements in which exactly two girls sit in consecutive seats and then subtract those arrangements in which three girls sit in consecutive seats, as I did in <a href="https://math.stackexchange.com/questions/2478128/number-of-arrangements-of-8-children-of-different-ages-in-a-line-if-none-of-th/2478241#2478241">this problem</a>. However, a more straightforward fix is to use the <a href="https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle" rel="nofollow noreferrer">Inclusion-Exclusion Principle</a>.</p> <p>There are $8!$ ways to arrange the eight children in a row. From these, we must exclude those seating arrangements in which at least two girls sit in adjacent seats.</p> <p><em>A pair of girls sit in adjacent seats</em>: There are $\binom{3}{2}$ ways to choose which girls will sit in a block of adjacent seats. We have seven objects to arrange, the block of two girls and the other six children, which we can do in $7!$ ways. The two selected girls can be arranged within the block in $2!$ ways. Hence, there are $$\binom{3}{2}7!2!$$ such seating arrangements.</p> <p>However, as explained above, we have subtracted those seating arrangements in which all three girls sit in consecutive seats twice. We only wish to subtract them once, so we must add those arrangements back.</p> <p><em>Two pairs of girls sit in adjacent seats</em>: Since there are only three girls, this can only occur if the three girls sit in consecutive seats. 
We have six objects to arrange, the block of three girls and the five boys. The objects can be arranged in $6!$ ways. The girls can be arranged within their block in $3!$ ways. Hence, there are $6!3!$ seating arrangements in which all three girls are consecutive.</p> <p>By the Inclusion-Exclusion Principle, the number of permissible seating arrangements is $$8! - \binom{3}{2}7!2! + 6!3! = 14400$$ </p>
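<p>The count can also be confirmed by brute force over all $8!$ seatings (a quick check; the labels are arbitrary):</p>

```python
from itertools import permutations

# Three distinguishable girls and five distinguishable boys.
children = ["G1", "G2", "G3", "B1", "B2", "B3", "B4", "B5"]

def girls_separated(seating):
    """True when no two girls occupy adjacent seats."""
    return all(not (left[0] == "G" and right[0] == "G")
               for left, right in zip(seating, seating[1:]))

count = sum(girls_separated(p) for p in permutations(children))
print(count)  # 14400
```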
427,596
<p>Just a quick question. I'm trying to understand the answer to <a href="https://math.stackexchange.com/questions/424063/group-action-on-a-manifold-with-finitely-many-orbits">one of my previous questions</a>. The precise problem I want to show is as follows. </p> <blockquote> <p>Let $G$ be a group acting faithfully on a manifold $X$. If $G$ is such that $\dim G$ makes sense (for example, $G$ is also a manifold), then $\dim G \leq \dim X$. </p> </blockquote> <p>Suppose not, i.e., $\dim G &gt; \dim X$. Then we wish to show that the kernel of the homomorphism $G \to \operatorname{Aut}(X)$ is nontrivial. This should follow if $\dim \operatorname{Aut}(X) = \dim X$, but in general $\operatorname{Aut}(X)$ can be much larger than $X$. (Also, does $\dim \operatorname{Aut}(X)$ make sense?)</p> <p>Any suggestions?</p> <p>EDIT: It turns out that this proposition is false as the examples below show, and the answer I linked to has been retracted (and is being rewritten?). Thanks to everyone for the help. </p>
Jim Belk
1,726
<p>This is not true. For example, $SO(3)$ acts faithfully on the $2$-sphere $S^2$, but $SO(3)$ has dimension $3$, and $S^2$ only has dimension $2$.</p>
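<p>The counterexample can be made concrete (an editorial illustration, using the standard $ZYZ$ Euler-angle parametrization of rotations): every rotation of $\mathbb{R}^3$ preserves the unit sphere, so the 3-parameter family of rotations acts on the 2-dimensional sphere $S^2$.</p>

```python
import math

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_y(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def euler(a, b, c):
    """Rotation with Euler angles (a, b, c): three independent parameters."""
    return matmul(rot_z(a), matmul(rot_y(b), rot_z(c)))

# A rotated unit vector stays on the unit sphere.
w = apply(euler(0.3, 0.7, 1.1), [1.0, 0.0, 0.0])
norm = math.sqrt(sum(x * x for x in w))
print(norm)  # ≈ 1.0: rotations map the sphere to itself
```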
3,244,365
<blockquote> <p>Prove that the product of any four consecutive integers is one less than a perfect square.</p> </blockquote> <p>My first idea is to let <span class="math-container">$k$</span> be an integer. Let <span class="math-container">$m$</span>, also an integer, be equal to <span class="math-container">$k(k+1)(k+2)(k+3)$</span>. When I expanded, I ended up with <span class="math-container">$k^4 + 6k^3 + 11k^2 + 6k$</span>. Now, my only idea is to compare this with the form of a perfect square trinomial and see whether it can be expressed in the form <span class="math-container">$a^2 + 2ab + b^2$</span>. Can someone help me complete this?</p>
Dr. Sonnhard Graubner
175,066
<p>Note that <span class="math-container">$$(x-2)(x-1)x(x+1)+1=(x^2-x-1)^2$$</span></p>
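<p>A quick check of this identity over a range of integers, together with the equivalent shifted form $k(k+1)(k+2)(k+3)+1=(k^2+3k+1)^2$, which matches the expansion in the question:</p>

```python
# Verify (x-2)(x-1)x(x+1) + 1 == (x^2 - x - 1)^2 exactly, using integer
# arithmetic, along with the shifted form starting at k.
for x in range(-1000, 1001):
    assert (x - 2) * (x - 1) * x * (x + 1) + 1 == (x * x - x - 1) ** 2

for k in range(-1000, 1001):
    assert k * (k + 1) * (k + 2) * (k + 3) + 1 == (k * k + 3 * k + 1) ** 2

print("identity verified for |x| <= 1000")
```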
3,589,926
<p>I wanted to come up with a proof using contrapositive, so <strong>I needed the negation of the following: "one of x and y is congruent to 1 modulo 6 while the other is congruent to 5 modulo 6."</strong></p> <p>I interpreted this statement as being the same as "one of x and y is congruent to 1 modulo 6 and the other is congruent to 5 modulo 6", so <strong>the negation I got was "one of x and y is not congruent to 1 modulo 6 or the other is not congruent to 5 modulo 6".</strong></p> <p>Turns out the negation is "one of x and y is not congruent to 1 or 5 modulo 6."</p> <p>I'm not seeing it. Please help me understand.</p>
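<p>One way to sanity-check a proposed negation (an editorial aside; the formalization below is one reasonable reading of the statement, covering both of its symmetric cases) is to compare truth values over all residue pairs:</p>

```python
from itertools import product

def statement(x, y):
    """One of x and y is congruent to 1 (mod 6) while the other is congruent to 5 (mod 6)."""
    return (x % 6 == 1 and y % 6 == 5) or (x % 6 == 5 and y % 6 == 1)

def de_morgan_negation(x, y):
    """not-(A or B) expanded by De Morgan into (not A) and (not B)."""
    return (x % 6 != 1 or y % 6 != 5) and (x % 6 != 5 or y % 6 != 1)

# The mechanical negation agrees with `not statement` on every residue pair.
agree = all(de_morgan_negation(x, y) == (not statement(x, y))
            for x, y in product(range(6), repeat=2))
print(agree)  # True
```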
Albert
331,549
<p>I don't know Abbott's book. But I strongly recommend Apostol as a first book and Spivak as a second one. </p> <p>Apostol's book begins from the very basics (the real numbers). He first presents integral calculus, even before the concepts of limits and continuous functions. But he takes a different (though equivalent) approach: the Darboux integral. He never mentions it, but that is the actual theory, as you can see from Wikipedia or other references. Then he presents differential calculus and makes the connection via the fundamental theorem of calculus.</p> <p>The only part I don't like is the treatment of linear algebra and ordinary differential equations in Vol. II. But the rest of the book is excellent.</p> <p>Then move to Spivak's book. Especially try most of the exercises from that book. This book is a little more theoretical than Apostol's. Spivak also takes the Darboux integral as his integration theory, but in the appendix on Riemann sums he proves (though he never mentions it) that the Riemann and Darboux integrals are the same (at least for continuous functions).</p> <p>I also recommend <em>Introduction to Real Analysis</em> by Bartle and Sherbert as an introductory book (after you read Apostol) and little Rudin (<em>Principles of Mathematical Analysis</em> by Walter Rudin) if you want to go deeper into the subject.</p>
2,991,811
<p>Let <span class="math-container">$x_n = 2^{n}\sin \frac{\pi}{2^{n+1}}$</span>. I think it converges to <span class="math-container">$\frac{\pi}{2}$</span>, but I am not sure how to prove it. I've tried using induction on <span class="math-container">$n$</span> but have had no luck. Any help would be appreciated, thanks.</p>
Doug M
317,162
<p>Use the fact that <span class="math-container">$|\sin x| &lt; |x|$</span> for <span class="math-container">$x \neq 0$</span>:</p> <p><span class="math-container">$\sin \frac{\pi}{2^{n+1}} &lt; \frac{\pi}{2^{n+1}}$</span></p> <p><span class="math-container">$0&lt;2^{n}\sin \frac{\pi}{2^{n+1}} &lt; \frac {\pi}{2}$</span></p> <p>The sequence is bounded... can we show that it is monotone? Using the half-angle identity,</p> <p><span class="math-container">$x_n = 2^{n}\sin \frac{\pi}{2^{n+1}} = 2^{n}\sqrt {\frac {1-\cos \frac{\pi}{2^{n}}}{2}} = 2^{n}\frac {\sin \frac {\pi}{2^n}}{\sqrt {2\left(1+\cos \frac{\pi}{2^{n}}\right)}} = x_{n-1}\sqrt {\frac {2}{1+\cos \frac{\pi}{2^{n}}}}$</span></p> <p>and since <span class="math-container">$\sqrt {\frac {2}{1+\cos \frac{\pi}{2^{n}}}}&gt;1$</span>, we get <span class="math-container">$\frac {x_{n}}{x_{n-1}} &gt; 1$</span>, so the sequence is increasing.</p>
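<p>A numeric check of both claims (boundedness and monotonicity) for the first couple dozen terms; the range is kept modest so the successive differences stay above floating-point noise:</p>

```python
import math

# x_n = 2^n * sin(pi / 2^(n+1)), which should increase toward pi/2.
xs = [2 ** n * math.sin(math.pi / 2 ** (n + 1)) for n in range(1, 23)]

assert all(x < math.pi / 2 for x in xs)        # bounded above by pi/2
assert all(a < b for a, b in zip(xs, xs[1:]))  # strictly increasing
print(xs[0], xs[-1], math.pi / 2)  # sqrt(2), then nearly pi/2
```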