386,649
<p>If you were working in a number system where there was a one-to-one and onto mapping from each natural to a symbol in the system, what would it mean to have a representation in the system that involved more than one digit?</p> <p>For example, if we let $a_0$ represent $0$, and $a_n$ represent the number $n$ for any $n$ in $\mathbb{N}$, would '$a_1$$a_0$' represent a number?</p> <p>Is such a system well defined or useful for anything?</p>
N. S.
9,176
<p>The set</p> <p>$${\mathbb N}[X]$$</p> <p>is exactly a system of the kind you describe. The constant polynomials are the whole numbers ${\mathbb N}$, and they play the role of the infinitely many "symbols" in your system, while $a_0a_1$ is actually the polynomial $a_0+a_1X$. </p> <p>If you replace $\mathbb N$ by $\mathbb Z$ or $\mathbb Q$ you get rings which are often studied in mathematics.</p> <hr> <p><strong>Added</strong> When studying the prime factorization of integers, the same type of system actually comes into play.</p> <p>Look at the primes $p_1=2,p_2=3,\ldots$ </p> <p>Then any $n \geq 2$ can be written as $2^{a_1}3^{a_2}5^{a_3}\cdots p_k^{a_k}$, where $p_k$ is the last prime appearing in the prime factorization of $n$.</p> <p>Then the "symbols" would correspond to the powers of $2$, the elements of the form $a_0a_1$ correspond to the integers divisible by no prime other than (possibly) $2$ and $3$, and so on.</p> <p>Interestingly, this example is similar to $\mathbb N[X]$, and the addition of polynomials in $\mathbb N[X]$ corresponds to multiplication of positive integers.</p>
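To make the correspondence in this answer concrete, here is a small sketch (my own illustration, not part of the original answer; the helper names are mine): writing an integer as its vector of prime exponents turns multiplication of integers into "digit-wise" addition of vectors, exactly as addition works in $\mathbb N[X]$.

```python
def primes():
    """Yield primes 2, 3, 5, ... by trial division (fine for small numbers)."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n**0.5) + 1)):
            yield n
        n += 1

def exponent_vector(n):
    """Exponents of 2, 3, 5, ... in n, ending at the largest prime dividing n."""
    vec = []
    for p in primes():
        if n == 1:
            break
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        vec.append(e)
    return vec

def add_vectors(u, v):
    """Digit-wise addition, i.e. polynomial addition in N[X]."""
    length = max(len(u), len(v))
    u = u + [0] * (length - len(u))
    v = v + [0] * (length - len(v))
    return [a + b for a, b in zip(u, v)]

# Multiplication of integers corresponds to addition of exponent vectors.
assert exponent_vector(12) == [2, 1]        # 12 = 2^2 * 3
assert exponent_vector(45) == [0, 2, 1]     # 45 = 3^2 * 5
assert add_vectors(exponent_vector(12), exponent_vector(45)) == exponent_vector(12 * 45)
```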
466,757
<p>Suppose we have the following</p> <p>$$ \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}a_{ij}$$</p> <p>where all the $a_{ij}$ are non-negative.</p> <p>We know that we can interchange the order of summation here. My interpretation of why this is true is that both of these iterated sums are rearrangements of the same series and hence converge to the same value, or diverge to infinity (as convergence and absolute convergence are the same here, and all rearrangements of an absolutely convergent series converge to the same value as the series).</p> <p>Is this interpretation correct? Or can someone offer a more insightful interpretation of this result?</p> <p>Please note that I am not asking for a proof but for interpretations, although an insightful proof would be appreciated.</p>
Evan
38,878
<p>Well, I think that interpretation is a bit circular: the reason absolute convergence allows rearrangements without changing the limit is precisely this point. </p> <p>The gist of it is that <strong>no cancellations due to sign are possible.</strong> I think it is very illuminating to consider the counterexample to the rearrangement theorem when you do not have absolute convergence but still have conditional convergence (rearrangements can then lead to <strong>any</strong> limiting value!).</p> <p>You can add up as many negative terms as you want until you are happy (since the series does not converge absolutely, you can keep adding until you are below any value on the real line you desire), and likewise for the positive terms (note: if adding all the remaining positive terms gave a finite value, then the whole sum didn't conditionally converge in the first place, and would in fact be $-\infty$). This procedure of adding negative terms until your sum is below $L$, then adding positive terms until the sum is above $L$, and so on, converges because conditional convergence requires the terms you are summing to go to $0$ in the end.</p> <p>Okay, but back to the non-negative double sum. Towards proving the double-sum result (since you are only considering two orderings), one cute thing you could try is to start with rows (arrange the numbers in the upper right quadrant for this picture). Assume first that the row sums converge. Now, for each row sum, take out its first term and set it aside, adding these upwards (the first column sum). Note that the value of the sum didn't change, because you took out values one at a time, putting them into the column sum, and at no point did anything funny happen, because only finite sums were involved (the full sum, the partial column sum, and the full sum minus the finitely many elements you took out). Now continue this procedure, extracting the second column sum, and so on. At the "end" you have what you want to prove.</p> <p>There are two limiting procedures hidden here: when you "finish" the first column sum, you have to take a limit, and when you "finish" all the columns, there is another limit. The issue, if you include negative terms, is that the first column sum might already diverge. And even if all the column sums converge, summing the column sums might converge to an entirely different value! I think to really finish this proof idea, you should use the fact that the values you are working with vary <strong>monotonically</strong> at each step.</p>
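The greedy rearrangement procedure described in this answer can be sketched in a few lines (my own illustration; the target value and term count are arbitrary): for the conditionally convergent alternating harmonic series, alternately draining positive and negative terms steers the partial sums toward any chosen limit.

```python
# Rearrange the alternating harmonic series 1 - 1/2 + 1/3 - ... toward `target`.
# Positive terms are 1/1, 1/3, 1/5, ...; negative terms are -1/2, -1/4, -1/6, ...
target = 0.3
pos_denom, neg_denom = 1, 2   # next unused positive / negative denominator
partial = 0.0
for _ in range(200_000):
    if partial <= target:
        partial += 1.0 / pos_denom    # add positive terms while below target
        pos_denom += 2
    else:
        partial -= 1.0 / neg_denom    # add negative terms while above target
        neg_denom += 2

# Each crossing overshoots by at most the term just used, and the terms
# tend to 0, so the rearranged partial sums converge to `target`.
assert abs(partial - target) < 1e-3
```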
1,221,442
<p>I have two uniform random variables $B$ and $C$, distributed on $(2,3)$ and $(0,1)$ respectively. I need to find the mean of $\sqrt{B^2-4C}$. Could I plug in the means of $B$ and $C$ and evaluate, or is it more complicated than that?<br> The original question is here: <a href="https://math.stackexchange.com/questions/1220538/difference-between-two-real-roots-with-uniformly-distributed-coefficents/1220638#1220638">Difference between two real roots with uniformly distributed coefficents</a></p>
André Nicolas
6,312
<p>You want to find the mean of $\sqrt{B^2-4C}$. This <em>cannot</em> be done by substituting $E(B)$ and $E(C)$ for $B$ and $C$ in the expression $\sqrt{B^2-4C}$. </p> <p>Let $f_B(x)$ be the density function of $B$, and let $f_C(y)$ be the density function of $C$. So $f_B(x)=1$ on the interval $(2,3)$ and $0$ elsewhere, and $f_C(y)=1$ on the interval $(0,1)$ and $0$ elsewhere. </p> <p>Then the required mean is $$\iint_{\mathbb{R}^2} \sqrt{x^2-4y}\,f_B(x)f_C(y)\,dx\,dy.$$ In our case we are effectively integrating over the rectangle where the densities are non-zero. So we are integrating $\sqrt{x^2-4y}$ over the rectangle.</p> <p>For the double integral, I would integrate first with respect to $y$, and then with respect to $x$. The first integration is easy. The second is doable but a little messy.</p>
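As a quick numerical cross-check of this double integral (my own addition, not part of the original answer), a Monte Carlo estimate of $E\big[\sqrt{B^2-4C}\big]$ is easy to write down; the exact integral evaluates to roughly $2.03$.

```python
import random

random.seed(0)

# Monte Carlo estimate of E[sqrt(B^2 - 4C)] with B ~ U(2,3), C ~ U(0,1).
# Note B^2 - 4C >= 4 - 4 = 0 on this rectangle, so the square root is real.
n = 200_000
total = 0.0
for _ in range(n):
    b = random.uniform(2.0, 3.0)
    c = random.uniform(0.0, 1.0)
    total += (b * b - 4.0 * c) ** 0.5
estimate = total / n

# The estimate should land near the exact value (about 2.03).
assert 1.9 < estimate < 2.15
```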
3,118,298
<p>So I have a formula for arc length <span class="math-container">$$s(t)=\int_{t_0}^t \vert\vert \dot\gamma(u)\vert\vert du$$</span> I computed that <span class="math-container">$$\vert\vert \dot\gamma(t)\vert\vert=\sqrt{2(1-\cos t)}$$</span></p> <p>Substituting this into the integral <span class="math-container">$$\int_0^{2\pi} \sqrt{2(1-\cos t)} dt$$</span> <span class="math-container">$$=2\int_0^{2\pi} \sqrt{1-\cos t} dt$$</span></p> <p><span class="math-container">$$=\sqrt{2}\int_0^{2\pi} \sqrt{1-\cos t} \frac{\sqrt{1+\cos t}}{\sqrt{1+\cos t}} dt$$</span></p> <p><span class="math-container">$$=\sqrt{2}\int_0^{2\pi} \frac{\sin t}{\sqrt{1+\cos t}}dt$$</span> Using the substitution <span class="math-container">$u=1+\cos t$</span>, <span class="math-container">$du=-\sin t dt$</span></p> <p><span class="math-container">$$=\sqrt{2}\int_0^{2\pi} \frac{-1}{\sqrt{u}} du$$</span> <span class="math-container">$$=-\sqrt{2} (2\sqrt{1+\cos t}\big\vert_0^{2\pi})=0$$</span></p> <p>Obviously this has to be wrong, but I'm not sure what I'm doing wrong.</p>
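A quick numeric check (an editorial addition, not part of the original post) shows the arc length is $8$, not $0$; the slip is that $\sqrt{1-\cos t}\,\sqrt{1+\cos t}=\sqrt{\sin^2 t}=|\sin t|$, which is not equal to $\sin t$ on $(\pi,2\pi)$.

```python
import math

# Midpoint-rule approximation of the arc length integral
#   integral_0^{2pi} sqrt(2(1 - cos t)) dt  =  integral_0^{2pi} 2|sin(t/2)| dt  =  8.
n = 100_000
h = 2.0 * math.pi / n
total = 0.0
for k in range(n):
    t = (k + 0.5) * h
    total += math.sqrt(2.0 * (1.0 - math.cos(t)))
total *= h

assert abs(total - 8.0) < 1e-6
```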
J P
551,142
<p>It's not quite convincing enough for me; you didn't explicitly show why it is true. Let <span class="math-container">$L$</span> be the limit of the sequence <span class="math-container">$\{a_n\}$</span>. We'll use the monotone convergence theorem, since you said it was already proven to you. Clearly <span class="math-container">$L \geq a_n$</span> for all <span class="math-container">$n$</span>, so <span class="math-container">$L$</span> is an upper bound. </p> <p>Assume there is some upper bound less than <span class="math-container">$L$</span>, call it <span class="math-container">$k$</span>, and let <span class="math-container">$L - k = p$</span>. From the definition of the limit of a sequence, for every <span class="math-container">$\varepsilon &gt; 0$</span> there exists an <span class="math-container">$N$</span> such that for all <span class="math-container">$n &gt; N$</span>, <span class="math-container">$|a_n - L| &lt; \varepsilon$</span>. Now take <span class="math-container">$\varepsilon = p$</span>. This produces a term which is greater than <span class="math-container">$k$</span>, contradicting the choice of <span class="math-container">$k$</span> as an upper bound. </p> <p>Now we have shown that <span class="math-container">$L$</span> is an upper bound, and that any value smaller than <span class="math-container">$L$</span> is not an upper bound, finishing the proof.</p>
11,457
<p>In their paper <em><a href="http://arxiv.org/abs/0904.3908">Computing Systems of Hecke Eigenvalues Associated to Hilbert Modular Forms</a></em>, Greenberg and Voight remark that</p> <p>...it is a folklore conjecture that if one orders totally real fields by their discriminant, then a (substantial) positive proportion of fields will have strict class number 1.</p> <p>I've tried searching for more details about this, but haven't found anything. </p> <p>Is this conjecture based solely on calculations, or are there heuristics which explain why this should be true? </p>
Dror Speiser
2,024
<p>I think the best way to become convinced that class numbers of real quadratic fields tend to be small is to look at the continued fraction expansion of $\sqrt{D}$.</p> <p>The period length of the continued fraction is about the regulator (up to a factor of $\log{D}$). One can easily compute some random continued fractions and see that for most numbers, the length really is within a small factor of $\sqrt{D}$.</p> <p>Numbers whose continued fraction has a small period length are very scarce. I believe it is not hard to prove that the number of $D$ up to $X$ for which the period length of $\sqrt{D}$ is less than a fixed integer $n$ is $O(X^{1-\epsilon})$ for some $\epsilon &gt; 0$ (I think $\epsilon = 1/2$ should always work).</p> <p>In a sense (a very strict one, actually) the regulator counts how many numbers $n$ with $|n| &lt; 2\sqrt{D}$ can be represented as $n = x^2-Dy^2$ for some integers $x, y$. Well, if $D$ is large and random, then it seems reasonable that many should be. So the regulator should be around $\sqrt{D}$, and hence, by Dirichlet's class number formula, the class number should be very small.</p> <p>Once you become convinced of the real quadratic case, the rest follows immediately, because you already believe things people said while waving their hands a lot. (This is a general philosophy in mathematics.)</p>
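To experiment with this yourself, here is a short sketch (my own addition) of the standard continued fraction algorithm for $\sqrt{D}$, returning the period length; one can then compare period lengths against $\sqrt{D}$ for random $D$.

```python
from math import isqrt

def sqrt_cf_period(D):
    """Period length of the continued fraction expansion of sqrt(D).

    Uses the classical recurrence m' = d*a - m, d' = (D - m'^2)/d,
    a' = floor((a0 + m')/d'); the period ends when a' = 2*a0.
    Returns 0 for perfect squares.
    """
    a0 = isqrt(D)
    if a0 * a0 == D:
        return 0
    m, d, a = 0, 1, a0
    period = 0
    while a != 2 * a0:
        m = d * a - m
        d = (D - m * m) // d
        a = (a0 + m) // d
        period += 1
    return period

# sqrt(2) = [1; 2,...], sqrt(7) = [2; 1,1,1,4,...], sqrt(13) = [3; 1,1,1,1,6,...]
assert sqrt_cf_period(2) == 1
assert sqrt_cf_period(7) == 4
assert sqrt_cf_period(13) == 5
```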
107,171
<p>I'm trying to find $$\lim\limits_{(x,y) \to (0,0)} \frac{e^{-\frac{1}{x^2+y^2}}}{x^4+y^4} .$$ After I tried a couple of algebraic manipulations, I decided to use polar coordinates. I choose $x=r\cos \theta$, $y=r\sin \theta$, and $r= \sqrt{x^2+y^2}$, so I get </p> <p>$$\lim\limits_{r \to 0} \frac{e^{-\frac{1}{r^2}}}{r^4\cos^4 \theta+r^4 \sin^4 \theta } $$</p> <p>What do I do from here? </p> <p>Thanks a lot!</p>
GEdgar
442
<p>Some of the numbers $3k+1$ are negative, so the title (not the actual statement, though) may refer to this one, which is easier to do: $$ \sum_{k=-\infty}^\infty \frac{1}{(3k+1)^2} = \frac{4\pi^2}{27} $$</p>
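A direct numerical check of this identity (my own addition): truncating the bilateral sum at $|k| \le N$ leaves a tail of order $1/N$, so a modest $N$ already matches $4\pi^2/27$ well.

```python
import math

# Partial bilateral sum of 1/(3k+1)^2 over k = -N..N.
N = 100_000
s = sum(1.0 / (3 * k + 1) ** 2 for k in range(-N, N + 1))

# The tail beyond |k| = N is about 2/(9N), so the error here is ~2e-6.
assert abs(s - 4.0 * math.pi ** 2 / 27.0) < 1e-4
```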
1,798,855
<p>I'm trying to understand this proof that:</p> <p>$M$ is connected $\iff$ $M$ and $\emptyset$ are the only subsets of $M$ that are open and closed at the same time.</p> <p>The proof is:</p> <p>If $M=A\cup B$ is a separation, then $A$ and $B$ are open and closed. Reciprocally, if $A\subset M$ is open and closed, then $M = A\cup(M-A)$. What? I know that if $M=A\cup B$ is a separation, $A$ and $B$ are both open. But why closed? Also, the 'reciprocally' part makes no sense to me. Could anybody help?</p> <p>Also, there's another proof, which states: $M$ and $\emptyset$ are the only subsets of $M$ that are at the same time closed and open $\iff$ whenever $X\subset M$ has empty boundary, then $X=M$ or $X=\emptyset$,</p> <p>which is proved as follows:</p> <p>given $X\subset M$, we know the condition $X\cap \partial X = \emptyset$ implies $X$ is open, while the condition $\partial X \subset X$ implies $X$ is closed. Then $X$ is open and closed $\iff$ $\partial X = X\cap \partial X = \emptyset$; this shows the $\iff$ of the theorem above.</p> <p>First of all, I think that the condition that $X$ has empty boundary implies that $X\cap \partial X$ is empty, but who said anything about $\partial X\subset X$? Also, where's the $\rightarrow$ of this proof? I can only see $\leftarrow$.</p>
Ross Millikan
1,827
<p>If <span class="math-container">$M=A \cup B$</span> is a separation, both <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are open, as you say. Then <span class="math-container">$A = B^c$</span> is the complement of an open set, so it is closed. The reciprocal direction works the same way: if <span class="math-container">$A$</span> is open and closed, <span class="math-container">$M \setminus A=B$</span> must also be open and closed, as the complement of a clopen set.</p>
75,925
<p>I hope this question is focused enough – it's not about a real problem I have, but to find out if anyone knows about a similar thing.</p> <p>You probably know the <a href="https://en.wikipedia.org/wiki/Uncertainty_principle" rel="nofollow noreferrer">Heisenberg uncertainty principle</a>: For any function <span class="math-container">$g\in L^2(\mathbb{R})$</span> for which the respective expressions exist it holds that <span class="math-container">$$ \frac{1}{4}\|g\|_2^4 \leq \int_{\mathbb{R}} |x|^2 |g(x)|^2 \,dx \int_{\mathbb{R}} |g'(x)|^2 \,dx. $$</span></p> <p>This inequality is not only important in quantum mechanics, but also in signal processing for the short-time Fourier transform, see <a href="https://en.wikipedia.org/wiki/Fourier_transform#Uncertainty_principle" rel="nofollow noreferrer">here</a>.</p> <p>One can derive this by formally using integration by parts <span class="math-container">$$ \int_{\mathbb{R}} 1\,|g(x)|^2\, dx = -\int_{\mathbb{R}} x\tfrac{d}{dx}|g(x)|^2\,dx \leq 2\int_{\mathbb{R}} |xg(x)|\,|g'(x)|\,dx $$</span> and Cauchy–Schwarz.</p> <p>Now, changing just the order of the functions, you obtain this inequality <span class="math-container">$$ \int_{\mathbb{R}} |g(x)|^2 \, dx \leq 2\int_{\mathbb{R}} |xg'(x)|\,|g(x)| \, dx \leq 2\left(\int_{\mathbb{R}} |xg'(x)|^2 \, dx\right)^{1/2} \left(\int_{\mathbb{R}} |g(x)|^2 \, dx\right)^{1/2} $$</span> which gives <span class="math-container">$$ \|g\|_2\leq 2\|xg'\|_2. $$</span></p> <p>Ok, this was just playing around. However, this inequality can also be motivated by an abstract consideration of uncertainty principles associated to group-related integral transforms (see my <a href="https://regularize.wordpress.com/2011/09/16/the-uncertainty-principle-for-the-windowed-fourier-transform/" rel="nofollow noreferrer">two</a> <a href="https://regularize.wordpress.com/2011/09/20/the-uncertainty-principle-for-the-one-dimensional-wavelet-transform/" rel="nofollow noreferrer">blog posts</a>). Interestingly, the Heisenberg uncertainty principle derives from the short-time Fourier transform, while the last &quot;uncertainty principle&quot; derives from the wavelet transform.</p> <p>The last fact bothers me: although the two inequalities can be derived from two conceptually very different integral transforms (indeed the underlying groups are very different), they have very similar formal derivations.</p> <p>I have the following questions: Is anyone familiar with the last inequality? Could it be useful in any context? Is there some reason why these inequalities seem so entangled?</p>
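A numeric sanity check (my own addition) for the Gaussian $g(x)=e^{-x^2/2}$: the Heisenberg inequality holds with equality for this $g$, and the second chain, with the factor $2$ that Cauchy–Schwarz produces, gives $\|g\|_2 \le 2\|xg'\|_2$.

```python
import math

def integrate(f, a=-12.0, b=12.0, n=40_000):
    """Midpoint rule; the integrands decay like exp(-x^2), so [-12, 12] suffices."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

g = lambda x: math.exp(-x * x / 2.0)          # Gaussian
dg = lambda x: -x * math.exp(-x * x / 2.0)    # g'

norm_g_sq = integrate(lambda x: g(x) ** 2)            # ||g||_2^2
x_g_sq    = integrate(lambda x: x * x * g(x) ** 2)    # ||xg||_2^2
dg_sq     = integrate(lambda x: dg(x) ** 2)           # ||g'||_2^2
x_dg_sq   = integrate(lambda x: x * x * dg(x) ** 2)   # ||xg'||_2^2

# Heisenberg: (1/4)||g||_2^4 <= ||xg||_2^2 ||g'||_2^2, with equality for Gaussians.
assert abs(0.25 * norm_g_sq ** 2 - x_g_sq * dg_sq) < 1e-4
# The second inequality above: ||g||_2 <= 2 ||x g'||_2.
assert math.sqrt(norm_g_sq) <= 2.0 * math.sqrt(x_dg_sq)
```

(Note the Gaussian also shows why the factor $2$ is needed: here $\|g\|_2\approx 1.33$ while $\|xg'\|_2\approx 1.15$.)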
Piero D'Ancona
7,294
<p>There exists a plethora of inequalities relating weighted $L^p$ norms of a function and its derivatives. For instance you have the Caffarelli–Kohn–Nirenberg family of inequalities $$\| |x|^{-\gamma}u\|_ {L^{r}}\le C \||x|^{-\alpha}\nabla u\|^{a}_ {L^{p}}\||x|^{-\beta}u\|^{1-a}_ {L^{q}} $$ which hold for quite a large range of parameters; note that here $\alpha,\beta,\gamma$ may assume negative values. You will have no difficulty googling the vast literature on the subject (let me add that there is a recent paper of mine on the arXiv with some improvements on this).</p>
156,285
<p>I have been working on this exercise for a while now. It's in B.L. van der Waerden's <em>Algebra (Volume I)</em>, page $19$. The exercise is as follows:</p> <blockquote> <p>The order of the symmetric group $S_n$ is $n!=\prod_{1}^{n}\nu$. (Mathematical induction on $n$.)</p> </blockquote> <p>I don't comprehend how we can logically use induction here. It seems that the first step would be proving $S_1$ has $1!=1$ elements. This is simply justified: There is only one permutation of $1$, the permutation of $1$ to itself.</p> <p>The next step would be assuming that $S_n$ has order $n!$. Now here is where I get stuck. How do I use this to show that $S_{n+1}$ has order $(n+1)!$?</p> <p>Here is my attempt: I am thinking this is because all $n!$ permutations of $S_n$ now have a new element to permute. For example, if we take one single permutation $$ p(1,\dots,n) = \begin{pmatrix} 1 &amp; 2 &amp; 3 &amp; \dots &amp; n\\ 1 &amp; 2 &amp; 3 &amp; \dots &amp; n \end{pmatrix} $$ We now have $n$ modifications of this single permutation by adding the symbol $(n+1)$:</p> <p>\begin{align} p(1,2,\dots,n,(n+1))&amp;= \begin{pmatrix} 1 &amp; 2 &amp; \dots &amp; n &amp; (n+1)\\ 1 &amp; 2 &amp; \dots &amp; n &amp; (n+1) \end{pmatrix}\\ p(2,1,\dots,n,(n+1))&amp;= \begin{pmatrix} 1 &amp; 2 &amp; \dots &amp; n &amp; (n+1)\\ 2 &amp; 1 &amp; \dots &amp; n &amp; (n+1) \end{pmatrix}\\ \vdots\\ p(n,2,\dots,1,(n+1))&amp;= \begin{pmatrix} 1 &amp; 2 &amp; \dots &amp; n &amp; (n+1)\\ n &amp; 2 &amp; \dots &amp; 1 &amp; (n+1) \end{pmatrix}\\ p((n+1),2,\dots,n,1)&amp;= \begin{pmatrix} 1 &amp; 2 &amp; \dots &amp; n &amp; (n+1)\\ (n+1) &amp; 2 &amp; \dots &amp; n &amp; 1 \end{pmatrix} \end{align}</p> <p>There are actually $(n+1)$ permutations of that specific form, but we take $p(1,\dots,n)=p(1,\dots,n,(n+1))$ in order to illustrate and prove our original statement. 
We can make this general equality for all $n!$ permutations: $p(x_1,x_2,\dots,x_n)=p(x_1,x_2,\dots,x_n,x_{n+1})$, where $x_i$ is any symbol of our finite set of $n$ symbols and $x_{n+1}$ is strictly defined as the symbol $(n+1)$.</p> <p>We can repeat this process for all $n!$ permutations in $S_n$. This gives us $n!\cdot n$ new permutations. Then, adding in the original $n!$ permutations, we have $n!\cdot n+n!=(n+1)n!=(n+1)!$. Consequently, $S_{n+1}$ has order $(n+1)!$.</p> <p>How is my reasoning here? Furthermore, is there a more elegant argument? I do not really see my argument as <em>incorrect</em>, it just seems to lack elegance. My reasoning may well be very incorrect, however. If so, please point it out to me.</p>
Dylan Moreland
3,701
<p>I start to get confused by the notation around halfway through. Here's what I'd do: identify $S_n$ with the subgroup of $S_{n + 1}$ consisting of those permutations that fix $n + 1$. Choose elements $\sigma_1, \ldots, \sigma_{n + 1}$ of $S_{n + 1}$ such that $\sigma_i(n + 1) = i$. For example, you could take $\sigma_i$ to be the transposition $[i,n+1]$. Then prove that these are left coset representatives for $S_n$ in $S_{n + 1}$. In more detail, you need to do two things:</p> <ol> <li>If $\sigma \in S_{n + 1}$, then find an $i$ such that $\sigma_i^{-1}\sigma$ fixes $n + 1$.</li> <li>If $i \neq j$, then show that $\sigma_j^{-1}\sigma_i$ does not fix $n + 1$.</li> </ol>
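This coset decomposition is easy to verify by brute force for small $n$ (a sketch of my own; the names are mine). Below, permutations of $\{0,\dots,n\}$ are tuples, $S_n$ is embedded as the permutations fixing the last point, and $\sigma_i$ is the transposition swapping $i$ with the last point.

```python
from itertools import permutations

n = 3
last = n  # we permute {0, 1, ..., n}, so "n+1" in the answer is index `last`

def compose(p, q):
    """(p o q)(x) = p(q(x))."""
    return tuple(p[q[x]] for x in range(len(p)))

def transposition(i):
    p = list(range(n + 1))
    p[i], p[last] = p[last], p[i]
    return tuple(p)

S_n1 = set(permutations(range(n + 1)))          # S_{n+1}
S_n = {p for p in S_n1 if p[last] == last}      # copy of S_n fixing `last`

# Left cosets sigma_i * S_n; sigma_i sends `last` to i, so coset i
# consists exactly of the permutations mapping `last` to i.
cosets = [{compose(transposition(i), p) for p in S_n} for i in range(n + 1)]

assert all(len(c) == len(S_n) for c in cosets)   # each coset has size n!
assert set().union(*cosets) == S_n1              # together they cover S_{n+1}
assert sum(len(c) for c in cosets) == len(S_n1)  # and they are pairwise disjoint
# hence |S_{n+1}| = (n+1) * |S_n|
assert len(S_n1) == (n + 1) * len(S_n)
```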
7,761
<p>Our undergraduate university department is looking to spruce up our rooms and hallways a bit and has been thinking about finding mathematical posters to put in various spots; hoping possibly to entice students to take more math classes. We've had decent success in finding "How is Math Used in the Real World"-type posters (mostly through AMS), but we've been unable to find what I would call interesting/informative math posters. </p> <p>For example, I remember seeing a poster once (put out by Mathematica) that basically laid out how to solve general quadratics, cubics, and quartics. Then it had a good overview of proving that no formula existed for quintics. So not only was it pretty to look at, but if you stopped to read it you actually learned something.</p> <p>Does anyone know of a company or distributor that carries a variety of posters like this? </p> <p>I've tried searching online but all that comes up is a plethora of posters of math jokes. And even though the application/career-based posters are nice and serve a purpose, I don't feel like you actually gain mathematical knowledge by reading them.</p>
Churning Butter
4,988
<p>The American Mathematical Society has a collection of beautiful posters in their Math Samplings. They usually add new ones every so often, and there are some newer ones that are maybe a bit more what you'd call informative, which weren't there when you looked in the past:</p> <p><a href="http://www.ams.org/samplings/posters/posters" rel="nofollow noreferrer">http://www.ams.org/samplings/posters/posters</a></p> <p>The posters are available for free to current students and instructors; they just ask you to provide the school name and some basic info. You don't need to make an account or anything. I've ordered posters several times and it's a seamless process. They come quickly and in excellent shape.</p> <p>A few of the posters are:</p> <p><img src="https://i.stack.imgur.com/8MCi4.jpg" alt="enter image description here"> <img src="https://i.stack.imgur.com/4OmkS.jpg" alt="enter image description here"> <img src="https://i.stack.imgur.com/n5fxZ.jpg" alt="enter image description here"></p>
1,178,265
<p>I'm supposed to be able to determine <strong><em>without calculations</em></strong> the determinant, inverse, and $n$-th power of the rotation matrix:</p> <p>$\begin{pmatrix} \cos\theta &amp; \sin\theta \\ -\sin\theta &amp; \cos\theta \end{pmatrix}$</p> <p>Can someone explain to me how I can do that?</p>
GFR
64,803
<p>The matrix you wrote is a rotation of the plane by an angle $\theta$. Therefore its inverse can be obtained by replacing $\theta$ by $-\theta$, as a rotation can be undone by a rotation with the opposite angle. Similarly its $n$th power will be a rotation by the angle $n\theta$. Finally, as the determinant measures volumes and a rotation is an isometry (more precisely, if two vectors $u$, $v$ determine a parallelepiped of volume $V$, and $T$ is a linear transformation, then the volume of the parallelepiped generated by $Tu$, $Tv$ is $\det(T)\cdot V$), the determinant is one.</p>
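These three facts are easy to confirm numerically (my own sketch, using plain $2\times 2$ helpers): $R(\theta)^{-1}=R(-\theta)$, $R(\theta)^n=R(n\theta)$, and $\det R(\theta)=1$.

```python
import math

def R(t):
    """The rotation matrix from the question."""
    return [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

theta = 0.7
I = [[1.0, 0.0], [0.0, 1.0]]

# inverse: R(theta) R(-theta) = I
assert close(matmul(R(theta), R(-theta)), I)
# powers: R(theta)^3 = R(3*theta)
assert close(matmul(matmul(R(theta), R(theta)), R(theta)), R(3 * theta))
# determinant: cos^2 + sin^2 = 1
det = R(theta)[0][0] * R(theta)[1][1] - R(theta)[0][1] * R(theta)[1][0]
assert abs(det - 1.0) < 1e-12
```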
1,641,579
<p>The conditional statement is: If today is February 1, then tomorrow is Ground Hog's Day. I need to negate this, but I am confused. Would it just be "If today is not February 1, then tomorrow is not Ground Hog's Day"? I think that is the inverse statement, though. Please help me negate this conditional statement.</p>
p Groups
301,282
<p>Many (but not all) books on group theory / algebra include this theorem with proof, but in my opinion the book <em>Finite Group Theory</em> by Martin Isaacs gives a very elegant, natural, well-motivated proof of it.</p> <p>Although it is lengthy (pp. 75–80), it is clearly written with its audience (undergraduate students) in mind, and in such a way that reading it feels like <em>a lecture on the topic with a neat explanation of the proof, accessible to undergraduates</em>.</p>
1,641,579
<p>The conditional statement is: If today is February 1, then tomorrow is Ground Hog's Day. I need to negate this, but I am confused. Would it just be "If today is not February 1, then tomorrow is not Ground Hog's Day"? I think that is the inverse statement, though. Please help me negate this conditional statement.</p>
verret
191,246
<p>See Section 6.2 in <em>The Theory of Finite Groups: An Introduction</em>, by Kurzweil and Stellmacher.</p>
1,641,579
<p>The conditional statement is: If today is February 1, then tomorrow is Ground Hog's Day. I need to negate this, but I am confused. Would it just be "If today is not February 1, then tomorrow is not Ground Hog's Day"? I think that is the inverse statement, though. Please help me negate this conditional statement.</p>
Rithvik Reddy
737,143
<p>You can find it in Section 6.6 (p. 186) of Charles Weibel's book <em>An Introduction to Homological Algebra</em>.</p>
144,818
<p>Let $x_1,x_2,\ldots,x_n$ be $n$ real numbers that satisfy $x_1&lt;x_2&lt;\cdots&lt;x_n$. Define \begin{equation*} A=% \begin{bmatrix} 0 &amp; x_{2}-x_{1} &amp; \cdots &amp; x_{n-1}-x_{1} &amp; x_{n}-x_{1} \\ x_{2}-x_{1} &amp; 0 &amp; \cdots &amp; x_{n-1}-x_{2} &amp; x_{n}-x_{2} \\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots &amp; \vdots \\ x_{n-1}-x_{1} &amp; x_{n-1}-x_{2} &amp; \cdots &amp; 0 &amp; x_{n}-x_{n-1} \\ x_{n}-x_{1} &amp; x_{n}-x_{2} &amp; \cdots &amp; x_{n}-x_{n-1} &amp; 0% \end{bmatrix}% \end{equation*}</p> <p>Could you determine the determinant of $A$ in terms of $x_1,x_2,\ldots,x_n$?</p> <p>I made several calculations: For $n=2$, we get</p> <p>\begin{equation*} A=% \begin{bmatrix} 0 &amp; x_{2}-x_{1} \\ x_{2}-x_{1} &amp; 0% \end{bmatrix}% \text{ and }\det (A)=-\left( x_{2}-x_{1}\right) ^{2} \end{equation*}</p> <p>For $n=3$, we get</p> <p>\begin{equation*} A=% \begin{bmatrix} 0 &amp; x_{2}-x_{1} &amp; x_{3}-x_{1} \\ x_{2}-x_{1} &amp; 0 &amp; x_{3}-x_{2} \\ x_{3}-x_{1} &amp; x_{3}-x_{2} &amp; 0% \end{bmatrix}% \text{ and }\det (A)=2\left( x_{2}-x_{1}\right) \left( x_{3}-x_{2}\right) \left( x_{3}-x_{1}\right) \end{equation*}</p> <p>For $n=4,$ we get</p> <p>\begin{equation*} A=% \begin{bmatrix} 0 &amp; x_{2}-x_{1} &amp; x_{3}-x_{1} &amp; x_{4}-x_{1} \\ x_{2}-x_{1} &amp; 0 &amp; x_{3}-x_{2} &amp; x_{4}-x_{2} \\ x_{3}-x_{1} &amp; x_{3}-x_{2} &amp; 0 &amp; x_{4}-x_{3} \\ x_{4}-x_{1} &amp; x_{4}-x_{2} &amp; x_{4}-x_{3} &amp; 0% \end{bmatrix} \\% \text{ and } \\ \det (A)=-4\left( x_{4}-x_{1}\right) \left( x_{2}-x_{1}\right) \left( x_{3}-x_{2}\right) \left( x_{4}-x_{3}\right) \end{equation*} Finally, I guess that the answer is $\det(A)=(-1)^{n-1}\,2^{n-2}\cdot (x_n-x_1)\cdot (x_2-x_1)\cdots (x_n-x_{n-1})$ (the sign alternates, matching the computations above). But I don't know how to prove it.</p>
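The pattern, including the alternating sign visible in the $n=2$ and $n=4$ computations, can be checked by brute force for small $n$ (my own sketch, using the Leibniz expansion of the determinant):

```python
from itertools import permutations

def det(M):
    """Leibniz expansion of the determinant; fine for the small n tested here."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        # sign of the permutation via its inversion count
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        sign = -1 if inversions % 2 else 1
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

xs = [0, 1, 3, 7, 12]  # any increasing integers
for n in range(2, 6):
    x = xs[:n]
    A = [[abs(x[i] - x[j]) for j in range(n)] for i in range(n)]
    gaps = 1
    for i in range(n - 1):
        gaps *= x[i + 1] - x[i]
    conjecture = (-1) ** (n - 1) * 2 ** (n - 2) * (x[-1] - x[0]) * gaps
    assert det(A) == conjecture
```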
Dilawar
1,674
<p>Expanding on Robert's solution.</p> <p>Let $\det(A) = P(x)$, regarded as a multivariable polynomial in $x_1,\ldots,x_n$.</p> <p>If $x_1 = x_2$, then two rows of $A$ coincide, so $\det(A) = 0$, i.e. $P(x) = 0$, i.e. $(x_1 - x_2)$ is a factor of $P(x)$.</p> <p>If $x_2 = x_3$, then likewise $\det(A) = 0$, i.e. $(x_2 - x_3)$ is a factor of $P(x)$.</p> <p>Continuing in this way, we collect factors of $P(x)$. Have we found all of them?</p> <p>Let $Q(x) = (x_1 - x_2) (x_2 - x_3) \cdots (x_{n} - x_{1})$.</p> <p>What do we know about the degree of $P(x)$? It is $n$, equal to that of $Q(x)$. Thus $Q(x)$ multiplied by some constant should give us $P(x)$, i.e. we already have all the factors of $P(x)$.</p> <p>As Robert has already mentioned, it remains to compute this constant factor.</p> <p>It also follows that if $x_i = x_{i+1}$ for any $i$, then $P(x) = 0$, i.e. $\det(A) = 0$. Since you already have the constraints $x_1 &lt; x_2 &lt; \cdots &lt; x_n$, we get $\det(A) \ne 0$.</p>
2,022,700
<blockquote> <p>In how many ways can the letters in WONDERING be arranged with exactly two consecutive vowels?</p> </blockquote> <p>I solved this and got the answer $90720$, but other sites give different answers. Please help me understand which answer is right and where I am going wrong.</p> <p><strong>My Solution</strong></p> <p>Arrange the 6 consonants: $\dfrac{6!}{2!}$<br> Choose 2 slots from the 7 positions: $\dbinom{7}{2}$<br> Choose 1 of these slots for placing the 2-vowel group: $\dbinom{2}{1}$<br> Arrange the vowels: $3!$</p> <p>Required number of ways:<br> $\dfrac{6!}{2!}\times \dbinom{7}{2}\times \dbinom{2}{1}\times 3!=90720$</p> <p><strong>Solution taken from <a href="http://www.sosmath.com/CBB/viewtopic.php?t=6126" rel="nofollow noreferrer">http://www.sosmath.com/CBB/viewtopic.php?t=6126</a></strong></p> <p><a href="https://i.stack.imgur.com/7JNnA.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7JNnA.jpg" alt="enter image description here"></a></p> <p><strong>Solution taken from <a href="http://myassignmentpartners.com/2015/06/20/supplementary-3/" rel="nofollow noreferrer">http://myassignmentpartners.com/2015/06/20/supplementary-3/</a> <a href="https://i.stack.imgur.com/O1uLK.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O1uLK.jpg" alt=""></a></strong></p>
Community
-1
<p>The number of arrangements with 3 consecutive vowels is correctly explained in the original post: the number is $15120$.</p> <p>To find the number of arrangements with <em>at least</em> two consecutive vowels, we duct-tape two of them together (as in the original post) and arrive at $120960$.</p> <p>The problem with this calculation is that every arrangement with 3 consecutive vowels was double counted: once as $\overline{VV}V$ and again as $V\overline{VV}$. To compensate for this we must subtract $15120$. The correct number of arrangements with <em>at least</em> two consecutive vowels is $120960-15120=105840$.</p> <p>Therefore, the correct number of arrangements with <em>exactly</em> two consecutive vowels is $105840-15120=90720$.</p>
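The inclusion–exclusion above can be confirmed by brute force over all distinct arrangements of WONDERING (my own check): an arrangement has exactly two consecutive vowels precisely when its maximal vowel runs have lengths $2$ and $1$.

```python
from itertools import permutations

word = "WONDERING"
vowels = set("OEI")

def vowel_runs(arrangement):
    """Sorted lengths of the maximal runs of vowels in the arrangement."""
    runs, current = [], 0
    for ch in arrangement:
        if ch in vowels:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    return sorted(runs)

distinct = set(permutations(word))   # the repeated N leaves 9!/2! distinct tuples
assert len(distinct) == 181440

exactly_two = sum(1 for p in distinct if vowel_runs(p) == [1, 2])
assert exactly_two == 90720
```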
153,902
<p>Let $A_i$ be open subsets of $\Omega$. Then $A_0 \cap A_1$ and $A_0 \cup A_1$ are open sets as well.</p> <p>It follows that $\bigcap_{i=1}^N A_i$ and $\bigcup_{i=1}^N A_i$ are also open sets.</p> <p>My question is: does it follow that $\bigcap_{i \in \mathbb{N}} A_i$ and $\bigcup_{i \in \mathbb{N}} A_i$ are open sets as well?</p> <p>And what about $\bigcap_{i \in I} A_i$ and $\bigcup_{i \in I} A_i$ for uncountable $I$?</p>
talmid
19,603
<p>The union of <strong>any</strong> collection of open sets is open. Let $x \in \bigcup_{i \in I} A_i$, with $\{A_i\}_{i\in I}$ a collection of open sets. Then $x$ is an interior point of some $A_k$, so there is an open ball with center $x$ contained in $A_k$, and therefore contained in $\bigcup_{i \in I} A_i$; hence this union is open. Others have given a counterexample for the infinite intersection of open sets, which isn't necessarily open (e.g. $\bigcap_{n \in \mathbb{N}} \left(-\tfrac1n, \tfrac1n\right) = \{0\}$). By de Morgan's laws, the intersection of <strong>any</strong> collection of closed sets is closed (try to prove this), but consider the union of the singletons $\{x\}$ for $x\in (0,1)$, which is $(0,1)$, not closed. So the union of an infinite collection of closed sets isn't necessarily closed.</p>
425,969
<p>It seems striking that the cardinalities of <span class="math-container">$\aleph_0$</span> and <span class="math-container">$\mathfrak c = 2^{\aleph_0}$</span> each admit what I will call a &quot;homogeneous cyclic order&quot;, via the examples of <span class="math-container">$ℚ/ℤ$</span> and <span class="math-container">$ℝ/ℤ$</span>. By which I mean a cyclic order (as defined in <a href="https://ncatlab.org/nlab/show/cyclic+order" rel="nofollow noreferrer">https://ncatlab.org/nlab/show/cyclic+order</a>) such that for any two elements <span class="math-container">$x, y$</span> of the cardinal, there is a bijection of the cardinal to itself taking <span class="math-container">$x$</span> to <span class="math-container">$y$</span> and preserving the cyclic order.</p> <p>In ZFC, is there any reason to believe that either a) all infinite cardinals admit a homogeneous cyclic order, or b) there exists an infinite cardinal admitting no homogeneous cyclic order?</p>
Andreas Blass
6,794
<p>Adjoin to the theory of cyclic orders (as on the ncatlab page linked in the question) a 3-place function <span class="math-container">$f$</span> and axioms saying that, for each fixed <span class="math-container">$x$</span> and <span class="math-container">$y$</span>, the function <span class="math-container">$z\mapsto f(x,y,z)$</span> is a bijection that respects <span class="math-container">$R$</span> (i.e., an automorphism of the underlying <span class="math-container">$R$</span>-structure) and sends <span class="math-container">$x$</span> to <span class="math-container">$y$</span>. The resulting first-order theory, in the language <span class="math-container">$\{R,f\}$</span>, has infinite models (as noted in the question), so the upward and downward Löwenheim–Skolem theorems imply that it has models of all infinite cardinalities.</p>
3,256,646
<p>I find it really hard to find the range. I usually substitute the x's with y and then solve for y, but it does not always work for me. Do you have any advice?</p> <p>Function in question: </p> <p><span class="math-container">$$f(x) = \frac{e^{-2x}}{x}$$</span></p>
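A quick numeric sketch of the range question (my own computation, not part of the post): differentiating gives $f'(x) = e^{-2x}(-2x-1)/x^2$, so the only critical point is $x=-1/2$, where $f(-1/2)=-2e$ is a local maximum on the negative axis, while on $x>0$ the function decreases from $+\infty$ toward $0$.

```python
import math

def f(x):
    return math.exp(-2.0 * x) / x

# The only critical point is x = -1/2 (where f'(x) = e^{-2x}(-2x-1)/x^2
# vanishes); there f attains its local maximum -2e on the negative axis.
print(f(-0.5))        # -2e ≈ -5.43656

# On x > 0, f decreases from +infinity toward 0, so every positive value occurs.
print(f(0.01), f(1.0), f(100.0))
```

Together these observations suggest the range $(-\infty, -2e] \cup (0, \infty)$, which the derivative computation above (an assumption of this sketch, not stated in the question) makes precise.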
Community
-1
<p>Let <span class="math-container">$x\in A,y\in B$</span>. Then if <span class="math-container">$x,y\in C'\subset C$</span>, I claim <span class="math-container">$C'$</span> is disconnected. Define <span class="math-container">$A'=A\cap C',B'=B\cap C'$</span>. Then <span class="math-container">$A'$</span> and <span class="math-container">$B'$</span> disconnect <span class="math-container">$C'$</span>. That is, <span class="math-container">$A'\neq\emptyset, B'\neq\emptyset, A'\cup B'=C'$</span> and <span class="math-container">$A'\cap B'=\emptyset$</span>.</p>
606,356
<p>I would appreciate if somebody could help me with the following problem</p> <p>Q: The quartic equation $x^4+ax^3+bx^2+ax+1=0$ has four real roots $x=\frac{1}{\alpha^3},\frac{1}{\alpha},\alpha,\alpha^3(\alpha&gt;0)$ and $2a+b=14$.</p> <p>Find $a,b=?(a,b\in\mathbb{R})$</p>
David Holden
79,543
<p>lab has a point. if we write $x=\alpha +\frac1{\alpha}$ then we get $$ x^2+ax+(b-2)=0 $$ or $$ x= \frac12 \left(-a \pm \sqrt{a^2-4(b-2)}\right) $$ but for a 'nice' problem the $2a+b=14$ should simplify the surd, and it doesn't quite seem to. maybe my arithmetic is wrong, or maybe i'm just a hopeless optimist with a penchant for erroneous oversimplification. (well i know that, actually)</p>
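Carrying the substitution $x=\alpha+\frac1\alpha$ through numerically (my own continuation, not part of the original answer; when the algebra is done carefully the quadratic is $x^2+ax+(b-2)=0$): its two roots are $t_1=\alpha+\frac1\alpha$ and $t_2=\alpha^3+\frac1{\alpha^3}=t_1^3-3t_1$, so $a=-(t_1^3-2t_1)$ and $b=t_1^4-3t_1^2+2$, and the side condition $2a+b=14$ becomes a single polynomial equation in $t_1$ that bisection solves.

```python
# Sketch: express a, b through t = alpha + 1/alpha and impose 2a + b = 14.
def constraint(t):
    a = -(t**3 - 2.0 * t)
    b = t**4 - 3.0 * t**2 + 2.0
    return 2.0 * a + b - 14.0

lo, hi = 2.0, 4.0                      # alpha > 0 forces t = alpha + 1/alpha >= 2
for _ in range(80):                    # plain bisection
    mid = 0.5 * (lo + hi)
    if constraint(lo) * constraint(mid) <= 0.0:
        hi = mid
    else:
        lo = mid

t = 0.5 * (lo + hi)                    # t = 3, giving a = -21, b = 56
print(t, -(t**3 - 2 * t), t**4 - 3 * t**2 + 2)
```

The root $t=3$ gives $a=-21$, $b=56$, and one can check that $\alpha=\frac{3+\sqrt5}{2}$ (so $\alpha+\frac1\alpha=3$) is indeed a root of $x^4-21x^3+56x^2-21x+1$.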
365,631
<p>Suppose we want to prove that among some collection of things, at least one of them has some desirable property. Sometimes the easiest strategy is to equip the collection of all things with a measure, then show that the set of things with the desired property has positive measure. Examples of this strategy appear in many parts of mathematics.</p> <blockquote> <p><strong>What is your favourite example of a proof of this type?</strong></p> </blockquote> <p>Here are some examples:</p> <ul> <li><p><strong>The probabilistic method in combinatorics</strong> As I understand it, a typical pattern of argument is as follows. We have a set <span class="math-container">$X$</span> and want to show that at least one element of <span class="math-container">$X$</span> has property <span class="math-container">$P$</span>. We choose some function <span class="math-container">$f: X \to \{0, 1, \ldots\}$</span> such that <span class="math-container">$f(x) = 0$</span> iff <span class="math-container">$x$</span> satisfies <span class="math-container">$P$</span>, and we choose a probability measure on <span class="math-container">$X$</span>. Then we show that with respect to that measure, <span class="math-container">$\mathbb{E}(f) &lt; 1$</span>. It follows that <span class="math-container">$f^{-1}\{0\}$</span> has positive measure, and is therefore nonempty.</p> </li> <li><p><strong>Real analysis</strong> One example is <a href="http://www.artsci.kyushu-u.ac.jp/%7Essaito/eng/maths/Cauchy.pdf" rel="noreferrer">Banach's proof</a> that any measurable function <span class="math-container">$f: \mathbb{R} \to \mathbb{R}$</span> satisfying Cauchy's functional equation <span class="math-container">$f(x + y) = f(x) + f(y)$</span> is linear. 
Sketch: it's enough to show that <span class="math-container">$f$</span> is continuous at <span class="math-container">$0$</span>, since then it follows from additivity that <span class="math-container">$f$</span> is continuous everywhere, which makes it easy. To show continuity at <span class="math-container">$0$</span>, let <span class="math-container">$\varepsilon &gt; 0$</span>. An argument using Lusin's theorem shows that for all sufficiently small <span class="math-container">$x$</span>, the set <span class="math-container">$\{y: |f(x + y) - f(y)| &lt; \varepsilon\}$</span> has positive Lebesgue measure. In particular, it's nonempty, and additivity then gives <span class="math-container">$|f(x)| &lt; \varepsilon$</span>.</p> <p>Another example is the existence of real numbers that are <a href="https://en.wikipedia.org/wiki/Normal_number" rel="noreferrer">normal</a> (i.e. normal to every base). It was shown that almost all real numbers have this property well before any specific number was shown to be normal.</p> </li> <li><p><strong>Set theory</strong> Here I take ultrafilters to be the notion of measure, an ultrafilter on a set <span class="math-container">$X$</span> being a finitely additive <span class="math-container">$\{0, 1\}$</span>-valued probability measure defined on the full <span class="math-container">$\sigma$</span>-algebra <span class="math-container">$P(X)$</span>. Some existence proofs work by proving that the subset of elements with the desired property has measure <span class="math-container">$1$</span> in the ultrafilter, and is therefore nonempty.</p> <p>One example is a proof that for every measurable cardinal <span class="math-container">$\kappa$</span>, there exists some inaccessible cardinal strictly smaller than it. Sketch: take a <span class="math-container">$\kappa$</span>-complete ultrafilter on <span class="math-container">$\kappa$</span>. 
Make an inspired choice of function <span class="math-container">$\kappa \to \{\text{cardinals } &lt; \kappa \}$</span>. Push the ultrafilter forwards along this function to give an ultrafilter on <span class="math-container">$\{\text{cardinals } &lt; \kappa\}$</span>. Then prove that the set of inaccessible cardinals <span class="math-container">$&lt; \kappa$</span> belongs to that ultrafilter (&quot;has measure <span class="math-container">$1$</span>&quot;) and conclude that, in particular, it's nonempty.</p> <p>(Although it has a similar flavour, I would <em>not</em> include in this list the cardinal arithmetic proof of the existence of transcendental real numbers, for two reasons. First, there's no measure in sight. Second -- contrary to popular belief -- this argument leads to an <em>explicit construction</em> of a transcendental number, whereas the other arguments on this list do not explicitly construct a thing with the desired properties.)</p> </li> </ul> <p>(Mathematicians being mathematicians, someone will probably observe that <em>any</em> existence proof can be presented as a proof in which the set of things with the required property has positive measure. Once you've got a thing with the property, just take the Dirac delta on it. But obviously I'm after less trivial examples.)</p> <p><strong>PS</strong> I'm aware of the earlier question <a href="https://mathoverflow.net/questions/34390">On proving that a certain set is not empty by proving that it is actually large</a>. That has some good answers, a couple of which could also be answers to my question. But my question is specifically focused on <em>positive measure</em>, and excludes things like the transcendental number argument or the Baire category theorem discussed there.</p>
user161212
161,212
<p>A very famous and important theorem in the theory of metric embeddings is known as &quot;Assouad's Embedding Theorem&quot;. It concerns <em>doubling</em> metric spaces: metric spaces for which there is a constant <span class="math-container">$D$</span> such that every ball can be covered by <span class="math-container">$D$</span> balls of half the radius.</p> <blockquote> <p><strong>Theorem (Assouad, 1983)</strong>: For every <span class="math-container">$\epsilon\in (0,1)$</span> and <span class="math-container">$D&gt;0$</span>, there are constants <span class="math-container">$L$</span> and <span class="math-container">$N$</span> such that if <span class="math-container">$(X,d)$</span> is doubling with constant <span class="math-container">$D$</span>, then the metric space <span class="math-container">$(X,d^\epsilon)$</span> admits an <span class="math-container">$L$</span>-bi-Lipschitz embedding into <span class="math-container">$\mathbb{R}^N$</span>.</p> </blockquote> <p>This theorem is widely used throughout metric geometry and analysis on metric spaces. (See, e.g., <a href="https://link.springer.com/article/10.1007/s000390050009" rel="noreferrer">here</a> or <a href="http://www.pitt.edu/%7Ehajlasz/OriginalPublications/Hajlasz-Whitney-PAMS-131-2003-3463-3467.pdf" rel="noreferrer">here</a>.)</p> <p>An <span class="math-container">$L$</span>-bi-Lipschitz embedding is simply an embedding that preserves all distances up to factor <span class="math-container">$L$</span>. It's easy to see that the doubling condition is necessary for this theorem to hold. 
Moreover, there are known doubling metric spaces (the Heisenberg group for one) that are doubling but do not admit a bi-Lipschitz embedding into any Euclidean space, so one cannot allow <span class="math-container">$\epsilon=1$</span> in Assouad's theorem.</p> <p>This means, of course, that the constants <span class="math-container">$L$</span> and <span class="math-container">$N$</span> must blow up as <span class="math-container">$\epsilon\rightarrow 1$</span>, and this is reflected in Assouad's proof.</p> <p>Except, that's not quite true. In a really surprising construction, Naor and Neiman showed in 2012 that the dimension <span class="math-container">$N$</span> in Assouad's theorem can be chosen <em>independent</em> of the &quot;snowflake&quot; parameter <span class="math-container">$\epsilon$</span> as <span class="math-container">$\epsilon\rightarrow 1$</span>. (The distortion <span class="math-container">$L$</span> must necessarily blow up in general.) In other words, one need not use too many dimensions for the embedding, no matter how close <span class="math-container">$\epsilon$</span> gets to <span class="math-container">$1$</span>. I believe this shocked many people.</p> <p>The construction of Naor and Neiman is probabilistic: they construct a random Lipschitz map from <span class="math-container">$(X,d^\epsilon)$</span> into <span class="math-container">$\mathbb{R}^N$</span>, and show that it is bi-Lipschitz with positive probability. The proof is also a nice application of the Lovász Local Lemma to geometry.</p> <p>Assouad's paper: <a href="http://www.numdam.org/article/BSMF_1983__111__429_0.pdf" rel="noreferrer">http://www.numdam.org/article/BSMF_1983__111__429_0.pdf</a></p> <p>Naor-Neiman's paper: <a href="https://www.cs.bgu.ac.il/%7Eneimano/Naor-Neiman.pdf" rel="noreferrer">https://www.cs.bgu.ac.il/~neimano/Naor-Neiman.pdf</a></p>
97,449
<p>I am trying to compute $\chi(\mathbb{C}\mathrm{P}^2)$ using only elementary techniques from differential topology and this is proving to be trickier than I thought. I am aware of the usual proof for this result, which uses the cellular decomposition of $\mathbb{C}\mathrm{P}^2$ to get $\chi(\mathbb{C}\mathrm{P}^2) = 3$, but I would like to find a proof of this result that relies on concepts like indices of isolated zeros on a vector field. So for the purposes of this question, I would like to utilize the following definition of the Euler characteristic: For a closed orientable manifold $M$ we define $\chi(M) = \sum_i \mathrm{Ind}_{d_i} \mathrm{v}$ where $\mathrm{v}$ is a vector field on $M$ with isolated zeros.</p> <p>In my first attempt at this problem I thought about finding a vector field $\tilde{\mathrm{v}}$ on $S^5$, and then using the identification $\mathbb{C}\mathrm{P}^2 \cong S^5/\mathrm{U}(1)$, seeing if $\tilde{\mathrm{v}}$ descended to a vector field on $\mathbb{C}\mathrm{P}^2$ with isolated zeros that lent itself to computing the Euler characteristic. I had difficulty making this work out, so I am unsure if this is a good approach to tackling the problem. Any insights?</p>
Igor Rivin
11,142
<p>By the way, a very cool (to my mind) way of computing the Euler characteristic of $\mathbb{C}P^n$ is to treat it as the $n$-fold symmetric product of $\mathbb{C}P^1 = \mathbb{S}^2$ with itself. Then, it is apparently a result of MacDonald (of Atiyah and M fame) that the Euler characteristic of an $n$-fold symmetric product of a space $X$ with itself equals $\binom{\chi(X)+n-1}{\chi(X) - 1}.$</p> <p><strong>EDIT</strong> Following @Qiaochu's penetrating remark, I looked up MacDonald's paper (I was citing from secondary sources before, shame on me). The result is: </p> <p>The $k$-th Betti number of the $n$th symmetric power of $X$ is the coefficient of $x^k t^n$ in $\prod_i (1- (-x)^i t)^{- (-1)^i (-B_i)},$ where the $B_i$ are the Betti numbers of $X$. So, for the Euler characteristic you evaluate this product at $x=-1,$ and find the coefficient of $t^n.$ </p>
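As a sanity check of the Euler-characteristic formula quoted above (my own check, not part of the answer): taking $X=S^2$ with $\chi=2$, the binomial gives $\binom{n+1}{1}=n+1$, matching the well-known $\chi(\mathbb{C}P^n)=n+1$ from the cellular decomposition.

```python
from math import comb

def chi_sym_power(chi_X, n):
    # Macdonald's Euler-characteristic formula as quoted in the answer
    return comb(chi_X + n - 1, chi_X - 1)

# Sym^n(S^2) = CP^n and chi(S^2) = 2, so we should recover chi(CP^n) = n + 1:
for n in range(1, 8):
    assert chi_sym_power(2, n) == n + 1
print([chi_sym_power(2, n) for n in range(1, 8)])  # [2, 3, 4, 5, 6, 7, 8]
```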
2,509,095
<p>Is there a very simple test to check if a line <em>segment</em> in $3D$ space cuts a plane? It is assumed we have the coordinates of the endpoints of the line segment, so $p_1,p_2$ and that we have the equation of the plane: $z = d$ (so for simplicity we're assuming it's a plane orthogonal to the z-axis).</p>
Endre Moen
413,323
<p>So the plane is given by $ax + by + 0z - d = 0$. Using the parametric form you must first write the line equation in each coordinate, e.g. $x=t,\ y=2+3t,\ z=t$.</p> <p>Then you plug that into the plane equation: $a(t) + b(2+3t) + 0(t) - d = 0$. Solve for $t$ and plug that value into the 3 equations given for the line.</p> <p>E.g. let $a=2, b=1, d=7$ (we don't care about $c$; it is $0$ here). Then the plane is given by $2x + y - 7 = 0$, and plugging in the line: $2(t)+1(2+3t)-7=0 \Rightarrow t=1$, so the intersection point is $(1,5,1)$.</p> <p>If there is one solution, that's the intersection; if you get $0t = k$ for a nonzero constant $k$, there is no solution and the line is parallel to the plane; and if you get $0t = 0$, every value of $t$ is a solution and the line lies in the plane.</p>
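For the special plane $z=d$ from the question, the test reduces to a sign check on the endpoints' $z$-coordinates; a minimal sketch of my own (assuming the plane really is $z=d$ and "cuts" allows an endpoint lying on the plane):

```python
def segment_crosses_plane_z(p1, p2, d):
    """True if the segment from p1 to p2 meets the plane z = d."""
    s1, s2 = p1[2] - d, p2[2] - d
    return s1 * s2 <= 0.0   # endpoints on opposite sides, or one on the plane

print(segment_crosses_plane_z((0, 0, 0), (1, 1, 2), 1.0))    # True
print(segment_crosses_plane_z((0, 0, 0), (1, 1, 0.5), 1.0))  # False
```

For a general plane $ax+by+cz=d$ the same idea works with the signed values $a x_i + b y_i + c z_i - d$ at the two endpoints.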
438,336
<p>This is a two part question:</p> <p>$1$: If three cards are selected at random without replacement from a deck of $52$ cards, what is the probability that all three are Kings?</p> <p>$2$: Can you please explain to me in layman's terms what is the difference between with and without replacement?</p> <p>Thanks guys!</p>
Community
-1
<p>Hint: What's the probability that the first card would be a King? The second? And the third?</p> <p>With replacement means that after you draw the card you put it back into the deck, then re-draw the next one completely random.</p>
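A quick numeric companion to the hint (my own numbers, not part of the answer): without replacement the three draws give $\frac{4}{52}\cdot\frac{3}{51}\cdot\frac{2}{50}$, while with replacement each draw is independently $\frac{4}{52}$.

```python
from math import comb

# Without replacement: the deck shrinks and kings are used up as you draw.
p_without = (4 / 52) * (3 / 51) * (2 / 50)
# Equivalent counting argument: choose 3 of the 4 kings among all 3-card hands.
assert abs(p_without - comb(4, 3) / comb(52, 3)) < 1e-15

# With replacement: the card goes back, so every draw is an independent 4/52.
p_with = (4 / 52) ** 3

print(round(p_without, 6))  # 0.000181
print(round(p_with, 6))     # 0.000455
```

As expected, replacement makes three kings slightly more likely, because the first two successes no longer deplete the supply of kings.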
3,068,031
<blockquote> <p>Let <span class="math-container">$G$</span> be a group and <span class="math-container">$H$</span> be a subgroup of <span class="math-container">$G$</span>. Let also <span class="math-container">$a,~b\in G$</span> such that <span class="math-container">$ab\in H$</span>.</p> <p>True or false? <span class="math-container">$a^2b^2\in H.$</span></p> </blockquote> <p><em>Attempt.</em> I believe the answer is no (I have proved that the statement is true for normal subgroups, but it seems that there is no need for it to hold for arbitrary subgroups). I was looking for a counterexample in a non abelian group of small order, such as <span class="math-container">$S_3$</span>, or <span class="math-container">$S_4$</span>, but I couldn't find a suitable combination of <span class="math-container">$H\leq S_n$</span>, <span class="math-container">$\sigma$</span> and <span class="math-container">$\tau\in S_n$</span> such that <span class="math-container">$\sigma \tau \in H$</span> and <span class="math-container">$\sigma^2 \tau^2 \notin H.$</span></p> <p>Thanks in advance for the help.</p>
Lee Mosher
26,501
<p>Take <span class="math-container">$G$</span> to be the <a href="https://en.wikipedia.org/wiki/Free_group" rel="noreferrer">free group</a> on <span class="math-container">$a,b$</span>, whose elements are the reduced words in the alphabet <span class="math-container">$a,b,a^{-1},b^{-1}$</span>. </p> <p>Take <span class="math-container">$H$</span> to be the cyclic subgroup generated by <span class="math-container">$ab$</span>. </p> <p>The non-identity elements of <span class="math-container">$H$</span> are the reduced words <span class="math-container">$(ab)^n$</span> for <span class="math-container">$n \ge 1$</span> and <span class="math-container">$(b^{-1}a^{-1})^n$</span> for <span class="math-container">$n \ge 1$</span>. </p> <p>Since the reduced word <span class="math-container">$a^2 b^2$</span> does not have that form, it follows that <span class="math-container">$a^2 b^2 \not\in H$</span>.</p>
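The reduced-word bookkeeping can be made concrete (a finite sketch of my own that only checks small powers; uppercase letters stand for inverses):

```python
def free_reduce(word):
    """Freely reduce a word over a, b with A = a^-1 and B = b^-1."""
    inverse = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}
    out = []
    for letter in word:
        if out and out[-1] == inverse[letter]:
            out.pop()                 # cancel an adjacent x x^{-1} pair
        else:
            out.append(letter)
    return ''.join(out)

# Nontrivial elements of H = <ab> are (ab)^n and (b^-1 a^-1)^n = (BA)^n.
H_small = {free_reduce('ab' * n) for n in range(1, 20)} | \
          {free_reduce('BA' * n) for n in range(1, 20)}

assert free_reduce('ab') in H_small        # ab lies in H
assert free_reduce('aabb') not in H_small  # a^2 b^2 matches no small power of ab
```

Of course the actual proof needs no computer: `aabb` is already reduced and visibly not of the form `abab...ab` or `BABA...BA`.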
2,933,375
<p>I have a set of vectors, <span class="math-container">$M_1$</span> which is defined as the following: <span class="math-container">$$M_1:=[\begin{pmatrix}1 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix}0 \\ 1 \\ 1 \end{pmatrix}]$$</span> I have to show that <span class="math-container">$M_1$</span> isn't a generating set of <span class="math-container">$\mathbb R^3$</span>, even though it's linearly independent. My initial idea was that, because <span class="math-container">$$\begin{pmatrix}1 \\ 0 \\ 0 \end{pmatrix}≠a\cdot\begin{pmatrix}1 \\ 0 \\ 1 \end{pmatrix}+ b \cdot\begin{pmatrix}0 \\ 1 \\ 1 \end{pmatrix}$$</span> therefore <span class="math-container">$M_1$</span> isn't a generating set of <span class="math-container">$\mathbb R^3$</span>. However I would like to know if there is any other way to show that <span class="math-container">$M_1$</span> is not a generating set of <span class="math-container">$\mathbb R^3$</span>.</p>
hmakholm left over Monica
14,366
<p>Some people even write things like <span class="math-container">$$\exists z \text{ s.t. } \forall y, y\notin z$$</span> Personally I think this is a horrible practice. One can think what one wants about how it wastes space, but more importantly it reinforces the dangerous <strong>misconception</strong> that logical symbols are just shorthand for words in English sentences. They are not; they make up a separate language with its own syntax and semantics, and the way we usually <em>pronounce</em> it with English words can misrepresent that semantics. (Consider for example how many beginning students, and sometimes textbook authors, who get themselves into contortions trying to understand the truth table of <span class="math-container">$\Rightarrow$</span> as if it ought to be forced by the English words "if" and "then").</p> <p>The "just shorthand for English words" mistake is also what makes people sometimes put quantifiers <em>last</em>, which leads to horrors such as <span class="math-container">$$ \exists z \text{ s.t. }y\notin z,\; \forall y $$</span> where we have completely lost the information about whether <span class="math-container">$z$</span> is allowed to depend on <span class="math-container">$y$</span> or not -- is it <span class="math-container">$$ \exists z \text{ s.t. }(y\notin z,\; \forall y) \qquad\text{ or }\qquad (\exists z \text{ s.t. }y\notin z),\; \forall y\;? $$</span> Succinct clarity about these matters is a big part of why we use symbolic quantifiers at all in the first place!</p> <p>So always put quantifiers <em>before</em> the formula they range over. And eschew punctuation that pretends symbolic logic is a way to write down English.</p>
2,933,375
<p>I have a set of vectors, <span class="math-container">$M_1$</span> which is defined as the following: <span class="math-container">$$M_1:=[\begin{pmatrix}1 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix}0 \\ 1 \\ 1 \end{pmatrix}]$$</span> I have to show that <span class="math-container">$M_1$</span> isn't a generating set of <span class="math-container">$\mathbb R^3$</span>, even though it's linearly independent. My initial idea was that, because <span class="math-container">$$\begin{pmatrix}1 \\ 0 \\ 0 \end{pmatrix}≠a\cdot\begin{pmatrix}1 \\ 0 \\ 1 \end{pmatrix}+ b \cdot\begin{pmatrix}0 \\ 1 \\ 1 \end{pmatrix}$$</span> therefore <span class="math-container">$M_1$</span> isn't a generating set of <span class="math-container">$\mathbb R^3$</span>. However I would like to know if there is any other way to show that <span class="math-container">$M_1$</span> is not a generating set of <span class="math-container">$\mathbb R^3$</span>.</p>
Rob Arthan
23,171
<p>In my experience, mathematical logicians never use the form with ":", but split into two camps as regards the use of a ".": many people don't use a "." after quantifiers and take the quantifiers to have high precedence, so that <span class="math-container">$\forall x\forall y(x &gt; y \implies x \ge y + 1)$</span> requires the brackets to make <span class="math-container">$x \ge y + 1$</span> fall in the scope of the universal quantifiers; others use a "." or a "<span class="math-container">$\bullet$</span>" and take it as indicating that the quantifiers have low precedence, so they would write <span class="math-container">$\forall x.\forall y.x &gt; y \implies x \ge y + 1$</span> (with the scope of the quantifiers extending as far to the right as possible). The former usage is fairly standard in the traditional logic literature, while the latter is perhaps more common among computer scientists and is often adopted in the syntax for proof assistants like HOL (and saves a lot of brackets in my experience of using such systems).</p>
2,704,770
<p>I need help in calculating this strange limit.</p> <p>$$ \lim_{n \to \infty} n^2 \int_{0}^{\infty} \frac{\sin(x)}{(1 + x)^n} dx $$</p>
vrugtehagel
304,329
<p>We can substitute $x=\frac{t}{n}$:</p> <p>$$\lim_{n \to \infty} n^2 \int_{0}^{\infty} \frac{\sin(x)}{(1 + x)^n} dx=\lim_{n \to \infty} n^2 \int_{0}^{\infty} \frac{\sin(t/n)}{(1 + \frac tn)^n}\frac{1}{n}dt$$</p> <p>Now in the denominator we get $(1+\frac{t}{n})^n$, and we're taking the limit $n\to\infty$; this becomes $e^t$, so that we get (also take the $\frac 1n$ out of the integral):</p> <p>$$\lim_{n \to \infty} n \int_{0}^{\infty} \frac{\sin(t/n)}{e^t}dt$$</p> <p>Now also see that $\lim_{n\to\infty}n\sin(\frac{t}{n})=t$ so that we get</p> <p>$$\lim_{n \to \infty}\int_{0}^{\infty} \frac{t}{e^t}dt$$</p> <p>Now the $n$'s are gone so we can write</p> <p>$$\int_{0}^{\infty} \frac{t}{e^t}dt$$</p> <p>(Pushing the limit inside the integral is justified by dominated convergence: for $n\ge 3$ the integrand is bounded by the integrable function $t\,(1+\frac t3)^{-3}$, since $|n\sin(t/n)|\le t$ and $(1+\frac tn)^n$ increases in $n$.)</p> <p>Can you take it from here?</p>
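A numerical check of the claimed value (my own, not part of the answer): since $\int_0^\infty t e^{-t}\,dt = 1$, the scaled integrals $n^2\int_0^\infty \sin(x)(1+x)^{-n}\,dx$ should approach $1$ as $n$ grows.

```python
import math

def scaled_integral(n, steps=200_000):
    # Composite midpoint rule on [0, 1]; the tail beyond x = 1 is negligible
    # because (1 + x)^(-n) <= 2^(-n) there.
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        total += math.sin(x) * (1.0 + x) ** (-n)
    return n * n * total * h

for n in (50, 200, 800):
    print(n, round(scaled_integral(n), 4))   # tends to 1 as n grows
```

The convergence is roughly like $1 + 3/n$, consistent with the exact value $n^2/((n-1)(n-2))$ of the leading term $\int_0^\infty x(1+x)^{-n}dx$.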
68,428
<p>I am looking at the description of LTI systems in the time domain.</p> <p>Intuitively, I'd have guessed it would be the composition of the input function and some "system function". $$ y(t) = f(x(t)) = (f\circ x)(t)$$ Where $x(t)$ is the input, $y(t)$ output and $f(x)$ a "system function".</p> <p>Why is it not that way? Could such a "system function" be found for, say, an R-C-Circuit?</p> <p>The actual output function y(t), is defined as $$ y(t) = (h * x)(t) $$ Where $h(t)$ is the response to a dirac impulse. This is hard to grasp for me. Why is it so? I have looked at various explanations, drawings of rectangles becoming infinitely narrow, which I sort of understood, but it is still "hard to grasp"! I am looking for a simple explanation in one or two sentences here.</p> <p><a href="http://en.wikipedia.org/wiki/LTI_system_theory" rel="nofollow">http://en.wikipedia.org/wiki/LTI_system_theory</a></p>
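A discrete-time sketch of $y=(h*x)$ for the R-C circuit mentioned in the question (my own illustration; the component value, step input, and sample counts are assumptions): the impulse response of an RC low-pass filter is $h(t)=\frac{1}{RC}e^{-t/RC}$, and convolving it with a unit step reproduces the familiar charging curve $1-e^{-t/RC}$ — something no single "system function" $f$ applied pointwise to $x(t)$ could do, because the output at time $t$ depends on the whole past of the input.

```python
import math

dt, rc, n = 0.001, 0.05, 1000            # time step, RC constant, sample count
h = [math.exp(-k * dt / rc) / rc for k in range(n)]   # sampled impulse response
x = [1.0] * n                                          # unit-step input

# y[k] = sum_j h[j] x[k-j] dt  -- the discrete convolution (h * x)(t_k)
y = []
for k in range(n):
    y.append(dt * sum(h[j] * x[k - j] for j in range(k + 1)))

# After many time constants the output settles near 1, as 1 - exp(-t/RC) predicts.
print(round(y[-1], 3))   # ≈ 1.01 (1 plus a small discretization error)
```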
Mike Stay
756
<p>Here's a partial answer: in the case of an endofunctor $F$ on a discrete category $C$ (i.e. $F$ is a function), the coend of $F$ gives the <em>set</em> of fixpoints rather than the <em>number</em>: A profunctor $F:C \not\to C$ adds extra morphisms to $C$ so that the result is still a category. I'll say these morphisms are "in $F$". The coend of $F$ is the set of endomorphisms in $F$ mod conjugation by the morphisms in $C$; since the morphisms of $C$ are all identities, we just get the set of endomorphisms in $F$, i.e. fixed points of $F$.</p> <p>The categorical trace doesn't reduce to anything useful in the case of a discrete category. A natural transformation $\alpha:1_C \Rightarrow F$ chooses for each $c \in C$ a morphism $\alpha_c:c \to Fc$ in $C$. Since we're assuming all the morphisms in $C$ are identities, $\alpha_c$ can't exist unless $Fc = c$. So it looks to me like the set hom$(1_C, F)$ is empty unless $F$ is the identity functor on $C$, in which case it's the terminal set.</p>
4,228,826
<p>Consider the inequality <span class="math-container">$$ 1-\frac{x}{2}-\frac{x^2}{2} \le \sqrt{1-x} &lt; 1-\frac{x}{2} $$</span> for <span class="math-container">$0 &lt; x &lt; 1$</span>. The upper bound can be read off the Taylor expansion for <span class="math-container">$\sqrt{1-x}$</span> around <span class="math-container">$0$</span>, <span class="math-container">$$ \sqrt{1-x} = 1 - \frac{x}{2} - \frac{x^2}{8} - \frac{x^3}{16} - \dots $$</span> by noting that all the non linear terms are negative. Can the left side inequality be read-off the expansion by a similar reasoning? Please do not try to prove the left side inequality by other means (such as minimizing <span class="math-container">$\sqrt{1-x} - 1 + \frac{x}{2} + \frac{x^2}{2}$</span> using derivatives).</p>
Michael Rozenberg
190,319
<p>We need to prove that <span class="math-container">$$(2-x-x^2)^2\leq4(1-x),$$</span> which is <span class="math-container">$$x^2(3+x)(1-x)\geq0.$$</span></p>
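The algebra can be spot-checked numerically (my own check, not part of the answer): the difference $4(1-x)-(2-x-x^2)^2$ factors exactly as $x^2(3+x)(1-x)$, which is non-negative on $[0,1]$.

```python
# Verify 4(1-x) - (2 - x - x^2)^2 == x^2 (3 + x)(1 - x) on a grid of points.
for k in range(101):
    x = k / 100.0
    lhs = 4.0 * (1.0 - x) - (2.0 - x - x * x) ** 2
    rhs = x * x * (3.0 + x) * (1.0 - x)
    assert abs(lhs - rhs) < 1e-12
print("identity holds on the [0, 1] grid")
```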
1,134,145
<p>A set S is bounded if every point in S lies inside some circle |z| = R other it is unbound. Without appealing to any limit laws, theorems, or tools from calculus, prove or disprove that the set {$\frac{z}{z^2 + 1}$; z in R} is bounded.</p> <p>I imagine that it's simple, but I have no clue where to start due to the restrictions. Thanks</p>
Brian Rushton
51,970
<p>Can you prove that $\frac{z}{z^2+1}$ is less than some given number? For instance, it is not too hard to show that it is less than 1/2 (hint: start with $\frac{z}{z^2+1}&lt;1/2$ and cross-multiply).</p>
393,712
<p>I studied elementary probability theory. For that, density functions were enough. What is a practical necessity to develop measure theory? What is a problem that cannot be solved using elementary density functions?</p>
Chris Evans
78,301
<p>The standard answer is that measure theory is a more natural framework to work in. After all, in probability theory you are concerned with assigning probabilities to events (sets)... so you are dealing with functions whose inputs are sets and whose outputs are real numbers. This leads to sigma-algebras and measure theory if you want to do rigorous analysis.</p> <p>But for the more practically-minded, here are two examples where I find measure theory to be more natural than elementary probability theory:</p> <p>1) Suppose X~Uniform(0,1) and Y=cos(X). What does the joint-density of (X,Y) look like? What is the probability that (X,Y) lies in some set A? This <em>can</em> be handled with delta-functions but personally I find measure theory to be more natural.</p> <p>2) Suppose you want to talk about choosing a random continuous <em>function</em> (element of C(0,1) say). To define how you make this random choice you would like to give a p.d.f. but what would that look like? (The technical issue here is that this space of continuous functions is infinite dimensional and so Lebesgue measure cannot be defined). This problem is very natural in the field of Stochastic Processes including Financial Mathematics -- a stock price can be thought of as a random function. Under the measure theory framework you talk in terms of probability measures instead of p.d.f.'s and so infinite dimensions do not pose an obstacle.</p>
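Example 1 can be made concrete with a quick Monte Carlo (my own illustration, with a specific event chosen for the check): the pair $(X,\cos X)$ lives on a curve in the plane, so it has no two-dimensional density, yet probabilities such as $P(Y>0.8)$ are easy — since $\cos$ is decreasing on $(0,1)$, $P(\cos X > 0.8) = \arccos(0.8)$.

```python
import math
import random

random.seed(0)

exact = math.acos(0.8)             # P(cos X > 0.8) for X ~ Uniform(0, 1)
trials = 100_000
hits = sum(1 for _ in range(trials) if math.cos(random.random()) > 0.8)
estimate = hits / trials

print(round(exact, 4))             # 0.6435
print(round(estimate, 4))          # close to 0.6435
```

In measure-theoretic language: the joint law of $(X,Y)$ is a perfectly good probability measure on $\mathbb{R}^2$, concentrated on the curve $\{(x,\cos x): 0<x<1\}$, even though no joint p.d.f. exists.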
3,188,298
<blockquote> <p>Let <span class="math-container">$f: X\to Y$</span> be bijective, and <span class="math-container">$f^{-1}: Y\to X$</span> be it's inverse. If <span class="math-container">$V\subseteq Y$</span>, show that the forward image of <span class="math-container">$V$</span> under <span class="math-container">$f^{-1}$</span> is the same set as the inverse image of <span class="math-container">$V$</span> under <span class="math-container">$f$</span>.</p> </blockquote> <p>I have interpreted this as: show that <span class="math-container">$f(f^{-1}(V))=f^{-1}(f(V))$</span></p> <p>I really do not know what to do from here.</p>
Martin Argerami
22,857
<p>No, what they want you to show is that the image of <span class="math-container">$f^{-1} $</span>: <span class="math-container">$$\tag1 (f^{-1})(V)=\{f^{-1}(v):\ v\in V\} $$</span> is equal to the preimage of <span class="math-container">$V $</span> under <span class="math-container">$f $</span>: <span class="math-container">$$\tag2 f^{-1}(V)=\{x\in X:\ f (x)\in V\}. $$</span> The notation is unfortunate in this case, but it is normally used for <span class="math-container">$(2) $</span>.</p>
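The two sets $(1)$ and $(2)$ can be compared on a toy bijection (a sketch of my own, not part of the answer):

```python
f = {1: 'a', 2: 'b', 3: 'c'}                 # a bijection X -> Y
f_inv = {v: k for k, v in f.items()}          # its inverse Y -> X
V = {'a', 'c'}

forward_image_under_f_inv = {f_inv[v] for v in V}     # set (1)
preimage_under_f = {x for x in f if f[x] in V}        # set (2)

assert forward_image_under_f_inv == preimage_under_f == {1, 3}
print(forward_image_under_f_inv)   # {1, 3}
```

The exercise is to prove this equality in general, using only that $f$ is a bijection.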
78,443
<p>Let $F$ be a number field and $A$ an abelian variety over $F$. It is known that if $A$ has complex multiplication, then it has potentially good reduction everywhere, namely there exists a finite extension $L$ of $F$ such that $A_L$ has good reduction over every prime of $L$.</p> <p>And what about the inverse: if $A$ is known to be of potential good reduction everywhere, how far is it from having complex multiplication?</p> <p>As the reduction behavior is determined by the Galois representations of the decompositon groups, one can reformulate the problem as follows: let $A$ be an abelian variety over $F$, $p$ a fixed rational prime, $V$ the p-adic Tate module of $A$; and for $\lambda$ primes of $F$, $\rho_\lambda$ is the $p$-adic representation on $V$ of the decomposition group $G_\lambda$ at $\lambda$. If $\rho_\lambda$ is potentially unramified for $\lambda$ not dividing $p$, and potentially cristalline for $\lambda$ dividing $p$, do we know that the global Galois representation $\rho$ on $V$ is potentially abelian, i.e. when shifting to some open subgroup, the image of $\rho$ is contained in a torus of $GL_{\mathbb{Q}_p}(V)$? What do we know about the Fontaine-Mazur conjecture in this case?</p> <p>Thanks!</p>
user2146
2,146
<p>A key feature of the Nisnevich topology is that as a cd-structure (cf. [Voevodsky, Homotopy theory of simplicial sheaves in completely decomposable topologies]) it is complete and regular. This implies what Lurie calls Nisnevich excision in DAG XI. The proof of this "excision" relies on the existence of a "splitting sequence" (cf. [Morel-Voevodsky, A^1-Homotopy Theory of Schemes, Lemma 3.1.5]) for any given Nisnevich covering.</p> <p>Def.:(MV) A splitting sequence for a covering family $\{p_{\alpha}:Spec(R_\alpha)\to Spec(R)\}$ is a sequence of closed subsets of $Spec(R)$ of the form $$ \emptyset = Z_{n+1}\subset Z_n\subset \ldots \subset Z_0=Spec(R) $$ such that for $i=0,\ldots,n$ the morphism $\coprod_\alpha (p_\alpha)^{-1}(Z_i\setminus Z_{i+1})\to Z_i\setminus Z_{i+1}$ splits.</p> <p>This existence statement needs the space which is covered to be noetherian. Lurie drops the noetherian requirement and pays for the splitting sequence, which doesn't come for free any longer. You find the non-affine situation in section 3.1 of the Morel-Voevodsky paper. </p> <p>However, here we have $Z_i:=V(a_1,\ldots,a_{i-1})$ and the first condition above says that $Z_{n+1}= \emptyset$ and the second condition gives a splitting on</p> <p>$$ Z_i\setminus Z_{i+1}= V(a_1,\ldots,a_{i-1})\cap (V(a_1,\ldots,a_i))^c$$ $$= V(a_1,\ldots,a_{i-1})\cap (V(a_1,\ldots,a_{i-1})\cap V(a_i))^c$$ $$= V(a_1,\ldots,a_{i-1})\cap (V(a_1,\ldots,a_{i-1})^c\cup D(a_i))$$ $$= V(a_1,\ldots,a_{i-1})\cap D(a_i)$$ $$= Spec(R[a_i^{-1}]/(a_1,\ldots,a_{i-1}))$$</p>
1,253,687
<p>I don't know how to solve this one and the question is:</p> <p>Find the values of $a$ for which $y = x^3 + ax^2 + 3x + 1$ is always increasing.</p> <p>My solution is:</p> <p>$y'= 3x^2 + 2ax + 3$</p> <p>I know that if $y' \ge 0$, $y$ should be always increasing. I don't know how to make it true. Please help and explain and thank you in advance!</p> <p>Edit: I saw another solution but cannot understand it.</p> <p>$D/4 = a^2 - 9$</p> <p>What does the $D$ stand for and why divide it by $4$? Also, where does the $a^2 - 9$ come from?</p> <p>Also, how do you write exponents?</p>
user156213
156,213
<p>Try writing it in a different form:</p> <p>$$y'=3(x^2+2ax/3+1)=3((x+a/3)^2-a^2/9+1)$$ Since $(x+a/3)^2$ is always non-negative, we have $y'\ge 0$ for every $x$ exactly when $$-a^2/9+1 \ge 0$$ so $$a^2 \le 9\to -3\le a\le 3$$ and the answer is $$a\in [-3,3].$$ (At $a=\pm 3$ the derivative vanishes only at the single point $x=\mp 1$, so $y$ is still increasing there.)</p> <p>As for the other solution you saw: $D$ is the discriminant of the quadratic $y'=3x^2+2ax+3$, namely $D=(2a)^2-4\cdot 3\cdot 3=4(a^2-9)$, so $D/4=a^2-9$. The condition "$y'\ge 0$ for all $x$" is exactly $D\le 0$, which again gives $-3\le a\le 3$.</p>
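A quick numeric check of the completed square (my own check, not part of the answer): the minimum of $y'$ over $x$ is $3-a^2/3$, attained at $x=-a/3$, so $y'$ stays non-negative exactly when $|a|\le 3$.

```python
def yprime(a, x):
    return 3 * x * x + 2 * a * x + 3

# The minimum of y' over x is 3 - a^2/3, attained at x = -a/3.
assert yprime(3.0, -1.0) == 0.0                       # a = 3: y' touches zero once
assert all(yprime(3.0, k / 10.0) >= 0 for k in range(-100, 101))
assert yprime(3.5, -3.5 / 3.0) < 0                    # |a| > 3: y' dips negative
print("checks pass")
```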
2,037,030
<p>I am studying distribution theory, but I am curious why we introduce the notion of compact support. In what situations is it useful? Can anyone give an intuitive way to explain this concept?</p>
vidyarthi
349,094
<p>The functions with compact support are those that are zero outside of a compact set. This is quite useful in distribution theory, as it tells us that the function doesn't grow indefinitely; in fact it vanishes outside a bounded region. It is also useful in the theory of differential equations, functional analysis, and topology. A standard example is the <a href="http://mathworld.wolfram.com/BumpFunction.html" rel="nofollow noreferrer">bump function</a>. Also refer to <a href="https://en.wikipedia.org/wiki/Support_(mathematics)" rel="nofollow noreferrer">Support</a> for different types of support.</p>
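As a concrete illustration (my own sketch, not from the linked pages), the standard bump function $e^{-1/(1-x^2)}$ for $|x|<1$, extended by zero, is smooth on all of $\mathbb{R}$ and supported exactly on $[-1,1]$:

```python
import math

def bump(x):
    """Standard bump function: smooth everywhere, zero outside [-1, 1]."""
    if abs(x) < 1:
        return math.exp(-1.0 / (1.0 - x * x))
    return 0.0
```

Every derivative of this function also vanishes outside $[-1,1]$, which is what makes it a convenient test function for distributions.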
898,755
<p>The function $G_m(x)$ is what I encountered during my search for approximations of the Riemann $\zeta$ function:</p> <p>$$f_n(x)=n^2 x\left(2\pi n^2 x-3 \right)\exp\left(-\pi n^2 x\right)\text{, }x\ge1;n=1,2,3,\cdots,\tag{1}$$ $$F_m(x)=\sum_{n=1}^{m}f_n(x)\text{, }\tag{2}$$</p> <p>$$G_m(x)=F_m(x)+F_m(1/x)\text{, }\tag{3}$$</p> <p>Numerical results showed that $G_m(x)$ has a zero near $m+1.2$ for $m=1,2,...,8$.</p> <p>Please refer to fig. 1 below for the plot of $\log|G_m(x)|$ vs. $x$ for $m=1,2,...,8$</p> <p><img src="https://i.stack.imgur.com/KWh09.png" alt="enter image description here"></p> <p>Let us denote these zeros by $x_0(m)$. I am interested in whether it can be proved that</p> <p>(A) $x_0(m)$ is the smallest zero of $G_m(x)$ for $x\ge1$ </p> <p>(B) there exist bounds $\mu(m),\nu(m)$ such that $0&lt;\mu(m)\le x_0(m)\le \nu(m)$ and $\mu(m),\nu(m)\to\infty$ when $m\to\infty$.</p> <p>Here are the things I tried.</p> <p>Because $G_m(1)&gt;0$ and $G_m(x)\to F_m(1/x)&lt;0$ when $x\to\infty$, there exists a zero of $G_m(x)$ between $x=1$ and $x=\infty$.</p> <p>But I was not able to find the bounds for this zero.</p> <p>It is tempting to speculate that $x_0(m)$ is the only zero for $G_m(x)$ and $m+1&lt;x_0(m)&lt;m+2$.</p> <p>The values for $x_0(m), (m=1,2,...,10)$ are given by:</p> <p>$x_0(1)$=2.24203, $x_0(2)$=3.21971, $x_0(3)$=4.21913, $x_0(4)$=5.22283, $x_0(5)$=6.22764, $x_0(6)$=7.23268, $x_0(7)$=8.23764, $x_0(8)$=9.24241, $x_0(9)$=10.2469, $x_0(10)$=11.2512.</p>
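The listed zeros are easy to reproduce numerically. The sketch below (my own code; the bracket $[m+1,\,m+2]$ is taken from the observed values, not proved here) locates the sign change of $G_m$ by bisection:

```python
import math

def G(m, x):
    """G_m(x) = F_m(x) + F_m(1/x), with F_m the partial sum of the f_n."""
    def f(n, t):
        return n**2 * t * (2 * math.pi * n**2 * t - 3) * math.exp(-math.pi * n**2 * t)
    def F(t):
        return sum(f(n, t) for n in range(1, m + 1))
    return F(x) + F(1.0 / x)

def first_zero(m, lo, hi, iters=60):
    """Bisection, assuming G_m changes sign exactly once on [lo, hi]."""
    a, b = lo, hi
    for _ in range(iters):
        mid = 0.5 * (a + b)
        if G(m, a) * G(m, mid) <= 0:
            b = mid
        else:
            a = mid
    return 0.5 * (a + b)
```

This recovers, for example, $x_0(1)\approx 2.24203$ and $x_0(2)\approx 3.21971$.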
Daccache
79,416
<p>Well, I was able to prove <span class="math-container">$(3)$</span> in your answer above, meaning the bounds are proven! The 'proof', though, is still much less formal than I'd like it to be, so it might need a little refinement. Here goes:</p> <p>Intuitively, the LHS of the inequality is the sum of all the positive terms in <span class="math-container">$b_n(m)$</span>, and the RHS is the sum of all the negative terms. If the LHS is greater than the RHS, then the total value of <span class="math-container">$b_n(m)$</span> will be positive, so the original function <span class="math-container">$G_m(x)$</span> will be positive at <span class="math-container">$x = m + 1$</span> for all <span class="math-container">$m$</span>, proving our first bound. The idea is that if each negative term on the RHS is smaller in magnitude than a matching positive term on the LHS, then the total value is positive. It is natural, then, to consider the number of summands in each summation. Once we prove that the number of summands on the LHS is larger than on the RHS (lemma 1), we then compare the total value of each side.
(Actually, we just find directly which one is larger; it's a bit hard to explain until that point, so just wait till the end to see how.)</p> <p><strong>Lemma 1: <span class="math-container">$|\{1, ..., n_0(m)\}| \leq |\{n_1(m), ..., m\}|$</span></strong><br /> Consider the following table of values:<br /> <img src="https://i.stack.imgur.com/wtKqa.png" alt="Table of values for n_0(m) and n_1(m)" /><br /> The first row is the given value of <span class="math-container">$m$</span>, the second is the number of negative terms, the third is the starting point where the terms become positive (<span class="math-container">$m - n_0(m)$</span> is the number of positive terms), and the fourth is included for reference as the number of negative terms there would be if the summation split evenly into half positive and half negative terms. What we want to deduce from this table is that there is always an equal or greater number of positive terms than negative ones, that is, at least half of the terms are positive. How? Well, if we compare the number of negative terms there would be had the number of terms been equally positive and negative (4th row) to the number of negative terms there actually are (2nd row), we see that the inequality holds, with the two counts being equal in the cases <span class="math-container">$m = 1, 2, 3$</span>. How are we supposed to prove that <span class="math-container">$n_0(m)$</span> never exceeds <span class="math-container">$\left \lfloor{m/2}\right \rfloor$</span>?
I reasoned that in the 'base' cases <span class="math-container">$m = 1, 2, 3$</span> they are equal, and <span class="math-container">$n_0(m)$</span> behaves like <span class="math-container">$\sqrt{m}$</span> asymptotically while <span class="math-container">$\left \lfloor{m/2}\right \rfloor$</span> behaves like <span class="math-container">$m$</span> asymptotically, and the latter grows faster than the former, so <span class="math-container">$\left \lfloor{m/2}\right \rfloor$</span> will never become less than <span class="math-container">$n_0(m)$</span>. So, at any value of <span class="math-container">$m$</span>, the number of negative terms in <span class="math-container">$b_n(m)$</span> is always less than or equal to the number of positive terms.</p>
Noting that <span class="math-container">$\sup(N_0) = n_0(m)$</span>, and <span class="math-container">$\inf(N_1) = n_1(m)$</span>, and since <span class="math-container">$n_1(m) = n_0(m) + 1$</span> (by definition of floor and ceiling), then every element in <span class="math-container">$N_0$</span> is smaller than every element in <span class="math-container">$N_1$</span>.</p> <p>Finally, we consider the actual values of each summation. For any given <span class="math-container">$m$</span>, the value of each summation only depends on <span class="math-container">$n$</span>'s values. Consider <span class="math-container">$(3)$</span>:<br /> <span class="math-container">$$\sum\limits_{n=n_1(m)}^m b_n(m)\exp\left(-\frac{n^2\pi}{m + 1}\right) &gt; \sum\limits_{n=1}^{n_0(m)} (-b_n(m))\exp\left(-\frac{n^2\pi}{m + 1}\right)$$</span> <span class="math-container">$$\sum\limits_{n=n_1(m)}^m \frac{n^2}{m + 1}\left(\frac{2\pi n^2}{m + 1} - 3\right)\exp\left(-\frac{n^2\pi}{m + 1}\right) &gt; \sum\limits_{n=1}^{n_0(m)} -\frac{n^2}{m + 1}\left(\frac{2\pi n^2}{m + 1} - 3\right)\exp\left(-\frac{n^2\pi}{m + 1}\right)$$</span> <span class="math-container">$$\sum\limits_{n=n_1(m)}^m \left(\frac{2\pi n^4 - 3n^2(m + 1)}{(m + 1)^2}\right)\exp\left(-\frac{n^2\pi}{m + 1}\right) &gt; \sum\limits_{n=1}^{n_0(m)} -\left(\frac{2\pi n^4 - 3n^2(m + 1)}{(m + 1)^2}\right)\exp\left(-\frac{n^2\pi}{m + 1}\right)$$</span><br /> Thinking of the sets of functions (one for each <span class="math-container">$m$</span>) as a set of mappings from <span class="math-container">$N_1$</span> to the natural numbers, we just need to prove that the values <span class="math-container">$N_1$</span> produces (LHS) are all larger than the values <span class="math-container">$N_0$</span> produces (RHS).</p> <p>The original functions we considered are all increasing for <span class="math-container">$n \geq n_1(m)$</span>, and decreasing before, so it maps the set <span class="math-container">$N_0$</span> onto even 
smaller numbers, and the larger set <span class="math-container">$N_1$</span> onto even larger numbers. Thus, the RHS sums a lower amount of increasingly smaller numbers, and the LHS sums a higher amount of increasingly larger numbers, so we conclude the RHS is smaller than the LHS, proving <span class="math-container">$(3)$</span> and thus the bound. Remember though that this whole effort is only for the point <span class="math-container">$m + 1$</span>, and very similarly <span class="math-container">$m + 2$</span> can be proven as the other bound.</p> <p>Please don't hesitate to point out any errors, since there very well might be some. I haven't had time to look over it again since I've been getting rather busy lately.</p> <p>Cheers!</p>
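Lemma 1 is also easy to spot-check numerically. Reading $b_n(m) = \frac{n^2}{m+1}\left(\frac{2\pi n^2}{m+1}-3\right)$ off the displayed summands (my own sketch; the range of $m$ tested is arbitrary):

```python
import math

def b(n, m):
    """b_n(m) as read off the displayed sums."""
    return (n * n / (m + 1)) * (2 * math.pi * n * n / (m + 1) - 3)

def n0(m):
    """Number of indices n in 1..m with b_n(m) < 0."""
    return sum(1 for n in range(1, m + 1) if b(n, m) < 0)
```

The count of negative terms stays at or below $\lfloor m/2\rfloor$, with equality in the small cases from the table.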
3,579,065
<p>It is known that the quantity <span class="math-container">$\cos \frac{2π}{17}$</span> is a root of the <span class="math-container">$8$</span>'th degree equation, <span class="math-container">$$x^8 + \frac{1}{2} x^7 - \frac{7}{4} x^6 - \frac{3}{4} x^5 + \frac{15}{16} x^4 + \frac{5}{16} x^3 - \frac{5}{32} x^2 - \frac{x}{32} + \frac{1}{256} = 0$$</span> It is known that the regular <span class="math-container">$17$</span> sided polygon can be constructed from <span class="math-container">$\cos \frac{2π}{17}$</span>, if this can be expressed in square roots. Can you show that this equation can be derived from the relation <span class="math-container">$\sin 9\theta=-\sin 8\theta$</span>, where <span class="math-container">$\theta = \frac{2π}{17}$</span>?</p> <p>Can you demonstrate that it is possible to solve this particular <span class="math-container">$8$</span>'th degree equation, even though there is no <span class="math-container">$8$</span>'th degree formula? (There is more than one possible form of solution; it is known that there is often more than one expression in radicals for the same quantity.) Also, can you find a square root form for <span class="math-container">$\cos \frac{2\pi}{17}$</span> which has a minimum number of terms?</p>
Community
-1
<p>You probably know that <span class="math-container">$\frac{1}{1-x}=1+x+x^2+\dots$</span>. If not, observe that <span class="math-container">$$(1-x)(1+x+x^2+\dots)=(1+x+x^2+\dots)-(x+x^2+x^3+\dots)=1$$</span> Then <span class="math-container">$$\frac{1}{(1-x)^2}=\left(\frac{1}{1-x}\right)'=1+2x+3x^2+4x^3+\dots$$</span></p> <p>Now, <span class="math-container">$1+x+x^2=(1-x)^2+3x$</span>. Or you can just divide <span class="math-container">$x^2+x+1$</span> by <span class="math-container">$(1-x)^2$</span>. The important thing is to obtain that the quotient is <span class="math-container">$1$</span> and the remainder is <span class="math-container">$3x$</span>. So, <span class="math-container">$$\begin{align}\frac{1+x+x^2}{(1-x)^2}&amp;=1+\frac{3x}{(1-x)^2}\\&amp;=1+3x(1+2x+3x^2+4x^3+\dots)\\&amp;=1+3x+3\cdot2x^2+3\cdot 3x^3+3\cdot 4x^4+3\cdot 5x^5+\dots\end{align}$$</span></p>
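The coefficient pattern can be confirmed with a short convolution (my own check; it multiplies $1+x+x^2$ by the series $\sum_{n\ge 0}(n+1)x^n$):

```python
def series_coeffs(N):
    """First N coefficients of (1 + x + x^2) / (1 - x)^2.

    Uses 1/(1-x)^2 = sum_{n>=0} (n+1) x^n and convolves with [1, 1, 1].
    """
    num = [1, 1, 1]  # coefficients of 1 + x + x^2
    return [sum(num[k] * (n - k + 1) for k in range(3) if n - k >= 0)
            for n in range(N)]
```

The output $1, 3, 6, 9, 12, \dots$ matches the expansion $1 + 3x + 3\cdot2\,x^2 + 3\cdot3\,x^3 + \cdots$ derived above.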
1,611,078
<p>If we have the function $f : \mathbb{R}\rightarrow \mathbb{R} : x \mapsto x^2 + \frac{x}{3}$ and the sequence $(a_n)_{n \in \mathbb{N}}$ which is recursively specified for $n \in \mathbb{N_+}$:</p> <p>$a_n =_{def} f(a_{n-1})$</p> <p>(So the sequence is fixed by $a_0$.)</p> <p>How do I determine all real numbers $x \in \mathbb{R}$ for which the sequence $(a_n)_{n \in \mathbb{N}}$ with $a_0 = x$ converges, and what are the associated limits?</p>
Asinomás
33,907
<p>For $x\in [-1,\frac{2}{3}]$ the sequence converges: if $x=-1$ or $x=\frac{2}{3}$ it converges to $\frac{2}{3}$, and otherwise it converges to $0$.</p> <p>For any other value of $x$ it goes to infinity.</p>
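These claims are easy to check numerically (my own sketch): $f(-1)=\tfrac23$ and $\tfrac23$ is a fixed point, while interior starting points are pulled to the attracting fixed point $0$.

```python
from fractions import Fraction

def f(x):
    return x * x + x / 3

def iterate(x0, n):
    """Apply f to x0 a total of n times."""
    x = x0
    for _ in range(n):
        x = f(x)
    return x
```

Exact arithmetic with `Fraction` avoids floating-point drift at the repelling fixed point $\tfrac23$.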
394,580
<p>Let <span class="math-container">$S$</span> be a smooth compact closed surface embedded in <span class="math-container">$\mathbb{R}^3$</span> of genus <span class="math-container">$g$</span>. Starting from a point <span class="math-container">$p$</span>, define a random walk as taking discrete steps in a uniformly random direction, each step a geodesic segment of the same length <span class="math-container">$\delta$</span>. Assume <span class="math-container">$\delta$</span> is less than the injectivity radius and small with respect to the intrinsic diameter of <span class="math-container">$S$</span>.</p> <blockquote> <p><em><strong>Q</strong></em>. Is the set of footprints of the random walk evenly distributed on <span class="math-container">$S$</span>, in the limit? By evenly distributed I mean the density of points per unit area of surface is the same everywhere on <span class="math-container">$S$</span>.</p> </blockquote> <p>This is likely known, but I'm not finding it in the literature on random walks on manifolds. I'm especially interested in genus <span class="math-container">$0$</span>. Thanks!</p> <hr /> <p><em>Update</em> (6JUn2021). The answer to <em><strong>Q</strong></em> is <em>Yes</em>, going back 38 years to Toshikazu Sunada, as recounted in @RW's <a href="https://mathoverflow.net/a/394625/6094">answer</a>.</p>
Pierre PC
129,074
<p><strong>Edit:</strong> As far as I understand it, this is the approach of T. Sunada, as described in the paper linked in <a href="https://mathoverflow.net/a/394625/129074">R W's answer</a>.</p> <p>Let us fix <span class="math-container">$\delta&gt;0$</span> such that any pair of points can be joined by a finite sequence of steps of size <span class="math-container">$\delta$</span>. For instance, we may choose any <span class="math-container">$\delta$</span> less than the injectivity radius, as I discuss at the end of this answer.</p> <p>It is a purely measure-theoretic result that the uniform distribution of the walk holds for almost all initial points. As I describe below, this implies, provided <span class="math-container">$\delta$</span> is at most the injectivity radius, that it actually holds for all initial points in the following sense. Call <span class="math-container">$\mathbb P_{x_0}$</span> the distribution of the random walk started at <span class="math-container">$x_0$</span>, and <span class="math-container">$\mu$</span> the uniform measure on <span class="math-container">$S$</span>.</p> <blockquote> <p>For all <span class="math-container">$x_0\in S$</span>, for all <span class="math-container">$f_0:S\to\mathbb R$</span> continuous, we have <span class="math-container">$$ \lim_{n\to\infty}\frac1n\sum_{i&lt;n}f_0(x_i) = \int f\mathrm d\mu $$</span> <span class="math-container">$\mathbb P_{x_0}$</span>-almost surely.</p> </blockquote> <p>I don't think the proof uses anything more than <span class="math-container">$S$</span> being a closed connected Riemannian manifold.</p> <h3>Ergodic argument</h3> <p>Let <span class="math-container">$\Omega\subset S^{\mathbb N}$</span> be the set of sequences such that two consecutive terms are at distance precisely <span class="math-container">$\delta$</span>. 
This is a compact space (closed subset of a compact space, by Tychonoff or simply using a convenient metric), and it carries a natural probability <span class="math-container">$\mathbb P_\mu$</span>, corresponding to <span class="math-container">$x_0$</span> being distributed according to <span class="math-container">$\mu$</span> and the following steps according to the random walk rules. The shift operator <span class="math-container">$$ T:(x_0,x_1,\ldots)\mapsto(x_1,x_2,\ldots) $$</span> is such that <span class="math-container">$T_*\mu=\mu$</span> (let us just accept this until the end of the proof sketch), so <span class="math-container">$(\Omega,\mathbb P_\mu,T)$</span> is a dynamical system in the measure-theoretic sense. According to Birkhoff's ergodic theorem, for <span class="math-container">$\mathbb P_\mu$</span>-almost every <span class="math-container">$x$</span>, we have <span class="math-container">$$ \lim_{n\to\infty}\frac1n\sum_{i&lt;n}f(T^ix) = \mathbb E[f|\mathcal F_T](x) $$</span> for all <span class="math-container">$f:\Omega\to\mathbb R$</span> continuous, where <span class="math-container">$\mathcal F_T$</span> is the algebra of <span class="math-container">$T$</span>-invariant sets. Now if <span class="math-container">$(\Omega,\mathbb P_\mu,T)$</span> is ergodic, then <span class="math-container">$$\mathbb E[f|\mathcal F_T](x) = \int f\mathrm d\mathbb P_\mu = \int f_0\mathrm d\mu $$</span> for all <span class="math-container">$f$</span> depending only on <span class="math-container">$x_0$</span>, i.e. <span class="math-container">$f:x\mapsto f_0(x)$</span>. Thus my claim will follow by Fubini.</p> <p>It remains to prove that <span class="math-container">$\mu$</span> is <span class="math-container">$T$</span> invariant, and that the resulting system is ergodic. 
According to the probabilistic interpretation in terms of Markov chains, it suffices to show that <span class="math-container">$x_1$</span> has the same distribution as <span class="math-container">$x_0$</span> (the uniform one) under <span class="math-container">$\mathbb P_\mu$</span>. This is a consequence of the fact that the geodesic flow leaves the measure induced on <span class="math-container">$TM$</span> invariant, which itself is a consequence of <a href="https://en.wikipedia.org/wiki/Liouville%27s_theorem_(Hamiltonian)" rel="nofollow noreferrer">Liouville's theorem</a> because the geodesic flow is Hamiltonian.</p> <p>Now let us show the system is ergodic. Let <span class="math-container">$f:\Omega\to\mathbb R$</span> be <span class="math-container">$T$</span>-invariant, i.e. <span class="math-container">$f\circ T = f$</span>, and suppose also that <span class="math-container">$f$</span> depends only on the first <span class="math-container">$k$</span> terms of the sequence. Let us show that it is constant by choosing <span class="math-container">$x$</span> and <span class="math-container">$y$</span> arbitrary and showing <span class="math-container">$f(x)=f(y)$</span>. Because of the hypothesis that two points can be linked using steps of size <span class="math-container">$\delta$</span>, we know that we can go from <span class="math-container">$x_k$</span> to <span class="math-container">$y_{k-1}$</span> in <span class="math-container">$N$</span> steps for some finite <span class="math-container">$N$</span>. 
Killing the first <span class="math-container">$k$</span> terms of <span class="math-container">$x$</span> using <span class="math-container">$T$</span>, then adding <span class="math-container">$N$</span> terms linking <span class="math-container">$x_k$</span> to <span class="math-container">$y_{k-1}$</span>, and adding the first <span class="math-container">$k$</span> terms of <span class="math-container">$y$</span>, we see that <span class="math-container">$f(x) = f(y_k)$</span>, where <span class="math-container">$y_k$</span> is equal to <span class="math-container">$y$</span> up to the <span class="math-container">$k$</span>th term. Because <span class="math-container">$f$</span> only depends on the first <span class="math-container">$k$</span> terms, we have in fact <span class="math-container">$f(x)=f(y)$</span>. Now an approximation argument shows that the system is ergodic (see for instance a previous answer of mine <a href="https://mathoverflow.net/a/323066">here</a>).</p> <h3>From almost all to all</h3> <p>Now this extends I think to all points in the manifold, provided <span class="math-container">$\delta$</span> is less than the injectivity radius. Let <span class="math-container">$A$</span> be the set of points <span class="math-container">$x_0$</span> such that the random walk started at <span class="math-container">$x_0$</span> is evenly distributed almost surely, and <span class="math-container">$\mathbf 1_A$</span> its indicator function. Then <span class="math-container">$$ \mathbb P_{x_0}(\text{even distribution}) = \int \mathbb P_{x_2}(\text{even distribution})\mathrm d\mathbb P_{x_0} \geq \int\mathbf 1_A\mathrm d (T^2_*\delta_{x_0}) = 1, $$</span> because the distribution of <span class="math-container">$x_2$</span> under <span class="math-container">$\mathbb P_{x_0}$</span> is continuous with respect to the Lebesgue measure. 
Indeed, it is the image of the Lebesgue measure on the product of two sphere under a map that is a submersion almost everywhere. I have convinced myself of this last fact, but I can give more detail if people are interested.</p> <h3>All points can be reached</h3> <p>Define an equivalence relation so that <span class="math-container">$x\sim y$</span> if there is a chain of steps of length <span class="math-container">$\delta$</span> from <span class="math-container">$x$</span> to <span class="math-container">$y$</span>. It is clearly an equivalence relation provided we allow for the trivial chain <span class="math-container">$(x)$</span> to be a witness for <span class="math-container">$x\sim x$</span>. To prove that we can reach any given point from any other, it will suffice to show that the equivalence classes are open. We will show that if <span class="math-container">$\delta$</span> is less than the injectivity radius <span class="math-container">$r_\text{inj}$</span>, then all points <span class="math-container">$y$</span> with <span class="math-container">$d(x,y)&lt;\min(\delta,r_\text{inj}-\delta)$</span> can be reached from <span class="math-container">$x$</span> in two steps. Since <span class="math-container">$S$</span> is compact, this will actually show that there is an upper bound on the minimal number of steps needed to link two points of the manifold.</p> <p>It will be enough to show that for any such pair <span class="math-container">$(x,y)$</span>, the spheres <span class="math-container">$\partial B_x(\delta)$</span> and <span class="math-container">$\partial B_y(\delta)$</span> intersect, which in turn would follow from the existence of points <span class="math-container">$z_\pm$</span> on <span class="math-container">$\partial B_y(\delta)$</span> such that <span class="math-container">$\pm d(x,z_\pm)&gt;\pm\delta$</span> (this sphere is topologically a sphere of dimension at least two, so it is path connected). But such points are easy to find. 
Follow the (unique) minimising geodesic from <span class="math-container">$y$</span> to <span class="math-container">$x$</span>, continue until the geodesic has length <span class="math-container">$\delta$</span>, and call this endpoint <span class="math-container">$z_-$</span>. Since <span class="math-container">$x$</span> is on the minimising geodesic segment from <span class="math-container">$y$</span> to <span class="math-container">$z_-$</span>, the distance between <span class="math-container">$z_-$</span> and <span class="math-container">$x$</span> is at most <span class="math-container">$\delta$</span>. In the other direction, follow the geodesic from <span class="math-container">$x$</span> to <span class="math-container">$y$</span>, continue until the geodesic has length <span class="math-container">$d(x,y)+\delta$</span>, and call this endpoint <span class="math-container">$z_+$</span>. Because the geodesic is minimising, <span class="math-container">$\delta=d(y,z_+)&lt;d(x,z_+)$</span>. This concludes the argument.</p>
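For genus $0$ the uniform limit is also easy to see in simulation. The sketch below (my own code, not part of the argument above) runs the fixed-step geodesic walk on the round unit sphere; if the footprints equidistribute, their $z$-coordinates should be uniform on $[-1,1]$ (Archimedes' hat-box theorem), so their first and second moments should approach $0$ and $1/3$:

```python
import math
import random

def step(p, delta, rng):
    """One geodesic step of length delta from p on the unit sphere,
    in a uniformly random tangent direction at p."""
    while True:
        # isotropic Gaussian projected to the tangent plane gives a
        # uniformly random tangent direction
        w = [rng.gauss(0.0, 1.0) for _ in range(3)]
        dot = sum(wi * pi for wi, pi in zip(w, p))
        v = [wi - dot * pi for wi, pi in zip(w, p)]
        norm = math.sqrt(sum(vi * vi for vi in v))
        if norm > 1e-12:
            break
    v = [vi / norm for vi in v]
    c, s = math.cos(delta), math.sin(delta)
    q = [c * pi + s * vi for pi, vi in zip(p, v)]
    n = math.sqrt(sum(qi * qi for qi in q))
    return [qi / n for qi in q]  # renormalise against float drift

def walk(n_steps, delta, seed=0):
    rng = random.Random(seed)
    p = [0.0, 0.0, 1.0]  # start at the north pole
    zs = []
    for _ in range(n_steps):
        p = step(p, delta, rng)
        zs.append(p[2])
    return zs
```

With $\delta=0.5$ and a couple of hundred thousand steps the empirical moments match the uniform values to within a few percent.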
1,716,656
<p>I am having trouble solving this problem</p> <blockquote> <p>Julie bought a house with a 100,000 mortgage for 30 years being repaid with payments at the end of each month at an interest rate of 8% compounded monthly. If Julie pays an extra 100 each month, what is the outstanding balance at the end of 10 years immediately after the 120th payment?</p> </blockquote> <p>My attempt:</p> <p>I first want to find the deposit per month.</p> <p>I let $D$ be the deposit per month and since it increases by 100 each payment, I used an increasing annuity,</p> <p>$D*100(Ia_{30|0.08}) = 100,000$</p> <p>However, the $D$ I got was 8.12, which is clearly not right.</p> <p>Can someone help?</p>
someguy
326,526
<p>Any result will do as long as the other die can score the same number plus two; that gives $n-2$ possibilities per die ($n$ being the number of sides). This gives $2(n-2)$ possible results out of $n^2$ ordered outcomes of the two dice.</p> <p>So the probability is $2(n-2)/n^2$.</p>
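A brute-force count confirms the formula (my own check; for a standard six-sided die it gives eight ordered pairs):

```python
def count_diff_two(n):
    """Ordered pairs (i, j) of n-sided die rolls with |i - j| == 2."""
    return sum(1 for i in range(1, n + 1)
                 for j in range(1, n + 1) if abs(i - j) == 2)
```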
1,716,656
<p>I am having trouble solving this problem</p> <blockquote> <p>Julie bought a house with a 100,000 mortgage for 30 years being repaid with payments at the end of each month at an interest rate of 8% compounded monthly. If Julie pays an extra 100 each month, what is the outstanding balance at the end of 10 years immediately after the 120th payment?</p> </blockquote> <p>My attempt:</p> <p>I first want to find the deposit per month.</p> <p>I let $D$ be the deposit per month and since it increases by 100 each payment, I used an increasing annuity,</p> <p>$D*100(Ia_{30|0.08}) = 100,000$</p> <p>However, the $D$ I got was 8.12, which is clearly not right.</p> <p>Can someone help?</p>
AAAfarmclub
326,781
<p>Just for fun, I counted eight.<br> <img src="https://i.stack.imgur.com/dG32E.png" alt="Dice image courtesy of Google[1]"></p>
449,631
<p>Again a root problem: $\sqrt{2x+5}+\sqrt{5x+6}=\sqrt{12x+25}$.</p> <p>Isn't there any standardized way to solve root problems? Can you please help by giving some tips and strategies for problems with radicals?</p>
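The standard strategy (isolate, square, repeat, then check for extraneous roots) works here: squaring once gives $2\sqrt{(2x+5)(5x+6)}=5x+14$, and squaring again gives $15x^2+8x-76=0$, i.e. $(x-2)(15x+38)=0$. A quick check of the two candidates (my own sketch):

```python
import math

def lhs(x):
    return math.sqrt(2 * x + 5) + math.sqrt(5 * x + 6)

def rhs(x):
    return math.sqrt(12 * x + 25)

candidates = [2.0, -38.0 / 15.0]  # roots of 15x^2 + 8x - 76 = 0
```

Only $x=2$ survives: at $x=-38/15$ the radicand $2x+5=-1/15$ is negative, so that root is extraneous, a typical artifact of squaring.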
Clement C.
75,808
<p>It seems correct, up to a typo between $A$ and $E$, and braces missing in the union at the end (in $\bigcup\{x_{n_k}\}$). Also, you might want to define $n_{k+1}$ as $$n_{k+1} = \inf\{ i &gt; n_k \mid x_i \in E \}$$</p> <p>Using the (equivalent, and pretty much identical up to a sequence-vs-function change of notation) definition of a countable set:</p> <blockquote> <p>A set $A$ is called countable if there exists an injective function $f$ from $A$ to $\mathbb{N}$</p> </blockquote> <p>you can also say that if $f\colon A \to \mathbb{N}$ is such an injective function, then in particular any restriction $f_{|_E}$ (for $E\subseteq A$) is also an injection, and thus $E$ is countable.</p>
559,194
<p>$\mathscr{F}\{\delta(t)\}=1$, so this means the inverse Fourier transform of $1$ is the Dirac delta function. I tried to prove it by solving the integral, but I got something that doesn't converge.</p>
meta_warrior
73,032
<p>$$\mathscr{F^{-1}}\{1\}=\int_{-\infty}^{\infty}e^{2\pi ixy}dy=\lim_{M\to\infty}\frac{\sin{2\pi Mx}}{\pi x}$$ Now we need to consider 2 cases:</p> <p>1) $x=0$, then $\lim_{M\to\infty}\frac{\sin{2\pi Mx}}{\pi x}=\lim_{M\to\infty}2M=\infty$</p> <p>2) $x\ne0$, then $\lim_{M\to\infty}\frac{\sin{2\pi Mx}}{\pi x}=0$ in the sense of distributions (pointwise the expression merely oscillates, but integrated against any test function supported away from $0$ it tends to $0$, by the Riemann–Lebesgue lemma)</p> <p>Hence, combining these 2 cases, we obtain</p> <p>$$\mathscr{F^{-1}}\{1\}=\delta(x)$$</p>
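The distributional statement can be made concrete by pairing the kernel with a test function: $\int \varphi(x)\,\frac{\sin 2\pi Mx}{\pi x}\,dx \to \varphi(0)$ as $M\to\infty$. A crude numerical check (my own sketch; the grid and cutoff are ad hoc):

```python
import math

def kernel(x, M):
    """sin(2*pi*M*x) / (pi*x), extended by its limit 2M at x = 0."""
    if x == 0.0:
        return 2.0 * M
    return math.sin(2 * math.pi * M * x) / (math.pi * x)

def pair(phi, M, a=-8.0, b=8.0, n=160001):
    """Riemann-sum approximation of the integral of phi(x) * kernel(x, M)."""
    h = (b - a) / (n - 1)
    return h * sum(phi(a + i * h) * kernel(a + i * h, M) for i in range(n))
```

For the Gaussian $\varphi(x)=e^{-x^2}$ the exact pairing equals $\operatorname{erf}(\pi M)$, already indistinguishable from $\varphi(0)=1$ for moderate $M$.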
2,720,694
<p>I am having difficulty calculating the second variation of the following functional.</p> <p>Define $J: W_{0}^{1,p}(\Omega)\to\mathbb{R}$ by $J(u)=\frac{1}{p}\int_{\Omega}|\nabla u|^p\,dx$ where $p&gt;1$.</p> <p>I am able to calculate the first variation as follows: $J'(u)\phi=\int_{\Omega}\,|\nabla u|^{p-2}\nabla u\cdot\nabla\phi\,dx$, which I obtained by using the function $E:\mathbb{R}\to\mathbb{R}$ defined by $E(t)=J(u+t\phi)$.</p> <p>But I am unable to calculate the second variation.</p> <p>Any help is very much appreciated.</p> <p>Thanks.</p>
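For reference, differentiating $E(t)=J(u+t\phi)$ a second time under the integral sign gives (a sketch of the computation, not a rigorous justification; for $1<p<2$ the second term needs $\nabla u\neq 0$ to make sense pointwise):

```latex
\frac{d^2}{dt^2}\,\frac{|v+tw|^p}{p}
  = |v+tw|^{p-2}\,|w|^2 + (p-2)\,|v+tw|^{p-4}\big((v+tw)\cdot w\big)^2,
```

so with $v=\nabla u$, $w=\nabla\phi$ and $t=0$, $J''(u)(\phi,\phi)=E''(0)=\int_{\Omega} |\nabla u|^{p-2}|\nabla\phi|^2 + (p-2)|\nabla u|^{p-4}(\nabla u\cdot\nabla\phi)^2\,dx$. For $p=2$ the second term vanishes and this reduces to the familiar $\int_\Omega |\nabla\phi|^2\,dx$.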
Prasun Biswas
215,900
<p><strong>Hints:</strong></p> <ul> <li><p>If $f(a-x)=f(x)$ on $[0,a]$, then $\int_0^a f(x)~\mathrm dx=\int_0^a f(a-x)~\mathrm dx$</p></li> <li><p>If $f(2a-x)=f(x)$ on $[0,2a]$, then $\int_0^{2a} f(x)~\mathrm dx=2\int_0^a f(x)~\mathrm dx$</p></li> <li><p>Suppose $m=n=k=2^rs$ where $2\not\mid s$. Can you show that $$\int_0^{2\pi}\sin^2(kx)~\mathrm dx=2^{r+1}\int\limits_0^{\pi/2^r}\sin^2(kx)~\mathrm dx=2^{r+1}\cdot\frac{\dfrac\pi{2^r}-0}2=2^{r+1}\cdot\frac\pi{2^{r+1}}=\pi$$</p></li> </ul>
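The hints combine to give $\int_0^{2\pi}\sin^2(kx)\,dx=\pi$ for every positive integer $k$, which is easy to sanity-check numerically (my own sketch, midpoint rule):

```python
import math

def integral_sin2(k, n=100000):
    """Midpoint-rule approximation of the integral of sin(kx)^2 over [0, 2*pi]."""
    h = 2 * math.pi / n
    return h * sum(math.sin(k * (i + 0.5) * h) ** 2 for i in range(n))
```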
435,298
<p>Define $$\langle X,Y \rangle := \operatorname{tr}XY^t,$$ where $X,Y$ are square matrices with real entries and $t$ denotes transpose.</p> <p>I have some trouble proving that $$ \langle [X,Y],Z \rangle = - \langle Y,[X,Z] \rangle,$$ where square brackets denote commutator.</p> <p>Let me update my question to part ii. You have proven that my commutation relation without a transpose is wrong, while it is correct if we put a $t.$</p> <p>Then I'd say I'm in trouble, because the next step would be to define $$\operatorname{ad}_XY:=[X,Y]$$ and claim that by the above (false) property we have that $\operatorname{ad}$ is antisymmetric, i.e. $$\langle \operatorname{ad}_XY,Z\rangle =- \langle Y,\operatorname{ad}_XZ\rangle:$$ Do you know of a way to recover such a nice property or something similar?</p>
S.B.
35,778
<p>$$\langle XY-YX,Z\rangle=\langle XY,Z\rangle-\langle YX,Z\rangle\\=\langle Y,X^tZ\rangle-\langle Y,ZX^t\rangle\\=\langle Y,[X^t,Z]\rangle$$ <strong>Counterexample for the OP's equation:</strong> Let $X=Z=\left[\array{0 &amp; 1\\0 &amp;0}\right]$ and $Y=\left[\array{1&amp;0\\0&amp;2}\right]$. We have $XZ-ZX=0\implies \langle Y,[X,Z]\rangle=0$, whereas $$X^tZ-ZX^t=\left[\array{-1&amp;0\\0&amp;1}\right]\implies\langle Y,[X^t,Z]\rangle=1.$$</p>
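Both the identity $\langle XY-YX,Z\rangle=\langle Y,[X^t,Z]\rangle$ and the counterexample are quick to verify with explicit $2\times 2$ arithmetic (my own sketch, plain lists instead of a matrix library):

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def inner(A, B):
    # <A, B> = tr(A B^t) = entrywise dot product
    return sum(A[i][j] * B[i][j] for i in range(2) for j in range(2))

def comm(A, B):
    return sub(mul(A, B), mul(B, A))
```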
473,508
<p>Let $p &gt; 2$ be a prime. Can someone prove that for any $t \leq p-2$ the following identity holds</p> <blockquote> <p>$\displaystyle \sum_{x \in \mathbb{F}_p} x^t = 0$</p> </blockquote>
Servaes
30,382
<p>Let $t\in\{1,\ldots,p-2\}$ and let $g\in\Bbb{F}_p^{\times}$ be a primitive root. Then $g^t\neq1$ and $g^{p-1}=1$, so $$\sum_{x\in\Bbb{F}_p}x^t=\sum_{k=0}^{p-2}g^{kt}=\frac{1-g^{(p-1)t}}{1-g^t}=0.$$</p>
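The geometric-series argument is easy to spot-check (my own sketch; note that for $t=p-1$ every nonzero $x$ contributes $1$, so the sum is $p-1\equiv -1$ instead of $0$):

```python
def power_sum(p, t):
    """Sum of x^t over the field F_p, reduced mod p."""
    return sum(pow(x, t, p) for x in range(p)) % p
```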
4,353,203
<p>While looking for an explanation to the first inequality, I spied another similar inequality. So, I will ask about both of them here.</p> <p><span class="math-container">$a$</span>, <span class="math-container">$b$</span>, and <span class="math-container">$c$</span> are positive numbers. <span class="math-container">$$\frac{a}{b + c} + \frac{b}{a + c} + \frac{c}{a + b} \geq \frac{(a + b + c)^{2}}{2(ab + ac + bc)}.$$</span> <span class="math-container">$$\frac{a^{2}}{b} + \frac{b^{2}}{c} + \frac{c^{2}}{a} \geq \frac{(a + b + c)^{2}}{a + b + c} = a + b + c.$$</span></p> <p>If the second inequality is an application of the Cauchy-Schwarz Inequality, state for me the two vectors in <span class="math-container">$\mathbb{R}^{3}$</span> that are used to justify it.</p>
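If the second inequality is read as an instance of Cauchy–Schwarz (in Engel/Titu form), one standard choice of vectors, offered here as a sketch since other choices work too, is:

```latex
u = \Big(\tfrac{a}{\sqrt{b}},\ \tfrac{b}{\sqrt{c}},\ \tfrac{c}{\sqrt{a}}\Big),
\qquad
v = \big(\sqrt{b},\ \sqrt{c},\ \sqrt{a}\big),
\qquad
(u\cdot v)^2 \le \|u\|^2\,\|v\|^2
\;\Longleftrightarrow\;
(a+b+c)^2 \le \Big(\tfrac{a^2}{b}+\tfrac{b^2}{c}+\tfrac{c^2}{a}\Big)(b+c+a).
```

Dividing both sides by $a+b+c$ gives exactly the second inequality; the first one follows the same way after writing $\frac{a}{b+c}=\frac{a^2}{a(b+c)}$, since $\sum a(b+c)=2(ab+bc+ca)$.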
Colin T Bowers
65,280
<p>This answer is here to flesh out the detail in the method provided by md5 in the accepted answer.</p> <p>As md5 notes, given the setup: <span class="math-container">\begin{equation} \mathbb{P}(s_{2n} = -2n + 4k) = \frac{\binom{n}{k} \binom{n}{n-k}}{\binom{2n}{n}} , \quad k = 0, 1, ..., n . \end{equation}</span> A well known result, also straightforward to establish via Stirling's approximation, is: <span class="math-container">\begin{equation} \binom{2n}{n} = 2^{2n} \frac{1}{\sqrt{\pi n}} (1 + O(1/n)) . \end{equation}</span> Substituting this into the above, and noting that <span class="math-container">$\binom{n}{n-k} = \binom{n}{k}$</span>, gives: <span class="math-container">\begin{equation} \mathbb{P}(s_{2n} = -2n + 4k) = \sqrt{\pi n} \binom{n}{k} 2^{-n} \binom{n}{k} 2^{-n} (1 + O(1/n)) . \end{equation}</span> Another well known result, sometimes referred to as the Normal approximation to the binomial, says that if <span class="math-container">$d = O(\sqrt{n})$</span>, then: <span class="math-container">\begin{equation} \binom{n}{\frac{n}{2} - d} 2^{-n} = \frac{1}{\sqrt{\pi \frac{n}{2}}} e^{-2d^2 / n} + O(1/n^{3/2}) \end{equation}</span> Setting <span class="math-container">$k = \frac{n}{2} - d$</span>, and using this approximation, we get: <span class="math-container">\begin{equation} f(x) = \mathbb{P}(s_{2n} = -2n + 4k) = \sqrt{\pi n} \frac{2}{\pi n}e^{-4d^2 / n} (1 + O(1/n)) . \end{equation}</span> <span class="math-container">$x$</span> denotes outcomes for <span class="math-container">$s_{2n}$</span>, so <span class="math-container">$x = -2n + 4k$</span>, which means: <span class="math-container">\begin{equation} k = \frac{x + 2n}{4} .
\end{equation}</span> Also by our definitions <span class="math-container">$d = \frac{n}{2} - k$</span>, so subbing in for <span class="math-container">$k$</span>: <span class="math-container">\begin{equation} d = \frac{n}{2} - \frac{x + 2n}{4} , \end{equation}</span> which simplifies to <span class="math-container">$d = \frac{x}{4}$</span>. Thus: <span class="math-container">\begin{equation} -\frac{4d^2}{n} = -\frac{x^2}{2(2n)} . \end{equation}</span> Subbing in to our probability function gives: <span class="math-container">\begin{equation} f(x) = \sqrt{\pi n} \frac{2}{\pi n}e^{-x^2 / 2(2n)} (1 + O(1/n)) . \end{equation}</span> At this point things are looking good, because our exponent term is correct for a Normal distribution with zero mean and variance <span class="math-container">$2n$</span>, which is exactly what we want. The problem comes when we evaluate the term in front of the exponent. We want the term in front of the exponent to be: <span class="math-container">\begin{equation} \frac{1}{\sqrt{2n} \sqrt{2 \pi}} , \end{equation}</span> because the standard deviation is <span class="math-container">$\sqrt{2n}$</span>. But instead we have: <span class="math-container">\begin{equation} \frac{2}{\sqrt{n} \sqrt{\pi}} . \end{equation}</span> Everything would be correct if that <span class="math-container">$2$</span> was in the denominator instead of the numerator, but it isn't, and I can't see where I went wrong here.</p>
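A quick numerical sanity check of the derivation (my addition; n = 500 is arbitrary): at the center of the walk, where d = 0 and x = 0, the exact pmf should match the prefactor 2/sqrt(pi n) obtained above up to a relative error of order 1/n.

```python
from math import comb, pi, sqrt

n = 500
k = n // 2                                # center: d = 0, so x = -2n + 4k = 0
exact = comb(n, k) ** 2 / comb(2 * n, n)  # P(s_{2n} = -2n + 4k) from the setup
approx = 2 / sqrt(pi * n)                 # prefactor derived in the answer
ratio = exact / approx
```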
318,351
<p>As we know, the Ky Fan norm is convex, and so is the Ky Fan k-norm. My question is: does this imply that the difference between them is a non-convex function, since it is a difference of two convex functions?</p>
noobProgrammer
61,722
<p>Differential forms are a way of formulating calculus on manifolds without a strict adherence to coordinates. Besides being necessary for learning some differential geometry, they are absolutely necessary for understanding Stokes' Theorem (modern version), which generalizes the classical Kelvin-Stokes theorem to manifolds and has the classical Kelvin-Stokes theorem, Green's Theorem and the Divergence theorem as easy corollaries. </p> <p>Besides the technical help differential forms provide, they also make things spectacularly beautiful. For example, here is the modern Stokes' Theorem:</p> <p>$$\int_{\partial \Omega} \omega = \int_{\Omega} d\omega$$</p> <p>where $d\omega$ is the exterior derivative of the differential form $\omega$ and $\Omega$ is an orientable manifold. </p>
122,471
<p>Can anyone explain how I can prove whether or not $\phi(t) = \left|\cos (t)\right|$ is a characteristic function? And which random variable has this characteristic function? Thanks in advance.</p>
Did
6,179
<p><a href="http://en.wikipedia.org/wiki/Characteristic_function_%28probability_theory%29#Properties" rel="noreferrer">Factoid 1</a>: If a characteristic function is infinitely differentiable at zero, all the moments of the corresponding random variable are finite.</p> <p><a href="http://en.wikipedia.org/wiki/Characteristic_function_%28probability_theory%29#Moments" rel="noreferrer">Factoid 2</a>: If all the moments of a random variable are finite, the corresponding characteristic function is infinitely differentiable everywhere on the real line.</p> <p>Factoid 3: The function $t\mapsto|\cos(t)|$ is infinitely differentiable at $t=0$ but not everywhere on the real line, for example not at $t=\pi/2$.</p> <p>Ergo.</p>
3,430,812
<p>Consider the set of integers, <span class="math-container">$\Bbb{Z}$</span>. Now consider the sequence of sets which we get as we divide each of the integers by <span class="math-container">$2, 3, 4, \ldots$</span>.</p> <p>Obviously, as we increase the divisor, the elements of the resulting sets will get closer and closer.</p> <p><strong>Question:</strong> In the limit as <span class="math-container">$\text{divisor}\to\infty$</span>, what will the "limiting" set be? (I don't think it could be <span class="math-container">$\Bbb{R}$</span>.)</p>
sanaris
144,567
<p>If you are a physicist or applied mathematician, if you define <span class="math-container">$S_n = \{ z_i/n, z_i=i, i = 1..\infty \in \mathbb{Z}\}$</span>, then you can just say that <span class="math-container">$\lim_{n\rightarrow\infty} S = \{ \lim S_n \}$</span>, which will yield <span class="math-container">$\lim S = \{0, 0, 0, 0,...\}$</span>. This will happen because <span class="math-container">$\lim$</span> is just working on every fixed element, pretty similar to partial differentiation.</p>
3,927,488
<p>Given a random variable <span class="math-container">$X$</span> with finite expectation, I know that <span class="math-container">$$X_n\to X, a.s.\text{and} |X_n| \leq X\implies \mathbb{E}|X-X_n|\to 0 \text{ by DCT.}$$</span></p> <p>I am wondering if it is possible to approximate <span class="math-container">$X$</span> (with finite expectation) by a sequence of <strong>simple</strong> random variables: <span class="math-container">$$\forall \varepsilon, \exists \text{ a simple r.v. } X_{{\varepsilon}} \text{ such that } |X_{{\varepsilon}}|\leq |X| \text{ and } \mathbb{E}|X-X_{{\varepsilon}}|&lt; \varepsilon.$$</span></p> <p>Any help will be greatly appreciated!</p>
J. W. Tanner
615,567
<p>Start by setting up equations.</p> <p>I prefer to write <span class="math-container">$A_0$</span> and <span class="math-container">$B_0$</span> for <span class="math-container">$C_{AO}$</span> and <span class="math-container">$C_{BO}$</span>, respectively, and <span class="math-container">$C$</span> for <span class="math-container">$x$</span>.</p> <p><span class="math-container">$\dfrac{dC}{dt}=-\dfrac{dA}{dt}=-\dfrac{dB}{dt}=kAB$</span></p> <p><span class="math-container">$A+C=A_0$</span></p> <p><span class="math-container">$B+C=B_0$</span></p> <p>Therefore <span class="math-container">$\dfrac{dC}{dt}=k(A_0-C)(B_0-C)=k(A_0B_0-(A_0+B_0)C+C^2)$</span>.</p> <p>This is a <a href="https://en.wikipedia.org/wiki/Separation_of_variables" rel="nofollow noreferrer">separable</a> differential equation, which can be solved with standard methods.</p>
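To illustrate, here is a minimal numerical check (my addition, with made-up values $k=0.5$, $A_0=2$, $B_0=1$): an RK4 integration of $\frac{dC}{dt}=k(A_0-C)(B_0-C)$ satisfies the implicit relation obtained by separating variables and using partial fractions.

```python
from math import log

k, A0, B0 = 0.5, 2.0, 1.0       # made-up rate constant and initial amounts

def dCdt(C):
    return k * (A0 - C) * (B0 - C)

# Classical RK4 for C' = k (A0 - C)(B0 - C), C(0) = 0, integrated up to t = 1.
C, t, dt = 0.0, 0.0, 1e-4
for _ in range(10_000):
    k1 = dCdt(C)
    k2 = dCdt(C + 0.5 * dt * k1)
    k3 = dCdt(C + 0.5 * dt * k2)
    k4 = dCdt(C + dt * k3)
    C += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += dt

# Separating variables and using partial fractions gives, for A0 != B0:
#   (1/(A0 - B0)) * ln( B0 (A0 - C) / (A0 (B0 - C)) ) = k t
residual = abs(log(B0 * (A0 - C) / (A0 * (B0 - C))) / (A0 - B0) - k * t)
```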
954,419
<p>I am teaching myself mathematics, my objective being a thorough understanding of game theory and probability. In particular, I want to be able to go through A Course in Game Theory by Osborne and Probability Theory by Jaynes.</p> <p>I understand I want to cover a lot of ground so I'm not expecting to learn it in less than a year or maybe even two. Still I'm fairly certain it's not impossible.</p> <p>However I would like to have a study plan more or less fleshed out just to know I'm on the right track. There were some other questions related to self learning math here but I couldn't find one like mine.</p> <p>I'd appreciate some feedback.</p> <p>Calc I + II: no book, I already know basic calculus</p> <ul> <li>Differential equations: MIT's OCW lectures </li> <li>Calc III: Stewart's Multivariable calculus</li> <li>Linear Algebra: Strang, Gilbert, Linear Algebra and Its Applications complemented with MIT's OCW lectures <strong>OR</strong> Linear Algebra Done Right</li> </ul> <p>Until here I am more or less certain on what I want to study, but I'm totally confused on what to learn next. Jaynes's book states that you need to be familiar with applied mathematics.</p> <p>After reading about applied mathematics, I came up with this plan to be done after finishing what I mentioned earlier (in order of course, not all at the same time):</p> <ol> <li>Topology A: Munkres, part I.</li> <li>Real analysis: Still not sure about the material, probably Abbott or Rudin.</li> <li>Complex Analysis: No idea about the material</li> <li>Group Theory: Rotman, An Introduction to the Theory of Groups</li> <li>Topology B: Munkres, part II.</li> </ol> <p>And then finally, Jaynes's Probability Theory and game theory.</p> <p>Am I missing something here? Some of these books such as Rotman's are aimed at a graduate level, is it foolish to think I will understand them?</p>
Trurl
72,915
<p>I'll second Shane's comments, but I think you should start studying game theory now. Schelling's The Strategy of Conflict is a classic, and very readable with very little math. A more formal book that I've read is Vorob'ev's Game Theory (Springer-Verlag), about the level of an undergraduate math book. An undergraduate game theory book will give you a taste of what is going on, and then you can decide if you want to dig in deeper.</p>
300,867
<p>I am having a difficult time understanding where I went wrong with the following: $$\begin{matrix}4x-y = 1 \\ 2x+3y = 3 \end{matrix} $$ $$\begin{matrix}4x-y = -3 \\ 2x+3y = 3 \end{matrix} $$</p> <p>I found the inverse of the common coefficient matrix of the systems: $$A^{-1} = \begin{bmatrix} \frac{3}{14} &amp; \frac{1}{14} \\ -\frac{1}{7} &amp; \frac{2}{7} \end{bmatrix} $$</p> <p>The issue is this question: Find the solutions to the two systems by using the inverse, i.e. by evaluating $A^{-1}B$ where $B$ represents the right hand side (i.e. $B = \begin{bmatrix}1 \\ 3 \end{bmatrix}$ for system (a) and $B = \begin{bmatrix}-3 \\ 3 \end{bmatrix}$ for system (b)). </p> <p>Now whatever I find for x and y for both solutions keeps coming out wrong. I am thinking I might have read the question wrongly... I am using the negative values to find the x and y. Not too sure how to go at this now...</p>
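For reference, here is a quick exact-arithmetic check (my addition, not part of the original post) that the inverse quoted above is correct, and what $A^{-1}B$ gives for each right-hand side:

```python
from fractions import Fraction as F

A = [[F(4), F(-1)],
     [F(2), F(3)]]
Ainv = [[F(3, 14), F(1, 14)],      # the inverse computed in the post
        [F(-1, 7), F(2, 7)]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def matmul(M, N):
    return [[sum(M[i][q] * N[q][j] for q in range(2)) for j in range(2)]
            for i in range(2)]

identity_check = matmul(A, Ainv)            # should be the 2x2 identity
xa = matvec(Ainv, [F(1), F(3)])             # solution of system (a)
xb = matvec(Ainv, [F(-3), F(3)])            # solution of system (b)
```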
copper.hat
27,978
<p>Here is another approach. The idea is to enlarge $C$ so that it still remains in $U$, and is 'locally connected by straight lines'.</p> <p>Let $\delta = \frac{1}{2}\sup \{ r | B(x, r) \subset U, \forall x \in C \}$. Since $C$ is compact, we have $\delta &gt; 0$, and $\overline{B}(x,\delta) \subset U$ for all $x \in C$. Furthermore, if we let $C' = \cup_{x \in C} \overline{B}(x,\delta)$, then $C' \subset U$ and $C'$ is also compact.</p> <p>(To see that $C'$ is compact, let $\phi: \mathbb{R}^n \to [0,\infty)$ be given by $\phi(x) = \min_{c \in C} \|x-c\|$. $\phi$ is continuous and $C' = \phi^{-1}[0,\delta]$. Hence $C'$ is closed, and trivially bounded. Since we are in $\mathbb{R}^n$, it follows that $C'$ is compact.)</p> <p>Let $L$ be an upper bound for $\|\frac{\partial f(x)}{\partial x}\|$ for $x \in C'$ and let $M$ be an upper bound for $|f(x)|$ for $x \in C'$. Now let $L' = \max(L,\frac{2M}{\delta})$.</p> <p>Then $f$ is Lipschitz of rank $L'$ on $C$. To see this, let $x,y \in C$. If $\|x-y\| &lt; \delta$, then by construction $y \in B(x,\delta) \subset C' \subset U$, and hence we have $|f(x)-f(y)| \leq L \|x-y\| \leq L' \|x-y\| $. If $\|x-y\| \geq \delta$, then $|f(x)-f(y)| \leq 2M = \frac{2M}{\delta} \delta \leq L' \|x-y\|$.</p>
3,316,730
<p>I'm taking a course in Linear Algebra right now, and am having a hard time wrapping my head around bases, especially since my prof didn't really explain them fully. I would really appreciate any insight you could give me as to what bases are! Also, can there can be multiple different bases for a single subspace?</p> <p>Thanks in advance. </p>
rschwieb
29,335
<p>They are subsets that “efficiently capture” the rest of the vector space. A sort of skeleton, if you will, or maybe like compressing a computer file.</p> <p>This means that you can recover every other element in the space by using just the operations (scalar multiplication and addition), and furthermore there is exactly one way (in a sense) to generate each element. </p> <p>Finally, linear transformations (a main object of study) are completely determined by what they do to a basis. You can see how this makes finite dimensional vector spaces easier to work with if you can forget about the potentially infinite number of vectors and just focus on what a finite subset does, and trust the other elements to follow suit.</p> <p>It can happen that a subspace has infinitely many distinct sets which are bases. Even for a <span class="math-container">$1$</span> dimensional space over an infinite field, there are infinitely many.</p>
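Here is a tiny concrete illustration (my addition) of the "exactly one way" point in $\mathbb{R}^2$: the same vector has different, but uniquely determined, coordinates with respect to two different bases.

```python
from fractions import Fraction as F

v = (F(3), F(5))

# In the standard basis e1 = (1, 0), e2 = (0, 1), the coordinates are just (3, 5).
# In the basis b1 = (1, 1), b2 = (1, -1), solve a*b1 + b*b2 = v:
#   a + b = 3  and  a - b = 5
a = (v[0] + v[1]) / 2
b = (v[0] - v[1]) / 2
reconstructed = (a + b, a - b)    # a*b1 + b*b2 written out coordinate-wise
```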
909,741
<blockquote> <p><strong>ALREADY ANSWERED</strong></p> </blockquote> <p>I was trying to prove the result that the OP of <a href="https://math.stackexchange.com/questions/909712/evaluate-int-0-frac-pi2-ln1-cos-x-dx"><strong><em>this</em></strong></a> question is given as a hint.</p> <p>That is to say: <em>imagine that you are not given the hint and you need to evaluate</em>:</p> <blockquote> <p>$$I = \int^{\pi/2}_0 \log{\cos{x}} \, \mathrm{d}x \color{red}{\overset{?}{=} }\frac{\pi}{2} \log{\frac{1}{2}} \tag{1}$$ </p> </blockquote> <p><em>How would you proceed?</em></p> <hr> <p>Well, I tried the following steps and, although it seems that I am almost there, I have run into some trouble:</p> <ul> <li>Taking advantage of the fact: $$\cos{x} = \frac{e^{ix}+e^{-ix}}{2}, \quad \forall x \in \mathbb{R}$$</li> <li>Plugging this into the integral and performing the change of variable $z = e^{ix}$, so the line integral becomes a contour integral over <em>a quarter of a circle of unit radius centered at $z=0$</em>, i.e.: $$ I = \frac{1}{4i} \oint_{|z|=1}\left[ \log{ \left(z+\frac{1}{z}\right)} - \log{2} \right] \, \frac{\mathrm{d}z }{z}$$</li> </ul> <blockquote> <p>$\color{red}{\text{We cannot do this because the integrand is not holomorphic on } |z| = 1 }$</p> </blockquote> <ul> <li>Note that the integrand has only one pole lying in the region enclosed by the curve $\gamma : |z|=1$ and it is holomorphic (is it?) almost everywhere (except in $z =0$), so the residue theorem tells us that:</li> </ul> <p>$$I = \frac{1}{4i} \times 2\pi i \times \lim_{z\to0} \color{red}{z} \frac{1}{\color{red}{z}} \left[ \underbrace{ \log{ \left(z+\frac{1}{z}\right)} }_{L} - \log{2} \right] $$</p> <ul> <li>As I said before, it seems that I am almost there, since the result given by eq. (1) follows iff $L = 0$, which is not true (I have tried L'Hôpital and some algebraic manipulations).</li> </ul> <p>Where did my reasoning fail?
Any helping hand?</p> <p>Thank you in advance, cheers!</p> <hr> <p>Please note that I'm not much of an expert in either complex analysis or complex integration so please forgive me if this is trivial.</p> <hr> <p>Notation: $\log{x}$ means $\ln{x}$.</p> <hr> <p>A graph of the function $f(z) = \log{(z+1/z)}$ helps to understand the difficulties:</p> <p><img src="https://i.stack.imgur.com/jAjTP.png" alt="enter image description here"></p> <p>where $|f(z)|$, $z = x+i y$ is plotted and the white path shows where $f$ is not holomorphic.</p>
idm
167,226
<p>Another way:</p> <p>Firstly $$\int_0^{\pi/2}\ln(\cos t)dt\underset{t=\frac{\pi}{2}-u}{=}\int_{0}^{\pi/2}\ln\left(\cos\left(\frac{\pi}{2}-u\right)\right)du=\int_0^{\pi/2}\ln(\sin u)du \tag 1 $$</p> <p>Then, $$\int_0^{\pi/2}\ln(\sin t)dt=\frac{1}{2}\left(\int_{0}^{\pi/2}\ln(\sin t)dt+\int_0^{\pi/2}\ln(\cos t)dt\right)=\frac{1}{2}\int_0^{\pi/2}\ln\left(\frac{\sin(2t)}{2}\right)dt\underset{r=2t}{=}\frac{1}{4}\int_0^\pi\ln\left(\frac{\sin r}{2}\right)dr=\frac{1}{4}\int_{0}^\pi\ln(\sin r)dr-\frac{\pi\ln 2}{4}\underset{Chasles}{=}\frac{1}{4}\int_0^{\pi/2}\ln(\sin r)dr+\frac{1}{4}\int_{\pi/2}^\pi\ln(\sin t)dt-\frac{\pi\ln 2}{4}\underset{t=r+\frac{\pi}{2}}{=}\frac{1}{4}\int_0^{\pi/2}\ln(\sin r)dr+\frac{1}{4}\int_0^{\pi/2}\ln(\sin t)dt-\frac{\pi\ln 2}{4}=\frac{1}{2}\int_0^{\pi/2}\ln(\sin t)dt-\frac{\pi\ln 2}{4}$$</p> <p>And thus $$\int_0^{\pi/2}\ln(\sin t)dt=\frac{1}{2}\int_0^{\pi/2}\ln(\sin t)dt-\frac{\pi\ln 2}{4}\iff\int_0^{\pi/2}\ln(\sin t)dt=-\frac{\pi\ln 2}{2}.$$</p> <p>By $(1)$ we conclude that $$\int_0^{\pi/2}\ln(\cos t)dt=-\frac{\pi\ln 2}{2}$$</p>
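The final value is easy to confirm numerically (my addition; the midpoint rule is chosen because the logarithmic singularity at $0$ is integrable and the rule avoids the endpoints):

```python
from math import sin, log, pi

# Midpoint rule for the integral of ln(sin t) over [0, pi/2];
# the claimed value is -pi ln(2) / 2.
N = 200_000
h = (pi / 2) / N
approx = sum(log(sin((i + 0.5) * h)) for i in range(N)) * h
target = -pi * log(2) / 2
```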
3,834,894
<p>I understand a continuous function may not be differentiable. But does every continuous function have directional derivative at every point? Thanks!</p>
Phil
543,036
<p>No. Consider the function <span class="math-container">$f(x,y) = e^{-\sqrt{x^2+y^2}}$</span>. Then <span class="math-container">$f$</span> is continuous everywhere, but <span class="math-container">$f$</span> has no directional derivative at <span class="math-container">$(0,0)$</span>. I'll let you prove this rigorously on your own, but this should be clear from the plot below; the graph is obviously not differentiable at the &quot;tip&quot;. <a href="https://i.stack.imgur.com/9P2wH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9P2wH.png" alt="enter image description here" /></a></p>
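A numerical check of this (my addition), along the direction $u=(1,0)$: the difference quotient from the right tends to $-1$ while the one from the left tends to $+1$, so the (two-sided) limit defining the directional derivative at the origin does not exist.

```python
from math import exp, hypot

def f(x, y):
    return exp(-hypot(x, y))    # e^{-sqrt(x^2 + y^2)}

h = 1e-7
from_right = (f(h, 0.0) - f(0.0, 0.0)) / h       # h -> 0+ : tends to -1
from_left = (f(-h, 0.0) - f(0.0, 0.0)) / (-h)    # h -> 0- : tends to +1
```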
147,994
<p><strong>Preamble</strong></p> <p>Consider a <a href="http://reference.wolfram.com/language/ref/SetterBar.html" rel="nofollow noreferrer"><code>SetterBar</code></a>:</p> <pre><code>SetterBar[1, StringRepeat["q", #] &amp; /@ Range@5] </code></pre> <blockquote> <p><a href="https://i.stack.imgur.com/b4eqb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b4eqb.png" alt="Blockquote"></a></p> </blockquote> <p>Here is the same with vertical layout:</p> <pre><code>SetterBar[1, StringRepeat["q", #] &amp; /@ Range@5, Appearance -&gt; "Vertical"] </code></pre> <blockquote> <p><a href="https://i.stack.imgur.com/7CIbD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7CIbD.png" alt="Blockquote"></a></p> </blockquote> <p>Two things are different:</p> <p>First, the unpressed buttons look different in vertical layout than in horizontal layout, because the vertical layout has a mouse-over behavior, which the horizontal layout doesn't. </p> <blockquote> <p><a href="https://i.stack.imgur.com/uQ1Q4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uQ1Q4.png" alt="Blockquote"></a></p> </blockquote> <p>Second, the buttons are nicely aligned by the width of the longest element, but when you click one more time on the already selected button, it shrinks to the length of the string.</p> <blockquote> <p><a href="https://i.stack.imgur.com/JeqYG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JeqYG.png" alt="Blockquote"></a></p> </blockquote> <p>I tried to construct a setter bar myself, using <a href="http://reference.wolfram.com/language/ref/Setter.html" rel="nofollow noreferrer"><code>Setter</code></a>:</p> <pre><code>Column[Setter[Dynamic@x, #, StringRepeat["q", #]] &amp; /@ Range@5] </code></pre> <blockquote> <p><a href="https://i.stack.imgur.com/tfvwE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tfvwE.png" alt="Blockquote"></a></p> </blockquote> <p>Looks normal now (no mouse-over behavior), but the same story
with the button changing its appearance when double clicked. Apparently, we can use <a href="http://reference.wolfram.com/language/ref/ImageSize.html" rel="nofollow noreferrer"><code>ImageSize</code></a> to fix this:</p> <pre><code>Column[Setter[Dynamic@x, #, StringRepeat["q", #], ImageSize -&gt; 100] &amp; /@ Range@5] </code></pre> <blockquote> <p><a href="https://i.stack.imgur.com/DRq3B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DRq3B.png" alt="Blockquote"></a></p> </blockquote> <p>I thought I could port this to <a href="http://reference.wolfram.com/language/ref/SetterBar.html" rel="nofollow noreferrer"><code>SetterBar</code></a>, and it works, but the FE complains that <a href="http://reference.wolfram.com/language/ref/ImageSize.html" rel="nofollow noreferrer"><code>ImageSize</code></a> is not a valid option for <code>SetterBar</code>:</p> <pre><code>SetterBar[1, StringRepeat["q", #] &amp; /@ Range@5, Appearance -&gt; "Vertical", ImageSize -&gt; 100] </code></pre> <blockquote> <p><a href="https://i.stack.imgur.com/MmCyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MmCyT.png" alt="Blockquote"></a></p> </blockquote> <p>Also, the buttons seem to be next to each other, not like in the column of <code>Setter</code>s, so I added <a href="http://reference.wolfram.com/language/ref/ImageMargins.html" rel="nofollow noreferrer"><code>ImageMargins</code></a>, and it works again, but the FE doesn't like it either:</p> <pre><code>SetterBar[1, StringRepeat["q", #] &amp; /@ Range@5, Appearance -&gt; "Vertical", ImageSize -&gt; 100, ImageMargins -&gt; {{0, 0}, {5, 0}}] </code></pre> <blockquote> <p><a href="https://i.stack.imgur.com/9AoOq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9AoOq.png" alt="Blockquote"></a></p> </blockquote> <p>Also, the appearance is still "mouse-over" like.
</p> <p><strong>Questions:</strong></p> <p>1) What happens with <a href="http://reference.wolfram.com/language/ref/SetterBar.html" rel="nofollow noreferrer"><code>SetterBar</code></a> when I change the layout from horizontal to vertical? Why does the appearance of the buttons change to "mouse-over"? Can we control it?</p> <p>2) Why does the FE complain about options for <code>SetterBar</code> that work fine? Is this normal behavior? Is this a bug? Are there other examples of similar behavior?</p>
garej
24,604
<p>To keep button size, one option would be to add <code>StringPadRight</code></p> <pre><code>SetterBar[1, StringPadRight @ ( StringRepeat["q", #] &amp; /@ Range@5 ), Appearance -&gt; {"Vertical", "Button"}] </code></pre> <p><a href="https://i.stack.imgur.com/OPnLx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OPnLx.png" alt="enter image description here"></a></p> <p>To diminish margins one possible way is to add "AbuttingLeft", i.e.</p> <pre><code> SetterBar[1, StringPadRight @ ( StringRepeat["q", #] &amp; /@ Range@5 ), Appearance -&gt; {"Vertical", "AbuttingLeft"}] </code></pre> <p><a href="https://i.stack.imgur.com/odDj9.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/odDj9.gif" alt="enter image description here"></a></p> <p>As a trade-off, one more documented option is to use <code>Appearance-&gt;"Palette"</code>.</p> <p><a href="https://i.stack.imgur.com/KeNOE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KeNOE.png" alt="enter image description here"></a></p> <p>Also notice that on my Mac OS with V10.3 the <code>"Horizontal"</code> appearance behaves in the same way (with a small shift of the right side of double-clicked buttons).</p> <p><a href="https://i.stack.imgur.com/HtMYS.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HtMYS.gif" alt="enter image description here"></a></p> <hr> <p>For <code>Manipulate</code> it is also possible to use <code>ImageSize</code> and <code>Alignment</code> via <code>Row</code> like so:</p> <pre><code>Manipulate[s, {{s, 1}, (# -&gt; Row[{#}, ImageSize -&gt; 39, Alignment -&gt; Center]) &amp; /@ (StringRepeat["q", #] &amp; /@ Range@5), SetterBar, Appearance -&gt; "Vertical"}, ControlPlacement -&gt; Left] </code></pre> <p>@MichaelE2's warning concerning size parameters applies here as well.</p>
706,980
<p>If I know that $f(z)$ is differentiable at $z_0$, $z_0 = x_0 + iy_0$. How do I prove that $g(z) = \overline{f(\overline{z})}$ is differentiable at $\overline z_0$?</p>
Jonas Granholm
134,205
<p>The function $f(z)=u(x,y)+iv(x,y)$ is differentiable if the first partial derivatives of $u$ and $v$ exist, are continuous, and satisfy the Cauchy-Riemann equations $$\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y},\quad\frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x}.$$</p> <p>Since we know that $f$ is differentiable in $z_0$ we know that the above is true, and $$g(z)=\overline{f(\bar z)}=u(x,-y)-iv(x,-y),$$ so the partial derivatives of $g$ are $$\frac{\partial u}{\partial x},\;\;\frac{\partial u}{\partial(-y)},\;\;\frac{\partial(-v)}{\partial x},\;\;\text{and}\;\;\frac{\partial(-v)}{\partial(-y)}.$$ Evaluated at $\bar z_0=x_0-iy_0$ these become $$\begin{aligned} \frac{\partial u}{\partial x}(x_0),&amp; &amp;&amp;\frac{\partial u}{\partial(-y)}(-y_0)=-\frac{\partial u}{\partial y}(y_0),\\[1em] \frac{\partial(-v)}{\partial x}(x_0)=-\frac{\partial v}{\partial x}(x_0),&amp; &amp;&amp;\frac{\partial(-v)}{\partial(-y)}(-y_0)=\frac{\partial v}{\partial y}(y_0). \end{aligned}$$ $f$ is differentiable, so these all exist and are continuous at $z_0$, and it is easy to see that they satisfy the Cauchy-Riemann equations. Thus $g$ is differentiable at $\bar z_0$.</p>
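A numerical illustration (my addition, with an arbitrarily chosen polynomial $f$, differentiable everywhere): difference quotients of $g(z)=\overline{f(\bar z)}$ taken along several directions at $\bar z_0$ agree, as expected for a complex-differentiable function.

```python
def f(z):
    return (1 + 2j) * z**2 + 3j * z + 5       # arbitrary polynomial, entire

def g(z):
    return f(z.conjugate()).conjugate()       # g(z) = conj(f(conj(z)))

z0 = 0.7 + 0.4j
zbar0 = z0.conjugate()
h = 1e-6
directions = (1, 1j, (1 + 1j) / abs(1 + 1j))
quotients = [(g(zbar0 + h * d) - g(zbar0)) / (h * d) for d in directions]
spread = max(abs(q - quotients[0]) for q in quotients)
```

Here $g(z)=(1-2j)z^2-3jz+5$, so $g'(\bar z_0)=2(1-2j)(0.7-0.4j)-3j=-0.2-6.6j$, which the quotients approximate.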
3,291,549
<p>How does one prove that the exponential and logarithmic functions are inverse using the definitions:</p> <p><span class="math-container">$$e^x= \sum_{i=0}^{\infty} \frac{x^i}{i!}$$</span> and <span class="math-container">$$\log(x)=\int_{1}^{x}\frac{1}{t}dt$$</span></p> <p>My naive approach (sort of ignoring issues of convergence) is to just apply the definitions straightforwardly, so in one direction I get:</p> <p><span class="math-container">\begin{align}\log(e^x)&amp;=\int_{1}^{e^x}\frac{1}{t}dt\\ &amp;=\int_{0}^{e^x-1}\frac{1}{1+t}dt\\ &amp;=\int_0^{e^x-1}\sum_{j=0}^\infty (-1)^jt^jdt \\ &amp;=\sum_{j=0}^\infty (-1)^j \int_0^{e^x-1} t^jdt\\ &amp;=\sum_{j=0}^\infty \frac{(-1)^j}{j+1}(e^x-1)^{j+1}\\ &amp;=\sum_{j=0}^\infty \frac{(-1)^j}{j+1} \sum_{k=0}^{j+1} \frac{n!}{k!(n-k)!}e^{x(n-k)}(-1)^k\\ &amp;=\sum_{j=0}^\infty \sum_{k=0}^{j+1} \frac{(-1)^{j+k}n!}{(j+1)k!(n-k)!} e^{x(n-k)}\\ &amp;=\sum_{j=0}^\infty \sum_{k=0}^{j+1} \frac{(-1)^{j+k}n!}{(j+1)k!(n-k)!} \sum_{\ell=0}^\infty \frac{(-1)^\ell}{\ell !}(n-k)^\ell x^\ell\\ &amp;=\sum_{j=0}^\infty \sum_{k=0}^{j+1} \sum_{\ell=0}^\infty \frac{(-1)^{j+k+\ell}n!(n-k)^\ell x^\ell}{(j+1)k!\ell!(n-k)!} \end{align}</span></p> <p>and I can't see at all that this is equal to <span class="math-container">$x$</span>. My guess is I'm going about this all wrong. </p>
Fractal
688,603
<p>We use the fact that <span class="math-container">$g(x)=\exp(x)$</span> is the unique function <span class="math-container">$g:\mathbb{R} \rightarrow (0,\infty)$</span> such that <span class="math-container">$g'(x)=g(x)$</span> and <span class="math-container">$g(0)=1.$</span></p> <p>Since <span class="math-container">$f(x)=\log(x)$</span> is evidently bijective, it must have the inverse <span class="math-container">$f^{-1}:\mathbb{R} \rightarrow (0,\infty)$</span>. By the well-known theorem on the derivative of the inverse function, <span class="math-container">$$(f^{-1}){'}(x)=\frac{1}{f'(f^{-1}(x))}=\frac{1}{1/(f^{-1}(x))}=f^{-1}(x)$$</span> for every <span class="math-container">$x&gt;0$</span>. Furthermore, since <span class="math-container">$f(1)=\log(1)=0,$</span> <span class="math-container">$f^{-1}(0)= 1$</span>. </p> <p>It follows that <span class="math-container">$f^{-1}(x)$</span> must be <span class="math-container">$\exp(x).$</span></p>
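One can also check the claim numerically straight from the two definitions (my addition; grid sizes and tolerances are arbitrary): compute $\log$ by a midpoint-rule integral of $1/t$ and $\exp$ by partial sums of the series, then verify that the composition returns the input.

```python
from math import fsum

def exp_series(x, terms=60):
    # Partial sum of sum_{i>=0} x^i / i!
    total, term = 0.0, 1.0
    for i in range(terms):
        total += term
        term *= x / (i + 1)
    return total

def log_integral(x, n=100_000):
    # Midpoint rule for the integral of 1/t from 1 to x
    # (works for x < 1 as well, where h < 0).
    h = (x - 1.0) / n
    return fsum(1.0 / (1.0 + (i + 0.5) * h) for i in range(n)) * h

errors = [abs(exp_series(log_integral(x)) - x) for x in (0.5, 2.0, 7.5)]
```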
490,802
<p>Is $(x,3,5)$ a plane, for $x\in\mathbb{R}$?</p> <p>I know that if two of the coordinates are "arbitrary", like $(x,y,4)$ or $(3,y,z)$, then it creates a plane (for $x,y,z\in \mathbb{R}$).</p> <p>Is there a way to tell if it would create a plane in $\mathbb{R}^3?$</p>
Rebecca J. Stones
91,818
<p>The subset $$\{(x,3,5):x \in \mathbb{R}\} \subseteq \mathbb{R}^3$$ is not a plane. It is an affine space, a translation of the vector space $$\{(x,0,0):x \in \mathbb{R}\} \subseteq \mathbb{R}^3$$ which has dimension $1$. A plane has dimension $2$. So, by the <a href="http://en.wikipedia.org/wiki/Dimension_theorem_for_vector_spaces" rel="nofollow">Dimension Theorem</a>, it cannot be a plane.</p>
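Computationally (my addition), the dimension count is visible from sample points: all differences of points in the set are multiples of $(1,0,0)$, so the directions span a $1$-dimensional space, whereas a plane would need two independent directions.

```python
points = [(x, 3, 5) for x in (-2.0, 0.0, 1.5, 4.0)]   # sample points of the set
base = points[0]
diffs = [tuple(p[i] - base[i] for i in range(3)) for p in points[1:]]
# Every difference is (t, 0, 0) for some t, i.e. a multiple of (1, 0, 0).
all_on_one_line = all(d[1] == 0 and d[2] == 0 for d in diffs)
```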
4,190,301
<p>I found <span class="math-container">$\tilde{R}$</span> in a mathematical text, and I would like to know how this is pronounced. I tried to search on the internet but was not able to find anything related.</p>
epi163sqrt
132,007
<p>The symbol <span class="math-container">$\tilde{R}$</span> is also pronounced <em>arr twiddle</em>. We can hear it for instance in these <em><a href="https://www.youtube.com/watch?v=mbv3T15nWq0&amp;t=328s" rel="nofollow noreferrer">lecture notes</a></em> by F. P. Schuller at minute 27, where he defines a <em>linear map</em> between vector spaces.</p>
4,344,571
<p>In a previous exam assignment, there is a problem that asks for a proof that <span class="math-container">$\mathbb{Z}_{24}$</span> and <span class="math-container">$\mathbb{Z}_{4}\times\mathbb{Z}_6$</span> are <strong>not</strong> isomorphic.</p> <p>We have <span class="math-container">$\mathbb{Z}_{24}$</span> is isomorphic to <span class="math-container">$\mathbb{Z}_4\times\mathbb{Z}_6$</span> if there exists a bijective function <span class="math-container">$f∶ \mathbb{Z}_{24}\rightarrow\mathbb{Z}_{4}\times\mathbb{Z}_6$</span> such that <span class="math-container">$f(a+b)=f(a)+f(b)$</span> and <span class="math-container">$f(ab)=f(a)f(b) \forall a,b\in R$</span>. Since there are exactly <span class="math-container">$24$</span> unique elements in both <span class="math-container">$\mathbb{Z}_{24}$</span> and <span class="math-container">$\mathbb{Z}_4\times\mathbb{Z}_6$</span>, we can construct a bijective function <span class="math-container">$f∶ \mathbb{Z}_{24}\rightarrow\mathbb{Z}_{4}\times\mathbb{Z}_6$</span>. Consider then <span class="math-container">$$\begin{aligned} &amp;f\left([a]_{24}+[b]_{24}\right) \\ &amp;=f\left([a+b]_{24}\right) \\ &amp;=\left([a+b]_{4},[a+b]_{6}\right) \\ &amp;=\left([a]_{4}+[b]_{4},[a]_{6}+[b]_{6}\right) \\ &amp;=\left([a]_{4},[a]_{6}\right)+\left([b]_{4},[b]_{6}\right) \\ &amp;=f\left([a]_{24}\right)+f\left([b]_{24}\right) \end{aligned}$$</span> and <span class="math-container">$$\begin{aligned} &amp;f\left([a]_{24}[b]_{24}\right) \\ &amp;=f\left([a b]_{24}\right) \\ &amp;=\left([a b]_{4},[a b]_{6}\right) \\ &amp;=\left([a]_{4}[b]_{4},[a]_{6}[b]_{6}\right) \\ &amp;=\left([a]_{4},[a]_{6}\right)\left([b]_{4},[b]_{6}\right) \\ &amp;=f\left([a]_{24}\right) f\left([b]_{24}\right). 
\end{aligned}$$</span> It therefore seems to me that this function shows that <span class="math-container">$\mathbb{Z}_{24}$</span> <em>is</em> isomorphic to <span class="math-container">$\mathbb{Z}_4\times\mathbb{Z}_6$</span>.</p> <p>Can someone tell me where I go wrong with this &quot;proof&quot;, and tell me how I can show that the rings are <em>not</em> isomorphic?</p>
A. Thomas Yerger
112,357
<p>You don't actually say what <span class="math-container">$f$</span> is anywhere. These are a bunch of manipulations about what would be true of <span class="math-container">$f$</span> if it existed. In particular, you take <span class="math-container">$f$</span> to be some bijection between these sets, but you are assuming that <span class="math-container">$f$</span> has the additive and multiplicative properties you claim it should without actually writing anything down.</p>
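To make this concrete (my addition): the natural candidate $f(n)=(n \bmod 4,\, n \bmod 6)$ does satisfy the additive and multiplicative identities computed in the question, but it is not injective, so it is not a bijection $\mathbb{Z}_{24}\to\mathbb{Z}_4\times\mathbb{Z}_6$.

```python
images = [(n % 4, n % 6) for n in range(24)]
distinct = len(set(images))           # only 12 distinct pairs are hit, not 24
injective = distinct == 24
collision = images[0] == images[12]   # 0 and 12 both map to (0, 0)
```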
904,041
<p>$$tx'(x'+2)=x$$ First I multiplied it: $$t(x')^2+2tx'=x$$ Then differentiated both sides: $$(x')^2+2tx'x''+2tx''+x'=0$$ substituted $p=x'$ and rewrote it as a multiplication $$(2p't+p)(p+1)=0$$ So either $(2p't+p)=0$ or $p+1=0$</p> <p>The first one gives $p=\frac{C}{\sqrt{T}}$ The second one gives $p=-1$. My question is how do I take the antidervative of this in order to get the answer for the actual equation?</p>
atomteori
156,639
<p>Group theory gives a pretty direct answer. Use $$ G(t,x)=(\lambda t,\lambda^\beta x),\quad\lambda_0=1 $$Then apply the group to the DEQ: $$ t\bigg(\frac{dx}{dt}\bigg)^2+2t\frac{dx}{dt}=x\rightarrow \lambda t\bigg(\frac{\lambda ^\beta dx}{\lambda dt}\bigg)^2+2\lambda t\frac{\lambda^\beta dx}{\lambda dt}=\lambda^\beta x $$For the group to be invariant, $2\beta-1=\beta$ or $\beta=1$. There are two nontrivial stabilizers for this group: $$ \mu=\frac{x}{t^\beta}=\frac{x}{t} $$and $$ \nu=\frac{x'}{t^{\beta -1}}=x' $$According to Sophus Lie, your DEQ can be written in terms of the stabilizers of an invariant Lie group translation, one that preserves the structure of the smooth manifold. Since our group is an infinite continuous group, we are able to do just that. $$ x'^2+2x'=\frac{x}{t}\rightarrow \nu^2+2\nu=\mu $$This is a quadratic for $\nu$. $$ \nu=-1\pm\sqrt{1+\mu} $$Since $$ t\frac{d\mu}{dt}=\nu-\mu=-1\pm\sqrt{1+\mu}-\mu $$this becomes a separable equation. $$ \frac{d\mu}{-(1+\mu)\pm\sqrt{1+\mu}}=\frac{dt}{t} $$This integrates and simplifies to give $$ 1\mp\sqrt{1+\mu}=\frac{C}{\sqrt{t}} $$and after substituting for $\mu$ and simplifying again, we arrive at the solution: $$ x=C^2-2C\sqrt{t} $$There are other, perhaps easier ways of getting to this solution, but few people know about applied Lie Theory so I like to "beat the drum" whenever possible.</p>
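A quick numerical verification (my addition; the value of $C$ is arbitrary) that $x=C^2-2C\sqrt{t}$ indeed satisfies $tx'(x'+2)=x$, using a central difference for $x'$:

```python
C = 1.7    # arbitrary constant

def x(t):
    return C**2 - 2 * C * t**0.5

h = 1e-6
residuals = []
for t in (0.5, 1.0, 3.0):
    xp = (x(t + h) - x(t - h)) / (2 * h)          # numerical x'(t)
    residuals.append(abs(t * xp * (xp + 2) - x(t)))
worst = max(residuals)
```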
3,148,076
<p>CONTEXT: Challenge question set by uni lecturer for discrete mathematics course</p> <p>Question: Prove the following statement is true using proof by contradiction: </p> <p>For all positive integers <span class="math-container">$x$</span>, if <span class="math-container">$x$</span>, <span class="math-container">$x+2$</span> and <span class="math-container">$x+4$</span> are all prime, then <span class="math-container">$x=3$</span>.</p> <p>I know I'd do this by assuming the negation of the statement and then deriving a contradiction. </p> <p>I've also found the negation to be that there exists a positive prime integer <span class="math-container">$x$</span> such that <span class="math-container">$x+2$</span> and <span class="math-container">$x+4$</span> are prime, but <span class="math-container">$x≠3$</span></p> <p>I'm stuck at the part of the proof where you show that the negation is false.</p>
Myunghyun Song
609,441
<p><strong>Hint:</strong> One of <span class="math-container">$x, x+2, x+4$</span> must be divisible by <span class="math-container">$3$</span>.</p>
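<p>The hint is easy to check by machine; a short Python sketch (my own illustration, not part of the hint) confirms that a multiple of $3$ always appears among $x$, $x+2$, $x+4$, and that $x=3$ gives the only prime triple in a small range:</p>

```python
# Among x, x+2, x+4 the residues mod 3 are x, x+2, x+1 -- i.e. all three
# residue classes, so one of the three numbers is always divisible by 3.
for x in range(1, 1000):
    assert any((x + d) % 3 == 0 for d in (0, 2, 4))

def is_prime(n):
    return n > 1 and all(n % k for k in range(2, int(n**0.5) + 1))

# Hence if x > 3, the member divisible by 3 exceeds 3 and is composite;
# the only triple of primes is x = 3:
triples = [x for x in range(1, 1000)
           if all(is_prime(x + d) for d in (0, 2, 4))]
assert triples == [3]
```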
3,031,460
<blockquote> <p>Give an example of an assertion which is not true for any positive integer, yet for which the induction step holds.</p> </blockquote> <p>First of all, definition.</p> <blockquote> <p>In <strong>inductive step</strong>, we suppose that <span class="math-container">$P(k)$</span> is true for some positive integer <span class="math-container">$k$</span> and then we prove that <span class="math-container">$P(k + 1)$</span> is true.</p> </blockquote> <p><strong>My thoughts</strong> : Since the assertion has to satisfy the induction step, it must be false for <span class="math-container">$n=1$</span>. </p> <p>But I cannot think of any such assertion. Does anyone have any hints? Thank you.</p> <p><strong>Edit</strong>: Just realised that my thoughts don't make much sense, as the question demands the assertion to be false for every positive integer. </p>
Badam Baplan
164,860
<p>There is no shortage of good answers to this question, so I'll give a couple examples in the hopes that thinking about them will help you to produce your own answers.</p> <p>Two possibilities for <span class="math-container">$P(n)$</span> are</p> <p>-the assertion that <span class="math-container">$n! = 0$</span> </p> <p>-the assertion that <span class="math-container">$n &gt; 2n$</span></p>
22,753
<p>I've learned the process of orthogonal diagonalisation in an algebra course I'm taking...but I just realised I have no idea what the point of it is.</p> <p>The definition is basically this: "A matrix <span class="math-container">$A$</span> is orthogonally diagonalisable if there exists a matrix <span class="math-container">$P$</span> which is orthogonal and <span class="math-container">$D = P^tAP$</span> where <span class="math-container">$D$</span> is diagonal". I don't understand the significance of this though...what is special/important about this relationship?</p>
Qiaochu Yuan
232
<p>Mathematics is all about equivalence relations. Often we are not really interested in objects on-the-nose, we are interested in objects up to some natural <a href="http://en.wikipedia.org/wiki/Equivalence_relation">equivalence relation</a>. Matrices really define <em>linear transformations</em>, so in order to talk about linear transformations using matrices we need to specify when two matrices give the same abstract linear transformation. At the level of a first course in linear algebra, there are at least two natural choices, which are often not distinguished.</p> <ul> <li>Up to change of basis. This is a reasonable choice if your vector space has no natural norm or inner product (e.g. if it is a space of polynomials). The nicest matrices here are the ones that can be diagonalized (by an arbitrary invertible matrix). </li> <li>Up to change of <em>orthonormal</em> basis. This is a reasonable choice if your vector space has an inner product / Euclidean norm, since you want to be able to compute inner products in the easiest possible way with respect to your basis. Here the nicest matrices are the ones that can be orthogonally diagonalized.</li> </ul> <p>One of the most important examples of the latter is the use of <a href="http://en.wikipedia.org/wiki/Fourier_series">Fourier series</a>, which orthogonally diagonalize differentiation. This is an important technique for solving differential equations.</p> <hr> <p>Perhaps you are also not familiar with the point of <em>diagonalization.</em> The point of diagonalization is to change coordinates so that the linear transformation you're interested in is as simple as possible (it doesn't get simpler than diagonal matrices). That makes it easy to analyze, as in the Fourier series example above. </p>
22,753
<p>I've learned the process of orthogonal diagonalisation in an algebra course I'm taking...but I just realised I have no idea what the point of it is.</p> <p>The definition is basically this: "A matrix <span class="math-container">$A$</span> is orthogonally diagonalisable if there exists a matrix <span class="math-container">$P$</span> which is orthogonal and <span class="math-container">$D = P^tAP$</span> where <span class="math-container">$D$</span> is diagonal". I don't understand the significance of this though...what is special/important about this relationship?</p>
Tpofofn
4,726
<p>Matrices are complicated objects. At first glance they are rectangular arrays of numbers with a complicated multiplication rule. Diagonalization helps us reduce the matrix multiplication operation to a sequence of simple steps which make sense. In your case you are asking about orthogonal diagonalization, so I will limit my comments to that. Note that I normally think about diagonalization as a factorization of the matrix $\mathbf A$ as </p> <p>$$\mathbf A = \mathbf P \mathbf D \mathbf P^T$$</p> <p>We know that $\mathbf D$ contains the eigenvalues of $\mathbf A$ and $\mathbf P$ contains the corresponding eigenvectors. Consider the multiplication $\mathbf{y=Ax}$. We can perform the multiplication $\mathbf y = \mathbf P \mathbf D \mathbf P^T \mathbf x$ step-by-step as follows:</p> <p>First: $\mathbf x&#39; = \mathbf P^T \mathbf x$. This step projects $\mathbf x$ onto the eigenvectors because the eigenvectors are in the rows of $\mathbf P^T$.</p> <p>Second: $\mathbf y&#39; = \mathbf D \mathbf x&#39;$. This step stretches the resultant vector independently in each direction which corresponds to an eigenvector. This is the key. A matrix does a simple scaling (in general we may also shear or rotate) operation in independent directions (orthogonal case only!) in a particular basis or representation.</p> <p>Third: $\mathbf y = \mathbf P \mathbf y&#39;$. We take our resulting stretched vector and linearly combine the eigenvectors to get back to our original space.</p> <p>So in summary, diagonalization tells us that under matrix multiplication, a vector can be represented in a special basis under which the actual operation of the matrix is a simple diagonal matrix (the simplest possible) and then represented back in our original space.</p>
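<p>To make the three steps concrete, here is a small Python sketch on a hand-worked $2\times2$ example of my own (not from the question): $\mathbf A = \begin{pmatrix}2&1\\1&2\end{pmatrix}$ has eigenvalues $3$ and $1$ with orthonormal eigenvectors $(1,1)/\sqrt2$ and $(1,-1)/\sqrt2$:</p>

```python
import math

# Hand-worked 2x2 example: A = [[2,1],[1,2]] with eigenvalues 3 and 1.
s = 1 / math.sqrt(2)
P = [[s, s], [s, -s]]      # columns = orthonormal eigenvectors
D = [3, 1]                 # diagonal of D

def apply_Pt(x):           # step 1: project onto the eigenvectors
    return [P[0][0] * x[0] + P[1][0] * x[1], P[0][1] * x[0] + P[1][1] * x[1]]

def apply_D(x):            # step 2: stretch independently in each direction
    return [D[0] * x[0], D[1] * x[1]]

def apply_P(x):            # step 3: recombine the eigenvectors
    return [P[0][0] * x[0] + P[0][1] * x[1], P[1][0] * x[0] + P[1][1] * x[1]]

x = [1.0, 0.0]
y = apply_P(apply_D(apply_Pt(x)))
# Direct multiplication A x gives (2, 1); the three steps agree:
assert abs(y[0] - 2.0) < 1e-12 and abs(y[1] - 1.0) < 1e-12
```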
271,592
<p>Let $A$ be a C$^*$-algebra. A pre-Hilbert $A$-module $H$ is a right $A$ module with a $A$-valued inner product (which is linear in the second variable and conjugate linear in the first variable) such that</p> <p>1.$\langle \xi, \beta \cdot T \rangle_{A} = \langle \xi, \beta\rangle_{A}T$,</p> <p>2.$\langle \xi, \xi\rangle_{A} \geq 0$, and $\langle \xi, \xi\rangle_{A}=0$ implies $\xi=0$,</p> <p>where $\xi, \beta \in H$ and $T \in A$. If $H$ is complete with respect to the norm $\|\langle \xi, \xi\rangle_{A}\|^{1/2}$, then $H$ is called a Hilbert $A$-module.</p> <p>If $H$ is a pre-Hilbert $A$-module, let $\overline{H}$ be the completion of $H$ with respect to the norm $\|\langle \xi, \xi\rangle_{A}\|^{1/2}$. Let $B(H)$ be the set of bounded adjointable mapping from $H$ to $H$. It is clear that each $T \in B(H)$ extends to a element in $B(\overline{H})$ (the bounded adjointable mapping from $\overline{H}$ to $\overline{H}$). Thus, we can identify $B(H)$ as a subalgebra in $B(\overline{H})$. My question is if $B(H)$ is dense in $B(\overline{H})$?</p>
Yemon Choi
763
<p>The following is based on a copy of some old handwritten notes; I haven't had time to refresh my memory on all the details. I will try to fill in these details in the next day or so. $\newcommand{\Nat}{{\bf N}}\newcommand{\veps}{\varepsilon}$</p> <hr> <p>Using the notation of my comment: take $V_0=\ell_1(\Nat)$ sitting inside $V=\ell_2(\Nat)$, and let $A_0$ denote the space of all bounded operators on $V$ that map $V_0$ to itself. We seek a bounded operator $S:\ell_2(\Nat)\to \ell_2(\Nat)$ whose distance from $A_0$ is strictly positive.</p> <p>Fix a sequence of non-empty finite subsets of $\Nat$, denoted by $(F_j)_{j\geq 1}$, which satisfy $\max F_j &lt; \min F_{j+1}$. (We will later want $|F_j|\to\infty$ at a certain rate.) Let $$ u_j := |F_j|^{-1/2} {\bf 1}_{F_j}$$ and, denoting the standard basis of $\ell_2$ by $(e_j)_{j\geq 1}$, define $S:c_{00}(\Nat)\to \ell_2(\Nat)$ by $Se_j = u_j$. A quick calculation shows that $S$ extends to an isometry on $\ell_2(\Nat)$.</p> <p>Now suppose $T\in {\mathcal B}(\ell_2(\Nat))$ satisfies $\Vert S-T\Vert = \veps &lt;1$. The idea is that since $\Vert Te_n-u_n\Vert_2 \leq\veps$, $Te_n$ must be close in $\ell_2$-norm to something that will have large $\ell_1$-norm (provided we make $|F_j|\nearrow \infty$ at a suitable rate). Moreover, we can inductively choose a subsequence $(e_{n(j)})$ so that the vectors $Te_{n(j)}$ have approximately disjoint support in some sense. One then considers $$ x:= \sum_{j=1}^\infty c^j e_{n(j)} $$ for appropriately chosen $c\in (0,1)$, to obtain $Tx = \sum_{j=1}^\infty c^j Te_{n(j)} \notin \ell_1(\Nat)$.</p>
271,592
<p>Let $A$ be a C$^*$-algebra. A pre-Hilbert $A$-module $H$ is a right $A$ module with a $A$-valued inner product (which is linear in the second variable and conjugate linear in the first variable) such that</p> <p>1.$\langle \xi, \beta \cdot T \rangle_{A} = \langle \xi, \beta\rangle_{A}T$,</p> <p>2.$\langle \xi, \xi\rangle_{A} \geq 0$, and $\langle \xi, \xi\rangle_{A}=0$ implies $\xi=0$,</p> <p>where $\xi, \beta \in H$ and $T \in A$. If $H$ is complete with respect to the norm $\|\langle \xi, \xi\rangle_{A}\|^{1/2}$, then $H$ is called a Hilbert $A$-module.</p> <p>If $H$ is a pre-Hilbert $A$-module, let $\overline{H}$ be the completion of $H$ with respect to the norm $\|\langle \xi, \xi\rangle_{A}\|^{1/2}$. Let $B(H)$ be the set of bounded adjointable mapping from $H$ to $H$. It is clear that each $T \in B(H)$ extends to a element in $B(\overline{H})$ (the bounded adjointable mapping from $\overline{H}$ to $\overline{H}$). Thus, we can identify $B(H)$ as a subalgebra in $B(\overline{H})$. My question is if $B(H)$ is dense in $B(\overline{H})$?</p>
heller
62,272
<p>Thanks Yemon. Let me fill in the details of your example. </p> <p>Let $H=l^1(N)$. Then $\overline{H}=l^2(N)$.</p> <p>Let $\xi_n = 1/\sqrt{2^n} \sum_{i=0}^{2^n-1} e_{2^n+i}$ where $\{e_1, e_2, \ldots \}$ is the canonical orthonormal basis of $l^2(N)$. Define $V \in B(l^2(N))$ by $Ve_n = \xi_n$. Then it is clear that $V$ is a partial isometry.</p> <p>Assume that $T \in B(H)$ and $\|T-V\| &lt; 1/2$. Note that $\|Te_n - \xi_n\| &lt; 1/2$. Thus $$\sum_{i=0}^{2^n-1} |\langle{e_{2^n+i},Te_n\rangle}| \geq \sqrt{2^{n}}-\sum_{i=0}^{2^n-1} |\langle{e_{2^n+i},Te_n\rangle} - 1/\sqrt{2^n}| \geq \sqrt{2^{n}}(1-\|Te_n - \xi_n\|)\geq \frac{\sqrt{2^{n}}}{2}.$$</p> <p>We now choose inductively two increasing sequences of non-negative integers $1=n_0 &lt; n_1&lt; n_2 &lt; \cdots$ and $n(1) &lt; n(2) &lt; \cdots$ such that $n(k) \geq k^2$ and $$\sum_{i \notin [n_{l-1}, n_{l}-1]} |\langle{e_{i},Te_{n(l)}\rangle}| &lt; 1, \quad l = 1, 2, \ldots $$</p> <p>Let $n(1) =1$. Since $Te_1 \in l^1(N)$, we can choose $n_1 \in N$ satisfying the above condition. Assume that $\{n_1, \ldots, n_k\}$ and $\{n(1), \ldots, n(k)\}$ are chosen. Recall that $T^*$ also maps $l^1(N)$ into $l^1(N)$ and $\{T^*e_i\}_{i=1}^{n_k-1} \subset l^1(N)$. We may choose $n(k+1) &gt; \max(n_k, n(k))$ such that $\sum_{i \in [1, n_{k}-1]} |\langle{e_{i},Te_{n(k+1)}\rangle}| &lt; 1/2$. Now it is clear that we can choose $n_{k+1} &gt; n_k$ such that $\sum_{i \notin [n_{k}, n_{k+1}-1]} |\langle{e_{i},Te_{n(k+1)}\rangle}| &lt; 1$. </p> <p>Let $\beta = \sum_{k=1}^{\infty} 1/k^2 e_{n(k)} \in l^1(N)$. Then $$\|T\beta\|_1 = \sum_{l=1}^{\infty} \sum_{i \in [n_{l-1}, n_l -1]} |\langle{e_i, \sum_{k=1}^{\infty} 1/k^2 Te_{n(k)}\rangle}| $$ $$ \geq \sum_{l=1}^{\infty} \frac{\sqrt{2^{n(l)}}}{2l^2} - \sum_{l=1}^{\infty} \frac{1}{l^2} = +\infty. $$</p>
164,896
<p>I want to create a list of length <code>l</code> with the function $f(x_n)=x_{n-1}+r$ where $r$ is a random real number between -1 and 1 and $x_0=1$. I got it working like this:</p> <pre><code>l = 50; a = Range[l]; a[[1]] = 0; For[i = 2, i &lt;= l, i++, a[[i]] = a[[i - 1]] + RandomReal[{-1, 1}]]; a </code></pre> <p>But I have a feeling I'm doing it "wrong" and this can be heavily optimized and written shorter.</p> <p>Any ideas ?</p> <p>For example the fact that I use <code>Range</code> to create a list with elements that I never use seems wrong...</p>
José Antonio Díaz Navas
1,309
<p>I understand you want to generate a sequence of real numbers recursively (with some randomness) starting with $x_0=1$. Maybe this is a solution:</p> <pre><code>SeedRandom[2]; f[0] = 1; f[n_Integer /; n &gt; 0] := f[n] = f[n - 1] + RandomReal[{-1, 1}]; l = 50; f[#] &amp; /@ Range[0, l] (* {1, 1.44448, 0.663377, 0.604782, 0.675946, 0.842301, 0.430186, -0.239505, -0.0369893, 0.471446, 1.01369, 1.57084, 0.618061, 1.46357, 2.44848, 2.1493, 1.23931, 1.24203, 1.50954, 1.79396, 1.57371, 1.90365, 2.59141, 2.72949, 2.52592, 2.00322, 2.35025, 2.18926, 2.36406, 1.38073, 2.26561, 2.80813, 2.10314, 3.03269, 3.83018, 3.4961, 2.9052, 3.58327, 3.08405, 2.56132, 2.79456, 3.55316, 3.36217, 3.1672, 3.19959, 2.78361, 2.48201, 1.85312, 2.05956, 1.12755, 1.67286} *) </code></pre> <p>The use of <code>SeedRandom</code> is for always obtaining the same sequence. If you do not want this and want to generate a different sequence every time, just comment out or remove that line.</p>
1,702,616
<p>I was working on a programming problem to find all 10-digit perfect squares when I started wondering if I could figure out how many perfect squares have exactly $N$ digits. I believe that I am close to finding a formula, but I am still off by one in some cases.</p> <p>Current formula where $n$ is the number of digits:</p> <p>$\lfloor\sqrt{10^n-1}\rfloor - \lfloor\sqrt{10^{n-1}}\rfloor$ </p> <p>How I got here: </p> <ol> <li>Range of possible 10-digit numbers is from $10^9$ to $10^{10}-1$</li> <li>10-digit perfect squares should fall into the range of $\sqrt{10^9}$ to $\sqrt{10^{10}-1}$</li> </ol> <p>Results of program for $n = 1,2,3,4,5$:</p> <pre><code>DIGITS=1, ACTUAL_COUNT=3, COMPUTED_COUNT=2 DIGITS=2, ACTUAL_COUNT=6, COMPUTED_COUNT=6 DIGITS=3, ACTUAL_COUNT=22, COMPUTED_COUNT=21 DIGITS=4, ACTUAL_COUNT=68, COMPUTED_COUNT=68 DIGITS=5, ACTUAL_COUNT=217, COMPUTED_COUNT=216 </code></pre> <p>Program: </p> <pre><code>#!/usr/bin/perl use strict; use warnings; sub all_n_digit_perfect_squares { my ($n) = @_; my $count = 0; my $MIN = int( sqrt( 10**($n-1) ) ); my $MAX = int( sqrt( (10**$n)-1 ) ); foreach my $i ( $MIN .. $MAX ) { if ( ($i * $i) &gt;= 10**($n-1) ) { $count++; } } print "DIGITS=$n" . ", ACTUAL_COUNT=$count" . ", COMPUTED_COUNT=" . ($MAX-$MIN), "\n"; return; } all_n_digit_perfect_squares($_) for (1..5); </code></pre> <p>Any advice on where I went wrong would be helpful.</p>
Nigel Galloway
63,512
<p>The number of perfect squares between any two numbers $a$ and $b$ with $a$ less than $b$ is $\lfloor\sqrt{b}\rfloor - \lceil\sqrt{a}\rceil + 1$. E.g. for $a=1000$, $b=2000$: $\lceil\sqrt{1000}\rceil = 32$ and $\lfloor\sqrt{2000}\rfloor = 44$. So $32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43$ and $44$ when squared will be perfect squares between $1000$ and $2000$, and $44-32+1=13$. If you are silly and try for the number of perfect squares between $1000$ and $1001$ then $31-32+1=0$; between $9$ and $10$ then $3-3+1=1$.</p>
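<p>The formula is easy to check in code. A Python sketch of my own (using exact integer square roots, which also sidesteps the floating-point issues lurking in the question's program):</p>

```python
import math

def count_squares(a, b):
    # number of perfect squares n with a <= n <= b (requires 1 <= a <= b);
    # ceil(sqrt(a)) computed exactly as isqrt(a-1) + 1
    return math.isqrt(b) - (math.isqrt(a - 1) + 1) + 1

# n-digit counts match the ACTUAL_COUNT values from the question's program
counts = [count_squares(10**(n - 1), 10**n - 1) for n in range(1, 6)]
assert counts == [3, 6, 22, 68, 217]

# and the worked examples above
assert count_squares(1000, 2000) == 13
assert count_squares(1000, 1001) == 0
assert count_squares(9, 10) == 1
```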
1,779,068
<p>Let $H$ and $K$ be two subgroups of a group $G$ such that $[G : H]=2$ and $K$ is not a subgroup of $H$. Then show that $HK=G$. Now, since $HK$ is a subset of $G$ we need only to show that $G$ is a subset of $HK$. But how can I show it? Please help me. Thank you in advance.</p>
Community
-1
<p>By Lagrange's,</p> <p>$$[G : HK][HK:H] = [G:H] = 2$$</p> <p>since $K \not\subset H$, $[HK:H] &gt; 1 \implies [HK:H] =2 $, so $[G:HK] = 1 \implies G = HK$</p>
207,778
<p>I want to save expressions as well as their names in a file.</p> <pre><code> func[i_] := i; Do[func[i] &gt;&gt;&gt; out.m,{i,1,3}]; </code></pre> <p>The output is </p> <pre><code> cat out.m 1 2 3 </code></pre> <p>However the desired output is</p> <pre><code> cat out.m func[1] = 1; func[2] = 2; func[3] = 3; </code></pre> <p><code>Save</code> does not save here.</p>
Fortsaint
10,101
<pre><code>func[i_] := i Do[ "func["&lt;&gt;ToString@i&lt;&gt;"] = "&lt;&gt;ToString@func@i &gt;&gt;&gt; "out.m" , {i,1,3} ] </code></pre> <p><strong>Edit 1</strong>: It's not clear to me what your scope is, but this does what you asked</p> <pre><code>"func["&lt;&gt;ToString@#&lt;&gt;"] = "&lt;&gt;ToString@func@# &amp; /@ Range@3 // Export["out.m", #~StringRiffle~"\n"&lt;&gt;"\n","Text"]&amp; </code></pre> <p><strong>Edit 2</strong>: Given that your scope is to create a file that contains only the <code>DownValues</code> of a symbol <code>f</code>, you do not need to define a substitution rule (<code>:=</code>) for <code>f</code>. So, let's just define the <code>DownValues</code> of <code>f</code> and export them in a text file</p> <pre><code>Remove@f f@#~Set~# &amp;/@ Range@3 ToString@Information[f]@"DownValues" // Export["out.m",#,"Text"]&amp; </code></pre>
1,660,289
<p>I want to find the line that passes through $(3,1,-2)$ and intersects at a right angle the line $x=-1+t, y=-2+t, z=-1+t$. </p> <p>The line that passes through $(3,1,-2)$ is of the form $l(t)=(3,1,-2)+ \lambda u, \lambda \in \mathbb{R}$ where $u$ is a parallel vector to the line. </p> <p>There will be a $\lambda \in \mathbb{R}$ such that $3+ \lambda u_1=-1+t, 1+ \lambda u_2=-2+t, -2+ \lambda u_3=-1+t$. </p> <p>Is it right so far? How can we continue?</p>
Erick Wong
30,402
<p>Julián Aguirre's answer is the definitive one, but I also want to give some insight on why this conjecture is kind of unreasonable (i.e. that one should expect it to become false by just doing a little computation).</p> <p><strong>Caveat</strong>: none of the following is proven — for instance, it hasn't been proven that there are infinitely many twin primes, so the probability of a number being part of a twin prime could in fact be $0$. But it is based on what number theorists call a <em>heuristic</em> argument based on the idea that primes are a reasonably good source of randomness and don't contain hidden correlations except for those that are easily explained (e.g. very few even numbers are prime). Heuristic arguments are frequently used to guess at what conjectures are plausible and which are implausible, and surprisingly they can be made quite precise.</p> <p>There is a huge amount of flexibility in choosing $P_p$ (exponentially many choices), so one might expect that the failure of this conjecture will be that there are too many twin primes than predicted (and indeed this is what Julián discovered). How would we begin to analyze this?</p> <p>Given a random large number $N$, the chances that $(N,N+2)$ is a twin prime are a little bit higher than $1/(\log N)^2$. In your case these numbers are not entirely random but they are specifically chosen to be indivisible by certain small primes ($2$ as well as the primes composing $P_p$), so we should really think of $1/(\log N)^2$ as a lower bound.</p> <p>For a given $P_p = \prod_{p \in S_p} p$, the size of $1/(\log N)^2$ will depend on the specific set of primes $S_p$ that go into it: $\log N$ itself will be almost exactly equal to $\log p_n + \sum_{p \in S_p} \log p$. Since we're going for a lower bound, instead of summing up all the possibilities let's just take the worst case of all primes from $p_2$ to $p_n$. 
This gives (after some computation) $\log N \approx p_n$ so $1/(\log N)^2 \approx 1/p_n^2$.</p> <p>But there are $2^{n-2}$ choices for $P_p$. Each of them has at least a $1/p_n^2$ chance of being part of a twin prime, so heuristically we would expect to find $2^{n-2}/p_n^2$ twin primes this way. Since $p_n$ only grows as fast as $n \log n$, the expected number of twin primes approaches infinity very quickly, so we should easily expect it to outstrip $n-2$ itself.</p> <p>On the other hand, given that it has no reason to be true in general, it's nice that the pattern does hold up for $n=3, \ldots, 8$.</p>
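<p>The growth of the heuristic count $2^{n-2}/p_n^2$ versus $n-2$ can be seen in a few lines of Python (my own illustration of the heuristic only — it proves nothing):</p>

```python
def primes_up_to(limit):
    # simple sieve of Eratosthenes
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit**0.5) + 1):
        if sieve[i]:
            for j in range(i * i, limit + 1, i):
                sieve[j] = False
    return [i for i, p in enumerate(sieve) if p]

primes = primes_up_to(200)

def heuristic(n):
    p_n = primes[n - 1]           # the n-th prime
    return 2 ** (n - 2) / p_n ** 2

# for small n the heuristic count is below n-2 ...
assert heuristic(10) < 8
# ... but the exponential wins quickly: by n = 20 it dwarfs n-2 = 18
assert heuristic(20) > 18
assert heuristic(30) > 1000
```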
1,741,765
<p>Problem description: Show that well-ordering is not a first-order notion. Suppose that $\Gamma$ axiomatizes the class of well-orderings. Add countably many constants $c_i$ and show that $\Gamma \cup \{c_{i+1} &lt; c_i \mid i \in \mathcal{N}\}$ has a model. </p> <p>So, I don’t quite get how to go about this. <a href="https://math.stackexchange.com/questions/937348/why-isnt-there-a-first-order-theory-of-well-order">This post</a> seems to address the same question, but I don’t understand the answer. I assume I’ll start something like this: </p> <p>Suppose that $\Gamma$ axiomatizes the class of well-orderings (do I need to define this class?). Let $\{c_i \mid i \in \mathcal{N}\}$ be a set of new constants. Consider the set $A = \Gamma \cup \{c_{i+1} &lt; c_i \mid i \in \mathcal{N}\}$. </p> <p>And here I can’t quite see how to proceed. How do I show that every finite subset $A_0 \subseteq A$ is satisfiable? </p> <p>Anyway, suppose I manage to do the above. Then I use compactness to show that $A$ is satisfiable, i.e. $A$ has a model. </p> <p>Then what? How do I finish it? Or if it is finished, how to I explain that it is? </p> <p>I should add: hints only, no direct answers — stop me if I’m asking questions that aren’t possible to answer without revealing the solution. Thanks! </p>
Nick
27,349
<p>This implies the operation is commutative. First note that by choosing $b=a$, you get that $a^3 = a \# a \# a = a$ for any $a$. In particular, apply this observation to the element $a \# b$:</p> <p>$$ a \# b = (a\# b) \# (a\# b) \# (a \#b) = (a \# b \# a)\#(b \#a \# b) = b \# a $$</p>
134,937
<p>Let $p \equiv q \equiv 3 \pmod 4$ for distinct odd primes $p$ and $q$. Show that $x^2 - qy^2 = p$ has no integer solutions $x,y$.</p> <p>My solution is as follows.</p> <p>Firstly we know that as $p \equiv q \pmod 4$ then $\big(\frac{p}{q}\big) = -\big(\frac{q}{p}\big)$</p> <p>Assume that a solution $(x,y)$ does exist and reduce the original equation modulo $q$ to get $x^2 \equiv p \pmod q.$ This implies $\big(\frac{p}{q} \big) = 1 $. </p> <p>Now if we reduce the original equation modulo $p$ then $x^2 \equiv qy^2 \pmod p \ (*)$. To get a contradiction I want to show that $\big(\frac{q}{p}\big) =1$ and to do this I need to prove that $\gcd(p,y) = 1$ as this means we can divide both sides of ($*$) by $y^2$. But how do you prove this?</p>
Will Jagy
10,400
<p>You started out well enough. You got to: the assumption of a solution implies $(p|q) = 1$ and so $(q|p ) = -1.$</p> <p>Next, we think we have $x^2 - q y^2 = p,$ in particular $$ x^2 \equiv q y^2 \pmod p. $$ If $y \neq 0 \pmod p,$ then $y$ has a multiplicative inverse $\pmod p,$ and $$ \left( \frac{x}{y} \right)^2 \equiv q \pmod p. $$ Well this contradicts $(q|p ) = -1,$ so in fact $y$ is divisible by $p.$ But then $x^2 - q y^2 = p$ says that $x$ is also divisible by $p.$ Thus $$ x^2 - q y^2 \equiv 0 \pmod {p^2}, $$ which contradicts $x^2 - q y^2 = p.$</p> <p>An answer was posted using just $\pmod 4,$ which is worth knowing.</p> <p>However, the part about a value of the Legendre symbol implying that $x,y$ are both divisible by some prime is a standard ingredient in quadratic forms and will come up again and again. </p> <p>The full version is this: if I have a quadratic form $$ f(x,y) = a x^2 + b x y + c y^2, $$ it has a "discriminant" that is the same as the term under the square root in the Quadratic Formula, $$ \Delta = b^2 - 4 a c. $$ Now, $\Delta$ is allowed to be positive or negative, however if it is positive we do not permit it to be a perfect square. The technical term for binary quadratic forms with square discriminant is "stupid." </p> <p>Now, suppose we have some prime $p$ that does not divide $\Delta,$ and for which $$ (\Delta | p) = -1. $$ Note that we allow $p \equiv \pm 1 \pmod 4$ here. What happens if $$ a x^2 + b x y + c y^2 \equiv 0 \pmod p? $$ Just to make it easy on myself, demand that $a$ not be divisible by $p.$ If so, it is still true if I multiply both sides by $4a,$ I get $$ 4 a^2 x^2 + 4 a b x y + 4 ac y^2 \equiv 0 \pmod p, $$ and $$ 4 a^2 x^2 + 4 a b x y + b^2 y^2 - b^2 y^2 + 4 ac y^2 \equiv 0 \pmod p, $$ $$ (2 a x + b y)^2 - \Delta y^2 \equiv 0 \pmod p, $$ so $$ (2 a x + b y)^2 \equiv \Delta y^2 \pmod p. 
$$ If we assume that $y$ is not divisible by $p,$ it has a multiplicative inverse $\pmod p,$ and we get a square equivalent to $\Delta \pmod p,$ not permitted. So actually $y$ is divisible by $p.$ Next $(2ax+by)$ is divisible by $p,$ and finally $x$ is as well, because I demanded that $a$ not be. So there.</p> <p>The bit about $a$ not being divisible by $p$ is not critical. As $\Delta$ is not divisible by $p$ it is impossible for $a,b,c$ to all be divisible by $p.$ And hilarity ensues. It has come back to me, as in a dream. If $a$ is divisible by $p,$ simply switch the argument to $c,$ multiply by $4c$ rather than $4a$ and so on. We might worry about both $a,c$ being divisible by $p,$ but in that case $\Delta \equiv b^2 \pmod p $ and $(\Delta | p) = (b^2 | p) =1, $ contradicting $(\Delta | p) = -1.$</p>
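<p>The starting ingredient — that $(p|q) = -(q|p)$ when both primes are $3 \pmod 4$ — can be spot-checked with Euler's criterion; a Python sketch of mine (illustration only):</p>

```python
def legendre(a, p):
    # Euler's criterion: a^((p-1)/2) mod p is 1 or p-1 when p does not divide a
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def primes_3_mod_4(limit):
    def is_prime(n):
        return n > 1 and all(n % k for k in range(2, int(n**0.5) + 1))
    return [p for p in range(3, limit) if is_prime(p) and p % 4 == 3]

ps = primes_3_mod_4(100)   # 3, 7, 11, 19, 23, ...
for p in ps:
    for q in ps:
        if p != q:
            # reciprocity flips the sign for two distinct primes = 3 mod 4
            assert legendre(p, q) == -legendre(q, p)
```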
3,706,332
<p>Let <span class="math-container">$f: \mathbb{R} \to \mathbb{R} $</span> be a differentiable function. Is it true that <span class="math-container">$f$</span> is strictly increasing on <span class="math-container">$\mathbb R$</span> if and only if <span class="math-container">$f'(x) \geq 0$</span> on <span class="math-container">$\mathbb R$</span> and the set <span class="math-container">$X = \{x \in \mathbb{R} \,| \, f'(x) = 0\}$</span> is countable?</p>
zhw.
228,045
<p>Suppose <span class="math-container">$f$</span> is differentiable and strictly increasing. Then yes, we must have <span class="math-container">$f'\ge 0$</span> everywhere. However, we can have <span class="math-container">$f'=0$</span> on an uncountable set, in fact a set of positive measure.</p> <p>Example: Let <span class="math-container">$U$</span> be an open set containing the rationals, with <span class="math-container">$m(U)&lt;1.$</span> Let <span class="math-container">$E=\mathbb R\setminus U.$</span> Then <span class="math-container">$E$</span> is closed. Define</p> <p><span class="math-container">$$f(x)=\int_0^x d(t,E)\,dt.$$</span></p> <p>Since <span class="math-container">$d(t,E)$</span> is continuous, <span class="math-container">$f'(x) = d(x,E)$</span> everywhere. We thus have <span class="math-container">$f'=0$</span> on <span class="math-container">$E$</span> and <span class="math-container">$f'&gt;0$</span> on <span class="math-container">$U.$</span></p> <p>Now <span class="math-container">$m(E)=\infty$</span> and hence is uncountable. The fact that <span class="math-container">$f'&gt;0$</span> on the open dense set <span class="math-container">$U$</span> implies <span class="math-container">$f$</span> is strictly increasing, and we have our example. </p>
121,865
<p>Can someone please help me clarify the notations/definitions below:</p> <p>Does $\{0,1\}^n$ mean an $n$-length vector consisting of $0$s and/or $1$s?</p> <p>Does $[0,1]^n$ ($(0,1)^n$) mean an $n$-length vector consisting of any number between $0$ and $1$ inclusive (exclusive)?</p> <p>As a related question, is there a reference web page for all such definitions/notations? Or do we just need to take note of them individually as we learn? Thanks.</p>
diofanto
27,163
<p>The idea is simple… In topology, the abstract product set is</p> <p>$$\displaystyle{\prod_{i\in I}X_i=\left\{x:I\rightarrow\cup_{i\in I}X_i \vert x(i)=x_i\in X_i,\;\forall i\in I \right\}}$$</p> <p>where the elements $x$ are functions choosing one coordinate from each factor. (When $I$ is finite with $n$ elements, such a function can be identified with an $n$-length vector.)</p> <p>Examples:</p> <ol> <li><p>If $X_i=\{0,1\}, \forall i\in I$ then $$\displaystyle{\prod_{i\in I}X_i=\{0,1\}^I}$$ $$x=(x_i)_{i\in I}\in\{0,1\}^I$$</p></li> <li><p>If $X_i=[0,1], \forall i\in I$ then $$\displaystyle{\prod_{i\in I}X_i=[0,1]^I}$$</p></li> <li><p>An important case is the Cantor set.</p></li> </ol> <p>P.S.: Excuse my English, please.</p>
45,662
<p>Does this undirected graph with 6 vertices and 9 undirected edges have a name? <img src="https://i.stack.imgur.com/XwuUB.png" alt="enter image description here"> I know a few names that are not right. It is not a complete graph because all the vertices are not connected. It is close to K<sub>3,3</sub> the utility graph, but not quite (and not quite matters in graph theory :-) </p> <p>This graph came up in my analysis of quaternion triple products.</p>
Alon Amit
308
<p>This is exactly $K_{3,3}$. What makes you say it's only "close" to it? Can you spot two independent sets of 3 vertices each here? Once you see that, and given that there are 9 edges, it <strong>must</strong> be the complete bipartite graph on two sets of 3 vertices each.</p>
45,662
<p>Does this undirected graph with 6 vertices and 9 undirected edges have a name? <img src="https://i.stack.imgur.com/XwuUB.png" alt="enter image description here"> I know a few names that are not right. It is not a complete graph because all the vertices are not connected. It is close to K<sub>3,3</sub> the utility graph, but not quite (and not quite matters in graph theory :-) </p> <p>This graph came up in my analysis of quaternion triple products.</p>
Isaac Kleinman
11,444
<p>You can also think of it as the <a href="http://mathworld.wolfram.com/HararyGraph.html" rel="nofollow">Harary graph</a> $H_{3,6}$.</p>
3,055,208
<p>I am trying to compute the limit below through Taylor series: <span class="math-container">$\lim \limits_{x\to \infty} ((2x^3-2x^2+x)e^{1/x}-\sqrt{x^6+3})$</span></p> <p>What I have already tried: first I substitute <span class="math-container">$x=1/t$</span> so the limit becomes <span class="math-container">$t\to 0^+$</span> and I am able to use Maclaurin series. After that I replace <span class="math-container">$e^t$</span> with its Maclaurin polynomial up to degree <span class="math-container">$6$</span> plus <span class="math-container">$O(t^6)$</span>; however, I don't know what I can do with the square root. </p>
DudeMan
630,057
<p>I think that you don't have to go that far. You know that (for large values of <span class="math-container">$x$</span>): <span class="math-container">$$ e^{1/x}=\sum_{k=0}^{\infty}{\frac{1}{k!x^k}}\geq1 $$</span> so that: <span class="math-container">$$ (2x^3-2x^2+x)e^{1/x}-\sqrt{x^6+3} \geq (2x^3-2x^2+x)-\sqrt{x^6+3}\geq\frac{x^3}{2} $$</span> Thus, the limit goes to <span class="math-container">$\infty$</span>.</p>
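<p>A quick numeric look (my own Python sketch, illustration only) confirms that the lower bound $x^3/2$ kicks in early for the values tried and that the expression blows up:</p>

```python
import math

def f(x):
    return (2 * x**3 - 2 * x**2 + x) * math.exp(1 / x) - math.sqrt(x**6 + 3)

# the answer's lower bound f(x) >= x^3 / 2 holds at these sample points ...
for x in (10, 50, 100, 500):
    assert f(x) > x**3 / 2

# ... and f grows without bound along them
assert f(500) > f(100) > f(10) > 500
```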
3,055,208
<p>I am trying to compute the limit below through Taylor series: <span class="math-container">$\lim \limits_{x\to \infty} ((2x^3-2x^2+x)e^{1/x}-\sqrt{x^6+3})$</span></p> <p>What I have already tried: first I substitute <span class="math-container">$x=1/t$</span> so the limit becomes <span class="math-container">$t\to 0^+$</span> and I am able to use Maclaurin series. After that I replace <span class="math-container">$e^t$</span> with its Maclaurin polynomial up to degree <span class="math-container">$6$</span> plus <span class="math-container">$O(t^6)$</span>; however, I don't know what I can do with the square root. </p>
TheSimpliFire
471,884
<p><strong>HINT:</strong></p> <p><span class="math-container">$$\sqrt{x^6+3}=\frac1{t^3}\left(1+3t^6\right)^{1/2}=\frac1{t^3}+\frac{\frac12\cdot3t^6}{t^3}-\frac{\frac1{2\cdot4}\cdot(3t^6)^2}{t^3}+\frac{\frac{1\cdot3}{2\cdot4\cdot6}\cdot(3t^6)^3}{t^3}-\cdots\\(2x^3-2x^2+x)e^{1/x}=\left(\frac2{t^3}-\frac2{t^2}+\frac1t\right)\left(1+t+\frac{t^2}{2\cdot1}+\frac{t^3}{3\cdot2\cdot1}+\cdots\right)$$</span></p>
4,136,082
<p><span class="math-container">$$f(x) = \begin{cases} \cos(\frac{1}{x}) &amp; \text{if $x\ne0$} \\ 0 &amp; \text{if $x=0$} \\ \end{cases}$$</span></p> <p>How do I prove this function has Darboux's property? I know it has it because it has antiderivatives, but how do I prove it otherwise, with intervals maybe ?</p>
Community
-1
<p>Well... it's obvious that it has Darboux's property on any interval <span class="math-container">$[a,b]$</span> with <span class="math-container">$0&lt;a&lt; b\lor a&lt;b&lt;0$</span>.</p> <p>If <span class="math-container">$a\le 0&lt;b$</span>, then <span class="math-container">$f(a),f(b)\in [-1,1]$</span> anyways and there are some <span class="math-container">$0&lt;x_1&lt;x_2&lt;b$</span> such that <span class="math-container">$f(x_1)=-1$</span> and <span class="math-container">$f(x_2)=1$</span>. By the previous remark, every value between <span class="math-container">$-1$</span> and <span class="math-container">$1$</span> (so a fortiori every value between <span class="math-container">$\min\{f(a),f(b)\}$</span> and <span class="math-container">$\max\{f(a),f(b)\}$</span>) is attained in the interval <span class="math-container">$[x_1,x_2]\subseteq (a,b)$</span>.</p> <p>If <span class="math-container">$a&lt; 0\le b$</span>, then <span class="math-container">$f(a),f(b)\in [-1,1]$</span> anyways and there are some <span class="math-container">$a&lt;x_1&lt;x_2&lt;0$</span> such that <span class="math-container">$f(x_1)=-1$</span> and <span class="math-container">$f(x_2)=1$</span>. For the same reason as the previous case, every value between <span class="math-container">$-1$</span> and <span class="math-container">$1$</span> is attained in the interval <span class="math-container">$[x_1,x_2]\subseteq (a,b)$</span>.</p>
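<p>Explicit witnesses are available here; a Python sketch of my own construction (not needed for the proof): for $k$ large enough, $x_1=1/((2k+1)\pi)$ and $x_2=1/(2k\pi)$ land in $(0,b)$ with values $-1$ and $1$, and $x=1/(2k\pi+\arccos c)$ hits any $c\in[-1,1]$:</p>

```python
import math

b = 0.01
k = math.ceil(1 / (2 * math.pi * b)) + 1   # push the points inside (0, b)

x1 = 1 / ((2 * k + 1) * math.pi)           # cos(1/x1) = cos((2k+1)*pi) = -1
x2 = 1 / (2 * k * math.pi)                 # cos(1/x2) = cos(2k*pi)     =  1
assert 0 < x1 < x2 < b
assert abs(math.cos(1 / x1) + 1) < 1e-9
assert abs(math.cos(1 / x2) - 1) < 1e-9

c = 0.37                                   # arbitrary target value in [-1, 1]
x = 1 / (2 * k * math.pi + math.acos(c))   # explicit preimage of c
assert x1 < x < x2
assert abs(math.cos(1 / x) - c) < 1e-9
```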
2,467,327
<p>How can one prove that $441 \mid a^2 + b^2$, given that $21 \mid a^2 + b^2$?<br> I've tried writing $441$ as $21 \cdot 21$, but that is not sufficient.</p>
Jack D'Aurizio
44,121
<p>If $3\mid(a^2+b^2)$ then $3$ divides both $a$ and $b$, since $-1$ is not a quadratic residue $\!\!\pmod{3}$.<br> The same applies $\!\!\pmod{7}$. If $21\mid(a^2+b^2)$, from the CRT we get that $3$ and $7$ divide both $a$ and $b$, hence $3^2$ and $7^2$ divide both $a^2$ and $b^2$ and $3^2\cdot 7^2\mid (a^2+b^2)$ as wanted.</p>
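Not needed for the proof, but both ingredients are easy to sanity-check by brute force; a small Python sketch of mine, verifying the quadratic-residue facts exhaustively and the divisibility claim for small $a, b$:

```python
# -1 is not a quadratic residue mod 3 (-1 ≡ 2) nor mod 7 (-1 ≡ 6)
assert all(pow(x, 2, 3) != 2 for x in range(3))
assert all(pow(x, 2, 7) != 6 for x in range(7))

# if 21 | a^2 + b^2 then 441 | a^2 + b^2, checked for 0 <= a, b < 200
for a in range(200):
    for b in range(200):
        s = a * a + b * b
        if s % 21 == 0:
            assert s % 441 == 0
print("checked for 0 <= a, b < 200")
```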
3,884,098
<p>Suppose <span class="math-container">$A$</span> is a <span class="math-container">$n \times n$</span> symmetric real matrix with eigenvalues <span class="math-container">$\lambda_1, \lambda_2, \ldots, \lambda_n$</span>, what are the eigenvalues of <span class="math-container">$(I - A)^{3}$</span>?</p> <p>Are they <span class="math-container">$(1 - \lambda_1)^3, (1 - \lambda_2)^3, \ldots, (1 - \lambda_n)^3$</span>? If so, how can I arrive at this conclusion?</p>
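For what it's worth, the guessed answer is correct: if $A = Q\Lambda Q^T$ is an orthogonal diagonalization, then $(I-A)^3 = Q(I-\Lambda)^3 Q^T$. A quick numerical confirmation (my own sketch, using a randomly generated symmetric matrix):

```python
import numpy as np

# a random real symmetric test matrix (my own choice)
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2

lam = np.linalg.eigvalsh(A)                    # eigenvalues of A
M = np.linalg.matrix_power(np.eye(4) - A, 3)   # (I - A)^3, still symmetric
mu = np.linalg.eigvalsh(M)                     # its eigenvalues, ascending

expected = np.sort((1 - lam) ** 3)
print(np.allclose(mu, expected))  # True
```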
Derek Luna
567,882
<p>I am not sure how much of this goes through without a proof by contrapositive, since any partition you make has only finitely many points: if you index a sum by the set <span class="math-container">$\{0,...,n\}$</span>, we get no information about every <span class="math-container">$f(x)$</span>, only about those <span class="math-container">$f(x_{k})$</span> where <span class="math-container">$k$</span> is in the index set <span class="math-container">$\{0,...,n\}$</span>. Instead, suppose <span class="math-container">$f(x) &gt; 0$</span> for some <span class="math-container">$x$</span>, and create a simple tagged partition in which one term of the Riemann sum is <span class="math-container">$&gt; 0$</span> while all the others are <span class="math-container">$\geq 0$</span>; then you will eventually get that <span class="math-container">$∫^b_af(x) &gt; 0$</span>. Your specific approach will not work because all you can prove is that those specific <span class="math-container">$f(x_{k}) = 0$</span>, and there is no limiting process for <span class="math-container">$n$</span>, since <span class="math-container">$n$</span> is fixed from the start.</p>
2,466,556
<p><strong>Solution:</strong> </p> <p>The vectors $\vec{AB}=(3,2,1)-(0,1,2)=(3,1,-1)$ and $\vec{AC}=(4,-1,0)-(0,1,2)=(4,-2,-2)$ are two direction vectors of the plane. A normal vector $\vec{n}$ to the plane is then given by $$\vec{n}=\vec{AB}\times\vec{AC}=(-4,2,-10).$$</p> <p>Since $A$ is a point on the plane, we get</p> <p>$$\vec{n}\cdot (x-0,y-1,z-2)=-4x+2(y-1)-10(z-2)=-4x+2y-10z+18=0.$$</p> <hr> <p>I don't understand the first part of the last equation. </p> <ol> <li>What vector is $(x-0,y-1,z-2)$?</li> <li>Why do they take the dot-product of the above with the normal vector? (How does it give the equation of the plane?)</li> </ol>
Sir_Math_Cat
330,367
<p>The vector $(x-0,y-1,z-2)$ is the displacement from the point $(0,1,2)$, which lies on the plane, to an arbitrary point $(x,y,z)$. For the second part of your question, if $\vec{v}$ is a vector and $\vec{n}$ is a normal vector to $\vec{v}$, then $\vec{v}\cdot\vec{n}=0$. We take the dot product of the displacement vector with the normal vector (the cross product of the two direction vectors) because the equation of the plane is </p> <p>$(x-a,y-b,z-c)\cdot \vec{n}=0$,</p> <p>where $(a,b,c)$ is a point on the plane and $\vec{n}$ is normal to the plane. </p>
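To double-check the arithmetic, the whole computation can be replayed numerically (a quick NumPy sketch of my own):

```python
import numpy as np

A = np.array([0, 1, 2])
B = np.array([3, 2, 1])
C = np.array([4, -1, 0])

n = np.cross(B - A, C - A)   # normal vector to the plane
print(n)                     # (-4, 2, -10)

# n . (P - A) = 0 exactly when P lies on the plane
for P in (A, B, C):
    assert np.dot(n, P - A) == 0
```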
881,831
<p>It is trivial that a group $G$ is abelian if and only if every subgroup of $G$ with two generators is abelian (i.e., any two elements commute).</p> <p>If $G$ is a nilpotent group, every subgroup with two generators must be nilpotent. Is the converse true? More precisely:</p> <blockquote> <p>Let $G$ be a group and suppose that every subgroup of $G$ generated by two elements is nilpotent (with uniformly bounded class if needed). Is $G$ necessarily nilpotent?</p> </blockquote>
Geoff Robinson
13,147
<p>When $G$ is finite the answer is "yes", since any two elements of coprime order commute, so a Sylow $p$-subgroup is normal for each prime divisor $p$ of the group order. When $G$ is infinite, I don't know, but others might. Each element of $G$ is certainly an Engel element, but there are non-nilpotent groups in which every element is an Engel element, (constructed, for example, by P.M. Cohn).</p>
223,385
<p>I would like to recreate the following picture in Mathematica. I know how to draw a tree with GraphLayout, but I don't know how to create the shape of the nodes as below. A few hints about where to start would be appreciated!</p> <p><a href="https://i.stack.imgur.com/PtK1F.png" rel="noreferrer"><img src="https://i.stack.imgur.com/PtK1F.png" alt="enter image description here"></a></p>
flinty
72,682
<pre><code>(* generate a random tree *)
edges = Table[i &lt;-&gt; RandomInteger[{0, i - 1}], {i, 1, 20}];

(* random circles appear on vertices of degree 1 only *)
circles = If[# == 1, RandomInteger[{1, 4}], 0] &amp; /@ VertexDegree[Graph[edges]];

gencircles[{x_, y_}, name_] :=
 If[circles[[name + 1]] &gt; 0,
  Disk[{x, y} + 0.1*#, .05] &amp; /@ CirclePoints[circles[[name + 1]]],
  Nothing]

vsf[{x_, y_}, name_, {w_, h_}] := {Gray, Disk[{x, y}, .2], White, gencircles[{x, y}, name]}

Graph[edges, VertexShapeFunction -&gt; vsf]
</code></pre> <p><a href="https://i.stack.imgur.com/5bKIV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5bKIV.png" alt="graph with dots"></a></p>
1,903,235
<p>According to Wikipedia, </p> <blockquote> <p>Hilbert space [...] extends the methods of vector algebra and calculus from the two-dimensional Euclidean plane and three-dimensional space to spaces with any finite or infinite number of dimensions</p> </blockquote> <p>However, the article on Euclidean space already refers to </p> <blockquote> <p>the n-dimensional Euclidean space.</p> </blockquote> <p>This would imply that Hilbert space and Euclidean space are synonymous, which seems silly.</p> <p>What exactly is the difference between Hilbert space and Euclidean space? What would be an example of a non-Euclidean Hilbert space?</p>
user288972
288,972
<p>A Hilbert space is essentially a generalization of Euclidean space to (possibly) infinite dimension.</p> <hr> <p><strong>Note</strong>: this answer is just meant to give an intuitive idea of the generalization: one considers infinite-dimensional spaces with a scalar product which are complete with respect to the metric induced by the norm. Clearly, there are also finite-dimensional Hilbert spaces, such as $\mathbb{R}^n$ with the standard scalar product and Euclidean metric.</p>
2,324,850
<p>How to find the shortest distance from a line to a parabola?</p> <p>parabola: $$2x^2-4xy+2y^2-x-y=0$$and the line is: $$9x-7y+16=0$$ I already tried using this formula for the distance: $$\frac{|ax_{0}+by_{0}+c|}{\sqrt{a^2+b^2}}$$</p>
lab bhattacharjee
33,337
<p>HINT:</p> <p>Use <a href="http://www.sosmath.com/CBB/viewtopic.php?t=17029" rel="nofollow noreferrer">rotation of axes</a> to eliminate the $xy$ term from the equation of the parabola, as distance is invariant under rotation.</p> <p>Now use the parametric equation $P(h+at^2,k+2at)$ of the parabola $$(y-k)^2=4a(x-h)$$ and $$\frac{|ax_{0}+by_{0}+c|}{\sqrt{a^2+b^2}}$$</p>
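As a numerical cross-check (my own route, avoiding the rotation): since $2x^2-4xy+2y^2 = 2(x-y)^2$, the conic is $2(x-y)^2 = x+y$, which can be parametrized by $u = x-y$, giving $x = u^2+u/2$, $y = u^2-u/2$. A direct scan then locates the minimum distance:

```python
import math

# 2x^2 - 4xy + 2y^2 - x - y = 0  <=>  2(x - y)^2 = x + y
# with u = x - y:  x = u^2 + u/2,  y = u^2 - u/2
def dist(u):
    x, y = u * u + u / 2, u * u - u / 2
    return abs(9 * x - 7 * y + 16) / math.hypot(9, 7)

best = min(dist(k / 1000) for k in range(-5000, 5001))
print(best)   # about 0.70174 = 8/sqrt(130), attained at the point (3, 5)
```

Algebraically, $9x-7y+16 = 2u^2+8u+16 = 2\big((u+2)^2+4\big) \ge 8$, so the minimum distance is $8/\sqrt{130}$ at $u=-2$, i.e. at $(x,y)=(3,5)$.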
217,291
<p>I am trying to recreate the following image in latex (pgfplots), but in order to do so I need to figure out the mathematical expressions for the functions.</p> <p><img src="https://i.stack.imgur.com/jYGNP.png" alt="wavepacket"></p> <p>So far I am sure that the gray line is $\sin x$ and that the red line is some version of $\sin x / x$, whereas the green line is some linear combination of sine and cosine functions.</p> <p>Anyone know a good way to find these functions? </p>
Qiaochu Yuan
232
<blockquote> <p>What is the period of the Fibonacci sequence $F_n$ modulo a prime $p$? </p> </blockquote> <p>This is the <a href="http://en.wikipedia.org/wiki/Pisano_period" rel="nofollow">Pisano period</a>. It is difficult to say much about the exact period, but one can write down a number which is guaranteed to be divisible by the period using some facts about <a href="http://en.wikipedia.org/wiki/Finite_field" rel="nofollow">finite fields</a> in a manner analogous to Fermat's little theorem, together with <a href="http://en.wikipedia.org/wiki/Quadratic_reciprocity" rel="nofollow">quadratic reciprocity</a>. The key result is that <a href="http://mathworld.wolfram.com/BinetsFibonacciNumberFormula.html" rel="nofollow">Binet's formula</a></p> <p>$$F_n = \frac{\phi^n - \varphi^n}{\phi - \varphi}$$</p> <p>continues to hold $\bmod p$ in a suitable sense for all $p \neq 5$. Here $\phi, \varphi$ are the two roots of the characteristic polynomial $t^2 - t - 1$. </p> <p><strong>Proposition:</strong> If $p \neq 5$, then $F_n$ has period dividing $p - 1$ if $p \equiv 1, 4 \bmod 5$; otherwise, $F_n$ has period dividing $2(p + 1)$. If $p = 5$, then $F_n$ has period $20$. </p> <p><em>Proof.</em> By quadratic reciprocity, $p \equiv 1, 4 \bmod 5$ if and only if $t^2 - t - 1$ factors over $\mathbb{F}_p$. By Fermat's little theorem, it follows that $\phi, \varphi$ have multiplicative order dividing $p-1$. If $t^2 - t - 1$ does not factor over $\mathbb{F}_p$, then its roots lie in $\mathbb{F}_{p^2}$, and the <a href="http://en.wikipedia.org/wiki/Frobenius_endomorphism" rel="nofollow">Frobenius map</a> interchanges them; that is, $\phi^p \equiv \varphi \bmod p$ and vice versa. Consequently $\phi^{p+1} \equiv -1 \bmod p$, and we conclude that</p> <p>$$F_p \equiv -1 \bmod p$$ $$F_{p+1} \equiv 0 \bmod p$$ $$F_{p+2} \equiv -1 \bmod p$$ $$F_{p+3} \equiv -1 \bmod p$$</p> <p>and by induction $F_{p+1+k} \equiv - F_k \bmod p$, hence $F_{2(p+1)+k} \equiv F_k \bmod p$ as desired. The case $p = 5$ is left as an exercise. $\Box$</p> <p>This proposition describes a pattern which is straightforward to verify by hand, but which without some knowledge of abstract algebra and number theory is very difficult to explain. </p>
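Indeed, the verification is easy to automate; here is a short Python check of mine of the divisibility claims for small primes:

```python
def pisano(p):
    """Period of the Fibonacci sequence modulo p."""
    a, b, k = 0, 1, 0
    while True:
        a, b, k = b, (a + b) % p, k + 1
        if (a, b) == (0, 1):   # the pair (F_k, F_{k+1}) has returned to (0, 1)
            return k

assert pisano(5) == 20
for p in [2, 3, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]:
    if p % 5 in (1, 4):
        assert (p - 1) % pisano(p) == 0          # period divides p - 1
    else:
        assert (2 * (p + 1)) % pisano(p) == 0    # period divides 2(p + 1)
print("proposition verified for all primes p < 50")
```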
1,077,504
<p>Evaluate:</p> <p>$$\int_{0}^{\infty} \frac{1}{x^6 + 1} \,\mathrm dx$$</p> <p>Without <strong>the use of complex analysis.</strong></p> <p>With complex analysis it is a very simple problem; how can this be done WITHOUT complex analysis?</p>
Lucian
93,448
<blockquote> <p><em>how can this be done WITHOUT complex analysis?</em></p> </blockquote> <p>$\quad$ All integrals of the form $~\displaystyle\int_0^\infty\frac{x^{k-1}}{(x^n+a^n)^m}dx~$ can be evaluated by substituting $x=at$ and $u=\dfrac1{t^n+1}$ , then recognizing the expression of the <a href="http://en.wikipedia.org/wiki/Beta_function" rel="nofollow">beta function</a> in the new integral, and lastly </p> <p>employing Euler's <a href="http://en.wikipedia.org/wiki/Reflection_formula" rel="nofollow">reflection formula</a> for the <a href="http://en.wikipedia.org/wiki/Gamma_function#Properties" rel="nofollow">$\Gamma$ function</a> to simplify the result.</p>
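For this particular integral ($k=1$, $m=1$, $n=6$, $a=1$) the recipe yields $\frac16 B\!\left(\frac16,\frac56\right)=\frac{\pi/6}{\sin(\pi/6)}=\frac{\pi}{3}$, which agrees with a direct numerical check (a quick Python sketch of mine, folding $[1,\infty)$ onto $[0,1]$ via $x\mapsto 1/x$):

```python
import math

closed_form = (math.pi / 6) / math.sin(math.pi / 6)   # = pi/3

# the substitution x -> 1/x turns the tail into a finite-interval integral:
# total integrand on [0, 1] becomes (1 + t^4)/(1 + t^6)
f = lambda t: (1 + t ** 4) / (1 + t ** 6)
N = 100_000
numeric = sum(f((k + 0.5) / N) for k in range(N)) / N  # midpoint rule

print(numeric, closed_form)   # both about 1.0471975
```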
1,570,044
<p>How many arrangements of banana are there such that the "b" occurs before any of the "a's"?</p> <p>This is more an inquiry into what I did wrong in my counting. I came up with a solution of: $$\binom{3}{1} \binom{5}{3}$$ where I used C(3,1) to account for the 3 possible places the "b" could go, and C(5,3) to choose the positions of the 3 "a's". Instead they rationalized it as $$ \binom{6}{4} (1)(1)$$</p> <p>where and how did I double count? </p>
grand_chat
215,011
<p>Another way to see this: If the <em>b</em> must occur before any of the <em>a</em>'s, then our only decision is where to put the <em>n</em>'s. There are six slots available for the two <em>n</em>'s, giving ${6\choose2}=15$ possibilities. Once the <em>n</em>'s are placed we put <em>b,a,a,a</em> in the remaining open slots, in that order.</p>
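A brute-force check in Python (mine, just to confirm the count) agrees with ${6\choose2}=15$:

```python
from itertools import permutations

# the 60 distinct arrangements of "banana"
words = set(permutations("banana"))
# those in which the b precedes every a
good = [w for w in words
        if w.index("b") < min(i for i, c in enumerate(w) if c == "a")]
print(len(words), len(good))   # 60 15
```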
3,407,489
<p><span class="math-container">$\neg\left (\neg{\left (A\setminus A \right )}\setminus A \right )$</span></p> <p><span class="math-container">$A\setminus A $</span> is simply the empty set, and <span class="math-container">$\neg$</span> of that is again the empty set. The empty set <span class="math-container">$\setminus$</span> <span class="math-container">$A$</span> is the empty set, right? But the empty set is included in every set?</p> <p>I am confused when it comes to this..</p>
fleablood
280,126
<p><span class="math-container">$A$</span> = everything that is in <span class="math-container">$A$</span>.</p> <p><span class="math-container">$A\setminus A$</span> = everything in <span class="math-container">$A$</span> that is not in <span class="math-container">$A$</span> = nothing = <span class="math-container">$\emptyset$</span></p> <p><span class="math-container">$\lnot(A\setminus A) = \lnot \emptyset$</span> = everything that is not in the empty set = everything = The Universal Set. I'll call it <span class="math-container">$U$</span>.</p> <p><span class="math-container">$\lnot(A\setminus A) \setminus A$</span> = <span class="math-container">$U\setminus A$</span>= everything in the Universal Set that is not in <span class="math-container">$A$</span> = <span class="math-container">$\lnot A$</span>.</p> <p><span class="math-container">$\lnot (\lnot (A\setminus A)\setminus A) =\lnot(\lnot A)$</span>= everything that is not not in <span class="math-container">$A$</span> = everything that is in <span class="math-container">$A$</span> = <span class="math-container">$A$</span>.</p> <p>.....</p> <p>Or <span class="math-container">$\neg\left (\neg{\left (A\setminus A \right )}\setminus A \right )=$</span></p> <p><span class="math-container">$\{x| x\not \in (\neg{\left (A\setminus A \right )}\setminus A )\}=$</span></p> <p><span class="math-container">$\{x|x\not \in \{x\in \neg(A\setminus A)|x\not \in A\}\}=$</span></p> <p><span class="math-container">$\{x|x\not \in \{x\not \in (A\setminus A)|x\not \in A\}\}=$</span></p> <p><span class="math-container">$\{x|x\not \in \{x\not \in \{x \in A|x\not \in A\}|x\not \in A\}\}=$</span></p> <p><span class="math-container">$\{x|x\not\in\{x\not \in \emptyset|x\not \in A\}\}=$</span></p> <p><span class="math-container">$\{x|x\not \in \{x\not \in A\}\}=$</span></p> <p><span class="math-container">$\{x|x \in A\}=$</span></p> <p><span class="math-container">$A$</span></p>
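If it helps intuition, the same computation can be replayed with Python sets over a small finite model (the universal set $U$ below is my own arbitrary choice; complements are taken relative to it):

```python
U = set(range(10))        # a stand-in universal set
A = {1, 3, 5}

c = lambda S: U - S       # "not": complement relative to U

assert A - A == set()             # A \ A is empty
assert c(A - A) == U              # not(empty) is everything
assert c(A - A) - A == c(A)       # everything minus A is not-A
assert c(c(A - A) - A) == A       # ... and its complement is A again
print("ok")
```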
241,612
<blockquote> <p>Find all eigenvalues and eigenvectors:</p> <p>a.) $\pmatrix{i&amp;1\\0&amp;-1+i}$</p> <p>b.) $\pmatrix{\cos\theta &amp; -\sin\theta \\ \sin\theta &amp; \cos\theta}$</p> </blockquote> <p>For a I got: $$\operatorname{det} \pmatrix{i-\lambda&amp;1\\0&amp;-1+i-\lambda}= \lambda^{2} - 2\lambda i + \lambda - i - 1 $$</p> <p>For b I got: $$\operatorname{det} \pmatrix{\cos\theta - \lambda &amp; -\sin\theta \\ \sin\theta &amp; \cos\theta - \lambda}= \cos^2\theta + \sin^2\theta + \lambda^2 -2\lambda \cos\theta = \lambda^2 -2\lambda \cos\theta +1$$</p> <p>But how can I find the corresponding eigenvalues for a and b? </p>
Belgi
21,335
<p>For $a$ you can note that the matrix in question is upper triangular, or use the fact that the quadratic formula is also valid over $\mathbb{C}$. </p> <p>For $b$, the last equality you have is not true; how did the $\cos(\theta)$ coefficient of $\lambda$ disappear? You should apply the quadratic formula in this case too and use a simple trigonometric identity.</p>
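A numerical check of both spectra (my own sketch; for (a) the eigenvalues of an upper triangular matrix are the diagonal entries, and for (b) the quadratic formula gives $\lambda = \cos\theta \pm i\sin\theta = e^{\pm i\theta}$):

```python
import numpy as np

# (a) upper triangular: eigenvalues are the diagonal entries i and -1+i
Aa = np.array([[1j, 1], [0, -1 + 1j]])
ev_a = sorted(np.linalg.eigvals(Aa), key=lambda z: z.real)
print(ev_a)   # -1+1j and 1j

# (b) rotation matrix: roots of lambda^2 - 2*lambda*cos(theta) + 1 = 0
theta = 0.7   # an arbitrary sample angle
Ab = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
ev_b = sorted(np.linalg.eigvals(Ab), key=lambda z: z.imag)
print(np.allclose(ev_b, [np.exp(-1j * theta), np.exp(1j * theta)]))  # True
```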
146,813
<p>Is sigma-additivity (countable additivity) of Lebesgue measure (say on measurable subsets of the real line) deducible from Zermelo-Fraenkel set theory (without the axiom of choice)?</p> <p>Note 1. Follow-up question: Jech's 1973 book on the axiom of choice seems to be cited as the source for the Feferman-Levy model. Can this be sourced in the work of Feferman and Levy themselves? Are these S. Feferman and A. Levy?</p>
François G. Dorais
2,000
<p>This depends on exactly how you define Lebesgue measure since some definitions incorporate countable additivity. However, there is a model of ZF, the Feferman-Levy model, where $\mathbb{R}$ is a countable union of countable sets which ensures that any countably additive measure on $\mathbb{R}$ has to be trivial.</p>
107,525
<p>Say I have two random variables X and Y from the same class of distributions, but with different means and variances (X and Y are parameterized differently). Say the variance converges to zero as a function of n, but the mean is not a function of n. Can it be formally proven, without giving the actual pdf of X and Y, that their overlap area (defined as the integral over the entire domain of min(f,g), where f,g are the respective pdfs) converges to zero when n goes to infinity? Perhaps this is too obvious...?</p>
leonbloy
312
<p>The answer is yes. Let's assume the means verify $\mu_X &lt; \mu_Y$, and let $c =(\mu_X +\mu_Y)/2$ be the middle point. The "overlap area" (?) is</p> <p>$$\int_{-\infty}^{\infty} \min(f_X(x),f_Y(x)) dx = \int_{-\infty}^c \cdots dx + \int_{c}^{\infty} \cdots dx$$ </p> <p>The second term is: </p> <p>$$\int_c^{\infty} \min(f_X(x),f_Y(x)) dx \le \int_c^{\infty} f_X(x) dx =P(X \ge c) \le P\left(|X - \mu_X| \ge \epsilon\right)\le \frac{\sigma_X^2}{\epsilon^2}$$</p> <p>where $\epsilon = c-\mu_X=(\mu_Y-\mu_X)/2&gt;0$, and we've used <a href="http://en.wikipedia.org/wiki/Chebyshev%27s_inequality" rel="nofollow">Chebyshev's inequality</a>. Because the variance $\sigma_X^2$ tends to zero, so does this term; and the same goes for the other. Then, $\int_{-\infty}^{\infty} \min(f_X(x),f_Y(x)) dx \to 0$</p>
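Here is a small numerical illustration of the statement (my own sketch, using normal densities with a common standard deviation as a concrete family; the argument above does not depend on this choice): the overlap area shrinks as the variance does.

```python
import math

def overlap(mu1, mu2, sigma, n=20000, lo=-10.0, hi=10.0):
    """Riemann sum of min(f_X, f_Y) for two normal pdfs with common sd sigma."""
    pdf = lambda x, m: math.exp(-0.5 * ((x - m) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    h = (hi - lo) / n
    return h * sum(min(pdf(lo + (k + 0.5) * h, mu1),
                       pdf(lo + (k + 0.5) * h, mu2)) for k in range(n))

areas = [overlap(0.0, 1.0, s) for s in (1.0, 0.5, 0.25, 0.1)]
print(areas)   # decreasing toward 0
```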
1,880,090
<p>The solution states that the ball of radius $\epsilon &gt;0$ around a real number $x$ always contains the non-real number $x+i\epsilon/2$. </p> <p>I don't understand the answer: for every number $x \in \mathbb{R}$ there is an open ball, right? For every $x \in \mathbb{R}$ there is an $r&gt;0$ such that I can form an open ball $B_r(x)\subset \mathbb{R}$.</p>
quid
85,306
<p>You need to be careful what the definition of the sets is precisely. </p> <p>You study a subset of the complex numbers, thus your topological objects are those of the complex numbers. </p> <p>Thus in this context $B_r (x) = \{z \in \mathbb{C} \colon |z-x| &lt; r\}$. So it is the set of all <strong>complex numbers</strong> at a distance less than $r$ from $x$. </p> <p>Were you working in a context where the ambient set is the real numbers, then $B_r (x) = \{z \in \mathbb{R} \colon |z-x| &lt; r\}$; it is the set of all <strong>real numbers</strong> at a distance less than $r$ from $x$. </p> <p>Put differently, the notation $B_r (x)$ is slightly imprecise in that it does not make explicit the set relative to which the ball around $x$ is formed, and this information is only implicit. For the context of your exercise you need the complex version. </p>
1,832,080
<p>The converse is pretty obvious. If G is a cycle, then it is isomorphic to its line graph. How to prove that if L(G) is isomorphic to G, then G is a cycle...?</p> <p><strong>P.S.</strong>- Assume G is connected</p>
joriki
6,622
<p>A vertex of $G$ of degree $d_i$ contributes $\binom{d_i}2$ edges to $L(G)$. Then with $n$ denoting the common number of vertices and edges of $G$ and $L(G)$,</p> <p>$$ \sum_id_i=2n\;, $$</p> <p>$$ \sum_i\frac{d_i(d_i-1)}2=n\;, $$</p> <p>so</p> <p>$$ \sum_id_i^2=4n $$</p> <p>and thus</p> <p>\begin{align} \operatorname{Var}(d)&amp;=E[d^2]-E[d]^2 \\ &amp;=\frac{\sum_id_i^2}n-\left(\frac{\sum_id_i}n\right)^2 \\ &amp;=4-4 \\ &amp;=0\;. \end{align}</p> <p>Thus all vertices in both graphs have the same degree $2$, which is only the case in a union of cycle graphs.</p>
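As a quick computational illustration (my own sketch, not part of the proof): building the line graph of a cycle directly confirms that it has the same number of vertices and edges as the cycle and is again $2$-regular.

```python
from itertools import combinations
from collections import Counter

def line_graph(edges):
    """Vertices of L(G) are the edges of G; two are adjacent iff they share an endpoint."""
    return [(e, f) for e, f in combinations(edges, 2) if set(e) & set(f)]

n = 7
cycle = [(i, (i + 1) % n) for i in range(n)]   # the cycle C_7
L = line_graph(cycle)

deg = Counter(v for edge in L for v in edge)
print(len(cycle), len(L))                      # 7 7: same number of edges
print(all(d == 2 for d in deg.values()))       # True: every degree is 2
```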