<p>As you increase the value of n, you will generate all Pythagorean triples whose first square is even. Is there any visual proof of the following explicit formula, and where does it come from or how can it be derived?</p> <p><span class="math-container">$(2n)^2 + (n^2 - 1)^2 = (n^2 + 1)^2$</span></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th><span class="math-container">$(2n)^2+(n^2-1)^2=(n^2+1)^2$</span></th> <th><span class="math-container">$(2n)^2+(n^2-1)^2=(n^2+1)^2$</span></th> <th><span class="math-container">$(2n)^2+(n^2-1)^2=(n^2+1)^2$</span></th> </tr> </thead> <tbody> <tr> <td><span class="math-container">$(2*0)^2+(0^2-1)^2=(0^2+1)^2$</span></td> <td><span class="math-container">$(2*1)^2+(1^2-1)^2=(1^2+1)^2$</span></td> <td><span class="math-container">$(2*2)^2+(2^2-1)^2=(2^2+1)^2$</span></td> </tr> <tr> <td><span class="math-container">$(2*0)^2+(0-1)^2=(0+1)^2$</span></td> <td><span class="math-container">$(2*1)^2+(1-1)^2=(1+1)^2$</span></td> <td><span class="math-container">$(2*2)^2+(4-1)^2=(4+1)^2$</span></td> </tr> <tr> <td><span class="math-container">$0^2+1^2=1^2$</span></td> <td><span class="math-container">$2^2+0^2=2^2$</span></td> <td><span class="math-container">$4^2+3^2=5^2$</span></td> </tr> <tr> <td><span class="math-container">$0+1=1$</span></td> <td><span class="math-container">$4+0=4$</span></td> <td><span class="math-container">$16+9=25$</span></td> </tr> <tr> <td><span class="math-container">$1=1$</span></td> <td><span class="math-container">$4=4$</span></td> <td><span class="math-container">$25=25$</span></td> </tr> </tbody> </table> </div>
poetasis
546,655
<p>All Pythagorean triples have a &quot;leg&quot; that is a multiple of <span class="math-container">$\space4.\space$</span> Your formula generates a subset of these.</p> <p>We begin with Euclid's formula shown here as <span class="math-container">$$A=m^2-k^2\quad B=2mk \quad C=m^2+k^2$$</span> If we let <span class="math-container">$\space k=1,\space$</span> we have <span class="math-container">$\quad A=m^2-1\quad B=2m\quad C=m^2+1$</span></p> <p>This formula also generates trivial triples where one term is zero. A variation of this replaces <span class="math-container">$\space m\space$</span> with <span class="math-container">$\space(2n-1+k)=2n\space$</span> and produces a formula that generates non-trivial triples for any natural number <span class="math-container">$n$</span>. <span class="math-container">$$\quad A=4n^2-1\quad B=4n\quad C=4n^2+1$$</span></p> <p>By inspection, we can see that the middle term will always be even and that <span class="math-container">$\space C-A=2\space$</span> for all values of <span class="math-container">$\space n.\space$</span> Algebraically below, we can also &quot;see&quot; that the formula is valid.</p> <p><span class="math-container">\begin{align*}A^2+B^2=&amp;(4n^2-1)^2+(4n)^2\\ =&amp;(16 n^4 - 8 n^2 + 1)+(16n^2)\\ =&amp;(16 n^4 + 8 n^2 + 1)\\ =&amp;(4n^2+1)^2=C^2\\ \end{align*}</span></p>
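The closed form above is easy to sanity-check numerically. A minimal sketch (the helper name `triple` is ours, not from the answer):

```python
# Check the family A = 4n^2 - 1, B = 4n, C = 4n^2 + 1 derived above
# from Euclid's formula with k = 1 and m = 2n.

def triple(n):
    """Return (A, B, C) for a natural number n."""
    return 4 * n * n - 1, 4 * n, 4 * n * n + 1

for n in range(1, 6):
    a, b, c = triple(n)
    assert a * a + b * b == c * c      # Pythagorean identity
    assert b % 4 == 0 and c - a == 2   # properties noted in the answer
    print(n, (a, b, c))
```

For $n=1$ this yields the familiar $(3,4,5)$ triple.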
81,887
<p>The problem: Prove that for $n \in \mathbb N$:</p> <p>$$ \left(1 + \frac{1}{n} \right)^n = 1 + \sum_{m=1}^{n} \frac{1}{m!} \left(1 - \frac{1}{n} \right) \left(1 - \frac{2}{n} \right) \cdots \left(1 - \frac{m-1}{n} \right). $$</p> <p>The hint is to use the binomial theorem. So the left side can become:</p> <p>$$ \sum_{m=0}^{n} \frac{n!}{m!(n - m)!} \left(\frac{1}{n} \right)^m $$</p> <p>I don't really know where to go from here. I've tried manipulating the expressions to make them look similar, but I'm not really getting anywhere.</p>
yunone
1,583
<p>Take your second sum $$ \sum_{m=0}^{n} \frac{n!}{m!(n - m)!} \left(\frac{1}{n} \right)^m $$ and write it as $$ 1+\sum_{m=1}^n \frac{n!}{m!(n - m)!} \left(\frac{1}{n} \right)^m $$ to get the indices to match.</p> <p>In your first sum $$ \sum_{m=1}^{n} \frac{1}{m!} \left(1 - \frac{1}{n} \right) \left(1 - \frac{2}{n} \right) \cdots \left(1 - \frac{m-1}{n} \right) $$ ignoring the $\frac{1}{m!}$ for now, notice $$ \left(1 - \frac{1}{n} \right) \left(1 - \frac{2}{n} \right) \cdots \left(1 - \frac{m-1}{n} \right)=\left(\frac{n-1}{n}\right)\left(\frac{n-2}{n}\right)\cdots\left(\frac{n-m+1}{n}\right). $$ Multiplying by $1=\frac{n}{n}$ gives $$ \frac{n}{n}\left(\frac{n-1}{n}\right)\left(\frac{n-2}{n}\right)\cdots\left(\frac{n-m+1}{n}\right)=\dots $$ Can you take it from there?</p>
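If it helps to convince yourself before finishing the algebra, the identity can be verified exactly with rational arithmetic (an illustrative check of ours, not part of the proof):

```python
from fractions import Fraction
from math import factorial

def lhs(n):
    return (1 + Fraction(1, n)) ** n

def rhs(n):
    total = Fraction(1)
    for m in range(1, n + 1):
        term = Fraction(1, factorial(m))
        for j in range(1, m):          # product (1 - 1/n)...(1 - (m-1)/n)
            term *= 1 - Fraction(j, n)
        total += term
    return total

# Exact equality (no floating point) for the first several n.
assert all(lhs(n) == rhs(n) for n in range(1, 12))
```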
3,464,383
<p>I couldn't find any substantial list of 'strange infinite convergent series' so I wanted to ask the MSE community for some. By <em>strange</em>, I mean infinite series/limits that <strong>converge when you would not expect them to and/or converge to something you would not expect</strong>.</p> <p>My favorite converges to Khinchin's (sometimes Khintchine's) constant, <span class="math-container">$K$</span>. For almost all <span class="math-container">$x \in \mathbb{R}$</span> (those for which this does not hold making up a measure zero subset) with infinite c.f. representation: <span class="math-container">$$x = a_0 + \frac{1}{a_1+\frac{1}{a_2+\frac1{\ddots}}}$$</span> We have: <span class="math-container">$$\lim_{n \to \infty} \root n \of{\prod_{i=1}^na_i} = \lim_{n \to \infty}\root n \of {a_1a_2\dots a_n} = K$$</span> Which is...wow! That it converges independent of <span class="math-container">$x$</span> really gets me.</p>
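A rough numerical illustration (our own sketch, not from the post; the helper `cf_terms` is ours, a float approximation of $\pi$ only gives the first dozen or so coefficients reliably, and convergence of the geometric mean to $K \approx 2.685$ is notoriously slow):

```python
from fractions import Fraction
import math

def cf_terms(x, k):
    """First k continued-fraction coefficients a_1, a_2, ... of x (after a_0)."""
    terms = []
    x = x - math.floor(x)
    for _ in range(k):
        if x == 0:
            break
        x = 1 / x
        a = math.floor(x)
        terms.append(a)
        x -= a
    return terms

terms = cf_terms(Fraction(math.pi), 12)
geo_mean = math.prod(terms) ** (1 / len(terms))
print(terms[:5], geo_mean)   # coefficients of pi start 7, 15, 1, 292, 1, ...
```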
Axion004
258,202
<p>Suppose <span class="math-container">$\sum_{n=1}^{\infty} a_n$</span> and <span class="math-container">$\sum_{n=1}^{\infty} b_n$</span> are both divergent. Then, one might assume that <span class="math-container">$\sum_{n=1}^{\infty} (a_n+b_n)$</span> also diverges.</p> <p>This is false. Suppose <span class="math-container">$a_n=1$</span> and <span class="math-container">$b_n=-1$</span> for all <span class="math-container">$n$</span>. Then</p> <p><span class="math-container">$$\sum_{n=1}^{\infty} a_n=\sum_{n=1}^{\infty} \,1 ~~\text{diverges}$$</span> and <span class="math-container">$$\sum_{n=1}^{\infty} b_n=\sum_{n=1}^{\infty} \,(-1) ~~\text{diverges}$$</span></p> <p>However</p> <p><span class="math-container">$$\sum_{n=1}^{\infty} (a_n+b_n)=\sum_{n=1}^{\infty} \,(1+(-1)) =\sum_{n=1}^{\infty}\,0=0$$</span> is convergent.</p>
3,386,696
<p>I need to solve <span class="math-container">$\int_{0}^{\frac{\pi}{2}} \cos^6x\,dx$</span>. I tried to use <span class="math-container">$\cos(3x)=4\cos^3x-3\cos x$</span> but did not succeed.</p>
drhab
75,923
<p>Suppose that <span class="math-container">$P(a)&lt;P(a\mid b)$</span> and <span class="math-container">$P(a)&lt;P(a\mid b^c)$</span>.</p> <p>Then because at least one of <span class="math-container">$P(b)$</span> and <span class="math-container">$P(b^c)$</span> is positive: <span class="math-container">$$P(a)=P(a\mid b)P(b)+P(a\mid b^c)P(b^c)&gt;P(a)P(b)+P(a)P(b^c)=P(a)$$</span> which is absurd.</p> <p>Draw conclusions concerning minimum.</p> <p>Same idea can be used for maximum.</p>
2,010,768
<p>I have the question</p> <p>Solve the simultaneous equations</p> <p>$$\begin{cases} 3^{x-1} = 9^{2y} \\ 8^{x-2} = 4^{1+y} \end{cases}$$</p> <p><a href="https://i.stack.imgur.com/xN9Ny.jpg" rel="nofollow noreferrer">source image</a></p> <p>I know that $x-1=4y$ and $3x-6=2+2y$.</p> <p>However, when I checked the solutions, this should become $6x-16=4y$.</p> <p>How is this?</p>
Frank
332,250
<p>From $$\begin{align*} &amp; 3^{x-1}=9^{2y}\\ &amp; 8^{x-2}=4^{y+1}\end{align*}\tag1$$ We can rewrite $9$ as $3^2$, $8$ as $2^3$ and $4$ as $2^2$. Doing so and setting the powers equivalent, we get$$\begin{align*} &amp; x-1=4y\\ &amp; 3x-6=2+2y\end{align*}\tag2$$ To answer your question, look at the second equation. Moving the $2$ to the left hand side and multiplying everything by $2$, we get our desired equation$$6x-16=4y\tag3$$ Which is the equation in the book.</p> <hr> <p>To actually <em>solve</em>, we can substitute $4y$ from $(3)$ with the first equation of $(2)$ to solve for $x$.$$\begin{align*} &amp; 6x-16=x-1\\ &amp; 5x=15\implies x=3\end{align*}\tag4$$ Substituting that into $x-1=4y$ gives $y$ as$$y=\frac {x-1}4=\frac 12\tag5$$ Therefore, we have $x,y$ as$$\boxed{(x,y)=\left(3,\frac 12\right)}$$</p> <hr> <p>If you have any questions or confusions, you can ask me!</p>
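As a sanity check (our addition, not part of the original answer), the boxed solution satisfies both the original exponential equations and the linearized system:

```python
x, y = 3, 0.5

# Original simultaneous equations: 3^(x-1) = 9^(2y) and 8^(x-2) = 4^(1+y).
assert abs(3 ** (x - 1) - 9 ** (2 * y)) < 1e-9
assert abs(8 ** (x - 2) - 4 ** (1 + y)) < 1e-9

# Linearized equations (2) and (3) from the answer.
assert x - 1 == 4 * y
assert 6 * x - 16 == 4 * y
```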
818,850
<p>I want to generate numbers $1$ to $10$ with uniform probability distribution. So I write the numbers $1$ to $10$ in the natural order. I keep writing the next $10$-number block by permuting the first $10$ numbers. Can this long string be considered random with uniform distribution? Is there a problem?</p>
Damian Pavlyshyn
154,826
<p>Try explaining it with a number line:</p> <p>Divide the interval between 1 and 2 into ten equal parts and number them 0-9. Note that these are tenths, and so numbers of the form 1.4... fall into division four.</p> <p>Now divide the interval between 1.4 and 1.5 into ten equal parts and note that these are hundredths, so numbers of the form 1.41... fall into division one.</p> <p>Continuing this way demonstrates that at each step, adding more digits gives us a smaller and smaller interval that the number can be in and so increases accuracy rather than just making the number grow.</p>
2,857,508
<p>I'm trying to prove this: $$0.99^n \le 1/2,\text{ for }n=100$$</p> <p>I tried Bernoulli's inequality $(1-0.01)^n \geq 1-n\cdot0.01$ and it gave me LHS $\geq0$. </p> <p>I also tried to do this: $((1-0.01)^n)^n \leq(1-1/2)^n$ and it gave me LHS $\geq-99$ and RHS $\geq49$. </p> <p>Now I'm stuck.</p>
farruhota
425,072
<p>Alternatively: $$0.99^{100}&lt;\frac12 \iff 2\cdot 99^{100}&lt;100^{100} \iff \\ 2\cdot 99^{100}&lt;(1+99)^{100}=1+100\cdot 99+\cdots +100\cdot 99^{99}+99^{100} \iff \\ 2\cdot 99^{100}&lt;1+9900+\cdots+(1+99)\cdot 99^{99}+99^{100}\iff \\ 2\cdot 99^{100}&lt;1+9900+\cdots+99^{99}+2\cdot 99^{100}.$$</p>
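Since every step above is an equivalence between integer inequalities, the claim can also be confirmed exactly with arbitrary-precision integers (a quick check of ours):

```python
# Exact form of 0.99^100 < 1/2: multiply both sides by 2 * 100^100.
assert 2 * 99 ** 100 < 100 ** 100

# Floating-point version for comparison; 0.99^100 is about 0.366.
assert 0.99 ** 100 < 0.5
```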
174,676
<p>When I am faced with a simple linear congruence such as $$9x \equiv 7 \pmod{13}$$ and I am working without any calculating aid handy, I tend to do something like the following:</p> <p>"Notice" that adding $13$ on the right and subtracting $13x$ on the left gives: $$-4x \equiv 20 \pmod{13}$$</p> <p>so that $$x \equiv -5 \equiv 8 \pmod{13}.$$</p> <p>Clearly this process works and is easy to justify (apart from not having an algorithm for "noticing"), but my question is this: I have a vague recollection of reading somewhere this sort of process was the preferred method of C. F. Gauss, but I cannot find any evidence for this now, so does anyone know anything about this, or could provide a reference? (Or have I just imagined it all?)</p> <p>I would also be interested to hear if anyone else does anything similar.</p>
John Butnor
185,327
<p>$9x \equiv 7 \pmod{13}$</p> <p>$9x = 7 + 13n$</p> <p>$9x = 20$ for $n = 1$</p> <p>$9x = 33$ for $n = 2$</p> <p>$9x = 46$ for $n = 3$</p> <p>$9x = 59$ for $n = 4$</p> <p>$9x = 72$ for $n = 5$</p> <p>Then $x \equiv 8 \pmod{13}$.</p> <p>You arrive at the correct answer before $n = 13$.</p>
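The same congruence can be solved mechanically. A small sketch of ours, using brute force and the modular inverse (the three-argument `pow(9, -1, 13)` needs Python 3.8+):

```python
# Solve 9x ≡ 7 (mod 13).
brute = next(x for x in range(13) if 9 * x % 13 == 7)

inv = pow(9, -1, 13)   # modular inverse of 9 mod 13 (equals 3, since 9*3 = 27 ≡ 1)
x = inv * 7 % 13

assert brute == x == 8
```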
2,982,723
<p>I need help with this exercise. What I need to prove is that the function <span class="math-container">$f$</span> given below is not continuous at the point <span class="math-container">$(0,0)$</span>.</p> <p><span class="math-container">$$ f(x,y) = \begin {cases} \frac {x^3\times y} {x^6+y^2} &amp; (x,y) \not = 0\\ 0 &amp; (x,y) = 0 \end {cases} $$</span></p> <p>So what I've done so far is to calculate the limit of the function in the first place with two variables:</p> <blockquote> <p><span class="math-container">$$ \lim_{(x,y)\to\ (0,0)} \frac {x^3\times y} {x^6+y^2} $$</span> I substitute <span class="math-container">$y=mx$</span> where <span class="math-container">$m$</span> is the slope <span class="math-container">$$ \lim_{x\to 0} \frac {x^3\times mx} {x^6+(mx)^2} $$</span> <span class="math-container">$$=\lim_{x\to 0} \frac {x^4\times m} {x^6+m^2x^2} $$</span> <span class="math-container">$$=\lim_{x\to 0} \frac {x^4\times m} {x^2(x^4+m^2)} $$</span> <span class="math-container">$$=\lim_{x\to 0} \frac {x^2\times m} {x^4+m^2} $$</span> <span class="math-container">$$=\frac {0^2\times m} {0^4+m^2} = 0$$</span></p> </blockquote> <p>So my result says that it is continuous. What have I done wrong? What do I need to do to prove that it is not continuous if I have already calculated that it is? Thank you so much. If something isn't very clear, please let me know.</p>
Parcly Taxel
357,390
<p>We consider the path <span class="math-container">$y=x^3$</span> to <span class="math-container">$(0,0)$</span>. Along this path, the function becomes <span class="math-container">$$\frac{x^3\cdot x^3}{x^6+x^6}=\frac{x^6}{2x^6}=\frac12$$</span> and so the limit along this path is <span class="math-container">$\frac12$</span>. Since this is different from the limit of 0 you obtained with the different path <span class="math-container">$y=mx$</span>, the limit at the origin does not exist.</p>
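Numerically the path dependence is easy to see (an illustrative sketch of ours): along a line $y = mx$ the values shrink to $0$, while along $y = x^3$ they sit at $\frac12$.

```python
def f(x, y):
    return x**3 * y / (x**6 + y**2)

for x in (1e-1, 1e-2, 1e-3):
    print(f(x, 2 * x), f(x, x**3))   # line y = 2x vs. curve y = x^3

# Along y = x^3 the value is identically 1/2 away from the origin.
assert abs(f(1e-3, (1e-3) ** 3) - 0.5) < 1e-12
```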
79,868
<p>Given a function $f: \mathbb{R}^+ \rightarrow \mathbb{C}$ satisfying suitable conditions (exponential decay at infinity, continuity, and bounded variation are good enough), its <em>Mellin transform</em> is defined by the function</p> <p>$$M(f)(s) = \int_0^{\infty} f(y) y^s \frac{dy}{y},$$</p> <p>and $f(y)$ can be recovered by the Mellin inversion formula:</p> <p>$$f(y) = \frac{1}{2\pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} y^{-s} M(f)(s) ds.$$</p> <p>This is a change of variable from the Fourier inversion formula, or the Laplace inversion formula, and can be proved in the same way. This is used all the time in analytic number theory (as well as many other subjects, I understand) -- for example, if $f(y)$ is the characteristic function of $[0, 1]$ then its Mellin transform is $1/s$, and one recovers the fact (Perron's formula) that </p> <p>$$\frac{1}{2\pi i} \int_{2 - i \infty}^{2 + i \infty} n^{-s} \frac{ds}{s}$$</p> <p>is equal to 1 if $0 &lt; n &lt; 1$, and is 0 if $n &gt; 1$. (Note that there are technical issues which I am glossing over; one integrates over any vertical line with $\sigma &gt; 0$, and the integral is equal to $1/2$ if $n = 1$.)</p> <p>I use these formulas frequently, but... I find myself having to look them up repeatedly, and I'd like to understand them more intuitively. Perron's formula can be proved using Cauchy's residue formula (shift the contour to $- \infty$ or $+ \infty$ depending on whether $n &gt; 1$), but this proof doesn't prove the general Mellin inversion formula.</p> <p>My question is:</p> <blockquote> <p>What do the Mellin transform and the inversion formula mean? 
Morally, why are they true?</p> </blockquote> <p>For example, why is the Mellin transform an integral over the positive reals, while the inverse transform is an integral over the complex plane?</p> <p>I found some resources -- Wikipedia; <a href="https://mathoverflow.net/questions/383/motivating-the-laplace-transform-definition">this MO question</a> is closely related, and the first video in particular is nice; and a proof is outlined in Iwaniec and Kowalski -- but I feel that there should be a more intuitive explanation than any I have come up with so far.</p>
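One concrete way to get a feel for the definition is to check a known transform pair numerically. A crude sketch of ours (the helper `mellin` and its grid parameters are arbitrary choices): the Mellin transform of $e^{-y}$ is $\Gamma(s)$.

```python
import math

def mellin(f, s, upper=60.0, n=200_000):
    """Crude trapezoidal estimate of the Mellin transform: ∫_0^∞ f(y) y^(s-1) dy."""
    h = upper / n
    total = 0.0
    for i in range(1, n):   # the integrand vanishes at both endpoints here
        y = i * h
        total += f(y) * y ** (s - 1)
    return total * h

# Mellin transform of exp(-y) is the Gamma function.
for s in (2.0, 3.0, 4.5):
    approx = mellin(lambda y: math.exp(-y), s)
    assert abs(approx - math.gamma(s)) < 1e-4
```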
Tom Copeland
12,178
<p>Two equations that encapsulate the properties of the Fourier and Mellin transforms:</p> <p><span class="math-container">$$\int^{\infty}_{-\infty}{\exp(2 \pi ifx)\exp(-2 \pi ify)df} = \delta(x-y)$$</span></p> <p><span class="math-container">$$\frac{1}{2\pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} x^{-s} y^{s} ds= \delta(\ln(x)-\ln(y))= y \delta(x-y).$$</span></p> <p>The transformations from one equation to the other are obvious. The delta function results are intuitive and an extrapolation of the discrete case for the orthogonality relationships of the characters of character groups. The transform pairs, Plancherel and convolution theorems, and other relations are easy to derive from these two.</p> <p>(Note that whereas <span class="math-container">$e^{sz}$</span> is an eigenfunction for <span class="math-container">$d/dz$</span> and so the Laplace/Fourier transforms are appropriate for devising an operator calculus for <span class="math-container">$f(d/dz)$</span>, <span class="math-container">$z^s$</span> is an eigenfunction of <span class="math-container">$zd/dz$</span> and so the Mellin transform is more appropriate for <span class="math-container">$f(zd/dz)$</span>.)</p> <p>Ramanujan's Master Formula/Theorem (see <a href="https://en.wikipedia.org/wiki/Ramanujan%27s_master_theorem" rel="nofollow noreferrer">Wikipedia</a>, particularly the Hardy refer.) gives a somewhat intuitive perspective on the Mellin transform as providing an &quot;interpolation&quot; of the coefficients of the Taylor series of certain classes of functions, as discussed in the intro of &quot;<a href="https://arxiv.org/abs/1103.5126" rel="nofollow noreferrer">Ramanujan's Master Theorem</a> ...&quot; by Olafsson and Pasquale. 
E.g.,</p> <p><span class="math-container">$$\int^{\infty}_{0}f(x)\frac{x^{s-1}}{(s-1)!} dx = g(-s)$$</span> and</p> <p><span class="math-container">$$\frac{1}{2\pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} \frac{\pi}{\sin(\pi s)} g(-s) \frac{x^{-s}}{(-s)!} ds = \sum_{n=0}^{\infty} g(n) \frac{(-x)^{n}}{n!} = f(x)$$</span></p> <p>for the transform pairs</p> <p><span class="math-container">$f(x)=\exp(-x)$</span> and <span class="math-container">$g(-s)= 1$</span> <span class="math-container">$(\sigma&gt;0)$</span> and</p> <p><span class="math-container">$f(x)=\frac{1}{1+x}$</span> and <span class="math-container">$g(-s)= (-s)!$</span> <span class="math-container">$(0&lt;\sigma&lt;1$</span> and <span class="math-container">$abs(x)&lt;1)$</span></p> <p><span class="math-container">$f(x)=\exp(-x^2)$</span> and <span class="math-container">$g(-s)= \cos(\pi\frac{ s}{2})\frac{(-s)!}{(-\frac{s}{2})!} = \frac{1}{2}\frac{(\frac{s}{2}-1)!}{(s-1)!} $</span> <span class="math-container">$(\sigma&gt;0)$</span>.</p> <p>From a similar perspective, the iconic Euler (Mellin) integral for the gamma function for <span class="math-container">$Real(s) &gt; 0$</span></p> <p><span class="math-container">$$ \displaystyle \int_0^\infty \frac{t^{s-1}}{(s-1)!} \; e^{-t\;p} \; dt = p^{-s}$$</span></p> <p>provides the scaffolding for understanding and utilizing the interplay among the Mellin transform, its inverse, operator calculus, and interpolation.</p> <p>A natural interpolation of the derivative as the fractional integroderivative of fractional calculus is obtained by using the Mellin transform to interpolate the op coefficients of the op e.g.f. 
<span class="math-container">$\displaystyle e^{tD_x} \;,$</span> i.e., the shift op, for the integer powers of the derivative:</p> <p><span class="math-container">$$\displaystyle \int_0^\infty \frac{t^{s-1}}{(s-1)!} \; e^{-tD_x} \; dt \; H(x) g(x) = D_x^{-s} H(x) g(x) = \int_0^\infty \frac{t^{s-1}}{(s-1)!} \; e^{-tD_x} \; H(x) g(x)\; dt$$</span></p> <p><span class="math-container">$$\displaystyle = \int_0^\infty \frac{t^{s-1}}{(s-1)!} \; H(x-t) \; g(x-t) dt \; . $$</span></p> <p>Then specifically acting on the power function for <span class="math-container">$\displaystyle \alpha &gt; -1$</span></p> <p><span class="math-container">$$ \displaystyle \int_0^\infty \frac{t^{s-1}}{(s-1)!} \; H(x-t) \; (x-t)^\alpha dt = \int_0^x \frac{t^{s-1}}{(s-1)!} \; (x-t)^\alpha \; dt $$</span></p> <p><span class="math-container">$$\displaystyle = \int_0^x \frac{t^{s-1}}{(s-1)!} \; \sum_{k \ge 0} (-1)^k \; x^{\alpha-k} \frac{\alpha!}{(\alpha-k)!} \; \frac{t^k}{k!} \; dt = \frac{1}{(s-1)!} \sum_{k \ge 0} (-1)^k \; x^{\alpha-k} \binom{\alpha}{k} \; \frac{t^{s+k}}{s+k} \; |_{t=0}^{x}$$</span></p> <p><span class="math-container">$$\displaystyle = x^{\alpha + s} \; (-s)! \; \sum_{k \ge 0} \; \binom{\alpha}{k} \; \frac{\sin(\pi (s+k))}{\pi (s+k)} = x^{\alpha +s} \frac{\alpha!}{(\alpha+s)!} \; = D_x^{-s} x^\alpha \; .$$</span></p> <p>The last summation converges with no restriction on <span class="math-container">$s$</span>. So, we see that the Mellin transform does indeed interpolate the coefficients of the e.g.f. 
generated by the binomial theorem expansion <span class="math-container">$\displaystyle x^{\alpha-k} \frac{\alpha!}{(\alpha-k)!}$</span> to <span class="math-container">$\displaystyle x^{\alpha+s} \frac{\alpha!}{(\alpha+s)!}$</span> to give an interpolation of the coefficients of the shift op <span class="math-container">$ D_x^k$</span> to <span class="math-container">$ D_x^{-s}$</span> consistent with fractional calculus.</p> <p>The same method can be used to interpolate</p> <p><span class="math-container">$$\displaystyle (x \; D_x \;x)^n = x^n D_x^n x^n = x^n \; n!\; L_n(-:xD_x:) , $$</span></p> <p>where <span class="math-container">$ L$</span> denotes the Laguerre polynomials and <span class="math-container">$(:xD_x:)^k = x^kD_x^k$</span> by definition, leading to</p> <p><span class="math-container">$$ \int_0^\infty \frac{t^{s-1}}{(s-1)!} \; e^{-txD_xx} \; dt \; H(x) x^\alpha = (xD_xx)^{-s}\; x^\alpha = \int_0^\infty \frac{t^{s-1}}{(s-1)!} \; \frac{x^\alpha}{(1+xt)^{\alpha+1}} \; dt = x^{\alpha-s} \frac{(\alpha-s)!}{\alpha!} = x^{-s} D_x^{-s} x^{-s} \; x^\alpha $$</span></p> <p>for <span class="math-container">$ 0 &lt; Real(s) &lt; \alpha +1 \; .$</span></p> <p>Or, it gives the analytic continuation for a Mellin transform related to a class of differential operators encompassing the Witt Lie algebra:</p> <p><span class="math-container">$$ (x^{1+y}D_x)^{-s} \; x^\alpha = \int_0^\infty \frac{t^{s-1}}{(s-1)!} \; H[\frac{x}{(1+y\;t\;x^y)^{1/y}}] \frac{x^\alpha}{(1+y\;t\;x^y)^{\alpha/y}} \; dt $$</span></p> <p><span class="math-container">$$= H(y) \; x^{\alpha-sy} y^{-s} \frac{(-s+\alpha/y-1)!}{(\alpha/y-1)!} \;+ \; H(-y) \; x^{\alpha+s|y|} |y|^{-s} \frac{(\alpha/|y|)!}{(\alpha/|y|+s)!} \;.$$</span></p> <p>A simple way to derive the formulas in your question is by looking at the inverse Mellin transform rep of the Dirac delta function. 
See my short note on the <a href="https://tcjpn.wordpress.com/2011/11/14/note-on-the-inverse-mellin-transform-and-the-dirac-delta-function/" rel="nofollow noreferrer">Inverse Mellin Transform and the Dirac Delta Function</a>. See also some applications in <a href="https://tcjpn.wordpress.com/2011/11/21/jx/" rel="nofollow noreferrer">Dirac's Delta Function and Riemann's Jump Function J(x) for the Primes</a> and <a href="https://tcjpn.wordpress.com/2011/11/16/a-generalized-dobinski-relation-and-the-confluent-hypergeometric-fcts/" rel="nofollow noreferrer">The Inverse Mellin Transform, Bell Polynomials, a Generalized Dobinski Relation, and the Confluent Hypergeometric Functions</a>.</p> <p>Edwards, in <em>Riemann's Zeta Function</em>, Ch. 10 (Fourier Analysis), Sec. 10.1 (Invariant Operators on R+ and Their Transforms), gives a nice, more group-theoretic intro to the Mellin transform in line with other comments in this stream.</p>
2,026,406
<p>As written in the title, does there exist a $\mathbb C$-Banach space isometric to a Hilbert space but whose norm is not induced by an inner product?</p> <p>Since an inner product in the Hilbert space has to fulfill the Parallelogram Identity, how could it be that such a $\mathbb C$-Banach space exists?</p>
Eric Wofsey
86,856
<p>No such space exists. Let $H$ be a Hilbert space, and suppose $X$ is a $\mathbb{C}$-Banach space with $T:X\to H$ an isometry. Then define an inner product on $X$ by $\langle x,y\rangle=\langle T(x),T(y)\rangle$ (where the right-hand side is the inner product of $H$). Then this inner product induces the norm of $X$, since $$\langle x,x\rangle=\langle T(x),T(x)\rangle=\|T(x)\|^2=\|x\|^2$$ (the last equality being because $T$ is an isometry).</p>
588,725
<p>Prove the following statement:</p> <p>$$\frac{1}{x}&lt;\ln(x)-\ln(x-1)&lt;\frac{1}{x-1}$$</p> <p>Proof:</p> <p>$$\frac{-1}{x^2}&lt;\frac{1}{x(x-1)}&lt;\frac{-1}{(x-1)^2}$$</p> <p>$$e^{(\frac{-1}{x^2})}&lt;e^{(\frac{-1}{x(x-1)})}&lt;e^{(\frac{-1}{(x-1)^2})}$$</p> <p>$$\lim_{x\to\infty}e^{(\frac{-1}{x^2})}&lt;\lim_{x\to\infty}e^{(\frac{-1}{x(x-1)})}&lt;\lim_{x\to\infty}e^{(\frac{-1}{(x-1)^2})}$$</p> <p>$$e^{0}&lt;e^{0}&lt;e^{0}$$</p> <p>$$1&lt;1&lt;1$$</p> <p>therefore, by the MVT, we get the statement to be proven.</p> <p>Does anyone agree with the way I chose to prove the above statement? Any feedback would be appreciated. Thank you in advance!</p>
Community
-1
<p>There's a mistake in your calculation: if we have a strict inequality $$f(x)&lt;g(x)$$ then by passing to the limit we only have $$\lim_{x\to a}f(x)\leq \lim_{x\to a}g(x),\qquad a\in \mathbb{R}\cup\{\infty\}$$</p>
1,879,346
<p>Ok, so I have this question from my math book.</p> <blockquote> <p>The vertices A,B,C of a triangle are (3,5),(2,6) and (-4,-2) respectively. Find the coordinates of the circum-centre and also the radius of the circum-circle of the triangle.</p> </blockquote> <p>How can we solve this? Can we use the distance formula? Answer: The circum-radius was found to be <em>R=5</em>. The coordinates of the circum-centre were found to be <em>(-1,2)</em>. A diagram would be appreciated. Thank you!</p>
Carser
132,859
<p>So you can find the linear equations of the segment bisectors of each edge (really you only need two) which are $$ y=-x+1 $$ $$ y=x+3 $$ $$ y=-\frac{3x}{4} + \frac{5}{4} $$ Interestingly, these all coincide at the midpoint of segment $\overline{BC}$ at $(-1,2)$ as shown below (graph available here: <a href="https://www.desmos.com/calculator/gqwj7pn6ud" rel="nofollow noreferrer">https://www.desmos.com/calculator/gqwj7pn6ud</a>)</p> <p><a href="https://i.stack.imgur.com/Tl1Ck.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Tl1Ck.png" alt="enter image description here"></a> The radius of the circle is the distance from there to any one of the vertices of the triangle which gives you $r=5$ so the equation of the circle is given as $$ (x+1)^2+(y-2)^2 = 5^2 $$</p>
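The perpendicular-bisector computation can be cross-checked with the standard circumcentre formula (our sketch; the function name is arbitrary):

```python
def circumcentre(a, b, c):
    """Circumcentre of triangle abc via the standard determinant formula."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

ox, oy = circumcentre((3, 5), (2, 6), (-4, -2))
r = ((ox - 3) ** 2 + (oy - 5) ** 2) ** 0.5
assert (ox, oy) == (-1.0, 2.0) and r == 5.0
```

That the centre lands on the midpoint of $\overline{BC}$ also tells you (by Thales' theorem) that the angle at $A$ is a right angle.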
848,631
<p>The function</p> <p>$$f(y) = \displaystyle \frac{\sin(Ny)}{\sin y}$$</p> <p>is periodic with period $2 \pi$ in general. But tracing the graph of this function for $N$ odd, it seems that for $0 \leq y &lt; \pi$ it is the same as for $\pi \leq y &lt; 2 \pi$.</p> <p>So can we state that the period of $f(y)$ with $N$ odd is just $\pi$? </p> <p>I found only that if $N = 2k + 1$, $k \in \mathbb{Z}$, and $y' = y + \pi$, then</p> <p>$$\sin(N y') = \sin[2k(y + \pi) + y + \pi] = \sin [(2k + 1) \pi + (2k + 1)y]$$</p> <p>Is there an analytical method to prove that periodicity?</p>
Daniel Fischer
83,702
<p>Note that</p> <p>$$\sin (x+\pi) = -\sin x$$</p> <p>for all $x$. Hence</p> <p>$$\sin (x + k\pi) = (-1)^k\sin x.$$</p> <p>So we have</p> <p>$$\sin (N(y+\pi)) = \sin (Ny + N\pi) = (-1)^N\sin (Ny).$$</p> <p>Thus, whenever $N$ and $M$ have the same parity,</p> <p>$$\frac{\sin (Ny)}{\sin (My)}$$</p> <p>has period $\pi$.</p>
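A quick numerical confirmation of this (our addition): for odd $N$ the function repeats with period $\pi$, while for even $N$ it flips sign instead.

```python
import math

def f(y, N):
    return math.sin(N * y) / math.sin(y)

for y in (0.3, 0.7, 1.1, 2.5):
    assert abs(f(y + math.pi, 5) - f(y, 5)) < 1e-9   # N = 5 (odd): period pi
    assert abs(f(y + math.pi, 4) + f(y, 4)) < 1e-9   # N = 4 (even): antiperiodic
```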
2,046,773
<p>Given a function $f$ that is differentiable at a point $x_0$, if we define (using the Riemann integral)</p> <p>$$F(x) = \int_a^x f$$</p> <p>Can we necessarily say that $F^{\prime}(x)$ is continuous at $x_0$? Going back and forth between $f$ and $F$ confuses me a bit. I <em>think</em> that the Fundamental Theorem of Calculus gives us some relation between $F^{\prime}(x_0)$ and $f(x_0)$, but I'm not sure. </p>
operatorerror
210,391
<p>As stated in the problem, $F'(x)$ would be not only continuous but differentiable at $x_0$. </p> <p>The FTC allows you to conclude that for $x_0\in [a,x]$, $$ F'(x_0)=f(x_0) $$ Which means that $F'(x)$ in its region of validity has the properties $f$ has.</p>
2,289,754
<p>Let $f_t: \mathbb R^2 \to \mathbb R$ be defined by $f_t(x,y) = t \sin(t|x^2+y^2-1|)$.</p> <p>For all $\phi\in D(\mathbb R^2-\{(0,0)\})$, we denote by $T$ the map given by $$T(\phi)=\lim_{t\to +\infty}\int_{\mathbb R^2} f_t(x,y) \, \phi(x,y)\, \mathrm{d}x\, \mathrm{d}y$$</p> <p>How can I prove that $T$ is a distribution of order $0$ on $\mathbb R^2-\{(0,0)\}$?</p> <p>Remark: I have proved that $T: D(\mathbb R^2-\{(0,0)\}) \to \mathbb R $ is a linear map. It remains to show that $T$ is continuous. In fact, I started by using the polar coordinate system, which gives $$T(\phi)=\lim_{t\to +\infty}\int_{0}^{2\pi} \int_{0}^{\infty}\, t \sin(t|r^2-1|) \, \phi(r,\theta)\,r\, \mathrm{d}r\, \mathrm{d}\theta=... ??$$</p> <p>Thank you in advance</p>
Sangchul Lee
9,340
<blockquote> <p><strong>Lemma.</strong> Let $\varphi \in C_c^1(\mathbb{R})$. Then</p> <p>$$ \lim_{t\to\infty} \int_{\mathbb{R}} \frac{t \sin(t|u|)}{2} \varphi(u) \, du = \varphi(0). $$</p> </blockquote> <p><em>Proof of Lemma.</em> We have</p> <p>\begin{align*} \int_{\mathbb{R}} \frac{t \sin(t|u|)}{2} \varphi(u) \, du &amp;= \int_{0}^{\infty} t \sin(tu) \cdot \frac{\varphi(u) + \varphi(-u)}{2} \, du \\ &amp;= \varphi(0) + \int_{0}^{\infty} \cos(tu) \left( \frac{\varphi'(u) - \varphi'(-u)}{2} \right) \, du \\ &amp;\xrightarrow[t\to\infty]{} \varphi(0) \end{align*}</p> <p>by the Riemann-Lebesgue lemma. ////</p> <p>Now we return to the original problem. Let $\phi \in \mathcal{D}(\mathbb{R}^2\setminus\{0\})$ and write</p> <p>$$ \varphi(r) = \int_{\partial B_1(0)} \phi(r\omega) \, \sigma(\mathrm{d}\omega), $$</p> <p>where $\sigma$ is the surface measure on $\partial B_1(0)$. Then applying polar coordinates,</p> <p>\begin{align*} \int_{\mathbb{R}^2} f_t(x, y) \phi(x, y) \, \mathrm{d}x\mathrm{d}y &amp;= \int_{0}^{\infty} rt \sin(t|r^2-1|)\varphi(r) \, \mathrm{d}r \\ &amp;= \int_{-1}^{\infty} \frac{t\sin(t|u|)}{2} \varphi\left(\sqrt{u+1}\right) \, \mathrm{d}u \end{align*}</p> <p>where $u = r^2 - 1$. Now taking $t\to\infty$ and applying the lemma together with the observation $\varphi \in C_c^{1}((-1,\infty))$, we have</p> <p>$$ T\phi = \varphi(1) = \int_{\partial B_1(0)} \phi(\omega) \, \sigma(\mathrm{d}\omega). $$</p> <p>That is, $T\phi$ is $2\pi$ times the average value of $\phi$ on the unit circle. This gives</p> <p>$$|T\phi| \leq 2\pi \|\phi\|_{\sup}$$</p> <p>and hence $T$ is a distribution of order $0$.</p>
419,730
<p>$EX = \int xf(x)\,dx$, where $f(x)$ is the density function of some random variable $X$. This is pretty understandable, but what if I had, for example, a function $Y = -X + 1$ and had to calculate $EY$? How should I do this?</p>
Thomas Russell
32,374
<p>You can calculate the expected value of some function $g$ of $X$ using the following formula:</p> <p>$$E[g(X)]=\int_{-\infty}^{\infty}g(x)f(x)\:dx$$</p> <p>An explanation for this can be found on the <a href="http://en.wikipedia.org/wiki/Expected_value#General_definition" rel="nofollow">wikipedia page</a>.</p>
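For the specific $Y=-X+1$ in the question, assuming for illustration that $X$ is uniform on $[0,1]$ (an assumption of ours; the question does not fix a density), the formula gives $EY=\frac12$, which a simple quadrature confirms:

```python
def expectation(g, f, a, b, n=100_000):
    """Trapezoidal estimate of E[g(X)] = ∫_a^b g(x) f(x) dx."""
    h = (b - a) / n
    total = 0.5 * (g(a) * f(a) + g(b) * f(b))
    for i in range(1, n):
        x = a + i * h
        total += g(x) * f(x)
    return total * h

# X ~ Uniform(0, 1), g(x) = -x + 1  =>  E[g(X)] = 1/2.
ey = expectation(lambda x: -x + 1, lambda x: 1.0, 0.0, 1.0)
assert abs(ey - 0.5) < 1e-9
```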
391,569
<p>I've started the chapter in my book where we begin to integrate trig functions, so bear in mind that I've only just started and do not have a handle on more advanced techniques.</p> <p>$\eqalign{ &amp; \int {{{\sin }^3}x} dx \cr &amp; = \int {\sin x({{\sin }^2}x} )dx \cr &amp; = \int {\sin x({1 \over 2}} - {1 \over 2}\cos 2x)dx \cr &amp; = \int {{1 \over 2}\sin x(1 - \cos 2x)dx} \cr &amp; = \int {{1 \over 2}\sin x(1 - (2{{\cos }^2}x - 1))dx} \cr &amp; = \int {{1 \over 2}\sin x(2 - 2{{\cos }^2}x)dx} \cr &amp; = \int {\sin x - {{\cos }^2}} x\sin xdx \cr &amp; y = {1 \over 3}{\cos ^3}x - \cos x + C \cr} $</p> <hr> <p>I got the right answer, but it seems like an awfully long-winded way of doing things. Have I made things harder than they should be with this method?</p>
vadim123
73,324
<p>You have unnecessary middle steps, just use $\sin^2x = 1-\cos^2x$ after the second line.</p> <p>To handle other odd powers:</p> <p>$$\sin^{2k+1}x=(\sin^{2}x)^k(\sin x)=(1-\cos^2x)^k(\sin x)$$</p> <p>then use $u=\cos x$.</p>
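A quick derivative check of the antiderivative obtained in the question (our sketch, using a central finite difference):

```python
import math

def F(x):
    """Antiderivative proposed above: (1/3)cos^3 x - cos x (constant dropped)."""
    return math.cos(x) ** 3 / 3 - math.cos(x)

h = 1e-6
for x in (0.2, 0.9, 2.1):
    deriv = (F(x + h) - F(x - h)) / (2 * h)   # should equal sin^3 x
    assert abs(deriv - math.sin(x) ** 3) < 1e-8
```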
2,160,671
<p>I got stuck with this problem.</p> <p>\begin{matrix} 1 &amp; 1 &amp; 1 &amp; 1\\ 0 &amp; 1 &amp; 1 &amp; 0\\ 0 &amp; 0 &amp; 1 &amp; 1\\ \end{matrix}</p> <p>Consider the $3\times 4$ matrix $\bf A$ (above). Do the columns of $\bf A$ span $\mathbb R^3$? </p> <p>Prove your answer. Also, find a $4\times 3$ matrix $\bf B$ such that $\bf AB = I_3$.</p> <p>--</p> <p>I know that the columns of $\bf A$ span $\mathbb R^3$ as there are more columns than rows. But I cannot understand how to find the matrix $\bf B$ because I cannot set up a "super-augmented" matrix and do Gauss-Jordan elimination. It looks like I need to do something with the 4th column of $\bf A$ and the 4th row of $\bf B$. What do you think?</p> <p>Thanks!</p>
GNUSupporter 8964民主女神 地下教會
290,189
<p>Denote the $j$-th columnn of $A$ by $a_j$.</p> <ol> <li><p>The columns of $A$ span $\Bbb R^3$ because the first <strong>three</strong> columns $a_1,a_2,a_3$ are clearly linearly independent, and $\dim(\Bbb R^3) = \bf 3$, so $\{a_1,a_2,a_3\}$ are a linearly independent spanning subset of $\Bbb R^3$.</p></li> <li><p>Observe that $e_2 = a_2 - a_1$ and $e_3 = a_3 - a_2$, where $e_i$ is the $i$-th standard unit vector. Then, by letting $B = \begin{bmatrix} 1 &amp; -1 &amp; 0 \\ 0 &amp; 1 &amp; -1 \\ 0 &amp; 0 &amp; 1 \\ 0 &amp; 0 &amp; 0 \end{bmatrix}$, we have \begin{align} AB =&amp; \begin{bmatrix} a_1 &amp; a_2 &amp; a_3 &amp; a_4 \end{bmatrix} \begin{bmatrix} 1 &amp; -1 &amp; 0 \\ 0 &amp; 1 &amp; -1 \\ 0 &amp; 0 &amp; 1 \\ 0 &amp; 0 &amp; 0 \end{bmatrix} \\ =&amp; \begin{bmatrix} a_1 &amp; -a_1 + a_2 &amp; - a_2 + a_3 \end{bmatrix} \\ =&amp; \begin{bmatrix} e_1 &amp; e_2 &amp; e_3 \end{bmatrix} \\ =&amp; I_3. \end{align}</p></li> </ol>
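The product can be checked mechanically; a plain Python sketch with the two matrices hard-coded from the question and the answer:

```python
A = [[1, 1, 1, 1],
     [0, 1, 1, 0],
     [0, 0, 1, 1]]

B = [[1, -1, 0],
     [0, 1, -1],
     [0, 0, 1],
     [0, 0, 0]]

# triple-loop product of the 3x4 matrix A with the 4x3 matrix B
AB = [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(3)]
      for i in range(3)]

print(AB)  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```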
616,454
<p>How can I calculate this integral?</p> <p>$$ \int {\exp(2x)}{\sin(3x)}\, \mathrm{d}x$$ </p> <p>I tried using integration by parts, but it doesn't lead me to any improvement. So I made an attempt through the replacement $$ \cos(3x) = t$$ and it becomes $$-\frac{1}{3}\int \exp\left(2\left(\dfrac{\arccos(t)}{3}\right)\right)\, \mathrm{d}t$$ but I still cannot calculate the new integral. Any ideas?</p> <p><strong>SOLUTION</strong>:</p> <p>$$\int {\exp(2x)}{\sin(3x)}\, \mathrm{d}x = \int {\sin(3x)}\, \mathrm{d}\left(\frac{\exp(2x)}{2}\right)=$$ </p> <p>$$\frac{1}{2}{\sin(3x)}{\exp(2x)}-\frac{1}{2}\int {\exp(2x)}\, \mathrm{d}(\sin(3x))=$$</p> <p>$$\frac{1}{2}{\sin(3x)}{\exp(2x)}-\frac{3}{2}\int {\exp(2x)}{\cos(3x)}\,\mathrm{d}x=$$</p> <p>$$\frac{1}{2}{\sin(3x)}{\exp(2x)}-\frac{3}{2}\int {\cos(3x)}\,\mathrm{d}\left(\frac{\exp(2x)}{2}\right)=$$</p> <p>$$\frac{1}{2}{\sin(3x)}{\exp(2x)}-\frac{3}{4}{\cos(3x)}{\exp(2x)}+\frac{3}{4}\int {\exp(2x)}\,\mathrm{d}({\cos(3x)})=$$</p> <p>$$\frac{1}{2}{\sin(3x)}{\exp(2x)}-\frac{3}{4}{\cos(3x)}{\exp(2x)}-\frac{9}{4}\int {\sin(3x)}{\exp(2x)}\,\mathrm{d}x$$</p> <p>$$\implies \left(1+\frac{9}{4}\right)\int {\exp(2x)}{\sin(3x)}\, \mathrm{d}x= \frac{1}{2}{\sin(3x)}{\exp(2x)}-\frac{3}{4}{\cos(3x)}{\exp(2x)}+c$$</p> <p>$$\implies \int {\exp(2x)}{\sin(3x)}\, \mathrm{d}x=\frac{1}{13}\exp(2x)(2\sin(3x)-3\cos(3x))+c$$</p>
Salech Alhasov
25,654
<p>Hint:</p> <p>$$\displaystyle\int e^{2x}\sin(3x)\, \mathrm{d}x=Ae^{2x}\sin(3x)+Be^{2x}\cos(3x)$$</p> <p>where $A$ and $B$ are constant. </p>
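Elaborating on the hint (a sketch, not part of the original answer): differentiating the ansatz gives e^{2x}[(2A-3B) sin 3x + (3A+2B) cos 3x], so matching coefficients yields 2A-3B=1 and 3A+2B=0. A short script solves this 2x2 system exactly and spot-checks the result by numerical differentiation:

```python
import math
from fractions import Fraction

# matching coefficients: 2A - 3B = 1 and 3A + 2B = 0, solved by Cramer's rule
det = Fraction(2 * 2 - (-3) * 3)       # determinant = 13
A = Fraction(1 * 2 - (-3) * 0) / det   # A = 2/13
B = Fraction(2 * 0 - 3 * 1) / det      # B = -3/13
print(A, B)  # 2/13 -3/13

def antideriv(x):
    # candidate antiderivative A e^{2x} sin(3x) + B e^{2x} cos(3x)
    return math.exp(2 * x) * (float(A) * math.sin(3 * x) + float(B) * math.cos(3 * x))

# central-difference derivative at a sample point should match e^{2x} sin(3x)
x0, h = 0.7, 1e-6
numeric_deriv = (antideriv(x0 + h) - antideriv(x0 - h)) / (2 * h)
print(abs(numeric_deriv - math.exp(2 * x0) * math.sin(3 * x0)) < 1e-5)  # True
```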
2,534,145
<p>I am trying to determine whether the series $\sum_{n=2}^\infty \frac{1}{n(n-1)}$ converges or not.</p> <p>I first broke the sequence up into partial fractions to obtain </p> <p>$$\sum_{n=2}^\infty -\frac{1}{n} + \frac{1}{n-1}$$ $$= \sum_{n=2}^\infty -\frac{1}{n} + \sum_{n=2}^\infty \frac{1}{n-1} $$</p> <p>The first series is a harmonic series and so it diverges. The second series is bigger than an ordinary harmonic series $\sum_{n=2}^\infty \frac{1}{n}$ and so by direct comparison if the harmonic series diverges so must $\sum_{n=2}^\infty \frac{1}{n-1}$. Both series diverge and it is known that the sum of two divergent series could be either divergent or convergent. I am not sure how to determine if this series converges - any help would be appreciated. </p>
Paolo Leonetti
45,736
<p>$$ \lim_{k\to \infty}\sum_{n=2}^k\left(\frac{1}{n-1}-\frac{1}{n}\right)=\lim_{k\to \infty} \left(1-\frac{1}{2}+\frac{1}{2}-\frac{1}{3}+\cdots-\frac{1}{k}\right)=\lim_{k\to \infty}1-\frac{1}{k}=1. $$</p>
2,534,145
<p>I am trying to determine whether the series $\sum_{n=2}^\infty \frac{1}{n(n-1)}$ converges or not.</p> <p>I first broke the sequence up into partial fractions to obtain </p> <p>$$\sum_{n=2}^\infty -\frac{1}{n} + \frac{1}{n-1}$$ $$= \sum_{n=2}^\infty -\frac{1}{n} + \sum_{n=2}^\infty \frac{1}{n-1} $$</p> <p>The first series is a harmonic series and so it diverges. The second series is bigger than an ordinary harmonic series $\sum_{n=2}^\infty \frac{1}{n}$ and so by direct comparison if the harmonic series diverges so must $\sum_{n=2}^\infty \frac{1}{n-1}$. Both series diverge and it is known that the sum of two divergent series could be either divergent or convergent. I am not sure how to determine if this series converges - any help would be appreciated. </p>
Dr. Sonnhard Graubner
175,066
<p>use that $$\sum_{n=2}^m\frac{1}{n(n-1)}=1-\frac{1}{m}$$</p>
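The hinted identity is easy to confirm with exact rational arithmetic, for example:

```python
from fractions import Fraction

def partial_sum(m):
    # sum_{n=2}^{m} 1/(n(n-1)), computed exactly
    return sum(Fraction(1, n * (n - 1)) for n in range(2, m + 1))

# equals 1 - 1/m for every m, so the partial sums converge to 1
for m in (2, 5, 10, 100):
    assert partial_sum(m) == 1 - Fraction(1, m)
print(partial_sum(100))  # 99/100
```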
2,534,145
<p>I am trying to determine whether the series $\sum_{n=2}^\infty \frac{1}{n(n-1)}$ converges or not.</p> <p>I first broke the sequence up into partial fractions to obtain </p> <p>$$\sum_{n=2}^\infty -\frac{1}{n} + \frac{1}{n-1}$$ $$= \sum_{n=2}^\infty -\frac{1}{n} + \sum_{n=2}^\infty \frac{1}{n-1} $$</p> <p>The first series is a harmonic series and so it diverges. The second series is bigger than an ordinary harmonic series $\sum_{n=2}^\infty \frac{1}{n}$ and so by direct comparison if the harmonic series diverges so must $\sum_{n=2}^\infty \frac{1}{n-1}$. Both series diverge and it is known that the sum of two divergent series could be either divergent or convergent. I am not sure how to determine if this series converges - any help would be appreciated. </p>
Karn Watcharasupat
501,685
<p>\begin{align} \sum_{n=2}^\infty \frac{1}{n(n-1)}=&amp;\lim_{N \to \infty}\sum_{n=2}^N \frac{1}{n-1}-\frac{1}{n}\\ &amp;=\lim_{N \to \infty} \bigg(\frac{1}{1}-\frac{1}{2}\\ &amp;+\frac{1}{2}-\frac{1}{3}\\ &amp;+\frac{1}{3}-\frac{1}{4}\\ &amp;\vdots\\ &amp;+\frac{1}{N-1}-\frac{1}{N} \bigg)\\ &amp;=\lim_{N \to \infty} \left(1-\frac{1}{N}\right)\\ &amp;=1 \end{align} Clearly the series converges.</p>
2,534,145
<p>I am trying to determine whether the series $\sum_{n=2}^\infty \frac{1}{n(n-1)}$ converges or not.</p> <p>I first broke the sequence up into partial fractions to obtain </p> <p>$$\sum_{n=2}^\infty -\frac{1}{n} + \frac{1}{n-1}$$ $$= \sum_{n=2}^\infty -\frac{1}{n} + \sum_{n=2}^\infty \frac{1}{n-1} $$</p> <p>The first series is a harmonic series and so it diverges. The second series is bigger than an ordinary harmonic series $\sum_{n=2}^\infty \frac{1}{n}$ and so by direct comparison if the harmonic series diverges so must $\sum_{n=2}^\infty \frac{1}{n-1}$. Both series diverge and it is known that the sum of two divergent series could be either divergent or convergent. I am not sure how to determine if this series converges - any help would be appreciated. </p>
Sri-Amirthan Theivendran
302,692
<p>Use the limit comparison test $$ \frac{1}{n(n-1)}\bigg/n^{-2}\to1 $$ as $n\to\infty$. Since $\sum n^{-2}$ converges, your series converges as well.</p>
2,610,560
<blockquote> <p>Given that $x, y, z$ are positive integers and that $$3x=4y=7z$$ find the minimum value of $x+y+z$. The options are:</p> <p>A) 33</p> <p>B) 40</p> <p>C) 49</p> <p>D) 61</p> <p>E) 84</p> </blockquote> <p>My attempt:</p> <p>$y=\frac{3}{4}x, z=\frac{3}{7}x$. </p> <p>Substituting these values into $x+y+z$, I get $\frac{117}{28}x$. I have no idea how to continue. $x$ in this case would have to be 28, meaning that the sum is $117$, which is not one of the options</p>
For the love of maths
510,854
<p>Turns out you made a mistake.<br> $x+y+z=x+\frac 34x +\frac 37x=\frac {61}{28}x$.<br> Substituting $x=28$ gives us $\frac {61}{28}*28= 61$.<br> Option $D$ $(61)$ is correct. </p> <hr> <p>You could do this in another way:<br> $LCM (3, 4, 7)= 3*4*7 = 84$<br> $\therefore 3*28=4*21=7*12$<br> So, Let $f(x)=x+y+z$<br> $f_{min}(x)= 28+21+12$<br> $\therefore f_{min}(x)= 61$. </p> <p>Both methods are valid. Use whichever you find easier to remember.</p>
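The LCM method can be scripted directly (a small Python sketch of the computation above):

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# 3x = 4y = 7z forces the common value to be a multiple of lcm(3, 4, 7)
common = lcm(lcm(3, 4), 7)  # 84, the smallest positive common value
x, y, z = common // 3, common // 4, common // 7
print(x, y, z, x + y + z)  # 28 21 12 61
```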
2,888,464
<p>Let $S$ be the set of polynomials $f(x)$ with integer coefficients satisfying </p> <p>$f(x) \equiv 1$ mod $(x-1)$</p> <p>$f(x) \equiv 0$ mod $(x-3)$</p> <p>Which of the following statements are true?</p> <p>a) $S$ is empty .</p> <p>b) $S$ is a singleton.</p> <p>c)$S$ is a finite non-empty set.</p> <p>d) $S$ is countably infinite.</p> <p>My Try: I took $x =5$ then $f(5) \equiv 1$ mod $4$ and $f(5) \equiv 0$ mod $2$ . Which is impossible so $S$ is empty.</p> <p>Am I correct? Is there any formal way to solve this?</p>
Bernard
202,857
<p>Since $f(x)\equiv f(n)\mod x-n$, the second condition says $f(3)=0$ and $f(x)=(x-3)g(x)$ for some $g(x)\in\mathbf Z[x]$. But then $\;f(1)=-2g(1)$ is even, contrary to the first condition $f(1)=1$.</p>
3,566,997
<p>When I go through examples of calculating the intersection of two planes, there seems to be a convention of choosing an arbitrary point in order to solve the linear equations in question and get a particular point on the intersection line.</p> <p>Let's take an example from a <a href="https://math.stackexchange.com/q/475953/597067">previously-asked question</a>, where the given equations were:</p> <blockquote> <p><span class="math-container">$x + 2y + z - 1 = 0$</span></p> <p><span class="math-container">$2x + 3y - 2z + 2 = 0$</span></p> </blockquote> <p>And the relevant part in the <a href="https://math.stackexchange.com/a/475962/597067">second-most voted answer</a> goes as follows:</p> <blockquote> <p>Next, we need to find a particular point on the line. We can try <span class="math-container">$y=0$</span> and solve the resulting system of linear equations:<span class="math-container">$$\begin{align}x+z-1&amp;=&amp;0\\2x-2z+2&amp;=&amp;0\end{align}$$</span> giving <span class="math-container">$x=0, z=1$</span></p> </blockquote> <p><strong>My question:</strong> </p> <p>How does one know what is the correct point to choose, and how does one validate that the chosen point is correct?<br> Also, if the chosen point is wrong, how does one successfully guess the next point? </p> <p>I've found <a href="https://math.stackexchange.com/a/27384/597067">another answer</a> that seems to be very relevant, but I can't explain it:</p> <blockquote> <p>Sometimes the line of intersection happens to be parallel to the z=0 plane. In that case you could try y=0 or x=0. (One of these is sure to work.)</p> </blockquote> <p>I'm a beginner, so this is probably a fundamental, trivial thing that I don't get: why is it true?</p>
user577215664
475,762
<p>You can differentiate with respect to <span class="math-container">$y$</span>: <span class="math-container">$$\frac{d}{dx}[a^y]=\frac{d}{dy}[a^y] \frac {dy}{dx}=\frac{d}{dy}[a^y] y'=\frac{d}{dy}[e^{y \ln a}] y'$$</span> Note that <span class="math-container">$(e^{cx})'=e^{cx}c$</span>, so (for <span class="math-container">$a&gt;0$</span>) <span class="math-container">$$\frac{d}{dx}[a^y]=[e^{y \ln a}] \ln a \cdot y'$$</span> And <span class="math-container">$e^{y \ln a}=a^y$</span>: <span class="math-container">$$\frac{d}{dx}[a^y]=a^y \ln a \cdot y'$$</span> Therefore: <span class="math-container">$$a^y=x \implies a^y \ln a \cdot y' =1$$</span> <span class="math-container">$$x \ln a \cdot y' =1 \implies y'= \frac {1}{x \ln a}$$</span></p>
951,104
<p>Problem : If $$T_r =\frac{r}{4r^4+1}$$ then the value of $$\sum^{\infty}_{r=1} T_r$$ is? </p> <p>How should I start such a problem? I am not getting any clue on this one; please suggest an approach. Thanks.</p>
Jlamprong
105,847
<p>Note that \begin{align} \sum_{r=1}^{\infty}T_r &amp;=\sum_{r=1}^{\infty}\frac{r}{4r^4+1}\\ &amp;=\frac14\sum_{r=1}^{\infty}\left(\frac{1}{2r^2-2r+1}-\frac{1}{2r^2+2r+1}\right)\\ &amp;=\frac14\sum_{r=1}^{\infty}\left(\frac{1}{2r^2-2r+1}-\frac{1}{2(r+1)^2-2(r+1)+1}\right).\\ \end{align}</p> <p>From this point, I am sure you can carry on the sum using telescoping series.</p>
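Carrying the telescoping to its limit, every term cancels except the first, 1/(2·1² − 2·1 + 1) = 1, so the sum is 1/4. A brute-force partial sum agrees with this value:

```python
# partial sums of r / (4 r^4 + 1) approach the telescoped value 1/4
s = sum(r / (4 * r ** 4 + 1) for r in range(1, 100_000))
print(s)  # ≈ 0.25
```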
833,353
<p>Let $a,b \geq 0$ and $0&lt;p,q &lt; 1$ s.t. $p + q = 1$.</p> <p>Is it true that $a^p b^q \leq a+b$?</p>
Paramanand Singh
72,031
<p>Well, the inequality is much stronger. In fact we have $a^{p}b^{q} \leq pa + qb$ and equality holds only when $a = b$. Since $0 &lt; p, q &lt; 1$ it follows that $pa &lt; a, qb &lt; b$ so that $pa + qb &lt; a + b$ and therefore we get $a^{p}b^{q} \leq pa + qb &lt; a + b$. The proof for the general inequality $a^{p}b^{q} \leq pa + qb$ can be given by writing $q = 1 - p$. Then we need to prove for $a, b \geq 0$ and $0 &lt; p &lt; 1$ that $$a^{p}b^{1 - p} \leq ap + b(1 - p)$$ clearly we can assume $b &gt; a$ and consider the function $f(x) = x^{1 - p}$ so that $f'(x) = (1 - p)x^{-p}$. Then by mean value theorem we have $$f(b) - f(a) = (b - a)f'(c)$$ for some $c \in (a, b)$. This means that $$b^{1 - p} - a^{1 - p} = (1 - p)(b - a)c^{-p}$$ We have $a &lt; c &lt; b$ therefore $c^{-p} &lt; a^{-p}$ (note that $(-p) &lt; 0$). And thus we get $$b^{1 - p} - a^{1 - p} &lt; (1 - p)(b - a)a^{-p}$$ and multiplying by $a^{p} &gt; 0$ we get $$a^{p}b^{1 - p} - a &lt; (1 - p)(b - a)$$ or $$a^{p}b^{1 - p} &lt; pa + (1 - p)b$$</p>
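A quick randomized spot-check of the key inequality a^p b^{1-p} <= pa + (1-p)b (illustrative only, not a proof):

```python
import random

random.seed(0)
for _ in range(10_000):
    a, b = random.uniform(0.0, 100.0), random.uniform(0.0, 100.0)
    p = random.uniform(0.001, 0.999)
    # weighted AM-GM, with a small tolerance for floating-point error
    assert a ** p * b ** (1 - p) <= p * a + (1 - p) * b + 1e-9
print("inequality held on all random samples")
```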
4,386,333
<p>This is Exercise 1.6(iii) from Intro. to K-Theory for <span class="math-container">$C^*$</span>-Algebras by Rørdam et al.</p> <p>If <span class="math-container">$A$</span> is a unital <span class="math-container">$C^*$</span>-algebra, and <span class="math-container">$a \in A$</span> is invertible, then I want to show that <span class="math-container">$a^{-1} \in C^*(a)$</span>. What I know is:</p> <ul> <li><span class="math-container">$a$</span> is invertible if and only if <span class="math-container">$aa^*$</span> and <span class="math-container">$a^*a$</span> are invertible, with <span class="math-container">$a^{-1} = a^*(aa^*)^{-1} = (a^*a)^{-1}a^*$</span></li> <li>If <span class="math-container">$b \in A$</span> is invertible and normal, then there is some <span class="math-container">$f \in C(\sigma(b))$</span> such that <span class="math-container">$f(b) = b^{-1}$</span> (proven using the continuous functional calculus). Another question <a href="https://math.stackexchange.com/questions/1636173/c-algebra-generated-by-a">here</a> shows that <span class="math-container">$b^{-1}, 1 \in C^*(b) = C^*(b, 1)$</span>.</li> <li><span class="math-container">$C^*(a)$</span> is the closed linear span of <span class="math-container">$a^ma^{*n}$</span> and <span class="math-container">$a^{*s}a^t$</span>, where <span class="math-container">$m,n,s,t \in \mathbb{N}$</span>.</li> </ul> <p>My issue here is that since <span class="math-container">$a$</span> is not a normal element, then I don't know that <span class="math-container">$C^*(a, 1) = C^*(a)$</span> (which I don't think is even true), nor that there is a *-isomorphism between <span class="math-container">$C(\sigma(a))$</span> and <span class="math-container">$C^*(a)$</span>. I suspect that there is something simple that I am not seeing, but I am not sure what it is.</p>
Just a user
977,740
<p>Note that if <span class="math-container">$a$</span> is invertible, so is <span class="math-container">$a^*a$</span>. And <span class="math-container">$(a^*a)^{-1}\in C^*(a^*a,1)\subset C^*(a, 1)$</span>, and <span class="math-container">$a^{-1} = (a^*a)^{-1}a^*\in C^*(a, 1)$</span>. This can be used to show that the spectrum of an element doesn't expand when restricted to any subalgebra (which is not the case for Banach algebras).</p>
2,614,776
<p><strong>Exercise :</strong></p> <blockquote> <p><strong>(A.1)</strong> Let G be a group. The subgroup <span class="math-container">$Z(G) = \{z \in G | zg =gz \space \forall \space g \in G\}$</span> is called the center of <span class="math-container">$G$</span>. Show that <span class="math-container">$Z(G)$</span> is a proper subgroup of <span class="math-container">$G$</span>.</p> <p><strong>(A.2)</strong> Find the center of the dihedral group <span class="math-container">$D_3$</span>.</p> <p><strong>(A.3)</strong> Show that the groups <span class="math-container">$D_3$</span> and <span class="math-container">$S_3$</span> are isomorphic. Using (A.2) find the center of <span class="math-container">$S_3$</span>.</p> <p><strong>(A.4)</strong> Show that for all <span class="math-container">$n&gt;2$</span> the center <span class="math-container">$Z(S_n)$</span> of the permutation group <span class="math-container">$S_n$</span> contains only the identity permutation.</p> </blockquote> <p><strong>Attempt :</strong></p> <p><strong>(A.1)</strong></p> <p>From the definition of the identity element : <span class="math-container">$eg = ge = g \space \forall \space g \in G$</span>. This means that <span class="math-container">$e \in Z(G) \Rightarrow Z(G) \neq \emptyset$</span>.</p> <p>Now, let <span class="math-container">$a,b \in Z(G)$</span>. Then :</p> <p><span class="math-container">$$\forall \space g \in G: (ab)g = a(bg) = a(gb) = (ag)b = (ga)b = g(ab)$$</span></p> <p>which means that <span class="math-container">$ab \in Z(G)$</span>.</p> <p>Finally, let <span class="math-container">$c \in Z(G)$</span>. 
Then :</p> <p><span class="math-container">$\forall \space g \in G : cg=gc \Rightarrow c^{-1}(cg)c^{-1}=c^{-1}(gc)c^{-1} \Rightarrow egc^{-1} =c^{-1}ge \Rightarrow gc^{-1} = c^{-1}g \Rightarrow c^{-1} \in Z(G) $</span></p> <p>which finally leads to <span class="math-container">$Z(G) \leq G$</span>.</p> <p><strong>(A.2)</strong></p> <p>I found an elaboration for the center of the dihedral group <span class="math-container">$D_{2n}$</span> <a href="https://ysharifi.wordpress.com/2011/02/02/center-of-dihedral-groups/" rel="nofollow noreferrer">here</a> but it won't work to find out the center of <span class="math-container">$D_3$</span>, so I would appreciate any tips or links.</p> <p>For <strong>(A.3)-(A.4)</strong> I am at a loss on how to even start, so I would really appreciate any thorough answer or links with similar exercises - tips.</p> <p>Sorry for not being able to provide an attempt, but currently I'm going over problems that were in exams the previous years (for my semester exams) and it seems there is a lot of stuff that I have difficulty handling, which seems weirder than what we have handled so far.</p>
BallBoy
512,865
<p>For A.2 your link will work, there's just a difference in notation - your $D_3$ is the link's $D_6$.</p> <p>For A.3, Colescu's comment is a good start. The symmetries of an equilateral triangle can be reached via rotations and reflections ($D_3$) or permuting the vertices ($S_3$). Show that these give the same group of symmetries.</p> <p>For A.4, consider an element $\phi \in S_n$ (as a permutation acting in a set of letters) which is not the identity. So we know there is at least one letter - call it $a$ - which is not fixed. Say $\phi (a)=b$. There are two cases - $\phi(b)=a$ and $\phi(b)=c \neq a$. Let $\xi$ be the permutation which switches $a$ and $b$ (and leaves all else fixed) and $\psi$ the permutation which rotates $a,b,c$ cyclically. Verify that in the first case, $\psi\circ\phi(a)\neq\phi\circ\psi(a)$, and in the second case, $\xi\circ\phi(a)\neq\phi\circ\xi(a)$.</p>
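For small n, the claim in A.4 can also be verified by brute force; a Python sketch representing permutations as tuples:

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p(q(i)); permutations of {0, ..., n-1} as tuples
    return tuple(p[q[i]] for i in range(len(p)))

def center(n):
    G = list(permutations(range(n)))
    return [z for z in G if all(compose(z, g) == compose(g, z) for g in G)]

for n in (3, 4, 5):
    assert center(n) == [tuple(range(n))]  # only the identity commutes with everything
print("Z(S_n) is trivial for n = 3, 4, 5")
```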
3,489,896
<p>It is clear to me that <span class="math-container">$\liminf X_n \le \limsup X_n$</span>, but every time I see that <span class="math-container">$\liminf X_n \le \sup X_n$</span> it seems less obvious to me. Could someone help me to understand this inequality?</p>
drhab
75,923
<p>Hint:</p> <p><span class="math-container">$\sup X_n$</span> is an upperbound of the set <span class="math-container">$\{X_n\mid n\in\mathbb N\}$</span> which means that for <strong>every</strong> <span class="math-container">$k\in\mathbb N$</span> we have <span class="math-container">$X_k\leq\sup X_n$</span>.</p>
4,287,450
<p>I tried solving this by finding dy/dx and letting the denominator be zero, yielding an equation whose solution can only be approximated. But that approximate value is not right, as seen on Desmos.</p>
Glorious Nathalie
948,761
<p><span class="math-container">$r = k \theta $</span></p> <p><span class="math-container">$ x = k \theta \cos \theta $</span> and <span class="math-container">$ y = k \theta \sin \theta $</span></p> <p>Slope is <span class="math-container">$\dfrac{ \sin \theta + \theta \cos \theta }{ \cos \theta - \theta \sin \theta} $</span></p> <p>It is infinite when <span class="math-container">$\cos \theta = \theta \sin \theta $</span> i.e. <span class="math-container">$ \cot \theta = \theta$</span></p> <p>The solutions can be found by the well-known Newton's method. As a guide towards finding the initial guess of <span class="math-container">$\theta$</span>, plot <span class="math-container">$\cot \theta $</span> and <span class="math-container">$\theta $</span> and see where they approximately intersect. Then the iteration is</p> <p><span class="math-container">$\theta_{k+1} = \theta_k + \dfrac{ \cot \theta_k - \theta_k }{ \csc^2 \theta_k + 1 }$</span></p> <p>Once the iterations converge to a root <span class="math-container">$\theta^* = u$</span> then</p> <p><span class="math-container">$x = k u \cos u $</span></p>
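Running the iteration above in Python, with an initial guess read off a rough plot of cot(theta) against theta, converges to the first positive root theta ≈ 0.8603:

```python
import math

theta = 0.8  # rough initial guess from plotting cot(theta) against theta
for _ in range(50):
    cot = math.cos(theta) / math.sin(theta)
    csc2 = 1 / math.sin(theta) ** 2
    theta += (cot - theta) / (csc2 + 1)  # the Newton step given above

print(theta)  # ≈ 0.8603; then x = k * theta * cos(theta) for the chosen k
```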
3,682,514
<blockquote> <p>The hyperbola is given with the following equation: <span class="math-container">$$3x^2+2xy-y^2+8x+10y+14=0$$</span> Find the asymptotes of this hyperbola. (<span class="math-container">$\textit{Answer: }$</span> <span class="math-container">$6x-2y+19=0$</span> and <span class="math-container">$2x+2y-1=0$</span>)</p> </blockquote> <p>In my book, it is said that if the hyperbola is given with the equation: <span class="math-container">$$Ax^2+2Bxy+Cy^2+2Dx+2Ey+F=0$$</span> then the direction vector <span class="math-container">$\{l,m\}$</span> of the asymptotes is found from the following equation: <span class="math-container">$$Al^2+2Blm+Cm^2=0$$</span> (Actually, I don't know the proof) Then to solve this, we let <span class="math-container">$k=\frac{l}{m}, \ \ (m \not =0)$</span> and solve the quadratic equation for <span class="math-container">$k$</span>: <span class="math-container">$$Ak^2+2Bk+C=0, \text{ in our case } 3k^2+2k-1 = 0$$</span> From here, I got <span class="math-container">$k=-1 \text{ or } k=\frac{1}{3}$</span> (which give us slopes of the two asymptotes). </p> <p>Hence we search for the asymptotes of the form <span class="math-container">$y=kx+b$</span> and restrict <span class="math-container">$b$</span> in such a way that the line does not intersect the hyperbola. Plugging this <span class="math-container">$y$</span> and <span class="math-container">$k=-1$</span> into the equation of the hyperbola I got: <span class="math-container">$$(4b-2)x-b^2+10b+14=0$$</span> so <span class="math-container">$b=\frac{1}{2}$</span> (as the equation should not have a solution). Then, <span class="math-container">$y=-x+\frac{1}{2}$</span> or <span class="math-container">$2x+2y-1=0$</span> as in the answer!</p> <p>However, I could not find the second one in this way...</p> <p>Then, I got stuck... 
</p> <p>It would be greatly appreciated if you either help me understand why the asymptotes are found this way and complete this solution, or suggest another solution.</p>
Quanto
686,284
<p>To justify the approach, note that asymptotically the quadratic terms of the curve <span class="math-container">$f(x,y)=3x^2+2xy-y^2+8x+10y+14$</span> dominate, i.e.</p> <p><span class="math-container">$$3x^2+2xy-y^2=(3x-y)(x+y)=0$$</span></p> <p>which corresponds to the asymptotic behaviors of the asymptotes and yields their slopes <span class="math-container">$3,\&gt; -1$</span>.</p> <p>To obtain the actual equations of the two asymptotes, let <span class="math-container">$f'_x= f'_y=0$</span> to determine the center, i.e.</p> <p><span class="math-container">$$ 6x+2y+8=0,\&gt;\&gt;\&gt;\&gt;\&gt;2x-2y+10=0$$</span></p> <p>Solve to get the center <span class="math-container">$(-\frac94, \frac{11}4)$</span>. Then, use the point-slope formula for the equations</p> <p><span class="math-container">$$y-\frac{11}4=-(x+\frac94),\&gt;\&gt;\&gt;\&gt;\&gt; y-\frac{11}4=3(x+\frac94)$$</span></p>
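A sketch verifying both lines exactly: substituting y = mx + c into the conic, a line through the center is an asymptote precisely when the x^2 and x coefficients vanish while the constant term does not (so the line never meets the curve):

```python
from fractions import Fraction as F

# conic 3x^2 + 2xy - y^2 + 8x + 10y + 14 = 0; substitute the line y = m*x + c
def coeffs(m, c):
    quad = 3 + 2 * m - m * m              # coefficient of x^2
    lin = 2 * c - 2 * m * c + 8 + 10 * m  # coefficient of x
    const = -c * c + 10 * c + 14          # constant term
    return quad, lin, const

# the two lines through the center (-9/4, 11/4) with slopes 3 and -1
for m in (F(3), F(-1)):
    c = F(11, 4) - m * F(-9, 4)  # point-slope: c = y0 - m*x0
    quad, lin, const = coeffs(m, c)
    # asymptote test: x^2 and x terms vanish, constant does not
    assert quad == 0 and lin == 0 and const != 0
    print(f"y = {m}x + {c} is an asymptote of the conic")
```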
3,682,514
<blockquote> <p>The hyperbola is given with the following equation: <span class="math-container">$$3x^2+2xy-y^2+8x+10y+14=0$$</span> Find the asymptotes of this hyperbola. (<span class="math-container">$\textit{Answer: }$</span> <span class="math-container">$6x-2y+19=0$</span> and <span class="math-container">$2x+2y-1=0$</span>)</p> </blockquote> <p>In my book, it is said that if the hyperbola is given with the equation: <span class="math-container">$$Ax^2+2Bxy+Cy^2+2Dx+2Ey+F=0$$</span> then the direction vector <span class="math-container">$\{l,m\}$</span> of the asymptotes is found from the following equation: <span class="math-container">$$Al^2+2Blm+Cm^2=0$$</span> (Actually, I don't know the proof) Then to solve this, we let <span class="math-container">$k=\frac{l}{m}, \ \ (m \not =0)$</span> and solve the quadratic equation for <span class="math-container">$k$</span>: <span class="math-container">$$Ak^2+2Bk+C=0, \text{ in our case } 3k^2+2k-1 = 0$$</span> From here, I got <span class="math-container">$k=-1 \text{ or } k=\frac{1}{3}$</span> (which give us slopes of the two asymptotes). </p> <p>Hence we search for the asymptotes of the form <span class="math-container">$y=kx+b$</span> and restrict <span class="math-container">$b$</span> in such a way that the line does not intersect the hyperbola. Plugging this <span class="math-container">$y$</span> and <span class="math-container">$k=-1$</span> into the equation of the hyperbola I got: <span class="math-container">$$(4b-2)x-b^2+10b+14=0$$</span> so <span class="math-container">$b=\frac{1}{2}$</span> (as the equation should not have a solution). Then, <span class="math-container">$y=-x+\frac{1}{2}$</span> or <span class="math-container">$2x+2y-1=0$</span> as in the answer!</p> <p>However, I could not find the second one in this way...</p> <p>Then, I got stuck... 
</p> <p>It would be greatly appreciated if you either help me understand why the asymptotes are found this way and complete this solution, or suggest another solution.</p>
bruceyuan
534,973
<p>Just as I thought: the asymptote condition for a quadratic curve can be reformulated as the ratio $y/x$ approaching a constant as $x$ grows.</p> <p>As pointed out by others, the slope of the asymptotes is $\frac{m}{l}$. Let $y = \frac{m}{l} x + b$, so $$ \lim _{x \to \infty} \frac{y}{x} = \frac{m}{l} $$ as well as $$ 0 = \lim _{x \to \infty} A + 2B \frac{y}{x} + C (\frac{y}{x})^2 + \frac{2D}{x} + \frac{2Ey}{x^2} + \frac{F}{x^2} = A + 2B\frac{m}{l} + C(\frac{m}{l})^2$$ which naturally yields what you want.</p>
145,156
<p>I need to fetch some journal issues, for example like this: www.sciencedirect.com/science/journal/00963003/124/1</p> <p>You may check this part to be constant: </p> <pre><code>string = "https://www.sciencedirect.com/science/journal/00963003" </code></pre> <p>I read <a href="http://mathematica.stackexchange.com/questions/3387/fetching-data-from-html-source">here</a> that I can do something like <code>src=Import["https://www.sciencedirect.com/science/journal/00963003/124/1","HTML"]</code></p> <p>But this is only for one issue. In this example there are as many as 310 volumes and each volume may have none, 1, 2, 2-3, 3 or 4 issues. Is it possible to write a script in Mathematica to fetch all HTML pages of issues at once, or at least with a minimal number of fetching trials? Sorry, I cannot make more than 2 links, so some links are without https://</p>
garej
24,604
<p>Is this what you are after?</p> <pre><code>issues = Flatten@ Table[(ToString[i] &lt;&gt; "/" &lt;&gt; #) &amp; /@ {"", "1", "2", "2-3", "4"}, {i, 1, 310}] </code></pre> <p>Then one could StringJoin, something like:</p> <pre><code>links = StringJoin[{string, "/", #}] &amp; /@ issues; </code></pre> <p>"XMLObject" is a better option. Fetch the first five objects:</p> <pre><code>Import[#, "XMLObject"] &amp; /@ Take[links, 5] // AbsoluteTiming </code></pre> <blockquote> <p>{32.3833, {...1...}}</p> </blockquote> <p>It took me half a minute to fetch five pages. (BTW, from a PC that has no subscription.)</p> <p>Proof of work:</p> <pre><code>% [[2]] // Length (* 5 *) </code></pre> <hr> <p><strong>Disclaimer</strong> I would not recommend using it from an institute network.</p> <p>In no event will we be liable for any loss or damage including without limitation, indirect or consequential loss or damage, or any loss or damage whatsoever arising out of, or in connection with, the use of this code. <code>;-))</code></p>
3,556,490
<p>Can someone explain to me how to solve the equation <span class="math-container">$\frac{x-n}{x+n} = e^{-x}$</span>, where <span class="math-container">$n$</span> is a non-zero natural number? Unfortunately, I do not even have an idea how to start. Any hint is much appreciated.</p> <p>Many thanks in advance.</p>
The_Sympathizer
11,172
<p>As @Jam mentions in the comments, this is similar to equations for which the Lambert W-function would be applicable, but a bit too complex - and it is in fact equations like this that are suggested to motivate a <em>generalized Lambert W-function</em> which, in this case, is pretty much equivalent to your equation: namely</p> <p><span class="math-container">$$W\left(\begin{matrix}a \\ b\end{matrix}; x\right) := \left[x \mapsto e^x \left(\frac{x - a}{x - b}\right)\right]^{-1}$$</span></p> <p>inverts <span class="math-container">$x \mapsto e^x \frac{x - a}{x - b}$</span> just as <span class="math-container">$W$</span> inverts <span class="math-container">$x \mapsto e^x x$</span>. Thus the solution of your equation is almost directly and immediately given as</p> <p><span class="math-container">$$x = W\left(\begin{matrix}n \\ -n\end{matrix}; 1\right)$$</span></p> <p>As said, though, this may not be so satisfying - but it is the inability to capture the solutions of some equations with our existing corpus of special functions that motivates the coinage of new ones to begin with, and is why it always grows with time. The paper in which this generalization is discussed, viz. Jam's link,</p> <p><a href="https://arxiv.org/pdf/1408.3999.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1408.3999.pdf</a></p> <p>also gives a series expansion:</p> <p><span class="math-container">$$W\left(\begin{matrix}t \\ s\end{matrix}; a\right) = t - T \sum_{k=1}^{\infty} \frac{L'_k(kT)}{k} e^{-kt} a^k$$</span></p> <p>where <span class="math-container">$T := t - s$</span> and I've renamed the index variable to not clash with <span class="math-container">$n$</span>. 
Here <span class="math-container">$L'_k$</span> is the <em>derivative</em> of the <span class="math-container">$k$</span>th Laguerre polynomial, and so with <span class="math-container">$t = n$</span> and <span class="math-container">$s = -n$</span>, you get <span class="math-container">$T = 2n$</span> so</p> <p><span class="math-container">$$\begin{align} x = W\left(\begin{matrix}n \\ -n\end{matrix}; 1\right) &amp;= n - 2n \sum_{k=1}^{\infty} \frac{L'_k(2nk)}{k} e^{-kn}\end{align}$$</span></p> <p>While I don't yet have a convergence proof, numerical experimentation at least suggests the right-hand series does actually converge at these parameters, though very poorly. For example, when <span class="math-container">$n = 1$</span>, 1000 terms give only on the order of <span class="math-container">$10^{-5}$</span> accuracy:</p> <p><span class="math-container">$$x = W \left(\begin{matrix}1 \\ -1\end{matrix}; 1\right) \approx 1.54340$$</span></p> <p>Convergence at higher values of <span class="math-container">$n$</span>, e.g. <span class="math-container">$n = 2$</span>, is not really better. But the series does at least provide a <em>representation</em> for the solution, and if we were to implement the generalized Lambert function in software, we would likely use the series to bootstrap a fast Newton's method procedure, similar to the usual methods for evaluating the usual Lambert function.</p>
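In lieu of the slowly converging series, a plain Newton iteration on f(x) = (x - n) - (x + n)e^{-x} (the "bootstrap" idea just mentioned) recovers the quoted value for n = 1; a Python sketch:

```python
import math

def solve(n, x0=2.0, iters=60):
    # Newton's method on f(x) = (x - n) - (x + n) * exp(-x)
    x = x0
    for _ in range(iters):
        f = (x - n) - (x + n) * math.exp(-x)
        df = 1 - (1 - x - n) * math.exp(-x)  # f'(x)
        x -= f / df
    return x

root = solve(1)
print(root)  # ≈ 1.54340, matching the series estimate for n = 1
```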
3,315,558
<p>I am having a very frustrating time with the back book that says my answer is way off but to me everything looks fine:</p> <p><span class="math-container">\begin{align*} (xy)y'&amp;= x^2+3y^2\\ y' &amp;= \frac{x^2}{xy} + \frac{3y^2}{xy}\\ y' &amp;= \frac{x}{y} + \frac{3y}{x}\\ y' &amp;= \frac{1}{v} + 3v\\ y' &amp;= \frac{1 + 3v^2}{v}\\ v+\frac{dv}{dx}x &amp;= \frac{1+3v^2}{v}\\ \frac{dv}{dx}x&amp;= \frac{1+3v^2-v^2}{v}\\ \frac{dv}{dx}x &amp;= \frac{1+2v^2}{v}\\ \int \frac{v}{2v^2+1}\,dv &amp;= \int\frac{1}{x}\,dx\\ u &amp;= 2v^2+1\\ du &amp;= 4v\,dv\\ dv &amp;= \frac{1}{4v}\,du\\ \int \frac{v}{u} \frac{1}{4v}\,du &amp;= \int \frac{1}{x} \,dx\\ \int \frac{1}{4u}\,du &amp;= \ln|x| + c\\ \frac{1}{4} \int \frac{1}{u}\,du &amp;= \ln|x| +c\\ \frac{1}{4} \ln|2v^2 + 1| &amp;= \ln |x| + c\\ \ln|2v^2 + 1|&amp;= 4\ln|x|+c\\ 2v^2 + 1 &amp;= e^{4\ln|x|}e^c\\ 2v^2 + 1 &amp;= Cx^4\\ 2v^2 &amp;= Cx^4\\ v^2 &amp;= Cx^4\\ \frac{y}{x} &amp;= \sqrt{Cx^4}\\ y &amp;= x\sqrt{Cx^4} \end{align*}</span></p> <p>However the book says the answer is <span class="math-container">$x^2 + 2y^2 = Cx^6.$</span> I am fairly sure there are no mistakes.</p>
Cesareo
397,348
<p>Hint.</p> <p>A way to solve this DE is by making <span class="math-container">$z = y^2$</span> and then</p> <p><span class="math-container">$$ \frac 12 x z' - 3z = x^2 $$</span></p> <p>which is a linear DE. Solving for <span class="math-container">$z$</span> we have</p> <p><span class="math-container">$$ z = y^2 = C_1 x^6-\frac 12x^2 $$</span></p> <p>NOTE</p> <p>The homogeneous part</p> <p><span class="math-container">$$ \frac 12 x z'_h -3z_h = 0 $$</span></p> <p>is separable giving</p> <p><span class="math-container">$$ z_h = C_0 x^6 $$</span></p>
3,315,558
<p>I am having a very frustrating time with the back of the book, which says my answer is way off, but to me everything looks fine:</p> <p><span class="math-container">\begin{align*} (xy)y'&amp;= x^2+3y^2\\ y' &amp;= \frac{x^2}{xy} + \frac{3y^2}{xy}\\ y' &amp;= \frac{x}{y} + \frac{3y}{x}\\ y' &amp;= \frac{1}{v} + 3v\\ y' &amp;= \frac{1 + 3v^2}{v}\\ v+\frac{dv}{dx}x &amp;= \frac{1+3v^2}{v}\\ \frac{dv}{dx}x&amp;= \frac{1+3v^2-v^2}{v}\\ \frac{dv}{dx}x &amp;= \frac{1+2v^2}{v}\\ \int \frac{v}{2v^2+1}\,dv &amp;= \int\frac{1}{x}\,dx\\ u &amp;= 2v^2+1\\ du &amp;= 4v\,dv\\ dv &amp;= \frac{1}{4v}\,du\\ \int \frac{v}{u} \frac{1}{4v}\,du &amp;= \int \frac{1}{x} \,dx\\ \int \frac{1}{4u}\,du &amp;= \ln|x| + c\\ \frac{1}{4} \int \frac{1}{u}\,du &amp;= \ln|x| +c\\ \frac{1}{4} \ln|2v^2 + 1| &amp;= \ln |x| + c\\ \ln|2v^2 + 1|&amp;= 4\ln|x|+c\\ 2v^2 + 1 &amp;= e^{4\ln|x|}e^c\\ 2v^2 + 1 &amp;= Cx^4\\ 2v^2 &amp;= Cx^4\\ v^2 &amp;= Cx^4\\ \frac{y}{x} &amp;= \sqrt{Cx^4}\\ y &amp;= x\sqrt{Cx^4} \end{align*}</span></p> <p>However the book says the answer is <span class="math-container">$x^2 + 2y^2 = Cx^6.$</span> I am fairly sure there are no mistakes.</p>
Allawonder
145,126
<p>Your solution simplifies to the cubical parabola <span class="math-container">$$y=kx^3.$$</span> Substituting this into the original equation with <span class="math-container">$k=1$</span> for simplicity shows something must be wrong despite your confidence to the contrary, for we have on the one hand <span class="math-container">$(xy)y'=x^4(3x^2)=3x^6,$</span> and on the other side <span class="math-container">$x^2+3y^2=x^2+3x^6,$</span> which are clearly not equal, off in fact by <span class="math-container">$x^2.$</span></p> <p>The most suspicious operation is where you subsumed the <span class="math-container">$+1$</span> into the constant -- wrongly, it appears. There's no legitimate way you could have managed that! </p>
1,451,308
<p>$Y$ is a non-negative random variable, not necessarily integrable. How can one show $$\lim_{n\rightarrow \infty }\frac{1}{n}\int_{Y&lt;n}Y \, dP=0?$$</p> <p>My idea is to find a good upper bound that converges to zero. However, I didn't find a good one. My upper bound is $$\frac{1}{n}\int_{Y&lt;n}Y \, dP\leq \frac{1}{n}\int_{Y&lt;n}n \, dP\leq \int_{Y&lt;n}dP\leqslant 1.$$</p>
Dominik
259,493
<p>Note that the sequence of random variables $X_n = \frac{1}{n} I\{Y &lt; n\} Y$ is bounded by $1$ and converges pointwise to $0$. The assertion now follows from the dominated convergence theorem.</p>
1,451,308
<p>$Y$ is a non-negative random variable, not necessarily integrable. How can one show $$\lim_{n\rightarrow \infty }\frac{1}{n}\int_{Y&lt;n}Y \, dP=0?$$</p> <p>My idea is to find a good upper bound that converges to zero. However, I didn't find a good one. My upper bound is $$\frac{1}{n}\int_{Y&lt;n}Y \, dP\leq \frac{1}{n}\int_{Y&lt;n}n \, dP\leq \int_{Y&lt;n}dP\leqslant 1.$$</p>
saz
36,150
<p>Here is a direct proof (which doesn't require any additional theorems such as the dominated convergence theorem):</p> <p>Fix $\epsilon&gt;0$. Since $A_n := \{n \leq Y &lt; \infty\}$ is a sequence of decreasing sets satisfying $\bigcap_{n \in \mathbb{N}} A_n = \emptyset$, the continuity of the (probability) measure $\mathbb{P}$ gives</p> <p>$$\lim_{n \to \infty} \mathbb{P}(n \leq Y &lt; \infty) = 0;$$</p> <p>in particular we can choose $N \in \mathbb{N}$ such that $\mathbb{P}(N \leq Y &lt; \infty) \leq \epsilon$. Now</p> <p>$$\begin{align*} \frac{1}{n} \int_{Y&lt;n} Y \, d\mathbb{P} &amp;= \frac{1}{n} \int_{Y&lt;N} Y \, d\mathbb{P} + \int_{N \leq Y &lt; n} \frac{Y}{n} \, d\mathbb{P} \\ &amp;\leq \frac{N}{n} \underbrace{\mathbb{P}(Y &lt; N)}_{\leq 1} + \mathbb{P}(N \leq Y &lt; n) \\ &amp;\leq \frac{N}{n} + \mathbb{P}(N \leq Y &lt; \infty) \leq \frac{N}{n} + \epsilon. \end{align*}$$</p> <p>Choosing $n$ sufficiently large, we get</p> <p>$$\frac{1}{n} \int_{Y&lt;n} Y \, d\mathbb{P} \leq 2 \epsilon.$$</p>
153,473
<p>How does one prove that if $X$ is a reflexive Banach space and $x^*$ is a continuous linear functional from $X$ to the underlying field, then $x^*$ attains its norm at some $x$ in $X$ with $\Vert x\Vert = 1$?</p> <p>My teacher gave us a hint that we should use the statement that if $X$ is a reflexive Banach space, the unit ball is weakly sequentially compact, but I am not sure how to construct a suitable sequence in this ball.</p> <p>Thank you.</p>
t.b.
5,363
<p>Suppose that $\varphi \neq 0$. Let $x_n$ be a sequence in the unit sphere such that $|\varphi(x_n)| \geq \lVert \varphi\| - \frac{1}{n}$ which exists by the definition of the operator norm. Choose a subsequence such that $x_{n_k} \to x$ weakly with $\|x\| \leq 1$ by weak sequential compactness of the unit ball. Deduce that $|\varphi(x)| = \|\varphi\|$. Then argue that $\|x\| = 1$.</p>
2,859,276
<p>Given a measurable space $(X,\scr{A})$ where $\scr{A}$ is a sigma algebra on $X$, I would like to know if $f:X \to \mathbb{R}$ is measurable then $cf$ where $c \in \mathbb{R}$ is also measurable.</p> <p>A function $f:X \to \mathbb{R}$ is measurable if $[f &gt; \alpha] = \{x \in X \:{:}\: f(x) &gt; \alpha \} \in \mathscr{A}$ for all $\alpha \in \mathbb{R}$.</p> <p>If $c = 0$, then $[cf &gt; \alpha] = X$ for all $\alpha &lt; 0$ and $[cf &gt; \alpha] = \emptyset$ for all $\alpha \geq 0$, so $[cf &gt; \alpha] \in \scr{A}$.</p> <p>If $c &gt; 0$, then $[cf &gt; \alpha] = [f &gt; \frac{\alpha}{c}] \in \scr{A}$.</p> <p>My question is when $c &lt; 0$, my attempt is as follows.</p> <p>Since $[cf &gt; \alpha] = [f &lt; \frac{\alpha}{c}] = \{x \in X \:{:}\: f(x) &lt; \frac{\alpha}{c} \}$ and $[f &gt; \frac{\alpha}{c}] \in \scr{A}$, the complement of ${[f &gt; \frac{\alpha}{c}]} $ which is $[f \leq \frac{\alpha}{c}] \in \scr{A}$, but as $[f &lt; \frac{\alpha}{c}] \neq [f \leq \frac{\alpha}{c}]$, I am stuck in showing $[f &lt; \frac{\alpha}{c}] \in \scr{A}$.</p> <p>Any help will be greatly appreciated. </p>
drhab
75,923
<p>In general if $(X,\mathscr A)$, $(Y,\mathscr B)$ and $(Z,\mathscr C)$ are measurable spaces and $f:X\to Y$ and $g:Y\to Z$ are both measurable then also $g\circ f:X\to Z$ is measurable.</p> <p>This because: $$(g\circ f)^{-1}(\mathscr C)=f^{-1}(g^{-1}(\mathscr C))\subseteq f^{-1}(\mathscr B)\subseteq\mathscr A$$</p> <p>The first $\subseteq$ because $g$ is measurable and the second $\subseteq$ because $f$ is measurable.</p> <p>This can be applied here for $(Y,\mathscr B)=(\mathbb R,\mathcal B)=(Z,\mathscr C)$ where $cf:X\to\mathbb R$ can be recognized as $g\circ f$ if the function $g:\mathbb R\to\mathbb R$ is prescribed by $x\mapsto cx$ where $c\in\mathbb R$. It is evident that this function $g$ is measurable.</p>
2,590,551
<p>Suppose we have $A,B$ which are $R$-submodules of the $R$-module $M$. Then there exists an exact sequence:</p> <p>$0\rightarrow \frac{M}{A\cap B}\rightarrow \frac{M}{A}\times \frac{M}{B}\rightarrow \frac{M}{A+B} \rightarrow 0$</p> <p>where $A+B=\langle A\cup B\rangle$.</p> <p>I believe that the first function should be something like $α(u)=(u,u)$ and the second one $β(u,v)=u-v$ to satisfy that ${\rm im}\;\alpha=\ker\beta$, but I can't show surjectivity of $β$. Any help will be appreciated.</p>
Dan
500,478
<p>You can use Kirchhoff's theorem. Let $D$ be the diagonal matrix whose entries are the degrees of the vertices of $G$. If you delete any row of the Laplacian matrix $D-A$ and its corresponding column, then the determinant of the resulting minor is the number of spanning trees of the graph.</p>
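A quick numerical illustration of the theorem (using the standard convention $L = D - A$ for the Laplacian; the graphs below are generic examples rather than the one in the question):

```python
import numpy as np

def spanning_tree_count(adj):
    """Kirchhoff's theorem: delete one row and the matching column of the
    Laplacian L = D - A, then take the determinant of the minor."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    minor = L[1:, 1:]  # delete row 0 and column 0
    return round(np.linalg.det(minor))

# Complete graph K4: Cayley's formula predicts 4^{4-2} = 16 spanning trees.
K4 = [[0, 1, 1, 1],
      [1, 0, 1, 1],
      [1, 1, 0, 1],
      [1, 1, 1, 0]]
```

A path on three vertices, which is its own unique spanning tree, gives a count of 1 with the same function.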
2,590,551
<p>Suppose we have $A,B$ which are $R$-submodules of the $R$-module M. Then , there exists an exact sequence :</p> <p>$0\rightarrow \frac{M}{A\cap B}\rightarrow \frac{M}{A}\times \frac{M}{B}\rightarrow \frac{M}{A+B} \rightarrow 0$</p> <p>Where $A+B=\langle A\cup B\rangle$</p> <p>I believe that that the first function should be something like $α(u)=(u,u)$ and the second one $β(u,v)=u-v$ to satisfy that ${\rm im}\;\alpha=\ker\beta$ but I can't show surjectivity of $β$. Any help will be appreciated.</p>
Rebecca J. Stones
91,818
<p>We throw away the loop, since spanning trees don't have loops. Also, since all edges have weight 1, a "minimum spanning tree" is just a "spanning tree". An $n$-vertex spanning tree has $n-1$ edges; in this case that's $4$ edges. There are $\binom{7}{4}=35$ subgraphs with $4$ edges (ignoring the loop), which I drew using a script below.</p> <p>Now we count the ones without cycles, which are necessarily spanning trees: 21.</p> <p><a href="https://i.stack.imgur.com/ISq0L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ISq0L.png" alt="$4$-edge subgraphs (without the loop)"></a></p>
115,477
<p>Let us say that a finite set $A$ in the plane is $1$-separated if:</p> <p><strong>1)</strong> it has an even number of points;</p> <p><strong>2)</strong> no open ball of diameter $1$ contains more than $|A|/2$ points.</p> <p>For a $1$-separated set $A$ define $G(A)$ to be a graph where two points $x,y$ in $A$ are joined by an edge iff the distance between them is at least $1$.</p> <blockquote> <p><em>Question</em>: can one find a finite set of graphs $G _ 1,\dots,G _ n$ such that any $1$-separated set $A $ can be partitioned into non-empty $1$-separated sets $A _ 1,\dots,A _ k$ such that $G(A _ i)$ is isomorphic to one of the $G _ j$'s?</p> </blockquote> <p><em>Comment</em>: The definition makes sense on the real line (the ball of diameter $1$ is replaced by an interval of length $1$). In that case we can take $n=1$ and $G_1$ to be a graph on two vertices joined by an edge (that is, $G(A)$ contains a matching). </p>
domotorp
955
<p>No, there is a counterexample. Take a circle whose diameter is slightly larger than 1, put $|A|-1$ points evenly around its boundary, and put the last point at its center. This set will be 1-separated, but no matter how you partition it, the part containing the center will not be 1-separated.</p>
45,800
<p>Let's say I want to calculate the following scalar-by-matrix derivative</p> <p>$$\frac{\partial}{\partial A} \text{tr} \left[(\vec X^T A)^T (\vec X^T A)\right],$$</p> <p>with $\vec X$ and $A$ being a $n \times 1$ and a $n \times m$ matrix, respectively. Is there a way in Mathematica to get the result</p> <p>$$2 \vec X (\vec X^T A)$$</p> <p>without explicitly defining (for instance)</p> <pre><code>n=3 m=2 A=Array[a,{n,m}] X=Array[x,{n,1}] </code></pre> <p>and calculating</p> <pre><code>D[Tr[Transpose[Transpose[X].A].Transpose[X].A],{A}] </code></pre> <p>? The problem with this approach is that the Mathematica result cannot be easily cast back into an human-readable form like</p> <p>$$2 \vec X (\vec X^T A).$$</p>
swish
2,490
<p>Your expression simplifies to this $$\vec X \vec X^T A + A^T \vec X \vec X^T$$ using just these rules</p> <pre><code>Unprotect[D, Transpose, Dot]; (*Derivative rules*) D[Tr[A_], X_] := Tr[D[A, X]] D[Transpose[A_], X_] := D[A, X]\[Transpose] D[A_ .B_, X_] := D[A, X].B + A.D[B, X] (*Tranpose rules*) 0\[Transpose] := 0 1\[Transpose] := 1 (A_\[Transpose])\[Transpose] := A (A_ .B_)\[Transpose] := B\[Transpose].A\[Transpose] (*Dot rules*) Dot[_, 0] := 0 Dot[0, _] := 0 Dot[1, A_] := A Dot[A_, 1] := A Protect[D, Transpose, Dot]; </code></pre> <p>If it's correct then I'm quite sure these two parts are different (transpose of one another), so you can't add them to get coefficient <code>2</code>.</p>
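Independently of the Mathematica rules, the human-readable form $2 \vec X (\vec X^T A)$ can be cross-checked against a brute-force finite-difference gradient; here is a sketch in Python/NumPy rather than Mathematica (assuming the usual symmetric factor-of-2 convention for $\partial/\partial A$):

```python
import numpy as np

def f(A, X):
    # f(A) = tr((X^T A)^T (X^T A)) = ||X^T A||_F^2
    return np.sum((X.T @ A) ** 2)

def grad_closed_form(A, X):
    # Claimed human-readable gradient: 2 X (X^T A)
    return 2.0 * X @ (X.T @ A)

def grad_numeric(A, X, h=1e-6):
    """Central-difference gradient, entry by entry."""
    G = np.zeros_like(A)
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            E = np.zeros_like(A)
            E[i, j] = h
            G[i, j] = (f(A + E, X) - f(A - E, X)) / (2 * h)
    return G

rng = np.random.default_rng(0)
n, m = 3, 2
X = rng.standard_normal((n, 1))
A = rng.standard_normal((n, m))
```

Because $f$ is quadratic in $A$, the central difference is essentially exact, so the two gradients agree to rounding error.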
2,384,444
<p>Determine the equation of the line whose portion intercepted by the axes is divided by the point $(-5,4)$ in the ratio $1:2$.</p> <p>My attempt: Let the equation of the straight line be $$ax+by+c=0$$ Since it passes through the point $(-5,4)$, $$-5a+4b+c=0$$</p>
velut luna
139,981
<p>The particular solution is $$y=\frac{1}{2}x^2 e^{-\sin x}$$</p> <p>Then solve for the homogeneous solution.</p> <p>Can you take it from here?</p>
1,300,391
<p>For example, if I wanted to get two of a kind, I would need seven dice. This is because even if the first six were 1, 2, 3, 4, 5, 6, the next one would have to make a pair out of the previous dice.</p> <p>So is there a formula for getting n of a kind 100% of the time?</p>
Mark Joshi
106,024
<p>13 for three of a kind: $2 \cdot 6 + 1$; in general, $6(n-1) + 1$.</p> <p>This is because with $6(n-1)$ dice the only way you can fail to have $n$ of a kind is to have exactly $n-1$ copies of each number from 1 to 6.</p>
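The pigeonhole bound $6(n-1)+1$ is easy to sanity-check by machine; the helper names below are illustrative:

```python
from collections import Counter
from itertools import product

def max_dice_without_n_kind(n):
    """Size of the largest multiset of die faces with no face repeated n
    times: n - 1 copies of each of the six faces."""
    worst = [face for face in range(1, 7) for _ in range(n - 1)]
    assert max(Counter(worst).values()) == n - 1  # no face reaches n copies
    return len(worst)

def dice_needed(n):
    """One die more than the worst case forces some face to reach n copies."""
    return 6 * (n - 1) + 1
```

For $n=2$ one can even verify exhaustively that every roll of $7$ dice contains a pair.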
319,288
<p>Let <span class="math-container">$\mathcal{M}$</span> be the field of meromorphic functions of one (complex) variable and <span class="math-container">$w = w(z)$</span> an analytic function satisfying a polynomial equation</p> <p><span class="math-container">$P(w; z) := w^n + a_{n-1}(z) w^{n-1} + \cdots + a_1(z) w + a_0(z) = 0$</span>,</p> <p>where <span class="math-container">$a_0(z), \ldots, a_{n-1}(z)$</span> are in <span class="math-container">$\mathcal{M}$</span> (actually, it suffices to consider the case where <span class="math-container">$a_j(z)$</span> is entire for <span class="math-container">$j = 0, \ldots, n-1$</span>).</p> <p>Suppose <span class="math-container">$w(z)$</span> has finitely many branch points.</p> <p>In the (hyperelliptic) case <span class="math-container">$n=2$</span> it is clear that <span class="math-container">$\mathcal{M}(w) = \mathcal{M}(\sqrt{Q})$</span>, where <span class="math-container">$Q(z)$</span> is a polynomial.</p> <p>Is it always true that, under the above assumptions, we have <span class="math-container">$\mathcal{M}(w) = \mathcal{M}(\beta)$</span>, where <span class="math-container">$\beta(z)$</span> is some algebraic function?</p>
reuns
84,768
<p>This is not a complete answer but I think this might help you :</p> <p>If <span class="math-container">$w(z)$</span> is meromorphic on a disk <span class="math-container">$\{|z-z_0| &lt; r\}\subset \mathbb{C}$</span> then it is algebraic (over <span class="math-container">$\mathbb{C}(z)$</span>) iff </p> <p><span class="math-container">$(1)$</span> there are finitely many points such that for <span class="math-container">$S = \mathbb{C} \setminus \{p_1,\ldots,p_n\}$</span> and every curve <span class="math-container">$\gamma : z_0 \to z_1\in S$</span> the analytic continuation of <span class="math-container">$w(z)$</span> along <span class="math-container">$\gamma$</span> is well-defined, call <span class="math-container">$w^\gamma(z)$</span> the resulting function analytic around <span class="math-container">$z_1$</span>, </p> <p><span class="math-container">$(2)$</span> with <span class="math-container">$ \pi_1(S)$</span> the homotopy group then <span class="math-container">$\{ w^\gamma |\gamma \in \pi_1(S)\}$</span> is a finite set, </p> <p><span class="math-container">$(3)$</span> there is some <span class="math-container">$A$</span> such that each <span class="math-container">$w^\gamma(z) (z-p_k)^A$</span> is bounded around <span class="math-container">$p_k$</span> and <span class="math-container">$w^\gamma(z)z^{-A}$</span> is bounded around <span class="math-container">$\infty$</span>.</p> <p>In that case <span class="math-container">$G =\pi_1(S) / \{ \gamma \in S, w^\gamma = w\}$</span> is indeed the Galois group of <span class="math-container">$\mathbb{C}(z, \{ w^\gamma(z)\}_{\gamma\in G})/ \mathbb{C}(z)$</span>. 
</p> <p>Proof : look at the coefficients of <span class="math-container">$\prod_{\gamma \in G} (w^\gamma(z)-t)$</span>, since they are fixed by <span class="math-container">$f(z) \mapsto f^\lambda(z), \lambda \in \pi_1(S)$</span> and they are locally analytic then they are globally analytic on <span class="math-container">$S$</span>, and the condition <span class="math-container">$(3)$</span> shows they are meromorphic on the Riemann sphere, thus they are in <span class="math-container">$\mathbb{C}(z)$</span>.</p> <p>Without <span class="math-container">$(3)$</span> then <span class="math-container">$G$</span> is still the Galois group of <span class="math-container">$M_S( \{ w^\gamma(z)\}_{\gamma\in G})/ M_S$</span> where <span class="math-container">$M_S$</span> is the field of the meromorphic functions on <span class="math-container">$S$</span>.</p>
2,912,043
<p>How do I approach this problem? My book gives the answer as $[\frac{3-\sqrt{5}}{2},\frac{3+\sqrt{5}}{2}]$. I tried forming an equation in $y$ and setting the discriminant greater than or equal to zero, but it didn't work. Would someone please help me?</p> <p>I get $x^2 (y-1) + 2x (y+1) + (5y-5) =0$, and the discriminant gives $2y^2 - y + 2 \leq 0$, which has complex roots.</p>
Can Yaylali
524,466
<p>The values of the sine function from $\pi$ to $2\pi$ are the same as minus the sine function from $0$ to $\pi$. Or, to be a bit more precise, consider the substitution $t\rightarrow t-\pi$. </p> <p>Now what is $\sin(t-\pi)$?</p>
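The identity behind the hint, $\sin(t-\pi) = -\sin t$, can be confirmed numerically on a grid of sample points:

```python
import math

# Largest deviation of sin(t - pi) from -sin(t) over a grid of points.
pts = [0.07 * k for k in range(200)]
max_err = max(abs(math.sin(t - math.pi) + math.sin(t)) for t in pts)
```

The deviation is at the level of floating-point rounding, as the angle-addition formula $\sin(t-\pi)=\sin t\cos\pi-\cos t\sin\pi=-\sin t$ predicts.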
2,819,652
<p>I know that my function $f$ must satisfy the following condition for $x \geq 0:$ $$\frac{d\left(xf(x)-x\right)}{dx} \geq 0.$$ What can I say about $f?$ I am curious about its possible sign or variations with respect to $x$.</p> <p>I have investigated Grönwall's inequality to find an upper bound for $f$ but it seems useless in my case.</p>
mvw
86,776
<p>We go to polar coordinates, keep track of the cyclic nature of the complex exponential function and pull the root there: \begin{align} z^3 &amp;= 2 \iff \\ (r e^{i\phi})^3 &amp;= 2 \, e^{i\cdot (0 + 2\pi k)} \iff \\ r e^{i \phi} &amp;= \sqrt[3]{2} \, e^{i \cdot 2\pi k / 3} \end{align} Here $k \in \mathbb{Z}$. </p> <p>We compare and get radius and three different angles (up to $2\pi$ cycles): $$ r = \sqrt[3]{2} \\ \phi \in \{ 0, 2\pi/3, 4\pi/3 \} $$ This gives $$ z \in \{ \sqrt[3]{2}, \sqrt[3]{2} \, e^{i \cdot 2\pi/3}, \sqrt[3]{2} \, e^{i \cdot 4\pi/ 3} \} $$</p>
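The three roots can be checked numerically; a short Python sketch using the polar form above:

```python
import cmath

r = 2 ** (1 / 3)                          # common modulus, cube root of 2
angles = [0.0, 2 * cmath.pi / 3, 4 * cmath.pi / 3]
roots = [r * cmath.exp(1j * phi) for phi in angles]
residuals = [abs(z ** 3 - 2) for z in roots]  # each should vanish
```

The first root is the real cube root of 2; the other two are its rotations by $2\pi/3$.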
290,507
<p>Every metric space is associated with a topological space in a canonical way. According to <a href="http://topospaces.subwiki.org/wiki/Induced_topology_from_metric_is_functorial" rel="nofollow">this</a> source, this amounts to a <a href="http://en.wikipedia.org/wiki/Full_and_faithful_functors" rel="nofollow">full functor</a> from the category of metric spaces with continuous maps to the category of topological spaces with continuous maps.</p> <p>Is it possible that there exists another way of obtaining a topological space from a metric space that is equally deserving of the label "canonical"? Perhaps something that no one has thought of yet? To say it in a different way, is there a sense in which the aforementioned functor is unique? Let's assume that the morphisms between metric spaces are precisely the continuous maps, although an answer that considers other morphisms between metric spaces is welcome.</p> <p>Now obviously this is a soft question, as I have neglected to specify what it means for a map to be deserving of the term "canonical." For this reason, let me motivate the question a little.</p> <p>At some point in an introductory work on analysis, the author will define the meaning of the expression "the topological space associated with (or induced by) a metric space." I'd like to know if this definition is in some sense "the unique correct definition," or whether it is only "one of many possible."</p> <hr> <p>EDIT: Let's put this another way. Obviously there are many functions $\mathsf{Met} \rightarrow \mathsf{Top}$, but most of them are pretty boring. So we can restrict ourselves to functors, where $\mathsf{Met}$ and $\mathsf{Top}$ are viewed as categories (technically we would also need to specify what the morphisms of $\mathsf{Met}$ should be.) Anyway, as Martin points out, we're still going to be left with lots of "boring" functors. So I guess the question is, how do we get rid of all the "boring" ones? 
And once we do, is the canonical functor the only one that's left? Obviously I haven't defined "boring" so this is a very soft question.</p> <p>Magma suggests the following refinement of the question: does the canonical functor satisfy a suitable universal mapping property?</p> <hr> <p>Here's another angle. Suppose we run into an alien species, which studies topological spaces (and calls them topotopos, and what we would call "an open set of a topological space" they call "an openopen of a topotopo"). They also study metric spaces (and call them metrometros.) We send that species a message asking them about "the openopens of a metrometro." Will their notion of open set of a metric space coincide with our notion? And if so, why?</p>
Berci
41,488
<p>Partial answer for your 'alien' question.</p> <p><em>If</em> we assume that aliens also think about open sets as the complements of <em>closed</em> sets, <em>and</em> if we also agree with them that being closed means being closed under <em>limits</em> of sequences, and that $\lim(x_n)=x$ in metric space is defined as $\lim d(x_n,x)=0$, <em>and</em> they also use the same real numbers, <em>then</em> we will also agree on what open sets are.</p>
4,502,175
<p>I want to create a simple demo of moving an eye (black circle) inside a bigger circle with a black stroke when moving a mouse. I have cursor positions mouseX and mouseY on a canvas and I need to map the value of mouse position into a circle so the eye is moving inside the circle.</p> <p>This should be trivial but I have no idea how to solve this problem.</p> <p>This is a coding problem but I think that I will get the best results from this Q&amp;A. If not I will ask on stack overflow.</p> <p>This is the code that shows the problem.</p> <p><a href="https://editor.p5js.org/jcubic/sketches/E2hVGceN9" rel="nofollow noreferrer">https://editor.p5js.org/jcubic/sketches/E2hVGceN9</a></p> <p>If you use map function in P5JS library (that is linear map from one range to a different range) I get the black circle to move in a square with a side equal to the diameter of the circle. So the black circle is outside.</p> <p>I'm not sure what should I use to calculate the position of the black circle so it's always inside the bigger circle.</p> <p><a href="https://i.stack.imgur.com/u8V9U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u8V9U.png" alt="enter image description here" /></a></p>
CyclotomicField
464,974
<p>Call <span class="math-container">$p$</span> the position of the cursor and <span class="math-container">$c$</span> the center of the large circle. First, we move the origin to be the center of the circle by considering the vector <span class="math-container">$v=p-c$</span> so that <span class="math-container">$v$</span> is pointing from the center of circle in the direction of <span class="math-container">$p$</span>. Now we normalize <span class="math-container">$v$</span> to <span class="math-container">$u= \frac{v}{\vert v \vert}$</span> which is the unit vector in the direction of <span class="math-container">$v$</span>. Say the radius of the large circle is <span class="math-container">$R$</span> and the radius of the black dot is <span class="math-container">$r$</span> This means the boundary point on the big circle will be <span class="math-container">$Ru$</span> and from there we can see shortening the vector by <span class="math-container">$r$</span> will give us the center point of the black dot, which is to say <span class="math-container">$(R-r)u$</span>. Finally, we need to move the origin back by adding <span class="math-container">$c$</span> to get the final coordinates since we subtracted it off initially which gives us <span class="math-container">$(R-r)u+c$</span>. This is just the naïve geometric method translated into vectors.</p>
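The vector construction above translates almost line by line into code. This framework-agnostic Python sketch does not assume any p5.js names (`mouseX`, `map`, etc.); in a sketch you would call it with the cursor position each frame:

```python
import math

def eye_center(p, c, R, r):
    """Center of the pupil for cursor p, eye center c, eye radius R,
    pupil radius r: c + (R - r) * unit(p - c).

    This mirrors the construction above (pupil pinned to the inner rim);
    clamping the distance by min(|p - c|, R - r) would also let the pupil
    float inside when the cursor is within the eye.
    """
    vx, vy = p[0] - c[0], p[1] - c[1]
    d = math.hypot(vx, vy)
    if d == 0:                  # cursor exactly at the center
        return (c[0], c[1])
    ux, uy = vx / d, vy / d     # unit vector toward the cursor
    return (c[0] + (R - r) * ux, c[1] + (R - r) * uy)
```

For a cursor far to the right of the eye, the pupil center lands a distance $R-r$ along the positive $x$-axis from $c$, so the pupil's edge just touches the big circle.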
4,651,215
<p>Three Cowboys have a shoot out. A shoots with 1/3 accuracy, B 1/2, C one (never misses). A goes first. The cowboys attempt to shoot the cowboy with the best shooting ability. What’s the probability that A is still alive?</p> <p>Edit: We assume that after A, B goes first and then C</p> <p>My approach:</p> <p>Since it is given that they attempt to shoot the cowboy with the best shooting ability, both A &amp; B would go for C.</p> <p>1st case: C is knocked out and A &amp; B remain in a duel</p> <p>P(A win vs B) = 0.3 + (0.7 * 0.5 * 0.3) + (0.7 * 0.5 * 0.7 * 0.5 * 0.3) + .... infinite geometric progression</p> <p>Thus P(A win vs B) = 0.3/(1 - 0.35) = 0.46</p> <p>2nd case: C shoots B (after both A &amp; B miss against C) and A &amp; C remain in a duel</p> <p>P(A win vs C) = 0.3</p> <p>Now 1st case happens with Probability 2/3 (1/3 chance both A &amp; B miss C)</p> <p>2nd case happens with Probability 1/3</p> <p>Thus P(A wins) = (2/3 * 0.46) + (1/3 * 1/3) = 0.42</p> <p>Does the logic sound right or I have calculated it incorrectly?</p>
Robert Shore
640,080
<p>Here's another way to look at it. If <span class="math-container">$a$</span> is even, then <span class="math-container">$a=2k, k \in \Bbb Z$</span> so <span class="math-container">$a^2=4k^2 \equiv 0 \pmod 4$</span>.</p> <p>If <span class="math-container">$a$</span> is odd, then <span class="math-container">$a=2k+1, k \in \Bbb Z$</span>, so <span class="math-container">$a^2=4k^2+4k+1 = 4(k^2+k)+1 \equiv 1 \pmod 4$</span>.</p>
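Both congruences can be checked exhaustively over a range of integers:

```python
# a^2 mod 4 only ever takes the values 0 (a even) and 1 (a odd).
square_residues = sorted({(a * a) % 4 for a in range(-100, 101)})
even_case = all((a * a) % 4 == 0 for a in range(-100, 101, 2))
odd_case = all((a * a) % 4 == 1 for a in range(-99, 100, 2))
```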
128,009
<p>Hi, is there any known sequence such that the sum of a combination of one subsequence never equals another subsequence's sum? The subsequences should have elements only from the parent sequence.</p> <p>Thanks, Sundi</p>
Marc Palm
10,400
<p>Rather than only considering invariant PSD operators, you might want to consider all $G$-invariant operators, i.e., $G$-intertwiners. I describe their functional calculus below. Their functional calculus can be realized/studied via convolution products and representation theory. What I describe is in the realm of the first answer by Pedro Lauridsen Ribeiro. I let you decide whether this classifies as "algebraic", but it is certainly of operator-algebraic/representation-theoretic flavour. I claim every other $G$-invariant operator will have an equivalent functional calculus.</p> <p>Here is an example. Assume $H$ is compact. We identify $L^2(G/H)$ with the induced representation $\pi = Ind_{H}^{G} 1$ or the $H$-invariant vectors in $L^2(G)$ and then use the convolution operators for $\phi \in C_c^\infty(G//H)$: $$T_\phi f(g) = \int\limits_{G} \phi(x) f(xg) d x.$$ E.g. for $G =SL_2(\mathbb{R})$ and $H=SO(2)$, the algebra $C_c^\infty(G//H)$ is commutative by the Gelfand trick (this is not so essential) and the trace $T_\phi$ is an integral of the Harish-Chandra/Selberg transform of $\phi$ over the spectrum of the hyperbolic Laplacian or, alternatively, an integral $$ \int\limits tr\; \pi(\phi) d_{Pl} \pi$$ over the irreducible unitary (tempered) reps $\pi$ of $G$ with $H$-invariant vectors (only principal series representations here). The measure $d_{Pl}$ is the Plancherel measure.</p> <p>If $H$ is not compact, you can still do something similar working with $C_c^\infty(G)$ and obtain a similar analysis. E.g. take the two important situations when $H$ is a lattice or a parabolic subgroup in a reductive Lie group. The advantage: this generalizes to locally compact groups. E.g. on reductive groups over non-archimedean fields, there are no differential operators in any obvious way, but this gives you the Hecke operators. These ideas are also crucial in the context of the Selberg trace formula.</p>
1,902,939
<p>I was going through the problem attached herewith. The fact is, I could not understand two things.</p> <p>Firstly, how did the author come up with the idea of choosing the event sets the way he has described? How is this intuitive?</p> <p>Secondly, I tried to find $P(A_1)$ directly, and I came up with the following way.</p> <p>$P(A_1)=(C(4,2)\cdot 2!\cdot(14!/(4!^2\cdot3!^2)))/(16!/(4!^4))=12/15$.</p> <p>I cannot intuitively follow the procedure given here. </p> <p>Any help?</p> <p><a href="https://i.stack.imgur.com/3Rj7a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Rj7a.png" alt="enter image description here"></a> </p>
Community
-1
<p><strong><em>Edited</em></strong> to make the argument a bit more efficient.</p> <p>I think this is more or less what the first hint is suggesting:</p> <p>$aHbH = cH$, so $ab = aebe \in aHbH = cH$, where $e$ is the identity element. Multiplying on the left by $a^{-1}$ gives us $b \in a^{-1}cH$. Thus the left coset containing $b$ is $a^{-1}cH$, and so $bH = a^{-1}cH$.</p> <p>Now, $aHb = aHbe \subseteq aHbH = cH$, so $Hb \subseteq a^{-1}cH = bH$ (the last equality was proved in the previous paragraph).</p> <p>We have established that $Hb \subseteq bH$. This holds for any $b$, so in particular it holds for $b^{-1}$, and so $Hb^{-1} \subseteq b^{-1}H$. Multiplying on the left and right by $b$ gives us $bH \subseteq Hb$. </p> <p>We have shown both containments, so $bH = Hb$, hence $H$ is normal.</p>
2,796,711
<p>I have the following question in hand. </p> <p>If $\lambda_1,\cdots,\lambda_n$ are the eigenvalues of a given matrix $A \in M_n$, then prove that the matrix equation $AB - BA = \lambda B$ has a nontrivial solution $B \neq 0 \in M_n$, if and only if $\lambda = \lambda_i - \lambda_j$ for some $i,j$.</p>
user550103
550,103
<p>Here is my attempt. Does this make sense to you experts?</p> <p>If we vectorize such that \begin{align} AB - BA &amp;= \lambda B \\ &amp;\Downarrow \\ \mbox{vec}\left(AB - BA \right) &amp;= \mbox{vec}(\lambda B) \\ \mbox{vec}\left(ABI - IBA \right) &amp;= \mbox{vec}(\lambda B) \\ \left(\left(I \otimes A\right) - \left(A^{\rm T} \otimes I\right)\right)\mbox{vec}(B) &amp;= \lambda \mbox{vec}(B) \\ \end{align}</p> <p>So, according to <a href="http://www.siam.org/books/textbooks/OT91sample.pdf" rel="nofollow noreferrer">Theorem 13.16</a>, the eigenvalues of the Kronecker sum $\left(\left(I \otimes A\right) - \left(A^{\rm T} \otimes I\right)\right)$ would be $\lambda_i - \lambda_j$. Hence, the solution should be non-trivial if and only if $\lambda = \lambda_i - \lambda_j$. </p>
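The eigenvalue claim for the Kronecker sum is easy to verify numerically; a small NumPy sketch with an arbitrary $2\times 2$ matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
n = A.shape[0]
# The vectorized operator (I ⊗ A) - (A^T ⊗ I) acting on vec(B)
M = np.kron(np.eye(n), A) - np.kron(A.T, np.eye(n))

lam = np.linalg.eigvals(A)
# All pairwise differences lambda_i - lambda_j ...
diffs = sorted(float((li - lj).real) for li in lam for lj in lam)
# ... should match the spectrum of M
eigs_M = sorted(float(x) for x in np.linalg.eigvals(M).real)
```

The two sorted lists agree to machine precision, so $\lambda = \lambda_i - \lambda_j$ is exactly the condition for a nontrivial $\mbox{vec}(B)$ to exist.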
3,293,233
<p>My robot has a laser time-of-flight distance sensor. Holding the distance and target constant: When I take single set of readings, three standard deviations usually is 3% to 6% of the mean.</p> <p>If I take multiple sets of readings, each set will show three standard deviations to be 3-6% of the mean, BUT, averaging the averages from each set will show much lower standard deviation, such that three standard deviations will be between 0.5 to 2% of the mean.</p> <p>Am I getting a more accurate reading by the average of averages of subsets, than the average of a whole set? </p> <p>(Or am I just seeing the benefit of an average over individual readings?)</p> <p>17:30:52 Readings : 20<br> Reading Delay : 0.010 s<br> Average Reading: 1112 mm<br> Minimum Reading: 1075 mm<br> Maximum Reading: 1156 mm<br> Std Dev Reading: 19 mm<br> Three SD readings vs ave reading: 5.1 %<br> Adjusted For Error Average Distance: 1099 mm </p> <p>17:30:55<br> Readings : 20<br> Reading Delay : 0.010 s<br> Average Reading: 1110 mm<br> Minimum Reading: 1078 mm<br> Maximum Reading: 1140 mm<br> Std Dev Reading: 17 mm<br> Three SD readings vs ave reading: 4.6 %<br> Adjusted For Error Average Distance: 1097 mm </p> <p>17:30:59<br> Readings : 20<br> Reading Delay : 0.010 s<br> Average Reading: 1112 mm<br> Minimum Reading: 1057 mm<br> Maximum Reading: 1157 mm<br> Std Dev Reading: 20 mm<br> Three SD readings vs ave reading: 5.5 %<br> Adjusted For Error Average Distance: 1099 mm </p> <p>17:31:03<br> Readings : 20<br> Reading Delay : 0.010 s<br> Average Reading: 1115 mm<br> Minimum Reading: 1090 mm<br> Maximum Reading: 1146 mm<br> Std Dev Reading: 14 mm<br> Three SD readings vs ave reading: 3.8 %<br> Adjusted For Error Average Distance: 1101 mm </p> <p>17:31:07<br> Readings : 20<br> Reading Delay : 0.010 s<br> Average Reading: 1118 mm<br> Minimum Reading: 1060 mm<br> Maximum Reading: 1157 mm<br> Std Dev Reading: 21 mm<br> Three SD readings vs ave reading: 5.6 %<br> Adjusted For Error Average Distance: 
1104 mm </p> <p>Average Average: 1113 mm<br> Minimum Average: 1110 mm<br> Maximum Average: 1118 mm<br> Std Dev Average: 3 mm<br> <strong>Three SD averages vs ave reading: 0.7 %</strong><br> Ave all Readings: 1112 mm<br> SDev all Reading: 16 mm<br> <strong>Three SD all vs ave all readings: 5.0 %</strong> </p> <hr>
Empy2
81,790
<p>There is natural fluctuation in the measurements, and the standard deviation <span class="math-container">$(\sigma)$</span> measures the size of those fluctuations. </p> <p>The averages fluctuate less: their standard deviation is <span class="math-container">$\sigma/\sqrt n$</span>. That is the point of taking a mean. So you can divide those <span class="math-container">$\sigma$</span>, which are about 16, by <span class="math-container">$\sqrt{20}$</span>, to say how precise each mean is. The precision is 3 or 4 mm.</p> <p>In the final answer, you have 100 measurements, so your final average is accurate to <span class="math-container">$16/\sqrt{100}\approx1.6$</span> mm. On the other hand, you have five measured averages, each accurate to 3 mm, so the overall precision is <span class="math-container">$3/\sqrt5\approx1.34$</span> mm. I think the difference between 1.6 and 1.34 is roundoff error. </p> <p>Three standard deviations in each of the five estimates would be <span class="math-container">$3×16/\sqrt{20}\approx11mm$</span>. Three standard deviations in the overall average would be <span class="math-container">$3×1.6\approx5mm$</span>.</p>
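The <span class="math-container">$\sigma/\sqrt n$</span> scaling is easy to check with a quick simulation. A minimal sketch in Python, assuming the sensor readings behave like independent Gaussian noise (the 1112 mm mean and 16 mm spread are taken from the logs above; the noise model itself is an assumption):

```python
import random
import statistics

random.seed(0)

def take_readings(n, mu=1112.0, sigma=16.0):
    # Model each time-of-flight reading as Gaussian noise around the
    # true distance (mu and sigma taken from the logs above).
    return [random.gauss(mu, sigma) for _ in range(n)]

# Many sets of 20 readings, like the 20-reading bursts in the logs.
sets = [take_readings(20) for _ in range(200)]
set_means = [statistics.mean(s) for s in sets]

# Spread of individual readings within one set: about sigma (~16 mm).
sd_within = statistics.stdev(sets[0])

# Spread of the per-set means: about sigma / sqrt(20) (~3.6 mm).
sd_of_means = statistics.stdev(set_means)
```

The set means cluster roughly <span class="math-container">$\sqrt{20}\approx4.5$</span> times tighter than the raw readings, which is exactly the 5%-vs-0.7% effect in the logs: the benefit of averaging, not a special property of the subsets.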
3,898,904
<p>I'm currently making a start on group theory and have hit a roadblock with a relatively basic theorem on finite cyclic groups. The specific relation killing me is: <span class="math-container">$$\mathbb{Z}_n \times \mathbb{Z}_m \cong \mathbb{Z}_{mn} \Leftrightarrow \text{gcd}(m,n) = 1$$</span> So, the most straightforward result I see there is <span class="math-container">$$\mathbb{Z}_n \times \mathbb{Z} \cong \mathbb{Z}_n$$</span> For some reason this doesn't sit right with me. Why should a cyclic group be unchanged (up to isomorphism) by a direct product with <span class="math-container">$\Bbb Z$</span>?</p> <p>Does anybody have a nice example to ease my mind?</p> <p>Thanks!</p>
Parcly Taxel
357,390
<p>The given biconditional, taken as a whole, extends to the infinite cyclic group <span class="math-container">$\mathbb Z$</span> when the latter is treated as <span class="math-container">$\mathbb Z_0$</span>. Let <span class="math-container">$m=0$</span>: <span class="math-container">$$\mathbb Z_n×\mathbb Z\cong\mathbb Z\iff\gcd(n,0)=1\iff n=1$$</span> So your conclusion is false except if <span class="math-container">$n=1$</span>, in which case it's trivial.</p>
3,898,904
<p>I'm currently making a start on group theory and have hit a roadblock with a relatively basic theorem on finite cyclic groups. The specific relation killing me is: <span class="math-container">$$\mathbb{Z}_n \times \mathbb{Z}_m \cong \mathbb{Z}_{mn} \Leftrightarrow \text{gcd}(m,n) = 1$$</span> So, the most straightforward result I see there is <span class="math-container">$$\mathbb{Z}_n \times \mathbb{Z} \cong \mathbb{Z}_n$$</span> For some reason this doesn't sit right with me. Why should a cyclic group be unchanged (up to isomorphism) by a direct product with <span class="math-container">$\Bbb Z$</span>?</p> <p>Does anybody have a nice example to ease my mind?</p> <p>Thanks!</p>
Community
-1
<p><span class="math-container">$\Bbb Z\times\Bbb Z_n\not\cong\Bbb Z_n$</span>, since the former has a nontrivial free part, whereas the latter is pure torsion.</p>
2,985,962
<p>I was asking my friends a riddle about identifying hats. Each person has to correctly identify the colour of their own hat, which was put on their head randomly. There is no defined number of either colour, so they could be all white, all black, or any combination in between.</p> <p>They gave an answer that gets 50% right, but I fired back that getting 50% right is what you would expect, on average, for straight guesses. They claimed that that would depend on the colours of the hats that are on the people's heads. In other words, if everyone was wearing black then the 50% rule would not apply. This just doesn't "feel" right to me.</p> <p>Who is correct?</p> <p>Edit:</p> <p>This is the puzzle I asked. You have 100 people standing one behind the other such that the last person can see all the people in front of him/her and so on. So the last one sees 99, the next sees 98, etc. They each have a hat put on their head which is black or white. They have no idea how many of each exist.</p> <p>Assuming they plan a strategy in advance, how many can get their hat right? They said that the best way is for the back person to say the colour of the hat on person 99. Person 99 can then say his own colour. Then 98 will say the colour of the one in front, etc. This way I am guaranteed at least 50 right, and maybe more if two consecutive people have the same colour. My claim was that 50% guaranteed is the same as random (ignoring the extra lucky ones if there are consecutive hats). Their counter-claim was that the 50% random guess would only be right if there were exactly 50 of each colour.</p>
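On the first dispute (does straight guessing average 50% even when all hats are the same colour?), a quick simulation settles it. A sketch in Python, assuming each player guesses by an independent fair coin flip:

```python
import random

random.seed(1)

def fraction_correct(hats, trials=20000):
    # Every player guesses uniformly at random, independent of the hats;
    # return the average fraction of correct guesses over many trials.
    n = len(hats)
    total = 0
    for _ in range(trials):
        total += sum(random.choice("BW") == h for h in hats)
    return total / (trials * n)

all_black = fraction_correct("B" * 100)   # everyone wears black
balanced = fraction_correct("BW" * 50)    # exactly 50 of each colour
```

Both come out at about 0.5: a fair-coin guess is right with probability 1/2 against any fixed hat, so the actual distribution of colours is irrelevant.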
Moko19
618,171
<p>If I understand the riddle correctly, for a set of N people, there is an algorithm by which all but one person are guaranteed to get the correct answer, assuming all the players are perfect mathematicians and can arrange a strategy before being assigned hats.</p> <p>The person in back counts the number of black hats that he can see. If he sees an odd number, he says white and otherwise says black. He has no information whatsoever about his own hat, and has a 50-50 chance of what he said being the correct color hat for himself.</p> <p>The other N-1 players have a similar game, except that they know whether the total number of the N-1 players with black hats is even or odd. As such, by counting the number of black hats that the others of the N-1 players have (whether by seeing the hats of the people in front of them or from the answers of the people behind them) and determining it as either even or odd, they can deduce whether they themselves have black or white hats.</p> <p>Similarly, if there are M colors <span class="math-container">$(C_0,C_1,C_2,C_3,...,C_{M-1})$</span>, they total the value of the hats of the players in front of them by adding 0 for <span class="math-container">$C_0$</span>, 1 for <span class="math-container">$C_1$</span>, etc., and taking the remainder after dividing the total by M. As such, in a group of N players with M hat colors, the minimum number of players to answer correctly will be N-1, the maximum will be N, and the mathematical expectation of the number of correct answers is <span class="math-container">$N-1+\frac{1}{M}$</span>.</p>
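The two-colour parity strategy can be checked mechanically. A sketch in Python (encoding is an assumption: hats are 0/1, index 0 is the player in back who sees everyone else, and the back player's announced parity bit doubles as his own coin-flip guess):

```python
import random

random.seed(2)

def play(hats):
    # Run the parity strategy; return the number of correct guesses.
    n = len(hats)
    # The back player announces the parity of the black hats (1s) he sees.
    announced = sum(hats[1:]) % 2
    correct = int(announced == hats[0])  # right only by luck
    parity_left = announced              # parity not yet accounted for
    for i in range(1, n):
        seen = sum(hats[i + 1:]) % 2     # hats in front of player i
        guess = (parity_left - seen) % 2 # deduce own hat colour
        assert guess == hats[i]          # players 2..N never miss
        correct += 1
        parity_left = (parity_left - guess) % 2
    return correct

results = [play([random.randint(0, 1) for _ in range(100)])
           for _ in range(200)]
```

Every game scores 99 or 100 out of 100, matching the guaranteed N-1; the same code generalises to M colours by replacing <code>% 2</code> with <code>% M</code>.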
114,909
<p>Consider this simple code:</p> <pre><code>Grid[RandomInteger[{0, 1}, {10, 10}], Background -&gt; LightBlue] </code></pre> <p><a href="https://i.stack.imgur.com/PfWwb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PfWwb.png" alt="output"></a></p> <p>Is it possible to modify the code so that the background applies only to certain entries of the matrix, e.g. only to non-zero entries? </p>
kglr
125
<pre><code>Grid[Item[#, Background -&gt; If[# == 1, Blue]] &amp; /@ # &amp; /@ RandomInteger[{0, 1}, {10, 10}]] </code></pre> <p><img src="https://i.stack.imgur.com/iq0kd.png" alt="Mathematica graphics"></p> <pre><code>Grid[RandomInteger[{0, 1}, {10, 10}]] /. 1 -&gt; Item[1, Background -&gt; Blue] </code></pre> <p><img src="https://i.stack.imgur.com/tUnqt.png" alt="Mathematica graphics"></p> <pre><code>Grid[m = RandomInteger[{0, 1}, {10, 10}], Background -&gt; {None, None, Thread[Position[m, 1] -&gt; Blue]}] </code></pre> <p><img src="https://i.stack.imgur.com/lTrt7.png" alt="Mathematica graphics"></p>
1,913,394
<p>I need to prove that rank($\mathrm{A}$) is not continuous everywhere but is lower semi-continuous everywhere, where $\mathrm{A}\in \mathbb{C}^{n\times m}$.</p>
am70
365,982
<p>Let <span class="math-container">$A\in{\mathbb C}^{n×m}$</span>; to avoid a clash with the dimension <span class="math-container">$m$</span>, we use the symbol <span class="math-container">$\mu$</span> to denote a square minor (a selection of <span class="math-container">$k$</span> rows and <span class="math-container">$k$</span> columns). For any minor <span class="math-container">$\mu$</span> of the matrix define a function <span class="math-container">$f_\mu:{\mathbb C}^{n×m}\to {\mathbb R}$</span> in this way: if the minor is invertible then <span class="math-container">$f_\mu(A) = k$</span>, otherwise <span class="math-container">$f_\mu(A) = 0$</span>. Since invertibility means the nonvanishing of a determinant, which is a continuous function of the entries, <span class="math-container">$f_\mu$</span> is lower semicontinuous. The rank is the maximum of all <span class="math-container">$f_\mu$</span>, over all possible choices of <span class="math-container">$\mu$</span> (including all possible choices of <span class="math-container">$k$</span>), so it is lower semicontinuous, being a maximum of lower semicontinuous functions.</p>
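The asymmetry is easy to see in an example: a sequence of matrices can converge to a lower-rank limit (rank drops in the limit), but never to a higher-rank one. A small Python sketch using exact <span class="math-container">$2\times2$</span> determinants:

```python
def rank2(a, b, c, d):
    # Rank of the 2x2 matrix [[a, b], [c, d]] via its minors.
    if a == b == c == d == 0:
        return 0
    if a * d - b * c != 0:   # the unique 2x2 minor is invertible
        return 2
    return 1

# A_eps = [[1, 0], [0, eps]] -> A_0 = [[1, 0], [0, 0]] as eps -> 0:
ranks_near = [rank2(1, 0, 0, 10.0 ** -k) for k in range(1, 8)]
rank_limit = rank2(1, 0, 0, 0)
```

Here rank(A_eps) = 2 for every eps != 0 while the limit has rank 1, so rank is not continuous; lower semicontinuity says precisely that such downward jumps are the only possible discontinuities.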
1,406,462
<p>I've been studying an introductory book on set theory that uses the <a href="https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory">ZFC</a> set of axioms, and it's been really exciting to construct and define numbers and operations in terms of sets. But now I've seen that there are other alternatives to this approach to construct mathematics like <a href="https://en.wikipedia.org/wiki/New_Foundations">New Foundations</a> (NF), category theory and others. </p> <p>My question is, what was the motivation to look for an alternative for ZFC? Does ZFC have any weakneses that served as motivation? I know there are some concepts that don't exist in ZFC like the universal set or a set that belongs to itself. So I thought that maybe some branches of mathematics <em>needed</em> these concepts in order to develop them further, is that true?</p>
Asaf Karagila
622
<p>Well. That depends on whom you might ask this.</p> <ol> <li><p>Set theory might be inconsistent. In particular $\sf ZFC$ and its extension by large cardinal axioms. It's a nontrivial thing to feel safe with these theories, and it takes a lot of practice and time until you understand that $\sf ZFC$ is self-evident to some extent, and [some] large cardinal axioms are somewhat self-evident as well.</p> <p>But of course, we might be tricked. Crooks have been known to seem honest, until you're left with a big heap of nothing in your hands. That is why Nigerian royalty is going to have a very hard time emailing people around the world.</p> <p>If that happens, we need to ask ourselves where the problem lies. Is it in one of the axioms, or maybe specifically in the existence of infinite sets? Will a slightly weaker set theory (e.g. $\sf ZC$) work better, or maybe we have to resort to arithmetic theories to fix things?</p></li> <li><p>Our grasp of things is usually largely inconsistent,<sup>1</sup> but we do like to think in "types". So we work inside different categories when needed, or with some type theory or another. A lot, and I mean a lot, of people will twitch when you ask them what are the elements of $\pi$. Or whether or not $\frac13$ is a subset of $e$.</p> <p>Of course, those who have a firm understanding of this know that this is a question of implementation, and this is like asking whether or not the machine code of one implementation of an algorithm is the same as or different from another. But people don't think about it this way, although in a way they kinda do.</p> <p>Instead, people focus on their math, and they just remember (or more accurately: they don't) that you can formalize this in terms of sets inside set theory.</p></li> <li><p>$\sf ZFC$ is incomplete. That doesn't sound like a big deal. But it is, in a deep sense. We might half expect a foundational theory to be able to decide all sorts of things about the mathematical universe. 
Enough so that most, or any, questions we might have can be answered there. Again, those who study foundations long enough <em>should</em> realize this is not the goal of a foundational theory, but it is somewhat expected.</p> <p>And some people feel very awkward with this incompleteness. They will also have a problem with arithmetic based foundations, since that is an incomplete foundation also. But it's true that in $\sf ZFC$ these incompleteness phenomena are far more gaping, and as soon as you leave the term "countable" and enter the "uncountable", more or less everything becomes open and independent.</p> <p>This poses an actual problem. Imagine that half the functional analysts worked in one set theory (say $V=L$), and the other half in another (say $\sf ZFC+PFA$). It would create some strange discrepancies which would eventually tear the field apart. And the same goes for everything else, except set theory, whose main occupation is these different axioms.</p></li> </ol> <p>But that's just counting three strikes against $\sf ZFC$. And I cannot, in good conscience, finish my post like this. So let's balance things as to why $\sf ZFC$ is a good foundation.</p> <ol> <li><p>Set theory, and in particular $\sf ZFC$, is quite self evident. When I was a starting master's student, I attended a course by one of the generation's greatest mathematicians and asked him about the axioms of $\sf ZFC$, and he said that a good foundational theory is one whose axioms you don't feel you're using. It might be that we're trained to think in "set theory ways", but it's also true that the axioms give you exactly the expressive power to talk about everything you care about and not care about one specific implementation over another (looking your way, Replacement axioms!), which is quite great.</p> <p>Can you imagine a mathematical universe where the reals constructed with Dedekind cuts and with Cauchy sequences are different? 
(Those mathematical worlds exist in other foundations, by the way, and at least to me, that is weird.)</p></li> <li><p>The previous point leads us exactly to this argument. Humans take an abstract algorithm, implement it in various languages, on various processors, using various data structures and types. But the code and data all turn into electronic signals. So working with high level objects like $\Bbb N$ and $\Bbb R$, function spaces and so on is our algorithm. And we can turn that into electronic signals, or sets and first-order structures in this case.</p></li> <li><p>We like sets more than we are willing to admit. One of the problems of second-order logic is that the logic is incomplete.<sup>2</sup> And if you have an algorithm for a list of inference rules to verify a proof, then there will be statements which are true (even valid) that these inference rules cannot prove. So your proof theory is quite insufficient in this respect. Because at least the valid sentences <em>should</em> be provable.</p> <p>So we can, instead, use a first-order based foundation to fix this. Instead of proving something about $\Bbb R$, we prove that $\sf ZFC$ proves that thing about $\Bbb R$; and this we can verify mechanically. So one option is to resort to some arithmetic foundation. But this causes a different problem. We cannot "take objects" anymore. We don't have sets of reals, or rings, or groups. So when you prove something about groups, you can't say "Let $G$ be an abelian group, then bla bla bla" anymore. You have to say "The theory of abelian groups proves that bla bla bla".</p> <p>And this is quite a big issue, since we think about mathematics in some material sense. The objects exist somewhere. They are not just axiomatic consequences. And set theory, in particular $\sf ZFC$ with its implementation agnosticism, provides us with the means to do exactly this.</p></li> </ol> <hr> <p><sub><em>Footnotes.</em></sub></p> <p><sub>(1) Is $\Bbb N\subseteq\Bbb R$? 
Often the answer is yes, many other times the answer is no. And it really depends on what you want to do, or how you mostly use these two objects. But since we know that either method can be replaced by the other, we're not worried about it too much. But it is a concrete question that has a convenient answer either way, and we exploit it. And that is, in a nutshell, a "global inconsistency" in our thinking.</sub></p> <p><sub>You can argue about this, but it's going to be beside the point.</sub></p> <p><sub>(2) Incomplete here means in the sense of the completeness theorem. Something true in every model need not be provable. This is a different kind of incompleteness than the one referred to earlier, where $\sf ZFC$ is incomplete in the sense that it does not prove or refute every statement.</sub></p>
2,616,604
<p>Why is it true that every odd number $r$ that is not divisible by $p$ has an odd number $s$ such that</p> <p>$rs\equiv 2p-1 \mod 2p $</p> <p>My first thought is that I should have an inverse for $r$ in the ring, so I can write $s\equiv r^{-1} (-1) \mod 2p $, but I'm not sure what in number theory tells me that $r^{-1} $ exists.</p> <p>Here $p$, as always in number theory, is prime.</p>
Community
-1
<p>Assuming $p$ is a prime, and that not just $r\ne p$ but $p\not\mid r$...</p> <p>$r^{-1}$ exists because $r$ (being odd) is coprime with $2p$. Thus, $1=rs+2pt$ for some $s,t\in\mathbb Z$ (<a href="https://en.wikipedia.org/wiki/B%C3%A9zout%27s_identity" rel="nofollow noreferrer">Bézout's identity</a>), or, reducing modulo $2p$, $1\equiv rs\pmod{2p}$. </p>
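Concretely, that inverse can be produced by the extended Euclidean algorithm behind Bézout's identity. A sketch in Python (the choice $p = 7$, $r = 3$ is just an illustration):

```python
def extended_gcd(a, b):
    # Return (g, s, t) with g = gcd(a, b) and g == a*s + b*t (Bezout).
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    return g, t, s - (a // b) * t

def odd_partner(r, p):
    # Odd s with r*s = 2p - 1 = -1 (mod 2p); needs gcd(r, 2p) = 1,
    # which holds since r is odd and not divisible by p.
    g, s, _ = extended_gcd(r, 2 * p)
    assert g == 1
    return (-s) % (2 * p)    # s = -r^{-1} mod 2p

s = odd_partner(3, 7)        # 3 * 9 = 27 = 13 (mod 14)
```

Note that $s$ comes out odd automatically: $2p$ is even, so reduction mod $2p$ preserves parity, and $rs\equiv 2p-1$ is odd while $r$ is odd, forcing $s$ odd.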
3,537,549
<p>In the game of poker, five cards from a standard deck of 52 cards are dealt to each player.</p> <p>Assume there are four players and the cards are dealt five at a time around the table until all four players have received five cards.</p> <p>a. What is the probability of the first player receiving a royal flush (the ace, king, queen, jack, and 10 of the same suit)?</p> <p>b. What is the probability of the second player receiving a royal flush?</p> <p>c. If the cards are dealt one at a time to each player in turn, what is the probability of the first player receiving a royal flush?</p> <p>I know that the probability of a royal flush is 1/649740, because 52C5 = 2598960, and 4/2598960 = 1/649740.</p> <p>But I am struggling to understand how to determine the probability of the first player getting it, etc...</p> <p>Any help would be greatly appreciated.</p>
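As a sanity check on the arithmetic in the question, the hand count and the royal-flush probability can be computed exactly (a sketch; by symmetry, each player's five cards are a uniformly random 5-card subset of the deck however the cards are dealt, so the same number applies in each part):

```python
from math import comb
from fractions import Fraction

hands = comb(52, 5)            # number of possible 5-card hands
p_royal = Fraction(4, hands)   # four royal flushes, one per suit
```

This gives 2598960 hands and a probability of 1/649740.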
Eric Towers
123,905
<p>Line 1: <span class="math-container">$\cos \theta = -\sin(\theta + \pi/2)$</span>. Equivalently, make the substitution <span class="math-container">$\theta = \phi + \pi/2$</span> so that <span class="math-container">$\cos \theta = \cos(\phi + \pi/2) = - \sin \phi$</span> using a <a href="https://en.wikipedia.org/wiki/List_of_trigonometric_identities#Shifts_and_periodicity" rel="nofollow noreferrer">quarter-period shift identity</a>. Then switch the dummy variable from <span class="math-container">$\phi$</span> to <span class="math-container">$\theta$</span>. It is worth remembering that this new <span class="math-container">$\theta$</span> is <em>not</em> the same as the old <span class="math-container">$\theta$</span>, but this doesn't matter for dummy variables.</p> <p>Line 2: <span class="math-container">$2\theta / \pi \leq \sin \theta$</span> on <span class="math-container">$[0,\pi/2]$</span>.</p> <p><img src="https://i.stack.imgur.com/WkFam.png" alt="Mathematica graphics"></p> <p>The sine has negative second derivative on <span class="math-container">$(0, \pi/2)$</span>, so the linear approximation through the endpoints must be below sine.</p> <p>Then, do the easy integral.</p>
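The chord bound in Line 2 can be spot-checked numerically. A small Python sketch sampling the gap between sine and its chord on <span class="math-container">$[0,\pi/2]$</span>:

```python
import math

def gap(theta):
    # sin(theta) minus its chord 2*theta/pi on [0, pi/2]; concavity of
    # sine on this interval makes the gap nonnegative.
    return math.sin(theta) - 2 * theta / math.pi

thetas = [k * (math.pi / 2) / 1000 for k in range(1001)]
gaps = [gap(t) for t in thetas]
```

The gap vanishes only at the endpoints <span class="math-container">$\theta=0$</span> and <span class="math-container">$\theta=\pi/2$</span>, where the chord meets the curve, and is strictly positive in between.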
29,114
<p>In <a href="https://puzzling.stackexchange.com/questions/72494/use-2-0-1-and-9-to-make-34">this</a> question on the <a href="https://puzzling.stackexchange.com/"><em>Puzzling Stack Exchange</em></a>, I wrote <a href="https://puzzling.stackexchange.com/questions/72494/use-2-0-1-and-9-to-make-34/72504#72504">this</a> answer with some MathJax, but it formatted a little weirdly. Here is what I wrote:</p> <hr /> <p><em>Some unnecessary information, blah blah blah</em></p> <p>The following is another, less technical one.</p> <blockquote> <p><span class="math-container">$$\underbrace{(2+0!)}_{3}\|\underbrace{(1+\sqrt{9})}_{4}=34\tag{$\|=\small\rm concatenation$}$$</span></p> </blockquote> <hr /> <p>Notice that the right underbrace is significantly larger than the left one. I presume the square root and the brackets explain why this is the case.</p> <p>Is this a bug? If so, should the large underbrace be fixed to a smaller size?</p> <p>Here are some other examples:</p> <blockquote> <p><span class="math-container">$$\underbrace{1+1}_{2}\quad\underbrace{\sqrt{1}+1}_{2}\quad\underbrace{1+\sqrt{1}}_{2}\quad\underbrace{\sqrt{1}+\sqrt{1}}_{2}$$</span> <span class="math-container">$$\underbrace{(1+1)}_{2}\quad\underbrace{(\sqrt{1}+1)}_{2}\quad\underbrace{(1+\sqrt{1})}_{2}\quad\underbrace{(\sqrt{1}+\sqrt{1})}_{2}$$</span></p> </blockquote> <p>The same applies to overbraces.</p> <blockquote> <p><span class="math-container">$$\overbrace{1+1}^{2}\quad\overbrace{\sqrt{1}+1}^{2}\quad\overbrace{1+\sqrt{1}}^{2}\quad\overbrace{\sqrt{1}+\sqrt{1}}^{2}$$</span> <span class="math-container">$$\overbrace{(1+1)}^{2}\quad\overbrace{(\sqrt{1}+1)}^{2}\quad\overbrace{(1+\sqrt{1})}^{2}\quad\overbrace{(\sqrt{1}+\sqrt{1})}^{2}$$</span></p> </blockquote> <hr /> <h1>Original screenshot:</h1> <p><a href="https://i.stack.imgur.com/R0CSl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R0CSl.png" alt="SCREENSHOT" /></a></p> <p>The <strong>expressions</strong> are nearly all the 
same size, differing <em>just</em> slightly. But the <strong>braces</strong> (over and under) differ significantly. Apologies if the screenshot is blurry.</p> <hr /> <h1>New &amp; Improved Screenshot:</h1> <p><a href="https://i.stack.imgur.com/tDZ82.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tDZ82.png" alt="SCREENSHOT 2" /></a></p> <p>Thanks heaps to @<a href="https://math.meta.stackexchange.com/users/8297/martin-sleziak">Martin Sleziak</a> for this, particularly <a href="https://math.meta.stackexchange.com/questions/10459/square-brackets-are-not-displaying-correctly/10471#10471">this Meta answer</a>!!</p>
TheSimpliFire
471,884
<p>The braces are meant to suit the size of the expression you want them under. If you look at the code, you see the following:</p> <p><code>\underbrace{blah blah blah}</code></p> <p>which means that everything you have put from <code>{</code> to <code>}</code> will be on top of the brace. On the contrary, it would not make sense if they were all the same size, as otherwise some characters would not be covered by the brace.</p>
1,484,880
<p>I messed up an exam yesterday. Given</p> <p>$$\mathcal{B}:=\{(a,b]:a,b \in\mathbb{R}, a \le b\},$$</p> <p>I was able to show that $\mathcal{B}$ is the base of a topology $\mathcal{T}$, and that $\mathcal{T}$ is finer than the standard topology. However, I couldn't find a countable dense subset in $(\mathbb{R},\mathcal{T})$ :S I thought about $\mathbb{Q}$, but is $\mathbb{Q} \in \mathcal{T}$? How does one show that there is no countable base for $\mathcal{T}$?</p> <p>I'm just curious, and a bit afraid that there is a simple answer to this :S</p>
DanielWainfleet
254,665
<p>This is known as the Sorgenfrey line, although it's usually given in reverse, with the base of intervals $[a,b)$. (Not the same space but its mirror image, homeomorphic to it by the function $f(x)=-x$.) I will stick to your definition. Let $C$ be a base. For each real $r$ there exists $C_r\in C$ such that $r\in C_r\subset (r-1,r]$, because $(r-1,r]$ is a nbhd of $r$ and $C$ is a base. With respect to the usual order on the reals we have $\max (C_r)=r$. So $C_r\ne C_s$ when $r\ne s$. Since $\{C_r :r\in R\}\subset C$, the cardinal of $C$ is at least the cardinal of the reals. (Remark: Any open set of the Sorgenfrey line is a union of countably many half-open intervals, so the set of ALL Sorgenfrey-open sets has the same cardinal as the set of reals.)</p>
3,931,067
<p>Let <span class="math-container">$Y$</span> be a Geometric random variable with <span class="math-container">$p\leq \frac{1}{2}$</span>. I know that <span class="math-container">$f(y) = (1-p)^y p$</span> for <span class="math-container">$y=0,1,2,\dots$</span> Therefore,</p> <p><span class="math-container">$$E(e^{r(Y-1)}) = \sum_{y=0}^{\infty} e^{r(y-1)}(1-p)^yp$$</span></p> <p>I want to calculate that expected value to prove that if <span class="math-container">$E(e^{r(Y-1)})=1$</span> then <span class="math-container">$r=\log(\frac{p}{1-p})$</span>.</p> <p>Going backwards from what I need to prove, I get that <span class="math-container">$$e^r\left(\frac{1-p}{p}\right) = 1$$</span></p> <p>Therefore I can say that <span class="math-container">$E(e^{r(Y-1)})$</span> should be <span class="math-container">$e^r\left(\frac{1-p}{p}\right)$</span>, but I'm not aware of how to get there analytically. I would appreciate some advice on this one.</p>
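The claim in the question can be checked numerically before proving it: truncate the series at the proposed root <span class="math-container">$r=\log\frac{p}{1-p}$</span> and watch it come out to 1. A sketch in Python (the value $p = 0.3$ is an arbitrary choice with $p \le 1/2$):

```python
import math

def shifted_mgf(r, p, terms=400):
    # Truncated E[e^{r(Y-1)}] for Y ~ Geometric(p) on {0, 1, 2, ...};
    # the series converges when e^r * (1 - p) < 1.
    return sum(math.exp(r * (y - 1)) * (1 - p) ** y * p
               for y in range(terms))

p = 0.3
r = math.log(p / (1 - p))
value = shifted_mgf(r, p)   # very close to 1
```

Analytically the sum is geometric: <span class="math-container">$E(e^{r(Y-1)}) = pe^{-r}\sum_{y\ge0}((1-p)e^r)^y = \frac{pe^{-r}}{1-(1-p)e^r}$</span>, and setting this equal to 1 gives <span class="math-container">$e^r = \frac{p}{1-p}$</span> (besides the trivial root <span class="math-container">$r=0$</span>).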
user854214
854,214
<p>Note that <span class="math-container">$a = 0$</span> doesn't work. For <span class="math-container">$a \neq 0$</span>, the left hand side has minimum period <span class="math-container">$2\pi$</span>, while the right hand side has minimum period <span class="math-container">$2\pi / a$</span>. Hence, <span class="math-container">$a = 1$</span> is the only possibility.</p> <p>You could also look in terms of the ranges of the respective functions. The left hand side has range <span class="math-container">$[-2a, 0]$</span> if <span class="math-container">$ a \ge 0$</span>, or range <span class="math-container">$[0, -2a]$</span> if <span class="math-container">$a &lt; 0$</span>. The right hand side has range <span class="math-container">$[-2, 0]$</span>. This can only match the left hand side if <span class="math-container">$a = 1$</span>.</p>
3,931,067
<p>Let <span class="math-container">$Y$</span> be a Geometric random variable with <span class="math-container">$p\leq \frac{1}{2}$</span>. I know that <span class="math-container">$f(y) = (1-p)^y p$</span> for <span class="math-container">$y=0,1,2,\dots$</span> Therefore,</p> <p><span class="math-container">$$E(e^{r(Y-1)}) = \sum_{y=0}^{\infty} e^{r(y-1)}(1-p)^yp$$</span></p> <p>I want to calculate that expected value to prove that if <span class="math-container">$E(e^{r(Y-1)})=1$</span> then <span class="math-container">$r=\log(\frac{p}{1-p})$</span>.</p> <p>Going backwards from what I need to prove, I get that <span class="math-container">$$e^r\left(\frac{1-p}{p}\right) = 1$$</span></p> <p>Therefore I can say that <span class="math-container">$E(e^{r(Y-1)})$</span> should be <span class="math-container">$e^r\left(\frac{1-p}{p}\right)$</span>, but I'm not aware of how to get there analytically. I would appreciate some advice on this one.</p>
player3236
435,724
<p>From <span class="math-container">$0 \le b^2 = \cos(b^2)-1 \le 1-1$</span> we see that <span class="math-container">$b^2 = 0$</span>. Hence <span class="math-container">$b=0$</span>.</p> <p>Now we have <span class="math-container">$a(\cos x-1) = \cos (ax)-1$</span>.</p> <p>Substituting <span class="math-container">$x=2\pi$</span>, we have <span class="math-container">$0=\cos (2\pi a)-1$</span>. <span class="math-container">$\cos (2\pi a)=1$</span> implies that <span class="math-container">$a \in \mathbb Z$</span>.</p> <p>Substituting <span class="math-container">$x=\pi$</span>, we have <span class="math-container">$-2a = \cos (a\pi)-1 = (-1)^a-1 = 0$</span> or <span class="math-container">$-2$</span>. Hence <span class="math-container">$a=0$</span> or <span class="math-container">$1$</span>.</p> <p>Substituting both values of <span class="math-container">$a$</span> into our original equation, we have</p> <p><span class="math-container">$$0 = \cos 0 - 1$$</span></p> <p><span class="math-container">$$\cos x - 1 = \cos x - 1$$</span></p> <p>which are both valid for all <span class="math-container">$x \in \mathbb R$</span>.</p>
3,716,439
<p>Uhm, I'm having trouble solving for x in these types of equations: <span class="math-container">$a^x + b^x = c^x$</span> (where x is the only unknown).</p> <p>I saw an instance by Presh Talwalkar on MindYourDecisions. I tried solving for x and used an example with a known answer: <span class="math-container">$3^x + 4^x = 5^x$</span> </p> <p>We all know that x = 2, but I don't know the correct approach. I tried <span class="math-container">$\ln(3^x + 4^x) = x\ln(5)$</span> and got stuck.</p>
Community
-1
<p>The second equation is</p> <p><span class="math-container">$$\dot x+\dot y=x+y,$$</span> i.e. <span class="math-container">$(x+y)'=x+y$</span>, so <span class="math-container">$x+y=ce^t$</span>.</p> <p>Then the first says</p> <p><span class="math-container">$$ce^t+2y=\sin t.$$</span></p> <p>Solve this for <span class="math-container">$y$</span>, and then <span class="math-container">$x=ce^t-y$</span>. Done.</p>
3,716,439
<p>Uhm, I'm having trouble solving for x in these types of equations: <span class="math-container">$a^x + b^x = c^x$</span> (where x is the only unknown).</p> <p>I saw an instance by Presh Talwalkar on MindYourDecisions. I tried solving for x and used an example with a known answer: <span class="math-container">$3^x + 4^x = 5^x$</span> </p> <p>We all know that x = 2, but I don't know the correct approach. I tried <span class="math-container">$\ln(3^x + 4^x) = x\ln(5)$</span> and got stuck.</p>
Manjoy Das
568,656
<p>Multiplying the <span class="math-container">$1$</span>st equation by <span class="math-container">$(D-1)$</span> and the <span class="math-container">$2$</span>nd one by <span class="math-container">$D$</span> and then subtracting (note that <span class="math-container">$(D-1)$</span> must also act on the right-hand side <span class="math-container">$\sin t$</span>), we get<br> <span class="math-container">\begin{align} \{(D-1)(D+2)-D(D-1)\}y&amp;=(D-1)\sin t\\ \implies 2(D-1)y&amp;=\cos t-\sin t\\ \implies (D-1)y&amp;=\dfrac12(\cos t-\sin t)\\ \end{align}</span> So the auxiliary equation is<br> <span class="math-container">$m-1=0\implies m=1$</span><br> So C.F is <span class="math-container">$y_c=ce^t$</span><br> For P.I, try <span class="math-container">$y_p=a\sin t+b\cos t$</span>; then <span class="math-container">$(D-1)y_p=(a-b)\cos t-(a+b)\sin t$</span>, and matching <span class="math-container">$\dfrac12\cos t-\dfrac12\sin t$</span> gives <span class="math-container">$a=\dfrac12$</span>, <span class="math-container">$b=0$</span>, so<br> <span class="math-container">$$y_p=\dfrac12 \sin t$$</span> So the general solution is <span class="math-container">$y=ce^t+\dfrac12 \sin t$</span><br> Now <span class="math-container">$Dy=ce^t+\dfrac12 \cos t$</span><br> So from the <span class="math-container">$2$</span>nd equation, we get <span class="math-container">\begin{align} &amp;(D-1)x+(D-1)y=0\\ \implies &amp;(D-1)x=\dfrac12(\sin t-\cos t)\\ \end{align}</span> The auxiliary equation is<br> <span class="math-container">$m-1=0\implies m=1$</span><br> So C.F is <span class="math-container">$x_c=de^t$</span><br> For P.I, the same matching gives <span class="math-container">\begin{align} x_p&amp;=\dfrac{1}{D-1}\,\dfrac12(\sin t-\cos t)\\ &amp;=-\dfrac12 \sin t\\ \end{align}</span> So the general solution is <span class="math-container">$x=de^t-\dfrac12 \sin t$</span><br> Finally, substituting <span class="math-container">$x$</span> and <span class="math-container">$y$</span> back into the <span class="math-container">$1$</span>st equation forces <span class="math-container">$d=-3c$</span>, so the two constants are not independent.</p>
241,494
<p>Who has a hint on how to prove: $\sum_{n=0}^N \sum_{k=1}^N g_k y_n = \sum_{n=1}^N \sum_{k=1}^n g_k y_{n-k}+\sum_{n=1}^N\sum_{k=1}^n g_n y_{N-k+1}$?</p> <p>Thanks in advance!</p>
preferred_anon
27,150
<p>It's almost always helpful to expand both sides to see where all the terms come from: </p> <p>LHS: We can factor out $g_{k}$ from the sum over $n$, since it is independent of $n$. Thus, we have $g_{1}(y_{0}+y_{1}+\ldots+y_{N})+g_{2}(y_{0}+\ldots+y_{N})+\ldots+g_{N}(y_{0}+\ldots +y_{N})$, which we can easily see is $$y_{0}(g_{1}+g_{2}+\ldots+g_{N})+y_{1}(g_{1}+\ldots+g_{N})+\ldots+y_{N}(g_{1}+\ldots+g_{N})$$ </p> <p>RHS: I think your first sum is incorrect; it should read $\sum_{n=1}^{N} \sum_{k=1}^{n} g_{k}y_{n-k}$. If we expand this first sum, where each set of square brackets contains the next value for $n$ (the first set for $n=1$, the next for $n=2$, etc.), we get the following: $[g_{1}y_{0}]+[g_{1}y_{1}+g_{2}y_{0}]+[g_{1}y_{2}+g_{2}y_{1}+g_{3}y_{0}]+\ldots+[g_{1}y_{N-1}+g_{2}y_{N-2}+\ldots+g_{N}y_{0}]$,<br> which, after rearranging, is equal to $y_{0}(g_{1}+g_{2}+\ldots+g_{N})+y_{1}(g_{1}+g_{2}+\ldots+g_{N-1})+\ldots+y_{N-1}g_{1}$.<br> Before expanding the second sum, let's see what we have. Each of the $y_{k}$ has $N-k$ of the terms we want it to be multiplied by. So $y_{0}$ is being multiplied by all the $g_{k}$ we require, $y_{1}$ by all but $g_{N}$, and so on. Let us move on to the second sum.</p> <p>Expanding similarly, $[g_{1}y_{N}]+[g_{2}y_{N}+g_{2}y_{N-1}]+\ldots+[g_{N}y_{N}+g_{N}y_{N-1}+\ldots+g_{N}y_{1}]$. By rearranging, we have $y_{1}g_{N}+y_{2}(g_{N-1}+g_{N})+\ldots+y_{N}(g_{1}+g_{2}+\ldots+g_{N})$ </p> <p>Add the two sums on the RHS and collect your terms, and your identity follows.</p>
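A brute-force numerical check of the identity (with the corrected first sum) is a good way to confirm the bookkeeping. A sketch in Python with random coefficients:

```python
import random

random.seed(3)

def lhs(g, y, N):
    # sum_{n=0}^{N} sum_{k=1}^{N} g_k * y_n
    return sum(g[k] * y[n] for n in range(N + 1) for k in range(1, N + 1))

def rhs(g, y, N):
    # sum_{n=1}^{N} sum_{k=1}^{n} g_k * y_{n-k}   (corrected first sum)
    first = sum(g[k] * y[n - k]
                for n in range(1, N + 1) for k in range(1, n + 1))
    # sum_{n=1}^{N} sum_{k=1}^{n} g_n * y_{N-k+1}
    second = sum(g[n] * y[N - k + 1]
                 for n in range(1, N + 1) for k in range(1, n + 1))
    return first + second

N = 6
g = [0.0] + [random.uniform(-1, 1) for _ in range(N)]  # g[1..N] used
y = [random.uniform(-1, 1) for _ in range(N + 1)]      # y[0..N]
diff = abs(lhs(g, y, N) - rhs(g, y, N))
```

The difference is zero up to rounding for any choice of coefficients, matching the term-by-term accounting above.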
4,225,583
<p>At my university, we came across the following exercise:</p> <p>Let <span class="math-container">$G$</span> be a finite group of odd order.</p> <p>Show that every irreducible representation of <span class="math-container">$G$</span> over <span class="math-container">$\mathbb{C}$</span> with real-valued character is trivial (so the trivial representation is the only irreducible complex representation of <span class="math-container">$G$</span> defined over <span class="math-container">$\mathbb{R}$</span>).</p> <p>I tried to prove this, but didn't get very far. As far as I understand, if we take an arbitrary irreducible representation, then the sum of the elements on the main diagonal of the representation matrix must be zero, since the character corresponds to the trace of the given representation.</p> <p>I also know irreducible representations correspond to simple modules, but I am not sure whether this could be used.</p> <p>How would you proceed with this? Any idea would be appreciated, thank you.</p>
Aphelli
556,825
<p>It’s well known (over <span class="math-container">$\mathbb{C}$</span>) that there is a perfect pairing <span class="math-container">$C \times F \rightarrow \mathbb{C}$</span>, where <span class="math-container">$C$</span> is the vector space generated by the conjugation classes (denote as <span class="math-container">$C_G$</span> their set – a basis of <span class="math-container">$C$</span>) of <span class="math-container">$G$</span>, and <span class="math-container">$F$</span> is the space of conjugation-invariant functions on <span class="math-container">$G$</span>, and <span class="math-container">$F$</span> has the set <span class="math-container">$B_F$</span> of characters of irreducible representations as a basis.</p> <p>Now, <span class="math-container">$i_1: \chi \longmapsto \overline{\chi}$</span> is an involution of <span class="math-container">$F$</span>, and its ”dual” under the pairing from above is <span class="math-container">$i_2: [x] \longmapsto [x^{-1}]$</span> (where <span class="math-container">$x\in G$</span> and <span class="math-container">$[x]=\{gxg^{-1},\, g \in G\}$</span>). So <span class="math-container">$i_1,i_2$</span> are symmetries and they have the same number of fixed vectors on <span class="math-container">$B_F$</span> and <span class="math-container">$C_G$</span> respectively (since it is their trace).</p> <p>Therefore, one just needs to prove that if <span class="math-container">$x \in G$</span> is not the unit, <span class="math-container">$x$</span> and <span class="math-container">$x^{-1}$</span> aren’t conjugate. But if <span class="math-container">$x^{-1}=gxg^{-1}$</span>, then <span class="math-container">$g^2xg^{-2}=x$</span> so <span class="math-container">$g^2$</span> commutes with <span class="math-container">$x$</span>. 
As <span class="math-container">$|G|$</span> is odd, <span class="math-container">$g$</span> is a power of <span class="math-container">$g^2$</span> so <span class="math-container">$g$</span> commutes with <span class="math-container">$x$</span> and thus <span class="math-container">$x=x^{-1}$</span>, so <span class="math-container">$x^2$</span> is the unit. As <span class="math-container">$|G|$</span> is odd, it follows that <span class="math-container">$x$</span> is the unit. QED.</p>
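<p>The last step, that in a group of odd order no non-identity element is conjugate to its inverse, can be checked by brute force on the smallest nonabelian group of odd order (a sketch; the order-21 group $\mathbb{Z}_7\rtimes\mathbb{Z}_3$ is my choice of example, not taken from the answer):</p>

```python
# Z_7 ⋊ Z_3 with the action x -> 2x (valid since 2^3 = 1 mod 7):
# (a, b) * (c, d) = (a + 2^b c mod 7, b + d mod 3)
def mul(g, h):
    (a, b), (c, d) = g, h
    return ((a + pow(2, b, 7) * c) % 7, (b + d) % 3)

G = [(a, b) for a in range(7) for b in range(3)]
e = (0, 0)

def inv(g):
    return next(h for h in G if mul(g, h) == e)

def conjugate(g, x):
    return mul(mul(g, x), inv(g))

# no non-identity x is conjugate to its inverse
for x in G:
    if x != e:
        assert all(conjugate(g, x) != inv(x) for g in G)
```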
4,225,583
<p>At my university, we came across the following exercise:</p> <p>Let <span class="math-container">$G$</span> be a finite group of odd order.</p> <p>Show that every irreducible representation of <span class="math-container">$G$</span> over <span class="math-container">$\mathbb{C}$</span> with real-valued character is trivial (so the trivial representation is the only irreducible complex representation of <span class="math-container">$G$</span> defined over <span class="math-container">$\mathbb{R}$</span>).</p> <p>I tried to prove this, but didn't get very far. As far as I understand, if we take an arbitrary irreducible representation, then the sum of the elements on the main diagonal of the representation matrix must be zero, since the character corresponds to the trace of the given representation.</p> <p>I also know irreducible representations correspond to simple modules, but I am not sure whether this could be used.</p> <p>How would you proceed with this? Any idea would be appreciated, thank you.</p>
David A. Craven
804,921
<p>Since <span class="math-container">$\chi$</span> is irreducible, we have <span class="math-container">$\langle \chi,\chi\rangle=1$</span>, and since <span class="math-container">$\chi$</span> is real, <span class="math-container">$$\langle \chi,\chi\rangle=\frac{1}{|G|} \sum_{g\in G} \chi(g)\overline{\chi(g)}=\frac{1}{|G|}\left(\chi(1)^2+2\sum_{g\in I} \chi(g)^2\right),$$</span> where <span class="math-container">$I$</span> is the set of non-trivial elements of <span class="math-container">$G$</span> up to inverses (no non-trivial <span class="math-container">$g$</span> equals <span class="math-container">$g^{-1}$</span> since <span class="math-container">$|G|$</span> is odd, and <span class="math-container">$\chi(g^{-1})=\overline{\chi(g)}=\chi(g)$</span>). If <span class="math-container">$\chi(1)$</span> is even then the sum of the character value squares is half of an odd integer, hence not an integer. But character values are algebraic integers, and this is not possible. Thus <span class="math-container">$\chi(1)$</span> is odd. (If you already knew that <span class="math-container">$\chi(1)\mid |G|$</span> you don't need this part.)</p> <p>Now we note that if <span class="math-container">$\chi(g)\neq 0$</span> for all <span class="math-container">$g\in G$</span> then we are done: the square of a real algebraic integer is at least <span class="math-container">$1$</span>, so the only way that the sum of <span class="math-container">$|G|$</span> many squares of real algebraic integers is equal to <span class="math-container">$|G|$</span> is if all have norm <span class="math-container">$1$</span>, so <span class="math-container">$\chi(1)=1$</span>.</p> <p>But the character is the trace of a matrix, so the sum of the eigenvalues. Since <span class="math-container">$\chi$</span> is real, the eigenvalues are stable under complex conjugation, and so the non-real eigenvalues come in pairs. Removing those, we are left with an odd number of <span class="math-container">$1$</span>s and <span class="math-container">$-1$</span>s, which cannot sum to <span class="math-container">$0$</span>.</p>
55,853
<p>Given a function $f:X\to X$, let $\text{Fix}(f)=\{x\in X\mid x=f(x)\}$.</p> <p>In a recent <a href="https://math.stackexchange.com/questions/55846/is-the-set-of-fixed-points-in-a-non-hausdorff-space-always-closed/55847#55847">comment</a>, I wondered whether $X$ is Hausdorff $\iff$ $\text{Fix}(f)\subseteq X$ is closed for every continuous $f:X\to X$ (the forwards implication is a simple, well-known result). At first glance it seemed plausible to me, but I don't have any particular reason for thinking so. I'll also repost Qiaochu's comment to me below for reference: </p> <blockquote> <p>I would be very surprised if this were true, but it doesn't seem easy to construct a counterexample. Any counterexample needs to be infinite and $T_1$, but not Hausdorff, and I don't have good constructions for such spaces which don't result in a huge collection of endomorphisms...</p> </blockquote> <p>Is there a non-Hausdorff space $X$ for which $\text{Fix}(f)\subseteq X$ is closed for every continuous $f:X\to X$?</p>
LostInMath
9,174
<p>While I think that Sam's counterexample is fine, I would like to elaborate on ccc's comment to Sam's answer.</p> <p>Recall that a topological space $X$ is called <em>Fréchet</em> if for every $A\subseteq X$ and every point $x\in \bar{A}$ there exists a sequence of points of $A$ converging to $x$.</p> <p>Observe that if $X$ is Fréchet with unique sequential limits, then $\text{Fix}(f)$ is closed for every continuous $f:X\to X$. Indeed, if $X$ is such a space, $f:X\to X$ is continuous and $x\in\overline{\text{Fix}(f)}$, then by Fréchetness of $X$ we find a sequence $(x_n)_{n=1}^\infty$ of points of $\text{Fix}(f)$ converging to $x$. Then $f(x)=f(\lim_{n\to\infty} x_n)=\lim_{n\to\infty}f(x_n)=\lim_{n\to\infty}x_n=x$ by continuity of $f$ and uniqueness of sequential limits in $X$. This means that $x\in\text{Fix}(f)$ and hence $\text{Fix}(f)$ is closed.</p> <p>To give a positive answer to the question we want to find a non-Hausdorff Fréchet space with unique sequential limits. The one-point compactification of the rationals suggested by Sam is such a space. Another example is Example 6.2 in <em>S.P. Franklin, Spaces in which sequences suffice II, Fund. Math. 61 (1967)</em>, which is available <a href="http://matwbn.icm.edu.pl/ksiazki/fm/fm61/fm6115.pdf">here</a>.</p>
3,986,812
<p>Given <span class="math-container">$U_n = (1 + 1/n)^n$</span> , <span class="math-container">$n = 1,2,3,\ldots\;.$</span></p> <blockquote> <p>Show that <span class="math-container">$2 \le U_n \le 3$</span> for all n</p> </blockquote> <p>This is what I've done. Can anyone help?</p> <p><span class="math-container">$$\begin{align*} a_n=\left(1+\frac1n\right)^n&amp;=\sum_{r=0}^n{^nC_r}(1)^{n-r}\left(\frac1n\right)^{r}\\ &amp;=\sum_{r=0}^n{^nC_r}\left(\frac1n\right)^{r}\\ &amp;=1+\frac{n}n+\frac1{2!}\frac{n(n-1)}{n^2}+\frac1{3!}\frac{n(n-1)(n-2)}{n^3}\\ &amp;\quad\,+\ldots+\frac1{n!}\frac{n(n-1)\ldots(n-n+1)}{n^n} \end{align*}$$</span></p> <p>Since <span class="math-container">$\forall k\in\{2,3,\ldots,n\}$</span>: <span class="math-container">$\frac1{k!}\le\frac1{2^{k-1}}$</span>, and <span class="math-container">$\frac{n(n-1)\ldots\big(n-(k-1)\big)}{n^k}&lt;1$</span>,</p> <p><span class="math-container">$$\begin{align*} a_n&amp;\le1+\left(1+\frac12+\frac1{2^2}+\ldots+\frac1{2^{n-1}}\right)\\ &amp;=1+\frac{1-\left(\frac12\right)^n}{1-\frac12}=3-\frac1{2^{n-1}}&lt;3 \end{align*}$$</span></p>
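<p>With the term-by-term bound written as $\frac1{k!}\le\frac1{2^{k-1}}$, the whole chain of inequalities can be spot-checked numerically (a quick sketch; the finite ranges are arbitrary, so this is an illustration rather than a proof):</p>

```python
from math import factorial

# term-by-term bound: 1/k! <= 1/2^(k-1), i.e. k! >= 2^(k-1), for k >= 1
assert all(factorial(k) >= 2 ** (k - 1) for k in range(1, 30))

# hence 2 <= (1 + 1/n)^n <= 3 - 1/2^(n-1) < 3
for n in range(1, 40):
    u = (1 + 1 / n) ** n
    assert 2 <= u <= 3 - 1 / 2 ** (n - 1) < 3
```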
Arash
866,952
<p>Here is another way to look at it that might be helpful, <em>if the question is why it holds for all <span class="math-container">$n$</span></em>. Consider <span class="math-container">$f(x)=(1+\frac{1}{x})^x$</span>. It is an increasing function of <span class="math-container">$x$</span>:</p> <p><span class="math-container">$$ f'(x)=\frac{d}{dx}e^{x \ln(1+\frac{1}{x})} = \left[\ln\left(1+\frac{1}{x}\right) - \frac{1}{1+x}\right]\left(1+\frac{1}{x}\right)^x &gt;0, \forall x&gt;0$$</span></p> <p>Now:</p> <p><span class="math-container">$f(1)=2$</span>, and <span class="math-container">$\lim\limits_{n\rightarrow+\infty}f(n)=e&lt;3$</span>, so <span class="math-container">$2\leq f(n) &lt; e &lt; 3, \forall n$</span></p>
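<p>The monotonicity and the bound by $e$ can be illustrated numerically (a sketch; only finitely many $n$ are checked, so this supports rather than proves the claim):</p>

```python
import math

f = lambda x: (1 + 1 / x) ** x

vals = [f(n) for n in range(1, 200)]
assert vals[0] == 2.0                              # f(1) = 2
assert all(a < b for a, b in zip(vals, vals[1:]))  # strictly increasing
assert all(2 <= v < math.e < 3 for v in vals)      # bounded above by e < 3
```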
1,356,585
<p>I am reading Hajime Sato's: <em>Algebraic Topology, an Intuitive Approach</em>. His Sample Problem 1.3 is: <em>Show that the topological spaces X and I are not homeomorphic.</em> (Note that this requires a font where I appears as a simple straight vertical line.) </p> <p>Sato argues by contradiction. Brief sketch of his argument: Suppose there existed a homeomorphism $f: X \to I$. Remove the point $x_0$ which lies at the crossing point of X. Then the restriction of $f$ to the new domain is still homeomorphic. But $X-x_0$ consists of four disjoint line segments (each half open), and $I-f(x_0)$ consists of two disjoint line segments (each half open). Therefore, he concludes, the spaces aren't homeomorphic. </p> <p>I don't understand how the conclusion (in the final sentence) follows. I know that two spaces are homeomorphic if there exists a bijective continuous map between them with a continuous inverse. And I know the characterization of continuity in terms of open sets. But somehow I am not seeing it. I feel like I'm missing something really obvious. Any ideas?</p>
Mathslover shah
253,752
<p>Removing the point of conjunction $x_0$ disconnects $X$ into four connected components, while removing $f(x_0)$ disconnects I into two. If $f$ were a homeomorphism, its restriction would be a homeomorphism from $X-x_0$ onto $I-f(x_0)$, and a homeomorphism maps connected components onto connected components (images of connected sets under a continuous map are connected). So the four components of $X-x_0$ would have to correspond bijectively to the two components of $I-f(x_0)$, which is impossible.</p>
1,356,585
<p>I am reading Hajime Sato's: <em>Algebraic Topology, an Intuitive Approach</em>. His Sample Problem 1.3 is: <em>Show that the topological spaces X and I are not homeomorphic.</em> (Note that this requires a font where I appears as a simple straight vertical line.) </p> <p>Sato argues by contradiction. Brief sketch of his argument: Suppose there existed a homeomorphism $f: X \to I$. Remove the point $x_0$ which lies at the crossing point of X. Then the restriction of $f$ to the new domain is still homeomorphic. But $X-x_0$ consists of four disjoint line segments (each half open), and $I-f(x_0)$ consists of two disjoint line segments (each half open). Therefore, he concludes, the spaces aren't homeomorphic. </p> <p>I don't understand how the conclusion (in the final sentence) follows. I know that two spaces are homeomorphic if there exists a bijective continuous map between them with a continuous inverse. And I know the characterization of continuity in terms of open sets. But somehow I am not seeing it. I feel like I'm missing something really obvious. Any ideas?</p>
John Gowers
26,267
<p>A <em>homeomorphism</em> is an <em>isomorphism</em> between topological spaces. That is to say, it preserves the topological structure. </p> <p>Let's forget the definition of a homeomorphism in terms of continuous maps for now, and try and construct it ourselves. If $X$ is a topological space, and $Y$ is a topological space that is 'the same' as $X$, up to renaming of the points, then that means that there is a bijective function</p> <p>$$ f\colon X\to Y $$ taking a point $x\in X$ to the corresponding point $y\in Y$. As a silly example, the topological spaces $\mathbb R\times\{0\}$ and $\mathbb R\times\{1\}$ are homeomorphic. The homeomorphism $f$ takes the point $(x,0)$ to the point $(x, 1)$. </p> <p>Now we want $f$ to preserve the topological structure somehow. Since we want $X$ and $Y$ to be 'the same', up to the 'renaming' provided by the function $f$, then the open subsets of $Y$ should be precisely those sets $V\subset Y$ that correspond to open subsets $U\subset X$, via the map $f$. Putting this into mathematical language, we see that:</p> <blockquote> <p>$V\subset Y$ is open in $Y$ if and only if the set $\{f^{-1}(y)\;\colon\;y\in V\}\subset X$ is open in $X$.</p> </blockquote> <p>(remembering that $f$ is a bijection, so it has an inverse $f^{-1}$). If we think a bit harder, we can see that this is equivalent to the usual definition of a homeomorphism (left as exercise...)</p> <p>This is very powerful. It means that any statement we can make about $X$ <em>using purely topological language</em> must also be true for $Y$. For example, if we say '$X$ has $4$ connected components', then that is really a statement about the open sets of $X$ - it is a <em>topological statement</em>. If $X$ is homeomorphic to a space $Y$ then $Y$ must have $4$ connected components too.</p> <p>On the other hand, if we say $X$ has diameter $2$, then that is not a topological statement, and we should not expect it to be preserved by homeomorphism. 
For instance, the unit circle, which has diameter $2$, is homeomorphic to the unit square, which has diameter $\sqrt{2}$, and to a larger circle, of diameter $1000$. </p> <p>Of course, you should still <em>prove</em> that $(\text{number of connected components})$ is a homeomorphism invariant, but in general, you can be sure that any definition made purely in terms of open sets and relations between them and the points of the space will be preserved by homeomorphisms. </p>
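<p>The component-counting argument can be mimicked with finite graph models (a sketch; the star and path graphs below are combinatorial stand-ins of my choosing for the spaces $X$ and I):</p>

```python
# X modeled as a 4-armed star (crossing point 'c'), I as a 3-vertex path
# (interior point 'm'); removing the special point leaves 4 vs. 2 components.
def components(adj, removed):
    seen, count = set(), 0
    for v in adj:
        if v == removed or v in seen:
            continue
        count += 1
        stack = [v]
        while stack:
            u = stack.pop()
            if u in seen or u == removed:
                continue
            seen.add(u)
            stack.extend(adj[u])
    return count

star = {'c': ['a1', 'a2', 'a3', 'a4'],
        'a1': ['c'], 'a2': ['c'], 'a3': ['c'], 'a4': ['c']}
path = {'l': ['m'], 'm': ['l', 'r'], 'r': ['m']}

assert components(star, 'c') == 4
assert components(path, 'm') == 2
```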
973,268
<p>Today in high school we were doing a chapter called &quot;Roots of polynomials&quot;, where we learnt something new and interesting:</p> <blockquote> <p><span class="math-container">$ax^2+bx+c=0$</span> has roots <span class="math-container">$\alpha$</span>, <span class="math-container">$\beta$</span>. Then:</p> <p><span class="math-container">$$\alpha + \beta= -b/a$$</span></p> <p><span class="math-container">$$\alpha \beta=c/a$$</span></p> </blockquote> <hr /> <blockquote> <p><span class="math-container">$ax^3+bx^2+cx+ d=0$</span> has roots <span class="math-container">$\alpha$</span>, <span class="math-container">$\beta$</span>, <span class="math-container">$\gamma$</span>. Then:</p> <p><span class="math-container">$$\alpha+ \beta + \gamma = -b/a$$</span></p> <p><span class="math-container">$$\alpha \beta + \alpha \gamma + \beta \gamma= c/a$$</span></p> <p><span class="math-container">$$\alpha \beta \gamma= -d/a$$</span></p> </blockquote> <hr /> <p>My curiosity turned to what happens for a 4th-degree polynomial. We haven't learnt it in class (it's not in our syllabus). But is there something like a general formula for this? Because I'm sure people who work with higher powers cannot memorise all the formulas and remember when the sign becomes - or +. (I can memorise these ones, because it's only a small set of equations for two different polynomials.) What happens for higher powers, and how does one memorise them? What is the general formula, if there is any?</p> <p>And also: <strong>what happens in the fourth power</strong>, that is, for:</p> <p><span class="math-container">$$ax^4+bx^3+cx^2+dx+e=0$$</span></p> <hr /> <p><img src="https://i.stack.imgur.com/4761q.jpg" alt="enter image description here" /></p> <p><strong>After looking at the first comment I understood that these are Vieta's formulas, and I checked them out on Wikipedia. The formulas look complicated, but I understood them after looking for a while. There are dots in the middle, which mean more equations. I tried this with the 3rd power and it works fine, but the question remains: how does one do this for higher-degree polynomials? I don't know what the formulas in the middle are (the dots going downwards). I believe there are n formulas for a degree-n polynomial; here there are only three, which I already knew. Please help me.</strong></p>
MJD
25,554
<p>As the comments say, these are called <a href="http://en.wikipedia.org/wiki/Vieta%27s_formulas" rel="nofollow">Vieta's formulas</a>, and the plus or minus sign is not difficult to remember, because it always alternates. In general, if you have an $n$th-degree polynomial $$p(x) = a_0x^n + a_{1}x^{n-1} + a_2x^{n-2} + \cdots + a_{n-1}x + a_n$$ whose roots are $r_1, r_2, \ldots r_n$ then Vieta's formulas tell you that you can find $\frac{a_i}{ a_0}$ as follows:</p> <blockquote> <p>Take all possible products of $i$ of the roots together; add up those products, and finally multiply by $-1$ if $i$ is odd. </p> </blockquote> <p>For example, suppose $n=6$ and we want to find $\frac{a_3}{a_0}$, the coefficient of the $x^3$ term. (Here we have $i=3$.) Say the roots are $r_1\ldots r_6$. We find the products of the roots <em>in groups of three</em>: $$\begin{array}{cccc} r_1r_2r_3 + &amp; r_1r_2r_4 +&amp; r_1r_2r_5 +&amp; r_1r_2r_6 +\\ r_1r_3r_4 + &amp; r_1r_3r_5 +&amp; r_1r_3r_6 +&amp; r_1r_4r_5 +\\ r_1r_4r_6 + &amp; r_1r_5r_6 +&amp; r_2r_3r_4 +&amp; r_2r_3r_5 +\\ r_2r_3r_6 + &amp; r_2r_4r_5 +&amp; r_2r_4r_6 +&amp; r_2r_5r_6 +\\ r_3r_4r_5 + &amp; r_3r_4r_6 +&amp; r_3r_5r_6 +&amp; r_4r_5r_6 \hphantom{=} \end{array}$$</p> <p>There are 20 of these products, and we add up all 20. Then, because 3 is odd, we multiply by $-1$, and we have $\frac{a_3}{a_0}$.</p> <p>(You may be interested to observe that this works even for $i=0$, if you understand the empty product correctly; this and similar considerations are what leads mathematicians to understand empty products the way they do.)</p>
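<p>Vieta's rule above is easy to test mechanically (a sketch; the sample roots are arbitrary, and the polynomial is taken monic so that $a_0=1$):</p>

```python
from itertools import combinations
from math import prod

roots = [2, -1, 3, 5]  # arbitrary sample roots, a_0 = 1

# expand (x - r1)(x - r2)...(x - rn); coefficients listed highest power first
coeffs = [1]
for r in roots:
    coeffs = [a - r * b for a, b in zip(coeffs + [0], [0] + coeffs)]

n = len(roots)
for i in range(1, n + 1):
    # sum of all products of i roots, with alternating sign
    e_i = sum(prod(c) for c in combinations(roots, i))
    sign = -1 if i % 2 else 1
    assert coeffs[i] == sign * e_i  # Vieta: a_i / a_0 = (-1)^i e_i
```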
973,268
<p>Today in high school we were doing a chapter called &quot;Roots of polynomials&quot;, where we learnt something new and interesting:</p> <blockquote> <p><span class="math-container">$ax^2+bx+c=0$</span> has roots <span class="math-container">$\alpha$</span>, <span class="math-container">$\beta$</span>. Then:</p> <p><span class="math-container">$$\alpha + \beta= -b/a$$</span></p> <p><span class="math-container">$$\alpha \beta=c/a$$</span></p> </blockquote> <hr /> <blockquote> <p><span class="math-container">$ax^3+bx^2+cx+ d=0$</span> has roots <span class="math-container">$\alpha$</span>, <span class="math-container">$\beta$</span>, <span class="math-container">$\gamma$</span>. Then:</p> <p><span class="math-container">$$\alpha+ \beta + \gamma = -b/a$$</span></p> <p><span class="math-container">$$\alpha \beta + \alpha \gamma + \beta \gamma= c/a$$</span></p> <p><span class="math-container">$$\alpha \beta \gamma= -d/a$$</span></p> </blockquote> <hr /> <p>My curiosity turned to what happens for a 4th-degree polynomial. We haven't learnt it in class (it's not in our syllabus). But is there something like a general formula for this? Because I'm sure people who work with higher powers cannot memorise all the formulas and remember when the sign becomes - or +. (I can memorise these ones, because it's only a small set of equations for two different polynomials.) What happens for higher powers, and how does one memorise them? What is the general formula, if there is any?</p> <p>And also: <strong>what happens in the fourth power</strong>, that is, for:</p> <p><span class="math-container">$$ax^4+bx^3+cx^2+dx+e=0$$</span></p> <hr /> <p><img src="https://i.stack.imgur.com/4761q.jpg" alt="enter image description here" /></p> <p><strong>After looking at the first comment I understood that these are Vieta's formulas, and I checked them out on Wikipedia. The formulas look complicated, but I understood them after looking for a while. There are dots in the middle, which mean more equations. I tried this with the 3rd power and it works fine, but the question remains: how does one do this for higher-degree polynomials? I don't know what the formulas in the middle are (the dots going downwards). I believe there are n formulas for a degree-n polynomial; here there are only three, which I already knew. Please help me.</strong></p>
M.S.E
153,974
<p>For the fourth power</p> <p>$$ax^4+bx^3+cx^2+dx+e=0$$ has roots $\alpha$ , $\beta$ , $\gamma$ , $\delta$</p> <p>Then:</p> <p>$$ \begin{align} \alpha+\beta+\gamma+\delta &amp; = -\frac ba \\ \alpha\beta + \alpha\gamma + \alpha\delta + \beta\gamma + \beta\delta+\gamma\delta &amp; = \hphantom{-}\frac ca \\ \alpha\beta\gamma + \alpha\beta\delta + \alpha\gamma\delta + \beta\gamma\delta &amp; = -\frac da \\ \alpha\beta\gamma\delta &amp; = \hphantom{-}\frac ea \end{align} $$</p> <p><a href="https://i.stack.imgur.com/soaUl.jpg" rel="nofollow noreferrer">(original screenshot)</a></p> <p>The way I remember this is that you start with the sum of the individual roots (which gets a - sign), and then the signs alternate +, -, +. After adding the single roots, you move on to pairs of roots (take the sum of all the possible products of two roots). </p> <p>And so on, until you reach the nth formula of your nth-power polynomial. At that stage there is only one possible product to sum, namely all n roots multiplied together.</p>
973,268
<p>Today in high school we were doing a chapter called &quot;Roots of polynomials&quot;, where we learnt something new and interesting:</p> <blockquote> <p><span class="math-container">$ax^2+bx+c=0$</span> has roots <span class="math-container">$\alpha$</span>, <span class="math-container">$\beta$</span>. Then:</p> <p><span class="math-container">$$\alpha + \beta= -b/a$$</span></p> <p><span class="math-container">$$\alpha \beta=c/a$$</span></p> </blockquote> <hr /> <blockquote> <p><span class="math-container">$ax^3+bx^2+cx+ d=0$</span> has roots <span class="math-container">$\alpha$</span>, <span class="math-container">$\beta$</span>, <span class="math-container">$\gamma$</span>. Then:</p> <p><span class="math-container">$$\alpha+ \beta + \gamma = -b/a$$</span></p> <p><span class="math-container">$$\alpha \beta + \alpha \gamma + \beta \gamma= c/a$$</span></p> <p><span class="math-container">$$\alpha \beta \gamma= -d/a$$</span></p> </blockquote> <hr /> <p>My curiosity turned to what happens for a 4th-degree polynomial. We haven't learnt it in class (it's not in our syllabus). But is there something like a general formula for this? Because I'm sure people who work with higher powers cannot memorise all the formulas and remember when the sign becomes - or +. (I can memorise these ones, because it's only a small set of equations for two different polynomials.) What happens for higher powers, and how does one memorise them? What is the general formula, if there is any?</p> <p>And also: <strong>what happens in the fourth power</strong>, that is, for:</p> <p><span class="math-container">$$ax^4+bx^3+cx^2+dx+e=0$$</span></p> <hr /> <p><img src="https://i.stack.imgur.com/4761q.jpg" alt="enter image description here" /></p> <p><strong>After looking at the first comment I understood that these are Vieta's formulas, and I checked them out on Wikipedia. The formulas look complicated, but I understood them after looking for a while. There are dots in the middle, which mean more equations. I tried this with the 3rd power and it works fine, but the question remains: how does one do this for higher-degree polynomials? I don't know what the formulas in the middle are (the dots going downwards). I believe there are n formulas for a degree-n polynomial; here there are only three, which I already knew. Please help me.</strong></p>
Parth Thakkar
70,311
<p>I'll give examples, then you'll understand the general pattern better.</p> <p>Take a degree four polynomial. I'll denote a general fourth-degree polynomial by <span class="math-container">$P_4$</span>, a general <span class="math-container">$5^{\text{th}}$</span>-degree polynomial by <span class="math-container">$P_5$</span>, and so on...</p> <p>Your <span class="math-container">$P_4$</span> is: <span class="math-container">$ a_0x^4 + a_1x^3 + a_2x^2 + a_3x^1 + a_4$</span></p> <p>Then, I'll get <span class="math-container">$4$</span> formulas:</p> <ol> <li><p><span class="math-container">$r_0 + r_1 + r_2 + r_3 = -\dfrac{a_1}{a_0}$</span></p></li> <li><p><span class="math-container">$r_0r_1 + r_0r_2 + r_0r_3 + r_1r_2 + r_1r_3 + r_2r_3 = +\dfrac{a_2}{a_0}$</span></p></li> <li><p><span class="math-container">$r_0r_1r_2 + r_0r_1r_3 + r_0r_2r_3 + r_1r_2r_3= -\dfrac{a_3}{a_0}$</span></p></li> <li><p><span class="math-container">$ r_0r_1r_2r_3 = + \dfrac{a_4}{a_0}$</span></p></li> </ol> <p>The idea is: first you add the single roots (the first formula). Then you add products containing 2 roots (the second formula), then products containing 3 roots (the third formula), then the product of 4 roots (the fourth formula). And on the RHS, the denominator is always the coefficient of the highest power, and the signs alternate; the first formula always has the negative sign. The numerator is the coefficient whose index matches the number of roots in each product, i.e. for products containing 4 roots the numerator should be <span class="math-container">$a_4$</span>.</p>
438,744
<p>Let $A$ and $B$ be real square matrices of the same size. Is it true that $$\det(A^2+B^2)\geq0\,?$$</p> <p>If $AB=BA$ then the answer is positive: $$\det(A^2+B^2)=\det(A+iB)\det(A-iB)=\det(A+iB)\overline{\det(A+iB)}\geq0.$$</p>
Seirios
36,434
<p>If $A= \left( \begin{matrix} 1 &amp;1 \\ 0 &amp;1 \end{matrix} \right)$ and $B=\left( \begin{matrix} 1&amp;0 \\ n&amp;1 \end{matrix} \right)$, then $\det(A^2+B^2)=4(1-n)$.</p>
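<p>The counterexample is easy to verify directly (a sketch; $n=3$ is an arbitrary choice that makes the determinant negative):</p>

```python
# Verify det(A^2 + B^2) = 4(1 - n) for the matrices above, with n = 3
n = 3
A = [[1, 1], [0, 1]]
B = [[1, 0], [n, 1]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A2 = matmul(A, A)
B2 = matmul(B, B)
S = [[A2[i][j] + B2[i][j] for j in range(2)] for i in range(2)]  # A^2 + B^2

det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
assert det == 4 * (1 - n)
assert det < 0
```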
146,952
<p>I have revised my original post. The questions I asked there were not well-put or even thought through. I don't want to delete, however, since some of the comments may be of interest to other MO users. </p> <p>It is well known that the sum of the reciprocals of the primes diverges. But suppose we decompose the set of prime numbers, call this set $P$, into disjoint infinite sets $\langle A_\alpha\rangle$ in some way to guarantee that each member $A_\alpha$ of the partition has the following "nice" property: $$\Sigma_{p\in A_{\alpha}} \frac{1}{p} \text{ converges}.$$</p> <p>There are by now several interesting partitions in the comments and answers below. Here I include two of my own attempts at such a partition.</p> <p><strong>Attempt 1:</strong> Let $\langle \epsilon_i: i&lt;\omega\rangle$ be a sequence of positive real numbers such that $\Sigma_i \epsilon_i$ diverges (goes to infinity). Suppose you want an infinite $X\subseteq P$ such that $$\Sigma_{p\in X}\frac{1}{p} &lt;\epsilon_1.$$ Then inductively define $X_1$ as follows: Pick $p_1\in P$ least such that $\frac{1}{p_1}&lt;\epsilon_1$. Continuing, suppose $p_1,p_2,\dots,p_n$ have been chosen such that $$\Sigma^n_1\frac{1}{p_i}&lt;\epsilon_1.$$ Then let $p_{n+1}$ be the least such that $$\Sigma^{n+1}_1\frac{1}{p_i}&lt;\epsilon_1.$$ </p> <p>It seems like we could continue this strategy to produce a partition with the desired property by setting $P=P_1$, and $P_2=P_1\setminus X_1$. Then repeat the process described above. If this stage goes all the way through, define $P_3$, etc. It seems that all the $P_n$s will be defined and each will, by construction, converge. </p> <p>But this sort of strategy seems to depend on the parameter sequence $\langle\epsilon_i\rangle$ in an essential way. It may be constructive, but the strategy doesn't really give me a concrete or satisfying (or even really explicit) definition for the partition. 
</p> <p><strong>Attempt 2:</strong> I proposed the following partial strategy (based on the Green-Tao Theorem). Index the primes according to their natural order (i.e., $p_1 =2, p_2 =3, p_3=5$, etc.). Let $P=A_1\cup B_1$ where $$n\in A_1\Longleftrightarrow n \text{ has a prime index},$$ and $n\in B_1$ otherwise. Since $B_1$ has positive upper density, it must contain arbitrarily long arithmetic progressions by the Green-Tao Theorem (see the related question <a href="https://mathoverflow.net/questions/76465/extension-of-tao-green-theorem">Extension of Tao-Green Theorem</a>). I'm not sure if this means $$\Sigma_{p\in B_1}\frac{1}{p} \text{ diverges,}$$ but I think it does. </p> <p>Also, Ben Green comments there that $A$ must contain arbitrarily long arithmetic progressions since the density of this set is roughly $1/\log^2(N)$ (though this may still be a conjecture, I'm not sure). </p> <p>But suppose we attempt to destroy each such arithmetic progression by decomposing both $A_1$ and $B_1$ in the following way: Re-index both $A_1$ and $B_1$ according to their natural order and partition each by placing an $n$ in one set of the partition if it is indexed by a prime, and placing it in the other set in the partition if it is indexed by a composite. </p> <p>If we repeat the construction $k$ times (for both $A_1$ and $B_1$ and all subsequently formed subsets) we must surely eliminate all arithmetic progressions with common difference $k$. But perhaps there are still arithmetic progressions with common difference $k'&gt;k$. So we continue the iteration, creating a binary tree of height $\omega$, the leaves of which union to all of $P$.</p> <p>It is still not clear to me that this partition will have the property I requested above. Certainly there are several nice examples in the comments and answers below. </p>
Hugh Thomas
468
<p>The following is rather more simple-minded than what you are suggesting. </p> <p>Let's say we have a deck of cards, with the cards labelled by the primes in order. We are going to think of constructing the partition by dealing out the cards into piles; the piles will be the parts of the partition. </p> <p>The dealing proceeds in rounds. On the $n$-th round, we deal out a card to the first $2^n$ stacks. </p> <p>So, on the zero-th round, we deal out 1 card, onto the zero-th stack. On the first round, we deal out the second card into the zero-th stack, and the third card into the first stack. On the second round, we deal out the fourth card onto the zero-th stack, the fifth card onto the first stack, the sixth card onto the second stack, and the seventh card onto the third stack. And so on. </p> <p>The zero-th stack therefore gets the first prime number, the second prime number, the fourth prime number, the eighth prime number... The $k$-th prime number is greater than $k$, so the sum of the reciprocals converges. </p> <p>The corresponding reciprocals of other piles are smaller, so they also converge. </p> <p>And it is clear that all the piles eventually acquire infinitely many cards. </p>
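<p>The dealing scheme is concrete enough to simulate (a sketch; in this indexing, pile $j$ receives the primes with 1-based index $2^n+j$, and a sieve bound of $10^6$ comfortably covers the first $2^{15}$ primes):</p>

```python
def primes_upto(limit):
    # simple sieve of Eratosthenes
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            for m in range(i * i, limit + 1, i):
                sieve[m] = False
    return [p for p, is_p in enumerate(sieve) if is_p]

primes = primes_upto(10 ** 6)

def pile(j, rounds):
    # 1-based prime indices dealt to pile j: round n serves the first 2^n piles
    return [2 ** n + j for n in range(rounds) if j < 2 ** n]

assert pile(0, 4) == [1, 2, 4, 8]  # pile 0: 1st, 2nd, 4th, 8th primes, ...
assert pile(1, 4) == [3, 5, 9]     # pile 1 joins from round 1 onwards

# pile 0 has the largest reciprocals; since p_k > k, its reciprocal sum is
# bounded by sum over n of 1/2^n = 2
s = sum(1 / primes[k - 1] for k in pile(0, 16))
assert s < 2
```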
146,952
<p>I have revised my original post. The questions I asked there were not well-put or even thought through. I don't want to delete, however, since some of the comments may be of interest to other MO users. </p> <p>It is well known that the sum of the reciprocals of the primes diverges. But suppose we decompose the set of prime numbers, call this set $P$, into disjoint infinite sets $\langle A_\alpha\rangle$ in some way to guarantee that each member $A_\alpha$ of the partition has the following "nice" property: $$\Sigma_{p\in A_{\alpha}} \frac{1}{p} \text{ converges}.$$</p> <p>There are by now several interesting partitions in the comments and answers below. Here I include two of my own attempts at such a partition.</p> <p><strong>Attempt 1:</strong> Let $\langle \epsilon_i: i&lt;\omega\rangle$ be a sequence of positive real numbers such that $\Sigma_i \epsilon_i$ diverges (goes to infinity). Suppose you want an infinite $X\subseteq P$ such that $$\Sigma_{p\in X}\frac{1}{p} &lt;\epsilon_1.$$ Then inductively define $X_1$ as follows: Pick $p_1\in P$ least such that $\frac{1}{p_1}&lt;\epsilon_1$. Continuing, suppose $p_1,p_2,\dots,p_n$ have been chosen such that $$\Sigma^n_1\frac{1}{p_i}&lt;\epsilon_1.$$ Then let $p_{n+1}$ be the least such that $$\Sigma^{n+1}_1\frac{1}{p_i}&lt;\epsilon_1.$$ </p> <p>It seems like we could continue this strategy to produce a partition with the desired property by setting $P=P_1$, and $P_2=P_1\setminus X_1$. Then repeat the process described above. If this stage goes all the way through, define $P_3$, etc. It seems that all the $P_n$s will be defined and each will, by construction, converge. </p> <p>But this sort of strategy seems to depend on the parameter sequence $\langle\epsilon_i\rangle$ in an essential way. It may be constructive, but the strategy doesn't really give me a concrete or satisfying (or even really explicit) definition for the partition. 
</p> <p><strong>Attempt 2:</strong> I proposed the following partial strategy (based on the Green-Tao Theorem). Index the primes according to their natural order (i.e., $p_1 =2, p_2 =3, p_3=5$, etc.). Let $P=A_1\cup B_1$ where $$n\in A_1\Longleftrightarrow n \text{ has a prime index},$$ and $n\in B_1$ otherwise. Since $B_1$ has positive upper density, it must contain arbitrarily long arithmetic progressions by the Green-Tao Theorem (see the related question <a href="https://mathoverflow.net/questions/76465/extension-of-tao-green-theorem">Extension of Tao-Green Theorem</a>). I'm not sure if this means $$\Sigma_{p\in B_1} \frac{1}{p} \text{ diverges,}$$ but I think it does. </p> <p>Also, Ben Green comments there that $A_1$ must contain arbitrarily long arithmetic progressions since the density of this set is roughly $1/\log^2(N)$ (though this may still be conjecture, I'm not sure). </p> <p>But suppose we attempt to destroy each such arithmetic progression by decomposing both $A_1$ and $B_1$ in the following way: Re-index both $A_1$ and $B_1$ according to their natural order and partition each by placing an $n$ in one set of the partition if it is indexed by a prime, and placing it in the other set of the partition if it is indexed by a composite. </p> <p>If we repeat the construction $k$ times (for both $A_1$ and $B_1$ and all subsequently formed subsets) we must surely eliminate all arithmetic progressions with common difference $k$. But perhaps there are still arithmetic progressions with common difference $k'&gt;k$. So we continue the iteration, creating a binary tree of height $\omega$, the leaves of which union to all of $P$.</p> <p>It is still not clear to me that this partition will have the property I requested above. Certainly there are several nice examples in the comments and answers below. </p>
Aaron Meyerowitz
8,008
<p>There is nothing special going on related to primes or even to integers (see below.) But let me start with the positive integers. What you want can't be harder than partitioning the positive integers similarly. Consider partitioning the positive integers in sequences as follows:</p> <p>$\ 1,\ 3,\ 6,10,15,21\cdots$</p> <p>$\ 2,\ 5,\ 9,14,20,\cdots$</p> <p>$\ 4,\ 8,13,19,26\cdots$</p> <p>$\ 7,12,18,25,33,\cdots$</p> <p>$11,\cdots$</p> <p>The growing triangular pattern should be clear, but to be specific, row $k$ starts with $1+\frac{k(k-1)}{2}$ and then the gaps between successive terms are $k+1,k+2,k+3,\cdots.$ </p> <p>The sum of the reciprocals in the first row is $2$ and each other row has a smaller sum than the one above it.</p> <p>Now replace $n$ in this partition with the $n$th prime instead to get what you want. </p> <p>It would be possible to arrange to have the first row converge quite slowly and then have every following row converge even more slowly (but still converge) and that would work as well. More rapid convergence is possible as well. However the scheme above is specific and easy to follow.</p> <hr> <p>Rather than the sequence of reciprocals of integers or primes, assume merely a sequence of positive reals $ x_1 \ge x_2 \ge \cdots $ such that </p> <ul> <li>$\lim x_k =0$ and</li> <li>$\sum x_k=\infty.$</li> </ul> <p>Suppose we wish to partition into a countable number of infinite subsequences such that the $m$th subsequence has sum $s_m$ where the values $s_1,s_2,\cdots$ are some positive reals. We can't <em>always</em> do this, for example the sum of the $s_m$ has to diverge and we can't have all the $s_m$ less than $\frac{1}{3}$ if $x_3=\frac{1}{2}$. </p> <p>Here is a sufficient condition:</p> <ul> <li>There is a sequence $N_1 \lt N_2 \lt \cdots$ such that $x_m \lt s_{N_m}$</li> </ul> <p>So this allows $s_m=x_m+m!$ (huge sums) or $s_m=x_m(1+\frac1{m!})$ (small sums) as well as a mix of extremely small and extremely large target sums. 
With this condition we can make the subsequences by the greedy rule of considering the $x_i$ in turn and assigning $x_i$ to subsequence $m$ where $m$ is the smallest possible index such that the partial sum will remain strictly less than $s_m.$ This might mean that it takes longer (at 10 placements per second) than the life of the universe to put anything in subsequence $2$ (or to put the second member into subsequence $1$), but infinity is much larger. </p> <p>The condition and greedy rule ensure that $x_m$ will be assigned to subsequence $N_m$ or an earlier one. How do we know that each subsequence is infinite and has the proper sum? Pick any index $m$ and $\varepsilon \gt 0.$ There are $M_1 \lt M_2$ so large that $x_{M_1} \lt \varepsilon$ and $$\sum_{i=M_1}^{M_2}x_i \gt \sum_{j=1}^ms_j.$$ Then there must be some $M_1 \le \ell \le M_2$ with $x_{\ell}$ assigned past subsequence $m$. But this requires that the partial sum for subsequence $k$ is at least $s_k-\varepsilon$ for all $1 \le k \le m.$ </p>
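Both constructions are easy to experiment with. The sketch below (mine, in Python, not from the answer) implements the triangular partition of the positive integers and checks the stated facts about its rows; replacing each entry $n$ by the $n$-th prime then yields the partition of the primes.

```python
def row(k, terms):
    """First `terms` entries of row k (k >= 1): start at 1 + k(k-1)/2,
    then use the gaps k+1, k+2, k+3, ..."""
    out, x, gap = [], 1 + k * (k - 1) // 2, k + 1
    for _ in range(terms):
        out.append(x)
        x += gap
        gap += 1
    return out

assert row(1, 6) == [1, 3, 6, 10, 15, 21]
assert row(4, 4) == [7, 12, 18, 25]

# The rows tile the positive integers with no repeats:
cover = sorted(x for k in range(1, 9) for x in row(k, 10) if x <= 30)
assert cover == list(range(1, 31))

# Row 1 consists of the triangular numbers n(n+1)/2, so its reciprocal sum is
# sum 2/(n(n+1)) = 2; every later row is dominated by the row above it.
s = sum(2 / (n * (n + 1)) for n in range(1, 10000))
assert abs(s - 2) < 1e-3
```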
146,952
Stefan Kohl
28,104
<p>A very simple way to obtain such partition of the primes is to put the $n$-th prime into the $k$-th set in the partition, where $k$ is the number of 1's in the binary representation of $n$.</p>
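As an illustration (my own code, with a naive trial-division prime generator, not part of the answer), the first few primes in each part of this partition look like this:

```python
def nth_primes(count):
    """First `count` primes by trial division (fine at this small scale)."""
    ps, n = [], 2
    while len(ps) < count:
        if all(n % p for p in ps if p * p <= n):
            ps.append(n)
        n += 1
    return ps

PRIMES = nth_primes(100)

def part(k, n_max):
    """Primes whose 1-based index n has exactly k ones in binary."""
    return [PRIMES[n - 1] for n in range(1, n_max + 1) if bin(n).count("1") == k]

assert part(1, 16) == [2, 3, 7, 19, 53]   # indices 1, 2, 4, 8, 16
assert part(2, 6) == [5, 11, 13]          # indices 3, 5, 6
```

Indices with exactly $k$ ones thin out quickly, which is what makes each part's reciprocal sum converge (the convergence itself is not something a finite computation can verify).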
1,332,433
<blockquote> <p>Let <span class="math-container">$f:\mathbb R\to \mathbb R$</span> be such that for every sequence <span class="math-container">$(a_n)$</span> of real numbers,</p> <p><span class="math-container">$\sum a_n$</span> converges <span class="math-container">$\implies \sum f(a_n)$</span> converges</p> <p>Prove there is some open neighborhood <span class="math-container">$V$</span>, such that <span class="math-container">$0\in V$</span> and <span class="math-container">$f$</span> restricted to <span class="math-container">$V$</span> is a linear function.</p> </blockquote> <p>This was given to a friend at his oral exam this week. He told me the problem is hard. I tried it myself, but I haven't made any significant progress.</p>
zhw.
228,045
<p>A start: There exists $\delta &gt; 0$ such that $\frac{f(x)}{x}$ is bounded in $\{0&lt;|x|&lt;\delta\}.$ Proof: Otherwise there is a sequence of nonzero terms $x_n \to 0$ such that</p> <p>$$\left|\frac{f(x_n)}{x_n}\right|&gt; n$$</p> <p>for all $n.$ We can pass to a subsequence $n_k$ such that $|x_{n_k}| &lt; 1/k^2$ for all $k.$ We then have</p> <p>$$(1)\,\,\,\,\left|\frac{f(x_{n_k})}{x_{n_k}}\right|&gt; n_k \ge k.$$</p> <p>Consider now the case where $x_{n_k}, f(x_{n_k})&gt; 0$ for all $k.$ Then for each $k,$ there is a positive integer $m_k$ such that</p> <p>$$(2)\,\,\,\,\frac{1}{k^2} &lt; m_kx_{n_k} &lt; \frac{2}{k^2}.$$</p> <p>Think about the series</p> <p>$$x_{n_1} + x_{n_1} + \cdots + x_{n_1} + x_{n_2} + \cdots + x_{n_2} + x_{n_3} + \cdots ,$$</p> <p>where $x_{n_1}$ occurs $m_1$ times, $x_{n_2}$ occurs $m_2$ times, etc. By $(2)$ this series converges. But note $(1)$ shows the $k$th grouping in the "$f$-series" sums to</p> <p>$$m_kf(x_{n_k}) &gt; m_k\cdot k\cdot x_{n_k} &gt; \frac{1}{k}.$$</p> <p>Thus the $f$-series diverges, contradiction. So we are done in the case $x_{n_k}, f(x_{n_k})&gt; 0.$ The other cases follow by similar arguments.</p>
190,604
<p><strong>Background</strong></p> <p>I am currently trying to solve exercise 1.1.18 in Hatcher's Algebraic Topology. The part of the exercise I am interested in is the following:</p> <blockquote> <p>Using the technique in the proof of Proposition 1.14, show that if a space $X$ is obtained from a path-connected subspace $A$ by attaching an n-cell $e^n$ with $n ≥ 2$, then the inclusion $A \hookrightarrow X$ induces a surjection on $\pi_1$ .</p> </blockquote> <p>I know that $i:A \hookrightarrow X$ induces a homomorphism $i_*:\pi_1(A)\rightarrow \pi_1(X)$, so I only need to show this is a surjection. I think I understand the idea of the proof, which is to show that every loop $f\in \pi_1(X)$ is homotopic to a loop which is contained entirely in $A$. Hatcher's suggestion is to follow the proof $\pi_1(S^n)=0$ for $n\geq 2$, meaning that we should be able to push the sections of $f$ which are in the attached $n$-cell $e^n$ out. This is causing me a bit of trouble.</p> <p><strong>Attempt</strong></p> <p>Since $X$ is defined to be the result of attaching an $n$-cell to $A$ via some attaching map $\varphi:\partial D^n\rightarrow X$, it has the form $X=A \amalg e^n/\sim$, where $x\sim \varphi(x)$ for all $x \in \partial D^n$. Note first that since $A$ and $e^n$ are path connected, the adjunction space $X=A\cup_\varphi e^n$ is path connected. As such, our choice of base point does not affect the structure of $\pi_1(X)$, so let $x_0 \in A$ be the base point of $\pi_1(X)$ we are working over. Let $f \in \pi_1(X,x_0)$. Let $E=\text{Int}(e^n)$ and consider $f^{-1}(E)$. This is an open subset of $(0,1)$, so it is the union of a possibly infinite collection of subsets of $(0,1)$ of the form $(a_i,b_i)$. Let $f_i$ denote the restriction of $f$ to $(a_i,b_i)$. Note that $f_i$ lies in $e^n$ and, in particular, $f(a_i)$ and $f(b_i)$ lie on the boundary of $e^n$, so they are elements of $A$. 
For $n\geq 2$ we can homotope $f_i$ to the path $g_i$ from $f(a_i)$ to $f(b_i)$ that goes along the boundary of $e^n$, which is homeomorphic to $S^{n-1}$, so it is path connected for $n\geq 2$. Since $e^n$ is homeomorphic to $D^n$, where $n\geq 2$, it is simply connected, so $f_i$ and $g_i$ are homotopic. Repeating this process for all $f_i$, we obtain a loop $g$ homotopic to $f$ such that $g(I)\subseteq A$. </p> <p>What really bothers me about this is how I could form a homotopy from $f$ to $g$ consisting of possibly infinitely many individual homotopies from $f_i$ to $g_i$. I believe I need there to only be finitely many $f_i$'s, but I don't see how to show it. </p> <p><strong>Note:</strong> This is not homework.</p>
Holdsworth88
22,437
<p><strong>Proof with Yoyo's suggestion</strong></p> <p>Since $X$ is defined to be the result of attaching an $n$-cell to $A$ via some attaching map $\varphi:\partial D^n\rightarrow X$, it has the form $X=A \amalg e^n/\sim$, where $x\sim \varphi(x)$ for all $x \in \partial D^n$. Note first that since $A$ and $e^n$ are path connected, the adjunction space $X=A\cup_\varphi e^n$ is also path connected. As such, our choice of base point does not affect the structure of $\pi_1(X)$, so let $x_0 \in A$ be the base point of $\pi_1(X)$ we are working over. Let $f \in \pi_1(X,x_0)$. Let $E=\text{Int}(e^n)$ and consider $f^{-1}(E)$. This is an open subset of $(0,1)$, so it is the union of a possibly infinite collection of intervals of the form $(a_i,b_i)$. Let $x \in E$ and let $U$ be an open ball around $x$ in $e^n$. Like before, $f^{-1}(U)$ is an open subspace of $(0,1)$, so it is a union of intervals $(c_i,d_i)$, possibly infinitely many. The preimage $f^{-1}(x)$ is a closed subspace of $I$, hence it is compact. The intervals $(c_i,d_i)$ form an open cover of $f^{-1}(x)$, hence a finite collection of these intervals covers $f^{-1}(x)$. Let $f_i$ denote the restriction of $f$ to $(c_i,d_i)$. Note that $f_i$ lies in the closure of $U$ and, in particular, $f(c_i)$ and $f(d_i)$ lie on the boundary of $U$. For $n\geq 2$ we can homotope $f_i$ to a path $g_i$ from $f(c_i)$ to $f(d_i)$ in the closure of $U$ which is disjoint from $x$, since the closure of $U$ is homeomorphic to $D^{n}$, a closed and convex subset of $\mathbb{R}^n$. This gives a homotopy from $f$ to a path $g$ which misses $x$. Since $g$ is not surjective, it is homotopic to a path $h$ contained entirely in $A$. </p>
3,140,384
<p>Assume you're stranded on an island, and you've got some really important maths to do. Obviously you don't have a calculator with you, so you need to find another way.</p> <p>I know this sounds absurd, but it quite annoys me that, apart from maybe the basic operations (addition through division), computing any other value always ends with "Plug it into the calculator". How do I do square roots? How do I compute sine? Powers, logarithms?</p> <p>Is there a collection of algorithms for these mathematical functions that are efficient enough to do by hand? They don't have to be insanely accurate, just good-enough approximations. I don't mind arbitrary-precision ones, though :)</p>
Community
-1
<p>Like any algorithm, it will depend on your math skills and knowledge. <a href="https://math.stackexchange.com/questions/3116326/is-there-a-faster-method-to-calculate-1-x-x-an-integer-than-this">Here</a> is an algorithm for <span class="math-container">$\frac{1}{x}$</span>, for example. Square roots can be done by the Babylonian method:</p> <ol> <li>guess</li> <li>divide by guess</li> <li>average the results of the first two steps as the new guess.</li> <li>repeat 2 and 3 until the wanted precision is reached or surpassed.</li> </ol> <p>For a very small angle, <span class="math-container">$\sin x\approx x$</span>.</p> <p>For things like exponents you can use</p> <p><a href="https://primes.utm.edu/glossary/page.php?sort=BinaryExponentiation" rel="nofollow noreferrer">binary exponentiation</a>.</p> <p>For remainders of "power towers", repeatedly use things like Fermat's and Euler's theorems, etc. For example:</p> <p><span class="math-container">$213^{451^{4301^{6701^{5403^{3401}}}}} \equiv 13^{11^{13^{5^{3^{1}}}}}\equiv 37 \pmod{100}$</span></p> <p>As to logs, I'm not skilled myself. </p>
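The Babylonian method above is short enough to carry out by hand; a minimal sketch in Python (mine, not from the answer, with a made-up function name):

```python
def babylonian_sqrt(x, tol=1e-12):
    """Heron's / Babylonian method: guess, divide, average, repeat."""
    if x < 0:
        raise ValueError("x must be non-negative")
    if x == 0:
        return 0.0
    guess = x / 2 if x > 1 else x          # any positive starting guess works
    while abs(guess * guess - x) > tol * x:
        guess = (guess + x / guess) / 2    # average the guess with x/guess
    return guess
```

Each iteration roughly doubles the number of correct digits, which is why a few pencil-and-paper rounds already give a usable approximation.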
1,783,439
<p>What will the basis vectors of the subspace of $\mathbb{R}^n$ consisting of those vectors $A=(a_1,\cdots,a_n)$ such that $a_1+\cdots+a_n=0$ look like?</p> <p>The initial problem was: "What is the dimension of the subspace of $\mathbb{R}^n$ consisting of those vectors $A=(a_1,\cdots,a_n)$ such that $a_1+\cdots+a_n=0$?"</p> <p>I solved this question using the linear mapping $L:\mathbb{R}^n \rightarrow \mathbb{R}$ s.t. $L_{I}(X)=I\cdot X$, where $I=(1,\cdots,1); X=(x_1,\cdots,x_n)$. The question asks for $\dim[\ker(L)]$, which is equal to $n-1$.</p> <p>Now, I want to solve this question not by using $\dim[V]=\dim[\operatorname{Im}(F)]+\dim[\ker(F)]$ but by looking at the subspace itself. What will a basis of the subspace asked about in the question look like?</p>
user324795
324,795
<p>Let $e_i$ denote the $i$th coordinate vector in $\mathbb{R}^n$. I claim that a basis for this space is given by $$ e_1 - e_n, e_2 - e_n, \dots, e_{n-1} - e_n. $$ It remains for you to prove that this is a linearly independent set which spans your space. If you're willing to use the fact that this space has dimension $n-1$, then you only need to prove one of these.</p>
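A quick numerical sanity check of this claim (my own sketch using NumPy, not part of the answer) for, say, $n=5$:

```python
import numpy as np

n = 5
e = np.eye(n)
# The claimed basis: e_i - e_n for i = 1, ..., n-1
basis = np.array([e[i] - e[n - 1] for i in range(n - 1)])

assert np.allclose(basis.sum(axis=1), 0)        # each vector has zero coordinate sum
assert np.linalg.matrix_rank(basis) == n - 1    # linearly independent

# Spanning: a zero-sum vector v equals sum_i v_i (e_i - e_n) over i < n,
# i.e. its coefficients are just its first n-1 entries.
v = np.array([3.0, -1.0, 2.0, 0.5, -4.5])       # coordinates sum to 0
assert np.allclose(v[: n - 1] @ basis, v)
```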
628,254
<p>I am searching for some good book which section is devoted to modular arithmetic. I am self learner so I strongly prefer that book has exercises best with answers or solutions. I have CS background and has taken course on discrete mathematics but besides some basic facts on modulo operation it lacked some introduction to modular arithmetic.</p>
Brian Spiering
153,929
<p><a href="http://rads.stackoverflow.com/amzn/click/0486284247" rel="nofollow">Concepts of Modern Mathematics</a> by Ian Stewart is a good introduction to many pure mathematical concepts, including modular arithmetic.</p>
3,723,183
<p>I am working through a text book by <a href="http://www.hds.bme.hu/%7Efhegedus/Strogatz%20-%20Nonlinear%20Dynamics%20and%20Chaos.pdf" rel="nofollow noreferrer">Strogatz Nonlinear dynamics and chaos</a> . In chapter 2 question 2.2.1 , I am looking for an analytical solution. I have the question's answer but would like to ask how a certain step was performed.</p> <h3>Question</h3> <p><strong>Consider the system <span class="math-container">$\dot{x}=4x^{2}-16$</span> Find an analytical solution to the problem.</strong></p> <h3>Answer</h3> <p><span class="math-container">\begin{equation} \dot{x}=4x^{2}-16 \end{equation}</span></p> <p><span class="math-container">\begin{equation} \int \frac{1}{x^{2}-4} dx = \int 4 dt \\ \frac{1}{4} \ln(\frac{x-2}{x+2}) = 4t + C_{1} \\ x = 2 \frac{1 + C_{2}e^{16t}}{1 - C_{2}e^{16t}} \end{equation}</span></p> <p><span class="math-container">\begin{equation} C_{2}(t=0) = \frac{x-2}{x+2} \end{equation}</span></p> <p>where <span class="math-container">$C_{1}$</span> and <span class="math-container">$C_{2}$</span> are constants.</p> <h2>Summary</h2> <p>In the first step to get to <span class="math-container">$\int \frac{1}{x^{2}-4} dx = \int 4 dt $</span> how does this happen? There is an intermediary step/result that is not clear. Any help would be really appreciated.</p> <h2>Edit 1:</h2> <p>In other words, is this step okay? <span class="math-container">\begin{equation} \frac{\dot{x}}{x^{2}-4} = 4\\ \int \frac{1}{x^{2}-4} dx = \int 4 dt \end{equation}</span></p> <h2>Edit 2:</h2> <p>Can I then denote my solution as:</p> <p><span class="math-container">$x(t) = \frac{2(e^{4c_{1}+16t})}{(e^{4c_{1}-16t})}$</span></p>
Community
-1
<p>An algebraic expression needs to have all its terms defined to have a meaning.</p> <p>As</p> <p><span class="math-container">$$\lim_{x\to0}\log x$$</span> is undefined, the whole expression <span class="math-container">$0\cdot\lim_{x\to0}\log x$</span> is undefined.</p> <p>And we also have <span class="math-container">$$(\lim_{x\to0} x)(\lim_{x\to0}\log x)$$</span></p> <p>undefined, while</p> <p><span class="math-container">$$\lim_{x\to0}(x\log x)=0.$$</span></p> <hr /> <p>Also note that <span class="math-container">$0\cdot\infty$</span> is not an expression but an expression <em>pattern</em> which describes a limit of the form</p> <p><span class="math-container">$$\lim_{x\to a}(f(x)g(x))$$</span> where</p> <p><span class="math-container">$$\lim_{x\to a}f(x)=0\text{ and }\lim_{x\to a}g(x)=\infty,$$</span> as in my third example only.</p>
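These three limit statements can be reproduced with a CAS; a small SymPy sketch (mine, for illustration only):

```python
from sympy import symbols, limit, log, oo

x = symbols("x", positive=True)

assert limit(log(x), x, 0) == -oo     # lim log x is not finite ...
assert limit(x, x, 0) == 0            # ... so (lim x)(lim log x) is undefined,
assert limit(x * log(x), x, 0) == 0   # while the combined limit exists and is 0
```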
96,518
<p>Some time ago I had a physics test where I had the following integral: $\int y&#39;&#39; \ \mathrm{d}y$. The idea is that I had a differential equation, and I had acceleration (that is, $y''$) given as a function of position ($y$). The integral was actually equal to something else, but that's not the point. I needed to somehow solve that. I can't integrate acceleration with respect to position, so here's what I did:</p> <p>$$ \int y&#39;&#39; \ \mathrm{d}y = \int \frac{\mathrm{d}y&#39;}{\mathrm{d}t} \ \mathrm{d}y = \int \mathrm{d}y&#39; \frac{\mathrm{d}y}{\mathrm{d}t} = \int y&#39; \ \mathrm{d}y&#39; = \frac1{2}y&#39;^2 + C $$</p> <p>My professor said this was correct and it makes sense, but doing weird stuff with differentials and such never completely satisfies me. Is there a substitution that justifies this procedure?</p>
user18063
18,063
<p>When dealing with differentials, we generally have a "manipulate first, ask questions later" attitude. Differentials have a way of giving out correct results when manipulated formally but it's a mistake to think that something meaningful is going on. Okay, this is not totally true: there are non-traditional approaches to calculus where some of these manipulations can be rigorously justified, but I couldn't tell you much about them. The most popular of these approaches is due to H.J. Keisler. More info <a href="http://www.math.uiowa.edu/~stroyan/InfsmlCalculus/InfsmlCalc.htm" rel="nofollow">here</a>.</p>
1,405,637
<p>Given that $U_n=[x(1-x)]^n$ and $n\geq2$, and $V_n=\int_{0}^{1}e^xU_n\,dx$, prove that $V_n+2n(2n-1)V_{n-1}-n(n-1)V_{n-2}=0$.<br></p> <p>I tried to solve it by integration by parts, taking $U_n$ as the first function and $e^x$ as the second function.<br></p> <p>$V_n=-\int_{0}^{1}n[x(1-x)]^{n-1}(1-2x)e^x\,dx=-\int_{0}^{1}nU_{n-1}(1-2x)e^x\,dx$<br></p> <p>I could not proceed further; how can I reach the desired result? Please help.</p>
Nate Eldredge
822
<p>Hint: use the Borel-Cantelli lemma to show that $$P(X_n \ne 2 \text{ i.o.}) = 0.$$ (Independence is not needed.)</p>
3,962,427
<p>I have proven that the sequence is convergent.</p> <p>Let <span class="math-container">$x_n=\frac{(2n)!!}{(2n+1)!!}$</span></p> <p><span class="math-container">$\frac{x_{n+1}}{x_n}$</span> <span class="math-container">$=\frac{2n+2}{2n+3}&lt;1$</span> Therefore the sequence decreases. On the other hand <span class="math-container">$x_n&gt;0$</span>. Therefore the sequence is bounded below. Since we have a sequence that is bounded below and decreases the sequence must converge.</p> <p>But from here I don’t know how to find the limit. Could you please help me? And could you please say if what I’ve done is necessary or not?</p> <p>Thanks in advance!</p>
B. Goddard
362,009
<p>See that</p> <p><span class="math-container">$$x_n = \prod_{k=1}^n \frac{2k}{2k+1}, $$</span></p> <p>so that</p> <p><span class="math-container">$$\ln x_n = \sum_{k=1}^n \ln \left(\frac{2k}{2k+1}\right). $$</span></p> <p>So we work with this series. First argue that</p> <p><span class="math-container">$$\sum_{k=1}^{\infty} \ln\left(\frac{2k}{2k+1}\right) - \int_{t=1}^{\infty} \ln\left(\frac{2t}{2t+1}\right) \; dt$$</span></p> <p>is finite. (If you graph the summand and the integrand, they differ by a set of little (near) triangles. Because the functions are monotonic, the triangles can be pushed together and fit in a small rectangle.)</p> <p>Integrating by parts (or integrating each logarithm directly) gives</p> <p><span class="math-container">$$ \int \left(\ln(2t) - \ln(2t+1)\right) \; dt = t \ln(2t) -\frac{2t+1}{2}\ln(2t+1).$$</span></p> <p>And when you evaluate the improper integral you get <span class="math-container">$-\infty.$</span> So the limit of the sequence is <span class="math-container">$e^{-\infty} = 0.$</span></p>
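The conclusion is easy to check numerically (a sketch of mine, not part of the argument). Note the convergence to $0$ is slow: by a Wallis-type estimate, $x_n$ behaves like $\sqrt{\pi/(4n)}$.

```python
def x(n):
    """Partial product x_n = prod_{k=1}^n 2k/(2k+1) = (2n)!!/(2n+1)!!."""
    p = 1.0
    for k in range(1, n + 1):
        p *= 2 * k / (2 * k + 1)
    return p

assert abs(x(1) - 2 / 3) < 1e-15
assert x(100) > x(1000) > x(10000)   # monotonically decreasing
assert x(10000) < 0.01               # consistent with sqrt(pi/40000) ~ 0.0089
```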
2,846,928
<blockquote> <p>If I have a complex number $z \in \mathbb{C}$ with absolute value $|z| = 1$, how do I show that $-i \frac {z-1}{z+1}$ is real? </p> </blockquote>
Math Lover
348,257
<p>Note that $$-i\frac{e^{i\theta}-1}{e^{i\theta}+1}\times \frac{e^{-i\theta/2}}{e^{-i\theta/2}}=-i\frac{e^{i\theta/2}-e^{-i\theta/2}}{e^{i\theta/2}+e^{-i\theta/2}}=\tan(\theta/2).$$</p>
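A numeric spot-check of this identity (my own sketch) on a few points of the unit circle, avoiding $z=-1$ where the expression is undefined:

```python
import cmath
import math

for theta in (0.3, 1.1, 2.5, -0.7):
    z = cmath.exp(1j * theta)                 # |z| = 1
    w = -1j * (z - 1) / (z + 1)
    assert abs(w.imag) < 1e-12                # the value is real
    assert abs(w.real - math.tan(theta / 2)) < 1e-12
```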
2,846,928
WimC
25,313
<p>If $\lvert z \rvert = 1$ then $\overline{z}=z^{-1}$. So complex conjugation of your expression results in $$i\frac{z^{-1}-1}{z^{-1}+1} = -i \frac{z-1}{z+1},$$ which is the original expression. A number that equals its own conjugate is real.</p>
1,800,611
<p>I've setup a recurrence relation as part of a numerical analysis problem, and found that $$x_{n+1} = \frac{x_n+1}{2}$$</p> <p>The notes then say that for $x_0=0$, it is easy to show that $$x_k = 1 - 2^{-k}$$</p> <p>Plugging in a few numbers shows this is clearly correct, and equating $x_{n+1}=x_n$ shows it converges to one - but how might one come up with $x_k$ formula in the first place (rather than just being able to verify it)?</p>
ekkilop
284,417
<p>Another fun way of doing it is by means of a generating function. Define $$ A(t) = \sum_{n\geq 0} x_nt^n = x_0 + x_1 t + x_2 t^2 + \dots \quad (1) $$ for real $|t|&lt;1$. Then multiply your recursion formula by $t^n$ and sum to get $$ \sum_{n\geq 0} x_{n+1} t^n = \frac{1}{2} \sum_{n\geq 0} x_n t^n + \frac{1}{2} \sum_{n\geq 0} t^n. $$ Note that the left-hand side can be written as $$ \frac{1}{t} (\sum_{n\geq 0}x_nt^n - x_0) = \frac{A(t)}{t}, $$ and the right-hand side as $$ \frac{1}{2}A(t) + \frac{1}{2} \frac{1}{1-t}, $$ where we used the formula for a geometric series. Solving for $A(t)$ is easy and gives us $$ A(t) = \frac{t}{(1-t)(2-t)}. $$ Now Taylor expand $A(t)$ around $t=0$ to find \begin{equation} A(t) = 0 + \frac{1}{2}t + \frac{3}{4}t^2 + \frac{7}{8}t^3 + \dots + (1-2^{-k})t^k + \dots \quad (2) \end{equation} Comparing (1) and (2) we see that indeed $x_k = (1-2^{-k})$.</p>
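Either derivation is easy to confirm by iterating the recursion directly (a sketch of mine, not part of the answer):

```python
def iterate(n):
    """Return x_0, ..., x_n under x_{k+1} = (x_k + 1)/2 with x_0 = 0."""
    xs, x = [], 0.0
    for _ in range(n + 1):
        xs.append(x)
        x = (x + 1) / 2
    return xs

xs = iterate(20)
# Compare against the closed form x_k = 1 - 2^{-k}
assert all(abs(xs[k] - (1 - 2 ** -k)) < 1e-15 for k in range(21))
```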
503,808
<p>Let $\mathbb{K} = \mathbb{Q}[\sqrt[3]{5}] \ $, and let $\mathbb{L}$ be the normal closure of $\mathbb{K}$. </p> <p>Let $\mathbb{O}_{\mathbb{K}} \ $ be the integral closure of $\mathbb{Z}$ in $K$ and $\mathbb{O}_{\mathbb{L}} \ $ be the integral closure of $\mathbb{Z}$ in $\mathbb{L}$. I want to find the factorization of the primes $7, 11 \ $ and $13$ as ideals in $\mathbb{O}_{\mathbb{K}} \ $ and $\mathbb{O}_{\mathbb{L}} \ $. My question is : how can I do that without knowing an integral basis for $\mathbb{K}$ and $\mathbb{L}$ ?</p>
user8268
8,268
<p>A usual method is to look at $f=x^3-5$ in $\mathbb F_p[x]$ ($p$ the prime you want to factorize). If $f$ has no multiple factors then the factorization of $f$ gives you the factorization of $p$; $p$ is unramified. Namely, $f=f_1f_2...f_k$ means $(p)=P_1P_2...P_k$, the norm of $P_i$ is $p$ to the degree of $f_i$, and $P_i=(p,f_i(\alpha))$ ($\alpha=\sqrt[3]{5}$). This is the case for all $p$'s you consider, as $f'=3x^2$, so multiple factors can only occur for $p=3$ or $p=5$.</p> <p>As an example, for $p=7$, $x^3-5$ has no root in $\mathbb F_7[x]$ (if I didn't miss it), i.e. $(p)$ stays a prime ideal in $O_K$.</p> <p>Should it fail (i.e. should $f$ have multiple factors in $\mathbb F_p[x]$), one can always look at the factorization of $f$ in $\mathbb Q_p[x]$ + some (not so small) details.</p>
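Concretely, since a cubic over $\mathbb F_p$ is reducible exactly when it has a root, brute-force root-finding already exhibits the three behaviours at $p=7,11,13$ (my own sketch, not from the answer; none of these primes divides $\operatorname{disc}(x^3-5)=-675$, so the method applies):

```python
def roots_mod(p):
    """Roots of x^3 - 5 in F_p by brute force."""
    return [a for a in range(p) if (a ** 3 - 5) % p == 0]

assert roots_mod(7) == []           # no root => the cubic is irreducible,
                                    # so (7) stays prime in O_K
assert roots_mod(11) == [3]         # one simple root; the quadratic cofactor
                                    # has no root, so (11) = P1 * P2
assert roots_mod(13) == [7, 8, 11]  # three roots => (13) = P1 * P2 * P3
```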