3,403,855
<p>Construct an example of a set of real numbers E that has no points of accumulation and yet has the property that for every ε > 0 there exist points x, y ∈ E so that 0 &lt; |x − y| &lt; ε.</p> <p>So I know we need a convergent sequence to show that the difference between two elements can be as small as we like; also, it should be a set, not an interval, to avoid accumulation points. But if a sequence converges to L, wouldn't L be an accumulation point too?</p>
José Carlos Santos
446,262
<p>You can take the sequence <span class="math-container">$\left(\sqrt n\right)_{n\in\mathbb N}$</span>, for instance. Can you prove that it works?</p>
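As a quick numerical sanity check (mine, not part of the answer), both required properties of $E=\{\sqrt n : n\in\mathbb N\}$ can be seen in a few lines of Python: the successive gaps shrink to $0$, while the points themselves grow without bound.

```python
import math

# Successive gaps sqrt(n+1) - sqrt(n) = 1/(sqrt(n+1) + sqrt(n)) shrink to 0,
# so for every eps > 0 there are x, y in E with 0 < |x - y| < eps.
def gap(n):
    return math.sqrt(n + 1) - math.sqrt(n)

assert gap(10) > gap(100) > gap(10_000)   # gaps are decreasing
assert gap(10_000) < 0.01                 # and get arbitrarily small

# Yet the terms run off to infinity, so no real L can be an accumulation point:
assert math.sqrt(10_000) == 100.0
```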
heropup
118,193
<p>More generally, all that is required for the existence of <span class="math-container">$x,y \in E$</span> satisfying <span class="math-container">$0 &lt; |x-y| &lt; \epsilon$</span> is that there exist arbitrarily close points in <span class="math-container">$E$</span>; this does not mean that the set <span class="math-container">$E$</span> must be bounded. For <span class="math-container">$E \subseteq \mathbb R^+$</span>, we can think of this as the existence of strictly increasing sequences of positive reals with no finite limit, but whose successive differences tend to <span class="math-container">$0$</span>. We are already familiar with many examples of such sequences; indeed, many are found among the <strong>partial sums</strong> of certain infinite series: Let the sequence <span class="math-container">$\{s_n\}_{n \ge 0}$</span> be defined by <span class="math-container">$$s_n = \sum_{k=0}^n a_k,$$</span> where <span class="math-container">$\{a_k\}_{k \ge 0}$</span> is an infinite sequence of strictly decreasing positive reals; that is, <span class="math-container">$a_k &gt; a_m &gt; 0$</span> whenever <span class="math-container">$k &lt; m$</span>. If <span class="math-container">$\displaystyle \lim_{n \to \infty} s_n$</span> exists and is finite (that is to say, the infinite series converges to some value <span class="math-container">$L$</span>), then <span class="math-container">$L$</span> is an accumulation point of the sequence of partial sums. But if it does not (and there are many examples of divergent series), then as long as we have <span class="math-container">$a_k \to 0$</span> as <span class="math-container">$k \to \infty$</span>, we will have a sequence <span class="math-container">$\{s_n\}$</span> that meets your criteria: no accumulation point, yet successive terms whose absolute difference can be made arbitrarily small.</p> <p>There also exist rather more "pathological" examples; e.g., <span class="math-container">$$s_n = 2^{\lceil \log_2 n \rceil} - 2^{-n},$$</span> which has the desired property of no accumulation point, yet arbitrarily small <strong>and</strong> arbitrarily large differences in successive terms.</p>
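A concrete instance of this construction (the choice of the harmonic series here is mine; any divergent series with decreasing terms tending to $0$ works) can be sketched in Python:

```python
# Partial sums of the harmonic series: a_k = 1/k is strictly decreasing with
# a_k -> 0, yet the series diverges, so {s_n} is unbounded (no accumulation
# point) while successive differences s_n - s_{n-1} = 1/n get arbitrarily small.
def s(n):
    return sum(1.0 / k for k in range(1, n + 1))

assert 9.0 < s(10**4) < 10.0                              # H_n ~ ln n, unbounded
assert abs((s(10**4) - s(10**4 - 1)) - 1.0 / 10**4) < 1e-10  # gap equals a_n
```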
4,092,643
<p>The solutions manual says</p> <p><span class="math-container">$$\lim_{x \to \infty}e^{-x^2}\int_{x}^{x+\frac1x}e^{t^2}dt=\lim_{x \to \infty}\frac{e^{(x+\frac1x)^2}-e^{x^2}}{2xe^{x^2}}$$</span></p> <p>I'm trying to understand how they arrived there. Using L'Hôpital's rule, I have</p> <p><span class="math-container">$$\lim_{x \to \infty}e^{-x^2}\int_{x}^{x+\frac1x}e^{t^2}dt= \lim_{x \to \infty}\frac{\int_{0}^{x+\frac1x}e^{t^2}dt-\int_{0}^{x}e^{t^2}dt}{e^{x^2}}= \lim_{x \to \infty}\frac{e^{(x+\frac1x)^2}(1-\frac1{x^2})-e^{x^2}}{2xe^{x^2}}$$</span></p> <p>I'm getting this additional factor <span class="math-container">$(1-\frac1{x^2})$</span> in <span class="math-container">$\frac{e^{(x+\frac1x)^2}(1-\frac1{x^2})-e^{x^2}}{e^{x^2}2x}$</span>, which is <span class="math-container">$\frac{d}{dx}(x+\frac1x)$</span> and comes from the application of the chain rule.</p> <p>Is the chain rule not applicable there? My thinking was: let <span class="math-container">$F(x)=\int_{0}^{x}e^{t^2}dt$</span> and <span class="math-container">$G(x)=x+\frac1x$</span>. Then <span class="math-container">$\int_{0}^{x+\frac1x}e^{t^2}dt=F(G(x))$</span>, which is a composition of functions, so I must apply the chain rule.</p>
Barry Cipra
86,747
<p>Yes, the additional factor <span class="math-container">$(1-1/x^2)$</span> belongs there if you're doing L'Hôpital. (Note that whether it's there or not, the limit turns out to be <span class="math-container">$0$</span>, so the solution manual's assertion isn't technically wrong; it could claim to be merely skipping a step, but that's a rather big step to skip, and it's more likely an oversight on the part of the manual.)</p> <p>As an alternative, without using L'Hôpital, we can use the fact that <span class="math-container">$e^{t^2}$</span> is strictly increasing for <span class="math-container">$t\ge0$</span> to obtain the inequalities</p> <p><span class="math-container">$${e^{x^2}\over x}\le\int_x^{x+1/x}e^{t^2}\,dt\le{e^{(x+1/x)^2}\over x}={e^{x^2+2+1/x^2}\over x}$$</span></p> <p>so that</p> <p><span class="math-container">$${1\over x}\le e^{-x^2}\int_x^{x+1/x}e^{t^2}\,dt\le{e^{2+1/x^2}\over x}$$</span></p> <p>and now use the Squeeze Theorem to obtain</p> <p><span class="math-container">$$\lim_{x\to\infty}e^{-x^2}\int_x^{x+1/x}e^{t^2}\,dt=0$$</span></p>
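A numerical sketch of the squeeze (mine, not part of the answer): writing $e^{-x^2}\int_x^{x+1/x}e^{t^2}\,dt$ as $\int_x^{x+1/x}e^{t^2-x^2}\,dt$ avoids overflow, and the value stays between the two bounds and shrinks toward $0$. The function name is mine.

```python
import math

def scaled_integral(x, steps=10_000):
    # e^{-x^2} * integral_x^{x+1/x} e^{t^2} dt, computed as the integral of
    # e^{t^2 - x^2} (midpoint rule) so the exponent stays near 2, not x^2.
    h = (1.0 / x) / steps
    return sum(math.exp((x + (k + 0.5) * h)**2 - x * x) for k in range(steps)) * h

# The squeeze bounds 1/x <= value <= e^{2 + 1/x^2}/x from the answer:
for x in (5.0, 10.0, 50.0):
    val = scaled_integral(x)
    assert 1.0 / x <= val <= math.exp(2 + 1.0 / x**2) / x
```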
1,384,053
<p>Which number is bigger: $1.01^{101}$ or $2$? And how about $e^{\pi}$ or $\pi^e$?</p> <p>I tried some algebraic manipulations to no end, so I would love some suggestions or different ways to approach this kind of problem.</p>
Dr. Sonnhard Graubner
175,066
<p>HINT: to show that $$e^{\pi}&gt;\pi^{e},$$ use the function $$f(x)=\sqrt[x]{x}.$$</p>
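A quick numerical check of both comparisons (this only verifies the claims, it does not prove them):

```python
import math

assert 1.01 ** 101 > 2                         # in fact about 2.73
assert math.e ** math.pi > math.pi ** math.e   # about 23.14 vs about 22.46

# The hint's function f(x) = x^(1/x) is maximised at x = e, so f(e) > f(pi),
# i.e. e^(1/e) > pi^(1/pi); raising both sides to the power e*pi gives e^pi > pi^e.
f = lambda x: x ** (1.0 / x)
assert f(math.e) > f(math.pi)
```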
3,688,680
<p>I know the Cantor set and the rational numbers in <span class="math-container">$\mathbb{R}$</span> are meagre. But they both have measure zero.</p> <p>So is there any meagre set of non-zero measure?</p>
CHAMSI
758,100
<p>We have : <span class="math-container">$$ \lim_{x\to 0}{\sqrt{x}\frac{\sin^{2}{x}}{x^{\frac{5}{2}}}}=\lim_{x\to 0}{\left(\frac{\sin{x}}{x}\right)^{2}}=1 $$</span></p> <p>Thus : <span class="math-container">$$ \frac{\sin^{2}{x}}{x^{\frac{5}{2}}}\underset{x\to 0}{\sim}\frac{1}{\sqrt{x}} $$</span></p> <p>Since <span class="math-container">$ \int_{0}^{1}{\frac{\mathrm{d}x}{\sqrt{x}}} $</span> converges, <span class="math-container">$ \int_{0}^{1}{\frac{\sin^{2}{x}}{x^{\frac{5}{2}}}\,\mathrm{d}x} $</span> does also converge.</p>
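A numerical sketch (mine) consistent with the convergence claim: the substitution $x=u^2$ removes the $1/\sqrt{x}$ singularity, turning the integral into $\int 2\sin^2(u^2)/u^4\,du$ with a smooth integrand tending to $2$ at $0$, and the truncated integrals stabilise as the lower limit tends to $0$.

```python
import math

def tail_integral(eps, steps=200_000):
    # integral_eps^1 sin(x)^2 / x^(5/2) dx via x = u^2 (dx = 2u du), i.e.
    # integral_{sqrt(eps)}^1 2*sin(u^2)^2 / u^4 du, by the midpoint rule.
    a, b = math.sqrt(eps), 1.0
    h = (b - a) / steps
    return sum(2 * math.sin((a + (k + 0.5) * h) ** 2) ** 2 / (a + (k + 0.5) * h) ** 4
               for k in range(steps)) * h

# Values stabilise as eps -> 0, consistent with convergence of the integral:
assert abs(tail_integral(1e-4) - tail_integral(1e-8)) < 0.05
```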
206,318
<p>For plots like the one shown below, what is the syntax for adding filling between particular lines and the axis, but only in the negative region:</p> <p><a href="https://i.stack.imgur.com/X4TME.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X4TME.png" alt="enter image description here"></a></p>
Lukas Lang
36,508
<p>@Kuba's approach using up-values works very nicely for simple cases such as the examples given in the question. However, it does not work if the <code>Want["..."]</code> expression is nested deeper inside a held expression, as noted in the comments.</p> <p>Here is an approach using <a href="https://reference.wolfram.com/language/ref/MakeExpression.html" rel="nofollow noreferrer"><code>MakeExpression</code></a> that processes any expressions of the form <code>Want["..."]</code> before any evaluation happens:</p> <pre><code>MakeExpression[RowBox@{"Want", "[", str_, "]"}, StandardForm] :=
  ToExpression[ToExpression@str, StandardForm, HoldComplete]
</code></pre> <p>This works for all cases I can think of:</p> <pre><code>(* example in question *)
G[x_, y_] := Want["x^2+y^2"]
G[3, 4]
(* 25 *)

(* more deeply nested, as in comments to @Kuba's answer *)
G[x_, y_] := Want["x^2+y^2"] + 1
G[3, 4]
(* 26 *)

(* with the string stored in a variable *)
str = "x^2+y^2";
G[x_, y_] := Want[str] + 1
G[3, 4]
(* 26 *)
</code></pre> <p>As noted by @Kuba in the comments, this of course only works if <code>MakeExpression</code> is actually called. Notable exceptions where this is not the case are text-based front-ends and package files imported via <code>Get</code>.</p>
63,723
<p>How to find $f&#39;(a)$ where $f(x) = \sqrt{1-2x}$?</p> <p>I am not too sure what to do; no matter what I do, I can't get the correct answer. I know it is simple algebra, but I can't figure it out.</p>
Bill Dubuque
242
<p>My answer to your <a href="https://math.stackexchange.com/questions/60220/finding-lim-limits-h-to-0-frac-sqrt9h-3h/60235#60235">prior question</a> shows how to compute a more general derivative. Namely if $\rm\ f(x)\: = \ f_0 + f_1\ (x-a) +\:\cdots\:+f_n\ (x-a)^n\:$ and $\rm\: f_0 \ne 0\:$ then rationalizing the numerator below </p> <p>$$\rm \lim_{x\:\to\: a}\ \dfrac{\sqrt{f(x)}-\sqrt{f_0}}{x-a}\ = \ \lim_{x\:\to\: a}\ \dfrac{f(x)-f_0}{(x-a)\ (\sqrt{f(x)}+\sqrt{f_0})}\ =\ \lim_{x\:\to\: a}\ \dfrac{f_1+\:\cdots\: + f_n\:(x-a)^{n-1}}{\sqrt{f(x)}+\sqrt{f_0}}\ =\ \dfrac{f_1}{2\ \sqrt{f_0}}$$</p> <p>Your current problem is merely the special case $\rm\ f(x) = 1-2\:x\: =\: 1-2\:a-2\:(x-a)\:,\:$ therefore $\rm\:f_0 = 1-2\:a,\ \ f_1 = -2\:.\ $ If something about this proof is not clear then please ask questions in the comments here or there (vs. posing more minor variants on such problems).</p>
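A numerical sketch (mine) of the conclusion $f'(a) = f_1/(2\sqrt{f_0})$ in this special case, i.e. $\frac{d}{dx}\sqrt{1-2x}\big|_{x=a} = -1/\sqrt{1-2a}$:

```python
import math

# f(x) = 1 - 2x has f_0 = 1 - 2a, f_1 = -2, so the formula f_1/(2*sqrt(f_0))
# predicts derivative -1/sqrt(1 - 2a) for sqrt(1 - 2x) at x = a (a < 1/2).
def sqrt_f(x):
    return math.sqrt(1 - 2 * x)

def central_diff(g, a, h=1e-6):
    return (g(a + h) - g(a - h)) / (2 * h)

for a in (0.0, 0.25, -1.0):
    expected = -1.0 / math.sqrt(1 - 2 * a)
    assert abs(central_diff(sqrt_f, a) - expected) < 1e-6
```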
3,069,244
<p>Consider the following functional <span class="math-container">$\Phi:\mathbb R^n\to\mathbb R $</span>: <span class="math-container">$$ \Phi(x)=\sum_{i=1}^{n-1}(1+x_i)(x_i-x_n)^2(2(1+x_i+x_n)+x_i x_n-x_1). $$</span> Computer experiments show that it is non-negative for all <span class="math-container">$x_i\geq 0$</span>, and I need to prove this. Note that we have both <span class="math-container">$\Phi(x)=0$</span> and <span class="math-container">$\nabla \Phi(x)=0$</span> for all <span class="math-container">$x$</span> with equal coordinates. The proof should be simple, but I can't manage to find it. Any ideas?</p>
Aphelli
556,825
<p>Use the addition formula and simplify the <span class="math-container">$\cosh(\tanh^{-1}(\cdot))$</span> and <span class="math-container">$\sinh(\tanh^{-1}(\cdot))$</span>; your integrand becomes <span class="math-container">$2\cosh(3x)(1-3x^2)+4x\sinh(3x)$</span>.</p> <p>Note that <span class="math-container">$2\cosh(3x)(1-3x^2)-4x\sinh(3x)$</span> is the derivative of the function <span class="math-container">$\frac{2}{3}\sinh(3x)(1-3x^2)$</span>, so its integral is <span class="math-container">$\frac{4}{9e}(e^2-1)$</span>. </p> <p>So it remains to integrate <span class="math-container">$8x\sinh(3x)$</span>. By parts, this is <span class="math-container">$\frac{16}{9}\cosh(1)-\frac{8}{3}\int{\cosh(3x)}=\frac{8}{9e}(e^2+1)-\frac{8}{9e}(e^2-1)$</span>. </p> <p>Summing finally yields <span class="math-container">$\frac{4e^2+12}{9e}$</span>. </p>
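A numerical check (mine): the bounds of integration are not stated explicitly above, but the intermediate values such as $\frac{4}{9e}(e^2-1)$ correspond to integrating from $-1/3$ to $1/3$, so that is assumed in this sketch.

```python
import math

# Midpoint-rule integral of the simplified integrand over [-1/3, 1/3]
# (bounds inferred from the intermediate values; an assumption, see above).
def integrand(x):
    return 2 * math.cosh(3 * x) * (1 - 3 * x ** 2) + 4 * x * math.sinh(3 * x)

steps = 100_000
a, b = -1.0 / 3.0, 1.0 / 3.0
h = (b - a) / steps
numeric = sum(integrand(a + (k + 0.5) * h) for k in range(steps)) * h

closed_form = (4 * math.e ** 2 + 12) / (9 * math.e)
assert abs(numeric - closed_form) < 1e-8
```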
82,745
<p>Is it possible to simplify the following expression involving instances of the Gamma function:</p> <p>$$E(p)=\frac{\frac{\Gamma(\frac{p+1}{2})}{\Gamma(\frac{p+2}{2})}} {\left(\frac{\Gamma(\frac{p+1}{p})^2}{\Gamma(\frac{p+2}{p})}\right)^{\frac{p+2}{2}}}$$</p> <p>where $p$ is rational (or even real) and $p\geq2$? The bottom part of the expression $E$ comes from the formula for the area of a superellipse, i.e., supercircle:</p> <p>$$\mid x\mid ^p + \mid y \mid ^p =r^p,\ p\geq 2$$</p> <p>and the rest is related to that also. Thanks in advance.</p>
Robert Israel
13,650
<p>The only parts of this that can be simplified at all are $ \Gamma \left( {\frac {p+1}{p}} \right) ={\frac {\Gamma \left( \frac{1}{p} \right) }{p}}$, and similarly for $\Gamma\left(\frac{p+2}{p}\right)$ and $\Gamma\left(\frac{p+2}{2}\right)$</p>
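This identity is just $\Gamma(1+z)=z\,\Gamma(z)$ with $z=1/p$; a quick numerical check (mine):

```python
import math

# Gamma((p+1)/p) = Gamma(1 + 1/p) = (1/p) * Gamma(1/p) = Gamma(1/p) / p
for p in (2.0, 3.0, 7.5):
    lhs = math.gamma((p + 1) / p)
    rhs = math.gamma(1 / p) / p
    assert abs(lhs - rhs) < 1e-12 * abs(rhs)
```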
Eric Naslund
12,176
<p>I guess it depends on what you mean by simplify. We could rewrite things in terms of (generalized) central binomial coefficients:</p> <p>First the denominator: Notice that $$\frac{\Gamma\left(1+\frac{1}{p}\right)^{2}}{\Gamma\left(1+\frac{2}{p}\right)}=\binom{\frac{2}{p}}{\frac{1}{p}}^{-1}=\frac{1}{2p}\frac{\Gamma\left(\frac{1}{p}\right)^{2}}{\Gamma\left(\frac{2}{p}\right)}.$$ For the numerator $$\frac{\Gamma\left(\frac{p+1}{2}\right)}{\Gamma\left(\frac{p+2}{2}\right)}=\frac{\Gamma\left(\frac{p+1}{2}\right)^{2}}{\Gamma\left(\frac{p+1}{2}+\frac{1}{2}\right)\Gamma\left(\frac{p+1}{2}\right)}=\frac{\Gamma\left(\frac{p+1}{2}\right)^{2}}{\sqrt{\pi}2^{-p}\Gamma\left(p+1\right)}=\frac{2^{p}}{p\sqrt{\pi}}\binom{p-1}{\frac{p-1}{2}}^{-1}$$ so the fraction becomes $$\frac{2^{p}}{p\sqrt{\pi}}\binom{\frac{2}{p}}{\frac{1}{p}}^{\frac{p+2}{2}}\biggr/\binom{p-1}{\frac{p-1}{2}}.$$ You could also write it using the beta function, then it is $$\frac{2^{\frac{3p+2}{2}}p^{\frac{p+2}{2}}}{\sqrt{\pi}}\frac{\text{B}\left(\frac{p+1}{2},\frac{p+1}{2}\right)}{\text{B}\left(\frac{1}{p},\frac{1}{p}\right)^{\frac{p+2}{2}}}.$$ To clean it up, it feels like you need a nicer way to write $\Gamma\left(\frac{1}{p}\right)^{p}$. It looks like a multinomial coefficient. </p> <p>Now, there is a way to rewrite everything as a multidimensional integral over a simplex, and I find this to be the cleanest way to rewrite it. This is related to a generalization of the Beta Function. Tell me if this interests you, and I can include it.</p>
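A numerical sketch (mine) checking that the combined binomial form agrees with the original $E(p)$ (function names `E` and `gbinom` are mine, with the generalized binomial coefficient taken via Gamma):

```python
import math

def gbinom(n, k):
    # generalized binomial coefficient: Gamma(n+1) / (Gamma(k+1) * Gamma(n-k+1))
    return math.gamma(n + 1) / (math.gamma(k + 1) * math.gamma(n - k + 1))

def E(p):
    num = math.gamma((p + 1) / 2) / math.gamma((p + 2) / 2)
    den = (math.gamma((p + 1) / p) ** 2 / math.gamma((p + 2) / p)) ** ((p + 2) / 2)
    return num / den

def E_binom(p):
    # 2^p / (p*sqrt(pi)) * C(2/p, 1/p)^((p+2)/2) / C(p-1, (p-1)/2)
    return (2 ** p / (p * math.sqrt(math.pi))
            * gbinom(2 / p, 1 / p) ** ((p + 2) / 2)
            / gbinom(p - 1, (p - 1) / 2))

for p in (2.0, 3.0, 5.5):
    assert abs(E(p) - E_binom(p)) < 1e-10 * abs(E(p))
```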
3,009,477
<p>I want to determine whether <span class="math-container">$f(x) = x+\sin x$</span> is a homeomorphism on <span class="math-container">$\mathbb{R}$</span>.</p> <p>A bijective continuous function is a homeomorphism if its inverse is also continuous. I know that <span class="math-container">$f$</span> is bijective, and <span class="math-container">$f$</span> is continuous, being the sum of two continuous functions.</p> <p>How do I check the continuity of <span class="math-container">$f^{-1}$</span>?</p>
Robert Lewis
67,071
<p>As is pointed out in the comments, the construction which replaces <span class="math-container">$xy$</span> with</p> <p><span class="math-container">$x \circ y = xy + yx \tag 1$</span></p> <p>will leave the ring <span class="math-container">$\Bbb Z$</span> unitless under the multiplication operation "<span class="math-container">$\circ$</span>"; first we note that since <span class="math-container">$H$</span> is commutative,</p> <p><span class="math-container">$x \circ y = xy + yx = xy + xy = 2xy; \tag 2$</span></p> <p>then if </p> <p><span class="math-container">$\exists e \in \Bbb Z, \; x \circ e = e \circ x = x, \tag 3$</span></p> <p>we would also have, <em>via</em> (2),</p> <p><span class="math-container">$x \circ e = 2xe; \tag 4$</span></p> <p>but then from (3),</p> <p><span class="math-container">$2xe = x \Longrightarrow 2xe - x = 0 \Longrightarrow x(2e - 1) = 0; \tag 5$</span></p> <p>since <span class="math-container">$\Bbb Z$</span> is an integral domain, if <span class="math-container">$x \ne 0$</span> we obtain</p> <p><span class="math-container">$2e - 1 = 0, \tag 6$</span></p> <p>which has no solution in <span class="math-container">$\Bbb Z$</span>.</p> <p>Will this construction work in <span class="math-container">$\Bbb Q$</span>? In this case, we still find that (2) binds; therefore so do (4)-(6), and taking</p> <p><span class="math-container">$e = \dfrac{1}{2}, \tag 7$</span></p> <p>we see that</p> <p><span class="math-container">$x \circ e = x \circ \dfrac{1}{2} = 2 \dfrac{1}{2} x = x, \tag 8$</span></p> <p>so apparently <span class="math-container">$e = 1/2$</span> is indeed the unit of <span class="math-container">$(H, +, \circ) = (\Bbb Q, +, \circ)$</span>. 
As a check, we observe that</p> <p><span class="math-container">$e \circ e = 2 \left ( \dfrac{1}{2} \right )^2 = 2 \cdot \dfrac{1}{4} = \dfrac{1}{2} = e, \tag 9$</span></p> <p>so <span class="math-container">$e$</span> is idempotent with respect to <span class="math-container">$\circ$</span>, as should be true of any unit.</p> <p><strong><em>Nota Bene: Caveat Emptor!!!</em></strong> I have confined my remarks here to the specific questions surrounding <span class="math-container">$e$</span>; I have granted our OP Hitman's (perhaps tacit) assertion that <span class="math-container">$(H, +, \circ)$</span> does indeed satisfy all the ring axioms, such as associativity, distributivity, etc., without checking them myself. <strong><em>End of Note.</em></strong></p>
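The checks (8) and (9) can be reproduced exactly with rational arithmetic (a sketch of mine, not part of the answer):

```python
from fractions import Fraction

# The new product x ∘ y = xy + yx (= 2xy, since multiplication is commutative).
def circ(x, y):
    return x * y + y * x

e = Fraction(1, 2)

# e = 1/2 is a two-sided unit in (Q, +, ∘), and it is idempotent as in (9):
for x in (Fraction(3, 7), Fraction(-5, 2), Fraction(0)):
    assert circ(x, e) == x and circ(e, x) == x
assert circ(e, e) == e

# In Z no unit exists: x ∘ e = x forces 2e - 1 = 0, unsolvable in integers.
assert all(2 * k != 1 for k in range(-100, 101))
```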
795
<p>Please observe the following thread <a href="https://math.stackexchange.com/questions/4489/proving-that-the-given-diophantine-equation-has-a-solution">Proving that the given Diophantine equation has a solution</a>.</p> <p>There is a long, boring argument/discussion about whether it should be posted, who should post it, and how to get more people involved in the site... There is also a trend of converting mathematical problems into Google search problems; I don't think much insight or understanding comes from this.</p> <p>Is this necessary?</p> <hr> <p>It is frustrating to be downvoted when I post questions like this. <em>Please give me some idea as to what the problem is if you would be so kind</em>.</p>
Larry Wang
73
<p>I think that the question of where to draw the line between how much discussion should take place in comments before it deserves to be moved elsewhere may be a difficult one, but <em>if meta-discussion of a specific question on the main site exists already</em>, discussion of things not related to the mathematical content of the question should take place there, rather than in comments on the main site. </p> <p>If a discussion becomes lengthy/involved enough to merit its own meta topic, then discussing it in comments on the main site will make the discussion inconvenient, since it must take place in two different places, and will make it harder to find the mathematics.<br> As posts collect many comments, only the highest voted ones are immediately shown, and many types of discussion lead to opinions about users/site policy/etiquette being more highly voted than informative comments about the substance of the post. This goes against one of the purposes of the stackexchange software - creating an easy-to-navigate repository of mathematical knowledge by using upvotes to make informative and useful content more visible.</p>
2,352,721
<h2>Question</h2> <blockquote> <p>Four fair six-sided dice are rolled. The probability that the sum of the results is <span class="math-container">$22$</span> equals <span class="math-container">$$\frac{X}{1296}.$$</span> What is the value of <span class="math-container">$X$</span>?</p> </blockquote> <h2>My Approach</h2> <p>I reduced it to an equation of the form:</p> <blockquote> <p><span class="math-container">$x_{1}+x_{2}+x_{3}+x_{4}=22, 1\,\,\leq x_{i} \,\,\leq 6,\,\,1\,\,\leq i \,\,\leq 4 $</span></p> </blockquote> <p>Solving this equation results in:</p> <p><span class="math-container">$x_{1}+x_{2}+x_{3}+x_{4}=22$</span></p> <p>I removed the restriction <span class="math-container">$x_{i} \geq 1$</span> first, as follows:</p> <p><span class="math-container">$\Rightarrow x_{1}^{'}+1+x_{2}^{'}+1+x_{3}^{'}+1+x_{4}^{'}+1=22$</span></p> <p><span class="math-container">$\Rightarrow x_{1}^{'}+x_{2}^{'}+x_{3}^{'}+x_{4}^{'}=18$</span></p> <p><span class="math-container">$\Rightarrow \binom{18+4-1}{18}=1330$</span></p> <p>Now I removed the restriction <span class="math-container">$x_{i} \leq 6$</span> by calculating the number of <strong>bad cases</strong> and then subtracting it from <span class="math-container">$1330$</span>.</p> <p>Calculating the <strong>bad combinations</strong>, i.e. those with some <span class="math-container">$x_{i} \geq 7$</span>:</p> <p><span class="math-container">$\Rightarrow x_{1}^{'}+x_{2}^{'}+x_{3}^{'}+x_{4}^{'}=18$</span></p> <p>We can distribute <span class="math-container">$7$</span> to <span class="math-container">$2$</span> of <span class="math-container">$x_{1}^{'},x_{2}^{'},x_{3}^{'},x_{4}^{'}$</span>, i.e. <span class="math-container">$\binom{4}{2}$</span> ways.</p> <p>We can distribute <span class="math-container">$7$</span> to <span class="math-container">$1$</span> of <span class="math-container">$x_{1}^{'},x_{2}^{'},x_{3}^{'},x_{4}^{'}$</span>, i.e. <span class="math-container">$\binom{4}{1}$</span>, and then distribute the rest among all others, i.e.</p> <p><span class="math-container">$$\binom{4}{1} \binom{14}{11}$$</span></p> <p>Therefore, the number of bad combinations equals <span class="math-container">$$\binom{4}{1} \binom{14}{11} - \binom{4}{2}$$</span></p> <p>Therefore, the solution should be:</p> <p><span class="math-container">$$1330-\left( \binom{4}{1} \binom{14}{11} - \binom{4}{2}\right)$$</span></p> <p>However, I am getting a negative value. What am I doing wrong?</p> <p><strong>EDIT</strong></p> <p>I am asking about my approach, because if the question involves a larger number of dice and a higher sum, then guessing the dice values by hand will not work.</p>
Dhruv Kohli
97,188
<p>The criterion for a bad combination is that at least one $x_i \geq 7$.</p> <p>The number of bad combinations when:</p> <ol> <li><p>One of the $x_i$ is forced to be greater than or equal to $7$ is $\binom{4}{1}\binom{12 + 4 - 1}{12} = 1820$.</p></li> <li><p>Two of the $x_i$'s are forced to be greater than or equal to $7$ is $\binom{4}{2}\binom{6+4-1}{6} = 504$.</p></li> <li><p>Three of the $x_i$'s are forced to be greater than or equal to $7$ is $\binom{4}{3}\binom{0+4-1}{0} = 4$.</p></li> <li><p>Four of the $x_i$'s are forced to be greater than or equal to $7$ is $0$.</p></li> </ol> <p>So, total bad combinations $= 1820 - 504 + 4 - 0 = 1320$.</p> <p>I used $n(\cup_{i=1}^{4}A_i) = \sum_{i=1}^{4}n(A_i) - \sum_{i,j, i\neq j}n(A_i\cap A_j) + \ldots$</p> <p>So, possible combinations $= 1330 - 1320 = 10$.</p>
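Both the inclusion-exclusion count and the final answer can be verified by brute force (a sketch of mine):

```python
from itertools import product
from math import comb

# Brute force: ordered rolls of four six-sided dice summing to 22.
count = sum(1 for roll in product(range(1, 7), repeat=4) if sum(roll) == 22)

# Inclusion-exclusion exactly as in the answer:
# 1330 total, minus 1820 with one x_i >= 7, plus 504 with two, minus 4 with three.
good = comb(21, 3) - (comb(4, 1) * comb(15, 3)
                      - comb(4, 2) * comb(9, 3)
                      + comb(4, 3) * comb(3, 0))
assert count == good == 10
```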
Henry
6,460
<p>It is the coefficient of $x^{22}$ in the expansion of the generating function $$(x^6+x^5+x^4+x^3+x^2+x^1)^4=\left(\frac{x(1-x^6)}{1-x}\right)^4$$ which is $$1\cdot{{x}^{24}}+4\cdot {{x}^{23}}+10\cdot {{x}^{22}}+20\cdot {{x}^{21}}+35\cdot {{x}^{20}}+56\cdot {{x}^{19}}+80\cdot {{x}^{18}}\\+104\cdot {{x}^{17}}+125\cdot {{x}^{16}}+140\cdot {{x}^{15}}+146\cdot {{x}^{14}}+140\cdot {{x}^{13}}+125\cdot {{x}^{12}}+104\cdot {{x}^{11}}\\+80\cdot {{x}^{10}}+56\cdot {{x}^{9}}+35\cdot {{x}^{8}}+20\cdot {{x}^{7}}+10\cdot {{x}^{6}}+4\cdot {{x}^{5}}+1\cdot{{x}^{4}}$$</p> <p>so the answer is $10$.</p> <p>This generalises easily to more dice or even those with different numbers of faces </p>
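The coefficient extraction can be reproduced by multiplying out the polynomial directly (a sketch of mine, easily adapted to more dice or other face counts):

```python
# Coefficients of (x + x^2 + ... + x^6)^4 by repeated polynomial multiplication.
die = [0, 1, 1, 1, 1, 1, 1]          # coefficient of x^k for one die, k = 0..6

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

poly = [1]
for _ in range(4):
    poly = poly_mul(poly, die)

assert poly[22] == 10                # coefficient of x^22, as in the expansion
assert poly[14] == 146               # the central (largest) coefficient
assert sum(poly) == 6 ** 4           # total number of outcomes = 1296
```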
G Cab
317,234
<p>We have that $$ \eqalign{ &amp; x_{\,1} + x_{\,2} + x_{\,3} + x_{\,4} = 22\quad \left| {\;1 \le x_{\,k} \le 6} \right.\quad \Rightarrow \cr &amp; \Rightarrow \quad y_{\,1} + y_{\,2} + y_{\,3} + y_{\,4} = 18\quad \left| {\;0 \le y_{\,k} \le 5} \right. \cr} $$ and $$ \eqalign{ &amp; {\rm N}{\rm .}\,{\rm of}\,{\rm solutions}\,{\rm without}\,{\rm upper}\,{\rm restriction} = \cr &amp; = {\rm N}{\rm .}\,{\rm of}\,{\rm (strong)}\,4{\rm - elements}\,{\rm compositions}\,{\rm of}\,22 = \cr &amp; = {\rm N}{\rm .}\,{\rm of}\,{\rm (weak)}\,4{\rm - elements}\,{\rm compositions}\,{\rm of}\,18 = \cr &amp; = \left( \matrix{ 22 - 1 \cr 4 - 1 \cr} \right) = 1330 \cr} $$</p> <p>Concerning <strong>bad compositions</strong>, we have that if you write <em>for instance</em> $$ \eqalign{ &amp; \overline {x_{\,1} } + x_{\,2} + x_{\,3} + x_{\,4} = 22\quad \left| \matrix{ \;7 \le \overline {x_{\,1} } \hfill \cr \;1 \le x_{\,k} \hfill \cr} \right.\quad \times \;\left( \matrix{ 4 \cr 1 \cr} \right)\quad \Rightarrow \cr &amp; \Rightarrow \quad x_{\,1} + x_{\,2} + x_{\,3} + x_{\,4} = 16\quad \left| {\;1 \le x_{\,k} } \right.\quad \times \;\left( \matrix{ 4 \cr 1 \cr} \right) = \cr &amp; = 4\;\left( \matrix{ 16 - 1 \cr 4 - 1 \cr} \right) = 1820 \cr} $$ you get a number of bad compositions which is higher than the total number, and it is not the number of compositions with at least one term $\ge 7$. That is because, when we add $6$ back to an $x_{\,1}$ that could itself already be $\ge 7$, the same composition gets counted more than once.</p> <p>And if you write $$ \overline {x_{\,1} } + x_{\,2} + x_{\,3} + x_{\,4} = 22\quad \left| \matrix{ \;7 \le \overline {x_{\,1} } \hfill \cr \;1 \le x_{\,k} \le 6 \hfill \cr} \right. $$ you are back to the original problem, with one variable fewer and one more sum to apply.</p> <p>So that is not the right approach to this problem.</p> <p>$\ds{\bbox[#dfd,5px]{\ The\ general\ formula\ }}$ for getting a sum $s$ with $m$ dice, each with $r$ faces, is given <a href="http://math.stackexchange.com/questions/992125/rolling-dice-problem/1680420#1680420">in this post</a>.</p> <p>In your particular case, though, <em>Jacob's</em> answer is the simplest.</p>
Filip
764,829
<p>Although good answers have already been given, I'd like to provide an alternative way of coming up with this, which I found <a href="https://digitalscholarship.unlv.edu/cgi/viewcontent.cgi?article=1025&amp;context=grrj" rel="nofollow noreferrer">here on page 6</a>. </p> <p>Define <span class="math-container">$f_i(s)$</span> as the probability of having a sum <span class="math-container">$s$</span> with <span class="math-container">$i$</span> dice. Then we can find all the probabilities for any number of dice using recursion, starting at <span class="math-container">$i = 1$</span>. We know, first of all, that </p> <p><span class="math-container">$$f_1(s) = \frac{1}{6}, s \in \{1,..,6\}$$</span></p> <p>We can then use this to find all the reasonable values for <span class="math-container">$f_2(s)$</span>. As an example, </p> <p><span class="math-container">$$f_2(3) = f_1(3-1)\cdot \frac{1}{6} + f_1(3-2)\cdot \frac{1}{6}$$</span></p> <p>The general recursion formula, summing over the value <span class="math-container">$j$</span> shown by the last die, is</p> <p><span class="math-container">$$f_i(s) = \sum_{j=1}^{6}f_{i-1}(s - j)\frac{1}{6}$$</span></p> <p>supposing that <span class="math-container">$f_{i-1}(s-j)$</span> exists (important!). Some Python code can be found below, generating all probabilities for the sums of up to 6 dice. </p> <pre><code>from fractions import Fraction

def get(s, f, n):
    if n == 1:
        return Fraction(1, 6)
    # Sum over the value i shown by the last die.
    a = []
    for i in range(1, 7):
        if (n - 1, s - i) in f:
            a.append(f[(n - 1, s - i)] * Fraction(1, 6))
    return sum(a)

f = {}  # f[(n, s)] = probability that n dice sum to s
for n in range(1, 7):
    for s in range(n, 6*n + 1):
        f[(n, s)] = get(s, f, n)
</code></pre> <p>Note that this can also be extended and generalised to an <em>n</em>-sided die. </p>
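As a sanity check tying this back to the original question (my own addition, not part of the answer above), we can rebuild the table for four dice and compare the entry for a sum of 22 with the direct count: the only partitions are $6+6+6+4$ (4 orderings) and $6+6+5+5$ (6 orderings), so $X=10$.

```python
from fractions import Fraction

# Rebuild f[(n, s)] = P(n dice sum to s) with the recursion described above.
f = {}
for n in range(1, 5):
    for s in range(n, 6 * n + 1):
        if n == 1:
            f[(n, s)] = Fraction(1, 6)
        else:
            f[(n, s)] = sum(f[(n - 1, s - i)] * Fraction(1, 6)
                            for i in range(1, 7) if (n - 1, s - i) in f)

# 6+6+6+4 gives 4 orderings and 6+6+5+5 gives 6, so X = 10.
print(f[(4, 22)] == Fraction(10, 1296))  # → True
```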
60,326
<p>If f is a weight 2 cuspidal newform, then it is common for L(f,1) to vanish. Indeed, the sign of the functional equation of f can force such vanishing. However, if f has weight k>2, then there is no a priori reason why L(f,1) will vanish. </p> <p>My question: are there known examples where L(f,1)=0 for a newform f of weight strictly greater than 2 or is there some (easy?) reason such examples shouldn't exist?</p>
GH from MO
11,919
<p>The sign of the functional equation of a self-dual cusp form (regardless its weight) can be changed easily by twisting the form with a quadratic character. In particular, the $L$-function of a self-dual cusp form (of any weight) often vanishes at the center. I don't think there is anything special about weight 2. See Theorem 7.6 in Iwaniec: Topics in classical automorphic forms.</p>
6,741
<p>I would like to have a plot that is filled to the axis with <code>Green</code> if the <code>y</code> value is greater than 10, and <code>Blue</code> if it is less than 10.</p> <p>I attempted the following:</p> <p><code>Plot[x, {x, 0, 20}, Filling -&gt; Axis, ColorFunction -&gt; Function[{x, y}, Piecewise[{{Green, y &gt; 10}, {Blue, y &lt; 10}}]]]</code></p> <p>This produces an all-blue filling.</p> <p>The frustrating thing is that conditionals seem to be allowed in <code>ColorFunction</code>s, but the test appears to be evaluated only once, and that one result is used for every point plotted.</p> <p>How can I have the test evaluated repeatedly to get a discrete color filling?</p> <p><strong>Note:</strong> I am simply trying to fill the space under a curve by determining where a point would fall in a given set of intervals.</p>
kglr
125
<p><code>ParametricPlot</code> with <code>MeshFunctions</code> + <code>MeshShading</code> +<code>Mesh</code> options gives a cleaner picture with no blending of colors at boundaries:</p> <pre><code>ParametricPlot[{x, v x}, {x, 0, 20}, {v, 0, 1}, MeshFunctions -&gt; {# &amp;}, Mesh -&gt; {{10}}, MeshShading -&gt; {Blue, Green}, BoundaryStyle -&gt; None] </code></pre> <p><a href="https://i.stack.imgur.com/vAGYQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/vAGYQ.png" alt="enter image description here"></a></p> <p>You can color more complicated regions:</p> <pre><code>ParametricPlot[{x, v x}, {x, 0, 20}, {v, 0, 1}, MeshFunctions -&gt; {# Sin[RandomReal[5] + #] &amp;}, Mesh -&gt; {{0}}, MeshShading -&gt; {Hue[RandomReal[]], Hue[RandomReal[]]}, BoundaryStyle -&gt; None] </code></pre> <p><a href="https://i.stack.imgur.com/bH57z.png" rel="noreferrer"><img src="https://i.stack.imgur.com/bH57z.png" alt="enter image description here"></a></p>
946,973
<p>After completing the square, what are the solutions to the quadratic equation below? <span class="math-container">$$x^2 + 2x = 25$$</span></p> <p><img src="https://i.stack.imgur.com/AoFhV.png" alt="enter image description here" /></p> <p>Honestly, I think it's B, but I'm not sure.</p>
Michael Hardy
11,667
<p>One can go through the process of completing the square (as in "Timbuc"'s posted answer). But one could also check by substitution. So suppose it is proposed that $x=-1+\sqrt{26}$ is a solution. We would then have \begin{align} x^2 + 2x &amp; = (-1+\sqrt{26})^2 + 2(-1+\sqrt{26}) \\[10pt] &amp; = (1-2\sqrt{26}+26) + 2(-1+\sqrt{26}) \\[10pt] &amp; = (1+26)-2 \\[10pt] &amp; = 25, \end{align} so that is indeed a solution. And the same thing works with $-1-\sqrt{26}$.</p>
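A quick floating-point confirmation of the same substitution check (my own addition):

```python
from math import sqrt, isclose

# Both completed-square roots x = -1 ± sqrt(26) should satisfy x^2 + 2x = 25.
for x in (-1 + sqrt(26), -1 - sqrt(26)):
    print(isclose(x**2 + 2*x, 25))  # → True, True
```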
1,111,952
<p><strong>My Try:</strong> </p> <p>We substitute $y = x^{2/3}$. Therefore, $x = y^{3/2}$ and $dx = \frac{3}{2}\sqrt{y}\,dy$.</p> <p>Hence, the integral after substitution is: </p> <p>$$ \frac{3}{2} \int_0^\infty \sin(y)\sqrt{y}\, dy$$</p> <p>Let's look at:</p> <p>$$\int_0^\infty \left|\sin(y)\sqrt{y} \right| dy = \sum_{n=0}^\infty \int_{n\pi}^{(n+1)\pi}\left|\sin(y)\right| \sqrt{y}\, dy \ge \sum_{n=0}^\infty \sqrt{n\pi} \int_{n\pi}^{(n+1)\pi}\left|\sin(y)\right| dy \\= \sum_{n=1}^\infty \sqrt{n\pi} \int_{n\pi}^{(n+1)\pi}\sqrt{\sin(y)^2}\, dy$$</p>
Petite Etincelle
100,564
<p>By $y = x^{2/3}$, we get $$\int_0^\infty \sin(x^{2/3})dx = \frac{3}{2}\int_0^\infty \sin(y)\sqrt{y}dy$$</p> <p>Consider the integral over the interval $[2n\pi, 2(n+1)\pi]$, we have $$ \begin{align} \int_{2n\pi}^{2(n+1)\pi} \sin(y)\sqrt{y}dy &amp;= \int_{2n\pi}^{(2n+1)\pi}\sin(y)\sqrt{y}dy + \int_{(2n+1)\pi}^{(2n+2)\pi}\sin(y)\sqrt{y}dy \\ &amp;=\int_{2n\pi}^{(2n+1)\pi}\sin(y)\sqrt{y}dy + \int_{2n\pi}^{(2n+1)\pi}\sin(y+\pi)\sqrt{y+\pi}dy\\ &amp;=\int_{2n\pi}^{(2n+1)\pi}\sin(y)\sqrt{y}dy - \int_{2n\pi}^{(2n+1)\pi}\sin(y)\sqrt{y+\pi}dy\\ &amp;=-\int_{2n\pi}^{(2n+1)\pi}\sin(y)\frac{\pi}{\sqrt{y} + \sqrt{y+\pi}}dy\\ &amp;\le -\int_{2n\pi}^{(2n+1)\pi}\sin(y)\frac{\pi}{\sqrt{(2n+1)\pi} + \sqrt{(2n+2)\pi}}dy\\ &amp;=-\frac{2\pi}{\sqrt{(2n+1)\pi} + \sqrt{(2n+2)\pi}} \end{align}$$ </p> <p>then $$\sum_{n=1}^\infty\int_{2n\pi}^{2(n+1)\pi} \sin(y)\sqrt{y}dy \leq \sum_{n=1}^\infty -\frac{2\pi}{\sqrt{(2n+1)\pi} + \sqrt{(2n+2)\pi}} =-\infty$$</p> <p>so this integral does not converge. To be more convinced, see @Kyson's comment below(so this integral oscillates between $+\infty$ and $-\infty$)</p>
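To see the key inequality numerically (my own check, using a hand-rolled Simpson rule rather than any library): each full period $[2n\pi,\,2(n+1)\pi]$ should contribute at most $-\frac{2\pi}{\sqrt{(2n+1)\pi}+\sqrt{(2n+2)\pi}}$ to the integral of $\sin(y)\sqrt{y}$.

```python
from math import sin, sqrt, pi

def simpson(f, a, b, m=20000):
    # Composite Simpson's rule with m (even) subintervals.
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

for n in range(1, 6):
    a, b = 2 * n * pi, 2 * (n + 1) * pi
    integral = simpson(lambda y: sin(y) * sqrt(y), a, b)
    bound = -2 * pi / (sqrt((2 * n + 1) * pi) + sqrt((2 * n + 2) * pi))
    assert integral <= bound < 0
print("each full period contributes at most the stated negative amount")
```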
1,111,952
<p><strong>My Try:</strong> </p> <p>We substitute $y = x^{2/3}$. Therefore, $x = y^{3/2}$ and $dx = \frac{3}{2}\sqrt{y}\,dy$.</p> <p>Hence, the integral after substitution is: </p> <p>$$ \frac{3}{2} \int_0^\infty \sin(y)\sqrt{y}\, dy$$</p> <p>Let's look at:</p> <p>$$\int_0^\infty \left|\sin(y)\sqrt{y} \right| dy = \sum_{n=0}^\infty \int_{n\pi}^{(n+1)\pi}\left|\sin(y)\right| \sqrt{y}\, dy \ge \sum_{n=0}^\infty \sqrt{n\pi} \int_{n\pi}^{(n+1)\pi}\left|\sin(y)\right| dy \\= \sum_{n=1}^\infty \sqrt{n\pi} \int_{n\pi}^{(n+1)\pi}\sqrt{\sin(y)^2}\, dy$$</p>
Empy2
81,790
<p>$\sin x^{2/3}$ remains above $1/2$ for $x$ between $[(2n+1/6)\pi]^{3/2}$ and $[(2n+5/6)\pi]^{3/2}$, so the integral rises by more than $\left([2n+5/6]^{3/2}-[2n+1/6]^{3/2}\right)\pi^{3/2}/2$ during that time.<br> $$[2n+5/6]^{3/2}-[2n+1/6]^{3/2}=\frac{[2n+5/6]^3-[2n+1/6]^3}{[2n+5/6]^{3/2}+[2n+1/6]^{3/2}}\\ &gt;\frac{8n^2}{2[2n+1]^{3/2}}$$ That increases as a function of $n$, so the integral does not converge.</p>
1,111,952
<p><strong>My Try:</strong> </p> <p>We substitute $y = x^{2/3}$. Therefore, $x = y^{3/2}$ and $dx = \frac{3}{2}\sqrt{y}\,dy$.</p> <p>Hence, the integral after substitution is: </p> <p>$$ \frac{3}{2} \int_0^\infty \sin(y)\sqrt{y}\, dy$$</p> <p>Let's look at:</p> <p>$$\int_0^\infty \left|\sin(y)\sqrt{y} \right| dy = \sum_{n=0}^\infty \int_{n\pi}^{(n+1)\pi}\left|\sin(y)\right| \sqrt{y}\, dy \ge \sum_{n=0}^\infty \sqrt{n\pi} \int_{n\pi}^{(n+1)\pi}\left|\sin(y)\right| dy \\= \sum_{n=1}^\infty \sqrt{n\pi} \int_{n\pi}^{(n+1)\pi}\sqrt{\sin(y)^2}\, dy$$</p>
LearnTitan
209,235
<p>I will try to look at the problem from a different perspective and to understand exactly why it will <strong>not converge.</strong></p> <ol> <li><p>The first thing we should know is that if the integral converges the terms must get <strong>smaller</strong> after each "iteration".</p></li> <li><p>That means that the function keeps <strong>decreasing</strong> towards zero while going to infinity.</p></li> <li><p>If the function is decreasing towards infinity that means that the <strong>derivative of the function and its limit towards infinity</strong> should be negative (the derivative can be zero only if the value of the function has reached zero, because that would mean that the function remains at zero forever thus nothing else is added)</p></li> </ol> <p>So lets get our hands dirty:</p> <p>$$ \\ \large {f(x) = \sin x^{2/3}} \\ \large \frac {df(x)} {dx} = \cos x^{2/3} \times \frac {2} {3} {x^{-{1/3}} } = \frac {2 \cos {x^{\frac {2} {3}}}} {3 \sqrt[3] {x}} \\ \large \lim_{x\to\infty} \frac {df(x)} {dx} = \lim_{x\to\infty} \frac {2 \cos {x^{\frac {2} {3}}}} {3 \sqrt[3] {x}} = 0 \\ AND \\ \lim_{x\to\infty} {f(x)} = {[-1,+1]} \\$$ So as you can see the function is not decreasing towards infinity, it is in fact maintaining the same value. And the value of $f(x)$ towards infinity isn't $0$ but is in fact a value in $[-1,+1]$. Thus we can conclude the terms of the function aren't decreasing and the function <strong>isn't converging</strong>.</p>
1,158,956
<p>To show that orthogonal complement of a set A is closed.</p> <p>My try: I first show that the inner product is a continuous map. Let $X$ be an inner product space. For all $x_1,x_2,y_1,y_2 \in X$, by Cauchy-Schwarz inequality we get, $$|\langle x_1,y_1\rangle - \langle x_2,y_2\rangle| = |\langle x_1- x_2,y_1\rangle + \langle x_2, y_1-y_2\rangle| $$ $$\leq \|x_1- x_2\|\cdot\|y_1\| +\|x_2\|\cdot\| y_1-y_2\|$$</p> <p>This implies continuity of inner products.</p> <p>Let $A \subset X$ and $y \in A^\perp$. To show that $ A^\perp$ is closed, we have to show that if $(y_n)$ is convergent sequence in $ A^\perp$, then the limit $y$ also belong to $ A^\perp$.</p> <p>Let $x \in A$, then using that inner product is a continuous map, $$\langle x,y\rangle = \langle x, \lim_{n\to \infty} (y_n)\rangle = \lim_{n\to \infty} \langle x, y_n\rangle = 0.$$</p> <p>Since $\langle x, y_n\rangle = 0$ for all $x \in A$ and $y_n \in A^\perp$. Hence $y \in A^\perp$.</p> <p>Is the approach\the proof correct??</p> <p>Thank You!!</p>
Alfredo Lozano
286,630
<p>I really like your proof, so formalizing it we have:</p> <p>Let $\{y_n\}_{n=1}^\infty \subseteq A^\perp$ be s.t. $y_n \to y$ and let $x \in A$.</p> <p>We now want to show that $y\in A^\perp$.</p> <p>From the inner product's continuity we have:</p> <p>$\forall \epsilon&gt;0\ ,\exists\ \delta&gt;0$ such that:</p> <p>$|\langle x, y_n-y\rangle|&lt;\epsilon$, if $\parallel y_n-y\parallel&lt;\delta$ **</p> <p>Now note that $\langle x, y_n\rangle = 0\ \forall n \in \mathbb N$, since each $y_n \in A^\perp$.</p> <p>Hence, for $n$ large enough that $\parallel y_n-y\parallel&lt;\delta$, $$|\langle x, y\rangle| = |\langle x, y_n\rangle - \langle x, y\rangle| = |\langle x, y_n - y\rangle|&lt;\epsilon.$$ Since $\epsilon&gt;0$ was arbitrary, this implies $\langle x, y\rangle = 0$.</p> <p>This means $y\in A^\perp$ <em>q.e.d.</em></p> <p>** Using the norm induced by the inner product; also note that the existence of such $n$ is guaranteed by the convergence of $\{y_n\}_{n=1}^\infty$</p>
514,912
<p>I have what may seem a very trivial question, but how it is answered may affect how a proof of mine is structured. It pertains to formatting and convention. When 'recursively' defining a function does it make sense to use quantifiers? </p> <p>For example would:</p> <p>$ 5 \in R $</p> <p>If $ r \in R $, then $ \forall s \in \mathbb Z, r + s \in R $</p> <p>be an acceptable substitute for:</p> <p>$ 5 \in R $</p> <p>If $ r \in R, $ then $ r + 1 \in R $ and $ r - 1 \in R $</p> <p>Or would using quantifiers in the former definition violate some fundamental rule about how recursive functions are supposed to be considered?</p> <p>Anyways, thanks for any help!</p> <p>Thanks, </p> <p>Tuba09</p>
DonAntonio
31,254
<p>Well, define</p> <p>$$x=\sqrt{3\sqrt{5\sqrt3\ldots}}=\sqrt{3\sqrt{5x}}\stackrel{\text{square}}\implies x^2=3\sqrt{5x}\stackrel{\text{square again}}\implies x^4=45x\implies$$</p> <p>$$x^3=45\iff x=\sqrt[3]{45}$$</p> <p><strong>But</strong>...the above follows at once from arithmetic of limits, on the sequence</p> <p>$$\left\{\sqrt3\,,\,\sqrt{3\sqrt5}\,,\,\sqrt{3\sqrt{5\sqrt3}}\,\ldots\right\}$$</p> <p>and we <strong>must</strong> first prove the above sequence converges finitely, so</p> <p>Hints:</p> <p>== Prove your sequence is monotone ascending</p> <p>== Prove your sequence is bounded above</p>
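A numeric sanity check of the fixed-point equation (my own illustration; it iterates the map $x \mapsto \sqrt{3\sqrt{5x}}$ that the answer squares twice):

```python
from math import sqrt, isclose

# Iterate x -> sqrt(3 * sqrt(5 * x)); the exponent of x in the map is 1/4,
# so it is a contraction and converges to the fixed point 45^(1/3).
x = sqrt(3)
for _ in range(60):
    x = sqrt(3 * sqrt(5 * x))
print(isclose(x, 45 ** (1 / 3)))  # → True
```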
3,375,459
<p>I am trying to study differential geometry.</p> <p>I am confused with regards to the following function for finding the length of a curve <span class="math-container">$\gamma$</span> connecting two points <span class="math-container">$p, q ∈ S^2$</span></p> <p><span class="math-container">$$L(γ) = \int^1_0|\dot{γ}(t)| dt,γ(0) = p, γ(1) = q$$</span></p> <p>Where <span class="math-container">$S^2$</span> is a 2-dimensional sphere sitting in the three dimensional Euclidean space <span class="math-container">$R^3$</span></p> <p>I am unfamiliar with the "dot above function" notation (dot above <span class="math-container">$\gamma$</span>), what does it mean? And from where is this function derived or what is it called?</p>
Kavi Rama Murthy
142,385
<p>Hint: you need <span class="math-container">$a &lt;\frac 1 2$</span> for finiteness of the expectation. To evaluate it just make the substitution <span class="math-container">$x=\sqrt {1-2a}z$</span>. I will let you do the calculations. The answer is <span class="math-container">$e^{c+\frac {b^2} {2(1-2a)}}$</span></p>
418,647
<p>Sorry if the question is dumb. I am trying to learn representation theory of finite groups from J.P.Serre's book by myself. In section 2.6 on canonical decomposition, he says that let V be a representation of a finite group G, $W_1,...,W_h$ be the distinct irreducible representations of G, and let V = $U_1 \oplus ... \oplus U_m$ be some decomposition of V into irreducible subrepresentations. Then we can write V = $V_1\oplus ...\oplus V_h$, where $V_i$ is the direct sum of irreducible subrepresentations among $U_i$'s which are $isomorphic$ to $W_i$. This much is clear. But then he says that :</p> <blockquote> <p>Next, if needed, one chooses a decomposition of $V_i$ into a direct sum of irreducible representations each isomorphic to $W_i$: $$V_i = W_i \oplus ...\oplus W_i$$ The last decomposition can be done in infinitely many ways; it is just as arbitrary as the choice of a basis in a vector space.</p> </blockquote> <p>I am confused with this part. I understand $external$ direct sums of same spaces, but how is the $internal$ direct sum of same spaces $W_i$ defined in general? I think I might be facing some notational difficulty. Thanks in advance.</p>
nakajuice
22,196
<p>I assume your graph can't have parallel edges or loops; otherwise the statement is true, since you can connect two arbitrary vertices with all 17 edges and the 5 remaining vertices are isolated.</p> <p>So in this case the maximum number of edges in an undirected graph on $n$ vertices is ${n \choose 2}$ (the number of pairs of vertices). For $n=6$ the maximal number of edges is 15, so if you add a seventh vertex, at least 2 edges ($17-15$) must be incident to it, so no vertex is isolated.</p>
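The counting argument can even be verified exhaustively for this size (my own addition): a simple graph on $7$ vertices with $17$ edges is a choice of $17$ of the $\binom{7}{2}=21$ possible edges, and there are only $\binom{21}{17}=5985$ such choices.

```python
from itertools import combinations

possible = list(combinations(range(7), 2))  # the 21 possible edges
assert len(possible) == 21
for graph in combinations(possible, 17):
    touched = {v for e in graph for v in e}
    assert len(touched) == 7  # no vertex is isolated
print("checked all 5985 graphs: none has an isolated vertex")
```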
2,942,244
<p>I need to find an unmeasurable dense subset of the circle. I think I have found an unmeasurable set, but I can't show that it is dense. Here is my construction. Take <span class="math-container">$\alpha\in\mathbb{R}\setminus\mathbb{Q}$</span> and consider the irrational rotation of the circle by angle <span class="math-container">$\alpha$</span>. Take the orbits of all points on the circle under the rotation, then choose exactly one point from every orbit. This is the set <span class="math-container">$X_0$</span>. Then <span class="math-container">$X_j=X_0+\alpha j$</span>. So <span class="math-container">$\forall j$</span> the set <span class="math-container">$X_j$</span> is unmeasurable. I need to find a point of <span class="math-container">$X_j$</span> in any neighbourhood of any point on the circle to show that <span class="math-container">$X_j$</span> is dense, but I don't know how to do it.</p>
Henno Brandsma
4,280
<p>Let $V$ be a Vitali set in $[0,1)$ (by AC choose a representative for each class of the relation $x\sim y$ iff $x-y\in\mathbb{Q}$) and take its image under the continuous bijection $t \mapsto (\cos 2\pi t, \sin 2\pi t)$. This image is non-measurable; and if it should fail to be dense, take its union with a countable dense subset of the circle (say the points of rational angle): adding a countable set keeps it non-measurable and makes it dense, as required. </p>
556,150
<blockquote> <p>Prove that a metric space is totally bounded if and only if every sequence has a Cauchy subsequence. </p> </blockquote> <p>I think I proved the Cauchy subsequence part:</p> <p>Let $a_{0},a_{1}, a_{2}, a_{3}, a_{4},...\in X$ be a sequence.</p> <p>For each $k$, let $F \subseteq X$ be a finite $\frac1k$-net.</p> <p>Given $I \subseteq \Bbb N_{\ge0}$ and $k&gt;1$ you find an infinite $J \subseteq I$ such that: $$\exists p\in F:\forall n\in J: d(x_n,p)&lt;\frac1k$$</p>
Community
-1
<p>The stated proof of implication totally bounded $\implies $ Cauchy subsequence is more like a sketch of the approach than a complete proof. Here is how it develops: </p> <p>Suppose $(a_n)$ is a sequence in a totally bounded metric space $X$. For each $k$, choose a finite $(1/k)$-net in $X$ and call it $F_k$. </p> <p>Let $J_0 = \mathbb{N}$ and construct infinite sets $J_0\supset J_1\supset J_2\supset J_3\supset\cdots $ inductively as follows. For every $n\in J_{k-1}$ there is $p\in F_k$ such that $d(a_n,p)&lt;1/k$. Since $J_{k-1}$ is infinite while $F_k$ is finite, there exists $p\in F_k$ such that the set $J_k : = \{n\in J_{k-1}:d(a_n, p)&lt; 1/k\}$ is infinite. </p> <p>Finally, define a subsequence $a_{n_k}$ by letting $n_k$ be some element of $J_k$ that is greater than $n_{k-1}$. This is a Cauchy subsequence. Indeed, for any $\epsilon&gt;0$ there exists $N$ such that $2/N&lt;\epsilon$. For $j,k\ge N$ we have $n_j,n_k\in J_N$, hence there is $p\in F_N$ such that $$ d(a_{n_k},p) &lt; \frac{1}{N}, \quad d(a_{n_j},p) &lt; \frac{1}{N} $$ Thus $d(a_{n_k}, a_{n_j}) &lt; 2/N&lt;\epsilon$ by the triangle inequality.</p>
1,353,922
<p>A function $f$ is continuous for all $x \geq 0$ and $f(x) \neq 0$ for all $x &gt;0$. </p> <p>If $\{f(x)\}^2 = 2\int_0^xf(t)dt $ then $f(x) = x$ for all $x \geq 0$.</p> <p>But I am stuck with the sum. </p>
Spenser
39,285
<p>By the Fundamental Theorem of Calculus, the function $$x\mapsto\int_0^xf(t)dt$$ is differentiable since $f$ is continuous. Hence, $(f(x))^2=2\int_0^xf(t)dt$ is differentiable. Since, $f(x)\neq 0$ for $x&gt;0$ and $\sqrt{}$ is differentiable on $(0,\infty)$, the composition $$\sqrt{(f(x))^2}=|f(x)|=f(x)\tag{1}$$ is differentiable for $x&gt;0$. Thus, $$\frac{d}{dx}(f(x))^2=2f(x)f'(x),\quad\forall x&gt;0.$$ But also, $$\frac{d}{dx}(f(x))^2=\frac{d}{dx}2\int_0^xf(t)dt=2f(x),$$ by the fundamental theorem of calculus again. Hence, $$2f(x)f'(x)=2f(x),\quad\forall x&gt;0.$$ Since $f(x)\neq 0$ there, this implies $$f'(x)=1,\quad \forall x&gt;0,$$ and hence $$f(x)=x+C,\quad\forall x&gt;0$$ for some constant $C$ (you can prove this by the Mean Value Theorem). By continuity of $f$, this holds for all $x\ge 0$. Evaluating the given constraint at $0$ we get $$C^2=2\int_0^0f(t)dt=0\implies C=0.$$ Thus, $f(x)=x$ for all $x\ge 0$.</p> <p><strong>Added:</strong> In $(1)$ I used that $f(x)&gt;0$ for $x&gt;0$. To prove this, note that since $f(x)\neq 0$ for $x&gt;0$ it suffices to show that $f(x)&gt;0$ for <em>one</em> $x&gt;0$ (by the Intermediate Value Theorem). But if $f(x)&lt; 0$ for all $x&gt;0$ then $0&lt;(f(x))^2=2\int_0^xf(t)dt\le 0$, a contradiction.</p> <p><strong>Additional remark:</strong> Note how nice is this exercise to practice applications of basic analysis theorems. It uses five of the most important tools one learns in first year undergraduate analysis:</p> <ul> <li>Fundamental Theorem of Calculus</li> <li>Intermediate Value Theorem</li> <li>Mean Value Theorem</li> <li>Composition of differentiable functions is differentiable</li> <li>Continuous functions are determined by their values on a dense set</li> </ul>
3,406,817
<p>What intrinsic relationship is there between (Riemann) integration and Euclidean geometry that enables one to get the area under a curve as an integral? Is this only related to the definition of the Riemann integral as a limit of sums of areas of rectangles, or is it something else?</p>
Arthur
15,500
<p>A bit tongue-in-cheek, the intrinsic relationship can be seen as "Riemann integration was constructed specifically for this connection to work the way it does."</p>
3,406,817
<p>What intrinsic relationship is there between (Riemann) integration and Euclidean geometry that enables one to get the area under a curve as an integral? Is this only related to the definition of the Riemann integral as a limit of sums of areas of rectangles, or is it something else?</p>
user376343
376,343
<p>Refer to the <em>method of indivisibles</em> of Bonaventura Cavalieri (1598-1647). This method was used and taught at universities before the discovery of Riemann integration. </p>
553,040
<p>The question is: A card is drawn from an ordinary pack (52 cards) and a gambler bets that either a spade or an ace is going to appear. What is the probability of his winning? I think the answer is $\frac{16}{52} = \frac{4}{13}$. Did I go wrong somewhere?</p>
Satish Ramanathan
99,745
<p>There are 13 spades that contain one ace and there are remaining three aces, thus there are altogether 16 choices for the bet and the probability = 16/52 and you are correct.</p>
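The count can be confirmed by enumerating the deck (my own addition):

```python
from fractions import Fraction

ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['spades', 'hearts', 'diamonds', 'clubs']
deck = [(r, s) for r in ranks for s in suits]

wins = [c for c in deck if c[1] == 'spades' or c[0] == 'A']
assert len(wins) == 16  # 13 spades + 3 non-spade aces
print(Fraction(len(wins), len(deck)))  # → 4/13
```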
479,851
<p>I toss a coin many times, each time noting down the result of the toss. If at any time I have tossed more heads than tails I stop; i.e. if I get heads on the first toss I stop, or if I toss T-T-H-H-T-H-H I will stop. If I decide to toss the coin at most 2n+1 times, what is the probability that I will get more heads than tails before I have to give up?</p>
Kolmo
20,974
<p>I will expand my comment.</p> <p>Set $N=2 n+1$ and $S_k= \sum_{i=1}^k X_i $, where the $X_i$ are i.i.d.r.v. assuming values $+1, -1$ with equal probability ($S_0=0$ a.s.). This is the simple random walk. The idea is that we associate to head a +1 and to tail a -1.</p> <p>What you are looking for is the probability of the event $A=\max_{1 \leq k \leq N} S_k \geq 1$; that is before $N$ the random walk passed 1 which means you had 1 more head than tails.</p> <p>For $l$ positive integer define $T_l$ as the first time the random walk hits the level $l$ (the event $A$ corresponds to $T_1 \leq N$); the reflection principle implies $P(T_l \leq r) = 2 P(S_r \geq l) - P(S_r = l)$. I think you can find a proof of that on the internet, if you need it I will write it.</p> <p>Coming back to you case you have: $$ P(T_1 \leq N) = 2 P(S_N \geq 1 ) - P(S_N=1)= $$</p> <p>$$ =1- \binom {2n+1}{n+1} \ 2^{-(2n+1)} . $$</p> <p>EDIT.</p> <p>One version of the reflection principle says that $$ P(T_l \leq r \cap S_r &lt; l ) = P(T_l \leq r \cap S_r &gt; l ). $$ Note that $(S_r&gt;l) \subset (T_l \leq r)$, so the R.H.S is equal to $P(S_r &gt;l)$. Note also that $(T_l = r \cap S_r &lt; l) = (T_l = r \cap S_r &gt; l) = \emptyset$.</p> <p>Then:</p> <p>$$ P(T_l &lt; r \cap S_r &lt;l) = \sum_{k=1}^{r-1} P(T_l=k \cap S_r&lt;l) = \sum_{k=1}^{r-1} P(T_l=k \cap S_r-S_k&lt;0) $$</p> <p>The last step is because $T_l=k$ implies $S_k=l$.</p> <p>The event $(T_l=k) \in \sigma\{X_1,...,X_k\}$, while $S_r-S_k &lt;0 \in \sigma \{X_{k+1},...,X_r\}$ and those sigma-algebras are independent.</p> <p>So </p> <p>$$ P(T_l=k \cap S_r-S_k&lt;0) = P(T_l=k)P(S_r-S_k&lt;0)=P(T_l=k)P(S_r-S_k&gt;0) $$ where last equality follow from the symmetry of $S_r-S_k$.</p> <p>Proceeding backwards, the result follows.</p> <p>This implies:</p> <p>$$ P(T_l \leq r) = P(T_l \leq r \cap S_r \geq l) + P(T_l \leq r \cap S_r &lt; l)= $$ </p> <p>$$ =P(S_r \geq l) + P(S_r &gt; l )=2 P(S_r \geq l) - P(S_r = l ). $$</p>
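The closed form can be checked against brute-force enumeration for small $n$ (my own addition, not part of the original argument):

```python
from itertools import product
from math import comb
from fractions import Fraction

def p_hit(n):
    """Exact P(the walk reaches +1 within N = 2n+1 steps), by enumeration."""
    N = 2 * n + 1
    hits = 0
    for signs in product((1, -1), repeat=N):
        s = 0
        for x in signs:
            s += x
            if s >= 1:
                hits += 1
                break
    return Fraction(hits, 2 ** N)

for n in range(1, 5):
    N = 2 * n + 1
    assert p_hit(n) == 1 - Fraction(comb(N, n + 1), 2 ** N)
print("formula agrees with enumeration for n = 1, 2, 3, 4")
```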
2,932,139
<blockquote> <p>How would I show that the line <span class="math-container">$A=[(x,y,z)=(0,t,t)\mid t\in\mathbb{R}]$</span> is parallel to the plane <span class="math-container">$5x-3y+3z=1$</span>?</p> </blockquote> <p>I know the normal vector would be <span class="math-container">$(5,-3,3)$</span>, but how would I get the direction vector of <span class="math-container">$A$</span>?</p>
Bernard
202,857
<p>A directing vector of the line is <span class="math-container">$(0,1,1)$</span>, since the vector equation of the line is <span class="math-container">$$\overrightarrow{OM}=t\,(0,1,1).$$</span><br> This directing vector has to be orthogonal to the normal vector of the plane, and indeed <span class="math-container">$(0,1,1)\cdot(5,-3,3)=0-3+3=0$</span>.</p>
3,465,171
<blockquote> <p>Let <span class="math-container">$G=[0,\infty) \times [0,\infty)$</span>, <span class="math-container">$\alpha \in (0,1)$</span> and <span class="math-container">$$\phi (x,y)=x^{\alpha} y^{1-\alpha}$$</span> Then <span class="math-container">$\phi$</span> is concave; that is, <span class="math-container">$-\phi$</span> is convex.</p> </blockquote> <hr> <p>This is left as an exercise in the book that I'm currently reading, and I think I have found a proof, which is shown below:</p> <p>Fix <span class="math-container">$x,y \in [0,\infty)$</span> and <span class="math-container">$n \in \mathbb N$</span>, where <span class="math-container">$\mathbb N$</span> always denotes the set of all positive integers; then we define <span class="math-container">$\alpha :=k/n$</span> for an arbitrary positive integer <span class="math-container">$k$</span> no greater than <span class="math-container">$n$</span>, so that <span class="math-container">$k/n \le 1$</span>.</p> <p>Let <span class="math-container">$\lambda \in (0,1)$</span>, and we define</p> <p><span class="math-container">$$z_k:=(\lambda x + (1-\lambda)y)^{k/n}$$</span></p> <p>and</p> <p><span class="math-container">$$w_k:=(\lambda x)^{k/n}+((1-\lambda)y)^{k/n}$$</span></p> <p>then it follows that</p> <p><span class="math-container">$$(z_k^n)^{1/k}=\lambda x+(1-\lambda)y$$</span></p> <p>and that</p> <p><span class="math-container">$(w_k^n)^{1/k} =\left[(\lambda x)^{k/n}+((1-\lambda)y)^{k/n}\right]^{n/k}$</span></p> <p><span class="math-container">$=\lambda x\left[1+\left((1-\lambda)y/{\lambda x}\right)^{k/n}\right]^{n/k}$</span></p> <p><span class="math-container">$\ge \lambda x\left[1+\left((1-\lambda)y/{\lambda x}\right)^{(k/n) \cdot (n/k)}\right]$</span></p> <p><span class="math-container">$=\lambda x\left[1+\left((1-\lambda)y/{\lambda x}\right)\right]$</span></p> <p><span class="math-container">$=\lambda x+(1-\lambda)y$</span></p> <p><span class="math-container">$=(z_k^n)^{1/k} $</span>,</p> <p>since it can be shown that <span class="math-container">$(1+x)^{\alpha} \ge 1+x^{\alpha}$</span> whenever <span class="math-container">$x \in [0,\infty)$</span> and <span class="math-container">$\alpha \in [1,\infty)$</span>.</p> <p>Therefore, <span class="math-container">$w_k \ge z_k$</span> holds, and hence the continuity of <span class="math-container">$\phi (x,y)$</span> with respect to <span class="math-container">$\alpha$</span> shows that</p> <p><span class="math-container">$(\lambda x+(1-\lambda)y)^{\alpha} \le (\lambda x)^{\alpha}+((1-\lambda)y)^{\alpha}$</span> for all <span class="math-container">$\alpha \in (0,1)$</span>.</p> <p>Finally, let <span class="math-container">$\lambda \in (0,1)$</span>, and let <span class="math-container">$u:=(x_1,y_1),v:=(x_2,y_2) \in G$</span>; then we obtain</p> <p><span class="math-container">$\phi (\lambda u+(1-\lambda)v)$</span></p> <p><span class="math-container">$=(\lambda x_1+(1-\lambda)x_2)^{\alpha}(\lambda y_1+(1-\lambda)y_2)^{1-\alpha}$</span></p> <p><span class="math-container">$\ge ((\lambda x_1)^{\alpha}+((1-\lambda)x_2) ^{\alpha})((\lambda y_1)^{1-\alpha}+((1-\lambda)y_2)^{1-\alpha})$</span></p> <p><span class="math-container">$\ge (\lambda x_1)^{\alpha} (\lambda y_1)^{1-\alpha}+((1-\lambda)x_2)^{\alpha}((1-\lambda)y_2)^{1-\alpha}$</span></p> <p><span class="math-container">$=\phi (\lambda u)+\phi ((1-\lambda)v) $</span>,</p> <p>proving that <span class="math-container">$\phi(x,y)$</span> is concave.</p> <hr> <p>Is there anything wrong with my proof? I agree that my proof seems a little bit clumsy, so can anyone show me a succinct proof? Thank you!</p>
Community
-1
<p>Hint: <span class="math-container">$(z^2+1)=(z+i)(z-i)$</span>. Construct a half-circle contour from <span class="math-container">$-R$</span> to <span class="math-container">$R$</span> on the real axis, where <span class="math-container">$R&gt;1$</span>. Do not differentiate twice. Express your function as <span class="math-container">$f(z)=\frac{g(z)}{(z-i)^2}$</span>. Then observe that <span class="math-container">$g$</span> is holomorphic on a neighborhood of <span class="math-container">$i$</span> and is non-zero when evaluated at <span class="math-container">$i$</span>. Hence, <span class="math-container">$res(f,i)$</span> is the first derivative of <span class="math-container">$g$</span> evaluated at <span class="math-container">$i$</span>.</p>
4,014,768
<blockquote> <p>Find the stationary points of the curve <span class="math-container">$y=e^x\cos x$</span> for <span class="math-container">$0\le x\le\pi/2$</span> and determine their nature.</p> </blockquote> <p>I differentiated and got <span class="math-container">$e^x(-\sin x+\cos x)=0$</span>.</p> <p><span class="math-container">$e^x$</span> is never zero, but I don't know how to find the <span class="math-container">$x$</span> such that <span class="math-container">$-\sin x+\cos x=0$</span></p>
MathMinded
881,969
<p>We can first simplify the equation to have; <span class="math-container">$$\sin{x}=\cos{x}$$</span> <span class="math-container">$$\implies\sin^2{x}=\cos^2{x}.$$</span></p> <p>Adding <span class="math-container">$\cos^2{x}$</span> to both sides gives; <span class="math-container">$$\sin^2{x}+\cos^2{x}=\cos^2{x}+\cos^2{x}=2\cos^2{x}.$$</span></p> <p>Note that <span class="math-container">$\sin^2{x}+\cos^2{x}=1,$</span> so we know that; <span class="math-container">$$2\cos^2{x}=1$$</span> <span class="math-container">$$\implies\cos^2{x}=\frac{1}{2}$$</span> <span class="math-container">$$\implies\cos{x}=±\frac{\sqrt{2}}{2}.$$</span></p> <p>Easily, the only solutions to this are <span class="math-container">$x=45^\circ, 135^\circ, 225^\circ,$</span> and <span class="math-container">$315^\circ.$</span> But since we also need <span class="math-container">$\sin{x}=\cos{x},$</span> the only solutions are <span class="math-container">$x=45^\circ$</span> and <span class="math-container">$225^\circ,$</span> or <span class="math-container">$x=\frac{\pi}{4}$</span> and <span class="math-container">$\frac{5\pi}{4}.$</span> But since <span class="math-container">$0\leq x\leq\frac{\pi}{2},$</span> <span class="math-container">$x=\frac{\pi}{4}$</span> is the only solution.</p> <p>As for its nature: the second derivative is <span class="math-container">$y''=-2e^x\sin{x},$</span> which is negative at <span class="math-container">$x=\frac{\pi}{4},$</span> so this stationary point is a maximum.</p>
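A small numerical check of the stationary point and its nature (my own addition), using the derivatives $y'=e^x(\cos x-\sin x)$ and $y''=-2e^x\sin x$ of $y=e^x\cos x$:

```python
from math import exp, cos, sin, pi, isclose

x = pi / 4
first = exp(x) * (cos(x) - sin(x))   # y'  for y = e^x cos x
second = -2 * exp(x) * sin(x)        # y'' for y = e^x cos x

# y' vanishes at x = pi/4 and y'' is negative there: a maximum.
print(isclose(first, 0, abs_tol=1e-9), second < 0)  # → True True
```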
18,136
<p>I introduced the hypercube (to undergraduate students in the U.S.) in the context of generalizations of the Platonic solids, explained its structure, showed it rotating. I mentioned <a href="https://matheducators.stackexchange.com/a/1824/511">Alicia Stott</a>, who discovered the <span class="math-container">$6$</span> regular polytopes in <span class="math-container">$\mathbb{R}^4$</span> (discovered after Schläfli). I sense they largely did not grasp what the hypercube is, let alone the other regular polytopes.</p> <p>I'd appreciate hearing of techniques for getting students to "grok" the fourth dimension.</p>
user52817
1,680
<p>I highly recommend "Flatland The Movie." Your institution should be able to purchase it. You can find a free trailer on the internet.</p> <p>When I was young, I read the book "Flatland: A Romance of Many Dimensions," probably in high school, and it made me "grok" the fourth dimension. </p>
1,519,070
<p>How can I show that this recursively defined sequence converges, and how do I find its limit?</p> <blockquote> <p>Show that the recursively defined sequence $(x_n)_{n\in\mathbb{N}}$ with $$x_1=1, \qquad\qquad x_{n+1}=\sqrt{6+x_n}$$ converges and determine its limit </p> </blockquote> <p><a href="https://i.stack.imgur.com/cL7BR.png" rel="nofollow noreferrer">Image</a></p>
Gamow
210,297
<p>The sequence is defined recursively, with initial value $x_1=1$ and with recursive relation $x_{n+1}=\sqrt{6+x_n}$. Your task is to prove that it converges and to determine its limit.</p> <p>Convergence: There is a theorem that says that if a sequence is monotonically increasing and bounded from above, then it has a limit. This suggests the following:</p> <ol> <li><p>Show that the sequence is monotonically increasing.</p></li> <li><p>Show that the sequence is bounded from above.</p></li> </ol> <p>Computing the limit: use the fact that as $n$ tends to infinity, both $x_{n+1}$ and $x_n$ are roughly equal to the limit $L$.</p> <ol start="3"> <li>Set $x_{n+1}=L$ and $x_n=L$, and extract the value of $L$.</li> </ol>
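A quick numerical check of the three steps above (illustration only, not a substitute for the proof):

```python
import math

# Iterate x_{n+1} = sqrt(6 + x_n) starting from x_1 = 1.
# A fixed point L satisfies L = sqrt(6 + L), i.e. L^2 - L - 6 = 0,
# whose positive root is L = 3.
x = 1.0
for _ in range(50):
    x = math.sqrt(6 + x)

print(x)  # very close to 3
```

Setting $x_{n+1}=x_n=L$ as in step 3 gives $L^2=6+L$, so $L=3$ (the root $L=-2$ is discarded since all terms are positive).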
1,328,082
<p>I've been trying to prove the following inequality, but until now I've had problems coming up with a solution:</p> <p>$$ 2^{mn} \ge m^n $$</p> <p>Here $m$ and $n$ can be any natural numbers.</p> <p>I wasn't able to find any counterexample that would invalidate this inequality, so I am assuming that this statement is generally true, but of course this still has to be proven.</p>
marty cohen
13,079
<p>More a comment than a solution, but it points to a more general result (one that I found independently and am pleased with):</p> <p>In <a href="https://math.stackexchange.com/questions/439026/prove-that-nk-2n-for-all-large-enough-n">Prove that $n^k &lt; 2^n$ for all large enough $n$</a>, I showed that if $n$ and $k$ are integers and $k≥2$ and $n≥k^2+1$, then $2^n&gt;n^k$.</p>
2,076,831
<blockquote> <p>Is the following set path-connected?</p> <p><span class="math-container">$A=\{(x,y):y=x\sin \frac{1}{x},x&gt;0\}$</span></p> </blockquote> <p>I am unable to work out whether I should prove it or disprove it.</p> <p>Will someone please give me some hints?</p>
kub0x
309,863
<blockquote> <p>Also, please explain why this isn't one way: $f(x) = x^x$. Thanks in advance.</p> </blockquote> <p>The definition of a one-way function is that it is easy to evaluate on every input but hard to invert given only the images of inputs. In your case the evaluation itself becomes impractical when $x$ grows. Is it easy to calculate $10000^{10000}$ in a few steps and in a decent amount of time? Also, from the point of view of memory, when $x=10000$ the amount of memory needed to store that value is $\log_2(10000)\cdot 10000\approx132877$ bits, and that number has about $40000$ digits ($\log_{10}(10000)\cdot10000$). In this context I consider $10000$ a small value.</p> <p>I mainly use mathematics for cryptography research, so I can give you some examples of one-way functions that are "easy" to evaluate on inputs but hard to "guess" what is "behind" them.</p> <p>It is easy to perform modular exponentiation:</p> <p>$c \equiv a^x \pmod b$. But if an observer only knows $(a,b,c)$, then he has to recover $x$ from $a^x = bk +c$, which is impractical when $b$ and $x$ are chosen to be large. This is known as the discrete logarithm problem. (There are techniques, such as index calculus, that do better than brute force.)</p> <p>Another such problem, as has been said, is integer factorization:</p> <p>given $n=pq$ where $(p,q)$ are primes and $1\equiv ed \pmod{(p-1)(q-1)}$ with $e=65537$, the observer only knows $(e,n)$ and wants to recover $d$. He needs to factor $n$ into primes to "guess" the private key $d$ and solve the underlying problem.</p> <p>There are many more problems of this kind, e.g. in lattice theory (closest vector problem, shortest vector problem), in non-commutative groups (conjugacy search), etc.</p>
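To make the asymmetry concrete, here is a toy sketch in Python (all parameters below are my own tiny hypothetical choices; nothing at this size is remotely secure, real systems use moduli thousands of bits long):

```python
# Toy sketch of the discrete-log asymmetry.
p = 1009          # a small prime modulus (hypothetical choice)
g = 11            # base
x = 123           # the secret exponent

c = pow(g, x, p)  # forward direction: fast square-and-multiply

# Inverse direction: find some y with g^y = c (mod p) by brute force,
# which takes on the order of p multiplications.  Note y need only agree
# with x up to the multiplicative order of g.
acc, y = 1, 0
while acc != c:
    acc = (acc * g) % p
    y += 1

assert pow(g, y, p) == c
print(c, y)
```

At real key sizes the forward `pow` stays fast (it is logarithmic in the exponent), while the brute-force loop grows with the modulus.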
2,585,828
<p>I saw in an article that for every real number, there exists a Cauchy sequence of rational numbers that converges to that real number. This was stated without proof, so I'm guessing it is a well-known theorem in analysis, but I have never seen this proof. So could someone give a proof (preferably with a source) for this fact? It can use other theorems from analysis, as long as they aren't too obscure.</p>
Duncan Ramage
405,912
<p>One way to define the real numbers is as equivalence classes of Cauchy sequences (see exercise 3.24 in Principles of Mathematical Analysis by Walter Rudin for the details of the (actually, a more general) construction). Under this construction, it is obvious that every real number has such a Cauchy sequence, since every real number <em>literally is</em> a (set of) Cauchy sequence(s). Once you've done this, it is an easy exercise to prove that $\mathbb{R}$ is the only complete ordered field up to isomorphism, so it doesn't matter <em>which</em> construction you use for $\mathbb{R}$, the Cauchy construction guarantees it.</p>
2,585,828
<p>I saw in an article that for every real number, there exists a Cauchy sequence of rational numbers that converges to that real number. This was stated without proof, so I'm guessing it is a well-known theorem in analysis, but I have never seen this proof. So could someone give a proof (preferably with a source) for this fact? It can use other theorems from analysis, as long as they aren't too obscure.</p>
fleablood
280,126
<p>Let $x \in \mathbb R$. </p> <p>Let $q_i \in \mathbb Q$ so that $x - \frac 1i &lt; q_i &lt; x$.</p> <p>Claim 1: Such a sequence $\{q_i\}$ exists (because the rationals are dense in the reals).</p> <p>Claim 2: $q_i \to x$ (by the very definition of limits).</p> <p>Claim 3: $\{q_i\}$ is Cauchy. (For $n &gt; \frac 2\epsilon$ and $i,j &gt; n$ we have $|q_i - x| &lt; \frac 1i &lt; \frac \epsilon 2$ and $|q_i - q_j| \le |q_i - x| + |q_j - x| &lt; \epsilon$.)</p> <p>Basically this is the FUNDAMENTAL principle of the meaning of real numbers.</p> <p>Real numbers are such that every set bounded above has a least upper bound. That is the fundamental <em>claim</em> of the nature of real numbers. This means that any sequence in which all the terms get "close together" must have an existent <em>real</em> limit.</p> <p>A Cauchy sequence is just such a sequence. To be informal, a Cauchy sequence is a sequence that "ought to" converge although the actual limit point need not be specified (or even be specifiable).</p> <p>Another way, perhaps without choice: by the Archimedean principle there exists a unique integer $k_1$ so that $k_1 \le x &lt; k_1 + 1$, and for every integer $n$ there is a unique $k_n$ so that $k_n \le xn &lt; k_n + 1$, or $\frac {k_n}n \le x &lt; \frac {k_n}n+ \frac 1n$. Let $q_n = \frac {k_n}n$. That is a Cauchy sequence converging to $x$.</p> <p>In fact, if $k_n$ is the unique integer so that $k_n \le x\cdot 10^n &lt; k_n +1$, then the sequence $q_n = \frac {k_n}{10^n}$ is nothing more or less than the sequence of $n$th-place decimal expansions. (If $x = \pi$ then $\{q_n\} = \{3, \frac {31}{10}, \frac {314}{100}, \frac {3141}{1000}, ....\}$ etc.)</p> <p>It is precisely <em>because</em> all reals have Cauchy sequences of rationals converging to them that the concept of decimal numbers even works.</p>
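The decimal-truncation construction at the end can be carried out exactly in Python with integer square roots and rational arithmetic; here is a sketch for $x=\sqrt2$ (my choice of example):

```python
from fractions import Fraction
from math import isqrt

# Decimal truncations of sqrt(2), computed exactly: k_n = isqrt(2 * 10^(2n))
# equals floor(sqrt(2) * 10^n), so q_n = k_n / 10^n is the n-digit truncation.
def q(n):
    return Fraction(isqrt(2 * 10 ** (2 * n)), 10 ** n)

qs = [q(n) for n in range(1, 8)]

# Each q_n satisfies q_n <= sqrt(2) < q_n + 10^(-n), hence q_n^2 <= 2.
for n, qn in enumerate(qs, start=1):
    assert qn ** 2 <= 2 < (qn + Fraction(1, 10 ** n)) ** 2

print([float(qn) for qn in qs])
```

The printed values are the familiar truncations $1.4, 1.41, 1.414, \dots$, a Cauchy sequence of rationals converging to $\sqrt2$.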
3,138,450
<p>Let <span class="math-container">$A \subseteq \mathbb{R}$</span> be bounded above and let <span class="math-container">$c \in \mathbb {R}$</span>. Define the set <span class="math-container">$c + A = \{c + a : a \in A\} $</span></p> <p>Since <span class="math-container">$a \leq \sup A$</span> for all <span class="math-container">$a\in A$</span>, we have <span class="math-container">$a + c \leq \sup A + c $</span>, so <span class="math-container">$c+A$</span> has an upper bound. Now the claim is that <span class="math-container">$\sup(c+A) = c + \sup A.$</span></p> <p>So now I have to prove that for every <span class="math-container">$\varepsilon&gt; 0$</span> there exists <span class="math-container">$a \in A$</span> with <span class="math-container">$c+a &gt; c + \sup A - \varepsilon.$</span></p> <p>How can I proceed? Thanks.</p>
Theo Bendit
248,286
<p>If you wish to use <span class="math-container">$\varepsilon$</span>s instead, suppose <span class="math-container">$\varepsilon &gt; 0$</span>. I claim that <span class="math-container">$c + \sup A - \varepsilon$</span> is <strong>not</strong> an upper bound for <span class="math-container">$c + A$</span>. Since you have proven that <span class="math-container">$c + \sup A$</span> is an upper bound for <span class="math-container">$c + A$</span>, this would mean that no number less than <span class="math-container">$c + \sup A$</span> can be an upper bound of <span class="math-container">$c + A$</span>. That is, <span class="math-container">$c + \sup A$</span> is the supremum of <span class="math-container">$c + A$</span>.</p> <p>To prove this claim, it suffices to find some <span class="math-container">$x \in c + A$</span> such that <span class="math-container">$x &gt; c + \sup A - \varepsilon$</span>.</p> <p>Note that, since <span class="math-container">$\sup A$</span> is the least upper bound for <span class="math-container">$A$</span>, it follows that <span class="math-container">$\sup A - \varepsilon$</span> is <strong>not</strong> an upper bound for <span class="math-container">$A$</span>. Therefore, there must be some <span class="math-container">$a \in A$</span> such that <span class="math-container">$$a &gt; \sup A - \varepsilon.$$</span> Adding <span class="math-container">$c$</span> to both sides, <span class="math-container">$$c + a &gt; c + \sup A - \varepsilon.$$</span> Note that <span class="math-container">$c + a \in c + A$</span> by definition of <span class="math-container">$c + A$</span>. Hence, we can choose <span class="math-container">$x = c + a$</span>, and our proof of the claim is complete.</p>
391,509
<p>We have $$\dfrac{1+2+3+...+ \space n}{n^2}$$</p> <p>What is the limit of this function as $n \rightarrow \infty$?</p> <p>My idea:</p> <p>$$\dfrac{1+2+3+...+ \space n}{n^2} = \dfrac{1}{n^2} + \dfrac{2}{n^2} + ... + \dfrac{n}{n^2} = 0$$</p> <p>Is this correct?</p>
Clement C.
75,808
<p>Another method, using Riemann sums applied to $f:x\mapsto x$: $$ \frac{\sum_{k=1}^n k }{n^2} = \frac{1}{n}\sum_{k=1}^n \frac{k }{n} \xrightarrow[n\to\infty]{} \int_{0}^{1} x\,dx = \frac{1}{2} $$</p> <p>As for your question, each term goes to $0$, <em>but</em> the number of terms grows; you are not in the case where you have the sum of "a fixed number of terms, each of them converging".</p>
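If it helps intuition, a quick numerical check (not part of the argument) shows the averages settling toward $\tfrac12$:

```python
def avg(n):
    # (1 + 2 + ... + n) / n^2, which equals (n + 1) / (2n) in closed form
    return sum(range(1, n + 1)) / n ** 2

print(avg(10), avg(1000), avg(10 ** 6))  # 0.55 0.5005 0.5000005
```

This also illustrates why the "each term goes to $0$" argument fails: the $n$ vanishing terms sum to something that does not vanish.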
4,621,227
<p>This is a very soft and potentially naive question, but I've always wondered about this seemingly common phenomenon where a theorem has some method of proof which makes the statement easy to prove, but where other methods of proof are incredibly difficult.</p> <p>For example, proving that every vector space has a basis (this may be a bad example). This is almost always done via an existence proof with Zorn's lemma applied to the poset of linearly independent subsets ordered on set inclusion. However, if one were to suppose there exists a vector space <span class="math-container">$V$</span> with no basis, it seems (to me) that coming up with a contradiction given so few assumptions would be incredibly challenging.</p> <p>With that said, I had a few questions:</p> <ol> <li>Are there any other examples of theorems like this?</li> <li>Is this phenomenon simply due to the logical structure of the statements themselves, or is it something deeper? Is this something one can quantify in some way? That is, is there any formal way to study the structure of a statement, and determine which method of proof is ideal, and which is not ideal?</li> <li>With (1) in mind, are there ever any efforts to come up with proofs of the same theorem using multiple methods for the sake of interest?</li> </ol>
Brauer Suzuki
960,602
<p>Burnside's theorem that every finite group of order <span class="math-container">$p^aq^b$</span> (where <span class="math-container">$p,q$</span> are primes) is solvable has a short proof using character theory and a much longer proof without characters. See M. Isaacs' books for both proofs.</p>
3,091,162
<p>Let <span class="math-container">$G_1, G_2$</span> be two groups with at least one nontrivial proper subgroup each.</p> <p>Let <span class="math-container">$S_1, S_2$</span> be the sets of proper subgroups of, respectively <span class="math-container">$G_1, G_2$</span>.</p> <p>Suppose there exists a bijective function <span class="math-container">$f: S_1 \rightarrow S_2$</span> such that <span class="math-container">$\forall A\in S_1, f(A)$</span> is isomorphic to <span class="math-container">$A$</span>.</p> <p>When can I conclude that <span class="math-container">$G_1, G_2$</span> are isomorphic?</p> <p>I think that, if <span class="math-container">$G_1$</span> and <span class="math-container">$G_2$</span> are finite and abelian we can conclude that they are isomorphic, but I can't prove It. Moreover, I haven't found any counterexample for nonabelian finite groups.</p>
the_fox
11,450
<p>Certainly not always. I'd be surprised if there is a concrete set of conditions which is both necessary and sufficient to conclude isomorphism between the two groups. (My answer refers to finite groups only.)</p> <p>There are groups which are called <span class="math-container">$P$</span>-groups in Schmidt's book "Subgroup Lattices of Groups" (not be confused with <span class="math-container">$p$</span>-groups) and which are lattice-isomorphic to elementary abelian groups.</p> <p><a href="https://i.stack.imgur.com/LIFtO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LIFtO.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/dpg3n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dpg3n.png" alt="enter image description here"></a></p> <hr> <p>Added for clarity:</p> <p><a href="https://i.stack.imgur.com/583Oi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/583Oi.png" alt="enter image description here"></a></p>
3,091,162
<p>Let <span class="math-container">$G_1, G_2$</span> be two groups with at least one nontrivial proper subgroup each.</p> <p>Let <span class="math-container">$S_1, S_2$</span> be the sets of proper subgroups of, respectively <span class="math-container">$G_1, G_2$</span>.</p> <p>Suppose there exists a bijective function <span class="math-container">$f: S_1 \rightarrow S_2$</span> such that <span class="math-container">$\forall A\in S_1, f(A)$</span> is isomorphic to <span class="math-container">$A$</span>.</p> <p>When can I conclude that <span class="math-container">$G_1, G_2$</span> are isomorphic?</p> <p>I think that, if <span class="math-container">$G_1$</span> and <span class="math-container">$G_2$</span> are finite and abelian we can conclude that they are isomorphic, but I can't prove It. Moreover, I haven't found any counterexample for nonabelian finite groups.</p>
Lucio Tanzini
521,474
<p>Here is the proof in the case of finite abelian groups <span class="math-container">$G_1, G_2$</span> as above.</p> <p><strong>Lemma 1</strong> Let <span class="math-container">$G^{(n)}$</span> be the number of elements in <span class="math-container">$G$</span> of order <span class="math-container">$n$</span>. <span class="math-container">$G^{(n)}$</span> is uniquely determined by the number of cyclic subgroups of <span class="math-container">$G$</span> of order <span class="math-container">$n$</span>.</p> <p><em>Proof</em> Every element of order <span class="math-container">$n$</span> is an element of exactly one cyclic subgroup of <span class="math-container">$G$</span> of order <span class="math-container">$n$</span>. All the cyclic subgroups of order <span class="math-container">$n$</span> have <span class="math-container">$\phi(n)$</span> elements of order <span class="math-container">$n$</span>.</p> <p><strong>Lemma 2</strong> Let <span class="math-container">$p$</span> be a prime that divides <span class="math-container">$|G|$</span>. Then the numbers <span class="math-container">$G^{(p)}, G^{(p^2)},...$</span> uniquely determine the p-Sylow of <span class="math-container">$G$</span>.</p> <p><em>Proof</em> The p-Sylow, P, of G is of the form <span class="math-container">$\mathbb{Z}_{p^{a_1}} \times ... \times \mathbb{Z}_{p^{a_n}}$</span>. Moreover, let <span class="math-container">$P^{(\leq p^k)}$</span> be the number of elements of P whose order is at most <span class="math-container">$p^k$</span>. Then <span class="math-container">$$ P^{(\leq p^k)}=\prod_{i\leq n}{\min (p^{a_i}, p^k)},$$</span> so we can determine <span class="math-container">$a_1,...,a_n$</span>.</p> <p>The result then follows easily.</p>
3,783,186
<p>I am trying to prove that <span class="math-container">$$2≤\int_{-1}^1 \sqrt{1+x^6} \,dx ≤ 2\sqrt{2} $$</span> I learned that the equation <span class="math-container">$${d\over dx}\int_{g(x)}^{h(x)} f(t)\,dt = f(h(x))h'(x) - f(g(x))g'(x) $$</span> is true due to Fundamental Theorem of Calculus and Chain Rule, and I was thinking about taking the derivative to all side of the inequality, but I am not sure that it is the correct way to prove this. Can I ask for a help to prove the inequality correctly? Any help would be appreciated! Thanks!</p>
Integrand
207,050
<p>Surprised not to see this technique yet.</p> <p>For <span class="math-container">$|z|&lt; 1$</span> (the most conservative convergence case), the generalized binomial theorem states <span class="math-container">$$ (1+z)^a = \sum_{k=0}^{\infty} \binom{a}{k}z^{k} $$</span>In particular, for <span class="math-container">$z=x^6$</span> and <span class="math-container">$a=1/2$</span>, we get convergence for <span class="math-container">$|x|\le 1$</span>, <a href="https://math.stackexchange.com/q/340124"><span class="math-container">$\binom{1/2}{k}$</span> is well-known</a>, and we have <span class="math-container">$$ (1+x^6)^{1/2} = \sum_{k=0}^{\infty} \binom{1/2}{k}x^{6k} = 1 + \frac{1}{2}x^6 - \frac{1}{8}x^{12}+\frac{1}{16}x^{18}-\frac{5}{128}x^{24}+\cdots $$</span>Now we can power-rule our way out of it: <span class="math-container">$$ \frac{773}{364}=\int _{-1}^{1}1 + \frac{1}{2}x^6 - \frac{1}{8}x^{12}\,dx \le \int _{-1}^{1} \sqrt{1+x^6}\,dx \le \int _{-1}^{1} 1 + \frac{1}{2}x^6\,dx = \frac{15}{7} $$</span>The inequalities follow because the series is alternating, so if we end on a positive term we overestimate it and if we end on a negative term we underestimate it. The inequalities could be improved by adding more terms.</p>
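The two bounds can be sanity-checked numerically (a sketch; the midpoint-rule step count below is an arbitrary choice of mine):

```python
from fractions import Fraction

# Exact values of the two bounding integrals obtained from the series:
#   lower: integral of 1 + x^6/2 - x^12/8,  upper: integral of 1 + x^6/2.
lower = 2 * (Fraction(1) + Fraction(1, 2) / 7 - Fraction(1, 8) / 13)  # 773/364
upper = 2 * (Fraction(1) + Fraction(1, 2) / 7)                        # 15/7

# Midpoint-rule estimate of the integral of sqrt(1 + x^6) over [-1, 1].
N = 100_000
h = 2 / N
est = h * sum((1 + (-1 + (i + 0.5) * h) ** 6) ** 0.5 for i in range(N))

assert float(lower) < est < float(upper)
print(float(lower), est, float(upper))
```

The estimate lands strictly between the two partial-sum bounds, as the alternating-series argument predicts.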
643,237
<p>This statement is suggested as a correction to <a href="https://math.stackexchange.com/questions/643122/splitting-field-containing-nth-root">this question</a>:</p> <p>If $K$ is the splitting field of the polynomial $P(x)=x^n-a$ over $\mathbb{Q}$, prove that $K$ contains all the $n$th roots of unity.</p> <p>How to prove it?</p>
monroej
121,862
<p>If $K$ is the splitting field for that polynomial then it contains all $n$th roots of $a$, as you know. So if $\omega$ is an primitive $n$th root of unity, then the roots of $P(x)=x^n-a$ are $\sqrt[n]{a},\sqrt[n]{a}\omega,...,\sqrt[n]{a}\omega^{n-1}$ just as JJ Beck said. We can multiply $\sqrt[n]{a}\omega$ by $\frac{1}{\sqrt[n]{a}}$, since inverses exist in $K$. This give us $\omega \in K$. Therefore, all powers of $\omega$ are in $K$. Since $\omega$ was primitive, we get all $n$th roots of unity.</p>
2,709,878
<p>How do I know that an equation will have an extraneous solution? </p> <p>For example, in this question: $2\log_9(x) = \log_9(2) + \log_9(x + 24)$</p>
Jyrki Lahtonen
11,619
<p>If some step in your process of solving the equation was only a logical implication (rather than an equivalence), then the process may have introduced extraneous solutions, and you need to check all of them to be sure.</p> <p>Typically the difference is that a step in the solution is not reversible. For example: if $a=b$ then we can conclude that $a^2=b^2$, but we cannot reliably conclude from $a^2=b^2$ that $a=b$ also holds. Similarly from $\log_a x=\log_a y$ (with $a$ a positive constant $\neq1$) we can conclude that $x=y$, but from $x=y$ we cannot conclude that $\log_a x=\log_a y$, because both $x$ and $y$ could be negative, and outside the domain of definition.</p> <hr> <p>It is IMHO pointless to memorize rules, when you are supposed to think about the logic instead. Alas, teachers worldwide seem to prefer memorizing exceptions to thinking. Luckily my junior high teacher was of a different breed. That was before the people higher up started thinking that logic = elitism, and doing well in PISA tests is all students should aim at.</p>
632,850
<blockquote> <p>Suppose that $f(x)$ is differentiable on $[0,1]$ and $f(0) = f(1) = 0$. It is also known that $|f''(x)| \le A$ for every $x \in (0,1)$. Prove that $|f'(x)| \le A/2$ for every $x \in [0,1]$.</p> </blockquote> <p>I'll explain what I did so far. First using Rolle's theorem, there is some point $c \in [0,1]$ so $f'(c) = 0$.</p> <p>EDIT: My first preliminary solution was wrong so I tried something else. EDIT2: Another revision :\</p> <p>I define a Taylor series of a second order around the point $1$: $$ f(x) = f(1) + f'(1)(x-1) + \frac12 f''(d_1)(x-1)^2 $$ $$ f(0) = f(1) + f'(1)(-1) + \frac12 f''(d_1)(-1)^2 $$ $$ |f'(1)| = \frac12 |f''(d_1)| &lt;= \frac12 A $$</p> <p>Now I develop a Taylor series of a first order for $f'(x)$ around $1$: $$ f'(x) = f'(1) + f''(d_2)(x-1) $$ $$ |f'(x)| = |f'(1)| + x*|f''(d_2)|-|f''(d_2)| \leq \frac{A}{2} + A - A = \frac{A}{2} $$</p> <p>It looks correct to me, what do you guys think?</p> <p><strong>Note: I cannot use integrals, because we have not covered them yet.</strong></p>
Feu
120,558
<p>I've already given an answer without integrals to this question in another post. I'll duplicate the answer here. If it's against policy please tell me what I should do.</p> <blockquote> <p>I managed an answer using, both times, a Taylor expansion centered at a point $x \in \left [0, 1 \right ]$. I get $$ f(h) = f(x)+f'(x)(h-x) + \frac{1}{2} f''(\xi(h))(h-x)^2 $$</p> <p>Now for $h = 0$ and $h = 1$, I get </p> <p>$$0=f(x)-xf'(x)+\frac{1}{2}x^2f''(\xi(0))$$ and $$0=f(x)+f'(x)-xf'(x)+\frac{1}{2}(1-x)^2f''(\xi(1)).$$ Subtracting one from the other to eliminate $f(x)$, a little manipulation yields: $$|f'(x)| = \frac{1}{2}|x^2f''(\xi(0))-(1-x)^2f''(\xi(1))|\leq\frac{1}{2}(|x^2f''(\xi(0))|+|(1-x)^2f''(\xi(1))|)\leq \frac{A}{2}(|x^2|+|1-x|^2) $$</p> <p>Since $x \in \left [0, 1 \right ]$, $(|x^2|+|1-x|^2) \leq1$, and we get the result.</p> </blockquote>
4,474,095
<p>There is one thing I can't grasp about the proof given in the Linear Algebra Done Right book by Sheldon Axler (attached below).</p> <p>In the last part it says that <span class="math-container">$(T - \lambda_1I)...(T - \lambda_mI)v = 0$</span>, hence <span class="math-container">$T - \lambda_jI$</span> is not injective for some <span class="math-container">$j$</span>.</p> <p>What I don't understand is why the following reasoning is not correct:</p> <ul> <li>the factors in the equation can be reordered.</li> <li>suppose <span class="math-container">$\lambda_j$</span> is the only eigenvalue <span class="math-container">$T$</span> has. Let's put it at the end: <span class="math-container">$(T - \lambda_1I)...(T - \lambda_mI)(T - \lambda_jI)v = 0$</span></li> <li>the only way for the expression above to be equal to <span class="math-container">$0$</span> is if <span class="math-container">$(T - \lambda_jI)v = 0$</span> (because <span class="math-container">$\lambda_j$</span> is the only eigenvalue, so the other <span class="math-container">$T - \lambda_iI$</span> are injective).</li> <li>hence <span class="math-container">$v$</span> is an eigenvector of <span class="math-container">$T$</span> corresponding to the eigenvalue <span class="math-container">$\lambda_j$</span>. But <span class="math-container">$v$</span> was chosen arbitrarily, so it can't be true.</li> </ul> <p>I know that my logic is flawed but I can't see where. Would appreciate it if someone pointed out to me where I'm wrong.</p> <p><a href="https://i.stack.imgur.com/Iwl4U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iwl4U.png" alt="proof" /></a></p>
Acccumulation
476,070
<blockquote> <p>the factors in the equation can be reordered.</p> </blockquote> <p>The factors in the characteristic polynomial can be re-ordered, but operator factors can't in general be, and it's not entirely trivial to prove that they can be in this case.</p> <blockquote> <p>hence v is an eigenvector of T corresponding to the eigenvalue λj. But v was chosen arbitrarily, so it can't be true.</p> </blockquote> <p>What you seem to be getting at is that your conclusion is that all vectors are eigenvectors. It certainly can be the case that all vectors are eigenvectors of an operator. All vectors are eigenvectors of an operator that's a scalar multiple of the identity, and in one-dimensional space, all operators are such.</p> <p>But okay, let's take n&gt;1 and T be an arbitrary operator. Your confusion seems to be that you made an assumption (λ is the only eigenvalue), and arrived at a false statement. The conclusion is that the assumption is false. I'm not clear on what you think the problem is. For vector spaces over algebraically closed fields (and <span class="math-container">$\mathbb C$</span> is algebraically closed), the number of eigenvalues (including multiplicity) is equal to the dimension (in fact, this can be proven by extending the very argument that you presented). So when you said &quot;suppose <span class="math-container">$λ_j$</span> is the only eigenvalue T has&quot;, you're supposing something that (for dimension greater than 1) can't be the case, unless <span class="math-container">$λ_j$</span> is a multiplicity-n eigenvalue. You assumed something false, and came to a false conclusion. What's the problem?</p>
1,107,932
<p>Is the map $f:S_n \to A_{n+2}$ given by </p> <p>$$f(s)= \begin{cases} s &amp; s\ \text{is even}\\ s \circ (n+1,\ n+2) &amp; s\ \text{is odd} \end{cases}$$ </p> <p>an injective homomorphism? I can show that if it is a homomorphism then it is injective but having difficulty in showing that $f$ is a homomorphism. Please help. </p>
Michael Albanese
39,599
<p>A map $f : S_n \to A_{n+2}$ is a homomorphism if $f(\sigma\circ\tau) = f(\sigma)\circ f(\tau)$ for every $\sigma, \tau \in S_n$. As $f$ is defined piecewise, we split this verification into cases.</p> <ol> <li>Both $\sigma$ and $\tau$ are even. Note that $\sigma\circ\tau$ is even so </li> </ol> <p>$$f(\sigma\circ\tau) = \sigma\circ\tau = f(\sigma)\circ f(\tau).$$</p> <ol start="2"> <li>Both $\sigma$ and $\tau$ are odd. Note that $\sigma\circ\tau$ is even so</li> </ol> <p>\begin{align*} f(\sigma\circ\tau) &amp;= \sigma\circ\tau\\ &amp;= \sigma\circ\tau\circ(n+1, n+2)\circ (n+1, n+2)\\ &amp;= \sigma\circ(n+1, n+2)\circ\tau\circ(n+1, n+2)\\ &amp;= f(\sigma)\circ f(\tau). \end{align*}</p> <ol start="3"> <li>$\sigma$ is even and $\tau$ is odd. Note that $\sigma\circ\tau$ is odd so</li> </ol> <p>$$f(\sigma\circ\tau) = \sigma\circ\tau\circ(n+1, n+2) = f(\sigma)\circ f(\tau).$$</p> <ol start="4"> <li>$\sigma$ is odd and $\tau$ is even. Note that $\sigma\circ\tau$ is odd so</li> </ol> <p>$$f(\sigma\circ\tau) = \sigma\circ\tau\circ(n+1, n+2) = \sigma\circ(n+1, n+2)\circ\tau = f(\sigma)\circ f(\tau).$$</p> <p>So, for any $\sigma, \tau \in S_n$, $f(\sigma\circ\tau) = f(\sigma)\circ f(\tau)$, so $f$ is a homomorphism.</p>
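For what it's worth, the four cases can also be checked exhaustively for small $n$. Here is a brute-force sketch for $n=3$ (so $S_3\to A_5$), with permutations written as tuples of 0-based images, an encoding of my own choosing:

```python
from itertools import permutations

n = 3  # small case: f maps S_3 into A_5

def compose(s, t):
    # (s after t)(i) = s(t(i)); permutations stored as tuples of images
    return tuple(s[t[i]] for i in range(len(s)))

def is_even(s):
    inversions = sum(1 for i in range(len(s))
                       for j in range(i + 1, len(s)) if s[i] > s[j])
    return inversions % 2 == 0

def embed(s):
    # view a permutation of {0,...,n-1} as one of {0,...,n+1} fixing n, n+1
    return s + (n, n + 1)

swap = tuple(range(n)) + (n + 1, n)  # the transposition (n+1, n+2)

def f(s):
    return embed(s) if is_even(s) else compose(embed(s), swap)

Sn = list(permutations(range(n)))
assert all(is_even(f(s)) for s in Sn)  # the image lies in A_{n+2}
assert all(f(compose(s, t)) == compose(f(s), f(t)) for s in Sn for t in Sn)
print("homomorphism verified for n =", n)
```

The two assertions check exactly the two facts in the answer: the image is even, and $f$ respects composition in all four parity cases at once.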
1,107,932
<p>Is the map $f:S_n \to A_{n+2}$ given by </p> <p>$$f(s)= \begin{cases} s &amp; s\ \text{is even}\\ s \circ (n+1,\ n+2) &amp; s\ \text{is odd} \end{cases}$$ </p> <p>an injective homomorphism? I can show that if it is a homomorphism then it is injective but having difficulty in showing that $f$ is a homomorphism. Please help. </p>
Ben Blum-Smith
13,120
<p>One more way to think about it:</p> <p>Conceive of your map $f$ as actually being from $S_n$ to $S_{n+2}$. The image lies in the subgroup $S_n\times S_2$ consisting of permutations that act independently on the first $n$ and the last $2$ indices. You can see your map as being the identity map to the first factor ($S_n$), and exactly the <a href="http://en.wikipedia.org/wiki/Parity_of_a_permutation" rel="nofollow">sign homomorphism</a> to the second factor ($S_2$). Therefore it is a direct product of homomorphisms, so it is a homomorphism.</p> <p>It happens (because of how you constructed it) that the image lies inside $A_{n+2}$, so it's a homomorphism of $S_n$ into $A_{n+2}$.</p>
836,753
<p>$\{X_1, X_2, \ldots, X_{121}\}$ are independent and identically distributed random variables such that $E(X_i)= 3$ and $\mathrm{Var}(X_i)= 25$. What is the standard deviation of their average? In other words, what is the standard deviation of $\bar X= {X_1+ X_2+ \cdots + X_{121} \over 121}$?</p>
anomaly
156,999
<p>Since the $X_i$ are independent, their variance is additive: $\text{Var}(X_1 + \cdots + X_n) = \text{Var}(X_1) + \cdots + \text{Var}(X_n)$. Furthermore, $\text{Var}(\lambda X_i) = \lambda^2\text{Var}(X_i)$ for any constant $\lambda$. Thus the mean $$\overline{X} = \frac{X_1 + \cdots + X_n}{n}$$ satisfies $$\text{Var}(\overline{X}) = \text{Var}\left(\frac{X_1 + \cdots + X_n}{n}\right) = \frac{1}{n^2}\left(\text{Var}(X_1) + \cdots + \text{Var}(X_n)\right) = \frac{\sigma^2}{n},$$ where $\sigma^2$ is the common variance of the $X_i$.</p>
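So here $\sigma^2/n = 25/121$ and the standard deviation of $\bar X$ is $5/11$. A quick Monte Carlo sanity check (the normal distribution below is just one convenient choice with the stated mean and variance; the question does not specify the distribution):

```python
import random
import statistics

# Exact answer: Var(mean) = 25/121, so sd(mean) = 5/11.
exact = 5 / 11

# Simulate many means of 121 i.i.d. draws with mean 3 and variance 25;
# normal(3, 5) is a hypothetical but convenient choice of distribution.
random.seed(0)
means = [statistics.fmean(random.gauss(3, 5) for _ in range(121))
         for _ in range(20_000)]
sim = statistics.stdev(means)

print(exact, sim)
```

The simulated standard deviation of the sample means lands close to $5/11 \approx 0.4545$.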
3,921,259
<p>I need to calculate the operator norm of the linear operator defined as: <span class="math-container">$$ T:(C([0,1]),||\cdot||_\infty)\rightarrow\mathbb{R} \text{ where } Tf=\sum_{k=1}^n a_kf(t_k)$$</span> for <span class="math-container">$0\leq t_1&lt;t_2&lt;...&lt;t_n\leq 1$</span> and <span class="math-container">$a_1,...,a_n\in \mathbb{R}$</span>.</p> <p>I have been able to show that <span class="math-container">$||T||\geq\left|\sum_{k=1}^n a_k\right|$</span> and <span class="math-container">$||T||\leq \sum_{k=1}^n|a_k|$</span> but I don't seem to be able to bound it further than that. Would appreciate any help. Thank you.</p>
copper.hat
27,978
<p>It might help to separate <span class="math-container">$T$</span> into the composition of two simpler maps.</p> <p>Note that the map <span class="math-container">$s:C[0,1] \to \mathbb{R}^n$</span> defined by <span class="math-container">$s(f) = (f(t_1),...,f(t_n))$</span> has norm one (using the <span class="math-container">$l_\infty$</span> norm on <span class="math-container">$\mathbb{R}^n$</span>) and is surjective. Furthermore, for any <span class="math-container">$y$</span> with <span class="math-container">$\|y\|_\infty =1$</span> it is easy to construct (by interpolation &amp; extrapolation) some <span class="math-container">$f \in C[0,1]$</span> with <span class="math-container">$\|f\| = 1$</span> such that <span class="math-container">$s(f) = y$</span>.</p> <p>Let the operator <span class="math-container">$\tau:\mathbb{R}^n \to \mathbb{R}$</span> be given by <span class="math-container">$\tau(y) = \sum_k a_k y_k$</span>. It is straightforward to show that <span class="math-container">$\|\tau\| = \|a\|_1$</span> and there is some <span class="math-container">$y \in \mathbb{R}^n$</span> such that <span class="math-container">$\|y\|_\infty = 1$</span> and <span class="math-container">$\tau(y) = \|a\|_1$</span>.</p> <p>Note that <span class="math-container">$T = \tau \circ s$</span>.</p> <p>Hence <span class="math-container">$\|T\| \le \|s\| \|\tau\| = \|a\|_1$</span>. Suppose <span class="math-container">$\tau(y) = \|a\|_1$</span> with <span class="math-container">$\|y\|_\infty = 1$</span> and <span class="math-container">$s(f) = y$</span> where <span class="math-container">$\|f\| = 1$</span>, then <span class="math-container">$Tf = \tau(s(f)) = \tau(y) = \|a\|_1$</span> and so <span class="math-container">$\|T\| = \|a\|_1$</span>.</p>
4,417,353
<p><a href="https://i.stack.imgur.com/td05G.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/td05G.png" alt="Here is the question. I sent it as an image since I could not find how to write a1 as below." /></a></p> <p>My first approach is that I need to use induction on that. However the problem is: what is the upper bound for this sequence?</p> <p>Also, let's say we showed that it converges. Doesn't that directly imply that the point of convergence is the limit of the sequence? What is the difference?</p> <p>Note: This is a non-graded homework question. Since the statement is only in the image, I felt I should explain that here.</p>
José Carlos Santos
446,262
<p>For each <span class="math-container">$n\in\Bbb N$</span>, <span class="math-container">$a_n&lt;2$</span>. This can be proved by induction: <span class="math-container">$a_1=\sqrt2&lt;2$</span> and if <span class="math-container">$a_n&lt;2$</span>, then <span class="math-container">$a_{n+1}=\sqrt2^{a_n}&lt;\sqrt2^2=2$</span>.</p> <p>On the other hand, <span class="math-container">$(a_n)_{n\in\Bbb N}$</span> is increasing: it is not hard to prove that if <span class="math-container">$x\in(0,2)$</span>, then <span class="math-container">$\sqrt2^x&gt;x$</span>.</p> <p>Since <span class="math-container">$(a_n)_{n\in\Bbb N}$</span> is increasing and bounded, it converges. But this information does not tell you what <span class="math-container">$\lim_{n\to\infty}a_n$</span> is.</p>
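A quick numerical illustration of the two facts above (increasing, bounded by 2) and of the sequence creeping up toward 2:

```python
import math

# Iterate a_{n+1} = sqrt(2)^{a_n} starting from a_1 = sqrt(2).
seq = [math.sqrt(2)]
for _ in range(40):
    seq.append(math.sqrt(2) ** seq[-1])

assert all(x < y for x, y in zip(seq, seq[1:]))   # strictly increasing
assert all(x < 2 for x in seq)                    # bounded above by 2
print(seq[-1])   # close to 2
```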
2,414,721
<p>For all perfect numbers $N$, $\sigma (N) = 2N$, where $\sigma$ is the divisor sigma function.</p> <p>Let $s$ be a perfect number of the form $3^m 5^n 7^k$, where $m,n,k \geq 1$ are integers.</p> <p>Then $\sigma (s)= \sigma (3^m 5^n 7^k)$</p> <p>$ =\sigma (3^m) \sigma (5^n) \sigma (7^k)$ since $3, 5,$ and $7$ are coprime to each other.</p> <p>$ =\left(\frac{3^{m+1}-1}{2}\right)\left(\frac{5^{n+1}-1}{4}\right)\left(\frac{7^{k+1}-1}{6}\right)$</p> <p>$ =2(3^m 5^n 7^k)$ since $s$ is a perfect number.</p> <p>$\implies 9 (3^m 5^n 7^k) = 3^{m+1} 5^{n+1}+3^{m+1} 7^{k+1} + 5^{n+1} 7^{k+1} - 3^{m+1}-5^{n+1} - 7^{k+1}+1$ after some algebra.</p> <p>This is as far as I got using this method. Any and all help would be appreciated.</p>
user357980
357,980
<p>Intuitively speaking, the left hand side is generally way smaller than the right hand side because of all the exponentials, so dividing by those exponentials, $3^{m+1}, 5^{n+1}, 7^{k+1}$, is a good way to exploit that behavior.</p> <p>Then you basically play around to find that no values for $m, n$, and $k$ work. (I checked, but it seems that writing it up would solve it for you.)</p>
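Not a proof, but a brute-force corroboration over a small range of exponents (1 through 7, chosen arbitrarily): the equality $\sigma(s)=2s$ never holds there.

```python
def sigma_pp(p, e):
    # sigma(p^e) = (p^(e+1) - 1) / (p - 1) for a prime power
    return (p ** (e + 1) - 1) // (p - 1)

hits = []
for m in range(1, 8):
    for n in range(1, 8):
        for k in range(1, 8):
            s = 3**m * 5**n * 7**k
            if sigma_pp(3, m) * sigma_pp(5, n) * sigma_pp(7, k) == 2 * s:
                hits.append((m, n, k))
print(hits)   # [] -- no perfect number of this shape in the searched range
```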
464,586
<blockquote> <p>Find a basis for $U=\{A\in\mathbb{M}_{22}\mid A^T=-A\}$.</p> </blockquote> <p>$\mathbb{M}_{22}$ denotes the set of all $2 \times 2$ matrices. This question appeared on an examination I wrote yesterday. Does a basis even exist? I can't think of any matrices in which $A^T=-A$ except for the zero matrix. If this is the case, I was taught (if I recall correctly), that $0$ can not be in a basis because it is linearly dependent.</p> <p>$\begin{bmatrix}a&amp;b\\b&amp;c\end{bmatrix}\to \begin{bmatrix}-a&amp;-b\\-b&amp;-c \end{bmatrix}\implies a=b=c=0?$ </p>
Cameron Buie
28,900
<p><strong>Hint</strong>: The diagonal entries must be $0$ and the other entries must be opposite. (Why?) Can you think of a non-zero matrix with this form? What relationship will other elements of $U$ have to this matrix?</p> <hr> <p>It is true that the only element of $U$ having the form $$\begin{bmatrix}a&amp;b\\b&amp;c\end{bmatrix}$$ is the zero matrix, but I see no reason to assume that we're dealing with symmetric matrices only.</p>
674,621
<p>I am trying to figure out what the three possibilities of $z$ are such that </p> <p>$$ z^3=i $$</p> <p>but I am stuck on how to proceed. I tried algebraically but ran into rather tedious polynomials. Could you solve this geometrically? Any help would be greatly appreciated.</p>
Lubin
17,760
<p>The answer of @Petaro is best, because it suggests how to deal with such questions generally, but here’s another approach to the <em>specific</em> question of what the cube roots of $i$ are.</p> <p>You know that $(-i)^3=i$, and maybe you know that $\omega=(-1+i\sqrt3)/2$ is a cube root of $1$. So the cube roots of $i$ are the numbers $-i\omega^n$, $n=0,1,2$.</p>
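A quick floating-point verification of these three roots:

```python
# Verify that -i * omega^n, n = 0, 1, 2, are the cube roots of i,
# where omega = (-1 + i*sqrt(3))/2 is a primitive cube root of unity.
omega = (-1 + 1j * 3 ** 0.5) / 2
roots = [-1j * omega ** n for n in range(3)]
for z in roots:
    print(z, abs(z ** 3 - 1j))   # each residual is at rounding-error level
```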
1,443,471
<p>I have this conjecture: Let <em>a</em> and <em>b</em> be integers and <em>n</em> and <em>m</em> natural numbers. </p> <p>$$ a \equiv b \bmod n \Rightarrow a^m \equiv b^m \bmod n$$</p> <p>I think I got the induction proof, but I'm having difficulty proving this with the well-ordering principle.</p>
Cloudscape
180,668
<p>Let $k$ be the smallest natural number such that $a^k \not\equiv b^k \mod n$ (such a number exists by the well-ordering principle). We have that $a \equiv b \mod n$ (hence $k \ge 2$). If now $a^{k-1} \equiv b^{k-1} \mod n$, then</p> <p>$$ a^k \equiv a^{k-1} a \equiv b^{k-1} a \equiv b^{k-1} b \equiv b^k \mod n, $$</p> <p>which contradicts $a^k \not\equiv b^k \mod n$. Hence, $a^{k-1} \not\equiv b^{k-1} \mod n$, and thus $j:=k-1$ is a smaller number with $a^j \not\equiv b^j \mod n$, and since $k \ge 2$, $j$ is a natural number. This contradicts the fact that $k$ was the smallest number with that property.</p> <p>EDIT: By 'natural number' I mean positive integer.</p>
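A randomized spot check of the statement (illustration only, not a proof):

```python
import random

# Whenever a ≡ b (mod n), also a^m ≡ b^m (mod n).
random.seed(0)
for _ in range(1000):
    n = random.randint(2, 50)
    b = random.randint(-100, 100)
    a = b + n * random.randint(-5, 5)   # guarantees a ≡ b (mod n)
    m = random.randint(1, 10)
    assert (a ** m - b ** m) % n == 0
print("ok")
```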
3,828,205
<blockquote> <p>Find all integers <span class="math-container">$x$</span> and <span class="math-container">$y$</span> that satisfy <span class="math-container">$$x^4-12x^2+x^2y^2+30 &lt; 0$$</span></p> </blockquote> <p>Letting <span class="math-container">$a = x^2$</span> and <span class="math-container">$b = y^2$</span> I got <span class="math-container">$$a^2-12a+ab+30 &lt; 0$$</span></p> <p>from which I managed to get <span class="math-container">$$a^2-12a+30+ab &lt;0 \Rightarrow (a-6)^2+ab &lt; 6.$$</span></p> <p>However I'm not sure how to proceed from here. What should I do?</p>
Michael Rozenberg
190,319
<p><span class="math-container">$$x^2y^2&lt;6-(x^2-6)^2\leq6-2^2=2,$$</span> which says <span class="math-container">$$x^2y^2=0$$</span> or <span class="math-container">$$x^2y^2=1.$$</span> The last is impossible because <span class="math-container">$x^2=4.$</span></p> <p>Can you end it now?</p>
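A brute-force search corroborates this: for $|x|\ge4$ we already have $x^4-12x^2\ge64$ and the remaining terms are nonnegative, so a small search box catches every integer solution.

```python
# Exhaustive search over a box that provably contains all solutions.
solutions = [(x, y) for x in range(-10, 11) for y in range(-10, 11)
             if x**4 - 12*x**2 + x**2 * y**2 + 30 < 0]
print(solutions)   # [(-2, 0), (2, 0)]
```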
198,049
<p>Let $S$ be a smooth projective surface over $\mathbb{C}$. (I guess this can be more general—higher dimension, other ground fields, non-projective, maybe even singular?—and I'd like to hear that.) Let $s \in S$ be a point. Let $\beta \colon X \to S$ be the blowup of $s \in S$. Suppose that $H^{i}(S, T_{S})$ is known for $i \in \{0,1,2\}$, as well as $\mathrm{Def}(S)$.</p> <p>If I am not mistaken, the exceptional divisor $E$, which is a $(-1)$-curve, is rigid, in the sense that every deformation of $X$ also has a $(-1)$-curve. Therefore $$\mathrm{Def}(X) \cong \mathrm{Def}(X,E) \cong \mathrm{Def}(S,s) \cong \mathrm{Def}(S) \times T_{S,s},$$ where the last isomorphism is not canonical. (Rather $T_{S,s}$ is the kernel of the forgetful map $\mathrm{Def}(S,s) \to \mathrm{Def}(S)$.)</p> <p>This question is about the cohomological side of the picture, i.e. $H^{i}(X,T_{X})$ for $i \in \{0,1,2\}$. My intuition says that $H^{1}(X,T_{X})$ should also increase with dimension two, whereas the obstruction space $H^{2}(X,T_{X})$ should stay the same. I've tried to fiddle around with the spectral sequence $$ H^{p}(S, R^{q}\beta_{*}T_{X}) \Longrightarrow H^{p+q}(X, T_{X}) $$ but I could not really come to the desired conclusions.</p> <p>For $H^{0}(X, T_{X})$ we get the term $H^{0}(S, \beta_{*}T_{X})$.<br> For $H^{1}(X, T_{X})$ we get the terms $H^{1}(S, \beta_{*}T_{X})$ and $H^{0}(S, R^{1}\beta_{*}T_{X})$.
Now $R^{1}\beta_{*}T_{X}$ is a skyscraper sheaf supported on $s$, and vague geometric intuition makes me think that it is the tangent space $T_{S,s}$.<br> Finally, $R^{2}\beta_{*}T_{X} = 0$, so for $H^{2}(X, T_{X})$ we get the terms $H^{2}(S, \beta_{*}T_{X})$ and $H^{1}(S, R^{1}\beta_{*}T_{X})$.<br> But maybe this isn't the right way to approach the question…</p> <p>So the main question is:</p> <blockquote> <p>What are the $H^{i}(X,T_{X})$ for $i \in \{0,1,2\}$?</p> </blockquote> <p>I've not been able to find this via google, though I guess this is pretty basic knowledge in deformation theory. But I'm pretty new to this field, so please bear with me.</p>
Francesco Polizzi
7,460
<p>The answer is the following and can be found in Hartshorne's book <em>Deformation Theory</em>, see in particular Exercise 10.5 page 83.</p> <p>We work over an algebraically closed field $k$. Then there is an exact sequence of sheaves $$0 \to \beta_*T_X \to T_S \to k_s \oplus k_s \to 0,$$ inducing an exact sequence in cohomology $$0 \to H^0(X, \, T_X) \to H^0(S, \, T_S) \to k \oplus k \to H^1(X, \, T_X) \to H^1(S, \, T_S) \to 0$$ and an isomorphism $H^2(X, \, T_X) \cong H^2(S, \, T_S)$.</p> <p>We can interpret this as follows. </p> <p>First of all, the obstructions to first-order deformations of $X$ are the same as the obstructions to first-order deformations of $S$. </p> <p>Next, the term $k \oplus k$ corresponds to the deformations of the point $s$ inside $S$. Then if the group of infinitesimal automorphisms of $S$ (that is, the term $H^0(S, \, T_S))$ maps surjectively onto the deformations of $s$ into $S$, then $H^1(X, \, T_X) \cong H^1(S, \, T_S)$, i.e. the first-order deformations of $X$ are just given by first-order deformations of $S$. Otherwise, moving $s$ gives nontrivial deformations of $S$.</p> <p>The first case happens for instance when $S$ is an abelian surface. Then $\textrm{Aut}(S)$ is transitive, hence the map $H^0(S, \, T_S) \to k \oplus k$ is an isomorphism and this implies $$h^0(X, \, T_X)=0, \quad h^1(X, \, T_X) = h^1(S, \, T_S)=4.$$</p> <p>The second case happens for instance when $S$ is a surface of general type. Then it is possible to prove that $S$ has at most finitely many automorphisms, hence $h^0(S, \, T_S)=0$. This yields $$h^0(X, \, T_X)=0, \quad h^1(X, \, T_X) = h^1(S, \,T_S)+2.$$ </p>
3,754,734
<p>Let <span class="math-container">$ \sum_{n=0}^{\infty}a_{n}x^{n} $</span> be a power series. prove that</p> <p><span class="math-container">$ \sum_{n=0}^{\infty}a_{n}x^{n},\sum_{n=0}^{\infty}\left(n+1\right)a_{n+1}x^{n} $</span></p> <p>has the same convergence radius.</p> <p>So, let <span class="math-container">$ \limsup\sqrt[n]{|a_{n}|}=\beta $</span></p> <p>It follows that <span class="math-container">$ \limsup\sqrt[n+1]{|a_{n+1}|}=\beta $</span> (<strong>Im not sure about this argument, we'll sign it as (*) )</strong></p> <p>Its enough to show that <span class="math-container">$ \limsup\sqrt[n]{\left(n+1\right)|a_{n+1}|}=\beta $</span></p> <p>Let <span class="math-container">$ \limsup\sqrt[n]{\left(n+1\right)|a_{n+1}|}=\gamma $</span> and we'll show that <span class="math-container">$\beta=\gamma $</span></p> <p>From the assumption above, we can find a subsequence such that <span class="math-container">$ \lim_{k\to\infty}\sqrt[n_{k}]{|a_{n_{k}}|}=\beta$</span></p> <p>And we can notice that :</p> <p><span class="math-container">$ \lim_{k\to\infty}\sqrt[n_{k}]{\left(n_{k}+1\right)|a_{n_{k}+1}|}=\lim_{k\to\infty}\left(n_{k}+1\right)^{\frac{1}{n_{k}}}\cdot\lim_{k\to\infty}\left(|a_{n_{k}+1}|^{\frac{1}{n_{k}+1}}\right)^{\frac{n_{k}+1}{n_{k}}} $</span></p> <p>and because of <strong>(*)</strong></p> <p><span class="math-container">$ \lim_{k\to\infty}\left(n_{k}+1\right)^{\frac{1}{n_{k}}}\cdot\lim_{k\to\infty}\left(|a_{n_{k}+1}|^{\frac{1}{n_{k}+1}}\right)^{\frac{n_{k}+1}{n_{k}}}=\beta $</span></p> <p>So <span class="math-container">$ \beta \leq \gamma $</span></p> <p>Now, we'll take a subsequence such that <span class="math-container">$ \lim_{j\to\infty}\sqrt[n_{j}]{\left(n_{j}+1\right)|a_{n_{j}+1}}=\gamma $</span></p> <p>And we'll notice that</p> <p><span class="math-container">$ 
\gamma=\lim_{j\to\infty}\sqrt[n_{j}]{\left(n_{j}+1\right)|a_{n_{j}+1}|}=\lim_{j\to\infty}\left(n_{j}+1\right)^{\frac{1}{n_{j}}}\cdot\lim_{j\to\infty}|a_{n_{j}+1}|^{\frac{1}{n_{j}}}=\lim_{j\to\infty}\left(|a_{n_{j}+1}|^{\frac{1}{n_{j}+1}}\right)^{\frac{n_{j}+1}{n_{j}}}=\gamma $</span></p> <p>Therefore, because of <strong>(*)</strong></p> <p><span class="math-container">$ \lim_{j\to\infty}|a_{n_{j}+1}|^{\frac{1}{n_{j}+1}}=\lim_{j\to\infty}|a_{n_{j}}|^{\frac{1}{n_{j}}}=\gamma $</span></p> <p>And thus <span class="math-container">$ \gamma \leq \beta $</span> and we get that <span class="math-container">$ \gamma = \beta $</span></p> <p>Now everything depends on the validity of <strong>(*)</strong>, so I hope you could tell me if it's okay and maybe help me justify it.</p> <p>I feel like it's abuse of notation because <span class="math-container">$ a_{n_{k+1}}\neq a_{n_{k}+1} $</span> so I don't know if my argument holds.</p> <p>Thanks in advance</p>
Eric Towers
123,905
<p>You don't say where your indices start, so I pick <span class="math-container">$1$</span>. Altering this is easy.</p> <p><span class="math-container">$$ \left( \sqrt[n]{|a_n|} \right)_{n \geq 1} = (|a_1|, \sqrt{|a_2|}, \sqrt[3]{|a_3|}, \dots ) \text{ and } $$</span> <span class="math-container">$$ \left( \sqrt[n+1]{|a_{n+1}|} \right)_{n \geq 1} = (\sqrt{|a_2|}, \sqrt[3]{|a_3|}, \dots ) \text{,} $$</span> so the second sequence is the subsequence of the first sequence formed by deleting the first term. A limit superior is unaffected by deleting any finite initial segment of the sequence, so the two limits agree.</p>
3,754,734
<p>Let <span class="math-container">$ \sum_{n=0}^{\infty}a_{n}x^{n} $</span> be a power series. prove that</p> <p><span class="math-container">$ \sum_{n=0}^{\infty}a_{n}x^{n},\sum_{n=0}^{\infty}\left(n+1\right)a_{n+1}x^{n} $</span></p> <p>has the same convergence radius.</p> <p>So, let <span class="math-container">$ \limsup\sqrt[n]{|a_{n}|}=\beta $</span></p> <p>It follows that <span class="math-container">$ \limsup\sqrt[n+1]{|a_{n+1}|}=\beta $</span> (<strong>Im not sure about this argument, we'll sign it as (*) )</strong></p> <p>Its enough to show that <span class="math-container">$ \limsup\sqrt[n]{\left(n+1\right)|a_{n+1}|}=\beta $</span></p> <p>Let <span class="math-container">$ \limsup\sqrt[n]{\left(n+1\right)|a_{n+1}|}=\gamma $</span> and we'll show that <span class="math-container">$\beta=\gamma $</span></p> <p>From the assumption above, we can find a subsequence such that <span class="math-container">$ \lim_{k\to\infty}\sqrt[n_{k}]{|a_{n_{k}}|}=\beta$</span></p> <p>And we can notice that :</p> <p><span class="math-container">$ \lim_{k\to\infty}\sqrt[n_{k}]{\left(n_{k}+1\right)|a_{n_{k}+1}|}=\lim_{k\to\infty}\left(n_{k}+1\right)^{\frac{1}{n_{k}}}\cdot\lim_{k\to\infty}\left(|a_{n_{k}+1}|^{\frac{1}{n_{k}+1}}\right)^{\frac{n_{k}+1}{n_{k}}} $</span></p> <p>and because of <strong>(*)</strong></p> <p><span class="math-container">$ \lim_{k\to\infty}\left(n_{k}+1\right)^{\frac{1}{n_{k}}}\cdot\lim_{k\to\infty}\left(|a_{n_{k}+1}|^{\frac{1}{n_{k}+1}}\right)^{\frac{n_{k}+1}{n_{k}}}=\beta $</span></p> <p>So <span class="math-container">$ \beta \leq \gamma $</span></p> <p>Now, we'll take a subsequence such that <span class="math-container">$ \lim_{j\to\infty}\sqrt[n_{j}]{\left(n_{j}+1\right)|a_{n_{j}+1}}=\gamma $</span></p> <p>And we'll notice that</p> <p><span class="math-container">$ 
\gamma=\lim_{j\to\infty}\sqrt[n_{j}]{\left(n_{j}+1\right)|a_{n_{j}+1}|}=\lim_{j\to\infty}\left(n_{j}+1\right)^{\frac{1}{n_{j}}}\cdot\lim_{j\to\infty}|a_{n_{j}+1}|^{\frac{1}{n_{j}}}=\lim_{j\to\infty}\left(|a_{n_{j}+1}|^{\frac{1}{n_{j}+1}}\right)^{\frac{n_{j}+1}{n_{j}}}=\gamma $</span></p> <p>Therefore, because of <strong>(*)</strong></p> <p><span class="math-container">$ \lim_{j\to\infty}|a_{n_{j}+1}|^{\frac{1}{n_{j}+1}}=\lim_{j\to\infty}|a_{n_{j}}|^{\frac{1}{n_{j}}}=\gamma $</span></p> <p>And thus <span class="math-container">$ \gamma \leq \beta $</span> and we get that <span class="math-container">$ \gamma = \beta $</span></p> <p>Now everything depends on the validity of <strong>(*)</strong>, so I hope you could tell me if it's okay and maybe help me justify it.</p> <p>I feel like it's abuse of notation because <span class="math-container">$ a_{n_{k+1}}\neq a_{n_{k}+1} $</span> so I don't know if my argument holds.</p> <p>Thanks in advance</p>
ir7
26,651
<p>Based on <a href="https://math.stackexchange.com/questions/113121/lim-sup-inequality-limsup-a-n-b-n-leq-limsup-a-n-limsup-b-n">this general result</a> (the equality part), we indeed have:</p> <p><span class="math-container">$$ \limsup_{n\to\infty}\sqrt[n]{(n+1)|a_{n+1}|}=\lim_{n\to\infty}\sqrt[n]{n+1}\limsup_{n\to\infty}\sqrt[n]{|a_{n+1}|} $$</span></p> <p>You are basically attempting to prove it by choosing special (their limits are hitting the <span class="math-container">$\limsup$</span>'s) subsequences on both sides, but in the process you set <span class="math-container">$\limsup$</span> of a sequence to <span class="math-container">$\limsup$</span> of one of its non-special sequences (which is wrong as stated in the comments on the other answer). The limit on non-special sequences might not even exist. I think you do this by assuming that the special subsequence for the product of sequences are also special for the individual sequences (and vice versa).</p> <p>Slightly cleaner application of the general result:</p> <p><span class="math-container">$$ \sum_{n\geq 0} (n+1)a_{n+1} x^n = \sum_{n\geq 1} na_n x^{n-1} $$</span></p> <p>The convergence radius of <span class="math-container">$ \sum_{n\geq 1} na_n x^{n-1} $</span> is the same as the one of</p> <p><span class="math-container">$$ \sum_{n\geq 1} na_n x^{n} = x \sum_{n\geq 1} na_n x^{n-1}. $$</span></p> <p>Then:</p> <p><span class="math-container">$$ \limsup_{n\to\infty}\sqrt[n]{n|a_{n}|}=\lim_{n\to\infty}\sqrt[n]{n}\limsup_{n\to\infty}\sqrt[n]{|a_n|} = \limsup_{n\to\infty}\sqrt[n]{|a_n|}.$$</span></p>
189,218
<p>I have raw images that I can import and view with ImageJ. They are the output of a program which I don't have the source code for, so I'm stuck with the output format.</p> <p>They are imported as follows with ImageJ:</p> <blockquote> <p>Image type: 16 bit Unsigned</p> <p>Width: 320</p> <p>Height: 25600</p> <p>Offset to first image: 0 bytes</p> <p>Number of images: 1</p> <p>Gap Between images: 0 bytes</p> <p>White is zero unchecked</p> <p>Little-endian byte order unchecked</p> <p>Open all files in folder unchecked</p> <p>use virtual stack unchecked.</p> </blockquote> <p>Essentially each RAW file is a stack of 100 320 x 256 images.</p> <p>When I try importing via <code>Import[]</code> in <em>Mathematica</em>, I get</p> <blockquote> <pre><code>LibraryFunction::rterr: An error with return code -2 was encountered evaluating the function ReadImageRAW. </code></pre> </blockquote> <blockquote> <pre><code>Import::fmterr: Cannot import data as Raw format. </code></pre> </blockquote> <p>I can't seem to find any info on the first error message.</p>
Henrik Schumacher
38,178
<pre><code>(* file is the path to one RAW file; each file holds 100 frames of 256 x 320
   unsigned 16-bit values, so read the raw words, reshape into frames, and
   wrap each frame as an Image *)
images = Image /@ ArrayReshape[Import[file, "UnsignedInteger16"], {100, 256, 320}]; </code></pre>
308,436
<p>$P$ is probability. We have: $P(A) \ge \frac{2}{3}$, $P(B) \ge \frac{2}{3}$, $P(C) \ge \frac{2}{3}$ and $P(A \cap B \cap C)=0$. We have to find $P(A)$. How to do it? Of course we have $P(A \cup B \cup C) = P(A) + P(B) + P(C) - P(A \cap B)- P(A \cap C)- P(B \cap C)$, but what next? Please, help me.</p>
Did
6,179
<p><strong>Hints:</strong> </p> <ul> <li>If $\mathbb P(B)\geqslant\frac23$ and $\mathbb P(C)\geqslant\frac23$, then $\mathbb P(B\cap C)\geqslant$ $____$. </li> <li>If $\mathbb P(A)=\frac23+c$ with $c\gt0$ and $\mathbb P(B\cap C)\geqslant$ $____$, then $\mathbb P(A\cap(B\cap C))\geqslant$ $____$.</li> <li>But the hypothesis is that $\mathbb P(A\cap B\cap C)=0$, hence $\mathbb P(A)=\frac23$.</li> </ul>
1,524,109
<p>Could anyone help me with this proof without using determinant? I tried two ways. </p> <blockquote> <p>Let $A$ be a matrix. If $A$ has the property that each row sums to zero, then there does not exist any matrix $X$ such that $AX=I$, where $I$ denotes the identity matrix. </p> </blockquote> <p>I then get stuck. The other way was to prove by contradiction, and I failed too. </p>
John Bentin
875
<p>If each row of $A$ sums to zero, then the sum of the column vectors constituting $A$ is the zero vector (each entry of that sum is a row sum of $A$). So the columns of $A$ are not linearly independent, and therefore the matrix is singular (i.e. it has no inverse).</p>
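A concrete numerical illustration (the matrix below is an arbitrary example with zero row sums):

```python
# Rows of A each sum to zero, i.e. A times the all-ones vector vanishes.
A = [[ 1.0, -2.0,  1.0],
     [ 3.0,  0.0, -3.0],
     [-1.0,  0.5,  0.5]]

row_sums = [sum(row) for row in A]
print(row_sums)          # [0.0, 0.0, 0.0] = A·(1,1,1)

# Hence the columns are linearly dependent and det(A) = 0, so no X with AX = I.
def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

print(det3(A))           # 0.0
```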
1,260,722
<blockquote> <p>Prove that <span class="math-container">$f=x^4-4x^2+16\in\mathbb{Q}[x]$</span> is irreducible.</p> </blockquote> <p>I am trying to prove it with Eisenstein's criterion but without success: for <strong>p=2</strong>, it divides <strong>-4</strong> and the constant coefficient 16 and does not divide the leading coefficient 1, but its square 4 divides the constant coefficient 16, so it doesn't work. Therefore I tried to find a shift <span class="math-container">$f(x\pm c)$</span> to which the criterion applies:</p> <blockquote> <p><span class="math-container">$f(x+1)=x^4+4x^3+2x^2-4x+13$</span>, but 13 has only the divisors <strong>1 and 13</strong>, so there is no prime number <strong>p</strong> satisfying the first condition <span class="math-container">$p|a_i, i\ne n$</span>; the same problem occurs for <span class="math-container">$f(x-1)=x^4+...+13$</span></p> <p>For <span class="math-container">$f(x+2)=x^4+8x^3+20x^2+16x+16$</span> it is the same problem we started with: if we set <strong>p=2</strong>, then <span class="math-container">$2|8, 2|20, 2|16$</span> and 2 does not divide the leading coefficient 1, but its square 4 divides the constant coefficient 16; again, it doesn't work. The same problem occurs for <strong>x-2</strong>.</p> </blockquote> <p>Now I'll check <span class="math-container">$f(x\pm3)$</span>, but I think it will fail too... I suspect that no shift <span class="math-container">$f(x\pm c)$</span> works with this method... so does anyone have an idea how we can prove that <span class="math-container">$f$</span> is irreducible?</p>
marwalix
441
<p>$$x^4-4x^2+16=(x^2-(2+\sqrt{-12}))(x^2-(2-\sqrt{-12}))$$</p> <p>No rational roots and no factorization into quadratics over the rationals. The polynomial is irreducible over the rationals</p> <p><strong>edit</strong> for those who commented that this is not enough. I factorized over $\mathbb{C}[X]$ and thus proved that there are no rational solution i.e no degree $1$ factors. The only factorization possible is therefore into two quadratics. $\mathbb{C}[X]$ is a UFD and therefore we have </p> <p>$$x^4-4x^2+16=(x-\sqrt{2+\sqrt{-12}})(x+\sqrt{2+\sqrt{-12}})(x-\sqrt{2-\sqrt{-12}})(x+\sqrt{2-\sqrt{-12}})$$</p> <p>And this is unique. So combining the degree 1 factors in pairs is the only way to factorize in quadratics and there are three different ways to combine and none is rational</p>
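An elementary cross-check without a CAS: by Gauss's lemma a monic integer polynomial that factors over $\mathbb{Q}$ factors over $\mathbb{Z}$, so it suffices to rule out integer roots and integer quadratic factors.

```python
def f(x):
    return x**4 - 4*x**2 + 16

# No integer roots: any rational root would be an integer dividing 16.
assert all(f(r) != 0 for r in (1, -1, 2, -2, 4, -4, 8, -8, 16, -16))

# No factorization (x^2 + b x + c)(x^2 + d x + e) with integer coefficients.
# All complex roots have modulus 2, so |b|, |d| <= 4; we search a wider range.
found = False
for b in range(-8, 9):
    d = -b                         # coefficient of x^3 must vanish
    for c in range(-16, 17):
        if c == 0 or 16 % abs(c):
            continue
        e = 16 // c                # constant terms multiply to 16
        if c + e + b * d == -4 and b * e + c * d == 0:
            found = True
print(found)   # False: irreducible over Q
```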
3,719,575
<p>For <span class="math-container">$\lambda =0$</span> and <span class="math-container">$\lambda &lt;0$</span> the only solution is the trivial one, <span class="math-container">$x\left(t\right)=0$</span>.</p> <p>So we only have to treat the case <span class="math-container">$\lambda &gt;0$</span>.</p> <p>The general solution here is</p> <p><span class="math-container">$x\left(t\right)=C_1\cos\left(\sqrt{\lambda }t\right)+C_2\sin\left(\sqrt{\lambda }t\right)$</span></p> <p>Because <span class="math-container">$0=C_1\cdot \cos\left(0\right)+C_2\cdot \sin\left(0\right)=C_1$</span> we know that</p> <p><span class="math-container">$x\left(t\right)=C_2\sin\left(\sqrt{\lambda }t\right)$</span></p> <p><span class="math-container">$\sqrt{\lambda }t=n\pi$</span></p> <p><span class="math-container">$\sqrt{\lambda }=\frac{n\pi }{t}$</span></p> <p>But does a solution for <span class="math-container">$\lambda$</span> exist which does not depend on <span class="math-container">$t$</span>?</p>
Aditya Dwivedi
697,953
<p>By saying <strong><span class="math-container">$x(L) = 0$</span></strong> you mean that the boundary condition is imposed at the fixed endpoint <span class="math-container">$t=L$</span>, not at every <span class="math-container">$t$</span>: it gives <span class="math-container">$\sqrt{\lambda}\,L = n\pi$</span>, so the eigenvalues <span class="math-container">$\lambda_n = \left(\frac{n\pi}{L}\right)^2$</span>, <span class="math-container">$n=1,2,\dots$</span>, do not depend on <span class="math-container">$t$</span>.</p>
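A small numerical check, assuming (as the general solution in the question indicates) that the underlying problem is $x''+\lambda x=0$ with $x(0)=x(L)=0$; the value of $L$ below is an arbitrary stand-in:

```python
import math

L = 2.5   # illustrative interval length
h = 1e-5  # step for a central-difference second derivative
for n in range(1, 5):
    lam = (n * math.pi / L) ** 2
    x = lambda t, w=math.sqrt(lam): math.sin(w * t)
    assert abs(x(0)) < 1e-12 and abs(x(L)) < 1e-9      # boundary conditions
    for t in (0.3, 1.1, 1.9):
        xpp = (x(t + h) - 2 * x(t) + x(t - h)) / h**2  # approximate x''(t)
        assert abs(xpp + lam * x(t)) < 1e-4            # x'' + lam*x = 0
print("eigenfunctions check out")
```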
2,262,001
<p>Prove this inequality where $a$, $b$ and $c$ are sides of a triangle and $S$ its Area. $$\frac{ab + bc + ca}{4S}\ge \operatorname{ctg} \frac{\pi}{6}$$</p>
Michael Rozenberg
190,319
<p>Since $ab+ac+bc\geq\sum\limits_{cyc}(2ab-a^2)$ (which is just $\sum\limits_{cyc}(a-b)^2\geq0$) and $$\sum_{cyc}(2ab-a^2)=\sum_{cyc}a(b+c-a)&gt;0,$$ it's enough to prove that $$\sum_{cyc}(2ab-a^2)\geq4\sqrt3S$$ or $$\sum_{cyc}(2ab-a^2)\geq\sqrt{3\sum_{cyc}(2a^2b^2-a^4)}$$ or $$\sum_{cyc}(a^4-a^3b-a^3c+a^2bc)\geq0,$$ which is Schur.</p> <p>Done!</p>
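Not a proof, but a Monte-Carlo sanity check of the equivalent form $ab+bc+ca\ge 4\sqrt3\,S$ (note $\operatorname{ctg}\frac{\pi}{6}=\sqrt3$), with equality for equilateral triangles; the area comes from Heron's formula:

```python
import math, random

random.seed(1)
checked = 0
for _ in range(10000):
    a, b, c = (random.uniform(0.1, 10.0) for _ in range(3))
    if a + b <= c or b + c <= a or c + a <= b:
        continue                      # not a valid triangle
    s = (a + b + c) / 2
    S = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
    assert a*b + b*c + c*a >= 4 * math.sqrt(3) * S - 1e-9
    checked += 1
print(checked, "triangles checked")
```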
3,366,442
<p>I am asked to figure out when this limit exists for polynomials <span class="math-container">$f$</span> and <span class="math-container">$g$</span>, if so what the limit is, and then to prove my findings. So far I have gathered that the limit exists if and only if <span class="math-container">$\deg f\leq \deg g$</span>, in which case the limit is the division of the leading coefficients (if <span class="math-container">$\deg f = \deg g$</span>) and otherwise <span class="math-container">$0$</span> (if <span class="math-container">$\deg g &gt; \deg f$</span>).</p> <p>I am stuck on proving this rigorously with the definition of a limit. This is my first course in real analysis and I find that most of our problems have quite intuitive results but the proofs are super involved. Hints are appreciated.</p>
Andrew Chin
693,161
<p>For rational functions, the limit as <span class="math-container">$x$</span> approaches infinity is based on the limit <span class="math-container">$$\lim\limits_{x\to\infty}\frac1{x}=0.$$</span></p> <p>Suppose in the following function <span class="math-container">$f(x)$</span>, the numerator is the sum of <span class="math-container">$ax^n$</span>, the term with the highest exponent, and <span class="math-container">$\mathcal{A}(x)$</span>, the rest of the terms of the polynomial, each with a term with an exponent lower than <span class="math-container">$ax^n$</span>. The same principle applies to the denominator. Analyzing: <span class="math-container">\begin{align} f(x)&amp;=\frac{ax^n+\mathcal{A}(x)}{bx^m+\mathcal{B}(x)}\\ \lim\limits_{x\to\infty}f(x)&amp;=\lim\limits_{x\to\infty}\frac{ax^n+\mathcal{A}(x)}{bx^m+\mathcal{B}(x)}\\ &amp;=\lim\limits_{x\to\infty}\frac{ax^n+\mathcal{A}(x)}{bx^m+\mathcal{B}(x)}\color{blue}{\cdot\frac{\frac{1}{x^m}}{\frac{1}{x^m}}}\\ &amp;=\lim\limits_{x\to\infty}\frac{ax^{n-m}+\frac1{x^m}\mathcal{A}(x)}{b+\frac1{x^m}\mathcal{B}(x)}\\ &amp;=\frac{\lim\limits_{x\to\infty}ax^{n-m}+\lim\limits_{x\to\infty}\frac1{x^m}\mathcal{A}(x)}{\lim\limits_{x\to\infty}b+\lim\limits_{x\to\infty}\frac1{x^m}\mathcal{B}(x)}\\ \end{align}</span></p> <p>We can then split this into three scenarios:</p> <p>For <span class="math-container">$n&lt;m$</span>: <span class="math-container">$\lim\limits_{x\to\infty} f(x)=0$</span>.</p> <p>For <span class="math-container">$n=m$</span>: <span class="math-container">$\lim\limits_{x\to\infty}f(x)=\frac{a}{b}$</span>.</p> <p>For <span class="math-container">$n&gt;m$</span>: <span class="math-container">$\lim\limits_{x\to\infty}f(x)$</span> diverges.</p>
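The three cases can be illustrated numerically with concrete polynomials (chosen arbitrarily) evaluated at a large argument:

```python
f = lambda x: (3*x**2 + x + 1) / (6*x**2 - x + 4)   # deg f = deg g: limit 3/6 = 1/2
g = lambda x: (x + 7) / (x**3 + 1)                  # deg f < deg g: limit 0
h = lambda x: (x**3 + 1) / (x + 7)                  # deg f > deg g: diverges

X = 1e8
print(f(X), g(X), h(X))   # ~0.5, ~0, very large
```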
3,366,442
<p>I am asked to figure out when this limit exists for polynomials <span class="math-container">$f$</span> and <span class="math-container">$g$</span>, if so what the limit is, and then to prove my findings. So far I have gathered that the limit exists if and only if <span class="math-container">$\deg f\leq \deg g$</span>, in which case the limit is the division of the leading coefficients (if <span class="math-container">$\deg f = \deg g$</span>) and otherwise <span class="math-container">$0$</span> (if <span class="math-container">$\deg g &gt; \deg f$</span>).</p> <p>I am stuck on proving this rigorously with the definition of a limit. This is my first course in real analysis and I find that most of our problems have quite intuitive results but the proofs are super involved. Hints are appreciated.</p>
Certainly not a dog
691,550
<p>Let the expression be <span class="math-container">$\sum_{i=0}^n a_ix^i\over\sum_{i=0}^mb_ix^i$</span>.</p> <p>We may rewrite this as <span class="math-container">$a_nx^n\bigl(1 + \sum_{i=0}^{n-1} \frac{a_i}{a_n}x^{i-n}\bigr)\over b_mx^m\bigl(1 + \sum_{i=0}^{m-1} \frac{b_i}{b_m}x^{i-m}\bigr)$</span></p> <p>Now, consider the number <span class="math-container">$y =\frac1{\epsilon^\frac 1k}$</span> where <span class="math-container">$\epsilon$</span> and <span class="math-container">$k$</span> are any positive numbers. We will prove <span class="math-container">$\lim\limits_{x\to\infty} \frac 1{x^k} = 0$</span>. Since <span class="math-container">$x \to \infty$</span>, eventually <span class="math-container">$x &gt; y$</span>. So, <span class="math-container">$$x^k &gt; \frac 1\epsilon$$</span> <span class="math-container">$$\implies \frac 1{x^k} -0 &lt; \epsilon$$</span></p> <p>So, <span class="math-container">$x^{-k} - 0$</span> is eventually less than any positive real number, which means <span class="math-container">$x^{-k} \to 0$</span> as <span class="math-container">$x \to \infty$</span>. </p> <p>So as <span class="math-container">$x \to \infty$</span> the expression behaves like <span class="math-container">$\frac {a_n} {b_m} x^{n-m}$</span>. With similar reasoning as above we reach the conclusion. </p>
6,090
<p>I have been reading about the Riemann zeta function and have been thinking about it for some time.</p> <p>Has anything been published regarding an upper bound for the real part of the zeta function's zeros as the imaginary part of the zeros tends to infinity?</p> <p>Thanks</p>
T..
467
<p>The term in analytic number theory is "zero-free regions". Any proof of the prime number theorem will produce such a region, and the region is equivalent to the error term in bounds for $\pi(x)$ and to the lower bounds in nonvanishing theorems $|\zeta (1+it)|> 0$. At present, the known zero-free regions are asymptotic to the line $Re(s)=1$: at height $h$ all zeros are at least at distance $d(h)$ from the line with $\lim_{|h| \to \infty} d(h) = 0$.</p> <p>(Added: on the subject of later improvements, if any region not asymptotic to $Re(s)=1$ were demonstrated it would be a giant advance in number theory, comparable to Wiles' proof of the modularity conjecture and Fermat's Last Theorem. In the analogous function field case there is an algebraic technique for boosting Beta &lt; 1 to Beta &lt; (1/2)+epsilon, and the latter is the Riemann hypothesis. You can be sure that if a zero free region were proven that had finite distance from the boundary of the critical strip, we would all have heard about it. Some known bounds are listed at <a href="http://www.math.uiuc.edu/~ford/wwwpapers/zeros.pdf">http://www.math.uiuc.edu/~ford/wwwpapers/zeros.pdf</a> )</p>
4,202,451
<p>I have been reading about the Miller-Rabin primality test. So far I think I got it, except the part where the accuracy is stated.<br /> E.g. from <a href="https://en.wikipedia.org/wiki/Miller%E2%80%93Rabin_primality_test#Accuracy" rel="nofollow noreferrer">wiki</a></p> <blockquote> <p>The error made by the primality test is measured by the probability for a composite number to be declared probably prime. The more bases a are tried, the better the accuracy of the test. It can be shown that if n is composite, then at most 1⁄4 of the bases a are strong liars for n. As a consequence, if n is composite then running k iterations of the Miller–Rabin test will declare n probably prime with a probability at most <span class="math-container">$4^{−k}$</span>.</p> </blockquote> <p>So if I understand correctly: if we have a large number <span class="math-container">$N$</span> and <span class="math-container">$k$</span> random witnesses, and none of them observes the non-primality of <span class="math-container">$N$</span>, then the probability that <span class="math-container">$N$</span> is <strong>not</strong> a prime is <span class="math-container">$1$</span> in <span class="math-container">$4^k$</span>.</p> <p>What I am not clear on is where this <span class="math-container">$\frac{1}{4}$</span> comes from.<br /> I understand we have <span class="math-container">$4$</span> conditions to be met (in order), i.e.:</p> <ol> <li><span class="math-container">$a \not\equiv 0 \mod N$</span></li> <li><span class="math-container">$a^{N-1} \not\equiv 1 \mod N$</span></li> <li><span class="math-container">$x^2 \equiv 1 \mod N$</span></li> <li><span class="math-container">$x \equiv \pm 1 \mod N$</span></li> </ol> <p>The process is the following:<br /> In the above <span class="math-container">$a$</span> is the witness.
We first check condition (1).<br /> If that passes we check condition (2).<br /> Do do that we start multiplying <span class="math-container">$a, a \cdot a, a\cdot a\cdot a ....$</span> until we calculate <span class="math-container">$a^{N-1}$</span>.<br /> Do do that efficiently we can use the squaring method. If in the process of the multiplication during squaring we encounter a number e.g. <span class="math-container">$x$</span> such that the <span class="math-container">$x^2 \equiv 1$</span> but <span class="math-container">$x \not\equiv 1$</span> and <span class="math-container">$x \not\equiv -1$</span>. (E.g <span class="math-container">$19^2 \equiv 1 \pmod {40}$</span> but <span class="math-container">$19 \not \equiv 1 \pmod {40}$</span> and <span class="math-container">$19 \not \equiv -1 \pmod {40}$</span>) then the conditions (3) and (4) fails otherwise we proceed multiplying.<br /> We check the final product for condition (2)</p> <p>Does the <span class="math-container">$1/4$</span> mean that at most <span class="math-container">$1$</span> of these can be indicate a prime? If so, how is that validated?</p>
egglog
626,167
<p>Brute-force induction won't work well here, it's better to approach this problem by looking for simplifications. For example, the numerator in the product can be written in the form <span class="math-container">$\frac{(2n)!}{n!}$</span> and can be removed from the product entirely, thus leaving <span class="math-container">$\displaystyle\frac{(2n)!}{n!}\prod_{i=1}^{n}\frac{1}{2i-3}=2^n(1-2n)$</span>. The same reasoning should be used for the denominator. It is <span class="math-container">$\frac{1}{(-1) \cdot (1) \cdot (3) ... \cdot(2n -3)}$</span> which is also <span class="math-container">$-\frac{2^{n-1}(n-1)!}{(2n-2)!}$</span>, so you have <span class="math-container">$-\frac{2^{n-1}(n-1)!}{(2n-2)!}\frac{(2n)!}{n!} = 2^{n} (1-2n) \implies \frac{2^{n-1}(n-1)!}{(2n-2)!}\frac{(2n)!}{n!} = 2^{n} (2n-1)$</span>. Further simplifications show that <span class="math-container">$\frac{2n(2n-1)}{n} = 2(2n-1)$</span> which is obviously an equality. Thus the statements are equal. You shouldn't use induction here.</p>
4,260,645
<p>I can't find any insights online on how useful the graph of function <span class="math-container">$f(x)$</span> (on the <span class="math-container">$y$</span> axis) versus it's derivative <span class="math-container">$f'(x)$</span>? (on the <span class="math-container">$x$</span> axis) does it provide some useful informations if any?</p> <p>For example, I see that when I plot <span class="math-container">$\sin(x)$</span> against <span class="math-container">$\cos(x)$</span> the plot is a circle which is reminiscent of parametric equations.</p> <p>My question is: Assuming the function is nice (nice in the common sense that it is continuous) does that kind of graph provide any useful information? What about this graph where <span class="math-container">$x(t) = f'(t)$</span>:</p> <p><a href="https://i.stack.imgur.com/DIYa9.png" rel="nofollow noreferrer">f(t) versus x(t) where x(t) = f'(t)</a></p>
Ninad Munshi
698,724
<p>You can use it to analyze the behavior of differential equations. For example, consider <span class="math-container">$$y' = y^2 - 5x+6$$</span> Plotting <span class="math-container">$y'$</span> vs <span class="math-container">$y$</span> shows us the zeros (constant solutions/asymptotes for non constant solutions) and places where y' is positive or negative. This gives us the following qualitative behaviors that are useful to understand</p> <p><span class="math-container">$$y(0)&lt;2 \implies \begin{cases}y'(x)&gt;0\\\lim\limits_{x\to-\infty}y(x) = -\infty\\\lim\limits_{x\to+\infty}y(x)=2\end{cases}$$</span></p> <p><span class="math-container">$$y(0)=2 \implies y(x)=2$$</span></p> <p><span class="math-container">$$2&lt;y(0)&lt;3 \implies \begin{cases}y'(x)&lt;0\\\lim\limits_{x\to-\infty}y(x) = 3\\\lim\limits_{x\to+\infty}y(x)=2\end{cases}$$</span></p> <p><span class="math-container">$$y(0)=3 \implies y(x)=3$$</span></p> <p><span class="math-container">$$y(0)&gt;3 \implies \begin{cases}y'(x)&gt;0\\\lim\limits_{x\to-\infty}y(x) = 3\\\lim\limits_{x\to+\infty}y(x)=+\infty\end{cases}$$</span></p> <p>All without going through the trouble of solving the differential equation. This qualitative theory of ODEs is useful in analyzing more complex dynamics, especially when analytic solutions are intractable and precise numerics are unnecessary.</p>
4,337,320
<p>Let <span class="math-container">$G$</span> be a group and <span class="math-container">$F:G^n \to G$</span> with the following property: If <span class="math-container">$x_1,…,x_n,h \in G$</span>, then <span class="math-container">$F(hx_1,…,hx_n)=hF(x_1,…,x_n)$</span>. Is there a name for this type of function property? It is something I’ve been investigating lately. For instance, if <span class="math-container">$G$</span> is a vector space and <span class="math-container">$F$</span> outputs the average vector, then <span class="math-container">$F$</span> has this property.</p>
Paul
42,723
<p>You can say the group operation is left-<a href="https://en.wikipedia.org/wiki/Distributive_property" rel="nofollow noreferrer">distributive</a> over <span class="math-container">$F$</span>.</p> <p>When writing about this, make sure you include your definition, so it's clear for everyone.</p>
3,502,507
<p>This is very similar to the <a href="https://math.stackexchange.com/questions/3501693/given-that-you-started-with-one-chip-what-is-the-probability-that-you-will-win">question</a> I've just asked, except now the requirement is to gain <span class="math-container">$4$</span> chips to win (instead of <span class="math-container">$3$</span>) </p> <p>The game is:</p> <blockquote> <p>You start with one chip. You flip a fair coin. If it throws heads, you gain one chip. If it throws tails, you lose one chip. If you have zero chips, you lose the game. If you have <strong>four</strong> chips, you win. What is the probability that you will win this game?</p> </blockquote> <p>I've tried to use the identical reasoning used to solve the problem with three chips, but seems like in this case, it doesn't work.</p> <p>So the attempt is:</p> <p>We will denote <span class="math-container">$H$</span> as heads and <span class="math-container">$T$</span> as tails (i.e <span class="math-container">$HHH$</span> means three heads in a row, <span class="math-container">$HT$</span> means heads and tails etc)</p> <p>Let <span class="math-container">$p$</span> be the probability that you win the game. If you throw <span class="math-container">$HHH$</span> (<span class="math-container">$\frac{1}{8}$</span> probability), then you win. If you throw <span class="math-container">$HT$</span> (<span class="math-container">$\frac{1}{4}$</span> probability), then your probability of winning is <span class="math-container">$p$</span> at this stage. 
<strong>If you throw heads <span class="math-container">$HHT$</span> (<span class="math-container">$\frac{1}{8}$</span> probability), then your probability of winning <span class="math-container">$\frac{1}{2}p$</span></strong></p> <p>Hence the recursion formula is </p> <p><span class="math-container">$$\begin{align}p &amp; = \frac{1}{8} + \frac{1}{4}p+ \frac{1}{8}\frac{1}{2}p \\ &amp;= \frac{1}{8} + \frac{1}{4}p +\frac{1}{16}p \\ &amp;= \frac{1}{8} + \frac{5}{16}p \end{align}$$</span></p> <p>Solving for <span class="math-container">$p$</span> gives</p> <p><span class="math-container">$$\frac{11}{16}p = \frac{1}{8} \implies p = \frac{16}{88}$$</span></p> <p>Now, to verify the accuracy of the solution above, I've tried to calculate the probability of losing using the same logic, namely:</p> <p>Let <span class="math-container">$p$</span> denote the probability of losing. If you throw <span class="math-container">$T$</span> (<span class="math-container">$\frac{1}{2}$</span> probability), you lose. If you throw <span class="math-container">$H$</span> (<span class="math-container">$\frac{1}{2}$</span> probability), the probaility of losing at this stage is <span class="math-container">$\frac{1}{2}p$</span>. If you throw <span class="math-container">$HH$</span>(<span class="math-container">$\frac{1}{4}$</span> probability), the probability of losing is <span class="math-container">$\frac{1}{4}p$</span>. 
Setting up the recursion gives</p> <p><span class="math-container">$$\begin{align}p &amp; = \frac{1}{2} + \frac{1}{4}p+ \frac{1}{8}\frac{1}{2}p \\ &amp;= \frac{1}{2} + \frac{1}{4}p +\frac{1}{16}p \\ &amp;= \frac{1}{2} + \frac{5}{16}p \end{align}$$</span></p> <p>Which implies that </p> <p><span class="math-container">$$\frac{11}{16}p = \frac{1}{2} \implies p = \frac{16}{22} = \frac{64}{88}$$</span></p> <p>Which means that probabilities of winning and losing the game do not add up to <span class="math-container">$1$</span>.</p> <p>So the <em>main</em> question is: Where is the mistake? How to solve it using recursion? (Note that for now, I'm mainly interested in the recursive solution)</p> <p>And the bonus question: Is there a possibility to generalize? I.e to find the formula that will give us the probability of winning the game, given that we need to gain <span class="math-container">$n$</span> chips to win? </p>
Tanner Swett
13,524
<p>Let <span class="math-container">$p_0, p_1, \ldots, p_4$</span> be the probability of winning if you start with <span class="math-container">$0, 1, \ldots, 4$</span> chips, respectively. Of course, <span class="math-container">$p_0 = 0$</span> and <span class="math-container">$p_4 = 1$</span>.</p> <p>There are a few different ways to approach this question. </p> <h2>Start with <span class="math-container">$p_2$</span></h2> <p>This seems to be the approach you're asking for, but it's not the easiest approach.</p> <p>To calculate <span class="math-container">$p_2$</span>, consider what happens if the coin is flipped twice. There is a <span class="math-container">$1/4$</span> chance of getting <span class="math-container">$TT$</span> (instant loss), a <span class="math-container">$1/4$</span> chance of getting <span class="math-container">$HH$</span> (instant win), and a <span class="math-container">$1/2$</span> chance of getting either <span class="math-container">$HT$</span> or <span class="math-container">$TH$</span> (back to <span class="math-container">$p_2$</span>). So we have</p> <p><span class="math-container">$$p_2 = \frac14 + \frac12 p_2,$$</span></p> <p>which we can solve to find that <span class="math-container">$p_2 = 1/2$</span>.</p> <p>Now that we know <span class="math-container">$p_2$</span>, we can directly calculate <span class="math-container">$p_1$</span> as the average of <span class="math-container">$p_0$</span> and <span class="math-container">$p_2$</span>, which is <span class="math-container">$1/4$</span>.</p> <h2>Examine the sequence</h2> <p>Notice that in the sequence <span class="math-container">$p_0, p_1, p_2, p_3, p_4$</span>, each element (besides the first and the last) is the average of its two neighbors. This implies that the sequence is an arithmetic progression. 
Given that <span class="math-container">$p_0 = 0$</span> and <span class="math-container">$p_4 = 1$</span>, we can use any "find a line given two points" method to find that for all <span class="math-container">$n$</span>, <span class="math-container">$p_n = n/4$</span>.</p> <h2>Conservation of expected value</h2> <p>I'm an investor and an advantage gambler (which are the same thing, really), so I like to think of things in terms of expected value. </p> <p>I start the game with <span class="math-container">$1$</span> chip, and it's a perfectly fair game; in the long run, I am expected neither to lose nor to win. So if I play the game until it ends, the expected value of the game must be <span class="math-container">$1$</span> chip. (More detail is needed to make this argument formal, but it's sound.)</p> <p>The value if I lose is <span class="math-container">$0$</span>, and the value if I win is <span class="math-container">$4$</span>, so the expected value can also be written as <span class="math-container">$(1 - p_1) \cdot 0 + p_1 \cdot 4$</span>, which simplifies to <span class="math-container">$4 p_1$</span>.</p> <p>These two ways of calculating the expected value must agree, meaning that <span class="math-container">$4 p_1 = 1$</span>, so <span class="math-container">$p_1 = 1/4$</span>.</p> <h2>In general</h2> <p>The latter two of the above arguments can each be generalized to show that if you start with <span class="math-container">$a$</span> chips, and the game ends when you reach either <span class="math-container">$0$</span> or <span class="math-container">$b$</span> chips, then the probability of winning is <span class="math-container">$a/b$</span>.</p>
188,503
<p>A Cayley table of an finite group has to have every element exactly once in every row and exactly once in every column.</p> <p><strong>Proof</strong> that every element of a group has to be at most once in every row and at most once in every column:</p> <p>Let $(G, \circ)$ be a group and $a, b, c, d \in G$ with:</p> <p>(I) $a \circ b = d$</p> <p>(II) $a \circ c = d \Leftrightarrow a = d \circ c^{-1}$</p> <p>Then:</p> <p>$\begin{align} (a \circ c) \circ (a \circ b) &amp;= d \circ d \\ \Leftrightarrow d \circ (d \circ c^{-1} \circ b) &amp;= d \circ d \\ \Leftrightarrow d \circ c^{-1} \circ b &amp;= d\\ \Leftrightarrow c^{-1} \circ b &amp;= e\\ \Leftrightarrow b &amp;= c \end{align}$</p> <p>As the group is finite, this also means it is exactly once in every row/column ($\forall a,b \in G: a \circ b = x$ with $x \in G$).</p> <p>Now my question is:</p> <p><strong>Does a group with an infinite number of elements exist, that has not every element in every row/column of its Cayley table?</strong></p> <p>(I know that Cayley tables usually get used only for finite groups. But if set of the group has a countable number of elements, you can imagine a Cayley table. For example, $(\mathbb{Z}, +)$ has obviously every element in every row/column).</p>
Brian M. Scott
12,042
<p>The term <em>Cayley table</em> is generally restricted to finite groups. However, it’s certainly possible to generalize the idea. For a group $G$ and an element $a\in G$, the $a$ ‘row’ of the table is essentially just the function $$f_a:G\to G:b\mapsto ab\;,$$ and the $a$ ‘column’ is essentially just the function $$f^a:G\to G:b\mapsto ba\;.$$ If $G$ is countably infinite, you can visualize the Cayley table as an infinite matrix.</p> <p>Let $G$ be any group, and fix $a\in G$. For each $b\in G$ you have $b=a(a^{-1}b)$, so $b$ appears in row $a$ in column $a^{-1}b$. Similarly, $b=(ba^{-1})a$, so $b$ appears in column $a$ in row $ba^{-1}$. It follows that $b$ appears in every row and column. The cardinality of the group doesn’t matter.</p> <p><strong>Added:</strong> You didn’t ask, but it’s also clear that each element of $G$ appears only once in each row and column: if $ax=ay$ or $xa=ya$, then $x=y$. Thus, each of the maps $f_a$ and $f^a$ for $a\in G$ is a bijection from $G$ onto itself, i.e., a <em>permutation</em> of $G$. The set of all permutations of $G$ is denoted by $\operatorname{Sym}(G)$ and is a group under composition of functions; the maps</p> <p>$$G\to\operatorname{Sym}(G):a\mapsto f_a$$</p> <p>and</p> <p>$$G\to\operatorname{Sym}(G):a\mapsto f^a$$</p> <p>are isomorphisms of $G$ to subgroups of $\operatorname{Sym}(G)$. This is <a href="http://en.wikipedia.org/wiki/Cayley%27s_theorem">Cayley’s theorem</a>.</p>
3,426,700
<p>For any fixed <span class="math-container">$C &gt; 0$</span>, prove that <span class="math-container">$\frac{C^{k}}{k!} \to 0$</span> as <span class="math-container">$k \to \infty$</span></p> <p>The hint given is to choose an integer <span class="math-container">$K&gt;2C$</span> and examine the ratio in the question for <span class="math-container">$k &gt;K$</span></p> <p>I'm not quite sure how the hint applies here. I tried proving that applying the ratio test to <span class="math-container">$\frac{C^{k}}{k!}$</span> shows that it is convergent, and then showing that the limsup as <span class="math-container">$k \to \infty$</span> is less than 1 and thus lim<span class="math-container">$\frac{C^{k}}{k!} = 0$</span>, but I don't think the method works given the hint in the question. </p> <p>The second part of the question says "Let <span class="math-container">$f$</span> be <span class="math-container">$C^{\infty}$</span> on [b,c] such that <span class="math-container">$|f^{(k)}(x) \leq M$</span> for all <span class="math-container">$k \in \mathbb{N}$</span> and <span class="math-container">$x \in (b,c)$</span>. Given <span class="math-container">$a \in (c,d)$</span>, show that <span class="math-container">$P_{a,k} \to f$</span> uniformly on [b,c]" and I get how the first result applies here. I'm just unsure how to actually prove that <span class="math-container">$\frac{C^{k}}{k!} \to 0$</span> as <span class="math-container">$k \to \infty$</span></p>
Bob Krueger
228,620
<p>EDIT: this answer is wrong because the part derivative term in the integral is not conjugated. </p> <p>Since the curve is symmetric about the real axis, the integral is equivalent to <span class="math-container">$\int_C z^2 dz$</span>. Thus by Cauchy's theorem, it is zero.</p>
3,426,700
<p>For any fixed <span class="math-container">$C &gt; 0$</span>, prove that <span class="math-container">$\frac{C^{k}}{k!} \to 0$</span> as <span class="math-container">$k \to \infty$</span></p> <p>The hint given is to choose an integer <span class="math-container">$K&gt;2C$</span> and examine the ratio in the question for <span class="math-container">$k &gt;K$</span></p> <p>I'm not quite sure how the hint applies here. I tried proving that applying the ratio test to <span class="math-container">$\frac{C^{k}}{k!}$</span> shows that it is convergent, and then showing that the limsup as <span class="math-container">$k \to \infty$</span> is less than 1 and thus lim<span class="math-container">$\frac{C^{k}}{k!} = 0$</span>, but I don't think the method works given the hint in the question. </p> <p>The second part of the question says "Let <span class="math-container">$f$</span> be <span class="math-container">$C^{\infty}$</span> on [b,c] such that <span class="math-container">$|f^{(k)}(x) \leq M$</span> for all <span class="math-container">$k \in \mathbb{N}$</span> and <span class="math-container">$x \in (b,c)$</span>. Given <span class="math-container">$a \in (c,d)$</span>, show that <span class="math-container">$P_{a,k} \to f$</span> uniformly on [b,c]" and I get how the first result applies here. I'm just unsure how to actually prove that <span class="math-container">$\frac{C^{k}}{k!} \to 0$</span> as <span class="math-container">$k \to \infty$</span></p>
Mark Viola
218,419
<p>Note that on <span class="math-container">$|z|=1$</span>, <span class="math-container">$\bar z=\frac1z$</span>. Hence, we have</p> <p><span class="math-container">$$\oint_{|z|=1}(\bar z)^2\,dz=\oint_{|z|=1}\frac1{z^2}\,dz$$</span></p> <p>Inasmuch as the residue of <span class="math-container">$\frac1{z^2}$</span> is <span class="math-container">$0$</span>, we conclude that </p> <p><span class="math-container">$$\oint_{|z|=1}(\bar z)^2\,dz=0$$</span></p> <p>And we are done.</p>
28,258
<p>The <a href="https://math.stackexchange.com/questions/tagged/set-theory" class="post-tag" title="show questions tagged &#39;set-theory&#39;" rel="tag">set-theory</a> tag is explicitly about mathematics (and metamathematics) in the context of $\mathsf{ZFC}$ and its subsystems and extensions.</p> <p>Every now and then we see questions about other set theories ($\mathsf{NF}$ is popular enough, but there are other versions, see <a href="https://math.stackexchange.com/q/2740723/462">here</a>, for instance). </p> <blockquote> <p>Wouldn't it be better to have a separate tag for them?</p> </blockquote> <p>I believe the chances of these questions being seen by their intended audience would increase significantly that way. The way things currently are, most of these questions end up being ignored or are not answered by the appropriate experts. </p>
Martin Sleziak
8,297
<ol> <li><p><strong>I am for creating the <a href="https://math.stackexchange.com/questions/tagged/alternative-set-theories" class="post-tag" title="show questions tagged &#39;alternative-set-theories&#39;" rel="tag">alternative-set-theories</a> tag.</strong> Or some similar name, but a tag intended for the set theories suggested in the question. (Although probably some discussion would be needed about the exact scope of the new tag.) The consensus in the <a href="https://math.meta.stackexchange.com/questions/6309/should-we-have-a-tag-for-the-set-theory-nfu">previous discussion about (nfu) tag</a> was that this tag would be too specific. Having a common tag for several similar theories might be a reasonable compromise - instead of having several very specific tags with low usage we get a single umbrella tag, but still not too big. I agree with the suggestion in the question that this might improve chances that people who are interested (and knowledgeable) in these topics would have better way to follow (or at least find) such questions.</p></li> <li><p><strong>I think that questions about this topic should not be excluded from the <a href="https://math.stackexchange.com/questions/tagged/set-theory" class="post-tag" title="show questions tagged &#39;set-theory&#39;" rel="tag">set-theory</a> tag.</strong> As far as I can tell, currently such question typically get the <a href="https://math.stackexchange.com/questions/tagged/set-theory" class="post-tag" title="show questions tagged &#39;set-theory&#39;" rel="tag">set-theory</a> tag and I think it is quite reasonable, so I am for continuing this practice. 
(As far as I can tell, nothing in the current revision of <a href="https://math.stackexchange.com/revisions/38807/4">the tag excerpt</a> and <a href="https://math.stackexchange.com/revisions/38806/7">the tag-wiki</a> for (set-theory) says that such question do not belong there.)<br> I think it is useful if a question gets also relatively specific tag (if such tag exists) but also a tag from some big area (if there is such tag suitable for the question at hand). Specific tags are much better for filtering results for searching. Big tags usually have more followers and their improve the chances that the question gets to a wider audience.</p></li> <li><p><strong>We need a good tag-wiki.</strong> I think that it would be useful to list at least some of alternative set theories which belong here in the tag-info. (I leave to more knowledgeable users to suggest which ones, since I do not know much about this area. Other than NF and Vopěnka's AST I did not encounter such theories. Although my guess would be that probably some of the theories listed in the Wikipedia article <a href="https://en.wikipedia.org/wiki/Alternative_set_theory" rel="nofollow noreferrer">Alternative set theory</a> might fit here.)</p></li> <li><p><strong>Maybe some synonyms could be useful.</strong> If somebody asks a question about NFU (as an example), they might be unaware that the tag <a href="https://math.stackexchange.com/questions/tagged/alternative-set-theories" class="post-tag" title="show questions tagged &#39;alternative-set-theories&#39;" rel="tag">alternative-set-theories</a> exists on this site and that it is intended for questions about NFU. 
I suppose at least for the most common topics which belong under the proposed tag, it might be useful to add synonyms with this tag as the master tag (for example <a href="https://math.stackexchange.com/questions/tagged/nfu" class="post-tag" title="show questions tagged &#39;nfu&#39;" rel="tag">nfu</a> $\to$ (alternative-set-theories), <a href="https://math.stackexchange.com/questions/tagged/new-foundations" class="post-tag" title="show questions tagged &#39;new-foundations&#39;" rel="tag">new-foundations</a> $\to$ (alternative-set-theories), etc.) By this we achieve that if somebody asks a question from this topic, they find the tag if they type something like "new-foundations" or "nfu" in the tag field. (I am aware that there are many users who try to help with tagging new questions and include the tags which the OP missed. And this is especially true about posts related to set theory. But still, if we can make discovering the tag easier for the OP, I think we should do it.)</p></li> </ol>
803,687
<p>How to prove : </p> <blockquote> <p>$A/m^n$ is Artinian for all $n\geq 0$ if $A$ is a Noetherian ring and $m$ maximal ideal.</p> </blockquote> <p>Any suggestions ?</p>
zcn
115,654
<p>Hint: Show that a Noetherian ring with only $1$ prime ideal is Artinian. What are the prime ideals of $A/m^n$?</p>
30,191
<p>Does anybody know about software that exactly calculates the tree-width of a given graph and outputs a tree-decomposition? I am only interested in tree-decompositions of reasonbly small graphs, but need the exact solution and a tree-decomposition. Any comments would be great. Thanks!</p>
András Salamon
7,252
<p>For general graphs there are no good algorithms known, as the problem of determining the treewidth of a graph is NP-hard. So if your graphs are not from some special class, and instances are small, then a brute force search over all decompositions of small width is a reasonable approach.</p> <p>As a previous answer suggested, Röhrig's Diplomarbeit ranks highly in a Google search. His rather negative conclusion in 1998 was that when treewidth exceeds $4$, brute force enumeration was essentially the only realistic option; up to $4$ special-purpose algorithms were reasonable. This is not that surprising, as (intuitively speaking) iterating over all choices of bags of up to $k$ elements takes $\Omega(n^k)$ time, so the runtime grows quite fast.</p> <p>Do your graphs have some special features? The <a href="http://wwwteo.informatik.uni-rostock.de/isgci/">ISGCI</a> has many special graph classes, for some of which it is possible to find join-trees efficiently. (Join-tree decompositions are another name for tree decompositions, although this term nowadays seems to usually refer to join trees as used in Bayesian networks.)</p> <p>Taking a really high level perspective, do you really want to compute tree decompositions? If you are decomposing trees because you need to do something with them, then consider an easier-to-compute decomposition. For instance, the modular decomposition can be computed in linear time, and also guarantees fast algorithms for many problems via the modular decomposition tree. 
There is a <a href="http://search.cpan.org/~azs/Graph-ModularDecomposition/">Perl implementation</a> of an older algorithm, <a href="http://www-sop.inria.fr/members/Nathann.Cohen/tut/Graphs/">Nathann Cohen</a> is currently working to incorporate a more recent C implementation into the <a href="http://www.sagemath.org/">Sage</a> framework, or you could use <a href="http://www.liafa.jussieu.fr/~fm/algos/index.html">Fabien de Montgolfier's C code</a> directly if you read French (the papers describing the work are in English).</p> <p>If you really do need tree decompositions, then have a look at the simple approach via induced width, which can be easily implemented (and parallelized) by considering each possible vertex ordering, then checking the induced width it corresponds to. Section 2.3 of Rina Dechter's draft version of her chapter from the Handbook of Constraint Programming is quite useful as a starting point.</p>
4,404,751
<p>I want to prove the following: Every symmetric matrix whose entries are calculated as <span class="math-container">$ 1/(n -1) $</span> with <span class="math-container">$n$</span> as the size of the matrix, except for the diagonal which is 0, has a characteristic polynomial with a root at <span class="math-container">$x=1$</span>. In other words, every such matrix has an eigenvalue of 1.</p> <p>For example Matrix 1:</p> <p><span class="math-container">\begin{array}{ccc} 0 &amp; \frac{1}{2} &amp; \frac{1}{2} \\ \frac{1}{2} &amp; 0 &amp; \frac{1}{2} \\ \frac{1}{2} &amp; \frac{1}{2} &amp; 0 \\ \end{array}</span></p> <p>has a characteristic polynomial: <span class="math-container">$f(x)=-x^3+\frac{3 x}{4}+\frac{1}{4} $</span> ,which has a root at <span class="math-container">$x=1$</span></p> <p>Matrix 2:</p> <p><span class="math-container">\begin{array}{cccc} 0 &amp; \frac{1}{3} &amp; \frac{1}{3} &amp; \frac{1}{3} \\ \frac{1}{3} &amp; 0 &amp; \frac{1}{3} &amp; \frac{1}{3} \\ \frac{1}{3} &amp; \frac{1}{3} &amp; 0 &amp; \frac{1}{3} \\ \frac{1}{3} &amp; \frac{1}{3} &amp; \frac{1}{3} &amp; 0 \\ \end{array}</span></p> <p>has a characteristic polynomial: <span class="math-container">$f(x)=-(1/27) - (8 x)/27 - (2 x^2)/3 + x^4 $</span> ,which also has a root at <span class="math-container">$x=1$</span></p> <p>Matrix 3:</p> <p><span class="math-container">\begin{array}{ccccc} 0 &amp; \frac{1}{4} &amp; \frac{1}{4} &amp; \frac{1}{4} &amp; \frac{1}{4} \\ \frac{1}{4} &amp; 0 &amp; \frac{1}{4} &amp; \frac{1}{4} &amp; \frac{1}{4} \\ \frac{1}{4} &amp; \frac{1}{4} &amp; 0 &amp; \frac{1}{4} &amp; \frac{1}{4} \\ \frac{1}{4} &amp; \frac{1}{4} &amp; \frac{1}{4} &amp; 0 &amp; \frac{1}{4} \\ \frac{1}{4} &amp; \frac{1}{4} &amp; \frac{1}{4} &amp; \frac{1}{4} &amp; 0 \\ \end{array}</span></p> <p>has a characteristic polynomial: <span class="math-container">$ f(x)=(4 + 60 x + 320 x^2 + 640 x^3 - 1024 x^5)/1024 $</span> ,which also has a root at <span class="math-container">$x=1$</span></p> 
<p>I want to show that this is true for any such n by n matrix, i.e. for all n.</p> <p>Looking for some tips and tricks on how to approach this.</p>
KBS
532,783
<p>Let <span class="math-container">$A_n$</span> be the <span class="math-container">$n\times n$</span> matrix such as the off-diagonal entries are <span class="math-container">$1/(n-1)$</span> and 0 on the diagonal. Then <span class="math-container">$A_n$</span> can be written as</p> <p><span class="math-container">$$A_n=\dfrac{1}{n-1}(\mathbf{1}_n\mathbf{1}_n^T-I_{n})$$</span></p> <p>where <span class="math-container">$\mathbf{1}_n$</span> is the vector of ones of dimension <span class="math-container">$n$</span>. This matrix is the perturbation of a full-rank matrix <span class="math-container">$I_n$</span> by a rank-one matrix <span class="math-container">$\mathbf{1}_n\mathbf{1}_n^T$</span>. Let <span class="math-container">$S\in\mathbb{R}^{n\times(n-1)},\tilde S\in\mathbb{R}^{n\times 1}$</span> be such that <span class="math-container">$\mathbf{1}^TS=0$</span>, <span class="math-container">$S^TS=I$</span>, <span class="math-container">$\tilde{S}^T\tilde{S}=1$</span>, and <span class="math-container">$S^T\tilde{S}$</span>.</p> <p>We can see that the matrix <span class="math-container">$P=\begin{bmatrix} S &amp; \tilde S \end{bmatrix}$</span> is invertible and orthogonal; i.e. <span class="math-container">$P^{-1}=P^T$</span>.</p> <p>Therefore, we can perform a basis change on <span class="math-container">$A_n$</span>. 
This shows that the matrix <span class="math-container">$A_n$</span> is similar to <span class="math-container">$$P^TA_nP=\dfrac{1}{n-1}\begin{bmatrix} -I_{n-1} &amp; 0\\0 &amp; \tilde S^T(\mathbf{1}_n\mathbf{1}_n^T)\tilde S-1 \end{bmatrix}.$$</span></p> <p>Noting that <span class="math-container">$\tilde{S}=\mathbf{1}_n/\sqrt{n}$</span> is a valid <span class="math-container">$\tilde S$</span>, then we get that <span class="math-container">$\tilde S^T(\mathbf{1}_n\mathbf{1}_n^T)\tilde S-1=n-1.$</span></p> <p>This yields that that <span class="math-container">$A_n$</span> is similar to <span class="math-container">$$\dfrac{1}{n-1}\begin{bmatrix} -I_{n-1} &amp; 0\\0 &amp; n-1 \end{bmatrix},$$</span></p> <p>or, equivalently, to <span class="math-container">$$\begin{bmatrix} -\dfrac{1}{n-1}I_{n-1} &amp; 0\\0 &amp; 1 \end{bmatrix}.$$</span></p> <p>This means that the matrix <span class="math-container">$A_n$</span> has one eigenvalue at 1 with multiplicity 1 and one eigenvalue at <span class="math-container">$-1/(n-1)$</span> with multiplicity <span class="math-container">$n-1$</span>.</p> <hr /> <p>Edit. Example in the case <span class="math-container">$n=3$</span>. 
Then, we have that</p> <p><span class="math-container">$$A_3=\dfrac{1}{2}(\mathbf{1}_3\mathbf{1}_3^T-I_{3}).$$</span></p> <p>In this case we can compute <span class="math-container">$S$</span> and <span class="math-container">$\tilde S$</span> as</p> <p><span class="math-container">$$S=\begin{bmatrix}\dfrac{-\sqrt{3}}{3} &amp; \dfrac{-\sqrt{3}}{3} \\ \dfrac{\sqrt{3}}{6}+\dfrac{1}{2} &amp; \dfrac{\sqrt{3}}{6}-\dfrac{1}{2} \\ \dfrac{\sqrt{3}}{6}-\dfrac{1}{2} &amp; \dfrac{\sqrt{3}}{6}+\dfrac{1}{2} \end{bmatrix},\ \tilde S=\dfrac{\sqrt{3}}{3}\begin{bmatrix}1\\1\\1 \end{bmatrix}.$$</span></p> <p>It is quite tedious to do by hand, but there are numerical methods out there that can do that for you.</p> <p>We then obtain that</p> <p><span class="math-container">$$P^TA_3P=\begin{bmatrix} -\dfrac{1}{2}I_{2} &amp; 0\\0 &amp; 1\end{bmatrix}.$$</span></p>
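As a sanity check (my own sketch, not part of the original answer), the eigenvalue claim can be verified without any linear-algebra library: the all-ones vector and the differences of basis vectors are eigenvectors of <span class="math-container">$A_n$</span>.

```python
def apply_An(n, v):
    """Multiply A_n (zero diagonal, off-diagonal entries 1/(n-1)) by the vector v."""
    s = sum(v)
    return [(s - v[i]) / (n - 1) for i in range(n)]

for n in (3, 5, 11):
    ones = [1.0] * n
    # The all-ones vector is an eigenvector with eigenvalue 1.
    assert apply_An(n, ones) == ones
    # e_1 - e_2 (and its analogues) are eigenvectors with eigenvalue -1/(n-1),
    # which is why that eigenvalue has multiplicity n - 1.
    v = [1.0, -1.0] + [0.0] * (n - 2)
    lam = -1.0 / (n - 1)
    assert apply_An(n, v) == [lam * x for x in v]
```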
2,243,900
<p>I've researched this topic a lot, but couldn't find a proper answer to this, and I can't wait a year to learn it at school, so my question is:</p> <blockquote> <p>What exactly is calculus? </p> </blockquote> <p>I know who invented it, the Leibniz controversy, etc., but I'm not exactly sure what it is. I think I heard it was used to calculate the area under a curve on a graph. If anyone can help me with this, I'd much appreciate it.</p>
B. Goddard
362,009
<p>In math, a "calculus" is a set of rules for computing/manipulating some set of objects. For instance the rules $\log AB = \log A+\log B$, etc, are the "logarithm calculus." </p> <p>But commonly, "calculus" refers to "differential calculus" and "integral calculus." There is a set of rules (product rule, quotient rule, chain rule) for manipulating and computing derivatives. There is also a set of rules (integration by parts, trig substitution, etc.) for manipulating and computing integrals. So, at least etymologically, "calculus" refers to these two sets of rules.</p> <p>But so far, this has been a bad answer. You really want to know what differential calculus and integral calculus have come to mean. So here goes:</p> <p>There are a lot of formulas out there that involve multiplying two quantities. $D=RT$, area$ = lw$, work = force x distance, etc. All of these formulas are equivalent to finding the area of a rectangle. If you drive 30 miles per hour for 4 hours, your distance is $D = RT = 30\cdot 4 = 120 $ miles. Easy-peasy.</p> <p>But what if the speed varies during the drive? Now your $30 \times 4$ rectangle is warped. The left and right sides and bottom are still straight, but the top is all curvy. Still, the distance traveled is the area of the warped rectangle. Integral calculus cuts the area into infinitely many, infinitely tiny rectangles, computes the area of all of them and glues them back together to find the area.</p> <p>How much work to lift a 10 pound rock to the top of a 200 foot cliff? $10 \times 200 $ foot-pounds. But now what if the rock is ice and it melts on the way up, so that when it reaches the top it weighs only 1 pound? </p> <p>Differential calculus is sort of the opposite problem. When a rock is falling, it starts at $0$ ft/sec., and accelerates. At each point in time, it is going a different speed. Differential calculus gives us a formula for that constantly changing speed.
If you graph the position of the rock against time, then the speed is slope of that curve at each point in time. </p> <p>In both integral and differential calculus, we do nothing more than take the formula for the area of a rectangle and the slope formula for a line and make them sit up and do tricks. </p>
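The "cut into tiny rectangles and glue them back together" idea is easy to sketch numerically. The varying speed below is my own made-up example, not from the answer:

```python
import math

def distance(v, t_end, n):
    """Approximate the distance traveled: integral of speed v over [0, t_end], n thin rectangles."""
    dt = t_end / n
    return sum(v((k + 0.5) * dt) * dt for k in range(n))  # midpoint rectangles

# Constant 30 mph for 4 hours recovers the plain rectangle D = R*T = 120 miles.
assert abs(distance(lambda t: 30.0, 4.0, 1000) - 120.0) < 1e-9

# A warped rectangle: speed 30 + 10*sin(t); the exact area is 120 + 10*(1 - cos(4)).
approx = distance(lambda t: 30.0 + 10.0 * math.sin(t), 4.0, 100_000)
assert abs(approx - (120.0 + 10.0 * (1.0 - math.cos(4.0)))) < 1e-4
```

Using more (thinner) rectangles drives the error down, which is exactly the limiting process integral calculus makes precise.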
2,243,900
<p>I've researched this topic a lot, but couldn't find a proper answer to this, and I can't wait a year to learn it at school, so my question is:</p> <blockquote> <p>What exactly is calculus? </p> </blockquote> <p>I know who invented it, the Leibniz controversy, etc., but I'm not exactly sure what it is. I think I heard it was used to calculate the area under a curve on a graph. If anyone can help me with this, I'd much appreciate it.</p>
Paul Sinclair
258,282
<p>Another way of understanding calculus is that it is the science of refining approximations. The idea is that if we cannot calculate a value directly, we come up with a scheme that allows us to approximate the value as closely as we want. If that scheme is good enough that by sufficiently refining the approximation, we can eliminate all but one particular value as being the number we are after, then we have our answer.</p> <p>There are a great many values that we cannot calculate directly, but which we can approximate. You mentioned one example: area. From our physical experience, we expect shapes to have a comparable quantity called area. If it takes the same amount of paint to cover two different shapes, then if I paint again, being careful of my thicknesses, it will still take the same amount of paint the second time. To quantify, we can define that a square with sidelength $1$ has area $1$. From this and the concepts that area should not change under rigid motions and that if a shape is divided into two shapes, then the sum of the areas of the parts should be the area of the whole, we can quickly calculate that the area of a rectangle has to be the width $w$ times the height $h$, <em>provided that $w$ and $h$ are both rational values.</em> With a little creativity, we can even show that it holds for some irrational values.</p> <p>But by direct calculation, we can never show that the area of a rectangle is always its width times its height for all irrational values. And even worse, we cannot arrive at an area for any figure whose boundary is not made of line segments strung together. So we have to find a means to calculate them indirectly. That means is by refining approximations.</p> <p>While Newton and Leibniz truly do deserve their titles as the fathers of calculus for their independent invention of the Fundamental Theorem of Calculus, the basic ideas pre-date them - by nearly 2000 years. The key idea is attributed to Eudoxus, though it may predate him as well.
That idea is this: If you are comparing two values $x$ and $y$, and can show that $x$ cannot be less than $y$ and $x$ cannot be greater than $y$, then it has to be that $x = y$. Pretty obvious. Let's apply it to finding areas:</p> <p>Suppose you have some arbitrary shape $S$. We can't directly calculate the area of $S$, but we can cover it with a grid of squares of sidelength $\frac 1n$ for some natural number $n$.</p> <p><a href="https://i.stack.imgur.com/DsoH4.png" rel="noreferrer"><img src="https://i.stack.imgur.com/DsoH4.png" alt="Area by counting squares"></a></p> <p>Now count the number $M_n$ of squares that overlap $S$ (both blue and tan squares), and the number $m_n$ of squares that are completely inside of $S$ (tan squares only). Since every square of the latter type is also of the former, it is always the case that $m_n \le M_n$. The total area of the covering squares is the sum of the areas of the individual squares, each of which we already know is $\frac 1{n^2}$, so it will be $\frac {M_n}{n^2}$, and similarly for the contained squares. If $S$ has an area, then it should be true that $$\frac {m_n}{n^2} \le \text{ area of }S \le \frac {M_n}{n^2}$$ for every $n$. </p>
If we can always find $n$ as above, then $$x = A - (A - x) \le A - \epsilon &lt; \frac {m_n}{n^2} \le \text{area of }S$$ Hence the area cannot be $x$. And similarly, it cannot be $y$. So, per Eudoxus, the area has to be $A$. (If we cannot find such an $n$, then we were wrong about $A$ being the area.)</p> <p>"Limits" are just a terminology used to describe this concept of refining approximations. Derivatives (slopes of tangent lines to curves) and integrals (areas of regions defined by curves) are two very common and very useful values that usually cannot be calculated directly, and which turn out to be closely related.</p>
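As a concrete (and hypothetical, my own) instance of the square-counting scheme, take $S$ to be the unit disk, whose area is $\pi$; the sketch below computes $\frac{m_n}{n^2}$ and $\frac{M_n}{n^2}$ and watches them squeeze together:

```python
import math

def square_counts(n):
    """Lower/upper area estimates for the unit disk from a grid of 1/n squares."""
    m = M = 0
    for i in range(-n, n):
        for j in range(-n, n):
            ax, bx = i / n, (i + 1) / n
            ay, by = j / n, (j + 1) / n
            # Nearest and farthest points of the square from the origin.
            near_x = 0.0 if ax <= 0.0 <= bx else min(abs(ax), abs(bx))
            near_y = 0.0 if ay <= 0.0 <= by else min(abs(ay), abs(by))
            far_x, far_y = max(abs(ax), abs(bx)), max(abs(ay), abs(by))
            if far_x ** 2 + far_y ** 2 <= 1.0:
                m += 1          # farthest corner inside -> square fully contained
            if near_x ** 2 + near_y ** 2 < 1.0:
                M += 1          # nearest point inside -> square overlaps the disk
    return m / n ** 2, M / n ** 2

lo, hi = square_counts(200)
assert lo <= math.pi <= hi   # the true area is always squeezed between the bounds
assert hi - lo < 0.1         # and refining the grid tightens the squeeze
```

Only boundary squares contribute to the gap, so the gap shrinks roughly like the perimeter times $\frac1n$.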
2,243,900
<p>I've researched this topic a lot, but couldn't find a proper answer to this, and I can't wait a year to learn it at school, so my question is:</p> <blockquote> <p>What exactly is calculus? </p> </blockquote> <p>I know who invented it, the Leibniz controversy, etc., but I'm not exactly sure what it is. I think I heard it was used to calculate the area under a curve on a graph. If anyone can help me with this, I'd much appreciate it.</p>
JeopardyTempest
354,338
<p>In algebra, the central focus was lines, which change consistently. Slope was a big focal point, from its use in the fundamental equations of lines (slope-intercept, point-slope, etc), to working on parallel/perpendicular lines, to using lines for models of life and extrapolation/forecasting. While you also should have touched on more complex functions in geometry and conics, you didn't spend nearly the time on any of those as you did on lines. At the same time you've been building up some notation and tools, especially during the latter part of algebra/pre-calculus, expanding your scope (such as inverses, logarithms, rational functions) and setting a larger foundation that will be helpful in the next steps (function notation, matrices, transformations).</p> <p>Well in life most functions indeed aren't linear. For that matter, they often aren't even any of the other major geometric shapes and algebraic functions. When you look at these nonlinear functions, their "slopes" vary by where you are on the graph (in time/space). Life is like a roller coaster, sometimes up, sometimes down. No precision solutions to understand and predict the functions. Not as simple as those nice steady lines: </p> <p><a href="https://i.stack.imgur.com/7htoK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7htoK.png" alt="graph"></a></p> <p>You can get an estimate of the rate of change at each spot, basically how it is sloping, by picking a pair of points in a given area and using the old $(y_2-y_1)/(x_2-x_1)$, but because the graph continually changes slope at every point, the locations you use aren't perfectly representative of the location you're interested in. You need to take that idea further, and that's what the foundations of calculus are. </p> <p>And you quickly come to see that this cousin to slope, rates of change (the derivative) actually shows up in a wide range of giant applications.
They're central to physics and to the full span of sciences; after all, physical properties vary over space and time, so rates of change are everywhere. You can also use the same rate of change concept to calculate the distance along curved functions. And then the next concept, working backwards (antiderivatives/integrals), allows you to calculate the area of shapes/solids. Rates of change are fairly central to most every math beyond here, from differential equations to numerical solution methods.</p> <p>So calculus really is all about getting the precise rate of change. But just as obsessing over the form of line equations may have at first seemed rather limited and pointless, until you started to apply them more... likewise calculating the precise rate of change is much more monumental and useful than it probably sounds at first.</p>
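The two-point $(y_2-y_1)/(x_2-x_1)$ estimate can be played with directly: shrinking the gap between the two points makes the estimate settle toward the true rate of change. A quick sketch of my own:

```python
def slope_estimate(f, x, h):
    """The old (y2 - y1)/(x2 - x1), through (x, f(x)) and (x + h, f(x + h))."""
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 2          # a simple curved graph; the true slope at x is 2x
estimates = [slope_estimate(f, 3.0, h) for h in (1.0, 0.1, 0.001)]
# Algebraically ((3 + h)^2 - 9)/h = 6 + h, so shrinking h pushes the estimate toward 6.
assert estimates[0] == 7.0
assert abs(estimates[2] - 6.0) < 0.01
```

Taking that shrinking process to its limit is precisely the derivative.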
2,243,900
<p>I've researched this topic a lot, but couldn't find a proper answer to this, and I can't wait a year to learn it at school, so my question is:</p> <blockquote> <p>What exactly is calculus? </p> </blockquote> <p>I know who invented it, the Leibniz controversy, etc., but I'm not exactly sure what it is. I think I heard it was used to calculate the area under a curve on a graph. If anyone can help me with this, I'd much appreciate it.</p>
Marco13
133,498
<p>A somewhat broad question. In addition to the existing answers that already go into great detail, here is a more anecdotal one. </p> <p>The subject that you asked about is usually referred to by the term "calculus" in English. In German, this subject is called <a href="https://de.wikipedia.org/wiki/Analysis" rel="nofollow noreferrer">Analysis</a>. This name is <em>not</em> used outside of the mathematics world. So it is <em>not</em> the translation of the English word "analysis"! </p> <p>I also had difficulties with figuring out what all this was about, until I saw it referred to as the "Analysis einer Veränderlichen", which then can indeed be translated as "the analysis of a changing/changeable (value)". </p> <p>So my personal definition is: Calculus is about figuring out how the value of a function changes when its argument changes. Or more broadly: </p> <p><strong>How a function <em>behaves</em>, depending on its inputs.</strong></p> <p>You may have a function like $f(x) = 1/x$, and want to analyze what happens when $x$ approaches zero. Doing this (without actually <em>evaluating</em> the function for each possible $x$ and comparing the values) is one part of the analysis - particularly, computing the derivative $f'(x) = -1/x^2$. </p>
2,243,900
<p>I've researched this topic a lot, but couldn't find a proper answer to this, and I can't wait a year to learn it at school, so my question is:</p> <blockquote> <p>What exactly is calculus? </p> </blockquote> <p>I know who invented it, the Leibniz controversy, etc., but I'm not exactly sure what it is. I think I heard it was used to calculate the area under a curve on a graph. If anyone can help me with this, I'd much appreciate it.</p>
Tom Russell
439,282
<p>It's too simple. The derivative of a function gives its rate of change at a given point. The integral of a function gives the area under the curve up to a given point. The fundamental theorem merely says that the rate of change of the area under the curve is the value of the function at a given point.</p> <p>In other words, the derivative of the integral of a function is the function itself. Voila!</p>
292,639
<p>Traditional Fourier analysis picks a period and then describes a function as: $$f(x) = \frac{1}{2} a_0 + \sum_{k=1}^\infty\, (a_k \cos{(\omega \cdot kx)} + b_n \sin{(\omega \cdot kx)})$$</p> <p>I am wondering whether there is a way to Fourier-analyze a function in a way that the period is dependent on $x$. Let $g$ be a continuous (or differentiable, if necessary) function that is positive for all arguments $x$. Is there a representation of $f$ in a form that looks something like this? $$f(x) = \frac{1}{2} a'_0 + \sum_{k=1}^\infty\, (a'_k \cos{(g(x) \cdot kx)} + b'_n \sin{(g(x) \cdot kx)})$$</p> <p>Perhaps one can first make $f(x)$ periodic in the traditional sense, then apply Fourier analysis, and then backtransform the result. This is just an idea.</p> <p>By the way, $g$ needs to have certain properties for this question to make sense. If $g(x)$ falls so steeply that no period is ever completed, Fourier analysis might not be sensible. If someone would like to work out the details, he may feel free to do so here.</p>
Clayton
43,239
<p>Observe the function is odd (meaning $f(-x)=-f(x)$), so try splitting the integral $$\int_{-4}^4 f(x)dx=\int_{-4}^0 f(x)dx+\int_0^4f(x)dx=\int_0^4f(-x)dx+\int_0^4f(x)dx.$$ What happens next?</p>
292,639
<p>Traditional Fourier analysis picks a period and then describes a function as: $$f(x) = \frac{1}{2} a_0 + \sum_{k=1}^\infty\, (a_k \cos{(\omega \cdot kx)} + b_n \sin{(\omega \cdot kx)})$$</p> <p>I am wondering whether there is a way to Fourier-analyze a function in a way that the period is dependent on $x$. Let $g$ be a continuous (or differentiable, if necessary) function that is positive for all arguments $x$. Is there a representation of $f$ in a form that looks something like this? $$f(x) = \frac{1}{2} a'_0 + \sum_{k=1}^\infty\, (a'_k \cos{(g(x) \cdot kx)} + b'_n \sin{(g(x) \cdot kx)})$$</p> <p>Perhaps one can first make $f(x)$ periodic in the traditional sense, then apply Fourier analysis, and then backtransform the result. This is just an idea.</p> <p>By the way, $g$ needs to have certain properties for this question to make sense. If $g(x)$ falls so steeply that no period is ever completed, Fourier analysis might not be sensible. If someone would like to work out the details, he may feel free to do so here.</p>
Inquest
35,001
<p>Although the odd function property is a better way: Just to add to the possible solutions: \begin{align} \int_{-4}^4\left(10x^9+7x^5\right)dx&amp;=\left[x^{10}+ \dfrac{7x^6}{6} \right] _{-4}^4\\ &amp;=\left[4^{10}+ \dfrac{7\times 4^6}{6} \right]-\left[(-4)^{10}+ \dfrac{7(-4)^6}{6} \right]\\ \end{align}</p> <blockquote class="spoiler"> <p>\begin{align} &amp;=\left[4^{10}+ \dfrac{7\times 4^6}{6} \right]-\left[(4)^{10}+ \dfrac{7(4)^6}{6} \right] \\&amp;=0\end{align}</p> </blockquote>
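Both the antiderivative evaluation and the odd-function shortcut can be checked in exact arithmetic. This sketch (mine, not the answerer's) uses Python's `fractions`:

```python
from fractions import Fraction

def F(x):
    """Antiderivative x^10 + 7*x^6/6 of the integrand 10*x^9 + 7*x^5, in exact arithmetic."""
    x = Fraction(x)
    return x ** 10 + Fraction(7, 6) * x ** 6

# Fundamental theorem: the definite integral over [-4, 4] is F(4) - F(-4) = 0.
assert F(4) - F(-4) == 0

# The odd-function shortcut: f(-x) = -f(x), so the two halves of the integral cancel.
f = lambda x: 10 * x ** 9 + 7 * x ** 5
assert all(f(-x) == -f(x) for x in range(1, 10))
```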
2,044,322
<blockquote> <p>How can you show that :</p> <p><span class="math-container">$$D_n=\sum_{k=1}^{n } \frac{1}{k}-\int_{1}^{n+1} \frac{1}{x} \ dx $$</span> is increasing and bounded (and hence convergent). I'm having trouble.</p> </blockquote>
Learnmore
294,365
<p>$D_{n+1}-D_n=\{\sum_{k=1}^{n+1} \frac{1}{k}-\int_1^{n+2}\frac{1}{x}\}-\{\sum_{k=1}^{n} \frac{1}{k}-\int_1^{n+1}\frac{1}{x}\}$</p> <p>$=\frac{1}{n+1}-\{\int _{n+1}^{n+2}\frac{1}{x} \}=\frac{1}{n+1}-\ln{\frac{n+2}{n+1}}$</p> <p>$=\frac{1}{n+1}+\ln{\frac{n+1}{n+2}}&gt;0,$</p> <p>since $\ln(1+t)&lt;t$ for every $t&gt;0$, here with $t=\frac{1}{n+1}$. Hence $(D_n)$ is increasing.</p>
2,044,322
<blockquote> <p>How can you show that :</p> <p><span class="math-container">$$D_n=\sum_{k=1}^{n } \frac{1}{k}-\int_{1}^{n+1} \frac{1}{x} \ dx $$</span> is increasing and bounded (and hence convergent). I'm having trouble.</p> </blockquote>
K. Miller
264,375
<p>To show that $D_n$ is bounded above observe that</p> <p>$$ D_n \leq 1 + \int_1^n \frac{1}{x}\,dx - \int_1^{n+1} \frac{1}{x}\,dx = 1 + \ln\left(\frac{n}{n+1}\right) \leq 1 + \ln(1) = 1 $$</p> <p>You may obtain this inequality as follows. Observe that $f(x) = 1/x$ is monotonically decreasing on $[1,\infty)$. Thus, $f(k) \leq f(x) \leq f(k-1)$ on the interval $[k-1,k]$ for all integers $k \geq 2$. It follows that</p> <p>$$ f(k) = \int_{k-1}^k f(k)\,dx \leq \int_{k-1}^k f(x)\,dx \implies \sum_{k=2}^n \frac{1}{k} \leq \int_1^n \frac{1}{x}\,dx $$</p> <p>Adding $1$ to both sides of the inequality gives</p> <p>$$ \sum_{k=1}^n \frac{1}{k} \leq 1 + \int_1^n \frac{1}{x}\,dx $$</p> <p>This is essentially just the proof of the integral test for convergence of infinite series.</p>
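Numerically the sequence behaves exactly as claimed: increasing and bounded above by 1. (The observed limit is the Euler–Mascheroni constant γ ≈ 0.5772; that is extra context I am adding, not part of the answer.) A quick sketch:

```python
import math

def D(n):
    """D_n = (1 + 1/2 + ... + 1/n) - integral of 1/x over [1, n+1] = H_n - ln(n+1)."""
    return sum(1.0 / k for k in range(1, n + 1)) - math.log(n + 1)

values = [D(n) for n in (1, 10, 100, 1000, 10 ** 5)]
assert all(a < b for a, b in zip(values, values[1:]))   # increasing
assert all(v < 1.0 for v in values)                     # bounded above by 1
assert abs(values[-1] - 0.577216) < 1e-3                # approaching gamma
```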
2,044,322
<blockquote> <p>How can you show that :</p> <p><span class="math-container">$$D_n=\sum_{k=1}^{n } \frac{1}{k}-\int_{1}^{n+1} \frac{1}{x} \ dx $$</span> is increasing and bounded (and hence convergent). I'm having trouble.</p> </blockquote>
Vim
191,404
<p>Why it is increasing: <a href="https://i.stack.imgur.com/C7fNM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C7fNM.png" alt="enter image description here"></a></p> <p>Why it is bounded: (just note the Red is bounded) <a href="https://i.stack.imgur.com/MAT47.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MAT47.png" alt="enter image description here"></a> (It is dissatisfying that showing the Red is bounded still needs a little calculus. I'll be glad to see any idea that can de-calculise the proof of boundedness.)</p>
4,240,598
<blockquote> <p>Simplify <span class="math-container">$\dfrac{18 - \dfrac 7 {3x}} {\dfrac 7 {18x} - 3}$</span>?</p> </blockquote> <p>I'm having a hard time simplifying this particular expression and am seeking any type of assistance in solving it.</p> <p>In the expression <span class="math-container">$$\frac{18 - \dfrac 7 {3x}} {\dfrac 7 {18x} - 3}$$</span> I split the problems into two separate entities.</p> <p>For the numerator, I get <span class="math-container">$3x$</span> for the LCD and then rewrite the fraction as <span class="math-container">$$54x-7\frac 7 {3x}$$</span> As for the denominator, I get <span class="math-container">$18x$</span> for the LCD and then rewrite the fraction as <span class="math-container">$$7-\frac{54x}{18x}$$</span></p> <p>When I begin to divide, I switch the sign from division to multiplication and swap the numerator with the denominator (<span class="math-container">$7-\frac{54x}{18x}$</span> becomes <span class="math-container">$\frac{18x}{7-54x}$</span>).</p> <p>The product I get is <span class="math-container">$$\frac{972x^2-126x}{21x - 162x^2}$$</span> When I simplify I get <span class="math-container">$6-6$</span> which is zero. Is this answer correct?</p>
Peeter Joot
359
<p>Yes, <span class="math-container">$1$</span> is meant to be the unit scalar here. For example, assume that you are considering the multivectors generated by a 2D Euclidean vector space with an orthonormal basis <span class="math-container">$ \left\{ { \mathbf{e}_1, \mathbf{e}_2 } \right\} $</span>: <span class="math-container">$$ \mathbf{e}_1 \mathbf{e}_1 = 1,$$</span> <span class="math-container">$$ \left( { \mathbf{e}_1 \mathbf{e}_2 } \right) \left( { -\mathbf{e}_1 \mathbf{e}_2 } \right) = 1,$$</span> <span class="math-container">$$ \left( { \mathbf{e}_1 + \mathbf{e}_2 } \right) \frac{ \mathbf{e}_1 + \mathbf{e}_2 }{2} = 1,$$</span> <span class="math-container">$$ \left( { 1 + \mathbf{e}_1 \mathbf{e}_2 } \right) \frac{ 1 - \mathbf{e}_1 \mathbf{e}_2 }{2} = 1.$$</span> All these inverses follow directly from <span class="math-container">$ \mathbf{e}_1^2 = \mathbf{e}_2^2 = 1 $</span>, and <span class="math-container">$ \mathbf{e}_1 \mathbf{e}_2 = -\mathbf{e}_2 \mathbf{e}_1 $</span>.</p>
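These inverses can be checked mechanically. Below is a sketch of my own (not from the answer) of the 2D geometric product, storing a multivector as its coefficients on <span class="math-container">$(1, \mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_1\mathbf{e}_2)$</span>; the multiplication table is derived from exactly the two rules quoted at the end of the answer:

```python
def gp(a, b):
    """Geometric product of 2D multivectors given as (scalar, e1, e2, e12) coefficients."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 + a1*b1 + a2*b2 - a3*b3,        # scalar part (e1e1 = e2e2 = 1, e12e12 = -1)
            a0*b1 + a1*b0 - a2*b3 + a3*b2,        # e1 part
            a0*b2 + a2*b0 + a1*b3 - a3*b1,        # e2 part
            a0*b3 + a3*b0 + a1*b2 - a2*b1)        # e12 part (e1e2 = -e2e1)

one = (1, 0, 0, 0)
e1, e2, e12 = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

assert gp(e1, e1) == one                           # e1 e1 = 1
assert gp(e12, (0, 0, 0, -1)) == one               # (e1 e2)(-e1 e2) = 1
assert gp((0, 1, 1, 0), (0, 0.5, 0.5, 0)) == one   # (e1 + e2)(e1 + e2)/2 = 1
assert gp((1, 0, 0, 1), (0.5, 0, 0, -0.5)) == one  # (1 + e1 e2)(1 - e1 e2)/2 = 1
```

All four products reduce to the unit scalar, matching the four displayed identities.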
3,319,903
<p>What is exactly generating function of ordered partitions and how can I get number of ordered partitions from that?</p> <p>Example:</p> <p><span class="math-container">$$ 4 = 1+1+1+1 \\ = 2+2 \\ = 1+1+2 \\ = 1+2+1 \\ = 2+1+1 \\ = 1+3 \\ = 3+1 \\ = 4 $$</span> so we have <span class="math-container">$8$</span> partitions. I was thinking about exponential generating function:</p> <p><span class="math-container">$$(1+t+\frac{t^2}{2!} + ...)(1+\frac{t^2}{2!} +\frac{t^4}{4!} +...)...(1+\frac{t^n}{n!} + ... ) = e^x e^{2x} e^{3x} \cdots e^{nx} = e^{n(n+1)/2} = \sum_{k \ge 0}\frac{\left(\frac{n(n+1)}{2}x\right)^k}{k!} $$</span> so the number of ordered partitions of <span class="math-container">$n$</span> is <span class="math-container">$$\left(\frac{n(n+1)}{2}\right)^n $$</span> bot for <span class="math-container">$n=4$</span> it is <span class="math-container">$$10^4$$</span> It seems to be completely wrong.</p>
MafPrivate
695,001
<p>We can do this by generating functions, as you wanted.</p> <p>For the ordered partitions of <span class="math-container">$n$</span>:</p> <p>If we separate <span class="math-container">$n$</span> into <span class="math-container">$1$</span> number, we can generate the function <span class="math-container">$x+x^2+\cdots+x^n$</span> and the coefficient of <span class="math-container">$x^n$</span> is the number of combinations.</p> <p>If we separate <span class="math-container">$n$</span> into <span class="math-container">$2$</span> numbers, we can generate the function <span class="math-container">$\left(x+x^2+\cdots+x^n\right)^2$</span> and the coefficient of <span class="math-container">$x^n$</span> is the number of combinations.</p> <p>...</p> <p>If we separate <span class="math-container">$n$</span> into <span class="math-container">$n-1$</span> numbers, we can generate the function <span class="math-container">$\left(x+x^2+\cdots+x^n\right)^{n-1}$</span> and the coefficient of <span class="math-container">$x^n$</span> is the number of combinations.</p> <p>If we separate <span class="math-container">$n$</span> into <span class="math-container">$n$</span> numbers, we can generate the function <span class="math-container">$\left(x+x^2+\cdots+x^n\right)^n$</span> and the coefficient of <span class="math-container">$x^n$</span> is the number of combinations.</p> <p>Therefore the total number of ordered partitions of <span class="math-container">$n$</span> is the coefficient of <span class="math-container">$x^n$</span> of this function <span class="math-container">$$\small f:\left(x+x^2+\cdots+x^n\right)+\left(x+x^2+\cdots+x^n\right)^2+\cdots+\left(x+x^2+\cdots+x^n\right)^{n-1}+\left(x+x^2+\cdots+x^n\right)^n$$</span></p> <p>However, this function <span class="math-container">$$\small g:\left(x+x^2+\cdots+x^n+\cdots\right)+\left(x+x^2+\cdots+x^n+\cdots\right)^2+\cdots+\left(x+x^2+\cdots+x^n+\cdots\right)^n+\cdots$$</span> has the same coefficient of <span class="math-container">$x^n$</span> as <span class="math-container">$f$</span>. Then, we can simplify <span class="math-container">$g$</span> as follows: <span class="math-container">\begin{align}\small\dfrac{x}{1-x}+\left(\dfrac{x}{1-x}\right)^2+\cdots+\left(\dfrac{x}{1-x}\right)^n+\cdots&amp;\small=\dfrac{\dfrac{x}{1-x}}{1-\dfrac{x}{1-x}}\\&amp;\small=\dfrac{x}{1-2x}\\&amp;\small=x\left(1+\left(2x\right)+\left(2x\right)^2+\cdots+\left(2x\right)^{n-1}+\cdots\right)\\&amp;\small=x+2x^2+4x^3+8x^4+\cdots+2^{n-1}x^n+\cdots\end{align}</span></p> <p>Therefore, the number of ordered partitions of <span class="math-container">$n$</span> is <span class="math-container">$2^{n-1}$</span>.</p> <p>The case you have shown is <span class="math-container">$n=4$</span>, for which the number of ordered partitions is <span class="math-container">$8$</span>, the same as counting.</p>
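The closed form <span class="math-container">$2^{n-1}$</span> is easy to confirm by brute-force enumeration (my own sketch):

```python
def compositions(n):
    """Yield every ordered partition (composition) of n as a tuple of positive parts."""
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

# n = 4 reproduces the 8 ordered partitions listed in the question.
assert sorted(compositions(4)) == [(1, 1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 3),
                                   (2, 1, 1), (2, 2), (3, 1), (4,)]
assert all(sum(1 for _ in compositions(n)) == 2 ** (n - 1) for n in range(1, 11))
```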
523,074
<p>I have never done integration in my life and I am in first year of university. Is it harder than taking the derivative? I've heard its just going backwards. Also, my high school taught me only differentiation, I don't know why we never touched on integration. I'm going to be starting it next week and I want to know what I'm facing. Is it generally considered harder than differentiation? Thank you in advance.</p>
Betty Mock
89,003
<p>If you were fine with derivatives, you will be fine with integrals in 1st year calc. It never hurts to pay attention in class (which kind of implies attending class) and to do your homework. In fact, if you have trouble with a problem, you should do more of the same kind as soon as you know the answer. As to difficulty:</p> <p>Integrals start out harder than derivatives and wind up easier. The reason derivatives are easier is that if a function has a derivative you can compute what it is. There is an algorithm for doing so. Sometimes the computation may be long and complicated. But theoretically anyway, anyone can do this.</p> <p>With the integral, you will be given a lot of problems to solve, but there is no algorithm. The kind of problems you get in first year calculus will be solvable if you learn enough tricks. (They are chosen to be solvable). There are hundreds of tricks because over the course of many years lots and lots of smart mathematicians have worked them out. You'll probably learn 3 - 5 tricks in your first year class.</p> <p>Integrals which are not subject to the tricks (that is, most of them) are evaluated with numerical or approximating methods and for practical purposes that works very well. In fact, with the availability of computers, many of the old tricks have fallen into disuse, because it is faster to just do the numerical computation.</p> <p>Integrals and derivatives are the reverse of one another in the same sense that addition and subtraction are. As you know, taking an operation in one direction is often easier than reversing it. Thus multiplication is easier than division, and raising things to powers is easier than the reverse: finding roots.</p> <p>One expression of the connection between derivatives and integrals is the Fundamental Theorem of the Calculus, which you will probably be taught. However, the two subjects are more intertwined than the Fundamental Theorem suggests.
Indeed, I find the myriad entanglements fascinating.</p> <p>Now why does the difficulty turn around later? Derivatives are about differences and division which make a function less smooth; and the smoother a function is, the easier it is to work with. For example, if f is differentiable, f' may only be continuous; and in fact it can be discontinuous. You've taken a relatively smooth function and degraded it.</p> <p>Integrals are about addition, that is to say averaging, which always smooths things out. You can integrate discontinuous functions and they will come out continuous. Integrate again and they will be differentiable, which is even smoother than continuous. We like smooth.</p>
506,965
<p>How can I prove that: $E_π [ (dQ_X/dπ) S (T)| F_t ]= E_{Q_X} [S(T) | F_t]E_π [ dQ_X/dπ | F_t ]$. Obviously $E_π [(dQ_X/dπ) S(T) ]= E_{Q_X} [S(T)]$ I know that much, but how to prove when it is conditioned on $F_t$.</p>
Davide Giraudo
9,849
<p>I guess the $Q_X$, $S(T)$ and $\mathcal F_t$ come from a particular context, so we can simplify the notation and prove that if $\mu,\nu$ are two probability measures such that $\nu\ll\mu$ and the random variables $X$ and $\frac{\mathrm d\nu}{\mathrm d\mu}$ are integrable, then $$\mathbb E_\mu\left(\frac{\mathrm d\nu}{\mathrm d\mu}\cdot X\mid \mathcal F\right)=\mathbb E_{\nu}(X\mid \mathcal F)\cdot\mathbb E_\mu\left(\frac{\mathrm d\nu}{\mathrm d\mu}\mid\mathcal F\right).$$ To see that, we go back to the definition of conditional expectation. Let $F\in\mathcal F$. Then $$\int_F\mathbb E_\mu\left(\frac{\mathrm d\nu}{\mathrm d\mu}\cdot X\mid \mathcal F\right)\mathrm d\mu=\int_F\frac{\mathrm d\nu}{\mathrm d\mu} X\mathrm d\mu=\int_FX\mathrm d\nu.$$ Now, the trick is to use $\mathcal F$-measurability of $\mathbb E_\nu(X\mid\mathcal F)$ to write $$\mathbb E_{\nu}(X\mid \mathcal F)\cdot\mathbb E_\mu\left(\frac{\mathrm d\nu}{\mathrm d\mu}\mid\mathcal F\right)=\mathbb E_\mu\left(\mathbb E_{\nu}(X\mid \mathcal F)\cdot\frac{\mathrm d\nu}{\mathrm d\mu}\mid\mathcal F\right),$$ then integrate over $F$ with respect to $\mu$: this gives $\int_F\mathbb E_{\nu}(X\mid \mathcal F)\,\mathrm d\nu=\int_F X\,\mathrm d\nu$, so both sides of the claimed identity have the same integral over every $F\in\mathcal F$, which proves it. </p>
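On a finite probability space the identity can be checked mechanically, since conditional expectation given a $\sigma$-algebra generated by a partition is just a weighted block average. The toy measures below are my own choice, purely for illustration:

```python
from fractions import Fraction as Fr

# Toy setup: Omega = {0,...,5}, F generated by the partition {0,1,2} | {3,4,5};
# mu and nu are strictly positive probabilities, X is an arbitrary random variable.
omega = range(6)
mu = [Fr(1, 6)] * 6
nu = [Fr(1, 12), Fr(1, 12), Fr(1, 6), Fr(1, 6), Fr(1, 4), Fr(1, 4)]
X = [3, -1, 4, 1, 5, -9]
blocks = [{0, 1, 2}, {3, 4, 5}]
dnu_dmu = [nu[w] / mu[w] for w in omega]  # Radon-Nikodym derivative on atoms

def cond_exp(weights, Y, w):
    """E_weights(Y | F) evaluated at omega = w: weighted average of Y over w's block."""
    B = next(b for b in blocks if w in b)
    return sum(weights[v] * Y[v] for v in B) / sum(weights[v] for v in B)

# Check E_mu(dnu/dmu * X | F) = E_nu(X | F) * E_mu(dnu/dmu | F) pointwise, exactly.
ZX = [dnu_dmu[v] * X[v] for v in omega]
for w in omega:
    assert cond_exp(mu, ZX, w) == cond_exp(nu, X, w) * cond_exp(mu, dnu_dmu, w)
```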
1,361,517
<p>$K_1 K_2 \dotsb K_{11}$ is a regular $11$-gon inscribed in a circle, which has a radius of $2$. Let $L$ be a point, where the distance from $L$ to the circle's center is $3$. Find $LK_1^2 + LK_2^2 + \dots + LK_{11}^2$.</p> <p>Any suggestions as to how to solve this problem? I'm unsure what method to use. </p>
De ath
352,964
<p>Let $\omega = e^{2 \pi i/11}$, a primitive $11^{\text{th}}$ root of unity. We can assume that the circle is centered at the origin, and that $K_{k+1}$ is associated with the complex number $2 \omega^k$ for $k = 0, 1, \dots, 10$. Let $p$ be the complex number associated with the point $L$. Then $LK_1^2 + LK_2^2 + \dots + LK_{11}^2 = \sum_{k = 0}^{10} |p - 2 \omega^k|^2.$ From the identity $z \cdot \overline{z} = |z|^2$, \begin{align*} \sum_{k = 0}^{10} |p - 2 \omega^k|^2 &amp;= \sum_{k = 0}^{10} (p - 2 \omega^k)(\overline{p} - 2 \overline{\omega}^k) \\ &amp;= \sum_{k = 0}^{10} (p \overline{p} - 2 \overline{\omega}^k p - 2 \omega^k \overline{p} + 4 \omega^k \overline{\omega}^k) \\ &amp;= 11 p \overline{p} - 2p \sum_{k = 0}^{10} \overline{\omega}^k - 2 \overline{p} \sum_{k = 0}^{10} \omega^k + 4 \sum_{k = 0}^{10} \omega^k \overline{\omega}^k. \end{align*}</p> <p>The distance from $L$ to the origin is 3, so $11p \overline{p} = 11 \cdot |p|^2 = 11 \cdot 9 = 99$.</p> <p>Since $\omega$ is a primitive $11^{\text{th}}$ root of unity, $\omega^{11} - 1 = 0$, which factors as $(\omega - 1)(\omega^{10} + \omega^9 + \dots + \omega + 1) = 0.$ Since $\omega \neq 1$, we have $\omega^{10} + \omega^9 + \dots + \omega + 1 = 0$. Therefore, $2 \overline{p} \sum_{k = 0}^{10} \omega^k = 0.$</p> <p>Also, $|\omega| = 1$, so $\overline{\omega} = 1/\omega$, which means $\sum_{k = 0}^{10} \overline{\omega}^k = 1 + \frac{1}{\omega} + \dots + \frac{1}{\omega^9} + \frac{1}{\omega^{10}} = \frac{\omega^{10} + \omega^9 + \dots + \omega + 1}{\omega^{10}} = 0.$</p> <p>Finally, $\omega^k \overline{\omega}^k = \omega^k/\omega^k = 1$, so $4 \sum_{k = 0}^{10} \omega^k \overline{\omega}^k = 4 \cdot 11 = 44.$ Therefore, $LK_1^2 + LK_2^2 + \dots + LK_{11}^2 = 99 + 44 = \boxed{143}.$</p>
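A quick numerical cross-check of the final value, and of the fact that only the distance 3 from the center matters (sketch mine):

```python
import cmath

R, d, n = 2.0, 3.0, 11
# The vertices of the regular 11-gon on the circle of radius 2 centered at 0.
vertices = [R * cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

# The sum of squared distances is the same for every point at distance 3.
for angle in (0.0, 0.7, 2.5):
    point = d * cmath.exp(1j * angle)
    total = sum(abs(point - K) ** 2 for K in vertices)
    assert abs(total - 143.0) < 1e-9
```

This matches the derived value $11(|p|^2 + 4) = 11 \cdot 13 = 143$.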