2,814,703
<p>I am reading <a href="https://en.wikipedia.org/wiki/Lower_limit_topology" rel="nofollow noreferrer">lower limit topology</a> on Wikipedia, which states that the lower limit topology </p> <blockquote> <p>[...] is the topology generated by the basis of all half-open intervals $[a,b)$, where a and b are real numbers. [...] The lower limit topology is finer (has more open sets) than the standard topology on the real numbers (which is generated by the open intervals). The reason is that every open interval can be written as a (countably infinite) union of half-open intervals.</p> </blockquote> <p>I cannot see how to write $(a,b)$ as a countably infinite union of half-open intervals.</p>
Lee Mosher
26,501
<p>$$(a,b) = \bigcup_{n=1}^\infty \, \left[\, \left(1-\frac{1}{n}\right)\, a + \frac{1}{n} b ,\, b\right) $$</p>
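Lee Mosher's union can be checked numerically: for any $x\in(a,b)$, solving $(1-\tfrac1n)a+\tfrac1n b\le x$ gives the smallest admissible $n$, namely $n\ge\frac{b-a}{x-a}$. A minimal Python sketch (the function name is mine):

```python
import math

def witness_interval(x, a, b):
    """Return the smallest n with x in [(1 - 1/n)a + (1/n)b, b)."""
    # (1 - 1/n)a + b/n <= x  <=>  n >= (b - a)/(x - a)
    n = math.ceil((b - a) / (x - a))
    left = (1 - 1 / n) * a + b / n
    assert left <= x < b  # x really lies in the n-th half-open interval
    return n
```

Points close to $a$ need large $n$, which is exactly why countably many intervals are required.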
2,243,083
<p>I'm writing an advanced interface, but I don't yet have a concept of derivatives or integrals, and I don't have an easy way to construct infinitely many functions (which could effectively delay or tween their frame's contributing distance [difference between B and A] over the next few frames).</p> <p>I can store values for a frame, and I can also consume them or previous values and map into A.</p> <p>For example, each frame could calculate the distance between B and A, then add that distance to A, but they would be perfectly in sync.</p> <p>I can keep track of the last N frames' distances and constantly shift old distances off, but this would create a delay, not an elastic effect. Somehow, the function that's popping off past frames' distances needs to adjust for how long it's been for those 10 frames.</p> <hr> <p><strong>Is there any function I can rewrite each frame, which picks up the progress from its predecessor and contributes its correct delta amount, adjusting for the new total distance between B and A?</strong></p> <p>Does this question make sense? How can I achieve behavior where A is constantly catching up to B in a non-linear, exponential way?</p> <hr>
user7530
7,530
<p>How about this: each frame, set A's position to a weighted average of B's new position and A's previous position:</p> <p>$$A = (1-\alpha)A + \alpha B.$$</p> <p>Tune alpha to taste.</p>
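This update is standard exponential smoothing: the gap $|B-A|$ shrinks by the factor $(1-\alpha)$ every frame, so A approaches B geometrically without overshooting, and if B moves the same update simply chases the new target. A minimal sketch, assuming a fixed per-frame $\alpha$:

```python
def tween(a, b, alpha, frames):
    """Move a toward b, covering the fraction alpha of the remaining gap each frame."""
    for _ in range(frames):
        a = (1 - alpha) * a + alpha * b
    return a

# After k frames the remaining gap is (1 - alpha)**k times the original gap.
```

For example, with `alpha=0.5` the gap halves every frame.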
1,362,860
<p>$$\frac{1}{3}+\frac{1}{13}+\frac{1}{23}+\frac{1}{31}+\frac{1}{37}+\frac{1}{43}\cdots$$ Intuitively, I feel that this sum converges, but I really don't know why (or if I am correct). Can I have a somewhat rigorous proof of whether this sum converges or diverges? Thanks a lot for any help.</p>
Charles
1,778
<p>By the Prime Number Theorem in arithmetic progressions (the extension of Dirichlet's Theorem), not only are there infinitely many primes which are 3 mod 10, but these primes are asymptotically 1/phi(10) = 1/4 of all primes. Thus, since the reciprocal sum of the primes diverges, so must the reciprocal sum of the primes which end in 3, albeit about 1/4 the 'speed'.</p> <p>Of course this is just primes ending with 3; if you include all primes with 3s anywhere you'll quickly find that this is the vast majority of primes. For example, there are about 10^1000000 / (1000000 log 10) ≈ 4 * 10^999993 primes with up to a million digits, but only 9^1000000 ≈ 3 * 10^954242 numbers (prime or composite) with up to a million digits lacking 3s. That is, if you pick a random prime up to 10^1000000 the chance that it will have no 3s is less than 1 in 10^45751.</p>
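The asymptotic density $1/\varphi(10)=1/4$ for primes ending in 3 is easy to probe numerically. A small sieve sketch, counting primes below $10^5$:

```python
def primes_below(n):
    """Simple sieve of Eratosthenes returning all primes < n."""
    sieve = bytearray([1]) * n
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, n, p)))
    return [i for i in range(n) if sieve[i]]

ps = primes_below(100_000)
ending_in_3 = [p for p in ps if p % 10 == 3]
print(len(ps), len(ending_in_3) / len(ps))  # 9592 primes; the ratio is close to 1/4
```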
3,808,575
<p>Assuming I have the statement ∀x(∀y¬Q(x,y)∨P(x)), can I pull the universal quantifier ∀y out of the parenthesis? Meaning, is this statement equivalent to ∀x∀y(¬Q(x,y)∨P(x)) ?</p> <p>An approach I tried so far:</p> <ol> <li>∀x((∃y Q(x,y) ) =&gt; P(x)). (original eq.)</li> <li>∀x((∀y¬Q(x,y))∨P(x)) (De Morgan's application)</li> <li>∀x∀y(¬Q(x,y)∨P(x)). (Working off the assumption that taking out the ∀y is a valid operation).</li> <li>∀x∀y (Q(x,y) =&gt; P(x)) (Going backwards from the ¬P v Q definition of implication)</li> </ol> <p>Statement 4 does not seem to be equivalent to statement 1, which suggests that pulling out the universal quantifier is not acceptable. I would greatly appreciate any confirmation of whether this is the case, and if so, what governs when quantifiers can be brought to the outside of the parenthesis.</p>
Manx
483,923
<p>For the universal quantifier: in general, if <span class="math-container">$x$</span> appears in both <span class="math-container">$A$</span> and <span class="math-container">$B$</span> we have <span class="math-container">$$\exists xA(x)\to \forall xB(x)\Rightarrow\forall x(A(x)\to B(x))\tag{1}$$</span> <span class="math-container">$$\forall x(A(x)\to B(x))\not\Rightarrow \exists xA(x)\to \forall xB(x)\tag{2}$$</span> However, if <span class="math-container">$x$</span> does not appear in <span class="math-container">$B$</span> we have <span class="math-container">$$\forall x(A(x)\to B)\Leftrightarrow\exists xA(x)\to \forall x B\tag{3}$$</span> The statement in question is similar to <span class="math-container">$(3)$</span>, which is also valid. <span class="math-container">$$∀x∀y(Q(x,y)→P(x))\Leftrightarrow∀x(∃y Q(x,y)→P(x))\tag{4}$$</span> And we can formulate a direct proof of <span class="math-container">$(4)$</span> by natural deduction <span class="math-container">$$\def\fitch#1#2{\hspace{2ex}\begin{array}{|l}#1\\\hline#2\end{array}} \fitch{\forall x\forall y(Q(x,y)\to P(x))} {\fitch{\boxed{a}} {\forall y(Q(a,y)\to P(a))\\ \fitch{\exists y~Q(a,y)} {\fitch{\boxed{b}~Q(a,b)} {Q(a,b)\to P(a)\\ P(a)}\\ P(a)}\\ \exists y~Q(a,y)\to P(a)}\\ \forall x~(\exists y~Q(x,y)\to P(x))}\\ $$</span> Hence <span class="math-container">$\forall x\forall y(Q(x,y)\to P(x))\Rightarrow\forall x~(\exists y~Q(x,y)\to P(x))$</span>. For the other direction we have <span class="math-container">$$ \fitch{\forall x(\exists y~Q(x,y)\to P(x))} {\fitch{\boxed{a}} {\exists y~Q(a,y)\to P(a)\\ \fitch{\boxed{b}~Q(a,b)} {\exists y~Q(a,y)\\ P(a)}\\ \forall y~(Q(a,y)\to P(a))}\\ \forall x\forall y~(Q(x,y)\to P(x))}$$</span> Therefore <span class="math-container">$\forall x~(\exists y~Q(x,y)\to P(x))\Rightarrow \forall x\forall y(Q(x,y)\to P(x))$</span>. This proves <span class="math-container">$(4)$</span>.</p>
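The equivalence $(4)$ can also be sanity-checked by brute force over a small finite domain. This is a model check, not a proof: it enumerates every relation $Q$ and predicate $P$ on a two-element domain and compares the two statements.

```python
from itertools import product

D = (0, 1)  # a two-element domain

ok = True
for Qbits in product((False, True), repeat=4):      # every binary relation Q on D
    Q = {(x, y): Qbits[2 * x + y] for x in D for y in D}
    for Pbits in product((False, True), repeat=2):  # every unary predicate P on D
        P = dict(zip(D, Pbits))
        # forall x forall y (Q(x,y) -> P(x))
        lhs = all((not Q[(x, y)]) or P[x] for x in D for y in D)
        # forall x ((exists y Q(x,y)) -> P(x))
        rhs = all((not any(Q[(x, y)] for y in D)) or P[x] for x in D)
        ok = ok and (lhs == rhs)
print(ok)  # True: the two statements agree on every interpretation
```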
909,228
<p>I'm trying to find a closed form for the following sum $$\sum_{n=1}^\infty\frac{H_n}{n^3\,2^n},$$ where $H_n=\displaystyle\sum_{k=1}^n\frac{1}{k}$ is a harmonic number.</p> <p>Could you help me with it?</p>
Robert Israel
8,508
<p>Start with the series $$\sum_{n=1}^\infty H_n z^n = - \dfrac{\ln(1-z)}{1-z} = f_0(z) $$ </p> <p>Then (according to Maple 18) $$ \sum_{n=1}^\infty \dfrac{H_n}{n} z^n = \int_0^z \dfrac{f_0(t)}{t}\; dt = \operatorname{Li}_{2}(1-z) + \dfrac{\ln(1-z)^2}{2} = f_1(z)$$ </p> <p>$$\displaystyle \sum_{n=1}^\infty \dfrac{H_n}{n^2} z^n = \int_0^z \dfrac{f_1(t)}{t} dt$$ </p> <p>$$= \zeta \left( 3 \right) +\dfrac{1}{2}\, \ln^2 (1-z) \ln \left( z \right) +\ln (1-z) \operatorname{Li}_{2} (z) -\operatorname{Li}_{3}(1-z) + \operatorname{Li}_{3}(z) = f_2(z) $$ </p> <p>But for the next integration Maple fails to find a closed form: $$\sum_{n=1}^\infty \dfrac{H_n}{n^3} z^n = \int_0^z \dfrac{f_2(t)}{t}\; dt$$</p>
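The expression for $f_1$ can be spot-checked numerically: at $z=\tfrac12$, using $\operatorname{Li}_2(\tfrac12)=\tfrac{\pi^2}{12}-\tfrac{\ln^2 2}{2}$, it reduces to $\sum_{n\ge1} H_n/(n2^n)=\pi^2/12$. A quick partial-sum check:

```python
import math

def partial_sum(terms=100):
    """Partial sum of sum_{n>=1} H_n / (n * 2^n)."""
    H, s = 0.0, 0.0
    for n in range(1, terms + 1):
        H += 1.0 / n
        s += H / (n * 2.0 ** n)
    return s

print(partial_sum(), math.pi ** 2 / 12)  # both about 0.8224670334
```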
909,228
<p>I'm trying to find a closed form for the following sum $$\sum_{n=1}^\infty\frac{H_n}{n^3\,2^n},$$ where $H_n=\displaystyle\sum_{k=1}^n\frac{1}{k}$ is a harmonic number.</p> <p>Could you help me with it?</p>
Tunk-Fey
123,277
<p>In the same spirit as Robert Israel's answer and continuing <a href="https://math.stackexchange.com/a/605602/123277">Raymond Manzoni's answer</a> (both of them deserve the credit because of inspiring my answer) we have $$ \sum_{n=1}^\infty \frac{H_nx^n}{n^2}=\zeta(3)+\frac{1}{2}\ln x\ln^2(1-x)+\ln(1-x)\operatorname{Li}_2(1-x)+\operatorname{Li}_3(x)-\operatorname{Li}_3(1-x). $$ Dividing equation above by $x$ and then integrating yields \begin{align} \sum_{n=1}^\infty \frac{H_nx^n}{n^3}=&amp;\zeta(3)\ln x+\frac12\color{red}{\int\frac{\ln x\ln^2(1-x)}{x}\ dx}+\color{blue}{\int\frac{\ln(1-x)\operatorname{Li}_2(1-x)}x\ dx}\\&amp;+\operatorname{Li}_4(x)-\color{green}{\int\frac{\operatorname{Li}_3(1-x)}x\ dx}.\tag1 \end{align} Using IBP to evaluate the green integral by setting $u=\operatorname{Li}_3(1-x)$ and $dv=\frac1x\ dx$, we obtain \begin{align} \color{green}{\int\frac{\operatorname{Li}_3(1-x)}x\ dx}&amp;=\operatorname{Li}_3(1-x)\ln x+\int\frac{\ln x\operatorname{Li}_2(1-x)}{1-x}\ dx\qquad x\mapsto1-x\\ &amp;=\operatorname{Li}_3(1-x)\ln x-\color{blue}{\int\frac{\ln (1-x)\operatorname{Li}_2(x)}{x}\ dx}.\tag2 \end{align} Using Euler's reflection formula for dilogarithm $$ \operatorname{Li}_2(x)+\operatorname{Li}_2(1-x)=\frac{\pi^2}6-\ln x\ln(1-x), $$ then combining the blue integral in $(1)$ and $(2)$ yields $$ \frac{\pi^2}6\int\frac{\ln (1-x)}{x}\ dx-\color{red}{\int\frac{\ln x\ln^2(1-x)}{x}\ dx}=-\frac{\pi^2}6\operatorname{Li}_2(x)-\color{red}{\int\frac{\ln x\ln^2(1-x)}{x}\ dx}. 
$$ Setting $x\mapsto1-x$ and using the identity $H_{n+1}-H_n=\frac1{n+1}$, the red integral becomes \begin{align} \color{red}{\int\frac{\ln x\ln^2(1-x)}{x}\ dx}&amp;=-\int\frac{\ln (1-x)\ln^2 x}{1-x}\ dx\\ &amp;=\int\sum_{n=1}^\infty H_n x^n\ln^2x\ dx\\ &amp;=\sum_{n=1}^\infty H_n \int x^n\ln^2x\ dx\\ &amp;=\sum_{n=1}^\infty H_n \frac{\partial^2}{\partial n^2}\left[\int x^n\ dx\right]\\ &amp;=\sum_{n=1}^\infty H_n \frac{\partial^2}{\partial n^2}\left[\frac {x^{n+1}}{n+1}\right]\\ &amp;=\sum_{n=1}^\infty H_n \left[\frac{x^{n+1}\ln^2x}{n+1}-2\frac{x^{n+1}\ln x}{(n+1)^2}+2\frac{x^{n+1}}{(n+1)^3}\right]\\ &amp;=\ln^2x\sum_{n=1}^\infty\frac{H_n x^{n+1}}{n+1}-2\ln x\sum_{n=1}^\infty\frac{H_n x^{n+1}}{(n+1)^2}+2\sum_{n=1}^\infty\frac{H_n x^{n+1}}{(n+1)^3}\\ &amp;=\frac12\ln^2x\ln^2(1-x)-2\ln x\left[\sum_{n=1}^\infty\frac{H_{n+1} x^{n+1}}{(n+1)^2}-\sum_{n=1}^\infty\frac{x^{n+1}}{(n+1)^3}\right]\\&amp;+2\left[\sum_{n=1}^\infty\frac{H_{n+1} x^{n+1}}{(n+1)^3}-\sum_{n=1}^\infty\frac{x^{n+1}}{(n+1)^4}\right]\\ &amp;=\frac12\ln^2x\ln^2(1-x)-2\ln x\left[\sum_{n=1}^\infty\frac{H_{n} x^{n}}{n^2}-\sum_{n=1}^\infty\frac{x^{n}}{n^3}\right]\\&amp;+2\left[\sum_{n=1}^\infty\frac{H_{n} x^{n}}{n^3}-\sum_{n=1}^\infty\frac{x^{n}}{n^4}\right]\\ &amp;=\frac12\ln^2x\ln^2(1-x)-2\ln x\left[\sum_{n=1}^\infty\frac{H_{n} x^{n}}{n^2}-\operatorname{Li}_3(x)\right]\\&amp;+2\left[\sum_{n=1}^\infty\frac{H_{n} x^{n}}{n^3}-\operatorname{Li}_4(x)\right]. 
\end{align} Putting all together, we have \begin{align} \sum_{n=1}^\infty \frac{H_nx^n}{n^3}=&amp;\frac12\zeta(3)\ln x-\frac18\ln^2x\ln^2(1-x)+\frac12\ln x\left[\sum_{n=1}^\infty\frac{H_{n} x^{n}}{n^2}-\operatorname{Li}_3(x)\right]\\&amp;+\operatorname{Li}_4(x)-\frac{\pi^2}{12}\operatorname{Li}_2(x)-\frac12\operatorname{Li}_3(1-x)\ln x+C.\tag3 \end{align} Setting $x=1$ to obtain the constant of integration, \begin{align} \sum_{n=1}^\infty \frac{H_n}{n^3}&amp;=\operatorname{Li}_4(1)-\frac{\pi^2}{12}\operatorname{Li}_2(1)+C\\ \frac{\pi^4}{72}&amp;=\frac{\pi^4}{90}-\frac{\pi^4}{72}+C\\ C&amp;=\frac{\pi^4}{60}. \end{align} Thus \begin{align} \sum_{n=1}^\infty \frac{H_nx^n}{n^3}=&amp;\frac12\zeta(3)\ln x-\frac18\ln^2x\ln^2(1-x)+\frac12\ln x\left[\sum_{n=1}^\infty\frac{H_{n} x^{n}}{n^2}-\operatorname{Li}_3(x)\right]\\&amp;+\operatorname{Li}_4(x)-\frac{\pi^2}{12}\operatorname{Li}_2(x)-\frac12\operatorname{Li}_3(1-x)\ln x+\frac{\pi^4}{60}.\tag4 \end{align} Finally, setting $x=\frac12$, we obtain \begin{align} \sum_{n=1}^\infty \frac{H_n}{2^nn^3}=\color{purple}{\frac{\pi^4}{720}+\frac{\ln^42}{24}-\frac{\ln2}8\zeta(3)+\operatorname{Li}_4\left(\frac12\right)}, \end{align} which matches Cleo's answer.</p> <hr> <p><strong>References :</strong></p> <p>$[1]\ $ <a href="http://mathworld.wolfram.com/HarmonicNumber.html" rel="noreferrer">Harmonic number</a></p> <p>$[2]\ $ <a href="https://en.wikipedia.org/wiki/Polylogarithm" rel="noreferrer">Polylogarithm</a></p>
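The final purple identity can be verified to high precision in plain floating point, with $\zeta(3)$ and $\operatorname{Li}_4(\tfrac12)$ computed from their defining series (the tail of the $\zeta(3)$ series is estimated by a standard midpoint integral correction):

```python
import math

# Left side: partial sum of sum_{n>=1} H_n / (n^3 2^n)
H, lhs = 0.0, 0.0
for n in range(1, 201):
    H += 1.0 / n
    lhs += H / (n ** 3 * 2.0 ** n)

# zeta(3) via its series plus a midpoint tail correction
N = 100_000
zeta3 = sum(k ** -3 for k in range(1, N + 1)) + 1.0 / (2 * (N + 0.5) ** 2)

# Li_4(1/2) = sum_{n>=1} 1/(2^n n^4) converges geometrically
li4_half = sum(1.0 / (2.0 ** n * n ** 4) for n in range(1, 80))

ln2 = math.log(2.0)
rhs = math.pi ** 4 / 720 + ln2 ** 4 / 24 - ln2 / 8 * zeta3 + li4_half
print(lhs, rhs)  # the two values agree to many decimal places
```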
997,602
<blockquote> <p>Prove that the function <span class="math-container">$x \mapsto \dfrac 1{1+ x^2}$</span> is uniformly continuous on <span class="math-container">$\mathbb{R}$</span>.</p> </blockquote> <p>Attempt: By definition a function <span class="math-container">$f: E →\Bbb R$</span> is uniformly continuous iff for every <span class="math-container">$ε &gt; 0$</span>, there is a <span class="math-container">$δ &gt; 0$</span> such that <span class="math-container">$|x-a| &lt; δ$</span> and <span class="math-container">$x,a$</span> are elements of <span class="math-container">$E$</span> implies <span class="math-container">$|f(x) - f(a)| &lt; ε.$</span></p> <p>Then suppose <span class="math-container">$x, a$</span> are elements of <span class="math-container">$\Bbb R. $</span> Now <span class="math-container">\begin{align} |f(x) - f(a)| &amp;= \left|\frac1{1 + x^2} - \frac1{1 + a^2}\right| \\&amp;= \left| \frac{a^2 - x^2}{(1 + x^2)(1 + a^2)}\right| \\&amp;= |x - a| \frac{|x + a|}{(1 + x^2)(1 + a^2)} \\&amp;≤ |x - a| \frac{|x| + |a|}{(1 + x^2)(1 + a^2)} \\&amp;= |x - a| \left[\frac{|x|}{(1 + x^2)(1 + a^2)} + \frac{|a|}{(1 + x^2)(1 + a^2)}\right] \end{align}</span></p> <p>I don't know how to simplify further. Can someone please help me finish? Thanks very much.</p>
mookid
131,738
<p><strong>Hint:</strong> use the inequality $$x&gt;0\implies x &lt; 1 + x^2 $$ (if $x&lt;1$ it is true; otherwise via multiplication by $x$, $x&gt;1\implies x^2&gt;x$)</p>
997,602
<blockquote> <p>Prove that the function <span class="math-container">$x \mapsto \dfrac 1{1+ x^2}$</span> is uniformly continuous on <span class="math-container">$\mathbb{R}$</span>.</p> </blockquote> <p>Attempt: By definition a function <span class="math-container">$f: E →\Bbb R$</span> is uniformly continuous iff for every <span class="math-container">$ε &gt; 0$</span>, there is a <span class="math-container">$δ &gt; 0$</span> such that <span class="math-container">$|x-a| &lt; δ$</span> and <span class="math-container">$x,a$</span> are elements of <span class="math-container">$E$</span> implies <span class="math-container">$|f(x) - f(a)| &lt; ε.$</span></p> <p>Then suppose <span class="math-container">$x, a$</span> are elements of <span class="math-container">$\Bbb R. $</span> Now <span class="math-container">\begin{align} |f(x) - f(a)| &amp;= \left|\frac1{1 + x^2} - \frac1{1 + a^2}\right| \\&amp;= \left| \frac{a^2 - x^2}{(1 + x^2)(1 + a^2)}\right| \\&amp;= |x - a| \frac{|x + a|}{(1 + x^2)(1 + a^2)} \\&amp;≤ |x - a| \frac{|x| + |a|}{(1 + x^2)(1 + a^2)} \\&amp;= |x - a| \left[\frac{|x|}{(1 + x^2)(1 + a^2)} + \frac{|a|}{(1 + x^2)(1 + a^2)}\right] \end{align}</span></p> <p>I don't know how to simplify further. Can someone please help me finish? Thanks very much.</p>
Paul
17,980
<p>You have nearly finished the proof:</p> <p>$$|x - a| \left(\frac{|x|}{(1 + x^2)(1 + a^2)} + \frac{|a|}{(1 + x^2)(1 + a^2)}\right)\le |x - a| \left(\frac{1}{2(1 + a^2)} + \frac{1}{2(1 + x^2)}\right)\le |x-a|$$</p> <p>Take $\delta=\epsilon$.</p>
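This bound says $f(x)=1/(1+x^2)$ is $1$-Lipschitz, which gives uniform continuity with $\delta=\varepsilon$. A quick numeric scan of difference quotients agrees; the true supremum of $|f'|$ is $\frac{3\sqrt3}{8}\approx0.65$, comfortably below $1$:

```python
def f(x):
    return 1.0 / (1.0 + x * x)

h = 1e-4
# difference quotients |f(t+h) - f(t)| / h on a grid covering the interesting region
worst = max(abs(f(t + h) - f(t)) / h for t in (i / 1000.0 for i in range(-5000, 5000)))
print(worst)  # about 0.6495, below the Lipschitz bound 1
```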
2,449,443
<p>A set of numbers $\ x_1, \ldots, x_m , y_1, \ldots, y_n $ is given, where $\ x_i=0 $ for $i = 1,\ldots, m$ and $\ y_i=1 $ for $i = 1,\ldots, n$.</p> <p>Show that the mean $M$ of this set is given by $\frac{n}{m+n}$ and the standard deviation $S$ by $\frac{ \sqrt{mn}} {m+n} $.</p> <p>I know the definitions of the mean and standard deviation and how to compute them, but I'm really stuck on this question.</p>
BruceET
221,800
<p>The sample variance of observations denoted $x_i$ is $$S^2 = \frac{\sum_{i=1}^n (x_i - \bar x)^2}{n-1} = \frac{\sum_{i=1}^n x_i^2\; -\; (\sum_{i=1}^n x_i)^2/n}{n-1}.$$</p> <p>The middle member of the equation is the definition and the last member is sometimes called the 'computational formula' (easily deduced from the definition). Is that the definition of $S$ you're using?</p> <p>Notice that for $x_i \equiv 0,$ one has $\sum_{i=1}^m x_i = \sum_{i=1}^m x_i^2 = 0.$</p> <p>Also, for $y_i \equiv 1,$ one has $\sum_{i=1}^n y_i = \sum_{i=1}^n y_i^2 = n.$</p> <p>Put it all together, and you should be off to a good start, provided I have properly interpreted a somewhat vague question.</p>
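A quick numeric check on the question's 0/1 data: the target $S=\sqrt{mn}/(m+n)$ corresponds to dividing by $N$ (the population convention) rather than by $N-1$. The values $m=3$, $n=5$ below are an arbitrary example:

```python
def pop_mean_and_sd(xs):
    """Population mean and standard deviation (divide by N, not N - 1)."""
    N = len(xs)
    mu = sum(xs) / N
    var = sum((x - mu) ** 2 for x in xs) / N
    return mu, var ** 0.5

m, n = 3, 5
xs = [0] * m + [1] * n
mu, sd = pop_mean_and_sd(xs)
print(mu, sd)  # n/(m+n) = 0.625 and sqrt(m*n)/(m+n) = sqrt(15)/8
```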
2,030,116
<p>How can I prove that $\sqrt[12]{2}$ is an irrational number? </p> <p>I'm trying: </p> <p>$$\sqrt[12]{2} = \frac{p}{q}$$ where $p$, $q$ are integers</p> <p>it follows that :</p> <p>$$p^{12} = 2q^{12} $$</p> <p>What is the argument for irrationality in this case? Is it that the left-hand side has an even number of factors of $2$ while the right-hand side has an odd number? </p> <p>I will be grateful for your help. Best regards</p>
mathcounterexamples.net
187,663
<p>A (late) proof not using the <strong>unique factorization theorem</strong>, but only <strong>Euclid's lemma</strong>.</p> <p>Suppose that $p,q$ are coprime integers. As $p^{12} = 2q^{12}$, you get that $2 \ | \ p^{12}$. As $2$ is a prime number, $2 \ | \ p$. Therefore $p = 2 p^\prime$ and $$2^{12} \left(p^\prime \right)^{12} = 2q^{12} \Leftrightarrow 2^{11}\left(p^\prime \right)^{12} = q^{12}$$</p> <p>From this you get that $2 \ | \ q^{12}$ and therefore $2 \ | \ q$ (again as $2$ is prime). You finally get the contradiction that $p,q$ are not coprime integers. Therefore the initial assumption that $\sqrt[12]{2}$ is a rational number is wrong.</p>
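The parity argument behind this is mechanical: the exponent of $2$ in $p^{12}$ is $12\,v_2(p)$ (even), while in $2q^{12}$ it is $1+12\,v_2(q)$ (odd), so the two sides can never be equal. A small check of that parity mismatch:

```python
def v2(n):
    """Exponent of 2 in the factorization of n (n > 0)."""
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

# For every p, q: v2(p**12) is even and v2(2 * q**12) is odd, so p**12 != 2*q**12.
mismatch = all(v2(p ** 12) % 2 == 0 and v2(2 * q ** 12) % 2 == 1
               for p in range(1, 50) for q in range(1, 50))
print(mismatch)  # True
```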
105,857
<p>Let $\mathcal{O}$ be the ring of integers in an algebraic number field. Is $\text{SL}_2(\mathcal{O})$ generated by elementary matrices? If it isn't, is there any other natural generating set for it?</p> <p>The usual argument shows that this is true for $\mathcal{O} = \mathbb{Z}$ (or, more generally, a Euclidean domain). However, I haven't been able to generalize this to other rings of integers.</p>
Charlie Frohman
4,304
<p>No. In Charles Frohman and Benjamin Fine, "Some Amalgam Structures for Bianchi Groups," 1988, Proceedings of the American Mathematical Society, Vol. 102, No. 2, pp. 221-229, we construct a splitting of $PSL_2(\mathcal{O})$, where $\mathcal{O}$ is the ring of integers of $\mathbb{Q}[\sqrt{-d}]$ for $d$ a sufficiently large positive square-free integer, and one of the factors is the subgroup generated by the elementary matrices. The fact that the elementary matrices do not generate was well known. I think I learned it from Morris Newman. Maybe Richard Swan proved it?</p>
2,199,222
<p>I have the feeling of being stuck or missing something trying to prove $$ \lim_{N\to\infty}\sum_{k=1}^{N} \frac{1}{N+k} =\int_{1}^{2} \frac{1}{x} dx = \ln(2)$$</p> <p>Using Riemann sums I have shown that $$\int_{1}^{a} \frac{1}{x} dx=\lim_{N\to\infty}\sum_{k=1}^{N} (a^{1/N}-1)=\lim_{N\to\infty}N(a^{1/N}-1)=\lim_{h\to 0}\frac{a^h-1}{h}=\ln(a)$$</p> <p>So I would have to show that </p> <p>$$\lim_{N\to\infty}\sum_{k=1}^{N} \frac{1}{N+k}=\lim_{N\to\infty}\sum_{k=1}^{N} (2^{1/N}-1)$$</p> <p>However the summands are not equal. How does one prove this equality?</p>
Stella Biderman
123,230
<p>I would call it the sum of a skew-symmetric and a diagonal matrix.</p>
1,618,753
<p>Trying to expand $f(x)=\cot(x)$ to Taylor series (Maclaurin, actually). But I keep "adding up" infinities when using the formula. (Because of $\cot(0)=\infty$) Could you perhaps give me a hint on how to proceed?</p>
Travis Willse
155,629
<p>Since $f$ is not defined at $0$, its Maclaurin series is undefined.</p> <p>On the other hand, the pole of $f$ at $0$ is simple, and it's not hard to compute that the residue of $f$ there is $1$, so one <em>can</em> compute a Maclaurin series for $\cot x - \frac{1}{x}$ there, namely, $$\cot x - \frac{1}{x} \sim -\frac{1}{3} x - \frac{1}{45} x^3 - \cdots .$$ Of course, one can rearrange this and write $$\cot x \sim \frac{1}{x}-\frac{1}{3} x - \frac{1}{45} x^3 - \cdots$$ (as for Taylor series, $\sim$ must be suitably interpreted.) This is an example of a <em>Laurent series</em>, or roughly, an analog of a Taylor series allowing negative powers of $x - a$.</p>
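The first Laurent coefficients are easy to verify numerically: near $x=0$ the remainder after the $x^3$ term is $O(x^5)$ (the next term of the series is $-\tfrac{2}{945}x^5$):

```python
import math

def cot(x):
    return math.cos(x) / math.sin(x)

x = 0.1
approx = 1.0 / x - x / 3.0 - x ** 3 / 45.0
print(cot(x) - approx)  # about -2/945 * x**5, i.e. roughly -2.1e-8
```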
644,163
<p>The question asks: Find the line through $(3,1,-2)$ that intersects and is perpendicular to</p> <p>$$x = -1 + t, y = -2 + t, z = -1 + t.$$</p> <p>My thoughts: Say the point of intersection is $(x_0,y_0,z_0)$, then my line can be of the form</p> <p>$$L(s) = (3,1,-2) + (x_0- 3,y_0- 1,z_0+ 2)s$$</p> <p>Then I tried setting up a system of equations at the intersection like this:</p> <p>$$-1 + t = 3 + (x_0- 3)s$$</p> <p>$$-2 + t = 1 + (y_0- 1)s$$</p> <p>$$-1 + t = -2 + (z_0+ 2)s$$</p> <p>And tried finding the point $(x_0,y_0,z_0)$, but I feel that I'm not on the right track. Could someone explain to me how to appropriately tackle this problem?</p>
Mhenni Benghorbal
35,472
<p>Here is an approach. Find a vector normal to the direction vector $(1,1,1)$ of the given line (note that $(-1,-2,-1)$ is a point on the line, not its direction), then use it, together with the point of intersection, to find the equation of the perpendicular line. </p>
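Concretely, one can project the given point onto the line to get the foot of the perpendicular, then join the two points; a sketch for this problem's data (the helper name is mine):

```python
def foot_of_perpendicular(P, Q0, d):
    """Orthogonal projection of point P onto the line Q0 + t*d."""
    t = sum((p - q) * di for p, q, di in zip(P, Q0, d)) / sum(di * di for di in d)
    return tuple(q + t * di for q, di in zip(Q0, d))

P = (3, 1, -2)
foot = foot_of_perpendicular(P, Q0=(-1, -2, -1), d=(1, 1, 1))
direction = tuple(p - f for p, f in zip(P, foot))
print(foot, direction)  # foot (1, 0, 1); direction (2, 1, -3), orthogonal to (1, 1, 1)
```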
898,683
<p>Given a pool of 30 balls (5 of each color). When drawing 8 balls without replacement, what is the probability of getting at least one of each color?</p> <p>Related: <a href="https://math.stackexchange.com/questions/897730/probability-of-drawing-at-least-one-red-and-at-least-one-green-ball">Probability of drawing at least one red and at least one green ball.</a></p> <p>When drawing more than 2 colors you need to exclude overlapping 'hands'. Thus when finding the probability of drawing no red, you can have a hand made up of blue, green, white, black and grey. But when you are determining the probability of drawing no blue you draw from red, green, white, black, grey. So you need to exclude all green, white, black, grey hands as they have already been counted. And the same for the other colors as well.</p> <p>The other complexity of the problem is that since there are only 5 of each color, no draw will only include balls of the same color.</p>
bobbym
77,276
<p>Similar problems appear in</p> <p>S. Ghahramani, Fundamentals of Probability with Stochastic Processes, 3rd ed., 2005, p. 73</p> <p>and </p> <p>P. J. Nahin, Digital Dice: Computational Solutions to Practical Probability Problems, 2008, p. 237.</p> <p>Using their methods and Mathematica (which is not as succinct as user84413's answer), I get $$ \frac{5000}{26013} $$</p>
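The value $\frac{5000}{26013}$ can be confirmed exactly by inclusion–exclusion over the set of colors that are entirely missed:

```python
from fractions import Fraction
from math import comb

def p_all_colors(colors=6, per_color=5, draw=8):
    """P(every color appears) when drawing without replacement."""
    total = colors * per_color
    # inclusion-exclusion over the j colors that are entirely avoided
    fav = sum((-1) ** j * comb(colors, j) * comb(total - j * per_color, draw)
              for j in range(colors + 1))
    return Fraction(fav, comb(total, draw))

print(p_all_colors())  # 5000/26013
```

Note that `math.comb(n, k)` returns 0 when `k > n`, which is exactly the convention the alternating sum needs.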
2,475,757
<p>I want to determine if the following integrals converge or diverge.</p> <ol> <li>$\int_{0}^\infty \frac{\sqrt{x}}{\sqrt[3]{x^5+1}}dx.$</li> <li>$\int_{0}^\infty \sin\frac{1}{x^2+1}dx$.</li> <li>$\int_{\sqrt{2}}^2 \frac{dx}{\sqrt{x^2-2}}.$</li> <li>$\int_{0}^1 \frac{\ln{x}}{x}dx.$</li> </ol> <hr> <p><strong>(1):</strong> Here I can raise the integrand to the third power and use that $$\int_{0}^\infty\frac{\sqrt{x}}{\sqrt[3]{x^5+1}}\,dx\leq\int_{0}^\infty\frac{x^{3/2}}{x^5+1}\,dx,$$</p> <p>Clearly the RHS is convergent since the denominator grows faster than the numerator. Is this correct reasoning?</p> <p><strong>(2):</strong> Just by looking at it I can say that as $x\rightarrow \infty$ the argument of $\sin$ approaches $0$, so the entire function approaches $0$, and thus the integral is convergent. Same question as above: is this reasoning correct? And how can one show this analytically?</p> <p><strong>(3):</strong> Having trouble with this one. Clearly the function is not defined at $x=\sqrt{2}$; should I instead be looking at $$-\lim_{n\rightarrow \sqrt{2}}\int_{2}^{n}\frac{dx}{\sqrt{x^2-2}}?$$</p> <p><strong>(4):</strong> This one seemed simple at first glance. I used the function $\ln(x)$ for comparison. $$\int_{0}^1\frac{\ln{x}}{x}dx\le \int_{0}^1 \ln{x}\,dx,$$</p> <p>I know that the right integral is convergent since its value is $-1$, but why is the left integral divergent?</p>
Amanda R.
335,482
<p>$S=\{(1,1),(1,2),(1,3),(2,1),(2,2),(2,3),(3,1),(3,2),(3,3)\}$</p> <p>Therefore the sample space contains 9 items/events.</p> <p>We assume that each combination has an equal chance of being chosen, so each combination has a $1/9$ chance of being chosen.</p> <p>$\Pr(X=2)=\Pr\big((1,2)\text{ or }(2,1)\text{ or }(2,2)\big)=1/9+1/9+1/9=3/9=1/3$</p>
2,941,106
<p>I have tried $29.2/8.44$ and tried multiplying to get whole numbers, but it doesn't seem to be working. </p>
Phil H
554,494
<p><span class="math-container">$844:2920$</span> divided by <span class="math-container">$4$</span> gives <span class="math-container">$211:730$</span>. </p> <p><span class="math-container">$211$</span> is a prime number (and does not divide <span class="math-container">$730$</span>), so the ratio won't reduce further.</p>
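The reduction is just division by $\gcd(844, 2920) = 4$; in code:

```python
from math import gcd

def reduce_ratio(a, b):
    """Reduce the ratio a:b to lowest terms."""
    g = gcd(a, b)
    return a // g, b // g

print(reduce_ratio(844, 2920))  # (211, 730)
```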
2,516,123
<p>Problem 11985, by Donald Knuth, <em>American Mathematical Monthly</em>, June-July, 2017:</p> <blockquote> <p>For fixed $s,t \in \mathbb{N}$ with $s\leq t$, let $a_{n}=\sum\limits_{k=s}^{t}\binom{n}{k}$. Prove that this sequence is log-concave, namely that $a_{n}^{2}\geq a_{n-1}a_{n+1} \ \forall n\geq 1$. </p> </blockquote> <p>The submission deadline for this problem passed on 31 October. Does this statement follow from some well-known results?</p>
epi163sqrt
132,007
<p>This answer is based upon a result stated as example 1.3 in the paper <em><a href="https://arxiv.org/abs/math/0504164" rel="nofollow noreferrer">Log-concavity and LC-positivity</a></em> by Yi Wang and Yeong-Nan Yeh.</p> <blockquote> <p>In the following we consider natural numbers <span class="math-container">$0\leq s\leq t,\,0\leq n$</span> and use the convention <span class="math-container">$\binom{n}{k}=0$</span> if <span class="math-container">$k&gt;n$</span> or <span class="math-container">$k&lt;0$</span>. We obtain using <span class="math-container">$\binom{n}{k}=\binom{n-1}{k}+\binom{n-1}{k-1}$</span></p> </blockquote> <blockquote> <p><span class="math-container">\begin{align*} \color{blue}{a_n}&amp;=\sum_{k=s}^t\binom{n}{k}\\ &amp;=\sum_{k=s}^t\binom{n-1}{k}+\sum_{k=s}^t\binom{n-1}{k-1}\\ &amp;=\sum_{k=s}^t\binom{n-1}{k}+\sum_{k=s-1}^{t-1}\binom{n-1}{k}\tag{1}\\ &amp;\color{blue}{=2a_{n-1}+\binom{n-1}{s-1}-\binom{n-1}{t}}\tag{2} \end{align*}</span></p> </blockquote> <p><em>Comment:</em></p> <ul> <li>In (1) we shift the index of the right-hand sum to start with <span class="math-container">$k=s-1$</span> and collect in the next line equal terms to <span class="math-container">$2a_{n-1}$</span>.</li> </ul> <blockquote> <p>In order to show <span class="math-container">$a_{n-1}a_{n+1}\leq a_n^2$</span> we calculate <span class="math-container">\begin{align*} \color{blue}{a_n^2}&amp;\color{blue}{-a_{n-1}a_{n+1}}\\ &amp;=a_n\left(2a_{n-1}+\binom{n-1}{s-1}-\binom{n-1}{t}\right) -a_{n-1}\left(2a_n+\binom{n}{s-1}-\binom{n}{t}\right)\tag{3}\\ &amp;=\left[\binom{n-1}{s-1}a_n-\binom{n}{s-1}a_{n-1}\right] -\left[\binom{n-1}{t}a_n-\binom{n}{t}a_{n-1}\right]\\ &amp;=\sum_{k=s}^t\left[\binom{n-1}{s-1}\binom{n}{k}-\binom{n}{s-1}\binom{n-1}{k}\right]\\ &amp;\qquad-\sum_{k=s}^t\left[\binom{n-1}{t}\binom{n}{k}-\binom{n}{t}\binom{n-1}{k}\right]\\ &amp;=\sum_{k=s}^t\left[\binom{n-1}{s-1}\binom{n-1}{k-1}-\binom{n-1}{s-2}\binom{n-1}{k}\right]\\ 
&amp;\qquad-\sum_{k=s}^t\left[\binom{n-1}{t}\binom{n-1}{k-1}-\binom{n-1}{t-1}\binom{n-1}{k}\right]\tag{4}\\ &amp;\color{blue}{=\sum_{k=s}^t\left[\binom{n-1}{s-1}\binom{n-1}{k-1}-\binom{n-1}{s-2}\binom{n-1}{k}\right]}\\ &amp;\qquad\color{blue}{+\sum_{k=s}^t\left[\binom{n-1}{k}\binom{n-1}{t-1}-\binom{n-1}{k-1}\binom{n-1}{t}\right]}\tag{5} \end{align*}</span></p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>In (3) we replace one factor <span class="math-container">$a_n$</span> and <span class="math-container">$a_{n+1}$</span> by the identity stated in (2).</p> </li> <li><p>In (4) we use again in both sums the binomial identity <span class="math-container">$\binom{n}{k}=\binom{n-1}{k-1}+\binom{n-1}{k}$</span> twice and cancel terms.</p> </li> <li><p>In (5) we do a simple rearrangement, nothing else.</p> </li> </ul> <blockquote> <p>We now take a closer look at the summands of the first sum in (5) <span class="math-container">\begin{align*} \sum_{k=s}^t\left[\color{blue}{\binom{n-1}{s-1}\binom{n-1}{k-1}-\binom{n-1}{s-2}\binom{n-1}{k}}\right]\tag{6} \end{align*}</span> It is well-known that the binomial coefficients <span class="math-container">$\binom{n}{k}$</span> are log-concave in <span class="math-container">$k$</span> (<span class="math-container">$n$</span> fix). Furthermore the following is valid: A sequence <span class="math-container">$x_k$</span> is log-concave if and only if <span class="math-container">\begin{align*} x_{i-1}x_{j+1}\leq x_ix_j\qquad\qquad \text{for all }j\geq i\geq 1 \end{align*}</span> This is also stated in the referred paper in the introduction right at the beginning.</p> </blockquote> <blockquote> <p><strong>Conclusion:</strong> From this we conclude the summands in (6) are all non-negative and therefore the sum is non-negative. 
The same arguments hold also for the second sum in (5) and so the claim follows.</p> </blockquote> <p><strong>Note:</strong> The following papers might be interesting:</p> <ul> <li><p><em><a href="https://math.mit.edu/%7Erstan/pubs/pubfiles/72.pdf" rel="nofollow noreferrer">Log-Concave and Unimodal Sequences in Algebra, Combinatorics and Geometry</a></em> by R.P. Stanley</p> </li> <li><p><em><a href="https://arxiv.org/pdf/math/0602672.pdf" rel="nofollow noreferrer">On the log-convexity of combinatorial sequences</a></em> by L.L. Liu and Y. Wang</p> </li> </ul>
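Both the recurrence $(2)$ and the log-concavity claim are easy to test exhaustively for small parameters (again a sanity check, not a proof), using the same convention $\binom{n}{k}=0$ for $k<0$ or $k>n$:

```python
from math import comb

def binom(n, k):
    # comb with the convention binom(n, k) = 0 for k < 0 or k > n
    return comb(n, k) if 0 <= k <= n else 0

def a(n, s, t):
    return sum(binom(n, k) for k in range(s, t + 1))

ok_recurrence = all(
    a(n, s, t) == 2 * a(n - 1, s, t) + binom(n - 1, s - 1) - binom(n - 1, t)
    for s in range(6) for t in range(s, 7) for n in range(1, 25))
ok_logconcave = all(
    a(n, s, t) ** 2 >= a(n - 1, s, t) * a(n + 1, s, t)
    for s in range(6) for t in range(s, 7) for n in range(1, 25))
print(ok_recurrence, ok_logconcave)  # True True
```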
1,022,098
<p>Determine whether the series </p> <p>$\sum _{n=1}^{\infty }\:\frac{n^2-5n}{\sqrt{n^7+2n+1}}$</p> <p>is convergent or divergent.</p> <p>So far in class I've learned a lot of different tests to use, but I'm having trouble deciding which test would be most practical for this problem. The ratio test would be too complicated, so I was thinking of the comparison test. Usually with this test it is easy to see what to compare to: </p> <p>$\:0\le \frac{n^2-5n}{\sqrt{n^7+2n+1}}\le \frac{n^2}{\sqrt{n^7}}$</p> <p>This is what I would assume, because I know I want to keep the 'key terms', but at the same time I've never done a comparison test with a square root in the denominator.</p> <p>Taking $\frac{n^2}{\sqrt{n^7}}$ =$\frac{n^2}{n^{\frac{7}{2}}}$ = $\frac{1}{n^{\frac{3}{2}}}$ </p> <p>$\:\sum _{n=1}^{\infty }\:\frac{1}{n^{\frac{3}{2}}}$ is a p-series with p = $\frac{3}{2} &gt; 1$, therefore convergent,</p> <p>and by the Comparison Test</p> <p>$\sum _{n=1}^{\infty }\:\frac{n^2-5n}{\sqrt{n^7+2n+1}}$ is also convergent.</p>
Stella Biderman
123,230
<p>Sets are defined by their elements. If you list out the elements, it immediately forms a set by the rules of ZFC (which is the axiomatic definition of how one can construct sets). This also works in infinite cases; for example, $\{1,2,\ldots\}$ defines a set as well, with the minor caveat of exceptionally large classes of objects (there is a sense in which things can be "too big" to be a set, but that's not going to be relevant at your level; it's only mentioned for completeness).</p>
1,022,098
<p>Determine whether the series </p> <p>$\sum _{n=1}^{\infty }\:\frac{n^2-5n}{\sqrt{n^7+2n+1}}$</p> <p>is convergent or divergent.</p> <p>So far in class I've learned a lot of different tests to use, but I'm having trouble deciding which test would be most practical for this problem. The ratio test would be too complicated, so I was thinking of the comparison test. Usually with this test it is easy to see what to compare to: </p> <p>$\:0\le \frac{n^2-5n}{\sqrt{n^7+2n+1}}\le \frac{n^2}{\sqrt{n^7}}$</p> <p>This is what I would assume, because I know I want to keep the 'key terms', but at the same time I've never done a comparison test with a square root in the denominator.</p> <p>Taking $\frac{n^2}{\sqrt{n^7}}$ =$\frac{n^2}{n^{\frac{7}{2}}}$ = $\frac{1}{n^{\frac{3}{2}}}$ </p> <p>$\:\sum _{n=1}^{\infty }\:\frac{1}{n^{\frac{3}{2}}}$ is a p-series with p = $\frac{3}{2} &gt; 1$, therefore convergent,</p> <p>and by the Comparison Test</p> <p>$\sum _{n=1}^{\infty }\:\frac{n^2-5n}{\sqrt{n^7+2n+1}}$ is also convergent.</p>
hmakholm left over Monica
14,366
<p>I assume that $\langle ~,~\rangle$ means an ordered pair.</p> <p><strong>No</strong>, in standard set theory there is no set which has everything of the form $\langle A,\varnothing\rangle$ as elements.</p> <p>Namely, if such a thing existed -- let's call it $X$ -- we could make a variant of Russell's paradox, by considering the subset $$ Y = \{ \langle A,B\rangle \in X \mid \langle A,B\rangle \notin A \} $$ Now $\langle Y,\varnothing\rangle$ is in $Y$ if and only if $\langle Y,\varnothing\rangle$ isn't in $Y$, which is impossible. So $X$ cannot have existed.</p> <hr> <p>Alternatively, if we're using Kuratowski pairs, we can also just say that $\bigcup \bigcup X$ would be a set of all sets, which is known to be impossible by the standard Russell paradox.</p>
2,710,681
<p>If I have a function of three variables and I want to create a new function in which it equals the other function squared, could I literally just square the other function or does this violate any rules? Would this also mean its gradient vector is just squared at a certain point?</p>
Community
-1
<blockquote> <p>could I literally just square the other function </p> </blockquote> <p>Yes. The square of $f(x,y,z)$ is $[f(x,y,z)]^2$. No violations. For example, if $f(x,y,z) = xyz$, then the square of this function is $g(x,y,z) = (xyz)^2 = x^2y^2z^2$.</p> <blockquote> <p>Would this also mean its gradient vector is just squared at a certain point?</p> </blockquote> <p>There may, coincidentally, be specific points $(a,b,c)$ for which $\nabla g(a,b,c)$ is the "square" of $\nabla f(a,b,c)$, i.e., each component of $\nabla g(a,b,c)$ is the square of the corresponding component of $\nabla f(a,b,c)$. But it's not true everywhere.</p> <p>For the $f$ and $g$ given above, it works trivially for $a=b=c=0$ and it doesn't work for $a=b=c=1$.</p>
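To make the second point concrete, here is a quick numeric check (a sketch in Python, using the answer's example $f(x,y,z)=xyz$; by the chain rule the true gradient of $g=f^2$ is $2f\,\nabla f$, which is why the componentwise square only matches at special points):

```python
# Numeric check using the answer's example f(x, y, z) = x*y*z and g = f**2.
# By the chain rule, grad(g) = 2*f*grad(f), NOT the componentwise square of grad(f).

def grad_f(x, y, z):
    # gradient of f(x, y, z) = x*y*z
    return (y * z, x * z, x * y)

def grad_g(x, y, z):
    # gradient of g = f**2, via the chain rule: 2*f*grad(f)
    f = x * y * z
    return tuple(2 * f * c for c in grad_f(x, y, z))

def squared(vec):
    # componentwise square of a vector
    return tuple(c ** 2 for c in vec)

# At the origin the two coincidentally agree ...
assert grad_g(0, 0, 0) == squared(grad_f(0, 0, 0))
# ... but at (1, 1, 1) they do not: grad_g = (2, 2, 2), squares = (1, 1, 1).
assert grad_g(1, 1, 1) == (2, 2, 2)
assert grad_g(1, 1, 1) != squared(grad_f(1, 1, 1))
```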
76,683
<p>How do I force Mathematica to display the below expression as a sum <code>a+b</code> with a scaling factor of <code>1/r</code>?</p> <p>(a+b)/r</p> <p>I would like Mathematica to display (1/r) (a+b), i.e. I want it to show 1/r as a scaling factor. </p> <p>Currently, it shows (a+b)/r , with r as a common denominator. </p>
Themis
7,863
<p>Here is something close to what you want</p> <pre><code>expr = (a + b)/r; expr Denominator[expr] HoldForm[1/Denominator[expr] // Evaluate] </code></pre> <p><a href="https://i.stack.imgur.com/O2EHD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O2EHD.png" alt="output"></a></p>
476,899
<p>Does someone know a proof that $\{1,e,e^2,e^3\}$ is linearly independent over $\mathbb{Q}$?</p> <p>The proof should not use that $e$ is transcendental.</p> <p>$e:$ Euler's number.</p> <p><a href="http://paramanands.blogspot.com/2013/03/proof-that-e-is-not-a-quadratic-irrationality.html#.Uhv87tJFUnl">$\{1,e,e^2\}$ is linearly independent over $\mathbb{Q}$</a></p> <p>Any hints would be appreciated.</p>
Git Gud
55,235
<p>Below is my attempt which is too long for a comment and may be saveable, (doubt it).</p> <p>Consider the differential equation $y^{(4)}-6y^{(3)}+11y''-6y'=\textbf 0$, where $\bf 0$ is the null function over some non-trivial interval $I$ containing $1$.</p> <p>The theory of ODE tells us that a basis of <a href="http://www.wolframalpha.com/input/?i=y%27%27%27%27-6y%27%27%27%2B11y%27%27-6y%27%3D0" rel="nofollow">solutions</a> is $$\{\underbrace{x\mapsto 1}_{\large \varphi_0}, \underbrace{x\mapsto e^x}_{\large \varphi _1}, \underbrace{x\mapsto e^{2x}}_{\large \varphi _2}, \underbrace{x\mapsto e^{3x}}_{\large \varphi _3}\}$$</p> <p>This implies that $$(\forall \lambda _0,\lambda _1, \lambda _2, \lambda _3\in \Bbb Q)\left[(\forall x\in I)\left(\sum \limits_{k=0}^3\lambda_k\varphi_k(x)=0\right)\implies \lambda _0=\lambda _1=\lambda _2=\lambda _3=0\right] \tag {*}$$</p> <p>Now if we could somehow prove that $(*)$ would also hold for the intersection of all such intervals $I$ (containing $1$), what we want would follow. But I have no hope of this being doable.</p>
3,016,252
<p>A function like <span class="math-container">$f(x) = 2x$</span> can be defined over the reals so its “type signature” or in set theory domain and codomain is <span class="math-container">$f: \mathbb{R} \rightarrow \mathbb{R}$</span>. </p> <p>I want to define a function <span class="math-container">$f(x) = 5$</span> (or some other constant number) and restrict the codomain/return type to be a constant value. What is the (most restrictive) type signature of such a function? Does this require dependent types? Is it <span class="math-container">$f: \mathbb{R} \rightarrow 5$</span>? I want to know the type signature for that specific function that returns <span class="math-container">$5$</span> and the more general notion of a function that returns a constant value that does not depend on the input argument. </p> <p>My motivation is that I want to be able to describe in a precise way functions that do not use/“delete” their input argument and return some other constant value. Eventually, I want to formalize this in a proof assistant. </p>
Hans Hüttel
289,137
<p>Define the type <span class="math-container">$\mathsf{Five}$</span> as the singleton type populated by the type axiom <span class="math-container">$E \vdash 5 : \mathsf{Five}$</span>. Then the type of <span class="math-container">$f$</span> is <span class="math-container">$f : \mathbb{R} \rightarrow \mathsf{Five}$</span>.</p>
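As a loose practical analogue of the singleton type above (not the type theory itself): several mainstream languages can express it directly as a literal type. A sketch using Python's `typing.Literal` — the names `Five` and `f` are illustrative, and the constraint is enforced by a static checker such as mypy, not at runtime:

```python
from typing import Literal

Five = Literal[5]  # a type inhabited only by the literal value 5

def f(x: float) -> Five:
    # the only return value a static type checker will accept here is 5
    return 5

# the result does not depend on the argument
assert f(3.14) == 5
assert f(-100.0) == 5
```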
3,016,252
<p>A function like <span class="math-container">$f(x) = 2x$</span> can be defined over the reals so its “type signature” or in set theory domain and codomain is <span class="math-container">$f: \mathbb{R} \rightarrow \mathbb{R}$</span>. </p> <p>I want to define a function <span class="math-container">$f(x) = 5$</span> (or some other constant number) and restrict the codomain/return type to be a constant value. What is the (most restrictive) type signature of such a function? Does this require dependent types? Is it <span class="math-container">$f: \mathbb{R} \rightarrow 5$</span>? I want to know the type signature for that specific function that returns <span class="math-container">$5$</span> and the more general notion of a function that returns a constant value that does not depend on the input argument. </p> <p>My motivation is that I want to be able to describe in a precise way functions that do not use/“delete” their input argument and return some other constant value. Eventually, I want to formalize this in a proof assistant. </p>
Rob Arthan
23,171
<p>The answer to this really depends on your type theory. The type theories of proof assistants like Coq or Agda will let you express any of the following classes as types:</p> <p><span class="math-container">$$ \{f : \Bbb{R} \to \Bbb{R} \mid \forall x:\Bbb{R}\cdot f(x) = 5\} \\ \{f : \Bbb{R} \to \Bbb{R} \mid \exists c:\Bbb{R}\cdot \forall x:\Bbb{R}\cdot f(x) = c\} \\ \bigcup_Y \{f : \Bbb{R} \to Y \mid \exists c:Y\cdot \forall x:\Bbb{R}\cdot f(x) = c\} \\ \bigcup_{X, Y} \{f : X \to Y \mid \exists c:Y\cdot \forall x:X\cdot f(x) = c\} $$</span> and many variations on this kind of theme. Weaker type theories might not be able to deal with some of the above.</p>
4,051,403
<p>I'm not a math major, but a philosophy major that likes to know that he knows what he's talking about. This may seem like a super stupid question, but here I go.</p> <p>So Euclid made a lot of sense when he gave the example of the nature of multiplication. For example. &quot;2 x 3&quot; is really 2 added to itself 3 times, and vice versa, and this is different from 2 + 3, which is five, two units added to three.</p> <p>But, what about division?</p> <p>It seems to be the opposite of multiplication like a mirror image from 3rd grade, but I'm starting to doubt that after digging deeper.</p> <p>For example, <a href="https://www.mathwarehouse.com/dictionary/D-words/definition-of-divide.php" rel="nofollow noreferrer">this site</a> gave the following diagram for division: <a href="https://i.stack.imgur.com/KzDiq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KzDiq.png" alt="enter image description here" /></a></p> <p>Based on this image, for 6/3, division seems to be the counting of groups, not the inverse of multiplication according to Euclid.</p> <p>Can someone explain the true nature of division in light of the euclid example I gave?</p>
JJacquelin
108,514
<p>There were two mistakes in the equation : <span class="math-container">$$y^2=\lambda x^\color{red}{2}-\frac{x^4}{\color{red}{2} }+C$$</span> ( Corrected in red ).</p> <p>Then : <span class="math-container">$$y=x'=\pm \sqrt{\lambda x^2-\frac{x^4}{2}+C}$$</span> <span class="math-container">$$\frac{dx}{dt}=\pm \sqrt{\lambda x^2-\frac{x^4}{2 }+C}$$</span> <span class="math-container">$$t(x)=\pm \int \frac{dx}{\sqrt{\lambda x^2-\frac{x^4}{2 }+C}}$$</span> This is an elliptic integral.</p> <p>The inverse function <span class="math-container">$x(t)$</span> involves the Jacobi elliptic functions.</p>
88,565
<p>Today I had an argument with my math teacher at school. We were answering some simple True/False questions and one of the questions was the following:</p> <p><span class="math-container">$$x^2\ne x\implies x\ne 1$$</span></p> <p>I immediately answered true, but for some reason, everyone (including my classmates and math teacher) is disagreeing with me. According to them, when <span class="math-container">$x^2$</span> is not equal to <span class="math-container">$x$</span>, <span class="math-container">$x$</span> also can't be <span class="math-container">$0$</span> and because <span class="math-container">$0$</span> isn't excluded as a possible value of <span class="math-container">$x$</span>, the sentence is false. After hours, I am still unable to understand this ridiculously simple implication. I can't believe I'm stuck with something so simple.<br><br> <strong>Why I think the logical sentence above is true:</strong><br> My understanding of the implication symbol <span class="math-container">$\implies$</span> is the following: If the left part is true, then the right part must be also true. If the left part is false, then nothing is said about the right part. In the right part of this specific implication nothing is said about whether <span class="math-container">$x$</span> can be <span class="math-container">$0$</span>. Maybe <span class="math-container">$x$</span> can't be <span class="math-container">$-\pi i$</span> too, but as I see it, it doesn't really matter, as long as <span class="math-container">$x \ne 1$</span> holds. And it always holds when <span class="math-container">$x^2 \ne x$</span>, therefore the sentence is true.</p> <h3>TL;DR:</h3> <p><strong><span class="math-container">$x^2 \ne x \implies x \ne 1$</span>: Is this sentence true or false, and why?</strong></p> <p>Sorry for bothering such an amazing community with such a simple question, but I had to ask someone.</p>
Colton Fitzjarrald
1,086,061
<p>If you'd like a more informal understanding of why the statement given is indeed true, consider the argument of the proposition. It's purely stating that if <span class="math-container">$x^2 \neq x$</span>, then <span class="math-container">$x \neq 1.$</span><br /> This statement is true. It is not claiming that <span class="math-container">$1$</span> is the only value that <span class="math-container">$x$</span> cannot take.</p>
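A mechanical way to see the same thing: encode material implication and check it over a sample of values (a small Python sketch; the sample is arbitrary, since $1^2 = 1$ makes the implication hold for every real $x$):

```python
# Brute-force check of "x**2 != x  implies  x != 1" on sample values.
# Material implication: (P -> Q) is False only when P is True and Q is False.

def implies(p: bool, q: bool) -> bool:
    return (not p) or q

samples = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0, 3.0]
assert all(implies(x * x != x, x != 1) for x in samples)

# The CONVERSE "x != 1 implies x**2 != x" is what fails, e.g. at x = 0:
assert not implies(0 != 1, 0 * 0 != 0)
```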
2,757,870
<p>I've come to this: $$f: \mathbb{N} \to \{\ldots, -6,-4,-2,0,2,4,6, \ldots\},\qquad f(n) = \begin{cases} 2n &amp; \text{ if } n \text{ is odd} \\ -n &amp; \text{ if } n \text{ is even} \end{cases}$$</p> <p>I don't know what to do with this though. I never know how to format a proof correctly.</p>
feynhat
359,886
<p>As I pointed out in the comments, the function that you described is not surjective because $4$ (or any multiple of $4$) has no inverse image.</p> <p>Assuming you define the set of even integers as, $\mathbb{E} = 2\mathbb{Z} = \{\dots, -6, -4, -2, 0, 2, 4, 6, \dots\}$</p> <p>and the set of naturals as, $\mathbb{N} = \{0, 1, 2, \dots\}$.</p> <p>Define $f: \mathbb{N} \to \mathbb{E}$ as $f(n) = \begin{cases} -n, &amp; \text{if $n$ is even} \\ n+1, &amp; \text{if $n$ is odd} \end{cases}$</p> <p>You need to show that it is one-one and onto. Note that, if, $f(n_1) = f(n_2)$, then either, $n_1+1 = n_2+1$ or $-n_1 = -n_2$, and in both the cases, we get, $n_1 = n_2$. Thus, $f$ is one-one.</p> <p>To prove its onto-ness, take any element $n$ from the codomain $\mathbb{E}$. Then note that if $n \leq 0$, then $f(-n) = n$ and if $n &gt; 0$, then, $f(n-1) = n$. Since, every element in the codomain has an inverse image, $f$ is onto.</p> <p>So, we conclude that $\mathbb{E}$ is countable.</p> <p>Note that, to prove countability of a set $S$, it suffices to prove existence of a one-one map from $S$ to $\mathbb{N}$ (or any other countable set), or an onto map from $\mathbb{N}$ (or any other countable set) to $S$. Bijections are not necessarily required. </p>
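The injectivity and surjectivity arguments can also be replayed mechanically on a finite window (a Python sketch; the window size `N` is arbitrary):

```python
# f: N -> evens, with f(n) = -n for even n and f(n) = n + 1 for odd n.
def f(n: int) -> int:
    return -n if n % 2 == 0 else n + 1

N = 1000
image = [f(n) for n in range(N)]

# injective on the window: no two naturals map to the same even integer
assert len(set(image)) == N

# the image of {0, ..., 999} is exactly the even integers in [-998, 1000]
assert set(image) == set(range(-998, 1001, 2))
```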
3,415,624
<p>I want to show that the sequence given in the title is convergent and find its limit. I'm not sure if I should use the monotone convergence theorem, because when I try using induction, I don't seem to get anywhere. And I also don't know how to find a suitable candidate for a limit. I do know what the definition of convergence is, though.</p> <p>Any help is much appreciated.</p> <p>Thanks in advance. </p>
Marios Gretsas
359,315
<p><span class="math-container">$$0 \leq a_n \leq \frac{\sqrt[n]{n^3}}{n} \to 0 $$</span></p> <p>since <span class="math-container">$\sqrt[n]{n^3}\to 1$</span></p>
3,935,494
<p>Usually the inverse of a square <span class="math-container">$n \times n$</span> matrix <span class="math-container">$A$</span> is defined as a matrix <span class="math-container">$A'$</span> such that:</p> <p><span class="math-container">$A \cdot A' = A' \cdot A = E$</span></p> <p>where <span class="math-container">$E$</span> is the identity matrix.</p> <p>From this definition they prove uniqueness but using significantly the fact that <span class="math-container">$A'$</span> is both right and left inverse.</p> <p>But what if... we define right and left inverse matrices separately. Can we then prove that:</p> <p>(1) the right inverse is unique (when it exists)<br /> (2) the left inverse is unique (when it exists)<br /> (3) the right inverse equals the left one</p> <p>I mean the usual definition seems too strong to me. Why is the inverse introduced this way? Is it because if the inverse is introduced the way I mention, these three statements cannot be proven?</p>
TheSimpliFire
471,884
<p>Let <span class="math-container">$X\in\{0,1,2,3\}$</span> be the number of boys. Let <span class="math-container">$A$</span> be the event that <span class="math-container">$X=3$</span> and <span class="math-container">$B$</span> be the event that <span class="math-container">$X\ge1$</span>. By definition, we have <span class="math-container">${\rm P}(A\cap B)={\rm P}(A\mid B){\rm P}(B)$</span> so <span class="math-container">$${\rm P}(X=3\mid X\ge1)=\frac{{\rm P}(X=3\cap X\ge1)}{{\rm P}(X\ge1)}=\frac{{\rm P}(X=3)}{{\rm P}(X\ge1)}.$$</span> You have calculated <span class="math-container">${\rm P}(X\ge1)$</span> in the first part and evaluating <span class="math-container">${\rm P}(X=3)$</span> is straightforward.</p>
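For the classical version of this problem with three children, each independently a boy with probability $1/2$ (an assumption; the question's numbers may differ), the conditional probability can be checked by brute enumeration:

```python
from itertools import product
from fractions import Fraction

# Enumerate the 8 equally likely boy/girl patterns for three children,
# assuming each child is a boy with probability 1/2, independently.
outcomes = list(product(["B", "G"], repeat=3))
at_least_one = [o for o in outcomes if "B" in o]          # event X >= 1
all_boys = [o for o in outcomes if o.count("B") == 3]     # event X = 3

# P(X = 3 | X >= 1) = P(X = 3) / P(X >= 1) = (1/8) / (7/8) = 1/7
p = Fraction(len(all_boys), len(at_least_one))
assert p == Fraction(1, 7)
```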
80,918
<p>Could anyone please tell me what the math function could be to get the number of zeros in the decimal representations of a given range of numbers? I scratched my head over combinations and permutations but couldn't come up with a generic answer. The numbers can be up to 1000 digits long, so you can represent a number as a String.</p> <p>For example, if the numbers range is $1-100$, the answer should be $11$; for $1-200$, it's $31$; and so on! Now, how would you find the total number of zeros between $1-19447494833737292827272\cdots 444$ ( or any big number)?</p> <p>Thanks.</p>
Adam Boddington
103,086
<p>There's a nice algorithm for doing this calculation which is explained <a href="http://blog.codility.com/2011/12/mu-2011-certificate-solution.html" rel="nofollow">here</a>.</p> <p>If $x$ is the number we're given, $f(x)$ is the number of zeros that appear in the range $1..x$. Using a simple program we can calculate $f(x)$ for some small values to spot a pattern.</p> <pre><code>public int CountZerosInRangeOneTo(int end) { return Enumerable.Range(1, end) .Select(i =&gt; i.ToString()) .SelectMany(s =&gt; s.ToCharArray()) .Count(c =&gt; c == '0'); } </code></pre> <p>For example:</p> <pre><code>f(5 ) = 0 f(52 ) = 5 f(523 ) = 102 f(5237 ) = 1543 f(52378) = 20667 </code></pre> <p>If $y$ is the new single digit number added to the end each time, it would appear the following is true:</p> <p>$f(10x + y) = 10 \cdot f(x) + x$</p> <p>For example, if $x = 523$, $y = 7$, and $f(x) = 102$, then:</p> <p>$f(10 \cdot 523 + 7) = f(5237) = 10 \cdot f(x) + x = 10 \cdot 102 + 523 = 1543$</p> <p>Fantastic. However, where this breaks down is when $x$ contains zeros itself. For example:</p> <pre><code>f(3 ) = 0 f(30 ) = 3 correct f(302 ) = 53 incorrect, expected 60, a difference of 7, or 9 - y f(3020 ) = 823 incorrect, expected 832, a difference of 9, or 9 - y f(30207) = 11246 incorrect, expected 11250, a difference of 4, or 2 * (9 - y) </code></pre> <p>If $g(x)$ is the number of zeros in the number $x$, then we can modify our formula like this:</p> <p>$f(10x + y) = 10 \cdot f(x) + x - g(x) \cdot (9 - y)$</p> <p>And that makes sense. If $x = 3020$ and $y = 7$ and not $9$, then that is two less numbers on the end of the sequence with two zeros each.</p> <p>So how does this formula help? Well for a very large $x$ with thousands of digits we can go left to right through each digit and calculate $f(x)$ as we go. 
Here's a sample C# program to do just that.</p> <pre><code>public long CountZerosInRangeOneTo(string end) { const long Modulus = 1000000007; // any sufficiently large modulus long x = 0; long fx = 0; int gx = 0; foreach (char c in end) { int y = int.Parse(new string(c, 1)); // Our formula fx = 10 * fx + x - gx * (9 - y); fx += Modulus; // Avoid negatives fx %= Modulus; // Now calculate the new x and g(x) x = 10 * x + y; x %= Modulus; if (y == 0) gx++; } return fx; } </code></pre> <p>The limitation (in C# anyway) is $x$ and $f(x)$ will get very large, so the program will have to calculate the result modulo some number (here <code>Modulus</code>), or else use a non-native integral representation.</p>
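The recurrence is also easy to cross-check against brute force in a few lines (a Python sketch of the same left-to-right scan; no modular reduction is needed since Python integers are arbitrary precision):

```python
def count_zeros_upto(n: int) -> int:
    # brute force: total '0' digits in the decimal forms of 1..n
    return sum(str(i).count("0") for i in range(1, n + 1))

def count_zeros_fast(digits: str) -> int:
    # left-to-right digit scan using f(10x + y) = 10*f(x) + x - g(x)*(9 - y),
    # where g(x) is the number of zero digits in x itself
    x = fx = gx = 0
    for c in digits:
        y = int(c)
        fx = 10 * fx + x - gx * (9 - y)
        x = 10 * x + y
        if y == 0:
            gx += 1
    return fx

# agrees with brute force on the worked examples from the answer
for n in (5, 52, 523, 5237, 52378, 3, 30, 302, 3020, 30207):
    assert count_zeros_fast(str(n)) == count_zeros_upto(n)
```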
137,136
<p><a href="https://i.stack.imgur.com/tk4kk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tk4kk.png" alt="Mathcad example of solving coil impedance"></a></p> <p>I am new to Mathematica.</p> <p>I am trying to figure out how to do the same thing in Mathematica as the image depicts done in Mathcad.</p> <p>Declaring the variables with units I figured out.</p> <pre><code>U = Quantity[100, "Volts"]; f = Quantity[50, "Hz"]; R1 = Quantity[20, "Ohms"]; L1 = Quantity[0.005, "Henries"]; XL1 = UnitSimplify[2 π f L1] </code></pre> <p>I am stuck on how define Z1 with the angle.</p>
WReach
142
<p>The problem is due to two factors:</p> <ol> <li>The type system cannot infer the data type that results from a call to <code>MinMax</code>.</li> <li>A type-checked application of the <code>Transpose</code> query operator will fail when applied to an argument with an unknown type (i.e. from <code>MinMax</code>).</li> </ol> <p>I think that point #1 could be considered a bug, although the fact of the matter is that a great many operators are not (yet) known to the type system. Given these omissions, it might be reasonable to expect the <code>Transpose</code> operator to be more forgiving.</p> <p><strong>Analysis (current as of version 11.0.1)</strong></p> <p>When an operation succeeds outside of a <code>Dataset</code> but fails within, the cause is usually a type-inferencing failure. That is indeed the case here.</p> <p><a href="https://mathematica.stackexchange.com/a/89081/142">traceTypes</a> will reveal the failure:</p> <pre><code>traceTypes[Dataset[{{0, 10}, {2, 11}, {3, 12}}]][Transpose /* Map[MinMax] /* Transpose] </code></pre> <p><a href="https://i.stack.imgur.com/Pwymu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Pwymu.png" alt="traceTypes screenshot"></a></p> <p>We can see that the type machinery cannot infer the resultant type of applying <code>MinMax</code> to a list of integers:</p> <pre><code>Needs["TypeSystem`"] TypeApply[MinMax, {Vector[Atom[Integer], 3]}] (* UnknownType *) </code></pre> <p>Furthermore, a type-checked application of <code>AssociationTranspose</code> (the query-form of <code>Transpose</code>) will not accept a vector of unknown type:</p> <pre><code>Needs["GeneralUtilities`"] TypeApply[AssociationTranspose, {Vector[UnknownType, 2]}] (* FailureType[{Transpose, "nmtx"}, &lt;|"Arguments" -&gt; {__}|&gt;] *) </code></pre> <p>This last failure is the source of the message we see.</p> <p><strong>Work-around</strong></p> <p>As noted in the question, a work-around is to perform the final <code>Transpose</code> in a second 
query:</p> <pre><code>Dataset[{{0, 10}, {2, 11}, {3, 12}}][Transpose /* Map[MinMax]][Transpose] </code></pre> <p><a href="https://i.stack.imgur.com/fBnWt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fBnWt.png" alt="double-query dataset screenshot"></a></p> <p>When the final query result is wrapped back into a dataset, type <em>deduction</em> takes place which is more reliable than type <em>inferencing</em>. By splitting our query in two, we allow such type deduction to take place so that the final <code>Transpose</code> operator can be applied to a known type:</p> <pre><code>Needs["Dataset`"] Dataset[{{0, 10}, {2, 11}, {3, 12}}][Transpose /* Map[MinMax]] // GetType (* Vector[Vector[Atom[Integer], 2], 2] *) TypeApply[AssociationTranspose, {Vector[Vector[Atom[Integer], 2], 2]}] (* Vector[Vector[Atom[Integer], 2], 2] *) </code></pre>
564,378
<p>Suppose we start with a rational number $a_0$, and define $a_{n+1}=2a_n^2-1$ for $n\geq 0$. For what $a_0$ will it be the case that $a_i=a_j$ for some $i\neq j$?</p> <p>We can start with something like $a_0=1$, then $a_1=1$ so $a_0=a_1$.</p> <p>If $a_0=0$, we get $0, -1, 1, 1, \ldots$</p> <p>Likewise if $a_0=-1$, we get $-1,1,1,\ldots$.</p> <p>But how can we find all $a_0$?</p>
Sangchul Lee
9,340
<p>Let $a_{0} = \cos \theta$. Then it is easy to check that $a_{n} = \cos (2^{n}\theta)$. So if $a_{i} = a_{j}$ for some $i \neq j$, then we must have</p> <p>\begin{align*} \cos(2^{i}\theta) = \cos(2^{j}\theta) &amp;\quad \Longleftrightarrow \quad 2^{i}\theta = 2n\pi \pm 2^{j}\theta, \quad n \in \Bbb{Z} \\ &amp;\quad \Longleftrightarrow \quad \theta = \frac{2n\pi}{2^{i} \pm 2^{j}}, \quad n \in \Bbb{Z} \end{align*}</p> <p>Thus the problem reduces to find the condition of $(i, j, n, \pm)$ such that</p> <p>$$ \cos \left( \frac{2n\pi}{2^{i} \pm 2^{j}} \right) \in \Bbb{Q}. $$</p> <p>Referring to <a href="https://math.stackexchange.com/questions/49297/the-only-two-rational-values-for-cosine-and-their-connection-to-the-kummer-rings">this posting</a>, this is possible if and only if</p> <p>\begin{align*} \theta \equiv 0, \pm \frac{\pi}{3}, \pm \frac{\pi}{2}, \pm \frac{2\pi}{3} \pmod{2\pi} \end{align*}</p> <p>This corresponds to $a_{0} \in \{0, \pm \frac{1}{2}, \pm 1 \}$.</p>
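The dichotomy is easy to observe experimentally with exact rational arithmetic (a Python sketch; the step bounds are arbitrary): orbits started at $0, \pm\frac12, \pm1$ repeat almost immediately, while e.g. $a_0=\frac13$ produces reduced denominators $3^{2^n}$ that grow forever, so no repeat can occur.

```python
from fractions import Fraction

def orbit_repeats(a0: Fraction, steps: int) -> bool:
    # iterate a_{n+1} = 2*a_n**2 - 1 exactly, looking for a repeated value
    seen = {a0}
    a = a0
    for _ in range(steps):
        a = 2 * a * a - 1
        if a in seen:
            return True
        seen.add(a)
    return False

# the rational starting points found above all become eventually periodic
for a0 in (0, 1, -1, Fraction(1, 2), Fraction(-1, 2)):
    assert orbit_repeats(Fraction(a0), 10)

# a0 = 1/3 never repeats: denominators go 3, 9, 81, 6561, ...
assert not orbit_repeats(Fraction(1, 3), 10)
```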
3,527,919
<p>I've tried to prove this property of Bessel function but I don't seem to be going anywhere</p> <p><span class="math-container">$$\sqrt{\frac 12 \pi x} J_\frac 32 (x) = \cfrac{\sin x}{x} - \cos x$$</span></p> <p>I have tried substituting <span class="math-container">$\frac 32$</span> for <span class="math-container">$J_n (x)$</span> and then manipulating with the product but it doesn't seem to give me something similar with the series on my LHS. I don't know if there is another different approach which I must follow. </p>
Kemono Chen
521,015
<p>Using one of the integral representations of Bessel function: <span class="math-container">$$J_n(x)=\frac{2^{1-n} x^n \int_0^{\frac{\pi }{2}} \sin ^{2 n}t \cos (x \cos t) dt}{\sqrt{\pi } \Gamma \left(n+\frac{1}{2}\right)},$$</span> We get <span class="math-container">$$J_{3/2}(x)=\frac{x^{3/2}}{\sqrt{2\pi}}\int_0^{\pi/2}\sin^3t\cos(x\cos t)dt\\ =\frac{x^{3/2}}{\sqrt{2\pi}}\int_0^{\pi/2}(1-\cos^2t)\sin t\cos(x\cos t)dt\\ =\frac{x^{3/2}}{\sqrt{2\pi}}\int_0^1(1-t^2)\cos(xt)dt$$</span> Can you continue?</p>
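To continue: integrating by parts twice gives $\int_0^1(1-t^2)\cos(xt)\,dt = \frac{2\sin x}{x^3}-\frac{2\cos x}{x^2}$, and multiplying by $\sqrt{\frac12 \pi x}\cdot\frac{x^{3/2}}{\sqrt{2\pi}}=\frac{x^2}{2}$ yields exactly $\frac{\sin x}{x}-\cos x$. A numerical sanity check of that final identity (a Python sketch with a composite Simpson rule, standard library only):

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def lhs(x):
    # sqrt(pi*x/2) * J_{3/2}(x) via the integral representation;
    # the prefactors collapse to x**2 / 2
    integral = simpson(lambda t: (1 - t * t) * math.cos(x * t), 0.0, 1.0)
    return (x * x / 2) * integral

def rhs(x):
    return math.sin(x) / x - math.cos(x)

for x in (0.7, 1.0, 2.5, 5.0, 9.3):
    assert abs(lhs(x) - rhs(x)) < 1e-8
```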
1,931,754
<p>I am trying to show that the interval $[0,1)$ is a closed subset of $(-1,1)$ by using the definition that a closed subset contains all of its limit points. So for a convergent sequence $\{x_n\}$ in $[0,1)$ we have that $0 \leq x_{n} &lt; 1$ for all $n \in \mathbf{N}$. How can I show that $\lim_{n \rightarrow \infty}x_{n} = x$ implies that $x \in [0,1)$? I understand that 1 cannot be a limit point of $[0,1)$ since $1 \not\in (-1,1)$ but I'm having a hard time saying that in a formal way?</p>
John Doe
341,321
<p>By applying an exponential of a logarithm to the expression we get $$\lim_{x\to\infty} \left( 1+\log \left( \frac{x}{x-1} \right)\right)^x=\lim_{x\to\infty} \mathrm{exp}\left( x\log \left( 1+\log \left( \frac{x}{x-1}\right)\right)\right)$$</p> <p>We will show that $$L:=\lim_{x\to\infty} \left( x\log \left( 1 + \log \left( \frac{x}{x-1}\right) \right) \right)=1$$ and thus the result is $\lim_{x\to\infty} \mathrm{exp} (1)=\color{red} e$.</p> <p>$$L= \lim_{x\to\infty} \left( x\log \left( 1 + \log \left( \frac{x}{x-1}\right) \right) \right)=\lim_{x\to\infty} \left(\dfrac{\log \left( 1+\log\left( \frac{x}{x-1}\right)\right)}{\dfrac 1x}\right)$$ so by de l'Hospital's rule $$L=\lim_{x\to\infty} \left( \dfrac{-\dfrac{1}{x(x-1)\left( \log \left(\frac{x}{x-1} \right) + 1\right)}}{-\dfrac{1}{x^2}}\right)=\lim_{x\to\infty} \left( \dfrac{x}{(x-1)\left( \log \left( \frac{x}{x-1}\right) + 1\right)}\right)$$ Use de l'Hospital's rule again to get $$L=\lim_{x\to\infty} \left( \dfrac{1}{\log \left( \frac{x}{x-1}\right) +1 - \frac 1x}\right)=\dfrac{\lim_{x\to\infty} 1}{\lim_{x\to\infty} \left( \log \left( \frac{x}{x-1}\right) +1 - \frac 1x\right)}=\frac 1 1=1$$ as desired.</p>
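The limit can also be checked numerically (a Python sketch; the tolerances are deliberately loose, and the error behaves like $e/(6x^2)$):

```python
import math

# (1 + log(x/(x-1)))**x should approach e as x grows
def f(x: float) -> float:
    return (1.0 + math.log(x / (x - 1.0))) ** x

assert abs(f(10.0) - math.e) < 0.01
assert abs(f(1e6) - math.e) < 1e-6
# and the approximation improves as x grows
assert abs(f(1e6) - math.e) < abs(f(10.0) - math.e)
```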
1,355,133
<p>A while ago I asked a question about probability here <a href="https://math.stackexchange.com/questions/1353044/why-is-binomial-probability-used-here/">Why is binomial probability used here?</a></p> <p>I get that you can find how many ways of choosing the $6$ correct out of $10$ questions.</p> <p>But why do we <strong>multiply</strong> by $\binom{n}{k}$? </p> <p>I thought it is simple casework that:</p> <p>$$P(\text{total probability}) = P(\text{Q 1-&gt;6 right and 7-10 wrong} + P(\text{Q 1-&gt;5 right, 6 wrong, 7 right, 8-10 wrong}) + ...$$</p> <p>What is the idea? </p>
danimal
202,026
<p>$\binom{n}{k}$ is the number of ways of choosing $k$ things from $n$ things (without caring about the order). For example, to get an $x^2$ term from (the binomial) $(1+x)^3=(1+x)(1+x)(1+x)$, I have to choose two $x$s but am free to choose which two, so I have $\binom{3}{2} = 3$ choices for this - I can discard any of the three 1s to choose two $x$s, giving $3x^2$ overall</p>
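Why multiplying by $\binom{n}{k}$ collapses the question's casework can be replayed by brute force (a Python sketch; the per-question success probability `p` is an arbitrary illustrative value, not taken from the original problem):

```python
from itertools import product
from math import comb

# comb(10, 6) counts the orderings: the number of length-10 right/wrong
# patterns with exactly 6 rights.
patterns = sum(1 for bits in product((0, 1), repeat=10) if sum(bits) == 6)
assert patterns == comb(10, 6) == 210

# Each such pattern has the same probability p**6 * (1-p)**4, so the
# term-by-term casework sum collapses to comb(10, 6) * p**6 * (1-p)**4.
p = 0.25  # hypothetical per-question success probability
total = sum(
    p ** 6 * (1 - p) ** 4
    for bits in product((0, 1), repeat=10)
    if sum(bits) == 6
)
assert abs(total - comb(10, 6) * p ** 6 * (1 - p) ** 4) < 1e-12
```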
2,361,516
<p>Let $X$ be a real Hilbert space.Let $x,y \in X$ such that $\langle x,y\rangle &gt;0$. If $\alpha \geq 1$.</p> <p>I want to prove that $\Vert \alpha x-y \Vert \leq \Vert x-y \Vert$</p>
stity
285,341
<p>Inequality is wrong even under the stated hypotheses : for $x=y \ne 0$ and $\alpha = 2$ (so that $\langle x,y\rangle = ||x||^2 &gt; 0$ and $\alpha \ge 1$) you get $||\alpha x-y|| = ||x|| &gt; 0 = ||x-y||$</p>
2,361,516
<p>Let $X$ be a real Hilbert space.Let $x,y \in X$ such that $\langle x,y\rangle &gt;0$. If $\alpha \geq 1$.</p> <p>I want to prove that $\Vert \alpha x-y \Vert \leq \Vert x-y \Vert$</p>
tattwamasi amrutam
90,328
<p>The inequality doesn't hold true in general. What you can show (note the plus sign) is this: $$\|\alpha x+y \|^2=\langle \alpha x+y, \alpha x+y\rangle=|\alpha|^2\|x\|^2+2\alpha\langle x,y\rangle+\|y\|^2 \ge \|x+y\|^2$$</p>
475,863
<p>Let $f,P,Q$ be three analytic functions, where $P$ is a polynomial.</p> <p>I want to solve this equation: $$f(s)=P(s)\exp(Q(s)).$$</p> <p>The unknowns here are $P$ and $Q$; $f$ is known. </p>
Hagen von Eitzen
39,174
<p>Assume $f$ is entire and not the zero function. Since the exponential is never zero, $P$ should capture all zeroes of $f$. That is: $(s-s_0)$ is a factor of $P$ if and only if $f(s_0)=0$. More precisely, for $s\in\mathbb C$ let $$\nu(s)=\min\{\,n\in\mathbb N_0\mid f^{(n)}(s)\ne0\,\}.$$ Then we can (and up to a constant factor: <em>must</em>) let $$P(s)=\prod_{z\in\mathbb C}(s-z)^{\nu(z)}$$ and $$ Q(s)=\ln\left(\frac{f(s)}{P(s)}\right).$$ Several assumptions about $f$ are needed for this to work in the first place, for example $f$ must have only finitely many zeroes.</p>
1,406,878
<p>Given is the following sequence:</p> <p>$a_{n+1} = a_n - \frac{a_n - v}{s}$</p> <p>I found out that</p> <p>$\forall a_0, v, s \in \mathbb{R}, s&gt;0: \lim\limits_{n \to \infty}a_n=v$</p> <p>But I do not know why. I tried to write down $a_2$ , $a_3$, but the terms become very long and complex, and that doesn't help me to see why the limit is $v$.</p>
Augustin
241,520
<p>You can write $a_{n+1}=\left(1-\frac{1}{s}\right)a_n+\frac{v}{s}$.</p> <p>The form is $a_{n+1}=Aa_n+B$, with $A\neq 1$. Now let $r=\frac{B}{1-A}$. You can show that $b_n=a_n-r$ is a geometric sequence. So $b_n=b_0A^n$ and $a_n=r+(a_0-r)A^n$. I let you substitue with the values of $A$ and $B$, so you can find the general expression of $a_n$.</p>
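Plugging in $A = 1-\frac1s$ and $B=\frac vs$ gives $r=v$, so $a_n = v + (a_0-v)\left(1-\frac1s\right)^n$. One caveat to the question's claim: convergence to $v$ requires $\left|1-\frac1s\right|&lt;1$, i.e. $s&gt;\frac12$, not every $s&gt;0$. A numeric sketch (the specific values of $a_0$, $v$, $s$ are arbitrary):

```python
# Iterate a_{n+1} = a_n - (a_n - v)/s and compare with the closed form
# a_n = v + (a_0 - v) * A**n, where A = 1 - 1/s.
def iterate(a0, v, s, n):
    a = a0
    for _ in range(n):
        a = a - (a - v) / s
    return a

a0, v, s, n = 7.0, 3.0, 2.0, 60
A = 1.0 - 1.0 / s
closed = v + (a0 - v) * A ** n
assert abs(iterate(a0, v, s, n) - closed) < 1e-9
assert abs(iterate(a0, v, s, n) - v) < 1e-9   # converged to v, since |A| < 1

# a divergent choice of s in (0, 1/2): here A = -3, so |A| > 1
assert abs(iterate(a0, v, 0.25, 20) - v) > 1e6
```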
1,159,860
<p>If $$f:[a,b]\times [c,d] \to \mathbb{R}$$ is continuous and $f_{y}$ is continuous, let $$F(x,y)=\int_{a}^{x} f(t,y)dt.$$ </p> <ol> <li>Find $F_x$ and $F_y$.</li> <li>If $G(x)=\int_{a}^{g(x)}f(t,x)dt$, find $G'(x)$</li> </ol> <p>My try: </p> <p>For (1) $$F(x+h,y)-F(x,y)=\int_{a}^{x+h} f(t,y)dt-\int_{a}^{x}f(t,y)dt=\int_{x}^{x+h}f(t,y)dt$$ Let $\int f(t,y)dt= H(t,y)$. Then $\int_{x}^{x+h}f(t,y)dt=H(x+h,y)-H(x,y)$. Hence $$F_x= \frac {\partial H(x,y)}{\partial x} $$. I am kind of stuck here.</p> <p>$F_{y}$ is easy. $$F(x,y+h)-F(x,y)=\int_{a}^{x}\left(f(t,y+h)-f(t,y)\right) dt$$ Hence $$F_{y}=\int_{a}^{x}\frac{\partial f}{\partial y} dt $$</p> <p>For (2) we have $$G(x+h)-G(x)=\int_{a}^{g(x+h)}f(t,x+h)dt-\int_{a}^{g(x)} f(t,x)dt$$ $$=\int_{a}^{g(x+h)}f(t,x+h)dt-\int_{a}^{g(x)} f(t,x+h) dt+ \int_{a}^{g(x)} f(t,x+h) dt-\int_{a}^{g(x)} f(t,x)dt$$ $$=\int_{g(x)}^{g(x+h)} f(t,x+h) dt+\int_{a}^{g(x)} \left(f(t,x+h)-f(t,x)\right)dt$$</p> <p>If I let $\int f(t,x+h) dt= H(t,x+h)$. Then we have $$ \frac{G(x+h)-G(x)}{h}=\frac{H(g(x+h),x+h)-H(g(x),x+h)}{g(x+h)-g(x)}\cdot \frac{g(x+h)-g(x)}{h}+\int_{a}^{g(x)}\frac{ \left(f(t,x+h)-f(t,x)\right)}{h}dt$$</p> <p>Taking the limit on both sides we have $$G'(x)=H'(g(x),x)\cdot g'(x)+\int_{a}^{g(x)}\frac{ \partial f}{\partial x}dt$$</p> <p>From here, how do I get the result?</p> <p>Thanks for the help!!</p>
RE60K
67,609
<p>$$\newcommand{\d}[1]{{\rm #1}} \d P(\d A-\d B)=\d P(\d A)-\d P(\d A\d B)=\d P(\d A)-\d P(\d A)\d P(\d B)=a-ab$$ Since for independant events: $\d P(\d A\d B)=\d P(\d A)\d P(\d B)$</p>
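Both the independence step $\mathrm P(\mathrm A\mathrm B)=\mathrm P(\mathrm A)\mathrm P(\mathrm B)$ and the result $a-ab$ can be checked exactly on a product probability space (a Python sketch with arbitrary illustrative values of $a$ and $b$):

```python
from fractions import Fraction

a, b = Fraction(1, 3), Fraction(1, 4)   # P(A) and P(B), A and B independent

# four outcomes (in A?, in B?) with product-measure weights
weights = {(i, j): (a if i else 1 - a) * (b if j else 1 - b)
           for i in (0, 1) for j in (0, 1)}
assert sum(weights.values()) == 1       # a genuine probability measure

# P(A - B) is the weight of "in A, not in B"
p_A_minus_B = weights[(1, 0)]
assert p_A_minus_B == a - a * b
```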
1,535,731
<p>I am currently self-learning about the matrix exponential and found that determining the matrix exponential of a diagonalizable matrix is pretty straight forward :).</p> <p>I do not, however, know how to find the exponential matrix of a non-diagonalizable matrix.</p> <p>For example, consider the matrix $$\begin{bmatrix}1 &amp; 0 \\ 1 &amp; 1\end{bmatrix}$$</p> <p>Is there any process of finding the exponential matrix of a non-diagonalizable matrix? If so, can someone please show me an example of the process? :). I am not looking for an answer of the above mentioned matrix (since I just made it up), but rather I'm interested in the actual method of finding the matrix exponential to apply to other examples :)</p>
amd
265,466
<p>In the $2\times2$ case, you can find the exponential of a matrix $A$ without having to decompose it into $BMB^{-1}$ form. In particular, you only need the eigenvalues—you don’t need to find any eigenvectors. There are three cases, as follows.</p> <hr> <p><strong>Distinct Real Eigenvalues:</strong> Let $P_1 = (A-\lambda_2I)/(\lambda_1-\lambda_2)$ and $P_2 = (A-\lambda_1I)/(\lambda_2-\lambda_1)$, where $\lambda_1,\lambda_2$ are the eigenvalues. These two matrices are projections onto the eigenspaces corresponding to $\lambda_1$ and $\lambda_2$, respectively. They have some handy properties: $P_1P_2=P_2P_1=0$, $P_1+P_2=I$, and $A=\lambda_1P_1+\lambda_2P_2$. Using these projections, $$\exp(tA)=\exp(t\lambda_1P_1+t\lambda_2P_2)=\mathrm e^{\lambda_1t}P_1+\mathrm e^{\lambda_2t}P_2.$$ </p> <hr> <p><strong>Repeated Eigenvalue:</strong> Let $G=A-\lambda I$, where $\lambda$ is the eigenvalue. By the Cayley-Hamilton theorem, $(A-\lambda I)^2=0$, so $G$ is nilpotent. Since $G$ and $\lambda I$ commute, $\exp(tA)=\exp(\lambda tI)\exp(tG)$, but $\exp(tG)=I+tG$ (expand using the power series), so $$\exp(tA) = \mathrm e^{\lambda t}(I+tG).$$ </p> <hr> <p><strong>Complex Eigenvalues:</strong> The eigenvalues are of the form $\lambda=\alpha\pm\mathrm i\beta$, and the characteristic equation is $(\lambda-\alpha)^2+\beta^2=0$. By the Cayley-Hamilton theorem, $(A-\alpha I)^2+\beta^2I=0$, so the traceless matrix $G=A-\alpha I$ satisfies $G^2=-\beta^2 I$. Writing $A=\alpha I+G$, we have $\exp(tA)=\exp(\alpha I)\exp(tG)$. Expanding the latter as a power series and using the above equality, $$ \exp(tG) = I+tG-\frac{\beta^2t^2}{2!}I-\frac{\beta^2t^3}{3!}G+\frac{\beta^4t^4}{4!}I+\dots. $$ The coefficient of $I$ is the power series for $\cos{\beta t}$, while the coefficient of $G$ is the power series for $(\sin{\beta t})/\beta$. Thus, $$ \exp(tG) = \cos{\beta t}\;I+[(\sin{\beta t})/\beta]\;G \\ \text{and} \\ \exp(tA) = \mathrm e^{\alpha t}\exp(tG). $$</p>
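For the question's own example $A=\begin{bmatrix}1 &amp; 0\\ 1 &amp; 1\end{bmatrix}$, the repeated-eigenvalue case applies: $\lambda=1$, $G=A-I$ is nilpotent, so $\exp(A)=e\,(I+G)$. A sketch verifying this against a truncated power series (pure Python, no libraries):

```python
import math

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

def expm_series(A, terms=30):
    # brute-force truncation of the power series sum_k A**k / k!
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = mat_scale(1.0 / k, mat_mul(term, A))
        result = mat_add(result, term)
    return result

A = [[1.0, 0.0], [1.0, 1.0]]
G = [[0.0, 0.0], [1.0, 0.0]]                # G = A - I, and G @ G = 0
closed = mat_scale(math.e, mat_add([[1.0, 0.0], [0.0, 1.0]], G))  # e*(I+G)
series = expm_series(A)
assert all(abs(closed[i][j] - series[i][j]) < 1e-9
           for i in range(2) for j in range(2))
```

The closed form predicts $\exp(A)=\begin{bmatrix}e &amp; 0\\ e &amp; e\end{bmatrix}$, and the series agrees to within truncation error.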
2,346,489
<p>I am studying the Proposition: Let $D$ be a Dedekind domain, $F$ its field of fractions, $E$ a finite dimensional extension field of $F$ and $D'$ the subring of $E$ consisting of the elements integral over $D$. Assume that $E/F$ is a finite separable field extension. Then $D'$ is a finitely generated $D$-module.</p> <p>I have to show that $D'$ is Noetherian. It is clear from the Proposition that $D'$ is a Noetherian $D$-module. Why does it follow that $D'$ is Noetherian as a $D'$-module?</p> <p>Would you help me, please? Thank you in advance.</p>
D_S
28,556
<p>Every ideal of $D'$ is a $D$-submodule of $D'$, so if you have an ascending chain </p> <p>$$I_1 \subseteq I_2 \subseteq \cdots$$</p> <p>of ideals of $D'$, then it must terminate, because $D'$ is a Noetherian $D$-module.</p>
446,966
<p>If a, b, c and d are real numbers, I would probably consider the following expressions equivalent.</p> <p>$$a = b \cdot c \cdot d$$ $$a = (b \cdot c) \cdot d$$ $$a = b\cdot(c\cdot d)$$</p> <p>If a, b, c, and d are four matrices, then the order is most definitely right to left, like so:</p> <p>$$a = b \cdot c \cdot d$$ $$a = b \cdot (c \cdot d)$$</p> <p>Since a real number is like a 1x1 matrix, is the first example an <i>exception to the rule</i> or am I <i>incorrect</i> in my first statement about the order of real number multiplication?</p> <p>Probably should mention, err, it doesn't actually matter, since the answer is the same, but it is an interesting aspect to theory which I wanted to know.</p>
amWhy
9,003
<p>Actually, matrix multiplication <strong>is associative</strong>, just as is scalar multiplication. </p> <p>(Recall the properties of <a href="https://en.wikipedia.org/wiki/Matrix_multiplication#Properties_of_matrix_multiplication" rel="nofollow">matrix multiplication</a>.)</p> <p>So for $A$, expressed as the product of three matrices $B, C, D$ of appropriate dimension (i.e., such that matrix multiplication $BC$ and $CD$ is defined), we have that $$A = B\times (C\times D) = (B\times C) \times D$$ This means that we can indeed multiply two matrices: $B \times C$, and then right multiply this product by $D$ to obtain precisely the same matrix $A$ that we'd get if we perform $A = B\times (C \times D)$.</p> <p>Remark: It is not the case that matrix multiplication is commutative, though, as is scalar multiplication of real numbers. </p>
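A two-line check with NumPy (used here purely for illustration) makes the contrast above concrete: grouping never changes the product, but order can.

```python
import numpy as np

B = np.array([[1, 2], [3, 4]])
C = np.array([[0, 1], [1, 0]])
D = np.array([[2, 0], [1, 1]])

# Associative: the grouping does not matter.
assert np.array_equal((B @ C) @ D, B @ (C @ D))

# Not commutative: the order does matter.
assert not np.array_equal(B @ C, C @ B)
```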
446,966
<p>If a, b, c and d are real numbers, I would probably consider the following expressions equivalent.</p> <p>$$a = b \cdot c \cdot d$$ $$a = (b \cdot c) \cdot d$$ $$a = b\cdot(c\cdot d)$$</p> <p>If a, b, c, and d are four matrices, then the order is most definitely right to left, like so:</p> <p>$$a = b \cdot c \cdot d$$ $$a = b \cdot (c \cdot d)$$</p> <p>Since a real number is like a 1x1 matrix, is the first example an <i>exception to the rule</i> or am I <i>incorrect</i> in my first statement about the order of real number multiplication?</p> <p>Probably should mention, err, it doesn't actually matter, since the answer is the same, but it is an interesting aspect to theory which I wanted to know.</p>
Ben Grossmann
81,360
<p>Matrix multiplication is associative, so $(AB)C=A(BC)$ for compatible matrices $A,B,C$. It is not, on the other hand, commutative. That is, $AB\neq BA$ (assuming the latter multiplication can even be carried out).</p>
1,672,847
<p>A stick of total length $1$ is split at a randomly selected point $X$, i.e. $X$ is uniformly distributed in the interval $[0, 1]$.</p> <p>Determine the expected length of the piece that contains the point $1/3$.</p> <p>I've figured out so far that I need to determine a function $f(x)$ so that the length of the piece will be equal to $L=f(x)$, but I don't know how to work from here?</p>
deanavery
418,076
<p>Because of uniform, $X$ falls to the left of $1/3$ with probability $1/3$. The length of the piece that includes $1/3$ in this case is of length $1 - X$, which can range from $2/3$ to $1$. In expectation this length is $5/6$.</p> <p>With probability $2/3$, $X$ falls to the right of $1/3$. The length of the piece that includes $1/3$ is now of length $X$, ranging from $1/3$ to $1$, with an expected value of $2/3$.</p> <p>So the answer you're looking for is $1/3 (5/6) + 2/3 (2/3) = 13/18$.</p> <p>More generally, the formula is $x \frac{(1 + 1 - x)}{2} + (1-x) \frac{1 + x}{2} = x - x^2 + 1/2$ for any $x \in [0,1]$.</p>
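A quick Monte Carlo simulation (an illustration, not part of the original argument) agrees with the $13/18$ computed above:

```python
import random

random.seed(0)
N = 200_000
total = 0.0
for _ in range(N):
    x = random.random()              # break point, uniform on [0, 1]
    # the piece containing the point 1/3 is [x, 1] if x < 1/3, else [0, x]
    total += (1 - x) if x < 1/3 else x

print(total / N)   # close to 13/18 = 0.7222...
```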
3,258,642
<blockquote> <p>If the roots of the quadratic equation <span class="math-container">$$x^2 - 2ax + a^2 + a - 3 = 0$$</span> are real and less than <span class="math-container">$3$</span>, find the range of <span class="math-container">$a$</span>.</p> </blockquote> <p>The roots are <span class="math-container">$a \pm \sqrt {3 - a}$</span></p> <p>For the roots to be real, we must have <span class="math-container">$a \lt 3$</span>.</p> <p>Also, for the roots to be less than <span class="math-container">$3$</span>, we must have <span class="math-container">$\pm \sqrt {3 - a } \lt 3 - a $</span></p> <p>If squaring both sides is allowable, I will get <span class="math-container">$(a - 2)(a - 3) &gt; 0$</span>. Then the problem is solved.</p> <p>The question is:- how to convince others that the squaring of both sides of <span class="math-container">$\pm \sqrt {3 - a } \lt 3 - a $</span> is allowable?</p>
user10354138
592,552
<p>You can square both sides of an inequality <span class="math-container">$x&lt;y$</span> to get <span class="math-container">$x^2&lt;y^2$</span>, provided <span class="math-container">$x\geq 0$</span>.</p>
3,258,642
<blockquote> <p>If the roots of the quadratic equation <span class="math-container">$$x^2 - 2ax + a^2 + a - 3 = 0$$</span> are real and less than <span class="math-container">$3$</span>, find the range of <span class="math-container">$a$</span>.</p> </blockquote> <p>The roots are <span class="math-container">$a \pm \sqrt {3 - a}$</span></p> <p>For the roots to be real, we must have <span class="math-container">$a \lt 3$</span>.</p> <p>Also, for the roots to be less than <span class="math-container">$3$</span>, we must have <span class="math-container">$\pm \sqrt {3 - a } \lt 3 - a $</span></p> <p>If squaring both sides is allowable, I will get <span class="math-container">$(a - 2)(a - 3) &gt; 0$</span>. Then the problem is solved.</p> <p>The question is:- how to convince others that the squaring of both sides of <span class="math-container">$\pm \sqrt {3 - a } \lt 3 - a $</span> is allowable?</p>
auscrypt
675,509
<p>It's not always allowable; take <span class="math-container">$a = 2.99$</span>, then <span class="math-container">$-\sqrt{3-a} = -0.1 &lt; 0.01 = 3-a$</span>. But squaring both sides gives <span class="math-container">$0.01&lt;0.0001$</span>, which is false. The issue is that negative signs cause issues when you square, since the absolute value of the left side may be larger than the right.</p> <p>To solve this problem, note that both roots are real when <span class="math-container">$a\le 3$</span>, and note that the largest root, <span class="math-container">$a + \sqrt{3-a}$</span> is less than <span class="math-container">$3$</span>. So <span class="math-container">$a + \sqrt{3-a} &lt; 3$</span>, and so <span class="math-container">$\sqrt{3-a} &lt; 3-a$</span>. Here, we are allowed to square, because there is no negative sign, which is the key. Then you solve the quadratic, which is straightforward.</p>
3,258,642
<blockquote> <p>If the roots of the quadratic equation <span class="math-container">$$x^2 - 2ax + a^2 + a - 3 = 0$$</span> are real and less than <span class="math-container">$3$</span>, find the range of <span class="math-container">$a$</span>.</p> </blockquote> <p>The roots are <span class="math-container">$a \pm \sqrt {3 - a}$</span></p> <p>For the roots to be real, we must have <span class="math-container">$a \lt 3$</span>.</p> <p>Also, for the roots to be less than <span class="math-container">$3$</span>, we must have <span class="math-container">$\pm \sqrt {3 - a } \lt 3 - a $</span></p> <p>If squaring both sides is allowable, I will get <span class="math-container">$(a - 2)(a - 3) &gt; 0$</span>. Then the problem is solved.</p> <p>The question is:- how to convince others that the squaring of both sides of <span class="math-container">$\pm \sqrt {3 - a } \lt 3 - a $</span> is allowable?</p>
Peter Szilas
408,605
<p><span class="math-container">$y=(x-a)^2+(a-3)$</span>;</p> <p><span class="math-container">$a-3 &gt;0:$</span> This parabola does not cut the <span class="math-container">$x-$</span>axis (<span class="math-container">$(x-a)^\ge 0)$</span>.</p> <p>Hence <span class="math-container">$a-3\le 0$</span>.</p> <p>Roots:</p> <p><span class="math-container">$y=(x-a)^2 +(a-3) = 0;$</span></p> <p><span class="math-container">$x_{1,2}=a\pm \sqrt{3-a} &lt;3$</span>.</p> <p>1)Let <span class="math-container">$0\le a \lt 3$</span> (<span class="math-container">$a=3$</span> is ruled out)</p> <p><span class="math-container">$\sqrt{3-a} &lt;3-a=\sqrt{3-a}\sqrt{3-a}$</span>.</p> <p>This is true for <span class="math-container">$\sqrt{3-a} &gt;1$</span>, or <span class="math-container">$ 3-a &gt;1$</span>.</p> <p>Hence <span class="math-container">$2&gt;a \ge 0$</span>.</p> <p>2) Let <span class="math-container">$a &lt;0$</span>;</p> <p>Then <span class="math-container">$-|a| +\sqrt{3+|a|} &lt;3$</span>.</p> <p><span class="math-container">$\sqrt{3+|a|} &lt;3+|a|=\sqrt{3+|a|}\sqrt{3+|a|}.$</span></p> <p>This is true for <span class="math-container">$\sqrt{3+|a|} &gt;1$</span>, or <span class="math-container">$3+|a|&gt;1$</span>.</p> <p>Hence <span class="math-container">$a&lt;0$</span>;</p> <p>Combining:</p> <p><span class="math-container">$a \in (-\infty, 2)$</span></p>
3,275,732
<p>How can I solve it without using matrix? I tried it to solve by using systems. But I have no idea how deal with "<span class="math-container">$0$</span>"</p>
miky
685,127
<p>You can substitute every point into the equation and solve a system of four equations in the four unknowns (a, b, c, d), one equation for each point. The system is <span class="math-container">$$d=1$$</span> <span class="math-container">$$-a+b-c+d=-2$$</span> <span class="math-container">$$a+b+c+d=2$$</span> <span class="math-container">$$8a+4b+2c+d=9$$</span></p> <p>So the point with <span class="math-container">$0$</span> tells you that <span class="math-container">$d=1$</span>. From this point you can use any method you want to solve it. (Putting it into a linear system solver, I find that <span class="math-container">$a=\frac{4}{3}$</span>, <span class="math-container">$b=-1$</span>, <span class="math-container">$c=\frac{2}{3}$</span> and <span class="math-container">$d=1$</span>.)</p>
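For reference, here is the same system handed to a linear solver (NumPy is an illustrative choice of solver):

```python
import numpy as np

# Rows come from substituting the points (0,1), (-1,-2), (1,2), (2,9)
# into y = a x^3 + b x^2 + c x + d.
M = np.array([[ 0.0, 0.0, 0.0, 1.0],
              [-1.0, 1.0,-1.0, 1.0],
              [ 1.0, 1.0, 1.0, 1.0],
              [ 8.0, 4.0, 2.0, 1.0]])
y = np.array([1.0, -2.0, 2.0, 9.0])

a, b, c, d = np.linalg.solve(M, y)
assert np.allclose([a, b, c, d], [4/3, -1, 2/3, 1])
```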
3,275,732
<p>How can I solve it without using matrix? I tried it to solve by using systems. But I have no idea how deal with "<span class="math-container">$0$</span>"</p>
xrfxlp
678,937
<p>Given the equation <span class="math-container">$$y = ax^3 + bx^2 + cx + d,$$</span> substituting the points <span class="math-container">$(0,1),(-1,-2),(1,2),(2,9)$</span> yields the following system of equations: <span class="math-container">$$\begin{align} d &amp;=1\\ -a+b-c+d &amp;=-2\\a+b+c+d &amp;=2\\8a+4b+2c+d &amp;=9.\end{align}$$</span></p> <p>Adding <span class="math-container">$(2)$</span> and <span class="math-container">$(3)$</span> and using <span class="math-container">$(1)$</span> gives <span class="math-container">$ 2b + 2 = 0 \Rightarrow b = -1$</span> </p> <p>Multiplying <span class="math-container">$(3)$</span> by <span class="math-container">$8$</span>, subtracting it from <span class="math-container">$(4)$</span>, and using the values of <span class="math-container">$b$</span> and <span class="math-container">$d$</span> gives <span class="math-container">$c = 2/3$</span>; finally, placing all these values in <span class="math-container">$(2)$</span> gives <span class="math-container">$a = 4/3$</span></p> <p>Hence, the values are <span class="math-container">$ a = 4/3, b = -1 , c = 2/3$</span> and <span class="math-container">$d = 1$</span></p>
2,801,936
<p>To me, it seems obvious that the binary quadratic form $x^2+8y^2$ does not properly represent 3. However, I have managed to prove that it does so I think I must be doing something stupid. I have used the following:</p> <p><strong>Let f be a a binary quadratic form and n an integer. We say that f <em>properly represents</em> n if there exists [x,y]∈$\mathbb Z^2$ such that (x,y)=1 and f(x,y)=n. (x,y) is defined as the greatest common divisor of x and y.</strong></p> <p><strong>Lemma 5.3 (iii) Some form of discriminant d properly represents n if and only if $u^2\equiv d\pmod {4n}$</strong></p> <p>Then here is my 'proof':</p> <p>Let $f(x,y)=x^2+8y^2$. Then, by Lemma 5.3(iii), $f(x,y)$ properly represents n=3 if and only if there is a solution to $u^2\equiv d\pmod{4\cdot3}$</p> <p>$d=b^2-4ac=0^2-4\cdot1\cdot8=-32$ so</p> <p>$u^2\equiv -32\pmod{12}$ which gives us</p> <p>$u^2\equiv 4\pmod{12}$</p> <p>which clearly has the solution $u=2$ so $f(x,y)$ properly represents n=3.</p> <p>I know that this is wrong and it's very likely wrong for a stupid reason but I can't figure out what that is so any help would be appreciated.</p>
Robert Z
299,698
<p>Yes, you are correct so far. Now by the multiplicative property of the <a href="https://en.wikipedia.org/wiki/M%C3%B6bius_function" rel="nofollow noreferrer">Mobius function</a> and its definition, it follows that $$\begin{align*}\sum_{0\leq a_i\leq e_i, \forall i}\mu (p_1^{a_1}\cdots p_k^{a_k})\cdot p_1^{a_1}\cdots p_k^{a_k}&amp;= \sum_{0\leq a_i\leq e_i, \forall i}\mu (p_1^{a_1})\cdots \mu(p_k^{a_k})\cdot p_1^{a_1}\cdots p_k^{a_k}\\&amp;= \sum_{0\leq a_i\leq 1, \forall i}(-1)^{a_1}\cdots (-1)^{a_k}\cdot p_1^{a_1}\cdots p_k^{a_k}\\&amp;= \sum_{0\leq a_1\leq 1}(-p_1)^{a_1}\cdots\sum_{0\leq a_k\leq 1} (-p_k)^{a_k}\\&amp;=(1-p_1)\cdots(1-p_k)\\ &amp;=(-1)^k\cdot (p_1-1)\cdots (p_k-1).\end{align*}$$</p>
2,558,870
<p>Suppose $f:[0,1]\to \mathbb{R}$ is uniformly continuous, and $(p_n)_{n\in\mathbb{N}}$ is a sequence of polynomial functions converging uniformly to $f$.</p> <p>Does it follow that $\mathcal{F}=\{p_n\mid n\in\mathbb{N}\}\cup \{f\}$ is equicontinuous?</p> <p>Also, if $C_n$ are the Lipschitz constants of the polynomials $p_n$, does it follow that $C_n&lt;\infty$ for all $n$, and $\lim_{n\to\infty} C_n=\infty?$</p> <p>I'm preparing for a test, but I'm not sure how to go about answering these two question. Any hints or tips as to what to look for would be appreciated. </p>
GNUSupporter 8964民主女神 地下教會
290,189
<p>$$Av=\lambda v \\ \iff (A-pI)v=(\lambda-p)v \\ \iff (A-pI)^{-1}v=(\lambda-p)^{-1}v $$ Since $p$ is not an eigenvalue of $A$, $A-pI$ is invertible. We assume $(\lambda,v)$ an eigenpair of $A$ on the top, and $v$ to be an eigenvector at the bottom.</p>
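The chain of equivalences is easy to check numerically; the matrix and shift below are arbitrary illustrative choices:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 5.0]])        # eigenvalues 3 and 5
p = 2.0                           # p is not an eigenvalue of A

shifted_inv = np.linalg.inv(A - p * np.eye(2))
eig_A = np.sort(np.linalg.eigvals(A))
eig_inv = np.sort(np.linalg.eigvals(shifted_inv))

# eigenvalues of (A - pI)^{-1} are exactly (lambda - p)^{-1}
assert np.allclose(eig_inv, np.sort(1 / (eig_A - p)))
```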
3,997,968
<p>I'm trying to figure out how to get the point x = 3. What's given are the points S and G (assuming the two angles are equal). <a href="https://i.stack.imgur.com/COFMn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/COFMn.png" alt="enter image description here" /></a></p> <p>Apparently, we can pretend the ball does not bounce off the x-axis: instead, we replace the target by its mirror image across the x-axis, so the bounce point is the intersection of the x-axis with the line connecting the current point and the reflected target.</p> <p>The formula to get x = 3 is below:</p> <p><span class="math-container">$$\frac{S_x G_y + G_x S_y}{S_y+G_y}$$</span></p> <p>Is there any explanation for this formula?</p>
Mick
42,351
<p>If <span class="math-container">$G = (G_x, G_y)$</span>, then <span class="math-container">$G' = (G_x, ??)$</span>.</p> <p>Next, can you use two-point form to write the equation of SG'?</p> <p><span class="math-container">$ P(P_x, 0)$</span> is a point on SG'.</p> <p>Putting the above together with S = (1, 1) and G=(7,2) , you will get <span class="math-container">$P_x = 3$</span>.</p>
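Carrying the hint through (with the reflected target taken as $G' = (G_x, -G_y)$, which fills in the blank above), the two-point form of line $SG'$ gives the stated formula; a small sketch of the computation:

```python
def bounce_x(S, G):
    # x-intercept of the segment from S to the reflection G' = (G_x, -G_y):
    # the line hits y = 0 at parameter t = S_y / (S_y + G_y), which simplifies
    # to (S_x*G_y + G_x*S_y) / (S_y + G_y).
    Sx, Sy = S
    Gx, Gy = G
    return (Sx * Gy + Gx * Sy) / (Sy + Gy)

P = bounce_x((1, 1), (7, 2))
assert P == 3.0

# equal bounce angles: the rise/run ratio matches on both sides of P
assert 1 / (P - 1) == 2 / (7 - P)
```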
4,381,145
<blockquote> <p>Show that the three vector fields <span class="math-container">$X = y\frac{\partial}{\partial z}-z\frac{\partial}{\partial y}, Y = z\frac{\partial}{\partial x}-x\frac{\partial}{\partial z}$</span> and <span class="math-container">$Z=x\frac{\partial}{\partial y}-y\frac{\partial}{\partial x}$</span> on <span class="math-container">$\Bbb R^3$</span> are tangent to the <span class="math-container">$2$</span>-sphere <span class="math-container">$\Bbb S^2$</span>.</p> </blockquote> <p>I have a definition that a vector field is tangent to submanifold <span class="math-container">$S \subseteq M$</span> if for all <span class="math-container">$p \in S$</span> the tangent vector <span class="math-container">$X_p$</span> is in <span class="math-container">$T_pS \subseteq T_pM$</span>.</p> <p>I don't really know how to approach the problem. Why is <span class="math-container">$X_p$</span> neccessarily a tangent vector? Using coordinate charts it seems that <span class="math-container">$X_p$</span> is of form <span class="math-container">$$X_p = \sum_{i=1}^n X^i(p) \frac{\partial}{\partial x^i} \bigg|_p$$</span> but what is this <span class="math-container">$X^i(p)$</span> here?</p> <p>John Lee's book also suggests that <span class="math-container">$X$</span> is tangent to a submanifold <span class="math-container">$S$</span> if and only if <span class="math-container">$(Xf) \mid_S = 0$</span> for every <span class="math-container">$f \in C^\infty(M)$</span> such that <span class="math-container">$f\mid_S \equiv 0$</span>.</p> <p>Can I use either one of these definitions here?</p>
Aaron
9,863
<p>Here is a hint: The sphere is the collection of points <span class="math-container">$(x,y,z)$</span> such that <span class="math-container">$f(x,y,z)=0$</span> where <span class="math-container">$f(x,y,z)=x^2+y^2+z^2-1$</span>. Following Lee's suggestion, a necessary condition to be tangent is for <span class="math-container">$Xf|_S=0$</span>. This can be verified for these 3 vector fields. Can you show that this is sufficient?</p>
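Following the hint, $Xf$ (and likewise $Yf$, $Zf$) can be computed symbolically; in fact each vanishes identically on all of $\Bbb R^3$, so in particular on the sphere. A SymPy sketch:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 + y**2 + z**2 - 1          # the sphere S^2 is the zero set of f

# Each field as its (d/dx, d/dy, d/dz) coefficient triple
X = (0, -z, y)                      # y d/dz - z d/dy
Y = (z, 0, -x)                      # z d/dx - x d/dz
Z = (-y, x, 0)                      # x d/dy - y d/dx

for V in (X, Y, Z):
    Vf = sum(c * sp.diff(f, var) for c, var in zip(V, (x, y, z)))
    assert sp.expand(Vf) == 0       # vanishes identically, hence on the sphere
```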
503,358
<p>I remember I saw this question somewhere in Lang's undergraduate real analysis.</p> <blockquote> <p>Given any real number $\ge0$, show that it has a square root.</p> </blockquote>
AlexR
86,940
<p>This means to show that for $a\geq 0$, the polynomial $x^2-a$ has at least one real root. Choose $x_0 := 1$ and use <a href="http://en.wikipedia.org/wiki/Newton%27s_method">Newton's method</a> with $$x_{n+1} = x_n - \frac{x_n^2 - a}{2x_n}$$ Then $(x_n)$ is Cauchy, and since $\mathbb R$ is complete, it converges to some $x \geq 0$. Passing to the limit gives $$\lim_{n\to\infty} x_n^2 = a,$$ so $x^2 = a$ and $x = \sqrt{a} \in \mathbb R$.</p>
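A numerical sketch of the iteration (illustration only; the argument above is about exact real numbers, not floats):

```python
def newton_sqrt(a, tol=1e-12):
    # x_{n+1} = x_n - (x_n^2 - a) / (2 x_n), starting from x_0 = 1
    x = 1.0
    while abs(x * x - a) > tol:
        x = x - (x * x - a) / (2 * x)
    return x

assert abs(newton_sqrt(2.0) - 2.0**0.5) < 1e-9
assert abs(newton_sqrt(9.0) - 3.0) < 1e-9
```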
982,780
<p>I have the following system of <span class="math-container">$M$</span> linear equations in <span class="math-container">$N$</span> unknowns.</p> <p><span class="math-container">$$ \begin{bmatrix} 3 &amp; 0 &amp; 1 &amp; 0 &amp; -1 &amp; -3 &amp; 2\\ 1 &amp; 2 &amp; 0 &amp; 4 &amp; 0 &amp; 0 &amp; -1\\ 1 &amp; 1 &amp; 0 &amp; 0 &amp; -1 &amp; -1 &amp; -2\\ 0 &amp; 0 &amp; 1 &amp; 0 &amp; -3 &amp; -1 &amp; 1 \\ \end{bmatrix} \begin{bmatrix} x_{1}\\ x_{2}\\ x_{3}\\ x_{4} \\ x_{5} \\ x_{6} \\ x_{7} \\ \end{bmatrix} = \begin{bmatrix} 1\\ 0\\ 0\\ -1\\ \end{bmatrix}$$</span></p> <p>Is there any algorithm for finding solutions of these equations with <span class="math-container">${x_{i} \ge 0}$</span>?</p> <p><strong>Comment</strong>: I just want that <span class="math-container">$x_i \ge 0$</span>.</p> <p>The augmented matrix can be row-reduced to</p> <p><span class="math-container">$$ \begin{bmatrix} 1 &amp; 0 &amp; 0 &amp; 0 &amp; 2/3 &amp; -2/3 &amp; 1/3 &amp; 2/3\\ 0 &amp; 1 &amp; 0 &amp; 0 &amp; -5/3 &amp; -1/3 &amp; -7/3 &amp; -2/3 \\ 0 &amp; 0 &amp; 1 &amp; 0 &amp; -3 &amp; -1 &amp; 1 &amp; -1 \\ 0 &amp; 0 &amp; 0 &amp; 1 &amp; 2/3 &amp; 1/3 &amp; 5/6 &amp; 1/6 \\ \end{bmatrix} $$</span></p>
Asaf Karagila
622
<p>This is quite similar to Andreas Blass' answer, but as a positive proof instead of a negative proof. I will use standard ordinals notation, so $0$ is the least element, and a successor ordinal means a point that has a predecessor in the order. I will also use $(x,y)$ to denote the open interval from $x$ to $y$.</p> <p>Suppose that $\mathcal U$ is an open cover of $[0,\eta]$.</p> <p>Let $\alpha_0=\eta$. There is at least one $U_0\in\mathcal U$ such that $\alpha_0\in U_0$. Since $U_0$ can be written as the union of open intervals, either $U_0=[0,\eta]$ (which is open, as it's the entire space), or there is a least $\alpha_1\notin U_0$ such that $(\alpha_1,\eta]\subseteq U_0$. Let $U_1$ be some open set in $\mathcal U$ such that $\alpha_1\in U_1$. By induction define $\alpha_{n+1}$ the least ordinal such that $\alpha_{n+1}\notin\bigcup_{i\leq n}U_i$ but $(\alpha_{n+1},\eta]\subseteq\bigcup_{i\leq n}U_n$. If there is no such ordinal we halt.</p> <p>Now we have that $\ldots&lt;\alpha_n&lt;\ldots&lt;\alpha_1&lt;\alpha_0$. This is a decreasing sequence of ordinals, so it has to be finite. Therefore there must be some $k$ such that $\bigcup_{i\leq k}U_i=[0,\eta]$ as wanted.</p>
3,078,097
<blockquote> <p>Why is it impossible to split the natural numbers into two sets <span class="math-container">$A$</span> and <span class="math-container">$B$</span> such that for distinct elements <span class="math-container">$m, n \in A$</span> we have <span class="math-container">$m + n \in B$</span> and vice-versa?</p> </blockquote> <p>Also, does vice-versa mean that there are distinct elements such that <span class="math-container">$x + y \in A$</span>? </p> <p>How do I show the proof?</p>
Stockfish
362,664
<p>Vice versa means that for distinct <span class="math-container">$m,n \in B$</span>, <span class="math-container">$m+n \in A$</span>.</p> <p>I guess you have to work through some cases.</p> <p>For instance, suppose that <span class="math-container">$1 \in A$</span> and <span class="math-container">$2 \in A$</span>. Then <span class="math-container">$3 \in B$</span>.</p> <ul> <li>If <span class="math-container">$4 \in A$</span>, we have <span class="math-container">$5, 6 \in B$</span>, i.e. <span class="math-container">$8, 9 \in A$</span>, which is a contradiction since <span class="math-container">$9=1+8$</span> and every summand is in <span class="math-container">$A$</span>.</li> <li>If <span class="math-container">$4 \in B$</span>, we have <span class="math-container">$7 \in A$</span>, i.e. <span class="math-container">$8, 9 \in B$</span>, i.e. <span class="math-container">$11, 12, 13 \in A$</span> which is a contradiction since <span class="math-container">$12=1+11$</span>.</li> </ul>
1,274,317
<blockquote> <p>Let $f:R \longrightarrow S$ be a surjective ring homomorphism. Is the inverse image $f^{-1}(M)$ a maximal left ideal of $R$ for any maximal left ideal $M$ of $S$?</p> </blockquote> <p><strong>Comments:</strong> I tried something like this: if $M$ is maximal then</p> <p>$M \neq S$ and if $J$ is a left ideal such that $M \varsubsetneq J \subseteq S \Rightarrow J = S$. </p> <p>I want to show that: $f^{-1}(M) \neq R$ and if $I$ is a left ideal such that $f^{-1}(M) \varsubsetneq I \subseteq R \Rightarrow I = R$.</p> <p>The first statement follows from the fact that $f$ is surjective. I am unable to prove the second.</p>
user26857
121,097
<blockquote> <p>If $I$ is a left ideal such that $f^{-1}(M) \varsubsetneq I \subseteq R \Rightarrow I = R$.</p> </blockquote> <p>Let $a\in I$, $a\notin f^{-1}(M)$. It follows $f(a)\notin M$. Since $f^{-1}(M)\subset I$ we get $f(f^{-1}(M))\subset f(I)$. $f$ surjective gives you $M=f(f^{-1}(M))$, so $M\subset f(I)$. But $f(a)\in f(I)-M$. Now note that $f$ surjective implies that $J=f(I)$ is a left ideal of $S$, an thus you get $M\subsetneq J\subseteq S\Rightarrow J=S$, so $f(I)=S$ hence $I=R$ (why?). </p> <p><strong>Edit.</strong> Let's prove the last claim. If $r\in R$ then $f(r)\in S=f(I)$, so there is $i\in I$ such that $f(r)=f(i)$. It follows $f(r-i)=0$, so $r-i\in\ker f$. But $\ker f=f^{-1}(0)\subseteq f^{-1}(M)$, so $\ker f\subset I$ hence $r-i\in I\Rightarrow r\in I$.</p>
25,853
<p>With regard to an undergraduate statistics course, I am developing a standardized list of point deductions with the TAs (doctoral students) so that graders are consistent in what they are taking off intermediate points for. For example, most problems are 10 points total, and my proposed point deductions for intermediate math errors are (for example):</p> <ul> <li>-2 pts, erroneous +, - , *, /</li> <li>-2 pts, erroneous sign, e.g. 3.02 instead of -3.02</li> <li>-3 pts, failed to square, e.g. (x) instead of (x)^2</li> <li>-3 pts, failed to take square root, e.g. (x) instead of sqrt(x)</li> </ul> <p>If, after grading, you discover on a particular exam that the final answers for five 10-point questions are incorrect because of only making a minor -2 point intermediate error, the student could conceivably obtain a score of 80% on an exam if they missed only -2 points per question (40/50).</p> <p>However, in statistics, there is a contextual element to every question, not just solving for a numerical answer -- that is, in addition to the worked problem, students need to write a text-based response for the following:</p> <ul> <li>(2 pts) state whether the hypothesis test is significant or not</li> <li>(2 pts) state whether the null hypothesis is rejected or accepted</li> <li>(2 pts) state whether the p-value is less than 0.05 or not.</li> </ul> <p>So if there was only one minor (-2 point) intermediate error made, causing an incorrect final numerical answer, the student will also incorrectly respond to the final text-based answers (above) as well.</p> <p>Thus, would you also take off e.g. 
-2 points for an incorrect final numerical answer, as well as -6 points for missing the final text-based sub-items listed above?</p> <p>In other words, would you only deduct -2 points for a complex (multi-step) algebra or calculus question if only a minor intermediate step was erroneous, or would you also deduct for having an incorrect final numerical answer as well?</p> <p>Maybe I could propose to the TAs to augment the point deduction list with:</p> <ul> <li>-1 pt, incorrect final numerical answer</li> <li>-1 pt, state whether the hypothesis test is significant or not</li> <li>-1 pt, state whether the null hypothesis is rejected or accepted</li> <li>-1 pt, state whether the p-value is less than 0.05 or not.</li> </ul>
Nick C
470
<p>I favor an additive grading scheme, where points are <em>earned</em> toward a possible maximum (say 10) instead of deducting points for the myriad possible mistakes one could make. Here, I would try to adopt a set of markers I am looking for and awarding points if they appear in the written work. This could help in standardizing your grading.</p> <p>To avoid the situation you mentioned (where a student loses 6 points because they came to the wrong numerical conclusion and thereby has the wrong verbal interpretation), I might change your markers to see <em>what to award points for</em>, like:</p> <ul> <li>(+2 pt) correctly/appropriately calculate the p-value</li> <li>(+1 pt) state whether the p-value is less than 0.05</li> <li>(+1 pt) if answer above matches the calculated p-value</li> <li>(+1 pt) state whether the hypothesis test is significant or not</li> <li>(+1 pt) if statement above matches the calculated p-value</li> <li>(+1 pt) state whether the null hypothesis is rejected or accepted</li> <li>(+1 pt) if statement above matches the calculated p-value</li> <li>etc.</li> </ul> <p>This way, a student can get the wrong p-value, but still answer the rest of the problem &quot;correctly&quot; and receive points.</p>
564,360
<p>Let's take an example: in the expression $\frac{X!}{y_1!\cdot y_2!\cdots y_n!}$, as long as the sum $S=y_1+y_2+\dots+y_n$ is less than or equal to $X$, the remainder is always $0$. That's how the count of permutations of $X$ things, with $y_1$ alike, $y_2$ alike, and so on, works. My question is: why does this happen? What is the mathematical explanation behind it? Even for something like $\frac{100!}{49!\cdot49!}$ it still works. The first block of consecutive factors $1\cdots 49$ clearly divides into $100!$, but why does the second block $1\cdots 49$ also divide into the remaining factors $50\cdots 100$? </p>
Thanos Darkadakis
105,049
<p>This is not exactly a proof. What I know from combinations is that:</p> <p>$\binom{N}{m}=\frac{N!}{m!(N-m)!}$ is always an integer.</p> <p>That means that if $N=a+b$, then $a!b!$ divides $N!$.</p> <p>You can use it again and again. E.g. if also $a=c+d$, then $c!d!$ divides $a!$.</p> <p>So $b!c!d!$ divides $N!$ where $N=b+c+d$. And you continue splitting.</p> <p>As for why a combination is an integer, there is a proof <a href="https://math.stackexchange.com/questions/11601/proof-that-a-combination-is-an-integer">here</a>: </p>
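The splitting argument can be checked directly on the $100!/(49!\,49!)$ example from the question:

```python
from math import comb, factorial

X, a, b = 100, 49, 49

# Splitting off one block at a time writes the multinomial coefficient
# as a product of binomial coefficients, hence an integer:
# X! / (a! b! (X-a-b)!) = C(X, a) * C(X-a, b)
assert factorial(X) // (factorial(a) * factorial(b) * factorial(X - a - b)) \
       == comb(X, a) * comb(X - a, b)

# In particular a! b! divides X! whenever a + b <= X.
assert factorial(X) % (factorial(a) * factorial(b)) == 0
```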
8,658
<p>$f(x) = \frac{1}{\cos x}$</p> <p>$f'(x) = \frac{\sin(x)}{\cos^2(x)}$</p> <p>$f''(x) = \frac{2\sin^2(x)+\cos^2(x)}{\cos^3(x)}$</p> <p>$f^{(3)}(x) = \frac{6\sin^3(x)+5\cos^2(x)\sin(x)}{\cos^4(x)}$</p> <p>$\vdots$</p> <p>$f^{(n)}(x) = \frac{ ?}{\cos^{n+1}(x)}$</p> <p>Some of these are easy: <a href="http://darkwing.uoregon.edu/~jcomes/251exn.pdf" rel="nofollow">http://darkwing.uoregon.edu/~jcomes/251exn.pdf</a> Others are not. Why?</p>
Robin Chapman
226
<p>This is asking for the $n$-th derivative of the secant function. As the derivative of $\sec$ is $\sec\tan$ and that of $\tan$ is $\sec^2=1+\tan^2$ then the $n$-th derivative of $\sec$ is $\sec f_n(\tan)$ where $f_0(t)=1$ and $f_{n+1}(t)=tf_n(t)+(t^2+1)f_n'(t)$.</p> <p>It's probably too much to hope to find a nice formula for the coefficients of the $f_n$. The constant coefficients of $f_{2m}$ are essentially the Euler numbers.</p>
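The recurrence for $f_n$ is easy to run symbolically, and the claimed identity $\sec^{(n)} = \sec\, f_n(\tan)$ can be spot-checked (numerically at a sample point, to sidestep slow trig simplification):

```python
import sympy as sp

x, t = sp.symbols('x t')

f = sp.Integer(1)                  # f_0(t) = 1
for n in range(1, 6):
    # f_{n+1} = t f_n + (t^2 + 1) f_n'
    f = sp.expand(t * f + (t**2 + 1) * sp.diff(f, t))
    lhs = sp.diff(sp.sec(x), x, n)
    rhs = sp.sec(x) * f.subs(t, sp.tan(x))
    assert abs((lhs - rhs).subs(x, 0.7).evalf()) < 1e-9
```

(For instance $f_3(t) = 6t^3 + 5t$, matching the third derivative displayed in the question.)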
2,612,134
<p>One of the exercises in Artin's Algebra gives an eigenvector of an element of $SO(3)$, in one possible case. Namely, it is asked to show that </p> <blockquote> <p>If $A=[a_{ij}]$ is a rotation in $SO(3)$, then the vector $$v=\begin{bmatrix} (a_{23}+a_{32})^{-1}\\ (a_{13}+a_{31})^{-1} \\ (a_{12}+a_{21})^{-1}\end{bmatrix}$$ is an eigenvector of $A$, whenever these entries of the vector are well-defined in $\mathbb{R}$. </p> </blockquote> <p>I couldn't find any way to proceed to prove this. What I tried was to write $A$ as a sum of symmetric and skew-symmetric parts, $\frac{1}{2}(A+A^t)+\frac{1}{2}(A-A^t)$, and show that $v$ is an eigenvector of both with eigenvalues $1$ and $0$ respectively; but this was not working beyond long algebraic/symbolic computations. Any hint for this?</p>
Community
-1
<p>If $A\not= A^T$ (that is, if $A$ is not $I_3$ or a U-turn), then $x\in \ker(A-A^T)$ iff $Ax=x$.</p> <p>It suffices to show that $(A-A^T)v=0$; by symmetry it is enough to check that the first component is zero:</p> <p>$\dfrac{a_{1,2}-a_{2,1}}{a_{1,3}+a_{3,1}}+\dfrac{a_{1,3}-a_{3,1}}{a_{1,2}+a_{2,1}}=0$ iff ${a_{1,2}}^2+{a_{1,3}^2}-{a_{2,1}^2}-{a_{3,1}}^2=0$</p> <p>iff $1-{a_{1,1}^2}-(1-{a_{1,1}^2})=0$, which holds because the first row and the first column of $A$ are unit vectors.</p> <p>EDIT. It remains to consider the case when $A\in T$ (the set of U-turns) and, for every $i&lt;j, a_{i,j}\not=0$; $T\subset SO(3)$ is locally isomorphic to a plane included in $\mathbb{R}^3$. Then there is a sequence $(A_n)\subset SO(3)\setminus T$ s.t. $A_n$ tends to $A$. By the above proof, for $n$ large enough, we know an explicit $v_n$ s.t. $A_nv_n=v_n$. Conclude by continuity.</p>
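A numerical check on a generic rotation (built here via Rodrigues' formula, an arbitrary illustrative choice) confirms both the fixed-vector claim and that $v$ points along the rotation axis:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])      # rotation axis (all components nonzero,
u /= np.linalg.norm(u)             # so the entries of v are well defined)
theta = 1.0
K = np.array([[0.0, -u[2], u[1]],
              [u[2], 0.0, -u[0]],
              [-u[1], u[0], 0.0]])
A = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

v = np.array([1 / (A[1, 2] + A[2, 1]),
              1 / (A[0, 2] + A[2, 0]),
              1 / (A[0, 1] + A[1, 0])])

assert np.allclose(A @ A.T, np.eye(3))   # A is indeed a rotation
assert np.allclose(A @ v, v)             # v is fixed: eigenvalue 1
assert np.allclose(np.cross(v, u), 0)    # v is parallel to the axis
```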
889,111
<p>This is a problem from Apostol's Real Analysis book. $$\text{Find if }\sum_{n=1}^{\infty}\dfrac{1}{n^{1+\frac{1}{n}}}\text{ converges or diverges. }$$ I tried to compare with $\displaystyle \sum_{n=1}^{\infty}\dfrac{1}{n^p}$ for suitable $p$, but $p&gt;1$ always fails. I tried to show $\displaystyle \sum_{k=1}^{\infty}2^ka_{2^k}$ converges, where $\displaystyle a_n=n^{-\left( 1+\frac{1}{n}\right)}$ but again this got too complicated. Can someone give me a proof? Thanks. </p> <p>Edit : Sorry, I was carried away, because I was thinking it would converge, but the book asked to check for convergence only. I edited it. </p>
André Nicolas
6,312
<p><strong>Outline:</strong> One can prove, say by induction, that $2^n\gt n$ for every positive integer $n$. </p> <p>It follows that $n^{1/n}\lt 2$, hence $n^{1+1/n}=n\cdot n^{1/n}\lt 2n$ and $\dfrac{1}{n^{1+1/n}}\gt \dfrac{1}{2n}$. </p> <p>From this we can conclude by Comparison with (half) the harmonic series that our series diverges. </p>
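Numerically, the partial sums do track (at least) half the harmonic series, matching the comparison used above:

```python
N = 10_000
s = sum(1 / n**(1 + 1/n) for n in range(1, N + 1))
h = sum(1 / n for n in range(1, N + 1))

# term-by-term: 1/n^(1+1/n) > 1/(2n) because n^(1/n) < 2,
# and 1/n^(1+1/n) <= 1/n trivially
assert h / 2 < s < h
```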
1,410,586
<blockquote> <p>For certain pairs $ (m,n)$ of positive integers with $ m\ge n$ there are exactly $ 50$ distinct positive integers $ k$ such that $ |\log m - \log k| &lt; \log n$. Find the sum of all possible values of the product $ mn$.</p> </blockquote> <p><strong>HINTS ONLY!</strong></p> <p>Obviously, converting it into a simpler form is good idea.</p> <p>$$\frac{1}{n} &lt; \frac{m}{k} &lt; n \implies k &lt; mn &lt; n^2k.$$</p> <p>Okay, so we already have a pretty simplified form, I broke it into two cases.</p> <p>Case 1: $m = n$ then saw:</p> <p>$\implies k &lt; n^2 &lt; kn^2$, which is true for ALL $k \ge 2$ and any $n$. </p> <p>So $m= n$ is an impossible case.</p> <p>The remaining case left is $m &gt; n$. </p> <p>This is actually getting quite difficult. </p> <p>There must be exactly $50$ values of $k$. </p> <p>How should I go about this?</p> <p><strong>SMALL - HINTS ONLY! Please don't give it away!</strong></p>
mathlove
78,967
<p>HINT : </p> <p>We have $$\frac 1n\lt\frac mk\lt n\iff \frac mn\lt k\lt mn\tag 1$$</p> <p>Setting $\lfloor\frac mn\rfloor=s$ gives $$(1)\iff s+1\le k\le mn-1$$with $s\le\frac mn\lt s+1$.</p> <p>Hence, one has $50=(mn-1)-(s+1)+1$.</p>
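A brute-force check of the resulting count $mn-1-\lfloor m/n\rfloor=50$ (illustrative Python, not part of the original hint; the search bound 100 is my assumption, comfortably large since the count grows with $mn$):

```python
# Count integers k with m/n < k < m*n: there are m*n - 1 - (m // n) of them
# (for n >= 2; n = 1 admits no k at all since log n = 0).
solutions = []
for n in range(2, 101):
    for m in range(n, 101):
        if m * n - 1 - m // n == 50:
            solutions.append((m, n))

print(solutions, sum(m * n for m, n in solutions))
```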
2,206,247
<p><strong>Question:</strong> Consider the following nonlinear recurrence relation defined for $n \in \mathbb{N}$:</p> <p>$$a_0=1, \ \ \ a_{n}=na_0+(n-1)a_1+(n-2)a_2+\cdots+2a_{n-2}+a_{n-1}$$</p> <p>a) Calculate $a_1,a_2,a_3,a_4.$</p> <p>b) Use induction to prove for all positive integers that:</p> <p>$$a_n=\dfrac{1}{\sqrt{5}}\left[\left(\dfrac{3+\sqrt{5}}{2}\right)^n-\left(\dfrac{3-\sqrt{5}}{2}\right)^n\right]$$ Hi all! I'm having trouble solving this problem. I have no problem with part (a), but I'm having a lot of trouble with part (b). I proved the base case (which is quite trivial), but I'm stuck on the inductive step (proving $k \to k+1$).</p> <p><a href="https://i.stack.imgur.com/JgGfv.png" rel="nofollow noreferrer">Attempt</a></p> <p>I don't know what to do from this point. Thank you! </p>
Community
-1
<p><strong>HINT:</strong> $$2\sqrt{ab}\ge 0$$</p> <p>Add $a+b$ to both sides.</p>
64,643
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="https://math.stackexchange.com/questions/4467/a1-2-is-either-an-integer-or-an-irrational-number">$a^{1/2}$ is either an integer or an irrational number</a> </p> </blockquote> <p>I know how to prove $\sqrt 2$ is an irrational number. Who can tell me that why $\sqrt 3$ is a an irrational number?</p>
kuch nahi
8,365
<p>The proof is very similar to the irrationality of square root of two. </p> <p>Let $\sqrt{3} = \frac a b$, where a and b have no common factors besides $1$</p> <p>As $3b^2 = a^2$ so $a^2$ is a multiple of $3$, and hence $a$ should be a multiple of $3$. Let $a = 3k$, then $b^2 = 3k^2$, and $b$ must also be a multiple of three. You will arrive at a contradiction to the earlier assumption that $a$ and $b$ have no common factors.</p>
64,643
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="https://math.stackexchange.com/questions/4467/a1-2-is-either-an-integer-or-an-irrational-number">$a^{1/2}$ is either an integer or an irrational number</a> </p> </blockquote> <p>I know how to prove $\sqrt 2$ is an irrational number. Who can tell me that why $\sqrt 3$ is a an irrational number?</p>
Community
-1
<p>Alternatively, you can use the <a href="http://en.wikipedia.org/wiki/Rational_root_theorem" rel="nofollow">rational root test</a> on the polynomial equation $x^2-3=0$ (whose solutions are $\pm \sqrt{3}$). If $\frac{a}{b}$ is a solution to the equation (with $a,b\in \mathbb{Z}$ and $b\not=0$), then $b \vert 1$ and $a \vert 3$, hence $\frac{a}{b}\in \{\pm 1, \pm 3\}$. However, it is straightforward to check that none of $\pm 1, \pm 3$ are solutions to $x^2-3=0$. Therefore there are no such rational solutions and $\sqrt{3}$ is irrational.</p> <p>In fact, in the above argument, if we replace 3 with an arbitrary prime $p\in \mathbb{N}$ and 2 with an arbitrary $m\in \mathbb{N}$, $m\geq 2$, the same argument shows that $\sqrt[m]{p}$ is irrational.</p>
39,423
<ul> <li><p>case1</p> <pre><code>Options[f] = {"t" -&gt; "0"}; f[___, OptionsPattern[]] := StringReplace["content", "t" :&gt; OptionValue["t"]] f[] (* con0en0 *) </code></pre></li> <li><p>case2</p> <pre><code>rule = {"t" -&gt; OptionValue["t1"]}; Options[gg] = {"t1" -&gt; "T1", "t2" -&gt; "1"}; gg[___, OptionsPattern[]] := StringReplace["content", rule] gg[1] (* con~~OptionValue[t1]~~en~~OptionValue[t1] *) </code></pre></li> </ul> <p>Here <code>OptionValue</code> couldn't get the value of "t1". So, how can I make case 2 work like case 1? </p> <hr> <p>One solution I found is </p> <pre><code>Options[gg]={"t1"-&gt;"T1","t2"-&gt;"1"}; gg[___,OptionsPattern[]]:=Hold[StringReplace]["content",rule]//ReleaseHold//Evaluate </code></pre> <p>Any simpler methods?</p>
ubpdqn
1,997
<p>Perhaps not what you are after:</p> <pre><code>op[] := Function[x, x /. MapThread[ Rule[#1, (#2)] &amp;, {{"t", "t1", "t2"}, Thread["t" -&gt; {"0", "T1", "1"}]}]]; srf[opts_: "t"] := StringReplace["content", op[][opts]] </code></pre> <p>Note:</p> <pre><code>srf[] </code></pre> <p>yields the default value: </p> <p>"con0en0"</p> <pre><code>srf/@{"t","t1","t2"} </code></pre> <p>yields</p> <p>{"con0en0", "conT1enT1", "con1en1"}</p>
2,464,890
<p>Here is link to some limit questions:</p> <p><a href="https://i.stack.imgur.com/2rM9f.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2rM9f.png" alt="Example" /></a> Can anyone explain how has answers were derived? In (a), how can we cancel out <span class="math-container">$(x-2)$</span>? And how can answer be 0? When <span class="math-container">$x\to 2$</span>, <span class="math-container">$x-2\to 0$</span> and the answer should be infinity. Similarly in (b) the answer should be infinity. Can anyone explain?</p>
fleablood
280,126
<p>Big Picture.</p> <p>1) If $f(x) = \frac {(x-2)^2}{(x-2)(x+2)}$ then $f(2) = \frac 00$, which is undefined (not $\infty$, by the way; simply undefined and meaningless). </p> <p>However, at values very, very close to $x =2$ but not actually precisely $x=2$, all the $f(x)$ will have values that are very close to some value $k$. That is what we mean when we say $\lim\limits_{x\to 2}f(x) = k$. We are saying "if $x$ is close to but not necessarily equal to $2$, then $f(x)$ is close to but not necessarily equal to $k$".</p> <p>Of course, we have to be technically precise and formal about it (which the above most certainly is not).</p> <p>So.... we are looking at values of $x$ where $x$ is <em>NOT</em> equal to $2$.</p> <p>2) If $M = \frac {AB}{AC}$ and $A \ne 0$ then we can divide: $M = \frac {\not AB}{\not A C} = \frac BC$.</p> <p>And if $x$ is not equal to $2$ then $x -2 \ne 0$. So we can "cancel" $x-2$ out.</p> <p>So for $x \ne 2$ we have $f(x) =\frac{(x-2)^2}{(x-2)(x+2)} = \frac {\not{(x-2)}(x-2)}{\not{(x-2)}(x+2)} = \frac {x-2}{x+2}$.</p> <p>This new expression at $x = 2$ will be $\frac {2-2}{2+2} = \frac 04 = 0$.</p> <p>This is <em>not</em> $f(2)$ (which is still undefined), but at all $x$ very close but not equal to $2$, $f(x) = \frac {(x-2)(x-2)}{(x-2)(x+2)}$ is very close to (but not equal to) $0$.</p>
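A numerical illustration of the point above (Python, not part of the original answer): sampling $f$ at points approaching $2$ shows the values approaching $0$, even though $f(2)$ itself is undefined.

```python
def f(x):
    # Defined only for x != 2 and x != -2.
    return (x - 2) ** 2 / ((x - 2) * (x + 2))

# Probe x -> 2 from both sides; near x = 2, f(x) = (x-2)/(x+2) is near 0.
values = [f(2 + h) for h in (0.1, 0.01, 0.001, -0.1, -0.01, -0.001)]
print(values)
```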
2,647,194
<p>show that $p(x)=x^3-x^2-4x+5$ is irreducible in $\mathbb{Q}[x]$ </p> <p>How do we decide if a polynomial $p (x)$ in $\mathbb{Q}[x]$ is irreducible?</p>
Arthur
15,500
<p>There isn't one single all-encompassing way to show that a polynomial is irreducible. There are a lot of theorems that apply in specific cases, though, and those we try to use whenever possible to see whether we get any results.</p> <p>In this case, we see that the polynomial is of degree three, which means that if it is reducible over $\Bbb Q$, then it must have a root in $\Bbb Q$. However, the <a href="https://en.wikipedia.org/wiki/Rational_root_theorem" rel="nofollow noreferrer">rational root theorem</a> says that such a root can only be $\pm 1$ or $\pm 5$ (the leading coefficient is $1$, and all coefficients are integers, so any roots must be divisors of the constant term). We can easily check that none of those are roots, so we are done.</p>
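The candidate check is easy to automate (illustrative Python, not in the original answer):

```python
def p(x):
    return x ** 3 - x ** 2 - 4 * x + 5

# By the rational root theorem, any rational root divides the constant
# term 5 (leading coefficient 1): the candidates are +-1 and +-5.
values = {c: p(c) for c in (1, -1, 5, -5)}
print(values)  # no candidate evaluates to 0, so p is irreducible over Q
```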
1,987,317
<blockquote> <p>Prove that the system $$A^T A x = A^T b$$ always has a solution. The matrices and vectors are all real. The matrix $A$ is $m \times n$. </p> </blockquote> <p>I think it makes sense intuitively but I can't prove it formally.</p>
H. H. Rugh
355,946
<p>The matrix $A^TA$ need not be invertible. So what you need to prove is that $A^T b$ lies in the image $V= {\rm im\;} A^T A$. </p> <p>Now, $A^T b\in V$ is equivalent to $V^\perp \subset (A^T b)^\perp$ so it suffices to show that the orthogonal complement of ${\rm im} A^T A$ is also orthogonal to $A^T b$.</p> <p>So suppose that $z^T A^T A=0$ (i.e. the vector $z$ is orthogonal to the image). Then also $|z^T A^T|^2 = z^T A^T A z=0$ so $z^T A^T=0$. But then $z^T A^Tb=0$ as we had to show.</p>
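A tiny concrete illustration (Python with exact rationals, not from the original answer; the rank-deficient matrix and right-hand side are arbitrary choices): even when $A^TA$ is singular, $A^Tb$ lies in its image, so the normal equations stay consistent.

```python
from fractions import Fraction as F

A = [[F(1), F(1)],
     [F(1), F(1)]]          # rank 1, so A^T A is singular
b = [F(3), F(5)]

AtA = [[sum(A[k][i] * A[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
Atb = [sum(A[k][i] * b[k] for k in range(2)) for i in range(2)]

# AtA = [[2, 2], [2, 2]] and Atb = [8, 8]: both equations read
# 2 x1 + 2 x2 = 8, so e.g. x = (4, 0) solves the normal equations.
x = [F(4), F(0)]
residual = [sum(AtA[i][j] * x[j] for j in range(2)) - Atb[i]
            for i in range(2)]
print(AtA, Atb, residual)
```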
1,987,317
<blockquote> <p>Prove that the system $$A^T A x = A^T b$$ always has a solution. The matrices and vectors are all real. The matrix $A$ is $m \times n$. </p> </blockquote> <p>I think it makes sense intuitively but I can't prove it formally.</p>
Rodrigo de Azevedo
339,790
<p>Let the SVD of $\mathrm A \in \mathrm R^{m \times n}$ be</p> <p>$$\mathrm A = \mathrm U \Sigma \mathrm V^{\top} = \begin{bmatrix} \mathrm U_1 &amp; \mathrm U_2\end{bmatrix} \begin{bmatrix} \hat\Sigma &amp; \mathrm O\\ \mathrm O &amp; \mathrm O\end{bmatrix} \begin{bmatrix} \mathrm V_1^{\top}\\ \mathrm V_2^{\top}\end{bmatrix}$$</p> <p>where the zero matrices may be empty. The eigendecomposition of $\mathrm A^{\top} \mathrm A$ is, thus,</p> <p>$$\mathrm A^{\top} \mathrm A = \mathrm V \Sigma^{\top} \mathrm U^{\top} \mathrm U \Sigma \mathrm V^{\top} = \mathrm V \Sigma^{\top} \Sigma \mathrm V^{\top} = \begin{bmatrix} \mathrm V_1 &amp; \mathrm V_2\end{bmatrix} \begin{bmatrix} \hat\Sigma^2 &amp; \mathrm O\\ \mathrm O &amp; \mathrm O\end{bmatrix} \begin{bmatrix} \mathrm V_1^{\top}\\ \mathrm V_2^{\top}\end{bmatrix}$$</p> <p>Hence, the <strong>normal equations</strong> $\mathrm A^{\top} \mathrm A \, \mathrm x = \mathrm A^{\top} \mathrm b$ can be written as follows</p> <p>$$\mathrm V \begin{bmatrix} \hat\Sigma^2 &amp; \mathrm O\\ \mathrm O &amp; \mathrm O\end{bmatrix} \mathrm V^{\top} \mathrm x = \mathrm V \begin{bmatrix} \hat\Sigma &amp; \mathrm O\\ \mathrm O &amp; \mathrm O\end{bmatrix} \mathrm U^{\top} \mathrm b$$</p> <p>Let $\mathrm y := \mathrm V^{\top} \mathrm x$. 
Left-multiplying by $\mathrm V^{\top}$,</p> <p>$$\begin{bmatrix} \hat\Sigma^2 &amp; \mathrm O\\ \mathrm O &amp; \mathrm O\end{bmatrix} \begin{bmatrix} \mathrm y_1\\ \mathrm y_2\end{bmatrix} = \begin{bmatrix} \hat\Sigma &amp; \mathrm O\\ \mathrm O &amp; \mathrm O\end{bmatrix} \begin{bmatrix} \mathrm U_1^{\top} \mathrm b\\ \mathrm U_2^{\top} \mathrm b\end{bmatrix}$$</p> <p>and, thus,</p> <p>$$\begin{bmatrix} \mathrm I_r &amp; \mathrm O\\ \mathrm O &amp; \mathrm O\end{bmatrix} \begin{bmatrix} \mathrm y_1\\ \mathrm y_2\end{bmatrix} = \begin{bmatrix} \hat\Sigma^{-1} &amp; \mathrm O\\ \mathrm O &amp; \mathrm O\end{bmatrix} \begin{bmatrix} \mathrm U_1^{\top} \mathrm b\\ \mathrm U_2^{\top} \mathrm b\end{bmatrix} = \begin{bmatrix} \hat\Sigma^{-1}\mathrm U_1^{\top} \mathrm b\\ \mathrm O\end{bmatrix}$$</p> <p>which <em>always</em> has a solution, as $\mathrm O \mathrm y_2 = \mathrm O$ always has a solution. The affine solution space is parametrized by</p> <p>$$\mathrm x = \mathrm V_1 \hat\Sigma^{-1} \mathrm U_1^{\top} \mathrm b + \mathrm V_2 \eta$$</p> <p>where $\{ \mathrm V_2 \eta \mid \eta \in \mathbb R^{n-r} \}$ is the null space of $\mathrm A$. If $\mathrm A$ has <strong>full column rank</strong>, then $r = n$, the null space of $\mathrm A$ is $\{0_n\}$, and $\mathrm V_1 \hat\Sigma^{-1} \mathrm U_1^{\top} \mathrm b$ is the only solution to the normal equations.</p> <hr> <p><strong>Addendum</strong></p> <p>Using the SVD of $\mathrm A$, the original system $\mathrm A \mathrm x = \mathrm b$ can be written as</p> <p>$$\begin{bmatrix} \hat\Sigma &amp; \mathrm O\\ \mathrm O &amp; \mathrm O\end{bmatrix} \begin{bmatrix} \mathrm y_1\\ \mathrm y_2\end{bmatrix} = \begin{bmatrix} \mathrm U_1^{\top} \mathrm b\\ \mathrm U_2^{\top} \mathrm b\end{bmatrix}$$</p> <p>which only has a solution if $\mathrm U_2^{\top} \mathrm b = \mathrm 0$, i.e., if $\mathrm b$ is orthogonal to the left null space of $\mathrm A$, i.e., if $\mathrm b$ is in the <strong>column space</strong> of $\mathrm A$.</p>
687,179
<p>This is a question that asks the reader for a $strategy$ to solve a particular problem. I cannot solve this problem myself so I am looking around for general methods one might use to confront it with. Suppose $$f(x)=a_0+a_1x+..., g(x)=b_0+b_1x_...$$ and given $$\lim\limits_{x\to 1^-}\frac{f^{(n)}(x)}{g^{(n)}(x)}=1$$ With the additional $$\lim\limits_{n\to\infty}\frac{a_{n+1}}{a_n}=1$$ and $$\lim\limits_{n\to\infty}\frac{b_{n+1}}{b_n}=1 $$ for all natural $n$. Prove $$\lim\limits_{n\to \infty}\frac{a_n}{b_n}=1$$</p> <p>I would like to know your general thoughts about approaching this type of problem. Any insights however small will be much appreciated. Although this is seemingly a problem in real analysis, I have tagged complex analysis because solutions may well involve it.</p>
Daniel Fischer
83,702
<p>Under the given assumptions,</p> <p>$$\lim_{n\to\infty} \frac{a_n}{b_n} = 1$$</p> <p>need not hold. Counterexample:</p> <p>$$\begin{align} f(x) &amp;= \frac{1}{1-x^2} = \sum_{k=0}^\infty x^{2k},\\ g(x) &amp;= f(x) + e^x. \end{align}$$</p> <p>We have $\dfrac{a_n}{b_n} = 0$ for all odd $n$, but nevertheless</p> <p>$$\lim_{x\to 1} \frac{f^{(n)}(x)}{g^{(n)}(x)} = \lim_{x\to 1} \frac{f^{(n)}(x)}{f^{(n)}(x)+e^x} = 1$$</p> <p>for all $n$, since $\lim\limits_{x\to 1} \lvert f^{(n)}(x)\rvert = +\infty$, while $e^x$ remains bounded.</p> <p>You need some further conditions to reach the conclusion.</p>
285,034
<p>I currently don't see how to solve the following integral:</p> <p>$$\int_{-1/2}^{1/2} \cos(x)\ln\left(\frac{1+x}{1-x}\right) \,dx$$</p> <p>I tried to solve it with integration by parts and with a Taylor series, but nothing did help me so far.</p>
ՃՃՃ
48,751
<p>Hint: The integrand is an odd function.</p>
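A numerical confirmation of the hint (illustrative stdlib Python, not part of the original answer), using composite Simpson's rule:

```python
import math

def g(x):
    return math.cos(x) * math.log((1 + x) / (1 - x))

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

approx = simpson(g, -0.5, 0.5)
print(approx)  # essentially 0: the integrand is odd on a symmetric interval
```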
1,640,653
<p>I have this matrix below and I'm trying to find it's inverse, I know I augment it with I<sub>2</sub> but I don't know where to go from that.</p> <p>\begin{bmatrix} 2&amp;1\\ a&amp;a \end{bmatrix}</p>
MPW
113,214
<p>Perhaps quicker to use $$\begin{bmatrix} a&amp;b\\ c&amp;d \end{bmatrix}^{-1} = \frac1{ad-bc}\begin{bmatrix} d&amp;-b\\ -c&amp;a \end{bmatrix}$$</p> <p>so that</p> <p>$$\begin{bmatrix} 2&amp;1\\ a&amp;a \end{bmatrix}^{-1} = \frac1a\begin{bmatrix} a&amp;-1\\ -a&amp;2 \end{bmatrix}=\begin{bmatrix} 1&amp;-\frac1a\\ -1&amp;\frac2a \end{bmatrix}$$</p>
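A check of the formula with exact rational arithmetic (Python, not part of the original answer; the sample value $a=3$ is arbitrary, any $a\neq 0$ works):

```python
from fractions import Fraction as F

a = F(3)
A = [[F(2), F(1)],
     [a, a]]

# 2x2 inverse formula: (1/det) * [[d, -b], [-c, a]].
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # = 2a - a = a
Ainv = [[A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det, A[0][0] / det]]

product = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
print(Ainv, product)
```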
2,130,141
<p>I am having trouble with the following exercise:</p> <p>$P(A\times B) = Q$ and $Q = \lbrace V\times W \ \vert \ V\in P(A), W\in P(B)\rbrace$ </p> <p>So I have to prove or disprove. I know that $P(A\times B) \neq Q$ and, being specific, $P(A\times B) \not\subset Q$ and $Q \subset P(A\times B)$. In addition: </p> <p>$\supseteq \rbrack \ X\in Q \rightarrow X\in V\times W$, but $V\subset A$ and $W\subset B$, $\rightarrow $ $X\subset A\times B \rightarrow X\in P(A\times B)$.</p> <p>But I am not able to disprove $\subseteq \rbrack$. I know they have different sizes but I want to make a formal disproof.</p> <p>I am sorry for grammar mistakes, but English is not my native language. </p> <p>Kind regards, </p> <p>Phi.</p>
Rob Arthan
23,171
<p>Assume $d$ is a metric on the set $M = \{a, b\}$ where $a \neq b$ and let $\delta = d(a, b)$. Then $\delta \neq 0$ because otherwise $a = b$. The set $U = \{x \in M \mid d(x, a) &lt; \delta\}$ is then open by definition of the topology on a metric space. But $U = \{a\}$ is not open in the topology described in the question (the indiscrete topology). By an extension of this argument (taking $\delta = \min\{d(x, y) \mid x, y \in M, x\neq y\}$) you can show that a finite metric space has the discrete topology: the topology in which every subset is open.</p>
59,567
<p>I am looking for a way to add a legend showing the identity of various atoms (with different colours) to this picture. Any Clues?</p> <pre><code>Import["ExampleData/1PPT.pdb", "Rendering" -&gt; "BallAndStick"] </code></pre> <p><img src="https://i.stack.imgur.com/FSFoH.png" alt="enter image description here"></p>
Bob Hanlon
9,362
<pre><code>bas = Import["ExampleData/1PPT.pdb", "Rendering" -&gt; "BallAndStick", ImageSize -&gt; 500]; elements = Import["ExampleData/1PPT.pdb", "ResidueAtoms"] // Flatten // Union; legend = GraphicsColumn[{ {Graphics[{#[[1]], Disk[{0, 0}, 1]}, ImageSize -&gt; 10], #[[2]]} &amp; /@ Thread[{ ElementData[#, "IconColor"] &amp; /@ elements, ElementData[#, "StandardName"] &amp; /@ elements}]}]; Row[{bas, legend}] </code></pre> <p><img src="https://i.stack.imgur.com/Uz5lB.jpg" alt="enter image description here"></p> <p>Alternative legend:</p> <pre><code>legend = GraphicsColumn[ Row[{Graphics[{#[[1]], Disk[{0, 0}, 1]}, ImageSize -&gt; 10], " " &lt;&gt; #[[2]]}] &amp; /@ Thread[{ElementData[#, "IconColor"] &amp; /@ elements, ElementData[#, "StandardName"] &amp; /@ elements}], ImageSize -&gt; 100]; Row[{bas, legend}] </code></pre> <p><img src="https://i.stack.imgur.com/oNIOs.jpg" alt="enter image description here"></p>
3,878,723
<blockquote> <p>Find the value of <span class="math-container">$k$</span> if the curve <span class="math-container">$y = x^2 - 2x$</span> is tangent to the line <span class="math-container">$y = 4x + k$</span></p> </blockquote> <p>I have looked at the solution to this question and the first step is the &quot;equate the two functions&quot;:<br /> <span class="math-container">$ x^2 - 2x = 4x + k$</span></p> <p>Why? How does that help solve the equation? And how can I use what I get from equating the two functions to find the solution?</p>
PL Wang
803,716
<p>Additionally, you can also solve with calculus. Take the derivative. <span class="math-container">$(x^2-2x)' = 2x-2$</span>. We want the derivative to be <span class="math-container">$4$</span>, which is the slope of the line. This happens when <span class="math-container">$2x-2 = 4$</span>, at <span class="math-container">$x=3$</span>.</p> <p>Plugging into our quadratic: <span class="math-container">$y=3^2-2\cdot 3 = 3$</span>.</p> <p>So we need <span class="math-container">$y = 4\cdot 3 + k$</span>, or <span class="math-container">$3 = 4\cdot 3 + k$</span>. This gives <span class="math-container">$\boxed{k=-9}$</span>.</p>
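The same computation in a few lines of Python (not part of the original answer):

```python
# Equating curve and line: x^2 - 2x = 4x + k  =>  x^2 - 6x - k = 0.
# Tangency means a double root, i.e. discriminant 36 + 4k = 0, so k = -9.
k = -9
disc = 36 + 4 * k
x0 = 6 / 2              # the double root x = 3
curve = x0 ** 2 - 2 * x0
line = 4 * x0 + k
print(disc, curve, line)  # line touches the parabola at (3, 3)
```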
3,438,048
<p>I've recently obtained my University entrance papers from 1967 (yes,52 years ago!) and I found the question below difficult. I presume the answer is a symmetric expression in the differences between alpha,beta and gamma.Am I missing some obvious trick? Any help would be appreciated.</p> <p>Simplify and evaluate the determinant <a href="https://i.stack.imgur.com/Dfft4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dfft4.png" alt="enter image description here"></a></p> <p>and show that its value is independent of theta.</p>
José Carlos Santos
446,262
<p>The empty set belongs to <span class="math-container">$\tau$</span> trivially.</p> <p>The set <span class="math-container">$\mathbb R$</span> belongs to <span class="math-container">$\tau$</span> because if <span class="math-container">$x\in \mathbb R$</span> and you take <span class="math-container">$\varepsilon=1$</span>, then <span class="math-container">$(x-\varepsilon,x+\varepsilon)\subset\mathbb R$</span>.</p> <p>If <span class="math-container">$A,B\in\tau$</span>, take <span class="math-container">$x\in A\cap B$</span>. There are <span class="math-container">$\varepsilon_1,\varepsilon_2&gt;0$</span> such that <span class="math-container">$(x-\varepsilon_1,x+\varepsilon_1)\subset A$</span> and <span class="math-container">$(x-\varepsilon_2,x+\varepsilon_2)\subset B$</span>. So,<span class="math-container">\begin{align}\bigl(x-\min\{\varepsilon_1,\varepsilon_2\},x+\min\{\varepsilon_1,\varepsilon_2\}\bigr)&amp;=(x-\varepsilon_1,x+\varepsilon_1)\cap(x-\varepsilon_2,x+\varepsilon_2)\\&amp;\subset A\cap B.\end{align}</span></p> <p>And if <span class="math-container">$\alpha\subset\tau$</span> and <span class="math-container">$x\in\bigcup_{A\in\alpha}A$</span>, then <span class="math-container">$x\in A_0$</span> for some <span class="math-container">$A_0\in\alpha$</span>. So, there is an <span class="math-container">$\varepsilon&gt;0$</span> such that <span class="math-container">$(x-\varepsilon,x+\varepsilon)\subset A_0$</span>. So,<span class="math-container">$$(x-\varepsilon,x+\varepsilon)\subset\bigcup_{A\in\alpha}A.$$</span></p>
47,926
<p>Is there any known two-dimensional Conway's game of life variation where each cell can not be just on/off but able to hold more states, maybe 4 or 5?</p>
El'endia Starman
10,537
<p>Adding on to anon's comment, here's a cellular automaton called <a href="http://en.wikipedia.org/wiki/Wireworld" rel="nofollow">Wireworld</a>.</p>
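Wireworld itself is a nice concrete example of a four-state rule, and small enough to sketch (illustrative Python, not part of the original answer; states follow the usual convention 0 = empty, 1 = electron head, 2 = electron tail, 3 = conductor):

```python
def step(grid):
    # One Wireworld generation: head -> tail, tail -> conductor,
    # conductor -> head iff exactly 1 or 2 Moore neighbours are heads.
    rows, cols = len(grid), len(grid[0])

    def heads_around(r, c):
        return sum(grid[r + dr][c + dc] == 1
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0)
                   and 0 <= r + dr < rows and 0 <= c + dc < cols)

    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            s = grid[r][c]
            if s == 1:
                new[r][c] = 2
            elif s == 2:
                new[r][c] = 3
            elif s == 3 and heads_around(r, c) in (1, 2):
                new[r][c] = 1
    return new

# An electron travelling right along a straight wire.
wire = [[2, 1, 3, 3, 3]]
print(step(wire))  # [[3, 2, 1, 3, 3]]
```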
47,926
<p>Is there any known two-dimensional Conway's game of life variation where each cell can not be just on/off but able to hold more states, maybe 4 or 5?</p>
alan2here
6,982
<p>I can recommend the software Golly; it's so far ahead of anything else and easy to use. Generations rules are similar to Life-like rules such as Conway's Life but have more states. Alternatively, WireWorld, as mentioned above, is an example in Golly of a Rule Table, and there are many others for more human-engineered cellular automata.</p>
1,343,909
<p>I was reading some examples about linear functionals from the book Introductory functional analysis with applications of Kreysig and one of the examples states the following </p> <p>Let <span class="math-container">$L:C[0,1]\rightarrow C[0,1]$</span></p> <p><span class="math-container">$$L[f(x)]=\int_{0}^{x}f(s)ds$$</span> that is linear and <span class="math-container">$R(T)=C^{1}[0,1]$</span> s.t <span class="math-container">$L(0)=0$</span>.</p> <p>My question is how can I can calculate <span class="math-container">$L^{-1}: R(T)\rightarrow C[0,1] $</span></p> <p>I could give some suggestion ?</p> <p>Thanks!</p>
egreg
62,967
<p>Hint: fundamental theorem of calculus.</p> <p>Hint 2: if $f\in C^1([0,1])$ and $f(0)=0$, then we can consider $f'\in C([0,1])$ and also $$ g(x)=\int_0^x f'(s)\,ds $$ By the fundamental theorem of calculus, $f'=g'$ and $g=L(f')$, so…</p>
3,075,869
<p>The following is quoted from <a href="https://en.wikipedia.org/wiki/Quotient_space_(topology)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Quotient_space_(topology)</a></p> <p>Quotient maps q : X → Y are characterized among surjective maps by the following property: if Z is any topological space and f : Y → Z is any function, then f is continuous if and only if f ∘ q is continuous.</p> <p>Assuming that <span class="math-container">$q$</span> is a quotient map, I can prove the characterizing property using the definition of a quotient map, namely, <span class="math-container">$q^{-1}U$</span> is open if and only if <span class="math-container">$U$</span> is open. However, I could not see how to prove the converse: how does the property imply that <span class="math-container">$q$</span> is quotient?</p>
Henno Brandsma
4,280
<p>Suppose <span class="math-container">$q: X \to Y$</span> obeys the "composition property". Suppose <span class="math-container">$U$</span> is a subset of <span class="math-container">$Y$</span> such that <span class="math-container">$q^{-1}[U]$</span> is open. Define <span class="math-container">$g: Y \to \{0,1\}$</span>, where <span class="math-container">$\{0,1\}$</span> has the Sierpinski topology <span class="math-container">$\{\emptyset,\{0\},\{0,1\}\}$</span>, by <span class="math-container">$g(x) = 0$</span> when <span class="math-container">$x \in U$</span> and <span class="math-container">$g(x)=1$</span> for <span class="math-container">$x \notin U$</span>.</p> <p>Then <span class="math-container">$g \circ q: X \to \{0,1\}$</span> sends <span class="math-container">$q^{-1}[U]$</span> to <span class="math-container">$0$</span> and all other points of <span class="math-container">$X$</span> to <span class="math-container">$1$</span> and because we assumed <span class="math-container">$q^{-1}[U]$</span> is open and as <span class="math-container">$\{0,1\}$</span> has the specified topology, <span class="math-container">$g \circ q$</span> is continuous. So by the "composition property", <span class="math-container">$g$</span> must be continuous, so <span class="math-container">$g^{-1}[\{0\}]=U$</span> must be open. </p> <p>So we have shown that the topology of <span class="math-container">$Y$</span> contains all <span class="math-container">$U$</span> such that <span class="math-container">$q^{-1}[U]$</span> is open, and from this it follows that <span class="math-container">$q$</span> is a quotient map.</p>
176,260
<blockquote> <p>Let $\left\{ f_{n}\right\} $ denote the set of functions on $[0,\infty) $ given by $f_{1}\left(x\right)=\sqrt{x} $ and $f_{n+1}\left(x\right)=\sqrt{x+f_{n}\left(x\right)} $ for $n\ge1 $. Prove that this sequence is convergent and find the limit function.</p> </blockquote> <p>We can easily show that this sequence is is nondecreasing. Originally, I was trying to apply the fact that “every bounded monotonic sequence must converge” but then it hit me this is true for $\mathbb{R}^{n} $. Does this fact still apply on $C[0,\infty) $, the set of continuous functions on $[0,\infty) $. If yes, what bound would we use?</p>
David Mitra
18,986
<p>In a sense, the answer is "yes". You can apply the Monotone Convergence Theorem pointwise. That is, for a fixed value of $x$, show that the sequence of nonnegative numbers $\bigl(f_n(x)\bigr)$ is bounded above and increasing. Then of course for each $x$, the sequence $\bigl(f_n(x)\bigr)$ converges to some $f(x)$. (You can find $f(x)$ explicitly by taking the limits of both sides of your defining relation for $f_n$.)</p> <p>Towards showing that $\bigl(f_n(x)\bigr)$ is indeed bounded above, try using the bound $1+2\sqrt x$. </p> <p>Perhaps better, as suggested by Did, would be to formally find the limit fuction first, and then show that this serves as an upper bound.</p>
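Iterating the recursion numerically (Python, not part of the original answer) and comparing with the closed form $f(x)=\frac{1+\sqrt{1+4x}}{2}$ obtained by taking limits on both sides of $f=\sqrt{x+f}$ (for $x>0$):

```python
import math

def f_limit(x):
    # Positive root of f^2 - f - x = 0, the fixed point of f = sqrt(x + f).
    return (1 + math.sqrt(1 + 4 * x)) / 2

def f_n(x, n):
    f = math.sqrt(x)          # f_1(x) = sqrt(x)
    for _ in range(n - 1):    # f_{k+1}(x) = sqrt(x + f_k(x))
        f = math.sqrt(x + f)
    return f

for x in (0.5, 1.0, 2.0, 10.0):
    print(x, f_n(x, 60), f_limit(x))
```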
1,175,632
<p>Determine whether the following integral converges or diverges: \begin{align*} \iint_Q e^{-xy} \ dA, \end{align*} where $Q$ is the first quadrant of the $xy$-plane.</p> <p>How should I go about this problem? Should I compare it with another known integral?</p>
awkward
76,172
<p>I think your answer of 0.0648 is correct.</p> <p>For a more general approach, consider the probability of <em>not</em> having 3 successes in a row out of $n$ trials; call this probability $P(n)$, and let's say the probability of a single success is $p$, with $q=1-p$. If we have $n$ trials, condition the probability on the number of successes at the end of the $n$ trials: the $n$ trials must end in $...F$, $...FS$, or $...FSS$, so we have the recursion</p> <p>$$P(n) = q \; P(n-1) + pq \; P(n-2) + p^2q \; P(n-3) \qquad \text{for } n \ge 3$$ with $P(0) = P(1) = P(2) = 1$.</p> <p>It's not hard to see how to extend this approach to longer strings of successes.</p>
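The recursion is easy to validate against brute-force enumeration (illustrative Python, not in the original answer; $p=\frac12$ is chosen only for the exhaustive check):

```python
from fractions import Fraction
from itertools import product

def p_no_run(n, p):
    # P(n): probability of no 3 successes in a row in n trials, via
    # P(n) = q P(n-1) + pq P(n-2) + p^2 q P(n-3), P(0)=P(1)=P(2)=1.
    q = 1 - p
    P = [1, 1, 1]
    for _ in range(3, n + 1):
        P.append(q * P[-1] + p * q * P[-2] + p * p * q * P[-3])
    return P[n]

def brute(n):
    # Exhaustive count of outcome strings avoiding 'SSS', for p = 1/2.
    good = sum('SSS' not in ''.join(w) for w in product('SF', repeat=n))
    return Fraction(good, 2 ** n)

print(p_no_run(10, Fraction(1, 2)), brute(10))
```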
2,360,819
<p>Probability question that I don't understand? It is on our assignment for probability and I can't seem to figure out how it has to do with probability or how to solve it.</p>
Michael Burr
86,421
<p>Consider the following chart $$ \begin{matrix} \text{Sides}&amp;\text{Diagonals}\\\hline 3&amp;0\\ 4&amp;2\\ 5&amp;5\\ 6&amp;9 \end{matrix} $$ It appears that the number of diagonals is increasing by $n-2$ each time (where $n$ is the number of sides of the new shape).</p> <p>If you poke around a bit (writing down the sums or using linear algebra, or guessing), you get the formula $$ \frac{1}{2}n^2-\frac{3}{2}n=\binom{n}{2}-n. $$</p>
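A direct count confirming the formula against the chart (Python, not part of the original answer):

```python
from math import comb

def diagonals(n):
    # All vertex pairs minus the n sides.
    return comb(n, 2) - n

print({n: diagonals(n) for n in range(3, 9)})
```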
2,360,819
<p>Probability question that I don't understand? It is on our assignment for probability and I can't seem to figure out how it has to do with probability or how to solve it.</p>
DanielWainfleet
254,665
<p>There are $\binom {8}{2}=28$ pairs of vertices and $\binom {8}{1}=8$ of those are pairs of adjacent vertices, so there are $28-8=20$ pairs of non-adjacent vertices and therefore $20$ diagonals.</p> <p>Probability often involves counting cases, which often involves combinatorics, which often uses various methods of counting, such as over-counting followed by a subtraction or a division.</p> <p>A recent Q on this site was : $ 7$ teams in a tourney must play $3$ games each. How many games must be played in total? My answer was ten and one-half games. Suppose we count $7$ teams $\times 3$ games per team$ =21$ games; We counted each game twice: Once for each team playing it. </p>
24,186
<p>Consider the code below:</p> <pre><code>s = Solve[(3 - Cos[4*x])*(Sin[x] - Cos[x]) == 2, x, InverseFunctions -&gt; True]; Select[s[[All, 1, 2]], Element[#, Reals] &amp;] </code></pre> <p>In MMA 8.0, I get </p> <pre><code>{-\[Pi], \[Pi]/2, \[Pi]} </code></pre> <p>but in MMA 9.0, I get an empty set { }</p> <p>Assuming the solution by MMA 8.0 is correct, can someone show me a work around for MMA 9.0? </p>
amr
950
<p>In my continuing mission to provide smartass answers, you can do, for example:</p> <pre><code>u = {-3, 3}; v = {1, 5}; d = Function[{u, v}, ((Abs[u[[1]]] - Abs[v[[1]]])^2 + (Abs[u[[2]]] - Abs[v[[2]]])^2)^(1/2)][u, v] </code></pre> <p>I think the reason why it's suggested to convert to pure functions is for performance, otherwise I don't think I'd bother. But maybe there are other reasons to do it.</p>
545,003
<p>I have a statement that I am trying to prove and I am getting stuck at the inductive hypothesis. This is my theorem:</p> <blockquote> <p>For all integers $n&gt;3$, the following is true: $n + 3 &lt; n!$.</p> </blockquote> <p>I have proven true for $n = 4$, and will assume true for some arbitrary value $k$, i.e.,</p> <p>$$k + 3 &lt; k!,$$</p> <p>and I want to prove for $k+1$, i.e.,</p> <p>$$(k+1) + 3 &lt; (k+1)!.$$</p> <p>Consider the $k+1$ term:</p> <p>$$(k+1)+3 = ?$$</p> <p>I am confused about how to approach the next step.</p> <p>OK, here is how I am proceeding. It seems really long, so if anyone has a better way let me know: $$ =(k+3)+1 $$ $$ &lt;(k!)+1 $$ $$ &lt;k!+k! $$ $$ =2k! $$ $$ &lt;(k+1)k! $$ $$ =(k+1)! $$ Therefore $(k+1)+3 &lt; (k+1)!$, as required.</p>
Community
-1
<p>Suppose $n+3&lt; n!$ and $(n+1)!\leq n+1+3$ , we see the following:</p> <p>$$(n+1)!\leq n+1+3 \Rightarrow (n+1)n!\leq n+1+3 $$</p> <p>$$\Rightarrow (n+1)(n+3)\leq(n+1)n!\leq n+1+3 $$</p> <p>$$(n+1)(n+3)\leq (n+1)+3 \Rightarrow (n+1)(n+3)-(n+1) \leq 3$$</p> <p>$$ \Rightarrow (n+1)(n+3-1)\leq 3$$</p> <p>$$(n+1)(n+2)\leq 3$$</p> <p>This should strike some thing s wrong... what is that?</p>
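An empirical check of the statement under discussion (Python, not part of the original answer):

```python
from math import factorial

# n + 3 < n! holds for every integer n >= 4 and fails for n <= 3.
checks = {n: n + 3 < factorial(n) for n in range(1, 21)}
print(checks)
```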
1,213,344
<p>A solid half-ball $H$ of radius $a$ with density given by $k(2a-\rho)$, where $k$ is a constant. Find its mass.</p> <p>You of course use spherical coordinates so $dV=\rho ^2 \sin\phi d\rho d\phi d\theta$. It is clear to see that the limits are $\rho \in [0,a]$ and $\theta \in [0,2\pi]$. The limits for $\phi$ are normally $[0,\pi]$ but it is apparently $[0,\pi/2]$ but I don't know how...</p> <p>Is it because it is a half sphere? If so then why can't you half the limits for $\theta$ instead or why not half it for both of the angle variables?</p>
Scientifica
164,983
<p><strong>Hints</strong>:</p> <p>1)We have $x\in [0,1]\Rightarrow x^{n+1}\le x^n$. Then prove that $\frac{x^n}{x^n+1}\ge\frac{x^{n+1}}{x^{n+1}+1}$. You can use the fact that for any real number $a\neq -1$ we have $\frac{a}{a+1}=1-\frac{1}{a+1}$.</p> <p>2)For any $x&gt;0$ we have $2x-1&lt;2x$</p>
874,697
<p>I'm trying to understand the supremum of a sequence of functions, so I came up with a trivial case as follows.</p> <p>Let $(f_n(x))$ be a sequence of functions with $n$ having a value of either $1$ or $2$, i.e., a sequence with only two elements.</p> <p>Now if we define $f_1(x)=1$ and $f_2(x)=x$, what is the sup of $(f_n(x))$?</p>
Clive Newstead
19,542
<p><strong>Hint:</strong> The supremum is a function of $x$, which you can define piecewise. Consider what the supremum is in the two cases $x \ge 1$ and $x &lt; 1$.</p>
533,534
<p>Let X = {x(i)} be a group of n data with mean = μ(x) and variance $= σ(x)^2$.</p> <p>We use the symbol S(x(i)) to represent the sum of all the x's.</p> <p>Similar notations will be used for the group Y.</p> <p>Suppose that Y is formed by adding an extra element x(n+1) to X and the value of that element is greater than μ(x).</p> <p>That is, [x(n+1) = μ(x) + d; where d > 0].</p> <p>Then we have μ(y) > μ(x) because</p> <p>$μ(y) = [S(x(i)) + (μ(x) + d)] / (n + 1)$</p> <p>$= [(S(x(i)) + μ(x)) + d] / (n + 1)$</p> <p>$= [nμ(x) + μ(x) + d] / (n + 1)$</p> <p>$= [(n + 1) μ(x) + d] / (n + 1)$</p> <p>$= μ(x) + [d / (n + 1)]$</p> <p>$&gt; μ(x)$</p> <p>From this, we can say that “if an item greater than the mean is added, the new mean is greater than the original.” The actual relation between μ(x) and μ(y) is μ(y) = μ(x) + δ, where δ = d / (n + 1).</p> <p>I am trying to get a simple and nice relation between $σ(x)^2$ and $σ(y)^2$ too. Using a similar derivation method as above, the closest I can get is $σ(y)^2 &gt; n σ(x)^2 / (n + 1)$.</p> <p>Is the above conclusion correct? Or is it possible to go further to get a much simpler relation like $σ(y)^2 &gt; σ(x)^2$?</p>
Cameron Buie
28,900
<p><strong>Hint</strong>: Find the two limits $\lim\limits_{x\to 0^+}f(x)$ and $\lim\limits_{x\to 0^-}f(x).$ Keep in mind that $f(x)$ has a different formula when $x&gt;0$ than it does when $x&lt;0$.</p>
2,848,891
<p>Find the number of solutions of $$\left\{x\right\}+\left\{\frac{1}{x}\right\}=1,$$ where $\left\{\cdot\right\}$ denotes the fractional part of the real number $x$.</p> <h2>My try:</h2> <p>When $x \gt 1$ we get</p> <p>$$\left\{x\right\}+\frac{1}{x}=1$$ $\implies$</p> <p>$$\left\{x\right\}=1-\frac{1}{x}.$$</p> <p>Letting $x=n+f$, where $n \in \mathbb{Z^+}$ and $ 0 \lt f \lt 1$, we get</p> <p>$$f=1-\frac{1}{n+f}.$$</p> <p>Using the hint given by J.G., I am continuing the solution:</p> <p>we have</p> <p>$$f^2+(n-1)f+1-n=0,$$ and solving we get</p> <p>$$f=\frac{-(n-1)+\sqrt{(n+3)(n-1)}}{2}$$ $\implies$</p> <p>$$f=\frac{\left(\sqrt{n+3}-\sqrt{n-1}\right)\sqrt{n-1}}{2}$$</p> <p>Now obviously $n \ne 1$, for $n=1$ would give $f=0$.</p> <p>So $n=2,3,4,5,\dots$ gives the values of $f$ as</p> <p>$\frac{\sqrt{5}-1}{2}$, $\sqrt{3}-1$, and so on, which gives infinitely many solutions.</p>
Ng Chung Tak
299,599
<p>Using <a href="https://en.wikipedia.org/wiki/Continued_fraction" rel="nofollow noreferrer"><strong>continued fraction</strong></a>:</p> <p>\begin{align} f+\frac{1}{n+f} &amp;= 1 \tag{$n&gt;1$, $0&lt;f&lt;1$} \\[5pt] n+f &amp;= n+1-\frac{1}{n+f} \\[5pt] x &amp;= n+\frac{n+f-1}{n+f} \\[5pt] &amp;= n+\frac{1}{\dfrac{n+f}{n+f-1}} \\[5pt] &amp;= n+\frac{1}{1+\dfrac{1}{n+f-1}} \\[5pt] x &amp;= n+\frac{1}{1+\dfrac{1}{x-1}} \tag{$\star$} \\[5pt] &amp;= \left[ n;\overline{1,(n-1)} \right] \\[5pt] \alpha &amp;= \frac{n+1+\sqrt{(n-1)(n+3)}}{2} \tag{$\alpha=x$} \\[5pt] \beta &amp;= \frac{n+1-\sqrt{(n-1)(n+3)}}{2} \tag{$\beta=\frac{1}{x}$} \end{align}</p> <p>where $\alpha$, $\beta$ are the roots of $(\star)$.</p> <blockquote> <p>Note that $\alpha \beta=1$, the symmetric roles for $\alpha$ and $\beta$.</p> </blockquote>
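<p>A quick numeric check of the closed form, sampling a few values of $n$:</p>

```python
# For n = 2, 3, ..., alpha = (n + 1 + sqrt((n-1)(n+3)))/2 should satisfy
# {alpha} + {1/alpha} = 1, with beta = 1/alpha the conjugate root.
from math import sqrt, floor

def frac(t):
    return t - floor(t)

for n in range(2, 12):
    alpha = (n + 1 + sqrt((n - 1) * (n + 3))) / 2
    assert abs(frac(alpha) + frac(1 / alpha) - 1) < 1e-9
print("verified for n = 2, ..., 11")
```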
2,068,906
<p>Recall, with the birthday problem, with 23 people, the odds of a shared birthday is APPROXIMATELY .5 (correct?)</p> <p>P(no sharing of dates with 23 people) = $$\frac{365}{365}*\frac{364}{365}*\frac{363}{365}*...*\frac{343}{365} $$</p> <p>$$= \frac{365!}{342!}*\frac{1}{365^{23}} $$</p> <p>I want to do this multiplication, but nothing I have can handle it. How can I know for sure it actually is around .5 ?</p> <p>$$\frac{365!}{342!}*\frac{1}{365^{23}} = .5$$</p>
Aloizio Macedo
59,234
<p>With respect to the question in the title, by doing the second line, you are making your calculator attempt to compute a number greater than $100^{200}$. It won't.</p> <p>By doing the first line, you are making a multiplication of about $20$ numbers close to $1$. It will handle this just fine.</p>
2,068,906
<p>Recall, with the birthday problem, with 23 people, the odds of a shared birthday is APPROXIMATELY .5 (correct?)</p> <p>P(no sharing of dates with 23 people) = $$\frac{365}{365}*\frac{364}{365}*\frac{363}{365}*...*\frac{343}{365} $$</p> <p>$$= \frac{365!}{342!}*\frac{1}{365^{23}} $$</p> <p>I want to do this multiplication, but nothing I have can handle it. How can I know for sure it actually is around .5 ?</p> <p>$$\frac{365!}{342!}*\frac{1}{365^{23}} = .5$$</p>
vadim123
73,324
<p>The following command works in LibreOffice, and would probably work in Excel as well:</p> <p>=COMBIN(365,342)*FACT(23)/(365^23)</p> <p>The key is that $365!$ and $342!$ are both enormous, but all the other numbers are manageable, so we need to find a built-in function to cancel these two monsters. <a href="https://en.wikipedia.org/wiki/Binomial_coefficient" rel="nofollow noreferrer">Binomial coefficients</a> will do.</p>
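<p>For what it's worth, the same cancellation can be done in Python (assuming Python 3.8+ for <code>math.comb</code>):</p>

```python
# C(365, 342) * 23! equals 365!/342! exactly, so the two huge factorials are
# cancelled symbolically and only the final ratio is converted to a float.
from math import comb, factorial

p_no_match = comb(365, 342) * factorial(23) / 365**23
print(round(p_no_match, 4))  # about 0.4927
```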
2,068,906
<p>Recall, with the birthday problem, with 23 people, the odds of a shared birthday is APPROXIMATELY .5 (correct?)</p> <p>P(no sharing of dates with 23 people) = $$\frac{365}{365}*\frac{364}{365}*\frac{363}{365}*...*\frac{343}{365} $$</p> <p>$$= \frac{365!}{342!}*\frac{1}{365^{23}} $$</p> <p>I want to do this multiplication, but nothing I have can handle it. How can I know for sure it actually is around .5 ?</p> <p>$$\frac{365!}{342!}*\frac{1}{365^{23}} = .5$$</p>
Rolazaro Azeveires
39,752
<p>You can make the calculation with just about any calculator out there. You only need the basics (* /). Stay clear of any large or small numbers, to avoid overflowing (or underflowing). So go divide-multiply-divide-multiply-and-so-on:</p> <p>$$P \approx 364 / 365 * 363 / 365 * 362 / 365 \ldots $$</p>
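<p>For example, the same divide-multiply strategy in Python, where every intermediate value stays between 0 and 1:</p>

```python
# P(no shared birthday among 23 people): 364/365 * 363/365 * ... * 343/365,
# dividing before multiplying so nothing ever overflows.
p = 1.0
for k in range(364, 342, -1):  # k = 364, 363, ..., 343
    p = p / 365 * k
print(round(p, 4))  # about 0.4927
```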
2,152,872
<p>I am working on a problem in Artin's Algebra related to the algebraic geometry discussed in Chapter 11. The problem number is 9.2, F.Y.I.</p> <p>Here goes the problem:</p> <blockquote> <p>Let $f_1, \dots, f_r$ be complex polynomials in the variables $x_1, \dots, x_n$, let $V$ be the variety of their common zeros, and let $I$ be the ideal of the polynomial ring $R = \mathbb{C}\left [ x_1, \dots, x_n \right ]$ they generate. Define a ring homomorphism from the quotient ring $\bar{R} = R/I$ to the ring $\mathcal{R}$ of continuous, complex-valued functions on $V$.</p> </blockquote> <p>I attempted to use the correspondence theorem w.r.t. the variety of a set of polynomials, i.e. the maximal ideals bijectively correspond to the points in $V$, and we may somehow define the continuous functions there. However, I cannot come up with any further idea. Also, the term 'continuous' here seems redundant, since I expect the homomorphism will carry polynomials to polynomials.</p> <p>I appreciate your participation and will be thankful for anything from hints to a full solution.</p>
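<p>For concreteness, the candidate I would expect (a sketch only; the well-definedness argument and the role of continuity still need checking) is the evaluation map:</p>

```latex
% Sketch: evaluation/restriction map. It is well defined because every
% element of I vanishes on V, so f(p) does not depend on the chosen
% representative f of the coset f + I.
\varphi \colon \bar{R} \longrightarrow \mathcal{R}, \qquad
\varphi(f + I) = \bigl(\, p \mapsto f(p) \,\bigr), \quad p \in V.
```

<p>Each $\varphi(f+I)$ is continuous because polynomials are continuous, but $\mathcal{R}$ contains far more than polynomial functions, which is presumably why the word 'continuous' appears in the statement.</p>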
M. Winter
415,941
<p>Mostly trial and error. What I have learned so far is that there are no <em>good</em> characterizations of degree sequences.</p>

<p><a href="https://i.stack.imgur.com/GpCpv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GpCpv.png" alt="enter image description here"></a></p>
2,122,389
<p>The problem goes so : you have a parking lot with 8 parking spaces and 8 cars, of which 4 are red and 4 are white. What is the probability of :</p> <p>a) 4 white cars being parked next to each other ?</p> <p>b) 4 white cars and 4 red cars being parked next to each other ?</p> <p>c) red and white cars being parked alternately ( red-white-red...) ?</p> <p>Any help will be greatly appreciated. :-)</p>
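<p>A brute-force check, assuming cars of the same colour are indistinguishable, so that all $\binom{8}{4}=70$ colour patterns are equally likely:</p>

```python
# Enumerate all colour patterns of 4 red (R) and 4 white (W) cars in 8 spaces.
from itertools import combinations

patterns = [''.join('W' if i in ws else 'R' for i in range(8))
            for ws in combinations(range(8), 4)]

a = sum('WWWW' in p for p in patterns)                    # a) 4 whites together
b = sum('WWWW' in p and 'RRRR' in p for p in patterns)    # b) both colour blocks
c = sum(all(p[i] != p[i + 1] for i in range(7))
        for p in patterns)                                # c) alternating colours
print(a, b, c, len(patterns))  # 5 2 2 70
```

<p>So the probabilities would be $5/70=1/14$, $2/70=1/35$ and $2/70=1/35$ respectively; note that c) counts both starting colours, so if only red-first counts, it would be $1/70$.</p>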
DXT
372,201
<p>$$\int \ln{\left(1+2m\cos{x}+2m^2\right)}dx=x\ln{(1+2m^2)}+\int\ln{\left(1+\frac{2m}{1+2m^2}\cos{x}\right)}dx=$$ where $\displaystyle \left|\frac{2m\cos{x}}{1+2m^2}\right|&lt;1$</p> <p>$\displaystyle =x\ln{(1+2m^2)}+\sum\limits_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\left(\frac{2m}{1+2m^2}\right)^n\int\cos^n{x}dx$</p> <p>$\displaystyle =x\ln{(1+2m^2)}+\sum\limits_{n=1}^{\infty}\frac{(-1)^{n+1}}{2^nn}\left(\frac{2m}{1+2m^2}\right)^n\sum\limits_{k=0}^n\binom{n}{k}\frac{e^{i(n-2k)x}}{i(n-2k)}+C$</p> <p>(For even $n$, the $k=\frac{n}{2}$ term of the inner sum is constant, so it integrates to $x$ rather than $\frac{e^{i(n-2k)x}}{i(n-2k)}$.)</p>
23,566
<p>I love math, and I used to be very good at it. The correct answers came fast and intuitively. I never studied, and redid the demonstrations live for the tests (sometimes inventing new ones). I was the one who answered the tricky questions in class (8 hours of math/week in high school)... You get the idea.</p> <p>As such I used to have a lot of confidence in my math abilities, and didn't think twice about saying the first idea that came to mind when answering a question.</p> <p>That was more than 10 years ago, and I (almost) haven't done any math since then. I've graduated in a scientific field that requires little of it (I prefer not to give details) and worked for some time.</p> <p>Now I'm back at school (master of statistics) and I need to do math, once again. I make mistakes and blunders with the same confidence I used to have when I was good, which is extremely embarrassing when it happens in class.</p> <p>I feel like a tone-deaf musician and an ataxic painter at the same time.</p> <p>One factor that probably plays a role is that I learnt math in my mother tongue, and I'm now using it in English, but I wouldn't expect it to make such a difference.</p> <p>I know that it will require practice and hard work, but I need direction.</p> <p>Any help is welcome.</p> <p>Kind regards,</p> <p>-- Mathemastov</p>
Bill Dubuque
242
<p>I'd like to emphasize a remark that Eric made in a comment to his answer. Introspection is essential when learning mathematics - not only to analyze problem solving techniques - but also in many other ways. The web of mathematics is connected in many mysterious and marvelous ways. Spending a little effort attempting to discover these links can go a long way towards better understanding the essence of the matter. After you solve (or fail to solve) a problem you should spend some time trying to abstract a bit from the specific problem. Can it easily be generalized from a specific trick to a general method that you can add to your tools for later reuse? For example, if it's a problem about numbers can it be generalized to one about functions? If so, can the proof be simplified by appealing to properties unique to functions such as derivatives? For a simple example <a href="https://math.stackexchange.com/questions/6310/n-mid-an-bn-longrightarrow-n-mid-fracan-bna-b/6448#6448">see here,</a> or consider the trivial proof of the $\rm\:abc$ theorem for polynomials (vs. difficult conjecture for numbers), which proof exploits to the hilt the power of derivatives (viz. wronskians). </p> <p>Eventually, with enough training, you will be able to effortlessly move back and forth between the general and the specific, and better recognize the essence of the matter when sizing up problems - the same way a chess grandmaster can evaluate a chessboard in a single glance. Don't be frustrated if this doesn't come easy - or if it has atrophied - because - just like chess - it takes much practice to remain proficient. Unlike vision, language, etc. these mental faculties were not programmed into our minds by evolution, so one must continually reprogram these faculties for other purposes - whether they be chess or mathematical reasoning (it's no accident that one frequently sees strong correlation between chess and math abilities - both depend strongly upon pattern-matching, e.g. 
see the famous studies by the psychologist de Groot: <a href="http://www.google.com/#hl=en&amp;sugexp=ldymls&amp;xhr=t&amp;q=de+groot+though+choice+chess" rel="nofollow noreferrer">Thought and choice in chess</a>).</p>