2,545,226
<p>Suppose $a_n$ is a positive sequence but not necessarily monotonic. </p> <p>For the series $\sum_{n=1}^\infty \frac{1}{a_n}$ and $\sum_{n=1}^\infty \frac{a_n}{n^2}$ I can find examples where both diverge: $a_n = n$, and where one converges and the other diverges: $a_n = n^2$.</p> <p>Can we find an example where both converge?</p>
RRL
148,510
<p>Both series cannot converge.</p> <p>Suppose $\sum a_n/n^2$ converges. Since the divergent harmonic series can be written as </p> <p>$$\sum_{n=1}^\infty \frac{1}{n} = \sum_{\frac{1}{n} \leqslant \frac{a_n}{n^2}} \frac{1}{n} + \sum_{\frac{1}{n} &gt; \frac{a_n}{n^2}} \frac{1}{n},$$</p> <p>and the first series on the RHS converges, it follows that the second series diverges.</p> <p>Therefore,</p> <p>$$\sum_{n=1}^\infty \frac{1}{a_n} &gt; \sum_{\frac{1}{a_n} &gt; \frac{1}{n}} \frac{1}{a_n} &gt; \sum_{\frac{1}{a_n} &gt; \frac{1}{n}} \frac{1}{n} = \sum_{\frac{1}{n} &gt; \frac{a_n}{n^2}} \frac{1}{n}= +\infty$$</p>
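A side note on this conclusion: it can also be seen termwise, since by AM-GM $\frac{1}{a_n}+\frac{a_n}{n^2}\ge\frac{2}{n}$ for every positive $a_n$, so the two series together dominate the harmonic series. The Python sketch below (my own illustration, not part of the answer) spot-checks that termwise inequality on random positive values.

```python
import random

# AM-GM gives 1/a + a/n^2 >= 2*sqrt(1/n^2) = 2/n for every a > 0, so
# sum(1/a_n) + sum(a_n/n^2) dominates the divergent harmonic series.
random.seed(42)

def termwise_gap(a, n):
    """(1/a + a/n^2) - 2/n, which AM-GM says is nonnegative."""
    return 1.0 / a + a / n**2 - 2.0 / n

min_gap = min(
    termwise_gap(random.uniform(1e-6, 1e6), n) for n in range(1, 10_001)
)
```

Equality holds only when $a_n = n$, which already makes both series diverge.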
3,161,997
<p>I'm taking a course in Probability and I'm asked to prove the following statement:</p> <p>Let <span class="math-container">$\Omega$</span> be a non-empty arbitrary set, then </p> <ol> <li><span class="math-container">$\text {$E_1$={$\emptyset$,$\Omega$}}$</span> is <span class="math-container">$\sigma-\text {algebra}$</span></li> <li><span class="math-container">$\text {$E_2$=P($\Omega$)}$</span> is <span class="math-container">$\sigma-\text {algebra}$</span></li> <li>Every family of sets <span class="math-container">$\text {$E\subseteq$P($\Omega$)}$</span> that is <span class="math-container">$\sigma-\text {algebra}$</span> has the property that <span class="math-container">$\text E_1 \subseteq E \subseteq E_2$</span></li> </ol> <p>Where <span class="math-container">$\text P(\Omega)$</span> is the power set of <span class="math-container">$\Omega$</span>.</p> <p>I've been able to prove the first and the second properties based on the definition of <span class="math-container">$\sigma-\text {algebra}$</span> but I'm confused about the third one. Do I have to prove that <span class="math-container">$E_1$</span> and <span class="math-container">$E_2$</span> are the smallest and the largest <span class="math-container">$\sigma-\text {algebras}$</span> respectively? If so, how can I do it? Any hint would be appreciated, thank you.</p>
Hans Lundmark
1,242
<p>The variable <span class="math-container">$t$</span> is irrelevant here.</p> <p>The partial derivative <span class="math-container">$f'_x(x,y)$</span> measures the slope of the graph <span class="math-container">$z=f(x,y)$</span> in the <span class="math-container">$x$</span> direction at the point <span class="math-container">$(x,y)$</span>, and of course that slope can be different <strong>at different points</strong>. So it depends on <span class="math-container">$x$</span> and <span class="math-container">$y$</span>, in general.</p>
3,161,997
<p>I'm taking a course in Probability and I'm asked to prove the following statement:</p> <p>Let <span class="math-container">$\Omega$</span> be a non-empty arbitrary set, then </p> <ol> <li><span class="math-container">$\text {$E_1$={$\emptyset$,$\Omega$}}$</span> is <span class="math-container">$\sigma-\text {algebra}$</span></li> <li><span class="math-container">$\text {$E_2$=P($\Omega$)}$</span> is <span class="math-container">$\sigma-\text {algebra}$</span></li> <li>Every family of sets <span class="math-container">$\text {$E\subseteq$P($\Omega$)}$</span> that is <span class="math-container">$\sigma-\text {algebra}$</span> has the property that <span class="math-container">$\text E_1 \subseteq E \subseteq E_2$</span></li> </ol> <p>Where <span class="math-container">$\text P(\Omega)$</span> is the power set of <span class="math-container">$\Omega$</span>.</p> <p>I've been able to prove the first and the second properties based on the definition of <span class="math-container">$\sigma-\text {algebra}$</span> but I'm confused about the third one. Do I have to prove that <span class="math-container">$E_1$</span> and <span class="math-container">$E_2$</span> are the smallest and the largest <span class="math-container">$\sigma-\text {algebras}$</span> respectively? If so, how can I do it? Any hint would be appreciated, thank you.</p>
justanotheruser
643,568
<p>I interpreted your question in two ways.</p> <p><strong>Interpretation 1:</strong></p> <p>Let's take an example:</p> <p><span class="math-container">$f(x(t),y(t))=x^2+y$</span> </p> <p>and <span class="math-container">$x=t, y=2t$</span></p> <p>Our partial derivative with respect to x is <span class="math-container">$f_x=2x$</span>.</p> <p>However,</p> <p><span class="math-container">$x=t, y=2t \implies t=y/2=x$</span></p> <p>So, our partial derivative with respect to x could be written as <span class="math-container">$f_x=y$</span>, since x and y are related by a shared parameter t.</p> <p><strong>Interpretation 2:</strong></p> <p>Let's take another example:</p> <p><span class="math-container">$f(x(t),y(t))=x^2+xy^2$</span></p> <p>So our partial derivative with respect to x is <span class="math-container">$f_x=2x+y^2$</span>, since the derivative of <span class="math-container">$ax$</span> with respect to x is just <span class="math-container">$a$</span>.</p> <p>It so happens that our <span class="math-container">$a$</span> in this case is <span class="math-container">$y^2$</span>. Remember that when dealing with partial derivatives and gradients, you are measuring a slope; in 3D and higher dimensions, slope can depend on more than one variable. Think of it like the steepness of climbing a mountain: how steep the climb is depends on both x (how you approach the climb) and y (assuming the mountain gets steeper as you proceed to the top).</p>
2,072,347
<p>I was trying to solve this problem, but couldn't figure it out. The solution goes like this:</p> <p><a href="https://i.stack.imgur.com/1KSWH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1KSWH.png" alt="http://www.tkiryl.com/Calculus/Problems/Section%201.4/Calculating%20Limits/Solutions/Calc_S_59.png (I don&#39;t have the reputation to post the image)"></a></p> <p>I don't understand the first step. Why is the limit multiplied by $\frac{4x}{5x}$ and by $\frac{5}{4}$?</p>
Community
-1
<p>$$\lim \limits_{x \to 0} \frac{\sin(5x)}{\sin(4x)} = \lim \limits_{x \to 0} \frac{\sin(5x)}{x} \cdot \frac{x}{\sin(4x)}$$</p> <p>Since $$\lim \limits_{\theta \to 0}\frac{ \sin(a\theta)}{\theta}=a,$$ the first factor tends to $5$ and the second to $\frac14$, so the limit</p> <p>$$=\frac54$$</p>
2,072,347
<p>I was trying to solve this problem, but couldn't figure it out. The solution goes like this:</p> <p><a href="https://i.stack.imgur.com/1KSWH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1KSWH.png" alt="http://www.tkiryl.com/Calculus/Problems/Section%201.4/Calculating%20Limits/Solutions/Calc_S_59.png (I don&#39;t have the reputation to post the image)"></a></p> <p>I don't understand the first step. Why is the limit multiplied by $\frac{4x}{5x}$ and by $\frac{5}{4}$?</p>
barak manos
131,263
<p>$\lim\limits_{x\to0}\frac{\sin(5x)}{\sin(4x)}=$</p> <p>$\lim\limits_{x\to0}\frac{\sin(5x)\cdot5x\cdot4x}{\sin(4x)\cdot5x\cdot4x}=$</p> <p>$\lim\limits_{x\to0}\frac{\sin(5x)\cdot4x\cdot5x}{5x\cdot\sin(4x)\cdot4x}=$</p> <p>$\lim\limits_{x\to0}\left(\frac{\sin(5x)}{5x}\cdot\frac{4x}{\sin(4x)}\cdot\frac{5x}{4x}\right)=$</p> <p>$\left(\lim\limits_{x\to0}\frac{\sin(5x)}{5x}\right)\cdot\left(\lim\limits_{x\to0}\frac{4x}{\sin(4x)}\right)\cdot\left(\lim\limits_{x\to0}\frac{5x}{4x}\right)=$</p> <p>$\left(\lim\limits_{x\to0}\frac{\sin(5x)}{5x}\right)\cdot\left(\lim\limits_{x\to0}\frac{4x}{\sin(4x)}\right)\cdot\left(\lim\limits_{x\to0}\frac{5}{4}\right)=$</p> <p>$1\cdot1\cdot\frac54=$</p> <p>$\frac54$</p>
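Both answers reduce the problem to the standard limit $\lim_{\theta\to 0}\sin(a\theta)/\theta = a$. A quick numerical sanity check of the value $5/4$ (my own illustration, not part of the answer):

```python
import math

# lim_{x->0} sin(5x)/sin(4x) should equal 5/4 = 1.25; the computed
# ratio gets closer to 1.25 as x shrinks toward 0.
ratios = [math.sin(5 * x) / math.sin(4 * x) for x in (1e-2, 1e-4, 1e-6)]
```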
2,306,895
<p>I want to find $Hom_{\mathtt{Grp}}(\mathbb{C}^\ast,\mathbb{Z})$, where $\mathbb{C}^\ast$ is the multiplicative group, and $\mathbb{Z}$ is additive. $\mathbb{C}$ is the additive group of complex numbers. We have the following map: </p> <p>$\large{\mathbb{C} \xrightarrow{exp} \mathbb{C}^\ast \xrightarrow{?} \mathbb{Z}}$</p> <p>where the fiber of $exp$ is $\mathbb{Z}$</p> <p>And I don't know if this can help, any hint?</p>
lhf
589
<p>A group homomorphism $\phi:\mathbb{C}^* \to \mathbb{Z}$ must be trivial.</p> <p>Let $\omega \in \mathbb{C}^*$ and $n \in \mathbb N$. Then there is $\theta \in \mathbb{C}^*$ such that $\omega=\theta^n$ and so $\phi(\omega)= n \phi(\theta)$.</p> <p>Therefore, $\phi(\omega)$ is a multiple of $n$ for all $n \in \mathbb N$. This can only happen if $\phi(\omega)=0$.</p> <p>In short, $\mathbb{C}^*$ is a <a href="https://en.wikipedia.org/wiki/Divisible_group" rel="noreferrer">divisible group</a> but $\mathbb{Z}$ is not. (The image of $\phi$ is either trivial or isomorphic to $\mathbb{Z}$.)</p>
1,987,358
<p>We know that the Riemann sum gives us the following formula for a function <span class="math-container">$f\in C^1$</span>:</p> <blockquote> <p><span class="math-container">$$\lim_{n\to \infty}\frac 1n\sum_{k=0}^n f\left(\frac kn\right)=\int_0^1f(x) dx.$$</span></p> </blockquote> <p>I am looking for an example where the exact calculation of <span class="math-container">$\int f$</span> would be interesting with a Riemann sum.</p> <p>We usually use integrals to calculate a Riemann sum, but I am interested in the other direction.</p> <hr> <p><em>Edit.</em></p> <p>I actually found an example of my own today. You can compute </p> <p><span class="math-container">$$I(\rho)=\int_0^\pi \log(1-2\rho \cos \theta+\rho^2)\mathrm d \theta$$</span></p> <p>using Riemann sums.</p>
Nilotpal Sinha
60,930
<p>Not a direct answer to your question, but I find the following representation of the Riemann sum interesting. Even though it is trivial, these representations show that the sequence of primes or the sequence of composites behaves in much the same way as the sequence of natural numbers in terms of asymptotic growth rate. (<em>Too long for a comment, hence posting as an answer</em>)</p> <p>Let $p_n$ be the $n$-th prime number and $c_n$ be the $n$-th composite number; then,</p> <p>$$ \lim_{n \to \infty} \frac{1}{n} \sum_{r=1}^{n} f\bigg(\frac{p_r}{p_n}\bigg) = \int_{0}^{1}f(x)dx. $$</p> <p>$$ \lim_{n \to \infty} \frac{1}{n} \sum_{r=1}^{n} f\bigg(\frac{c_r}{c_n}\bigg) = \int_{0}^{1}f(x)dx. $$</p>
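For readers who want to see the prime version of the formula in action, here is a small numerical experiment (my own illustration; the loose tolerance reflects the slow, roughly logarithmic convergence one would expect from the prime number theorem, which is my assumption and not a claim of the answer). It uses $f(x)=x^2$, whose integral over $[0,1]$ is $1/3$:

```python
# Compare (1/n) * sum f(p_r / p_n) with the integral of f over [0, 1],
# using f(x) = x^2 (whose integral is 1/3) and the first 10,000 primes.
def first_primes(n):
    limit = 120_000  # ample: the 10,000th prime is 104,729
    sieve = bytearray([1]) * limit
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytes(len(range(i * i, limit, i)))
    return [i for i in range(limit) if sieve[i]][:n]

primes = first_primes(10_000)
riemann = sum((p / primes[-1]) ** 2 for p in primes) / len(primes)
exact = 1 / 3
```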
1,987,358
<p>We know that the Riemann sum gives us the following formula for a function <span class="math-container">$f\in C^1$</span>:</p> <blockquote> <p><span class="math-container">$$\lim_{n\to \infty}\frac 1n\sum_{k=0}^n f\left(\frac kn\right)=\int_0^1f(x) dx.$$</span></p> </blockquote> <p>I am looking for an example where the exact calculation of <span class="math-container">$\int f$</span> would be interesting with a Riemann sum.</p> <p>We usually use integrals to calculate a Riemann sum, but I am interested in the other direction.</p> <hr> <p><em>Edit.</em></p> <p>I actually found an example of my own today. You can compute </p> <p><span class="math-container">$$I(\rho)=\int_0^\pi \log(1-2\rho \cos \theta+\rho^2)\mathrm d \theta$$</span></p> <p>using Riemann sums.</p>
Adren
405,819
<p>Here is an example ...</p> <p>For each $z\in\mathbb{C}$ with $\vert z\vert\neq 1$, consider :</p> <p>$$F(z)=\int_0^{2\pi}\ln\left|z-e^{it}\right|\,dt$$</p> <p>It is possible to get an explicit form for $F(z)$, using Riemann sums.</p> <p>For each integer $n\ge1$, consider :</p> <p>$$S_n=\frac{2\pi}{n}\sum_{k=0}^{n-1}\ln\left|z-e^{2ik\pi/n}\right|$$which is the $n-$th Riemann sum attached to the previous integral (and a uniform subdivision of $[0,2\pi]$ with constant step $\frac{2\pi}{n}$).</p> <p>Now :$$S_n=\frac{2\pi}{n}\ln\left|\prod_{k=0}^{n-1}\left(z-e^{2ik\pi/n}\right)\right|=\frac{2\pi}{n}\ln\left|z^n-1\right|$$and you can easily show that :$$F(z)=\left\{\matrix{2\pi\ln\left|z\right|&amp; \mathrm{ if}\left|z\right|&gt;1\cr0 &amp; \mathrm{otherwise}}\right.$$</p>
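The identity $S_n=\frac{2\pi}{n}\ln|z^n-1|$ makes the limit easy to check numerically. The sketch below (my own check; the sample points $z=2$ and $z=1/2$ are arbitrary) compares $S_n$ with the two closed-form values:

```python
import math

# S_n(z) = (2*pi/n) * ln|z^n - 1| should approach 2*pi*ln|z| when
# |z| > 1, and 0 when |z| < 1.
def S(z, n):
    return (2 * math.pi / n) * math.log(abs(z ** n - 1))

outside = S(2.0, 60)   # compare with 2*pi*ln(2)
inside = S(0.5, 60)    # compare with 0
```

Already at $n=60$ the sum agrees with the closed form to machine precision, since $|z^n-1|$ is dominated by $|z|^n$ (or by $1$) exponentially fast.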
3,469,252
<p>At first: I am new to differential equations, so this question might seem a little bit obvious.</p> <p>The differential equation was <span class="math-container">$y'x^3 = 2y -5$</span>. I rearranged it to: <span class="math-container">$\frac{dx^3}{dx} = \frac{d(2y-5)}{dy}$</span>.</p> <p>The problem is, if I differentiate it, there is no y left: <span class="math-container">$3x^2 = 2$</span>, so how do I do this?</p> <p>Thanks a lot! </p>
Z Ahmed
671,540
<p>The ODE is linear <span class="math-container">$$y'-2\frac{y}{x^3}=\frac{-5}{x^3}.$$</span> Its integrating factor is <span class="math-container">$I=\exp[\int \frac{-2}{x^3} dx]=e^{1/x^2}.$</span> So its solution is <span class="math-container">$$y=e^{-1/x^2} \int e^{1/x^2}~ \frac{-5}{x^3} dx+ C e^{-1/x^2} \implies y= \frac{5}{2} e^{-1/x^2} \int e^u du+Ce^{-1/x^2}, \quad u=\frac{1}{x^2} $$</span> <span class="math-container">$$\implies y=\frac{5}{2}+C e^{-1/x^2}$$</span></p>
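A finite-difference check of the final solution (my own verification; the constant $C$ and the sample points are arbitrary):

```python
import math

# y(x) = 5/2 + C*exp(-1/x^2) should satisfy y' * x^3 = 2y - 5; check the
# residual with a central finite difference at a few arbitrary points.
C = 3.0

def y(x):
    return 2.5 + C * math.exp(-1.0 / x**2)

h = 1e-6
max_residual = max(
    abs((y(x + h) - y(x - h)) / (2 * h) * x**3 - (2 * y(x) - 5))
    for x in (0.7, 1.3, 2.5)
)
```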
525,885
<p>Why is $$\frac{\sum_{i=1}^n a_i^{n+1}}{\sum_{i=1}^{n}a_i^n} \geq \frac{\sum_{i=1}^n a_i}{n}$$ where $n$ is some positive natural number, and all $a_i$s are assumed to be positive real number?</p>
Hagen von Eitzen
39,174
<p><em>Hint:</em> compare $$ \sum_{i=1}^n a_i^n\cdot \frac{\sum_{i=1}^n a_i}{n}$$ with $$ \sum_{i=1}^n a_i^n\cdot a_i$$</p>
29,670
<p>I have $a_k=\frac1{(k+1)^\alpha}$ and $c_k=\frac1{(k+1)^\lambda}$, where $0&lt;\alpha&lt;1$ and $0&lt;\lambda&lt;1$, and we have an infinite sequence $x_k$ with the following evolution equation: $$ x_{k+1}=\left(1-a_{k+1}\right)x_{k}+a_{k+1}c_{k+1}^{2} $$ I have proven that $x_k$ is bounded and obviously positive. How can I find its limit?</p>
mjqxxxx
5,546
<p>As pointed out in the comments, a general solution to $x_{n+1}=f_n x_n + g_n$ is possible. Define $F_0=1$ and $F_{n+1} = \prod_{i=0}^{n} f_i^{-1} = f_n^{-1} F_{n}$ for $n\ge 0$. Then $f_n = F_{n} / F_{n+1}$, and the recurrence relation becomes $$ F_{n+1} x_{n+1} = F_{n} x_{n} + F_{n+1} g_n. $$ This has the solution $F_{n+1} x_{n+1} = F_{0} x_{0} + \sum_{i=0}^{n} F_{i+1} g_{i}$, or $$ x_{n+1} = \frac{F_0}{F_{n+1}} x_{0} + \sum_{i=0}^{n} \frac{F_{i+1}}{F_{n+1}} g_{i} = x_{0}\prod_{i=0}^{n} f_{i} + \sum_{i=0}^{n} g_{i} \prod_{j=i+1}^{n} f_{j}. $$ Note that the final product is empty when $i=n$, and has the value $1$ in that case. In the specific problem given, $f_{i}=1-(i+2)^{-\alpha}$ and $g_{i} = (i+2)^{-\alpha-2\lambda}$ (up to possible $\pm 1$ mistakes in either the problem or my solution). The partial products $\prod_{i=0}^{n} f_{i}$ tend to zero, so the first term vanishes for large $n$, and the limit of the sequence is independent of $x_0$. The result is that $$ \lim_{n\rightarrow\infty}x_{n} = \lim_{n\rightarrow\infty}\left[\sum_{i=0}^{n} (i+2)^{-\alpha-2\lambda} \prod_{j=i+1}^{n} \left(1-(j+2)^{-\alpha}\right)\right], $$ provided that the limit exists (which doesn't seem obvious).</p>
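The closed form can be checked against direct iteration (my own sketch; the values of $\alpha$, $\lambda$, $x_0$, and the horizon are arbitrary):

```python
# Direct iteration of x_{n+1} = f_n*x_n + g_n versus the closed form
# x_N = x_0*prod(f_i) + sum_i g_i*prod_{j>i} f_j, with the f_i, g_i of
# the question (alpha, lambda, x_0 chosen arbitrarily for the check).
alpha, lam, x0, N = 0.6, 0.4, 5.0, 200

f = [1 - (i + 2) ** (-alpha) for i in range(N)]
g = [(i + 2) ** (-alpha - 2 * lam) for i in range(N)]

x = x0
for i in range(N):          # direct iteration
    x = f[i] * x + g[i]

closed = x0
for fi in f:                # x_0 * prod f_i
    closed *= fi
for i in range(N):          # + sum_i g_i * prod_{j>i} f_j
    tail = g[i]
    for j in range(i + 1, N):
        tail *= f[j]
    closed += tail
```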
871,744
<p>When A and B are square matrices of the same order, and O is the zero square matrix of the same order, prove or disprove:- $$AB=0 \implies A=0 \text{ or } \ B=0$$</p> <p>I proved it as follows:-</p> <p>Assume $A \neq O$ and $ B \neq O$: then, $$ |A||B| \neq 0 $$ $$ |AB| \neq 0 $$ $$ AB \neq O $$ $$ \therefore A \neq O\ and\ B \neq O \implies AB \neq O $$ $$ \neg[ AB \neq O] \implies \neg [ A \neq O\ and\ B \neq O ] $$ $$AB=O \implies A=O \text{ or } \ B=O$$</p> <p>But when considering, A := \begin{pmatrix} 1&amp;1 \\1&amp;1 \end{pmatrix} and B:= \begin{pmatrix} -1&amp; 1\\ 1 &amp;-1 \end{pmatrix}then, AB=O and A$\neq $O and B $\neq$ O</p> <p>I can't figure out which one and where I went wrong.</p>
DGRasines
96,044
<p>You are saying that if $A \neq O$ then $\det(A) \neq 0$, which is false in general. Consider any diagonal matrix different from $O$ which has at least one zero on the diagonal.</p>
276,948
<p>Let $a_n$ be a sequence of integers such that infinitely many terms are non-zero. We need to show that either the power series $\sum a_n x^n$ converges for all $x$ or the radius of convergence is at most $1$. Need some hint. Thank you.</p>
P..
39,722
<p>The <a href="http://en.wikipedia.org/wiki/Radius_of_convergence#Theoretical_radius" rel="nofollow">radius of convergence</a>, $R$, is given by $$\dfrac1R=\limsup_{n\to\infty}\sqrt[n]{|a_n|}\geq 1, \ \text{if } a_n\in\mathbb Z \text{ with infinitely many nonzero terms}.$$</p>
849,583
<p>I have several derivatives to find:</p> <blockquote> <p>For <span class="math-container">$g(s)=3s^3-s+4$</span>, <span class="math-container">$g'(s)=$</span></p> <p>For <span class="math-container">$p(t)=\frac1t+t^2$</span>, <span class="math-container">$\frac{\mathrm dp}{\mathrm dt}=$</span></p> <p>For <span class="math-container">$w(u)=\sqrt u-2u^2-10$</span>, <span class="math-container">$\frac{\mathrm dw}{\mathrm du}=$</span></p> </blockquote> <p>Each has multiple terms, and as a result is difficult to determine how to solve.<br /> I need to find out how to go about solving these, for example I can't use the power rule because these functions have more than just one term.</p> <p>I am looking for insight on what steps to take.</p>
rschwieb
29,335
<p>Hints:</p> <p>All three problems can be solved by combining three facts:</p> <ol> <li>$D_x(f(x)\pm g(x))=D_x(f(x))\pm D_x(g(x))$</li> <li>$D_x(cf(x))=cD_x(f(x))$ for any constant $c$</li> <li>$D_x(x^n)=nx^{n-1}$ for any $n\neq 0$ (including $n=-1$ and $n=\frac12$.)</li> </ol> <hr> <p>In the first problem, the submitted solution takes $-s$ and replaces it with $+s$, which certainly isn't the derivative of $-s$ with respect to $s$. In the second solution, it looks like the user doesn't know what the derivative of $1/t$ is, and the third solution looks overcomplicated and contains an $x$ for some reason.</p>
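Applying the three rules gives $g'(s)=9s^2-1$, $\frac{\mathrm dp}{\mathrm dt}=-\frac{1}{t^2}+2t$, and $\frac{\mathrm dw}{\mathrm du}=\frac{1}{2\sqrt u}-4u$. A central-difference spot check of those closed forms (my own addition; the evaluation points are arbitrary):

```python
import math

# Spot-check the derivatives obtained from rules 1-3 against a central
# finite difference at one arbitrary point each.
def g(s): return 3 * s**3 - s + 4
def p(t): return 1 / t + t**2
def w(u): return math.sqrt(u) - 2 * u**2 - 10

def dg(s): return 9 * s**2 - 1                     # rules 1-3
def dp(t): return -1 / t**2 + 2 * t                # rule 3 with n = -1
def dw(u): return 1 / (2 * math.sqrt(u)) - 4 * u   # rule 3 with n = 1/2

def central(fn, x, h=1e-6):
    return (fn(x + h) - fn(x - h)) / (2 * h)

max_err = max(
    abs(central(g, 2.0) - dg(2.0)),
    abs(central(p, 1.5) - dp(1.5)),
    abs(central(w, 4.0) - dw(4.0)),
)
```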
1,111,168
<p>Do we say that $0$ has $n$ $n$th roots, all nondistinct, or only one?</p> <p>I don't think it makes any difference, but I'm curious what the convention is.</p>
quid
85,306
<p>There is, as is visible, no universally agreed-upon convention. However, I would argue in favor of taking, by and large, the number of $n$-th roots of an element $a$ (in some structure) to be the <strong>cardinality of the set</strong> of solutions of $X^n = a$, so that in the complex and real numbers, and in any other field, $0$ is the unique root of $0$.</p> <p>The alternative convention is arguably convenient over the complex numbers; however, already in the real numbers the situation is less clear (though one might still present reasonable arguments in its favor).</p> <p>Yet roots are also considered in other structures.</p> <ul> <li><p>Roots of unity, so roots of $1$, in finite fields play an important role, as do for example quadratic residues; for the <a href="http://en.wikipedia.org/wiki/Quadratic_residue" rel="nofollow noreferrer">Legendre symbol</a> $0$ is treated separately, corresponding to the convention that it has a unique square-root modulo $p$, and typically when one wonders about the number of roots of unity in a finite field one will consider the cardinality of the set (see an <a href="https://mathoverflow.net/questions/58357/number-of-n-th-roots-of-unity-over-finite-fields">MO question on that subject</a>). </p></li> <li><p>Further, looking beyond fields, the situation becomes even more cumbersome for other conventions. For example in $\mathbb Z/4\mathbb Z$ we have $0$ and $2$ as square-roots of $0$. Should $0$ still be counted twice? Should $2$ also be counted twice, after all $(X+2)^2 = X^2$? </p></li> <li><p>Still further, it is common to consider roots of (certain) matrices. There is an infinitude of real $2 \times 2$ matrices whose square is the $0$-matrix. They are all square-roots of the $0$-matrix, among them the $0$-matrix itself.</p></li> </ul> <p>In each case, it seems the cardinality of the set of solutions makes some sense, while the assigning of multiplicities becomes rather more unclear (I might be missing something, though).</p>
3,114,208
<p>Say I have a biased coin that shows heads with probability <span class="math-container">$p \in ]1/3,1/2[$</span> and I initially have capital of <span class="math-container">$100 $</span>EUR. Every time heads is shown, my capital is doubled; in the other case I pay half of my capital. Let <span class="math-container">$X_{n}$</span> denote my capital after the <span class="math-container">$n$</span>th flip. </p> <p><span class="math-container">$1.$</span> Show that <span class="math-container">$\lim_{n \to \infty}\mathbb E[X_{n}]=\infty$</span></p> <p><span class="math-container">$2.$</span> Show that <span class="math-container">$X_{n}\to 0$</span> a.s.</p> <p>My idea on <span class="math-container">$1.$</span>:</p> <p>Let <span class="math-container">$R_{n}$</span> denote whether heads <span class="math-container">$(1)$</span> or tails <span class="math-container">$(0)$</span> is flipped on the <span class="math-container">$n$</span>th attempt. It follows that </p> <p><span class="math-container">$R_{n}$</span>~<span class="math-container">$\operatorname{Ber}(p)$</span>.</p> <p>Note that <span class="math-container">$X_{0}=100$</span>, and <span class="math-container">$X_{n+1}=100\prod_{i=1}^{n+1}(\frac{1}{2}+\frac{3}{2}R_{i})$</span></p> <p><span class="math-container">$\mathbb E[X_{n+1}]=100\mathbb E[\prod_{i=1}^{n+1}(\frac{1}{2}+\frac{3}{2}R_{i})]=100\prod_{i=1}^{n+1}\mathbb E[(\frac{1}{2}+\frac{3}{2}R_{i})]$</span></p> <p>and then, since the <span class="math-container">$R_{i}$</span> are i.i.d.:</p> <p><span class="math-container">$100(\mathbb E[\tfrac{1}{2}+\tfrac{3}{2}R_{1}])^{n+1}=100[2P(R_{1}=1)+\tfrac{1}{2}P(R_{1}=0)]^{n+1}=100[2p+\tfrac{1}{2}(1-p)]^{n+1}\xrightarrow{n\to \infty}\infty$</span> </p> <p>since <span class="math-container">$p &gt; \frac{1}{3}$</span></p> <p>Any tips on <span class="math-container">$2.$</span>?</p> <p>Another question: it seems very counterintuitive that if <span class="math-container">$P(X_{n} \to 0)=1$</span> then there is still a chance that
<span class="math-container">$\lim_{n \to \infty}\mathbb E[X_{n}]=\infty$</span>, and likewise that <span class="math-container">$\lim_{n \to \infty}\mathbb E[X_{n}]=\infty$</span> while <span class="math-container">$P(X_{n} \to 0)=1$</span> is still a possibility. Is there any intuitive explanation for this behaviour?</p>
William M.
396,761
<p>Let <span class="math-container">$p$</span> as you said, <span class="math-container">$q = 1-p.$</span> Write <span class="math-container">$R_n \sim p \varepsilon_0 + q \varepsilon_1$</span> (assumed i.i.d.), <span class="math-container">$X_0 = 100$</span> and <span class="math-container">$X_{n+1}=2X_n\mathbf{1}_{\{R_{n+1}=0\}}+\dfrac{1}{2}X_n\mathbf{1}_{\{R_{n+1}=1\}}.$</span> Then <span class="math-container">$E(X_{n+1})=2pE(X_n)+\frac{q}{2}E(X_n) = (2p+\frac{q}{2})E(X_n)$</span> and therefore <span class="math-container">$E(X_n) = (2p+\frac{q}{2})^n 100.$</span></p> <p>Now, consider <span class="math-container">$\mathbf{1}_{\{R_n = 1\}}-\mathbf{1}_{\{R_n=0\}}$</span> which are random variables with mean <span class="math-container">$q-p&gt; 0$</span> and the strong law of large numbers implies that their series diverges almost surely, that is to say, for almost every realisation and for every <span class="math-container">$K &gt; 0,$</span> there will eventually be <span class="math-container">$K$</span> more <span class="math-container">$R_n = 1$</span> than <span class="math-container">$R_n = 0.$</span> Bearing this in mind it follows that <span class="math-container">$\bigcap\limits_{\alpha = \beta}^\infty \{X_\alpha &gt; \delta\}$</span> is a null event, for whatever <span class="math-container">$\delta &gt; 0$</span> may be, and therefore, so are all the following: <span class="math-container">$$\left\{\liminf X_n &gt;0 \right\} \subset \bigcup_{n=1}^\infty \left\{\liminf X_k &gt; \dfrac{1}{n} \right\} \subset \bigcup_{n=1}^\infty \bigcup_{\beta=2n}^\infty \bigcap_{\alpha = \beta}^\infty \{X_\alpha &gt; \dfrac{1}{n} - \dfrac{1}{\beta}\},$$</span> hence <span class="math-container">$X_n \to 0$</span> almost surely. Q.E.D.</p>
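A simulation makes the seemingly paradoxical pair of statements concrete (my own illustration; the choice $p=0.4$, the path count, and the horizon are arbitrary): the exact mean $100(2p+q/2)^n$ explodes, while the median simulated path is essentially wiped out.

```python
import random

# p = 0.4 lies in (1/3, 1/2): the exact mean 100*(2p + q/2)^n explodes,
# yet a typical simulated path is driven to zero (median over many paths).
random.seed(0)
p, q, steps, paths = 0.4, 0.6, 200, 2000

finals = []
for _ in range(paths):
    x = 100.0
    for _ in range(steps):
        x = 2.0 * x if random.random() < p else 0.5 * x
    finals.append(x)
finals.sort()

median_final = finals[paths // 2]
exact_mean = 100 * (2 * p + q / 2) ** steps   # = 100 * 1.1**200, huge
```

The mean is driven entirely by vanishingly rare lucky paths, which is the intuition behind the question's "counterintuitive" pairing.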
28,751
<p>$X \sim \mathcal{N}(0,1)$; show that for $x &gt; 0$, $$ \mathbb{P}(X&gt;x) \leq \frac{\exp(-x^2/2)}{x \sqrt{2 \pi}}. $$</p>
Dilip Sarwate
15,941
<p>Integrating by parts, $$\begin{align*} Q(x) &amp;= \int_x^{\infty} \phi(t)\mathrm dt = \int_x^{\infty} \frac{1}{\sqrt{2\pi}}\exp(-t^2/2) \mathrm dt\\ &amp;= \int_x^{\infty} \frac{1}{t} \frac{1}{\sqrt{2\pi}}t\cdot\exp(-t^2/2) \mathrm dt\\ &amp;= - \frac{1}{t}\frac{1}{\sqrt{2\pi}}\exp(-t^2/2)\biggr\vert_x^\infty - \int_x^{\infty} \left( - \frac{1}{t^2} \right ) \left ( - \frac{1}{\sqrt{2\pi}} \exp(-t^2/2) \right )\mathrm dt\\ &amp;= \frac{\phi(x)}{x} - \int_x^{\infty} \frac{\phi(t)}{t^2} \mathrm dt. \end{align*} $$ The integral on the last line above has a positive integrand and so must have positive value. Therefore we have that $$ Q(x) &lt; \frac{\phi(x)}{x} = \frac{\exp(-x^2/2)}{x\sqrt{2\pi}}~~ \text{for}~~ x &gt; 0. $$ This argument is more complicated than @cardinal's elegant proof of the same result. However, note that by repeating the above trick of integrating by parts and the argument about the value of an integral with positive integrand, we get that $$ Q(x) &gt; \phi(x) \left (\frac{1}{x} - \frac{1}{x^3}\right ) = \frac{\exp(-x^2/2)}{\sqrt{2\pi}}\left (\frac{1}{x} - \frac{1}{x^3}\right )~~ \text{for}~~ x &gt; 0. $$ In fact, for large values of $x$, a sequence of increasingly tighter upper and lower bounds can be developed via this argument. Unfortunately all the bounds diverge to $\pm \infty$ as $x \to 0$.</p>
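Both the upper bound and the lower bound from the second integration by parts are easy to confirm numerically via $Q(x)=\frac12\operatorname{erfc}(x/\sqrt2)$ (my own check; the sample points are arbitrary):

```python
import math

# Q(x) = P(X > x) for standard normal X, computed via the complementary
# error function: Q(x) = erfc(x / sqrt(2)) / 2.  Check both bounds
# phi(x)*(1/x - 1/x^3) < Q(x) < phi(x)/x for a few x > 1.
def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

bounds_hold = all(
    phi(x) * (1 / x - 1 / x**3) < Q(x) < phi(x) / x
    for x in (1.5, 2.0, 3.0, 5.0)
)
```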
939,747
<p>Denote by $a_n$ the sum of the first $n$ primes. Prove that there is a perfect square between $a_n$ and $a_{n+1}$, inclusive, for all $n$.</p> <p>The first few sums of primes are $2$, $5$, $10$, $17$, $28$, $41$, $58$, $75$. It seems there is a perfect square between each pair of successive sums. In addition, we can put a bound on $a_n$, namely $a_n \le 2+3+5+7+9+11+...+(2n+1)=n^2+2n+5$.</p>
Marc van Leeuwen
18,880
<p>No it is not possible to recover $\def\x{\mathbf x}\x$ from $\mathbf A=\x\x^T$ (unless $\x=0$), since replacing $\x$ by $-\x$ will give the exact same value for $\x\x^T$ (the map $\x\mapsto\x\x^T$ is not injective). As others have observed the map is not surjective either (many matrices do not arise this way), but as I read the question, that is not at stake here, since $\mathbf A$ was produced by the above equation to begin with.</p>
1,473,318
<blockquote> <p>How many numbers can be formed by using the digits $1,2,3,4$ and $5$ without repetition which are divisible by $6$?</p> </blockquote> <p><strong>My Approach:</strong></p> <p>$3$ digit numbers formed using $1,2,3,4,5$ divisible by $6$:</p> <p>the units digit should be $2$ or $4$,</p> <p>so the number can be $XY2$ or $XY4$,</p> <p>$X+Y+2 = 6,9$ &amp; $X+Y+4 = 9,12$</p> <p>$X+Y = 4,7$ &amp; $X+Y = 5,8$</p> <p>$(X,Y)= (1,3),(3,1),(2,5),(5,2)$ &amp; </p> <p>$(X,Y)= (2,3),(3,2),(3,5),(5,3)$</p> <p>Therefore, a total of 8 numbers without repetition.</p> <blockquote> <p>But I am confused here about how to find the number of such numbers.</p> </blockquote>
Amir
232,937
<p>The number should be divisible by 2 and by 3.</p> <p>Case 1) $XY2$</p> <p>Then $X+Y$ should be $4$ or $7$.</p> <p>a) $X+Y=4$</p> <p>The only possibility is $\{X,Y\}=\{1,3\}$, so there are 2 choices: $(X,Y) = (1,3)$ or $(3,1)$.</p> <p>b) $X+Y=7$</p> <p>$\{X,Y\} = \{2,5\}$, which is not feasible ($2$ is already taken), or $\{X,Y\} = \{3,4\}$; the number of choices here is 2, then.</p> <p>Case 2) $XY4$</p> <p>Then $X+Y$ can be $5$ or $8$.</p> <p>a) $X+Y=5$</p> <p>$\{X,Y\} = \{1,4\}$, which is not feasible ($4$ is already taken), or $\{X,Y\} = \{2,3\}$, so 2 choices here.</p> <p>b) $X+Y = 8$</p> <p>$\{X,Y\} = \{3,5\}$, and 2 choices here.</p> <p>Then the total number of choices is $2+2+2+2 = 8$.</p>
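A brute-force enumeration of the three-digit case analysed above (my own check) confirms the count of 8:

```python
from itertools import permutations

# Enumerate all three-digit numbers with distinct digits drawn from
# {1, 2, 3, 4, 5} and keep those divisible by 6.
hits = sorted(
    100 * a + 10 * b + c
    for a, b, c in permutations([1, 2, 3, 4, 5], 3)
    if (100 * a + 10 * b + c) % 6 == 0
)
```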
1,165,147
<p>I would like to find the area under the curve of $\frac{\sin(ax/2)}{\sin(x/2)}$, namely between the first zero crossings on the left and right:</p> <p>$$ \int_{-\frac{2\pi}{a}}^{\frac{2\pi}{a}} \frac{\sin(\frac{ax}{2})}{\sin(\frac{x}{2})} \, dx $$</p> <p>I realized from Wolfram Alpha that this does not have a simple closed form, so I was wondering if there is a good approximation for the area. For my application, $a$ is usually over $10000$.</p> <p>Thank you for any advice and suggestions.</p>
Alex
38,873
<p>If $a$ is so large, then the bounds of the integral are not far from 0, so you can approximate both numerator and denominator with Maclaurin series expansion. </p>
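Following this suggestion, replacing $\sin(x/2)$ by $x/2$ and substituting $u=ax/2$ turns the integral into $2\int_{-\pi}^{\pi}\frac{\sin u}{u}\,du = 4\,\mathrm{Si}(\pi)\approx 7.41$, independent of $a$. A midpoint-rule comparison (my own sketch; the value of $a$ and the panel count are arbitrary):

```python
import math

# Midpoint rule on the exact integrand versus the small-angle version
# sin(ax/2)/(x/2); for large a both should be close to 4*Si(pi) ~ 7.41.
def midpoint(f, lo, hi, n=20_000):
    h = (hi - lo) / n
    return h * sum(f(lo + (k + 0.5) * h) for k in range(n))

a = 10_000
lo, hi = -2 * math.pi / a, 2 * math.pi / a
exact = midpoint(lambda x: math.sin(a * x / 2) / math.sin(x / 2), lo, hi)
approx = midpoint(lambda x: math.sin(a * x / 2) / (x / 2), lo, hi)
rel_err = abs(exact - approx) / abs(exact)
```

With $|x|\le 2\pi/a$ the relative error of the small-angle replacement is of order $(2\pi/a)^2/24$, so for $a>10000$ the approximation is essentially exact.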
75,005
<p>Let's imagine a guy who claims to possess a machine that can each time produce a completely random series of 0/1 digits (e.g. $1,0,0,1,1,0,1,1,1,...$). And each time after he generates one, you can keep asking him for the $n$-th digit and he will tell you accordingly.</p> <p>Then how do you check if his series is <em>really completely random</em>?</p> <p>If we only check whether the $n$-th digit is evenly distributed, then he can cheat using:</p> <blockquote> <p>$0,0,0,0,...$<br> $1,1,1,1,...$<br> $0,0,0,0,...$<br> $1,1,1,1,...$<br> $...$</p> </blockquote> <p>If we check whether any given sequence is distributed evenly, then he can cheat using:</p> <blockquote> <p>$(0,)(1,)(0,0,)(0,1,)(1,0,)(1,1,)(0,0,0,)(0,0,1,)...$<br> $(1,)(0,)(1,1,)(1,0,)(0,1,)(0,0,)(1,1,1,)(1,1,0,)...$<br> $...$</p> </blockquote> <p>I may give other possible checking processes but as far as I can list, each of them has flaws that can be cheated with a prepared regular series.</p> <p>How do we check if a series is really random? Or is randomness a philosophical concept that can not be easily defined in Mathematics?</p>
Gerry Myerson
8,269
<p>This is discussed very nicely in Volume 2 of Knuth's The Art Of Computer Programming. The executive summary is that randomness is a mathematical concept that can be defined in mathematics but not easily. </p>
281,735
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="https://math.stackexchange.com/questions/202452/why-is-predicate-all-as-in-allset-true-if-the-set-is-empty">Why is predicate &ldquo;all&rdquo; as in all(SET) true if the SET is empty?</a> </p> </blockquote> <p>I don't quite understand this quantification over the empty set:</p> <p>$\forall y \in \emptyset: Q(y)$</p> <p>The book says that this is always TRUE regardless of the value of the predicate $Q(y)$, and it explains that this is because this quantification adds no predicate at all, and therefore can be considered the weakest predicate possible, which is TRUE.</p> <p>I know that TRUE is the weakest predicate because $P \Rightarrow \text{TRUE}$ is TRUE for every $P$. I don't see the relationship between this weakest predicate and the quantification.</p>
c.w.chambers
16,959
<p>When you see a quantification like β€˜<span class="math-container">$\forall \phi x : \psi x$</span>’, this is shorthand for β€˜<span class="math-container">$\forall x : \phi x \to \psi x$</span>’. Since β€˜<span class="math-container">$x \in \emptyset$</span>’ is false for all β€˜<span class="math-container">$x$</span>’, the antecedent of β€˜<span class="math-container">$x \in \emptyset \to \psi x$</span>’ will always be false. Thus, the entire conditional statement is always true.</p> <p>Edit: In the case of existential quantification, the statement β€˜<span class="math-container">$\exists \phi x : \psi x$</span>’ is shorthand for β€˜<span class="math-container">$\exists x : \phi x \land \psi x$</span>’, so in this case, the conjunction β€˜<span class="math-container">$x \in \emptyset \land \psi x$</span>’ will always be false.</p>
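Python's built-ins follow exactly this convention, which gives a quick way to see both cases in action (my own illustration, echoing the linked duplicate about `all(SET)`):

```python
# Python's all()/any() implement exactly this convention: a universal
# statement over the empty set is True, an existential one is False.
def forall(xs, pred):
    return all(pred(x) for x in xs)

def exists(xs, pred):
    return any(pred(x) for x in xs)

vacuous = forall([], lambda y: False)    # True despite the False predicate
no_witness = exists([], lambda y: True)  # False: there is no witness
```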
73,111
<blockquote> <p>A trapezoid is inscribed in a semicircle of radius R. The side of the trapezoid is slanted at angle alpha against the base, which is the diameter of the semicircle. Compute the area of the trapezoid.</p> </blockquote> <p>So the base is 2R. The bad thing is: that's all I know. How should I move on? I don't know what to do with that angle.</p>
AndrΓ© Nicolas
6,312
<p>There is a reasonable brute force approach, using the standard formula for the area of a trapezoid. We need a picture. Instead I will <em>label</em> the vertices, and rely on you to draw the picture. Please note that your picture is an essential part of the calculation below.</p> <p>Let the vertices of the trapezoid be $A$, $B$, $C$, $D$. These labels go <em>counterclockwise</em>, and $A$, $B$ are the endpoints of the diameter of the semicircle, with $A$ on the left. We assume that by "the side of the trapezoid is slanting $\alpha$ against the base" you mean that $\angle DAB=\alpha$. Note that for the geometry to work, we need $\alpha \ge 45^\circ$. If $\alpha=45^\circ$, our "trapezoid" degenerates to a triangle.</p> <p>Note that $\triangle ADB$ is right-angled at $D$. It follows that $AD=2R\cos\alpha$. </p> <p>Drop a perpendicular from $D$ to the diameter $AB$, meeting $AB$ at say $P$. Then by looking at $\triangle APD$, we can see that $DP=AD \sin\alpha=2R\sin\alpha\cos\alpha$. Progress, we now know the height of the trapezoid.</p> <p>Now we sort of need to know the shorter one of the two parallel sides. Note that $AP=AD\cos\alpha=2R\cos^2\alpha$.</p> <p>It follows that the shorter parallel side has length $2R-2AP=2R(1-2\cos^2\alpha)$. </p> <p>So the <em>average</em> of the two parallel sides is $2R(1-\cos^2\alpha)$, which simplifies to $2R\sin^2\alpha$. </p> <p>Multiply the average of the two parallel sides by the height. We get $4R^2\sin^3\alpha\cos\alpha$. </p> <p><strong>Comment</strong>: We really didn't need to calculate the shorter parallel side, since it is obvious that our trapezoid has the same area as the rectangle of base $PB$ and height $DP$, and once we know that $AP=2R\cos^2\alpha$, we know that $PB=2R-2R\cos^2\alpha$. And we didn't need the formula for the area of a trapezoid.</p> <p>My preference would be to find the answer when the radius is $1$, then scale by the linear factor $R$ at the end, which scales the area by the factor $R^2$.</p>
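The final formula $4R^2\sin^3\alpha\cos\alpha$ can be checked with the shoelace formula on the vertices constructed in the answer (my own verification, with $R=1$ and an arbitrary $\alpha>45^\circ$):

```python
import math

# Place the vertices as in the answer (R = 1): A, B are the endpoints of
# the diameter and D = A + AD*(cos a, sin a) with AD = 2*cos(a); C mirrors D.
def trapezoid_area(alpha):
    A = (-1.0, 0.0)
    B = (1.0, 0.0)
    D = (-1.0 + 2 * math.cos(alpha) ** 2,
         2 * math.sin(alpha) * math.cos(alpha))
    C = (-D[0], D[1])
    pts = [A, B, C, D]  # counterclockwise
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1  # shoelace formula
    return abs(s) / 2

alpha = math.radians(60)
area = trapezoid_area(alpha)
formula = 4 * math.sin(alpha) ** 3 * math.cos(alpha)
```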
4,627,821
<p>Consider the difference equations</p> <p><span class="math-container">$$x(k+1) = f(x(k)) \qquad (1)$$</span></p> <p>and</p> <p><span class="math-container">$$y(k+1) = g(y(k)) \qquad (2)$$</span></p> <p>where <span class="math-container">$g = f \circ f$</span>.</p> <p>In <em>An Introduction to Difference Equations (3e)</em> by Saber Elaydi, in the proof of Theorem 1.16 (on p.32), it is mentioned that for <span class="math-container">$a$</span> an equilibrium of (1), if <span class="math-container">$a$</span> is an asymptotically stable equilibrium of (2), then it is also an asymptotically stable equilibrium for (1).</p> <p>How can we prove this?</p> <p>My progress so far</p> <ul> <li>Since <span class="math-container">$g(x) = f(f(x))$</span>, <ul> <li><span class="math-container">$g'(x) = f'(f(x))f'(x)$</span></li> <li><span class="math-container">$g''(x) = f''(f(x)) [f'(x)]^2 + f'(f(x)) f''(x)$</span></li> <li><span class="math-container">$g'''(x) = f'''(f(x)) f'(x) [f'(x)]^2 + 2f''(f(x)) f'(x) f''(x) + f''(f(x)) f'(x) f''(x) + f'(f(x))f'''(x)$</span></li> </ul> </li> <li>Assume <span class="math-container">$a$</span> is an asymptotically stable equilibrium for (2). There are two cases <ul> <li><span class="math-container">$|g'(a)| &lt; 1$</span></li> <li><span class="math-container">$|g'(a)| = 1$</span>, <span class="math-container">$g''(a) = 0$</span>, and <span class="math-container">$g'''(a) &lt; 0$</span></li> </ul> </li> <li>Note that <span class="math-container">$g'(a) = f'(f(a))f'(a) = [f'(a)]^2$</span></li> <li>Hence when <span class="math-container">$|g'(a)| &lt; 1$</span>, <span class="math-container">$|f'(a)| &lt; 1$</span> as well. 
In this case <span class="math-container">$a$</span> is an asymptotically stable equilibrium of (1)</li> <li>Now assume <span class="math-container">$|g'(a)| = 1$</span>, <span class="math-container">$g''(a) = 0$</span>, and <span class="math-container">$g'''(a) &lt; 0$</span>.</li> <li>Since <span class="math-container">$g'(a) = [f'(a)]^2 \geq 0$</span>, <span class="math-container">$g'(a) = 1$</span>. There are two cases: <ul> <li><span class="math-container">$f'(a) = 1$</span>.</li> <li><span class="math-container">$f'(a) = -1$</span>.</li> </ul> </li> <li>Assume <span class="math-container">$f'(a) = 1$</span>. In this case <ul> <li><span class="math-container">$g''(a) = f''(f(a))[f'(a)]^2 + f'(f(a)) f''(a) = 2f''(a) = 0$</span>. Hence <span class="math-container">$f''(a) = 0$</span></li> <li><span class="math-container">$g'''(a) = f'''(f(a)) f'(a) [f'(a)]^2 + 2f''(f(a)) f'(a) f''(a) + f''(f(a)) f'(a) f''(a) + f'(f(a))f'''(a) = 2f'''(a) &lt; 0$</span>. Hence <span class="math-container">$f'''(a) &lt; 0$</span></li> <li>It follows that <span class="math-container">$a$</span> is an asymptotically stable equilibrium of (1)</li> </ul> </li> <li><strong>My question is how can we proceed when <span class="math-container">$f'(a) = -1$</span></strong> <ul> <li>Theorem 1.16 states that if <span class="math-container">$f'(a) = -1$</span> and <span class="math-container">$-f'''(a) - 3/2(f''(a))^2 &lt; 0$</span>, then <span class="math-container">$a$</span> is asymptotically stable.</li> <li>But since this proposition is used to prove Theorem 1.16, using Theorem 1.16 to prove this proposition would be circular.</li> </ul> </li> </ul>
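For what it's worth, the chain-rule identities above can be sanity-checked numerically with a toy map (my own choice of <span class="math-container">$f$</span>, purely for illustration). The check also exercises the key link between (1) and (2): every second term of an <span class="math-container">$f$</span>-orbit is a <span class="math-container">$g$</span>-orbit.

```python
def f(x):
    # a sample smooth map with equilibrium a = 0 and f'(0) = 0.5
    # (my own example, not a map from the book)
    return 0.5 * x + x**3

def g(x):
    # the second-iterate map g = f o f from equation (2)
    return f(f(x))

def deriv(fun, x, h=1e-5):
    # central finite difference
    return (fun(x + h) - fun(x - h)) / (2 * h)

fp = deriv(f, 0.0)      # should be near f'(0) = 0.5
gp = deriv(g, 0.0)      # chain rule: g'(0) = f'(f(0)) f'(0) = f'(0)^2

# the f-orbit sampled every other step IS the g-orbit
x_f = x_g = 0.4
orbit_f, orbit_g = [], []
for _ in range(10):
    x_f = f(f(x_f))     # two steps of (1)
    x_g = g(x_g)        # one step of (2)
    orbit_f.append(x_f)
    orbit_g.append(x_g)
```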
Steve Morris
1,133,961
<p>If the statement is not true, then there exists <span class="math-container">$\varepsilon &gt;0$</span> such that <span class="math-container">$|f^{\prime}(x)|\geq \varepsilon$</span>, <span class="math-container">$\forall x \in \mathbb{R}$</span>. Since <span class="math-container">$f$</span> is bounded, there exists <span class="math-container">$M&gt;0$</span> such that <span class="math-container">$|f(x)| \leq M$</span>, <span class="math-container">$\forall x \in \mathbb{R}$</span>. Consider <span class="math-container">$a &lt; b \in \mathbb{R}$</span> with <span class="math-container">$b-a &gt; \frac{2M}{\varepsilon}$</span>. By the mean value theorem, there exists <span class="math-container">$u \in (a,b)$</span> such that <span class="math-container">$|f^{\prime}(u)|=|\frac{f(b)-f(a)}{b-a}| \leq \frac{2M}{|b-a|} &lt; \varepsilon$</span>, which is a contradiction.</p>
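A quick numerical illustration of the inequality driving the contradiction, using <span class="math-container">$f = \sin$</span> (so <span class="math-container">$M = 1$</span>): on any interval longer than <span class="math-container">$2M/\varepsilon$</span>, the average slope is forced below <span class="math-container">$\varepsilon$</span>.

```python
import math
import random

M, eps = 1.0, 0.1        # sin is bounded by M = 1; pick eps = 0.1
random.seed(0)
slopes = []
for _ in range(500):
    a = random.uniform(-100.0, 100.0)
    # interval strictly longer than 2M / eps = 20
    b = a + 2 * M / eps + random.uniform(0.1, 50.0)
    slopes.append(abs((math.sin(b) - math.sin(a)) / (b - a)))
```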
2,881,914
<p>Using a computer I found the double sum</p> <p>$$S(n)= \sum_{j=1}^n\sum_{k=1}^n \frac{j^2 + jk + k^2}{j^2(j+k)^2k^2}$$ has values</p> <p>$$S(10) \quad\quad= 1.881427206538142 \\ S(1000) \quad= 2.161366028875634 \\S(100000) = 2.164613524212465\\$$</p> <p>As a guess I compared with fractions $\pi^p/q$ where $p,q$ are positive integers and it appears </p> <p>$$\lim_{n \to \infty} S(n) = \frac{\pi^4}{45} = 2\zeta(4) \approx 2.164646467422276 $$</p> <p>I'd be interested in seeing a proof if true. </p>
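A minimal Python version of this computation (a sketch, not necessarily the exact program used) for reproducing the partial sums:

```python
import math

def S(n):
    # partial double sum from the question
    total = 0.0
    for j in range(1, n + 1):
        for k in range(1, n + 1):
            total += (j*j + j*k + k*k) / (j*j * (j + k)**2 * k*k)
    return total
```

Since every term is positive, the partial sums increase toward the conjectured limit $\pi^4/45$.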
Hazem Orabi
367,051
<p>$$ \begin{align} \frac{m^2+m\,n+n^2}{m^2\,(m+n)^2\,n^2}\, &amp;=\frac{m^2+m\,n+n^2\,\color{red}{+m\,n-m\,n}}{m^2\,(m+n)^2\,n^2} \\[2mm] &amp;=\,\frac{(m+n)^2-m\,n}{(m+n)^2\,m^2\,n^2} \\[2mm] &amp;=\,\frac{1}{m^2\,n^2}-\frac{1}{m\,n\,(m+n)^2} \\[2mm] &amp;=\,\frac{1}{m^2\,n^2}-\frac{1}{m^3}\left(\frac{1}{n}-\frac{1}{m+n}-\frac{m}{(m+n)^2}\right) \\[2mm] &amp;=\,\color{brown}{\frac{1}{m^2\,n^2}}\color{green}{-\frac{1}{m^3}\left(\frac{1}{n}-\frac{1}{m+n}\right)}\color{blue}{+\frac{1}{m^2}\frac{1}{(m+n)^2}} \end{align} $$ </p> <p><br> $$ \begin{align} \color{brown}{\large S_{\small 1}\,} &amp;=\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\frac{1}{m^2\,n^2}=\sum_{m=1}^{\infty}\frac{1}{m^2}\sum_{n=1}^{\infty}\frac{1}{n^2}=\left(\zeta(2)\right)^2=\color{brown}{\frac{\pi^4}{36}} \\[4mm] \color{green}{\large S_{\small 2}\,} &amp;=\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\frac{1}{m^3}\left(\frac{1}{n}-\frac{1}{m+n}\right)=\sum_{m=1}^{\infty}\frac{1}{m^3}\sum_{n=1}^{\infty}\left(\frac{1}{n}-\frac{1}{m+n}\right) \\[2mm] &amp;=\sum_{m=1}^{\infty}\frac{1}{m^3}\sum_{n=1}^{\color{red}{m}}\frac{1}{n}=\sum_{m=1}^{\infty}\frac{H_{m}}{m^3}=\frac{5}{4}\zeta(4)=\color{green}{\frac{\pi^4}{72}}\tag{1} \\[4mm] \color{blue}{\large S_{\small 3}\,} &amp;=\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\frac{1}{m^2}\frac{1}{(m+n)^2}=\sum_{m=1}^{\infty}\frac{1}{m^2}\sum_{n=1}^{\infty}\frac{1}{(m+n)^2} \\[2mm] &amp;=\sum_{m=1}^{\infty}\frac{1}{m^2}\,\sum_{\color{red}{n=m+1}}^{\infty}\,\frac{1}{n^2}=\sum_{m=1}^{\infty}\frac{\psi^{\small (1)}(m+1)}{m^2} \\[2mm] &amp;=\sum_{m=1}^{\infty}\frac{1}{m^2}\left[\zeta(2)-\sum_{k=1}^{m}\frac{1}{k^2}\right]=\left(\zeta(2)\right)^2-\sum_{m=1}^{\infty}\frac{H_{m,2}}{m^2} \\[2mm] &amp;=\left(\zeta(2)\right)^2-\frac{1}{2}\left[\left(\zeta(2)\right)^2+\zeta(4)\right]=\frac{1}{2}\left[\left(\zeta(2)\right)^2-\zeta(4)\right]=\color{blue}{\frac{\pi^4}{120}}\tag{2} \end{align} $$ </p> <p>$$ \color{red}{\Longrightarrow\quad S}\,=S_1-S_2+S_3=\,\color{red}{\frac{\pi^4}{45}} $$ </p> 
<hr> <p>$\,H_m\,\,\,$ : <a href="http://en.wikipedia.org/wiki/Harmonic_number" rel="nofollow noreferrer">Harmonic Number</a> , $\,\{1\}\,$ : <a href="http://mathworld.wolfram.com/HarmonicNumber.html#eqn19" rel="nofollow noreferrer">Equation (19)</a> ,$\,\{2\}\,$ : <a href="http://mathworld.wolfram.com/HarmonicNumber.html#eqn43" rel="nofollow noreferrer">Equation (43)</a> </p> <p>$\,{\large\psi}^{\small (1)}\,\,$ : <a href="http://en.wikipedia.org/wiki/Polygamma_function" rel="nofollow noreferrer">Polygamma Function</a> </p>
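The three pieces can be confirmed numerically (a quick Python sketch of my own, truncating each series at $N = 4000$):

```python
import math

zeta2 = math.pi ** 2 / 6

N = 4000
H = 0.0          # harmonic number H_m
H2 = 0.0         # generalized harmonic number H_{m,2}
S2 = 0.0         # partial sum of H_m / m^3      -> (5/4) zeta(4) = pi^4/72
T = 0.0          # partial sum of H_{m,2} / m^2  -> (zeta(2)^2 + zeta(4)) / 2
for m in range(1, N + 1):
    H += 1.0 / m
    H2 += 1.0 / (m * m)
    S2 += H / m ** 3
    T += H2 / (m * m)

S3 = zeta2 ** 2 - T      # S_3 = zeta(2)^2 - sum H_{m,2}/m^2 -> pi^4/120
```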
3,881,029
<p><span class="math-container">$R(A)$</span> is range of <span class="math-container">$A$</span> <br> <span class="math-container">$N(A)$</span> is nullspace of <span class="math-container">$A$</span> <br> <span class="math-container">$R(A^T)$</span> is range of <span class="math-container">$A^T$</span> <br> <span class="math-container">$N(A^T)$</span> is nullspace of <span class="math-container">$A^T$</span> <br></p> <p>Suppose <span class="math-container">$y \in R(A)$</span> and <span class="math-container">$x \in N(A^T)$</span>.</p> <p>How would one go about showing that <span class="math-container">$x^Ty=0$</span> (aka <span class="math-container">$x$</span> is perpendicular to <span class="math-container">$y$</span>)?</p>
angryavian
43,949
<p>Hint: By the definition of <span class="math-container">$R(A)$</span>, there exists <span class="math-container">$z$</span> such that <span class="math-container">$y=Az$</span>. Then note that <span class="math-container">$x^\top y = x^\top (Az) = (A^\top x)^\top z$</span>.</p>
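To see the hint in action, here is a tiny hand-built example (my own numbers): a $3\times 2$ matrix $A$, a vector $y = Az \in R(A)$, and a vector $x$ chosen in $N(A^\top)$.

```python
def matvec(M, v):
    # plain matrix-vector product
    return [sum(r[i] * v[i] for i in range(len(v))) for r in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

A = [[1, 0],
     [0, 1],
     [1, 1]]

z = [2.0, 5.0]
y = matvec(A, z)             # y = Az is in R(A)

x = [1.0, 1.0, -1.0]         # chosen so that A^T x = 0, i.e. x in N(A^T)
assert matvec(transpose(A), x) == [0.0, 0.0]

# x^T y = (A^T x)^T z = 0^T z = 0
dot = sum(xi * yi for xi, yi in zip(x, y))
```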
117,933
<p>I couldn't find a similar question being asked here. The closest one I can find is <a href="https://mathoverflow.net/questions/11366/when-to-split-merge-papers">When to split/merge papers?</a>. Here is my situation: I proved a theorem. When I tried to type it up, I found that it's very long. Since it's long, I split it into two parts. I finished the first part (50+ pages) and submitted it to a journal a few months ago. Now I have almost finished typing the second part, which is also 50+ pages. My question is: Should I submit the second part to the same journal? If I submit to the same journal, probably the editor will send the second part to the same referee. Then the referee who is familiar with the first part can read the second part more easily. However, as we all know, it's hard to publish a long paper, and I think it would be even harder to publish two long papers in the same journal. However, if I submit to another journal, it may be even harder since the new referee may find it difficult to read the second part without reading the first part. So this also raises a second question: should I wait until the first part is published/accepted, and then submit the second part? </p>
fedja
1,131
<p>I usually do not split in such cases, but if you do, I guess the same journal is a better choice, especially if you think of the poor people who will have to look up and reference your paper in the future :). Also, if possible at all, I would include into the first part a forward reference to the second one (if the submission is to the same journal and the second acceptance occurs before you get the galley proofs of the first part, that may be completely feasible from the technical point of view). What I really do not understand in this story is how the referee is supposed to do his job being presented with the first part alone. I can imagine myself in his shoes writing something like "The paper has many interesting ideas and no flaws so far but it is completely unclear how the ends will finally meet..."</p>
1,684,124
<p>Here is my attempt:</p> <p>$$ \frac{2x}{x^2 +2x+1}= \frac{2x}{(x+1)^2 } = \frac{2}{x+1}-\frac{2}{(x+1)^2 }$$</p> <p>Then I tried to integrate it, and I got $2\ln(x+1)+\frac{2}{x+1}+C$ as my answer. Am I right? Please correct me if I'm wrong.</p>
Enrico M.
266,764
<p>Add and remove $2$ in the numerator, and you will obtain:</p> <p>$$\frac{2x+2-2}{x^2+2x+1} = \frac{2x+2}{x^2+2x+1} - \frac{2}{x^2+2x+1}$$</p> <p>Now you see that the numerator in the first fraction is nothing but the derivative of the denominator, id est you have a function of the form $\frac{f'(x)}{f(x)}$. This, when integrated, is simply </p> <p>$$\ln(x^2+2x+1)$$</p> <p>Now the other term</p> <p>$$2\int\frac{1}{x^2 + 2x + 1}\ \text{d}x = 2\int\frac{1}{(x+1)^2}\ \text{d}x = -\frac{2}{(x+1)}$$</p> <p>obtained by simply substituting $y = x+1$.</p> <p>Final result (up to the constant of integration):</p> <p>$$\ln(x^2+2x+1) -\frac{2}{(x+1)}$$</p> <p>The logarithm can also be written as</p> <p>$$\ln((x+1)^2) = 2\ln(x+1)$$</p>
1,501,940
<p>This is related to a <a href="https://math.stackexchange.com/questions/1501852/why-does-this-statement-not-hold-when-me-0/1501925#1501925">question</a> I just asked, that I now think was based on wrong assumptions.</p> <p>It is true that if <span class="math-container">$f=a$</span> a.e. on the interval <span class="math-container">$[a,b]$</span>, then <span class="math-container">$f = a$</span> on <span class="math-container">$[a,b]$</span>. However, apparently it is not true for a general measurable set <span class="math-container">$E$</span> with <span class="math-container">$m(E) \neq 0$</span>, which confuses me greatly.</p> <p>I just finished the following proof which I thought showed that the statement was true for a general measurable set <span class="math-container">$E$</span>. Apparently, it's not. Could somebody please tell me 1) what's wrong with it, 2) how to fix it, 3) how to use it to show that the statement is not true for <span class="math-container">$E$</span> with <span class="math-container">$m(E) \neq 0$</span> (or whichever kind of set it doesn't work for):</p> <blockquote> <p>Suppose <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are continuous on the general measurable set <span class="math-container">$E$</span>. Suppose also that <span class="math-container">$E_{0}\subseteq E$</span> is the set of all points where <span class="math-container">$f \neq g$</span> are all contained (i.e., <span class="math-container">$\forall x \in E_{0}$</span>, <span class="math-container">$f \neq g$</span>), where <span class="math-container">$m(E_{0})=0$</span>.</p> <p>Consider <span class="math-container">$|h|=|f-g|$</span>. 
<span class="math-container">$|h|$</span> is the composition of a continuous function and the linear combination of two continuous functions, so <span class="math-container">$|h|$</span> is continuous.</p> <p>Now, <span class="math-container">$E_{0} = |h|^{-1}(\mathbb{R}\backslash\{0\})=h^{-1}((-\infty,0)\cup (0,\infty))$</span>. <span class="math-container">$(-\infty,0)\cup(0,\infty)$</span> is a union of open sets and therefore open. Since <span class="math-container">$|h|$</span> is continuous, <span class="math-container">$E_{0}$</span> is also open.</p> <p>Since <span class="math-container">$E_{0}$</span> is open and <span class="math-container">$m(E_{0})=0$</span>, it must be empty (as otherwise, it must contain a nonempty interval, whose measure would be positive.</p> <p>Therefore, since the set of points on which <span class="math-container">$f \neq g$</span> is empty, <span class="math-container">$f=g$</span> everywhere on <span class="math-container">$E$</span>.</p> </blockquote>
B. S. Thomson
281,004
<p>My feline friend is getting confused (and no doubt frustrated) and has now posted a further variant on the problem. Here is the best I can come up with for a short tutorial. I hope it helps.</p> <p>Do this problem first:</p> <blockquote> <p>Suppose that $f$, $g:E \to R$ are continuous functions. Let $D$ be a subset of $E$ and suppose that $f(x)=g(x)$ for each $x\in D$. What are the necessary and sufficient conditions on $D$ in order to conclude that $f(x)=g(x)$ for all $x\in E$?</p> </blockquote> <p>You can do this in a metric space, but it is enough just to assume $E$ is some set of real numbers and that these functions are continuous relative to the set $E$. There is no measure theory here, no measurable sets, no measure zero sets. </p> <p>If you can't solve this problem, then don't even try the other problem about sets of measure zero <em>since this one is a prerequisite</em>!</p> <p>In short, forget about measure theory for a while and review what you already have learned about continuous functions, dense sets, etc.</p>
1,501,940
<p>This is related to a <a href="https://math.stackexchange.com/questions/1501852/why-does-this-statement-not-hold-when-me-0/1501925#1501925">question</a> I just asked, that I now think was based on wrong assumptions.</p> <p>It is true that if <span class="math-container">$f=a$</span> a.e. on the interval <span class="math-container">$[a,b]$</span>, then <span class="math-container">$f = a$</span> on <span class="math-container">$[a,b]$</span>. However, apparently it is not true for a general measurable set <span class="math-container">$E$</span> with <span class="math-container">$m(E) \neq 0$</span>, which confuses me greatly.</p> <p>I just finished the following proof which I thought showed that the statement was true for a general measurable set <span class="math-container">$E$</span>. Apparently, it's not. Could somebody please tell me 1) what's wrong with it, 2) how to fix it, 3) how to use it to show that the statement is not true for <span class="math-container">$E$</span> with <span class="math-container">$m(E) \neq 0$</span> (or whichever kind of set it doesn't work for):</p> <blockquote> <p>Suppose <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are continuous on the general measurable set <span class="math-container">$E$</span>. Suppose also that <span class="math-container">$E_{0}\subseteq E$</span> is the set of all points where <span class="math-container">$f \neq g$</span> are all contained (i.e., <span class="math-container">$\forall x \in E_{0}$</span>, <span class="math-container">$f \neq g$</span>), where <span class="math-container">$m(E_{0})=0$</span>.</p> <p>Consider <span class="math-container">$|h|=|f-g|$</span>. 
<span class="math-container">$|h|$</span> is the composition of a continuous function and the linear combination of two continuous functions, so <span class="math-container">$|h|$</span> is continuous.</p> <p>Now, <span class="math-container">$E_{0} = |h|^{-1}(\mathbb{R}\backslash\{0\})=h^{-1}((-\infty,0)\cup (0,\infty))$</span>. <span class="math-container">$(-\infty,0)\cup(0,\infty)$</span> is a union of open sets and therefore open. Since <span class="math-container">$|h|$</span> is continuous, <span class="math-container">$E_{0}$</span> is also open.</p> <p>Since <span class="math-container">$E_{0}$</span> is open and <span class="math-container">$m(E_{0})=0$</span>, it must be empty (as otherwise, it must contain a nonempty interval, whose measure would be positive.</p> <p>Therefore, since the set of points on which <span class="math-container">$f \neq g$</span> is empty, <span class="math-container">$f=g$</span> everywhere on <span class="math-container">$E$</span>.</p> </blockquote>
B. S. Thomson
281,004
<p>I am not normally a cat person but this one and I are old friends now.</p> <p>So I think I should jump into the litter box and complete the task we have been set. What is the definitive answer to this problem that was apparently given to all the kittys in a graduate class somewhere? Here is a way to formulate and answer the problem.</p> <blockquote> <p><strong>Definition</strong>. A set $E$ of real numbers is said to be a <em>purrfect</em> set if whenever $f$, $g:E\to R$ are continuous functions and $N\subset E$ is a set of Lebesgue measure zero for which $f(x)=g(x)$ for all $x\in E\setminus N$ it follows that $f(x)=g(x)$ for all $x\in E$.</p> </blockquote> <p>Don't confuse <em>purrfect</em> with <em>perfect</em>.</p> <ol> <li>$[0,1]$ is a purrfect set [proved by J. Cat].</li> <li>$[0,1] \cup \{2\} $ is not a purrfect set [checked by J. Cat].</li> <li>Every open set is a purrfect set [method of J. Cat works].</li> <li>Every set open in the density topology is a purrfect set.</li> </ol> <p>Naturally the problem is not complete until we characterize purrfect sets. The following does the job giving a paw-sitive solution to the originally posed problem.</p> <blockquote> <p><strong>Theorem</strong>. A necessary and sufficient condition for a set of real numbers $E$ to be a purrfect set is that for each $x\in E$ and for each $\epsilon&gt;0$ $$m(E\cap(x-\epsilon,x+\epsilon))&gt;0$$ where $m$ is Lebesgue outer measure.</p> </blockquote> <p>We can leave this as an exercise for all you Cool Cats since the methods should be clear. Note that purrfect sets are kind of thick and furry at each point.</p> <p>In case you think that purrfect sets might be of little interest and that the instructor who set the problem should be charged with animal cruelty let me leave you with another problem.</p> <blockquote> <p><strong>Problem</strong>. Suppose that $F:R\to R$ is an everywhere differentiable function. 
Show that the set $$\{x: a&lt;F'(x)&lt;b\}$$ is a purrfect set for any $a$ and $b$.</p> </blockquote> <p>[My apologies: someone seems to have run my posting through the web site <a href="http://kittify.herokuapp.com/://" rel="nofollow">Kittify</a> hence all the cat puns. Too late to edit them out.]</p>
3,896,709
<ul> <li>Any ideas on how to evaluate the sum <span class="math-container">$$ \sum_{j=m}^{k}\frac{\binom{m}{2m - j\,\,}}{\binom{k}{j}} \quad\mbox{with}\quad m \leq k &lt; 2m - 1. $$</span></li> <li>I have found the sum for <span class="math-container">$k=2m-1$</span>. In fact, one can verify that <span class="math-container">$$ \sum_{j = m}^{2m - 1}\ \frac{\binom{m}{2m-j\,\,}} {\binom{2m - 1\,\,}{j}} = 2m\left(H_{2m} - H_{m}\right), $$</span> where <span class="math-container">$H_{j}$</span> is the <span class="math-container">$j$</span>-th harmonic number.</li> </ul>
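The stated identity for <span class="math-container">$k = 2m-1$</span> is easy to confirm numerically:

```python
from math import comb

def H(n):
    # n-th harmonic number
    return sum(1.0 / i for i in range(1, n + 1))

def lhs(m):
    # sum_{j=m}^{2m-1} C(m, 2m-j) / C(2m-1, j)
    return sum(comb(m, 2 * m - j) / comb(2 * m - 1, j)
               for j in range(m, 2 * m))

def rhs(m):
    return 2 * m * (H(2 * m) - H(m))
```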
Claude Leibovici
82,404
<p>For a short time, I hoped to be able to express the result in terms of hypergeometric functions but I failed.</p> <p>If we let <span class="math-container">$k=m+n$</span>, the problem reduces to <span class="math-container">$$S_n=\sum_{j=m}^{m+n} \frac{\Gamma (j+1)\,\,\Gamma (m+n+1-j)}{\Gamma (m+n+1)} \,\binom{m}{2 m-j} $$</span> which can be written as <span class="math-container">$$S_n=\frac{\Gamma(m+1)}{n!\ \Gamma(m+n+1)} P_{n}(m)$$</span> where <span class="math-container">$P_n$</span> is a polynomial of degree <span class="math-container">$2n$</span> (it does not seem to be factorable for any <span class="math-container">$n$</span>).</p> <p>The first few are <span class="math-container">$$\left( \begin{array}{cc} n &amp; P_n \\ 0 &amp; 1 \\ 1 &amp; m^2+m+1 \\ 2 &amp; m^4+2 m^3+m^2+4 \\ 3 &amp; m^6+3 m^5-2 m^4-9 m^3+13 m^2+18 m+36 \\ 4 &amp; m^8+4 m^7-10 m^6-44 m^5+53 m^4+184 m^3+100 m^2+576 \end{array} \right)$$</span></p> <p>I have not been able to find any pattern for the coefficients [except that the constant term is <span class="math-container">$(n!)^2$</span> and that the coefficient of <span class="math-container">$m^{2n}$</span> is <span class="math-container">$1$</span> (!!) ].</p> <p>The only thing I observed is that, if <span class="math-container">$n$</span> is even, the term in <span class="math-container">$m$</span> is systematically missing.</p> <p>Without any proof of it, I do not think that a closed form could exist for a general <span class="math-container">$n$</span>.</p>
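The table can be checked against the direct sum (a Python sketch; `S_closed` below is the $\Gamma$-formula above evaluated at integer arguments):

```python
from math import comb, factorial

# the polynomial table from the answer, P_0 .. P_4
P = [
    lambda m: 1,
    lambda m: m**2 + m + 1,
    lambda m: m**4 + 2*m**3 + m**2 + 4,
    lambda m: m**6 + 3*m**5 - 2*m**4 - 9*m**3 + 13*m**2 + 18*m + 36,
    lambda m: (m**8 + 4*m**7 - 10*m**6 - 44*m**5 + 53*m**4
               + 184*m**3 + 100*m**2 + 576),
]

def S_direct(m, n):
    # the sum computed term by term; terms with 2m - j < 0 vanish
    k = m + n
    return sum(comb(m, 2*m - j) / comb(k, j)
               for j in range(m, min(k, 2*m) + 1))

def S_closed(m, n):
    # Gamma(m+1) P_n(m) / (n! Gamma(m+n+1)) = m! P_n(m) / (n! (m+n)!)
    return factorial(m) * P[n](m) / (factorial(n) * factorial(m + n))
```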
1,858,095
<p>Let $f : \mathbb{R} \rightarrow \mathbb{R}$ be a function with:</p> <p>$$f(x) = x - \arctan{x}$$</p> <p>We consider the sequence $(x_{n})$ with $x_{0} &gt; 0$ and $x_{n + 1} = f(x_{n})$, for any $n \in \mathbb{N}$.</p> <p>Prove that $(x_{n})$ is convergent and find its limit.</p> <p>So far, I've proved that $f(x) \geq 0$ for any $x \geq 0$. From this I've concluded that $(x_{n})$ is a positive sequence. Now I need to find $(x_{n})$'s monotonicity. I've calculated both $x_{n + 1} - x_{n}$ and $\frac{x_{n + 1}}{x_{n}}$. For the first I've got $- \arctan{x_{n}}$ and the second one gave me $1 - \frac{\arctan{x_{n}}}{x_{n}}$. From this point I don't know what to do next.</p> <p>Thank you in advance!</p>
Adelafif
229,367
<p>The derivative is $\frac{x^2}{1+x^2}$, and the function is a contraction mapping on any interval $[0,n]$. The iteration therefore converges to the unique fixed point $x=0$.</p>
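A short numerical illustration (my own sketch): the orbit is positive, decreasing, and collapses to the fixed point $x = 0$ very quickly once $x$ is small, since $x - \arctan x = x^3/3 - \dots$ there.

```python
import math

x = 5.0                          # any starting point x0 > 0
trajectory = [x]
for _ in range(60):
    x = x - math.atan(x)         # x_{n+1} = f(x_n)
    trajectory.append(x)
```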
511,304
<p>Given the ODE: </p> <p>$2(x+1)y' = y$</p> <p>How can I solve that using power series? I started to think about it:</p> <p>$ \\2(x+1)\sum_{n=1}^{\infty}{nc_nx^{n-1}}-\sum_{n=0}^{\infty}{c_nx^n}=0 \\2\sum_{n=1}^{\infty}{nc_nx^{n}}+2\sum_{n=1}^{\infty}{nc_nx^{n-1}}-\sum_{n=0}^{\infty}{c_nx^n}=0 \\\sum_{n=0}^{\infty}{2nc_nx^{n}}+\sum_{n=0}^{\infty}{2(n+1)c_{n+1}x^{n}}-\sum_{n=0}^{\infty}{c_nx^n} = 0 \\\sum_{n=0}^{\infty}{[2nc_n + 2(n+1)c_{n+1} - c_n]x^n} = 0 $</p> <p>Then:</p> <p>$ \\2nc_{n}+2(n+1)c_{n+1}-c_n=0 \\c_{n+1}=\frac{c_n(1-2n)}{2(n+1)} $</p> <p>Now I need the general formula for $c_n$, but I cannot see the pattern by assigning values to $n$. How can I proceed?</p>
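For what it's worth, generating the coefficients from the recurrence numerically (sketch below, with $c_0 = 1$) shows they coincide with the generalized binomial coefficients $\binom{1/2}{n}$, which is consistent with the exact solution $y = c_0\sqrt{x+1}$ of $2(x+1)y' = y$:

```python
import math

# coefficients from the recurrence c_{n+1} = c_n (1 - 2n) / (2(n + 1))
coeffs = [1.0]
for n in range(40):
    coeffs.append(coeffs[-1] * (1 - 2 * n) / (2 * (n + 1)))

def binom_half(n):
    # generalized binomial coefficient C(1/2, n)
    out = 1.0
    for i in range(n):
        out *= (0.5 - i) / (i + 1)
    return out

# the truncated series should reproduce sqrt(1 + x) for |x| < 1
x = 0.5
series = sum(c * x**n for n, c in enumerate(coeffs))
```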
Ross Millikan
1,827
<p>For $x$ large, $\tanh (x)$ is just a little less than $1$ and $\arctan (\frac 1C)$ is essentially constant. The integral will then be of order $\lambda \arctan (\frac 1C)$. The limit will diverge (one way or the other) unless $C=\frac 1{\tan 1}\approx 0.642$.</p>
218,020
<p>I came across the following statement: Let $R$ be a complete local Noetherian commutative ring. If $A$ is a commutative $R$-algebra that is finitely generated and free as a module over $R$, then $A$ is a semi-local ring that is the direct product of local rings. (I'm unsure if completeness or the Noetherian condition is actually relevant to this; but this is the specific fact being used)</p> <p>I can prove it is a semi-local ring: Let $m$ be the maximal ideal of $R$, then $\frac{A}{mA}$ is finite dimensional as a $\frac{R}{m}$ vector space, and thus Artinian. Therefore, it only has a finite number of maximal ideals, and its maximal ideals correspond to maximal ideals of $A$ containing $mA$. But all maximal ideals of $A$ contain $mA$: To see this, this is equivalent to the Jacobson radical containing $mA$, which is equivalent to $1-x$ being a unit in $A$ for any $x \in mA$. The inverse is just $1+x+x^2+\cdots$, which exists by completeness.</p> <p>But why is $A$ necessarily the direct product of local rings?</p>
sacohe
45,227
<p>I don't think I can put this into a formula (I'll reply back if I can come up with a simple one), but the way I would solve a problem like this is to just think it through.</p> <p>Let's take $10$ as an example. Here are all the possible combinations of single digit, positive integers that add up to $10$:</p> <ul> <li>$1, 9$</li> <li>$2, 8$</li> <li>$3, 7$</li> <li>$4, 6$</li> <li>$5, 5$</li> </ul> <p>Remember that each set of digits could be joined in two ways to make two numbers, so all of the possible two digit numbers are as follows:</p> <ul> <li>$1, 9 \Rightarrow 19, 91$</li> <li>$2, 8 \Rightarrow 28, 82$</li> <li>$3, 7 \Rightarrow 37, 73$</li> <li>$4, 6 \Rightarrow 46, 64$</li> <li>$5, 5 \Rightarrow 55$</li> </ul> <p>Technically, we could list many more numbers of 3-10 digits (i.e. 127, 523), but if we have two digit possibilities, then there's no reason to check three digits and up since we're looking for the smallest number.</p> <p>From the list of numbers above, you can see that $19$ is the smallest. However, we could have found this without exhausting all of the two digit options. Unfortunately, I am suggesting a guess-and-check method, but a strategical/informed version. In this case, we check if 1 could be a possible first digit, since we're looking for the smallest possible number. $10-1 = 9$, so $19$ is a valid result and we can not find a smaller answer (since there is no one-digit number that would equal $10$ and there are no other two digit numbers in the 10s that would add up to $10$.</p> <p>Now, let me take 16 to show you the proper approach. You don't just want to start with $16-1$ and then $16-2$ since these return a two-digit number ($1+15 = 16$, etc.), so you want to make an informed guess to try and find the lowest pair of single digit numbers that add up to $16$. 
Since $6+10 = 16$, anything $\le 6$ would have a two-digit partner, so we try $7$ and find that $7+9 = 16$, so $79$ is our smallest value that adds up to 16.</p> <p>Let's do another example that is $&lt; 20$: $15$. Since $5+10 = 15$, try $6$ since this should be the first with another single digit (since you're technically adding $1$ to $5$ and subtracting $1$ from $10$), so you find $6$ and $9$, or $69$.</p> <p>Once you get to $19$, it's not possible to find a two-digit answer that adds up to $19$, so we need to move up to 3 digits. Try using $1$ as the first digit (this only works for $19$ by the way, just like it only worked for $10$ for 2 digits), so $19-1 = 18$ and now we need to find two one-digit numbers that will add up to $18$, which is $9+9$, so $1+9+9 = 19$, so our answer is $199$.</p> <p>One-digit answers will only get you as far as $9$; two-digit answers will only get you as far as $9+9=18$; three digits will only go up to $9+9+9=27$; and so on.</p> <p>Sorry there isn't a formula that I can think of to solve this quickly, but with some informed tries, you should be able to find the answer rather quickly.</p>
218,020
<p>I came across the following statement: Let $R$ be a complete local Noetherian commutative ring. If $A$ is a commutative $R$-algebra that is finitely generated and free as a module over $R$, then $A$ is a semi-local ring that is the direct product of local rings. (I'm unsure if completeness or the Noetherian condition is actually relevant to this; but this is the specific fact being used)</p> <p>I can prove it is a semi-local ring: Let $m$ be the maximal ideal of $R$, then $\frac{A}{mA}$ is finite dimensional as a $\frac{R}{m}$ vector space, and thus Artinian. Therefore, it only has a finite number of maximal ideals, and its maximal ideals correspond to maximal ideals of $A$ containing $mA$. But all maximal ideals of $A$ contain $mA$: To see this, this is equivalent to the Jacobson radical containing $mA$, which is equivalent to $1-x$ being a unit in $A$ for any $x \in mA$. The inverse is just $1+x+x^2+\cdots$, which exists by completeness.</p> <p>But why is $A$ necessarily the direct product of local rings?</p>
Ross Millikan
1,827
<p>If $s$ is the sum, the number of digits is $k=\lceil s/9 \rceil$. The lead digit is then $s+9-9k$, followed by $k-1$ nines.</p>
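This closed form is easy to check against brute force (a Python sketch):

```python
def brute(s):
    # smallest positive integer whose digits sum to s, by direct search
    n = 1
    while sum(int(d) for d in str(n)) != s:
        n += 1
    return n

def closed_form(s):
    # k digits, lead digit s + 9 - 9k, then k - 1 nines
    k = -(-s // 9)                       # ceil(s / 9)
    return int(str(s + 9 - 9 * k) + '9' * (k - 1))
```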
110,896
<p>Now I have a function, say $f(k,z)=e^{-kz}(1+kz)$.</p> <p>I want to find the $n$th $\log$ derivative with respect to $z$, like $(z\partial_z)^{(n)}f(k,z)$ (or $(\partial_{\ln z})^{(n)}f(k,z)$ if you like), where the $(n)$ denotes that we take the derivative $n$ times. </p> <p>I found the answer in <a href="https://mathematica.stackexchange.com/questions/9598">this question</a> quite helpful for finding a general expression for $\partial_z^{(n)}f(k,z)$; however, I don't know how to generalize it to the $\log$-derivative case using <em>Mathematica</em>. Any suggestions?</p>
Bob Hanlon
9,362
<pre><code>f[k_, z_] = E^(-k*z)*(1 + k*z); </code></pre> <p>Direct calculation using <code>Nest</code></p> <pre><code>dOp1[f_, n_Integer?NonNegative] := Nest[z*D[#, z] &amp;, f, n] </code></pre> <p><code>Simplify</code> at each step to reduce number of terms to be differentiated</p> <pre><code>dOp2[f_, n_Integer?NonNegative] := Nest[Simplify[z*D[#, z]] &amp;, f, n] </code></pre> <p>Transform for straightforward differentiation rather than differential operator</p> <pre><code>dOp3[f_, n_Integer?NonNegative] := D[f /. z -&gt; E^z, {z, n}] /. z -&gt; Log[z] </code></pre> <p>Equivalent series expansion</p> <pre><code>dOp4[f_, n_Integer?NonNegative] := Sum[StirlingS2[n, i]*z^i*D[f, {z, i}], {i, n}] </code></pre> <p>Comparing performance of the different approaches</p> <pre><code>m = 15; g1 = dOp1[f[k, z], m]; // AbsoluteTiming (* {0.153352, Null} *) g2 = dOp2[f[k, z], m]; // AbsoluteTiming (* {0.11716, Null} *) g3 = dOp3[f[k, z], m]; // AbsoluteTiming (* {0.102199, Null} *) g4 = dOp4[f[k, z], m]; // AbsoluteTiming (* {0.000697, Null} *) </code></pre> <p>Verifying that all results are the same</p> <pre><code>g1 == g2 == g3 == g4 // Simplify (* True *) </code></pre> <p>Relative efficiency varies depending on order (<code>n</code>) and specific function (<code>f</code>).</p>
3,979,686
<ul> <li><a href="https://www.britannica.com/science/derivative-mathematics" rel="nofollow noreferrer">Derivative</a></li> </ul> <p>This article says the following:</p> <blockquote> <p>To find the slope at the desired point, the choice of the second point needed to calculate the ratio represents a difficulty because, in general, the ratio will represent only an average slope between the points, rather than the actual slope at either point (see figure).</p> </blockquote> <p>I have simplified this as follows:</p> <blockquote> <p>To find the slope at the desired point we need a second point to calculate the ratio. The choice of the second point represents difficulty. Because, in general, the ratio will not represent the actual slope at either point. Rather, it will represent an average slope between the points.</p> </blockquote> <p>What is the &quot;average slope&quot;? What is the &quot;actual slope&quot;? What is the difference between these two?</p>
GEdgar
442
<p>Suppose <span class="math-container">$a&gt;0$</span>. Complete the square. You get one of these cases: <span class="math-container">$$ \frac{1}{\sqrt{a}}\int\frac{dx}{\sqrt{(x-\beta)^2}},\qquad \beta\in \mathbb R,\\ \frac{1}{\sqrt{a}}\int\frac{dx}{\sqrt{(x-\beta)^2+\gamma^2}},\qquad \beta\in \mathbb R, \gamma &gt; 0,\\ \frac{1}{\sqrt{a}}\int\frac{dx}{\sqrt{(x-\beta)^2-\gamma^2}},\qquad \beta\in \mathbb R, \gamma &gt; 0 . $$</span> The first one has a &quot;<span class="math-container">$\log$</span>&quot; solution, the second one has an &quot;<span class="math-container">$\text{asinh}$</span>&quot; solution, and the third one has an &quot;<span class="math-container">$\text{acosh}$</span>&quot; solution (both inverse hyperbolic functions can also be written as logarithms).</p> <p>There are similar cases when <span class="math-container">$a&lt;0$</span>; the one with a nonempty real domain, <span class="math-container">$\frac{1}{\sqrt{-a}}\int\frac{dx}{\sqrt{\gamma^2-(x-\beta)^2}}$</span>, has an &quot;<span class="math-container">$\arcsin$</span>&quot; solution.</p>
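As a quick sanity check of the middle case (a sketch; the quadratic $x^2+2x+5$ and the interval $[0,1]$ are arbitrary choices): completing the square gives $(x+1)^2+2^2$, so an antiderivative of $1/\sqrt{x^2+2x+5}$ should be $\operatorname{asinh}\!\big(\tfrac{x+1}{2}\big)$, and numeric quadrature agrees.

```python
import math

def integrand(x):
    # 1 / sqrt(x^2 + 2x + 5) = 1 / sqrt((x+1)^2 + 2^2)
    return 1.0 / math.sqrt(x * x + 2.0 * x + 5.0)

def midpoint_rule(f, a, b, n=200000):
    # simple composite midpoint quadrature
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

numeric = midpoint_rule(integrand, 0.0, 1.0)
# antiderivative asinh((x+1)/2) evaluated at the endpoints
closed_form = math.asinh(1.0) - math.asinh(0.5)
print(numeric, closed_form)
```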
69,711
<blockquote> <p>Find an equation of the tangent line to the graph of $y= \sqrt{x-3}$ that is perpendicular to $6x+3y-4=0$. </p> </blockquote> <p>I don't understand what it's asking. Is this the normal line? How do I solve this?</p>
AndrΓ© Nicolas
6,312
<p>Since this may be homework (please correct me if it isn't), I will give a few hints only.</p> <p>Hint 1: What is the slope of the given line $6x+3y-4=0$?</p> <p>Hint 2: What is the slope of any line perpendicular to the given line?</p> <p>Hint 3: So the "mystery" tangent line must have the slope reached in Hint 2.</p> <p>Hint 4: Let $\ell$ be a tangent line to $y=\sqrt{x-3}$ at the point $(a,\sqrt{a-3})$. <strong>In terms of</strong> $a$, what is the slope of $\ell$?</p> <p>Hint 5: Combine the results obtained in the two previous hints. The rest should be familiar. </p>
69,711
<blockquote> <p>Find an equation of the tangent line to the graph of $y= \sqrt{x-3}$ that is perpendicular to $6x+3y-4=0$. </p> </blockquote> <p>I don't understand what it's asking. Is this the normal line? How do I solve this?</p>
Gerry Myerson
8,269
<p>Andre's answer is good. Another approach which may stand you in good stead beyond this particular question is: draw a diagram. Sketch the graph of $y=\sqrt{x-3}$ (it doesn't have to be a real good sketch, actually it's probably good enough just to draw some random curve), sketch the line $6x+3y-4=0$ (again, probably any line will do, if all we want is to work out what the question is asking). Draw any one of the many tangents to the graph of $y=\sqrt{x-3}$. Does the tangent you have just drawn meet the line $6x+3y-4=0$ at right angles? Probably not. Draw a different tangent to the graph of $y=\sqrt{x-3}$. Is this one perpendicular to the line $6x+3y-4=0$? Are you getting a feel for what the question is asking? </p> <p>Often, drawing a simple diagram not only helps you understand what a question is asking, it helps you see how to answer it. </p>
1,159,599
<p>can someone give me a hint on how to calculate this integral?</p> <p>$\int _0^{\frac{1}{3}} \frac{e^{-x^2}}{\sqrt{1-x^2}}dx$</p> <p>Thanks so much!</p>
Lucian
93,448
<p>If the upper limit had been <span class="math-container">$1$</span> instead of <span class="math-container">$\dfrac13$</span>, then the definite integral could have been expressible in terms of <a href="http://en.wikipedia.org/wiki/Bessel_function" rel="nofollow noreferrer">Bessel functions</a> <span class="math-container">$\Big($</span>just let <span class="math-container">$x=\sin t$</span>, and then use the fact that <span class="math-container">$\sin^2t=\dfrac{1-\cos2t}2\Big)$</span>. But, as it stands, one would need &quot;incomplete&quot; Bessel functions to express its value. Unfortunately, no such standard special functions exist. Alternately, one might expand the integrand into its <a href="http://en.wikipedia.org/wiki/Binomial_series" rel="nofollow noreferrer">binomial series</a>, and switch the order of summation and integration, in the hope of obtaining a <a href="http://en.wikipedia.org/wiki/Hypergeometric_function" rel="nofollow noreferrer">hypergeometric function</a>.</p>
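Since no elementary closed form is available, a direct numeric evaluation is the practical route. A sketch with a basic midpoint rule (the step count is an arbitrary choice):

```python
import math

def integrand(x):
    return math.exp(-x * x) / math.sqrt(1.0 - x * x)

def midpoint_rule(f, a, b, n=200000):
    # simple composite midpoint quadrature
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

value = midpoint_rule(integrand, 0.0, 1.0 / 3.0)
print(value)  # roughly 0.3275
```

This agrees with the first terms of the series expansion of the integrand, $1 - x^2/2 + 3x^4/8 + \dots$, integrated term by term.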
1,999,194
<p>Determine the following system of equations has 'a unique solution', 'many solutions' or 'no solution': $$\begin{cases} &amp; x + 2y + z &amp;= 1\\ &amp;2x + 2y - 2z &amp;= 4\\ &amp;-x + 2y - 3z &amp;= 5 \end{cases} $$</p> <p>Answer = A unique solution</p> <p>How is it a unique solution? Could anyone explain clearly? </p> <p>Thanks</p>
StackTD
159,845
<blockquote> <p>$$\begin{cases} &amp; x + 2y + z &amp;= 1\\ &amp;2x + 2y - 2z &amp;= 4\\ &amp;-x + 2y - 3z &amp;= 5 \end{cases}$$</p> </blockquote> <p>If you let: $$A=\begin{bmatrix}1&amp;2&amp;1\\2&amp;2&amp;-2\\-1&amp;2&amp;-3\end{bmatrix}\;,\quad X=\begin{bmatrix}x\\y\\z\end{bmatrix}\;,\quad B=\begin{bmatrix}1\\4\\5\end{bmatrix}$$ then the system of equations can be written in matrix form as $AX=B$.</p> <p>This system has no solutions if the rank of $A$ is less than the rank of the <a href="https://en.wikipedia.org/wiki/Augmented_matrix" rel="nofollow noreferrer">augmented matrix</a> $(A\vert B)$. If these ranks are equal, the system has at least one solution.</p> <p>In your case, with the same number of equations as variables and thus a square coefficient matrix $A$, this solution is unique if the rank of $A$ is $3$. A matrix of full rank has a non-zero determinant, so that would be one quick way to verify that this system indeed has a unique solution:</p> <p>$$\det A=\begin{vmatrix}1&amp;2&amp;1\\2&amp;2&amp;-2\\-1&amp;2&amp;-3\end{vmatrix} = \begin{vmatrix}2&amp;3&amp;1\\0&amp;0&amp;-2\\-4&amp;-1&amp;-3\end{vmatrix} =-(-2)\begin{vmatrix}2&amp;3\\-4&amp;-1\end{vmatrix}= 2\left( -2+12 \right)=20$$</p> <p>There are other ways but perhaps you should clarify what you have already learned (relevant theory, properties) about this topic.</p>
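The rank check is easy to reproduce with a few lines of code (a sketch in pure Python; cofactor expansion for the determinant, then Cramer's rule to exhibit the unique solution):

```python
def det3(m):
    # cofactor expansion of a 3x3 determinant along the first row
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[1, 2, 1],
     [2, 2, -2],
     [-1, 2, -3]]
rhs = [1, 4, 5]

dA = det3(A)  # nonzero determinant <=> full rank <=> unique solution

def replace_col(m, j, col):
    # copy of m with column j replaced by col (for Cramer's rule)
    return [[col[r] if c == j else m[r][c] for c in range(3)] for r in range(3)]

solution = [det3(replace_col(A, j, rhs)) / dA for j in range(3)]
print(dA, solution)
```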
174,655
<p>So I have 2 lists of 10000+ lists of 3 numbers, e.g.</p> <pre><code>{{1,2,3},{4,5,6},{7,8,9},...} {{2,1,3},{4,5,6},{41,2,0},...} </code></pre> <p>Wanting a result like </p> <pre><code>{2,...} </code></pre> <p>Getting some sort of list of <code>True</code>/<code>False</code> is also probably enough, like this:</p> <pre><code>{False,True,False,...} </code></pre> <p>I guess I could use <code>Position</code> once I've done that.</p> <p>I tried to use <code>Thread</code>, as below:</p> <pre><code>Thread[{{a, b}, {c, d}, {e, f}} == {{a, b}, {d, e}, {f, e}}] </code></pre> <p>which gives the <code>True</code>/<code>False</code> output</p> <pre><code>{True, {c, d} == {d, e}, {e, f} == {f, e}} </code></pre> <p>But as soon as there are actual numbers in place, it doesn't work:</p> <pre><code>Thread[{{1, 2}, {2, 3}, {4, 5}} == {{1, 3}, {2, 3}, {4, 5}}] </code></pre> <p>Returns</p> <pre><code>False </code></pre> <p>I'd really appreciate any help you could give.</p> <p>Thanks,</p> <p>H</p>
Henrik Schumacher
38,178
<p>This should be rather fast for very long packed arrays of integers.</p> <pre><code>a = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}, {1, 2, 3}}; b = {{2, 1, 3}, {4, 5, 6}, {41, 2, 0}, {1, 2, 3}}; f2[a_,b_] := Position[Unitize[Subtract[a, b]].ConstantArray[1, Dimensions[a][[2]]], 0, 1]; f2[a,b] </code></pre> <blockquote> <p>{{2}, {4}}</p> </blockquote> <p>More complicated but twice as fast (thanks to Carl Woll for the idea to use <code>1.</code> instead of <code>1</code> in the <code>ConstantArray</code>):</p> <pre><code>f3[a_, b_] := SparseArray[ Unitize[ Unitize[Subtract[a, b]].ConstantArray[1., Dimensions[a][[2]]]], {Length[a]}, 1 ]["NonzeroPositions"]; RandomSeed[12345]; {a, b} = RandomInteger[{1, 9}, {2, 1000000, 3}]; r2 = f2[a, b]; // RepeatedTiming // First r3 = f3[a, b]; // RepeatedTiming // First r2 == r3 </code></pre> <blockquote> <p>0.060</p> <p>0.021</p> <p>True</p> </blockquote>
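For readers outside Mathematica, the underlying row-matching idea is a one-liner in many languages. A hedged Python sketch (1-based positions, to mirror <code>Position</code>; it will be slower than the packed-array approach above for very large inputs):

```python
a = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3]]
b = [[2, 1, 3], [4, 5, 6], [41, 2, 0], [1, 2, 3]]

# positions (1-based) where the corresponding rows are identical
matches = [i for i, (ra, rb) in enumerate(zip(a, b), start=1) if ra == rb]
print(matches)  # expected: [2, 4]
```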
3,936,102
<p>Can you have a function <span class="math-container">$f \notin L^1$</span> but its Fourier transform <span class="math-container">$\hat{f} \in L^1$</span>? Ive been playing around with examples and I cant find one, but I also cant prove one doesn't exist.</p>
md2perpe
168,433
<p>For <span class="math-container">$$ f(x) = \frac{\sin x}{x} \not\in L^1(\mathbb{R}) $$</span> one has <span class="math-container">$$ \hat{f}(\xi) = C \chi_{[-a,a]}(\xi) \in L^1(\mathbb{R}) $$</span> where <span class="math-container">$C$</span> and <span class="math-container">$a$</span> are some constants depending on the exact choice of definition of Fourier transform.</p>
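The non-integrability of $|\sin x/x|$ can be made concrete numerically: the partial integrals $\int_\pi^{N\pi}|\sin x|/x\,dx$ grow like $(2/\pi)\log N$ and never settle down. A rough sketch (midpoint rule; the cutoffs $100\pi$ and $200\pi$ are arbitrary):

```python
import math

def partial_integral(n_pi, steps_per_pi=2000):
    # midpoint rule for the integral of |sin x| / x over [pi, n_pi * pi]
    a, b = math.pi, n_pi * math.pi
    n = steps_per_pi * (n_pi - 1)
    h = (b - a) / n
    return h * sum(abs(math.sin(a + (i + 0.5) * h)) / (a + (i + 0.5) * h)
                   for i in range(n))

i100 = partial_integral(100)
i200 = partial_integral(200)
# doubling the range adds roughly (2/pi) * ln 2 ~ 0.44: the integral
# of the absolute value diverges, slowly, like log N
print(i100, i200, i200 - i100)
```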
3,657,026
<p>I need to prove expression using mathematical induction <span class="math-container">$P(1)$</span> and <span class="math-container">$P(k+1)$</span>, that:</p> <p><span class="math-container">$$ 1^2 + 2^2 + \dots + n^2 = \frac{1}{6}n(n + 1)(2n + 1) $$</span></p> <p>Proving <span class="math-container">$P(1)$</span> gave no difficulties, however I was stuck with <span class="math-container">$P(k+1)$</span>, I've reached this point:</p> <p><span class="math-container">$$ 1^2 + \dots + (k+1)^2 = 1^2 + \dots + k^2 + (k+1)^2 = \\ \frac{1}{6}k(k+1)(2k+1) + (k+1)^2 $$</span></p> <p>I've checked answer from the exercise book, the next step would be:</p> <p><span class="math-container">$$ = \frac{1}{6}(k+1)(k(2k+1)+6(k+1)) $$</span></p> <p>How it was converted like that? Could you provide some explanation?</p> <p>Thank you in advance</p>
Siong Thye Goh
306,553
<p><span class="math-container">\begin{align} \frac16 k(k+1)(2k+1) + (k+1)^2 &amp;= \frac{k(k+1)(2k+1) + 6(k+1)^2}{6} \\ &amp;=\frac{(k+1)}{6} \cdot \left(k(2k+1) + 6(k+1) \right)\\ &amp;= \frac{k+1}{6} \cdot (2k^2+7k+6)\\ &amp;= \frac{k+1}{6} \cdot (2k+3)(k+2)\\ \end{align}</span></p> <p>where we first make them have the same denominator then factor <span class="math-container">$k+1$</span> out.</p>
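The closed form being proved is trivial to spot-check by brute force (a sketch; the cutoff of 200 is arbitrary):

```python
def sum_of_squares(n):
    # left-hand side: 1^2 + 2^2 + ... + n^2
    return sum(i * i for i in range(1, n + 1))

def closed_form(n):
    # right-hand side: n(n+1)(2n+1)/6, always an integer
    return n * (n + 1) * (2 * n + 1) // 6

checks = all(sum_of_squares(n) == closed_form(n) for n in range(1, 201))
print(checks)  # expected: True
```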
571,955
<p>I've tried solving this problem every way I know how and I just can't get it. I've looked at similar problems of this type, and I still cannot get an answer that seems right.</p> <p>Parametric Equations:</p> <p><strong>a) Write the distance between the line and the point as a function of s</strong></p> <p><strong>b) Find the value of s such that the above distance is minimum</strong></p> <blockquote> <p><span class="math-container">$$x = -t+3$$</span></p> <p><span class="math-container">$$y = \frac{t}{2} +1$$</span></p> <p><span class="math-container">$$z = 2t - 1$$</span></p> </blockquote> <p>Point:</p> <blockquote> <p>(4,3,s)</p> </blockquote> <p>The most obvious thing seems to be to use the distance formula, but this gives a function of s and t. I can take partials of this equation to find s and t, but for some reason this seems wrong to me.</p> <p><span class="math-container">$$D = \sqrt{(4-(-t + 3))^2 + (3 - (1/2t + 1))^2 + (s - (2t- 1))^2}$$</span></p> <p><span class="math-container">$$D^2 = (1 + t)^2 + (2 - 1/2t)^2 + (s - 2t -1)^2$$</span></p> <p><span class="math-container">$$D^2 = 19/4t^2 + 4t + s^2 - 4st - 2s + 6$$</span></p> <p>partial w/ respect to s = 2s - 4t -2</p> <p>partial w/ respect to t = 19/2t + 4 - 4s</p> <p>t = 0, s = 1 ???</p> <p>I've tried all sorts of other methods, including taking the cross product of the two direction vectors to find a normal vector perpendicular to both, as this is the minimum distance from a vector to a point. At this point my head is spinning and I just don't know the right approach. Any hints? Am I on the right track?</p>
Steven Alexis Gregory
75,410
<p>Line: $(x, y, z) = (3, 1, -1) + t(-1, 1/2, 2)$</p> <p>Point: $(4, 3, s)$</p> <p>The closest point of the line lies in the plane through $(4, 3, s)$ that is perpendicular to the line, i.e. perpendicular to the direction vector $(-1, 1/2, 2)$.</p> <p>This is the plane $-x + \frac 1 2 y + 2z = C.\;$ To find $C$, we let $(x, y, z) = (4, 3, s)$:</p> <p>$C = -4 + \frac 3 2 + 2s = 2s - \frac 5 2$</p> <p>So the plane is $-x + \frac 1 2 y + 2z = 2s - \frac 5 2.\;$ This plane intersects the line when</p> <p>$-(3-t) + \frac 1 2 \left(1 + \frac 1 2 t\right) + 2(-1 + 2t) = 2s - \frac 5 2,$</p> <p>that is, when $\frac{21}{4} t - \frac 9 2 = 2s - \frac 5 2$, so $t = \frac{8(s+1)}{21}$.</p> <p>Substituting this $t$ back into the line and computing the distance to $(4, 3, s)$ gives, after simplification,</p> <p>$D(s) = \sqrt{\frac{5\left(s^2 + 2s + 22\right)}{21}}$</p> <p>which is minimized at $s = -1$, where $D = \sqrt 5$.</p>
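A hedged numeric cross-check (the sample value $s=2$ and the grid bounds are arbitrary): minimize the squared distance over $t$ by brute force, then confirm that at the minimizer the segment from the line to the point is perpendicular to the line's direction vector.

```python
def line_point(t):
    # the parametric line from the problem
    return (3 - t, 1 + t / 2, -1 + 2 * t)

def dist_sq(t, s):
    x, y, z = line_point(t)
    return (x - 4) ** 2 + (y - 3) ** 2 + (z - s) ** 2

s = 2.0
# brute-force minimization over a fine t-grid on [-10, 10]
ts = [i / 10000 for i in range(-100000, 100001)]
t_best = min(ts, key=lambda t: dist_sq(t, s))

# perpendicularity: (P(t*) - Q) . v should vanish at the minimum
v = (-1.0, 0.5, 2.0)
px, py, pz = line_point(t_best)
dot = (px - 4) * v[0] + (py - 3) * v[1] + (pz - s) * v[2]
print(t_best, dist_sq(t_best, s), dot)
```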
2,758,965
<p>Show $\log(1-x)=-\sum_{n=1}^{\infty}\frac{x^n}{n}\,\forall x\in(-1,1)$. Which value does $\sum_{n=1}^{\infty}\frac{(-1)^n}{n}$ take?</p> <hr> <p>Now because I skipped forward in my (personal) textbook I know that I could tackle this using knowledge of the Maclaurin/Taylor series. However, it was not covered in the lecture (yet) and my mind is fixated on using Maclaurin/Taylor (which I'm not allowed to use)! Can anybody show me an alternate approach that I will probably feel very stupid for not seeing?</p>
timon92
210,525
<p>Hint: $$\frac 1a + \frac{1}{bc} = \frac{bc+a}{abc} = \frac{bc+a(a+b+c)}{abc} = \frac{(a+b)(a+c)}{abc}.$$</p>
411,868
<p>Laver showed in 1995 that the period of the first row of certain <a href="https://en.wikipedia.org/wiki/Laver_table" rel="noreferrer">Laver tables</a> is unbounded, assuming that a rank-into-rank cardinal exists.</p> <p>The most accessible proof of his result that I was able to find is in chapter 12 of Patrick Dehornoy's Braids and Self-Distributivity (<a href="https://link.springer.com/book/10.1007/978-3-0348-8442-6" rel="noreferrer">Springer 2000</a>). The proof is quite technical. Laver defines an algebra on the models of set-theory - technically, on certain elementary embeddings of huge sets. He defines two operations on such elementary embeddings, quotients out certain large infinities to get finite results, and investigates their properties.</p> <p>My question is: why should elementary embeddings of ZFC have anything to do with the periodicity of these finite tables?</p> <p>More generally, is there some guiding intuition I can use to make sense of Laver's very complex construction? I find it difficult to understand how Laver could have plowed through this weird and intricate technical construction without having some reason to think it would lead to some kind of specific finitary result.</p>
Wojowu
30,186
<p>As was already mentioned in the comments, the premise of the question is somewhat backwards. Indeed, looking at <a href="https://www.sciencedirect.com/science/article/pii/S0001870885710146?via%3Dihub" rel="noreferrer">Laver's paper</a>, the combinatorial structures now known as Laver tables were not at all his initial motivation. Instead, from the start and throughout Laver was interested in elementary embeddings of some <span class="math-container">$V_\lambda$</span> and the algebras they produce under certain operations, one of which is &quot;applying&quot; one elementary embedding to another. This operation satisfies a relation <span class="math-container">$a(bc)=(ab)(ac)$</span> (and thus defines what is sometimes known as a <a href="https://ncatlab.org/nlab/show/shelf" rel="noreferrer">shelf</a>). For instance, Laver has shown that the shelf generated by one elementary embedding <span class="math-container">$j$</span>, which he denotes <span class="math-container">$\mathscr A_j$</span>, is free.</p> <p>Now, <span class="math-container">$\mathscr A_j$</span> is a large and complicated algebra. One of Laver's results (essentially Theorem 11 of the paper) is that the study of <span class="math-container">$\mathscr A_j$</span> can be reduced to studying certain finite algebras, <span class="math-container">$\mathscr A_{j,n}$</span>, which have <span class="math-container">$2^n$</span> elements. These algebras are again defined using elementary embeddings, but as Laver explains after the theorem, these algebras also have a completely explicit description not referring to any large cardinals - these are the Laver tables that you are familiar with.</p> <p>Therefore Laver's motivation was studying certain algebras which arise naturally if you are interested in large cardinals. Any relation to finitary structures was an afterthought to him.</p>
13,889
<p><strong>Question:</strong> Are there intuitive ways to introduce cohomology? Pretend you're talking to a high school student; how could we use pictures and easy (even trivial!) examples to illustrate cohomology?</p> <p><strong>Why do I care:</strong> For a number of math kids I know, doing algebraic topology is fine until we get to homology, and then it begins to get a bit hazy: why does all this quotienting out work, why do we make spaces up from other spaces, how do we define attaching maps, etc, etc. I try to help my peers do basic homological calculations through a sequence of easy examples (much like the ones Hatcher begins with: taking a circle and showing how "filling it in" with a disk will make the "hole" disappear --- ) and then begin talking about what kinds of axioms would be nice to have in a theory like this. I have attempted to begin studying co-homology through "From Calculus to Cohomology" and Hatcher's text, but I cannot see the "picture" or imagine easy examples of cohomology to start with. </p>
Zach Conn
251
<p>If one operates on an open subset $U$ of Euclidean space $\mathbb{R}^n$, then de Rham cohomology falls out of trying to solve some differential equations.</p> <p>The starting observation is that a locally constant smooth function will be constant on connected components, so the dimension of the vector space of locally constant smooth functions is the number of connected components. This space is the 0th de Rham cohomology group.</p> <p>The next step, as is the key in a lot of topology, is to start throwing paths all over the place: replace the space $U$ with the paths inside it (not required to be loops in this case). So we view a 1-form $\omega$ as a function $P_{\omega}$ on paths via the integration map which sends a path to the integral of the 1-form $\omega$ along it. And again we look for those functions $P_{\omega}$ which are locally constant, meaning they remain unchanged under small deformations of the path leaving the endpoints fixed. We know that if $\omega = df$ for $f$ a smooth function, then $P_{\omega}$ satisfies this property trivially because it sends a path to the difference of the values of $f$ evaluated at the path's endpoints. We consider two locally constant path functions $P_{\omega_1}, P_{\omega_2}$ the same if their difference is $df$ for some smooth function $f$. This space is the first de Rham cohomology group.</p> <p>And so on. For the $k$th group, we consider locally constant $k$-dimensional integrals modulo trivially constant ones. The precise definition is that $H^k(U)$ is the space of closed $k$-forms modulo exact $k$-forms, where a form is closed if its exterior derivative is zero and it's exact if it's the exterior derivative of a $(k-1)$-form. For instance, the statement that on a space $U$ every closed form is exact now becomes the statement that all the higher cohomology groups are trivial. 
</p> <p>To finally phrase this in the standard language of homological algebra, you can define de Rham cohomology as the cohomology of the chain complex $(\Omega^k(U), d)$ where $\Omega^k(U)$ is the set of $k$-forms on $U$ and $d$ is the exterior derivative.</p> <p>It seems that this could be explained to anybody who has studied multivariable calculus. It has the advantage that computing some basic examples of cohomology groups requires one to think only about some calculus, nothing hard to visualize.</p>
13,889
<p><strong>Question:</strong> Are there intuitive ways to introduce cohomology? Pretend you're talking to a high school student; how could we use pictures and easy (even trivial!) examples to illustrate cohomology?</p> <p><strong>Why do I care:</strong> For a number of math kids I know, doing algebraic topology is fine until we get to homology, and then it begins to get a bit hazy: why does all this quotienting out work, why do we make spaces up from other spaces, how do we define attaching maps, etc, etc. I try to help my peers do basic homological calculations through a sequence of easy examples (much like the ones Hatcher begins with: taking a circle and showing how "filling it in" with a disk will make the "hole" disappear --- ) and then begin talking about what kinds of axioms would be nice to have in a theory like this. I have attempted to begin studying co-homology through "From Calculus to Cohomology" and Hatcher's text, but I cannot see the "picture" or imagine easy examples of cohomology to start with. </p>
aaron
6,071
<p>This is meant to be a comment below Zach Conn's answer, but for some reason I don't seem to have the option of commenting.</p> <p>This is really the same answer -- in short, integrals can be viewed as cohomology classes -- but just to give a very concrete example: </p> <p>Consider $\int_C \frac{dz}{z}$ where $C$ is a closed curve in the complex plane that misses the origin. If you think of $C$ as variable, then the value of the integral depends only on the homology class of $C$ in the punctured plane. (If $C$ represents $n$ times the usual homology generator, then the integral is $2\pi i n$.)</p> <p>In other words $\int_{\cdot} \frac{dz}{z}$ is a linear functional on the homology, which is to say a cohomology class. Which is my point: it is often the case that "cohomology" equals "functionals on homology" (though not always, e.g. in the presence of torsion). So if one understands homology, then this can be a starting point.</p>
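The concrete example above is easy to verify numerically: discretize a loop winding $n$ times around the origin and sum $f(z)\,\Delta z$; the result should be close to $2\pi i n$ regardless of the radius. A sketch with a circle:

```python
import cmath

def contour_integral(f, n_turns, radius=1.0, steps=50000):
    # approximate the contour integral of f along a circle traversed
    # n_turns times, summing f(arc midpoint) * (chord dz)
    total = 0j
    for k in range(steps):
        t0 = 2 * cmath.pi * n_turns * k / steps
        t1 = 2 * cmath.pi * n_turns * (k + 1) / steps
        zm = radius * cmath.exp(1j * (t0 + t1) / 2)
        dz = radius * (cmath.exp(1j * t1) - cmath.exp(1j * t0))
        total += f(zm) * dz
    return total

val = contour_integral(lambda z: 1 / z, n_turns=2)
print(val)  # expected: close to 4*pi*i
```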
211,623
<p>Let $G$ be a (finite) group, and $a, b \in G$ be any two elements. Consider the sequence defined by \begin{eqnarray*} s_0 &amp;=&amp; a, \\ s_1 &amp;=&amp; b, \text{and} \\ s_{n+2} &amp;=&amp; s_{n+1} s_{n} \text{ for $n \geq 0$}. \end{eqnarray*} This sequence is what we get when replace the letters in the algae L-system (cf: <a href="https://en.wikipedia.org/wiki/L-system#Example_1:_Algae" rel="nofollow">https://en.wikipedia.org/wiki/L-system#Example_1:_Algae</a>) with the two elements of our group. As $G$ is finite, there are finitely many pairs $(s_{n+1}, s_{n})$ and hence that the sequence will eventually start repeating. Do we know more about this sequence $(s_n)$? Like, when is it periodic?</p>
Ash Malyshev
44,291
<p>For that system, it is always periodic, in other words it starts repeating right away. The function $(a,b) \mapsto (b, ba)$ is invertible: we have $(c^{-1} d,c) \mapsto (c,d)$. So if iterating this function starting from some point eventually starts repeating, it must have been repeating the whole time.</p> <p>More generally, consider any word $w(x,y)$ in the free generators $x$ and $y$. By the same argument, if $\langle y, w(x,y) \rangle$ is the entire free group $F = \langle x,y \rangle$, then the corresponding sequence in a finite group will be always be periodic.</p> <p>The other direction also holds. If a finite set $S$ generates a proper subgroup $K &lt; F$, there is some finite quotient $\tilde F$ in which $\tilde S$ generates a proper subgroup of $\tilde F$. This can be seen by taking the Schreier graph of the action of $F$ on the cosets of $K$, cutting out a large ball, and linking up the cut edges to each other arbitrarily. If we apply this to $S = \{y, w(x,y)\}$, we get a sequence in $\tilde F$ which starts out $a, b, w(a,b)$. But $b$ and $w(a,b)$ generate a strictly smaller subgroup than $a$ and $b$, so $a$ will never again appear in the sequence.</p> <p>Using Nielsen reduction, it's fairly easy to write down the condition for when $y$ and $w(x,y)$ actually generate all of $\langle x, y \rangle$.</p>
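One can watch the invertibility argument in action in a small finite group. A hedged sketch using $\mathbb{Z}/m\mathbb{Z}$ under addition, where the recursion $s_{n+2}=s_{n+1}s_n$ becomes Fibonacci-style addition mod $m$: iterating the state map $(a,b)\mapsto(b,\,a+b)$, the first repeated state is always the initial one, i.e. the sequence is purely periodic with no pre-period.

```python
def first_repeat(a, b, m):
    # iterate the state map (a, b) -> (b, (a + b) % m) until some state
    # repeats; return (step of first repetition, repeated state == start?)
    start = (a % m, b % m)
    seen = {start}
    state, step = start, 0
    while True:
        state = (state[1], (state[0] + state[1]) % m)
        step += 1
        if state in seen:
            return step, state == start
        seen.add(state)

for m in (2, 5, 10, 97):
    period, pure = first_repeat(1, 1, m)
    print(m, period, pure)  # pure is always True: the state map is invertible
```

For seeds $(1,1)$ these periods are the Pisano periods, e.g. $60$ for $m=10$.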
2,333,847
<p>A function $f(x) = k$ and the domain is $\{-2,-1,\dotsc,3\}$. Would I say $$x = \{-2,-1,\dotsc,3\}\quad\text{or}\quad x \in \{-2,-1,\dotsc,3\} \ ?$$ Thanks. </p>
orlp
5,558
<p>If you really want to be formal you can say that the domain of $f(x)$ is $\{x \in \mathbb{Z} \mid -2 \leq x \leq 3\}$.</p>
272,468
<p>I wrote the <a href="https://www.researchgate.net/publication/334884635_On_a_Bivariate_Frechet_Distribution" rel="nofollow noreferrer">Frechet Distribution</a> as follows:</p> <pre><code>dist1 = ProbabilityDistribution[{&quot;PDF&quot;, \[Lambda]1/\[Alpha]1 (x/\[Alpha]1)^(-\[Lambda]1 - 1)E^-((x/\[Alpha]1)^-\[Lambda]1)}, {x, 0, \[Infinity]}, Assumptions -&gt; {\[Lambda]1 &gt; 0, \[Alpha]1 &gt; 0}] dist2 = ProbabilityDistribution[{&quot;PDF&quot;, \[Lambda]2/\[Alpha]2 (x/\[Alpha]2)^(-\[Lambda]2 - 1)E^-((x/\[Alpha]2)^-\[Lambda]2)}, {x, 0, \[Infinity]}, Assumptions -&gt; {\[Lambda]2 &gt; 0, \[Alpha]2 &gt; 0}] bidist = CopulaDistribution[{&quot;FGM&quot;, \[Gamma]}, {dist1, dist2}] </code></pre> <p>when i write cdf of coupla dist1,dist2 as follows</p> <pre><code>CDF[bidist, {x, y}] </code></pre> <p>The output is as follows:</p> <p><a href="https://i.stack.imgur.com/NUKOb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NUKOb.png" alt="11" /></a></p> <p>That is not the output from the linked publication.</p> <p><a href="https://i.stack.imgur.com/tXowD.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tXowD.jpg" alt="d3" /></a></p> <p>I want to find maximum likelihood of FGM copula, if it is possible.</p>
cvgmt
72,111
<p>Using the settings by @Akku14, we can find two roots in <code>0&lt;a&lt;1</code>.</p> <pre><code>Clear[obj2]; obj2[a_?NumericQ] := NIntegrate[(E^(1/2 (-x[1]^2 - x[2]^2)) Log[ Abs[1 - a (x[1]^2 + x[2]^2)]] x[1]^2)/(π (x[1]^2 + x[2]^2)), {x[1], -∞, ∞}, {x[2], -∞, ∞}, WorkingPrecision -&gt; 15] + Log[2]/2; FindRoot[obj2[a], {a, 0.2}] FindRoot[obj2[a], {a, 0.8}] ReImPlot[obj2[a], {a, 0, 1}] </code></pre> <blockquote> <p><code>{a -&gt; 0.121312}</code></p> </blockquote> <blockquote> <p><code>{a -&gt; 0.918611}</code></p> </blockquote> <p><a href="https://i.stack.imgur.com/EHJi0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EHJi0.png" alt="enter image description here" /></a></p>
1,642,029
<p>I'm looking at my textbooks steps for calculating the complexity of bubble sort...and it jumps a step where I don't know what exactly they did. </p> <p><a href="https://i.stack.imgur.com/XaztP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XaztP.png" alt="enter image description here"></a></p> <p>I see everything up to that point using summation rules and all, but am unsure about that jump. Any help on explaining more how they got that?</p>
AndrΓ© Nicolas
6,312
<p>It depends how we "filter out." If we remove all the $b_n$ we may end up with a sequence that is finite, or even empty. </p> <p>But if we remove say $b_1$ and $b_4$ and $b_9$ and $b_{16}$ and so on, then we are left with an infinite sequence and can repeat the process.</p>
174,528
<p>I am editing my original question, as I have figured out a method of doing what I want.</p> <p>Now my question is if there is a more elegant, efficient way to do the following:</p> <pre><code>Options[f] = { Energy -&gt; energy, Temperature -&gt; Func[Energy*Frequency, {Energy, Frequency}], Frequency -&gt; freq }; f[OptionsPattern[]] := Module[{energy, temperature, frequency}, energy = OptionValue[Energy]; frequency = OptionValue[Frequency]; temperature = If[ Head[OptionValue[Temperature]] === Func, OptionValue[Temperature][[1]] /. Map[# -&gt; OptionValue[#] &amp;, Flatten[{OptionValue[Temperature][[2]]}]], OptionValue[Temperature] ]; { Energy -&gt; energy, Frequency -&gt; frequency, Temperature -&gt; temperature }] </code></pre> <p>Please note that this has the needed generalization to change what options are used as arguments to the passed function or if desired a value (or variable) can be directly inserted as an option.</p> <p>For this application I have >70 Options (parameters) and I would like to be able to insert (if desired) a function with arguments based on any of the other options, or directly insert a value (if a function "Func" isn't desired.)</p> <p>Is this more clear?</p> <p>Thanks again!</p>
MarcoB
27,951
<p>I mentioned in the comments that I find it advantageous to use strings as the names of my user-defined options. In this case, I think it may work for your problem as well. Here is a sample idea:</p> <pre><code>ClearAll[f] Options[f] = { "Energy" -&gt; energy, "Temperature" -&gt; "Energy" "Frequency", "Frequency" -&gt; freq }; f[OptionsPattern[]] := Module[ {energy, temperature, frequency}, energy = OptionValue["Energy"]; frequency = OptionValue["Frequency"]; temperature = With[{tempopt = OptionValue["Temperature"]}, If[NumberQ@tempopt, tempopt, tempopt /. Cases[tempopt, a_String :&gt; (a -&gt; OptionValue[a]), Infinity] ] ] ] </code></pre> <p>So now you can pass a "template" to your function, using the names of the other options / parameters as placeholders for their values (you will just have to be careful not to generate recursive definitions upon function call). For instance:</p> <pre><code>f["Energy" -&gt; aNumber, "Temperature" -&gt; 2 Exp["Energy"] + Log["Frequency"]] (* Out: 2 E^aNumber + Log[freq] *) </code></pre> <p>or:</p> <pre><code>f["Energy" -&gt; 4, "Temperature" -&gt; "Energy" "Frequency"^2] (* Out: 4 freq^2 *) </code></pre>
660,315
<p>Let $A,B,C$ be sets. Identify a condition such that $A \cap C = B \cap C$ together with your condition implies $A=B$. Prove this implication. Show that your condition is necessary by finding an example where $A \cap C = B \cap C$, but $ A \neq B$</p> <p>Edit: I've read the wrong proposition/definition. UGH! The question was probably about having sets being equal, not empty. </p> <p>Now, suppose that $ A \neq B$... that would mean that $A$ isn't equal to $B$. So they are different sets, but $A \cap C = B \cap C$ are equal sets. </p> <hr> <p>I'm lost on this. I need to find a condition, but where do I even start? These are my thoughts about the question so far. </p> <p>We need to find a condition for $A \cap C = B \cap C$. </p> <p>Proposition 3.1.12 states that if $A$ and $B$ are both empty sets, then $A =B$.</p> <p>So $A \cap C$ and $B \cap C$ are both empty sets which is why $A \cap C = B \cap C$.</p> <p>It seems that there are empty sets everywhere because proposition 3.1.12 claims that if $A$ and $B$ are empty set, then $A =B$. It's like there aren't any elements at all. There's nothing. </p> <p>We need to find a condition that demonstrates that $A \cap C = B \cap C$ which are empty sets, but $A \neq B$ means that there are elements in $A$ and $B$. </p> <p>How is this even possible? </p> <p>$A \cap C$ by definition 3.2.1 is $[x: x: \in A \land x \in C]$</p> <p>x belongs in A and x belongs in C. </p> <p>$B \cap C$ by definition 3.2.1 is $[x: x: \in B \land x \in C]$</p> <p>x belongs in B and x belongs in C</p> <p>The only way I could think of is a contradiction to this... but that would mean that $A = B$ ... there are no elements in A and B, but $A \cap C \neq B \cap C$ means that there are elements. There may not be elements in A and B, but there are elements in C. </p>
copper.hat
27,978
<p>If $A \not\subset C$, then let $x \in A \setminus C$, and $B = A\setminus \{x\}$. Then you can see that $A\ne B$ and $A \cap C = B \cap C$. Hence you must have $A \subset C$. By symmetry, we must have $B \subset C$. Combining shows that you need $(A \cup B) \subset C$.</p> <p>Now suppose $(A \cup B) \subset C$. Then $A \cap C = A$ and $B \cap C = B$.</p>
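The condition $(A\cup B)\subseteq C$ can be confirmed exhaustively over a small universe (a sketch; the universe $\{0,1,2\}$ is an arbitrary choice):

```python
from itertools import combinations

universe = [0, 1, 2]
subsets = [frozenset(c) for r in range(len(universe) + 1)
           for c in combinations(universe, r)]

# sufficiency: (A union B) subset of C, together with A&C == B&C, forces A == B
sufficient = all(
    A == B
    for A in subsets for B in subsets for C in subsets
    if (A | B) <= C and (A & C) == (B & C)
)

# the hypothesis alone is not enough: find A != B with A&C == B&C
counterexample = next(
    (A, B, C)
    for A in subsets for B in subsets for C in subsets
    if A != B and (A & C) == (B & C)
)

print(sufficient, counterexample)
```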
1,385,936
<p><em>I was wondering how to approximate $\sqrt{1+\frac{1}{n}}$ by $1+\frac{1}{2n}$ without using Laurent Series.</em></p> <p>The reason why I ask was because using this approximation, we can show that the sequence $(\cos(\pi{\sqrt{n^{2}-n}})_{n=1}^{\infty}$ converges to $0$. This done using a mean-value theorem or Lipschitz (bounded derivative) argument where</p> <p>$$ |\cos(\pi{\sqrt{n^{2}-n}})-\cos(\pi{n}+\pi/2)|=|\cos(\pi{\sqrt{n^{2}-n}})| \leq \pi|\sqrt{n^{2}-n}-n-1/2| = \pi |\frac{-1/4}{n^{2}-n+n+1/2}| $$</p> <p>I looked up $\sqrt{1+\frac{1}{n}}$ and saw that this approximation can be obtained using Laurent series at $x=\infty$. I am not familiar with Laurent series since I have not had any complex analysis yet, but I was wondering if there was another naive way to see this?</p>
marty cohen
13,079
<p>Start with $(1+x/2)^2-(1+x) =1+x+x^2/4-(1+x) =x^2/4 $.</p> <p>Then, $(1+x/2)^2-(1+x) \ge 0 $, so $1+x/2 \ge \sqrt{1+x} $.</p> <p>Going the other way, which is harder, $(1+x/2)^2-x^2/4 =(1+x) $, so, for $x \ge 0$,</p> <p>$\begin{array}\\ \sqrt{1+x} &amp;=\sqrt{(1+x/2)^2-x^2/4}\\ &amp;=(1+x/2)\sqrt{1-(x/(2(1+x/2)))^2}\\ &amp;\ge (1+x/2)(1-(x/(2(1+x/2)))^2) \quad\text{since }\sqrt{1-a}&gt;1-a\text{ for }0 &lt; a &lt; 1\\ &amp;=1+x/2-\frac{(1+x/2)x^2}{4(1+x/2)^2}\\ &amp;=1+x/2-\frac{x^2}{4(1+x/2)}\\ &amp;=1+x/2-\frac{x^2}{2(2+x)}\\ &amp;&gt;1+x/2-x^2/4 \quad\text{for } x&gt;0\\ \end{array} $</p> <p>This isn't the best, but it is within a constant factor of the $x^2$ term and it is gotten by going forward, not working backwards from a known result.</p>
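Both bounds are easy to spot-check numerically (a sketch over an arbitrary grid of positive $x$; the lower bound asserted here is the weaker $1+x/2-x^2/4$, which either version of the chain implies):

```python
import math

xs = [0.05 * k for k in range(1, 61)]  # x in (0, 3]

upper_ok = all(math.sqrt(1 + x) <= 1 + x / 2 for x in xs)
lower_ok = all(math.sqrt(1 + x) > 1 + x / 2 - x * x / 4 for x in xs)
print(upper_ok, lower_ok)  # expected: True True
```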
2,747,509
<p>How would you show that if <span class="math-container">$d\mid n$</span> then <span class="math-container">$x^d-1\mid x^n-1$</span> ?</p> <p>My attempt :</p> <blockquote> <p><span class="math-container">$dq=n$</span> for some <span class="math-container">$q$</span>. <span class="math-container">$$ 1+x+\cdots+x^{d-1}\mid 1+x+\cdots+x^{n-1} \tag 1$$</span> in fact, <span class="math-container">$$(1+x^d+x^{2d}+\cdots+x^{(q-1)d=n-d})\cdot(1+x+\cdots+x^{d-1}) = 1+x+x^2 + \cdots + x^{n-1}$$</span></p> <p>By multiplying both sides of <span class="math-container">$(1)$</span> by <span class="math-container">$(x-1)$</span> we get that <span class="math-container">$1-x^d\mid 1-x^n$</span> which is the final result</p> </blockquote> <p>Is this an ok proof?</p>
Community
-1
<p>Let $\alpha$ be a solution to the equation $x^d = 1$, so that $\alpha^d = 1$. Since $d|n$ we can write $n = d\cdot k$ for some integer $k$. Thus $$1 = 1^k = \left( \alpha^d \right)^k = \alpha^{d\cdot k} = \alpha^n$$ This shows that $\alpha$ is a solution to $x^n = 1$. </p> <p>Since the $d$ roots of $x^d-1$ (the $d$-th roots of unity) are distinct, and each of them is also a root of $x^n-1$, it follows that $x^d-1\mid x^n-1$. </p>
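The underlying factorization $x^n-1=(x^d-1)\left(1+x^d+x^{2d}+\cdots+x^{(q-1)d}\right)$, which matches the identity in the question once both sides are multiplied by $x-1$, can also be sanity-checked in Python; since the degrees here are at most $20$, exact agreement at $25$ integer points already forces the polynomial identity:

```python
def geometric_factor(x, d, q):
    """Evaluate 1 + x**d + x**(2*d) + ... + x**((q-1)*d) at an integer x."""
    return sum(x ** (j * d) for j in range(q))

# exact integer check of x**n - 1 == (x**d - 1) * geometric_factor(x, d, q)
for d in (1, 2, 3, 5):
    for q in (1, 2, 3, 4):
        n = d * q
        for x in range(-12, 13):
            assert x ** n - 1 == (x ** d - 1) * geometric_factor(x, d, q)
```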
2,532,156
<p>As the title suggests I am confused between what arguments will qualify a explanation as a proof and when does the intuition betrays us. Here is the question that made me think about this:</p> <p>On a certain planet Tau Cetus more than half of its land is dry. Prove that a tunnel can be dug straight through center of the planet (assuming their technology is sufficiently advanced) starting and ending at dry lands.</p> <p>Isn't it obvious? My main motive is too understand proofs and methods of arguments better.</p>
not all wrong
37,268
<p>On one extreme, you have &quot;it is obvious that&quot;. On the other, you have formal logic where a statement is proved to be true if and only if you write a sequence of special symbols obeying the rules of the logic system. (Let's ignore Godelian incompleteness ideas, interesting and ultimately important though they are.)</p> <p>The question is whether or not you know it's possible to <strong>expand</strong> on your explanation to <strong>reduce</strong> it to a more detailed logical idea that you can reduce to another idea... to end up with a purely logical argument at the end of the day. If you don't know how to do that without relying on statements that &quot;are just obvious&quot; (although it's okay to rely on a theorem somebody else proved), then you haven't proved the statement <em>and you might be wrong</em>!</p> <p>There are huge numbers of paradoxes in mathematics which arise precisely from just asserting that something is &quot;obvious&quot; without understanding how to build it out of elementary ideas. A simple example is the <a href="https://en.wikipedia.org/wiki/Monty_Hall_problem" rel="nofollow noreferrer" title="Monty Hall problem">Monty Hall problem</a>.</p> <hr /> <p>As you know, your example has a sensible proof, given the natural assumptions one makes in interpreting the words in the problem:</p> <blockquote> <p>If they can't, there's ocean opposite all the land, so there's at least as much ocean as land, a contradiction.</p> </blockquote> <p>This is hugely better than &quot;it's obvious&quot;, but easy to say, so say it instead! The reason why it's better is that it is easy to expand upon. A more clear version of the same argument would be:</p> <blockquote> <p>Suppose there are no possible places for tunnels. Then let <span class="math-container">$A_L$</span> be the area of the land. We know that there is ocean opposite all of this area, so the area of the ocean is <span class="math-container">$A_O \ge A_L$</span>. 
But then the total area of the sphere is <span class="math-container">$A = A_L + A_O \ge 2A_L$</span> contradicting the assumption that <span class="math-container">$A_L &gt; A/2$</span>.</p> </blockquote> <p>An even more formal version (I don't know if you know about integration, especially on curved spaces, but it doesn't matter too much if you don't -- think of it as formalizing the idea of area) might be something like:</p> <blockquote> <p>Define a function <span class="math-container">$f: \text{planet} \to \{0,1\}$</span> taking the value <span class="math-container">$0$</span> on the ocean and <span class="math-container">$1$</span> on the land. We assume that it is an integrable function, and that <span class="math-container">$A_L = \int f &gt; \frac{1}{2}\int 1 = \frac{1}{2}A$</span>. Define <span class="math-container">$g(\mathbf{x}) = f(\mathbf{x}')$</span> to be the nature of the land at the point <span class="math-container">$\mathbf{x}'$</span> opposite <span class="math-container">$\mathbf{x}$</span>. We want to prove that <span class="math-container">$f(\mathbf{x}) = g(\mathbf{x}) = 1$</span> for some <span class="math-container">$\mathbf{x}$</span>. We need to assume that <span class="math-container">$\int f = \int g$</span>; i.e. it doesn't matter whether you count up the area near you or opposite you as you go around the planet.</p> <p>Now <span class="math-container">$\int (f + g) = 2\int f &gt; A = \int 1$</span>. 
But <span class="math-container">$\int (f + g - 1) &gt; 0$</span> is impossible unless <span class="math-container">$f + g - 1 &gt; 0$</span> somewhere, which means that <span class="math-container">$f(\mathbf{x}) = g(\mathbf{x}) = 1$</span> somewhere.</p> </blockquote> <p>This relies on very simple standard theorems about integration and nothing else.</p> <hr /> <p>Notice that in the formal presentation, a hidden assumption that is needed for the computation becomes clearer: we need area to mean the same thing on the 'opposite' side of the planet. This is the odd-looking <span class="math-container">$\int f = \int g$</span> assumption. Here's a stupid counterexample (in cross-section/one dimension fewer) that exploits this otherwise:</p> <p><a href="https://i.stack.imgur.com/ZdWy0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZdWy0.png" alt="enter image description here" /></a></p> <p>If you have a really big hill, then it has lots of surface area but is opposite only a small amount of ocean, which means that you can avoid being able to build a tunnel.</p> <p>Of course, now someone points this out it's obvious (and mathematicians get really good at revising their intuition to cope with new discoveries like this), but formalizing it more points you much to understand what you are saying much more carefully. Really, the original problem isn't correct as stated! We need to assume that <strong>the planet's surface has a symmetry under inversion through the center</strong>.</p> <p>This is the power of formalizing your arguments. You will often discover things you overlooked. And they're not always as easy to understand as these examples!</p>
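The counting step in the first blockquote has a finite analogue that can be checked by simulation: split the surface into $N$ antipodal pairs of equal-area cells; if strictly more than $N$ of the $2N$ cells are land, some pair is land at both ends, by pigeonhole. An illustrative Python sketch, under the equal-area and inversion-symmetry assumptions discussed above:

```python
import random

def has_antipodal_land_pair(cells):
    """cells: 2N booleans; cell i is antipodal to cell i + N."""
    n = len(cells) // 2
    return any(cells[i] and cells[i + n] for i in range(n))

random.seed(0)
n = 50
for _ in range(1000):
    land = random.randint(n + 1, 2 * n)   # strictly more than half is land
    cells = [True] * land + [False] * (2 * n - land)
    random.shuffle(cells)
    # pigeonhole: at most one land cell per antipodal pair caps land at n
    assert has_antipodal_land_pair(cells)
```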
733,908
<p>How do i start off with integrating the below function? i tried applying trig substitution and U substitution. how do i go about solving this function? should i split them up further into 2 separate functions ? need some help in this as i can't seem to figure out how to continue on with it </p> <p>$$\int\frac{x^{3}}{({x^{2}-1})^{0.5}}dx$$</p>
roman
36,404
<p>Note that choosing $u(x) = x^2$ and $v(x) = (x^2-1)^{\frac{1}{2}}$, we have </p> <p>$$\int u(x)v'(x)\, dx = \int\frac{x^{3}}{({x^{2}-1})^{0.5}}dx\quad\text{(check this!)} $$</p> <p>where $v'(x)$ denotes the first derivative with respect to $x$. Thus we can use integration by parts, i.e. the formula</p> <p>$$\int u(x) v'(x) \, dx = u(x) v(x) - \int u'(x) v(x) \, dx$$</p> <p>The rest is a relatively easy calculation.</p>
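Carrying the computation through (a sketch, worth double-checking): $\int u'(x)v(x)\,dx = \int 2x\sqrt{x^2-1}\,dx = \frac{2}{3}(x^2-1)^{3/2}$, so the integral is $x^2\sqrt{x^2-1}-\frac{2}{3}(x^2-1)^{3/2}+C = \frac{(x^2+2)\sqrt{x^2-1}}{3}+C$. A quick numerical verification in Python:

```python
import math

def F(x):
    # candidate antiderivative: x^2*sqrt(x^2-1) - (2/3)*(x^2-1)^(3/2),
    # folded into (x^2 + 2) * sqrt(x^2 - 1) / 3
    return (x**2 + 2) * math.sqrt(x**2 - 1) / 3

def integrand(x):
    return x**3 / math.sqrt(x**2 - 1)

# central-difference derivative of F should reproduce the integrand
h = 1e-6
for x in (1.5, 2.0, 3.0, 10.0):
    numeric_derivative = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(numeric_derivative - integrand(x)) < 1e-4
```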
482,003
<p>I need help with the following limit $$\lim_{n\to\infty}\sum_{k=1}^n \frac{1}{\sqrt{kn}}$$</p> <p>Thanks.</p>
achille hui
59,379
<p>Notice for large $n$, we expect $\displaystyle \sum_{k=1}^n \frac{1}{\sqrt{k}} \text{ behave like }\int_1^n \frac{dx}{\sqrt{x}} \sim 2\sqrt{n}$. This suggests $$\frac{1}{\sqrt{k}} \sim \int_{k-1/2}^{k+1/2} \frac{dx}{\sqrt{x}} \sim 2\left( \sqrt{k+\frac12} - \sqrt{k-\frac12}\right)$$ and the terms $\displaystyle \frac{1}{\sqrt{kn}}$ in the summands is close to something "telescopable". To make this idea concrete, we observe: $$\begin{align} \sum_{k=1}^n\frac{1}{\sqrt{kn}} \ge &amp; \sum_{k=1}^n \frac{2}{\sqrt{n}(\sqrt{k+1}+\sqrt{k})} = \frac{2}{\sqrt{n}} \sum_{k=1}^n(\sqrt{k+1}-\sqrt{k}) = 2 \Big(\sqrt{1+\frac{1}{n}} - \frac{1}{\sqrt{n}}\Big)\\ \sum_{k=1}^n\frac{1}{\sqrt{kn}} \le &amp; \sum_{k=1}^n \frac{2}{\sqrt{n}(\sqrt{k}+\sqrt{k-1})} = \frac{2}{\sqrt{n}}\sum_{k=1}^n(\sqrt{k}-\sqrt{k-1}) = 2 \end{align}$$</p> <p>As a result, $$\left|\;\sum_{k=1}^n \frac{1}{\sqrt{kn}} - 2\;\right| \le 2 \left(1+\frac{1}{\sqrt{n}} -\sqrt{1+\frac{1}{n}}\right) &lt; \frac{2}{\sqrt{n}} \quad\implies\quad \lim_{n\to\infty} \sum_{k=1}^n \frac{1}{\sqrt{kn}} = 2.$$</p>
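The two-sided telescoping bound, and the resulting $2/\sqrt{n}$ rate, can be sanity-checked numerically (illustrative Python):

```python
import math

def partial_sum(n):
    return sum(1 / math.sqrt(k * n) for k in range(1, n + 1))

for n in (10, 100, 10_000):
    s = partial_sum(n)
    # the two-sided bound from the telescoping argument
    assert 2 * (math.sqrt(1 + 1/n) - 1/math.sqrt(n)) <= s <= 2
    # and the derived rate: |s - 2| < 2/sqrt(n)
    assert abs(s - 2) < 2 / math.sqrt(n)
```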
3,717,243
<p>I am reading Introduction to automata theory, languages, and computation 3ed, by John E. Hopcroft, et al. The wikipedia article (<a href="https://en.wikipedia.org/wiki/Turing_machine#Formal_definition" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Turing_machine#Formal_definition</a>) for Turing machine also cites the TM definition in the 1ed of the book. (I assume the 3ed still use the same TM definition as 1ed)</p> <p>When the tape head of a TM is scanning a blank symbol, what will happen?</p> <ul> <li><p>Is the transition function undefined on a blank tape symbol?</p> </li> <li><p>Does the TM halt?</p> </li> </ul> <p>Thanks.</p>
Robert Israel
8,508
<p>The correct result should be</p> <p><span class="math-container">$$ \prod_{i=1}^\infty \left( \frac{i+x}{i+1} \right)^{1/i} = \exp \left(\int_1^x \frac{\Psi(t+1)+\gamma}{t} \; dt\right) $$</span></p> <p>See my comment to Mostafa Ayaz's answer.</p> <p>I don't know if this can be written in a more &quot;closed-form&quot; way than this, but it's certainly not <span class="math-container">$x$</span>.</p>
2,419,529
<p>I am asked to state whether the following is true or if false to give a counterexample:</p> <blockquote> <p>If $A_1 \supseteq A_2 \supseteq A_3 \supseteq \ldots $ are all sets containing an infinite number of elements, then the intersection $$\bigcap_{k=1}^\infty A_k$$ is infinite as well.</p> </blockquote> <p>I believe this statement to be false but I am not sure if the counterexample I have thought up makes sense. I said: </p> <p>Let $A_n = \{m \in \mathbb{Z} | m&gt; n\}$ for $n \in \mathbb{N}$. Would this be okay?</p>
lhf
589
<p><em>Hint:</em> Consider $A_n = \left[-\dfrac1n, \dfrac1n \right] \subseteq \mathbb R$.</p>
289,923
<p>As far as i know, both differential and gradient are vectors where their dot product with a unit vector give directional derivative with the direction of the unit vector. So what are the differences?</p>
Harald Hanche-Olsen
23,290
<p>There is hardly a noticable difference when you work on Euclidean spaces. You can think of the differential at a point as being a linear map, which maps a vector to the dot product of the vector with the gradient. The differential generalizes in a natural way to more abstract settings, such as functions on a manifold. The gradient has no such generalization, unless the manifold is equipped with a metric (which is a fancy way of saying there is a dot product defined for any two tangent vectors based at the same point).</p> <p>I could go on, but I am afraid this would turn into a lecture on differential geometry.</p>
2,908,361
<p>I tried to solve this inequality by taking the square outside the floor function $[y]$ (greatest integer less than $y$)but it was wrong since if $x=2.5$ then $[x]= 2$ and $x^2=4$ while $[x^2]=[6.25]=6$.</p>
Calvin Khor
80,734
<p>Define $f(x)=\operatorname{floor}\left(x^2\right)+5\operatorname{floor}\left(x\right)+6 = [x^2] + 5[x] + 6$.</p> <p>I'll present some graphs that indicate the solution set is more complicated than $x\ge -4$.</p> <p><a href="https://www.desmos.com/calculator/kf0m3b5ya4" rel="nofollow noreferrer">(Desmos link)</a><a href="https://i.stack.imgur.com/LLmkn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LLmkn.png" alt="graph"></a></p> <p>We see the solution is made up of 3 different regions. As you can see, Desmos is not certain about the accuracy of its plot, so I found a second opinion that indeed indicates that there is a small hole in the solution regions a small bit to the left of $-5$ <a href="https://www.wolframalpha.com/input/?i=floor(x%5E2)%20%2B%205floor(x)%20%2B%206%20at%20x%3D-5.01" rel="nofollow noreferrer">(W|A link)</a>,</p> <p><a href="https://i.stack.imgur.com/5qEHh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5qEHh.png" alt="enter image description here"></a></p> <p>I don't understand where $-4$ came from. Firstly, it does not come from the other potential interpretation of $[\cdot]$ as the ceiling function, as <a href="https://www.wolframalpha.com/input/?i=evaluate%20ceil(x%5E2)%20%2B%205ceil(x)%20%2B%206%20at%20x%20in%20%7B-100,-3.01,-2.7,-2.1,100%7D" rel="nofollow noreferrer">these calculations in W|A show</a>. Secondly, under the interpretation $[x] = \operatorname{floor}(x)$, we have $f(-2)=0$.</p> <p>Clearly, even without a graph, there are at least two distinct regions that satisfy the inequality, since if $x&gt;0$, then</p> <p>$$ f\left(x\right)=[x^2] + 5[x]+6 \ge 6 &gt; 2$$ and if $x &lt; -6 $ then $$ [x^2] + 5[x]+6 \ge (x^2-1) + 5(x-1) + 6 = \underbrace{x(x+5)}_{&gt;6} &gt; 2 $$ so the solution region must contain $$ \{ x &lt; -6 \} \cup \{x &gt; 0\}$$ as a subset. The fact that there is a third region is not so obvious and requires more careful study of the difference between $[x^2]$ and $x^2$.</p>
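For concreteness, here is a small Python check of the values quoted above (assuming, as the answer does, that $[\cdot]$ is the floor and that the inequality being solved is $f(x)\ge 2$):

```python
import math

def f(x):
    return math.floor(x**2) + 5 * math.floor(x) + 6

assert f(-2) == 0        # so x = -2 is not in the solution set
assert f(-5.01) == 1     # the small "hole" just to the left of -5
assert all(f(x / 100) >= 6 for x in range(1, 1001))      # 0 < x <= 10
assert all(f(-x / 100) > 6 for x in range(601, 1200))    # -12 < x < -6
```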
106,396
<p>An Indian mathematician, Bhaskara I, gave the following amazing approximation of the sine (I checked the graph and some values, and the approximation is truly impressive.)</p> <p>$$\sin x \approx \frac{{16x\left( {\pi - x} \right)}}{{5{\pi ^2} - 4x\left( {\pi - x} \right)}}$$</p> <p>for $(0,\pi)$</p> <p>Here's an image. Cyan for the sine and blue for the approximation. <img src="https://i.stack.imgur.com/mgmoA.jpg" alt="enter image description here"></p> <p>Is there any way of proving such a rational approximation? Is there any theory similar to Taylor's or Power Series for rational approximations? </p>
Robert Israel
8,508
<p>Writing $x = \pi/2 + \pi t$, the approximation becomes $\cos(\pi t) \approx \frac{1-4t^2}{1+t^2} = 1 - 5 t^2 + O(t^4)$. In fact $\cos(\pi t) = 1 - \frac{\pi^2}{2} t^2 + O(t^4)$, but $\pi^2/2 \approx 4.9348$ is not far from $5$. In terms of uniform approximation to $\cos(\pi t)$ for $t \in [-1/2, 1/2]$, $\frac{1 - 4 t^2}{1+1.0043 t^2}$ would be somewhat better.</p>
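Numerically (a quick illustrative check in Python), the uniform error of Bhaskara's formula on $(0,\pi)$ comes out to roughly $1.6\times 10^{-3}$, and the substitution $x=\pi/2+\pi t$ used above can be confirmed directly:

```python
import math

def bhaskara(x):
    return 16 * x * (math.pi - x) / (5 * math.pi**2 - 4 * x * (math.pi - x))

# uniform error on (0, pi): appears to be about 1.6e-3
xs = [k * math.pi / 10000 for k in range(1, 10000)]
max_err = max(abs(bhaskara(x) - math.sin(x)) for x in xs)
assert max_err < 2e-3

# the substitution x = pi/2 + pi*t turns the formula into (1 - 4t^2)/(1 + t^2)
for t in (-0.4, -0.1, 0.0, 0.2, 0.45):
    lhs = bhaskara(math.pi / 2 + math.pi * t)
    rhs = (1 - 4 * t**2) / (1 + t**2)
    assert abs(lhs - rhs) < 1e-12
```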
1,316,008
<p><img src="https://i.stack.imgur.com/xNGPi.png" alt="enter image description here"></p> <p>The problem is shown in the image. I'm not able to post images yet.. What are the next steps to to find how tall the triangle is? So far i see, that the 3 triangles are similar; however, even by these similarities and by the fact that $AP^2+PB^2=100$ I'm unable to move forth. </p>
mathlove
78,967
<p>This is impossible.</p> <p>Suppose that there exist such points.</p> <p>Let $\alpha=\angle{PAB},\beta=\angle{PBA}$. And let $C$ be the third point of the triangle.</p> <p>Then, from $\triangle{PAB}$, we have $$\alpha+\beta=90^\circ.$$</p> <p>Now we have $$\angle{ACB}=180^\circ-\angle{CAB}-\angle{CBA}=180^\circ-2(\alpha+\beta)=0^\circ,$$ which contradicts that the three points $A,B,C$ form a triangle.</p>
336,827
<p>A covering map $p:C\to X$ is finite when for each $x\in X$ we have $|p^{-1}(x)|&lt;\infty.$ I have to prove that such a covering map has to be closed. I'm having trouble with it. </p> <p>When $p$ is a covering map, we can take open neighborhoods $U_x$ of every point $x\in X$ such that $p^{-1}(U_x)$ is a disjoint union of open neighborhoods $U_{x_i}$ of $x_i$, where $\{x_1,\ldots,x_n\}=p^{-1}(x)$, and $p$ maps each $U_{x_i}$ homeomorphically onto $U_x$. I see that when I have a closed subset $A\subseteq C$ such that $p(A)\subseteq U_x$ for a certain $x\in X$, then $p(A)$ is closed. That's because in this case $A$ is a disjoint union of $A\cap U_{x_i}$ and each of these sets is closed. (Well, certainly they're closed in $U_{x_i}.$ I'm not sure why it must be closed in $C$, but I think it must.) Since $p$ is a homeomorphism on $A\cap U_{x_i}$, we have that each $p(A\cap U_{x_i})$ is closed. (Again, certainly in $U_x$, and I think in $X$ too.) Since there are finitely many of them, their union, that is $p(A)$, is closed too.</p> <p>I'm sure there are problems with this reasoning, which show how little I understand of topology, but I think the gist of it is right. But I don't see how I can make this local property global. What if $p(A)$ is large? And what if $A$ spans several components of $C$? </p> <p>This is a homework question.</p>
Seirios
36,434
<p>Let $A \subset C$ be closed and let $x \notin p(A)$. Because the covering is finite, there exist $x_1,\ldots,x_n \in C$ such that $p^{-1}(x)=\{x_1,\ldots,x_n\}$. For every $1 \leq i \leq n$, there exists an open neighborhood $U_i$ of $x_i$ such that $U_i \cap A = \emptyset$ and $p$ induces a homeomorphism between $U_i$ and an open neighborhood $p(U_i)$ of $x$; shrinking if necessary, we may take each $U_i$ inside the sheet containing $x_i$ over a single evenly covered neighborhood $V$ of $x$ (this uses that $A$ is closed and $x_i \notin A$). </p> <p>Now, consider $U = \bigcap\limits_{i=1}^n p(U_i)$: it is an open neighborhood of $x$ not intersecting $p(A)$. Indeed, if some $a \in A$ satisfied $p(a) \in U \subseteq V$, then $a$ would lie in one of the sheets over $V$, say the one containing $x_i$; since $p$ is injective on that sheet and $p(a) \in p(U_i)$, this would force $a \in U_i$, contradicting $U_i \cap A = \emptyset$. Hence $p(A)$ is closed.</p>
3,546,661
<p>The following is from <em>Elementary Differential Geometry</em> by A.N. Pressley, page 102.</p> <blockquote> <p><a href="https://i.stack.imgur.com/KkYsL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KkYsL.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/t2fa6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/t2fa6.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/IjJC2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IjJC2.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/s4sY1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/s4sY1.png" alt="enter image description here" /></a></p> </blockquote> <p>Why is the author using the different choices of curves to prove <span class="math-container">$E_1=E_2$</span>, <span class="math-container">$F_1=F_2$</span> and <span class="math-container">$G_1=G_2$</span>, when it can be obtained by comparing the coefficients Eq(3)?</p>
Willie Wong
1,543
<p><span class="math-container">$A + 2B + C = D + 2E + F$</span> does not imply that <span class="math-container">$A = D$</span>, <span class="math-container">$B =E$</span>, and <span class="math-container">$C = F$</span>!</p> <p>Example:</p> <p><span class="math-container">$$1 + 2\cdot 2 + 1 = 6 = 2 + 2 \cdot 1 + 2$$</span></p>
2,024,097
<blockquote> <p>Which prime numbers $p \in \mathbb{Z}$ are reducible in the unique factorization domain $\mathbb{Z}\left[\frac{1 + \sqrt{-3}}{2}\right]$ ?</p> </blockquote> <p>Suppose $p$ is a prime integer and $p = \alpha \beta$ in $\mathbb{Z}\left[\frac{1 + \sqrt{-3}}{2}\right] = \mathbb{Z}[\omega]$. Then $f(p) = p^2 = f(\alpha)f(\beta)$, where $f: a + b \omega \mapsto a^2 + ab + b^2$ is the norm in $\mathbb{Z}[\omega]$.</p> <p>There are only two possibilities:</p> <ol> <li>$f(\alpha) = 1, f(\beta) = p^2 $, so $\alpha $ is a unit</li> <li>$f(\alpha) = p, f(\beta) = p $</li> </ol> <p>Set $\alpha = a + b \omega$, then $f(\alpha) = a^2 + b^2 + ab = p$. Therefore, if there exist integers $a$ and $b$ which solve this equation, then $p$ is reducible. And if this equation has no integer solutions, $p$ is prime in $\mathbb{Z}[\omega]$. But how can I express these solutions? I tried to use congruences $\text{mod} \ 4 $ , but it did not help much.</p>
Adam Hughes
58,831
<p>This is an example of where you want congruence modulo $3$ not $4$. There are two cases, either $p\equiv 1\mod 3$ or $p\equiv -1\mod 3$. Quadratic reciprocity says that</p> <p>$$\left({p\over 3}\right)\left({3^*\over p}\right)=1$$</p> <p>What we're looking for is $\left({3^*\over p}\right)$ since that tells us if $-3$ is a square mod $p$ and therefore if there is a solution.</p> <p>But then this is just equal to $\left({p\over 3}\right)$ by multiplying both sides by $\left({p\over 3}\right)$. And we know this is</p> <blockquote> <p>$$\left({p\over 3}\right) = \begin{cases} 1 &amp; p\equiv 1\mod 3 \\ -1 &amp; p\equiv -1\mod 3\end{cases}$$</p> </blockquote>
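An illustrative brute-force check of the conclusion, using the norm form $a^2+ab+b^2$ from the question; note $p=3$ is ramified (e.g. $3 = 1^2+1\cdot 1+1^2$) and sits outside the reciprocity dichotomy above:

```python
def representable(p):
    """Is p = a^2 + a*b + b^2 for some integers a, b (the norm of a + b*w)?"""
    # a^2 + a*b + b^2 = (a + b/2)^2 + (3/4)b^2, so |a|, |b| <= sqrt(4p/3)
    bound = int((4 * p / 3) ** 0.5) + 1
    return any(a * a + a * b + b * b == p
               for a in range(-bound, bound + 1)
               for b in range(-bound, bound + 1))

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, ok in enumerate(sieve) if ok]

for p in primes_up_to(200):
    # p is reducible iff p = 3 (ramified) or p = 1 (mod 3)
    assert representable(p) == (p == 3 or p % 3 == 1)
```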
667,903
<p>$$D=\left\{z\ \left|\ \right. 1&lt;|z|&lt;2, \ \Re(z)&gt;-\frac{1}{2}\right\}$$</p> <p>I can try to prove that each disc with radius 1, and 2 is open (?), so I need to show that the ball centered at $w$ is contained in the disc with radius 2 </p> <p>If $w\in \{z: |z|&lt;2\}$, then $|z_0-w|&lt;2$, let $\epsilon = 2-|z_0-w|&gt;0$</p> <p>Let $z\in \mathbb{C}$, such that $|z-w|&lt;\epsilon$</p> <p>and $|z-z_0|\leq |z-w|+|z_0-w|&lt;2$, so the ball of radius $\epsilon$ is contained inside the disc with radius $2$, so the disc is open. </p> <p>Similar proof for the disc with radius 1, I understand the property of intersection now, but why is it? The intersection would give you the disc with radius 1, not the washer?</p>
Eric Towers
123,905
<p>(Topology Ia): The set $D$ is the intersection of three open sets: $1 &lt; |z|$, $|z|&lt;2$, and $-\frac{1}{2} &lt; \Re(z)$. It is therefore an open set.</p> <p>(Topology Ib): The set is the complement of the union of three closed sets: $|z| \leq 1$, $2 \leq |z|$, and $\Re(z) \leq -\frac{1}{2}$. It is therefore an open set.</p> <p>(Metric): Let $z \in D$. If $1 &lt; |z|$, $|z|&lt;2$, and $-\frac{1}{2} &lt; \Re(z)$, then let $r = \min(|1 - |z||, ||z|-2|, |-\frac{1}{2}-\Re(z)|)$. Then the ball of radius $r$ centered at $z$ is contained entirely in $D$. That is, $z$ is in the interior of $D$. </p> <p>If $1 &lt; |z|$, $|z|&lt;2$, and $-\frac{1}{2} = \Re(z)$, then $z \not \in D$. For all $\epsilon &gt; 0$, the ball of radius epsilon centered at $z$ contains the interval $z+(-\epsilon, \epsilon)$, which includes points inside and outside $D$, so is on the boundary of $D$.</p> <p>If $1 &lt; |z|$, $|z|=2$, and $-\frac{1}{2} &lt; \Re(z)$, then $z \not \in D$. For all $\epsilon &gt; 0$, the ball of radius epsilon centered at $z$ contains the interval $z + \mathrm{e}^{\mathrm{i}\arg(z)}(-\epsilon, \epsilon)$, which includes points inside and outside $D$, so is on the boundary of $D$.</p> <p>If $1 = |z|$, $|z|&lt;2$, and $-\frac{1}{2} &lt; \Re(z)$, then $z \not \in D$. For all $\epsilon &gt; 0$, the ball of radius epsilon centered at $z$ contains the interval $z + \mathrm{e}^{\mathrm{i}\arg(z)}(-\epsilon, \epsilon)$, which includes points inside and outside $D$, so is on the boundary of $D$.</p> <p>If $1 &lt; |z|$, $|z|=2$, and $-\frac{1}{2} = \Re(z)$, then $z \not \in D$. For all $\epsilon &gt; 0$, the ball of radius epsilon centered at $z$ contains the interval $z + \mathrm{e}^{\mathrm{i}\arg(z)}(-\epsilon, \epsilon)$, which includes points inside and outside $D$, so is on the boundary of $D$.</p> <p>If $1 = |z|$, $|z|&lt;2$, and $-\frac{1}{2} = \Re(z)$, then $z \not \in D$. 
For all $\epsilon &gt; 0$, the ball of radius epsilon centered at $z$ contains the interval $z+(-\epsilon, \epsilon)$, which includes points inside and outside $D$, so is on the boundary of $D$.</p> <p>If $1 = |z|$ and $|z|=2$, then $z$ does not exist. Nothing to do here.</p> <p>If $-\frac{1}{2} = \Re(z)$ and either $1 \geq |z|$ or $|z| \geq 2$, then let $r = \min(|1 - |z||, ||z|-2|)$. Then the ball of radius $r$ centered at $z$ is contained entirely in the complement of $D$. That is, $z$ is in the exterior of $D$.</p> <p>If either $1 = |z|$ or $|z| = 2$, and $-\frac{1}{2} \geq \Re(z)$, then let $r = \min(\max(|1 - |z||, ||z|-2|), |-\frac{1}{2}-\Re(z)|)$. Then the ball of radius $r$ centered at $z$ is contained entirely in the complement of $D$. That is, $z$ is in the exterior of $D$.</p> <p>If $1 \geq |z|$, $|z| \geq 2$, or $-\frac{1}{2} &gt; \Re(z)$, then let $r = \min(|1 - |z||, ||z|-2|, |-\frac{1}{2}-\Re(z)|)$. Then the ball of radius $r$ centered at $z$ is contained entirely in the complement of $D$. That is, $z$ is in the exterior of $D$.</p> <p>Having classified every point of the complex plane as either internal to $D$, on the boundary of $D$ and external to $D$, or separated from $D$ and external to $D$, we have shown that $D$ is open.</p> <p>(If I missed a case above, it is due to simple omission and one of the arguments made above can be applied to it.)</p>
105,071
<p>As one may know, a <b>dynamical system</b> can be defined with a monoid or a group action on a set, usually a manifold or similar kind of space with extra structure, which is called the <i>phase space</i> or <i>state space</i> of the dynamical system. The monoid or group doing the acting is what I call the <i>time space</i> of the dynamical system, and is usually the naturals, integers, or reals. Often, one may require the evolution map to be continuous, differentiable, etc.</p> <p>But has anyone studied a generalization in which we allow the <i>time space</i> to be something more general &amp; exotic, a multidimensional space like $\mathbb{R}^n$, $\mathbb{C}$ (viewed as "2-dimensional" by considering it to be like $\mathbb{R}^2$), etc.? I'm especially curious about the case where the time space is $\mathbb{C}$ and the phase space is $\mathbb{C}^n$ or another complex manifold and the map is required to be holomorphic in both its arguments, as that holomorphism provides a natural linkage between the two dimensions that lets us think of the complex time as a single 2-dimensional time as opposed to two real times (any dynamical system with a timespace of $\mathbb{R}^n$ can be decomposed into a bunch of mutually-commutative evolution maps with timespaces of $\mathbb{R}$). My questions are:</p> <ol> <li><p>Is it true that the only dynamical system with phase-space $\mathbb{C}$ and time-space $\mathbb{C}$ where the evolution map is required to be holomorphic in both its arguments (the time and point to evolve) is the linear one given by $$\phi^t(z) = e^{ut} z + K \frac{1 - e^{ut}}{1 - e^u}, K, u \in \mathbb{C}$$ ? I suspect so, because an injective entire function is linear (in the sense of a “linear equation” not <em>necessarily</em> a “linear map”), and $\phi^{-t}$ must be the inverse of $\phi^t$. Thus, $\phi^t$ must have the form $a(t) z + b(t)$ with $a: \mathbb{C} \rightarrow \mathbb{C}$, $b: \mathbb{C} \rightarrow \mathbb{C}$.
Am I right?</p></li> <li><p>Are there any interesting (e.g. with complicated, even chaotic behavior) dynamical systems of this kind on $\mathbb{C}^n$? On more sophisticated complex manifolds? (for the phase space, that is) If so, can you provide an example? Or does the holomorphism requirement essentially rule this out? EDIT: I provide one below.</p></li> <li><p>There is something else here, an interesting observation I made. Consider the above complex-time, holomorphic dynamical system. We can investigate the two prime behaviors represented by the real and imaginary times. We'll just set u = 1, K = 0 for here.</p></li> </ol> <p>In “real time”, the dynamics looks like an “explosion” in the plane: all points “blast” away from z = 0 at exponentially increasing velocity.</p> <p>In “imaginary time”, the result is cyclic motion, swirling around $z = 0$ with constant angular velocity that depends only on the distance of the point from $z = 0$.</p> <p>But if we trace the contours of these two evolutions, formed from different points on the plane, and then superimpose them, we have contours intersecting in what looks like contours from a contour graph of the image of the complex plane under the function $\exp(z)$! Conversely, we could say it looks like a contour plot of $\log(z)$ with a cut along a ray from $0$. So, somehow “naturally” related to the dynamical system $\phi^t(z) = \exp(t) z$ is the function $\exp(z)$ (or, perhaps, $\log(z)$).</p> <p><img src="https://i.stack.imgur.com/kPq32.png" alt="plot of evoluton countours for first CTHDS"></p> <p>Note that my plotting facilities are unfortunately pretty limited, so I can't give really nice graphs with lots of contours, just a few selected ones taken from evolutions of various points in both real and imag times.</p> <p>But we have another case. To see this, we must turn our attention away from a phase space given by the complex plane to one given by the Riemann sphere, $\hat{\mathbb{C}}$.
In this case, we still have the dynamical system as above, but we have an additional class of dynamical systems given by the “Moebius transformations”, which include the above linear-function dynamical systems as a special case. One example is $$\phi^t(z) = \frac{(1 + e^{i\pi t}) z + (1 - e^{i\pi t})}{(1 - e^{i\pi t})z + (1 + e^{i\pi t})}$$. It is easy to check that this is indeed a Moebius transformation of the Riemann sphere. This map is holomorphic everywhere on the Riemann sphere. Note that for integer step t, the unit-step map is the reciprocal map.</p> <p>Now we consider the contour lines of the real and imaginary evolution, as before. They look like this:</p> <p><img src="https://i.stack.imgur.com/MECR7.png" alt="plot of evolution contours for second CTHDS"></p> <p>(Physics buffs may notice that the real evolution (concentric circles) reminds one of the lines of a <i>magnetic</i> field of a <i>magnetic</i> dipole (like a bar magnet), while the imaginary evolution (arcs joining at points) looks like the lines of an <i>electric</i> field of an <i>electric</i> dipole.)</p> <p>Again, notice how the lines meet at right angles. It looks again like the image of the plane (or the Riemann sphere, perhaps?) under some function which may be holomorphic, though I'm not sure what that function is in this case. Is this the case? Is there such a function, with a special relation to this CTHDS in the same way as $\exp$ (or $\log$) is to the other?</p> <p>But in any case, it appears that for a complex-time holomorphic dynamical system, or CTHDS, there exists an associated natural function. What is the significance of this function/map? How does it relate to the CTHDS? If you give me a CTHDS that I don't have a closed form for, can I find its natural map?</p>
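A quick numeric check (illustrative Python) confirms both claims about this family: the unit-step map is $z\mapsto 1/z$, and the maps really do compose as a one-complex-parameter group:

```python
import cmath

def phi(t, z):
    w = cmath.exp(1j * cmath.pi * t)
    return ((1 + w) * z + (1 - w)) / ((1 - w) * z + (1 + w))

# t = 1 gives the reciprocal map, as stated
for z in (0.3 + 0.7j, 2 - 1j, -0.5 + 0.1j):
    assert abs(phi(1, z) - 1 / z) < 1e-9

# one-complex-parameter group law: phi^(s+t) = phi^s composed with phi^t
for s, t in [(0.3, 0.4), (0.25 + 0.1j, -0.7 + 0.2j), (1.5, -0.5)]:
    for z in (0.3 + 0.7j, 2 - 1j):
        assert abs(phi(s + t, z) - phi(s, phi(t, z))) < 1e-9
```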
Alexandre Eremenko
25,510
<p>I will address the case of $C$-action (as time). In the case when the phase space is also $C$, you are right, there are only Moebius transformations, which can be loxodromic (elliptic, hyperbolic) or parabolic. That is, up to conjugacy by an arbitrary Moebius transformation you have $z\mapsto e^{\alpha t}z$ and $z\mapsto z+t$, where $t$ is your complex "time".</p> <p>In the case when the phase space is $C^n$, we have a lot of possibilities. There is a huge (infinite-dimensional) continuous group of analytic bijective transformations of $C^n$ onto itself. You can take any complex 1-parametric subgroup of this group. For $n=2$, there is a very nice paper of Milnor on this group. </p> <p>Edit: Friedland, Shmuel; Milnor, John. Dynamical properties of plane polynomial automorphisms. Ergodic Theory Dynam. Systems 9 (1989), no. 1, 67–99. </p>
283,360
<p>Let $M$ be a simply connected topological 4-manifold with intersection form given by the E8 lattice. Does anyone know of examples of continuous self-maps of $M$ of degree 2 or 3? Or of degree any other prime for that matter?</p>
Danny Ruberman
3,460
<p>This is more like a long comment than a real answer. </p> <p>This seems like it's an algebra problem that is probably hard to solve. If you had such a map, of degree $d$, then you would get the following. Choose a basis for the homology, so that the induced map on $H_2(M)$ is written as a matrix $A$. Let J be the matrix for the intersection form with respect to this same basis. Then I think you are asking for $A^TJA = dJ$ (1). I don't have much of a feel for this but somehow it seems unlikely that there is any such matrix.</p> <p>Conversely, if you had such a matrix, then perhaps you could define a map of degree $d$ from M to itself. I would use the approach of the Whitehead-Milnor theorem; namely up to homotopy, you can write $M$ as a wedge of spheres union a $4$-cell, where the attaching map for the $4$-cell is determined by the matrix $J$. The matrix $A$ tells you how to map the $2$-skeleton to the $2$-skeleton, and I would guess that you get an extension over the $4$-cell with degree $d$ if (1) holds. </p>
148,624
<p>What does it mean to say that a bounded linear operator is not "generally" a bounded function? Can anybody explain? </p>
davidlowryduda
9,754
<p>Let's choose our favorite bounded linear operator. At the moment, mine happens to be the identity operator $I$ on $\mathbb{R}$, a very boring bounded linear operator. If it's too boring, you can pretend I'm talking about the identity operator on $W^{k,p}$.</p> <p>Then $||I||_{\text{operator}} = 1$, as clearly $||I(x)|| \equiv 1\cdot||x||$. And so it's a bounded linear operator. But $||I(x)||$ can itself be arbitrarily large, as $||x||$ can be arbitrarily large. Thus it's not a bounded function.</p>
3,710,804
<p><span class="math-container">$$f(x) = \int \frac{\cos{x}(1+4\cos{2x})}{\sin{x}(1+4\cos^2{x})}dx$$</span></p> <p>I have been stuck on this problem for an hour, but without any clues. </p> <p>Can someone please help me solve this?</p>
Claude Leibovici
82,404
<p>Let <span class="math-container">$x=\tan^{-1}(y)$</span> <span class="math-container">$$I= \int \frac{\cos{(x)}(1+4\cos{(2x)})}{\sin{(x)}(1+4\cos^2{(x)})}\,dx=\int \frac{5-3 y^2}{y^5+6 y^3+5 y}\,dy$$</span> Now, partial fraction decomposition <span class="math-container">$$\frac{5-3 y^2}{y^5+6 y^3+5 y}=\frac{1}{y}-\frac{2 y}{y^2+1}+\frac{y}{y^2+5}$$</span></p>
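For anyone who wants to double-check the algebra, the decomposition can be verified mechanically with sympy (my addition, not part of the original solution):

```python
import sympy as sp

y = sp.symbols('y')
rational = (5 - 3*y**2) / (y**5 + 6*y**3 + 5*y)
claimed = 1/y - 2*y/(y**2 + 1) + y/(y**2 + 5)

# The difference simplifies to zero, so the partial fractions are correct,
# and sympy's own decomposition agrees with the claimed one.
assert sp.simplify(rational - claimed) == 0
assert sp.simplify(sp.apart(rational) - claimed) == 0
```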
1,770,804
<p>I am a high school student. My maths teacher said that if $\,ax+b=cx+d,\,$ then $\,a=c\,$ and $\,b=d.\,$ Can someone give me a proof of this?</p>
fleablood
280,126
<p>The statement "ax + b = cx + d implies a=c and b = d" is not true and should be obviously so. Simply do $b = cx + d - ax$ and you get $ax + b = ax + cx + d - ax = cx + d$. $a, b, c$ and $x$ can be anything you like.</p> <p>That's silly.</p> <p><strong>HOWEVER</strong> the statement "ax + b = cx + d <em>for all possible values of x</em> (where x is not a constant) implies a=c and b = d" <strong>is</strong> true. </p> <p>Pf:</p> <p>$ax + b = cx + d \iff$</p> <p>$(a-c)x = d - b$.</p> <p>$d- b$ is a constant value. If $(a-c)\ne 0$ then $(a-c)x$ can have multiple values for different values of $x$. As $(a-c)x$ is constant for all possible values of $x$, the only way this is possible is if $(a-c) = 0$ i.e. a = c. </p> <p>But then $(a-c)x = d-b \implies 0x = 0 = d-b \implies b = d$.</p> <p>(I <em>can't</em> say: Let $b = cx + d - ax$ as $cx +d -ax$ will have different values for different values of $x$. ... unless $c =a$ ... in which case I'm saying "Let $b = d$"....)</p>
2,141,182
<p>In the case of $$\sqrt{(x_n-\ell_1)^2+(y_n-\ell_2)^2}\leq \sqrt{(x_n-\ell_1)^2} + \sqrt{(y_n-\ell_2)^2} = |x_n-\ell_1|+|y_n-\ell_2|$$</p> <p>it is true: if we raise both sides to the power of $2$ we get</p> <p>\begin{align} &amp; (x_n-\ell_1)^2+(y_n-\ell_2)^2\leq \left( \sqrt{(x_n-\ell_1)^2}+\sqrt{(y_n-\ell_2)^2} \,\right)^2 \\[10pt] = {} &amp;{(x_n-\ell_1)^2+(y_n-\ell_2)^2}+2\sqrt{(x_n-\ell_1)^2}\sqrt{(y_n-\ell_2)^2} \end{align}</p> <p>Is the same true for $$\sqrt{a+b}\leq \sqrt{a}+\sqrt{b} \text{ ?}$$</p> <p>Isn't $\sqrt{a+b}$ a sum of $3$ components, $\sqrt{a}+\sqrt{b}+\text{(something positive)}$, and therefore $$\sqrt{a+b}\geq \sqrt{a}+\sqrt{b} \text{ ?}$$</p>
Daniel Robert-Nicoud
60,713
<p><strong>Hint:</strong> Square both sides.${}{}$</p>
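After squaring, the inequality reads $a+b \leq a+b+2\sqrt{ab}$, which is clear for $a,b\ge 0$. A quick numerical spot-check (my own, purely for reassurance):

```python
import random

random.seed(0)
pairs = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(10_000)]
# sqrt(a+b) <= sqrt(a) + sqrt(b); the tiny slack absorbs floating-point rounding
holds = all((a + b) ** 0.5 <= a ** 0.5 + b ** 0.5 + 1e-9 for a, b in pairs)
assert holds
```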
2,342,051
<p>I am totally new to statistics. I'm learning the basics.</p> <p>I came upon this question while solving Erwin Kreyszig's exercise on statistics. The problem is simple. It asks to calculate standard deviation after removing outliers from the dataset.</p> <p>The dataset is as follows: 1, 2, 3, 4, 10. What I did is, I found out q<sub>m</sub> = 3. Then $q$<sub>l</sub> $= \frac{1+2}{2} = 1.5$ and $q$<sub>u</sub> $= \frac{4+10}{2} = 7$.</p> <p>Now, $IQR = 7-1.5 = 5.5$ and $1.5*IQR = 8.25$</p> <p>So, we can say numbers beyond $1.5 - 8.25 = -6.75$ and $7 + 8.25 = 15.25$ will be outliers.</p> <p>Since there is no outlier, I found out the Standard Deviation of the set which is 3.53.</p> <p>But, the answer provided is 1.29 which is different from the standard deviation of the set.</p> <p>Can anyone tell me what I missed? </p> <p>Also, I have another question - we can see with the naked eye that 10 is an outlier. But it is not detected here - why? </p>
Dave L. Renfro
13,130
<p>For each point in the complement of the Cantor set, there exists a two-sided neighborhood of that point contained in the complement of the Cantor set (because the complement of the Cantor set is an open subset of the reals), and hence the function is zero in a two-sided neighborhood of that point, and hence the derivative at that point exists and equals zero. So $f'(x)=0$ for each $x \notin C.$</p>
863,846
<p>Steven Strogatz has a great informal textbook on Nonlinear Dynamics and Chaos. I have found it to be incredibly helpful to get an intuitive sense of what is going on and has been a great supplement with my much more formal text from Perko.</p> <p>Anyways I was wondering if anyone knew of any similar informal, intuitive textbooks covering Numerical Analysis? I currently study out of Atkinson's Intro to Numerical Analysis. I am looking for a more informal numerical text aimed for upper level undergrads and first/second year grad students. I am currently a first year grad student studying for my qualifying exams. Any recommendations would be appreciated. Thank you</p>
lhf
589
<p>Try these books:</p> <ul> <li><p><em>Introduction to Applied Numerical Analysis</em> by <a href="http://en.wikipedia.org/wiki/Richard_Hamming" rel="nofollow">Richard Hamming</a></p></li> <li><p><em>Numerical Methods That Work</em> by <a href="http://en.wikipedia.org/wiki/Forman_S._Acton" rel="nofollow">Forman Acton </a></p></li> </ul>
1,434,420
<p>Why is $\bigcap\limits_{n=1}^{\infty} \left( \bigcup\limits_{i=1}^{n} G_i \right)^c = \left( \bigcup\limits_{n=1}^{\infty} \left( \bigcup\limits_{i=1}^{n} G_i \right) \right)^c$? What set properties are being applied here? (The $^c$ is set complement)</p>
NoseKnowsAll
180,054
<p>This is a repeated application of <a href="https://en.wikipedia.org/wiki/De_Morgan&#39;s_laws" rel="nofollow">De Morgan's Laws</a>:</p> <p>$$A^c \cap B^c = \left(A \cup B\right)^c$$</p> <p>Basically, simplify it to only two sets and you'll see that the intersection of the complements is equal to the complement of the union.</p>
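In case the two-set identity feels abstract, here is a tiny spot-check in Python (my own illustration, with an arbitrary finite universe $U$):

```python
# Spot-check De Morgan's law A^c ∩ B^c = (A ∪ B)^c on a small universe U.
U = set(range(10))
A = {1, 2, 3}
B = {3, 4, 5}
lhs = (U - A) & (U - B)   # intersection of the complements
rhs = U - (A | B)         # complement of the union
assert lhs == rhs
```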
1,289,994
<p>If you fold a rectangular piece of paper in half and the resulting rectangles have the same aspect ratio as the original rectangle, then what is the aspect ratio of the rectangles?</p>
ParaH2
164,924
<p>$$ \frac{\text{long side}}{\text{short side}}=\sqrt{2} $$</p> <p><a href="http://en.wikipedia.org/wiki/Paper_size" rel="nofollow">Have a look here</a></p>
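A quick numerical illustration of why $\sqrt 2$ works (my own sketch; the concrete side lengths are arbitrary):

```python
import math

long_side, short_side = math.sqrt(2), 1.0
# Folding in half: the long side is halved and becomes the new short side.
new_long, new_short = short_side, long_side / 2
# The aspect ratio is preserved by the fold.
assert math.isclose(long_side / short_side, new_long / new_short)
```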
2,280,203
<p>How to transform the integral </p> <p>$$\int _{0}^{\pi }\sin ^{2}\left( \psi \right) \sin \left( m\psi \right) d\psi $$</p> <p>to </p> <p>$$\int _{0}^{\pi }\left( \dfrac {1} {2}-\dfrac {1} {2}\cos 2\psi \right) \sin m\psi d\psi $$</p> <p>What is the general method you need to solve trig questions like this. How are do you know which identities to use and which ones should you always have memorised to derive this.</p>
N3buchadnezzar
18,908
<p><strong>The general approach</strong></p> <p>Considering trigonometric integrals there are a myriad of different techniques to solve them. Sometimes integration by parts is the best option, other times a clever substitution or a trigonometric identity saves the day. Sadly, the only way to know which one to use is to experiment and solve as many problems as possible. </p> <p><strong>My toolbox</strong></p> <p>In my toolbox I only store three basic identities when it comes to integrating sine and cosine. </p> <p>$$ \begin{align*} 1 &amp; = (\sin x)^2 + (\cos x)^2 \\ \cos(A+B) &amp; = \cos(A)\cos(B) - \sin(A) \sin(B) \\ \sin(A+B) &amp; = \cos(A) \sin(B) + \cos(B) \sin(A) \end{align*} $$</p> <p>Everything else I can derive from these pretty simple formulas. Of course as mentioned earlier I know how to use integration by parts, and use substitutions as well. </p> <p><strong>The problem at hand</strong></p> <p>Let's look at your problem and use a fairly clever trick. I start with defining the following pair of integrals</p> <p>$$ I = \int_0^{\pi} (\sin \psi)^2 \sin (m\psi)\, \mathrm{d}\psi \quad \text{and} \quad J = \int_0^{\pi} (\cos \psi)^2 \sin (m\psi)\,\mathrm{d}\psi $$</p> <p>Notice how simple we can solve the following integral</p> <p>$$ I + J = \int_0^\pi (\sin^2 \psi + \cos^2\psi) \sin (m\psi)\, \mathrm{d}\psi = \begin{cases} \cfrac{1 - \cos m\pi}{m} &amp; \text{if} \qquad m \neq 0 \\ 0 &amp; \text{if} \qquad m = 0 \end{cases} $$</p> <p>Similarly, the integral $J - I$ can be evaluated fairly easy with the use of $\cos^2\psi - \sin^2\psi = \cos 2\psi$ and </p> <p>$$ \cos ax \sin bx = \frac{\sin(a x + b x) - \sin(a x - b x)}{2}\,. $$</p> <p>This can be derived by taking the difference between $\sin(A+B)$ and $\sin(A-B)$ and then using the addition formula for sine, written at the top. 
Thus,</p> <p>$$ \begin{align*} J - I &amp; = \int_0^\pi (\cos^2\psi - \sin^2\psi )\sin (m\psi) \,\mathrm{d}\psi \\ &amp; = \int_0^\pi \cos (2\psi) \sin (m\psi)\, \mathrm{d}\psi \\ &amp; = \frac{1}{2}\int_0^\pi \sin (2 + m)\psi - \sin(2 - m)\psi \,\mathrm{d}\psi \\ &amp; = \begin{cases} \cfrac{m}{m^2 - 4}(1 - \cos m\pi) &amp; \text{if} \qquad m^2 \neq 4 \\ 0 &amp; \text{if} \qquad m^2 = 4 \end{cases} \end{align*} $$</p> <p>Assuming $m \neq 0$ and $m^2 \neq 4$ we then have the following system of equations</p> <p>$$ \begin{align*} I + J &amp; = \cfrac{1 - \cos m\pi}{m} \\ J - I &amp; = \cfrac{m}{m^2 - 4}(1 - \cos m\pi) \end{align*} $$ </p> <p>Solving this set of equations yields</p> <p>$$ I = \frac{2}{m} \frac{\cos(m\pi) - 1}{m^2 - 4} $$</p> <p>Doing the same in the cases $m^2 = 4$ and $m = 0$ just gives $I = 0$, thus</p> <blockquote> <p>$$ \int_0^\pi (\sin \psi)^2 \sin(m\psi) \,\mathrm{d}\psi = \begin{cases} \cfrac{2}{m} \cfrac{\cos(m\pi) - 1}{m^2 - 4} &amp; \text{if} \qquad m\neq -2,0,2 \\ 0 &amp; \text{if} \qquad m = -2, 0, 2 \end{cases} $$</p> </blockquote>
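The boxed formula is also easy to machine-check for a handful of integers $m$ with sympy (my addition, purely as a sanity check):

```python
import sympy as sp

x = sp.symbols('x')
for m in (1, 3, 5, 7):   # away from the special cases m = -2, 0, 2
    direct = sp.integrate(sp.sin(x)**2 * sp.sin(m*x), (x, 0, sp.pi))
    formula = sp.Rational(2, m) * (sp.cos(m*sp.pi) - 1) / (m**2 - 4)
    assert sp.simplify(direct - formula) == 0
for m in (-2, 0, 2):     # the special cases integrate to zero
    assert sp.integrate(sp.sin(x)**2 * sp.sin(m*x), (x, 0, sp.pi)) == 0
```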
519,325
<p>Evaluate $\displaystyle\int \dfrac{1}{x^2+9} \, dx$. I've only learned the normal way of solving integrals but it does not work. I haven't learned how to use trigonometry to solve these problem.</p> <p>I know you have to rearrange it into the form ${[f(x)]Β² + 1}$ and then integrate.</p> <p>Can someone point me some rules to solve these kinds of question?</p> <p>My teacher expected that the prerequisite course taught this but I have not learned it yet.</p>
imranfat
64,546
<p>Check this <a href="http://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;frm=1&amp;source=web&amp;cd=3&amp;ved=0CDYQFjAC&amp;url=http%3A%2F%2Fintegral-table.com%2Fdownloads%2Fsingle-page-integral-table.pdf&amp;ei=XWJUUtT3I6jC2wXkjYC4Cg&amp;usg=AFQjCNGqJnnp4oeD69SVOXnbYSIn565LEw" rel="nofollow">table of integrals</a></p> <p>Integral number 9 is the one you need, where the value of a = 3</p>
519,325
<p>Evaluate $\displaystyle\int \dfrac{1}{x^2+9} \, dx$. I've only learned the normal way of solving integrals but it does not work. I haven't learned how to use trigonometry to solve these problem.</p> <p>I know you have to rearrange it into the form ${[f(x)]Β² + 1}$ and then integrate.</p> <p>Can someone point me some rules to solve these kinds of question?</p> <p>My teacher expected that the prerequisite course taught this but I have not learned it yet.</p>
Jack M
30,481
<p>You know (or should know) that $\int\frac{1}{x^2+1}\mathrm{dx}=\arctan(x)$. Let's try and get the integrand into that form.</p> <p>$$\int\frac{1}{x^2+9}\mathrm{dx}=\int\frac{1}{9(\frac{x^2}{9}+1)}\mathrm{dx}=\frac{1}{9}\int\frac{1}{\left(\frac{x}{3}\right)^2+1}\mathrm{dx}$$</p> <p>You also know (or should know) that you can easily substitute out that $x/3$:</p> <p>$$=\frac{1}{9}3\int\frac{1}{\left(\frac{x}{3}\right)^2+1}\frac{\mathrm{dx}}{3}$$</p> <p>$$=\frac{1}{3}\arctan(\frac{x}{3})$$</p> <p>Any time you've got an integrand of the form 1 divided by some quadtratic, you can usually (not always) get it into that form with a bit of fiddling, sometimes involving completing the square.</p>
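Differentiation gives a one-line check of the final antiderivative (my own verification with sympy):

```python
import sympy as sp

x = sp.symbols('x')
antiderivative = sp.atan(x / 3) / 3
# Differentiating recovers the integrand, confirming the factor 1/3.
assert sp.simplify(sp.diff(antiderivative, x) - 1 / (x**2 + 9)) == 0
```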
306,461
<p>Let $A = \{(x,y) \in\mathbb{R}^2: a \leq (x-c)^2+(y-d)^2 \leq b\}$ for given $a,b,c, d$ real numbers. I want to show that $A$ is path-connected.</p> <p>How can I do that?</p> <p>I know that every open subset of $\mathbb R^2$ that is connected is path connected. But this is obviously not open so I cannot use that. Then I thought of multiple cases. If we take arbitrary $x$ and $y$ and draw the line between them and they do not intersect with the circle centred at $(c,d)$ then we can obviously draw a line between the points which is still in the set, so we can then define the function. I am stuck on the other case. </p>
Stefan Hamcke
41,672
<p>The set $S:=[0,2\pi]\times[\sqrt{a},\sqrt{b}]$ is path-connected, being a product of path-connected sets (here $0\le a\le b$, so that $A$ is a genuine annulus). The annulus $A$ is the image of $S$ under the continuous map $(x,y)\mapsto y\cdot e^{xi}+(c+di)$, where you consider $\mathbb R^2$ as the complex plane; the radius $y\in[\sqrt{a},\sqrt{b}]$ makes the squared distance to $(c,d)$ range over $[a,b]$.</p>
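One can sanity-check the parametrization numerically; note the radial coordinate runs over $[\sqrt a,\sqrt b]$ so that the squared distance lands in $[a,b]$ (the sample parameter values below are my own):

```python
import math
import random

a, b, c, d = 1.0, 4.0, 2.0, -3.0   # any 0 <= a <= b will do
random.seed(1)
inside = True
for _ in range(1000):
    t = random.uniform(0, 2 * math.pi)
    r = random.uniform(math.sqrt(a), math.sqrt(b))
    X, Y = r * math.cos(t) + c, r * math.sin(t) + d
    dist2 = (X - c) ** 2 + (Y - d) ** 2
    # Every image point satisfies a <= (X-c)^2 + (Y-d)^2 <= b.
    inside = inside and (a - 1e-9 <= dist2 <= b + 1e-9)
assert inside
```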
2,128,588
<p>The question;</p> <p>$U = \{x |Ax = 0\}$ If $ A = \begin{bmatrix}1 &amp; 2 &amp; 1 &amp; 0 &amp; -2\\ 2 &amp; 1 &amp; 2 &amp; 1 &amp; 2\\1 &amp; 1 &amp; 0 &amp; -1 &amp; -2\\ 0 &amp; 0 &amp; 2 &amp; 0 &amp; 4\end{bmatrix}$</p> <p>Find a basis for $U$.</p> <p><hr> To make it linearly independent, I reduce the rows of $A$;</p> <p>$\begin{bmatrix}1 &amp; 0 &amp; 0 &amp; 0 &amp; 0\\0 &amp; 1 &amp; 0 &amp; 0 &amp; -2\\0 &amp; 0 &amp; 1 &amp; 0 &amp; 2\\0 &amp; 0 &amp; 0 &amp; 1 &amp; 0\end{bmatrix}\begin{bmatrix}x_1 &amp; x_2 &amp; x_3 &amp; x_4\end{bmatrix} = 0$</p> <p>So I figure I just have to list the rows out as vectors;</p> <p>Basis vectors = $\begin{pmatrix}1\\0\\0\\0\\0\end{pmatrix} , \begin{pmatrix}0\\1\\0\\0\\-2\end{pmatrix} , \begin{pmatrix}0\\0\\1\\0\\2\end{pmatrix} , \begin{pmatrix}0\\0\\0\\1\\0\end{pmatrix}$</p> <p>(can't figure out how to surround it all with curly brackets in MathJax).</p> <p><hr> Am I doing this properly? It feels too simple...</p>
drhab
75,923
<p><strong>Hint</strong>:</p> <p>For <strong>every</strong> integer $m$ the set $\{m+1,m+2,m+3,m+4,m+5,m+6\}$ contains exactly $2$ elements that are divisible by $3$.</p> <p>Now for $m$ take the sum of $n-1$ dice.</p> <p>Also have a look <a href="https://math.stackexchange.com/q/1523820/75923">this question</a> and the corresponding answers.</p>
3,057,517
<p>Hi everybody!</p> <p>I'm studying population dynamics for my calculus exam, and I don't understand something that seems really easy, so I thought you might be able to help me out ;)</p> <p>Here's the thing. I have this differential equation <span class="math-container">$\frac{dN}{dt} = \sqrt{N}$</span>.</p> <p>Our book makes us realize that both <span class="math-container">$N(t) = 0$</span> and <span class="math-container">$N(t) = \frac{t^2}{4}$</span> are solutions, which makes sense so far, simply by substituting into the original equation.</p> <p>Now suppose we start at <span class="math-container">$N(0) = 0$</span>. How can <span class="math-container">$N$</span> start growing like <span class="math-container">$\frac{t^2}{4}$</span> if its derivative is <span class="math-container">$0$</span> at <span class="math-container">$t = 0$</span>? Because zero derivative should mean no growth, so <span class="math-container">$\sqrt{N}$</span> should remain zero, which means still no growth, and so on. My brain is melting right now.</p>
Lutz Lehmann
115,115
<p>Apart from these two solutions you can also combine them into solutions that are zero up to some point <span class="math-container">$t\le c$</span> and then follow the shifted quadratic function, <span class="math-container">$N(t)=\frac14(t-c)^2$</span> for <span class="math-container">$t&gt;c$</span>.</p> <p>The irritation you get is based on a misunderstanding of what it means that a derivative is zero. It just means that <span class="math-container">$N(h)=o(h)$</span> for <span class="math-container">$h\approx 0$</span>. </p> <p>If the differential equation were Lipschitz-continuous, small deviations of this size in any consistent integration method advancing with step size <span class="math-container">$h$</span> add up to an error that is still small for the whole solution. In other words, the Euler method and every other consistent method converges towards the unique exact solution. This would still be true if one were to introduce artificial random perturbations of size <span class="math-container">$h^2$</span>.</p> <p>In this case however, a small initial error gets magnified into a different solution. 
If one takes the Euler method with random perturbations, <span class="math-container">$N_{k+1}=N_k+hf(N_k)+h^2r_k$</span>, the numerical solution will rapidly move away from the unstable zero solution and follow closely the other <span class="math-container">$N(t)=\frac14(t-c)_+^2$</span>, with a higher probability for <span class="math-container">$c$</span> being close to zero.</p> <p><a href="https://i.stack.imgur.com/H0LST.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H0LST.png" alt="enter image description here"></a></p> <pre><code>import numpy as np
import matplotlib.pyplot as plt

def Euler(f, t):
    # Explicit Euler with a random O(h^2) perturbation in each step
    y = np.zeros_like(t)
    for k in range(1, len(t)):
        h = t[k] - t[k-1]
        y[k] = y[k-1] + h*f(y[k-1]) + h**2*(np.random.random() - 0.5)
    return y

t = np.linspace(0, 2, 41)

plt.subplot(2, 1, 1)
plt.title(r"Non-Lipschitz: $y'=\sqrt{|y|}$")
for k in range(15):
    y = Euler(lambda y: abs(y)**0.5, t)
    plt.plot(t, y)
plt.grid()

plt.subplot(2, 1, 2)
plt.title(r"Lipschitz: $y'=0.5y+0.3$")
for k in range(15):
    y = Euler(lambda y: 0.5*y + 0.3, t)
    plt.plot(t, y)
plt.grid()
plt.show()
</code></pre>
717,664
<p>I need a step by step answer on how to do this. What I've been doing is converting the top to $2e^{i(\pi/4)}$ and the bottom to $\sqrt2e^{i(-\pi/4)}$. I know the answer is $2e^{i(\pi/2)}$ and the angle makes sense but obviously I'm doing something wrong with the coefficients. I suspect maybe only the real part goes into calculating the amplitude but I can't be sure.</p>
Batman
127,428
<p>Try multiplying the numerator and denominator by $1+i$. This will give you $\frac{(2i+2)(1+i)}{1^2+1^2}$. Then, FOIL the numerator and note $i^2=-1$. </p>
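Python's built-in complex type confirms the result of this computation, and also pins down where the moduli in the question went astray: $|2+2i|=2\sqrt2$, not $2$ (my own check):

```python
import cmath

z = (2 + 2j) / (1 - 1j)
assert cmath.isclose(z, 2j)                        # modulus 2, argument pi/2
# Moduli divide: |2+2i| / |1-i| = 2*sqrt(2) / sqrt(2) = 2.
assert cmath.isclose(abs(2 + 2j) / abs(1 - 1j), 2)
```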
717,664
<p>I need a step by step answer on how to do this. What I've been doing is converting the top to $2e^{i(\pi/4)}$ and the bottom to $\sqrt2e^{i(-\pi/4)}$. I know the answer is $2e^{i(\pi/2)}$ and the angle makes sense but obviously I'm doing something wrong with the coefficients. I suspect maybe only the real part goes into calculating the amplitude but I can't be sure.</p>
Gautam Shenoy
35,983
<p>Hint: Take an "i" out from top.</p>
1,109,853
<blockquote> <p><em>The below proof is incorrect. See the answers for more information.</em></p> </blockquote> <p>This question is in the context of exploring how to explain the process of developing a proof.</p> <p>When reading a proof on the irrationality of $ \sqrt{3} $, I came across the following statement, which was not proved in the irrationality proof itself.</p> <ul> <li>If $ a^2 $ is divisible by 3, then $ a $ is divisible by 3.</li> </ul> <p>I believe the following proves the above statement:</p> <ol> <li>Let $k$ be an integer, and $a$ be an integer divisible by $n$, where $ a=n(k+1) $.</li> <li>$ a = nk+n $</li> <li>$ a^2 = (nk + n)(nk + n) $</li> <li>$ a^2 = n^2(k+1)(k+1) $</li> <li>Therefore, $a^2$ is divisible by $n$.</li> </ol> <p>Although the above proof "feels" valid to me, it also seems like the proof is not complete, in a formal sense, because:</p> <ul> <li>Constraints are not placed on the variables.</li> <li>Although the leap from step 4 to step 5 seems intuitive, there is no formal explanation as to why the step is valid. (It seems like something is missing to explain how to go from divisible by $n^2$ to divisible by $n$.)</li> <li>$a$ is divisible by 3 $\implies$ $a^2$ is divisible by 3, but no justification is given for the opposite implication.</li> </ul> <p>All that said, is the above proof sufficient to justify the initial assertion about divisibility by 3? What would formally justify going from step 4 to step 5?</p> <p>And more generally: Are there objective standards for sufficiency of proof, either published or generally accepted?</p>
Mike Pierce
167,197
<p>Your proof appears to be the reverse of what you are trying to prove. The original statement is "$a^2$ divisible by $3$ implies that $a$ is divisible by $3$." Your proof starts with "suppose we have $a$ divisible by $n$..." and this lead to "...$a^2$ is divisible by $n$." Your proof should <em>start</em> with "$a^2$ divisible by $n$."</p> <p>Also, I am curious why you decide to have $(k+1)$ in $a = n(k+1)$. Why not just $k$?</p>
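As an aside, the statement being proved is easy to spot-check by brute force, which also shows where primality of $3$ matters (my addition, not part of the original answer):

```python
# Brute-force check of "3 | a^2 implies 3 | a" for small a.
assert all(a % 3 == 0 for a in range(1, 10_000) if (a * a) % 3 == 0)
# For a non-prime modulus the analogue fails: 4 | 2^2 but 4 does not divide 2.
assert not all(a % 4 == 0 for a in range(1, 100) if (a * a) % 4 == 0)
```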
386,649
<p>If you were working in a number system where there was a one-to-one and onto mapping from each natural to a symbol in the system, what would it mean to have a representation in the system that involved more than one digit?</p> <p>For example, if we let $a_0$ represent $0$, and $a_n$ represent the number $n$ for any $n$ in $\mathbb{N}$, would '$a_1$$a_0$' represent a number?</p> <p>Is such a system well defined or useful for anything?</p>
Ross Millikan
1,827
<p>In a sense, our usual system is like that. In $\LaTeX$ if you put braces around something it gets treated as a single character. I know that isn't what you are thinking, but to do what you are thinking you would need a countably infinite set of characters, which is what we get with the decimal (or other base) system.</p> <p>If you had them, you could define concatenation to be some operation like multiplication if you wanted.</p>
4,107,396
<p>I'm stuck on the following problem: toss 3 fair coins independently. Let $A$ be the event &quot;you get at least one head&quot; and $B$ &quot;you get exactly one tail&quot;. Then what is the probability of the event <span class="math-container">$A \cup B$</span>?</p>
YJT
731,237
<p>Note that <span class="math-container">$\overline{A\cup B}=\overline{A}\cap \overline{B}$</span>. Now <span class="math-container">$\overline{A}$</span> is &quot;you get no heads&quot; and <span class="math-container">$\overline{B}$</span> is &quot;you get any number of tails but one&quot;. Only one outcome fits <span class="math-container">$\overline{A}$</span> and it is TTT, which also fits <span class="math-container">$\overline{B}$</span>.</p> <p>So <span class="math-container">$\Pr(\overline{A}\cap\overline{B})=\tfrac{1}{2^3}$</span> and <span class="math-container">$\Pr(A\cup B)=1-\tfrac{1}{2^3}=\tfrac{7}{8}$</span>.</p>
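The sample space is tiny, so the answer can also be confirmed by direct enumeration (my own illustration):

```python
from itertools import product

outcomes = set(product("HT", repeat=3))          # 8 equally likely outcomes
A = {o for o in outcomes if "H" in o}            # at least one head
B = {o for o in outcomes if o.count("T") == 1}   # exactly one tail
assert len(outcomes - (A | B)) == 1              # only TTT is excluded
assert len(A | B) / len(outcomes) == 7 / 8
```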
466,757
<p>Suppose we have the following</p> <p>$$ \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}a_{ij}$$</p> <p>where all the $a_{ij}$ are non-negative.</p> <p>We know that we can interchange the order of summations here. My interpretation of why this is true is that both this iterated sums are rearrangements of the same series and hence converge to the same value, or diverge to infinity (as convergence and absolute convergence are same here and all the rearrangements of an absolutely convergent series converge to the same value as the series).</p> <p>Is this interpretation correct. Or can some one offer some more insightful interpretation of this result?</p> <p>Please note that I am not asking for a proof but interpretations, although an insightful proof would be appreciated.</p>
Umberto P.
67,536
<p>This isn't a proof, but perhaps can give you the insight you are looking for. Any nondecreasing sequence converges to its (possibly infinite) supremum. Thus a series of nonnegative terms converges to the supremum of its partial sums and interchanging the order of summation doesn't affect the value of the supremum: there is no accidental cancellation of terms of opposite sign.</p>
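A concrete toy example (my own, not from the question) shows the interchange numerically:

```python
# A nonnegative double series a_ij = 2^(-i) * 3^(-j), i, j >= 1:
# both iterated sums converge to (sum 2^-i)(sum 3^-j) = 1 * (1/2).
N = 60  # truncation; the tail is far below floating-point precision
row_first = sum(sum(2.0**-i * 3.0**-j for j in range(1, N)) for i in range(1, N))
col_first = sum(sum(2.0**-i * 3.0**-j for i in range(1, N)) for j in range(1, N))
assert abs(row_first - col_first) < 1e-12
assert abs(row_first - 0.5) < 1e-12
```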
1,221,442
<p>I have two uniform random variables $B$ and $C$ distributed between $(2,3)$ and $(0,1)$ respectively. I need to find the mean of $\sqrt{B^2-4C}$. Could I plug in the means for $B$ and $C$ and then solve or is it more complicated than that?<br> The original question is here: <a href="https://math.stackexchange.com/questions/1220538/difference-between-two-real-roots-with-uniformly-distributed-coefficents/1220638#1220638">Difference between two real roots with uniformly distributed coefficents</a></p>
rightskewed
171,836
<p>Use a change of variables:</p> <p>$$X = \sqrt{B^2-4C} $$ $$Y=C$$</p> <p>$$B = \sqrt{X^2+4Y}$$ $$C=Y$$</p> <p>Jacobian: $$ J = \det\left( \begin{array}{cc} \frac{\partial B}{\partial X} &amp;\frac{\partial C}{\partial X} \\ \frac{\partial B}{\partial Y} &amp; \frac{\partial C}{\partial Y} \\ \end{array} \right) = \det \left( \begin{array}{cc} \frac{X}{\sqrt{X^2+4Y}} &amp; 0 \\ \frac{2}{\sqrt{X^2+4Y}} &amp; 1 \\ \end{array} \right) = \frac{X}{\sqrt{X^2+4Y}} $$</p> <p>The joint distribution of $(X,Y)$ is given by: $$f_{X,Y}(X,Y) = |J|\ f_{B,C}(B,C) = |J|\ f_B(B) f_C(C) $$ where $f_{B,C}(B,C) = f_B(B) f_C(C)=1$ because $B,C$ are independent.</p> <p>Thus, $$f_{X,Y}(X,Y) = \frac{X}{\sqrt{X^2+4Y}}$$ on the region where $0 \leq Y \leq 1$ and $2 \leq \sqrt{X^2+4Y} \leq 3$ (the constraint inherited from the range of $B$); in particular $0 \leq X \leq 3$.</p> <p>We need to find $E[X] = \int_0^3 xf_X(x)\, dx$.</p> <p>Integrating out $y$ over the admissible range $\max\!\left(0,\tfrac{4-x^2}{4}\right) \leq y \leq \min\!\left(1,\tfrac{9-x^2}{4}\right)$, using $\int \frac{x}{\sqrt{x^2+4y}}\,dy = \frac{x\sqrt{x^2+4y}}{2}$, gives the piecewise marginal $$f_X(x) = \begin{cases} \frac{x}{2}\left(\sqrt{x^2+4}-2\right) &amp; 0 \leq x \leq 2 \\ \frac{x}{2}\left(\sqrt{x^2+4}-x\right) &amp; 2 \leq x \leq \sqrt{5} \\ \frac{x}{2}\left(3-x\right) &amp; \sqrt{5} \leq x \leq 3 \end{cases}$$</p> <p>Further: $E[X] = \int_0^3 x f_X(x)\,dx$, taken piece by piece; the terms involving $\sqrt{x^2+4}$ probably further require an $x=2\tan\theta$ substitution.</p>
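Independent of the change-of-variables algebra, a quick Monte Carlo run (my addition; sample size and seed are arbitrary) gives a ballpark for $E[X]$ and confirms the support:

```python
import math
import random

random.seed(0)
n = 200_000
samples = [math.sqrt(random.uniform(2, 3)**2 - 4 * random.uniform(0, 1))
           for _ in range(n)]
mean = sum(samples) / n
# B in (2,3) and C in (0,1) force B^2 - 4C into (0, 9), hence X into (0, 3).
assert 0 < min(samples) and max(samples) < 3
assert 1.9 < mean < 2.1   # the exact mean is a bit above 2
```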
11,457
<p>In their paper <em><a href="http://arxiv.org/abs/0904.3908">Computing Systems of Hecke Eigenvalues Associated to Hilbert Modular Forms</a></em>, Greenberg and Voight remark that</p> <p>...it is a folklore conjecture that if one orders totally real fields by their discriminant, then a (substantial) positive proportion of fields will have strict class number 1.</p> <p>I've tried searching for more details about this, but haven't found anything. </p> <p>Is this conjecture based solely on calculations, or are there heuristics which explain why this should be true? </p>
Emerton
2,874
<p>One heuristic is the following: if one imagines that the residue at $s = 1$ of the $\zeta$-function doesn't grow too rapidly, then the value is a combination of the regulator and the class number. I don't know any reason for the regulator not to also grow (there are a lot of units, after all!), and hence one can imagine that the class number then stays small.</p> <p>This is part of a general heuristic that in random number fields there tends to be a trade-off between units and class number, so especially in the totally real case, when there are so many units, the class number should often be 1. </p> <p>I learnt some of these heuristics from a colleague of mine who regards it more-or-less as an axiom that a random number field has very small class number. I think this view was formed through a mixture of back-of-the-envelope ideas of the type described above, together with a lot of experience computing with random number fields. So the answer to your question might be that it is a mixture of heuristics and computations.</p> <p>Incidentally, in the real quadratic case, it is compatible with Cohen--Lenstra, but I think it goes back to Gauss. Also, there are generalizations of Cohen--Lenstra to the higher degree context, and I'm pretty sure that they are compatible with the class group/unit group trade-off heuristic described above.</p>
411,717
<p>Let $G$ be a group. By an automorphism of $G$ we mean an isomorphism $f: G\to G$. By an inner automorphism of $G$ we mean any function $\Phi_a$ of the following form: for every $x\in G$, $\Phi_a(x)=a x a^{-1}$. Prove that every inner automorphism of $G$ is an automorphism of $G$, which means I should prove $\Phi_a$ is an isomorphism? Any suggestions? Thanks.</p>
Ink
34,881
<p>To prove that every inner automorphism is indeed an automorphism, you need to show that</p> <ol> <li>$\Phi_a$ is a homomorphism</li> <li>$\Phi_a$ is surjective</li> <li>$\Phi_a$ is injective (i.e $\ker\Phi_{a} = \{e\}$)</li> </ol> <p>All three are straightforward if you know your definitions.</p>
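For concreteness, all three properties can be checked mechanically on a small group such as $S_3$, with permutations stored as tuples (my own illustration):

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p(q(i)); permutations of {0, 1, 2} stored as tuples
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))   # the symmetric group S_3
a = (1, 2, 0)                      # a fixed element of G
phi = {x: compose(compose(a, x), inverse(a)) for x in G}   # x -> a x a^{-1}

# 1. homomorphism, 2. injective, 3. surjective
assert all(phi[compose(x, y)] == compose(phi[x], phi[y]) for x in G for y in G)
assert len(set(phi.values())) == len(G)
assert set(phi.values()) == set(G)
```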
3,436,515
<p>Please help!</p> <p>How to show that <span class="math-container">$ \lim _{n\to\infty} \frac{x_{n+1}}{x_n} =\frac{1+\sqrt 5}{2}$</span> for the dynamical system <span class="math-container">$$x_{n+1}=x_n + y_n\\ y_{n+1}=x_n$$</span></p> <p>Thank you!</p>
user90369
332,823
<p>From </p> <p><span class="math-container">$\left(\begin{array}{c} x_n \\ y_n \end{array}\right) = \left(\begin{array}{cc} 1 &amp; 1 \\ 1 &amp; 0 \end{array}\right)^n\left(\begin{array}{c} x_0 \\ y_0 \end{array}\right)$</span></p> <p>you get </p> <p><span class="math-container">$\left(\begin{array}{c} x_n \\ y_n \end{array}\right) = \left(\begin{array}{cc} F_{n+1} &amp; F_n \\ F_n &amp; F_{n-1} \end{array}\right)\left(\begin{array}{c} x_0 \\ y_0 \end{array}\right)$</span> </p> <p>and the rest should be clear by using the explicit formula for the <a href="https://en.wikipedia.org/wiki/Fibonacci_number" rel="nofollow noreferrer">Fibonacci numbers</a>, </p> <p>e.g. have a look at the section <em>Matrix form</em>.</p>
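Numerically the convergence is very fast; a few iterations of the system already pin down the golden ratio (my own check, with an arbitrary start):

```python
x, y = 1.0, 0.0          # x_0 = 1, y_0 = 0
for _ in range(40):
    x, y = x + y, x      # the system: x_{n+1} = x_n + y_n, y_{n+1} = x_n
golden = (1 + 5 ** 0.5) / 2
# After 40 steps x/y is a ratio of consecutive Fibonacci numbers.
assert abs(x / y - golden) < 1e-12
```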
442,759
<p>I was reading a book on groups, it points out about the uniqueness of the neutral element and the inverse element. I got curious, are there algebraic structures with more than one neutral element and/or more than one inverse element?</p>
Manos
11,921
<p>It depends on how you define the 'neutral' element. If you define it as $e*x=x*e=x$ for every $x$ in your set, where $*$ is the operation with respect to which $e$ is 'neutral', then $e$ is going to be unique.</p>
442,759
<p>I was reading a book on groups, it points out about the uniqueness of the neutral element and the inverse element. I got curious, are there algebraic structures with more than one neutral element and/or more than one inverse element?</p>
Stefan Hamcke
41,672
<p>This just came to my mind:</p> <p>Take a set $X$ with two elements $a$ and $b$. We want to equip this with an interior multiplication that is associative ($X$ is then called a <em>semigroup</em>), and such that $a$ is neutral on the right but not neutral on the left. Right-neutrality of $a$ gives $aa=a$ and $ba=b$, and since $a$ is not neutral on the left we need $ab\neq b$, which in a two-element set forces $ab=a$. So we already know three of four values: $$aa=a,\hspace{20pt}ba=b\hspace{20pt}ab=a\hspace{20pt}bb=?$$ Now, associativity requires that $(ba)b=bb=b(ab)=ba=b$. So $$aa=a,\hspace{20pt}ba=b\hspace{20pt}ab=a\hspace{20pt}bb=b$$ Or in other words, $(x,y)$ gets mapped to the first entry, so the multiplication is projection on the first coordinate. It is then easy to show that this is indeed associative. As we see, both elements $a$ and $b$ are neutral on the right.</p>
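The little multiplication table is quickly machine-checked (my own illustration):

```python
from itertools import product

S = ["a", "b"]

def mul(x, y):
    # Projection on the first coordinate: x * y = x.
    return x

# Associativity: both sides of (xy)z = x(yz) equal x.
assert all(mul(mul(x, y), z) == mul(x, mul(y, z)) for x, y, z in product(S, repeat=3))
# Every element is neutral on the right ...
assert all(mul(x, e) == x for e in S for x in S)
# ... but no element is neutral on the left.
assert not any(all(mul(e, x) == x for x in S) for e in S)
```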
107,171
<p>I'm trying to find $$\lim\limits_{(x,y) \to (0,0)} \frac{e^{-\frac{1}{x^2+y^2}}}{x^4+y^4} .$$ After I tried couple of algebraic manipulation, I decided to use the polaric method. I choose $x=r\cos \theta $ , $y=r\sin \theta$, and $r= \sqrt{x^2+y^2}$, so I get </p> <p>$$\lim\limits_{r \to 0} \frac{e^{-\frac{1}{r^2}}}{r^4\cos^4 \theta+r^4 \sin^4 \theta } $$</p> <p>What do I do from here? </p> <p>Thanks a lot!</p>
J. M. ain't a mathematician
498
<p>To expand on my comment, two functions here are relevant: the Lerch transcendent</p> <p>$$\Phi(z,s,a)=\sum_{k=0}^\infty \frac{z^k}{(k+a)^s}$$</p> <p>and the polygamma function</p> <p>$$\psi^{(k)}(z)=\frac{\mathrm d^{k+1}}{\mathrm dz^{k+1}}\log\Gamma(z)=(-1)^{k+1}k!\sum_{j=0}^\infty \frac1{(z+j)^{k+1}}$$</p> <p>where the series expression can be easily derived from differentiating the gamma function relation $\Gamma(z+1)=z\Gamma(z)$ an appropriate number of times</p> <p>$$\psi^{(k)}(z+1)=\psi^{(k)}(z)+\frac{(-1)^k k!}{z^{k+1}}$$</p> <p>and recursing as needed.</p> <p>Comparing these definitions with the series at hand, we find that</p> <p>$$\Phi\left(1,2,\frac13\right)=\sum_{k=0}^\infty \frac1{(k+1/3)^2}$$</p> <p>which almost resembles the OP's series, save for a multiplicative factor:</p> <p>$$\frac19\Phi\left(1,2,\frac13\right)=\sum_{k=0}^\infty \frac1{9(k+1/3)^2}=\sum_{k=0}^\infty \frac1{(3k+1)^2}$$</p> <p>For the polygamma route, we specialize here to the trigamma case:</p> <p>$$\psi^{(1)}(z)=\sum_{j=0}^\infty \frac1{(z+j)^2}$$</p> <p>Letting $z=\frac13$, we have</p> <p>$$\psi^{(1)}\left(\frac13\right)=\sum_{j=0}^\infty \frac1{(j+1/3)^2}$$</p> <p>and we again see something familiar. Thus,</p> <p>$$\sum_{k=0}^\infty \frac1{(3k+1)^2}=\frac19\Phi\left(1,2,\frac13\right)=\frac19\psi^{(1)}\left(\frac13\right)$$</p>
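Both identities check out numerically with mpmath, which sympy bundles; `psi(1, .)` is mpmath's trigamma function (my addition, not part of the original answer):

```python
import mpmath as mp

mp.mp.dps = 30
s = mp.nsum(lambda k: 1 / (3 * k + 1) ** 2, [0, mp.inf])
third = mp.mpf(1) / 3
assert mp.almosteq(s, mp.lerchphi(1, 2, third) / 9)   # (1/9) * Phi(1, 2, 1/3)
assert mp.almosteq(s, mp.psi(1, third) / 9)           # (1/9) * psi^(1)(1/3)
```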
788,671
<p>What is the ratio of the area of a triangle $ABC$ to the area of the triangle whose sides are equal in length to the medians of triangle $ABC$?</p> <p>I see an obvious method of brute-force wherein I can impose a coordinate system onto the figure. But is there a better solution?</p>
Adan Paulo Rivera Hernandez
677,883
<p>The area of a triangle given its medians $m_a,m_b,m_c$ is <span class="math-container">$(4/3)\sqrt{m(m-m_a)(m-m_b)(m-m_c)}$</span>, where <span class="math-container">$m=(m_a+m_b+m_c)/2$</span>, and the area of the triangle formed by the medians of the first triangle is, by Heron's formula, <span class="math-container">$\sqrt{m(m-m_a)(m-m_b)(m-m_c)}$</span>. Dividing the two we get</p> <p><span class="math-container">$$(4/3)\sqrt{m(m-m_a)(m-m_b)(m-m_c)}/\sqrt{m(m-m_a)(m-m_b)(m-m_c)}=4/3$$</span></p>
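The $4/3$ ratio is easy to confirm numerically on a concrete triangle (my own check; the vertices are arbitrary):

```python
import math

def heron(p, q, r):
    # Heron's formula: area from the three side lengths.
    s = (p + q + r) / 2
    return math.sqrt(s * (s - p) * (s - q) * (s - r))

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

def mid(P, Q):
    return ((P[0] + Q[0]) / 2, (P[1] + Q[1]) / 2)

A, B, C = (0.0, 0.0), (7.0, 0.0), (2.0, 5.0)   # an arbitrary scalene triangle
sides = dist(B, C), dist(C, A), dist(A, B)
medians = dist(A, mid(B, C)), dist(B, mid(C, A)), dist(C, mid(A, B))
ratio = heron(*sides) / heron(*medians)
assert math.isclose(ratio, 4 / 3)
```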