176,055
<p>I heard teachers say [cosh x] instead of saying "hyperbolic cosine of x".</p> <p>I also heard [sinch x] for "hyperbolic sine of x". Is this correct?</p> <p>How would you pronounce tanh x, instead of saying "hyperbolic tangent of x"?</p> <p>Thank you very much in advance.</p>
Argon
27,624
<p>Here are some pronunciations that I use with alternate pronunciations given by others.</p> <ul> <li>$\sinh$ - Sinch (sɪntʃ) (Others say "shine" (ʃaɪn) according to Olivier Bégassat et al.)</li> <li>$\cosh$ - Kosh (kɒʃ or koʊʃ)</li> <li>$\tanh$ - Tanch (tæntʃ) (Others say "tsan" (tsæn) or "tank" (teɪnk) according to André Nicolas)</li> <li>$\coth$ - Koth (kɒθ) according to J. M.</li> <li>$\operatorname{csch}$ - Kisch (kɪʃ) according to J. M.</li> <li>$\operatorname{sech}$ - Seech (siːtʃ) </li> </ul> <p>I'm sure many people pronounce these functions much differently; pronunciation is simply based on preference.</p>
2,847,301
<p>Find elements $a,b,$ and $c$ in the ring $\mathbb{Z}×\mathbb{Z}×\mathbb{Z}$ such that $ab, ac,$ and $bc$ are zero divisors but $abc$ is not a zero divisor.</p> <p>Work:</p> <ul> <li><p>$a=(1,1,0)$</p></li> <li><p>$b=(1,0,1)$</p></li> <li><p>$c=(0,1,1)$</p></li> </ul> <p>Why this works: because $ab=(1,0,0)\neq(0,0,0)$.</p> <blockquote> <p><strong>Definition of zero divisor</strong>. A zero divisor is a non-zero element $a$ of a commutative ring $R$ such that there is a non-zero $b \in R$ with $ab=0$.</p> </blockquote> <p>Any hint or suggestion will be appreciated.</p>
Aaron
9,863
<p>It is worth pointing out that in a commutative ring, if $x$ is a zero divisor, then so is $xy$, unless $xy=0$. The reason is that, if there is some $z$ such that $xz=0$, then $(xy)z=(xz)y=0$. This means that the condition "$abc$ is not a zero divisor" forces $abc=0$. Further, since $ab, bc, ac$ are all nonzero, this means that $a,b,c$ are all zero divisors.</p> <p>Since a nonzero element of $\mathbb Z^3$ is a zero divisor exactly when one of its coordinates is zero, what matters in an example is which coordinates are zero. One might ask "are there any examples that are not of the form $(x,y,0),(z,0,w),(0,s,t)$?" (as such examples fundamentally rely on the same key idea as the given example). In fact, there are not!</p> <p>Between $a,b,c$, we need to have a zero in each of the three coordinates, and each term must have at least one zero. The question is, can we have an example where, say, $a=(1,0,0)$? The answer is no! Because $ab\neq 0$, we can't have $0$ in the first slot of $b$. Similarly, we couldn't have a zero in the first slot of $c$. But then the first slot of $abc$ has to be nonzero too (because $\mathbb Z$ has no zero-divisors). </p> <p>So not only does the given example work, but it is essentially the only example, conceptually speaking.</p>
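The componentwise arithmetic in this example is easy to sanity-check. A minimal Python sketch (the helper names `mul` and `is_zero_divisor` are my own, not from the post):

```python
# Componentwise check of a = (1,1,0), b = (1,0,1), c = (0,1,1) in Z x Z x Z.
def mul(u, v):
    return tuple(x * y for x, y in zip(u, v))

def is_zero_divisor(u):
    # In Z^3, a nonzero element is a zero divisor iff some coordinate is 0.
    return u != (0, 0, 0) and 0 in u

a, b, c = (1, 1, 0), (1, 0, 1), (0, 1, 1)
ab, ac, bc = mul(a, b), mul(a, c), mul(b, c)
abc = mul(ab, c)

print(ab, ac, bc, abc)  # the three pair products, and abc = (0, 0, 0)
print([is_zero_divisor(t) for t in (ab, ac, bc, abc)])  # [True, True, True, False]
```

Running it confirms that $ab$, $ac$, $bc$ are nonzero with a zero coordinate, while $abc=(0,0,0)$ and hence is not a zero divisor.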
3,096,351
<blockquote> <p><span class="math-container">$$x_1 + y_1 = 3$$</span> <span class="math-container">$$y_1 - x_3 = -1$$</span> <span class="math-container">$$x_3 + y_3 = 7$$</span> <span class="math-container">$$x_1 - y_3 = -3$$</span></p> <p>Find the values of <span class="math-container">$x_1$</span>, <span class="math-container">$x_3$</span>, <span class="math-container">$y_1$</span> and <span class="math-container">$y_3$</span>.</p> </blockquote> <p>All I am getting is <span class="math-container">$x_1 + x_3 = 4$</span> and <span class="math-container">$y_1 + y_3 = 6$</span>.</p>
Bernard
202,857
<p><strong>Hint</strong>:</p> <p>This has nothing to do with <span class="math-container">$A$</span> and <span class="math-container">$B$</span> being finite or not.</p> <p>What can you say of a subset <span class="math-container">$X$</span> of <span class="math-container">$A$</span> which is not in <span class="math-container">$\mathscr{P}(B)$</span>?</p>
4,173,559
<p>The integral is</p> <p><span class="math-container">$$\int\frac{1}{x\sqrt{1-x^2}}dx\tag{1}$$</span></p> <p>I tried solving it by parts, but that didn't work out. I couldn't integrate the result of substituting <span class="math-container">$t=1-x^2$</span> either.</p> <p>The answer is</p> <p><span class="math-container">$$\ln\left|\dfrac{1-\sqrt{1-x^2}}{x}\right|$$</span></p>
user71207
814,679
<p>Let <span class="math-container">$\sqrt{-x^{2}+1}=xt+1$</span> to transform the integral into an easier rational function. Rearrange for <span class="math-container">$x$</span> and we have <span class="math-container">$$x = \frac{2t}{-1-t^{2}}$$</span></p> <p>Find the derivative <span class="math-container">$\frac{dx}{dt}$</span>, then substitute <span class="math-container">$x$</span> and <span class="math-container">$dx$</span> into the integral and work towards your answer.</p> <p>Solution:</p> <blockquote class="spoiler"> <p> <span class="math-container">$$\frac{dx}{dt}=\frac{2t^{2}-2}{\left(-1-t^{2}\right)^{2}}$$</span></p> </blockquote> <blockquote class="spoiler"> <p>Our integral becomes thus <span class="math-container">$$\int\frac{\frac{2t^{2}-2}{\left(-1-t^{2}\right)^{2}}}{\frac{2t}{-1-t^{2}}\left(\frac{2t}{-1-t^{2}}+1\right)}dt$$</span> which simplifies nicely to <span class="math-container">$$\int\frac{1}{t}dt\ =\ \ln\left|t\right|\ + C$$</span> where <span class="math-container">$t=\frac{\sqrt{1-x^{2}}-1}{x}$</span></p> </blockquote>
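The claimed antiderivative can be checked numerically: its central-difference derivative should reproduce the integrand on $(0,1)$. A stdlib-only Python sketch:

```python
import math

# Check the antiderivative numerically: the central difference of
# F(x) = ln|(1 - sqrt(1 - x^2)) / x| should match 1/(x*sqrt(1 - x^2)) on (0, 1).
def F(x):
    return math.log(abs((1 - math.sqrt(1 - x * x)) / x))

def integrand(x):
    return 1 / (x * math.sqrt(1 - x * x))

h = 1e-6
for x in (0.2, 0.5, 0.9):
    print(x, (F(x + h) - F(x - h)) / (2 * h), integrand(x))
```

The two columns agree to many digits at each sample point, consistent with the substitution's result up to a constant.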
4,173,559
<p>The integral is</p> <p><span class="math-container">$$\int\frac{1}{x\sqrt{1-x^2}}dx\tag{1}$$</span></p> <p>I tried solving it by parts, but that didn't work out. I couldn't integrate the result of substituting <span class="math-container">$t=1-x^2$</span> either.</p> <p>The answer is</p> <p><span class="math-container">$$\ln\left|\dfrac{1-\sqrt{1-x^2}}{x}\right|$$</span></p>
Joe
623,665
<p>Edit: I just realised that you wanted to evaluate this integral without trigonometric substitution. I'm sorry if this answer is not of any use to you. I will keep it up for the benefit of other readers.</p> <hr /> <p>Because of the identity <span class="math-container">$\sin^2\theta+\cos^2\theta=1$</span>, a good candidate for a substitution is <span class="math-container">$x=\sin\theta$</span> (with <span class="math-container">$-\pi/2\le\theta\le\pi/2$</span>). If we rearrange <span class="math-container">$\cos^2\theta+\sin^2\theta=1$</span>, we get <span class="math-container">$$ \sqrt{\cos^2\theta} = |\cos\theta|=\sqrt{1-\sin^2\theta} $$</span> but since <span class="math-container">$\cos\theta$</span> is nonnegative for <span class="math-container">$\theta\in[-\pi/2,\pi/2]$</span>, this simplifies to <span class="math-container">$$ \cos\theta=\sqrt{1-\sin^2\theta} $$</span> Let us try using this substitution for the integral at hand. If <span class="math-container">$x=\sin\theta$</span>, then <span class="math-container">$dx=\cos\theta \, d\theta$</span>, and so <span class="math-container">\begin{align} \int \frac{dx}{x\sqrt{1-x^2}} &amp;= \int \frac{\cos\theta \, d\theta}{\sin\theta\cos\theta} \\[5pt] &amp;= \int \csc\theta \, d\theta \\[5pt] &amp;= -\log|\csc\theta+\cot\theta| + C \label{*}\tag{*} \\[5pt] &amp;= -\log\left|\frac{1+\cos\theta}{\sin\theta}\right| + C \\[5pt] &amp;= -\log\left|\frac{1+\sqrt{1-x^2}}{x}\right| + C \end{align}</span> Using the identity <span class="math-container">$-\log|y|=\log|y^{-1}|$</span> and then rationalising the fraction, this becomes <span class="math-container">$$ \int \frac{dx}{x\sqrt{1-x^2}} = \log\left|\frac{1-\sqrt{1-x^2}}{x}\right| + C $$</span></p> <hr /> <p>If you are not familiar with how to integrate <span class="math-container">$\csc\theta$</span> as in <span class="math-container">$\eqref{*}$</span>, then note that <span class="math-container">$$ \int \csc\theta \, d\theta = \int\frac{\sin\theta}{1-\cos^2\theta} \, d\theta $$</span> Try making the
substitution <span class="math-container">$u=\cos\theta$</span> and be prepared to simplify your answer a lot!</p>
2,685,822
<p>How can we prove that $L = \lim_{n \to \infty}\frac{\log\left(\frac{n^n}{n!}\right)}{n} = 1$?</p> <p>This is part of a much bigger question; however, I have reduced it to this: I have to determine the limit of $\log(n^{n}/n!)/n$ as $n$ goes to infinity.</p> <p>Apparently the answer is $1$ according to Wolfram Alpha, but I have no clue how to get it. Any idea how I could proceed (without Stirling's approximation as well)?</p>
Ethan Splaver
50,290
<p>Using an integral approximation of the sum of the logarithms of the first $n$ natural numbers gives:</p> <p>$$n \log n - n\leq\int_{1}^n\log(t)dt\leq\underbrace{\sum_{k=1}^n\log(k)}_{\log(n!)}\leq \int_{1}^{n+1}\log(t)dt=\underbrace{(n+1)\log(n+1)-(n+1)}_{n\log(n)-n-1+\log(n)+(n+1)\log(1+1/n)}\\\implies n\log(n)-n\leq \log(n!)\leq n\log(n)-n+\underbrace{\left[-1+\log(n)+(n+1)\log(1+1/n)\right]}_{\alpha(n)}\\\implies n\log(n)-n\leq \log(n!)\leq n\log(n)-n+\alpha(n)\\\implies -n\log(n)+n\geq -\log(n!)\geq -n\log(n)+n-\alpha(n)\\\implies -n\log(n)+n-\alpha(n)\leq -\log(n!)\leq -n\log(n)+n\\\implies n-\alpha(n)\leq\underbrace{n\log(n)-\log(n!)}_{\log(\frac{n^n}{n!})}\leq n\implies 1-\frac{\alpha(n)}{n}\leq \frac{\log(\frac{n^n}{n!})}{n}\leq 1$$</p> <p>However we have that: $$\small -\frac{3\log(n)}{n}\leq -\frac{\log(n)}{n}-\underbrace{2\log(e^{1/n})}_{\large{\frac{2}{n}}}\leq-\frac{\log(n)}{n}-2\log(1+1/n)\leq\underbrace{\frac{1}{n}-\frac{\log(n)}{n}-\frac{n+1}{n}\log(1+1/n)}_{{\large -\frac{\alpha(n)}{n}}}\\\implies -\frac{3\log(n)}{n}\leq -\frac{\alpha(n)}{n}\implies 1-\frac{3\log(n)}{n}\leq1-\frac{\alpha(n)}{n}$$</p> <p>Which by our first inequality gives us:</p> <p>$$1-\frac{3\log(n)}{n}\leq \frac{\log(\frac{n^n}{n!})}{n}\leq 1\implies \left|\frac{\log(\frac{n^n}{n!})}{n}-1\right|\leq \frac{3\log(n)}{n}\\\implies \forall \epsilon &gt;0\left(n&gt;\frac{9}{\epsilon^2}\implies \left|\frac{\log(\frac{n^n}{n!})}{n}-1\right|\leq \frac{3\log(n)}{n}&lt;\frac{3}{n^{1/2}}\leq \frac{3}{(9{\epsilon}^{-2})^{1/2}}=\epsilon\right)\\\implies \forall \epsilon&gt;0\exists N\in \mathbb{N}:\forall n\in \mathbb{N}\left(n\geq N\implies \left|\frac{\log(\frac{n^n}{n!})}{n}-1\right|&lt;\epsilon\right)\implies \lim_{n\to\infty}\frac{\log(\frac{n^n}{n!})}{n}=1$$</p> <p>As required.</p>
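A quick numerical check of the limit, computing $\log(n^n/n!) = n\log n - \log(n!)$ via `math.lgamma` to avoid overflowing the factorial, shows the ratio creeping up toward $1$, consistent with the $1-3\log(n)/n$ lower bound:

```python
import math

# Numeric check: log(n^n / n!) / n -> 1.  Compute log(n!) as math.lgamma(n + 1).
def term(n):
    return (n * math.log(n) - math.lgamma(n + 1)) / n

for n in (10, 1_000, 1_000_000):
    print(n, term(n))
```

The printed values increase monotonically toward $1$ (roughly $0.79$, $0.996$, $0.99999$), and each stays below $1$, matching the two-sided bound derived above.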
718,590
<p>Is it true that in a general complete metric space $(M,d)$ a closed ball of radius $r$ centered at $p\in M$ is always compact? That is, the ball is the set of all points $\left\{x:d(x,p)\leq r\right\}$.</p>
Hamid Shafie Asl
97,252
<p>A subset $E$ of a metric space $X$ is said to be totally bounded if $E$ is contained in the union of finitely many open balls of radius $\varepsilon$, for every $\varepsilon&gt;0$. </p> <p>Let $X$ be a complete metric space. Then $E\subset X$ is compact if and only if it is closed and totally bounded. (<strong>Functional Analysis</strong> by <strong>Walter Rudin</strong>, appendix 4)</p>
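By this criterion, the closed unit ball need not be compact in infinite dimensions: in $\ell^2$ the standard basis vectors all lie in the ball yet are pairwise $\sqrt2$ apart, so no finite family of balls of radius below $\sqrt2/2$ can cover them, i.e. the ball is not totally bounded. A small numpy illustration of those pairwise distances (a finite-dimensional slice of $\ell^2$, my own sketch):

```python
import numpy as np

# Pairwise distances between the first n standard basis vectors
# (rows of the identity matrix): every distinct pair is sqrt(2) apart.
n = 6
E = np.eye(n)
d = np.linalg.norm(E[:, None, :] - E[None, :, :], axis=2)

print(d[0, 1])  # sqrt(2)
print(bool(np.allclose(d[~np.eye(n, dtype=bool)], np.sqrt(2))))  # True
```

Since this holds for every $n$, the unit ball of $\ell^2$ contains an infinite $\sqrt2$-separated set and so fails total boundedness, answering the question in the negative.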
1,231,781
<p>Given an n-by-n matrix $A$ and its $n$ eigenvalues, how do I determine the coefficient of the term $x^2$ of the polynomial given by</p> <p>$q(x) = \det(I_n + xA)$ </p>
abel
9,252
<p>$$\det(I+ x A) = x^n \det (A + \frac 1 x I) = x^n\left(\lambda_1 + \frac 1 x\right)\left(\lambda_2 + \frac 1 x\right)\cdots \left(\lambda_n + \frac 1 x\right) $$</p> <p>Equivalently, $\det(I+xA)=\prod_{i=1}^n(1+\lambda_i x)$, so expanding the product shows that the coefficient of $x^2$ is $\sum_{1\le i&lt;j\le n}\lambda_i\lambda_j$.</p>
1,231,781
<p>Given an n-by-n matrix $A$ and its $n$ eigenvalues, how do I determine the coefficient of the term $x^2$ of the polynomial given by</p> <p>$q(x) = \det(I_n + xA)$ </p>
achille hui
59,379
<p>Translating the result that appears on the wiki page of <a href="http://en.wikipedia.org/wiki/Characteristic_polynomial" rel="nofollow">Characteristic polynomial</a>,</p> <p>$$\det(I_n + x A ) = \sum_{k=0}^n x^k \text{tr}(\wedge^k A)$$</p> <p>where $\text{tr}(\wedge^k A)$ is the trace of the $k^{th}$ exterior power of $A$, which can be evaluated explicitly as the determinant of the $k \times k$ matrix,</p> <p>$$\text{tr}(\wedge^k A) = \frac{1}{k!} \left|\begin{matrix} \text{tr}A &amp; k-1 &amp; 0 &amp; \cdots\\ \text{tr}A^2 &amp; \text{tr}A &amp; k-2 &amp; \cdots\\ \vdots &amp; \vdots &amp; &amp; \ddots &amp; \vdots\\ \text{tr}A^{k-1} &amp; \text{tr}A^{k-2} &amp; &amp; \cdots &amp; 1 \\ \text{tr}A^k &amp; \text{tr}A^{k-1} &amp; &amp; \cdots &amp; \text{tr}A \end{matrix}\right| $$ In the special case of $k = 2$, the coefficient of $x^2$ in $\det(I_n + x A)$ is equal to $$\text{tr}(\wedge^2 A) = \frac{1}{2!} \left|\begin{matrix} \text{tr}A &amp; 1\\ \text{tr}A^2 &amp; \text{tr}A\\ \end{matrix}\right| = \frac12 \left[(\text{tr}A)^2 - \text{tr}(A^2)\right]$$ In terms of the eigenvalues $\lambda_1, \ldots, \lambda_n$ of $A$, this is equal to $$\frac12 \left( ( \sum_{i=1}^n \lambda_i )^2 - \sum_{i=1}^n \lambda_i^2 \right) =\sum_{1\le i &lt; j \le n} \lambda_i\lambda_j$$ Similarly, the coefficient of $x^k$ in $\det(I_n + xA)$ will have the following general form: $$\sum_{1\le i_1 &lt; i_2 &lt; \cdots &lt; i_k \le n} \lambda_{i_1}\lambda_{i_2} \cdots \lambda_{i_k}$$ i.e. the sum of all products of $k$ distinct eigenvalues.</p>
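The $k=2$ formula is easy to cross-check numerically: fit the degree-$n$ polynomial $q(x)=\det(I+xA)$ through $n+1$ sample points and compare its $x^2$ coefficient with $\frac12[(\operatorname{tr}A)^2-\operatorname{tr}(A^2)]$ and with $\sum_{i<j}\lambda_i\lambda_j$. A numpy sketch of my own, not from the answer:

```python
import numpy as np

# Fit q(x) = det(I + xA) through n+1 points and compare its x^2 coefficient
# with (tr(A)^2 - tr(A^2)) / 2 and with sum_{i<j} lambda_i * lambda_j.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))

xs = np.linspace(-0.5, 0.5, n + 1)
vals = [np.linalg.det(np.eye(n) + t * A) for t in xs]
coef = np.polynomial.polynomial.polyfit(xs, vals, n)  # coef[k] = coefficient of x^k

trace_formula = (np.trace(A) ** 2 - np.trace(A @ A)) / 2
lam = np.linalg.eigvals(A)
pair_sum = (lam.sum() ** 2 - (lam ** 2).sum()) / 2

print(coef[2], trace_formula, pair_sum.real)  # all three agree
```

The interpolation through exactly $n+1$ points recovers the polynomial exactly (up to floating-point error), so all three printed values coincide.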
470,506
<p>When is this true? $$\lim_{r\to 0}\int_{-K}^K f(rx)dx=\int_{-K}^K \lim_{r\to 0} f(rx)dx$$ Is it true without the hypothesis of continuity of $f$? </p> <p>Thank you.</p>
Brian M. Scott
12,042
<p>Let your space be $X$. Your argument fails because for any $a&gt;0$ we can choose $J$ to contain all of the points of $X$ in the square $[a,1]\times[a,1]$: there are only finitely many points in that square. The remaining points of $X$ just don’t cover much space, and it’s not too hard to see how to cover them with small balls centred at just finitely many of them.</p> <p>Make sure that the origin is also in $J$, and $B\big(\langle 0,0\rangle,\sqrt2a\big)$ covers every point of $X$ in $[0,a)\times[0,a)$, leaving only the parts of $X$ in $[a,1]\times[0,a)$ and $(0,a)\times[a,1]$ to be covered. Pick any $x\in(0,a)$; it’s not hard to see that there’s a finite subset $F$ of $\big(\{x\}\times[a,1]\big)\cap X$ such that balls of radius $\sqrt2a$ centred at points of $F$ cover $\big([0,a)\times[a,1]\big)\cap X$. Finally, the finitely many balls of radius $\sqrt2a$ centred at points of $X$ in $[a,1]\times\{0\}$ cover $\big([a,1]\times[0,a)\big)\cap X$. Altogether, then, we have a finite set of points such that the balls of radius $\sqrt2a$ centred at those points cover $X$, and we can certainly make $\sqrt2a$ as small as we like.</p> <hr> <p>Here’s another fairly explicit way to see that $X$ is totally bounded. You know that $[0,1]\times[0,1]$, being compact, is totally bounded. Given $\epsilon&gt;0$, let $F_\epsilon$ be a finite subset of $[0,1]\times[0,1]$ such that $$\mathscr{B}=\left\{B\left(x,\frac{\epsilon}2\right):x\in F_\epsilon\right\}$$ covers $[0,1]\times[0,1]$. Clearly $X\subseteq[0,1]\times[0,1]$, so $\mathscr{B}$ covers $X$. The problem, of course, is that $F_\epsilon$ need not be a subset of $X$, but we can get around this.</p> <p>For each $x\in F_\epsilon$ define a point $y_x\in X$ as follows: if $B\left(x,\frac{\epsilon}2\right)\cap X\ne\varnothing$, let $y_x$ be any point of $B\left(x,\frac{\epsilon}2\right)\cap X$, and otherwise let $y_x$ be the origin. Let $D_\epsilon=\{y_x:x\in F_\epsilon\}$; clearly $D_\epsilon$ is a finite subset of $X$. Suppose that $p\in X$.
Then $p\in B\left(x,\frac{\epsilon}2\right)$ for some $x\in F_\epsilon$, so $B\left(x,\frac{\epsilon}2\right)\cap X\ne\varnothing$, and $y_x\in B\left(x,\frac{\epsilon}2\right)\cap X$. This implies that $d(p,y_x)\le d(p,x)+d(x,y_x)&lt;\frac{\epsilon}2+\frac{\epsilon}2=\epsilon$, i.e., that $p\in B(y_x,\epsilon)$. Since $p$ was an arbitrary point of $X$, $\{B(y,\epsilon):y\in D_\epsilon\}$ covers $X$, and $X$ is therefore totally bounded.</p>
2,887,858
<p>I'm learning how to take surface integrals on the surface of spheres in $\mathbb{R}^n$. This question is related to <a href="https://math.stackexchange.com/questions/2887371/calculating-the-surface-integral-int-s-10y-j-d-sigmay">Calculating the surface integral $\int_{S_1(0)}y_j \ d\sigma(y)$</a> where I try to compute a similar integral but I don't know if I did everything ok. (Read that answer to have a better understanding of how to integrate over a surface if you need it.)</p> <p><strong>I'm asked to compute</strong></p> <p>$$\int_{S_1(0)}y_jy_k \ d\sigma(y)$$ </p> <p>Let's imagine the north hemisphere first, through the parametrization $\Sigma(y) = \left(y,\sqrt{1-|y|^2}\right)$:</p> <p>$$\int_{S_1(0)^+}y_jy_k \ d\sigma(y) = \int_{C_1(0)+}y_jy_k\det [ye_1\ \cdots \ ye_{n-1} \ n(y)]\ dy$$ </p> <p>where $C_1(0)$ is just the 'circumference' on which the 'sphere' is parametrized. We can think of it as the region $|y|&lt;1$ for $y\in\mathbb{R}^{n-1}$.</p> <p>Remember that the unit normal $n(y)$ in the unit sphere is just $y$.
So the determinant above would give us $y_1\cdots y_{n-1}\sqrt{1-(y_1^2+\cdots +y_{n-1}^2)}$ because these are the diagonal terms multiplied and the rest of the elements are $0$ (except for the normal column but it gets multiplied by the other $0$s)</p> <p>So we end up with</p> <p>$$\int_{S_1(0)^+}y_jy_k \ d\sigma(y) = \int_{C_1(0)+} y_1\cdots y_j^2 \cdots y_k^2 \cdots \ y_{n-1}\sqrt{1-(y_1^2+\cdots + y_{n-1}^2)}\ dy$$</p> <p>Now, breaking these onto the region $C_1(0)$ we get:</p> <p>$$\int_{S_1(0)^+}y_jy_k \ d\sigma(y) = \\ \int_{-1}^1\cdots\int_{-1}^1 y_1\cdots y_j^2\cdots y_k^2 \cdots \ y_{n-1}\sqrt{1-(y_1^2+\cdots +y_{n-1}^2)} \ dy_1\cdots dy_j \cdots dy_{n-1}$$</p> <p>And for the south hemisphere:</p> <p>$$\int_{S_1(0)^-}y_jy_k \ d\sigma(y) =\\ \int_{-1}^1\cdots\int_{-1}^1 y_1\cdots y_j^2 \cdots y_k^2 \cdots \ y_{n-1}\sqrt{-1+(y_1^2+\cdots + y_{n-1}^2)} \ dy_1\cdots dy_j \cdots dy_{n-1}$$</p> <p>I think it helps if I do the two cases of integration for the north hemisphere first. If $k\neq j$ then I have to integrate $y_i\sqrt(\cdots)$ or $y_i^2\sqrt(\cdots)$. If $k=j$ then it's a matter of integrating $y_i\sqrt(\cdots)$ and $y_i^3\sqrt(\cdots)$. </p> <p>So <strong>for $k\neq j$</strong>, if we integrate $y_i\sqrt(\cdots)$:</p> <p>$$\frac{1}{2}\int_{-1}^12y_i\sqrt{1-(y_1^2 + \cdots + y_i^2 + \cdots + y_{n-1}^2)}\ dy_i = \\\frac{1}{2}\int_{1}^{1}\sqrt{1-(y_1^2 + \cdots + u + \cdots + y_{n-1}^2)}\ du$$</p> <p>I'm integrating with the substitution $u = y_i^2$ from $u(-1) = 1$ to $u(1) = 1$ so it should be $0$. This gets multiplied with every other integral so the entire integral is $0$ which is ok according to my book.</p> <p>Now <strong>for $k=j$</strong>, if we integrate $y_i^4\sqrt(\cdots)$ we would get something, but it would also be multiplied by the integral of $y_i\sqrt(\cdots)$, so it should be $0$ too.
But my book says it is $n^{-1}\int_{S_1(0)}1\,d\sigma(y)$.</p> <p><strong><em>Batominovski's Comment:</em></strong> <em>In the paragraph above, the claim that "it would also be multiplied by the integral of $y_i\sqrt(\cdots)$" is wrong. We do not have $\int\,(fg) =\left(\int\,f\right)\,\left(\int\,g\right)$. So, $\int\,g=0$ does not imply $\int\,(fg)=0$.</em></p> <p>What is wrong?</p>
zhw.
228,045
<p>You have made some errors in setting up the integral, both here and in your previous post. The parameterization of the upper hemisphere we're using is</p> <p>$$\Sigma (y) = (y,(1-|y|^2)^{1/2}),$$</p> <p>for $y$ in the open unit ball $B_{n-1}$ in $\mathbb R^{n-1}.$ We calculate</p> <p>$$\tag 1\frac{\partial \Sigma(y)}{\partial y_k} = (e_k,-y_k(1-|y|^2)^{-1/2}), \,\, k = 1,\dots , n-1.$$</p> <p>Here $e_k$ is the standard basis vector in $\mathbb R^{n-1}.$</p> <p>We want to think of the $n\times n$ matrix with $(1)$ giving the first $n-1$ column vectors, and $n(y),$ the unit normal vector, as the last column. But $n(y)$ is not $y,$ it is $\Sigma (y).$ ($\Sigma (y)$ is on the sphere; $y$ lives in $B_{n-1}.$) The determinant of this matrix is</p> <p>$$\tag 2\frac{1}{(1-|y|^2)^{1/2}}.$$</p> <p>Thus $d\sigma(\Sigma (y)) = dy/(1-|y|^2)^{1/2}.$ This is true in all dimensions, nicely enough.</p> <p>To verify $(2),$ let's take advantage of the fact that $\Sigma$ is a graph. To review: Suppose $U\subset \mathbb R^{n-1}$ is open and $f:U\to \mathbb R$ is smooth. Then the graph $\{(y,f(y)):y\in U\}$ has surface area measure given by</p> <p>$$d \sigma(y,f(y))= \left[1+(D_1f(y))^2 + \cdots + (D_{n-1}f(y))^2\right]^{1/2}\,dy.$$</p> <p>This follows from the general surface area formula you cited in your previous post. You should try to verify this. In the case of the upper hemisphere, $f(y) = (1-|y|^2)^{1/2}.$ A straightforward computation then gives $(2).$</p> <p>Hopefully this will help with the trouble you are having with your specific integrals. Give it a try. I'll stop here for now.</p> <hr> <p>Added later, another approach: Your integrals can be easily done if you accept one property of the surface measure $\sigma$ on the sphere: Rotation invariance.
More precisely, if $T$ is an orthogonal transformation of $\mathbb R^n$ and, say, $f$ is continuous on the unit sphere $S,$ then</p> <p>$$\int_S f(y)\,d\sigma(y) = \int_S f(T(y))\,d\sigma (y).$$</p> <p>So assume $i\in \{1,2,\dots, n\},$ and $T$ is the orthogonal transformation that sends each standard basis vector $e_j$ to itself except for $j=i,$ where $T(e_i) = -e_i.$ Suppose $j\ne i,$ and $f(y) = y_iy_j.$ Then $f(T(y))=-f(y)$ for $y\in S.$ Thus</p> <p>$$ \int_S y_iy_j\,d\sigma (y) = \int_S f(y)\,d\sigma(y) = \int_S f(T(y))\,d\sigma (y) = -\int_S f(y)\,d\sigma(y).$$</p> <p>Thus this integral is $0.$</p> <p>For the other integral, suppose $i\ne j.$ Define $T$ to be the orthogonal transformation that switches $e_i$ with $e_j,$ and leaves the other basis vectors alone. Let $f(y)= y_i^2.$ Then</p> <p>$$\int_S y_i^2\,d\sigma (y) = \int_S f(y)\,d\sigma (y) = \int_S f(T(y))\,d\sigma(y) = \int_S y_j^2\,d\sigma (y).$$</p> <p>From this we get, for fixed $i,$</p> <p>$$\int_S y_i^2d\sigma (y) = \frac{1}{n}\int_S \left (\sum_{j=1}^{n}y_j^2\right )\,d\sigma (y) = \frac{1}{n}\int_S 1\,d\sigma (y) = \frac{\sigma (S)}{n}.$$</p>
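The rotation-invariance identities can be sanity-checked by Monte Carlo: normalizing Gaussian vectors gives $\sigma$-uniform points on the sphere, and the sample averages of $y_iy_j$ and $y_i^2$ (i.e. the integrals divided by $\sigma(S)$) should approach $0$ and $1/n$. A rough numpy sketch, not a proof:

```python
import numpy as np

# Monte Carlo on the unit sphere in R^n: normalize Gaussian samples to get
# sigma-uniform points, then average y_i*y_j and y_i^2.
rng = np.random.default_rng(1)
n, N = 4, 200_000
g = rng.standard_normal((N, n))
y = g / np.linalg.norm(g, axis=1, keepdims=True)

print(np.mean(y[:, 0] * y[:, 1]))  # close to 0
print(np.mean(y[:, 0] ** 2))       # close to 1/n = 0.25
```

With $2\times10^5$ samples both averages land within about $10^{-2}$ of the predicted values.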
1,046,961
<p>Find all continuous functions $f:\mathbb{R} \to \mathbb{R}$ such that for all $x \in \mathbb{R}$, $f(x) + f(2x) = 0$. <br/> I'm thinking: <br/> We have $f(x)=-f(2x)$. <br/> Use the substitution $x=y/2$ for $y \in \mathbb{R}$. <br/> That way $f(y)=-f(y/2)=f(y/4)=-f(y/8)=\dots$ <br/> I'm just not sure if this is a good approach. Opinions, please.</p>
Community
-1
<p>By induction we prove that</p> <p>$$f(x)=(-1)^nf\left(\frac x{2^n}\right)$$ and by the sequential characterization of continuity we have</p> <p>$$f\left(\frac x{2^n}\right)\xrightarrow{n\to\infty}f(0)$$ Setting $x=0$ in the original equation gives $2f(0)=0$, so $f(0)=0$. Hence $|f(x)|=\left|f\left(\frac x{2^n}\right)\right|\xrightarrow{n\to\infty}|f(0)|=0$, and therefore $$f(x)=f(0)=0\quad\forall x\in\Bbb R$$</p>
35,736
<p>I have heard that the canonical divisor can be defined on a normal variety $X$ since the singular locus has codimension at least 2. Then, I have heard as well that for ANY algebraic variety such that the canonical bundle is defined:</p> <p>$$\mathcal{K}=\mathcal{O}_X\left(-\sum D_i\right)$$</p> <p>where the $D_i$ are representatives of all divisors in the Class Group.</p> <p>I want to prove that formula, or to find a reference for it, or for someone to rephrase it in a similar way if they have heard about it.</p> <p>Why do I want to prove it? Well, I use the definition that something is Calabi-Yau if its canonical class is $0$. In the case of toric varieties, $\sum D_i\sim 0$ if all the primitive generators for the divisors lie on a hyperplane. Then the sum is $0$ and therefore the toric variety is Calabi-Yau.</p> <p>Can someone confirm or fix the above formula? I do not ask for a debate on when something is Calabi-Yau, I handle that OK, I just ask whether the above formula is correct. A reference would be enough. I have little access to references at the moment.</p>
Sándor Kovács
10,076
<blockquote> <p><strong>Edit</strong> (11/12/12): I added an explanation of the phrase "this is essentially equivalent to $X$ being $S_2$" at the end to answer aglearner's question in the comments. [See also <a href="https://mathoverflow.net/questions/45347/why-does-the-s2-property-of-a-ring-correspond-to-the-hartogs-phenomenon/45354#45354">here</a> and <a href="https://mathoverflow.net/questions/45347/why-does-the-s2-property-of-a-ring-correspond-to-the-hartogs-phenomenon/45616#45616">here</a>]</p> </blockquote> <p>Dear Jesus,</p> <p>I think there are several problems with your question/desire to define a canonical divisor on <em>any</em> algebraic variety. </p> <p>First of all, what is <em>any</em> algebraic variety? Perhaps you mean a quasi-projective variety (=reduced and of finite type) defined over some (algebraically closed) field.</p> <p>OK, let's assume that $X$ is such a variety. Then what is a <em>divisor</em> on $X$? Of course, you could just say it is a formal linear combination of <em>prime divisors</em>, where a prime divisor is just a codimension 1 irreducible subvariety. </p> <p>OK, but what if $X$ is not equidimensional? Well, let's assume it is, or even that it is irreducible. </p> <p>Still, if you want to talk about divisors, you would surely want to say when two divisors are <em>linearly equivalent</em>. OK, we know what that is, $D_1$ and $D_2$ are linearly equivalent iff $D_1-D_2$ is a <em>principal divisor</em>.</p> <p>But, what is a principal divisor? Here it starts to become clear why one usually assumes that $X$ is normal even to just talk about divisors, let alone defining the canonical divisor. In order to define principal divisors, one would need to define something like the <em>order of vanishing</em> of a regular function along a prime divisor. It's not obvious how to define this unless the local ring of the general point of any prime divisor is a DVR. 
Well, this leads one to want to assume that $X$ is $R_1$, that is, regular in codimension $1$, which is equivalent to those local rings being DVRs.</p> <p>OK, now once we have this we might also want another property: If $f$ is a regular function, we would expect that the zero set of $f$ should be 1-codimensional in $X$. In other words, we would expect that if $Z\subset X$ is a closed subset of codimension at least $2$, then if $f$ is nowhere zero on $X\setminus Z$, then it is nowhere zero on $X$. In (yet) other words, if $1/f$ is a regular function on $X\setminus Z$, then we expect that it is a regular function on $X$. This in the language of sheaves means that we expect that the push-forward of $\mathscr O_{X\setminus Z}$ to $X$ is isomorphic to $\mathscr O_X$. Now this is essentially equivalent to $X$ being $S_2$.</p> <p>So we get that in order to define <em>divisors</em> as we are used to them, we would need that $X$ be $R_1$ and $S_2$, that is, <em>normal</em>. </p> <p>Now, actually, one can work with objects that behave very much like divisors even on non-normal varieties/schemes, but one has to be very careful what properties work for them.</p> <p>As far as I can tell, the best way is to work with <em>Weil divisorial sheaves</em> which are really reflexive sheaves of rank $1$. On a normal variety, the sheaf associated to a Weil divisor $D$, usually denoted by $\mathcal O_X(D)$, is indeed a reflexive sheaf of rank $1$, and conversely every reflexive sheaf of rank $1$ on a normal variety is the sheaf associated to a Weil divisor (in particular a reflexive sheaf of rank $1$ on a regular variety is an invertible sheaf) so this is indeed a direct generalization. One word of caution here: $\mathcal O_X(D)$ may be defined for Weil divisors that are not Cartier, but then this is (obviously) not an invertible sheaf.</p> <p>Finally, to answer your original question about canonical divisors.
Indeed it is possible to define a canonical divisor (=Weil divisorial sheaf) for all quasi-projective varieties. If $X\subseteq \mathbb P^N$ and $\overline X$ denotes the closure of $X$ in $\mathbb P^N$, then the dualizing complex of $\overline X$ is $$ \omega_{\overline X}^\bullet=R{\mathscr H}om_{\mathbb P^N}(\mathscr O_{\overline X}, \omega_{\mathbb P^N}[N]) $$ and the canonical <em>sheaf</em> of $X$ is $$ \omega_X=h^{-n}(\omega_{\overline X}^\bullet)|_X=\mathscr Ext^{N-n}_{\mathbb P^N}(\mathscr O_{\overline X},\omega_{\mathbb P^N})|_X $$ where $n=\dim X$. (Notice that you may disregard the derived category stuff and the dualizing complex, and just make the definition using $\mathscr Ext$.) Notice further that if $X$ is normal, this is the same as the one you are used to and otherwise it is a reflexive sheaf of rank $1$.</p> <p>As for your formula, I am not entirely sure what you mean by "where the $D_i$ are representatives of all divisors in the Class Group". For toric varieties this can be made sense of as in Josh's answer, but otherwise I am not sure what you had in mind.</p> <blockquote> <p>(Added on 11/12/12):</p> </blockquote> <p><strong>Lemma</strong> A scheme $X$ is $S_2$ if and only if for any closed subset $\iota:Z\to X$ of codimension at least $2$, the natural map $\mathscr O_X\to \iota_*\mathscr O_{X\setminus Z}$ is an isomorphism.</p> <p><strong>Proof</strong> Since both statements are local we may assume that $X$ is affine. Let $x\in X$ be a point and $Z\subseteq X$ its closure in $X$.
If $x$ is a codimension at most $1$ point, there is nothing to prove, so we may assume that $Z$ is of codimension at least $2$.</p> <p>Considering the exact sequence (recall that $X$ is affine): $$ 0\to H^0_Z(X,\mathscr O_X) \to H^0(X,\mathscr O_X) \to H^0(X\setminus Z,\mathscr O_X) \to H^1_Z(X,\mathscr O_X) \to 0 $$ shows that $\mathscr O_X\to \iota_*\mathscr O_{X\setminus Z}$ is an isomorphism if and only if $H^0_Z(X,\mathscr O_X)=H^1_Z(X,\mathscr O_X)=0$. The latter condition is equivalent to $$ \mathrm{depth}\mathscr O_{X,x}\geq 2, $$ which given the assumption on the codimension is exactly the condition that $X$ is $S_2$ at $x\in X$. $\qquad\square$</p>
4,240,794
<p>I was trying to obtain the square root of a matrix through its eigenvalues and eigenvectors, and there is something that doesn't add up in some of the proofs I looked at after getting stuck.</p> <p>So, with <span class="math-container">$Q$</span> the matrix whose columns are eigenvectors of <span class="math-container">$A$</span> and <span class="math-container">$\Lambda$</span> a diagonal matrix with the eigenvalues of <span class="math-container">$A$</span>, we can write <span class="math-container">$A$</span> as:</p> <p><span class="math-container">$$ A = Q \Lambda Q^{-1} $$</span></p> <p>From this it is easy to deduce that:</p> <p><span class="math-container">$$ A^n = (Q \Lambda Q^{-1})(Q \Lambda Q^{-1})\cdots(Q \Lambda Q^{-1}) = Q \Lambda^n Q^{-1} $$</span></p> <p>But this argument only seems valid for the natural numbers, and I cannot see how it extends directly to integer or rational exponents.</p>
Azlif
513,870
<p>Your function is continuous. A continuous function on an interval is one-to-one if and only if it is strictly decreasing or strictly increasing.</p> <p>In your case, the function is differentiable, so we only need to examine its first derivative, which by simple computation turns out to be <span class="math-container">$a_2$</span> times <span class="math-container">$xe^{-2x}$</span>. The latter is positive iff <span class="math-container">$x &gt; 0$</span> and negative iff <span class="math-container">$ x &lt; 0$</span>.</p>
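For the matrix square-root question above (the one about $A = Q\Lambda Q^{-1}$), the extension to the exponent $1/2$ can at least be verified numerically for a diagonalizable matrix with positive eigenvalues: define $A^{1/2}=Q\Lambda^{1/2}Q^{-1}$ and square it. A numpy sketch (the example matrix is my own):

```python
import numpy as np

# A^(1/2) = Q Lambda^(1/2) Q^(-1) for a symmetric A with positive eigenvalues.
A = np.array([[5.0, 4.0],
              [4.0, 5.0]])  # eigenvalues 1 and 9

w, Q = np.linalg.eigh(A)     # Q is orthogonal here, so Q^(-1) = Q.T
sqrt_A = Q @ np.diag(np.sqrt(w)) @ Q.T

print(np.allclose(sqrt_A @ sqrt_A, A))  # True
```

The squaring check works because $(Q\Lambda^{1/2}Q^{-1})^2 = Q\Lambda^{1/2}\Lambda^{1/2}Q^{-1} = Q\Lambda Q^{-1}$, the same telescoping that proves the natural-number case.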
2,298,359
<p>Which of the following has an element that is less than any other element in that set?</p> <p>I. The set of positive rational numbers</p> <p>II. The set of positive rational numbers $r$ such that $r^2$ is greater than or equal to $2$</p> <p>III. The set of positive rational numbers such that $r^2 &gt; 4$. </p> <p>The answer is None. </p> <p>I am not very clear why. Can anyone please explain intuitively? I know that there is no such thing as the smallest positive rational number. </p>
fleablood
280,126
<p>" 'there is no such thing as the smallest positive rational number' which is why None of the choices has a smallest element." Be a <em>little</em> careful! The is no smallest rational number but a <em>set</em> of rational numbers could have a smallest element <em>in the set</em>.</p> <p>Example: The set of rational numbers $r$ so that $r \ge 5$ has a smallest element. $r = 5$ is in the set and it is the smallest element in the set.</p> <p>Of the set of all rational numbers so that $r^2 \le 0$. That set has only <em>one</em> element at all. $0$. $O$ is the smallest element in <em>that</em> set.</p> <p>Okay, but what about the questions in the exercise:</p> <p>I) The set of positive rational numbers.</p> <p>There is no smallest positive rational number so no, the set doesn't have a smallest element.</p> <p>If asked why, you should be able to justify it. If $r$ is in the set then $\frac r2 &lt; r$ is also in the set, so none can be smallest.</p> <p>II) The set of all positive rational numbers so that $r^2 \ge 2$. </p> <p>$\sqrt{2}$ is irrational so for any rational number, $r$ either $r^2 &lt; 2$ or $r^2 &gt; 2$. So for any $r^2 &gt; 2$ there is another $\sqrt 2&lt; s &lt; r$ and $s^2 &gt; 0$ so there is no smallest element.</p> <p>But NOTE. If the question was all rational numbers so that $r^2 \ge 4$ that <em>does</em> have a smallest element. If $r = 2$ then $r^2 \ge 4$ and for any number $q &lt; 2$ then $q^2 &lt; 4$ so $2$ is the smallest element of the set.</p> <p>III. The set of all positive rationals where $r^2 &gt; 4$. This is just the set where $r &gt; 2$. It has no smallest as there is always some $2&lt; s &lt; r$ for any rational $r &gt; 2$.</p>
2,347,892
<p>Let $f$ be a continuously differentiable function on $\mathbb{R}$. If $f(0)=0$ and $|f'(x)|\leq|f(x)|,\;\forall x \in \mathbb{R}$, then $f$ is the null function.</p> <p>Following @DougM's idea, I think I figured it out: Consider $f$ restricted to the interval $[0,1]$. Then, by the Weierstrass theorem there exist $x_1,x_2 \in [0,1]$ such that $f(x_1)\leq f(x)\leq f(x_2),\;\forall x \in [0,1]$. Suppose, WLOG, that $|f(x_1)|\geq|f(x_2)|$. Note that this implies $|f(x_1)|\geq |f(x)|,\; \forall x \in [0,1]$.<br> If $x_1=0$, then $f(x)=0, \forall x \in [0,1]$.<br> Suppose $x_1\neq0$. Then, by the mean value theorem, there exists $c \in (0,1)$ such that: \begin{align} f(x_1)-f(0)&amp;=f'(c)(x_1-0)\\ |f(x_1)|&amp;=|f'(c)||x_1|\\ |f(x_1)|&amp;\leq|f'(c)| \end{align} But then $|f(c)|\geq|f'(c)|\geq|f(x_1)|$.<br> As $c\in(0,1)\subset[0,1]$, by the mean value theorem there exists $d\in(0,c)\subset[0,1]$ such that: \begin{align} f(c)-f(0)&amp;=f'(d)(c-0)\\ |f(c)|&amp;=|f'(d)||c|\\ |f(c)|&amp;&lt;|f'(d)| \end{align} (Here, if $f'(d)=0$ then $|f(c)|=0$, which already forces $|f(x_1)|=0$ and we are done; so assume $f'(d)\neq 0$, making the last inequality strict.)<br> But then $|f(d)|\geq|f'(d)|&gt;|f(c)|\geq |f(x_1)|$, a contradiction, since $|f(x_1)|\geq |f(x)|,\; \forall x\in [0,1]$.<br> Therefore $x_1=0$.<br> From there you can just follow inductively to prove $f(x)=0$ for all positive $x$, and give the analogous proof for the negatives.<br> Using the MVT twice is kinda weird; I wonder if there's a way not to use it like that.</p>
Philip Roe
430,997
<p>The general rule for hyperbolic PDEs in two variables is that on any boundary you are allowed as many boundary conditions as there are characteristics entering the domain. This question is about the special case where the domain boundary coincides with a characteristic. On such a boundary we may only prescribe boundary data that satisfies the characteristic equation. That is not the case here, so there is a conflict, and therefore no solution.</p>
1,780,352
<p>Suppose we have a series</p> <p>$$\sum^{\infty}_{n = 0} \frac{\sqrt{n}}{(n+1)^2 + |z|^2}$$</p> <p>As mentioned in the title, how to find the radius of convergence?</p> <p>In my opinion, $z \in \mathbb{C}$, so $|z|^2$ is a real number, hence we can treat $|z|^2$ as a fixed constant. But then again, this is not a power series?</p>
Mark Viola
218,419
<p>Note that for $|a|=1$, we can write $a=e^{i\psi}$. Then, exploiting the $2\pi$-periodicity of the integrand, we have</p> <p>$$\begin{align} \int_{-\pi}^\pi \log|1-ae^{i\theta}|\,d\theta&amp;=\int_{-\pi}^\pi \log|1-e^{i(\theta+\psi)}|\,d\theta\\\\ &amp;=\int_{-\pi+\psi}^{\pi+\psi} \log|1-e^{i\theta}|\,d\theta\\\\ &amp;=\int_0^{2\pi}\log|1-e^{i\theta}|\,d\theta \end{align}$$</p> <p><strong>METHODOLOGY 1:</strong></p> <p>Note that we have $|1-e^{i\theta}|=\sqrt{2(1-\cos(\theta))}=2|\sin{\theta/2}|$. Now, we have</p> <p>$$\int_0^{2\pi}\log|1-e^{i\theta}|\,d\theta=4\int_0^{\pi/2}\log(2\sin(\theta))\,d\theta=0$$</p> <p>since $\int_0^{\pi/2}\log(\sin(\theta))\,d\theta=-\frac{\pi}{2}\log(2)$</p> <hr> <p><strong>METHODOLOGY 2:</strong></p> <p>Now, we cut the complex plane with a line from $(1,0)$ and extending along the positive real axis. </p> <p>Note that $\log(1-z)$ is analytic within and on a closed contour $C$ defined by $z=e^{i\phi}$ for $\epsilon \le \phi \le 2\pi -\epsilon$, and $z=1+2\sin(\epsilon/2) e^{i\nu}$ for $\pi/2 + \gamma \le \nu \le 3\pi/2 -\gamma$, where $ \cos(\gamma)=\frac{\sin \epsilon}{\sqrt{2(1-\cos \epsilon)}}$ and $0 \le \gamma &lt;2\pi$ on this branch of $\gamma$.</p> <p>Then, from the residue theorem, we have </p> <p>$$\int_C \frac{\log(1-z)}{z}dz=2\pi i \log(1-0)=0$$</p> <p>which implies </p> <p>$$\begin{align} \int_C \frac{\log(1-z)}{z} dz&amp;=\int_{\epsilon}^{2\pi-\epsilon} \log(1-e^{i\phi})i d\phi+\int_{3\pi/2-\gamma}^{\pi/2+\gamma} \frac{\log(-2\sin(\epsilon/2) e^{i\nu})}{1+2\sin(\epsilon/2) e^{i\nu}}i2\sin(\epsilon/2) e^{i\nu}d\nu\\\\ &amp;=i\int_{\epsilon}^{2\pi-\epsilon} \log(1-e^{i\phi}) d\phi+ i2\sin(\epsilon/2) \int_{3\pi/2-\gamma}^{\pi/2+\gamma} \frac{\log(-2\sin(\epsilon/2)e^{i\nu})e^{i\nu}d\nu}{1+2\sin(\epsilon/2)e^{i\nu}} \\\\ &amp;=i\int_{\epsilon}^{2\pi-\epsilon} \log|1-e^{i\phi}| d\phi -i \int_{\epsilon}^{2\pi-\epsilon} \arctan \left(\frac{\sin \phi}{1-\cos \phi}\right)d\phi \\\\ &amp;+ i2\sin(\epsilon/2) 
\int_{3\pi/2-\gamma}^{\pi/2+\gamma} \frac{\log(-2\sin(\epsilon/2)e^{i\nu})e^{i\nu}d\nu}{1+2\sin(\epsilon/2)e^{i\nu}} \\\\ &amp;=0 \end{align}$$</p> <p>As $\epsilon \to 0$ the first term on the RHS approaches $i$ times the integral of interest. The second term approaches zero since $\arctan(\frac{\sin \phi}{1-\cos \phi})$ is an odd, $2\pi$-periodic function of $\phi$ and the integration extends over the entire period. And the last term approaches $0$ since $x\log x \to 0$ as $x \to 0$. Thus, the integral of interest is zero!</p>
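A quick numerical sanity check of the result (my addition, not part of the original answer): by the identity $\prod_{k=1}^{N-1}2\sin(k\pi/N)=N$, the Riemann sum of $\log|1-e^{i\theta}|$ over the uniform grid $\theta_k=2\pi k/N$ (skipping the integrable singularity at $\theta=0$) equals exactly $(2\pi/N)\log N$, which tends to $0$ as expected.

```python
import cmath
import math

# Riemann-sum check that the integral of log|1 - e^{i*theta}| over [0, 2*pi] is 0.
# On the grid theta_k = 2*pi*k/N (k = 1..N-1, avoiding the log singularity at 0)
# the sum equals (2*pi/N) * log N exactly, since prod_{k} |1 - e^{2*pi*i*k/N}| = N.
N = 100_000
riemann = sum(math.log(abs(1 - cmath.exp(2j * math.pi * k / N)))
              for k in range(1, N)) * (2 * math.pi / N)
print(riemann)  # small and shrinking like (2*pi/N) * log N
```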
1,409,932
<p>I am trying to make a random number generator that is sorta special. Basically it generates a number between 5 and 17. The twist that I need math help on is that I want to have a variable "P" that works like the following.</p> <p>The higher P is, the more likely numbers closer to 17 are to appear; the lower it is, the more likely numbers closer to 5 are to appear.</p> <p>I have found the perfect function to model this, 1 /( 1 + abs((x - c) / a)^2B); you can play around with it <a href="https://www.desmos.com/calculator/h7aboouu2x" rel="nofollow">here</a></p> <p>Anyway, when I see that function I am considering the X axis as each of the numbers (5 - 17) and the Y axis as the probability that it will be randomly picked. Notice how if you change the value of "C" you change where the hump is. This is what will be the "P" variable. As you can see, the higher C is, the higher the numbers the hump is over are.</p> <p>How exactly would I do this mathematically? I have taken 1d data like the result of a die before and turned it into a 2d bar graph, but what I need to do this time is the opposite way around.</p> <p>Edit: The Gaussian function might work well too <a href="https://www.desmos.com/calculator/i4aydcx5ts" rel="nofollow">https://www.desmos.com/calculator/i4aydcx5ts</a></p>
Brian Tung
224,454
<p>To answer the question of what kind of distribution to use (as opposed to David K's good answer regarding how to "implement" a distribution), you might consider the <a href="https://en.wikipedia.org/wiki/Beta_distribution" rel="nofollow">beta distribution</a>, scaled and translated appropriately (multiply by $12$ and add $5$). By adjusting the parameters $\alpha$ and $\beta$, you can achieve something akin to the shape you want.</p>
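A minimal Python sketch of this suggestion (my own code; the mapping from a control parameter $p\in(0,1)$ to the Beta parameters below is an arbitrary choice, not something the answer prescribes):

```python
import random

def skewed_value(p, lo=5.0, hi=17.0):
    """Sample from [lo, hi] with the hump controlled by p in (0, 1).

    p near 1 pushes mass toward hi, p near 0 toward lo.  The formula
    turning p into Beta(alpha, beta) parameters is one arbitrary choice.
    """
    strength = 6.0                      # larger -> more sharply peaked
    alpha = 1.0 + strength * p          # Beta(alpha, beta) lives on [0, 1]
    beta = 1.0 + strength * (1.0 - p)
    return lo + (hi - lo) * random.betavariate(alpha, beta)

random.seed(0)
high = [skewed_value(0.9) for _ in range(10_000)]  # hump near 17
low = [skewed_value(0.1) for _ in range(10_000)]   # hump near 5
```

With $p=0.9$ the sample mean lands well above the midpoint, with $p=0.1$ well below it; round the result if integer outputs in $\{5,\dots,17\}$ are wanted.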
1,499,822
<p>So we were shown this problem from our first functional analysis lecture and I was wondering if someone could help or give a hint:</p> <blockquote> <p>Show that the unit sphere $\mathbb{S}^{n-1} := \{ x \in \mathbb{R}^n: \| x \| = 1 \}$ is a complete metric space equipped with $d(x,y):= \arccos \langle x,y \rangle_{\mathbb{R}^n}$ where $\langle x,y \rangle_{\mathbb{R}^n}$ denotes the standard dot product. </p> </blockquote> <p>We're having no trouble showing positivity and symmetry but could use help showing completeness and the triangle inequality.</p> <p>Kind regards</p> <p>Edit:</p> <p>So with MatiasHeikkilä's help, what's left to show is that $\theta_{x,y} \leq \theta_{x,z} + \theta_{y,z}$ if I'm not mistaken. Do I need to make a case-by-case proof to show this inequality or is there an easier way? I would argue something along the lines of: "$z$ lies between $x$ and $y$ implies the equality", "$z$ does not lie on the (smallest) path between $x$ and $y$ but $z$ lies on the half-sphere with $x$ and $y$ on it implies the inequality (since $\theta_{x,y} &lt; \theta_{x,z}$) and lastly $z$ does neither lie on the (smallest) path between $x$ and $y$ nor does $z$ lie on the half-sphere implies the inequality (since $\theta_{x,z} + \theta_{y,z} = 2 \pi - \theta_{x,y} \geq \pi $)</p>
PhoemueX
151,552
<p>Let us first show the triangle inequality for $d$, i.e. $d\left(x,z\right)\leq d\left(x,y\right)+d\left(y,z\right)$. Since we have $\left\langle Ux,Uy\right\rangle =\left\langle x,y\right\rangle $ for every orthogonal matrix, by choosing $U$ with $Uy=e_{1}=\left(1,0,\dots,0\right)$, we can assume $y=e_{1}$ (in short, we used that $d$ is invariant under orthogonal transformations). Since the cosine is strictly decreasing on $\left[0,\pi\right]$ (and since $\arccos$ takes values in $\left[0,\pi\right]$), the desired inequality is equivalent to \begin{align*} \left\langle x,z\right\rangle &amp; \overset{!}{\geq}\cos\left(\arccos\left\langle x,e_{1}\right\rangle +\arccos\left\langle e_{1},z\right\rangle \right)\\ &amp; =\cos\left(\arccos x_{1}+\arccos z_{1}\right)\\ \left(\text{by trigonometric formulas}\right) &amp; =\cos\left(\arccos x_{1}\right)\cos\left(\arccos z_{1}\right)-\sin\left(\arccos x_{1}\right)\sin\left(\arccos z_{1}\right)\\ \left(\text{since }\sin\geq0\text{ and hence }\sin=\sqrt{1-\cos^{2}}\text{ on }\left[0,\pi\right]\right) &amp; =x_{1}z_{1}-\sqrt{1-\cos^{2}\left(\arccos x_{1}\right)}\sqrt{1-\cos^{2}\left(\arccos z_{1}\right)}\\ &amp; =x_{1}z_{1}-\sqrt{1-x_{1}^{2}}\sqrt{1-z_{1}^{2}}. \end{align*} By rearranging, we see that this inequality is equivalent to \begin{align*} -\sum_{i=2}^{n}x_{i}z_{i} &amp; =x_{1}z_{1}-\left\langle x,z\right\rangle \\ &amp; \overset{!}{\leq}\sqrt{1-x_{1}^{2}}\sqrt{1-z_{1}^{2}}\\ \left(\text{since }1=\left|x\right|^{2}=\sum_{i=1}^{n}x_{i}^{2}\right) &amp; =\sqrt{\sum_{i=2}^{n}x_{i}^{2}}\sqrt{\sum_{i=2}^{n}z_{i}^{2}}. \end{align*} But by Cauchy Schwarz, we have $$ -\sum_{i=2}^{n}x_{i}z_{i}\leq\left|\sum_{i=2}^{n}x_{i}z_{i}\right|\leq\sqrt{\sum_{i=2}^{n}x_{i}^{2}}\sqrt{\sum_{i=2}^{n}z_{i}^{2}}, $$ so that the inequality above holds. We have thus verified the triangle inequality. 
By what you have already shown, $d$ is a metric.</p> <p>For completeness, I leave it to you to verify that $$ \Phi:\left(S^{n-1},\mathcal{T}\right)\to\left(S^{n-1},d\right),x\mapsto x $$ is continuous, where $\mathcal{T}$ denotes the usual topology. Thus, $\left(S^{n-1},d\right)$ is compact. Now, it is well known that every compact metric space is complete, so that we are done.</p>
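Not part of the proof, but a cheap numerical sanity check of the triangle inequality (my own sketch, in dimension $3$, clamping the inner product into $[-1,1]$ to guard against rounding):

```python
import math
import random

def d(x, y):
    """Great-circle distance arccos<x, y> for unit vectors x, y."""
    dot = sum(a * b for a, b in zip(x, y))
    return math.acos(max(-1.0, min(1.0, dot)))  # clamp for rounding safety

def random_unit(n=3):
    """Uniform random point on the unit sphere via normalized Gaussians."""
    v = [random.gauss(0, 1) for _ in range(n)]
    norm = math.sqrt(sum(a * a for a in v))
    return [a / norm for a in v]

random.seed(1)
violations = 0
for _ in range(10_000):
    x, y, z = random_unit(), random_unit(), random_unit()
    if d(x, z) > d(x, y) + d(y, z) + 1e-9:
        violations += 1
print(violations)  # the proof says this must be 0
```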
41,450
<p>I want to perform a Pearson's $\chi^2$ test to analyse contingency tables; but because I have small numbers, it is recommended to perform instead what is called a Fisher's Exact Test.</p> <p>This requires generating all integer matrices with the same column and row totals as the one given, and compute and sum all p-values from the corresponding distribution which are lower than the one from the data.</p> <p>See <a href="https://en.wikipedia.org/wiki/Fisher%27s_exact_test" rel="nofollow noreferrer">Wikipedia</a> and <a href="http://mathworld.wolfram.com/FishersExactTest.html" rel="nofollow noreferrer">MathWorld</a> for relevant context.</p> <p>Apparently <em>R</em> offers that, but couldn't find it in <em>Mathematica</em>, and after extensive research couldn't find an implementation around, so I did my own.</p> <p>The examples in the links are with 2x2 matrices, but I did a n x m implementation and, at least for the <em>MathWorld</em> example, numbers match.</p> <p>I have one question: The code I wrote uses <code>Reduce</code>; although it seemed to me generating all matrices was more a combinatorial problem. I pondered using <a href="http://reference.wolfram.com/mathematica/ref/FrobeniusSolve.html" rel="nofollow noreferrer"><code>FrobeniusSolve</code></a>, but still seemed far from what's needed. 
Am I missing something or is <code>Reduce</code> the way to go?</p> <p>The essential part of the code, which I made available in github <a href="https://github.com/carlosayam/fisher-exact/blob/master/package/fisher_exact.m" rel="nofollow noreferrer">here</a>, is that for a matrix like</p> <p>$$ \left( \begin{array}{ccc} 1 &amp; 0 &amp; 2 \\ 0 &amp; 1 &amp; 2 \\ \end{array} \right)$$</p> <p>with row sums 3, 3 and column sums 1, 1, 4, it creates a system of linear equations like:</p> <p>$$ \begin{array}{c} x_{1,1}+x_{1,2}+x_{1,3}=3 \\ x_{2,1}+x_{2,2}+x_{2,3}=3 \\ \end{array} $$ $$ \begin{array}{c} x_{1,1}+x_{2,1}=1 \\ x_{1,2}+x_{2,2}=1 \\ x_{1,3}+x_{2,3}=4 \\ \end{array} $$</p> <p>subject to the constraints $ x_{1,1}\geq 0$, $x_{1,2}\geq 0$, $x_{1,3}\geq 0$, $x_{2,1}\geq 0$, $x_{2,2}\geq 0$, $ x_{2,3}\geq 0 $ and feeds this into <code>Reduce</code> to solve this system over the <code>Integers</code>. <code>Reduce</code> returns all the solutions, which is what we need to compute Fisher's exact p-value.</p> <p><em>Note:</em> I just found <a href="https://mathematica.stackexchange.com/questions/26174/recommended-settings-for-git-when-using-with-mathematica-projects">this advice</a> on how to use github better for Mathematica projects. For the time being, I leave it as-is. Hope it is easy to use and test.</p> <p>You can test the above-mentioned code like</p> <pre><code>FisherExact[{{1, 0, 2}, {0, 0, 2}, {2, 1, 0}, {0, 2, 1}}] </code></pre> <p>It has some <em>debugging</em> via <code>Print</code> which shows all the <em>generated</em> matrices and their p-value. The last part (use of <code>Select</code>) to process all found matrices didn't seem very <em>Mathematica</em> to me, but it was late and I was tired - feedback is welcome.</p> <p>I would give my tick to the answer with more votes after a couple of days if anyone bothers to write me two lines :)</p> <p>Thanks in advance!</p>
halirutan
187
<p>As you have found out yourself, the hard part is generating all possible matrices that have a specific row- and column-sum. I don't think that using <code>Reduce</code> for this is a good way to go if you are targeting an algorithm that always works. Consider this simple example with your approach</p> <blockquote> <p><img src="https://i.stack.imgur.com/QwzKJ.png" alt="Mathematica graphics" /></p> </blockquote> <p>Clearly, the p-value should be 1. in this example. I have tried several random matrices and sometimes, <code>Reduce</code> doesn't give the result you are expecting in your setting.</p> <p>This leads me to point number two: How could one tackle the problem of generating all matrices? The naive solution of generating all matrices of a certain dimension and filter out the ones we need is clearly too complex and will blow up. I haven't searched the net carefully, but on a first glance, I did not come over some clever implementation.</p> <p>Although, what we have is the information about the col- and row-sums. Therefore, one way is to build all possible matrices row by row. We start with the first row: Its row-sum gives us the predicate if it is a correct row and the col-sums gives us boundaries for the iteration. Example, if we know the row-sum is 10, and the col-sums are say <code>{5,4,6}</code> then we could go through all possibilities</p> <pre><code>Do[ (* check if sum of {i1,i2,i3} is 10 and return valid row *), {i1, 0, 5}, {i2, 0, 4}, {i3, 0, 6} ] </code></pre> <p>If we found a valid row <code>r1</code>, we know that the possibilies for <code>r2</code> are not as many, because since the columns need to add up too, our <em>current column sum</em> changed to <code>colSum - r1</code>. 
We can do this step by step until we processed all rows and in the end, our <em>current column sum</em> needs to be 0.</p> <p>Since we don't know upfront which dimension and what column and row sums we get, we need to create the loop dynamically:</p> <pre><code>allRows[sum_Integer, colSums_List] := Module[{createRows, $e, $iterators, $all}, If[Min[colSums] &lt; 0, Return[{}]]; $iterators = Table[{$e[i], 0, Min[colSums[[i]], sum]}, {i, Length[colSums]}]; $all = Last@Reap@Do[With[{col = First /@ $iterators}, If[Total[col] === sum, Sow[col] ]], Evaluate[Sequence @@ $iterators]]; If[$all === {}, {}, First[$all] ] ]; </code></pre> <p>Here are all valid rows of the example above</p> <blockquote> <p><img src="https://i.stack.imgur.com/hVdJk.png" alt="Mathematica graphics" /></p> </blockquote> <p>Building now all possible matrices (in a nested list) can be done by recursively calling <code>allRows</code>:</p> <pre><code>build[{{rowSum1_, rest___}, colSums_}, result_] := Module[{$rows}, $rows = allRows[rowSum1, colSums]; build[{{rest}, colSums - #}, {result, #}] &amp; /@ $rows; ]; build[{{}, colSums_}, result_] := If[Total[colSums] === 0, Sow[result]]; </code></pre> <p>What is left are some definitions you have yourself, and creating the <code>fisherTest</code> that uses the created possible matrices to sum up all p-values:</p> <pre><code>sumMatrix[mat_] := Total /@ {Transpose[mat], mat}; getP[mat_] := Module[{$sumMat = sumMatrix[mat], $tot}, $tot = Total[First[$sumMat]]; Times @@ (Factorial /@ Flatten[$sumMat]) /(Factorial[$tot] (Times @@ (Factorial /@ Flatten[mat]))) // N ] fisherTest[mat_] := Module[{$poss, $pCut, $pVals}, $poss = First@Last@Reap[build[sumMatrix[mat], {}]]; $pCut = getP[mat]; $pVals = With[{m = ArrayReshape[Flatten[#], Dimensions[mat]]}, getP[m]] &amp; /@ $poss; Total[Select[$pVals, # &lt;= $pCut &amp;]] ] </code></pre> <p>My <code>fisherTest</code> will be slower than yours, but if the possibilities are not too large, it will always return the correct result.</p> <h2>Advanced</h2> <p>For the truly brave, I will add a version of <code>allRows</code> that uses <em>in-time compiling</em>, which means that it dynamically creates a compiled function for a certain vector length <code>n</code> which can be called to find all valid rows. This compiling is only done once, so on the next run the function will be at hand and there is no compile-time.</p> <p>With this function, I'm getting similar timings to the <code>Reduce</code> implementation:</p> <pre><code>ClearAll[allRows]; allRows[n_] := allRows[n] = Block[{iters, body, cs, e, sum, vec}, iters = Table[With[{i = i}, {Unique[e], 0, Min[sum, Compile`GetElement[cs, i]]}], {i, n}]; vec = List @@ iters[[All, 1]]; (Compile[{{sum, _Integer, 0}, {cs, _Integer, 1}}, Module[{result = Internal`Bag[Most[{0}], 1]}, Do[If[#2 == sum, Internal`StuffBag[result, #1, 1]], ##3 ]; Internal`BagPart[result, All] ]] &amp;)[vec, Total[vec], Sequence @@ iters] ]; allRows[sum_Integer, colSums_List] := Partition[allRows[Length[colSums]][sum, colSums], Length[colSums]] </code></pre> <p>Quick test</p> <pre><code>m = RandomInteger[10, {3, 3}]; FisherExact[m] // AbsoluteTiming During evaluation of {cutoff,7.67668*10^-6} (* {0.319039, 0.00172301} *) fisherTest[m] // AbsoluteTiming (* {1.12482, 0.00172301} *) </code></pre>
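For readers who want the same enumerate-and-score strategy outside Mathematica, here is a hedged Python sketch of it (my own port, with function names of my choosing; it uses exact rational arithmetic via `Fraction` instead of floating-point p-values):

```python
from fractions import Fraction
from math import factorial

def tables(row_sums, col_sums):
    """Yield every nonnegative integer matrix with the given margins."""
    if len(row_sums) == 1:
        if sum(col_sums) == row_sums[0]:
            yield [list(col_sums)]
        return
    def rows(total, caps):
        # all ways to write `total` as a row bounded above by `caps`
        if not caps:
            if total == 0:
                yield []
            return
        for v in range(min(total, caps[0]) + 1):
            for rest in rows(total - v, caps[1:]):
                yield [v] + rest
    for first in rows(row_sums[0], col_sums):
        remaining = [c - v for c, v in zip(col_sums, first)]
        for rest in tables(row_sums[1:], remaining):
            yield [first] + rest

def table_prob(m):
    """Exact probability of table m given its margins (multivariate hypergeometric)."""
    rs = [sum(r) for r in m]
    cs = [sum(c) for c in zip(*m)]
    num = 1
    for s in rs + cs:
        num *= factorial(s)
    den = factorial(sum(rs))
    for row in m:
        for x in row:
            den *= factorial(x)
    return Fraction(num, den)

def fisher_exact_p(m):
    """Fisher exact p-value: total probability of all tables at most as likely."""
    p_obs = table_prob(m)
    rs = [sum(r) for r in m]
    cs = [sum(c) for c in zip(*m)]
    return sum(table_prob(t) for t in tables(rs, cs) if table_prob(t) <= p_obs)
```

As a sanity check, for the classic 2x2 table [[3,1],[1,3]] (margins 4,4 / 4,4) this returns the known two-sided Fisher p-value $34/70 = 17/35 \approx 0.486$.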
2,897,904
<p><a href="https://i.stack.imgur.com/ZwR5o.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZwR5o.png" alt="enter image description here" /></a></p> <h2>Attempt:</h2> <p>Let <span class="math-container">$x_{INi}$</span> be the number of people who travel from Ithaca to Newark in class <span class="math-container">$i$</span> where <span class="math-container">$i=1,2,3$</span> where each number represents class Y,B, and M, respectively, by simplicity. Similarly, write <span class="math-container">$x_{NBi}$</span> and <span class="math-container">$x_{IBi}$</span>. Our goal is to maximize</p> <p><span class="math-container">$$ z = 300x_{IN1}+220x_{IN2}+100x_{IN3} + 160 x_{NB1} + 130 x_{NB2} + 80 x_{NB3} + 360x_{IB1} + 280 x_{IB2} + 140 x_{IB3} $$</span></p> <p>Now, let us focus on the constraints. First of all,</p> <p><span class="math-container">$$ \sum_{i=1}^3 (x_{INi}+x_{NBi}+x_{IBi}) \leq 30 $$</span></p> <p>moreover, we must have</p> <p><span class="math-container">$$ 4x_{IN1}+8 x_{IN2} + 22 x_{IN3} \leq 34 $$</span></p> <p><span class="math-container">$$ 8 x_{NB1} + 13 x_{NB2} + 20 x_{NB3} \leq 41 $$</span></p> <p><span class="math-container">$$ 3 x_{IB1} + 10 x_{IB2} + 18 x_{IB3} \leq 31 $$</span></p> <p>and finally, all the <span class="math-container">$x's$</span> must be positive.</p> <p>Is this a correct formulation?</p>
Community
-1
<p>The following constraints must be respected.</p> <blockquote> <p>The plane cannot seat more than 30 passengers.</p> </blockquote> <p>What you know is that:</p> <ol> <li>At first, the plane contains passengers that travel from Ithaca to Boston and Ithaca to Newark</li> <li>After landing in Newark, the plane will board passengers from Newark to Boston while seating passengers from Ithaca to Boston.</li> </ol> <p>Therefore, $X_{IN} + X_{IB} \le 30$ and $X_{IB} + X_{NB} \le 30$ are necessary constraints. Here, $X_{IN} = X_{IN}^Y + X_{IN}^M + X_{IN}^B$ (and similarly for $X_{IB}$, $X_{NB}$).</p> <blockquote> <p>Number of available tickets cannot exceed forecasted demand.</p> </blockquote> <p>As an example, let's take the constraints related to $X_{IN}$.</p> <p>$$ 0 \le X_{IN}^Y \le 4, \qquad 0 \le X_{IN}^B \le 8, \qquad 0 \le X_{IN}^M \le 22. $$</p> <p>As pointed out in the comments, it is better to explicitly write down each constraint separately. Indeed, $X_{IN} \le 4 + 8 + 22$ would not contain any information about the upper bound of each fare class.</p> <blockquote> <p>Maximize revenue.</p> </blockquote> <p>Similarly to what you already proposed, this gets translated into maximizing</p> <p>$$ Z := 300X_{IN}^Y + 220X_{IN}^B + 100X_{IN}^M + (\text{amounts related to other variables}) $$</p> <hr> <p>Your approach seems to mix 0/1 programming with linear programming, particularly in the last set of equations. How are they meant to be interpreted?</p> <hr> <h2>Problem encoding in MiniZinc</h2> <pre><code>var 0..4: a; var 0..8: b; var 0..22: c; var 0..8: d; var 0..13: e; var 0..20: f; var 0..3: g; var 0..10: h; var 0..18: i; var int: revenue = 300*a + 220*b + 100*c + 160*d + 130*e + 80*f + 360*g + 280*h + 140*i; % Ithaca-&gt;Newark leg carries IN (a,b,c) and IB (g,h,i) passengers; % Newark-&gt;Boston leg carries NB (d,e,f) and IB (g,h,i) passengers. constraint a + b + c + g + h + i &lt;= 30; constraint d + e + f + g + h + i &lt;= 30; solve maximize revenue; </code></pre>
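As a cross-check of this formulation (my own addition, not part of the original answer), the instance is small enough to solve exactly in pure Python without an LP solver: enumerate every Ithaca-Boston allocation, then fill each leg greedily with the highest remaining fares, which is optimal per leg since seats only differ by price and demand cap. The leg structure follows the prose above: IN and IB share the first leg, NB and IB share the second.

```python
from itertools import product

# fares and demand caps per class (Y, B, M) for each itinerary
IN_FARES, IN_CAPS = (300, 220, 100), (4, 8, 22)   # Ithaca -> Newark
NB_FARES, NB_CAPS = (160, 130, 80), (8, 13, 20)   # Newark -> Boston
IB_FARES, IB_CAPS = (360, 280, 140), (3, 10, 18)  # Ithaca -> Boston
SEATS = 30

def greedy_leg(seats, fares, caps):
    """Best revenue from one leg's remaining seats: sell highest fares first."""
    revenue = 0
    for fare, cap in sorted(zip(fares, caps), reverse=True):
        take = min(cap, seats)
        revenue += fare * take
        seats -= take
    return revenue

best = 0
# Ithaca -> Boston passengers occupy a seat on BOTH legs, so enumerate them.
for ib in product(*(range(c + 1) for c in IB_CAPS)):
    used = sum(ib)
    if used > SEATS:
        continue
    rev = sum(f * x for f, x in zip(IB_FARES, ib))
    rev += greedy_leg(SEATS - used, IN_FARES, IN_CAPS)  # leg 1: IN + IB
    rev += greedy_leg(SEATS - used, NB_FARES, NB_CAPS)  # leg 2: NB + IB
    best = max(best, rev)
print(best)  # maximal revenue for this instance
```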
2,586,501
<p>I studied Ebbinghaus's logic textbook, which is done under ZF (sometimes ZFC) set theory, so the Gödel Completeness Theorem is valid in ZF(C) (in my opinion).</p> <p>When I studied Jech's set theory, it goes on to show that</p> <blockquote> <p>If ZF is consistent, then so is ZFC+GCH.</p> </blockquote> <p>Nonetheless, can we really assume that ZF is consistent? By the Gödel Completeness Theorem, if ZF is consistent, then it is satisfiable and so there is a <strong>set</strong> V collecting all sets in the universe of ZF, which seems a contradiction.</p> <p>Don't we immediately get a contradiction when we say “ZF is consistent”? </p> <p>How should I understand the assumption “ZF is consistent”?</p>
Alex Kruckman
7,062
<p>You wrote:</p> <blockquote> <p>Nonetheless, can we really assume that ZF is consistent? By the Gödel Completeness Theorem, if ZF is consistent, then it is satisfiable and so <strong>there is a set V collecting all sets in the universe of ZF</strong>, which seems a contradiction.</p> </blockquote> <p>I added the bold to highlight the key misunderstanding, which I think has been overlooked in the comments. </p> <p>If we assume ZF is consistent, then Gödel's Completeness Theorem gives us a model $\mathcal{M}$ of ZF. What is a model? $\mathcal{M}$ is just a set $M$ given together with a binary relation $E\subseteq M^2$ such that $(M,E)\models \text{ZF}$. It is <em>not</em> the case that $M$ contains "all sets in the universe of ZF" - as you know, there is no universal set. And we don't even know that $E$ has anything to do with the real $\in$ relation on the elements of $M$. $M$ is just a bag of stuff, and some of that stuff is related to other parts of that stuff by the $E$ relation, in such a way that all the axioms of ZF are satisfied. In fact, by the Löwenheim-Skolem theorem, if ZF is consistent, then it has a <em>countable</em> model. </p> <p>So I've addressed the "no universal set" concern from the outside: the model $\mathcal{M}$ produced by the completeness theorem doesn't contain all the sets in our ZF universe. But you might also have been concerned about "no universal set" from the inside. That is, how can the set $M$ contain within it a whole universe of a model of ZF, when ZF proves that there is no universal set?</p> <p>The answer is that while on the outside, we can see that $M$ is a set (maybe even a countable set), from the point of view of $\mathcal{M}$, our mini ZF universe, $M$ is not a set. That is, $\mathcal{M}$, being a model of ZF, satisfies the sentence $\lnot \exists x\forall y (y\in x)$ ("there is no set which contains every set"). This just means that there is no element $a$ of $M$ such that for every element $b$ of $M$, $(b,a)\in E$.
The sethood of $M$ in our big ZF universe has nothing to do with it.</p>
455,695
<p>Let $V$ be the vector space of all real valued continuous functions. Prove that the linear operator $f \mapsto \int_{0}^{x}f(t)\,dt$ has no eigenvalues.</p>
Alex Wertheim
73,817
<p>Hint: Suppose you have $\int_{0}^{x} f(t)\,dt = \lambda \cdot f(x)$. Differentiate both sides - you should easily be able to solve the resultant differential equation. Is your solution truly an eigenvector if it is nontrivial? </p>
2,618,813
<p>What is the image of the set $\{ z \in \mathbb{C} : z = x + iy,\ x \geq 0,\ y \geq 0 \}$ under the mapping $z \to z^2$?</p> <p>My answer: $f(z) = z^2 =(x+iy)^2 = x^2-y^2 +2ixy$,</p> <p>so I get $u=x^2-y^2$ and $v=2xy$. After that I am confused about how to find the image of the set.</p> <p>Please help me; any hints would be appreciated, or if you have time you can tell me the solution. I would be very thankful.</p>
Konstantin
509,087
<p><strong>Hint</strong>: At first I would suggest you sketch your set, call it $S$. That should be quite easy after the definition.</p> <blockquote class="spoiler"> <p> Well it is just the first quadrant.</p> </blockquote> <p>Then consider points in the complex plane, represented by their polar-coordinate representation, and figure out what the mapping $z \rightarrow z^2$ does to them.</p> <blockquote class="spoiler"> <p> Given $z$, this mapping squares the modulus of $z$ and doubles the angle from the positive real axis.</p> </blockquote> <p>And now you should be able to imagine all points resulting from $S$ under this mapping.</p> <blockquote class="spoiler"> <p> Namely, any point in the upper halfplane can be reached through this mapping. The real line is obviously included.</p> </blockquote>
265,801
<p>Consider a 12-sided fair die. What is the distribution of the number T of rolls required to roll a 1, a 2, a 3, and a 4?</p> <p>Taking inspiration from the Coupon Collector's Problem, I believe that the expected number of rolls to achieve the goal would be</p> <p>$$E[T] = \sum\limits_{i=0}^3 \frac{12}{4-i} = 25$$</p> <p>Similarly, the variance would be</p> <p>$$Var[T] = \sum\limits_{i=0}^3 \frac{1-\frac{4-i}{12}}{(\frac{4-i}{12})^2} = 180$$</p> <p>But applying Chebyshev here does not yield very useful bounds. My question is therefore, how would you compute, for example, $P(T=16)$ or $P(T&lt;30)$?</p> <p>Ideally this could be generalized to a set of k required numbers, not just 4 as in the example.</p>
Giles Adams
54,383
<p>I think it's the sum of four different geometric distributions.</p> <p>For example, you start off and you haven't rolled any of the numbers. The probability of rolling one of them is 4/12, so you start a geometric distribution with that parameter. Then you suddenly roll one of them. Now you need to roll one of the other three; each roll has a probability of 3/12 of getting one of them, so it's another geometric distribution.</p> <p>By this logic I think T ~ Geo(4/12) + Geo(3/12) + Geo(2/12) + Geo(1/12), which of course is probably some hideous distribution defined in terms of convolutions (or, you know, it could miraculously be nice and of closed form, I don't know).</p>
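Alternatively (my own addition, not part of the answer above), the distribution has a closed form via inclusion-exclusion rather than convolving geometrics: the probability that all four target faces appear within $n$ rolls is $F(n)=\sum_{j=0}^{4}(-1)^j\binom{4}{j}\left(\frac{12-j}{12}\right)^n$, so $P(T=n)=F(n)-F(n-1)$ and, e.g., $P(T&lt;30)=F(29)$. A sketch:

```python
from math import comb

def cdf(n, k=4, s=12):
    """P(all k target faces of an s-sided die appear within n rolls),
    by inclusion-exclusion over which target faces are still missing."""
    return sum((-1) ** j * comb(k, j) * ((s - j) / s) ** n for j in range(k + 1))

def pmf(n, k=4, s=12):
    """P(T = n): the collection is completed exactly on roll n."""
    return cdf(n, k, s) - cdf(n - 1, k, s)

print(pmf(16))   # P(T = 16)
print(cdf(29))   # P(T < 30)
```

The expectation recovered from this pmf matches the stage-by-stage value $12/4+12/3+12/2+12/1=25$ from the question.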
1,481,627
<p>I have been given homework in which the $2\times 2$ matrices of determinant $1$ are equipped with the subspace topology of $\mathbb{R}^4$. However, $\mathbb{R}^4$ is a space of $4$-tuples, while $2\times 2$ matrices are not $n$-tuples. How do I get the open sets of this topology?</p> <p>How is this set of matrices a subset of $\mathbb{R}^4$?</p>
Thomas Andrews
7,933
<p>If $A=(a_{ij})$ and $B=(b_{ij})$ then the inherited metric will be: $$d(A,B)=\sqrt{\sum_{i,j} (a_{ij}-b_{ij})^2}$$</p> <p>no matter whether you associate $\begin{pmatrix}a&amp;b\\c&amp;d\end{pmatrix}$ with $(a,b,c,d)$, $(a,c,b,d)$ or any other re-ordering of the values.</p>
357,847
<p>Let <span class="math-container">$n\ge 3$</span> be an integer. I would like to know if the following property <span class="math-container">$(P_n)$</span> holds: for all real numbers <span class="math-container">$a_i$</span> such that <span class="math-container">$\sum\limits_{i=1}^na_i\geq0 $</span> and <span class="math-container">$\sum\limits_{1\leq i&lt;j&lt;k\leq n}a_ia_ja_k\geq0$</span>, we have <span class="math-container">$$n^2\sum_{i=1}^na_i^3\geq\left(\sum_{i=1}^na_i\right)^3.$$</span> I have a proof that <span class="math-container">$(P_n)$</span> holds for <span class="math-container">$3\leq n\leq8$</span>, but for <span class="math-container">$n\geq9$</span> my method does not work and I did not see any counterexample for <span class="math-container">$n\ge 9$</span>.</p> <p>Is the inequality <span class="math-container">$(P_n)$</span> true for all <span class="math-container">$n$</span>? Or otherwise, what is the largest value of <span class="math-container">$n$</span> for which it holds?</p> <p>Thank you! </p>
Conrad
133,811
<p>A sort of (partial) explanation for what happens:</p> <p>Let <span class="math-container">$N \ge 3$</span> be the degree and let <span class="math-container">$A=\sum{a_k}, B=\sum_{j&lt;k}a_ja_k, C=\sum_{j&lt;k&lt;m}a_ja_ka_m $</span>. We are given that <span class="math-container">$A \ge 0, C \ge 0$</span> and we need to prove that <span class="math-container">$N^2(A^3-3AB+3C) \ge A^3$</span>. Now we can assume wlog <span class="math-container">$A =1$</span>: if <span class="math-container">$A=0$</span> the inequality is obvious, and otherwise we can divide by <span class="math-container">$A&gt;0$</span>, consider <span class="math-container">$c_j=\frac{a_j}{A}$</span>, and prove the inequality for them.</p> <p>So we need to prove <span class="math-container">$1-3B+3C \ge \frac{1}{N^2}$</span> under the hypothesis as above (the polynomial <span class="math-container">$X^N-X^{N-1}+BX^{N-2}-CX^{N-3}+...$</span> has real roots and <span class="math-container">$C \ge 0$</span>).</p> <p>Then if we let <span class="math-container">$b_j=a_j-\frac{1}{N}$</span> and <span class="math-container">$A_1,B_1,C_1$</span> be the corresponding symmetric polynomials in <span class="math-container">$b_j$</span>, we have <span class="math-container">$A_1=0, B_1=B-\frac{N-1}{2N}=B-\frac {1}{2}+\frac{1}{2N}, C_1=C-B+\frac{2B}{N}+\frac{1}{3}-\frac{1}{N}+\frac{2}{3N^2}$</span>, so the inequality becomes (<span class="math-container">$C-B=C_1+...$</span> from the last equality)</p> <p><span class="math-container">$1+3C_1-\frac{6B}{N}-1+\frac{3}{N}-\frac{2}{N^2}\ge \frac{1}{N^2}$</span> and since </p> <p><span class="math-container">$\frac{6B}{N}=\frac{6B_1}{N}+\frac{3}{N}-\frac{3}{N^2}$</span> all reduces to </p> <p><span class="math-container">$3C_1-\frac{6B_1}{N} \ge 0$</span></p> <p>But now the polynomial <span class="math-container">$X^N+B_1X^{N-2}-C_1X^{N-3}+...$</span> has real roots too, as they are just the <span class="math-container">$b_k$</span>, and hence <span class="math-container">$B_1 \le 0$</span>; writing <span class="math-container">$B_1=-B_2$</span> with <span class="math-container">$B_2 \ge 0$</span>, the inequality reduces to <span class="math-container">$2B_2+NC_1 \ge 0$</span>, and we know that <span class="math-container">$C_1=C+\frac{N-2}{N}B_2-\frac{(N-1)(N-2)}{6N^2}$</span></p> <p>So for a counterexample we would need <span class="math-container">$C_1$</span> sufficiently negative while keeping <span class="math-container">$C \ge 0$</span>. </p> <p>By differentiating <span class="math-container">$N-3$</span> times and using Gauss-Lucas/Rolle, the cubic that results (which is in standard form) has real roots, so <span class="math-container">$4(-p)^3 \ge 27q^2$</span>; from this we get some constraints on <span class="math-container">$B_2, -C_1$</span> which are enough to give the result for <span class="math-container">$N \le 6$</span> with some crude approximations. </p> <p>Then if we try easy counterexamples for the <span class="math-container">$b_k$</span> of the type <span class="math-container">$N-1$</span> copies of <span class="math-container">$a$</span> and one <span class="math-container">$-(N-1)a$</span>, we can solve at <span class="math-container">$N=10$</span>: take <span class="math-container">$a &gt; \frac{3}{80}$</span> close enough to <span class="math-container">$\frac{3}{80}$</span> to keep the inequality <span class="math-container">$C&gt;0$</span> (which is satisfied at <span class="math-container">$a=\frac{3}{80}$</span>, that value giving equality in the OP inequality: normalizing to integers we get <span class="math-container">$11$</span> taken <span class="math-container">$9$</span> times and <span class="math-container">$-19$</span> taken once, and it is easy to see that the constraints are good and <span class="math-container">$S_1=80, S_3=5120$</span> and obviously <span class="math-container">$100\cdot 5120=80^3$</span>).</p> <p>So as noted in the comments, taking <span class="math-container">$9$</span> of <span class="math-container">$111$</span> and one of <span class="math-container">$-199$</span> gives a counterexample with a positive sum of cubes (corresponding to <span class="math-container">$a=\frac{3}{80}+\frac{1}{800}$</span>
normalized to integers)</p>
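The two explicit integer configurations at the end are easy to verify directly; here is a quick sketch (values copied from the answer, $N=10$):

```python
# Equality case: nine copies of 11 and one -19 (a = 3/80, scaled to integers).
eq = [11] * 9 + [-19]
N = len(eq)
S1 = sum(eq)                      # power sum of degree 1
S3 = sum(v ** 3 for v in eq)      # power sum of degree 3
print(S1, S3, N ** 2 * S3 == S1 ** 3)      # 80 5120 True

# Counterexample: nine copies of 111 and one -199 (a = 3/80 + 1/800, scaled).
cx = [111] * 9 + [-199]
T1 = sum(cx)
T3 = sum(v ** 3 for v in cx)               # positive, yet N^2*T3 < T1^3
print(T1, T3 > 0, N ** 2 * T3 < T1 ** 3)   # 800 True True
```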
2,278,087
<p>The polynomial is $p(z)=\sum^n_{k=0} a_kz^k$. And I want to prove the following inequality on the unit disk$$\max_{B_1(0)}|p(z)|\geq |a_n|+|a_0|$$</p> <p>By the maximum modulus principle, the maximum must be on the unit circle and greater than $|a_0|$ by considering $p(0)$. However, I cannot make further conclusions from this, since any attempt of using the triangle inequality will result in the opposite direction of the wanted result.</p> <p>I have also seen a <a href="https://math.stackexchange.com/questions/671990/">similar problem</a>, although I can conclude $\max_{|z|=1}|p(z)|$is greater than any of the two on RHS, but since there is no relation of $\max_{k\in\{0,\ldots,n\}}|a_k|\geq|a_0|+|a_k|$, a tighter bound is needed.</p> <p>I also tried expanding it into trig functions, and consider the roots, but it didn't work as expected.</p>
mrf
19,440
<p>This was trickier than I first thought. Perhaps there is a simpler solution, but here is one approach: Let $\omega = \exp(2\pi i/n)$. Then $$ \sum_{j=0}^{n-1} {(\omega^k)}^j = \begin{cases} n, &amp; k \in \{ 0, n \} \\ 0, &amp; 1 \le k \le n-1. \end{cases} $$ (This is a well-known property of roots of unity, or follows from computing the geometric sum if you prefer.)</p> <p>Hence \begin{align} \frac1n \sum_{j=0}^{n-1} p(\omega^j z) &amp;= \frac1n \sum_{j=0}^{n-1} \sum_{k=0}^{n} a_k {(\omega^j)}^k z^k \\ &amp;= \frac1n \sum_{k=0}^{n} a_k z^k \sum_{j=0}^{n-1} {(\omega^k)}^j = a_0 + a_nz^n. \end{align}</p> <p>Consequently, we get \begin{align} |a_0| + |a_n| &amp;= \max_{|z|=1} |a_0 + a_nz^n | \\ &amp;= \max_{|z|=1} \Big| \frac1n \sum_{j=0}^{n-1} p(\omega^j z) \Big| \\ &amp;\le \frac1n \sum_{j=0}^{n-1} \max_{|z|=1} \big| p(\omega^j z) \big| = \max_{|z|=1} |p(z)|. \end{align}</p>
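The root-of-unity averaging step at the heart of this argument is easy to sanity-check numerically. A small sketch with made-up coefficients (the polynomial and test point are assumptions, not from the question):

```python
import cmath

coeffs = [2 - 1j, 0.5, -3.0, 1 + 2j]   # a_0, ..., a_n with n = 3 (made up)
n = len(coeffs) - 1

def f(z):
    return sum(a * z ** k for k, a in enumerate(coeffs))

w = cmath.exp(2j * cmath.pi / n)       # primitive n-th root of unity
z = 0.7 - 0.4j                         # arbitrary test point
avg = sum(f(w ** j * z) for j in range(n)) / n
# averaging over rotated copies kills every power of z except z^0 and z^n
print(abs(avg - (coeffs[0] + coeffs[-1] * z ** n)) < 1e-12)   # True
```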
406,966
<p>Define a graph with vertex set <span class="math-container">$\mathbb{R}^2$</span> and connect two vertices if they are unit distance apart. The famous <a href="http://en.wikipedia.org/wiki/Hadwiger%E2%80%93Nelson_problem" rel="nofollow noreferrer">Hadwiger-Nelson problem</a> is to determine the <a href="http://en.wikipedia.org/wiki/Graph_coloring#Vertex_coloring" rel="nofollow noreferrer">chromatic number</a> <span class="math-container">$\chi$</span> of this graph. For the problem as stated (there are many variations), the best known bounds are <span class="math-container">$4 \leq \chi \leq 7$</span>. The lower bound comes from the existence of a clever subgraph on just seven vertices that is readily seen to require four colors. The upper bound comes from a straightforward tiling of the plane by monochromatic hexagons that admits a proper <span class="math-container">$7$</span>-coloring.</p> <p>I like to show this problem to young students because they are always fascinated by the fact that we can cover "everything" that is known about it in just a few minutes. What are some other problems of this type?</p> <blockquote> <p>What are some famous problems for which the best known results are fairly obvious or elementary?</p> </blockquote> <p>Update: Aubrey de Grey has recently improved the <a href="https://arxiv.org/abs/1804.02385" rel="nofollow noreferrer">lower bound</a> to 5 using an elaborate subgraph, so we now know slightly more than the elementary.</p>
Blue
409
<p>This isn't "famous", but it's long been of interest to me:</p> <blockquote> <p>What is the Pythagorean Theorem for right-corner simplices in hyperbolic $d$-space?</p> </blockquote> <ul> <li>For $d=2$, we have ... $$\cosh a \cosh b = \cosh c$$ for a triangle with legs of length $a$, $b$, and hypotenuse of length $c$.</li> <li>For $d=3$, we have ... $$\cos\frac{A}{2}\cos\frac{B}{2}\cos\frac{C}{2} - \sin\frac{A}{2}\sin\frac{B}{2}\sin\frac{C}{2} = \cos\frac{D}{2}$$ for a tetrahedron with right-triangular leg-faces of area $A$, $B$, $C$, and hypotenuse-face of area $D$.</li> <li>For $d\geq 4$, we have ... $$\text{No Idea. (So far as I know.)}$$ </li> </ul> <p>Three(!) years ago, I posted <a href="https://mathoverflow.net/questions/32377/pythagorean-theorem-for-right-corner-hyperbolic-simplices">some thoughts about the $d=4$ case to MathOverflow</a>. (The series stuff is probably mis-guided.) I've made no progress since then.</p>
3,424,686
<p>I have a problem, I am uncertain of how to solve:</p> <p><a href="https://i.stack.imgur.com/DALSn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DALSn.png" alt="enter image description here"></a></p> <p>Is it enough to show:</p> <p><span class="math-container">$L_1$</span>: <span class="math-container">$g(f(v_1)+f(v_2)) =$</span> <span class="math-container">$g(fv_1)+g(fv_2)$</span></p> <p><span class="math-container">$L_2$</span>: <span class="math-container">$k*g(f(v)) = $</span> <span class="math-container">$g(kf(v))$</span></p> <p>And if so, should I find the mapping matrices for <span class="math-container">$ g$</span> and <span class="math-container">$f$</span> or is there a smarter way?</p>
Community
-1
<p>Are you happy that <span class="math-container">$f(v_1)=4v_1-2v_3$</span> and <span class="math-container">$f(v_2)=-2v_1+v_3$</span> and <span class="math-container">$f(v_3)=2v_1+v_2-v_3$</span>?</p> <p>Good. So now consider <span class="math-container">$f(av_1+bv_2+cv_3)$</span>. By linearity, this will be: <span class="math-container">$$a(4v_1-2v_3)+b(-2v_1+v_3)+c(2v_1+v_2-v_3)$$</span></p> <p>Good. So now you have got to line -4 of my previous answer. </p> <p><span class="math-container">$f(av_1+bv_2+cv_3)=(4a-2b+2c,c,-2a+b-c)$</span>.</p> <p>We now have to do this process again for <span class="math-container">$g$</span>. You should get <span class="math-container">$(a-b/2+c/2,c,6a-3b+3c)$</span></p> <p>We now require <span class="math-container">$g(4a-2b+2c,c,-2a+b-c)$</span>. Let's do <span class="math-container">$g((4a-2b+2c)v_1)$</span> first. It is <span class="math-container">$$(2a-b+c)v_1+(4a-2b+2c)v_3$$</span></p> <p>Now for <span class="math-container">$g(cv_2)$</span>. Easy! It's just <span class="math-container">$cv_2$</span>.</p> <p>Finally you need to work out <span class="math-container">$g((-2a+b-c)v_3)$</span>.</p> <p>Now we must equate <span class="math-container">$$(a-b/2+c/2,c,6a-3b+3c)=(a,b,c)$$</span> since we are looking for a fixed vector.</p> <p>For the first coefficient we have <span class="math-container">$a-b/2+c/2=a$</span> and therefore <span class="math-container">$b=c$</span>. This also deals with the coefficients of <span class="math-container">$v_2$</span>. </p> <p>For the final coefficient we have <span class="math-container">$6a-3b+3c=c$</span> and this gives <span class="math-container">$c=6a$</span>.</p>
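The answer equates the coordinate vector $(a-b/2+c/2,\,c,\,6a-3b+3c)$ with $(a,b,c)$ and concludes $b=c$, $c=6a$; taking $a=1$ gives the candidate fixed vector $(1,6,6)$. A quick numeric sketch confirms this (the coordinate map below is copied from the answer and treated as an assumption):

```python
def g_coords(a, b, c):
    # coordinate formula that gets equated to (a, b, c) in the answer
    return (a - b / 2 + c / 2, c, 6 * a - 3 * b + 3 * c)

v = (1, 6, 6)             # b = c and c = 6a with a = 1
print(g_coords(*v) == v)  # True
```

Any scalar multiple, e.g. $(2,12,12)$, is of course fixed as well, since the fixed vectors form a one-dimensional subspace.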
4,167,602
<p>Is the probability <span class="math-container">$\Pr \left[ {A &lt; {R_3} \cup B &lt; {R_2}} \right]$</span>, with the information that <span class="math-container">${R_3} &lt; {R_2}$</span>, logically equivalent to <span class="math-container">$\Pr \left[ {\min \left[ {A,B} \right] &lt; {R_3}} \right]$</span>, or should the statement be <span class="math-container">$\Pr \left[ {\max \left[ {A,B} \right] &lt; {R_3}} \right]$</span>?</p> <p>In case my thinking is wrong, is there any way to construct something like this:</p> <p><span class="math-container">$\begin{gathered} \Pr \left[ {\max \left[ {A,B} \right] \lessgtr {\text{some}}\,{\text{function}}\,{\text{of}}\left( {{R_2},{R_3}} \right)} \right] \hfill \\ \Pr \left[ {\min \left[ {A,B} \right] \lessgtr {\text{some}}\,{\text{function}}\,{\text{of}}\left( {{R_2},{R_3}} \right)} \right] \hfill \\ \end{gathered}$</span></p> <p>Note that <span class="math-container">$R_3$</span> and <span class="math-container">$R_2$</span> are two positive numbers and <span class="math-container">$A,B$</span> are non-negative random variables.</p>
quasi
400,434
<p>As an example, if <span class="math-container">$R_3=1,R_2=2$</span>, and <span class="math-container">$A,B$</span> are random variables with the constraints <span class="math-container">$A\ge 1$</span> and <span class="math-container">$1\le B &lt; 2$</span>, then <span class="math-container">$$ \Pr \left[ {A &lt; {R_3} \cup B &lt; {R_2}} \right]=1 $$</span> but <span class="math-container">$$ \Pr \left[ {\min \left[ {A,B} \right] &lt; {R_3}} \right]=0 $$</span> and <span class="math-container">$$ \Pr \left[ {\max \left[ {A,B} \right] &lt; {R_3}} \right]=0 $$</span></p>
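The counterexample can also be confirmed by simulation. Here $A$ is uniform on $[1,3)$ and $B$ uniform on $[1,2)$, one arbitrary choice of distributions satisfying the stated constraints:

```python
import random

random.seed(0)
R3, R2 = 1, 2
trials = 10_000
union = mins = maxs = 0
for _ in range(trials):
    A = random.uniform(1, 3)   # A >= 1, so A < R3 never occurs
    B = random.uniform(1, 2)   # 1 <= B < 2, so B < R2 always occurs
    union += (A < R3) or (B < R2)
    mins += min(A, B) < R3
    maxs += max(A, B) < R3
print(union / trials, mins / trials, maxs / trials)   # 1.0 0.0 0.0
```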
177,643
<p>Let $I=\{0, 1, \ldots \}$ be the multiplicative semigroup of non-negative integers. It is possible to find a ring $R$ such that the multiplicative semigroup of $R$ is isomorphic (as a semigroup) to $I$?</p>
Clive Newstead
19,542
<p>Replacing $B$ by $B+Y$ in your expression for $\tan(A+B)$ gives</p> <p>$$\dfrac{\tan A + \tan(B+Y)}{1 - \tan A \tan(B+Y)}$$</p> <p>Expanding gives</p> <p>$$\dfrac{\tan A + \frac{\tan B + \tan Y}{1 - \tan B \tan Y}}{1 - \tan A \frac{\tan B + \tan Y}{1 - \tan B \tan Y}}$$</p> <p>Multiplying through by $1 - \tan B \tan Y$ gives</p> <p>$$\frac{\tan A(1 - \tan B \tan Y) + \tan B + \tan Y}{1 - \tan B \tan Y - \tan A(\tan B + \tan Y)}$$</p> <p>And simplifying gives you what you want.</p>
177,643
<p>Let $I=\{0, 1, \ldots \}$ be the multiplicative semigroup of non-negative integers. It is possible to find a ring $R$ such that the multiplicative semigroup of $R$ is isomorphic (as a semigroup) to $I$?</p>
user29999
29,999
<p>Let $a=\tan A$, $b= \tan B$ and $y = \tan Y$. Then, \begin{eqnarray} \tan (A+B+Y) &amp;=&amp; \dfrac{\tan(A+B) + \tan(Y)}{1-\tan(A+B)y}\\ &amp;=&amp; \dfrac{\dfrac{a+b}{1-ab}+y}{1 - \Big(\dfrac{a+b}{1-ab}\Bigr)y}\\ &amp;=&amp; \dfrac{\dfrac{a+b+(1-ab)y}{1-ab}}{\dfrac{1-ab - (a+b)y}{1-ab}}\\ &amp;=&amp; \dfrac{a+b+y -aby}{1-ab-ay-by}.\\ \end{eqnarray}</p>
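The closed form is easy to check numerically for angles where all tangents involved are defined (the test angles below are arbitrary):

```python
from math import tan

A, B, Y = 0.3, 0.5, 0.7               # arbitrary; A + B + Y < pi/2, no poles
a, b, y = tan(A), tan(B), tan(Y)
lhs = tan(A + B + Y)
rhs = (a + b + y - a * b * y) / (1 - a * b - a * y - b * y)
print(abs(lhs - rhs) < 1e-9)          # True
```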
6,155
<p>I have a category $C$, which is equipped with a symmetric monoidal structure (tensor product $\otimes$, unit object $1$). My category also has finite coproducts (I'll write them using $\oplus$, and write $0$ for the initial object), and $\otimes$ distributes over $\oplus$.</p> <p>By an <em>exponential monad</em>, I mean a monad $(T,\eta,\mu)$ on $C$, where the functor $T:C\to C$ is equipped with some structure maps of the form $$\nu \colon 1 \to T(0)$$ and $$\alpha\colon T(X)\otimes T(Y) \to T(X\oplus Y).$$ The structure maps are isomorphisms, and are suitably "coherent" with respect to the two monoidal structures $\otimes$ and $\oplus$.</p> <p>The simplest example is: $C$ is the category of $k$-vector spaces, and $T=\mathrm{Sym}$ is the commutative $k$-algebra monad (i.e., $\mathrm{Sym}(X)$ is the symmetric algebra $\bigoplus \mathrm{Sym}^q(X)$).</p> <p>Now, I'm sure I can work out all the formalism that I need for this, if I have to. My question is: is there a convenient place in the literature I can refer to for this? Alternately, is there suitable categorical language which makes this concept easy to talk about?</p> <p>I'd also like to have a good formalism for talking about a "grading" on $T$. This means a decomposition of the functor $T=\bigoplus T^q$, where $T^q\colon C\to C$ are functors, which have "nice" properties (for instance, $T^m(X\oplus Y)$ is a sum of $T^p(X)\otimes T^{m-p}(Y)$). The motivating example again comes from the symmetric algebra: $\mathrm{Sym}=\bigoplus \mathrm{Sym}^q$.</p>
Mattia Coloma
125,244
<p>I can give you my interpretation of the Chern character in terms of morphisms of formal group laws. Denote with $\gamma$ the universal line bundle over $\mathbb{P}^\infty$ and denote its first Chern class with $c_1(\gamma) = x_H \in H^2(\mathbb{P}^\infty)$. Also denote with $k = \gamma-1 \in K(\mathbb{P}^\infty)$ and with $k_i : = p_i^*(\gamma)-1 \in K(\mathbb{P}^\infty \times \mathbb{P}^\infty)$. Then by definition the Chern character of $k \in K(\mathbb{P}^\infty)$ is equal to $e^{x_H}-1 \in HP^0(\mathbb{P}^\infty, \mathbb{Q}):=\prod_{i\geq0}H^{2i}(\mathbb{P^\infty},\mathbb{Q})$. The naturality of the Chern character and the isomorphisms $HP^0(\mathbb{P}^\infty,\mathbb{Q}) \cong \mathbb{Q}[[x_H]], HP^0(\mathbb{P}^\infty \times \mathbb{P}^\infty,\mathbb{Q}) \cong \mathbb{Q}[[x_1,x_2]]$ give the following commutative square $\require{AMScd}$ \begin{CD} K(\mathbb{P}^\infty) @&gt;ch&gt;&gt; \mathbb{Q}[[x_H]]\\ @V \mu^* V V @VV \mu^* V\\ K(\mathbb{P}^\infty \times \mathbb{P}^\infty) @&gt;&gt;ch&gt; \mathbb{Q}[[x_1,x_2]] \end{CD} where $\mu$ is the multiplication in the H-space $\mathbb{P}^\infty$. If we follow the element $1+k \in K(\mathbb{P}^\infty)$ we get $\require{AMScd}$ \begin{CD} 1+k @&gt;ch&gt;&gt; e^{x_H}\\ @V \mu^* V V @VV \mu^* V\\ 1+k_1+k_2+k_1k_2 @&gt;&gt;ch&gt; e^{x_1+x_2} \end{CD} Now let $m$ be the multiplicative formal group law and $a$ be the additive one. Then, by the previous diagram we get that</p> <p>$ m(ch(k_1),ch(k_2)) = m(e^{x_1}-1,e^{x_2}-1) = e^{a(x_1,x_2)}-1 $ and the Chern character is identified with a morphism (actually an isomorphism) of fgl.</p>
2,466,805
<p>Hi maths people, I have a question about how to show that a function is greater than or equal to zero. I want to show that a function is a density function, and this is one of the two conditions for it to be a density function.</p> <p>For example, we have the function </p> <p>$$f(x) = \frac{1}{2} \cdot \sin(x) \text{ where } x \in \left[0, \pi\right]$$</p> <p>How do I show that $f(x) \geq 0$ on this interval?</p> <p>I drew the sine curve and saw that from zero to $\pi$ there is no value lower than zero... But I think if I just say this in a test, the teacher will mark it wrong. How do you do it like a professional?</p> <p>I think I should check whether the function has a zero in the interval? But how do I check it with this function?</p> <p>$$f(x) = 0$$</p> <p>$$0= \frac{1}{2}\cdot \sin(x) $$</p> <p>$$0= \sin(x)$$</p> <p>$$x = \arcsin(0) $$</p> <p>$$x=0$$</p> <p>But is this good? Because now I know there is a zero, and at this zero the function could go below zero..?</p>
Thomas Andrews
7,933
<p>We have: $$1=\mathrm{Var}(X_i)=E(X_i^2)-(E(X_i))^2 = E(X_i^2)$$</p> <p>You need to know:</p> <blockquote> <p><strong>Lemma:</strong> If $Y$ is a non-negative random variable, then $$E(Y)\geq kP(Y\geq k).$$</p> </blockquote> <p>So if $Y=X_i^2$ and $k=4$ then:</p> <p>$$1=E(X_i^2)\geq 4P(X_i^2\geq 4)=4 P(|X_i|\geq 2)$$</p> <p><strong>Proof of lemma:</strong></p> <p>If $p_Y$ is the PDF for $Y$, then:</p> <p>$$\begin{align}E(Y)&amp;=\int_{0}^{\infty} yp_Y(y)\,dy \\ &amp;\geq \int_{k}^{\infty} yp_Y(y)\,dy\\ &amp;\geq \int_{k}^{\infty} kp_Y(y)\,dy\\ &amp;=kP(Y\geq k) \end{align}$$</p>
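A quick simulation illustrates the bound. Here $X_i$ is taken to be standard normal, one example of a variable with $E(X_i)=0$ and $\mathrm{Var}(X_i)=1$ (the actual distribution in the exercise is an assumption here):

```python
import random

random.seed(1)
n = 100_000
xs = [random.gauss(0, 1) for _ in range(n)]   # mean 0, variance 1
p_tail = sum(abs(x) >= 2 for x in xs) / n     # estimate of P(|X_i| >= 2)
print(p_tail <= 0.25)                         # True: well under the bound 1/4
```

For the normal distribution the tail probability is about $0.046$, far below the distribution-free bound $1/4$ the lemma guarantees.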
105,583
<p>this is a simple question, and excuse me if it's already been answered; I searched around and couldn't find anything.</p> <p>I have two listplots, both along the same number of x data points, but with different y values. I want to find the difference between the two y values, while keeping the x values the same. I tried just subtracting the two, but that leaves all the x values as equal to 0, which is undesirable, of course.</p>
Community
-1
<p>ok, you have two lists,</p> <pre><code>list1 = {{1, 2}, {2, 3}, {3, 5}, {4, 7}, {5, 11}, {6, 13}, {7, 17}} list2 = {{1, 3.87}, {2, 3.53}, {3, 3.40}, {4, 3.33}, {5, 3.25}, {6, 4.25}, {7, 5.24}} </code></pre> <p>and you know how to plot them,</p> <pre><code>ListPlot[{list1, list2}] </code></pre> <p><a href="https://i.stack.imgur.com/zqkF3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zqkF3.png" alt="enter image description here"></a></p> <p><a href="http://reference.wolfram.com/language/guide/ListManipulation.html" rel="nofollow noreferrer">List Manipulation</a> giving massive and solid knowledge about <em>List</em></p> <pre><code>list1[[All, 2]] - list2[[All, 2]] </code></pre> <blockquote> <p>{-1.87, -0.53, 1.6, 3.67, 7.75, 8.75, 11.76}</p> </blockquote> <pre><code>ListPlot[list1[[All, 2]] - list2[[All, 2]]] </code></pre> <p><a href="https://i.stack.imgur.com/c9JnV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/c9JnV.png" alt="enter image description here"></a></p> <p>or, as proposed by @MarcoB</p> <pre><code>Transpose[{list1[[All, 1]], (list1 - list2)[[All, 2]]}] </code></pre> <blockquote> <p>{{1, -1.87}, {2, -0.53}, {3, 1.6}, {4, 3.67}, {5, 7.75}, {6, 8.75}, {7, 11.76}}</p> </blockquote> <p>well, which generates us a new <em>List</em>.</p> <p>Have Fun!</p>
181,327
<p>If $q$ and $w$ are the roots of the equation $$2x^2-px+7=0$$ </p> <p>Then $q/w$ is a root of ? </p> <p>P.s:- It is an another question of <a href="https://math.stackexchange.com/questions/181305/how-do-i-transform-the-equation-based-on-this-condition">How do I transform the equation based on this condition?</a></p>
Kartik Audhkhasi
37,312
<p>I would not be worried about the equality constraint since it can be handled in almost all optimization packages. This is because $g(x) = a$ can always be written as two inequality constraints: $g(x) \leq a$ and $g(x) \geq a$. You can also remove one optimization variable using this constraint, as mentioned by Ed Gorcenski.</p> <p>The troubling part seems to be the objective. Particularly, non-differentiability is a cause of concern. I suggest putting a "nice" lower bound on $f$ and maximizing it instead. A suitable class of algorithms to look into is "Majorization-Minimization" (or "Minorization-Maximization" for your case). The idea is to solve a sequence of simpler optimization problems. At the $k^{th}$ iteration:</p> <ol> <li>Construct a lower bound on $f(x)$ by finding a function $g(x;x_{t-1})$ such that: $f(x_{t-1}) = g(x_{t-1};x_{t-1})$ and $g(x;x_{t-1}) \leq f(x) \forall x$. This bound is parameterized by $x_{t-1}$, and is said to minorize $f(x)$.</li> <li>Solve an easier optimization problem by replacing $f(x)$ by $g(x;x_{t-1})$ in the original problem. The solution to this problem is $x_t$.</li> </ol> <p>The conditions in (1) ensure that $f(x_t) \geq f(x_{t-1})$, i.e. your objective is always monotonically non-decreasing. You will ultimately converge to a local maxima.</p> <p>Some common tricks for finding nice bounds on unwieldy objective functions are given in: D. R. Hunter, K. Lange, <em>A tutorial on MM algorithms</em>, The American Statistician, vol. 58, no. 1, pp. 30-37, 2004.</p>
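To make the MM idea concrete, here is a toy sketch (my own illustrative example, not from the cited references): minimize $f(x)=|x|+(x-3)^2$ by majorizing $|x|$ with the quadratic $x^2/(2|x_t|)+|x_t|/2$, which touches $|x|$ at $x_t$ and lies above it everywhere (by AM–GM). Each surrogate is minimized in closed form, the objective decreases monotonically, and the iterates converge to the true minimizer $x^*=2.5$:

```python
def f(x):
    return abs(x) + (x - 3) ** 2

x = 10.0                      # arbitrary starting point
vals = [f(x)]
for _ in range(50):
    # surrogate: x^2/(2|x_t|) + |x_t|/2 + (x - 3)^2, minimized where
    # x/|x_t| + 2(x - 3) = 0  =>  x = 6 / (2 + 1/|x_t|)
    x = 6 / (2 + 1 / abs(x))
    vals.append(f(x))

monotone = all(v2 <= v1 + 1e-12 for v1, v2 in zip(vals, vals[1:]))
print(round(x, 6), monotone)  # 2.5 True
```

The monotonicity check mirrors the guarantee stated above: $f(x_t) \le g(x_t; x_{t-1}) \le g(x_{t-1}; x_{t-1}) = f(x_{t-1})$ (with inequalities flipped for maximization).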
1,015,137
<p>I have an exercise book from my university which doesn't specify a quantifier.</p> <p>It uses expressions like "here $A$,$B$,$C$ are sets", or "if $x \notin A$ then ..." (it uses $x$ before it is even defined, just out of nowhere).</p> <p>I'm going to need an answer from my university, of course, but I want to ask in general context: Is there any implicit quantifier when one is not specified, or is it always an error? If there is an implicit quantifier, which is it?</p>
vadim123
73,324
<p>$$\sum\limits_{n=1} ^\infty |x_n||y_n|=|x_1||y_1|+\sum_{n=2}^\infty|x_n||y_n|\le |x_1|\sum_{n=1}^\infty|y_n|+\sum_{n=2}^\infty |x_n||y_n|\le$$</p> <p>$$|x_1|\sum_{n=1}^\infty|y_n|+|x_2||y_2|+\sum_{n=3}^\infty |x_n||y_n|\le |x_1|\sum_{n=1}^\infty|y_n|+|x_2|\sum_{n=1}^\infty|y_n|+\sum_{n=3}^\infty |x_n||y_n|\le$$</p> <p>$$\vdots$$</p> <p>$$|x_1|\sum_{n=1}^\infty|y_n|+|x_2|\sum_{n=1}^\infty|y_n|+|x_3|\sum_{n=1}^\infty|y_n|+\cdots =$$</p> <p>$$ \sum\limits_{n=1} ^\infty |x_n|\sum\limits_{n=1} ^\infty |y_n|$$</p>
1,015,137
<p>I have an exercise book from my university which doesn't specify a quantifier.</p> <p>It uses expressions like "here $A$,$B$,$C$ are sets", or "if $x \notin A$ then ..." (it uses $x$ before it is even defined, just out of nowhere).</p> <p>I'm going to need an answer from my university, of course, but I want to ask in general context: Is there any implicit quantifier when one is not specified, or is it always an error? If there is an implicit quantifier, which is it?</p>
Winther
147,873
<p>Apply Cauchy-Schwarz</p> <p>$$(x\cdot y)^2 \leq ||x ||^2~||y||^2$$</p> <p>to the vectors $x =(|x_1|,|x_2|,\ldots,|x_n|)$ and $y = (|y_1|,|y_2|,\ldots,|y_n|)$ to get</p> <p>$$\left(\sum x_i y_i\right)^2 \leq \sum x_i^2\sum y_i^2$$</p> <p>Now use that</p> <p>$$\sum x_i^2 \sum y_i^2 \leq \left(\sum |x_i|\right)^2 \left(\sum |y_i|\right)^2$$ </p> <p>since $\left(\sum |x_i|\right)^2 = \sum x_i^2 + \sum_{i\not=j} |x_ix_j| \geq \sum x_i^2$ and the result follows.</p>
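Either route gives $\sum |x_ny_n| \le \sum|x_n|\sum|y_n|$; a randomized sanity check (vector length and ranges are arbitrary):

```python
import random

random.seed(2)
x = [random.uniform(-1, 1) for _ in range(20)]
y = [random.uniform(-1, 1) for _ in range(20)]
lhs = sum(abs(a * b) for a, b in zip(x, y))
rhs = sum(abs(a) for a in x) * sum(abs(b) for b in y)
print(lhs <= rhs)   # True
```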
187,486
<p>Let $SL(2,{\mathbb R})$ act on ${\mathbb R}^2$ by matrix multiplication.</p> <p>What is known about group cohomology $H^*(SL(2,{\mathbb R}),{\mathbb R}^2)$?</p> <p>And about $$H^*(\Gamma,{\mathbb R}^2)$$ for a cocompact lattice $$\Gamma\subset SL(2,{\mathbb R})?$$</p>
Pierre
61,960
<p>I will answer only for cohomology in degree 1, as I am not sure for higher degrees. </p> <p>On the other hand I think that what I am saying remains true when R^2 is replaced by any finite dimensional representation of SL_2(R). </p> <p>For the first cohomology of a lattice with values in a finite dimensional representation, the key word is Eichler-Shimura isomorphism. These cohomology groups can be nonzero, see </p> <p><a href="http://www.ams.org/journals/tran/1987-300-02/S0002-9947-1987-0876476-0/S0002-9947-1987-0876476-0.pdf" rel="nofollow">http://www.ams.org/journals/tran/1987-300-02/S0002-9947-1987-0876476-0/S0002-9947-1987-0876476-0.pdf</a> </p> <p>or </p> <p><a href="http://www.math.u-psud.fr/~labourie/preprints/pdf/propsurf.pdf" rel="nofollow">http://www.math.u-psud.fr/~labourie/preprints/pdf/propsurf.pdf</a> </p> <p>and the (original) references there. For the cohomology group H^1(SL_2(R),V) with V a finite dimensional SL_2(R)-module, I think it is always zero. I don't have a reference on the top of my head. But if you understand the Eichler-Shimura isomorphism for lattices, it should give a proof of vanishing for SL_2(R). </p>
80,432
<p>I know Mathematica's <code>If</code> format is </p> <pre><code>If[test, then result, else alternative] </code></pre> <p>For example, this</p> <pre><code>y:=If[RandomReal[]&lt;0.2, 1, 3.14] </code></pre> <p>would take a random real number between $0$ and $1$ and evaluate it. If it's less than $0.2$, it'll map <code>y</code> to $1$; otherwise, it'll map <code>y</code> to $3.14$.</p> <p>I would like to extend this to multiple intervals. Short of writing out multiply nested <code>If</code> statements, how can this be done? Can this be done automatically?</p> <p>For example, if <code>x := RandomReal[]</code>, I want to map to <code>1</code> if <code>x &lt; 0.1</code>, 3 if <code>0.1 &lt;= x &lt; 0.2</code>, 19.1 if <code>0.2 &lt;= x &lt; 0.34</code>, or 7.7 if <code>x &gt;= 0.34</code>. </p>
rcollyer
52
<p>This is not an answer to your question, but I wanted to point out a better way of coding</p> <pre><code>y:=If[RandomReal[]&lt;0.2,1,3.14] </code></pre> <p>Use <a href="http://reference.wolfram.com/language/ref/RandomChoice.html" rel="nofollow"><code>RandomChoice</code></a>, instead, as you can specify the exact probabilities with which each number is chosen:</p> <pre><code>h := RandomChoice[{0.2, 0.8} -&gt; {1, 3.14}] </code></pre> <p>This can be adapted to your second list with something like this</p> <pre><code>x[rngs_ -&gt; vals_] := First@x[rngs -&gt; vals, 1] x[rngs_ -&gt; vals_, n_] := Block[{ps}, ps = Flatten[{#, 1 - Total@#}]&amp; @ Differences @ Prepend[0] @ rngs; RandomChoice[ps -&gt; vals, n] ] /; Length@vals - 1 == Length@rngs x[{0.1, 0.2, 0.34} -&gt; {1, 2, 3, 4}, 4] (* {4, 1, 1, 4} *) </code></pre> <p>Obviously, you would not want to recalculate the probability list every time, so you can return a function, instead</p> <pre><code>z[rngs_ -&gt; vals_, n_:1] := Block[{ps}, ps = Flatten[{#, 1 - Total@#}]&amp; @ Differences @ Prepend[0] @ rngs; With[{plst = ps}, If[n==1, RandomChoice[plst -&gt; vals]&amp;, RandomChoice[plst -&gt; vals, n]&amp; ] ] ] /; Length@vals - 1 == Length@rngs </code></pre> <p>Which can then be used to construct your <code>y</code></p> <pre><code>y := Evaluate[z[{0.1, 0.2, 0.34} -&gt; {1, 2, 3, 4}]][] </code></pre> <p>and answers your question, apparently.</p>
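For comparison, the same breakpoints-to-weights trick (the `Differences@Prepend[0]` step) can be sketched outside Mathematica; this Python analogue is my own illustration, not part of the answer:

```python
import random

def make_sampler(breakpoints, values, seed=None):
    # weights = successive gaps of [0, b1, ..., bk], plus the remainder up to 1
    edges = [0.0] + list(breakpoints)
    weights = [hi - lo for lo, hi in zip(edges, edges[1:])]
    weights.append(1.0 - sum(weights))
    rng = random.Random(seed)
    return weights, lambda: rng.choices(values, weights=weights)[0]

weights, draw = make_sampler([0.1, 0.2, 0.34], [1, 2, 3, 4], seed=0)
print([round(w, 6) for w in weights])   # [0.1, 0.1, 0.14, 0.66]
print(draw() in {1, 2, 3, 4})           # True
```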
925,005
<p>Show that $$\|a+b+c\|^2\leq 3\|a\|^2+3\|b\|^2+3\|c\|^2$$ where $a,b,c$ are in some Hilbert space $(H,\langle\cdot,\cdot \rangle)$?</p> <p>I see that we have $$\|a+b\|^2\leq2\|a\|^2 +2 \|b\|^2$$ due to the parallelogram law. What about the first inequality?</p>
PenasRaul
156,384
<p>You can prove that $2 | \langle a, b \rangle | \leq \| a \|^2 + \| b \|^2$ as I believe you have proven to attain $\| a + b \|^2 \leq 2\| a \|^2 + 2\| b \|^2$.</p> <p>Then you just expand:</p> <p>$\| a + b + c \|^2 = \langle a, a \rangle + \langle b, b \rangle + \langle c, c \rangle + 2\langle a, b \rangle + 2\langle a, c \rangle + 2\langle b, c \rangle \leq \| a \|^2 + \| b \|^2 + \| c \|^2 + \| a \|^2 + \| b \|^2 + \| a \|^2 + \| c \|^2 + \| b \|^2 + \| c \|^2 = 3 ( \| a \|^2 + \| b \|^2 + \| c \|^2)$</p> <p>Sometimes, you just have to be brave to dive into the ugly formulas to simplify...</p>
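A randomized check of the final inequality in $\mathbb{R}^5$ with the usual dot product (dimension and value ranges are arbitrary):

```python
import random

random.seed(3)

def norm_sq(v):
    return sum(t * t for t in v)   # squared Euclidean norm

a, b, c = ([random.uniform(-5, 5) for _ in range(5)] for _ in range(3))
s = [x + y + z for x, y, z in zip(a, b, c)]
ok = norm_sq(s) <= 3 * (norm_sq(a) + norm_sq(b) + norm_sq(c)) + 1e-9
print(ok)   # True
```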
1,782,855
<p>These were a couple examples in class before learning the $u$-substitution method for integrals. I'm not sure what is going on:</p> <p>$$\int1\, d(2x) = 2x+ C$$ $$\int \sin x \,d(\sin x) = \frac{(\sin x)^2}{2}+C$$ $$\int(2x+1) \,d(2x+1) = \frac{(2x+1)^2}{2}+C$$ </p> <p>How exactly do we reach these answers? I guess I can see how the last two follow if you substitute in $U$ and use the power rule, but what about the first one?</p> <p>I guess the $d(\dots)$ is throwing me off. If it's $\int 1 \,dx$, the answer would be $x+C$, but with $d(2x)$, the answer is $2x+C$. What is happening? How would you find the integral $\int 3 \,d(2x)$?</p>
Hagen von Eitzen
39,174
<p>Have a look at the explicit bijective map $\Bbb N\times\Bbb N\to\Bbb N$, $(a,b)\mapsto \frac{(a+b)(a+b+1)}{2}+b$.</p> <p>The case of countable family of countable set is almost the same - except that it may involve a bit more need for Choice (we need an enumeration for each $A_i$).</p>
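The map $(a,b)\mapsto \frac{(a+b)(a+b+1)}{2}+b$ can be checked by brute force on an initial square: no collisions occur, and an initial segment of $\Bbb N$ is covered.

```python
def pair(a, b):
    # Cantor pairing: (a, b) -> (a+b)(a+b+1)/2 + b
    return (a + b) * (a + b + 1) // 2 + b

n = 50
images = {pair(a, b) for a in range(n) for b in range(n)}
print(len(images) == n * n)        # injective: all n*n values are distinct
print(set(range(100)) <= images)   # hits every value below 100
```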
1,782,855
<p>These were a couple examples in class before learning the $u$-substitution method for integrals. I'm not sure what is going on:</p> <p>$$\int1\, d(2x) = 2x+ C$$ $$\int \sin x \,d(\sin x) = \frac{(\sin x)^2}{2}+C$$ $$\int(2x+1) \,d(2x+1) = \frac{(2x+1)^2}{2}+C$$ </p> <p>How exactly do we reach these answers? I guess I can see how the last two follow if you substitute in $U$ and use the power rule, but what about the first one?</p> <p>I guess the $d(\dots)$ is throwing me off. If it's $\int 1 \,dx$, the answer would be $x+C$, but with $d(2x)$, the answer is $2x+C$. What is happening? How would you find the integral $\int 3 \,d(2x)$?</p>
Jxt921
229,776
<p>I want to suggest one more way to prove that $|\mathbb{N} \times \mathbb{N}| = |\mathbb{N}|$. We use two injections:</p> <p>1) $f: \mathbb{N} \times \mathbb{N} \to \mathbb{N}, f(m,n) = 2^m3^n$. Fundamental theorem of arithmetic tells us $f$ is injective.</p> <p>2) $g: \mathbb{N} \to \mathbb{N} \times \mathbb{N}, f(n) = (n,0)$. Obviously, it is also injective.</p> <p>Schröder–Bernstein theorem then tells us there is a bijection between these sets.</p>
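Injectivity of $f(m,n)=2^m3^n$ is also easy to confirm on a finite grid, exactly as the fundamental theorem of arithmetic predicts:

```python
vals = {2 ** m * 3 ** n for m in range(40) for n in range(40)}
print(len(vals) == 40 * 40)   # True: no two pairs collide
```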
2,556,255
<p>For work I have been doing a lot of calculations which look sort of like summing terms similar to $\frac{A}{1+x}$ and $\frac{B}{1+y}$ for some $A, B$ and small values of $0 \leq x, y \leq 0.1$. In my experience I have found that this is approximately equal to $\frac{A + B}{1 + \frac{A}{A+B}x + \frac{B}{A+B}y }$. The best thing about this approximate equality is that the numerator is simply a linear combination of the earlier two numerators. Of course, it also holds exactly true if A or B is zero!</p> <p>It would help me a great deal both if I can somehow show that my hypothesis is true. Of course I know equality does not hold, but if I can convince my colleagues that atleast this is approximately true, we can simplify our calculations and calculation speed dramatically! </p> <p>Does anyone have any idea if this indeed can be shown to be true?</p>
Karn Watcharasupat
501,685
<p>Hint: Try Taylor/Maclaurin series: $$ \frac{A}{1+x} = A( 1 - x + x^2 - x^3 \cdots) $$ for $|x| &lt; 1$. Use the start of that series to start your approximations.</p>
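Keeping only first-order terms explains the observation in the question: $\frac{A}{1+x}+\frac{B}{1+y}\approx A+B-(Ax+By)$, which is also the first-order expansion of $\frac{A+B}{1+\frac{Ax+By}{A+B}}$. A numeric sketch (the particular $A,B,x,y$ are made up, with $x,y$ in the stated range):

```python
A, B = 3.0, 7.0
x, y = 0.08, 0.05
exact = A / (1 + x) + B / (1 + y)
approx = (A + B) / (1 + (A * x + B * y) / (A + B))
rel_err = abs(exact - approx) / exact
print(rel_err < 1e-3)   # True: the mismatch is second order, ~ AB(x - y)^2/(A + B)
```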
1,368,180
<p>Here is a tricky compass and straightedge construction problem. </p> <blockquote> <p>Given triangle $\triangle ABC$ and point $D$ on segment $\overline{AB}$, construct point $P$ on line $\overleftrightarrow{CD}$ such that $\angle APB = \angle BAC$. </p> </blockquote> <p>This configuration appears frequently in Olympiad geometry problems and the diagram is impossible to draw precisely unless you know how to do the construction.</p>
Mick
42,351
<p>There are 2 possible cases – $P$ can be outside or inside of $\triangle ABC$. Maybe that is why the OP claims that this is a tricky problem.</p> <p>Case-1 ($P$ is outside, easier to start with for my $ABC$)</p> <p>1) Draw line $g$, the perpendicular bisector of $AB$.</p> <p>2) Draw line $h$, through $A$ and perpendicular to $AC$, cutting $g$ at $O$.</p> <p>3) Draw circle $k$ using $O$ as center and $OA$ as radius.</p> <p>4) Let circle $k$ cut $CD$ (extended) at $P$.</p> <p><img src="https://i.stack.imgur.com/91BYN.png" alt="enter image description here"></p> <p>Proof: By angles in alternate segment, $\alpha = \beta$.</p> <p>Case-2 ($P$ is inside triangle ABC.)</p> <p>[Continuing from the above]</p> <p>1) Locate $P’$, the mirror reflection of $P$ about $AB$. It should be clear that $\theta = \alpha$.</p> <p>2) Draw circle $m$, passing through $A, B, P’$, cutting $CD$ at $P”$.</p> <p><img src="https://i.stack.imgur.com/VSCnT.png" alt="enter image description here"></p> <p>Proof: By angles in the same segment, $\phi = \theta$.</p> <p>Result follows.</p>
760,739
<p>Find the limit of the sequence given by $$\frac{10+12n+20n^4}{7n^4 + 5n^3 - 20}$$</p> <p>I think the answer is $\frac{20}{7}$ after dividing, but is that right? </p>
Community
-1
<p>Yes right!</p> <p>$$\frac{10+12n+20n^4}{7n^4+5n^3-20}=\frac{\frac{10}{n^4}+\frac{12}{n^3}+20}{7+\frac5n-\frac{20}{n^4}}$$ and pass to the limit.</p> <p><strong>Remark:</strong> There's a simple way to find the limit at $\infty$ for a fraction of two polynomials: it suffices to take the monomial of highest degree for the numerator and denominator, hence applying this we find</p> <p>$$\lim_{n\to\infty}\frac{10+12n+20n^4}{7n^4+5n^3-20}=\lim_{n\to\infty}\frac{20n^4}{7n^4}=\frac{20}7$$</p>
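A quick numeric confirmation that the terms approach $20/7 \approx 2.857$:

```python
def a(n):
    return (10 + 12 * n + 20 * n ** 4) / (7 * n ** 4 + 5 * n ** 3 - 20)

print(abs(a(10 ** 6) - 20 / 7) < 1e-5)   # True
```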
4,578,898
<blockquote> <p><strong>Question:</strong> Let <span class="math-container">$f$</span> be the function defined on <span class="math-container">$[0,1]$</span> by <span class="math-container">$$ f(x)= \begin{cases} n(-1)^n &amp; \textrm{if }\frac{1}{n+1}&lt;x\leq \frac{1}{n} \\ 0 &amp; \textrm{otherwise} \end{cases} $$</span> (a) Is <span class="math-container">$f$</span> a bounded function?<br>(b) Is <span class="math-container">$f\in\mathcal L[0,1]$</span>?<br>(c) Does the improper Riemann integral <span class="math-container">$\int_0^1f(x)dx$</span> exist?</p> </blockquote> <p>For (a), I had a hard time interpreting the function, i.e. how it eats <span class="math-container">$x$</span> and spits out something not explicitly related to <span class="math-container">$x$</span>. I tried to understand it by <span class="math-container">$$\frac{1}{11}&lt;0.1\leq\frac{1}{10}\implies f(0.1)=10(-1)^{10}=10$$</span></p> <p>Is there any other (efficient) way to interpret it? <span class="math-container">$\tag 1$</span> Hence, <span class="math-container">$x\rightarrow 0 \implies f(x)\rightarrow \pm\infty$</span> (unbounded)</p> <p>For (b), I know several properties:</p> <p><span class="math-container">$(1)$</span> Let <span class="math-container">$f$</span> be an unbounded function defined on <span class="math-container">$[a,b]$</span>. For <span class="math-container">$N&gt;0$</span> define <span class="math-container">$$ {}^Nf(x)= \begin{cases} f(x) &amp; \textrm{if }f(x)\leq N \\ N &amp; \textrm{otherwise} \end{cases} $$</span> We say <span class="math-container">$f$</span> is Lebesgue integrable on <span class="math-container">$[a,b]$</span> if <span class="math-container">${}^Nf$</span> is Lebesgue integrable for all <span class="math-container">$N&gt;0$</span> and <span class="math-container">$\lim_{N\rightarrow+\infty}\left(\int_a^b{}^Nf\right)$</span> is finite. 
In this case <span class="math-container">$\int_a^b f$</span> is defined to be <span class="math-container">$$\int_a^b f=\lim_{N\rightarrow+\infty}\left(\int_a^b{}^Nf\right)$$</span> <span class="math-container">$(2)$</span> Let <span class="math-container">$f$</span> be bounded on <span class="math-container">$[a,b]$</span>. Then <span class="math-container">$f$</span> is Lebesgue integrable if and only if for every <span class="math-container">$\epsilon&gt;0$</span> there is a measurable partition <span class="math-container">$P$</span> such that <span class="math-container">$$U[f,P]-L[f,P]&lt;\epsilon$$</span> But I couldn't decide where to start.</p> <p>Any help will be appreciated. TIA</p>
Bonnaduck
92,329
<p>Obviously, this is not the standard definition of the <span class="math-container">$n$</span>th root in mathematics, so I will not use the <span class="math-container">$\sqrt[n]{\cdot}\ $</span> notation.</p> <p>Let's say we want to compute the <span class="math-container">$3$</span>rd root of <span class="math-container">$100$</span>. We need to find the integer <span class="math-container">$x$</span> such that for all other integers <span class="math-container">$y$</span>, we have that <span class="math-container">$x^3$</span> is closer to <span class="math-container">$100$</span> than <span class="math-container">$y^3$</span>. &quot;Closer&quot; here, I believe, means the distance in terms of the absolute value of the difference of the two numbers.</p> <p>If we make a list:</p> <p><span class="math-container">\begin{align} 1^3 &amp;= 1 &amp; |100-1| &amp;= 99\\ 2^3 &amp;= 8 &amp; |100-8| &amp;= 92\\ 3^3 &amp;= 27 &amp; |100-27| &amp;= 73\\ 4^3 &amp;= 64 &amp; |100-64| &amp;= 36 \\ 5^3 &amp;= 125&amp; |100-125| &amp;= 25\\ 6^3 &amp;= 216&amp; |100-216| &amp;= 116 \end{align}</span></p> <p>Since <span class="math-container">$5^3$</span> is closer to <span class="math-container">$100$</span> than any other cube, <span class="math-container">$5$</span> is the <span class="math-container">$3$</span>rd root of <span class="math-container">$100$</span>.</p>
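Not part of the original answer, but this "closest power" definition is easy to brute-force; a quick Python sketch (the helper name `closest_root` is my own):

```python
def closest_root(n, target):
    """Integer x whose n-th power is closest to target (smallest x wins ties)."""
    return min(range(1, target + 2), key=lambda x: abs(target - x**n))

# The 3rd root of 100 under this definition:
print(closest_root(3, 100))  # 5, since |100 - 125| = 25 beats |100 - 64| = 36
```

The same function reproduces the table above for any base and exponent.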
461,268
<p>A square hole of depth $h$ whose base is of length $a$ is given. A dog is tied to the center of the square at the bottom of the hole by a rope of length $L&gt;\sqrt{2a^2+h^2}$, and walks on the ground around the hole.</p> <p>The edges of the hole are smooth, so that the rope can freely slide along it. Find the shape and area of the territory accessible to the dog (whose size is neglected).</p>
Harish Kayarohanam
30,423
<p>It can move anywhere inside the pit.</p> <p>Now, for $L &gt; \sqrt{2a^2+h^2}$, it can very well peep out of the pit (from being inside), because $\sqrt{2a^2+h^2}$ is the length of the hypotenuse with the diagonal of the square as one leg and the height as the other.</p> <p>So the shape on top will be a circle; but the shape of the accessible region is a cone with lateral height greater than $\sqrt{a^2/2+h^2}$.</p> <p>If the dog cannot jump, then it is a circle on top: just a circular area around the square top.</p> <p>But the practical case is this: the distance from the centre of the square to its boundary is maximal at a corner, decreases to a minimum at the midpoint of a side, and rises back to the maximum at the next corner. So at a corner the dog sweeps less area (since more rope gets used just to peep out); as it moves along a side the swept area increases, reaching its maximum at the midpoint of the side, and then decreases back to the minimum at the next corner. Since this happens all the way around the square, the region swept on top cannot really be a circle, but is an irregular figure, perhaps something like an ellipse.</p> <p><img src="https://i.stack.imgur.com/IYfOW.jpg" alt="enter image description here"></p>
461,268
<p>A square hole of depth $h$ whose base is of length $a$ is given. A dog is tied to the center of the square at the bottom of the hole by a rope of length $L&gt;\sqrt{2a^2+h^2}$, and walks on the ground around the hole.</p> <p>The edges of the hole are smooth, so that the rope can freely slide along it. Find the shape and area of the territory accessible to the dog (whose size is neglected).</p>
Jonas Kgomo
45,379
<p>Let ABCD be the upper square of the hole. We will determine the locus along one side at a time. For this, let O be the centre of the bottom square. Let T be the variable point on the edge AB through which the string passes. And let P be the point where the dog is standing, such that the string remains taut.</p> <p>The edges are slippery, so for the string to remain in position, it has to be such that the points O,P,T lie on a straight line when viewed from the top. So, let the centre of the top square be O'.</p> <p>When viewed from the top, $O'\equiv O$ and therefore O'PT is a straight line. Note that the length of the string inside the hole remains $\sqrt{\frac{a^2}{4}+h^2}=\frac 12\sqrt{a^2+4h^2}$ by Pythagoras' theorem. So the length of the string on the top plane is always constant, say $\ell$. Now we're going to find the locus of P such that $TP=\ell$ with coordinate geometry. </p> <p>Let $O'\equiv (0,0)$ and let the axes be parallel to the sides of the square. If $P\equiv (h,k)$ then $T\equiv PO'\cap AB\equiv \{hy=kx\} \cap \{x=\frac a2\} \equiv \left(\frac a2,\frac{ka}{2h}\right)$. Then $PT^2=\ell^2=\left(h-\frac a2\right)^2+\left(k-\frac{ka}{2h}\right)^2$; so the locus turns out to be (where $a=2b$) $$(x-b)^2+\frac{y^2(x-b)^2}{x^2}=\ell^2.$$ The attachment is a plot for $b=1, \ell =2$, thanks to WolframAlpha.</p> <p>So all we need to do is reflect the right branch of this plot about the axes to get the desired locus. $\Box$</p> <p>I don't know what this shape is called, or how you can even find this shape without plotting software...</p>
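A numerical spot-check of my own (not in the original answer): picking any point $P=(x,y)$ on the derived curve with $x>b$, the segment from the edge point $T=(b, yb/x)$ back to $P$ should have length exactly $\ell$.

```python
import math

b, ell = 1.0, 2.0          # half-side of the top square and free string length
x = 1.7                    # an arbitrary abscissa with x > b
# solve the locus equation (x-b)^2 + y^2 (x-b)^2 / x^2 = ell^2 for y >= 0
y = math.sqrt((ell**2 - (x - b)**2) * x**2) / (x - b)
T = (b, y * b / x)         # where the line O'P crosses the edge x = b
print(math.hypot(x - T[0], y - T[1]))  # ≈ 2.0, i.e. ell
```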
4,589,828
<p>I'm stuck in proving that the function <span class="math-container">$$f(x) = \vert x + e^x\vert $$</span> has a minimum.</p> <p>This is what I did:</p> <p><span class="math-container">$$f'(x) = (1+e^x)\text{sgn}(1+e^x)$$</span></p> <p>But this function is never zero, because the exponential is always positive. So when I study the sign of the derivative, it is always increasing.</p> <p>Yet the plot of the function shows a &quot;sort of&quot; minimum.</p> <p><strong>Added</strong></p> <p>Since there is an absolute value, I calculated the difference quotient for <span class="math-container">$f(x) &gt; 0$</span> and <span class="math-container">$f(x) &lt; 0$</span>, obtaining that the function is continuous when <span class="math-container">$x+e^x &gt; 0$</span> and when <span class="math-container">$x+e^x &lt; 0$</span></p> <p>Yet I also understood @Martin R.'s comment, but how do I find the point where <span class="math-container">$f(x)$</span> is discontinuous?</p>
Hirofumi Ryo
300,531
<p>I approached this integral by trigonometric substitution and partial fraction.</p> <p>Assume the followings:</p> <p><span class="math-container">$x=\sin\theta\ \ldots(1)\\ \text{Hence, }dx=\cos\theta\ d\theta\ \ldots(2)\\ \text{Also, }\cos\theta=\sqrt{1-x^2}\ \ldots(3)\\$</span></p> <p>Then,</p> <p><span class="math-container">$\operatorname{\Large\int}\dfrac{1}{x-\sqrt{1-x^2}}dx=\operatorname{\Large\int}\dfrac{\cos\theta}{\sin\theta-\sqrt{1-\sin^2\theta}}d\theta=\operatorname{\Large\int}\dfrac{\cos\theta}{\sin\theta-\cos\theta}d\theta$</span></p> <p><span class="math-container">$=\operatorname{\Large\int}\dfrac{1}{\tan\theta-1}d\theta=\operatorname{\Large\int}\dfrac{1}{\sec^2\theta\ (\tan\theta-1)}d\tan\theta=\operatorname{\Large\int}\dfrac{1}{(\tan^2\theta+1)(\tan\theta-1)}d\tan\theta$</span></p> <p><span class="math-container">$\stackrel{\text{partial fraction}}{=}\dfrac{1}{2}\operatorname{\Large\int}(\dfrac{1}{\tan\theta-1}-\dfrac{1+\tan\theta}{1+\tan^2\theta})\ d\tan\theta$</span></p> <p><span class="math-container">$=\dfrac{1}{2}(\operatorname{\Large\int}\dfrac{1}{\tan\theta-1}d\tan\theta-\operatorname{\Large\int}\dfrac{\sec^2\theta\ (1+\tan\theta)}{1+\tan^2\theta}\ d\theta)$</span></p> <p><span class="math-container">$=\dfrac{1}{2}(\operatorname{\Large\int}\dfrac{1}{\tan\theta-1}d\tan\theta-\operatorname{\Large\int}(1+\tan\theta)\ d\theta)$</span></p> <p><span class="math-container">$=\dfrac{1}{2}[\ln|\tan\theta-1|-(\theta-\ln|\cos\theta|)]+C$</span></p> <p><span class="math-container">$=\dfrac{1}{2}(\ln|\sin\theta-\cos\theta|-\theta)+C$</span></p> <p><span class="math-container">$=\underline{\underline{\dfrac{1}{2}(\ln|x-\sqrt{1-x^2}|-\arcsin x)+C}}$</span></p>
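As a sanity check (mine, at an arbitrarily chosen sample point), the final antiderivative can be differentiated numerically and compared against the integrand:

```python
import math

def F(x):  # the antiderivative found above
    return 0.5 * (math.log(abs(x - math.sqrt(1 - x*x))) - math.asin(x))

def f(x):  # the original integrand
    return 1 / (x - math.sqrt(1 - x*x))

x, h = 0.3, 1e-6
print((F(x + h) - F(x - h)) / (2*h), f(x))  # both ≈ -1.5292
```

The two values agree to many digits, away from the singularity at $x = 1/\sqrt2$.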
2,760,605
<p>$||x-1|-2|=|x-3|$. Find the value of $x$.<br> In my attempt I got the critical values of the expression as $1$ and $3$. But I’m not sure if we can just ignore the $2$ on the LHS.</p> <p>My steps are Case 1: when $x&gt;3$, $x-1-2=x-3$, $0=0$</p> <p>Case 2: $1&lt;x&lt;3$, $x-1-2=-x+3 \implies 2x=6 \implies x=3$</p> <p>Case 3: $x&lt;1$, $-x+1-2=-x+3$. Not possible.</p>
MSDG
447,520
<p><strong>EDIT</strong>: As pointed out in the comments, even though the final inequality is correct, it is insufficient since $(n+1)^{1/n} \to 1$ as $n \to \infty$. The lower bound can be obtained as shown in @adfriedman's answer.</p> <p>Here's my take on it: Whenever $n \geq 3$, we have $$ n^n \geq (n+1)!, $$ and thus $$ n^n \geq (n+1)n! \quad \Leftrightarrow \quad \frac{n}{n!^{\frac{1}{n}}} \geq (n+1)^{\frac{1}{n}}. $$ On the other hand, the Taylor expansion of $e^n$ gives $$ \frac{n^n}{n!} \leq \sum_{k=0}^{\infty} \frac{n^k}{k!} = e^n \quad \Leftrightarrow \quad \frac{n}{n!^{\frac{1}{n}}} \leq e. $$ So, $$ (n+1)^{\frac{1}{n}} \leq \frac{n}{n!^{\frac{1}{n}}} \leq e. $$ Apply the Squeeze Theorem.</p>
2,105,238
<p>I need to prove that the function $f: \mathbb R_{&gt;0}\times\mathbb R \to \mathbb R^2$ , $f(x,y)=(xy, x^2-y^2)$ is injective. I know I have to show that $f(x,y)=f(a,b)$ implies $x=a$ and $y=b$ but I have no idea how to prove it. Could you give me a hint?</p>
Rob Arthan
23,171
<p>If you are given $xy$ and $x^2 - y^2$ and you know that $x &gt; 0$, you can determine all of the following: $$ \begin{align*} (x^2 + y^2)^2 &amp;= x^4 + y^4 + 2(xy)^2 = (x^2 - y^2)^2 +4 (xy)^2\\ x^2 + y^2 &amp;= \sqrt{(x^2 + y^2)^2} \\ x^2 &amp;= \frac{(x^2 - y^2) + (x^2 + y^2)}{2} \\ x &amp;= \sqrt{x^2} \\ y &amp;= \frac{xy}{x}. \end{align*} $$</p> <p>So $(x, y)$ is a function of $(xy, x^2 - y^2)$ for $(x, y) \in \Bbb{R}_{&gt;0}\times\Bbb{R}$.</p>
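The chain of formulas above is effectively an algorithm for inverting $f$; a round-trip sketch (the function name `recover` and the test values are mine):

```python
import math

def recover(p, q):
    """Recover (x, y) from p = x*y and q = x**2 - y**2, assuming x > 0."""
    s = math.sqrt(q*q + 4*p*p)       # = x^2 + y^2
    x = math.sqrt((q + s) / 2)       # = sqrt(x^2)
    return x, p / x                  # y = (x*y)/x

x, y = 2.5, -1.3
print(recover(x*y, x*x - y*y))  # ≈ (2.5, -1.3)
```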
31,544
<p>For example, how to make Mathematica display only the lines beginning with <code>In[some number]</code>?</p> <pre><code>In[8]:= m1 = {{1, 1, 1}, {8, 4, 2}, {64, 16, 4}} </code></pre> <blockquote> <pre><code>Out[8]= {{1, 1, 1}, {8, 4, 2}, {64, 16, 4}} </code></pre> </blockquote> <pre><code>In[9]:= m1.{b1, b2, b3} == {1, 1, 1} </code></pre> <blockquote> <pre><code>Out[9]= {b1 + b2 + b3, 8 b1 + 4 b2 + 2 b3, 64 b1 + 16 b2 + 4 b3} == {1, 1, 1} </code></pre> </blockquote>
Kuba
5,478
<p>4)</p> <pre><code>SetOptions[#, CellOpen -&gt; False] &amp; /@ Cells[EvaluationNotebook[], GeneratedCell -&gt; True] </code></pre> <p>But I would choose Szabolcs's 3rd solution.</p>
780,588
<p>I need help with this proof:</p> <p>Let $V, W$ be $K$-vector spaces. Let $T_1, T_2$ be subspaces of $V$ with $T_1 \subseteq T_2$. Let $f \in hom_K(V,W)$.</p> <p>Show the following: If $ f(T_1) = f(T_2)$ then there exists a subspace $U$ of $ker(f)$ so that $ U \oplus T_1 = T_2$</p> <p>I'm sorry, but I really have no clue how to even begin this proof, so I'm grateful for any kind of help!</p>
Christian Blatter
1,303
<p>Put $U_1:=T_1\cap {\rm ker}(f)$, $\&gt;U_2:=T_2\cap {\rm ker}(f)$. It follows that $U_1\subset U_2\subset{\rm ker}(f)$. </p> <p>Let $U$ be a complement of $U_1$ in $U_2$, so that we have $$U_2=U_1\oplus U\ .\tag{1}$$ I claim that $$T_2=T_1\oplus U\ .\tag{2}$$ <em>Proof.</em> As $T_1\subset T_2$ and $U\subset U_2\subset T_2$ we trivially have $T_1+U\subset T_2$.</p> <p>On the other hand, consider an arbitrary $x\in T_2$. By assumption on $f$ there is an $x'\in T_1$ with $f(x')=f(x)$. It follows that $y:=x-x'\in{\rm ker}(f)$, and as $y\in T_2$ as well we have $y\in U_2$. Using $(1)$ we now can find a $y'\in T_1$ and a $u\in U$ with $y=y'+u$. In all we obtain $$x=x'+y=(x'+y')+u\in T_1+U\ ,$$ which proves the converse inclusion $T_2\subset T_1+U$.</p> <p>In order to prove that the sum $(2)$ is direct consider an $x\in T_1\cap U$. As $U\subset U_2\subset {\rm ker}(f)$ it follows that $x\in T_1\cap{\rm ker}(f)=U_1$. Since the sum $(1)$ is direct we conclude that $x=0$.</p>
1,733,923
<p>I'm working with the following sum and trying to determine what it converges to: $$\sum_{k=1}^{\infty}\ln\left(\frac{k(k+2)}{(k+1)^2}\right)$$</p> <p>Numerically I see that it seems to be converging to $-\ln(2)$, however I can't see why that is the case. I have expanded the logarithm and expressed the sum as $\sum_{k=1}^{\infty}\ln(k)+\ln(k+2)-2\ln(k+1)$, but I don't know if that actually helps anything. Does anyone have any thoughts on how to approach this problem?</p>
DonAntonio
31,254
<p>$$\frac{k^2+2k}{k^2+2k+1}=1-\frac1{(k+1)^2}\implies$$</p> <p>$$\sum_{k=1}^n\log\left(1-\frac1{(k+1)^2}\right)=\sum_{k=1}^n\log\left[\left(1-\frac1{k+1}\right)\left(1+\frac1{k+1}\right)\right]=$$</p> <p>$$=\sum_{k=1}^n\left[\log\left(1-\frac1{k+1}\right)+\log\left(1+\frac1{k+1}\right)\right]=$$</p> <p>$$=\log\frac12+\overbrace{\log\frac32+\log\frac23}^{=\log1=0}+\overbrace{\log\frac43+\log\frac34}^{=\log1=0}+\log\frac54+\ldots+\log\frac n{n+1}+\log\frac{n+2}{n+1}=$$</p> <p>$$=\log\frac12+\log\frac{n+2}{n+1}\xrightarrow[n\to\infty]{}-\log2$$</p>
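The telescoping closed form $\log\frac12+\log\frac{n+2}{n+1}$ for the partial sums is easy to confirm numerically (sketch of mine):

```python
import math

def partial_sum(n):
    return sum(math.log(k*(k + 2) / (k + 1)**2) for k in range(1, n + 1))

for n in (5, 50, 500):
    closed = math.log(1/2) + math.log((n + 2) / (n + 1))  # from the telescoping
    assert abs(partial_sum(n) - closed) < 1e-9
print(partial_sum(10**6), -math.log(2))  # both ≈ -0.6931
```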
26,420
<p>So the cup product is not well defined over co-chain groups, but all the books claim it is well defined over co-homology groups. The only thing I am not clear on is invariance under ordering/re-ordering of simplices when we go to the co-homology level. Every book seems to gloss over this, and after doing a few examples, I can't seem to figure out how to get this to work out right. Can someone fill me in?</p>
Reader Manifold
376,599
<p>At the cochain level, for <span class="math-container">$\varphi \in C^k(X)$</span> and <span class="math-container">$\psi \in C^l(X)$</span>, the cup product is defined as, <span class="math-container">$$(\varphi \smile \psi)(\sigma) = \varphi (\sigma \big|_{\leq k}) \psi (\sigma \big|_{\geq k})$$</span> and extended linearly, where <span class="math-container">$\sigma \big|_{\leq k}$</span> is the restriction to vertices from <span class="math-container">$1$</span> to <span class="math-container">$k$</span> and <span class="math-container">$\sigma \big|_{\geq k}$</span> is the restriction to vertices from <span class="math-container">$k$</span> to <span class="math-container">$k +l$</span>.</p> <p>It can be shown, by explicit computation that <span class="math-container">$$ d(\varphi \smile \psi) = d\varphi \smile \psi + (-1)^k \varphi \smile d \psi$$</span></p> <p>Now, suppose <span class="math-container">$[\varphi] \in H^k(X)$</span> and <span class="math-container">$[\psi] \in H^l(X)$</span>, where <span class="math-container">$\varphi , \psi \in \ker d$</span>.</p> <p>The above equation shows us that <span class="math-container">$\varphi \smile \psi \in \ker d$</span>. Also, if <span class="math-container">$\varphi \in \text{im } d$</span>, ie, if <span class="math-container">$\varphi = d \varphi '$</span> for some <span class="math-container">$\varphi '$</span>, then <span class="math-container">\begin{align} d(\varphi ' \smile \psi) &amp;= d\varphi '\smile \psi + (-1)^k \varphi '\smile d \psi \\ &amp;= d\varphi '\smile \psi \end{align}</span> as <span class="math-container">$\psi \in \ker d$</span>; showing us that <span class="math-container">$d\varphi ' \smile \psi \in \text{im } d$</span>.</p> <p>Thus, changing <span class="math-container">$\varphi$</span> by an element of <span class="math-container">$\text{im} d$</span> does not change the cohomology class of the cup product. 
Similarly, changing <span class="math-container">$\psi$</span> by an element of <span class="math-container">$\text{im } d$</span> does not change the cohomology class of <span class="math-container">$\varphi \smile \psi$</span> either, showing us that the product is well defined on cohomology.</p>
705,662
<p>So my function is $f(z) = \sqrt{r}e^{i\theta/2}$.</p> <p>I want to find its contour integral over the upper half of the circle $z = e^{i\theta}$, where $0\leq\theta\leq\pi$.</p> <p>I know that:</p> <p>$\int f(z) dz$ = $\int_a^b f(z(t)) * z'(t) dt$</p> <p>But in the above problem, the function I want to find the contour integral of isn't in terms of $z$; it's $f(z) = \sqrt{r}e^{i\theta/2}$</p> <p>I know this is a branch of $z^{1/2}$ but not sure where to go from here...</p> <p>Feel like I'm pretty close but missing something</p> <p>Thanks for the help, Mariogs</p>
Robert Israel
8,508
<p>This branch of $z^{1/2}$ is analytic on the upper half plane. What is its antiderivative? You should have a theorem relating contour integrals to antiderivatives...</p>
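Filling in the hint numerically (my own sketch, not part of the answer): with the antiderivative $F(z)=\frac23 z^{3/2}$ on this branch, the contour integral over the half-circle should equal $F(-1)-F(1)=\frac23(e^{3\pi i/2}-1)$. A midpoint-rule check:

```python
import math, cmath

n = 20000
dt = math.pi / n
# midpoint rule for the parametrized integral of e^{i t/2} * i e^{i t} over [0, pi]
# (z = e^{it}, dz = i e^{it} dt)
s = sum(cmath.exp(0.5j*t) * 1j * cmath.exp(1j*t)
        for t in (dt*(k + 0.5) for k in range(n))) * dt
print(s, 2/3 * (cmath.exp(1.5j*math.pi) - 1))  # both ≈ -0.6667 - 0.6667j
```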
1,920,624
<p>Prove that if 1 is a linear combination of two integers $a, b$, then $a$ and $b$ are relatively prime. Use only basic divisibility properties. (Do not use GCDLC).</p> <p>I can only figure out how to prove this using the GCDLC.</p>
Arthur
15,500
<p>Hint: any common divisor of $a$ and $b$ will necessary divide any linear combination of $a$ and $b$.</p>
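The hint in executable form (a brute-force sanity check of mine): whenever $1=ax+by$ for integers $x,y$, the pair $(a,b)$ is coprime, since any common divisor of $a$ and $b$ divides $ax+by=1$.

```python
import math

# exhaust small pairs (a, b) and coefficients (x, y); every pair admitting
# a combination ax + by = 1 must have gcd 1
for a in range(1, 15):
    for b in range(1, 15):
        for x in range(-15, 16):
            for y in range(-15, 16):
                if a*x + b*y == 1:
                    assert math.gcd(a, b) == 1
print("every pair with 1 = ax + by is coprime")
```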
356,594
<p>The roots of the quadratic equation $ax^2-bx+c=0,$ $a&gt;0$, both lie within the interval $[2,\frac{12}{5}]$. Prove that: (a) $a \leq b \leq c &lt;a+b$. (b) $\frac{a}{a+c}+\frac{b}{b+a}&gt;\frac{c}{c+b}$</p> <p>So we can use the quadratic formula and obtain $2 \leq \frac{b \pm \sqrt{b^2-4ac}}{2a} \leq \frac{12}{5}$</p> <p>and as $a&gt;0$, this implies that</p> <p>$4a \leq b \pm \sqrt{b^2-4ac} \leq \frac{24a}{5}$.</p> <p>Then we can divide this into two cases (plus or minus) --- but how can we manipulate it to yield the required result?</p> <p>Another idea is to make use of the fact that $b^2-4ac$ is non-negative.</p>
DonAntonio
31,254
<p>Hints: let $\,\alpha\,,\,\beta\,$ be the quadratic's roots, then Viete's formulas give</p> <p>$$\alpha+\beta=\frac{b}{a}\;,\;\;\alpha\beta=\frac{c}{a}$$</p> <p>Suppose $\,2\le \alpha\le\beta\le 2.4\,$ , then for example</p> <p>$$a=\frac{b}{\alpha+\beta}\le b\ldots$$</p>
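Continuing the hint, part (a) can be spot-checked numerically: the inequalities are homogeneous in $(a,b,c)$, so take $a=1$ and sample roots $\alpha,\beta\in[2,\tfrac{12}{5}]$ on a grid (grid of mine):

```python
grid = [2 + 0.4*i/20 for i in range(21)]   # sample roots in [2, 2.4]
for al in grid:
    for be in grid:
        a, b, c = 1.0, al + be, al*be      # Vieta: b/a = alpha+beta, c/a = alpha*beta
        assert a <= b <= c < a + b
print("a <= b <= c < a+b holds on the grid")
```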
4,188,147
<p>I'm trying to show the momentum operator <span class="math-container">$P : \mathcal D(P)\ \dot{=}\ C^\infty_0(\Bbb R)\to L^2(\Bbb R)$</span> with <span class="math-container">$P=-i\hbar\partial_x$</span> is essentially self-adjoint, which is equivalent to saying that it is symmetric (densely defined, which is true because <span class="math-container">$\overline{C^\infty_0}(\Bbb R)=L^2(\Bbb R)$</span>, and Hermitian) and <span class="math-container">$\ker(P^*\pm i 1) = \{0\}$</span>. The momentum is symmetric because <span class="math-container">$$\require{cancel}(\varphi|P\psi) = \int_\Bbb R \overline{\varphi(x)}(-i\hbar\partial_x\psi)(x)\ dx = \cancel{[-i\hbar \overline \varphi \psi]_{-\infty}^{+\infty}} + \int_\Bbb R \overline{-i\hbar \text{w-}\partial_x\varphi(x)}\psi(x)\ dx, $$</span> where <span class="math-container">$\text{w-}\partial_x$</span> indicates the weak derivative, and the equation holds for all <span class="math-container">$\psi \in C^\infty_0(\Bbb R)$</span> and <span class="math-container">$\varphi \in \mathcal D(P^*)$</span>, which can (and should) be taken to be the maximal domain of definition, i.e. the Sobolev space <span class="math-container">$$\mathcal D(P^*) = W^{1,2}(\Bbb R)=\{\varphi \in L^2(\Bbb R)\ |\ \text{w-}\partial_x\varphi \in L^2(\Bbb R)\}. $$</span> Now, to show the kernels of <span class="math-container">$P^*\pm i 1$</span> are trivial, I can take <span class="math-container">$\psi_\pm\in\ker(P^*\pm i1)$</span>, which means that <span class="math-container">$$\text{w-}\partial_x \psi_\pm(x) = \pm \frac 1 \hbar \psi_\pm(x). $$</span> At this point, I could proceed to ignore the <span class="math-container">$\text{w-}$</span> symbol and just treat the derivative as a strong derivative, so I can solve the ODE via <span class="math-container">$\psi_\pm(x) = a_\pm e^{\pm x/\hbar}$</span>, which is <span class="math-container">$\in L^2$</span> iff <span class="math-container">$a_\pm = 0$</span> as needed. 
But <em>why</em> can I do this? The <a href="https://en.wikipedia.org/wiki/Sobolev_inequality?oldformat=true#k_%3E_n/p" rel="nofollow noreferrer">Sobolev embedding theorem</a> only seems to guarantee that <span class="math-container">$\psi_\pm \in C^0(\Bbb R)$</span>, which wouldn't be enough to argue that <span class="math-container">$\psi_\pm$</span> can be strongly differentiated.</p>
Jose27
4,829
<p>A bootstrapping argument shows that we can worry only about classical solutions: Consider the equation <span class="math-container">\begin{equation}\tag{1} \text{w-}\partial_x \psi= \dfrac{\psi}{h}, \end{equation}</span> given that <span class="math-container">$\psi \in W^{1,2}$</span> (the domain doesn't matter here). Since the right hand side is weakly differentiable, then so is the left and so w-<span class="math-container">$\partial_x \psi\in W^{1,2}$</span>. This means two things:</p> <ol> <li><span class="math-container">$\psi\in C^1$</span> by Sobolev embedding.</li> <li>We can (weakly) differentiate the equation (1) to obtain that w-<span class="math-container">$\partial_x^2\psi \in W^{1,2}$</span>.</li> </ol> <p>Now you basically iterate step 2 to conclude that w-<span class="math-container">$\partial_x^k\psi\in W^{1,2}$</span> for all <span class="math-container">$k\geq 0$</span>, and by Sobolev embedding (as in step 1) <span class="math-container">$\psi\in C^\infty$</span>.</p>
4,224,303
<p><strong>Question</strong></p> <p>Let <span class="math-container">$X_0, X_1, X_2, \dots$</span> be independent and identically distributed continuous random variables with density <span class="math-container">$f(x)$</span>. Let <span class="math-container">$N$</span> be the first index <span class="math-container">$k$</span> such that <span class="math-container">$X_k &gt; X_0$</span>. For example, <span class="math-container">$N = 1$</span> if <span class="math-container">$X_1 &gt; X_0$</span>, <span class="math-container">$N = 2$</span> if <span class="math-container">$X_2 &gt; X_0$</span> and <span class="math-container">$X_1 \leq X_0$</span> and so on. Determine the probability mass function of <span class="math-container">$N$</span>.</p> <p><strong>My thoughts</strong></p> <p>I am thinking that <span class="math-container">$N$</span> should follow a geometric distribution, since we are somewhat modelling the probability of the first success (i.e. the first <span class="math-container">$X_k$</span> exceeding <span class="math-container">$X_0$</span>), but if so, then what should the parameter for my geometric distribution be? Any intuitive explanations will be greatly appreciated :)</p> <p><strong>Edit</strong></p> <p>Following some help from an answer below and hints from my professor, I now know that my original thought process was not entirely correct!</p>
StubbornAtom
321,264
<p>When the <span class="math-container">$X_i$</span>'s are i.i.d continuous, the events <span class="math-container">$A_j=\{X_j=\max\{X_0,X_1,\ldots,X_k\}\}$</span> are equally likely for every <span class="math-container">$j=0,1,\ldots,k$</span> by symmetry.</p> <p>Further note that <span class="math-container">$P(A_i\cap A_j)=0$</span> for every <span class="math-container">$i\ne j$</span>, so that <span class="math-container">$$1=P\left(\bigcup_{j=0}^k A_j\right)=\sum_{j=0}^k P(A_j)\implies P(A_j)=\frac1{k+1}\,\,\forall \,j.$$</span></p> <p>That is really the main idea needed here.</p> <p>Your <span class="math-container">$N$</span> is defined as <span class="math-container">$$N=\min\{k\ge 1: X_k&gt;X_0\}$$</span></p> <p>So for every <span class="math-container">$k\in \mathbb N$</span>,</p> <p><span class="math-container">\begin{align} P(N&gt;k)&amp;=P(X_1\le X_0,X_2\le X_0,\ldots,X_k\le X_0) \\&amp;=P(X_0=\max\{X_0,X_1,\ldots,X_k\}) \\&amp;=\frac1{k+1} \end{align}</span></p> <p>Hence, <span class="math-container">$$P(N=k)=P(N&gt;k-1)-P(N&gt;k)=\frac1{k(k+1)}\,,$$</span> which is certainly not geometric.</p>
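A Monte Carlo check of $P(N=k)=\frac1{k(k+1)}$ (simulation of mine; uniform $X_i$ are enough here, since the distribution of $N$ does not depend on $f$ for continuous i.i.d. samples):

```python
import random

random.seed(1)
trials = 200_000
counts = {}
for _ in range(trials):
    x0, k = random.random(), 1
    while random.random() <= x0:   # keep drawing until some X_k exceeds X_0
        k += 1
    counts[k] = counts.get(k, 0) + 1

for k in (1, 2, 3):
    print(k, counts[k] / trials, 1 / (k*(k + 1)))  # empirical vs 1/(k(k+1))
```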
557,680
<blockquote> <p>Let $M_{k,n}$ be the set of all $k\times n$ matrices, $S_k$ be the set of all symmetric $k\times k$ matrices, and $I_k$ the identity $k\times k$ matrix. Suppose $A\in M_{k,n}$ is such that $AA^t=I_k$. Let $f:M_{k,n}\rightarrow S_k$ be the map $f(B)=BA^t+AB^t$. Prove that $f$ is onto (surjective). (Note: all matrices have entries in $\mathbb{R}$.)</p> </blockquote> <p>Clearly the matrix $BA^t+AB^t$ is symmetric, since $$(BA^t+AB^t)^t=(BA^t)^t+(AB^t)^t=AB^t+BA^t.$$</p> <p>We want to show that for every $C\in S_k$, there exists $B\in M_{k,n}$ such that $$C=BA^t+AB^t.$$</p> <p>How can we do that?</p>
Henry
106,067
<p>First observation: $C=BA^t+AB^t=BA^t+(BA^t)^t$. Also, since $C$ is a symmetric matrix, there is an upper-triangular matrix $U$ such that $C=U+U^t$ (just take the $C$'s entries for the off-diagonal entries, and take a half of $C$'s diagonal entries). </p> <p>Therefore it is enough to prove that for any $k\times k$ upper-triangular matrix $U$, there exists $B\in M_{k,n}$ such that $U=BA^t$. </p> <p>Let $U$ be an arbitrary but fixed $k\times k$ upper-triangular matrix. Because $AA^t=I_k$, $(UA)A^t=U$. Take $B=UA$, and we are through. </p>
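The argument is constructive, so it can be tested on a concrete example (matrices chosen by me: $k=2$, $n=3$): split a symmetric $C$ as $U+U^t$ with $U$ upper triangular, then take $B=UA$.

```python
def matmul(X, Y):
    return [[sum(X[i][t]*Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

A = [[1, 0, 0], [0, 1, 0]]        # rows orthonormal, so A A^t = I_2
C = [[4, 1], [1, 6]]              # an arbitrary symmetric target
U = [[2, 1], [0, 3]]              # upper triangular with U + U^t = C
B = matmul(U, A)                  # the preimage constructed in the proof

S = matmul(B, transpose(A))       # B A^t
St = transpose(S)                 # (B A^t)^t = A B^t
result = [[S[i][j] + St[i][j] for j in range(2)] for i in range(2)]
print(result)  # [[4, 1], [1, 6]], i.e. C
```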
2,687,375
<p>Let <span class="math-container">$x_1, x_2, \ldots, x_n$</span> be a random sample from the Bernoulli(<span class="math-container">$\theta$</span>) distribution.</p> <p>The question is to find the UMVUE of <span class="math-container">$\theta^k$</span>.</p> <p>I know that <span class="math-container">$\sum_1^nx_i$</span> is a complete sufficient statistic for <span class="math-container">$\theta$</span>.</p> <p>Is <span class="math-container">$\left(\frac{\sum_1^nx_i}{n}\right)^k$</span> the estimator, or is there another possible estimator?</p> <p>Could someone just help me?</p>
DanielWainfleet
254,665
<p>(1). Compactness: In a $T_2$ (Hausdorff) space (such as $\Bbb R^2$) a compact subset must be closed. But $T$ is not closed. Also $S\setminus \{p\}$ is not closed for any $p\in S,$ because $p\in \overline {S\setminus \{p\}}$. </p> <p>Let $T$ be a non-closed subset of a Hausdorff space $X$ and let $p\in \overline T \setminus T.$ To prove that $T$ is not compact:</p> <p>For each $t\in T$ let $U_t$ and $V_t$ be disjoint open sets with $t\in U_t$ and $p\in V_t.$ Then $C=\{U_t: t\in T\}$ is an open cover of $T.$ If $I$ is a finite subset of $T$ and $D=\{U_t: t\in I\}$ then: </p> <p>(i). If $I=\emptyset$ then $D$ is not a cover of $T,$ because $T$ is not empty (since $T$ is not closed). </p> <p>(ii). If $I \ne \emptyset$ then $W=\cap \{V_t:t\in I\}$ is an open set containing $p.$ So $W\cap T\ne \emptyset$ (because $p\in \overline T$), but $W$ is disjoint from $\cup D.$ So $D$ is not a cover of $T.$</p> <p>(2). Connectedness: Suppose $A,B$ are open subsets of $\Bbb R^2$ such that $A\cup B\supset T=[-1,1]^2\setminus \{(0,0)\}$ and $(A\cap T)\cap (B\cap T)=\emptyset. $ </p> <p>For $r\in (0,1)$ let $C^+(r)=T\setminus (-1,r)^2$ and $C^-(r)=T\setminus (-r,1)^2.$ </p> <p>For brevity let $D(r)=C^+(r)\cup C^-(r)=T\setminus (-r,r)^2.$</p> <p>Each of $C^+(r),\;C^-(r)$ is connected so each is a subset of $A$ or a subset of $B.$ And $C^+(r)\cap C^-(r)\ne \emptyset$, so $D(r)$ is a subset of $A$ or a subset of $B.$ </p> <p>For any $r,r'\in (0,1)$ we cannot have $D(r)\subset A$ and $D(r')\subset B,$ because $D(r)\cap D(r') \ne \emptyset .$ So either $D(r)\subset A$ for all $r\in (0,1)$ or $D(r)\subset B$ for all $r\in (0,1).$ </p> <p>Now $T=\cup_{r\in (0,1)} D(r)$ so $T\subset A$ or $T\subset B.$ </p> <p>A similar argument works for $S\setminus \{p\}$ for any $p\in (-1,1)^2,$ and with a little modification, for any $p\in S.$ This method will also work in higher dimensions.</p> <p>Remarks: If $P,Q$ are connected subsets of a space $X$ and $P\cap Q\ne \emptyset$ then $P\cup Q$ is connected. So $C^+(r)=([-1,1]\times [r,1])\cup ([r,1]\times [-1,1])$ is connected, and similarly $C^-(r)$ is connected.</p> <p>Also if $F$ is a family of connected sets such that $a\cap b\ne \emptyset$ whenever $a,b\in F$, then $\cup F$ is connected. We can apply this with $F=\{D(r):r\in (0,1)\}.$</p>
2,529,990
<p>Find the area of the shape enclosed by $y = \sin(x)$, $y = -\cos(x)$, $x = 0$, $x = π/4$. Do I subtract $S_2$ from $S_1$? How do I find the shape's area? <a href="https://i.stack.imgur.com/67G7B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/67G7B.png" alt="Picture of the graph: "></a></p>
zoli
203,663
<p>The area equals</p> <p>$$\int_0^{\frac{\pi}4}\int_{-\cos(x)}^{\sin(x)}\ dy\ dx=\int_0^{\frac{\pi}4}\sin(x)+\cos(x)\ dx=$$ $$=\left[-\cos(x)\right]_{0}^{\frac{\pi}4}+\left[\sin(x)\right]_{0}^{\frac{\pi}4}=$$ $$=-\frac1{\sqrt 2}+1+\frac1{\sqrt 2}-0=1.$$</p>
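A midpoint-rule check of the result (discretization of mine):

```python
import math

n = 100_000
h = (math.pi/4) / n
# integrate sin(x) + cos(x) over [0, pi/4] by the midpoint rule
area = sum(math.sin(x) + math.cos(x)
           for x in ((i + 0.5)*h for i in range(n))) * h
print(area)  # ≈ 1.0
```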
101,828
<p>In what sense is the limit of discrete series representation of $SL(2, \mathbb{R})$ a limit of discrete series representations? Where does the name originate?</p>
paul garrett
15,629
<p>These repns are not actually "discrete series", in that they do not appear in $L^2(G)$. Yet their construction/description is completely parallel to that of the discrete family of repns called "discrete series". Since the relevant parameter is discrete, it is hard to conjure up any "limit-taking process", indeed, in a mathematical sense. But in a colloquial sense, since the parameter (for $SL_2(\mathbb R)$ just the "weight") takes a more extreme value for these repns than for "genuine discrete series", it's not completely unreasonable to refer to them as "limits of discrete series".</p> <p>I couldn't give a citation off-hand, but probably Harish-Chandra and others used this term in the 1950s, also applied to more general (reductive and semi-simple) Lie groups.</p>
452,889
<p>My question relates to the conditions under which the spectral decomposition of a nonnegative definite symmetric matrix can be performed. That is if $A$ is a real $n\times n$ symmetric matrix with eigenvalues $\lambda_{1},...,\lambda_{n}$, $X=(x_{1},...,x_{n})$ where $x_{1},...,x_{n}$ are a set of orthonormal eigenvectors that correspond to these eigenvalues (i.e. $X$ is an orthogonal matrix), and $\Lambda=\text{diag}(\lambda_{1},...,\lambda_{n})$ then</p> <p>$A=X\Lambda X'$</p> <p>is the spectral decomposition of $A$. If we then let $A^{1/2}=X\Lambda^{1/2}X'$, where $\Lambda^{1/2}$ is a square root matrix of $\Lambda$ - i.e. $\Lambda^{1/2}\Lambda^{1/2}=\Lambda$, then $A^{1/2}A^{1/2}=A$. Thus $A^{1/2}$ is a square root matrix of $A$.</p> <p>So if a real nonnegative definite symmetric $n\times n$ matrix $A$ has $n$ eigenvalues then the matrix has a spectral decomposition and thus a square root matrix too. My question is do all symmetric $n\times n$ invertible matrices have $n$ eigenvalues, and thus a square root matrix? Furthermore it is not clear to me whether the eigenvalues have to be distinct or not. </p>
fuglede
10,816
<p>For your first question: that you can diagonalize real symmetric matrices is the so-called <a href="http://en.wikipedia.org/wiki/Spectral_theorem" rel="nofollow">spectral theorem</a>.</p> <p>For the second: your argument is general; you did not use that the eigenvalues were distinct.</p>
3,794,418
<p><strong>Background:</strong> I'm a math rookie, yet to enrol in university. I randomly started reading Mendelson's <em>Introduction to Mathematical Logic</em>, when I stumbled upon this paradox in the introductory section:</p> <blockquote> <p><strong>Grelling's Paradox:</strong> An adjective is called <em>autological</em> if the property denoted by the adjective holds for the adjective itself. An adjective is called <em>heterological</em> if the property denoted by the adjective does not apply to the adjective itself. For example, 'polysyllabic' and 'English' are autological, whereas 'monosyllabic' and 'French' are heterological. Consider the adjective 'heterological'. If 'heterological' is heterological, then it is not heterological. If 'heterological' is not heterological, then it is heterological. In either case, heterological is both heterological and not heterological.</p> </blockquote> <p>I'd like to understand the following:</p> <ol> <li><strong>What is the source of logical fallacy in this paradox?</strong> If I formulate a set <span class="math-container">$A$</span> of all adjectives and subsets <span class="math-container">$A_a$</span> and <span class="math-container">$A_h$</span> corresponding to autological and heterological adjectives, respectively, then it could be the case that <span class="math-container">$\text{(heterological)}\in A-(A_a\cup A_h)$</span>, i.e., it belongs to neither of the two sets(unless <span class="math-container">$A_a\cap A_h=\emptyset$</span> and <span class="math-container">$A_a\cup A_h=A$</span>).</li> <li>On a lighter note, I'd like to know about the mathematical significance of this paradox, and how it's dealt with in modern set theories.</li> </ol> <p>Although I understand the answer(s) could be very abstract, please add a simpler analogy along with a necessary technical explanation, if possible.</p>
Dan Christensen
3,515
<p>Logically speaking, Grelling's Paradox is just Russell's Paradox--only the names have been changed.</p> <p>For better or worse, natural language allows us to define things that cannot logically exist. Logically, just like there can exist no <strong>set</strong> of all those sets that are not <strong>elements of</strong> themselves (Russell's Paradox), there can exist no <strong>word</strong> that <strong>describes</strong> all those words that do not describe themselves (Grelling's Paradox).</p> <p>Russell's Paradox:</p> <p><span class="math-container">$\neg \exists r: [Set(r) \land \forall a: [Set(a) \implies [a\in r \iff a\notin a]]]$</span></p> <p>We can substitute:</p> <ul> <li><p><span class="math-container">$Set(x) \to Word(x)~~~$</span> (unary predicate or domain of discourse)</p> </li> <li><p><span class="math-container">$x \in y \to Describes(y,x)~~~$</span> (binary predicate)</p> </li> <li><p><span class="math-container">$r \to h~~~$</span> (<span class="math-container">$h$</span> = &quot;heterological&quot;)</p> </li> </ul> <p>Grelling's Paradox:</p> <p><span class="math-container">$\neg \exists h: [Word(h) \land \forall a: [Word(a) \implies [Describes(h,a) \iff \neg Describes(a,a)]]]$</span></p>
3,120,468
<p>I'm wondering if it's correct to say that the absolute value squared means that the values above <span class="math-container">$1$</span> are magnified, whereas the values below <span class="math-container">$1$</span> are damped (I'm considering only positive values because the codomain of the absolute value is <span class="math-container">$R^+$</span>).</p> <p>In general the above consideration should be valid for all the functions which have <span class="math-container">$R^+$</span> as codomain.</p> <p>Thank you in advance.</p>
G Cab
317,234
<p>We can approach the problem geometrically.</p> <p>Each triple toss <span class="math-container">$\left( {x_{\,1} ,x_{\,2} ,x_{\,3} } \right)$</span> corresponds to an integral point in a cube with side <span class="math-container">$[1,6]$</span>: wlog we can take the die to be numbered <span class="math-container">$0,1,\cdots , 5$</span>, it being more convenient to place the cube with one vertex at the origin.<br> Let's make it general and take a cube with side <span class="math-container">$[0,r]$</span>.<br> The die being fair, all the <span class="math-container">$\left({r+1} \right)^3$</span> points are equiprobable.</p> <p>Consider the region made by the points which obey <span class="math-container">$$ {0 \le x_{\,1} &lt; x_{\,2} &lt; x_{\,3} \le r} $$</span> The <em>integral volume</em> of such a region is clearly <span class="math-container">$$ \binom{r+1}{3} $$</span> (choose three values from the <span class="math-container">$r+1$</span>, and arrange in order).<br> There are six such regions, corresponding to the permutations of the <span class="math-container">$x_k$</span>.</p> <p>The diagonal of the cube instead, <span class="math-container">$x_1=x_2=x_3$</span>, has a <em>volume</em> of <span class="math-container">$r+1$</span>.</p> <p>Let's make it even more general: in the case of <span class="math-container">$m$</span> siblings, note that the expansion of the binomial in terms of falling factorials via the Stirling numbers
of the 2nd kind <span class="math-container">$$ \eqalign{ &amp; \left( {r + 1} \right)^{\,m} = \sum\limits_{\left( {0\, \le } \right)\,k\,\left( { \le \,m} \right)} { \left\{ \matrix{ m \cr k \cr} \right\}\left( {r + 1} \right)^{\underline {\,k\,} } } = \cr &amp; = \sum\limits_{\left( {0\, \le } \right)\,k\,\left( { \le \,m} \right)} {\underbrace {\left( {k!\left\{ \matrix{m \cr k \cr} \right\}} \right)}_{N.} \underbrace {\left( \matrix{ r + 1 \cr k \cr} \right)}_{Vol.}} \cr} $$</span> represents the splitting of the cube into the regions <span class="math-container">$$ \left[ {x_{\,1} &lt; x_{\,2} &lt; \cdots &lt; x_{\,m} } \right],\left[ {x_{\,1} = x_{\,2} &lt; \cdots &lt; x_{\,m} } \right], \cdots , \left[ {x_{\,1} = x_{\,2} = \cdots = x_{\,m} } \right] $$</span> and in the formula above <span class="math-container">$k$</span> represents the number of <span class="math-container">$&lt;$</span> signs + 1.<br> This can be demonstrated by resorting to the meaning of the Stirling number of the 2nd kind <span class="math-container">$\left\{ \matrix{ n \cr k \cr} \right\}$</span> as the number of ways to partition a set of n objects into k non-empty subsets.</p> <p>The decision will be taken when <span class="math-container">$$ \left[ {x_{\,1} \le x_{\,2} \le \cdots \le x_{\,m - 1} &lt; x_{\,m} } \right] $$</span> or any permutation of it holds.<br> Fixing the value of <span class="math-container">$x_m$</span>, the volume of such a region will evidently be <span class="math-container">$x_m^{m-1}$</span>, so that<br> the total volume of such a pyramid is <span class="math-container">$$ V = \sum\limits_{1\, \le \,k\, \le \,r} {k^{\,m - 1} } $$</span> The m-tuples in which <span class="math-container">$x_m$</span> is strictly higher than the other components are of course different (non-overlapping) from those in which the highest component is <span class="math-container">$x_1$</span> or <span class="math-container">$x_2$</span> and so on.
Therefore we have <span class="math-container">$m$</span> such regions.</p> <p>Now we have the basic elements to solve your question.</p> <p>In your particular case, <span class="math-container">$m=3$</span> and <span class="math-container">$r=5$</span>, and the probability that a decision is taken is <span class="math-container">$$ P_{decision} = {{3\sum\limits_{1\, \le \,k\, \le \,5} {k^{\,2} } } \over {6^{\,3} }} = {{165} \over {216}} = {{55} \over {72}} $$</span></p>
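The closed form above is easy to corroborate by brute force; the following quick sketch (my addition, not part of the answer) enumerates all equally likely triples:

```python
from fractions import Fraction
from itertools import product

# Enumerate all 6^3 equally likely triples and count those whose maximum
# is unique, i.e. one sibling strictly out-throws the other two.
decided = sum(1 for t in product(range(1, 7), repeat=3)
              if sorted(t)[2] > sorted(t)[1])
p_decision = Fraction(decided, 6**3)
print(decided, p_decision)  # 165 and 55/72
```

The count 165 matches $3\sum_{1\le k\le 5}k^2$ exactly.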
3,120,468
<p>I'm wondering if it's correct to say that the absolute value squared means that the values above <span class="math-container">$1$</span> are magnified, whereas the values below <span class="math-container">$1$</span> are damped (I'm considering only positive values because the codomain of the absolute value is <span class="math-container">$R^+$</span>).</p> <p>In general the above consideration should be valid for all the functions which have <span class="math-container">$R^+$</span> as codomain.</p> <p>Thank you in advance.</p>
user
293,846
<p>The probability that a round ends without decision is <span class="math-container">$$ \underbrace{3\left(\frac16\right)^2\left(\frac56+\frac46+\frac36+\frac26+\frac16\right)}_{\frac{15}{72}}+ \underbrace{6\left(\frac16\right)^3}_{\frac{2}{72}}=\frac{17}{72}. $$</span> Here the first summand stands for two equal scores with a value larger than the third one. The factor <span class="math-container">$3$</span> stands for the number of ways to choose one person from three people. The values <span class="math-container">$\frac56,\frac46,\frac36,\frac26,\frac16$</span> in the parentheses are the probabilities of the third score to be less than <span class="math-container">$6,5,4,3,2$</span>, respectively. The second summand stands for the case with three equal scores.</p> <p>Thus, the answer to the first question is <span class="math-container">$$p_1=\left (\frac{17}{72}\right)^n .$$</span></p> <p>To answer the second question, recall that the probability to get all three scores equal is <span class="math-container">$\frac{15}2$</span> times less than to get only two equal scores. Therefore <span class="math-container">$$ p_2=1-\left (\frac{15}{17}\right)^{n-1}.$$</span></p>
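Both summands, and the total of $\frac{17}{72}$, can be verified by exhaustive enumeration (a check I added, assuming "no decision" means the top score is shared):

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=3))  # all 216 equally likely outcomes

# "No decision": either exactly two players share the top score...
two_way = sum(1 for t in rolls if sorted(t)[2] == sorted(t)[1] > sorted(t)[0])
# ...or all three scores coincide.
three_way = sum(1 for t in rolls if t[0] == t[1] == t[2])

p_no_decision = Fraction(two_way + three_way, len(rolls))
print(p_no_decision)  # 17/72
```

Here `two_way` is 45 (i.e. $\frac{15}{72}$ of 216) and `three_way` is 6 (i.e. $\frac{2}{72}$).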
448,227
<p>Is it possible to define the completion of a metric space using categorical terms?</p>
Martin Brandenburg
1,650
<p>At least for Lawvere metric spaces it is a special case of the <a href="http://ncatlab.org/nlab/show/Cauchy+complete+category" rel="nofollow">Cauchy completion</a>.</p>
1,510,124
<p>What is the method that would be used to find the unit normal vector to the surface given by the equation $x+y^{2}+2z=4$?</p>
Surb
154,545
<p>$$\{(x,y,z)\mid x+y^2+2z=4\}=\{u(y,z)=(4-y^2-2z,\ y,\ z)\mid y,z\in \mathbb R\}$$</p> <p>I denote $u_y=\frac{\partial u}{\partial y}$ and $u_z=\frac{\partial u}{\partial z}$. A normal is given by $u_y\times u_z$.</p>
1,510,124
<p>What is the method that would be used to find the unit normal vector to the surface given by the equation $x+y^{2}+2z=4$?</p>
abel
9,252
<p>If you differentiate $x + y^2 + 2z = 4 $ and use the fact that $(dx, dy, dz)$ is a tangent vector at $(x, y, z),$ then you get $$0=dx + 2ydy + 2dz = (1,2y, 2)^\top \cdot (dx, dy, dz)^\top.$$ That is, the normal vector at the point $(x,y,z)$ to the surface $x + y^2 + 2z = 4 $ is $(1,2y,2)^\top.$</p>
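The two approaches agree; a small numeric cross-check (my addition) computes $u_y\times u_z$ for the parametrisation $u(y,z)=(4-y^2-2z,\ y,\ z)$ and recovers the gradient $(1,2y,2)$:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normal(y):
    # Writing the surface as x = 4 - y**2 - 2*z, the tangent vectors are
    # u_y = (-2y, 1, 0) and u_z = (-2, 0, 1); their cross product is a normal.
    u_y = (-2.0 * y, 1.0, 0.0)
    u_z = (-2.0, 0.0, 1.0)
    return cross(u_y, u_z)

print(normal(3.0))  # (1.0, 6.0, 2.0), i.e. the gradient (1, 2y, 2) at y = 3
```

Normalize by the length $\sqrt{1+4y^2+4}$ if a unit normal is required.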
2,055,546
<p>So I stumbled across this math problem:</p> <p>$1015$ USD is to be shared between $3$ people: Person A, Person B and Person C. Person A gets double of what Person B gets and Person B gets $100$ USD more than Person C</p> <p>Basically:</p> <p>$A = 2B$</p> <p>$B = C + 100$</p> <p>$A + B + C = 1015$</p> <p>Can someone please tell me how it's solved, rather than telling me the outcome?</p> <p>UPDATE: Thank you to people who helped me solve it. What I realized was, that I used the equation C = B - 100 to try to solve the problem. You made me realize that. So thank you.</p>
egreg
62,967
<p>If we lend them $\$100$, then we can give $B$ and $C$ the same amount of money, after which $C$ will give back the $\$100$ note.</p> <p>Now, if we give $\$1$ to $B$ and $C$, $A$ will receive $\$2$. The total would be $\$4$. </p> <p>OK, but we have $\$1115$ to distribute.</p> <blockquote class="spoiler"> <p> Just divide $1115$ by four, two parts to $A$, one part to $B$ and $C$. Don't forget to get back $\$100$ from $C$.</p> </blockquote>
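The lend-$100 trick translates directly into a few lines of exact arithmetic (my sketch, not part of the answer):

```python
from fractions import Fraction

total = Fraction(1015) + 100          # lend the group $100 first
part = total / 4                      # one part each for B and C, two for A
a, b = 2 * part, part
c = part - 100                        # C hands the $100 note back
print(a, b, c)  # 1115/2 1115/4 715/4, i.e. $557.50, $278.75, $178.75
```

All three original constraints check out exactly.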
636,289
<p>Here's what I've gathered so far: </p> <p>First I calculate the number of combinations of $5$ cards from the whole deck, which is $2 598 960.$</p> <p>I'll use the complement, so I want to the combinations of everything but the aces and kings, so it's again combinations of $5$ from $44$ cards, which is $1086008$.</p> <p>$$1-\frac{1086008}{2598960} = 0.582$$</p> <p>That is incorrect however. What am I doing wrong? The complement for "at least one $X$ and at least one $Y$" should be "not $X$ or not $Y$", which is "everything but aces and kings". Is that even correct?</p>
Edward ffitch
26,243
<p>We need to subtract from $C^{52}_{5}$ (the total number of ways of choosing $5$ cards from $52$) the number of ways of selecting no aces and the number of ways of selecting no kings, but we must add the number of ways of selecting no aces <strong>and</strong> no kings (else we'll have counted these twice). The number of ways of selecting neither aces nor kings is $C^{52-8}_{5}=C^{44}_{5}$. The number of ways of selecting no aces is $C^{52-4}_{5}=C^{48}_{5}$. This is equal to the number of ways of selecting no kings. So we have: $C^{52}_{5} - 2C^{48}_{5}+C^{44}_{5}$ ways of selecting $5$ cards from a deck such that we have at least one ace and at least one king. The probability of this is then $\dfrac{C^{52}_{5} - 2C^{48}_{5}+C^{44}_{5}}{C^{52}_{5}} \approx 10\%$.</p>
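The inclusion-exclusion count is easy to confirm with exact integer arithmetic (my check, using Python's `math.comb`):

```python
from math import comb

total = comb(52, 5)                       # 2,598,960 five-card hands in all
favourable = comb(52, 5) - 2 * comb(48, 5) + comb(44, 5)  # inclusion-exclusion
p = favourable / total
print(favourable, round(p, 4))  # 260360 0.1002
```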
3,997,321
<p>Let <span class="math-container">$X$</span> be a set, <span class="math-container">$\tau_1,\tau_2$</span> two topologies on <span class="math-container">$X$</span>, and consider the following statements</p> <ol> <li><span class="math-container">$\tau_1\subseteq \tau_2$</span> (i.e <span class="math-container">$\tau_1$</span> is coarser/weaker than <span class="math-container">$\tau_2$</span>, or that <span class="math-container">$\tau_2$</span> is finer/stronger than <span class="math-container">$\tau_1$</span>)</li> <li>For every net <span class="math-container">$\langle x_i\rangle_{i\in I}$</span> in <span class="math-container">$X$</span> and <span class="math-container">$x\in X$</span>, if <span class="math-container">$x_i\to x$</span> relative to the topology <span class="math-container">$\tau_2$</span> then <span class="math-container">$x_i\to x$</span> relative to the topology <span class="math-container">$\tau_1$</span>.</li> </ol> <p>It is clear to me that <span class="math-container">$(1)$</span> implies <span class="math-container">$(2)$</span>. My question is whether the converse is also true, because then this would seem like a nice justification for the terminology &quot;weak/strong&quot; topology in the context of topological vector spaces.</p>
Saúl RM
807,670
<p>The converse is true: let's prove <span class="math-container">$\neg1\implies\neg2$</span>. If you have a set <span class="math-container">$U$</span> in <span class="math-container">$\tau_1$</span> but not in <span class="math-container">$\tau_2$</span>, you can pick a point <span class="math-container">$x\in U$</span> which isn't in the interior of <span class="math-container">$U$</span> with respect to the topology <span class="math-container">$\tau_2$</span>. Thus, if <span class="math-container">$(B_i)_{i\in I}$</span> is a base of neighbourhoods of <span class="math-container">$x$</span> in <span class="math-container">$\tau_2$</span> ordered by reverse inclusion, for each <span class="math-container">$i$</span> you can pick a point <span class="math-container">$x_i\in B_i\setminus U$</span>. So the net <span class="math-container">$(x_i)_{i\in I}$</span> converges to <span class="math-container">$x$</span> in <span class="math-container">$\tau_2$</span> but it doesn't in <span class="math-container">$\tau_1$</span>, because no point of the net is in <span class="math-container">$U$</span>.</p>
4,010,408
<p>Good day. Can anybody help me? Thanks.</p> <p>Let <span class="math-container">${X_i}$</span> be independent and identically distributed random variables, normal with mean <span class="math-container">$μ$</span> and standard deviation 2. What is the minimum value of <span class="math-container">$n$</span> such that <span class="math-container">$P(|\dfrac{S_n}{n}-μ|&lt;.01) \geq .99$</span>?</p>
Vons
274,987
<p>We are given <span class="math-container">$X\sim N(\mu, 4)$</span>, and then <span class="math-container">$S\sim N(n\mu, 4n)$</span>.</p> <p>So <span class="math-container">$$\begin{split}P(|\frac {S-n\mu}{n}|&lt;.01)&amp;=P(|\frac{S-n\mu}{2\sqrt n}|&lt;.005\sqrt n)\\ &amp;=P(|Z|&lt;.005\sqrt n)\ge .99\end{split}$$</span></p> <p>Thus you need to evaluate the quantile function <span class="math-container">$\Phi^{-1}(.995)$</span> to find a lower bound for <span class="math-container">$.005\sqrt n$</span>, say you get <span class="math-container">$z^*$</span>. Then solve <span class="math-container">$.005\sqrt n\ge z^*$</span> to get the minimum value of <span class="math-container">$n$</span>. Don’t forget to round up to an integer.</p>
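With the standard library the recipe can be carried all the way to a number (my sketch; `NormalDist().inv_cdf` plays the role of the quantile function $\Phi^{-1}$):

```python
from math import ceil, sqrt
from statistics import NormalDist

z_star = NormalDist().inv_cdf(0.995)  # Phi^{-1}(.995), roughly 2.576
n_min = ceil((z_star / 0.005) ** 2)   # smallest n with .005*sqrt(n) >= z*
print(z_star, n_min)
```

By construction `n_min` is the least integer satisfying the inequality.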
406,514
<blockquote> <p>Find the Galois group $\operatorname{Gal}(f/\mathbb{Q})$ of the polynomial $f(x)=(x^2+3)(x^2-2)$.</p> </blockquote> <p>Any explanations during the demonstration, will be appreciated. Thanks!</p>
Kortlek
77,402
<p>The splitting field of $f$ is $\mathbb{Q}(\sqrt{2}, i\sqrt{3})$ and since $$[\mathbb{Q}(\sqrt{2}, i\sqrt{3}):\mathbb{Q}]=[\mathbb{Q}(\sqrt{2}, i\sqrt{3}):\mathbb{Q}(\sqrt{2})][\mathbb{Q}(\sqrt{2}):\mathbb{Q}]=2\cdot 2=4$$ the Galois group has four elements. Since every $\sigma\in$ Gal$(\mathbb{Q}(\sqrt{2}, i\sqrt{3})/\mathbb{Q})$ must take $\sqrt{2}$ and $i\sqrt{3}$ to another root of Irr$(\sqrt{2}, \mathbb{Q})=x^2-2$ and Irr$(i\sqrt{3}, \mathbb{Q})=x^2+3$ respectively, we see that $\sigma(\sqrt{2})=\pm \sqrt{2}$ and $\sigma(i\sqrt{3})=\pm i\sqrt{3}$. Thus every element in Gal$(\mathbb{Q}(\sqrt{2}, i\sqrt{3})/\mathbb{Q})$ has order $1$ or $2$. As the only groups of order $4$ are $\mathbb{Z}_4$ and $V_4$ and our group is not cyclic, it must be that the Galois group of $f$ over $\mathbb{Q}$ is $V_4$.</p>
1,182,644
<p>The title says it all. I think it's true, and I tried to prove it by showing that the derivative of this function: $-2Bxe^{-Bx^2}$ is bounded from above with a bound less than 1; in order to do that, I tried to use the Taylor series of $e^{-Bx^2}$, but it seems that leads nowhere. Any suggestion?</p> <p>Here $B&gt;0$ is a real number and we consider the Euclidean norm.</p>
Michael Hardy
11,667
<p>One can write $$ \int x\,d(x^2) = \int x\Big( 2x\,dx\Big)=\cdots $$ etc.</p> <p>However, the notation $$ \int_a^b f(x)\,dg(x) \tag 1 $$ is sometimes also taken to mean the <a href="http://en.wikipedia.org/wiki/Riemann%E2%80%93Stieltjes_integral" rel="nofollow">Riemann–Stieltjes integral</a> of $f$ with respect to $g$ on the interval $a\le x\le b$. That is defined as the limit as the mesh of the partition approaches $0$, of $$ \sum_i f(x_i^\ast)\,\Delta g(x)_i = \sum_i f(x_i^\ast)(g(x_{i+1})-g(x_i)) $$ (where $x_i^\ast$ is some point in $[x_i,x_{i+1}]$).</p> <p>This integral is the same as $$ \int_a^b f(x)g'(x)\,dx \tag 2 $$ <b>PROVIDED</b> $g'$ exists everywhere in the interval, and I think one can get by with something weaker than "everywhere" in some cases. <b>But</b> the Riemann–Stieltjes integral is defined even in cases where $g'$ fails to exist in many places and where $(1)$ is <em>not</em> equal to $(2)$. An example is the case where $g$ is the <a href="http://en.wikipedia.org/wiki/Cantor_function" rel="nofollow">Cantor function</a>. 
That function is everywhere continuous and its derivative is zero <a href="http://en.wikipedia.org/wiki/Almost_everywhere" rel="nofollow">almost everywhere</a>, yet the Riemann–Stieltjes integral $\displaystyle\int_0^1 1\,dg(x)$ is equal to $1$.</p> <p>If $f$ is some function of a random variable (capital) $X$ whose <a href="https://en.wikipedia.org/wiki/Cumulative_distribution_function" rel="nofollow">cumulative probability distribution function</a> is $g$, then the expected value $\operatorname{E}(f(X))$ is $$ \int_{-\infty}^\infty f(x)\,dg(x) $$ regardless of whether the distribution is discrete or continuous or a mixture of the two or none of the above, and "none of the above" is exemplified by the <a href="https://en.wikipedia.org/wiki/Cantor_distribution" rel="nofollow">Cantor distribution</a>.</p> <p>The Riemann–Stieltjes integral is defined only if the <a href="https://en.wikipedia.org/wiki/Bounded_variation" rel="nofollow">total variation of the function $g$, sometimes called the "integrator", is finite</a>, and also only if $f$ and $g$ have no common points of discontinuity.</p>
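To see the definition in action, here is a small numerical Riemann–Stieltjes sum (my illustration) for $\int_0^1 x\,d(x^2)$, whose value by the substitution above is $\int_0^1 2x^2\,dx = \tfrac23$:

```python
def rs_sum(f, g, a, b, n):
    """Riemann-Stieltjes sum of f dg over [a, b] on n equal subintervals,
    sampling f at the left endpoint of each subinterval."""
    h = (b - a) / n
    return sum(f(a + i * h) * (g(a + (i + 1) * h) - g(a + i * h))
               for i in range(n))

approx = rs_sum(lambda x: x, lambda x: x * x, 0.0, 1.0, 100_000)
print(approx)  # close to 2/3
```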
2,775,443
<p>The statement "$\displaystyle \sum_{n=1}^{\infty}a_n$ converges $\implies$ $\displaystyle \sum_{n=1}^{\infty}\cfrac{1}{a_n}$ diverges" looks natural, but do we have this implication? I am checking alternating series as a counter-example but could not find one yet. What can we say about the implication?</p>
hamam_Abdallah
369,188
<p>If $\sum a_n $ and $\sum b_n $ are both convergent, then $$\lim_{n\to+\infty}a_nb_n=0,$$ since each factor tends to $0$.</p> <p>But here, taking $b_n=\frac{1}{a_n}$, $$a_nb_n=a_n\cdot\frac {1}{a_n}=1\not\to0,$$ so $\sum \frac{1}{a_n}$ cannot converge.</p>
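A concrete instance of the argument (my addition): with $a_n = 1/n^2$ the series $\sum a_n$ converges, while $\sum 1/a_n = \sum n^2$ visibly blows up:

```python
# Partial sums of sum(1/n^2) stay bounded (the limit is pi^2/6, about 1.645),
# while the partial sums of sum(n^2) grow without bound.
s_a = sum(1 / n**2 for n in range(1, 10_001))
s_inv = sum(n**2 for n in range(1, 10_001))
print(s_a, s_inv)
```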
502,994
<p>If the image is f(x), why does f(|x|) look like two triangles above the x axis (basically the right side duplicated on the left)?</p> <p><img src="https://i.imgur.com/pGIdNYZ.jpg" alt="function image"></p>
Sudarsan
88,391
<p>Ok, I think I should have put it in a better way like the other answers. Let $g(x)=f(|x|)$; then $g(x)$ looks like the graph of $f(x)$ for $x\geq0$, mirrored onto the left of the $y$-axis for $x&lt;0$, since $$g(x)=\left\{ \begin{array}{ll} f(x) &amp; x\geq0 \\ f(-x) &amp; x&lt;0\end{array}\right.$$ Now, $g(-2)=f(2)$ and so forth. </p>
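In code (my addition), the same point: $g(x)=f(|x|)$ is automatically an even function, whatever $f$ is:

```python
def make_even(f):
    """Return g(x) = f(|x|): f's right half, mirrored onto the left."""
    return lambda x: f(abs(x))

f = lambda x: x**3 - 2 * x + 1   # an arbitrary, deliberately non-even sample
g = make_even(f)
print(g(-2.0), g(2.0))  # both equal f(2) = 5
```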
1,457,478
<p>I'm trying to teach myself some number theory. In my textbook, this proof is given:</p> <blockquote> <p><strong>Example (2.3.1)</strong> Show that an integer is divisible by 3 if and only if the sum of its digits is a multiple of 3.</p> <p>Let <span class="math-container">$n=a_0a_1\ldots a_k$</span> be the decimal representation of an integer <span class="math-container">$n$</span>. Thus <span class="math-container">$n=a_k+a_{k-1}10+a_{k-2}10^2+\cdots+a_010^k$</span> where <span class="math-container">$0\le a_i&lt;10$</span> The key observation is that <span class="math-container">$10\equiv1\pmod3$</span>, i.e., <span class="math-container">$[10]=[1]$</span>. Hence <span class="math-container">$[10^i]=[10]^i=[1]$</span>, i.e., <span class="math-container">$10^i\equiv1\pmod 3$</span>. The assertion is an immediate consequence of this congruence.</p> </blockquote> <p>I don't understand the last statement. Why does it follow from that congruence?</p>
Alexander Belopolsky
185,688
<blockquote> <p>The assertion is an immediate consequence of this congruence.</p> </blockquote> <p>Since any power of $10$ is congruent to $1$ (mod $3$),</p> <p>$$n=a_k+a_{k-1}10+a_{k-2}10^2+\cdots+a_010^k\equiv a_k+a_{k-1}+a_{k-2}+\cdots+a_0\pmod3.$$</p> <p>In other words, the remainder of dividing $n$ by $3$ is the same as the remainder of dividing the sum of its digits by $3$. In the case of zero remainder, we get the sought assertion: $n$ is divisible by $3$ iff the sum of its digits is divisible by $3$.</p>
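A quick spot-check of the congruence (my snippet):

```python
def digit_sum(n):
    return sum(int(d) for d in str(abs(n)))

# n and its digit sum always leave the same remainder modulo 3, so one is
# divisible by 3 exactly when the other is.
for n in range(100_000):
    assert n % 3 == digit_sum(n) % 3
print(digit_sum(123456), 123456 % 3)  # 21 0
```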
1,231,082
<p>Let $\text{tr}A$ be the trace of the matrix $A \in M_n(\mathbb{R})$.</p> <ul> <li>I realize that $\text{tr}A: M_n(\mathbb{R}) \to \mathbb{R}$ is obviously linear (but how can I write down a <em>formal</em> proof?). However, I am confused about how I should calculate $\text{dim}(\text{Im(tr)})$ and $\text{dim}(\text{Ker(tr)})$ and a basis for each of these subspace according to the value of $n$. </li> <li>Also, I don’t know how to prove that $\text{tr}(AB)= \text{tr}(BA)$, and I was wondering if it is true that $\text{tr}(AB)= \text{tr}(A)\text{tr}(B)$. </li> <li>Finally, I wish to prove that $g(A,B)=\text{tr}(AB)$ is a positive defined scalar product if $A,B$ are <em>symmetric</em>; and also $g(A,B)=-\text{tr}(AB)$ is a scalar product if $A,B$ are <em>antisymmetric</em>. Can you show me how one can proceed to do this? </li> </ul> <p>I would really appreciate some guidance and help in clarifying the doubts and questions above. Thank you.</p>
Bernard
202,857
<p>Using Little Fermat, we have: $$2^{105} +3^{105}\equiv 2^{105\bmod 6} +3^{105\bmod 6}\equiv 2^{3}+ 3^{3}\equiv 1 + 6\equiv 0 \mod 7.$$</p>
1,231,082
<p>Let $\text{tr}A$ be the trace of the matrix $A \in M_n(\mathbb{R})$.</p> <ul> <li>I realize that $\text{tr}A: M_n(\mathbb{R}) \to \mathbb{R}$ is obviously linear (but how can I write down a <em>formal</em> proof?). However, I am confused about how I should calculate $\text{dim}(\text{Im(tr)})$ and $\text{dim}(\text{Ker(tr)})$ and a basis for each of these subspace according to the value of $n$. </li> <li>Also, I don’t know how to prove that $\text{tr}(AB)= \text{tr}(BA)$, and I was wondering if it is true that $\text{tr}(AB)= \text{tr}(A)\text{tr}(B)$. </li> <li>Finally, I wish to prove that $g(A,B)=\text{tr}(AB)$ is a positive defined scalar product if $A,B$ are <em>symmetric</em>; and also $g(A,B)=-\text{tr}(AB)$ is a scalar product if $A,B$ are <em>antisymmetric</em>. Can you show me how one can proceed to do this? </li> </ul> <p>I would really appreciate some guidance and help in clarifying the doubts and questions above. Thank you.</p>
user223741
223,741
<p>$$\begin{align} 2^3 &amp;\equiv 1 (\mod 7)\\ (2^3)^{35} &amp;\equiv 1^{35} (\mod 7)\\ 2^{105} &amp;\equiv 1 (\mod 7)\tag1 \end{align}$$ Again, $$\begin{align} 3^3 &amp;\equiv -1 (\mod 7)\\ (3^3)^{35} &amp;\equiv (-1)^{35} (\mod 7)\\ 3^{105} &amp;\equiv -1 (\mod 7)\tag2 \end{align}$$ Adding (1) and (2) we get, $$\begin{align} 2^{105}+3^{105} &amp;\equiv1+(-1)\space (\mod7)\\ \text{or,}\quad 2^{105}+3^{105} &amp;\equiv0\space (\mod7) \end{align}$$ This implies that $\space2^{105}+3^{105}$ is divisible by 7.</p>
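Python's exact big-integer arithmetic confirms both congruences and the conclusion directly (my check):

```python
# pow(base, exp, mod) computes modular powers without forming huge numbers,
# though even the full 32- and 51-digit integers here are cheap to handle.
n = 2**105 + 3**105
print(n % 7, pow(2, 105, 7), pow(3, 105, 7))  # 0 1 6
```

The residues 1 and 6 match lines (1) and (2) of the answer.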
344,994
<p>Are there any decimals that multiply to make 1? I know that 0.25 x 4 = 1, but that's a decimal and a whole number. I need two decimals. Any ideas? I'm not an expert btw so... thanks</p>
Cameron Buie
28,900
<p>By decimal, you seem to mean a $0$, then a decimal point, then some numbers, yes? Any such creature is positive and less than $1$. The product of two positive numbers less than $1$ will again be less than $1$.</p> <p>Now, if you just want two non-whole numbers that multiply to $1$, then $2.5$ and $0.4$ do the job.</p>
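Exact rational arithmetic (my check) confirms the pair while sidestepping any binary floating-point round-off:

```python
from fractions import Fraction

a, b = Fraction("2.5"), Fraction("0.4")  # 5/2 and 2/5
print(a * b)  # 1
```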
1,549,920
<p>Prove that $$(1+x)^\frac{1}{x}=e-\frac{e}{2}x+\frac{11e}{24}x^2-\frac{7e}{16}x^3+\cdots$$ where $e$ is Euler's number. Can anyone give a proof? I tried a series expansion but could not get it.</p>
Clement C.
75,808
<p><strong>Hint:</strong> you can use Taylor expansions, indeed. $$\ln(1+x) = x-\frac{x^2}{2} + \frac{x^3}{3} + o(x^3)$$ when $x \to 0$; and $e^x = 1+x+\frac{x^2}{2} + \frac{x^3}{6} + o(x^3)$. Now, $$(1+x)^{\frac{1}{x}} = e^{\frac{1}{x}\ln(1+x)},$$ so composing the above two series will give you what you seek.</p>
1,305,661
<p>How can I solve this Question ? </p> <ul> <li>Find all solutions exactly for $2\sin^2 x - \sin x = 0$</li> </ul> <p>my answer was $( 2k\pi,\pi+2k\pi, 11\pi/6, 7\pi/6)$ but my teacher answer was $( 2k\pi,\pi+2k\pi, \pi/6 +2k\pi, 5\pi/6 +2k\pi )$</p>
Batman
127,428
<p>You have $2 \sin^2 x - \sin x =0$ or $\sin x ( 2 \sin x -1) =0$, so $\sin x = 0$ or $2 \sin x -1 = 0$.</p> <p>$\sin x =0$ implies $x = \pi k$ for integer $k$. </p> <p>$2 \sin x - 1 =0 $ means $\sin x = \frac{1}{2}$, which has solutions in $[0, 2\pi)$ of $\pi/6$ and $5 \pi/6$. Since $\sin$ is $2 \pi$-periodic, you get $\pi/6 + 2 \pi k$ and $5 \pi/6 + 2 \pi k$ for integer $k$. </p>
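A numeric sanity check of both solution families (my addition):

```python
from math import sin, pi, isclose

def lhs(x):
    return 2 * sin(x) ** 2 - sin(x)

# Representatives of sin x = 0 and sin x = 1/2, shifted by multiples of 2*pi.
roots = [0.0, pi, pi / 6, 5 * pi / 6, pi / 6 + 2 * pi, 5 * pi / 6 - 2 * pi]
print(all(isclose(lhs(x), 0.0, abs_tol=1e-12) for x in roots))  # True
```

By contrast, $11\pi/6$ (where $\sin x = -\tfrac12$) gives $2\cdot\tfrac14+\tfrac12 = 1 \neq 0$.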
1,305,661
<p>How can I solve this Question ? </p> <ul> <li>Find all solutions exactly for $2\sin^2 x - \sin x = 0$</li> </ul> <p>my answer was $( 2k\pi,\pi+2k\pi, 11\pi/6, 7\pi/6)$ but my teacher answer was $( 2k\pi,\pi+2k\pi, \pi/6 +2k\pi, 5\pi/6 +2k\pi )$</p>
Community
-1
<p>Check your solution:</p> <p>$$\sin\left(\frac{11\pi}6\right)=\sin\left(\frac{7\pi}6\right)=-\frac12\to2\cdot\frac14+\frac12=1\color{red}{\neq0}.$$</p> <p>And the Teacher's solution:</p> <p>$$\sin\left(\frac{\pi}6\right)=\sin\left(\frac{5\pi}6\right)=\frac12\to2\cdot\frac14-\frac12\color{green}{=0}.$$</p>
587,162
<p>I want to prove that the following holds, where the $+$ means Minkowski sum:</p> <p>$$ conv(A+B)=conv(A)+conv(B) $$</p> <p>A convex combination of points of $A+B$ with product weights has the form $$ \sum_{j,k}\lambda_j\mu_k(a_j+b_k), \qquad \lambda_j,\mu_k\ge0,\ \sum_j\lambda_j=\sum_k\mu_k=1 $$</p> <p>I don't know how to continue from here. </p>
Community
-1
<p>$\newcommand{\co}{\operatorname{co}}$ As already noted we have $\co (A+B) \subset \co A +\co B$. Now let $v+w \in \co A + \co B$. Write $v = \sum_i \alpha_i a_i$ and $w = \sum_j \beta_j b_j$. First we have $$ v + b_j = \sum_i \alpha_i (a_i + b_j) $$ so $v + b_j \in \co ( A +B)$. Now we have $$ v+w = \sum_j \beta_j ( v+ b_j). $$ So $v+w \in \co( \co (A+B)) = \co (A+B)$ and we are done.</p>
376,772
<blockquote> <p>Find a polynomial $q(a)$ of degree less than or equal to $2$ that satisfies the conditions $q(a_0)=b_0, q'(a_0)=b'_0, \ \text{and} \ q'(a_1)=b'_1,$ where $a_0,a_1,b_0,b'_0,b'_1\in \mathbb{R}$ and $a_0\ne a_1$. And give a formula of the form $q(a)=b_0k_0(a)+b'_0k_1(a)+b'_1k_2(a).$</p> </blockquote> <p>How can I do this question? I am self-teaching numerical analysis and this question is in the book <em>An Introduction to Numerical Analysis, by Atkinson</em> but I don't know how to do it. </p>
Emily
31,475
<p>Let $q(x) = k_0 + k_1 x + k_2 x^2 + \cdots + k_n x^n$.</p> <p>Since the polynomial is of degree less than or equal to 2, $n \le 2$, as $n$ represents the degree of the polynomial.</p> <p>Thus, $q(x) = k_0 + k_1x + k_2x^2$.</p> <p>If $q(a_0) = b_0$, then $k_0+k_1 a_0 +k_2a_0^2 = b_0$. Proceed from here.</p>
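A worked instance (my numbers): since $q'(x)=k_1+2k_2x$, the two derivative conditions form a $2\times2$ system, and $k_0$ then follows from $q(a_0)=b_0$:

```python
def fit(a0, a1, b0, db0, db1):
    """Return (k0, k1, k2) with q(a0) = b0, q'(a0) = db0, q'(a1) = db1,
    where q(x) = k0 + k1*x + k2*x**2 (requires a0 != a1)."""
    k2 = (db1 - db0) / (2 * (a1 - a0))  # subtract the derivative conditions
    k1 = db0 - 2 * k2 * a0
    k0 = b0 - k1 * a0 - k2 * a0**2
    return k0, k1, k2

k0, k1, k2 = fit(0.0, 1.0, 1.0, 2.0, 4.0)
print(k0, k1, k2)  # 1.0 2.0 1.0, i.e. q(x) = 1 + 2x + x**2
```

Expanding each $k_i$ in terms of $b_0, b'_0, b'_1$ yields the requested cardinal functions $k_0(a), k_1(a), k_2(a)$.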
4,033,453
<p>How many roots does the equation <span class="math-container">$$\left\lfloor\frac x3\right\rfloor=\frac x2$$</span> have?</p> <ol> <li><span class="math-container">$1$</span></li> <li><span class="math-container">$2$</span></li> <li><span class="math-container">$3$</span></li> <li>infinitely many</li> </ol> <p>I checked that <span class="math-container">$x=0$</span> and <span class="math-container">$x=-2$</span> are the answers, so I think the answer is <span class="math-container">$(2)$</span>. but I don't know how to solve the problem in general.</p>
hamam_Abdallah
369,188
<p><strong>Hint</strong></p> <p>Put <span class="math-container">$ x=3X$</span>. The equation becomes <span class="math-container">$$\lfloor X\rfloor =\frac 32 X$$</span></p> <p>which gives</p> <p><span class="math-container">$$X-\lfloor X\rfloor =-\frac X2\in[0,1)$$</span></p> <p>So, there are three cases: <span class="math-container">$1) X\in(-2,-1) \implies$</span></p> <p><span class="math-container">$$ X=\frac{-4}{3}\implies x=-4$$</span></p> <p><span class="math-container">$2) X\in[-1,0)\implies $</span></p> <p><span class="math-container">$$X=\frac{-2}{3}\implies x=-2$$</span></p> <p><span class="math-container">$3) X=0=x$</span>.</p> <p>In conclusion, the equation has three roots, so the answer is (3).</p>
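The hint can be double-checked exhaustively (my addition): since $\lfloor x/3\rfloor$ is always an integer, any root must have $x/2\in\mathbb Z$, i.e. $x$ an even integer, so a finite search over a wide window settles the count:

```python
# x/2 = floor(x/3) forces x/2 to be an integer, so any root is an even integer.
# For Python ints, x // 3 is exactly floor(x/3), including for negative x.
roots = [x for x in range(-1000, 1001, 2) if x // 3 == x / 2]
print(roots)  # [-4, -2, 0]
```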
3,057,200
<blockquote> <p>Does there exist an analytic function <span class="math-container">$f:D\rightarrow\mathbb C$</span>, where <span class="math-container">$D$</span> is a domain containing the unit disk, such that <span class="math-container">$f(z) = e^{i \text{Im} z}$</span> on the unit circle <span class="math-container">$|z|=1$</span>?</p> </blockquote> <p>I'm supposed to rely on the fact that if <span class="math-container">$f: B_R=\left \{ |z|\leq R \right \}\rightarrow \mathbb C$</span> is analytic and even on <span class="math-container">$(-R,R)$</span> (so <span class="math-container">$f(x)=f(-x)$</span> when <span class="math-container">$x$</span> is real), then it is also even on <span class="math-container">$B_R$</span>, outside the real number line. </p> <p>It seems that we need to show that <span class="math-container">$f$</span> isn't even on <span class="math-container">$B_1$</span> but is even on <span class="math-container">$(-1,1)$</span>, which I cannot show. We only know how <span class="math-container">$f$</span> behaves on the unit circle!</p>
Community
-1
<p>Here is an answer which only uses your hint in spirit. Ideally you should think about Greg Martin's hints before reading this, since they are of a similar spirit.</p> <p>You know that <span class="math-container">$f(z)$</span> satisfies the relation <span class="math-container">$f(z)f(1/z) = 1$</span> on the unit circle; because <span class="math-container">$g(z) = f(z)f(1/z) -1$</span> is a holomorphic function defined in a neighborhood of the circle, and it vanishes on a set with an accumulation point, it vanishes everywhere on that neighborhood. By the same logic <span class="math-container">$f(z) f(-z) = 1$</span> on the unit disc as well. </p> <p>In particular, the second formula implies <span class="math-container">$f(z)$</span> is nowhere zero on the unit disc. This allows us to use the first formula to give <span class="math-container">$f$</span> an extension to the entire complex plane; for <span class="math-container">$|z| &gt; 1$</span>, set <span class="math-container">$f(z) = 1/f(1/z)$</span>. Because the RHS is already holomorphic, and agrees with <span class="math-container">$f$</span> in a neighborhood of the unit circle, these patch together to give us a formula for a globally defined extension of <span class="math-container">$f$</span> (which I will denote by the same name). </p> <p>Because <span class="math-container">$1/f$</span> is continuous and nonzero on the unit disc, we see that <span class="math-container">$1/f$</span> achieves a maximum, and hence that <span class="math-container">$f(1/z)$</span> achieves a maximum on <span class="math-container">$\Bbb C \setminus \Bbb D$</span>. </p> <p>Combining these two maxima, we see that <span class="math-container">$|f|$</span> is bounded above. Thus <span class="math-container">$f$</span> is a bounded holomorphic function, and hence constant, contradicting your formula on the circle.</p>
2,917,024
<p>Let $H$ be a Hilbert space and let ${e_n} ,\ n=1,2,3,\ldots$ be an orthonormal basis of $H$. Suppose $T$ is a bounded linear oprator on $H$. Then which of the following can not be true? $$(a)\quad T(e_n)=e_1, n=1,2,3,\ldots$$ $$(b)\quad T(e_n)=e_{n+1}, n=1,2,3,\ldots$$ $$(c)\quad T(e_n)=e_{n-1} , n=2,3,4,\ldots , \,\,T(e_1)=0$$</p> <p>I think $(a)$ is not true because $e_1$ can not span the range space. I really don't know how to approach to this problem. Could you please give me some hints? Thank you very much.</p>
Eric Wofsey
86,856
<p>A bounded operator is completely determined by where it sends each $e_n$. Specifically, every element of $H$ has the form $\sum a_ne_n$ for some scalars $a_n$ (with $\sum |a_n|^2&lt;\infty$), and then we must have $T(\sum a_ne_n)=\sum a_nT(e_n)$.</p> <p>So, your task is to determine, in each of the three cases, whether the formula $$T\left(\sum a_ne_n\right)=\sum a_nT(e_n)$$ actually gives a well-defined bounded operator. This means that you need there exist a constant $C$ such that for all $(a_n)\in \ell^2$, $$\left\|\sum a_nT(e_n)\right\|\leq C\left(\sum |a_n|^2\right)^{1/2}$$ (and in particular, the infinite sum $\sum a_nT(e_n)$ needs to actually converge). Given the very simple definitions of $T(e_n)$ in each of your three cases, you should be able to write down a simple formula for $\left\|\sum a_nT(e_n)\right\|$ in each case which you can use to try to determine whether such a $C$ exists.</p>
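For case (a) no such constant $C$ can exist, and a finite truncation already exhibits the blow-up (my numeric sketch): with $a_n=1/n$ for $n\le N$, $\left\|\sum a_nT(e_n)\right\| = \sum_{n\le N}\frac1n \approx \log N$ grows without bound while $\left(\sum |a_n|^2\right)^{1/2}$ stays below $\pi/\sqrt6$:

```python
from math import sqrt

def ratio(N):
    """||sum a_n T(e_n)|| / ||a|| for T(e_n) = e_1 and a_n = 1/n, n <= N."""
    num = sum(1 / n for n in range(1, N + 1))            # harmonic sum ~ log N
    den = sqrt(sum(1 / n**2 for n in range(1, N + 1)))   # bounded by pi/sqrt(6)
    return num / den

print(ratio(10), ratio(1_000), ratio(100_000))  # strictly increasing
```

Cases (b) and (c) are fine: shifting an orthonormal expansion preserves (or does not increase) the norm.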
2,917,024
<p>Let $H$ be a Hilbert space and let $\{e_n\},\ n=1,2,3,\ldots$ be an orthonormal basis of $H$. Suppose $T$ is a bounded linear operator on $H$. Then which of the following cannot be true? $$(a)\quad T(e_n)=e_1, n=1,2,3,\ldots$$ $$(b)\quad T(e_n)=e_{n+1}, n=1,2,3,\ldots$$ $$(c)\quad T(e_n)=e_{n-1} , n=2,3,4,\ldots , \,\,T(e_1)=0$$</p> <p>I think $(a)$ is not true because $e_1$ cannot span the range space. I really don't know how to approach this problem. Could you please give me some hints? Thank you very much.</p>
Dionel Jaime
462,370
<p>Your question's been answered, but if you're curious about examples of bounded linear operators that satisfy the last two conditions, respectively, look no further than $l^2(\mathbb{N})$: consider the left and right shift operators on the standard basis. That is, </p> <p>$$T_L( (a_1, a_2, ... a_n , ... )) = (a_2, a_3, ... a_n,\ ... ) \\ T_R( (a_1, a_2, ... a_n , ... )) = (0, a_1,\ ... \ a_n,\ ... ) $$</p>
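<p>A finite-dimensional sketch of these two shift operators (mine; truncating to $N$ coordinates is an assumption for illustration — in the truncation, the right shift has to drop the last coordinate):</p>

```python
# Shifts on the first N coordinates of a sequence, following the answer's
# names T_L and T_R.

def T_L(a):
    # Left shift: drops the first coordinate, pads with 0 at the end.
    return a[1:] + [0]

def T_R(a):
    # Right shift: prepends a zero; the truncation drops the last coordinate.
    return [0] + a[:-1]

a = [1, 2, 3, 4, 0]
print(T_L(a))            # [2, 3, 4, 0, 0]
print(T_R(a))            # [0, 1, 2, 3, 4]
print(T_L(T_R(a)) == a)  # True: the left shift undoes the right shift
```

Note that the composition the other way, $T_R T_L$, kills the first coordinate, which mirrors $T(e_1)=0$ in case (c).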
3,873,438
<blockquote> <p>A particle of unit mass moves under the action of <span class="math-container">$n$</span> forces directed towards <span class="math-container">$n$</span> fixed points <span class="math-container">$A_1,A_2,...,A_n$</span>. The force towards <span class="math-container">$A_i$</span> is of magnitude <span class="math-container">$k_i$</span> times the distance of the particle from <span class="math-container">$A_i$</span>. When the particle is at B, its acceleration is zero. When it is at another point <span class="math-container">$C$</span> which is at a distance <span class="math-container">$d$</span> from B, its acceleration is <span class="math-container">$f$</span>. Find the magnitude of <span class="math-container">$f$</span> in terms of <span class="math-container">$k_1, k_2, ... k_n$</span> and <span class="math-container">$d$</span>.</p> </blockquote> <p>Source <a href="https://imgur.com/6HHMj8X" rel="nofollow noreferrer">https://imgur.com/6HHMj8X</a></p> <p>I'm interested in what a proper solution to this question would be. I'm not sure if mine is correct and it feels very cheesed.</p> <p>Given that the magnitude of <span class="math-container">$f$</span> is independent of the positions of the <span class="math-container">$A_i$</span> points, we can assume without loss of generality that all of the <span class="math-container">$n$</span> points are on top of each other. Therefore, B must also be on top of them, and if I move a distance <span class="math-container">$d$</span> away from B, I am a distance <span class="math-container">$d$</span> away from all of the points. Therefore <span class="math-container">$f=d(k_1+k_2+k_3+...+k_n)$</span></p>
Quimey
10,443
<p>The first condition gives you <span class="math-container">$$ \sum_{i=1}^n (A_i - B)k_i=0 $$</span> and you want to compute <span class="math-container">$|\sum_{i=1}^n (A_i - C)k_i|$</span> but <span class="math-container">$$ \sum_{i=1}^n (A_i - C)k_i = \sum_{i=1}^n (A_i - B)k_i + \sum_{i=1}^n (B - C)k_i = \sum_{i=1}^n (B - C)k_i $$</span> So <span class="math-container">$$ |\sum_{i=1}^n (A_i - C)k_i| = |\sum_{i=1}^n (B - C)k_i| = |(B - C) \sum_{i=1}^n k_i| = d \sum_{i=1}^n k_i $$</span></p>
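<p>One can sanity-check this numerically. The sketch below (mine, not part of the answer) uses the fact, implicit in the first displayed equation, that $B$ is the $k$-weighted centroid of the $A_i$; the points and weights are chosen at random:</p>

```python
import random, math

random.seed(0)
n, d = 5, 2.5
A = [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(n)]
k = [random.uniform(0.5, 2.0) for _ in range(n)]
K = sum(k)

# Equilibrium point B: sum k_i (A_i - B) = 0, i.e. the k-weighted centroid.
Bx = sum(ki * ax for ki, (ax, ay) in zip(k, A)) / K
By = sum(ki * ay for ki, (ax, ay) in zip(k, A)) / K

# Point C at distance d from B, in an arbitrary direction (the answer is isotropic).
theta = random.uniform(0, 2 * math.pi)
C = (Bx + d * math.cos(theta), By + d * math.sin(theta))

# Acceleration at C: sum of the n forces on a unit mass.
fx = sum(ki * (ax - C[0]) for ki, (ax, ay) in zip(k, A))
fy = sum(ki * (ay - C[1]) for ki, (ax, ay) in zip(k, A))
f = math.hypot(fx, fy)

print(f, d * K)  # the two agree up to rounding
```

The agreement holds for any choice of points, weights, and direction, as the algebra above shows.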
3,877,319
<p>I have been thinking about finding an explicit formula for the tribonacci numbers, defined by:</p> <p><span class="math-container">$$a_n = a_{n-1}+a_{n-2}+a_{n-3}$$</span></p> <p>and <span class="math-container">$a_1 = 0, a_2 = 1, a_3 = 1.$</span> Obviously, these beginning terms can be shifted, but we'll leave them as such for now.</p> <p>This has proven difficult, and I'm still not sure how it's done, but what about the general sequence:</p> <p><span class="math-container">$$a_n = xa_{n-1}+ya_{n-2}+za_{n-3}$$</span></p> <p>With arbitrary <span class="math-container">$a_1, a_2,$</span> and <span class="math-container">$a_3.$</span> How is a tribonacci explicit formula calculated?</p> <p>Cheers.</p>
DavidW
838,791
<p>Please see <a href="https://www.fq.math.ca/Scanned/36-2/wolfram.pdf" rel="nofollow noreferrer">https://www.fq.math.ca/Scanned/36-2/wolfram.pdf</a> in <em>The Fibonacci Quarterly</em>, sub-section 6.2.2. That approach also considers having initial functions instead of any given initial values.</p> <p>Changing the constant coefficients to <span class="math-container">$x$</span>, <span class="math-container">$y$</span> and <span class="math-container">$z$</span> from 1 can be solved similarly.</p> <p>Specifically, we are seeking a solution for <span class="math-container">$f$</span> where <span class="math-container">$f: \Re \rightarrow \Im$</span> which has the property <span class="math-container">\begin{equation} \label{rec-eq} f(x) = \sum_{1 \leq l \leq 3}~f(x - l) \end{equation}</span> where <span class="math-container">$f(0)$</span>, <span class="math-container">$f(1)$</span> and <span class="math-container">$f(2)$</span> are given initial values.</p> <p>The real root of the associated characteristic equation <span class="math-container">$x^3 - x^2 - x - 1 = 0$</span> is <span class="math-container">$r_1 = \frac{1 + (19 - 3\sqrt{33})^{\frac{1}{3}} + (19 + 3\sqrt{33})^{\frac{1}{3}}}{3}$</span>. 
The approximate values of the complex roots are <span class="math-container">$-0.419643377607081 \pm 0.606290729207199 i$</span>.</p> <p>A general solution is <span class="math-container">$f(x) = K_1 r_1^x + K_2 r_1^{\frac{-x}{2}}\cos (\theta x) + r_1^{\frac{-x}{2}} \left( \frac{K_2\left(\frac{1 -r_1}{2}\right) + K_3}{\sqrt{\frac{1}{r_1} - \left(\frac{r_1 - 1}{2}\right)^2}} \right) \sin (\theta x)$</span> where <span class="math-container">$\theta =\arccos \left(\frac{1 - r_1}{2} \sqrt{r_1}\right)$</span> and <span class="math-container">$K_1$</span>, <span class="math-container">$K_2$</span> and <span class="math-container">$K_3$</span> are solutions to the system <span class="math-container">$\left[ \begin{array}{ccc} 1 &amp; 1 &amp; 0\\ r_1 -1 &amp; -r_1 &amp; 1\\ \frac{1}{r_1} &amp; 0 &amp; -r_1 \end{array}\right] \left[\begin{array}{c} K_1\\ K_2\\ K_3 \end{array}\right] = \left[\begin{array}{c} f(0)\\ f(1) - f(0)\\ f(2) - f(1) - f(0) \end{array}\right]. $</span></p>
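<p>As a purely numerical cross-check of the characteristic-root method (a sketch of mine — it does not reproduce the closed form above, only the underlying idea that <span class="math-container">$f(n) = \sum_j c_j r_j^n$</span> with the <span class="math-container">$r_j$</span> the roots of <span class="math-container">$x^3 - x^2 - x - 1$</span> and the <span class="math-container">$c_j$</span> fitted to the initial values):</p>

```python
import cmath

# Real root r1 of p(x) = x^3 - x^2 - x - 1 by Newton's method.
r1 = 2.0
for _ in range(60):
    r1 -= (r1**3 - r1**2 - r1 - 1) / (3 * r1**2 - 2 * r1 - 1)

# Deflate: x^3 - x^2 - x - 1 = (x - r1)(x^2 + b x + c), b = r1 - 1, c = 1/r1.
b, c = r1 - 1, 1 / r1
disc = cmath.sqrt(b * b - 4 * c)         # negative discriminant: complex pair
roots = [r1, (-b + disc) / 2, (-b - disc) / 2]

# Fit c_j to f(0)=0, f(1)=1, f(2)=1 via Cramer's rule on M[n][j] = r_j^n.
init = [0, 1, 1]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

M = [[r**n for r in roots] for n in range(3)]
D = det3(M)
coef = []
for j in range(3):
    Mj = [row[:] for row in M]
    for n in range(3):
        Mj[n][j] = init[n]
    coef.append(det3(Mj) / D)

def f(n):
    # Imaginary parts cancel; rounding recovers the exact integer.
    return round(sum(cj * r**n for cj, r in zip(coef, roots)).real)

print([f(n) for n in range(10)])  # [0, 1, 1, 2, 4, 7, 13, 24, 44, 81]
```

The same recipe handles <span class="math-container">$a_n = xa_{n-1}+ya_{n-2}+za_{n-3}$</span>: replace the cubic by <span class="math-container">$t^3 - xt^2 - yt - z$</span> and refit the three coefficients (assuming the roots are distinct).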
3,193,107
<p>As the title says, I need to find the explicit form of the recursive sequence defined above, and I am very stuck on this.</p>
Ross Millikan
1,827
<p>Hint: if you make <span class="math-container">$b_1=5$</span> and divide by <span class="math-container">$2$</span>, you have a geometric series.</p>
737,152
<p>Give an algebraic proof that $\binom{n+1}{m+1} = \sum_{k=m}^{n} \binom{k}{m}$.</p> <p>I've tried using Pascal's rule and looking for a telescopic sum, but I can't find one.</p> <p>Any help is appreciated.</p>
Debajyoti Mondal
434,846
<p><a href="https://i.stack.imgur.com/ZmLnv.jpg" rel="nofollow noreferrer">Here is a geometric intuition.</a> We started with a width of $\sqrt{n}$. At each step, the width reduced by (1/2). Hence within roughly $2\sqrt{n}$ steps we reach the base case.</p>
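<p>Independently of any particular proof, the identity in the question is easy to verify numerically for small cases (a quick check of mine, not a proof):</p>

```python
from math import comb

# Check C(n+1, m+1) == sum_{k=m}^{n} C(k, m) for all 0 <= m <= n < 20.
ok = all(
    comb(n + 1, m + 1) == sum(comb(k, m) for k in range(m, n + 1))
    for n in range(20)
    for m in range(n + 1)
)
print(ok)  # True
```

An algebraic proof does come from Pascal's rule as a telescoping sum: repeatedly expand $\binom{n+1}{m+1} = \binom{n}{m} + \binom{n}{m+1}$ on the second term until reaching $\binom{m}{m+1} = 0$.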
3,007,044
<p>Sequence <span class="math-container">$a_n$</span> is defined as <span class="math-container">$a_n = \ln (1 + a_{n-1})$</span>, where <span class="math-container">$a_0 &gt; -1$</span>. Find all <span class="math-container">$a_0$</span> for which the sequence converges.</p> <p>Using arithmetic properties of limits, I've found that if the sequence converges, then it converges to <span class="math-container">$0$</span>, but I don't know how to prove it and find <span class="math-container">$a_0$</span>.</p>
hamam_Abdallah
369,188
<p><strong>Hint</strong></p> <p>For <span class="math-container">$x&gt;-1$</span>, <span class="math-container">$$\ln(1+x)\le x,$$</span> so the sequence is decreasing.</p> <p>If <span class="math-container">$-1&lt;a_0&lt;0$</span>, the sequence is not well defined.</p> <p>If <span class="math-container">$a_0=0$</span>, it is constant.</p> <p>If <span class="math-container">$a_0&gt;0$</span>, the sequence is decreasing and positive.</p> <p>The unique fixed point is zero, so the sequence goes to zero.</p>
3,007,044
<p>Sequence <span class="math-container">$a_n$</span> is defined as <span class="math-container">$a_n = \ln (1 + a_{n-1})$</span>, where <span class="math-container">$a_0 &gt; -1$</span>. Find all <span class="math-container">$a_0$</span> for which the sequence converges.</p> <p>Using arithmetic properties of limits, I've found that if the sequence converges, then it converges to <span class="math-container">$0$</span>, but I don't know how to prove it and find <span class="math-container">$a_0$</span>.</p>
user3482749
226,174
<p>Note that we must have <span class="math-container">$a_n &gt; -1$</span> for all <span class="math-container">$n$</span>, else <span class="math-container">$a_{n+1}$</span> is not even well defined (I'll assume this silently throughout).</p> <p>We note that <span class="math-container">$a_n \geq a_{n+1}$</span> if and only if <span class="math-container">$e^{a_n}\geq 1 + a_n$</span>. Given the power series expansion of <span class="math-container">$e^x$</span>, we have <span class="math-container">$$e^{a_n} - (1 + a_n) = \sum\limits_{i=2}^{\infty}\frac{a_n^i}{i!},$$</span> which is clearly non-negative for all <span class="math-container">$a_n$</span>. Thus, the sequence is decreasing, and hence monotonic, for any <span class="math-container">$a_0$</span>.</p> <p>Now, decreasing sequences converge if and only if they are bounded below, and given what you already know about what it must converge to, you essentially already know the answer, so we just need to show that the sequence is bounded below for <span class="math-container">$a_0 \geq 0$</span>, but not for <span class="math-container">$a_0 &lt; 0$</span>.</p> <p>We'll do those in order: if some <span class="math-container">$a_n \geq 0$</span>, then <span class="math-container">$a_{n+1} = \ln(1 + a_n) \geq \ln(1) = 0$</span>, so indeed the sequence is bounded below by <span class="math-container">$0$</span> for <span class="math-container">$a_0 \geq 0$</span>, and so converges for all such values. </p> <p>On the other hand, if <span class="math-container">$a_0 &lt; 0$</span>, then, since our sequence is decreasing, it cannot converge to <span class="math-container">$0$</span> (it never gets within a distance <span class="math-container">$|a_0|$</span> of <span class="math-container">$0$</span>), so, by the argument that you already have, cannot converge. </p> <p>Thus, our sequence converges exactly for <span class="math-container">$a_0 \geq 0$</span>.</p>
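<p>A quick numerical illustration of both conclusions (a sketch with arbitrarily chosen starting values, not part of the answer): for <span class="math-container">$a_0 \geq 0$</span> the iteration decreases slowly toward <span class="math-container">$0$</span>; for <span class="math-container">$-1 &lt; a_0 &lt; 0$</span> it decreases past <span class="math-container">$-1$</span>, after which the next term is undefined.</p>

```python
import math

def iterate(a0, steps=100000):
    # Run a_{n+1} = ln(1 + a_n); return None if the sequence leaves (-1, inf).
    a = a0
    for _ in range(steps):
        if a <= -1:
            return None          # ln(1 + a) would be undefined
        a = math.log1p(a)        # log1p(a) = ln(1 + a), accurate near 0
    return a

print(iterate(2.0))    # a small positive number close to 0
print(iterate(-0.1))   # None
```

The positive case creeps toward <span class="math-container">$0$</span> very slowly (roughly like <span class="math-container">$2/n$</span>), consistent with convergence to the unique fixed point; the negative case escapes below <span class="math-container">$-1$</span> after a few dozen steps.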