434,290
<p>According to <a href="http://arxiv.org/abs/0910.5922" rel="nofollow">equation 4</a>, $$\phi(0,t)= \frac{A_0}{(1+\frac{2t^2}{R^4})^{3/4}}\cos \left(\sqrt2 t+ \frac{3}{2}\tan^{-1}\left[\frac{\sqrt2 t}{R^2}\right]\right)\tag{1}$$ what conditions make $$\cos \left(\sqrt2 t+ \frac{3}{2}\tan^{-1}\left[\frac{\sqrt2 t}{R^2}\right]\right)=1,$$ so that equation (1) becomes </p> <p>$$\phi(0,t)= \frac{A_0}{(1+\frac{2t^2}{R^4})^{3/4}}?$$ The author used the <a href="http://arxiv.org/abs/hep-ph/9503217" rel="nofollow">article reference</a> to establish the equation $$\frac{1}{2} \Gamma_{lin}= \frac{1}{\tau_{linear}} \approx \frac{1.196}{\omega_{mass}} \approx \frac{.846}{R^2},$$ but I couldn't find the argument there; can you explain this a bit, please?</p>
Tunk-Fey
123,277
<p>As another approach, we can split the denominator as follows: $$ \frac{1}{x^4+1}=\frac{1}{2i}\left(\frac{1}{x^2-i}-\frac{1}{x^2+i}\right). $$ Consequently, the integral becomes $$ \int_{0}^{\infty }\frac {\ln x}{x^4+1}\ dx =\frac{1}{2i}\int_{0}^{\infty }\left(\frac{\ln x}{x^2-i}-\frac{\ln x}{x^2+i}\right)\ dx. $$ Using the formula from <a href="https://math.stackexchange.com/questions/290200/int-0-infty-frac-ln-xx2a2-mathrmdx-evaluate-integral?lq=1">here</a>, $$ \int_0^{\infty}\frac{\ln x}{x^2+a^2}\ dx=\frac {\pi \ln a}{2a}, \qquad \operatorname{Re}(a)&gt;0, $$ we obtain $$ \int_{0}^{\infty }\frac {\ln x}{x^4+1}\ dx=\frac{1}{2i}\left(\frac {\pi \ln \sqrt{-i}}{2\sqrt{-i}}-\frac {\pi \ln \sqrt{i}}{2\sqrt{i}}\right). $$ Since the formula requires $\operatorname{Re}(a)&gt;0$, we must take the principal branch of the square roots: $$ \sqrt{-i}=e^{-\frac{i\pi}{4}},\qquad \sqrt{i}=e^{\frac{i\pi}{4}}. $$ Thus $$ \begin{align} \int_{0}^{\infty }\frac {\ln x}{x^4+1}\ dx &amp;=\frac{\pi}{4i}\left(\frac {\ln e^{-\frac{i\pi}{4}}}{e^{-\frac{i\pi}{4}}}-\frac {\ln e^{\frac{i\pi}{4}}}{e^{\frac{i\pi}{4}}}\right)\\ &amp;=\frac{\pi}{4i}\left(-\frac{i\pi}{4}\right)\left(e^{\frac{i\pi}{4}}+e^{-\frac{i\pi}{4}}\right)\\ &amp;=-\frac{\pi^2}{16}\cdot 2\cos\frac{\pi}{4}\\ &amp;=\boxed{\color{blue}{-\Large\frac{\pi^2}{16}\sqrt{2}}} \end{align} $$</p> <hr> <p>$$\large\color{blue}{\text{# }\mathbb{Q.E.D.}\text{ #}}$$</p>
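A quick numerical sanity check of the boxed value $-\frac{\pi^2}{16}\sqrt2$ (my addition, not part of the answer above): folding $[1,\infty)$ onto $(0,1]$ via $x\mapsto 1/x$ and then substituting $x=e^{-t}$ turns the integral into a smooth, rapidly decaying one that plain composite Simpson handles.

```python
import math

# After x -> 1/x on [1, inf) and then x = e^{-t} on (0, 1], the integral becomes
#   I = -∫_0^∞ t (1 - e^{-2t}) e^{-t} / (1 + e^{-4t}) dt
def integrand(t):
    return -t * (1 - math.exp(-2*t)) * math.exp(-t) / (1 + math.exp(-4*t))

n, a, b = 4000, 0.0, 40.0          # the tail beyond t = 40 is negligible
h = (b - a) / n
s = integrand(a) + integrand(b)
for i in range(1, n):
    s += (4 if i % 2 else 2) * integrand(a + i*h)
I = s * h / 3                      # composite Simpson's rule

exact = -math.pi**2 * math.sqrt(2) / 16
print(round(I, 6), round(exact, 6))  # both ≈ -0.872358
```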
434,290
<p>According to the <a href="http://arxiv.org/abs/0910.5922" rel="nofollow">equation 4</a>, $$\phi(0,t)= \frac{A_0}{(1+\frac{2t^2}{R^4})^{3/4}}\cos \left(\sqrt2 t+ \frac{3}{2}\tan^{-1}\left[\frac{\sqrt2 t}{R^2}\right]\right)\tag{1}$$ what conditions makes, $$\cos \left(\sqrt2 t+ \frac{3}{2}\tan^{-1}\left[\frac{\sqrt2 t}{R^2}\right]\right)=1$$ so the equation (1) will be </p> <p>$$\phi(0,t)= \frac{A_0}{(1+\frac{2t^2}{R^4})^{3/4}}$$ The author used the <a href="http://arxiv.org/abs/hep-ph/9503217" rel="nofollow">article reference</a> to establish the equation $$\frac{1}{2} \Gamma_{lin}= \frac{1}{\tau_{linear}} \approx \frac{1.196}{\omega_{mass}} \approx \frac{.846}{R^2}$$ but I didn't get any argument there, can you explain this a bit please.</p>
Felix Marin
85,343
<p>$\newcommand{\+}{^{\dagger}} \newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack} \newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,} \newcommand{\dd}{{\rm d}} \newcommand{\down}{\downarrow} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,{\rm e}^{#1}\,} \newcommand{\fermi}{\,{\rm f}} \newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{{\rm i}} \newcommand{\iff}{\Longleftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\isdiv}{\,\left.\right\vert\,} \newcommand{\ket}[1]{\left\vert #1\right\rangle} \newcommand{\ol}[1]{\overline{#1}} \newcommand{\pars}[1]{\left(\, #1 \,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\pp}{{\cal P}} \newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,} \newcommand{\sech}{\,{\rm sech}} \newcommand{\sgn}{\,{\rm sgn}} \newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}} \newcommand{\ul}[1]{\underline{#1}} \newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert} \newcommand{\wt}[1]{\widetilde{#1}}$ $\ds{\int_{0}^{\infty}{\ln\pars{x} \over x^{4} + 1}\,\dd x =-\,{\pi^2 \root{2} \over 16}:\ {\large ?}}$.</p> <blockquote> <p>\begin{align} &amp;\overbrace{\color{#c00000}{\int_{0}^{\infty}{\ln\pars{x} \over x^{4} + 1}\,\dd x}} ^{\ds{x^{4} \mapsto x}} ={1 \over 16}\int_{0}^{\infty}{x^{-3/4}\ln\pars{x} \over x + 1}\,\dd x ={1 \over 16}\lim_{\mu \to -3/4}\partiald{}{\mu} \int_{0}^{\infty}{x^{\mu} \over x + 1}\,\dd x \\[3mm]&amp;={1 \over 16}\lim_{\mu \to -3/4}\partiald{}{\mu} \int_{0}^{\infty}x^{\mu}\int_{0}^{\infty}\expo{-\pars{x + 1}t}\,\dd t\,\dd x \\[3mm]&amp;={1 \over 16}\lim_{\mu \to -3/4}\partiald{}{\mu} \int_{0}^{\infty}\expo{-t}\int_{0}^{\infty}x^{\mu}\expo{-xt}\,\dd x\,\dd t \\[3mm]&amp;={1 \over 16}\lim_{\mu \to -3/4}\partiald{}{\mu} 
\pars{\int_{0}^{\infty}t^{-\mu - 1}\expo{-t}\,\dd t} \pars{\int_{0}^{\infty}x^{\mu}\expo{-x}\,\dd x} \\[3mm]&amp;={1 \over 16}\lim_{\mu \to -3/4} \partiald{\bracks{\Gamma\pars{-\mu}\Gamma\pars{\mu + 1}}}{\mu} \end{align} where $\ds{\Gamma\pars{z}}$ is the <a href="http://people.math.sfu.ca/~cbm/aands/page_255.htm" rel="nofollow">Gamma Function</a> ${\bf\mbox{6.1.1}}$.</p> </blockquote> <p>With the <a href="http://people.math.sfu.ca/~cbm/aands/page_255.htm" rel="nofollow">Euler Reflection Formula</a> ${\bf\mbox{6.1.17}}$: \begin{align} &amp;\color{#c00000}{\int_{0}^{\infty}{\ln\pars{x} \over x^{4} + 1}\,\dd x} ={1 \over 16}\,\left. \partiald{\bracks{\pi\csc\pars{-\pi\mu}}}{\mu}\right\vert_{\mu\ =\ -3/4} ={1 \over 16}\, \bracks{\pi^{2}\cot\pars{3\pi \over 4}\csc\pars{3\pi \over 4}} \end{align}</p> <blockquote> <p>$$\color{#00f}{\large% \int_{0}^{\infty}{\ln\pars{x} \over x^{4} + 1}\,\dd x =-\,{\root{2} \over 16}\,\pi^2} $$</p> </blockquote>
1,265,074
<p>I simply don't know how to go about answering this question. I've done a good few other questions about point estimation, but I really don't know where I'm going with this one:</p> <p><img src="https://i.stack.imgur.com/coWTD.png" alt="Unbiased estimator of 1 over lambda"></p> <p>Thanks for the help!</p> <p>EDIT: My question is regarding the unbiasedness section, but any input on the second part would also be great.</p>
Math1000
38,584
<p>Recall that if $X,Y\sim\mathrm{Pois}(\lambda)$ are independent, then $X+Y\sim\mathrm{Pois}(2\lambda)$. So $$\sum_{i=1}^n X_i\sim\mathrm{Pois}(n\lambda).$$ We can use the law of the unconscious statistician to compute the expectation of the given estimator: $$ \begin{align*} \mathbb E\left[\frac n{\sum_{i=1}^n X_i + 1}\right] &amp;= \sum_{k=0}^\infty \frac n{k+1}\mathbb P\left(\sum_{i=1}^n X_i=k\right)\\ &amp;= \sum_{k=0}^\infty \frac n{k+1}\cdot\frac{e^{-n\lambda}(n\lambda)^k}{k!}\\ &amp;= ne^{-n\lambda}(n\lambda)^{-1}\sum_{k=0}^\infty\frac{(n\lambda)^{k+1}}{(k+1)!}\\ &amp;= \frac{e^{-n\lambda}}\lambda\left(e^{n\lambda}-1\right)\\ &amp;= \frac1\lambda(1 - e^{-n\lambda}). \end{align*} $$ Since this expectation is not equal to $\frac1\lambda$, we see that the estimator is biased. However, as @André Nicolas pointed out, it is asymptotically unbiased, as $$\lim_{n\to\infty}\frac1\lambda(1-e^{-n\lambda})=\frac1\lambda.$$</p>
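A Monte Carlo check of the bias formula $\mathbb E\big[\frac{n}{\sum X_i + 1}\big] = \frac1\lambda(1-e^{-n\lambda})$ (my addition; the values of $n$ and $\lambda$ are arbitrary choices for the check).

```python
import random
import math

def poisson(mu, rng):
    # Knuth's multiplicative algorithm; fine for moderate mu
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(0)
n, lam, trials = 5, 0.7, 100_000
# sum of n iid Pois(lam) variables is Pois(n*lam)
est = sum(n / (poisson(n * lam, rng) + 1) for _ in range(trials)) / trials

theory = (1 - math.exp(-n * lam)) / lam
print(round(est, 3), round(theory, 3))  # close, but both below 1/lam ≈ 1.429
```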
1,989,253
<p>I am trying to evaluate: </p> <p>$$\lim_{x \to 4}\frac{\sqrt{5-x} - 1}{2-\sqrt{x}}.$$</p> <p>Even though I tried rationalizing both denominator and numerator, I still end up with the functioning being undefined.</p> <p>How can I solve this without rationalizing?</p>
user382540
382,540
<p>If anyone was wondering, after rationalizing I had gotten </p> <p>$$\lim_{x \to 4}\frac{4 - x}{(2-\sqrt{x})(\sqrt{5-x}+1)}.$$</p> <p>Then you need to factor $(4-x)$ as $(2-\sqrt{x})(2+\sqrt{x})$.</p> <p>From there you can guess where to go.</p>
357,557
<p>I have a function: $f(x)=-\frac{4x^{3}+4x^{2}+ax-18}{2x+3}$ which has only one point of intersection with the $x$-axis.</p> <p>How can I find the value of $a$?</p> <p>I tried polynomial division and the discriminant, but it didn't help me.</p>
Peter Smith
35,151
<p>Obviously, $x &gt; 1$ is a power of 2 iff (A): every $y &gt; 1$ which divides $x$ is itself divisible by 2. </p> <p>Use the fact that factors of a number are less than it to bound the quantifiers in formalizing statement (A), and you'll get a $\Delta_0$ wff.</p>
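A brute-force check of the characterization used above (my addition): for $x>1$, $x$ is a power of $2$ iff every divisor $y>1$ of $x$ is divisible by $2$.

```python
# is_pow2 uses the standard bit trick; property_a is statement (A) verbatim
def is_pow2(x):
    return x > 1 and x & (x - 1) == 0

def property_a(x):
    # every y > 1 dividing x is itself divisible by 2
    return all(y % 2 == 0 for y in range(2, x + 1) if x % y == 0)

assert all(is_pow2(x) == property_a(x) for x in range(2, 2000))
print("ok")
```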
1,358,002
<p>My son did something quite impressive the other day. It was sheer luck but I don't think I'll ever see it duplicated again in my lifetime. </p> <p>I brought my kids to the boardwalk and my son wanted to play an amusement game. It was the arrow spin wheel game. It had 90 different names or possibilities to win. You could either pick 1 name for \$1 or 3 names for \$2. </p> <p>He chose 3 names for \$2 and won on his very first try. He picked his prize and we walked over to a completely different spinning wheel game with the same amount of names and cost. On his 1st spin, he won again, and again he chose his prize and we left. Yet again we went to a 3rd similar wheel with the same amount of names and chances and he won again on the 1st try. </p> <p>What are the odds that someone could win on their 1st 3 tries (or 3 in a row) like my son did? Thanks!</p>
quid
85,306
<p>If I understand the game correctly, he chose $3$ out of $90$, and so had a $3/90=1/30$ chance in winning a game. </p> <p>Doing this three times in a row has a probability of $(1/30)^3=1/27000$, so it should happen once in $27000$ tries on average. So, it is pretty unlikely but not excessively so. </p>
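The arithmetic in exact rationals (my addition; this assumes, as the answer does, that the three spins are independent and each covers 3 of 90 equally likely names):

```python
from fractions import Fraction

p_single = Fraction(3, 90)   # chance of one win = 3/90 = 1/30
p_three = p_single ** 3      # three independent wins in a row
print(p_three, 1 / p_three)  # 1/27000 27000
```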
718,166
<p><strong>Question:</strong></p> <blockquote> <p>Let $a\in(0,1)$, and let $f(x)\geq0$ be continuous on $\mathbb R$.</p> <p>If $$f(x)-a\int_x^{x+1}f(t)\,dt,\qquad \forall x\in \mathbb R,$$ is constant,</p> <p>show that</p> <p>either $f(x)$ is constant,</p> <p>or $$f(x)=Ae^{bx}+B,$$ where $A\ge 0$, $|B|\le A$, $A,B$ are constants, and the positive number $b$ is such that $\dfrac{b}{e^b-1}=a$.</p> </blockquote> <p><strong>My try:</strong></p> <p>Let $$f(x)-a\int_x^{x+1}f(t)\,dt=C;$$ then we have $$f'(x)-af(x+1)+af(x)=0,\qquad \forall x\in \mathbb R.$$</p> <p><strong>Other idea:</strong></p> <p>Let $$F(x)=\int_{0}^{x}f(t)\,dt;$$ then $$f(x)-a\int_x^{x+1}f(t)\,dt=F'(x)-a[F(x+1)-F(x)]=C,$$ but I can't solve this equation; maybe my idea is not good.</p> <p>Thank you very much. </p>
Bob Pego
2,947
<p>A very pretty problem! Under the hypotheses stated, we will conclude that $f(x) = A e^{bx}+B$ with $A, B\ge0$. The amplitude of $B$ cannot be restricted. </p> <p>Since $f$ is continuous, the integral is differentiable and by the OP's calculation, $$ \frac{d}{dx}(e^{ax} f(x)) = e^{ax}(f'(x)+a f(x)) = e^{ax} a f(x+1)\ . $$ It follows $u(x)=e^{ax}f(x)$ is nonnegative and satisfies $$ u'(x) = c u(x+1) , \quad c=a e^{-a}. $$ By a simple induction argument, $u$ is infinitely differentiable with all derivatives nonnegative on $\mathbb R$. Thus its reflection $v(x)=u(-x)$ is completely monotone, meaning all its even derivatives are nonnegative and all its odd derivatives are nonpositive.</p> <p>By <a href="http://en.wikipedia.org/wiki/Bernstein%27s_theorem_on_monotone_functions" rel="nofollow">Bernstein's theorem</a> on completely monotone functions, there is a unique Borel measure $d\mu$ on $[0,\infty)$ such that $$ v(x) = u(-x) = \int_0^\infty e^{-tx}\,d\mu(t) $$ for all $x&gt;0$. But then, since $v'(x)=-u'(-x)=-c v(x-1)$, we find that for $x&gt;1$, $$ c v(x) = -v'(x+1) = \int_0^\infty e^{-tx} e^{-t}t\,d\mu(t). $$ By uniqueness of the representing measure, $c\,d\mu(t)=te^{-t} d\mu(t)$ as measures on $[0,\infty)$. Hence $(c-te^{-t})d\mu(t)=0$, and the support of $d\mu$ must lie in the set of $t$ such that $c=te^{-t}$.</p> <p>Since $t\mapsto te^{-t}$ increases on $(0,1)$ and decreases on $(1,\infty)$, this set consists of two points: the number $a\in(0,1)$ and the unique $\hat a&gt;1$ such that $ae^{-a}=\hat a e^{-\hat a}$. Therefore $d\mu$ is a nonnegative combination of delta masses at $a$ and $\hat a$: $$ d\mu(t) = B\,\delta(t-a) + A\, \delta(t-\hat a) $$ where $B, A\ge0$. Consequently, for all $x&gt;1$, $$ v(x) = B e^{-ax} + A e^{-\hat a x} = e^{-ax}f(-x), $$ hence $f(x) = B + A e^{bx}$ where $b=\hat a-a&gt;0$ satisfies $a e^b = \hat a = b+a$ as desired.</p> <p>The argument above actually applies for any translate $v(x-k)$. 
Consequently the desired representation of $f$ holds for all $x$.</p>
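A numerical check of the conclusion (my addition, not part of the proof above): with $b$ chosen so that $\frac{b}{e^b-1}=a$, the function $f(x)=Ae^{bx}+B$ indeed makes $f(x)-a\int_x^{x+1}f(t)\,dt$ constant. The values of $a$, $A$, $B$ below are arbitrary choices.

```python
import math

a, A, B = 0.5, 2.0, 0.3

# solve b/(e^b - 1) = a by bisection; the left side decreases from 1 to 0 on b > 0
lo, hi = 1e-9, 50.0
for _ in range(200):
    mid = (lo + hi) / 2
    if mid / math.expm1(mid) > a:
        lo = mid
    else:
        hi = mid
b = (lo + hi) / 2

def f(x):
    return A * math.exp(b * x) + B

def expr(x):
    # f(x) - a * ∫_x^{x+1} f(t) dt, with the integral evaluated in closed form
    integral = A * (math.exp(b * (x + 1)) - math.exp(b * x)) / b + B
    return f(x) - a * integral

values = [expr(x) for x in (-2.0, 0.0, 1.5, 3.0)]
print(values)  # all equal to B*(1 - a) = 0.15 up to rounding
```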
1,392,858
<p>It is known that the space of symmetric matrices $\mathbb{R}_{sym}^{n \times n}$ has $\binom{n+1}{2}$ dimensions.</p> <p>And according to the spectral theorem, every symmetric matrix $A \in \mathbb{R}_{sym}^{n \times n}$ has a spectral decomposition in terms of rank-1 matrices:</p> <p>$$A = \sum_{i=1}^n \lambda_i v_i v_i^T.$$</p> <p>Hence we would conclude that the dimension of the space of symmetric matrices is $n$.</p> <p>Where is the fallacy in this reasoning?</p> <p>Thanks in advance. </p>
Ben Grossmann
81,360
<p>You have correctly stated that for any symmetric $A$, there exist rank $1$ matrices $v_iv_i^T$ such that $$ A = \sum \lambda _i v_i v_i^T. $$ However, it is impossible to select a <strong>fixed</strong> set $\{v_1v_1^T,\dots,v_nv_n^T\}$ such that for <strong>every</strong> $A$, there exists a choice of $\lambda_i$ for which $A$ has the above form. That is, there is no basis for the set of symmetric matrices that consists of $n$ rank-1 matrices.</p>
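A concrete illustration of the dimension count for $n=3$ (my addition): vectorizing the six canonical symmetric basis matrices and computing the rank shows the space has dimension $3\cdot4/2=6>3$, so no fixed set of $3$ matrices can span it.

```python
# rank via plain Gaussian elimination on a list of row vectors
def rank(rows, tol=1e-9):
    rows = [r[:] for r in rows]
    rk, m = 0, len(rows[0])
    for col in range(m):
        piv = next((i for i in range(rk, len(rows)) if abs(rows[i][col]) > tol), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(len(rows)):
            if i != rk and abs(rows[i][col]) > tol:
                f = rows[i][col] / rows[rk][col]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[rk])]
        rk += 1
    return rk

# canonical symmetric basis matrix with 1s at positions (i, j) and (j, i), flattened
def sym_basis_vec(i, j, n=3):
    a = [[0.0] * n for _ in range(n)]
    a[i][j] = a[j][i] = 1.0
    return [x for row in a for x in row]

vecs = [sym_basis_vec(i, j) for i in range(3) for j in range(i, 3)]
print(rank(vecs))  # 6
```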
1,178,080
<p>How can I calculate the number of solutions of the equation $x_1 + x_2 + x_3 = 9$, where $x_1$, $x_2$ and $x_3$ are integers which can only range from <code>1</code> to <code>6</code>?</p>
Stefan4024
67,746
<p>Write $x_i = y_i + 1$, where $0\le y_i \le 5$.</p> <p>Then you have:</p> <p>$$y_1 + y_2 + y_3 = 6$$</p> <p>And according to stars and bars we have: $$\binom{6+3-1}{6} = 28 \text{ combinations}$$</p> <p>Now just exclude $(6,0,0), (0,6,0)$ and $(0,0,6)$ and you have $25$ solutions. </p> <hr> <p><strong>UPDATE:</strong></p> <p>Now, to count the solutions we should exclude, let $y_1 = 6 + z_1$, where $z_1 \ge 0$; then you have $z_1 + y_2 + y_3 = 0$, and according to stars and bars we have:</p> <p>$$\binom{0+3-1}{0} = 1 \text{ solution}$$</p> <p>Now since we have three variables, we have $3\cdot 1$ solutions to exclude.</p> <p>Note that when working with bigger numbers you'll exclude some solutions twice. To add them back, let $y_1=6+z_1$ and $y_2=6+z_2$, and you'll have:</p> <p>$$z_1 + z_2 + y_3 = -6$$</p> <p>which obviously doesn't yield any solution. Now since we have $\binom{3}{2} = 3$ such pairs we need to add back $3 \cdot 0 = 0$ solutions. Also you need to handle the case $y_1 = 6+z_1; y_2 = 6+z_2; y_3=6+z_3$, which gives you:</p> <p>$$z_1 + z_2 + z_3 = -12$$</p> <p>obviously this doesn't yield any solutions either, so we subtract $\binom{3}{3} \cdot 0 = 0$ solutions.</p>
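A brute-force cross-check of the count (my addition): enumerate all triples directly and compare with the stars-and-bars value $\binom{8}{6}-3$.

```python
from itertools import product
from math import comb

# count integer solutions of x1 + x2 + x3 = 9 with 1 <= xi <= 6
count = sum(1 for xs in product(range(1, 7), repeat=3) if sum(xs) == 9)
print(count, comb(8, 6) - 3)  # 25 25
```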
139,817
<p>Studying stability of certain non-autonomous dynamical systems on Lie groups I have come across the following question: Exactly which finite-dimensional, real Lie groups have adjoint representations that are bounded away from zero?</p> <p>Edit: by "bounded away from zero" I mean that the image of the adjoint representation avoids an open neighborhood of zero in End(g), where g is the Lie algebra. Equivalently, the closure of the image does not contain zero, or, the norm (pick your favorite one) of every element of the adjoint representation is bounded from below by one and the same positive number. By Hadamard's inequality, a determinant bound will do as well. [end edit]</p> <p>This should include compact Lie groups since for those there exists an inner product on the Lie algebra with respect to which all inner automorphisms are orthogonal, i.e. the elements of the adjoint representation have norm 1. Correct?</p> <p>Also, for abelian Lie groups the adjoint representation is trivial, hence again bounded away from zero.</p> <p>I believe that semisimple Lie groups should also be included but can not think of a valid argument.</p> <p>Is there actually a counter example? I tried the general linear group GL(2) but the elements of the adjoint representation that I tried always have (some) unit eigenvalues. Is this an accident? I would have thought that GL(n) itself occurs as an adjoint representation somehow which would then not be bounded away from zero. But evidently I am not quite understanding the different dimensions here (the adjoint representation of GL(2) is a subgroup of GL(4)).</p> <p>My apologies if this is trivial but I could not find anything that looked relevant in several books on Lie groups.</p>
David E Speyer
297
<p>The adjoint rep is always bounded away from $0$. Let $\mathfrak{g}_0$ be a simple quotient of $\mathfrak{g}$. (I consider the $1$-dimensional Lie algebra to be simple, so there is always a simple quotient.) Let $\mathfrak{h}$ be the kernel of $\mathfrak{g} \to \mathfrak{g}_0$ and let $H = \exp(\mathfrak{h})$. </p> <p>The adjoint action preserves $\mathfrak{h}$, so the adjoint representation is block upper triangular. The upper left block is the adjoint action of $G/H$ on its Lie algebra, which is $\mathfrak{g}_0$. Since $\mathfrak{g}_0$ is simple, it is unimodular, meaning that the adjoint rep has determinant $1$. This idea is taken from anton's answer.</p> <p>In summary, the adjoint rep of $G$ can be put in block upper triangular form with the upper left block a matrix of determinant $1$ (and not a $0 \times 0$ matrix).</p>
3,156,643
<blockquote> <p>Prove that <span class="math-container">$\sin(x) &lt; x$</span> when <span class="math-container">$0&lt;x&lt;2\pi.$</span></p> </blockquote> <p>I have been struggling with this problem for quite some time and I do not understand some parts of it. I am supposed to use Rolle's Theorem and the Mean Value Theorem.</p> <p>First, using the Mean Value Theorem I got <span class="math-container">$\cos(c) = \dfrac {\sin(x)}x$</span> for some <span class="math-container">$c\in(0,x)$</span>, and since <span class="math-container">$1 ≥ \cos c ≥ -1$</span>, <span class="math-container">$1 ≥ \dfrac {\sin(x)}x$</span>, which is <span class="math-container">$x ≥ \sin x$</span> for all <span class="math-container">$x ≥ 0$</span>.</p> <p>The first issue here is that I don't know how to change <span class="math-container">$≥$</span> to <span class="math-container">$&gt;$</span>. </p> <p>The second part is proving the claim when <span class="math-container">$x&lt;2\pi$</span>, and for this part I have no idea.</p> <p>I know that <span class="math-container">$2\pi &gt; 1$</span> and <span class="math-container">$1 ≥ \sin x$</span>, and my thought process ends here.</p>
Community
-1
<p>The inequality obviously holds for <span class="math-container">$x&gt;1$</span>.</p> <p>Then for <span class="math-container">$0&lt;x\le1$</span>,</p> <p><span class="math-container">$$\cos x&lt;1$$</span> and by integration from <span class="math-container">$0$</span></p> <p><span class="math-container">$$\sin x&lt;x.$$</span></p>
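A grid spot-check of the strict inequality on $(0,2\pi)$ (my addition, numerical illustration only):

```python
import math

# sample 999 interior points of (0, 2π) and verify sin(x) < x at each
xs = [k * 2 * math.pi / 1000 for k in range(1, 1000)]
ok = all(math.sin(x) < x for x in xs)
print(ok)  # True
```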
555,239
<p>Since the polynomial has three irrational roots, I don't know how to solve the equation with the familiar methods used for similar questions. Could anyone answer the question?</p>
lab bhattacharjee
33,337
<p>Rearranging, we get $$x^2-x(y+1)+y^2+y=0,$$ which is a quadratic equation in $x$.</p> <p>As $x$ must be real, the discriminant must be $\ge0$, i.e., </p> <p>$(y+1)^2-4(y^2+y)=-3y^2-2y+1\ge0$</p> <p>$\iff 3y^2+2y-1\le0$</p> <p>$\iff \{y-(-1)\}\left(y-\frac13\right)\le0$</p> <p>$\iff -1\le y\le \frac13$</p> <p>Now, use the fact that $y$ is an integer.</p>
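Finishing the enumeration by machine (my addition): the bound forces $y\in\{-1,0\}$, and for each such $y$ the quadratic in $x$ can be checked over a small window.

```python
# enumerate integer solutions of x² - x(y+1) + y² + y = 0 using -1 <= y <= 1/3
solutions = []
for y in (-1, 0):                 # the only integers in [-1, 1/3]
    for x in range(-10, 11):      # a small window suffices, x is bounded too
        if x * x - x * (y + 1) + y * y + y == 0:
            solutions.append((x, y))
print(solutions)  # [(0, -1), (0, 0), (1, 0)]
```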
555,239
<p>Since the polynomial has three irrational roots, I don't know how to solve the equation with the familiar methods used for similar questions. Could anyone answer the question?</p>
ssharma
108,216
<p>For the second equation: $$x + y = x^2 + y^2 − xy.$$</p> <p>Dividing both sides by $xy$ (assuming for now that neither $x$ nor $y$ is zero),</p> <p>$$\frac{1}{x} + \frac{1}{y} = \frac{x}{y} + \frac{y}{x} -1 = t \text{ (say)}$$</p> <p>Now, for any real number $a&gt;0$, $$a + \frac{1}{a}\ge 2,$$ so when $x$ and $y$ have the same sign the RHS is $\ge1$. And because only integer solutions are required, the LHS is $\le 2$.</p> <p>So for this equality to be true, $1\le t\le2$. Hence we need to consider cases only for $x =0,1$ and $y=0,1$ (the cases $x=0$ or $y=0$ must be checked directly in the original equation). By substituting values, 3 possible solutions are: $x=0,y=0$; $x=1,y=0$; $x=0,y=1$.</p>
1,878,573
<p><a href="https://i.stack.imgur.com/3iZQ8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3iZQ8.png" alt="enter image description here"></a></p> <p>I cannot get $f'(0)$ by using L'Hôpital's rule, because a recurring term appears. Can you help me?</p>
Zau
307,565
<p>The derivative of $f$ at $0$ can be calculated:</p> <p>$$ f'(0) = \lim_{ x \to 0 } \frac{f(x) - f(0)}{x} = \lim_{ x \to 0 }\frac{({e}^{{x}^{2}} - {e}^{{-x}^{2}} )\sin (\frac{1}{x^3})}{x}$$</p> <p>Then use the Taylor expansion ${e}^{{x}^{2}} - {e}^{{-x}^{2}} = 2x^2 + O(x^6)$ together with the squeeze theorem to see the derivative. (L'Hôpital's rule does not help here, since $\sin(\frac{1}{x^3})$ has no limit at $0$.)</p>
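A numerical illustration of the squeeze (my addition): since $|\sin(\cdot)|\le1$, the quotient is trapped between $\pm(e^{x^2}-e^{-x^2})/|x|$, and this envelope shrinks like $2|x|$, forcing the limit, and hence $f'(0)$, to be $0$.

```python
import math

# envelope (e^{x^2} - e^{-x^2}) / |x| bounding the difference quotient
def envelope(x):
    return (math.exp(x * x) - math.exp(-x * x)) / abs(x)

bounds = [envelope(x) for x in (0.1, 0.01, 0.001)]
print(bounds)  # roughly [0.2, 0.02, 0.002], i.e. about 2|x|
```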
3,227,215
<p><a href="https://i.stack.imgur.com/7pJ4t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7pJ4t.png" alt="enter image description here" /></a></p> <blockquote> <p><span class="math-container">$(O, R)$</span> is the circumscribed circle of <span class="math-container">$\triangle ABC$</span>. <span class="math-container">$I \in \triangle ABC$</span>. <span class="math-container">$AI$</span>, <span class="math-container">$BI$</span> and <span class="math-container">$CI$</span> intersect <span class="math-container">$AB$</span>, <span class="math-container">$BC$</span> and <span class="math-container">$CA$</span> respectively at <span class="math-container">$M$</span>, <span class="math-container">$N$</span> and <span class="math-container">$P$</span>. Prove that <span class="math-container">$$\large \frac{1}{AM \cdot BN} + \frac{1}{BN \cdot CP} + \frac{1}{CP \cdot AM} \le \frac{4}{3(R - OI)^2}$$</span></p> </blockquote> <p>I have provided my own solution and I would greatly appreciate any other solutions, perhaps one involving trigonometry. I deeply apologise for the misunderstanding.</p>
Bartek
671,751
<p>As pointed out in the comments, the inequality <span class="math-container">$$\large \frac{1}{AM \cdot BN} + \frac{1}{BN \cdot CP} + \frac{1}{CP \cdot AM} \le \frac{4}{3(R - OI)}$$</span> is not homogeneous and therefore cannot be correct. Take any triangle and any point: even if the given inequality is satisfied for this configuration, after scaling it by <span class="math-container">$a$</span>, for sufficiently small <span class="math-container">$a$</span>, it will stop being correct. But somehow you have managed to convert it to the homogeneous inequality <span class="math-container">$$\frac{1}{AI} + \frac{1}{BI} + \frac{1}{CI} \le \frac{2}{R - OI},$$</span> which still seems not to be true. And even if it were, you obtained it by means of this wrong inequality (no surprise, since homogeneous and non-homogeneous inequalities can't be equivalent): <span class="math-container">$$\left(\frac{1}{AM} + \frac{1}{BN} + \frac{1}{CP}\right)^2 \le \left(\frac{AI}{AM} + \frac{BI}{BN} + \frac{CI}{CP}\right)\left(\frac{1}{AI} + \frac{1}{BI} + \frac{1}{CI}\right),$$</span> which is a flawed application of the CS inequality. The correct application is: <span class="math-container">$$\left(\frac{1}{\sqrt{AM}} + \frac{1}{\sqrt{BN}} + \frac{1}{\sqrt{CP}}\right)^2 \le \left(\frac{AI}{AM} + \frac{BI}{BN} + \frac{CI}{CP}\right)\left(\frac{1}{AI} + \frac{1}{BI} + \frac{1}{CI}\right)$$</span></p>
821,875
<p>A school director must randomly select 6 teachers to participate in a training session. There are 30 teachers at the school. In how many different ways can these teachers be selected, if the order of selection does not matter?</p>
DSinghvi
148,018
<p>You should first read the permutations and combinations section of a textbook; the full theory can't be explained here. The answer is $$\binom{30}{6}=\frac{30\cdot 29\cdot 28\cdot 27\cdot 26\cdot 25}{6\cdot 5\cdot 4\cdot 3\cdot 2\cdot 1}=593{,}775.$$</p>
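The count can be verified both with the library binomial and with the by-hand product (my addition):

```python
from math import comb, factorial

# number of 6-teacher subsets of 30, order irrelevant
n_ways = comb(30, 6)
by_hand = (30 * 29 * 28 * 27 * 26 * 25) // factorial(6)
print(n_ways, by_hand)  # 593775 593775
```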
2,616,847
<p>By definition, a function $f:\Bbb R^n \to \Bbb R^m$ is linear if</p> <ol> <li>$f(x+y)=f(x)+f(y) \quad \forall x,y\in \Bbb R^n$</li> <li>$f(ax)=af(x) \quad \forall x\in \Bbb R^n,\ \forall a\in\Bbb R$</li> </ol> <p>I want to prove that $f$ is linear iff $f(x)=Ax$ for some $A\in\Bbb R^{m\times n}$, and that $A$ is unique.</p> <p>I tried to prove it by showing $f(x)=f(x\cdot1)=xf(1)=ax$, as in the one-dimensional case, but $x$ and $1$ are not of the same dimension. How can I do it?</p> <p>And to prove that $A$ is unique, it means that if</p> <p>$Ax=Bx$ for all $x$, so that $0=f(x)-f(x)=Ax-Bx=(A-B)x$, then $A=B$. </p> <p>Am I understanding it right?</p>
Community
-1
<p>To prove $f(x)=Ax$ is linear: the two parts of your definition are equivalent to saying $f(ax+by)=af(x)+bf(y)$.</p> <p>Now, $f(ax+by)=A(ax+by)=aAx+bAy=af(x)+bf(y)$. So it is linear.</p> <p>I think your proof of uniqueness is acceptable. Just make sure that $Ax=Bx$ is assumed for <em>all</em> $x$; a single vector such as $x=0$ is not enough.</p>
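A quick numeric sanity check of the identity $f(ax+by)=af(x)+bf(y)$ for $f(x)=Ax$ (my addition; the matrix, vectors, and scalars are arbitrary random choices):

```python
import random

# plain matrix-vector product over nested lists
def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

rng = random.Random(1)
A = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
x = [rng.uniform(-1, 1) for _ in range(3)]
y = [rng.uniform(-1, 1) for _ in range(3)]
a, b = 2.5, -0.75

lhs = matvec(A, [a * xi + b * yi for xi, yi in zip(x, y)])
rhs = [a * u + b * v for u, v in zip(matvec(A, x), matvec(A, y))]
ok = all(abs(u - v) < 1e-12 for u, v in zip(lhs, rhs))
print(ok)  # True
```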
2,088,346
<p>I've got the domain of function and I've attempted to find the first derivative at zero but it results in a quartic equation that is too difficult for me to solve. </p> <p>$f'(x) = \frac{4x-3}{\sqrt{2x^2-3x+4}} + \frac{2x-2}{\sqrt{x^2-2x}}$</p> <p>For $f'(x) = 0$:</p> <p>$(4x-3)^2(x^2-2x) = (2-2x)^2(2x^2-3x+4)$</p> <p>Therefore $8x^4-28x^3+9x^2+26x-16 = 0$</p> <p>I've probably made an error in my calculations but I'm sure that this is not how you approach the question and I'm not quite sure how to do it otherwise. According to the textbook the answer is 2.</p>
xxyshz
377,743
<p>We know that $x\in(-\infty,0]\cup[2,+\infty)$. If we sketch the graphs of $(x^{2}-2x)^{\frac{1}{2}}$ and $(2x^{2}-3x+4)^{\frac{1}{2}}$, we find that the minimum of the function is $2$, attained at $x=0$.</p>
2,088,346
<p>I've got the domain of function and I've attempted to find the first derivative at zero but it results in a quartic equation that is too difficult for me to solve. </p> <p>$f'(x) = \frac{4x-3}{\sqrt{2x^2-3x+4}} + \frac{2x-2}{\sqrt{x^2-2x}}$</p> <p>For $f'(x) = 0$:</p> <p>$(4x-3)^2(x^2-2x) = (2-2x)^2(2x^2-3x+4)$</p> <p>Therefore $8x^4-28x^3+9x^2+26x-16 = 0$</p> <p>I've probably made an error in my calculations but I'm sure that this is not how you approach the question and I'm not quite sure how to do it otherwise. According to the textbook the answer is 2.</p>
zipirovich
127,842
<p>First of all, you have a slight error in your derivative: you're missing a factor of $\frac{1}{2}$ in both terms of $f'$. Fortunately, it doesn't affect solving the equation $f'(x)=0$.</p> <p>You said that you found the domain of this function, so you know that the domain is $(-\infty,0]\cup[2,+\infty)$.</p> <p>So we need to solve the following: $$f'(x)=\frac{4x-3}{2\sqrt{2x^2-3x+4}}+\frac{2x-2}{2\sqrt{x^2-2x}}=0.$$ The domain of the derivative is $(-\infty,0)\cup(2,+\infty)$, and it's fairly easy to see that no point in this domain can satisfy the equation: if $x&gt;2$, then both terms of $f'$ are positive, and if $x&lt;0$, then both terms of $f'$ are negative.</p> <p>However, we also have two more critical points determined by the condition that $f'(x)$ DNE, which happen to be the endpoints $x=0$ and $x=2$ of the domain. So extreme values can only occur at these two points. Both give us endpoint minima (we can conclude that from the signs of $f'$ determined above). Plugging them into the original function yields the answer.</p> <p><strong>SOME EXTRA EXPLANATION</strong></p> <p>For the sake of completeness of this answer, let's see what happens if we actually try to solve the equation. Multiplying by $2$ and rearranging terms, we get: $$\frac{4x-3}{\sqrt{2x^2-3x+4}}=-\frac{2x-2}{\sqrt{x^2-2x}}.$$ Now we've got to be careful! We're going to square both sides, which may create extraneous roots, so don't forget to verify the roots afterwards. Squaring and applying the "cross-multiply" property indeed produces the equation $$8x^4-28x^3+9x^2+26x-16=0.$$ According to Wolfram Mathematica, this equation has only two real roots $x\approx-0.97$ and $x\approx2.76$. But substituting them back into $f'(x)=0$ shows that they do <strong>not</strong> satisfy the original equation &mdash; they both happen to be such extraneous "roots" introduced by squaring. </p>
2,088,346
<p>I've got the domain of function and I've attempted to find the first derivative at zero but it results in a quartic equation that is too difficult for me to solve. </p> <p>$f'(x) = \frac{4x-3}{\sqrt{2x^2-3x+4}} + \frac{2x-2}{\sqrt{x^2-2x}}$</p> <p>For $f'(x) = 0$:</p> <p>$(4x-3)^2(x^2-2x) = (2-2x)^2(2x^2-3x+4)$</p> <p>Therefore $8x^4-28x^3+9x^2+26x-16 = 0$</p> <p>I've probably made an error in my calculations but I'm sure that this is not how you approach the question and I'm not quite sure how to do it otherwise. According to the textbook the answer is 2.</p>
Michael Rozenberg
190,319
<p>Let $f(x)=\sqrt{2x^2-3x+4}$, $g(x)=\sqrt{x^2-2x}$ and $h(x)=\sqrt{x}$.</p> <p>Since $h$ is an increasing function, </p> <p>we see that $f$ and $g$ are decreasing functions on $(-\infty,0]$</p> <p>and $f$ and $g$ are increasing functions on $[2,+\infty)$.</p> <p>Thus, $$\min\limits_{(-\infty,0]\cup[2,+\infty)}\left(\sqrt{2x^2-3x+4}+ \sqrt{x^2-2x}\right)=\min\{(f+g)(0),(f+g)(2)\}=2$$</p>
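A numeric confirmation of the endpoint comparison (my addition): evaluate the function at both endpoints of the domain and scan the domain to confirm nothing beats $f(0)=2$.

```python
import math

def f(x):
    return math.sqrt(2 * x * x - 3 * x + 4) + math.sqrt(x * x - 2 * x)

print(f(0))            # 2.0, the minimum
print(round(f(2), 4))  # 2.4495, i.e. sqrt(6)
# scan both branches of the domain (-inf, 0] and [2, +inf) on a grid
xs = [x / 100 for x in range(-1000, 1)] + [x / 100 for x in range(200, 1200)]
scan_min = min(f(x) for x in xs)
print(scan_min)        # 2.0
```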
1,848,150
<blockquote> <p>Let $m$ and $c$ be non-zero real numbers and $X$ the subspace of $\mathbb R^2$ given by $X =\{ (x,y): y = mx + c \}$. Prove that $X$ is homeomorphic to $\mathbb R$.</p> </blockquote> <p>I am struggling to figure out how to define a homeomorphic function between these two sets, can anyone please help?</p>
Alex Ortiz
305,215
<p>Define the bijection $f : \mathbb R \to X$ by $fx = (x, mx + c)$. Let $(p_n)$ be a convergent sequence in $\mathbb R$ with $p_n \to p$. Then, the sequence $(fp_n) = (p_n, mp_n + c)$ also converges to $(p, mp + c)$. The inverse bijection $f^{-1} : X \to \mathbb R$ is defined by $f^{-1}(x, mx + c) = x$. So, consider a convergent sequence $(q_n, mq_n + c)$ in $X$ with $(q_n, mq_n + c) \to (q, mq + c) \in X$. It follows that since $(q_n, mq_n + c) \to (q, mq + c)$, the sequence $f^{-1}(q_n,mq_n + c) = (q_n)$ also converges and $q_n \to q$. Hence, both $f$ and $f^{-1}$ preserve sequential convergence, and $f$ so defined is a homeomorphism.</p>
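A round-trip check of the two maps (my addition; $m$ and $c$ are arbitrary nonzero choices, and this only checks the bijection, not continuity):

```python
m, c = 2.0, -3.0   # arbitrary nonzero slope and intercept

def f(x):
    return (x, m * x + c)       # R -> X

def f_inv(p):
    return p[0]                 # X -> R, first-coordinate projection

for x in (-1.5, 0.0, 4.0):
    assert f_inv(f(x)) == x     # f_inv ∘ f = id on R
    p = f(x)
    assert f(f_inv(p)) == p     # f ∘ f_inv = id on X
print("ok")
```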
1,848,150
<blockquote> <p>Let $m$ and $c$ be non-zero real numbers and $X$ the subspace of $\mathbb R^2$ given by $X =\{ (x,y): y = mx + c \}$. Prove that $X$ is homeomorphic to $\mathbb R$.</p> </blockquote> <p>I am struggling to figure out how to define a homeomorphic function between these two sets, can anyone please help?</p>
snulty
128,967
<p>Here's a relatively straightforward way to see it.</p> <p>First note that $\iota :\Bbb R \to \Bbb R\times \{0\}\subset \Bbb R^2$ with the subspace topology is a homeomorphism.</p> <p>Then, let $m=\tan\theta$ and $A:\Bbb R^2\to \Bbb R^2$ by $(x,y)\mapsto \pmatrix{\cos\theta &amp; -\sin \theta \\ \sin\theta &amp; \cos\theta}\pmatrix{x\\ y}$. A is also a homeomorphism.</p> <p>Finally let $T_c:\Bbb R^2\to \Bbb R^2$ by $(x,y)\mapsto (x,y+c)$, another homeomorphism.</p> <p>Altogether $X= T_c(A(\iota(\Bbb R)))$, since $A(\iota(\Bbb R))=\{a(\cos\theta,\sin\theta)\mid a\in \Bbb R\}$ and after applying $T_c$ we get the set $\{(a\cos\theta, a\sin\theta+c)\mid a\in \Bbb R\}$ which satisfies $$y= a\sin\theta + c = a\cos\theta \tan\theta +c= mx+c$$</p> <p>By appropriate restrictions of the above maps we have a homeomorphism $f:\Bbb R\to X$ with $$f=T_c\bigg|_{X_1} \circ A\bigg|_{X_2}\circ \iota$$ where $X_2=\iota(\Bbb R)$ and $X_1= A(X_2)$.</p>
1,848,150
<blockquote> <p>Let $m$ and $c$ be non-zero real numbers and $X$ the subspace of $\mathbb R^2$ given by $X =\{ (x,y): y = mx + c \}$. Prove that $X$ is homeomorphic to $\mathbb R$.</p> </blockquote> <p>I am struggling to figure out how to define a homeomorphic function between these two sets, can anyone please help?</p>
tomasz
30,222
<p><strong>Hint</strong>: It might be easier to prove a more general fact. Let $f\colon X\to Y$ be any continuous function (between arbitary topological spaces). Show that $x\mapsto (x,f(x))$ defines a homeomorphic embedding of $X$ into $X\times Y$.</p>
2,099,828
<p>Can anyone show me that both $\cos t$ and $\sin t$ are eigen-signals? Here is a little background on eigen-signals. </p> <blockquote> <p>The output of a continuous-time, linear time-invariant system is denoted by $T\{x(t)\}$, where $x(t)$ is the input signal. A signal $z(t)$ is called an eigen-signal of the system $T$ when $T\{z(t)\} = \gamma z(t)$, where $\gamma$ is a complex number, in general, and is called an eigenvalue of $T$. <strong>EDIT</strong>: Suppose the impulse response of the system $T$ is real and even.</p> </blockquote>
WalterJ
344,100
<p>As a sort of hint:</p> <p>I suppose you have heard of convolution \begin{equation} y(t)=\int_{-\infty}^{\infty}x(\tau)h(t-\tau)d\tau=\int_{-\infty}^{\infty}h(\tau)x(t-\tau)d\tau \end{equation} Then what happens if our input is of the form $x(t)=e^{st}$? In that case $y(t)=H(s)e^{st}$ with $H(s)=\int_{-\infty}^{\infty}h(\tau)e^{-s\tau}d\tau$, so $e^{st}$ is an eigenfunction of the LTI system with eigenvalue $H(s)$. Now for your question, can we for example write $\cos$ in a more useful way? Yes, by using Euler's formula \begin{equation} e^{j\theta}=\cos(\theta)+j\sin(\theta) \implies \cos(\theta)=\frac{1}{2}e^{j\theta}+\frac{1}{2}e^{-j\theta} \end{equation} Then you could already see everything you need just from there, but it might be easier to first generalize our previous result to Fourier series. There the idea is that you represent a signal as a sum of harmonically related eigenfunctions. Why? Because as we saw, with such an input the output is fairly easy to compute! Have a look at the equations below: \begin{equation} x(t)=\sum_{k=-\infty}^{\infty}a_ke^{jk\omega_0 t} \to y(t)=\sum_{k=-\infty}^{\infty}a_kH(jk\omega_0)e^{jk\omega_0 t}=\sum_{k=-\infty}^{\infty}b_ke^{jk\omega_0 t} \end{equation}</p> <p>I hope this helps, but I would suggest having a look at the book and/or lectures by Alan Oppenheim, which is what I used.</p>
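As a numeric sanity check (my own addition, not part of the answer; the filter taps and test frequency below are arbitrary choices), the discrete-time analogue of the eigenfunction property is easy to verify: convolving a complex exponential with an FIR impulse response returns the same exponential scaled by the frequency response.

```python
# Sketch: pass x[n] = e^{j*w0*n} through an FIR filter h[n] by direct
# convolution and check that the output is the same exponential scaled
# by H(e^{j*w0}) = sum_k h[k] e^{-j*w0*k}.
import cmath

h = [1.0, 2.0, 3.0, 2.0, 1.0]   # impulse response (real, symmetric taps)
w0 = 0.7                        # arbitrary test frequency

def x(n):                       # candidate eigen-signal
    return cmath.exp(1j * w0 * n)

def y(n):                       # y[n] = sum_k h[k] x[n-k]
    return sum(h[k] * x(n - k) for k in range(len(h)))

H = sum(h[k] * cmath.exp(-1j * w0 * k) for k in range(len(h)))

# The ratio y[n]/x[n] is the same complex constant H for every n.
ratios = [y(n) / x(n) for n in range(10)]
```

The ratio $y[n]/x[n]$ equals $H$ at every sample, which is exactly the eigen-signal property the answer describes.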
1,039,141
<blockquote> <p>Let <span class="math-container">$X = \mathbb{R}$</span> and <span class="math-container">$Y = \{x \in \mathbb{R} :x ≥ 1\}$</span>, and define <span class="math-container">$G : X → Y$</span> by <span class="math-container">$$G(x) = e^{x^2}.$$</span> Prove that <span class="math-container">$G$</span> is onto.</p> </blockquote> <p>Is this going along the right path, and if so, how do I get the function to equal <span class="math-container">$y$</span>?</p> <blockquote> <p><span class="math-container">$G: \mathbb{R} \to \mathbb{N}_1$</span>. Let <span class="math-container">$y \in \mathbb{N}_1$</span>.</p> <p><em>claim:</em> <span class="math-container">$\sqrt{\ln y}$</span> maps to <span class="math-container">$y$</span>.</p> <p>Does <span class="math-container">$\sqrt{\ln y}$</span> belong to <span class="math-container">$\mathbb{N}_1$</span>? Yes, because <span class="math-container">$y \in \mathbb{N}_1$</span>, <span class="math-container">$G( \sqrt{\ln y})=e^{(\sqrt{\ln y})^2}$</span>.</p> </blockquote>
jchun
187,232
<p>Pick an arbitrary element from within the range of the function, and show that the preimage of the element is non-empty.</p>
2,337,332
<p>I have tried a lot, but I am unable to find a starting point.</p>
Siong Thye Goh
306,553
<p>Hint:</p> <p>Since</p> <p>$$9 \equiv -5 \pmod{14}$$</p> <p>$$9^{16} - 5^{16} \equiv (-5)^{16} - 5^{16} \pmod{14}$$</p>
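A quick numeric check of the hint (my own addition): since $9 \equiv -5 \pmod{14}$ and the exponent $16$ is even, $9^{16} \equiv (-5)^{16} = 5^{16} \pmod{14}$, so the difference is divisible by $14$.

```python
# Three-argument pow computes modular exponentiation efficiently.
assert pow(9, 16, 14) == pow(5, 16, 14)   # 9^16 ≡ 5^16 (mod 14)
assert (9**16 - 5**16) % 14 == 0          # so 14 divides the difference
quotient = (9**16 - 5**16) // 14          # exact integer quotient
```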
2,298,665
<p>*Prior to the body: note that the title might be insufficient or inappropriate. Please edit it if needed. </p> <p>I am proving the claim below: </p> <p>Let $f: [a,b] \to \Bbb R$ be of bounded variation. </p> <p>$f(x) \ge c \gt 0$ for all $x \in [a, b]$, where $c$ is a constant</p> <p>$\Rightarrow$ $h(x) = {1 \over f(x)}$ is of bounded variation on $[a, b]$.</p> <p>To prove it, I set up an inequality such as $\mid {1 \over f(x_i)} - {1 \over f(x_{i-1})} \mid \le \mid {1 \over f(x_i)}\mid + \mid {1 \over f(x_{i-1})}\mid \le {2 \over c}$</p> <p>Then I want to derive from the above the fact that $\sum_{i=1}^{n}\mid {1 \over f(x_i)} - {1 \over f(x_{i-1})} \mid \le {2 \over c}n$ (*)</p> <p>but to derive from (*) the fact that $\sum_{i=1}^{n}\mid {1 \over f(x_i)} - {1 \over f(x_{i-1})} \mid \le {2 \over c}n \lt\infty$, I need a guarantee that the partition in the definition of bounded variation is finite. </p> <p>So is it true that the definition of bounded variation only requires finite partitions?</p>
Community
-1
<p>This is a pure transcendental extension. $\mathbb{Q}(\pi)$ is the field of rational functions of one variable; any additional 'arithmetic' meaning of $\pi$ is completely irrelevant.</p> <p>It's <a href="https://math.stackexchange.com/questions/13129/automorphism-of-the-field-of-rational-functions">well-known</a> that $\operatorname{Aut}(F(t) / F)$ is the group of linear fractional transformations $t \mapsto \frac{a+bt}{c+dt}$ where $a,b,c,d \in F$ and $ad-bc \neq 0$, which in turn is isomorphic to $\operatorname{PGL}(2, F)$.</p>
1,930,901
<p>I need to prove that:</p> <p>$$f(z) = \frac{Re(z)}{|z|}$$</p> <p>and </p> <p>$$g(z) = \frac{Im(z^2)}{|z^2|}$$</p> <p>both have a limit at $z=0$.</p> <p>If I see $z$ as $z = x+iy$ I have:</p> <p>$$\lim_{(x,y)\to (0,0)}\frac{x}{\sqrt{x^2+y^2}}$$</p> <p>but if I take this limit along $y = x$ we have:</p> <p>$$\lim_{(x,y)\to (0,0)}\frac{x}{\sqrt{2}\sqrt{x^2}}$$</p> <p>won't that depend on the sign of $x$? So wouldn't this limit be nonexistent?</p> <p>For $g$ we should have:</p> <p>$$\lim_{(x,y)\to (0,0)}\frac{2xy}{\sqrt{(x^2-y^2)^2+(2xy)^2}} = \lim_{(x,y)\to (0,0)}\frac{2xy}{\sqrt{x^4-2x^2y^2+y^4+4x^2y^2}} = $$</p> <p>$$\lim_{(x,y)\to (0,0)}\frac{2xy}{x^2+y^2}$$</p> <p>doesn't this go to infinity as well?</p>
mathworker21
366,088
<p>First of all, the result should be intuitive: if $a_j \to a$, then $a_j$ becomes closer and closer to $a$, so as $j \to \infty$, the averages $\frac{a_1+a_2+\dots+a_n}{n}$ should tends toward $a$. We can make this intuition rigorous and thus prove the result by using epsilons and deltas; we don't need any convergence tests.</p> <p>First take $\epsilon &gt; 0$ and $M$ such that $|a_m-a| &lt; \epsilon$ for all $m \ge M$. Take $N &gt; M$. We want to eventually choose $N$ large enough so that enough of the terms in the numerator of our average are $\epsilon$ close to $a$. So, </p> <p>$|\frac{a_1+\dots+a_N}{N}-a| = |\frac{a_1+\dots+a_M}{N}+\frac{a_{M+1}+\dots+a_N}{N}-a| \le |\frac{(a_1-a)+\dots+(a_M-a)}{N}|+|\frac{(a_{M+1}-a)+\dots+(a_N-a)}{N}| \le \frac{C_M}{N}+\frac{(N-M)\epsilon}{N}$ </p> <p>where $C_M$ is just a constant depending on $M$. We thus see that $\limsup_{N \to \infty} |\frac{a_1+\dots+a_N}{N}-a| \le \epsilon$. Since, $\epsilon$ is arbitrary, we get that </p> <p>$0 \le \liminf_{N \to \infty} |\frac{a_1+\dots+a_N}{N}-a| \le \limsup_{N \to \infty} |\frac{a_1+\dots+a_N}{N}-a| = 0$</p> <p>implying the limit exists and is $0$.</p>
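The statement is also easy to watch numerically (the concrete sequence $a_j = a + (-1)^j/j$ below is my own choice, not part of the proof): the sequence converges to $a$, and the running averages follow it.

```python
# Cesàro averages: if a_j -> a, then (a_1 + ... + a_n)/n -> a.
a = 2.0

def term(j):
    return a + (-1) ** j / j    # converges to a as j -> infinity

N = 100_000
total = 0.0
avg_err = []                    # |average - a| at a few checkpoints
for n in range(1, N + 1):
    total += term(n)
    if n in (10, 1000, N):
        avg_err.append(abs(total / n - a))

# avg_err shrinks as n grows, illustrating the convergence of averages.
```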
1,930,901
<p>I need to prove that:</p> <p>$$f(z) = \frac{Re(z)}{|z|}$$</p> <p>and </p> <p>$$g(z) = \frac{Im(z^2)}{|z^2|}$$</p> <p>both have a limit at $z=0$.</p> <p>If I see $z$ as $z = x+iy$ I have:</p> <p>$$\lim_{(x,y)\to (0,0)}\frac{x}{\sqrt{x^2+y^2}}$$</p> <p>but if I take this limit along $y = x$ we have:</p> <p>$$\lim_{(x,y)\to (0,0)}\frac{x}{\sqrt{2}\sqrt{x^2}}$$</p> <p>won't that depend on the sign of $x$? So wouldn't this limit be nonexistent?</p> <p>For $g$ we should have:</p> <p>$$\lim_{(x,y)\to (0,0)}\frac{2xy}{\sqrt{(x^2-y^2)^2+(2xy)^2}} = \lim_{(x,y)\to (0,0)}\frac{2xy}{\sqrt{x^4-2x^2y^2+y^4+4x^2y^2}} = $$</p> <p>$$\lim_{(x,y)\to (0,0)}\frac{2xy}{x^2+y^2}$$</p> <p>doesn't this go to infinity as well?</p>
Beni Bogosel
7,327
<p>A nice (and immediate) proof can be given using the <a href="https://en.wikipedia.org/wiki/Stolz%E2%80%93Ces%C3%A0ro_theorem" rel="nofollow noreferrer">Stolz–Cesàro theorem</a>. This result is quite useful and helps solve many such problems rapidly. It is almost like l'Hopital's rule, which allows one to &quot;differentiate&quot; the numerator and denominator to (hopefully) reduce the problem to a simpler question.</p> <p>Let <span class="math-container">$A_n = a_1+...+a_n$</span> and <span class="math-container">$B_n=n$</span>. <span class="math-container">$B_n$</span> is strictly increasing and <span class="math-container">$B_n \to \infty$</span>.</p> <p>Compute <span class="math-container">$\frac{A_{n+1}-A_n}{B_{n+1}-B_n} = a_{n+1} \to a$</span>. The referenced theorem states that <span class="math-container">$\frac{A_n}{B_n}$</span> is also convergent and has the same limit.</p>
2,512,461
<blockquote> <p>For any non-zero vector $x$, $$ \lVert x\rVert_0 \geq \frac{\lVert x\rVert_1^2}{\lVert x\rVert_2^2} $$</p> </blockquote> <p>I am trying to prove this inequality using the definitions of the $\ell_0$ "norm" (the number of none zero elements in the vector) and the definitions of the $\ell_1$ and $\ell_2$ norms; but I'm getting nowhere... I tried using $1$ or $n$ to prove it but it didn't help.. </p>
Brian Borchers
6,310
<p>Rearrange the elements of $x$ so that the nonzeros are in entries $1$, $2$, $\ldots$, $n$, where $n=\| x \|_{0}$. You can rewrite the inequality as</p> <p>$n(x_{1}^{2}+\ldots+x_{n}^{2}) \geq (|x_{1}|+\ldots +|x_{n}|)^{2}$. </p> <p>After taking the square root of both sides, this is equivalent to the well-known inequality for the 1-norm and 2-norm, </p> <p>$\sqrt{n}\| x \|_{2} \geq \| x \|_{1}.$</p> <p>This well-known inequality is easy to prove using the Cauchy–Schwarz inequality with $|x|$ and a vector of all ones.</p>
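For what it's worth, the inequality is easy to spot-check numerically (the random sparse test vectors below are my own, not part of the answer):

```python
# Check ||x||_0 * ||x||_2^2 >= ||x||_1^2 (the original inequality with
# the division cleared) on random sparse vectors.
import random

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-1, 1) if random.random() < 0.3 else 0.0
         for _ in range(20)]
    l0 = sum(1 for v in x if v != 0.0)      # number of nonzeros
    l1 = sum(abs(v) for v in x)             # 1-norm
    l2sq = sum(v * v for v in x)            # squared 2-norm
    if l0 > 0:                              # inequality is for nonzero x
        assert l0 * l2sq >= l1 * l1 - 1e-12
```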
207,515
<p>Suppose I have the following list, </p> <pre><code>l = {{"b", "c", "d"}, {"e", "b"}, {"a", "b", "d", "e"}} </code></pre> <p>and further suppose I have the following association, </p> <pre><code>l1=&lt;|1 -&gt; "a", 2 -&gt; "b", 3 -&gt; "c", 4 -&gt; "d", 5 -&gt; "e"|&gt; </code></pre> <p>I wonder how I can replace each letter in my list with its key in the association, so that I get </p> <pre><code>{{2, 3, 4}, {5, 2}, {1, 2, 4, 5}} </code></pre>
Carl Woll
45,431
<p>For this particular mapping, you could also use <a href="http://reference.wolfram.com/language/ref/ToCharacterCode" rel="nofollow noreferrer"><code>ToCharacterCode</code></a>:</p> <pre><code>ToCharacterCode[StringJoin /@ l] - First@ToCharacterCode@"a" + 1 </code></pre> <blockquote> <p>{{2, 3, 4}, {5, 2}, {1, 2, 4, 5}}</p> </blockquote>
2,354,467
<p>I am trying to evaluate the following \begin{equation} I(a,b) = \int_{a}^{\frac{a+b}{2}} (x-a)^{\alpha-1} \, x^n \, dx + \int_{\frac{a+b}{2}}^{b} (b-x)^{\alpha-1} \, x^n \, dx, \end{equation} where $0&lt;\alpha&lt;1$. Wolfram alpha gives no solution. I tried integration by parts without success. My problem is that I don't understand well the evaluation of the limit of the upper limit and this integrand.</p>
tempx
357,017
<p>Why did you separate the integration into two pieces? If I am not misreading anything, we can state the integral as </p> <p>$$ I(a,b) = \int_{a}^{b} (x-a)^{\alpha-1}x^ndx $$</p> <p>Please correct me if this is wrong. Then, by a change of variables (it may not be necessary, but it eases the calculation) and assuming that $n$ is an integer, we can write the above integral as</p> <p>$$ I(a,b) = \int_{0}^{b-a} t^{\alpha-1}(t+a)^ndt $$</p> <p>where I used $t=x-a$. By using the binomial theorem $(t+a)^n=\sum_{k=0}^n\binom{n}{k}t^ka^{n-k}$, we can write $I(a,b)$ as</p> <p>$$ I(a,b) = \int_{0}^{b-a} t^{\alpha-1}\sum_{k=0}^n\binom{n}{k}t^ka^{n-k}dt\implies \sum_{k=0}^n\binom{n}{k}a^{n-k}\int_{0}^{b-a}t^{k+\alpha-1}dt \\ \implies I(a,b)= \sum_{k=0}^n\binom{n}{k}a^{n-k}\frac{t^{k+\alpha}}{k+\alpha}|^{b-a}_0= \sum_{k=0}^n\binom{n}{k}a^{n-k}\frac{(b-a)^{k+\alpha}}{k+\alpha} $$</p> <p>If $n$ is a real number then the summation becomes infinite, as stated at <a href="http://mathworld.wolfram.com/BinomialTheorem.html" rel="nofollow noreferrer">http://mathworld.wolfram.com/BinomialTheorem.html</a>. Let me know if there are any errors in my derivation.</p>
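As an independent numeric check of the final formula (my own verification, not part of the answer): substituting $u = t^\alpha$ turns the singular integral $\int_0^{b-a} t^{\alpha-1}(t+a)^n\,dt$ into the smooth integral $\frac{1}{\alpha}\int_0^{(b-a)^\alpha} (u^{1/\alpha}+a)^n\,du$, which a plain midpoint rule handles well, and the result matches the binomial-series expression.

```python
# Compare a midpoint-rule value of the integral against the series
# formula sum_k C(n,k) a^{n-k} (b-a)^{k+alpha} / (k+alpha).
from math import comb

a, b, alpha, n = 1.0, 2.0, 0.5, 3   # sample parameters (my choice)

closed = sum(comb(n, k) * a**(n - k) * (b - a)**(k + alpha) / (k + alpha)
             for k in range(n + 1))

# Midpoint rule on the substituted, nonsingular integrand.
U = (b - a) ** alpha
N = 200_000
h = U / N
numeric = (h / alpha) * sum((((i + 0.5) * h) ** (1.0 / alpha) + a) ** n
                            for i in range(N))
```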
62,539
<p>I am using two books for my calculus refresher.</p> <ol> <li>Thomas' Calculus </li> <li>Higher Math for Beginners by Ya. B. Zeldovich</li> </ol> <p><strong>My question is:</strong> When applying integral calculus to calculate the volumes of solids generated by curves revolved around an axis, we use slices of 'cylinders' to approximate the volume of the resulting solid and then integrate the sum of those infinitesimal cylinders. However, when we use the same techniques to calculate the surface area of the surfaces generated by revolving curves around an axis, we consider the 'slope' of the differential length of curve 'dx', calculate the total length of the curve and derive the required surface area.</p> <p>Are we not supposed to use the same 'slope' when calculating the volumes of the infinitesimal 'cylinders'? Shouldn't we use sliced portions of 'cones' as the infinitesimal volumes? When it comes to calculating the volumes of solids of revolution, why do we neglect the slope of the curve over the differential length and simply assume that each slice is an infinitesimal cylinder?</p> <p>Ex: Let us say we want to calculate the surface area and the volume of the solid generated when the parabola y = 10x^2 is revolved about the y-axis, with limits of x from 0 to 5.</p> <p>In such cases, when calculating the volume of the solid, we consider infinitesimal 'cylinders', ignoring the 'slope' of the curve for the differential element 'dx', but when calculating the surface area, we consider the 'slope' of the differential element 'dx'.</p>
TonyK
1,508
<p>You could use "sliced portions of cones" as your infinitesimal volumes, but the answer would be the same as if you used cylinders -- the difference between the two tends to zero faster than the volume itself, so it disappears in the limit. This is not the case with the surface area of a sliced portion of a cone -- its area is greater than the area of a cylindrical slice by a factor that tends to $\sqrt{1+(dy/dx)^2}$ in the limit.</p>
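This can be made concrete numerically (the example is mine, not from the answer: revolve $y=x$ on $[0,1]$ about the $x$-axis, which has exact volume $\pi/3$ and exact lateral area $\pi\sqrt{2}$).

```python
# Compare cylinder slices vs cone-frustum slices for both the volume
# and the lateral surface area of the cone generated by y = x, 0<=x<=1.
from math import pi, sqrt

N = 100_000
dx = 1.0 / N
vol_cyl = vol_frustum = area_cyl = area_frustum = 0.0
for i in range(N):
    r1, r2 = i * dx, (i + 1) * dx            # radii at slice ends (y = x)
    slant = sqrt(dx * dx + (r2 - r1) ** 2)    # slanted edge length
    vol_cyl += pi * r1 * r1 * dx                                # cylinder
    vol_frustum += (pi / 3.0) * (r1 * r1 + r1 * r2 + r2 * r2) * dx  # frustum
    area_cyl += 2.0 * pi * r1 * dx                              # cylinder side
    area_frustum += pi * (r1 + r2) * slant                      # frustum side
```

Both volume estimates converge to $\pi/3$, but only the frustum estimate of the area converges to $\pi\sqrt{2}$; the cylinder estimate tends to $\pi$, off by exactly the factor $\sqrt{1+(dy/dx)^2}=\sqrt{2}$ mentioned in the answer.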
112,651
<p>What is known about the set of well orderings of $\aleph_0$ in set theory without choice? I do not mean the set of countable well-order types, but the set of all subsets of $\aleph_0$ which (relative to a pairing function) code well orderings. And I would be interested in an answer in, say, ZF without choice. My actual concern is higher order arithmetic.</p> <p>I would not be surprised if ZF proves there are continuum many. But I don't know.</p> <p>At the opposite extreme, is it provable in ZF that there are not more well orderings of $\aleph_0$ than there are countable well-order types?</p>
Andrés E. Caicedo
6,085
<p>Colin, there are continuum many, as you suspect. </p> <p>In fact, there are continuum many well-orderings of type $\omega$. The set of infinite binary sequences has size continuum. Given such a sequence $x=(x_0,x_1,\dots)$, let $i\in\{0,1\}$ be least such that $x_n=i$ infinitely often. Consider the enumeration of the naturals $a=(a_0,a_1,\dots)$ that begins with $a_0=i$. Having defined $a_n$, let $a_{n+1}$ be the first natural number not used so far, if $x_n=i$, and let $a_{n+1}$ be the second number not used so far, otherwise.</p> <p>Since there are infinitely many $k$ such that $x_k=i$, the $a_n$ enumerate all naturals. Since from the sequence we can easily recover $x$, this assignment $x\mapsto a$ is injective. The ordering $a_0\lt a_1\lt a_2\lt\dots$ is a well-ordering of the naturals in type $\omega$. </p> <p>It follows immediately that, for any countable infinite $\alpha$, there are continuum many well-orderings of the naturals in type $\alpha$. This is because one can simply fix a bijection between $\alpha$ and $\omega$, and use it to "transfer" the procedure just described.</p>
4,192,687
<p>Let <span class="math-container">$f: [0,1] \rightarrow \mathbb{R}$</span> be a continuous function. <br /> How can I show that <span class="math-container">$ \lim_{s\to\infty} \int_0^1 f(x^s) \, dx$</span> exists?<br /> It is difficult for me to calculate a limit, as no concrete function or function values are given...</p> <p>My idea is to use the dominated convergence theorem with <span class="math-container">$|f(x)| \leq M$</span> for some constant <span class="math-container">$M$</span> (a continuous function on a compact interval is bounded), so that <br /> <span class="math-container">$ \lim_{s\to\infty} \int_0^1 f(x^s) \, dx = \int_0^1 \lim_{s\to\infty} f(x^s) \, dx$</span>. But how can I calculate the limit of <span class="math-container">$f(x^s)$</span> then? There is no information given about the monotonicity of the function. Thanks.</p>
Matt E.
948,077
<p>To calculate the limit, use the fact that <span class="math-container">$f$</span> is continuous. <span class="math-container">$$ \displaystyle\lim_{s\to\infty} f(x^s) = f\left( \lim_{s\to\infty} x^s\right) = f( \chi_{\{1\}}(x)).$$</span> In other words, <span class="math-container">$\displaystyle\lim_{s\to\infty} f(x^s) = f(1)$</span> when <span class="math-container">$x=1$</span> and <span class="math-container">$\displaystyle\lim_{s\to\infty} f(x^s) = f(0)$</span> otherwise. Since <span class="math-container">$f$</span> is continuous on the compact interval <span class="math-container">$[0,1]$</span>, it is bounded there, say <span class="math-container">$|f| \leq M$</span>; this constant dominates every <span class="math-container">$f(x^s)$</span>, so the dominated convergence theorem applies. As the pointwise limit equals <span class="math-container">$f(0)$</span> except on the null set <span class="math-container">$\{1\}$</span>, the limit of the integral becomes <span class="math-container">$f(0)$</span>.</p>
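Numerically the convergence is easy to watch (taking $f=\cos$ as a concrete continuous function is my own choice): midpoint-rule values of $\int_0^1 \cos(x^s)\,dx$ approach $\cos(0)=1$ as $s$ grows.

```python
# Midpoint-rule approximation of the integral for increasing s;
# the error |I(s) - f(0)| should shrink toward 0.
from math import cos

def I(s, N=20_000):
    h = 1.0 / N
    return h * sum(cos(((i + 0.5) * h) ** s) for i in range(N))

errs = [abs(I(s) - 1.0) for s in (1, 10, 200)]
```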
24,873
<p>It is very elementary to show that $\mathbb{R}$ isn't homeomorphic to $\mathbb{R}^m$ for $m&gt;1$: subtract a point and use the fact that connectedness is a homeomorphism invariant.</p> <p>Along similar lines, you can show that $\mathbb{R^2}$ isn't homeomorphic to $\mathbb{R}^m$ for $m&gt;2$ by subtracting a point and checking if the resulting space is simply connected. Still straightforward, but a good deal less elementary.</p> <p>However, the general result that $\mathbb{R^n}$ isn't homeomorphic to $\mathbb{R^m}$ for $n\neq m$, though intuitively obvious, is usually proved using sophisticated results from algebraic topology, such as invariance of domain or extensions of the Jordan curve theorem.</p> <p>Is there a more elementary proof of this fact? If not, is there intuition for why a proof is so difficult?</p>
Pete L. Clark
299
<p>Well, I might recast your proofs of the first two cases as follows:</p> <p>Suppose that $\mathbb{R}^n$ and $\mathbb{R}^m$ are homeomorphic. Then for any $P \in \mathbb{R}^n$, there must exist a point $Q \in \mathbb{R}^m$ such that $\mathbb{R}^n \setminus \{P\}$ and $\mathbb{R}^m \setminus \{Q\}$ are homeomorphic. (Since the homeomorphism group of $\mathbb{R}^m$ acts transitively, really any point $Q$ is okay, but I'm trying to be both simple and rigorous.) </p> <p>Now your proof when $n = 1$ is equivalent to the observation that $\pi_0(\mathbb{R}^1 \setminus \{P\})$ is nontrivial, while $\pi_0(\mathbb{R}^n \setminus \{Q\})$ is zero for all $n &gt; 1$. (Note that $\pi_0(X)$ is in bijection with the set of path-components of $X$, so really we are using that $\mathbb{R}^1$ minus a point is path-connected and $\mathbb{R}^n$ minus a point is not, for $n &gt; 1$.)</p> <p>When $n =2$, your proof is literally that $\pi_1(\mathbb{R}^2 \setminus \{P\})$ is nonzero whereas $\pi_1(\mathbb{R}^n \setminus \{Q\})$ is zero for all $n &gt; 2$. It is a little questionable to me whether the fundamental group counts as "elementary" -- you certainly have to learn about homotopies and prove some basic results in order to get this.</p> <p>If you are okay with such things, then it seems to me like you might as well also admit the higher homotopy groups: the point here is that </p> <p>for all $m \in \mathbb{Z}^+$, $\pi_{m-1}(\mathbb{R}^m \setminus \{P\}) \cong \mathbb{Z}$ whereas for $n &gt; m$, $\pi_{m-1}(\mathbb{R}^n \setminus \{Q\}) = 0$. </p> <p>If I am remembering correctly, the higher homotopy groups are introduced very early on in J.P. May's <em>A Concise Course in Algebraic Topology</em> and applied to essentially this problem, among others.</p> <p>[By the way, if we are okay with homotopy, then we probably want to replace $\mathbb{R}^n \setminus \{P\}$ with its homotopy equivalent subspace $S^{n-1}$ throughout. 
For some reason I decided to avoid mentioning this in the arguments above. If I had, it would have saved me a fair amount of typing...]</p> <hr> <p><b>Added</b>: Of course one could also use homology groups instead, as others have suggested in the comments. One might argue that homotopy groups are easier to <em>define</em> whereas homology groups are easier to <em>compute</em>. But this one computation of the "lower" homotopy groups of spheres is not very hard, and my guess is that if you want to start from scratch and prove everything, then for this problem homotopy groups will give the shorter approach.</p> <p>As to why the problem is hard to solve in an elementary way: the point is that the two spaces $\mathbb{R}^m \setminus \{P\}$ and $\mathbb{R}^n \setminus \{Q\}$ look the same when viewed from the lens of general topology. [An exception: one can develop <strong>topological dimension theory</strong> to tell them apart. For this I recommend the classic text of Hurewicz and Wallman. Whether that's "more elementary", I couldn't say.] In order to distinguish them for $m,n$ not too small, it seems that you need to develop various notions of the "higher connectivities" of a space, which leads inevitably to homotopy groups and co/homology groups.</p> <p>Another alternative is to throw out homeomorphism and look instead at diffeomorphism. This puts a reasonable array of tools from differentiable topology and manifold theory at your disposal rather quickly (see e.g. Milnor's book <em>Topology from a differentiable viewpoint</em>). It is not hard to show that the dimension of a manifold is a diffeomorphism invariant! So maybe the subtlety comes from insisting on working in the topological category, which often turns out to be more difficult than working with topological spaces with nice additional structures.</p>
24,873
<p>It is very elementary to show that $\mathbb{R}$ isn't homeomorphic to $\mathbb{R}^m$ for $m&gt;1$: subtract a point and use the fact that connectedness is a homeomorphism invariant.</p> <p>Along similar lines, you can show that $\mathbb{R^2}$ isn't homeomorphic to $\mathbb{R}^m$ for $m&gt;2$ by subtracting a point and checking if the resulting space is simply connected. Still straightforward, but a good deal less elementary.</p> <p>However, the general result that $\mathbb{R^n}$ isn't homeomorphic to $\mathbb{R^m}$ for $n\neq m$, though intuitively obvious, is usually proved using sophisticated results from algebraic topology, such as invariance of domain or extensions of the Jordan curve theorem.</p> <p>Is there a more elementary proof of this fact? If not, is there intuition for why a proof is so difficult?</p>
Jesse Railo
223,052
<p>One possible approach is the following:</p> <ol> <li>Show <a href="https://en.wikipedia.org/wiki/Borsuk%E2%80%93Ulam_theorem" rel="noreferrer">Borsuk–Ulam theorem</a>.</li> <li>Deduce that $S^n$ cannot be embedded to $\mathbb{R}^n$.</li> <li>Let us consider $\mathbb{R}^n$ and $\mathbb{R}^m$, where $m &gt; n$. Now $S^n$ cannot be embedded to $\mathbb{R}^n$, but it can be embedded to $\mathbb{R}^{n+1} \subseteq \mathbb{R}^m$. Therefore $\mathbb{R}^n$ and $\mathbb{R}^m$ cannot be homeomorphic (they allow different embeddings). </li> </ol> <p>In fact the dual argument (cannot be embedded to the same class of topological spaces) of 3. shows, that $S^n$ is not homeomorphic to $S^m$. This might not be strictly the easiest (the most direct) but at least for me this seems to be the nicest argument: I never claimed that Borsuk-Ulam theorem is easy.</p>
1,858,297
<p>Suppose the diameter of a nonempty set $A$ is defined as </p> <p>$$\sigma(A) := \sup_{x,y \in A} d(x,y)$$</p> <p>where $d(x,y)$ is a metric.</p> <p>Is $\sigma(\cdot)$ a measure? I.e., how do I prove or disprove countable additivity in this particular case?</p>
fleablood
280,126
<p>... not to mention</p> <p>$\sigma( \text{rational numbers between A and B}) + \sigma( \text {irrational numbers between A and B}) \ne \sigma( \text{ real numbers between A and B})$. </p> <p>This is pretty much the perfect example of something that absolutely cannot be a measure, and it illustrates why we need a concept of measure.</p>
806,532
<p>This question takes place in a general metric space $X$. </p> <p>Let $x$ be an interior* point of $E \subset X$ iff there exists a deleted neighborhood of $x$ that is contained in $E$. </p> <p>This is like the normal definition of "interior point", except it uses "deleted neighborhood" instead of "neighborhood", thus allowing a point not in $E$ to be an interior* point of $E$.</p> <p>My question is: why is this not the standard definition of "interior point"? I see a couple reasons that it would make a more elegant system.</p> <ol> <li>"Limit point" and "interior* point" are both defined in terms of deleted neighborhoods ($x$ is a limit point of $E$ iff all deleted neighborhoods of $x$ include some point of $E$). This is more symmetrical.</li> <li>(Note: I do not yet have a general/categorical notion of duality) "Limit point" and "interior* point" are more adequately dual, for $x$ is a limit point of $E$ iff $x$ is not an interior* point of the complement of $E$, whereas this does not hold for "limit point" and "interior point".</li> <li>The dual notions of closure and interior are more symmetrically defined using "interior* point". The closure is defined as the <b>union</b> of $E$ and the set of limit points of $E$, and the interior is defined as the <b>intersection</b> of $E$ and the set of interior* points of $E$. The duality between closure and interior is harder to see with the standard definition of interior as the set of interior points of $E$. Also the proof that the complement of the closure of $E$ is the interior of the complement of $E$ reduces to a few applications of DeMorgan's law.</li> </ol> <p>So why do people use "interior point" and not "interior* point"? </p>
echinodermata
122,187
<p>The notions of "interior point" and "limit point", as you point out, are not dual. And if we use interior point to define interior and limit point to define closure, then we've just used two non-dual notions to define two dual notions. This is highly dissatisfying, as you know. Which one should we keep and which one should we modify?</p> <p>Before you jump to the conclusion that "limit point" is golden and "interior point" is backwater, consider the opposite perspective for a moment.</p> <p>Let $x$ be a "limit* point" of $E$ if every (non-deleted) neighborhood of $x$ intersects $E$. This is like the definition of "limit point" except that it uses "neighborhood" instead of "deleted neighborhood", thus allowing all points in $E$ to be limit* points of $E$.</p> <p>Three reasons why limit* points are more elegant than limit points: Now limit* points and interior points are both defined in terms of neighborhoods, which is more symmetrical. Limit* points and interior points are now dual. And, most importantly, the dual notions of closure and interior are more symmetrically defined using limit* points: the closure is defined as <em>the set of all limit* points</em>, and the interior is defined as <em>the set of all interior points</em>.</p> <p>You see that the entire opposite argument works, as I've shown. In fact, this way is the nicer one, because <strong>closure and interior both have simpler definitions</strong> with limit* point and interior point than with limit point and interior* point. This way, we can say that interior points are just the points in the interior. And similarly we can say that <em>closure points</em> are the points in the closure. What I've been calling limit* points has a standard name: either <a href="http://en.wikipedia.org/wiki/Adherent_point" rel="nofollow">adherent points</a> or closure points. 
People make use of "limit* points" because it is a good way of thinking about closures and closed sets, which is simpler in many contexts than limit points. People tend not to make use of "interior* points" to think about interiors and open sets for the converse reason.</p>
3,450,598
<blockquote> <p>Prove that <span class="math-container">$\sum_{i = m}^n a_i + \sum_{i = n + 1}^p a_i = \sum_{i = m}^p a_i$</span>, where <span class="math-container">$m ≤ n&lt;p$</span> are integers, and <span class="math-container">$a_i$</span> is a real number assigned to each integer <span class="math-container">$m ≤ i ≤ p$</span>. (Hint: you might want to use induction)</p> </blockquote> <p>Let's follow the hint and use induction on <span class="math-container">$p-m = k$</span><br> Base case: <span class="math-container">$k = 1$</span>, then <span class="math-container">$p = m + 1$</span> and <span class="math-container">$n = m$</span>. <span class="math-container">$\sum_{i = m}^n a_i + \sum_{i = n + 1}^p a_i = a_m + a_{m+1}$</span> and <span class="math-container">$\sum_{i = m}^p a_i = a_m+a_{m+1}$</span>. Therefore, the right-hand side is equal to the left-hand side.<br> Inductive step: Assume for <span class="math-container">$p-m=k$</span> the statement holds, show for <span class="math-container">$p-m = k + 1$</span>. We know that <span class="math-container">$\sum_{i = m}^n a_i + \sum_{i = n + 1}^{m+k} a_i = \sum_{i = m}^{m+k} a_i$</span>. Now, <span class="math-container">$\sum_{i = m}^n a_i + \sum_{i = n + 1}^{m+k+1} a_i = \sum_{i = m}^n a_i + \sum_{i = n + 1}^{m+k} a_i + a_{m+k+1} = \sum_{i = m}^{m+k} a_i + a_{m+k+1}$</span> by inductive hypothesis. Therefore, we get <span class="math-container">$$\sum_{i = m}^n a_i + \sum_{i = n + 1}^{m+k+1} a_i =\sum_{i = m}^{m+k+1} a_i$$</span></p> <p>Is this proof plausible? At this point, I can use the following facts about finite sums: </p> <p>if <span class="math-container">$n &lt; m$</span>, then <span class="math-container">$\sum_{i=m}^n a_i= 0$</span>;<br> if <span class="math-container">$n \ge m - 1$</span>, then <span class="math-container">$\sum_{i=m}^{n+1}a_i = \sum_{i=m}^{n}a_i + a_{n+1}$</span>. </p>
user
505,767
<p>I suppose it is convenient to apply induction in a different way, that is</p> <ul> <li>base case, <span class="math-container">$p=n+1 \implies \sum_{i = m}^n a_i + \sum_{i = n + 1}^{n+1} a_i = \sum_{i = m}^n a_i + a_{n+1}=\sum_{i = m}^{n+1} a_i$</span></li> </ul> <p>and for the induction step assuming that <span class="math-container">$\sum_{i = m}^n a_i + \sum_{i = n + 1}^p a_i = \sum_{i = m}^p a_i$</span> we need to prove</p> <p><span class="math-container">$$\sum_{i = m}^n a_i + \sum_{i = n + 1}^{p+1} a_i = \sum_{i = m}^{p+1} a_i$$</span></p>
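Induction aside, the identity itself is easy to spot-check numerically (the random values below are my own, not part of the answer):

```python
# Check sum_{i=m}^{n} a_i + sum_{i=n+1}^{p} a_i = sum_{i=m}^{p} a_i
# for all m <= n < p over a small range of indices.
import random

random.seed(1)
a = {i: random.uniform(-5, 5) for i in range(-3, 15)}   # a_i values

def S(lo, hi):
    """sum_{i=lo}^{hi} a_i; empty (= 0) when hi < lo."""
    return sum(a[i] for i in range(lo, hi + 1))

for m in range(-3, 10):
    for p in range(m + 1, 15):
        for n in range(m, p):
            assert abs(S(m, n) + S(n + 1, p) - S(m, p)) < 1e-9
```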
4,124,324
<p>I am trying to find the complex function <span class="math-container">$f(z)$</span> whose derivative equals the complex conjugate of its reciprocal</p> <p><span class="math-container">$$\dfrac{\mathrm{d} f(z)}{\mathrm{d} z} = \dfrac{1}{f(z)^*}$$</span></p> <p>which is equivalent to</p> <p><span class="math-container">$$ f(z)' f(z)^* = 1 $$</span></p> <p>I know that for <span class="math-container">$f(z)' f(z) = 1$</span> the solution is simply <span class="math-container">$\pm \sqrt{2 z + c }$</span>. But the above turns out to be a bit trickier...</p>
Aryaman Maithani
427,810
<p>(I shall use the notation <span class="math-container">$\overline{z}$</span> to denote the conjugate of <span class="math-container">$z$</span>. I will assume that your equation is defined on a nonempty open subset <span class="math-container">$\Omega \subset \Bbb C$</span>.)</p> <p>There is no such <span class="math-container">$f$</span>.</p> <p>Indeed, suppose that such an <span class="math-container">$f$</span> existed. Then, your equation forces <span class="math-container">$f'$</span> and <span class="math-container">$\bar{f}$</span> to be nowhere vanishing on <span class="math-container">$\Omega$</span>. In particular, we have <span class="math-container">$$\overline f = \frac{1}{f'}$$</span> on the open set <span class="math-container">$\Omega$</span>. Now, since holomorphic functions on open sets are infinitely differentiable, we see that <span class="math-container">$1/f'$</span> is differentiable and hence, <span class="math-container">$\overline{f}$</span> is differentiable on <span class="math-container">$\Omega$</span>.</p> <p>However, if <span class="math-container">$f$</span> and <span class="math-container">$\overline{f}$</span> are both holomorphic on <span class="math-container">$\Omega$</span>, then so are <span class="math-container">$f \pm \overline{f}$</span>. But these take values along the lines <span class="math-container">$\Bbb R$</span> and <span class="math-container">$\iota \Bbb R$</span>. So, both of these are (locally) constant. In turn, <span class="math-container">$f$</span> is constant on (the connected components of) <span class="math-container">$\Omega$</span>.</p> <p>Thus, <span class="math-container">$f' \equiv 0$</span> on <span class="math-container">$\Omega$</span>. However, since <span class="math-container">$\Omega \neq \varnothing$</span>, pick <span class="math-container">$\omega \in \Omega$</span> and put it in your functional equation to get <span class="math-container">$0 = 1$</span>, a contradiction.</p>
765,404
<p>Can anyone explain the partial derivative below:</p> <p>$\frac{\partial a^tX^{-1}b}{\partial X} = -X^{-t}ab^tX^{-t}$</p> <p>I was trying to derive this equation using the below formula, but failed.</p> <p><img src="https://i.stack.imgur.com/apR2q.png" alt="enter image description here"></p>
Set
26,920
<p>Here's another way you might consider computing the derivative of <span class="math-container">$f(X)=a^TX^{-1}b$</span>,</p> <p><span class="math-container">\begin{align} f(X+H)=a^T(X+H)^{-1}b&amp;=a^T((I+HX^{-1})X)^{-1}b\\[10pt] &amp;=a^TX^{-1}(I+HX^{-1})^{-1}b\\[1pt] &amp;=a^{T}X^{-1}\sum_{n=0}^\infty(-1)^n(HX^{-1})^nb \end{align}</span></p> <p>Where the final equality follows from the closed form for the matrix geometric series.</p> <p>For <span class="math-container">$\|H\|$</span> small,</p> <p><span class="math-container">$$a^{T}X^{-1}\sum_{n=0}^\infty(-1)^n(HX^{-1})^nb\approx a^{T}X^{-1}(I-HX^{-1})b=\underbrace{a^{T}X^{-1}b}_{f}+\underbrace{(-a^{T}X^{-1}HX^{-1}b)}_{\nabla_Hf}$$</span></p> <p>Now to determine <span class="math-container">$\nabla f$</span>, we need to write <span class="math-container">$\nabla_Hf$</span> as a matrix inner product,</p> <p><span class="math-container">$$\nabla_Hf=-a^{T}X^{-1}HX^{-1}b=-\text{tr}(X^{-1}ba^TX^{-1}H)=\langle -X^{-T}ab^TX^{-T},\; H\rangle\\[1pt]$$</span></p> <p>Therefore <span class="math-container">$\nabla f=-X^{-T}ab^TX^{-T}$</span>.</p>
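Not part of the original answer — a quick finite-difference sanity check, in plain Python on a 2×2 example, that the closed-form gradient `-X^{-T} a b^T X^{-T}` agrees with numerical differentiation. The small matrix helpers are my own; note that `b^T X^{-T} = (X^{-1} b)^T`, so the gradient is the outer product `-(X^{-T} a)(X^{-1} b)^T`.

```python
# Check d(a^T X^{-1} b)/dX = -X^{-T} a b^T X^{-T} by central differences (2x2 case).

def inv2(X):
    (a, b), (c, d) = X
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(X, v):
    return [sum(X[i][j] * v[j] for j in range(2)) for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def f(X, a, b):
    # a^T X^{-1} b
    w = matvec(inv2(X), b)
    return sum(a[i] * w[i] for i in range(2))

X = [[2.0, 1.0], [0.5, 3.0]]
a, b = [1.0, 2.0], [3.0, -1.0]

# closed-form gradient: G_ij = -(X^{-T} a)_i (X^{-1} b)_j
u = matvec(transpose(inv2(X)), a)   # X^{-T} a
v = matvec(inv2(X), b)              # X^{-1} b
G = [[-u[i] * v[j] for j in range(2)] for i in range(2)]

# central finite differences
eps = 1e-6
G_num = [[0.0, 0.0], [0.0, 0.0]]
for i in range(2):
    for j in range(2):
        Xp = [row[:] for row in X]; Xp[i][j] += eps
        Xm = [row[:] for row in X]; Xm[i][j] -= eps
        G_num[i][j] = (f(Xp, a, b) - f(Xm, a, b)) / (2 * eps)

max_err = max(abs(G[i][j] - G_num[i][j]) for i in range(2) for j in range(2))
```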
178,319
<p>I asked this initially in <a href="https://math.stackexchange.com/questions/894399/identities-that-connect-antipode-with-multiplication-and-comultiplication">math.stackexchange</a>:</p> <p>The group algebra $k(G)$ of any group $G$ satisfies as a Hopf algebra the following identities: $$ S\otimes S\circ \Delta=\sigma\circ\Delta\circ S $$ $$ \nabla\circ S\otimes S=S\circ\nabla\circ\sigma $$ where $S$ is the antipode, $\Delta$, the comultiplication, $\nabla$, the multiplication, and $\sigma:x\otimes y\mapsto y\otimes x$. </p> <p>Is this valid for all Hopf algebras (in any braided monoidal category) or only for some special class?</p>
Tilman
4,183
<p>That's true in any Hopf algebra. See Sweedler, <em>Hopf algebras</em>, Prop. 4.0.1.</p>
1,613,645
<p>Let's get started:</p> <p>$$\hat f(n) = \frac{1}{2\pi}\int_0^{2\pi} |x|e^{-inx} dx$$</p> <p>since $|x|$ is an even function:</p> <p>$$= \frac{1}{\pi}\int_0^{\pi} xe^{-inx} dx$$</p> <p>Integration by parts yields:</p> <p>$$e^{-inx}\Big|_0^{\pi} + \frac{1}{in} \int_0^\pi e^{-inx} dx = (-1)^n - 1 + \frac{1}{in} \left( \frac{(-1)^n}{-in} + \frac{1}{in} \right) \\ = (-1)^n - 1 + \frac{(-1)^n - 1}{n^2}$$</p> <p>So if $n$ is even then $\hat f(n) = 0$. Otherwise:</p> <p>$$\hat f(n) = \frac{1}{\pi} \left( -2 -\frac{2}{n^2} \right)$$</p> <p>but that doesn't make sense since we know that $\hat f(n) \to 0$.</p> <p>Where is my mistake? </p> <p><strong>EDIT</strong> it should be </p> <p>$$\frac{x e^{-inx}}{-in}\Big|_0^{\pi} + \frac{1}{in} \int_0^\pi e^{-inx} dx = \frac{\pi e^{-in\pi}}{-in} + \frac{(-1)^n - 1}{n^2}$$</p> <p>So $$\hat f(n) = \frac{(-1)^n}{-in} + \frac{(-1)^n - 1}{\pi n^2}$$</p>
Akshat VIjoy
546,552
<p>The question refers to the MINUTE hand, not the hour hand. It is the HOUR hand that takes 12 hours to move 360 degrees. The MINUTE hand completes a full 360-degree revolution every hour, because the minute cycle restarts after one hour, not twelve. That is where the mistake lies.</p>
3,123,857
<p>I have to find the integral of <span class="math-container">$$\int_{M_0}^{\infty} q(m, \mu, \sigma) \beta e^{-\beta(m-M_0)}\,\mathrm{d}m,$$</span> where <span class="math-container">$q(m, \mu, \sigma)$</span> is the normal cumulative distribution function, <span class="math-container">$M_0$</span> is a constant, <span class="math-container">$m$</span> is the variable, and <span class="math-container">$\beta$</span>, <span class="math-container">$\mu$</span>, and <span class="math-container">$\sigma$</span> are parameters. I have done the integration using the error function as follows:</p> <p><span class="math-container">\begin{align} \int_{M_0}^{\infty} q(m, \mu, \sigma) \beta e^{-\beta(m-M_0)}\,\mathrm{d}m &amp;=\beta e^{\beta M_0} \int_{M_0}^{\infty} \frac{1}{2} \Bigg[ 1+\operatorname{erf}\Bigg (\frac{(m-\mu)}{\sigma \sqrt{2}} \Bigg ) \Bigg] e^{-\beta m }\,\mathrm{d}m \\ &amp;=\frac{1}{2} + \frac{1}{2} \beta e^{\beta M_0} \int_{M_0}^{\infty} \operatorname{erf} \Bigg (\frac{(m-\mu)}{\sigma \sqrt{2}} \Bigg)e^{- \beta m}\,\mathrm{d}m \end{align}</span></p> <p>Here I get stuck. Could anyone please help solve this?</p>
gultu
383,558
<p>The integral is:</p> <p><span class="math-container">$I= \int_{M_0}^{\infty} q(m,\mu,\sigma)\, \beta e^{-\beta (m-M_0)}\, dm =\frac{1}{2}+\frac{1}{2} \beta e^{\beta M_0}\, I_{11}$</span></p> <p>where, integrating by parts with <span class="math-container">$v=-e^{-\beta m}/\beta$</span>,</p> <p><span class="math-container">$I_{11}= \int_{M_0}^{\infty} \operatorname{erf}\!\left(\frac{m-\mu}{\sigma \sqrt{2}}\right) e^{- \beta m}\, dm \\ =\left[-\frac{1}{\beta}\operatorname{erf}\!\left(\frac{m-\mu}{\sigma \sqrt{2}}\right) e^{-\beta m}\right]_{M_0}^{\infty} + \frac{1}{\beta}\int_{M_0}^{\infty} \frac{d}{dm}\operatorname{erf}\!\left(\frac{m-\mu}{\sigma \sqrt{2}}\right) e^{- \beta m}\, dm \\ =\frac{1}{\beta} \operatorname{erf}\!\left(\frac{M_0-\mu}{\sigma \sqrt{2}}\right) e^{- \beta M_0} + \frac{1}{\beta}\, \frac{2}{\sqrt{\pi}}\, \frac{1}{\sigma\sqrt{2}}\, I_{22}$</span></p> <p>where, completing the square in the exponent,</p> <p><span class="math-container">$I_{22}= \int_{M_0}^{\infty} e^{-\frac{(m-\mu)^2}{2 \sigma^2}}\, e^{- \beta m}\, dm = e^{\frac{\beta^2\sigma^2}{2}-\beta\mu} \int_{M_0}^{\infty} e^{-\frac{\left(m-(\mu-\beta \sigma^2)\right)^2}{2 \sigma^2}}\, dm = e^{\frac{\beta^2\sigma^2}{2}-\beta\mu}\, \sigma\sqrt{\frac{\pi}{2}}\, \operatorname{erfc}\!\left(\frac{M_0-\mu+\beta\sigma^2}{\sigma\sqrt{2}}\right)$</span></p> <p>Putting the pieces together, the prefactors collapse and one gets the closed form</p> <p><span class="math-container">$I = q(M_0,\mu,\sigma) + \frac{1}{2}\, e^{\beta (M_0-\mu)+\frac{\beta^2\sigma^2}{2}}\, \operatorname{erfc}\!\left(\frac{M_0-\mu+\beta\sigma^2}{\sigma\sqrt{2}}\right)$</span></p>
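Not from the original answer — carrying the integration by parts through to the end should give the closed form below, and here is a numerical cross-check (my own sketch, using only the standard-library `math.erf`/`math.erfc` and a brute-force midpoint rule; the parameter values are arbitrary choices):

```python
from math import erf, erfc, exp, sqrt

# Claimed closed form:
#   I = q(M0) + 0.5 * exp(beta*(M0 - mu) + beta^2*sigma^2/2)
#             * erfc((M0 - mu + beta*sigma^2) / (sigma*sqrt(2)))
# where q(m) = 0.5 * (1 + erf((m - mu)/(sigma*sqrt(2)))) is the normal CDF.

M0, mu, sigma, beta = 0.0, 0.5, 1.0, 1.0

def q(m):
    return 0.5 * (1.0 + erf((m - mu) / (sigma * sqrt(2.0))))

closed = q(M0) + 0.5 * exp(beta * (M0 - mu) + beta ** 2 * sigma ** 2 / 2.0) \
               * erfc((M0 - mu + beta * sigma ** 2) / (sigma * sqrt(2.0)))

# brute-force midpoint rule on [M0, M0 + 40]; the e^{-beta m} tail beyond is negligible
N, L = 200_000, 40.0
h = L / N
numeric = sum(q(M0 + (k + 0.5) * h) * beta * exp(-beta * (k + 0.5) * h)
              for k in range(N)) * h
```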
4,296,967
<p>Is there any subset of the real numbers that is not a Unique Factorization Domain? (i.e. where within that subset, a &quot;prime&quot; is a number that cannot be written as a product of any numbers in that set except itself and 1, and where there is at least one number that can be written as the product of two different sets of primes).</p> <p>I usually introduce recreational math students to the concept of a non-UFD by showing them the set of all numbers <span class="math-container">$a + ib\sqrt{5}$</span>. So I wondered if you can do it without complex numbers, i.e. find a subset of the reals that is a non-UFD, or prove that it's impossible.</p>
Davide Trono
494,745
<p>Consider <span class="math-container">$\mathbb{Z}[\sqrt{10}]$</span>; then <span class="math-container">$9=3\cdot3=(1+\sqrt{10})(\sqrt{10}-1)$</span>, and <span class="math-container">$3,\ 1+\sqrt{10},\ \sqrt{10} - 1$</span> are all irreducible in this ring (although proving irreducibility is far from obvious — and none of them is actually prime, which is precisely why unique factorization fails). Also, please notice that in this context 'subset' doesn't make much sense; what you want is a subring.</p>
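A quick sketch (my addition, not part of the answer) verifying the two factorizations of 9 exactly, encoding each element a + b√10 of the ring as the integer pair (a, b):

```python
# Exact arithmetic in Z[sqrt(10)]: (a1 + b1*r)(a2 + b2*r) with r^2 = 10.

def mul(x, y):
    a1, b1 = x
    a2, b2 = y
    return (a1 * a2 + 10 * b1 * b2, a1 * b2 + a2 * b1)

three = (3, 0)   # 3
p = (1, 1)       # 1 + sqrt(10)
q = (-1, 1)      # sqrt(10) - 1

first = mul(three, three)   # 3 * 3
second = mul(p, q)          # (1 + sqrt(10)) * (sqrt(10) - 1)
```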
4,296,967
<p>Is there any subset of the real numbers that is not a Unique Factorization Domain? (i.e. where within that subset, a &quot;prime&quot; is a number that cannot be written as a product of any numbers in that set except itself and 1, and where there is at least one number that can be written as the product of two different sets of primes).</p> <p>I usually introduce recreational math students to the concept of a non-UFD by showing them the set of all numbers <span class="math-container">$a + ib\sqrt{5}$</span>. So I wondered if you can do it without complex numbers, i.e. find a subset of the reals that is a non-UFD, or prove that it's impossible.</p>
MH.Lee
980,971
<p>You asked for a subset of the real numbers, so I'll give an example: <span class="math-container">$\mathbb Z[\sqrt5]$</span>.</p> <p>In that case, <span class="math-container">$2, -2, 1\pm\sqrt5$</span> are all irreducible, but <span class="math-container">$-2\times2=(1-\sqrt5)(1+\sqrt5)$</span>.</p> <p>So <span class="math-container">$\mathbb Z[\sqrt5]$</span> is certainly a subring of <span class="math-container">$\mathbb R$</span>, but it is not a UFD.</p>
650,710
<p>How would I go about simplifying $4(a-2(b-c)-(a-(b-2)))$. Show working out and steps please.</p> <p>I'd show my working out but I'm not really sure where to start. Firstly, I would want to get rid of the 4 so I'd times everything else by 4 right? No idea. </p>
bryan.blackbee
45,767
<p>Consider re-writing the equation in different brackets. Mathematics has three different type of parentheses for a reason - to distinguish between each pair of brackets. $$ \begin{align} 4(a-2(b-c)-(a-(b-2)))&amp;=4\left\{a-2[b-c]-[a-(b-2)]\right\}\\ &amp;=4\left\{a-2[b-c]-[a-b+2]\right\}\\ &amp;=4\left\{a-2[b-c]-a+b-2\right\}\\ &amp;=4\left\{a-2b+2c-a+b-2\right\}\\ &amp;=4\left\{a-a-2b+b+2c-2\right\}\\ &amp;=4\left\{-b+2c-2\right\}\\ &amp;=-4b+8c-8\\ \end{align} $$ As an exercise, figure out what I did step by step. This is very long-winded but I hope you see what happens as I remove brackets.</p>
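A spot-check (my addition) that the step-by-step simplification is right: the original expression and the final form agree for arbitrary integer values of a, b, c.

```python
import random

# 4(a - 2(b - c) - (a - (b - 2)))  should equal  -4b + 8c - 8  identically.
random.seed(0)
for _ in range(100):
    a, b, c = (random.randint(-50, 50) for _ in range(3))
    lhs = 4 * (a - 2 * (b - c) - (a - (b - 2)))
    rhs = -4 * b + 8 * c - 8
    assert lhs == rhs
ok = True
```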
185,766
<p>After studying a general linear algebra course, how would an advanced linear algebra course differ from the general course? </p> <p>And would an advanced linear algebra course be taught in graduate schools?</p>
Alexander Gruber
12,952
<p>At my <em>[undergraduate]</em> university <em>[which was University of Cincinnati, at the time of this post]</em>, the first linear algebra sequence is taught to sophomores. It is mostly computational. Everything takes place in the reals and complex numbers. The class begins with row reducing and culminates with finding determinants and eigenvalues. I don't remember which book we use for this but it's terrible and the class is very easy.</p> <p>Later, students are encouraged to take "abstract" linear algebra, which focuses on abstract vector spaces (though they are all assumed to be over fields of characteristic $0$), inner product spaces, quadratic forms, proving the spectral theorem, and culminates with Jordan canonical form and the theory of convex sets. For this we use Lang's <a href="http://rads.stackoverflow.com/amzn/click/1441930817" rel="noreferrer">linear algebra</a>. More emphasis is placed on the spectral theorem than anything, with Jordan form and convex sets only if the class moves fast enough so there's time.</p> <p>Finally, after a student has taken the senior level abstract algebra sequence (featuring the basics of groups, rings, and fields), he may elect to take the graduate algebraic structures class, in which module theory, more advanced ring theory, and some representation theory are covered. For this class we use <a href="http://rads.stackoverflow.com/amzn/click/0471433349" rel="noreferrer">Dummit and Foote</a> (and whichever other books we feel like). <em>[At my incumbent university, University of Florida, basic ring and module theory is done in the first year graduate course, also using Dummit and Foote. 
Multilinear algebra (tensors) and more advanced ring theory are covered during spring semester of second year graduate algebra, which concurrently uses <a href="http://rads.stackoverflow.com/amzn/click/038795385X" rel="noreferrer">Lang</a>, <a href="http://rads.stackoverflow.com/amzn/click/0387905189" rel="noreferrer">Hungerford</a>, and <a href="http://rads.stackoverflow.com/amzn/click/0521367646" rel="noreferrer">Matsumura</a>.]</em></p> <p>I have heard that other universities offer graduate courses strictly in advanced linear algebra. An example of a book they may use is <a href="http://rads.stackoverflow.com/amzn/click/0387728287" rel="noreferrer">Roman</a>, which I have used as a reference many times and I must say I like very much. </p>
3,197,540
<p>Let a function be defined as:</p> <p><span class="math-container">$ f(x)=x^2\sin{\left(\frac 1x\right)}$</span> for <span class="math-container">$x \neq 0$</span> and <span class="math-container">$ f(x)=0$</span> for <span class="math-container">$x=0$</span></p> <p>I'm trying to prove that f is differentiable at 0 using the definition of derivative. However in the process of doing this I was stopped by this limit:</p> <p><span class="math-container">$$ \lim_{h \to 0} \frac{\sin\left({\frac{1}{x+h}}\right)-\sin\left({\frac{1}{x}}\right)}{h} $$</span></p> <p>Is it possible to solve this limit question without using l'Hopital's rule?</p>
Sri-Amirthan Theivendran
302,692
<p>You don't have to compute that limit. Indeed <span class="math-container">$$ f'(0)=\lim_{h\to0}\frac{f(0+h)-f(0)}{h}=\lim_{h\to 0}\frac{f(h)}{h}=\lim_{h\to 0}h\sin(1/h) $$</span> which you can compute using the squeeze theorem since <span class="math-container">$$ 0\leq |h\sin(1/h)|\leq |h|. $$</span></p>
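A numerical illustration of the squeeze (my addition, not part of the answer): the difference quotient f(h)/h = h·sin(1/h) is trapped between −|h| and |h|, so it is forced to 0.

```python
from math import sin

# For f(x) = x^2 sin(1/x), f(0) = 0, the difference quotient at 0 is h*sin(1/h).
quotients = []
for k in range(1, 8):
    h = 10.0 ** (-k)
    dq = h * sin(1.0 / h)        # (f(0 + h) - f(0)) / h
    assert abs(dq) <= abs(h)     # the squeeze bound 0 <= |h sin(1/h)| <= |h|
    quotients.append(dq)
```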
1,322,016
<p>Find the vertical asymptotes (if any) of the graph of the function. (Use $n$ as an arbitrary integer if necessary.)</p> <p>$$s(t)= \frac{8t}{\sin{t}}$$</p> <p>$t= ?$, where n cannot $=?$</p> <p>I need a general rule for the asymptotes with where the exception of $n$ is. </p>
OnceUponACrinoid
246,291
<p>Hint: A good way to find vertical asymptotes for a function defined as a fraction is to think of where the denominator vanishes, assuming you have the fraction in a factored form. Of course, those aren't the only possibilities for where vertical asymptotes may occur.</p> <p>Also -- Have you tried graphing the given function? You may notice lots of points where the function values seem to grow very fast.</p> <p>In your specific problem, you need to find values of $t$ for which</p> <p>$\sin(t)=0$</p> <p>Can you take it from here?</p>
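A numerical sketch (my addition) of where the hint leads: sin(t) vanishes at every t = nπ, but the case n = 0 is special, because 8t/sin(t) stays finite (it approaches 8) as t → 0, so there is no asymptote there.

```python
from math import sin, pi

def s(t):
    return 8.0 * t / sin(t)

near_zero = s(1e-8)           # stays close to 8: no asymptote at t = 0
near_pi = abs(s(pi - 1e-8))   # blows up near t = pi: a genuine asymptote
```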
376,120
<p>Let <span class="math-container">$X$</span> be a proper and smooth scheme over <span class="math-container">$\mathbf{C}$</span> and let <span class="math-container">$\mathbb{L}$</span> be a local system of finite dimensional <span class="math-container">$\mathbf{C}$</span>-vector spaces. By the Riemann–Hilbert correspondence, to <span class="math-container">$\mathbb{L}$</span> one can associate a locally free sheaf <span class="math-container">$\mathcal{F}$</span> of <span class="math-container">$\mathcal{O}_X$</span>-modules with an integrable connection <span class="math-container">$\nabla \colon \mathcal{F} \to \mathcal{F} \otimes \Omega^1_X$</span> such that <span class="math-container">$\mathbb{L} = \mathcal{F}^{\nabla = 0}$</span>. In particular the de Rham complex of <span class="math-container">$\mathcal{F}$</span>: <span class="math-container">$$ 0 \to \mathcal{F} \to \mathcal{F} \otimes \Omega^1_X \to \cdots $$</span> gives a resolution of <span class="math-container">$\mathbb{L}$</span>. This resolution is not by injectives, but we can still use the hypercohomology of the complex to compute the cohomology of <span class="math-container">$\mathbb{L}$</span>, and we get a spectral sequence of hypercohomology.</p> <p>If <span class="math-container">$\mathbb{L} = \mathbf{C}$</span> is constant, this is the usual Hodge-to-de Rham spectral sequence, which degenerates immediately. What can be said in general? I guess that it is not true that the sequence always degenerates. (But for example it should be true if <span class="math-container">$\mathbb{L} = f_\ast \mathbf{C}$</span> for a proper and smooth morphism <span class="math-container">$f \colon Y \to X$</span>.)</p>
Chris
116,075
<p>This is true for any local system underlying a polarized <span class="math-container">$\mathbb{C}$</span>-variation of Hodge structure, e.g. it is true for <span class="math-container">$R^nf_*\mathbb{C}$</span> where <span class="math-container">$f:X\to Y$</span> is smooth proper and <span class="math-container">$n\ge 0$</span>. Indeed, the theory of harmonic forms and Kähler identities extends to this case. In the case of real VHS this is worked out in Section 2 of <em>S. Zucker &quot;Hodge Theory with Degenerating Coefficients&quot;, Annals of Mathematics 109 (1979), pp. 415-476</em>. Even more generally, the <span class="math-container">$E_1$</span>-degeneration holds for perverse sheaves underlying pure Hodge modules, by the theory of Morihiko Saito.</p>
376,120
<p>Let <span class="math-container">$X$</span> be a proper and smooth scheme over <span class="math-container">$\mathbf{C}$</span> and let <span class="math-container">$\mathbb{L}$</span> be a local system of finite dimensional <span class="math-container">$\mathbf{C}$</span>-vector spaces. By the Riemann–Hilbert correspondence, to <span class="math-container">$\mathbb{L}$</span> one can associate a locally free sheaf <span class="math-container">$\mathcal{F}$</span> of <span class="math-container">$\mathcal{O}_X$</span>-modules with an integrable connection <span class="math-container">$\nabla \colon \mathcal{F} \to \mathcal{F} \otimes \Omega^1_X$</span> such that <span class="math-container">$\mathbb{L} = \mathcal{F}^{\nabla = 0}$</span>. In particular the de Rham complex of <span class="math-container">$\mathcal{F}$</span>: <span class="math-container">$$ 0 \to \mathcal{F} \to \mathcal{F} \otimes \Omega^1_X \to \cdots $$</span> gives a resolution of <span class="math-container">$\mathbb{L}$</span>. This resolution is not by injectives, but we can still use the hypercohomology of the complex to compute the cohomology of <span class="math-container">$\mathbb{L}$</span>, and we get a spectral sequence of hypercohomology.</p> <p>If <span class="math-container">$\mathbb{L} = \mathbf{C}$</span> is constant, this is the usual Hodge-to-de Rham spectral sequence, which degenerates immediately. What can be said in general? I guess that it is not true that the sequence always degenerates. (But for example it should be true if <span class="math-container">$\mathbb{L} = f_\ast \mathbf{C}$</span> for a proper and smooth morphism <span class="math-container">$f \colon Y \to X$</span>.)</p>
Donu Arapura
4,144
<p>Let me supplement Chris' answer with a few additional remarks. When <span class="math-container">$\mathcal{F}$</span> underlies a polarizable complex variation of Hodge structure, then it carries a filtration <span class="math-container">$F^\bullet\mathcal{F}$</span> which induces one on the de Rham complex <span class="math-container">$$F^p\mathcal{F}\to F^{p-1}\mathcal{F}\otimes \Omega_X^1 \to \ldots$$</span> The argument of Deligne sketched in Zucker's paper, implies that the spectral sequence <span class="math-container">$$E_1= H^{p+q}(Gr^p_F(\mathcal{F}\otimes \Omega_X^\bullet))$$</span> associated to the <strong>above</strong> filtration degenerates at <span class="math-container">$E_1$</span>. If <span class="math-container">$\mathcal{F}$</span> is a flat unitary bundle, then this is the same as spectral sequence implicit in your question, but not in general. In fact, I would expect that the more naive spectral sequence would fail to degenerate for a reasonable complicated example.</p>
300,460
<p>How would we go about proving that $$\frac{1}{1\cdot 2} + \frac{1}{2\cdot 3} + \frac{1}{3 \cdot 4} +\ldots +\frac{1}{n(n+1)} = \frac{n}{n+1}$$</p>
preferred_anon
27,150
<p><strong>Hint</strong>: $\frac{1}{n}-\frac{1}{n+1}=\frac{1}{n(n+1)}$</p>
300,460
<p>How would we go about proving that $$\frac{1}{1\cdot 2} + \frac{1}{2\cdot 3} + \frac{1}{3 \cdot 4} +\ldots +\frac{1}{n(n+1)} = \frac{n}{n+1}$$</p>
Adi Dani
12,848
<p>$$\sum_{i=1}^{n}\frac{1}{i(i+1)}=\sum_{i=1}^{n}\Big(\frac{1}{i}-\frac{1}{i+1}\Big)=$$ $$=1+\sum_{i=2}^{n}\frac{1}{i}-\Big(\sum_{j=2}^{n}\frac{1}{j}+\frac{1}{n+1}\Big)=1-\frac{1}{n+1}=\frac{n}{n+1}$$</p>
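An exact check of the telescoping identity (my addition), using rational arithmetic so there is no floating-point slack:

```python
from fractions import Fraction

# Verify sum_{i=1}^{n} 1/(i(i+1)) = n/(n+1) for the first 300 values of n.
total = Fraction(0)
for n in range(1, 301):
    total += Fraction(1, n * (n + 1))
    assert total == Fraction(n, n + 1)
final = total
```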
2,439,340
<p>How would one proceed to prove this statement?</p> <blockquote> <p>The set of the strictly increasing sequences of natural numbers is not enumerable.</p> </blockquote> <p>I've been trying to solve this for quite a while, however I don't even know where to start.</p>
DanielWainfleet
254,665
<p>For a countably infinite set $F$ of strictly increasing functions from $\Bbb N$ to $\Bbb N$ let $F=\{f_n:n\in \Bbb N\}.$ Define $g:\Bbb N \to \Bbb N$ by $g(n)=1+\sum_{j=1}^n f_j(n).$ </p> <p>Then $g(n+1)-g(n)=f_{n+1}(n+1)+\sum_{j=1}^n (f_j(n+1)-f_j(n))&gt;0$ so $g$ is strictly increasing. </p> <p>And $g\not \in F$ because $g(n)\geq 1+f_n(n)&gt;f_n(n)$, so $g\ne f_n$ for any $n.$</p>
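A concrete run of this construction (my own illustration — the sample family f_j(n) = j·n is an arbitrary choice of countably many strictly increasing sequences): the answer's g(n) = 1 + f_1(n) + … + f_n(n) is strictly increasing and exceeds f_n(n) at position n, so it avoids every f_n.

```python
def f(j):
    # sample countable family of strictly increasing sequences: f_j(n) = j * n
    return lambda n: j * n

def g(n):
    return 1 + sum(f(j)(n) for j in range(1, n + 1))

values = [g(n) for n in range(1, 20)]
increasing = all(x < y for x, y in zip(values, values[1:]))
avoids = all(g(n) > f(n)(n) for n in range(1, 20))
```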
2,439,340
<p>How would one proceed to prove this statement?</p> <blockquote> <p>The set of the strictly increasing sequences of natural numbers is not enumerable.</p> </blockquote> <p>I've been trying to solve this for quite a while, however I don't even know where to start.</p>
Eric Lippert
21,264
<p>As other answers note, there are lots of fancy ways to prove this. But we can always go back to the basics. A straightforward diagonalization proof-by-contradiction suffices. Suppose there is such an enumeration. Maybe this is it:</p> <pre><code>1 --&gt; 1, 2, 3, 5, ... 2 --&gt; 4, 5, 7, 100, ... 3 --&gt; 1, 2, 3, 8, ... 4 --&gt; 2, 4, 5, 6, ... </code></pre> <p>Now take the first number of sequence one, and add one to it. That's our first number: 2.</p> <p>Now take the second number of sequence two - 5 - and the number from the previous step - 2. Take the larger and add one: 6.</p> <p>Now take the third number of sequence three - 3 - and the number from the previous step - 6. Take the larger and add one: 7.</p> <p>Now take the fourth number of sequence four - 6 - and the number from the previous step - 7. Take the larger and add one: 8.</p> <p>Keep doing that and construct the sequence of monotone increasing naturals:</p> <pre><code>2, 6, 7, 8, ... </code></pre> <p>By assumption, this sequence is in our enumeration, but where can it be? It cannot be at spot n for any n because by its construction the nth element of this sequence is larger than the element at spot n of the nth sequence.</p> <p>That's a contradiction, and therefore there cannot be any such enumeration.</p>
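The diagonal construction described step by step above can be run mechanically. Here is a short sketch (my addition) on the answer's own four example sequences, reproducing the sequence 2, 6, 7, 8:

```python
# The four example sequences from the answer (first four terms each).
seqs = [
    [1, 2, 3, 5],
    [4, 5, 7, 100],
    [1, 2, 3, 8],
    [2, 4, 5, 6],
]

# At step n: take the nth element of sequence n and the previous output,
# take the larger, and add one.
diag = []
prev = None
for n, seq in enumerate(seqs):
    candidate = seq[n]
    if prev is not None:
        candidate = max(candidate, prev)
    prev = candidate + 1
    diag.append(prev)
```

By construction the result is strictly increasing and its nth term is larger than the nth term of the nth sequence, so it cannot appear at any spot of the enumeration.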
2,439,340
<p>How would one proceed to prove this statement?</p> <blockquote> <p>The set of the strictly increasing sequences of natural numbers is not enumerable.</p> </blockquote> <p>I've been trying to solve this for quite a while, however I don't even know where to start.</p>
Hagen von Eitzen
39,174
<p>There are uncountably many subsets of $\Bbb N$, but only countably many finite subsets, hence uncountably many infinite subsets. Every strictly increasing sequence of naturals corresponds to an infinite subset of $\Bbb N$.</p>
3,185,317
<p><span class="math-container">$$\lim_{n\rightarrow \infty}\frac{1}{n}\int_{0}^{1}\ln(1+e^{nx})dx$$</span></p> <p>My try (taking <span class="math-container">$b=1+e^{nx}$</span> and <span class="math-container">$a=e^{nx}$</span>):</p> <p><span class="math-container">$$\frac{b-a}b\leq \ln b-\ln a\leq \frac{b-a}a \implies \frac{1}{1+e^{nx}}\leq \ln(1+e^{nx})-\ln e^{nx}\leq \frac1{e^{nx}}$$</span></p> <p>Then I integrated and multiplied by <span class="math-container">$\frac{1}{n}$</span> and got: <span class="math-container">$$\frac{1}{n}\int _0^1\frac{1}{1+e^{nx}}dx\leq \frac{1}{n}\int _0^1[\ln(1+e^{nx})-\ln e^{nx}]dx\leq \frac{1}{n}\int _0^1 \frac{1}{e^{nx}}dx$$</span></p> <p>How should I continue? Is there an easier method to evaluate this limit?</p>
D.B.
530,972
<p>You are actually canceling the factor <span class="math-container">$x-1$</span> from numerator and denominator. This works as long as <span class="math-container">$x \ne 1$</span>. Keep in mind that in the limit, <span class="math-container">$x$</span> is approaching <span class="math-container">$1$</span>; never actually equal to <span class="math-container">$1$</span>.</p>
3,231,387
<p>I have been given the following quadratic equation and was asked to find the range of its roots <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span>, where <span class="math-container">$\alpha&gt;\beta$</span>: <span class="math-container">$$(k+1)x^2 - (20k+14)x + 91k +40 =0,$$</span> where <span class="math-container">$k&gt;0$</span>.<br><br> Here's my approach.<br><br> I applied the quadratic formula for the roots and got: <span class="math-container">$$\alpha=\frac{(10k+7)+3\sqrt{k^2+k+1}}{k+1}$$</span> Similarly <span class="math-container">$$\beta=\frac{(10k+7)-3\sqrt{k^2+k+1}}{k+1}$$</span> But how do I find the range? Please help.</p>
Mick
42,351
<p>First, we need to find the values of k for which the equation has real roots (<span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span>).</p> <p>To this end, we require <span class="math-container">$(10k + 7)^2 \ge (k+1)(91k + 40)$</span>.</p> <p>This reduces to <span class="math-container">$9(k^2 + k + 1) \ge 0$</span>, i.e. <span class="math-container">$k^2 + k + 1 \ge 0$</span>.</p> <p>The LHS has no real roots in k, and the coefficient of <span class="math-container">$k^2$</span> is positive. That means the quadratic expression in k is positive definite (i.e. always greater than 0).</p> <p>Therefore, the given equation has real roots for all values of k (except when k = -1, where the equation is no longer quadratic).</p>
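A quick algebraic spot-check (my addition) of the reduction used above: the discriminant difference is identically 9(k² + k + 1), which is positive for every real k, so the roots are always real.

```python
import random

# Verify (10k+7)^2 - (k+1)(91k+40) == 9(k^2 + k + 1) > 0 for many random k.
random.seed(1)
for _ in range(1000):
    k = random.uniform(-100.0, 100.0)
    lhs = (10 * k + 7) ** 2 - (k + 1) * (91 * k + 40)
    rhs = 9 * (k * k + k + 1)
    assert abs(lhs - rhs) < 1e-6 * max(1.0, abs(rhs))
    assert rhs > 0
identity_holds = True
```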
3,950,098
<p>I can evaluate the limit with L'Hospital's rule:</p> <p><span class="math-container">$\lim_{n\to\infty}n(\sqrt[n]{4}-1)=\lim_{n\to\infty}\cfrac{(4^{\frac1n}-1)}{\dfrac1n}=\lim_{n\to\infty}\cfrac{\dfrac{-1}{n^2}\times 4^{\frac1n}\times\ln4}{\dfrac{-1}{n^2}}=\ln4$</span></p> <p>But is there any way to do it without using L'Hospital's rule?</p>
GEdgar
442
<p>You could try this. As <span class="math-container">$n \to \infty$</span>, <span class="math-container">$$ 4^{1/n} = \exp\left(\frac{\log 4}{n}\right) = 1 + \frac{\log 4}{n} + O(1/n^2) \\ 4^{1/n}-1 = \frac{\log 4}{n} + O(1/n^2) \\ n\left(4^{1/n}-1\right) = \log 4 + O(1/n) \\ \lim_{n\to\infty} n\left(4^{1/n}-1\right) = \log 4 $$</span></p>
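A quick numerical confirmation (my addition): n(4^(1/n) − 1) approaches log 4, and the error shrinks like the O(1/n) term in the expansion above.

```python
from math import log

def a(n):
    return n * (4.0 ** (1.0 / n) - 1.0)

errors = [abs(a(10 ** k) - log(4.0)) for k in (2, 4, 6)]
```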
3,950,098
<p>I can evaluate the limit with L'Hospital's rule:</p> <p><span class="math-container">$\lim_{n\to\infty}n(\sqrt[n]{4}-1)=\lim_{n\to\infty}\cfrac{(4^{\frac1n}-1)}{\dfrac1n}=\lim_{n\to\infty}\cfrac{\dfrac{-1}{n^2}\times 4^{\frac1n}\times\ln4}{\dfrac{-1}{n^2}}=\ln4$</span></p> <p>But is there any way to do it without using L'Hospital's rule?</p>
Vishu
751,311
<p>Let <span class="math-container">$t=\frac 1n \to 0$</span>: <span class="math-container">$$\lim_{t\to 0} \frac{4^t-1}{t} $$</span> which is of the well-known form <span class="math-container">$\lim_{x\to 0} \frac{a^x-1}{x} =\ln a $</span>.</p>
663,563
<p>it seems obvious that this integral is zero and so is the limit but what theorem we are using here?</p> <p>I see it's connected to Riemann sums with an interval=zero Right ?</p> <p>The function $\mathrm{f}$ is continuous.</p> <p>$$\lim_{x \to 0}\int_0^x\mathrm{f}(x)\ \mathrm{d}x= \ ?$$</p>
copper.hat
27,978
<p>Assuming $f$ is Riemann integrable, it is bounded by some $B$, so $0 \le |\int_0^x f(x) dx | \le \int_0^x |f(x)| dx \le Bx$.</p>
663,563
<p>it seems obvious that this integral is zero and so is the limit but what theorem we are using here?</p> <p>I see it's connected to Riemann sums with an interval=zero Right ?</p> <p>The function $\mathrm{f}$ is continuous.</p> <p>$$\lim_{x \to 0}\int_0^x\mathrm{f}(x)\ \mathrm{d}x= \ ?$$</p>
Thomas
26,188
<p>We are not using any theorem. The definition of the definite integral is $$ \int_a^b f(x) \; dx = \lim_{n\to \infty} \sum_{i=1}^n f(x_i)\Delta x. $$ where $x_i = a + i\Delta x$ and $\Delta x = \frac{b - a}{n}$. If $a=b=0$, then $\Delta x = 0$ and so the integral is zero: $$ \int_0^0 f(x)\; dx = \lim_{n\to \infty}\sum_{i=1}^n 0 = \lim_{n\to \infty} 0 = 0. $$</p> <p>About the limit. Assume that $f$ is continuous on a small interval $[0, \epsilon]$. Then according to the <a href="http://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus#First_part">Fundamental Theorem of Calculus</a> the function given by $$ F(x) = \int_0^x f(t) \; dt $$ is continuous on $[0,\epsilon]$. In particular $F$ is continuous at $0$. This, by definition, means that $\lim_{x\to 0^+} F(x) = 0$, or that $$ \lim_{x\to 0^+} \int_0^x f(t) \; dt = 0. $$</p>
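A numerical illustration (my addition — the choice f = cos is an arbitrary continuous sample): F(x) = ∫₀ˣ f(t) dt shrinks to 0 as x → 0⁺, consistent with the continuity of F at 0.

```python
from math import cos

def F(x, steps=10_000):
    # midpoint-rule approximation of F(x) = \int_0^x cos(t) dt
    h = x / steps
    return sum(cos((k + 0.5) * h) for k in range(steps)) * h

vals = [abs(F(10.0 ** (-k))) for k in range(1, 6)]
shrinks_to_zero = all(vals[i] > vals[i + 1] for i in range(len(vals) - 1))
```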
4,315,572
<p>exercise:</p> <p>Let us assume that the function f has derivatives of all orders.</p> <p>Suppose that all zeros of <span class="math-container">$f$</span> have finite multiplicity. Let <span class="math-container">$a$</span> and <span class="math-container">$b$</span> be points of <span class="math-container">$A$</span>, such that <span class="math-container">$a&lt;b$</span> and neither point is a zero. Show that <span class="math-container">$f$</span> has at most finitely many zeros in <span class="math-container">$] a, b[$</span></p> <p>(We say that a point <span class="math-container">$c$</span> is a root of <span class="math-container">$f(x)=0$</span> with multiplicity <span class="math-container">$m$</span>, if <span class="math-container">$f^{(k)}(c)=0$</span> for <span class="math-container">$k=0, \ldots, m-1$</span> and <span class="math-container">$f^{(m)}(c) \neq 0 .$</span> As usual <span class="math-container">$f^{(0)}$</span> denotes <span class="math-container">$f$</span>.)</p> <p>lemma:</p> <p>A zero of finite multiplicity is an isolated point of the set of zeros</p> <p>proof:</p> <p>Considering Taylor polynomial <span class="math-container">$E(h)=f(c+h)-\left(f(c)+\frac{1}{1 !} f^{\prime}(c) h+\frac{1}{2 !} f^{\prime \prime}(c) h^{2}+\cdots+\frac{1}{m !} f^{(m)}(c) h^{m}\right)$</span> and using L’Hopital’s Rule we can show that <span class="math-container">$\lim\limits_{h \rightarrow 0} \frac{E(h)}{h^{m}}=0$</span>. Because of the multiplicity and for <span class="math-container">$h \neq 0$</span> we can write <span class="math-container">$$ \frac{f(c+h)-f(c)}{h^{m}}=\frac{1}{m !} f^{(m)}(c)+\frac{E(h)}{h^{m}} $$</span> and from this we can deduce the proof of the lemma.</p> <p>So if we know if all zeros of <span class="math-container">$f$</span> have finite multiplicity then all zeros are isolated. 
Meanwhile, since the interval is bounded, we can use the Bolzano–Weierstrass theorem to show that if <span class="math-container">$f$</span> had infinitely many zeros, then at least one of them would be a limit point of the set of zeros, contradicting the fact that all zeros are isolated.</p> <p>But why do I need <span class="math-container">$f(a)$</span> and <span class="math-container">$f(b)$</span> to be nonzero, as stipulated in the exercise? I suspect the intended argument uses the intermediate value theorem and the Taylor polynomial with remainder, but I don't know how to carry it out.</p>
TonyK
1,508
<p>I think it's just to avoid a proliferation of cases in the proof. If you allow <span class="math-container">$f(a)=0$</span>, for instance, you have to say &quot;<span class="math-container">$n$</span> times right-differentiable&quot; every time, instead of just &quot;<span class="math-container">$n$</span> times differentiable&quot;. A function can have right-derivatives of all orders; for instance,<span class="math-container">$$f(x)=\begin{cases} e^{-1/x^2}\sin\frac{1}{x}&amp;\text{if }x\ne 0\\ 0&amp;\text{if }x=0 \end{cases}$$</span></p> <p>on <span class="math-container">$[0,1]$</span>. This has an infinite number of zeroes, all of finite multiplicity except <span class="math-container">$f(0)$</span>, which has infinite multiplicity. But then you have to define what the multiplicity of a zero is at the end-points of your interval. Not difficult, just messy.</p>
3,380,081
<p>Question: Suppose <span class="math-container">$n(S)$</span> is the number of subsets of <span class="math-container">$S$</span> and <span class="math-container">$|S|$</span> is the number of elements of <span class="math-container">$S$</span>. If <span class="math-container">$n(A)+n(B)+n(C)=n(A\cup B\cup C)$</span> and <span class="math-container">$|A|=|B|=100$</span>, find the minimum value of <span class="math-container">$|A\cap B\cap C|$</span>.</p> <p>Now, I realise PIE is the only way to go, but I don't know how to handle the intersections of the sets taken two at a time. Also, I know that this is a duplicate, but the original one wasn't answered fully. I'd request you to kindly provide a solution before mercilessly closing it off.</p> <p>Plus, I'm not good at even very basic set theory. If you could recommend a short but good book for set theory, I'd be much obliged.</p> <p>Thank you all!</p>
Paramanand Singh
72,031
<p>It is not necessary to choose equispaced division points. You can choose the division points as <span class="math-container">$$x_k=\left(1+\frac{k}{n}\right)^2$$</span> and use the Riemann sum <span class="math-container">$$S_n=\sum_{k=1}^{n}f(x_k)(x_k-x_{k-1})$$</span> where <span class="math-container">$f(x) =\sqrt{x} $</span>.</p> <p>Then <span class="math-container">$$S_n=\sum_{k=1}^{n}\left(1+\frac{k}{n}\right)\left\{\left(1+\frac{k}{n}\right)^2-\left(1+\frac{k-1}{n}\right)^2\right\}$$</span> which can be simplified as <span class="math-container">$$S_n=\frac{1}{n}\sum_{k=1}^{n}\left(1+\frac{k}{n}\right)\left(2+\frac{2k-1}{n}\right)=\frac{1}{n}\sum_{k=1}^{n}\left(2+\frac{4k-1}{n}+\frac{2k^2-k}{n^2}\right)$$</span> and this further simplifies to <span class="math-container">$$S_n=2+\frac{2n(n+1)-n}{n^2}+\frac{n(n+1)(2n+1)} {3n^3}-\frac{n(n+1)}{2n^3}$$</span> and clearly the above sum tends to <span class="math-container">$$2+2+\frac{2}{3}=\frac{14}{3}$$</span> as <span class="math-container">$n\to\infty $</span>. </p>
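A numerical check (my addition) of the non-equispaced Riemann sum: with division points x_k = (1 + k/n)², the sum S_n should approach 14/3 = ∫₁⁴ √x dx.

```python
from math import sqrt

def S(n):
    # Riemann sum with division points x_k = (1 + k/n)^2 and f(x) = sqrt(x)
    total = 0.0
    for k in range(1, n + 1):
        xk = (1.0 + k / n) ** 2
        xk1 = (1.0 + (k - 1) / n) ** 2
        total += sqrt(xk) * (xk - xk1)
    return total

approx = S(100_000)
err = abs(approx - 14.0 / 3.0)
```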
173,286
<p>I have these two functions <code>fun</code> and <code>microstep</code>. <code>fun</code> makes use of a Module construct within which I define the <code>Array</code> I need to store the values of magnetization for different temperatures (each case stored in a different row). <code>microstep</code> is the function that stores the data at the correct position at each step of the Monte Carlo algorithm. The Monte Carlo procedure doesn't really matter much now; what bothers me is that when I define the magnetization array inside <code>fun</code>, the function doesn't work properly:</p> <pre><code> fun [numbofsets_, nsteps_] := Module [{confinit, magnetization, index}, index = MapIndexed[ { #2[[1]], # } &amp;, numbofsets ]; (* {index, temp} tuple*) confinit = RandomChoice[{-1, 1}, {10, 10}]; (* initial random matrix *) magnetization = ConstantArray[ 0, {Length@numbofsets, nsteps}]; Table[ NestList[ microstep[ ##[[1]], ##[[2]], ##[[3]], ##[[4]] ] &amp; \ , { index[[i, 1]], index[[i, 2]] , confinit, 2 } , nsteps]; , {i, 1, Length@numbofsets}]; ] </code></pre> <p>and</p> <pre><code>microstep[tindex_, temp_, matrix_, mcindex_] := Module[{ tempmatrix = matrix, dimx, dimy, x , y , it = 1/temp, down, up, left, right, spinsum , randnum, bool = False , J = 1 }, (* generic Metropolis Algorithm *) dimx = Dimensions[matrix][[1]]; dimy = Dimensions[matrix][[2]]; x = RandomInteger[{1, dimx}]; y = RandomInteger[{1, dimy}]; randnum = RandomReal[]; spinsum = Plus[Compile`GetElement[matrix, Mod[x + 1, dimx, 1], y], Compile`GetElement[matrix, Mod[x - 1, dimx, 1], y], Compile`GetElement[matrix, x, Mod[y - 1, dimy, 1]], Compile`GetElement[matrix, x, Mod[y + 1, dimy, 1]]]; If[2*J *spinsum*tempmatrix[[x, y]] &lt; 0 \[Or] randnum &lt; E^(- it*2*J*tempmatrix[[x, y]]*spinsum) , tempmatrix[[x, y]] = -Compile`GetElement[matrix, x, y]; bool = True ]; (* tricky part starts here *) If[bool, magnetization[[tindex, mcindex]] = Abs[(magnetization[[tindex, mcindex - 1]] + 2 *tempmatrix[[x, y]])] ; , magnetization[[tindex, 
mcindex]] = magnetization[[tindex, mcindex - 1]]; ]; {tindex, temp, tempmatrix, mcindex + 1} ] </code></pre> <p>Now if I run </p> <pre><code>fun [{2, 3, 4}, 10] </code></pre> <p>I get </p> <blockquote> <p>"Part specification magnetization[[1,1]] is longer than depth of \ object"</p> </blockquote> <p>Meanwhile, if I declare the magnetization array outside the Module, the function works properly, giving me the correctly stored values, but it forces me to use global variables:</p> <pre><code>magnetization = ConstantArray[0, {3, 11}]; fun [{2, 3, 4}, 10]; magnetization </code></pre> <blockquote> <p>{ {0, 2, 0, 2, 4, 6, 8, 6, 8, 6, 6}, {0, 2, 2, 0, 2, 2, 4, 4, 2, 0, 2}, {0, 2, 4, 6, 8, 10, 8, 6, 6, 4, 4} })</p> </blockquote> <p>I think the problem arises from the fact that Module is a scoping construct. I thought that a function called inside it would see the local variable, but it doesn't, and I don't know how to solve the problem. In C-like languages pointers can be used; is there anything similar in Mathematica? Also, as always, any suggestions are appreciated.</p>
Henrik Schumacher
38,178
<p>Does this work out for you?</p> <p>Here I added <code>magnetization</code> as an additional argument and gave <code>microstep</code> the attribute <code>HoldAll</code> to allow for call by reference.</p> <pre><code>SetAttributes[microstep, HoldAll]; microstep[tindex_, temp_, matrix_, mcindex_, magnetization_] := Module[{tempmatrix = matrix, dimx, dimy, x, y, it = 1/temp, down, up, left, right, spinsum, randnum, bool = False, J = 1},(*generic Metropolis Algorithm*) dimx = Dimensions[matrix][[1]]; dimy = Dimensions[matrix][[2]]; x = RandomInteger[{1, dimx}]; y = RandomInteger[{1, dimy}]; randnum = RandomReal[]; spinsum = Plus[ Compile`GetElement[matrix, Mod[x + 1, dimx, 1], y], Compile`GetElement[matrix, Mod[x - 1, dimx, 1], y], Compile`GetElement[matrix, x, Mod[y - 1, dimy, 1]], Compile`GetElement[matrix, x, Mod[y + 1, dimy, 1]] ]; If[2*J*spinsum*tempmatrix[[x, y]] &lt; 0 ∨ randnum &lt; E^(-it*2*J*tempmatrix[[x, y]]*spinsum), tempmatrix[[x, y]] = -Compile`GetElement[matrix, x, y]; bool = True]; (*tricky part starts here*) If[bool, magnetization[[tindex, mcindex]] = Abs[(magnetization[[tindex, mcindex - 1]] + 2*tempmatrix[[x, y]])];, magnetization[[tindex, mcindex]] = magnetization[[tindex, mcindex - 1]]; ]; {tindex, temp, tempmatrix, mcindex + 1}] </code></pre> <p>There was also a second issue within <code>fun</code>: Apparently, the array <code>magnetization</code> was set up a bit too short, so I lengthened it by <code>1</code>. After also removing some stray <code>;</code>, the function <code>fun</code> executes without error and returns a result. 
Checking whether the result is correct is up to you.</p> <pre><code>fun[numbofsets_, nsteps_] := Module[{confinit, index}, index = MapIndexed[{#2[[1]], #} &amp;, numbofsets];(*{index,temp} tuple*) confinit = RandomChoice[{-1, 1}, {10, 10}];(*initial random matrix*) magnetization = ConstantArray[0, {Length@numbofsets, nsteps + 1}]; Table[ NestList[ microstep[##[[1]], ##[[2]], ##[[3]], ##[[4]], magnetization] &amp;, {index[[i, 1]], index[[i, 2]], confinit, 2}, nsteps ] , {i, 1, Length@numbofsets}] ] </code></pre>
1,641,137
<p>Let $(X,d)$ be a metric space, $a \in X$, and $\delta$ be a positive real number. Then the open ball $B(a;\delta)$ is defined as $$B(a;\delta) \colon= \left\{ \ x \in X \ \colon \ d(x,a) &lt; \delta \ \right\},$$ whereas the sphere $S(a; \delta)$ is defined as $$S(a;\delta) \colon= \left\{ \ x \in X \ \colon \ d(x,a) = \delta \ \right\}.$$ Then the closure $\overline{B(a;\delta)}$ of $B(a;\delta)$ need not equal $B(a;\delta) \cup S(a;\delta)$. </p> <p>In particular, in the Euclidean space $\mathbb{R}^k$ this equality holds, whereas in a discrete metric space (with more than one point) it fails. Am I right? </p> <p>Now is (are) there any necessary and / or sufficient condition(s) on $(X,d)$ under which $$\overline{B(a;\delta)} = B(a;\delta) \cup S(a;\delta)?$$</p>
tomasz
30,222
<p>An example of a meaningful, sufficient condition for this is that $X$ is a <a href="https://en.wikipedia.org/wiki/Intrinsic_metric" rel="nofollow">length space</a>. This is certainly not a necessary condition: for example, a dense subspace of a length space also has this property (more generally, the property is inherited by dense subspaces; it is also inherited by open subspaces).</p> <p>A simple necessary condition is that for each $x$, you need to know that within the range of the function $d(x,\cdot)\colon X\to {\bf R}$, every point is the limit of an increasing sequence. Otherwise, it is easy to find a counterexample. Of course, this condition is certainly not sufficient, as one can easily alter a space to artificially satisfy it (say, by taking a product with ${\bf R}$).</p> <p>Another example of a space with this property is the (Euclidean) plane with a single open disk removed. This space is very far from being a length space, but it has your property. On the other hand, a plane without an open rectangle will not have the property. This shows that the condition is very sensitive with respect to the geometry of the space (considering how similar the two examples are -- you could even round off the corners of the rectangle to make them even more alike).</p>
91,645
<p>I asked a similar question previously, though this is more specific and directed. In the writing of mathematics research papers, when is information, such as definitions, cited? I have read that if it is fairly recent, then cite it. But what is "fairly recent"? Also, should books whence a definition came be cited? In other words, what makes something cite-worthy? I have read several articles on the matter and they were all very vague.</p>
Potato
18,240
<p>The first rule of academic honesty is: if in doubt, always cite. There is simply no downside, and it may help readers unfamiliar with the literature.</p>
3,701,582
<p>I still struggle mightily with basic conceptions of truth and proof. </p> <p>For example: The Continuum Hypothesis (CH) is either true or false, i.e. either CH or ~CH holds. Now, Goedel and Cohen proved that CH/~CH are independent of ZFC, so ZFC + CH and ZFC + ~CH are both consistent (assuming ZFC is consistent, which mathematicians generally do anyway). But since we know that one of CH and ~CH must be false, how can that be? One of those axiom systems must be inconsistent, since it has no models (because one of its axioms is false).</p> <p>Another example is the parallel axiom (P) in Euclidean geometry. P is true or false, i.e. P or ~P. That would mean that either the Euclidean or the non-Euclidean geometry system has to be inconsistent (= no model).</p> <p>Can somebody explain where I make a mistake?</p>
Reveillark
122,262
<p>Before getting to the mathematics, here is some philosophy: </p> <p>To a first approximation, there are three ways of approaching mathematics:</p> <ul> <li><p><em>Platonism</em> is the belief that mathematical statements have an intrinsic truth value, and that mathematical objects really exist in some ideal universe. To a platonist, CH is either true or false. The independence results in set theory just say that ZFC is insufficient to decide which one it is. More generally, by Gödel's incompleteness theorems, first order logic as a whole is insufficient to decide the truth value of every statement (in the sense that any first order theory which is consistent and has enough arithmetic cannot prove everything). </p></li> <li><p><em>Formalism</em> is sort of the agnostic approach. A formalist makes no judgments regarding to the inherent truth of mathematical statements, but rather views mathematics as playing around with symbols (subject to some rules). To a formalist, the independence results of Gödel and Cohen just say that there's no way to play around with strings of symbols so that the end result is a proof of CH (or ¬CH). Hence, the formalist will not waste time trying to come up with a proof, since they know that is not possible. Here, the notion of "is a proof" (together with other notions of first order logic) are defined in the metatheory, and are presumably finitistic statements on whose validity we can all agree. On that note:</p></li> <li><p><em>Finitism</em> is (more or less) the belief that infinity is a fiction. To a finitist, CH is a meaningless statement about non-existent objects. Still, a finitist can derive some value from the independence results, in a similar fashion to a formalist. Nevertheless,</p></li> </ul> <hr> <p>Regarding your statement about consistency of different theories, the key is that CH is valid <em>in some models</em>, and false in some others. 
We're not looking at the truth value of CH in the entire universe, but rather in some specific model. Still, this tells you that no proof of CH could exist, because any such proof would still go through in any model of ZFC (this is just the Soundness Theorem). In particular, it would go through in a model of ¬CH, which would result in a contradiction. </p> <p>If you look at, for example, ordered sets, the statement "There is a largest element" is true in the model <span class="math-container">$\{0,1\}$</span>, but false in the model <span class="math-container">$\mathbb{N}$</span> (with the usual order). It would be foolish to discard the study of bounded or unbounded ordered sets just because we have a statement which has to "really" be true or false, whatever that means.</p> <hr> <p>On another note, the notion of "truth" itself is rather problematic. This is the content of Tarski's (aptly named) theorem of the Undefinability of Truth. Roughly, it says that it is not possible to express "truth" as a first order property. More precisely, working in ZFC (much weaker theories suffice), if <span class="math-container">$\phi$</span> is a formula in <span class="math-container">$\{\in\}$</span>, we can define a formal counterpart, a code <span class="math-container">$\lceil \phi\rceil$</span> (for example, code the metalinguistic first order symbols using natural numbers). Then, Tarski's theorem says that, unless ZFC is inconsistent, there is no first order formula (in one free variable) <span class="math-container">$T(v)$</span> such that, for every sentence <span class="math-container">$\sigma$</span>, the following is provable: <span class="math-container">$$ T(\lceil \sigma\rceil)\leftrightarrow \sigma $$</span></p>
3,387,458
<p>Show that a bounded sequence having one limit point is convergent. </p> <p>The converse holds true. The fact that a convergent sequence is bounded has been shown in Baby Rudin. The fact that it will have only one limit point can be found <a href="https://math.stackexchange.com/questions/3386703/prove-that-a-convergent-sequence-has-only-one-limit-point-proof-verification/3386711?noredirect=1#comment6967723_3386711">here</a>.</p> <p>A similar question seems to have been asked before on MSE, except that there is no complete or well-detailed solution.</p>
user284331
284,331
<p>Write <span class="math-container">$\limsup x_{n}=\lim_{k}x_{n_{k}}$</span> and <span class="math-container">$\liminf x_{n}=\lim_{l}y_{n_{l}}$</span> for some subsequences <span class="math-container">$(x_{n_{k}})$</span> and <span class="math-container">$(y_{n_{l}})$</span> of <span class="math-container">$(x_{n})$</span>; these limits exist and are finite because the sequence is bounded, and both are limit points of <span class="math-container">$(x_{n})$</span>. Since there is only one limit point, by assumption <span class="math-container">$\lim_{k}x_{n_{k}}=\lim_{l}y_{n_{l}}$</span>, i.e. <span class="math-container">$\limsup x_{n}=\liminf x_{n}$</span>, so the limit exists.</p>
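As a numerical illustration (Python; the concrete sequence is my own example, not from the question): for a bounded sequence with a single limit point, the tail suprema and infima squeeze together.

```python
# x_n = 1 + (-1)^n / n: bounded, with the single limit point 1.
N = 10000
xs = [1 + (-1) ** n / n for n in range(1, N + 1)]

# Suprema/infima over a tail of the sequence approximate limsup/liminf.
tail = xs[N // 2:]
approx_limsup = max(tail)
approx_liminf = min(tail)
print(approx_limsup, approx_liminf)   # both near 1: the sequence converges to 1
```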
2,973,314
<p>If we have to find the sum of <span class="math-container">$n$</span> terms of a G.P., then we have two formulas for it: (1) <span class="math-container">$a(1-r^n)/(1-r)$</span> and (2) <span class="math-container">$a(r^n-1)/(r-1)$</span>. Now I know how (1) has been derived, but I don't know about (2) (is it obtained by multiplying the numerator and denominator of (1) by <span class="math-container">$-1$</span>?). I am also confused about when to use each of them, and why there exist two formulas for the same objective. Please explain.</p>
Dr. Sonnhard Graubner
175,066
<p>Multiply the numerator and denominator of (2) by <span class="math-container">$(-1)$</span>: <span class="math-container">$$\frac{a(r^n-1)(-1)}{(r-1)(-1)}=\frac{a(1-r^n)}{1-r},$$</span> which is formula (1). The two formulas are identical; (1) is usually preferred when <span class="math-container">$r&lt;1$</span> and (2) when <span class="math-container">$r&gt;1$</span>, so that the numerator and denominator stay positive.</p>
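A quick check in exact arithmetic (Python sketch; the helper names `gp_sum_1` and `gp_sum_2` are mine) confirms that the two formulas and the direct sum all agree:

```python
from fractions import Fraction

def gp_sum_1(a, r, n):
    # Formula (1): a(1 - r^n) / (1 - r)
    return a * (1 - r**n) / (1 - r)

def gp_sum_2(a, r, n):
    # Formula (2): a(r^n - 1) / (r - 1)
    return a * (r**n - 1) / (r - 1)

a, n = Fraction(3), 5
for r in (Fraction(2), Fraction(1, 2)):
    direct = sum(a * r**k for k in range(n))   # a + ar + ... + ar^(n-1)
    assert gp_sum_1(a, r, n) == gp_sum_2(a, r, n) == direct
    print(r, direct)
```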
360,608
<p>In physics I came across these kind of equations when I am trying to find the asymptotic behaviour of some function.</p> <p>Can anyone explain if there is any sense in talking about $\sin(x)$ or $\cos(x)$ as $x$ tends to infinity?</p> <p>$$\lim_{x\rightarrow\infty}\;\sin(x)?$$</p>
preferred_anon
27,150
<p>I think the easiest way to express what a limit really means is to say that you get arbitrarily close to the limit as you get closer and closer to your desired input. </p> <p>As $x$ goes to infinity, $\sin(x)$ and $\cos(x)$ take the values $-1$ and $1$ infinitely often, and therefore do not stay as close as we might like to any single value. We therefore say that the limit does not exist.</p>
360,608
<p>In physics I came across these kind of equations when I am trying to find the asymptotic behaviour of some function.</p> <p>Can anyone explain if there is any sense in talking about $\sin(x)$ or $\cos(x)$ as $x$ tends to infinity?</p> <p>$$\lim_{x\rightarrow\infty}\;\sin(x)?$$</p>
Thomas
26,188
<p>The limit $$ \lim_{x\to \infty} \sin(x) $$ does not exist. What we mean by saying that it doesn't exist is that there is no $L$ such that $\sin(x)$ can be made arbitrarily close to $L$ for $x$ "large enough". You could try to prove that such an $L$ doesn't exist by assuming that one did exist and then deriving a contradiction.</p> <p>Another way to see that the limit doesn't exist is to note that for $x_n = 2\pi n$ you have $$ \lim_{n\to \infty} \sin(x_n) = 0 $$ while for $y_n = \frac{\pi}{2} + 2\pi n$ $$ \lim_{n\to \infty} \sin(y_n) = 1. $$ Since you have found two sequences that both go to infinity at which $\sin$ approaches two different values, the limit $\lim_{x\to \infty} \sin(x)$ can't exist. </p>
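The two subsequences are easy to check numerically (Python):

```python
import math

# Along x_n = 2*pi*n, sin stays at 0; along y_n = pi/2 + 2*pi*n it stays at 1,
# so sin cannot approach a single limit as its argument grows.
for n in (1, 10, 100):
    print(math.sin(2 * math.pi * n), math.sin(math.pi / 2 + 2 * math.pi * n))
```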
869,506
<p>In a paper I am reading, there is a step that seems to come from the following inequality: $$(1+x)^\alpha \le 1+2^\alpha x,$$ where $0&lt;x&lt;1$. (Also, $3\le \alpha \le 9/2$ in the context of the paper, but the above probably holds for more general $\alpha$, say, $\alpha\ge 1$.) It is stated with no explanation, and I feel that there is a slick solution, but I am unable to prove it without calculus.</p> <hr> <p>I was unsuccessful with the binomial expansion due to the generalized binomial coefficients.</p> <p>I tried a calculus approach that assumes $0&lt;x&lt;1$ and $\alpha \ge 1$, and I think I was successful. Consider $f(x):=1+2^\alpha x - (1+x)^\alpha$, and note that $f(0)=0$ and $f(1)=1$. Then, $$f'(x)=2^\alpha-\alpha(1+x)^{\alpha-1}.$$ Since $f'(0)=2^\alpha-\alpha&gt;0$, we know $f$ is increasing at $0$. If we find zero or one critical point in $[0,1]$, we are finished.</p> <p>Setting $f'(x^*)=0$ gives \begin{align*} 2^\alpha &amp;= \alpha(1+x^*)^{\alpha-1}\\ x^* &amp;=\left(\frac{2^\alpha}{\alpha}\right)^{1/(\alpha-1)}-1 \ge 0\\ \end{align*} (because $2^\alpha&gt;\alpha$), so we have one critical point in the positive reals, and we are finished.</p> <p>I'm a little uncertain about these last few steps involving the critical point; is it correct?</p> <hr> <p><strong>Question</strong>: Is there an easier way to prove the inequality, and does it hold for more general $x$ and $\alpha$?</p>
robjohn
13,854
<p>Consider $$ f(x)=1+(2^\alpha-1)x-(1+x)^\alpha\tag{1} $$ Note that $f(0)=f(1)=0$. The Mean Value Theorem says that for some $0\lt x_\alpha\lt1$, we have $f'(x_\alpha)=0$.</p> <p>Furthermore, since $\alpha-1\ge0$, $f'(x)=(2^\alpha-1)-\alpha(1+x)^{\alpha-1}$ is non-increasing.</p> <p>Thus, $f'(x)\ge0$ for $0\le x\le x_\alpha$, and $f'(x)\le0$ for $x_\alpha\le x\le1$. That is, $f(0)=0$, then $f(x)$ increases for $0\le x\le x_\alpha$, then $f(x)$ decreases for $x_\alpha\le x\le1$, then $f(1)=0$.</p> <p>Therefore, $f(x)\ge0$ for $0\le x\le1$. That is, $$ 1+(2^\alpha-1)x\ge(1+x)^\alpha\tag{2} $$ which implies $$ 1+2^\alpha x\ge(1+x)^\alpha\tag{3} $$</p> <hr> <p>Note that for $x\lt0$, the inequality fails, so $x\ge0$ is sharp. However, since $(2)$ is a bit stronger than $(3)$, at $x=1$, the left side of $(3)$ is $1$ greater than the right side. Thus, we can extend $x$ a bit beyond $1$; how far is determined by $\alpha$. For $\alpha=1$, we get that $$ 1+2x\ge1+x $$ which is true for all $x\ge0$. For $\alpha\gt1$, $(1+x)^\alpha$ grows faster than $1+2^\alpha x$ and so there will be some greatest $x$ where $(3)$ holds.</p>
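A brute-force numerical check of $(2)$ (Python sketch; the grid covers $x\in[0,1]$ and $\alpha\in[1,4.5]$, matching the ranges discussed above):

```python
# Check (2): 1 + (2^a - 1)*x >= (1 + x)^a on a grid of x in [0, 1], a in [1, 4.5].
steps = 200
worst = float("inf")
for i in range(steps + 1):
    x = i / steps
    for j in range(steps + 1):
        a = 1 + 3.5 * j / steps
        worst = min(worst, 1 + (2**a - 1) * x - (1 + x) ** a)
print(worst)   # ~0: the difference is never negative on the grid
```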
3,602,323
<p>Let <span class="math-container">$ m $</span>, <span class="math-container">$ m+1 $</span>, <span class="math-container">$ m+2 $</span>, <span class="math-container">$ \dots $</span>, <span class="math-container">$ m+p-1 $</span> be integers and let <span class="math-container">$ p $</span> be an odd prime. I want to show that <span class="math-container">$$ m + (m+1)^{p-2} + (m+2)^{p-2} + \cdots + (m+p-1)^{p-2} \equiv 0\pmod p. $$</span> This comes down to showing that <span class="math-container">$$ 1 + 2^{p-2} + 3^{p-2} + \cdots + (p-1)^{p-2} \equiv 0\pmod p, $$</span> because <span class="math-container">$ m, m+1, \dots, m+p-1 $</span> form a complete residue system modulo <span class="math-container">$ p $</span>, so without loss of generality we may assume that <span class="math-container">$ m \equiv 0 \pmod p$</span>, and then <span class="math-container">$ m+i \equiv i \pmod p$</span> for <span class="math-container">$ i = 1,2,\dots,p-1 $</span>.</p>
metamorphy
543,769
<p>Yet another way of computing the sum, or (following @Servaes) of the even more general sum <span class="math-container">$$S_k=\sum_{a\in\mathbb{F}_q^{\times}}a^k$$</span> where <span class="math-container">$\mathbb{F}_q$</span> is a finite field with <span class="math-container">$q$</span> elements (<span class="math-container">$q$</span> is a <em>power</em> of a prime; this includes the case of <span class="math-container">$q=p$</span>) and <span class="math-container">$k$</span> is <em>any</em> integer, is as follows. Assume (again) we know that <span class="math-container">$\mathbb{F}_q^{\times}$</span> is cyclic, and let <span class="math-container">$g$</span> be a generator of this group. If <span class="math-container">$k$</span> is a multiple of <span class="math-container">$q-1$</span>, then each term of <span class="math-container">$S_k$</span> is <span class="math-container">$1$</span>, so <span class="math-container">$S_k=q-1=-1$</span> in this case. Otherwise <span class="math-container">$g^k\neq 1$</span>, and since <span class="math-container">$a\mapsto ga$</span> is a bijection of the group onto itself, we have <span class="math-container">$$S_k=\sum_{a\in\mathbb{F}_q^{\times}}(ga)^k=g^k S_k,$$</span> hence <span class="math-container">$(g^k-1)S_k=0$</span> and <span class="math-container">$S_k=0$</span>.</p>
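For $q=p$ prime, the conclusion is easy to test directly (Python sketch; `power_sum_mod` is a name I made up):

```python
# Sum of a^k over the nonzero residues mod p: p-1 if (p-1) | k, else 0.
def power_sum_mod(p, k):
    return sum(pow(a, k, p) for a in range(1, p)) % p

for p in (5, 7, 11, 13):
    for k in range(3 * (p - 1)):
        expected = (p - 1) if k % (p - 1) == 0 else 0
        assert power_sum_mod(p, k) == expected
print("checked")
```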
614,962
<blockquote> <p>We have a continuous function <span class="math-container">$f:(a,b)\to \mathbb R$</span></p> <p>Prove that: <span class="math-container">$\forall n: x_1...x_n\in(a,b):\exists x\in(a,b)$</span> such that:</p> <p><span class="math-container">$$f(x)=\frac1n ( f(x_1)+...+f(x_n) ) $$</span></p> </blockquote> <p>Experience tells me that it may be possible with induction, but I have no clue how to begin; I don't even see how that is possible.</p> <p>Help please?</p>
DonAntonio
31,254
<p>$$\frac{x\csc 2x}{\cos 5x}=\frac{2x}{\sin 2x}\cdot\frac1{2\cos 5x}\xrightarrow[x\to 0]{}1\cdot\frac12=\frac12$$</p>
614,962
<blockquote> <p>We have a continuous function <span class="math-container">$f:(a,b)\to \mathbb R$</span></p> <p>Prove that: <span class="math-container">$\forall n: x_1...x_n\in(a,b):\exists x\in(a,b)$</span> such that:</p> <p><span class="math-container">$$f(x)=\frac1n ( f(x_1)+...+f(x_n) ) $$</span></p> </blockquote> <p>Experience tells me that it may be possible with induction, but I have no clue how to begin; I don't even see how that is possible.</p> <p>Help please?</p>
Adi Dani
12,848
<p>$$\lim_{x \to 0} \frac{x \csc(2x)}{\cos(5x)}=\lim_{x \to 0} \frac{x }{\sin2x\cos(5x)}$$ $$=\lim_{x\to0}\frac{2x}{\sin(2x)}\frac{1}{2\cos(5x)}=\frac{1}{2}$$</p>
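Numerically (Python), the quotient indeed settles at $1/2$:

```python
import math

# x * csc(2x) / cos(5x) = x / (sin(2x) * cos(5x))
def g(x):
    return x / (math.sin(2 * x) * math.cos(5 * x))

for x in (0.1, 0.01, 0.001):
    print(g(x))   # tends to 0.5
```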
3,746,630
<p>So I am solving some probability/finance books and I've gone through two similar problems that conflict in their answers.</p> <h2>Paul Wilmott</h2> <p>The first book is Paul Wilmott's <a href="https://smile.amazon.com/Frequently-Asked-Questions-Quantitative-Finance/dp/0470748753" rel="nofollow noreferrer">Frequently Asked Questions in Quantitative Finance</a>. This book poses the following question:</p> <blockquote> <p>Every day a trader either makes 50% with probability 0.6 or loses 50% with probability 0.4. What is the probability the trader will be ahead at the end of a year, 260 trading days? Over what number of days does the trader have the maximum probability of making money?</p> </blockquote> <p><strong>Solution:</strong></p> <blockquote> <p>This is a nice one because it is extremely counterintuitive. At first glance it looks like you are going to make money in the long run, but this is not the case. Let n be the number of days on which you make 50%. After <span class="math-container">$n$</span> days your returns, <span class="math-container">$R_n$</span> will be: <span class="math-container">$$R_n = 1.5^n 0.5^{260−n}$$</span> So the question can be recast in terms of finding <span class="math-container">$n$</span> for which this expression is equal to 1.</p> </blockquote> <p>He does some math, which you can do as well, that leads to <span class="math-container">$n=164.04$</span>. So a trader needs to win at least 165 days to make a profit. He then says that the average profit <em>per day</em> is:</p> <blockquote> <p><span class="math-container">$1−e^{0.6 \ln1.5 + 0.4\ln0.5}$</span> = −3.34%</p> </blockquote> <p>Which is mathematically wrong, but assuming he just switched the numbers and it should be:</p> <blockquote> <p><span class="math-container">$e^{0.6 \ln1.5 + 0.4\ln0.5} - 1$</span> = −3.34%</p> </blockquote> <p>That still doesn't make sense to me. Why are the probabilities in the exponents? 
I don't get Wilmott's approach here.</p> <p>*PS: I ignore the second question; I'm focused only on the daily average return here.</p> <hr /> <h2>Mark Joshi</h2> <p>The second book is Mark Joshi's <a href="https://smile.amazon.com/Quant-Interview-Questions-Answers-Second/dp/0987122827" rel="nofollow noreferrer">Quant Job Interview Question and Answers</a> which poses this question:</p> <blockquote> <p>Suppose you have a fair coin. You start off with a dollar, and if you toss an <em>H</em> your position doubles, if you toss a <em>T</em> it halves. What is the expected value of your portfolio if you toss infinitely?</p> </blockquote> <p><strong>Solution</strong></p> <blockquote> <p>Let <span class="math-container">$X$</span> denote a toss, then: <span class="math-container">$$E(X) = \frac{1}{2}*2 + \frac{1}{2}\frac{1}{2} = \frac{5}{4}$$</span> So for <span class="math-container">$n$</span> tosses: <span class="math-container">$$R_n = (\frac{5}{4})^n$$</span> Which tends to infinity as <span class="math-container">$n$</span> tends to infinity</p> </blockquote> <hr /> <hr /> <p>Uhm, excuse me what? Who is right here and who is wrong? Why do they use different formulas? Using Wilmott's (second, corrected) formula for Joshi's situation, I get that the average return per day is:</p> <p><span class="math-container">$$ e^{0.5\ln(2) + 0.5\ln(0.5)} - 1 = 0\% $$</span></p> <p>I ran a Python simulation of this, simulating <span class="math-container">$n$</span> days/tosses/whatever, and it seems that the above is not correct. Joshi was right, the portfolio tends to infinity. Wilmott was also right, the portfolio goes to zero when I use his parameters.</p> <p>Wilmott also explicitly dismisses Joshi's approach saying:</p> <blockquote> <p>As well as being counterintuitive, this question does give a nice insight into money management and is clearly related to the Kelly criterion. 
If you see a question like this it is meant to trick you if the expected profit, here 0.6 × 0.5 + 0.4 × (−0.5) = 0.1, is positive with the expected return, here −3.34%, negative.</p> </blockquote> <p>So what is going on?</p> <p>Here is the code:</p> <pre><code>import random

def traderToss(n_tries, p_win, win_ratio, loss_ratio):
    SIM = 10**5                        # Number of times to run the simulation
    ret = 0.0
    for _ in range(SIM):
        curr = 1                       # Starting portfolio
        for _ in range(n_tries):       # number of flips/days/whatever
            if random.random() &lt; p_win:   # win with probability p_win
                curr *= win_ratio
            else:
                curr *= loss_ratio
        ret += curr                    # add this simulation's final portfolio value
    print(ret/SIM)                     # Print average portfolio value (E[X])
</code></pre> <p>Use: <code>traderToss(260, 0.6, 1.5, 0.5)</code> to test Wilmott's trader scenario.</p> <p>Use: <code>traderToss(260, 0.5, 2, 0.5)</code> to test Joshi's coin flip scenario.</p> <hr /> <hr /> <p>Thanks to the followup comments from Robert Shore and Steve Kass below, I have figured out one part of the issue. <strong>Joshi's answer assumes you play once, therefore the returns would be additive and not multiplicative.</strong> His question is vague enough, using the words &quot;your portfolio&quot;, to suggest that we place our returns back in for each consecutive toss. 
If this were the case, we need the <a href="https://www.investopedia.com/articles/investing/071113/breaking-down-geometric-mean.asp" rel="nofollow noreferrer">geometric mean</a>, not the arithmetic mean, which is the expected value calculation he does.</p> <p>This is verifiable by changing the Python simulation to:</p> <pre><code>import random

def traderToss():
    SIM = 10**5            # Number of times to run the simulation
    ret = 0.0
    for _ in range(SIM):
        if random.random() &gt; 0.5:
            curr = 2       # Our portfolio becomes 2
        else:
            curr = 0.5     # Our portfolio becomes 0.5
        ret += curr
    print(ret/SIM)         # Print single-day expected portfolio value
</code></pre> <p>This yields <span class="math-container">$\approx 1.25$</span> as in the book.</p> <p>However, if returns are multiplicative, we need a different approach, which I assume is Wilmott's formula. This is where I'm stuck, because I still don't understand the Wilmott formula. Why is the end-of-day portfolio on average:</p> <p><span class="math-container">$$ R_{day} = r_1^{p_1} * r_2^{p_2} * .... * r_n^{p_n} $$</span></p> <p>where <span class="math-container">$r_i$</span>, <span class="math-container">$p_i$</span> are the portfolio multiplier and the probability for scenario <span class="math-container">$i$</span>, and there are <span class="math-container">$n$</span> possible scenarios? Where does this (generalized) formula come from in probability theory? This isn't a geometric mean. Then what is it?</p>
Robert Shore
640,080
<p>The difference is that a <span class="math-container">$50$</span>% loss and a <span class="math-container">$50$</span>% gain (in either sequence) result in a net loss (AM-GM inequality), whereas halving and doubling (in either sequence) do not result in a net loss. Joshi is presenting (and solving) a different problem, one in which, half the time, the trader's return is <span class="math-container">$100$</span>%. So there's no <em>a priori</em> reason to expect the same result.</p> <p>Having said that, Wilmott's answer to the Joshi question is wrong. For <span class="math-container">$n$</span> tosses, <span class="math-container">$R_k=2^k(\frac 12)^{n-k}=2^{2k-n}$</span>, where <span class="math-container">$k$</span> is the number of times you toss heads. Wilmott's analysis of Joshi assumes that you are starting afresh each time with a single dollar.</p> <p>Wilmott's solution to his own problem is correct. If you take ten trials, you expect a return of <span class="math-container">$1.5^6 \cdot 0.5^4 -1 = \frac{729}{1024}-1 = -\frac{295}{1024}$</span>. Taking the geometric mean gets you <span class="math-container">$\sqrt[10]{1.5^6 \cdot 0.5^4} -1 = 1.5^{0.6} \cdot 0.5^{0.4}-1$</span>, which is exactly what Wilmott says (just writing it in exponential form).</p>
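The arithmetic-versus-geometric distinction in this answer can be checked directly (Python sketch; the helper names are mine): the per-day expected multiplier and the probability-weighted geometric mean disagree in Wilmott's setup, while the geometric mean is exactly $1$ in Joshi's.

```python
import math

def arith_mean(p, up, down):
    # Expected one-day multiplier (arithmetic mean).
    return p * up + (1 - p) * down

def geom_mean(p, up, down):
    # Probability-weighted geometric mean, up^p * down^(1-p);
    # this governs the typical growth of a compounded portfolio.
    return math.exp(p * math.log(up) + (1 - p) * math.log(down))

# Wilmott's trader: positive expected profit, negative typical growth.
print(arith_mean(0.6, 1.5, 0.5) - 1)   # about +0.10
print(geom_mean(0.6, 1.5, 0.5) - 1)    # about -0.0334

# Joshi's coin: expected value grows (5/4 per toss) but typical growth is flat.
print(arith_mean(0.5, 2.0, 0.5) - 1)   # about +0.25
print(geom_mean(0.5, 2.0, 0.5) - 1)    # 0.0
```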
3,746,630
<p>So I am solving some probability/finance books and I've gone through two similar problems that conflict in their answers.</p> <h2>Paul Wilmott</h2> <p>The first book is Paul Wilmott's <a href="https://smile.amazon.com/Frequently-Asked-Questions-Quantitative-Finance/dp/0470748753" rel="nofollow noreferrer">Frequently Asked Questions in Quantitative Finance</a>. This book poses the following question:</p> <blockquote> <p>Every day a trader either makes 50% with probability 0.6 or loses 50% with probability 0.4. What is the probability the trader will be ahead at the end of a year, 260 trading days? Over what number of days does the trader have the maximum probability of making money?</p> </blockquote> <p><strong>Solution:</strong></p> <blockquote> <p>This is a nice one because it is extremely counterintuitive. At first glance it looks like you are going to make money in the long run, but this is not the case. Let n be the number of days on which you make 50%. After <span class="math-container">$n$</span> days your returns, <span class="math-container">$R_n$</span> will be: <span class="math-container">$$R_n = 1.5^n 0.5^{260−n}$$</span> So the question can be recast in terms of finding <span class="math-container">$n$</span> for which this expression is equal to 1.</p> </blockquote> <p>He does some math, which you can do as well, that leads to <span class="math-container">$n=164.04$</span>. So a trader needs to win at least 165 days to make a profit. He then says that the average profit <em>per day</em> is:</p> <blockquote> <p><span class="math-container">$1−e^{0.6 \ln1.5 + 0.4\ln0.5}$</span> = −3.34%</p> </blockquote> <p>Which is mathematically wrong, but assuming he just switched the numbers and it should be:</p> <blockquote> <p><span class="math-container">$e^{0.6 \ln1.5 + 0.4\ln0.5} - 1$</span> = −3.34%</p> </blockquote> <p>That still doesn't make sense to me. Why are the probabilities in the exponents? 
I don't get Wilmott's approach here.</p> <p>*PS: I ignore the second question; I'm focused only on the daily average return here.</p> <hr /> <h2>Mark Joshi</h2> <p>The second book is Mark Joshi's <a href="https://smile.amazon.com/Quant-Interview-Questions-Answers-Second/dp/0987122827" rel="nofollow noreferrer">Quant Job Interview Question and Answers</a> which poses this question:</p> <blockquote> <p>Suppose you have a fair coin. You start off with a dollar, and if you toss an <em>H</em> your position doubles, if you toss a <em>T</em> it halves. What is the expected value of your portfolio if you toss infinitely?</p> </blockquote> <p><strong>Solution</strong></p> <blockquote> <p>Let <span class="math-container">$X$</span> denote a toss, then: <span class="math-container">$$E(X) = \frac{1}{2}*2 + \frac{1}{2}\frac{1}{2} = \frac{5}{4}$$</span> So for <span class="math-container">$n$</span> tosses: <span class="math-container">$$R_n = (\frac{5}{4})^n$$</span> Which tends to infinity as <span class="math-container">$n$</span> tends to infinity</p> </blockquote> <hr /> <hr /> <p>Uhm, excuse me what? Who is right here and who is wrong? Why do they use different formulas? Using Wilmott's (second, corrected) formula for Joshi's situation, I get that the average return per day is:</p> <p><span class="math-container">$$ e^{0.5\ln(2) + 0.5\ln(0.5)} - 1 = 0\% $$</span></p> <p>I ran a Python simulation of this, simulating <span class="math-container">$n$</span> days/tosses/whatever, and it seems that the above is not correct. Joshi was right, the portfolio tends to infinity. Wilmott was also right, the portfolio goes to zero when I use his parameters.</p> <p>Wilmott also explicitly dismisses Joshi's approach saying:</p> <blockquote> <p>As well as being counterintuitive, this question does give a nice insight into money management and is clearly related to the Kelly criterion. 
If you see a question like this it is meant to trick you if the expected profit, here 0.6 × 0.5 + 0.4 × (−0.5) = 0.1, is positive with the expected return, here −3.34%, negative.</p> </blockquote> <p>So what is going on?</p> <p>Here is the code:</p> <pre><code>import random

def traderToss(n_tries, p_win, win_ratio, loss_ratio):
    SIM = 10**5  # Number of times to run the simulation
    ret = 0.0
    for _ in range(SIM):
        curr = 1.0  # Starting portfolio
        for _ in range(n_tries):  # number of flips/days/whatever
            if random.random() &lt; p_win:  # win with probability p_win
                curr *= win_ratio
            else:
                curr *= loss_ratio
        ret += curr  # add final portfolio value after this simulation
    print(ret / SIM)  # Print average final portfolio value (E[X])
</code></pre> <p>Use: <code>traderToss(260, 0.6, 1.5, 0.5)</code> to test Wilmott's trader scenario.</p> <p>Use: <code>traderToss(260, 0.5, 2, 0.5)</code> to test Joshi's coin flip scenario.</p> <hr /> <hr /> <p>Thanks to the followup comments from Robert Shore and Steve Kass below, I have figured out one part of the issue. <strong>Joshi's answer assumes you play once, therefore the returns would be additive and not multiplicative.</strong> His question is vague enough, using the word &quot;your portfolio&quot;, suggesting we place our returns back in for each consecutive toss.
If this were the case, we need the <a href="https://www.investopedia.com/articles/investing/071113/breaking-down-geometric-mean.asp" rel="nofollow noreferrer">geometric mean</a>, not the arithmetic mean, which is the expected value calculation he does.</p> <p>This is verifiable by changing the python simulation to:</p> <pre><code>import random

def traderToss():
    SIM = 10**5  # Number of times to run the simulation
    ret = 0.0
    for _ in range(SIM):
        if random.random() &gt; 0.5:
            curr = 2  # Our portfolio becomes 2
        else:
            curr = 0.5  # Our portfolio becomes 0.5
        ret += curr
    print(ret / SIM)  # Print average single-day return
</code></pre> <p>This yields <span class="math-container">$\approx 1.25$</span> as in the book.</p> <p>However, if returns are multiplicative, we need a different approach, which I assume is Wilmott's formula. This is where I'm stuck, because I still don't understand Wilmott's formula. Why is the end-of-day portfolio on average:</p> <p><span class="math-container">$$ R_{day} = r_1^{p_1} * r_2^{p_2} * \cdots * r_n^{p_n} $$</span></p> <p>where <span class="math-container">$r_i$</span>, <span class="math-container">$p_i$</span> are the portfolio multiplier and probability for each scenario <span class="math-container">$i$</span>, and there are <span class="math-container">$n$</span> possible scenarios? Where does this (generalized) formula come from in probability theory? This isn't a geometric mean. Then what is it?</p>
Ross Millikan
1,827
<p>The crucial thing is that Wilmott asks about the chance of making a profit, regardless of how large the profit or loss is. Joshi is asking about expected value of the portfolio. Those are very different questions. If I pay <span class="math-container">$1$</span> to bet on something and win <span class="math-container">$10$</span> with probability <span class="math-container">$\frac 15$</span> but can only play once, Wilmott says I should not. I lose <span class="math-container">$80\%$</span> of the time. Joshi says I should play, because my expected return is <span class="math-container">$2$</span>. They are asking different questions and getting different answers.</p>
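<p>Ross Millikan's distinction can be made concrete with a short sketch (my own, not from either book). It computes the binomial tail probability that the trader is ahead after 260 days, using the 165-winning-day threshold from Wilmott's calculation quoted in the question, alongside the expected final portfolio value:</p>

```python
from math import comb

# A sketch (mine): probability of being ahead vs. expected portfolio value
# for Wilmott's trader (win 50% w.p. 0.6, lose 50% w.p. 0.4, 260 days).
p, days, threshold = 0.6, 260, 165   # threshold from Wilmott's n = 164.04

# P(ahead) = P(at least 165 winning days out of 260): a binomial tail sum.
p_ahead = sum(comb(days, n) * p**n * (1 - p)**(days - n)
              for n in range(threshold, days + 1))

# E[final portfolio] factorises over independent days: (0.6*1.5 + 0.4*0.5)^260.
expected = (p * 1.5 + (1 - p) * 0.5) ** days

print(p_ahead < 0.5)   # True: the trader is usually NOT ahead...
print(expected > 1.0)  # True: ...yet the expectation is enormous (1.1^260)
```

<p>Both books can be right at once: being ahead is an unlikely event, while the expectation is driven by extremely rare, extremely large outcomes.</p>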
297,812
<p>If $a-b=b-c$, how do I find the value of $a^2-2b^2+c^2$?</p>
Peder
59,704
<p>$a-b=b-c$ means $b=(a+c)/2$; therefore $a^2-2b^2+c^2=a^2+c^2-2(\frac{a+c}{2})^2=\frac{2a^2+2c^2-(a+c)^2}{2}=\frac{a^2+c^2-2ac}{2}=\frac{(a-c)^2}{2}$</p>
297,812
<p>If $a-b=b-c$, how do I find the value of $a^2-2b^2+c^2$?</p>
Aang
33,989
<p>$a-b=b-c\implies a,b,c$ are in A.P.</p> <p>$a^2-2b^2+c^2=(a^2-b^2)-(b^2-c^2)=(a-b)(a+b)-(b-c)(c+b)=(a-b)(a+b)-(a-b)(c+b)=(a-b)(a+b-c-b)=(a-b)(a-c)$</p>
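<p>As a quick numerical spot-check (not part of either posted answer), both closed forms, $(a-c)^2/2$ and $(a-b)(a-c)$, agree with $a^2-2b^2+c^2$ on arbitrary arithmetic progressions:</p>

```python
# Spot-check the identity a^2 - 2b^2 + c^2 = (a-c)^2/2 = (a-b)(a-c)
# when a - b = b - c; the sample progressions are arbitrary.
checks = []
for a, d in [(1, 2), (3, -5), (-7, 4), (0, 0)]:   # first term and common difference
    b, c = a + d, a + 2 * d                        # so a - b = b - c = -d
    expr = a**2 - 2 * b**2 + c**2
    checks.append(expr == (a - c) ** 2 / 2 == (a - b) * (a - c))
print(all(checks))   # True
```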
2,246,025
<p>How do I solve this? $$y^\prime = y^2 -4$$ I think I am supposed to use the separable equations method and then use partial fractions.</p>
Harambe
357,206
<p>Disclaimer: technically I'm not answering your question, because I have used the word "structure" in a more general way than I believe you intended. Also, I feel like talking about some maths I love.</p> <p>I can't say much about "recently discovered", but there are two main approaches to adding "structure" to sets: an algebraic approach and an analytic approach. The former is based on defining binary operations - e.g. groups, closed under addition with a few extra properties, and fields such as $\mathbb{R}$, which have addition and multiplication as defining operations. This is what is commonly referred to as a "structure".</p> <p>However, I'm more of an analysis person, so I prefer looking at "spaces" (which is generally the term used for the analytic approach to adding structure to sets). My research is essentially studying certain subsets of the <a href="https://en.wikipedia.org/wiki/Hilbert_cube" rel="nofollow noreferrer">Hilbert cube</a>. This is basically an infinite-dimensional cube. Recently, however, I've just been focussing on subsets of the plane.</p> <p>I could never have imagined these to be so complicated. A "continuum" is defined to be a compact, connected metric space. If you don't know what these terms mean: a "metric space" is anything that takes up space - there's a notion of distance between distinct points in the set. "Connected" is what you think it means. A circle in the plane is connected, but two distinct parallel lines are not. Finally, "compactness" is difficult, but it's like the "continuous equivalent" of finiteness. There's a brilliant description <a href="https://math.stackexchange.com/questions/371928/what-should-be-the-intuition-when-working-with-compactness">here</a>.</p> <p>Anyway, "continua" are these single objects which are nicely self-enclosed. Hopefully I'm making sense.</p> <p>An obvious question is: Is it possible to classify the continua that are subsets of the plane (a.k.a. plane continua)?
A huge number of people have studied plane continua, and yet we're nowhere near any sort of classification. One class of continua that has been studied extensively is that of "indecomposable continua".</p> <p>Try and visualise a continuum. This is anything like a blob, or an elastic band, or any singular object that contains its boundary. Clearly (whatever it is you've imagined) it's possible to split it up into smaller continua, right? You can find two subcontinua of your original continuum so that their union is equal to your original continuum.</p> <p>Well, it turns out that this isn't necessarily the case. An indecomposable continuum cannot be realised as the union of any two of its proper subcontinua! The existence of such continua is already counterintuitive, but it was even shown that "most" plane continua are in fact indecomposable.</p> <p>I've talked enough now. All of this stuff is a subfield of "topology", the field of maths where mugs are no different to doughnuts. A "topological structure" basically has the least amount of structure possible while still being interesting. (I guess it's the opposite of what you asked, then!)</p>
2,246,025
<p>How do I solve this? $$y^\prime = y^2 -4$$ I think I am supposed to use the separable equations method and then use partial fractions.</p>
epi163sqrt
132,007
<p>With respect to quaternions you mention, there were some <em>exciting</em> new numbers, called <strong>periods</strong> discovered just a few years ago.</p> <blockquote> <p>The definition of periods below is from the fascinating introductory <a href="http://www.maths.ed.ac.uk/%7Eaar/papers/kontzagi.pdf" rel="nofollow noreferrer">survey paper about <em>periods</em></a> by M. Kontsevich and D. Zagier.</p> <p><strong>Periods</strong> are defined as complex numbers whose real and imaginary parts are values of absolutely convergent integrals of <em>rational functions</em> with <em>rational coefficient</em> over domains in <span class="math-container">$\mathbb{R}^n$</span> given by polynomial inequalities with <em>rational coefficients</em>.</p> </blockquote> <p>The set of periods is therefore a <em>countable</em> subset of the complex numbers. It contains the algebraic numbers, but also many of famous transcendental constants.</p> <p><em>Note:</em> One nice aspect is that the famous equality <span class="math-container">$$\zeta(2)=\frac{\pi^2}{6}$$</span> can be shown based upon the fact that <span class="math-container">$\zeta(2)$</span> and <span class="math-container">$\frac{\pi^2}{6}$</span> are periods which form a so-called <em>accessible identity</em>. This is shown in <em><a href="https://math.stackexchange.com/questions/8337/different-methods-to-compute-sum-limits-k-1-infty-frac1k2/1106665#1106665">this answer</a></em> which also is based upon the cited paper.</p>
2,010,255
<p>While finding the Taylor series of a function, <strong>when</strong> are you allowed to substitute? And <strong>why</strong>?</p> <p>For example:</p> <p>Around $x=0$ for $e^{2x}$ I apparently am allowed to substitute $u=2x$ and then use the known series for $e^u$. But for $e^{x+1}$ I am not allowed to substitute $u=x+1$.</p> <p>I know the technique for finding the Taylor series of $e^{x+1}$ around $x=0$ by taking $e^{x+1}=e\times e^x$. However, I am looking for understanding and intuition for when and why it is allowed to apply substitution.</p> <p>Note: there are several questions that are similar to this one, but I have found none that actually answers the question "why" or shows a complete proof.</p> <hr> <p>EDIT: Thanks to Markus Scheuer's answer, I should refine the question to cases where the series is finite, for example $n\to3$.</p>
epi163sqrt
132,007
<blockquote> <p>A function $f(x)$ analytic at $x=0$ can be represented as power series within an open disc with radius of convergence $R$. \begin{align*} f(x)=\sum_{n=0}^\infty a_nx^n\qquad\qquad \qquad |x|&lt;R \end{align*}</p> <p><em>Any</em> substitution $x=g(u)$ is admissible as long as we respect the <em>radius of convergence</em>. \begin{align*} f(g(u))=\sum_{n=0}^\infty a_n \left(g(u)\right)^n\qquad\qquad\quad |g(u)|&lt;R \end{align*}</p> </blockquote> <p>We know $f(u)=e^u$ can be represented as Taylor series convergent for all $u\in\mathbb{R}$, i.e. the radius of convergence $R=\infty$. \begin{align*} f(u)=e^u=\sum_{n=0}^\infty \frac{u^n}{n!}\qquad\qquad\qquad u\in \mathbb{R} \end{align*}</p> <blockquote> <p><strong>Substitution $u=2x$</strong></p> <p>We consider</p> <p>\begin{align*} f(2x)=e^{2x}=\sum_{n=0}^\infty \frac{(2x)^n}{n!}\qquad\qquad\qquad 2x\in \mathbb{R} \end{align*}</p> <p>This substitution is admissible for all $x \in \mathbb{R}$ since $$2x\in\mathbb{R}\qquad\Longleftrightarrow\qquad x\in\mathbb{R}$$ So, the radius of convergence of the Taylor series of $f(2x)=e^{2x}$ is $\infty$.</p> </blockquote> <p>We obtain \begin{align*} f(2x)=e^{2x}=\sum_{n=0}^\infty \frac{(2x)^n}{n!}\qquad\qquad\qquad x\in \mathbb{R} \end{align*}</p> <blockquote> <p><strong>Substitution $u=x+1$</strong></p> <p>We consider</p> <p>\begin{align*} f(x+1)=e^{x+1}=\sum_{n=0}^\infty \frac{(x+1)^n}{n!}\qquad\qquad\qquad x+1\in \mathbb{R} \end{align*}</p> <p>This substitution is admissible for all $x \in \mathbb{R}$ since $$x+1\in\mathbb{R}\qquad\Longleftrightarrow\qquad x\in\mathbb{R}$$ So, the radius of convergence of the Taylor series of $f(x+1)=e^{x+1}$ is $\infty$.</p> </blockquote> <p>We obtain \begin{align*} f(x+1)=e^{x+1}=\sum_{n=0}^\infty \frac{(x+1)^n}{n!}\qquad\qquad\qquad x\in \mathbb{R} \end{align*}</p> <blockquote> <p>We also obtain \begin{align*} e\cdot e^x&amp;=\left(\sum_{k=0}^\infty \frac{1}{k!}\right)\left(\sum_{l=0}^\infty \frac{x^l}{l!}\right)\\ 
&amp;=\sum_{n=0}^\infty \left(\sum_{{k+l=n}\atop{k,l\geq 0}}\frac{1}{k!}\cdot\frac{x^l}{l!}\right)\\ &amp;=\sum_{n=0}^\infty \left(\sum_{l=0}^n\frac{1}{(n-l)!}\cdot\frac{x^l}{l!}\right)\\ &amp;=\sum_{n=0}^\infty\left(\sum_{l=0}^n\binom{n}{l}x^l\right)\frac{1}{n!}\\ &amp;=\sum_{n=0}^\infty\frac{(x+1)^n}{n!}\\ &amp;=e^{x+1} \end{align*}</p> </blockquote> <p><strong>Conclusion:</strong> We can use any substitution for convenience as long as we respect the radius of convergence.</p>
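<p>Numerically, the substituted series behaves exactly as the answer says; in this sketch (mine, with an arbitrary truncation order and arbitrary sample points) the partial sums of $\sum_n (x+1)^n/n!$ match $e^{x+1}$:</p>

```python
from math import exp, factorial

N = 30   # truncation order, an arbitrary illustration choice

def series(x):
    """Partial sum of the substituted Taylor series sum_{n<N} (x+1)^n / n!."""
    return sum((x + 1) ** n / factorial(n) for n in range(N))

errors = [abs(series(x) - exp(x + 1)) for x in (-2.0, 0.0, 1.5, 3.0)]
print(max(errors) < 1e-9)   # True: 30 terms already agree to high precision
```

<p>The radius of convergence is infinite, so any real sample point works; only the number of terms needed for a given accuracy changes with $|x+1|$.</p>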
85,052
<p>A housemate of mine and I disagree on the following question:</p> <p>Let's say that we play a game of Yahtzee. Of the five dice you throw, two dice obtain the value 1, two other dice obtain the value 2, and one die shows you six dots on the top side. Given the fact that you haven't thrown a "full house" yet, you keep throwing the die whose value is 6, until you throw a one or a two. You get to throw this die once or twice. If you throw a one or a two the first time, you stop, because now you have the "full house" already. If you haven't thrown a one or a two with that die, you throw it again, hoping for a one or a two this time.</p> <p>Now, what is the probability that you throw a one or a two with the fifth die within two turns? (Given the way a rational person operates in this situation.)</p> <p>My take on this question was the following: the probability that you throw a one or a two the first time with the fifth die is $1/3$, and the probability that you don't throw a one or a two the first time, but you do throw a one or a two the second time, is $ 2/3 \cdot 1/3 = 2/9$. Adding these values gives you the probability: $1/3 + 2/9 = 3/9 + 2/9 = 5/9$.</p> <p>My housemate, however, argues that the chance to throw a one or a two the first time is $1/3$, and believes that throwing the fifth die again gives you a probability of throwing a one or a two of $1/3$ again. Adding these values gives the expected probability of throwing a full house of $1/3 + 1/3 = 2/3$.</p> <p>Who is right, my housemate or me?</p> <p>I strongly believe I am right, but even if you tell me I'm right, I might not be able to convince my housemate of the truth. He argues that my way of reasoning implies that the probability of throwing a one or a two with the fifth die the second time is smaller than throwing it the first time. Could you please also provide me with a pedagogically sound way to explain to him why the probability is $5/9$?
</p> <p>Thanks in advance</p>
2'5 9'2
11,123
<p>You are correct. And you can tell your housemate that you are using conditional probability for the second part of your calculation. <em>Given</em> that you did not complete a full house on roll 1, the probability of rolling it on the second try is $\frac{1}{3}$, just as he says. However, that scenario is only one of two that might occur. As you have written, the scenario where you miss on the first try will happen with probability $\frac{2}{3}$. That reduces the weight that we would apply to that $\frac{1}{3}$ on the second roll.</p> <p>Here's an alternative approach. Make a $6\times6$ table where the columns represent rolls on try 1 and the rows represent rolls on try 2. (Pretend that even if you succeed on try number 1, you throw again just for fun.) If you count, $20$ of these entries represent situations where you make your full house. These $36$ situations are all equally likely to occur, so the probability of eventual success is $\frac{20}{36}$, or $\frac{5}{9}$.</p> <p>Lastly, as one more response to your housemate, bring up this idea of tossing the second die just for fun even when you have already made the full house the first time. His probability of $\frac{1}{3}$ on roll 2 is counting the times that you would roll a 1 or a 2 on the second try <em>even though</em> you already succeeded on roll number 1. His probability overcounts because of this.</p> <p>Actually, that might be the most convincing argument for him. He is fine to add $\frac13+\frac13$, but then he has <em>double counted</em> those situations where you would roll a 1 or 2, and then again roll a 1 or 2. So to account for this double counting, he can correct by subtracting the chance that you succeed on both rolls: $\frac13+\frac13-\frac19$.</p>
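<p>The $6\times 6$ table and the double-counting correction from this answer can both be verified by brute force; the sketch below is mine, not the answerer's:</p>

```python
from fractions import Fraction

# Enumerate the 6x6 table: pretend the die is thrown twice even after an
# early success, and count outcomes where either throw shows a 1 or a 2.
hits = sum(1 for first in range(1, 7) for second in range(1, 7)
           if first in (1, 2) or second in (1, 2))
prob = Fraction(hits, 36)
print(prob)   # 5/9

# Inclusion-exclusion fixes the housemate's double-counted 1/3 + 1/3.
third = Fraction(1, 3)
print(third + third - third * third == prob)   # True
```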
4,453,784
<p>For a time-homogeneous Markov chain <span class="math-container">$(X_n)_{n\ge 0}$</span> with state space <span class="math-container">$I$</span> and no self-loops, given <span class="math-container">$X_0 = i \in I$</span>, define the first return time <span class="math-container">$T_i = \inf\{n\ge 1 : X_n = i\} $</span> and the first hitting time <span class="math-container">$H_i = \inf\{n\ge 0 : X_n = i\} $</span>. I want to see whether the following equalities hold:<br /> <span class="math-container">$$ P(T_i &lt; \infty | X_0 = i) = \sum_{j \in I} P(T_i &lt; \infty | X_1 = j , X_0 = i)P(X_1=j|X_0=i) $$</span> <span class="math-container">$$ = \sum_{j \in I} P(T_i &lt; \infty | X_1 = j)P(X_1=j|X_0=i) $$</span> <span class="math-container">$$ = \sum_{j \in I} P(H_i &lt; \infty |X_0 = j)P(X_1=j|X_0=i) $$</span> The first equality seems to hold because of the decomposition into disjoint sets <span class="math-container">$\{T_i&lt;\infty |\; X_0=i\} = \cup_{j\in I} \{T_i&lt;\infty |\; X_0=i\}\cap\{X_1=j\}$</span>.</p> <p>The second equality seems to hold because of the Markov property.</p> <p>The third equality seems to hold because of time homogeneity and, seemingly, <span class="math-container">$P(H_i&lt;\infty|X_0=j\neq i) = P(T_i&lt;\infty|X_0=j\neq i) $</span>, but I could not prove it.</p>
Mason
752,243
<p>I'd write like this: <span class="math-container">\begin{align} P_i(T_i &lt; \infty) &amp;= \sum_{j \in I}P_i(T_i &lt; \infty \mid X_1 = j)P_i(X_1 = j) \end{align}</span> Now, in order to use the Markov property, we write <span class="math-container">$T_i$</span> as <span class="math-container">$T_i(X_{0 + \cdot})$</span>. Note that <span class="math-container">$T_i(X_{0 + \cdot}) = H_i(X_{1 + \cdot}) + 1$</span>, so <span class="math-container">$\{T_i(X_{0 + \cdot}) &lt; \infty\} = \{H_i(X_{1 + \cdot}) &lt; \infty\}$</span>. Now we can formally apply the Markov property to get <span class="math-container">\begin{align} P_i(T_i(X_{0 + \cdot}) &lt; \infty) &amp;= \sum_{j \in I}P_i(T_i(X_{0 + \cdot}) &lt; \infty \mid X_1 = j)P_i(X_1 = j) \\ &amp;= \sum_{j \in I}P_i(H_i(X_{1 + \cdot}) &lt; \infty \mid X_1 = j)p(i,j) \\ &amp;= \sum_{j \in I}P_j(H_i(X_{0 + \cdot}) &lt; \infty)p(i, j) \\ &amp;= \sum_{j \in I}P_j(H_i &lt; \infty)p(i, j). \end{align}</span></p>
4,453,784
<p>For a time-homogeneous Markov chain <span class="math-container">$(X_n)_{n\ge 0}$</span> with state space <span class="math-container">$I$</span> and no self-loops, given <span class="math-container">$X_0 = i \in I$</span>, define the first return time <span class="math-container">$T_i = \inf\{n\ge 1 : X_n = i\} $</span> and the first hitting time <span class="math-container">$H_i = \inf\{n\ge 0 : X_n = i\} $</span>. I want to see whether the following equalities hold:<br /> <span class="math-container">$$ P(T_i &lt; \infty | X_0 = i) = \sum_{j \in I} P(T_i &lt; \infty | X_1 = j , X_0 = i)P(X_1=j|X_0=i) $$</span> <span class="math-container">$$ = \sum_{j \in I} P(T_i &lt; \infty | X_1 = j)P(X_1=j|X_0=i) $$</span> <span class="math-container">$$ = \sum_{j \in I} P(H_i &lt; \infty |X_0 = j)P(X_1=j|X_0=i) $$</span> The first equality seems to hold because of the decomposition into disjoint sets <span class="math-container">$\{T_i&lt;\infty |\; X_0=i\} = \cup_{j\in I} \{T_i&lt;\infty |\; X_0=i\}\cap\{X_1=j\}$</span>.</p> <p>The second equality seems to hold because of the Markov property.</p> <p>The third equality seems to hold because of time homogeneity and, seemingly, <span class="math-container">$P(H_i&lt;\infty|X_0=j\neq i) = P(T_i&lt;\infty|X_0=j\neq i) $</span>, but I could not prove it.</p>
Calvinfwc
897,319
<p>I think @Mason suggests writing the stopping times more explicitly in terms of random variables, so I try the following approach. Consider the expression on the RHS of the second equality in the original post; I first try to show:</p> <blockquote> <p><span class="math-container">$$ \sum_{j\in I} P(T_i &lt; \infty | X_1 = j) P(X_1=j | X_0 = i) = \sum_{j\in I} P(T_i &lt; \infty | X_0 = j) P(X_1=j | X_0 = i) $$</span></p> </blockquote> <p>The LHS is <span class="math-container">$$ \sum_{j\in I} P(T_i &lt; \infty | X_1 = j) P(X_1=j | X_0 = i) $$</span> <span class="math-container">$$ = \sum_{j\in I} \sum_{0&lt;n&lt;\infty} P(X_n = i \wedge X_k \neq i \; \forall \; 0&lt;k&lt;n | X_1 = j) P(X_1=j | X_0 = i) $$</span> and, by time homogeneity, <span class="math-container">$$ = \sum_{j\in I} \sum_{0&lt; n&lt;\infty} P(X_n = i \wedge X_k \neq i \; \forall \; 0&lt;k&lt;n | X_0= j) P(X_1=j | X_0 = i) + P(X_0 = i| X_0= j) P(X_1=j | X_0 = i) $$</span> <span class="math-container">$$ = \sum_{j\in I} \sum_{0&lt; n&lt;\infty} P(X_n = i \wedge X_k \neq i \; \forall \; 0&lt;k&lt;n | X_0= j) P(X_1=j | X_0 = i) $$</span> <span class="math-container">$$ = \sum_{j\in I} P(T_i &lt; \infty | X_0 = j) P(X_1=j | X_0 = i) $$</span> Then I try to show, for a fixed <span class="math-container">$j$</span>:</p> <blockquote> <p><span class="math-container">$$ P(T_i&lt;\infty|X_0=j\neq i) = P(H_i&lt;\infty|X_0=j\neq i) $$</span></p> </blockquote> <p>The above is equivalent to (if <span class="math-container">$P(X_0 = j \neq i) \neq 0$</span>) <span class="math-container">$$ P( \color {blue} {H_i&lt;\infty} \wedge \color {orange} {X_0=j\neq i }) = P( \color {green} {T_i&lt;\infty} \wedge \color {orange} {X_0=j\neq i }) \tag{1} $$</span> Suppose <span class="math-container">$X : \Omega \to \mathbb{R}$</span> and <span class="math-container">$\omega = (\omega_n)_{n\ge 0} \in \Omega$</span> are such that <span class="math-container">$X_i(\omega) = X_i(\omega_i)$</span>.</p> <p>Therefore, in <span class="math-container">$(1)$</span> <span class="math-container">$$ P( \color {blue} {H_i&lt;\infty} \wedge \color {orange} {X_0=j\neq i }) $$</span> <span class="math-container">$$ = P \color {blue} {\bigcup_{0&lt;n&lt;\infty} \{\omega : X(\omega_n) = i \wedge X(\omega_k) \neq i \; \forall \; k &lt; n \} \bigcup \{\omega : X(\omega_0) = i\}} \bigcap \color {orange} {\{\omega : X(\omega_0) = j\neq i\} } $$</span></p> <p><span class="math-container">$$ = P \bigcup_{0&lt;n&lt;\infty} \{\omega : X(\omega_n) = i \wedge X(\omega_k) \neq i \; \forall \; k &lt; n \} \bigcap \color {orange} {\{\omega : X(\omega_0) = j\neq i\} } $$</span> <span class="math-container">$$ = P \color {green} {\bigcup_{0&lt;n&lt;\infty} \{\omega : X(\omega_n) = i \wedge X(\omega_k) \neq i \; \forall \; 0&lt;k &lt; n \} } \bigcap \color {orange} {\{\omega : X(\omega_0) = j\neq i\} } $$</span> <span class="math-container">$$ = P( \color {green} {T_i&lt;\infty} \wedge \color {orange} {X_0=j\neq i }) $$</span></p> <p>Because there are no self-loops, it should always be true that &quot;<span class="math-container">$j\neq i$</span>&quot; in our problem.</p>
623,819
<p>I do not understand a remark in Adams' Calculus (page 628 <span class="math-container">$7^{th}$</span> edition). This remark is about the derivative of a determinant whose entries are functions as quoted below.</p> <blockquote> <p>Since every term in the expansion of a determinant of any order is a product involving one element from each row, the general product rule implies that the derivative of an <span class="math-container">$n\times n$</span> determinant whose elements are functions will be the sum of <span class="math-container">$n$</span> such <span class="math-container">$n\times n$</span> determinants, each with the elements of one of the rows differentiated. For the <span class="math-container">$3\times 3$</span> case we have <span class="math-container">$$\frac{d}{dt}\begin{vmatrix} a_{11}(t) &amp; a_{12}(t) &amp; a_{13}(t) \\ a_{21}(t) &amp; a_{22}(t) &amp; a_{23}(t) \\ a_{31}(t) &amp; a_{32}(t) &amp; a_{33}(t) \end{vmatrix}=\begin{vmatrix} a'_{11}(t) &amp; a'_{12}(t) &amp; a'_{13}(t) \\ a_{21}(t) &amp; a_{22}(t) &amp; a_{23}(t) \\ a_{31}(t) &amp; a_{32}(t) &amp; a_{33}(t) \end{vmatrix}+\begin{vmatrix} a_{11}(t) &amp; a_{12}(t) &amp; a_{13}(t) \\ a'_{21}(t) &amp; a'_{22}(t) &amp; a'_{23}(t) \\ a_{31}(t) &amp; a_{32}(t) &amp; a_{33}(t) \end{vmatrix}+\begin{vmatrix} a_{11}(t) &amp; a_{12}(t) &amp; a_{13}(t) \\ a_{21}(t) &amp; a_{22}(t) &amp; a_{23}(t) \\ a'_{31}(t) &amp; a'_{32}(t) &amp; a'_{33}(t) \end{vmatrix}.$$</span></p> </blockquote> <hr /> <p>It is not difficult to check this equality by simply expanding both sides. However, the remark sounds like using some clever trick to get this result. Can anyone explain it to me, please? Thank you!</p>
Toan Nguyen Dinh
118,406
<p>That remark says most of what needs to be said. However, I think a more precise explanation of the example is helpful, so I'll spell one out. <br> i) $a_{11}(t)\,a_{23}(t)\,a_{32}(t)$ is an arbitrary term in the expansion of the determinant on the left.<br> ii) $(a_{11}(t)\,a_{23}(t)\,a_{32}(t))' = a'_{11}(t)a_{23}(t)a_{32}(t)+a_{11}(t)a'_{23}(t)a_{32}(t)+a_{11}(t)a_{23}(t)a'_{32}(t)$ by the product rule.<br> iii) $a'_{11}(t)a_{23}(t)a_{32}(t)$ is a term in the expansion of the first determinant on the right of the equation, the one in which the first row is differentiated.<br> $a_{11}(t)a'_{23}(t)a_{32}(t)$ and $a_{11}(t)a_{23}(t)a'_{32}(t)$ are similar.</p>
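<p>The row-by-row rule can also be sanity-checked numerically. In the sketch below (my own; the matrix of functions is chosen arbitrarily), a central finite difference of the determinant is compared against the sum of the three row-differentiated determinants:</p>

```python
from math import sin, cos, exp

def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def A(t):        # an arbitrary matrix of differentiable functions
    return [[t, t * t, 1.0],
            [sin(t), cos(t), t],
            [exp(t), 2 * t, 3.0]]

def Aprime(t):   # its entrywise derivative, computed by hand
    return [[1.0, 2 * t, 0.0],
            [cos(t), -sin(t), 1.0],
            [exp(t), 2.0, 0.0]]

t, h = 0.7, 1e-6
numeric = (det3(A(t + h)) - det3(A(t - h))) / (2 * h)   # central difference
rows, drows = A(t), Aprime(t)
# Sum of three determinants, each with exactly one row differentiated.
analytic = sum(det3([drows[r] if r == k else rows[r] for r in range(3)])
               for k in range(3))
print(abs(numeric - analytic) < 1e-5)   # True
```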
623,819
<p>I do not understand a remark in Adams' Calculus (page 628 <span class="math-container">$7^{th}$</span> edition). This remark is about the derivative of a determinant whose entries are functions as quoted below.</p> <blockquote> <p>Since every term in the expansion of a determinant of any order is a product involving one element from each row, the general product rule implies that the derivative of an <span class="math-container">$n\times n$</span> determinant whose elements are functions will be the sum of <span class="math-container">$n$</span> such <span class="math-container">$n\times n$</span> determinants, each with the elements of one of the rows differentiated. For the <span class="math-container">$3\times 3$</span> case we have <span class="math-container">$$\frac{d}{dt}\begin{vmatrix} a_{11}(t) &amp; a_{12}(t) &amp; a_{13}(t) \\ a_{21}(t) &amp; a_{22}(t) &amp; a_{23}(t) \\ a_{31}(t) &amp; a_{32}(t) &amp; a_{33}(t) \end{vmatrix}=\begin{vmatrix} a'_{11}(t) &amp; a'_{12}(t) &amp; a'_{13}(t) \\ a_{21}(t) &amp; a_{22}(t) &amp; a_{23}(t) \\ a_{31}(t) &amp; a_{32}(t) &amp; a_{33}(t) \end{vmatrix}+\begin{vmatrix} a_{11}(t) &amp; a_{12}(t) &amp; a_{13}(t) \\ a'_{21}(t) &amp; a'_{22}(t) &amp; a'_{23}(t) \\ a_{31}(t) &amp; a_{32}(t) &amp; a_{33}(t) \end{vmatrix}+\begin{vmatrix} a_{11}(t) &amp; a_{12}(t) &amp; a_{13}(t) \\ a_{21}(t) &amp; a_{22}(t) &amp; a_{23}(t) \\ a'_{31}(t) &amp; a'_{32}(t) &amp; a'_{33}(t) \end{vmatrix}.$$</span></p> </blockquote> <hr /> <p>It is not difficult to check this equality by simply expanding both sides. However, the remark sounds like using some clever trick to get this result. Can anyone explain it to me, please? Thank you!</p>
pikunsia
178,025
<p>If an $n\times n$ matrix $A(t)=A_{ij}(t)$ is differentiable, then $d(\det A(t))/dt$ can be computed as follows (for brevity, we shall take the particular case $n=3$). The determinant of a matrix $A(t)=A_{ij}(t)$ is given by $$\det(A_{ij}(t))=e_{ijk}A_{i1}(t)A_{j2}(t)A_{k3}(t),$$ where $e_{ijk}$ is the Cartesian alternator. Let us take the derivative of this expression with respect to the parameter $t$, $$\frac{d}{dt}\det(A_{ij}(t))=e_{ijk}(A_{i1}(t)_{,t}A_{j2}(t)A_{k3}(t)+A_{i1}(t)A_{j2}(t)_{,t}A_{k3}(t)+A_{i1}(t)A_{j2}(t)A_{k3}(t)_{,t}),$$ where "${}_{,t}$" stands for differentiation. Let us now consider the quantity $$C_{k3}(t)=e_{ijk}A_{i1}(t)A_{j2}(t)=\frac{1}{2}(e_{ijk}A_{i1}(t)A_{j2}(t)+e_{jik}A_{j1}(t)A_{i2}(t))=\frac{1}{2}e_{ijk}(A_{i1}(t)A_{j2}(t)-A_{i2}(t)A_{j1}(t)),$$ which is the same as $$C_{k3}(t)=\frac{1}{2}e_{ijk}e_{lm3}A_{il}(t)A_{jm}(t),$$ and in general, $$C_{kn}(t)=\frac{1}{2}e_{ijk}e_{lmn}A_{il}(t)A_{jm}(t).$$ This is the so-called <em>adjugate</em> ($\mathrm{adj}$) of the matrix $A(t)$ (notice that this formula is also useful for finding the inverse of $A(t)$ whenever it is nonsingular). Thus, the sum for $\frac{d}{dt}\det(A_{ij}(t))$ above may be expressed as $$\frac{d}{dt}\det(A_{ij}(t))=\frac{1}{2}e_{ijk}e_{lmn}A_{il}(t)A_{jm}(t)A_{kn}(t)_{,t}=C_{kn}(t)A_{kn}(t)_{,t},$$ which is clearly the "trace" $\mathrm{tr}$ of this product. So, in matrix notation, we have $$\frac{d(\det A(t))}{dt}=\mathrm{tr}\left(\mathrm{adj}(A(t))\frac{d(A(t))}{dt}\right).$$ Indeed this formula is valid for any finite dimension $n$, and its proof is a little more involved. We have presented the case $n=3$ here for brevity. Also note that $C_{kn}^T=C_{nk}$ is a cofactor of the matrix $A(t).$</p>
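<p>The final matrix identity $\frac{d}{dt}\det A = \mathrm{tr}\big(\mathrm{adj}(A)\,\frac{dA}{dt}\big)$ is easy to check numerically in the $2\times 2$ case, where the adjugate is explicit; the example matrix below is an arbitrary choice of mine:</p>

```python
from math import sin, cos

def A(t):       # arbitrary 2x2 matrix of functions
    return [[sin(t), t], [t * t, cos(t)]]

def Aprime(t):  # its entrywise derivative, by hand
    return [[cos(t), 1.0], [2 * t, -sin(t)]]

det = lambda m: m[0][0] * m[1][1] - m[0][1] * m[1][0]
adj = lambda m: [[m[1][1], -m[0][1]], [-m[1][0], m[0][0]]]  # 2x2 adjugate

t, h = 1.3, 1e-6
numeric = (det(A(t + h)) - det(A(t - h))) / (2 * h)         # central difference
Ad, Ap = adj(A(t)), Aprime(t)
analytic = sum(Ad[i][k] * Ap[k][i] for i in range(2) for k in range(2))  # tr(adj(A) A')
print(abs(numeric - analytic) < 1e-6)   # True
```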
2,090,512
<p>You can calculate the <strong>volume of a parallelepiped</strong> by $|(A \times B) \cdot C|$, where $A$, $B$ and $C$ are vectors. I wonder does the order matter? If it does how, is it determined. I know I can just put it in a matrix and calculate the determinant but I would like to know how it is in this case. </p> <p>Thanks!</p>
TheGeekGreek
359,887
<p>We have $a \times b = -(b \times a)$, so swapping the first two vectors changes the sign of $(A \times B) \cdot C$; the absolute value is unaffected. The (real) standard inner product is symmetric, so the order in the dot product does not matter. The reason the cross product changes its sign when you permute its arguments is that, like the determinant, it is an alternating mapping (this is immediate from its definition, as the components of $a \times b$ are just determinants of $2\times 2$ matrices).</p>
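<p>A tiny sketch (mine) makes the sign behaviour concrete; the three vectors are arbitrary:</p>

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A, B, C = (1, 2, 3), (0, 1, 4), (2, 0, 1)   # arbitrary vectors
vol1 = dot(cross(A, B), C)
vol2 = dot(cross(B, A), C)
print(vol1 == -vol2)           # True: a x b = -(b x a) flips the sign
print(abs(vol1) == abs(vol2))  # True: the volume |(A x B) . C| is unchanged
```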
4,298,184
<p><a href="https://i.stack.imgur.com/f5ny0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/f5ny0.png" alt="enter image description here" /></a> As we can see, we are supposed to use stars and bars with n = 10 and r = 3. But what I don't understand is why we use stars and bars here: stars and bars is supposed to be used in situations where we try to group 10 objects into 3 distinct groups, whereas the question asks for the number of ways we can get 3 integers that have a sum of 10. Any help?</p>
Mina
777,827
<p>You can assume that you have 10 balls and 2 sticks. The different ways you put these sticks between these balls divide your balls into three groups. The number of balls in the first group is <span class="math-container">$x_1$</span>, the number of balls in the second group is <span class="math-container">$x_2$</span>, and the number of balls in the third one is <span class="math-container">$x_3$</span>. I hope this makes it clear.</p>
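<p>Brute force agrees with the stars-and-bars count; the sketch below (mine) assumes the integers are nonnegative, since with 10 balls and 2 sticks a group may be empty:</p>

```python
from math import comb
from itertools import product

n, r = 10, 3
# Count nonnegative integer solutions of x1 + x2 + x3 = 10 directly...
brute = sum(1 for xs in product(range(n + 1), repeat=r) if sum(xs) == n)
# ...and via stars and bars: C(n + r - 1, r - 1) placements of the 2 sticks.
formula = comb(n + r - 1, r - 1)
print(brute, formula)   # 66 66
```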
3,263,076
<p>Let <span class="math-container">$\Gamma\subset PSL_2(\mathbb{R})$</span> be a cofinite Fuchsian group (e.g. a Fuchsian group with finite fundamental domain). Does <span class="math-container">$\Gamma$</span> necessarily contain a hyperbolic element? </p> <p>At first, I tried to use the fact that <span class="math-container">$tr(\gamma)&gt;2$</span> if <span class="math-container">$\gamma\in \Gamma$</span> is hyperbolic, but I failed at this. (Which does not mean it is not possible and if it were, I would appreciate the simplicity of this approach.)</p> <p>Now, I thought one could use the following two facts</p> <ul> <li><p>A non-elementary Fuchsian group (the orbit <span class="math-container">$\Gamma z $</span> is infinite for all <span class="math-container">$z\in \mathbb{H}$</span>) must contain a hyperbolic element.</p></li> <li><p>A Fuchsian group is elementary if it is either cyclic or generated by the Moebius transformations <span class="math-container">$g(z)=kz$</span> and <span class="math-container">$h(z)=-\frac{1}{z}$</span></p></li> </ul> <p>If I wanted to use the above, I would need to show that the fundamental domain of both a cyclic group and the one generated by <span class="math-container">$g$</span> and <span class="math-container">$h$</span> are finite, I assume. However, I fail with that. Maybe somebody can help me there?</p> <p>I appreciate any help - so if my thoughts are leading in the wrong direction, I am very happy to check out a new approach!</p>
Sunny
304,150
<p>Only a partial answer. Suppose <span class="math-container">$n \geq 3$</span> is the characteristic of the ring. Then <span class="math-container">$n.1 = 0$</span> implies that <span class="math-container">$ 1 = -(n-1).1 = -(n-2).1 + (-1).1$</span>, since <span class="math-container">$-1$</span> is a unit. If <span class="math-container">$-(n-2).1$</span> is not a unit, then by the uniqueness we get <span class="math-container">$-(n-2).1 = 0$</span>, which contradicts that <span class="math-container">$n$</span> is the characteristic. So <span class="math-container">$-(n-2).1$</span> must be a unit. Since <span class="math-container">$(n-2).1 + 2.1 = 0$</span>, this implies that <span class="math-container">$2.1$</span> is a unit. Hence the characteristic cannot be <span class="math-container">$2$</span>; in fact, it cannot be any even number: if <span class="math-container">$n = 2m$</span>, then <span class="math-container">$(2.1)(m.1) = n.1 = 0$</span>, and multiplying by the inverse of <span class="math-container">$2.1$</span> gives <span class="math-container">$m.1 = 0$</span>, contradicting the minimality of <span class="math-container">$n$</span>. So we have proved that such <span class="math-container">$n$</span> must be odd.</p>
70,801
<p>I am asked to find how many $k$-dimensional subspaces there are in a vector space $V$ over $\mathbb F_p$, $\dim V = n$.</p> <p>My attempt: 1) Let's find the total number of elements in $V$: assume that $\{v_1, v_2, \cdots, v_n\}$ is a basis of $V$. Then, for every $v \in V$ we can write $$ v = a_1 v_1 + a_2 v_2 + \cdots + a_n v_n $$ and since the coordinates ($a_1, \cdots, a_n$) are from $\mathbb F_p$ there are $p^n$ vectors in $V$; $p^n-1$ without the zero vector.</p> <p>2) Let's look at the situation where $k=1$. Let's call this 1-dimensional space $V&#39;$. $$\forall v&#39; \in V&#39;. v&#39; = a_1 v_1$$ where $v_1$ is a basis of $V&#39;$. We know that two non-zero vectors $u \in V&#39;_1$ and $v \in V&#39;_2$ from distinct 1-dimensional subspaces are not linearly dependent. So, every 1-dimensional subspace has $(p-1)$ bases. Therefore, there are $\frac{p^n - 1}{p-1}$ possible 1-dimensional subspaces in $V$.</p> <p>3) A $k$-dimensional subspace is determined by its set of bases. Since a basis cannot contain the zero vector, we can write down the formula for selecting $k$ linearly independent vectors: $C^k_m (p-1)^k$, where $m = \frac{p^n - 1}{p-1}$. Here we first choose $k$ 1-dimensional subspaces and then we choose one of the $(p-1)$ non-zero vectors from each of those subspaces.</p> <p>4) .. unfortunately, this is where I am stuck. My intuition says that the answer may be $\frac{p^n - 1}{(p-1)^k}$, but this might be completely wrong and I don't know how to go about finishing the problem.</p> <p>Thanks in advance.</p>
Florian
1,609
<p>Here is a hint: Find a formula for the number of ways to choose $k$ linearly independent vectors in $\mathbb{F}_p ^n$ (where order matters). Each of these choices serves as a basis for a $k$-dimensional subspace, but each subspace has several bases, so you have to divide by the number of bases of each subspace - and computing this number involves essentially the same formula as above.</p>
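<p><em>(Editor's aside, not part of the original hint.)</em> The counting the hint describes can be sketched in Python: multiply the number of choices for each successive independent vector, divide by the same count for one fixed <span class="math-container">$k$</span>-dimensional space, and cross-check by brute force in the small case <span class="math-container">$p=2$</span>, <span class="math-container">$n=3$</span>, <span class="math-container">$k=2$</span>.</p>

```python
from itertools import product

def num_subspaces(p, n, k):
    # (# ordered independent k-tuples in F_p^n) / (# ordered bases of one k-dim space)
    num = den = 1
    for i in range(k):
        num *= p**n - p**i   # choices for the (i+1)-th independent vector
        den *= p**k - p**i   # ordered bases of a fixed k-dimensional subspace
    return num // den

# Brute force over F_2^3: over F_2, two distinct nonzero vectors are independent,
# and their span is exactly {0, u, w, u + w}.
vecs = [v for v in product(range(2), repeat=3) if any(v)]
spans = {frozenset({(0, 0, 0), u, w, tuple((a + b) % 2 for a, b in zip(u, w))})
         for u, w in product(vecs, repeat=2) if u != w}
assert len(spans) == num_subspaces(2, 3, 2) == 7
```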
1,765,022
<p>The problem is:</p> <blockquote> <p>Prove that $|\sin^2 (x)-\sin^2 (y)|\le |x-y|$ for all $x,y&gt;0$.</p> </blockquote> <p><em>My work:</em> $$\sin^2 (x)\le|\sin x|\le|x|\le|x-y|+|y|$$ and so $$|\sin^2 (x)-\sin^2 (y)|\le |x-y|+|y|,$$ but this is not the result I want. I think I have made a mistake somewhere. Please help me. Thank you in advance.</p>
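<p><em>(Editor's aside, not part of the original post.)</em> One standard route, sketched here for reference, uses the product identity <span class="math-container">$\sin^2 x-\sin^2 y=\sin(x+y)\sin(x-y)$</span> together with <span class="math-container">$|\sin t|\le\min(1,|t|)$</span>:</p>

```latex
\begin{align*}
|\sin^2 x - \sin^2 y| &= |\sin(x+y)|\,|\sin(x-y)| \le 1\cdot|x-y| = |x-y|.
\end{align*}
```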
Michael Hardy
11,667
<p>The integral $\displaystyle\int_0^R\int_0^R\cdots \,dx\,dy$ is over a <b>square</b>, $[0,R]^2$.</p> <p>But the integral $\displaystyle \int_0^\text{something} \int_0^R \cdots \, r \, dr \, d\theta$ is over a <b>sector of a circle</b>, with an arc of a circle as a part of its boundary.</p>
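<p><em>(Editor's aside, not part of the original answer.)</em> A numeric illustration of the two regions, using the constant integrand <span class="math-container">$f\equiv 1$</span>: over the square the integral is <span class="math-container">$R^2$</span>, while over the quarter-disk sector (with the <span class="math-container">$r\,dr\,d\theta$</span> element) it is <span class="math-container">$\pi R^2/4$</span>.</p>

```python
from math import pi

R = 2.0
n = 400  # subdivisions per axis for a midpoint Riemann sum

# Square [0, R]^2 in Cartesian coordinates: integral of 1 dx dy = R^2
dx = R / n
square = sum(dx * dx for _ in range(n) for _ in range(n))

# Quarter disk in polar coordinates: integral of r dr dtheta = pi R^2 / 4
dr, dth = R / n, (pi / 2) / n
sector = sum((i + 0.5) * dr * dr * dth for i in range(n) for _ in range(n))

assert abs(square - R**2) < 1e-6
assert abs(sector - pi * R**2 / 4) < 1e-6
```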
3,033,812
<p>My problem: If there are 5 different candies in a jar and a child wants to take out one or more candies, how many ways can this be done? </p> <p>I said it is <span class="math-container">$^5C_1 -\; ^5C_0 = 5-1 = 4$</span> ways; the <span class="math-container">$-1$</span> accounts for the unwanted case, using this trick:</p> <p>At least/at most = total number of combinations - unwanted cases</p> <p>But according to my answer sheet, <span class="math-container">$2^5 -1$</span> is the answer.</p> <p>So my question is: in what situations should I use exponents, and what impact do they have? </p>
Michael Burr
86,421
<p>Adding to the other answers:</p> <p><span class="math-container">$^5C_1-\, ^5C_0$</span> doesn't make sense for the following reasons:</p> <ul> <li><p>The problem states that the child takes one <em>or more</em> candies. None of these quantities in the expression includes data about the case where more than one candy is taken.</p></li> <li><p><span class="math-container">$^5C_1$</span> gives the number of ways to select one candy. On the other hand, <span class="math-container">$^5C_0$</span> is the number of ways to select <span class="math-container">$0$</span> candies. It doesn't make sense to subtract these two because they correspond to different events.</p></li> </ul>
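<p><em>(Editor's aside, not part of the original answer.)</em> A brute-force enumeration confirms the answer sheet: every nonempty subset of the 5 distinct candies is a valid selection.</p>

```python
from itertools import combinations

candies = ["a", "b", "c", "d", "e"]

# count every nonempty subset directly ...
nonempty = sum(1 for k in range(1, 6) for _ in combinations(candies, k))

# ... and compare with the closed form 2^5 - 1
assert nonempty == 2**5 - 1 == 31
```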
65,304
<p>I have a plane curve $C$ described by parametric equations $x(t)$ and $y(t)$ and a function $f: \mathbb{R}^2 \rightarrow \mathbb{R}$. The line integral of $f$ along $C$ is the area of the "fence" whose path is governed by $C$ and height is governed by $f$.</p> <p><img src="https://i.stack.imgur.com/4rmZy.png" alt="enter image description here"></p> <p>How can I generate a picture of the "fence" in Mathematica?</p> <p>For the sake of a concrete example, let's borrow from Stewart (since I already borrowed his picture). For $0 \leq t \leq \pi$, define $$ \begin{align*} x(t) &amp;= \cos t\\ y(t) &amp;= \sin t\\ f(x,y) &amp;= 2 + x^2y \end{align*} $$ so that $$ \begin{align*} f(x(t),y(t)) &amp;= 2 + \cos^2 t \sin t. \end{align*} $$</p>
Sander
14,625
<pre><code> ListPointPlot3D[ Table[{Cos[t], Sin[t], 2 + Sin[t] Cos[t]^2} ,{t, 0, π, 0.01}] , Filling -&gt; 0] </code></pre> <p><img src="https://i.stack.imgur.com/Ey0fO.png" alt="Mathematica graphics"></p>
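<p><em>(Editor's aside, not part of the original answer.)</em> As a cross-check of the picture, the fence's area can be computed numerically: since <span class="math-container">$|r'(t)|=1$</span> on the unit circle, <span class="math-container">$\int_C f\,ds=\int_0^\pi (2+\cos^2 t\,\sin t)\,dt=2\pi+\tfrac{2}{3}$</span>.</p>

```python
from math import cos, sin, pi

def f(x, y):
    return 2 + x**2 * y

def line_integral(n=100000):
    # midpoint rule for the arc-length integral; |r'(t)| = 1 on the unit circle
    h = pi / n
    return sum(f(cos((i + 0.5) * h), sin((i + 0.5) * h)) * h for i in range(n))

assert abs(line_integral() - (2 * pi + 2 / 3)) < 1e-7
```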
1,167,880
<p>Given $a = [-5, 8, 1]$ and $b = [2, -7, -3]$, find a vector $c$ such that $a \cdot (b × c) = 0$</p> <p>I don't know how to get it, I've been looking for examples, but I don't know..</p>
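<p><em>(Editor's aside, not part of the original question.)</em> One observation worth verifying numerically: <span class="math-container">$a \cdot (b \times c)$</span> is the scalar triple product, which vanishes whenever the three vectors are coplanar - so any <span class="math-container">$c$</span> in the span of <span class="math-container">$a$</span> and <span class="math-container">$b$</span> works, e.g. <span class="math-container">$c = a$</span>.</p>

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a = (-5, 8, 1)
b = (2, -7, -3)

# c = a makes a, b, c coplanar, so the triple product vanishes
c = a
assert dot(a, cross(b, c)) == 0
```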
Jason Knapp
8,454
<p>What you said is exactly right, that $\delta$ can be selected independently on bounded intervals but not on unbounded ones. You have the picture from your link to consider, but if you want more, what you should think about is this - for $x^2$ the $\delta$ you need to use becomes smaller (tending to $0$) based on the size of the interval. For a bounded interval this means you can pick a definite $\delta$, but for an unbounded one you would need "$\delta = 0$", which of course doesn't work.</p> <p>I'm sure there is a more elegant way to see this, but it can be observed from calculus. $$|x^2-x_0^2| = |x-x_0||x+x_0|$$ So, by the mean value theorem there is a $c$ between $x$ and $x_0$ obeying $$|2c| = \frac{|x^2 - x_0^2|}{|x-x_0|} = |x+x_0|$$ Suppose we had a way to pick $\delta$ based on $\epsilon$ so that $|x-x_0| &lt; \delta$ ensures $|x^2-x_0^2| &lt; \epsilon$ for any $x_0$. Well, then we would have $$2|c||x-x_0| = |x+x_0||x-x_0| = |x^2 - x_0^2| &lt; \epsilon$$ But this means $$\delta = |x-x_0| &lt; \frac{\epsilon}{2|c|}$$ And as $x_0 \to \infty$ the right side goes to $0$, since $c$ lives near $x_0$.</p>
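<p><em>(Editor's aside, not part of the original answer.)</em> The shrinking <span class="math-container">$\delta$</span> can be made quantitative: for <span class="math-container">$x, x_0 \ge 0$</span> the largest workable <span class="math-container">$\delta$</span> at <span class="math-container">$x_0$</span> solves <span class="math-container">$2x_0\delta+\delta^2=\epsilon$</span>, and it decays like <span class="math-container">$\epsilon/(2x_0)$</span> - exactly the <span class="math-container">$\epsilon/(2|c|)$</span> behavior above.</p>

```python
from math import sqrt

eps = 0.01

def best_delta(x0):
    # largest delta with |x^2 - x0^2| <= eps for all x in [x0 - delta, x0 + delta],
    # x0 >= 0: the worst case is x = x0 + delta, so solve 2*x0*delta + delta^2 = eps
    return -x0 + sqrt(x0 * x0 + eps)

deltas = [best_delta(x0) for x0 in (1, 10, 100, 1000)]

# the workable delta strictly shrinks toward 0 as x0 grows ...
assert all(d1 > d2 for d1, d2 in zip(deltas, deltas[1:]))
# ... roughly like eps / (2 * x0), as the mean value theorem argument predicts
assert abs(best_delta(1000) - eps / 2000) < 1e-8
```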
3,452,707
<p>It is well known that <span class="math-container">$\sum_{k=0}^n{n\choose k} =2^n$</span>.</p> <p><strong>My question:</strong> If <span class="math-container">$z$</span> is the limit point of an infinite sequence of real numbers <span class="math-container">$\{ a_n \}$</span>, then does <span class="math-container">$$\frac{{n \choose 1} a_1 + {n \choose 2} a_2+ \cdots+ {n \choose n} a_n}{2^n}$$</span> converge to <span class="math-container">$z$</span> as <span class="math-container">$n\ \to \infty$</span>?</p>
QC_QAOA
364,346
<p>Let <span class="math-container">$\epsilon&gt;0$</span> and define <span class="math-container">$b_n=a_n-z$</span>. Since <span class="math-container">$a_n\to z$</span> there exists <span class="math-container">$N$</span> such that for all <span class="math-container">$n\geq N$</span>, <span class="math-container">$|b_n|&lt;\epsilon$</span>. Now, consider the limit</p> <p><span class="math-container">$$\lim_{n\to\infty}\frac{1}{2^n}\sum_{i=1}^n \binom{n}{i}a_i=\lim_{n\to\infty}\frac{1}{2^n}\sum_{i=1}^n \binom{n}{i}(b_i+z)$$</span></p> <p><span class="math-container">$$=\lim_{n\to\infty}\frac{1}{2^n}\sum_{i=1}^n \binom{n}{i}b_i+\lim_{n\to\infty}\frac{1}{2^n}\sum_{i=1}^n \binom{n}{i}z=z+\lim_{n\to\infty}\frac{1}{2^n}\sum_{i=1}^n \binom{n}{i}b_i.$$</span></p> <p>We may simplify the problem and only consider the remaining limit. Note that proving the original statement is equivalent to proving </p> <p><span class="math-container">$$\lim_{n\to\infty}\frac{1}{2^n}\sum_{i=1}^n \binom{n}{i}b_i=0.$$</span></p> <p>For <span class="math-container">$n\geq N$</span> split the sum into two parts, <span class="math-container">$1\leq i&lt;N$</span> and <span class="math-container">$i\geq N$</span>. We have</p> <p><span class="math-container">$$\lim_{n\to\infty}\frac{1}{2^n}\sum_{i=1}^n \binom{n}{i}b_i=\lim_{n\to\infty}\left(\frac{1}{2^n}\sum_{i=1}^{N-1} \binom{n}{i}b_i+\frac{1}{2^n}\sum_{i=N}^{n} \binom{n}{i}b_i\right)$$</span></p> <p><span class="math-container">$$=\lim_{n\to\infty}\frac{1}{2^n}\sum_{i=1}^{N-1} \binom{n}{i}b_i+\lim_{n\to\infty}\frac{1}{2^n}\sum_{i=N}^{n} \binom{n}{i}b_i.$$</span></p> <p>Now, the first limit is clearly zero as <span class="math-container">$\binom{n}{i}=O(n^i)$</span> which implies the numerator is some polynomial in <span class="math-container">$n$</span> while the denominator is an exponential in <span class="math-container">$n$</span>. 
Thus, our problem is distilled down to showing </p> <p><span class="math-container">$$\lim_{n\to\infty}\frac{1}{2^n}\sum_{i=N}^{n} \binom{n}{i}b_i=0.$$</span></p> <p>To do this, note that</p> <p><span class="math-container">$$\left|\frac{1}{2^n}\sum_{i=N}^{n} \binom{n}{i}b_i\right|\leq \frac{1}{2^n}\sum_{i=N}^{n} \binom{n}{i}|b_i|&lt;\frac{1}{2^n}\sum_{i=N}^{n} \binom{n}{i}\epsilon\leq \frac{\epsilon}{2^n}\sum_{i=1}^{n} \binom{n}{i}&lt;\epsilon.$$</span></p> <p>Thus,</p> <p><span class="math-container">$$\frac{1}{2^n}\sum_{i=N}^{n} \binom{n}{i}b_i$$</span></p> <p>is bounded in absolute value by every positive number and hence is <span class="math-container">$0$</span> in the limit. We conclude </p> <p><span class="math-container">$$\lim_{n\to\infty}\frac{1}{2^n}\sum_{i=1}^n \binom{n}{i}a_i=z.$$</span></p>
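<p><em>(Editor's aside, not part of the original answer.)</em> A numeric sanity check of the conclusion, with the illustrative choice <span class="math-container">$a_n = z + 1/n$</span> and <span class="math-container">$z = 3$</span>:</p>

```python
from math import comb

z = 3.0
a = [z + 1.0 / i for i in range(1, 201)]  # a_n -> z

def binomial_average(n):
    # (1 / 2^n) * sum_{i=1}^{n} C(n, i) * a_i
    return sum(comb(n, i) * a[i - 1] for i in range(1, n + 1)) / 2**n

errors = [abs(binomial_average(n) - z) for n in (10, 50, 200)]

# the binomial-weighted averages approach z as n grows
assert errors[0] > errors[1] > errors[2]
assert errors[-1] < 0.02
```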
148,972
<p>I am working with solving a linear system whose matrix is tridiagonal. In order to speed up the process for large matrices, I want to use sparse matrices. My problem is that the values are not constant along the bands, but change based on their horizontal position in the matrix (x position, if you will). For instance I want to do:</p> <pre><code>A = SparseArray[{Band[{1, 1}] -&gt; func1[0, j], Band[{1, 2}] -&gt; func2[0, j], Band[{2, 1}] -&gt; func3[0, j]}, {10, 10}]; </code></pre> <p>where j indexes the x position. Is there a good way to do this?</p>
kglr
125
<pre><code>func1[a_, b_] := a + b; func2[a_, b_] := 1 + a + b; func3[a_, b_] := a + 2 b; n = 10; sa = SparseArray[{Band[{1, 1}] -&gt; (func1[0, #] &amp; /@ Range[n]), Band[{1, 2}] -&gt; (func2[0, #] &amp; /@ Range[n - 1]), Band[{2, 1}] -&gt; (func3[0, #] &amp; /@ Range[n - 1])}, {n, n}]; sa // MatrixForm </code></pre> <p><img src="https://i.stack.imgur.com/E22C7.png" alt="Mathematica graphics"></p> <p>or</p> <pre><code>sa = Quiet@SparseArray[{Band[{1, 1}] -&gt; (func1[0, #] &amp; /@ Range[n]), Band[{1, 2}] -&gt; (func2[0, #] &amp; /@ Range[n]), Band[{2, 1}] -&gt; (func3[0, #] &amp; /@ Range[n])}, {n, n}]; sa // MatrixForm </code></pre> <p>(* same result *) </p>
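<p><em>(Editor's aside, not part of the original answer.)</em> For readers cross-checking outside Mathematica, here is the same position-dependent-band idea in plain Python (a dense nested list stands in for the sparse structure; the band functions mirror <code>func1</code>, <code>func2</code>, <code>func3</code> above with the first argument fixed to 0):</p>

```python
def tridiagonal(n, diag, upper, lower):
    # n x n tridiagonal matrix whose band values vary with the 1-based position j
    m = [[0] * n for _ in range(n)]
    for i in range(n):
        j = i + 1                      # 1-based position, as in Range[n]
        m[i][i] = diag(j)              # Band[{1, 1}] analogue
        if i + 1 < n:
            m[i][i + 1] = upper(j)     # Band[{1, 2}] analogue
            m[i + 1][i] = lower(j)     # Band[{2, 1}] analogue
    return m

# same band values as in the answer, with a = 0 fixed:
# func1[0, j] = j, func2[0, j] = 1 + j, func3[0, j] = 2 j
m = tridiagonal(10, lambda j: j, lambda j: 1 + j, lambda j: 2 * j)
assert m[0][0] == 1 and m[0][1] == 2 and m[1][0] == 2 and m[9][9] == 10
```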