1,246,250
<p>I recently learned of Cantor's diagonal argument, and was thinking about why there can't be a bijection between the set of integers and the set of real numbers. I understood the basic idea behind the proof, but I was thinking of a particular transformation, for which I don't see why it doesn't form a bijection. </p> <p>Let's say we want to map all of the integers to the real numbers in $[-1, 1]$. My transformation is created in such a way that every integer is transformed into a mirrored number placed after a decimal point. </p> <p>For example:</p> <p>$$0 \rightarrow 0.0$$ $$1 \rightarrow 0.1$$ $$2 \rightarrow 0.2$$ $$\vdots$$ $$100 \rightarrow 0.001$$ $$\vdots$$</p> <p>It would seem to me that this is one-to-one, and every number in $[-1, 1]$ is hit. Even in the case of an irrational number, say $(\sqrt2 - 1)$, there is some integer of infinite size that maps exactly to $(\sqrt2 - 1)$, because $(\sqrt2 - 1)$ can be written as an infinite string of digits sitting behind a decimal point. It could be mapped as:</p> <p>$$...73265312414 \rightarrow 0.41421356237...$$</p> <p>Now, I'm assuming I was probably not the first person to think of this, so why does this not work as a bijective mapping?</p>
P Vanchinathan
28,915
<p>For example $1/3=0.333\ldots$ is not in the image, and so the function is not onto. (This is already contained in the answer by MooS.)</p>
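To make the gap concrete (my own sketch, with `mirror` a hypothetical helper name): the proposed map reverses the finitely many digits of an integer, so every output is a terminating decimal, and $1/3=0.333\ldots$ is never attained.

```python
def mirror(n):
    # reverse the decimal digits of a nonnegative integer n and
    # place them after "0." -- every output has finitely many
    # digits, i.e. is a terminating decimal
    return float("0." + str(n)[::-1])

print(mirror(100))  # 0.001
print(mirror(123))  # 0.321
# 1/3 = 0.333... has infinitely many digits, so no integer maps to it
```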
2,368,179
<p>The answer should be in radians, like $\pi/4$ (45°) or $\pi/2$ (90°). I used the $\tan(A+B)$ formula and got $5/7$ as the answer, but that's obviously wrong.</p>
Dando18
274,085
<p>Using the $\tan(A+B)$ formula,</p> <p>$$ \tan(A+B) = \frac{-1/2 - 1/3}{1-(1/2)(1/3)} = -1 $$</p> <p>now use the $\arctan$,</p> <p>$$ A+B = \arctan(-1) = n\pi + \frac{3\pi}{4}, \ \ n\in\mathbb Z $$</p>
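A quick numerical check of this computation (my own sketch, assuming $\tan A=-1/2$ and $\tan B=-1/3$, the values the formula above plugs in):

```python
import math

# assumed inputs: tan A = -1/2 and tan B = -1/3
A = math.atan(-1/2)
B = math.atan(-1/3)

# the addition formula and a direct evaluation agree
direct = math.tan(A + B)
formula = (-1/2 + -1/3) / (1 - (-1/2) * (-1/3))
print(direct, formula)  # both -1.0 up to rounding

# A + B lands on the branch n*pi + 3*pi/4 with n = -1, i.e. -pi/4
print(A + B)
```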
3,699,439
<p>I am interested in whether there is a geometric meaning (using graphs) of <span class="math-container">$(1 + \frac{1}{n})^n$</span> when <span class="math-container">$n \rightarrow \infty$</span>. Also, is there a visual explanation of why <span class="math-container">$e^x = (1 + \frac{x}{n})^n$</span> when <span class="math-container">$n \rightarrow \infty$</span> and why <span class="math-container">$\frac{d}{dx}e^x = e^x$</span>?</p> <p>I don't think this kind of question has been posted yet.</p>
Community
-1
<p>I think of my favorite, and pretty geometric, proof of this limit, using the squeeze or sandwich theorem for limits. You can do it using an upper and lower Riemann sum with one subdivision for the integral of <span class="math-container">$1/t$</span>.</p> <p>One has <span class="math-container">$L\le\int_1^{1+x/n}1/t\,\mathrm dt\le U\implies x/n(1/(1+x/n))\le\ln(1+x/n)\le x/n(1)\implies x/(n+x)\le\ln(1+x/n)\le x/n\implies e^{x/(n+x)}\le(1+x/n)\le e^{x/n}\implies e^{nx/(n+x)}\le(1+x/n)^n\le e^x$</span>, and take limits.</p>
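The final squeeze $e^{nx/(n+x)}\le(1+x/n)^n\le e^x$ is easy to check numerically (my own sketch, not part of the answer):

```python
import math

# check the squeeze e^(nx/(n+x)) <= (1+x/n)^n <= e^x for one
# sample point; all three values approach e as n grows
x, n = 1.0, 1000
lower = math.exp(n * x / (n + x))
mid = (1 + x/n) ** n
upper = math.exp(x)
print(lower, mid, upper)  # increasing, all near e = 2.71828...
```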
2,917,896
<p>I think my proof is wrong but I don't know how to approach the statement differently. I hope you can help me identify where I'm mistaken/incomplete.</p> <p>Proof: $$\text{We need to prove: } \bigcup_{n=1}^{\infty}[3 - \frac{1}{n}, 6] = [2, 6] $$</p> <p>$$\text{Thus, } x \in \bigcup_{n=1}^{\infty}[3 - \frac{1}{n}, 6] \iff x \in [2, 6]$$</p> <p>$$\text{We first consider the converse of the biconditional.}$$</p> <p>$$\text{and proceed by contrapositive.} $$ $$x \notin \bigcup_{n=1}^{\infty}[3 - \frac{1}{n}, 6] \implies x \notin [2, 6]$$ $$\text{Given that when } n = 1, [3-\frac{1}{n}, 6]=[2,6] \text{ and } $$ $$ \forall z \in (\mathbb{N} - {1}) , [3-\frac{1}{z}, 6] &lt; [2, 6] \text{ thus } \bigcup_{n=1}^{\infty}[3 - \frac{1}{n}, 6] = [2,6]$$ $$\text{It follows that, } x \notin [2,6] \text{. Thus the converse is true.}$$</p> <p>$$\text{Now, for left to right } (\implies) \text{ we proceed by direct proof. }$$</p> <p>$$x \in \bigcup_{n=1}^{\infty}[3 - \frac{1}{n}, 6] \implies x \in [2, 6]$$ $$\text{By the same logic as for the converse, we continue..}$$</p> <p>$$\text{Given that, when } n = 1, [3-\frac{1}{n}, 6] = [2, 6], \text{ It follows that: } $$ $$x \in [2,6]$$</p> <p>$$\therefore \bigcup_{n=1}^{\infty} A_{n} = [2, 6] \text{ } \blacksquare$$ </p> <p>Thank you for your time.</p> <hr> <p><strong>Updated proof:</strong></p> <p>Proof: </p> <p>We assume $x \in \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6]$ $$A_{1} = [2, 6] &gt; \bigcup_{i=2}^{\infty} [3 - \frac{1}{i}, 6] = [ \frac{5}{2}, 6] *$$ $$\therefore A_{1} = \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6] = [2, 6] , \space x \in [2,6]$$</p> <p>[ I placed a (*) to show where I'm uncertain. My problem is in knowing how much I should explain to the reader. I have to establish somehow that $A_{1}$ is the biggest interval but I kind of leave open 'why' $\bigcup_{i=2}^{\infty} [3 - \frac{1}{i}, 6] = [ \frac{5}{2}, 6]$ is true. For example, I thought I had to show why $3 - \frac{1}{i} &gt; 2$ for every i $\geq$ 2. 
So I have a tendency to break everything down too much]</p> <p>Now for the converse we proceed by contrapositive.</p> <p>We assume $x \notin \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6]$ $$A_{1} = [2, 6] &gt; \bigcup_{i=2}^{\infty} [3 - \frac{1}{i}, 6] = [ \frac{5}{2}, 6] *$$ $$\therefore A_{1} = \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6] = [2, 6] , \space x \notin [2,6]$$</p> <p>$$ \therefore \bigcup_{n=1}^{\infty}[3 - \frac{1}{n}, 6] = [2, 6] \blacksquare$$</p> <p><strong><em>Updated proof #2:</em></strong></p> <p>Proof:</p> <p>We assume, $x \in \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6]$.</p> <p>Since $2 \leq 3 - \frac{1}{n} &lt; 3$ for all $ n \geq 1$, $ \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6] \subseteq [2, 6], x \in [2, 6]$ </p> <p>For the converse we assume $x \notin \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6]$. </p> <p>Following the same reasoning as above, since $2 \leq 3 - \frac{1}{n} &lt; 3$ for all $ n \geq 1$, $ \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6] \subseteq [2, 6], x \notin [2, 6]$ </p> <p>$\therefore \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6] = [2, 6] \space \blacksquare$ </p>
eloiprime
180,579
<p>We have $[3-\frac{1}{n},6]\subseteq[2,6]$ for all $n\ge 1$, and thus $$\bigcup_{n=1}^\infty\left[3-\frac{1}{n},6\right]\subseteq[2,6].$$ As for the reverse inclusion, we have $$[2,6]=\left[3-\frac{1}{1},6\right]\subseteq\bigcup_{n=1}^\infty\left[3-\frac{1}{n},6\right].$$</p>
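As a quick machine check (my own sketch, not part of the answer), SymPy's interval arithmetic confirms that the $n=1$ interval already swallows all the later ones, so any finite union collapses to $[2,6]$:

```python
from sympy import Interval, Rational, Union

# the n = 1 interval [2, 6] contains every [3 - 1/n, 6] for n >= 1,
# so the union of the first 49 intervals is already [2, 6]
u = Union(*[Interval(3 - Rational(1, n), 6) for n in range(1, 50)])
print(u)  # Interval(2, 6)
```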
3,692,083
<p>I wish to show that the closed unit ball in <span class="math-container">$l^1$</span> is not compact, for which I believe it would be easiest to show that it is not bounded. For this I want to consider the sequence {1, 1/2, 1/3, ... , 1/n, ...}, since the harmonic series is known to be divergent. But will this sequence actually be in the unit ball of <span class="math-container">$l^1$</span>? I'm confused by the definition of the norm given to me. </p>
obscurans
619,038
<p><span class="math-container">$\ell^1$</span> is the space of sequences under the norm <span class="math-container">$$\left\|x\right\|=\sum_{n=1}^{\infty}\left|x_n\right|$$</span> such that the norm is finite. So no, not only is the sequence <span class="math-container">$\left\{\frac{1}{n}\right\}_{n=1}^{\infty}$</span> not in the unit ball, it's not even an element of <span class="math-container">$\ell^1$</span> at all.</p>
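A quick numeric illustration of the answer's point (my own sketch): the harmonic partial sums pass $1$ after two terms and keep growing like $\log N$, so the candidate sequence has infinite $\ell^1$-norm.

```python
# partial sums of the would-be norm sum(1/n): already 1 + 1/2 > 1,
# and the sums grow without bound (like log N + gamma), so the
# sequence {1/n} is not even an element of l^1
partial = sum(1/n for n in range(1, 1_000_000))
print(partial)  # about 14.39
```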
3,384,416
<p>Consider the following algorithm: </p> <ol> <li><p>pick an integer <span class="math-container">$n&gt; 0$</span>.</p></li> <li><p>If <span class="math-container">$n$</span> is even, divide by 2. If <span class="math-container">$n$</span> is odd, find the least perfect square <span class="math-container">$m^2$</span> greater than <span class="math-container">$n$</span> and form <span class="math-container">$n + m^2$</span>. </p></li> <li><p>Repeat step 2. with either <span class="math-container">$n/2$</span> or <span class="math-container">$n + m^2$</span>. </p></li> </ol> <p>The conjecture is that this algorithm will always terminate at a 1 or an 11. Showing that it terminates can be done by induction. Why do 1 and 11 appear? Is there anything significant about the number of "termination points" (here, just 1 and 11, so 2 termination points)? </p> <p>Generalization: The algorithm seems to terminate for all <span class="math-container">$n$</span> when replacing the least perfect square greater than n with the greatest perfect square less than n. It also seems to terminate when square is replaced by any power. </p> <p>How can I understand this algorithm and its variations better, and in particular, the numbers where it terminates?</p>
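The algorithm is easy to experiment with; here is a minimal Python sketch (my own, with `step` and `run` hypothetical helper names) that iterates until one of the conjectured stopping values $1$ or $11$ is reached:

```python
from math import isqrt

def step(n):
    # one step: halve if even; otherwise add the least perfect
    # square m^2 strictly greater than n
    if n % 2 == 0:
        return n // 2
    m = isqrt(n) + 1
    return n + m * m

def run(n):
    # iterate until reaching one of the conjectured stopping values
    while n not in (1, 11):
        n = step(n)
    return n

print(run(3), run(19))  # 3 -> 7 -> 16 -> ... -> 1, and 19 -> 44 -> 22 -> 11
```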
Nitin Uniyal
246,221
<p>Geometrically, the given hypothesis implies that the curvature of the surface must be zero at the intersecting curve.</p> <p>This implies <span class="math-container">$z=f(x,y=\lambda x)=c$</span> or <span class="math-container">$dz=0$</span>.</p> <p><span class="math-container">$\implies \frac{\partial z}{\partial x}dx+\frac{\partial z}{\partial y}dy=0$</span></p> <p><span class="math-container">$\implies (p+\lambda q)dx=0$</span> where <span class="math-container">$p=\partial z/\partial x$</span> and <span class="math-container">$q=\partial z/\partial y$</span>. </p> <p>Picking the partial differential equation <span class="math-container">$(p+\lambda q)=0$</span>.</p> <p>This gives <span class="math-container">$p=-\lambda q=k$</span> (say).</p> <p>Using <span class="math-container">$dz=p\,dx+q\,dy$</span> you have <span class="math-container">$dz=k\,dx-(k/\lambda)\,dy$</span>.</p> <p>Integrating, <span class="math-container">$z=kx-(k/\lambda) y+ c$</span> is the solution.</p>
3,384,416
<p>Consider the following algorithm: </p> <ol> <li><p>pick an integer <span class="math-container">$n&gt; 0$</span>.</p></li> <li><p>If <span class="math-container">$n$</span> is even, divide by 2. If <span class="math-container">$n$</span> is odd, find the least perfect square <span class="math-container">$m^2$</span> greater than <span class="math-container">$n$</span> and form <span class="math-container">$n + m^2$</span>. </p></li> <li><p>Repeat step 2. with either <span class="math-container">$n/2$</span> or <span class="math-container">$n + m^2$</span>. </p></li> </ol> <p>The conjecture is that this algorithm will always terminate at a 1 or an 11. Showing that it terminates can be done by induction. Why do 1 and 11 appear? Is there anything significant about the number of "termination points" (here, just 1 and 11, so 2 termination points)? </p> <p>Generalization: The algorithm seems to terminate for all <span class="math-container">$n$</span> when replacing the least perfect square greater than n with the greatest perfect square less than n. It also seems to terminate when square is replaced by any power. </p> <p>How can I understand this algorithm and its variations better, and in particular, the numbers where it terminates?</p>
zhw.
228,045
<p>WLOG, <span class="math-container">$f(0,0)=0.$</span> I'll assume the graph of <span class="math-container">$f$</span> over every line through the origin is itself a line.</p> <p>Let <span class="math-container">$v$</span> be a nonzero vector in <span class="math-container">$\mathbb R^2.$</span> For <span class="math-container">$t\in \mathbb R,$</span> define <span class="math-container">$f_v(t) = f(tv).$</span> Our assumption on <span class="math-container">$f$</span> shows <span class="math-container">$f_v(t) = Ct$</span> for some constant <span class="math-container">$C.$</span> Here we've used <span class="math-container">$f(0,0)=0.$</span></p> <p>Now <span class="math-container">$C= (f_v)'(0).$</span> Because <span class="math-container">$f$</span> is differentiable at <span class="math-container">$(0,0),$</span> we can use the chain rule to calculate this. We get <span class="math-container">$C=Df(0,0)(v).$</span> Thus</p> <p><span class="math-container">$$f(tv)= f_v(t) = Df(0,0)(v)\cdot t = Df(0,0)(tv).$$</span></p> <p>This says <span class="math-container">$f = Df(0,0)$</span> on the line <span class="math-container">$\{tv\}.$</span> Since <span class="math-container">$v$</span> was arbitrary, <span class="math-container">$f = Df(0,0)$</span> on every line through the origin. Thus <span class="math-container">$f = Df(0,0)$</span> on <span class="math-container">$\mathbb R^2.$</span> In other words,</p> <p><span class="math-container">$$f(x,y)=\frac{\partial f}{\partial x}(0,0) x + \frac{\partial f}{\partial y}(0,0)y,\,\, (x,y)\in \mathbb R^2.$$</span></p>
2,107,787
<p>I am a student and I am having difficulty with answering this question. I keep getting the answer wrong. Please may I have a step-by-step solution to this question so that I won't have difficulties with answering these types of questions in the future.</p> <p><em>n</em> is a number. 100 is the LCM of 20 and <em>n</em>. Work out two different possible values for <em>n</em>.</p> <p><em>n</em> = ______ <em>n</em> = ______</p> <p>I did this: factors of 100: 1, 2, 4, 5, 10, 20, 25, 50, 100</p> <p>I don't know what to do next.</p> <p>Thank you, any help would be appreciated</p>
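Since Python 3.9 the standard library exposes `math.lcm`, so the candidates can be brute-forced (an illustration I'm adding, not part of the original question):

```python
from math import lcm

# n must divide 100 for lcm(20, n) to equal 100, so searching
# 1..100 is exhaustive
candidates = [n for n in range(1, 101) if lcm(20, n) == 100]
print(candidates)  # [25, 50, 100]
```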
Eric Wofsey
86,856
<p>Suppose the graph is nonempty and disconnected; say you can partition the $v_\alpha$ into two nonempty sets $A$ and $B$ with no edges between them. Let $U=\left(\bigcup_{\alpha\in A} U_\alpha\right)\cup\left(X\setminus\bigcup_{\alpha} U_\alpha\right)$ and $V=\bigcup_{\beta\in B}U_\beta$. Note that for any $\alpha$, either $U_\alpha$ is contained in $U$ (if $\alpha\in A$) or disjoint from $U$ (if $\alpha\in B$, since there are no edges between $A$ and $B$), so $U$ is open. Similarly, $V$ is also open. Since there are no edges between $A$ and $B$, $U\cap V=\emptyset$, and since every $\alpha$ is in either $A$ or $B$, $U\cup V=X$. Thus $U$ and $V$ witness that $X$ is disconnected.</p> <p>(If the graph is empty, then $X$ has the discrete topology, so $X$ will still not be connected unless it has exactly one point.)</p> <p>(Note that the converse which you claimed to already know is only true if the $U_\alpha$ cover $X$. For if they don't cover $X$ (and there exists at least one $U_\alpha$), then the union of all the $U_\alpha$ is a nontrivial clopen subset of $X$, regardless of whether the graph is connected.)</p>
753,553
<p>Let $R$ be a commutative ring and let $0 \to L \to M \to N \to 0$ be an exact sequence of $R$-modules. Prove that if $L$ and $N$ are noetherian, then $M$ is noetherian. I tried considering the preimage of the map $L \to M$ and the image of the map $M \to N$, as they are submodules of $L$ and $N$ respectively, but I couldn't make headway.</p>
gniourf_gniourf
51,488
<p>Let $$0\longrightarrow L\overset{i}\longrightarrow M\overset{p}\longrightarrow N\longrightarrow0$$ be a short exact sequence of $R$-modules. Assume that $L$ and $N$ are Noetherian modules.</p> <p>Let $(M_n)_{n\geq0}$ be an ascending chain of submodules of $M$.</p> <p>Since $(p(M_n))_{n\geq0}$ is an ascending chain of submodules of $N$, and $N$ is Noetherian, there exists $n_0\geq0$ such that: $$\tag{$*$}\forall j\geq n_0,\quad p(M_j)=p(M_{n_0}).$$</p> <p>Similarly, $(i^{-1}(i(L)\cap M_n))_{n\geq0}$ being an ascending chain of submodules of $L$, and $L$ being Noetherian, there exists $n_1\geq0$ such that: $$\forall j\geq n_1,\quad i^{-1}(i(L)\cap M_j)=i^{-1}(i(L)\cap M_{n_1}),$$ which also implies: $$\tag{$**$}\forall j\geq n_1,\quad i(L)\cap M_j=i(L)\cap M_{n_1}.$$</p> <p>Set $n=\max\{n_0,n_1\}$.</p> <p>(I guess you got to this point of the proof).</p> <p>We'll now show that $$\forall j\geq n,\quad M_j=M_n.$$ Let $j\geq n$. We already have the inclusion $M_n\subset M_j$ so let's prove the inclusion $M_j\subset M_n$: let $x\in M_j$. Since $p(x)\in p(M_j)=p(M_n)$ (see Property $(*)$), there exists $y\in M_n$ such that $p(x)=p(y)$, i.e., $p(x-y)=0$, i.e., $x-y\in\ker p$. Now, since we have an exact sequence (namely, $\ker p=\operatorname{Im}i=i(L)$), $x-y\in i(L)$. Now by the inclusion $M_n\subset M_j$ we know that $x-y\in M_j$, hence $x-y\in i(L)\cap M_j=i(L)\cap M_n$ (see Property $(**)$), hence $x\in M_n+i(L)\cap M_n=M_n$.</p> <p><strong>Remark.</strong> I never used injectivity of the mapping $i$ nor surjectivity of the mapping $p$. Hence the result still holds true for an exact sequence $$L\overset{i}\longrightarrow M\overset{p}\longrightarrow N$$ (and that's why I carefully left the $i$ all the way through the proof).</p>
32,088
<h2>Motivation</h2> <p>One of the methods for strictly extending a theory <span class="math-container">$T$</span> (which is axiomatizable and consistent, and includes enough arithmetic) is adding the sentence expressing the consistency of <span class="math-container">$T$</span> ( <span class="math-container">$Con(T)$</span> ) to <span class="math-container">$T$</span>. But this extension ( <span class="math-container">$T+Con(T)$</span> ) looks very artificial from the mathematical viewpoint, i.e. does not seem to have any mathematically interesting new consequences, and therefore is probably of no interest to a typical mathematician.</p> <hr /> <p>I would like to know if there is a <em>natural</em> theory (like PA, ZFC, ... ) such that by adding the consistency statement we can prove new <em>mathematically interesting</em> statements. I don't have a definition for what is a natural theory or a mathematically interesting statement, but a theory artificially built for the sole purpose of this question would not be natural, and a purely metamathematical statement (like consistency of <span class="math-container">$T$</span>, or a statement depending on the encoding of <span class="math-container">$T$</span> or its language, or ...) 
would not count as a mathematically interesting statement.</p> <p>Questions:</p> <blockquote> <ol> <li><p>Is there a natural theory <span class="math-container">$T$</span> and a mathematically interesting statement <span class="math-container">$\varphi$</span>, such that it is <strong>not known</strong> that <span class="math-container">$T \vdash \varphi$</span>, but <span class="math-container">$T + Con(T) \vdash \varphi$</span>?</p> </li> <li><p>Is there a natural theory <span class="math-container">$T$</span> and an interesting mathematical statement <span class="math-container">$\varphi$</span>, such that <span class="math-container">$T \nvdash \varphi$</span> but <span class="math-container">$T + Con(T) \vdash \varphi$</span>?</p> </li> </ol> </blockquote>
Joel David Hamkins
1,946
<p>Vitali famously constructed a set of reals that is not Lebesgue measurable by using the Axiom of Choice. Most people expect that it is not possible to carry out such a construction without the Axiom of Choice.</p> <p>Solovay and Shelah, however, proved that this expectation is exactly equiconsistent with the existence of an inaccessible cardinal over ZFC. Thus, the consistency statement Con(ZFC + inaccessible) is exactly equivalent to our inability to carry out a Vitali construction without appealing to AC (beyond Dependent Choice). </p> <p>Thus, if $T$ is the theory $ZFC+$inaccessible, then T+Con(T) can prove "You will not be able to perform a Vitali construction without AC", but $T$, if consistent, does not prove this.</p> <p>I find both this theory and the statement to be natural (even though the statement can also be expressed itself as a consistency statement). Most mathematicians simply believe the statement to be true, and are often surprised to learn that it has large cardinal strength.</p> <hr> <p>There is another general observation to be made. For any consistent theory $T$ whose axioms can be computably enumerated, and this likely includes most or all of the natural theories you might have in mind, there is a polynomial $p(\vec x)$ over the integers such that $T$ does not prove that $p(\vec x)=0$ has no solutions in the integers, but $T+Con(T)$ does prove this. So if you regard the question of whether these diophantine equations have solutions as natural, then they would be examples of the kind you seek. And the argument shows that every computable theory has such examples. </p> <p>The proof of this fact is to use the MRDP solution of Hilbert's 10th problem. Namely, Con(T) is the assertion that there is no proof of a contradiction from $T$, and the MRDP methods show that such computable properties can be coded into diophantine equations. 
Basically, the polynomial $p(\vec x)$ has a solution exactly at a Goedel code of a proof of a contradiction from $T$, so the existence of a solution to $p(\vec x)=0$ is equivalent to $Con(T)$. If $T$ is consistent, then it will not prove $Con(T)$, and so will not prove there are no integer solutions, but $T+Con(T)$ does prove that there are no integer solutions. </p> <p>By the way, it is not true in general that if $T$ is consistent, then so is $T+Con(T)$. Although it might be surprising, some consistent theories actually prove their own inconsistency! For example, if PA is consistent, then so is the theory $T=PA+\neg Con(PA)$, but this theory $T$ proves $\neg Con(T)$. Thus, there are interesting consistent theories $T$, such as the one I just gave, such that $T+Con(T)$ proves any statement at all!</p>
1,791,631
<p>The following is stated on <a href="https://en.wikipedia.org/wiki/Constructible_universe#L_and_large_cardinals" rel="nofollow">Wikipedia</a> for <a href="https://en.wikipedia.org/wiki/Mahlo_cardinal" rel="nofollow">Mahlo cardinals</a>. Unfortunately, it's not sourced. Where can I find details? I wasn't able to google any articles dealing with Mahlo cardinals in $L$.</p> <blockquote> <p>Since $On⊂L⊆V$, properties of ordinals that depend on the absence of a function or other structure (i.e. $\Pi_1^{ZF}$ formulas) are preserved when going down from $V$ to $L$. Hence initial ordinals of cardinals remain initial in L. Regular ordinals remain regular in $L$. Weak limit cardinals become strong limit cardinals in $L$ because the generalized continuum hypothesis holds in $L$. Weakly inaccessible cardinals become strongly inaccessible. Weakly Mahlo cardinals become strongly Mahlo. And more generally, any large cardinal property weaker than 0# (see the list of large cardinal properties) will be retained in $L$.</p> </blockquote>
Stefan Mesken
217,623
<p>Let $\kappa$ be a Mahlo cardinal in $V$, i.e. let $\kappa$ be inaccessible such that $S:= \{ \alpha \in \kappa \mid \alpha \text{ is regular} \}$ is stationary in $\kappa$.</p> <p>First, note that $L \models \kappa \text{ is inaccessible}$. Indeed, if $L \models \kappa \text{ is not a cardinal}$, then there is some $\mu &lt; \kappa$ and some $f \in L$ such that $L \models f \colon \mu \to \kappa \text{ is surjective}$. However, this is a $\Sigma_{0}$ property and hence, in $V$, $f \colon \mu \to \kappa$ is surjective. Contradiction. Since $\kappa &gt; \omega$ (as an ordinal), this also yields that $L \models \kappa \text{ is uncountable}$. Repeating this argument with cofinal $f \colon \mu \to \kappa$ yields that $L \models \kappa \text{ is regular}$.</p> <p>Since $L \models \operatorname{GCH}$, it now suffices to prove that $L \models \kappa \text{ is a limit cardinal}$. This is trivial, because for any ordinal $\gamma &lt; \kappa$, we have that $(\gamma^{+})^V &lt; \kappa$ and since cardinals in $V$ are cardinals in $L$, this proves $$L \models \forall \gamma &lt; \kappa \exists \gamma &lt; \mu &lt; \kappa \colon \mu \text{ is a cardinal}.$$</p> <p>Now let $T := \{ \alpha \in \kappa \mid L \models \alpha \text{ is regular} \}$. By the argument given above, any $\alpha$ that is regular in $V$ is regular in $L$ and hence $S \subseteq T$. Suppose that $L \models \kappa \text{ is not Mahlo}$. Then there is some $C \subseteq \kappa$ such that $L \models C \text{ is club in } \kappa \text{ and } C \cap T = \emptyset$. Being club is a $\Sigma_0$ property, and hence $C \subseteq \kappa$ is club in $\kappa$. Since $S \subseteq T$, we have that $C \cap S = \emptyset$ and hence $S$ is not stationary in $V$. This is a contradiction and we therefore must have that $L \models T \text{ is stationary}$.</p> <p>Thus, $\kappa$ remains Mahlo in $L$.</p>
106,000
<p>I have the following data </p> <pre><code> hours={38.9, 39, 38.9, 39, 39.3, 39.7, 39.2, 38.8, 39.6, 39.8, 39.9, 40.3, \ 40, 40.2, 40.8, 40.7, 40.8, 41.2, 40.6, 40.7, 40.7, 40.9, 40.6, 40.8, \ 40.3, 40.4, 40.7, 40.5, 40.7, 41.2, 40.3, 39.7, 40.4, 40.1, 40.3, \ 40.6, 40.1, 40.5, 40.8, 40.8, 40.9, 41.7} </code></pre> <p>The first entry belongs to January of year 1, the second to February of year 1, and so on. </p> <p>If I plot this with <code>ListPlot[hours, PlotMarkers -&gt; {"1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12"}]</code></p> <p><a href="https://i.stack.imgur.com/VcwkR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VcwkR.png" alt="enter image description here"></a> I don't get the desired result, which is to have the right number corresponding to the right month...</p> <p>Any help would be appreciated.</p>
Quantum_Oli
6,588
<p>It's because the different elements of your <code>PlotMarkers</code> option refer to different data sets, whereas you just have the one data set. </p> <p>If you were only after different coloured data points then you could <code>Style</code> each element of your data set; however, I don't know if it's possible to do this with different symbols. </p> <p>A kind of hacky solution is just to convert your data set into many sets, each with one data point, as follows:</p> <pre><code> ListPlot[List /@ Transpose[{Range[Length[hours]], hours}], PlotMarkers -&gt; ToString /@ Range[Length[hours]]] </code></pre> <p><a href="https://i.stack.imgur.com/1Ixne.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1Ixne.png" alt="enter image description here"></a></p> <p><strong>EDIT</strong></p> <p>After re-reading your question I gather you simply want the markers 1-12 repeated, in which case use something like:</p> <pre><code> PlotMarkers-&gt;PadRight[{}, Length[hours], Table[Style[ToString[i], ColorData["Rainbow"][(i - 1)/11]], {i, 12}]] </code></pre> <p><a href="https://i.stack.imgur.com/ae9iR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ae9iR.png" alt="enter image description here"></a></p>
268,091
<p><a href="https://i.stack.imgur.com/PAO6T.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PAO6T.jpg" alt="enter image description here" /></a></p> <p>I have to solve the ODE <span class="math-container">$x'(t)=\frac{1}{2}x(t)-t$</span> with initial value <span class="math-container">$x(0)$</span>. The existence of solutions of this IVP is equivalent to finding a fixed point of the integral operator <span class="math-container">$T\colon C[0,1]\to C[0,1]$</span> defined by <span class="math-container">$$T(x)(t)=x(0)+\int_0^t\left(\tfrac{1}{2}x(\tau)-\tau\right)d\tau.$$</span> I am facing the problem of how to define <span class="math-container">$T(x)(t)$</span> in Mathematica.</p>
Roman
26,598
<p>We can define your operator <code>T</code> as a functional acting on <a href="https://reference.wolfram.com/language/howto/WorkWithPureFunctions.html" rel="nofollow noreferrer">pure functions</a>:</p> <pre><code>T[x_] := Function[t, Evaluate[x[0] + Integrate[x[τ]/2 - τ, {τ, 0, t}]]] </code></pre> <p>Starting with the function <span class="math-container">$x(t)=0$</span> defined as a <code>0&amp;</code> and applying <code>T</code> three times:</p> <pre><code>NestList[T, 0&amp;, 3] (* {0 &amp;, Function[t$, -(t$^2/2)], Function[t$, -(t$^2/2) - t$^3/12], Function[t$, -(t$^2/2) - t$^3/12 - t$^4/96]} *) </code></pre> <p>Define a memoizing recursion:</p> <pre><code>Clear[X]; X[0] = 0 &amp;; X[n_Integer?Positive] := X[n] = T[X[n - 1]] X[3][t] (* -t^2/2 - t^3/12 - t^4/96 *) </code></pre>
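For readers without Mathematica, the same Picard iteration can be sketched in Python with SymPy (my own translation of the answer's code, not part of the original post):

```python
import sympy as sp

t, tau = sp.symbols('t tau')

def T(x):
    # Picard operator T(x)(t) = x(0) + integral_0^t (x(tau)/2 - tau) dtau
    return x.subs(t, 0) + sp.integrate(x.subs(t, tau)/2 - tau, (tau, 0, t))

x = sp.Integer(0)      # start from the zero function
for _ in range(3):
    x = T(x)
print(sp.expand(x))    # -t**4/96 - t**3/12 - t**2/2
```

The three iterates match the `NestList` output above term by term.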
194,096
<p>Is it possible to find an expression for: $$S(N)=\sum_{k=0}^{+\infty}\frac{1}{\sum_{n=0}^{N}k^n}?$$</p> <p>For $N=1$ we have</p> <p>$$S(1) = \displaystyle\sum_{k=0}^{+\infty}\frac{1}{1 + k} = \displaystyle\sum_{k=1}^{+\infty}\frac{1}{k}$$</p> <p>which is the (divergent) harmonic series. Thus, $S(1) = \infty$.</p> <p>For $N=2$ this sum is: $$S(2)=\sum_{k=0}^{+\infty}\frac{1}{1+k+k^2}$$ which can be expressed as: $$S(2)=-1+\frac{1}{3}\sqrt 3 \pi \tanh(\frac{1}{2}\pi\sqrt 3)\approx 0.798$$</p> <p>For $N=3$ we have: $$S(3)=\frac{1}{4}\Psi(I)+\frac{1}{4I}\Psi(I)-\frac{1}{4I}\pi\coth(\pi)+\frac{1}{4}\pi\coth(\pi)+\frac{1}{4}\Psi(1+I)-\frac{1}{4I}\Psi(1+I)-\frac{1}{2}+\frac{1}{2}\gamma \approx 0.374$$</p>
Sangchul Lee
9,340
<p>Let $T(N) = S(N-1)$. Then</p> <p>$$ \begin{align*}T(n) &amp;= 1 + \frac{1}{n} + \sum_{k=2}^{\infty} \frac{1}{k^{n-1}+k^{n-2}+\cdots+k+1} \\ &amp;= 1 + \frac{1}{n} + \sum_{k=2}^{\infty} \frac{k - 1}{k^n - 1} \\ &amp;= 1 + \frac{1}{n} + \sum_{k=2}^{\infty} \frac{1}{n} \sum_{l=1}^{n-1} \frac{\omega_l (\omega_l - 1)}{k - \omega_l} \\ &amp;= 1 + \frac{1}{n} + \sum_{k=0}^{\infty} \frac{1}{n} \sum_{l=1}^{n-1} \frac{\omega_l (\omega_l - 1)}{k + 2 - \omega_l}, \end{align*}$$</p> <p>where $\omega_l = \exp\left(\tfrac{2\pi l i}{n}\right)$. Since </p> <p>$$ \frac{1}{n} \sum_{l=0}^{n-1} \omega_l (\omega_l - 1) = 0, $$</p> <p>we may write</p> <p>$$ \begin{align*}T(n) &amp;= 1 + \frac{1}{n} + \sum_{k=0}^{\infty} \frac{1}{n} \sum_{l=1}^{n-1} \omega_l (\omega_l - 1) \left( \frac{1}{k + 2 - \omega_l} - \frac{1}{k+1} \right) \\ &amp;= 1 + \frac{1}{n} + \frac{1}{n} \sum_{l=1}^{n-1} \omega_l (\omega_l - 1) \sum_{k=0}^{\infty} \left( \frac{1}{k + 2 - \omega_l} - \frac{1}{k+1} \right) \\ &amp;= 1 + \frac{1}{n} - \frac{1}{n} \sum_{l=1}^{n-1} \omega_l (\omega_l - 1) \left( \gamma + \psi_0 (2 - \omega_l) \right) \\ &amp;= 1 + \frac{1}{n} - \frac{1}{n} \sum_{l=1}^{n-1} \omega_l (\omega_l - 1) \psi_0 (2 - \omega_l). \end{align*}$$</p>
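The partial-fraction step $\frac{k-1}{k^n-1}=\frac{1}{n}\sum_{l=1}^{n-1}\frac{\omega_l(\omega_l-1)}{k-\omega_l}$ can be spot-checked numerically (my own sketch, not part of the answer):

```python
import cmath

def lhs(k, n):
    return (k - 1) / (k**n - 1)

def rhs(k, n):
    # (1/n) * sum over l = 1..n-1 of w_l(w_l - 1)/(k - w_l),
    # where w_l = exp(2*pi*i*l/n) are the nontrivial n-th roots of unity
    s = 0
    for l in range(1, n):
        w = cmath.exp(2j * cmath.pi * l / n)
        s += w * (w - 1) / (k - w)
    return s / n

for n in (3, 4, 5):
    for k in (2, 5, 10):
        assert abs(lhs(k, n) - rhs(k, n)) < 1e-12
print("partial fraction identity checks out")
```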
4,159,060
<p>Let <span class="math-container">$H_1, H_2$</span> be Hilbert spaces and <span class="math-container">$T:H_1\to H_2$</span>. We say that <span class="math-container">$T$</span> is unitary if it preserves the inner product and is onto.</p> <ol> <li>Show that the following claims are equivalent:</li> </ol> <p>A. <span class="math-container">$T$</span> is unitary.</p> <p>B. <span class="math-container">$T$</span> maps every orthonormal basis of <span class="math-container">$H_1$</span> to an orthonormal basis of <span class="math-container">$H_2$</span>.</p> <p>C. <span class="math-container">$T$</span> is injective and there exists an orthonormal basis of <span class="math-container">$H_1$</span> which <span class="math-container">$T$</span> maps to an orthonormal basis of <span class="math-container">$H_2$</span>.</p> <p>D. <span class="math-container">$T$</span> is invertible and <span class="math-container">$T^{-1}=T^*$</span>.</p> <ol start="2"> <li><p>Show that <span class="math-container">$T$</span> is unitary iff <span class="math-container">$T^*$</span> is.</p> </li> <li><p>If <span class="math-container">$H_1=H_2$</span>, show that <span class="math-container">$T$</span> is unitary iff <span class="math-container">$T$</span> preserves the inner product and is normal.</p> </li> </ol> <p>For 1:</p> <p>A=&gt;B: Let T be a unitary operator, i.e. it preserves the inner product. Let <span class="math-container">$(u_a)$</span> be a Hilbert basis of <span class="math-container">$H_1$</span> (every Hilbert space has an orthonormal basis); then <span class="math-container">$&lt;Tu_a,Tu_b&gt;=&lt;u_a,u_b&gt;=0$</span> for all <span class="math-container">$a\neq b$</span> and <span class="math-container">$&lt;Tu_a,Tu_a&gt;=&lt;u_a,u_a&gt;=1$</span>. Thus, T maps the orthonormal basis <span class="math-container">$(u_a)$</span> to an orthonormal basis <span class="math-container">$(Tu_a)$</span>.</p> <p>D=&gt;A: Let T be invertible and <span class="math-container">$T^*=T^{-1}$</span>. 
Then, <span class="math-container">$&lt;Tx,Ty&gt;=&lt;x,T^*Ty&gt;=&lt;x,y&gt;$</span>, so T is unitary by definition.</p> <p>For 2: Using that A &lt;=&gt; D, T is unitary iff it is invertible and <span class="math-container">$T^{-1}=T^*$</span>.</p> <p>If T is unitary then <span class="math-container">$&lt;T^*x,T^*y&gt;=&lt;x,TT^*y&gt;=&lt;x,Iy&gt;=&lt;x,y&gt;$</span>, so we get that <span class="math-container">$T^*$</span> is unitary. Conversely, if <span class="math-container">$T^*$</span> is unitary, then <span class="math-container">$&lt;Tx,Ty&gt;=&lt;x,T^*Ty&gt;=&lt;x,y&gt;$</span>, so T is unitary.</p> <p>For 3: If T is unitary then T preserves the inner product (by definition), and using A &lt;=&gt; D,</p> <p><span class="math-container">$T^*T=T^{-1}T=I$</span><br /> <span class="math-container">$TT^*=TT^{-1}=I$</span>. Therefore T is normal.</p> <p>For the converse, let T be normal and preserve the inner product: <span class="math-container">$&lt;Tx,Ty&gt;=&lt;x,T^*Ty&gt;=&lt;x,TT^*y&gt;=&lt;x,y&gt;$</span>, so <span class="math-container">$T^*T=TT^*=I$</span>, so T is invertible and <span class="math-container">$T^*=T^{-1}$</span>; thus T is unitary (by A &lt;=&gt; D).</p> <p>Is what I did fine?</p> <p>I did not get the idea for the rest of the =&gt; implications in 1, so I would appreciate your help.</p>
Marc Romaní
179,483
<p>I think the confusion arises from the notation itself. Note that <span class="math-container">\begin{align} &amp;\,\mathbb{E}_X[\mathbb{P}(Y \neq h(X)|X=x)]\\ =&amp;\,\mathbb{E}_X[\mathbb{P}(Y=0,h(X)=1 | X=x) + \mathbb{P}(Y=1,h(X)=0 | X=x)]\\ =&amp;\,\mathbb{E}_X[\mathbb{P}(Y=0|X=x)\mathbb{P}(h(X)=1|X=x) + \mathbb{P}(Y=1|X=x)\mathbb{P}(h(X)=0|X=x)]\\ =&amp;\,\mathbb{E}_X[\mathbb{P}(Y=0|X=x)\boldsymbol{1}_{h(x)=1} + \mathbb{P}(Y=1|X=x)\boldsymbol{1}_{h(x)=0}]\\ =&amp;\,\mathbb{E}_X[(1-\eta(x))\boldsymbol{1}_{h(x)=1} + \eta(x)\boldsymbol{1}_{h(x)=0}] \end{align}</span> Inside the expectation you have the function <span class="math-container">$g(x) = (1-\eta(x))\boldsymbol{1}_{h(x)=1} + \eta(x)\boldsymbol{1}_{h(x)=0}$</span>, defined on <span class="math-container">$\mathbb{R}^d$</span>. But of course, you want to take the expectation with respect to the random variable <span class="math-container">$g(X)$</span>. It's just how the notation plays out, once you condition on a particular value <span class="math-container">$x$</span> for <span class="math-container">$X$</span>. If you wrote the integrals explicitly instead of the expectation operators it would become clear.</p>
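<p>To see the notation in action, here is a small self-contained check on a toy discrete model (the distribution and classifier below are my own invented example, not from the question): the closed-form expression in the last line of the derivation matches a direct Monte Carlo estimate of <span class="math-container">$\mathbb{P}(Y \neq h(X))$</span>.</p>

```python
import random

# Toy model (invented for illustration): X uniform on {0, 1, 2} with
# eta(x) = P(Y = 1 | X = x), and a fixed classifier h.
eta = {0: 0.2, 1: 0.9, 2: 0.5}
h   = {0: 0,   1: 1,   2: 0}

# Closed form from the last line of the derivation:
# E_X[(1 - eta(X)) 1{h(X)=1} + eta(X) 1{h(X)=0}]
exact = sum((1 - eta[x]) if h[x] == 1 else eta[x] for x in eta) / 3

# Direct Monte Carlo estimate of P(Y != h(X)).
random.seed(0)
N = 200_000
errors = 0
for _ in range(N):
    x = random.randrange(3)
    y = 1 if random.random() < eta[x] else 0
    errors += (y != h[x])
estimate = errors / N
```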
2,505,757
<p>Today, I was trying to prove the <a href="https://math.stackexchange.com/questions/2505714/showing-cantor-set-is-uncountable">Cantor set is uncountable</a> and I completed it just a while ago.</p> <p>So, I know that the end-points of each $A_n$ are elements of $C$ and those end-points are rational numbers. But since $C$ is uncountable, $C$ must contain uncountably many irrational numbers. Then, is there a way to prove that a specific irrational number (say $1/\sqrt2$ or $1/4\pi$) belongs to the set $C$ or not?</p> <p>(Description of notation can be found in the link given above or <a href="https://math.stackexchange.com/questions/2505714/showing-cantor-set-is-uncountable">here</a>)</p> <p>Let's say,</p> <blockquote> <p>Prove or disprove that $1/\sqrt2\in$ Cantor set on $[0,1]$.</p> </blockquote> <p>Can we do that? Or is there a way to solve such a problem?</p>
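<p>For what it's worth, here is a quick numeric exploration (not a proof) I tried for the <span class="math-container">$1/\sqrt2$</span> case, using the standard fact that <span class="math-container">$x\in C$</span> iff <span class="math-container">$x$</span> has a ternary expansion using only the digits <span class="math-container">$0$</span> and <span class="math-container">$2$</span>, and that an irrational number has a unique ternary expansion:</p>

```python
from math import isqrt

# First ternary digits of 1/sqrt(2), via integer arithmetic so the digits
# shown are exact: a / SCALE approximates 1/sqrt(2) to ~50 decimal places.
SCALE = 10**50
a = isqrt(SCALE**2 // 2)  # floor(SCALE / sqrt(2))

digits = []
for _ in range(20):
    a *= 3
    digits.append(a // SCALE)
    a %= SCALE

# 1/sqrt(2) is irrational, so its ternary expansion is unique; a digit 1
# anywhere then rules out membership in the Cantor set.
contains_one = 1 in digits
```

The third ternary digit already turns out to be a <span class="math-container">$1$</span>, which (given uniqueness of the expansion) shows <span class="math-container">$1/\sqrt2 \notin C$</span>.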
Angina Seng
436,618
<p>It's disconnected: the equation is $$y^2=x(x-1)(x+1).$$ In any real solution $-1\le x\le 0$ or $x\ge1$. There are real solutions for $y$ for any $x$ in these intervals, so the curve falls into two components, one defined by $-1\le x\le0$ and the other by $x\ge1$.</p>
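<p>A quick numeric sanity check of the sign analysis, sampling a few points on and off the two intervals (a real <span class="math-container">$y$</span> exists exactly when the right-hand side is nonnegative):</p>

```python
# Check that x(x-1)(x+1) >= 0 (i.e. a real y exists) exactly when
# -1 <= x <= 0 or x >= 1, on a handful of sample points.
def rhs(x):
    return x * (x - 1) * (x + 1)

samples = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 3.0]
ok = all((rhs(x) >= 0) == (-1 <= x <= 0 or x >= 1) for x in samples)
```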
3,361,153
<p>I am given two boolean expressions:<br> 1) <span class="math-container">$x_1 \wedge x_2 \wedge x_3$</span><br> 2) <span class="math-container">$(x_1 \wedge x_2) \vee (x_3 \wedge x_4)$</span> </p> <p>Now I need to determine which expression is trivial and which is non-trivial. What is the procedure for doing so?</p>
José Carlos Santos
446,262
<p>If <span class="math-container">$n^2=3k$</span>, for some integer <span class="math-container">$k$</span>, then <span class="math-container">$3\mid n^2$</span>. Therefore, by Euclid's lemma, and since <span class="math-container">$3$</span> is prime, <span class="math-container">$3\mid n$</span>.</p>
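<p>A brute-force search for a counterexample among small <span class="math-container">$n$</span> (there should be none):</p>

```python
# Search for a counterexample to "3 | n^2 implies 3 | n" among small n.
witnesses = [n for n in range(1, 10_000) if (n * n) % 3 == 0 and n % 3 != 0]
```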
629,347
<p>I understand <strong>how</strong> to calculate the dot product of the vectors. But I don't actually understand <strong>what</strong> a dot product is, and <strong>why</strong> it's needed.</p> <p>Could you answer these questions?</p>
user127.0.0.1
50,800
<p>In a general vector space you can define the <strong>length</strong> of a vector by the induced <strong>norm</strong> via $$\|x\| = \sqrt{x\cdot x}$$</p> <p>this is possible because the dot product is positive definite and thus $x\cdot x$ is non-negative. </p> <p>It is even possible to define an <strong>angle</strong> between two vectors this way by</p> <p>$$\phi = \arccos \frac{x\cdot y}{\sqrt{x\cdot x}\cdot\sqrt{y \cdot y}} $$</p> <p>Also, two vectors are orthogonal <em>iff</em> their inner product is zero, i.e.</p> <p>$$ x \perp y \Longleftrightarrow x\cdot y = 0$$</p> <p>Note that this is possible for <strong>every</strong> vector space that has an inner product (dot product). </p> <hr> <p>A more special example: take the vector space of the continuous functions on the interval $\left[-1,1\right]$ with the inner product defined by $\int_{-1}^1 f(x)g(x) dx$; then the functions $f(x)=x$ and $g(x)=x^2$ are <em>orthogonal</em>, because $$ \int_{-1}^{1}x\cdot x^2 dx = \int_{-1}^{1} x^3 dx = 0$$</p> <p>And the <em>length</em> of $f$ would be</p> <p>$$\left\|f\right\| = \sqrt{\int_{-1}^{1}x\cdot x\, dx } = \sqrt{\frac{2}{3}}$$ </p>
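<p>The function-space example is easy to check numerically; here is a rough midpoint-rule approximation of the two integrals:</p>

```python
import math

# Midpoint-rule approximation of <f, g> = integral of f(x) g(x) over [-1, 1].
def inner(f, g, n=20_000):
    h = 2.0 / n
    return sum(f(-1 + (i + 0.5) * h) * g(-1 + (i + 0.5) * h)
               for i in range(n)) * h

fg = inner(lambda x: x, lambda x: x * x)             # integral of x^3, ~0
norm_f = math.sqrt(inner(lambda x: x, lambda x: x))  # sqrt(2/3)
```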
629,347
<p>I understand <strong>how</strong> to calculate the dot product of the vectors. But I don't actually understand <strong>what</strong> a dot product is, and <strong>why</strong> it's needed.</p> <p>Could you answer these questions?</p>
Steven Alexis Gregory
75,410
<p>Consider two points in $\mathbb R^n$, $P=(x_1, x_2, \dots, x_n)$ and $Q=(y_1, y_2, \dots, y_n)$. Let $O=(0,0,\dots, 0)$ be the origin. How do you find $\theta = m\angle POQ$? According to the law of cosines, $$\cos \theta = \dfrac{|P|^2 +|Q|^2-|Q-P|^2}{2|P|\cdot|Q|}$$.</p> <p>$$|P|^2=\sum_{i=1}^n x_i^2$$ $$|Q|^2=\sum_{i=1}^n y_i^2$$ $$|Q-P|^2=\sum_{i=1}^n (y_i-x_i)^2 =\sum_{i=1}^n (x_i^2-2x_iy_i+y_i^2)$$</p> <p>It follows that $\displaystyle |P|^2 +|Q|^2-|Q-P|^2 = 2\sum_{i=1}^n x_iy_i$</p> <p>So $$\cos \theta = \frac{\sum_{i=1}^n x_iy_i}{|P|\cdot|Q|}$$.</p> <p>It is convenient to <strong>define</strong> the dot product $P \circ Q = \sum_{i=1}^n x_iy_i$.</p> <hr> <p>So now consider an n-dimensional vector space $\mathbf V$ with a basis $\{\mathbf e_i\}_{i=1}^n$. Given $P=\sum_{i=1}^n x_i \mathbf e_i$ and $Q=\sum_{i=1}^n y_i \mathbf e_i$ we define $P \circ Q = \sum_{i=1}^n x_iy_i$ and we define $P$ is orthogonal to $Q$ when $P \circ Q = 0$.</p> <p>We can now define the angle, $\theta$, between the vectors $P$ and $Q$ by $\cos \theta = \dfrac{P \circ Q}{|P| \cdot |Q|}$.</p> <p>Thus the dot product is just an abstraction of a property of vectors in the Euclidean plane. From here we can now define what an orthogonal and an orthonormal basis are and find some very interesting consequences. Yes, there are a lot of technical problems that need to be solved, but they can be solved.</p>
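<p>One concrete check of the derivation, on a pair of points in <span class="math-container">$\mathbb R^3$</span> I made up: the law-of-cosines angle at the origin and the dot-product formula agree.</p>

```python
import math

# Check that the law-of-cosines angle at the origin agrees with the
# dot-product formula, for a concrete pair in R^3.
P = [1.0, 2.0, 2.0]
Q = [3.0, 0.0, 4.0]

def norm(v):
    return math.sqrt(sum(c * c for c in v))

# Law of cosines, using the side PQ.
pq = norm([q - p for p, q in zip(P, Q)])
cos_loc = (norm(P)**2 + norm(Q)**2 - pq**2) / (2 * norm(P) * norm(Q))

# Dot-product formula.
cos_dot = sum(p * q for p, q in zip(P, Q)) / (norm(P) * norm(Q))
```

Here <span class="math-container">$|P|=3$</span>, <span class="math-container">$|Q|=5$</span> and <span class="math-container">$P\circ Q = 11$</span>, so both sides come out to <span class="math-container">$11/15$</span>.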
3,715,987
<p>The domain is: <span class="math-container">$\forall x \in \mathbb{R}\smallsetminus\{-1\}$</span></p> <p>The range is: first we find the inverse of <span class="math-container">$f$</span>: <span class="math-container">$$x=\frac{y+2}{y^2+2y+1} $$</span> <span class="math-container">$$x\cdot(y+1)^2=y+2$$</span> <span class="math-container">$$x\cdot(y+1)^2-y=2 $$</span> I can't find the inverse... my idea is to find the domain of the inverse, and that would then be the range of the function. How can I otherwise determine the range here?</p>
Saket Gurjar
769,080
<p>Alternate way to find the range: </p> <p><span class="math-container">$$f(x) = y =\frac{x+2}{x^2+2x+1}$$</span></p> <p><span class="math-container">$$yx^2+(2y-1)x+(y-2)=0 $$</span></p> <p>Now this quadratic in <span class="math-container">$x$</span> has real roots for every attained value of <span class="math-container">$y$</span> (since real points of the graph exist for every <span class="math-container">$x$</span> in the domain; the excluded point <span class="math-container">$x=-1$</span> is handled below). So applying the condition for real roots (<span class="math-container">$b^2-4ac \geq 0$</span>):</p> <p><span class="math-container">$$(2y-1)^2-4(y)(y-2)\geq0$$</span></p> <p><span class="math-container">$$-4y+1 +8y\geq0$$</span></p> <p><span class="math-container">$$y\geq\frac{-1}{4}$$</span></p> <p><span class="math-container">$$y \in \left[ -\frac{1}{4},\infty \right)$$</span></p> <p>As <span class="math-container">$x \to -1$</span>, <span class="math-container">$y \to \infty$</span>, so nothing needs to be removed from the range.</p>
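<p>A quick numeric spot-check of this range (my own addition): scanning <span class="math-container">$f$</span> on a grid that avoids <span class="math-container">$x=-1$</span>, the smallest value found is <span class="math-container">$-\frac14$</span>, attained at <span class="math-container">$x=-3$</span>.</p>

```python
# Scan f on a grid avoiding x = -1 and confirm the minimum is -1/4 at x = -3.
def f(x):
    return (x + 2) / (x * x + 2 * x + 1)

xs = [k / 100 for k in range(-1000, 1001) if k != -100]
min_val = min(f(x) for x in xs)
argmin = min(xs, key=f)
```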
2,611,855
<p>Let $ (X,\tau) $ be a topological space, $ f : X \rightarrow Y $ a surjective function, $ A \subset Y $, and consider the topology in $ Y $: $\,$ $ \tau _{f} = \lbrace A \subset Y: f^{-1}(A) \in \tau \rbrace$. Show that: $ A $ is closed in $ Y $ if and only if $ f^{-1}(A) $ is closed in $ X.$</p> <p>I got stuck with the converse: suppose that $ f^{-1}(A) $ is closed in $ X $. Then $ X-f^{-1} (A)$ is open in $ X $. That is, $ X-f^{-1} (A) \in \tau $. This implies that</p> <p>$ X-f^{-1}(A) \stackrel{?}{=} f^{-1} (f(X)) - f^{-1}(A) = f^{-1}(f(X)-A) = f^{-1} (Y-A) \in \tau$.</p> <p>Since $ Y-A \subset Y$ and $ f^{-1} (Y-A) \in \tau $, we conclude that $ Y-A \in \tau_{f} $ and therefore $ A $ is closed in $ Y $.</p> <p>I know that $ \stackrel{?}{=} $ is not true, unless $f$ is injective. So I got stuck in that part.</p>
max8128
336,673
<p>Too long for a comment (but I can delete it if you want) so:</p> <p>In fact the RHS is a polynomial, more particularly a <a href="http://mathworld.wolfram.com/JensenPolynomial.html" rel="nofollow noreferrer">Jensen polynomial</a>.</p> <p>Why? Because the coefficients fulfill the condition also called Turán's inequality: $$\sigma_k^2\geq \sigma_{k-1}\sigma_{k+1}$$</p> <p>Proof (here $\lambda=\sigma$): We have to prove:</p> <p>$$\sigma^{2\frac{d^n-1}{d-1}+2n}\geq \sigma^{\frac{d^{n-1}-1}{d-1}+n-1}\sigma^{\frac{d^{n+1}-1}{d-1}+n+1}$$</p> <p>After simplification we have:</p> <p>$$\sigma^{2\frac{d^n-1}{d-1}}\geq\sigma^{\frac{d^{n-1}-1}{d-1}}\sigma^{\frac{d^{n+1}-1}{d-1}}$$ We take the logarithm on each side and get: $${2\frac{d^n-1}{d-1}}\ln(\sigma)\geq\frac{d^{n-1}-1}{d-1}\ln(\sigma)+\frac{d^{n+1}-1}{d-1}\ln(\sigma)$$</p> <p>The logarithm is negative so we reverse the inequality:</p> <p>$${2\left(\frac{d^n-1}{d-1}\right)}\leq\frac{d^{n-1}-1}{d-1}+\frac{d^{n+1}-1}{d-1}$$</p> <p>Finally we get:</p> <p>$$2\leq \frac{1}{d}+d$$ which is obvious. </p> <p>So this is a serious way to prove it, but I do not have the time to conclude right now.</p> <p>Edit: The Jensen polynomials are:</p> <p>$$g_n(x)=\sum_{k=0}^{n}{n\choose k}\sigma_k x^k$$</p> <p>Furthermore we know (writing $g_n(x)=g_n(\phi;x)$) that the polynomials $g_n(\phi;x)$ are generated by: $$e^t\phi(xt)=\sum_{n=0}^{\infty}g_n(x)\frac{t^n}{n!}$$ Theorem A from this <a href="https://ac.els-cdn.com/S0377042709001150/1-s2.0-S0377042709001150-main.pdf?_tid=ed09e460-01ba-11e8-821d-00000aacb35d&amp;acdnat=1516876479_bb10caeedc83fa4725ef630d191ba9f0" rel="nofollow noreferrer">link</a> asserts: the function $\phi(x)$ belongs to L-P if and only if all the polynomials $g_n(\phi;x)$, $n=1,2,\cdots$ have only real zeros.
</p> <p>From this <a href="http://www.ams.org/journals/proc/1975-049-01/S0002-9939-1975-0361017-4/S0002-9939-1975-0361017-4.pdf" rel="nofollow noreferrer">link</a> we can say that the zeros of the Jensen polynomials are simple and negative, so we can express the function like this:</p> <p>$$f(z)=cz^me^{-\alpha z^2+\beta z }\prod_{n=0}^{\infty}(1-\frac{z}{z_n})e^{\frac{z}{z_n}}$$</p> <p>So the infinite sum becomes a product... maybe it could help.</p>
2,611,855
<p>Let $ (X,\tau) $ be a topological space, $ f : X \rightarrow Y $ a surjective function, $ A \subset Y $, and consider the topology in $ Y $: $\,$ $ \tau _{f} = \lbrace A \subset Y: f^{-1}(A) \in \tau \rbrace$. Show that: $ A $ is closed in $ Y $ if and only if $ f^{-1}(A) $ is closed in $ X.$</p> <p>I got stuck with the converse: suppose that $ f^{-1}(A) $ is closed in $ X $. Then $ X-f^{-1} (A)$ is open in $ X $. That is, $ X-f^{-1} (A) \in \tau $. This implies that</p> <p>$ X-f^{-1}(A) \stackrel{?}{=} f^{-1} (f(X)) - f^{-1}(A) = f^{-1}(f(X)-A) = f^{-1} (Y-A) \in \tau$.</p> <p>Since $ Y-A \subset Y$ and $ f^{-1} (Y-A) \in \tau $, we conclude that $ Y-A \in \tau_{f} $ and therefore $ A $ is closed in $ Y $.</p> <p>I know that $ \stackrel{?}{=} $ is not true, unless $f$ is injective. So I got stuck in that part.</p>
Ѕᴀᴀᴅ
302,797
<p>$\def\e{\mathrm{e}} \def\veq{\mathrel{\phantom{=}}}$Denote $μ = λ^d$. It will be proved that\begin{align*} (μ + (1 - μ) \e^{(d - 1)s})^{-\frac{1}{d - 1}} \leqslant \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} \e^{-sμ^{\frac{1}{d}}} \end{align*} holds for $0 &lt; μ &lt; 1$, $s &gt; 0$ and $d &gt; 1$. Step 1 and Step 2 prepare for the proof of monotonicity in Step 3.</p> <p><strong>Lemma:</strong> For $0 &lt; x &lt;1$ and $r &gt; 1$,$$ 1 - x^r \leqslant r(1 - x). $$ (This is trivial by considering the partial derivative of $x^r - rx$ with respect to $x$.)</p> <p><strong>Step 1:</strong> For $0 &lt; μ &lt; 1$, $d &gt; 1$ and $n \in \mathbb{N}$,\begin{align*} &amp;\veq \sum_{k = 0}^n \binom{n}{k} μ^{\frac{k}{d}} (d - 1)^{n - k} μ^{\frac{d^k - 1}{d - 1}} (1 - μ^{\frac{1}{d}} (1 - μ^{d^k}))\\ &amp;\geqslant μ \cdot (μ^{\frac{1}{d}} + d - 1)^n \cdot μ^{\frac{d^n - 1}{d - 1}} \geqslant μ \cdot μ^{\frac{n}{d}} d^n μ^{\frac{d^n - 1}{d - 1}} \geqslant μ \cdot μ^{\frac{n}{d}} \frac{1 - μ^{d^n}}{1 - μ} μ^{\frac{d^n - 1}{d - 1}}. \tag{1.1} \end{align*}</p> <p><strong>Proof:</strong> For $n = 0$, (1.1) becomes$$ 1 - μ^{\frac{1}{d}} (1 - μ) \geqslant μ \geqslant μ, $$ which is true because$$ 1 - μ^{\frac{1}{d}} (1 - μ) - μ = (1 - μ)(1 - μ^{\frac{1}{d}}) \geqslant 0. $$</p> <p>For $n \geqslant 2$, since$$ (μ^{\frac{1}{d}} + d - 1)^n = \sum_{k = 0}^n \binom{n}{k} μ^{\frac{k}{d}} (d - 1)^{n - k}, $$ then\begin{align*} &amp;\veq \sum_{k = 0}^n \binom{n}{k} μ^{\frac{k}{d}} (d - 1)^{n - k} μ^{\frac{d^k - 1}{d - 1}} (1 - μ^{\frac{1}{d}} (1 - μ^{d^k})) - μ \cdot (μ^{\frac{1}{d}} + d - 1)^n μ^{\frac{d^n - 1}{d - 1}}\\ &amp;= \sum_{k = 0}^n \binom{n}{k} μ^{\frac{k}{d}} (d - 1)^{n - k} \left( μ^{\frac{d^k - 1}{d - 1}} (1 - μ^{\frac{1}{d}} (1 - μ^{d^k})) - μ^{\frac{d^n - 1}{d - 1}} \cdot μ \right). 
\tag{1.2} \end{align*}</p> <p>For the $k = n$ term in (1.2), by the lemma,\begin{align*} &amp;\veq μ^{\frac{n}{d}} \left( μ^{\frac{d^n - 1}{d - 1}} (1 - μ^{\frac{1}{d}} (1 - μ^{d^n})) - μ^{\frac{d^n - 1}{d - 1}} \cdot μ \right) = μ^{\frac{n}{d}} μ^{\frac{d^n - 1}{d - 1}} (1 - μ - μ^{\frac{1}{d}} (1 - μ^{d^n}))\\ &amp;\geqslant μ^{\frac{n}{d}} μ^{\frac{d^n - 1}{d - 1}} (1 - μ - μ^{\frac{1}{d}} d^n (1 - μ)) = μ^{\frac{n}{d}} μ^{\frac{d^n - 1}{d - 1}} (1 - μ)(1 - μ^{\frac{1}{d}} d^n)\\ &amp;\geqslant μ^{\frac{n}{d}} μ^{\frac{d^n - 1}{d - 1}} (1 - μ)(1 - d^n) = - μ^{\frac{n}{d}} μ^{\frac{d^n - 1}{d - 1}} (1 - μ) \sum_{k = 0}^{n - 1} \binom{n}{k} (d - 1)^{n - k}, \end{align*} thus\begin{align*} (1.2) &amp;\geqslant \sum_{k = 0}^{n - 1} \binom{n}{k} μ^{\frac{k}{d}} (d - 1)^{n - k} \left( μ^{\frac{d^k - 1}{d - 1}} (1 - μ^{\frac{1}{d}} (1 - μ^{d^k})) - μ^{\frac{d^n - 1}{d - 1}} \cdot μ \right)\\ &amp;\veq - μ^{\frac{n}{d}} μ^{\frac{d^n - 1}{d - 1}} (1 - μ) \sum_{k = 0}^{n - 1} \binom{n}{k} (d - 1)^{n - k}\\ &amp;= \sum_{k = 0}^{n - 1} \binom{n}{k} (d - 1)^{n - k} \left( μ^{\frac{k}{d}} \left( μ^{\frac{d^k - 1}{d - 1}} (1 - μ^{\frac{1}{d}} (1 - μ^{d^k})) - μ^{\frac{d^n - 1}{d - 1}} \cdot μ \right) - μ^{\frac{n}{d}} μ^{\frac{d^n - 1}{d - 1}} (1 - μ) \right). \tag{1.3} \end{align*}</p> <p>For $0 \leqslant k \leqslant n - 1$, since $0 &lt; μ &lt; 1$ and $d &gt; 1$, then $μ^{\frac{k}{d}}$, $μ^{\frac{d^k - 1}{d - 1}}$ and $μ^{d^k}$ are all decreasing with respect to $k$. 
Thus\begin{align*} &amp;\veq μ^{\frac{k}{d}} \left( μ^{\frac{d^k - 1}{d - 1}} (1 - μ^{\frac{1}{d}} (1 - μ^{d^k})) - μ^{\frac{d^n - 1}{d - 1}} \cdot μ \right) - μ^{\frac{n}{d}} μ^{\frac{d^n - 1}{d - 1}} (1 - μ)\\ &amp;\geqslant μ^{\frac{n - 1}{d}} \left( μ^{\frac{d^{n - 1} - 1}{d - 1}} (1 - μ^{\frac{1}{d}} (1 - μ^{d^{n - 1}})) - μ^{\frac{d^n - 1}{d - 1}} \cdot μ \right) - μ^{\frac{n}{d}} μ^{\frac{d^n - 1}{d - 1}} (1 - μ)\\ &amp;= μ^{\frac{n - 1}{d}} μ^{\frac{d^{n - 1} - 1}{d - 1}} \left( (1 - μ^{\frac{1}{d}} (1 - μ^{d^{n - 1}})) - μ^{d^{n - 1}} \cdot μ - μ^{\frac{1}{d}} μ^{d^{n - 1}} (1 - μ) \right)\\ &amp;= μ^{\frac{n - 1}{d}} μ^{\frac{d^{n - 1} - 1}{d - 1}} \left( 1 - μ^{\frac{1}{d}} - μ^{d^{n - 1} + 1} + μ^{\frac{1}{d} + d^{n - 1} + 1}\right)\\ &amp;= μ^{\frac{n - 1}{d}} μ^{\frac{d^{n - 1} - 1}{d - 1}} (1 - μ^{\frac{1}{d}})(1 - μ^{d^{n - 1} + 1}) \geqslant 0, \end{align*} which implies $(1.3) \geqslant 0$. Therefore,$$ \sum_{k = 0}^n \binom{n}{k} μ^{\frac{k}{d}} (d - 1)^{n - k} μ^{\frac{d^k - 1}{d - 1}} (1 - μ^{\frac{1}{d}} (1 - μ^{d^k})) \geqslant μ \cdot (μ^{\frac{1}{d}} + d - 1)^n μ^{\frac{d^n - 1}{d - 1}}. $$ Also, $0 &lt; μ &lt; 1$ and $d &gt; 1$ implies $μ^{\frac{1}{d}} + d - 1 \geqslant d μ^{\frac{1}{d}}$, and by the lemma, $\displaystyle d^n \geqslant \frac{1 - μ^{d^n}}{1 - μ}$. Hence (1.1) holds for $0 &lt; μ &lt; 1$, $d &gt; 1$ and $n \in \mathbb{N}$.</p> <p><strong>Step 2:</strong> For $0 &lt; μ &lt; 1$, $s &gt; 0$ and $d &gt; 1$,\begin{align*} &amp;\veq \e^{(d - 1)s} \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} (1 - μ^{\frac{1}{d}}(1 - μ^{d^n}))\\ &amp;\geqslant μ \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} \frac{1 - μ^{d^n}}{1 - μ} μ^{\frac{d^n - 1}{d - 1}} \geqslant μ^{\frac{1}{d}} μ \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} \frac{1 - μ^{d^n}}{1 - μ} μ^{\frac{d^n - 1}{d - 1}}. 
\tag{2.1} \end{align*}</p> <p><strong>Proof:</strong> By <a href="https://en.wikipedia.org/wiki/Cauchy_product#Convergence_and_Mertens&#39;_theorem" rel="noreferrer">Mertens' theorem</a>,\begin{align*} &amp;\veq \e^{(d - 1)s} \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} (1 - μ^{\frac{1}{d}}(1 - μ^{d^n}))\\ &amp;= \left( \sum_{n = 0}^\infty \frac{((d - 1)s)^n}{n!} \right) \left( \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} (1 - μ^{\frac{1}{d}}(1 - μ^{d^n})) \right)\\ &amp;= \sum_{n = 0}^\infty \sum_{k = 0}^n \frac{((d - 1)s)^{n - k}}{(n - k)!} \cdot \frac{(sμ^{\frac{1}{d}})^k}{k!} μ^{\frac{d^k - 1}{d - 1}} (1 - μ^{\frac{1}{d}}(1 - μ^{d^k}))\\ &amp;= \sum_{n = 0}^\infty \frac{s^n}{n!} \sum_{k = 0}^n \binom{n}{k} μ^{\frac{k}{d}} (d - 1)^{n - k} μ^{\frac{d^k - 1}{d - 1}} (1 - μ^{\frac{1}{d}}(1 - μ^{d^k})). \tag{2.2} \end{align*}</p> <p>By (1.1),\begin{align*} (2.2) &amp;\geqslant \sum_{n = 0}^\infty \frac{s^n}{n!} μ \cdot μ^{\frac{n}{d}} \frac{1 - μ^{d^n}}{1 - μ} μ^{\frac{d^n - 1}{d - 1}} = μ \cdot \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} \frac{1 - μ^{d^n}}{1 - μ} μ^{\frac{d^n - 1}{d - 1}}\\ &amp;\geqslant μ^{\frac{1}{d}} μ \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} \frac{1 - μ^{d^n}}{1 - μ} μ^{\frac{d^n - 1}{d - 1}}. 
\end{align*} Hence (2.2) holds for $0 &lt; μ &lt; 1$, $s &gt; 0$ and $d &gt; 1$.</p> <p><strong>Step 3:</strong> For $0 &lt; μ &lt; 1$ and $d &gt; 1$,\begin{align*} f(s) &amp;= \ln\left( \left( \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} \e^{-sμ^{\frac{1}{d}}} \right) (μ + (1 - μ) \e^{(d - 1)s})^{\frac{1}{d - 1}} \right)\\ &amp;= \ln\left( \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} \right) - sμ^{\frac{1}{d}} + \frac{1}{d - 1} \ln(μ + (1 - μ) \e^{(d - 1)s}) \end{align*} is increasing for $s &gt; 0$.</p> <p><strong>Proof:</strong> Because for any $A &gt; 0$, the series$$ \sum_{n = 0}^\infty \left( \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} \right)' = \sum_{n = 1}^\infty \frac{n s^{n- 1} μ^{\frac{n}{d}}}{n!} μ^{\frac{d^n - 1}{d - 1}} = μ^{\frac{1}{d}} \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^{n + 1} - 1}{d - 1}} $$ converges uniformly for $s \in (0, A)$, then for any $s &gt; 0$,\begin{align*} f'(s) &amp;= \frac{\sum\limits_{n = 0}^\infty \left( \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} \right)'}{\sum\limits_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}}} - μ^{\frac{1}{d}} + \frac{1}{d - 1} \frac{(μ + (1 - μ) \e^{(d - 1)s})'}{μ + (1 - μ) \e^{(d - 1)s}}\\ &amp;= \frac{μ^{\frac{1}{d}} \sum\limits_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^{n + 1} - 1}{d - 1}}}{\sum\limits_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}}} - μ^{\frac{1}{d}} + \frac{(1 - μ) \e^{(d - 1)s}}{μ + (1 - μ) \e^{(d - 1)s}}. \end{align*}</p> <p>Define$$ A = μ^{\frac{1}{d}} \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} - μ^{\frac{1}{d}} \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^{n + 1} - 1}{d - 1}}, \quad B = \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}}. 
$$ Because\begin{align*} A &amp;= μ^{\frac{1}{d}} \left( \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} - \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^{n + 1} - 1}{d - 1}} \right)\\ &amp;= μ^{\frac{1}{d}} \left( \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} - \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} \cdot μ^{d^n} \right)\\ &amp;= μ^{\frac{1}{d}} \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} (1 - μ^{d^n}), \end{align*}\begin{align*} B - A &amp;= \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} - μ^{\frac{1}{d}} \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} (1 - μ^{d^n})\\ &amp;= \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} (1 - μ^{\frac{1}{d}} (1 - μ^{d^n})), \end{align*} then\begin{align*} f'(s) \geqslant 0 &amp;\Longleftrightarrow \frac{(1 - μ) \e^{(d - 1)s}}{μ + (1 - μ) \e^{(d - 1)s}} \geqslant \frac{A}{B}\\ &amp;\Longleftrightarrow B(1 - μ) \e^{(d - 1)s} \geqslant A(μ + (1 - μ) \e^{(d - 1)s})\\ &amp;\Longleftrightarrow (1 - μ)(B - A) \e^{(d - 1)s} \geqslant μA\\ &amp;\Longleftrightarrow (B - A) \e^{(d - 1)s} \geqslant \frac{μ}{1 - μ} A\\ &amp;\Longleftrightarrow \e^{(d - 1)s} \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} (1 - μ^{\frac{1}{d}} (1 - μ^{d^n})) \geqslant μ^{\frac{1}{d}} μ \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} \frac{1 - μ^{d^n}}{1 - μ}, \end{align*} where the last inequality holds by (2.1). Hence $f(s)$ is increasing for $s &gt; 0$.</p> <p><strong>Step 4:</strong> For $0 &lt; μ &lt; 1$, $s &gt; 0$ and $d &gt; 1$,\begin{align*} (μ + (1 - μ) \e^{(d - 1)s})^{-\frac{1}{d - 1}} \leqslant \sum_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} \e^{-sμ^{\frac{1}{d}}}. 
\tag{4.1} \end{align*}</p> <p><strong>Proof:</strong> From Step 3 it is known that $f(s)$ is increasing for $s &gt; 0$, thus$$ \exp(f(s)) = \frac{\sum\limits_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} \e^{-sμ^{\frac{1}{d}}}}{(μ + (1 - μ) \e^{(d - 1)s})^{-\frac{1}{d - 1}}} $$ is also increasing for $s &gt; 0$. Because the series $\sum\limits_{n = 0}^\infty \frac{(sμ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} \e^{-sμ^{\frac{1}{d}}}$ converges uniformly for $s \in [0, 1]$, then for any $s_0 &gt; 0$,\begin{align*} &amp;\veq \frac{\sum\limits_{n = 0}^\infty \frac{(s_0 μ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} \e^{-s_0 μ^{\frac{1}{d}}}}{(μ + (1 - μ) \e^{(d - 1)s_0})^{-\frac{1}{d - 1}}} = \exp(f(s_0)) \geqslant \lim_{s \to 0^+} \exp(f(s))\\ &amp;= \frac{\lim\limits_{s \to 0^+} \sum\limits_{n = 0}^\infty \frac{(s μ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} \e^{-s μ^{\frac{1}{d}}}}{\lim\limits_{s \to 0^+} (μ + (1 - μ) \e^{(d - 1)s})^{-\frac{1}{d - 1}}} = \frac{\sum\limits_{n = 0}^\infty \lim\limits_{s \to 0^+} \frac{(s μ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} \e^{-s μ^{\frac{1}{d}}}}{\left( μ + (1 - μ) \lim\limits_{s \to 0^+} \e^{(d - 1)s} \right)^{-\frac{1}{d - 1}}}\\ &amp;= \frac{\lim\limits_{s \to 0^+} \e^{-s μ^{\frac{1}{d}}} + \sum\limits_{n = 1}^\infty \lim\limits_{s \to 0^+} \frac{(s μ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} \e^{-s μ^{\frac{1}{d}}}}{\left( μ + (1 - μ) \lim\limits_{s \to 0^+} \e^{(d - 1)s} \right)^{-\frac{1}{d - 1}}} = \frac{1}{(μ + (1 - μ))^{-\frac{1}{d - 1}}} = 1, \end{align*} i.e.$$ \sum_{n = 0}^\infty \frac{(s_0 μ^{\frac{1}{d}})^n}{n!} μ^{\frac{d^n - 1}{d - 1}} \e^{-s_0 μ^{\frac{1}{d}}} \geqslant (μ + (1 - μ) \e^{(d - 1)s_0})^{-\frac{1}{d - 1}}. $$ Hence (4.1) holds for $0 &lt; μ &lt; 1$, $s &gt; 0$ and $d &gt; 1$.</p>
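<p>As a numerical sanity check of (4.1) (my own addition, not part of the proof): the series terms decay doubly exponentially in <span class="math-container">$n$</span>, so a short truncation suffices.</p>

```python
import math

# Spot-check inequality (4.1) on a few (mu, s, d) triples; the series terms
# involve mu^((d^n - 1)/(d - 1)), which decays doubly exponentially, so
# truncating at 25 terms is plenty (later terms underflow harmlessly to 0).
def rhs(mu, s, d, terms=25):
    total = 0.0
    for n in range(terms):
        total += (s * mu**(1 / d))**n / math.factorial(n) \
                 * mu**((d**n - 1) / (d - 1))
    return total * math.exp(-s * mu**(1 / d))

def lhs(mu, s, d):
    return (mu + (1 - mu) * math.exp((d - 1) * s))**(-1 / (d - 1))

cases = [(0.5, 1.0, 2.0), (0.1, 3.0, 2.0), (0.9, 0.5, 3.0), (0.3, 2.0, 1.5)]
ok = all(lhs(mu, s, d) <= rhs(mu, s, d) + 1e-12 for mu, s, d in cases)
```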
3,459,205
<p>Explain why <span class="math-container">$\arccos(t)=\arcsin(\sqrt{{1}-{t^2}})$</span> when <span class="math-container">$0&lt;t≤1$</span>. </p> <p>I tried researching online, but couldn't find anything related to this question. I know this equation is correct and makes sense; I just don't know how to explain it using algebra only, working from the left side of the equation.</p>
Clement Yung
620,517
<p><strong>Algebraic proof</strong>:</p> <p>Let <span class="math-container">$\theta = \arccos(t)$</span>, so <span class="math-container">$\cos(\theta) = t$</span>. Note that since <span class="math-container">$0 &lt; t \leq 1$</span>, we have <span class="math-container">$0 \leq \theta &lt; \frac{\pi}{2}$</span>. Recall that <span class="math-container">$\sin^2(\theta) + \cos^2(\theta) = 1$</span>, so <span class="math-container">$\sin(\theta) = \pm\sqrt{1 - \cos^2(\theta)}$</span>. Since <span class="math-container">$0 \leq \theta &lt; \frac{\pi}{2}$</span>, we have <span class="math-container">$\sin(\theta) \geq 0$</span>, so <span class="math-container">$\sin(\theta) = \sqrt{1 - \cos^2(\theta)} \Rightarrow \sin(\theta) = \sqrt{1 - t^2} \Rightarrow \theta = \arcsin(\sqrt{1 - t^2})$</span>. Thus, <span class="math-container">$\arccos(t) = \arcsin(\sqrt{1 - t^2})$</span></p> <p><strong>Geometrical proof</strong>:</p> <p><a href="https://i.stack.imgur.com/TyULe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TyULe.png" alt="enter image description here"></a></p> <p><span class="math-container">$$ \theta = \arccos(t) = \arcsin(\sqrt{1 - t^2}) $$</span></p>
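<p>And a quick floating-point check of the identity on a few values of <span class="math-container">$t\in(0,1]$</span>:</p>

```python
import math

# Check arccos(t) == arcsin(sqrt(1 - t^2)) on a grid of t in (0, 1].
ok = all(
    math.isclose(math.acos(t), math.asin(math.sqrt(1 - t * t)), abs_tol=1e-12)
    for t in [0.01, 0.1, 0.25, 0.5, math.sqrt(2) / 2, 0.9, 1.0]
)
```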
3,622,508
<p>I’m not sure exactly about the conditions needed for a subset <span class="math-container">$S$</span> to localise a ring <span class="math-container">$R$</span>. I know <span class="math-container">$S$</span> has to be multiplicative. But does <span class="math-container">$S$</span> also have to be a subset of the non-zero divisors of <span class="math-container">$R$</span> or does it have to be a subset of the group of units of <span class="math-container">$R$</span>?</p> <p>I can’t find a clear answer. </p>
rschwieb
29,335
<p>Classical localization can be extended to noncommutative rings using the <a href="https://en.wikipedia.org/wiki/Ore_condition" rel="nofollow noreferrer">Ore conditions</a>. They are a set of sufficient conditions to guarantee that the classical construction works, at least on one side.</p> <p>That is, if you have a subset <span class="math-container">$S$</span> of a ring <span class="math-container">$R$</span> that is</p> <ol> <li>Multiplicatively closed;</li> <li>satisfies <span class="math-container">$aS\cap sR\neq \emptyset$</span> for every <span class="math-container">$s\in S$</span> and <span class="math-container">$a\in R$</span>;</li> <li>If <span class="math-container">$s\in S$</span> and <span class="math-container">$a\in R$</span> and <span class="math-container">$sa=0$</span>, there exists a <span class="math-container">$u\in S$</span> such that <span class="math-container">$au=0$</span>.</li> </ol> <p>When <span class="math-container">$R$</span> is commutative the second and third conditions are automatically satisfied for any nonempty multiplicative set. </p> <blockquote> <p>does <span class="math-container">$S$</span> also have to be a subset of the non-zero divisors of <span class="math-container">$R$</span>?</p> </blockquote> <p>No, for example consider the ring <span class="math-container">$R=F[x,y]/(xy)$</span>, for which the powers of <span class="math-container">$x$</span> (in the quotient) form a multiplicative set consisting, apart from <span class="math-container">$1$</span>, entirely of zero divisors. The thing is that some elements will collapse to zero: for example <span class="math-container">$y=\frac{y}{1}=\frac{xy}{x}=\frac 0 x=0$</span> (<span class="math-container">$x,y,1$</span> the images in <span class="math-container">$R$</span> of the original <span class="math-container">$x,y,1$</span> in <span class="math-container">$F[x,y]$</span>.)
If you have two things that annihilate each other in the multiplicative set, then you have <span class="math-container">$0$</span> in there too and that makes everything collapse.</p> <p>The thing is that the collection of (left and right) regular elements of a ring is automatically multiplicatively closed, so they make a convenient multiplicative subset.</p> <blockquote> <p>or does it have to be a subset of the group of units of <span class="math-container">$R$</span>?</p> </blockquote> <p>You can, but it is more interesting to include things that aren't units in the multiplicative set, because they <em>become</em> units in the localization.</p>
2,936,269
<p>How do you simplify: <span class="math-container">$$\sqrt{9-6\sqrt{2}}$$</span></p> <p>A classmate of mine changed it to <span class="math-container">$$\sqrt{9-6\sqrt{2}}=\sqrt{a^2-2ab+b^2}$$</span> but I'm not sure how that helps or why it helps.</p> <p>This questions probably too easy to be on the Math Stack Exchange but I'm not sure where else to post it.</p>
Vladimir Vargas
187,578
<p>The reason for doing that is that <span class="math-container">$\sqrt{a^2-2ab+b^2} = \sqrt{(a-b)^2} = |a-b|$</span>, which equals <span class="math-container">$a-b$</span> when <span class="math-container">$a\geq b$</span>. Now try to put your radical in the form your classmate suggested!</p>
2,936,269
<p>How do you simplify: <span class="math-container">$$\sqrt{9-6\sqrt{2}}$$</span></p> <p>A classmate of mine changed it to <span class="math-container">$$\sqrt{9-6\sqrt{2}}=\sqrt{a^2-2ab+b^2}$$</span> but I'm not sure how that helps or why it helps.</p> <p>This questions probably too easy to be on the Math Stack Exchange but I'm not sure where else to post it.</p>
fleablood
280,126
<p>Your classmate is being.... clever.</p> <p>If $\sqrt {9-6\sqrt 2}=a-b $ then $9-6\sqrt 2=a^2-2ab+b^2$</p> <p>Let $2ab=6\sqrt 2$ and $a^2+b^2=9$.</p> <p>Can we do that? </p> <p>If we let $b^2=k $ and $a^2=9-k$ then $ab=\sqrt {k (9-k)}=3\sqrt 2=\sqrt {18} $. Solving $k (9-k)=18$ for $k $ (if it isn't visibly obvious that we can do it in our heads, it is just a quadratic that we can solve by the quadratic formula) and, for convenience, choosing the smaller solution (because we want $a&gt;b $), we get that $k =3$ is a good solution.</p> <p>So $a=\sqrt 6$ and $b=\sqrt 3$.</p> <p>I.e. in other words</p> <p>$\sqrt {9-6\sqrt 2}=$</p> <p>$\sqrt {6-2\sqrt 6\sqrt 3 +3}=$</p> <p>$\sqrt {(\sqrt 6- \sqrt 3)^2}=$</p> <p>$|\sqrt 6 - \sqrt 3|=$</p> <p>$\sqrt 6 -\sqrt 3$.</p>
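<p>A one-line numerical check of the result:</p>

```python
import math

# sqrt(9 - 6*sqrt(2)) should equal sqrt(6) - sqrt(3).
lhs = math.sqrt(9 - 6 * math.sqrt(2))
rhs = math.sqrt(6) - math.sqrt(3)
```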
1,158,970
<p>In the lecture notes <a href="http://users.jyu.fi/~pkoskela/quasifinal.pdf" rel="nofollow">http://users.jyu.fi/~pkoskela/quasifinal.pdf</a> (Prof. Koskela has made them freely available from his webpage, so I am guessing it is OK that I paste the link here) quasiconformality is defined by saying that $\displaystyle \limsup_\limits{r \rightarrow 0} \frac{L_{f}(x,r)}{l_{f}(x,r)}$ must be uniformly bounded in $x,$ where $\displaystyle L_{f}(x,r):=\sup_\limits{\vert x-y \vert \leq r} \{ \vert f(x)-f(y) \vert \}$ and $\displaystyle l_{f}(x,r):=\inf_\limits{\vert x-y \vert \geq r} \{ \vert f(x)-f(y) \vert \}.$</p> <p>I have three questions concerning this definition:</p> <p>1) The main question: When he proves that a conformal mapping is quasiconformal he says (at the beginning of page 5): "Thus, given a vector h, we have that $|Df(x, y)h| = |∇u||h|$ By the complex differentiability of f we conclude that: $\limsup_\limits{r \rightarrow 0} \dfrac{L_{f}(x,r)}{l_{f}(x,r)}=1$"</p> <p>And I don't quite understand how he did that step. Is he perhaps using the mean value theorem and the maximum modulus principle?</p> <p>2) Second question: Even accepting the previous argument, he only shows that conformal mappings are quasiconformal in dimension $2.$ How does one do this in general? Also, is this definition the same if we replace $\vert x-y \vert \leq r$ and $\vert x-y \vert \geq r$ by $\vert x-y \vert =r$? The former bounds the latter trivially, but more than that I do not know.</p> <p>3) What would be a nice visual interpretation of a quasiconformal mapping? How would a map with possibly infinite distortion at some points look?</p> <p>Thanks</p>
Lee Mosher
26,501
<p>To answer 1), he's only using the definition of the $\mathbb{R}^2$ derivative. $Df(x) : \mathbb{R}^2 \to \mathbb{R}^2$ is a linear transformation and is characterized by the formula $$\lim_{h \to 0} \frac{f(x+h) - f(x) - Df(x)(h)}{|h|} = 0 $$ Fix $|h|=r$ for very small $r$ and you will see from this that the ratio $L_f(x,r)/ l_f(x,r)$ is very close to $1$.</p> <p>Regarding 2), your question is too vast. Higher dimensional quasiconformal theory is different than the 2-dimensional theory.</p> <p>To answer 3), remember that conformal maps are maps that take tiny round circles to shapes that are very close to round circles. Quasiconformal maps take tiny round circles to shapes that are very close to ellipses of uniformly bounded eccentricity. To answer the second part of 3), probably there is a simple formula for a map which takes a nested sequence of circles contracting down to a point in the domain to a nested sequence of ellipses of higher and higher eccentricity.</p>
3,531,971
<p>Let <span class="math-container">$T$</span> be a linear operator. Show that if <span class="math-container">$T$</span> is bounded, then <span class="math-container">$\ker(T)$</span> is closed.</p> <p><b>My attempt:</b></p> <p>Let <span class="math-container">$\{x_n\}\subset \ker(T)$</span> be a convergent sequence.</p> <p>As <span class="math-container">$T$</span> is bounded, there exists <span class="math-container">$M&gt;0$</span> such that <span class="math-container">$\|T(x_n)\|_Y\leq M\|x_n\|$</span>.</p> <p>Note that <span class="math-container">$T(x_n)=0$</span> for all <span class="math-container">$n\in \mathbb{N}$</span>,</p> <p>so <span class="math-container">$\lim_{n\rightarrow\infty}T(x_n)=0$</span>.</p> <p>Here I'm stuck. Can someone help me?</p>
Tsemo Aristide
280,301
<p>Suppose first that <span class="math-container">$\lim_n x_n=0$</span>. We have <span class="math-container">$\|T(x_n)\|\leq M\|x_n\|$</span>, and since <span class="math-container">$\lim_n M\|x_n\|=0$</span>, we deduce that <span class="math-container">$\lim_n\|T(x_n)\|=0$</span>.</p> <p>In general, let <span class="math-container">$x=\lim_n x_n$</span> with <span class="math-container">$T(x_n)=0$</span> for all <span class="math-container">$n$</span>. Then <span class="math-container">$\lim_n(x_n-x)=0$</span> implies <span class="math-container">$\lim_n T(x_n-x)=0$</span>; but <span class="math-container">$T(x_n-x)=T(x_n)-T(x)=-T(x)$</span>, so <span class="math-container">$T(x)=0$</span>, i.e. <span class="math-container">$x\in\ker(T)$</span>.</p>
1,397,776
<p>Suppose $X_1,\ldots,X_n$ are iid r.v.'s, each with pdf $f_{\theta}(x)=\frac{1}{\theta}I\{\theta&lt;x&lt;2\theta\}$. I find the minimal sufficient statistics $(X_{(1)},X_{(n)})$. I am trying to prove it is complete. Can someone give me hint? Also are there any complete sufficient statistics in this model?</p>
Michael Hardy
11,667
<p>We have $$\operatorname{E} (X_{(1)}) = \theta + \dfrac \theta {n+1} = \dfrac{n+2}{n+1} \theta$$ and $$\operatorname{E}(X_{(n)}) = 2\theta - \dfrac{\theta}{n+1} = \dfrac {2n+1} {n+1} \theta,$$ so $$ \operatorname{E} \left( \frac{n+1}{n+2} X_{(1)} - \frac{n+1}{2n+1} X_{(n)} \right) = 0 $$ regardless of the value of $\theta&gt;0$.</p> <p>Therefore, the statistic $(X_{(1)}, X_{(n)})$ is <b>not</b> complete.</p>
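<p>A quick Monte Carlo check of these expectations (sketch only; the choices $\theta=1$, $n=5$ and the trial count are arbitrary): the sample means of $X_{(1)}$ and $X_{(n)}$ should land near $\frac{n+2}{n+1}\theta=\frac76$ and $\frac{2n+1}{n+1}\theta=\frac{11}{6}$.</p>

```python
import random

random.seed(0)
theta, n, trials = 1.0, 5, 200_000

sum_min = sum_max = 0.0
for _ in range(trials):
    xs = [random.uniform(theta, 2 * theta) for _ in range(n)]
    sum_min += min(xs)
    sum_max += max(xs)

mean_min = sum_min / trials   # should be near (n+2)/(n+1) * theta = 7/6
mean_max = sum_max / trials   # should be near (2n+1)/(n+1) * theta = 11/6
print(mean_min, mean_max)
```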
932,535
<p>Let's denote the set of all singleton subsets of $X$(i.e. of all subsets consisting of one element) by $A$. Describe $\sigma(A)$ in the following two cases:</p> <p>i) $X$ is countable</p> <p>ii) $X$ is uncountable</p> <p>I am new to this topic, so could you please help me understand the thinking/the concept behind this problem..</p>
Davide Giraudo
9,849
<p>First of all, a remark: a countable set can be written as a countable union of singletons, and $\sigma$-algebra are stable under countable unions, hence in each case, $\sigma(\mathcal A)$ contains each countable subset of $X$.</p> <p>In the first case, we are done. </p> <p>In the second one, we also have to consider complements of such sets. It remains to deal with the following question: are there other ones?</p>
932,535
<p>Let's denote the set of all singleton subsets of $X$(i.e. of all subsets consisting of one element) by $A$. Describe $\sigma(A)$ in the following two cases:</p> <p>i) $X$ is countable</p> <p>ii) $X$ is uncountable</p> <p>I am new to this topic, so could you please help me understand the thinking/the concept behind this problem..</p>
Josh Keneda
45,256
<p>We want to find the smallest $\sigma$-algebra that contains $A$. In the first case, for any subset $Y$ of $X$, we can express $Y$ as a countable union of singletons in $X$, so $Y \in \sigma(A)$. But $Y$ was arbitrary, so every subset of $X$ is in $\sigma(A)$.</p> <p>The second case is a bit tougher. We know that $\sigma(A)$ will contain all of the countable subsets of $X$, but it will also contain their complements, which aren't going to be countable. Let's call a set co-countable if its complement is countable.</p> <p>Try showing that the collection of countable subsets and co-countable subsets of $X$ is a $\sigma$-algebra. What's its relationship with $\sigma(A)$?</p>
18,659
<p>I'm looking for a fast algorithm for generating all the partitions of an integer up to a certain maximum length; ideally, I don't want to have to generate <em>all</em> of them and then discard the ones that are too long, as this will take around 5 times longer in my case.</p> <p>Specifically, given <span class="math-container">$L = N(N+1)$</span>, I need to generate all the partitions of <span class="math-container">$L$</span> that have at most <span class="math-container">$N$</span> parts. I can't seem to find any algorithms that'll do this directly; all I've found that seems relevant is <a href="https://doi.org/10.1007/BF02241987" rel="nofollow noreferrer">this</a> paper, which I unfortunately can't seem to access via my institution's subscription. It <a href="https://web.archive.org/web/20141013222856/http://www.site.uottawa.ca:80/%7Eivan/F49-int-part.pdf" rel="nofollow noreferrer">apparently</a><sup>1</sup> documents an algorithm that generates the partitions of each individual length, which could presumably be easily adapted to my needs.</p> <p>Does anyone know of any such algorithms?</p> <p><sup>1</sup><em>Zoghbi, Antoine; Stojmenović, Ivan</em>, <a href="https://dx.doi.org/10.1080/00207169808804755" rel="nofollow noreferrer"><strong>Fast algorithms for generating integer partitions</strong></a>, Int. J. Comput. Math. 70, No. 2, 319-332 (1998). <a href="https://zbmath.org/?q=an:0918.68040" rel="nofollow noreferrer">ZBL0918.68040</a>, <a href="https://mathscinet.ams.org/mathscinet-getitem?mr=1712501" rel="nofollow noreferrer">MR1712501</a>. <a href="https://web.archive.org/web/20141013222856/https://www.site.uottawa.ca/%7Eivan/F49-int-part.pdf" rel="nofollow noreferrer">Wayback Machine</a></p>
Anthony Labarre
689
<p>If you are only interested in using an actual implementation, you could go for the <code>integer_partitions(n[, length])</code> in <a href="http://maxima.sourceforge.net/" rel="nofollow">Maxima</a>. More details can be found <a href="http://maxima.sourceforge.net/docs/manual/en/maxima_38.html" rel="nofollow">here</a>.</p>
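<p>If a reference implementation (rather than a reference) helps: here is a simple recursive generator for the partitions of $L$ into at most $N$ parts. It is only an illustrative sketch, <em>not</em> the fast Zoghbi–Stojmenović algorithm from the cited paper, but it does avoid generating the too-long partitions in the first place.</p>

```python
def partitions(n, k, largest=None):
    """Yield the partitions of n into at most k parts, parts nonincreasing."""
    if largest is None:
        largest = n
    if n == 0:
        yield ()
        return
    if k == 0:
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, k - 1, first):
            yield (first,) + rest

# e.g. L = N(N+1) with N = 2: partitions of 6 into at most 2 parts
print(list(partitions(6, 2)))   # [(6,), (5, 1), (4, 2), (3, 3)]
```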
1,567,152
<blockquote> <p>Theorem: $X$ is a finite Hausdorff space. Show that the topology is discrete.</p> </blockquote> <p>My attempt: $X$ is Hausdorff, and $T_2 \implies T_1$, so for any $x \in X$ the set $\{x\}$ is closed. Thus $X \setminus \{x\}$ is open. Now, for each $y\in X \setminus \{x\}$, using the Hausdorff property for $y$ and $x$, we get that $\{x\}$ is open. Am I right up to here? And how do I proceed further?</p>
Potato
18,240
<p>Yes, you're completely right (although you might want to write out your argument for $\{x\}$ being open in a little more detail). You've shown that every point is open, so it follows from the axioms for a topology that every set, as a union of points, is open. This is precisely the definition of the discrete topology.</p> <p>Finiteness is used to conclude that every point is open. (It is certainly not true that every infinite Hausdorff space is discrete. Think of $\mathbb R$!) If you write out the argument more carefully, you'll see where finiteness is used.</p>
2,796,694
<p>So for my latest physics homework question, I had to derive an equation for the terminal velocity of a ball falling in some gravitational field assuming that the air resistance force was equal to some constant <em>c</em> multiplied by $v^2.$ <br> So first I started with the differential equation: <br> $m\frac{dv}{dt}=-mg-cv^2$ <br> Rearranging to get: <br> $\frac{dv}{dt}=-\left(g+\frac{cv^2}{m}\right)$ <br> From here I tried solving it and ended up with: <br> $\frac{\sqrt{m}}{\sqrt{c}\sqrt{g}}\arctan \left(\frac{\sqrt{c}v}{\sqrt{g}\sqrt{m}}\right)+C=-t$ <br> I rearranged this to get: $v\left(t\right)=\left(\frac{\sqrt{g}\sqrt{m}\tan \left(\frac{\left(-C\sqrt{c}\sqrt{g}-\sqrt{c}\sqrt{g}t\right)}{\sqrt{m}}\right)}{\sqrt{c}}\right)$ <br> In order to calculate the terminal velocity I took the limit as t approaches infinity:<br> $\lim _{t\to \infty }\left(\frac{\sqrt{g}\sqrt{m}\tan \:\left(\frac{\left(-C\sqrt{c}\sqrt{g}-\sqrt{c}\sqrt{g}t\right)}{\sqrt{m}}\right)}{\sqrt{c}}\right)$ <br> This reduces to: $\frac{\sqrt{g}\sqrt{m}\tan \left(\infty \right)}{\sqrt{c}}$ <br> The problem with this is that $\tan(\infty)$ is undefined. <br> Where did I go wrong? Could someone please help me solve this equation properly? <br> Cheers, Gabriel.</p>
Narasimham
95,860
<p>Taking the <em>proper sign</em> of the air resistance (it opposes the motion, hence acts against gravity during the fall), we have terminal velocity when the acceleration vanishes:</p> <p>$$ m\dfrac{dv}{dt}=mg-cv^2 = 0 \rightarrow v= v_{terminal}=\sqrt{\dfrac{mg}{c}}. $$</p> <p>This value gets included in the coefficient of the <em>tanh function</em> for the velocity, as an asymptotic value.</p>
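<p>A numerical sanity check of the terminal velocity (sketch only; the values $m=1$, $g=9.8$, $c=0.5$ are arbitrary): integrating $m\,dv/dt = mg - cv^2$ forward from rest drives $v$ to $\sqrt{mg/c}$, the asymptote of the tanh solution.</p>

```python
import math

m, g, c = 1.0, 9.8, 0.5
v_terminal = math.sqrt(m * g / c)

# crude forward-Euler integration of m dv/dt = mg - c v^2, starting from rest
v, dt = 0.0, 1e-3
for _ in range(10_000):          # integrate out to t = 10 s
    v += dt * (g - (c / m) * v * v)
print(v, v_terminal)             # v has essentially reached sqrt(mg/c)
```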
3,235,300
<p>I tried this: whenever <span class="math-container">$x &gt; y$</span>, I claimed that <span class="math-container">$p(x) - p(y) = (5/13)^x (1-(13/5)^{(x-y)}) + (12/13)^x (1- (13/12)^{(x-y)}) &gt; 0 $</span>. But I don't understand why the answer is no.</p>
Martin R
42,969
<blockquote> <p>But here I don't understand why the answer is no.</p> </blockquote> <p>You started with <span class="math-container">$$p(x) - p(y) =\left( \frac{5}{13}\right)^x \left( 1-\left( \frac{13}{5}\right)^{x-y}\right) + \left( \frac{12}{13}\right)^x\left(1- \left( \frac{13}{12}\right)^{x-y}\right) $$</span> which is correct so far. But that expression is <em>negative</em> for <span class="math-container">$x &gt; y$</span>, not positive, because <span class="math-container">$$ \left( \frac{13}{5}\right)^{x-y} &gt; 1 \implies 1-\left( \frac{13}{5}\right)^{x-y} &lt; 0 \\ \left( \frac{13}{12}\right)^{x-y} &gt; 1 \implies 1-\left( \frac{13}{12}\right)^{x-y} &lt; 0 $$</span></p> <p>Therefore <span class="math-container">$p$</span> is <em>decreasing.</em></p>
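<p>A quick numerical check (illustrative only) that $p(x)=(5/13)^x+(12/13)^x$ is indeed decreasing:</p>

```python
p = lambda x: (5 / 13) ** x + (12 / 13) ** x

xs = [k * 0.1 for k in range(0, 101)]                  # grid on [0, 10]
decreasing = all(p(b) < p(a) for a, b in zip(xs, xs[1:]))
print(decreasing)   # True: p decreases along the grid
```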
4,332,812
<p>I came across this series.</p> <p><span class="math-container">$$\sum_{n=1}^\infty \frac{n!}{n^n}x^n$$</span></p> <p>I was able to calculate its radius of convergence. If my calculations are OK, it is the number <span class="math-container">$e$</span>. Is that correct?</p> <p>Then I started wondering if the series is convergent or divergent for <span class="math-container">$x=\pm e$</span>.</p> <p>But I think I don't have any means to determine that. Is there any known (standard undergraduate calculus) theorem or theory which I can use for determining that? And also, if the series is convergent for <span class="math-container">$x=\pm e$</span>, how do I calculate its sum?</p> <p>If there's no general approach here, is there any trick which can be applied in this particular case?</p>
Eric Towers
123,905
<p>Start with <span class="math-container">$\left(1 + \frac{1}{n} \right)^n$</span> is a strictly increasing sequence with limit <span class="math-container">$\mathrm{e}$</span>. (In some undergraduate calculus courses, this limit is the definition of <span class="math-container">$\mathrm{e}$</span>.) If we don't have strict monotonicity given, see the answers at <a href="https://math.stackexchange.com/questions/167843/show-that-left1-dfrac1n-rightn-is-monotonically-increasing/167869">Show that $\left(1+\dfrac{1}{n}\right)^n$ is monotonically increasing</a> .</p> <p>Now let's look at <span class="math-container">$x = \mathrm{e}$</span> and see if the sum converges. It doesn't and we will show that by showing that the terms do not go to zero as <span class="math-container">$n \rightarrow \infty$</span>. <span class="math-container">\begin{align*} \frac{(n+1)!}{(n+1)^{n+1}}\mathrm{e}^{n+1} - \frac{n!}{n^n}\mathrm{e}^n &amp;= \frac{(n+1)\cdot n!}{(n+1)(n+1)^{n}}\mathrm{e}\,\mathrm{e}^n - \frac{n!}{n^n}\mathrm{e}^n \\ &amp;= \frac{n!}{n^n (1+1/n)^{n}}\mathrm{e}\,\mathrm{e}^n - \frac{n!}{n^n}\mathrm{e}^n \\ &amp;= \frac{n!}{n^n}\mathrm{e}^n \left( \frac{1}{(1+1/n)^{n}}\mathrm{e} - 1 \right) \end{align*}</span> Since <span class="math-container">$(1+1/n)^n$</span> is monotonically increasing to <span class="math-container">$\mathrm{e}$</span>, the fraction <span class="math-container">$\frac{\mathrm{e}}{(1+1/n)^n}$</span> is greater than <span class="math-container">$1$</span> for all <span class="math-container">$n \geq 1$</span>. So the difference we study is positive, so the sequence of terms is strictly monotonically increasing. When <span class="math-container">$n = 1$</span>, the term is <span class="math-container">$\mathrm{e}$</span> and all subsequent terms are greater than this. 
Since the terms in the series do not approach <span class="math-container">$0$</span> as <span class="math-container">$n \rightarrow \infty$</span>, the series diverges.</p> <p>Likewise, for <span class="math-container">$x = -\mathrm{e}$</span>, the terms do not go to zero, so the series diverges.</p>
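<p>A quick numerical illustration (sketch only): the terms $n!\,\mathrm{e}^n/n^n$ start at $\mathrm{e}$ and keep growing; by Stirling's formula they in fact behave like $\sqrt{2\pi n}$.</p>

```python
import math

terms = [math.factorial(n) * math.e ** n / n ** n for n in range(1, 16)]
increasing = all(b > a for a, b in zip(terms, terms[1:]))
print(terms[0], terms[-1], increasing)   # starts at e and keeps growing
```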
4,286,136
<p>I'm trying to find the general solution to <span class="math-container">$xy' = y^2+y$</span>, although I'm unsure as to whether I'm approaching this correctly.</p> <p>What I have tried:</p> <p>dividing both sides by x and substituting <span class="math-container">$u = y/x$</span> I get:</p> <p><span class="math-container">$$y' = u^2x^2+u$$</span></p> <p>Then substituting <span class="math-container">$y' = u'x + u$</span> I get the following: <span class="math-container">$$u'x+u = u^2x^2+u \implies u' = u^2x \implies \int\frac{du}{u^2}=\int x dx$$</span> Proceeding on with simplification after integration: <span class="math-container">$$\frac{1}{u}=\frac{x^2}{2}+c\implies y = \frac{2x}{x^2+c}$$</span></p> <p>However, the answer shows <span class="math-container">$y=\frac{x}{(c-x)}$</span></p>
Mathphys meister
583,618
<p>It can be done in a simpler way:</p> <p><span class="math-container">\begin{equation} \frac{dy}{y(y+1)} = \frac{dx}{x} \implies \mathrm{ln}\Big|\frac{y}{y+1}\Big| = \mathrm{ln}|x| + \mathrm{ln}(\tilde{c}), \\ \end{equation}</span></p> <p>for some constant <span class="math-container">$\tilde{c}&gt;0$</span>. Hence it follows that:</p> <p><span class="math-container">\begin{equation} y = \frac{\tilde{c}x}{1-\tilde{c}x} = \frac{x}{(1/\tilde{c})-x}. \\ \end{equation}</span></p> <p>Redefine <span class="math-container">$1/\tilde{c}$</span> as <span class="math-container">$c$</span> to find the desired result.</p>
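<p>A quick check that the result solves the original equation (numerical sketch; the value of $c$ is arbitrary): with $y=x/(c-x)$ one has $y'=c/(c-x)^2$, and indeed $xy'=y^2+y$.</p>

```python
c = 3.0
y  = lambda x: x / (c - x)
yp = lambda x: c / (c - x) ** 2        # derivative of x/(c - x) by the quotient rule

for x in (0.5, 1.0, 2.0, -4.0):
    assert abs(x * yp(x) - (y(x) ** 2 + y(x))) < 1e-12
print("x y' = y^2 + y holds at the sample points")
```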
403,184
<p>A (non-mathematical) friend recently asked me the following question:</p> <blockquote> <p>Does the golden ratio play any role in contemporary mathematics?</p> </blockquote> <p>I immediately replied that I never come across any mention of the golden ratio in my daily work, and would guess that this is the same for almost every other mathematician working today.</p> <p>I then began to wonder if I were totally correct in this statement . . . which has led me to ask this question.</p> <p>My apologies is this question is unsuitable for Mathoverflow. If it is, then please feel free to close it.</p>
Oscar Lanzi
86,625
<p>Here we have a monotonically increasing sequence that predicts itself:</p> <p><span class="math-container">$1,2,2,3,\color{brown}{3},4,4,4,\color{blue}{5,5,5},6,6,6,6,...$</span></p> <p>The <span class="math-container">$n$</span>th term <span class="math-container">$a(n)$</span> predicts the number of times that term appears in the sequence. For instance, the fifth term is <span class="math-container">$3$</span> (brown) and thus <span class="math-container">$5$</span> will appear three times (blue).*</p> <p>The prediction is implemented through the recursion relation</p> <p><span class="math-container">$a[n+a(a(n))]=a(n)+1.$</span></p> <p>For example, <span class="math-container">$a(5)$</span> is the second occurrence of <span class="math-container">$3$</span>, and when we march <span class="math-container">$a(a(5))=a(3)=2$</span> steps forward from there we land at <span class="math-container">$a(7)$</span>, which is the second occurrence of <span class="math-container">$3+1=4$</span>.</p> <p>Let us plug this into an assumed power-law asymptotic relation:</p> <p><span class="math-container">$a(n)\approx\alpha n^\beta.$</span></p> <p>Thus</p> <p><span class="math-container">$\alpha[n+\alpha(\alpha n^\beta)^\beta]^\beta\approx\alpha n^\beta+1$</span></p> <p><span class="math-container">$\alpha[n+\alpha^{1+\beta}n^{\beta^2}]^\beta\approx\alpha n^\beta+1$</span></p> <p>Take two terms of the binomial power expansion of the left side:</p> <p><span class="math-container">$\alpha n^\beta + \alpha^{2+\beta}\beta n^{\beta^2+\beta-1}\approx\alpha n^\beta+1$</span></p> <p>Canceling the identical leading terms leads to</p> <p><span class="math-container">$\alpha^{2+\beta}\beta n^{\beta^2+\beta-1}\approx1$</span></p> <p>from which <span class="math-container">$\beta^2+\beta-1=0$</span> and thus <span class="math-container">$\color{blue}{\beta=\phi-1=1/\phi}$</span>. 
The corresponding value of <span class="math-container">$\alpha$</span> is then <span class="math-container">$\beta^{-1/(2+\beta)}=\phi^{2-\phi}=(\phi^{\phi-1})^{\phi-1}$</span>. We may express the result as</p> <p><span class="math-container">$a_n\approx(\phi^{\phi-1} n)^{\phi-1}.$</span></p> <p>If we graph <span class="math-container">$a_n$</span> versus <span class="math-container">$n$</span> on a log-log plot and draw the straight line represented by <span class="math-container">$\alpha n^\beta$</span> with the values computed above, we find that the line precisely &quot;threads the needle&quot; through the sequence values. Try it!</p> <hr /> <p>*Yes, that is a shout-out to <a href="https://en.wikipedia.org/wiki/3Blue1Brown" rel="nofollow noreferrer">one of our favorite video channels</a>.</p>
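<p>The sequence is easy to generate from its self-describing rule (the value $v$ occurs $a(v)$ times), so the &quot;try it&quot; can be done in a few lines (Python sketch; instead of a log-log plot it just prints the ratio $a_n/n^{\phi-1}$, which the power-law analysis predicts should settle toward a constant):</p>

```python
import math

def golomb(n_terms):
    # build the sequence from its self-description: the value v occurs a(v) times
    g = [1, 2, 2]
    v = 3
    while len(g) < n_terms:
        g.extend([v] * g[v - 1])   # g[v - 1] is a(v) in this 0-based list
        v += 1
    return g[:n_terms]

phi = (1 + math.sqrt(5)) / 2
g = golomb(100)
print(g[:15])                      # [1, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 6]
for n in (10, 50, 100):
    print(n, g[n - 1] / n ** (phi - 1))
```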
38,206
<p>Simple question (I seem to have asked a few like this...)</p> <p>What is $\mbox{Hom}(\mathbb{Z}/2,\mathbb{Z}/n)$? (for $n \ne 2$)</p>
Shahab
10,575
<p>Try to find the necessary condition for such a homomorphism first. </p> <p>If f is a homomorphism from Z/2 to Z/n then clearly f maps the zero element of Z/2 to the zero element of Z/n. Can it map 1 to an arbitrary element of Z/n? No, the mapping should be such that f is a homomorphism. Assuming it is, and $f(1) = x$, clearly $0 = f(0) = f(1+1) = f(1) + f(1) = x + x$ should hold, following which $2x = 0$ in Z/n. So this is a necessary condition. </p> <p>It is easy to see that this condition is sufficient as well. So the homomorphisms may be counted by counting all such x. This amounts to counting all solutions to the congruence $2x = 0$ in Z/n. </p> <p>Clearly $\gcd(2,n)\mid 0$, so solutions always exist, and there will be $\gcd(2,n)$ of them (which may be either 1 or 2, depending upon whether n is odd or even, respectively). So Hom(Z/2,Z/n) will have either 1 or 2 elements. In either case, it will be cyclic because all groups of order 1 and 2 are. </p>
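<p>A brute-force confirmation of this count (illustrative Python sketch): a homomorphism is determined by $x=f(1)$ subject to $2x=0$ in Z/n, and the number of such $x$ equals $\gcd(2,n)$.</p>

```python
from math import gcd

def hom_images(n):
    # possible values of f(1) in Z/n for a homomorphism f : Z/2 -> Z/n
    return [x for x in range(n) if (2 * x) % n == 0]

for n in (3, 4, 5, 6, 7, 8):
    print(n, hom_images(n))
ok = all(len(hom_images(n)) == gcd(2, n) for n in range(1, 50))
print(ok)
```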
1,336,424
<p>Find the minimum distance between point $M(0,-2)$ and points $(x,y)$ such that: $y=\frac{16}{\sqrt{3}\,x^{3}}-2$ for $x&gt;0$ .</p> <p>I used the formula for distance between two points in a plane to get: $$d=\sqrt{x^{2}+\frac{256}{3x^{6}}}$$ And this is where I cannot come up with how to proceed. I tried calculus but the first derivative of $d(x)$ is fairly ugly expression... A few techniques on how to handle problems on maxima and minima with(out) using calculus would be really useful.</p>
Emilio Novati
187,568
<p>Hint:</p> <p>Note that the value of $x$ that minimizes $d$ also minimizes $d^2=x^2+\dfrac{2^8}{3x^6}=f$, and the derivative is $f'= 2x-\dfrac{2^9}{x^7}$.</p>
1,336,424
<p>Find the minimum distance between point $M(0,-2)$ and points $(x,y)$ such that: $y=\frac{16}{\sqrt{3}\,x^{3}}-2$ for $x&gt;0$ .</p> <p>I used the formula for distance between two points in a plane to get: $$d=\sqrt{x^{2}+\frac{256}{3x^{6}}}$$ And this is where I cannot come up with how to proceed. I tried calculus but the first derivative of $d(x)$ is fairly ugly expression... A few techniques on how to handle problems on maxima and minima with(out) using calculus would be really useful.</p>
André Nicolas
6,312
<p>We give a non-calculus approach. It is in my opinion a fair bit harder than the calculus way. </p> <p>We want to minimize $x^2+\frac{256}{3x^6}$, or equivalently $$\frac{x^2}{3}+\frac{x^2}{3}+\frac{x^2}{3}+\frac{256}{3x^6}.$$ By the arithmetic mean geometric mean inequality (AM/GM) we have $$\frac{1}{4}\left( \frac{x^2}{3}+\frac{x^2}{3}+\frac{x^2}{3}+\frac{256}{3x^6} \right)\ge \sqrt[4]{\frac{x^2}{3}\cdot \frac{x^2}{3}\cdot\frac{x^2}{3}\cdot \frac{256}{3x^6}}\tag{1}$$ with equality when all the terms on the left are equal. The right-hand side of (1) is $\frac{4}{3}$. Equality is achieved when $\frac{x^2}{3}=\frac{256}{3x^6}$, that is, when $x=\pm 2$.</p> <p>We conclude that the square of the distance has minimum value $\frac{16}{3}$.</p>
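<p>A quick numerical confirmation of both approaches (sketch only): minimizing $f(x)=x^2+\frac{256}{3x^6}$ over a grid of positive $x$ recovers the minimum value $\frac{16}{3}$ at $x=2$, so the minimum distance is $\sqrt{16/3}$.</p>

```python
f = lambda x: x ** 2 + 256 / (3 * x ** 6)

xs = [k / 1000 for k in range(500, 5001)]   # grid on [0.5, 5]
x_best = min(xs, key=f)
print(x_best, f(x_best))                    # minimum 16/3 is attained at x = 2
```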
1,114,767
<p>I would like a reference for the following result (you can assume more regularity and replace $C^2(\bar\Omega)$ with $C^2(\mathbb R^n)$ if needed):</p> <blockquote> <p>Let $\Omega\subset\mathbb R^n$ be a bounded domain with a $C^2$ boundary. Let $f\in C^2(\bar\Omega)$ and $\gamma\in C^2(\bar\Omega)$ with $\gamma&gt;0$. Then there is a unique strong solution $u\in C^2(\bar\Omega)$ to the PDE $\operatorname{div}(\gamma\nabla u)=0$ with the boundary condition $u=f$ at $\partial\Omega$. (If possible, I would like the reference to tell also that $u$ is the unique minimizer of $\int_\Omega\gamma|\nabla u|^2$ in $C^2(\bar\Omega)$ with the boundary condition.)</p> </blockquote> <p>I am perfectly aware that a proof would make heavy use of modern PDE techniques with Sobolev spaces and whatnot. I would like to be able to offer PDE related topics for bachelor's theses. Proving the above statement or working with Sobolev functions would be too much and I would like to focus the theses on other issues. But for things to make any sense, I do want to give a theorem that states that solutions (in the strong sense) exist uniquely; the theorem will unfortunately be a black box for the students.</p>
spatially
124,358
<p>I think you need to at least combine an existence &amp; uniqueness result with a regularity result.</p> <p>For existence &amp; uniqueness of a solution $u\in H_0^1(\Omega)$, I would suggest the first existence theorem in Evans' book, Chapter 6.2 (look for Lax-Milgram). But I think you may need to assume in addition that $0&lt;\alpha\leq \gamma$, that is, that $\gamma$ has a positive lower bound. </p> <p>Now for the regularity: Chapter 6.3, specifically the theorem about global regularity, should do the job for you. However, you are not going to get $u\in C^2$ by assuming $f\in C^2$; this is wrong. You need to assume $f\in C^\infty(\bar\Omega)$ to get $u\in C^\infty(\bar\Omega)$.</p>
2,255,192
<p>I'm going through the exercises in Georgi E Shilov's Linear Algebra book and am on chapter 1 problem 2: "Write down all the terms appearing in the determinant of order four which have a minus sign and contain $ a_{23}$"</p> <p>the answers I have arrived at are: </p> <p>$a_{11}$$a_{23}$$a_{32}$$a_{44}$</p> <p>$a_{12}$$a_{23}$$a_{34}$$a_{41}$</p> <p>$a_{14}$$a_{23}$$a_{31}$$a_{42}$</p> <p>The answers listed in the back of the book are the same except for this one below:</p> <p>$a_{44}$$a_{23}$$a_{31}$$a_{42}$</p> <p>Is that a typo with $a_{44}$?</p>
Onur
414,808
<p>$$\begin{align} S &amp;=\sum_{n=1}^\infty a\cdot r^{n-1}\\ \\ &amp; = \frac{1}{4}+\frac{3}{16}+\frac{9}{64}+\dots \\ \\ &amp;= \frac{1}{4}\left(\frac{3}{4}\right)^{0}+\frac{1}{4}\left(\frac{3}{4}\right)^{1}+\frac{1}{4}\left(\frac{3}{4}\right)^{2}+\dots \\ \\ \end{align}$$</p> <p>$$ a=\frac{1}{4},\quad r=\frac{3}{4}\implies S=\frac{a}{1-r}=\frac{1/4}{1/4}= 1 $$</p>
3,494,470
<p>Will all units in <span class="math-container">$\mathbb{Z}_{72}$</span> also be units (modulo <span class="math-container">$8$</span> and <span class="math-container">$9$</span>) of <span class="math-container">$\mathbb{Z}_8$</span> and <span class="math-container">$\mathbb{Z}_9$</span>? </p> <p>I think yes, because <span class="math-container">$\gcd(x,72)=1\implies\gcd(x,8)=\gcd(x,9)=1$</span>, right? Any counterexamples? Thanks beforehand.</p>
red_trumpet
312,406
<p>A bit more abstractly: If you have a homomorphism <span class="math-container">$\varphi: R \to S$</span> of unital commutative rings, then for any unit <span class="math-container">$r \in R$</span>, its image <span class="math-container">$\varphi(r)\in S$</span> is a unit, because <span class="math-container">$\varphi(r) \varphi(r^{-1}) = \varphi(r r^{-1}) = \varphi(1) = 1$</span>.</p> <p>In your situation, you have surjections <span class="math-container">$\mathbb{Z} \to \mathbb{Z}_{72} \to \mathbb{Z}_8$</span>, so any number that is a unit mod 72 will also be a unit mod 8.</p>
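<p>A brute-force check (illustrative sketch): every unit modulo $72$ reduces to a unit modulo $8$ and modulo $9$.</p>

```python
from math import gcd

units_72 = [x for x in range(72) if gcd(x, 72) == 1]
ok = all(gcd(x % 8, 8) == 1 and gcd(x % 9, 9) == 1 for x in units_72)
print(len(units_72), ok)   # 24 units (phi(72) = 24), and all of them reduce to units
```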
148,037
<p>For example, I have some data:</p> <pre><code>Clear[data]; data[n_] := Join[RandomInteger[{1, 10}, {n, 2}], RandomReal[1., {n, 1}], 2]; </code></pre> <p>Then <code>data[3]</code> gives</p> <pre><code>{{4, 8, 0.264842}, {9, 5, 0.539251}, {3, 1, 0.884612}} </code></pre> <p>In each sublist, the first two values are the matrix indices; the last is a <strong>matrix element, and elements with the same matrix index have to be added together</strong>.</p> <p>I want to transform the data into a matrix. Usually I do it like this:</p> <pre><code>Clear[toSparse] toSparse[data_] := SparseArray@ Normal@Merge[Thread[data[[;; , 1 ;; 2]] -&gt; data[[;; , -1]]], Total] </code></pre> <p>I care about performance:</p> <pre><code>In[171]:= toSparse[data[1000]]; // AbsoluteTiming Out[171]= {0.00836793, Null} In[172]:= toSparse[data[10000]]; // AbsoluteTiming Out[172]= {0.0644464, Null} In[173]:= toSparse[data[100000]]; // AbsoluteTiming Out[173]= {1.35507, Null} In[174]:= toSparse[data[1000000]]; // AbsoluteTiming Out[174]= {200.862, Null} </code></pre> <p>Is there any faster way to do this?</p>
Carl Woll
45,431
<p>You can change the <a href="http://reference.wolfram.com/language/ref/SparseArray" rel="noreferrer"><code>SparseArray</code></a> system options to total repeated entries instead of taking the first. Here is a function that does this:</p> <pre><code>carl[data_] := Internal`WithLocalSettings[ old=SystemOptions["SparseArrayOptions" -&gt; "TreatRepeatedEntries"]; SetSystemOptions["SparseArrayOptions" -&gt; "TreatRepeatedEntries" -&gt; Total], SparseArray @ Thread[data[[All,;;2]] -&gt; data[[All,3]]], SetSystemOptions[old] ] </code></pre> <p>Compare this with @edmund's solution:</p> <pre><code>edmund[data_] := SparseArray @ Normal @ GroupBy[data, Most-&gt;Last, Total] </code></pre> <p>For example:</p> <pre><code>data[n_] := Join[RandomInteger[{1,10}, {n,2}], RandomReal[1., {n,1}], 2] d6 = data[10^6]; r1 = carl[d6]; //AbsoluteTiming r2 = edmund[d6]; //AbsoluteTiming MinMax[r1-r2] </code></pre> <blockquote> <p>{0.852608, Null}</p> <p>{1.26883, Null}</p> <p>{-1.00044*10^-11, 8.18545*10^-12}</p> </blockquote> <p>The difference is due to the order in which the repeated entries are totaled in the two methods.</p>
148,037
<p>For example, I have some data:</p> <pre><code>Clear[data]; data[n_] := Join[RandomInteger[{1, 10}, {n, 2}], RandomReal[1., {n, 1}], 2]; </code></pre> <p>Then <code>data[3]</code> gives</p> <pre><code>{{4, 8, 0.264842}, {9, 5, 0.539251}, {3, 1, 0.884612}} </code></pre> <p>In each sublist, the first two values are the matrix indices; the last is a <strong>matrix element, and elements with the same matrix index have to be added together</strong>.</p> <p>I want to transform the data into a matrix. Usually I do it like this:</p> <pre><code>Clear[toSparse] toSparse[data_] := SparseArray@ Normal@Merge[Thread[data[[;; , 1 ;; 2]] -&gt; data[[;; , -1]]], Total] </code></pre> <p>I care about performance:</p> <pre><code>In[171]:= toSparse[data[1000]]; // AbsoluteTiming Out[171]= {0.00836793, Null} In[172]:= toSparse[data[10000]]; // AbsoluteTiming Out[172]= {0.0644464, Null} In[173]:= toSparse[data[100000]]; // AbsoluteTiming Out[173]= {1.35507, Null} In[174]:= toSparse[data[1000000]]; // AbsoluteTiming Out[174]= {200.862, Null} </code></pre> <p>Is there any faster way to do this?</p>
Marius Ladegård Meyer
22,099
<p>I can't see any sparsity at all in this problem, given that we are adding values to <code>n</code> random elements of a 10x10 matrix, and <code>n</code> is up to $10^6$. So, given that we keep <code>data</code> as-is, the algorithm to fill the matrix is so straight-forward that it's a good candidate for compiling. I propose</p> <pre><code>makeMatrix = Compile[{{inds, _Integer, 2}, {vals, _Real, 1}}, Block[{mat = Table[0., {10}, {10}]}, Do[ mat[[inds[[i, 1]], inds[[i, 2]]]] += vals[[i]] , {i, Length[vals]} ]; mat ] , CompilationTarget -&gt; "C" , RuntimeOptions -&gt; "Speed" ] </code></pre> <p>and a wrapper</p> <pre><code>marius[data_] := makeMatrix[data[[All, 1 ;; 2]], data[[All, 3]]] </code></pre> <p>Then, on my machine,</p> <pre><code>d6 = data[10^6]; r1 = carl[d6]; //AbsoluteTiming r2 = edmund[d6]; //AbsoluteTiming r3 = marius[d6]; //AbsoluteTiming Max[Abs[r3 - r1]] </code></pre> <p>gives</p> <blockquote> <p><code>{0.967007, Null}</code></p> <p><code>{1.571291, Null}</code></p> <p><code>{0.305536, Null}</code></p> <p><code>3.63798*10^-11</code></p> </blockquote>
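<p>For readers outside Mathematica, the idea shared by all these answers (accumulate repeated index pairs by summation) can be sketched in plain Python with a dictionary; this is only an illustration of the accumulation step, not a performance comparison:</p>

```python
from collections import defaultdict

def to_dense(triples, shape=(10, 10)):
    # triples: iterable of (i, j, value) with 1-based indices;
    # repeated (i, j) pairs are summed, as with TreatRepeatedEntries -> Total
    acc = defaultdict(float)
    for i, j, v in triples:
        acc[(i - 1, j - 1)] += v
    return [[acc.get((r, c), 0.0) for c in range(shape[1])]
            for r in range(shape[0])]

m = to_dense([(1, 1, 0.5), (1, 1, 0.25), (2, 3, 1.0)])
print(m[0][0], m[1][2])   # 0.75 1.0
```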
2,394,815
<p>Let $A$ be a (not necessarily finitely generated) abelian group where all elements have order 1, 2, or 4. Does it follow that $A$ can be written as a direct sum $(\bigoplus _\alpha \mathbb Z/4) \oplus (\bigoplus_\beta \mathbb Z/2)$?</p>
hmakholm left over Monica
14,366
<p>Call a set $S\subseteq A$ "good" if</p> <ul> <li>$S$ does not contain $a^2$ for any $a\in A$.</li> <li>Whenever $s_1^{m_1}s_2^{m_2}\cdots s_n^{m_n}=e$ where $s_1,s_2,\ldots,s_n$ are <em>different</em> elements of $S$, we have $s_1^{m_1}=e$.</li> </ul> <p>Apply Zorn's lemma to the family of good subsets of $A$ (ordered by inclusion).</p> <p>Show that a maximal good set corresponds to a direct sum as in the question.</p>
3,143,649
<p>I am reading the book <em>Random Perturbations of Dynamical Systems</em> by Freidlin and Wentzell (2nd edition). On page 20, they define a Markov process as follows:</p> <blockquote> <p>Let <span class="math-container">$(\Omega ,\mathcal F,\mathbb P)$</span> be a probability space and <span class="math-container">$(X,\mathcal B)$</span> the state space. Let <span class="math-container">$(\mathcal F_t)$</span> be a filtration. Let <span class="math-container">$(\mathbb P_x)_{x\in X}$</span> be a family of probability measures. Define the function <span class="math-container">$p$</span> as <span class="math-container">$$p(t,x,\Gamma )=\mathbb P_x\{X_t\in \Gamma \},\quad \Gamma \in \mathcal B, t\in [0,T],x\in X.$$</span> Then <span class="math-container">$X=(X_t)_{t\leq T}$</span> is a Markov process in <span class="math-container">$X$</span> if:<br> a) <span class="math-container">$X$</span> is adapted to the filtration.<br> b) <span class="math-container">$x\mapsto p(t,x,\Gamma )$</span> is measurable wrt <span class="math-container">$\mathcal B$</span>.<br> c) <span class="math-container">$p(0,x, X\setminus \{x\})=0$</span>.<br> d) <span class="math-container">$\mathbb P_x\{X_u\in Γ\mid \mathcal F_t\}=p(u-t,X_t,\Gamma )$</span> for all <span class="math-container">$t,u\in [0,T]$</span>, <span class="math-container">$t\leq u$</span>, <span class="math-container">$x\in X$</span> and <span class="math-container">$\Gamma \in \mathcal B$</span>.</p> </blockquote> <p>I am not sure how to interpret c) and d). Would these be correct? </p> <p><strong>Q1)</strong> For c), is it <span class="math-container">$\mathbb P_x\{X_0=x\}=1$</span>? </p> <p><strong>Q2)</strong> For d), is it <span class="math-container">$$\mathbb P_{X_0=0}\{X_{t+h}\in \Gamma \mid X_t=k\}=\mathbb P_{X_0=k}\{X_h\in \Gamma \}?$$</span></p> <p>But I don't really know how to interpret it. </p>
Ѕᴀᴀᴅ
302,797
<p><span class="math-container">$\def\Γ{{\mit Γ}}$</span>For Q1: since<span class="math-container">$$ p(0, x, X \setminus \{x\}) = P_x(X_0 \in X \setminus \{x\}) = 1 - P_x(X_0 = x), $$</span> we have <span class="math-container">$p(0, x, X \setminus \{x\}) = 0 \Leftrightarrow P_x(X_0 = x) = 1$</span>. The process under <span class="math-container">$P_x$</span> can be regarded as starting from <span class="math-container">$x$</span>.</p> <p>For Q2, your identity is only a corollary of d) and not equivalent, since <span class="math-container">$\mathscr{F}_t$</span> might be larger than <span class="math-container">$σ(X_t)$</span>. To put d) in a clearer form, it is<span class="math-container">$$ P_x(X_u \in \Γ \mid \mathscr{F}_t) = P_{X_t}(X_{u - t} \in \Γ). $$</span> In other words, d) says that for any <span class="math-container">$0 \leqslant t &lt; u$</span>, if one knows the information of time <span class="math-container">$t$</span>, which corresponds to expectation conditioning on <span class="math-container">$\mathscr{F}_t$</span>, then the probability of an event in the future, i.e. <span class="math-container">$\{X_u \in \Γ\}$</span>, with the process starting from <span class="math-container">$x$</span> is the same as the probability of <span class="math-container">$\{X_{u - t} \in \Γ\}$</span> with the process starting from <span class="math-container">$X_t$</span>. This simply means that what happens before time <span class="math-container">$t$</span> does not matter to the evolution of the process as long as the information at time <span class="math-container">$t$</span>, i.e. <span class="math-container">$\mathscr{F}_t$</span>, is known.</p>
4,563,707
<p>Sequence given: 6, 66, 666, 6666. Find <span class="math-container">$S_n$</span> in terms of <span class="math-container">$n$</span>.</p> <p>The common ratio of a geometric progression can be found from <span class="math-container">$\frac{T_n}{T_{n-1}} = r$</span>, where <span class="math-container">$r$</span> is the common ratio and <span class="math-container">$n$</span> is the term number.</p> <p>When plugging in 66 as <span class="math-container">$T_n$</span> and 6 as <span class="math-container">$T_{n-1}$</span>, I got the following ratio: <span class="math-container">$ \frac {66}{6} = 11$</span>.</p> <p>However, when I plugged in 666 as <span class="math-container">$T_n$</span> and 66 as <span class="math-container">$T_{n-1}$</span>, I got: <span class="math-container">$\frac {666}{66} \approx 10.09$</span>.</p> <p>And when I plugged in 6666 and 666: <span class="math-container">$ \frac {6666}{666} \approx 10.009$</span>.</p> <p>It's clear to me that the ratio is slowly decreasing, and seems to be approaching 10. Alas, this is about as far as I have gotten.</p> <p>Looking at the answer scheme, the final answer is <span class="math-container">$ \frac {20}{27}{(10^n-1)} - \frac {2}{3}{(n)}.$</span></p> <p>The answer scheme does include a few steps, but frankly, I couldn't understand the reasoning behind them, but I guess I should post them anyways.</p> <p><span class="math-container">$$ \frac {2}{3}[9 + 99 + 999 + 9999] $$</span></p> <p><span class="math-container">$$ S_n = \frac {2}{3}\left[(10-1) + (10^2-1) + \cdots + (10^n-1)\right] $$</span></p> <p><span class="math-container">$$ = \frac {2}{3}[10^1+10^2+10^3+\cdots+10^n] + \frac {2}{3}[-1-1-1-\cdots-1] $$</span></p> <p><span class="math-container">$$ = \frac {2}{3}(10)\left(\frac {10^n-1}{10-1}\right) - \frac {2}{3}(n) $$</span> <span class="math-container">$$ = \frac {20}{27}{(10^n-1)} - \frac {2}{3}{(n)} $$</span></p> <p>I'm sorry if I did something stupid, but I have no idea where that <span class="math-container">$\frac {2}{3}$</span> came from, and even if I do, I don't understand the reasoning or the explanation behind it. I already asked my teacher, as well as my mother, but neither could help me understand the logic behind the solutions given in the answer scheme.</p> <p>If anybody could offer an explanation, it would be greatly appreciated.</p>
Arthur
15,500
<p>Alternately, note that if you add <span class="math-container">$\frac23=0.666\ldots$</span> to each term, then it actually becomes a geometric progression with common ratio <span class="math-container">$10$</span>. So remembering to subtract it again afterwards, we get <span class="math-container">$$ S_n=T_1+T_2+\cdots+T_n\\ =\frac23\cdot10^1+\frac23\cdot10^2+\cdots+\frac23\cdot10^n-n\cdot\frac23 $$</span> And now you can use the standard formula for sum of geometric series on the first part: <span class="math-container">$$ \frac23\cdot10^1+\frac23\cdot10^2+\cdots+\frac23\cdot10^n=\frac{20}3\frac{10^n-1}{10-1} $$</span></p>
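Not part of the original answer, but the closed form can be sanity-checked mechanically. A small Python sketch comparing the direct sum of 6, 66, 666, … against the formula, using exact rational arithmetic:

```python
from fractions import Fraction

def S_direct(n):
    # n-th partial sum of 6, 66, 666, ..., building each term digit by digit
    total, term = 0, 0
    for _ in range(n):
        term = 10 * term + 6
        total += term
    return total

def S_closed(n):
    # closed form from the answer: (20/27)(10^n - 1) - (2/3) n, kept exact
    return Fraction(20, 27) * (10**n - 1) - Fraction(2, 3) * n

for n in range(1, 12):
    assert S_direct(n) == S_closed(n)

print(S_direct(4))  # 6 + 66 + 666 + 6666 = 7404
```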
3,887,156
<p>I understand that the vertical shift is <span class="math-container">$0$</span> that is why the graph starts at <span class="math-container">$(0,0)$</span>. Also I understand that the amplitude is <span class="math-container">$3$</span> because the maximum y value is <span class="math-container">$3$</span> and the minimum y value is <span class="math-container">$-3$</span>. Last but not least the graph is inverse sine but it seems to be a period and a half. I am not sure where the <span class="math-container">$\frac {\pi}{4}$</span> comes from. I need help understanding that part. Photo of problem attached below. Thanks!</p> <p><a href="https://imgur.com/gallery/coL4IUw" rel="nofollow noreferrer">https://imgur.com/gallery/coL4IUw</a></p>
Community
-1
<p><span class="math-container">$V=\{(a,a,c)|a,c\in\Bbb R\}=\rm{span}\{(1,1,0),(0,0,1)\}\cong\Bbb R^2$</span>. And <span class="math-container">$\Bbb R^3/\Bbb R^2\cong\Bbb R$</span>.</p>
7,080
<p>What is the right definition of the symmetric algebra over a graded vector space V over a field k?</p> <p>More generally: What is the right definition of the symmetric algebra over an object in a symmetric monoidal category (which is suitably (co-)complete)?</p> <p>Two possible definitions come to my mind:</p> <p>1) Take the tensor algebra over V and identify those tensors which differ only by an element of the symmetric group, i.e. take the coinvariants wrt. the symmetric group. The resulting algebra A is then the universal algebra together with a map V -> A such that the product of elements in V is commutative.</p> <p>2) Take the tensor algebra over V and divide out the ideal generated by antisymmetric two-tensors. In this case, the resulting algebra A is the universal algebra together with a map V -> A such that the product of A vanishes on all antisymmetric two-tensors (one could say that all commutators of A vanish).</p> <p>The definition 1) looks more natural and gives, for example, the polynomial ring in case V is of degree 0.</p> <p>The definition 2) applied to a vector space shifted by degree 1 gives (up to degree shift) the exterior algebra over the unshifted vector space. However, in characteristic 2 for example, one doesn't get the polynomial ring if one starts with a vector space of degree 0.</p> <p>Finally, both definitions have a shortcoming in that they don't commute well with base change.</p>
Mariano Suárez-Álvarez
1,409
<p>Which one is the right definition depends on what you want to do with it. </p> <p>No matter how much technology you throw at the question, including homotopy coinvariants and quasi-triangular Hopf algebras, "right" is a relative notion :D</p>
3,506,982
<p>Let <span class="math-container">$\{X_n\}_{n\in \mathbb N}$</span> be an iid sequence of positive real random variables (rrvs) and let <span class="math-container">$K$</span> be a rrv independent of this sequence and taking its values in <span class="math-container">$\mathbb N$</span> with <span class="math-container">$P(K=k)=p_k$</span>. Consider the rrv <span class="math-container">$Z=\sum_{n=1}^{K} X_n$</span>. Suppose that <span class="math-container">$E(X_n) &lt;\infty$</span>. Then show <span class="math-container">$E(Z)=E(K)E(X_n)$</span>.</p> <p>My attempt: since <span class="math-container">$K$</span> is independent of the <span class="math-container">$X_n$</span>, how can I apply the independence formula for expectations?</p>
Kavi Rama Murthy
142,385
<p><span class="math-container">$EZ=\sum_k E(Z|K=k)P(K=k)$</span> <span class="math-container">$=\sum_k (kEX_1) P(K=k)=E(X_1) \sum_k kP(K=k)=EX_1EK$</span>. The second equality, <span class="math-container">$E(Z|K=k)=kEX_1$</span>, is where the independence of <span class="math-container">$K$</span> from the sequence is used. Note that <span class="math-container">$EX_n$</span> does not depend on <span class="math-container">$n$</span> since <span class="math-container">$(X_n)$</span> is i.i.d. </p>
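This identity (Wald's identity) can be verified exactly on a tiny hypothetical example — $X_i$ uniform on $\{1,2\}$ and $K$ uniform on $\{1,2\}$, chosen only for illustration — by enumerating every outcome with exact fractions:

```python
from fractions import Fraction
from itertools import product

# Hypothetical example: X_i uniform on {1, 2}, K uniform on {1, 2},
# with K independent of the X_i.  Enumerate every outcome exactly.
x_vals = [(1, Fraction(1, 2)), (2, Fraction(1, 2))]
k_vals = [(1, Fraction(1, 2)), (2, Fraction(1, 2))]

EZ = Fraction(0)
for k, pk in k_vals:
    for xs in product(x_vals, repeat=k):     # all (X_1, ..., X_k) outcomes
        prob = pk
        for _, px in xs:
            prob *= px                       # independence of K and the X_i
        EZ += prob * sum(x for x, _ in xs)

EX = sum(x * p for x, p in x_vals)           # 3/2
EK = sum(k * p for k, p in k_vals)           # 3/2
assert EZ == EK * EX                         # Wald: E(Z) = E(K) E(X_1) = 9/4
```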
1,037,736
<p>$$\sum \limits_{v=1}^n v=\frac{n^2+n}{2}$$</p> <p>Please don't downvote if this proof is stupid; it is my first proof, and I am only in grade 5, so I haven't a teacher for any of these 'big sums'.</p> <p>Proof:</p> <p>If we look at $\sum \limits_{v=1}^3 v=1+2+3,\sum \limits_{v=1}^4 v=1+2+3+4,\sum \limits_{v=1}^5 v=1+2+3+4+5$</p> <p>I learnt rainbow numbers in class three years ago, so I use that knowledge here:</p> <p>$n=3,1+3=4$ and $2$.</p> <p>$n=4,1+4$ and $2+3$</p> <p>$n=5,1+5$ and $2+4$ and $3$</p> <p>and more that I have done on paper that I don't wanna type.</p> <p>We can see from this for the odd case that we have $(n+1)$ added together moving in from the outside, so we get to add $(n+1)$ to the total $\frac{(n-1)}2$ times plus the center number, which is $\frac{n+1}2$, giving $\frac{n-1}2(n+1)+\frac{n+1}2=\frac{(n+1)(n-1)}{2}+\frac{n+1}{2}$ and I can get $\frac{n^2-1}2+\frac{n+1}2=\frac{n^2+n}2$ which is what we want.</p> <p>So the odd case is proven.</p> <p>For even we have a simpler problem: we have $n+1$ on each pair of numbers going in. Since we are even numbers, we have $1+n=n+1$, with $n$ even, $2+(n-1)=n+1$, and we can see this is good for all numbers since we increase one side by one and lower the other by 1. So we get $\frac{n}2$ times $n+1$, which gives $\frac{n^2+n}{2}$.</p> <p>Thus it is proven for all cases.</p>
John Joy
140,156
<p>While most of the proofs that you will see are algebraic, sometimes it is useful to get a geometric view of the problem. I've always preferred getting multiple perspectives to give me deeper understanding of the problem at hand.</p> <p>In the image, there are 5 different views of the problem. The first one has $(n+1)^2 - (n+1)$ cookies arranged in a square, with the diagonal removed. The second one arranges $n^2$ pizza into a square and then cuts the square in half. The third view arranges two sets of cookies into triangles to form a single rectangle. The fourth view we arrange squares into $n$ Ls that fit together to form a rectangle. Lastly, we have $n+1$ computers on a network that connects every computer directly to every other computer. <img src="https://i.stack.imgur.com/Pkrpj.png" alt="enter image description here"></p> <p>As an exercise, try cutting the middle row of pizzas in half horizontally, and rearrange the triangle of pizzas into a rectangle.</p> <p><img src="https://i.stack.imgur.com/CvgIG.png" alt="enter image description here"></p>
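The pairing idea in the pictures can also be checked mechanically; a trivial Python sketch of the closed form against the direct sum:

```python
def gauss_sum(n):
    # closed form (n^2 + n) / 2; integer division is exact since n^2 + n is even
    return (n * n + n) // 2

for n in range(0, 200):
    assert gauss_sum(n) == sum(range(1, n + 1))

print(gauss_sum(100))  # 5050
```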
3,958,133
<p>How to simplify the following probability:</p> <p><span class="math-container">$P(C\mid B,A)\,P(B\mid A)\,P(A) + P(\bar B\mid A)\,P(A)$</span></p> <p><span class="math-container">$ = P\left( {A,B,C} \right) + P\left( {A,\bar B} \right)$</span></p> <p>Can <span class="math-container">$P\left( {A,B,C} \right) + P\left( {A,\bar B} \right)$</span> be simplified further?</p> <p>Also, is there any systematic way to do the simplification instead of relying on heuristics? I see that there is some link between Boolean algebra and the simplification of these problems.</p> <p>In particular, my thinking is that <span class="math-container">$P\left( {A,B,C} \right) + P\left( {A,\bar B} \right)$</span> may somehow be related to <span class="math-container">$F\left( {A,B,C} \right) = \left( {ABC} \right)\,OR\,\left( {A\bar B} \right)$</span>, but I do not know if my gut feeling is correct or not.</p> <p>Please help me clear up these doubts. Thank you for your enthusiasm!</p>
Graham Kemp
135,106
<p><span class="math-container">$$\begin{align}&amp;~~~~~~\mathsf P(C\mid B,A)\,\mathsf P(B\mid A)\,\mathsf P(A)+\mathsf P(\overline B\mid A)\,\mathsf P(A)\\&amp;=\big(\mathsf P(B,C\mid A)+\mathsf P(\overline B\mid A)\big)\,\mathsf P(A)&amp;&amp;\small\text{by definition of conditional probability}\\&amp;=\mathsf P\big((B\cap C)\cup\overline B\mid A\big)\,\mathsf P(A)&amp;&amp;\small\text{by sigma additivity of disjoint events}\\&amp;=\mathsf P(C\cup\overline B\mid A)\,\mathsf P(A)&amp;&amp;\small\text{by absorption}\\&amp;=\mathsf P(A\cap(C\cup\overline B) )\end{align}$$</span></p>
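One can confirm this chain of equalities numerically by putting an arbitrary (hypothetical) strictly positive probability on each of the eight atoms generated by $A$, $B$, $C$ and evaluating both sides exactly:

```python
from fractions import Fraction
from itertools import product

# Arbitrary strictly positive weights on the 8 atoms (a, b, c) of the
# algebra generated by A, B, C; any such choice would do.
atoms = list(product([True, False], repeat=3))
weights = [Fraction(w) for w in (3, 5, 2, 7, 1, 4, 6, 8)]
total = sum(weights)
prob = {atom: w / total for atom, w in zip(atoms, weights)}

def P(pred):
    """Probability of the event described by a predicate on (a, b, c)."""
    return sum(p for atom, p in prob.items() if pred(*atom))

# P(C|B,A) P(B|A) P(A) + P(not-B | A) P(A)
lhs = (P(lambda a, b, c: a and b and c) / P(lambda a, b, c: a and b)
       * P(lambda a, b, c: a and b) / P(lambda a, b, c: a)
       * P(lambda a, b, c: a)
       + P(lambda a, b, c: a and not b) / P(lambda a, b, c: a)
       * P(lambda a, b, c: a))

# P(A intersect (C union not-B))
rhs = P(lambda a, b, c: a and (c or not b))

assert lhs == rhs
```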
2,225,150
<p>I am looking for a rigorous proof of the following identity:</p> <p>$\sum_{i = 0}^{T} x_i \sum_{j = 0}^{i} y_j = \sum_{i = 0}^{T}y_i\sum_{j = i}^{T} x_j$. </p> <p>By setting some small $T$ and expanding the formulas, the result is clear to see. I am asking for help with a formal proof of this identity, by reordering the summation. </p>
epi163sqrt
132,007
<blockquote> <p>The following representation might be helpful \begin{align*} \sum_{i = 0}^{T}\sum_{j = 0}^{i} x_iy_j=\sum_{0\leq j\leq i\leq T} x_iy_j=\sum_{j=0}^T\sum_{i=j}^Tx_iy_j\tag{1} \end{align*}</p> </blockquote> <p>From (1) we obtain by applying the laws of associativity, distributivity and commutativity: \begin{align*} \sum_{i = 0}^{T} x_i \sum_{j = 0}^{i} y_j&amp;=\sum_{i = 0}^{T}\sum_{j = 0}^{i} x_iy_j\\ &amp;=\sum_{0\leq j\leq i\leq T} x_iy_j\\ &amp;=\sum_{j=0}^T\sum_{i=j}^Tx_iy_j=\sum_{j=0}^Ty_j\sum_{i=j}^Tx_i \end{align*}</p>
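A quick mechanical check of the interchange on arbitrary sample data (the values below are chosen only for illustration):

```python
# Both orders of summation run over exactly the pairs with 0 <= j <= i <= T.
x = [3, -1, 4, 1, 5, -9, 2]
y = [2, 7, -1, 8, 2, -8, 1]
T = len(x) - 1

lhs = sum(x[i] * sum(y[j] for j in range(0, i + 1)) for i in range(0, T + 1))
rhs = sum(y[i] * sum(x[j] for j in range(i, T + 1)) for i in range(0, T + 1))

assert lhs == rhs
```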
2,455,428
<p>While going through some exercises in my analysis textbook, I came up with an equation which looks like an identity. I strongly believe that this is the case, but I couldn't prove this.</p> <blockquote> <p>$$\sum_{0\leq k\leq n}(-1)^k\frac{p}{k+p}\binom{n}{k} = \binom{n+p}{p}^{-1}$$</p> </blockquote> <p>Can someone provide a proof of this identity? Also, it would help a lot if you could explain the general strategy of proving such identities, if there is one, for me. Thank you.</p>
Marko Riedel
44,883
<p>This identity has appeared on MSE on several occasions in various forms. There is a proof using residues which goes like this (quoted from what should be earlier posts). Start with the function</p> <p>$$f(z) = n! (-1)^n \frac{p}{z+p} \prod_{q=0}^n \frac{1}{z-q}.$$</p> <p>We then get</p> <p>$$\mathrm{Res}_{z=k} f(z) = n! (-1)^n \frac{p}{k+p} \prod_{q=0}^{k-1} \frac{1}{k-q} \prod_{q=k+1}^n \frac{1}{k-q} \\ = n! (-1)^n \frac{p}{k+p} \frac{1}{k!} \frac{(-1)^{n-k}}{(n-k)!} = (-1)^k \frac{p}{k+p} {n\choose k}.$$</p> <p>Residues sum to zero and hence we have</p> <p>$$\sum_{k=0}^n (-1)^k \frac{p}{k+p} {n\choose k} = - \mathrm{Res}_{z=\infty} f(z) - \mathrm{Res}_{z=-p} f(z).$$</p> <p>Observe that $\lim_{R\to\infty} 2\pi R/R/R^{n+1} = 0$ and the residue at infinity is zero. It remains to compute</p> <p>$$- \mathrm{Res}_{z=-p} f(z) = - n! (-1)^n \times p \times \prod_{q=0}^n \frac{1}{-p-q} \\ = n! \times p \times \prod_{q=0}^n \frac{1}{p+q} = n! \times p \times \frac{(p-1)!}{(p+n)!} = {n+p\choose p}^{-1}.$$</p> <p>This concludes the argument. Here we require that $-p$ not be among the poles in $[0,n],$ resulting in a singularity in the sum.</p>
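The identity is also easy to confirm exactly for small parameters; here $p$ is taken to be a positive integer, so the pole at $-p$ stays away from $[0,n]$, as the argument requires. A Python sketch with exact rationals:

```python
from fractions import Fraction
from math import comb

def lhs(n, p):
    # sum_{k=0}^n (-1)^k p/(k+p) C(n, k), computed exactly
    return sum(Fraction((-1)**k * p, k + p) * comb(n, k) for k in range(n + 1))

def rhs(n, p):
    # 1 / C(n+p, p)
    return Fraction(1, comb(n + p, p))

for n in range(0, 8):
    for p in range(1, 8):
        assert lhs(n, p) == rhs(n, p)
```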
3,523,213
<p>I've read some simple explanations of Cantor's diagonal method.</p> <p>It seems to be:</p> <pre><code>1) Changing the i-th value in a row. 2) Do the same to the next row with the (i+1)th element. 3) Now you get an element not in any other row. So add it to list. 4) This process never ends. </code></pre> <p>This looks very like induction since it uses the (n+1) trick.</p> <p>However, induction only works for finite numbers.</p> <p>And the row lengths are not finite.</p> <p>So how did Cantor get around this?</p>
Frosty
744,938
<p>You don't need induction to prove that the new number is different than any already listed number. </p> <p>You have a construction for the new number. For any element in the list, there is some digit that is not the same, and based on where it is in the list, you can say exactly which one. This statement does not depend on previous digits differing for other numbers.</p>
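To see how pointwise the construction is, here is a finite toy version in Python. The 4/5 digit rule is one standard choice, and the names are illustrative; the actual argument applies the same rule to an infinite list of infinite digit strings:

```python
def diagonal(rows):
    """Given a list of digit strings, return a digit string of the same
    length that differs from the k-th row in its k-th digit."""
    return "".join("5" if row[k] == "4" else "4" for k, row in enumerate(rows))

rows = ["1415926", "0000000", "4444444", "9999999", "1234567", "7654321", "5555555"]
d = diagonal(rows)

for k, row in enumerate(rows):
    assert d[k] != row[k]   # differs from row k at position k by construction
    assert d != row         # hence d is not any listed row - no induction used
```

The point matches the answer: for each row, a single position witnesses the difference, and that witness is read off directly from the construction rather than proved step by step.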
569,012
<p>Let $I$ be the incenter of $\triangle{ABC}$. Let $R$ be the radius of the circle that circumscribes $\triangle{IAB}$. Find a formula for $R$ in term of other elements $a, b, c, A, B, C, r, R$ of $\triangle{ABC}$. I need this formula in order to prove a geometric inequality.</p>
Sawarnik
93,616
<p>Remember the extended law of sines: <span class="math-container">$$R = \frac{a}{2\sin A} = \frac{b}{2\sin B} = \frac{c}{2\sin C}$$</span></p> <p>The above gives the circumradius (<span class="math-container">$R$</span>) for a triangle whose one angle is <span class="math-container">$A$</span> and the opposite side is <span class="math-container">$a$</span> and so on. Well, the labeling is a bit different here.</p> <p><img src="https://i.stack.imgur.com/GoStB.png" alt="enter image description here" /></p> <p>For our triangle <span class="math-container">$IAB$</span>, the extended law of sines becomes: <span class="math-container">$$R = \frac{AB}{2\sin\angle AIB}$$</span> So, we need to calculate <span class="math-container">$\angle AIB$</span>, and we are done. This can be done by realizing that <span class="math-container">$IA$</span> is the angle bisector of <span class="math-container">$\angle A$</span> and <span class="math-container">$IB$</span> the angle bisector of <span class="math-container">$\angle B$</span>. So, <span class="math-container">$\angle AIB$</span> is:</p> <p><span class="math-container">$$180^{\circ} - (\angle IAB + \angle IBA) = 180^{\circ} - (\frac{\angle A}{2} + \frac{\angle B}{2}) = 180^{\circ} - (\frac{180^{\circ} - \angle C}{2}) = 90^{\circ} + \frac{C}{2} $$</span></p> <p>Note that <span class="math-container">$\sin (90^{\circ} + \theta) = \cos\theta$</span>. So we substitute back and our formula becomes:</p> <p><span class="math-container">$$R=\frac{AB}{2\cos\frac{C}{2}}$$</span></p> <p>If you need proof of anything that I have used ask me. If you want to express <span class="math-container">$\cos\frac{C}{2}$</span> in terms of <span class="math-container">$\cos C$</span>, you can do that by using the identity [and considering the positive root]:</p> <p><span class="math-container">$$\cos^2\frac{\theta}{2}=\frac{1+\cos\theta}{2}$$</span></p>
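A numerical sanity check of the final formula on a hypothetical triangle (coordinates chosen arbitrarily), computing the incenter and the circumradius of triangle $IAB$ directly:

```python
from math import acos, cos, dist

# A hypothetical concrete triangle.
A, B, C = (0.0, 0.0), (7.0, 0.0), (2.0, 4.0)
a, b, c = dist(B, C), dist(A, C), dist(A, B)   # sides opposite A, B, C

# Incenter as the (a, b, c)-weighted average of the vertices.
s = a + b + c
I = ((a * A[0] + b * B[0] + c * C[0]) / s,
     (a * A[1] + b * B[1] + c * C[1]) / s)

def circumradius(P, Q, R_):
    """Circumradius from R = (product of side lengths) / (4 * area)."""
    p, q, r = dist(Q, R_), dist(P, R_), dist(P, Q)
    area = abs((Q[0]-P[0])*(R_[1]-P[1]) - (R_[0]-P[0])*(Q[1]-P[1])) / 2
    return p * q * r / (4 * area)

angle_C = acos((a*a + b*b - c*c) / (2*a*b))    # law of cosines at vertex C

# Circumradius of IAB agrees with AB / (2 cos(C/2)).
assert abs(circumradius(I, A, B) - c / (2 * cos(angle_C / 2))) < 1e-9
```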
216,099
<p>$$x=\int \sqrt{\frac{y}{2a-y}}dy$$</p> <p>According to my textbook, it says that the substitution by $y=a(1-\cos\theta)$ will easily solve the intergral. Why does this work?</p>
Ken Dunn
42,937
<p>Find $\frac{dy}{d\theta}$ and then do the necessary replacement into the integral</p>
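To spell out why the substitution works (a sketch, not from the textbook, taking $0 \le \theta \le \pi$ so that the square root is nonnegative): half-angle identities turn the integrand into something elementary.

```latex
% With y = a(1 - \cos\theta):  dy = a\sin\theta\,d\theta,  2a - y = a(1 + \cos\theta), so
\sqrt{\frac{y}{2a-y}}
  = \sqrt{\frac{1-\cos\theta}{1+\cos\theta}}
  = \sqrt{\tan^2\tfrac{\theta}{2}}
  = \tan\tfrac{\theta}{2} \qquad (0 \le \theta \le \pi),
% and since \tan\tfrac{\theta}{2}\,\sin\theta = 2\sin^2\tfrac{\theta}{2} = 1 - \cos\theta,
x = \int \tan\tfrac{\theta}{2}\; a\sin\theta \,d\theta
  = a\int (1-\cos\theta)\,d\theta
  = a(\theta - \sin\theta) + C.
```

Together with $y = a(1-\cos\theta)$, the result $x = a(\theta - \sin\theta) + C$ is exactly the standard parametrization of a cycloid, which is presumably why the textbook suggests this substitution.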
4,141,477
<blockquote> <p>Find <span class="math-container">$$\lim_{x\rightarrow0} x\tan\frac1x$$</span></p> </blockquote> <p>Now I tried to find the form of the limit (<span class="math-container">$0/0$</span> or <span class="math-container">$0\cdot \infty$</span> or <span class="math-container">$\infty/\infty$</span>), but as <span class="math-container">$x\rightarrow 0$</span>, <span class="math-container">$\tan(1/x)$</span> tends to <span class="math-container">$\tan \infty$</span>, and since <span class="math-container">$\tan x$</span> is unbounded unlike <span class="math-container">$\sin x$</span> or <span class="math-container">$\cos x$</span>, no particular value or range can be assumed for <span class="math-container">$\tan(1/x)$</span>.</p> <p>Then I tried to find LHL and RHL.</p> <p>Let <span class="math-container">$\lim_{x\rightarrow0^+} x\tan{(1/x)}=L$</span>.</p> <p>Then <span class="math-container">$\lim_{x\rightarrow0^-} x\tan{(1/x)}=-L$</span>, since <span class="math-container">$x$</span> is approaching from the negative side, the input <span class="math-container">$1/x$</span> of <span class="math-container">$\tan$</span> is the negative of the input in RHL, and <span class="math-container">$\tan (-x)=-\tan x$</span></p> <p>Now if the limit exists, then <span class="math-container">$LHL=RHL$</span>, thus <span class="math-container">$L=0$</span>.</p> <p>Thus I got that if the limit exists, then it must be equal to <span class="math-container">$0$</span>. 
But this doesn't confirm that the limit exists (and it doesn't).</p> <p>Please help me in proving that the limit doesn't exist, and also please point out the mistakes (if any) in the argument I presented above (sorry for I might be weak in limits and the basics of it)</p> <p><strong>EDIT:</strong> As pointed out by Shubham in the comments, I forgot to take the sign of <span class="math-container">$x$</span> too in the <span class="math-container">$LHL$</span>, thus rendering the argument which proved <span class="math-container">$L=0$</span> moot.</p> <p><em><strong>THANK YOU</strong></em></p>
cdeamaze
429,551
<p>The limit does not exist. Consider the sequence <span class="math-container">$x_n=\frac{2}{n\pi}$</span>, i.e. <span class="math-container">$\{2/\pi,\ 1/\pi,\ 2/(3\pi),\ 1/(2\pi),\ \dots\}$</span>, and let <span class="math-container">$y_n = x_n \tan(1/x_n)$</span>. As <span class="math-container">$x_n\to0$</span>, <span class="math-container">$\tan(1/x_n)$</span> does NOT converge, since <span class="math-container">$\tan(1/x_n)$</span> varies from <span class="math-container">$-\infty$</span> to <span class="math-container">$+\infty$</span>.</p> <p><span class="math-container">$x_n$</span> is well behaved; <span class="math-container">$\tan(1/x_n)$</span> is NOT, and so <span class="math-container">$y_n$</span> is not well behaved.</p>
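One concrete way to exhibit the nonexistence is to pick two sequences tending to $0$ along which $x\tan(1/x)$ approaches different values. The choice $1/x_n = n\pi + \arctan(n\pi)$, which makes $\tan(1/x_n) = n\pi$, is not from the answer above; it is just one convenient sequence:

```python
from math import atan, pi, tan

def f(x):
    return x * tan(1 / x)

# Along x_n = 1/(n*pi):  f(x_n) = tan(n*pi)/(n*pi) -> 0 ...
seq0 = [f(1 / (n * pi)) for n in range(1, 2000, 97)]

# ... while along 1/x_n = n*pi + arctan(n*pi):  tan(1/x_n) = n*pi, so
# f(x_n) = n*pi / (n*pi + arctan(n*pi)) -> 1.
seq1 = [f(1 / (n * pi + atan(n * pi))) for n in range(1, 2000, 97)]

assert abs(seq0[-1]) < 1e-6        # first subsequence heads to 0
assert abs(seq1[-1] - 1) < 1e-3    # second subsequence heads to 1
```

Two sequences tending to $0$ with different limiting values of $f$ mean no two-sided (or even one-sided) limit can exist at $0$.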
29,703
<p>For an <a href="http://www.bekirdizdaroglu.com/ceng/Downloads/ISCE10.pdf">image denoising problem</a>, the author has a functional $E$ defined </p> <p>$$E(u) = \iint_\Omega F \;\mathrm d\Omega$$</p> <p>which he wants to minimize. $F$ is defined as </p> <p>$$F = \|\nabla u \|^2 = u_x^2 + u_y^2$$</p> <p>Then, the E-L equations are derived:</p> <p>$$\frac{\partial E}{\partial u} = \frac{\partial F}{\partial u} - \frac{\mathrm d}{\mathrm dx} \frac{\partial F}{\partial u_x} - \frac{\mathrm d}{\mathrm dy} \frac{\partial F}{\partial u_y} = 0$$</p> <p>Then it is mentioned that gradient descent method is used to minimize the functional $E$ by using </p> <p>$$\frac{\partial u}{\partial t} = u_{xx} + u_{yy}$$</p> <p>which is the heat equation. I understand both equations, and have solved the heat equation numerically before. I also worked with functionals. I do not understand however how the author jumps from the E-L equations to the gradient descent method. How is the time variable $t$ included? Any detailed derivation, proof on this relation would be welcome. I found some papers on the Net, the one by Colding <em>et al.</em> looked promising. </p> <p>References:</p> <p><a href="http://arxiv.org/pdf/1102.1411">http://arxiv.org/pdf/1102.1411</a> (Colding <em>et al.</em>)</p> <p><a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.117.1675&amp;rep=rep1&amp;type=pdf">http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.117.1675&amp;rep=rep1&amp;type=pdf</a></p> <p><a href="http://dl.dropbox.com/u/1570604/tmp/functional-grad-descent.pdf">http://dl.dropbox.com/u/1570604/tmp/functional-grad-descent.pdf</a></p> <p><a href="http://dl.dropbox.com/u/1570604/tmp/gelfand_var_time.ps">http://dl.dropbox.com/u/1570604/tmp/gelfand_var_time.ps</a> (Gelfand and Romin)</p>
rcollyer
2,283
<p>You should note that a solution, $f$, to your differential equation, $\mathcal{L}[f] = 0$, is the steady state solution to the second equation, as $\partial_t f = 0$. By turning this into a parabolic equation, only the error term will depend on $t$, and it will decay with time. This can be seen by letting </p> <p>$$h(x,y,t) = f(x,y) + \triangle f(x,y,t),$$ </p> <p>where $f$ is as before. Then</p> <p>$$\mathcal{L}[h] = \mathcal{L}[\triangle f] = \partial_t \triangle f$$ </p> <p>In general, this method makes the equations amenable to minimization routines like steepest descent.</p> <p><strong>Edit</strong>: Since you mentioned that you wanted a book to reference, when I was taking numerical analysis, we used v. 3 of <a href="http://books.google.com/books?id=ZUfVZELlrMEC&amp;dq=numerical%20methods%20inauthor%3aKincaid&amp;hl=en&amp;ei=Gz6STdSFDMLTgAeL_KUZ&amp;sa=X&amp;oi=book_result&amp;ct=result&amp;resnum=1&amp;ved=0CCkQ6AEwAA#v=onepage&amp;q&amp;f=false" rel="noreferrer">Numerical Mathematics and Computing</a> by Cheney and Kincaid, and I found it very useful. Although, at points it lacked depth, however it provided a good jumping off point. They also have a more mathematically in depth book <a href="http://books.google.com/books?id=x69Q226WR8kC&amp;dq=numerical%20methods%20inauthor%3aKincaid&amp;source=gbs_similarbooks" rel="noreferrer">Numerical analysis: mathematics of scientific computing</a> that may be useful to you, which I have not read.</p>
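As an illustration (not from the paper), here is a minimal NumPy sketch of the gradient-descent reading: an explicit Euler step of $u_t = u_{xx} + u_{yy}$ drives the discrete Dirichlet energy $E(u)$ down monotonically, with the steady state being exactly the Euler–Lagrange solution. The grid size and time step are arbitrary choices within the usual stability bound:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal((32, 32))      # hypothetical noisy "image"

def energy(u):
    # Discrete E(u): sum of squared forward differences, standing in for
    # the integral of |grad u|^2 over Omega.
    ux = np.diff(u, axis=0)
    uy = np.diff(u, axis=1)
    return (ux**2).sum() + (uy**2).sum()

def laplacian(u):
    # 5-point Laplacian with replicated (Neumann-like) borders.
    p = np.pad(u, 1, mode="edge")
    return p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2] - 4 * u

dt = 0.1                                # explicit scheme is stable for dt <= 0.25
energies = [energy(u)]
for _ in range(200):
    u = u + dt * laplacian(u)           # Euler step of u_t = u_xx + u_yy
    energies.append(energy(u))

# Gradient flow: the functional decreases monotonically along the evolution.
assert all(b < a for a, b in zip(energies, energies[1:]))
```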
3,276,877
<blockquote> <p>Insert <span class="math-container">$13$</span> real numbers between the roots of the equation: <span class="math-container">$x^2 +x−12 = 0$</span> in a few ways such that these <span class="math-container">$13$</span> numbers together with the roots of the equation will form the first <span class="math-container">$15$</span> elements of a sequence. Write down in an explicit form the general (nth) element of the formed sequence.</p> </blockquote> <p>Both roots of <span class="math-container">$x^2 +x−12 = 0$</span> are real, since <span class="math-container">$D= 49&gt;0$</span>; these are: <span class="math-container">$x = \frac{-1 \pm 7}{2}= 3, -4$</span>.</p> <p>i) Form an arithmetic sequence, i.e. the distance between the terms is the same. <br>Insert <span class="math-container">$13$</span> reals between these two in an equidistant manner.<br> As the distance is <span class="math-container">$7$</span>, we need an equal spacing of <span class="math-container">$\frac {7}{14}$</span>.<br> So, the first term is at <span class="math-container">$-4$</span>, the next at <span class="math-container">$-4+\frac {7}{14}=-\frac {49}{14}=-\frac72$</span>, &amp; so on.</p> <p>ii) Make the distance double with each next point, i.e. there is a g.p.
of the minimum distance.<br>Let the first term be <span class="math-container">$a$</span>, common ratio term be <span class="math-container">$r=2$</span>, &amp; <span class="math-container">$\,2^{14}r\,$</span> is the maximum gap between the consecutive terms.<br> The sum of the geometric series is given by: <span class="math-container">$a+ar+ar^2+\cdots+ar^{14}$</span>, or<br> <span class="math-container">$a+2a+4a+8a+16a+\cdots+2^{14}a = a\frac{2^{15}-1}{2-1}=a(2^{15}-1)$</span></p> <p>The last term <span class="math-container">$\,ar^{14}=3\implies a= \frac{3}{r^{14}} = \frac{3}{2^{14}}.$</span></p> <p>So, the series starts at the second point (i.e., the one after <span class="math-container">$-4$</span>).<br> This second (starting) point is at : <span class="math-container">$-4+\frac{3}{2^{14}}$</span>, third point at : <span class="math-container">$-4+3\frac{3}{2^{14}}$</span>, fourth point at : <span class="math-container">$-4+7\frac{3}{2^{14}}$</span>, <br> The last point should act as a check, as its value is <span class="math-container">$\,3\,$</span> giving us <span class="math-container">$-4+\frac{3}{2^{14}}(2^{15}-1)$</span>, which should equal <span class="math-container">$3$</span>, but is not leading to that.</p>
Amit Rajaraman
447,210
<p><span class="math-container">$$\angle DAE = \angle EAC + \angle DAB + \alpha$$</span> <span class="math-container">$$= \frac{180°-\angle ACE}{2}+\frac{180°-\angle ABD}{2}+\alpha$$</span> <span class="math-container">$$=\frac{\angle ACB+\angle ABC}{2}+\alpha$$</span> <span class="math-container">$$=\frac{180°-\alpha}{2}+\alpha$$</span> <span class="math-container">$$=90°+\frac{\alpha}{2}$$</span></p>
3,896,345
<p>I've been studying a paper in which the author says:</p> <p>Fix <span class="math-container">$n$</span> such that <span class="math-container">$m^n \prod_{j=1}^n \frac{j}{j+\delta} &gt; 1$</span>, where <span class="math-container">$1&lt;m&lt;\infty$</span>, and <span class="math-container">$\delta &gt;0$</span>.</p> <p>I seem not to be able to show why such <span class="math-container">$n$</span> must exist. I tried rewriting it this way:</p> <p><span class="math-container">$m^n \prod_{j=1}^n \frac{j}{j+\delta} =m^n \prod_{j=1}^n (1-\frac{\delta}{j+\delta})= m^n \exp\bigg[\sum_{j=1}^n \log(1-\frac{\delta}{j+\delta})\bigg] \geq m^n\bigg[1+\sum_{j=1}^n \log(1-\frac{\delta}{j+\delta})\bigg]$</span></p> <p>but after that I'm stuck. In one part the author writes</p> <p><span class="math-container">$m^n \prod_{j=1}^n (1-\frac{\delta}{j+\delta}) \geq m^n e^{c_0} \exp\bigg[-\delta\sum_{j=1}^n \frac{1}{j+\delta}\bigg]$</span>, where <span class="math-container">$c_0$</span> is some constant,</p> <p>but then I still don't know how this guarantees that there exists <span class="math-container">$n$</span> such that <span class="math-container">$m^n \prod_{j=1}^n \frac{j}{j+\delta} &gt; 1$</span>. Does anyone have any idea on how to prove it?</p>
Václav Mordvinov
499,176
<p>Find an <span class="math-container">$N$</span> such that <span class="math-container">$\frac{mN}{N+\delta}&gt;1+\eta$</span> for some <span class="math-container">$\eta&gt;0$</span>. Then take <span class="math-container">$M$</span> such that <span class="math-container">$(1+\eta)^M&gt;\left(\prod_{j=1}^N\frac{j}{j+\delta}\right)^{-1}$</span>. Then take <span class="math-container">$n=N+M$</span>.</p>
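For illustration only (the argument above already gives existence for any $m&gt;1$), a direct numerical search for the smallest such $n$:

```python
def smallest_n(m, delta, n_max=10**6):
    """First n with m^n * prod_{j=1}^n j/(j+delta) > 1, by direct search.
    Returns None if none is found up to n_max (illustration only)."""
    prod = 1.0
    for n in range(1, n_max + 1):
        prod *= m * n / (n + delta)     # multiply in the n-th factor
        if prod > 1:
            return n
    return None

# The factors m*j/(j+delta) increase toward m > 1, so the product
# eventually grows past 1 even if the early factors are small.
assert smallest_n(1.05, 3.0) is not None
assert smallest_n(2.0, 0.5) == 1        # 2 * 1/1.5 = 4/3 > 1 already at n = 1
```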
2,633,392
<p>I was recently reading about sets and read that $B$ is a subset of $A$ when each member of $B$ is a member of $A$. However, I am not sure about whether this requires the members of $A$ to simply be members of $B$, or if they could be part of $A$ in some other way - i.e. embedded within a set inside $A$.</p> <p>I tried to think about the following example:</p> <p>$$X = \{10,\{x\}\}$$ $$Y = \{x\}$$</p> <p>Does this mean that $Y$ is not a subset of $X$, as $x$ is a member of $Y$, but $x$ is not a member of $X$? If this is the case, I think I could say that $Z = \{\{x\}\}$ is a subset of $X$.</p> <p>Or, is $Y$ a subset of $X$ as "x" exists somewhere within $X$, even though it is an element of a set, which itself is an element of $X$? I find this unlikely but cannot get past this idea. Thank you.</p>
Sahiba Arora
266,110
<blockquote> <p>"I am not sure about whether this requires the members of $A$ to simply be members of $B$, or if they could be part of $A$ in some other way - i.e. embedded within a set inside $A$."</p> </blockquote> <p>Former.</p> <blockquote> <p>"Does this mean that $Y$ is not a subset of $X$, as $x$ is a member of $Y$, but $x$ is not a member of $X$? If this is the case, I think I could say that $Z = \{\{x\}\}$ is a subset of $X$."</p> </blockquote> <p>Yes and yes.</p> <blockquote> <p>"Or, is $Y$ a subset of $X$ as "x" exists somewhere within $X$, even though it is an element of a set, which itself is an element of $X$?"</p> </blockquote> <p><strong>No.</strong></p> <p>You can draw a Venn diagram to get a better idea. So, for example, a disk with centre at origin and radius $1$ is a subset of a disk with centre at origin and radius $2.$ For $A$ to be a subset of $B$ it is merely sufficient that $A$ sits inside $B.$ There is no restriction on $B.$</p>
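The same distinctions can be seen directly in Python, using `frozenset` so that sets can contain sets (the objects here are hypothetical stand-ins for the question's $x$, $X$, $Y$, $Z$):

```python
x = "x"                                  # any hashable object stands in for x
X = frozenset({10, frozenset({x})})      # X = {10, {x}}
Y = frozenset({x})                       # Y = {x}
Z = frozenset({frozenset({x})})          # Z = {{x}}

# Membership does not reach through nesting: x sits inside a set that is
# itself an element of X, but x is not an element of X.
assert frozenset({x}) in X and x not in X

# Hence Y = {x} is NOT a subset of X, while Z = {{x}} is.
assert not Y <= X
assert Z <= X
```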
250,074
<p>How can one generate a random vector <span class="math-container">$v=[v_1, v_2, v_3]^T$</span> satisfying <span class="math-container">$\sqrt{v_1v_1^* + v_2 v_2^* + v_3 v_3^*} = 1$</span>, where <span class="math-container">$T$</span> and <span class="math-container">$*$</span> denote the transpose and complex-conjugate, respectively?</p>
Szabolcs
12
<p>Generate a single such vector:</p> <pre><code>Complex @@@ Partition[RandomPoint@Sphere[{0, 0, 0, 0, 0, 0}], 2] </code></pre> <p>Generate <code>n</code> of them with good performance:</p> <pre><code>n = 100; #1 + I #2 &amp; @@ Transpose[ ArrayReshape[RandomPoint[Sphere[{0, 0, 0, 0, 0, 0}], n], {n, 3, 2}], {2, 3, 1} ] </code></pre> <p>This method will sample uniformly from the 3-dimensional complex sphere.</p> <hr /> <p>At this point, it is appropriate to discuss why one of the suggestions in the comments will not work. The suggestion is the following for reals:</p> <pre><code>lst=Normalize /@ RandomReal[{}, {n, 3}] </code></pre> <p>or the same with <code>RandomComplex</code> for complexes.</p> <p>This will not sample uniformly from the sphere. It samples from the unit cube, then normalizes each element. That leads to a biased distribution, first, because we are restricted to the first octant. This fault is easily fixed by extending to all octants:</p> <pre><code>lst = Normalize /@ RandomReal[{-1, 1}, {10000, 3}]; </code></pre> <p>However, the sampling is still not uniform, as vectors in the direction of the cube edges will appear more frequently. This is plainly visible in a plot:</p> <pre><code>Graphics3D[{Opacity[0.5], Point[lst]}] </code></pre> <p><a href="https://i.stack.imgur.com/wwH5c.png" rel="noreferrer"><img src="https://i.stack.imgur.com/wwH5c.png" alt="enter image description here" /></a></p> <p>This can be saved by restricting the sampling to the unit ball before normalizing each vector:</p> <pre><code>n = 100; eps = 0.001; result = Normalize /@ Select[RandomReal[{-1, 1}, {n, 3}], eps &lt; Norm[#] &lt;= 1 &amp;]; </code></pre> <p><code>eps</code> here is an arbitrary small number that helps avoid numerical imprecision for points very close to the origin.
Extension to complexes is possible with <code>RandomComplex[{-1-I, 1+I}, ...]</code>.</p> <p>However, this method does not generate <code>n</code> points, but fewer.</p> <pre><code>Length[result] (* 53 *) </code></pre> <p>One needs to do a bit more work to get precisely <code>n</code> points. This is why I chose to use <code>RandomPoint</code> for my answer.</p> <p><strong>Update:</strong> See <a href="https://mathematica.stackexchange.com/a/250080/12">@mikado's answer</a> which starts with the normal distribution (which is isotropic) instead of uniform distribution in a cube, and thus avoids the need for the <code>Select</code> above, and makes it easy to generate precisely the desired number of points.</p>
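For readers working outside Mathematica, the normal-distribution idea mentioned at the end translates to a few lines of NumPy (a hypothetical sketch): sample i.i.d. standard normals for the real and imaginary parts, then normalize. Rotation invariance of the Gaussian makes the direction uniform on the complex unit sphere, and no rejection/`Select` step is needed:

```python
import numpy as np

def random_complex_unit_vectors(n, seed=0):
    """Return n vectors drawn uniformly from the unit sphere in C^3."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n, 3)) + 1j * rng.standard_normal((n, 3))
    return z / np.linalg.norm(z, axis=1, keepdims=True)

v = random_complex_unit_vectors(1000)

# Every row has unit norm ...
assert np.allclose(np.linalg.norm(v, axis=1), 1.0)
# ... and we always get exactly n vectors, unlike the rejection approach.
assert v.shape == (1000, 3)
```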
1,743,482
<p>I was doing this question on convergence of improper integrals where in our book they have used the fact that $2+ \cos(t) \ge1$. Can somebody prove this?</p>
user236182
236,182
<p>If $p$ is any odd prime, then by <a href="https://en.wikipedia.org/wiki/Wilson%27s_theorem" rel="nofollow">Wilson's Theorem</a>: $$(p-1)!\equiv 1\cdot 2\cdots\left(\frac{p-1}{2}\right)\left(-\frac{p-1}{2}\right)\cdots (-2)(-1)\pmod{p}$$</p> <p>$$\equiv (-1)^{\frac{p-1}{2}}\left(\left(\frac{p-1}{2}\right)!\right)^2\equiv -1\pmod{p}$$</p> <p>If $p\equiv 1\pmod{4}$, then all the solutions of $x^2\equiv -1\pmod{p}$ are $x\equiv \pm\left(\frac{p-1}{2}\right)!\pmod{p}$.</p> <hr> <p>Edit: another proof: Let $p$ be prime, $p\equiv 1\pmod{4}$ and let $g$ be a <a href="https://en.wikipedia.org/wiki/Primitive_root_modulo_n" rel="nofollow">primitive root mod</a> $p$. Then $\text{ord}_p(g)=p-1$, so $g^{\frac{p-1}{2}}\equiv -1\pmod{p}$. Therefore, all the solutions of $x^2\equiv -1\pmod{p}$ are $x\equiv \pm g^{\frac{p-1}{4}}\pmod{p}$.</p>
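A quick numerical check of this explicit square root of $-1$ (plain Python, small primes only):

```python
from math import factorial

# For a prime p ≡ 1 (mod 4), x = ((p-1)/2)! is a square root of -1 mod p.
for p in [5, 13, 17, 29, 37, 41, 53, 61]:
    x = factorial((p - 1) // 2) % p
    assert (x * x) % p == p - 1  # x^2 ≡ -1 (mod p)
```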
2,604,178
<p>Let $(G=(a_1,...,a_n),*)$ be a finite group. Define for an element $a_i \in G$ a permutation $\phi = \phi(a_i)$ by left multiplication:</p> <p>$$ \begin{bmatrix} a_1 &amp; a_2 &amp; ... &amp; a_n \\ a_i*a_1 &amp; a_i*a_2 &amp; ... &amp; a_i*a_n \\ \end{bmatrix} $$ I am struggling to understand why this is the same permutation as </p> <p>$$ \begin{bmatrix} a_k*a_1 &amp; a_k*a_2 &amp; ... &amp; a_k*a_n \\ a_i*a_k*a_1 &amp; a_i*a_k*a_2 &amp; ... &amp; a_i*a_k*a_n \\ \end{bmatrix} $$ where $a_k \in G$. Can somebody give me a reason why these permutations are the same? Thanks for any help.</p>
gammatester
61,216
<p>You have to choose complex starting values, otherwise the method cannot converge to complex roots.</p> <p>With the correct iteration formula $$x_{n+1}=x_n - \frac{f(x_n)}{f'(x_n)} = x_n - \frac{x_n^2+1}{2x_n} = \frac{2x_n^2-x_n^2 -1}{2x_n}=\frac{x_n^2 - 1}{2x_n}$$ and a complex starting value you get e.g.</p> <pre><code> 1.0 + 1.0 i 0.2500000000 + 0.7500000000 i -0.07500000000 + 0.9750000000 i 0.001715686274 + 0.9973039215 i -0.46418462831e-5 + 1.000002160 i -0.1002647834e-19 + 1.000000000 i 0.0 + 1.000000000 i </code></pre> <p>and for the other root</p> <pre><code> 3.0 - 1.0 i 1.350000000 - 0.5500000000 i 0.3573529412 - 0.4044117647 i -0.4348049736 - 0.8964750065 i 0.001593678319 - 0.8997608310 i -0.0001874328610 - 1.005581902 i -0.1037539915e-5 - 1.000015475 i -0.1605575154e-10 - 1.000000000 i 0.0 - 1.000000000 i </code></pre>
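The iteration is easy to reproduce in any language with complex arithmetic; here is a minimal Python sketch (the function name is mine):

```python
def newton_sqrt_minus_one(z, steps=60):
    # Newton iteration for f(z) = z^2 + 1:  z -> (z^2 - 1) / (2 z)
    for _ in range(steps):
        z = (z * z - 1) / (2 * z)
    return z

# Complex starting values converge to the complex roots ±i,
# matching the two tables above.
assert abs(newton_sqrt_minus_one(1 + 1j) - 1j) < 1e-12
assert abs(newton_sqrt_minus_one(3 - 1j) + 1j) < 1e-12
```

Any starting value in the upper half-plane converges to $i$, any in the lower half-plane to $-i$; only real starting values fail to reach either root.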
959,322
<p>Evaluate $$ \sum_{k = 1}^{ \infty} \frac{\sin 2k}{k}$$</p> <p>I first tried to use Euler's formula:</p> <p>$$ \frac{1}{2i} \sum_{k = 1}^{ \infty} \frac{1}{k} \left( e^{2ik} - e^{-2ik} \right)$$</p> <p>However, to use the geometric series formula here I must subtract the $k=0$ term, and that term is undefined because of the $1/k$ factor. I also end up with something that diverges in my calculations, and since $\sin 2k$ is bounded the series should not diverge.</p>
lab bhattacharjee
33,337
<p>For finite non-zero $a,$</p> <p>$$\sum_{k=1}^\infty\frac{e^{2iak}}k=-\ln(1-e^{2ia})$$</p> <p>$$1-e^{2ia}=-e^{ia}(e^{ia}-e^{-ia})=-e^{ia}[2i\sin(a)]$$</p> <p>$$\ln(1-e^{2ia})=\ln(e^{i(a+2m\pi)})+\ln2+\ln(-i)+\ln[\sin(a)]$$ $$=i(a+2m\pi)+\ln2-\frac{i\pi}2+\ln[\sin(a)]$$ </p> <p>as $-i=\cos\left(-\frac\pi2\right)+i\sin\left(-\frac\pi2\right)=e^{-i\pi/2}$, and where $m$ is any integer.</p> <p>Hope you can take it home from here, setting $a=1,-1$ one by one.</p>
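For $a=1$, taking imaginary parts gives $\sum_{k\ge1}\frac{\sin 2k}{k}=\frac{\pi-2}{2}$, which a short numerical check (plain Python) confirms:

```python
import cmath
import math

# Closed form from above with a = 1: sum_{k>=1} e^{2ik}/k = -log(1 - e^{2i});
# its imaginary part is sum_{k>=1} sin(2k)/k.
closed = (-cmath.log(1 - cmath.exp(2j))).imag
assert abs(closed - (math.pi - 2) / 2) < 1e-12

# Partial sums of the original series approach the same value (slowly).
partial = sum(math.sin(2 * k) / k for k in range(1, 200001))
assert abs(partial - closed) < 1e-3
```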
361,862
<p>I would like you to present and briefly explain some examples of theorems having hypotheses that are (as far as we know) actually necessary in their proofs, but whose uses in the arguments are extremely subtle and difficult to notice at first sight. I am looking for hypotheses or conditions that appear to be almost absent from the proof but are actually hidden behind some really abstract or technical argument. It would be even more interesting if this unnoticed hypothesis was not noted at first, but later had to be added in another paper or publication, not because the proof of the theorem was wrong, but because the author did not notice that this or that condition was actually playing a role behind the scenes and needed to be added. And, finally, an extra point if this hidden hypothesis led to some important development or advance in the area around the theorem, in the sense that it opened new questions or new paths of research. This question might be related to this <a href="https://mathoverflow.net/questions/352249/nontrivially-fillable-gaps-in-published-proofs-of-major-theorems">other one</a>, but notice that it is not the same, as I am speaking about subtleties in proofs that were not exactly incorrect but incomplete, in the sense of not mentioning that some object or result had to be used, maybe in a highly tangential way.</p> <p>In order to put some order in the possible answers and make this post useful for other people, I would like you to give references and at least explain the subtleties that help the hypothesis hide at first sight, show how they relate to the actual proof or method of proof, and describe the main steps the community took until the hidden condition was found; i.e., you can in fact write a short history of the evolution of our understanding of the subtleties and nuances surrounding the result you want to mention.</p> <p>A very well known and classic example of this phenomenon is the full theory of classical
Greek geometry: although correctly developed in the famous work of Euclid, it was later found to be incompletely axiomatized, as there were some axioms that Euclid used but <a href="https://en.wikipedia.org/wiki/Euclidean_geometry#cite_note-6" rel="noreferrer">did not mention</a> as such, mainly because these manipulations are so highly intuitive that it was not easy to recognize that they were being used in an argument. Happily, a better understanding of these axioms and their internal logical relations, through a period of study and research lasting millennia, led to the realization that these axioms were necessary though not explicitly mentioned, and to the development of new kinds of geometry and different geometrical worlds.</p> <p>Maybe this one is (being the most classic, and expanded through so many centuries and pages of research) the best known and most famous example of the phenomenon I am looking for. However, I am also interested in smaller and more humble examples of this phenomenon appearing in more recent papers, theorems, lemmas and results in general.</p> <p>Note: I vote for making this community wiki, as it seems the best way of dealing with this kind of question.</p>
Alistair Wall
159,000
<p>Some of Euclid's theorems rely on axioms of betweenness that he was not aware of.</p> <p>Hilbert's axioms: <a href="https://www.math.ust.hk/~mabfchen/Math4221/Hilbert%20Axioms.pdf" rel="noreferrer">https://www.math.ust.hk/~mabfchen/Math4221/Hilbert%20Axioms.pdf</a></p>
1,041,177
<p>Prove that if $p$ is an odd prime and $k$ is an integer such that $1≤k≤p-1$, then the binomial coefficient satisfies</p> <p>$$\displaystyle \binom{p-1}{k}\equiv (-1)^k \pmod p$$</p> <p>This exercise was on a test and I could not do it!</p>
Bruno Joyal
12,507
<p>In characteristic $p$, where $p$ is odd,</p> <p>$$(1+X)^{p-1} = \frac{(1+X)^p}{1+X} = \frac{1+X^p}{1+X} = \frac{1-(-X)^p}{1-(-X)} = 1 -X + X^2 - \dots +X^{p-1}.$$</p>
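The congruence is easy to spot-check numerically (plain Python):

```python
from math import comb

# binom(p-1, k) ≡ (-1)^k (mod p): residue 1 for even k, p-1 for odd k.
for p in [3, 5, 7, 11, 13, 101]:
    for k in range(1, p):
        expected = 1 if k % 2 == 0 else p - 1
        assert comb(p - 1, k) % p == expected
```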
1,231,365
<p>I am from a non-English speaking country. Should we say monotonous function or monotonic function?</p>
Warlord5
230,629
<p>That would be monotonic function. Monotonic is always used in relation to the function you are talking about. </p> <p><a href="http://mathworld.wolfram.com/MonotonicFunction.html">http://mathworld.wolfram.com/MonotonicFunction.html</a></p> <p>Monotonic describes something that does not change direction, such as the function in maths, whereas monotonous describes something lacking in variety and is usually used in reference to tone. </p>
381,011
<p>I should prove this claim:</p> <blockquote> <p>Every undirected graph with n vertices and $2n$ edges is connected.</p> </blockquote> <p>If it is false I should find a counterexample. I was thinking of considering the complete graph with $n$ vertices. Such a graph is connected and contains $\frac{n(n-1)}{2}$ edges. I reasoned that $2n &gt; \frac{n(n-1)}{2}$ would imply my graph is connected too. But I'm not sure this is a solution, because even if my graph has $2n$ edges it doesn't have to be complete. Can anybody help me?</p>
Abel
71,157
<p>Hint: Suppose you have a non-empty graph $G$ with $n$ vertices and $2n$ edges, how many edges and vertices does $G\coprod G$ have? Is it connected?</p>
381,011
<p>I should prove this claim:</p> <blockquote> <p>Every undirected graph with n vertices and $2n$ edges is connected.</p> </blockquote> <p>If it is false I should find a counterexample. I was thinking of considering the complete graph with $n$ vertices. Such a graph is connected and contains $\frac{n(n-1)}{2}$ edges. I reasoned that $2n &gt; \frac{n(n-1)}{2}$ would imply my graph is connected too. But I'm not sure this is a solution, because even if my graph has $2n$ edges it doesn't have to be complete. Can anybody help me?</p>
Ma Ming
16,340
<p>This is false. Suppose $2n=\binom{k}{2}$ for some $k$ with $n&gt;k$. Take a complete graph on $k$ vertices together with $n-k$ isolated vertices; then you get a disconnected graph with $n$ vertices and $2n$ edges.</p>
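A concrete instance, with a tiny reachability check (Python sketch): $k=8$ gives $\binom{8}{2}=28$ edges, so $n=14$ and six of the fourteen vertices are isolated.

```python
# K_8 together with n - k isolated vertices, where 2n = C(8, 2) = 28.
k = 8
edges = [(i, j) for i in range(k) for j in range(i + 1, k)]
n = len(edges) // 2                      # 14 vertices, 28 = 2n edges
assert len(edges) == 2 * n and n > k

adj = {v: set() for v in range(n)}
for i, j in edges:
    adj[i].add(j)
    adj[j].add(i)

# Depth-first search from vertex 0 never reaches the isolated vertices.
seen, stack = {0}, [0]
while stack:
    for w in adj[stack.pop()]:
        if w not in seen:
            seen.add(w)
            stack.append(w)
assert len(seen) == k < n  # the graph is disconnected
```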
3,628,358
<p>As stated, I need to prove that, up to isomorphism, the only simple group of order <span class="math-container">$p^2 q r$</span>, where <span class="math-container">$p, q, r$</span> are distinct primes, is <span class="math-container">$A_5$</span> (the alternating group of degree 5).</p> <p>Now I know the following: if <span class="math-container">$G$</span> is a simple group and <span class="math-container">$|G| = 60$</span>, then <span class="math-container">$G$</span> is isomorphic to <span class="math-container">$A_5$</span>. However, I don't even know how to begin the proof that <span class="math-container">$|G| = 60$</span>, or anything similar.</p>
Dietrich Burde
83,966
<p>The groups of order <span class="math-container">$p^2qr$</span> for distinct primes <span class="math-container">$p,q,r$</span> have been classified <a href="https://www.jstor.org/stable/1986340?seq=1#metadata_info_tab_contents" rel="nofollow noreferrer">here</a> by Oliver G. Glenn in <span class="math-container">$1906$</span>.</p> <p>With the exception of the group of order <span class="math-container">$2^2\cdot 3\cdot 5$</span>, simply isomorphic with the icosahedron-group <span class="math-container">$A_5$</span>, all groups of order <span class="math-container">$p^2qr$</span> are solvable.</p>
2,311,979
<p>Let $A = (a_{i,j})_{n\times n}$ and $B = (b_{i,j})_{n\times n}$</p> <p>$(AB) = (c_{i,j})_{n\times n}$, where $c_{i,j} = \sum_{k=1}^n a_{i,k} b_{k,j}$, so</p> <p>$(AB)^T = (c_{j,i})$, where $c_{j,i} = \sum_{k=1}^n a_{j,k}b_{k,i} $, and $B^T = b_{j,i}$ and $A^T = a_{j,i}$, so </p> <p>$B^T A^T = d_{j,i}$ where $d_{j,i} = \sum_{k=1}^n b_{j,k} a_{k,i}$, but this means that $(AB)^T \not = (B^T A^T)$, so where is the problem in this derivation ?</p> <p>Edit: To be clear, let's be more precise; Let $A = (a_{x,y})_{p\times n}$ and $B = (b_{z,t})_{n\times q}$</p> <p>So, $A^T_{n\times p} = (a_{y,x})$ and $B^T_{q\times n} = (b_{t,z})$, which implies</p> <p>$$(B^T A^T)_{i,j}^{q \times p} = \sum_{k=1}^n b_{i,k} a_{k,j},$$ and</p> <p>$(AB)_{c,d}^{p\times q} = \sum_{k=1}^n a_{c,k} b_{k,d}$, which implies $$((AB)^T)_{d,c}^{q\times p} = \sum_{k=1}^n a_{d,k} b_{k,c}.$$ Since $i,d \in \{1,...,q\}$ and $j,c \in \{1,...,p\}$, $$((AB)^T)_{d,c}^{q\times p} = \sum_{k=1}^n a_{d,k} b_{k,c} = \sum_{k=1}^n a_{i,k} b_{k,j},$$ which again concludes that $(AB)^T \not = (B^T A^T)$.</p>
Surb
154,545
<p>In the world of real vector spaces, one can define $A^T$ to be the adjoint of $A$ with respect to the Euclidean inner product $\langle \cdot,\cdot \rangle$ (this adjoint is unique). More precisely, $A^T$ is the unique linear mapping so that $$\langle Ax,y\rangle = \langle x,A^Ty\rangle \qquad \forall x,y$$ Similarly, $B^T$ and $(AB)^T$ are the unique linear mappings so that $$\langle Bx,y\rangle = \langle x,B^Ty\rangle \qquad \forall x,y$$ and $$\langle (AB)x,y\rangle = \langle x,(AB)^Ty\rangle \qquad \forall x,y.$$ Now, note that $$ \langle (AB)x,y\rangle=\langle A(Bx),y\rangle=\langle Bx,A^Ty\rangle=\langle x,B^TA^Ty\rangle=\langle x,(B^TA^T)y\rangle$$ By uniqueness of the adjoint, we directly obtain $(AB)^T=B^TA^T$. Note that the advantage of this proof is that it holds for the adjoint operator of matrices defined with respect to any inner product. Doing the same for complex vector spaces, we obtain that $(AB)^*=B^*A^*$ where $A^*$ is the Hermitian conjugate of $A$.</p> <p><strong>Note:</strong> To obtain uniqueness of the adjoint, simply plug in all the pairs of vectors taken from an orthogonal basis for $x,y$ in the above equations.</p>
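A quick sanity check of $(AB)^T=B^TA^T$ on random integer matrices (plain Python, list-of-lists representation):

```python
import random

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(A, B):
    # (AB)[i][j] = sum_k A[i][k] * B[k][j]
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

random.seed(0)
A = [[random.randint(-5, 5) for _ in range(4)] for _ in range(3)]  # 3 x 4
B = [[random.randint(-5, 5) for _ in range(5)] for _ in range(4)]  # 4 x 5

assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))
```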
4,330,755
<p>Given a convex pentagon <span class="math-container">$ABCDE$</span>, there is a unique ellipse with center <span class="math-container">$F$</span> that can be inscribed in it as shown in the image below. I've written a small program to find this ellipse, and had to numerically (i.e. by iterations) solve five quadratic equations in the center <span class="math-container">$F$</span> coordinates and the entries of the inverse of the <span class="math-container">$Q$</span> matrix, such that the equation of the inscribed ellipse is</p> <p><span class="math-container">$(r - F)^T Q (r - F) = 1 $</span></p> <p>My question is: Is it possible to determine the coordinates of the center in closed form, from the given coordinates of the vertices of the pentagon ?</p> <p><a href="https://i.stack.imgur.com/I1qFM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/I1qFM.png" alt="Ellipse Inscribed in Pentagon" /></a></p>
Intelligenti pauca
255,730
<p>There is a simple geometric construction for the ellipse inscribed in a given convex pentagon <span class="math-container">$ABCDE$</span>. One can, first of all, find tangency points <span class="math-container">$PQRST$</span>. Draw, for instance, diagonals <span class="math-container">$AC$</span> and <span class="math-container">$BD$</span>, meeting at <span class="math-container">$F$</span>. Line <span class="math-container">$EF$</span> meets then side <span class="math-container">$BC$</span> at tangency point <span class="math-container">$P$</span>.</p> <p>This construction depends on Brianchon's theorem:</p> <blockquote> <p>The three opposite diagonals of every hexagon circumscribing a conic are concurrent.</p> </blockquote> <p>In fact, if <span class="math-container">$P$</span> is the tangency point on <span class="math-container">$BC$</span>, we can view <span class="math-container">$ABPCDE$</span> as a limiting case of a hexagon circumscribed to the ellipse. Hence diagonals <span class="math-container">$AC$</span>, <span class="math-container">$BD$</span> and <span class="math-container">$EP$</span> must intersect at the same point <span class="math-container">$F$</span>.</p> <p>Go on finding the other tangency points and remember then that the line, passing through the intersection point of two tangents to an ellipse and through the midpoint of their tangency points, also passes through the center of the ellipse. 
The center can thus be readily obtained, as the intersection point of <span class="math-container">$CM$</span> and <span class="math-container">$DN$</span>, where <span class="math-container">$M$</span> is the midpoint of <span class="math-container">$PQ$</span> and <span class="math-container">$N$</span> is the midpoint of <span class="math-container">$QR$</span>.</p> <p><a href="https://i.stack.imgur.com/pqomR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pqomR.png" alt="enter image description here" /></a></p> <p>I implemented the above method with Mathematica, to find an explicit expression for the coordinates of the center, but the resulting expression is too large to be written here. Anyway, with the coordinates as in your figure, I get: <span class="math-container">$$ O=\left({237\over59}, {103\over59}\right). $$</span></p>
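Lacking the figure's coordinates, the construction can still be verified numerically on a self-checking example: build a pentagon circumscribing the unit circle (so the unique inscribed conic is the circle itself), and check that the construction recovers the known tangency points and the center $(0,0)$. A Python sketch (all helper names are mine):

```python
import math

def line_through(p, q):
    # Line a*x + b*y = c through points p and q.
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    return (a, b, a * x1 + b * y1)

def intersect(l1, l2):
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    d = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / d, (a1 * c2 - a2 * c1) / d)

def mid(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

# Tangent to the unit circle at angle t: x cos t + y sin t = 1.
ts = [0.3, 1.4, 2.7, 3.9, 5.2]
tangents = [(math.cos(t), math.sin(t), 1.0) for t in ts]
# Vertex i is the intersection of tangents i and i+1, so the side joining
# vertices i and i+1 lies on tangent i+1.
A, B, C, D, E = [intersect(tangents[i], tangents[(i + 1) % 5]) for i in range(5)]

# P on BC via degenerate Brianchon (hexagon ABPCDE: AC, BD, EP concur).
F = intersect(line_through(A, C), line_through(B, D))
P = intersect(line_through(E, F), line_through(B, C))
# Q on CD (hexagon ABCQDE: AQ, BD, CE concur).
G = intersect(line_through(B, D), line_through(C, E))
Q = intersect(line_through(A, G), line_through(C, D))
# R on DE (hexagon ABCDRE: AD, BR, CE concur).
H = intersect(line_through(A, D), line_through(C, E))
R = intersect(line_through(B, H), line_through(D, E))

# Vertex-to-midpoint lines meet at the center of the inscribed conic.
O = intersect(line_through(C, mid(P, Q)), line_through(D, mid(Q, R)))

assert math.hypot(P[0] - math.cos(ts[2]), P[1] - math.sin(ts[2])) < 1e-7
assert math.hypot(*O) < 1e-7
```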
46,462
<p>Hi I have a simple question. How do I plot the following with Day 1 as my X axis and Day 2 as my Y axis? I need the 22 variances plotted according to the Day they were taken from (these were originally 3D measurements taken over 2 days with the same specimens each day, there were 11 specimens and 22 xyz measurements from which I have taken the variances).</p> <p>Can someone kindly help me out? I can't even find Scatterplot listed in Mathematica docs, do they call it something else?</p> <pre><code>{{{"ID", "Day", 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16., 17., 18., 19., 20., 21., 22.}, {"H. sapiens", 1., 145.7, 153.2, 164.6, 161.1, 170.8, 191.7, 179.2, 178.5, 198.5, 169.9, 135.8, 182.8, 205.3, 210.3, 197.3, 238.4, 207.4, 209., 188.6, 201.3, 210.2, 206.9}, {"H. sapiens", 2., 146.8, 152.7, 165.6, 165.1, 172.4, 189.2, 179.2, 193.9, 199.7, 172.9, 136.1, 181.8, 204.3, 212.3, 197.4, 238., 205.7, 211.5, 191.5, 204.3, 218.2, 204.2}, {"Pan troglodytes", 1., -200.9, -230.2, -205.5, -238.8, -237.3, -207.1, -213.2, \ -221.6, -225.9, -247.1, -242.6, -266.5, -259.1, -258., -266.1, -259., \ -227.3, -228.9, -212.1, -213.6, -223.9, -225.1}, {"Pan troglodytes", 2., -199.4, -229.6, -203.5, -243.6, -238.6, -205.7, -213.1, \ -222.3, -227.3, -258.1, -242.9, -265.4, -265.1, -258., -269.9, -260., \ -228.2, -233.3, -212.3, -215.1, -225.4, -223.6}, {"Pan verus ", 1., 261.2, 273., 274., 283., 285.06, 300., 294., 305.037, 315.33, 289.08, 263.6, 298.46, 306.3, 316.7, 306.5, 331.6, 320.81, 323.7, 298.13, 312.79, 319.2, 310.2}, {"Pan verus", 2., 264.1, 268.7, 273.1, 289.19, 288.03, 299.9, 291.9, 304.137, 316.54, 289.675, 263.4, 298.58, 309.3, 316.2, 306., 332.3, 323.31, 327.4, 297.26, 310.95, 321.8, 310.9}, {"Pan schweinfurthii ", 1., 230.6, 241.2, 241., 257.3, 253.3, 274.1, 265.9, 272.36, 280.147, 259.21, 229.8, 268.19, 277.62, 286.6, 275.56, 306.3, 287.4, 292.04, 270.36, 283.35, 291.3, 281.13}, {"Pan schweinfurthii ", 2., 226.9, 237.9, 239.9, 254.6, 257., 273.7, 
262., 273.33, 282.84, 260.05, 229.6, 267.49, 278.23, 287.9, 272.72, 306.2, 287.58, 287.92, 270.85, 283.37, 292.3, 281.88}, {"Pan paniscus ", 1., -421.9, -447., -420.9, -454.5, -451.7, -419.8, -427.8, -431.9, \ -445.7, -463.4, -459.2, -476.3, -470.1, -466.5, -475.5, -450.1, \ -439.5, -445.8, -417.4, -419.5, -434.4, -432.2}, {"Pan paniscus", 2., -426.2, -447.9, -421.5, -455.6, -453., -420.4, -428.7, -432.7, \ -441.5, -466.8, -459.5, -476.4, -470.9, -467.2, -474.7, -472.5, \ -437.5, -446.9, -416.4, -424.7, -433.1, -437.}, {"G. gorilla ", 1., -175.9, -204., -175.1, -221., -217., -168., -187., -198., \ -213., -229., -220.8, -251.8, -251., -245., -256., -243., -211., \ -224., -190., -194., -210., -212.}, {"G. gorilla ", 2., -180.8, -205., 214.1, -228., -223., -167., -191., -195., -206., -242.9, -221.6, \ -251.8, -253., -243., -256., -243., -211., -224., -189.8, -196., \ -210., -215.}, {"G. graueri ", 1., 220., 192., 222., 177., 184., 228., 214., 205., 187., 162., 174., 147., 155., 162., 145., 170., 194., 186., 221.2, 215.5, 198.8, 198.3}, {"G. graueri ", 2., 221., 189., 226., 182., 179., 230., 214., 210., 188., 161., 172., 146., 153., 156., 146., 169., 193., 189., 221.2, 220.6, 202.2, 196.9}, {"G. beringei", 1., -763., -793.5, -762.2, -810.3, -802., -749.7, -771., -777., \ -783., -828., -809.3, -836.9, -833., -828., -837.9, -809., -788., \ -795., -767.3, -768.4, -789., -782.8}, {"G. beringei", 2., -763.3, -791.9, -761.3, -805.6, -800., -748.5, -769., -776., \ -774., -817.4, -812.8, -836.4, -833., -825., -837.2, -820., -791., \ -797., -766.6, -769.8, -782.6, -786.3}, {"G. diehli", 1., -78., -106.4, -77., -124., -121., -72.4, -87.5, -94., -99., \ -134., -122.1, -149., -153., -141., -154., -137., -105., -111., -83., \ -87., -104., -105.}, {"G. diehli", 2., -79.6, -105.8, -77., -124., -119., -72.1, -86., -100., -99., \ -134., -121.5, -148., -152., -142., -152., -135., -104., -110., \ -84.7, -86., -103., -107.}, {"P. 
abelii ", 1., -214.09, -232.6, -209.24, -232.56, -227.49, -185.5, -204.5, \ -209.5, -213.9, -250.52, -249.8, -256.49, -249.85, -244.22, -257.96, \ -237.6, -219.2, -225.2, -198.8, -201.9, -223.3, -224.2}, {"P. abelii \ ", 2., -214.93, -230.8, -209.97, -233.98, -230.51, -184.4, -204.4, \ -208.4, -214.1, -249.21, -250., -258.98, -250.1, -242., -256.86, \ -234.5, -225.9, -226.4, -201., -205.8, -225.1, -224.1}, {"P. \ pygmaeus", 1., -288.8, -280.5, -280.5, -265., -264., -238.5, -250.7, -234.2, \ -224.9, -258.9, -296.1, -259.5, -246.3, -234.5, -251.1, -212.5, \ -224.5, -219.6, -249.1, -230.4, -224.4, -233.}, {"P. pygmaeus", 2., -293.5, -284.3, -278.4, -256.8, -255.6, -236.3, -248.4, \ -233.4, -227.3, -261.4, -295.5, -262., -242.6, -233.8, -252.2, -212., \ -225.1, -220.3, -248.2, -230.9, -225.3, -233.7}}} </code></pre>
kglr
125
<pre><code>reorgdata = GatherBy[data[[1]], #[[2]] &amp;][[2 ;;, All, 3 ;;]]; variances = Thread[Variance /@ reorgdata]; means = Thread[Mean /@ reorgdata]; Row[{ListPlot[means, PlotLabel -&gt; "means", ImageSize -&gt; 300], ListPlot[variances, PlotLabel -&gt; "variances", ImageSize -&gt; 300]}] </code></pre> <p><img src="https://i.stack.imgur.com/ZpG9H.png" alt="enter image description here"></p>
149,049
<p>Suppose you have a list of intervals (or tuples), such as:</p> <pre><code>intervals = {{3,7}, {17,43}, {64,70}}; </code></pre> <p>And you want to know the intervals of all numbers not included above, e.g.:</p> <pre><code>myRange = 100; numbersNotUsed[myRange, intervals] (*out: {{1,2},{8,16},{44,63},{71,100}}*) </code></pre> <p>What would be the most efficient way to approach this?</p> <p><em>Mathematica</em> currently supports <code>IntervalIntersection</code> but not <code>IntervalComplement</code>.</p>
Alexey Popkov
280
<p>Here is another implementation. It assumes that all the intervals lie in the specified range <em>and</em> intervals are sorted <em>and</em> aren't overlapping:</p> <pre><code>integerIntervalComplement[completeInterval : {start_, end_}, {subIntervals___}] := If[First[#2] - Last[#1] &lt;= 1, Nothing, {Last[#1] + 1, First[#2] - 1}] &amp; @@@ Partition[{{start - 1}, subIntervals, {end + 1}}, 2, 1]; </code></pre> <p>Testing:</p> <pre><code>integerIntervalComplement[{1, 80}, {{3, 7}, {17, 43}, {64, 70}}] </code></pre> <blockquote> <pre><code>{{1, 2}, {8, 16}, {44, 63}, {71, 80}} </code></pre> </blockquote> <pre><code>integerIntervalComplement[{1, 80}, {{1, 7}, {64, 80}}] </code></pre> <blockquote> <pre><code>{{8, 63}} </code></pre> </blockquote> <pre><code>integerIntervalComplement[{1, 80}, {{2, 7}, {64, 79}}] </code></pre> <blockquote> <pre><code>{{1, 1}, {8, 63}, {80, 80}} </code></pre> </blockquote> <pre><code>integerIntervalComplement[{1, 80}, {}] </code></pre> <blockquote> <pre><code>{{1, 80}} </code></pre> </blockquote> <p>If the intervals overlap, one can preprocess them first using <code>Interval</code>, for example:</p> <pre><code>List @@ Interval @@ {{30, 40}, {3, 7}, {8, 12}, {1, 10}} {{1, 12}, {30, 40}} </code></pre> <p>This also removes the condition for subintervals to be sorted.</p>
834,949
<p>I have this HW where I have to calculate the $74$th derivative of $f(x)=\ln(1+x)\arctan(x)$ at $x=0$. And it made me think: maybe I can say (about $\arctan(x)$ at $x=0$) that there is no limit for the second derivative, and therefore there are no derivatives of order greater than $2$. Am I right?</p>
Dario
156,754
<p>Use Taylor series: $$\log(1+x)=\sum_{n=1}^{\infty}(-1)^{n+1}\frac{x^n}{n}$$ $$\arctan(x)=\sum_{m=0}^{\infty}(-1)^{m}\frac{x^{2m+1}}{2m+1}$$ so $$f(x)=\log(1+x)\arctan(x)=\sum_{n=1}^{\infty}\sum_{m=0}^{\infty}(-1)^{n+m+1}\frac{x^{n+2m+1}}{n(2m+1)}$$ You want to compare this with $$f(x)=\sum_{k=0}^{\infty}\frac{f^{(k)}(0)}{k!}x^{k}\ .$$ To obtain the 74th derivative you have to compare the terms of degree 74 of the previous series: $$\sum_{\substack{n+2m+1=74\\ n\ge 1,\ m\ge 0}}\frac{(-1)^{n+m+1}}{n(2m+1)}=\frac{f^{(74)}(0)}{74!}\ ,$$ and thus $$f^{(74)}(0)=74!\sum_{\substack{n+2m+1=74\\ n\ge 1,\ m\ge 0}}\frac{(-1)^{n+m+1}}{n(2m+1)}$$</p>
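As a check, one can convolve the truncated Taylor series with exact rational arithmetic and compare against the double sum over $n+2m+1=74$ with $n\ge1$, $m\ge0$ (plain Python):

```python
from fractions import Fraction

N = 74

# log(1+x): coefficient of x^n is (-1)^(n+1)/n for n >= 1.
log_c = [Fraction(0)] * (N + 1)
for n in range(1, N + 1):
    log_c[n] = Fraction((-1) ** (n + 1), n)

# arctan(x): coefficient of x^(2m+1) is (-1)^m/(2m+1) for m >= 0.
atan_c = [Fraction(0)] * (N + 1)
for m in range((N - 1) // 2 + 1):
    atan_c[2 * m + 1] = Fraction((-1) ** m, 2 * m + 1)

# Coefficient of x^74 in the product (Cauchy convolution) ...
coeff = sum(log_c[k] * atan_c[N - k] for k in range(N + 1))

# ... equals the double sum over n + 2m + 1 = 74 with n >= 1, m >= 0.
double_sum = sum(Fraction((-1) ** (n + m + 1), n * (2 * m + 1))
                 for n in range(1, N) for m in range(N)
                 if n + 2 * m + 1 == N)

assert coeff == double_sum  # so f^(74)(0) = 74! * coeff
```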
1,554
<p>Suppose you have an incomplete Riemannian manifold with bounded sectional curvature such that its completion as a metric space is the manifold plus one additional point. Does the Riemannian manifold structure extend across the point singularity?</p> <p>(Penny Smith and I wrote a paper on this many years ago, but we had to assume that no arbitrarily short closed geodesics existed in a neighborhood of the singularity. I was never able to figure out how to get rid of this assumption and still would like someone better at Riemannian geometry than me to explain how. Or show me a counterexample.)</p> <p>EDIT: For simplicity, assume that the dimension of the manifold is greater than 2 and that in any neighborhood of the singularity, there exists a smaller punctured neighborhood of the singularity that is simply connected. In dimension 2, you have to replace this assumption by an appropriate holonomy condition. </p> <p>EDIT 2: Let's make the assumption above simpler and clearer. Assume dimension greater than 2 and that for any r > 0, there exists 0 &lt; r' &lt; r, such that the punctured geodesic ball B(p,r') \ {p} is simply connected, where p is the singular point. This precludes the possibility of an orbifold singularity.</p> <p>ADDITIONAL COMMENT: My approach to this was to construct a differentiable family of geodesic rays emanating from the singularity. Once I have this, then it is straightforward using Jacobi fields to show that this family must be naturally isomorphic to the standard unit sphere. Then using what Jost and Karcher call "almost linear coordinates", it is easy to construct a C^1 co-ordinate chart on a neighborhood of the singularity. (Read the paper. Nothing in it is hard.)</p> <p>But I was unable to build this family of geodesics without the "no small geodesic loop" assumption. To me this is an overly strong assumption that is essentially equivalent to assuming in advance that that differentiable family of geodesics exists. 
So I find our result to be totally unsatisfying. I don't see why this assumption should be necessary, and I still believe there should be an easy way to show this. Or there should be a counterexample.</p> <p>I have to say, however, that I am pretty sure that I did consult one or two pretty distinguished Riemannian geometers and they were not able to provide any useful insight into this.</p>
Igor Belegradek
1,573
<p>If by "extends across the point singularity" you mean extends smoothly, then I think you may just start with the Euclidean space thought of as a warped product over (0,infinity) with sphere as a fiber and replace the warping function r by any smooth function f(r) that is near r in C^2-topology. Then the curvature will not change much, while for the metric to be smooth at the origin f must satisfy consistency conditions on the higher derivatives of f at r=0 that surely will be violated for nearly every f. The consistency conditions are like those that can be found e.g. in Petersen's book, page 13 (in the first edition).</p> <p>It might be possible to build a multiple warped product example in which the link at the singular point is not a sphere but I am not sufficiently motivated to attempt the computation. Handy formulas for the curvature tensor of multiple warped products can be found in my paper arXiv:0711.2324 in appendix C. </p>
1,930,401
<p>Are there any non-linear real polynomials $p(x)$ such that $e^{p(x)}$ has a closed form antiderivative? If not, is the value of $\int_{0}^{\infty}e^{p(x)}dx$ known for any $p$ with negative leading term other than $-x$ and $-x^2$?</p>
Jack D'Aurizio
44,121
<p>In general, if $A_1 A_2\ldots A_n$ is an $n$-sided polygon and the lengths $l_1,l_2,\ldots,l_{n-1},l_n$ of $A_1 A_2,A_2 A_3,\ldots,A_{n-1} A_n, A_n A_1$ are fixed, the maximum area is achieved by the cyclic polygon, i.e. the polygon having all its vertices on a circle. You may easily prove this fact through <a href="https://en.wikipedia.org/wiki/Lagrange_multiplier" rel="nofollow">Lagrange multipliers</a> or Steiner's <a href="https://en.wikipedia.org/wiki/Symmetrization_methods" rel="nofollow">symmetrization method</a>. It is an instance of the <a href="https://mathproblems123.wordpress.com/2012/04/27/2523/" rel="nofollow">isoperimetric inequality for polygons</a>, which can be used to prove the usual <a href="https://en.wikipedia.org/wiki/Isoperimetric_inequality" rel="nofollow">isoperimetric inequality in the plane</a> through a limit process.</p>
1,930,401
<p>Are there any non-linear real polynomials $p(x)$ such that $e^{p(x)}$ has a closed form antiderivative? If not, is the value of $\int_{0}^{\infty}e^{p(x)}dx$ known for any $p$ with negative leading term other than $-x$ and $-x^2$?</p>
robjohn
13,854
<p><strong>Theorem:</strong> For given sidelengths, a cyclic polygon has maximal area.</p> <p><strong>Proof:</strong> Let <span class="math-container">$\{p_k\}_{k=1}^n$</span> be the vertices of the polygon.</p> <p>Set <span class="math-container">$v_k=p_k-p_{k-1}$</span> and <span class="math-container">$m_k=\frac12(p_k+p_{k-1})$</span>, where <span class="math-container">$p_0=p_n$</span>.</p> <p>Since the polygon is closed, <span class="math-container">$$ \sum_{k=1}^nv_k=0\implies\sum_{k=1}^n\delta v_k=0\tag1 $$</span> Since the <span class="math-container">$|v_k|$</span> are given, for <span class="math-container">$1\le k\le n$</span>, <span class="math-container">$$ v_k\cdot\delta v_k=0\tag2 $$</span> The area of the polygon is <span class="math-container">$$ A=\frac12\sum_{j\lt k}v_j^R\cdot v_k\tag3 $$</span> where <span class="math-container">$(x,y)^R=(y,-x)$</span>. Therefore, <span class="math-container">$$ \begin{align} \delta A &amp;=\frac12\sum_{j\lt k}v_j^R\cdot\delta v_k-\frac12\sum_{j\gt k}v_j^R\cdot\delta v_k\\ &amp;=\sum_{k=1}^n(m_k-p_0)^R\cdot\delta v_k\tag4 \end{align} $$</span> To maximize <span class="math-container">$(3)$</span> under conditions <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span>, that is, to guarantee that <span class="math-container">$(4)$</span> vanishes when <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span> hold, <a href="https://math.stackexchange.com/a/188306">orthogonality</a> requires that there be constants <span class="math-container">$\lambda_k$</span> and a point <span class="math-container">$p$</span> so that <span class="math-container">$$ (m_k-p_0)^R=p+\lambda_kv_k\tag5 $$</span> In other words, with <span class="math-container">$c=p_0-p^R$</span> <span class="math-container">$$ (m_k-c)\cdot v_k=0\tag6 $$</span> which translates to <span class="math-container">$$ |p_k-c|^2=|p_{k-1}-c|^2\tag7 $$</span> Thus, all <span 
class="math-container">$p_k$</span> lie on a circle with center <span class="math-container">$c$</span>.</p> <p><span class="math-container">$\large\square$</span></p>
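The simplest instance can be checked numerically with the area formula $(3)$ (a shoelace-type cross-product sum): among all rhombi with unit sides, the cyclic one, i.e. the square, has the largest area. A plain Python sketch:

```python
import math

def shoelace(pts):
    # Cross-product area sum, as in formula (3) of the proof.
    s = 0.0
    for i in range(len(pts)):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % len(pts)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

def rhombus(theta):
    # Quadrilateral with four unit sides and angle theta at the origin.
    return [(0.0, 0.0), (1.0, 0.0),
            (1 + math.cos(theta), math.sin(theta)),
            (math.cos(theta), math.sin(theta))]

square_area = shoelace(rhombus(math.pi / 2))  # the cyclic case
assert abs(square_area - 1.0) < 1e-12
for theta in (0.4, 0.9, 1.2, 2.0):
    assert shoelace(rhombus(theta)) < square_area
```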
3,702,094
<p>For a school project in chemistry I use systems of ODEs to calculate the concentrations of specific chemicals over time. Now I am wondering whether </p> <p><span class="math-container">$$ \frac{dX}{dt} =X(t) $$</span></p> <p>is the same as </p> <p><span class="math-container">$$ X(t)=e^t . $$</span> </p> <p>As far as I know, this should be correct, because the derivative of <span class="math-container">$ e^t $</span> is the same as the current value. Can anyone confirm that this is correct (or not)?</p> <p>I already searched for it on the internet but can't really find any articles about this. Thanks!</p>
Fred
380,717
<p>The differential equation </p> <p><span class="math-container">$$ \frac{d X}{dt}=X(t)$$</span></p> <p>has the general solution</p> <p><span class="math-container">$$X(t)=Ce^t$$</span></p> <p>where <span class="math-container">$C \in \mathbb R.$</span></p>
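A quick numerical cross-check for the school project (forward Euler, plain Python): integrating $\frac{dX}{dt}=X$ from $X(0)=C$ reproduces $Ce^t$.

```python
import math

def euler(C, t_end, steps):
    # Forward Euler for dX/dt = X, starting from X(0) = C.
    h = t_end / steps
    x = C
    for _ in range(steps):
        x += h * x  # x_{k+1} = x_k + h * x_k
    return x

C, t = 2.5, 1.0
assert abs(euler(C, t, 100000) - C * math.exp(t)) < 1e-3
```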
2,547,488
<p>We have always been taught that a function assigns to every element in the domain a single <em>unique</em> element in the range. If a rule of assignment assigns to one element in the domain more than one element in the range then it isn't a function.</p> <p>Now in Munkres' <em>Topology</em>, on page 107, it says:</p> <p>"If $f:X \rightarrow Y$ maps all of $X$ into the single point $y_0$ of $Y$, then $f$ is continuous."</p> <p>But we cannot speak of the continuity of $f$ unless $f^{-1}$ is defined, because the topological definition of continuity (in Munkres and other textbooks) states that $f:X \rightarrow Y$ is continuous if for every open subset $V$ of $Y$, the set $f^{-1}(V)$ is open in $X$. </p> <p>How then can the constant function above be continuous when $f^{-1}$ maps a single $y_0$ to ALL $x \in X$? How is it a function? Or is the elementary definition of "function" relaxed in topology?</p>
drhab
75,923
<p>$f^{-1}$ denoting a function needs not to be defined. </p> <p>$f^{-1}(V)$ denoting the <em>preimage of</em> $V$ <em>w.r.t.</em> $f$ needs to be defined (and is) for every set $V\subseteq Y$.</p> <p>It is defined as: $$f^{-1}(V):=\{x\in X\mid f(x)\in V\}$$</p> <p>If $f$ is constant then $f^{-1}(V)\in\{\varnothing, X\}\subseteq\tau_X$ for every $V\subseteq Y$ so $f$ is continuous.</p>
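In code terms (a toy illustration of my own with finite sets, not part of the original answer): the preimage is a plain set operation, defined for any function, invertible or not.

```python
def preimage(f, domain, V):
    """f^{-1}(V) = {x in domain : f(x) in V}; f need not be injective."""
    return {x for x in domain if f(x) in V}

X = {1, 2, 3, 4}
f = lambda x: 7                          # constant function with value y0 = 7

assert preimage(f, X, {7, 8}) == X       # V contains y0: preimage is all of X
assert preimage(f, X, {8, 9}) == set()   # V misses y0: preimage is empty
```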
312,847
<p>Let <span class="math-container">$k$</span> be a global field, and let <span class="math-container">$G = \mathbf G(\mathbb A_k)$</span> for a connected, reductive group <span class="math-container">$\mathbf G$</span> over <span class="math-container">$k$</span>. In <a href="https://services.math.duke.edu/~hahn/Chapter3.pdf" rel="noreferrer">these</a> notes by Jayce Getz and Heekyoung Hahn, a unitary representation of <span class="math-container">$G$</span> is a Hilbert space <span class="math-container">$V$</span> together with a continuous homomorphism <span class="math-container">$\pi: G \rightarrow \operatorname{GL}(V)$</span> whose image is contained in the group <span class="math-container">$U(V)$</span> of unitary operators on <span class="math-container">$V$</span>. </p> <p>What is the topology on <span class="math-container">$\operatorname{GL}(V)$</span> (which I assume is the group of bounded linear operators on <span class="math-container">$V$</span>) being considered here? Is it the induced topology coming from the norm topology?</p> <p>I am trying to compare this definition with one given by Gerald Folland in <em>A Course in Abstract Harmonic Analysis</em>, which requires that for each <span class="math-container">$v \in V$</span> the map <span class="math-container">$g \mapsto \pi(g)v$</span> be continuous <span class="math-container">$G \rightarrow V$</span>, where <span class="math-container">$V$</span> is taken in the norm topology. Are these two definitions of unitary representations different?</p> <p>This matters because one later defines the Fell topology on the unitary dual <span class="math-container">$\hat{G}$</span> of <span class="math-container">$G$</span>, and I want to know which representations are actually in <span class="math-container">$\hat{G}$</span>.</p>
paul garrett
15,629
<p>It is absolutely essential that the space of (bounded/continuous) operators be given the "strong" operator topology (strictly weaker than the norm topology), and the map <span class="math-container">$G\times V\to V$</span> to be jointly continuous.</p> <p>This is not a pathology: even in very simple cases, such as <span class="math-container">$G=\mathbb R$</span> acting on <span class="math-container">$V=L^2(\mathbb R)$</span>, that joint continuity fails when operators are given the uniform norm topology, since there is no sufficient uniform bound on change in <span class="math-container">$g\in G$</span> so that <span class="math-container">$g\cdot v$</span> is close to <span class="math-container">$v$</span> for all <span class="math-container">$v\in V$</span>. E.g., tent functions with ever-narrowing support illustrate this.</p>
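The tent-function phenomenon can be seen numerically (a rough sketch with an ad-hoc grid of my own choosing): a unit-norm tent of width $w$, translated by only $s=w$, sits at $L^2$-distance $\sqrt2$ from itself no matter how small $w$ is, so $s\mapsto T_s$ cannot be continuous in the operator-norm topology.

```python
import math

def tent(w):
    """Unit-L2-norm tent supported on [0, w]."""
    h = math.sqrt(3.0 / w)            # height: the height-1 tent has integral of square w/3
    return lambda x: h * max(0.0, 1.0 - abs(2.0 * x / w - 1.0))

for w in [0.5, 0.05, 0.005]:
    f = tent(w)
    dx = w / 2000.0
    xs = [i * dx for i in range(-1000, 5001)]      # grid covering [-w/2, 5w/2]
    # translate by s = w: the supports [0, w] and [w, 2w] are disjoint
    dist = math.sqrt(sum((f(x) - f(x - w)) ** 2 for x in xs) * dx)
    assert abs(dist - math.sqrt(2.0)) < 2e-2
```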
110,078
<p>Let $0&lt; \alpha&lt; n$, $1 &lt; p &lt; q &lt; \infty$ and $\frac{1}{q}=\frac{1}{p}-\frac{\alpha}{n}$. Then: $ \left \| \int_{\mathbb{R}^n} \frac{f(y)dy}{|x-y|^{n-\alpha} } \right\|_{L^q(\mathbb{R}^n)}\leq$ $C\left\| f\right\| _{L^p(\mathbb{R^n})}$.</p>
user23078
23,078
<p>This is the standard Hardy-Littlewood-Sobolev inequality (or the theorem of fractional integration). A more direct approach is to write $$ \int{f(x-y)|y|^{\alpha-n}dy}=\int_{|y|&lt;R}+\int_{|y|\ge R} $$ For the second term on the RHS, Hölder's inequality shows that it is dominated by $\|f\|_{L^p}R^{-\frac{n}{q}}$. For the first term, one can use the majorization given by the maximal function $M$, and so $$ |f\ast |y|^{\alpha-n}|(x)\leq C(M(f)(x)\cdot R^{\alpha}+\|f\|_{L^p}\cdot R^{-\frac{n}{q}}) $$ Choosing the constant $R$ to make the two terms above equal, the desired inequality then follows by integration (note that the maximal operator is bounded on $L^p$ for $1&lt;p&lt;\infty$).</p>
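The exponent bookkeeping in the balancing step can be checked numerically (a sketch of my own, with arbitrary stand-in values $M$ for $Mf(x)$ and $F$ for $\|f\|_{L^p}$): the radius that equalizes $MR^{\alpha}$ and $FR^{-n/q}$ turns the pointwise bound into $2\,M^{p/q}F^{1-p/q}$, whose $L^q$-norm is then controlled by the $L^p$ boundedness of the maximal operator.

```python
p, n, alpha = 2.0, 3.0, 1.0
q = 1.0 / (1.0 / p - alpha / n)       # Sobolev relation: 1/q = 1/p - alpha/n
M, F = 1.7, 0.9                       # stand-ins for Mf(x) and ||f||_{L^p}

R = (F / M) ** (p / n)                # the radius balancing the two terms
term1 = M * R ** alpha
term2 = F * R ** (-n / q)
assert abs(term1 - term2) < 1e-12     # the two terms are indeed equal at this R

bound = term1 + term2
# pointwise bound becomes C * M^{p/q} * F^{1-p/q}
assert abs(bound - 2 * M ** (p / q) * F ** (1 - p / q)) < 1e-12
```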
69,476
<p>Hello everybody !</p> <p>I was reading a book on geometry which taught me that one could compute the volume of a simplex through the determinant of a matrix, and I thought (I'm becoming a worse computer scientist each day) that if the result is exact this may not be the computationally fastest way possible to do it.</p> <p>Hence, the following problem : if you are given a polynomial in one (or many) variables $\alpha_1 x^1 + \dots + \alpha_n x^n$, what is the cheapest way (in terms of operations) to evaluate it ?</p> <p>Indeed, if you know that your polynomial is $(x-1)^{1024}$, you can do much, much better than computing all the different powers of $x$ and multiplying them by their corresponding factors.</p> <p>However, this is not a problem of factorization, as knowing that the polynomial is equal to $(x-1)^{1024} + (x-2)^{1023}$ is also much better than the naive evaluation.</p> <p>Of course, multiplication and addition all have different costs on a computer, but I would be quite glad to understand how to minimize the "total number of operations" (additions + multiplications) for a start ! I had no idea how to look for the corresponding literature, and so I am asking for your help on this one :-)</p> <p>Thank you !</p> <p>Nathann</p> <p>P.S. : <em>I am actually looking for a way, given a polynomial, to obtain a sequence of addition/multiplication that would be optimal to evaluate it. This sequence would of course only work for <strong>THIS</strong> polynomial and no other. It may involve working for hours to find out the optimal sequence corresponding to this polynomial, so that it may be evaluated many times cheaply later on.</em></p>
pEquals2
40,735
<p>For evaluation of a general polynomial in one variable, the provably fastest method is Horner's scheme as Emil has pointed out. It is worth mentioning that this scheme has a more popular face in the form of <a href="http://en.wikipedia.org/wiki/Polynomial_remainder_theorem" rel="nofollow">little Bézout's theorem</a> paired with <a href="http://en.wikipedia.org/wiki/Synthetic_division" rel="nofollow">Synthetic division</a> as is often taught in Precalculus courses in the U.S.A. This is implicitly present in the Wikipedia article for Horner's method, but the relationship is not well explained. A convenient consequence of this fact is that on computer algebra systems, storing polynomials in Horner normal form adds no performance benefit when it comes to evaluation.<br> To see that these two algorithms are the same one only needs to verify that the base step and the recursive procedure for both match. I do this for evaluation at $x_0$ of $$f(x)=\sum_{i=0}^n a_ix^i = \left((\dots(a_nx+a_{n-1})x+\dots)x+a_1\right)x+a_0,$$ where the center expression is the general form of a polynomial in one variable and the right-hand-side (RHS) is its Horner normal form. In synthetic division, as in evaluation of the RHS following the order of operations, one begins by bringing down $a_n$, which will be denoted $y_n$; the first operation performed is then the multiplication $x_0\cdot y_n=x_0\cdot a_n$. For the recursive step in synthetic division one sets $y_{i}=x_0\cdot y_{i+1}+a_{i}$ for $n&gt;i\geq0$. For $i&lt;n$, this is precisely the calculation of one more set of parentheses of the RHS, counting in-to-out, and thus can be thought of as one more iteration of evaluating the RHS of the equation.<br> In this scheme $f(x_0)=y_0$. </p>
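For concreteness, here is the scheme in code (a small sketch of my own; I take coefficients highest-degree-first): a single pass produces both $f(x_0)$, the remainder promised by little Bézout's theorem, and the quotient coefficients of $f(x)$ divided by $(x-x_0)$, which is exactly synthetic division.

```python
def horner(coeffs, x0):
    """One pass of Horner / synthetic division.
    coeffs = [a_n, ..., a_1, a_0], highest degree first.
    Returns (f(x0), coefficients of the quotient f(x) / (x - x0))."""
    acc = 0
    out = []
    for a in coeffs:
        acc = acc * x0 + a
        out.append(acc)
    return out[-1], out[:-1]          # remainder = f(x0), by little Bezout

# f(x) = 2x^3 - 6x^2 + 2x - 1, evaluated at x0 = 3
val, quot = horner([2, -6, 2, -1], 3)
assert val == 5                       # f(3) = 54 - 54 + 6 - 1 = 5
assert quot == [2, 0, 2]              # f(x) = (x - 3)(2x^2 + 0x + 2) + 5
```

Note that the loop performs exactly $n$ multiplications and $n$ additions, which is the optimal count for a general degree-$n$ polynomial.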
823,928
<p>Prove that every ring between a principal ideal ring $K$ and its field of fractions $Q$ is a principal ideal ring.</p>
Bill Dubuque
242
<p><strong>Hint</strong> $ $ They're localizations since $\,K[a/b] = K[1/b],\,$ by $\,(a,b) =1\,\Rightarrow\, ra+sb = 1\,\Rightarrow\, ra/b + s = 1/b$</p>
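The hint's identity can be checked with exact rational arithmetic (a toy example of my own over $K=\Bbb Z$, with $a,b$ chosen arbitrarily coprime): the Bézout relation $ra+sb=1$ exhibits $1/b$ inside $\Bbb Z[a/b]$.

```python
from fractions import Fraction

def bezout(a, b):
    """Extended Euclid: returns (g, r, s) with r*a + s*b = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, r, s = bezout(b, a % b)
    return g, s, r - (a // b) * s

a, b = 3, 7                      # gcd(3, 7) = 1
g, r, s = bezout(a, b)
assert g == 1 and r * a + s * b == 1
# r*(a/b) + s = (ra + sb)/b = 1/b, so 1/b already lies in Z[a/b]
assert r * Fraction(a, b) + s == Fraction(1, b)
```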
122,728
<p>Suppose that I have a <a href="http://en.wikipedia.org/wiki/Symmetric_matrix" rel="nofollow noreferrer">symmetric</a> <a href="http://en.wikipedia.org/wiki/Toeplitz_matrix" rel="nofollow noreferrer">Toeplitz</a> <span class="math-container">$n\times n$</span> matrix</p> <p><span class="math-container">$$\mathbf{A}=\left[\begin{array}{cccc}a_1&amp;a_2&amp;\cdots&amp; a_n\\a_2&amp;a_1&amp;\cdots&amp;a_{n-1}\\\vdots&amp;\vdots&amp;\ddots&amp;\vdots\\a_n&amp;a_{n-1}&amp;\cdots&amp;a_1\end{array}\right]$$</span></p> <p>where <span class="math-container">$a_i \geq 0$</span>, and a diagonal matrix</p> <p><span class="math-container">$$\mathbf{B}=\left[\begin{array}{cccc}b_1&amp;0&amp;\cdots&amp; 0\\0&amp;b_2&amp;\cdots&amp;0\\\vdots&amp;\vdots&amp;\ddots&amp;\vdots\\0&amp;0&amp;\cdots&amp;b_n\end{array}\right]$$</span></p> <p>where <span class="math-container">$b_i = \frac{c}{\beta_i}$</span> for some constant <span class="math-container">$c&gt;0$</span> such that <span class="math-container">$\beta_i&gt;0$</span>. Let</p> <p><span class="math-container">$$\mathbf{M}=\mathbf{A}(\mathbf{A}+\mathbf{B})^{-1}\mathbf{A}$$</span></p> <p>Can one express a partial derivative <span class="math-container">$\partial_{\beta_i} \operatorname{Tr}[\mathbf{M}]$</span> in closed form, where <span class="math-container">$\operatorname{Tr}[\mathbf{M}]$</span> is the <a href="http://en.wikipedia.org/wiki/Matrix_trace" rel="nofollow noreferrer">trace</a> operator?</p>
joriki
6,622
<p>Expanding $\mathbf A(\mathbf A + \mathbf B + \mathbf E)^{-1}\mathbf A$ in $\mathbf E$ yields $\mathbf A(\mathbf A + \mathbf B)^{-1}\mathbf A-\mathbf A(\mathbf A + \mathbf B)^{-1}\mathbf E(\mathbf A + \mathbf B)^{-1}\mathbf A$ up to first order. Thus</p> <p>$$ \begin{eqnarray} \frac{\partial\operatorname{Tr}[M]}{\partial\beta_i} &amp;=&amp; -\operatorname{Tr}\left[\mathbf A(\mathbf A + \mathbf B)^{-1}\frac{\partial\mathbf B}{\partial\beta_i}(\mathbf A + \mathbf B)^{-1}\mathbf A\right] \\ &amp;=&amp; -\operatorname{Tr}\left[\frac{\partial\mathbf B}{\partial\beta_i}(\mathbf A + \mathbf B)^{-1}\mathbf A\mathbf A(\mathbf A + \mathbf B)^{-1}\right] \\ &amp;=&amp; \frac c{\beta_i^2}\left((\mathbf A + \mathbf B)^{-1}\mathbf A\mathbf A(\mathbf A + \mathbf B)^{-1}\right)_{ii}\;. \end{eqnarray} $$</p>
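A finite-difference sanity check of this closed form (a sketch with a small, arbitrarily chosen symmetric Toeplitz $\mathbf A$, $\boldsymbol\beta$ and $c$ of my own; note that $\mathbf A(\mathbf A+\mathbf B)^{-1}=\big((\mathbf A+\mathbf B)^{-1}\mathbf A\big)^{\top}$ by symmetry, so the middle product is $KK^{\top}$):

```python
import numpy as np

n, c = 5, 2.0
a = [2.0, 0.3, 0.1, 0.05, 0.02]            # first row of the symmetric Toeplitz A
A = np.array([[a[abs(i - j)] for j in range(n)] for i in range(n)])
beta = np.array([0.7, 1.1, 0.9, 1.3, 0.8])

def tr_M(beta):
    B = np.diag(c / beta)
    return np.trace(A @ np.linalg.solve(A + B, A))   # Tr[A (A+B)^{-1} A]

i, h = 2, 1e-6
e = np.zeros(n); e[i] = h
numeric = (tr_M(beta + e) - tr_M(beta - e)) / (2 * h)   # central difference

K = np.linalg.solve(A + np.diag(c / beta), A)           # (A+B)^{-1} A
closed = (c / beta[i] ** 2) * (K @ K.T)[i, i]           # the formula above
assert abs(numeric - closed) < 1e-4
```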
31,308
<p>Apologies if my question is poorly phrased. I'm a computer scientist trying to teach myself about generalized functions. (Simple explanations are preferred. -- Thanks.)</p> <p>One of the references I'm studying states that the space of Schwartz test functions of rapid decrease is the set of infinitely differentiable functions: $\varphi: \mathbb{R} \rightarrow \mathbb{R}$ such that for all natural numbers $n$ and $r$,</p> <p>$\lim_{x\rightarrow\pm\infty} |x^n \varphi^{(r)}(x)|$</p> <p>What I would like to know is why is necessary or important for test functions to decay rapidly in this manner? i.e. faster than powers of polynomials. I'd appreciate an explanation of the intuition behind this statement and if possible a simple example.</p> <p>Thanks.</p> <p>EDIT: the OP is actually interested in a particular 1994 paper on "Spatial Statistics" by Kent and Mardia, 1994 Link between kriging and thin plate splines (with J. T. Kent). In Probability, Statistics and Optimization (F. P. Kelly ed.). Wiley, New York, pp 325-339.</p> <p>Both are in Statistics at Leeds,</p> <p><a href="http://www.amsta.leeds.ac.uk/~sta6kvm/" rel="nofollow">http://www.amsta.leeds.ac.uk/~sta6kvm/</a> </p> <p><a href="http://www.maths.leeds.ac.uk/~john/" rel="nofollow">http://www.maths.leeds.ac.uk/~john/</a> </p> <p><a href="http://www.amsta.leeds.ac.uk/~sta6kvm/SpatialStatistics.html" rel="nofollow">http://www.amsta.leeds.ac.uk/~sta6kvm/SpatialStatistics.html</a> </p> <p>Scanned article: <a href="http://www.gigasize.com/get.php?d=90wl2lgf49c" rel="nofollow">http://www.gigasize.com/get.php?d=90wl2lgf49c</a> </p> <p>FROM THE OP: Here is motivation for my question: I'm trying to understand a paper that replaces an integral $$\int f(\omega) d\omega$$ with $$\int \frac{|\omega|^{2p + 2}}{ (1 + |\omega|^2)^{p+1}} \; f(\omega) \; d\omega$$ where $p \ge 0$ ($p = -1$ yields to the unintegrable expression) because $f(\omega)$ contains a singularity at the origin i.e. 
is of the form $\frac{1}{\omega^2}.$ </p> <p>LATER, ALSO FROM THE OP: I understand some parts of the paper but not all of it. For example, I am unable to justify the equations (2.5) and (2.7). Why do they take these forms and not some other form?</p>
Willie Wong
3,948
<p>From the Fourier analysis point of view, the reason is the property of the Fourier transform to interchange derivatives and multiplications, which you can read more about on Wikipedia. The crucial point is that <em>the smoothness of a function is directly related to the decay rate of its (inverse) Fourier transform</em>. So if you want a family of infinitely differentiable functions whose Fourier transform is also infinitely differentiable, you are necessarily led to consider the Schwartz class. </p> <p>As a by-product of the definition, you also have that the Schwartz class is closed under pointwise multiplication and under convolutions. </p>
31,308
<p>Apologies if my question is poorly phrased. I'm a computer scientist trying to teach myself about generalized functions. (Simple explanations are preferred. -- Thanks.)</p> <p>One of the references I'm studying states that the space of Schwartz test functions of rapid decrease is the set of infinitely differentiable functions: $\varphi: \mathbb{R} \rightarrow \mathbb{R}$ such that for all natural numbers $n$ and $r$,</p> <p>$\lim_{x\rightarrow\pm\infty} |x^n \varphi^{(r)}(x)|$</p> <p>What I would like to know is why is necessary or important for test functions to decay rapidly in this manner? i.e. faster than powers of polynomials. I'd appreciate an explanation of the intuition behind this statement and if possible a simple example.</p> <p>Thanks.</p> <p>EDIT: the OP is actually interested in a particular 1994 paper on "Spatial Statistics" by Kent and Mardia, 1994 Link between kriging and thin plate splines (with J. T. Kent). In Probability, Statistics and Optimization (F. P. Kelly ed.). Wiley, New York, pp 325-339.</p> <p>Both are in Statistics at Leeds,</p> <p><a href="http://www.amsta.leeds.ac.uk/~sta6kvm/" rel="nofollow">http://www.amsta.leeds.ac.uk/~sta6kvm/</a> </p> <p><a href="http://www.maths.leeds.ac.uk/~john/" rel="nofollow">http://www.maths.leeds.ac.uk/~john/</a> </p> <p><a href="http://www.amsta.leeds.ac.uk/~sta6kvm/SpatialStatistics.html" rel="nofollow">http://www.amsta.leeds.ac.uk/~sta6kvm/SpatialStatistics.html</a> </p> <p>Scanned article: <a href="http://www.gigasize.com/get.php?d=90wl2lgf49c" rel="nofollow">http://www.gigasize.com/get.php?d=90wl2lgf49c</a> </p> <p>FROM THE OP: Here is motivation for my question: I'm trying to understand a paper that replaces an integral $$\int f(\omega) d\omega$$ with $$\int \frac{|\omega|^{2p + 2}}{ (1 + |\omega|^2)^{p+1}} \; f(\omega) \; d\omega$$ where $p \ge 0$ ($p = -1$ yields to the unintegrable expression) because $f(\omega)$ contains a singularity at the origin i.e. 
is of the form $\frac{1}{\omega^2}.$ </p> <p>LATER, ALSO FROM THE OP: I understand some parts of the paper but not all of it. For example, I am unable to justify the equations (2.5) and (2.7). Why do they take these forms and not some other form?</p>
Will Jagy
3,324
<p>Just a quick clue. The example you want is essentially the Gaussian normal distribution from probability, $$ \frac{1}{\sqrt {2 \pi}} \; \; e^{- x^2 / 2} $$ and probably the simplest motivation is that the Fourier transform of this function is just itself (well, up to a constant multiple, depends on whose definition you have).</p> <p>These are a stand-in for functions of compact support. A function and its Fourier transform cannot both have compact support, that is a fact of life. </p> <p>See:</p> <p><a href="http://en.wikipedia.org/wiki/Schwartz_space" rel="nofollow">http://en.wikipedia.org/wiki/Schwartz_space</a></p>
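That fixed-point property is easy to check numerically (a sketch of my own; the convention $\hat f(\xi)=\frac1{\sqrt{2\pi}}\int f(x)e^{-i\xi x}\,dx$ and the quadrature grid are my choices — this is the convention under which the constant multiple is exactly $1$):

```python
import cmath, math

L, N = 20.0, 40000                     # truncate the integral to [-L, L]
dx = 2 * L / N
xs = [-L + i * dx for i in range(N + 1)]
for xi in [0.0, 0.5, 1.0, 2.0]:
    # Riemann sum for (1/sqrt(2*pi)) * integral of exp(-x^2/2) exp(-i xi x) dx
    fhat = sum(math.exp(-x * x / 2) * cmath.exp(-1j * xi * x) for x in xs) \
           * dx / math.sqrt(2 * math.pi)
    assert abs(fhat - math.exp(-xi * xi / 2)) < 1e-8   # transform is the Gaussian itself
```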
1,640,733
<p>I think it is true that any power of a logarithm, no matter how big, will eventually grow slower than a linear function with positive slope.</p> <p>Is it true that for any exponent $m&gt;0$ (no matter how big we make $m$), the function $f(x)$ $$f(x)=(\ln x)^m$$</p> <p>will eventually always be less than $g(x) = x$ ?</p> <p>I am pretty sure this is true because I have tried large values of $m$ and it always eventually slows down. But I don't know how to prove it.</p>
Clement C.
75,808
<p>Yes, this is true. This is equivalent to proving that, for any $a &gt; 0$, we have $$ \frac{\ln x}{x^a} \xrightarrow[x\to\infty]{}0 $$ (you can see it by setting $a=\frac{1}{m}$ from your question), which itself is equivalent to showing $$ \frac{a\ln x}{x^a} = \frac{\ln x^a}{x^a} \xrightarrow[x\to\infty]{}0 $$ so, at the end of the day, it is sufficient to show ("setting $t = x^a$") that $$ \frac{\ln t}{t} \xrightarrow[t\to\infty]{}0. $$ Now, that you can show in many ways (some would say "L'Hopital's rule"). One of them is to do (yet another) change of variable, $t=e^u$, and show the equivalent statement $$ \frac{u}{e^u} \xrightarrow[u\to\infty]{}0. $$ Now, this is obvious given the series definition of the exponential, for instance, as $e^u = 1+u+\frac{u^2}{2}+\sum_{n=3}^\infty \frac{u^n}{n!} &gt; \frac{u^2}{2}$ (the last inequality for $u &gt; 0$), so that $\frac{u}{e^u} &lt; \frac{2}{u}$ for $u&gt;0$.</p>
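Numerically (working with logarithms to dodge floating-point overflow — an illustration of my own, not part of the proof): even for the big exponent $m=100$, the ratio $(\ln x)^m/x$ is eventually decreasing and eventually tiny.

```python
import math

m = 100
def log_ratio(x):
    """log of (ln x)^m / x, evaluated in log space to avoid overflow."""
    return m * math.log(math.log(x)) - math.log(x)

vals = [log_ratio(10.0 ** k) for k in (100, 150, 200, 250, 300)]
assert all(u > v for u, v in zip(vals, vals[1:]))   # eventually decreasing
assert vals[-1] < -30     # (ln x)^100 / x < e^{-30} already at x = 10^300
```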
3,573,811
<p>This is a theorem given by my professor from Artin Algebra:</p> <p>Suppose that a finite abelian group <span class="math-container">$V$</span> is a direct sum of cyclic groups of prime power orders <span class="math-container">$d_j=p_j^{r_j}$</span>. The integers <span class="math-container">$d_j$</span> are uniquely determined by the group <span class="math-container">$V$</span>. </p> <p>Proof: Let <span class="math-container">$p$</span> be one of those primes that appear in the direct sum decomposition of <span class="math-container">$V$</span>, and let <span class="math-container">$c_i$</span> denote the number of cyclic groups of order <span class="math-container">$p^i$</span> in the decomposition. The set of elements whose orders divide <span class="math-container">$p^i$</span> is a subgroup of <span class="math-container">$V$</span> whose order is a power of <span class="math-container">$p$</span>, say <span class="math-container">$p^{l_i}$</span>. Let <span class="math-container">$k$</span> be the largest index such that <span class="math-container">$c_k&gt;0$</span>. Then </p> <p><span class="math-container">$l_1=c_1+c_2+c_3+...+c_k$</span></p> <p><span class="math-container">$l_2=c_1+2c_2+2c_3+...+2c_k$</span></p> <p><span class="math-container">$l_3=c_1+2c_2+3c_3+...+3c_k$</span></p> <p><span class="math-container">$\vdots$</span></p> <p><span class="math-container">$l_k=c_1+2c_2+3c_3+...+kc_k$</span></p> <p>The exponents <span class="math-container">$l_i$</span> determine the integers <span class="math-container">$c_i$</span>.</p> <p>DONE</p> <p>The only thing I understand is that abelian groups can be decomposed as cyclic groups of prime power order. Unlike my other questions on this site, I have zero idea about this proof. I might not even understand the statement of the proof. Please help me understand this proof.</p> <p>Thanks I get it now</p>
Oliver Kayende
704,766
<p>Suppose <span class="math-container">$G$</span> is an abelian <span class="math-container">$p$</span>-group. The cases <span class="math-container">$|G|=1,p$</span> are obvious. Proceed by induction and assume every proper <span class="math-container">$G$</span> subgroup can be uniquely decomposed, up to order of summands, as a product of cyclic <span class="math-container">$p$</span>-groups. If the decomposition of <span class="math-container">$G$</span> is not unique then we'd have <span class="math-container">$$G\approx\Bbb Z_{p^{r_1}}^+\times\cdots\times\Bbb Z_{p^{r_m}}^+\approx\Bbb Z_{p^{s_1}}^+\times\cdots\times\Bbb Z_{p^{s_n}}^+$$</span> but then this would yield two distinct decompositions <span class="math-container">$$G'\approx\Bbb Z_{p^{r_1-1}}^+\times\cdots\times\Bbb Z_{p^{r_m-1}}^+\approx\Bbb Z_{p^{s_1-1}}^+\times\cdots\times\Bbb Z_{p^{s_n-1}}^+$$</span></p> <p>of the proper <span class="math-container">$G$</span> subgroup <span class="math-container">$G':=\{g^p:g\in G\}=\text{Im}(\phi),\phi:g\mapsto g^p$</span>, which violates our induction hypothesis. Therefore, each decomposition of <span class="math-container">$G$</span> into prime power cyclic groups is unique up to order of summands. </p>
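The counting argument in the quoted theorem can be exercised in code (a toy model of my own: represent $\bigoplus_j \Bbb Z_{p^{r_j}}$ just by its list of exponents $r_j$; then $l_i=\sum_j\min(i,r_j)$, and the first differences $l_i-l_{i-1}=\#\{j: r_j\ge i\}$ recover every count $c_i$):

```python
from collections import Counter

def ell(rs, i):
    """l_i for G = prod_j Z_{p^{r_j}}: the elements of order dividing p^i
    form a subgroup of order p^{l_i}, with l_i = sum_j min(i, r_j)."""
    return sum(min(i, r) for r in rs)

def recover_counts(rs):
    """Recover {i: c_i} from the exponents l_i alone."""
    k = max(rs)
    l = [ell(rs, i) for i in range(k + 2)]          # l[0] = 0, pad one past k
    d = [l[i] - l[i - 1] for i in range(1, k + 2)]  # d[i-1] = #{j : r_j >= i}
    return {i: d[i - 1] - d[i] for i in range(1, k + 1) if d[i - 1] > d[i]}

rs = [1, 1, 2, 4, 4, 4]          # (Z_p)^2 x Z_{p^2} x (Z_{p^4})^3
assert recover_counts(rs) == dict(Counter(rs))
```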
88,122
<p>For the easiest case, assume that $L/E$ is Galois and $E/K$ is Galois. Under what conditions can we conclude that $L/K$ is Galois? I guess the general case can be a bit tricky, but are there some "sufficiently general" cases that are interesting and for which the question can be answered?</p> <p>EDIT: Since Jyrki's reply seems to suggest that there is no general criterion on the groups. Can we say something if we put criterions on the fields? Assume say that $K=\mathbb{Q}$ or $K=\mathbb{Q}_p$?</p>
Ted
15,012
<p>Always. Galois = normal + separable. A tower of normal extensions is normal, and a tower of separable extensions is separable.</p> <p>Edit: That's wrong. Separability is transitive, but not normality. See comments below.</p>
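The standard counterexample behind the edit can be poked at numerically (my own addition): $\Bbb Q\subset\Bbb Q(\sqrt2)\subset\Bbb Q(\sqrt[4]2)$ is a tower of degree-2 (hence normal) steps, yet $x^4-2$ has two non-real roots, so it cannot split in the real field $\Bbb Q(\sqrt[4]2)$ and the top extension is not normal over $\Bbb Q$.

```python
import cmath

r = 2 ** 0.25                                   # the real fourth root of 2
roots = [r * cmath.exp(2j * cmath.pi * k / 4) for k in range(4)]
for z in roots:
    assert abs(z ** 4 - 2) < 1e-9               # all four are roots of x^4 - 2
real_roots = [z for z in roots if abs(z.imag) < 1e-12]
assert len(real_roots) == 2                     # the other two are non-real
```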
344,345
<p>Are there any relationship between the scalar curvature and the simplicial volume? </p> <p>The simplicial volume is zero (positive) on Torus (Hyperbolic manifold) and those manifolds does not admit a Riemannian metric with positive scalar curvature. What do we know about the simplicial volume of a Riemannian manifold with positive scalar curvature?</p>
Grisha Papayanov
43,309
<p>By theorems of Wilking and Milnor (see <a href="https://www.sciencedirect.com/science/article/pii/S0926224500000309" rel="nofollow noreferrer">On fundamental groups of manifolds of nonnegative curvature</a> by Wilking), the fundamental group of a compact manifold with nonnegative <strong>sectional</strong> (not scalar, I've misread the original question) curvature is of polynomial growth. By Gromov's theorem, it is virtually nilpotent, hence amenable. Compact manifolds with amenable fundamental group have vanishing simplicial volume.</p>
1,651,427
<blockquote> <p>Let $f$ be a bounded function on $[0,1]$. Assume that for any $x\in[0,1)$, $f(x+)$ exists. Define $g(x)=f(x+)$, $x\in [0,1)$, and $g(1)=f(1)$. Is $g(x)$ right continuous? </p> </blockquote> <p>Prove it or give me a counterexample.</p> <p>My ideas:</p> <p>$(1)$If $f$ is of bounded variation, then $g$ must be right continuous.</p> <p>$(2)$If the continuous points of $f$ are dense in $[0,1]$, then $g$ must be right continuous.</p> <p>But I can not find a counterexample. Please help me. Thanks!</p>
DanielWainfleet
254,665
<p>For $x\in [0,1)$ let $(x_n)_n$ be a sequence in $(x,1)$ converging to $x.$</p> <p>For each $n$ let $x_n&lt;y_n&lt; \min (1, x_n+2^{-n}).$</p> <p>Let $z_n\in (x_n,y_n)$ such that $|f(z_n)-g(x_n)|&lt;2^{-n}.$ </p> <p>Then $(\;f(z_n)\;)_n$ converges to $g(x)$ and $(\;f(z_n)-g(x_n)\;)_n$ converges to $0,$ so $(\;g(x_n)\;)_n$ converges to $g(x).$</p>
201,163
<p>I have a data set that contains data of the form (x0, y0, f1, f2, i1, i2, i3). The (x0, y0) are the coordinates, while the values f1 and f2 are real numbers (i1, i2, i3 correspond to some integers which are used as indices). The data can be downloaded <a href="http://www.mediafire.com/file/0xtxn8rggjorhdj/basins_%2528L4%2529.out/file" rel="nofollow noreferrer">here</a>.</p> <p>Now I plot the (x0, y0) coordinates of the data with i2 = 4, where each point is colored according to the value of f1. </p> <p><a href="https://i.stack.imgur.com/GSsy9.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GSsy9.jpg" alt="enter image description here"></a></p> <p>As you can see, there are missing points. The data set with all the missing points can be found <a href="http://www.mediafire.com/file/obqpo2pfq56sa24/data_LGs.dat/file" rel="nofollow noreferrer">here</a></p> <p><a href="https://i.stack.imgur.com/owMyc.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/owMyc.jpg" alt="enter image description here"></a> </p> <p>Now, how can I use the original data with the f1 and f2 values, so as to interpolate and predict the f1 and f2 values of the missing points? Any suggestions? </p>
Roman
26,598
<p>read the data:</p> <pre><code>SetDirectory[NotebookDirectory[]]; data = Import["basins_(L4).out.txt", "Table"]; </code></pre> <p>interpolate <span class="math-container">$f_1$</span> and <span class="math-container">$f_2$</span>: linear interpolation on irregular grid,</p> <pre><code>F1 = Interpolation[{{#[[1]], #[[2]]}, #[[3]]} &amp; /@ Select[data, #[[6]] == 4 &amp;], InterpolationOrder -&gt; 1]; F2 = Interpolation[{{#[[1]], #[[2]]}, #[[4]]} &amp; /@ Select[data, #[[6]] == 4 &amp;], InterpolationOrder -&gt; 1]; </code></pre> <p>evaluate <span class="math-container">$f_1$</span> and <span class="math-container">$f_2$</span> on the entire grid (interpolate missing data):</p> <pre><code>T1 = Table[{x,y, F1[x,y]}, {x,Union[data[[All, 1]]]}, {y,Union[data[[All,2]]]}]; T2 = Table[{x,y, F2[x,y]}, {x,Union[data[[All, 1]]]}, {y,Union[data[[All,2]]]}]; </code></pre> <p>plots of the interpolated functions in the style of <a href="https://mathematica.stackexchange.com/a/201176/26598">@kickert's solution</a>:</p> <pre><code>ListPointPlot3D[Join @@ T1] </code></pre> <p><a href="https://i.stack.imgur.com/De4cI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/De4cI.png" alt="enter image description here"></a></p> <pre><code>ListPointPlot3D[Join @@ T2] </code></pre> <p><a href="https://i.stack.imgur.com/Fueoj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Fueoj.png" alt="enter image description here"></a></p>
2,354,004
<p>I'm struggling with the following sum:</p> <p>$$\sum_{n=0}^\infty \frac{n!}{(2n)!}$$</p> <p>I know that the final result will use the error function, but will not use any other non-elementary functions. I'm fairly sure that it doesn't telescope, and I'm not even sure how to get $\operatorname {erf}$ out of that.</p> <p>Can somebody please give me a hint? No full answers, please.</p>
Simply Beautiful Art
272,831
<p>One might note that:</p> <p>$$\frac{n!}{(2n)!}=\frac{\sqrt\pi}{4^n\Gamma(n+1/2)}$$</p> <p>Indeed, this makes your series a special case of the following function:</p> <p>$$f_\alpha(x)=\sum_{n=0}^\infty\frac{x^n}{\Gamma(n+\alpha)}$$</p> <p>With $S=\sqrt\pi f_{1/2}(1/4)$.</p> <p>Wonderfully, this problem serves as a good introduction for fractional derivatives. Let us define here the following:</p> <p>$$\frac{\mathrm d^\alpha}{\mathrm dx^\alpha}f(x)=\frac1{\Gamma(1-\{\alpha\})}\frac{\mathrm d^{\lceil\alpha\rceil}}{\mathrm dx^{\lceil\alpha\rceil}}\int_0^x(x-t)^{-\{\alpha\}}f(t)~\mathrm dt$$</p> <p>where $\lceil\alpha\rceil$ is the ceiling function and $\{\alpha\}=\alpha-\lfloor\alpha\rfloor$ is the fractional part of $\alpha$. From the above definition, one can deduce that</p> <p>$$\frac{\mathrm d^\alpha}{\mathrm dx^\alpha}x^\beta=\frac{x^{\beta-\alpha}\Gamma(\beta+1)}{\Gamma(\beta-\alpha+1)}$$</p> <p>From this, one may deduce that</p> <p>$$\frac{x^n}{\Gamma(n+\alpha)}=x^{1-\alpha}\frac{\mathrm d^{1-\alpha}}{\mathrm dx^{1-\alpha}}\frac{x^n}{n!}$$</p> <p>Thus, we find that</p> <p>$$f_\alpha(x)=x^{1-\alpha}\frac{\mathrm d^{1-\alpha}}{\mathrm dx^{1-\alpha}}e^x$$</p> <p>In the particular case of $\alpha=1/2$,</p> <p>$$\frac{\mathrm d^{1/2}}{\mathrm dx^{1/2}}e^x=\frac1{\Gamma(1/2)}\frac{\mathrm d}{\mathrm dx}\int_0^x(x-t)^{-1/2}e^t~\mathrm dt$$</p> <p>Let $x-t=u^2$ to get</p> <p>$$\frac{\mathrm d^{1/2}}{\mathrm dx^{1/2}}e^x=\frac1{\Gamma(3/2)}\frac{\mathrm d}{\mathrm dx}\int_0^{\sqrt x}e^{x-u^2}~\mathrm du$$</p> <p>Whereupon the relationship to the error function is immediate.</p> <p>The general case for $f_\alpha(x)$ is left as extra credit for the reader with the additional hint that incomplete Gamma functions are recommended.</p>
2,354,004
<p>I'm struggling with the following sum:</p> <p>$$\sum_{n=0}^\infty \frac{n!}{(2n)!}$$</p> <p>I know that the final result will use the error function, but will not use any other non-elementary functions. I'm fairly sure that it doesn't telescope, and I'm not even sure how to get $\operatorname {erf}$ out of that.</p> <p>Can somebody please give me a hint? No full answers, please.</p>
Marco Cantarini
171,547
<p>The series can be written also in terms of the Incomplete Gamma function. As noted by Simply Beautiful Art we have <span class="math-container">$$\sum_{n\geq0}\frac{n!}{\left(2n\right)!}=\sum_{n\geq0}\frac{\Gamma\left(1/2\right)}{4^{n}\Gamma\left(n+1/2\right)}$$</span> <span class="math-container">$$=1+\frac{1}{4}\sum_{n\geq0}\frac{\Gamma\left(1/2\right)}{4^{n}\Gamma\left(n+1+1/2\right)}=1+\frac{1}{4}\sum_{n\geq0}\frac{1}{4^{n}\left(1/2\right)_{n+1}}$$</span> where <span class="math-container">$\left(x\right)_{n}=\Gamma\left(x+n\right)/\Gamma\left(x\right)$</span> is the <a href="http://mathworld.wolfram.com/PochhammerSymbol.html" rel="nofollow noreferrer">Pochhammer symbol</a> and <a href="http://dlmf.nist.gov/8.11#ii" rel="nofollow noreferrer">since</a> <span class="math-container">$$\gamma\left(a,z\right)e^{z}z^{-a}=\sum_{n\geq0}\frac{z^{n}}{\left(a\right)_{n+1}},\,a\neq-k,\,k\in\mathbb{N}$$</span> we have <span class="math-container">$$\sum_{n\geq0}\frac{n!}{\left(2n\right)!}=\color{red}{1+\frac{e^{1/4}\gamma\left(1/2,1/4\right)}{2}}\approx \color{blue}{1.5923}$$</span></p>
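The closed form can be checked against the series with the standard library alone (a quick sanity check of my own, not part of the original answer; it uses the identity $\gamma(1/2,z)=\sqrt\pi\,\operatorname{erf}(\sqrt z)$, so $\gamma(1/2,1/4)=\sqrt\pi\,\operatorname{erf}(1/2)$):

```python
import math

# partial sum of n!/(2n)!: the tail beyond n = 30 is astronomically small
series = sum(math.factorial(n) / math.factorial(2 * n) for n in range(30))

# 1 + e^{1/4} * gamma(1/2, 1/4) / 2, with gamma(1/2, 1/4) = sqrt(pi)*erf(1/2)
closed = 1 + math.exp(0.25) * math.sqrt(math.pi) * math.erf(0.5) / 2

assert abs(series - closed) < 1e-12
assert abs(series - 1.5923) < 1e-4
```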
866,847
<p><strong>Question:</strong><br/> Show that $$\sum_{\{a_1, a_2, \dots, a_k\}\subseteq\{1, 2, \dots, n\}}\frac{1}{a_1*a_2*\dots*a_k} = n$$ (Here the sum is over all non-empty subsets of the set of the $n$ smallest positive integers.)</p> <blockquote> <p>I made an attempt and then encountered an inconsistency with the answer key which is detailed at the very bottom.</p> </blockquote> <p><strong>My Attempt:</strong><br/> $n=1$, there's only one non-empty subset $\{1\}$, hence,</p> <p>$$\frac{1}{1} = 1$$</p> <p>Let the proposition in the question be $P(n)$. Suppose $P(n)$ is true, show that $P(n)\rightarrow P(n+1)$ is true,</p> <p>$$P(n) = \sum_{\{a_1, a_2, \dots, a_k\}\subseteq\{1, 2, \dots, n\}}\frac{1}{a_1*a_2*\dots*a_k}=n$$</p> <p>$P(n+1)$ would mean, I add another integer $n+1$ to the set, the change to the possible subsets are divided into 3 cases,</p> <ol> <li><p>Subset of only $\{n+1\}$.</p></li> <li><p>Subsets of all positive integers containing $\{n+1\}$.</p></li> <li><p>Subsets of all positive integers not containing $\{n+1\}$.</p></li> </ol> <p>First case, just $$\frac{1}{n+1}$$, Second case, is $$\frac{P(n)}{n+1} = \frac{n}{n+1}$$, Third case, by inductive hypothesis is just $n$,</p> <p>Thus, adding them all we get, $$\frac{1}{n+1} + \frac{n}{n+1} + n = n + 1$$, completing the proof.</p> <p><strong>Problem:</strong><br/> The following are cases in the answer key:</p> <ol> <li>Just $\{n+1\}$</li> <li>a non-empty subset of the first $n$ positive integers together with $n+1$.</li> <li>a non-empty subset of the first $n$ positive integers.</li> </ol> <p>(I've rearranged the cases so they all correspond to the ordering I gave in my Inductive Step). The following are the sums for the cases in the answer key:</p> <ol> <li>$\frac{1}{n+1}$</li> <li>$n$</li> <li>$\frac{n}{n+1}$</li> </ol> <p>As one can tell, case 1 is the same but cases 2 and 3 are switched with respect to mine. What did I do wrong, it is unlikely that the textbook is wrong.</p>
rae306
168,956
<p>We know that $\cos(A+B)=\cos A\cos B-\sin A\sin B$.</p> <p>Now:</p> <p>$\cos(B-A)=\cos(-A+B)=\\\cos(-A) \cos B-\sin(-A)\sin(B)=\cos A\cos B+\sin A \sin B$.</p>