Columns: qid (int64, 1 – 4.65M) · question (large_string, length 27 – 36.3k) · author (large_string, length 3 – 36) · author_id (int64, -1 – 1.16M) · answer (large_string, length 18 – 63k)
3,260,776
<blockquote> <p>Suppose we are given an arbitrary collection of sets.</p> <p>How do we get the largest topology from this arbitrary collection?</p> </blockquote> <p>How does one construct a topology from this collection, and under what conditions?</p> <p>I don't have any answer myself.</p> <p>Any help or idea is appreciated.</p>
Kavi Rama Murthy
142,385
<p>First take all possible finite intersections of those sets together with the empty set, and then take arbitrary unions of the new sets you have obtained. You get a topology, and this topology is the smallest topology containing the given sets. You can read about bases and sub-bases in any book on topology for more information. [An empty intersection is defined to be the whole space.]</p>
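For a finite universe this recipe can be carried out directly. Here is a minimal Python sketch (the universe and the generating family are made-up examples, and the function name is mine):

```python
from itertools import combinations

def generate_topology(universe, subbasis):
    """Smallest topology on `universe` containing every set in `subbasis`:
    take all finite intersections (the empty intersection is the whole
    space), then close under unions of the resulting sets."""
    sets = [frozenset(s) for s in subbasis]
    # All finite intersections; the empty intersection is the whole space.
    basis = {frozenset(universe)}
    for r in range(1, len(sets) + 1):
        for combo in combinations(sets, r):
            basis.add(frozenset.intersection(*combo))
    # Close under unions (for a finite basis, unions of subfamilies suffice).
    topology = {frozenset()}
    basis = list(basis)
    for r in range(1, len(basis) + 1):
        for combo in combinations(basis, r):
            topology.add(frozenset().union(*combo))
    return topology

X = {1, 2, 3}
T = generate_topology(X, [{1, 2}, {2, 3}])
# T is {∅, {2}, {1,2}, {2,3}, {1,2,3}}: the topology generated by the two sets.
```

Note this enumerates all subfamilies, so it is exponential and only meant to illustrate the sub-base construction on tiny examples.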
2,332,741
<p>I have the following problem:</p> <blockquote> <p>Let $\Omega \subset \mathbb{R}^3$ be an open bounded set with a smooth boundary $\partial \Omega$ and unit normal $v$. Calculate for the vector field $a(x,y,z)=(0,0,-pz)$ with $p&gt;0$ the value of $-\int_{\partial\Omega}\langle a,v\rangle \,d\mu_{\partial\Omega}$.</p> </blockquote> <p>I don't really know how to start with this problem. So I used the divergence theorem: $-\int_{\partial\Omega}\langle a,v\rangle \,d\mu_{\partial\Omega}=-\int_\Omega \text{div}(a)\,d\mu_M=\int_\Omega p\,d\mu_M$, but I don't know how to proceed from here. Can someone help me please? Thanks in advance.</p>
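The divergence-theorem computation gives $-\int_{\partial\Omega}\langle a,v\rangle\,d\mu = p\cdot\mathrm{vol}(\Omega)$. As a numerical sanity check, here is a Monte Carlo evaluation of the surface integral for the unit ball (my choice of $\Omega$, not part of the problem), where on the unit sphere $v=(x,y,z)$ and so $\langle a,v\rangle=-pz^2$:

```python
import math
import random

random.seed(0)
p = 2.0  # any p > 0

# Sample points uniformly on the unit sphere via normalized Gaussians.
N = 100_000
total = 0.0
for _ in range(N):
    x, y, z = (random.gauss(0, 1) for _ in range(3))
    r = math.sqrt(x * x + y * y + z * z)
    z /= r
    # a = (0, 0, -p z) and the outward unit normal is the point itself,
    # so <a, v> = -p z^2 on the unit sphere.
    total += -p * z * z

surface_area = 4 * math.pi
flux = total / N * surface_area      # Monte Carlo estimate of ∫<a,v> dS
lhs = -flux                          # the quantity asked for
rhs = p * (4 / 3) * math.pi          # p · vol(unit ball)
```

The two values agree up to Monte Carlo noise, matching $-\int_\Omega \mathrm{div}(a) = \int_\Omega p = p\cdot\mathrm{vol}(\Omega)$.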
Tony Blair's Witch Project
445,451
<blockquote> <p>$\text{}$1. If $C$ is an irreducible nonsingular curve on a nonsingular projective surface $X$, is there always a very ample divisor $E\in|C|$?</p> </blockquote> <p>No. For example, a line $L$ on a nonsingular quadric surface has $L^2 = 0$. Your question is not quite well-posed, since very ample is a property of the sheaf $\mathcal{O}_S(C)$, not of the particular curve. In any case,$$C.D = \deg_C \left(\mathcal{O}_S(D)\big|_C\right)$$for any $C$, $D$.</p> <blockquote> <p>$\text{}$2. If the question 1 is not the case, why is there a curve $D\in|C|$ that meets $C$ transversally?</p> </blockquote> <p>It is just not true, and you do not need it. If $C$ and $D$ are effective curves that intersect in dimension $0$ only (i.e. in only a finite set of points $P_i$, so no common components), there is a definition of a local multiplicity $(C.D)_{P_i}$ such that $C.D$ is the sum of those. However, even if $C$ or $D$ is not effective, or they have common components, the intersection number $C.D$ is still perfectly well-defined. It is described in detail in Shafarevich's book on algebraic geometry and in excruciating detail in Fulton's book on algebraic curves.</p>
350,910
<p>Is there any condition on the coefficients that guarantees all real solutions to a general cubic polynomial, e.g. $$ax^3+bx^2+cx+d=0\, ?$$</p> <p>If not, are there methods other than the explicit formula to determine it?</p> <p>Thank you.</p>
Arthur
15,500
<p><a href="http://en.wikipedia.org/wiki/Cubic_polynomial#The_nature_of_the_roots">Wikipedia</a> says the discriminant is the following expression: $$ 18abcd-4b^3d + b^2c^2 -4ac^3 -27a^2d^2 $$ If it's positive, there are three distinct real roots. If it's negative, there's one real root and two complex conjugate roots. If it's equal to zero, there are fewer than three distinct roots, but they are all real.</p>
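The discriminant test is easy to encode directly; a short Python sketch (function name and sample cubics are mine):

```python
def cubic_root_type(a, b, c, d):
    """Classify the roots of ax^3 + bx^2 + cx + d via the cubic discriminant
    18abcd - 4b^3 d + b^2 c^2 - 4ac^3 - 27a^2 d^2."""
    disc = 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2
    if disc > 0:
        return "three distinct real roots"
    if disc < 0:
        return "one real root, two complex conjugate roots"
    return "repeated roots, all real"

# (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6: three distinct real roots.
t1 = cubic_root_type(1, -6, 11, -6)
# x^3 + x + 1: one real root (discriminant -31).
t2 = cubic_root_type(1, 0, 1, 1)
# x(x-1)^2 = x^3 - 2x^2 + x: a repeated root, all real.
t3 = cubic_root_type(1, -2, 1, 0)
```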
3,278,761
<p>Suppose a student says: "if 17 is even, then 2 is not a divisor of 17".</p> <p>Surely his teacher would tell him he is wrong, saying that when a number is even, this number has 2 as a divisor. The teacher would correct it to "if 17 were even, then 2 would be a divisor of 17". In other words, the student's claim contradicts the general rule: "For every number x, if x is even, then x has 2 as a divisor." So, if 17 is even...</p> <p>But, since the sentence uttered by the student is a conditional with a false antecedent, this sentence (the whole conditional) is true, in virtue of "ex falso sequitur quodlibet" (from a false proposition, anything follows).</p> <p>My question is: what is wrong in the student's claim?</p> <p>Can this hypothetical case be clarified by saying that</p> <p>(1) the student's sentence is <em>materially</em> true</p> <p>(2) the teacher is right in saying that the sentence is false in case it is understood as asserting a consequence relation (logical consequence) between the antecedent and the consequent?</p> <p>Or, am I wrong in saying that "if 17 is even, then 2 is not a divisor of 17" contradicts (or is incompatible with) "For every number x, if x is even, then x is divisible by 2"?</p>
David K
139,123
<p>You have stated a general rule, which is mathematically true:</p> <blockquote> <p>For every number <span class="math-container">$x,$</span> if <span class="math-container">$x$</span> is even, then <span class="math-container">$x$</span> has <span class="math-container">$2$</span> as divisor.</p> </blockquote> <p>Since the part following "for every number <span class="math-container">$x$</span>" is true for every number <span class="math-container">$x,$</span> it is true in the particular case where <span class="math-container">$x=17,$</span> that is,</p> <blockquote> <p>If <span class="math-container">$17$</span> is even, then <span class="math-container">$17$</span> has <span class="math-container">$2$</span> as divisor.</p> </blockquote> <p>Interpreted as mathematical statements, both the statement about the number <span class="math-container">$17$</span> and the "if-then" clause of the general rule are material conditionals, which are true whenever the antecedent is false.</p> <p>The following is also a material conditional, true for the same reason that the previous conditional was (namely, because the antecedent is false):</p> <blockquote> <p>If <span class="math-container">$17$</span> is even, then <span class="math-container">$2$</span> is not a divisor of <span class="math-container">$17$</span>.</p> </blockquote> <p>Note that this statement does not contradict either the previous conditional or the general rule in any way. 
In order to contradict a statement, you must assert its <em>negation.</em> You could write the negation of "If <span class="math-container">$17$</span> is even, then <span class="math-container">$17$</span> has <span class="math-container">$2$</span> as divisor" as follows:</p> <blockquote> <p><span class="math-container">$17$</span> is even and <span class="math-container">$2$</span> is not a divisor of <span class="math-container">$17.$</span></p> </blockquote> <p>You could write the negation of the general rule this way:</p> <blockquote> <p>There exists a number <span class="math-container">$x$</span> such that <span class="math-container">$x$</span> is even and <span class="math-container">$2$</span> is not a divisor of <span class="math-container">$x$</span>.</p> </blockquote> <p>The statement "If <span class="math-container">$17$</span> is even, then <span class="math-container">$2$</span> is not a divisor of <span class="math-container">$17$</span>" is not much like either of those statements; "if ... then" is nothing like "and".</p> <hr> <p>The material conditional is not how I would express a logical consequence. If I wanted to make a claim that "<span class="math-container">$2$</span> does not divide <span class="math-container">$17$</span>" arose as a logical consequence of the premise that "<span class="math-container">$17$</span> is even", I might write</p> <blockquote> <p>Assuming that <span class="math-container">$17$</span> is even, it follows that <span class="math-container">$2$</span> does not divide <span class="math-container">$17.$</span></p> </blockquote> <p>This is a very different sentence from any of the examples I gave above. It includes no material conditional. Instead it asserts the existence of a logical inference which (I believe) is invalid. 
And indeed if a student were to make such a statement, a teacher should not accept it as true, but might instead ask the student to provide a proof of this statement.</p> <hr> <p>By the way, the principle "ex falso sequitur quodlibet" is applied to deductions. An implication can be the subject of a deduction, but it is not itself a deduction. So I would not say that a material conditional with false antecedent is true because of "ex falso sequitur quodlibet." The falseness of the antecedent in a material conditional does not cause the consequent to <em>follow</em> (that is, it does not cause the consequent to become <em>true</em>); rather, it allows the consequent to be <em>false</em> without falsifying the entire conditional statement.</p>
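The distinction between a material conditional and its negation can be checked mechanically; a small Python sketch (the helper names are mine):

```python
def implies(p, q):
    """Material conditional: p -> q is false only when p is true and q is false."""
    return (not p) or q

def is_even(n):
    return n % 2 == 0

def divides(d, n):
    return n % d == 0

p = is_even(17)                          # False
s1 = implies(p, divides(2, 17))          # "if 17 is even, then 2 divides 17"
s2 = implies(p, not divides(2, 17))      # the student's sentence
# The negation of s1 is a conjunction, not another conditional:
negation_of_s1 = p and not divides(2, 17)
```

Both `s1` and `s2` come out true (vacuously, since the antecedent is false), while the genuine negation is false, so the student's sentence contradicts neither the conditional nor the general rule.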
134,796
<p>Example list below. All elements are of the form {1 or 0, 1 or 0, 1 or 0}, with at least one 0 and one 1 in each element (so excluding {1,1,1} and {0,0,0}).</p> <pre><code>ListA = {{1, 1, 0}, {1, 1, 0}, {1, 1, 0}, **{0, 1, 1}**, {1, 0, 1}, {1, 0, 1}, {1, 0, 1}} </code></pre> <p>I want a command that replaces any single lone entry in the sequence with the value of the following run.</p> <p>In ListA the single lone entry is {0, 1, 1}: before it there are three {1, 1, 0} in succession, and following it there are three {1, 0, 1} in succession. So I want this lone entry to be replaced by {1, 0, 1}.</p> <p>I want the command to be generic so it can handle any combination of lone entries; I believe there will be 6 different scenarios (assuming the element runs on either side of the lone entry are different). Another example with a lone entry: {{1, 1, 0}, {1, 1, 0}, <strong>{1, 0, 1}</strong>, {0, 1, 1}, {0, 1, 1}}</p> <p>Lone entries at the start and end of the lists can be ignored.</p>
Mr.Wizard
121
<p>Written for clarity over efficiency:</p> <pre><code>ListA = {{1, 1, 0}, {1, 1, 0}, {1, 1, 0}, {0, 1, 1}, {1, 0, 1}, {1, 0, 1}, {1, 0, 1}}; Split[ListA] % //. {a___List, lone : {{__}}, b__List} :&gt; {a, {b}[[1, {1}]], b} Join @@ % </code></pre> <blockquote> <pre><code>{{{1, 1, 0}, {1, 1, 0}, {1, 1, 0}}, {{0, 1, 1}}, {{1, 0, 1}, {1, 0, 1}, {1, 0, 1}}} {{{1, 1, 0}, {1, 1, 0}, {1, 1, 0}}, {{1, 0, 1}}, {{1, 0, 1}, {1, 0, 1}, {1, 0, 1}}} {{1, 1, 0}, {1, 1, 0}, {1, 1, 0}, {1, 0, 1}, {1, 0, 1}, {1, 0, 1}, {1, 0, 1}} </code></pre> </blockquote> <p>Other ideas:</p> <pre><code>sp = Split[ListA]; lone = Position[sp, {{__}}]; lone = DeleteCases[lone, {Length@sp}]; Scan[(sp[[#]] = sp[[# + 1, {1}]]) &amp;, lone]; Join @@ sp </code></pre> <p>Or:</p> <pre><code>fill[{{{__}}, {x_List, ___List}}] := {x} fill[{a_, b_}] := a Join @@ Developer`PartitionMap[fill, Split@ListA, 2, 1, 1, {{}}] </code></pre>
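For comparison, the same run-splitting idea in Python rather than Mathematica (my translation, not Mr.Wizard's code), using `itertools.groupby` as the analogue of `Split`. It handles the single-lone-entry cases shown; chains of adjacent lone entries would need the rule applied repeatedly, as the `//.` in the answer does:

```python
from itertools import groupby

def replace_lone_entries(seq):
    """Split into runs of equal elements; each interior run of length 1 is
    replaced by one copy of the next run's value.  Lone entries at either
    end are left alone."""
    runs = [list(g) for _, g in groupby(seq)]
    for i in range(1, len(runs) - 1):      # interior runs only
        if len(runs[i]) == 1:
            runs[i] = [runs[i + 1][0]]
    return [x for run in runs for x in run]

list_a = [(1, 1, 0), (1, 1, 0), (1, 1, 0), (0, 1, 1),
          (1, 0, 1), (1, 0, 1), (1, 0, 1)]
result = replace_lone_entries(list_a)
# The lone (0,1,1) becomes (1,0,1), matching the Mathematica output.
```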
3,214,331
<p>If I take the complex number <span class="math-container">$e^{i(3+2i)}$</span>, its conjugate is <span class="math-container">$e^{i(-3+2i)}$</span>.</p> <p>However, the conjugate of the function <span class="math-container">$f$</span>, defined as <span class="math-container">$f(x+iy)=e^{i(x+iy)}$</span>, is, according to my book: <span class="math-container">$\overline{f(x+iy)}=e^{i(x-iy)}$</span>.</p> <p>I can't understand this difference...</p>
Peter Foreman
631,494
<p>I think the book meant that <span class="math-container">$$f(\overline{x+iy})=f(x-iy)=e^{i(x-iy)}$$</span> but otherwise the book is in fact wrong as you say.</p>
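The difference between conjugating the value and conjugating the argument is easy to see numerically with `cmath` (the concrete `z = 3 + 2i` is taken from the question):

```python
import cmath

z = 3 + 2j

# Conjugating the *value*: conj(e^{iz}) = e^{-i conj(z)} = e^{i(-3+2i)} here.
lhs = cmath.exp(1j * z).conjugate()
rhs = cmath.exp(1j * (-z.conjugate()))   # e^{i(-3+2i)}

# Applying f to the conjugated *argument* instead: f(conj(z)) = e^{i(x-iy)}.
book = cmath.exp(1j * z.conjugate())     # e^{i(3-2i)} — a different number
```

`lhs` and `rhs` agree to machine precision, while `book` is a genuinely different value, which is why the book's formula only makes sense as $f(\overline{x+iy})$.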
1,028,695
<p>While reading through some engineering literature, I came across some logic that I found a bit strange. Mathematically, the statement might look something like this:</p> <p>I have a linear operator $A:L^2(\Bbb{R}^3)\rightarrow L^2(\Bbb{R}^4)$, that is, a mapping which takes functions of three variables to functions of four variables. Then, "because the range function depends on 4 variables while the domain function depends on only three", there must be <em>redundancy</em> in the operator $A$, that is, the range of $A$ is a proper subset of $L^2(\Bbb{R}^4)$, characterized by some <em>range conditions</em>.</p> <p>Is such a statement always true? For the specific example I am reading about (the X-ray transform), it is definitely true - in fact, the range of the operator is characterized by a <a href="http://en.wikipedia.org/wiki/John%27s_equation" rel="nofollow">certain PDE</a> - but I can't imagine such a thing is true in general.</p> <p>For instance, I can cook up an operator $A:L^2(\Bbb{R}^3)\rightarrow L^2(\Bbb{R}^4)$ such that the range of $A$ is dense in $L^2(\Bbb{R}^4)$: simply choose orthonormal bases $(e_j)$ and $(f_j)$ for both, then map $e_j$ to $f_j$.</p> <p>Any thoughts?</p>
Mike F
6,608
<p>It's not true. There is only one infinite-dimensional <a href="http://en.wikipedia.org/wiki/Hilbert_space#Separable_spaces" rel="nofollow">separable Hilbert space</a>, up to isomorphism. Since $L^2(\mathbb{R}^3)$ and $L^2(\mathbb{R}^4)$ are both infinite-dimensional and separable, they are isomorphic as Hilbert spaces. That is, there is an isometric linear mapping $A$ of $L^2(\mathbb{R}^3)$ onto $L^2(\mathbb{R}^4)$.</p>
3,082,635
<p>Prove that for a given prime <span class="math-container">$p$</span> and each <span class="math-container">$0 &lt; r &lt; p-1$</span>, there exists a <span class="math-container">$q$</span> such that </p> <p><span class="math-container">$$rq \equiv 1 \bmod p$$</span></p> <p>I've only taken one intro number theory course (years ago), and this just popped up in a computer science class (homework). I was assuming that this proof would be elementary since my current class is an algorithms course, but after the few basic attempts I've tried, it didn't look promising. Here are a couple of approaches I thought of:</p> <hr> <p>(<em>reverse engineer</em>)</p> <p>To arrive at the conclusion we would need</p> <p><span class="math-container">$$rq - 1 = kp$$</span></p> <p>for some <span class="math-container">$k$</span>. A little manipulation:</p> <p><span class="math-container">$$qr - kp = 1$$</span></p> <p>That looks familiar, but I can't see anything from it.</p> <hr> <p>(<em>sum on <span class="math-container">$r$</span></em>)</p> <p><span class="math-container">$$\sum_{r=1}^{p-2} r = \frac{(p-2)(p-1)}{2} = p\frac{p - 3}{2} + 1 \equiv 1 \bmod p$$</span></p> <p>which looks good, but I don't know how to incorporate <span class="math-container">$r$</span> into the final equality.</p> <hr> <p>(<em>Wilson's Theorem—proved by Lagrange</em>)</p> <p>I vaguely recall this theorem, but I was looking at it in an old book and it wasn't easy to see how we arrived there. Anyways, <span class="math-container">$p$</span> is prime <em>iff</em> <span class="math-container">$$(p-1)! \equiv -1 \bmod p$$</span></p> <p>Here the <span class="math-container">$r$</span> multiplier is built into the factorial expression, so I was thinking of adding <span class="math-container">$2$</span> to either side</p> <p><span class="math-container">$$(p-1)! + 2 \equiv 1 \bmod p$$</span></p> <p>which is a dead end (pretty sure). 
But then I was thinking, maybe multiplying Wilson's Theorem by <span class="math-container">$(p+1)$</span>? Then getting</p> <p><span class="math-container">$$(p+1)(p-1)! \equiv -(p+1) \bmod p$$</span></p> <p>which I think results in</p> <p><span class="math-container">$$(p+1)(p-1)! \equiv 1 \bmod p$$</span></p> <p>of which <span class="math-container">$r$</span> is a multiple and <span class="math-container">$q$</span> is obvious. But I'm not sure if that's valid.</p>
David
119,775
<p>There are lots of proofs of this, and which is best for you will depend very much on what results you know already. Here is one which uses only the following fact:</p> <ul> <li>if <span class="math-container">$p$</span> is prime and <span class="math-container">$x,y$</span> are integers and <span class="math-container">$p\mid xy$</span>, then <span class="math-container">$p\mid x$</span> or <span class="math-container">$p\mid y$</span>.</li> </ul> <p>Now let <span class="math-container">$p$</span> be prime and <span class="math-container">$0&lt;r&lt;p$</span>. Consider the numbers <span class="math-container">$$r,\ 2r,\ 3r,\ldots,\ (p-1)r\ .\tag{$*$}$$</span> Firstly, these are all different modulo <span class="math-container">$p$</span>. <strong>Proof</strong>: if <span class="math-container">$ar\equiv br\pmod p$</span>, then <span class="math-container">$$\eqalign{p\mid ar-br\quad &amp;\Rightarrow\quad p\mid(a-b)r\cr &amp;\Rightarrow\quad p\mid a-b\qquad [\hbox{since $p\not\mid r$}]\cr &amp;\Rightarrow\quad a=b\qquad [\hbox{since $0&lt;a,b&lt;p$.}]\cr}$$</span> Secondly, none of the numbers is divisible by <span class="math-container">$p$</span>. <strong>Proof</strong>: the numbers are <span class="math-container">$ar$</span> with <span class="math-container">$p\not\mid a$</span> and <span class="math-container">$p\not\mid r$</span>.</p> <p>Finally, this means that in <span class="math-container">$(*)$</span> there are <span class="math-container">$p-1$</span> different numbers modulo <span class="math-container">$p$</span> with the possible values <span class="math-container">$1,2,\ldots,p-1$</span> modulo <span class="math-container">$p$</span>. So they must take each value once each. In particular, one of the numbers <span class="math-container">$ar$</span> is <span class="math-container">$1$</span> modulo <span class="math-container">$p$</span>.</p>
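The pigeonhole argument is constructive in a brute-force sense: scanning $r, 2r, \ldots, (p-1)r$ must hit a product congruent to $1$. A small Python sketch of exactly that scan (function name is mine):

```python
def inverse_mod_p(r, p):
    """Find q with r*q ≡ 1 (mod p) by scanning r, 2r, ..., (p-1)r,
    mirroring the argument: those multiples hit every nonzero residue."""
    for q in range(1, p):
        if (r * q) % p == 1:
            return q
    raise ValueError("no inverse (is p prime and p not a divisor of r?)")

p = 101
inverses = {r: inverse_mod_p(r, p) for r in range(1, p)}
```

For a fast alternative, Python 3.8+ computes the same value as `pow(r, -1, p)`.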
24,704
<p>It seems that often in using counting arguments to show that a group of a given order cannot be simple, it is shown that the group must have at least <span class="math-container">$n_p(p^n-1)$</span> elements, where <span class="math-container">$n_p$</span> is the number of Sylow <span class="math-container">$p$</span>-subgroups.</p> <blockquote> <p>It is explained that the reason this is the case is because distinct Sylow <span class="math-container">$p$</span>-subgroups intersect only at the identity, which somehow follows from Lagrange's Theorem.</p> </blockquote> <p>I cannot see why this is true. </p> <p>Can anyone quicker than I tell me why? I know it's probably very obvious. </p> <p>Note: This isn't a homework question, so if the answer is obvious I'd really just appreciate knowing why. </p> <p>Thanks!</p>
wildildildlife
6,490
<p>Suppose $P$ and $Q$ are Sylow p-subgroups of prime order p (so not just any power of p; as others remarked, then it is not true in general). Note that $P\cap Q$ is a subgroup of $P$ (and of $Q$). So by Lagrange, the order $|P\cap Q|$ divides p. As p is prime, it is 1 or p. But it cannot be p, as $P$ and $Q$ are distinct. So $|P\cap Q|=1$ and consequently the intersection is trivial.</p>
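The Lagrange argument can be watched in action on a concrete group. A Python sketch (my example: $S_3$ acting on $\{0,1,2\}$, whose Sylow 2-subgroups have prime order 2):

```python
def compose(f, g):
    """Composition of permutations given as tuples: (f∘g)(i) = f[g[i]]."""
    return tuple(f[g[i]] for i in range(len(g)))

def generated_subgroup(gen, identity):
    """Subgroup of a finite permutation group generated by one element."""
    elems = {identity}
    frontier = {gen}
    while frontier:
        elems |= frontier
        frontier = {compose(a, b) for a in elems for b in elems} - elems
    return elems

e = (0, 1, 2)
transpositions = [(1, 0, 2), (0, 2, 1), (2, 1, 0)]
sylow2 = [generated_subgroup(t, e) for t in transpositions]

# Distinct subgroups of prime order meet only in the identity, by Lagrange.
pairwise_meets = [sylow2[i] & sylow2[j]
                  for i in range(3) for j in range(i + 1, 3)]
```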
4,244,966
<p>I have the ODE <span class="math-container">$y^2(1+y'^2)=4$</span> to solve this I used the substitution <span class="math-container">$y'=p$</span> <span class="math-container">$$y^2(1+p^2)=4$$</span> <span class="math-container">$$2y(1+p^2)dy+2py^2dp=0$$</span> <span class="math-container">$$(p^2+1)dy+py\;dp=0$$</span> <span class="math-container">$$\frac{dy}y+\frac{p}{p^2+1}dp=0$$</span> <span class="math-container">$$\ln|y|+\frac12\ln|p^2+1|=\ln|c|$$</span> <span class="math-container">$$y\sqrt{p^2+1}=c$$</span> Using <span class="math-container">$p^2+1=\frac4{y^2}$</span>, I get <span class="math-container">$2=c$</span> ! I can't find my mistake.</p>
Ishan Tiwari
771,724
<p>I think you've only gone in a circle in the differential equation: the series of steps only arrives at a restatement of the original differential equation.</p> <p>From your solution,</p> <p><span class="math-container">$$y\sqrt{p^2+1} = c$$</span> or <span class="math-container">$$ y^2 (p^2+1) = c^2$$</span> which is a restatement of the original equation, and <span class="math-container">$c^2 = 4$</span> simply satisfies the condition of the original differential equation. Therefore you need to try to solve it in such a way that the 4 does not get removed. One way could be to rewrite it as follows:</p> <p><span class="math-container">$$ y'^2= \frac{4}{y^2} - 1$$</span></p> <p>This will lead to the solution.</p>
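Following the suggested route, $y'^2 = 4/y^2 - 1$ separates to $y\,dy/\sqrt{4-y^2} = \pm dx$ and integrates to the circles $y^2 + (x-c)^2 = 4$. A quick numeric check that these satisfy the original ODE (the constant $c$ and the sample points are my choices):

```python
import math

def y(x, c=0.0):
    """Upper branch of the solution circle y^2 + (x-c)^2 = 4."""
    return math.sqrt(4 - (x - c) ** 2)

def dy(x, c=0.0, h=1e-6):
    """Central-difference approximation of y'(x)."""
    return (y(x + h, c) - y(x - h, c)) / (2 * h)

# Residual of the original ODE y^2 (1 + y'^2) = 4 at several points.
residuals = [abs(y(x) ** 2 * (1 + dy(x) ** 2) - 4)
             for x in (-1.5, -0.5, 0.3, 1.2)]
```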
624,002
<p>Determine whether $\mathbb{Z}\times \mathbb{Z}$ and $\mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}$ are isomorphic groups or not.</p> <p>pf) Suppose that these are isomorphic. Note that $\mathbb{Z}\times \mathbb{Z}$ is a subgroup of $\mathbb{Z}\times \mathbb{Z}$ and $\mathbb{Z}\times \mathbb{Z}\times\left \{ 0 \right \}$ is a subgroup of $\mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}$. Since $\mathbb{Z}\times \mathbb{Z}$ and $\mathbb{Z}\times \mathbb{Z}\times\left \{ 0 \right \}$ are isomorphic, $\mathbb{Z}\times \mathbb{Z}/\mathbb{Z}\times \mathbb{Z}$ and $\mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}/\mathbb{Z}\times \mathbb{Z}\times \left \{ 0 \right \}$ are isomorphic. But the first one is isomorphic to the trivial group and the second one is isomorphic to $\mathbb{Z}$. It is a contradiction.</p> <p>Is my proof right? If not, is there another proof?</p>
Boris Novikov
62,565
<p>No, it is not correct. Suppose $f:\mathbb{Z}\times \mathbb{Z}\to\mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}$ is an isomorphism. Then you have only $\mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}/\mathbb{Z}\times \mathbb{Z}\times \left \{ 0 \right \}\cong \mathbb{Z}\times \mathbb{Z}/f^{-1}(\mathbb{Z}\times \mathbb{Z}\times \left \{ 0 \right \})$.</p>
624,002
<p>Determine whether $\mathbb{Z}\times \mathbb{Z}$ and $\mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}$ are isomorphic groups or not.</p> <p>pf) Suppose that these are isomorphic. Note that $\mathbb{Z}\times \mathbb{Z}$ is a subgroup of $\mathbb{Z}\times \mathbb{Z}$ and $\mathbb{Z}\times \mathbb{Z}\times\left \{ 0 \right \}$ is a subgroup of $\mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}$. Since $\mathbb{Z}\times \mathbb{Z}$ and $\mathbb{Z}\times \mathbb{Z}\times\left \{ 0 \right \}$ are isomorphic, $\mathbb{Z}\times \mathbb{Z}/\mathbb{Z}\times \mathbb{Z}$ and $\mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}/\mathbb{Z}\times \mathbb{Z}\times \left \{ 0 \right \}$ are isomorphic. But the first one is isomorphic to the trivial group and the second one is isomorphic to $\mathbb{Z}$. It is a contradiction.</p> <p>Is my proof right? If not, is there another proof?</p>
bof
111,012
<p>The class of Abelian groups is an equational class. $\mathbb Z\times\mathbb Z$ is a free Abelian group with $2$ generators. $\mathbb Z\times\mathbb Z\times\mathbb Z$ is a free Abelian group with $3$ generators. If the free Abelian group with $2$ generators were isomorphic to the free Abelian group with $3$ generators, then it would follow from something in universal algebra <strong>[*]</strong> that there are no finite Abelian groups of order greater than $1$. But there is an Abelian group of order $2$. This contradiction proves that those two groups are not isomorphic.</p> <p><strong>[*]</strong> Namely, if $\mathbf K$ is an equational class (variety) of algebras (in the sense of universal algebra), and if the free $\mathbf K$-algebra on $m$ generators is isomorphic to the free $\mathbf K$-algebra on $n$ generators for some $m,n\in\mathbb N,m\ne n$, then $\mathbf K$ contains no finite algebra with more than one element. This is a basic theorem of universal algebra. Here is a proof for the case $m=2,n=3$:</p> <p>Suppose $F$ is a free $\mathbf K$-algebra with free generating sets $\{a,b\}_\ne$ and $\{c,d,e\}_\ne$. Then there are "polynomials" (terms in the language of $\mathbf K$) $\varphi(x,y,z),\ \psi(x,y,z),\ f(u,v),\ g(u,v),\ h(u,v)$ such that $a=\varphi(c,d,e),\ b=\psi(c,d,e),\ c=f(a,b),\ d=g(a,b),\ e=h(a,b)$. Hence the following identities hold in every $\mathbf K$-algebra: $$u=\varphi(f(u,v),g(u,v),h(u,v))$$ $$v=\psi(f(u,v),g(u,v),h(u,v))$$ $$x=f(\varphi(x,y,z),\psi(x,y,z))$$ $$y=g(\varphi(x,y,z),\psi(x,y,z))$$ $$z=h(\varphi(x,y,z),\psi(x,y,z))$$ These identities show that, for any algebra $A\in\mathbf K$, the mapping $$\langle u,v\rangle\to\langle f(u,v),g(u,v),h(u,v)\rangle$$ is a bijection from $A\times A$ to $A\times A\times A$. It follows that $|A|^2=|A|^3$, i.e., $|A|$ is infinite or $|A|=1$.</p>
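A more concrete invariant separating the two groups (a sanity check of mine, not the universal-algebra argument): count homomorphisms into $\mathbb Z/2\mathbb Z$. A homomorphism from a free Abelian group is determined by a free choice of image for each generator, so the counts are $2^2$ and $2^3$, and isomorphic groups would give equal counts:

```python
from itertools import product

def hom_count_free_abelian(rank, target_size):
    """Number of homomorphisms Z^rank -> Z/target_size: one free choice of
    image per generator.  Enumerated explicitly rather than computed as a
    power, to make the counting visible."""
    return sum(1 for _ in product(range(target_size), repeat=rank))

h2 = hom_count_free_abelian(2, 2)   # |Hom(Z^2, Z/2)| = 4
h3 = hom_count_free_abelian(3, 2)   # |Hom(Z^3, Z/2)| = 8
```

Since $4 \ne 8$, $\mathbb Z\times\mathbb Z \not\cong \mathbb Z\times\mathbb Z\times\mathbb Z$.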
2,829,990
<p>I want to calculate</p> <p><span class="math-container">$$ \lim_{n \to \infty} \int_{(0,1)^n} \frac{n}{x_1 + \cdots + x_n} \, dx_1 \cdots dx_n $$</span></p> <p>I came across this while studying the Lebesgue integral. But I don't know how to do it at all. I would really appreciate it if you could help me!</p> <p>[Add]</p> <p>Thanks to everybody who gave me comments, I can understand the following,</p> <p><span class="math-container">\begin{align*} \lim_{n \to \infty} \int_{(0,1)^n} \frac{n}{x_1 + \cdots + x_n} dx_1 \cdots dx_n &amp;=\lim_{n \to \infty} n\int_0^\infty \bigg(\frac{1-e^{-t}}{t}\bigg)^n\,dt \end{align*}</span></p> <p>and</p> <p><span class="math-container">\begin{align*} \int_0^\infty \bigg(\frac{1-e^{-t}}{t}\bigg)^n\,dt &amp;=\frac{n}{(n-1)!}\sum_{i=0}^{n-1}{ n-1 \choose i} (-1)^{n-1-i} (i+1)^{n-2}\log(i+1) \end{align*}</span></p> <p>and</p> <p><span class="math-container">\begin{align*} \int_0^\infty \bigg(\frac{1-e^{-t}}{t}\bigg)^n\,dt &amp;=\int_0^\infty \frac{z^{n-1}}{(n-1)!}\, \mathrm{Beta}(z,n+1)\,dz\\ &amp;=n\,\int_0^\infty \frac{z^{n-1}}{z(z+1)\cdots(z+n)}\,dz \end{align*}</span></p> <p>But I can't calculate these integrals and the limit. Please let me know if you find out.</p>
JanG
266,041
<p>This answer will be based on Fubini's theorem and the DCT. Put \begin{equation*} I_n = \int_{V_n}\dfrac{x_1^p+x_2^p+\dots +x_n^p}{x_1^q+x_2^q+\dots +x_n^q}\, dx_1dx_2\dots dx_n \end{equation*} where $ V_n=(0,1)^n. $ We will try to prove that</p> <p>\begin{equation*} \lim_{n\to \infty}I_n = \dfrac{q+1}{p+1} \tag{1} \end{equation*} if $ p &gt; -1$ and $ q \ge 1$.</p> <p>With $p=0$ and $q=1$ this will answer OP's question.</p> <p>Before the proof we need some preparations.</p> <p>Put \begin{equation*} c_q = \int_{0}^{\infty}e^{-x^q}\, dx = \dfrac{\Gamma\left(\dfrac{1}{q}\right)}{q} \end{equation*} and \begin{equation*} \mathrm{eq}(x)=\dfrac{1}{c_q}\int_{x}^{\infty} e^{-y^q}\, dy. \end{equation*}</p> <p>In order to later find a majorant we will use an inequality by $@$mickep (email communication in the case $q=1$).</p> <p>\begin{equation*} c_q\dfrac{1-\mathrm{eq}(t)}{t}\le \left(1+\dfrac{t^q}{2}\right)^{-\dfrac{1}{q}}, \quad t&gt;0.\tag{2} \end{equation*}</p> <p>Proof of (2). Put \begin{equation*} f(t) = \dfrac{t}{\left(1+\dfrac{t^q}{2}\right)^{\dfrac{1}{q}}} - c_q(1-\mathrm{eq}(t)), \quad t \ge 0. \end{equation*} Then $f(0)=0$. We intend to prove that $ f'(t) \ge 0 $ if $ t \ge 0. $ We observe that \begin{gather*} f'(t) = \dfrac{1}{\left(1+\dfrac{t^q}{2}\right)^{\dfrac{1}{q}+1}} -e^{-t^q} \ge 0 \\ \Longleftrightarrow\\ e^{t^q} \ge \left(1+\dfrac{t^q}{2}\right)^{\dfrac{1}{q}+1} \\ \Longleftrightarrow\\ t^{q} \ge \left(\dfrac{1}{q}+1\right)\ln\left(1+\dfrac{t^q}{2}\right)\\ \Longleftrightarrow\\ \dfrac{2q}{q+1}\dfrac{t^q}{2} \ge \ln\left(1+\dfrac{t^q}{2}\right) \end{gather*} Since $ x \ge \ln(1+x)$ and $\dfrac{2q}{q+1} \ge 1 $ this is true and we have proved (2). </p> <p>Proof of (1). 
\begin{gather*} I_n=[\mbox{ symmetry}] = n\int_{0}^{1}x_1^p\left(\int_{0}^{\infty}e^{-t(x_1^q+x_2^q+\dots +x_n^q)}\, dt\right)\, dx_1dx_2\dots dx_n = \\[2ex] n\int_{0}^{1}\int_{0}^{\infty}x_1^pe^{-tx_1^q}\left(\int_{0}^{1}e^{-ty^q}\, dy\right)^{n-1}\, dx_1dt = \left[z=t^{1/q}y\right] =\\[2ex]n\int_{0}^{1}\int_{0}^{\infty}x_1^pe^{-tx_1^q}\left(\int_{0}^{t^{1/q}}\dfrac{e^{-z^q}}{t^{1/q}}\, dz\right)^{n-1}\, dx_1dt =\\[2ex] n\int_{0}^{1}\int_{0}^{\infty}x_1^pe^{-tx_1^q}\left(\dfrac{1}{t^{1/q}}\left[-c_q\mathrm{eq}(z)\right]_{0}^{t^{1/q}}\right)^{n-1}\, dx_1dt = \left[t=\frac{s}{n-1}\right]=\\[2ex] \dfrac{n}{n-1 }\int_{0}^{1}\int_{0}^{\infty}x_1^pe^{-sx_1^q/(n-1)}\left(\dfrac{1}{s_{qn}}c_q(1-\mathrm{eq}(s_{qn}))\right)^{n-1}\, dx_1ds\tag{3} \end{gather*} where $ s_{qn}= \left(\dfrac{s}{n-1}\right)^{1/q} $.</p> <p>Now we will use Mickep's inequality (2) to find a majorant. \begin{gather*} e^{-sx_1^q/(n-1)}\left(\dfrac{1}{s_{qn}}c_q(1-\mathrm{eq}(s_{qn}))\right)^{n-1} \le 1 \cdot \left(\left(1+\dfrac{s}{2(n-1)}\right)^{-1/q}\right)^{n-1} =\\[2ex] \left(\left(1+\dfrac{s}{2(n-1)}\right)^{n-1}\right)^{-1/q} \le \left(\left(1+\dfrac{s}{2(N-1)}\right)^{N-1}\right)^{-1/q} , \quad n \ge N \end{gather*} In the last inequality we have used that \begin{equation*} \left(1+\dfrac{s}{2(n-1)}\right)^{n-1} \end{equation*} is increasing towards $ e^{s/2} $. If we choose $ N $ such that $ \dfrac{N-1}{q}&gt;1 $ the majorant \begin{equation*} \left(\left(1+\dfrac{s}{2(N-1)}\right)^{N-1}\right)^{-1/q} \end{equation*} will belong to $L_1.$</p> <p>Finally we will study the pointwise limit.</p> <p>Put \begin{equation*} g(x) = c_q(1-\mathrm{eq}(x)). \end{equation*} Then \begin{equation*} g'(x) = e^{-x^{q}} = 1- x^q + x^{2q}\cdot B_{1}(x^q) \end{equation*} where $B_1$ is a bounded function in a neighbourhood of the origin. We get that \begin{equation*} g(x) = x-\dfrac{x^{q+1}}{q+1}+x^{2q+1}B_2 \tag{4} \end{equation*} where $B_{2}$ is bounded for small $x^{q}$. 
However, from (4) we get \begin{gather*} \left(\dfrac{1}{s_{qn}}c_q(1-\mathrm{eq}(s_{qn}))\right)^{n-1}=\left(1-\dfrac{s}{(q+1)(n-1)}+\dfrac{s^2}{(n-1)^{2}}B_3\right)^{n-1} \to e^{-s/(q+1)},\quad n \to \infty \end{gather*} since $B_{3}$ is bounded. Now we return to (3). \begin{equation*} \lim_{n\to \infty}I_n = \int_{0}^{1}\int_{0}^{\infty}x_1^pe^{-s/(q+1)}\, dx_1ds = \dfrac{q+1}{p+1}. \end{equation*}</p>
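For $p=0$, $q=1$ the claimed limit is $2$, which a crude Monte Carlo estimate of $E\!\left[n/(x_1+\dots+x_n)\right]$ already suggests (the choices of $n$ and sample count are mine):

```python
import random

random.seed(1)

def estimate(n, samples=10_000):
    """Monte Carlo estimate of E[n / (x_1 + ... + x_n)] with x_i ~ U(0,1)."""
    total = 0.0
    for _ in range(samples):
        s = sum(random.random() for _ in range(n))
        total += n / s
    return total / samples

est = estimate(200)   # should be near (q+1)/(p+1) = 2 for large n
```

By the law of large numbers $n/(x_1+\dots+x_n)\to 1/E[x_1]=2$ almost surely, consistent with the analytic answer.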
172,617
<p>I need to plot two datasets on the same plot. The datasets have the same x-range. However, I want to show only parts of the plot. </p> <p>A minimal example would be</p> <pre><code> h = π/100.; i1 = ListLinePlot[Table[{i*h, Sin[i*h]}, {i, 0, 100}], PlotStyle -&gt; Red]; i2 = ListLinePlot[Table[{i*h, Cos[i*h]}, {i, 0, 100}], PlotStyle -&gt; Blue]; l1 = Graphics[{Black, Dashed, Line[{{π/2, -1}, {π/2, 1}}]}]; Show[{i1, i2, l1}, PlotRange -&gt; All] </code></pre> <p>The output is the following</p> <p><a href="https://i.stack.imgur.com/JsUxu.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/JsUxu.jpg" alt="enter image description here"></a></p> <p>But, the plot I want is </p> <p><a href="https://i.stack.imgur.com/rJ3wG.png" rel="noreferrer"><img src="https://i.stack.imgur.com/rJ3wG.png" alt="enter image description here"></a></p> <p>Can anyone please help? Thanks in advance.</p>
Henrik Schumacher
38,178
<p>Maybe</p> <pre><code>h = π/100.; a = Table[{i*h, Sin[i*h]}, {i, 0, 100}]; b = Table[{i*h, Cos[i*h]}, {i, 0, 100}]; α = FirstPosition[a[[All, 1]] - π/2, _?Positive][[1]] i1 = ListLinePlot[a[[α ;;]], PlotStyle -&gt; Red]; i2 = ListLinePlot[b[[;; α]], PlotStyle -&gt; Blue]; l1 = Graphics[{Black, Dashed, Line[{{π/2, -1}, {π/2, 1}}]}]; Show[{i1, i2, l1}, PlotRange -&gt; All, AxesOrigin -&gt; {0, 0}] </code></pre> <p>?</p>
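The core of the trick is just finding the split index at $x=\pi/2$ and slicing the two datasets on opposite sides of it. The same index logic in Python rather than Mathematica (my translation; only the data handling, not the plotting):

```python
import math

h = math.pi / 100
xs = [i * h for i in range(101)]
a = [(x, math.sin(x)) for x in xs]   # red curve
b = [(x, math.cos(x)) for x in xs]   # blue curve

# Analogue of FirstPosition[... - Pi/2, _?Positive]: first index past Pi/2.
alpha = next(i for i, x in enumerate(xs) if x - math.pi / 2 > 0)

red_part = a[alpha:]        # sin shown only from the first point past Pi/2
blue_part = b[:alpha + 1]   # cos shown up to and including that point
```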
3,728,963
<p>(Exercise 21 Chapter 2, Baby Rudin) I am trying to prove</p> <blockquote> <p>Let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be separated subsets of some <span class="math-container">$\mathbb{R}^k$</span>, suppose <span class="math-container">$\textbf{a} \in A$</span>, <span class="math-container">$\textbf{b} \in B$</span> and define <span class="math-container">$\textbf{p}(t) = (1-t)\textbf{a} + t\textbf{b}$</span>, for <span class="math-container">$t \in \mathbb{R}$</span>. Put <span class="math-container">$A_0 = \textbf{p}^{-1}(A), B_0 = \textbf{p}^{-1}(B)$</span>. [Thus, <span class="math-container">$t \in A_o$</span> iff <span class="math-container">$\textbf{p}(t) \in A$</span>.]</p> </blockquote> <blockquote> <p>Prove that <span class="math-container">$A_0$</span> and <span class="math-container">$B_0$</span> are separated subsets of <span class="math-container">$\mathbb{R}$</span>. My attempt so far:</p> </blockquote> <blockquote> <p>a. Assume to the contrary that <span class="math-container">$\exists y$</span> such that <span class="math-container">$y \in A_0 \cap \overline{B_0}$</span> which implies <span class="math-container">$y \in A_0$</span> and <span class="math-container">$y \in \overline{B_0}$</span>. Then, <span class="math-container">$\textbf{p}(y) \in A$</span> and either <span class="math-container">$y \in B_0$</span> or <span class="math-container">$y$</span> is a limit point of <span class="math-container">$B_0$</span>. If <span class="math-container">$y \in B_0$</span>, then <span class="math-container">$\textbf{p}(y) \in B$</span> which would contradict that <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are separated. If <span class="math-container">$y$</span> is a limit point of <span class="math-container">$B_0$</span>, ...</p> </blockquote> <p><strong>My question</strong>: I am having trouble completing the proof. 
<strong>Can someone please suggest how this proof can be completed?</strong></p> <p>P.S. I found <a href="https://math.stackexchange.com/questions/1731901/proof-for-separated-sets">this</a> proof but I have no idea why the idea of continuity was introduced in the first place, or even how one knows that <span class="math-container">$p$</span> is continuous, as the answer claims. I would like to complete this proof without using the concept of continuity, ideally, since Rudin hasn't introduced the concept of continuity so far (till Chapter 2).</p> <p><strong>Edit</strong>: We now claim that <span class="math-container">$\mathbf{p}(t)$</span> is continuous on all of <span class="math-container">$\mathbb{R}$</span>.</p> <p>Proof: Let <span class="math-container">$\epsilon &gt; 0$</span> and <span class="math-container">$c \in \mathbb{R}$</span>. Suppose <span class="math-container">$\left|t-c\right| &lt; \delta$</span> where <span class="math-container">$\delta = \frac{\epsilon}{\left|\mathbf{b}-\mathbf{a}\right|} &gt; 0$</span> (note <span class="math-container">$\mathbf{a} \neq \mathbf{b}$</span> since <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are separated). Then, we have</p> <p><span class="math-container">$$\left|\mathbf{p}(t)-\mathbf{p}(c)\right| = \left|(1-t)\mathbf{a} + t\mathbf{b}-\mathbf{a}(1-c)-c\mathbf{b}\right| = \left|t-c\right|\left|\mathbf{b}-\mathbf{a}\right| &lt; \frac{\epsilon}{\left|\mathbf{b}-\mathbf{a}\right|} \cdot \left|\mathbf{b}-\mathbf{a}\right| = \epsilon$$</span> and we are done.</p> <p>Definition of a continuous function:</p> <blockquote> <p>Suppose <span class="math-container">$X, Y$</span> are metric spaces, <span class="math-container">$E \subset X, p \in E$</span> and <span class="math-container">$f$</span> maps <span class="math-container">$E$</span> into <span class="math-container">$Y$</span>. 
Then, <span class="math-container">$f$</span> is said to be continuous at <span class="math-container">$p$</span> if for every <span class="math-container">$\epsilon &gt; 0, \exists \delta &gt; 0$</span> such that <span class="math-container">$d_Y(f(x), f(p))&lt; \epsilon$</span> for all points <span class="math-container">$x \in E$</span> for which <span class="math-container">$d_X(x, p) &lt; \delta$</span></p> </blockquote> <p>Definition of a closed set:</p> <blockquote> <p><span class="math-container">$E$</span> is closed if every limit point of <span class="math-container">$E$</span> is a point of <span class="math-container">$E$</span>.</p> </blockquote> <p>Definition of closure of a set (denoted by <span class="math-container">$\bar{E}$</span>):</p> <blockquote> <p><span class="math-container">$\bar{E} = E \cup E'$</span> where <span class="math-container">$E'$</span> is the set of limit points of <span class="math-container">$E$</span>.</p> </blockquote> <p>Definition of a limit point</p> <blockquote> <p>A point <span class="math-container">$p$</span> is a limit point of a set <span class="math-container">$E$</span> if every neighborhood of <span class="math-container">$p$</span> contains a point <span class="math-container">$q \neq p$</span> such that <span class="math-container">$q \in E$</span>.</p> </blockquote>
William Elliot
426,203
<p>Since <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are separated, there exist open disjoint <span class="math-container">$U,V$</span> with <span class="math-container">$A \subset U, B \subset V$</span>. <span class="math-container">$A_0 \subset K = p^{-1}(U), B_0 \subset L = p^{-1}(V)$</span>.<br /> Show <span class="math-container">$K$</span> and <span class="math-container">$L$</span> are open and disjoint.</p>
136,240
<p>Why does the subgroup have to be able divide the group? For example why isn't the group:<br/> $S= \{-4,-3,-1,0,1,2,3,4\}$ <br/> a subgroup of $G= \{-6,-5,-4,-3,-2,-1,0,1,2,3,4,5,6 \}$? Aren't both additive, have a neutral number, each member has a reciprocal and every member in group $S$ is present in group $G$?</p> <p>Why do subgroups have such a specific requirement of being be able to divide the group?</p>
Henry T. Horton
24,934
<p>A group must be closed under the group operation. Since you said "both are additive," I am going to assume that the group operation on $G$ (and hence $S$) is addition. Then $G$ is not closed under addition ($6 + 1 = 7 \notin G$, for example), unless you are considering $G$ as $\mathbb{Z}/13$. The same issue happens with $S$: $3 + 4 = 7 \notin S$, so $S$ is not closed under the group operation and is hence not a subgroup.</p> <p>Also, the additive inverse of $2 \in S$, namely $-2$, is not in $S$.</p>
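<p>As a quick illustration (a Python sketch using the sets from the question), closure under ordinary addition fails for both $S$ and $G$, and $S$ is also missing an inverse:</p>

```python
S = {-4, -3, -1, 0, 1, 2, 3, 4}
G = set(range(-6, 7))

def closed_under_addition(X):
    # a subset can only be a group under + if every pairwise sum stays inside it
    return all(a + b in X for a in X for b in X)

print(closed_under_addition(S), closed_under_addition(G))  # False False
print(-2 in S)  # False: 2 is in S but its additive inverse -2 is not
```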
136,240
<p>Why does the subgroup have to be able divide the group? For example why isn't the group:<br/> $S= \{-4,-3,-1,0,1,2,3,4\}$ <br/> a subgroup of $G= \{-6,-5,-4,-3,-2,-1,0,1,2,3,4,5,6 \}$? Aren't both additive, have a neutral number, each member has a reciprocal and every member in group $S$ is present in group $G$?</p> <p>Why do subgroups have such a specific requirement of being be able to divide the group?</p>
Mikko Korhonen
17,384
<p>If $G$ is a finite group, then according to <a href="http://mathworld.wolfram.com/LagrangesGroupTheorem.html" rel="nofollow">Lagrange's theorem</a> the order (number of elements) of a subgroup $H$ divides the order of $G$. A proof of this fact can be found in any introductory text on abstract algebra.</p> <p>The basic idea is that $G$ can be split up into pairwise disjoint parts called <a href="http://mathworld.wolfram.com/Coset.html" rel="nofollow">cosets</a> that each have the same order as $H$. Therefore $G$ has order $n \cdot |H|$, where $n$ is the number of these parts and $|H|$ is the order of $H$.</p>
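<p>The coset decomposition is easy to see concretely; here is a small Python sketch with $G = \mathbb{Z}/6$ and $H = \{0, 3\}$:</p>

```python
# cosets of H = {0, 3} in Z/6: they partition G into parts of size |H|
G = set(range(6))
H = {0, 3}
cosets = {frozenset((g + h) % 6 for h in H) for g in G}
print(sorted(sorted(c) for c in cosets))  # [[0, 3], [1, 4], [2, 5]]
print(len(cosets) * len(H) == len(G))     # True: Lagrange's theorem
```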
253,921
<p>I am trying to come up with a measurable function on $[0,1]^2$ which is not integrable, but such that the iterated integrals are defined and unequal.</p> <p>Any help would be appreciated.</p>
Michael Hardy
11,667
<p>$$ \int_0^1\int_0^1 \frac{x^2-y^2}{(x^2+y^2)^2} \,dy\,dx \ne \int_0^1\int_0^1 \frac{x^2-y^2}{(x^2+y^2)^2} \,dx\,dy $$</p> <p>Obviously either of these is $-1$ times the other and if this function were absolutely integrable, then they would be equal, so their value would be $0$. But one is $\pi/4$ and the other is $-\pi/4$, as may be checked by freshman calculus methods: for the first, the inner integral is $\int_0^1 \frac{x^2-y^2}{(x^2+y^2)^2}\,dy = \left[\frac{y}{x^2+y^2}\right]_{y=0}^{y=1} = \frac{1}{1+x^2}$, and then $\int_0^1 \frac{dx}{1+x^2} = \pi/4$.</p>
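<p>A quick numerical sanity check (a Python/SciPy sketch; the antiderivative $y/(x^2+y^2)$ supplies the closed form used below):</p>

```python
import math
from scipy.integrate import quad

def inner(x):
    # inner integral over y for a fixed x > 0
    val, _ = quad(lambda y: (x**2 - y**2) / (x**2 + y**2)**2, 0, 1)
    return val

# the inner integral equals 1/(1+x^2) for each fixed x > 0
for x in (0.1, 0.5, 1.0):
    assert abs(inner(x) - 1 / (1 + x**2)) < 1e-7

# so the dy-first iterated integral is arctan(1) = pi/4
outer, _ = quad(lambda x: 1 / (1 + x**2), 0, 1)
print(outer, math.pi / 4)
# reversing the order flips the sign (the integrand is antisymmetric in x and y),
# giving -pi/4, so the two iterated integrals disagree
```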
3,528,370
<p>Maybe this is too obvious, but I what to be sure... Let <span class="math-container">$Y$</span> be a <span class="math-container">$p\times p$</span> symmetric random matrix (i.e. you can think about <span class="math-container">$Y$</span> as a matrix with random entries). Define <span class="math-container">$E[Y]$</span>, the expectation of <span class="math-container">$Y$</span>, as the matrix with entries <span class="math-container">$(E[Y])_{ij} = E[Y_{ij}]$</span>. I think that the next affirmation is true:</p> <blockquote> <p>If <span class="math-container">$E[Y] = 0_{p\times p}$</span> then <span class="math-container">$\lambda_{\max}(Y)\geq 0$</span> a.s., where <span class="math-container">$\lambda_{\max}(Y)$</span> is the greatest eigenvalue of <span class="math-container">$Y$</span> (which is real since <span class="math-container">$Y$</span> is symmetric). </p> </blockquote> <p>My argument is as follows. Suppose that <strong>all</strong> the eigenvalues are negative. Then <span class="math-container">$tr(Y)&lt;0$</span>, which implies that <span class="math-container">$E[tr(Y)]&lt;0$</span> and <span class="math-container">$tr(E[Y])&lt;0$</span>. This is a contradiction since <span class="math-container">$E[Y] = 0_{p\times p}$</span>. Then there exist at least one non-negative eigenvalue, one of which is <span class="math-container">$\lambda_{\max}(Y)$</span>.</p> <p>Is my argument correct? In that case, is there a generalization of this result?</p>
gt6989b
16,192
<p>To add to the existing Ian's answer, the mistake in your proof is that <span class="math-container">$tr(Y) &lt; 0$</span> <strong>does not imply</strong> that <span class="math-container">$\mathbb{E}[tr(Y)] &lt; 0$</span>.</p> <p>The reason for this is that <span class="math-container">$Y$</span> is some <em>realization</em> of the random matrix, which may be anything within the allowed range. Consider, for example, restricting <span class="math-container">$Y$</span> to be a <span class="math-container">$2 \times 2$</span> diagonal matrix with independent mean-zero entries <span class="math-container">$u,v$</span>. Then, <span class="math-container">$\mathbb{E}[tr(Y)] = \mathbb{E}[u+v] = 0$</span>, but for any particular matrix, <span class="math-container">$tr(Y) = u+v$</span> and both <span class="math-container">$u,v$</span> may end up negative in any particular realization.</p> <hr> <p>For another example, consider a uniform random variable <span class="math-container">$A \sim \mathcal{U}[-1,1]$</span>. Clearly, <span class="math-container">$\mathbb{E}[A] = 0$</span>, but if I take a sample from this distribution, generating a sequence <span class="math-container">$A_1, A_2, \ldots$</span>, a good number of them will be negative.</p>
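<p>A quick simulation (a Python/NumPy sketch, modeling the diagonal entries as uniform on $[-1,1]$, which is one concrete mean-zero choice) makes the point tangible: the trace is zero <em>in expectation</em>, yet roughly half of the individual realizations have negative trace:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
# 2x2 diagonal matrices with independent mean-zero Uniform(-1, 1) entries u, v
uv = rng.uniform(-1.0, 1.0, size=(100_000, 2))
traces = uv.sum(axis=1)

print(traces.mean())        # close to 0: E[tr(Y)] = 0
print((traces < 0).mean())  # close to 0.5: many realizations have tr(Y) < 0
```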
1,438,999
<p>If $(x-a)^2=(x+a)^2$ for all values of $x$, then what is the value of $a$?</p> <p>At the end when you get $4ax=0$, can I divide by $4x$ to cancel out $4$ and $x$?</p>
Peter
111,826
<p>Since we know that $$(x-a)^2 = (x+a)^2$$ holds for all $x$, we particularly know that it holds for $x=a$ as well. Plugging that into the relation yields $$0=4a^2$$ from which we conclude that $a=0$.</p>
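<p>The same conclusion drops out symbolically (a SymPy sketch): expanding $(x-a)^2-(x+a)^2$ leaves $-4ax$, which can vanish for all $x$ only when $a=0$:</p>

```python
import sympy as sp

x, a = sp.symbols('x a')
expr = sp.expand((x - a)**2 - (x + a)**2)
print(expr)               # -4*a*x
print(sp.solve(expr, a))  # [0]
```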
102,304
<p>I have here a complex equation:</p> <p>$$z^2 - (7+j)z + 24 + 7j = 0$$</p> <p>How do we get the roots of this equation? I started using the quadratic formula $\frac{-b \pm \sqrt{b^2-4ac}}{2a}$, but it got too messy. Is there any way to directly attack this? Thanks.</p>
Zarrax
3,035
<p>Note that the constant term $(24 + 7j)$ is one-half of the square of the $7 + j$ coefficient. This suggests writing $z = (7 + j)w$, and the equation becomes $$(48 + 14j) w^2 - (48 + 14j)w + (24 + 7j) = 0$$ Divide through by $(24 + 7j)$ and you get $$2w^2 - 2w + 1 = 0$$ By the quadratic formula this has roots ${1 \over 2} \pm {j \over 2}$. So the roots of the original equation are $(7 + j) ({1 \over 2} \pm {j \over 2})$, or in other words $3 + 4j$ and $4 - 3j$.</p>
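<p>The two roots are easy to verify numerically (a Python check, writing <code>1j</code> for the engineers' $j$):</p>

```python
# check that 3 + 4j and 4 - 3j solve z^2 - (7 + j)z + (24 + 7j) = 0
roots = (3 + 4j, 4 - 3j)
residuals = [z**2 - (7 + 1j) * z + (24 + 7j) for z in roots]
print(residuals)  # [0j, 0j]
```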
235,661
<p>Is this sufficient? Also, any good books/other suggestions regarding the subject will be very helpful.</p> <p>Find min, max, inf, sup (if they exist):</p> <p>$$B=\left\{\frac{m}{m+n}:m,n\in\mathbb{N}\right\}$$</p> <p>Showing $B$ has an upper bound: Let $M=1$; we need to find $m,n$ fulfilling:$$\frac{m}{m+n}&gt;1$$ As $n\in\mathbb{N}$ appears only in the denominator, the smaller its value, the greater $b$ will be. Therefore, let us choose $n=1$ (smallest possible value).$$\frac{m}{m+1}&gt;1\,\,\,\,\,\leftrightarrow\,\,\,\,\,\,m&gt;m+1$$</p> <p>We got a contradiction, thus $M$ is an upper bound of $B$.</p> <p>Showing $M=\sup B$: Let $\epsilon&gt;0$, we need to find $b\in B$ fulfilling:$$\frac{m}{m+n}&gt;1-\epsilon$$ Again, we'll choose $n=1$ to get the biggest $b$ possible: $$\begin{align} \frac{m}{m+1}&amp;&gt;1-\epsilon\\ m&amp;&gt;m+1-m\epsilon -\epsilon\\m&amp;&gt;\frac{1-\epsilon}{\epsilon} \end{align}$$ Therefore for every $\epsilon$ we can choose $n=1,m&gt;\frac{1-\epsilon}{\epsilon}$, which means $\sup B=1$.</p> <p>Edit: Since $m,n \in\mathbb{N}$, every element of $B$ is positive, so $0$ is a lower bound.</p> <p>Showing $0=\inf B$: Let $\epsilon&gt;0$, we need to find $b\in B$ fulfilling: $$\frac{m}{m+n}&lt;0+\epsilon$$ Choosing $m=1$ to make $b$ as small as possible: $$1&lt;\epsilon+n\epsilon\\n&gt;\frac{1-\epsilon}{\epsilon}$$</p> <p>We have shown that such $b$ exists for every $\epsilon$. Therefore, $\inf B = 0$.</p>
Martin Argerami
22,857
<p>Your proof that $1$ is an upper bound is unnecessarily complicated: as $m,n&gt;0$, we have $m&lt;m+n$, and then $m/(m+n)&lt;1$. </p> <p>Also, as was mentioned, $0$ is a lower bound (since everything is positive). And it is the infimum, as $1/(1+n)\to 0$. </p>
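<p>For intuition (not a proof), a small Python enumeration with exact fractions shows the elements crowding toward the bounds $0$ and $1$ without ever reaching them:</p>

```python
from fractions import Fraction

# enumerate m/(m+n) for m, n in 1..59
vals = {Fraction(m, m + n) for m in range(1, 60) for n in range(1, 60)}
print(float(min(vals)), float(max(vals)))     # 1/60 and 59/60
assert Fraction(0) not in vals and Fraction(1) not in vals
```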
1,514,628
<p>I've been looking over some old assignments in my analysis course to get ready for my upcoming exam - I've just run into something that I have no idea how to solve, though, mainly because it looks nothing like anything I've done before. The assignment is as follows:</p> <p>"Let $H$ be a Hilbert space, and let $(e_n)_{n\in\mathbb{N}}$ be an orthonormal basis for $H$. Let $E$ be the linear subspace spanned by the three elements $e_1 + e_2$, $e_3 + e_4$, $e_2 + e_3$. Let $P_E : H \to E$ be the projection onto $E$."</p> <p>How would one then do the following three things:</p> <ol> <li>Determine an orthonormal basis for $E$</li> <li>Compute $P_E e_1$</li> <li>Calculate $\|e_1\|^2$, $\|P_E e_1\|^2$ and $\|e_1 - P_E e_1\|^2$</li> </ol> <p>Usually when we've looked at these types of assignments we've gotten actual basis vectors, $e$. How does one do these things symbollically?</p> <p>I've tried doing Gram-Schmidt for the first part, but I've no idea if it's right, what I'm doing. I end up with three basis vectors looking something like</p> <p>$u_1 = \frac{e_1+e_2}{2}$ , $u_2 = \frac{e_3+e_4}{2}$ , $u_3 = \frac{e_1+e_4}{2}$</p> <p>Any help would be much appreciated, right now I'm getting nowhere, haha.</p>
levap
32,262
<p>For part one, Gram-Schmidt is indeed the way to go. Let $f_1 = e_1 + e_2$, $f_2 = e_3 + e_4$, $f_3 = e_2 + e_3$. Then, since $f_1$ and $f_2$ are already orthogonal to each other, you need only to normalize them. Since $e_1$ and $e_2$ are orthogonal, you have $||f_1||^2 = ||e_1||^2 + ||e_2||^2 = 2$ so $||f_1|| = \sqrt{2}$ and $u_1 = \frac{e_1 + e_2}{\sqrt{2}}$. Similarly, $u_2 = \frac{e_3 + e_4}{\sqrt{2}}$. Then</p> <p>$$ u_3 = \frac{f_3 - \left&lt; f_3, u_1 \right&gt; \cdot u_1 - \left&lt; f_3, u_2 \right&gt; \cdot u_2}{||f_3 - \left&lt; f_3, u_1 \right&gt; \cdot u_1 - \left&lt; f_3, u_2 \right&gt; \cdot u_2||} = \frac{e_2 + e_3 - \frac{\left&lt;e_2 + e_3, e_1 + e_2 \right&gt;}{2} \cdot (e_1 + e_2) - \frac{\left&lt; e_2 + e_3, e_3 + e_4 \right&gt;}{2} \cdot (e_3 + e_4)}{||\cdot||} = \frac{e_2 + e_3 - \frac{1}{2}(e_1 + e_2) -\frac{1}{2}(e_3 + e_4)}{||\cdot||} = \frac{\frac{1}{2}(e_2 + e_3 - e_1 - e_4)}{\frac{\sqrt{4}}{2}} = \frac{e_2 + e_3 - e_1 - e_4}{2}. $$</p> <p>The projection $P_E(e_1)$ is then readily computed as</p> <p>$$ P_E(e_1) = \left&lt; e_1, u_1 \right&gt; u_1 + \left&lt; e_1, u_2 \right&gt; u_2 + \left&lt; e_1, u_3 \right&gt; u_3 = \frac{e_1 + e_2}{2} + 0 - \frac{e_2 + e_3 - e_1 - e_4}{4}. $$</p>
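<p>Since everything happens inside the span of $e_1,\dots,e_4$, the computation can be checked numerically by identifying $e_i$ with the standard basis of $\mathbb{R}^4$ (a NumPy sketch, which also produces the norms asked for in part 3):</p>

```python
import numpy as np

e = np.eye(4)  # stand-ins for e1..e4 (the rest of the basis plays no role here)
f1, f2, f3 = e[0] + e[1], e[2] + e[3], e[1] + e[2]

# Gram-Schmidt: f1 and f2 are already orthogonal, so just normalize them
u1 = f1 / np.linalg.norm(f1)
u2 = f2 / np.linalg.norm(f2)
w = f3 - (f3 @ u1) * u1 - (f3 @ u2) * u2
u3 = w / np.linalg.norm(w)

# orthogonal projection of e1 onto E = span{u1, u2, u3}
P_e1 = (e[0] @ u1) * u1 + (e[0] @ u2) * u2 + (e[0] @ u3) * u3
print(P_e1)                            # ≈ [0.75, 0.25, -0.25, 0.25]
print(np.linalg.norm(P_e1)**2)         # ≈ 0.75
print(np.linalg.norm(e[0] - P_e1)**2)  # ≈ 0.25, while ||e1||^2 = 1
```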
3,500,405
<p>I have a similar question to what was asked already <a href="https://math.stackexchange.com/questions/2511111/prove-that-the-following-map-has-at-least-k-2-fixed-points">here</a></p> <p>But I do not really understand the answer there.</p> <p>The problem is: Let <span class="math-container">$x_0 \in S^1$</span> and let <span class="math-container">$f: S^1 \rightarrow S^1$</span> be a continuous map with <span class="math-container">$f(x_0) = x_0$</span>. Suppose moreover that the induced map <span class="math-container">$f_{*} : \pi_1 (S^1, x_0) \rightarrow \pi_1 (S^1, x_0): [g] \mapsto k [g]$</span> for some <span class="math-container">$k &gt; 2$</span>. </p> <p>(i) Show that there are certainly <span class="math-container">$k-2$</span> other fixed points for <span class="math-container">$f$</span> besides <span class="math-container">$x_0$</span>. (hint: consider <span class="math-container">$f$</span> as being a map <span class="math-container">$f^{'}: I \rightarrow S^1$</span> with <span class="math-container">$f^{'} (0) = f^{'} (1) = x_0$</span> and study the lifts of <span class="math-container">$f^{'}$</span> to the universal covering space <span class="math-container">$\mathbb{R}$</span>.)</p> <p>(ii) Give an example of such an <span class="math-container">$f$</span> with precisely <span class="math-container">$k-1$</span> fixed points (of which <span class="math-container">$x_0$</span> is one). </p> <p>I do not understand the hint really. How can we consider <span class="math-container">$f$</span> as the map <span class="math-container">$f^{'}$</span>? And why would we do this? </p> <p>I know that the fundamental group of the circle is <span class="math-container">$\mathbb{Z}$</span>, and that every covering of <span class="math-container">$S^1$</span> is regular. 
If <span class="math-container">$p: \mathbb{R} \rightarrow S^1$</span> is the standard covering map, then the covering transformations are the homeomorphisms <span class="math-container">$\mathbb{R} \rightarrow \mathbb{R}: x \mapsto x + n$</span>, for <span class="math-container">$n$</span> an integer. But I'm not sure how this will help me. </p> <p>An elaborate answer is appreciated.</p>
Milo Brandt
174,927
<p>The general idea here is that <em>loops</em> with a fixed class in the fundamental group are uniquely associated to <em>paths</em> between a fixed pair of lifts in the universal cover. For this answer, let's have <span class="math-container">$S^1$</span> be the unit circle in the complex plane and the universal cover <span class="math-container">$p:\mathbb R\rightarrow S^1$</span> be <span class="math-container">$p(t)=e^{2\pi i t}$</span> and <span class="math-container">$I=[0,1]$</span> so that we can talk in terms of concrete examples.</p> <p>Suppose we wanted to think about the map <span class="math-container">$f(z)=z^2$</span> from <span class="math-container">$S^1$</span> to <span class="math-container">$S^1$</span>, for instance. This, when thought of as a loop based at <span class="math-container">$1$</span>, defines a curve which winds twice around the origin. It is, generally, reasonable to try to study a lift of this loop, but this is not possible because there is no map <span class="math-container">$\tilde f$</span> from <span class="math-container">$S^1$</span> to <span class="math-container">$\mathbb R$</span> so that <span class="math-container">$p\circ \tilde f = f$</span> - because, if we <em>tried</em> to lift <span class="math-container">$f$</span>, we might start at <span class="math-container">$\tilde f(1)=0$</span> and then note that <span class="math-container">$\tilde f(e^{2\pi i t})$</span> has to be <span class="math-container">$2t$</span> plus some integer - hence <span class="math-container">$\tilde f(e^{2\pi i t})$</span> must equal <span class="math-container">$2t$</span> if we set <span class="math-container">$\tilde f(1)=0$</span>. This obviously is not well-defined because <span class="math-container">$\tilde f(1)$</span> and <span class="math-container">$\tilde f(e^{2\pi i})$</span> would be forced to take the different values <span class="math-container">$0$</span> and <span class="math-container">$2$</span>. 
The failure of <span class="math-container">$f$</span> to lift is <em>precisely</em> because <span class="math-container">$f$</span> is not contractible as a loop.</p> <p>Thus, instead, we define <span class="math-container">$f':I\rightarrow S^1$</span> by <span class="math-container">$f'(t)=f(p(t))$</span>. This changes our loop into a path - which we know that we can always lift. In particular, we can find that <span class="math-container">$$\tilde f'(t)=2t$$</span> is a lift of <span class="math-container">$f'$</span> - so we have solved the issue. You can imagine that <span class="math-container">$f'$</span> is obtained by taking the domain <span class="math-container">$S^1$</span> of <span class="math-container">$f$</span> and "cutting" it at <span class="math-container">$1$</span>, then unrolling the circle into a line segment and that this is done to avoid the issue of loops not lifting into the cover.</p> <p>Once we've done that, however, we will have a much easier time; in particular, if we forget about the example <span class="math-container">$f$</span> we've been using and just assume it has the desired property, we can, by knowing about the fundamental group of the circle, see that <span class="math-container">$f$</span> has to be homotopic (assuming, without loss of generality, that <span class="math-container">$x_0=1$</span>) to <span class="math-container">$f(z)=z^k$</span>. Then, <span class="math-container">$f'$</span> must be homotopic to <span class="math-container">$kt$</span> relative the endpoints of the path. 
In particular, this means that if we lift <span class="math-container">$f'$</span> to a path <span class="math-container">$\tilde f':I\rightarrow\mathbb R$</span> with <span class="math-container">$\tilde f'(0)=0$</span>, then <span class="math-container">$\tilde f'(1)=k$</span>.</p> <p>To finish, we just note that <span class="math-container">$f$</span> has a fixed point (other than <span class="math-container">$1$</span>) exactly when <span class="math-container">$f(p(t))=p(t)$</span> for some <span class="math-container">$t\in(0,1)$</span>, which lifts to the condition <span class="math-container">$\tilde f'(t) = t + m$</span> for some <span class="math-container">$m\in\mathbb Z$</span>. However, since <span class="math-container">$\tilde f'(t)-t$</span> runs from <span class="math-container">$0$</span> at <span class="math-container">$t=0$</span> to <span class="math-container">$k-1$</span> at <span class="math-container">$t=1$</span>, the intermediate value theorem implies that such a <span class="math-container">$t$</span> exists in <span class="math-container">$(0,1)$</span> for each <span class="math-container">$m\in \{1,2,\ldots,k-2\}$</span>.</p>
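<p>For part (ii), note that $f(z)=z^k$ itself works: its fixed points on $S^1$ are exactly the solutions of $z^{k-1}=1$, i.e. $k-1$ points including $x_0=1$. A quick numerical check (Python/NumPy, with $k=5$):</p>

```python
import numpy as np

k = 5
# the (k-1)-th roots of unity are precisely the fixed points of z -> z^k on S^1
roots = np.exp(2j * np.pi * np.arange(k - 1) / (k - 1))
assert np.allclose(roots**k, roots)  # each one is fixed by f
print(len(roots))                    # k - 1 = 4 fixed points, z = 1 among them
```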
2,275,951
<p>The parabola y=x² is parameterized by x(t) = t and y(t) = t². At the point <strong>A</strong> (t,t²) a line segment <strong>AP</strong> 1 unit long is drawn normal to the parabola extending inward. Find the parametric equations of the curve traced by the point <strong>P</strong> as <strong>A</strong> moves along the parabola.</p> <p><a href="https://www.desmos.com/calculator/kp5wxcbpnt" rel="nofollow noreferrer">Best picture I could come up with</a></p>
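<p>One way to set this up (a sketch, taking "inward" to mean toward the concave side of the parabola): the tangent at $A=(t,t^2)$ is $(1,2t)$, so the inward unit normal is $(-2t,1)/\sqrt{1+4t^2}$, giving $$x(t)=t-\frac{2t}{\sqrt{1+4t^2}},\qquad y(t)=t^2+\frac{1}{\sqrt{1+4t^2}}.$$ In Python:</p>

```python
import numpy as np

def P(t):
    # point one unit along the inward unit normal from A = (t, t^2)
    A = np.array([t, t * t])
    n = np.array([-2.0 * t, 1.0]) / np.hypot(1.0, 2.0 * t)
    return A + n

print(P(0.0))  # [0. 1.]: one unit straight up from the vertex
# sanity checks: |P - A| = 1 and (P - A) is perpendicular to the tangent (1, 2t)
t = 1.3
d = P(t) - np.array([t, t * t])
print(np.linalg.norm(d), d @ np.array([1.0, 2.0 * t]))  # ≈ 1.0 and ≈ 0.0
```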
A.Γ.
253,273
<p>Define $$ F(\alpha)=\int_0^{\pi/2}e^{-\alpha\tan x}\,dx,\quad\alpha&gt;0. $$ Differentiating $F$ two times we get \begin{align} F''(\alpha)&amp;=\int_0^{\pi/2}e^{-\alpha\tan x}\tan^2x\,dx=-F(\alpha)+\int_0^{\pi/2}e^{-\alpha\tan x}(\underbrace{1+\tan^2x}_{\tan'(x)})\,dx=\\ &amp;=-F(\alpha)+\int_0^{+\infty}e^{-\alpha t}\,dt=-F(\alpha)+\frac{1}{\alpha}. \end{align} The resulting differential equation $$ F''+F=\frac{1}{\alpha},\quad\alpha&gt;0, $$ can be solved e.g. by variation of parameters $$ F(\alpha)=A(\alpha)\cos\alpha+B(\alpha)\sin\alpha $$ that gives $$ \begin{bmatrix} \cos\alpha &amp; \sin\alpha\\ -\sin\alpha &amp; \cos\alpha \end{bmatrix} \begin{bmatrix} A'\\B' \end{bmatrix}=\begin{bmatrix} 0\\1/\alpha \end{bmatrix}\quad\Rightarrow\quad \begin{bmatrix} A'\\B' \end{bmatrix}=\begin{bmatrix} -\frac{\sin\alpha}{\alpha}\\\frac{\cos\alpha}{\alpha} \end{bmatrix}. $$ Together with the initial conditions $$ F(+\infty)=F'(+\infty)=0 $$ it gives the solution $$ F(\alpha)=-\cos\alpha\int_{+\infty}^\alpha\frac{\sin t}{t}\,dt+\sin\alpha\int_{+\infty}^\alpha\frac{\cos t}{t}\,dt. $$ Now $\alpha=\frac{\pi}{2}$ gives the result.</p>
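<p>At $\alpha=\pi/2$ the cosine term drops out and the formula reduces to $F(\pi/2)=\int_{+\infty}^{\pi/2}\frac{\cos t}{t}\,dt=\operatorname{Ci}(\pi/2)$, the cosine integral. This can be checked numerically (a Python/SciPy sketch):</p>

```python
import math
from scipy.integrate import quad
from scipy.special import sici

alpha = math.pi / 2
# direct numerical evaluation of F(pi/2)
direct, _ = quad(lambda x: math.exp(-alpha * math.tan(x)), 0, math.pi / 2)
# closed form from the ODE solution: F(pi/2) = Ci(pi/2)
_, ci = sici(alpha)  # sici returns (Si(x), Ci(x))
print(direct, ci)    # both ≈ 0.472
```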
119,810
<p>My question today is about the minimization of an error function with two parameters. It is a function that measures the error of a set of points. The two parameters are the weights of a regressor. </p> <p>$$\frac{1}{N}\sum_{t=1}^{N}[r^t-(w_1x^t+w_0)]^2$$ </p> <p>The minimum should be calculated by taking partial derivatives of the error function above with respect to $w_1$ and $w_0$, setting them equal to $0$ and solving for the unknowns. However I didn't reach the solutions given. The solutions should be:<br> $$w_1=\frac{\sum_tx^tr^t-\sum_t\frac{x^t}{N}\sum_t\frac{r^t}{N}N}{\sum_t(x^t)^2-N(\sum_t\frac{x^t}{N})^2}$$<br> $$w_0=\sum_t\frac{r^t}{N}-w_1\sum_t\frac{x^t}{N}$$ </p> <p>They are performing well in practice. But my question is, can I reach them by taking the partial derivatives and setting them equal to $0$? Can anybody help me, at least with one? Thank you. </p> <p><strong>UPDATE:</strong><br> This is the regressor I get by using the $w_1$ and $w_0$ listed above. As you can see, the two model the data very well so they must be right. <img src="https://i.stack.imgur.com/GD6a3.png" alt="enter image description here"></p> <p><strong>UPDATE 2:</strong><br> I will post the passage from the book that lists $w_1$ and $w_0$ as the solution. Maybe you'll get the idea better.<br> <img src="https://i.stack.imgur.com/POP1A.png" alt="enter image description here"></p>
Chris Taylor
4,873
<p>I concur with the results in Henry's answer. In case you want to try it out for yourself, here's some Matlab code implementing the two solutions (note that due to Matlab's indexing rules, $w_0$ is <code>w(1)</code> and $w_1$ is <code>w(2)</code> etc.)</p> <pre><code>% Create a fake dataset x = linspace(2,8,30)'; r = 0.5 + 0.1 * x + 0.1 * randn(30,1); % Formulas from your question w(2) = (sum(x.*r) - length(x)*mean(x)*mean(r)) / (sum(x.^2) - length(x)*mean(x)^2); w(1) = mean(r) - w(2) * mean(x); % Formulas from Henry's answer v(1) = (mean(x.^2) * mean(r) - mean(x) * mean(r.*x)) / (mean(x.^2) - mean(x)^2); v(2) = (mean(r.*x) - mean(r)*mean(x)) / (mean(x.^2) - mean(x)^2); % Plot the data plot(x,r,'xr') hold on xlabel('House size') ylabel('Price') % Plot your best fit line (green) plot([2 8], [1 2;1 8] * w', 'g') % Plot Henry's best fit line (blue) plot([2 8], [1 2;1 8] * v', 'b') </code></pre> <p>This should result in the following plot:</p> <p><img src="https://i.stack.imgur.com/KzIa6.png" alt="enter image description here"></p>
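<p>The same check in Python/NumPy (a sketch; <code>np.polyfit</code> serves here as the reference least-squares implementation) confirms the closed-form solution from the question:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(2, 8, 30)
r = 0.5 + 0.1 * x + 0.1 * rng.standard_normal(30)
N = len(x)

# formulas from the question
w1 = (np.sum(x * r) - N * x.mean() * r.mean()) / (np.sum(x**2) - N * x.mean()**2)
w0 = r.mean() - w1 * x.mean()

# reference least-squares fit
slope, intercept = np.polyfit(x, r, 1)
print(w1 - slope, w0 - intercept)  # both ~0
```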
267,355
<p>Let $H_i = (V_i, E_i)$ be <a href="https://en.wikipedia.org/wiki/Hypergraph" rel="nofollow noreferrer">hypergraphs</a> for $i=1,2$. Then we say that $H_1\cong H_2$ if there is a bijection $\varphi:V_1\to V_2$ such that $A\in E_1$ implies $\varphi(A) \in E_2$ and $B\in E_2$ implies $\varphi^{-1}(B)\in E_1$.</p> <p>Is there a collection $\cal C$ of pairwise non-isomorphic hypergraphs on $\omega$ with $|{\cal C}| = 2^{2^{\aleph_0}}$?</p>
Gro-Tsen
17,064
<p>Others have already answered, but I think the following counting argument is worth pointing out:</p> <ul> <li><p>there are $2^{2^{\aleph_0}}$ hypergraphs on $\omega$ (since a hypergraph on $\omega$ is just a collection of nonempty subsets of $\omega$),</p></li> <li><p>each isomorphism class contains at most $2^{\aleph_0}$ elements (since there are that many permutations of $\omega$),</p></li> </ul> <p>so there must be $2^{2^{\aleph_0}}$ isomorphism classes.</p>
2,264,791
<p>I have a problem where I'm having trouble figuring out the distribution under the given condition.</p> <p>We are given 1/(<span class="math-container">$X$</span>+1), where <span class="math-container">$X$</span> is an exponentially distributed random variable with parameter 1.</p> <blockquote> <p><strong>Original Problem:</strong></p> <p>What is the distribution of 1/(<span class="math-container">$X$</span>+1), where <span class="math-container">$X$</span> is an exponentially distributed random variable with parameter 1?</p> </blockquote> <p>With parameter 1, <span class="math-container">$X$</span> can be written as <span class="math-container">$e^{-x}$</span>, and after plugging this into the given function, I got <span class="math-container">$$\frac{1}{e^{-x}+1} = \frac{e^{x}}{e^{x}+1}$$</span></p> <p>What type of distribution is this?</p>
heropup
118,193
<p>You've misunderstood the difference between a random variable $X$ and its probability density function $f_X(x)$. These are not the same thing. $X$ represents the value of the random outcome. $f_X(x)$ represents a likelihood of observing a particular outcome.</p> <p>With this in mind, given that $X \sim \operatorname{Exponential}(1)$, we have $$f_X(x) = e^{-x}, \quad x \ge 0,$$ and the <strong>cumulative distribution function</strong> $$F_X(x) = \Pr[X \le x] = 1 - e^{-x}, \quad x \ge 0.$$ Then let $Y = 1/(1+X)$, so that the CDF of $Y$ is $$F_Y(y) = \Pr[Y \le y] = \Pr[1/(1+X) \le y] = \Pr[1+X \ge 1/y] = \Pr[X \ge \tfrac{1}{y} - 1].$$ What we have done is express the CDF of $Y$ in terms of a probability on $X$ through a monotone variable transformation. Then we use the CDF of $X$ to compute the resulting probability: $$F_Y(y) = \Pr[X \ge \tfrac{1}{y} - 1] = 1 - \Pr[X \le \tfrac{1}{y} - 1] = 1 - (1 - e^{-(1/y - 1)}) = e^{1 - 1/y}.$$ What is the corresponding support of $Y$? Well, we know $X \ge 0$. So $1+X \ge 1$, which in turn implies $1/(1+X) \le 1$. And since $X$ is always positive, so is $1/(1+X)$. Therefore, $0 &lt; Y \le 1$ and we would write the complete CDF as $$F_Y(y) = \begin{cases} 0, &amp; y \le 0, \\ e^{1 - 1/y}, &amp; 0 &lt; y \le 1, \\ 1, &amp; y &gt; 1.\end{cases}$$ The density would be written $$f_Y(y) = F'_Y(y) = \begin{cases} y^{-2} e^{1-1/y}, &amp; 0 &lt; y \le 1, \\ 0, &amp; \text{otherwise}.\end{cases}$$</p>
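<p>A simulation (a Python/NumPy sketch) agrees with the derived CDF $F_Y(y)=e^{1-1/y}$ on $(0,1]$:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=200_000)  # X ~ Exponential(1)
y = 1.0 / (1.0 + x)

# compare the empirical CDF of Y with F_Y(y) = exp(1 - 1/y)
for t in (0.2, 0.5, 0.8):
    emp = (y <= t).mean()
    print(t, emp, np.exp(1 - 1 / t))  # empirical and exact values match closely
```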
936,525
<p>I am following a proof in the text OPTIMIZATION THEORY AND METHODS (a Springer series) by WENYU SUN and YA-XIANG YUAN. I came across what seems obvious: that for a column vector $v$ with dimension $n\times 1$, $$\biggl\|I-\frac{vv^T}{v^Tv}\biggr\|=1,$$ where $I$ is an $n\times n$ matrix and $\|\cdot\|$ is a matrix norm.</p> <hr> <p>I tried to verify it by considering the Frobenius norm, that is </p> <p>\begin{equation*} \begin{split} \biggl\|I-\frac{vv^T}{v^Tv}\biggr\|_F&amp; = \biggl (tr\biggl(I-\frac{vv^T}{v^Tv}\biggr)^T\biggl(I-\frac{vv^T}{v^Tv}\biggr)\biggl)^{\frac{1}{2}} \\ &amp; = \biggl (tr\biggl(I-\frac{vv^T}{v^Tv}\biggr)^2\biggr)^{\frac{1}{2}}\\ &amp; =\biggl(tr\biggl(I-\frac{vv^T}{v^Tv}\biggr)\biggr)^{\frac{1}{2}}\\ &amp; =\biggl(tr\bigl(I\bigr)-tr\biggl(\frac{vv^T}{v^Tv}\biggr)\biggr)^{\frac{1}{2}}\\ &amp; =\biggl(n-\frac{1}{\|v\|^2} \|v\|^2\biggr)^{\frac{1}{2}}\\ &amp; =\sqrt{n-1} \end{split} \end{equation*} </p> <p>(using that the matrix is symmetric and idempotent, so squaring it does not change the trace).</p> <hr> <p>So, I do not know what the problem is, because in the text no specification of the norm is given. Maybe I have to use another matrix norm.</p> <hr> <p>NOTE: A Frobenius matrix norm for any matrix $A$ is defined by \begin{equation*} \begin{split} \|A\|_F &amp; = \biggl( \sum_{i=1}^{m}\sum_{j=1}^{n}|a_{ij}|^2\biggr)^\frac{1}{2}\\ &amp; = \biggl(tr(A^TA)\biggr)^\frac{1}{2} \end{split} \end{equation*}</p>
Batman
127,428
<p>$P=I - \frac{v v^T}{v^T v}$ is the orthogonal projection onto $v^\perp$. </p> <p>Proof: Clearly, $P$ is symmetric. </p> <p>$P^2 = (I - \frac{v v^T}{v^T v}) (I - \frac{v v^T}{v^T v}) = I - 2 \frac{v v^T}{v^T v} + \frac{v v^T}{v^T v} \frac{v v^T}{v^T v} = I - 2 \frac{v v^T}{v^T v} + \frac{v (v^T v) v^T}{(v^T v)^2} = I - 2 \frac{v v^T}{v^T v} + (v^T v) \frac{v v^T}{(v^T v)^2} = P$</p> <p>Now show (non-trivial) orthogonal projections have (operator 2-norm) norm 1: $||Px||^2 = \langle Px,Px\rangle = \langle x, P^T P x\rangle = \langle x, P P x\rangle = \langle x, P^2 x\rangle = \langle x,P x\rangle \leq ||x|| ||P x|| $ so $||Px || \leq ||x||$. Now show equality is achieved by any vector in the range of the projection (in this case, any vector orthogonal to $v$). </p>
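<p>Numerically (a Python/NumPy sketch), the operator $2$-norm of $P$ is indeed $1$, while the Frobenius norm is $\sqrt{n-1}$, so the book's claim is about a spectral-type norm rather than the Frobenius norm:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
v = rng.standard_normal(n)
P = np.eye(n) - np.outer(v, v) / (v @ v)  # projection onto the complement of v

print(np.linalg.norm(P, 2))      # operator 2-norm: ≈ 1.0
print(np.linalg.norm(P, 'fro'))  # Frobenius norm: sqrt(n - 1) ≈ 2.0 here
```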
200,658
<p>What is the value of :</p> <p>$$\sum_{n=1}^{\infty}\frac{n^2+n+1}{3^n}$$</p>
user 1591719
32,016
<p><strong>HINT</strong>: you may use $e^{kx}$. Sum the geometric series $\sum_{n\ge1}e^{nx}$, differentiate both sides once and twice, and then plug in $x=-\ln(3)$. </p>
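<p>Carrying the hint out: with $x=\frac13$, the standard sums $\sum_{n\ge1}x^n=\frac{x}{1-x}=\frac12$, $\sum_{n\ge1}nx^n=\frac{x}{(1-x)^2}=\frac34$ and $\sum_{n\ge1}n^2x^n=\frac{x(1+x)}{(1-x)^3}=\frac32$ give a total of $\frac32+\frac34+\frac12=\frac{11}{4}$. A quick numeric check:</p>

```python
# partial sum of (n^2 + n + 1)/3^n; the tail decays geometrically
s = sum((n * n + n + 1) / 3**n for n in range(1, 60))
print(s)  # ≈ 2.75 = 11/4
```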
2,661,443
<p>For the equation $2^x = 7$</p> <p>The textbook says to use log base ten to solve it like this $\log 2^x = \log 7$. </p> <p>I then re-arrange it so that it reads $x \log 2 = \log 7$ then divide the RHS by $\log 2$ to isolate the $x$. I understand this part.</p> <p>I can alternatively solve it in an easier way by simply using $\log_2 7$ on my calculator.</p> <p>Using both methods the answer comes to the same which is $2.807$</p> <p>My question is twofold:</p> <ol> <li><p>Why would the textbook suggest to use log base ten rather than simply using log base two?</p></li> <li><p>I can see how using log base ten and the suggested method in the textbook makes me arrive at the same answer but I don't understand WHY this is so. How does base ten play a factor in the whole scope of things.</p></li> </ol> <p>Thank you</p>
Mohammad Riazi-Kermani
514,496
<p>Logarithmic functions enjoy many properties.</p> <p>One of the very interesting properties of logarithms is the formula called change of base formula.</p> <p>$$ \log_a x = \frac {\log_b x}{\log_b a} $$</p> <p>For example $$ \log_2 7 = \frac {\log_{10} 7}{\log_{10} 2} $$</p> <p>This formula makes finding logarithms in an arbitrary base possible by using only logarithms base $10$ or natural logarithms. </p>
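<p>In code (a Python check), the two routes from the question give the same number:</p>

```python
import math

# direct base-2 logarithm vs. the change-of-base formula with base-10 logs
direct = math.log(7, 2)
via_base10 = math.log10(7) / math.log10(2)
print(direct, via_base10)  # both ≈ 2.8074
```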
77,379
<p>The task is to show, for an $a\in \mathbb{C}^{\ast}$, that $aB_{1}(1)= B_{|a|}(a)$, </p> <p>where $B$ denotes a disc. </p> <p>Okay, maybe this is correct: </p> <p>$aB_{1}(1) = a(e^{i\phi}) = ae^{i\phi} = |a|e^{i\phi} = B_{|a|}(a)$</p> <p>But this seems very wrong! </p> <p>V</p>
VVV
18,298
<p>let a be a complex number of the form: $a:= u+vi$ and $z:= x+yi$ </p> <p>$B_{1}(1)$ means that $ |z-1| &lt; 1 $ and so $a|z-1| = (u+vi)|z-1| = (u+vi)(\sqrt{(x-1)^{2}+y^{2}} $ so we can write it as $|z-1|u+|z-1|vi&lt; a|1| = |a| = \sqrt{u^{2}+v^{2}}$</p> <p>This seems to be the wrong route also. ??</p> <p>V</p>
77,379
<p>The task is to show, for $a\in \mathbb{C}^{\ast}$, that $aB_{1}(1)= B_{|a|}(a)$, </p> <p>where $B$ denotes a disc. </p> <p>Okay, maybe this is correct: </p> <p>$aB_{1}(1) = a(e^{i\phi}) = ae^{i\phi} = |a|e^{i\phi} = B_{|a|}(a)$</p> <p>But this seems very wrong! </p> <p>V</p>
t.b.
5,363
<p>First let me answer your specific question.</p> <ol> <li><p>Let $z \in B_{1}(1)$, that is to say, $|z - 1| \lt 1$. We want to show that $az \in B_{|a|}(a)$, that is, we want to show that $|az-a| \lt |a|$. But $$ |az - a| = |a| \cdot \underbrace{|z-1|}_{\lt 1} \lt |a|, $$ as we wanted, therefore $aB_1(1) \subset B_{|a|}(a)$.</p></li> <li><p>Conversely, let $w \in B_{|a|}(a)$, that is $|w -a| \lt |a|$. We want to show that $w \in aB_{1}(1)$. Since $a \neq 0$ we can write $$ |a| \gt |w-a| = |a| \cdot \left|\frac{w}{a}-1\right|, $$ so $\left|\frac{w}{a}-1\right| \lt 1$. But this means that $z = \frac{w}{a} \in B_{1}(1)$, so $w = az \in a B_{1}(1)$, hence $B_{|a|}(a) \subset aB_1(1)$.</p></li> </ol> <p>Putting 1. and 2. together we have $aB_1(1) = B_{|a|}(a)$, as desired.</p> <hr> <p>To make this a bit more useful, we generalize slightly:</p> <p>Consider $B_{r}(p)$ with $r \gt 0$ and let $a \in \mathbb{C}^\ast$. Then $aB_{r}(p) = B_{|a|r}(ap)$.</p> <p>Indeed, if $z \in B_{r}(p)$, so $|z-p| \lt r$, then $$ |az-ap| = |a|\cdot|z-p| \lt |a|r, $$ so $aB_{r}(p) \subset B_{|a|r}(ap)$.</p> <p>Conversely, if $w \in B_{|a|r}(ap)$ then $$ |a|r \gt |w-ap| = |a|\cdot \left|\frac{w}{a} - p\right|, $$ so $\left|\frac{w}{a} - p\right| \lt r$, thus $z = \frac{w}{a} \in B_{r}(p)$ and therefore $w=az \in aB_{r}(p)$. The claimed equality $aB_{r}(p) = B_{|a|r}(ap)$ is proved.</p> <p>To sum up: multiplying by a complex scalar $a \in \mathbb{C}^\ast$ scales all balls by a factor $|a|$ (i.e., multiplies the radii by $|a|$) and moves the centers from $p$ to $ap$.</p>
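As a numeric sanity check (an editor's addition, not the answerer's), one can sample random points of $B_r(p)$ and confirm that $a$ maps them into $B_{|a|r}(ap)$; a sketch in Python:

```python
import cmath
import random

def scaled_disc_check(a, p, r, trials=1000, seed=0):
    # sample points z of B_r(p) and verify a*z lands in B_{|a|r}(a*p)
    rng = random.Random(seed)
    for _ in range(trials):
        rad = rng.uniform(0, 0.999 * r)                 # stay strictly inside the disc
        z = p + rad * cmath.exp(1j * rng.uniform(0, 2 * cmath.pi))
        if not abs(a * z - a * p) < abs(a) * r:         # |az - ap| = |a||z - p| < |a|r
            return False
    return True
```

This is only a spot-check of the inclusion $aB_r(p) \subseteq B_{|a|r}(ap)$, of course; the proof above covers both inclusions exactly.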
224,226
<p>I am trying to count the number of distinct colours in a <span class="math-container">$5\times5$</span> box, (a radius 2 filter) at all points over a quantized image. I cannot seem to get anything out of the following code except for a black square:</p> <pre><code>img = ColorQuantize[ExampleData[{&quot;TestImage&quot;, &quot;Peppers&quot;}], 8, Dithering -&gt; False]; dis = ImageFilter[CountDistinct[Flatten[#, 1]] &amp;, img, 2]; dis // ImageAdjust </code></pre> <p>I expect each pixel in the image to be replaced with a single non-negative integer telling me how many unique colours are in the radius 2 vicinity of that pixel. It's plain to see you can choose <span class="math-container">$5\times5$</span> boxes in the <em>peppers</em> image which have more than one colour so the output should look more interesting.</p> <p>I'd also like to know why this related code for 1D produces all 1's instead of the number of unique elements in a centered window as it slews across the list, and how to correct it:</p> <pre><code>MovingMap[CountDistinct, {1, 2, 3, 3, 3, 4, 5, 6}, {1, Center}, &quot;Reflected&quot;] </code></pre> <p>For 1D, I want to achieve with <code>MovingMap</code> the same behaviour you can get with <code>Partition</code> like this:</p> <pre><code>CountDistinct /@ Partition[{1, 2, 3, 3, 3, 4, 5, 6}, 3, 1, 2, {}] </code></pre>
MarcoB
27,951
<p>In your 1D <code>MovingMap</code> you are asking for a window of size 1, so the inputs to <code>CountDistinct</code> are lists with only one element, so the output is always 1 (you can see what happens by replacing <code>CountDistinct</code> with an undefined <code>f</code>). You would get closer with something like <code>MovingMap[CountDistinct, yourList, Quantity[3, &quot;Events&quot;]]</code>, but I am not convinced that you can reproduce the partitioning you get with Partition using <code>MovingMap</code></p> <hr /> <p>In your image processing, the problem stems from the fact that, as the documentation states, &quot;The function f [in <code>ImageFilter</code>] is assumed to return channel values that are normally in the range 0 to 1.&quot; <code>CountDistinct</code> does not return a number from 0 to 1. You want to rescale its results so they lie within (0,1). A brute force approach could be the assumption that you could have no more than (2n+1)(2n+1) different colors in an n-neighborhood of your pixel, which in your case is 25, so divide the output of <code>CountDistinct</code> by 25, and then apply <code>ImageAdjust</code>:</p> <pre><code>dis = With[{n = 2}, ImageAdjust@ImageFilter[CountDistinct[#]/(2 n + 1)^2 &amp;, img, n] ] </code></pre> <p><a href="https://i.stack.imgur.com/wmmXA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wmmXA.png" alt="filtered BW image" /></a></p>
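For readers outside the Wolfram Language, here is a rough pure-Python analogue (an editor's sketch, not MarcoB's code) of the distinct-count filter, using window clipping at the border rather than the padding the Wolfram version applies:

```python
def distinct_count_filter(img, n):
    # count distinct values in the (2n+1) x (2n+1) window around each pixel,
    # clipping the window at the image border (simplified edge handling)
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = {img[r][c]
                    for r in range(max(i - n, 0), min(i + n + 1, h))
                    for c in range(max(j - n, 0), min(j + n + 1, w))}
            out[i][j] = len(vals)
    return out
```

Dividing each entry of the result by `(2 * n + 1) ** 2` reproduces the 0-to-1 rescaling discussed above.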
727,752
<blockquote> <p>If S is a compact subset of R and T is a closed subset of S, then T is compact.</p> <p>(a) Prove this using definition of compactness.</p> <p>(b) Prove this using the Heine-Borel theorem.</p> </blockquote> <p>My solution: firstly I should suppose an open cover of T, and I still need to think of the set S-T. If S-T were open in R, it could be done, because the open cover of T together with S-T would be an open cover of R. The reality is that S-T is not necessarily an open set in R. My question is: how can we find an open cover which covers S-T but misses T? I don't know how to do this!</p> <p>In terms of part (b), I know T is bounded, but how do I prove that T is closed?</p>
John Hughes
114,036
<p>Here's how to get started on part a. </p> <p>Start with an open cover of $T$. You need to show it has a finite subcover. If $U$ is in the open cover, then it's open in $T$, which means that there's an open set $U'$ in $S$ such that $U = T \cap U'$. For every $U$ in your cover, find a corresponding $U'$; now you <em>almost</em> have a cover of $S$. As you observed, if you add in $S - T$, you now have an open cover of $S$. What do you know about every open cover of $S$? </p>
727,752
<blockquote> <p>If S is a compact subset of R and T is a closed subset of S, then T is compact.</p> <p>(a) Prove this using definition of compactness.</p> <p>(b) Prove this using the Heine-Borel theorem.</p> </blockquote> <p>My solution: firstly I should suppose an open cover of T, and I still need to think of the set S-T. If S-T were open in R, it could be done, because the open cover of T together with S-T would be an open cover of R. The reality is that S-T is not necessarily an open set in R. My question is: how can we find an open cover which covers S-T but misses T? I don't know how to do this!</p> <p>In terms of part (b), I know T is bounded, but how do I prove that T is closed?</p>
Simon Rose
87,590
<p>I will address part (b), since the others address (a) well.</p> <p>The Heine-Borel theorem says that a subset $V \subset \mathbb{R}$ is compact if and only if it is both closed and bounded.</p> <p>So suppose that $T \subset S \subset \mathbb{R}$. Since $S$ is compact, it is closed and bounded. What can you now say about the boundedness and closedness of $T$?</p>
4,242,561
<p>Let <span class="math-container">$T: R^3 \rightarrow R^3$</span> be a linear transformation such that <span class="math-container">$T(x,y,z) = (x,0,0)$</span>. Which implies that the matrix that represents the transformation is <span class="math-container">\begin{bmatrix}1&amp;0&amp;0\\0&amp;0&amp;0\\0&amp;0&amp;0\end{bmatrix}</span> Which of these would be the correct way to name the matrix?</p> <p><span class="math-container">$T= \begin{bmatrix}1&amp;0&amp;0\\0&amp;0&amp;0\\0&amp;0&amp;0\end{bmatrix}$</span> or <span class="math-container">$[T] = \begin{bmatrix}1&amp;0&amp;0\\0&amp;0&amp;0\\0&amp;0&amp;0\end{bmatrix}$</span> or perhaps none of them are right? I'm getting extremely confused with the notation. Tried to search online for awhile for an answer but I've been unable to find a proper answer regarding the notation.</p>
Bible Bot
963,424
<p>Like what the other poster said, it depends on the context and the chosen convention of the authors. I’ve seen matrices written with and without brackets around the letter so it’s up to you/the teacher.</p> <p>There rarely is one “true” convention for any particular math concept.</p>
230,971
<p>At the moment I use <code>Length[ DeleteDuplicates[ array ] ] == 1</code> to check whether an array is constant, but I'm not sure whether this is optimal.</p> <p>What would be the quickest way to test whether an array consists of equal elements?</p> <p>What if the elements would be integers?</p> <p>What if they are floats?</p>
Sjoerd Smit
43,522
<p>Here are two methods that are quite fast for flat lists (you can flatten arrays to test at deeper levels):</p> <pre><code>const = ConstantArray[1, 100000]; nonconst = Append[const, 2]; </code></pre> <p>Using <code>CountDistinct</code> (or <code>CountDistinctBy</code>):</p> <pre><code>CountDistinct[const] === 1 CountDistinct[nonconst] === 1 </code></pre> <blockquote> <p>True</p> </blockquote> <blockquote> <p>False</p> </blockquote> <p>Based on pattern matching:</p> <pre><code>MatchQ[const, {Repeated[x_]}] MatchQ[nonconst , {Repeated[x_]}] </code></pre> <blockquote> <p>True</p> </blockquote> <blockquote> <p>False</p> </blockquote> <p>The <code>MatchQ</code> approach can be generalized for deeper arrays using <code>Level</code> without having to <code>Flatten</code> everything:</p> <pre><code>constTensor = ConstantArray[1, {5, 5, 5}]; MatchQ[Level[constTensor, {ArrayDepth[constTensor]}], {Repeated[x_]}] </code></pre> <blockquote> <p>True</p> </blockquote> <p><code>Level</code> doesn't always perform better than <code>Flatten</code>, though. <code>Flatten</code> seems very efficient for packed arrays.</p> <h1>Timings</h1> <pre><code>CountDistinct[const] // RepeatedTiming MatchQ[const, {Repeated[x_]}] // RepeatedTiming </code></pre> <blockquote> <p>{0.00021, 1}</p> </blockquote> <blockquote> <p>{0.0051, True}</p> </blockquote> <p><code>MatchQ</code> has the advantage that it short-circuits when a list doesn't match:</p> <pre><code>nonconst2 = Prepend[const, 2]; MatchQ[nonconst2, {Repeated[x_]}] // RepeatedTiming </code></pre> <blockquote> <p>{6.*10^-7, False}</p> </blockquote> <h1>Edit</h1> <p>Here's another method I just came up with. 
It avoids messing around with the array (flattening etc.):</p> <pre><code>constantArrayQ[arr_] := Block[{ depth = ArrayDepth[arr], fst }, fst = Extract[arr, ConstantArray[1, depth]]; FreeQ[arr, Except[fst], {depth}, Heads -&gt; False] ]; </code></pre> <p>It seems like this one is quite fast for unpacked arrays:</p> <pre><code>constTensor = ConstantArray[1, 400*{1, 1, 1}]; constTensor[[1, 1, 1]] = 2.; &lt;&lt; Developer` PackedArrayQ @ constTensor (* False *) MatchQ[Level[constTensor, {ArrayDepth[constTensor]}], {Repeated[x_]}] // AbsoluteTiming MatchQ[Flatten[constTensor], {Repeated[x_]}] // AbsoluteTiming constantArrayQ[constTensor] // AbsoluteTiming (* {2.54311, False} *) (* {2.20663, False} *) (* {0.0236709, False} *) </code></pre> <p>For packed arrays, it looks like <code>MatchQ[Flatten[constTensor], {Repeated[x_]}]</code> is actually the fastest:</p> <pre><code>constTensor = ConstantArray[1, 400*{1, 1, 1}]; constTensor[[1, 1, 1]] = 2; &lt;&lt; Developer` PackedArrayQ @ constTensor (* True *) MatchQ[Level[constTensor, {ArrayDepth[constTensor]}], {Repeated[x_]}] // AbsoluteTiming MatchQ[Flatten[constTensor], {Repeated[x_]}] // AbsoluteTiming constantArrayQ[constTensor] // AbsoluteTiming (* {2.76109, False} *) (* {0.19088, False} *) (* {1.17001, False} *) </code></pre>
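For comparison outside the Wolfram Language, the same idea as `constantArrayQ` (flatten, then compare everything against the first element) can be sketched in plain Python (an editor's analogue, not part of the answer):

```python
def constant_array_q(arr):
    # flatten an arbitrarily nested list, then compare every entry to the first
    flat, stack = [], [arr]
    while stack:
        x = stack.pop()
        if isinstance(x, list):
            stack.extend(x)
        else:
            flat.append(x)
    return all(v == flat[0] for v in flat) if flat else True
```

Like the `MatchQ` approach, `all` short-circuits as soon as a mismatching element is seen.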
2,086,006
<p>You have $7$ boxes in front of you and $140$ kittens are sitting side-by-side inside the boxes, $20$ in each box. You want to take some kittens as your pets. However the kittens are very cowardly. Each time you chose a kitten from a box, the kittens that are in that box to the left of it go to the box in the left, the kittens that are in that box to the right go to the box in the right. If they don’t find a box in that direction, they simply run away. After taking a few kittens, you see that all other kittens have run away. At least how many kittens have you taken?</p>
MoebiusCorzer
283,812
<p>The function $$A:[1,+\infty)\to\Bbb R: x\mapsto xa^{\frac{1}{x}}(e^{\frac{1}{x}}-1)$$ is continuous as a product of composite functions of continuous functions.</p> <p>Hence, if the limit when $x\to+\infty$ exists, it is the same along any sequence $(x_{n})_{n}$ such that $x_{n}\to+\infty$ as $n\to+\infty$. We shall prove that the limit exists and thus for the sequence $x_{n}=n$ as well.</p> <p>We have:</p> <p>\begin{align*} &amp;\lim_{x\to+\infty}A(x)\\ &amp;=\lim_{x\to+\infty}xa^{\frac{1}{x}}(e^{\frac{1}{x}}-1)\\ &amp;=\lim_{x\to+\infty}\frac{a^{\frac{1}{x}}(e^{\frac{1}{x}}-1)}{1/x} \end{align*}</p> <p>Here, we have a $\frac{0}{0}$ limit and we can thus use L'Hospital's rule. It yields:</p> <p>\begin{align*} &amp;\lim_{x\to+\infty}A(x)\\ &amp;=\lim_{x\to+\infty}\frac{a^{\frac{1}{x}}(e^{\frac{1}{x}}-1)}{1/x}\\ &amp;=\lim_{x\to+\infty}\frac{\ln(a)a^{\frac{1}{x}}\left(-\frac{1}{x^{2}}\right)(e^{\frac{1}{x}}-1)+a^{\frac{1}{x}}\left(-\frac{1}{x^{2}}\right)e^{\frac{1}{x}}}{-\frac{1}{x^{2}}}\\ &amp;=\lim_{x\to+\infty}\ln(a)a^{\frac{1}{x}}(e^{\frac{1}{x}}-1)+e^{\frac{1}{x}}a^{\frac{1}{x}}\\ &amp;= 1 \end{align*}</p>
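A numeric spot-check of the computed limit (an editor's addition): for any fixed $a>0$, evaluating the expression at a large $x$ should give a value close to $1$.

```python
import math

def A(x, a):
    # the expression x * a^(1/x) * (e^(1/x) - 1) from the answer
    return x * a ** (1 / x) * (math.exp(1 / x) - 1)
```

For example, `A(1e7, 5.0)` and `A(1e7, 0.3)` are both within about $10^{-6}$ of $1$, consistent with the L'Hospital computation.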
2,520,768
<p>How would I approach this problem? </p> <p>Let $(a, b, c) \in \mathbb{Z^3}$ with $a^2 + b^2 = c^2$. Show that: $$ 60 \,\mid\, abc $$</p>
David
119,775
<p>There are <a href="https://en.wikipedia.org/wiki/Pythagorean_triple#Generating_a_triple" rel="nofollow noreferrer">formulas</a> for the integer solutions of $a^2+b^2=c^2$: you can use them to give a proof.</p> <p>If you don't know these formulas, here is an alternative method using modular arithmetic.</p> <p>First we show that $5\mid abc$. This is clearly true if $5\mid a$ or $5\mid b$; now assume that neither of these is the case. The squares modulo $5$ are $0,1,4$; since $a,b\not\equiv0$ we have $$c^2=a^2+b^2\equiv\langle1\ \hbox{or}\ 4\rangle+\langle1\ \hbox{or}\ 4\rangle\equiv0\ \hbox{or}\ 2\ \hbox{or}\ 3\pmod5\ .$$ But $2,3$ are impossible, so $c^2\equiv0$, so $5\mid c$, so $5\mid abc$.</p> <p>Now show that $3\mid abc$ by using a similar method. In fact, we have $3\mid a$ or $3\mid b$, because if neither of these is the case then $$c^2=a^2+b^2\equiv1+1\equiv2\pmod3\ ,$$ which is impossible. Hence $3\mid abc$.</p> <p>Now consider the situation modulo $8$, in which the squares are $0,1,4$. If $a,b$ are both odd then $$c^2=a^2+b^2\equiv1+1\equiv2\pmod8$$ which is impossible. If $a,b$ are both even then obviously $4\mid abc$. If $a$ is even and $b$ is odd then $c$ is odd and $$a^2=c^2-b^2\equiv1-1\equiv0\pmod8\quad\Rightarrow\quad 8\mid a^2\quad \Rightarrow\quad 4\mid a$$ and so $4\mid abc$; similarly, if $b$ is even and $a$ is odd then $4\mid abc$.</p> <p>We have shown that $3,4,5\mid abc$; and $3,4,5$ are coprime in pairs; so $60\mid abc$.</p>
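The modular argument can be spot-checked by brute force over small triples (a quick editor's script, not part of the proof):

```python
import math

def all_triples_divisible(limit=60):
    # every Pythagorean triple with legs below `limit` should satisfy 60 | abc
    for a in range(1, limit):
        for b in range(a, limit):
            c = math.isqrt(a * a + b * b)
            if c * c == a * a + b * b and (a * b * c) % 60 != 0:
                return False
    return True
```

Every triple found in the search, e.g. $(3,4,5)$, $(5,12,13)$, $(20,21,29)$, passes the divisibility test.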
158,916
<p>Let $M$ be a closed Riemannian 3-manifold. I think that the following fact should be true and should have a relatively simple proof, but I cannot figure it out.</p> <blockquote> <p>For every $\varepsilon&gt;0$ there is a $\delta&gt;0$ such that every smooth 2-sphere in $M$ of area smaller than $\delta$ bounds a ball of volume smaller than $\varepsilon$.</p> </blockquote> <p>Roughly, small-area spheres must bound small-volume balls. </p> <p>Note that: </p> <ul> <li> If $M\neq S^3$ then $M$ contains spheres that bound regions of arbitrarily small volume that are not balls (just take a spine of $M$ and small regular neighborhoods of it). <li> It suffices to prove that the 2-sphere is contained in a small-volume ball and invoke Alexander's theorem. <li> The same fact stated for 3-spheres in $S^4$ would imply the (open) Schoenflies problem (every 3-sphere bounds a 4-ball), since every 3-sphere in $S^4$ can be shrunk to have arbitrarily small area. <li> It is not true in general that a torus of small area is contained in a ball (pick neighborhoods of a homotopically non-trivial knot). </ul>
Sam Nead
1,650
<p>In their paper "The classical Plateau problem and the topology of 3-manifolds", Meeks and Yau claim that for any fixed closed Riemannian 3-manifold $M$ there is a lower bound for the area of non-trivial two-spheres. Furthermore, a least-area such sphere is embedded (or double covers an essential $RP^2$).</p> <p>There are many ways to use this to answer the question. Here is one possibility: </p> <p>We first induct on $s(M)$: the number of essential spheres in a maximal system in $M$. If $s(M) = 0$, then $M$ has universal cover $R^3$ or $S^3$, and we are done, using the proof of Alexander's theorem, say. (This step is not obvious, but let's move along.)</p> <p>Suppose that $S$ is a minimal area essential sphere in $M$, and so is embedded. Let $T$ be the given sphere with small area, which is necessarily inessential. If $T \cap S$ is not generic, then move $T$ slightly to make it so. We now induct on $|T \cap S|$. If $T \cap S = \emptyset$ then we can cut along $S$ and cap off with a ball to get a manifold $M'$. Note that $s(M') &lt; s(M)$. (We must also check that the area of a smallest essential sphere in $M'$ is greater than the area of $S$.)</p> <p>If $T \cap S \neq \emptyset$, then let $\alpha$ be an innermost curve of intersection. Thus $\alpha$ bounds a disk in $S$ that has area less than either disk it bounds in $T$. (This uses the fact that a surgery of an essential sphere yields at least one essential sphere, and the minimality of $S$.) So we may surger $T$ to get a pair of inessential spheres. (This is because both of them have area less than that of $T$.) This reduces $|T \cap S|$ and we are done.</p>
3,190,828
<p>Let <span class="math-container">$A\in M_n(\mathbb{C})$</span> be a matrix such that <span class="math-container">$A^n=aA$</span>,where <span class="math-container">$a\in \mathbb{R}-\{0,1\}$</span>.<br> I wanted to find <span class="math-container">$A$</span>'s eigenvalues and I thought that they are the roots of the polynomial equation <span class="math-container">$x^n=ax$</span>. Is this correct?</p>
Arthur
15,500
<p>If <span class="math-container">$A$</span> has eigenvalue <span class="math-container">$b$</span>, and <span class="math-container">$v$</span> is a corresponding (non-zero) eigenvector, then <span class="math-container">$$0 = (A^n-aA)v = A^nv - aAv = b^nv-abv = (b^n-ab)v$$</span>This means <span class="math-container">$b^n-ab = 0$</span>, which does make <span class="math-container">$b$</span> a root to the polynomial equation <span class="math-container">$x^n = ax$</span>.</p> <p>Apart from that, there is not much we can say. <span class="math-container">$A$</span> could have one, some, or all of those roots as eigenvalues, in any combination and multiplicity (as long as the total multiplicity is <span class="math-container">$n$</span>). For instance, by being a diagonal matrix with the desired combination of eigenvalues along the diagonal.</p>
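A small numeric illustration (an editor's addition) that every root of $x^n = ax$ does satisfy the defining equation: e.g. for $n = 4$, $a = 2$ the candidates are $0$ and the three cube roots of $2$.

```python
import cmath

n, a = 4, 2.0
# eigenvalue candidates: b = 0, or b^(n-1) = a, i.e. the (n-1)-th roots of a
roots = [0.0] + [a ** (1 / (n - 1)) * cmath.exp(2j * cmath.pi * k / (n - 1))
                 for k in range(n - 1)]
# a diagonal matrix with these entries on the diagonal satisfies A^n = a A entrywise
assert all(abs(b ** n - a * b) < 1e-9 for b in roots)
```

As the answer notes, any selection (with multiplicity) from these candidates can be realized as the spectrum of a diagonal matrix satisfying $A^n = aA$.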
2,080,716
<p>I have the quadratic form $$Q(x)=x_1^2+2x_1x_4+x_2^2 +2x_2x_3+2x_3^2+2x_3x_4+2x_4^2$$</p> <p>I want to diagonalize the matrix of Q. I know I need to find the matrix of the associated bilinear form but I am unsure on how to do this.</p>
Harsh Kumar
395,886
<p>Sum of numbers divisible by $3=1683$</p> <p>Sum of numbers divisible by $7=735$</p> <p>Sum of numbers divisible by $21=210$</p> <p><strong>We need the sum of the multiples of $21$ because, if we subtract only the sum of the multiples of $3$ and the sum of the multiples of $7$, then the numbers divisible by both $3$ &amp; $7$ (that is, by $21$) are subtracted twice, but we need to subtract them only once.</strong></p> <p>$\therefore$<br> The sum of numbers divisible by neither $3$ nor $7=$ Sum of first $100$ natural numbers $-$ Sum of multiples of $3-$ Sum of multiples of $7+$ Sum of multiples of $21$ </p> <p>$=5050-1683-735+210$ </p> <p>$=2842$</p>
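A one-line brute-force confirmation (an editor's addition) of the inclusion-exclusion count:

```python
# sum over 1..100 of the numbers divisible by neither 3 nor 7
total = sum(k for k in range(1, 101) if k % 3 != 0 and k % 7 != 0)
assert total == 5050 - 1683 - 735 + 210 == 2842
```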
2,080,716
<p>I have the quadratic form $$Q(x)=x_1^2+2x_1x_4+x_2^2 +2x_2x_3+2x_3^2+2x_3x_4+2x_4^2$$</p> <p>I want to diagonalize the matrix of Q. I know I need to find the matrix of the associated bilinear form but I am unsure on how to do this.</p>
Nilabro Saha
304,908
<p>Number of multiples of $3$ between $1$ &amp; $100$ = $[\frac{100}{3}]$ = $33$,</p> <p>Number of multiples of $7$ between $1$ &amp; $100$ = $[\frac{100}{7}]$ = $14$,</p> <p>Number of multiples of $21$ between $1$ &amp; $100$ = $[\frac{100}{21}]$ = $4$,</p> <p>where $[x]$ denotes the floor (greatest integer) function.</p> <p><strong>Sum of first 100 terms excluding terms divisible by $3$ and $7$</strong>:</p> <p>$$ S = \sum_{i=1}^{100} i - 3\sum_{i=1}^{33} i - 7\sum_{i=1}^{14} i + 21\sum_{i=1}^4 i. $$</p> <p>Apply the formula for $\sum_{i=1}^n i$ and take it from here.</p>
2,595,247
<p>What is the equation of the circle to which the lines $y=x$ and $y=x-4$ are tangent at $(2,2)$ and $(4,0)$ respectively?</p>
QED
91,884
<p>Suppose $\{\alpha_n\}$ is a sequence of functions whose total variations are uniformly bounded on $[a,b]$. If $\{\alpha_n\}$ is pointwise convergent to $\alpha$ on $[a,b]$, then for every $f$ continuous on $[a,b]$, $$\lim_{n\to\infty}\int_{a}^bf(x)d\alpha_n=\int_a^bf(x)d\alpha$$ Clearly, $$\lim_{n\to\infty}x^{n+1}=\alpha(x)=\left\{\begin{array}{ccc}0&amp;\text{if}&amp;x\in[0,1)\\1&amp;\text{if}&amp;x=1\end{array}\right.$$</p>
1,515,478
<p>Given a quadratic equation with one and only one root (for example $6-\sqrt{2}$). Do there exist integers $a,b$ and $c$ with $ax^2 + bx + c = 0$ for that root?</p>
Jack Smith
841,375
<p>When there is only one root, the discriminant is zero and the root is <span class="math-container">$\frac{-b}{2a}$</span> because <span class="math-container">$\frac{-b+0}{2a}$</span> and <span class="math-container">$\frac{-b-0}{2a}$</span> are both equal to <span class="math-container">$\frac{-b}{2a}$</span></p> <p>It is impossible for a quadratic equation with integer coefficients to have only one irrational root because <span class="math-container">$\frac{-b}{2a}$</span> will always be rational when the coefficients are integers. That's simply because a rational number is defined as a number that can be expressed as a ratio of two integers.</p>
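A small exact-arithmetic illustration (an editor's addition, with hypothetical coefficients) that a double root of an integer-coefficient quadratic is always the rational number $-b/(2a)$:

```python
from fractions import Fraction

a, b, c = 3, -12, 12            # hypothetical integers chosen so b^2 - 4ac = 0
assert b * b - 4 * a * c == 0   # zero discriminant: one and only one root
root = Fraction(-b, 2 * a)      # -b/(2a), necessarily rational
assert a * root ** 2 + b * root + c == 0
```

So an irrational number like $6-\sqrt{2}$ can never be the only root of such a quadratic.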
2,863,995
<p><em>Problem:</em></p> <p>Let $f_n: [0,1] \to \mathbb{R}$ be a sequence of measurable functions. </p> <p>Suppose that $\int_{0}^{1}|f_n(x)|^2 ~ dx \le 1$ for $n \in \mathbb{N}$ and $f_n$ converges to $0$ a.e. </p> <p>Show that $\lim_{n \to \infty} \int_{0}^{1} f_n(x) ~ dx = 0$.</p> <p><em>Question:</em> <strong>Is the following solution correct? If not, how can it be fixed?</strong></p> <p><em>Proposed Solution:</em></p> <p>We have $\|f_n\|_{L^2} \le 1$ $\forall n \in \mathbb{N}$ and the measure space is sigma-finite. So $\|f_n\|_{L^1} \le \|f_n\|_{L^2} \le 1$ and $f_n$ is bounded above by the integrable function $g(x) = 1$ for every $n$. By the pointwise-a.e. convergence, $f \equiv 0$, and $|f| \le g$. So by DCT, the result follows.</p>
Wraith1995
462,363
<p>The proof is false. DCT requires that the function be dominated in the sense that $|f_{n}| \leq g$. </p> <p>Also, the inequalities that you use don't apply in a sigma finite measure space. They apply in a finite measure space! They are false over the entire real line!</p>
1,986,172
<p>I am asked to simplify $(\sqrt{t^3}) \times (\sqrt{t^5})$.</p> <p>I get up to $\sqrt[3]{t^3}\times \sqrt{t^5}$ but I am not sure how to simplify this further as now roots are involved and not just powers.</p> <p>When I checked the solutions the final answer should be $t^4$ but I'm not sure how this is achieved.</p>
Ross Millikan
1,827
<p>One way is to note that $\left( \sqrt t \right)^3=t^{\frac 32}$ and similarly for the other one. Then when you multiply the terms you add the exponents, giving $t^{\frac 32+\frac 52}=t^4$.</p>
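A quick numeric spot-check (an editor's addition) of the exponent rule for $t \ge 0$:

```python
import math

# sqrt(t^3) * sqrt(t^5) = t^(3/2) * t^(5/2) = t^4 for non-negative t
for t in (0.5, 1.0, 2.0, 7.3):
    assert math.isclose(math.sqrt(t ** 3) * math.sqrt(t ** 5), t ** 4)
```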
125,116
<p>Is there a rotation representation that can also represent "turns", instead of collapsing coincident rotations into the same representation?</p> <p>In 2D, a simple angle satisfies this, as it can have additional multiples of $2\pi$. For example, rotating by a turn and a half would be $3\pi$.</p> <p>Is there something similar for 3D rotations? Does the concept even make sense there? Quaternions don't work for this since they only have two representations of any given rotation. Rotation vectors ($\theta\hat{e}$) seem to work, though they are very hard to work with.</p> <p>EDIT: My objective with this is to extend quaternion spherical interpolation to rotations of more than 180° in terms of beginning and end "orientation with turns" objects, so you could, for example, interpolate over an entire revolution using the same machinery as you would with normal small rotation interpolation.</p>
S. Carnahan
121
<p>Since you haven't described what you plan to do with this representation, I'm not sure what method would work well.</p> <p>One of the problems with representing "turns" in more than two dimensions is that you don't have much in the way of discrete invariants. This is because the fundamental group of $SO(n)$ only has two elements when $n \gt 2$. Indeed, a double rotation in any direction can be continuously deformed to the identity rotation. This means that two turns are indistinguishable from zero turns, if you consider paths that can be deformed to each other to be equivalent.</p> <p>If you want to keep track of how your rotation came about, here are two suggestions:</p> <ol> <li>You can use based paths in the group of rotations, i.e., continuous maps from a real interval into rotation matrices.</li> <li>If your paths are smooth, you can take the derivative, and use paths in velocity space, i.e., the Lie algebra of the rotation group.</li> </ol> <p>Both methods give you points in an infinite dimensional path space.</p>
203,464
<p>I would like to exclude the point <code>{x=0,y=0}</code> in the function definition</p> <pre><code>f = Function[{x, y}, {x/(x^2 + y^2), -(y/(x^2 + y^2))}] </code></pre> <p>So far I tried <code>ConditionalExpression</code>and <code>/;</code> without success.</p> <p>Thanks!</p>
Ulrich Neumann
53,677
<p>Try using <code>Distribute</code>:</p> <pre><code>Distribute[{{a, b, c}, {d, e, f}}, List] (*{{a, d}, {a, e}, {a, f}, {b, d}, {b, e}, {b, f}, {c, d}, {c, e}, {c,f}}*) </code></pre>
12,949
<p>Let $\kappa$ be an infinite cardinal. Then there exists at least one <a href="http://en.wikipedia.org/wiki/Real-closed_field">real-closed field</a> of cardinality $\kappa$ (e.g. <a href="http://en.wikipedia.org/wiki/Lowenheim-Skolem">Lowenheim-Skolem</a>; or, start with a function field over $\mathbb{Q}$ in $\kappa$ indeterminates, choose an ordering and a real-closure). </p> <p>But I think there are many more, namely $2^{\kappa}$ pairwise nonisomorphic real-closed fields of cardinality $\kappa$. This is equal to the number of binary operations on a set of infinite cardinality $\kappa$, so is the largest conceivable number.</p> <p>As for motivation -- what can I tell you, mathematical curiosity is a powerful thing. One application of this which I find interesting is that there would then be $2^{2^{\aleph_0}}$ conjugacy classes of order $2$ subgroups of the automorphism group of the field $\mathbb{C}$. </p> <p><b>Addendum</b>: Bonus points (so to speak) if you can give a general model-theoretic criterion for a theory to have the largest possible number of models which yields this result as a special case.</p>
Marty
3,545
<p>Hi Pete!</p> <p>There's been a lot of study of this and similar problems. I believe that Shelah's theorem, from his 1971 paper "The number of non-isomorphic models of an unstable first-order theory" (Israel J. of Math) answers your question about real closed fields in the positive.</p> <p>The best big result on such questions that I know of is in the 2000 Annals paper "The uncountable spectra of countable theories." by Hart, Hrushovski, Laskowski.</p> <p>To answer the question on real closed fields specifically (and somewhat cautiously since I'm not a model-theorist):</p> <p>The theory of real closed fields is a complete first order theory, with countable language. It is an unstable (an easy fact, I think, and <a href="http://en.wikipedia.org/wiki/Stable_theory">explained better on wikipedia</a> than I could explain) theory as well. Hence Shelah's result applies, and the bound $2^\kappa$ is realized as you surmised.</p> <p>Bonus points should go to Shelah (and perhaps also to Hart, Hrushovski, Laskowski, whose paper mentions the result of Shelah and proves other things) for proving that this bound is realized (for uncountable cardinals), <strong>except</strong> for theories $T$ which have <strong>all</strong> of the following properties:</p> <ol> <li>$T$ has infinite models.</li> <li>$T$ is superstable.</li> <li>$T$ has prime models over pairs.</li> <li>$T$ does not have the dimensional order property.</li> </ol> <p>I have no clue what the fourth property means. But there are plenty of non-superstable theories to which Shelah's theorem applies, and hence which realize your bound (for uncountable cardinals).</p> <p>For countable cardinality, I think there are still some open problems about how many non-isomorphic models there can be of a given theory, with cardinality $\aleph_0$.</p>
12,949
<p>Let $\kappa$ be an infinite cardinal. Then there exists at least one <a href="http://en.wikipedia.org/wiki/Real-closed_field">real-closed field</a> of cardinality $\kappa$ (e.g. <a href="http://en.wikipedia.org/wiki/Lowenheim-Skolem">Lowenheim-Skolem</a>; or, start with a function field over $\mathbb{Q}$ in $\kappa$ indeterminates, choose an ordering and a real-closure). </p> <p>But I think there are many more, namely $2^{\kappa}$ pairwise nonisomorphic real-closed fields of cardinality $\kappa$. This is equal to the number of binary operations on a set of infinite cardinality $\kappa$, so is the largest conceivable number.</p> <p>As for motivation -- what can I tell you, mathematical curiosity is a powerful thing. One application of this which I find interesting is that there would then be $2^{2^{\aleph_0}}$ conjugacy classes of order $2$ subgroups of the automorphism group of the field $\mathbb{C}$. </p> <p><b>Addendum</b>: Bonus points (so to speak) if you can give a general model-theoretic criterion for a theory to have the largest possible number of models which yields this result as a special case.</p>
Dave Marker
5,849
<p>For real closed fields this is fairly easy.</p> <p>First show that for any infinite cardinal k there are 2^k nonisomorphic linear orders of cardinality k.</p> <p>For example if X is a subset of k let A_x be Q+2+Q if x is in X and Q+3+Q if x is not in X. Let L_X be the sum of the A_x for x in k. It is easy to see that L_X is isomorphic to L_Y if and only if X=Y.</p> <p>If R is a real closed field and x and y are infinite elements of R, we say that x and y are comparable if and only if there are natural numbers m and n such that x is less than y^m and y is less than x^n. The ordering of R induces a linear order L_R of the comparability classes, which we call the ladder of R.</p> <p>Suppose L is a linear order. Let F be the real algebraic numbers. Let R_L be the real closure of the transcendental extension F(x_l : l in L) of the real algebraic numbers, ordered such that if i is less than j then x_i^n is less than x_j for all n. It's not hard to show that the ladder of R_L is isomorphic to L.</p> <p>Thus if we start with nonisomorphic orders A and B then the fields R_A and R_B will be nonisomorphic.</p>
2,301,198
<p>Solve the initial value problem for the sequence $\left \{ u_{n}| n \in \mathbb{N} \right \}$ satisfying the recurrence relation: $u_n − 5u_{n-1} + 6u_{n−2} = 0 $ with $u_0 = 1$ and $u_1 = 1$.</p> <p>I've gotten the general solution to be $u_n = A(2)^n + B(3)^n$. </p> <p>Once I sub the initial values: </p> <p>$u_0 = 2A + 3B = 1$</p> <p>$u_1 = 2A + 3B = 1$</p> <p>And I'm unsure on how to solve this system. Any help appreciated, thanks. </p>
mvw
86,776
<p>$$ u_0 = A 2^0 + B 3^0 = A + B \\ u_1 = A 2^1 + B 3^1 = 2A + 3B $$ So you got the first equation wrong.</p>
1,120,013
<p>Let $X$ and $Y$ be two random variables (say real numbers, or vectors in some vector space). It seems to me that the following is true:</p> <p>E [ X | E [ X | Y ] ] = E [ X | Y]</p> <p>Note that E [ X | Y ] is a random variable in its own right. Also note that equality here is point-wise, for every point in the sample space of the joint distribution on $(X,Y)$. My question, assuming I'm not missing something and the above is true, is whether this law has a name, or is written down / proved somewhere.</p>
Henry
6,460
<p>Essentially the <a href="http://en.wikipedia.org/wiki/Law_of_total_expectation" rel="nofollow">law of iterated expectation</a>, perhaps more commonly written like $$\operatorname{E_X} [X] = \operatorname{E}_Y [ \operatorname{E}_{X \mid Y} [ X \mid Y]].$$</p> <p>For a discrete case, the essence of the proof is $$\operatorname{E}_Y [ \operatorname{E}_{X \mid Y} [ X \mid Y]] = \sum_y \sum_x x \cdot \operatorname{P}(X=x \mid Y=y) \cdot \operatorname{P}(Y=y) =\sum_x x \cdot \operatorname{P}(X=x) =\operatorname{E_X} [X].$$</p>
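The discrete computation can be verified on a small made-up joint distribution (the probabilities below are arbitrary, chosen only for illustration):

```python
from fractions import Fraction as F

# toy joint pmf P(X=x, Y=y); exact arithmetic via Fraction
joint = {(0, 0): F(1, 8), (0, 1): F(1, 4),
         (1, 0): F(3, 8), (1, 1): F(1, 4)}

def p_y(y):
    return sum(p for (x, yy), p in joint.items() if yy == y)

def e_x_given_y(y):
    return sum(x * p for (x, yy), p in joint.items() if yy == y) / p_y(y)

ex = sum(x * p for (x, _), p in joint.items())         # E[X]
tower = sum(e_x_given_y(y) * p_y(y) for y in (0, 1))   # E_Y[ E[X|Y] ]
assert ex == tower                                     # law of iterated expectation
```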
308,520
<p>The DE is $y' = -y + ty^{\frac{1}{2}}$. </p> <p>$2 \le t \le 3$</p> <p>$y(2) = 2$</p> <p>I tried to see if it was in the <a href="http://www.sosmath.com/diffeq/first/lineareq/lineareq.html" rel="nofollow">linear form</a>. I got:</p> <p>$$\frac{dy}{dt} + y = ty^{\frac{1}{2}}$$</p> <p>The RHS was not a function of <code>t</code>. I also tried separation of variables, but I couldn't isolate the <code>y</code> from the term $ty^{\frac{1}{2}}$. Any hints?</p>
Maesumi
29,038
<p>Set $y=z^2$ and simplify. You get $y'=2zz'$ and your equation is $2zz'+z^2=tz$ or $2z'+z=t$ which is linear and you can apply integrating factor to it.</p> <p>(We assumed $y&gt;0$, which it is near initial point.)</p>
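Carrying the hint to its end (this continuation is my own, not part of the answer): the integrating factor e^{t/2} gives z = t - 2 + C e^{-t/2}, and the initial condition y(2) = 2, i.e. z(2) = sqrt(2), fixes C = sqrt(2)·e. A numerical spot-check that y = z^2 then satisfies the original equation:

```python
import math

C = math.sqrt(2) * math.e        # fixed by the initial condition y(2) = 2

def z(t):
    # general solution of the linear equation 2 z' + z = t
    return t - 2 + C * math.exp(-t / 2)

def y(t):
    # undo the substitution y = z^2
    return z(t) ** 2

# check y' = -y + t*sqrt(y) at several points of [2, 3] by central differences
h = 1e-6
for i in range(11):
    t = 2 + i / 10
    dy = (y(t + h) - y(t - h)) / (2 * h)
    assert abs(dy - (-y(t) + t * math.sqrt(y(t)))) < 1e-6
assert abs(y(2) - 2) < 1e-12
```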
308,520
<p>The DE is $y' = -y + ty^{\frac{1}{2}}$. </p> <p>$2 \le t \le 3$</p> <p>$y(2) = 2$</p> <p>I tried to see if it was in the <a href="http://www.sosmath.com/diffeq/first/lineareq/lineareq.html" rel="nofollow">linear form</a>. I got:</p> <p>$$\frac{dy}{dt} + y = ty^{\frac{1}{2}}$$</p> <p>The RHS was not a function of <code>t</code>. I also tried separation of variables, but I couldn't isolate the <code>y</code> from the term $ty^{\frac{1}{2}}$. Any hints?</p>
Mikasa
8,581
<p>Hint: </p> <p>Another approach that applies here is to see that your ODE is a <a href="http://en.wikipedia.org/wiki/Bernoulli_differential_equation" rel="nofollow">Bernoulli</a> equation with $n=1/2$. The substitution $w=y^{1-1/2}=\sqrt{y}$ works.</p>
3,858,362
<p>Solve <span class="math-container">$$\dfrac{x^3-4x^2-4x+16}{\sqrt{x^2-5x+4}}=0.$$</span> We have <span class="math-container">$D_x:\begin{cases}x^2-5x+4\ge0\\x^2-5x+4\ne0\end{cases}\iff x^2-5x+4&gt;0\iff x\in(-\infty;1)\cup(4;+\infty).$</span> Now I am trying to solve the equation <span class="math-container">$x^3-4x^2-4x+16=0.$</span> I have not studied how to solve cubic equations. Thank you in advance!</p>
user
505,767
<p>We have that</p> <p><span class="math-container">$$x^3-4x^2-4x+16=x(x^2-4x+4)-8x+16=x(x-2)^2-8(x-2)=$$</span></p> <p><span class="math-container">$$=(x-2)(x^2-2x-8)=(x-2)(x+2)(x-4)=0$$</span></p>
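A quick check of the factorization, combined with the domain condition from the question (x^2 - 5x + 4 > 0): of the three roots -2, 2, 4, only x = -2 lies in the domain, so it is the only solution of the original equation. A sketch:

```python
def f(x):
    return x**3 - 4 * x**2 - 4 * x + 16

def g(x):
    return (x - 2) * (x + 2) * (x - 4)

# the factorization agrees with the cubic (spot-check integer points)
for x in range(-10, 11):
    assert f(x) == g(x)

# only the root in (-inf, 1) u (4, inf) solves the original equation
assert [r for r in (-2, 2, 4) if r**2 - 5 * r + 4 > 0] == [-2]
```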
3,858,362
<p>Solve <span class="math-container">$$\dfrac{x^3-4x^2-4x+16}{\sqrt{x^2-5x+4}}=0.$$</span> We have <span class="math-container">$D_x:\begin{cases}x^2-5x+4\ge0\\x^2-5x+4\ne0\end{cases}\iff x^2-5x+4&gt;0\iff x\in(-\infty;1)\cup(4;+\infty).$</span> Now I am trying to solve the equation <span class="math-container">$x^3-4x^2-4x+16=0.$</span> I have not studied how to solve cubic equations. Thank you in advance!</p>
PierreCarre
639,238
<p>The other answers pretty much covered all the aspects; I'm just adding a heuristic way of obtaining integer solutions of polynomial equations that can come in handy sometimes. The equation <span class="math-container">$x^3-4x^2-4x+16=0$</span> can be rewritten as <span class="math-container">$$ x(x^2-4x-4)= -16 $$</span> (it's just sending the constant term to the rhs and factoring <span class="math-container">$x$</span> on the lhs)</p> <p>So you see that any integer solution must divide 16. Since the divisors of 16 are <span class="math-container">$\pm 2, \pm 4, \pm 8, \pm 16$</span> (and <span class="math-container">$\pm 1$</span>, if you will), if there are any integer solutions, they must be in the set <span class="math-container">$\{-16,-8,\cdots, 8, 16\}$</span>. In this case, trying solutions in this set will yield all three solutions to the equation.</p> <p>Naturally, if there are no integer solutions, this gets you nowhere.</p>
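The divisor search described above is easy to mechanize; a small brute-force sketch:

```python
def f(x):
    return x**3 - 4 * x**2 - 4 * x + 16

# any integer root must divide the constant term 16
candidates = [d for d in range(-16, 17) if d != 0 and 16 % abs(d) == 0]
roots = sorted(r for r in candidates if f(r) == 0)
assert roots == [-2, 2, 4]   # here all three solutions happen to be integers
```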
1,862,524
<p>In the textbook I'm using to prepare the logic exam says that first order logic may be used to <strike>implement</strike> axiomatize data structures. There is an example of that:</p> <p>"Stack": uses a language that contains the <strike>predicates</strike> functions <em>top</em>, <em>pop</em> and <em>push</em>, and the constant <em>nil</em> (empty stack). It axioms are:</p> <ul> <li>$\forall s \forall x . top(push(s, x))=x$</li> <li>$\forall s \forall x . pop(push(s, x))=s$</li> <li>$\forall s . push(pop(s), top(s))=s$</li> </ul> <p>Premise: that I didn't understand this example completely (there's no explanation about how can these axioms describe a stack).</p> <p>Suppose I have to describe another data structure, for example a "Queue", what strategy do you follow to write the axioms of this data structure? Can you make an example of the axioms you would use?</p> <p><strong>EDIT</strong>: Using the @user21820 's answer as an example, I'm trying to answer my own question.</p> <p>Strategy:</p> <ol> <li>Describe the language used: $(Queue, add, remove, element, nil)$ where <ul> <li>$Queue$ is an unary predicate, $Queue(x)$ means $x \in Queue$;</li> <li>$add$ is a binary function. $add(q, e), q \in Queue$ returns the queue with a new element at the end of the queue;</li> <li>$remove$ is a binary function. 
$remove(q, e), q \in Queue$ returns the queue without the first element of the queue;</li> <li>$element$ is a unary function that returns the first element of the queue.</li> <li>$nil$ is a constant, means empty queue.</li> </ul></li> <li>Describe the axioms you need (they might be wrong): <ul> <li>remove an element from a queue with one element return the empty queue;</li> <li>remove an element from a queue return the queue without the first element;</li> <li>add an element from the queue returns the queue with the element added;</li> <li>element function returns the first element inserted;</li> </ul></li> <li>Write the axioms: <ul> <li>$remove(add(nil, x)) = nil$</li> <li>$\forall q \in Queue. \left(remove(add(q, x)) = ???\right)$</li> <li>$???$</li> <li>$???$</li> </ul></li> </ol>
jugglingmike
341,620
<p>Slight error in your final line. The original x limits were 0 and 2, which changed to 0 and 4 for u. However, you seem to have doubled them again when substituting into $4e^u$.</p>
4,280,426
<blockquote> <p>We have a bag with <span class="math-container">$3$</span> black balls and <span class="math-container">$5$</span> white balls. What is the probability of picking out two white balls if at least one of them is white?</p> </blockquote> <p>If <span class="math-container">$A$</span> is the event of first ball being white and <span class="math-container">$B$</span> the second ball being white, could it be <span class="math-container">$p\bigl((A|B)\cup(B|A)\bigr)$</span>? Although <span class="math-container">$B$</span> depends on <span class="math-container">$A$</span>, I don't understand why <span class="math-container">$A$</span> depends on <span class="math-container">$B$</span>, as <span class="math-container">$B$</span> occurs after <span class="math-container">$A$</span> has occurred.</p> <p>Thank you very much for your help.</p> <p>Edit: and the probability of obtaining two white balls if I have only one white (regardless if it’s the first or the second one)? Thank you very much for your help!</p>
José Carlos Santos
446,262
<p>Yes, <span class="math-container">$\overline A=\Bbb R$</span>, since <span class="math-container">$A$</span> is not closed, from which it follows that the only closed subset of <span class="math-container">$\Bbb R$</span> which contains <span class="math-container">$A$</span> is <span class="math-container">$\Bbb R$</span> itself.</p>
1,364,554
<p>Is it possible to select real values $a_{n, k}$ so that</p> <p>$$f(x) =\lim_{n \to \infty}\sum_{k = 0}^{n - 1} a_{n, k} x^k = \frac{1}{1 - x} $$ for all $x \in \mathbb{R} \setminus \{1\}$ ?</p> <p>Failing examples:</p> <ol> <li>$a_{n, k} = 1$ for all $n, k \in \mathbb{N}_0$ and $\lvert x \rvert &lt; 1$ </li> </ol> <p>$$f(x) = \lim_{n \to \infty} \sum_{k = 0}^{n - 1} x^k = \frac{1}{1 - x}$$</p> <ol start="2"> <li>$a_{n, k} = 1 - \frac{k}{n}$ for all $n, k \in \mathbb{N}_0$ and $ x\in [-1, 1)$ (One extra point!) </li> </ol> <p>$$\begin{aligned} f(x) &amp;=\lim_{n \to \infty}\sum_{k = 0}^{n - 1} \left( 1 - \frac{k}{n}\right) x^k \\ &amp;= \lim_{n \to \infty} \frac{x (x^n-1)}{n (x-1)^2} + \frac{1}{1-x} \\ &amp;= \frac{1}{1 - x}\end{aligned}$$</p>
Jack D'Aurizio
44,121
<p>In order that our power sums match the Taylor series of $\frac{1}{1-x}$ in a neighbourhood of the origin we must have $\lim_{n\to +\infty}a_{n,k}=1$ for every $k$, so we cannot have convergence outside the unit disk.</p>
1,364,554
<p>Is it possible to select real values $a_{n, k}$ so that</p> <p>$$f(x) =\lim_{n \to \infty}\sum_{k = 0}^{n - 1} a_{n, k} x^k = \frac{1}{1 - x} $$ for all $x \in \mathbb{R} \setminus \{1\}$ ?</p> <p>Failing examples:</p> <ol> <li>$a_{n, k} = 1$ for all $n, k \in \mathbb{N}_0$ and $\lvert x \rvert &lt; 1$ </li> </ol> <p>$$f(x) = \lim_{n \to \infty} \sum_{k = 0}^{n - 1} x^k = \frac{1}{1 - x}$$</p> <ol start="2"> <li>$a_{n, k} = 1 - \frac{k}{n}$ for all $n, k \in \mathbb{N}_0$ and $ x\in [-1, 1)$ (One extra point!) </li> </ol> <p>$$\begin{aligned} f(x) &amp;=\lim_{n \to \infty}\sum_{k = 0}^{n - 1} \left( 1 - \frac{k}{n}\right) x^k \\ &amp;= \lim_{n \to \infty} \frac{x (x^n-1)}{n (x-1)^2} + \frac{1}{1-x} \\ &amp;= \frac{1}{1 - x}\end{aligned}$$</p>
Daniel Fischer
83,702
<p>By the <a href="http://en.wikipedia.org/wiki/Stone%E2%80%93Weierstrass_approximation_theorem" rel="nofollow">Stone-Weierstraß theorem</a>, we can uniformly approximate every continuous function on a compact subset of $\mathbb{R}$ by a sequence of polynomials.</p> <p>Thus if we exhaust $\mathbb{R}\setminus \{1\}$ by a sequence of compact sets, say $K_m = \{ x \in \mathbb{R} : 2^{-m} \leqslant \lvert x-1\rvert \leqslant 2^m\}$, and for each $m$ pick a polynomial $p_m$ such that</p> <p>$$\sup \left\{ \biggl\lvert p_m(x) - \frac{1}{1-x}\biggr\rvert : x \in K_m\right\} \leqslant 2^{-m}$$</p> <p>and $\deg p_{m+1} &gt; \deg p_m$ for all $m$, we obtain a sequence of polynomials converging pointwise - even locally uniformly - to $\frac{1}{1-x}$ on $\mathbb{R}\setminus \{1\}$.</p> <p>Letting $a_{n,k}$ the coefficients of $p_m$ for $\deg p_m &lt; n \leqslant \deg p_{m+1}$, we then have</p> <p>$$\lim_{n\to\infty} \sum_{k = 0}^{n-1} a_{n,k} x^k = \frac{1}{1-x}$$</p> <p>for all $x \in \mathbb{R}\setminus \{1\}$ as desired.</p> <p>However, I don't think one could call such a sequence a "weighted MacLaurin series" in any reasonable way. The coefficients of the $p_m$ have most likely very little to do with the coefficients of the Taylor/MacLaurin series of $\frac{1}{1-x}$.</p>
688,430
<blockquote> <p>How can I show the following $$a^n|b^n \Rightarrow a|b$$ with $a,b$ integers.</p> </blockquote> <p>$$a^n|b^n \Rightarrow b^n=m \cdot a^n \Rightarrow b^n=(m\cdot a^{n-1}) \cdot a\qquad(1)$$ How can I continue? Do I maybe have to suppose the opposite and arrive at a contradiction? $$\text{So } a \nmid b \Rightarrow b=q\cdot a+r$$ Substituting this into relation $(1)$, could I conclude something that gives a contradiction? Or is there another way to prove this?</p>
Bill Dubuque
242
<p><strong>Hint</strong> $ $ Clear if $\,a=0.$ Else $\,(\color{#c00}{b/a})^n \! = k\in\Bbb Z\,\Rightarrow\, b/a\in \Bbb Z,\,$ because the $ $ <a href="http://en.wikipedia.org/wiki/Rational_root_theorem" rel="nofollow">Rational Root Test</a> $ $ implies that $ $ any $ $ rational $ $ root $ $ of $\ \ \color{#c00}x^n - k\ $ is integral.</p>
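The statement itself is easy to spot-check by brute force (an exhaustive test over a small range, not a proof):

```python
def divides_transfer(n, limit=30):
    """Check a^n | b^n  =>  a | b for all 1 <= a, b < limit."""
    for a in range(1, limit):
        for b in range(1, limit):
            if (b ** n) % (a ** n) == 0 and b % a != 0:
                return False
    return True

assert all(divides_transfer(n) for n in (2, 3, 4))
```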
436,225
<p><a href="http://en.wikipedia.org/wiki/Incidence_matrix">The incidence matrix</a> of a graph is a way to represent the graph. Why go through the trouble of creating this representation of a graph? In other words what are the applications of the incidence matrix or some interesting properties it reveals about its graph?</p>
Jair Taylor
28,545
<p>Another example is the beautiful Matrix Tree Theorem, which says that the number of spanning trees of a graph is equal to a minor of the Laplacian of the graph, which is a matrix closely related to the incidence matrix.</p>
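For the complete graph K_4 this is easy to check by hand: the Laplacian is 4I - J, deleting one row and column leaves a 3x3 matrix whose determinant is 16 = 4^{4-2}, matching Cayley's formula. A minimal sketch:

```python
def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

n = 4
# Laplacian of K_n: degree n-1 on the diagonal, -1 elsewhere
L = [[(n - 1 if r == c else -1) for c in range(n)] for r in range(n)]
# delete the first row and column, then take the determinant
minor = [row[1:] for row in L[1:]]
assert det3(minor) == n ** (n - 2)   # Cayley: 16 spanning trees of K_4
```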
4,160
<p>I am a guest here, having responded to a general invitation extended to the <a href="https://stats.stackexchange.com/questions">Cross Validated</a> community, to possibly contribute answers whenever some question related to Statistics comes up in this site. I do not teach Mathematics, but I do occasionally teach Statistics. And one of the less obvious and most difficult aspects to teach, is the difference between Descriptive and Inferential Statistics. </p> <p>To me of course, it seems pretty clear: Descriptive Statistics just summarize some characteristics of a specific data set. Inferential Statistics is our attempt to draw inferences about something "larger" than the data set available. How do we manage that? <em>By making a whole new set of assumptions</em>. And what do we do after? <em>We use the exact same results we derived from Descriptive Statistics</em> -but which now lead us to totally different conclusions in nature and in scope. </p> <p>And here lies the problem: these additional assumptions are simply sewn alongside the <em>tools</em> of Descriptive Statistics. And students get uneasy: in the previous (Descriptive Statistics) class, this was just the "average of the data set", "a centrality measure". How on earth the exact same number, calculated in the exact same way, has now become "an estimate of the population mean" that moreover has been derived through the interaction of the data set with a <em>function of random variables</em>,(the estimator), a function that is (say), "unbiased and asymptotically consistent"? </p> <p>The problem is not whether the concepts themselves need work and mental effort to understand. 
The problem is that this "switch of vision" of the same thing (the data set), from a "vector of numbers" to a "set of realizations of distinct random variables that belong to the same (statistical) population, forming a sample of this population" is so big that the consequent use of the exact same tools and results seems in total disharmony: surely, such a big step in the set up should lead to some brand new tools also... and it does, but mostly "later on", while the tools of Descriptive Statistics remain prominent, with maybe minor modifications (like bias-correction). </p> <p>The bigger problem is that the true problem takes time to show: students may play along, some may even like this new stochastic and probabilistic world, -but I keep getting the feeling that deep down, they feel that all the theoretical apparatus of Inferential Statistics is just an ingenious way to make something out of nothing (or too much out of too little), since after all, we keep adding the values in the data set and we divide by their number... </p> <p>Since "too much out of too little" is demonstrably <em>not</em> the case (if the general public knew how many procedures they consider deterministically controlled are in reality driven by statistical algorithms, I suspect they would have a serious panic attack), I believe it is important to find ways to deal with this.<br> One way would be to <em>start</em> with Inferential Statistics, and get rid of the notion that "Descriptive Statistics are a good introduction and familiarization step" (I just argued that they are not). </p>
where objects and concepts already taught, acquire a <em>totally different meaning</em> by activating a new set of assumptions? And how do you teach that?</p>
Community
-1
<p>I've encountered something similar in <strong>logic</strong>, where symbols can be interpreted either syntactically ($\vdash$) or semantically ($\models$). I've had the same feeling that logic students were only playing along about appreciating the difference between the two.</p> <p>The <strong>statistical results connecting descriptive and inferential statistics</strong> look like:</p> <blockquote> <p>The sample mean is an unbiased estimate of the population mean.</p> <p>The sample variance times $N/(N-1)$ is an unbiased estimate of the population variance.</p> </blockquote> <p>Fortunately, these results are easy to prove. The numerical factor helps show the difference clearly. There are lots of other nearby results to consider.</p> <p>The <strong>logical results connecting syntax and semantics</strong> look like Godel's completeness theorem:</p> <blockquote> <p>A set of sentences is consistent by these inference rules iff it has a model with these semantics.</p> </blockquote> <p>These are harder to prove. I suspect that most logic teachers would have a hard time coming up with different hypotheses and nearby theorems. So it's harder to demonstrate understanding.</p> <p>Perhaps this can reframe your pedagogical issues? $\vdash$ vs $\models$ might be even harder for students and teachers than $s^2$ vs $\sigma^2$ and $N$ vs $N-1$. </p>
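The N/(N-1) correction is also easy to see empirically; a small simulation sketch (sample size and trial count are arbitrary choices):

```python
import random
import statistics

random.seed(0)
N = 5                         # small samples make the bias easy to see
TRIALS = 20_000

biased, corrected = [], []
for _ in range(TRIALS):
    sample = [random.gauss(0, 1) for _ in range(N)]   # population variance 1
    m = sum(sample) / N
    ss = sum((x - m) ** 2 for x in sample)
    biased.append(ss / N)           # descriptive "sample variance"
    corrected.append(ss / (N - 1))  # times N/(N-1): the unbiased estimator

# the corrected estimator averages near the true variance; the raw one runs low
assert abs(statistics.mean(corrected) - 1.0) < 0.05
assert statistics.mean(biased) < statistics.mean(corrected)
```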
4,160
<p>I am a guest here, having responded to a general invitation extended to the <a href="https://stats.stackexchange.com/questions">Cross Validated</a> community, to possibly contribute answers whenever some question related to Statistics comes up in this site. I do not teach Mathematics, but I do occasionally teach Statistics. And one of the less obvious and most difficult aspects to teach, is the difference between Descriptive and Inferential Statistics. </p> <p>To me of course, it seems pretty clear: Descriptive Statistics just summarize some characteristics of a specific data set. Inferential Statistics is our attempt to draw inferences about something "larger" than the data set available. How do we manage that? <em>By making a whole new set of assumptions</em>. And what do we do after? <em>We use the exact same results we derived from Descriptive Statistics</em> -but which now lead us to totally different conclusions in nature and in scope. </p> <p>And here lies the problem: these additional assumptions are simply sewn alongside the <em>tools</em> of Descriptive Statistics. And students get uneasy: in the previous (Descriptive Statistics) class, this was just the "average of the data set", "a centrality measure". How on earth the exact same number, calculated in the exact same way, has now become "an estimate of the population mean" that moreover has been derived through the interaction of the data set with a <em>function of random variables</em>,(the estimator), a function that is (say), "unbiased and asymptotically consistent"? </p> <p>The problem is not whether the concepts themselves need work and mental effort to understand. 
The problem is that this "switch of vision" of the same thing (the data set), from a "vector of numbers" to a "set of realizations of distinct random variables that belong to the same (statistical) population, forming a sample of this population" is so big that the consequent use of the exact same tools and results seems in total disharmony: surely, such a big step in the set up should lead to some brand new tools also... and it does, but mostly "later on", while the tools of Descriptive Statistics remain prominent, with maybe minor modifications (like bias-correction). </p> <p>The bigger problem is that the true problem takes time to show: students may play along, some may even like this new stochastic and probabilistic world, -but I keep getting the feeling that deep down, they feel that all the theoretical apparatus of Inferential Statistics is just an ingenious way to make something out of nothing (or too much out of too little), since after all, we keep adding the values in the data set and we divide by their number... </p> <p>Since "too much out of too little" is demonstrably <em>not</em> the case (if the general public knew how many procedures they consider deterministically controlled are in reality driven by statistical algorithms, I suspect they would have a serious panic attack), I believe it is important to find ways to deal with this.<br> One way would be to <em>start</em> with Inferential Statistics, and get rid of the notion that "Descriptive Statistics are a good introduction and familiarization step" (I just argued that they are not). </p>
where objects and concepts already taught, acquire a <em>totally different meaning</em> by activating a new set of assumptions? And how do you teach that?</p>
Joseph Malkevitch
1,865
<p>You might find this article in the Journal of Mathematical Behavior of interest: Conceptual issues in understanding the inner logic of statistical inference: Insights from two teaching experiments, by Luis A. Saldanha, Patrick W. Thompson, pages 1-30, Volume 35, September, 2014.</p>
1,255,311
<p><img src="https://i.stack.imgur.com/5V9e0.png" alt="enter image description here"></p> <p>I understand inner product spaces with vectors, but the conversion to functions is throwing me off. Also, why do they use an integral here? I've always seen summations. I think I'm missing something with the notation here. Any help/hints would be appreciated. </p>
Chappers
221,811
<p>Take real and imaginary parts, with <span class="math-container">$A=a+bi$</span>, <span class="math-container">$C=c+di$</span>. Then the real and imaginary parts are <span class="math-container">$$ \binom{(a+c)\cos{t}+(d-b)\sin{t}}{(b+d)\cos{t}+(a-c)\sin{t}} = \begin{pmatrix} a+c &amp; d-b \\ b+d &amp; a-c \end{pmatrix} \binom{\cos{t}}{\sin{t}} $$</span></p> <hr> <p><strong>EDIT:</strong> wow, this is only half-finished. The point is that if we take <span class="math-container">$x,y$</span> as the real and imaginary parts, then if we call the matrix <span class="math-container">$M$</span>, then <span class="math-container">$(x,y) = M(\cos{t},\sin{t})$</span>, so we can (partially) invert <span class="math-container">$M$</span> to get <span class="math-container">$$ (\det{M})\binom{\cos{t}}{\sin{t}} = (\operatorname{adj}{M}) \binom{x}{y} $$</span> But then the norm is <span class="math-container">$$ (\det{M})^2 = (\det{M})^2(\cos^2{t} + \sin^2{t}) = (x,y)^T (\operatorname{adj}{M})^T (\operatorname{adj}{M}) (x,y) ,$$</span> which is a sum of squares. (And this is true even if <span class="math-container">$M$</span> is not invertible.)</p> <p>But it's easier to see directly: we get <span class="math-container">$\det{M} = a^2+b^2-c^2-d^2$</span>, so <span class="math-container">$$ (a^2+b^2-c^2-d^2) \cos{t} = (a-c)x+(b-d)y \\ (a^2+b^2-c^2-d^2) \sin{t} = -(b+d)x+(a+c)y $$</span> Now, if the determinant is zero, both of these give the same line. 
Otherwise, squaring and adding gives <span class="math-container">$$ (a^2+b^2-c^2-d^2)^2 = ((a-c)x+(b-d)y)^2 + (-(b+d)x+(a+c)y)^2 , $$</span> which is the formula for a conic section; it is an ellipse since it is written as the sum of two squares.</p> <p>Even nicer is to start with the form <span class="math-container">$z = r e^{ip} e^{it} + s e^{iq} e^{-it}$</span>: then we find that the equation of the quadratic form is <span class="math-container">$$ 1 = (r^2-s^2)^{-2} \Big( (r^2+s^2+2rs\cos{(p+q)})x^2 +(- 2rs\sin{(p+q)})(2xy) + (r^2+s^2-2rs\cos{(p+q)})y^2 \Big) , $$</span> and it is straightforward to compute the eigenvalues and eigenvectors, giving the canonical form <span class="math-container">$$ 1 = \frac{1}{(r-s)^2} (x\cos{\tfrac{1}{2}(p+q)}+y\sin{\tfrac{1}{2}(p+q)})^2 + \frac{1}{(r+s)^2} (-x\sin{\tfrac{1}{2}(p+q)}+y\cos{\tfrac{1}{2}(p+q)})^2 $$</span></p>
1,662,398
<p>I am currently studying for my upcoming midterm and I am stumped on this example provided in the slides. Basically here is the question:</p> <blockquote> <p>Given 35 computers, what is the probability that more than 10 computers are in use (active)? We are told that each computer is only active 10% of the time. The answer given in the slide is .0004</p> </blockquote> <p>Here is my attempt to reproduce that answer:</p> <p>$$1-{35 \choose 10} \cdot (0.10)^{10} \cdot (0.90)^{25}$$ </p> <p>First I got the probability of exactly 10 computers being active out of 35 and then I subtracted it from 1 to get the probability of more than 10 computers. </p> <p>EDIT: I have solved this now with the following new work!</p> <p>$$1 - \sum_{k=0}^{10}{35 \choose k}(0.1)^k (0.9)^{35-k}$$</p>
Ross Millikan
1,827
<p>The expression you have subtracted from $1$ is the correct computation of the probability that exactly $10$ computers are active. The question asks for the probability that more than $10$ are active, so you should sum the chances that $11, 12, 13, \dots$ are active. As you only expect $3.5$ to be active, these will decrease rapidly. I would compute the chance that exactly $11$ are active (like you did for $10$), then $12$, and keep going until the chance was small enough that it didn't matter any more.</p>
1,662,398
<p>I am currently studying for my upcoming midterm and I am stumped on this example provided in the slides. Basically here is the question:</p> <blockquote> <p>Given 35 computers, what is the probability that more than 10 computers are in use (active)? We are told that each computer is only active 10% of the time. The answer given in the slide is .0004</p> </blockquote> <p>Here is my attempt to reproduce that answer:</p> <p>$$1-{35 \choose 10} \cdot (0.10)^{10} \cdot (0.90)^{25}$$ </p> <p>First I got the probability of exactly 10 computers being active out of 35 and then I subtracted it from 1 to get the probability of more than 10 computers. </p> <p>EDIT: I have solved this now with the following new work!</p> <p>$$1 - \sum_{k=0}^{10}{35 \choose k}(0.1)^k (0.9)^{35-k}$$</p>
rogerl
27,542
<p>As you have correctly noted, the probability of exactly 10 being active at any one time is $${35 \choose 10} \cdot (0.10)^{10} \cdot (0.90)^{25}.$$ More generally, the probability of exactly $k$ being active at any time is $${35 \choose k} \cdot (0.10)^{k} \cdot (0.90)^{35-k}.$$ Since you want the probability of more than $10$ being active, you want either $$1-\sum_{k=0}^{10} {35 \choose k} \cdot (0.10)^{k} \cdot (0.90)^{35-k}$$ (that is, one minus the probability of ten or fewer) or $$\sum_{k=11}^{35} {35 \choose k} \cdot (0.10)^{k} \cdot (0.90)^{35-k}.$$</p>
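Both expressions give the same number, and it matches the 0.0004 quoted in the slides; a quick check:

```python
from math import comb

n, p = 35, 0.1

def pmf(k):
    # binomial probability of exactly k active computers
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

one_minus = 1 - sum(pmf(k) for k in range(0, 11))   # 1 - P(at most 10)
direct = sum(pmf(k) for k in range(11, n + 1))      # P(11 or more)

assert abs(one_minus - direct) < 1e-12
assert round(direct, 4) == 0.0004                   # the value from the slide
```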
387,268
<p>Let <span class="math-container">$A$</span> be an <span class="math-container">$N\times N$</span> nonnegative matrix with all diagonal entries equal to zero and such that there is <span class="math-container">$n_0$</span> such that all entries of <span class="math-container">$A^{n_0}$</span> are strictly positive. Let <span class="math-container">$\lambda_1,\ldots, \lambda_N$</span> be its eigenvalues ordered in the decreasing order with respect to their real parts, and <span class="math-container">$v_1,\ldots, v_N$</span> be the corresponding (left) eigenvectors. Perron and Frobenius tell us that <span class="math-container">$\lambda_1$</span> is a strictly positive real number and therefore (since the sum of eigenvalues must be zero) there must also be eigenvalues with strictly negative real part; let <span class="math-container">$\lambda_{k_0},\ldots, \lambda_N$</span> be those.</p> <p>Questions:</p> <p>(1) is it true that the &quot;smallest&quot; (with respect to the real part of the corresponding eigenvalue) eigenvector <span class="math-container">$v_N$</span> can be chosen in such a way that all of its entries are nonzero?</p> <p>(2) if the above doesn't hold, is it at least true that for any <span class="math-container">$j\in \{1,\ldots,N\}$</span> we can find <span class="math-container">$m\geq k_0$</span> such that <span class="math-container">$v_m$</span> has nonzero <span class="math-container">$j$</span>th component (that is, the set of eigenvectors corresponding to eigenvalues with negative real part cannot have a common all-zero entry index)?</p>
Noam D. Elkies
14,830
<p>(1) No. Counterexample: the symmetric <span class="math-container">$3 \times 3$</span> matrix <span class="math-container">$$ M(a,b) = \left[ \begin{array}{ccc} 0 &amp; a &amp; b \cr a &amp; 0 &amp; b \cr b &amp; b &amp; 0 \end{array} \right] $$</span> with <span class="math-container">$0 &lt; b &lt; a$</span> has <span class="math-container">$\lambda_3 = -a$</span> with 1-dimensional eigenspace generated by <span class="math-container">$(1,-1,0)$</span>.</p>
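A concrete instance with a = 2, b = 1. The formula used below for the other two eigenvalues (roots of t^2 - a t - 2 b^2 = 0, from eigenvectors of the form (1, 1, t)) is my own side computation, added to confirm that -a really is the smallest eigenvalue:

```python
import math

a, b = 2, 1                  # any 0 < b < a works
M = [[0, a, b],
     [a, 0, b],
     [b, b, 0]]

# (1, -1, 0) is an eigenvector for the eigenvalue -a
v = [1, -1, 0]
Mv = [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
assert Mv == [-a * x for x in v]

# the remaining eigenvalues both exceed -a precisely because a > b
others = [(a + s * math.sqrt(a * a + 8 * b * b)) / 2 for s in (1, -1)]
assert all(lam > -a for lam in others)
```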
830,977
<p>I'm having some real trouble with Lebesgue integration this evening and help is very much appreciated.</p> <p>I'm trying to show that $f(x) = \dfrac{e^x + e^{-x}}{e^{2x} + e^{-2x}}$ is integrable over $(0,\infty)$.</p> <p>My first thought was to write the integral as $f(x) = \frac{\cosh(x)}{\cosh(2x)}$ and then note $f(x) = \frac{\cosh(x)}{\sinh(x)^2 + \cosh(x)^2}$ so that $|f(x)| \le \frac{\cosh(x)}{\cosh(x)^2}$. These all seemed like sensible steps to me at this point, and I know the integral on the right hand side exists (Wolfram Alpha), but I'm having trouble showing it and am wondering if I have made more of a mess by introducing hyperbolic functions.</p> <p>Thanks</p>
UserB1234
82,877
<p>Hint: You should be able to show that $e^{-x}$ and $e^{-3x}$ are integrable without too much trouble (this should follow from the "standard" trick of writing $\int_{0}^{\infty}g=\int_{0}^{\infty}\lim_{n}g\chi_{(0,n)}$ and then switching limits using the monotone convergence theorem). Then you can do a comparison based on $g(x)&lt;e^{-x}+e^{-3x}$.</p>
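Numerically, the bound cosh(x)/cosh(2x) <= e^{-x} + e^{-3x} (valid for x >= 0, since cosh(2x) >= e^{2x}/2... more precisely e^{2x} + e^{-2x} >= e^{2x}) caps the integral at 1 + 1/3; a crude trapezoidal check on a truncated interval:

```python
import math

def f(x):
    return math.cosh(x) / math.cosh(2 * x)

# trapezoidal rule on [0, 40]; the integrand decays like e^{-x},
# so the tail beyond 40 contributes a negligible amount
N, T = 100_000, 40.0
h = T / N
integral = h * (f(0) / 2 + sum(f(i * h) for i in range(1, N)) + f(T) / 2)

# dominating bound integrates to 1 + 1/3 on (0, inf)
assert integral < 1 + 1 / 3
```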
2,613,484
<p>Give an example of a vector space which has 125 elements. I don't know how proceed!!! Is there any technique about the field?? </p>
cansomeonehelpmeout
413,677
<p>$\frac{\mathbb{Z}_5[x]}{(x^3)}$ as a $\mathbb{Z}_5$-module, which is $\{a+bx+cx^2\mid a,b,c\in\mathbb{Z}_5\}$.</p> <p>This is the same as $$V=\left &lt; [a,b,c]\mid a,b,c\in\mathbb{Z}_5\right&gt;$$</p> <p>which has $5^3=125$ elements.</p>
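Enumerating the 125 elements and checking closure takes a few lines:

```python
from itertools import product

V = list(product(range(5), repeat=3))   # triples (a, b, c) with entries in Z_5
S = set(V)
assert len(V) == 5 ** 3 == 125

def add(u, v):
    return tuple((x + y) % 5 for x, y in zip(u, v))

def scale(c, v):
    return tuple((c * x) % 5 for x in v)

# closed under addition and under scalar multiplication by elements of Z_5
assert all(add(u, v) in S for u in V for v in V)
assert all(scale(c, v) in S for c in range(5) for v in V)
```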
2,553,284
<p>I know that $$\ln e^2=2$$ But what about this? $$(\ln e)^2$$ A calculator gave 1. I'm really confused.</p>
Gustavo Mezzovilla
396,214
<p>The natural log is the log with base $e$ (<em>Euler's number</em>, or <em>Napier's constant</em>). Therefore $$ \ln (x) = \log_e(x)$$ When you put $x=e$, we have $\ln(e)$, which is simply $1$. Therefore $\big(\ln(e)\big)^2=1$.</p>
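The two expressions are easy to confuse, but a one-liner tells them apart (floating-point arithmetic, hence the tolerances):

```python
import math

assert abs(math.log(math.e ** 2) - 2) < 1e-12   # ln(e^2) = 2
assert abs(math.log(math.e) ** 2 - 1) < 1e-12   # (ln e)^2 = 1^2 = 1
```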
157,497
<p>Let's suppose I have created a 3d image from grayscale images with:</p> <pre><code>image3d = Image3D[Table[readImage[i], {i, numberOfImages}]]; </code></pre> <p>and </p> <pre><code>image3dSlices = Image3DSlices[image3d] </code></pre> <p>To show the 3d image I can use:</p> <pre><code>image3d </code></pre> <p>or </p> <pre><code>Image3D[image3dSlices[[startImageNumber;;endImageNumber]]] </code></pre> <p>Is it somehow possible to convert the image data so that I could use <code>ListPlot3D</code> or <code>ListDensityPlot3D</code>? Please see also <a href="https://mathematica.stackexchange.com/questions/157419/image3d-plot-physical-axes-scales">here</a>: </p>
Michael E2
4,999
<p>The easiest way I've found is to apply the Fundamental Theorem of Calculus:</p> <pre><code>Integrate[Evaluate@D[Sum[Sin[n x]/n, {n, ∞}], x], {x, Pi, x}] </code></pre> <p>Another approach is to apply a trig. function, simplify, apply the inverse function, and simplify again. The problem here is that <a href="http://reference.wolfram.com/language/ref/ArcTan.html" rel="noreferrer"><code>ArcTan</code></a> and <a href="http://reference.wolfram.com/language/ref/ArcCot.html" rel="noreferrer"><code>ArcCot</code></a> both have ranges of <code>-Pi/2</code> to <code>Pi/2</code>, which are not suitable for the solution. So we can translate the solution to the appropriate range, and then translate back:</p> <pre><code>(FullSimplify[ Sum[Sin[n x]/n, {n, ∞}] /. x -&gt; Pi - u // Tan // TrigExpand ] // FullSimplify[ArcTan[#], -Pi &lt; u &lt; Pi] &amp;) /. u -&gt; Pi - x (* (π - x)/2 *) </code></pre> <p>There are other ways to tease out the answer, which have a mathematical fault even though they get the right answer. For instance:</p> <pre><code>Normal@Series[Sum[Sin[n x]/n, {n, ∞}], {x, Pi, 100}] Sum[Sin[n x]/n, {n, ∞}] // Cot // TrigExpand // FullSimplify[Pi/2 - ArcTan[#], -Pi &lt; x &lt; Pi] &amp; </code></pre>
157,497
<p>Let's suppose I have created a 3D image from grayscale images with:</p> <pre><code>image3d = Image3D[Table[readImage[i], {i, numberOfImages}]]; </code></pre> <p>and </p> <pre><code>image3dSlices = Image3DSlices[image3d] </code></pre> <p>To show the 3D image I can use:</p> <pre><code>image3d </code></pre> <p>or </p> <pre><code>Image3D[image3dSlices[[startImageNumber;;endImageNumber]]] </code></pre> <p>Is it somehow possible to convert the image data so that I could use <code>ListPlot3D</code> or <code>ListDensityPlot3D</code>? Please see also <a href="https://mathematica.stackexchange.com/questions/157419/image3d-plot-physical-axes-scales">here</a>: </p>
Alexei Boulbitch
788
<p>You may like to proceed as follows. Here is your expression:</p> <pre><code>expr1 = Simplify[Sum[Sin[n x]/n, {n, 1, \[Infinity]}], 0 &lt; x &lt; 2 \[Pi]] (* 1/2 I (-Log[1 - E^(-I x)] + Log[1 - E^(I x)]) *) </code></pre> <p>Mma does not collect logarithms by itself. I use for this purpose the function entitled collectLog:</p> <pre><code>collectLog[expr_] := Module[{rule1a, rule1b, rule2, g, a, b, x}, rule1a = Log[a_] + Log[b_] -&gt; Log[a*b]; rule1b = Log[a_] - Log[b_] -&gt; Log[a/b]; rule2 = x_*Log[a_] -&gt; Log[a^x]; g[x_] := x /. rule1a /. rule1b /. rule2; FixedPoint[g, expr] ]; </code></pre> <p>It should be applied to the second element of the expression. Further the resulting subexpression is worth simplifying:</p> <pre><code>expr2 = MapAt[Simplify[collectLog[#]] &amp;, expr1, {2}] (* 1/2 I Log[-E^(I x)] *) </code></pre> <p>Now Mma should be instructed that <code>-1==E^I*Pi</code> and that <code>Log[E^a]==a</code>:</p> <pre><code>expr2 /. -E^a_ -&gt; E^(a + I*\[Pi]) /. Log[E^a_] -&gt; a // Expand (* -(\[Pi]/2) - x/2 *) </code></pre> <p>Done.</p> <p>Have fun!</p>
244,214
<p>One major approach to the theory of forcing is to assume that ZFC has a countable <em>transitive</em> model $M \in V$ (where $V$ is the "real" universe). In this approach, one takes a poset $\mathbb{P} \in M$, uses the fact that $M$ is <em>countable</em> to prove that there exists a generic set $G \in V$, then defines $M[G]$ as an actual set inside $V$ and proves it is a model of ZFC.</p> <p>The downside to this approach is that a countable transitive model may not exist. For example, it is possible that $V = L$ and $V$ is a minimal model of ZFC, so that any smaller model $M \in V$ is non-standard. However, if we only want a <em>countable</em> model of ZFC, there is no problem. First, Gödel's completeness theorem shows that (assuming ZFC is consistent, of course!) there is some model $M_0 \in V$ of ZFC. Then, the Löwenheim-Skolem theorem guarantees that there is a elementary substructure $M \subseteq M_0$ which is countable in $V$. So $M$ is a countable model of ZFC, and therefore, there is a generic filter $G \in V$.</p> <p>Can we continue the proof of forcing along these lines? Of course, transitivity is convenient for many reasons (such as showing that various formulas are absolute), but is it <em>possible</em> to go without it? Perhaps we would need to modify the construction of the $\mathbb{P}$-names and $M[G]$ by only considering elements that are actually in $M$. </p> <p><strong>EDIT</strong> To be a bit more clear, I believe that forcing can be done without a countable model at all, using either the syntactic approach or an approach via Boolean-valued models. My question is more humble. The arguments of forcing are very intuitive when $M$ is a countable transitive model; why don't <em>the same</em> (up to relativizing formulas to $M$) arguments work when $M$ is just countable? </p>
Mohammad Golshani
11,115
<p>You may look at the paper ``<a href="https://ojs.victoria.ac.nz/ajl/article/view/1784">Forcing with non-wellfounded models</a>''. Australas. J. Log. 5 (2007), 20–57, by Paul Corazza.</p> <p>Here is the abstract of the paper:</p> <blockquote> <p>We develop the machinery for performing forcing over an arbitrary (possibly non-well-founded) model of set theory. For consistency results, this machinery is unnecessary since such results can always be legitimately obtained by assuming that the ground model is (countable) transitive. However, for establishing properties of a given (possibly non-well-founded) model, the fully developed machinery of forcing as a means to produce new related models can be useful. We develop forcing through iterated forcing, paralleling standard steps of presentation in [K. Kunen, Set theory, North-Holland, Amsterdam, 1980] and [T. J. Jech, in Handbook of Boolean algebras, Vol. 3, 1197–1211, North-Holland, Amsterdam, 1989].</p> </blockquote>
288,001
<p>Points $A$ and $B$ are given in the Poincaré disc model. Construct an equilateral triangle $ABC$. Any kind of help is welcome.</p>
MvG
35,416
<p>This very much depends on the geometric primitives you have at hand. If you know how to construct a circle given its midpoint and a point on the circle, then you can simply intersect the circle around $A$ through $B$ with the circle around $B$ through $A$. A circle in the Poincaré disc is a Euclidean circle, but the hyperbolic center doesn't usually coincide with the Euclidean one.</p> <p>If you had coordinates, you could compute the coordinates of the Euclidean center, but as you asked about a construction, I'll not take this computational approach.</p> <p>You would be better off if you were to construct a circle through three given points, as in that case the Euclidean and the Poincaré interpretations don't differ. Let's concentrate on the circle around $A$ through $B$ for now. If you know how to perform a point reflection in hyperbolic geometry, you can use the mirror image of $B$ in $A$ as the second point. But you'll still need a third point, which has to lie on some other line. I'd suggest you draw a different hyperbolic line through $A$, and then transfer the length $\lvert A,B\rvert$ to that line. You can define lengths in the disc model using cross ratios, and you can transfer cross ratios from one line to another using a sequence of perspectivities.</p> <p>The following illustration shows one possible way to construct a $B''$ such that $\lvert A,B\rvert=\lvert A,B''\rvert$. The second hyperbolic line (i.e. Euclidean circle) through $A$ was chosen arbitrarily. The rest of the construction is fixed. $A_{\text{inv}}$ is the image of $A$ under inversion in the unit circle. The fact that the line $AB'$ was chosen to go through $A_{\text{inv}}$ apparently ensures that $Q'$, $Q''$ and $C_2$ are collinear, so that this all works out. 
I only know this fact from experimentation, I don't know the name of the theorem behind it.</p> <p><img src="https://i.stack.imgur.com/UwTSE.png" alt="How to transfer cross ratio"></p> <p>There might be a shorter route, but this is the first thing that came to my mind.</p>
3,830,204
<p>Working through <em>Spivak's Calculus</em> and using old assignments from the course offered at my school I'm working on the following problem, asking me to find the integral <span class="math-container">$$\int \frac{1}{x^{2}+x+1} dx$$</span></p> <p>Looking through Spivak and previous exercises I worked on, I thought using a partial fraction decomposition would be the technique, but even in Spivak the only exercises I've seen which are similar involve:</p> <p><span class="math-container">$$\int \frac{1}{(x^{2}+x+1)^{n}} dx\ ,\text{where}\ n&gt; 1$$</span></p> <p>In which case it is pretty straightforward to solve. So there must be a reason why the exercise isn't presented unless it is so straightforward.</p> <p>Integration by parts and substitution (at least for now) have proven fruitless as well. So I come here to ask if I'm missing any special trick to compute this integral ?</p>
jacopoburelli
530,398
<p>Hint : <span class="math-container">$x^2+x+1 = (x+\frac{1}{2})^{2}+\frac{3}{4}$</span></p>
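<p>For completeness, carrying the hint through (this completion is mine, not part of the original hint; it is the standard arctangent form):</p>

```latex
\int \frac{dx}{x^{2}+x+1}
  = \int \frac{dx}{\left(x+\frac{1}{2}\right)^{2}+\frac{3}{4}}
  = \frac{2}{\sqrt{3}}\arctan\left(\frac{2x+1}{\sqrt{3}}\right)+C,
```

<p>using <span class="math-container">$\int \frac{du}{u^{2}+a^{2}}=\frac{1}{a}\arctan\frac{u}{a}+C$</span> with <span class="math-container">$u=x+\frac{1}{2}$</span> and <span class="math-container">$a=\frac{\sqrt{3}}{2}$</span>.</p>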
3,371,339
<p>How to show <span class="math-container">$Pr(X&gt;2E(X))\le 1/2$</span> given that <span class="math-container">$X$</span> is a continuous random variable and <span class="math-container">$P(X\le 0)=0$</span>? <span class="math-container">$E(X)$</span> here is the mean of <span class="math-container">$X$</span>.</p> <p>I started with the definition of <span class="math-container">$E(X)$</span> for the continuous case. Then, I broke the integral into the integral from -infinity to 0 + the integral from 0 to +infinity. Since <span class="math-container">$Pr(X\le 0)=0$</span>, the first term vanishes by using integration by parts. The second term will be greater than or equal to the integral from <span class="math-container">$0$</span> to <span class="math-container">$x$</span> for any <span class="math-container">$x$</span> in the open interval <span class="math-container">$(0,+\infty)$</span>. I got stuck here. I am trying to end up with an inequality relationship between <span class="math-container">$E(X)$</span> and the density probability function <span class="math-container">$f(x)$</span> so I can use that in the definition of <span class="math-container">$Pr(X&gt;2E(X))$</span>. Could anyone help me with this please?</p>
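<p>For reference, the standard route here (my note, not part of the original attempt) is Markov's inequality, of which this is a special case: for <span class="math-container">$a&gt;0$</span> and a continuous <span class="math-container">$X$</span> with <span class="math-container">$P(X\le 0)=0$</span>,</p>

```latex
E(X) = \int_0^\infty x f(x)\,dx
     \ \ge\ \int_a^\infty x f(x)\,dx
     \ \ge\ a\int_a^\infty f(x)\,dx
     = a\,\Pr(X > a),
```

<p>so <span class="math-container">$\Pr(X&gt;a)\le E(X)/a$</span>; taking <span class="math-container">$a=2E(X)$</span> (assuming <span class="math-container">$0&lt;E(X)&lt;\infty$</span>) gives <span class="math-container">$\Pr(X&gt;2E(X))\le 1/2$</span>.</p>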
Community
-1
<p>Suppose <span class="math-container">$f_n: X\to \mathbb{\bar{R}}$</span> for all <span class="math-container">$n\in \mathbb{N}$</span>. We can write <span class="math-container">$$ \{x\in X:\lim_{n\to\infty}f_n(x)\;\mathrm{exists}\}=E\cup E_{\infty}\cup E_{-\infty} $$</span> where <span class="math-container">$E_{\pm\infty}$</span> are the sets of <span class="math-container">$x$</span> such that the limit is equal to <span class="math-container">$\pm\infty$</span> and <span class="math-container">$E$</span> is the set of <span class="math-container">$x$</span> such that the limit exists and is finite. It suffices to show that each of these sets is measurable.</p> <p>Notice that <span class="math-container">$\limsup f_n$</span> and <span class="math-container">$\liminf f_n$</span> are measurable because each <span class="math-container">$f_n$</span> is measurable (proposition 2.7). Define <span class="math-container">$$g(x):= \limsup f_n(x)-\liminf f_n(x)$$</span> on the set where <span class="math-container">$\limsup f_n(x)$</span> and <span class="math-container">$\liminf f_n(x)$</span> are both finite. So <span class="math-container">$g$</span> is measurable. Thus the set <span class="math-container">$$E=\{x: \limsup f_n=\liminf f_n\notin\{\pm\infty\}\}$$</span> can be written as <span class="math-container">$g^{-1}(\{0\})$</span>, which is measurable.</p> <p>Now <span class="math-container">$\lim_{n\to\infty}f_n(x)=\infty$</span> if for all <span class="math-container">$M\geq 1$</span> there exists <span class="math-container">$N$</span> such that <span class="math-container">$f_n(x)\geq M$</span> for all <span class="math-container">$n\geq N$</span>, that is, <span class="math-container">$$ E_{\infty}=\bigcap_{M=1}^{\infty}\bigcup_{N=1}^{\infty}\bigcap_{n\geq N}\{x:f_n(x)\geq M\}$$</span> hence is measurable. 
A similar argument works for <span class="math-container">$E_{-\infty}$</span>, that is, <span class="math-container">$$ E_{-\infty}=\bigcap_{M=1}^{\infty}\bigcup_{N=1}^{\infty}\bigcap_{n\geq N}\{x:f_n(x)\leq -M\}$$</span></p> <p>Now suppose <span class="math-container">$f_n: X\to \mathbb{C}$</span> for all <span class="math-container">$n\in \mathbb{N}$</span>. Then <span class="math-container">$f_n(x)$</span> converges if and only if <span class="math-container">$Re(f_n(x))$</span> and <span class="math-container">$Im(f_n(x))$</span> both converge, that is, <span class="math-container">$$\{x: \lim f_n\text{ exists}\}=\{x: \lim Re(f_n)\text{ exists}\}\cap \{x: \lim Im(f_n)\text{ exists}\}$$</span> The two sets on the right are measurable by the previous argument, and hence so is their intersection. This gives the desired result.</p>
60,071
<p>Suppose we defined some mathematical object $P$, where $P$ is a natural number, a polynomial, an endofunction, a geometric figure, etc. What does the expression “$A$ is a set of $P$s” mean:</p> <ul> <li>Set inclusion) For all $a\in A$, $a$ is a $P$.</li> <li>Set equality) For all $a$, $a\in A$ iff $a$ is a $P$.</li> </ul> <p>If both are used, which is the most widespread one (which I can use on the Internet without explaining what I mean)?</p> <p>Update 0: What does the corresponding expression mean when translated into other languages? (Please answer for your native language.)</p>
Ilmari Karonen
9,602
<p>I'd say "$A$ is <em>a</em> set of $P$s" for the first, and "$A$ is <em>the</em> set of (all) $P$s" for the second.</p>
60,071
<p>Suppose we defined some mathematical object $P$, where $P$ is a natural number, a polynomial, an endofunction, a geometric figure, etc. What does the expression “$A$ is a set of $P$s” mean:</p> <ul> <li>Set inclusion) For all $a\in A$, $a$ is a $P$.</li> <li>Set equality) For all $a$, $a\in A$ iff $a$ is a $P$.</li> </ul> <p>If both are used, which is the most widespread one (which I can use on the Internet without explaining what I mean)?</p> <p>Update 0: What does the corresponding expression mean when translated into other languages? (Please answer for your native language.)</p>
Steven Alexis Gregory
75,410
<p>If you want set inclusion, you should say A is a set of P's.</p> <p>If you want set equality, you should say A is the set of all P's.</p> <p>The words "a" and "the" (most often) have very specific meanings in mathematics.</p>
9,335
<p>How to prove $\limsup(\{A_n \cup B_n\}) = \limsup(\{A_n\}) \cup \limsup(\{B_n\})$? Thanks!</p>
Arturo Magidin
742
<p>Use the definition, and double inclusion; that is, show that every element of $\limsup(A\cup B)$ must be either an element of $\limsup(A)$ or of $\limsup(B)$; then show that every element of $\limsup(A)$ must be in $\limsup(A\cup B)$ and that every element of $\limsup(B)$ must be in $\limsup(A\cup B)$.</p> <p>Of course, one must assume that you mean your "$A$" to be a sequence of sets and your "$B$" to likewise be a sequence of sets... Otherwise, what you write does not really make much sense (limit superior and limit inferior of a single set are not usually defined).</p>
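<p>Concretely, the element-chasing can be organized as follows (a sketch of mine along the lines suggested, writing <span class="math-container">$\limsup_n A_n$</span> for the set of points lying in infinitely many <span class="math-container">$A_n$</span>):</p>

```latex
x \in \limsup_n (A_n \cup B_n)
\iff x \in A_n \cup B_n \text{ for infinitely many } n
\iff x \in A_n \text{ for infinitely many } n
      \ \text{ or }\ x \in B_n \text{ for infinitely many } n
\iff x \in \limsup_n A_n \,\cup\, \limsup_n B_n .
```

<p>The only nontrivial step is the forward direction of the middle equivalence, which is a pigeonhole argument: an infinite set of indices split into two classes must contain at least one infinite class.</p>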
267,971
<p>I want the inside of an integral to be evaluated after a replacement is made inside it, while keeping the integral itself unevaluated.</p> <p>I start with:</p> <pre><code>int=HoldForm[Integrate[x^n/(x + 1)^(n + 1), {x, 0, 1}]] </code></pre> <p>Output as desired: <span class="math-container">$$\int_0^1 \frac{x^n}{(x+1)^{n+1}} \, dx$$</span></p> <p>When I replace <code>n</code> with some number I get output as expected:</p> <pre><code>int /. n -&gt; 3 </code></pre> <p><span class="math-container">$$\int_0^1 \frac{x^3}{(x+1)^{3+1}} \, dx$$</span></p> <p>But then I want to evaluate the inside of the integral and keep the integral itself unevaluated.</p> <p>So I tried instead:</p> <pre><code>int = HoldForm[Integrate[Evaluate[x^n/(x + 1)^(n + 1)], {x, 0, 1}]] </code></pre> <p><span class="math-container">$$\int_0^1 \text{Evaluate}\left[\frac{x^n}{(x+1)^{n+1}}\right] \, dx$$</span></p> <pre><code>int /. n -&gt; 3 </code></pre> <p>Output not as I wanted: <span class="math-container">$$\int_0^1 \text{Evaluate}\left[\frac{x^3}{(x+1)^{3+1}}\right] \, dx$$</span></p> <p>I wanted: <span class="math-container">$$\int_0^1 \frac{x^3}{(x+1)^{4}} \, dx$$</span></p> <p>Any ideas how to do it?</p>
Lukas Lang
36,508
<p>This is exactly what <a href="https://reference.wolfram.com/language/ref/Inactivate.html" rel="nofollow noreferrer"><code>Inactivate</code></a> was designed for:</p> <pre><code>int = Inactivate[Integrate[x^n/(x + 1)^(n + 1), {x, 0, 1}], Integrate] </code></pre> <p><a href="https://i.stack.imgur.com/0nWLX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0nWLX.png" alt="enter image description here" /></a></p> <pre><code>int /. n -&gt; 3 </code></pre> <p><a href="https://i.stack.imgur.com/2yM95.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2yM95.png" alt="enter image description here" /></a></p> <p>(Notice also the light shading of the integral sign and the d to indicate the inactivation)</p>
1,993,693
<blockquote> <p>$$\lim_{x \rightarrow +\infty} \frac{2^x}{x}$$ $$\lim_{x \rightarrow \infty} \frac{x^{50}}{e^x}$$</p> </blockquote> <p>I don't really know how to solve this.</p> <p>As for the first one, I know that $\lim_{x \rightarrow \infty} a^x=0$ , I supposed that helps...?</p> <p>How do I solve these (preferably analytically, but I'll also accept otherwise)?</p>
Eff
112,061
<p>You say that $\lim_{x\to\infty} a^x = 0$; however, this is only true if $|a| &lt; 1$, and hence not true for $2^x$ or $e^x$. </p> <p>With that said, you can solve both of these limits if you know that exponentials eventually dominate any polynomial (if the base of the exponential is larger than 1). Let $p$ be a polynomial and $a&gt;1$; then $$\lim\limits_{x\to\infty} \frac{p(x)}{a^x} = 0 \quad\quad \lim\limits_{x\to\infty} \frac{a^x}{p(x)} = \pm \infty,$$ where the sign in the last limit is the same as the sign of the term with the highest degree in the polynomial.</p> <p>You can now solve the limits, remembering that both $2^x$ and $e^x$ are exponentials with base larger than 1, i.e. $2&gt;1$ and $e&gt;1$, and $x$ and $x^{50}$ are polynomials.</p>
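<p>A quick numeric illustration of the domination (a sketch of mine, Python assumed; logarithms are compared so that $e^x$ never overflows):</p>

```python
import math

# log(x^50 / e^x) = 50*log(x) - x; if this tends to -infinity,
# then x^50 / e^x tends to 0.
def log_ratio(x):
    return 50 * math.log(x) - x

# The polynomial wins at first, but the exponential takes over for good.
print([round(log_ratio(x), 1) for x in (10, 100, 1000, 10000)])
```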
404,472
<p>Let $F$ and $F′$ be two finite fields with nine and four elements respectively. How many field homomorphisms are there from $F$ to $F′$?</p>
pankaj kumar
160,044
<p>Recall that any finite field has a finite characteristic--- a smallest positive integer $n$ with the property that for all $x$ in the field, $x + x + ... + x \;(n \text{ times}) = 0$. One can show from the field axioms that such an $n$ must be prime, and a divisor of the number of elements in the field.</p> <p>So here we see that $F$ must have characteristic $3$ and $F^{\prime}$ must have characteristic $2$. So let $f$ be any map from $F$ to $F^{\prime}$ satisfying</p> <p>$f(x+y) = f(x) + f(y) \quad \forall x , y \in F\tag{1}$ and $f(xy) = f(x) f(y) \quad \forall x , y \in F \tag{2}$</p> <p>It is easy to show that property $(1)$ implies that $f(0) = 0$. So</p> <p>$0 = f(0) = f(1+1+1)$ [... since $1+1+1 = 0$ in $F$, since $F$ has characteristic $3$] $= f(1) + f(1) + f(1)$ [... by property $(1)$]</p> <p>$= 0 + f(1)$ [... $f(1) + f(1) = 0$ in $F^{\prime}$ since $F^{\prime}$ has characteristic $2$]</p> <p>$= f(1)$ [... this is a fundamental property of 0)</p> <p>and hence for any $x \in F$ we have</p> <p>$f(x) = f(x*1)$ [... this is what $1$ does in any field]</p> <p>$= f(x) f(1)$ [... by property $(2)$]</p> <p>$= f(x) 0$ [... as we just showed f(1) = 0]</p> <p>$= 0$ [... an elementary consequence of the field axioms is that $y*0 = 0$ for any $y$ in any field].</p> <p>We conclude that any map from $F$ to $F^{\prime}$ satisfying $(1)$ and $(2)$ must satisfy $f(x) = 0$ for all $x$. It is easy to check that the map given by this formula does satisfy $(1)$ and $(2)$.</p> <p>Now: the term "field homomorphism" is usually defined so that not only must $(1)$ and $(2)$ be satisfied above but that $1$ must be mapped to $1$. So we have shown that by this definition there <em>are</em> no field homomorphisms whatsoever from $F$ to $F^{\prime}$ and the answer to your question is zero. </p>
4,338,285
<p>I have been thinking about the problem of finding the sum of the first $n$ squares for a long time and now I have an idea how to do it. However, the second step of this technique looks suspicious.</p> <ol> <li><p><span class="math-container">$$\sum_{i=1}^n i = \frac{n^2+n}{2}$$</span></p> </li> <li><p><span class="math-container">$$\int\sum_{i=1}^{n}idi=\int\frac{\left(n^{2}+n\right)}{2}dn$$</span></p> </li> <li><p><span class="math-container">$$\sum_{i=1}^{n}\left(\frac{i^{2}}{2}+C_{1}\right)=\left(\frac{n^{3}}{3}+\frac{n^{2}}{2}\right)\cdot\frac{1}{2}+C_{0}$$</span></p> </li> <li><p><span class="math-container">$$\sum_{i=1}^{n}i^{2}=\frac{n^{3}}{3}+\frac{n^{2}}{2}-2nC_{1}+2C_{0} $$</span></p> </li> <li><p>Assume <span class="math-container">$C_{0}=0$</span>. Next, we are going to find the constant <span class="math-container">$C_{1}$</span>.</p> </li> <li><p>From step 4, we can conclude that: <span class="math-container">$C_{1}=\frac{n^{2}}{6}+\frac{n}{4}-\sum_{i=1}^{n}\frac{i^{2}}{2n}$</span>. We can fix <span class="math-container">$n$</span> at any value; it is most convenient to take <span class="math-container">$n=1$</span>, which gives <span class="math-container">$C_{1}=-\frac{1}{12}$</span>.</p> </li> <li><p><span class="math-container">$$\sum_{i=1}^{n}i^{2}=\frac{n^{3}}{3}+\frac{n^{2}}{2}+\frac{n}{6}$$</span></p> </li> </ol> <p>Using the induction method, we can prove the correctness of this formula and that the value of the constant <span class="math-container">$C_{0}$</span> is really zero. But I created this question because the second step looks very strange, since the left-hand side was integrated with the differential <span class="math-container">$di$</span> and the right-hand side with <span class="math-container">$dn$</span>. 
If we assume that the second step is wrong, then why did we get the correct formula for the sum of the first squares?</p> <p>Note: This integration-based technique is really interesting to me; using the same reasoning we can get the formula for the first cubes, and so on.</p> <p><strong>EDIT1</strong></p> <p>According to @DatBoi's comment, we can calculate constants <span class="math-container">$C_{0}$</span> and <span class="math-container">$C_{1}$</span> by solving a system of linear equations. The desired system must contain two equations, since we have two unknown values (<span class="math-container">$C_{0}$</span> and <span class="math-container">$C_{1}$</span>). To achieve this, we need to use the right part of the statement from step 4 twice, for two different <span class="math-container">$n$</span>. For simplicity, let's take <span class="math-container">$n=1$</span> for the first equation and <span class="math-container">$n=2$</span> for the second; the sums of squares for these <span class="math-container">$n$</span> are 1 and 5, respectively.</p> <ol> <li>The main system <span class="math-container">$$ \left\{ \begin{array}{c} \frac{1}{3}+\frac{1}{2}-2C_{1}+2C_{0}=1 \\ \frac{8}{3}+\frac{4}{2}-4C_{1}+2C_{0}=5 \\ \end{array} \right. $$</span></li> <li>After simplification <span class="math-container">$$ \left\{ \begin{array}{c} \ C_{0}-C_{1}=\frac{1}{12} \\ \ C_{0}-2C_{1}=\frac{1}{6} \\ \end{array} \right. $$</span></li> <li>Roots: <span class="math-container">$C_{0}=0$</span> and <span class="math-container">$C_{1}=-\frac{1}{12}$</span></li> </ol> <p><strong>EDIT2</strong></p> <p>Considering @epi163sqrt's answer, the second step should be changed and it will take this form:</p> <ol start="2"> <li><span class="math-container">$$\sum_{i=1}^{n}\int_{ }^{ }idi=\int_{}^{}\frac{\left(n^{2}+n\right)}{2}dn$$</span></li> </ol> <p><em>My hypothesis</em>. 
If we have: <span class="math-container">$$\sum_{i=1}^{n}i^{p}=f\left(n,p\right)$$</span> where <span class="math-container">$f$</span> is a closed form for the summation, then this should be true for any natural degree</p> <p><span class="math-container">$$\sum_{i=1}^{n}\int_{}^{}i^{p}di=\int_{}^{}f\left(n,p\right)dn\ \to\ \sum_{i=1}^{n}\frac{i^{\left(p+1\right)}}{p+1}=\int_{}^{}f\left(n,p\right)dn-nC_{1}$$</span> Can you prove or disprove this hypothesis? My questions above are no longer relevant.</p> <p><strong>EDIT3. Time for fun. Let's try to get a formula for summing the first cubes</strong></p> <ol> <li><p><span class="math-container">$$\sum_{i=1}^{n}i^{2}=\frac{n^{3}}{3}+\frac{n^{2}}{2}+\frac{n}{6}$$</span></p> </li> <li><p><span class="math-container">$$\sum_{i=1}^{n}\int_{ }^{ }i^{2}di=\int_{ }^{ }\left(\frac{n^{3}}{3}+\frac{n^{2}}{2}+\frac{n}{6}\right)dn$$</span></p> </li> <li><p><span class="math-container">$$\sum_{i=1}^{n}\frac{i^{3}}{3}=\frac{n^{4}}{12}+\frac{n^{3}}{6}+\frac{n^{2}}{12}-nC_{1}+C_{0}$$</span></p> </li> <li><p><span class="math-container">$$ \left\{ \begin{array}{c} \frac{1}{4}+\frac{1}{2}+\frac{1}{4}-3C_{1}+3C_{0}=1 \\ \frac{16}{4}+\frac{8}{2}+\frac{4}{4}-6C_{1}+3C_{0}=9 \\ \end{array} \right. $$</span> Roots: <span class="math-container">$C_{0}=0$</span> and <span class="math-container">$C_{1}=0$</span></p> </li> <li><p><span class="math-container">$$\sum_{i=1}^{n}i^{3}=\frac{n^{4}}{4}+\frac{n^{3}}{2}+\frac{n^{2}}{4}$$</span></p> </li> </ol> <p><strong>GREAT EDIT4 19.01.2022</strong></p> <p>So far I have no proof; however, the calculation of the constants (<span class="math-container">$C_{0}$</span> and <span class="math-container">$C_{1}$</span>) can be significantly simplified by changing the lower index of summation to 0.</p> <p>1b. Let <span class="math-container">$M_{p}(n)$</span> be a closed form for the summation with degree <span class="math-container">$p$</span>, i.e., 
<span class="math-container">$$\sum_{i=0}^{n}i^{p}=M_{p}\left(n\right)$$</span></p> <p>2b. Now let's assume that the statement written below is true: <span class="math-container">$$\sum_{i=0}^{n}\int_{ }^{ }i^{p}di=\int_{ }^{ }M_{p}\left(n\right)dn$$</span></p> <p>3b. For now, we'll just take the integrals. <span class="math-container">$$\sum_{i=0}^{n}\left(\frac{i^{p+1}}{p+1}+C_{1}\right)=\int_{ }^{ }M_{p}\left(n\right)dn$$</span></p> <p>4b. Now let's express the sum explicitly. Also, we will move the <span class="math-container">$C_{1}$</span> without changing its sign; this is valid, since multiplying a constant by $(-1)$ just gives another constant. <span class="math-container">$$\sum_{i=0}^{n}i^{p+1}=\left(\int_{ }^{ }M_{p}\left(n\right)dn+nC_{1}\right)\left(p+1\right)$$</span></p> <p>5b. So we get the recurrence: <span class="math-container">$$M_{p}(n) = \left(\int_{ }^{ }M_{p-1}\left(n\right)dn+nC_{p}\right)p$$</span> <span class="math-container">$$M_{0}(n) = n+1$$</span></p> <p>6b. Now we have to build and resolve a system for two unknown constants. Therefore, the number of equations is two; we take <span class="math-container">$n=0$</span> and <span class="math-container">$n=1$</span>: <span class="math-container">$$ \left\{ \begin{array}{c} M_{p}(0)=0 \\ M_{p}(1)=1 \end{array} \right. $$</span> 7b. As I said, we have two constants. In order to see this, we will add a new definition for <span class="math-container">$W_{p-1}(n)$</span> that satisfies the following expression: <span class="math-container">$\int_{ }^{ }M_{p-1}\left(n\right)dn=W_{p-1}\left(n\right)+C_{-p}$</span>. <span class="math-container">$$ \left\{ \begin{array}{c} \left(W_{p-1}\left(0\right)+C_{-p}+0C_{p}\right)p=0 \\ \left(W_{p-1}\left(1\right)+C_{-p}+1C_{p}\right)p=1 \end{array} \right. $$</span></p> <p>8b. I will skip the formal proof of the fact, but the intuition is that <span class="math-container">$W_{p}(n)$</span> is a polynomial that does not have a constant term. 
Therefore, we can safely know that <span class="math-container">$W_{p}(0)=0$</span>. Let's rewrite and simplify the system:</p> <p>8b.1. <span class="math-container">$$ \left\{ \begin{array}{c} \left(C_{-p}\right)p=0 \\ \left(W_{p-1}\left(1\right)+C_{-p}+C_{p}\right)p=1 \end{array} \right. $$</span></p> <p>8b.2. <span class="math-container">$$ \left\{ \begin{array}{c} C_{-p}=0 \\ \left(W_{p-1}\left(1\right)+C_{p}\right)p=1 \end{array} \right. $$</span></p> <p>8b.3. <span class="math-container">$$ C_{p}=\frac{1}{p}-W_{p-1}\left(1\right) $$</span></p> <p>9b. We have completed the study of the constants. The last step is to put everything together. <span class="math-container">$$ M_{p}\left(n\right)=p\left(\left(\int_{ }^{ }M_{p-1}\left(n\right)dn\right)_{n}-n\left(\int_{ }^{ }M_{p-1}\left(n\right)dn\right)_{1}\right)+n $$</span> <span class="math-container">$$M_{0}(n) = n+1$$</span></p> <p>10b. (New step 29.04.2022) The previous step was not recorded correctly. I will also proceed to the calculation of definite integrals: <span class="math-container">$$ M_{p}(n) = \begin{cases} n+1, &amp; \text{if $p$ is zero } \\ p\int_{0}^{n}M_{p-1}\left(t\right)dt-np\int_{0}^{1}M_{p-1}\left(t\right)dt+n, &amp; \text{otherwise} \end{cases} $$</span></p>
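<p>The recurrence in step 10b can be sanity-checked with exact rational arithmetic. A sketch (my addition, not part of the question; polynomials are stored as coefficient lists):</p>

```python
from fractions import Fraction as F

# A polynomial is a coefficient list [c0, c1, c2, ...] meaning c0 + c1*n + c2*n^2 + ...

def antiderivative(poly):
    """Coefficients of Int_0^n poly(t) dt (zero constant term)."""
    return [F(0)] + [c / (k + 1) for k, c in enumerate(poly)]

def evaluate(poly, x):
    return sum(c * x ** k for k, c in enumerate(poly))

def M(p):
    """Step 10b: M_0(n) = n + 1, and for p >= 1
    M_p(n) = p*Int_0^n M_{p-1}(t) dt - n*p*Int_0^1 M_{p-1}(t) dt + n."""
    if p == 0:
        return [F(1), F(1)]                   # n + 1
    anti = antiderivative(M(p - 1))           # Int_0^n M_{p-1}(t) dt
    c = evaluate(anti, F(1))                  # Int_0^1 M_{p-1}(t) dt
    out = [p * coeff for coeff in anti]
    out[1] += -p * c + 1                      # the -n*p*(...) + n part
    return out

# M_2 should reproduce n/6 + n^2/2 + n^3/3, the classical closed form
assert M(2) == [F(0), F(1, 6), F(1, 2), F(1, 3)]
```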
epi163sqrt
132,007
<p>Here we look at steps (1) and (2) and we will see that the left-hand side of (2) needs to be revised somewhat. We start with the identity <span class="math-container">\begin{align*} \sum_{i=1}^n i=\frac{n^2+n}{2}\tag{0} \end{align*}</span></p> <p><strong>Step 1.:</strong></p> <blockquote> <p>We consider the left-hand side of (0) as function in <span class="math-container">$n$</span> and define the function <span class="math-container">\begin{align*} &amp;f:\mathbb{N}\to\mathbb{R}\tag{1.1}\\ &amp;f(n)=\sum_{i=1}^n i \end{align*}</span></p> </blockquote> <p>We observe (1.1) defines a function in the variable <span class="math-container">$n$</span>. The index <span class="math-container">$i$</span> is an index variable with validity defined by the <em>scope</em> of the sigma symbol <span class="math-container">$\sum$</span>. We know the sum formula (0) and can simplify the function by writing it as closed form.</p> <blockquote> <p><span class="math-container">\begin{align*} f(n)=\sum_{i=1}^n i=\frac{n^2+n}{2}\tag{1.2} \end{align*}</span></p> </blockquote> <p>Since <span class="math-container">$f$</span> is a polynomial function we can integrate it.</p> <p><strong>Step 2.:</strong></p> <blockquote> <p>We obtain from (1.2) by integrating <span class="math-container">$f$</span> <span class="math-container">\begin{align*} \int f(n)\,dn=\int \sum_{i=1}^n i\,dn = \int \frac{n^2+n}{2}\,dn = \left(\frac{n^3}{3}+\frac{n^2}{2}\right)\frac{1}{2}+C_0\tag{2.1} \end{align*}</span></p> </blockquote> <p>Note the integration in (2.1) is written using <span class="math-container">$n$</span> as integration variable: <span class="math-container">\begin{align*} \color{blue}{\int} \sum_{i=1}^n i\,\color{blue}{dn} \end{align*}</span> and this can be used for further considerations.</p> <p><strong>Note:</strong> When writing the expression instead in the form <span class="math-container">\begin{align*} \color{blue}{\int}\left(\sum_{i=1}^{n}i\right)\color{blue}{di}\tag{3} 
\end{align*}</span> we use the symbol <span class="math-container">$i$</span> for <em>two different</em> variables. The scope of the index variable <span class="math-container">$i$</span> is restricted to scope of the sigma symbol indicated by parentheses. The integration variable <span class="math-container">$i$</span> is a different symbol than the index variable <span class="math-container">$i$</span> and also independent of <span class="math-container">$n$</span>. We can use instead <span class="math-container">$x$</span> as integration variable and (3) can be written as <span class="math-container">\begin{align*} \color{blue}{\int}\left(\sum_{i=1}^{n}i\right)\color{blue}{di}&amp;=\int \sum_{i=1}^n i\,dx\\ &amp;=\int\,dx \sum_{i=1}^n i\\ &amp;=\left(x+C_1\right)\frac{n^2+n}{2} \end{align*}</span> which is not what we intend.</p> <p><strong>Conclusion:</strong> We can stick at (2.1) and can use it as basis for further calculations.</p> <p><strong>Hint:</strong> There is a famous relationship between sums and <em>Riemann</em> integrals known as the <em><a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler - MacLaurin summation formula</a></em> which gives for Riemann-integrable functions</p> <p><span class="math-container">\begin{align*} \sum_{i=1}^nf(i)=\int_{0}^n f(x)\,dx+\frac{f(n)-f(0)}{2}+\sum_{k=1}^{\left\lfloor\frac{p}{2}\right\rfloor}\frac{B_{2k}}{(2k)!}\left(f^{(2k-1)}(n)-f^{(2k-1)}(0)\right)+R_p\tag{4} \end{align*}</span></p> <p><span class="math-container">$B_k$</span> are the <em><a href="https://en.wikipedia.org/wiki/Bernoulli_number" rel="nofollow noreferrer">Bernoulli numbers</a></em> and <span class="math-container">$R_p$</span> is a remainder term. 
In case <span class="math-container">$f$</span> is a polynomial the remainder <span class="math-container">$R_p$</span> vanishes if <span class="math-container">$p$</span> is big enough.</p> <blockquote> <p>In the current case <span class="math-container">$\sum_{i=1}^n i^2$</span> we can set <span class="math-container">$p=2$</span> and we obtain from (4) with <span class="math-container">$f(x)=x^2$</span> <span class="math-container">\begin{align*} \color{blue}{\sum_{i=1}^n i^2}&amp;=\int_{0}^nx^2\,dx+\frac{n^2-0}{2}+\frac{B_2}{2}\left(2n-0\right)\\ &amp;\,\,\color{blue}{=\frac{1}{3}n^3+\frac{1}{2}n^2+\frac{1}{6}n} \end{align*}</span></p> </blockquote>
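<p>The closed form in the highlighted result is easy to spot-check numerically (a quick sketch of mine, not part of the original answer):</p>

```python
from fractions import Fraction

def closed_form(n):
    """n^3/3 + n^2/2 + n/6, computed exactly with rationals."""
    n = Fraction(n)
    return n**3 / 3 + n**2 / 2 + n / 6

# Compare against the brute-force sum of squares for a few values of n
for n in (1, 2, 10, 100, 1000):
    assert closed_form(n) == sum(i * i for i in range(1, n + 1))
```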
4,338,285
<p>I have been thinking about the problem of finding the sum of the first squares for a long time and now I have an idea how to do it. However, the second step of this technique looks suspicious.</p> <ol> <li><p><span class="math-container">$$\sum_{i=1}^n i = \frac{n^2+n}{2}$$</span></p> </li> <li><p><span class="math-container">$$\int\sum_{i=1}^{n}idi=\int\frac{\left(n^{2}+n\right)}{2}dn$$</span></p> </li> <li><p><span class="math-container">$$\sum_{i=1}^{n}\left(\frac{i^{2}}{2}+C_{1}\right)=\left(\frac{n^{3}}{3}+\frac{n^{2}}{2}\right)\cdot\frac{1}{2}+C_{0}$$</span></p> </li> <li><p><span class="math-container">$$\sum_{i=1}^{n}i^{2}=\frac{n^{3}}{3}+\frac{n^{2}}{2}-2nC_{1}+2C_{0} $$</span></p> </li> <li><p>Assuming <span class="math-container">$C_{0}=0$</span>. Next, we are going to find the constant <span class="math-container">$C_{1}$</span></p> </li> <li><p>From step 4, we can conclude that: <span class="math-container">$C_{1}=\frac{n^{2}}{6}+\frac{n}{4}-\sum_{i=1}^{n}\frac{i^{2}}{2n}$</span>. We can fix <span class="math-container">$n$</span>, at any value, it is more convenient to take one(<span class="math-container">$n=1$</span>) then <span class="math-container">$C_{1}=-\frac{1}{12}$</span></p> </li> <li><p><span class="math-container">$$\sum_{i=1}^{n}i^{2}=\frac{n^{3}}{3}+\frac{n^{2}}{2}+\frac{n}{6}$$</span></p> </li> </ol> <p>Using the induction method, we can prove the correctness of this formula and that the value of the constant <span class="math-container">$C_{0}$</span> is really zero. But I created this question because the second step looks very strange, since the left part was multiplied by differential <span class="math-container">$di$</span>, and the right by <span class="math-container">$dn$</span>. 
If we assume that the second step is wrong, then why did we get the correct formula of summation of first squares?</p> <p>Note: The technique shown based on the integrated one is really interesting for me, using the same reasoning we can get the formula of the first cubes and so on</p> <p><strong>EDIT1</strong></p> <p>According to @DatBoi's comment, we can calculate constants <span class="math-container">$C_{0}$</span> and <span class="math-container">$C_{1}$</span> by solving a system of linear equations. The desired system must contain two equations, since we have two unknown values(<span class="math-container">$C_{0}$</span> and <span class="math-container">$C_{1}$</span>). To achieve this, we need to use the right part of the statement from step 4 twice, for two different n. For simplicity, let's take <span class="math-container">$n=1$</span> for first equation and <span class="math-container">$n=2$</span> for second equation, then the sum of the squares for these <span class="math-container">$n$</span> is 1 and 5, respectively.</p> <ol> <li>The main system <span class="math-container">$$ \left\{ \begin{array}{c} \frac{1}{3}+\frac{1}{2}-2C_{1}+2C_{0}=1 \\ \frac{8}{3}+\frac{4}{2}-4C_{1}+2C_{0}=5 \\ \end{array} \right. $$</span></li> <li>After simplification <span class="math-container">$$ \left\{ \begin{array}{c} \ C_{0}-C_{1}=\frac{1}{12} \\ \ C_{0}-2C_{1}=\frac{1}{6} \\ \end{array} \right. $$</span></li> <li>Roots: <span class="math-container">$C_{0}=0$</span> and <span class="math-container">$C_{1}=-\frac{1}{12}$</span></li> </ol> <p><strong>EDIT2</strong></p> <p>Considering @epi163sqrt's answer, the second step should be changed and it will take this form:</p> <ol start="2"> <li><span class="math-container">$$\sum_{i=1}^{n}\int_{ }^{ }idi=\int_{}^{}\frac{\left(n^{2}+n\right)}{2}dn$$</span></li> </ol> <p><em>My hypothesis</em>. 
If we have: <span class="math-container">$$\sum_{i=1}^{n}i^{p}=f\left(n,p\right)$$</span> Where <span class="math-container">$f$</span> is a closed form for summation, then this should be true for any natural degree</p> <p><span class="math-container">$$\sum_{i=1}^{n}\int_{}^{}i^{p}di=\int_{}^{}f\left(n,p\right)dn\ \to\ \sum_{i=1}^{n}\frac{i^{\left(p+1\right)}}{p+1}=\int_{}^{}f\left(n,p\right)dn-nC_{1}$$</span> Can you prove or disprove this hypothesis? My questions above are no longer relevant</p> <p><strong>EDIT3. Time for fun. Let's try to get a formula for summing the first cubes</strong></p> <ol> <li><p><span class="math-container">$$\sum_{i=1}^{n}i^{2}=\frac{n^{3}}{3}+\frac{n^{2}}{2}+\frac{n}{6}$$</span></p> </li> <li><p><span class="math-container">$$\sum_{i=1}^{n}\int_{ }^{ }i^{2}di=\int_{ }^{ }\frac{n^{3}}{3}+\frac{n^{2}}{2}+\frac{n}{6}dn$$</span></p> </li> <li><p><span class="math-container">$$\sum_{i=1}^{n}\frac{i^{3}}{3}=\frac{n^{4}}{12}+\frac{n^{3}}{6}+\frac{n^{2}}{12}-nC_{1}+C_{0}$$</span></p> </li> <li><p><span class="math-container">$$ \left\{ \begin{array}{c} \frac{1}{4}+\frac{1}{2}+\frac{1}{4}-3C_{1}+3C_{0}=1 \\ \frac{16}{4}+\frac{8}{2}+\frac{4}{4}-6C_{1}+3C_{0}=9 \\ \end{array} \right. $$</span> Roots: <span class="math-container">$C_{0}=0$</span> and <span class="math-container">$C_{1}=0$</span></p> </li> <li><p><span class="math-container">$$\sum_{i=1}^{n}i^{3}=\frac{n^{4}}{4}+\frac{n^{3}}{2}+\frac{n^{2}}{4}$$</span></p> </li> </ol> <p><strong>GREAT EDIT4 19.01.2022</strong></p> <p>So far I have no proof, however, the calculation of constants(<span class="math-container">$C_{0}$</span> and <span class="math-container">$C_{1}$</span>) can be significantly simplified by changing the lower index of summation to 0.</p> <p>1b. Let <span class="math-container">$M_{p}(n)$</span> be a closed form to obtain the summation, with degree of <span class="math-container">$p$</span>. I. e. 
<span class="math-container">$$\sum_{i=0}^{n}i^{p}=M_{p}\left(n\right)$$</span></p> <p>2b. Now let's assume that the statement written below is true <span class="math-container">$$\sum_{i=0}^{n}\int_{ }^{ }i^{p}di=\int_{ }^{ }M_{p}\left(n\right)dn$$</span></p> <p>3b. For now, we'll just take the integrals. <span class="math-container">$$\sum_{i=0}^{n}\left(\frac{i^{p+1}}{p+1}+C_{1}\right)=\int_{ }^{ }M_{p}\left(n\right)dn$$</span></p> <p>4b. Now let's express the sum explicitly. Also, we will move the <span class="math-container">$C_{1}$</span> without changing its sign, this is a valid action, since multiplying the constant by (-1) leads to another constant <span class="math-container">$$\sum_{i=0}^{n}i^{p+1}=\left(\int_{ }^{ }M_{p}\left(n\right)dn+nC_{1}\right)\left(p+1\right)$$</span></p> <p>5b. So we got the recurrent formula: <span class="math-container">$$M_{p}(n) = \left(\int_{ }^{ }M_{p-1}\left(n\right)dn+nC_{p}\right)p$$</span> <span class="math-container">$$M_{0}(n) = n+1$$</span></p> <p>6b. Now we have to build and resolve a system for two unknown constants. Therefore, the number of equations is two, we are also going to take n=0 and n=1: <span class="math-container">$$ \left\{ \begin{array}{c} M_{p}(0)=0 \\ M_{p}(1)=1 \end{array} \right. $$</span> 7b. As I said, we have two constants. In order to see this, we will add a new definition for <span class="math-container">$W_{p-1}(n)$</span> that satisfies the following expression: <span class="math-container">$\int_{ }^{ }M_{p-1}\left(n\right)dn=W_{p-1}\left(n\right)+C_{-p}$</span>. <span class="math-container">$$ \left\{ \begin{array}{c} \left(W_{p-1}\left(0\right)+C_{-p}+0C_{p}\right)p=0 \\ \left(W_{p-1}\left(1\right)+C_{-p}+1C_{p}\right)p=1 \end{array} \right. $$</span></p> <p>8b. I will skip the formal proof of the fact, but the intuition is that <span class="math-container">$W_{p}(n)$</span> is a polynomial that does not have a constant term. 
Therefore, we can safely know that <span class="math-container">$W_{p}(0)=0$</span>. let's rewrite and simplify the system:</p> <p>8b.1. <span class="math-container">$$ \left\{ \begin{array}{c} \left(C_{-p}\right)p=0 \\ \left(W_{p-1}\left(1\right)+C_{-p}+C_{p}\right)p=1 \end{array} \right. $$</span></p> <p>8b.2. <span class="math-container">$$ \left\{ \begin{array}{c} C_{-p}=0 \\ \left(W_{p-1}\left(1\right)+C_{p}\right)p=1 \end{array} \right. $$</span></p> <p>8b.3 <span class="math-container">$$ C_{p}=\frac{1}{p}-W_{p-1}\left(1\right) $$</span></p> <p>9b. We have completed the study of the constant. The last action is to match everything together. <span class="math-container">$$ M_{p}\left(n\right)=p\left(\left(\int_{ }^{ }M_{p-1}\left(n\right)dn\right)_{n}-n\left(\int_{ }^{ }M_{p-1}\left(n\right)dn\right)_{1}\right)+n $$</span> <span class="math-container">$$M_{0}(n) = n+1$$</span></p> <p>10b. (New step 29.04.2022) The previous step was not recorded correctly. I will also proceed to the calculation of definite integrals: <span class="math-container">$$ M_{p}(n) = \begin{cases} n+1, &amp; \text{if $p$ is zero } \\ p\int_{0}^{n}M_{p-1}\left(t\right)dt-np\int_{0}^{1}M_{p-1}\left(t\right)dt+n, &amp; \text{otherwise} \end{cases} $$</span></p>
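The recurrence in step 10b can be implemented and checked symbolically (a sketch of mine assuming SymPy; the function name `M` simply mirrors the notation above):

```python
import sympy as sp

n, t = sp.symbols('n t')

def M(p):
    """Closed form for sum_{i=0}^n i^p, built from the recurrence in step 10b."""
    if p == 0:
        return n + 1
    prev = M(p - 1).subs(n, t)                    # M_{p-1} as a function of t
    return sp.expand(p * sp.integrate(prev, (t, 0, n))
                     - n * p * sp.integrate(prev, (t, 0, 1)) + n)

for p in range(1, 5):
    closed_form = M(p)
    for k in range(10):                           # compare against a direct sum
        assert closed_form.subs(n, k) == sum(i**p for i in range(k + 1))

print(M(2))    # matches n^3/3 + n^2/2 + n/6
print(M(3))    # matches n^4/4 + n^3/2 + n^2/4
```

For each `p` this reproduces the direct sum, and in particular recovers the formulas for the first squares and first cubes derived above.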
bonsoon
48,280
<p>Just to add a possibly well-known way (though not necessarily as generalizable as the above) of showing the sum of consecutive squares, in the spirit of the apocryphal story of Gauss:</p> <p>Note that <span class="math-container">$$1^2 = 1\\ 2^2= 2+2 \\ 3^2 = 3+3+3 \\ 4^2 = 4+4+4+4 \\\vdots$$</span> etc.</p> <p>So we have <span class="math-container">$\sum_{i=1}^n i^2$</span> is the sum of the above pyramid of numbers. In particular, when <span class="math-container">$n=4$</span>, we have by rotating this pyramid of numbers in three ways: <span class="math-container">$$ \sum_{i=1}^4 i^2=\frac{1}{3}\left(\begin{array}{cccccc} 1+\\ 2+ &amp; 2+\\ 3+ &amp; 3+ &amp; 3+\\ 4+ &amp; 4+ &amp; 4+ &amp; 4 \end{array}+\begin{array}{cccccc} 4+\\ 3+ &amp; 4+\\ 2+ &amp; 3+ &amp; 4+\\ 1+ &amp; 2+ &amp; 3+ &amp; 4 \end{array}+\begin{array}{cccccc} 4+\\ 4+ &amp; 3+\\ 4+ &amp; 3+ &amp; 2+\\ 4+ &amp; 3+ &amp; 2+ &amp; 1 \end{array}\right)\\=\frac{1}{3}\left(\begin{array}{cccc} 9+\\ 9+ &amp; 9+\\ 9+ &amp; 9+ &amp; 9+\\ 9+ &amp; 9+ &amp; 9+ &amp; 9 \end{array}\right) = \frac{1}{3}(1+2+3+4)(9) $$</span></p> <p>So one can believe that <span class="math-container">$$ \sum_{i=1}^n i^2 = \frac{1}{3}\left( \begin{array}{cccccc} 1+\\ 2+ &amp; 2+\\ \vdots &amp; &amp; \ddots\\ n+ &amp; n+ &amp; \cdots &amp; n \end{array}+\begin{array}{cccccc} n+\\ (n-1)+ &amp; n+\\ \vdots &amp; &amp; \ddots\\ 1+ &amp; 2+ &amp; \cdots &amp; n \end{array}+\begin{array}{cccccc} n+\\ n+ &amp; (n-1)+\\ \vdots &amp; &amp; \ddots\\ n+ &amp; (n-1)+ &amp; \cdots &amp; 1 \end{array}\right) \\=\frac{1}{3}\left(\begin{array}{cccc} (2n+1)+\\ (2n+1)+ &amp; (2n+1)+\\ \vdots &amp; &amp; \ddots\\ (2n+1)+ &amp; (2n+1)+ &amp; \cdots &amp; (2n+1) \end{array}\right) \\=\frac{1}{3}(1+2+\cdots+n)(2n+1) \\=\frac{1}{3}\frac{n(n+1)}{2}(2n+1) $$</span></p>
159,446
<p>The ordinary Thom isomorphism says $H^{*+n}(E,E_{0}) \simeq H^{*}(X)$, where $E$ is a vector bundle over $X$ and $E_{0}$ is $E$ minus the zero section. Now assume that $S$ is a nonvanishing section of the vector bundle $E$. In each fiber $E_{x}$ we remove the two points $0_{x}$ and $S(x)$. Then we write $E_{0,1}$ for the union of all twice-punctured fibers.</p> <p><strong>Motivated by the ordinary Thom isomorphism, my question is</strong></p> <blockquote> <p>What should be a relevant right side of the following equality (equivalence)?</p> </blockquote> <p>\begin{equation} H^{*+n}(E,E_{0,1}) \simeq \;? \end{equation}</p> <p>What should be a generalized Thom class?</p> <p>Does this right side depend on the choice of a particular nonvanishing section $S$?</p> <p>It is obvious that we can generalize the main question to multi-point punctured fibers. That is, assume we have $m$ sections $S_{1},\ldots ,S_{m}$ such that we have $m$ distinct vectors $S_{1}(x),\ldots,S_{m}(x)$. We remove these $m$ points from each fiber $E_{x}$ and denote the resulting total space by $E_{1,2,\ldots m}$. We search for a relevant right side of:</p> <p>\begin{equation} H^{*+n}(E,E_{1,2, \ldots, m})=? \end{equation}</p>
Liviu Nicolaescu
20,302
<p>You can see from the special case when $X$ is a point that what you proposed cannot work.</p>
3,696,776
<p>I was given this problem:</p> <p><a href="https://i.stack.imgur.com/8ACor.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8ACor.png" alt="Problem"></a></p> <p>These are my calculations and I'm asking for verification:</p> <p>Pointwise limit:</p> <p><span class="math-container">$\lim_{n \to \infty} f_{n}(x) = \lim_{n \to \infty} \frac{x^{2n}}{1+x^{2n}} = \lim_{n \to \infty} \frac{x^{n}}{\frac{1}{x^n}+x^{n}} = 1$</span></p> <p>Uniform convergence:</p> <p><span class="math-container">$\mid f_{n}(x)-f(x)\mid = \mid f_{n}(x) - 1\mid = \mid\frac{x^{2n}}{1+x^{2n}} -1 \mid= \mid \frac{x^{n}}{\frac{1}{x^n}+x^{n}} - \frac{\frac{1}{x^n}+x^{n}}{\frac{1}{x^n}+x^{n}}\mid = \mid -\frac{\frac{1}{x^n}}{\frac{1}{x^n}+x^{n}}\mid = \frac{\frac{1}{x^n}}{\frac{1}{x^n}+x^{n}} \leq \frac{1}{x^n}$</span></p> <p>Thus:</p> <p><span class="math-container">$\lim_{n \to \infty} sup\{\mid f_{n}(x)-f(x)\mid : x \in [R, \infty)\} = \lim_{n \to \infty} \frac{1}{x^n} = 0.$</span></p> <p>From this follows that <span class="math-container">$f_n(x)$</span> is uniform convergent</p>
Aryaman Maithani
427,810
<p>You have written "<span class="math-container">$|f_n(x) - f(x)| = f_n(x) - 1$</span>". That is not correct. The RHS is negative.</p> <hr> <p>To do it, an easy way is to note that for every <span class="math-container">$n \in \Bbb N$</span>, the function <span class="math-container">$f_n$</span> is increasing on <span class="math-container">$[R, \infty)$</span>. And also that <span class="math-container">$f_n(x) &lt; 1$</span> for every <span class="math-container">$n \in \Bbb N$</span> and <span class="math-container">$x \ge R$</span>. </p> <p>Now, you know that <span class="math-container">$$\lim_{n\to\infty} f_n(R) = 1.$$</span> So, given any <span class="math-container">$\epsilon &gt; 0$</span>, choose <span class="math-container">$N \in \Bbb N$</span> such that <span class="math-container">$|f_n(R) - 1| &lt; \epsilon$</span> for all <span class="math-container">$n \ge N$</span>.<br> Since <span class="math-container">$f_n(R) \le f_n(x) &lt; 1$</span> for all <span class="math-container">$n \in \Bbb N$</span> and <span class="math-container">$x &gt; R$</span>, it follows that <span class="math-container">$$|f_n(x) - 1| &lt; \epsilon,$$</span> for all <span class="math-container">$x &gt; R$</span> and <span class="math-container">$n \ge N$</span>, as desired.</p> <hr> <p><strong>EDIT:</strong> This is after the post was edited.<br> You write </p> <blockquote> <p><span class="math-container">$$\lim_{n \to \infty} \sup\{\mid f_{n}(x)-f(x)\mid : x \in [R, \infty)\} = \lim_{n \to \infty} \frac{1}{\color{#FF0000}x^n} = 0.$$</span></p> </blockquote> <p>It should actually be</p> <p><span class="math-container">$$\lim_{n \to \infty} \sup\{\mid f_{n}(x)-f(x)\mid : x \in [R, \infty)\} = \lim_{n \to \infty} \frac{1}{\color{#FF0000}R^n} = 0.$$</span></p>
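Numerically the picture is the same (my own Python sketch, assuming NumPy and taking $R = 1.5$ for illustration, since the problem's $R &gt; 1$ is only visible in the image): the supremum of $|f_n - 1|$ over $[R,\infty)$ sits at $x = R$ and goes to $0$.

```python
import numpy as np

R = 1.5                            # stand-in for the problem's R > 1 (value assumed)
x = np.linspace(R, 50, 200001)     # a grid on [R, 50] approximating [R, oo)

for m in (1, 2, 5, 10):
    f = x**(2 * m) / (1 + x**(2 * m))
    sup_err = np.max(np.abs(f - 1.0))
    # |f_m(x) - 1| = 1/(1 + x^{2m}) decreases in x, so the sup is attained at x = R
    assert np.isclose(sup_err, 1 / (1 + R**(2 * m)))
    print(m, sup_err)
```

The printed suprema shrink geometrically in $n$, which is exactly the uniform convergence on $[R, \infty)$.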
118,275
<p>In the construction of Soergel's bimodules in representation theory, it's essential for him to work with <em>split</em> Grothendieck groups. Here he starts with a certain small additive category $\mathcal{A}$ and writes $\langle \mathcal{A} \rangle$ for its split Grothendieck group: the free abelian group on objects $\langle A \rangle$ corresponding as usual to isomorphism classes, modulo sums $\langle C \rangle = \langle A \rangle + \langle B \rangle$ corresponding only to the situation $C \cong A \oplus B$.</p> <p>This is a less familiar situation than the usual Grothendieck group with sums corresponding to short exact sequences, which may or may not split. </p> <blockquote> <p>Where does the notion of split Grothendieck group originate, and why?</p> </blockquote> <p>This is mostly asked out of curiosity, but I'm also looking for further interesting examples.</p>
Stephan Müller
28,011
<p>These groups are mentioned in [Swan '68 - Algebraic K-Theory, p.69]. He constructs $K_0(\mathcal{A}, S)$ for a class $S$ of exact sequences in $\mathcal{A}$: take the free abelian group modulo the relations coming from sequences in $S$. For example, one takes the class of all exact sequences for the Grothendieck group $K_0(\mathcal{A})$, or all split exact sequences for the group you mentioned. The name 'split Grothendieck group' does not appear.</p> <p>He generalizes further to $K_0(\mathcal{A}, F)$ for a bifunctor $F:\mathcal{A} \times \mathcal{A} \to \mathcal{A}$ and obtains a generalized Picard group.</p>
919,562
<p>I need to prove that:</p> <p>$$\inf\{\frac{1}{3}+\frac{3n+1}{6n^2} \Big| n\in\mathbb N\}=\frac{1}{3}$$</p> <p>I get stuck with my proof; I'll write it down.</p> <p>$$n\geq1$$ $$3n\geq3$$ $$3n+1\geq4$$ $$\frac{1}{3}+3n+1\geq4+\frac{1}{3}$$</p> <p>Now, I'm having a problem with $6n^2$: if I multiply by $6n^2$, I'll get a variable in the expression $4+\frac{1}{3}$.</p> <p>Any ideas? Thanks!</p>
copper.hat
27,978
<p>First note that ${1 \over 3} + { 3n+1 \over 6n^2} \ge {1 \over 3}$, so ${1 \over 3} $ is a lower bound.</p> <p>Now let $\epsilon &gt;0$ and choose $n$ large enough so that ${ 3n+1 \over 6n^2} &lt; \epsilon$. (One easy way is to choose $n$ large enough so that ${1 \over n } &lt; \epsilon$, then $\epsilon &gt; { 1\over n} = { 6 n \over 6 n^2} &gt; { 4n \over 6 n^2} \ge {3n+1 \over 6 n^2}$.)</p> <p>Then ${1 \over 3} + { 3n+1 \over 6n^2} &lt; {1 \over 3}+ \epsilon$.</p> <p>It follows that ${ 1\over 3} = \inf_{n \in \mathbb{N}} ({1 \over 3} + { 3n+1 \over 6n^2} )$.</p>
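Numerically the two halves of the argument are easy to see (a small Python sketch of mine, not part of the proof): every term exceeds ${1 \over 3}$, and the terms decrease toward it.

```python
# a_n = 1/3 + (3n+1)/(6n^2) = 1/3 + 1/(2n) + 1/(6n^2): a decreasing sequence > 1/3
a = [1/3 + (3 * n + 1) / (6 * n**2) for n in range(1, 10001)]

assert all(term > 1/3 for term in a)                      # 1/3 is a lower bound
assert all(a[i] > a[i + 1] for i in range(len(a) - 1))    # strictly decreasing
print(a[0], a[99], a[-1])   # ~1, ~0.3384, ~0.33338 -- creeping down toward 1/3
```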
919,562
<p>I need to prove that:</p> <p>$$\inf\{\frac{1}{3}+\frac{3n+1}{6n^2} \Big| n\in\mathbb N\}=\frac{1}{3}$$</p> <p>I get stuck with my proof; I'll write it down.</p> <p>$$n\geq1$$ $$3n\geq3$$ $$3n+1\geq4$$ $$\frac{1}{3}+3n+1\geq4+\frac{1}{3}$$</p> <p>Now, I'm having a problem with $6n^2$: if I multiply by $6n^2$, I'll get a variable in the expression $4+\frac{1}{3}$.</p> <p>Any ideas? Thanks!</p>
BigM
90,395
<p>Obviously $\frac{1}{3}+\frac{3n+1}{6n^2}\geq\frac{1}{3}.$ The sequence $a_n=\frac{1}{3}+\frac{3n+1}{6n^2}$ converges to $\frac{1}{3}$ since $\frac{3n+1}{6n^2}$ behaves like $\frac{1}{n}.$ Thus $a_n\rightarrow \frac{1}{3}$, which is to say $\inf=\frac{1}{3}$.</p>
1,728,920
<p>I am a software engineer trying to wrap his head around <strong>Fast Fourier Transform (FFT)</strong>. Specifically, I need to implement it as part of some software I am writing. Now I can handle the implementation of the algorithm/operations, and in fact will likely just use an open source math library to do most of the heavy lifting for me. But there's something fundamental here that I just <em>want</em> to understand.</p> <p>According to <a href="https://en.wikipedia.org/wiki/Fourier_analysis" rel="nofollow">Wikipedia</a>:</p> <blockquote> <p>Fourier analysis is the study of the way general functions may be represented or approximated by sums of simpler trigonometric functions. Fourier analysis grew from the study of Fourier series, and is named after Joseph Fourier, who showed that representing a function as a sum of trigonometric functions greatly simplifies the study of heat transfer.</p> </blockquote> <p>and:</p> <blockquote> <p>For example, determining what component frequencies are present in a musical note would involve computing the Fourier transform of a sampled musical note. One could then re-synthesize the same sound by including the frequency components as revealed in the Fourier analysis. 
In mathematics, the term Fourier analysis often refers to the study of both operations.</p> </blockquote> <p>So here we have two examples:</p> <ol> <li>Decomposing a general function into component trig functions to study heat transfer; and</li> <li>Decomposing a sampling of a musical note into component frequencies</li> </ol> <p>So I get the <em>what</em> here; that is, what FFT/Fourier Analysis is attempting to do (decomposing a function into component functions), and like I said, I can handle the <em>how</em> later (either coding up FFT myself or using a library), <strong>but what I'm stuck on is the <em>why</em>.</strong></p> <p><strong>Why do this?</strong> Using one of the examples above, <em>why</em> decompose a sound sample (musical note) into component frequencies? What is there to be gained/learned from doing this? What extra information does this expose for analytical purposes? Why can we better understand heat transfer by using FFT to decompose some "heat function" into smaller/component trig functions?</p>
Wouter
89,671
<p>Differential equations such as <a href="https://en.wikipedia.org/wiki/Thermal_conduction#Fourier.27s_law" rel="nofollow">the one that describes heat flow</a> are often easily solvable when they have a sinusoidal (or complex exponential) source term. So, physicists solve it in the easy case and then make use of the Fourier transform to turn the general problem (with a source that is not sinusoidal) into a sum of easy problems.</p> <p>In other words, thanks to the Fourier Transform, the solution of a linear differential equation with a non-sinusoidal source term (hard to solve) can be written as a sum of solutions of linear differential equation with sinusoidal source terms (easy to solve).</p>
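The "component frequencies" point from the question can also be made concrete in a few lines (a sketch of mine assuming NumPy; the signal and the two tone frequencies are toy choices, not from the question): a signal built as a mix of two sinusoids goes in, and the FFT hands back exactly those two frequencies with their strengths.

```python
import numpy as np

fs = 1000                            # sampling rate in Hz (an arbitrary choice)
t = np.arange(0, 1, 1 / fs)          # 1 second of samples
# a toy "musical note": a 5 Hz tone plus a quieter 12 Hz overtone
signal = 2.0 * np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

amplitude = np.abs(np.fft.rfft(signal)) / (len(t) / 2)   # per-frequency amplitude
freqs = np.fft.rfftfreq(len(t), 1 / fs)

peaks = freqs[amplitude > 0.1]
print(peaks)                          # [ 5. 12.] -- exactly the two component tones
print(amplitude[5], amplitude[12])    # ~2.0 and ~0.5 -- their original strengths
```

This is the "what frequencies are present, and how strongly" information the analysis exposes; re-synthesizing the signal from those two components gives the original back.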
200,920
<p>I would like to create an array f containing n indices. The labels of those indices are stored in a list of length n; let's call it "list".</p> <p>So I would like to have something like:</p> <blockquote> <p>{f[list[[0]]], f[list[[1]]], ...}</p> </blockquote> <p>The point is to assign values to the f[list[[i]]] afterwards.</p> <p>I tried to make a table, but the problem is that the indices cannot be a list from what I have seen (in the sense that they must be regularly spaced).</p> <p>I also tried to use the array function, but I have the same problem: I cannot specify the indices as being a list; they must be regularly spaced.</p> <p>How can I do it?</p>
Carl Woll
45,431
<p>One idea I like to use for these kinds of integrals is to add an auxiliary variable and a Dirac delta function, convert the Dirac delta function to it's integral formulation, and then do a bunch of simple 1D integrals. For your case, this would proceed as follows, starting from:</p> <p><span class="math-container">$$\underset{t\in \mathbb{R}^n}{\int }e^{-t_1^4-t_2^4-\ldots \ -t_n^4-\left(1-t_1-t_2-\ldots -t_n\right){}^4}$$</span></p> <p>Introduce the auxiliary variable <span class="math-container">$s$</span> and add a Dirac delta function:</p> <p><span class="math-container">$$\underset{t\in \mathbb{R}^n}{\int }\underset{s\in \ \mathbb{R}}{\int }e^{-t_1^4-t_2^4-\ldots -t_n^4-s^4} \delta(1-t_1-t_2-\ldots -t_n-s)$$</span></p> <p>Next, introduce the integral formulation of the Dirac delta function:</p> <p><span class="math-container">$$\frac{1}{2 \pi }\underset{t\in \ \mathbb{R}^n}{\int }\underset{s\in \mathbb{R}}{\int }\underset{u\in \ \mathbb{R}}{\int }e^{-t_1^4-t_2^4-\ldots -t_n^4-s^4} e^{i u-i t_1 u-i t_2 \ u-\ldots -i t_n u-i s u}$$</span></p> <p>Finally, we can do all of the <span class="math-container">$t$</span> and <span class="math-container">$s$</span> integrals to obtain:</p> <p><span class="math-container">$$\frac{1}{2 \pi } \int_{-\infty \ }^{\infty } e^{i u} g(u)^{n+1} \, du$$</span></p> <p>where:</p> <p><span class="math-container">$$g(u)=\int_{-\infty }^{\infty } e^{-i t u-t^4} \, dt$$</span></p> <p>Now, let's have Mathematica do these integrals:</p> <pre><code>g[u_] = Sqrt[2 Pi] FourierTransform[Exp[-t^4], t, u] </code></pre> <blockquote> <p>2 Gamma[5/4] HypergeometricPFQ[{}, {1/2, 3/4}, u^4/256] - 1/4 u^2 Gamma[3/4] HypergeometricPFQ[{}, {5/4, 3/2}, u^4/256]</p> </blockquote> <p>So, the desired integral has become:</p> <pre><code>int[n_, opts:OptionsPattern[NIntegrate]] := (1/(2 Pi)) NIntegrate[ Cos[u] g[u]^(n+1), {u, -Infinity, Infinity}, opts, Method-&gt;{"GlobalAdaptive", Method-&gt;"GaussKronrodRule"} ] </code></pre> <p>Now, <code>g[u]</code> 
is real:</p> <pre><code>Refine[g[u] ∈ Reals, u ∈ Reals] </code></pre> <blockquote> <p>True</p> </blockquote> <p>so I use <code>Cos[u]</code> instead of <code>Exp[I u]</code> since the integral is real. I also customize the integration method. Let's check:</p> <pre><code>int[2, WorkingPrecision-&gt;20] int[2, WorkingPrecision-&gt;40] int[2, WorkingPrecision-&gt;60] </code></pre> <blockquote> <p>1.4733172914977911077</p> <p>1.473317291497785926905017339845596712841</p> <p>1.47331729149778592690501733984559670949096610342311667206502</p> </blockquote> <p>The result seems correct, and improves with higher working precision. Now, for higher orders:</p> <pre><code>int[4, WorkingPrecision -&gt; 20] int[5, WorkingPrecision -&gt; 20] int[6, WorkingPrecision -&gt; 20] int[40, WorkingPrecision -&gt; 20] </code></pre> <blockquote> <p>4.4732305211180348293</p> <p>7.7543594355221995796</p> <p>13.461688085347942892</p> <p>4.0351905913672630176*10^9</p> </blockquote>
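The $n = 2$ value can be cross-checked directly against the original two-dimensional integral (a sketch of mine assuming SciPy; truncating $\mathbb{R}^2$ to the box $[-4,4]^2$ loses only on the order of $e^{-256}$ because the integrand decays like $e^{-t^4}$):

```python
import numpy as np
from scipy.integrate import dblquad

# Directly integrate exp(-x^4 - y^4 - (1 - x - y)^4) over [-4, 4]^2 ~ R^2.
# dblquad expects the integrand as func(y, x).
val, err = dblquad(lambda y, x: np.exp(-x**4 - y**4 - (1 - x - y)**4),
                   -4, 4, lambda x: -4, lambda x: 4)
print(val)   # ~1.4733172914977...
```

This agrees with the `int[2]` values above to the quadrature tolerance.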
3,295,021
<p>The Hopf fibration is a mapping <span class="math-container">$h:\mathbb{S^3} \to\mathbb{S}^2$</span> defined by <span class="math-container">$r\mapsto ri\bar{r}$</span> where <span class="math-container">$r$</span> is a unit quaternion of the form <span class="math-container">$r=a+bi+cj+dk $</span> where <span class="math-container">$a,b,c,d \in \mathbb{R}$</span> and <span class="math-container">$ijk=-1$</span>. Explicitly, <span class="math-container">$$h(a,b,c,d)=(a^2+b^2-c^2-d^2,2(ad+bc),2(bd-ac))$$</span> Now, it is known that for a point in <span class="math-container">$\mathbb{S^2}$</span> the preimage under <span class="math-container">$h$</span> is a circle in <span class="math-container">$\mathbb{S^3}$</span>. How can this be? Consider the set of points <span class="math-container">$$C= \{(\cos(t),0,0,\sin(t)) \mid t\in\mathbb{R}\} \subset \mathbb{S^3}$$</span> which can be written in terms of quaternions as <span class="math-container">$C=e^{k t}$</span> where <span class="math-container">$ t \in \mathbb{R}$</span>. This clearly doesn't map to a single point in <span class="math-container">$\mathbb{S^2}$</span> under the Hopf fibration. It maps to the circle <span class="math-container">$i\cos(2t)+j\sin(2t)$</span>. So I am not understanding what's going on here. Why is this circle in 4-dimensional space not mapping to a point in 3-dimensional space? </p>
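The computation above can be checked numerically (a Python sketch, with the quaternion product written out by hand; the second check — that a *different* circle, $e^{it}$, does map to a single point — is an extra observation of mine, not from the original text):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def h(r):
    """Hopf map r -> r i conj(r); returns the (i, j, k) part."""
    a, b, c, d = r
    conj = np.array([a, -b, -c, -d])
    out = qmul(qmul(r, np.array([0.0, 1.0, 0.0, 0.0])), conj)
    return out[1:]                     # the scalar part is always 0

ts = np.linspace(0, np.pi, 50)
circle = np.array([h(np.array([np.cos(t), 0, 0, np.sin(t)])) for t in ts])
# image of e^{kt}: (cos 2t, sin 2t, 0), a full circle on S^2 -- not a point
assert np.allclose(circle[:, 0], np.cos(2*ts)) and np.allclose(circle[:, 1], np.sin(2*ts))

fiber = np.array([h(np.array([np.cos(t), np.sin(t), 0, 0])) for t in ts])
# image of e^{it}: constantly i -- that circle IS a single fiber of h
assert np.allclose(fiber, [1.0, 0.0, 0.0])
```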
Tom Chen
117,529
<p>For (i), define <span class="math-container">$\{x\} = x - \lfloor x \rfloor$</span>. Then <span class="math-container">\begin{align*} \lfloor 2x \rfloor + \lfloor 2y \rfloor = 2\lfloor x\rfloor + 2\lfloor y \rfloor + \lfloor 2\{x\} \rfloor + \lfloor 2\{y\} \rfloor \end{align*}</span> and <span class="math-container">\begin{align*} \lfloor x \rfloor + \lfloor y \rfloor + \lfloor x + y \rfloor = 2\lfloor x\rfloor + 2\lfloor y \rfloor + \lfloor \{x\} + \{y\} \rfloor \end{align*}</span> So the inequality to be proven is equivalent to <span class="math-container">\begin{align*} \lfloor 2\{x\} \rfloor + \lfloor 2\{y\} \rfloor \ge \lfloor \{x\} + \{y\} \rfloor \end{align*}</span> But this is true, since <span class="math-container">\begin{align*} \lfloor 2\{x\} \rfloor + \lfloor 2\{y\} \rfloor \ge \lfloor 2\max(\{x\}, \{y\})\rfloor \ge \lfloor \{x\} + \{y\} \rfloor \end{align*}</span></p>
1,946,881
<p>Looking around I have found lots of material on continuous time Markov processes on finite or countable state spaces, say $\{0,1,\ldots,J\}$ for some $J\in\mathbb{N}$ or just $\mathbb{N}$. Similarly I have earlier worked with (discrete time) Markov chains on general state spaces, following the modern classic by Meyn &amp; Tweedie. </p> <p>My question concerns monographs on continuous time Markov processes on general state spaces, say some subset of $\mathbb{R}^k$, $k\in\mathbb{N}$. Are there any good references - preferably but not necessarily suited for an ambitious master student - on this topic?</p>
Mikasa
8,581
<p>The first one states that for all $x$ there is an element $e$ satisfying that condition, but the second one says that any $x$ has its own $e$ with that property. The same contrast appears in the definitions of continuity and uniform continuity: $$\forall x \in I \, \forall \varepsilon &gt; 0 \, \exists \delta &gt; 0\, \forall y \in I \, ( \, |y-x|&lt;\delta \, \Rightarrow \, |f(y)-f(x)|&lt;\varepsilon\,),$$ $$\forall \varepsilon &gt; 0\, \exists \delta &gt; 0\, \forall x \in I\, \forall y \in I\, ( \, |y-x|&lt;\delta \, \Rightarrow \, |f(y)-f(x)|&lt;\varepsilon\,).$$</p>
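As a concrete illustration of the difference (my own example, not from the original answer): for $f(x) = x^2$ on $\mathbb{R}$, the largest admissible $\delta$ at a point $x$ shrinks as $x$ grows, so each $x$ has its own $\delta$ but no single $\delta$ works everywhere.

```python
import math

eps = 0.1   # a fixed epsilon (arbitrary choice)

def best_delta(x):
    """Largest delta such that |y - x| <= delta forces |y^2 - x^2| <= eps (x >= 0)."""
    # solve (x + delta)^2 - x^2 = eps for delta
    return math.sqrt(x * x + eps) - x

deltas = [best_delta(x) for x in (1, 10, 100, 1000)]
print(deltas)   # roughly eps/(2x): shrinks as x grows
# each x has its own delta, but no single delta > 0 works for every x at once,
# so f(x) = x^2 is continuous on R yet not uniformly continuous
assert all(d1 > d2 for d1, d2 in zip(deltas, deltas[1:]))
```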
1,440,470
<p>Given two real valued independent random variables $X$ and $Y$, write their ratio as $R = \frac{X}{Y}$</p> <p>I know various other ways of finding a formula for the distribution of $R$, but I'm specifically interested in understanding why the following derivation does not yield the correct result.</p> <p>$$ P(R = r) = \int_{\mathbb{R}} P(\frac{X}{Y} = r | X = x)P(X = x)dx \\ = \int_{\mathbb{R}}P(Y = \frac{x}{r})P(X = x)dx \\ = \int_{\mathbb{R}}f_Y(\frac{x}{r})f_X(x)dx $$</p> <p>I can't see how this is wrong.</p>
zhoraster
262,269
<hr> <p>Sol'n, written by original asker</p> <p>1) $U([0, \frac{1}{2}])$, density is 2 for values on its support</p> <p>2) $P(R \le r) = \int_{\mathbb{R}}P(R \le r | Y = y)f_Y(y)dy = \int_{\mathbb{R}}P(X\le ry)f_Y(y)dy$, if $X$ and $Y$ are independent.</p> <p>3) $f_{R|Y = y}(r) = \frac{d}{dr}P(R \le r | Y = y) = \frac{d}{dr}P(X \le ry) = yf_X(ry)$</p> <p>4) $$ f_{R|X = x}(r | X = x) = \frac{d}{dr}P(R \le r | X = x) = \\ \frac{d}{dr}P(X \le rY | X = x) $$</p> <p>if $r \gt 0$ $$ = \frac{d}{dr}P(Y \ge \frac{x}{r}) = \\ \frac{d}{dr}\left(1 - F_Y(\frac{x}{r})\right) = \frac{x}{r^2}f_Y(\frac{x}{r}) ; x \ge 0 $$</p> <p>if $r \lt 0$ $$ = \frac{d}{dr}P(Y \le \frac{x}{r}) = \\ \frac{d}{dr}F_Y(\frac{x}{r}) = \frac{-x}{r^2}f_Y(\frac{x}{r}) ; x \le 0 $$</p> <p>So finally it is $ \frac{|x|}{r^2}f_Y(\frac{|x|}{r})$</p> <p>Is this okay? Or am I hopeless :|</p>
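A Monte Carlo sanity check of the formula in step 2 (my own example, assuming NumPy, with $X, Y$ independent $\mathrm{Exp}(1)$, for which step 2 gives $P(X/Y \le q) = \int_0^\infty (1-e^{-qy})e^{-y}\,dy = q/(1+q)$):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(size=500_000)
y = rng.exponential(size=500_000)
r = x / y

# For X, Y iid Exp(1):  P(X/Y <= q) = P(X <= qY) = q/(1+q)
for q in (0.5, 1.0, 2.0):
    empirical = np.mean(r <= q)
    exact = q / (1 + q)
    assert abs(empirical - exact) < 0.005, (q, empirical, exact)
print("empirical CDF of X/Y matches q/(1+q)")
```

Note the simulated ratio never "hits" any single value $r$ with positive probability, which is why the conditioning in the original attempt (treating $P(R = r)$ as a probability rather than a density) goes wrong.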
1,463,419
<p>A letter is known to have come from either LONDON or CLIFTON, but on the postmark only the $2$ consecutive letters ''ON'' are visible. What is the probability that the letter came from LONDON?</p> <hr> <p>This is a question of conditional probability. Let $A$ be the event that the letter has come from LONDON. Let $B$ be the event that the consecutive letters ''ON'' are visible. $A\cap B$ is the event that the letter has come from LONDON and the consecutive letters ''ON'' are visible. We have to find $P(A\mid B) =\frac{P(A\cap B)}{P(B)}$.</p> <p>But then I am stuck. Please help me. Thanks.</p>
Soudipta Dutta
952,126
<p>$E_1$: the letter came from CLIFTON.</p> <p>$E_2$: the letter came from LONDON.</p> <p>$E$: the event that the two consecutive letters ''ON'' are visible.</p> <p>$P(E_1) = P(E_2) = 1/2$.</p> <p>CLIFTON breaks into the consecutive pairs CL, LI, IF, FT, TO, ON: $6$ pairs, one of which is ON. Therefore <span class="math-container">$P(E \mid E_1) = \frac{1}{6}$</span>.</p> <p>LONDON breaks into LO, ON, ND, DO, ON: $5$ pairs, two of which are ON, so <span class="math-container">$P(E \mid E_2) = \frac{2}{5}$</span>.</p> <p>By Bayes' theorem, the probability that the letter came from LONDON given that just the two consecutive letters ''ON'' are visible is <span class="math-container">$P(E_2 \mid E) = \frac{P(E_2)\, P(E \mid E_2)}{P(E_2)\, P(E \mid E_2) + P(E_1)\, P(E \mid E_1)} = \frac{\frac{1}{2} \cdot \frac{2}{5}}{\frac{1}{2}\left[\frac{2}{5} + \frac{1}{6}\right]} = \frac{12}{17}$</span>.</p>
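The arithmetic can be confirmed with exact fractions (a small Python sketch of mine):

```python
from fractions import Fraction

p_e1 = p_e2 = Fraction(1, 2)          # prior: CLIFTON vs LONDON
p_on_given_clifton = Fraction(1, 6)   # 1 "ON" among 6 consecutive pairs
p_on_given_london = Fraction(2, 5)    # 2 "ON" among 5 consecutive pairs

# Bayes' theorem: P(LONDON | "ON" visible)
posterior = (p_e2 * p_on_given_london) / (
    p_e2 * p_on_given_london + p_e1 * p_on_given_clifton)
print(posterior)   # 12/17
assert posterior == Fraction(12, 17)
```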
1,554,603
<p>Let $\theta \in \mathbb R$, and let $T\in\mathcal L(\mathbb C^2)$ have canonical matrix</p> <p>$M(T)$ = $$ \left( \begin{matrix} 1 &amp; e^{i\theta} \\ e^{-i\theta} &amp; -1 \\ \end{matrix} \right) $$ (a) Find the eigenvalues of $T$.</p> <p>(b) Find an orthonormal basis for $\mathbb C^2$ that consists of eigenvectors for $T$.</p> <p>I can get the eigenvalues of $T$, and they are $\sqrt 2$ and $-\sqrt 2$. However, I cannot get the eigenvector corresponding to each eigenvalue. I know how to get eigenvectors by calculating the null space of $(T - \lambda I)$, but it looks like this is not a proper method for this problem. So, can anyone help? Thank you! </p>
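(As a numerical sanity check — my own Python sketch with an arbitrary $\theta$, not part of the exercise — the matrix is Hermitian, so an orthonormal eigenbasis exists and can be computed:)

```python
import numpy as np

theta = 0.7    # an arbitrary test value of theta
M = np.array([[1, np.exp(1j * theta)],
              [np.exp(-1j * theta), -1]])

w, v = np.linalg.eigh(M)      # M is Hermitian, so eigh returns an orthonormal basis
print(w)                      # [-1.41421356  1.41421356], i.e. -sqrt(2) and sqrt(2)
assert np.allclose(sorted(w), [-np.sqrt(2), np.sqrt(2)])
assert np.allclose(v.conj().T @ v, np.eye(2))        # columns are orthonormal
assert np.allclose(M @ v, v * w)                     # and are eigenvectors of M
```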
Kushal Bhuyan
259,670
<p>$\tau(60)=12$, so there are $12$ ideals.</p> <p>Note: The ideals in $Z_n$ are precisely the sets of the form $\langle d\rangle$ where $d$ divides $n$, so the number of ideals is the same as the number of divisors of $n$.</p>
386,018
<p>I've been working with Spivak's Differential Geometry exercises and I found myself confused with this one: "Let $C\subset \mathbb{R} \subset \mathbb{R}^2$ be the Cantor set. Show that $\mathbb{R}^2 - C$ is homeomorphic to the surface shown at the top of the next page."</p> <p><img src="https://i.stack.imgur.com/rARxC.jpg" alt="enter image description here"></p> <p>Well, for me it's obviously homeomorphic: if we think of $\mathbb{R}$ inside $\mathbb{R}^2$ as the $x$-axis, for instance, the set $\mathbb{R}^2 - C$ will be the plane minus the pieces of $\mathbb{R}$ that make up the Cantor set, so that in the end we have the plane with the $x$-axis having a lot of holes, like the surface shown above. For me it's kind of intuitive that $\mathbb{R}^2-C$ can be continuously deformed into the surface above, but how do we really come to prove this fact?</p> <p>Since Spivak says that the prerequisites are just his Calculus on Manifolds and basic notions of metric spaces, I feel that we wouldn't need any advanced topology for this job, but I'm a little confused, really. </p> <p>Can someone give a little advice on how to solve problems like this?</p>
75064
75,064
<p>The surface consists of infinitely many <a href="http://en.wikipedia.org/wiki/Pair_of_pants_%28mathematics%29" rel="noreferrer">pairs of pants</a> sewn together in the way shown: waist line to leg opening. A pair of pants is homeomorphic to the following domain in the plane: </p> <p><img src="https://i.stack.imgur.com/HKea8.png" alt="flat pants"></p> <p>Following the sewing procedure, you should insert a smaller copy of such a domain in each hole, and repeat <em>ad infinitum</em>. Sewing everything together, you get a disk minus a Cantor set. </p>