788,245
<p>$$\sum_{n=1}^{\infty}\frac{(n+2)!}{(3n-1)}$$ I know this series does not converge. Can someone show me how to prove that? Should I use d'Alembert's criterion (the ratio test) or some other criterion?</p>
Hilario Fernandes
142,528
<p>If you prove that $(n+2)!$ grows faster than $(3n-1)$, the general term will grow, and the sum will not converge. So, you can prove this inequality (by induction):</p> <p>\begin{equation} ((n+1)+2)! - (n+2)! &gt; (3(n+1)-1) - (3n-1) \end{equation} </p> <p>This proof method works because it shows $a_{n+1} &gt; a_n$.</p>
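A quick numerical sanity check (a Python sketch of mine, not part of the proof): the general term $a_n = (n+2)!/(3n-1)$ visibly grows, so it cannot tend to $0$, and the series must diverge by the term test.

```python
from math import factorial

# Terms of the series: a_n = (n+2)! / (3n - 1)
def a(n):
    return factorial(n + 2) / (3 * n - 1)

# The terms grow without bound; a necessary condition for
# convergence of a series is that its terms tend to 0.
terms = [a(n) for n in range(1, 8)]
print(terms)
assert all(terms[i] < terms[i + 1] for i in range(len(terms) - 1))
```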
3,260,776
<blockquote> <p>Suppose we are given any arbitrary collection of sets.</p> <p>How do we get a largest topology from the above arbitrary collection?</p> </blockquote> <p>How does one construct a topology from this collection, and under which conditions?</p> <p>I don't have any answer myself.</p> <p>Any help or idea is appreciated.</p>
John Hughes
114,036
<p>There are a couple of possibilities. </p> <p>I'm going to assume you have a set <span class="math-container">$U$</span> whose elements are all subsets of one "big" set <span class="math-container">$A$</span>. (For instance: you might have a set of intervals, all of which are subsets of <span class="math-container">$[0, 1]$</span>.)</p> <p>You want to find a set <span class="math-container">$U'$</span> such that </p> <p>(1) <span class="math-container">$U \subset U'$</span></p> <p>(2) <span class="math-container">$U'$</span> is a topology on <span class="math-container">$A$</span>. </p> <p>Here's one answer: let <span class="math-container">$U'$</span> be the power set of <span class="math-container">$A$</span>, i.e., let <span class="math-container">$U'$</span> contain <em>all</em> subsets of <span class="math-container">$A$</span>. Then the thing you get is called the "discrete" topology on <span class="math-container">$A$</span>. </p> <p>Here's a second answer: you know that a topology must contain <span class="math-container">$\emptyset$</span> and <span class="math-container">$A$</span>, so let <span class="math-container">$U_1 = U \cup \{ \emptyset, A \}$</span>. Now <span class="math-container">$U_1$</span> is <em>almost</em> a topology, but it doesn't necessarily contain arbitrary unions of elements, or finite intersections of elements. So </p> <ul> <li><p>Let <span class="math-container">$V_1 = U_1 \cup \{ \cup_{x \in X} x \mid X \in \Bbb P(U_1) \}$</span>. That's hard to read, but the second set says "let <span class="math-container">$X$</span> be any element of the power set of <span class="math-container">$U_1$</span>, i.e., any collection of elements of <span class="math-container">$U_1$</span>, and take the union of these things." <em>THAT</em> is one of the elements of the new set. So <span class="math-container">$V_1$</span> now contains arbitrary unions, and <span class="math-container">$U_1 \subset V_1$</span>. 
</p></li> <li><p>Let <span class="math-container">$U_2 = V_1 \cup \{ \cap_{x \in X} x \mid X \in \Bbb P'(V_1) \}$</span>, where <span class="math-container">$\Bbb P'$</span> denotes the "finite power set" of a set -- the collection of all <em>finite</em> subsets of the set. So <span class="math-container">$U_2$</span> contains arbitrary finite intersections of elements of <span class="math-container">$V_1$</span>. </p></li> </ul> <p>Now repeat the process, forming <span class="math-container">$V_2$</span> from <span class="math-container">$U_2$</span>, and so on. The result is a sequence of sets <span class="math-container">$$ U \subset U_1 \subset V_1 \subset U_2 \subset V_2 \subset ... $$</span> all of which are subsets of <span class="math-container">$\Bbb P(A)$</span>. Let</p> <p><span class="math-container">$$ H = \cup_{i=1}^\infty U_i. $$</span></p> <p>Then I claim that <span class="math-container">$H$</span> is a topology. It certainly contains the empty set and <span class="math-container">$A$</span>. What about finite intersections? If you have a finite collection <span class="math-container">$B_1, \ldots, B_k$</span> of elements of <span class="math-container">$H$</span>, then there's some <span class="math-container">$N$</span> so large that all the <span class="math-container">$B_i$</span> lie in <span class="math-container">$U_N$</span>. But then they all lie in <span class="math-container">$V_N$</span> as well, so their intersection lies in <span class="math-container">$U_{N+1}$</span>, hence in <span class="math-container">$H$</span>. </p> <p>The argument for arbitrary unions is completely straightforward. </p> <p>N.B.: Kavi Rama Murthy's answer suggests that once you compute <span class="math-container">$U_2$</span>, you're done, i.e., after that, my process adds no new elements to the topology. 
That's probably right, but I can't verify it yet, because I haven't had any coffee yet this morning. :) </p> <hr> <p>Anyhow, the construction above constructs the <em>smallest</em> topology on <span class="math-container">$A$</span> that contains <span class="math-container">$U$</span>. So my two answers are at the extremes: one constructs the largest topology on <span class="math-container">$A$</span> that contains <span class="math-container">$U$</span>, the other constructs the smallest. There are often many intermediate possibilities as well. </p>
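The iterative closure described above can be sketched computationally when the base set is finite (a hedged Python illustration of mine; the function name `smallest_topology` is an assumption, and for finite $A$ it suffices to repeat pairwise unions and intersections until a fixed point, since every union is then finite):

```python
from itertools import combinations

def smallest_topology(A, U):
    """Close U together with {emptyset, A} under unions and pairwise
    intersections until nothing new appears (A finite, so this stops)."""
    T = {frozenset(), frozenset(A)} | {frozenset(s) for s in U}
    while True:
        new = set(T)
        for s, t in combinations(T, 2):
            new.add(s | t)   # unions
            new.add(s & t)   # finite intersections
        if new == T:
            return T
        T = new

A = {1, 2, 3}
U = [{1}, {2, 3}, {2}]
T = smallest_topology(A, U)
print(sorted(map(sorted, T)))
```

On this toy input the closure is the six-set topology {∅, {1}, {2}, {1,2}, {2,3}, {1,2,3}}, which is indeed the smallest topology on $A$ containing $U$.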
2,300,049
<p>These are my thoughts:</p> <p>$$z^2 = 1 + 2i \Longrightarrow (x+yi)(x+yi) = 1 + 2i$$</p> <p>so: $x^2-y^2 = 1$ and $2xy = 2$.</p> <p>Then I got that $x = 1/y$, but I can't continue to find the real and imaginary parts of $z$. Any help appreciated.</p>
Zain Patel
161,779
<p>You have $x^2 - y^2 = 1$ and $x = \frac{1}{y}$. Substitute the latter into the former to get $$\frac{1}{y^2} - y^2 = 1 \implies 1 - y^4 = y^2 \implies y^4 + y^2 - 1 =0.$$</p> <p>You can solve the quadratic $u^2 + u - 1 = 0$ in $u = y^2$, giving the solutions $u = \frac{1}{2}(-1 \pm \sqrt{5})$. So you need to solve $y^2 = \frac{1}{2}(-1 + \sqrt{5})$ (since $y$ is real), which is now straightforward. </p>
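A quick numeric verification of this solution (a Python sketch; take the positive real root $y^2 = \frac{1}{2}(-1+\sqrt 5)$ and $x = 1/y$, then check $z^2 = 1+2i$):

```python
from math import sqrt

# y^2 = (-1 + sqrt(5)) / 2, taking the positive real root; x = 1/y
y = sqrt((-1 + sqrt(5)) / 2)
x = 1 / y
z = complex(x, y)
print(z ** 2)   # approximately 1 + 2i
assert abs(z ** 2 - (1 + 2j)) < 1e-9
```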
Michael Hardy
11,667
<p>\begin{align} z^2 = 1 + 2i &amp; = |1+2i|(\cos\alpha+i\sin\alpha) \text{ where } \tan \alpha = \frac 2 1 \\[10pt] &amp; = \sqrt{1^2+2^2} (\cos\alpha+i\sin\alpha) = \sqrt 5 (\cos\alpha+i\sin\alpha) \end{align} Therefore $$ z = \pm\left( \sqrt{\sqrt 5} \right)\left( \cos\frac\alpha2 + i \sin\frac\alpha 2 \right). $$</p>
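The polar-form computation above can be checked numerically (a Python sketch using the standard library `cmath.polar`, which returns $r = |1+2i| = \sqrt 5$ and $\alpha = \arctan(2/1)$):

```python
import cmath
import math

w = 1 + 2j
r, alpha = cmath.polar(w)   # r = |1+2i| = sqrt(5), tan(alpha) = 2/1

# z = sqrt(r) * (cos(alpha/2) + i sin(alpha/2)), as in the answer
z = math.sqrt(r) * complex(math.cos(alpha / 2), math.sin(alpha / 2))

assert abs(z * z - w) < 1e-12
assert abs((-z) * (-z) - w) < 1e-12   # the other square root
```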
3,278,761
<p>Suppose a student says: "if 17 is even, then 2 is not a divisor of 17". </p> <p>Surely his teacher would tell him he is wrong, saying that when a number is even, this number has 2 as a divisor. The teacher would correct him with "if 17 were even, then 2 would be a divisor of 17". In other words, the student's claim contradicts the general rule: "For every number x, if x is even, then x has 2 as a divisor." So, if 17 is even... </p> <p>But, since the sentence uttered by the student is a conditional with a false antecedent, this sentence (the whole conditional) is true, in virtue of "ex falso sequitur quodlibet" (from a false proposition, anything follows). </p> <p>My question is: what is wrong in the student's claim?</p> <p>Can this hypothetical case be clarified by saying that </p> <p>(1) the student's sentence is <em>materially</em> true</p> <p>(2) the teacher is right in saying that the sentence is false in case it is understood as asserting a consequence relation (logical consequence) between the antecedent and the consequent? </p> <p>Or, am I wrong in saying that "if 17 is even, then 2 is not a divisor of 17" contradicts (or is incompatible with) "For every number x, if x is even, then x is divisible by 2"? </p>
Maxime Ramzi
408,637
<p>The student would be right in claiming "if <span class="math-container">$17$</span> is even, then <span class="math-container">$2$</span> does not divide <span class="math-container">$17$</span>", as well as in claiming "if <span class="math-container">$17$</span> is even, then <span class="math-container">$2$</span> divides <span class="math-container">$17$</span>".</p> <p>The fact of the matter is that for any proposition <span class="math-container">$P$</span>, "if <span class="math-container">$17$</span> is even then <span class="math-container">$P$</span>" is correct, no matter how contradictory <span class="math-container">$P$</span> is: for instance, one can rightfully conclude from these two statements that "if <span class="math-container">$17$</span> is even, then <span class="math-container">$2$</span> divides and does not divide <span class="math-container">$17$</span>". One must not forget that there is an "if <span class="math-container">$17$</span> is even" assumption, which of course makes all the rest irrelevant, because <span class="math-container">$17$</span> isn't even. </p> <p>The teacher would be wrong on two counts: saying that the student is wrong; but also transforming the mathematically relevant "if <span class="math-container">$17$</span> is even" into the irrelevant "if <span class="math-container">$17$</span> were even", which is not a mathematical statement. </p> <p>By the way, if you believe that intuitionism has managed to go beyond material implication, note that this isn't a property of <em>material</em> implication, just of implication. </p>
134,796
<p>Example list below. All elements are of the form {1 or 0, 1 or 0, 1 or 0}, with at least one 0 and at least one 1 in each element (so excluding {1,1,1} and {0,0,0}). </p> <pre><code>ListA = {{1, 1, 0}, {1, 1, 0}, {1, 1, 0}, **{0, 1, 1}**, {1, 0, 1}, {1, 0, 1}, {1, 0, 1}} </code></pre> <p>I want a command that replaces any single lone entry in the sequence with the element of the following run.</p> <p>In ListA the single lone entry is {0, 1, 1}: before it there are three {1, 1, 0} in succession, and following it there are three {1, 0, 1} in succession. So I want this lone entry to be replaced by {1, 0, 1}. </p> <p>I want the command to be generic so it can handle any combination of lone entries; I believe there will be 6 different scenarios (assuming the element sequences on either side of the lone entry are different). Another example of a lone entry: {{1, 1, 0}, {1, 1, 0}, <strong>{1, 0, 1}</strong>, {0, 1, 1}, {0, 1, 1}}</p> <p>Lone entries at the start and end of the lists can be ignored.</p>
Aisamu
8,238
<p>Just for kicks:</p> <pre><code>f[{a__, a__, b__}] := a f[{a__, b__, b__}] := b f[{a__, b__, c__}] := c {First@ListA} ~Join~ Map[f, Partition[ListA, 3, 1]] ~Join~ {Last@ListA} </code></pre> <blockquote> <p>{{1, 1, 0}, {1, 1, 0}, {1, 1, 0}, {1, 0, 1}, {1, 0, 1}, {1, 0, 1}, {1, 0, 1}}</p> </blockquote>
3,082,635
<p>Prove that for a given prime <span class="math-container">$p$</span> and each <span class="math-container">$0 &lt; r &lt; p-1$</span>, there exists a <span class="math-container">$q$</span> such that </p> <p><span class="math-container">$$rq \equiv 1 \bmod p$$</span></p> <p>I've only taken one intro number theory course (years ago), and this just popped up in a computer science class (homework). I was assuming that this proof would be elementary since my current class is an algorithms course, but after the few basic attempts I've tried, it didn't look promising. Here are a couple of approaches I thought of:</p> <hr> <p>(<em>reverse engineer</em>)</p> <p>To arrive at the conclusion we would need</p> <p><span class="math-container">$$rq - 1 = kp$$</span></p> <p>for some <span class="math-container">$k$</span>. A little manipulation:</p> <p><span class="math-container">$$qr - kp = 1$$</span></p> <p>That looks familiar, but I can't see anything from it.</p> <hr> <p>(<em>sum on <span class="math-container">$r$</span></em>)</p> <p><span class="math-container">$$\sum_{r=1}^{p-2} r = \frac{(p-2)(p-1)}{2} = p\frac{p - 3}{2} + 1 \equiv 1 \bmod p$$</span></p> <p>which looks good, but I don't know how to incorporate <span class="math-container">$r$</span> into the final equality. </p> <hr> <p>(<em>Wilson's Theorem—proved by Lagrange</em>) </p> <p>I vaguely recall this theorem, but I was looking at it in an old book and it wasn't easy to see how we arrived there. Anyways, <span class="math-container">$p$</span> is prime <em>iff</em> <span class="math-container">$$(p-1)! \equiv -1 \bmod p$$</span></p> <p>Here the <span class="math-container">$r$</span> multiplier is built into the factorial expression, so I was thinking of adding <span class="math-container">$2$</span> to either side</p> <p><span class="math-container">$$(p-1)! + 2 \equiv 1 \bmod p$$</span></p> <p>which is a dead end (pretty sure). 
But then I was thinking: maybe multiply Wilson's Theorem by <span class="math-container">$(p+1)$</span>? Then getting</p> <p><span class="math-container">$$(p+1)(p-1)! \equiv -(p+1) \bmod p$$</span></p> <p>which I think results in</p> <p><span class="math-container">$$(p+1)(p-1)! \equiv 1 \bmod p$$</span></p> <p>of which <span class="math-container">$r$</span> is a multiple and <span class="math-container">$q$</span> is obvious. But I'm not sure if that's valid.</p>
cqfd
588,038
<blockquote> <p><strong>Theorem</strong>: If <span class="math-container">$g$</span> is the greatest common divisor of <span class="math-container">$r$</span> and <span class="math-container">$p$</span>, then there exist integers <span class="math-container">$q$</span> and <span class="math-container">$k$</span> such that <span class="math-container">$$g=\text{gcd}(r,p)=rq+kp . $$</span> You can find a proof <a href="https://math.stackexchange.com/questions/321061/proving-that-gcda-b-as-bt-i-e-gcd-is-a-linear-combination">here.</a></p> </blockquote> <p>Note that <span class="math-container">$r$</span> and <span class="math-container">$p$</span> are coprime, so <span class="math-container">$\text{gcd}(r,p)=1$</span>. Then by the above theorem, we have <span class="math-container">$$\text{gcd}(r,p)=1=rq+kp.$$</span></p> <p>Can you conclude now?</p>
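The coefficients $q$ and $k$ in the identity $1 = rq + kp$ are produced constructively by the extended Euclidean algorithm, which makes the conclusion concrete (a Python sketch; the function names are mine):

```python
def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) = a*s + b*t."""
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    return g, t, s - (a // b) * t

def mod_inverse(r, p):
    # 1 = r*q + k*p since gcd(r, p) = 1, so q is the inverse of r mod p
    g, q, k = extended_gcd(r, p)
    assert g == 1
    return q % p

p, r = 13, 5
q = mod_inverse(r, p)
print(q, (r * q) % p)   # (r*q) % p == 1
```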
1,331,063
<p>Let $\cal{H}$ be a Hilbert space, $T$ a bounded linear operator on $\cal{H}$, $S$ a trace class operator, then can one verify that $$|Tr(TS)|\leq\|T\|\cdot|Tr(S)|?$$</p>
R.N
253,742
<p>It is correct for positive operators; for more information see Gohberg and Krein, <em>Introduction to the Theory of Linear Non-Self-Adjoint Operators in Hilbert Space</em>, page 27.</p>
1,345,364
<p>I am struggling with this question: </p> <blockquote> <p>Let $\{a_n\}$ be defined recursively by $a_1=\sqrt2$, $a_{n+1}=\sqrt{2+a_n}$. Find $\lim\limits_{n\to\infty}a_n$. HINT: Let $L=\lim\limits_{n\to\infty}a_n$. Note that $\lim\limits_{n\to\infty}a_{n+1}=\lim\limits_{n\to\infty}a_n$, so $\lim\limits_{n\to\infty}\sqrt{2+a_n}=L$. Using the properties of limits, solve for $L$.</p> </blockquote> <p>I just don't know how I am supposed to find the limit or what my first step is. Any help?</p>
marty cohen
13,079
<p>If you have a recursion of the form $a_{n+1} =f(a_n) $, if $L = \lim_{n \to \infty} a_n $, then we must have $L = f(L)$.</p> <p>In your case, $f(x) = \sqrt{2+x}$.</p> <p>Therefore, for any limit $L$, we must have $L = \sqrt{2+L}$.</p> <p>Squaring, $L^2 = L+2$, which is a standard quadratic equation.</p> <p>Completing the square, from $L^2-L = 2$ we get $L^2-L+1/4 = 2+1/4 =9/4$, so $(L-1/2)^2 = 9/4$.</p> <p>By a miracle of homework problems, the right side is a square, so $(L-1/2)^2 = 9/4 = (3/2)^2$.</p> <p>Taking square roots, and remembering that square roots can be negative as well as positive, we get $L-1/2 = \pm 3/2$. Therefore $L = 1/2+3/2 = 2$ or $L = 1/2 - 3/2 = -1$.</p> <p>Since $L$ must be positive, we reject the negative solution, which goes into the corner and pouts.</p> <p>This leaves only $L=2$, and we see by substitution that this <em>does</em> satisfy $L = \sqrt{2+L}$.</p> <p>And they all lived happily ever after, at least until the next problem.</p>
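One can also watch the recursion converge to $L = 2$ numerically (a Python sketch; 50 iterations is an arbitrary choice, far more than needed since the error shrinks by roughly a factor of $1/4$ per step):

```python
from math import sqrt

# a_1 = sqrt(2), a_{n+1} = sqrt(2 + a_n)
a = sqrt(2)
for _ in range(50):
    a = sqrt(2 + a)
print(a)   # converges to the fixed point L = 2
assert abs(a - 2) < 1e-12
```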
4,112,771
<p>This is <strong>Exercise 2.3.4</strong> of Robinson's <em>&quot;A Course in the Theory of Groups (Second Edition)&quot;</em>. My universal algebra is a little rusty, so <a href="https://math.stackexchange.com/q/3590724/104041">this question</a> is not what I'm looking for; besides, I ought to be able to use tools given in Robinson's book.</p> <h2>The Details:</h2> <p>On page 56, <em>ibid.</em>,</p> <blockquote> <p>Let <span class="math-container">$F$</span> be a free group on a countably infinite set <span class="math-container">$\{x_1,x_2,\dots\}$</span> and let <span class="math-container">$W$</span> be a nonempty subset of <span class="math-container">$F$</span>. If <span class="math-container">$w=x_{i_1}^{l_1}\dots x_{i_r}^{l_r}\in W$</span> and <span class="math-container">$g_1,\dots, g_r$</span> are elements of a group <span class="math-container">$G$</span>, we define the <em>value</em> of the word <span class="math-container">$w$</span> at <span class="math-container">$(g_1,\dots,g_r)$</span> to be <span class="math-container">$w(g_1,\dots,g_r)=g_1^{l_1}\dots g_{r}^{l_r}$</span>. 
The subgroup of <span class="math-container">$G$</span> generated by all values in <span class="math-container">$G$</span> of words in <span class="math-container">$W$</span> is called the <em>verbal subgroup</em> of <span class="math-container">$G$</span> determined by <span class="math-container">$W$</span>,</p> <p><span class="math-container">$$W(G)=\langle w(g_1,g_2,\dots) \mid g_i\in G, w\in W\rangle.$$</span></p> </blockquote> <p>On page 57, <em>ibid.</em>,</p> <blockquote> <p>If <span class="math-container">$W$</span> is a set of words in <span class="math-container">$x_1, x_2, \dots$</span> and <span class="math-container">$G$</span> is any group, a normal subgroup <span class="math-container">$N$</span> is said to be <em><span class="math-container">$W$</span>-marginal</em> in <span class="math-container">$G$</span> if</p> <p><span class="math-container">$$w(g_1,\dots, g_{i-1}, g_ia, g_{i+1},\dots, g_r)=w(g_1,\dots, g_{i-1}, g_i, g_{i+1},\dots, g_r)$$</span></p> <p>for all <span class="math-container">$g_i\in G, a\in N$</span> and all <span class="math-container">$w(x_1,x_2,\dots,x_r)$</span> in <span class="math-container">$W$</span>. This is equivalent to the requirement: <span class="math-container">$g_i\equiv h_i \mod N, (1\le i\le r)$</span>, always implies that <span class="math-container">$w(g_1,\dots, g_r)=w(h_1,\dots, h_r)$</span>.</p> <p>[The] <span class="math-container">$W$</span>-marginal subgroups of <span class="math-container">$G$</span> generate a normal subgroup which is also <span class="math-container">$W$</span>-marginal. 
This is called <em>the <span class="math-container">$W$</span>-marginal of <span class="math-container">$G$</span></em> and is written <span class="math-container">$$W^*(G).$$</span></p> </blockquote> <p>On page 58, <em>ibid.</em>,</p> <blockquote> <p>If <span class="math-container">$W$</span> is a set of words in <span class="math-container">$x_1, x_2, \dots $</span>, the class of all groups <span class="math-container">$G$</span> such that <span class="math-container">$W(G)=1$</span>, or equivalently <span class="math-container">$W^*(G)=G$</span>, is called the <em>variety</em> <span class="math-container">$\mathfrak{B}(W)$</span> determined by <span class="math-container">$W$</span>.</p> </blockquote> <h2>The Question:</h2> <blockquote> <p>Prove that every variety is closed with respect to forming subgroups, images, and subcartesian products.</p> </blockquote> <h2>Thoughts:</h2> <p>This must have made sense to me at least two times (although it took some searching to confirm it):</p> <ul> <li><p>When I read Chapter 11 of <em>&quot;A Course in Universal Algebra&quot;</em> by Burris and Sankappanavar several years ago; this is tackled, as I said, <a href="https://math.stackexchange.com/q/3590724/104041">here</a>.</p> </li> <li><p>When I read Chapter 12 of Roman's <em>&quot;Fundamentals of Group Theory: An Advanced Approach&quot;</em> a few months ago. This is how I found out it's part of Birkhoff's Theorem. 
In the proof there, however, is the ever-infuriating &quot;It is clear&quot;; also: images are not covered.<span class="math-container">${}^\dagger$</span></p> </li> </ul> <p>I don't know why I've been stuck on it the last few days, but I have, so here I am.</p> <p>My rough understanding is that the processes of taking of subgroups, images, and subcartesian products mean they each inherit the words in <span class="math-container">$W$</span>, but this &quot;understanding&quot; is nothing more, I fear, than a restatement of the question.</p> <p>The chapter in Robinson's book is on free groups and presentations, so I've looked in many of my combinatorial group theory books, like Magnus <em>et al</em>., but the theorem is nowhere obvious.</p> <p>(I hope this is enough context.)</p> <p>I'm looking for something more than &quot;it is clear&quot; and less than a deep dive into <a href="/questions/tagged/universal-algebra" class="post-tag" title="show questions tagged &#39;universal-algebra&#39;" rel="tag">universal-algebra</a>.</p> <p>Please help :)</p> <hr /> <p><span class="math-container">$\dagger$</span> But quotients are, which, I suppose, is equivalent . . . I don't know.</p>
Arturo Magidin
742
<p>Let <span class="math-container">$W$</span> be a set of words, let <span class="math-container">$\mathfrak{W}$</span> be the corresponding variety.</p> <p>The key observation is that for any word <span class="math-container">$w(x_1,\ldots,x_n)$</span>, any groups <span class="math-container">$G$</span> and <span class="math-container">$K$</span>, and any morphism <span class="math-container">$f\colon G\to K$</span>, <span class="math-container">$$f(w(g_1,\ldots,g_n))=w(f(g_1),\ldots,f(g_n)).$$</span> This follows because <span class="math-container">$w(x_1,\ldots,x_n)$</span> is just an element of the free group <span class="math-container">$F_n$</span>, and the value of <span class="math-container">$w$</span> is the image of that element under the unique homomorphism <span class="math-container">$F_n\to G$</span> induced by the assignment <span class="math-container">$x_i\mapsto g_i$</span>; while <span class="math-container">$f(w(g_1,\ldots,g_n))$</span> is therefore the value of <span class="math-container">$w$</span> under the unique morphism <span class="math-container">$F_n\to K$</span> induced by the composite assignment <span class="math-container">$x_i\mapsto g_i\mapsto f(g_i)$</span>.</p> <p>In particular, for any groups <span class="math-container">$G$</span> and <span class="math-container">$K$</span> and any morphism <span class="math-container">$f\colon G\to K$</span>, we have <span class="math-container">$f(W(G))\subseteq W(K)$</span>, since the image of the generators of <span class="math-container">$W(G)$</span> lie in <span class="math-container">$W(K)$</span>.</p> <p>Note that any class of groups that is closed under subgroups will be closed under subcartesian products if and only if it is closed under cartesian (arbitrary unrestricted) products. 
Indeed, if <span class="math-container">$\{G_{\lambda}\}_{\lambda\in\Lambda}$</span> is a family of groups in the class, the cartesian product is a special case of the subcartesian product, and the subcartesian product is a subgroup of the cartesian.</p> <p>So it suffices to show that: (i) If <span class="math-container">$\{G_{\lambda}\}_{\lambda\in\Lambda}$</span> is a family of groups in <span class="math-container">$\mathfrak{W}$</span>, then <span class="math-container">$\mathop{\mathrm{Cr}}\limits_{\lambda\in\Lambda} G_{\lambda}$</span> lies in <span class="math-container">$\mathfrak{W}$</span>; (ii) that if <span class="math-container">$G\in\mathfrak{W}$</span> and <span class="math-container">$H\leq G$</span>, then <span class="math-container">$H\in\mathfrak{W}$</span>; and (iii) that if <span class="math-container">$G\in\mathfrak{W}$</span> and <span class="math-container">$\varphi\colon G\to K$</span> is a surjective homomorphism of groups, then <span class="math-container">$K\in\mathfrak{W}$</span>.</p> <ol> <li><p>Assume that <span class="math-container">$\{G_{\lambda}\}_{\lambda\in\Lambda}$</span> is a family of groups with <span class="math-container">$G_{\lambda}\in\mathfrak{W}$</span> for every <span class="math-container">$\lambda$</span>; that means that <span class="math-container">$W(G_{\lambda})=\{e_{\lambda}\}$</span> for every <span class="math-container">$\lambda$</span>. Let <span class="math-container">$w(x_1,\ldots,x_n)\in W$</span>, and let <span class="math-container">$\mathbf{y}_1,\ldots,\mathbf{y}_n\in \mathop{\mathrm{Cr}}\limits_{\lambda\in\Lambda}G_{\lambda}$</span>. We want to show that <span class="math-container">$w(\mathbf{y}_1,\ldots,\mathbf{y}_n)=e$</span>. 
Indeed, recall that the operations are componentwise; thus, let <span class="math-container">$\pi_{\lambda}$</span> be the projection onto the <span class="math-container">$\lambda$</span> coordinate, so that <span class="math-container">$\pi_{\lambda}(\mathbf{y}_i) = \mathbf{y}_i(\lambda)\in G_{\lambda}$</span> (since the elements of the cartesian product are functions from <span class="math-container">$\Lambda$</span> to <span class="math-container">$\cup G_{\lambda}$</span> with the value at <span class="math-container">$\lambda$</span> in <span class="math-container">$G_{\lambda}$</span>). Therefore, for each <span class="math-container">$\lambda$</span>, <span class="math-container">$$\begin{align*} \pi_{\lambda}(w(\mathbf{y}_1,\ldots,\mathbf{y}_n)) &amp;= w(\pi_{\lambda}(\mathbf{y}_1),\ldots,\pi_{\lambda}(\mathbf{y}_n))\\ &amp;= w(\mathbf{y}_1(\lambda),\ldots,\mathbf{y}_n(\lambda))\\ &amp;\in W(G_{\lambda}). \end{align*}$$</span> But <span class="math-container">$W(G_{\lambda})=\{e_{\lambda}\}$</span> because <span class="math-container">$G_{\lambda}\in\mathfrak{W}$</span>, so <span class="math-container">$\pi_{\lambda}(w(\mathbf{y}_1,\ldots,\mathbf{y}_n)) = e_{\lambda}$</span>. As this occurs for every <span class="math-container">$\lambda$</span>, we conclude that <span class="math-container">$w(\mathbf{y}_1,\ldots,\mathbf{y}_n)=e$</span>. Thus, <span class="math-container">$\mathop{\mathrm{Cr}}\limits_{\lambda\in\Lambda} G_{\lambda} \in \mathfrak{W}$</span>, as desired.</p> </li> <li><p>Let <span class="math-container">$G\in\mathfrak{W}$</span>, <span class="math-container">$H\leq G$</span>. If <span class="math-container">$w(x_1,\ldots,x_n)\in W$</span>, and <span class="math-container">$h_1,\ldots,h_n\in H$</span>, then <span class="math-container">$h_i\in G$</span>, so <span class="math-container">$w(h_1,\ldots,h_n)\in W(G)=\{e\}$</span>; hence <span class="math-container">$W(H)=\{e\}$</span>, and thus <span class="math-container">$H\in\mathfrak{W}$</span>. 
Alternatively, the inclusion map <span class="math-container">$\iota\colon H\to G$</span> has <span class="math-container">$\iota(W(H))\subseteq W(G)=\{e\}$</span>, hence <span class="math-container">$W(H)=\{e\}$</span>.</p> </li> <li><p>If <span class="math-container">$G\in\mathfrak{W}$</span> and <span class="math-container">$f\colon G\to K$</span> is surjective, I claim that <span class="math-container">$f(W(G))=W(K)$</span>. Indeed, let <span class="math-container">$w(x_1,\ldots,x_n)\in W$</span>, and let <span class="math-container">$k_1,\ldots,k_n\in K$</span>. Then there exist <span class="math-container">$g_i\in G$</span> such that <span class="math-container">$f(g_i)=k_i$</span>, and hence <span class="math-container">$$\begin{align} w(k_1,\ldots,k_n) &amp;= w(f(g_1),\ldots,f(g_n))\\ &amp; = f(w(g_1,\ldots,g_n))\\ &amp;\in f(W(G)). \end{align}$$</span> Since the words <span class="math-container">$w(k_1,\ldots,k_n)$</span> generate <span class="math-container">$W(K)$</span>, and <span class="math-container">$f(W(G))$</span> is a subgroup, it follows that <span class="math-container">$W(K)\subseteq f(W(G))$</span>. But as we observed above, <span class="math-container">$f(W(G))\subseteq W(K)$</span>, giving equality. Thus, <span class="math-container">$W(K) = f(W(G))=f(\{e\}) = \{e\}$</span>, so <span class="math-container">$W(K)=\{e\}$</span> and hence <span class="math-container">$K\in\mathfrak{W}$</span>, as desired.</p> </li> </ol>
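The key observation $f(w(g_1,\ldots,g_n)) = w(f(g_1),\ldots,f(g_n))$ can be illustrated on a toy example (a Python sketch of mine, written in additive notation: a word such as $x_1^2x_2^3$ becomes the linear expression $2x_1+3x_2$, and reduction mod $3$ is a homomorphism $\mathbb Z/6\to\mathbb Z/3$ because $3 \mid 6$):

```python
# A "word" w(x1, x2) = 2*x1 + 3*x2 in additive notation
# (corresponding to the group word x1^2 x2^3).
def w(x1, x2, mod):
    return (2 * x1 + 3 * x2) % mod

def f(g):          # the homomorphism Z/6 -> Z/3 (reduction mod 3)
    return g % 3

# f commutes with evaluating the word, for every choice of arguments
for g1 in range(6):
    for g2 in range(6):
        assert f(w(g1, g2, 6)) == w(f(g1), f(g2), 3)
```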
4,244,966
<p>I have the ODE <span class="math-container">$y^2(1+y'^2)=4$</span> to solve this I used the substitution <span class="math-container">$y'=p$</span> <span class="math-container">$$y^2(1+p^2)=4$$</span> <span class="math-container">$$2y(1+p^2)dy+2py^2dp=0$$</span> <span class="math-container">$$(p^2+1)dy+py\;dp=0$$</span> <span class="math-container">$$\frac{dy}y+\frac{p}{p^2+1}dp=0$$</span> <span class="math-container">$$\ln|y|+\frac12\ln|p^2+1|=\ln|c|$$</span> <span class="math-container">$$y\sqrt{p^2+1}=c$$</span> Using <span class="math-container">$p^2+1=\frac4{y^2}$</span>, I get <span class="math-container">$2=c$</span> ! I can't find my mistake.</p>
Alessio K
702,692
<p>You didn't make a mistake, but you have arrived back at the original equation. Note that <span class="math-container">$c$</span> can also be <span class="math-container">$-2$</span>.</p> <p><span class="math-container">$$y\sqrt{p^2+1}=\pm2\implies y^2(1+p^2)=4$$</span></p> <p>Instead, note that the differential equation is separable. It can be written as</p> <p><span class="math-container">$$y'=\frac{\sqrt{4-y^2}}{y}\quad\text{or}\quad y'=-\frac{\sqrt{4-y^2}}{y}$$</span></p>
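Separating variables in either branch leads to the family of circles $y^2+(x-c)^2=4$. A quick numeric check (a Python sketch; the center $c=1$ and the sample points are arbitrary choices of mine) that such a branch satisfies the original equation $y^2(1+y'^2)=4$:

```python
from math import sqrt

c = 1.0  # arbitrary constant of integration

def y(x):
    # upper branch of the circle y^2 + (x - c)^2 = 4
    return sqrt(4 - (x - c) ** 2)

def dydx(x, h=1e-6):
    # central finite-difference approximation of y'
    return (y(x + h) - y(x - h)) / (2 * h)

for x in [1.5, 2.0, 2.5]:
    assert abs(y(x) ** 2 * (1 + dydx(x) ** 2) - 4) < 1e-6
```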
567,204
<p>I'm currently studying CS, and as I didn't do maths A-level I'm finding the module particularly difficult. We've now changed topics and lecturer, going onto discrete maths, and I'm refusing to fall behind :P. So, I'm going to post regular/daily questions, just to make sure I have an understanding.</p> <p>Hopefully some of you guys have the time to give detailed answers to give me some sort of foundation. </p> <p>Question - Range of a Function</p> <p>Let $X = \{a,b,c,d\}$ and $Y = \{1,2,3,4,5\}$ and define $f:X\to Y$ by $f(a) =1$, $f(b) =2$, $f(c) = 5$, $f(d) = 2$. </p> <p>Find the domain, codomain and range. If someone could explain this question in detail so I can do some revision on it, I'd be grateful. Thanks</p>
gt6989b
16,192
<p><strong>Domain</strong> is the set on which valid inputs to $f$ are defined. Since $f$ is defined on $a,b,c,d$, the domain is the entire $X$.</p> <p><strong>Range</strong> is the set to which $f$ maps the inputs -- in other words, all possible outputs of $f$. Here, $\{1,2,5\}$. Sometimes the range is called the <strong>image</strong>.</p> <p><strong>Codomain</strong> is the bigger set in which every output must lie; here it is $Y$.</p> <p>For another example consider $f:\mathbb{R} \to \mathbb{R}$ given by $f(x) = 1/x^2$.</p> <ul> <li>The domain is all possible inputs, so $0$ is not a part of it since $1/0$ is undefined. So the domain is all non-zero reals.</li> <li>The codomain is where $f$ maps; here it is defined as $\mathbb{R}$.</li> <li>The image or range of $f$ is the set of actual outputs of $f$; here, all positive reals.</li> </ul>
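Since the function in the question is given by a finite table of values, the three sets can be read off directly (a Python sketch representing $f$ as a dictionary):

```python
# f: X -> Y given by the table in the question
f = {'a': 1, 'b': 2, 'c': 5, 'd': 2}

domain = set(f)               # X, the valid inputs
codomain = {1, 2, 3, 4, 5}    # Y, fixed by the definition of f
rng = set(f.values())         # the values actually hit

print(domain, codomain, rng)
assert rng <= codomain        # the range is always a subset of the codomain
```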
624,002
<p>Determine whether $\mathbb{Z}\times \mathbb{Z}$ and $\mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}$ are isomorphic groups or not.</p> <p>pf) Suppose that these are isomorphic. Note that $\mathbb{Z}\times \mathbb{Z}$ is a subgroup of $\mathbb{Z}\times \mathbb{Z}$ and $\mathbb{Z}\times \mathbb{Z}\times\left \{ 0 \right \}$ is a subgroup of $\mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}$. Since $\mathbb{Z}\times \mathbb{Z}$ and $\mathbb{Z}\times \mathbb{Z}\times\left \{ 0 \right \}$ are isomorphic, $\mathbb{Z}\times \mathbb{Z}/\mathbb{Z}\times \mathbb{Z}$ and $\mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}/\mathbb{Z}\times \mathbb{Z}\times \left \{ 0 \right \}$ are isomorphic. But the first one is isomorphic to the trivial group and the second one is isomorphic to $\mathbb{Z}$. It is a contradiction.</p> <p>Is my proof right? If not, is there another proof?</p>
Mikasa
8,581
<p>We know that these two groups are <em>free abelian</em>, and the basis sets for $\mathbb Z\oplus\mathbb Z$ and $\mathbb Z\oplus\mathbb Z\oplus\mathbb Z$ don't have the <strong>same cardinal number</strong>, so according to this <a href="http://en.wikipedia.org/wiki/Free_group#Facts%20and%20theorems">Theorem 3.</a> or <a href="http://librarum.org/book/1636/336">Theorem 10.14</a> the groups are not isomorphic. </p>
2,811,155
<p>I learned recently that there are mathematical objects that can be proven to exist, but also that can be proven to be impossible to "construct". For example see this answer on MSE:<br> <a href="https://math.stackexchange.com/questions/2808804/does-the-existence-of-a-mathematical-object-imply-that-it-is-possible-to-constru/2808837#2808837">Does the existence of a mathematical object imply that it is possible to construct the object?</a></p> <p>Now my question is then, what does existence of an object really mean, if it is impossible to "find" it? What does it mean when we say that a mathematical object exists?</p> <hr> <p>Because the abstract nature of this question, and to make sure I understand myself what I'm asking, here is a more specific example.</p> <p>Say that if have shown that there exists a real number $x$ that satisfies some property. I always assumed that this means that it possible to find this number $x$ in the set of real numbers. It may be hard to describe this number, but I would assume it is at least possible, to construct a number $x$ that is provable a real number that satisfies the property.</p> <p>But if that does not have to be the case, if it is impossible to find the number $x$ that satisfies this property, what does this then mean? I just can't get my mind around this. Does that mean that there is some real number out there, but uncatchable somehow by the nature of its existence?</p>
Mikhail Katz
72,694
<p>Concerning your question:</p> <blockquote> <p>what does existence of an object really mean, if it is impossible to "find" it? What does it mean when we say that a mathematical object exists?</p> </blockquote> <p>I would note that the underlying issue indeed centers on a concern with <em>meaning</em> as your wording suggests. How is it meaningful to talk about entities that are impossible to "find" as you put it? There are at least three approaches to this issue:</p> <p>(a) <strong><em>Platonists</em></strong> believe that the meaning is given to such entities through their objective existence but in an abstract realm. Adherents of this belief would suspect those who don't share their faith as being "platonists on weekdays, formalists on the weekend".</p> <p>(b) <strong><em>Formalists</em></strong> may question the coherence of the issue of <em>meaning</em> to begin with, and argue that one can only talk about <em>mathematical meaning</em> in a suitable formal framework (this is a bit of a dodge since one has yet to explain what "mathematical meaning" <em>means</em>).</p> <p>(c) An increasingly popular approach that goes back at least to <a href="https://en.wikipedia.org/wiki/Paul_Benacerraf" rel="nofollow noreferrer">Benacerraf</a> and <a href="https://en.wikipedia.org/wiki/Willard_Van_Orman_Quine" rel="nofollow noreferrer">Quine</a> is to distinguish between <strong><em>procedure</em></strong> and <strong><em>ontology</em></strong> and argue that traditional questions like "what is a number?" focus on <em>ontological</em> issues that are less fruitful than <em>procedural</em> ones, whereas the real issues of <em>meaning</em> lie in the analysis of the procedures mathematicians employ and their usefulness in applications. 
</p> <p>It seems to me that the <a href="https://math.stackexchange.com/a/2811628/72694">other answer</a> is a combination of (b) and (c) but it may be useful to separate them because the issues of meaning are more clearly addressed in (c) than in (b).</p> <p>In (c), <em>applications</em> are understood in a broad sense that certainly includes physics but also other fields within mathematics itself. From this point of view, Felix Klein's unifying contributions to fields ranging from analysis to geometry to group theory would rate higher than Cantor/Weierstrass contributions to the foundations focusing on the nature of entities like "number" and "point". Some related discussions can be found in <a href="http://dx.doi.org/10.1007/s10699-016-9498-3" rel="nofollow noreferrer">this 2017 publication in <em>Foundations of Science</em></a> and <a href="http://dx.doi.org/10.15330/ms.48.2.189-219" rel="nofollow noreferrer">this 2017 publication in <em>Mat. Stud.</em></a></p>
1,652,297
<p><strong>Thm</strong> Let $V$ and $W$ be vector spaces and let $T:V \to W$ be linear. </p> <p>If $\beta = \{ v_1,\dots ,v_n \}$ is a basis for $V$ then $$ R(T)=\text{span}(T(\beta))=\text{span}(\{ T(v_1),\dots,T(v_n) \} ) $$</p> <hr> <p><strong>Dimension Theorem</strong> </p> <p>Let $V$ and $W$ be vector spaces and let $T:V \to W$ be linear. </p> <p>If $V$ is finite dimensional then $\text{Nullity}(T)+\text{Rank}(T)=\text{dim}(V)$</p> <hr> <p>My impression is that $\text{Dim}(V)=\text{Rank}(T)$, since if $\text{dim}(V)=2$ there are $2$ vectors in the basis, and $T$ of that basis will make a basis for the image of $T$, so $\text{Rank}(T)=\text{dim}(V)$. </p> <p>I asked the teacher when class was over but was told that was not the case. I did get an answer from the professor, but I do not know whether he did not understand my question or I did not understand his answer.</p>
Alex Mathers
227,652
<p>Do you feel you have intuition for what rank really <em>means</em>? The rank is the dimension of the <em>image</em> of the transformation, so if you lose any basis vectors in your transformation, the two will not be equal.</p> <p>In particular here's a simple counter example: Let $V=\mathbb{R}^2$ and $T:V\to V$ be the zero map. That is, $T(v)=0$ for all $v\in V$. Then $\dim(V)=2$ but $\text{rank}(T)=0$.</p>
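A quick numerical illustration in the same spirit, using a hypothetical pure-Python rank helper: the zero map on $\mathbb{R}^2$ has rank $0$, and $T(x,y)=(x+y,x+y)$ has rank $1$ and nullity $1$, so rank and nullity always sum to $\dim(V)=2$ even when $\text{rank}(T)\neq\dim(V)$.

```python
def rank(rows, eps=1e-9):
    """Rank of a small real matrix via Gaussian elimination."""
    m = [list(r) for r in rows]
    n_rows, n_cols = len(m), len(m[0])
    rk = 0
    for col in range(n_cols):
        # find a pivot row for this column among the unused rows
        pivot = next((r for r in range(rk, n_rows) if abs(m[r][col]) > eps), None)
        if pivot is None:
            continue
        m[rk], m[pivot] = m[pivot], m[rk]
        for r in range(n_rows):
            if r != rk and abs(m[r][col]) > eps:
                f = m[r][col] / m[rk][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rk])]
        rk += 1
    return rk

zero_map = [[0, 0], [0, 0]]   # T(v) = 0 on R^2
T = [[1, 1], [1, 1]]          # T(x, y) = (x + y, x + y)

assert rank(zero_map) == 0    # rank 0, but dim(V) = 2
assert rank(T) == 1           # rank 1, nullity 1: 1 + 1 = dim(V)
```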
2,829,990
<p>I want to calculate</p> <p><span class="math-container">$$ \lim_{n \to \infty} \int_{(0,1)^n} \frac{n}{x_1 + \cdots + x_n} \, dx_1 \cdots dx_n $$</span></p> <p>I met this while studying the Lebesgue integral, but I don't know how to approach it at all. I would really appreciate it if you could help me!</p> <p>[Add]</p> <p>Thanks to everybody who gave me comments, I can now see the following:</p> <p><span class="math-container">\begin{align*} \lim_{n \to \infty} \int_{(0,1)^n} \frac{n}{x_1 + \cdots + x_n} dx_1 \cdots dx_n &amp;=\lim_{n \to \infty} n\int_0^\infty \bigg(\frac{1-e^{-t}}{t}\bigg)^n\,dt \end{align*}</span></p> <p>and</p> <p><span class="math-container">\begin{align*} \int_0^\infty \bigg(\frac{1-e^{-t}}{t}\bigg)^n\,dt &amp;=\frac{n}{(n-1)!}\sum_{i=0}^{n-1}{ n-1 \choose i} (-1)^{n-1-i} (i+1)^{n-2}\log(i+1) \end{align*}</span></p> <p>and</p> <p><span class="math-container">\begin{align*} \int_0^\infty \bigg(\frac{1-e^{-t}}{t}\bigg)^n\,dt &amp;=\int_0^\infty \frac{z^{n-1}}{(n-1)!}\, \mathrm{Beta}(z,n+1)\,dz\\ &amp;=n\,\int_0^\infty \frac{z^{n-1}}{z(z+1)\cdots(z+n)}\,dz \end{align*}</span></p> <p>But I can't evaluate these integrals or the limit. Please let me know if you find out.</p>
Felix Marin
85,343
<p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\on}[1]{\operatorname{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span></p> <hr> <span class="math-container">\begin{align} &amp;\bbox[5px,#ffd]{\lim_{n \to \infty}\ \pars{n\int_{0}^{1}\cdots\int_{0}^{1} {\dd x_{1}\ldots\dd x_{n} \over x_{1} + \cdots + x_{n}}}} \\[5mm] = &amp; \ \lim_{n \to \infty}\ \braces{n\int_{0}^{1}\cdots\int_{0}^{1} \bracks{\int_{0}^{\infty} \expo{-\pars{x_{1}\ +\ \cdots\ +\ x_{n}}t}\,\,\,\,\, \dd t}\dd x_{1}\ldots\dd x_{n}} \\[5mm] = &amp; \ \lim_{n \to \infty}\ \bracks{n\int_{0}^{\infty}\pars{\int_{0}^{1} \expo{-tx}\,\,\dd x}^{n}\,\dd t} \\[5mm] = &amp; \ \lim_{n \to \infty}\ \bracks{n\int_{0}^{\infty} \pars{1 - \expo{-t} \over t}^{n}\,\dd t} \\[5mm] = &amp; \ \lim_{n \to \infty}\ \bracks{n\int_{0}^{\infty} \exp\pars{n\ln\pars{1 - \expo{-t} \over t}}\,\dd t} \\[5mm] = &amp; \ \lim_{n \to \infty}\ \bracks{n\int_{0}^{\infty}\expo{-nt/2}\,\,\dd t} = \bbx{\color{#44f}{2}} \quad Laplace's\ Method \end{align}</span>
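The value $2$ agrees with the law-of-large-numbers heuristic $n/(x_1+\cdots+x_n)\approx 1/\mathsf{E}[x]=2$ for i.i.d. uniforms; a seeded Monte Carlo sanity check of the limit (not part of the derivation above):

```python
import random

random.seed(0)

def estimate(n, trials):
    """Monte Carlo estimate of E[ n / (X_1 + ... + X_n) ], X_i ~ U(0,1)."""
    total = 0.0
    for _ in range(trials):
        total += n / sum(random.random() for _ in range(n))
    return total / trials

est = estimate(n=1000, trials=2000)
assert abs(est - 2.0) < 0.05   # the limit computed above is 2
```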
289,708
<p>The <a href="https://en.wikipedia.org/wiki/Catalan_number" rel="noreferrer">Catalan numbers</a> <span class="math-container">$C_n$</span> count both </p> <ol> <li>the Dyck paths of length <span class="math-container">$2n$</span>, and </li> <li>the ways to associate <span class="math-container">$n$</span> repeated applications of a binary operation. </li> </ol> <p>We call the latter <em>magma expressions</em>; we will explain below.</p> <p><strong>Dyck paths, and their lattice structure</strong></p> <p>A <em>Dyck path of length <span class="math-container">$2n$</span></em> is a sequence of <span class="math-container">$n$</span> up-and-right strokes and <span class="math-container">$n$</span> down-and-right strokes, all having equal length, such that the sequence begins and ends on the same horizontal line and never passes below it. A picture of the five length-6 Dyck paths is shown here:</p> <pre><code>A: B: C: D: E: /\ / \ /\/\ /\ /\ / \ / \ / \/\ /\/ \ /\/\/\ </code></pre> <p>There is an order relation on the set of length-<span class="math-container">$2n$</span> Dyck paths: <span class="math-container">$P\leq Q$</span> if <span class="math-container">$P$</span> fits completely under <span class="math-container">$Q$</span>; I'll call it the <em>height order</em>, though in the title of the post, I called it "Dyck order". I've been told it should be called the Stanley lattice order. 
For <span class="math-container">$n=3$</span> it gives the following lattice:</p> <pre><code> A | B / \ C D \ / E </code></pre> <p>For any <span class="math-container">$n$</span>, one obtains a poset structure on the set of length-<span class="math-container">$2n$</span> Dyck paths using height order, and in fact this poset is always a Heyting algebra (it represents the subobject classifier for the topos of presheaves on the twisted arrow category of <span class="math-container">$\mathbb{N}$</span>, the free monoid on one generator; see <a href="https://mathoverflow.net/questions/272394/reference-request-heyting-algebra-structure-on-catalan-numbers">this mathoverflow question</a>).</p> <p><strong>Magma expressions and the "exponential evaluation order"</strong></p> <p>A set with a binary operation, say •, is called a <a href="https://ncatlab.org/nlab/show/magma" rel="noreferrer">magma</a>. By a <em>magma expression of length <span class="math-container">$n$</span></em>, we mean a way to associate <span class="math-container">$n$</span> repeated applications of the operation. Here are the five magma expressions of length 3:</p> <pre><code>A: B: C: D: E: a•(b•(c•d)) a•((b•c)•d) (a•b)•(c•d) (a•(b•c))•d ((a•b)•c)•d </code></pre> <p>It is well-known that the set of length-<span class="math-container">$n$</span> magma expressions has the same cardinality as the set of length-<span class="math-container">$2n$</span> Dyck paths: they are representations of the <span class="math-container">$n$</span>th Catalan number.</p> <p>An <a href="http://www.labri.fr/perso/courcell/Textes/BC-Raoult%281980%29.pdf" rel="noreferrer">ordered magma</a> is a magma whose underlying set is equipped with a partial order, and whose operation preserves the order in both variables. 
Given an ordered magma <span class="math-container">$(A,$</span>•<span class="math-container">$,\leq)$</span>, and magma expressions <span class="math-container">$E(a_1,\ldots,a_n)$</span> and <span class="math-container">$F(a_1,\ldots,a_n)$</span>, write <span class="math-container">$E\leq F$</span> if the inequality holds for every choice of <span class="math-container">$a_1,\ldots,a_n\in A$</span>. Call this the <em>evaluation order</em>.</p> <p>Let <span class="math-container">$P=\mathbb{N}_{\geq 2}$</span> be the set of natural numbers with cardinality at least 2, the <em>logarithmically positive</em> natural numbers. Equipped with the operation given by exponentiation, <span class="math-container">$c$</span>•<span class="math-container">$d\:=c^d$</span>, we obtain an ordered magma, using the usual <span class="math-container">$\leq$</span>-order. Indeed, if <span class="math-container">$2\leq a\leq b$</span> and <span class="math-container">$2\leq c\leq d$</span> then <span class="math-container">$a^c\leq b^d$</span>.</p> <p><strong>Question:</strong> Is the exponential evaluation order on length-<span class="math-container">$n$</span> expressions in the ordered magma <span class="math-container">$(P,$</span>^<span class="math-container">$,\leq)$</span> isomorphic to the height order on length-<span class="math-container">$2n$</span> Dyck paths?</p> <p>I know of no <em>a priori</em> reason to think the answer to the above question should be affirmative. A categorical approach might be to think of the elements of <span class="math-container">$P$</span> as sets with two special elements, and use them to define injective functions between Hom-sets, e.g. a map <span class="math-container">$$\mathsf{Hom}(c,\mathsf{Hom}(b,a))\to\mathsf{Hom}(\mathsf{Hom}(c,b),a).$$</span> However, while I can define the above map, I'm not sure how to generalize it. 
And the converse, that being comparable in the exponential evaluation order means that one can define a single injective map between hom-sets, is not obvious to me at all.</p> <p>However, despite the fact that I don't know where to look for a proof, I do have evidence to present in favor of an affirmative answer to the above question.</p> <p><strong>Evidence that the orders agree</strong></p> <p>It is easy to check that for <span class="math-container">$n=3$</span>, these two orders do agree:</p> <pre><code> a^(b^(c^d)) A := A(a,b,c,d) | | a^((b^c)^d) B / \ / \ (a^b)^(c^d) (a^(b^c))^d C D \ / \ / ((a^b)^c)^d E </code></pre> <p>This can be seen by taking logs of each expression. (To see that C and D are incomparable: use a=b=c=2 and d=large to obtain C>D; and use a=b=d=2 and c=large to obtain D>C.) Thus the evaluation order on length-3 expressions in <span class="math-container">$(P,$</span>^<span class="math-container">$,\leq)$</span> agrees with the height order on length <span class="math-container">$6$</span> Dyck paths.</p> <p>(Note that the answer to the question would be negative if we were to use <span class="math-container">$\mathbb{N}$</span> or <span class="math-container">$\mathbb{N}_{\geq 1}$</span> rather than <span class="math-container">$P=\mathbb{N}_{\geq2}$</span> as in the stated question. 
Indeed, with <span class="math-container">$a=c=d=2$</span> and <span class="math-container">$b=1$</span>, we would have <span class="math-container">$A(a,b,c,d)=2\leq 16=E(a,b,c,d)$</span>.)</p> <p>It is even easier to see that the orders agree in the case of <span class="math-container">$n=0,1$</span>, each of which has only one element, and the case of <span class="math-container">$n=2$</span>, where the order <span class="math-container">$(a^b)^c\leq a^{(b^c)}$</span> not-too-surprisingly matches that of length-4 Dyck paths:</p> <pre><code> /\ /\/\ ≤ / \ </code></pre> <p>Indeed, the order-isomorphism for <span class="math-container">$n=2$</span> is not too surprising because there are only two possible partial orders on a set with two elements. However, according to <a href="https://oeis.org/A000112" rel="noreferrer">the OEIS</a>, there are 1338193159771 different partial orders on a set with <span class="math-container">$C_4=14$</span> elements. So it would certainly be surprising if the evaluation order for length-4 expressions in <span class="math-container">$(P,$</span>^<span class="math-container">$,\leq)$</span> were to match the height order for length-8 Dyck paths. But after some tedious calculations, I have convinced myself that these two orders in fact <em>do agree</em> for <span class="math-container">$n=4$</span>! Of course, this could just be a coincidence, but it is certainly a striking one.</p> <p><strong>Thoughts?</strong></p>
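The two incomparability witnesses above can be checked with exact integer arithmetic: writing $C=(a^b)^{(c^d)}=a^{\,b\cdot c^d}$ and $D=(a^{(b^c)})^d=a^{\,b^c\cdot d}$, comparing $C$ and $D$ for a common base $a\ge 2$ reduces to comparing integer exponents.

```python
def exp_C(b, c, d):
    # C = (a^b)^(c^d) = a^(b * c^d): compare via the exponent of a
    return b * c**d

def exp_D(b, c, d):
    # D = (a^(b^c))^d = a^(b^c * d)
    return b**c * d

# witness from the text: a=b=c=2, d large  =>  C > D
assert exp_C(2, 2, 30) > exp_D(2, 2, 30)
# witness from the text: a=b=d=2, c large  =>  D > C
assert exp_D(2, 30, 2) > exp_C(2, 30, 2)
```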
Martin Rubey
3,032
<p>(This is what I have written just before my wife killed the internet connection 12 hours ago before she went to bed. I only show that $D\leq E \Rightarrow A\leq B$ where $D$ and $E$ are Dyck paths and $A$ and $B$ the corresponding binary trees. I didn't look at Timothy's answer yet, but I am guessing it's the same.)</p> <p>Indeed, the bijection between (ordered, full) binary trees (with leaves labelled $a,b,c,\dots$ from left to right) and Dyck paths (traversing the binary tree starting at the root, first traversing the right subtree, and writing an up step for a right branch and a down step for a left branch) induces an order preserving map between the Stanley lattice and the exponential evaluation order.</p> <p>A path $D$ is covered by a path $E$ in the Stanley lattice, if and only if a peak in $D$ is converted to a valley in $E$, all other steps remaining the same.</p> <p>In terms of binary trees, a peak in the Dyck path corresponds to a pair of siblings where the right sibling $x$ does not have further children and there is a left branch somewhere after $x$, in the order the tree is traversed.</p> <p>To see what the covering relation in the Stanley lattice corresponds to, we first do an easy special case:</p> <p>Suppose that, in the binary tree $B$ corresponding to the Dyck path $E$, the parent $y$ of $x$ is a right child.</p> <p>Let $L_1$ be the subtree rooted at the sibling of $x$, and let $L_2$ be the subtree rooted at the sibling of $y$. 
The magma expression corresponding to the subtree rooted at the parent of $y$ is $L_2 (L_1 x)$.</p> <p>Then the binary tree $A$ corresponding to $D$ is obtained from $B$ by replacing the subtree rooted at the parent of $y$ with the binary tree corresponding to the magma expression $(L_2 L_1) x$, which is smaller than $L_2 (L_1 x)$.</p> <p>The general case is only superficially more complicated:</p> <p>Suppose that, in the binary tree $B$ corresponding to the Dyck path $E$, there is a (maximal) path of $k$ left branches from a node $y$ to the parent of $x$, with (right) siblings having subtrees $R_1,R_2,\dots,R_k$. Let $L_1$ be the subtree rooted at the (left) sibling of $x$ and $L_2$ be the subtree rooted at the (left) sibling of $y$. The magma expression corresponding to the subtree rooted at the parent of $y$ is $$L_2(\cdots((L_1 x)R_1)\cdots R_k).$$</p> <p>Then the binary tree $A$ corresponding to $D$ is obtained from $B$ by replacing the subtree rooted at the parent of $y$ with the binary tree corresponding to the magma expression $$(L_2 L_1)(x (R_1(\cdots R_k))).$$</p> <p>Setting $R=R_1\cdots R_k$, it remains to check that $L_2^{(L_1^{xR})} \geq (L_2^{L_1})^{(x^R)}$.</p>
172,617
<p>I need to plot two datasets on the same plot. The datasets have the same x-range. However, I want to show only parts of the plot. </p> <p>A minimal example would be</p> <pre><code> h = π/100.; i1 = ListLinePlot[Table[{i*h, Sin[i*h]}, {i, 0, 100}], PlotStyle -&gt; Red]; i2 = ListLinePlot[Table[{i*h, Cos[i*h]}, {i, 0, 100}], PlotStyle -&gt; Blue]; l1 = Graphics[{Black, Dashed, Line[{{π/2, -1}, {π/2, 1}}]}]; Show[{i1, i2, l1}, PlotRange -&gt; All] </code></pre> <p>The output is the following</p> <p><a href="https://i.stack.imgur.com/JsUxu.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/JsUxu.jpg" alt="enter image description here"></a></p> <p>But, the plot I want is </p> <p><a href="https://i.stack.imgur.com/rJ3wG.png" rel="noreferrer"><img src="https://i.stack.imgur.com/rJ3wG.png" alt="enter image description here"></a></p> <p>Can anyone please help? Thanks in advance.</p>
kglr
125
<pre><code>h = π / 100; {d1, d2} = Table[{i h, #[i h]}, {i, 0, 100}] &amp; /@ {Sin, Cos}; {ms1, ms2} = {{Opacity[0], Red}, {Blue, Opacity[0]}}; {i1, i2} = ListLinePlot[#[[1]], PlotStyle -&gt; Thick, Mesh -&gt; {{π/2}}, MeshStyle -&gt; None, MeshShading -&gt; #[[2]]] &amp; /@ {{d1, ms1}, {d2, ms2}}; Show[{i1, i2}, GridLines -&gt; {{{π / 2, Directive[Thick, Dashed]}}, None}, PlotRange -&gt; All] </code></pre> <p><a href="https://i.stack.imgur.com/ruXWi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ruXWi.png" alt="enter image description here"></a></p>
195,790
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="https://math.stackexchange.com/questions/19796/name-of-this-identity-int-e-alpha-x-cos-beta-x-space-dx-frace-al">Name of this identity? $\int e^{\alpha x}\cos(\beta x) \space dx = \frac{e^{\alpha x} (\alpha \cos(\beta x)+\beta \sin(\beta x))}{\alpha^2+\beta^2}$</a> </p> </blockquote> <p>I might have missed a technique from Calc 2, but this integral is holding me up. When I checked with WolframAlpha, it used a formula I didn't recognise.</p> <blockquote> <p>How do I solve $\int e^{-t/2}\sin(3t) dt$?</p> </blockquote> <p>The formula WolframAlpha uses is this:</p> <p>$$\int e^{\alpha t}\sin(\beta t)dt=\frac{e^{\alpha t}(-\beta \cos(\beta t)+\alpha \sin(\beta t))}{\alpha ^2+\beta ^2}$$</p> <p>I don't know where this formula comes from.</p>
davidlowryduda
9,754
<p>I tried to naively find common denominators at every step. Can you finish from here?</p> <p>$$\dfrac{\frac{2}{x^2 + 2xh + h^2} - \frac{2}{x^2}}{h} = \dfrac{\frac{2x^2 - 2(x^2 + 2xh + h^2)}{x^2(x^2 + 2xh + h^2)}}{h} = \dfrac{2x^2 - 2(x^2 + 2xh + h^2)}{hx^2(x^2 + 2xh + h^2)}$$</p>
195,790
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="https://math.stackexchange.com/questions/19796/name-of-this-identity-int-e-alpha-x-cos-beta-x-space-dx-frace-al">Name of this identity? $\int e^{\alpha x}\cos(\beta x) \space dx = \frac{e^{\alpha x} (\alpha \cos(\beta x)+\beta \sin(\beta x))}{\alpha^2+\beta^2}$</a> </p> </blockquote> <p>I might have missed a technique from Calc 2, but this integral is holding me up. When I checked with WolframAlpha, it used a formula I didn't recognise.</p> <blockquote> <p>How do I solve $\int e^{-t/2}\sin(3t) dt$?</p> </blockquote> <p>The formula WolframAlpha uses is this:</p> <p>$$\int e^{\alpha t}\sin(\beta t)dt=\frac{e^{\alpha t}(-\beta \cos(\beta t)+\alpha \sin(\beta t))}{\alpha ^2+\beta ^2}$$</p> <p>I don't know where this formula comes from.</p>
GeoffDS
8,671
<p>The trick is simply to add the two fractions together.</p> <p>$$\lim_{h \to 0} \frac{\frac{2}{(x + h)^2} - \frac{2}{x^2}}{h} = \lim_{h \to 0} \frac{\frac{2[x^2 - (x+h)^2]}{x^2(x + h)^2}}{h} = \lim_{h \to 0} \frac{\frac{2[-2hx - h^2]}{x^2(x + h)^2}}{h} = \lim_{h \to 0} \frac{2[-2x - h]}{x^2(x + h)^2} = \frac{2(-2x)}{x^4} = \frac{-4}{x^3}$$</p>
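The computation can be sanity-checked numerically: the difference quotient of $f(x)=2/x^2$ should approach $-4/x^3$ as $h\to 0$.

```python
def f(x):
    return 2 / x**2

def diff_quotient(x, h):
    return (f(x + h) - f(x)) / h

x = 2.0
exact = -4 / x**3          # = -0.5, the limit computed above
for h in (1e-3, 1e-5, 1e-7):
    # forward-difference error is O(h), plus a little float noise
    assert abs(diff_quotient(x, h) - exact) < 10 * h + 1e-6
```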
3,577,021
<p>Let $V$ be an inner product space over $F$ (either $\mathbb{R}$ or $\mathbb{C}$), let $T$ be a linear operator on $V$, and evaluate $T^{*}$ at the given vector in $V$.</p> <p><span class="math-container">$V=R^2, T(a,b)=(2a+b,a-3b), x=(3,5)$</span></p> <p>I know the matrix of $T^{*}$ (the adjoint) with respect to an orthonormal basis is the conjugate transpose of the matrix of $T$. But how am I supposed to approach this? I need some general idea.</p>
Graham Kemp
135,106
<p>By definition of conditioning over an event: <span class="math-container">$$\mathsf E(Y\mid Y&gt;1-X)=\mathsf E(Y\mathbf 1_{Y&gt;1-X})\div \mathsf E(\mathbf 1_{Y&gt;1-X})$$</span></p> <p><em>Always</em> use this before attempting to use the Law of Total Expectation. </p> <p><span class="math-container">$$\begin{align}\mathsf E(Y\mid Y&gt;1-X)&amp;=\dfrac{\mathsf E(\mathsf E(Y~\mathbf 1_{Y&gt;1-X}\mid X))}{\mathsf E(\mathsf E(\mathbf 1_{Y&gt;1-X}\mid X))}\\[2ex]&amp;=\dfrac{\int_0^1\int_{1-x}^1 y\,\mathrm d y\,\mathrm d x}{\int_0^1\int_{1-x}^1 1\,\mathrm d y\,\mathrm d x}&amp;&amp;\neq \int_0^1\dfrac{\int_{1-x}^1 y\,\mathrm d y}{\int_{1-x}^1 1\,\mathrm d y}\,\mathrm d x\end{align}$$</span></p> <hr> <p>Conditioning over events does not mix well with conditioning over sigma-algebrae. &nbsp; They are related concepts with similar symbolic representation, but they are not compatible.</p>
4,099,804
<p>I need to characterize every finitely generated abelian group G that has the following property: <span class="math-container">$$\frac{G}{S} \text{ is cyclic for every } \lbrace0\rbrace \lneq S\leq G$$</span> Given the problems before this one, I believe I am supposed to use the structure theorem figure out the underlying structure of the decomposition (for example that the decomposition has only one or two primes and such). I know the question in <a href="https://math.stackexchange.com/questions/4097319/f-g-abelian-group-so-that-every-quotient-is-cyclic">this post</a> is very similar, but it is not precisely the same and the slight difference in the conditions on the subgroups makes the argument non-applicable, unfortunately.</p>
Andrea Marino
177,070
<p>First question: how many factors can appear at most in the decomposition?</p> <p>Second question: in every case you deduced above, which powers can appear? Can you have for example a factor <span class="math-container">$\mathbb{Z}/2^3\mathbb{Z}$</span>?</p> <p>Good work!</p>
608,909
<p>Is it possible to solve analytically the following equation? $$\left(x+\frac{1}{x}\right)^{\frac{1}{x}}=A$$ with $A\gt 1$? I tried to transform it in the following: $\frac{1}{x}\ln\left(x+\frac{1}{x}\right)=B$ with $B=\ln(A)$, but it seems to be still unsolvable. Is there some trick to solve it? Thanks.</p>
Stefan Hamcke
41,672
<p>I just learned that the fact that both adjunction spaces are homotopy equivalent to each other can be seen as an immediate consequence of a general property:</p> <blockquote> <p>Let $h\mathbf{Top}^B$ denote the homotopy category under $B$, the quotient category of $(B\downarrow\mathbf{Top})$ where we identify $f\sim g:i\to j$ if there is a homotopy $H:f\simeq g$ under $B$, that is $H(i\times 1)=j$. Let $\pi B^A$ denote the track groupoid whose objects are maps $A→B$ and whose arrows are homotopies $H:f\simeq g$, where two homotopies $H$ and $K$ are identified if there's a continuous deformation between them which leaves $f$ and $g$ fixed.<br> The statement is that if $j:A\to X$ is a cofibration, there exists a contravariant functor $\beta$ from the track groupoid $\pi B^A$ to the category $h\mathbf{Cof}^B$, the full subcategory of $h\mathbf{Top}^B$ whose objects are cofibrations. This $\beta$ assigns to an $f:A→B$ the cofibration $j_f:B\to X\cup_f B$, and to a morphism $[\phi]:f→g$ the homotopy class $[k]$ of maps $j_g\to j_f$, where $k$ is induced by extending the homotopy $\phi:A→B$ to a homotopy $\Phi:X→X\cup_f B$ and setting $k=\Phi_1\cup j_f$.</p> </blockquote> <p>You can find the proof in tom Dieck's <em>Algebraic Topology</em>, where it is Theorem $5.1.9$.</p> <p>Note that $[k]=\beta[\phi]$ is an isomorphism by functoriality, hence a homotopy equivalence.</p>
608,909
<p>Is it possible to solve analytically the following equation? $$\left(x+\frac{1}{x}\right)^{\frac{1}{x}}=A$$ with $A\gt 1$? I tried to transform it in the following: $\frac{1}{x}\ln\left(x+\frac{1}{x}\right)=B$ with $B=\ln(A)$, but it seems to be still unsolvable. Is there some trick to solve it? Thanks.</p>
Ronnie Brown
28,586
<p>This is also proved in <a href="http://groupoids.org.uk/topgpds.html" rel="nofollow noreferrer">Topology and Groupoids</a> (as it was in the 1968 edition, "Elements of Modern Topology"); this has some pictures of the crucial mapping cylinder construction <span class="math-container">$M(f) \cup X$</span> which, if <span class="math-container">$i: A \to X$</span> is a cofibration, is a useful model of the adjunction space <span class="math-container">$B \cup _f X$</span> for <span class="math-container">$f: A \to B$</span>. Here is a coloured picture of the homotopy as Fig 7.10: </p> <p><img src="https://groupoids.org.uk/images/fig7_10cs.jpg" alt="hom"></p>
2,144,140
<p>We know that intervals $(a,b)$ are open by definition. How do you prove that an arbitrary union of such intervals $(a,b)$ can never give you $[c,d]$?</p>
5xum
112,884
<p>Because for every union of open intervals, let's call it $U$, we have the property:</p> <blockquote> <p>$$\forall x\in U \exists\epsilon&gt;0: (x-\epsilon, x+\epsilon) \subseteq U$$</p> </blockquote> <p>Or, in English,</p> <blockquote> <p>Each element $x$ of $U$ has some neighborhood $(x-\epsilon, x+\epsilon)$ that is completely included in $U$.</p> </blockquote> <p>This property fails for the endpoints $c$ and $d$ of the set $[c,d]$, so $[c,d]$ cannot be a union of open intervals.</p> <hr> <p>The statement above can easily be proven. Let $U$ be some union of intervals, </p> <p>$$U=\bigcup_{i\in I} (a_i, b_i)$$</p> <p>(note: $I$ need not be finite or even countable for this proof to work!).</p> <p>Then, let $x\in U$. By definition of union, there exists some $i\in I$ such that $x\in (a_i, b_i)$. Then, set $\epsilon = \min\{\frac{x-a_i}{2}, \frac{b_i-x}{2}\}$.</p> <p>We can now prove that $(x-\epsilon, x+\epsilon)\subseteq (a_i, b_i)\subseteq U$:</p> <p>Let $y\in (x-\epsilon, x+\epsilon)$. Then, we know:</p> <ol> <li>$$y&gt;x-\epsilon$$</li> <li>$$x-\epsilon \geq x-\frac{x-a_i}{2} =\frac{x}{2} + \frac{a_i}{2} &gt; \frac{a_i}{2} + \frac{a_i}{2} = a_i $$</li> </ol> <p>So we also know that $y&gt;a_i$.</p> <p>Similarly, we can show that $y&lt;b_i$:</p> <ol> <li>$$y&lt; x+\epsilon$$</li> <li>$$x+\epsilon \leq x+\frac{b_i-x}{2} = \frac{x+b_i}{2}&lt;\frac{b_i+b_i}{2}=b_i$$</li> </ol> <p>so $y&lt;b_i$.</p> <p>Together, this means that $y\in (a_i, b_i)$ and, consequently, that $(x-\epsilon, x+\epsilon)\subseteq(a_i, b_i)$.</p>
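The $\epsilon$ from this proof can be computed explicitly; a small Python sketch (the intervals below are just an illustrative choice of $U$):

```python
def epsilon_for(x, intervals):
    """Return an eps > 0 with (x-eps, x+eps) inside some interval, as in the proof."""
    for a, b in intervals:
        if a < x < b:
            return min((x - a) / 2, (b - x) / 2)
    raise ValueError("x is not in the union")

U = [(0, 1), (2, 5), (4, 7)]   # an arbitrary union of open intervals

for x in (0.5, 2.1, 6.9):
    eps = epsilon_for(x, U)
    assert eps > 0
    # the whole neighborhood stays inside one of the intervals, hence inside U
    assert any(a < x - eps and x + eps < b for a, b in U)
```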
1,885,492
<p>Question.</p> <blockquote> <p>Prove that if ${ a }_{ 1 },{ a }_{ 2 },\dots,{ a }_{ n }&gt;0$ then $$ \frac { { a }_{ 1 }+{ a }_{ 2 }+\dots+{ a }_{ n } }{ n } \ge \frac { n }{ \frac { 1 }{ { a }_{ 1 } } +\frac { 1 }{ { a }_{ 2 } } +\dots+\frac { 1 }{ { a }_{ n } } } $$ </p> </blockquote> <p><strong>Proof</strong> $$ \left( { a }_{ 1 }+{ a }_{ 2 }+\dots+{ a }_{ n } \right) \left( \frac { 1 }{ { a }_{ 1 } } +\frac { 1 }{ { a }_{ 2 } } +\dots+\frac { 1 }{ { a }_{ n } } \right) = n + \underset { n\left( n-1 \right) /2\ \text{terms} }{ \underbrace { \left( \frac { { a }_{ 1 } }{ { a }_{ 2 } } +\frac { { a }_{ 2 } }{ { a }_{ 1 } } \right) +\dots+\left( \frac { { a }_{ n-1 } }{ { a }_{ n } } +\frac { { a }_{ n } }{ { a }_{ n-1 } } \right) } } \ge n+2\cdot \frac { n\left( n-1 \right) }{ 2 } ={ n }^{ 2 }$$ In this proof I didn't understand the step "$n\left( n-1 \right)/2$ terms". I mean, how can the number of terms be $n\left( n-1 \right)/2$? Can anybody explain it? Thanks in advance!</p>
Vincenzo Oliva
170,489
<p>As for the first equation, rearranging and squaring both sides we get $$72^2m^2-144m=48r^2,$$ which simplifies to $$108m^2-3m=3m(36m-1)=r^2.$$ Then $r^2=9k^2$ and $m=3n$ for some $k,n$, which yields $$n(108n-1)=k^2.$$ Since the factors of the LHS are coprime, both must be squares: say $n=y^2, 108n-1=x^2;$ therefore, $$x^2-108y^2=-1.$$ But $108$ is divisible by $4$, and <a href="http://mathworld.wolfram.com/PellEquation.html" rel="nofollow">this excludes</a> the existence of any solution $(x,y)$, hence the original equation itself has no solutions.</p>
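A brute-force sanity check of the conclusion (not a proof), consistent with the mod-$4$ obstruction: $108y^2-1\equiv 3\pmod 4$ can never be a perfect square.

```python
from math import isqrt

def is_square(n):
    return n >= 0 and isqrt(n) ** 2 == n

# x^2 - 108 y^2 = -1  <=>  x^2 = 108 y^2 - 1, so search for square values
assert not any(is_square(108 * y * y - 1) for y in range(1, 100_000))
```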
1,298,730
<p>Find functions $f$ and $\alpha$ such that the improper Riemann-Stieltjes integral $\int_1^{\infty}|f|d\alpha$ converges, but $\int_1^{\infty}fd\alpha$ does not exist?</p> <p>I'm really not sure how to start this problem, and I haven't been able to find another post on here that has considered this.</p> <p>EDIT: I know that $\alpha$ needs to be some function which is not increasing or differentiable on $[1,\infty )$ since then absolute convergence implies conditional convergence</p> <p>Thank you,</p>
zhw.
228,045
<p>I see this is the same basic idea as another answer, but here it is anyway: Set $\alpha (x) = -\cos x.$ Then $d\alpha (x) = \sin x \,dx.$ Define $f(t)= 1/t$ on $(0,\pi), (2\pi,3\pi), \dots, f(t) = -1/t$ on $ (\pi,2\pi),(3\pi,4\pi), \dots$ Then</p> <p>$$\int_0^\infty |f(t)|\,d\alpha(t) = \int_0^\infty \frac{\sin t}{t}\,dt,$$</p> <p>which converges. But </p> <p>$$\int_0^\infty f(t)\,d\alpha(t) = \int_0^\infty \frac{|\sin t|}{t}\,dt = \infty.$$</p>
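<p>For illustration (an editorial sketch using a crude midpoint rule, not part of the original answer): the partial integrals of $\sin t/t$ settle near $\pi/2$, while those of $|\sin t|/t$ keep growing like a logarithm.</p>

```python
import math

def midpoint(f, a, b, n=200000):
    """Composite midpoint rule for a quick numerical integral."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

X = 200 * math.pi
conv = midpoint(lambda t: math.sin(t) / t, 0, X)      # ~ pi/2
div = midpoint(lambda t: abs(math.sin(t)) / t, 0, X)  # keeps growing with X
print(conv, math.pi / 2)
print(div)
```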
1,438,999
<p>If $(x-a)^2=(x+a)^2$ for all values of $x$, then what is the value of $a$?</p> <p>At the end when you get $4ax=0$, can I divide by $4x$ to cancel out $4$ and $x$?</p>
Brevan Ellefsen
269,764
<p>$$(x-a)^2 = (x+a)^2$$ $$x^2 - 2ax + a^2 = x^2 + 2ax + a^2$$ $$-2ax = 2ax$$ $$-a = a$$ (dividing by $2x$, which is legitimate because the equation holds for all $x$, in particular for some $x \neq 0$). Note that this last statement is only true when $a=0$, which is thus your solution.</p>
1,438,999
<p>If $(x-a)^2=(x+a)^2$ for all values of $x$, then what is the value of $a$?</p> <p>At the end when you get $4ax=0$, can I divide by $4x$ to cancel out $4$ and $x$?</p>
E.H.E
187,799
<p>$$\frac{(x-a)^2}{(x+a)^2}=1$$ (dividing by $(x+a)^2$, valid for $x\neq -a$) $$\left(\frac{x-a}{x+a}\right)^2=1$$</p> <p>$$\left(1-\frac{2a}{x+a}\right)^2=1$$ so $\frac{2a}{x+a}$ must equal $0$ or $2$; since the equation has to hold for every $x$, and $\frac{2a}{x+a}=2$ forces $x=0$, it is clear that $\frac{2a}{x+a}$ should be equal to zero</p> <p>so $$a=0$$</p>
970,409
<p>Let matrices $A$, $B\in{M_2}(\mathbb{R})$ be such that $A^2=B^2=I$, where $I$ is the identity matrix. </p> <p>Why can the numbers $3+2\sqrt2$ and $3-2\sqrt2$ be eigenvalues of the matrix $AB$? </p> <p>Can the numbers $2,1/2$ be eigenvalues of the matrix $AB$? </p>
Yiorgos S. Smyrlis
57,021
<p>Set $$ A=\left(\begin{matrix}0 &amp; 3-2\sqrt{2} \\ 3+2\sqrt{2} &amp; 0\end{matrix}\right),\quad B=\left(\begin{matrix}0 &amp; 1 \\ 1 &amp; 0\end{matrix}\right). $$ Then $$ AB=\left(\begin{matrix} 3-2\sqrt{2} &amp; 0 \\ 0 &amp; 3+2\sqrt{2}\end{matrix}\right). $$ The eigenvalues of $A,B$ are $\pm 1$, and hence $A^2=B^2=I$, while the eigenvalues of $AB$ are $3\pm2\sqrt{2}$.</p> <p>Next, set $$ A=\left(\begin{matrix}0 &amp; 2 \\ 1/2 &amp; 0\end{matrix}\right),\quad B=\left(\begin{matrix}0 &amp; 1 \\ 1 &amp; 0\end{matrix}\right). $$ Then $$ AB=\left(\begin{matrix} 2 &amp; 0 \\ 0 &amp; 1/2\end{matrix}\right). $$ The eigenvalues of $A,B$ are $\pm 1$, and hence $A^2=B^2=I$, while the eigenvalues of $AB$ are $2,1/2$.</p> <p>In particular, every pair $a,1/a$ can be eigenvalues of $AB$!</p>
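<p>The matrix claims are easy to confirm numerically (a short editorial check, assuming numpy is available):</p>

```python
import numpy as np

s = np.sqrt(2)
A1 = np.array([[0, 3 - 2 * s], [3 + 2 * s, 0]])  # first example
A2 = np.array([[0, 2], [0.5, 0]])                # second example
B = np.array([[0, 1], [1, 0]])

# A^2 = B^2 = I in all cases
for M in (A1, A2, B):
    assert np.allclose(M @ M, np.eye(2))

print(sorted(np.linalg.eigvals(A1 @ B)))  # approx [0.172, 5.828], i.e. 3 -/+ 2*sqrt(2)
print(sorted(np.linalg.eigvals(A2 @ B)))  # [0.5, 2.0]
```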
102,304
<p>I have here a complex equation:</p> <p>$$z^2 - (7+j)z + 24 +j7 = 0$$</p> <p>How do we get the roots of this equation? I started using the quadratic formula $\frac{-b \pm \sqrt{ b^2-4ac}}{2a}$, but it became too complex. Is there any way to attack this directly? Thanks.</p>
Peđa
15,660
<p>Let's denote $z$ as : $z=a+jb$ , then we have :</p> <p>$a^2-b^2+2abj-(7+j)(a+bj)+24+7j=0 \Rightarrow$</p> <p>$\Rightarrow a^2-b^2+2abj - (7a+7bj+aj-b)+24+7j=0$</p> <p>So , you have to solve following system of equations :</p> <p>$\begin{cases} a^2-b^2-7a+b+24=0 \\ 2ab-7b-a+7=0\\ \end{cases}$</p>
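<p>For completeness (an editorial note; these values are not derived in the answer above): the system is satisfied by $(a,b)=(3,4)$ and $(a,b)=(4,-3)$, i.e. the roots are $z=3+4j$ and $z=4-3j$. A quick check:</p>

```python
def eqs(a, b):
    """Real and imaginary parts of z^2 - (7+j)z + 24 + 7j for z = a + bj."""
    real = a * a - b * b - 7 * a + b + 24
    imag = 2 * a * b - 7 * b - a + 7
    return real, imag

for a, b in [(3, 4), (4, -3)]:
    print((a, b), eqs(a, b))  # both give (0, 0)
```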
102,304
<p>I have here a complex equation:</p> <p>$$z^2 - (7+j)z + 24 +j7 = 0$$</p> <p>How do we get the roots of this equation? I started using the quadratic formula $\frac{-b \pm \sqrt{ b^2-4ac}}{2a}$, but it became too complex. Is there any way to attack this directly? Thanks.</p>
André Nicolas
6,312
<p>Although we do find the roots, the following is mainly a spoof of school algebra. </p> <p>In school algebra, students are expected to solve very special equations of the form $ax^2+bx +c=0$, where $a$, $b$, and $c$ are integers, by <em>factoring</em>. The Quadratic Formula, and even the Rational Roots Theorem, are withheld from them, as that would make the problem too simple. </p> <p>The process they are taught involves factoring $a$ and $c$, and fiddling a bit to try to produce $-b$. They are only given quadratics that yield to this process.</p> <p>Let's play that game with our equation, to see whether we are dealing with a variant of a school problem. So we factor $24+7i$ in the <em>Gaussian integers</em>. Note that $(24+7i)(24-7i)=625$. If you have done some computations with Gaussian integers, you will see that the Gaussian primes involved in the factorization are $2\pm i$ (and associates, but we needn't worry about these). Also, since $5$ does not divide $24+7i$, we know that $24+7i$ must be an associate of $(2\pm i)^4$. Pretty quickly we find that $24+7i=-i(2+i)^4$.</p> <p>Now let's find two Gaussian integers whose product is $-i(2+i)^4$ and whose sum is $7+i$. Note that $(2+i)^2=3+4i$ and $-i(2+i)^2=4-3i$. Their sum is $7+i$, so we have found the roots. </p>
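<p>The factorisation and the resulting roots $z=3+4i$ and $z=4-3i$ can be confirmed with plain complex arithmetic (editorial check):</p>

```python
# -i(2+i)^4 reproduces the constant term
print(-1j * (2 + 1j)**4)  # (24+7j)

z1, z2 = 3 + 4j, 4 - 3j
print(z1 + z2)  # (7+1j), the coefficient of z
print(z1 * z2)  # (24+7j), the constant term

# both are roots of z^2 - (7+i)z + 24 + 7i
for z in (z1, z2):
    print(z * z - (7 + 1j) * z + 24 + 7j)  # 0j
```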
523,932
<p>I've got a system of equations:<br></p> <p>$\begin{cases} x=2y+1\\xy=10\end{cases}$</p> <p>I have gotten as far as $x=\dfrac {10}y$. <br> How can I find $x$ and $y$?</p>
Jay
9,814
<p>Notice that $10 = xy = (2y + 1)y = 2y^2 + y$. But then $$2y^2 + y - 10 = 0.$$ Can you solve this quadratic equation?</p> <p>If you use the substitutions $x = \frac{10}{y}$ or $y = \frac{10}{x}$ then you are implicitly assuming either $y$ or $x$ is not $0$.</p>
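<p>Finishing the computation (a short sketch appended for completeness):</p>

```python
import math

a, b, c = 2, 1, -10
disc = b * b - 4 * a * c                 # 81, a perfect square
y1 = (-b + math.sqrt(disc)) / (2 * a)    # 2.0
y2 = (-b - math.sqrt(disc)) / (2 * a)    # -2.5
for y in (y1, y2):
    x = 2 * y + 1
    print(x, y, x * y)  # x*y = 10 in both cases
```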
1,514,628
<p>I've been looking over some old assignments in my analysis course to get ready for my upcoming exam - I've just run into something that I have no idea how to solve, though, mainly because it looks nothing like anything I've done before. The assignment is as follows:</p> <p>"Let $H$ be a Hilbert space, and let $(e_n)_{n\in\mathbb{N}}$ be an orthonormal basis for $H$. Let $E$ be the linear subspace spanned by the three elements $e_1 + e_2$, $e_3 + e_4$, $e_2 + e_3$. Let $P_E : H \to E$ be the projection onto $E$."</p> <p>How would one then do the following three things:</p> <ol> <li>Determine an orthonormal basis for $E$</li> <li>Compute $P_E e_1$</li> <li>Calculate $\|e_1\|^2$, $\|P_E e_1\|^2$ and $\|e_1 - P_E e_1\|^2$</li> </ol> <p>Usually when we've looked at these types of assignments we've gotten actual basis vectors, $e$. How does one do these things symbolically?</p> <p>I've tried doing Gram-Schmidt for the first part, but I've no idea whether what I'm doing is right. I end up with three basis vectors looking something like</p> <p>$u_1 = \frac{e_1+e_2}{2}$ , $u_2 = \frac{e_3+e_4}{2}$ , $u_3 = \frac{e_1+e_4}{2}$</p> <p>Any help would be much appreciated, right now I'm getting nowhere, haha.</p>
Hamed
191,425
<p>You know that $\{e_n\}_{n\in \mathbb{N}}$ is orthonormal. So let $(a,b)$ be the notation for the inner product of the space. Gram-Schmidt is the way to go actually, as you guessed. Convince yourself that with Gram-Schmidt you find $$ f_1 = \frac{e_1+e_2}{\sqrt{2}}, \quad f_2=\frac{e_3+e_4}{\sqrt{2}}, \quad f_3=\frac{e_1-e_2-e_3+e_4}{2} $$ The last vector is basically $(e_1+e_2)+(e_3+e_4)-2(e_2+e_3)$, normalized.</p> <p>For part (2), note that the projection onto $E$ is obtained as (for $h\in H$) $$P_E(h) = (h,f_1) f_1 + (h,f_2)f_2 + (h, f_3)f_3$$ To see this, first note that if $h\in E$, then the above formula says $P_E(h)=h$. And if $h\notin E$, then $P_E(h)\in E$ (Check these). Also $P_E(P_E(h))=P_E(h)$ (Check this too) so $P_E$ is indeed the desired projection.</p>
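<p>Since everything happens in the span of $e_1,\dots,e_4$, one can model them by the standard basis of $\mathbb{R}^4$ and verify the claims numerically (editorial check, assuming numpy):</p>

```python
import numpy as np

e = np.eye(4)  # rows play the role of e_1, ..., e_4
f1 = (e[0] + e[1]) / np.sqrt(2)
f2 = (e[2] + e[3]) / np.sqrt(2)
f3 = (e[0] - e[1] - e[2] + e[3]) / 2

# orthonormality of f1, f2, f3
G = np.array([[f @ g for g in (f1, f2, f3)] for f in (f1, f2, f3)])
assert np.allclose(G, np.eye(3))

# P_E e_1 = sum of (e_1, f_i) f_i
P_e1 = sum((e[0] @ f) * f for f in (f1, f2, f3))
print(P_e1)                           # [0.75, 0.25, -0.25, 0.25]
print(P_e1 @ P_e1)                    # ||P_E e_1||^2 = 3/4
print((e[0] - P_e1) @ (e[0] - P_e1))  # ||e_1 - P_E e_1||^2 = 1/4
```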
3,500,405
<p>I have a similar question to what was asked already <a href="https://math.stackexchange.com/questions/2511111/prove-that-the-following-map-has-at-least-k-2-fixed-points">here</a></p> <p>But I do not really understand the answer there.</p> <p>The problem is: Let <span class="math-container">$x_0 \in S^1$</span> and let <span class="math-container">$f: S^1 \rightarrow S^1$</span> be a continuous map with <span class="math-container">$f(x_0) = x_0$</span>. Suppose moreover that the induced map <span class="math-container">$f_{*} : \pi_1 (S^1, x_0) \rightarrow \pi_1 (S^1, x_0): [g] \mapsto k [g]$</span> for some <span class="math-container">$k &gt; 2$</span>. </p> <p>(i) Show that there are certainly <span class="math-container">$k-2$</span> other fixed points for <span class="math-container">$f$</span> besides <span class="math-container">$x_0$</span>. (hint: consider <span class="math-container">$f$</span> as being a map <span class="math-container">$f^{'}: I \rightarrow S^1$</span> with <span class="math-container">$f^{'} (0) = f^{'} (1) = x_0$</span> and study the lifts of <span class="math-container">$f^{'}$</span> to the universal covering space <span class="math-container">$\mathbb{R}$</span>.)</p> <p>(ii) Give an example of such an <span class="math-container">$f$</span> with precisely <span class="math-container">$k-1$</span> fixed points (of which <span class="math-container">$x_0$</span> is one). </p> <p>I do not understand the hint really. How can we consider <span class="math-container">$f$</span> as the map <span class="math-container">$f^{'}$</span>? And why would we do this? </p> <p>I know that the fundamental group of the circle is <span class="math-container">$\mathbb{Z}$</span>, and that every covering of <span class="math-container">$S^1$</span> is regular. 
If <span class="math-container">$p: \mathbb{R} \rightarrow S^1$</span> is the standard covering map, then the covering transformations are the homeomorphisms <span class="math-container">$\mathbb{R} \rightarrow \mathbb{R}: x \mapsto x + n$</span>, for <span class="math-container">$n$</span> an integer. But I'm not sure how this will help me. </p> <p>An elaborate answer is appreciated.</p>
Matematleta
138,929
<p>Let <span class="math-container">$p:t\mapsto e^{2\pi it}$</span> be the usual covering map of <span class="math-container">$S^1$</span>. Since the induced map <span class="math-container">$f_*$</span> takes <span class="math-container">$[g]$</span> to <span class="math-container">$k[g]$</span>, it is not hard to show that the degree of the map </p> <p><span class="math-container">$$g:I\to S^1: t\mapsto e^{2\pi it}\mapsto f(e^{2\pi it})\ \text{is}\ k,$$</span> which means that the lift of <span class="math-container">$g,$</span> namely, <span class="math-container">$F:I\to \mathbb R$</span> satisfies </p> <p><span class="math-container">$$F(1)-F(0)=k\ \text{and of course,}\ e^{2\pi i F(t)}=f(e^{2\pi i t}).$$</span></p> <p>Suppose that there is an <span class="math-container">$s\in I$</span> such that <span class="math-container">$F(s)-s=j$</span> for some integer <span class="math-container">$j$</span>. Then, </p> <p><span class="math-container">$$f(e^{2\pi i s})=e^{2\pi i F(s)}=e^{2\pi i (j+s)}=e^{2\pi i s}\ \text{so}\ e^{2\pi i s}\ \text{is a fixed point of}\ f.$$</span></p> <p>Now as <span class="math-container">$s$</span> goes from <span class="math-container">$0$</span> to <span class="math-container">$1$</span>, <span class="math-container">$F(s)-s$</span> maps onto an interval containing the <span class="math-container">$k-2$</span> integers between the first integer greater than <span class="math-container">$F(0)$</span> and the greatest integer less than <span class="math-container">$F(1)$</span>. It follows that <span class="math-container">$f$</span> has at least <span class="math-container">$k-2$</span> fixed points.</p>
1,419,784
<p>When we consider calculations at the tiniest of scales, which number system would be more accurate: the binary number system (base 2) or the number system we generally use (base 10)?</p> <p>Another way to say it: if we consider number systems with base 1, 2, 3, 4, ..., does the choice of base affect the accuracy with which calculations can be done, or does it not matter at all?</p> <p>In other words: how does having a different base affect the accuracy of numbers? Can we prove or determine by how much the result differs when the same calculation is done with base 2 and base 10 numbers?</p>
Matt Samuel
187,867
<p>Without rounding, the base doesn't matter at all. With rounding, the base 10 number system is in a sense more general than binary because whenever something can be represented exactly in a finite number of digits in binary, the same is true in decimal, but there are numbers with a finite length base 10 representation that do not have finite binary representations. (This is true whenever one base is a multiple of another, for example whenever something has a finite length representation in base 3 it also has one in base 9, but not vice versa.) Also, if you are rounding to the same number of digits, decimal is far more accurate than binary. </p> <p>As long as you have enough digits for your application, the difference is insignificant, but if you're looking for economy in length decimal has more bang for the buck.</p> <p>Essentially, the larger the base, the more accuracy you have for the number of digits after the point. This has to be weighed against the fact that you need more symbols for digits. Because of the simplicity of binary having only two digits it is the easiest to use on a computer. For humans, though, I would say that in virtually any application where approximation is necessary the numbers are printed out in decimal, even if only because this is what people are familiar with.</p>
1,419,784
<p>When we consider calculations at the tiniest of scales, which number system would be more accurate: the binary number system (base 2) or the number system we generally use (base 10)?</p> <p>Another way to say it: if we consider number systems with base 1, 2, 3, 4, ..., does the choice of base affect the accuracy with which calculations can be done, or does it not matter at all?</p> <p>In other words: how does having a different base affect the accuracy of numbers? Can we prove or determine by how much the result differs when the same calculation is done with base 2 and base 10 numbers?</p>
Stefan Mesken
217,623
<p>The base in which we represent natural numbers does not affect the natural numbers at all (the base $1$, however, isn't suitable - think about it). This is like asking "Is 'Times New Roman' or 'Comic Sans' (a better comparison is "Times New Roman" and a suitable variation of "Tengwar") more accurate when writing an essay?". These things matter, but they don't affect "accuracy".</p>
815,770
<p>Compute $F_{1000} \bmod F_{11}$, where $F_n$ denote the Fibonacci numbers.</p> <p>Progress:</p> <p>$F_{11}=89$ . I believe you should find the period of $F_n \bmod 89$ and use that to solve it. But I'm not not getting anywhere from that.</p> <p>Thanks!</p>
heropup
118,193
<p>The correct relationship is $$F_{11k + n} \equiv F_n \cdot F_{10}^k \pmod {F_{11}},$$ and since $1000 = 11(90)+10$, we have $$F_{1000} \equiv F_{10} \cdot F_{10}^{90} = F_{10}^{91} \pmod {F_{11}}.$$ Now we must compute $$55^{91} \pmod {89},$$ and since $89$ is prime, by Fermat's little theorem, we have $$55^{89} \equiv 55 \pmod {89}.$$ Consequently, $$55^{91} = 55^{89} \cdot 55^2 \equiv 55^3 \equiv 34 \pmod {89}.$$</p>
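<p>The value is easy to confirm directly by iterating the recurrence modulo $89$ (editorial check):</p>

```python
def fib_mod(n, m):
    """F_n modulo m, by iterating the Fibonacci recurrence."""
    a, b = 0, 1  # F_0, F_1
    for _ in range(n):
        a, b = b, (a + b) % m
    return a

print(fib_mod(11, 10**9))  # 89, i.e. F_11
print(fib_mod(1000, 89))   # 34, matching the derivation above
```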
3,691,147
<p>Consider the wave equation in one dimension <span class="math-container">$u_{tt}-u_{xx}=0$</span> together with a Fourier Transform along <span class="math-container">$t$</span>, ie <span class="math-container">$$\text{FT}[u](x,\omega)=\int_{-\infty}^{+\infty}u(x,t)\exp(-i\omega t)\mathrm{d}t.\tag{1}$$</span> The above PDE transforms into <span class="math-container">$\partial_{xx}\text{FT}[u]+\omega^2\text{FT}[u]=0$</span> whose general solution reads <span class="math-container">$$\text{FT}[u](x,\omega)=A(\omega)\cos\omega x+B(\omega)\sin\omega x\tag{2}$$</span> which is essentially the Fourier Transform of d'Alembert's solution.</p> <p>Under which conditions on <span class="math-container">$u(x,t)$</span> is the <em>classical</em> differentiation of <span class="math-container">$\text{FT}[u](x,\omega)$</span> with respect to <span class="math-container">$x$</span> meaningful? When it is meaningful, is <span class="math-container">$\partial_x \text{FT}[u]$</span> the Fourier Transform of <span class="math-container">$u_x(x,t)$</span> that is <span class="math-container">$\text{FT}[u_x]$</span>? It is a classical result which is always used when solving PDE via Fourier Transform (and used above in the quantity <span class="math-container">$\partial_{xx} FT[u]$</span>), however I would like to read the exact assumptions on <span class="math-container">$u$</span>. For instance, is this differentiation acceptable when <span class="math-container">$u_{xx}(x,t)$</span> should be read in the sense of distributions because <span class="math-container">$u_x(x,t)$</span> is discontinuous?</p>
pluton
30,598
<p>If we consider for simplicity the left propagating wave, the solution reads <span class="math-container">$u(x,t)=T(x+t)$</span> where <span class="math-container">$T$</span> is a distribution. Its Fourier Transform in time is (because of translation) <span class="math-container">$$ \text{FT}[u](x,\omega)=\int_{-\infty}^{+\infty}T(x+t)\exp(-i\omega t)\mathrm{d}t=\exp(i\omega x)\text{FT}[T](\omega) \tag{3}$$</span> and so <span class="math-container">$\partial_x \text{FT}[u]$</span> and <span class="math-container">$\partial_{xx}\text{FT}[u]$</span> are well defined as soon as <span class="math-container">$T$</span> is a tempered distribution and <span class="math-container">$$\partial_x \text{FT}[u](x,\omega)=i\omega\text{FT}[u](x,\omega)\tag{4}$$</span></p> <p>Let us now have a look at the Fourier Transform of <span class="math-container">$u_x=T_x$</span> (in the sense of distributions) <span class="math-container">$$ \begin{aligned} \text{FT}[u_x](x,\omega)&amp;=\int_{-\infty}^{+\infty}T'(x+t)\exp(-i\omega t)\mathrm{d}t\\ &amp;=\exp(i\omega x)\text{FT}[T'](\omega)=i\omega\exp(i\omega x)\text{FT}[T](\omega)=i\omega\text{FT}[u](x,\omega) \end{aligned} \tag{5}$$</span> and Equations (5) and (4) are identical.</p> <p>Conclusion: for the wave equation in 1D with solution <span class="math-container">$u(x,t)=T(x+t)$</span>, the <em>classical</em> differentiation with respect to space of the Fourier Transform in time is legitimate as soon as <span class="math-container">$T$</span> is a tempered distribution and the following holds: <span class="math-container">$$\partial_x \text{FT}[u](x,\omega)=\text{FT}[u_x](x,\omega)=i \omega \text{FT}[u](x,\omega)$$</span> All this is probably obvious and agrees well with (2) :). Same derivations apply for the right propagating wave <span class="math-container">$V(x-t)$</span>.</p>
50,113
<p>What are some good books on field and Galois theory?</p>
Jack Rousseau
11,764
<p><a href="https://projecteuclid.org/ebooks/notre-dame-mathematical-lectures/Galois-Theory/toc/ndml/1175197041" rel="nofollow noreferrer">Galois Theory</a> by Emil Artin is a nice treatment of the latter.</p>
119,810
<p>My question today is about the minimization of an error function with two parameters. It is a function that measures the error of a set of points. The two parameters are the weights of a regressor. </p> <p>$$\frac{1}{N}\sum_{t=1}^{N}[r^t-(w_1x^t+w_0)]^2$$ </p> <p>The minimum should be calculated by taking partial derivatives of the error function above with respect to $w_1$ and $w_0$, setting them equal to $0$ and solving for the unknowns. However, I didn't reach the solutions given. The solutions should be:<br> $$w_1=\frac{\sum_tx^tr^t-\sum_t\frac{x^t}{N}\sum_t\frac{r^t}{N}N}{\sum_t(x^t)^2-N(\sum_t\frac{x^t}{N})^2}$$<br> $$w_0=\sum_t\frac{r^t}{N}-w_1\sum_t\frac{x^t}{N}$$ </p> <p>They are performing well in practice. But my question is, can I reach them by taking the partial derivatives and setting them equal to $0$? Can anybody help me, at least with one? Thank you. </p> <p><strong>UPDATE:</strong><br> This is the regressor I get by using the $w_1$ and $w_0$ listed above. As you can see, the two model the data very well so they must be right. <img src="https://i.stack.imgur.com/GD6a3.png" alt="enter image description here"></p> <p><strong>UPDATE 2:</strong><br> I will post the passage from the book that lists $w_1$ and $w_0$ as the solution. Maybe you'll get the idea better.<br> <img src="https://i.stack.imgur.com/POP1A.png" alt="enter image description here"></p>
Henry
6,460
<p>Let's start by ignoring the constant $\frac{1}{N}$. Then </p> <p>$$\sum_t[r^t-(w_1x^t+w_0)]^2 $$ $$= \sum_t (r^t)^2 + w_1^2 \sum_t (x^t)^2 +N w_0^2 -2 w_1 \sum_t r^t x^t -2 w_0 \sum_t r^t+ 2 w_1 w_0 \sum_t x^t $$</p> <p>Take the partial derivatives with respect to $w_0$ and $w_1$ and set them to zero and you get</p> <p>$$ 2N w_0 -2 \sum_t r^t + 2 w_1 \sum_t x^t = 0,$$ $$ 2 w_1 \sum_t (x^t)^2 -2 \sum_t r^t x^t + 2 w_0 \sum_t x^t = 0.$$</p> <p>The first of these gives your expression for $w_0$. Solving these simultaneous equations gives $$w_0 = \dfrac{(\sum_t r^t) (\sum_t (x^t)^2) - (\sum_t r^t x^t)(\sum_t x^t) }{ N (\sum_t (x^t)^2) -(\sum_t x^{t})^2 },$$ $$w_1=\dfrac{ N (\sum_t r^t x^t) - (\sum_t r^t)(\sum_t x^{t}) }{ N (\sum_t (x^t)^2) -(\sum_t x^{t})^2 }$$</p> <p>and this last is in fact the same as the expression you quote for $w_1$: multiplying the numerator and denominator of the quoted formula by $N$, and using $\sum_t \frac{x^t}{N}=\frac{1}{N}\sum_t x^t$, turns it into the form above. As a quick sanity check, if $r^t=1$ for all $t$ then both forms give the expected optimum $w_1=0$ and $w_0=1$.</p>
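<p>A numerical cross-check on a small made-up data set (editorial sketch; the data values here are arbitrary): the closed forms agree with numpy's degree-one least-squares fit, and they also agree with the formula quoted in the question.</p>

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
r = np.array([1.1, 2.9, 5.2, 6.8, 9.1])
N = len(x)

den = N * np.sum(x**2) - np.sum(x)**2
w1 = (N * np.sum(r * x) - np.sum(r) * np.sum(x)) / den
w0 = (np.sum(r) * np.sum(x**2) - np.sum(r * x) * np.sum(x)) / den

# the formula quoted in the question, written verbatim
w1_quoted = (np.sum(x * r) - np.sum(x / N) * np.sum(r / N) * N) / (
    np.sum(x**2) - N * np.sum(x / N)**2)

w1_ref, w0_ref = np.polyfit(x, r, 1)  # numpy's least-squares line
print(w1, w1_quoted, w1_ref)
print(w0, w0_ref)
```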
267,355
<p>Let $H_i = (V_i, E_i)$ be <a href="https://en.wikipedia.org/wiki/Hypergraph" rel="nofollow noreferrer">hypergraphs</a> for $i=1,2$. Then we say that $H_1\cong H_2$ if there is a bijection $\varphi:V_1\to V_2$ such that $A\in E_1$ implies $\varphi(A) \in E_2$ and $B\in E_2$ implies $\varphi^{-1}(B)\in E_1$.</p> <p>Is there a collection $\cal C$ of pairwise non-isomorphic hypergraphs on $\omega$ with $|{\cal C}| = 2^{2^{\aleph_0}}$?</p>
Joel David Hamkins
1,946
<p>The answer is yes. </p> <p>Consider the collection $\mathcal C$ of hypergraphs of the following form. They have underlying set $\omega$ as the vertices, the natural numbers. The finite edges in the hypergraph are all and only the sets of the form $\{0,1,\ldots,n\}$. And then the hypergraph can have any desired collection of infinite edge sets. </p> <p>Since there are $2^{\aleph_0}$ many infinite subsets of $\omega$, it follows that there are $2^{2^{\aleph_0}}$ many members of $\mathcal C$. And I claim that they are all pairwise non-isomorphic. To see this, note first that $\{0\}$ is the only edge with one member, and so the isomorphism must fix $0$. Similarly, $\{0,1\}$ is the only edge with two members, and so the isomorphism must fix $1$. And so on. The finite edges force the isomorphism to fix every individual vertex, and so non-identical members of $\mathcal C$ will be non-isomorphic.</p> <blockquote> <p><strong>EDIT</strong> &nbsp; Let $\ F:=\ \{\{1\ \ldots\ n\}: n\in\mathbb N\},\ $ and $\ D:=\{X\in 2^\mathbb N : |X|=|\mathbb N|\}. $ Then define: $$ C\ :=\ \{ F\cup G:\ G\subseteq D\} $$ The Dominic's isomorphisms are reduced to just one identity isomorphism per selected hypergraph, i.e. there are no non-trivial Dominic's isomorphisms between members of $\ C$.</p> </blockquote>
267,355
<p>Let $H_i = (V_i, E_i)$ be <a href="https://en.wikipedia.org/wiki/Hypergraph" rel="nofollow noreferrer">hypergraphs</a> for $i=1,2$. Then we say that $H_1\cong H_2$ if there is a bijection $\varphi:V_1\to V_2$ such that $A\in E_1$ implies $\varphi(A) \in E_2$ and $B\in E_2$ implies $\varphi^{-1}(B)\in E_1$.</p> <p>Is there a collection $\cal C$ of pairwise non-isomorphic hypergraphs on $\omega$ with $|{\cal C}| = 2^{2^{\aleph_0}}$?</p>
Peter Heinig
108,556
<p>Joel's answer is spot on, and makes full use of your not requiring any further properties that your hypergraphs should have, but it might perhaps be nice to know that also with both </p> <ul> <li>less demands on the isomorphism</li> <li>more demands on each of the hypergraphs </li> </ul> <p>the number of isomorphism classes can still reach</p> <p>$2^{2^{\aleph_0}}$</p> <p>in natural situations.</p> <p>For example, in </p> <p><a href="https://arxiv.org/abs/1301.5980" rel="nofollow noreferrer">Infinite Matroids and Determinacy of Games</a></p> <p>a proof is given that there exist </p> <p>$2^{2^{\aleph_0}}$</p> <p>pairwise non-isomorphic hypergraphs with the additional nice property that each of them is a tame infinite matroid (in the sense of Bruhn--Diestel--Kriesell--Pendavingh--Wollan) on the ground set $\omega$, and each moreover is free of certain minors. </p> <p>In short, a matroid on an infinite set $E$ is an abstract simplicial complex on $E$ satisfying two additional properties, </p> <p>an exchange-axiom (analogous to the classical exchange axiom for finite matroids) </p> <p>and axiom (IM), </p> <p>which stipulates the existence of maximal elements in certain infinite subposets of the lattice of subsets $(2^E,\subseteq)$, and which does not have an analogue in the theory of finite matroids. If $E$ is finite, (IM) is always satisfied, which is why this notion of infinite matroids extends the classical definition of matroids. The property of being tame means that there does not exist any circuit intersecting a cocircuit in infinitely many ground-set elements. 
Such matroids naturally arise in the theory of countable infinite graphs.</p> <p>A natural further requirement would be to ask,</p> <ul> <li>that the hypergraph be a an ultrafilter on $\omega$.</li> </ul> <p>(Of course, then we do not ask that it be an abstract simplicial complex, since there being both a complex and a filter is impossible.)</p> <p>Posp&iacute;&#353;il proved a century ago or so that on $\omega$ there exist $2^{2^{\aleph_0}}$ ultrafilters, distinct as sets. (Using terminology of (hyper-)graph theory, this is labelled counting, and by itself does not answer your question.)</p> <p>In his thesis, A. Blass, using a natural notion of isomorphism of ultrafilters, gave, en route to the main results of the thesis, a reason why w.r.t. to that notion each isomorphism class of ultrafilters on $\omega$ must have size at most $2^{\aleph_0}$. (I could give some details provided you are interested, and provided I find the time; but probably it will be better if Blass himself would do so.) Combined with Posp&iacute;&#353;il's theorem it follows that there must be $2^{2^{\aleph_0}}$ isomorphism classes of ultrafilters on $\omega$ w.r.t. Blass' notion of isomorphism. If I am not mistaken (I did not write a proof), if two ultrafilters are isomorphic w.r.t. your notion of isomorphism of hypergraphs, then they are isomorphic w.r.t. Blass' notion; hence there are at least as many isomorphism classes w.r.t. your notion as w.r.t. his. Therefore his result also implies that there are $2^{2^{\aleph_0}}$ ultrafilters on $\omega$ which are pairwise non-isomorphic w.r.t. your notion.</p> <p>Moreover, applying to an ultrafilter the endofunctor $F$ of $\textsf{Sets}$ which is defined by replacing each set in a set of sets by its complement w.r.t. the ground-set results in an abstract simplicial complex, and if two ultrafilters $\mathcal{D}_0$ and $\mathcal{D}_1$ are non-isomorphic w.r.t. 
your notion of isomorphism, then $F(\mathcal{D}_0)$ and $F(\mathcal{D}_1)$ are non-isomorphic again, so in that sense, Blass' argument yields $2^{2^{\aleph_0}}$ isomorphism classes of abstract simplicial complexes on $\omega$, too.</p> <p>So apparently we have the following examples:</p> <p>if any hypergraph on $\omega$ is allowed: Joel's construction</p> <p>if each hypergraph is required to be an ultrafilter on $\omega$: Blass' arguments</p> <p>if each hypergraph is required to be an abtract simplicial complex on $\omega$: Blass' arguments viewed through $F$</p> <p>if each hypergraph is required to be an abtract simplicial complex which moreover is required to be a matroid in the sense of Bruhn--Diestel--Kriesell--Pendavingh--Wollan: the construction of Bowler and Carmesin in the article cited above.</p>
2,946,379
<p>The question posed is the following: Let <span class="math-container">$X$</span> be a Banach space and let <span class="math-container">$T:X\to X$</span> be a Lipschitz-continuous map. Show that, for <span class="math-container">$\mu$</span> sufficiently large, the equation <span class="math-container">\begin{equation} Tx+\mu x=y \end{equation}</span> has, for any <span class="math-container">$y\in X$</span>, a unique solution.</p> <p>Note that <span class="math-container">$x,y$</span> are vectors; our book (<em>Mathematical Analysis</em> by Mariano Giaquinta and Giuseppe Modica) generally omits vector notation, since everything is multivariable.</p> <p>My proof is based on the Banach Fixed Point Theorem: Since <span class="math-container">$T$</span> is Lipschitz-continuous, we have <span class="math-container">$\|Tx\|\leq k\|x\|$</span> for <span class="math-container">$0&lt;k\leq1$</span>. So <span class="math-container">$\|Tx-\mu x\|\leq k\|x\| - \mu \|x\|$</span>.</p> <p>Then we can say</p> <p><span class="math-container">\begin{equation} \|Tx-\mu x\|\leq (k-\mu)\|x\| \end{equation}</span></p> <p>So, if <span class="math-container">$\mu$</span> is large enough that <span class="math-container">$|k-\mu|&lt;1$</span>, we have a contractive map, and by the Banach Fixed Point theorem, there exists a unique fixed point <span class="math-container">$x_0$</span> for <span class="math-container">$(T-\mu)x$</span>. Then, <span class="math-container">$Tx-\mu x=y$</span> has a unique solution, namely, <span class="math-container">$x_0$</span>.</p> <p>My question is whether this is a valid proof. I'm mostly foggy about whether I applied the theorem correctly, and whether I am allowed to say <span class="math-container">$Tx-\mu x=(T-\mu)x$</span>, since <span class="math-container">$T$</span> is a map and <span class="math-container">$\mu$</span> is a constant (I think).</p>
Henry Lee
541,220
<p>I think it should be: <span class="math-container">$$I=\int_1^\infty\frac{e^x+e^{3x}}{e^x-e^{5x}}dx$$</span> <span class="math-container">$u=e^x$</span> so <span class="math-container">$dx=\frac{du}{u}$</span> so: <span class="math-container">$$I=\int_e^\infty\frac{u+u^3}{u-u^5}\frac{1}{u}du=\int_e^\infty\frac{1+u^2}{u(1-u^4)}du=\int_e^\infty\frac{1+u^2}{u(1-u^2)(1+u^2)}du=\int_e^\infty\frac{1}{u(1-u^2)}du$$</span> <span class="math-container">$v=1-u^2$</span> (so <span class="math-container">$du=-\frac{dv}{2u}$</span> and <span class="math-container">$u^2=1-v$</span>) <span class="math-container">$$I=-\frac{1}{2}\int_{1-e^2}^{-\infty}\frac{1}{v(1-v)}dv=\frac{1}{2}\int_{-\infty}^{1-e^2}\frac{1}{v(1-v)}dv$$</span> and this can easily be solved using partial fractions</p>
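<p>Completing the partial fractions (an editorial addition, assuming the integral starts at $x=1$) gives $I=\tfrac{1}{2}\ln\left(1-e^{-2}\right)\approx-0.0727$, which a crude numerical check confirms:</p>

```python
import math

def midpoint(f, a, b, n=200000):
    """Composite midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# integrand rewritten stably (divide numerator and denominator by e^(5x))
f = lambda x: (math.exp(-4 * x) + math.exp(-2 * x)) / (math.exp(-4 * x) - 1)

numeric = midpoint(f, 1, 50)  # the tail beyond x = 50 is ~ e^(-100), negligible
closed = 0.5 * math.log(1 - math.exp(-2))
print(numeric, closed)  # both ~ -0.0727
```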
2,264,791
<p>I have a problem where I'm having trouble figuring out the distribution of a transformed random variable.</p> <p>We are given <span class="math-container">$1/(X+1)$</span>, where <span class="math-container">$X$</span> is an exponentially distributed random variable with parameter 1.</p> <blockquote> <p><strong>Original Problem:</strong></p> <p>What is the distribution of <span class="math-container">$1/(X+1)$</span>, where <span class="math-container">$X$</span> is an exponentially distributed random variable with parameter 1?</p> </blockquote> <p>With parameter 1, the density of <span class="math-container">$X$</span> can be written as <span class="math-container">$e^{-x}$</span>, and after plugging it into the given function, I got <span class="math-container">$$\frac{1}{e^{-x}+1} = \frac{e^{x}}{e^{x}+1}$$</span></p> <p>What type of distribution is this?</p>
Felix Marin
85,343
<p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \int_{0}^{\infty}\expo{-x}\delta\pars{y - {1 \over x + 1}}\,\dd x &amp; = \int_{0}^{\infty}\expo{-x}\, {\delta\pars{x -\bracks{1/y - 1}} \over \ds{% \verts{\partiald{\bracks{y - 1/\pars{x + 1}}}{x}}_{\ x\ =\ 1/y - 1}}}\,\dd x \\[5mm] &amp; = {1 \over y^{2}}\,\exp\pars{-\,\pars{{1 \over y} - 1}} \int_{0}^{\infty}\delta\pars{x -\bracks{{1 \over y} - 1}}\,\dd x \\[5mm] &amp; = {1 \over y^{2}}\,\exp\pars{1 - {1 \over y}}\bracks{{1 \over y} - 1 &gt; 0} \\[5mm] &amp; = {1 \over y^{2}}\,\exp\pars{1 - {1 \over y}}\bracks{y\pars{1 - y} &gt; 0} \\[5mm] &amp; = \bbx{\bracks{0 &lt; y &lt; 1}\,{1 \over y^{2}}\,\exp\pars{1 - {1 \over y}}} \end{align}</p>
936,525
<p>I am following a proof in the text OPTIMIZATION THEORY AND METHODS, a Springer series book by WENYU SUN and YA-XIANG YUAN. I come across what seems obvious: that for a column vector $v$ of dimension $n\times 1$, $$\biggl\|I-\frac{vv^T}{v^Tv}\biggr\|=1,$$ where $I$ is an $n\times n$ identity matrix and $\|.\|$ is a matrix norm.</p> <hr> <p>I try to verify it by considering the Frobenius norm. Since the matrix is symmetric and idempotent, </p> <p>\begin{equation*} \begin{split} \biggl\|I-\frac{vv^T}{v^Tv}\biggr\|_F&amp; = \biggl (tr\biggl(\Bigl(I-\frac{vv^T}{v^Tv}\Bigr)^T\Bigl(I-\frac{vv^T}{v^Tv}\Bigr)\biggr)\biggr)^{\frac{1}{2}} \\ &amp; = \biggl (tr\biggl(I-\frac{vv^T}{v^Tv}\biggr)^2\biggr)^{\frac{1}{2}}\\ &amp; =\biggl(tr\biggl(I-\frac{vv^T}{v^Tv}\biggr)\biggr)^{\frac{1}{2}}\\ &amp; =\biggl(tr\bigl(I\bigr)-tr\biggl(\frac{vv^T}{v^Tv}\biggr)\biggr)^{\frac{1}{2}}\\ &amp; =\biggl(n-\frac{1}{\|v\|^2} \|v\|^2\biggr)^{\frac{1}{2}}\\ &amp; =\sqrt{n-1} \end{split} \end{equation*} </p> <hr> <p>So the Frobenius norm gives $\sqrt{n-1}$, not $1$, and I do not know what the problem is. The text does not specify which norm is meant; maybe I have to use another matrix norm.</p> <hr> <p>NOTE: The Frobenius matrix norm of any matrix $A$ is defined by \begin{equation*} \begin{split} \|A\|_F &amp; = \biggl( \sum_{i=1}^{m}\sum_{j=1}^{n}|a_{ij}|^2\biggr)^\frac{1}{2}\\ &amp; = \biggl(tr(A^TA)\biggr)^\frac{1}{2} \end{split} \end{equation*}</p>
leonbloy
312
<p>Let $A=I-\frac{vv^T}{v^Tv}$. Then $A$ is symmetric, so its eigenvalues are real. Moreover, $A x = x - v \frac{v^T x}{v^Tv}=\lambda x$ implies that either $x = \alpha v$ (which gives $\lambda=0$) or $v^T x=0$, which gives $\lambda=1$. Hence its largest eigenvalue in absolute value is $1$, and that is the spectral norm.</p>
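<p>Both norms can be spot-checked numerically: the spectral norm of $I - vv^T/(v^Tv)$ is $1$, while the Frobenius norm comes out as $\sqrt{n-1}$ (note the square root), which is why the Frobenius norm is not $1$ here. A sketch assuming NumPy is available, with a hypothetical random $v$:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
v = rng.standard_normal((n, 1))
A = np.eye(n) - (v @ v.T) / (v.T @ v)  # I - vv^T / (v^T v)

spectral = np.linalg.norm(A, 2)        # largest singular value
frobenius = np.linalg.norm(A, 'fro')
print(spectral, frobenius)             # 1 and sqrt(n - 1)
```

<p>This matches the eigenvalue argument above: $A$ is the orthogonal projector onto $v^\perp$, with eigenvalue $1$ repeated $n-1$ times and a single $0$.</p>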
267,707
<p>The weight 2, level 1 Eisenstein series $E_2(z)$ is a non-holomorphic automorphic form. It is defined as the analytic continuation to $s = 0$ of the series $$ E_2(z, s) = \sum_{\substack{m, n \in \mathbf{Z} \\ (m, n) \ne (0,0)}} \frac{\operatorname{Im}(z)^s}{(mz + n)^2 |mz + n|^{2s}} $$ which is convergent for $\operatorname{Re}(s) &gt; 0$ (but not for $s = 0$). Moreover for any prime p the function $E_2(z)-pE_2(pz)$ is holomorphic. </p> <p>My question is: what's the Archimedean component of the automorphic representation corresponding to $E_2$? (If it it the holomorphic discrete series of weight two then seems the vector corresponding to $E_2$ should be holomorphic.)</p>
paul garrett
15,629
<p>(I took @LSpice's inquiry as encouragement to add some remarks to @GH from MO's good answer... But/and one of the issues that helped me overcome my skepticism about the utility of representation theory long ago was its clarification of exactly these issues about "Hecke summation", Maass-Shimura operators, and such. In particular, that it is not necessary to write out Fourier expansions using special functions, etc.)</p> <p>First, as in the comments that helped @GH...'s answer get on track, in a typical (naive) indexing scheme, the $2k$-th holomorphic discrete series is <em>not</em> a subrepresentation of the $2k$-th principal series (also in a naive normalization), but just of the $k$-th. Also, there are choices about when-and-how to append/renormalize the $y^k$ to weight-$2k$ automorphic forms, to make them left $\Gamma$-invariant and right $K$-equivariant. Lots of chances to make a mess...</p> <p>A more interesting issue than normalization is the fact that (the map...) formation of Eisenstein series (manifestly) does not commute with meromorphic continuation. Such a thing already occurs in looking at the spectral decomposition of the space of pseudo-Eisenstein series in terms of Eisenstein series.</p> <p>For <em>generic</em> principal series $I_s$, the raising and lowering operators $R,L$ change the $K$-type by $\pm 2$, with the lowering operator having a coefficient of $s-k$ (here the normalization matters, obviously). So, up to irrelevant constants, with $E_{s,2k}$ being the Eisenstein series hit by $I_s$ with $K$-type $2k$, $LE_{s,2k}=(s-k)E_{s,2k-2}$. Thus, at $s=1$, if $E_{s,2k-2}$ has no pole, then $E_{s,2k}$ is annihilated by the lowering operator.</p> <p>It is partly some sort of crazy luck that the lowering operator $L$, in coordinates, is essentially the Cauchy-Riemann operator, so annihilation by it implies holomorphy.</p> <p>But/and of course with $2k=2$, the Eisenstein series $E_{s,0}$ has a pole at $s=1$. 
The residue is just a constant, not very exciting, but not $0$. Thus, in whatever normalization one operates, the application of the lowering operator (Cauchy-Riemann) to (the meromorphically continued) $E_{s,2}$ gives that residue, not $0$.</p> <p>For Hilbert modular forms over (totally real) number fields of degree $&gt;1$ over $\mathbb Q$, the lowering operators attached to the various archimedean factors of the group do not map $E_{s,(2,2,2,2,...)}$ to $E_{s,0}$, where there is a pole, so they annihilate $E_{s,(2,2,...)}$.</p> <p>The weight-one Eisenstein series are another story... </p> <p>(Edit: some "$s-1$"'s should have been "$s-k$"'s, but I was thinking about $k=1$... maybe repaired...)</p>
2,661,443
<p>For the equation $2^x = 7$</p> <p>The textbook says to use log base ten to solve it like this $\log 2^x = \log 7$. </p> <p>I then re-arrange it so that it reads $x \log 2 = \log 7$ then divide the RHS by $\log 2$ to isolate the $x$. I understand this part.</p> <p>I can alternatively solve it in an easier way by simply using $\log_2 7$ on my calculator.</p> <p>Using both methods the answer comes to the same which is $2.807$</p> <p>My question is twofold:</p> <ol> <li><p>Why would the textbook suggest to use log base ten rather than simply using log base two?</p></li> <li><p>I can see how using log base ten and the suggested method in the textbook makes me arrive at the same answer but I don't understand WHY this is so. How does base ten play a factor in the whole scope of things.</p></li> </ol> <p>Thank you</p>
user
505,767
<p>The use of log base 10 is a common choice, and we often find it on calculators, since our standard system of counting is base 10 (i.e. the number of fingers).</p> <p>For algebraic problems any other base works just as well; we can convert between bases using the <a href="https://en.wikipedia.org/wiki/Logarithm#Logarithmic_identities" rel="noreferrer">logarithmic identities</a>.</p> <p>For applications in calculus the best choice is the natural logarithm, base $e$.</p>
2,661,443
<p>For the equation $2^x = 7$</p> <p>The textbook says to use log base ten to solve it like this $\log 2^x = \log 7$. </p> <p>I then re-arrange it so that it reads $x \log 2 = \log 7$ then divide the RHS by $\log 2$ to isolate the $x$. I understand this part.</p> <p>I can alternatively solve it in an easier way by simply using $\log_2 7$ on my calculator.</p> <p>Using both methods the answer comes to the same which is $2.807$</p> <p>My question is twofold:</p> <ol> <li><p>Why would the textbook suggest to use log base ten rather than simply using log base two?</p></li> <li><p>I can see how using log base ten and the suggested method in the textbook makes me arrive at the same answer but I don't understand WHY this is so. How does base ten play a factor in the whole scope of things.</p></li> </ol> <p>Thank you</p>
Henry
6,460
<p>This works in any base because $\log_b (a^c) = c \log_b(a)$ </p> <p>The practical reason for using base $10$ was a little old fashioned: it allowed the use of tables of logarithms instead of a calculator and reducing these calculations to addition and subtraction. Calculators then provided a $\log_{10}$ function as a carry-over from tables and just called the button <strong>log</strong></p> <p>My 1953 elementary statistical tables have logarithms of numbers from $1$ to $10$ and anti-logarithms of numbers from $0$ to $1$. Since these are logarithms base $10$, I can easily deal with all numbers because $\log(a \times 10^n)=n +\log(a)$</p> <p>Here I want $\dfrac{\log 7}{\log 2}$. </p> <ul> <li><p>My tables tell me $\log 7\approx 0.8451_6$ (with the ${\,}_6$ helping interpolation) and $\log 2\approx 0.3010_{22}$. So that leaves me with trying to calculate $\dfrac{0.8451}{0.3010}$. I cannot be bothered to do long division, so instead I try to calculate $\text{antilog}\,({\log 0.8451 -\log 0.3010})$. </p></li> <li><p>My tables tell me $\log 8.45 \approx 0.9269_5$ so I write $\log 0.8451 \approx \bar{1}.9269$ (with the $\bar{1}$ because I wanted $\log (8.451\times 10^{-1})= -1+\log 8.451$). Similarly it tells me $\log 3.01 \approx 0.4786_{14}$ so I write $\log 0.3010 \approx \bar{1}.4786$</p></li> <li><p>I now calculate by hand $\bar{1}.9269 - \bar{1}.4786 = 0.4483$. My tables tell me $\text{antilog}\,0.448 \approx 2.805_7$ and I interpolate the final $3$ using the ${\,}_7$ to give a final approximate answer of $2.807$. Which is what you got with some clever silicon</p></li> </ul>
2,661,443
<p>For the equation $2^x = 7$</p> <p>The textbook says to use log base ten to solve it like this $\log 2^x = \log 7$. </p> <p>I then re-arrange it so that it reads $x \log 2 = \log 7$ then divide the RHS by $\log 2$ to isolate the $x$. I understand this part.</p> <p>I can alternatively solve it in an easier way by simply using $\log_2 7$ on my calculator.</p> <p>Using both methods the answer comes to the same which is $2.807$</p> <p>My question is twofold:</p> <ol> <li><p>Why would the textbook suggest to use log base ten rather than simply using log base two?</p></li> <li><p>I can see how using log base ten and the suggested method in the textbook makes me arrive at the same answer but I don't understand WHY this is so. How does base ten play a factor in the whole scope of things.</p></li> </ol> <p>Thank you</p>
chux - Reinstate Monica
83,175
<blockquote> <p>Why would the textbook suggest to use log base ten rather than simply using log base two?</p> </blockquote> <p>Hmmm, if we embrace binary and use log<sub>2</sub>(), how about using base 2 for the constants and representation also? </p> <p>Then $2^x = 7$ becomes $10^x = 111$ and is solved with $x = 10.1100111010...$</p> <p>When numbers are represented in base 10, using log<sub>n</sub>() with $n \ne 10$ takes an unnecessary additional manipulation and prevents observing niceties like log<sub>2</sub>(100) == 2.</p> <hr> <blockquote> <p>How does base ten play a factor in the whole scope of things?</p> </blockquote> <p>As base 10 is commonly understood by the target audience (students), using log<sub>10</sub>() is also more readily understood by them. It is more instructive than employing base 2.</p> <p>On the other hand, if the textbook were for processors, using base 2 would make more sense.</p>
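<p>The base-independence underlying all three answers can be spot-checked directly: the ratio of logarithms in any fixed base equals the base-2 logarithm. A plain-Python sketch:</p>

```python
import math

x_ratio = math.log(7) / math.log(2)  # the textbook route: a ratio of logs
                                     # (same value in any fixed base, incl. 10)
x_direct = math.log2(7)              # direct base-2 logarithm
x_explicit = math.log(7, 2)          # log with an explicit base argument

print(round(x_direct, 3))  # 2.807
```

<p>All three agree to machine precision, which is exactly why the textbook's base-10 method and the calculator's log<sub>2</sub> button give the same $2.807$.</p>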
224,226
<p>I am trying to count the number of distinct colours in a <span class="math-container">$5\times5$</span> box, (a radius 2 filter) at all points over a quantized image. I cannot seem to get anything out of the following code except for a black square:</p> <pre><code>img = ColorQuantize[ExampleData[{&quot;TestImage&quot;, &quot;Peppers&quot;}], 8, Dithering -&gt; False]; dis = ImageFilter[CountDistinct[Flatten[#, 1]] &amp;, img, 2]; dis // ImageAdjust </code></pre> <p>I expect each pixel in the image to be replaced with a single non-negative integer telling me how many unique colours are in the radius 2 vicinity of that pixel. It's plain to see you can choose <span class="math-container">$5\times5$</span> boxes in the <em>peppers</em> image which have more than one colour so the output should look more interesting.</p> <p>I'd also like to know why this related code for 1D produces all 1's instead of the number of unique elements in a centered window as it slews across the list, and how to correct it:</p> <pre><code>MovingMap[CountDistinct, {1, 2, 3, 3, 3, 4, 5, 6}, {1, Center}, &quot;Reflected&quot;] </code></pre> <p>For 1D, I want to achieve with <code>MovingMap</code> the same behaviour you can get with <code>Partition</code> like this:</p> <pre><code>CountDistinct /@ Partition[{1, 2, 3, 3, 3, 4, 5, 6}, 3, 1, 2, {}] </code></pre>
flinty
72,682
<p>I came up with another method. I can replace the colours in the original quantized image with single numbers to form a palettized grayscale image first. Then <code>ImageFilter</code> works and I get the same image as in <strong>@MarcoB's</strong> answer - seeing it with two different methods is enough confirmation for me that the image is good.</p> <pre><code>uniquecolours = Flatten[ImageData[img], 1] // DeleteDuplicates; pltimg = Image@Map[FirstPosition[uniquecolours, #] &amp;, ImageData[img], {2}]; dis = ImageFilter[CountDistinct, pltimg, 2]; dis // ImageAdjust </code></pre>
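<p>The 1-D behaviour the question wants (a centred distinct-count window, clipped at the list ends, as the <code>Partition</code> variant produces) is easy to sketch outside Mathematica; here in Python, purely as an illustration (the function name is mine):</p>

```python
def windowed_distinct(xs, radius=1):
    # distinct count in a centred window, clipped at the ends of the list
    return [len(set(xs[max(0, i - radius): i + radius + 1]))
            for i in range(len(xs))]

print(windowed_distinct([1, 2, 3, 3, 3, 4, 5, 6]))  # [2, 3, 2, 1, 2, 3, 3, 2]
```

<p>This should agree with <code>CountDistinct /@ Partition[{1, 2, 3, 3, 3, 4, 5, 6}, 3, 1, 2, {}]</code> from the question.</p>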
1,712,457
<blockquote> <p>Assume $f$ is differentiable over an open interval $I$. Suppose $a&lt;b$ are two numbers in $I$ with $f'(a) &lt; f'(b)$. Show that if $f'(a) &lt; 0 &lt;f'(b)$, then neither $f(a)$ nor $f(b)$ can be the minimum value of $f$ over $[a,b]$.</p> </blockquote> <p>Intuitively this makes sense: the sign of $f'$ must change on $[a,b]$, and thus $f$ will have a relative minimum point where the first derivative is $0$. Since $f$ isn't the constant function and its derivative changes sign at least once, neither $f(a)$ nor $f(b)$ can be the minimum on the interval.</p> <p>Is this reasoning fine or do I need to be more mathematical?</p>
Paramanand Singh
72,031
<p>You need to understand the meaning of sign of derivative.</p> <p><em>Let $f'(c) &gt; 0$ then there is an interval $J$ with $c$ as an interior point such that if $x&gt;c$ and $x \in J$ then $f(x)&gt;f(c)$ and if $x&lt;c$ and $x\in J$ then $f(x)&lt;f(c)$.</em></p> <p>Same way we have corresponding result for the case $f'(c)&lt;0$. For the current question note that $f'(a)&lt;0$ and hence there is an interval containing $a$ where values of $f(x)$ to the right of $x=a$ are less than $f(a)$ so $f(a)$ is not the minimum value of $f$ in $[a,b]$. And since $f'(b)&gt;0$ there is an interval containing $b$ where values of $f(x)$ to the left of $x=b$ are less than $f(b)$ and hence $f(b)$ is not the minimum value of $f$ on $[a,b]$.</p>
2,579,140
<p>N. Elkies' page <a href="http://www.math.harvard.edu/~elkies/trinomial.html" rel="nofollow noreferrer">http://www.math.harvard.edu/~elkies/trinomial.html</a> ends with an information about octic trinomials "whose Galois group is contained in $G_{1344}$".</p> <p>One of reported trinomials, $x^8+324x+567$, "has a smaller Galois group, which embeds as a <strong>transitive subgroup</strong> into $G_{1344}$ (...) This Galois group is isomorphic with $G_{168}$, acting on 8 letters via the other guise of that group, as $PSL_{2}(\mathbb{Z}/7\mathbb{Z})$".</p> <p>Using GAP one can check that indeed mentioned trinomial has Galois group $PSL(2,7)$:</p> <pre><code>gap&gt; x:=Indeterminate(Rationals, "x");; gap&gt; GaloisType(x^8+324*x+567); 37 gap&gt; t37:=TransitiveGroup(8,37); L(8)=PSL(2,7) gap&gt; Size(t37); 168 </code></pre> <p>In the context of the above web page it is surprising for me that $PSL(2,7)$ <strong>is not</strong> a subgroup of $G_{1344}$:</p> <pre><code>gap&gt; t48:=TransitiveGroup(8,48); E(8):L_7=AL(8) gap&gt; Size(t48); 1344 gap&gt; IsSubgroup(t48, t37); false </code></pre> <p>Moreover, there is another Galois group $G_{168}$, $C_2^3:(C_7: C_3)$, which <strong>is</strong> a subgroup of $G_{1344}$:</p> <pre><code>gap&gt; t36:=TransitiveGroup(8,36); E(8):F_21 gap&gt; Size(t36); 168 gap&gt; IsSubgroup(t48, t36); true </code></pre> <p><strong>Question</strong>: How should I understand this contradiction/"contradiction"?</p>
Dietrich Burde
83,966
<p>The group $G_{168}$ <strong>is</strong> isomorphic to $PSL(2,7)$, and it is a subgroup of $G_{1344}$. Elkies writes "This Galois group is isomorphic with $G_{168}$, acting on $8$ letters via the other guise of that group, as $PSL_2(\mathbb{Z}/7\mathbb{Z})$." So you could understand the "contradiction" reviewing his word "the other guise".</p>
2,184,593
<p>By Cauchy's criterion of limit (not the sequential criterion), show that $$\lim_{x\to 0}(\sin{\frac{1}{x}}+x\cos{\frac{1}{x}})$$ does not exist.</p> <p>Cauchy's criterion of limit </p> <p>$\lim_{x\to c}f(x)=l$ iff for every $\epsilon&gt;0$, there exists $\delta&gt;0$ such that $$|f(x_2)-f(x_1)|&lt;\epsilon$$ for $0&lt;|x_1-c|&lt;\delta$ and $0&lt;|x_2-c|&lt;\delta$. </p> <p>Please suggest $x_2, x_1$ and help me to solve the problem.</p>
5xum
112,884
<p><strong>Hint</strong>:</p> <p>You can find, for any $\epsilon &gt; 0$, a value of $x_1$ such that $0&lt;x_1&lt;\epsilon$ and $\sin\frac1{x_1}=1$, and a value of $x_2$ such that $0&lt;x_2&lt;\epsilon$ and $\sin\frac{1}{x_2}=-1$. </p>
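<p>The hint can be made concrete numerically: taking $x_1 = 1/(2n\pi + \pi/2)$ and $x_2 = 1/(2n\pi - \pi/2)$, both points approach $0$ while $f(x_1) - f(x_2)$ stays near $2$, so no $\delta$ can work for $\epsilon &lt; 2$. A plain-Python sketch:</p>

```python
import math

def f(x):
    return math.sin(1.0 / x) + x * math.cos(1.0 / x)

for n in (10, 1000, 100000):
    x1 = 1.0 / (2 * n * math.pi + math.pi / 2)  # sin(1/x1) = 1
    x2 = 1.0 / (2 * n * math.pi - math.pi / 2)  # sin(1/x2) = -1
    print(n, x1, f(x1) - f(x2))  # the gap stays near 2 as x1, x2 -> 0
```
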
2,184,593
<p>By Cauchy's criterion of limit (not sequencial criterion), show that $$\lim_{x\to 0}(\sin{\frac{1}{x}}+x\cos{\frac{1}{x}})$$ does not exist.</p> <p>Cauchy's criterion of limit </p> <p>$\lim_{x\to c}f(x)=l$ iff for every $\epsilon&gt;0$, there exists $\delta$ such that $$|f(x_2)-f(x_1)|&lt;\epsilon$$ for $0&lt;|x_1-c|&lt;\delta$ and $0&lt;|x_2-c|&lt;\delta$. </p> <p>Please suggest $x_2, x_1$ and help me to solve the problem.</p>
Math-fun
195,344
<p>Note first that $$\lim_{x \to 0}x \cos \frac 1x=0$$ so you can safely focus on $\sin \frac 1x$. Now consider $x_n=\frac 1{2n\pi+\frac{\pi}2}$ and $y_n=\frac1{n\pi}$ for $n\to \infty$ and see what happens.</p>
359,742
<p>I have a mathematical problem that leads me to a particular necessity. I need to compute the convolution of a function with itself a certain number of times. </p> <p>So consider a generic function $f : \mathbb{R} \mapsto \mathbb{R}$ and consider these hypotheses:</p> <ul> <li>$f$ is continuous on $\mathbb{R}$.</li> <li>$f$ is bounded, so: $\exists A \in \mathbb{R} : |f(x)| \leq A, \forall x \in \mathbb{R}$.</li> <li>$f$ is integrable, so its area is a real number: $\exists \int_a^bf(x)\mathrm{d}x &lt; \infty, \forall a,b \in \mathbb{R}$. This implies that such a function tends to zero at infinity.</li> </ul> <p><strong>Probability density functions:</strong> Such functions fit the constraints given above, so it might be easier to think of $f$ as the pdf of some continuous r.v.</p> <p>Consider the convolution operation: $a(x) \ast b(x) = c(x)$. I always name the variable $x$.</p> <p>Consider now the following function:</p> <p>$$ F^{(n)}(x) = f(x) \ast f(x) \ast \dots \ast f(x), \text{for n times} $$</p> <p>I want to evaluate $F^{(\infty)}(x)$, and I would like to know whether there is a general final result for a function like $f$.</p> <h3>My trials</h3> <p>I tried a little in Mathematica using the Gaussian distribution. What happens is that, as $n$ increases, the bell stretches and its peak gets lower and lower until the function almost lies flat along the x axis. It seems like $F^{(\infty)}(x)$ tends to the function $y=0$...</p> <p><img src="https://i.stack.imgur.com/8FDFH.png" alt="Trials in Mathematica"></p> <p>As $n$ increases, the curves get lower and lower. </p>
Daniel Naftalovich
203,380
<p>While I don't know about the general case of your $F^{(\infty)}(x)$ for any such generic function $f$, it seems that in some cases you can find explicit closed forms for $n$-fold convolutions for some functions $f$. Maybe then you can study the limit as $n\rightarrow\infty$ for your purposes?</p> <p>For example, I found useful information in <code>Grinstead and Snell's Introduction to Probability</code> which has a page about this question for the case of a finite number of convolutions (chapter 7, pg 300 in my copy) and lists the equation </p> <p>$$ f_{S_n}(x)=\frac{1}{\sqrt{2\pi n}}e^{-x^2/2n} $$</p> <p>representing $n$ convolutions of a Gaussian, with $S_n=X_1+X_2+\dots+X_n$ where $X_1,X_2,\dots,X_n$ are independent Gaussian random variables with mean 0 and variance 1. $S_n$ is defined by their $n$-fold convolution as $f_{S_n}=(f_{X_1}*f_{X_2}*\dots*f_{X_n})(x)$. So for your Gaussian experiment, maybe this can provide some insight about the convergence to zero.</p> <p>The book shows similar examples for uniform and exponential random variables. They note that this can be done for certain cases, implying that it is not generally applicable. They also provide the following reference for additional information (which I have not studied): <code>J. B. Uspensky, Introduction to Mathematical Probability (New York: McGraw-Hill, 1937), p. 277.</code></p>
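<p>The $n=2$ case of that closed form, $f_{S_2}(x) = e^{-x^2/4}/\sqrt{4\pi}$, can be checked with a discrete convolution; a sketch assuming NumPy is available (the grid and tolerance are arbitrary choices of mine):</p>

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)          # symmetric grid centred at 0
dx = x[1] - x[0]
f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # standard normal density

# discrete (f * f): the dx factor turns the sum into an integral
conv = np.convolve(f, f, mode='same') * dx
target = np.exp(-x**2 / 4) / np.sqrt(4 * np.pi)  # N(0, 2) density

print(np.max(np.abs(conv - target)))  # tiny discretisation error
```

<p>The convolved curve still integrates to $1$ but its peak drops from $1/\sqrt{2\pi}$ to $1/\sqrt{4\pi}$, which is exactly the flattening seen in the question's plots.</p>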
3,988,180
<p>The task: <span class="math-container">$$ \text{Calculate } \int_{C}{}f(z)dz\text{, where } f(z)=\frac{\bar{z}}{z+i}\text{, and } C \text{ is a circle } |z+i|=3\text{.} $$</span> Finding the circle's center and radius: <span class="math-container">$$ |z+i|=|x+yi+i|=|x+(y+1)i|=3 \\ x^2+(y+1)^2=3^2 $$</span> Parametrizing the circle: <span class="math-container">$$ z(t)=i+3e^{-2\pi i t} $$</span></p> <p>Now I need to calculate this integral: <span class="math-container">$$ \int_{0}^{1}{f(z(t))z'(t)dt}= 2\pi\int_{0}^{1}{ \frac{1-3ie^{-2\pi i t}}{e^{4 \pi i t}}dt } $$</span></p> <p>Unfortunately I calculated this integral, and it's equal to <span class="math-container">$0$</span>. Is this correct? I don't think so. Where did I go wrong? Maybe I made a mistake when calculating the integral - what would be the best way to calculate it?</p>
Mark Viola
218,419
<p>Note that on the circle <span class="math-container">$C$</span>, given by <span class="math-container">$|z+i|=3$</span>, we have <span class="math-container">$\overline{z+i}=\frac9{z+i}$</span>. Hence, on <span class="math-container">$C$</span></p> <p><span class="math-container">$$f(z)=\frac9{(z+i)^2}+i\frac{1}{z+i}$$</span></p> <p>Then,</p> <p><span class="math-container">$$\begin{align} \oint_{|z+i|=3}\frac{\overline{z}}{z+i}\,dz&amp;=\color{red}{\oint_{|z+i|=3}\frac{9}{(z+i)^2}\,dz}+\color{blue}{i\oint_{|z+i|=3}\frac{1}{z+i}\,dz}\\\\ &amp;=\color{red}{0}\color{blue}{-2\pi} \end{align}$$</span></p> <hr /> <hr /> <p>As an alternative approach, we will evaluate the integral using parameterization. On <span class="math-container">$C$</span>, <span class="math-container">$z=-i+3e^{i\theta}$</span> where <span class="math-container">$\theta \in [0,2\pi]$</span>. Furthermore, on <span class="math-container">$C$</span> we have <span class="math-container">$f(z)=\frac{\bar z}{z+i}=\frac{i+3e^{-i\theta}}{3e^{i\theta}}$</span> and <span class="math-container">$dz=i3e^{i\theta}\,d\theta$</span>. Hence,</p> <p><span class="math-container">$$\begin{align} \oint_C f(z)\,dz&amp;=\int_0^{2\pi} \left(-1+i3e^{-i\theta}\right)\,d\theta\\\\ &amp;=-2\pi \end{align}$$</span></p> <p>as expected!</p>
3,988,180
<p>The task: <span class="math-container">$$ \text{Calculate } \int_{C}{}f(z)dz\text{, where } f(z)=\frac{\bar{z}}{z+i}\text{, and } C \text{ is a circle } |z+i|=3\text{.} $$</span> Finding the circle's center and radius: <span class="math-container">$$ |z+i|=|x+yi+i|=|x+(y+1)i|=3 \\ x^2+(y+1)^2=3^2 $$</span> Parametrizing the circle: <span class="math-container">$$ z(t)=i+3e^{-2\pi i t} $$</span></p> <p>Now I need to calculate this integral: <span class="math-container">$$ \int_{0}^{1}{f(z(t))z'(t)dt}= 2\pi\int_{0}^{1}{ \frac{1-3ie^{-2\pi i t}}{e^{4 \pi i t}}dt } $$</span></p> <p>Unfortunately I calculated this integral, and it's equal to <span class="math-container">$0$</span>. Is this correct? I don't think so. Where did I go wrong? Maybe I made a mistake when calculating the integral - what would be the best way to calculate it?</p>
Community
-1
<p>Observe that <span class="math-container">$$ \frac{\overline{z}}{z+i} =\frac{\overline{z}\cdot \overline{(z+i)}}{(z+i)\cdot \overline{(z+i)}} =\frac{\overline{z}\cdot \overline{(z+i)}}{|z+i|^2} =\frac{1}{9}\cdot \overline{z}\cdot \overline{(z+i)} $$</span></p> <p>Your parametrization of the path should be <span class="math-container">$$ z(t)=\color{red}{-i}+3e^{2\pi i t},\quad [0,1] $$</span></p> <p>If your integral is <span class="math-container">$I$</span>, then <span class="math-container">\begin{align} 9I &amp;=\int_C \overline{z}\cdot \overline{(z+i)}\,dz\\ &amp;=\int_0^1 (i+3e^{-2\pi i t})\cdot 3e^{-2\pi i t}\cdot 6\pi i\cdot e^{2\pi it} \,dt\\ &amp;=\int_0^1 (i+3e^{-2\pi i t})\cdot 18\pi i\,dt=-18\pi \end{align}</span></p> <p>and thus <span class="math-container">$I=-2\pi$</span>.</p>
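<p>Both answers give $-2\pi$, and a brute-force numerical contour integral confirms it; a sketch assuming NumPy is available (grid size is an arbitrary choice):</p>

```python
import numpy as np

N = 20001
t = np.linspace(0.0, 2 * np.pi, N)
dt = t[1] - t[0]
z = -1j + 3 * np.exp(1j * t)          # the circle |z + i| = 3
dz = 3j * np.exp(1j * t)              # dz/dt
integrand = (np.conj(z) / (z + 1j)) * dz

value = np.sum(integrand[:-1]) * dt   # left Riemann sum over one full period
print(value)  # approximately -2*pi + 0j
```

<p>For a smooth periodic integrand the equally spaced Riemann sum over a full period is extremely accurate, so the result matches $-2\pi$ to many digits.</p>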
2,804,129
<p>To Evaluate the Limit $$L=\lim_{n \to \infty}\left(1+\sum_{k=1}^{n} \frac{1}{\binom{n}{k}}\right)^n \tag{1}$$</p> <p>My try:</p> <p>I tried to use $$\frac{1}{\binom{n}{k}}+\frac{1}{\binom{n}{k+1}}=\frac{n+1}{n} \frac{1}{\binom{n-1}{k}} $$</p> <p>taking summation both sides from $k=1$ to $k=n$ we get</p> <p>$$\sum_{k=1}^{n} \frac{1}{\binom{n}{k}}+\sum_{k=1}^{n} \frac{1}{\binom{n}{k+1}}=\frac{n+1}{n} \sum_{k=1}^{n} \frac{1}{\binom{n-1}{k}} \tag{2}$$</p> <p>Now let $$S=\lim_{n \to \infty} \sum_{k=1}^{n} \frac{1}{\binom{n}{k}}$$ we have from $(2)$</p> <p>$$S+S=S$$</p> <p>hence $$S=0$$</p> <p>Now $(1)$ is in form of $1^{\infty}$ Indeterminate form whose limit is given by</p> <p>$$L=e^\left({\lim_{n \to \infty}}n \times \sum_{k=1}^{n} \frac{1}{\binom{n}{k}}\right)$$</p> <p>How to proceed now?</p>
Kavi Rama Murthy
142,385
<p>Since the determinant of $M$ is $1$, there can be only one solution. Equate $T(a+be^{2t}+ce^{4t})$ to $1+2e^{2t}+3e^{4t}$ and you will get $a=13, b=-10, c=3$.</p>
2,595,247
<p>What is the equation of the circle to which the lines $y=x$ and $y=x-4$ are tangent at $(2,2)$ and $(4,0)$, respectively?</p>
Wouter
89,671
<p>Because $\lim_{n\rightarrow\infty} x^{n+1}=0$ whenever $x\in [0,1[$.</p> <p>So your integrand approaches 0 everywhere except at $x=1$, which is just one point and thus doesn't influence the value of the integral.</p>
21,243
<p>I know that this question is some sort of bridge between Informatics and Mathematics; not knowing the best place to post it, I opted for this site because of the type of answer I want (which concerns Math more than anything).</p> <p>Consider a graph represented by a collection of nodes and connections. The best way to describe the graph is through the adjacency matrix (AM): a matrix with a 1 if one node is connected to another and a 0 otherwise (we consider non-directed graphs, so all connections are bidirectional).</p> <p>Does anyone know whether the eigenvalues of this matrix imply something about the graph — properties, topology...? I ask because I've almost finished studying Markov chains. In a chain, one can examine the eigenvalues of the matrix of transition probabilities $P$ (for discrete-time Markov chains) or of the matrix of transition rates $Q$ (for continuous-time Markov chains) in order to inspect whether the chain is ergodic or not (with a parallel to Control Theory: eigenvalues in the unit circle or in the negative half-plane).</p> <p>I am trying to find something similar for graphs.</p> <p>Thank you </p>
Udara
15,658
<p>I'm also interested in the convergence of Markov chains on a graph and found your question very interesting. As far as I know, the eigenvalues of the adjacency matrix imply many things. They reflect the connectivity of the graph: if the difference between the largest eigenvalue and the second largest eigenvalue (known as the spectral gap) is large, the graph is well connected. In fact, in spectral graph theory the expansion parameter of a ($d$-regular) graph is lower bounded in terms of the spectral gap of the adjacency matrix. You can read more on this in <a href="http://www.math.ias.edu/~boaz/ExpanderCourse/allnotes.pdf" rel="nofollow">http://www.math.ias.edu/~boaz/ExpanderCourse/allnotes.pdf</a></p> <p>I have a question on one thing you mentioned: "In a chain, the matrix of transition probabilities P (for discrete time markov chains), or the matrix of transition frequencies Q (for cont-time markov chains), can be evaluated (their eigenvalues) in order to inspect whether the chain is ergodic or not (with a parallelism to Control Theory: eigenvalues in the unitary circle or in the negative half-plane)."</p> <p>Can you please elaborate on this and help me understand which properties of a Markov chain on a graph can be inferred from the eigenvalues of its transition probability matrix?</p>
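<p>The spectral-gap/connectivity link mentioned above can be seen on tiny examples: a connected regular graph has a strictly positive gap, while a disconnected one has its top adjacency eigenvalue repeated, so the gap is zero. A sketch assuming NumPy is available (the two example graphs are my choices):</p>

```python
import numpy as np

def top_two(A):
    w = np.linalg.eigvalsh(A)  # ascending; real since A is symmetric
    return w[-1], w[-2]

# connected 2-regular graph: the 4-cycle C4 (eigenvalues 2, 0, 0, -2)
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)

# disconnected 1-regular graph: two disjoint edges (eigenvalues 1, 1, -1, -1)
K2K2 = np.array([[0, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

gap_connected = top_two(C4)[0] - top_two(C4)[1]         # 2 - 0 = 2
gap_disconnected = top_two(K2K2)[0] - top_two(K2K2)[1]  # 1 - 1 = 0
print(gap_connected, gap_disconnected)
```
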
1,515,478
<p>Suppose we are given just one root of a quadratic equation (for example $6-\sqrt{2}$). Do there exist integers $a,b$ and $c$ with $ax^2 + bx + c = 0$ for that root?</p>
More water plz
282,704
<p>Yes. Given $r_1 = 6-\sqrt{2}$, the conjugate $r_2 = 6+\sqrt{2}$ must also be a root: a quadratic with rational coefficients that has $6-\sqrt{2}$ as a root also has its conjugate as a root.</p> <p>So the quadratic equation is given by </p> <p>$(x-(6-\sqrt{2}))(x-(6+\sqrt{2}))=(x-6)^2-2 = x^2-12x+34 = 0$</p>
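<p>A quick numerical check that both conjugate values really are roots of $x^2-12x+34$ (plain Python, function name mine):</p>

```python
import math

def p(x):
    # the quadratic built from the conjugate pair 6 +/- sqrt(2)
    return x * x - 12 * x + 34

r1 = 6 - math.sqrt(2)
r2 = 6 + math.sqrt(2)
print(p(r1), p(r2))  # both zero up to floating-point rounding
```
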
2,863,995
<p><em>Problem:</em></p> <p>Let $f_n: [0,1] \to \mathbb{R}$ be a sequence of measurable functions. </p> <p>Suppose that $\int_{0}^{1}|f_n(x)|^2 ~ dx \le 1$ for $n \in \mathbb{N}$ and $f_n$ converges to $0$ a.e. </p> <p>Show that $\lim_{n \to \infty} \int_{0}^{1} f_n(x) ~ dx = 0$.</p> <p><em>Question:</em> <strong>Is the following solution correct? If not, how can it be fixed?</strong></p> <p><em>Proposed Solution:</em></p> <p>We have $\|f_n\|_{L^2} \le 1$ $\forall n \in \mathbb{N}$ and the measure space is sigma-finite. So $\|f_n\|_{L^1} \le \|f_n\|_{L^2} \le 1$ and $f_n$ is bounded above by the integrable function $g(x) = 1$ for every $n$. By the pointwise-a.e. convergence, $f \equiv 0$, and $|f| \le g$. So by DCT, the result follows.</p>
Mike Earnest
177,399
<p>Here is a way to solve the problem. Since $f_n\to 0$ a.e., and the measure space is finite, you have $f_n\to 0$ in probability. Given $\epsilon&gt;0$, let $A_n=\{|f_n|&gt;\epsilon\}$. Then $$ \int_0^1 |f_n|\,dx=\int_{A_n} |f_n|\,dx + \int_{[0,1]\setminus A_n}|f_n|\,dx\le \int_0^1 |f_n(x)|{\bf 1}_{A_n}(x)\,dx + \epsilon $$ Now apply Cauchy-Schwarz to that last integral.</p>
295,517
<p>My math is not incredibly strong and perhaps I have just not been searching for the right terms, but I have a summation that is part of an algorithm I've been working on and would really like to reduce it to just a formula, but am really struggling to find a solution (if one exists).</p> <p>$\sum_{i=1}^{n}\frac{5}{i^{0.35}}$</p> <p>Can anyone point me in the right direction as to how to approach this, or is likely not possible to reduce down to just a formula? Thanks very much in advance.</p>
Simply Beautiful Art
272,831
<p>In contrast to GEdgar's answer, we can use a different result (the factor $5$ in your sum simply multiplies everything below by $5$):</p> <p>$$\sum_{k=1}^n\frac1{k^{0.35}}\approx\zeta(0.35)+\frac1{0.65}n^{0.65}$$</p> <p>where $\zeta(0.35)\approx-1.0105112244$</p> <p>For example, with $n=100$,</p> <p>$\zeta(0.35)+\frac1{0.65}(100)^{0.65}=29.685832082775940902507610997434927477716286043004922$</p> <p>$\sum_{k=1}^{100}\frac1{k^{0.35}}=29.785537003681227808300987700054442855878969334832028$</p>
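<p>The two displayed numbers can be reproduced directly in plain Python (the constant $\zeta(0.35)$ is taken from the answer above; the remaining gap of about $0.0997$ is the next Euler–Maclaurin correction, $n^{-0.35}/2$):</p>

```python
zeta_035 = -1.0105112244  # zeta(0.35), as quoted above

def direct(n):
    # exact partial sum
    return sum(k ** -0.35 for k in range(1, n + 1))

def approx(n):
    # two-term asymptotic approximation
    return zeta_035 + n ** 0.65 / 0.65

print(direct(100))  # 29.7855...
print(approx(100))  # 29.6858...
```
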
2,501,518
<p>$\begin{pmatrix} a \\ b \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix}^T \begin{pmatrix} C &amp; D \\ D^T &amp; E \end{pmatrix}= \begin{pmatrix} I_m &amp; 0\\ 0 &amp; I_n \end{pmatrix} $</p> <p>$a$ and $b$ are vectors of length $m$ and $n$ respectively. $C$ has dimension $m \times m$ and $E$ is $n \times n$. </p> <p>From the above, I get $4$ matrix equations in $4$ unknowns:</p> <p>$$ a a^T C + a b^T D^T = I_m \tag{1}$$</p> <p>$$ b a^T C + b b^T D^T = 0 \tag{2}$$</p> <p>$$ a a^T D + a b^T E^T = 0 \tag{3}$$</p> <p>$$ b a^T D + b b^T E^T = I_n \tag{4}$$</p> <p>where $I$ is an identity matrix and $a$ is known. Is it possible to get $b, C, D, E$?</p> <p>The matrix $\begin{pmatrix} C &amp; D \\ D^T &amp; E \end{pmatrix}$ is symmetric and the sum of each row (or column) of this matrix is zero. </p> <p>I know $C$, $aa^T$, and $bb^T$ are symmetric, but I'm not sure if this information would help. Thank you. </p>
Community
-1
<p>There is no solution, for the following obvious reason. The rank of the right-hand side is $n+m$, while the rank of the left-hand side is bounded by the rank of $\begin{pmatrix} a \\ b\end{pmatrix}\begin{pmatrix} a \\ b\end{pmatrix}^T$, which is at most $1$.</p>
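<p>The rank argument can be illustrated numerically: the outer product of the stacked vector has rank $1$, so no matter what the symmetric right factor is, the left-hand side can never reach rank $m+n$. A sketch assuming NumPy is available, with hypothetical random data:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 4
a = rng.standard_normal(m)
b = rng.standard_normal(n)
v = np.concatenate([a, b])         # the stacked vector (a; b)

S = rng.standard_normal((m + n, m + n))
S = S + S.T                        # an arbitrary symmetric right factor

lhs = np.outer(v, v) @ S
r_outer = np.linalg.matrix_rank(np.outer(v, v))
r_lhs = np.linalg.matrix_rank(lhs)
print(r_outer, r_lhs)  # 1 and at most 1, never m + n = 7
```
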
490,064
<p>Solve the Cauchy problem, $\forall t \in \mathbb{R}$, $$ \begin{cases} u''(t) + u(t) = |t|\\ u(0)=1, \quad u'(0) = -1 \end{cases} $$</p> <p>The solution to the homogeneous equation is $A\cos(t) + B \sin(t)$. Empirically, $|t|$ is "more or less" a particular solution; however, it is not differentiable at $0$... What is the fastest way to find a twice-differentiable particular solution?</p>
Amzoti
38,839
<p>As mentioned in the comments, you can solve the system for $t \ge 0$ and $t \lt 0$.</p> <p>However, you can use Laplace Transforms to solve the problem, and will arrive at:</p> <ul> <li>$t \ge 0, u(t) = t - 2 \sin t + \cos t$</li> <li>$t \lt 0, u(t) = \cos t - t$</li> </ul> <p>If we check the initial conditions, they match as does the solution with continuity.</p> <p>A piecewise plot shows:</p> <p><img src="https://i.stack.imgur.com/3yB2C.png" alt="enter image description here"></p>
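The piecewise solution above can be checked numerically with a central second difference (a minimal sketch; plain Python):

```python
import math

def u(t):
    # piecewise solution quoted in the answer
    if t >= 0:
        return t - 2*math.sin(t) + math.cos(t)
    return math.cos(t) - t

h = 1e-4
# u'' + u should equal |t| on both branches
for t in [0.5, 1.7, 3.0, -0.5, -2.0]:
    u2 = (u(t + h) - 2*u(t) + u(t - h)) / h**2  # central second difference
    assert abs(u2 + u(t) - abs(t)) < 1e-4

# initial conditions u(0) = 1, u'(0) = -1
assert abs(u(0) - 1) < 1e-12
assert abs((u(h) - u(-h)) / (2*h) + 1) < 1e-6
```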
3,858,362
<p>Solve <span class="math-container">$$\dfrac{x^3-4x^2-4x+16}{\sqrt{x^2-5x+4}}=0.$$</span> We have <span class="math-container">$D_x:\begin{cases}x^2-5x+4\ge0\\x^2-5x+4\ne0\end{cases}\iff x^2-5x+4&gt;0\iff x\in(-\infty;1)\cup(4;+\infty).$</span> Now I am trying to solve the equation <span class="math-container">$x^3-4x^2-4x+16=0.$</span> I have not studied how to solve cubic equations. Thank you in advance!</p>
Community
-1
<p>Answer: factor the numerator: <span class="math-container">$\frac{x^3-4x^2-4x+16}{\sqrt{x^2 - 5x+4}}= \frac{x(x^2 - 4)-4(x^2 - 4)}{\sqrt{x^2 - 5x+4}}=\frac{(x^2 - 4)(x-4)}{\sqrt{x^2 - 5x+4}}=\frac{(x-2)(x+2)(x-4)}{\sqrt{x^2 - 5x+4}}.$</span></p> <p>The numerator vanishes at <span class="math-container">$x=2$</span>, <span class="math-container">$x=-2$</span> and <span class="math-container">$x=4$</span>, but we may keep only the roots lying in the domain <span class="math-container">$x^2-5x+4&gt;0$</span>, i.e. <span class="math-container">$x\in(-\infty;1)\cup(4;+\infty)$</span>:</p> <ul> <li><span class="math-container">$x=4$</span> makes the denominator zero, so it is excluded;</li> <li><span class="math-container">$x=2$</span> gives <span class="math-container">$2^2-5\cdot 2+4=-2&lt;0$</span>, so the square root is undefined there;</li> <li><span class="math-container">$x=-2$</span> gives <span class="math-container">$4+10+4=18&gt;0$</span>, so it is admissible.</li> </ul> <p>Finally: <span class="math-container">$S=\{-2\}$</span>.</p>
3,536,671
<p>I have the following mathematical operations to use: Add, Divide, Minimum, Minus, Modulo, Multiply and Round.</p> <p>With these I need to get a number, run it through a combination of these and return 0 if the number is negative or equal to 0 and the number itself if the number is greater than 0.</p> <p>Is that possible?</p> <p>EDIT: Minus is Subtract</p>
J. W. Tanner
615,567
<p>Take <span class="math-container">$x-\min(0,x)$</span>.</p> <p>If <span class="math-container">$x&gt;0$</span>, then <span class="math-container">$\min(0,x)=0$</span>, so <span class="math-container">$x-\min(0,x)=x$</span>.</p> <p>If <span class="math-container">$x\le0$</span>, then <span class="math-container">$\min(0,x)=x$</span>, so <span class="math-container">$x-\min(0,x)=0$</span>.</p>
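The identity is easy to test mechanically; a minimal sketch using only subtraction and `min`, matching the allowed operations:

```python
def clamp_nonneg(x):
    # x - min(0, x): returns x if x > 0, else 0
    return x - min(0, x)

assert clamp_nonneg(7) == 7
assert clamp_nonneg(0) == 0
assert clamp_nonneg(-3) == 0
```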
1,862,524
<p>In the textbook I'm using to prepare the logic exam says that first order logic may be used to <strike>implement</strike> axiomatize data structures. There is an example of that:</p> <p>"Stack": uses a language that contains the <strike>predicates</strike> functions <em>top</em>, <em>pop</em> and <em>push</em>, and the constant <em>nil</em> (empty stack). It axioms are:</p> <ul> <li>$\forall s \forall x . top(push(s, x))=x$</li> <li>$\forall s \forall x . pop(push(s, x))=s$</li> <li>$\forall s . push(pop(s), top(s))=s$</li> </ul> <p>Premise: that I didn't understand this example completely (there's no explanation about how can these axioms describe a stack).</p> <p>Suppose I have to describe another data structure, for example a "Queue", what strategy do you follow to write the axioms of this data structure? Can you make an example of the axioms you would use?</p> <p><strong>EDIT</strong>: Using the @user21820 's answer as an example, I'm trying to answer my own question.</p> <p>Strategy:</p> <ol> <li>Describe the language used: $(Queue, add, remove, element, nil)$ where <ul> <li>$Queue$ is an unary predicate, $Queue(x)$ means $x \in Queue$;</li> <li>$add$ is a binary function. $add(q, e), q \in Queue$ returns the queue with a new element at the end of the queue;</li> <li>$remove$ is a binary function. 
$remove(q, e), q \in Queue$ returns the queue without the first element of the queue;</li> <li>$element$ is a unary function that returns the first element of the queue.</li> <li>$nil$ is a constant, means empty queue.</li> </ul></li> <li>Describe the axioms you need (they might be wrong): <ul> <li>remove an element from a queue with one element return the empty queue;</li> <li>remove an element from a queue return the queue without the first element;</li> <li>add an element from the queue returns the queue with the element added;</li> <li>element function returns the first element inserted;</li> </ul></li> <li>Write the axioms: <ul> <li>$remove(add(nil, x)) = nil$</li> <li>$\forall q \in Queue. \left(remove(add(q, x)) = ???\right)$</li> <li>$???$</li> <li>$???$</li> </ul></li> </ol>
George V. Williams
54,806
<p>The book is wrong, and your derivation is also wrong.</p> <p>Note that you substituted $u=8$ instead of $u=4$: you had already changed the bounds to integrate from $u=0$ to $u=4$. The correct answer is $4(e^4 - 1)$.</p> <p>$\int_0^4 \frac{1}{2} (8)e^u\, du = \int_0^4 4e^u\, du = 4e^u \Big\vert_0^4=4e^\color{red}{4}-4=4(e^\color{red}{4}-1)$</p>
4,280,426
<blockquote> <p>We have a bag with <span class="math-container">$3$</span> black balls and <span class="math-container">$5$</span> white balls. What is the probability of picking out two white balls if at least one of them is white?</p> </blockquote> <p>If <span class="math-container">$A$</span> is the event of first ball being white and <span class="math-container">$B$</span> the second ball being white, could it be <span class="math-container">$p\bigl((A|B)\cup(B|A)\bigr)$</span>? Although <span class="math-container">$B$</span> depends on <span class="math-container">$A$</span>, I don't understand why <span class="math-container">$A$</span> depends on <span class="math-container">$B$</span>, as <span class="math-container">$B$</span> occurs after <span class="math-container">$A$</span> has occurred.</p> <p>Thank you very much for your help.</p> <p>Edit: and the probability of obtaining two white balls if I have only one white (regardless if it’s the first or the second one)? Thank you very much for your help!</p>
Henno Brandsma
4,280
<p>The closed sets are all sets that are at most countable, or <span class="math-container">$\Bbb R$</span>. So the <em>only</em> closed superset of an uncountable set <span class="math-container">$A$</span> is <span class="math-container">$\Bbb R$</span> so</p> <p><span class="math-container">$$A \text{ uncountable } \implies \overline{A}=\Bbb R$$</span></p> <p>in this topology....</p>
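For the ball question itself (3 black, 5 white, two draws without replacement), the conditional probability $P(\text{both white}\mid\text{at least one white})$ can be computed by brute-force enumeration; a minimal illustrative sketch:

```python
from itertools import permutations

balls = ['W'] * 5 + ['B'] * 3            # 5 white, 3 black
draws = list(permutations(range(8), 2))  # ordered pairs, no replacement

both_white = sum(balls[i] == 'W' and balls[j] == 'W' for i, j in draws)
at_least_one = sum(balls[i] == 'W' or balls[j] == 'W' for i, j in draws)

# P(both white | at least one white) = 20/50 = 2/5
print(both_white / at_least_one)
```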
3,102,905
<p>I have the following sequence <span class="math-container">$$(x_{n})_{n\geq 1}, \ x_{n}=ac+(a+ab)c^{2}+...+(a+ab+...+ab^{n})c^{n+1}$$</span> Also I know that <span class="math-container">$a,b,c\in \mathbb{R}$</span> and <span class="math-container">$|c|&lt;1,\ b\neq 1, \ |bc|&lt;1$</span> I need to find the limit of <span class="math-container">$x_{n}$</span>.</p> <p>The result should be <span class="math-container">$\frac{ac}{(1-bc)(1-c)}$</span> I miss something at these two sums which are geometric progressions.Each sum should start with <span class="math-container">$1$</span> but why ? If k starts from 0 results the first terms are <span class="math-container">$bc$</span> and <span class="math-container">$c$</span> right?</p> <p>My attempt: <span class="math-container">$x_{n}=a(c+c^{2}(1+b)+...+c^{n+1}(1+b+...+b^{n}))$</span></p> <p><span class="math-container">$1+b+...+b^{n}=\frac{b^{n+1}-1}{b-1}$</span> so <span class="math-container">$$x_{n}=a\sum_{k=0}^{n}c^{k+1}\cdot \frac{b^{k+1}-1}{b-1}\Rightarrow x_{n}=\frac{a}{b-1}\sum_{k=0}^{n}c^{k+1}\cdot (b^{k+1}-1)=\frac{a}{b-1}(\sum_{k=0}^{n}c^{k+1}\cdot b^{k+1}-\sum_{k=0}^{n}c^{k+1})$$</span></p> <p>Now I take separately each sum to calculate.</p> <p><span class="math-container">$\sum_{k=0}^{n}(bc)^{k+1}=bc+b^2c^2+...+b^{n+1}c^{n+1}$</span></p> <p>It's a geometric progression with <span class="math-container">$r=bc$</span>, right ?But if a calculate the sum, in the end I don't get the right answer.I get the right answer if this progression starts with <span class="math-container">$1$</span> as first term.Why?</p> <p>The same thing with the second sum.If the first term is <span class="math-container">$1$</span> I'll get the right answer.</p> <p>Why I need to add/subtract a <span class="math-container">$1$</span> to get the answer?Why I don't get the correct answer just by solving the progressions with the first term <span class="math-container">$bc$</span> and <span class="math-container">$c$</span>?</p>
exp ikx
614,823
<p>Hint: Take <span class="math-container">$ac$</span> common. For finding the limit of the sequence, consider <span class="math-container">$n \rightarrow \infty$</span> and apply the formula for the sum of an infinite geometric series. Since <span class="math-container">$|r|&lt;1$</span>, the series converges and <span class="math-container">$S = \frac{a_1}{1-r}$</span> holds, where <span class="math-container">$a_1$</span> is the first term.</p>
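The claimed limit $\frac{ac}{(1-bc)(1-c)}$ can be checked numerically; a minimal sketch with arbitrary sample values $a=2$, $b=0.5$, $c=0.3$ (any values with $|c|<1$, $|bc|<1$ would do):

```python
a, b, c = 2.0, 0.5, 0.3            # sample values with |c| < 1, |bc| < 1
total = 0.0
inner = 0.0
for n in range(200):
    inner += a * b**n              # a + ab + ... + a*b^n
    total += inner * c**(n + 1)    # partial sum x_{n+1}

limit = a*c / ((1 - b*c) * (1 - c))
assert abs(total - limit) < 1e-10
```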
2,199,303
<p>Consider the DE $$y''+\lambda y=0$$ where $\lambda$ is a constant </p> <p>subject to the boundary conditions $$y(0)=0$$ and $$y(a)=0$$ where $a$ is a positive constant</p> <p>I want to find the eigenvalues and eigenfunctions related to this problem</p> <p>My attempt:</p> <p>The auxiliary equation is $$m^2-\lambda=0\implies m=\sqrt{-\lambda}$$</p> <p>Thus we have two cases</p> <p>Case (i) $$\lambda&lt;0$$ in which we have real different roots so let $$\lambda=-k^2\implies m=\pm k$$ thus the solution is $$y=Ae^{kx}+Be^{-kx}$$</p> <p>Now when $y(0)=0$ we have that $$0=Ae^{k(0)}+Be^{-k(0)}\implies -B=A$$ since $k\ne 0$</p> <p>Now when $y(a)=0$ we have that $$0=Ae^{ka}+Be^{-ka}\implies \frac{Ae^{ka}}{e^{-ka}}=-B$$ since $a\ne 0$ thus there are no eigenvalues for when $\lambda &lt;0$</p> <p>Can anyone tell me if i have made a mistake somewhere in this, because i'm unsure what to do from here, i know the next case to check will be when $\lambda&gt;0$ which would give complex roots, but i would like to know if my approach up to now is correct, thanks for taking the time to read this, and any help would be appreciated also i know that when i multiply through by $$q^*=\frac{1}{a_2}e^{\int\frac{a_1}{a_2}}dx=1$$ thus in normal ?self-adjoint form.</p>
Chappers
221,811
<p>Yes, your approach so far is correct. One should also check (preferably first, so you don't forget!) the $\lambda=0$ case: here, it is easy to check that the boundary conditions can't be satisfied.</p> <p>One can actually extract the general solution from your analysis. In order for there to be a nonzero solution, the system of equations $$ \begin{pmatrix} 1 &amp; 1 \\ e^{ka} &amp; e^{-ka} \end{pmatrix} \begin{pmatrix} A \\ B \end{pmatrix} = 0 $$ must be degenerate (else the only solution is zero). Hence the determinant of the matrix must vanish; this is $$ e^{-ka}-e^{ka}=0, $$ or $e^{2ka}=1$. Recalling Euler's formula, this occurs if and only if $2ka=2n\pi i $ for some integer $n$. Hence $k=\frac{n\pi i}{a}$. The condition at zero gives $y=2A\sinh{kx}$, and $\sinh{iz} = i\sin{z}$, so the eigenfunctions are proportional to $\sin{(n\pi x/a)}$, for $n \in \{ 1,2,3,\dotsc\}$. Certainly this <em>looks</em> plausible, since this satisfies the equation and boundary conditions.</p>
2,292,324
<p>I know what the answer to this question is, but I am not sure how the answer was reached and I would really like to understand it! I am omitting the base case because it is not relevant for my question.</p> <p>Inductive hypothesis:</p> <p>$$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{n(n+1)} = \frac{n}{n+1}$$ is true when $n = k$ and $k &gt; 1$</p> <p>Therefore: $$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)} = \frac{k}{k+1}$$</p> <p>Inductive step:</p> <p>Prove that $$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)} = \frac{k+1}{k+1+1} = \frac{k+1}{k+2}$$</p> <p>$$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)} = \left[\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)}\right] + \frac{1}{(k+1)(k+2)}$$</p> <p>$$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)} = \frac{k}{k+1} + \frac{1}{(k+1)(k+2)}$$</p> <p>$$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)} = \frac{k+1}{k+2}$$</p> <p>What I am confused about is where the $\frac{1}{(k+1)(k+2)}$ comes from in the first line of the inductive step. Can someone please explain this in a little more detail? The source of the answer explains it as "break last term from sum", but I am unclear on what that means.</p>
Mark Viola
218,419
<p>The base case $n=1$ is true. Assume that for some $n&gt; 1$ </p> <p>$$\sum_{k=1}^n\frac{1}{k(k+1)}=\frac{n}{n+1}$$</p> <p>Then, we have</p> <p>$$\begin{align} \sum_{k=1}^{n+1}\frac{1}{k(k+1)}&amp;=\sum_{k=1}^n\frac{1}{k(k+1)}+\frac{1}{(n+1)(n+2)}\\\\ &amp;=\frac{n}{n+1}+\frac{1}{(n+1)(n+2)}\\\\ &amp;=\frac{n(n+2)+1}{(n+1)(n+2)}\\\\ &amp;=\frac{(n+1)^2}{(n+1)(n+2)}\\\\ &amp;=\frac{n+1}{n+2} \end{align}$$</p> <p>which completes the proof by induction.</p>
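The closed form proved above can also be confirmed exactly with rational arithmetic; a minimal sketch:

```python
from fractions import Fraction

# exact check of 1/(1*2) + ... + 1/(n(n+1)) = n/(n+1)
for n in range(1, 50):
    s = sum(Fraction(1, k*(k + 1)) for k in range(1, n + 1))
    assert s == Fraction(n, n + 1)
```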
1,968,541
<p>$$144x^5 − 121x^4 + 100x^3 − 81x^2 − 64x + 49 = 0 $$</p> <p>I re-wrote it as </p> <p>$$ 12^2x^5 - 11^2x^4 + 10^2x^3 - 9^2x^2 - 8^2x + 7^2 = 0 $$</p> <p>And then as $$ \sum_{k=0}^{5} (k+7)^2(-1)^kr^k = 0 $$</p> <p>But I don't know what to do with that. Thanks for any help!</p>
Carl Schildkraut
253,966
<p>Generally, the best way to do these kinds of problems is by the rational root theorem. However, there is a nicer way here.</p> <p>Consider this polynomial $\mod 2$. If it has a solution $n\in\mathbb{Z}$, then</p> <p>$$-121n^4-81n^2+49\equiv 0\mod 2$$</p> <p>$$n^4+n^2+1\equiv 0\mod 2$$</p> <p>$$(n^2)(n^2+1)+1\equiv 0\mod 2$$</p> <p>However, $n^2$ and $n^2+1$ are of opposite parity, so their product is even and the LHS is $\equiv 1\mod 2$. This is a contradiction, and thus there are no integer roots to the polynomial.</p>
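The parity argument can be verified directly: $p(n)$ reduces to $n^4+n^2+1 \equiv 1 \pmod 2$, so $p(n)$ is odd for every integer $n$ and can never be zero. A minimal sketch:

```python
def p(x):
    return 144*x**5 - 121*x**4 + 100*x**3 - 81*x**2 - 64*x + 49

# p(n) is odd for every integer n, so p(n) = 0 is impossible
for n in range(-1000, 1001):
    assert p(n) % 2 == 1
```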
3,392,871
<blockquote> <p>Let <span class="math-container">$k&gt;1$</span> and define a sequence <span class="math-container">$\left\{a_{n}\right\}$</span> by <span class="math-container">$a_{1}=1$</span> and <span class="math-container">$$a_{n+1}=\frac{k\left(1+a_{n}\right) }{\left(k+a_{n}\right)}$$</span> (a) Show that <span class="math-container">$\left\{a_{n}\right\}$</span> is monotonic increasing. </p> </blockquote> <p>Assume <span class="math-container">$a_n \geq a_{n-1}$</span>. Then,</p> <p><span class="math-container">$$a_{n+1} = \frac{k(1+a_n)}{k+a_n} \geq \frac{k(1+a_{n-1})}{k+a_n}....$$</span></p> <p>But I get hung up on the <span class="math-container">$a_n$</span> in the denominator. I cannot replace it with <span class="math-container">$a_{n-1}$</span> since <span class="math-container">$a_n \geq a_{n-1}$</span>. Is there a trick to get around this?</p>
Theo Bendit
248,286
<p>Turn the question of whether <span class="math-container">$(a_n)$</span> is monotone increasing into an inequality purely in terms of a single term <span class="math-container">$a_n$</span>. In particular, <span class="math-container">$$a_n \le a_{n+1} = \frac{k(1 + a_n)}{k + a_n}.$$</span> Simplifying, making the temporary assumption that <span class="math-container">$k + a_n &gt; 0$</span>, <span class="math-container">$$a_n(k + a_n) \le k(1 + a_n) \iff a_n^2 - k \le 0 \iff a_n \in [-\sqrt{k}, \sqrt{k}].$$</span> Addressing this assumption, it is extremely easy to show <span class="math-container">$a_n &gt; 0$</span> for all <span class="math-container">$n$</span> by induction, hence <span class="math-container">$k + a_n &gt; k &gt; 1 &gt; 0$</span>. Thus, you really just need to show <span class="math-container">$a_n \le \sqrt{k}$</span>.</p> <p>You can now show this by induction. In fact, you can bundle the positivity proof in as well. That is, you can show that <span class="math-container">$0 \le a_n \le \sqrt{k}$</span> for all <span class="math-container">$n$</span> by induction.</p> <p>Let's begin. Since <span class="math-container">$k \ge 1$</span>, note that <span class="math-container">$0 \le 1 \le \sqrt{k}$</span>, hence <span class="math-container">$0 \le a_1 \le \sqrt{k}$</span>.</p> <p>Now, assume <span class="math-container">$0 \le a_n \le \sqrt{k}$</span>. Then, <span class="math-container">\begin{align*} a_{n+1} &amp;= \frac{k(1 + a_n)}{k + a_n} \\ &amp;= \frac{k + k^2 + ka_n - k^2}{k + a_n} \\ &amp;= \frac{k(k + a_n)}{k + a_n} + \frac{k - k^2}{k + a_n} \\ &amp;= k - \frac{k^2 - k}{k + a_n}. \end{align*}</span> Note that <span class="math-container">$k^2 - k &gt; 0$</span>, hence <span class="math-container">$x \mapsto k - \frac{k^2 - k}{k + x} = \frac{k(1 + x)}{k + x}$</span> is increasing for <span class="math-container">$x &gt; -k$</span>. 
So, since <span class="math-container">$-k &lt; 0 \le a_n \le \sqrt{k}$</span>, we have <span class="math-container">$$\frac{k(1 + 0)}{k + 0} \le \frac{k(1 + a_n)}{k + a_n} \le \frac{k(1 + \sqrt{k})}{k + \sqrt{k}} \implies 1 \le a_{n+1} \le \sqrt{k},$$</span> as required.</p>
3,392,871
<blockquote> <p>Let <span class="math-container">$k&gt;1$</span> and define a sequence <span class="math-container">$\left\{a_{n}\right\}$</span> by <span class="math-container">$a_{1}=1$</span> and <span class="math-container">$$a_{n+1}=\frac{k\left(1+a_{n}\right) }{\left(k+a_{n}\right)}$$</span> (a) Show that <span class="math-container">$\left\{a_{n}\right\}$</span> is monotonic increasing. </p> </blockquote> <p>Assume <span class="math-container">$a_n \geq a_{n-1}$</span>. Then,</p> <p><span class="math-container">$$a_{n+1} = \frac{k(1+a_n)}{k+a_n} \geq \frac{k(1+a_{n-1})}{k+a_n}....$$</span></p> <p>But I get hung up on the <span class="math-container">$a_n$</span> in the denominator. I cannot replace it with <span class="math-container">$a_{n-1}$</span> since <span class="math-container">$a_n \geq a_{n-1}$</span>. Is there a trick to get around this?</p>
trancelocation
467,003
<p>A possible way is to investigate the derivative of the function underlying the recursion and then use MVT together with induction:</p> <ul> <li><span class="math-container">$\boxed{f(x)} = k\frac{1+x}{k+x}= \frac{k+x + (k-1)x}{k+x}= 1 + (k-1)\frac{x}{k+x} = \boxed{1+ (k-1)\left(1 - \frac{k}{k+x} \right)}$</span></li> <li>Note that <span class="math-container">$f(x) \geq 1$</span> for <span class="math-container">$x \geq 0$</span>.</li> <li><span class="math-container">$f'(x)= \frac{k(k-1)}{(k+x)^2} &gt;0$</span> for <span class="math-container">$k&gt;1$</span></li> <li>The induction start gives <span class="math-container">$$a_2 = \frac{2k}{k+1} = 1+\frac{k-1}{k+1}&gt;1 = a_1$$</span></li> <li>The induction step uses the MVT and the positivity of <span class="math-container">$f'$</span>: <span class="math-container">$$a_{n+1}-a_n \stackrel{a_{n-1}&lt; \xi_n &lt; a_n}{=} \underbrace{f'(\xi_n)}_{&gt;0}(a_n - a_{n-1}) &gt; 0$$</span></li> </ul> <p>So, the sequence is strictly increasing.</p>
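Both conclusions (the sequence is increasing, bounded by $\sqrt k$, and converges to $\sqrt k$) are easy to observe numerically; a minimal sketch with the arbitrary sample value $k=2$ (small tolerances absorb floating-point rounding near the fixed point):

```python
import math

k = 2.0                       # any k > 1 works; 2 is an arbitrary sample
a = 1.0                       # a_1 = 1
prev = a
for _ in range(100):
    a = k * (1 + a) / (k + a)
    assert a >= prev - 1e-15              # monotone increasing
    assert a <= math.sqrt(k) + 1e-12      # bounded above by sqrt(k)
    prev = a

assert abs(a - math.sqrt(k)) < 1e-12      # the limit is sqrt(k)
```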
345,252
<blockquote> <p>Prove that $A \subset B$ if and only if $A \cap B = A$</p> </blockquote> <p>I'm having problems writing this proof using formal logic operators. I know the idea behind it: Since $A \subset B$, the only elements A and B have in common are those of A, so the intersection must be just A...</p>
Berci
41,488
<p>$\Rightarrow:$ We always have $A\cap B\subseteq A$. For the reversed containment, if $a\in A$, then $a\in B$ as well by assumption ($A\subseteq B$), so $A\subseteq A\cap B$.</p> <p>$\Leftarrow:$ We always have $A\cap B\subseteq B$.</p>
345,252
<blockquote> <p>Prove that $A \subset B$ if and only if $A \cap B = A$</p> </blockquote> <p>I'm having problems writing this proof using formal logic operators. I know the idea behind it: Since $A \subset B$, the only elements A and B have in common are those of A, so the intersection must be just A...</p>
Code-Guru
34,869
<p>Since you are proving an "if and only if" statement, you have two things to prove. Do you know what they are? If so, you will see that in one direction you need to prove that two sets are equal. Do you know how to do this? What about proving that one set is a subset of another?</p>
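The equivalence $A \subseteq B \iff A \cap B = A$ can be checked exhaustively on a small universe; a minimal sketch:

```python
from itertools import combinations

# exhaustive check of  A <= B  <=>  A & B == A  on a 3-element universe
U = {0, 1, 2}
subsets = [set(c) for r in range(len(U) + 1)
           for c in combinations(sorted(U), r)]
for A in subsets:
    for B in subsets:
        assert (A <= B) == (A & B == A)
```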
426,974
<p>Suppose the dynamical system <span class="math-container">$(X,T)$</span> has only proper factors (i.e. not <span class="math-container">$(X,T)$</span> itself) of zero topological entropy. Does the system <span class="math-container">$(X,T)$</span> also have zero entropy?</p>
Ville Salo
123,634
<p>YES under strong additional assumptions.</p> <blockquote> <p>Theorem. Let <span class="math-container">$X$</span> be compact metrizable and <em>zero-dimensional</em>, and let <span class="math-container">$T : X \to X$</span> be an <em>aperiodic homeomorphism</em>. If <span class="math-container">$(X, T)$</span> has only zero-entropy proper factors, then <span class="math-container">$(X, T)$</span> has zero entropy.</p> </blockquote> <p>I assume the asker already had &quot;compact metrizable&quot; in mind, as it's a standard category for this type of question. I guess &quot;homeomorphism&quot; is non-essential and you could take <span class="math-container">$T$</span> just continuous, but I didn't check. The assumption that <span class="math-container">$T$</span> is aperiodic is just to simplify the proof a little, I believe it is non-essential. From the tags I guess the zero-dimensional case is of interest, but (as I'll discuss after the proof) I believe it is essential for this proof.</p> <p>Proof. For a contradiction, suppose <span class="math-container">$(X, T)$</span> satisfies the assumptions, but has positive entropy. We may first take <span class="math-container">$X \subset \{0,1\}^{\mathbb{N}}$</span> where <span class="math-container">$\{0,1\}^{\mathbb{N}}$</span> is the Cantor set, and then after the topological conjugacy <span class="math-container">$x \mapsto (T^i(x))_{i \in \mathbb{Z}} \in (\{0,1\}^{\mathbb{N}})^{\mathbb{Z}}$</span>, we may assume <span class="math-container">$T$</span> is a shift map and <span class="math-container">$X$</span> a shift-invariant closed subset of <span class="math-container">$(\{0,1\}^{\mathbb{N}})^{\mathbb{Z}}$</span>.</p> <p>Restricting to the first <span class="math-container">$n$</span> coordinates is a factor map, and by the definition of entropy one of these factors has positive entropy. 
By the assumption that proper factors have zero entropy, we deduce that <span class="math-container">$(X, T)$</span> is topologically conjugate to a subshift: we have a finite discrete alphabet <span class="math-container">$A$</span>, <span class="math-container">$X \subset A^{\mathbb{Z}}$</span> is closed and shift-invariant, and <span class="math-container">$T$</span> is the shift map.</p> <p>Now take an <em>entropy pair</em> <span class="math-container">$(x, y) \in X^2$</span> [1], which exists by the assumption of positive entropy. This means <span class="math-container">$x \neq y$</span> and if <span class="math-container">$(U, V)$</span> is a cover of <span class="math-container">$X$</span> by non-dense open sets such that <span class="math-container">$x \in (U^c)^{\circ}, y \in (V^c)^{\circ}$</span>, then this cover has positive entropy. Since the entropy pair relation is <span class="math-container">$T \times T$</span>-invariant, we may assume <span class="math-container">$x_0 \neq y_0$</span>.</p> <p>For any <span class="math-container">$n \in \mathbb{N}$</span>, define now a factor map onto a subshift of <span class="math-container">$\{0,1\}^{\mathbb{Z}}$</span> by <span class="math-container">$f(z)_i = 1 \iff z_{[i, i+n]} = x_{[0, n]}$</span>. It is easy to see that the image subshift has positive entropy for any <span class="math-container">$n$</span>. Namely, consider the cover <span class="math-container">$\{[0], [1]\}$</span>, where for a word <span class="math-container">$w$</span>, <span class="math-container">$[w]$</span> denotes the cylinder set <span class="math-container">$\{z \;|\; z_{[0,|w|-1]} = w\}$</span>. Computing the entropy of this is equivalent to computing the entropy of the clopen cover <span class="math-container">$(U, V) = ([x_{[0, n]}]^c, [x_{[0, n]}])$</span> of <span class="math-container">$X$</span>. 
This cover satisfies <span class="math-container">$x \in (U^c)^{\circ}, y \in (V^c)^{\circ}$</span> because <span class="math-container">$x_0 \neq y_0$</span>, and clearly neither <span class="math-container">$U$</span> nor <span class="math-container">$V$</span> is dense, thus this cover has positive entropy since <span class="math-container">$(x, y)$</span> is an entropy pair, and we conclude the image has positive entropy.</p> <p>Since these are positive entropy factors, <span class="math-container">$(X, T)$</span> is topologically conjugate to each of them. By aperiodicity, the point <span class="math-container">$x$</span> is aperiodic, so density of <span class="math-container">$1$</span>s in the image subshift tends to <span class="math-container">$0$</span> as <span class="math-container">$n \rightarrow \infty$</span>. From this it is easy to deduce an upper bound on their entropies, which tends to <span class="math-container">$0$</span>. All these upper bounds apply to <span class="math-container">$X$</span>, so it is of zero entropy. Thus <span class="math-container">$(X, T)$</span> must have zero entropy. <span class="math-container">$\square$</span></p> <p>Specifically what fails for general systems is that the factors need to be taken in some type of continuum. I tried subsystems of variants of <span class="math-container">$[0,1]^{\mathbb{Z}}$</span>, and the problem is that unlike our discrete alphabet <span class="math-container">$\{0, 1\}$</span>, every symbol <span class="math-container">$a \in [0,1]$</span> can transmit an infinite amount of information, so to speak. 
On the other hand, I have no idea how you could possibly construct a system that actually forces this sort of thing for <em>all</em> factor maps, let alone for all other factors, so I don't really have a conjecture about the general case at the moment.</p> <p>[1] <em>Blanchard, François</em>, <a href="https://doi.org/10.24033/bsmf.2216" rel="nofollow noreferrer"><strong>A disjointness theorem involving topological entropy</strong></a>, Bull. Soc. Math. Fr. 121, No. 4, 465-478 (1993). <a href="https://zbmath.org/?q=an:0814.54027" rel="nofollow noreferrer">ZBL0814.54027</a>.</p>
2,553,284
<p>I know that $$\ln e^2=2$$ But what about this? $$(\ln e)^2$$ A calculator gave 1. I'm really confused.</p>
BDSub
504,256
<p>Since $\ln e = 1$, we have $(\ln e)^2 = 1^2 = 1$.</p>
2,553,284
<p>I know that $$\ln e^2=2$$ But what about this? $$(\ln e)^2$$ A calculator gave 1. I'm really confused.</p>
Fghj
458,544
<p>Consider the key property of logarithms</p> <p>$$ \log(m^n) = n \log (m) $$</p> <p>so that $$ \ln(e^2) = 2 \ln(e) = 2 $$</p> <p>On the other hand, $$ (\ln e)^2 = 1^2 = 1 $$</p> <p>The point is that $$ \ln(e^2) \ne (\ln e)^2 $$</p> <p>So the calculator's output of $1$ for $(\ln e)^2$ is correct.</p>
529,260
<p>Let $V$ be a complex vector space of dimension $n$ with a scalar product, and let $u$ be an unitary vector in $V$. Let $H_u: V \to V$ be defined as</p> <p>$$H_u(v) = v - 2 \langle v,u \rangle u$$</p> <p>for all $v \in V$. I need to find the minimal polynomial and the characteristic polynomial of this linear operator, but the only way I know to find the charactestic polynomial is using the associated matrix of the operator.</p> <p>I don't know how to find this matrix because I don't know how to deal with the scalar product. Is there some other way to find the characteristic polynomial? If not, how can I find the associated matrix of this linear operator?</p> <p>Thanks in advance.</p>
lhf
589
<p><em>Hint:</em> Consider an orthonormal basis containing $u$ and express $H$ in that basis.</p>
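Following the hint: in an orthonormal basis whose first vector is $u$, the matrix of $H_u$ is $\operatorname{diag}(-1,1,\dots,1)$, giving characteristic polynomial $(x-1)^{n-1}(x+1)$ and minimal polynomial $(x-1)(x+1)$ for $n\ge 2$. A small exact illustration (the $2\times 2$ real case and the sample vector $u=(3/5,4/5)$ are assumptions for the sketch only):

```python
from fractions import Fraction as F

u = [F(3, 5), F(4, 5)]                        # a unit vector in R^2
H = [[(1 if i == j else 0) - 2*u[i]*u[j]      # H = I - 2 u u^T
      for j in range(2)] for i in range(2)]

def matvec(M, v):
    return [sum(M[i][j]*v[j] for j in range(2)) for i in range(2)]

assert matvec(H, u) == [-x for x in u]        # H u = -u  (eigenvalue -1)
v = [-u[1], u[0]]                             # spans the orthogonal complement
assert matvec(H, v) == v                      # H v = v   (eigenvalue +1)

# minimal polynomial (x-1)(x+1) = x^2 - 1, i.e. H^2 = I
w = [F(1), F(2)]
assert matvec(H, matvec(H, w)) == w
```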
970,062
<p>To show two quadratic forms are not equivalent, we can compare ranks or discriminants, or find some element represented by only one of them, etc. But is there a general criterion to show that two binary (right now I am only concerned with binary) quadratic forms <em>are</em> equivalent? Here is an example which uses a variable transformation, but finding which transformation applies is tough every time.</p> <p>$\textbf{Example-}$ $X^2-Y^2 $ and $4XY$: replacing $X,Y$ by $X+Y,X-Y$ does the job here. Now there are criteria which give an equivalent condition to check whether two binary forms are equivalent or not, like</p> <p>$\textbf{Criterion 1-}$ On algebraically closed fields, same dimension and same rank suffice.</p> <p>$\textbf{Criterion 2-}$ On the reals, same rank and same signature suffice.</p> <p>$\textbf{Criterion 3-}$ On finite fields, same discriminant is enough.</p> <p>But in general, if we are given some arbitrary field $K$, what should we do? I am very new to quadratic forms. Any help will be appreciated.</p> <p>I know that we can check whether their corresponding matrices are congruent, but is there any other way besides that? How would you proceed on the couple of questions below? If there is no other method, can somebody please explain the method to check whether two matrices are congruent? I am really weak with matrices.</p> <p>$\textbf{Question 1-}$ Prove $X^2-4XY+3Y^2, X^2-Y^2,aX^2-aY^2$ are equivalent over $K$.</p> <p>$\textbf{Question 2-}$ Show </p> <p>i) $X^2-Y^2 \sim XY$ over $K$, ii) $3X^2-2Y^2 \sim X^2-6Y^2$ (over $Q$)</p>
Bhaskar Vashishth
101,661
<p>Measure Theory and Probability Theory by Athreya, and Probability and Measure by Billingsley</p>
1,473,513
<p>The motion of a pendulum is described by the differential equation</p> <p><span class="math-container">$$ \ddot\theta +\frac gl \sin \theta = 0$$</span></p> <p>if we integrate this equation with respect to <span class="math-container">$\theta$</span> we obtain</p> <p><span class="math-container">$$ \frac 12 \dot \theta ^2 - \frac gl \cos \theta = C $$</span></p> <p>Would anyone please shed some light on how to integrate the first term? It seems that: <span class="math-container">$$\int \ddot \theta\,d\theta = \frac 12 \dot \theta ^2$$</span></p> <p>Or in other words<br> <span class="math-container">$$\int{\frac{d^2\theta}{dt^2}}\,d\theta =\frac{1}{2}\left( \frac{d\theta}{dt} \right) ^2$$</span></p> <p>I don't really buy it</p>
Hosein Rahnama
267,844
<p>There is a tidy trick for that using the <strong>chain rule</strong>. Remember this once and for all. We have</p> <p>$$\ddot \theta (t) + {g \over l}\sin \left( {\theta \left( t \right)} \right) = 0$$</p> <p>where it is a <strong>nonlinear second order differential equation</strong>. Wow, it seems a little scary as we don't have <strong>linearity</strong>. This is how we tackle this down</p> <p>$$\ddot \theta (t) = {{{d^2}\theta } \over {d{t^2}}} = {d \over {dt}}\left( {{{d\theta } \over {dt}}} \right) = {d \over {d\theta }}\left( {{{d\theta } \over {dt}}} \right){{d\theta } \over {dt}} = \dot \theta {d \over {d\theta }}\left( {\dot \theta } \right) = {d \over {d\theta }}\left( {{1 \over 2}{{\dot \theta }^2}} \right)$$</p> <p>then put this into the equation and integrate with respect to $\theta $.</p> <p>I just want to say one more thing. When you use the <strong>work-energy theorem</strong>, you directly obtain the integrated form you wanted. Do you know why this happens? It's because the work-energy theorem is nothing more than <strong>integrating the second Newton law</strong>. If you dig into the proof of the <strong>work-energy</strong> theorem for a particle, you may understand what I mean.</p>
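The conserved quantity $\frac12\dot\theta^2-\frac gl\cos\theta$ can also be observed along a numerical trajectory; a minimal sketch (RK4 integration; the value of $g/l$ and the initial condition are arbitrary assumptions for the check):

```python
import math

g_over_l = 9.81  # assumed value of g/l for this check

def deriv(theta, omega):
    return omega, -g_over_l * math.sin(theta)

def rk4_step(theta, omega, h):
    k1 = deriv(theta, omega)
    k2 = deriv(theta + h/2*k1[0], omega + h/2*k1[1])
    k3 = deriv(theta + h/2*k2[0], omega + h/2*k2[1])
    k4 = deriv(theta + h*k3[0], omega + h*k3[1])
    theta += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    omega += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return theta, omega

def energy(theta, omega):
    # (1/2) theta'^2 - (g/l) cos(theta): the integrated quantity
    return 0.5*omega**2 - g_over_l*math.cos(theta)

theta, omega = 1.0, 0.0       # release from rest at 1 rad
E0 = energy(theta, omega)
for _ in range(10000):        # integrate 10 time units with h = 1e-3
    theta, omega = rk4_step(theta, omega, 1e-3)
assert abs(energy(theta, omega) - E0) < 1e-6
```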
141,484
<p><strong>Bug introduced in 10.4.1 or earlier and fixed in 11.1.1</strong></p> <hr> <p>I recently installed MMA v11.1 and encountered an issue with the memory usage of the LinearModelFit[] command. It appears that when mixing numeric and nominal variables, the LinearModelFit[] command uses a very large block of memory. I first noticed this issue on a large Linux server, where the MMA kernel kept crashing after consuming all 256 GB of memory for a relatively small problem. </p> <p>I created a simple example in an attempt to illustrate the problem:</p> <pre><code>In[44]:= $Version Out[44]= "11.1.0 for Microsoft Windows (64-bit) (March 13, 2017)" </code></pre> <p>First create a simple data matrix of size <em>n</em>:</p> <pre><code>regDat[n_] := Transpose[{ RandomChoice[{"Yes", "No"}, n], (* nominal variable *) RandomReal[{1, 10}, n], (* numeric #1 *) RandomReal[{1, 10}, n], (* numeric #2 *) RandomReal[{1, 10}, n] (* dependent variable *) } ]; </code></pre> <p>Now run a regression with <em>n</em> = 25,000, using only the numeric variables:</p> <pre><code>mem1 = MaxMemoryUsed[ LinearModelFit[regDat[25000], {v2, v3}, {v1Nom, v2, v3}, NominalVariables -&gt; {v1Nom}]] // AbsoluteTiming {0.10582, 8813008} </code></pre> <p>This uses about 8.8 MB of memory. Now run the same regression, except add the nominal variable (which has values {"Yes","No"}):</p> <pre><code>mem2 = MaxMemoryUsed[ LinearModelFit[regDat[25000], {v1Nom, v2, v3}, {v1Nom, v2, v3}, NominalVariables -&gt; {v1Nom}]] // AbsoluteTiming {6.92887, 4120980912} </code></pre> <p>This new regression takes 65x longer and uses 4.12 GB of memory. </p> <p>I've confirmed this behavior on v11.1 on Windows and Linux. My original problem had <em>n</em>=257,000 observations, with 6 numeric variables and 4 nominal variables, and was unable to run due to excessive memory usage. But the same code ran without issue on v10 and v11. 
</p> <p>(Note: The only time I've encountered memory issues using the LinearModelFit[] command is when I've inadvertently treated a numeric value as a nominal. I'm speculating that perhaps v11.1 is treating all variables as nominal when a regression has both types).</p> <p>Can anyone else confirm this behavior?</p> <p>Thanks,</p> <p>Mark</p>
yode
21,532
<p>This is what you after?</p> <pre><code>string="[[1 4 5 6 2] [9 8 7 4 7] [10 3 1 0 5]],[[3 0 9 1 7] [5 6 11 1 7] [3 0 2 0 1]]"; ImportString[StringReplace[#,Whitespace-&gt;","],"RawJSON"]&amp;/@StringSplit[string,","] </code></pre> <blockquote> <p>{{{1,4,5,6,2},{9,8,7,4,7},{10,3,1,0,5}},{{3,0,9,1,7},{5,6,11,1,7},{3,0,2,0,1}}}</p> </blockquote>
1,993,693
<blockquote> <p>$$\lim_{x \rightarrow +\infty} \frac{2^x}{x}$$ $$\lim_{x \rightarrow \infty} \frac{x^{50}}{e^x}$$</p> </blockquote> <p>I don't really know how to solve this.</p> <p>As for the first one, I know that $\lim_{x \rightarrow \infty} a^x=0$; I suppose that helps...?</p> <p>How do I solve these (preferably analytically, but I'll also accept otherwise)?</p>
egreg
62,967
<p>You can kill two birds with one stone. Note that $2^x=e^{x\log 2}$ so $$ \frac{2^x}{x}=\frac{e^{x\log 2}}{x}=\frac{e^{x\log 2}}{x\log 2}\log 2 =\frac{e^t}{t}\log 2 \qquad(\text{for }t=x\log2) $$ and $$ \frac{x^{50}}{e^x}=\left(\frac{x}{e^{x/50}}\right)^{\!50}= 50^{50}\left(\frac{x/50}{e^{x/50}}\right)^{\!50}= 50^{50}\left(\frac{t}{e^t}\right)^{\!50}\qquad(\text{for }t=x/50) $$ Thus you only have to compute $$ \lim_{t\to\infty}\frac{e^t}{t} $$</p>
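<p>A quick numerical look (my own addition, plain Python, not part of the argument above) confirms both behaviors: $2^x/x$ blows up, while $x^{50}/e^x$ is huge at first but eventually collapses toward $0$.</p>

```python
import math

# 2^x / x grows without bound:
a = [2.0 ** x / x for x in (10, 20, 40, 80)]
assert all(u < v for u, v in zip(a, a[1:]))

# x^50 / e^x is enormous for moderate x but eventually decays to 0:
b = [x ** 50 / math.exp(x) for x in (100, 300, 500)]
assert b[0] > b[1] > b[2]
assert b[2] < 1e-50
```

The crossover is slow: at $x=100$ the ratio is still astronomically large, which is why "polynomials lose to exponentials" can be hard to see on small inputs.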
2,384,538
<p>I am studying Linear Algebra Done Right; chapter 2, problem 6 states:</p> <blockquote> <p>Prove that the real vector space consisting of all continuous real valued functions on the interval $[0,1]$ is infinite dimensional.</p> </blockquote> <p><strong>My solution:</strong></p> <p>Consider the sequence of functions $x, x^2, x^3, \dots$ This is a linearly independent infinite sequence of functions, so clearly this space cannot have a finite basis. However, this proof relies on the fact that no $x^n$ is a linear combination of the previous terms. In other words, is it possible for a polynomial of degree $n$ to be equal to a polynomial of degree less than $n$? I believe this is not possible, but does anyone know how to prove this? More specifically, could the following equation ever be true for all $x$?</p> <p>$x^n = \sum\limits_{k=1}^{n-1} a_kx^k$ where each $a_k \in \mathbb R$</p>
Jonas Meyer
1,424
<p>By definition, <span class="math-container">$x^n + \sum\limits_{k=0}^{n-1}a_k x^k$</span> is a nonzero polynomial of degree <span class="math-container">$n$</span>. A nonzero polynomial of degree <span class="math-container">$n$</span> can have at most <span class="math-container">$n$</span> zeros in any field, and <span class="math-container">$[0,1]$</span> has more than <span class="math-container">$n$</span> elements.</p>
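<p>A small numerical illustration of this independence (my own addition; assumes NumPy is available): the Vandermonde matrix of $1, x, \dots, x^n$ on any $n+1$ distinct points has full rank, so the only coefficients making $\sum_k a_k x^k$ vanish at all those points are $a_k = 0$ — in particular $x^n$ cannot equal a lower-degree combination.</p>

```python
import numpy as np

n = 5
pts = np.linspace(0.0, 1.0, n + 1)   # n + 1 distinct points in [0, 1]
V = np.vander(pts, N=n + 1)          # row i is [pts[i]^n, ..., pts[i], 1]

# Full rank: only the zero coefficient vector annihilates all n + 1 points.
rank = np.linalg.matrix_rank(V)
print(rank)  # 6
```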
97,788
<p>What exactly is the connection between knots and operator algebra? I heard that Jones established such a connection while discovering the celebrated Jones Polynomial. </p> <p>Now Jones Polynomial is probably understood out of that context on its own, but what was it with Operator algebras in this space ? Can someone explain in plain English?</p> <p>As is probably very evident, I am a complete newbie to this area.</p>
Daniel Moskovich
2,051
<p>The relationship between operator algebras and <em>braids</em> is fairly straightforward to explain, and is nicely written up in many places (<i>e.g.</i> in Kauffman's <a href="http://rads.stackoverflow.com/amzn/click/9810203446" rel="noreferrer">Knots and Physics</a>). Jones studied representations of the braid group $B_n$ into the <a href="http://en.wikipedia.org/wiki/Temperley%E2%80%93Lieb_algebra" rel="noreferrer">Temperley-Lieb algebra</a> $TL_n$. The existence of such a representation is not so surprising (the following explanation is with hindsight; historically, this isn't how it happened): A Temperley-Lieb element is a <a href="http://en.wikipedia.org/wiki/Transfer_matrix" rel="noreferrer">transfer matrix</a> in a <a href="http://en.wikipedia.org/wiki/Potts_model" rel="noreferrer">Potts model</a>, in which each $e_i$ implements one more interaction, and you can think of a braid as a motion of $n$ distinct points in the lattice, with the crossings of points as an interaction, so it's roughly like a universal model for this type of lattice statistical-mechanical setup. Taking the trace of the representation gives the <a href="http://en.wikipedia.org/wiki/Partition_function_(statistical_mechanics)" rel="noreferrer">partition function</a> for the model, so it's a natural thing for a statistical mechanic to do. Diagrammatically, taking the trace may be visualized as closing the braid.</p>
Thus, the trace of the representation actually ends up giving an invariant of a knot!!!</p> <p>Birman said in a talk that Jones verified invariance under Markov moves by chance; basically it was good luck, and nobody could have anticipated such invariance, or that the Jones polynomial would be a knot invariant. So there is some significant element of mystery in the question of what knots have to do with operator algebras. It ties in to the biggest question in quantum topology, which is a curious one: What do quantum invariants mean topologically? Why is any quantum invariant a topological invariant?</p> <p>The braid group is a group (tautology alert) which mimics interacting point particles, so you could imagine that it might be related to operator algebras (or to subfactors), but a knot isn't an element of a group, and the algebraic structure which you can equip the set of knots with is much more complicated and mysterious.</p>
97,788
<p>What exactly is the connection between knots and operator algebra? I heard that Jones established such a connection while discovering the celebrated Jones Polynomial. </p> <p>Now Jones Polynomial is probably understood out of that context on its own, but what was it with Operator algebras in this space ? Can someone explain in plain English?</p> <p>As is probably very evident, I am a complete newbie to this area.</p>
Koushik
30,081
<p>You may look into David Evans's monumental book on quantum symmetries on operator algebras.</p>
4,459,722
<p>I have the vector field <span class="math-container">\begin{align*} X:\mathbb R^d&amp;\to\mathbb R^d\\ x&amp;\mapsto\frac{x}{\|x\|} \end{align*}</span> which is a differentiable vector field outside of the origin, and I am interested in its divergence. After some easy computation we get <span class="math-container">$$ \mathrm{div}(X)=\left\{\begin{array}{ll} 2\delta_0&amp;d=1\\ \frac{d-1}{\|x\|}&amp;d\geq 2. \end{array}\right. $$</span> The problem with the above computation is that the dimension 1 case is computed as a distributional derivative and the case of bigger dimension is computed classically. This is a problem because for the longest time I thought that <span class="math-container">$$ \mathrm{div}(X) $$</span> should have a Dirac delta in the origin in any dimension, and I still believe it has to have it.</p> <p>The reasons I do believe it has a Dirac delta component in the origin is because for <span class="math-container">$X$</span> infinite integral curves start from the origin. To put it differently, if we were to look at <span class="math-container">$-X$</span>, if one were to search for the solutions to the continuity equation for <span class="math-container">$-X$</span>, i.e. <span class="math-container">$$ \partial \mu_t+\mathrm{div}((-X)\mu_t)=0 $$</span> one would see how mass accumulates at the origin.</p> <p>Question 1): did I make a mistake in my computations and <span class="math-container">$\mathrm{div}(X)$</span> has a Dirac delta in the origin? Or more correctly, if my computation isn't wrong classically but is wrong distributionally, how would I carry out that computation? Because it still doesn't yield a singular part when I evaluate with polar coordinates.</p> <p>Question 2): is my intuition wrong and in reality in dimension bigger than one there is never a Dirac delta in that computation even distributionally?</p> <p>Any help or literature is appreciated. Maybe it is all because my differential geometry skills are very rusty.</p>
robjohn
13,854
<p><strong>Function Away from the Origin</strong></p> <p><span class="math-container">$\nabla\|x\|=\frac{x}{\|x\|}$</span>. Therefore, <span class="math-container">$$ \begin{align} \nabla\cdot\frac{x}{\|x\|} &amp;=\frac{\nabla\cdot x}{\|x\|}-x\cdot\frac{x}{\|x\|^3}\tag{1a}\\ &amp;=\frac{d}{\|x\|}-\frac1{\|x\|}\tag{1b}\\ &amp;=\frac{d-1}{\|x\|}\tag{1c} \end{align} $$</span> Explanation:<br /> <span class="math-container">$\text{(1a)}$</span>: <span class="math-container">$\nabla\cdot(av)=a\,\nabla\cdot v+v\cdot\nabla a$</span><br /> <span class="math-container">$\text{(1b)}$</span>: <span class="math-container">$\nabla\cdot x=d$</span><br /> <span class="math-container">$\text{(1c)}$</span>: simplify</p> <p><strong>Distribution Supported at the Origin</strong></p> <p>I had originally computed the part of the distribution supported at the origin using the <a href="https://en.wikipedia.org/wiki/Divergence_theorem" rel="nofollow noreferrer">Divergence Theorem</a>, but while trying to justify its application to a non-<span class="math-container">$C^1$</span> function, it became evident that I ended up computing the integral of the divergence without the theorem.</p> <p>Let us compute the divergence of <span class="math-container">$\frac{x}{(x\cdot x+\epsilon^2)^{1/2}}$</span>, which is a <span class="math-container">$C^\infty$</span> approximation of <span class="math-container">$\frac{x}{\|x\|}$</span>. 
<span class="math-container">$$ \begin{align} \nabla\cdot\frac{x}{(x\cdot x+\epsilon^2)^{1/2}} &amp;=\frac{d}{(x\cdot x+\epsilon^2)^{1/2}}-x\cdot\frac{x}{(x\cdot x+\epsilon^2)^{3/2}}\tag{2a}\\ &amp;=\frac{d-1}{(x\cdot x+\epsilon^2)^{1/2}}+\frac{\epsilon^2}{(x\cdot x+\epsilon^2)^{3/2}}\tag{2b} \end{align} $$</span> Explanation:<br /> <span class="math-container">$\text{(2a)}$</span>: <span class="math-container">$\nabla\cdot(av)=a\,\nabla\cdot v+v\cdot\nabla a$</span><br /> <span class="math-container">$\text{(2b)}$</span>: <span class="math-container">$x\cdot x=x\cdot x+\epsilon^2-\epsilon^2$</span></p> <p>Since <span class="math-container">$\frac{d-1}{\|x\|}$</span> is locally in <span class="math-container">$L^1$</span>, the integral of <span class="math-container">$\frac{d-1}{(x\cdot x+\epsilon^2)^{1/2}}$</span>, the left summand from <span class="math-container">$\text{(2b)}$</span>, over <span class="math-container">$B(0,r)$</span> vanishes uniformly as <span class="math-container">$r\to0$</span>.</p> <p>Consider the integral of the right summand from <span class="math-container">$\text{(2b)}$</span> over <span class="math-container">$B(0,r)$</span>: <span class="math-container">$$ \begin{align} &amp;\lim_{\epsilon\to0}\int_{B(0,r)}\frac{\epsilon^2}{(x\cdot x+\epsilon^2)^{3/2}}\,\mathrm{d}x\\ &amp;=\omega_{d-1}\lim_{\epsilon\to0}\int_0^r\frac{\epsilon^2}{(t^2+\epsilon^2)^{3/2}}\,t^{d-1}\,\mathrm{d}t\tag{3a}\\ &amp;=\omega_{d-1}\lim_{\epsilon\to0}\epsilon^{d-1}\int_0^{r/\epsilon}\frac1{(t^2+1)^{3/2}}\,t^{d-1}\,\mathrm{d}t\tag{3b}\\ &amp;=\lim\limits_{\epsilon\to0}\left\{\begin{array}{} 2\,\frac{r}{\sqrt{r^2+\epsilon^2}}&amp;\text{if $d=1$}\\ 2\pi\epsilon\left(1-\frac\epsilon{\sqrt{r^2+\epsilon^2}}\right)&amp;\text{if $d=2$}\\ 4\pi\epsilon^2\left(\log\left(\frac{r+\sqrt{r^2+\epsilon^2}}{\epsilon}\right)-\frac{r}{\sqrt{r^2+\epsilon^2}}\right)&amp;\text{if $d=3$}\\ \omega_{d-1}\epsilon^2\left[0,\frac{r^{d-3}}{d-3}\right]_\#&amp;\text{if $d\ge4$} 
\end{array}\right.\tag{3c}\\[6pt] &amp;=\left\{\begin{array}{} 2&amp;\text{if $d=1$}\\[3pt] 0&amp;\text{if $d\ge2$} \end{array}\right.\tag{3d} \end{align} $$</span> Explanation:<br /> <span class="math-container">$\text{(3a)}$</span>: convert to polar coordinates<br /> <span class="math-container">$\phantom{\text{(3a):}}$</span> the &quot;surface area&quot; of <span class="math-container">$S^{d-1}$</span> is <span class="math-container">$\omega_{d-1}=\frac{2\pi^{d/2}}{\Gamma(d/2)}$</span><br /> <span class="math-container">$\text{(3b)}$</span>: substitute <span class="math-container">$t\mapsto t\epsilon$</span><br /> <span class="math-container">$\text{(3c)}$</span>: compute the integrals for <span class="math-container">$d=1,2,3$</span><br /> <span class="math-container">$\phantom{\text{(3c):}}$</span> bound the integral for <span class="math-container">$d\ge4$</span><br /> <span class="math-container">$\phantom{\text{(3c):}}$</span> where <span class="math-container">$[a,b]_\#$</span> represents a number in <span class="math-container">$[a,b]$</span><br /> <span class="math-container">$\text{(3d)}$</span>: evaluate the limits</p> <p><strong>In Conclusion</strong></p> <p>Putting together <span class="math-container">$(1)$</span> and <span class="math-container">$(3)$</span> gives <span class="math-container">$$ \nabla\cdot\frac{x}{\|x\|}=\left\{\begin{array}{} 2\delta(x)&amp;\text{if $d=1$}\\ \displaystyle\frac{d-1}{\|x\|}&amp;\text{if $d\ge2$} \end{array}\right.\tag4 $$</span></p>
4,459,722
<p>I have the vector field <span class="math-container">\begin{align*} X:\mathbb R^d&amp;\to\mathbb R^d\\ x&amp;\mapsto\frac{x}{\|x\|} \end{align*}</span> which is a differentiable vector field outside of the origin, and I am interested in its divergence. After some easy computation we get <span class="math-container">$$ \mathrm{div}(X)=\left\{\begin{array}{ll} 2\delta_0&amp;d=1\\ \frac{d-1}{\|x\|}&amp;d\geq 2. \end{array}\right. $$</span> The problem with the above computation is that the dimension 1 case is computed as a distributional derivative and the case of bigger dimension is computed classically. This is a problem because for the longest time I thought that <span class="math-container">$$ \mathrm{div}(X) $$</span> should have a Dirac delta in the origin in any dimension, and I still believe it has to have it.</p> <p>The reasons I do believe it has a Dirac delta component in the origin is because for <span class="math-container">$X$</span> infinite integral curves start from the origin. To put it differently, if we were to look at <span class="math-container">$-X$</span>, if one were to search for the solutions to the continuity equation for <span class="math-container">$-X$</span>, i.e. <span class="math-container">$$ \partial \mu_t+\mathrm{div}((-X)\mu_t)=0 $$</span> one would see how mass accumulates at the origin.</p> <p>Question 1): did I make a mistake in my computations and <span class="math-container">$\mathrm{div}(X)$</span> has a Dirac delta in the origin? Or more correctly, if my computation isn't wrong classically but is wrong distributionally, how would I carry out that computation? Because it still doesn't yield a singular part when I evaluate with polar coordinates.</p> <p>Question 2): is my intuition wrong and in reality in dimension bigger than one there is never a Dirac delta in that computation even distributionally?</p> <p>Any help or literature is appreciated. Maybe it is all because my differential geometry skills are very rusty.</p>
Calvin Khor
80,734
<p>(This is in addition to the <a href="https://math.stackexchange.com/a/4460505/80734">other</a> <a href="https://math.stackexchange.com/a/4459766/80734">two</a> answers: here, we give the 'correct' generalisation of <span class="math-container">$\operatorname{sgn}'=2\delta_0$</span> to higher dimensions. Put another way, we are answering your &quot;question 2)&quot;.)</p> <p>Your intuition is not quite right; as you go up in dimension, <span class="math-container">$1/|x|$</span> should be thought of as <em>less singular</em>. This is e.g. what's behind the fact that <span class="math-container">$1/|x|$</span> is integrable once <span class="math-container">$d&gt;1$</span>. Instead, <span class="math-container">$1/|x|^d$</span> is the correct threshold for (local) integrability.</p> <p>Similarly, one should instead guess that <span class="math-container">$\DeclareMathOperator{\mydiv}{div} \mydiv(x/|x|^{d}) \newcommand{\dd}{\text d} $</span> is the one with a delta for a (distributional) divergence. Indeed, <span class="math-container">$$\fbox{$\mydiv\bigg(\frac x{|x|^{d}}\bigg) = \omega_{d-1} \delta_0.$} $$</span></p> <p>(<span class="math-container">$\omega_{d-1}=\int_{|x|=1}\dd \sigma$</span> is the surface area of <span class="math-container">$\Bbb S^{d-1}$</span>.) The first sign of hope is that for all <span class="math-container">$x\neq 0$</span>, <span class="math-container">$$ \mydiv(x|x|^{-d})=\let\del\partial \del_i (x_i|x|^{-d}) = d|x|^{-d}+ x_i\cdot (-dx_i |x|^{-d-2}) = d|x|^{-d} - d|x|^{-d} =0. 
\tag{$\star$ }\label{1}$$</span> Now, from the definition of the distributional divergence, <span class="math-container">\begin{align} \left\langle\mydiv(x|x|^{-d}),\phi\right\rangle &amp;= -\int_{\mathbb R^d}\frac{x}{|x|^d} \cdot \nabla \phi(x) \, \dd x \\ &amp;= -\lim_{\epsilon\to 0}\int_{|x|&gt;\epsilon}\frac{x}{|x|^d} \cdot \nabla \phi(x) \, \dd x \\ &amp;= \lim_{\epsilon\to 0}\Bigg(\int_{|x|&gt;\epsilon}\underbrace{\mydiv\Big(\frac{x}{|x|^d}\Big)}_{=0 \text{ by }\eqref{1}} \phi(x) \, dx - \int_{|x|=\epsilon} \frac{x\phi(x)}{|x|^d}\cdot n \,\dd \sigma(x) \Bigg)\\ &amp;= \lim_{\epsilon\to 0}\int_{|x|=\epsilon} \frac{\phi(x)}{|x|^{d-1}}\,\dd \sigma(x) \end{align}</span> since <span class="math-container">$n= -x/|x|$</span> is the outward normal to <span class="math-container">$\{|x|&gt;\epsilon\}$</span>. Performing the change of variables <span class="math-container">$y= x/\epsilon$</span>, <span class="math-container">\begin{align}\left\langle\mydiv(x|x|^{-d}),\phi\right\rangle &amp;= \lim_{\epsilon\to 0}\int_{|y|=1} \phi(\epsilon y)\,\dd \sigma(y) \\ &amp;= \phi(0)\int_{|y|=1}\,\dd \sigma(y) + \lim_{\epsilon\to 0}\int_{|y|=1}(\phi(\epsilon y)-\phi(0)) \,\dd \sigma(y) \\ &amp;=\langle \omega_{d-1}\delta_0,\phi\rangle, \end{align}</span> where the limit is zero from the smoothness of <span class="math-container">$\phi$</span> (e.g. <span class="math-container">$|\phi(\epsilon y) - \phi(0)| \le \|\nabla \phi\|_{L^\infty} \epsilon$</span>), showing the result.</p>
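<p>One can sanity-check that last limit numerically in $d = 2$ (my own sketch, plain Python; $\phi$ below is an assumed concrete test function with $\phi(0) = 1$, not anything from the argument above): the ring integral $\int_{|x|=\epsilon} \phi(x)/|x|^{d-1}\,\mathrm{d}\sigma$ indeed approaches $\omega_1\,\phi(0) = 2\pi$ as $\epsilon \to 0$.</p>

```python
import math

def phi(x, y):
    # a concrete smooth test function with phi(0, 0) = 1 (my choice)
    return math.exp(-(x * x + y * y)) + 0.3 * x

def ring_integral(eps, m=4000):
    # integral of phi / |x|^(d-1) over the circle |x| = eps, with d = 2;
    # dsigma = eps dt and |x|^(d-1) = eps, so the integrand reduces to phi dt
    h = 2.0 * math.pi / m
    return sum(phi(eps * math.cos(k * h), eps * math.sin(k * h)) * h
               for k in range(m))

vals = [ring_integral(10.0 ** -k) for k in range(1, 5)]
# vals tends to omega_1 * phi(0) = 2 * pi
```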
2,280,052
<p>Wolfram Alpha says: $$i\lim_{x \to \infty} x = i\infty$$</p> <p>I'm having a bit of trouble understanding what $i\infty$ means. In the long run, it seems that whatever gets multiplied by $\infty$ doesn't really matter. $\infty$ sort of takes over, and the magnitude of whatever is being multiplied is irrelevant. I.e., $\forall a \gt 0$:</p> <p>$$a\lim_{x \to \infty} x = \infty, -a\lim_{x \to \infty} x = -\infty$$</p> <p>What's so special about imaginary numbers? Why doesn't $\infty$ take over when it gets multiplied by $i$? Thanks.</p>
Mark S.
26,369
<p>Briefly, Wolfram|Alpha preserves the $i$ because it's giving you a "direction" for the infinity. Just like $-7\displaystyle{\lim_{x\to\infty}}x=-\infty$ (the direction being "leftwards" on the real line/in the complex plane), $i\displaystyle{\lim_{x\to\infty}}x=i\infty$ (the direction being "upwards" in the complex plane).</p> <p>This can be used to do many things. To steal an example from <a href="https://reference.wolfram.com/language/ref/DirectedInfinity.html" rel="noreferrer">the documentation for <code>DirectedInfinity</code></a>, you can ask Wolfram|Alpha to find an approximation of $\arcsin$ for numbers of the form $\varepsilon+yi$ where $|\varepsilon|$ is very small and $y$ is a very large positive number. <a href="http://www.wolframalpha.com/input/?i=Series%5BArcSin%5Bz%5D,%20%7Bz,%20I*Infinity,%201%7D%5D" rel="noreferrer">Wolfram|Alpha link</a></p> <p>This is contrasted with Wolfram|Alpha's confusingly-named "<a href="http://www.wolframalpha.com/input/?i=1%2F0" rel="noreferrer">complex infinity</a>", where there is no particular direction. See <a href="https://reference.wolfram.com/language/ref/ComplexInfinity.html" rel="noreferrer"><code>ComplexInfinity</code></a> for documentation.</p>
2,280,052
<p>Wolfram Alpha says: $$i\lim_{x \to \infty} x = i\infty$$</p> <p>I'm having a bit of trouble understanding what $i\infty$ means. In the long run, it seems that whatever gets multiplied by $\infty$ doesn't really matter. $\infty$ sort of takes over, and the magnitude of whatever is being multiplied is irrelevant. I.e., $\forall a \gt 0$:</p> <p>$$a\lim_{x \to \infty} x = \infty, -a\lim_{x \to \infty} x = -\infty$$</p> <p>What's so special about imaginary numbers? Why doesn't $\infty$ take over when it gets multiplied by $i$? Thanks.</p>
fleablood
280,126
<p>In the reals, all non-zero numbers have a parity: either they are positive or they are negative. <span class="math-container">$\lim_{x\rightarrow \infty}|ax| =\infty$</span> (if <span class="math-container">$a \ne 0$</span>) because the magnitude of <span class="math-container">$ax$</span> gets infinitely large.</p> <p>If <span class="math-container">$a &gt; 0$</span> then <span class="math-container">$\lim_{x\rightarrow \infty}ax = \infty$</span> because the magnitude of <span class="math-container">$ax$</span> becomes infinite and the parity of all <span class="math-container">$ax$</span> is positive.</p> <p>If <span class="math-container">$-a &lt; 0$</span> then <span class="math-container">$\lim_{x\rightarrow \infty}-ax = -\infty$</span>. What's the difference between <span class="math-container">$-\infty$</span> and <span class="math-container">$\infty$</span>? Neither of them is an actual number. Well, again, the magnitude of <span class="math-container">$-ax$</span> becomes infinite. But the parity of all <span class="math-container">$-ax$</span> is negative, so instead of increasing infinitely &quot;in the positive direction&quot;, <span class="math-container">$-ax$</span> increases in the &quot;negative direction&quot;. So <span class="math-container">$-\infty $</span> indicates infinite magnitude, negative parity.</p> <p>Non-zero complex numbers do not have a single bidirectional parity. A complex number has two components, a real one and an imaginary one, and thus is two-dimensional; instead of having a single positive/negative parity, it has a directional angle called an argument. These angles can be in any of an infinite number of &quot;directions&quot; from <span class="math-container">$0^{\circ}$</span> to <span class="math-container">$360^{\circ}$</span>. The number positive <span class="math-container">$1$</span> has an argument of <span class="math-container">$0^{\circ}$</span>. 
The number <span class="math-container">$-1$</span> has an argument of <span class="math-container">$180^{\circ}$</span>. The number <span class="math-container">$\frac 12 + i \frac {\sqrt{3}}2$</span> has an argument of <span class="math-container">$60^{\circ}$</span>.</p> <p>And <span class="math-container">$i$</span> has argument <span class="math-container">$90^{\circ}$</span>.</p> <p>So what happens to <span class="math-container">$ix$</span> as <span class="math-container">$x \rightarrow \infty$</span>? Well, just like <span class="math-container">$ax$</span> and <span class="math-container">$-ax$</span>, its magnitude increases to infinity. So <span class="math-container">$\lim_{x\to\infty} |ix| = \infty$</span>. But what is the argument of all the <span class="math-container">$ix$</span>? They all have an argument of <span class="math-container">$90^{\circ}$</span>. But <span class="math-container">$\infty$</span> means infinite magnitude, positive parity. And <span class="math-container">$-\infty$</span> means infinite magnitude, negative parity.</p> <p>Neither of those applies to <span class="math-container">$\lim_{x\to\infty} ix$</span>, which will have infinite magnitude, <span class="math-container">$90^{\circ}$</span> argument. How can we indicate that?</p> <p>Well... if <span class="math-container">$-\infty$</span> means negative parity and <span class="math-container">$+\infty$</span> means positive parity, shouldn't <span class="math-container">$i\infty$</span> mean <span class="math-container">$90^{\circ}$</span> argument?</p>
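<p>A tiny numerical illustration of this (my own addition, plain Python): as $x$ grows, $|ix|$ blows up while the argument of $ix$ never leaves $90^{\circ}$.</p>

```python
import cmath
import math

xs = [10.0 ** k for k in range(1, 7)]     # x marching toward infinity
mags = [abs(1j * x) for x in xs]          # magnitudes: unbounded
args = [cmath.phase(1j * x) for x in xs]  # arguments, in radians

assert mags == xs                                         # |i x| = x
assert all(abs(a - math.pi / 2) < 1e-15 for a in args)    # always 90 degrees
```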
1,427,595
<blockquote> <p>The <a href="https://en.wikipedia.org/wiki/Cayley_table" rel="nofollow">Cayley table</a> tells us whether a group is abelian. Because the group operation of an abelian group is commutative, a group is abelian if and only if its Cayley table is symmetric along its diagonal axis.</p> </blockquote> <p>Sorry, but why is this true?</p>
Ben Sheller
250,221
<p>Number the elements of the group, and then think of the Cayley table as a matrix: in the $ij$-th entry, we have $g_ig_j$.</p> <p>If the Cayley table is symmetric, then the $ij$-th entry is equal to the $ji$-th entry, so $g_ig_j=g_jg_i$, and so the group is abelian.</p> <p>Conversely, if the group is abelian, then $g_ig_j=g_jg_i$ for every $g_i,g_j$ in our group. Therefore, the $ij$-th entry is equal to the $ji$-th entry in the Cayley table, so the table is symmetric.</p>
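<p>A concrete check of this equivalence (my own sketch in Python, not part of the original answer): the Cayley table of the abelian group $\mathbb{Z}_4$ is symmetric, while the table of the non-abelian group $S_3$ is not.</p>

```python
from itertools import permutations

def cayley_table(elements, op):
    """Entry [i][j] holds op(g_i, g_j)."""
    return [[op(a, b) for b in elements] for a in elements]

def is_symmetric(table):
    n = len(table)
    return all(table[i][j] == table[j][i] for i in range(n) for j in range(n))

# Z_4 under addition mod 4 is abelian, so its table is symmetric:
z4 = cayley_table(list(range(4)), lambda a, b: (a + b) % 4)
print(is_symmetric(z4))  # True

# S_3, permutations of {0, 1, 2} under composition, is not abelian:
def compose(p, q):
    # apply q first, then p
    return tuple(p[q[i]] for i in range(3))

s3 = cayley_table(list(permutations(range(3))), compose)
print(is_symmetric(s3))  # False
```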
3,696,776
<p>I was given this problem:</p> <p><a href="https://i.stack.imgur.com/8ACor.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8ACor.png" alt="Problem"></a></p> <p>These are my calculations and I'm asking for verification:</p> <p>Pointwise limit:</p> <p><span class="math-container">$\lim_{n \to \infty} f_{n}(x) = \lim_{n \to \infty} \frac{x^{2n}}{1+x^{2n}} = \lim_{n \to \infty} \frac{x^{n}}{\frac{1}{x^n}+x^{n}} = 1$</span></p> <p>Uniform convergence:</p> <p><span class="math-container">$\mid f_{n}(x)-f(x)\mid = \mid f_{n}(x) - 1\mid = \mid\frac{x^{2n}}{1+x^{2n}} -1 \mid= \mid \frac{x^{n}}{\frac{1}{x^n}+x^{n}} - \frac{\frac{1}{x^n}+x^{n}}{\frac{1}{x^n}+x^{n}}\mid = \mid -\frac{\frac{1}{x^n}}{\frac{1}{x^n}+x^{n}}\mid = \frac{\frac{1}{x^n}}{\frac{1}{x^n}+x^{n}} \leq \frac{1}{x^n}$</span></p> <p>Thus:</p> <p><span class="math-container">$\lim_{n \to \infty} sup\{\mid f_{n}(x)-f(x)\mid : x \in [R, \infty)\} = \lim_{n \to \infty} \frac{1}{x^n} = 0.$</span></p> <p>From this it follows that <span class="math-container">$f_n(x)$</span> is uniformly convergent.</p>
Saaqib Mahmood
59,734
<p>For all <span class="math-container">$x \in [R, +\infty)$</span>, we find that, since <span class="math-container">$R &gt; 1$</span>, therefore we have <span class="math-container">$$ \begin{align} \lim_{n \to \infty} f_n(x) &amp;= \lim_{n \to \infty} \frac{ x^{2n} }{ 1 + x^{2n} } \\ &amp;= \lim_{n \to \infty} \frac{1}{ \frac{1}{x^{2n}} + 1} \\ &amp;= \frac{1}{0+1} \qquad [\mbox{ as $x \geq R &gt; 1$, so $\lim_{n \to \infty} \frac{1}{x^{2n}} = 0$ } ]\\ &amp;= 1. \end{align} $$</span> Now let <span class="math-container">$f \colon [R, +\infty) \rightarrow \mathbb{R}$</span> be defined by the formula <span class="math-container">$$ f(x) \colon= 1 \qquad \mbox{for all } x \in [R, +\infty). \tag{0} $$</span> Then the sequence <span class="math-container">$\left( f_n \right)_{n \in \mathbb{N}}$</span> converges <em>pointwise</em> to the function <span class="math-container">$f$</span> on <span class="math-container">$[R, +\infty)$</span>. </p> <p>Let us now check if this convergence is uniform.</p> <p>We note that, for all <span class="math-container">$n \in \mathbb{N}$</span> and for all <span class="math-container">$x \in [R, +\infty)$</span>, we have <span class="math-container">$$ \begin{align} \left\lvert f_n(x) - f(x) \right\rvert &amp;= \left\lvert \frac{x^{2n}}{1+x^{2n}} - 1 \right\rvert \\ &amp;= \left\lvert \frac{-1}{1+x^{2n}} \right\rvert \\ &amp;= \frac{1}{\left\lvert 1+x^{2n} \right\rvert } \\ &amp;= \frac{1}{ 1+x^{2n} } \\ &amp;&lt; \frac{1}{x^{2n} } \\ &amp;\leq \frac{1}{R^{2n}}. \tag{1} \end{align} $$</span></p> <p>Now as <span class="math-container">$R &gt; 1$</span>, so <span class="math-container">$$ \lim_{n \to \infty} \frac{1}{R^{2n}} = 0. 
$$</span></p> <p>Thus, from this limit, we can conclude that, given a real number <span class="math-container">$\varepsilon &gt; 0$</span>, we can find a natural number <span class="math-container">$N = N(\varepsilon)$</span> such that <span class="math-container">$$ \left\lvert \frac{1}{R^{2n}} - 0 \right\rvert = \frac{1}{R^{2n}} &lt; \varepsilon $$</span> for any natural number <span class="math-container">$n &gt; N$</span>. In fact, we can take <span class="math-container">$N$</span> to be any natural number greater than the quantity <span class="math-container">$$ \begin{cases} \frac{ - \ln \varepsilon }{ \ln R} \ \mbox{ if } \varepsilon \neq 1, \\ 1 \ \mbox{ if } \varepsilon = 1. \end{cases} $$</span></p> <p>So using (1) we can conclude that, given a real number <span class="math-container">$\varepsilon &gt; 0$</span>, we can find a natural number <span class="math-container">$N$</span> such that <span class="math-container">$$ \left\lvert f_n(x) - f(x) \right\rvert &lt; \varepsilon $$</span> for all <span class="math-container">$ x \in [R, +\infty)$</span> and for any natural number <span class="math-container">$n &gt; N$</span>. </p> <p>Hence the sequence <span class="math-container">$\left( f_n \right)_{n \in \mathbb{N}}$</span> indeed converges <em>uniformly</em> to the function <span class="math-container">$f$</span> defined by the formula (0) above. </p>
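<p>A quick numerical check (my own addition, taking $R = 2$ as an assumed concrete value): on a grid in $[R, \infty)$ the supremum of $|f_n - 1|$ is attained at $x = R$, equals $1/(1 + R^{2n})$, and shrinks to $0$ with $n$.</p>

```python
def f(n, x):
    return x ** (2 * n) / (1.0 + x ** (2 * n))

R = 2.0
grid = [R + 0.01 * k for k in range(2000)]   # sample points in [2, 22)
sups = [max(abs(f(n, x) - 1.0) for x in grid) for n in range(1, 8)]

# sup attained at x = R, with value 1 / (1 + R^(2n)):
assert all(abs(s - 1.0 / (1.0 + R ** (2 * n))) < 1e-12
           for n, s in zip(range(1, 8), sups))
# and it decreases toward 0, witnessing uniform convergence:
assert all(b < a for a, b in zip(sups, sups[1:]))
```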
980,818
<p>I'm working on a problem that involves the following summation: $$y=\sum_{i=0}^{x}i2^i$$ I need to determine the largest value of $x$ such that $y$ is less than or equal to some integer K. Currently I'm using a lookup table approach which is fine, but I would really like to find and understand a solution that would allow calculation of $x$.</p> <p>Thank you!</p>
MPW
113,214
<p>You can compute this sum exactly to get $$y=(x-1)2^{x+1}+2.$$</p>
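<p>Both the closed form $y = (x-1)2^{x+1}+2$ and the original search for the largest $x$ can be checked in a few lines (my own sketch, plain Python):</p>

```python
def y_sum(x):
    return sum(i * 2 ** i for i in range(x + 1))

def y_closed(x):
    return (x - 1) * 2 ** (x + 1) + 2

# the closed form agrees with the raw summation:
assert all(y_sum(x) == y_closed(x) for x in range(30))

def largest_x(K):
    """Largest x with y(x) <= K (assumes K >= 0 = y(0))."""
    x = 0
    while y_closed(x + 1) <= K:
        x += 1
    return x

print(largest_x(100))  # 4, since y(4) = 98 <= 100 < y(5) = 258
```

Since $y$ grows roughly like $x\,2^{x}$, the loop above runs only $O(\log K)$ times, so no lookup table is needed.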
3,964,862
<p>The arithmetic mean has the nice property of minimising the sum of squares, or in other words, minimising the sum of quadratic-euclidean distances. Formally, given a set of points <span class="math-container">$x_0, \dots, x_n \in \mathbb{R}^d$</span>, the arithmetic mean, <span class="math-container">$\mu = \frac{1}{n} \sum_{i=1}^n x_i$</span>, is the unique solution to the equation <span class="math-container">$$\mu = \underset{x \in \mathbb{R}^d}{\operatorname{argmin}} \sum_{i=1}^n d^2(x_i, x) \, ,$$</span> where <span class="math-container">$d$</span> denotes the Euclidean distance. I was wondering if there exists a distance measure on <span class="math-container">$\mathbb{R}$</span> that has a similar property for the geometric mean, i.e.</p> <hr /> <p><strong>Question</strong></p> <p>Is there a distance measure <span class="math-container">$d$</span> on <span class="math-container">$\mathbb{R}$</span> such that for any set of points <span class="math-container">$x_1, \dots, x_n \in \mathbb{R}$</span> we have <span class="math-container">$$ \sqrt[n]{x_1 \cdot \dots \cdot x_n} = \underset{x \in \mathbb{R}^d}{\operatorname{argmin}} \sum_{i=1}^n d^2(x_i, x) \; \textbf{?}$$</span></p>
Bolito2
856,740
<p>Let <span class="math-container">$\alpha = (x_1x_2...x_n)^\frac{1}{n} $</span>. Then <span class="math-container">$\ln(\alpha) = \frac{1}{n} \sum{\ln(x_i)}$</span>, so, by the property you stated, <span class="math-container">$$\ln(\alpha) = \underset{x \in \mathbb{R}^d}{\operatorname{argmin}} \sum_{i=1}^n d^2(\ln(x_i), x)$$</span> which implies that <span class="math-container">$$\alpha = \underset{x \in \mathbb{R}^d}{\operatorname{argmin}} \sum_{i=1}^n d^2(\ln(x_i), \ln(x))$$</span></p> <p>Then you can substitute to find the new metric <span class="math-container">$$\delta^2(x, y) = d^2(\ln(x), \ln(y)) = (\ln(x) - \ln(y))^2$$</span> <span class="math-container">$$\delta(x, y) = \displaystyle\left\lvert \ln{\frac{x}{y}} \right\rvert$$</span></p>
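<p>A quick numerical confirmation (my own addition, plain Python): under $\delta(x, y) = \lvert \ln(x/y) \rvert$, the geometric mean really does minimize the sum of squared distances.</p>

```python
import math
import random

xs = [1.0, 2.0, 8.0, 5.0]
gm = math.exp(sum(math.log(x) for x in xs) / len(xs))  # geometric mean

def cost(c):
    # sum of squared delta-distances from the data to a candidate c
    return sum(math.log(c / x) ** 2 for x in xs)

# the geometric mean beats a cloud of nearby random candidates:
random.seed(0)
candidates = [gm * math.exp(random.uniform(-1.0, 1.0)) for _ in range(1000)]
assert all(cost(gm) <= cost(c) + 1e-12 for c in candidates)
```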
123,054
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="https://math.stackexchange.com/questions/33215/what-is-48293">What is 48&#247;2(9+3)?</a> </p> </blockquote> <p>In the field of real numbers, does the expression 10 / 2 * 5 make sense? Is it 25 or 1? Is it a bad question or the order of computation from left-to-right is implicit (axiomatic) when omitting parentheses?</p>
Ross Millikan
1,827
<p>The computer languages I have used explicitly say that 10/2*5=25. For writing, I would always include the parentheses to be clear.</p>
123,054
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="https://math.stackexchange.com/questions/33215/what-is-48293">What is 48&#247;2(9+3)?</a> </p> </blockquote> <p>In the field of real numbers, does the expression 10 / 2 * 5 make sense? Is it 25 or 1? Is it a bad question or the order of computation from left-to-right is implicit (axiomatic) when omitting parentheses?</p>
Brian M. Scott
12,042
<p>The most widespread convention is that operations <strong>of equal precedence</strong> are performed from left to right in the absence of parentheses; by this convention $10/2\cdot 5=25$. However, it is violated often enough, intentionally or otherwise, that in such cases one should always supply enough parentheses or other cues to make the intended sense clear: $(10/2)\cdot 5$, $(10/2)(5)$, $\frac{10}2\cdot5$, $\frac12(10)(5)$, etc. However, this left-to-right convention is normally superseded by precedence conventions, so that $2+3\cdot5=17$, not $30$.</p> <p>Most of the programming languages that I’ve seen follow these conventions, though they may differ slightly in the precedence of some operations; two exceptions that I know about are <a href="http://en.wikipedia.org/wiki/Smalltalk" rel="nofollow">Smalltalk</a>, which uses a strict left-to-right convention with no built-in precedence hierarchy, and <a href="http://en.wikipedia.org/wiki/APL_%28programming_language%29" rel="nofollow">APL</a>, which, like the Iverson notation on which it was based, uses a strict right-to-left convention.</p>
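<p>A quick check in Python (one of the languages that follows the conventions described above) makes the groupings explicit:</p>

```python
# Equal-precedence operators associate left to right, so 10 / 2 * 5 is (10 / 2) * 5:
assert 10 / 2 * 5 == 25
assert (10 / 2) * 5 == 25

# Parentheses are needed to force the other reading:
assert 10 / (2 * 5) == 1

# Precedence still supersedes the left-to-right rule across different operators:
assert 2 + 3 * 5 == 17
```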
1,255,334
<p>A classmate and I are studying this following question from Stein-Shakarchi, Chapter 2, Exercise 12:</p> <blockquote> <p>Show that there are <span class="math-container">$f \in L^1(\mathbb{R}^d)$</span> and a sequence <span class="math-container">$\{f_n\}$</span> with <span class="math-container">$f_n \in L^1(\mathbb{R}^d)$</span> such that <span class="math-container">$$\|f-f_n\|_{L^1(\mathbb{R}^d)} \to 0,$$</span> but <span class="math-container">$f_n(x) \to f(x)$</span> for no <span class="math-container">$x$</span>.</p> <p>[Hint: In <span class="math-container">$\mathbb{R}$</span>, let <span class="math-container">$f_n=\chi_{I_n}$</span>, where <span class="math-container">$I_n$</span> is an appropriately chosen sequence of intervals with <span class="math-container">$m(I_n)\to 0$</span>.]</p> </blockquote> <p>Our attempt:</p> <p>First, I defined <span class="math-container">$I_n := [\frac{k-1}{2^n}, \frac{k}{2^n}]$</span> for <span class="math-container">$k \in \mathbb{Z}$</span>, so that <span class="math-container">$m(I_n)\to 0$</span> as <span class="math-container">$n \to \infty$</span>. From these intervals, the sequence of functions is defined to be: <span class="math-container">$$f_n := \chi_{I_n}(x) + \chi_{I_n}(-x)$$</span> Then <span class="math-container">$f=0$</span>, and so <span class="math-container">$$\int_{\mathbb{R}} |f_n - f| = \int_{\mathbb{R}} f_n = \frac{1}{2^{n}} +\frac{1}{2^{n}} \to 0.$$</span></p> <p>But I am left to show that <span class="math-container">$f_n \to f$</span> for no <span class="math-container">$x$</span>. I do not see this from my example. But does this example work? If so, why does <span class="math-container">$f_n$</span> not converge to <span class="math-container">$f$</span> for any <span class="math-container">$x$</span>?</p>
shalop
224,467
<p>We will give an example in $\mathbb{R}^1$. The analogous construction in $\mathbb{R}^d$ is similar.</p>

<p>Fix $k \in \mathbb{Z}$. Take some enumeration $\{I^k_n\}_{n \in \mathbb{N}}$ of all subintervals of $[k,k+1]$ which have the form $[\frac{p}{q},\frac{p+1}{q}]$ for some integers $p,q$. Notice that $m(I^k_n) \to 0$ as $n \to \infty$, because for any $\epsilon &gt;0$, there are only finitely many $n$ such that $m(I^k_n)&gt;\epsilon$. Define $g^k_n=1_{I^k_n}$, for $n \in \mathbb{N}$. Notice that, for any $x \in [k,k+1]$ there are infinitely many $n$ with $g^k_n(x)=1$. Therefore $\limsup_{n \to \infty} g^k_n(x)=1$ for every $x \in [k,k+1]$.</p>

<p>For $n \in \mathbb{N}$, let $f_n = \sum_{k \in \mathbb{Z}} 2^{-|k|}g^k_n$, and let $f=0$. Then $\int |f_n -f|= \sum_{k \in \mathbb{Z}} 2^{-|k|}m(I^k_n)$, which converges to zero as $n \to \infty$ by the DCT (applied to counting measure), because each individual $m(I^k_n)$ approaches zero when $k$ is fixed and $n \to \infty$. Therefore $||f_n - f||_{L^1} \to 0$. However, if $x \in \mathbb{R}$, say $x \in [k,k+1)$ where $k \in \mathbb{Z}$, then $\limsup f_n(x)= 2^{-|k|}&gt;0$, so $f_n(x)$ does not converge to $f(x)$, for <strong>any</strong> $x \in \mathbb{R}$.</p>
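<p>A small numerical illustration of the "typewriter" idea on $[0,1]$ (a sketch only, not part of the proof; the enumeration below orders the intervals $[\frac{p}{q}, \frac{p+1}{q}]$ by increasing denominator $q$, which is one valid choice):</p>

```python
# Enumerate the intervals [p/q, (p+1)/q] of [0, 1] by increasing denominator q.
intervals = [(p / q, (p + 1) / q) for q in range(1, 200) for p in range(q)]

# Along this enumeration the lengths m(I_n) = 1/q tend to 0 ...
lengths = [b - a for a, b in intervals]
assert lengths[-1] < 0.01

# ... yet any fixed x in [0, 1) lies in at least one interval for every q,
# hence in infinitely many intervals overall, so 1_{I_n}(x) = 1 infinitely often:
x = 0.3
hits = sum(1 for a, b in intervals if a <= x <= b)
assert hits >= 199
```

This is exactly why the $L^1$ norms shrink while pointwise convergence fails at every point: the indicator keeps returning to the value $1$ at each $x$, no matter how far out in the sequence you go.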