Dataset schema (column, dtype, min/max value or string length):

qid        int64         1 .. 4.65M
question   large_string  lengths 27 .. 36.3k
author     large_string  lengths 3 .. 36
author_id  int64         -1 .. 1.16M
answer     large_string  lengths 18 .. 63k
147,917
<blockquote> <p>Let $A,B\in M_{n\times n} (\mathbb{C})$. Is it possible that $ABA-BAB=I$?</p> </blockquote> <p>I came across this interesting problem as I was studying for an exam. I guess in the case when $A$ and $B$ commute we have $A(A-B)B=I$ and I am not sure if it can happen.</p> <p>Any ideas?</p>
Henry T. Horton
24,934
<p>Yes. Let $$A_n = \mathrm{diag}\left(\frac{1}{2}(1 + \sqrt{5}), \dots, \frac{1}{2}(1 + \sqrt{5})\right)$$ and $$B_n = \mathrm{diag}(1, \dots, 1).$$ This example will work for any $n$.</p> <p><strong>Edit:</strong> Now, for a general solution in the noncommutative case, combine my commutative general solution with Robert Israel's $2 \times 2$ noncommutative solution, taking the block diagonal matrices $$\tilde{A}_n = \begin{pmatrix} A_{n-2} &amp; 0 &amp; 0 \\ 0 &amp; 2 &amp; 0 \\ 0 &amp; 0 &amp; 1 \end{pmatrix}$$ and $$\tilde{B}_n = \begin{pmatrix} B_{n-2} &amp; 0 &amp; 0 \\ 0 &amp; -\frac{1}{2} &amp; 1 \\ 0 &amp; -\frac{7}{2} &amp; 3 \end{pmatrix}.$$</p>
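Both constructions in this answer can be checked numerically. The following sketch (not part of the original answer; it uses numpy) verifies the commuting diagonal example with the golden ratio and the quoted $2 \times 2$ noncommuting block:

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2  # golden ratio: phi**2 - phi - 1 = 0

# Commuting example: A = phi * I, B = I, so ABA - BAB = (phi**2 - phi) I = I.
n = 4
A = phi * np.eye(n)
B = np.eye(n)
assert np.allclose(A @ B @ A - B @ A @ B, np.eye(n))

# The 2x2 noncommuting block quoted from Robert Israel's solution.
A2 = np.array([[2.0, 0.0], [0.0, 1.0]])
B2 = np.array([[-0.5, 1.0], [-3.5, 3.0]])
assert np.allclose(A2 @ B2 @ A2 - B2 @ A2 @ B2, np.eye(2))
print("both examples satisfy ABA - BAB = I")
```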
169,126
<p>I was trying to prove that $-(x + y) = -x - y$ and as you can see in the image below, I took the liberty of using the $-$ symbol as a number and applying the associative law with it. Is it kosher in all rigorousness given the axioms professional mathematicians use? <img src="https://i.stack.imgur.com/SMjqb.png" alt="enter image description here"></p>
pritam
33,736
<p>$$(x+y)+(-x-y)=(y+x)+(-x-y)=y+(x-x)+(-y)=y+0+(-y)=0$$ So $(x+y)$ is the additive inverse of $(-x-y)$. Hence $-(x+y)=-x-y$</p>
2,227,119
<p>The matrix is as follows:</p> <p>$$ \begin{bmatrix} 3/5 &amp; 4/5 &amp; 3/5 \\ -4/5 &amp; 3/5 &amp; 0 \\ 0 &amp; 0 &amp; 4/5 \\ \end{bmatrix} $$</p> <p>I get that in order for a matrix (call this matrix $A$) to be orthogonal it first must be an $n \times n$ matrix, that $|AX| = |X|$ for all $X \in \mathbb{R}^n$, that $(A^t)(A) = I$, and that the columns of $A$ form an orthonormal subset of $\mathbb{R}^n$.</p> <p>Keeping all of this in mind, what I first did was to check whether the magnitude of each column equals $1$, and they do indeed each have length $1$, so that's good. From here I took the dot product of each pair of columns and saw that only $A_1$ dotted with $A_2$ works, and I have come to the conclusion that I need to change $A_3$; this is where I'm stuck. Without what seems like endless trial and error, how does one solve this problem? I'm trying to keep all of these ideas in mind; I'm just stuck in a rut, it feels like. Thanks for the help!</p>
kmitov
84,067
<p>I think that the constraints are as follows: $0\le z \le 1$, $0 \le r \le 1-z$, $0 \le \theta \le 2 \pi$</p> <p>$\int_0^{2\pi}(\int_0^1(\int_0^{1-z}z r dr)dz)d\theta= \int_0^{2\pi}(\int_0^1z(\int_0^{1-z} r dr)dz)d\theta$ </p> <p>$= 2\pi(\int_0^1z\frac{(1-z)^2}{2}dz)=\pi(\int_0^1zdz-\int_0^12z^2dz+\int_0^1z^3dz)$ </p> <p>$=\pi(\frac{1}{2}-2 \times \frac{1}{3}+\frac{1}{4})=\pi/12$.</p> <p>The Jacobian $r$ was missed.</p>
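As a cross-check of the computation above (a sympy sketch, not part of the original answer), the iterated integral with the Jacobian $r$ included does evaluate to $\pi/12$:

```python
import sympy as sp

r, z, theta = sp.symbols('r z theta')

# integrand z, times the cylindrical Jacobian r, over the stated bounds
val = sp.integrate(z * r, (r, 0, 1 - z), (z, 0, 1), (theta, 0, 2 * sp.pi))
print(val)  # pi/12
```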
2,227,119
<p>The matrix is as follows:</p> <p>$$ \begin{bmatrix} 3/5 &amp; 4/5 &amp; 3/5 \\ -4/5 &amp; 3/5 &amp; 0 \\ 0 &amp; 0 &amp; 4/5 \\ \end{bmatrix} $$</p> <p>I get that in order for a matrix (call this matrix $A$) to be orthogonal it first must be an $n \times n$ matrix, that $|AX| = |X|$ for all $X \in \mathbb{R}^n$, that $(A^t)(A) = I$, and that the columns of $A$ form an orthonormal subset of $\mathbb{R}^n$.</p> <p>Keeping all of this in mind, what I first did was to check whether the magnitude of each column equals $1$, and they do indeed each have length $1$, so that's good. From here I took the dot product of each pair of columns and saw that only $A_1$ dotted with $A_2$ works, and I have come to the conclusion that I need to change $A_3$; this is where I'm stuck. Without what seems like endless trial and error, how does one solve this problem? I'm trying to keep all of these ideas in mind; I'm just stuck in a rut, it feels like. Thanks for the help!</p>
zipirovich
127,842
<p>I'll venture a guess as to what your confusion might be about: how the solid is defined and how you're going to set up the triple integral are <strong>NOT</strong> necessarily the same thing. While the solid can be originally defined by certain constraints, you may choose a different description for it via different constraints that are more amenable to integration.</p> <p>Say, in this example, the equation $x^2+y^2=(1-z)^2$ defines a cone that extends infinitely far up and down. So that we have a reasonable (for this question) solid &mdash; one that doesn't extend forever &mdash; we need to describe what finite part of this cone we're actually dealing with. That's what $0\le z\le1$ tells you &mdash; that you have the portion of this cone that's vertically contained between the planes $z=0$ and $z=1$. See the (quickly generated, so not so great) pictures below.</p> <p><a href="https://i.stack.imgur.com/4S8oo.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4S8oo.gif" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/5Pv60.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5Pv60.gif" alt="enter image description here"></a></p> <p>You need to understand that to visualize the given solid. But you do <strong>NOT</strong> have to use the same constraints to set up the corresponding triple, or rather iterated, integral. Heck, you don't even have to use cartesian coordinates, right? For this one, cylindrical would be more convenient. The base in the $xy$-plane can be obtained by plugging in $z=0$, which gives us the unit circle $x^2+y^2=1$ or $r=1$. And then, for any point in the base, $z$ ranges from $z=0$ to $z=1-r$. So the way you set up your integral is perfectly correct. Note, however, that the solid <strong>as a whole</strong> does go up to $z=1$ at the vertex.</p>
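The cylindrical setup endorsed here can be sanity-checked with sympy (my own sketch, not from the post): integrating the constant $1$ with Jacobian $r$, with $z$ from $0$ to $1-r$ over the unit disk, should give the cone volume $\frac13 \pi r^2 h = \pi/3$.

```python
import sympy as sp

r, z, theta = sp.symbols('r z theta')

# volume of the cone: z from 0 to 1 - r over the unit disk, Jacobian r
vol = sp.integrate(r, (z, 0, 1 - r), (r, 0, 1), (theta, 0, 2 * sp.pi))
print(vol)  # pi/3
```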
448,227
<p>Is it possible to define the completion of a metric space using categorical terms?</p>
Daniel Fischer
83,702
<p>The completion of a metric space $X$ is an initial object in the category $\mathcal{U}_X$ whose objects are uniformly continuous maps $\iota_Y \colon X \to Y$, where $Y$ is a complete metric space, and whose morphisms are uniformly continuous maps $f \colon Y \to Z$ such that $f \circ \iota_Y = \iota_Z$.</p>
636,289
<p>Here's what I've gathered so far: </p> <p>First I calculate the number of combinations of $5$ cards from the whole deck, which is $2598960$.</p> <p>I'll use the complement, so I want the combinations of everything but the aces and kings, so it's again combinations of $5$ from $44$ cards, which is $1086008$.</p> <p>$$1-\frac{1086008}{2598960} = 0.582$$</p> <p>That is incorrect, however. What am I doing wrong? The complement for "at least one $X$ and at least one $Y$" should be "not $X$ or not $Y$", which is "everything but aces and kings". Is that even correct?</p>
Eric Tressler
26,785
<p>Let $K,A$ denote the number of Kings and Aces drawn, respectively.</p> <p>$$\begin{align*} P(K&gt;0 \text{ and } A&gt;0) &amp;= 1 - P(K=0 \text{ or } A=0) \\ &amp;= 1 - P(K=0) - P(A=0) + P(K=0 \text{ and } A=0) \end{align*}$$ The final term exists because we have subtracted that event twice, once in the term $P(K=0)$ and once in the term $P(A=0)$; now we have to add it back to account for our double-counting (this is the principle of inclusion-exclusion). The values in the last line are easy to calculate. $P(K=0) = P(A=0) = \binom{48}{5}/\binom{52}{5}$ and $P(K=0 \text{ and } A=0) = \binom{44}{5}/\binom{52}{5}$.</p>
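The inclusion-exclusion computation can be spot-checked in a few lines (a sketch using Python's `math.comb`, not part of the original answer):

```python
from math import comb

total = comb(52, 5)                # 2,598,960 five-card hands
p_no_king = comb(48, 5) / total    # P(K = 0); P(A = 0) is the same by symmetry
p_neither = comb(44, 5) / total    # P(K = 0 and A = 0)

# inclusion-exclusion: P(K > 0 and A > 0)
p = 1 - 2 * p_no_king + p_neither
print(round(p, 4))                 # about 0.1002
```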
647
<p>For example, I gave an exam earlier today with a problem that ended in the sentence</p> <blockquote> <p>Use the chain rule to find $(f\circ g)'(3)$.</p> </blockquote> <p>During the exam, one of the students asked me what the circle between the $f$ and $g$ means, and I answered that it represents the composition of functions.</p> <p>Later on, another student was working on a question that contained the phrase</p> <blockquote> <p>. . . where $V$ represents the volume of water in the tank.</p> </blockquote> <p>and also the phrase</p> <blockquote> <p>Given that water initially drains from the tank at a rate of $2.0\;\mathrm{L}/\mathrm{min}$ . . .</p> </blockquote> <p>The student asked whether $2.0\;\mathrm{L}/\mathrm{min}$ is the value of $\dfrac{dV}{dt}$, and I said I couldn't help with that.</p> <p>Would you have answered these students' questions? How do you decide what to do in these situations? I usually handle these on a case-by-case basis, but I'm curious if anyone has any good general rules for deciding what information to give.</p>
David Ebert
867
<p>Don't neglect the students in your class with unasked questions! </p> <p>If one student in your class asks you a question, then it's safe to assume that at least one or two others also have the question somewhere in mind. If this is the case, then I would argue that it is unfair to answer a question for only the students who are willing to ask. If we really want to avoid giving "unfair advantages" then surely it is unfair to bias against those students without the courage or volition to ask a question even when their question isn't really what the test is "about." </p> <p>The school I teach in currently has a "no questions allowed" policy during exams, including when there is a flagrant or misleading mistake in the question paper. At first I thought this was harsh, but I've come to think that this really is the most <em>fair</em> way of handling things for everyone. Additionally, this really motivates teachers to moderate their exams and carefully check for wording and clarity.</p> <p>In my class tests, if a student asks a question I will respond in <strong>only</strong> two ways:</p> <p>1) I provide general encouragement along the lines of "You can do it!" This (I hope) is a way of calming the nerves of those with test anxiety or a memory block.</p> <p>2) If I find that I have made a really bad mistake in writing a class test, then I'll write the correct question on the board and explain the mistake to the entire class.</p>
3,000,522
<p>I'm working on a conjecture and I've reached a point where I need to express every positive integer not divisible by 4 with only one parameter (as you would express every even number as $2p$). Here the parameter $p$ has to range over the whole domain of natural numbers <span class="math-container">$\mathbb{N}^*$</span>.</p> <p>Before giving you any further information about the idea I have for expressing such a set of numbers, I would like to show you something that might come in handy.</p> <p>Earlier, while working on the said conjecture, I needed to express (still with only one parameter $n$) a repeating sequence: (5, 0, 2, 3, 8, 6). I succeeded, using the Fibonacci sequence. The idea was to notice that, after subtracting 4, the sequence becomes (1, -4, -2, -1, 4, 2), and then to write it as <span class="math-container">$u_n =(-1)^{f(n)}2^{g(n)}$</span> with <span class="math-container">$f(n)$</span> odd for three consecutive terms then even for the three following, and <span class="math-container">$g(n)$</span> alternating between <span class="math-container">$0,1,2$</span>. Finally I came up with: <span class="math-container">$$g(n) =1+\frac {(-1)^{F_n}-(-1)^{F_{n+2}}}{2}$$</span> <span class="math-container">$f(n)$</span> is not interesting here. 
What's to be noticed is that <span class="math-container">$h(n)=g(n)-1 =\frac {(-1)^{F_n}-(-1)^{F_{n+2}}}{2}$</span> alternates between <span class="math-container">$-1,0,1$</span>.</p> <p>Now for the idea: non-multiples of 4 are <span class="math-container">$u_i=1,2,3,5,6,7,9,10,11,13,14,15...$</span> So basically we can add the <span class="math-container">$h(n)$</span> function to numbers <span class="math-container">$2+6k$</span> and write <span class="math-container">$u_i$</span> as: <span class="math-container">$$u_i = 2+6{k(i)} + h(i)$$</span>With <span class="math-container">$i$</span> in <span class="math-container">$N*$</span></p> <p>Therefore the question is can you find an expression for <span class="math-container">$k(n)$</span> that would at least inspire me. Thanks a lot for your answers.</p>
fleablood
280,126
<p>Let <span class="math-container">$n = 3k + r,$</span> where <span class="math-container">$1\le r \le 3$</span>. In other words, <span class="math-container">$k = \lfloor \frac {n-1}3\rfloor$</span> and <span class="math-container">$r = n - 3k$</span>.</p> <p>Just let <span class="math-container">$N = 4k + r$</span></p> <p>Or <span class="math-container">$N = 4 \lfloor \frac {n-1}3\rfloor + n - 3\lfloor \frac {n-1}3\rfloor = n + \lfloor \frac {n-1}3\rfloor$</span></p> <p>Thus you get <span class="math-container">$1,2,3,5,6,7,9,10,\dotsc$</span></p>
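A small check of this construction (my own sketch; the helper name is made up): the map $n \mapsto n + \lfloor (n-1)/3 \rfloor$ enumerates exactly the positive integers not divisible by $4$.

```python
def nth_non_multiple_of_4(n):
    # n = 3k + r with 1 <= r <= 3, i.e. k = (n - 1) // 3, then N = 4k + r
    k = (n - 1) // 3
    r = n - 3 * k
    return 4 * k + r   # equivalently n + (n - 1) // 3

first = [nth_non_multiple_of_4(n) for n in range(1, 13)]
print(first)  # [1, 2, 3, 5, 6, 7, 9, 10, 11, 13, 14, 15]
assert all(m % 4 != 0 for m in first)
```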
1,063,382
<p>Is there any proper way to know whether a function has the same domain and range $[a,b]$ where $a,b&lt;\infty$ i.e. $f:[a,b] \to [a,b]$ ?</p> <p>For example: $$ f(x) = e^{-x} ,\qquad [\ln(1.1), \ln(3)] $$</p>
lab bhattacharjee
33,337
<p>So, the required circle passes through the intersection of the circle &amp; its tangent at $(2,3)$</p> <p>Find the equation of the tangent in the form $: Ax+By+C=0$</p> <p>Now, the equation of any circle passing through the intersection of the given circle &amp; $Ax+By+C=0$</p> <p>can be written as $x^2+y^2+4x-6y-3+K(Ax+By+C)=0$ where $K$ is an arbitrary constant </p> <p>Use the fact that this circle passes through $(1,1)$ to find $K$</p>
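Following these steps symbolically (a sympy sketch, not from the original answer; the tangent at $(2,3)$ is worked out here as the vertical line $x-2=0$, since the centre of the given circle is $(-2,3)$, a step the answer leaves to the reader):

```python
import sympy as sp

x, y, K = sp.symbols('x y K')

# Family through the intersection of the circle and its tangent x - 2 = 0
# (i.e. A = 1, B = 0, C = -2 in the answer's notation).
family = x**2 + y**2 + 4*x - 6*y - 3 + K * (x - 2)

# Impose that the circle passes through (1, 1) to pin down K.
K_val = sp.solve(family.subs({x: 1, y: 1}), K)[0]
circle = sp.expand(family.subs(K, K_val))
print(K_val)   # -3
print(circle)  # passes through both (2, 3) and (1, 1)
```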
1,063,382
<p>Is there any proper way to know whether a function has the same domain and range $[a,b]$ where $a,b&lt;\infty$ i.e. $f:[a,b] \to [a,b]$ ?</p> <p>For example: $$ f(x) = e^{-x} ,\qquad [\ln(1.1), \ln(3)] $$</p>
Tintarn
197,823
<p>Note that your equation is equivalent to $(x+2)^2+(y-3)^2=16$, hence it describes the circle with center $C(-2,3)$ and radius $4$. Let $D(2,3)$ be the intersection point. Notice that $CD$ is parallel to the $x$-axis, so the tangent at point $D$ must be parallel to the $y$-axis and thus satisfies the equation $x=2$. Now, if the second circle touches this line at $D$ as well, then its center $C'$ must also have $y$-coordinate $3$. Hence, the circle is symmetric about the line $y=3$, and you know that $(2,3)$, $(1,1)$, and therefore also $(1,5)$, lie on the circle. Now you have three points for your circle and can plug them into the general circle equation $(x-a)^2+(y-b)^2=r^2$ to obtain $r,a,b$.</p>
710,518
<p>I'm having a brainfart while trying to solve a problem for differential equations that requires me to recall some Calculus. If I have $y' = f(t, y) = 1 - t + 4y$, what is $y''$? Do I just differentiate with respect to $t$ to get $y'' = -1$?</p>
Mark Bennet
2,906
<p>One standard way of solving $$y'-4y=1-t$$</p> <p>This is linear, so if $y=F(t)$ is a solution and $G(t)$ is any solution of the homogeneous equation: $$G'-4G=0$$ $y=F+G$ is also a solution.</p> <p>The homogeneous equation is solved in various ways (e.g. separation of variables) as $G(t)=ae^{4t}$</p> <p>For a particular solution, with a polynomial on the right-hand side, we try a polynomial $F(t)=ct+d$, $F'=c$ $$F'-4F=-4ct+c-4d$$ whence $c=\frac 14, d=-\frac 3{16}$.</p>
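A quick symbolic check of this solution (a sympy sketch, not part of the original answer): the general solution $y = a e^{4t} + \frac t4 - \frac3{16}$ satisfies $y' - 4y = 1 - t$ identically.

```python
import sympy as sp

t, a = sp.symbols('t a')

# homogeneous part a*e^{4t} plus the particular polynomial t/4 - 3/16
y = a * sp.exp(4 * t) + t / 4 - sp.Rational(3, 16)

residual = sp.diff(y, t) - 4 * y - (1 - t)
print(sp.simplify(residual))  # 0
```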
2,854,419
<p>Are these first-order formulas equivalent? $$(\forall x)[(Ax \to Bx)\to(Cx \to Dx)]\tag{1}$$ $$(\forall x)(Ax \to Bx)\to (\forall x)(Cx \to Dx)\tag{2}$$ $$(\forall x)(Ax \to Bx)\to (\forall y)(Cy \to Dy)\tag{3}$$ I think (2) and (3) are equivalent, but I am not sure about (1).</p>
Graham Kemp
135,106
<p>$(1)$ has the schema $\forall x~(\psi\to\varphi)$ while $(2)$ has the schema $(\forall x~\psi)\to(\forall x~\varphi)$. &nbsp; Now while the former logically entails the latter, the converse does not hold.</p> <p>Consider a universe where $\exists x~\lnot\psi$ and $\exists x~(\psi\land\lnot\varphi)$ are facts.</p> <hr> <p>$(3)$ has the schema $(\forall x~\psi)\to(\forall y~{\psi\vert}^x_y)$. &nbsp; If $y$ does not occur freely within $\psi$, then $(2)$ and $(3)$ will be logically equivalent.</p>
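The entailment claim can be checked by brute force over a two-element universe (my own sketch, not from the answer): no interpretation makes $(1)$ true and $(2)$ false, while interpretations making $(2)$ true and $(1)$ false do exist.

```python
from itertools import product

def implies(p, q):
    return (not p) or q

domain = (0, 1)
preds = list(product((False, True), repeat=len(domain)))  # all unary predicates

found = None
for A, B, C, D in product(preds, repeat=4):
    # schema (1): forall x, (Ax -> Bx) -> (Cx -> Dx)
    f1 = all(implies(implies(A[x], B[x]), implies(C[x], D[x])) for x in domain)
    # schema (2): (forall x, Ax -> Bx) -> (forall x, Cx -> Dx)
    f2 = implies(all(implies(A[x], B[x]) for x in domain),
                 all(implies(C[x], D[x]) for x in domain))
    assert implies(f1, f2)      # (1) always entails (2)
    if f2 and not f1:
        found = (A, B, C, D)    # witness that (2) does not entail (1)
print("counterexample to (2) => (1):", found)
```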
3,609,214
<p>What I want to do is to first calculate all the possible permutations of the letters of the given word. Once I do that, I plan to keep an S in the 5th position and calculate possible permutations. But the question is do I have to multiply it by 2 and THEN deduct it from the total number of perms? Or will I get the correct answer if I directly deduct it? Thank you.</p>
Aryak Sen
768,532
<p>A block of a graph G is a maximal connected subgraph of G that has no articulation point (cut-point or cut-vertex). If G itself is connected and has no cut-vertex, then G is a block. Two or more blocks of a graph can meet at a single vertex only, which must necessarily be an articulation point of the graph. Hence, any graph can be written as an edge-disjoint union of its blocks, because of their maximality. We can prove this by contradiction. Suppose there were a graph G which cannot be represented as an edge-disjoint union of blocks. For simplicity, suppose there are two different blocks <strong>X</strong> and <strong>Y</strong> which have a common edge <strong>uv</strong>. Without loss of generality, we may assume that <strong>u</strong> has an adjacent vertex <strong>m</strong> in <strong>X</strong>. Since <strong>v</strong> and <strong>m</strong> are vertices in <strong>X</strong>, there exists a path <strong>P</strong> between <strong>v</strong> and <strong>m</strong> not containing <strong>u</strong>. Evidently, <strong>P</strong> does not contain the edge <strong>uv</strong>. Also, a path can be represented as a union of single edges (K2 subgraphs, which are by definition blocks). Let <strong>S</strong> be the set of edges in <strong>X</strong> minus <strong>uv</strong>, <strong>um</strong> and <strong>P</strong>. Thus G can be represented by <strong>Y</strong> ∪ <strong>um</strong> ∪ <strong>P</strong> ∪ <strong>S</strong>, which can be viewed as a union of different edge-disjoint blocks, which is a contradiction.</p>
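The edge-disjoint decomposition is easy to see on a concrete example (a self-contained sketch with a hand-picked "bowtie" graph; the function names are mine): two triangles meeting at a single articulation point partition the edge set, and neither block has a cut vertex.

```python
from collections import deque

def connected(vertices, edges):
    # BFS connectivity check on the subgraph with the given vertices/edges
    vertices = set(vertices)
    if len(vertices) <= 1:
        return True
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    start = next(iter(vertices))
    seen, queue = {start}, deque([start])
    while queue:
        for w in adj[queue.popleft()] - seen:
            seen.add(w)
            queue.append(w)
    return seen == vertices

# "Bowtie" graph: two triangles meeting at the single articulation point 2.
edges = {(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)}
blocks = [{(0, 1), (0, 2), (1, 2)}, {(2, 3), (2, 4), (3, 4)}]

# The blocks partition the edge set: an edge-disjoint union.
assert blocks[0] | blocks[1] == edges and not (blocks[0] & blocks[1])

# No block has a cut vertex: deleting any one vertex leaves it connected.
for block in blocks:
    verts = {v for e in block for v in e}
    for v in verts:
        assert connected(verts - {v}, {e for e in block if v not in e})
print("edge-disjoint blocks with no cut vertices")
```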
23,564
<p>In the <a href="http://www.ems-ph.org/journals/newsletter/pdf/2009-12-74.pdf" rel="noreferrer">December 2009 issue</a> of the <a href="http://www.ems-ph.org/journals/all_issues.php?issn=1027-488X" rel="noreferrer">newsletter of the European Mathematical Society</a> there is a very interesting interview with Pierre Cartier. In page 33, to the question</p> <blockquote> <p>What was the ontological status of categories for Grothendieck?</p> </blockquote> <p>he responds</p> <blockquote> <p>Nowadays, one of the most interesting points in mathematics is that, although all categorical reasonings are formally contradictory, we use them and we never make a mistake.</p> </blockquote> <p>Could someone explain what this actually means? (Please feel free to retag.)</p>
Kevin Buzzard
1,384
<p>Note: I am not a historian. I'm just guessing as to what prompted the comments.</p> <p>Here's my guess: if you do set theory naively, in the old-fashioned "anything is a set" way, then you run into Russell's paradox; the set consisting of all sets that aren't elements of themselves gives you trouble. So you then decide set theory needs formalising (I'm talking about 100 years ago here of course) and you write down some axioms, and the ones that "won" are ZFC, where only "small" collections of things are sets, and "big" things like "all groups" aren't sets. Of course there's nothing contradictory about considering all groups, or quantifying over groups (i.e. saying "every group has an identity element"), but you can't quantify over the <em>set</em> of all groups.</p> <p>And now because it's the 50s or 60s and you want to do homological algebra and take derived functors and do spectral sequences and stuff in some abstract way, you are now feeling the pressure a bit, because you want to define "functions" from the category of all G-modules (G a group) to the category of abelian groups called "group cohomology", but "H^n(G,-)" isn't a function, because its domain and range aren't sets. So you call it a "functor", which is fine, and press on.</p> <p>And as time goes on, and you start composing derived functors, you know in your heart that it's all OK. </p> <p>And then Grothendieck comes along, and probably other people too, and raise the issue that one really should be a bit more careful, because we don't want another Frege (who wrote a huge treatise on set theory but allowed big sets and his axioms were contradictory because of Russell's paradox). So Grothendieck tried to tame these beasts and "go back to basics"---but in some sense he "failed"---or, more precisely, realised that there were fundamental problems if he really wanted to treat categories as sets. "Sod it all", thought Grothendieck, "this is not really the main point". 
So he said "let's just assume there's a universe, i.e. (basically) a set where all the axioms of set theory hold" (it was a bit more complicated than that but still). This assumption (a) fixed all his problems but was (b) unprovable from the axioms of ZFC (because of Goedel).</p> <p>So there's a guess. Cartier is perhaps going over the top with "contradictory"---the statement "Russell's paradox is a paradox" is true but the statement "any mathematical manipulation with collections of objects that don't form a set is formally contradictory" is much stronger and surely false.</p>
4,528,027
<p>I'm trying to prove that the function <span class="math-container">$$\begin{cases}f(x,y)=\dfrac{2x^2y^4-3xy^5+x^6}{(x^2+y^2)^2}, &amp; (x,y)\ne(0,0)\\ 0, &amp; (x,y)=(0,0)\end{cases}$$</span> is continuous at the point $(0,0)$ using the rigorous definition of a limit.</p> <p>Attempting to find an upper bound for the function: <span class="math-container">$$|f(x)-f(x_0)|= \left|\frac{2x^2y^4-3xy^5+x^6}{(x^2+y^2)^2}-0\right|$$</span> I see the denominator is always positive, so this is equal to <span class="math-container">$\dfrac{|2x^2y^4-3xy^5+x^6|}{(x^2+y^2)^2}$</span>. Using the triangle inequality, I know that this is less than or equal to <span class="math-container">$\dfrac{|2x^2y^4-3xy^5|+|x^6|}{(x^2+y^2)^2}$</span>. From here I would like to continue finding expressions which are greater than or equal to this, and which allow me to cancel some terms against <span class="math-container">$(x^2+y^2)^2$</span>. I'm thinking I can write <span class="math-container">$$x^6 = (x^2)^3 \le (x^2+y^2)^3 $$</span> for instance, but I am unsure of how to &quot;handle&quot; <span class="math-container">$|2x^2y^4-3xy^5|$</span>. Could someone give me any pointers?</p>
Bernkastel
551,169
<p>Use the triangle inequality like this: <span class="math-container">$$|f(x,y)|\le \frac{|2x^2y^4|+|-3xy^5|+|x^6|}{(x^2+y^2)^2}$$</span> Notice that <span class="math-container">$$\frac{|x^6|}{(x^2+y^2)^2}\le \frac{|x|^6}{x^4}=\frac{x^6}{x^4}=x^2$$</span> <span class="math-container">$$\frac{|2x^2y^4|}{(x^2+y^2)^2} \le \frac{2x^2y^4}{y^4}=2x^2$$</span> <span class="math-container">$$\frac{|-3xy^5|}{(x^2+y^2)^2} \le \frac{3|x||y|y^4}{y^4}=3|x||y|$$</span> Altogether, <span class="math-container">$|f(x,y)| \le 3(x^2+|x||y|)$</span>.</p> <p>So, noticing that <span class="math-container">$\sqrt{x^2+y^2}&lt;\delta$</span> implies <span class="math-container">$|x|&lt;\delta$</span> and <span class="math-container">$|y|&lt;\delta$</span>, you should be able to conclude with an appropriate choice of <span class="math-container">$\delta$</span>.</p>
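The final bound can be spot-checked numerically (a small numpy sketch, not part of the answer):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, y):
    return (2*x**2*y**4 - 3*x*y**5 + x**6) / (x**2 + y**2)**2

# Spot-check the derived bound |f(x, y)| <= 3*(x**2 + |x||y|) on random points.
x = rng.uniform(-1, 1, 10_000)
y = rng.uniform(-1, 1, 10_000)
mask = (x != 0) | (y != 0)          # avoid the origin, where f is defined as 0
x, y = x[mask], y[mask]
assert np.all(np.abs(f(x, y)) <= 3 * (x**2 + np.abs(x) * np.abs(y)) + 1e-12)
print("bound holds on all sampled points")
```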
310,363
<p>Given $N$ boxes with the same capacity $C$, I toss coins into the boxes uniformly, one by one. When any one of the boxes is full, the sum of the coins in all boxes is denoted $S$. How to compute the probability density function of $S(N,C)$? </p>
Liviu Nicolaescu
20,302
<p>$\newcommand{\bP}{\mathbb{P}}$ Denote by $E_i$ the event that the $i$-th box is the one first filled to capacity. Then </p> <p>$$\bP(S=s)=\sum_{i=1}^N \bP(S=s, E_i)= N\bP(S=s, E_1). $$</p> <p>We have</p> <p>$$\bP(S=s, E_1)=\frac{1}{N^s}\sum_{\substack{0\leq k_2,\dotsc, k_N\leq C-1\\ k_2+\cdots+ k_N=s-C}}\binom{s}{C, k_2,\dotsc,k_N}. $$</p> <p>We have</p> <p>$$\sum_{C\leq s\leq (C-1)N+1} \bP(S=s, E_1)t^s=\sum_{C\leq s\leq (C-1)N+1}\;\sum_{\substack{0\leq k_2,\dotsc, k_N\leq C-1\\ k_2+\cdots+ k_N=s-C}}\binom{s}{C,k_2,\dotsc,k_N}\left(\frac{t}{N}\right)^s. $$</p> <p>Set $u:=\frac{t}{N}$. The probability generating function of $S$ is</p> <p>$$f_S(t):=\sum_{s}\bP(S=s)t^s=N\sum_{C\leq s\leq (C-1)N+1} \bP(S=s, E_1)t^s $$ $$=N\sum_{C\leq s\leq (C-1)N+1}\;\sum_{\substack{0\leq k_2,\dotsc, k_N\leq C-1\\ k_2+\cdots+ k_N=s-C}}\binom{s}{C,k_2,\dotsc,k_N}u^{s}. $$</p> <p><strong>Example 1.</strong> $N=2$. $\newcommand{\bE}{\mathbb{E}}$</p> <p>$$p_s:=\bP(S=s)=\frac{1}{2^{s-1}}\binom{s}{C},\;\;s=C,\dotsc, 2C-1.$$</p> <p>This is related to the famous <a href="https://en.wikipedia.org/wiki/Banach%27s_matchbox_problem" rel="nofollow noreferrer"><em>Banach matchbox problem</em></a>. Suppose that we have two boxes, each filled with $C$ balls. We repeatedly draw a ball from a randomly chosen box until one of the boxes becomes empty. Then $S$ denotes the number of drawn balls. The expectation of $S$ is</p> <p>$$\mu=\mu_C=\sum_{s=C}^{2C-1}\frac{s}{2^{s-1}}\binom{s}{C}$$</p> <p>We observe that</p> <p>$$p_{s+1}=\frac{s+1}{2(s+1-C)}p_s$$</p> <p>so $$(s+1-C)p_{s+1}=\frac{s}{2}p_s +\frac{1}{2}p_s, $$</p> <p>so that $$\sum_{s=C}^{2C-1}(s+1-C)p_{s+1}=\sum_{s=C}^{2C-1}\frac{s}{2}p_s +\frac{1}{2}\sum_{s=C}^{2C-1}p_s.$$</p> <p>We have $$\sum_{s=C}^{2C-1}\frac{s}{2}p_s+\frac{1}{2}\sum_{s=C}^{2C-1}p_s=\frac{\mu}{2}+\frac{1}{2}. 
$$</p> <p>On the other hand </p> <p>$$ \sum_{s=C}^{2C-1}(s+1-C)p_{s+1}=\sum_{t=C+1}^{2C}tp_t-C\sum_{t=C+1}^{2C}p_t $$</p> <p>$$=\mu-Cp_C+2Cp_{2C}-C+Cp_C-Cp_{2C}=\mu-C+Cp_{2C},$$</p> <p>Hence </p> <p>$$2\mu-2C+2Cp_{2C}=\mu+1, $$</p> <p>$$\mu=2C+1-2Cp_{2C}=2C+1-\frac{C}{2^{2C-2}}\binom{2C}{C}.$$</p> <p>Using Stirling's formula, $\binom{2C}{C}\sim \frac{4^C}{\sqrt{\pi C}}$, we deduce that as $C\to\infty$ we have</p> <p>$$p_{2C}=\frac{1}{2^{2C-1}}\binom{2C}{C}\sim\frac{2}{\sqrt{\pi C}}. $$</p>
502,994
<p>If the image is f(x), why does f(|x|) look like two triangles above the x axis (basically the right side duplicated on the left)?</p> <p><img src="https://i.imgur.com/pGIdNYZ.jpg" alt="function image"></p>
Daniel Montealegre
24,005
<p>OK, so let us call $g(x)=f(|x|)$. You agree that $g(x)=f(x)$ if $x\geq 0$, so that part of the graph remains unchanged. </p> <p>What happens when you try to calculate, say, $g(-1)$? You get $f(|-1|)=f(1)=2$; it is the same for all negative values of $x$, and that is why you get a copy of the triangle. </p>
2,296,141
<p>Assume that a system of equations is given as below:<br> $xy^2+xyu+yv=3$<br> $x^3yz+2xv-u^2v^2=2$ </p> <p>Show that we can find $u,v$ as functions of $x,y,z$ in some neighbourhood of the point $(1,1,1,1,1)$. Then find $u_x,u_y,u_z,v_x,v_y,v_z$ at the given point. </p> <p>Note: My problem is that I don't understand the concept of finding $u,v$ in a neighbourhood of a point. I would appreciate it if someone described it for me in some detail. I don't know how the point mentioned in the question affects the answer (how should I use it?).</p>
Community
-1
<p>You may want to see this: <a href="https://en.m.wikipedia.org/wiki/Implicit_function_theorem" rel="nofollow noreferrer">Implicit Function Theorem</a></p> <p>In a nutshell, compute the "partial" Jacobian with respect to $u$ and $v$ of the vector function:</p> <p>$$ f_1(x,y,z,u,v) = xy^2 + xyu + yv $$ $$ f_2(x,y,z,u,v) = x^3yz + 2xv -u^2v^2 $$</p> <p>The partial Jacobian would be:</p> <p>$$ \begin{pmatrix} f_{1u} &amp; f_{1v} \\ f_{2u} &amp; f_{2v}\\ \end{pmatrix} $$</p> <p>where $f_{ij}$ is the derivative of the $i$-th function defined above with respect to the variable $j$. If the Jacobian is invertible when you evaluate it at $(1,1,1,1,1)$, and given that your function is well behaved, you have a sufficient condition for $u$ and $v$ to be functions of the other variables around that point.</p>
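Concretely, the partial Jacobian described above can be computed with sympy (a sketch, not part of the original answer):

```python
import sympy as sp

x, y, z, u, v = sp.symbols('x y z u v')

F1 = x*y**2 + x*y*u + y*v - 3
F2 = x**3*y*z + 2*x*v - u**2*v**2 - 2

# partial Jacobian with respect to u and v
J = sp.Matrix([[sp.diff(F1, u), sp.diff(F1, v)],
               [sp.diff(F2, u), sp.diff(F2, v)]])
J_at_p = J.subs({x: 1, y: 1, z: 1, u: 1, v: 1})
print(J_at_p)        # Matrix([[1, 1], [-2, 0]])
print(J_at_p.det())  # 2, nonzero, so the theorem applies at (1,1,1,1,1)
```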
2,296,141
<p>Assume that a system of equations is given as below:<br> $xy^2+xyu+yv=3$<br> $x^3yz+2xv-u^2v^2=2$ </p> <p>Show that we can find $u,v$ as functions of $x,y,z$ in some neighbourhood of the point $(1,1,1,1,1)$. Then find $u_x,u_y,u_z,v_x,v_y,v_z$ at the given point. </p> <p>Note: My problem is that I don't understand the concept of finding $u,v$ in a neighbourhood of a point. I would appreciate it if someone described it for me in some detail. I don't know how the point mentioned in the question affects the answer (how should I use it?).</p>
Alex M.
164,025
<p>This is a direct application of <a href="https://en.wikipedia.org/wiki/Implicit_function_theorem#Statement_of_the_theorem" rel="nofollow noreferrer">the implicit function theorem</a> (IFT). Let $F : \Bbb R^3 \times \Bbb R^2 \simeq \Bbb R^5 \to \Bbb R^2$ be given by $F(x,y,z,u,v) = (xy^2 + xyu + yv - 3, x^3yz + 2xv - u^2v^2 - 2)$. Notice that $F$ satisfies the following conditions:</p> <ul> <li>$F(1, 1, 1, 1, 1) = (0,0)$</li> <li>$F$ is continuously differentiable</li> <li><p>the differential of $F$ with respect to $u$ and $v$ at $(1,1)$ is an <em>invertible</em> linear map $: \Bbb R^2 \to \Bbb R^2$ because it is given by</p> <p>$$\begin{pmatrix} \dfrac {\partial F_1} {\partial u} &amp; \dfrac {\partial F_1} {\partial v} \\ \dfrac {\partial F_2} {\partial u} &amp; \dfrac {\partial F_2} {\partial v} \end{pmatrix} (1,1) = \begin{pmatrix} xy &amp; y \\ -2uv^2 &amp; 2x - 2u^2v \end{pmatrix} (1,1) = \begin{pmatrix} 1 &amp; 1 \\ -2 &amp; 0 \end{pmatrix}$$</p> <p>which has determinant $2 \ne 0$.</p></li> </ul> <p>From these, the IFT guarantees the existence of a neighbourhood $U$ of $(1,1,1) \in \Bbb R^3$ and of a neighbourhood $V$ of $(1,1) \in \Bbb R^2$, and of a differentiable map $f : U \to V$ such that $f(1,1,1) = (1,1)$ with $F(x,y,z, f(x,y,z)) = 0$, i.e. this will give you $(u,v)$ as a function of $(x,y,z)$, i.e. $(u,v) = f(x,y,z)$. Notice that we do not know $f$ explicitly, we just know that it exists.</p> <p>Plugging now $(u(x,y,z), v(x,y,z))$ back in $F$ (but now we know that they are functions of $(x,y,z)$ - that's the difference!) we get that</p> <p>$$\left\{ \begin{eqnarray} xy^2 + xy u(x,y,z) + yv(x,y,z) -3 = 0\\ x^3yz + 2xv(x,y,z) -u(x,y,z)^2 v(x,y,z)^2 - 2 = 0 \end{eqnarray} \right.$$</p> <p>and now this is an equality on the $U$ given above by the IFT. 
If we differentiate them both, we get</p> <p>$$\left\{ \begin{eqnarray} \Bbb d F_1 = 0 \\ \Bbb d F_2 = 0 \end{eqnarray} \right.$$</p> <p>which means</p> <p>$$\frac {\partial F_1} {\partial x} = \frac {\partial F_1} {\partial y} = \frac {\partial F_1} {\partial z} = \frac {\partial F_2} {\partial x} = \frac {\partial F_2} {\partial y} = \frac {\partial F_2} {\partial z} = 0$$</p> <p>which means, explicitly,</p> <p>$$\left\{ \begin{eqnarray} y^2 + yu(x,y,z) + xy \frac {\partial u} {\partial x} (x,y,z) + y \frac {\partial v} {\partial x} (x,y,z) = 0 \\ 2xy + xu(x,y,z) + xy \frac {\partial u} {\partial y} (x,y,z) + v(x,y,z) + y \frac {\partial v} {\partial y} (x,y,z) = 0 \\ xy \frac {\partial u} {\partial z} (x,y,z) + y \frac {\partial v} {\partial z} (x,y,z) = 0 \\ 3x^2yz + 2v(x,y,z) + 2x \frac {\partial v} {\partial x} (x,y,z) - 2u(x,y,z) \frac {\partial u} {\partial x} (x,y,z) v(x,y,z)^2 - 2 u(x,y,z)^2 v(x,y,z) \frac {\partial v} {\partial x} (x,y,z) = 0 \\ x^3z + 2x \frac {\partial v} {\partial y} (x,y,z) - 2u(x,y,z) \frac {\partial u} {\partial y} (x,y,z) v(x,y,z)^2 - 2 u(x,y,z)^2 v(x,y,z) \frac {\partial v} {\partial y} (x,y,z) = 0 \\ x^3y + 2x \frac {\partial v} {\partial z} (x,y,z) - 2u(x,y,z) \frac {\partial u} {\partial z} (x,y,z) v(x,y,z)^2 - 2 u(x,y,z)^2 v(x,y,z) \frac {\partial v} {\partial z} (x,y,z) = 0 \ . \end{eqnarray} \right.$$</p> <p>Applying this in $(1,1,1)$ and remembering that $u(1,1,1) = v(1,1,1) = 1$, we get</p> <p>$$\left\{ \begin{eqnarray} u_x(1,1,1) + v_x(1,1,1) = -2 \\ u_y(1,1,1) + v_y(1,1,1) = -4 \\ u_z(1,1,1) + v_z(1,1,1) = 0 \\ -2u_x(1,1,1) = -5 \\ -2u_y(1,1,1) = -1 \\ -2u_z(1,1,1) = -1 \ .\end{eqnarray} \right.$$</p> <p>The 4th, 5th and 6th equations give you $u_x,u_y,u_z$, whence using the 1st, 2nd and 3rd equations you will also get $v_x,v_y,v_z$.</p>
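The whole computation can be cross-checked with sympy (my own sketch, not part of the answer): the implicit function theorem gives the derivative matrix as $-J_{uv}^{-1} J_{xyz}$ at the point, whose rows are $(u_x, u_y, u_z)$ and $(v_x, v_y, v_z)$.

```python
import sympy as sp

x, y, z, u, v = sp.symbols('x y z u v')
p = {x: 1, y: 1, z: 1, u: 1, v: 1}

F = sp.Matrix([x*y**2 + x*y*u + y*v - 3,
               x**3*y*z + 2*x*v - u**2*v**2 - 2])
Juv = F.jacobian([u, v]).subs(p)      # the invertible 2x2 matrix from above
Jxyz = F.jacobian([x, y, z]).subs(p)

# d(u, v)/d(x, y, z) = -Juv^{-1} * Jxyz at (1,1,1,1,1)
D = -Juv.inv() * Jxyz
print(D)  # row 1: u_x, u_y, u_z; row 2: v_x, v_y, v_z
```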
162,360
<p>First, the definition of a connected set:</p> <blockquote> <p><strong>Definition:</strong> A topological space is connected if, and only if, it cannot be divided into two nonempty, open and disjoint subsets, or, similarly, if the empty set and the whole set are the only subsets that are open and closed at the same time.</p> </blockquote> <p>I don't understand some points in the following proof, that every interval $I \subset \mathbb{R}$ is connected.</p> <p>Suppose $I = A \cup B$ and $A \cap B = \emptyset$, $A$ and $B$ are both non-empty and open in the subspace-topology of $I \subset \mathbb{R}$. Choose $a\in A$ and $b\in B$ and suppose $a &lt; b$. Let $s := \inf\{ x \in B ~|~ a &lt; x \}$. Then in every neighborhood of $s$ there are points of $B$ (because of the definition of the infimum), but also of $A$, <strong>for if $s \ne a$, then $a &lt; s$ and the open interval $(a,s)$ lies entirely in $A$</strong>. And so $s$ cannot be an inner point of $A$ or of $B$, but this is a contradiction to the property that both $A$ and $B$ be open and $s \in A \cup B$.</p> <p>With the bold part, I have a problem: why does it follow that $(a,s)$ lies entirely in $A$, and in that case must it be that $A = (a,s)$, or not?</p>
Jayasiri
553,592
<p>Suppose $I$ is an interval. Suppose that $I$ is disconnected. Then there exist two nonempty disjoint open sets $A$ and $B$ in $I$ such that $I=A \cup B.$ Let $a\in A$ and $b\in B$. Without loss of generality, we assume that $a &lt; b$.</p> <p>Let $\alpha = \sup \{ x \mid a \le x \text{ and } x \in A \}.$ Let $\beta =\inf \{ x \mid x \le b \text{ and } x \in B \}.$ By the least upper bound and greatest lower bound properties, both $\alpha $ and $\beta$ exist. If $ \alpha \ne \beta $, then there exists $x \in I$ such that $\alpha &lt; x &lt; \beta .$ But this would mean that $I$ is not the union of $A$ and $B$.</p>
1,825,076
<p>Prove that $\sum_{n=1}^\infty \frac{x^{2n}}{(1 + x + \dots + x^{2n})^2}$ converges uniformly when $x \geq 0$.</p>
Doug M
317,162
<p>$\sum \dfrac {x^{2n}}{(1+x+...x^{2n})^2}$</p> <p>Suppose $x&lt; 1$</p> <p>$\sum \dfrac {(1-x)^2x^{2n}}{(1-x^{2n+1})^2}\\ \sum \big(\dfrac {(1-x)x^{n}}{1-x^{2n+1}}\big)^2\\ \big(\dfrac {(1-x)x^{n}}{1-x^{2n+1}}\big)^2&lt;x^{2n}$</p> <p>$\sum x^{2n}$ converges when $x&lt;1$ and the series converges by the comparison test.</p> <p>Suppose $x&gt;1$</p> <p>$\sum \big(\dfrac {(x-1)x^n}{(x^{2n+1}-1)}\big)^2\\ \sum \big(\dfrac {1-x^{-1}}{x^n-x^{-n-1}}\big)^2\\$</p> <p>$\big(\dfrac {1-x^{-1}}{x^n-x^{-n-1}}\big)^2&lt;\frac{1}{x^{2n}}$</p> <p>$\sum \frac{1}{x^{2n}}$ converges when $x&gt;1$ and the series converges by the comparison test.</p> <p>Suppose $x=1,$ </p> <p>$\sum \dfrac {1}{(2n+1)^2}$ converges.</p>
2,714,738
<p>If $R$ is a noetherian ring then also $R[x]$ is a noetherian ring, i.e. $R[x]$ is noetherian as $R[x]$-module. Is $R[x]$ also noetherian as $R$-module?</p>
quasi
400,434
<p>Let $P_n$ be the $R$-sub-module of $R[x]$ whose elements are all polynomials of degree at most $n$. <p> Then you have the infinite chain $$P_0 \subset P_1 \subset P_2 \subset \cdots$$ with all inclusions proper.</p>
1,205,861
<p>I am trying to find a cubic polynomial over GF(5) that evaluates to zero at the points (1,2,0),(2,0,0),(1,4,1),(-2,-1,-1),(-3,2,1),(1,-1,0),(2,-3,1),(-3,-2,2),(1,-2,1).</p> <p>Clearly the zero polynomial would satisfy the condition, thus I'm looking for a proper cubic polynomial. I note that the generic form of such a polynomial is $f= ax^3+ by^3+ cz^3+ dx^2y + ex^2z+ fy^2x+ iy^2z+ gz^2x+ kz^2y+ lxyz$, that is I need to solve for the coefficients $a,b,\ldots,l$.</p> <p>I would prefer to do this calculation in GAP as I am using the software already for other calculations.</p>
Brent Kerby
218,224
<p>Hint: For each point, the condition that the cubic evaluates to zero at this point is equivalent to a linear equation in $a,\dots,l$. In all, you obtain a homogeneous system of 9 linear equations in the 10 variables $a,\dots,l$, and you are looking for a non-trivial solution to this system. You could do this by finding a basis for the null space of the matrix corresponding to the system.</p>
1,205,861
<p>I am trying to find a cubic polynomial over GF(5) that evaluates to zero at the points (1,2,0),(2,0,0),(1,4,1),(-2,-1,-1),(-3,2,1),(1,-1,0),(2,-3,1),(-3,-2,2),(1,-2,1).</p> <p>Clearly the zero polynomial would satisfy the condition, thus I'm looking for a proper cubic polynomial. I note that the generic form of such a polynomial is $f= ax^3+ by^3+ cz^3+ dx^2y + ex^2z+ fy^2x+ iy^2z+ gz^2x+ kz^2y+ lxyz$, that is I need to solve for the coefficients $a,b,\ldots,l$.</p> <p>I would prefer to do this calculation in GAP as I am using the software already for other calculations.</p>
ahulpke
159,739
<p>To describe the solution of the linear system in GAP:</p> <p>The first step is to get linear equations by evaluating the polynomial at the points. To allow distinguishing the coefficients we introduce the variables as extra polynomial variables:</p> <pre><code>field:=GF(5);
x:=X(field,"x");
y:=X(field,"y");
z:=X(field,"z");
a:=X(field,"a");
b:=X(field,"b");
c:=X(field,"c");
d:=X(field,"d");
e:=X(field,"e");
f:=X(field,"f");
g:=X(field,"g");
i:=X(field,"i");
k:=X(field,"k");
l:=X(field,"l");
pol:=a*x^3+b*y^3+c*z^3+d*x^2*y+e*x^2*z+f*y^2*x+i*y^2*z+g*z^2*x
  +k*z^2*y+l*x*y*z;
</code></pre> <p>Next, define the points, move them over GF(5), and evaluate the polynomial:</p> <pre><code>pts:=[[1,2,0],[2,0,0],[1,4,1],[-2,-1,-1],[-3,2,1],[1,-1,0],
  [2,-3,1],[-3,-2,2],[1,-2,1]];
pts:=List(pts,x-&gt;x*One(field));
vals:=List(pts,p-&gt;Value(pol,[x,y,z],p));
</code></pre> <p>The results you get are linear equations, but written as polynomials. While it probably is quickest to retype them, the following GAP code constructs the coefficient matrix for these equations. (The list <code>nums</code> gives the index numbers of the variables):</p> <pre><code>nums:=List([a,b,c,d,e,f,g,i,k,l],IndeterminateNumberOfUnivariateLaurentPolynomial);
mat:=[];
for v in vals do
  ex:=ExtRepPolynomialRatFun(v);
  vec:=List([1..Length(nums)],x-&gt;Zero(field));
  for s in [1,3..Length(ex)-1] do
    vec[Position(nums,ex[s][1])]:=ex[s+1];
  od;
  Add(mat,vec);
od;
</code></pre> <p>Since GAP works with row vectors, we need to have the equation coefficients actually in the columns, thus transpose the matrix. 
We now calculate a basis for the nullspace, and construct the corresponding polynomials:</p> <pre><code>mat:=TransposedMat(mat); sol:=[]; for v in TriangulizedNullspaceMat(mat) do Add(sol,Value(pol,[a,b,c,d,e,f,g,i,k,l],v)); od; </code></pre> <p>We get the following three basis elements for the solutions:</p> <p>$Z(5)^{3}x^{2}y-xy^{2}+xyz+y^{3}-y^{2}z+Z(5)^{3}yz^{2}$,</p> <p>$Z(5)xyz+Z(5)^{3}y^{2}z+Z(5)yz^{2}+z^{3}$,</p> <p>$x^{2}z+xyz+Z(5)xz^{2}-y^{2}z+yz^{2}$</p>
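<p>As an independent cross-check (a sketch; the Python translation below is my own, not GAP output), note that in GF(5) we have $Z(5)=2$ and hence $Z(5)^3=3$, so the three basis polynomials can be evaluated directly modulo 5 at the nine given points:</p>

```python
# Independent check (a sketch) that the three GAP basis polynomials vanish
# at all nine points over GF(5).  Here Z(5) = 2 and Z(5)^3 = 3 (mod 5).
def p1(x, y, z): return 3*x*x*y - x*y*y + x*y*z + y**3 - y*y*z + 3*y*z*z
def p2(x, y, z): return 2*x*y*z + 3*y*y*z + 2*y*z*z + z**3
def p3(x, y, z): return x*x*z + x*y*z + 2*x*z*z - y*y*z + y*z*z

pts = [(1, 2, 0), (2, 0, 0), (1, 4, 1), (-2, -1, -1), (-3, 2, 1),
       (1, -1, 0), (2, -3, 1), (-3, -2, 2), (1, -2, 1)]
# Python's % operator returns a result in {0,...,4} even for negative
# inputs, so "p(*pt) % 5 == 0" tests vanishing in GF(5).
all_zero = all(p(*pt) % 5 == 0 for p in (p1, p2, p3) for pt in pts)
```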
4,212,850
<p>I have an ellipsoid centered at <span class="math-container">$0$</span> (the contour of a Gaussian distribution centered at <span class="math-container">$0$</span> with covariance matrix <span class="math-container">$\Sigma=\Lambda^{-1}$</span>) <span class="math-container">$$ x^\top \Lambda x = \gamma $$</span> and I know that the gradient at a point is given by <span class="math-container">$$ g(x) = -\Lambda x $$</span> Is there an expression for the tangent at a point? All I know is that <span class="math-container">$t(x)^\top g(x) = 0$</span>.</p> <p>In practice the tangent plane will be a hyperplane so there will be many vectors to choose from. However, I am looking for a tangent basis</p>
mathcounterexamples.net
187,663
<p>Clearly, not any two-variable map can be written as a product of two one-variable maps.</p> <p>For example, the map <span class="math-container">$f(x,y) = x^2 + y^2$</span> defined on <span class="math-container">$\mathbb R^2$</span> vanishes only at <span class="math-container">$(0,0)$</span>. If <span class="math-container">$f(x,y) = g(x)h(y)$</span>, then either <span class="math-container">$g(0)=0$</span> or <span class="math-container">$h(0)=0$</span>. In the first case you'll have <span class="math-container">$f(0,y)=0$</span> for any <span class="math-container">$y \in \mathbb R$</span>, which isn't the case. A similar argument applies if <span class="math-container">$h(0)=0$</span>. This proves that <span class="math-container">$f$</span> can't be written as the product <span class="math-container">$g(x)h(y)$</span>.</p> <p>More generally, a map <span class="math-container">$(x,y) \mapsto g(x)h(y)$</span> vanishes on a union of horizontal and vertical lines. This is not the case for general two-variable maps.</p>
2,653,401
<p>I have a question about the following partial fraction:</p> <p>$$\frac{x^4+2x^3+6x^2+20x+6}{x^3+2x^2+x}$$ After long division you get: $$x+\frac{5x^2+20x+6}{x^3+2x^2+x}$$ So the factored form of the denominator is $$x(x+1)^2$$ So $$\frac{5x^2+20x+6}{x(x+1)^2}=\frac{A}{x}+\frac{B}{x+1}+\frac{C}{(x+1)^2}$$</p> <p>Why is the denominator under $C$ not simply $x+1$? It is $x$ times $(x+1)^2$ and not $(x+1)^3$</p>
SchrodingersCat
278,967
<p>Notice that the RHS after simplification must have an identical denominator as with the LHS.</p> <p>The LHS denominator has a cubic term, hence the RHS must also be cubic.</p> <p>So the choice for the term under $C$ has to be $(x+1)^2$, and not the ones you have suggested.</p>
1,305,661
<p>How can I solve this question?</p> <ul> <li>Find all solutions exactly for $2\sin^2 x - \sin x = 0$</li> </ul> <p>My answer was $( 2k\pi,\pi+2k\pi, 11\pi/6, 7\pi/6)$ but my teacher's answer was $( 2k\pi,\pi+2k\pi, \pi/6 +2k\pi, 5\pi/6 +2k\pi )$</p>
Harish Chandra Rajpoot
210,295
<p>We have $$2\sin^2 x-\sin x=0 $$ $$\implies \sin x(2\sin x-1)=0$$ We have $$\sin x=0 \implies \color{blue}{x=n\pi}$$ where $n$ is any integer, &amp; $$2\sin x-1=0 $$$$\implies \sin x=\frac{1}{2}=\sin\frac{\pi}{6} \implies \color{blue}{x=2n\pi+\frac{\pi}{6}}\quad \text{&amp;} \quad \color{blue}{x=2n\pi+\frac{5\pi}{6}}$$ Now, writing the complete solution for $x$, we have $$\color{blue}{x\in \{n\pi\}\cup \{2n\pi+\frac{\pi}{6} \}\cup\{2n\pi+\frac{5\pi}{6}\}}$$ where $n$ is any integer. </p>
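<p>A quick numerical spot check (a sketch, not a proof) confirms that every listed family satisfies the equation:</p>

```python
# Numerical spot check (a sketch) that each solution family satisfies
# 2 sin^2(x) - sin(x) = 0.
import math

def lhs(x):
    return 2*math.sin(x)**2 - math.sin(x)

solutions = []
for n in range(-5, 6):
    solutions += [n*math.pi,
                  2*n*math.pi + math.pi/6,
                  2*n*math.pi + 5*math.pi/6]

max_residual = max(abs(lhs(x)) for x in solutions)
```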
1,048,644
<p>$$\sum_{i=1}^{\infty} \frac {(-1)^{i+1}\cdot 1\cdot 4 \cdot 7 \cdots (3i-2)}{i!2^i}$$</p> <p>By the alternating series test and the ratio test, I found that this series does not <em>absolutely converge</em>. However, I'm not at all sure how to figure out whether it converges conditionally or diverges.</p>
Adhvaitha
191,728
<p>Recall the identity: $$\sin^2(A) - \sin^2(B) = \sin(A+B)\sin(A-B)$$ Let $I_n = \displaystyle \int_0^{\pi/2} \dfrac{\sin^2(nx)}{\sin^2(x)}dx$, we obtain \begin{align} I_{n+1} - I_n &amp; = \int_0^{\pi/2} \dfrac{\sin^2((n+1)x)}{\sin^2(x)}dx - \int_0^{\pi/2} \dfrac{\sin^2(nx)}{\sin^2(x)}dx\\ &amp; = \int_0^{\pi/2}\dfrac{\sin^2((n+1)x)-\sin^2(nx)}{\sin^2(x)} dx\\ &amp; = \int_0^{\pi/2}\dfrac{\sin((2n+1)x)\sin(x)}{\sin^2(x)} dx\\ &amp; = \int_0^{\pi/2}\dfrac{\sin((2n+1)x)}{\sin(x)}dx \end{align} Let $J_n = \displaystyle \int_0^{\pi/2} \dfrac{\sin((2n+1)x)}{\sin(x)}dx$, we obtain, for $n \ge 1$, \begin{align} J_n - J_{n-1} &amp; = \int_0^{\pi/2} \dfrac{\sin((2n+1)x)}{\sin(x)}dx - \int_0^{\pi/2} \dfrac{\sin((2n-1)x)}{\sin(x)}dx\\ &amp; = \int_0^{\pi/2}\dfrac{\sin((2n+1)x)-\sin((2n-1)x)}{\sin(x)} dx\\ &amp; = \int_0^{\pi/2}\dfrac{2\sin(x) \cos(2nx)}{\sin(x)} dx\\ &amp; = 2\int_0^{\pi/2}\cos(2nx)dx = \left.2 \dfrac{\sin(2nx)}{2n} \right \vert_0^{\pi/2} = 0 \end{align} Hence, $J_n = J_0 = \dfrac{\pi}2$. Hence, we have $I_{n+1} - I_n = \dfrac{\pi}2$. Since $I_0 = 0$, we obtain $$I_n = \dfrac{n \pi}2$$</p>
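<p>One can corroborate the closed form numerically (a sketch, not part of the proof): a midpoint rule avoids the removable singularity at $x=0$ and reproduces $I_n=\frac{n\pi}{2}$ to high accuracy.</p>

```python
# Midpoint-rule check (a sketch) that the integral of sin^2(nx)/sin^2(x)
# over [0, pi/2] equals n*pi/2.
import math

def I(n, steps=100000):
    h = (math.pi/2) / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h          # midpoints avoid the removable x = 0
        s = math.sin(n*x) / math.sin(x)
        total += s*s
    return total * h

errors = [abs(I(n) - n*math.pi/2) for n in range(1, 6)]
```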
1,048,644
<p>$$\sum_{i=1}^{\infty} \frac {(-1)^{i+1}\cdot 1\cdot 4 \cdot 7 \cdots (3i-2)}{i!2^i}$$</p> <p>By the alternating series test and the ratio test, I found that this series does not <em>absolutely converge</em>. However, I'm not at all sure how to figure out whether it converges conditionally or diverges.</p>
xpaul
66,420
<p>Another way to evaluate this integral is to use residue. Note $$ \int_0^{\frac{\pi}{2}}\frac{\sin^2nx}{\sin^2x}dx=\frac{1}{4}\int_0^{2\pi}\frac{\sin^2nx}{\sin^2x}dx. $$ Letting $z=e^{i\theta}$, we have \begin{eqnarray} \int_0^{\frac{\pi}{2}}\frac{\sin^2nx}{\sin^2x}dx&amp;=&amp;\frac{1}{4}\int_{|z|=1}\frac{(z^n-z^{-n})^2}{(z-z^{-1})^2}\frac{1}{iz}dz\\ &amp;=&amp;\frac{1}{4i}\int_{|z|=1}\frac{(1-z^{2n})^2}{(1-z^2)^2}\frac{1}{z^{2n-1}}dz\\ \end{eqnarray} But \begin{eqnarray} \frac{(1-z^{2n})^2}{(1-z^2)^2}\frac{1}{z^{2n-1}}&amp;=&amp;\frac{1}{z^{2n-1}}\sum_{k=1}^\infty kz^{2(k-1)}(1-2z^{2n}+z^{4n})\\ &amp;=&amp;\sum_{k=1}^\infty kz^{2(k-n)-1}(1-2z^{2n}+z^{4n}) \end{eqnarray} whose coefficient of $\frac{1}{z}$ is $n$. Thus \begin{eqnarray} \int_0^{\frac{\pi}{2}}\frac{\sin^2nx}{\sin^2x}dx&amp;=&amp;\frac{1}{4}\int_{|z|=1}\frac{(z^n-z^{-n})^2}{(z-z^{-1})^2}\frac{1}{iz}dz\\ &amp;=&amp;\frac{1}{4i}\int_{|z|=1}\frac{(1-z^{2n})^2}{(1-z^2)^2}\frac{1}{z^{2n-1}}dz\\ &amp;=&amp;\frac{1}{4i}\cdot 2\pi i\cdot n\\ &amp;=&amp;\frac{n\pi}{2}. \end{eqnarray}</p>
4,033,453
<p>How many roots does the equation <span class="math-container">$$\left\lfloor\frac x3\right\rfloor=\frac x2$$</span> have?</p> <ol> <li><span class="math-container">$1$</span></li> <li><span class="math-container">$2$</span></li> <li><span class="math-container">$3$</span></li> <li>infinitely many</li> </ol> <p>I checked that <span class="math-container">$x=0$</span> and <span class="math-container">$x=-2$</span> are the answers, so I think the answer is <span class="math-container">$(2)$</span>. but I don't know how to solve the problem in general.</p>
Travis Willse
155,629
<blockquote> <p><span class="math-container">$$\left\lfloor \frac{x}{3} \right\rfloor = \frac{x}{2}$$</span></p> </blockquote> <p><strong>Hint</strong> Since <span class="math-container">$\left\lfloor \frac{x}{3} \right\rfloor$</span> is an integer, any solution <span class="math-container">$x$</span> must be a multiple of <span class="math-container">$2$</span>. Now, for <span class="math-container">$x &gt; 0$</span> we have <span class="math-container">$$\left\lfloor \frac{x}{3} \right\rfloor \leq \frac{x}{3} &lt; \frac{x}{2} ,$$</span> and for <span class="math-container">$x \leq -6$</span> we have <span class="math-container">$$\left\lfloor \frac{x}{3} \right\rfloor &gt; \frac{x}{3} - 1 \geq \frac{x}{2} ,$$</span> leaving only a few possibilities to check manually.</p>
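<p>The finitely many remaining candidates can also be checked mechanically (a quick sketch): any solution must make $\frac x2$ an integer, so $x$ is an even integer, and scanning a generous range of integers settles the matter.</p>

```python
# Brute-force check (a sketch) of floor(x/3) = x/2.  Any solution makes
# x/2 an integer, so x must be an even integer; Python's x // 3 is exactly
# floor(x/3) for integers, and 2*(x // 3) == x also forces x to be even.
roots = [x for x in range(-100, 101) if 2 * (x // 3) == x]
```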
3,057,200
<blockquote> <p>Does there exist an analytic function <span class="math-container">$f:D\rightarrow\mathbb C$</span>, where <span class="math-container">$D$</span> is a domain containing the unit disk, such that <span class="math-container">$f(z) = e^{i \text{Im} z}$</span> on the unit circle <span class="math-container">$|z|=1$</span>?</p> </blockquote> <p>I'm supposed to rely on the fact that if <span class="math-container">$f: B_R=\left \{ |z|\leq R \right \}\rightarrow \mathbb C$</span> is analytic and even on <span class="math-container">$(-R,R)$</span> (so <span class="math-container">$f(x)=f(-x)$</span> when <span class="math-container">$x$</span> is real), then it is also even on <span class="math-container">$B_R$</span>, outside the real number line. </p> <p>It seems that we need to show that <span class="math-container">$f$</span> isn't even on <span class="math-container">$B_1$</span> but is even on <span class="math-container">$(-1,1)$</span>, which I cannot show. We only know how <span class="math-container">$f$</span> behaves on the unit circle!</p>
achille hui
59,379
<p>Nope. <span class="math-container">$$\oint_{|z|=1} e^{i\Im z} dz = i\int_0^{2\pi} e^{i(\sin\theta + \theta)} d\theta = -2\pi iJ_1(1) \ne 0$$</span></p>
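<p>A numerical check (a sketch; $J_1(1)$ is computed from its power series rather than from any library) corroborates that the contour integral is nonzero:</p>

```python
# Numerical check (a sketch) that the contour integral of e^{i Im z} over
# |z| = 1 equals -2*pi*i*J1(1), which is nonzero.
import cmath, math

# Contour integral via the trapezoid rule, which is spectrally accurate
# for smooth periodic integrands: i * integral of e^{i(sin t + t)} dt
# over [0, 2*pi].
N = 4000
integral = 1j * (2*math.pi/N) * sum(
    cmath.exp(1j*(math.sin(2*math.pi*k/N) + 2*math.pi*k/N)) for k in range(N))

# J1(1) from the series J1(x) = sum_k (-1)^k / (k!(k+1)!) * (x/2)^(2k+1).
J1 = sum((-1)**k / (math.factorial(k) * math.factorial(k+1)) * 0.5**(2*k+1)
         for k in range(20))

expected = -2j * math.pi * J1
```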
2,696,281
<blockquote> <p>$\triangle ABC$ is an equilateral triangle of side $1$. $ \triangle BDC$ is an isosceles triangle with $D$ opposite to $A$ and $DB = DC$ and $\angle BDC = 120^\circ$. If points $M$ and $N$ are on $AB$ and $AC$, respectively, such that $\angle MDN = 60^\circ$, find the perimeter of $\triangle AMN$.</p> </blockquote> <p>My answer comes out to be $1$ by taking the points $M$ and $N$ in such a way that the quadrilateral $AMDN$ becomes a rhombus. This is all I could do and please help in out extending the result of the perimeter (if correct) to all points $M$ and $N$ in general.</p> <p><a href="https://www.facebook.com/Mathematics-Gems-by-Ajay-Lakhina-603105973366261/" rel="nofollow noreferrer">Question is from the Facebook page created by my teacher</a></p>
Vasili
469,083
<p><a href="https://i.stack.imgur.com/KlEqR.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KlEqR.jpg" alt="Triangle ABC"></a></p> <p>I used the same idea as you: let us put $M$ and $N$ in such a way that $AMDN$ is a rhombus. Let $E$ be the midpoint of $BC$; then $DC=\frac{CE}{\cos 30°}=\frac{1}{\sqrt{3}}$. Notice that $m\angle{NCD}=90°$, so $DN=\frac{DC}{\cos 30°}=\frac{2}{3}$. But $DN=MN=AM=AN$, so the perimeter of $\triangle{AMN}=3\cdot \frac{2}{3}=2$. </p>
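<p>Regarding the general case asked about in the question: a coordinate experiment (a sketch, not a proof; the coordinates below are my own choice, with $B=(-\frac12,0)$, $C=(\frac12,0)$, $A=(0,\frac{\sqrt3}2)$ and $D=(0,-\frac{\sqrt3}6)$, so that $DB=DC$ and $\angle BDC=120°$) suggests the perimeter equals $2$ for <em>every</em> admissible position of $M$ with $\angle MDN=60°$, not just the rhombus case:</p>

```python
# Coordinate experiment (a sketch): pick M on AB, find N on AC with
# angle MDN = 60 degrees by bisection, and compute the perimeter of AMN.
import math

A = (0.0, math.sqrt(3)/2); B = (-0.5, 0.0); C = (0.5, 0.0)
D = (0.0, -math.sqrt(3)/6)   # DB = DC and angle BDC = 120 deg here

def lerp(P, Q, t):
    return (P[0] + t*(Q[0]-P[0]), P[1] + t*(Q[1]-P[1]))

def dist(P, Q):
    return math.hypot(P[0]-Q[0], P[1]-Q[1])

def angle_at_D(M, N):
    u = (M[0]-D[0], M[1]-D[1]); v = (N[0]-D[0], N[1]-D[1])
    dot = u[0]*v[0] + u[1]*v[1]
    return math.acos(dot / (math.hypot(*u) * math.hypot(*v)))

perimeters = []
for s in (0.2, 0.5, 2/3, 0.9):           # M = A + s*(B - A), i.e. AM = s
    M = lerp(A, B, s)
    lo, hi = 1e-9, 1.0                    # bisect for N = A + t*(C - A);
    for _ in range(80):                   # the angle grows as N moves A -> C
        mid = (lo + hi) / 2
        if angle_at_D(M, lerp(A, C, mid)) < math.pi/3:
            lo = mid
        else:
            hi = mid
    N = lerp(A, C, (lo + hi) / 2)
    perimeters.append(dist(A, M) + dist(A, N) + dist(M, N))
```

<p>(For $s=\frac23$ this reproduces the rhombus configuration above.)</p>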
2,917,024
<p>Let $H$ be a Hilbert space and let ${e_n} ,\ n=1,2,3,\ldots$ be an orthonormal basis of $H$. Suppose $T$ is a bounded linear operator on $H$. Then which of the following cannot be true? $$(a)\quad T(e_n)=e_1, n=1,2,3,\ldots$$ $$(b)\quad T(e_n)=e_{n+1}, n=1,2,3,\ldots$$ $$(c)\quad T(e_n)=e_{n-1} , n=2,3,4,\ldots , \,\,T(e_1)=0$$</p> <p>I think $(a)$ is not true because $e_1$ cannot span the range space. I really don't know how to approach this problem. Could you please give me some hints? Thank you very much.</p>
mechanodroid
144,766
<p>$(a)$ cannot be true. Assume that such $T$ exists. Then recall that $\sum_{n=1}^\infty \frac1n e_n \in H$ so </p> <p>$$T\left(\sum_{n=1}^\infty \frac1n e_n\right) = \sum_{n=1}^\infty \frac1n Te_n = \left(\sum_{n=1}^\infty \frac1n\right)e_1$$</p> <p>but the latter sum doesn't converge. This is a contradiction.</p> <p>$(b)$ and $(c)$ can be true. Consider $$Sx = \sum_{n=2}^\infty \langle x, e_{n-1}\rangle e_{n}$$</p> <p>and</p> <p>$$Tx = \sum_{n=1}^\infty \langle x, e_{n+1}\rangle e_{n}$$ Both are contractive and hence bounded.</p>
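<p>A finite-dimensional truncation illustrates this concretely (a sketch; with $x_N=\sum_{n\le N}\frac1n e_n$ we have $\|x_N\|\le\frac{\pi}{\sqrt6}$, while $\|Tx_N\|$ would be the harmonic number $H_N$, so the ratio grows without bound):</p>

```python
# Finite truncation (a sketch) showing why T e_n = e_1 cannot be bounded:
# with x_N = sum_{n<=N} e_n / n we would get ||T x_N|| = H_N -> infinity
# while ||x_N|| stays bounded, so ||T x_N|| / ||x_N|| is unbounded.
import math

def ratio(N):
    norm_Tx = sum(1.0/n for n in range(1, N+1))                 # H_N
    norm_x = math.sqrt(sum(1.0/n**2 for n in range(1, N+1)))
    return norm_Tx / norm_x

ratios = [ratio(10**k) for k in range(1, 5)]
```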
39,654
<p>$\displaystyle \int \left( \frac{1}{x^2+3} \right)\; dx$</p> <p>I've let $u=x^2+3$ but can't seem to get the right answer.</p> <p>Really not sure what to do.</p>
Patrick Da Silva
10,704
<p>I suggest letting $u = \frac{x}{\sqrt{3}}$. Then $du = \frac 1{\sqrt{3}} dx$ and $$ \int \left( \frac 1{x^2+3} \right) dx = \int \left( \frac 1{(\sqrt 3 u)^2 + 3} \, \sqrt 3 \right) du = \frac 1{\sqrt{3}} \int \left( \frac 1{u^2 + 1} \right) du = \frac 1{\sqrt{3}} \arctan(u) + C = \frac 1{\sqrt{3}} \arctan\left(\frac{x}{\sqrt 3}\right) + C. $$ Hope that helps.</p>
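<p>A quick finite-difference check (a sketch) that this antiderivative is right after substituting back $u=\frac{x}{\sqrt3}$:</p>

```python
# Finite-difference check (a sketch): d/dx of (1/sqrt(3))*arctan(x/sqrt(3))
# should equal the integrand 1/(x^2 + 3).
import math

def antiderivative(x):
    return math.atan(x / math.sqrt(3)) / math.sqrt(3)

h = 1e-6
xs = [-5 + 0.5*k for k in range(21)]
max_err = max(abs((antiderivative(x + h) - antiderivative(x - h)) / (2*h)
                  - 1/(x*x + 3)) for x in xs)
```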
1,762,078
<p>show that $$e^x-\ln{(x+2)}&gt;\dfrac{1}{6}\tag{(1)}$$</p> <p>I know $$e^x&gt;x+1,\ln{(x+2)}&lt;x+1$$ so I have only prove $$e^x-\ln{(x+2)}&gt;0$$ But How to prove $(1)$?</p>
Chill2Macht
327,486
<p>What I wrote previously is wrong.</p> <p>As Dr. MV points out in his comment, </p> <blockquote> <p>$e^x-\log(x+2)$ is not monotonically increasing on $(-2,\infty)$. There is an absolute minimum when $e^x=\frac{1}{x+2}$, which occurs at $x\approx -0.442854227254517$ where $f(x)\approx 0.199346303057501$.</p> </blockquote> <p>Note that this value is strictly greater than $\frac{1}{6}$, which proves the inequality (since the minimum value is greater than $\frac{1}{6}$ for all $x$ in $(-2,\infty)$).</p> <p><strong>Previous, wrong answer</strong></p> <p><strike> Claim: if the inequality holds at x=0, it will also hold for all x greater than 0.</p> <p>Sketch of proof: both $e^x$ and $\ln(x+2)$ are monotone increasing functions; the derivative of $e^x$ is always greater, so as $x$ increases, $e^x - \ln(x+2)$ can only increase.</p> <p>At $x=0$, we have $1 - \ln(2) &gt; \frac{1}{6}$ by direct evaluation.</p> <p>For $x \le -1$ we have no problem because then $\ln(x+2)$ is either negative or undefined whereas $e^x$ is always positive.</p> <p>For $-1&lt;x&lt;0$, we have a similar argument as for $x=0$; at $x=-1$, we have $e^{-1} - 0 &gt; \frac{1}{6}$, so for all $x$ greater the inequality will continue to hold for the same reason as given above.</p> <p>In fact the case $x \ge0$ probably can be folded into this one.</p> <p>As for the motivation for the inequality or for a more elegant proof (probably using Taylor series), I unfortunately don't know the answer. </strike></p>
1,762,078
<p>show that $$e^x-\ln{(x+2)}&gt;\dfrac{1}{6}\tag{(1)}$$</p> <p>I know $$e^x&gt;x+1,\ln{(x+2)}&lt;x+1$$ so I have only prove $$e^x-\ln{(x+2)}&gt;0$$ But How to prove $(1)$?</p>
math110
58,742
<p>I have solved this problem.</p> <p>Let $$f(x)=e^x-\ln{(x+2)}-\dfrac{1}{6}\Longrightarrow f'(x)=e^x-\dfrac{1}{x+2},\quad f''(x)=e^x+\dfrac{1}{(x+2)^2}&gt;0$$ Note $f'(1)&gt;0$ and $f'(-1)&lt;0$, so $f'(x)=0$ has a unique real solution $x_{0}$, and we have $$f(x)\ge f(x_{0})=e^{x_{0}}-\ln{(x_{0}+2)}-\dfrac{1}{6}$$ From $$f'(x_{0})=e^{x_{0}}-\dfrac{1}{x_{0}+2}=0\Longrightarrow \ln{(x_{0}+2)}+x_{0}=0$$ we have $$f(x_{0})=e^{x_{0}}+x_{0}-\dfrac{1}{6}=\dfrac{1}{x_{0}+2}+x_{0}-\dfrac{1}{6}$$ since $\ln{(x_{0}+2)}+x_{0}=0$. Let $g(x)=\ln{(x+2)}+x\Longrightarrow g(-0.5)=\ln{1.5}-0.5&lt;0.5-0.5=0,\ g(1)&gt;0$, so we have $x_{0}&gt;-0.5$ and hence $t=x_{0}+2&gt;1.5$, giving $$f(x)\ge x_{0}+\dfrac{1}{x_{0}+2}-\dfrac{1}{6}=t+\dfrac{1}{t}-\dfrac{13}{6}&gt;1.5+\dfrac{2}{3}-\dfrac{13}{6}=0$$</p>
2,139,951
<p>A set $S$ of integers $\{s_1, s_2, \ldots , s_k\}$ such that $0 \in S$ is called a set of digits modulo $m$ if and only if any integer $b$ can be written as $$b = f_nm^n + f_{n-1}m^{n-1} + \cdots + f_1m + f_0$$ for some $n \geq 0$ and some $f_i \in S$; moreover, such an expression is unique assuming that $f_n \neq 0$. Show that:</p> <p>a) $k = m$ and the set of $s_i$ is a complete set of representatives modulo $m$</p> <p>b) $S = \{0, 1, -1\}$ is a set of digits modulo 3</p> <p>c) $S = \{0, 2, -2\}$ is NOT a set of digits modulo 3</p> <p>I can't really seem to get anywhere with this. I've tried representing $b$ in a few ways using the division algorithm and then trying to convert to the expression above but this doesn't really seem to get me anywhere. I'd appreciate any help, thanks!</p>
mathtt
415,491
<p>I feel we are asking the exact same problem and the rest of it goes this way</p> <p>d) S = {0,1,2} is a set of digits modulo -3</p> <p>e) S = {0,1,-1,2,-2} is a set of digits modulo 5</p> <p>f) S = {0,1,-1,8,-8} is NOT a set of digits modulo 5</p> <p>g) S={0,1,-1,17,-17} is a set of digits modulo 5</p> <p>h) S={0,1,-2,(-2)^2, (-2)^3, (-2)^4, (-2)^5} is a set of digits modulo 7</p> <p>I am still kind of confused about how to proceed for the rest of the question. Can someone give a hint? Thank you so much! </p>
1,394,753
<p>I came across this puzzle recently which I hope people might enjoy.</p> <p>Let $S(n)$ be the set of positive integers less than $n$ which do not have a $2$ in their decimal representation and let $\sigma(n)$ be the sum of the reciprocals of the numbers in $S(n)$, so for example $\sigma(5) = 1 + 1/3 + 1/4$ . </p> <ul> <li>Show that $S(1000)$ contains $9^3 - 1$ distinct numbers.</li> <li>Show that $\sigma(n) &lt; 80$ for all $n$.</li> </ul>
wltrup
232,040
<p>Answer to the first part. Imagine that our number system only has the 9 digits $\{0,1,3,4,5,6,7,8,9\}$ (no 2). Then count to 1000 like we always do, except now in base 9. There are $9^3$ distinct numbers. Take away 1 for 0 (because the set excludes 0), and the result is $9^3 - 1$.</p>
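<p>This count is small enough to verify by direct enumeration (a sketch):</p>

```python
# Direct enumeration (a sketch): positive integers below 1000 with no digit
# 2 correspond to the 9^3 length-3 strings over the nine allowed digits
# {0,1,3,4,5,6,7,8,9}, minus one for the excluded 0.
S_1000 = [k for k in range(1, 1000) if '2' not in str(k)]
count = len(S_1000)
```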
1,394,753
<p>I came across this puzzle recently which I hope people might enjoy.</p> <p>Let $S(n)$ be the set of positive integers less than $n$ which do not have a $2$ in their decimal representation and let $\sigma(n)$ be the sum of the reciprocals of the numbers in $S(n)$, so for example $\sigma(5) = 1 + 1/3 + 1/4$ . </p> <ul> <li>Show that $S(1000)$ contains $9^3 - 1$ distinct numbers.</li> <li>Show that $\sigma(n) &lt; 80$ for all $n$.</li> </ul>
Joey Zou
260,918
<p>For the second part: Since $\sigma$ is increasing, it suffices to bound $\sigma(n)$ for $n = 10^m$, $m\in\mathbb{N}$. It is not too hard to see that $|S(10^{m})| = 9^{m}-1$, so if we let $T_m = S(10^{m+1})\backslash S(10^m)$, then $|T_m| = 9^{m+1} - 9^{m}$. We have \begin{align*} \sigma(10^{m}) = \sum\limits_{k\in S(10^m)}{\frac{1}{k}} &amp;= \sum\limits_{k\in S(10)}{\frac{1}{k}} + \sum\limits_{j=1}^{m-1}{\sum\limits_{k\in T_j}{\frac{1}{k}}} \\ &amp;\le \frac{1}{1}+\frac{1}{3}+\dots+\frac{1}{9} + \sum\limits_{j=1}^{m-1}{\frac{|T_j|}{10^j}} \\ &amp;&lt;8 + \sum\limits_{j=1}^{\infty}{\frac{9^{j+1}-9^j}{10^j}} \\ &amp;= 8 + 72 = 80 \end{align*} as desired.</p>
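<p>The bound is far from tight; a direct computation (a sketch) of $\sigma(10^m)$ for small $m$ illustrates both the monotone growth and the generous margin below $80$:</p>

```python
# Partial sums sigma(10^m) of the digit-2-free harmonic series (a sketch).
sigmas = []
total = 0.0
checkpoints = {10, 100, 1000, 10**4, 10**5}
for k in range(1, 10**5):
    if '2' not in str(k):
        total += 1.0 / k
    if k + 1 in checkpoints:
        sigmas.append(total)   # this is sigma(10^m): the sum over k < 10^m
```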
292,221
<blockquote> <p>How to prove that $$\int_{0}^{1}(1+x^n)^{-1-1/n}dx=2^{-1/n}$$</p> </blockquote> <p>I have tried letting $t=x^n$ and then converting it into a beta function, but I failed. Are there any hints or solutions?</p>
Did
6,179
<p>The change of variable $t=x^{-n}$ yields $\mathrm dt=-nx^{-n-1}\mathrm dx$, that is, $\mathrm dx=-\frac1nt^{-1-1/n}\mathrm dt$. Thus, the integral is $$ \int_1^{+\infty}\frac1n\frac{\mathrm dt}{(1+t)^{1+1/n}}=\left[\frac{-1}{(1+t)^{1/n}}\right]_1^{+\infty}=\frac1{2^{1/n}}. $$</p>
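<p>A quick quadrature check (a sketch, using a hand-rolled Simpson's rule) corroborates the closed form for several $n$:</p>

```python
# Simpson's-rule check (a sketch) that the integral of (1+x^n)^(-1-1/n)
# over [0, 1] equals 2^(-1/n).
def simpson(f, a, b, steps=1000):      # steps must be even
    h = (b - a) / steps
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*k - 1)*h) for k in range(1, steps//2 + 1))
    s += 2 * sum(f(a + 2*k*h) for k in range(1, steps//2))
    return s * h / 3

errors = [abs(simpson(lambda x, n=n: (1 + x**n)**(-1 - 1/n), 0, 1)
              - 2**(-1/n)) for n in range(1, 8)]
```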
524,387
<p>Having watched an <a href="http://www.youtube.com/watch?v=NircIRdNUrE" rel="nofollow">integralCALC video lesson</a>, given</p> <p>$$w=xe^{y/z}, x = t^2, y = 1-t, z=1+2t$$ which could be rewritten as $$w=t^2 e^\frac{1-t}{1+2t}$$</p> <p>How does $dw/dt$ differ from $\partial w/\partial t$? Is there some intuition behind the partial derivative (such as a geometric interpretation)?</p>
Teepeemm
147,357
<p>There's no difference in this case, but there could be a situation where there would be a difference.</p> <p>Suppose that instead of the original problem, we had written $w=t^2e^{y/z}$, $y=1-t$, and $z=1+2t$. Since this is the same $w$, $dw/dt$ is the same. But if someone were to write $w(t,y,z)=t^2e^{y/z}$, then they could decide that $\partial w/\partial t$ should mean to differentiate only with respect to the first variable: $\partial w/\partial t=2te^{y/z}=2te^{\frac{1-t}{1+2t}}$. However, that someone should clearly indicate that is what they mean by saying, for example, "where $\partial w/\partial t$ is differentiating only with respect to the first variable in $w(t,y,z)$".</p>
992,125
<p>If I roll $3$ dice, how many combinations are there in which the sum of the dots on those dice is $13$?</p>
Masacroso
173,262
<p>Expanding the answer of Henno Brandsma: a generating function is a way to pack a sequence as the coefficients of the power expansion of a function; for example, we can pack the Fibonacci sequence as the coefficients of the power expansion of the function <span class="math-container">$h(x):=\frac1{1-x-x^2}$</span>.</p> <p>The important point here is that the algebra of generating functions (product, sum, etc.) is a handy way to combine the coefficients packed in them to get a new generating function whose coefficients are of interest to us.</p> <p>For example, the polynomial</p> <p><span class="math-container">$$ p(x):=a_0x^0+a_1x^1+a_2x^2+\ldots+a_n x^n $$</span></p> <p>is the generating function that contains the sequence <span class="math-container">$a_0,a_1,\ldots,a_n$</span>.</p> <p>In our case, each face of a standard fair die appears just once on the die; that is, there is only one face with a given number, from one to six. Therefore the generating function</p> <p><span class="math-container">$$ f(x):= x^1+x^2+x^3+x^4+x^5+x^6 $$</span></p> <p>packs the sequence of face counts of a fair die (note that the power of each monomial represents one of the faces of the die).</p> <p>Now, multiplication of generating functions has the effect that the new sequence, after multiplication, is a sum of products of the old ones, where the indices in every product add up to the exponent of the monomial they accompany.</p> <p>It's easy to check that, as we are throwing three dice, the generating function</p> <p><span class="math-container">$$g(x):=f(x)^3=(x^1+x^2+x^3+x^4+x^5+x^6)^3$$</span></p> <p>packs as coefficients the number of different ways to add up to the exponent of each monomial.</p> <p>Now, the polynomial <span class="math-container">$f$</span> can be seen as a partial sum of a <a href="http://en.wikipedia.org/wiki/Geometric_series" rel="nofollow noreferrer">geometric series</a>, i.e.</p> <p><span class="math-container">$$ 
\begin{align*} f(x)&amp;=x^1+x^2+x^3+x^4+x^5+x^6\\ &amp;=x(x^0+x^1+x^2+x^3+x^4+x^5)\\ &amp;=x\sum_{k=0}^{5}x^k\\ &amp;=x\frac{1-x^6}{1-x} \end{align*} $$</span></p> <p>Then <span class="math-container">$$g(x)=x^3\left(\frac{1-x^6}{1-x}\right)^3=x^3\color{red}{(1-x^6)^3}\color{green}{(1-x)^{-3}}$$</span></p> <p>The colored expressions (red and green) can be expressed as <a href="http://en.wikipedia.org/wiki/Binomial_series" rel="nofollow noreferrer">binomial series</a>[*]. Then</p> <p><span class="math-container">$$\require{cancel} g(x)=x^3\color{red}{\sum_{j=0}^{3}(-1)^j\binom{3}{j}x^{6j}}\color{green}{\sum_{h=0}^{\infty}(-1)^h\binom{-3}{h}x^h}$$</span></p> <p>Now: as we know that <span class="math-container">$\binom{-3}{h}=(-1)^h\binom{3+h-1}{h}=(-1)^h\binom{h+2}{2}$</span> (to understand this equality you can <a href="http://en.wikipedia.org/wiki/Binomial_coefficient#Generalization_to_negative_integers" rel="nofollow noreferrer">see here</a>, and remember that <span class="math-container">$\binom{n}{k}=\binom{n}{n-k}$</span>), then we find that</p> <p><span class="math-container">$$g(x)=x^3\color{red}{\sum_{j=0}^{3}(-1)^j\binom{3}{j}x^{6j}}\color{green}{\sum_{h=0}^{\infty}\cancel{(-1)^h}\cancel{(-1)^h}\binom{h+2}{2}x^h}$$</span></p> <p>From here we can build a formula to know the coefficient for any exponent of <span class="math-container">$x$</span>. 
First note that any exponent of <span class="math-container">$x$</span> will be of the form <span class="math-container">$S=3+6j+h$</span>, so <span class="math-container">$h=S-3-6j$</span>, and the coefficient for any sum <span class="math-container">$S$</span> will be</p> <p><span class="math-container">$$[x^S]g(x)=1\cdot\sum_{j=0}^{3}\color{red}{(-1)^j\binom{3}{j}}\color{green}{\binom{S-3-6j+2}{2}}\\ =\sum_{j=0}^{3}\color{red}{(-1)^j\binom{3}{j}}\color{green}{\binom{S-1-6j}{2}}$$</span></p> <p>where the notation <span class="math-container">$[x^k]f(x)$</span> represent the coefficient that the power <span class="math-container">$x^k$</span> have in the function <span class="math-container">$f$</span>.</p> <p>We can use this last formula to know the amount of ways to obtain a sum <span class="math-container">$S$</span> throwing three dice, in our case for <span class="math-container">$S=13$</span>. Indeed the previous formula can be written in a more precise way: observe that if <span class="math-container">$S-1-6j&lt;2$</span> (green binomial) or <span class="math-container">$j&gt;3$</span> (red binomial) then the addend will be zero, because if <span class="math-container">$n&lt;k$</span> for <span class="math-container">$n,k\in\Bbb N$</span> then <span class="math-container">$\binom{n}{k}=0$</span>. Hence the addends of the sum are not zero when <span class="math-container">$S-1-6j\ge 2$</span> and <span class="math-container">$3\ge j$</span>. 
And the values of <span class="math-container">$j$</span> where the addends are not zero are determined by</p> <p><span class="math-container">$$S-1-6j\geq 2 \implies j\leq\frac{S-3}{6}\le\frac{18-3}6&lt;3,\quad S\in\{3,4,\ldots,18\}$$</span></p> <p>so the constraint <span class="math-container">$j\le 3$</span> from the red binomial is automatically satisfied. Then we can re-write <span class="math-container">$[x^S]g(x)$</span> as</p> <p><span class="math-container">$$\bbox[5px,border:2px solid gold]{[x^S]g(x)=\sum_{j=0}^{\lfloor\frac{S-3}{6}\rfloor}(-1)^j\binom{3}{j}\binom{S-1-6j}{2}}$$</span></p> <p>I hope this is clear; you may need to read a bit more about some of these topics to follow the answer completely. Just to clarify: the notation <span class="math-container">$\lfloor x\rfloor$</span> denotes the <a href="http://en.wikipedia.org/wiki/Floor_and_ceiling_functions" rel="nofollow noreferrer">floor function</a>.</p> <p>To complete the question, we will evaluate <span class="math-container">$[x^{13}]g(x)$</span>:</p> <p><span class="math-container">$$ \begin{align*}[x^{13}]g(x)&amp;=\sum_{j=0}^{1}(-1)^j\binom{3}{j}\binom{12-6j}{2}\\ &amp;=\binom{3}{0}\cdot\binom{12}{2}-\binom{3}{1}\binom{6}{2}\\ &amp;=1\cdot \frac{\cancelto{6}{12}\cdot 11}{\cancel{2}}-3\cdot \frac{\cancelto{3}{6}\cdot 5}{\cancel{2}}\\ &amp;=6\cdot 11 - 9\cdot 5\\ &amp;=21 \end{align*}$$</span></p> <hr /> <p>[*] Observe that for <span class="math-container">$n\in\Bbb N$</span></p> <p><span class="math-container">$$(x+y)^n=\sum_{k=0}^\infty\binom{n}{k}x^ky^{n-k}=\sum_{k=0}^n\binom{n}{k}x^ky^{n-k}$$</span></p> <p>so although the second sum is finite, it represents a binomial series whose extra addends are all zero.</p>
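<p>The boxed formula can be cross-checked by brute force (a sketch) against direct enumeration of the three dice, for every attainable sum $S$ — in particular recovering the $21$ ways to roll a total of $13$:</p>

```python
# Cross-check (a sketch) of the boxed coefficient formula against direct
# enumeration of three standard dice.
from itertools import product
from math import comb

def by_formula(S):
    return sum((-1)**j * comb(3, j) * comb(S - 1 - 6*j, 2)
               for j in range((S - 3)//6 + 1))

def by_enumeration(S):
    return sum(1 for dice in product(range(1, 7), repeat=3)
               if sum(dice) == S)

checks = {S: (by_formula(S), by_enumeration(S)) for S in range(3, 19)}
```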
2,918,497
<p>From <em>Linear Algebra</em> by Friedberg, Insel, and Spence:</p> <blockquote> <p>Given <span class="math-container">$M_1=\begin{pmatrix} 1&amp;0\\0 &amp;1\end{pmatrix}$</span>, <span class="math-container">$M_2=\begin{pmatrix} 0&amp;0\\0 &amp;1\end{pmatrix}$</span> and <span class="math-container">$M_3=\begin{pmatrix}0&amp;1\\1 &amp;0\end{pmatrix}$</span>,</p> <p>prove that <span class="math-container">$\text{span}\{M_1, M_2, M_3\}$</span> is the set of all symmetric <span class="math-container">$2 \times2$</span> matrices.</p> </blockquote> <p>For reference, we just learned about linear combinations/span, but only in terms of vectors, nothing really with matrices.</p>
Key Flex
568,718
<p>Since $M_1, M_2$, and $M_3$ are all symmetric, every matrix in their span will be symmetric, so we need to show that every symmetric matrix is in their span. Every $2 Γ— 2$ symmetric matrix has the form $$M=\begin{pmatrix} a &amp; b \\ b &amp; c \end{pmatrix}$$and since we can write $M = aM_1 + (c-a)M_2 + bM_3$ (note that $M_1$ already contributes $a$ to the bottom-right entry, so the coefficient of $M_2$ must be $c-a$), any such $M$ is in the span of $\{M_1, M_2, M_3\}$.</p>
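<p>One can spot-check the decomposition numerically (a small sketch, not part of the proof; note the coefficient of $M_2$ has to be $c-a$, since $M_1$ already contributes $a$ to the bottom-right entry):</p>

```python
import random

# the three given matrices, stored as nested tuples
M1 = ((1, 0), (0, 1))
M2 = ((0, 0), (0, 1))
M3 = ((0, 1), (1, 0))

def lin_comb(coeffs, mats):
    # entrywise linear combination of 2x2 matrices
    return tuple(tuple(sum(k * M[i][j] for k, M in zip(coeffs, mats))
                       for j in range(2)) for i in range(2))

random.seed(0)
for _ in range(100):
    a, b, c = (random.randint(-9, 9) for _ in range(3))
    # coefficients (a, c - a, b) reproduce the symmetric matrix ((a, b), (b, c))
    assert lin_comb((a, c - a, b), (M1, M2, M3)) == ((a, b), (b, c))
```
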
4,634,263
<p>I have the following problem that asks to find the shock curve for the following IVP <span class="math-container">$$u_t + (u^2)_x = 0, \quad u(x,0) = \frac{1}{2}e^{-x},$$</span> to then obtain a weak solution from this curve using the Rankine-Hugoniot condition. I'm pretty lost on how to set up the problem. In Evans there is an example where it is easy to get <span class="math-container">$u^-$</span> and <span class="math-container">$u^+$</span>, so we can solve for the shock curve using an ODE. In this case, does anyone have an idea how to set it up?</p>
FD_bfa
1,007,603
<p><strong>Why do we define independence in a way that allows for an event of probability zero to be independent of another event?</strong></p> <p>The case you are referring to can be seen as degenerate. If one event has probability <span class="math-container">$0$</span> then there isn't a clear way to interpret the notion of independence intuitively (in order to know if an event occurring has some impact on another event, we first need that event to be possible). The reason that we define independence in the way that we do is simply out of convenience (as you speculate in the question). It is convenient to have one notion of independence so that we can apply lots of results that apply to independent events in as broad a way as possible. If we were to exclude the cases that you refer to, then this would just create additional work to prove that the standard results that follow for independent events also work in the case that you have excluded.</p> <p><strong>What would the consequences be if we modified the definition?</strong></p> <p>If we were to change the definition now, then there would be some real consequences. 
For example, in the standard proof of <a href="https://en.wikipedia.org/wiki/Kolmogorov%27s_zero%E2%80%93one_law" rel="nofollow noreferrer">Kolmogorov's Zero–One Law</a>, we use the fact that if the event <span class="math-container">$A$</span> is independent of itself then the probability of <span class="math-container">$A$</span> must be <span class="math-container">$0$</span> or <span class="math-container">$1$</span> (in other words: <span class="math-container">$P(A \cap A) = P(A)P(A)$</span> if and only if <span class="math-container">$P(A) = 0$</span> or <span class="math-container">$P(A)=1$</span>) - this proof can be found in the book <a href="https://link.springer.com/book/10.1007/978-1-4614-6956-8" rel="nofollow noreferrer"><strong>&quot;Measure Theory&quot;</strong> by Donald Cohn</a>.</p> <p>If we were to modify the definition of independence in the way that you suggest, then this proof breaks down, as the event <span class="math-container">$A$</span> can no longer be said to be independent. It is, of course, still possible to modify this proof, but it becomes unnecessarily complicated because we can no longer refer to the notion of <span class="math-container">$A$</span> being independent and we can no longer apply any of the standard results that follow from independence either (without additional justification).</p> <p>This is, of course, one example, but there are many other proofs that use similar ideas to the one above, and so the definition of independence that we currently have lends itself nicely to simplifying these types of proofs. The only downside is (as you also pointed out) that you lose some intuition when you think about these types of events logically. However, as examples involving events of this nature are degenerate, this isn't a big concern in the mathematical community.</p> <p><strong>Why do some definitions exclude edge cases?</strong></p> <p>There are, of course, some definitions that do exclude edge cases.
However, there are usually much more serious reasons for this.</p> <p>For example, one could ask why <span class="math-container">$1$</span> is not a prime number. We could easily have allowed <span class="math-container">$1$</span> to be a prime number, and to some, this might be more intuitive (like in this case).</p> <p>However, if we did modify the definition of prime numbers to include the number <span class="math-container">$1$</span>, then we would run into problems. Numbers would no longer have unique prime factorisations:</p> <p><span class="math-container">$$6 = \color{red}{3 \times2} \space = \space \color{blue}{3 \times 2 \times 1}\space = \space \color{green}{3 \times 2 \times 1 \times 1} = \space \space ... $$</span></p> <p>This would create a lot more work for mathematicians and would also violate the <a href="https://en.wikipedia.org/wiki/Fundamental_theorem_of_arithmetic" rel="nofollow noreferrer">Fundamental Theorem of Arithmetic</a> (requiring it to be rewritten).</p> <p>Therefore, in some cases, like this one, it is sensible to exclude an &quot;edge case&quot;. However, in the case you describe, there are no serious ramifications of including <span class="math-container">$0$</span>. Therefore, from the perspective of convenience it makes sense to include the degenerate cases to simplify our analyses and proofs.</p>
1,072,924
<p><strong>Setting</strong></p> <p>You are given two independent random variables $X_0,X_1$ with common exponential density $f(x) = \alpha e^{-\alpha x}$. Let $R = \frac{X_o}{X_1}$. Determine $\Pr[R &gt; t]$ for $t &gt; 0$.</p> <p>I got up to here</p> <p>$$\Pr[R &gt; t] = \Pr[X_o/X_1 &gt; t] = \Pr[X_o &gt; X_1 t] = 1 - \Pr[X_o \le tX_1]$$</p> <p>I know how to express the last probability as a distribution, but it would have $X_1$ in it, making it a random variable. So how do I proceed?</p>
ki3i
202,257
<p>Suppose we are interested in computing the probability of some event defined by a finite collection of continuous random variables. The joint density function of these random variables, when integrated over appropriate regions (defined by this event) in its domain, gives the probability we are looking for. In the present case, the independence of the identically distributed exponential random variables, $X_0$ and $X_1$, implies that their joint density function factors into the product of the marginal density functions, $$ f_{X_0, X_1}\left(x_0, x_1\right){}={}f_{X_0}\left(x_0\right)f_{ X_1}\left(x_1\right){}={}\alpha^2e^{-\alpha(x_0 + x_1)}\,. $$ So, to compute the probability of the event $\displaystyle \left\{\frac{X_0}{X_1}&gt;t\right\}$, which makes sense in what follows since the random variables are positive valued (except on the zero-probability set $\left\{X \leq 0\right\}$), we compute the integral of the joint density over the region for which this event holds in the domain of the joint density. That is, $$ \begin{eqnarray*} P\left( \frac{X_0}{X_1}&gt;t \right)&amp;{}={}&amp;P\left(X_0 &gt; tX_1 \right)\newline &amp;{}={}&amp;\iint\limits_{X_0&gt;tX_1} f_{X_0, \, X_1}\left(x_0, x_1\right)dA_{x_0,\,x_1}\newline &amp;{}={}&amp; \int\limits_0^\infty \int\limits_{x_1 t}^\infty f_{X_0}\left(x_0\right) f_{ X_1}\left(x_1\right) dx_0 dx_1\newline &amp;{}={}&amp; \int\limits_0^\infty \left(\,\, \int\limits_{x_1 t}^\infty f_{X_0}\left(x_0\right)dx_0 \right) f_{ X_1}\left(x_1\right) dx_1\newline &amp;{}={}&amp; \int\limits_0^\infty \left( e^{-tx_1\alpha}\right)\alpha e^{-x_1\alpha}dx_1\newline &amp;{}={}&amp; \frac{1}{1 + t}\,. \end{eqnarray*} $$ </p>
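<p>The closed form <span class="math-container">$P(X_0/X_1&gt;t)=\frac{1}{1+t}$</span> can also be checked by simulation (a rough Monte Carlo sketch; the choice of rate <span class="math-container">$\alpha$</span> is arbitrary, since the distribution of the ratio does not depend on it):</p>

```python
import random

random.seed(1)
alpha = 2.0      # arbitrary rate: the law of R = X0/X1 is free of alpha
t = 3.0
n = 200_000
hits = sum(1 for _ in range(n)
           if random.expovariate(alpha) / random.expovariate(alpha) > t)
estimate = hits / n
print(estimate, 1 / (1 + t))   # estimate should be close to 0.25
```
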
2,290,295
<p>I do not know if the following is true: </p> <p>Question: If $\Omega\subseteq\mathbb{R}^n$ is open and connected, then there exists a covering by balls $\{B_j\}_{j=1}^{\infty}$ with the property $B_j\subseteq\Omega$ and $B_j\cap B_{j+1}\neq\emptyset$, for all $j\geq1$.</p> <p>Motivation: I have a function $u$ that is a.e. constant on any ball contained in $\Omega$, and I want to prove that $u$ is a.e. constant on $\Omega$. If my question has a positive answer, then I am done. </p> <p>Edit: I want an answer to "Question". Although the "Motivation" is important for me and can be proved not using "Question", I am interested on "Question".</p>
Rigel
11,776
<p>Let $B\subset\Omega$ be any ball and let $c\in\mathbb{R}$ be such that $u=c$ a.e. in $B$. Let us define the set $$ A := \{x\in\Omega:\ \exists r&gt;0\ \text{s.t.}\ B_r(x)\subset\Omega,\ u = c\ \text{a.e. in}\ B_r(x)\} $$ The set $A$ is clearly not empty and open. On the other hand, also the set $C := \Omega\setminus A$ is open. Namely, if $x\in C$ and $B_r(x) \subset \Omega$, then there exists a constant $k\neq c$ such that $u = k$ a.e. in $B_r(x)$. As a consequence, $B_r(x) \subset C$.</p> <p>In conclusion, $A$ is both open and closed in $\Omega$. Since $A\neq \emptyset$ and $\Omega$ is connected, we deduce that $A=\Omega$.</p>
2,290,295
<p>I do not know if the following is true: </p> <p>Question: If $\Omega\subseteq\mathbb{R}^n$ is open and connected, then there exists a covering by balls $\{B_j\}_{j=1}^{\infty}$ with the property $B_j\subseteq\Omega$ and $B_j\cap B_{j+1}\neq\emptyset$, for all $j\geq1$.</p> <p>Motivation: I have a function $u$ that is a.e. constant on any ball contained in $\Omega$, and I want to prove that $u$ is a.e. constant on $\Omega$. If my question has a positive answer, then I am done. </p> <p>Edit: I want an answer to "Question". Although the "Motivation" is important for me and can be proved not using "Question", I am interested on "Question".</p>
Mirko
188,367
<p>Yes, there is such a covering. </p> <p>Lemma 1. If $\Omega\subseteq\mathbb{R}^n$ is open and connected, and $B,D$ are two open balls contained in $\Omega$, then there is a finite chain $\{B_j:j=1,...,m\}$ of open balls (for some $m$), each contained in $\Omega$, such that $B_j\cap B_{j+1}\neq\emptyset$ for all $j=1,...,m-1$, and such that $B=B_1$ and $D=B_m$.<br> Sketch of (standard) proof. Let $A$ be the union of all balls $B_m$ such that for some $m$ there is a chain as in the statement of the lemma. Then $A$ is open and non-empty (as $B\subseteq A$). Its complement $C=\Omega\setminus A$ is also open, for if $x\in C$ then there is some ball $B_x$ with $x\in B_x\subseteq\Omega$, but then $B_x\cap A=\emptyset$, for if $B_x\cap A\not=\emptyset$ then $B_x\cap B_m\not=\emptyset$, for some $B_m$ from a suitable chain, but then we could extend that chain by letting $B_{m+1}=B_x$, a contradiction with the assumption that $x\not\in A$.<br> Hence $C=\emptyset$ (since $\Omega$ is connected). Pick any $x\in D$, then there is a chain $\{B_j:j=1,...,m\}$ with $x\in B_m$, so we could let $D=B_{m+1}$. End of (sketch of) proof of Lemma 1. </p> <p>Lemma 2. $\Omega$ is the union of countably many open balls, say $\Omega=\bigcup_{k=0}^\infty C_k$ (each $C_k$ an open ball).<br> Proof. This is a direct consequence of the existence of a countable basis of open balls for the topology of $\mathbb{R}^n$. (Note. For reasons apparent below, I prefer that $k$ starts at $0$ here.) </p> <p>Now, Lemma 1 and Lemma 2 answer your question positively, as follows. Fix $\Omega=\bigcup_{k=0}^\infty C_k$ as in Lemma 2. By Lemma 1, there is a finite chain $\{B_j:j=1,...,m_1\}$ of open balls (for some $m_1$), each $B_j$ contained in $\Omega$, such that $B_j\cap B_{j+1}\neq\emptyset$ for all $j=1,...,m_1-1$, and such that $B_1=C_0$ and $C_1=B_{m_1}$. 
By Lemma 1 again, there is a finite chain $\{B_j:j=m_1,...,m_2\}$ of open balls (for some $m_2\ge m_1$), each contained in $\Omega$, such that $B_j\cap B_{j+1}\neq\emptyset$ for all $j=m_1,...,m_2-1$, and such that $B_{m_1}=C_1$ and $C_2=B_{m_2}$. Then we extend the sequence of the $B_j$ going from $C_2$ to $C_3$, then another finite chain from $C_3$ to $C_4$, etc. That is, in general (if we let $m_0=1$), we have $m_{k-1}\le m_k$, and a finite chain $\{B_j:j=m_{k-1},...,m_k\}$ with $B_{m_{k-1}}=C_{k-1}$ and $C_k=B_{m_k}$. </p>
314,246
<p>In the case that $L:B_1 \rightarrow B_2 $ is a linear mapping of Banach spaces and $L$ is an isometric isomorphism (a bijection with $\|Lx\|_{B_2} = \|x\|_{B_1}$), can I say that $L\overline{L}= 1 $ is trivial? (The bar denotes the complex conjugate.)</p> <p>TIA</p>
Harald Hanche-Olsen
23,290
<p>I am going out on a limb to answer a different question – but possibly the question that was intended, if the comments are anything to go by:</p> <p>Assume $L\colon H_1\to H_2$ is a linear isometry <em>between Hilbert spaces</em>. Then using the polarization identity $$ \langle x,y\rangle=\frac14\sum_{k=0}^3 i^k\lVert x+i^ky\rVert^2 $$ we can deduce $\langle Lx,Ly\rangle=\langle x,y\rangle$ for all $x$ and $y$, so that $L^*L=I_1$ (where $I_1$ is the identity on $H_1$). Since $L$ is also assumed to be a bijection, $LL^*=I_2$ follows, where $I_2$ is the identity on $H_2$.</p>
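<p>The polarization identity and the resulting invariance of the inner product can be sanity-checked numerically (a plain-Python sketch, under the convention that the inner product is conjugate-linear in its second argument; the isometry here is just coordinatewise multiplication by unit-modulus phases):</p>

```python
import cmath
import random

def inner(x, y):
    # Hermitian inner product on C^n, conjugate-linear in the second slot
    return sum(a * b.conjugate() for a, b in zip(x, y))

def norm_sq(v):
    return sum(abs(a)**2 for a in v)

def polarize(x, y):
    # (1/4) * sum_{k=0}^{3} i^k * ||x + i^k y||^2
    return sum((1j)**k * norm_sq([a + (1j)**k * b for a, b in zip(x, y)])
               for k in range(4)) / 4

random.seed(0)
x = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5)]
y = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5)]
assert abs(polarize(x, y) - inner(x, y)) < 1e-9   # polarization identity

# a simple isometry of C^5: coordinatewise multiplication by phases
phases = [cmath.exp(1j * random.uniform(0, 2 * cmath.pi)) for _ in range(5)]
Lx = [p * a for p, a in zip(phases, x)]
Ly = [p * a for p, a in zip(phases, y)]
assert abs(inner(Lx, Ly) - inner(x, y)) < 1e-9    # <Lx, Ly> = <x, y>
```
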
1,599,843
<p>I am working out of <em>Mathematical Statistics and Data Analysis by John Rice</em> and ran into the following interesting problem I'm having trouble figuring out.</p> <blockquote> <p>Ch 2 (#65)</p> <p>How could random variables with the following density function be generated from a uniform random number generator?</p> <p><span class="math-container">$$f(x) = \frac{1 + \alpha x}{2}, \quad -1 \leq x \leq 1,\quad -1 \leq \alpha \leq 1$$</span></p> </blockquote> <p>So I believe I'm suppose to use the following fact to solve the problem</p> <blockquote> <p>Proposition D</p> <p>Let <em>U</em> be uniform on [0, 1], and let <em>X</em> = <span class="math-container">$F^{-1}$</span>(<em>U</em>). Then the cdf of <em>X</em> is <em>F</em>.</p> <p><strong>Proof</strong></p> <p><span class="math-container">$$P(X \leq x) = P(F^{-1}(U) \leq x) = P(U \leq F(x)) = F(x)$$</span></p> </blockquote> <p>That is, we can use uniform random variables to generate other random variables that will have cdf <em>F</em></p> <p>So my goal should then be to find a cdf and it's inverse then give as input to the inverse the uniform random variable. I've included my attempt.</p> <p>Given <span class="math-container">$f(x) = \frac{1 + \alpha x}{2}$</span></p> <p><span class="math-container">$$F(X) = \int_{-1}^{x} \frac{1 + \alpha t}{2} dt \; = \; \frac{x}{2} + \frac{\alpha x}{4} + \frac{1}{2} - \frac{\alpha}{4}$$</span></p> <p><span class="math-container">$$4 \cdot F(X) - 2 + \alpha = 2x + \alpha x$$</span></p> <p><span class="math-container">$$F^{-1}(X) = \frac{4X - 2 + \alpha}{2 + \alpha}$$</span></p> <p>So our random variable is, for example, T where</p> <p><span class="math-container">$$T = F^{-1}(U) = \frac{4U - 2 + \alpha}{2 + \alpha}$$</span></p> <p>The answer in the back of the book is</p> <p><span class="math-container">$$X = [-1 + 2 \sqrt{1/4 - \alpha(1/2 - \alpha / 4 - U)}]/ \alpha$$</span></p> <p>I'm not really sure where I went wrong. Any help?</p>
Em.
290,196
<p>The cdf appears to be wrong. When $-1\leq x\leq 1$, \begin{align*} F_X(x) &amp;= \int_{-1}^{x} \frac{1 + \alpha t}{2} dt\\ &amp;=\int_{-1}^x \frac{1}{2}+\frac{\alpha}{2}t\,dt\\ &amp;=\frac{1}{2}[x+1]+\frac{\alpha}{4}[x^2-1]\\ \end{align*} Other than that, your approach seems fine.</p>
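<p>With the corrected cdf, inverting <span class="math-container">$F_X(x)=u$</span> by the quadratic formula reproduces the book's sampler, which can be checked against the exact mean <span class="math-container">$E[X]=\alpha/3$</span> (a Monte Carlo sketch; the parameter value <span class="math-container">$\alpha=0.5$</span> is only illustrative):</p>

```python
import math
import random

def sample(alpha, u):
    # invert F(x) = (x + 1)/2 + alpha*(x^2 - 1)/4 = u for x in [-1, 1];
    # this is the book's X = [-1 + 2*sqrt(1/4 - alpha*(1/2 - alpha/4 - u))]/alpha
    if alpha == 0:
        return 2 * u - 1
    return (-1 + 2 * math.sqrt(0.25 - alpha * (0.5 - alpha / 4 - u))) / alpha

random.seed(0)
alpha = 0.5
xs = [sample(alpha, random.random()) for _ in range(200_000)]
mean = sum(xs) / len(xs)
print(mean)   # should be close to E[X] = alpha/3 = 1/6
```
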
1,143,893
<p>Struggling with basic AS-level statistics; any help would be much appreciated.</p> <p>The two events $A,B$ are such that $P(A)= 0.65, P(A\cup B)= 0.93$</p> <p>Evaluate $P(B)$ given that $A$ and $B$ are independent. (4 Marks) </p> <p>Thank you.</p>
drhab
75,923
<p><strong>Hint</strong>:</p> <p>In general $P(A)+P(B)=P(A\cup B)+P(A\cap B)$</p>
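<p>Carrying the hint through (a one-line sketch of the arithmetic; with independence, <span class="math-container">$P(A\cap B)=P(A)P(B)$</span>, so <span class="math-container">$P(B)\,(1-P(A))=P(A\cup B)-P(A)$</span>):</p>

```python
# P(A) + P(B) = P(A ∪ B) + P(A)P(B)  =>  P(B)(1 - P(A)) = P(A ∪ B) - P(A)
pA, pAuB = 0.65, 0.93
pB = (pAuB - pA) / (1 - pA)
print(pB)   # ≈ 0.8
```
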
4,120,122
<p>If <span class="math-container">$f(x)=\frac{1}{\sqrt{x} (1+\ln{x})}$</span> for <span class="math-container">$x \in (1, \infty)$</span>,</p> <p>I need to show that <span class="math-container">$ f \in L^p((1, \infty))$</span> if and only if <span class="math-container">$p \geq 2$</span>.</p> <p>I suppose that <span class="math-container">$p \geq 2$</span>, and so <span class="math-container">$||f||_p^p=\int_1^{\infty}\frac{x^{-p/2}}{(1+\ln{x})^p}dx$</span>. When I substitute <span class="math-container">$1+\ln{x} =u$</span>, the integral becomes similar to a gamma function with negative argument, which would mean it is not convergent. I don't know if this is the correct argument or whether there is another comparison method to prove the convergence of the integral.</p>
Oliver DΓ­az
121,671
<p>Notice that</p> <p><span class="math-container">$$ |f(x)|^p=\frac{1}{x^{p/2}(1+\log(x))^p}\leq\frac{1}{x^{p/2}}$$</span> for <span class="math-container">$x\geq 1$</span>. This is enough for convergence when <span class="math-container">$p&gt;2$</span></p> <p>When <span class="math-container">$p=2$</span> one gets <span class="math-container">$$\int^\infty_1\frac{dx}{x(1+\log x)^2}=\int^\infty_1\frac{du}{u^2}=1$$</span></p> <p>For <span class="math-container">$p&lt;2$</span>, fix <span class="math-container">$\varepsilon&gt;0$</span> small enough so that <span class="math-container">$\frac{p}{2}+\varepsilon&lt;1$</span>. Since <span class="math-container">$\lim_{x\rightarrow\infty}\frac{1+\log(x)}{x^{\varepsilon/p}}=0$</span>, one has that <span class="math-container">$$ |f(x)|^p=\frac{1}{x^{p/2}(1+\log(x))^p}\geq\frac{1}{x^{\tfrac{p}{2}+\varepsilon}}$$</span> for <span class="math-container">$x$</span> large enough. That will give you divergence for <span class="math-container">$p&lt;2$</span>.</p>
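<p>The exact value in the <span class="math-container">$p=2$</span> case can be confirmed numerically (a sketch: after <span class="math-container">$u=1+\log x$</span>, <span class="math-container">$dx = x\,du$</span>, the integral becomes <span class="math-container">$\int_1^\infty u^{-2}\,du=1$</span>; the trapezoid rule on <span class="math-container">$[1,U]$</span> plus the exact tail <span class="math-container">$1/U$</span> recovers it):</p>

```python
import math

def g(u):
    # the transformed integrand 1/u^2
    return 1.0 / (u * u)

U, n = 10_000.0, 1_000_000
h = (U - 1.0) / n
total = 0.5 * (g(1.0) + g(U))
for i in range(1, n):
    total += g(1.0 + i * h)
value = total * h + 1.0 / U      # add the exact tail: ∫_U^∞ du/u^2 = 1/U
print(value)                     # close to 1
```
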
3,408,839
<p><span class="math-container">$$\frac{1}{\log_{2x-1}{(x)}} + \frac {1}{\log_{x+6}{(x)}}=1+\frac{1}{\log_{x+10}{(x)}}$$</span> What should I do for the first step?</p> <p>Is it like <span class="math-container">$\frac{1}{A}+\frac{1}{B}$</span>, which I then simplify into <span class="math-container">$\frac{A+B}{AB}$</span>? I need your help or a hint for solving this equation. Thank you so much, sir.</p>
Ishan Deo
577,897
<p>Remember - <span class="math-container">$$\log_{a}b = \frac{1}{\log_ba}$$</span></p>
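<p>If one carries the hint through, rewriting every term as a logarithm base <span class="math-container">$x$</span> leads (after clearing logs) to <span class="math-container">$(2x-1)(x+6)=x(x+10)$</span>, whose admissible root is <span class="math-container">$x=2$</span> (the details are only sketched here). A quick numeric verification of that candidate:</p>

```python
import math

def log_base(b, a):
    # log_b(a) via change of base
    return math.log(a) / math.log(b)

x = 2.0   # the candidate solution (bases 2x-1 = 3, x+6 = 8, x+10 = 12 are all valid)
lhs = 1 / log_base(2 * x - 1, x) + 1 / log_base(x + 6, x)
rhs = 1 + 1 / log_base(x + 10, x)
print(lhs, rhs)   # both sides equal log_2(24) ≈ 4.585
```
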
961,304
<p>I am studying a book and I am stagnating on a what should be a straightfoward proof:</p> <p>Show that if $X$ is compact, $V\subset X$ is open and $x\in V$, then there exists an open set $U$ in $X$ with $x\in U\subset \bar U\subset V$</p> <p>I don't know how to find the appropriate set $U$. I am guessing you need to do something like taking the intersection with $V$ of a finite subcover of X and then show that its closure is contained in $V$ ...</p> <p>Could someone nudge me in the right direction?</p>
Kevin Arlin
31,228
<p>Sure. The median of a set of numbers $\{x_1,...,x_n\}$ is the value $m$ such that $x_i\leq m$ for half the $i$ and $x_i\geq m$ for the other half, ignoring the details about $n$ being even or odd. So for simplicity suppose $f:[0,1]\to \mathbf{R}$ is continuous. Then we'll define analogously the <em>median</em> of $f$ to be $m$ such that "$f$ is below $m$ half the time and above $m$ half the time," more precisely, such that the length of $\{x:f(x)\leq m\}=1/2$, and similarly for $\{f(x)\geq m\}$. (Here we'll need the convention that we split $\{f(x)=m\}$ evenly to each half, to handle special cases like $f\equiv m$.)</p> <p>To compute this, we need a function $g$ associated to $f$. Namely, take $g(y)$ to be the length of $\{x:f(x)\leq y\}$. Now $g$ is not necessarily continuous: if for instance $f$ is constant at $m$ as above, then $g(y)=0,y&lt;m, 1\text{ otherwise.}$ But it <em>is</em> "upper continuous", that is, $\lim_{y'\to y^+} g(y')=g(y)$. I'll avoid arguing for this in detail; the point is essentially that if $f(x)\leq y+\epsilon$ for every $\epsilon$ then $f(x)\leq y$. Since $g$ is upper continuous, if we define $m=\inf\{y:g(y)\geq 1/2\}$ then we'll be guaranteed $g(m)\geq 1/2$, and we can define $m$ as our median.</p> <p>So, this is a lot harder than defining the mean, yes? There are still some issues we haven't worked out: is $g$ even well defined? $\{x:f(x)\leq y\}$ could be an extremely weird set; how do we measure its length? For this we have to defer to a much more advanced topic than calculus, which is called measure theory. However, if $f$ is <em>increasing</em>, then $\{x:f(x)\leq y\}$ will just be an interval, and so we can in principle compute $g$ without any fancy knowledge. </p> <p>So let's try an example or two. For $f(x)=x,$ we get $g=f$ and $m=1/2$, the same as the means: this shows that linear functions are the continuous analogue of distributions that aren't skewed, which in the finite case are the ones that have the same median as mean. 
How about $f(x)=x^2$? We get $g(y)=\sqrt y$, so $m=1/4$. But the mean of $x^2$ is $1/3$, so we see $x^2$ is "skewed to the right-"as you can see from its graph! In general, if $f$ is strictly increasing and $f(0)=0$, then as happened here we'll get $g=f^{-1}$, so that it'll be possible to compute the median as $f^{-1}(1/2)$. You can work out similar statements if $f(0)\neq 0$, but to move away from the increasing case things will become increasingly subtle.</p>
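<p>The construction above can be sketched numerically (a rough illustration only: it assumes $f$ maps $[0,1]$ into $[0,1]$, approximates "length" by counting midpoint samples, and bisects for the level $m$ with $g(m)\geq 1/2$):</p>

```python
def measure_below(f, m, n=10_000):
    # approximate length of {x in [0, 1] : f(x) <= m} by midpoint sampling
    return sum(1 for i in range(n) if f((i + 0.5) / n) <= m) / n

def median(f, lo=0.0, hi=1.0, iters=40):
    # bisect for the smallest level m with measure_below(f, m) >= 1/2
    # (assumes f takes values between lo and hi)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if measure_below(f, mid) >= 0.5:
            hi = mid
        else:
            lo = mid
    return hi

m = median(lambda x: x * x)
print(m)   # ≈ 0.25, i.e. f^{-1}(1/2) for the increasing function f(x) = x^2
```
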
3,122,989
<p>Let <span class="math-container">$A\in\mathbb{R}^{n\times n}$</span> and define conjugation by <span class="math-container">$GL_n$</span> on <span class="math-container">$\mathbb{R}^{n\times n}$</span> in the usual way (e.g. for all <span class="math-container">$A\in\mathbb{R}^{n\times n}$</span> and <span class="math-container">$T\in GL_n$</span>, <span class="math-container">$A\mapsto T^{-1}AT$</span>). Are there any ways to derive conditions on <span class="math-container">$T$</span> so that a given subset of the entries of <span class="math-container">$A$</span> will be invariant under conjugation? </p> <p>For example, let <span class="math-container">$$ A = \begin{bmatrix}a &amp; b\\ c &amp; d \end{bmatrix} $$</span> and suppose we want <span class="math-container">$b$</span> to be invariant under conjugation. Then we are looking for the subset <span class="math-container">$U \subset GL_n$</span> such that, for all <span class="math-container">$T \in U$</span>, <span class="math-container">$$ T^{-1}AT = \begin{bmatrix}a' &amp; b \\ c' &amp; d' \end{bmatrix}. $$</span> Due to the application from which this question arises, it is also enough to be able say whether or not <span class="math-container">$U$</span> consists of just the identity matrix.</p> <p>I would also appreciate any pointers to relevant literature.</p>
Community
-1
<p>We assume that <span class="math-container">$T=[t_{i,j}]\in SL_n(\mathbb{C})$</span> (<span class="math-container">$\det(T)=1$</span>). Note that the underlying field is <span class="math-container">$\mathbb{C}$</span> and not <span class="math-container">$\mathbb{R}$</span>.</p> <p>We choose (for example) <span class="math-container">$U=\{(i,j);i\leq j\}$</span> (the indices of the entries of the upper right part of a matrix) and we require that <span class="math-container">$(Adj(T)AT-A)_{i,j}=0$</span> for every <span class="math-container">$(i,j)\in U$</span>.</p> <p>Then we obtain a system of <span class="math-container">$\dfrac{n(n+1)}{2}+1$</span> equations of degree <span class="math-container">$n$</span> in the <span class="math-container">$n^2$</span> unknowns <span class="math-container">$t_{i,j}$</span>. The question is: are these equations algebraically independent? (note that (*): the equality of index <span class="math-container">$(n,n)$</span> is a consequence of the equalities of indices <span class="math-container">$(i,i),i=1,\cdots,n-1$</span>). </p> <p>Experiments for <span class="math-container">$n=3$</span> (using Grobner bases) "show" that if <span class="math-container">$A$</span> is generic (randomly chosen), then the answer is yes (modulo (*)). That is, the solutions in <span class="math-container">$T\in SL_3(\mathbb{C})$</span> depend on <span class="math-container">$n(n-1)/2=3$</span> parameters.</p>
481,527
<p>The following question came up at a conference and a solution took a while to find.</p> <blockquote> <p><strong>Puzzle.</strong> Find a way of cutting a pizza into finitely many congruent pieces such that at least one piece of pizza has no crust on it.</p> </blockquote> <p>We can make this more concrete,</p> <blockquote> <p>Let <span class="math-container">$D$</span> be the unit disc in the plane <span class="math-container">$\mathbb{R}^2$</span>. Find a finite set of subsets of <span class="math-container">$D$</span>, <span class="math-container">$\mathcal{A}=\{A_i\subset D\}_{i=0}^n$</span>, such that</p> <ul> <li>for each <span class="math-container">$i$</span>, <span class="math-container">$A_i$</span> is simply connected and equal to the closure of its interior</li> <li>for each <span class="math-container">$i, j$</span> with <span class="math-container">$i\neq j$</span>, <span class="math-container">$\operatorname{int}(A_i)\cap \operatorname{int}(A_j)=\emptyset$</span></li> <li><span class="math-container">$\bigcup\mathcal{A}=D$</span></li> <li>for each <span class="math-container">$i,j$</span>, <span class="math-container">$A_i=t(A_j)$</span> where <span class="math-container">$t$</span> is a (possibly orientation reversing) rigid transformation of the plane</li> <li>for some <span class="math-container">$i$</span>, <span class="math-container">$\lambda(A_i\cap\partial D)=0$</span> where <span class="math-container">$\lambda$</span> is the Lebesgue measure on the boundary circle.</li> </ul> </blockquote> <p>Note that we require only that <span class="math-container">$\lambda(A_i\cap\partial D)=0$</span> and not that <span class="math-container">$A_i\cap\partial D=\emptyset$</span>. I know of a solution but am interested in what kinds of solutions other people can find, and so I welcome the attempt.</p>
Robert Israel
8,508
<p>Here is one solution in $12$ pieces.</p> <p><img src="https://i.stack.imgur.com/wA0xf.png" alt="enter image description here"></p>
1,010,820
<p>I tried the following :</p> <p>\begin{align}\sec\theta + \tan\theta&amp;=4\\ \frac1{\cos\theta} + \frac{\sin\theta}{\cos\theta}&amp;=4\\ \frac{1+\sin\theta}{\cos\theta}&amp;=4\\ \frac{1+\sin\theta}4&amp;=\cos\theta\end{align}</p> <p>now don't know how to evaluate further ?</p>
mookid
131,738
<p><strong>Hint:</strong> since $1+\sin\theta &gt;0$ and $\frac{1+\sin\theta}{\cos\theta} =4$, $\cos\theta &gt;0$ so $\cos\theta = \sqrt{1-\sin^2\theta}$. This leads to a quadratic equation on $\sin\theta$.</p>
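<p>Carrying this out with the equation $\frac{1+\sin\theta}{\cos\theta}=4$ and $\cos\theta=\sqrt{1-\sin^2\theta}$ (a sketch of the arithmetic, verified with exact rational values):</p>

```python
from fractions import Fraction

# (1 + s)^2 = 16 (1 - s^2) = 16 (1 - s)(1 + s)  =>  1 + s = 16 (1 - s)  =>  s = 15/17
s = Fraction(15, 17)   # sin(theta)
c = Fraction(8, 17)    # cos(theta) = sqrt(1 - s^2)
assert s * s + c * c == 1
assert (1 + s) / c == 4        # sec(theta) + tan(theta) = 4, as required
```
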
1,976,001
<p>The statement goes as following: if $3 \mid 2a$, then $3 \mid a$ and $a$ is an integer. In my approach, I used prime factorization, but is this actually valid? This was my approach:</p> <p>$$3 \mid 2a \implies 2a = 2 \cdot 3 \cdot k, k \in \mathbb{N}$$</p> <p>$$\frac{2a}{2} = \frac{2\cdot 3 \cdot k}{2}$$</p> <p>$$a = 3 \cdot k$$</p> <p>$$\therefore 3 \mid a$$</p> <p>Is this valid or am I doing forbidden things here?</p>
PMar
380,500
<p>First note that 1 = 3 - 2. Multiply by a: a = 3a - 2a. If 2a = 3n (for some n), then a = 3a - 3n = 3(a - n). Hence 3 | a.</p> <p>This generalizes to other prime pairs: When p, q are distinct primes, GCD(p,q) = 1, hence there are (signed) integers m, n such that 1 = pm + qn. Multiplying both sides by 'the other value in the product' <em>(here a)</em> isolates that value on the left and produces an expression divisible by p or q <em>(depending on which is 'the first value in the product')</em> on the right.</p> <p>This also generalizes to algebraic structures more general than numbers. It does NOT work when the dividing item <em>(here 3)</em> is NOT prime!</p>
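<p>The Bézout identity driving this argument can be illustrated with a short sketch (the recursive extended-Euclid helper here is just one standard way to produce the coefficients $m, n$):</p>

```python
def ext_gcd(p, q):
    # extended Euclid: returns (g, m, n) with g = gcd(p, q) = p*m + q*n
    if q == 0:
        return p, 1, 0
    g, m, n = ext_gcd(q, p % q)
    return g, n, m - (p // q) * n

g, m, n = ext_gcd(3, 2)
assert (g, 3 * m + 2 * n) == (1, 1)   # here 1 = 3*1 + 2*(-1), i.e. a = 3a - 2a

# sanity check of the statement itself: 3 | 2a forces 3 | a
for a in range(-300, 301):
    if (2 * a) % 3 == 0:
        assert a % 3 == 0
```
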
2,347,995
<p>Not sure what I'm doing wrong.</p> <p>Here's my work:</p> <p>Expressing the first part $A \setminus (B\setminus C)$ using logical symbols:</p> <p>$A \land \neg(B \land \neg C)$ becomes</p> <p>$A \land \neg B\lor C$ (De Morgan's law)</p> <p>While the second expression $(A \setminus B) \cup (A \cap C)$ is </p> <p>$(A \land \neg B) \lor (A \land C)$ which becomes</p> <p>$A \land \neg B \lor A \land C$ (Associative Law)</p> <p>or $A \land \neg B \land C$</p> <p>How are these two expressions similar? Thanks for any help in advance! </p>
Chris Culter
87,023
<p>We have $$\frac{1-\cos x}{x^2}=\frac{1}{2}-\frac{1}{4!}x^2+\frac{1}{6!}x^4+O(x^6)$$ where the terms after the $\frac12$ are bounded by $$\frac{1}{4!}+\frac{1}{5!}+\cdots=e-\left(1+1+\frac12+\frac16\right)\approx0.05$$</p>
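<p>Both the tail bound and the resulting closeness to $\frac12$ can be spot-checked numerically (a sketch; the grid over $(0,1]$ is only illustrative):</p>

```python
import math

# bound on the tail coefficients: 1/4! + 1/5! + ... = e - (1 + 1 + 1/2 + 1/6)
bound = math.e - (1 + 1 + 0.5 + 1 / 6)          # ≈ 0.0516
worst = max(abs((1 - math.cos(x)) / x**2 - 0.5)
            for x in (i / 1000 for i in range(1, 1001)))
print(worst, bound)   # the deviation (≈ 0.0403, at x = 1) stays below the bound
```
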
48,679
<p>I've been going through Fermat's proof that a rational square is never a congruent number, and I've stumbled upon a step that I can't see why it holds. Fermat says: ''If a square is made up of a square and the double of another square, its side is also made up of a square and the double of another square.'' I'm having difficulty understanding why this is. Can anyone help me understand it?</p>
jjcale
17,261
<p>The Hypernetted-chain approximation used in statistical mechanics.</p> <p>It was used, for instance, by Laughlin in the theory of the fractional quantum Hall effect in order to estimate the energies of elementary excitations of Laughlin's wave function. </p>
4,024,071
<p>'A or B' means one or the other or both. <br/><br/>So I think <span class="math-container">$\{x | x = u$</span> or <span class="math-container">$ x = v\}$</span> can be equal to <span class="math-container">$\{u\}$</span> or <span class="math-container">$\{v\}$</span> or <span class="math-container">$\{u, v\}$</span>. <br/>Similarly, I think <span class="math-container">$\{x | x\in a $</span> or <span class="math-container">$ x\in b\}$</span> can be equal to any subset of <span class="math-container">$a \cup b$</span> except the empty set.<br/><br/> But actually <span class="math-container">$\{x | x = u$</span> or <span class="math-container">$ x = v\} = \{u, v\}$</span> and <span class="math-container">$\{x | x\in a $</span> or <span class="math-container">$ x\in b\} = a \cup b$</span>. What am I missing?</p>
trancelocation
467,003
<p>The way via the greatest common divisor is absolutely not necessary. Actually the revenues for the given three months aren't necessary at all to solve the problem. So, I add this short answer, even if it does not address the OP's question:</p> <p>We know for the price <span class="math-container">$p=10a+b$</span> and <span class="math-container">$a\cdot b \in \{5,6,7,8,9\}$</span> since a price of <span class="math-container">$20$</span> would mean no pants sold.</p> <p>Given the options, the revenue <span class="math-container">$p\cdot a\cdot b$</span> must be even. So only <span class="math-container">$16$</span> or <span class="math-container">$18$</span> are to be considered. Checking these, <span class="math-container">$p= 18$</span>, hence, a revenue of <span class="math-container">$144$</span> is the only possible answer.</p>
625,746
<p>If the $n$th term of a series is given by $T(n)=T(n-1) + T(n-1) \times C$, where $C$ is a given constant, $T(1)=A$ and $n \ge 2$, I need to tell whether its value will be greater than $G$ or not before its $m$th term.</p> <p>EXAMPLE: Say the first term $A$ is $2$, and say $C$ is also $2$. Suppose we need to check whether its value is at least $10(=G)$ before or up to the 2nd term.</p> <p>Then the answer should be No.</p> <p>I need to just check whether it is possible or not, but without calculating the values, since all the variables can be very large (say of order $10^9$). Can anyone help?</p>
Claude Leibovici
82,404
<p><strong>HINT</strong></p> <p>Assuming that the formula you wrote is the right one (see mathlove's comment), you then have<br> $T(n) = (1+C)\, T(n-1)$ with $T(1) = A$.<br> Then<br> $T(2) = (1+C)\, T(1)$,<br> $T(3) = (1+C)\, T(2) = (1+C)^2 A$,<br> $\ldots$<br> $T(n) = A\,(1+C)^{n-1}$.</p> <p>This looks like a geometric progression, doesn't it?</p> <p>I am sure you can take it from here. </p>
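<p>One way to act on this hint when the variables are huge (say of order $10^9$) is to compare in log space, so the power $A\,(1+C)^{m-1}$ is never formed explicitly. A sketch (the helper name <code>reaches</code> is just illustrative, and it assumes $A, C, G &gt; 0$):</p>

```python
import math

def reaches(A, C, G, m):
    # compare T(m) = A*(1+C)^(m-1) with G in log space, so A, C, G ~ 1e9 are fine
    # assumes A > 0, C > 0, G > 0
    return math.log(A) + (m - 1) * math.log1p(C) >= math.log(G)

# the example from the question: A = 2, C = 2, G = 10
print(reaches(2, 2, 10, 2))   # False: T(2) = 6 < 10
print(reaches(2, 2, 10, 3))   # True:  T(3) = 18 >= 10
```
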
1,129,567
<p>Using the substitution $x=\cosh (t)$ or otherwise, find $$\int\frac{x^3}{\sqrt{x^2-1}}dx$$ The correct answer is apparently $$\frac{1}{3}\sqrt{x^2-1}(x^2+2)$$ I seem to have gone very wrong somewhere; my answer is way off, can someone explain how to get this answer to me.</p> <p>Thanks.</p> <p>My working: $$\int\frac{\cosh^3t}{\sinh^2t}dt$$ $$u=\sinh t$$ $$\int\frac{1+u^2}{u^2}du$$ $$\frac{-1}{u}+u$$ $$\frac{-1}{\sinh t}+\sinh t$$ $$\frac{-1}{\sqrt{x^2-1}}+\sqrt{x^2-1}$$</p> <p>^my working, I'm pretty sure this is very wrong though.</p> <p>Edit: I've spotted my error. On the first line it should be $$\int \cosh^3t \, dt$$ </p> <p>not</p> <p>$$\int\frac{\cosh^3t}{\sinh^2t}dt$$</p>
user84413
84,413
<p>Let $\displaystyle u=x^2, dv=\frac{x}{\sqrt{x^2-1}}dx$ $\;\;$and $\;\;du=2xdx, v=\sqrt{x^2-1}$ to get</p> <p>$\displaystyle\int\frac{x^3}{\sqrt{x^2-1}}\;dx=x^2\sqrt{x^2-1}-\int2x\sqrt{x^2-1}\;dx=x^2\sqrt{x^2-1}-\frac{2}{3}(x^2-1)^{3/2}+C$</p> <hr> <p>Alternatively, let $u=\sqrt{x^2-1}, \;\;du\displaystyle=\frac{x}{\sqrt{x^2-1}}dx, \;\;x^2=u^2+1$ to get</p> <p>$\displaystyle\int\left(u^2+1\right)du=\frac{1}{3}u^3+u+C=\frac{1}{3}(x^2-1)^{3/2}+\sqrt{x^2-1}+C$</p>
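All three expressions (the book's $\frac{1}{3}\sqrt{x^2-1}(x^2+2)$ and the two forms above) differ by at most a constant; a quick numerical sanity check of my own, using central differences, confirms that each differentiates back to the integrand:

```python
import math

def integrand(x):
    return x**3 / math.sqrt(x**2 - 1)

def F(x):   # the book's answer: (1/3) * sqrt(x^2 - 1) * (x^2 + 2)
    return math.sqrt(x**2 - 1) * (x**2 + 2) / 3

def G(x):   # the integration-by-parts form above
    return x**2 * math.sqrt(x**2 - 1) - (2 / 3) * (x**2 - 1)**1.5

h = 1e-6
for x in (1.5, 2.0, 3.7):
    for anti in (F, G):
        deriv = (anti(x + h) - anti(x - h)) / (2 * h)   # central difference
        assert abs(deriv - integrand(x)) < 1e-4
    assert abs(F(x) - G(x)) < 1e-9   # here the two even agree with constant 0
print("both antiderivatives differentiate back to the integrand")
```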
4,400,243
<p>How do we change the equation <span class="math-container">$x^{2} + y^{2} + xy - x - y = 0$</span> to the standard form of an ellipse?</p> <p>Since there is an <span class="math-container">$xy$</span> term, I don't know how to use the completing-the-square method.</p>
Átila Correia
953,679
<p>Is this what you are looking for?</p> <p><span class="math-container">\begin{align*} x^{2} + y^{2} + xy - x - y = 0 &amp; \Longleftrightarrow 2x^{2} + 2y^{2} + 2xy - 2x - 2y = 0\\\\ &amp; \Longleftrightarrow (x^{2} + 2xy + y^{2}) + (x^{2} - 2x + 1) + (y^{2} - 2y + 1) = 2\\\\ &amp; \Longleftrightarrow (x + y)^{2} + (x - 1)^{2} + (y - 1)^{2} = 2 \end{align*}</span></p>
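As a quick sanity check (my own, not part of the answer), the two sides of the completed-square identity agree at random points:

```python
import random

random.seed(1)
for _ in range(1_000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    lhs = 2 * (x**2 + y**2 + x*y - x - y)
    rhs = (x + y)**2 + (x - 1)**2 + (y - 1)**2 - 2
    assert abs(lhs - rhs) < 1e-9
print("identity verified at 1000 random points")
```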
3,975,162
<p>Let <span class="math-container">$(X, A, \mu)$</span> be a positive measure space. If <span class="math-container">$\mu(X) &lt; \infty$</span> and <span class="math-container">$(A_n)_{n \in \mathbb N^*}$</span>, <span class="math-container">$A$</span> are measurable subsets of <span class="math-container">$X$</span>, <br /> show that if <span class="math-container">$\mu(A\bigtriangleup A_n)\rightarrow 0$</span> then <span class="math-container">$\mu(A_n)\rightarrow\mu(A)$</span></p> <p>What I have tried so far is</p> <p>use that <span class="math-container">$A\bigtriangleup A_n = (A/A_n)\cup(A_n/A)$</span> <br /> and since <span class="math-container">$(A/A_n)\cap(A_n/A)= \emptyset$</span> then <span class="math-container">$\mu(A\bigtriangleup A_n) = \mu(A/A_n)+\mu(A_n/A) \rightarrow 0$</span> <br /> now I am trying to contain <span class="math-container">$\mu(A_n)$</span> in an inequality where both sides converge to <span class="math-container">$\mu(A)$</span></p> <p>EDIT: thanks to Thorgott's point, both <span class="math-container">$\mu(A/A_n)$</span> and <span class="math-container">$\mu(A_n/A)$</span> converge to <span class="math-container">$0$</span>; since <span class="math-container">$(A/A_n)\cap A_n = \emptyset$</span> and <span class="math-container">$(A/A_n)\cup A_n = A\cup A_n$</span>, we get <span class="math-container">$\mu(A\cup A_n) = \mu(A/A_n) + \mu(A_n)$</span>, and likewise <span class="math-container">$\mu(A\cup A_n) = \mu(A_n/A) + \mu(A)$</span>, so <span class="math-container">$\mu(A_n)-\mu(A) = \mu(A_n/A)-\mu(A/A_n) \rightarrow 0$</span></p>
B. Goddard
362,009
<p>I think I'd go back to the definition here. If <span class="math-container">$y\mid (y+1)^2$</span> then there is an integer <span class="math-container">$k$</span> such that</p> <p><span class="math-container">$$yk = (y+1)^2 = y^2+2y+1$$</span></p> <p>so</p> <p><span class="math-container">$$1 = yk - y^2 -2y = y(k-y-2)$$</span></p> <p>Therefore <span class="math-container">$y$</span> is a divisor of <span class="math-container">$1$</span>, which is disallowed.</p>
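Since $(y+1)^2 = y^2 + 2y + 1 \equiv 1 \pmod y$, the remainder on division by $y$ is always $1$; a brute-force check of my own illustrates this:

```python
# (y+1)^2 = y^2 + 2y + 1 leaves remainder 1 on division by y, for y > 1
for y in range(2, 10_000):
    assert (y + 1)**2 % y == 1
print("no y in [2, 10000) divides (y+1)^2")
```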
4,256,719
<p>I have to teach a 1 hr class about the stereographic projection in the complex plane and I am looking for sources or some interesting facts about it. The best I have found is in Ahlfors' Complex Analysis.</p> <p>It would help me a lot to read your suggestions.</p>
David Quinn
187,299
<p><span class="math-container">$$y=2^{\sqrt{\log_2n}}\implies \ln y=\sqrt{\log_2n}\ln2$$</span> <span class="math-container">$$\implies(\ln y)^2=\log_2n(\ln2)^2=(\ln n)(\ln 2)$$</span> <span class="math-container">$$\implies2(\ln y)\frac1y\frac{dy}{dn}=\frac1n\ln2$$</span> <span class="math-container">$$\implies\frac{dy}{dn}=\frac{y}{2\ln y}\cdot\frac{\ln2}{n}$$</span> <span class="math-container">$$=\frac{2^{\sqrt{\log_2n}}}{2(\ln2)\sqrt{\log_2n}}\cdot\frac{\ln2}{n}$$</span> <span class="math-container">$$\implies \frac{dy}{dn}=\frac{2^{\sqrt{\log_2n}}}{2n\sqrt{\log_2n}}$$</span></p>
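A central-difference check of the final formula (my own verification, not part of the answer):

```python
import math

def y(n):
    return 2.0 ** math.sqrt(math.log2(n))

def dy_dn(n):
    # the closed form derived above: 2^sqrt(log2 n) / (2 n sqrt(log2 n))
    return y(n) / (2 * n * math.sqrt(math.log2(n)))

h = 1e-5
for n in (2.0, 10.0, 1000.0):
    numeric = (y(n + h) - y(n - h)) / (2 * h)   # central difference
    assert abs(numeric - dy_dn(n)) < 1e-6
print("derivative formula confirmed numerically")
```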
2,804,074
<p>Hello, I am self-teaching foundational math and thinking about the union of a set. Its definition assumes the set to have only elements that are also sets, else it would break. But I started thinking: is the empty set in fact an element of any mathematical object, like say the integer n? In that case taking the union of the set U {1,{2,3}} would yield {2,3}. Do I understand this correctly? Also, since the integers can be constructed as sets of empty sets etc., the empty set would be an element of any integer. But does this extend to any mathematical object?</p> <p>My question boils down to: does taking the union of a set imply that the set is composed of sets, or do we allow the empty set to be the result when asking for the elements of any mathematical object that is not an explicit set?</p>
Mythomorphic
152,277
<p>Let $x=y+t$, where $y\to \infty$ through the integers and $t\in[0,1)$.</p> <p>Then</p> <p>$$L=\lim_{y\to\infty}\frac{\{y+t\}^{2n}-1}{\{y+t\}^{2n}+1}=\frac{t^{2n}-1}{t^{2n}+1}$$</p> <p>for any $t$ since $n$ is not specified.</p> <p>But, as $n\to\infty$, $$L=\lim_{n\to\infty}\frac{t^{2n}-1}{t^{2n}+1}=\frac{0-1}{0+1}=-1$$</p>
338,638
<p>There is a wonderful series of articles by Flajolet et. al. about Mellin Transforms and the asymptotic analysis of generating functions. In particular, on page 45 of the article <a href="http://algo.inria.fr/flajolet/Publications/mellin-harm.pdf" rel="nofollow noreferrer">Mellin Transforms and Asymptotics: Harmonic Sums</a>, they state the following result:</p> <blockquote> <p>Proposition 6 (Growth of Special Dirichlet Series): Let <span class="math-container">$\lambda_{k}$</span> and <span class="math-container">$\mu_{k}$</span> admit asymptotic expansions in descending powers of <span class="math-container">$k$</span> as <span class="math-container">$$\lambda_{k}\sim\sum_{r=0}^{\infty}\frac{a_{r}}{k^{\alpha_{r}}}$$</span> <span class="math-container">$$\mu_{k}\sim k^{w}\left(1+\sum_{r=1}^{\infty}\frac{b_{r}}{k^{\beta_{r}}}\right)$$</span> Then the Dirichlet series <span class="math-container">$\sum_{k}\lambda_{k}\mu_{k}^{-s}$</span> can be continued to a meromorphic function <span class="math-container">$\Lambda\left(s\right)$</span> in the whole of the complex plane.</p> </blockquote> <p>Now, let <span class="math-container">$V$</span> be an arbitrary set of infinitely many positive integers, and let: <span class="math-container">$$\zeta_{V}\left(s\right)\overset{\textrm{def}}{=}\sum_{v\in V}\frac{1}{v^{s}}$$</span></p> <p>Enumerating the elements of <span class="math-container">$V$</span> in increasing order as <span class="math-container">$v_{1},v_{2},\ldots$</span>, recall that one way of defining the natural density of <span class="math-container">$V$</span> (denoted <span class="math-container">$d\left(V\right))$</span> is: <span class="math-container">$$d\left(V\right)=\lim_{n\rightarrow\infty}\frac{n}{v_{n}}$$</span> In the case where <span class="math-container">$d\left(V\right)$</span> exists and is positive, this gives the asymptotic <span class="math-container">$d\left(V\right)v_{n}\sim n$</span>. 
As such, for: <span class="math-container">$$\frac{\zeta_{V}\left(s\right)}{\left(d\left(V\right)\right)^{s}}=\sum_{n=1}^{\infty}\frac{1}{\left(d\left(V\right)v_{n}\right)^{s}}$$</span> I have that <span class="math-container">$$\lambda_{k}=1$$</span> and <span class="math-container">$$\mu_{k}=d\left(V\right)v_{k}\sim k$$</span> which is obtained by taking:<span class="math-container">$$\mu_{k}\sim k^{w}\left(1+\sum_{r=1}^{\infty}\frac{b_{r}}{k^{\beta_{r}}}\right)$$</span> setting <span class="math-container">$w=1$</span>, and letting all the <span class="math-container">$b_{r}$</span>s be <span class="math-container">$0$</span>. Hence, unless I am mistaken, Proposition 6 implies that <span class="math-container">$\zeta_{V}\left(s\right)$</span> extends to a meromorphic function on <span class="math-container">$\mathbb{C}$</span> whenever <span class="math-container">$V$</span> has positive natural density.</p> <p>However, consider the following. Let <span class="math-container">$\mathbb{P}$</span> denote the set of prime numbers. Then, <span class="math-container">$\zeta_{\mathbb{P}}\left(s\right)$</span> is the so-called Prime Zeta Function, which is known to have a natural boundary on the imaginary axis. On the other hand, since <span class="math-container">$d\left(\mathbb{P}\right)=0$</span>, it follows that <span class="math-container">$\mathbb{N}\backslash\mathbb{P}$</span> has a well-defined natural density of <span class="math-container">$1$</span>, and thus, by Proposition 6, that <span class="math-container">$\zeta_{\mathbb{N}\backslash\mathbb{P}}\left(s\right)$</span> is meromorphic on <span class="math-container">$\mathbb{C}$</span>. 
However, since I can write:<span class="math-container">$$\zeta_{\mathbb{P}}\left(s\right)=\zeta\left(s\right)-\zeta_{\mathbb{N}\backslash\mathbb{P}}\left(s\right)$$</span> it follows that <span class="math-container">$\zeta_{\mathbb{P}}\left(s\right)$</span> is the difference of two meromorphic functions, which forces <span class="math-container">$\zeta_{\mathbb{P}}\left(s\right)$</span> to be meromorphic on <span class="math-container">$\mathbb{C}$</span>, which is obviously not correct. </p> <p>So, where's the error, and how (if at all) can it be rectified? In particular, when, if ever, does the existence of <span class="math-container">$d\left(V\right)$</span> imply that <span class="math-container">$\zeta_{V}\left(s\right)$</span> is meromorphic on <span class="math-container">$\mathbb{C}$</span>?</p>
GH from MO
11,919
<p>Proposition 6 does not imply that if <span class="math-container">$V$</span> has positive natural density then <span class="math-container">$\zeta_V(s)$</span> extends to a meromorphic function. This is because the density assumption is much weaker than the assumptions in Proposition 6. Indeed, if the elements of <span class="math-container">$V$</span> are <span class="math-container">$v_1&lt;v_2&lt;\dotsb$</span>, then the density assumption says that <span class="math-container">$v_k$</span> is asymptotically a constant times <span class="math-container">$k$</span>, while Proposition 6 requires a full asymptotic expansion of <span class="math-container">$v_k$</span> in descending powers of <span class="math-container">$k$</span>. The latter means that, for any nonnegative integer <span class="math-container">$R$</span>, there are exponents <span class="math-container">$w_1&gt;w_2&gt;\dotsb&gt;w_R$</span> and coefficients <span class="math-container">$c_1,c_2,\dotsc,c_R\in\mathbb{R}$</span> such that <span class="math-container">$$v_k=c_1 k^{w_1}+c_2 k^{w_2}+\dotsb+c_{R-1}k^{w_{R-1}}+(c_R+o(1))k^{w_R}.$$</span> The density assumption implies that we have such a relation for <span class="math-container">$R=1$</span>, namely <span class="math-container">$w_1=1$</span> and <span class="math-container">$c_1=1/d(V)$</span> work, but it does not imply the relation for <span class="math-container">$R=2$</span>. And in fact, for the sequence of non-primes <span class="math-container">$\mathbb{N}\setminus\mathbb{P}$</span>, the above approximation only holds for <span class="math-container">$R=1$</span>.</p>
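To make the last point concrete (my own illustration, not part of the answer): if $v_k$ denotes the $k$-th non-prime, then exactly $k$ of the integers $1,\dots,v_k$ are non-prime, so $v_k - k = \pi(v_k)$, the number of primes up to $v_k$. This correction term grows like $v_k/\log v_k$, which no expansion in descending powers of $k$ can capture beyond the leading term:

```python
def prime_sieve(n):
    """Boolean list: sieve[m] is True iff m is prime (simple Eratosthenes)."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return sieve

N = 100_000
is_prime = prime_sieve(N)
nonprimes = [m for m in range(1, N + 1) if not is_prime[m]]   # v_1, v_2, ...

# Running prime-counting function pi(m)
count, pi = {}, 0
for m in range(1, N + 1):
    pi += is_prime[m]
    count[m] = pi

# v_k - k = pi(v_k) exactly: the correction term is the prime count,
# which grows like v_k / log(v_k) -- not like any fixed power of k.
for k in (10, 1_000, 50_000):
    v_k = nonprimes[k - 1]
    assert v_k - k == count[v_k]
print("v_k - k equals pi(v_k) for all tested k")
```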
3,247,841
<p>I'm working on an algorithm to colour a map drawn in an editor using 4 colours, as a visual demonstration of the four colour theorem. However, my (imperfect) algorithm was able to colour all maps except this one, which after giving it a go myself I struggled to do. I was also unable to collapse it into an 'untangled' graph, so it's possible there's some illegality about it I've not fully understood (or I'm just bad at graph theory). I'd appreciate any help with solving this, and if possible an explanation of/link to a good algorithm to go about solving problems of this style.</p> <p>Here's the map:</p> <p><a href="https://i.stack.imgur.com/f5yii.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/f5yii.jpg" alt="The map"></a></p>
Mark Bennet
2,906
<p>The map you have given can be simplified considerably by deleting some of the regions. If you have a region which is adjacent to <span class="math-container">$1, 2$</span> or <span class="math-container">$3$</span> other regions, you can simply delete it, or amalgamate it into some adjoining region, because when you colour the rest and put the region back, there will be a colour you can use.</p> <p>The map you have looks complicated, but I can spot five regions you can simply delete in this example. And once you have deleted those you can possibly iterate the process.</p> <p>That gives a rather simpler map to colour.</p>
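The deletion idea above is the core of a standard greedy algorithm: repeatedly remove a region with at most $3$ neighbours, colour what remains, then re-insert each removed region with a colour its neighbours don't use. A sketch of my own in Python; it simply gives up (returns None) if no low-degree region exists at some stage, which a full four-colouring algorithm would instead handle with Kempe chains:

```python
def four_colour(adj):
    """Greedy 4-colouring by the reduction described above.

    adj maps each region to the set of its neighbours.  Returns a dict
    region -> colour in {0,1,2,3}, or None if no region of degree <= 3
    exists at some stage (a full algorithm would then need Kempe chains).
    """
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    order = []
    while adj:
        v = next((v for v in adj if len(adj[v]) <= 3), None)
        if v is None:
            return None
        order.append((v, adj.pop(v)))        # remember its neighbours
        for nbrs in adj.values():
            nbrs.discard(v)
    colour = {}
    for v, nbrs in reversed(order):          # re-insert, newest first
        used = {colour[u] for u in nbrs}
        colour[v] = min(set(range(4)) - used)
    return colour

# A wheel-shaped test map: hub 0 touching rim regions 1-2-3-4 in a cycle
adj = {0: {1, 2, 3, 4}, 1: {0, 2, 4}, 2: {0, 1, 3}, 3: {0, 2, 4}, 4: {0, 1, 3}}
c = four_colour(adj)
assert all(c[v] != c[u] for v in adj for u in adj[v])
print(c)
```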
4,316,800
<p>I have the following two-dimensional SDE: <span class="math-container">$dX_1=(-\mu X_1 - X_2)dt +\sigma dW_1$</span> and <span class="math-container">$dX_2=(-\mu X_2 + X_1)dt +\sigma dW_2$</span></p> <p>I then have to show that <span class="math-container">$E(X_1^2 + X_2^2) = \frac{\sigma^2}{\mu}$</span>.</p> <p>I know I have to use the multidimensional Ito formula, however I am not too sure how to set everything up. Any help is greatly appreciated.</p> <p>Edit: The initial conditions are <span class="math-container">$X_1 = 1$</span> and <span class="math-container">$X_2=0$</span></p>
Kurt G.
949,989
<p>The one dimensional SDE <span class="math-container">$$ dX_t=AX_t\,dt+\sigma\, dW_t $$</span> has the solution <span class="math-container">$$\tag{1} X_t=e^{At}\textstyle(X_0+\sigma\int_0^te^{-As}\,dW_s)\,. $$</span> When you replace the constant <span class="math-container">$A$</span> by the matrix <span class="math-container">$$ A=\left(\begin{matrix}-\mu&amp;-1\\1&amp;-\mu\end{matrix}\right) $$</span> and <span class="math-container">$X$</span> resp. <span class="math-container">$W$</span> by their two dimensional sisters then you will see that (1) is the solution to your system.</p>
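To spell out where $\sigma^2/\mu$ comes from (my own addition): applying the two-dimensional Itô formula to $S_t = X_1^2+X_2^2$, the cross terms $-2X_1X_2\,dt$ and $+2X_1X_2\,dt$ cancel, leaving $\frac{d}{dt}E[S_t] = -2\mu E[S_t] + 2\sigma^2$, whose solution tends to $\sigma^2/\mu$ as $t\to\infty$. A quick numerical check of this moment ODE:

```python
import math

mu, sigma, S0 = 0.5, 1.3, 1.0    # X_1(0)=1, X_2(0)=0 gives S0 = 1

def mean_S(t):
    """Closed form of dm/dt = -2*mu*m + 2*sigma**2 with m(0) = S0."""
    target = sigma**2 / mu
    return target + (S0 - target) * math.exp(-2 * mu * t)

# Forward-Euler integration of the moment ODE reproduces the closed form
steps, T = 100_000, 10.0
dt, m = T / steps, S0
for _ in range(steps):
    m += (-2 * mu * m + 2 * sigma**2) * dt
assert abs(m - mean_S(T)) < 1e-3
# ... and the long-time limit is sigma^2 / mu
assert abs(mean_S(50.0) - sigma**2 / mu) < 1e-12
print(sigma**2 / mu)   # ~3.38
```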
1,728,524
<p>Classical texts on control theory show that the linear system $\dot x=A \,x$ is stable if the real parts of the eigenvalues are negative.</p> <p>Does the same criterion apply to a system of the following form: $$ \left[ \begin{array}{cc|c} \dot x\\ 0 \end{array} \right] = B \left[ \begin{array}{cc|c} x \\ y \end{array} \right]$$ </p> <p>where $\dot x$, $x$, $y$ are vectors and $B$ is a matrix of constants. In this system, the equations for $\dot x$ include terms from $y$. For this system the number of unknowns equals the number of equations. While it would be possible to perform additional algebra to reduce the system to the classical form, $\dot x=A\,x$, is this necessary? I would prefer to write the equations in the form above, because this makes the physical interpretation of the equations more clear. </p> <p>I am calling the $y$ values "auxiliary" variables, because they are dynamic in the sense that they change with time (as a consequence of the linear system - the values $y$ do not have an explicit dependence on time), but an expression for their derivative does not fall out of the analysis. (In this system, the y equations result from a simple energy balance where no energy "hold-up" is assumed.) If there is a more appropriate description feel free to revise the question. </p> <p>Because $\dot y$ does not appear on the left hand side, it is not clear to me if the classic stability test still applies. </p>
Paul Irofti
113,248
<p>You might also be interested in building $C$ as the <a href="https://en.wikipedia.org/wiki/Orthogonal_Procrustes_problem" rel="nofollow">Procrustes approximation</a> of $A$ and $B$.</p> <p>$ \begin{equation} P = A^TB\\ U\Sigma V^T= \text{SVD}(P) \\ C = UV^T \\ \end{equation} $</p>
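In code this becomes (my own NumPy sketch; if $B = AQ$ for an orthogonal $Q$ and $A$ has full rank, the Procrustes solution recovers $Q$ exactly):

```python
import numpy as np

def procrustes(A, B):
    """Orthogonal C minimising ||A @ C - B|| in the Frobenius norm."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # a known orthogonal matrix
B = A @ Q

C = procrustes(A, B)
assert np.allclose(C.T @ C, np.eye(5))   # C is orthogonal
assert np.allclose(C, Q)                 # and recovers the rotation
print("recovered the orthogonal factor")
```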
2,673,246
<p>How does one prove that there exists a star-shaped set $B \subseteq \mathbb{R}^n$ that is not convex? From a picture it is pretty clear that a star figure is star-shaped but not convex, but how does one construct such a set in $\mathbb{R}^n$?</p>
lab bhattacharjee
33,337
<p>Clearly, $\sin x\ne0$ </p> <p>and what if $\cos2x=0?$</p> <p>For $\sin x\cos2x\ne0,$ $$8\cos x\cos4x=\dfrac{\sin8x}{\sin x\cos2x}$$</p> <p>$$8\cos x\cos4x\cos5x=1\implies2\sin x\cos2x=2\cos5x\sin8x$$</p> <p>$$\sin3x-\sin x=\sin13x+\sin3x$$</p> <p>$$0=\sin13x+\sin x=2\sin7x\cos6x$$</p> <p>Now if $\sin7x=0\implies7x=m\pi$ where $m$ is any integer</p> <p>$\implies0\le\dfrac{m\pi}7\le\pi\iff0\le m\le7$</p> <p>But $\sin x\ne0,0&lt;m&lt;7$ which accounts for $6$ roots</p> <p>or if $\cos6x=0$ But $\cos2x\ne0$</p> <p>$$\implies\dfrac{\cos6x}{\cos2x}=4\cos^22x-3=0$$</p> <p>$\implies\cos^22x=\dfrac34\implies2x=n\pi\pm\dfrac\pi6=\dfrac\pi6(6n\pm1)$ where $n$ is any integer</p> <p>$\implies0\le\dfrac\pi6(6n\pm1)\le2\pi\iff0\le6n\pm1\le12$</p> <p>$0\le6n+1\le12\implies n=0,1$</p> <p>$0\le6n-1\le12\implies n=1,2$</p>
2,673,246
<p>How does one prove that there exists a star-shaped set $B \subseteq \mathbb{R}^n$ that is not convex? From a picture it is pretty clear that a star figure is star-shaped but not convex, but how does one construct such a set in $\mathbb{R}^n$?</p>
Michael Rozenberg
190,319
<p>Let $\cos{x}=t$.</p> <p>Thus, we need to solve $$8t(8t^4-8t^2+1)(16t^5-20t^3+5t)=1$$ or $$(8t^3+4t^2-4t-1)(8t^3-4t^2-4t+1)(16t^4-16t^2+1)=0,$$ which gives $10$ roots on $[-1,1]$.</p> <p>We can just solve this equation.</p> <p>An interesting root: $t=\cos\frac{2\pi}{7}$ or $t=\cos15^{\circ}.$</p> <p>Indeed, $$16t^4-16t^2+1=0$$ gives $$1-16\cos^2x(1-\cos^2x)=0$$ or $$1-4\sin^22x=0$$ or $$1-2(1-\cos4x)=0$$ or $$\cos4x=\frac{1}{2}$$ or $$x=\pm15^{\circ}+90^{\circ}k,$$ where $k=\mathbb Z$ and we get here four roots: $$\{15^{\circ},105^{\circ},75^{\circ},165^{\circ}\}.$$ Now, $$8t^3+4t^2-4t-1=0$$ gives $$8\cos^3x+4\cos^2x-4\cos{x}-1=0$$ or $$8\cos^3x-6\cos{x}+4\cos^2x+2\cos{x}-1=0$$ or $$2\cos3x+2(1+\cos2x)+2\cos{x}-1=0$$ or $$2(\cos{x}+\cos2x+\cos3x)=-1$$ and since $\sin\frac{x}{2}\neq0$ we obtain: $$2\sin\frac{x}{2}(\cos{x}+\cos2x+\cos3x)=-\sin\frac{x}{2}$$ or $$\sin\frac{3x}{2}-\sin\frac{x}{2}+\sin\frac{5x}{2}-\sin\frac{3x}{2}+\sin\frac{7x}{2}-\sin\frac{5x}{2}=-\sin\frac{x}{2}$$ or $$\sin\frac{7x}{2}=0$$ or $$x=\frac{360^{\circ}k}{7},$$ where $k\in\mathbb Z$, which gives $$\left\{\frac{360^{\circ}}{7},\frac{720^{\circ}}{7},\frac{1080^{\circ}}{7}\right\}.$$ The equation $8t^3-4t^2-4t+1=0$ also gives three roots.</p> <p>Another way.</p> <p>Since $\sin{x}\neq0,$ by your work we obtain $$2\sin{x}(\cos10x+\cos8x+\cos2x)=-\sin{x}$$ or $$\sin11x-\sin9x+\sin9x-\sin7x+\sin3x-\sin{x}=-\sin{x}$$ or $$\sin11x+\sin3x-\sin7x=0$$ or $$2\sin7x\cos4x-\sin7x=0,$$ which gives $$\sin7x=0$$ or $$\cos4x=\frac{1}{2}$$ and we get the same $10$ roots.</p>
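Both the factorisation and the count of $10$ roots are easy to verify numerically (my own check, using NumPy's polynomial tools with coefficients in increasing degree):

```python
import numpy as np
from numpy.polynomial import polynomial as P

# cos 4x = 8t^4 - 8t^2 + 1 and cos 5x = 16t^5 - 20t^3 + 5t with t = cos x
lhs = P.polymul(P.polymul([0, 8], [1, 0, -8, 0, 8]), [0, 5, 0, -20, 0, 16])
lhs = P.polysub(lhs, [1])            # 8t(8t^4-8t^2+1)(16t^5-20t^3+5t) - 1
rhs = P.polymul(P.polymul([-1, -4, 4, 8], [1, -4, -4, 8]), [1, 0, -16, 0, 16])
assert np.allclose(lhs, rhs)         # the stated factorisation is exact

roots = np.roots(rhs[::-1])          # np.roots wants decreasing degree
assert len(roots) == 10
assert np.all(np.abs(roots.imag) < 1e-7)   # all ten roots are real
assert np.all(np.abs(roots.real) <= 1)     # and lie in [-1, 1], i.e. t = cos x
print("factorisation verified; 10 real roots in [-1, 1]")
```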
3,252,113
<p>Let's say Johnny has nothing better to do but toss a coin all day long and record the results. By the end of the day, Johnny flipped the coin 1 million times and his results show 300,000 heads and 700,000 tails. Considering the coin and Johnny are both fair, the results show tails has a higher probability of 3:7 (heads:tails). </p> <p>Then, instead of telling Johnny to find a better hobby, do you tell him to toss the coin a million more times to prove the Law of Large Numbers works? And, considering you know Johnny's results, do you bet on heads for the next million coin tosses knowing that the results should converge close to 1:1? In which case, are you saying the coin doesn't have a 50/50 chance any longer considering the Law of Large Numbers is now in your favor with those results? </p> <p>I think most people would agree that a million coin tosses is a large enough result already. However, as a mathematician, do you say in such a case the amount of coin tosses isn't large enough for the Law of Large Numbers to take effect? And then if it's still not 1:1 after 2 million tosses but still at 3:7 (heads:tails), do you tell Johnny to toss it a million more times? When is it large enough or is it only enough when the results are near 1:1? </p>
auscrypt
675,509
<p>"Large" is <strong>not</strong> an exact number. All the Law of Large Numbers states is that <em>eventually</em> you'll get 'closer and closer' (in a precise mathematical sense) to the expected odds, not that there's a number you can point to and go "aha! Toss it <span class="math-container">$2.7\times10^{18}$</span> times and that's enough". </p> <p>Mathematically, the best you can do from here is calculate the probability that the coin is 30-70, based on <a href="https://en.wikipedia.org/wiki/Prior_probability" rel="nofollow noreferrer">prior probability</a> and simple binomial calculations. The Law of Large Numbers has nothing to do with these calculations.</p>
3,252,113
<p>Let's say Johnny has nothing better to do but toss a coin all day long and record the results. By the end of the day, Johnny flipped the coin 1 million times and his results show 300,000 heads and 700,000 tails. Considering the coin and Johnny are both fair, the results show tails has a higher probability of 3:7 (heads:tails). </p> <p>Then, instead of telling Johnny to find a better hobby, do you tell him to toss the coin a million more times to prove the Law of Large Numbers works? And, considering you know Johnny's results, do you bet on heads for the next million coin tosses knowing that the results should converge close to 1:1? In which case, are you saying the coin doesn't have a 50/50 chance any longer considering the Law of Large Numbers is now in your favor with those results? </p> <p>I think most people would agree that a million coin tosses is a large enough result already. However, as a mathematician, do you say in such a case the amount of coin tosses isn't large enough for the Law of Large Numbers to take effect? And then if it's still not 1:1 after 2 million tosses but still at 3:7 (heads:tails), do you tell Johnny to toss it a million more times? When is it large enough or is it only enough when the results are near 1:1? </p>
Clarinetist
81,560
<p>The Law of Large Numbers is <strong>not</strong> an <em>empirical</em> result. You <strong>cannot</strong> demonstrate this with your own finite-trial experiment. It is a <strong>mathematical</strong> statement which states that - if you're familiar with calculus and a bit of probability:</p> <p><span class="math-container">$$\lim_{n \to \infty}\dfrac{1}{n}\sum_{i=1}^{n}X_i = \mathbb{E}[X_1]$$</span></p> <p>where <span class="math-container">$X_1, \dots, X_n$</span> are independent and identically distributed random variables.</p> <hr> <p>In your situation, how the Law of Large Numbers applies is this: if I flip a coin <strong>infinitely many times</strong>, I get the theoretical probability of getting a heads or tails. Since, of course, you cannot flip a coin infinitely many times, it is thus impossible to demonstrate the Law of Large Numbers through a coin flip experiment.</p> <p>So, you might ask, what do you do then? How can you test to see if the theoretical proportion of heads/tails is a certain value? Well, you use statistics. A situation like this is what a <a href="https://www.tutorialspoint.com/statistics/one_proportion_z_test.htm" rel="nofollow noreferrer">one-sample proportion <span class="math-container">$Z$</span>-test</a> is used for - however, one must always be cautious about using canned procedures to test hypotheses. You can read all about the faults of <span class="math-container">$p$</span>-values and null-hypothesis significance tests (NHSTs) online through a quick search.</p> <p>A famous quote which is quite relevant in this situation (of which I do not know the originator, apologies): <em>Statistics means never having to say you're certain.</em></p>
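For Johnny's actual numbers the one-sample proportion $Z$-statistic mentioned above is easy to compute (my calculation): with $\hat p = 0.3$, $n = 10^6$ and null value $p_0 = 0.5$, the observed proportion sits about $400$ standard errors below fair, so "the coin is fair" is untenable.

```python
import math

n, heads = 1_000_000, 300_000
p_hat, p0 = heads / n, 0.5
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
print(z)   # about -400: a fair coin essentially never strays this far
assert abs(z + 400.0) < 1e-9
```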
19,294
<p>Instead of taking a one-to-one correspondence to mean each set has the same number of elements, why not use the concept of coverings from topology? The irrational numbers cover the whole numbers, but not vice versa?</p> <p>A hierarchy of coverings instead of infinities: wouldn't that make those infinities more manageable in those terms? (Yes, I know topology can be expressed in set theory.)</p>
Chris Schommer-Pries
184
<p>I have a couple of things to say.</p> <p>First, I believe your definition of gerbe is slightly incorrect. When you say that your stack is locally isomorphic to <span class="math-container">$U \times B\mathbb{G}_m$</span>, this isomorphism needs to preserve some additional structure. It might be okay for <span class="math-container">$\mathbb{G}_m$</span>-gerbes, by accident, but for general non-abelian gerbes you will run into trouble. (It might still be okay for <span class="math-container">$\mu$</span>-gerbes, where <span class="math-container">$\mu$</span> is a sheaf of abelian groups over X).</p> <p>There are several ways to add this extra structure, but I think the most common are not necessarily the most enlightening. The fact of the matter is that <span class="math-container">$B\mathbb{G}_m$</span> is a group object in stacks and it "acts" on the gerbe over <span class="math-container">$X$</span>. The local isomorphism to <span class="math-container">$U \times B\mathbb{G}_m$</span> needs to respect this action. Morally, you should think of a gerbe as a principal bundle with structure "group" <span class="math-container">$B\mathbb{G}_m$</span>. </p> <p>The reason that this isn't the most common way to explain what a gerbe is, is that making this precise requires a certain comfortability with 2-categories and coherence equations that most people don't seem to have. Times are changing though. Just as for ordinary principal bundles, you can (in nice settings, say noetherian separated) classify them in terms of Cech data. When you do this you see that the only important part is the coherence data, which gives a 2-cocycle. For non-abelian gerbes you get non-trivial stuff which mixes together parts which look like data for a 1-cocycle and a 2-cocycle. I agree with Kevin that, at this point, if you really want to understand this stuff you should fill in the rest of the details on your own. 
It is a good exercise!</p> <p>Alternatively, if higher categories make you uncomfortable, you can be clever. You can still make a definition along the lines of the one you outline precise without venturing into the world of higher categories and "coherent group objects in stacks". I recommend Anton's <a href="https://stacky.net/files/written/Stacks/Stacks.pdf" rel="nofollow noreferrer">course notes on Stacks</a> as taught by Martin Olsson. Section 31 has a definition of <span class="math-container">$\mu$</span>-gerbes which is equivalent to the one I sketched above but avoids the higher categorical aspects. There is also a proof that such gerbes are classified by <span class="math-container">$H^2(X; \mu)$</span>. Enjoy!</p> <hr> <p>Just to reiterate. In a gerbe you are not patching together classifying <em>spaces</em>, you are patching together classifying <em>stacks</em>. Despite the common notation, there is a difference. A stack is fundamentally an object in a 2-category. This means that you need to deal with 2-morphisms and that they can be just as important as the 1-morphisms. For <span class="math-container">$B\mathbb{G}_m$</span>, the 1-morphisms (which preserve the multiplication action of the stack <span class="math-container">$B \mathbb{G}_m$</span> !!) are all equivalent, so there is no Cech 1-cocycle data at all. All you get are the coherence data, which form a 2-cocycle. </p> <p>This is one reason that I prefer the notation <span class="math-container">$[pt/\mathbb{G}_m]$</span> to denote the stack <span class="math-container">$B\mathbb{G}_m$</span>. This is particularly important in the topological setting where these are truly different objects. </p>
2,527,754
<p>Denote the Borel sets in $\mathbb R^d$ as $\mathcal B^d$. Is there a proof for the rotation invariance of the Lebesgue measure that doesn't already use that one has $$ \lambda(A^{-1}(B)) = \vert \operatorname{det} A \vert ^{-1} \lambda (B) \qquad \text{for all } A \in \operatorname{GL}(\mathbb R^d), \ B \in \mathcal B^d?$$ For example one can show easily that $\lambda$ is invariant under translation just using that intervals are invariant under translation and a $\cap$-closed generator of the Borel sets. This is the first step in the proof of the above statement.</p>
Eric Wofsey
86,856
<p>One quick proof of this is using the fact that Lebesgue measure on $\mathbb{R}^n$ is $n$-dimensional Hausdorff measure for the Euclidean metric (up to a scale factor). Since the definition of Hausdorff measure uses only the metric, any isometry automatically preserves it.</p> <p>Another way to prove it is using the fact that Lebesgue measure is a Haar measure and Haar measures are unique up to scaling. Since the composition of Lebesgue measure with a rotation is then also a Haar measure, it differs from Lebesgue measure by a constant factor. You can then find the constant factor is $1$ because a rotation fixes the unit ball, which has positive measure.</p>
3,123,681
<p>Let us consider the following problem taken from a book:</p> <p><em>An appliance store purchases electric ranges from two companies. From company A, 500 ranges are purchased and 2% are defective. From company B, 850 ranges are purchased and 2% are defective. Given that a range is defective, find the probability that it came from company B</em></p> <p>So here we are assuming that the probability of selecting each company is equal, right? That means that <span class="math-container">$P(A)=P(B)=\frac{1}{2}$</span>; also <span class="math-container">$2$</span>% defective means that the probability of selecting a defective range is <span class="math-container">$0.02$</span>. For instance, in Company A the number of defective ranges is <span class="math-container">$500*0.02=10$</span>, so their probability is <span class="math-container">$\frac{10}{500}=0.02=2$</span>%.</p> <p>We know the probability of selecting a defective range is equal to</p> <p><span class="math-container">$ \frac{1}{2} *2$</span>% + <span class="math-container">$\frac{1}{2} *2$</span>%, and the probability that a defective range came from company B will be <span class="math-container">$1/2 * 2$</span>% divided by the probability of selecting a defective range; but the book says the answer is <span class="math-container">$0.65$</span>. How?</p>
Peter Foreman
631,494
<p>For small <span class="math-container">$x$</span> we have <span class="math-container">$$\arctan{x} \approx x$$</span> which can be proven by examining the Taylor series of <span class="math-container">$\arctan{x}$</span>. So as <span class="math-container">$n\to\infty$</span>, the argument of the arctangent tends to <span class="math-container">$0$</span> and the limit is then equal to <span class="math-container">$$\lim_{n\to\infty} \sum_{k=1}^{n} \frac{1}{n+k}=\lim_{n\to\infty} \sum_{i=n+1}^{2n} \frac{1}{i}=\lim_{n\to\infty} (H_{2n}-H_{n})$$</span> where <span class="math-container">$H_n$</span> denotes the <span class="math-container">$n$</span>th harmonic number given by <span class="math-container">$\sum_{k=1}^{n} \frac{1}{k}$</span>. There is an asymptotic approximation for the harmonic numbers given by <span class="math-container">$$H_n \approx \gamma + \ln{(n)}$$</span> for large <span class="math-container">$n$</span> where <span class="math-container">$\gamma$</span> denotes the Euler-Mascheroni constant. So our limit becomes <span class="math-container">$$\lim_{n\to\infty} (H_{2n}-H_{n})=\lim_{n\to\infty} (\gamma + \ln{(2n)}-(\gamma + \ln{(n)}))=\ln{(2)} $$</span></p>
3,123,681
<p>Let us consider the following problem taken from a book:</p> <p><em>An appliance store purchases electric ranges from two companies. From company A, 500 ranges are purchased and 2% are defective. From company B, 850 ranges are purchased and 2% are defective. Given that a range is defective, find the probability that it came from company B</em></p> <p>So here we are assuming that the probability of selecting each company is equal, right? That means that <span class="math-container">$P(A)=P(B)=\frac{1}{2}$</span>; also <span class="math-container">$2$</span>% defective means that the probability of selecting a defective range is <span class="math-container">$0.02$</span>. For instance, in Company A the number of defective ranges is <span class="math-container">$500*0.02=10$</span>, so their probability is <span class="math-container">$\frac{10}{500}=0.02=2$</span>%.</p> <p>We know the probability of selecting a defective range is equal to</p> <p><span class="math-container">$ \frac{1}{2} *2$</span>% + <span class="math-container">$\frac{1}{2} *2$</span>%, and the probability that a defective range came from company B will be <span class="math-container">$1/2 * 2$</span>% divided by the probability of selecting a defective range; but the book says the answer is <span class="math-container">$0.65$</span>. How?</p>
Claude Leibovici
82,404
<p>As DXT did, using the upper and lower bounds of <span class="math-container">$\tan^{-1}(x)$</span> <span class="math-container">$$\sum^{n}_{k=1}\bigg[\frac{1}{n+k}-\frac{1}{3(n+k)^3}\bigg]&lt;S_n=\sum^{n}_{k=1}\tan^{-1}\bigg(\frac{1}{n+k}\bigg)&lt;\sum^{n}_{k=1}\frac{1}{n+k}$$</span> <span class="math-container">$$\sum^{n}_{k=1}\frac{1}{n+k}=H_{2 n}-H_n$$</span> where the <span class="math-container">$H_n$</span> are harmonic numbers. <span class="math-container">$$\sum^{n}_{k=1}\frac{1}{(n+k)^3}=\frac{1}{2} (\psi ^{(2)}(2 n+1)-\psi ^{(2)}(n+1))$$</span> where <span class="math-container">$\psi ^{(2)}$</span> is a polygamma function.</p> <p>Using the asymptotics, we then have <span class="math-container">$$\log (2)-\frac{1}{4 n}-\frac{1}{16 n^2}+O\left(\frac{1}{n^3}\right) \lt S_n &lt; \log (2)-\frac{1}{4 n}+\frac{1}{16 n^2}+O\left(\frac{1}{n^3}\right)$$</span></p> <p>Using it for <span class="math-container">$n=10$</span>, the left bound is <span class="math-container">$\log (2)-\frac{41}{1600}\approx 0.667522$</span>, the right bound is <span class="math-container">$\log (2)-\frac{39}{1600}\approx 0.668772$</span> while the exact value is <span class="math-container">$\approx 0.667663$</span>.</p>
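These bounds are easy to confirm numerically (a short Python sketch; the helper names are mine, not from the answer):

```python
import math

def arctan_sum(n):
    # S_n = sum_{k=1}^{n} arctan(1/(n+k))
    return sum(math.atan(1.0 / (n + k)) for k in range(1, n + 1))

def lower(n):
    # sum of 1/(n+k) - 1/(3(n+k)^3)
    return sum(1.0 / (n + k) - 1.0 / (3 * (n + k) ** 3) for k in range(1, n + 1))

def upper(n):
    # the plain harmonic tail H_{2n} - H_n
    return sum(1.0 / (n + k) for k in range(1, n + 1))

n = 10
print(lower(n), arctan_sum(n), upper(n))
```

The sandwich lower(n) &lt; S_n &lt; upper(n) holds for every n tried, and at n = 10 all three values sit near the figures quoted above.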
707,667
<p>From the wiki of <a href="http://en.wikipedia.org/wiki/Precision_and_recall" rel="nofollow">Precision and recall</a>:</p> <blockquote> <p>recall (also known as sensitivity) is the fraction of relevant instances that are retrieved.</p> </blockquote> <p>I can understand the literal meaning of "sensitivity", but quite confused with the term "recall".</p> <p>Since I'm not a native English speaker, I can merely understand this word as "remember sth." or "a call on sb. to return", which has nothing to do with the meaning in such a context. Could someone explain why they use this terminology? Is there any origin of it?</p>
user127096
127,096
<p>The solution is correct. Just to beef up this post, I'll sketch a slightly different proof: the complement of $l_0$ is open. </p> <p>If $x\notin l_0$, let $r=\frac12\limsup_{k\to\infty} |x(k)|$. If $\|x-y\|\le r$, then $$\limsup|y(k)| \ge \limsup_{k\to\infty} |x(k)|-r =r$$ hence $y\notin l_0$.</p> <p>By the way, this is the first time I see notation $l_0$ used for this subspace; all sources I know use $c_0$. I think $l_0$ is <a href="http://en.wikipedia.org/wiki/Lp_space#When_p_.3D_0" rel="nofollow">prone to confusion</a>.</p>
147,551
<p>My lecturer says as follows: $C_0 = \{(a,b) : -\infty \le a\le b&lt; \infty\}$</p> <p>$$C_\mathrm{open} = \{ A \subseteq \mathbb{R} : A\text{ open} \}$$ </p> <p>He goes on to show that $\sigma(C_0) = \sigma(C_{\mathrm{open}})$: </p> <p>Clearly, $\sigma(C_0)$ is in $\sigma(C_{\mathrm{open}})$</p> <p>So now I need to show the other way round, that $\sigma(C_{\mathrm{open}})$ is in $\sigma (C_0)$</p> <p>I do this by showing that $C_{\mathrm{open}}$ is in $\sigma (C_0)$</p> <p>He says: take $A$ as a subset of $\mathbb{R}$ which is open, then $$A= \bigcup_{x \in X} (x-\varepsilon_x, x + \varepsilon_x)$$ Then he says $A \cap\mathbb{Q}$ is a subset of $$A= \bigcup_{x \in X} (x-\varepsilon_x, x + \varepsilon_x)$$ and $A\cap\mathbb{Q} = \{y_1,\ldots\}$ such that there exists $x_n$ s.t. $y_n$ is in $(x_n-\varepsilon_{x_n}, x_n+\varepsilon_{x_n})$ for all $n$; of course $$\bigcup_{n=1}^{\infty} (x_n-\varepsilon_{x_n}, x_n+\varepsilon_{x_n})$$ is a subset of $A$ </p> <p>Then he says let $$\varepsilon_{n-} = \sup\{\varepsilon&gt;0\mid (x_n-\varepsilon, x_n] \subseteq A\}$$ and let $$\varepsilon_{n+} = \sup\{\varepsilon&gt;0\mid [x_n, x_n+\varepsilon)\subseteq A\}$$</p> <p>Now we need to show that $A$ is a subset of $$\bigcup_{n=1}^{\infty}(x_n-\varepsilon_{n-},x_n+\varepsilon_{n+})$$</p> <p>Take $x$ in $A$; then as before $x$ is in $(x-\varepsilon_x, x + \varepsilon_x)$. $\mathbb{Q}$ is dense, so there exists $n$ s.t. $y_n$ is in $(x-\varepsilon_x, x + \varepsilon_x)$, which is a subset of $(x_n-\varepsilon_{n-},x_n+\varepsilon_{n+})$ by definition of $\varepsilon_{n+}$ and $\varepsilon_{n-}$.</p> <p>This then implies that $A$ is a subset of $$\bigcup_{n=1}^{\infty}(x_n-\varepsilon_{n-} , x_n+\varepsilon_{n+})$$</p> <p>So our arbitrary $A$ is in $\sigma(C_0)$ </p> <p>Can anybody explain this to me? I don't understand where the steps come from and lead to. All I know is that you need to use the rationals because they're countable, unlike the reals. </p> <p>Thanks</p>
Arturo Magidin
742
<p>The problem asks to prove that a set with a binary operation and an identity in which the following two conditions hold is in fact a group:</p> <blockquote> <p>(i) every row and every column of the multiplication table contains every element of the set; and</p> <p>(ii) for every pair of elements $x\neq 1$ and $y\neq 1$, if $R$ is a rectangle in the body of the multiplication table that has $1$ in one vertex, $x$ in the corner in the same row as $1$, and $y$ in the corner in the same column as $1$, then the fourth corner of the rectangle depends only on the pair $(x,y)$ and not on the position of $1$. </p> </blockquote> <p>From the diagram at the bottom of page 4, you get that if you have the corners $$\begin{array}{cc} 1 &amp; c\\ ab&amp; \Box \end{array}$$ then $\Box$ must be $a(bc)$.</p> <p>But you get the same three corners if instead of having $$\begin{array}{c|cc} &amp;b&amp;bc\\ \hline b^{-1}&amp;1 &amp; c\\ a&amp;ab &amp; a(bc) \end{array}$$ you take $$\begin{array}{c|cc} &amp;1 &amp; c\\ \hline 1&amp;1 &amp; c\\ ab&amp; ab&amp; \Box \end{array}$$ but here the $\Box$ must be $(ab)c$. The assumption (ii) of the problem is that the bottom right corner depends only on the top right and bottom left corners, not on the rows and columns, so we conclude that $(ab)c=a(bc)$.</p> <p>You used the fact that $x^{-1}(xy) = y$ to construct the first of these two tables, to get the $c$ on the top right corner. </p> <p><em>Added.</em> I assumed, from the way you phrased your question, that you already had and agreed that $x^{-1}(xy) = y$ for all $x,y\in G$. This is hinted at with the first diagram at the bottom of page 4: $$\begin{array}{c|cc} &amp;1&amp;ab\\ \hline 1&amp;1 &amp; ab\\ a^{-1}&amp;a^{-1}&amp; a^{-1}(ab) \end{array}$$ which you should compare with $$\begin{array}{c|cc} &amp; a^{-1} &amp; b\\ \hline a &amp; 1 &amp; ab\\ 1 &amp; a^{-1} &amp; b \end{array}$$ i.e., the same argument as used later to prove associativity.</p>
3,106,784
<p>I've recently posted another question regarding natural deduction proofs and I've definitely made some progress, but I'm now stuck with a proof which seems like it could be flawed.</p> <p><img src="https://i.stack.imgur.com/5sMPp.png" alt="The proof"></p> <p>Now as you can see, it looks like I've got it all figured out; however, an error is returned for incorrect use of negation introduction. Now there seems to be a contradiction in the premises on lines 4 and 5: as per lines 9 and 10, R is true and P is false. I went with P being false (line 10), which leads to a contradiction, seemingly making the proof work out. However, I could just as well have gone with R being true (line 9), which, according to line 5, would not prove my contradiction as I must prove Q.</p> <p>Am I missing something obvious here or do you think the proof is broken?</p> <p>Thank you!</p>
Graham Kemp
135,106
<p>Tip: the rule of <span class="math-container">$\lnot\rm I$</span> <em>introduces</em> a negation. </p> <p>So if a contradiction is derived from assuming a negation, we infer that a double negation is the case. That is, <span class="math-container">$\lnot Q\vdash\bot$</span> allows us to infer: <span class="math-container">$\vdash \lnot\lnot Q$</span>.</p> <p>Having done so, invoke the rule of double negation elimination (denoted: DNE, or <span class="math-container">$\lnot\lnot\rm E$</span>), and infer that <span class="math-container">$\vdash Q$</span>.</p>
2,440,785
<p>if $f: X \to X$ is continuous where $X$ is a topological space with a cofinite topology, then:</p> <p>$$(i) \ f^{-1}(x) \text{ is finite for all $x$} \\ \text{or} \\ (ii) \ f \text{ is constant}$$</p> <p>My approach:</p> <p>I couldn't build up a proper approach here to be honest. I believe we need to use the fact that inverse functions preserve differences of sets. But couldn't go on. </p> <p>Any hints?</p>
Henno Brandsma
4,280
<p>In a cofinite space $X$ the only closed sets are the finite subsets and $X$. A function is continuous iff the inverse image of a closed set is closed.</p> <p>So either all sets $f^{-1}[\{x\}]$ are finite, or one of them (there can be at most one, of course) has $f^{-1}[\{x\}] = X$, in which case $f$ is constant. </p> <p>So the named options are exactly the mutually exclusive options for $f$: constant or finite fibres.</p>
1,957,166
<p>For a given set $A$, an element $a \in A$ exists. </p> <p>If $A$ is the set of all natural numbers, then:</p> <p>$$ a \in A \in \mathbb{N} \subset \mathbb{Z} \subset \mathbb{R}. $$</p> <p>Would maths normally be written like this, if it is correct? </p>
z100
259,327
<p>Just to correct a typo, since the OP stated that "$A$ is the set of all natural numbers": $$a \in A = \mathbb{N} \subset \mathbb{Z} \subset \mathbb{R}$$</p>
2,016,781
<p>Premise:</p> <ol> <li>Let P(n) return the real part of the nth nontrivial root of the zeta function</li> <li>The first several roots of the zeta function are already known </li> </ol> <p>Proof:</p> <ol> <li>Pick any integer n. For example, 1.</li> <li>Solve for P(n)</li> <li>Solve for P(n+1)</li> <li>Both (2) and (3) are 1/2 as established by Alan Turing and others</li> <li>By induction all P(n) are 1/2</li> </ol> <p>QED the real part of all nontrivial roots of the riemann zeta function is 1/2</p> <p>Any problems? </p>
Reese Johnston
351,805
<p>There are a considerable number of problems. First, you've omitted the base case; induction can only work if the claim holds at the starting point (usually $0$). Second, steps (2) and (3) have no content; $P(n)$ and $P(n+1)$ simply have values, no solving is required. Third, step (4) draws a conclusion that isn't there; why do you think that $P(n)$ is $\frac{1}{2}$? Or that $P(n+1)$ is?</p>
1,545,092
<p>The following sum represents the number of relevant kinds of lines in an N-dimensional tic-tac-toe game, which is why I am interested in finding a closed form, but it also is the sum of all possible combinations of N unique elements when any number of the elements from 1 to N can be chosen, which is also cool, and seems like the kind of thing that would have an elegant transcendental form involving factorials and stuff.</p> <p>$$ S = \sum_{j=1}^{N} {N! \over j!(N-j)!}, N \in \mathbb{Z}_{+} $$</p> <p>So is there an easy way to find a closed form here?</p>
Alex R.
22,064
<p>$$\binom{N}{j}=\frac{N!}{j!(N-j)!}.$$</p> <p>$$(1+x)^N=\sum_{j=0}^N\binom{N}{j}x^j.$$</p> <p>$$S=(1+1)^N-\binom{N}{0}=2^N-1$$</p>
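The identity above is easy to check by machine as well (a short Python sketch; the function name is mine):

```python
from math import comb

def nonempty_subsets(N):
    # sum_{j=1}^{N} C(N, j): every way to choose at least one of N elements
    return sum(comb(N, j) for j in range(1, N + 1))

for N in range(1, 8):
    print(N, nonempty_subsets(N), 2**N - 1)
```

The two columns agree for every N tried, matching the closed form $2^N - 1$.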
1,545,092
<p>The following sum represents the number of relevant kinds of lines in an N-dimensional tic-tac-toe game, which is why I am interested in finding a closed form, but it also is the sum of all possible combinations of N unique elements when any number of the elements from 1 to N can be chosen, which is also cool, and seems like the kind of thing that would have an elegant transcendental form involving factorials and stuff.</p> <p>$$ S = \sum_{j=1}^{N} {N! \over j!(N-j)!}, N \in \mathbb{Z}_{+} $$</p> <p>So is there an easy way to find a closed form here?</p>
vadim123
73,324
<p>It is <a href="https://en.wikipedia.org/wiki/Binomial_coefficient#Series_involving_binomial_coefficients" rel="nofollow">well-known</a> that $$\sum_{j=0}^N\frac{N!}{j!(N-j)!}=2^N$$ Your series $S$ is missing the first term, hence $$S=2^N-1$$</p>
314,329
<p>When I first learned about factorials in grade school I quickly became interested in the idea and did a lot of playing with them. I noticed, though, that as the factorials got higher and higher they gained more and more trailing zeros.</p> <pre><code>5! has 1 trailing zero and = 120 10! has 2 trailing zeros and = 3,628,800 15! has 3 trailing zeros and = 1,307,674,368,000 20! has 4 trailing zeros and = 2,432,902,008,176,640,000 </code></pre> <p>I always wondered if this meant something or perhaps proves a certain theorem. Does it?</p> <p><a href="http://www.mathsisfun.com/calculator-precision.html" rel="nofollow">Full Precision Calc</a></p>
Hagen von Eitzen
39,174
<p>If $p$ is a prime, then $n!$ is a multiple of $p^k$ (but not $p^{k+1}$) where $$\tag1 k=\left\lfloor \frac np\right\rfloor+\left\lfloor \frac n{p^2}\right\rfloor+\left\lfloor \frac n{p^3}\right\rfloor+\ldots$$ and $\lfloor x\rfloor$ is the largest integer $\le x$. This is so because each summand counts the factors among $1,2,\ldots ,n$ that are multiples of $p$, of $p^2$, of $p^3$ and so on.</p> <p>If we let $p=2$ in $(1)$, the $k$ we obtain will always be at least as big as when we let $p=5$. Therefore the highest power of $10=2\cdot 5$ dividing $n!$ (i.e. the number of trailing zeroes) is given by $(1)$ with $p=5$. That is: The number of zeroes grows by one at every multiple of $5$, by two at every multiple of $25$, by three at every multiple of $125$ and so on. Note, however, that the number of zeroes does <em>not</em> grow exponentially. In fact, $$\left\lfloor \frac np\right\rfloor+\left\lfloor \frac n{p^2}\right\rfloor+\left\lfloor \frac n{p^3}\right\rfloor+\ldots&lt; \frac np+\frac n{p^2}+\frac n{p^3}+\ldots =\frac{n}{p-1},$$ so the number of zeroes is linearly bounded and always $&lt; \frac n4$.</p> <p>By the way, $50!$ has only $\left\lfloor \frac{50}5\right\rfloor +\left\lfloor \frac{50}{25}\right\rfloor +\left\lfloor \frac{50}{125}\right\rfloor +\ldots = 10+2+0+0+\ldots = 12$ trailing zeroes.</p>
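The count in $(1)$ with $p=5$ can be verified directly against the decimal expansion of $n!$ (a Python sketch; `trailing_zeros` is my name for the count):

```python
from math import factorial

def trailing_zeros(n):
    # Legendre's count of the exponent of 5 in n!: floor(n/5) + floor(n/25) + ...
    count, power = 0, 5
    while power <= n:
        count += n // power
        power *= 5
    return count

for n in (5, 10, 15, 20, 50):
    s = str(factorial(n))
    direct = len(s) - len(s.rstrip('0'))
    print(n, trailing_zeros(n), direct)

print(trailing_zeros(50))  # 12
```

Counting zeroes on the printed factorial and counting via Legendre's formula agree, and the linear bound $k < n/4$ holds throughout.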
1,429,306
<p>As is well known Minkowski spacetime (which is four dimensional vector space with scalar product $\eta _{\mu \nu}$ of signature $-+++$) is maximally symmetric, which manifests itself in presence of ten Killing vector fields. Those are generators of one parameter groups of isometries, which can be understood as translations, boosts (hyperbolic rotations) and Euclidean rotations. There are also five conformal Killing vectors fields, which are generators of one parameter groups of conformal transformations. One of those transformations is simply scaling whole space. However there are four others, with generators:</p> <p>$K _ {\mu} = 2 \eta _{\mu \alpha} x^{\alpha} x^{\nu} \partial _{\nu} - x^{\alpha} x_{\alpha} \partial _{\mu}$</p> <p>With standard identification of tangent vectors and partial derivatives understood in this formula. My question is what are those transformations? Can I visualize them or explicitly find those groups? What is their interpretation or significance? Any insight will be appreciated.</p>
Nex
266,087
<p>I will give an elementary proof that every right quasi-regular element $a \in R\setminus \{0,1\}$ is invertible. To do so we prove:</p> <ol> <li>The only idempotent elements in $R$ are $0$ and $1$;</li> <li>Every non-zero regular element in $R$ is invertible;</li> <li>Every right quasi-regular element $a \in R\setminus \{0,1\}$ is invertible.</li> </ol> <p><strong>Proof of 1</strong>: Note that if $e$ is an idempotent element in $R$, then $e$ is regular and hence either $e=1$ or $e$ is right quasi-regular. If $e$ is right quasi-regular we have $e + x - ex=0$ for some $x$ and so $e = e^2 +ex - e^2x = e(e+x-ex)=e\cdot 0=0$. It follows that every idempotent element in $R$ is either $0$ or $1$.</p> <p><strong>Proof of 2</strong>: Suppose that $a$ is a regular element and $a \neq 0$, then by definition there exists $b$ such that $aba=a$ and hence $ab$ and $ba$ are idempotent (i.e. $(ab)(ab)=(aba)b=ab$ and $(ba)(ba)=b(aba)=ba$). Since $ab$ can't be $0$ (because then $a=aba=0$), it follows from 1 that $ab=1$ and similarly that $ba=1$. This proves that every non-zero regular element is invertible.</p> <p><strong>Proof of 3</strong>: Suppose that $a$ is a right quasi-regular element such that $a \neq 0$ and $a \neq 1$. By assumption there exists $x$ such that $a + x -ax=0$. It follows that $(1-a)(1-x) = 1-x-a+ax=1$ and hence $(1-a)(1-x)(1-a) = 1-a$ so that $1-a$ is a non-zero regular element. By assumption there exists $y$ such that $1-a + y -(1-a)y=0$. However this means that $1 -a + ay=0$ and hence $1 = a -ay= a(1-y)$. This means that $a(1-y)a=a$ and hence $a$ is invertible by 2.</p>
789,043
<p>Given a finite sequence of decimal digits $a_1,a_2,...,a_n$ prove that there exists a natural number $m$ such that decimal representation of $2^m$ starts with that sequence of digits.</p> <p>Thanks for your help :)</p>
Calvin Lin
54,563
<p><strong>Hint:</strong> $\log 2$ is irrational. What can you say about $ \{ n \log 2 \}$, the fractional parts?</p>
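The hint leads to a full proof via equidistribution of $\{ n \log_{10} 2 \}$; a brute-force search (a Python sketch, illustration only, not a proof) shows such exponents really do appear:

```python
def first_power_of_two_starting_with(digits):
    # smallest m >= 1 such that the decimal expansion of 2**m begins with `digits`
    m = 1
    while not str(2 ** m).startswith(digits):
        m += 1
    return m

for d in ("10", "12", "100"):
    m = first_power_of_two_starting_with(d)
    print(d, m, str(2 ** m)[:8])
```

For instance $2^{10}=1024$ starts with "10" and $2^{7}=128$ starts with "12"; longer prefixes just need larger exponents.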
2,078,496
<p>Given the sequence of functions</p> <p>$f_n:[0,1]\rightarrow\mathbb{R}$</p> <p>$x\mapsto\begin{cases}2nx, &amp; x\in[0,\frac{1}{2n}] \\ -2nx+2, &amp; x\in (\frac{1}{2n},\frac{1}{n}) \\ 0, &amp; \text{otherwise} \end{cases}$</p> <hr> <p>The functions look like this:</p> <p><a href="https://i.stack.imgur.com/cE7h9.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cE7h9.jpg" alt="enter image description here"></a></p> <hr> <p>Does this sequence converge pointwise or uniformly to a function $f(x)$ as $n$ approaches infinity?</p> <p>My thoughts: It doesn't, since for $n\rightarrow\infty$ the first interval becomes $[0,0]$ and the second $(0,0)$. Since $[0,0]$ is just $0$ and $(0,0)$ is an empty interval, we have two cases for $x$: $x=0$, in which case the function is $2nx$, and $x\neq 0$, in which case the function is $0$.</p> <p>Now here is the part I don't quite understand:</p> <p>For $n\rightarrow\infty$, if $x=0$ the function would be $2\cdot\infty\cdot 0$. How do I interpret that?</p> <p>Anyway, since the slope of the first function section gets higher and higher with each iteration, it must become vertical as $n\rightarrow\infty$, right?</p> <p>Am I correct with my thoughts? And what about uniform convergence?</p>
Tom
386,571
<p>First, note that $f_n(0) = 0$ for any $n$. </p> <p>Now, let $x_0\in(0,1]$ be fixed and let $N:=\lfloor\frac{1}{x_0}\rfloor+1$, then $f_n(x_0) = 0$ for any $n&gt;N$. Since, by the Archimedean property of the real numbers, such an $N$ can always be found, $\lim\limits_{n\rightarrow\infty}f_n(x)=0$ for any $x\in[0,1]$ and your sequence converges to the function $f(x)=0$ pointwise.</p> <p>However, $\underset{x\in[0,1]}{\sup}|f_n(x)| = 1$ for any value of $n$; therefore $\lim\limits_{n\rightarrow\infty}\underset{x\in[0,1]}{\sup}|f_n(x)| = 1$ and thus the convergence is not uniform.</p>
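Both phenomena can be seen numerically (a Python sketch; `f` implements the piecewise definition from the question):

```python
def f(n, x):
    # the tent of height 1 supported on [0, 1/n]
    if 0 <= x <= 1 / (2 * n):
        return 2 * n * x
    elif 1 / (2 * n) < x < 1 / n:
        return -2 * n * x + 2
    else:
        return 0.0

# pointwise: at a fixed x0 > 0 the values die out once n > 1/x0 ...
x0 = 0.01
print([f(n, x0) for n in (1, 10, 100, 1000)])

# ... but the peak f_n(1/(2n)) stays at height 1, so sup |f_n - 0| = 1 for every n
print([f(n, 1 / (2 * n)) for n in (1, 10, 100, 1000)])
```

The first list eventually becomes zero, while the second stays pinned at 1, which is exactly the pointwise-but-not-uniform picture.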
1,057,050
<p>I am currently doing a math problem and have come across an unfamiliar notation: a mini circle between <span class="math-container">$f$</span> and <span class="math-container">$h(x)$</span>.</p> <p>The question asks me to find, for 'the functions <span class="math-container">$f(x)=2x-1$</span> and <span class="math-container">$h(x)=3x+2$</span>',</p> <p><span class="math-container">$$f \circ h(x)$$</span></p> <p>However, I can't do this as I do not know what the circle notation denotes. Does it mean to multiply?</p>
etothepii
198,170
<p>The circle $\circ$ is the symbol for composition of functions. In general, if you have two functions $g\colon X\rightarrow Y$ and $f\colon Y\rightarrow Z$, then $f\circ g$ is a function from $X$ to $Z$. For $x\in X$ one has $(f\circ g)(x) = f(g(x))$.</p> <p>In your case one has: $f(x) = 2x-1$, $g(x) = 3x+2$ and $$ (f\circ g)(x) = f(g(x)) = 2(g(x))-1 = 2(3x+2) -1 = 6x+3. $$ You take the function $g(x)$ and put it in place of the $x$ in the function $f$.</p> <p>This is obviously different from $f(x)\cdot g(x) = (2x-1)\cdot (3x+2) = 6x^2+x-2$.</p>
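To make the distinction concrete, here is a small Python sketch (the helper `compose` is my own name, not standard notation):

```python
def f(x):
    return 2 * x - 1

def g(x):
    return 3 * x + 2

def compose(outer, inner):
    # (outer o inner)(x) = outer(inner(x))
    return lambda x: outer(inner(x))

f_after_g = compose(f, g)
print(f_after_g(1))        # f(g(1)) = f(5) = 9, matching 6*1 + 3
print(compose(g, f)(1))    # g(f(1)) = g(1) = 5, a different function: 6x - 1
```

Note that composition is not commutative and is not the same as the product $f(x)\cdot g(x)$.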
2,571,746
<p>$$|x+4| -4 =x $$</p> <p>I've two questions about this equation. </p> <ul> <li><p>Why do we need to build an inequality?</p></li> <li><p>If we build an inequality, in what cases do we need to analyse?</p></li> </ul> <p>Also I'm trying to find the negative values that $x$ can take.</p>
Michael Rozenberg
190,319
<p>Reasoning for your new question is the same.</p> <p>There are two cases for removing the absolute value:</p> <ol> <li>$x\geq-4$, which gives $x+4-4=x,$ which says that the equation has infinitely many solutions in this case: </li> </ol> <p>Any number from the set $[-4,+\infty)$ is valid;</p> <ol start="2"> <li>$x&lt;-4$, which gives $-x-4-4=x$ or $x=-4$, which is not valid in this case.</li> </ol> <p>The answer is: $[-4,+\infty).$</p>
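A quick check over a range of integers confirms the two cases (a short Python sketch):

```python
def satisfies(x):
    # does x solve |x + 4| - 4 = x ?
    return abs(x + 4) - 4 == x

solutions = [x for x in range(-10, 6) if satisfies(x)]
print(solutions)   # every integer from -4 upward; nothing below -4
```

Everything at or above $-4$ works and everything below it fails, matching the answer $[-4,+\infty)$.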
231,549
<p>Is there a standard name for a linear operator $T$ on a finite dimensional vector space satisfying $T^n=T^{n+1}$ for some $n\geq 1$ or, equivalently, $T$ is similar to a direct sum of a nilpotent matrix and an identity matrix? I am not looking so much for name suggestions, but rather for a generally accepted terminology from the literature.</p> <p><strong>Added Motivation.</strong> In Kovacs' proof that the complex algebra of the monoid of $n\times n$-matrices over a finite field is semisimple, a key step is to show that the ideal of the monoid algebra spanned by the singular matrices is a unital ring. He shows that the identity is a linear combination of matrices satisfying the above property. He calls such matrices semi-idempotent. But I believe he invented the name. </p> <p>Being a semigroup theorist I don't like math terms involving "semi" and so in my book I would prefer another term, preferably one in use in the matrix theory literature. </p>
Igor Rivin
11,142
<p>I would say "unipotent" is the standard term.</p>
1,903,762
<blockquote> <p>Largest number that leaves same remainder while dividing 5958, 5430 and 5814 ?</p> </blockquote> <hr> <p>$$5958 \equiv 5430 \equiv 5814 \pmod x$$ $$3\times 17\times 19 \equiv 5\times 181\equiv 3\times 331\pmod x$$ $$969 \equiv 905\equiv 993\pmod x$$</p> <p>After a bit of playing with the calculator, I think the answer is $48$ but I don't know how to prove it. </p> <p>Sorry if the answer is too obvious, I am still trying to wrap my head modular arithmetic and not very successful yet.It would be great help if anybody would give me some hints on how to proceed ahead. Thanks. $\ddot \smile$ </p>
fleablood
280,126
<p>Okay, first we set it up even though we have no idea where we are going.</p> <p>$5958 \equiv n \mod x$</p> <p>$5430 \equiv n \mod x$</p> <p>$5814 \equiv n \mod x$.</p> <p>Then we noodle a bit to make it smaller and more palatable.</p> <p>$5958 - 5814 = 144 \equiv 0 \mod x$</p> <p>$5814 - 5430 = 384 \equiv 0 \mod x$</p> <p>and </p> <p>$5958 - 5430 = 528 \equiv 0 \mod x$</p> <p>Well, now it's very clear that $x$ is a common divisor of these differences, and all we have to do is find the $x = \gcd(144,384, 528)$</p> <p>$x = \gcd(144 = 2^4\cdot 3^2, 384 = 2^7\cdot 3, 528 = 2^4\cdot 3\cdot 11) = 2^4\cdot 3 = 48$.</p> <p>And we're done.</p>
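And a short verification of the arithmetic (a Python sketch):

```python
from math import gcd
from functools import reduce

nums = [5958, 5430, 5814]
# pairwise differences must all be divisible by x
diffs = [b - a for a in nums for b in nums if a < b]
x = reduce(gcd, diffs)
print(diffs, x, [n % x for n in nums])
```

The gcd of the differences is 48, and indeed all three numbers leave the same remainder (6) on division by 48.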
2,419,391
<p>Suppose there exists a Lebesgue space <span class="math-container">$L_1$</span> and functions <span class="math-container">$\phi$</span>, <span class="math-container">$\phi'$</span>, <span class="math-container">$f$</span>, and <span class="math-container">$f'$</span> where <span class="math-container">$$\phi, \phi' \in L_1$$</span></p> <p>By the rule of integration by parts, <span class="math-container">$$uv|_a^b = \int_a^b udv + \int_a^b vdu$$</span></p> <p>Let <span class="math-container">$$ u = \phi, du= \phi'$$</span> <span class="math-container">$$ v = f, dv = f'$$</span></p> <p>Are there any properties of Lebesgue functions that allow</p> <p><span class="math-container">$$ uv|_a^b = 0$$</span></p> <p>Are <span class="math-container">$\phi$</span> and <span class="math-container">$\phi'$</span> convergent as integrals? Do the unbounded limits of <span class="math-container">$\phi$</span> and <span class="math-container">$\phi'$</span> converge?</p>
Montie
517,270
<p>It is not true in general that $\phi \in L_1(\mathbb{R})$ implies $\lim_{x \to \infty} \phi (x) = 0$. The limit may not exist. For example, let $\phi (x) = 1$ for $x \in [n, n+2^{-n})$ for each natural number $n$ (including zero), and let $\phi \equiv 0$ otherwise. Then, we have </p> <p>\begin{equation*} \int_{-\infty}^\infty |\phi| = \sum_{n=0}^\infty 2^{-n} = 2 &lt; \infty, \end{equation*} but $\lim_{x \to \infty} \phi (x)$ does not exist. In fact, we can modify this example to construct an unbounded function which has no limit at infinity but belongs to $L^p$ for every finite $p$. </p> <p>It is not clear to me what your question is. Typically the situations in which boundary terms for integration by parts vanish are when we know the functions are periodic (e.g. they are solutions to a PDE on which we impose periodic boundary conditions) or when one of the functions is compactly supported on the domain of integration.</p>
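For anyone who wants to experiment, here is a short Python sketch of that counterexample (the function name and sampling points are mine):

```python
def phi(x):
    # 1 on [n, n + 2^-n) for each integer n >= 0, and 0 elsewhere
    if x < 0:
        return 0
    n = int(x)
    return 1 if x - n < 2.0 ** (-n) else 0

# the integral is the sum of the interval lengths, a geometric series totalling 2
total = sum(2.0 ** (-n) for n in range(60))
print(total)

# yet phi keeps returning to 1 at every integer, so it has no limit at infinity
print([phi(k) for k in range(8)], [phi(k + 0.5) for k in range(8)])
```

The function integrates to 2 but takes the value 1 arbitrarily far out, so no limit at infinity exists.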
2,419,391
<p>Suppose there exists a Lebesgue space <span class="math-container">$L_1$</span> and functions <span class="math-container">$\phi$</span>, <span class="math-container">$\phi'$</span>, <span class="math-container">$f$</span>, and <span class="math-container">$f'$</span> where <span class="math-container">$$\phi, \phi' \in L_1$$</span></p> <p>By the rule of integration by parts, <span class="math-container">$$uv|_a^b = \int_a^b udv + \int_a^b vdu$$</span></p> <p>Let <span class="math-container">$$ u = \phi, du= \phi'$$</span> <span class="math-container">$$ v = f, dv = f'$$</span></p> <p>Are there any properties of Lebesgue functions that allow</p> <p><span class="math-container">$$ uv|_a^b = 0$$</span></p> <p>Are <span class="math-container">$\phi$</span> and <span class="math-container">$\phi'$</span> convergent as integrals? Do the unbounded limits of <span class="math-container">$\phi$</span> and <span class="math-container">$\phi'$</span> converge?</p>
Oliver DΓ­az
121,671
<p>Integration by parts from the point of view of Lebesgue integration is more appropriately understood in terms of functions of locally finite variation. Here is a version, and then I comment more on your particular problem. I hope this is helpful to you.</p> <hr /> <p><em><strong>Theorem:</strong></em> Let <span class="math-container">$F$</span>, <span class="math-container">$G$</span> be right-continuous functions of locally finite variation on an interval <span class="math-container">$I$</span> (bounded or unbounded), and let <span class="math-container">$\mu_F$</span> and <span class="math-container">$\mu_G$</span> be the Stieltjes-Lebesgue measures generated by <span class="math-container">$F$</span> and <span class="math-container">$G$</span> respectively. For any bounded interval <span class="math-container">$(a,b]\subset I$</span>, <span class="math-container">$$ \int_{(a,b]}F(t)\mu_G(dt)=F(b)G(b)-F(a)G(a)-\int_{(a,b]}G(t-)\mu_F(dt) $$</span> where <span class="math-container">$G(t-)=\lim_{s\nearrow t}G(s)$</span>.</p> <p>A proof can be obtained using Fubini's theorem: <span class="math-container">\begin{aligned} (F(b)-F(a))(G(b)-G(a))&amp;=\int_{(a,b]\times(a,b]}\mu_F\otimes\mu_G(dt,ds)\\ &amp;=\int_{(a,b]}\Big(\int_{(a,s]}\mu_F(dt)\Big)\mu_G(ds) +\int_{(a,b]}\Big(\int_{(s,b]}\mu_F(dt)\Big)\mu_G(ds)\\ &amp;=\int_{(a,b]}\Big(\int_{(a,s]}\mu_F(dt)\Big)\mu_G(ds) +\int_{(a,b]}\Big(\int_{(a,t)}\mu_G(ds)\Big)\mu_F(dt)\\ &amp;=\int_{(a,b]}\big(F(s)-F(a)\big)\,\mu_G(ds) +\int_{(a,b]}\big(G(t-)-G(a)\big)\,\mu_F(dt) \end{aligned}</span> and rearranging the last line yields the stated formula.</p> <hr /> <p>In the setting that you have in mind, you may have <span class="math-container">$\phi$</span> and <span class="math-container">$\psi$</span> that are <strong>absolutely continuous</strong> on an interval <span class="math-container">$[c,d]$</span>. Then <span class="math-container">$f=\phi'$</span> and <span class="math-container">$g=\psi'$</span> exist a.e. 
(with respect to the Lebesgue measure on <span class="math-container">$\mathbb{R}$</span>) and <span class="math-container">\begin{aligned} \phi(x)&amp;=\phi(c)+\int^x_cf(t)\,dt,&amp;\qquad c\leq x\leq d\\ \psi(x)&amp;=\psi(c)+\int^x_cg(t)\, dt,&amp; \qquad c\leq x\leq d \end{aligned}</span> Then you may consider <span class="math-container">$F(x)=\phi(x)$</span> and <span class="math-container">$G(x)=\psi(x)$</span>, in which case <span class="math-container">\begin{aligned} \mu_F(dt) &amp;= \phi(c)\delta_c(dt) + f\mathbb{1}_{(c,d]}(t)\,dt\\ \mu_G(dt) &amp;=\psi(c)\delta_c(dt) + g\mathbb{1}_{(c,d]}(t)\,dt \end{aligned}</span> Applying the theorem above, you recover the usual integration by parts formula for <span class="math-container">$[a,b]\subset[c,d]$</span>.</p> <p>As for the boundary condition <span class="math-container">$\phi(t)\psi(t)|^b_a=0$</span>, there are many instances in which that happens. The important thing is to notice that <span class="math-container">$\phi$</span> and <span class="math-container">$\psi$</span> should be <em><strong>absolutely continuous</strong></em> to begin with.</p> <p>For infinite intervals one has to use dominated-convergence-type arguments (integrability conditions on the derivatives <span class="math-container">$\phi'=f$</span> and <span class="math-container">$\psi'=g$</span>). Typical situations are when both <span class="math-container">$|\mu_F|$</span> and <span class="math-container">$|\mu_G|$</span> are finite measures.</p>
262,655
<p>This question arose from the recent one, <a href="https://mathoverflow.net/q/262380/41291">roots of a polynomial linked to mock theta function?</a>. Let $$ g(x):=\sum_{k=0}^\infty x^k\prod_{j=1}^{k-1}(1 + x^j)^2\\=1+x+x^2+3 x^3+4 x^4+6 x^5+10 x^6+15 x^7+21 x^8+30 x^9+43 x^{10}+59 x^{11}+...; $$ the sequence $1,1,1,3,4,6,10,15,21,30,43,59,...$ with the generating function $g(x)$ is <a href="http://oeis.org/A059618" rel="nofollow noreferrer">A059618</a> on OEIS, it is the sequence of numbers of strongly unimodal partitions.</p> <p>Now let $$ f(q):=g(q)\prod_{n=1}^\infty(1-q^n), $$ and let $a_k$ be the $k$th coefficient in the Maclaurin series for $f$, $$ f(x)=\sum_{k=0}^\infty a_kx^k\\=1-x^2+x^3+x^6+x^7-x^9+x^{10}-x^{14}+x^{18}-x^{20}+x^{21}+x^{25}+x^{26}-x^{27}\\+x^{28}-x^{30}+x^{33}-x^{35}+x^{36}-x^{39}-x^{40}+x^{42}-x^{44}+2x^{45}-x^{49}+x^{52}-x^{54}\\+x^{55}+x^{56}+x^{57}-x^{60}-x^{65}+... $$ The sequence of $a_k$, starting with </p> <blockquote> <p>1,0,-1,1,0,0,1,1,0,-1,1,0,0,0,-1,0,0,0,1,0,-1,1,0,0,0,1,1,-1,1,0,-1,0,0,1,0,-1,1,0,0,-1,-1,0,1,0,-1,2,0,0,...</p> </blockquote> <p>is not on OEIS. 
Among the first 1000 terms of the sequence, there are 609 zeroes, 182 ones, 161 -1s, 19 of them are 2 ($a_{45},a_{150},a_{210},a_{221},a_{273},a_{300},...$), 22 are -2 ($a_{77},a_{90},a_{165},a_{225},...$), and two of them ($a_{525}$ and $a_{825}$) are 3; seems like $a_k$ are zero for $k=2^j$ ($j&gt;0$), for $k=p$ or $k=2p$, with $p$ prime $&gt;7$, $k=3p$ and $k=4p$ with $p$ prime $\geqslant23$, $k=5p$ with $p$ prime $&gt;31$, $6p$ for $p&gt;37$, $7p$ and $8p$ for $p&gt;43$, $9p$ for $p&gt;47$, $10p$ for $p&gt;61$, $11p$ for $p&gt;67$,...</p> <p>What may (or may not) be relevant is another sequence obtained from introducing new variable in the way I learned from <a href="http://math.stanford.edu/~rhoades/FILES/unimodal.pdf" rel="nofollow noreferrer">a paper by Rhoades</a> linked to from the above OEIS page for $g$.</p> <p>Let $$ g_t(q):=\sum_{k=0}^\infty q^k\prod_{j=1}^{k-1}(1 + q^jt)(1+q^j/t), $$ and let $$ f_t(q)=g_t(q)\prod_{n=1}^\infty(1-q^n), $$ so that $g_1(q)=g(q)$ and $f_1(q)=f(q)$. Then $$ f_t(q)=1-q^2+\frac{1+t^3}{(1+t)t}q^3+\frac{1+t^5}{(1+t)t^2}q^6+q^7-\frac{1+t^3}{(1+t)t}q^9+\frac{1+t^7}{(1+t)t^3}q^{10}+...; $$ most coefficients have form $\pm\frac{1+t^{2j+1}}{(1+t)t^j}$, except that I cannot figure out how $j$ depends on the number of the coefficient. Exceptions here start from the $15$th coefficient, which is $\frac{1+t^9}{(1+t)t^4}-1$ and the $45$th one which is $\frac{1+t^{17}}{(1+t)t^8}+\frac{1+t^3}{(1+t)t}$.</p> <p>Despite all these clues, to my shame I've given up searching for an explicit formula for $a_k$. Is there one? I am pretty sure there is, but what is it?</p>
მამუკა αƒ―αƒ˜αƒ‘αƒšαƒαƒ«αƒ”
41,291
<p>Found the following purely empirically, have no idea how to prove it: $$ \sum_{k=0}^\infty a_kq^k=\sum_{\substack{m,n\geqslant0\\n\ne1}}(-1)^mq^{\frac{(m+n)(3m+n+1)}2}; $$ also, in case this might be useful, $$ f_t(q)=\sum_{\substack{m,n\geqslant0\\n\ne1}}(-1)^m\frac{1+t^{2n-1}}{(1+t)t^{n-1}}q^{\frac{(m+n)(3m+n+1)}2} $$ but I don't have a proof of it either.</p>
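This is still only empirical, but the match can be pushed further by machine. Here is a Python sketch that rebuilds both sides as truncated power series to degree 60 (the truncation order and all variable names are my choices):

```python
N = 60  # work with polynomials truncated at degree N

def mul(p, q):
    # product of two coefficient lists, truncated at degree N
    r = [0] * N
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(q):
                if b and i + j < N:
                    r[i + j] += a * b
    return r

# g(x) = sum_{k>=0} x^k prod_{j=1}^{k-1} (1 + x^j)^2; terms with k >= N are too high in degree
g = [0] * N
for k in range(N):
    term = [0] * N
    term[k] = 1
    for j in range(1, k):
        factor = [0] * N
        factor[0] = 1
        factor[j] += 2          # (1 + x^j)^2 = 1 + 2 x^j + x^(2j)
        if 2 * j < N:
            factor[2 * j] += 1
        term = mul(term, factor)
    g = [a + b for a, b in zip(g, term)]

# f(x) = g(x) prod_{n>=1} (1 - x^n); factors with n >= N cannot affect degrees < N
f = g
for n in range(1, N):
    factor = [0] * N
    factor[0] = 1
    factor[n] -= 1
    f = mul(f, factor)

# conjectured right-hand side: sum over m>=0, n>=0, n != 1 of (-1)^m x^((m+n)(3m+n+1)/2)
rhs = [0] * N
for m in range(15):
    for n in range(60):
        if n == 1:
            continue
        e = (m + n) * (3 * m + n + 1) // 2
        if e < N:
            rhs[e] += (-1) ** m

print(f[:48])
print(f == rhs)
```

The coefficient list agrees with the sequence quoted in the question, and the two sides coincide through degree 59; of course, this still falls short of a proof.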
1,131,092
<p>I've come across many different versions of this question on here, but none that map $[0,1]$ to $(1, \infty)$. </p> <p>I was thinking that it must be piecewise defined, since the endpoints 0 and 1 will be the trickiest part of defining the bijection... The only method I could come up with would be to show a bijection from $[0,1]$ to $(1,2)$ and then a bijection from $(1,2)$ to $(1, \infty)$, so that the composition is a bijection from $[0,1]$ to $(1, \infty)$, but I haven't been able to come up with a function that does this... Any help is much appreciated.</p>
Mathmo123
154,802
<p><strong>Hint:</strong> There is a simple (continuous) bijection from $(0,1)\to (1, \infty)$. So if you can find a bijection from $[0,1] \to (0,1)$ then you're done. One way to do this is in two steps: find a bijection $[0,1]\to(0,1]$ and then find another from $(0,1]\to(0,1)$.</p> <p>To find a bijection from $[0,1]\to(0,1]$, you intuitively want to fix "most of" $[0,1]$, but you need to send $0$ to somewhere that isn't $0$. And wherever you send $0$ can't be sent to itself, so has to be sent somewhere else. And wherever you send that to also has to be sent somewhere else... and so on.</p> <p>Let's try sending $0$ to $\frac12$. And we can send $\frac12$ to $\frac14$, and $\frac14$ to $\frac 18$, and so on, and fix everything that isn't of the form $\frac1{2^n}$ for an integer $n \ge 2$. Can you show that this gives a bijection?</p> <p>Can you do something similar to find a bijection from $(0,1]\to(0,1)$?</p>
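For concreteness, here is one way the whole composite can be realized, sketched in Python with exact rationals. The helper names are mine, the chain $0\mapsto\frac12\mapsto\frac14\mapsto\cdots$ is the one from the hint, and the final step uses $x\mapsto 1/x$, one simple choice of continuous bijection $(0,1)\to(1,\infty)$:

```python
from fractions import Fraction

def is_half_power(x):
    """True if x = 1/2^n for some integer n >= 1."""
    d = x.denominator
    return 0 < x < 1 and x.numerator == 1 and (d & (d - 1)) == 0

def to_half_open(x):
    """Bijection [0,1] -> (0,1]: send 0 -> 1/2 and 1/2^n -> 1/2^(n+1); fix the rest."""
    if x == 0:
        return Fraction(1, 2)
    return x / 2 if is_half_power(x) else x

def drop_endpoint(x):
    """Bijection (0,1] -> (0,1): send 1 -> 1/2 and 1/2^n -> 1/2^(n+1); fix the rest."""
    if x == 1:
        return Fraction(1, 2)
    return x / 2 if is_half_power(x) else x

def bijection(x):
    """Composite bijection [0,1] -> (1, oo)."""
    return 1 / drop_endpoint(to_half_open(x))

sample = [Fraction(0), Fraction(1), Fraction(1, 2), Fraction(1, 3), Fraction(3, 4)]
print([bijection(p) for p in sample])    # e.g. 0 |-> 4, 1 |-> 2, 1/3 |-> 3
```

No continuous bijection $[0,1]\to(1,\infty)$ can exist ($[0,1]$ is compact), so shifting a countable chain of points like this is the standard trick, and the resulting map is necessarily discontinuous at the moved points.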
2,552,133
<p>How can I calculate this limit?<br> $$\lim_{n\to \infty}{n^{\frac{2}{3}}((n+2)^{\frac{1}{3}}-n^{\frac{1}{3}})}$$ </p> <p>I have no idea how to approach it.</p>
lab bhattacharjee
33,337
<p><em>Hint:</em></p> <p>Rationalize the numerator</p> <p>$$a^3-b^3=(a-b)(a^2+ab+b^2)$$</p> <p>Then divide numerator &amp; denominator by $n^{2/3}$</p>
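To see the hint land numerically: with $a=(n+2)^{1/3}$ and $b=n^{1/3}$, the numerator $a^3-b^3$ collapses to $2$. A quick float sketch (mine, not part of the original hint) evaluates both forms side by side:

```python
def raw(n):
    """n^(2/3) * ((n+2)^(1/3) - n^(1/3)) exactly as written."""
    return n ** (2 / 3) * ((n + 2) ** (1 / 3) - n ** (1 / 3))

def rationalized(n):
    """Same quantity after multiplying by (a^2 + a*b + b^2) / (a^2 + a*b + b^2):
    the new numerator is a^3 - b^3 = (n + 2) - n = 2."""
    a, b = (n + 2) ** (1 / 3), n ** (1 / 3)
    return 2 * n ** (2 / 3) / (a * a + a * b + b * b)

for n in (10, 10**3, 10**6):
    print(n, raw(n), rationalized(n))    # both columns tend to 2/3
```

Dividing top and bottom of the rationalized form by $n^{2/3}$ shows the denominator tends to $3$, which is where the value $2/3$ comes from.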
2,552,133
<p>How can I calculate this limit?<br> $$\lim_{n\to \infty}{n^{\frac{2}{3}}((n+2)^{\frac{1}{3}}-n^{\frac{1}{3}})}$$ </p> <p>I have no idea how to approach it.</p>
Guy Fsone
385,707
<p>Let $n=\frac1t$; then $${n^{\frac{2}{3}}((n+2)^{\frac{1}{3}}-n^{\frac{1}{3}})} ={t^{-\frac{2}{3}}((\frac1t+2)^{\frac{1}{3}}-t^{-\frac{1}{3}})} = \frac{\sqrt[3]{1+2t}-1}{t}.$$ Hence, $$\lim_{n\to \infty}{n^{\frac{2}{3}}((n+2)^{\frac{1}{3}}-n^{\frac{1}{3}})} =\lim_{t\to 0}\frac{(1+2t)^{1/3}-1}{t} =\left((1+2t)^{1/3}\right)'\bigg|_{t=0} = 2/3.$$</p>
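The substitution reduces everything to a one-sided difference quotient at $0$, which is easy to cross-check numerically (my sketch, not the answerer's):

```python
def phi(t):
    """((1 + 2t)^(1/3) - 1) / t: the difference quotient whose t -> 0 limit
    is the derivative of (1 + 2t)^(1/3) at t = 0, namely 2/3."""
    return ((1 + 2 * t) ** (1 / 3) - 1) / t

for t in (1e-1, 1e-3, 1e-6):
    print(t, phi(t))    # settles near 0.666...
```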