162,360
<p>First, the definition of a connected set:</p> <blockquote> <p><strong>Definition:</strong> A topological space is connected if, and only if, it cannot be divided into two nonempty, open and disjoint subsets, or, equivalently, if the empty set and the whole set are the only subsets that are both open and closed.</p> </blockquote> <p>I don't understand some points in the following proof that every interval $I \subset \mathbb{R}$ is connected.</p> <p>Suppose $I = A \cup B$ with $A \cap B = \emptyset$, where $A$ and $B$ are both non-empty and open in the subspace topology of $I \subset \mathbb{R}$. Choose $a\in A$ and $b\in B$ and suppose $a &lt; b$. Let $s := \mathrm{inf}\{ x \in B ~|~ a &lt; x \}$. Then every neighborhood of $s$ contains points of $B$ (by the definition of the infimum), but also of $A$: <strong>if not $s = a$, then $a &lt; s$ and the open interval $(a,s)$ lies entirely in $A$</strong>. So $s$ can be an inner point of neither $A$ nor $B$, which contradicts the assumption that both $A$ and $B$ are open and $s \in A \cup B$.</p> <p>I have a problem with the bold part: why does it follow that $(a,s)$ lies entirely in $A$? And in that case, must it be that $A = (a,s)$?</p>
Arthur
15,500
<p>If not $s = a$, then $a&lt;s$, because $s$ is the infimum of a set of elements all larger than $a$.</p> <p>The entire interval $(a, s)$ must be in $A$ because of how $s$ was chosen. The number $s$ was chosen to be smaller than any element of $B$ greater than $a$, so there can't be any elements of $B$ in the interval $(a, s)$; and since $A$ and $B$ between them contain all of $I$, $(a, s)$ must lie completely within $A$. (Note that it need not be the case that $A = (a,s)$: we only conclude that $A$ <em>contains</em> $(a,s)$.)</p> <p>The crux of the proof is that this $s$, as chosen (and existing, by the completeness of $\mathbb{R}$), can be in neither $A$ nor $B$, but at the same time it has to be in one of them, since $A$ and $B$ together cover the whole of $I$. Thus we have a contradiction, and we know that our assumption must be wrong, namely that $I$ can be written as the union of two nonempty, disjoint open sets.</p>
344,994
<p>Are there any two decimals that multiply to make 1? I know that 0.25 x 4 = 1, but that's a decimal and a whole number. I need two decimals. Any ideas? I'm not an expert, btw, so... thanks</p>
Abhijit
27,809
<p>Assuming you mean to find a pair of non-integer rational numbers whose product is equal to 1, the best way is to express each as a fraction and then convert it to a decimal. Remember</p> <p>$$\frac{p}{q}\cdot \frac{q}{p} = 1$$</p> <p>So plug in any $p$ and $q$ to get the desired decimal fractions.</p> <p>Examples include, but are not limited to, $\left\{(4,5),(8,5),(5,2)\right\}$, which give $\left\{(0.8,1.25),(1.6,0.625),(2.5,0.4)\right\}$</p>
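Abhijit's recipe is easy to check mechanically; here is a small Python sketch (mine, not part of the original answer) that reproduces the listed pairs with exact rational arithmetic:

```python
from fractions import Fraction

# For any nonzero integers p and q, the decimals p/q and q/p multiply to exactly 1.
# The pairs below are the ones listed in the answer above.
pairs = [(4, 5), (8, 5), (5, 2)]
for p, q in pairs:
    x, y = Fraction(p, q), Fraction(q, p)
    assert x * y == 1
    print(float(x), float(y))
```

Using `Fraction` avoids any floating-point doubt about whether the product is exactly 1.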
2,714,738
<p>If $R$ is a noetherian ring, then $R[x]$ is also a noetherian ring, i.e. $R[x]$ is noetherian as an $R[x]$-module. Is $R[x]$ also noetherian as an $R$-module?</p>
Pierre-Yves Gaillard
660
<p>Let $R$ be a commutative ring and $X$ an indeterminate. Then $R[X]$ is a noetherian $R$-module if and only if $R$ is the zero ring.</p> <p>The other answers contain a proof (but not a statement) of this fact.</p> <p>Edit. In the first version, I had written "$R[X]$ is a noetherian $R$-module if and only if $R$ is not the zero ring". Thanks to quid for having pointed out this mistake!</p>
3,600,868
<p>An ellipse can be <a href="https://math.stackexchange.com/q/3594700/122782">perfectly packed with <span class="math-container">$n$</span> circles</a> if </p> <p><span class="math-container">\begin{align} b&amp;=a\,\sin\frac{\pi}{2\,n} \quad \text{or equivalently, }\quad e=\cos\frac{\pi}{2\,n} , \end{align}</span> </p> <p>where <span class="math-container">$a,b$</span> are the major and minor semi-axes of the ellipse and <span class="math-container">$e=\sqrt{1-\frac{b^2}{a^2}}$</span> is its eccentricity.</p> <p>Consider a triangle and any ellipse naturally associated with it, for example, the Steiner circumellipse/inellipse, the Marden inellipse, the <a href="https://mathworld.wolfram.com/BrocardInellipse.html" rel="nofollow noreferrer">Brocard inellipse</a>, the <a href="https://mathworld.wolfram.com/LemoineInellipse.html" rel="nofollow noreferrer">Lemoine inellipse</a>, the ellipse with the circumcenter and incenter as the foci and <span class="math-container">$r+R$</span> as the major axis, or any other ellipse you can come up with that can be consistently associated with the triangle.</p> <p>The question is: provide example(s) of triangle(s) for which the associated ellipse(s) can be perfectly packed with circles.</p> <p>Let's say that the max number of packed circles is 12, unless you can find some especially interesting case with more circles.</p> <p>For example, the Steiner inellipse of the famous <span class="math-container">$3-4-5$</span> right triangle cannot be perfectly packed. </p> <p>An example of a right triangle with the Marden inellipse perfectly packed with six circles is given in the self-answer below.</p>
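The two packing conditions quoted above are the same condition, by the Pythagorean identity; a quick numeric check (my own, not part of the question):

```python
import math

# If b = a*sin(pi/(2n)), then e = sqrt(1 - b^2/a^2) should equal cos(pi/(2n)),
# so the two stated packing conditions agree for every n.
a = 1.0
for n in range(1, 13):
    b = a * math.sin(math.pi / (2 * n))
    e = math.sqrt(1.0 - (b / a) ** 2)
    assert abs(e - math.cos(math.pi / (2 * n))) < 1e-12
```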
Blue
409
<p>We can consider a broad family of inellipses that includes <a href="https://en.wikipedia.org/wiki/Steiner_inellipse" rel="nofollow noreferrer">Steiner's</a> and <a href="https://en.wikipedia.org/wiki/Mandart_inellipse" rel="nofollow noreferrer">Mandart's</a>.</p> <p>Given <span class="math-container">$\triangle ABC$</span> with <span class="math-container">$A=(0,0)$</span>, <span class="math-container">$B=(c,0)$</span>, <span class="math-container">$C=(b\cos A, b \sin A)$</span>, define point <span class="math-container">$P$</span> by <span class="math-container">$$P = \frac{\frac1\alpha A + \frac1\beta B + \frac1\gamma C}{\frac1\alpha+\frac1\beta+\frac1\gamma} \tag{1}$$</span> (cumbersome reciprocals now make for cleaner expressions later). Let the cevians through <span class="math-container">$P$</span> meet the opposite sides in <span class="math-container">$A'$</span>, <span class="math-container">$B'$</span>, <span class="math-container">$C'$</span>; that is, define <span class="math-container">$$A' := \overleftrightarrow{AP}\cap\overleftrightarrow{BC} \qquad B' := \overleftrightarrow{BP}\cap\overleftrightarrow{CA} \qquad C' := \overleftrightarrow{CP}\cap\overleftrightarrow{AB}$$</span> One can show that there is a unique ellipse tangent to the side-lines of the triangle at <span class="math-container">$A'$</span>, <span class="math-container">$B'$</span>, <span class="math-container">$C'$</span>. 
In the notation of my previous answer, it has equation <span class="math-container">$$K_{20}\,x^2 + K_{11}\,x y + K_{02}\,y^2 + K_{10}\,x+K_{01}\,y+K_{00} = 0$$</span> with <span class="math-container">$$\begin{align} K_{20} &amp;= \phantom{-}4\, (\alpha + \beta)^2 |\triangle ABC|^2 \\[4pt] K_{11} &amp;= \phantom{-}4 c \left((\alpha + \beta) (a \alpha \cos B - b \beta \cos A) + (\alpha - \beta) c \gamma \,\right) |\triangle ABC| \\[4pt] K_{02} &amp;= -4 (\alpha + \beta)^2 |\triangle ABC|^2 \\ &amp;\phantom{=}\;+c^2 \left( a^2 \alpha^2 + b^2 \beta^2 + c^2\gamma^2 + 2 bc\beta \gamma \cos A + 2 ca\gamma \alpha \cos B + 2 ab \alpha \beta \cos C \right)\\[4pt] K_{10} &amp;= -8 c \alpha (\alpha + \beta) |\triangle ABC|^2 \\[4pt] K_{01} &amp;= -4 c^2 \alpha ( a \alpha \cos B - b \beta \cos A + c \gamma ) |\triangle ABC| \\[4pt] K_{00} &amp;= \phantom{-}4 c^2 \alpha^2 |\triangle ABC|^2 \end{align}$$</span></p> <p>From these we get (after dividing-through by a common factor of <span class="math-container">$c^4$</span>)</p> <p><span class="math-container">$$\begin{align} p &amp;= \left( a^2 \alpha^2 + b^2 \beta^2 + c^2\gamma^2 + 2 bc\beta \gamma \cos A + 2 ca\gamma \alpha \cos B + 2 ab \alpha \beta \cos C \right)^2 \\ q &amp;= -64 \alpha \beta \gamma (\alpha + \beta + \gamma )|\triangle ABC|^2 \end{align} \tag{2}$$</span></p> <p>These, in conjunction with equations <span class="math-container">$(4.x')$</span> in my <a href="https://math.stackexchange.com/a/3602602/409">previous answer</a> give conditions under which the inellipse is perfectly-packable.</p> <p>We recover the Steiner inellipse case (again in my <a href="https://math.stackexchange.com/a/3602602/409">previous answer</a>) by taking <span class="math-container">$\alpha=\beta=\gamma=1$</span> (and dividing-through by a common factor of <span class="math-container">$4$</span>).</p> <hr> <p>If we consider inellipses specifically of equilateral triangles, with <span class="math-container">$a=b=c=1$</span>, then we 
have <span class="math-container">$$\begin{align} p &amp;= \left( \alpha^2 + \beta^2 + \gamma^2 + \beta \gamma + \gamma \alpha + \alpha \beta \right)^2 \\ q &amp;= -12 \alpha \beta \gamma (\alpha + \beta + \gamma ) \end{align} \tag{3}$$</span> For the particular case of an ellipse centered on the triangle's line of symmetry (so that, say, <span class="math-container">$\alpha=\beta$</span>) and packed with <span class="math-container">$3$</span> circles, equation <span class="math-container">$(4.3')$</span> of my previous answer reduces to <span class="math-container">$$(3 \alpha^2 - 8 \alpha\gamma - 4 \gamma^2) (12 \alpha^2 - 2 \alpha\gamma - \gamma^2) = 0$$</span> which we can solve to get (ignoring negative values) <span class="math-container">$$\gamma = \alpha \left(-1+\sqrt{13}\right) \qquad \gamma = \frac{\alpha}{2} \left(-2 +\sqrt{7}\right)$$</span> These correspond to the inellipses shown in <a href="https://math.stackexchange.com/a/3604463/409">@g.kov's answer</a>.</p> <p>Interestingly, all six equations <span class="math-container">$(4.x')$</span> factor when <span class="math-container">$\alpha=\beta$</span>.</p> <hr> <p><strong>Ceva Ratios.</strong> If we define <span class="math-container">$$\delta := \frac{|A'C|}{|BA'|} \qquad \epsilon := \frac{|B'A|}{|CB'|} \qquad \phi := \frac{|C'B|}{|AC'|}$$</span></p> <p><a href="https://en.wikipedia.org/wiki/Ceva%27s_theorem" rel="nofollow noreferrer">Ceva's Theorem</a> states that cevians <span class="math-container">$\overleftrightarrow{AA'}$</span>, <span class="math-container">$\overleftrightarrow{BB'}$</span>, <span class="math-container">$\overleftrightarrow{CC'}$</span> concur if and only if <span class="math-container">$\delta\epsilon\phi = 1$</span>. 
The point of concurrence can be written in the form <span class="math-container">$(1)$</span> with <span class="math-container">$$\alpha:\beta:\gamma \;=\; 1 : \phi : \frac1\epsilon \qquad \left( = \delta^0 : \phi^1 : \epsilon^{-1} \right) \tag{4}$$</span></p> <p>(The relation <span class="math-container">$\delta\epsilon\phi=1$</span> means that <span class="math-container">$(4)$</span> can be written in a variety of ways, none of which is satisfyingly symmetric. However, I think of <span class="math-container">$(4)$</span> as being written "relative to" vertex <span class="math-container">$A$</span>. The parameter <span class="math-container">$\alpha$</span> is naturally assigned its corresponding Ceva ratio, <span class="math-container">$\delta$</span>, raised to the <span class="math-container">$0$</span>-th power; but <span class="math-container">$\beta$</span> steals <span class="math-container">$\gamma$</span>'s Ceva ratio, <span class="math-container">$\phi$</span>, which is raised to the <span class="math-container">$1$</span>-th power (because it's "looking forward"); likewise, <span class="math-container">$\gamma$</span> gets <span class="math-container">$\beta$</span>'s Ceva ratio, <span class="math-container">$\epsilon$</span>, which is raised to the <span class="math-container">$(-1)$</span>-th power (because it's "looking backward").)</p> <p>Some ellipses, such as Mandart's, may be easier to describe using Ceva ratios <span class="math-container">$\delta:\epsilon:\phi$</span> than reciprocal-barycentric coordinates <span class="math-container">$\alpha:\beta:\gamma$</span>.</p> <hr> <p><strong>Actual Examples!</strong> Here's a family of the first six perfectly-packed ellipses in the equilateral triangle whose major axes align with a side of the triangle. 
Since the concurrence point <span class="math-container">$P$</span> lies on an axis of symmetry, we have <span class="math-container">$$\alpha:\beta:\gamma = 1 : \kappa: \kappa$$</span> for values of <span class="math-container">$\kappa$</span> we give below.</p> <p>First, a family portrait ...</p> <p><a href="https://i.stack.imgur.com/PQES8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PQES8.png" alt="enter image description here"></a></p> <p>... and now the individual cases:</p> <p><a href="https://i.stack.imgur.com/GlAkj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GlAkj.png" alt="enter image description here"></a><a href="https://i.stack.imgur.com/tqiEa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tqiEa.png" alt="enter image description here"></a></p> <p><span class="math-container">$$\begin{array}{ccc} n = 1 &amp; \qquad &amp; n = 2 \\ \kappa = 1 &amp; &amp; \kappa = \frac16\left(1 + \sqrt{7}\right)= 0.607\ldots \end{array}$$</span></p> <p><a href="https://i.stack.imgur.com/vqPls.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vqPls.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/HGOFF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HGOFF.png" alt="enter image description here"></a></p> <p><span class="math-container">$$\begin{array}{ccc} n = 3 &amp; \qquad &amp; n = 4 \\ \kappa = \frac1{12}\left(1 + \sqrt{13}\right) = 0.383\ldots &amp; &amp; \kappa = 0.275\ldots \\ &amp; &amp; 1 + 4 \kappa - 20 \kappa^2 - 48 \kappa^3 + 72 \kappa^4 = 0 \end{array}$$</span></p> <p><a href="https://i.stack.imgur.com/d9JcE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d9JcE.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/uVQDR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uVQDR.png" alt="enter image description here"></a></p> <p><span 
class="math-container">$$\begin{array}{ccc} n = 5 &amp; \qquad &amp; n = 6 \\ \kappa = 0.213\ldots &amp; &amp; \kappa = 0.173\ldots \\ 1 + 4 \kappa - 32 \kappa^2 - 72 \kappa^3 + 144 \kappa^4 = 0 &amp; &amp; 1 + 4 \kappa - 44 \kappa^2 - 96 \kappa^3 + 144 \kappa^4 = 0 \end{array}$$</span></p>
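The stated values of $\kappa$ can be sanity-checked numerically; the sketch below (mine, not part of the answer) locates roots of the quoted quartics for $n=4,5,6$ by bisection, using brackets chosen by hand near the stated decimals, and checks the closed forms for $n=2,3$ against their stated decimal expansions:

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    # plain bisection; assumes f changes sign on [lo, hi]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# quartic conditions quoted above for n = 4, 5, 6, with hand-picked brackets
quartics = {
    4: (lambda k: 1 + 4*k - 20*k**2 - 48*k**3 + 72*k**4, 0.25, 0.30),
    5: (lambda k: 1 + 4*k - 32*k**2 - 72*k**3 + 144*k**4, 0.20, 0.23),
    6: (lambda k: 1 + 4*k - 44*k**2 - 96*k**3 + 144*k**4, 0.16, 0.19),
}
stated = {2: (1 + math.sqrt(7)) / 6,    # 0.607...
          3: (1 + math.sqrt(13)) / 12,  # 0.383...
          4: 0.275, 5: 0.213, 6: 0.173}
assert abs(stated[2] - 0.607) < 1e-3 and abs(stated[3] - 0.383) < 1e-3
for n, (f, lo, hi) in quartics.items():
    assert abs(bisect(f, lo, hi) - stated[n]) < 1e-3
```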
2,653,401
<p>I have a question about the following partial fraction:</p> <p>$$\frac{x^4+2x^3+6x^2+20x+6}{x^3+2x^2+x}$$ After long division you get: $$x+\frac{5x^2+20x+6}{x^3+2x^2+x}$$ The factored form of the denominator is $$x(x+1)^2$$ So $$\frac{5x^2+20x+6}{x(x+1)^2}=\frac{A}{x}+\frac{B}{x+1}+\frac{C}{(x+1)^2}$$</p> <p>Why is the denominator under $C$ not simply $x+1$? The full denominator is $x$ times $(x+1)^2$, not $(x+1)^3$.</p>
BCLC
140,308
<p>Explanation 1: If you have only $(x+1)$, your denominators are too weak:</p> <p>$$\frac{A}{x}+\frac{B}{x+1}+\frac{C}{x+1} = \frac{A(x+1)}{x(x+1)}+\frac{Bx}{x(x+1)}+\frac{Cx}{x(x+1)}$$</p> <p>Thus, your denominator is quadratic whereas the desired denominator is $x(x+1)^2$, cubic. I mean, $\frac{C}{x+1}$ is redundant. Why? Just let $D=B+C$. Then we have $\frac{A}{x}+\frac{B}{x+1}+\frac{C}{x+1} = \frac{A}{x}+\frac{D}{x+1}$.</p> <hr> <p>Explanation 2: Would you do this?</p> <p>$$\frac{1}{x^2} = \frac{A}{x} + \frac{B}{x \ \text{and not} \ x^2 ?}$$</p> <hr> <p>Explanation 3: Actually, some texts suggest</p> <p>$$\frac{5x^2+20x+6}{x(x+1)^2} = \frac{\text{something}}{x}+\frac{\text{something}}{(x+1)^2}$$</p> <p>where $\text{something}$ is an arbitrary polynomial with degree 1 lower than the denominator, i.e.</p> <p>$$\frac{5x^2+20x+6}{x(x+1)^2} = \frac{D}{x}+\frac{Ex+F}{(x+1)^2}$$</p> <p>This is actually equivalent to what your text suggests! How?</p> <p>$$\frac{Ex+F}{(x+1)^2} = \frac{Ex+F+E-E}{(x+1)^2}$$</p> <p>$$= \frac{E(x+1)+F-E}{(x+1)^2} = \frac{E}{x+1} + \frac{F-E}{(x+1)^2}$$</p> <p>where $E=B$ and $F-E = C$</p> <hr> <p>If you're confused about the equivalence or about why there's a square, I think it's best to just play safe:</p> <p>Partial fractions, if I understand right, is pretty much an <a href="https://en.wikipedia.org/wiki/Ansatz" rel="nofollow noreferrer">ansatz</a> in basic calculus, iirc (the word 'ansatz' is used <a href="http://ostts.org/Stitz_Zeager_Precalculus_Book_By_Section_files/8-6.pdf" rel="nofollow noreferrer">here</a>, but I guess different from how I use it).</p> <p>Instead of trying to understand, I just go on the safe side and do:</p> <p>$$\frac{5x^2+20x+6}{x(x+1)^2}=\frac{A}{x}+\frac{B}{x+1}+\frac{Cx+D}{(x+1)^2}$$</p> <p>Similarly,</p> <p>$$\frac{5x^2+20x+6}{x^2(x+1)}=\frac{A}{x}+\frac{B}{x+1}+\frac{Cx+D}{x^2}$$</p> <p>$$\frac{5x^2+20x+6}{x(x+1)^3}=\frac{A}{x}+\frac{B}{x+1}+\frac{Cx+D}{(x+1)^2}+\frac{Ex^2+Fx+G}{(x+1)^3}$$</p> <p>Maybe 
there's some rule about how something is redundant, but I don't care (if I start caring, I may end up with your question), because I'm covering everything: whenever there's an exponent in the denominator, I write that many fractions for that factor, one for each power up to that exponent. The additional coefficients will end up as zero anyway.</p>
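For the concrete fraction in the question, the coefficients of the standard ansatz can be found by clearing denominators and evaluating at convenient points; a short sketch (mine, not from the answer):

```python
from fractions import Fraction

# Clearing denominators in the question's decomposition:
#   5x^2 + 20x + 6 = A(x+1)^2 + Bx(x+1) + Cx
A = Fraction(6)        # set x = 0:   6 = A
C = Fraction(9)        # set x = -1: -9 = -C
B = Fraction(5) - A    # compare x^2 coefficients: 5 = A + B

# spot-check the identity at a few points away from the poles
for x in [Fraction(1), Fraction(2), Fraction(-1, 2)]:
    lhs = (5*x**2 + 20*x + 6) / (x * (x + 1)**2)
    rhs = A/x + B/(x + 1) + C/(x + 1)**2
    assert lhs == rhs

print(A, B, C)  # 6 -1 9
```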
901,161
<p>Let's consider the manifold $S^1$.</p> <p>It is well known that we need two charts to cover this manifold.</p> <p>Nonetheless, we can cover the full space using a single coordinate $\theta$, which is just the angle about the center.</p> <p>Now, is this a general feature? I mean, is it always possible, in every manifold, to have a single coordinate system that covers points lying in different charts, just as in $S^1$?</p>
Ted Shifrin
71,348
<p>No, your "single coordinate" is only a smooth (continuous) function on $S^1\setminus\{(1,0)\}$.</p>
901,161
<p>Let's consider the manifold $S^1$.</p> <p>It is well known that we need two charts to cover this manifold.</p> <p>Nonetheless, we can cover the full space using a single coordinate $\theta$, which is just the angle about the center.</p> <p>Now, is this a general feature? I mean, is it always possible, in every manifold, to have a single coordinate system that covers points lying in different charts, just as in $S^1$?</p>
Andrew D. Hwang
86,418
<p>Perhaps you're "really" asking whether or not every $n$-manifold $M$ has $\mathbf{R}^{n}$ as a covering space, i.e., whether there's a single (smooth) map $\pi:\mathbf{R}^{n} \to M$ whose restrictions to suitably small subsets define coordinate charts on $M$.</p> <p>If so, the answer is "no"; even among spheres, only $S^{1}$ has this property.</p>
1,305,661
<p>How can I solve this question?</p> <ul> <li>Find all solutions exactly for $2\sin^2 x - \sin x = 0$</li> </ul> <p>My answer was $( 2k\pi,\pi+2k\pi, 11\pi/6, 7\pi/6)$, but my teacher's answer was $( 2k\pi,\pi+2k\pi, \pi/6 +2k\pi, 5\pi/6 +2k\pi )$</p>
Dr. Sonnhard Graubner
175,066
<p>HINT: solve the equation $$\sin(x)(2\sin(x)-1)=0$$</p>
1,305,661
<p>How can I solve this question?</p> <ul> <li>Find all solutions exactly for $2\sin^2 x - \sin x = 0$</li> </ul> <p>My answer was $( 2k\pi,\pi+2k\pi, 11\pi/6, 7\pi/6)$, but my teacher's answer was $( 2k\pi,\pi+2k\pi, \pi/6 +2k\pi, 5\pi/6 +2k\pi )$</p>
Ivo Terek
118,056
<p><strong>Hint:</strong> $$2\sin^2x-\sin x = 0 \implies \color{red}{2}\sin^2x + \color{blue}{(-1)}\sin x + \color{green}{0} = 0,$$ so we have $$\sin x = \frac{-\color{blue}{(-1)}\pm \sqrt{\color{blue}{(-1)}^2 - 4\cdot \color{red}{2}\cdot\color{green}{0}}}{2\cdot \color{red}{2}} = \frac{1 \pm 1}{4} = \begin{cases}\frac{1}{2} \\ 0\end{cases}.$$</p> <p>See when $\sin x = 1/2$ or $0$ and consider all possible values.</p>
1,305,661
<p>How can I solve this question?</p> <ul> <li>Find all solutions exactly for $2\sin^2 x - \sin x = 0$</li> </ul> <p>My answer was $( 2k\pi,\pi+2k\pi, 11\pi/6, 7\pi/6)$, but my teacher's answer was $( 2k\pi,\pi+2k\pi, \pi/6 +2k\pi, 5\pi/6 +2k\pi )$</p>
graydad
166,967
<p>We have solutions to $2y^2-y=0$ as $$y = \frac{1 \pm \sqrt{1-0}}{4} \implies y = 0,\frac{1}{2}$$ With each pass around the unit circle we know $y = \sin(x)$ takes the value $0$ twice, and the value $\frac{1}{2}$ twice. Can you see why your teacher is correct now?</p>
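A quick numeric check (my own, not part of the answers above) confirms the teacher's four solution families and shows why $11\pi/6$ and $7\pi/6$ from the asker's attempt fail: they solve $\sin x = -\tfrac12$, not the given equation.

```python
import math

f = lambda x: 2 * math.sin(x) ** 2 - math.sin(x)

# the teacher's four solution families, sampled over several periods
families = [lambda k: 2 * k * math.pi,
            lambda k: math.pi + 2 * k * math.pi,
            lambda k: math.pi / 6 + 2 * k * math.pi,
            lambda k: 5 * math.pi / 6 + 2 * k * math.pi]
for fam in families:
    for k in range(-2, 3):
        assert abs(f(fam(k))) < 1e-9

# 11*pi/6 and 7*pi/6 give sin x = -1/2, so f = 2*(1/4) - (-1/2) = 1, not 0
assert f(11 * math.pi / 6) > 0.5 and f(7 * math.pi / 6) > 0.5
```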
221,130
<p>Let $P$ be a simple convex polytope in $\mathbb{R}^d$ (that is, every vertex belongs to exactly $d$ facets). Given the collection of outer normals to the facets of $P$, the combinatorics of $P$ may of course differ. But is this information enough to reconstruct the number of faces of $P$ of all dimensions? If yes, what is the specific procedure to do this? I am looking at what happens when we move facets parallel to themselves and pass through a `singularity', i.e. a non-simple polytope, and at first glance it looks like the $f$-vector is preserved, but I am never sure with such things. And this is not a good proof anyway; I would prefer a more direct argument. </p> <p>If the statement is false, however, I wonder for which specific collections it is still true. </p>
Joseph O'Rourke
6,094
<p>Not an answer; just an illustration: <hr /> &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <a href="https://i.stack.imgur.com/ya7nk.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/ya7nk.jpg" alt="fvector"></a> <br /> &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <sup> $f$-vector left &amp; right: $(V,E,F)=(6,9,5)$. Middle polyhedron is not simple. </sup></p> <hr />
376,772
<blockquote> <p>Find a polynomial $q(a)$ of degree less than or equal to $2$ that satisfies the conditions $q(a_0)=b_0, q'(a_0)=b'_0, \ \text{and} \ q'(a_1)=b'_1,$ where $a_0,a_1,b_0,b'_0,b'_1\in \mathbb{R}$ and $a_0\ne a_1$. And give a formula of the form $q(a)=b_0k_0(a)+b'_0k_1(a)+b'_1k_2(a).$</p> </blockquote> <p>How can I do this question? I am self-teaching numerical analysis, and this question is in the book <em>An Introduction to Numerical Analysis, by Atkinson</em>, but I don't know how to do it. </p>
Ross Millikan
1,827
<p>A polynomial $q(a)$ of degree less than $2$ is linear, so you can write $q(a)=ca+d$; you will fail with this unless $b_0'=b_1'$. Probably you are asked for a polynomial of degree less than or equal to $2$, so you should write $q(a)=ca^2+da+e$. Now plug in the restrictions you have on $q$ and solve for $c,d,e$.</p>
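Carrying out Ross's suggestion in closed form gives one explicit solution; the formula below is my own (verified against the three conditions), and it also exhibits the requested functions $k_0, k_1, k_2$:

```python
# One explicit solution (my own closed form, not quoted from the book):
#   q(a) = b0*k0(a) + b0p*k1(a) + b1p*k2(a),  with
#   k0 = 1,  k1 = (a-a0) - (a-a0)^2/(2(a1-a0)),  k2 = (a-a0)^2/(2(a1-a0)).
def make_q(a0, a1, b0, b0p, b1p):
    d = a1 - a0
    q = lambda a: b0 + b0p * ((a - a0) - (a - a0)**2 / (2*d)) + b1p * (a - a0)**2 / (2*d)
    qp = lambda a: b0p * (1 - (a - a0) / d) + b1p * (a - a0) / d   # q'(a)
    return q, qp

# check the three interpolation conditions on sample data
a0, a1, b0, b0p, b1p = 1.0, 3.0, 2.0, -1.0, 5.0
q, qp = make_q(a0, a1, b0, b0p, b1p)
assert abs(q(a0) - b0) < 1e-12 and abs(qp(a0) - b0p) < 1e-12 and abs(qp(a1) - b1p) < 1e-12
```

Here $k_0(a)=1$, $k_1(a)=(a-a_0)-\frac{(a-a_0)^2}{2(a_1-a_0)}$, and $k_2(a)=\frac{(a-a_0)^2}{2(a_1-a_0)}$, which matches the form the book asks for.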
3,900,679
<p>So there's this one thing I can't wrap my head around.</p> <p>The problem is:</p> <p>Use the change of variables <span class="math-container">$x = \sinh u$</span> to compute the indefinite integral <span class="math-container">\begin{equation*} \int \frac{dx}{\sqrt{1+x^2}} \end{equation*}</span></p> <p>Solution:</p> <p><span class="math-container">\begin{align*} \frac{dx}{du}=\sinh'u=\cosh u \end{align*}</span> so that <span class="math-container">\begin{align*} \int \frac{\cosh u}{\sqrt{1+\sinh^2u}}\,du = \int \frac{\cosh u}{\sqrt{\cosh^2u}}\,du = \int\frac{\cosh u}{\cosh u}\,du= \int 1 \,du = u + C = \sinh^{-1}x+C \end{align*}</span></p> <p>and I understand every step except the first one, <span class="math-container">$\frac{dx}{du}=\sinh'u$</span>. I'm just generally confused when it comes to change of variables and how to use the Leibniz notation to understand what's going on.</p>
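Independent of the notation question, the final answer can be sanity-checked by differentiating numerically (my own check, not part of the post):

```python
import math

# If the antiderivative is sinh^{-1}(x) + C, then d/dx asinh(x) must equal 1/sqrt(1+x^2).
for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    h = 1e-6
    numeric = (math.asinh(x + h) - math.asinh(x - h)) / (2 * h)   # central difference
    assert abs(numeric - 1 / math.sqrt(1 + x * x)) < 1e-8
```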
Will Jagy
10,400
<p>CW</p> <p>positivity is immediate when <span class="math-container">$x&gt;1$</span></p> <p>Relation to zeta</p> <p><a href="https://i.stack.imgur.com/lRS2m.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lRS2m.png" alt="enter image description here" /></a></p> <p>alright, there is a functional equation for the derivative,</p> <p><a href="https://i.stack.imgur.com/qj1Tg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qj1Tg.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/aRJhC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aRJhC.png" alt="enter image description here" /></a></p>
3,057,200
<blockquote> <p>Does there exist an analytic function <span class="math-container">$f:D\rightarrow\mathbb C$</span>, where <span class="math-container">$D$</span> is a domain containing the unit disk, such that <span class="math-container">$f(z) = e^{i \text{Im} z}$</span> on the unit circle <span class="math-container">$|z|=1$</span>?</p> </blockquote> <p>I'm supposed to rely on the fact that if <span class="math-container">$f: B_R=\left \{ |z|\leq R \right \}\rightarrow \mathbb C$</span> is analytic and even on <span class="math-container">$(-R,R)$</span> (so <span class="math-container">$f(x)=f(-x)$</span> when <span class="math-container">$x$</span> is real), then it is also even on <span class="math-container">$B_R$</span>, outside the real number line. </p> <p>It seems that we need to show that <span class="math-container">$f$</span> isn't even on <span class="math-container">$B_1$</span> but is even on <span class="math-container">$(-1,1)$</span>, which I cannot show. We only know how <span class="math-container">$f$</span> behaves on the unit circle!</p>
Greg Martin
16,078
<p>Sequence of hints:</p> <ul> <li>If <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are two such functions, show they must be equal.</li> <li>If <span class="math-container">$f(z)$</span> is such a function, show that <span class="math-container">$g(z) = \overline{f(\bar z)}$</span> (its "reflection in the real axis") is another such function.</li> <li>If <span class="math-container">$f(z)$</span> is such a function, show that <span class="math-container">$f(x)$</span> is real-valued for all <span class="math-container">$x\in(-1,1)$</span>.</li> <li>If <span class="math-container">$f(z)$</span> is such a function, show that <span class="math-container">$h(z) = \overline{f(-\bar z)}$</span> (its "reflection in the imaginary axis") is another such function.</li> <li>If <span class="math-container">$f(z)$</span> is such a function, show that <span class="math-container">$f(x)=f(-x)$</span> for all <span class="math-container">$x\in(-1,1)$</span>.</li> </ul>
3,494,820
<p>Yesterday I learned about dual spaces when reading about spaces of linear maps. The concept of a linear map and why linear maps form a vector space is clear to me. But there are some details about the dual space and its basis that I could not fully understand.</p> <p>The text I am reading states the following:</p> <blockquote> <p>Furthermore, for fixed vector spaces <span class="math-container">$U$</span> and <span class="math-container">$V$</span> over <span class="math-container">$K$</span>, the operations of addition and scalar multiplication on the set <span class="math-container">$\operatorname{Hom}_K(U,V)$</span> of all linear maps from <span class="math-container">$U$</span> to <span class="math-container">$V$</span> makes <span class="math-container">$\operatorname{Hom}_K(U,V)$</span> into a vector space over <span class="math-container">$K$</span>.</p> <p>Given a vector space <span class="math-container">$U$</span> over a field <span class="math-container">$K$</span>, the vector space <span class="math-container">$U^{*} = \operatorname{Hom}_K(U,K)$</span> plays a special role. It is often called the dual space or the space of covectors of <span class="math-container">$U$</span>. One can think of coordinates as elements of <span class="math-container">$U^{*}$</span>. Indeed, suppose that <span class="math-container">$U$</span> is finite-dimensional and let <span class="math-container">$e_{1},...,e_{n}$</span> be a basis of <span class="math-container">$U$</span>. 
Every <span class="math-container">$x \in U$</span> can be uniquely written as <span class="math-container">$$x=\alpha_{1}e_{1}+...+\alpha_{n}e_{n}, \alpha_{i} \in K.$$</span> The scalars <span class="math-container">$\alpha_{1},...\alpha_{n}$</span> depend on <span class="math-container">$x$</span> as well as on the choice of basis, so for each <span class="math-container">$i$</span> one can write the coordinate function <span class="math-container">$$e^{i}: U \to K, e^{i}(x)=\alpha_{i}.$$</span> It is routine to check that each <span class="math-container">$e^{i}$</span> is a linear map, and indeed the functions <span class="math-container">$e^{1},...,e^{n}$</span> form a basis of the dual space <span class="math-container">$U^{*}$</span>.</p> </blockquote> <p>Now I have two questions:</p> <p>1) The text states that <span class="math-container">$\operatorname{Hom}_K(U,V)$</span> is a vector space for fixed <span class="math-container">$U$</span> and <span class="math-container">$V$</span>. This is perfectly clear to me, but is it correct that the dual space is <span class="math-container">$U^{*}=\operatorname{Hom}_K(U,K)$</span>, i.e. it consists of all linear maps from <span class="math-container">$U$</span> to the field <span class="math-container">$K$</span>? At first I thought this was a typo, but from what I've read on wikipedia and from other sources the notation seems to be correct. It also seems to make no sense to say the coordinate functions form a basis for <span class="math-container">$\operatorname{Hom}_K(U,V)$</span>.</p> <p>2) It is easy to see that the coordinate functions <span class="math-container">$e^{1},...,e^{n}$</span> are linear maps and I have also tried to check that the claim that they form a basis for <span class="math-container">$U^{*}$</span>. 
However, I am unsure if my proof is correct, and I think this is mainly because of my confusion stated in the first question.</p> <p>My proof goes as follows:</p> <p>We need to prove that <span class="math-container">$e^{1},...,e^{n}$</span> are linearly independent and span <span class="math-container">$U^{*}$</span>.</p> <p>First note that linear maps are uniquely determined by their action on a basis. Now let <span class="math-container">$0_{UK}:U \to K, 0_{UK}(u)=0$</span> <span class="math-container">$\forall u$</span> be the zero map. To prove linear independence we need to show that</p> <p><span class="math-container">$(*)$</span> <span class="math-container">$b_{1}e^{1}+...+b_{n}e^{n}=0_{UK} \implies b_{i}=0$</span> <span class="math-container">$\forall i$</span></p> <p>or in other words the only linear combination of <span class="math-container">$e^{1},...,e^{n}$</span> that gives <span class="math-container">$0_{UK}$</span> is the trivial linear combination. Now assume <span class="math-container">$b_{i}\neq 0$</span> for some <span class="math-container">$i$</span>; then applying the combination to <span class="math-container">$x=e_{i}$</span> gives <span class="math-container">$b_{i}\neq 0$</span>, so <span class="math-container">$b_{1}e^{1}+...+b_{n}e^{n}$</span> is not the zero map.</p> <p>To prove that <span class="math-container">$e^{1},...,e^{n}$</span> span <span class="math-container">$U^{*}$</span> we need to prove that any vector in <span class="math-container">$U^{*}$</span> (every linear map <span class="math-container">$T$</span>) can be written as a linear combination of <span class="math-container">$e^{1},...,e^{n}$</span>, i.e. <span class="math-container">$$T(u)=k_{1}e^{1}(u)+...+k_{n}e^{n}(u)$$</span> for some scalars <span class="math-container">$k_{i}$</span> and any vector <span class="math-container">$u \in U$</span>. 
To see this note that <span class="math-container">$$\begin{align*} T(u)&amp;=T(\alpha_{1}e_{1}+...+\alpha_{n}e_{n}) \\ &amp;=\alpha_{1}T(e_{1})+...+\alpha_{n}T(e_{n}) \\ &amp;=e^{1}(u)T(e_{1})+...+e^{n}(u)T(e_{n}) \end{align*}$$</span> so we may take <span class="math-container">$k_{i}=T(e_{i})$</span>, where the <span class="math-container">$T(e_{i})$</span> are scalars by definition of <span class="math-container">$T$</span>.</p> <p>Is my proof correct or have I missed anything?</p> <p>Thanks very much for any hints and comments.</p>
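The last step of the spanning argument can be illustrated concretely in a finite-dimensional example (my own sketch, not part of the question): for $U=K^3$ with the standard basis, any functional $T$ satisfies $T=\sum_i T(e_i)\,e^i$.

```python
# U = K^3 with the standard basis; coord(i) is the coordinate functional e^i.
basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
coord = lambda i: (lambda u: u[i])

t = (7, -2, 5)                        # an arbitrary functional: t[i] = T(e_i)
T = lambda u: sum(t[i] * u[i] for i in range(3))

# T agrees with sum_i T(e_i) * e^i on every vector tested
for u in [(1, 2, 3), (-4, 0, 9), (0, 0, 0)]:
    assert T(u) == sum(T(basis[i]) * coord(i)(u) for i in range(3))
```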
Claude Leibovici
82,404
<p>A possible approach would be to prove that <span class="math-container">$$f(x)=e^{\frac{x^2}{2}} (x+1)-e^x$$</span> is positive if <span class="math-container">$x &gt;0$</span>.</p> <p>Using the Taylor expansion at <span class="math-container">$x=0$</span>, you would have <span class="math-container">$$f(x)=\frac{x^3}{3}+\frac{x^4}{12}+\frac{7 x^5}{60}+\frac{7 x^6}{360}+O\left(x^7\right)$$</span> and, if you continue the expansion, you should notice that all coefficients are positive. This means that the inequality holds for any <span class="math-container">$x &gt;0$</span>.</p>
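A numeric spot-check of the claimed positivity and of the leading Taylor term (my own check, not part of the answer):

```python
import math

# f(x) = e^{x^2/2}(x+1) - e^x should be positive for x > 0,
# with leading behavior x^3/3 near 0.
f = lambda x: math.exp(x * x / 2) * (x + 1) - math.exp(x)
for x in [0.01, 0.1, 0.5, 1.0, 2.0, 5.0]:
    assert f(x) > 0
assert abs(f(0.01) / (0.01**3 / 3) - 1) < 0.01   # matches the x^3/3 leading term
```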
2,917,024
<p>Let $H$ be a Hilbert space and let $\{e_n\},\ n=1,2,3,\ldots$ be an orthonormal basis of $H$. Suppose $T$ is a bounded linear operator on $H$. Then which of the following cannot be true? $$(a)\quad T(e_n)=e_1, n=1,2,3,\ldots$$ $$(b)\quad T(e_n)=e_{n+1}, n=1,2,3,\ldots$$ $$(c)\quad T(e_n)=e_{n-1} , n=2,3,4,\ldots , \,\,T(e_1)=0$$</p> <p>I think $(a)$ cannot be true because $e_1$ cannot span the range space. I really don't know how to approach this problem. Could you please give me some hints? Thank you very much.</p>
Angina Seng
436,618
<p>Let's look at (a).</p> <p>If $T$ satisfies $T(e_n)=e_1$ for all $n$, then $$T(e_1+e_2+\cdots+e_n)=ne_1.$$ But $\|e_1+e_2+\cdots+e_n\|=\sqrt n$ and $\|ne_1\|=n$.</p>
338,155
<p>I'm having trouble with this surface integral: $$ \iint\limits_S {\sqrt{ \left(\frac{x^2}{a^4}+\frac{y^2}{b^4}+\frac{z^2}{c^4}\right)}}{dS}, $$ where $$ S = \{(x,y,z)\in\mathbb{R}^3: \frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2}= 1\}. $$</p>
Ron Gordon
53,268
<p>What an interesting integral. I had to resort to referring to the <a href="http://mathworld.wolfram.com/Ellipsoid.html">first differential form</a> of the spherical parametrization, but doing that, I am amazed at how this turns out.</p> <p>We parametrize in the usual way:</p> <p>$$x=a\sin{u} \cos{v}$$ $$y=b\sin{u} \sin{v}$$ $$z=c \cos{u}$$</p> <p>where $u \in [0,\pi)$ and $v \in [0,2 \pi)$. The coefficients of the first differential form are</p> <p>$$E=(a^2 \sin^2{v}+b^2 \cos^2{v}) \sin^2{u}$$ $$F=(b^2-a^2) \sin{u} \cos{u} \sin{v} \cos{v}$$ $$G=(a^2 \cos^2{v}+b^2 \sin^2{v}) \cos^2{u}+c^2 \sin^2{u}$$</p> <p>The stated integral is equal to</p> <p>$$\int_0^{\pi} du \: \int_0^{2 \pi} dv \: \sqrt{E G-F^2} \sqrt{\frac{\sin^2{u} \cos^2{v}}{a^2} + \frac{\sin^2{u} \sin^2{v}}{b^2} + \frac{\cos^2{u}}{c^2}}$$</p> <p>There is an enormous amount of algebra involved in simplifying the integrand. Miraculously, it simplifies a lot, and the integral is equal to</p> <p>$$\frac{1}{a b c} \int_0^{\pi} du \: \sin{u} \int_0^{2 \pi} dv\: (a^2 b^2 \cos^2{u} + b^2 c^2 \sin^2{u} \cos^2{v} + a^2 c^2 \sin^2{u} \sin^2{v})$$</p> <p>I really couldn't believe this myself at first, but it does check out for the case $a=b=c$. In any case, these integrals are much easier than one would expect from first seeing this problem, and the reader should have no trouble evaluating them by hand. The result is</p> <p>$$\frac{4 \pi}{3} \left ( \frac{a\, b}{c} + \frac{a\, c}{b} + \frac{b\, c}{a} \right)$$</p>
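One can also double-check the final formula without redoing the algebra: a crude midpoint-rule integration of the simplified integrand agrees with the closed form (my addition; `numeric` and `closed_form` are names I made up for this sketch).

```python
# Numeric spot check (my addition): midpoint rule on the simplified
# integrand vs. the closed-form answer (4*pi/3)(ab/c + ac/b + bc/a).
import math

def closed_form(a, b, c):
    return (4 * math.pi / 3) * (a*b/c + a*c/b + b*c/a)

def numeric(a, b, c, n=400):
    du, dv = math.pi / n, 2 * math.pi / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * du
        su, cu = math.sin(u), math.cos(u)
        for j in range(n):
            v = (j + 0.5) * dv
            total += (su / (a*b*c)) * (a*a*b*b*cu*cu
                                       + b*b*c*c*su*su*math.cos(v)**2
                                       + a*a*c*c*su*su*math.sin(v)**2) * du * dv
    return total

print(abs(numeric(1, 2, 3) - closed_form(1, 2, 3)))  # small
```

For $a=b=c$ the integrand collapses to $\sin u$ and both sides reduce to $4\pi a$, matching the sphere check in the answer.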
81,349
<p>My current research took me to the realm of PDEs (which for a long time used to be terra incognita for me, as I am a probabilist). The equations that I am working with are mostly of second order, or Hamilton-Jacobi equations. It's no surprise that I am dealing with various notions of weak solutions (viscosity solutions mostly) without a hope of getting a classical solution. So here is my question:</p> <p>is it possible that an equation $F(x,u,Du,D^2u)=0$ which is nondegenerate, nonsingular, with smooth $F$, has some sort of weak solution (viscosity or other) but doesn't have a classical solution?</p> <p>I saw many results on regularity of solutions in various settings, but none of the theorems I encountered even seem to come close to answering my question without awkward assumptions. I'd be glad if someone who is familiar with PDEs could tell me if such general theorems exist nowadays.</p>
Jeff
18,406
<p>You might also be interested in the general theory of viscosity solutions to second order PDE <a href="http://www.arxiv.org/pdf/math/9207212" rel="nofollow">http://www.arxiv.org/pdf/math/9207212</a></p> <p>I would have posted as a comment but do not have the privilege yet. </p>
81,349
<p>My current research took me to the realm of PDEs (which for a long time used to be terra incognita for me, as I am a probabilist). The equations that I am working with are mostly of second order, or Hamilton-Jacobi equations. It's no surprise that I am dealing with various notions of weak solutions (viscosity solutions mostly) without a hope of getting a classical solution. So here is my question:</p> <p>is it possible that an equation $F(x,u,Du,D^2u)=0$ which is nondegenerate, nonsingular, with smooth $F$, has some sort of weak solution (viscosity or other) but doesn't have a classical solution?</p> <p>I saw many results on regularity of solutions in various settings, but none of the theorems I encountered even seem to come close to answering my question without awkward assumptions. I'd be glad if someone who is familiar with PDEs could tell me if such general theorems exist nowadays.</p>
Pawel
19,379
<p>Just to make everything clear: the answer to my question is &quot;yes, it is possible&quot;.</p> <p>An appropriate example is in the paper linked by pgassiat in the comments.</p>
3,873,438
<blockquote> <p>A particle of unit mass moves under the action of <span class="math-container">$n$</span> forces directed towards <span class="math-container">$n$</span> fixed points <span class="math-container">$A_1,A_2,...,A_n$</span>. The force towards <span class="math-container">$A_i$</span> is of magnitude <span class="math-container">$k_i$</span> times the distance of the particle from <span class="math-container">$A_i$</span>. When the particle is at B, its acceleration is zero. When it is at another point <span class="math-container">$C$</span> which is at a distance <span class="math-container">$d$</span> from B, its acceleration is <span class="math-container">$f$</span>. Find the magnitude of <span class="math-container">$f$</span> in terms of <span class="math-container">$k_1, k_2, ... k_n$</span> and <span class="math-container">$d$</span>.</p> </blockquote> <p>Source <a href="https://imgur.com/6HHMj8X" rel="nofollow noreferrer">https://imgur.com/6HHMj8X</a></p> <p>I'm interested in what a proper solution to this question would be. I'm not sure if mine is correct and it feels very cheesed.</p> <p>Given that the magnitude of <span class="math-container">$f$</span> is independent of the positions of the <span class="math-container">$A_i$</span> points, we can assume without loss of generality that all of the <span class="math-container">$n$</span> points are on top of each other. Therefore, B must also be on top of them, and if I move a distance <span class="math-container">$d$</span> away from B, I am a distance <span class="math-container">$d$</span> away from all of the points. Therefore <span class="math-container">$f=d(k_1+k_2+k_3+...+k_n)$</span></p>
red whisker
833,796
<p>Treat all points as vectors in three space. At <span class="math-container">$B$</span>, the total force is <span class="math-container">$\sum_{i=1}^nk_i(A_i-B) = 0$</span> and at <span class="math-container">$C$</span>, the total force is <span class="math-container">$\sum_{i=1}^nk_i(A_i-C) = \vec{f}$</span> where <span class="math-container">$\vec{f}$</span> is the acceleration vector of magnitude <span class="math-container">$f$</span> (the particle is of unit mass). Subtracting <span class="math-container">$$\sum_{i=1}^nk_i(B-C) = \vec{f}$$</span> giving <span class="math-container">$f = d\sum_{i=1}^nk_i$</span>.</p>
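The vector identity is easy to illustrate numerically (my addition; the random setup and all names are made up): place random centres $A_i$, compute the equilibrium point $B=\sum k_iA_i/\sum k_i$, pick any other point $C$, and check that the force there has magnitude $d\sum k_i$.

```python
# Numerical illustration (my addition) of f = d * sum(k_i).
import random, math

random.seed(1)
ks = [random.uniform(0.5, 2.0) for _ in range(5)]
As = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(5)]
K = sum(ks)

# Equilibrium: sum k_i (A_i - B) = 0  =>  B = (sum k_i A_i) / K.
B = [sum(k * A[j] for k, A in zip(ks, As)) / K for j in range(3)]
C = [B[0] + 0.3, B[1] - 0.4, B[2] + 1.2]       # an arbitrary other point
d = math.sqrt(sum((b - c) ** 2 for b, c in zip(B, C)))

# Total force (per unit mass) at C and its magnitude.
F = [sum(k * (A[j] - C[j]) for k, A in zip(ks, As)) for j in range(3)]
f = math.sqrt(sum(x * x for x in F))
print(abs(f - d * K))  # ~ 0 up to rounding
```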
2,225,932
<p>If given a convex function $f: \mathbb{R} \to \mathbb{R}$, then the conjugate function $f^*$ is defined as $$f^*(s) = \sup_{t \in \mathbb{R}} (st-f(t))$$</p> <p>Now I want to understand: what is the physical interpretation of this conjugate function? What is the intuition behind it? Please help me. </p>
Amin
264,002
<p>Suppose <span class="math-container">$f: \mathbb{R}^n \to \mathbb{R}$</span>, then the conjugate function is: <span class="math-container">$$f^*(\mathbf{y}) \triangleq \sup_{x\in \text{dom} f} [\mathbf{y}^ \mathsf{T}\mathbf{x}-f(\mathbf{x})],$$</span> where: <span class="math-container">$\text{dom}f^* = \{\textbf{y}|f^*(\mathbf{y})&lt;+\infty\}$</span>. Then, it is clear from the definition that it is a function <em>bounded above</em>.</p> <p>Some notions and descriptions about the conjugate function:</p> <ol> <li><p>Even if <span class="math-container">$f(x)$</span> is a <strong>non-convex</strong> function, then the conjugate function <span class="math-container">$f^*(\mathbf{y})$</span> is <em>guaranteed</em> to be a <strong>convex</strong> function in <span class="math-container">$\mathbf{y}$</span>. [Notice: The conjugate function is the <em>pointwise supremum</em> of a set of affine functions in <span class="math-container">$\mathbf{y}$</span> (here, <em>convexity</em> aspect of affine function matters) , so, it is <em>convex</em>.]</p> </li> <li><p>As an interpretation, it somehow represents (encodes) <strong>the convex hull of the epigraph</strong> of <span class="math-container">$f$</span> in terms of its supporting hyperplanes (The concept behind this is depicted in figure below.).<a href="https://i.stack.imgur.com/OYys9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OYys9.png" alt="enter image description here" /></a></p> </li> </ol> <ol start="3"> <li><p>There is an alternative way to obtain the <em>(Lagrange) dual function</em> of the primal problem by using conjugate function.</p> </li> <li><p>If the function <span class="math-container">$f$</span> is closed and convex, then <span class="math-container">$f^{**} = f$</span> (conjugate of conjugate is function itself).</p> </li> <li><p>Let <span class="math-container">$g_{f}$</span> be the convex envelope of <span class="math-container">$f$</span>. 
If <span class="math-container">$f$</span> is a lower semi-continuous function and its domain is compact (i.e., closed and bounded), then <span class="math-container">$f^{**} = g_{f}$</span> [4].</p> </li> </ol> <hr /> <p>References for more info.:</p> <ol> <li>Convex Optimization for Signal Processing and Communications, by Chong-Yung Chi.</li> <li>Convex Optimization by Stephen Boyd.</li> <li>Convex analysis and optimization, by D. P. Bertsekas.</li> <li>Lagrange multipliers and nonconvex programs, by J. Falk.</li> </ol>
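The definition is also easy to probe numerically. Here is a tiny grid-based sketch (my addition, not from the references above; the grid and names are mine) showing the classical fact that $f(x)=x^2/2$ is its own conjugate:

```python
# Numerical illustration (my addition): for f(x) = x^2/2 one has
# f*(y) = sup_x (xy - f(x)) = y^2/2, i.e. f is self-conjugate.
xs = [k / 1000 for k in range(-10000, 10001)]  # grid on [-10, 10]

def conjugate_on_grid(f, y):
    return max(y * x - f(x) for x in xs)

f = lambda x: 0.5 * x * x
errs = [abs(conjugate_on_grid(f, y) - 0.5 * y * y)
        for y in (-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0)]
print(max(errs))  # essentially zero (the maximiser x = y lies on the grid)
```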
106,838
<p>Suppose $n &gt; 1$ is a natural number. Suppose $K$ and $L$ are fields such that the general linear groups of degree $n$ over them are isomorphic, i.e., $GL(n,K) \cong GL(n,L)$ as groups. Is it necessarily true that $K \cong L$?</p> <p>I'm also interested in the corresponding question for the special linear group in place of the general linear group.</p> <p>NOTE 1: The statement is false for $n = 1$, because $GL(1,K) \cong K^\ast$ and non-isomorphic fields can have isomorphic multiplicative groups. For instance, all countable subfields of $\mathbb{R}$ that are closed under the operation of taking rational powers of positive elements have isomorphic multiplicative groups.</p> <p>NOTE 2: It's possible to use the examples of NOTE 1 to construct non-isomorphic fields whose additive groups are isomorphic <em>and</em> whose multiplicative groups are isomorphic.</p>
Jim Humphreys
4,231
<p>The answer to the question is yes, though I don't have all the old literature at my fingertips. This kind of question for various classes of linear groups has a long history in the study of homomorphisms and isomorphisms of classical groups and then other algebraic groups (van der Waerden, Dieudonne, ...) The most comprehensive treatment was given by Borel and Tits in their Ann. of Math 97 (1973) paper, but emphasizing simple types rather than general reductive groups. Anyway, for general linear groups the ideas occur much earlier and also involve the uniqueness of $n$. (As you point out, the case $n=1$ has a different flavor.) I'll check the sources, but you could also work back from the references in Borel-Tits.</p> <p>P.S. Note that any isomorphism (of abstract groups) between two general linear groups induces an isomorphism of the derived groups. Given $n&gt;1$, these are <em>special linear groups</em> and fit well into the older or newer sources I mentioned. (Probably there is enough detail in the 1928 Hamburg paper by Schreier and van der Waerden to settle your question, but I confess I've never gone back that far in the literature.)</p> <p>ADDED: One relatively modern reference I should point out is <em>Lectures on Linear Groups</em> by O.T. O'Meara, CBMS 22, Amer. Math. Soc., 1974. While O'Meara's own research interest at the time was in the direction of linear groups over various rings of interest, these lecture notes also incorporate older work over fields. See in particular his Sections 5.5-5.6 for theorems most relevant to the question asked here. </p> <p>Roughly speaking, the central concern in these isomorphism theorems is what happens to unipotent elements (classically, transvections and the like). In the setting of classical matrix groups, the original techniques rely on the underlying geometry of the situation. But in the broader treatment by Borel-Tits the structure theory of reductive groups (Jordan decomposition, Bruhat decomposition, etc.) 
plays the most prominent role. For general or special linear groups, it's hard for me to judge what approach is really "simplest".</p>
109,292
<p>Let $\kappa$ be a regular, uncountable cardinal. Let $A \subseteq \kappa$ be an unbounded set, i.e. $\operatorname{sup}A=\kappa$. Let $C$ denote the set of limit points $&lt; \kappa$ of $A$, i.e. the non-zero limit ordinals $\alpha &lt; \kappa$ such that $\operatorname{sup}(A \cap \alpha) = \alpha$. How can I show that $C$ is unbounded? I cannot even show that $C$ has <em>any</em> points, let alone that it's unbounded. (Jech page 92) </p> <p>Thanks for any help.</p>
azarel
20,998
<p>Fix $\xi\in \kappa$; since $A$ is unbounded there is an $\alpha_0\in A$ so that $\xi&lt;\alpha_0$. Now, using the unboundedness of $A$ again, construct recursively a strictly increasing sequence $\langle \alpha_n: n\in \omega\rangle$ of elements of $A$, at each stage choosing $\alpha_{n+1}\in A$ with $\alpha_{n+1}&gt;\alpha_n$. Let $\alpha=\sup\{\alpha_n: n\in \omega\}.$ Since $\kappa$ is regular and uncountable, we have $\alpha&lt;\kappa.$ It is also easy to see that $\sup(A\cap\alpha)=\alpha$.</p>
3,007,044
<p>Sequence <span class="math-container">$a_n$</span> is defined as <span class="math-container">$a_n = \ln (1 + a_{n-1})$</span>, where <span class="math-container">$a_0 &gt; -1$</span>. Find all <span class="math-container">$a_0$</span> for which the sequence converges.</p> <p>Using arithmetic properties of limits, I've found that if the sequence converges, then it converges to <span class="math-container">$0$</span>, but I don't know how to prove it and find <span class="math-container">$a_0$</span>.</p>
user
505,767
<p><strong>HINT</strong></p> <p>We can show that for <span class="math-container">$a_0 &gt;0$</span></p> <ul> <li><span class="math-container">$a_n$</span> is decreasing</li> <li><span class="math-container">$a_n$</span> is bounded below</li> </ul>
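A quick numerical experiment (my addition) supports the hint: starting from any $a_0>0$, the iterates decrease monotonically toward $0$.

```python
# Numerical experiment (my addition): iterate a_n = ln(1 + a_{n-1}) from
# a_0 = 5; the sequence is strictly decreasing and tends to 0.
import math

a = 5.0
decreasing = True
for _ in range(10000):
    new = math.log1p(a)            # ln(1 + a), accurate for small a
    decreasing = decreasing and new < a
    a = new
print(a, decreasing)               # a is tiny; the sequence was monotone
```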
4,189,417
<p>The book I am currently reading, which is written in a local language, uses the symbol <span class="math-container">$$\gtreqqless$$</span></p> <p>Noted usage in the book (translated):</p> <blockquote> <p>Let <span class="math-container">$f(x) = x^3e^{-3x}$</span> for x &gt; 0. Then what is the maximum value of <span class="math-container">$f(x)$</span> ?</p> </blockquote> <blockquote> <p>Solution : <span class="math-container">$\dotsc$</span> <span class="math-container">$f'(x) = 3e^{-3x}x^2(1-x)$</span>... We have to work explicitly with <span class="math-container">$1 - x$</span> since <span class="math-container">$3e^{-3x}$</span> is positive for x &gt; 0 <span class="math-container">$\therefore$</span> We have to see when 1 - x <span class="math-container">$\geqslant$</span> 0 and when 1 - x <span class="math-container">$\leqslant$</span> 0. We can see that 1 - x <span class="math-container">$\gtreqqless$</span> 0 <span class="math-container">$\Longleftrightarrow$</span> 1 <span class="math-container">$\gtreqqless$</span> x <span class="math-container">$\dotsc$</span></p> </blockquote> <p>So how is the symbol read, or, what does it mean?</p> <p>Edit: I have searched through the book and it does not provide any page on symbols used, and no past explanation has been provided.</p>
Troposphere
907,303
<p>It's a shorthand notation for <span class="math-container">$$ 1-x &gt; 0 \iff 1 &gt; x \quad\text{and}\quad 1-x = 0 \iff 1 = x \quad\text{and}\quad 1-x &lt; 0 \iff 1 &lt; x \, . $$</span></p>
4,200,434
<p>I am having difficulty with what should be a routine question, Exercise 2.2.2 (b) of <em>Understanding Analysis</em> by Stephen Abbott (2015).</p> <blockquote> <p><strong>Exercise 2.2.2.</strong> Verify, using the definition of convergence of a sequence, that the following sequences converge to the proposed limit.</p> <p>(b) <span class="math-container">$$\lim_{n \rightarrow \infty} \frac{2n^2}{n^3 + 3} = 0.$$</span></p> </blockquote> <p><strong>My attempt.</strong></p> <p><em>Due to a transcription error in my question and some answers based on my erroneous transcription, followed by re-edits etc., this has caused some confusion. To avoid any further future confusion, I have reverted this question, together with my attempt, to the most recent state before I received correct answers, and upvoted answers accordingly. For the members of the community who took the time to assist, please accept my apologies, and also my gratitude for the time you've spent writing up your answers.</em></p> <p>I understand that verification using the formal definition (without recourse to theorems occurring later in the text) requires supplying a particular <span class="math-container">$N(\epsilon) \in \mathbb{N}$</span> in response to any <span class="math-container">$\epsilon &gt; 0$</span> such that whenever <span class="math-container">$n \geq N$</span>, then</p> <p><span class="math-container">$$\left \vert \frac{2n^2}{n^3+3} \space \right \vert &lt; \epsilon.$$</span></p> <p>Because <span class="math-container">$n &gt; 0$</span>, I can simplify the above to get</p> <p><span class="math-container">$$\left \vert \frac{2n^2}{n^3+3} \space \right \vert = \frac{2n^2}{n^3 + 3} &lt; \epsilon.$$</span></p> <p>This yields the following cubic inequality in $n$, which I am unsure how to solve analytically:</p> <p><span class="math-container">$$\epsilon n^3 - 2n^2 + 3 \epsilon &gt; 0.$$</span></p>
Community
-1
<p>Let <span class="math-container">$N\in\mathbb{N}$</span> satisfy <span class="math-container">$0&lt;\frac{1}{N}&lt;\frac{1}{2}\varepsilon$</span>, which exists by the Archimedean property (I believe this is established in Abbott's book).</p> <p>Now, observe the following for <span class="math-container">$n\geq N$</span>:</p> <p><span class="math-container">$\Big|\frac{2n^2}{n^3+3}-0\Big|=\Big| \frac{2n^2}{n^3+3} \Big|=\frac{2n^2}{n^3+3}&lt;\frac{2n^2}{n^3}=\frac{2}{n}\leq\frac{2}{N}&lt;2\cdot\frac{1}{2}\varepsilon=\varepsilon$</span></p> <p>and thus we have proven the limit to be zero.</p> <p>Edit: I edited the answer after you gave the correction.</p>
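The choice of cutoff (any integer above $2/\varepsilon$) can be checked mechanically (my addition; the helper name is mine):

```python
# Mechanical spot check (my addition): with N > 2/eps, every n >= N gives
# 2n^2/(n^3+3) < eps, matching the bound 2n^2/(n^3+3) < 2/n.
import math

def bound_holds(eps, trailing=1000):
    N = math.floor(2 / eps) + 1
    return all(2 * n * n / (n ** 3 + 3) < eps
               for n in range(N, N + trailing))

print(all(bound_holds(eps) for eps in (1.0, 0.5, 0.1, 0.01)))  # True
```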
1,063,334
<p>I'm really stumped on this problem and don't know how to go about it. It says $g(x)$ = $|f(x)|$ and to show that if $f(c) = 0$ and g is differentiable at c, then one must have $D(f)(c) = 0$. Everywhere I look on the internet it says that $|x|$ is not differentiable at 0, even though there is nothing on absolute values of functions. </p> <p>I tried doing something like $$\lim_{x\to\ c^+} \frac{g(x) - g(c)}{x-c} = \lim_{x\to\ c^-} \frac{g(x) - g(c)}{x-c}$$</p> <p>but didn't know where to go afterwards. </p> <p>Also tried doing $|x| = \sqrt{x^2}$ but that did not work either. </p> <p>I'm thinking of whether I should be looking more into maxima or something. </p> <p>What do you guys think? Thanks</p>
Community
-1
<p>Generally it is not true. If $\chi_{\mathbb{Q}} $ is the characteristic function of the rationals, then the function $g(t) =\left|\chi_{\mathbb{Q}} (t) -\frac{1}{2} \right|t^2 $ is differentiable everywhere, but the function $f(t) =\left(\chi_{\mathbb{Q}} (t) -\frac{1}{2} \right)t^2$ is equal to zero only at zero.</p>
1,762,078
<p>show that $$e^x-\ln{(x+2)}&gt;\dfrac{1}{6}\tag{(1)}$$</p> <p>I know $$e^x&gt;x+1,\ln{(x+2)}&lt;x+1$$ so I have only prove $$e^x-\ln{(x+2)}&gt;0$$ But How to prove $(1)$?</p>
Christian Blatter
1,303
<p>A look at the graphs of $y=e^x$ and $y=\log(x+2)$ shows that the line $y=1+x$ is a tangent to both of them, and that it suffices to prove the inequality in question for the interval $-1&lt;x&lt;0$.</p> <p>We start with the inequalities $$e^{-t}&gt;1-t+{t^2\over2}-{t^3\over 6},\qquad\log(1+t)&lt;t-{t^2\over2}+{t^3\over3}\qquad(0&lt;t&lt;1)$$ resulting from truncating alternating series. This implies $$e^x&gt;1+x+{x^2\over 2}+{x^3\over6}\qquad(-1&lt;x&lt;0)\tag{1}$$ and $$\log(x+2)=\log\bigl(1+(x+1)\bigr)&lt;(1+x)-{(1+x)^2\over2}+{(1+x)^3\over3}\qquad(-1&lt;x&lt;0)\ .\tag{2}$$ From $(1)$ and $(2)$ we obtain by subtraction $$\eqalign{e^x-\log(x+2)&amp;&gt;1+x+{x^2\over 2}+{x^3\over6}-\left({5\over6}+x+{x^2\over2}+{x^3\over3}\right)\cr &amp;={1\over6}(1-x^3)&gt;{1\over6}\qquad(-1&lt;x&lt;0)\ . \cr} $$</p>
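As a numerical cross-check (my addition, not part of the proof), the minimum of $e^x-\log(x+2)$ on a fine grid is about $0.199$, comfortably above $1/6\approx0.167$, and it is attained near $x\approx-0.44$, inside the interval $(-1,0)$ singled out above.

```python
# Numerical cross-check (my addition): minimise e^x - ln(x+2) on a grid;
# the minimum is about 0.199 > 1/6, attained inside (-1, 0).
import math

def g(x):
    return math.exp(x) - math.log(x + 2)

pts = [-1.9 + k / 1000 for k in range(6000)]   # x in [-1.9, 4.1)
m = min(g(x) for x in pts)
print(m, m > 1 / 6)
```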
1,394,753
<p>I came across this puzzle recently which I hope people might enjoy.</p> <p>Let $S(n)$ be the set of positive integers less than $n$ which do not have a $2$ in their decimal representation and let $\sigma(n)$ be the sum of the reciprocals of the numbers in $S(n)$, so for example $\sigma(5) = 1 + 1/3 + 1/4$ . </p> <ul> <li>Show that $S(1000)$ contains $9^3 - 1$ distinct numbers.</li> <li>Show that $\sigma(n) &lt; 80$ for all $n$.</li> </ul>
Brian Tung
224,454
<p><strong>Hint for the second part.</strong> Consider the number of elements of $S(10^n)$ larger than $10^{n-1}$, and observe that they all contribute less than $1/10^{n-1}$. Obtain the appropriate geometric series with ratio $9/10$.</p>
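Both claims are easy to spot-check by brute force (my addition; the partial sum is of course only evidence, not a proof of the bound):

```python
# Brute force (my addition): |S(1000)| = 9^3 - 1 = 728, and partial sums
# of the reciprocals of the 2-free integers stay far below 80.
s1000 = [n for n in range(1, 1000) if '2' not in str(n)]
partial = sum(1.0 / n for n in range(1, 10 ** 6) if '2' not in str(n))
print(len(s1000), partial)
```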
1,632,990
<p>I'm having a bit of confusion here.</p> <p>What are the solutions of</p> <p>$\begin{pmatrix} 0&amp;1&amp;0 \\ 0&amp;0&amp;1 \\ 0&amp;0&amp;3\\ \end{pmatrix}x=0$</p> <p>Clearly,</p> <p>$x_2=0$<br> $x_3=0$<br> $3x_3=0$</p> <p>$x_1=s\in\mathbb{R}$, because there are no constraints for $x_1$.<br></p> <p>then e.g.</p> <p>$x=(1,0,0)$ is a solution.</p> <p>But since this is from a matrix having double eigenvalue on this matrix, then shouldn't there be two solutions.</p> <p>How do I formulate another solution?</p>
Arnaud D.
245,577
<blockquote> <p>But since this is from a matrix having double eigenvalue on this matrix, then there should be two solutions.</p> </blockquote> <p>If you mean that the characteristic polynomial has a double root, then this does not mean that you can find two linearly independant eigenvectors.</p> <p>For a matrix $A\in \mathbb{R}^{n\times n}$, if $\lambda \in \mathbb{R}$ is an eigenvalue of $A$ then its <em>algebraic multiplicity</em> $\mu_A(\lambda)$ is defined as its multiplicity as a root of the characteristic polynomial, and its <em>geometric multiplicity</em> $\gamma_A(\lambda)$ is defined as the dimension of the corresponding eigenspace. It can be proven that $\gamma_A(\lambda)\leq \mu_A(\lambda)$, but the inequality can be strict.</p> <p>In your case there are no solutions other than $(1,0,0)$ (up to a constant), since the two last coordinates need to be $0$, as you noticed. So here $1=\gamma_A(\lambda)\neq \mu_A(\lambda)=2$. </p>
1,632,990
<p>I'm having a bit of confusion here.</p> <p>What are the solutions of</p> <p>$\begin{pmatrix} 0&amp;1&amp;0 \\ 0&amp;0&amp;1 \\ 0&amp;0&amp;3\\ \end{pmatrix}x=0$</p> <p>Clearly,</p> <p>$x_2=0$<br> $x_3=0$<br> $3x_3=0$</p> <p>$x_1=s\in\mathbb{R}$, because there are no constraints for $x_1$.<br></p> <p>then e.g.</p> <p>$x=(1,0,0)$ is a solution.</p> <p>But since this is from a matrix having double eigenvalue on this matrix, then shouldn't there be two solutions.</p> <p>How do I formulate another solution?</p>
Bernard
202,857
<p>There is no other (independent) solution. $0$ is an eigenvalue of the matrix with <em>algebraic</em> multiplicity $2$, but the <em>geometric</em> multiplicity of $0$ (the dimension of the kernel) is $1$.</p> <p>Generally speaking, if $\lambda$ is an eigenvalue of a matrix $A$, we must distinguish between its algebraic multiplicity $m_\lambda$, i.e. its multiplicity as a root of the characteristic polynomial, and its geometric multiplicity, i.e. the dimension of $\ker(A-\lambda I)$. There is a relation between these numbers: $$1\le \dim\ker(A-\lambda I)\le m_\lambda.$$ The matrix is diagonalisable if and only if the algebraic multiplicity and the geometric multiplicity of each eigenvalue are equal. In particular, if all eigenvalues are simple, the matrix is diagonalisable.</p> <p>This means here that your matrix will only have a Jordan normal form: $$\begin{bmatrix} 0&amp;1&amp;0\\0&amp;0&amp;0\\0&amp;0&amp;3 \end{bmatrix}.$$ One can calculate that this happens in the basis $$\begin{bmatrix} 1\\0\\0 \end{bmatrix},\enspace\begin{bmatrix} 0\\1\\0 \end{bmatrix},\enspace\begin{bmatrix} 1\\3\\9 \end{bmatrix}.$$</p>
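One can verify the stated Jordan basis directly (my addition; `matvec` is a throwaway helper):

```python
# Quick check (my addition) of the Jordan basis given above.
A = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 3]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

print(matvec(A, [1, 0, 0]))  # [0, 0, 0]   -> eigenvector for 0
print(matvec(A, [0, 1, 0]))  # [1, 0, 0]   -> generalized eigenvector
print(matvec(A, [1, 3, 9]))  # [3, 9, 27]  -> eigenvector for 3
```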
104,186
<p>I'm having difficulty understanding how to express text in <code>Epilog</code> that is dynamically updated using <code>Log[b, x]</code>. <em>Mathematica</em> changes this to base $e$, but I would like it to be <code>Log[b, x]</code> in traditional format with base $b$, and I can't seem to make it work. I'm guessing I need to break the $\log$ apart using boxes or something, but don't know how to make a subscript that is a dynamically updated <code>b</code> value. Any ideas?</p> <pre><code>Manipulate[ Plot[{b^x, x, Log[b, x]}, {x, -10, 10}, PlotRange -&gt; {{-5, 5}, {-10, 10}}, PerformanceGoal -&gt; "Quality", ImageSize -&gt; All, Epilog -&gt; {Text[b^x, {-3, 4}], Text[Log[b, x], {3, -5}]}, GridLines -&gt; {Range[-10, 10, 1], Range[-10, 10, 1]}, GridLinesStyle -&gt; Opacity[.04]], {{b, 2, "Choose a base"}, 0.01, 4}] </code></pre>
J. M.'s persistent exhaustion
50
<p>The code in the OP will work right off the bat if the following (sadly undocumented) setting is first executed:</p> <pre><code>SetSystemOptions["SimplificationOptions" -&gt; {"AutosimplifyTwoArgumentLog" -&gt; False}] </code></pre> <p>After running that line, the <code>Manipulate[]</code> works as expected:</p> <p><img src="https://i.stack.imgur.com/hMJr0.png" alt="now it works!"></p>
349,147
<p>What is known regarding which hyperbolic groups are cubulated?</p> <p>I take it the usual definition of cubulated is acting properly and cocompactly on a CAT(0)-cube complex.</p> <p>My impression is that not all of them are, but I didn't manage to find references with a counterexample.</p> <p>Are there known ways to create non-cubulated hyperbolic groups? Are there famous examples of non-cubulated groups?</p>
HJRW
1,463
<p>As @AGenevois says in his answer, the standard examples of non-cubulated hyperbolic groups are those with Kazhdan's property (T), such as quaternionic hyperbolic lattices.</p> <p>Complex hyperbolic lattices (in dimension >2) provide a more delicate class of examples. On the one hand, they are not cubulable, by a theorem of <a href="https://arxiv.org/abs/1609.08474" rel="noreferrer">Delzant--Py</a>. On the other hand, they are known to have the <a href="https://en.wikipedia.org/wiki/Haagerup_property" rel="noreferrer">Haagerup property</a> (I think this is proved in the book by Bekka--de la Harpe--Valette), so they also don't have property (T).</p> <p>As far as I know, they are the only class of examples of hyperbolic groups known to be Haagerup but non-cubulable. I'd be interested to hear of others.</p>
1,818,260
<p>I'm studying mathematical physics and working on numerical solutions of partial differential equations. I am having trouble understanding the way we solve partial differential equations, e.g.,</p> <p>$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}+\frac{2}{x}\cdot\frac{\partial u}{\partial x}$</p> <p>$\frac{\partial u}{\partial t}=\frac{\partial^2 u}{\partial x^2} \dots$</p> <p>I mean, for first and second order partial derivatives we use backward, forward or central difference formulas. In some solutions a partial derivative like $\frac{\partial u}{\partial x}$ is written using a forward difference, and sometimes using the central difference formula. How do we decide which formula we should use? Do we decide by looking at the boundary conditions, e.g., Dirichlet, Neumann, $\dots$? Finally, I try to solve problems with zero intuition; can you enlighten me about what we do and how, please?</p>
user1820964
315,800
<p>The advection term, $\frac{\partial u}{\partial x}$, is always a tricky part of solving a PDE numerically. It has a certain direction, as you said. </p> <p>There are many techniques to treat this term. Instead of explaining everything, I give you a link that has good notation and explanations. </p> <p><a href="http://disciplinas.stoa.usp.br/pluginfile.php/41896/mod_resource/content/1/LeVeque%20Finite%20Diff.pdf" rel="nofollow">Randall's lecture notes at UW</a></p> <p>Look at Chapter 13: Advection equation!</p>
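To make the directionality concrete, here is a toy experiment (my own sketch, not from the linked notes): for the advection equation $u_t + a u_x = 0$ with $a>0$, the one-sided difference taken against the flow (upwind) stays bounded, while the forward-time central-space variant is unstable and eventually blows up.

```python
# Toy experiment (my addition): advect a Gaussian with u_t + a u_x = 0,
# a > 0, periodic domain, CFL = 0.4.  Upwind stays bounded in [0, 1];
# forward-time central-space (FTCS) is unstable and blows up.
import math

N, steps = 200, 2000
dx = 1.0 / N
nu = 0.4                          # CFL number a*dt/dx
u_up = [math.exp(-100.0 * (i * dx - 0.5) ** 2) for i in range(N)]
u_ce = u_up[:]
for _ in range(steps):
    u_up = [u_up[i] - nu * (u_up[i] - u_up[i - 1]) for i in range(N)]
    u_ce = [u_ce[i] - 0.5 * nu * (u_ce[(i + 1) % N] - u_ce[i - 1])
            for i in range(N)]
print(max(u_up), max(abs(v) for v in u_ce))   # bounded vs. huge
```

The upwind update is a convex combination of neighbouring values (for CFL $\le 1$), which is why it cannot overshoot; the central scheme has amplification factor $|G|^2 = 1+\nu^2\sin^2\theta > 1$.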
3,259,073
<p>Let <span class="math-container">$A$</span> be a commutative ring with <span class="math-container">$1$</span>, and let <span class="math-container">$a, b, c \in A$</span>. Suppose there exist <span class="math-container">$x, y, z ∈ A$</span> such that <span class="math-container">$ax+by +cz = 1$</span>. Then there exist <span class="math-container">$x_j, y_j, z_j ∈ A$</span> such that <span class="math-container">$a^{50}x_j +b^{20}y_j +c^{15}z_j = 1$</span>.</p> <p>I need some help with this question. Don't know where to start.</p>
Gabriel Soranzo
51,400
<p>I think I have it (for the first)!</p> <p>For the first: <span class="math-container">$\text{Tor}_i(M,N)$</span> is annihilated by <span class="math-container">$\text{Ann}(M)$</span> and <span class="math-container">$\text{Ann}(N)$</span>, because elements of <span class="math-container">$\text{Tor}$</span> are nothing but classes of elements of <span class="math-container">$P_i\otimes N$</span> or <span class="math-container">$P_i\otimes M$</span>. Here <span class="math-container">$\text{Tor}_1(R/I,R/J)$</span> is annihilated by <span class="math-container">$I$</span> and <span class="math-container">$J$</span>, so if <span class="math-container">$R=I+J$</span> it is annihilated by <span class="math-container">$1$</span> and the conclusion follows.</p>
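For the question as actually posed, the classical approach (my suggestion, not taken from this answer) is to expand $1 = (ax+by+cz)^{83}$ with $83 = 49+19+14+1$: every monomial $a^ib^jc^k$ with $i+j+k=83$ must have $i\ge 50$, $j\ge 20$ or $k\ge 15$, since otherwise $i+j+k\le 49+19+14=82$. Grouping terms then yields the required $x_j, y_j, z_j$. The exponent bookkeeping is easy to check by machine:

```python
# Exponent bookkeeping for the classical trick (my addition): in the
# expansion of (ax+by+cz)^83, every exponent triple (i, j, k) with
# i + j + k = 83 satisfies i >= 50, j >= 20 or k >= 15.
N = 49 + 19 + 14 + 1   # = 83
ok = all(i >= 50 or j >= 20 or k >= 15
         for i in range(N + 1)
         for j in range(N + 1 - i)
         for k in (N - i - j,))
print(ok)  # True
```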
4,132,124
<p>I am not able to find this answer anywhere; it's about discrete mathematics. Please help. </p>
CoveredInChocolate
901,866
<p>Did you try using truth tables? Do you agree with the tables I wrote down?</p> <p><span class="math-container">$$ \begin{array}{cc|c} p &amp; q &amp; p\wedge q \\ \hline T &amp; T &amp; T \\ T &amp; F &amp; F \\ F &amp; T &amp; F \\ F &amp; F &amp; F \end{array} \quad\quad\quad \begin{array}{cc|cc} p &amp; q &amp; p\vee q &amp; \neg(p\vee q) \\ \hline T &amp; T &amp; T &amp; F \\ T &amp; F &amp; T &amp; F \\ F &amp; T &amp; T &amp; F \\ F &amp; F &amp; F &amp; T \end{array} $$</span></p>
4,132,124
<p>I am not able to find this answer anywhere; it's about discrete mathematics. Please help. </p>
Floridus Floridi
841,808
<p>A fallacy has to be a <em><strong>reasoning</strong></em>, that is, a sequence of sentences with an inference word such as &quot;therefore&quot; before the last sentence.</p> <p>What you wrote is a <em><strong>sentence</strong></em>, an assertion, not a reasoning.</p> <p>What goes wrong in this sentence is that it is an <em><strong>antilogy</strong></em> (or a contradiction, or a logical falsehood), meaning a <em><strong>formula that is false in all possible cases</strong></em>. An antilogy is the contrary of a tautology (a sentence that is true in all possible cases). In the middle lie &quot;contingent sentences&quot; (sometimes true, sometimes false).</p> <p>That it is an antilogy can be proved using a truth table (there are truth-table generators on the web).</p> <p>Also, one can produce a reasoning to prove that the sentence is contradictory:</p> <p>(1) Suppose the whole sentence is true.</p> <p>(2) In that case (P&amp;Q) is true, and therefore P is true.</p> <p>(3) Also, ~(P OR Q) is true. But that means (by the definition of the OR-operator) that neither P nor Q is true (see: De Morgan's laws).</p> <p>(4) So P is false.</p> <p>(5) So P is true (by (2)) and P is false (by (4)). Hence a contradiction.</p> <p>Note: there is also a contradiction as to the truth value of Q; but a single contradiction is enough for the global sentence to be an antilogy.</p> <p>A resource: <a href="https://courses.umass.edu/phil110-gmh/MAIN/IHome-5.htm" rel="nofollow noreferrer">https://courses.umass.edu/phil110-gmh/MAIN/IHome-5.htm</a></p>
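Assuming the sentence in question is $(P\wedge Q)\wedge\neg(P\vee Q)$ (my reading of this thread), the truth-table check can be mechanised (my addition):

```python
# Mechanical truth-table check (my addition), assuming the sentence is
# (P AND Q) AND NOT (P OR Q): it is false in all four rows, an antilogy.
from itertools import product

rows = [(p, q, (p and q) and not (p or q))
        for p, q in product((True, False), repeat=2)]
for p, q, value in rows:
    print(p, q, value)
print(any(value for _, _, value in rows))  # False: no satisfying row
```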
2,290,295
<p>I do not know if the following is true: </p> <p>Question: If $\Omega\subseteq\mathbb{R}^n$ is open and connected, then there exists a covering by balls $\{B_j\}_{j=1}^{\infty}$ with the property $B_j\subseteq\Omega$ and $B_j\cap B_{j+1}\neq\emptyset$, for all $j\geq1$.</p> <p>Motivation: I have a function $u$ that is a.e. constant on any ball contained in $\Omega$, and I want to prove that $u$ is a.e. constant on $\Omega$. If my question has a positive answer, then I am done. </p> <p>Edit: I want an answer to "Question". Although the "Motivation" is important for me and can be proved not using "Question", I am interested on "Question".</p>
Henno Brandsma
4,280
<p>There is in fact a useful characterisation of connectedness which is what you want:</p> <p>A space $X$ is connected iff for every open cover $\mathcal{O}$ of $X$, and for all $x,y\in X$, we have a chain in $\mathcal{O}$ from $x$ to $y$.</p> <p>I wrote a proof <a href="https://math.stackexchange.com/a/44938">in this answer</a>, for a similar question. I learnt it in my first topology course. </p> <p>The last thing means there are finitely many $O_1, O_2,\ldots O_n, (n \ge 1)$ from $\mathcal{O}$ such that $x \in O_1, y \in O_n$ and for all $i = 1,\ldots n-1$: $O_i \cap O_{i+1} \neq \emptyset$.</p> <p>If you know this then <strong>any</strong> open cover by open balls of your $\Omega$ will do (pick for all $x \in \Omega$ some ball $B(x,r_x) \subset \Omega$, by openness, and use $\mathcal{O} = \{B(x,r_x): x \in \Omega\}$). If then your $f$ is constant on these balls, we can for any $x,y$ find a chain, and from the overlapping property we get that the value at $x$ gets "transported" to $y$, and $f(x) = f(y)$. (I.e., if $a_i \in O_i \cap O_{i+1}$ then $f(x)= f(a_1)$ as both are in $O_1$, $f(a_1) = f(a_2)$ from both being in $O_2$, up to $f(a_{n-1}) = f(y)$ from both being in $O_n$.)</p>
554,869
<p>I want to show that the following conditions are equivalent for a nonzero element $a$ in a Boolean algebra $\mathcal{B}$:</p> <p>1) for all $x\in\mathcal{B},a\leq x$ or $a\leq x'$</p> <p>2) for all $x,y\in\mathcal{B},a\leq x\sqcup y\Rightarrow a\leq x$ or $a\leq y$</p> <p>3) $a$ is minimal among nonzero elements of $\mathcal{B}$</p> <p>I can't show any of the implications. Could you give me some hint?</p>
Cameron Buie
28,900
<p>For (1)$\implies$(2), suppose that $a\not\le x$ and $a\not\le y,$ and use (1) to show that we must have $a\not\le x\sqcup y,$ thus proving the contrapositive of (2), so proving (2).</p> <p>For (2)$\implies$(1), take any $x\in\mathcal B$ and observe that $x\sqcup x'=1,$ so....</p> <p>For (1)$\implies$(3), suppose $x\in\mathcal B$ such that $x&lt;a.$ It follows that $a\le x'$ by hypothesis, so since $a'&lt;x'$ (why?), then $1=a\sqcup a'\le x'$ (why?), and so....</p> <p>For (3)$\implies$(1), take any $x\in\mathcal B$ and note that $a\sqcap x\le a$ and $a\sqcap x'\le a.$ Note further that $a\sqcap x$ and $a\sqcap x'$ cannot both be $0$ (why?), so without loss of generality, $0&lt;a\sqcap x\le a,$ so by (3) we have $a\sqcap x=a,$ and so....</p>
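<p>For intuition (this is not a proof), one can verify by brute force that the three conditions pick out exactly the same elements (the atoms) in a small concrete Boolean algebra, say the power set of $\{0,1,2\}$. A Python sketch, with names of my own choosing:</p>

```python
from itertools import chain, combinations

# Boolean algebra: all subsets of {0, 1, 2}, ordered by inclusion,
# with join = union and complement = set difference from the top element.
U = frozenset({0, 1, 2})
elements = [frozenset(s) for s in
            chain.from_iterable(combinations(U, r) for r in range(4))]

def cond1(a):  # for all x: a <= x or a <= x'
    return all(a <= x or a <= U - x for x in elements)

def cond2(a):  # a <= x join y implies a <= x or a <= y
    return all(not a <= (x | y) or a <= x or a <= y
               for x in elements for y in elements)

def cond3(a):  # a is minimal among nonzero elements
    return all(not (x < a) for x in elements if x)

nonzero = [a for a in elements if a]
agree = all(cond1(a) == cond2(a) == cond3(a) for a in nonzero)
atoms = sorted(tuple(a) for a in nonzero if cond1(a))
```

In this algebra the three conditions agree and single out the three singletons, as expected.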
1,990,033
<p>Suppose I have a point $P(x_1, y_1$) and a line $ax + by + c = 0$. I draw a perpendicular from the point $P$ to the line. The perpendicular meets the line at point $Q(x_2, y_2)$. I want to find the coordinates of the point $Q$, i.e., $x_2$ and $y_2$.</p> <p>I searched up for similar questions where the coordinates of end points of the line segment are given. But here, I've got an equation for the line. So, I am pretty clueless how to solve this.</p> <p>Please give me a formula to arrive at my answer (if any) and show me its derivation too. I am a high school student with a basic knowledge of trigonometry. I have no idea of calculus, so please give me a simplified answer.</p> <p>Any help is highly appreciated. Thanks a lot in advance...</p>
Steven Alexis Gregory
75,410
<p>\begin{align} \sin 3x &amp;= \sin(2x + x) \\ &amp;= \sin 2x \; \cos x + \cos 2x \; \sin x \\ &amp;= 2 \sin x \; \cos^2 x + \cos 2x \; \sin x \\ &amp;= \sin x \; (2 \cos^2 x + \cos 2x) \\ &amp;= \sin x \; (2 \cos 2x + 1) \end{align}</p> <p>\begin{align} 4 \sin 2x \; \sin 4x &amp;= 2(\cos 4x \; \cos 2x + \sin 4x \; \sin 2x) -2(\cos 4x \; \cos 2x - \sin 4x \; \sin 2x) \\ &amp;= 2\cos(4x - 2x) - 2\cos(4x + 2x) \\ &amp;= 2(\cos 2x - \cos 6x) \end{align}</p> <p>\begin{align} 4\sin x \; \sin 2x \; \sin 4x &amp;= \sin 3x \\ 2\sin x \; (\cos 2x - \cos 6x) &amp;= \sin x \; (2 \cos 2x + 1) \\ \hline \sin x &amp;= 0\\ x &amp;\in \{180^\circ n : n \in \mathbb Z\} \\ \hline 2 \cos 2x - 2 \cos 6x &amp;= 2 \cos 2x + 1 \\ \cos 6x &amp;= -\dfrac 12\\ 6x &amp;\in \{360^\circ n \pm 240^\circ : n \in \mathbb Z \} \\ x &amp;\in \{60^\circ n \pm 40^\circ : n \in \mathbb Z \} \\ \end{align}</p> <p>Solution set:</p> <p>$$x \in (\{60^\circ n \pm 40^\circ : n \in \mathbb Z \} \cup \{180^\circ n : n \in \mathbb Z\}) \cap [0^\circ, 360^\circ]$$</p> <p>$$x \in \left\{ \begin{array}{rrrrr} 0^\circ, &amp; 20^\circ, &amp; 40^\circ, &amp; 80^\circ, &amp; 100^\circ, \\ 140^\circ, &amp; 160^\circ, &amp; 180^\circ, &amp; 200^\circ, &amp; 220^\circ, \\ 260^\circ, &amp; 280^\circ, &amp; 320^\circ, &amp; 340^\circ, &amp; 360^\circ \\ \end{array} \right\} $$</p>
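<p>The solution set can be confirmed numerically. Assuming the equation being solved is $4\sin x\,\sin 2x\,\sin 4x = \sin 3x$ on $[0^\circ, 360^\circ]$ (as in the derivation above), a Python sketch:</p>

```python
import math

def residual(deg):
    # left side minus right side of 4 sin(x) sin(2x) sin(4x) = sin(3x)
    x = math.radians(deg)
    return 4 * math.sin(x) * math.sin(2 * x) * math.sin(4 * x) - math.sin(3 * x)

solutions = [0, 20, 40, 80, 100, 140, 160, 180, 200, 220,
             260, 280, 320, 340, 360]
worst = max(abs(residual(d)) for d in solutions)
```

Each listed angle makes the residual vanish to machine precision, while a non-solution such as $10^\circ$ does not.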
314,246
<p>In the case that $L:B_1 \rightarrow B_2$ is a linear mapping of Banach spaces and $L$ is an isometric isomorphism (a bijection with $\|Lx\|_{B_2} = \|x\|_{B_1}$), can I say that $L\overline{L} = 1$ is trivial? (The bar denotes the complex conjugate.)</p> <p>TIA</p>
Lance
405,830
<p>I do not think this is true.</p> <p>As an example take $B_1 = B_2 = \mathbb{C}^3$ equipped with euclidean norm. Choose a basis $\{e_1, e_2, e_3 \}$ of $\mathbb{C}^3$ and define $L = \begin{bmatrix} 0 &amp; 0 &amp; 1 \\ 1 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; 0 \\ \end{bmatrix}$.</p> <p>Note that $L$ permutes the coordinates of $\mathbb{C}^3$ and $L^3 = 1$. Hence $L$ is an isometric isomorphism.</p> <p>Assuming $\overline{L}(\alpha x + \beta y) := \overline{\alpha}L(x)+ \overline{\beta}L(y)$ we have $\forall \ \lambda_1,\lambda_2,\lambda_3 \in \mathbb{C}$ :</p> <p>$L \circ \overline{L} (\lambda_1 e_1 + \lambda_2 e_2 + \lambda_3 e_3) = \overline{\lambda_1} e_3 + \overline{\lambda_2} e_1 + \overline{\lambda_3} e_2 $ </p> <p>which in general is $\neq \lambda_1 e_1 + \lambda_2 e_2 + \lambda_3 e_3$.</p>
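<p>The counterexample is easy to check numerically. Below is a small Python sketch (names are mine) of the coordinate permutation $L$ and the conjugate map $\overline{L}$ defined above:</p>

```python
# L is the permutation matrix [[0,0,1],[1,0,0],[0,1,0]] acting on (x, y, z):
# it sends e1 -> e2, e2 -> e3, e3 -> e1, so L(x, y, z) = (z, x, y).
def L(v):
    x, y, z = v
    return (z, x, y)

def Lbar(v):
    # the conjugate map: conjugate the coefficients, then apply L
    return L(tuple(c.conjugate() for c in v))

def norm(v):
    return sum(abs(c) ** 2 for c in v) ** 0.5

v = (1 + 2j, 3 - 1j, -2 + 0.5j)
isometric = abs(norm(L(v)) - norm(v)) < 1e-12   # L preserves the euclidean norm
identity_cubed = L(L(L(v))) == v                # L^3 = 1
composed = L(Lbar(v))                           # L(Lbar(v)) != v in general
```

So $L$ is an isometric isomorphism with $L^3 = 1$, yet $L\overline{L}$ is not the identity.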
84,735
<p>Unlike using</p> <pre><code>Unprotect[In,Out]; Clear[In,Out]; Protect[In,Out]; </code></pre> <p>to clean the occupied memory of the whole notebook, I want to clean the memory dynamically and accurately. That is, I want to clean out the specific memory I assign whenever I need.</p> <p>For example, I wrote a loop and at each loop, it generates some intermediate results that need not to be stored and I just want to clean its corresponding memory at the end of each loop. How can I achieve it ?</p> <p>For example,</p> <pre><code>For[i=0,i&lt;=100,i++, x=Array[#&amp;,{1000,1000}];(*occupy many memory*) Total[x]; (*after the Total operation, the x is worthless now so I want to clean out its memory How should I do*) ] </code></pre>
Bichoy
6,770
<p>You can use <code>Block</code> for this purpose. From Mathematica <a href="https://reference.wolfram.com/language/ref/Block.html" rel="noreferrer">documentation</a>:</p> <blockquote> <p>When you execute a block, values assigned to x, y, ... are cleared. When the execution of the block is finished, the original values of these symbols are restored.</p> </blockquote> <p>For the example you gave, you should use something like:</p> <pre><code>Clear[x] For[i=0,i&lt;=100,i++, totalx = Block[{x}, x=Array[#&amp;,{1000,1000}]; Total[x] ] (*do something with totalX*) ] </code></pre> <p><strong>Benchmarking</strong></p> <p>Using <code>MemoryInUse[]</code>, before and after executing the code, testing with an <code>Array</code> of <code>10,000 x 1,000</code>, I ran the following code:</p> <pre><code>a = 1; (* to make sure the Kernel is up and running *) memBefore = MemoryInUse[] (* ... the For loop ... with Array[#&amp;,{10000,1000}] *) memAfter = MemoryInUse[] extraMemUsed = memAfter - memBefore; N[extraMemUsed/1024/1024] (* in MB *) </code></pre> <p>Starting with a fresh kernel each time, the original code leaves the kernel with extra <code>78 MB</code> of memory usage, whereas the code using <code>Block</code> leaves the kernel with only <code>0.2 MB</code> of extra memory usage.</p>
3,342,761
<p>I'm trying to get an equation for a solid angle of a segment of octahedron in the same vein as described in this article <a href="http://www.rorydriscoll.com/2012/01/15/cubemap-texel-solid-angle/" rel="nofollow noreferrer">cubemap-texel-solid-angle</a>. I ended up having to integrate <span class="math-container">$$\int \int \frac{1}{(x^2+y^2+(1-x-y)^2)^\frac{3}{2}} \,\mathrm{d}x \,\mathrm{d}y$$</span> where <span class="math-container">$0 \leq x \leq 1$</span> and <span class="math-container">$0 \leq y \leq 1-x$</span>. That is, integral over a segment of a triangle mapped onto the sphere(one octant). Does anyone know how to integrate that? <a href="https://i.stack.imgur.com/UQR7z.png" rel="nofollow noreferrer">Subdivided polyhedrons by courtesy of Gavin Kistner</a></p> <p><strong>Update</strong> Thanks to the general formula from <a href="https://math.stackexchange.com/questions/1211287/how-to-find-out-the-solid-angle-subtended-by-a-tetrahedron-at-its-vertex/3305625#3305625">this answer</a>: <span class="math-container">$$\omega=\cos^{-1}\left(\frac{\cos\alpha-\cos\beta\cos\gamma}{\sin\beta\sin\gamma}\right)-\sin^{-1}\left(\frac{\cos\beta-\cos\alpha\cos\gamma} {\sin\alpha\sin\gamma}\right)-\sin^{-1}\left(\frac{\cos\gamma-\cos\alpha\cos\beta}{\sin\alpha\sin\beta}\right)$$</span></p> <p>We can calculate a solid angle for all the triangles. Here is the <a href="https://gist.github.com/aschrein/ad2554b9c54a207f6dcfbe431b130d04" rel="nofollow noreferrer">gist</a> and the <a href="https://www.shadertoy.com/view/tlBXDd" rel="nofollow noreferrer">shadertoy</a>. The naive implementation is not numerically stable at small angles.</p> <p><strong>Update</strong> See <a href="https://math.stackexchange.com/q/3343709">this answer</a></p>
David K
139,123
<p>The general problem I think you have described is to find the solid angle subtended at one vertex of a tetrahedron.</p> <p>If we label the vertex in question <span class="math-container">$O$</span> and put it at the center of a unit sphere, then project the opposite face onto the sphere, we get a spherical triangle. The lengths of the "sides" of that triangle are the angles <span class="math-container">$\alpha,$</span> <span class="math-container">$\beta,$</span> and <span class="math-container">$\gamma$</span> between the edges of the tetrahedron that meet at <span class="math-container">$O.$</span> The angles at the vertices of the spherical triangle are the dihedral angles <span class="math-container">$A,$</span> <span class="math-container">$B,$</span> and <span class="math-container">$C$</span> between the faces of the tetrahedron that meet at <span class="math-container">$O.$</span> The usual convention is we use the name <span class="math-container">$A$</span> for the angle between the sides of length <span class="math-container">$\beta$</span> and <span class="math-container">$\gamma,$</span> the name <span class="math-container">$B$</span> for the angle between the sides of length <span class="math-container">$\alpha$</span> and <span class="math-container">$\gamma,$</span> and the name <span class="math-container">$C$</span> for the angle between the sides of length <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta.$</span></p> <p>The solid angle at <span class="math-container">$O$</span> is then the area of the spherical triangle, which in turn is equal to the <em>spherical excess</em> of that triangle, defined as <span class="math-container">$$ E = A + B + C - \pi. 
$$</span></p> <p>But the information that you seem to be assuming is that you know the three angles <span class="math-container">$\alpha,$</span> <span class="math-container">$\beta,$</span> and <span class="math-container">$\gamma.$</span> So the question becomes how to find <span class="math-container">$E$</span> in terms of those angles.</p> <p>The spherical law of cosines says that <span class="math-container">$$ \cos\alpha = \cos\beta \cos\gamma + \sin\beta \sin\gamma \cos A. $$</span> Solving for <span class="math-container">$A$</span> we get <span class="math-container">$$ A = \arccos \left(\frac{\cos\alpha - \cos\beta \cos\gamma} {\sin\beta \sin\gamma}\right) .$$</span></p> <p>There are similar formulas involving the angles <span class="math-container">$B$</span> and <span class="math-container">$C,$</span> with the results <span class="math-container">$$ B = \arccos \left(\frac{\cos\beta - \cos\alpha \cos\gamma} {\sin\alpha \sin\gamma}\right) $$</span> and <span class="math-container">$$ C = \arccos \left(\frac{\cos\gamma - \cos\alpha \cos\beta} {\sin\alpha \sin\beta}\right) .$$</span></p> <p>As a result, one formula for the spherical excess is <span class="math-container">\begin{align} E &amp;= \arccos \left(\frac{\cos\alpha - \cos\beta \cos\gamma} {\sin\beta \sin\gamma}\right) \\ &amp;\qquad + \arccos \left(\frac{\cos\beta - \cos\alpha \cos\gamma} {\sin\alpha \sin\gamma}\right) \\ &amp;\qquad + \arccos \left(\frac{\cos\gamma - \cos\alpha \cos\beta} {\sin\alpha \sin\beta}\right) - \pi. 
\end{align}</span></p> <p>The formula shown in the question is a variation of this formula that can be obtained using the identity <span class="math-container">$\arccos(x) = \frac\pi2 - \arcsin(x).$</span></p> <p>I would be suspicious of the numerical stability of this formula for very small spherical angles (that is, when you have divided your sphere into a very large number of triangular facets), because neither <span class="math-container">$\arccos(x)$</span> nor <span class="math-container">$\arcsin(x)$</span> is very accurate when <span class="math-container">$x$</span> is close to <span class="math-container">$1.$</span> You might be better off with another formula such as <span class="math-container">$$ E = 2 \arctan\left(\frac{\tan\frac\alpha 2 \tan\frac\beta 2 \sin C} {1 + \tan\frac\alpha 2 \tan\frac\beta 2 \cos C}\right), $$</span> (from <a href="https://en.wikipedia.org/wiki/Spherical_trigonometry#Area_and_spherical_excess" rel="nofollow noreferrer">here</a>) using formulas such as <span class="math-container">$$ \cos C = \frac{\cos\gamma - \cos\alpha \cos\beta}{\sin\alpha \sin\beta} $$</span> and <span class="math-container">$$ \sin C = \sqrt{1 - \cos^2 C}. $$</span></p> <p>This should be fine if the three angles of the triangle are approximately equal (as seems to be the case in your "octahedron"-based construction). If one of the angles is almost <span class="math-container">$180$</span> degrees and the other two are almost zero you might want to compute <span class="math-container">$\sin C$</span> differently.</p>
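<p>Both formulas are easy to compare numerically. The Python sketch below (function names are mine) implements the arccos form and the arctan form and checks them on the octant triangle $\alpha=\beta=\gamma=\pi/2$, whose solid angle must be $4\pi/8 = \pi/2$:</p>

```python
import math

def excess_arccos(a, b, c):
    # E = A + B + C - pi, each vertex angle from the spherical law of cosines
    A = math.acos((math.cos(a) - math.cos(b) * math.cos(c)) / (math.sin(b) * math.sin(c)))
    B = math.acos((math.cos(b) - math.cos(a) * math.cos(c)) / (math.sin(a) * math.sin(c)))
    C = math.acos((math.cos(c) - math.cos(a) * math.cos(b)) / (math.sin(a) * math.sin(b)))
    return A + B + C - math.pi

def excess_arctan(a, b, c):
    # E = 2 arctan( tan(a/2) tan(b/2) sin C / (1 + tan(a/2) tan(b/2) cos C) )
    cosC = (math.cos(c) - math.cos(a) * math.cos(b)) / (math.sin(a) * math.sin(b))
    sinC = math.sqrt(1.0 - cosC * cosC)
    t = math.tan(a / 2) * math.tan(b / 2)
    return 2.0 * math.atan2(t * sinC, 1.0 + t * cosC)

octant = excess_arccos(math.pi / 2, math.pi / 2, math.pi / 2)
```

Both forms return $\pi/2$ on the octant and agree on generic triangles; the arctan form is the better-conditioned one for very small sides.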
1,349,002
<blockquote> <p>Let <span class="math-container">$g(z)= z^4+iz^3 +1$</span>. How many zeros does <span class="math-container">$g$</span> have in <span class="math-container">$\{z\in \Bbb{C}: \text{Re }(z), \text{Im }(z)&gt;0\}$</span>?</p> </blockquote> <p>I tried comparing the number of zeros of <span class="math-container">$g$</span> to that of <span class="math-container">$z\mapsto z^4$</span> and <span class="math-container">$z\mapsto iz^3$</span> using Rouché's theorem applied to the path that first walks the real axis from <span class="math-container">$0$</span> to some <span class="math-container">$R\in\Bbb{R}$</span>, then a quarter circle to <span class="math-container">$iR$</span> and then down the imaginary axis. However, Rouché's theorem didn't apply and I don't know what else to try.</p> <p>Thanks for any help.</p>
J.E.M.S
73,275
<p><strong>Hint:</strong> surely the zeroes lie inside some big quarter-circle with radius $R$; then use the argument principle.</p>
1,010,820
<p>I tried the following :</p> <p>\begin{align}\sec\theta + \tan\theta&amp;=4\\ \frac1{\cos\theta} + \frac{\sin\theta}{\cos\theta}&amp;=4\\ \frac{1+\sin\theta}{\cos\theta}&amp;=4\\ \frac{1+\sin\theta}4&amp;=\cos\theta\end{align}</p> <p>now don't know how to evaluate further ?</p>
Felix Marin
85,343
<p>$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack} \newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,} \newcommand{\dd}{{\rm d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,{\rm e}^{#1}\,} \newcommand{\fermi}{\,{\rm f}} \newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{{\rm i}} \newcommand{\iff}{\Longleftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\pars}[1]{\left(\, #1 \,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\pp}{{\cal P}} \newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,} \newcommand{\sech}{\,{\rm sech}} \newcommand{\sgn}{\,{\rm sgn}} \newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}} \newcommand{\ul}[1]{\underline{#1}} \newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$ \begin{align} &amp;\sec\pars{\theta} + \tan\pars{\theta}=4\ \imp\ 1 + \sin\pars{\theta} =4\cos\pars{\theta} \\[5mm]&amp;\imp\ 1 - \cos^{2}\pars{\theta}=\sin^{2}\pars{\theta} =\bracks{4\cos\pars{\theta} - 1}^{2} \end{align}</p> <blockquote> <p>Then, $$ 0=17\cos^{2}\pars{\theta} - 8\cos\pars{\theta} =\bracks{17\cos\pars{\theta} - 8}\ \underbrace{\cos\pars{\theta}}_{\ds{\color{#c00000}{\not=\ 0}}}\ \imp\ \color{#66f}{\large\cos\pars{\theta} = {8 \over 17}} $$</p> </blockquote>
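<p>A quick numerical confirmation of the result (a Python sketch, not part of the derivation): with $\cos\theta = 8/17$ and $\theta$ in the first quadrant, $\sin\theta = 15/17$, so $\sec\theta + \tan\theta = 17/8 + 15/8 = 4$ as required.</p>

```python
import math

theta = math.acos(8 / 17)                      # first-quadrant solution
lhs = 1 / math.cos(theta) + math.tan(theta)    # sec(theta) + tan(theta)
```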
1,557,359
<p>I've been doing Project Euler just as a way to increase my competency in computer science. I'm currently a pure and applied math major who recently adopted computer science as a minor in order to apply to grad schools in comp. sci.</p> <p>Some built-in Sage functions that are extremely fast, and for which I do not have comparably fast definitions, include gcd, lcm, .intersection(), .is_prime(), prime_range(), .factor(), solve(),...</p> <p>Many Project Euler problems I've been able to solve relatively easily and quickly with these built-in functions, but when I see the forums after answering the question, other people's answers are quite lengthy (as they are writing their own code).</p> <p>I'm debating if my method of getting through Project Euler is truly a learning experience. I'm definitely learning more Sage commands and getting better at using Sage, but I'm not sure my coding is getting any better unless I write my own definitions.</p> <p>In essence, what is the transition point between using built-in definitions and strictly using your own independently created code? Or what's more beneficial: being able to create your own relatively fast code, or getting the job done as fast as possible (using built-in methods)?</p>
learner
228,313
<p>From the perspective of a programmer/coder, it isn't a bad practice to use built-in functions when you can (in fact, people are lazy; if there's a built-in function, why do the work of manually defining another?)</p> <p>Now, from the perspective of a mathematician, the aim of the problems there is to motivate you to understand algorithms properly (i.e., understand how they work, how the algorithm is constructed from basic tools) and then implement them on your own to solve those problems.</p> <p>Take your $\gcd()$ function for example. The mathematician's way to solve this would be implementing the Euclidean algorithm (a recursive definition to find the gcd). There's a one-line definition for it:</p> <pre><code>int gcd(int a, int b){ return (b == 0) ? a : gcd(b, a % b); }
</code></pre> <p>For lcm, you can just use the fact that $\textrm{lcm}(a,b)\gcd(a,b)=ab$ for positive integers $a,b$.</p> <p>And similarly, you can think up algorithms for all the other functions. After all, those built-in functions were programmed from scratch by someone using these basic mathematical algorithms.</p> <p>So, in conclusion, there's no good or bad side to this; it just depends on your perspective. But since you're majoring in mathematics and not comp sci, I'd recommend using built-in functions as rarely as possible (use them only when you can't think of an elementary algorithm). If you do use a built-in function, try to search for a mathematical algorithm afterwards if you can and understand it well, so as to gain experience.</p>
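<p>The same two-liner translates directly to other languages; here is a Python sketch of the Euclidean recursion together with an <code>lcm</code> built on the identity mentioned above:</p>

```python
def gcd(a, b):
    # Euclidean algorithm: gcd(a, b) = gcd(b, a mod b), with gcd(a, 0) = a
    return a if b == 0 else gcd(b, a % b)

def lcm(a, b):
    # uses lcm(a, b) * gcd(a, b) = a * b for positive integers a, b
    return a * b // gcd(a, b)
```

For example, <code>gcd(48, 18)</code> is <code>6</code> and <code>lcm(4, 6)</code> is <code>12</code>.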
4,586,330
<p>How do I prove the following equality? <span class="math-container">$$\sqrt[5]{\frac{5\sqrt5+11}{2}}+\sqrt[5]{\frac{5\sqrt5-11}{2}}=\sqrt{5} $$</span></p> <p>My approach was to notice that the first term equals the golden ratio and the second term equals the reciprocal of the golden ratio, and adding them up would give <span class="math-container">$2$</span> times golden ratio <span class="math-container">$- 1$</span>, which is <span class="math-container">$\sqrt5$</span>, but how do I show that?</p>
Wang YeFei
955,668
<p>You can raise both sides to the <span class="math-container">$5^{th}$</span> power and cancel out like terms of both sides until you see that both sides look the same. That's called brute force. We can save some time by noting some relations between the terms of the sum on the left. Specifically, put <span class="math-container">$x = \sqrt[5]{\dfrac{5\sqrt{5}+11}{2}}\implies \dfrac{1}{x} = \sqrt[5]{\dfrac{5\sqrt{5}-11}{2}}$</span>, since the two radicands multiply to <span class="math-container">$1$</span>. So we show: <span class="math-container">$x+\dfrac{1}{x} = \sqrt{5}$</span>. Let <span class="math-container">$a = x+\dfrac{1}{x}$</span>; then we have: <span class="math-container">$a^5 = \left(x+\dfrac{1}{x}\right)^5= x^5+\dfrac{1}{x^5}+5\left(x^3+\dfrac{1}{x^3}\right)+10\left(x+\dfrac{1}{x}\right)$</span>. But <span class="math-container">$x^5+\dfrac{1}{x^5} = 5\sqrt{5}$</span>, and <span class="math-container">$x^3+\dfrac{1}{x^3} = \left(x+\dfrac{1}{x}\right)^3 - 3\left(x+\dfrac{1}{x}\right)$</span>. Substituting these identities into the equation above yields: <span class="math-container">$a^5 = 5\sqrt{5}+5(a^3-3a)+10a\implies a^5-5a^3+5a-5\sqrt{5} = 0$</span>. Observe that the polynomial on the left-hand side of this equation in <span class="math-container">$a$</span> has the factor <span class="math-container">$a - \sqrt{5}$</span>, so the equation can be written as: <span class="math-container">$(a-\sqrt{5})\left(a^4 + \sqrt{5}a^3+5\right) = 0$</span>. But <span class="math-container">$a &gt; 0 \implies a^4+\sqrt{5}a^3 + 5 &gt; 0$</span>. Thus this equation implies: <span class="math-container">$a - \sqrt{5} = 0$</span>, i.e. <span class="math-container">$a = \sqrt{5}$</span>. So <span class="math-container">$x+\dfrac{1}{x} = \sqrt{5}$</span>. We're done!</p>
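<p>A floating-point check of the identity (a Python sketch, not a proof): the two radicands multiply to $\frac{125-121}{4}=1$, so the second fifth root really is $1/x$, and the sum evaluates to $\sqrt5$.</p>

```python
s5 = 5 ** 0.5
x = ((5 * s5 + 11) / 2) ** 0.2   # first fifth root
y = ((5 * s5 - 11) / 2) ** 0.2   # second fifth root
total = x + y                     # should equal sqrt(5)
```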
299,576
<p>Suppose $X = \displaystyle\bigsqcup_{i \in I} X_i$ is the disjoint union of infinitely many continua. The components of the Stone-Cech remainder $X^*$ can be described as follows. Treat $I$ as a discrete topological space and consider the continuous map $F:X_i \to I$ that sends each $X_i$ to $i \in I$. The Stone Cech lift $\beta F: \beta X \to \beta I$ restricts to a surjection $G: X^* \to I^*$. For each free ultrafilter $\mathcal U \in I^*$ the preimage $G^{-1}(\mathcal U)$ is a subcontinuum of $X^*$ and $\{G^{-1}(\mathcal U): \mathcal U \in I^* \}$ is the set of components. For a proof see <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1.8271&amp;rep=rep1&amp;type=pdf" rel="nofollow noreferrer">Lemma 2.1</a>. </p> <p>This fact is used to prove each subcontinuum of the remainder $\mathbb H ^*$ of the half line is the intersection of all the standard subcontinua containing it (<a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1.8271&amp;rep=rep1&amp;type=pdf" rel="nofollow noreferrer">Theorem 5.1</a>). </p> <blockquote> <p><strong>Definition:</strong> Suppose $Y$ is a locally compact noncompact Tychonoff space. By a <em>standard subcontinuum</em> we mean a set of the form $G^{-1}(\mathcal U)$ where $X_i$ are subcontinua of $Y$ and $\bigcup X_i$ has the discrete union topology.</p> </blockquote> <p>Suppose $K \subset \mathbb H $ is a subcontinuum. For each $x \in \mathbb H ^* - x$ let $U^*$ be open at $x$ and disjoint from $K$. It follows $\mathbb H - U^*$ is the disjoint union of infinitely many intervals with the discrete union topology. By the above the components of $\mathbb H - U^*$ are all standard subcontinua and one of them contains $K$ so is disjoint from $U$.</p> <p>Suppose $Y$ is a more general space than a discrete union of intervals. $Y$ is locally-compact, noncompact, Hausdorff and each of the infinitely many components is compact. 
Suppose we take a collection $\{X_i: i \in I \}$ of components and an ultrafilter $\mathcal U \in I^*$ with the following property: For each compact $K \subset Y$ there is $U \in \mathcal U$ with $\bigcup \{X_i: i \in U\}$ disjoint from $K$. (observe this holds for a disjoint union of intervals simultaneously for all $\mathcal U \in I^*$) Then the set $G^{-1}(\mathcal U)$ is a subcontinuum of $Y^*$ but I see no reason it should be a component.</p> <p>Is there a more general version of Lemma 2.1 linked above? More generally what is known about expressing the components of Stone-Cech remainders in terms of the components of the original space? Is there any reason to believe a version of Theorem 5.1 should hold for more general locally compact noncompact connected Hausdorff spaces than $\mathbb H$?</p> <p><strong>Edit:</strong> Is there any known theorem something like this?</p> <blockquote> <p><strong>Conjecture:</strong> Suppose $Y$ is a Tychonoff space and each component is compact. The quasicomponents of $\beta Y$ correspond to the ultrafilters of clopen sets of $Y$.</p> </blockquote> <p>The <em>quasicomponent</em> of a point means the intersection of all clopen sets at that point.</p>
D. S. Lipham
95,718
<p>The conjecture is true, but I'll leave it to others to address the other parts of your question (or maybe I'll edit this post later). </p> <p>Let $Y$ be Tychonoff and $p\in \beta Y$.</p> <p>Using the facts that (1) $A=\overline{A\cap Y}$ for every clopen $A\subseteq \beta Y$, and (2) every clopen subset of $Y$ is a zero set, we can see that</p> <p>$$\{A\subseteq \beta Y:A\text{ is clopen and }p\in A\}=\{\overline {A\cap Y}:A\subseteq \beta Y\text{ is clopen and }A\cap Y\in p\}=\{\overline B:B\subseteq Y \text{ is clopen and }B\in p\}.$$</p> <p>This shows the one-to-one correspondence between quasi-components of $\beta Y$ ($=$ components of $\beta Y$) and ultrafilters of clopen subsets of $Y$. Namely, the quasi-component of $p$ is equal to $$\bigcap\{\overline B:B\in u\},\text{ where } u=\{B\subseteq Y:B\text{ is clopen and }B\in p\}$$ is an ultrafilter of clopen subsets of $Y$.</p>
1,508,753
<p>Show that for $x,y\in\mathbb{R}$ with $x,y\geq 0$, the arithmetic mean-quadratic mean inequality $$\frac{x+y}{2}\leq \sqrt{\frac{x^2+y^2}{2}}$$ holds.</p> <p>After my calculations I get: </p> <p>$$-x^2+2xy-y^2,$$ and I need this to be $\leq 0$, but I can't see why.</p>
Pacciu
8,553
<p>@Garsa : The $\varepsilon-\delta$ argument cannot be used to evaluate a limit; instead, it can only be used to prove that a conjectured value $l$ is the limit of a function at a given point.</p> <p>In order to guess a possible value for the limit $\displaystyle \lim_{x\to 2} \frac{1}{1-x}$, one possible way is to plug $x=2$ into the function itself (assuming the function is definite in $2$): this way you get the guess: $$l=-1\; .$$</p> <p>In order to prove that $\displaystyle \lim_{x\to 2} \frac{1}{1-x} = -1$, you should prove that every time you choose an $\varepsilon &gt;0$ "small", you can find a $\delta &gt;0$ "small" in such a way that $|x-2|&lt;\delta$ implies $\left| \frac{1}{1-x} - (-1)\right|&lt;\varepsilon$. In practice, this means that once you fix a "small" value of the parameter $\varepsilon &gt;0$, you can always cut out from the set of solutions of the parametric inequality $\left| \frac{1}{1-x} - (-1)\right|&lt;\varepsilon$ a "small" symmetric neighbourhood of $2$.</p> <p>Now the inequality: $$\left| \frac{1}{1-x} - (-1)\right|&lt;\varepsilon$$ is equivalent to the system: $$\begin{cases} \frac{2-x}{1-x} &lt; \varepsilon\\ \frac{2-x}{1-x}&gt; - \varepsilon\end{cases}$$ i.e.: $$\begin{cases} \frac{(2-\varepsilon)- (1-\varepsilon) x}{1-x} &lt;0\\ \frac{(2+\varepsilon) - (1+\varepsilon) x}{1-x} &gt;0\end{cases}$$ The latter system can be solved explicitly: in fact, assuming $0&lt;\varepsilon &lt;1$ (you don't lose generality, because $\varepsilon$ has to be "small"), you find the solutions: $$\frac{2+\varepsilon}{1+\varepsilon} &lt; x &lt; \frac{2-\varepsilon}{1-\varepsilon}\; .$$ Finally, you have to cut a symmetric neighbourhood of $2$ out of these solutions. 
In order to do this, subtract $2$ in each side and find: $$-\frac{\varepsilon}{1+\varepsilon} &lt; x-2 &lt; \frac{\varepsilon}{1-\varepsilon}\; ,$$ hence, if you let: $$\delta := \min \left\{ \frac{\varepsilon}{1+\varepsilon} , \frac{\varepsilon}{1-\varepsilon}\right\} = \frac{\varepsilon}{1+\varepsilon}$$ it is obvious that the neighbourhood $-\delta &lt;x-2&lt;\delta$ is contained in the set $\frac{2+\varepsilon}{1+\varepsilon} &lt; x &lt; \frac{2-\varepsilon}{1-\varepsilon}$.</p> <p>Therefore, if $|x-2|&lt;\delta$ then $|f(x) - (-1)| &lt; \varepsilon$ and thus $\displaystyle \lim_{x\to 2} f(x) = -1$.</p>
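<p>The choice $\delta = \varepsilon/(1+\varepsilon)$ can be tested numerically. The Python sketch below samples points with $|x-2|&lt;\delta$ and confirms that $|f(x)-(-1)|&lt;\varepsilon$ for all of them:</p>

```python
def f(x):
    return 1 / (1 - x)

eps = 0.01
delta = eps / (1 + eps)   # the delta found in the argument above
# sample points strictly inside the neighbourhood |x - 2| < delta
samples = [2 + delta * k / 1000 for k in range(-999, 1000)]
worst = max(abs(f(x) - (-1)) for x in samples)
```

Indeed one can check algebraically that $|f(x)+1| = |t|\delta/(1-|t|\delta) \le \varepsilon$ for $x = 2 + t\delta$, $|t| < 1$, so the sampled worst case stays below $\varepsilon$.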
1,508,753
<p>Show that for $x,y\in\mathbb{R}$ with $x,y\geq 0$, the arithmetic mean-quadratic mean inequality $$\frac{x+y}{2}\leq \sqrt{\frac{x^2+y^2}{2}}$$ holds.</p> <p>After my calculations I'll get: </p> <p>$$-x^2+2xy-y^2$$ which can't be $\leq 0$.</p>
Kashi
76,422
<p>Let $f(x)=1$ for every $x\in \mathbb {R}$ and $g(x)=1-x$ for every $x\in \mathbb {R}$. Notice that $\lim_{x\rightarrow 2}f(x)=1$; to see the $\epsilon-\delta$ argument for this notice that for this function $f(x)$, choose $\delta =\epsilon$ so that we have, $|f(x)-1|=0\leq |x-2|\leq \delta=\epsilon.$</p> <p>Further observe that $\lim_{x\rightarrow 2}g(x)=-1$; To see this, choose $\delta =\epsilon$ again so that we have, $|g(x)-(-1)|=|1-x+1|=|2-x|=|x-2|\leq \delta =\epsilon.$ </p> <p>So both the limit exists and $\lim_{x\rightarrow 2}g(x)$ is non-zero finite number. Hence the algebra of limits guarantees that $$\lim_{x\rightarrow 2}\frac{f(x)}{g(x)}=\frac{1}{-1}=-1$$.</p>
48,679
<p>I've been going through Fermat's proof that a rational square is never a congruent number. And I've stumbled upon a step I can't see why holds. Fermat says: ''If a square is made up of a square and the double of another square, its side is also made up of a square and the double of another square.'' I'm having difficulties understanding why this is. Can anyone help me understand it?</p>
Laie
6,013
<p>Feynman's path integral in quantum field theory. It involves integration over spaces of fields, using measures that have not been made rigorous.</p>
48,679
<p>I've been going through Fermat's proof that a rational square is never a congruent number. And I've stumbled upon a step I can't see why holds. Fermat says: ''If a square is made up of a square and the double of another square, its side is also made up of a square and the double of another square.'' I'm having difficulties understanding why this is. Can anyone help me understand it?</p>
Eric Zaslow
1,186
<p>Finally, a Math Overflow question that addresses my specialty: non-rigor!</p> <p>Here are a few examples of non-rigor as applied to evidence for dualities:</p> <ol> <li><p>Heterotic-Type II. In earlier times, the best evidence for heterotic-Type-II duality was a) counting the number of supersymmetries of the theory, and (b) comparing the moduli spaces.</p></li> <li><p>AdS-CFT. For AdS-CFT the earliest and best comparisons were counting the so-called anomalous dimensions of various operators. To date, I think the tests are far from rigorized (and yes, this would be a great problem to make mathematically precise).</p></li> <li><p>Mirror Symmetry, early days. Recall that mirror symmetry in CY moduli space came from constructing a chart of the Euler characteristics of CY complete intersections and noticing the symmetry of the chart about zero. Other non-rigorous arguments involve counting the dimensions (just the dimensions) of the moduli of purportedly mirror objects. Then there's the old compute-on-flat-space-and-let-supersymmetry-take-care-of-the-rest trick.</p></li> <li><p>Low energy effective field theory. The "fact" that string theory reduces to an oft-identifiable QFT in a low energy limit is a huge source of argumentation/inspiration in string theory. Accounting for (effective) black holes helped lead to M-theory in one context, and to the microscopic description of black-hole entropy in another. One can also argue for dualities by identifying equivalent field contents in two different models. This brings up another point.</p></li> <li><p>Invariance of BPS states under perturbation. It is great to take a quantity that does not vary and evaluate it in a limit where it is easy to compute. This argument appears again and again in physics -- and also in math, of course (e.g. in the heat-kernel proof of the index theorem). BPS numbers are just that. 
(Of course, they do vary, and the continuity of the relevant <em>physical</em> parameters [numbers are not necessarily physical quantities] is what underlies interesting explanations of wall-crossing.)</p></li> </ol> <p>I'm probably including too many that don't fit and excluding a lot that do. Very non-rigorous of me!</p>
2,886,502
<p>In the book <em>Functions of One Complex Variable</em> by Conway, at page 31, it is given that</p> <p><a href="https://i.stack.imgur.com/Ff0nB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ff0nB.png" alt="enter image description here"></a></p> <p>However, if $z$ were a real number, we could argue that $z^{n+1}$ goes to zero as $n \to \infty$ if $|z| &lt; 1$. But in the case of complex numbers, if $|z| = |a + bi| = \sqrt{a^2 + b^2} &lt; 1$, then, for example, $$z^2 = a^2 - b^2 + (2ab)i,$$ and it might be the case that $2ab \geq b$; so if we view $z = (a,b)$, then $z^2 = (a^2 - b^2, 2ab)$, and the statement $z^n \to 0_{\mathbb{C}}$ means both of the components of $z^n$ go to $0_{\mathbb{R}}$.</p> <p>However, even when we square the number $z$, the components need not decrease, so how can we be sure that if $|z| &lt; 1$, then $z^n \to 0_{\mathbb{C}}$ as $n\to \infty$ ? </p> <p><strong>tl;dr:</strong></p> <p>How can we prove that $z^{n+1} \to 0_{\mathbb{C}}$ as $n\to \infty$ ?</p>
what a disgrace
569,485
<p>Just write your complex numbers in polar form — are you familiar with the polar representation of complex numbers?</p> <p>Write $z=re^{i\theta}=r(\cos\theta+i\sin\theta)$ with $r=|z|$. Then $z^{n}=r^{n}e^{in\theta}$, and $|e^{in\theta}| = 1$ for every real $\theta$ and every $n$; thus</p> <p>$$|z^{n}|=r^{n}\to 0 \quad\text{as } n\to\infty,$$</p> <p>since $0\le r&lt;1$.</p>
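To see the polar-form argument in action, here is a quick numerical check (my addition, not part of the original answer) that the modulus of $z^n$ collapses to $0$ for a few sample $z$ of modulus less than $1$:

```python
# Numerical check: for |z| < 1, |z^n| = |z|^n shrinks to 0 regardless of the angle.
samples = [0.9j, 0.6 + 0.7j, -0.5 - 0.5j]   # all of modulus < 1

for z in samples:
    assert abs(z) < 1
    moduli = [abs(z ** n) for n in (1, 10, 100, 200)]
    # the moduli decrease monotonically ...
    assert all(m2 < m1 for m1, m2 in zip(moduli, moduli[1:]))
    # ... and are tiny by n = 200
    assert abs(z ** 200) < 1e-4
```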
3,014,144
<p>Say we have a closed interval <span class="math-container">$[a,b]$</span>. How can I prove that <span class="math-container">$|[a,b]|$</span> equals <span class="math-container">$c$</span>, where <span class="math-container">$c$</span> is the continuum?</p> <p>I have already proved <span class="math-container">$|(a,b)| = |(c,d)|$</span>, but I don't know whether it is useful.</p>
Marco
582,590
<p>Let <span class="math-container">$g(x)=f(1/x)$</span> for <span class="math-container">$x\in (1,\infty)$</span>. The hypotheses on <span class="math-container">$f(x)$</span> imply that <span class="math-container">$$\lim_{x \rightarrow \infty} g(x)=0~\mbox{and}~|(x^2g'(x))'|&lt;C,$$</span> since <span class="math-container">$(x^2g'(x))'=x^2 g''(x)+2x g'(x)=\frac{1}{x^2}f''(1/x)$</span>, which is bounded in absolute value by <span class="math-container">$C$</span> by assumption. </p> <p>Define <span class="math-container">$h(x)=xg'(x)$</span>; we need to show that <span class="math-container">$\lim_{x \rightarrow \infty}h(x)=0$</span>, since <span class="math-container">$\frac{1}{x}f'(1/x)=-xg'(x)=-h(x)$</span>. We first show that <span class="math-container">$h(x)$</span> is bounded on <span class="math-container">$[2,\infty)$</span>. On the contrary, suppose <span class="math-container">$h(x)$</span> is unbounded, so that there exists a sequence <span class="math-container">$x_i \rightarrow \infty$</span> such that <span class="math-container">$|h(x_i)| &gt;i$</span> for all <span class="math-container">$i$</span>. Without loss of generality we can assume that the <span class="math-container">$h(x_i)$</span> are all positive or all negative. Suppose then that <span class="math-container">$h(x_i)&gt;0$</span> for all <span class="math-container">$i$</span>; the proof is similar in the other case. For each <span class="math-container">$i$</span> large enough, choose <span class="math-container">$y_i \in [2,x_i]$</span> such that <span class="math-container">$h(y_i)&gt;i$</span> and <span class="math-container">$h'(y_i)&gt;0$</span>. This is possible, since for <span class="math-container">$i$</span> large enough <span class="math-container">$h(x_i)&gt;h(2)$</span>, and so <span class="math-container">$y_i=\inf\{t\in [2,x_i] : h(t)=h(x_i)\}$</span> works.
But, one has <span class="math-container">$$|h(x)+xh'(x)|=|(xh(x))'|=|(x^2g'(x))' |&lt;C,$$</span> and replacing <span class="math-container">$x$</span> by <span class="math-container">$y_i$</span> and letting <span class="math-container">$i \rightarrow \infty$</span> results in a contradiction. So <span class="math-container">$h(x)$</span> is bounded on <span class="math-container">$[2,\infty)$</span>. </p> <p>Next, we show that <span class="math-container">$\lim_{x\rightarrow \infty}h(x)$</span> exist. Let <span class="math-container">$\alpha=\limsup_{x\rightarrow \infty} h(x)$</span> and <span class="math-container">$\beta=\liminf_{x \rightarrow \infty}h(x)$</span>. Then <span class="math-container">$\alpha, \beta \in \mathbb{R}$</span>, and we need to show that <span class="math-container">$\alpha=\beta$</span>. On the contrary, suppose <span class="math-container">$\alpha&gt;\beta$</span>. Assume that <span class="math-container">$\alpha &gt; 0$</span>. Let <span class="math-container">$x_i \rightarrow \infty$</span> such that <span class="math-container">$h(x_i) \rightarrow \alpha$</span>. Choose <span class="math-container">$\theta&gt;0$</span> such that <span class="math-container">$\beta&lt;\theta \alpha&lt;\alpha$</span>. By the intermediate-value theorem, for <span class="math-container">$i$</span> large enough, there exists <span class="math-container">$t_i&gt;0$</span> such that <span class="math-container">$h(x_i+t_i)=\theta h(x_i)$</span>. In addition, we can choose <span class="math-container">$t_i$</span> such that <span class="math-container">$h(x)&gt;\theta \alpha/2$</span> for all <span class="math-container">$x\in [x_i, x_i+t_i]$</span>. </p> <p>From <span class="math-container">$|(xh(x))'|&lt;C$</span> and the mean-value theorem, we have <span class="math-container">$$|x(h(x+t)-h(x))+th(x+t)|=|(x+t)h(x+t)-xh(x)|&lt;Ct$$</span> for all <span class="math-container">$t\geq 0$</span> and <span class="math-container">$x&gt;1$</span>. 
It follows that <span class="math-container">$$x|h(x+t)-h(x)|&lt;At,$$</span> for some <span class="math-container">$A&gt;0$</span> and all <span class="math-container">$t\geq 0$</span> and all <span class="math-container">$x&gt;1$</span>. Replacing <span class="math-container">$x$</span> with <span class="math-container">$x_i$</span> and <span class="math-container">$t$</span> by <span class="math-container">$t_i$</span> gives <span class="math-container">$$\frac{t_i}{x_i}&gt;\frac{1}{A}|h(x_i+t_i)-h(x_i)|&gt;\frac{1}{A}(1-\theta)h(x_i)$$</span> for <span class="math-container">$i$</span> large enough. It follows that <span class="math-container">$$\frac{x_i+t_i}{x_i}=1+\frac{t_i}{x_i}&gt;D,$$</span> for some <span class="math-container">$D&gt;1$</span> and all <span class="math-container">$i$</span> large enough. Now, one has</p> <p><span class="math-container">$$g'(x)= \frac{h(x)}{x} &gt;\frac{\theta \alpha}{2x},$$</span> for all <span class="math-container">$x\in [x_i,x_i+t_i]$</span>. It follows that <span class="math-container">$$|g(x_i+t_i)-g(x_i)|\geq \int_{x_i}^{x_i+t_i} \frac{\theta \alpha}{2x}dx\geq \frac{\theta \alpha}{2} \log \left (\frac{x_i+t_i}{x_i} \right) \geq \frac{\theta \alpha}{2}\log D,$$</span> for all <span class="math-container">$i$</span> large enough. This is a contradiction, since <span class="math-container">$\lim_{x \rightarrow \infty}g(x)=0$</span>. The proof in the case of <span class="math-container">$\beta&lt;0$</span> is similar. Therefore, <span class="math-container">$\alpha=\beta$</span> and so <span class="math-container">$L=\lim_{x\rightarrow \infty}h(x)$</span> exists. </p> <p>Finally, we show that <span class="math-container">$L=0$</span>. Suppose <span class="math-container">$L&gt;0$</span>. 
Then for <span class="math-container">$x$</span> large enough <span class="math-container">$|g'(x)|=|h(x)/x|&gt;L/(2x)$</span> which gives <span class="math-container">$g(x)&gt;g(x_0)+ (L/2) \log(x)$</span> for some <span class="math-container">$x_0$</span> and all <span class="math-container">$x$</span> large enough, a contradiction. The case where <span class="math-container">$L&lt;0$</span> is similar. Therefore, <span class="math-container">$L=0$</span> and the proof is completed. </p>
1,227,610
<p>I need to calculate derivative of the following function with respect to the matrix X:</p> <p>$f(X)=||diag(X^TX)||_2^2$</p> <p>where $diag()$ returns diagonal elements of a matrix into a vector. How can I calculate $\frac {\partial f(X)} {\partial X}$? Please help me.</p> <p>Thanks in advance!</p>
Community
-1
<p>Here you get a scalar function of the matrix $X$. The derivative of such a scalar function is defined as the matrix of partial derivatives wrt each entry of the matrix, as <a href="http://en.wikipedia.org/wiki/Matrix_calculus#Scalar-by-matrix" rel="nofollow">here</a>.</p> <p>You can expand out the expression and calculate the partial derivative.</p>
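To expand on this (my own sketch, not from the original answer): writing <span class="math-container">$f(X)=\sum_j c_j^2$</span> with <span class="math-container">$c_j=(X^TX)_{jj}=\sum_i X_{ij}^2$</span>, the chain rule gives <span class="math-container">$\partial f/\partial X_{pq}=4\,X_{pq}\,c_q$</span>. A pure-Python finite-difference check on a small matrix:

```python
# Assumed expansion: f(X) = sum_j ((X^T X)_{jj})^2 = sum_j (sum_i X_ij^2)^2,
# whose gradient works out to df/dX_pq = 4 * X_pq * (X^T X)_qq.

def f(X):
    return sum(sum(row[j] ** 2 for row in X) ** 2 for j in range(len(X[0])))

def analytic_grad(X):
    col_norms_sq = [sum(row[j] ** 2 for row in X) for j in range(len(X[0]))]
    return [[4 * x * col_norms_sq[j] for j, x in enumerate(row)] for row in X]

def numeric_grad(X, h=1e-6):
    # central finite differences, entry by entry
    G = [[0.0] * len(X[0]) for _ in X]
    for p in range(len(X)):
        for q in range(len(X[0])):
            Xp = [r[:] for r in X]; Xm = [r[:] for r in X]
            Xp[p][q] += h; Xm[p][q] -= h
            G[p][q] = (f(Xp) - f(Xm)) / (2 * h)
    return G

X = [[1.0, 2.0], [3.0, 4.0]]
A, N = analytic_grad(X), numeric_grad(X)
assert all(abs(A[p][q] - N[p][q]) < 1e-3 for p in range(2) for q in range(2))
```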
4,024,071
<p>'A or B' means one or the other or both. <br/><br/>So I think <span class="math-container">$\{x | x = u$</span> or <span class="math-container">$ x = v\}$</span> can be equal to <span class="math-container">$\{u\}$</span> or <span class="math-container">$\{v\}$</span> or <span class="math-container">$\{u, v\}$</span>. <br/>Similarly, I think <span class="math-container">$\{x | x\in a $</span> or <span class="math-container">$ x\in b\}$</span> can be equal to any subset of <span class="math-container">$a \cup b$</span> except the empty set.<br/><br/> But actually <span class="math-container">$\{x | x = u$</span> or <span class="math-container">$ x = v\} = \{u, v\}$</span> and <span class="math-container">$\{x | x\in a $</span> or <span class="math-container">$ x\in b\} = a \cup b$</span>. What am I missing?</p>
Arturo Magidin
742
<p>The price per pair of pants must be a <strong>divisor</strong> of each of the amounts of money made in each of the months, as presumably he sold an integral number of pairs. Thus, the price must divide the greatest common divisor of 6804 (money made in September), of 10800 (money made in October), and of 7938 (money made in December). Since the greatest common divisor is 54, the price must be a <strong>divisor</strong> of 54.</p> <p>(The defining property of the greatest common divisor of a set of numbers is that (i) it divides each of the numbers in the set; and (ii) any number that divides all numbers in the set divides the greatest common divisor. So the price per pair, which must divide each of the three amounts, must also divide <span class="math-container">$54$</span>.)</p> <p>Now, what are the divisors of <span class="math-container">$54$</span>? As <span class="math-container">$54=2\times 3^3$</span>, the divisors are: <span class="math-container">$1$</span>, <span class="math-container">$2$</span>, <span class="math-container">$3$</span>, <span class="math-container">$6$</span>, <span class="math-container">$9$</span>, <span class="math-container">$18$</span>, <span class="math-container">$27$</span>, and <span class="math-container">$54$</span>. The price must be one of them.</p> <p>We are told the price is “more than <span class="math-container">$14$</span> and less than <span class="math-container">$21$</span>”. Well, the only possibility then is that the price per pair is <span class="math-container">$18$</span>.</p>
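The arithmetic is easy to check mechanically; a small sketch (my addition, with the dollar amounts taken from the answer above):

```python
from math import gcd
from functools import reduce

amounts = [6804, 10800, 7938]          # monthly takings from the answer
g = reduce(gcd, amounts)               # greatest common divisor of all three
assert g == 54

# every divisor of 54, then filter to the stated price range (14, 21)
divisors = [d for d in range(1, g + 1) if g % d == 0]
assert divisors == [1, 2, 3, 6, 9, 18, 27, 54]

candidates = [d for d in divisors if 14 < d < 21]
assert candidates == [18]              # the price per pair must be 18
```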
1,197,056
<p>I tried to evaluate the following limits but I just couldn't succeed; basically I can't use L'Hopital to solve this... </p> <p>For the second limit I tried to transform it into $e^{\frac{2n\sqrt{n+3}\ln\left(\frac{3n-1}{2n+3}\right)}{(n+4)\sqrt{n+1}}}$ but still with no success...</p> <p>$$\lim_{n \to \infty } \frac{2n^2-3}{-n^2+7}\cdot\frac{3^n-2^{n-1}}{3^{n+2}+2^n}$$</p> <p>$$\lim_{n \to \infty } \left(\frac{3n-1}{2n+3}\right)^{\frac{2n\sqrt{n+3}}{(n+4)\sqrt{n+1}}}$$</p> <p>Any suggestions/help? :)</p> <p>Thanks</p>
Mark Fischler
150,362
<p>For the first limit, it breaks into two factors with finite limits:<br> $$\lim_{n \to \infty} \frac{2n^2-3}{7-n^2} = \lim_{n \to \infty}\frac{2n^2}{-n^2} =-2\\ \lim_{n \to \infty} \frac{3^n-2^{n-1}}{3^{n+2}+2^n} = \lim_{n \to \infty}\frac{3^n}{3^{n+2}} = \frac{1}{9} $$ so the answer is $-\frac{2}{9}$.</p> <p>For the second, rewrite it as $$ \left(\frac{(3n-1)(2n-3)}{4n^2-9} \right) ^{\frac{\sqrt{n}2n(1+\frac{3}{2n}+\ldots)}{\sqrt{n}(n+4)(1+\frac{1}{2n}+\ldots)}} $$ and expand to next-lowest order in $1/n$ to get $$ \left( \frac{3}{2} \left[ 1-\frac{11}{6n}+\ldots\right] \right)^{2(1+\frac{3}{2n}+\ldots-\frac{9}{2n}+\ldots)} $$ Since the exponent does not go to infinity, we can in fact just use the lowest-order terms, getting $$\left( \frac{3}{2} \right)^2 = \frac{9}{4}$$</p>
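A numeric sanity check of both limits (my addition), using Python's exact integers for the first so that the huge powers $3^n$ cause no overflow:

```python
from fractions import Fraction
import math

# First limit: (2n^2-3)/(-n^2+7) * (3^n - 2^(n-1))/(3^(n+2) + 2^n)  ->  -2/9
n = 200
first = Fraction(2 * n**2 - 3, -(n**2) + 7) \
      * Fraction(3**n - 2**(n - 1), 3**(n + 2) + 2**n)
assert abs(float(first) + 2 / 9) < 1e-2

# Second limit: ((3n-1)/(2n+3)) ** (2n*sqrt(n+3) / ((n+4)*sqrt(n+1)))  ->  9/4
n = 10**7
base = (3 * n - 1) / (2 * n + 3)
expo = 2 * n * math.sqrt(n + 3) / ((n + 4) * math.sqrt(n + 1))
assert abs(base ** expo - 9 / 4) < 1e-3
```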
1,100,266
<p>I begin with this:</p> <blockquote> <p>If $f:[a,b]\rightarrow [a,b]$ is continuous and $|f(x)-f(y)|&lt;|x-y|$ for all $x\neq y$, then the equation $f(x)=x$ has exactly one root.</p> </blockquote> <p>The problem is not very hard and I solved it. Then I wonder if this problem is true or not:</p> <blockquote> <p>If $f:\mathbb{R}\rightarrow \mathbb{R}$ is continuous and $|f(x)-f(y)|&lt;|x-y|$ for all $x\neq y$, then the equation $f(x)=x$ has exactly one root.</p> </blockquote> <p>With one more hypothesis $f$ is bounded, the above one becomes true. However, I could not find any function which is a counter-example for the original one. </p> <p>So my questions are:</p> <ol> <li><p>Is there any counter-example for the problem $\mathbb{R}\rightarrow \mathbb{R}$? If not, how can we prove it?</p></li> <li><p>Besides $f$ is bounded, can we find any different hypothesis to add (if necessary) to lead to the conclusion?</p></li> </ol> <p>Thanks so much.</p>
Harald Hanche-Olsen
23,290
<p>You can construct a counterexample by setting $f(x)=x-g(x)$ where $g$ is any positive, strictly increasing function that doesn't increase too fast. Something like $g(x)=\frac\pi2+\arctan x$ should work well.</p> <p>If you have $|f(x)-f(y)|\le L|x-y|$ for a constant $L&lt;1$, there will always be a fixed point of $f$. This is Banach's fixed point theorem, but in this context it is easy to prove: Fix $y=0$ in the inequality and show that $$\lim_{x\to\pm\infty} x-f(x)=\pm\infty.$$</p>
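A quick numerical illustration of this counterexample (my sketch; the construction itself is from the answer). Here $g(x)=\frac\pi2+\arctan x$ is positive with derivative in $(0,1]$, so $f(x)=x-g(x)$ strictly shrinks distances yet never satisfies $f(x)=x$:

```python
import math
import random

def g(x):
    # positive, strictly increasing, derivative 1/(1+x^2) lies in (0, 1]
    return math.pi / 2 + math.atan(x)

def f(x):
    return x - g(x)

random.seed(0)
pts = [random.uniform(-100, 100) for _ in range(100)]

# strict contraction-like property on every sampled pair ...
assert all(abs(f(x) - f(y)) < abs(x - y) for x in pts for y in pts if x != y)
# ... yet no fixed point: f(x) = x would force g(x) = 0, but g > 0 everywhere
assert all(f(x) < x for x in pts)
```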
625,746
<p>If the $n$th term of a series is given by $T(n)=T(n-1) + T(n-1) \times C$, where $C$ is a given constant, $T(1)=A$ and $n \ge 2$, I need to tell whether its value will reach at least $G$ at or before its $m$th term.</p> <p>EXAMPLE: Say the first term $A$ is $2$ and say $C$ is also $2$, and we need to check whether its value is at least $10(=G)$ before or up to the 2nd term.</p> <p>Then the answer should be No.</p> <p>I need to check this without calculating the values explicitly, since all the variables can be very large (say of order $10^9$). Can anyone help?</p>
mathlove
78,967
<p>Since $$T(n)=(1+C)T(n-1)$$ we have $$T(n)=(1+C)^{n-1}T(1)=(1+C)^{n-1}A.$$</p> <p>Hence, you can use the following (we suppose that $A\gt0$) :</p> <p>$$T(n)\ge G\iff (1+C)^{n-1}A\ge G\iff(1+C)^{n-1}\ge\frac{G}{A}\iff n\ge\log_{1+C}\frac GA+1,$$ $$T(n)\lt G\iff n\lt \log_{1+C}\frac{G}{A}+1.$$</p> <p>Here, we suppose that $C\gt -1, C\not=0.$</p> <p>So, for example, if $A=2, C=2, G=10$, we have $$T(n)\ge G\iff n\ge\log_{3}\frac{10}{2}+1=\log_{3}5+1,\ \ T(n)\lt G\iff n\lt \log_{3}5+1.$$ Note that $\log_{3}5\approx 1.465.$</p>
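Computationally, the point of the closed form is that one never needs to iterate the recurrence, which matters for inputs of order $10^9$. A sketch in code (my addition; it assumes $A\gt 0$ and $C\gt 0$, and note that floating-point logarithms can misjudge borderline cases where the two sides are nearly equal):

```python
import math

def reaches_g_by(A, C, G, m):
    """True iff T(n) = A*(1+C)^(n-1) >= G for some n <= m (assumes A > 0, C > 0)."""
    if A >= G:
        return True
    # T(m) is the largest term, so it suffices to compare T(m) with G;
    # logarithms avoid forming (1+C)^(m-1) explicitly for huge inputs.
    return (m - 1) * math.log1p(C) >= math.log(G / A)

# Example from the question: A = 2, C = 2, G = 10
assert not reaches_g_by(2, 2, 10, 2)    # T(2) = 6  < 10  ->  "No"
assert reaches_g_by(2, 2, 10, 3)        # T(3) = 18 >= 10
# huge inputs, no big-number arithmetic needed
assert reaches_g_by(10**9, 10**9, 10**18, 2)
```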
2,309,527
<p>I was sincerely hoping someone could explain to me how for the function found below I would determine its standard matrix and whether or not the function is 1-to-1 and onto. </p> <blockquote> <p>The linear function $T:\mathbb{R}^2 \to \mathbb{R}^3$ is given by $$ T(x,y) = \begin{pmatrix} x-y \\ 5x+3y \\ 2x+4y \end{pmatrix} $$</p> </blockquote> <ol> <li>I believe here the standard matrix would be: $\begin{bmatrix} 1 &amp; -1 \\ 5 &amp; 3 \\ 2 &amp; 4\end{bmatrix}$, because I think multiplying that matrix with $(x,y)^T$ would result in that function. I'm not sure about this though. </li> <li>As for the 1-to-1 question; I know that that function is 1-to-1 if each $x \in \mathbb{R}^2$ is related to a different $y \in \mathbb{R}^3$. But how do I test and prove this? </li> <li>As for onto: A function is onto when its image equals its co-domain, so here that would be if $T(x)=y$? But yet again, how would I test/prove this on this function? </li> </ol> <p>Some very basic questions that I tried googling, but the explanations I found so far did not help much unfortunately. For example, I found explanations for functions like $f(x,y) = x-y$, but none like the one I have here. I also did a lot of looking in the slides from the school course, but that didn't help either. </p>
copper.hat
27,978
<p>Note that $T_n(x) \in \operatorname{sp} \{ e_1,...,e_n\}$, hence $T_n$ has finite dimensional range.</p>
694,098
<p>$$\lim\limits_{x \to 3} \frac{3-x}{5-\sqrt{x^2+16}}$$</p> <p>The professor says we can't use l'hopital's rule and must solve algebraically.</p>
amWhy
9,003
<p>Try multiplying the numerator and denominator by the conjugate of the denominator:</p> <p>Multiply by $$\dfrac{5+ \sqrt{x^2 +16}}{5+\sqrt{x^2 + 16}}$$</p> <hr> <p>$$\lim\limits_{x \to 3} \frac{3-x}{5-\sqrt{x^2+16}} \cdot \dfrac{5+ \sqrt{x^2 +16}}{5+\sqrt{x^2 + 16}} = \lim_{x\to 3}\dfrac{(3-x)(5 + \sqrt{x^2 + 16})}{25 - (x^2 + 16)}$$</p> <p>Then note that you have a difference of squares in the denominator: $$25 - (x^2 + 16) = 9 - x^2 = (3-x)(3+x)$$</p> <p>Now, you can cancel the factor $3-x$, as it appears in both the numerator and denominator.</p>
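Carrying the cancellation through leaves $\frac{5+\sqrt{x^2+16}}{3+x}$, which is continuous at $x=3$. A quick numeric check (my addition):

```python
import math

def orig(x):
    # the original 0/0 expression, defined away from x = 3
    return (3 - x) / (5 - math.sqrt(x**2 + 16))

def simplified(x):
    # after multiplying by the conjugate and cancelling the factor (3 - x)
    return (5 + math.sqrt(x**2 + 16)) / (3 + x)

for x in (2.9, 2.999, 3.001, 3.1):
    assert abs(orig(x) - simplified(x)) < 1e-6   # same function away from x = 3
assert abs(simplified(3) - 5 / 3) < 1e-12        # so the limit is 10/6 = 5/3
```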
694,098
<p>$$\lim\limits_{x \to 3} \frac{3-x}{5-\sqrt{x^2+16}}$$</p> <p>The professor says we can't use l'hopital's rule and must solve algebraically.</p>
Bill Dubuque
242
<p><strong>Hint</strong> $\ \ f\bar f = (a\!-\!x)(a\!+\!x)\ \Rightarrow\ \dfrac{a-x}f \,=\, \dfrac{\bar f}{a+x}\ \ $ where $\ \ \bar f,\,f\, =\, 5\pm \sqrt{x^2+16}$</p> <p><strong>Remark</strong> $\ $ This is a special case of <em>rationalizing the denominator</em>, which <a href="http://www.google.com/search?q=site%3Amath.stackexchange.com+%28dubuque+OR+%22math+gems%22+OR+%22key+ideas%22%29+rationalize+denominator" rel="nofollow">is often handy.</a></p>
4,301,673
<p>Suppose we have chosen <span class="math-container">$n$</span> random points in a line segment <span class="math-container">$[0, t]$</span>, <span class="math-container">$n\leq t+1$</span>. What is the probability that the distance between each pair of adjacent points is &gt; 1? Or more formally, let <span class="math-container">$U_1, U_2, ..., U_n \stackrel{iid}{\sim} U(0,t), n\leq t+1$</span>, and let <span class="math-container">$U_{(i)}$</span> denote the <span class="math-container">$i^{th}$</span> order statistic. Find <span class="math-container">$P(\cap_{i=1}^{n-1} U_{(i+1)} - U_{(i)} &gt; 1)$</span>.</p>
Roah
2,481
<p>@MXXZ's answer is toooo beautiful to be the only solution :)</p> <h6>The denominator</h6> <p>Our <span class="math-container">$n$</span> points are distributed uniformly on <span class="math-container">$[0,t]$</span>, so the denominator is <span class="math-container">$t^n$</span>.</p> <h6>The numerator</h6> <p>Imagine a favorable arrangement of points. Favorable means all points have a distance of at least 1 between them. Now cut a segment of length one to the right of points <span class="math-container">$1$</span>, <span class="math-container">$2$</span>, ..., <span class="math-container">$n-1$</span>. You will be left with a segment of length <span class="math-container">$t-(n-1)$</span> and <span class="math-container">$n$</span> points uniformly distributed on it. So the numerator is <span class="math-container">$(t-(n-1))^n$</span>.</p> <p>And the probability is <span class="math-container">$$ \frac{(t-(n-1))^n}{t^n} $$</span></p> <h6>Sketch for <span class="math-container">$n=4$</span></h6> <p><img src="https://i.stack.imgur.com/bSrCq.jpg" alt="enter image description here" /></p>
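A Monte Carlo sanity check of the formula (my addition; the parameters are arbitrary):

```python
import random

random.seed(1)
t, n, trials = 10.0, 3, 200_000

def well_spaced():
    # sort n uniform points on [0, t]; check all adjacent gaps exceed 1
    pts = sorted(random.uniform(0, t) for _ in range(n))
    return all(b - a > 1 for a, b in zip(pts, pts[1:]))

estimate = sum(well_spaced() for _ in range(trials)) / trials
exact = ((t - (n - 1)) / t) ** n          # (8/10)^3 = 0.512
assert abs(estimate - exact) < 0.01
```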
1,728,524
<p>Classical texts on control theory show that the linear system $\dot x=A \,x$ is stable if the real parts of the eigenvalues are negative.</p> <p>Does the same criterion apply for a system of the following form: $$ \left[ \begin{array}{cc|c} \dot x\\ 0 \end{array} \right] = B \left[ \begin{array}{cc|c} x \\ y \end{array} \right]$$ </p> <p>where $\dot x$, $x$, $y$ are vectors and $B$ is a matrix of constants. In this system, the equations for $\dot x$ include terms from $y$. For this system the number of unknowns equals the number of equations. While it would be possible to perform additional algebra to reduce the system to the classical form $\dot x=A\,x$, is this necessary? I would prefer to write the equations in the form above, because this makes the physical interpretation of the equations more clear. </p> <p>I am calling the $y$ values "auxiliary" variables, because they are dynamic in the sense that they change with time (as a consequence of the linear system - the values $y$ do not have an explicit dependence on time), but an expression for their derivative does not fall out of the analysis. (In this system, the $y$ equations result from a simple energy balance where no energy "hold-up" is assumed.) If there is a more appropriate description, feel free to revise the question. </p> <p>Because $\dot y$ does not appear on the left hand side, it is not clear to me if the classic stability test still applies. </p>
Robert Israel
8,508
<p>The conditions of symmetry and not having zero elements are not relevant. A necessary and sufficient condition is that the column space of $A$ contains the column space of $B$. If so, you can take $C = A^+ B$ where $A^+$ is the <a href="https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_pseudoinverse" rel="nofollow">Moore-Penrose pseudoinverse</a> of $A$.</p>
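To make the criterion concrete, here is a tiny worked example of my own (not from the answer): a rank-one <span class="math-container">$A$</span> whose column space contains <span class="math-container">$B$</span>'s column, with the Moore-Penrose pseudoinverse written down by hand for this special case.

```python
from fractions import Fraction as F

def matmul(X, Y):
    # plain nested-list matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 0], [2, 0]]                 # rank 1, column space = span{(1, 2)}
B = [[3], [6]]                       # (3, 6) = 3*(1, 2) lies in that column space

# For the rank-one A = u v^T with u = (1, 2), v = (1, 0), the Moore-Penrose
# pseudoinverse is A^+ = v u^T / (|u|^2 |v|^2) = (1/5) [[1, 2], [0, 0]];
# exact rationals avoid any float round-off below.
A_pinv = [[F(1, 5), F(2, 5)], [F(0), F(0)]]

C = matmul(A_pinv, B)                # the candidate solution C = A^+ B
assert C == [[3], [0]]
assert matmul(A, C) == B             # A C reproduces B exactly
```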
3,757,038
<h2>The problem</h2> <p>So recently in school, we were given a task somewhat like this (roughly translated):</p> <blockquote> <p><em>Assign a system of linear equations to each drawing</em></p> </blockquote> <p>Then, there were some systems of three linear equations (SLEs), where each equation described a plane in its coordinate form, and some sketches of three planes in some relation (e.g. parallel or intersecting at 90° angles).</p> <h2>My question</h2> <p>For some reason, I immediately knew that these planes:</p> <p><a href="https://i.stack.imgur.com/6luFl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6luFl.png" alt="enter image description here" /></a></p> <p>belonged to this SLE: <span class="math-container">$$ x_1 -3x_2 +2x_3 = -2 $$</span> <span class="math-container">$$ x_1 +3x_2 -2x_3 = 5 $$</span> <span class="math-container">$$-6x_2 + 4x_3 = 3$$</span></p> <p>And it turned out to be true. In school, we proved this by determining the planes' intersecting lines and showing that they are parallel, but not identical.<br /> However, I believe that it must be possible to show the planes are arranged like this without a lot of calculation, since I immediately saw/&quot;felt&quot; that the planes described in the SLE must be arranged in the way they are in the picture (like a triangle). I could also determine the same &quot;shape&quot; on a similar question, so I do not believe that it was just coincidence.</p> <h2>What needs to be shown?</h2> <p>So we must show that the three planes described by the SLE cut each other in a way that I do not really know how to describe. They do not intersect each other perpendicularly (at least they don't have to, to be arranged in a triangle), but there is no point in which all three planes intersect.
If you were to put a line in the center of the triangle, it would be parallel to all planes.</p> <p>The three planes do not share one intersecting line as it would be in this case:</p> <p><a href="https://i.stack.imgur.com/LQ5IY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LQ5IY.png" alt="enter image description here" /></a></p> <p>(which was another drawing from the task, but is not relevant to this question except for that it has to be excluded)</p> <h2>My thoughts</h2> <p>If you were to look at the planes exactly from the direction in which the parallel line from the previous section leads, you would see something like this:</p> <p><a href="https://i.stack.imgur.com/eMj2x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eMj2x.png" alt="enter image description here" /></a></p> <p>The red arrows represent the normal of each plane (they should be perpendicular). You can see that the normals somehow are part of one (new) plane. This is already given by the manner how the planes intersect with each other (as I described before). If you now were to align your coordinate system in such a way that the plane in which the normals lie is the <span class="math-container">$x_1 x_2$</span>-plane, each normals would have an <span class="math-container">$x_3$</span> value of <span class="math-container">$0$</span>. 
If you were now to further align the coordinate axes so that the <span class="math-container">$x_1$</span>-axis is identical to one of the normals (let's just choose the bottom one), the values of the normals would be something like this:</p> <p><span class="math-container">$n_1=\begin{pmatrix} a \\ 0 \\ 0 \end{pmatrix}$</span> for the bottom normal</p> <p><span class="math-container">$n_2=\begin{pmatrix} a \\ a \\ 0 \end{pmatrix}$</span> for the upper right normal</p> <p>and <span class="math-container">$n_3=\begin{pmatrix} a \\ -a \\ 0 \end{pmatrix}$</span> for the upper left normal</p> <p>Of course, the planes do not have to be arranged in a way that the vectors line up so nicely that they are in one of the planes of our coordinate system.</p> <p>However, in the SLE, I noticed the following:</p> <p>-The three normals (we can simply read off the coefficients, since the equations are in coordinate form) are <span class="math-container">$n_1=\begin{pmatrix} 1 \\ -3 \\ 2 \end{pmatrix}$</span>, <span class="math-container">$n_2=\begin{pmatrix} 1 \\ 3 \\ -2 \end{pmatrix}$</span> and <span class="math-container">$n_3=\begin{pmatrix} 0 \\ -6 \\ 4 \end{pmatrix}$</span>.</p> <p>As we can see, <span class="math-container">$n_1$</span> and <span class="math-container">$n_2$</span> have the same value for <span class="math-container">$x_1$</span>, and <span class="math-container">$x_2(n_1)=-x_2(n_2)$</span>; <span class="math-container">$x_3(n_1)=-x_3(n_2)$</span></p> <p>Also, <span class="math-container">$n_3$</span> is somewhat similar in that its <span class="math-container">$x_2$</span> and <span class="math-container">$x_3$</span> values are the same as the <span class="math-container">$x_2$</span> and <span class="math-container">$x_3$</span> values of <span class="math-container">$n_1$</span>, but multiplied by the factor <span class="math-container">$2$</span>.</p> <p>I also noticed that <span class="math-container">$n_3$</span> has no <span class="math-container">$x_1$</span> value (or, more accurately, the value is <span class="math-container">$0$</span>), while for <span class="math-container">$n_1$</span> and <span class="math-container">$n_2$</span>, the value for <span class="math-container">$x_1$</span> is identical (<span class="math-container">$n_1=1$</span>).</p> <h2>Conclusion</h2> <p>I feel like I am very close to a solution; I just don't know what to do with my thoughts/approaches regarding the normals of the planes.<br /> Any help would be greatly appreciated.</p> <p><strong>How can I show that the three planes are arranged in this triangular-like shape by using their normals, i.e. without having to calculate the planes' intersection lines?</strong> (Probably we will need more than normals, but I believe that they are the starting point.)</p> <hr /> <p><strong>Update:</strong> I posted a <a href="https://math.stackexchange.com/questions/3827387/why-are-three-vectors-linearly-dependent-when-one-of-them-is-a-combination-of-th">new question</a> that is related to this problem, but is (at least in my opinion) not the same question.</p>
Marco Camurri
780,467
<p>A line in Euclidean space can be described by a system of two equations, each describing a plane.</p> <p>The equations will be in the form: <span class="math-container">$$ \begin{cases} ax + by + cz + d = 0 \\ a'x + b'y +c'z + d' = 0 \end{cases} $$</span> Another way to express a line is in parametric form: <span class="math-container">$$ \begin{cases} x = x_0 + l\cdot t\\ y = y_0 + m\cdot t \\ z = z_0 + n\cdot t \\ \end{cases} $$</span> Two lines are parallel if they have the same direction vector <span class="math-container">$(l,m,n)$</span> or if their direction vectors differ by a scalar multiple.</p> <p>You can compute the direction vector with the formula: <span class="math-container">$$ (l,m,n) = \left(\begin{vmatrix} b &amp; c \\ b' &amp; c' \end{vmatrix}, -\begin{vmatrix} a &amp; c \\ a' &amp; c' \end{vmatrix}, \begin{vmatrix} a &amp; b \\ a' &amp; b' \end{vmatrix}\right) $$</span></p> <p>If you pick any combination of two equations from your example and convert to the parametric form, you will see that they all have the same direction vectors, meaning the intersections between them are parallel.</p> <p>Also, if you arrange the coefficients into two matrices, like this:</p> <p>Incomplete matrix <span class="math-container">$$ A = \begin{pmatrix} a &amp; b &amp; c \\ a' &amp; b' &amp; c' \\ a'' &amp; b'' &amp; c'' \\ a''' &amp; b''' &amp; c''' \end{pmatrix} $$</span> Complete matrix <span class="math-container">$$ B = \begin{pmatrix} a &amp; b &amp; c &amp; d \\ a' &amp; b' &amp; c' &amp; d'\\ a'' &amp; b'' &amp; c'' &amp; d''\\ a''' &amp; b''' &amp; c''' &amp; d''' \end{pmatrix} $$</span></p> <p>you'll have that the two lines are parallel if the rank of <span class="math-container">$A$</span> is 2 and the rank of <span class="math-container">$B$</span> is 3.</p> <p>From the equations above it is easy to see from the coefficients that such matrices would not be full rank because of the repeated terms.</p>
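For the specific SLE in the question this computation is tiny; here is a sketch (my addition) using the cross product of each pair of normals, which is exactly the direction-vector formula above applied to the coefficient rows:

```python
def cross(u, v):
    # cross product of two 3-vectors
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# normals read off the three plane equations in the question
n1, n2, n3 = (1, -3, 2), (1, 3, -2), (0, -6, 4)

# direction vectors of the three pairwise intersection lines
d12, d13, d23 = cross(n1, n2), cross(n1, n3), cross(n2, n3)

# two vectors are parallel iff their cross product vanishes
assert cross(d12, d13) == (0, 0, 0)
assert cross(d12, d23) == (0, 0, 0)
assert d12 != (0, 0, 0)   # the planes really do intersect pairwise in lines
```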
3,757,038
<h2>The problem</h2> <p>So recently in school, we were given a task somewhat like this (roughly translated):</p> <blockquote> <p><em>Assign a system of linear equations to each drawing</em></p> </blockquote> <p>Then, there were some systems of three linear equations (SLEs), where each equation described a plane in its coordinate form, and some sketches of three planes in some relation (e.g. parallel or intersecting at 90° angles).</p> <h2>My question</h2> <p>For some reason, I immediately knew that these planes:</p> <p><a href="https://i.stack.imgur.com/6luFl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6luFl.png" alt="enter image description here" /></a></p> <p>belonged to this SLE: <span class="math-container">$$ x_1 -3x_2 +2x_3 = -2 $$</span> <span class="math-container">$$ x_1 +3x_2 -2x_3 = 5 $$</span> <span class="math-container">$$-6x_2 + 4x_3 = 3$$</span></p> <p>And it turned out to be true. In school, we proved this by determining the planes' intersecting lines and showing that they are parallel, but not identical.<br /> However, I believe that it must be possible to show the planes are arranged like this without a lot of calculation, since I immediately saw/&quot;felt&quot; that the planes described in the SLE must be arranged in the way they are in the picture (like a triangle). I could also determine the same &quot;shape&quot; on a similar question, so I do not believe that it was just coincidence.</p> <h2>What needs to be shown?</h2> <p>So we must show that the three planes described by the SLE cut each other in a way that I do not really know how to describe. They do not intersect each other perpendicularly (at least they don't have to, to be arranged in a triangle), but there is no point in which all three planes intersect.
If you were to put a line in the center of the triangle, it would be parallel to all planes.</p> <p>The three planes do not share one intersecting line as it would be in this case:</p> <p><a href="https://i.stack.imgur.com/LQ5IY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LQ5IY.png" alt="enter image description here" /></a></p> <p>(which was another drawing from the task, but is not relevant to this question except for that it has to be excluded)</p> <h2>My thoughts</h2> <p>If you were to look at the planes exactly from the direction in which the parallel line from the previous section leads, you would see something like this:</p> <p><a href="https://i.stack.imgur.com/eMj2x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eMj2x.png" alt="enter image description here" /></a></p> <p>The red arrows represent the normal of each plane (they should be perpendicular). You can see that the normals somehow are part of one (new) plane. This is already given by the manner how the planes intersect with each other (as I described before). If you now were to align your coordinate system in such a way that the plane in which the normals lie is the <span class="math-container">$x_1 x_2$</span>-plane, each normals would have an <span class="math-container">$x_3$</span> value of <span class="math-container">$0$</span>. 
If you were now to further align the coordinate axes so that the <span class="math-container">$x_1$</span>-axis is identical to one of the normals (let's just choose the bottom one), the values of the normals would be something like this:</p> <p><span class="math-container">$n_1=\begin{pmatrix} a \\ 0 \\ 0 \end{pmatrix}$</span> for the bottom normal</p> <p><span class="math-container">$n_2=\begin{pmatrix} a \\ a \\ 0 \end{pmatrix}$</span> for the upper right normal</p> <p>and <span class="math-container">$n_3=\begin{pmatrix} a \\ -a \\ 0 \end{pmatrix}$</span> for the upper left normal</p> <p>Of course, the planes do not have to be arranged in a way that the vectors line up so nicely that they are in one of the planes of our coordinate system.</p> <p>However, in the SLE, I noticed the following:</p> <p>The three normals (we can simply read off the coefficients since the equations are in coordinate form) are <span class="math-container">$n_1=\begin{pmatrix} 1 \\ -3 \\ 2 \end{pmatrix}$</span>, <span class="math-container">$n_2=\begin{pmatrix} 1 \\ 3 \\ -2 \end{pmatrix}$</span> and <span class="math-container">$n_3=\begin{pmatrix} 0 \\ -6 \\ 4 \end{pmatrix}$</span>.</p> <p>As we can see, <span class="math-container">$n_1$</span> and <span class="math-container">$n_2$</span> have the same value for <span class="math-container">$x_1$</span>, and <span class="math-container">$x_2(n_1)=-x_2(n_2)$</span>; <span class="math-container">$x_3(n_1)=-x_3(n_2)$</span></p> <p>Also, <span class="math-container">$n_3$</span> is somewhat similar in that its <span class="math-container">$x_2$</span> and <span class="math-container">$x_3$</span> values are the same as the <span class="math-container">$x_2$</span> and <span class="math-container">$x_3$</span> values of <span class="math-container">$n_1$</span>, but multiplied by the factor <span class="math-container">$2$</span>.</p> <p>I also noticed that <span class="math-container">$n_3$</span> has no <span 
class="math-container">$x_1$</span> value (or, more accurately, the value is <span class="math-container">$0$</span>), while for <span class="math-container">$n_1$</span> and <span class="math-container">$n_2$</span>, the value for <span class="math-container">$x_1$</span> is identical (both are <span class="math-container">$1$</span>).</p> <h2>Conclusion</h2> <p>I feel like I am very close to a solution, I just don't know what to do with my thoughts/approaches regarding the normals of the planes.<br /> Any help would be greatly appreciated.</p> <p><strong>How can I show that the three planes are arranged in this triangular-like shape by using their normals, i.e. without having to calculate the planes' intersection lines?</strong> (Probably we will need more than the normals, but I believe that they are the starting point.)</p> <hr /> <p><strong>Update:</strong> I posted a <a href="https://math.stackexchange.com/questions/3827387/why-are-three-vectors-linearly-dependent-when-one-of-them-is-a-combination-of-th">new question</a> that is related to this problem, but is (at least in my opinion) not the same question.</p>
twosigma
780,083
<p>If you know some linear algebra, this problem becomes easier to describe and answer. The concepts and terminology from linear algebra capture the relevant ideas well.</p> <p>As long as the <strong>null space</strong> of the system of three <strong>planes</strong> has <strong>dimension 1</strong>, the planes will form a triangle (as in your 1st picture) or intersect in a line (as in your 2nd picture). In simple terms, when I say the null space has dimension 1, I mean that if you changed the right hand side of the system to all 0’s and look at the solutions to this system (called a <strong>homogeneous</strong> system), you will get a line (through the origin).</p> <p>The idea is as follows: Each plane <span class="math-container">$ax + by + cz = d$</span> is just a translation of the plane <span class="math-container">$ax + by + cz = 0$</span>. For example, <span class="math-container">$2x + y - 3z = 4$</span> is a translation of <span class="math-container">$2x + y - 3z = 0$</span>. So if the null space has dimension 1, this means the three planes corresponding to 0's on their right sides intersect in a line. So if you translate them back to the original planes of the original system, this just corresponds to moving each plane parallel to its original position. Thus we will either form a triangle (with the three parallel lines) or remain intersecting in a single line.</p> <p>So, in conclusion, if you have a <span class="math-container">$3 \times 3$</span> system of rank 2 that corresponds to the intersection of three planes, e.g. <span class="math-container">$\left[\begin{array}{rrr|r} 1 &amp; -2 &amp; 3 &amp; 2 \\ 1 &amp; 3 &amp; 4 &amp; 3 \\ 2 &amp; 1 &amp; 7 &amp; 4\end{array}\right]$</span>, then the rank-nullity theorem tells us the null space has dimension 1. (Here by &quot;rank 2&quot; I mean that the non-augmented matrix has rank 2). 
As we reasoned geometrically above, the only possibilities for the solution set of this system are that the planes form a triangle (i.e. no solution) or that they intersect in a line (i.e. a line of solutions). To check that they form a triangle instead of intersecting in a line, you can reduce the augmented matrix. If you get an equation like <span class="math-container">$0 = 1$</span> in one of the rows then there is no solution, i.e. no point of intersection of the three planes. This is the desired triangle that you asked about. On the other hand, if you do not get a row like that, then the system has a solution, so the intersection must be a line.</p>
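<p>For the concrete system in the question, these rank claims can be checked mechanically with exact row reduction (a small sketch; the helper function below is my own, not from the post):</p>

```python
from fractions import Fraction

def matrix_rank(rows):
    """Rank via Gaussian elimination over the rationals (exact arithmetic)."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A   = [[1, -3, 2], [1, 3, -2], [0, -6, 4]]              # coefficient matrix
Aug = [[1, -3, 2, -2], [1, 3, -2, 5], [0, -6, 4, 3]]    # augmented matrix
assert matrix_rank(A) == 2    # rank 2, so the null space has dimension 1
assert matrix_rank(Aug) == 3  # inconsistent: no common point, hence a triangle
```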
1,448,062
<p>Find $\overline{\lim}\limits_{n\to\infty}\left(\frac{1}{n}-\frac{2}{n}+\frac{3}{n}-...+(-1)^{n-1}\frac{n}{n}\right)$</p> <p>$\frac{1}{n}-\frac{2}{n}+\frac{3}{n}-...+(-1)^{n-1}\frac{n}{n}=\frac{1}{n}\sum\limits_{k=1}^{n}(-1)^{k-1}k$</p> <p>How to evaluate sum $\sum\limits_{k=1}^{n}(-1)^{k-1}k$?</p>
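<p>For what it's worth, a small numerical experiment (my addition, not part of the original post) supports the closed form $\sum\limits_{k=1}^{n}(-1)^{k-1}k=(-1)^{n-1}\lceil n/2\rceil$, which one can also see by pairing consecutive terms:</p>

```python
def alt_sum(n):
    """Partial sum 1 - 2 + 3 - ... + (-1)**(n-1) * n."""
    return sum((-1) ** (k - 1) * k for k in range(1, n + 1))

# conjectured closed form: (-1)**(n-1) * ceil(n/2); note (n + 1)//2 == ceil(n/2)
assert all(alt_sum(n) == (-1) ** (n - 1) * ((n + 1) // 2) for n in range(1, 500))
# hence alt_sum(n)/n -> 1/2 along odd n, so the limit superior is 1/2
```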
true blue anil
22,388
<p>I would use David Quinn's method, but since the logic of the book answer is bothering you, let me explain.</p> <p>Leaving aside the glued together $(SY)$, there are 10 letters with 3 repeats, which can be permuted in $\dfrac{10!}{3!}$ ways.</p> <p>Now, the $(SY)$ can be inserted into any of the <strong>11 gaps (including ends)</strong> of the 10 other letters,</p> <p>hence $11*10!/3!$</p> <p>But as I said, much simpler to use the $11!/3!$ logic. </p>
3,123,681
<p>let us consider the following problem taken from a book:</p> <p><em>An appliance store purchases electric ranges from two companies. From company A, 500 ranges are purchased and 2% are defective. From company B, 850 ranges are purchased and 2% are defective. Given that a range is defective, find the probability that it came from company B</em></p> <p>so here we are assuming that the probability of selecting each company is equal, right? That means <span class="math-container">$P(A)=P(B)=\frac{1}{2}$</span>. Also, <span class="math-container">$2$</span>% defective means that the probability of selecting a defective range is equal to <span class="math-container">$0.02$</span>; for instance, in company A the number of defective ranges is <span class="math-container">$500*0.02=10$</span>, so their probability is equal to <span class="math-container">$\frac{10}{500}=0.02=2$</span>% </p> <p>we know the probability of selecting a defective range is equal to</p> <p><span class="math-container">$ \frac{1}{2} *2$</span>% + <span class="math-container">$\frac{1}{2} *2$</span>%, and the probability that a defective range came from company B will be <span class="math-container">$1/2 * 2$</span>% divided by the probability of selecting a defective range, but the book says that the answer is <span class="math-container">$0.65 $</span>. How?</p>
robjohn
13,854
<p>Note that <span class="math-container">$\arctan(x)=x+O\!\left(x^3\right)$</span>, then <span class="math-container">$$ \begin{align} \sum_{k=1}^n\arctan\left(\frac1{n+k}\right) &amp;=\sum_{k=1}^n\left[\frac1{n+k}+O\!\left(\frac1{(n+k)^3}\right)\right]\\ &amp;=\sum_{k=1}^n\frac1{1+k/n}\frac1n+O\left(\frac1{n^2}\right) \end{align} $$</span> Now it might be easier to use Riemann Sums.</p>
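<p>For reference (my addition): the resulting Riemann sum is for <span class="math-container">$\int_0^1\frac{dx}{1+x}=\log 2$</span>, which a direct numerical check supports:</p>

```python
import math

n = 100_000
s = sum(math.atan(1.0 / (n + k)) for k in range(1, n + 1))
# right-endpoint Riemann sum for the integral of 1/(1+x) on [0,1]; error O(1/n)
assert abs(s - math.log(2)) < 1e-4
```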
14,568
<p>The Poisson summation says, roughly, that summing a smooth $L^1$-function of a real variable at integral points is the same as summing its Fourier transform at integral points(after suitable normalization). <a href="http://en.wikipedia.org/wiki/Poisson_summation_formula">Here</a> is the wikipedia link.</p> <p>For many years I have wondered why this formula is true. I have seen more than one proof, I saw the overall outline, and I am sure I could understand each step if I go through them carefully. But still it wouldn't tell me anything about why on earth such a thing should be true.</p> <p>But this formula is exceedingly important in analytic number theory. For instance, in the book of Iwaniec and Kowalski, it is praised to high heavens. So I wonder what is the rationale of why such a result should be true.</p>
coudy
6,129
<p>The Poisson formula is just the Fourier series decomposition of the Dirac measure on the circle. </p> <p>It happens that the theory of distributions on the circle is far simpler than the one on the real line, since everything is compactly supported. In his book, Laurent Schwartz starts the chapter about the Fourier transform with the circle and shows that any distribution has a convergent Fourier series (VII.1.3, "Théorie des distributions"). Moreover, the space of distributions is the union of all the negative Sobolev spaces, and these spaces can be characterized in terms of the decay of the Fourier coefficients. I think that the Dirac measure is in $H^{-1}$. Anyway, the usual formula holds and we have</p> <p>$$ \delta_0(x) = \sum_n \langle \delta_0, e^{inx} \rangle e^{inx} = {1\over 2\pi}\sum_n e^{inx}, \quad x\in {\bf R}/2\pi{\bf Z}. $$ Note that with this choice of normalization, the scalar product is ${1\over 2\pi}\int_0^{2\pi} f(x)\overline{g(x)} \,dx$ so that the $e^{inx}$ are unit vectors. This gives the ${1\over 2\pi}$ on the right hand side.</p> <p>Now we compose this with the projection $\pi : {\bf R} \rightarrow {\bf R} /2\pi{\bf Z}$ to get the Poisson formula on ${\bf R}$. One must be a bit cautious because although we can always compose a function with another as soon as the domains match, it is not true that composition is always possible for distributions. </p> <p>In our particular case, the Dirac measure is the limit of an approximate identity, and so it is easy to guess what is happening just by composing the approximate identity and passing to the limit. Without surprise, the pullback of the Dirac measure at $\{0\}$ is given by the Dirac measure on the pullback of $\{0\}$, which is just the sum of all the Dirac measures on the points $2\pi{\bf Z}$.</p> <p>$$ \sum_{n\in {\bf Z}} \delta_{2\pi n}(x) = {1\over 2\pi} \sum_{n\in {\bf Z}} e^{inx}, \quad x\in {\bf R}. 
$$ We can evaluate this against a function in Schwartz space or use a convolution to obtain the original formula of Poisson. And we got the $2\pi$ at the correct locations without even thinking about it. So that's the truth.</p>
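<p>A standard numerical sanity check (my addition): applying the formula to the Gaussian family $f(x)=e^{-\pi t x^2}$, whose Fourier transform is $t^{-1/2}e^{-\pi\xi^2/t}$, turns Poisson summation into the Jacobi theta functional equation $\theta(t)=\theta(1/t)/\sqrt{t}$:</p>

```python
import math

def theta(t, N=60):
    """Truncated theta function: sum of exp(-pi * t * n^2) over |n| <= N."""
    return sum(math.exp(-math.pi * t * n * n) for n in range(-N, N + 1))

for t in (0.5, 1.0, 2.0, 3.7):
    # Poisson summation applied to f(x) = exp(-pi t x^2)
    assert abs(theta(t) - theta(1 / t) / math.sqrt(t)) < 1e-12
```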
519,224
<p>Using the Squeeze Theorem, how do I find: $$\lim_{x\to 3} (x^2-2x-3)^2\cos\left(\pi \over x-3\right)$$ I thought I knew the Squeeze Theorem, but I haven't encountered anything like this yet, so I honestly have no idea how and where to start.</p> <p>I would appreciate any help I get because I really want to be able to understand these types of questions!</p>
André Nicolas
6,312
<p><strong>Hint:</strong> The absolute value is between $0$ and $(x^2-2x-3)^2$.</p>
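<p>To see the hint numerically (my addition): since $x^2-2x-3=(x-3)(x+1)$ vanishes at $x=3$ while the cosine factor stays bounded, the product is squeezed to $0$:</p>

```python
import math

def f(x):
    # the function from the question (undefined at x = 3 itself)
    return (x * x - 2 * x - 3) ** 2 * math.cos(math.pi / (x - 3))

for h in (1e-2, 1e-4, 1e-6):
    for x in (3 - h, 3 + h):
        bound = (x * x - 2 * x - 3) ** 2      # since |cos| <= 1
        assert abs(f(x)) <= bound + 1e-15
assert abs(f(3 + 1e-6)) < 1e-10               # the squeeze forces f -> 0
```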
3,814,502
<p>I came across the following question:</p> <blockquote> <p>Show that <span class="math-container">$(a, b:a^3 = 1, b^2= 1, ba=a^2b)$</span> gives a group of order <span class="math-container">$6$</span>. Show that it is non abelian. Is it the only non abelian group of order <span class="math-container">$6$</span> up to isomorphism?</p> </blockquote> <hr /> <p>I managed to prove everything except the last statement. How could anyone prove the group is the only non-abelian group of a particular order, without knowing all the groups of that order from some worksheet?</p> <p>Does this generalize as a problem, or did our instructor just want us to learn the groups of order <span class="math-container">$6$</span>?</p> <p>Also, what does &quot;up to isomorphism&quot; mean?</p>
ccroth
793,822
<p>In general, for given <span class="math-container">$n \in \mathbb{N}$</span>, there is no simple way to figure out all non-isomorphic groups of order <span class="math-container">$n$</span> (though it's been deduced for special cases of <span class="math-container">$n$</span> and a number of 'small' <span class="math-container">$n$</span>). See more in <a href="https://math.stackexchange.com/questions/491408/given-any-n-in-mathbbn-how-many-non-isomorphic-groups-are-there">this post</a>.</p> <p>To answer your last question, 'up to isomorphism' means that the objects in question aren't really different in terms of structure, just possibly 'name'. To be more specific, let's say you have two groups <span class="math-container">$G$</span> and <span class="math-container">$H$</span> as described:</p> <p><span class="math-container">$G$</span> is the group with set <span class="math-container">$\{ (0,0), (1,0), (0,1), (1,1)\}$</span>, and with operation <span class="math-container">$+$</span> defined by coordinate-wise addition, modulo <span class="math-container">$2$</span>. So for example <span class="math-container">$(1,0) + (1,0) = (0,0)$</span>. 
Notice this group has order <span class="math-container">$4$</span>, and each non-identity element has order <span class="math-container">$2$</span>.</p> <p><span class="math-container">$H$</span> is the group with set <span class="math-container">$\{1,a,b,c\}$</span> and operation <span class="math-container">$\cdot$</span>, defined by the following multiplication table (<span class="math-container">$1$</span> is identity of course): <span class="math-container">\begin{array}{|c|c|c|c|} \hline 1 &amp; a &amp; b &amp; c \\ \hline a &amp; 1 &amp; c &amp; b \\ \hline b &amp; c &amp; 1 &amp; a \\ \hline c &amp; b &amp; a &amp; 1 \\ \hline \end{array}</span> Note that <span class="math-container">$H$</span> also has order <span class="math-container">$4$</span> with each non-identity element having order <span class="math-container">$2$</span>.</p> <p>Evidently, <span class="math-container">$G$</span> and <span class="math-container">$H$</span> have the same 'structure'. To make this more precise, we can actually construct an explicit isomorphism <span class="math-container">$\phi: G \to H$</span>. In this case, notice that <span class="math-container">$G = \langle (1,0), (0,1) \rangle$</span> and <span class="math-container">$H = \langle a,b\rangle$</span>. Then define <span class="math-container">$\phi: G\to H$</span> by: <span class="math-container">$$ (1,0) \mapsto a, \; \; \; (0,1) \mapsto b. 
$$</span> I'll leave it to you to check that <span class="math-container">$\phi$</span> is a homomorphism and that it's bijective (should be very straightforward from definitions).</p> <p>Since <span class="math-container">$\phi$</span> is an isomorphism between <span class="math-container">$G$</span> and <span class="math-container">$H$</span>, we say <span class="math-container">$G$</span> is isomorphic to <span class="math-container">$H$</span>, and in an algebraic sense there is really no difference between <span class="math-container">$G$</span> and <span class="math-container">$H$</span>, just the 'name' we gave the elements. But the relationship between the elements is identical. Both of these groups are really just <span class="math-container">$\mathbb{Z}_2 \times \mathbb{Z}_2$</span>.</p>
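<p>If it helps, the homomorphism property of <span class="math-container">$\phi$</span> can be checked mechanically over all <span class="math-container">$16$</span> pairs (a quick sketch of my own; the dictionary encodes the multiplication table above):</p>

```python
# G = Z_2 x Z_2 with componentwise addition mod 2
def g_op(u, v):
    return ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)

# H = {1, a, b, c} with the multiplication table from the answer
table = {
    ('1', '1'): '1', ('1', 'a'): 'a', ('1', 'b'): 'b', ('1', 'c'): 'c',
    ('a', '1'): 'a', ('a', 'a'): '1', ('a', 'b'): 'c', ('a', 'c'): 'b',
    ('b', '1'): 'b', ('b', 'a'): 'c', ('b', 'b'): '1', ('b', 'c'): 'a',
    ('c', '1'): 'c', ('c', 'a'): 'b', ('c', 'b'): 'a', ('c', 'c'): '1',
}

phi = {(0, 0): '1', (1, 0): 'a', (0, 1): 'b', (1, 1): 'c'}  # the map above

G = list(phi)
assert len(set(phi.values())) == 4   # phi is a bijection
# phi(u + v) = phi(u) * phi(v) for every pair, so phi is a homomorphism
assert all(phi[g_op(u, v)] == table[(phi[u], phi[v])] for u in G for v in G)
```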
3,651,173
<p>I am trying to use a generating function to find a closed-form formula for this expression: <span class="math-container">$$\sum_{n \geq 0} \frac{n^2+4n+5}{n!}$$</span> but I don't know how to start. Any suggestions or hints? Thank you</p>
Deepak
151,732
<p><span class="math-container">$$\begin{align}\sum_{n \geq 0} \frac{n^2+4n+5}{n!} &amp;= \sum_{n \geq 0} \frac{n^2}{n!} + \frac{4n}{n!} + \frac 5{n!} \\&amp;= \sum_{n \geq 1}\frac n{(n-1)!} + 4\sum_{n \geq 1}\frac 1{(n-1)!} + 5\sum_{n \geq 0}\frac 1{n!}\\&amp;=\sum_{n \geq 1}\frac{n-1+1}{(n-1)!} + 4\sum_{n \geq 1}\frac 1{(n-1)!} + 5\sum_{n \geq 0}\frac 1{n!}\\&amp;=\sum_{n \geq 2}\frac 1{(n-2)!} + \sum_{n \geq 1}\frac 1{(n-1)!} + 4\sum_{n \geq 1}\frac 1{(n-1)!} + 5\sum_{n \geq 0}\frac 1{n!}\\ &amp;= \sum_{a \geq 0}\frac 1{a!} + \sum_{b \geq 0}\frac 1{b!} + 4\sum_{c \geq 0}\frac 1{c!} + 5\sum_{d \geq 0}\frac 1{d!}\\ &amp;= e + e + 4e + 5e \\ &amp;= 11e\end{align}$$</span></p> <p>You can apply this to any convergent sum of rational expressions where the top is a polynomial and the bottom is a factorial.</p>
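<p>A quick numerical check of the closed form (my addition; 50 terms are far more than enough since the factorial tail is negligible):</p>

```python
import math

total = sum((n * n + 4 * n + 5) / math.factorial(n) for n in range(50))
assert abs(total - 11 * math.e) < 1e-9   # matches the value 11e derived above
```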
3,106,784
<p>I've recently posted another question regarding natural deduction proofs and I've definitely made some progress, but I'm now stuck with a proof which seems like it could be flawed.</p> <p><img src="https://i.stack.imgur.com/5sMPp.png" alt="The proof"></p> <p>Now as you can see, it looks like I've got it all figured out, however you can see an error is returned for incorrect use of negation introduction. Now there seems to be a contradiction in the premises on lines 4 and 5: as per lines 9 and 10, R is true and P is false. I went with P being false (line 10) which leads to a contradiction, seemingly making the proof work out. However, I could just as well have gone with R being true (line 9), which, according to line 5, would not prove my contradiction as I must prove Q.</p> <p>Am I missing something obvious here or do you think the proof is broken?</p> <p>Thank you!</p>
Frank Hubeny
312,852
<p>The justification for line 13 should be indirect proof (IP) rather than negation introduction. Here is a correct proof using what you have done and the same proof checker:</p> <p><a href="https://i.stack.imgur.com/x1qHS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x1qHS.png" alt="enter image description here"></a></p> <p>Indirect proof can be found on page 118 of <em>forallx</em>.</p> <hr> <p>Kevin Klement's JavaScript/PHP Fitch-style natural deduction proof editor and checker <a href="http://proofs.openlogicproject.org/" rel="nofollow noreferrer">http://proofs.openlogicproject.org/</a></p> <p>P. D. Magnus, Tim Button with additions by J. Robert Loftis remixed and revised by Aaron Thomas-Bolduc, Richard Zach, forallx Calgary Remix: An Introduction to Formal Logic, Winter 2018. <a href="http://forallx.openlogicproject.org/" rel="nofollow noreferrer">http://forallx.openlogicproject.org/</a></p>
1,210,194
<p>For example: I have a series <img src="https://i.stack.imgur.com/hV9QG.jpg" alt="enter image description here"></p> <p>Is there a numerical method to compute it? Thanks</p>
Lutz Lehmann
115,115
<p>Your terms are proportional to <span class="math-container">$\frac1{16·36·n^6}$</span>, accordingly the error of the n-th partial sum is about <span class="math-container">$\frac1{16·36·5·n^5}$</span>, so the first 100 terms, summed from smallest to largest, should give a valid numerical result.</p> <p>You might also try out the Epsilon algorithm of P. Wynn, &quot;On the Convergence and Stability of the Epsilon Algorithm&quot;, SIAM Journal on Numerical Analysis 3 (1) (Mar 1966), p. 91–122</p>
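<p>For reference, here is a minimal sketch of Wynn's epsilon algorithm from that paper (my own implementation; since the original series is only given as an image, the demo below uses the slowly convergent alternating harmonic series <span class="math-container">$\sum(-1)^{k+1}/k=\log 2$</span> instead):</p>

```python
import math

def wynn_epsilon(partial_sums):
    """Accelerate convergence of a sequence of partial sums (Wynn's epsilon)."""
    prev2 = [0.0] * (len(partial_sums) + 1)   # the eps_{-1} row (all zeros)
    prev1 = list(partial_sums)                # the eps_0 row
    best, k = prev1[-1], 0
    while len(prev1) > 1:
        cur = []
        for n in range(len(prev1) - 1):
            d = prev1[n + 1] - prev1[n]
            if d == 0:                        # already converged at this entry
                return prev1[n]
            cur.append(prev2[n + 1] + 1.0 / d)
        prev2, prev1 = prev1, cur
        k += 1
        if k % 2 == 0:                        # even rows estimate the limit
            best = prev1[-1]
    return best

# demo: 12 partial sums of the alternating harmonic series
s, sums = 0.0, []
for k in range(1, 13):
    s += (-1) ** (k + 1) / k
    sums.append(s)
err_plain = abs(sums[-1] - math.log(2))       # roughly 0.04
err_wynn = abs(wynn_epsilon(sums) - math.log(2))
assert err_wynn < 1e-6 < err_plain
```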
177,915
<p>I have two inequalities that I can't prove:</p> <ol> <li>$\displaystyle{n\choose i+k}\le {n\choose i}{n-i\choose k}$</li> <li>$\displaystyle{n\choose k} \le \frac{n^n}{k^k(n-k)^{n-k}}$</li> </ol> <p>What is the best way to prove them? Induction (I associate it with simple problems, but sometimes I find it difficult to use, which is worrying), or maybe a combinatorial interpretation?</p>
joriki
6,622
<p>For the first one, if you write out all the binomial coefficients and cancel identical factors on both sides, you're left with</p> <p>$$\frac1{(i+k)!}\le\frac1{i!}\frac1{k!}\;,$$</p> <p>which is clearly true.</p> <p>For the second one, you can use <a href="http://en.wikipedia.org/wiki/Stirling%27s_approximation" rel="nofollow noreferrer">Stirling's approximation</a> in the form</p> <p>$$\sqrt{2\pi}\ n^{n+1/2}\mathrm e^{-n} \le n! \le \mathrm e\ n^{n+1/2}\mathrm e^{-n}\;,$$</p> <p>which for $k(n-k)\gt0$ yields</p> <p>$$ \begin{align} \binom nk &amp;=\frac{n!}{k!(n-k)!} \\ &amp;\le \frac{\mathrm e\ n^{n+1/2}\mathrm e^{-n}}{\sqrt{2\pi}\ k^{k+1/2}\mathrm e^{-k}\sqrt{2\pi}\ (n-k)^{n-k+1/2}\mathrm e^{-(n-k)}} \\ &amp;= \frac{\mathrm e}{2\pi}\sqrt{\frac{n}{k(n-k)}}\frac{n^n}{k^k(n-k)^{n-k}} \\ &amp;\le \frac{n^n}{k^k(n-k)^{n-k}}\;. \end{align} $$</p> <p>For $k(n-k)=0$, equality holds in your inequality if we interpret $0^0$ as $1$.</p>
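<p>Both inequalities can also be spot-checked by brute force over small $n$ with exact integer arithmetic (my addition; cross-multiplying the second inequality avoids floating point, and Python's <code>0**0 == 1</code> matches the convention in the last line):</p>

```python
from math import comb

for n in range(1, 20):
    # inequality 1: C(n, i+k) <= C(n, i) * C(n-i, k)
    for i in range(n + 1):
        for k in range(n - i + 1):
            assert comb(n, i + k) <= comb(n, i) * comb(n - i, k)
    # inequality 2, cross-multiplied: C(n, k) * k^k * (n-k)^(n-k) <= n^n
    for k in range(n + 1):
        assert comb(n, k) * k ** k * (n - k) ** (n - k) <= n ** n
```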
475,151
<blockquote> <blockquote> <p>Determine the volume of $$ M:=\left\{(x,y,z)\in\mathbb{R}^3: z\in [0,2\pi],(x-\cos(z))^2+(y-\sin(z))^2\leq\frac{1}{4}\right\} $$</p> </blockquote> </blockquote> <p>My idea is to use the principle of Cavalieri, i.e. to set $$ M_z:=\left\{(x,y)\in\mathbb{R}^2: (x,y,z)\in M\right\} $$ and then calculate $$ \operatorname{vol}_3(M)=\int\limits_0^{2\pi}\operatorname{vol}_2(M_z)\, dz $$</p> <p>So I have to calculate $\operatorname{vol}_2(M_z)$.</p> <p>I set $a:=x-\cos(z)$ and $b:=y-\sin(z)$ and then the condition $a^2+b^2\leq\frac{1}{4}$ means that $$ \lvert a\rvert\leq\frac{1}{2}\Leftrightarrow \cos(z)-\frac{1}{2}\leq x\leq\cos(z)+\frac{1}{2},~~~~~\lvert b\rvert\leq\frac{1}{4}\Leftrightarrow\sin(z)-\frac{1}{2}\leq y\leq\sin(z)+\frac{1}{2}. $$</p> <p>So I used Fubini and calculated $$ \operatorname{vol}_2(M_z)=\int\limits_{\sin(z)-\frac{1}{2}}^{\sin(z)+\frac{1}{2}}\int\limits_{\cos(z)-\frac{1}{2}}^{\cos(z)+\frac{1}{2}}1\, dx\, dy=1. $$</p> <p>But then I get $$ \operatorname{vol}_3(M)=2\pi $$</p> <p>and the result should be $\frac{\pi^2}{2}$. So where is my mistake?</p>
2'5 9'2
11,123
<p>You are making a significant mistake at the point "So I used Fubini...". Your double integral is over a square, when it should be over a circle. The inner integral should have limits that depend on the outer integral's variable.</p>
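<p>For reference (my addition): each slice $M_z$ is a disk of radius $\frac12$ whose center $(\cos z,\sin z)$ does not affect its area, so each slice has area $\frac\pi4$ and Cavalieri does give the expected value:</p>

```python
import math

slice_area = math.pi * 0.5 ** 2      # area of a disk of radius 1/2
vol = 2 * math.pi * slice_area       # Cavalieri: integrate over z in [0, 2*pi]
assert abs(vol - math.pi ** 2 / 2) < 1e-12
```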
1,752,106
<p>You roll $n$ fair dice. Let X be the number of pairs of dice that sum to $7$. Write $X$ as a sum of indicator variables and find $E[X]$</p> <p>This is how I approached the problem</p> <p>Let $S$ be all the strings length n on [6].</p> <p>Let $X(S)$ be the number of ways to add up the numbers on the n dice that adds up to 7. Then we can write $X$ as $X = X_{1}+X_{2}+.....+X_{n}$</p> <p>To find the expected value we can use: $$ \sum_{s\in S} X(S)P(S)$$</p> <p>Let $X(S) = k$ (k is the numbers that show up on the $n$ dice and adds up to 7. Eg: n = 4 and numbers that show up on the dice are $4631$. $4$ and $3$ make $7$ and $6$ and $1$ make $7$, so k = 2.)</p> <p>Now I have no idea how to find the P(S). I feel like the denominator of the P(S) will be $6^{n}$, but I don't know how to find the numerator.</p> <p>Thank you.</p>
André Nicolas
6,312
<p>There are $N=\binom{n}{2}$ (unordered) pairs of dice. Label these $P_1$ up to $P_N$.</p> <p>For every pair $P_i$, define random variable $Y_i$ by $Y_i=1$ if the pair $P_i$ has sum $7$, and by $Y_i=0$ otherwise. </p> <p>Then $X=Y_1+\cdots+Y_N$, and by the linearity of expectation we have $$E(X)=E(Y_1)+\cdots+E(Y_N).$$</p> <p>Finally, we calculate $\Pr(Y_i=1)$. If we toss two dice, the probability they have sum $7$ is $\frac{1}{6}$. So $E(X)=\frac{1}{6}\binom{n}{2}$.</p>
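<p>A quick Monte Carlo check of $E(X)=\frac{1}{6}\binom{n}{2}$ (my addition; the trial count and seed are arbitrary):</p>

```python
import random
from itertools import combinations

def pairs_summing_to_seven(rolls):
    # X = number of unordered pairs of dice whose faces sum to 7
    return sum(1 for u, v in combinations(rolls, 2) if u + v == 7)

random.seed(0)
n, trials = 6, 100_000
mean = sum(pairs_summing_to_seven([random.randint(1, 6) for _ in range(n)])
           for _ in range(trials)) / trials
expected = (n * (n - 1) / 2) / 6     # C(6,2)/6 = 2.5
assert abs(mean - expected) < 0.05
```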
2,078,496
<p>Given the sequence of functions</p> <p>$f_n:[0,1]\rightarrow\mathbb{R}$</p> <p>$x\mapsto\begin{cases}2nx, &amp; x\in[0,\frac{1}{2n}] \\ -2nx+2, &amp; x\in (\frac{1}{2n},\frac{1}{n}) \\ 0, &amp; \text{otherwise} \end{cases}$</p> <hr> <p>The functions look like this:</p> <p><a href="https://i.stack.imgur.com/cE7h9.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cE7h9.jpg" alt="enter image description here"></a></p> <hr> <p>Does this sequence converge pointwise or uniformly against a function $f(x)$ as $n$ approaches infinity?</p> <p>My thoughts: It doesn't, since for $n\rightarrow\infty$ the first interval becomes $[0,0]$ and the second $(0,0)$. Since $[0,0]$ is just $0$ and $(0,0)$ is a nonexistent interval, we have two cases for $x$: $x=0$, in which case the function is $2nx$, and $x\neq 0$, in which case the function is $0$.</p> <p>Now here is the part I don't quite understand:</p> <p>For $n\rightarrow\infty$, if $x=0$ the function would be $2\cdot\infty\cdot 0$. How do I interpret that?</p> <p>Anyway, since the slope of the first function section gets higher and higher with each iteration, it must be vertically for $n\rightarrow\infty$, right?</p> <p>Am I correct with my thougts? And what about uniform convergence?</p>
qualcuno
362,866
<p>This sequence converges pointwise to the zero function. It is trivial that $f_n(0) = 0 \ \forall n \in \mathbb{N}$. If $\alpha \in (0,1]$, by the Archimedean property there is some $n$ for which $\alpha$ lies in the interval $[\frac{1}{n},1]$, and therefore for that $n$ and every larger integer, $f_n(\alpha) = 0$. This means exactly that $f_n \to 0$ pointwise.</p>
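<p>The question also asked about uniform convergence; a quick check (my addition) shows the convergence is <em>not</em> uniform, since $\sup_{x}|f_n(x)-0| = f_n(\frac{1}{2n}) = 1$ for every $n$:</p>

```python
def f(n, x):
    if 0 <= x <= 1 / (2 * n):
        return 2 * n * x
    if 1 / (2 * n) < x < 1 / n:
        return -2 * n * x + 2
    return 0.0

# pointwise: for a fixed x > 0 the bump eventually lies entirely left of x
assert all(f(n, 0.3) == 0.0 for n in range(4, 200))
# not uniform: the peak value f_n(1/(2n)) = 1 never decays
assert all(abs(f(n, 1 / (2 * n)) - 1.0) < 1e-12 for n in range(1, 200))
```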
306,732
<p>Let $R$ and $S$ be rings and $h$ and $g$ be homomorphisms from $R$ into $S$. </p> <p>Let $T=\{r| r\in R$ and $h(r)=g(r)\}$</p> <p>Prove that $T$ is a subring of $R$. </p> <p>I understand what the question is asking but I am a little confused on how to get it started. I know that I have to prove $T\neq \emptyset$; $rt\in T$ for all $r,t\in T$; and $r-t\in T$ for all $r,t\in T$ right? But I'm not really sure how I can do that. Also, I don't get what $T$ is actually being defined as. I don't want the answer just how to get started and what does $T$ actually mean? Thanks. </p>
chubbycantorset
51,426
<p>$T$ is just the set of those elements in the domain, $R$, that take the same value when mapped by the homomorphisms $g$ and $h$.</p> <p>Hope this makes it clearer.</p>
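<p>A concrete toy case may make $T$ tangible (my own example, not from the answer): take $R=\mathbb{Z}\times\mathbb{Z}$ with componentwise operations, $S=\mathbb{Z}$, and let $h$, $g$ be the two projections; then $T$ is the diagonal $\{(a,a)\}$, and the subring conditions are mechanical to check:</p>

```python
# h and g are the two projection homomorphisms from R = Z x Z onto S = Z
h = lambda p: p[0]
g = lambda p: p[1]

sample = [(a, b) for a in range(-3, 4) for b in range(-3, 4)]
T = [r for r in sample if h(r) == g(r)]      # the diagonal {(a, a)}

def sub(p, q): return (p[0] - q[0], p[1] - q[1])
def mul(p, q): return (p[0] * q[0], p[1] * q[1])

assert T                                                        # nonempty: (0,0) in T
assert all(h(sub(p, q)) == g(sub(p, q)) for p in T for q in T)  # closed under -
assert all(h(mul(p, q)) == g(mul(p, q)) for p in T for q in T)  # closed under *
```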
2,861,179
<p>Here's the question:</p> <blockquote> <p>Evaluate $\iint_{S} \boldsymbol{F} \cdot \boldsymbol{\hat{n}}$ if $\boldsymbol{F} = (x+y) \boldsymbol{\hat{i}} + x \boldsymbol{\hat{j}} +z \boldsymbol{\hat{k}}$ and $S$ is the surface of the cube bounded by the planes $x=0$,$x=1$,$y=0$, $y=1$, $z=0$ and $z=1$.</p> </blockquote> <p>Here's my attempt: Suppose the faces whose equations are $x=0$,$x=1$,$y=0$, $y=1$, $z=0$ and $z=1$ are respectively named $S_1$, $S_2$ and so on respectively and let $\boldsymbol{\hat{n}}$ denote the unit vector normal to them.</p> <p>Now on $S_1$, $\boldsymbol{F} = y \boldsymbol{\hat{j}} +z \boldsymbol{\hat{k}}$, $\boldsymbol{\hat{n}}=\boldsymbol{\hat{i}}$. Therefore $\iint_{S_1} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = \int_{0}^{1} \int_{0}^{1} y \mathrm{d}y \mathrm{d}z = \frac{1}{2}$.</p> <p>Similarly we have $\iint_{S_2} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = \frac{3}{2}$, $\iint_{S_3} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = \frac{1}{2}$, $\iint_{S_4} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = \frac{1}{2}$, $\iint_{S_5} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = 0$ and $\iint_{S_6} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = 1$.</p> <p>Hence overall we have $\iint_{S} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = 4$. But the answer on the textbook seems to be $2$. I checked everything up and there doesn't seem to be any error on my part but I was wondering how the answer doesn't match up.</p>
user
505,767
<p>In the direct calculation, pay attention to the sign of the flux: we need to choose a consistent (outward) direction for the normal.</p> <p>As an alternative, by the <a href="https://en.wikipedia.org/wiki/Divergence_theorem" rel="nofollow noreferrer"><strong>divergence theorem</strong></a> we have</p> <p>$$div\boldsymbol{F} = 2\implies \iint_{S} \boldsymbol{F} \cdot \boldsymbol{\hat{n}}\, dS=\iiint_{V} div\boldsymbol{F} \,dV=2 \iiint_{V} \,dV=2$$</p>
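<p>A quick numerical cross-check that the total is $2$ (my addition): since $\boldsymbol{F}\cdot\boldsymbol{\hat{n}}$ is linear on each face, a single midpoint evaluation per unit-area face already gives the exact surface integral:</p>

```python
def F(x, y, z):
    return (x + y, x, z)

# (face center, outward unit normal) for the six faces of the unit cube
faces = [
    ((0.0, 0.5, 0.5), (-1, 0, 0)), ((1.0, 0.5, 0.5), (1, 0, 0)),
    ((0.5, 0.0, 0.5), (0, -1, 0)), ((0.5, 1.0, 0.5), (0, 1, 0)),
    ((0.5, 0.5, 0.0), (0, 0, -1)), ((0.5, 0.5, 1.0), (0, 0, 1)),
]
flux = sum(sum(fc * nc for fc, nc in zip(F(*p), nv)) for p, nv in faces)
assert abs(flux - 2.0) < 1e-12
```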
2,861,179
<p>Here's the question:</p> <blockquote> <p>Evaluate $\iint_{S} \boldsymbol{F} \cdot \boldsymbol{\hat{n}}$ if $\boldsymbol{F} = (x+y) \boldsymbol{\hat{i}} + x \boldsymbol{\hat{j}} +z \boldsymbol{\hat{k}}$ and $S$ is the surface of the cube bounded by the planes $x=0$,$x=1$,$y=0$, $y=1$, $z=0$ and $z=1$.</p> </blockquote> <p>Here's my attempt: Suppose the faces whose equations are $x=0$,$x=1$,$y=0$, $y=1$, $z=0$ and $z=1$ are respectively named $S_1$, $S_2$ and so on respectively and let $\boldsymbol{\hat{n}}$ denote the unit vector normal to them.</p> <p>Now on $S_1$, $\boldsymbol{F} = y \boldsymbol{\hat{j}} +z \boldsymbol{\hat{k}}$, $\boldsymbol{\hat{n}}=\boldsymbol{\hat{i}}$. Therefore $\iint_{S_1} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = \int_{0}^{1} \int_{0}^{1} y \mathrm{d}y \mathrm{d}z = \frac{1}{2}$.</p> <p>Similarly we have $\iint_{S_2} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = \frac{3}{2}$, $\iint_{S_3} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = \frac{1}{2}$, $\iint_{S_4} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = \frac{1}{2}$, $\iint_{S_5} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = 0$ and $\iint_{S_6} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = 1$.</p> <p>Hence overall we have $\iint_{S} \boldsymbol{F} \cdot \boldsymbol{\hat{n}} = 4$. But the answer on the textbook seems to be $2$. I checked everything up and there doesn't seem to be any error on my part but I was wondering how the answer doesn't match up.</p>
Michael Burr
86,421
<p>The problem, is that you are not using <em>outward pointing</em> normal vectors. Every plane has two unit normal vectors, but only one of them is pointing outward. For a flux integral, we use the convention that the flux is the flow out of an object.</p> <p>In your case, the outward pointing normal vector for $S_1$ is $\langle -1,0,0\rangle$, which changes the sign of your answer. The outward pointing normal vector for $S_2$ remains $\langle 1,0,0\rangle$, so that answer doesn't change. There are two more vectors which need to swap signs, and after that, you'll get $$ \left(-\frac{1}{2}\right)+\frac{3}{2}+\left(-\frac{1}{2}\right)+\frac{1}{2}+(-0)+1=2. $$</p>
3,423,840
<p><a href="https://i.stack.imgur.com/batlm.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/batlm.jpg" alt="enter image description here"></a><br> I used the ratio test for this problem and I messed up somewhere in the algebra. Sorry for the messy writing. <a href="https://i.stack.imgur.com/hRMUF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hRMUF.png" alt="enter image description here"></a></p>
Doug M
317,176
<p>The denominator can be written as <span class="math-container">$4^nn!$</span></p> <p><span class="math-container">$n!$</span> grows faster than <span class="math-container">$x^n$</span> for all <span class="math-container">$x$</span> and <span class="math-container">$x^n$</span> grows faster than any polynomial if <span class="math-container">$|x|&gt;1$</span> and decays faster than any polynomial if <span class="math-container">$|x|&lt;1$</span></p> <p>Or by the ratio test <span class="math-container">$\lim_\limits{n\to\infty}|(\frac {n+1}{n})^2\frac {x}{4}\frac {1}{n+1}| &lt; 1$</span> for all <span class="math-container">$x$</span></p>
4,354,663
<p>I'm trying to prove that if <span class="math-container">$x$</span> is a multiple of <span class="math-container">$3$</span>, then <span class="math-container">$x + 1$</span> is not a multiple of <span class="math-container">$3$</span>. This is a rather obvious fact, but I don't think I understand the solution.</p> <blockquote> <p>Suppose that <span class="math-container">$x$</span> is a multiple of <span class="math-container">$3$</span> but, for the sake of contradiction, that <span class="math-container">$x + 1$</span> is likewise a multiple of <span class="math-container">$3$</span>. Then <span class="math-container">$x = 3k$</span> for some <span class="math-container">$k \in \mathbb{Z}$</span> and <span class="math-container">$x + 1 = 3t$</span> for some <span class="math-container">$t \in \mathbb{Z}$</span>. Then <span class="math-container">$3k + 1 = 3t$</span>, so <span class="math-container">$3t - 3k = 1$</span> and hence <span class="math-container">$3(t-k) = 1$</span>. Therefore <span class="math-container">$t - k = \frac{1}{3} \not \in \mathbb{Z}$</span>, which is a contradiction, so <span class="math-container">$x + 1$</span> is not a multiple of <span class="math-container">$3$</span>, as desired.</p> </blockquote> <p>The problem here is the deduction <span class="math-container">$t - k = \frac{1}{3}$</span>, which comes from multiplying both sides by <span class="math-container">$\frac{1}{3}$</span>, when division is not defined in <span class="math-container">$\mathbb{Z}$</span>. Is this allowed? I'm trying to find another way to prove this from the axioms.
Another possibility is to simply argue that <span class="math-container">$3 \mid 3(t-k)$</span> so <span class="math-container">$3 \mid 1$</span>, which is an absurdity, as the only divisors of <span class="math-container">$1$</span> are <span class="math-container">$\pm 1$</span>, and be done at that step.</p> <p>I'd appreciate some feedback on whether each of these two proof strategies is correct and which would be preferable in this instance.</p>
Sourav Ghosh
977,780
<p>Suppose that <span class="math-container">$x$</span> is a multiple of <span class="math-container">$3$</span> but, for the sake of contradiction, that <span class="math-container">$x+1$</span> is likewise a multiple of <span class="math-container">$3$</span>. Then <span class="math-container">$x=3k$</span> for some <span class="math-container">$k\in \Bbb{Z}$</span> and <span class="math-container">$x+1=3t$</span> for some <span class="math-container">$t\in \Bbb{Z}$</span>. Then <span class="math-container">$3k+1=3t$</span>, so</p> <p><span class="math-container">$3t-3k=1$</span> and hence <span class="math-container">$3(t-k)=1$</span>.</p> <p>Since <span class="math-container">$k, t\in \Bbb{Z}$</span>, we have <span class="math-container">$s=t-k\in\Bbb{Z}$</span> and <span class="math-container">$3s=1$</span>.</p> <p>This implies <span class="math-container">$3\mid 1$</span>, which is absurd.</p>
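<p>A brute-force check of the statement itself over a range of integers (a sanity test, not a proof):</p>

```python
# if x is a multiple of 3, then x + 1 is not: check many cases directly
for k in range(-10 ** 4, 10 ** 4):
    x = 3 * k
    assert x % 3 == 0
    assert (x + 1) % 3 != 0
```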
545,380
<p>Let $F$ be a field such that $|F|=3^{2n+1}$ and $r=3^{n+1}$. I want to find the number of $x\in F$ that satisfies the equation $x^{r+1}=1$.</p>
arctic tern
296,782
<p>Say we want to count the solutions to $x^k=e$ in a cyclic group of order $m$. We may as well write everything additively and work in $\mathbb{Z}/m\mathbb{Z}$. Writing $\overline{k}=k/\gcd(m,k)$ and $\overline{m}=m/\gcd(m,k)$,</p> <p>$$kx\equiv 0~\bmod m ~\iff~ \overline{k}x\equiv 0\bmod\overline{m} ~\iff~ x\equiv 0\bmod \overline{m}$$</p> <p>The first $\Leftrightarrow$ follows from dividing/multiplying by $\gcd(m,k)$, and the second by multiplying by the inverse of $\overline{k}$ mod $\overline{m}$ (which exists since $\gcd(\overline{m},\overline{k})=1$). Therefore, the solutions $x$ mod $m$ are precisely $\ell\overline{m}$ for $0\le \ell&lt;\gcd(m,k)$, of which there are $\gcd(m,k)$ many.</p> <p>If $F$ is a field of order $3^{2n+1}$, then $|F^\times|=3^{2n+1}-1$ and with the Euclidean algorithm we may calculate the number of solutions to $x^{3^{n+1}+1}=1$ as</p> <p>$$\begin{array}{ll} \gcd(3^{2n+1}-1,3^{n+1}+1) &amp; =\gcd\big(3^{2n+1}-1-3^n(3^{n+1}+1)),3^{n+1}+1\big) \\ &amp; = \gcd(-3^n-1,3^{n+1}+1) \\ &amp; =\gcd(3^n+1,3^{n+1}+1) \\ &amp; = \gcd\big(3^n+1,3^{n+1}+1-3(3^n+1)\big) \\ &amp; = \gcd(3^n+1,-2) \\ &amp; =2. \end{array}$$</p>
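<p>Both ingredients of this answer are easy to confirm by machine: the number of solutions to <span class="math-container">$kx\equiv 0\bmod m$</span> is <span class="math-container">$\gcd(m,k)$</span>, and the particular gcd equals <span class="math-container">$2$</span> for every <span class="math-container">$n$</span>. A quick check:</p>

```python
from math import gcd

# solutions to k*x ≡ 0 (mod m) number exactly gcd(m, k)
for m in range(1, 40):
    for k in range(1, 40):
        count = sum(1 for x in range(m) if (k * x) % m == 0)
        assert count == gcd(m, k)

# gcd(3^(2n+1) - 1, 3^(n+1) + 1) = 2, as the Euclidean computation shows
for n in range(60):
    assert gcd(3 ** (2 * n + 1) - 1, 3 ** (n + 1) + 1) == 2
```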
1,922,417
<p>This is not for the Maths part of the General GRE. This is for the GRE <a href="https://www.ets.org/gre/subject/about" rel="noreferrer">Subject Test</a> in <a href="https://www.ets.org/gre/subject/about/content/mathematics" rel="noreferrer">Maths</a>. Feel free to add or comment.</p> <ul> <li><p><a href="https://math.stackexchange.com/questions/2967070">How do I know the definition of rings or of anything on the GRE given that definitions can vary?</a></p></li> <li><p><a href="https://math.stackexchange.com/questions/2470560">What does the subject GRE measure?</a></p></li> <li><blockquote> <p>I think trying to relearn an entire undergraduate degree's worth of mathematics purely for this test would be an enormous waste of precious time and energy. You can't make this test your life,you have other priorities. I'd begin talking to graduate students who have been through the test-see what they'd recommend. - <a href="https://math.stackexchange.com/a/792069">Mathemagician1234</a></p> </blockquote></li> </ul> <hr> <p><a href="http://www.mathematicsgre.com/" rel="noreferrer">http://www.mathematicsgre.com/</a></p> <p><a href="http://www.physicsgre.com/viewtopic.php?f=13&amp;t=1078" rel="noreferrer">http://www.physicsgre.com/viewtopic.php?f=13&amp;t=1078</a></p>
operatorerror
210,391
<p>Schaum's Outline "Advanced Mathematics for Engineers and Scientists" has some excellent ODE, linear algebra, and complex analysis problems that mimic the ones I have seen on practice exams. </p>
275,937
<p>Choose a random polynomial $P\in\mathbb{Z}[x]$ of degree $n$ and coefficients $\leq n$ and $\geq-n$. </p> <p>Let $r_1,\ldots,r_n$ be the roots of $P$ and consider $$G=\operatorname{Gal}(\mathbb{Q}(r_1,\ldots,r_n)/\mathbb{Q})$$</p> <p>What is the probability, as $n\to\infty,$ that $G$ is solvable? (I assume 0.) Who first proved this?</p>
MCT
92,774
<p>Yes, the probability is zero since it equals the symmetric group with probability one:</p> <p>If $Q_d(N)$ denotes the set of degree $d$ polynomials with coefficients $|a_i|\le N$ with Galois group not equal to $S_d$, then $$ |Q_d(N)|\ll N^{d-1/2}\log N $$</p> <p>This bound is sufficient to prove your result. This was proven by Patrick X. Gallagher.</p> <p>EDIT: Gallagher proved the following stronger result:</p> <p>$$|Q_d(N)| \ll N^{d-1/2} \log^{1 - \gamma} N$$</p> <p>where $\gamma \sim (2 \pi d)^{-1/2}$. <a href="http://www.ams.org/mathscinet/pdf/332694.pdf?arg3=&amp;co4=AND&amp;co5=AND&amp;co6=AND&amp;co7=AND&amp;dr=all&amp;pg4=AUCN&amp;pg5=TI&amp;pg6=PC&amp;pg7=ALLF&amp;pg8=ET&amp;review_format=html&amp;s4=Gallagher%2C%20P%20%2A&amp;s5=&amp;s6=&amp;s7=&amp;s8=All&amp;vfpref=html&amp;yearRangeFirst=&amp;yearRangeSecond=&amp;yrop=eq&amp;r=22" rel="nofollow">Source</a></p>
1,371,075
<p>$$3^x = 3 - x$$</p> <p>I have to prove that only one solution exists, and then find that one solution.</p> <p>My approach has been the following:</p> <p>$$\log 3^x = \log (3 - x)$$</p> <p>$$x\log 3 = \log (3 - x)$$</p> <p>$$\log 3 = \frac{\log (3 - x)}{x}$$</p> <p>And this is where I get stuck. Any help will be greatly appreciated, thanks in advance.</p>
Olivier Oloa
118,798
<p>You may consider the function given by $$ f(x)=3^x-(3-x),\quad x \in \mathbb{R}. $$ We have $$ f'(x)=3^x \cdot\ln3+1&gt;0,\quad x \in \mathbb{R}, $$ Thus the function is <strong>strictly increasing</strong> on $\mathbb{R}$.</p> <p>We have $$ \begin{align} f(0)&amp;=1-(3-0)=-2&lt;0\\\\ f(1)&amp;=3-(3-1)=1&gt;0 \end{align} $$ then the <strong>unique solution</strong> $x_0$ is such that $x_0 \in (0,1)$.</p> <p>You may observe that $$ 3^x=3-x $$ <strong>is equivalent to</strong> $$ (3-x)\ln 3 \times e^{(3-x)\ln 3}=3^3 \ln 3 $$ then, using a solution of $Xe^X=3^3 \ln 3$ in terms of the <a href="https://en.wikipedia.org/wiki/Lambert_W_function" rel="nofollow">Lambert function</a>, we get</p> <blockquote> <p>$$ x_0=3-\frac{W(27\ln 3)}{\ln 3}=\color{red}{0.741551813...} $$</p> </blockquote>
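<p>Since <span class="math-container">$f$</span> is strictly increasing with a sign change on <span class="math-container">$(0,1)$</span>, plain bisection recovers the root without the Lambert function; a minimal sketch:</p>

```python
def f(x):
    # f(x) = 3^x - (3 - x), strictly increasing, with f(0) < 0 < f(1)
    return 3 ** x - (3 - x)

lo, hi = 0.0, 1.0
assert f(lo) < 0 < f(hi)
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2

assert abs(3 ** root - (3 - root)) < 1e-12  # satisfies the equation
assert abs(root - 0.741551813) < 1e-6       # matches the Lambert-W value
```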
2,909,036
<p>I'm reading a book and there is some strange proof (strange for me) of the theorem that within each interval, no matter how small, there are rational points. Proof: we need only take a denominator $n$ large enough so that the interval $[0, \frac{1}{n}]$ is smaller than the interval $[A, B]$ in question; then at least one of the fractions $\frac{m}{n}$ must lie within the interval.</p> <p>I can't understand how they figured out that $[A, B]$ contains at least one $\frac{m}{n}$ and how it's connected to the interval $[0, \frac{1}{n}]$.</p>
Arthur
15,500
<p>Given a natural number $n$, if no fraction with denominator $n$ is contained in $[A,B]$, then there is a fraction $\frac mn$ which is the largest one which is <em>below</em> $[A,B]$ (meaning $\frac mn&lt; A$), and in addition we have that the next fraction with denominator $n$, which is $\frac{m+1}n$, must be <em>above</em> $[A,B]$ (meaning $B&lt;\frac{m+1}n$).</p> <p>Now take those two inequalities and add them together, left side with left side and right side with right side, to get $$ \frac mn+B&lt;\frac{m+1}n +A\\ B-A&lt;\frac{m+1}n-\frac mn=\frac1n $$ meaning that $[A,B]$ is smaller than $[0,\frac1n]$.</p> <p>Reversing this (using the so-called <em>contrapositive</em>), it means that if $[0,\frac1n]$ is no larger than $[A,B]$, then there must be a fraction of the form $\frac mn$ in the interval $[A,B]$.</p>
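<p>The contrapositive is even constructive: whenever $B-A\ge\frac1n$, the fraction $\frac mn$ with $m=\lceil nA\rceil$ lies in $[A,B]$. A small check with exact rational arithmetic (the helper name is my own):</p>

```python
from fractions import Fraction
from math import ceil

def fraction_in_interval(a, b, n):
    # assumes b - a >= 1/n; then m = ceil(n*a) satisfies a <= m/n <= b
    assert b - a >= Fraction(1, n)
    m = ceil(a * n)
    return Fraction(m, n)

cases = [(Fraction(3, 10), Fraction(2, 5), 10),
         (Fraction(-7, 3), Fraction(-2), 3),
         (Fraction(141, 100), Fraction(143, 100), 50)]
for a, b, n in cases:
    q = fraction_in_interval(a, b, n)
    assert a <= q <= b
```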
2,984,534
<p>I've gone ahead and split up <span class="math-container">$4000$</span> into <span class="math-container">$2^{5} 5^{3}$</span> and solved each solution separately - as in applied Hensel's lemma for mod 2 and mod 5 solutions separately, I just don't understand how I would combine these solutions.</p> <p>For mod <span class="math-container">$2^{5}$</span>, after lifting the final solution I got was <span class="math-container">$x = 31$</span> (set of solutions would be <span class="math-container">$x = 31 + 32n$</span>)</p> <p>For mod <span class="math-container">$5^{3}$</span>, after lifting the final solution I got was <span class="math-container">$x = 124$</span> (set of solutions would be <span class="math-container">$x = 124 + 125n$</span>) </p> <p>How can I combine what I got to get the final answer? Because these do not work for mod 4000</p>
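<p>A sketch of the standard Chinese Remainder Theorem construction that combines the two residues, since <span class="math-container">$\gcd(32,125)=1$</span> (the helper name is mine; <code>pow(m, -1, n)</code> needs Python 3.8+):</p>

```python
def crt(r1, m1, r2, m2):
    # combine x ≡ r1 (mod m1), x ≡ r2 (mod m2) for coprime m1, m2:
    # write x = r1 + m1*t and solve m1*t ≡ r2 - r1 (mod m2)
    t = (r2 - r1) * pow(m1, -1, m2) % m2
    return (r1 + m1 * t) % (m1 * m2)

x = crt(31, 32, 124, 125)
assert x % 32 == 31 and x % 125 == 124
# here x comes out as 3999, i.e. x ≡ -1 (mod 4000)
assert x == 3999
```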
Sri-Amirthan Theivendran
302,692
<p>Suppose that <span class="math-container">$f$</span> is not constant everywhere. In particular, if <span class="math-container">$f(x)=c$</span> for <span class="math-container">$x\in A$</span>, then there exists <span class="math-container">$x_0\notin A$</span> such that <span class="math-container">$f(x_0)\neq c$</span>. Since the codomain is Hausdorff, we can find neighbourhoods <span class="math-container">$U$</span> of <span class="math-container">$c$</span> and <span class="math-container">$V$</span> of <span class="math-container">$f(x_0)$</span> which are disjoint. So then <span class="math-container">$f^{-1} (U)$</span> and <span class="math-container">$f^{-1} (V)$</span> are disjoint. </p> <p>Now we use the fact that <span class="math-container">$A$</span> is dense in <span class="math-container">$X$</span>. We observe that <span class="math-container">$f^{-1} (V)$</span> is an open set containing <span class="math-container">$x_0$</span> and hence contains a point of <span class="math-container">$A$</span>, which contradicts the fact that <span class="math-container">$f^{-1} (U)$</span> and <span class="math-container">$f^{-1} (V)$</span> are disjoint. </p> <p>Note that the fact that <span class="math-container">$Y$</span> is the real line is unnecessary. We only needed that it is Hausdorff.</p>
3,315,622
<p>Given <span class="math-container">$\textbf{a}, \textbf{h} \in \mathbb{R}^n$</span> and a function <span class="math-container">$f$</span> from some subset of <span class="math-container">$\mathbb{R}^n$</span> to <span class="math-container">$\mathbb{R}^m$</span>:</p> <p>if <span class="math-container">$\lim_{\textbf{h} \rightarrow \textbf{0}} f(\textbf{a+h}) = \textbf{0}$</span> and <span class="math-container">$\textbf{h}=t \textbf{v}$</span>, where <span class="math-container">$t$</span> is just a real number, then</p> <p><span class="math-container">$\lim_{t \rightarrow 0} f(\textbf{a}+t\textbf{v}) = \textbf{0}$</span>.</p> <p>How can I turn this into a formal proof?</p>
Wassilis
693,418
<p>If <span class="math-container">$t\to 0$</span> then <span class="math-container">$tv\to 0$</span>.</p> <p>To do this formally, try to rewrite the increment parameter <span class="math-container">$h$</span> as a function <span class="math-container">$h(t)$</span> of the real parameter <span class="math-container">$t$</span> (which you basically already did by setting <span class="math-container">$h=tv$</span>).</p> <p>The function <span class="math-container">$h(t)$</span> is then continuous at each <span class="math-container">$t\in \mathbb{R}$</span>, meaning that for arbitrary real <span class="math-container">$t_{0}$</span> the limit of <span class="math-container">$h$</span> at <span class="math-container">$t_{0}$</span> exists and equals the function value at <span class="math-container">$t_{0}$</span>.</p> <p>In particular, for <span class="math-container">$t_{0}=0$</span>, <span class="math-container">$$\lim_{t\to 0}h(t)=h(0)\ \text{and we know }\ h(0)=0$$</span></p> <p>Then proceed: <span class="math-container">$$0=\lim_{h\to 0}f(a+h)=\lim_{t\to 0}f(a+h(t))=\lim_{t\to 0}f(a+tv)$$</span></p>
105,413
<p>I know I'm going to make a poor showing, but I really can't understand this:</p> <p><code>a</code> is an expression whose FullForm is</p> <pre><code>Power[Plus[Subscript[u,x],Times[Complex[0,-1],Subscript[u,y]],Times[Complex[0,1],Subscript[v,x]],Subscript[v,y]],2] </code></pre> <p>Why does the following code return <code>True</code> instead of the expected substitution?</p> <pre><code>b = Replace[a, {Complex[a_, b_] -&gt; a + H b}] a == b </code></pre> <p>I have also tried </p> <pre><code>b = Replace[a, {Complex[a_, b_] -&gt; a + H b},levSpec] a == b </code></pre> <p>(even with Infinity included) but without success.</p> <p>Thanks in advance!</p>
Jason B.
9,490
<p>Assuming this is what you are going for,</p> <p><a href="https://i.stack.imgur.com/KrEgF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KrEgF.png" alt="enter image description here"></a></p> <p>You can get there two ways,</p> <pre><code>Replace[a, {I x_ :&gt; H x, -I x_ :&gt; - H x}, Infinity] </code></pre> <p>or</p> <pre><code>a /. {I x_ :&gt; H x, -I x_ :&gt; - H x} </code></pre>
1,560,192
<ol> <li><p>Compute the determinant of \begin{pmatrix} 1 &amp; 2 &amp; 3 &amp; ...&amp; n\\ -1 &amp; 0 &amp; 3 &amp; ...&amp; n\\ -1 &amp; -2 &amp; 0 &amp; ...&amp; n\\ ...&amp; ...&amp; ...&amp; ...&amp; \\ -1 &amp; -2 &amp; -3 &amp; ...&amp; n \end{pmatrix} After some elementary row operations, one can reach \begin{pmatrix} 1 &amp; 2 &amp; 3 &amp; ...&amp; n\\ -2 &amp; -2 &amp; 0 &amp; ...&amp; 0\\ 0&amp; -2&amp; -3&amp; ...&amp;0&amp; \\ ...&amp; ...&amp; ...&amp; ...&amp; \\ 0 &amp; 0 &amp; 0 &amp; 1-n &amp; n \end{pmatrix} but I am not sure how to proceed.</p></li> <li><p>Why is the determinant of the following matrix divisible by $6$? \begin{pmatrix} 2^0 &amp; 2^1 &amp; 2^2 \\ 4^0 &amp; 4^1 &amp; 4^2\\ 5^0 &amp; 5^1 &amp; 5^2 \end{pmatrix} So I know that I have to show that its determinant is divisible by $2$ and $3$, or equivalently that the sum of its digits is divisible by $3$ and its last digit is even. But I am not sure how to start. </p></li> </ol> <p>Thank you.</p>
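<p>For the second matrix, a quick numerical confirmation (using the fact, easily verified, that it is the Vandermonde matrix in $2, 4, 5$, so its determinant is $(4-2)(5-2)(5-4)=6$):</p>

```python
def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

m = [[2 ** 0, 2 ** 1, 2 ** 2],
     [4 ** 0, 4 ** 1, 4 ** 2],
     [5 ** 0, 5 ** 1, 5 ** 2]]
d = det3(m)
assert d == (4 - 2) * (5 - 2) * (5 - 4) == 6  # Vandermonde in 2, 4, 5
assert d % 6 == 0
```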
bartgol
33,868
<p>Yes. The derivative of $x^2$ tells you how fast the function $x^2$ increases. The second derivative of $x^2$ tells you how fast the derivative of $x^2$ increases. The second derivative is 2, meaning that the speed at which the rate of growth of $x^2$ increases is fixed. Therefore, the difference of the difference of consecutive squares is constant.</p> <p>For your second question, let $d_n = (n+1)^2 - n^2 = 2n+1$. Then</p> <p>$$ d_{n+1}-d_n = 2(n+1)+1 - (2n+1) = 2n+2+1-2n-1 = 2. $$</p>
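<p>Both claims are easy to confirm directly in code:</p>

```python
squares = [n * n for n in range(50)]

# first differences d_n = (n+1)^2 - n^2 = 2n + 1
first_diff = [b - a for a, b in zip(squares, squares[1:])]
assert first_diff == [2 * n + 1 for n in range(49)]

# second differences are constantly 2, matching the second derivative of x^2
second_diff = [b - a for a, b in zip(first_diff, first_diff[1:])]
assert all(d == 2 for d in second_diff)
```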
2,242,865
<p>I have seen proofs using the delta-epsilon definition of continuity, and they make perfect sense, but I have not found one proof using the sequential definition of continuity. </p> <p>For example, when given functions $f$ and $g$ that are continuous on $[a,b]$, prove that the function $h=f+g$ is also continuous on $[a,b]$. I also have not seen a proof of $fg$ being continuous on $[a,b]$.</p> <p>Does a proof exist that does not use the delta-epsilon definition? If so, is it a less concrete proof when using the sequential definition?</p>
Mark Twain
60,908
<p>Yes. </p> <p>Let $f$, $g \colon D \to \mathbb{R}$, where $D$ is some nonempty subset of $\mathbb{R}$. Let $a \in D$. Suppose both of $f$ and $g$ are continuous at $a$. We can show that $f+g$ is continuous at $a$ using the sequential criterion. </p> <p>Let $(x_n)$ be a sequence in $D$ such that $\lim x_n = a$. Since $f$ and $g$ are continuous at $a$, $\lim f(x_n) = f(a)$ and $\lim g(x_n) = g(a)$. Then $$ \lim {}(f+g)(x_n) = \lim {}[f(x_n) + g(x_n)] = \lim f(x_n) + \lim g(x_n) = f(a) + g(a) = (f+g)(a). $$ Since $(x_n)$ was arbitrary, we have shown that for every sequence $(x_n)$ in $D$ that converges to $a$, the sequence $( (f+g)(x_n) )$ converges to $(f+g)(a)$. Therefore, $f+g$ is continuous at $a$. </p> <p>A similar argument shows that $fg$ is continuous at $a$ (replace the sums above with products).</p>
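<p>A concrete numerical illustration of the argument (with sample functions of my own choosing):</p>

```python
f = lambda x: x * x          # continuous at a
g = lambda x: 3.0 * x + 1.0  # continuous at a
h = lambda x: f(x) + g(x)

a = 2.0
xs = [a + 1.0 / n for n in range(1, 10 ** 5)]  # a sequence with x_n -> a

# (f+g)(x_n) -> (f+g)(a): the tail values get arbitrarily close to h(a)
tail_gap = max(abs(h(x) - h(a)) for x in xs[-100:])
assert tail_gap < 1e-3
```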
252,049
<p>I have seen on the nLab that we can <a href="https://ncatlab.org/nlab/show/loop+space+object" rel="nofollow">view the loop space as a particular homotopy pullback</a>, and that it is even the way a "loop space object" is defined in general (when it exists). </p> <p>Can someone give me some intuition of why it is so? To which construction in geometry does this correspond (up to homotopy)? </p> <p>Any reference would also be welcome.</p>
Emily
130,058
<p>This can be seen from the following general formula<span class="math-container">$^1$</span> for homotopy pullbacks of topological spaces.</p> <p><strong>Proposition.</strong> Let <span class="math-container">$$X\xrightarrow{f}Z\xleftarrow{g}Y$$</span> be a diagram of topological spaces. Then <span class="math-container">$$\mathrm{holim}\left(X\xrightarrow{f}Z\xleftarrow{g}Y\right)=\frac{X\times Z^{[0,1]}\times Y}{\left((x,\alpha\colon [0,1]\rightarrow Z,y)\ \middle|\ f(x)\sim\alpha(0)\text{ and }g(y)\sim\alpha(1)\right)}.$$</span></p> <p><strong>Corollary.</strong> Let <span class="math-container">$(X,x_0)$</span> be a pointed space. Then the loop space of <span class="math-container">$X$</span> is the homotopy pullback of the diagram <span class="math-container">$$*\hookrightarrow X\hookleftarrow*,$$</span> where the inclusion maps send the one-point space <span class="math-container">$*$</span> to <span class="math-container">$x_0$</span>.</p> <p><em>Proof.</em> We have <span class="math-container">$$\mathrm{holim}(*\hookrightarrow X\hookleftarrow*)=\frac{*\times\mathrm{Hom}\left([0,1],X\right)\times *}{\left((*,f\colon[0,1]\rightarrow X,*)\ \middle|\ f(0)\sim f(1)\sim x_0\right)}\cong\frac{\mathcal{L}X}{\left(f\in\mathcal{L}X\ \middle|\ f(0)\sim f(1)\sim x_0\right)}=\Omega X.$$</span> (Here <span class="math-container">$\mathcal{L}X$</span> is the unbased loop space of <span class="math-container">$X$</span>.)</p> <hr> <p><span class="math-container">$^1$</span>See Section 2.1 of <a href="http://www.staff.science.uu.nl/~meier007/TMF-Lecture.pdf" rel="nofollow noreferrer">these notes</a> by Lennart Meier.</p>
3,334,910
<p>Question. <span class="math-container">$\square ABCD$</span> is a square with <span class="math-container">$AB = 10$</span>. Circle <span class="math-container">$O$</span> is inscribed in <span class="math-container">$\square ABCD$</span>. The center of the arc is <span class="math-container">$A$</span>. What is the area of the colored region?</p> <p><a href="https://i.stack.imgur.com/1N2eT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1N2eT.png" alt="enter image description here"></a></p> <p>Explanation: This problem can be solved with integrals. But this problem is from an elementary math book, which means that an integral is not a good solution for it.</p> <p>Our teacher solved this problem with integrals, but she was not able to solve it with elementary math. She asked my school's students about this problem, but none of us were able to solve it. How can we solve the problem using only elementary math?</p> <p>Korean elementary math does not cover:</p> <ol> <li>irrational and imaginary numbers</li> <li>functions</li> <li>of course, trigonometric functions (such as $\sin$, $\cos$, $\tan$, etc.)</li> <li>inscribed angles</li> <li>non-linear equations</li> <li>similarity and congruence</li> </ol> <p>If you are unsure whether you can use certain formulas or something else, leave a comment and I'll answer you.</p> <p><strong>Edit 1.</strong> Actually, we do not study <span class="math-container">$\pi$</span> in elementary school. We just approximate it as <span class="math-container">$3.14$</span>. But it's not so important, so let's use <span class="math-container">$\pi$</span>.</p>
N. S.
9,176
<p><strong>Hint</strong><a href="https://i.stack.imgur.com/52jNe.png" rel="noreferrer"><img src="https://i.stack.imgur.com/52jNe.png" alt="Pic"></a></p> <p>The two yellow areas are equal.</p> <p>You can easily calculate <span class="math-container">$$2 \mbox{yellow}+\mbox{blue}+\mbox{purple}=\mbox{Area} ABCD-\frac{1}{4} \mbox{Area} C(A, AB)$$</span></p> <p>Next <span class="math-container">$$4\mbox{purple}=\mbox{Area} ABCD- \mbox{Area} C(0, OA)$$</span></p> <p>Finally <span class="math-container">$$\mbox{Blue}=\mbox{Area} (sector \, OEF) -\mbox{Area} (sector \, AEF) +\mbox{Area}(AEOF)$$</span></p>
1,456,591
<p>It is known that if $$|a+b|=|a|+|b|$$ then we can find the solutions by simply observing that the equation is equivalent to the inequality $$a b \geq 0$$</p> <p>My question is: if $|a+b+c|=|a|+|b|+|c|$, then what would be the '3-variable version' of the above?</p>
Umberto P.
67,536
<p>The third degree (or higher) version is "all nonnegative" or "all nonpositive".</p>
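<p>A brute-force check of this characterization over small integer triples:</p>

```python
# |a+b+c| = |a|+|b|+|c| holds exactly when all three are nonnegative
# or all three are nonpositive
for a in range(-4, 5):
    for b in range(-4, 5):
        for c in range(-4, 5):
            equality = abs(a + b + c) == abs(a) + abs(b) + abs(c)
            same_sign = (a >= 0 and b >= 0 and c >= 0) or \
                        (a <= 0 and b <= 0 and c <= 0)
            assert equality == same_sign
```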
203,009
<p>Given $k,c \in \mathbb{N}$, let $P(k,c)$ be the minimum $n$ such that no matter how we color the edges of the complete graph $K_n$ with $c$ colors, there is always a monochromatic path of length $k$.</p> <p>What are the best known upper and lower bounds for $P(k,c)$?</p>
Tony Huynh
2,233
<p>For $c=2$, it is a theorem of <a href="http://www.renyi.hu/~gyarfas/Cikkek/01_GerencserGyarfas_OnRamseyTypeProblems.pdf">Gerencsér and Gyárfás</a> that $P(k,2)=\lfloor (3k-2)/2 \rfloor$. </p> <p>For $c=3$, <a href="http://web.cs.wpi.edu/~gsarkozy/Cikkek/27.pdf">Gyárfás, Ruszinkó, Sárközy and Szemerédi</a> proved that for sufficiently large $k$, $P(k,3)=2k-1$ if $k$ is odd, and $2k-2$ if $k$ is even.</p> <p>The exact asymptotics are unknown for larger values of $c$ (as far as I know). </p>
Jan Kyncl
24,076
<p>For the multi-color Ramsey numbers of even cycles, <a href="https://dx.doi.org/10.1002/jgt.20572" rel="nofollow">Luczak, Simonovits and Skokan</a> proved that $R(C_k;c)\le ck+o(k)$ for fixed number $c$ of colors and $k\rightarrow \infty$.</p> <p>For odd cycles, <a href="http://dx.doi.org/10.1016/S0095-8956(73)80005-X" rel="nofollow">Bondy and Erdos</a> claim that $R(C_k;c)\le (c+2)!k$.</p> <p>Both upper bounds apply also for paths with $k$ vertices since $P(k-1,c) \le R(C_k;c)$.</p>
1,572,065
<p>I came across this question in Probability &amp; Statistics by Rohatgi, Exercise 1.6.9.</p> <p>Let $P(A|B) = P(A|B\cap C)P(C) + P(A|B\cap C^c)P(C^c)$ with $P(A|B\cap C) \neq P(A|B)$.</p> <p>If all three events have non-zero probability, prove that $B$ and $C$ are independent.</p> <p>After several attempts, I couldn't solve this seemingly innocuous-looking question. Any idea on how to proceed will be appreciated. </p>
ki3i
202,257
<p>Since ${\bf P}[B], {\bf P}[A\,\vert\,B]&gt;0$ then, using the given equation, note that we can relate two ways of computing $\,{\bf P}[A \cap B]$: </p> <p>$${\bf P}[A \,\vert\, B \cap C ]\,{\bf P}[ B \cap C ] + {\bf P}[ A \,\vert\, B \cap C^c ]\,{\bf P}[ B \cap C^c ] \\= {\bf P}[A \cap B ] \\= {\bf P}[A \,\vert\, B \cap C ]\,{\bf P}[B]{\bf P}[C] + {\bf P}[ A \,\vert\, B \cap C^c ]\,{\bf P}[B]\,{\bf P}[C^c]\tag{1}$$</p> <p>$(1)$ holds non-trivially so that we can subtract the r.h.s from the l.h.s, resulting in</p> <p>$$\bigg({\bf P}[ B \cap C ]-{\bf P}[ B ]\,{\bf P}[ C ]\bigg)\bigg({\bf P}[A \,\vert\, B \cap C ] - {\bf P}[ A \,\vert\, B \cap C^c ]\bigg)=0\tag{2}$$</p> <p>Above, for the factorisation, we have used the fact that ${\bf P}[C] &gt; 0$. Now, we know that ${\bf P}[A \,\vert\, B \cap C ] \neq {\bf P}[ A \,\vert\, B \cap C^c ]$ since, otherwise, the given equation in the question implies $ {\bf P}[A \,\vert\, B ] = {\bf P}[ A \,\vert\, B \cap C ]$; a contradiction. Hence, equation $(2)$ implies</p> <p>$${\bf P}[ B \cap C ]={\bf P}[ B ]\,{\bf P}[ C ]\,.\tag{3}$$</p> <p>All other probabilities of relevant event combinations - such as ${\bf P}[B\cap C^c] = {\bf P}[B]{\bf P}[C^c]$ - follow by using $(3)$ with basic relationships between probabilities of non-null events and sub-events. </p>
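<p>The key factorisation step, equation $(2)$, is an algebraic identity in the eight underlying probabilities; it can be sanity-checked on random joint distributions (a numerical check, not a proof):</p>

```python
import random

random.seed(0)
for _ in range(200):
    # a random joint distribution over the 8 atoms of (A, B, C)
    w = [random.random() + 0.01 for _ in range(8)]
    total = sum(w)
    p = {(a, b, c): w[4 * a + 2 * b + c] / total
         for a in (0, 1) for b in (0, 1) for c in (0, 1)}

    def prob(pred):
        return sum(v for k, v in p.items() if pred(*k))

    pB = prob(lambda a, b, c: b)
    pC = prob(lambda a, b, c: c)
    pBC = prob(lambda a, b, c: b and c)
    pBCc = prob(lambda a, b, c: b and not c)
    pA_BC = prob(lambda a, b, c: a and b and c) / pBC
    pA_BCc = prob(lambda a, b, c: a and b and not c) / pBCc

    # subtracting the two expansions of P[A ∩ B] factors as in (2)
    lhs = pA_BC * (pBC - pB * pC) + pA_BCc * (pBCc - pB * (1 - pC))
    rhs = (pBC - pB * pC) * (pA_BC - pA_BCc)
    assert abs(lhs - rhs) < 1e-12
```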
341,728
<p>Trivially, for any Lie algebra (LA) g, g':=[g,g] is an ideal. What's wrong with the following argument?</p> <p>Let g be a simple LA; then g'=g by definition of a simple LA. But [g,g]=g seems to be an alternative way of characterizing a semisimple LA. Furthermore, for sl(2) it doesn't seem to be true that z=[x,y] for any x,y,z ∈ g. Thus, the implication I'm inclined to make must be flawed, but I can't see my mistake.</p> <p>I'm following J. Fuchs' book. Thanks in advance.</p>
Derek Holt
2,820
<p>There is a general computer algorithm for such calculations (Schreier-Sims), but it is tedious to do it by hand.</p> <p>Carrying on with DonAntonio's calculations, you might notice that $(ab)^2 = (1,4)(3,5)(2,6)(7,8)$ commutes with both $a$ and $b$, so the subgroup $H = \langle (ab)^2 \rangle$ of $G$ has order 2 and is in the centre of $G$.</p> <p>Now, if we let $\overline{a}$ and $\overline{b}$ be the images $aH$ and $bH$ of $a,b$ in the quotient group $G/H$, we have $\overline{a}^3 = \overline{b}^3 = (\overline{ab})^2 = 1$.</p> <p>So, $G/H$ is a quotient group of the group $\langle x,y \mid x^3=y^3=(xy)^2=1 \rangle$, which is well known to be a presentation of the alternating group $A_4$ of order 12. So we have $|G/H| \le 12$ and hence $|G| \le 24$.</p>
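<p>The quoted fact that $\langle x,y \mid x^3=y^3=(xy)^2=1 \rangle$ presents a group of order $12$ can be sanity-checked in the concrete direction: $A_4$ is generated by two $3$-cycles satisfying those relations (this only verifies that $A_4$ is a quotient of the presented group, not the full presentation claim):</p>

```python
def compose(p, q):
    # apply p first, then q; permutations of {0,1,2,3} as tuples of images
    return tuple(q[p[i]] for i in range(4))

e = (0, 1, 2, 3)
x = (1, 2, 0, 3)  # the 3-cycle (0 1 2)
y = (0, 2, 3, 1)  # the 3-cycle (1 2 3)

# the relations x^3 = y^3 = (xy)^2 = 1 hold
assert compose(x, compose(x, x)) == e
assert compose(y, compose(y, y)) == e
xy = compose(x, y)
assert compose(xy, xy) == e

# the closure of {x, y} under composition has order 12, i.e. <x, y> = A4
group = {e}
frontier = {x, y}
while frontier:
    group |= frontier
    frontier = {compose(g, s) for g in group for s in (x, y)} - group
assert len(group) == 12
```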