1,111,952
<p><strong>My Try:</strong> </p> <p>We substitute $y = x^{2/3}$. Therefore, $x = y^{3/2}$ and $\frac{dx}{dy} = \frac{3}{2}y^{1/2}$, i.e. $dx = \frac{3}{2}\sqrt{y}\,dy$.</p> <p>Hence, the integral after substitution is: </p> <p>$$ \frac{3}{2} \int_0^\infty \sin(y)\sqrt{y} dy$$</p> <p>Let's look at:</p> <p>$$\int_0^\infty \left|\sin(y)\sqrt{y} \right| dy = \sum_{n=0}^\infty \int_{n\pi}^{(n+1)\pi}\left|\sin(y)\right| \sqrt{y} dy \ge \sum_{n=0}^\infty \sqrt{n\pi} \int_{n\pi}^{(n+1)\pi}\left|\sin(y)\right| dy \\= \sum_{n=1}^\infty \sqrt{n\pi} \int_{n\pi}^{(n+1)\pi}\sqrt{\sin(y)^2}\,dy$$</p>
mickep
97,236
<p>While typing, I noticed that @GEdgar already noted this, but here it goes anyway.</p> <p>Integrating by parts, we find that $$ \begin{align} \int \sin(x^{2/3})\,dx &amp;=\int -\frac{3}{2}x^{1/3}\frac{d}{dx}\cos(x^{2/3})\,dx \\ &amp;= -\frac{3}{2}x^{1/3}\cos(x^{2/3})+\int \frac{1}{2}x^{-2/3}\cos(x^{2/3})\,dx. \end{align} $$ Next, show that $$ \int_0^{+\infty} x^{-2/3}\cos(x^{2/3})\,dx $$ converges.</p>
4,547,918
<p>Given the torus and the point <span class="math-container">$p \in M$</span> corresponding to the parameters <span class="math-container">$s=\frac{\pi }{4}$</span> and <span class="math-container">$t=\frac{\pi }{3}$</span>, determine the Cartesian equation of the tangent plane to <span class="math-container">$M$</span> at <span class="math-container">$p$</span>.</p> <p><span class="math-container">$\begin{cases} x=\left(3+\sqrt{2}\cos\left(s\right)\right)\cos\left(t\right) \\ y=\left(3+\sqrt{2}\cos\left(s\right)\right)\sin\left(t\right) \\ z=\sqrt{2}\sin\left(s\right) \end{cases}$</span></p> <p>Could someone give me a hint or help me? I'm not sure whether I should first convert the given parametric equations to a Cartesian equation.</p>
electrical apprentice
912,523
<p><span class="math-container">$$\begin{align} \int {\cos^2 x \over 1+\sin x } \mathrm{d}x&amp;= \int {1-\sin^2 x \over 1 + \sin x } \mathrm{d}x \\&amp;=\int {(1+\sin x)(1-\sin x) \over (1+\sin x) } \mathrm{d}x\\ &amp;=\int (1-\sin x) \mathrm d x\\&amp;=x+\cos x + \mathrm{const} \end{align}$$</span></p>
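A quick numerical spot-check of this simplification (an editorial addition, not part of the original answer): the derivative of the proposed antiderivative $x+\cos x$ should agree with the integrand wherever $\sin x \neq -1$.

```python
import math

# Spot-check: d/dx (x + cos x) = 1 - sin x should equal the integrand
# cos^2(x) / (1 + sin x) wherever sin x != -1.
samples = [-1.2, 0.0, 0.5, 1.3, 2.7]
max_err = max(
    abs(math.cos(x) ** 2 / (1 + math.sin(x)) - (1 - math.sin(x)))
    for x in samples
)
```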
3,444,214
<p>Let <span class="math-container">$\zeta = e^{2\pi i / 7}$</span>. I know the minimal polynomial of <span class="math-container">$\zeta$</span> over <span class="math-container">$\mathbb{Q}$</span> is <span class="math-container">$\sum_{i=0}^{6} x^{i}$</span>. But what is <span class="math-container">$[ \mathbb{Q}(\zeta) : \mathbb{Q}(\zeta) \cap \mathbb{R}]$</span>? I saw that it is <span class="math-container">$2$</span> but I can't find the minimal polynomial of degree <span class="math-container">$2$</span>. We have that <span class="math-container">$a \zeta ^{2} + b \zeta + c = 0$</span>, where <span class="math-container">$a,b,c \in \mathbb{Q}(\zeta) \cap \mathbb{R}$</span>. I know that <span class="math-container">$\cos(2 \pi n / 7) \in \mathbb{Q}(\zeta) \cap \mathbb{R}$</span>, and I want <span class="math-container">$a \cdot \sin(4 \pi / 7) + b \cdot \sin ( 2 \pi /7 ) = 0$</span> for some <span class="math-container">$a,b \in \mathbb{Q}(\zeta) \cap \mathbb{R}$</span>. But I can't seem to find it.</p>
Arthur
15,500
<p>You want a quadratic polynomial with real coefficients where <span class="math-container">$\zeta$</span> is a root. That means that its complex conjugate <span class="math-container">$\zeta^6$</span> must be the other root. Thus Vieta's formulas tells you that the polynomial you're after is <span class="math-container">$$ x^2-(\zeta+\zeta^6)x+\zeta\cdot\zeta^6\\ =x^2-2\cos(2\pi/7)x+1 $$</span></p>
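A numerical check of this answer (an editorial addition, not part of the original post): plugging $\zeta = e^{2\pi i/7}$ and its conjugate $\zeta^6$ into the quadratic should give zero up to rounding.

```python
import cmath
import math

zeta = cmath.exp(2j * math.pi / 7)
# zeta and its complex conjugate zeta^6 should both be roots of
# x^2 - 2*cos(2*pi/7)*x + 1.
residual = max(
    abs(z**2 - 2 * math.cos(2 * math.pi / 7) * z + 1)
    for z in (zeta, zeta**6)
)
```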
492,407
<p>I was searching for methods on how to calculate the area of a polygon and stumbled across this: <a href="http://www.mathopenref.com/coordpolygonarea.html" rel="nofollow noreferrer">http://www.mathopenref.com/coordpolygonarea.html</a>. $$ \mathop{area} = \left\lvert\frac{(x_1y_2 - y_1x_2) + (x_2y_3 - y_2x_3) + \cdots + (x_ny_1 - y_nx_1)}{2} \right\rvert $$ where $x_1,\ldots,x_n$ are the $x$-coordinates and $y_1,\ldots,y_n$ are the $y$-coordinates of the vertices. It does work and all, yet I do not fully understand why this works.</p> <p>As far as I can tell you take the area of each triangle between two points. Basically you repeat the formula $\frac{1}{2} \cdot h \cdot w$ for each of the triangles and take the sum of them? Yet doesn't this leave a "square" in the center of the polygon that is not taken into account? (Apparently not, since the correct answer is produced, yet I cannot understand how.)</p> <p>If someone could explain this some more to me I would be grateful.</p>
Oleg567
47,993
<p><img src="https://i.stack.imgur.com/wTYmV.png" alt="enter image description here"></p> <p>Let $O$ be the origin. Denote the "signed area" of triangle $OAB$ by $~~S_{OAB}= \dfrac{1}{2}(x_Ay_B-x_By_A)$.<br> It can be derived from the cross product of the vectors $\vec{OA}, \vec{OB}$.</p> <p>If the way $AB$ is $\circlearrowleft$ (if the polar angle of $A$ is less than the polar angle of $B$), then $S_{OAB}&gt;0$;<br> if the way $AB$ is $\circlearrowright$ (if the polar angle of $A$ is greater than the polar angle of $B$), then $S_{OAB}&lt;0$. </p> <p>Now, for each edge $A_jA_{j+1}$ ($j=1,2,\ldots,n$; $A_{n+1}\equiv A_1$) of the polygon $A_1A_2\ldots A_n$ we can build $2$ vectors: $\vec{OA_j}$ and $\vec{OA_{j+1}}$.</p> <p>And the "signed area" of the polygon (whose sign depends on the numbering direction) is $$ S_{A_1A_2...A_n} = \sum_{j=1}^n S_{OA_jA_{j+1}} = \sum_{j=1}^n \dfrac{1}{2}(x_jy_{j+1}-x_{j+1}y_j) = \dfrac{1}{2}\sum_{j=1}^n (x_jy_{j+1}-x_{j+1}y_j). $$</p> <p>When a positive term is added, the area increases; when a negative one is added, it decreases.</p> <p>We will mark "positive" area as blue, and "negative" as red.</p> <p>Illustration:<br> <img src="https://i.stack.imgur.com/SdUVW.gif" alt="enter image description here"></p>
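The signed-area sum above translates directly into code. This is an editorial sketch (the helper name `signed_area` is made up for illustration):

```python
def signed_area(vertices):
    """Shoelace sum: half the sum of x_j*y_{j+1} - x_{j+1}*y_j over all
    edges, with the wrap-around convention A_{n+1} = A_1.  Positive for
    counterclockwise vertex order, negative for clockwise."""
    n = len(vertices)
    total = 0.0
    for j in range(n):
        xj, yj = vertices[j]
        xk, yk = vertices[(j + 1) % n]  # wrap around to the first vertex
        total += xj * yk - xk * yj
    return total / 2

square_ccw = [(0, 0), (1, 0), (1, 1), (0, 1)]  # unit square, counterclockwise
```

Reversing the vertex order flips the sign, matching the orientation discussion above.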
514,912
<p>I have what may seem a very trivial question, but how it is answered may affect how a proof of mine is structured. It pertains to formatting and convention. When 'recursively' defining a function does it make sense to use quantifiers? </p> <p>For example would:</p> <p>$ 5 \in R $</p> <p>If $ r \in R $, then $ \forall s \in \mathbb Z, r + s \in R $</p> <p>be an acceptable substitute for:</p> <p>$ 5 \in R $</p> <p>If $ r \in R, $ then $ r + 1 \in R $ and $ r - 1 \in R $</p> <p>Or would using quantifiers in the former definition violate some fundamental rule about how recursive functions are supposed to be considered?</p> <p>Anyways, thanks for any help!</p> <p>Thanks, </p> <p>Tuba09</p>
W.W.
98,791
<p>For $$\sqrt{6 + \sqrt{6 + \sqrt{6 +\dots}}}:$$</p> <p>Let \begin{align*} x &amp;= \text{the given expression}\\ &amp;= \sqrt{6 + \sqrt{6 + \sqrt{6 + \dots}}} \end{align*}</p> <p>Since the nesting is infinite, the expression under the first square root contains $x$ itself, so (assuming the limit exists) we can write $x = \sqrt{6 + x}$, or $$x^2 - x - 6 = 0.$$</p> <p>Therefore, $x = 3$ or $x = -2$.</p> <p>Since the answer cannot be negative, reject $x = -2$. Therefore, $x = 3$.</p> <p>Do the same thing for the first series.</p>
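Numerically, the nested radical can be built up by iterating $x_{n+1} = \sqrt{6 + x_n}$; the iterates settle on the fixed point $3$ (an editorial sketch, not part of the original answer):

```python
import math

# Iterate x_{n+1} = sqrt(6 + x_n).  Near the fixed point the map is a
# contraction, so the iterates converge to the nonnegative root of
# x^2 - x - 6 = 0, namely x = 3.
x = 0.0
for _ in range(60):
    x = math.sqrt(6 + x)
```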
3,521,525
<p>How many three-digit numbers are there whose digits in the hundreds place and ones place are the same? (Assume that a nonzero digit is in the hundreds place.) </p> <p>Please try to simplify the solution so that a child under 14 may understand this. Also, it would help if you included a formula that <em>may</em> be reusable in similar circumstances. </p>
Ethan Bottomley-Mason
657,832
<p>The question is rather simple, so here is some advice on how to solve it. Consider the number of ways to get your statement to be true. How many ways can you get the ones and hundreds places to be the same? What are the possibilities for the tens place? To find the total number of possibilities, multiply the independent possibilities.</p>
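Following this advice, a brute-force count (editorial addition) confirms the product of independent choices: 9 options for the shared hundreds/ones digit times 10 options for the tens digit.

```python
# Count three-digit numbers whose hundreds digit equals the ones digit.
count = sum(1 for n in range(100, 1000) if n // 100 == n % 10)
# Expect 9 (shared digit 1-9) * 10 (tens digit 0-9) = 90.
```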
3,005,100
<p>Given the following formula <span class="math-container">$$ \sum^n_{k=0}\frac{(-1)^k}{k+x}\binom{n}{k}\,. $$</span> How can I show that this is equal to <span class="math-container">$$ \frac{n!}{x(x+1)\cdots(x+n)}\,? $$</span></p>
Felix Marin
85,343
<p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span></p> <blockquote> <p>With <span class="math-container">$\ds{\Re\pars{x} &gt; 0}$</span>:</p> </blockquote> <p><span class="math-container">\begin{align} &amp;\bbox[10px,#ffd]{\sum_{k = 0}^{n}{\pars{-1}^{k} \over k + x}{n \choose k}} = \sum_{k = 0}^{n}\pars{-1}^{k} \pars{\int_{0}^{1}t^{k + x - 1}\,\dd t}{n \choose k} \\[5mm] = &amp;\ \int_{0}^{1}t^{x - 1}\sum_{k = 0}^{n} {n \choose k}\pars{-t}^{k}\,\dd t \\[5mm] = &amp;\ \int_{0}^{1}t^{x - 1}\,\pars{1 - t}^{n}\,\dd t = \mrm{B}\pars{x,n + 1}\ \pars{~\mrm{B}:\ Beta\ Function~} \\[5mm] = &amp;\ {\Gamma\pars{x}\Gamma\pars{n + 1} \over \Gamma\pars{x + n + 1}} \phantom{= \mrm{B}\pars{x,n + 1}\,\,\,\,\,\,\,\,\,\,\,\,} \pars{~\Gamma:\ Gamma\ Function~} \\[5mm] = &amp;\ {n! \over \Gamma\pars{x + n + 1}/\Gamma\pars{x}} = {n! \over x^{\overline{n +1}}} = \bbx{n! \over x\pars{x + 1}\cdots\pars{x + n}} \end{align}</span></p>
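A floating-point cross-check of the identity for a few small $n$ and non-integer $x$ (editorial addition; the helper names are made up):

```python
from math import comb, factorial

def lhs(n, x):
    # sum_{k=0}^{n} (-1)^k / (k + x) * C(n, k)
    return sum((-1) ** k / (k + x) * comb(n, k) for k in range(n + 1))

def rhs(n, x):
    # n! / (x (x+1) ... (x+n))
    denom = 1.0
    for i in range(n + 1):
        denom *= x + i
    return factorial(n) / denom

err = max(abs(lhs(n, x) - rhs(n, x)) for n in range(6) for x in (0.5, 1.7, 3.25))
```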
99,378
<p>The following equation in $\mathbb{C}$:</p> <p>$4z^2+8|z|^2-3=0$</p> <p>is not algebraic and has 4 solutions: $\pm\frac{1}{2}$ and $\pm i\frac{\sqrt{3}}{2}$. The Solve function in Mathematica only returns the 2 real values:</p> <pre><code>Solve[4 z^2 + 8 Abs[z]^2 - 3 == 0, Complexes] (* {{z -&gt; -(1/2)}, {z -&gt; 1/2}} *) </code></pre> <p>What am I missing?</p>
Suba Thomas
5,998
<pre><code>Solve[4 z^2 + 8 Abs[z]^2 - 3 == 0 &amp;&amp; z \[Element] Complexes, z] </code></pre> <blockquote> <p>{{z -> -(1/2)}, {z -> 1/2}, {z -> -((I Sqrt[3])/2)}, {z -> ( I Sqrt[3])/2}}</p> </blockquote>
99,378
<p>The following equation in $\mathbb{C}$:</p> <p>$4z^2+8|z|^2-3=0$</p> <p>is not algebraic and has 4 solutions: $\pm\frac{1}{2}$ and $\pm i\frac{\sqrt{3}}{2}$. The Solve function in Mathematica only returns the 2 real values:</p> <pre><code>Solve[4 z^2 + 8 Abs[z]^2 - 3 == 0, Complexes] (* {{z -&gt; -(1/2)}, {z -&gt; 1/2}} *) </code></pre> <p>What am I missing?</p>
Daniel Lichtblau
51
<p>A pedestrian approach, overkill in this case, is to separate into explicit real and imaginary parts both for the expression(s) and variable(s).</p> <pre><code>expr = 4 z^2 + 8 Abs[z]^2 - 3; {re, im} = ComplexExpand[{Re[expr], Im[expr]}, z] /. {Re[z] -&gt; rez, Im[z] -&gt; imz} solns = Solve[{re, im} == 0]; rez + I*imz /. solns (* Out[380]= {-3 + 4 imz^2 + 12 rez^2, 8 imz rez} Out[382]= {-(1/2), 1/2, -((I Sqrt[3])/2), (I Sqrt[3])/2} *) </code></pre>
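As an independent check (editorial addition, in Python rather than Mathematica), the four claimed solutions can be substituted back into $4z^2 + 8|z|^2 - 3$:

```python
# The four roots: +-1/2 (real) and +-i*sqrt(3)/2 (purely imaginary).
roots = [0.5, -0.5, 0.5j * 3**0.5, -0.5j * 3**0.5]
residual = max(abs(4 * z**2 + 8 * abs(z) ** 2 - 3) for z in roots)
```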
2,498,359
<p>This is a basic probability question. </p> <p>Persons A and B decide to arrive and meet sometime between 7 and 8 pm. Whoever arrives first will wait for ten minutes for the other person. If the other person doesn't turn up inside ten minutes then the person waiting will leave. What is the probability that they will meet? I am assuming uniform distribution for arrival time between 7 pm and 8 pm for both of them. </p>
Ravenex
442,239
<p>This problem was already explained here:</p> <p><a href="https://math.stackexchange.com/questions/1279873/basic-probability-romeo-and-juliette-meet-for-a-date?rq=1">Basic probability: Romeo and Juliette meet for a date.</a></p> <p>It just needs to be adjusted to work in 6ths instead of 4ths (10/60 of an hour vs. 15/60 of an hour).</p> <p>The area of the region where they don't meet is 25/36 of the square, so the probability that they meet is $1-\frac{25}{36}=\frac{11}{36}$.</p>
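A seeded Monte Carlo simulation (editorial addition) agrees with the geometric answer $1 - (5/6)^2 = 11/36$:

```python
import random

random.seed(0)
N = 200_000
# They meet iff the arrival times (uniform on [0, 1] hours) differ by
# at most 10 minutes = 1/6 hour.
meet = sum(abs(random.random() - random.random()) <= 1 / 6 for _ in range(N))
p_hat = meet / N  # exact value: 1 - (5/6)^2 = 11/36, about 0.3056
```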
1,232,363
<p>I have to solve a probability problem and it says that we take a random sample of size 10. But I don't understand the concept (I'm on my first course on probability). </p> <p>Suppose that we have a box with 100 balls and I take a random sample of size 10.</p> <p>Is it a random sample of size 10 if</p> <ul> <li>I take AT THE SAME TIME 10 balls? or</li> <li>I take one ball, then return it and repeat this process ten times?</li> </ul> <p>Thanks for your help.</p>
Surb
154,545
<p>The approach of Mark Viola is probably the best for $2\times 2$ matrices. Note however that this generalizes to $n\times n$.</p> <p>Indeed, the <a href="https://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem#Positive_matrices" rel="nofollow noreferrer">Perron-Frobenius theorem</a> states (in particular) that every matrix with positive entries has a unique positive eigenvector (up to scaling). Furthermore, the associated eigenvalue is the spectral radius of the matrix.</p>
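For intuition (editorial sketch with a made-up example matrix), power iteration on a small matrix with positive entries converges to that unique positive eigenvector, and the associated eigenvalue is the spectral radius:

```python
# Power iteration on a 2x2 matrix with positive entries.  By
# Perron-Frobenius, the limit is the unique positive eigenvector (up to
# scaling), and its eigenvalue is the spectral radius.
A = [[2.0, 1.0], [1.0, 3.0]]
v = [1.0, 1.0]
for _ in range(100):
    w = [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]
    norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
    v = [w[0] / norm, w[1] / norm]
# Approximate eigenvalue from the first component of A @ v.
lam = (A[0][0] * v[0] + A[0][1] * v[1]) / v[0]
```

Here the spectral radius is the larger root of $\lambda^2 - 5\lambda + 5 = 0$, i.e. $(5+\sqrt 5)/2$.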
2,965,989
<p>Why <span class="math-container">$$p(y)=\int_0^\infty x\delta (y-x)dx=y\ \ ?$$</span></p> <p>For me, <span class="math-container">$$p(y)=\int_0^\infty x\delta (y-x)dx=\int_{\{y\}}xdx=0.$$</span></p> <p>If it were written <span class="math-container">$\int_0^\infty xd\delta _y$</span>, then I would agree with the answer. But here it's written with <span class="math-container">$\delta(y-x)$</span>, which I interpret as <span class="math-container">$\boldsymbol 1_{x=y}(x)$</span>, so that the integral reads <span class="math-container">$\int_0^\infty x\boldsymbol 1_{x=y}(x)dx$</span>.</p>
Chris
430,789
<p>By the definition of the Dirac delta function,</p> <p><span class="math-container">$$ \int_{0}^\infty f(x) \delta(y-x) dx = f(y) $$</span></p> <p>for all <span class="math-container">$y \in [0, \infty)$</span> and any suitably well-behaved function <span class="math-container">$f$</span>.</p>
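The sifting property can be illustrated numerically (editorial sketch) by replacing $\delta$ with a narrow Gaussian $\delta_\varepsilon$ and integrating with a Riemann sum; `sift` is a hypothetical helper name, not standard API:

```python
import math

def sift(f, y, eps=0.01, lo=0.0, hi=10.0, n=100_000):
    """Midpoint Riemann sum of int_lo^hi f(x) * delta_eps(y - x) dx, where
    delta_eps is a normalized Gaussian of width eps approximating delta."""
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        total += f(x) * math.exp(-((y - x) ** 2) / (2 * eps**2)) * dx
    return total / (eps * math.sqrt(2 * math.pi))

approx = sift(lambda x: x, 2.0)  # should be close to f(2) = 2
```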
3,700,299
<p>I want to show that <span class="math-container">$\int\limits_{-\infty}^\infty e^{-\pi x^2}dx = 1$</span>.</p> <p>By definition <span class="math-container">$$\int\limits_{-\infty}^\infty e^{-\pi x^2}dx = \lim\limits_{t\to\infty}\int\limits_{-t}^t e^{-\pi x^2}dx$$</span> and since the integrand <span class="math-container">$e^{-\pi x^2}$</span> is an even function <span class="math-container">$$\int\limits_{-\infty}^\infty e^{-\pi x^2}dx = \lim\limits_{t\to\infty}\int\limits_{-t}^t e^{-\pi x^2}dx = 2\lim\limits_{t\to\infty}\int\limits_0^t e^{-\pi x^2}dx$$</span> i.e. we can equivalently show that <span class="math-container">$\lim\limits_{t\to\infty}\int\limits_0^t e^{-\pi x^2}dx=\frac{1}{2}$</span>.</p> <p>Since the antiderivative of <span class="math-container">$e^{-x^2}$</span> is given by the <a href="https://en.wikipedia.org/wiki/Error_function" rel="nofollow noreferrer">error function</a> we can't straightforwardly evaluate the integral, so I tried to use the power series expansion, hoping to be able to see that the resulting series will converge to <span class="math-container">$\frac{1}{2}$</span>:</p> <p><span class="math-container">$$|\int\limits_0^t e^{-\pi x^2}dx-\frac{1}{2}| = |\int\limits_0^t\sum\limits_{n=0}^\infty\frac{\pi^n\cdot x^{2n}}{n!}dx - \frac{1}{2}| = |\sum\limits_{n=0}^\infty\frac{\pi^n\cdot t^{2n+1}}{n!\cdot(n+1)}-\frac{1}{2}|$$</span></p> <p>However, I'm in doubt that it converges, and a quick check in Wolfram Mathematica shows indeed that with <span class="math-container">$t\to\infty$</span> the resulting series will diverge.</p> <p>What am I doing wrong? Can anybody help me with a proof for this problem? Any help will be really appreciated.</p>
John Hughes
114,036
<p>How about doing a substitution, <span class="math-container">$x = \frac{1}{\sqrt{\pi}} u; dx = \frac{1}{\sqrt{\pi}} du$</span>? </p> <p>That'll convert your integral into an integral of <span class="math-container">$\exp(-u^2)$</span>, which is just the error function, whose value "at infinity" is well known. </p>
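For reassurance (editorial addition), a plain midpoint Riemann sum already shows the value is $1$ to high accuracy:

```python
import math

# Midpoint Riemann sum of exp(-pi x^2) over [-10, 10]; the neglected
# tails are smaller than exp(-100 pi), far below float precision.
n = 200_000
a, b = -10.0, 10.0
dx = (b - a) / n
total = dx * sum(math.exp(-math.pi * (a + (i + 0.5) * dx) ** 2) for i in range(n))
```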
3,366,781
<blockquote> <p>Let <span class="math-container">$(S, +, \cdot, 0)$</span> and <span class="math-container">$(S', \oplus, \otimes, 0')$</span> be two semirings. Then <span class="math-container">$f: S\rightarrow S'$</span> is said to be a homomorphism if for all <span class="math-container">$a, b\in S,$</span> <span class="math-container">$f(a+b)=f(a)\oplus f(b)$</span>, <span class="math-container">$f(a.b)=f(a)\otimes f(b)$</span> and <span class="math-container">$f(0)=0'.$</span></p> </blockquote> <p>Let <span class="math-container">$\Bbb Z$</span> be the set of non-negative integers and <span class="math-container">$P(\Bbb Z)$</span> be its power set. Then <span class="math-container">$(\Bbb Z, +, \cdot, 0 )$</span> and <span class="math-container">$ (P(\Bbb Z), \cup, \cap, \emptyset)$</span> are semirings, where the operations on <span class="math-container">$\Bbb Z$</span> are usual addition and multiplication, while the operations on <span class="math-container">$P(\Bbb Z)$</span> are usual set union and intersection.</p> <blockquote> <p>Now, I wish to define a map <span class="math-container">$\phi: \Bbb Z\rightarrow P(\Bbb Z)$</span> such that <span class="math-container">$\phi $</span> is a homomorphism. Is no such homomorphism possible? If possible, how should <span class="math-container">$\phi$</span> be defined?</p> </blockquote> <p><strong>Edited:</strong> Also, see a related question: <a href="https://mathoverflow.net/questions/342038/define-a-homomorphism-of-a-set-of-graphs-to-its-power-set" rel="nofollow noreferrer">https://mathoverflow.net/questions/342038/define-a-homomorphism-of-a-set-of-graphs-to-its-power-set</a></p>
kccu
255,727
<p>If you negate the definition of continuity, you get: "There exists <span class="math-container">$\epsilon&gt;0$</span> such that for all <span class="math-container">$\delta&gt;0$</span>, there exist <span class="math-container">$x,x' \in [a,b]$</span> such that <span class="math-container">$|x-x'|&lt;\delta$</span> and <span class="math-container">$|f(x)-f(x')|\geq \epsilon$</span>."</p> <p>So in particular, for any choice of <span class="math-container">$\delta = 1/n$</span>, there exist <span class="math-container">$x,x' \in [a,b]$</span> such that <span class="math-container">$|x-x'|&lt;\delta$</span> and <span class="math-container">$|f(x)-f(x')| \geq \epsilon$</span>. You can choose to call these <span class="math-container">$x_n,x_n'$</span> instead to indicate that they were chosen to go along with <span class="math-container">$\delta = 1/n$</span>.</p> <p>Technically you need to invoke the <a href="https://en.wikipedia.org/wiki/Axiom_of_choice" rel="nofollow noreferrer">Axiom of Choice</a> to choose <span class="math-container">$x_n,x_n'$</span> for <em>all</em> <span class="math-container">$\delta=1/n$</span> simultaneously. Typically this is not explicitly stated in analysis, although it is used quite frequently.</p>
3,908,955
<p>Is the given series convergent or divergent? Give a reason. Show details.</p> <p><span class="math-container">$$\sum_{n=2}^{\infty} \frac{(-i)^n}{\ln n}$$</span></p> <p>So maybe I'll try using the ratio test?</p> <p>So the series converges if <span class="math-container">$$\left| \frac{z_{n+1}}{z_n} \right| &lt; 1$$</span></p> <p>So I have that <span class="math-container">$$\frac{z_{n+1}}{z_n} = \frac{(-i)^{n+1}}{\ln (n + 1)} \cdot \frac{\ln n}{(-i)^n}$$</span></p> <p>So I think the logarithms cancel, right, so all we're left with is a <span class="math-container">$-i$</span> in the numerator? Is that right? Is that less than <span class="math-container">$1$</span>, so does it converge?</p>
Community
-1
<p>We put <span class="math-container">$f(x) = x^2 +x+1$</span>, <span class="math-container">$l=7$</span>, <span class="math-container">$a=2$</span>.</p> <p>We show that for every <span class="math-container">$\delta&gt;0$</span> there is an <span class="math-container">$\alpha&gt;0$</span> such that <span class="math-container">$|x-a|&lt;\alpha$</span> <span class="math-container">$\Rightarrow$</span> <span class="math-container">$|f(x) - l|&lt;\delta$</span>.</p> <p><span class="math-container">$|f(x) - l|= |x^2 +x+1-7|=|x^2 +x-6|=|x-2||x+3|$</span></p> <p>Suppose <span class="math-container">$x\in [\frac{3}{2}, \frac{5}{2}]$</span>, i.e. <span class="math-container">$|x-2|\leq\frac{1}{2}$</span>.</p> <p><span class="math-container">$\Rightarrow $$\frac{9}{2}\leq x+3\leq\frac{11}{2}$</span></p> <p><span class="math-container">$\Rightarrow $$|x+3|\leq \frac {11}{2} $</span></p> <p><span class="math-container">$\Rightarrow $$|x-2||x+3|\leq \frac{11}{2} |x-2|$</span></p> <p>So if in addition</p> <p><span class="math-container">$|x-2|&lt;\frac{2} {11}\delta$</span></p> <p>then:</p> <p><span class="math-container">$|f(x)-l|\leq\frac{11}{2} |x-2|&lt;\delta$</span></p> <p>We put <span class="math-container">$\alpha=\min\left(\frac{1}{2},\frac{2\delta} {11}\right)$</span>.</p> <p>Finally: by the definition of limit we have proved <span class="math-container">$\lim_{x\to 2} f(x) =7$</span>.</p>
870,030
<p>Q: Prove that the relation given by $a\sim b\Leftrightarrow a-b\in\mathbb{Z}$ is a congruence relation on the additive group $\mathbb{Q}$.</p> <p>A: Maybe... <ul> <li>$a\sim a\Leftrightarrow a-a=0\in \mathbb{Z}$ &#10003; <li>$a\sim b\Leftrightarrow a-b\in \mathbb{Z}$. $a\in \mathbb{Z}\Rightarrow -a\in \mathbb{Z}$ and $-b\in\mathbb{Z}\Rightarrow b\in \mathbb{Z}$, yielding $a-b+(-2a+2b)\in\mathbb{Z}\Leftrightarrow -a+b\in \mathbb{Z}\Leftrightarrow b\sim a.$ &#10003; <li>$a\sim b$ and $b\sim c$ so $a-b\in \mathbb{Z}$ and $b-c\in \mathbb{Z}$ so $a-b+b-c\in\mathbb{Z}\Leftrightarrow a-c\in\mathbb{Z}\Leftrightarrow a\sim c.$ &#10003; <li>$a_1\sim a_2$ and $b_1\sim b_2$. So $(a_1-a_2)(b_1-b_2)\in \mathbb{Z}\Leftrightarrow a_1b_1-a_1b_2-a_2b_1+a_2b_2\in\mathbb{Z}\Leftrightarrow a_1b_1+a_2b_2\in\mathbb{Z}$<br/>$\Leftrightarrow a_1b_1\sim a_2b_2$ &#10003; </ul> </p> <p>The last bullet shows that the equivalence relation is a congruence relation on $\mathbb{Q}$.</p>
Nick
132,027
<p>Your proof that $a \sim b$ implies $b \sim a$ is a bit shaky. It is not necessarily true that $a \in \Bbb{Z}$ and $b \in \Bbb{Z}$ (let $a = 1/2$, $b = -1/2$, for example). You can get this more easily by simply noting that $b-a = -(a-b)$. Your proofs of reflexivity and transitivity are fine.</p> <p>For the last property, you actually need to show that given $a_1 \sim a_2$, $b_1 \sim b_2$, we have $a_1 + b_1 \sim a_2 + b_2$. In other words, we must show that $a_2+b_2 - (a_1+b_1)$ is an integer. Hopefully this is clear. If not, try rewriting it as $a_2 - a_1 + b_2 - b_1$.</p>
1,314,219
<p>Is there any formula for finding the last digit of a sum of factorials, such as $1! + 2! + \cdots + n!$? How should I approach these types of questions? Thanks in advance.</p>
Deepak
151,732
<p>First of all, when you see this sort of seemingly intractable problem, don't despair. There's usually a very simple "trick" that makes the problem trivial.</p> <p>In this case, you have to realise two things:</p> <p>1) only the sum of last digits contributes to the last digit of the final sum.</p> <p>2) factorials of larger numbers have a lot of zeroes at the end.</p> <p>So your problem reduces to deciding the final term you have to consider. Luckily this is a very easy problem. Because:</p> <p>$5! = 120$</p> <p>$6! = 720$</p> <p>and so forth, every factorial after that ending with a zero.</p> <p>So you only have to consider the sum $1! + 2! + 3! + 4!$.</p> <p>Even that's simplified by recognising that $3!$ ends with a $6$ and $4!$ with a $4$, so they will sum up to give $0$ as the last digit.</p> <p>Turns out all you have to consider is $1! + 2!$, which is just $3$.</p> <p>I wanted to put an exclamation point at the end of the last line to emphasise how easy the whole thing was, but decided not to because it might look like a factorial! :)</p>
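The whole argument can be confirmed by brute force (editorial addition), summing many factorials directly:

```python
# Last digit of 1! + 2! + ... + 100!.  Every term from 5! onward ends
# in 0, so only 1! + 2! + 3! + 4! = 33 affects the last digit.
total = 0
fact = 1
for k in range(1, 101):
    fact *= k
    total += fact
last_digit = total % 10
```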
4,069,499
<p>If we let <span class="math-container">$x = 0$</span>.</p> <p><span class="math-container">\begin{align*} 3(0+7)-y(2(0)+9) \\ 21-9y \\ \end{align*}</span></p> <p>Then <span class="math-container">$9y$</span> should always equal <span class="math-container">$21$</span>? Solving for <span class="math-container">$y$</span> finds <span class="math-container">$\frac{7}{3}$</span>.</p> <p>But <span class="math-container">$3(x+7)-\frac{7}{3}(2(x)+9)$</span> does not have the same result for different values of <span class="math-container">$x$</span>.</p> <p>Where am I going wrong?</p>
Raffaele
83,382
<p><span class="math-container">$3 (x + 7) - y (2 x + 9)=21-9y$</span> for <span class="math-container">$x=0$</span> and</p> <p><span class="math-container">$3 (x + 7) - y (2 x + 9)=24 - 11 y$</span> for <span class="math-container">$x=1$</span></p> <p>They are the same, so we must have <span class="math-container">$$21-9y=24-11y\to y=\frac32$$</span></p>
4,069,499
<p>If we let <span class="math-container">$x = 0$</span>.</p> <p><span class="math-container">\begin{align*} 3(0+7)-y(2(0)+9) \\ 21-9y \\ \end{align*}</span></p> <p>Then <span class="math-container">$9y$</span> should always equal <span class="math-container">$21$</span>? Solving for <span class="math-container">$y$</span> finds <span class="math-container">$\frac{7}{3}$</span>.</p> <p>But <span class="math-container">$3(x+7)-\frac{7}{3}(2(x)+9)$</span> does not have the same result for different values of <span class="math-container">$x$</span>.</p> <p>Where am I going wrong?</p>
Joe
623,665
<p>If we let <span class="math-container">$x=0$</span> then we do indeed find that the expression equals <span class="math-container">$21-9y$</span>. But this does not mean that <span class="math-container">$21-9y$</span> must equal <span class="math-container">$0$</span>; it only means that <span class="math-container">$21-9y$</span> is a constant. If, as Raffaele has suggested, we let <span class="math-container">$x=1$</span>, then we find that the expression equals <span class="math-container">$24-11y$</span>. Hence, <span class="math-container">$21-9y$</span> and <span class="math-container">$24-11y$</span> must be equal to the same constant, and so solving the problem boils down to solving the equation <span class="math-container">$21-9y=24-11y$</span>: <span class="math-container">\begin{align} 21 - 9y &amp;= 24 - 11y &amp;&amp;\text{Subtract $21$ from both sides}\\ -9y &amp;= 3 - 11y &amp;&amp;\text{Add $11y$ to both sides}\\ 2y &amp;= 3 &amp;&amp;\text{Divide both sides by $2$}\\ y &amp;= \frac{3}{2} \, . \end{align}</span></p>
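A quick check (editorial addition) that $y = 3/2$ really makes the expression independent of $x$:

```python
# With y = 3/2, the expression 3(x + 7) - y(2x + 9) collapses to the
# constant 21 - 9*(3/2) = 7.5 for every x.
y = 3 / 2
values = {3 * (x + 7) - y * (2 * x + 9) for x in (-5, 0, 1, 2.5, 10)}
```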
3,880,743
<p>If <span class="math-container">$T:\mathbb{R}^2 \rightarrow \mathbb{R}$</span> is a function such that <span class="math-container">$T(\alpha v)=\alpha T(v)$</span> <span class="math-container">$\forall \alpha \in \mathbb{R}$</span> and <span class="math-container">$v \in \mathbb{R}^2$</span>, is T necessarily a linear transformation?</p> <p>My instinct is no since surely it has to also be additive in order to be linear (by definition), but I can't seem to think of a counterexample.</p> <p>Any pointers would be much appreciated!</p>
John Hughes
114,036
<p>As an alternative, define <span class="math-container">$$ T(x, 0) = x $$</span> and <span class="math-container">$$ T(x, y) = 0 $$</span> for <span class="math-container">$y \ne 0$</span>. Now consider that <span class="math-container">$(1, 1) = (0, 1) + (1, 0)$</span> and apply <span class="math-container">$T$</span> to both sides assuming linearity, and arrive at a contradiction.</p>
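A scalar version of such a counterexample, matching the question's codomain $\mathbb{R}$, can be checked numerically (editorial sketch; the map $T(x,0)=x$, $T(x,y)=0$ for $y\neq 0$ is one standard choice):

```python
def T(v):
    # Homogeneous but non-additive map R^2 -> R:
    # T(x, 0) = x on the x-axis, T(x, y) = 0 off it.
    x, y = v
    return x if y == 0 else 0.0

# Homogeneity T(a*v) = a*T(v) holds on a few samples...
homogeneous = all(
    T((a * x, a * y)) == a * T((x, y))
    for a in (-2.0, 0.0, 0.5, 3.0)
    for (x, y) in ((1.0, 0.0), (2.0, 5.0), (-1.0, 0.0))
)
# ...but additivity fails: T(1, 1) != T(0, 1) + T(1, 0).
additive = T((1.0, 1.0)) == T((0.0, 1.0)) + T((1.0, 0.0))
```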
2,619,638
<p>I know that a function which is not equal a.e. to a continuous function is, for example, a step function or the characteristic function of an interval, and I also know that the Dirichlet function is not an a.e. continuous function, but I want an example of a function with both properties.</p>
really Nobody
521,491
<p>You're actually right, that should be $\sin^{-1}$ (or $-\cos^{-1}$).</p> <p>Indeed, since $\sin^{-1}$ and $\cos^{-1}$ are linked, as you said, by $\cos^{-1}+\sin^{-1}=\pi/2$, and since a continuous function has an infinite number of antiderivatives differing by a constant, taking $\sin^{-1}$ or $\cos^{-1}$ makes no real difference, besides inverting the sign (which is the issue here). </p> <p>EDIT: a very valid point was made in the comments of Dr. Sonnhard Graubner's answer: the $-$ sign may be coming from the B term.</p>
7,130
<p>I'm looking for an explanation of how to reduce the Hamiltonian cycle problem to the Hamiltonian path problem (to prove that the latter is also NP-complete). I couldn't find any on the web; can someone help me here? (Linking a source is also good.)</p> <p>Thank you.</p>
Jozef
14,829
<p>For the directed case,</p> <p>Given $\langle G=(V,E)\rangle$ for the Hamiltonian cycle, we can construct the input $\langle G',s,t\rangle$: choose a vertex $u \in V$ and split it into two vertices, such that the edges that go out of $u$ will go out of $s$, and the edges that go into $u$ will go into $t$.</p>
7,130
<p>I'm looking for an explanation of how to reduce the Hamiltonian cycle problem to the Hamiltonian path problem (to prove that the latter is also NP-complete). I couldn't find any on the web; can someone help me here? (Linking a source is also good.)</p> <p>Thank you.</p>
Rotenberg
242,055
<p>This is a reduction from undirected Hamilton Cycle to undirected Hamilton Path. It takes a graph $G$ and returns a graph $f(G)$ such that $G$ has a Hamilton Cycle iff $f(G)$ has a Hamilton Path.</p> <p>Given a graph $G = (V,E)$ we construct a graph $f(G)$ as follows.</p> <p>Let $v \in V$ be a vertex of $G$, and let $v',s,t \notin V$.</p> <p>We want to make $v'$ a "copy" of $v$, and add vertices of degree one, $s,t$, connected to $v,v'$, respectively. (See Figure 1.)</p> <p>That is, $f(G)$ has vertices $V\cup \{v',s,t\}$ and edges $E\cup\{(v',w)|(v,w)\in E\}\cup\{(s,v),(v',t),(v,v')\}$.</p> <p>Now if $G$ has a Hamilton Cycle, we may write it in the form $(v,u),edges,(u',v)$, where $edges$ is some list of edges which must form a simple path $u\ldots u'$ visiting all vertices but $v$. But then, $(s,v),(v,u),edges,(u',v'),(v',t)$ is a Hamilton Path between $s$ and $t$ in $f(G)$.</p> <p>If on the other hand $f(G)$ has a Hamilton Path, then it must have $s$ and $t$ as endpoints, since they have degree $1$, in which case it must (up to reversal) be of the form $(s,v),(v,y),edges',(y',v'),(v',t)$. But then, $G$ has a Hamilton cycle $(v,y),edges',(y',v)$.</p> <p><img src="https://i.stack.imgur.com/IQs7E.png" alt="Hamilton Cycle to Hamilton Path reduction"></p>
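The construction can be sketched in code (editorial addition; `reduce_hc_to_hp` and the brute-force checker are made-up helper names, and the Hamilton-path test below is exponential, intended only for tiny examples):

```python
from itertools import permutations

def reduce_hc_to_hp(edges, v, v_new, s, t):
    """Build f(G): duplicate vertex v as v_new (same neighbours), then
    attach degree-one vertices s and t to v and v_new respectively."""
    E = {frozenset(e) for e in edges}
    E |= {frozenset((v_new, w)) for e in E if v in e for w in e if w != v}
    E |= {frozenset((s, v)), frozenset((v_new, t)), frozenset((v, v_new))}
    return E

def has_hamilton_path(vertices, E):
    # Brute force over all vertex orders (fine for tiny graphs only).
    return any(
        all(frozenset((p[i], p[i + 1])) in E for i in range(len(p) - 1))
        for p in permutations(vertices)
    )

# Triangle 0-1-2 has a Hamilton cycle, so f(G) has a Hamilton path; the
# path graph 0-1-2 (no cycle) should yield a graph with no Hamilton path.
fG = reduce_hc_to_hp([(0, 1), (1, 2), (2, 0)], v=0, v_new=3, s=4, t=5)
fH = reduce_hc_to_hp([(0, 1), (1, 2)], v=0, v_new=3, s=4, t=5)
found = has_hamilton_path(range(6), fG)
not_found = has_hamilton_path(range(6), fH)
```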
3,279,878
<p>I got the equation $x^2+2y^2+3z^2=10a^2$ (to be shown to have no nontrivial integer solutions) while I was trying to solve a certain math Olympiad problem. I tried working modulo small numbers and whatnot, but I haven't got anywhere. Is there a way to prove this?</p>
nonuser
463,553
<p>Solution with infinite descent.</p> <p>Suppose a nontrivial solution exists, and let <span class="math-container">$(x,y,z,a)$</span> be a solution with the smallest possible <span class="math-container">$a&gt;0$</span>: <span class="math-container">$$x^2+2y^2+3z^2=10a^2$$</span> </p> <p>Modulo 2 we get <span class="math-container">$2\mid x^2+z^2$</span>, so <span class="math-container">$x$</span> and <span class="math-container">$z$</span> have the same parity. </p> <ul> <li><p>If <span class="math-container">$x,z$</span> are both odd then <span class="math-container">$x=2b+1$</span> and <span class="math-container">$z=2c+1$</span>, so we have <span class="math-container">$$4b^2+4b+1+2y^2+12c^2+12c+3 = 10a^2$$</span> so <span class="math-container">$2b^2+2b+2+6c^2+6c+y^2=5a^2$</span> and thus <span class="math-container">$2\mid y^2-a^2$</span></p> <ul> <li>If <span class="math-container">$y,a$</span> are both odd, then <span class="math-container">$y=2d+1$</span> and <span class="math-container">$a=2t+1$</span>, so <span class="math-container">$$b^2+b+3c^2+3c+2d^2+2d=10t^2+10t+1$$</span> which is impossible since the left side is even and the right side odd.</li> <li>If <span class="math-container">$y,a$</span> are both even, then <span class="math-container">$y=2d$</span> and <span class="math-container">$a=2t$</span>, so <span class="math-container">$$b^2+b+1+3c^2+3c+2d^2=10t^2$$</span> which is impossible since the right side is even and the left side odd.</li> </ul></li> <li><p>If <span class="math-container">$x,z$</span> are both even then <span class="math-container">$x=2b$</span> and <span class="math-container">$z=2c$</span>, so we have <span class="math-container">$$2b^2+y^2+6c^2 = 5a^2$$</span> so <span class="math-container">$2\mid y^2-a^2$</span> and we have two cases again. Can you finish?</p></li> </ul>
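A small brute-force search (editorial addition) is consistent with the descent argument: in a box of non-negative values below 15, the only solution is the trivial one.

```python
# Search x^2 + 2y^2 + 3z^2 = 10a^2 over a small box of non-negative values.
solutions = [
    (x, y, z, a)
    for x in range(15)
    for y in range(15)
    for z in range(15)
    for a in range(15)
    if x * x + 2 * y * y + 3 * z * z == 10 * a * a
]
```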
2,779,083
<p>Given a polynomial $f(z)\in\mathbb{C}[z]$, do there exist only finitely many $c$ s.t. $f(z)-c=0$ has repeated roots? Is the above true in general? Is it true for polynomials of the form $f(z) = (z-z_1)\cdot ... \cdot (z - z_n)$ where $z_1, ... , z_n \in \mathbb{C}$ are distinct?</p>
Hagen von Eitzen
39,174
<p>A multiple root of $f(z)-c$ is at the same time a root of $f'(z)$. As the polynomial $f'$ has only finitely many roots, the claim follows. The only special case is when $f'\equiv 0$; but then $f$ is constant, and only for one specific value of $c$ does $f(z)-c$ have any roots at all (and then it vanishes identically).</p>
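For a concrete cubic this is easy to see with the discriminant; the example $f(z)=z^3-3z$ is my own illustration, not from the question:

```python
def disc_depressed_cubic(p, q):
    """Discriminant of z^3 + p z + q; it is zero iff the cubic has a repeated root."""
    return -4 * p ** 3 - 27 * q ** 2

# f(z) = z^3 - 3z, so f(z) - c = z^3 - 3z - c has p = -3, q = -c.
# f'(z) = 3z^2 - 3 vanishes at z = +-1, with critical values f(+-1) = -+2,
# so f(z) - c should have a repeated root exactly for c in {-2, 2}.
repeated_c = [c for c in range(-10, 11) if disc_depressed_cubic(-3, -c) == 0]
```

The repeated-root values of $c$ are exactly the finitely many critical values $f(z_k)$ with $f'(z_k)=0$.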
3,840,699
<p>I need to calculate something of the form</p> <p><span class="math-container">\begin{equation} \int_{D} f(\mathbf{x}) d\mathbf{x} \end{equation}</span></p> <p>with <span class="math-container">$D \subseteq \mathbb{R^2}$</span>, but I only have <span class="math-container">$f(\mathbf{x})$</span> available at given sample points in <span class="math-container">$D$</span>. What do you suggest for computing the estimate? For example, I think Monte Carlo integration doesn't apply directly because I can't evaluate <span class="math-container">$f(\mathbf{x})$</span> at arbitrary <span class="math-container">$\mathbf{x}$</span>. Maybe it could be some kind of combination of Monte Carlo and interpolation?</p>
Ross Millikan
1,827
<p>Assuming you are just given a table of values, there are two approaches that come to mind.</p> <p>One is to view each point as a sample of the value of the function. You can divide <span class="math-container">$D$</span> into regions by a <a href="https://en.wikipedia.org/wiki/Voronoi_diagram" rel="nofollow noreferrer">Voronoi diagram</a>, associating every point in <span class="math-container">$D$</span> with the closest point you have data from. Multiply each <span class="math-container">$f(\bf x)$</span> by the area of its cell and add them up.</p> <p>The second is to pick some functional form, use the data points to feed a nonlinear minimizer to find the parameters of the form, and integrate the resulting function over <span class="math-container">$D$</span>. If you know something about <span class="math-container">$f$</span> this would seem preferable. If you don't, I would plot <span class="math-container">$f$</span> and look at it for inspiration. You can try a number of forms and see what fits the best.</p> <p>Either one can be badly wrong. There could be some point where the function gets huge that is not represented in your data. If you try including some term like <span class="math-container">$\frac a{|\bf x-x_0|^2+b^2}$</span> but don't have any points near <span class="math-container">$\bf {x_0}$</span> you can get badly fooled.</p>
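A toy sketch of the Voronoi-weighting idea. Rather than computing the cells exactly, the relative cell areas are themselves estimated by Monte Carlo (nearest-neighbor assignment); the test function $f(x,y)=x+y$ on the unit square is a stand-in for real sampled data:

```python
import random

random.seed(0)

def nn_integrate(samples, region_sampler, n_mc=20000, area=1.0):
    """Nearest-neighbor (Voronoi-cell) estimate of the integral.

    samples: list of (x, y, f(x, y)); region_sampler: draws uniform points in D.
    Each Monte Carlo point is assigned to its nearest data point, so
    counts[i] / n_mc estimates the relative area of sample i's Voronoi cell."""
    counts = [0] * len(samples)
    for _ in range(n_mc):
        px, py = region_sampler()
        best = min(range(len(samples)),
                   key=lambda i: (samples[i][0] - px) ** 2 + (samples[i][1] - py) ** 2)
        counts[best] += 1
    return sum(fv * area * c / n_mc for (_, _, fv), c in zip(samples, counts))

# demo on D = unit square with f(x, y) = x + y, whose true integral is 1
pts = [(random.random(), random.random()) for _ in range(30)]
samples = [(x, y, x + y) for x, y in pts]
estimate = nn_integrate(samples, lambda: (random.random(), random.random()))
```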
1,328,909
<p>I know how to find for which $n$ $\phi(n)=n/2$ or $\phi(n)=n/3$; my method for finding those was simply to find primes $p$ that satisfy $\prod_{p\mid n}\left(1-\frac1p\right) = \frac12$ or $\frac13$.</p> <p>However, I don't know how to solve $\prod_{p\mid n}\left(1-\frac1p\right) = \frac16$. Intuitively it seems that if I combine the results for both $\phi(n) = n/2$ and $\phi(n) = n/3$ I'll get $\phi(n) = n/6$, but it does not work, because I get numbers of the form $2^a3^b$, which give $\phi(n) = n/3$ again.</p> <p>Is there a way to find $n$ for which $\phi(n)=n/6$? Or do such numbers exist at all?</p>
Nescio
47,988
<p>Since it seemed to be popular enough in the comments: you can consider a function defined as $f(x)=0$ for $x\in\mathbb{R}\setminus A$, $f(x)=1$ if $x\in\mathbb{Q}\cap{A}$, and $f(x)=2$ if $x\in\mathbb{I}\cap{A}$. It will be continuous at each point $x_0$ outside of $A$, as it will be $0$ on a neighborhood of $x_0$ (as $\mathbb{R}\setminus A$ is open), and obviously not continuous at $A$.</p>
313,437
<p>I have to find out the convergence of the next integral: $$\int^{\pi/2}_0{\frac{\ln(\sin(x))}{\sqrt{x}}}dx$$ Any help? Thanks</p>
Ron Gordon
53,268
<p>The tricky part of the integral is near $x=0$. There, note that $\sin{x} \sim x$, and consider </p> <p>$$\int dx \frac{\log{x}}{\sqrt{x}}$$</p> <p>Substitute $x=u^2$, $dx=2 u du$ and this integral is equal to</p> <p>$$2 \int du u \frac{1}{u} \log{u^2} = 4\int du \log{u} = 4 (u \log{u}-u) = 2 (\sqrt{x} \log{x} - 2 \sqrt{x}) $$</p> <p>Thus, near $x=0$, the singularity is integrable, and the integral converges.</p>
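Numerically this checks out: with the antiderivative $F(x)=2\sqrt{x}\log x-4\sqrt{x}$ from the substitution above, $F(\varepsilon)\to0$ as $\varepsilon\to0$, so $\int_0^1 \frac{\log x}{\sqrt{x}}\,dx$ converges to $F(1)=-4$. A quick midpoint-rule check:

```python
import math

def F(x):
    # antiderivative from the substitution: 2*sqrt(x)*log(x) - 4*sqrt(x)
    return 2 * math.sqrt(x) * math.log(x) - 4 * math.sqrt(x)

def g(x):
    return math.log(x) / math.sqrt(x)

eps, n = 1e-4, 200000
h = (1 - eps) / n
numeric = h * sum(g(eps + (i + 0.5) * h) for i in range(n))   # midpoint rule on [eps, 1]
exact = F(1) - F(eps)
```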
2,022,423
<p>You are asked to <strong>permute the neighboring sub-sequence</strong> of the sequence $n,n-1,n-2,\cdots,1$ until the sequence is brought to the increasing order. </p> <p>By <em>permute the neighboring sub-sequence</em> I mean for example: $5,4,3,2,1 \to 5,3,4,2,1 $ or $5,4,3,2,1\to 5,2,4,3,1$ or $5,4,3,2,1\to5,2,1,4,3$. </p> <p><strong>What is the least number of permutations needed?</strong></p> <h2>Edit</h2> <p>A first nontrivial case I can come up with is:</p> <p>$$ 54321\to52143\to14523\to12345 $$</p> <h2>Edit 2</h2> <p>A second nontrivial case I come up with to show $T(n)&lt;f(n)$ is possible regarding the answer by @Brian M. Scott: $$ 7654321\to7632154\to7215634\to1567234\to1234567 $$ maybe $\frac{n+1}{2}$?</p>
Ross Millikan
1,827
<p>$n-1$ is an upper bound as we can exhibit an algorithm that achieves that. Take successive pairs of elements and invert them. This uses $\lfloor \frac n2 \rfloor$ swaps. Lock together the pairs you have swapped, considering the pair to be one element, and you have $n-\lfloor \frac n2 \rfloor=\lceil \frac n2 \rceil$ elements left. Now use the same algorithm again. Repeat until you have only one element left, which means the list is in proper order. </p> <p>Let $T(n)$ be the number of swaps this algorithm uses for a list of length $n$. $T(1)=0, T(2)=1$ are the base cases for induction. Assume $T(k)=k-1$ has been proven up to $k$. Then $T(k+1)=\lfloor \frac {k+1}2 \rfloor + T(\lceil \frac {k+1}2 \rceil)=\lfloor \frac {k+1}2 \rfloor + \lceil \frac {k+1}2 \rceil-1=k$.</p>
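The recursion for this algorithm's swap count can be checked directly:

```python
def swaps_used(n):
    """floor(n/2) pair swaps, then recurse on the ceil(n/2) locked blocks."""
    if n == 1:
        return 0
    return n // 2 + swaps_used((n + 1) // 2)
```

which matches $n-1$ for every $n$.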
2,022,423
<p>You are asked to <strong>permute the neighboring sub-sequence</strong> of the sequence $n,n-1,n-2,\cdots,1$ until the sequence is brought to the increasing order. </p> <p>By <em>permute the neighboring sub-sequence</em> I mean for example: $5,4,3,2,1 \to 5,3,4,2,1 $ or $5,4,3,2,1\to 5,2,4,3,1$ or $5,4,3,2,1\to5,2,1,4,3$. </p> <p><strong>What is the least number of permutations needed?</strong></p> <h2>Edit</h2> <p>A first nontrivial case I can come up with is:</p> <p>$$ 54321\to52143\to14523\to12345 $$</p> <h2>Edit 2</h2> <p>A second nontrivial case I come up with to show $T(n)&lt;f(n)$ is possible regarding the answer by @Brian M. Scott: $$ 7654321\to7632154\to7215634\to1567234\to1234567 $$ maybe $\frac{n+1}{2}$?</p>
Brian M. Scott
12,042
<p>I don’t know that it’s best possible, but I can show that if $T(n)$ swaps are needed to reverse $\langle n,n-1,\ldots,1\rangle$, then $T(n)\le\left\lfloor\frac{3n}4\right\rfloor$. For convenience let $f(n)=\left\lfloor\frac{3n}4\right\rfloor$. It’s not hard to verify that $T(1)=0=f(1)$, $T(2)=1=f(2)$, $T(3)=2=f(3)$, and $T(4)=3=f(4)$, and the OP has shown that $T(5)=3=f(5)$.</p> <p>Now suppose that $n&gt;5$, and $T(k)\le f(k)$ for $k&lt;n$. Let $n=4q+r$, where $r\in\{0,1,2,3\}$ and $q\ge 1$. In $3$ swaps the initial sequence can be transformed into </p> <p>$$\sigma=\langle n-4,n-3,n-2,n-1,n,n-5,n-6,\ldots,2,1\rangle\;.$$</p> <p>Now treat the first $5$ terms of $\sigma$ as a single entity $\widehat{n-4}$ and work with the sequence</p> <p>$$\hat\sigma=\left\langle\widehat{n-4},n-5,n-6,\ldots,2,1\right\rangle$$</p> <p>of length $n-4=4(q-1)+r$. The induction hypothesis says that </p> <p>$$T(n-4)\le f(n-4)=3(q-1)+f(r)\;,$$</p> <p>so $\hat\sigma$ can be transformed into </p> <p>$$\left\langle 1,2,\ldots,n-5,\widehat{n-4}\right\rangle=\langle 1,2,\ldots,n-5,n-4,n-3,n-2,n-1,n\rangle$$</p> <p>with $3(q-1)+f(r)$ further switches. Thus,</p> <p>$$T(n)\le 3+3(q-1)+f(r)=3q+f(r)=f(n)\;,$$</p> <p>and the induction is complete.</p> <p><strong>Added:</strong> This can definitely be improved. For a string $\langle a_1,\ldots,a_n\rangle$ and $1\le i&lt;j\le k\le n$ let $S(i,j,k)$ be the operation of swapping the substrings $\langle a_i,\ldots,a_{j-1}\rangle$ and $\langle a_j,\ldots,a_k\rangle$. If $n=2m+1$, perform the swaps $S(k,k+2,k+m+1)$ for $k=m,m-1,\ldots,1$, then perform $S(2,m+2,n)$. 
Then:</p> <ul> <li>$a_n=1$ moves $2$ places to the left $m$ times, putting it in position $1$ where it stays on the last move; </li> <li>$a_{n-1}=2$ moves $2$ places to the left $m-1$ times, reaching position $2$, then $S(1,3,m+2)$ moves it to position $m+2$, and the final swap moves it back to position $2$; </li> <li>in general for $k=1,\ldots,m-1$, $a_{n-k}=k+1$ moves $2$ places to the left $m-k$ times to reach position $k+1$, then moves to position $m+k+1$ with $S(k,k+2,m+k+1)$, where it remains until the final swap moves it back to position $k+1$; </li> <li>$a_{m+1}=m+1$ is sent by the first swap to position $n$, where it stays until the final swap moves it back to its original position; </li> <li>for $k=2,\ldots,m$, $a_k=n+1-k$ stays in its original position for the first $m-k$ swaps, at which point it moves $m$ places to the right to position $m+k$, after which it moves left $2$ places with each of the remaining $k-1$ swaps before the final one, reaching position $m-k+2$, and the final swap takes it to position $2m-k+2=n+1-k$; and </li> <li>$a_1=n$ stays in its original position for the first $m-1$ swaps, then moves to position $m+1$ and thence to position $2m+1=n$ with the final two swaps.</li> </ul> <p>Thus, $T(2m+1)\le m+1$. </p> <p>It’s not hard to check that if $n=2m$, the swaps $S(k,k+2,k+m+1)$ for $k=m-1,\ldots,1$ followed by $S(1,2,2)$ and $S(3,m+2,2m)$ will convert $\langle 2m,\ldots,1\rangle$ to $\langle 1,\ldots,2m\rangle$, so $T(2m)\le m+1$, and in general we have</p> <p>$$T(n)\le\left\lceil\frac{n+1}2\right\rceil\;.$$</p>
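The odd-$n$ scheme can be verified mechanically. (I take the final swap to be $S(2,m+2,n)$, which is what reproduces the OP's five-element example $54321\to52143\to14523\to12345$.)

```python
def S(seq, i, j, k):
    """Swap the substrings a_i..a_{j-1} and a_j..a_k (1-indexed, inclusive)."""
    return seq[:i - 1] + seq[j - 1:k] + seq[i - 1:j - 1] + seq[k:]

def reverse_odd(n):
    """Reverse n, n-1, ..., 1 for odd n using (n + 1) // 2 substring swaps."""
    m = (n - 1) // 2
    seq, count = list(range(n, 0, -1)), 0
    for k in range(m, 0, -1):
        seq = S(seq, k, k + 2, k + m + 1)
        count += 1
    seq = S(seq, 2, m + 2, n)   # final swap
    return seq, count + 1
```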
1,617,890
<blockquote> <p>Question: Solve $\sin(3x)=\cos(2x)$ for $0≀x≀2\pi$.</p> </blockquote> <p>My knowledge on the subject; I know the general identities, compound angle formulas and double angle formulas so I can only apply those.</p> <p>With that in mind</p> <p>\begin{align} \cos(2x)=&amp;~ \sin(3x)\\ \cos(2x)=&amp;~ \sin(2x+x) \\ \cos(2x)=&amp;~ \sin(2x)\cos(x) + \cos(2x)\sin(x)\\ \cos(2x)=&amp;~ 2\sin(x)\cos(x)\cos(x) + \big(1-2\sin^2(x)\big)\sin(x)\\ \cos(2x)=&amp;~ 2\sin(x)\cos^2(x) + \sin(x) - 2\sin^2(x)\\ \cos(2x)=&amp;~ 2\sin(x)\big(1-\sin^2(x)\big)+\sin(x)-2\sin^2(x)\\ \cos(2x)=&amp;~ 2\sin(x) - 2\sin^3(x) + \sin(x)- 2 \sin^2(x)\\ \end{align} <strong>edit</strong> </p> <p>\begin{gather} 2\sin(x) - 2\sin^3(x) + \sin(x)- 2 \sin^2(x) = 1-2\sin^2(x) \\ 2\sin^3(x) - 3\sin(x) + 1 = 0 \end{gather} </p> <p>This is a cubic right? </p> <p>So $u = \sin(x)$,</p> <p>\begin{gather} 2u^3 - 3u + 1 = 0 \\ (2u^2 + 2u - 1)(u-1) = 0 \end{gather}</p> <p>Am I on the right track?<br> This is where I am stuck what should I do now?</p>
Ian Miller
278,461
<p>You have made some errors in your calculations (or some typos here).</p> <p>$$\sin(3x)=\cos(2x)$$</p> <p>$$ \sin(2x+x) = \cos(2x)$$</p> <p>$$\sin(2x)\cos(x) + \cos(2x)\sin(x) = \cos(2x) $$</p> <p>$$ 2\sin(x)\cos(x)\cos(x) + (1-2\sin^2(x))\sin(x) = \cos(2x) $$</p> <p>$$ 2\sin(x)\cos^2(x) + \sin(x) - 2\sin^{\bf{3}}(x) = \cos(2x) $$</p> <p>$$ 2\sin(x)(1-\sin^2(x))+\sin(x)-2\sin^{\bf{3}}(x)=\cos(2x) $$</p> <p>$$ 2\sin(x) - 2\sin^3(x) + \sin(x)- 2 \sin^{\bf{3}}(x) = \cos(2x) $$</p> <p>$$3\sin(x)-4\sin^3(x)=\cos(2x)$$</p> <p>Then recall that $\cos(2x)=1-2\sin^2(x)$ to give:</p> <p>$$3\sin(x)-4\sin^3(x)=1-2\sin^2(x)$$</p> <p>This is a cubic in $\sin(x)$. For simplicity write $y=\sin(x)$ to get:</p> <p>$$-4y^3+2y^2+3y-1=0$$</p> <p>$$-(y-1)(4y^2+2y-1)=0$$</p> <p>So $\sin(x)=1$ or $\sin(x)=\frac{-1\pm\sqrt{5}}{4}$</p> <p>So $x=\frac{\pi}{2}$ or $x=\frac{\pi}{10}$, $\frac{9\pi}{10}$, $\frac{13\pi}{10}$, $\frac{17\pi}{10}$</p>
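A quick numerical check of the five solutions:

```python
import math

solutions = [math.pi / 2, math.pi / 10, 9 * math.pi / 10,
             13 * math.pi / 10, 17 * math.pi / 10]
# each should satisfy sin(3x) = cos(2x)
residuals = [abs(math.sin(3 * x) - math.cos(2 * x)) for x in solutions]
```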
1,580,586
<p>Question goes as follows: Consider the points on a line; $A(1,3,-1)$ and $B(-1,4,-2)$. Find the point $Q$ on $L$ closest to the point $P(1,1,0)$.</p> <p>My thinking: Closest distance from $a$ to $b$ is always a straight line, $90$ degree angle. Therefore: $$ Qβ‹…P=0 $$</p> <p>$$ L= \left(\begin{array}{cc} 1\\ 3\\ -1\\ \end{array}\right) + t \left(\begin{array}{cc} -2\\ 1\\ -1\\ \end{array}\right) $$</p> <p>$$ Q = \left(\begin{array}{cc} 1-2t\\ 3+t\\ -1-t\\ \end{array}\right) $$</p> <p>$$ (1-2t)\times(1)+(3+t)\times(1)+(-1-t)\times(0)=0 $$ $$ 4-t=0 $$</p> <p>$$t=4 $$ and $$ Q= \left(\begin{array}{cc} -7\\ 7\\ -5\\ \end{array}\right) $$</p> <p>But it is wrong, my answer tells me a different story and when I graph it is wrong.</p> <p>Answer $Q(2,5/2,-1/2)$</p>
Deepak
151,732
<p>Your method is incorrect.</p> <p>You're supposed to find the point $Q$ such that the vectors $\vec{AQ}$ and $\vec{PQ}$ are perpendicular.</p> <p>$\vec{AQ} = \vec{OQ} - \vec{OA} = \left(\begin{array}{cc} -2t\\ t\\ -t\\ \end{array}\right)$</p> <p>$\vec{PQ} = \vec{OQ} - \vec{OP} = \left(\begin{array}{cc} -2t\\ 2+t\\ -1-t\\ \end{array}\right)$</p> <p>Dot product the two and solve to get: $6t^2 + 3t = 0$.</p> <p>Reject $t = 0$ (as this makes $Q$ coincident with $A$) to get $t = -\frac 12$, giving you the expected answer.</p>
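Since the squared distance is a quadratic in $t$, the same answer also comes from minimizing it directly; a quick check:

```python
A = (1.0, 3.0, -1.0)
d = (-2.0, 1.0, -1.0)          # direction vector B - A
P = (1.0, 1.0, 0.0)

def Q(t):
    return tuple(a + t * di for a, di in zip(A, d))

def dist2(t):
    return sum((q - p) ** 2 for q, p in zip(Q(t), P))

# dist2(t) = |d|^2 t^2 + 2 (A-P).d t + |A-P|^2 is minimized at t = -(A-P).d / |d|^2
AP = tuple(a - p for a, p in zip(A, P))
t_star = -sum(u * v for u, v in zip(AP, d)) / sum(u * u for u in d)
closest = Q(t_star)
```

This gives $t=-\frac12$ and $Q=(2,\frac52,-\frac12)$, matching the expected answer.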
29,823
<p>Just a random thought here: Can cohomology theories (e.g. sheaf cohomology) on the Stone space $S_n(T)$ (the space of complete n-types) of a first-order theory $T$ tell us anything interesting (e.g. the classification of theories)? Is there any result in model theory that is obtained (probably most easily) by this kind of application of cohomology theories? Thanks!</p>
James Freitag
6,789
<p>I can not really inform you about this since I don't know, but I can point you to some notes of Angus Macintyre, <a href="http://modular.math.washington.edu/swc/notes/files/03MacintyreNotes.pdf" rel="nofollow">http://modular.math.washington.edu/swc/notes/files/03MacintyreNotes.pdf</a></p> <p>Here are some excerpts: </p> <p>"For me personally, the main surprise arising from the discovery of ACFA was how much there was to be done in terms of a model-theoretic reaction to the development of etale cohomology and its relatives."</p> <p>"Again, in a different direction, one begins to see cohomological ideas coming up all over applied model theory, for example in o-minimality."</p> <p>I hope that you find this useful. </p>
3,391,280
<p>Prove by induction on n that <span class="math-container">$\exists x,y,z \in Z$</span> s.t. <span class="math-container">$x\ge 2, y\ge 2, z\ge 2$</span> satisfying <span class="math-container">$x^2+y^2=z^{2n+1}$</span> </p> <p>I'm a lot more comfortable with induction proofs of <span class="math-container">$\forall$</span> statements; I haven't really seen one in this format yet, where there's an <span class="math-container">$\exists$</span>. Since this is obviously not true for all <span class="math-container">$x,y,z\in Z$</span>, it's harder for me to figure out how to solve it.</p>
fleablood
280,126
<p>Well, if <span class="math-container">$x^2 + y^2 = z^{2n+1}$</span> then</p> <p><span class="math-container">$x^2z^2 + y^2z^2 = z^{2n+1}z^2$</span></p> <p>so <span class="math-container">$(xz)^2 + (yz)^2 = z^{2(n+1)+1}$</span>, which turns a solution for <span class="math-container">$n$</span> into one for <span class="math-container">$n+1$</span>. For the base case <span class="math-container">$n=1$</span>, note that <span class="math-container">$2^2+2^2=2^3$</span>.</p>
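Carrying the hint through: starting from $2^2+2^2=2^3$ (my own choice of base solution) and multiplying through by $z^2$ at each step gives a witness for every $n$. A quick check:

```python
def solution(n):
    """A witness (x, y, z) with x^2 + y^2 = z^(2n+1) and all entries >= 2."""
    x, y, z = 2, 2, 2            # base case n = 1: 2^2 + 2^2 = 8 = 2^3
    for _ in range(n - 1):
        x, y = x * z, y * z      # the induction step from the hint
    return x, y, z
```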
1,053,683
<p>How to show that $$\sum_{n=1}^\infty\frac{1}{n^2+3n+1}=\frac{\pi\sqrt{5}}{5}\tan\frac{\pi\sqrt{5}}{2}$$ ?</p> <p><strong>My try:</strong></p> <p>We have $$n^2+3n+1=\left(n+\frac{3+\sqrt{5}}{2}\right)\left(n+\frac{3-\sqrt{5}}{2}\right),$$ so $$\frac{1}{n^2+3n+1}=\frac{2}{\sqrt{5}}\left(\frac{1}{2n+3-\sqrt{5}}-\frac{1}{2n+3+\sqrt{5}}\right).$$ Then, I don't know how to proceed.</p>
Ron Gordon
53,268
<p>I think we can make some use of the residue theorem. Write $n^2+3 n+1 = (n+3/2)^2-5/4$ and the sum is</p> <p>$$\sum_{n=1}^{\infty} \frac1{\left (n+\frac{3}{2} \right )^2-\frac{5}{4}} = \frac12 \sum_{n=-\infty}^{\infty} \frac1{\left (n+\frac{3}{2} \right )^2-\frac{5}{4}} - 1 +1$$</p> <p>(To get the doubly infinite sum, I had to add back the $n=0$ and $n=-1$ terms, which happen to sum to zero.)</p> <p>The sum on the RHS may be attacked via the residue theorem, using the following:</p> <p>$$\sum_{n=-\infty}^{\infty} f(n) = - \pi \sum_k \operatorname*{Res}_{z=z_k} [f(z)\cot{\pi z} ] $$</p> <p>where $z_k$ is a non-integer pole of $f$. The poles are at </p> <p>$$z_{\pm} = -\frac{3}{2} \pm \frac{\sqrt{5}}{2} $$</p> <p>The sum is then equal to</p> <p>$$\frac{\pi}{2 \sqrt{5}} \left [\cot{\left (\frac{3 \pi}{2} - \frac{\sqrt{5} \pi}{2} \right )} - \cot{\left (\frac{3 \pi}{2} + \frac{\sqrt{5} \pi}{2} \right )} \right ] = \frac{\pi}{\sqrt{5}}\tan{\frac{\sqrt{5}\pi}{2}}$$</p>
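A numeric check of the closed form (the tail after $N$ terms is below $\sum_{n>N} 1/n^2 < 1/N$):

```python
import math

N = 10 ** 6
partial = sum(1.0 / (n * n + 3 * n + 1) for n in range(1, N + 1))
closed = math.pi / math.sqrt(5) * math.tan(math.sqrt(5) * math.pi / 2)
# both come out near 0.5462
```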
2,264,021
<p>Can you explain the basic differences between Interior Point Methods, Active Set Methods, Cutting Plane Methods and Proximal Methods?</p> <p>What is the best method and why? What are the pros and cons of each method? What is the geometric intuition for each algorithm type?</p> <p>I am not sure I understand what the differences are. Please provide examples of each type of algorithm: active set, cutting plane and interior point.</p> <p>I would consider 3 properties of each algorithm class: complexity, practical computation speed and convergence rate. </p>
wyer33
33,022
<p>Although these are all optimization algorithms, they tend to be used in different contexts. Note, you requested a lot of technical information that I don't remember off the top of my head, but perhaps the following will get you started.</p> <h2>Interior Point and Active Set Methods</h2> <p>Both of these algorithms are used to solve optimization problems with inequalities. Generally speaking, they are used in conjunction with other algorithms for solving problems of the type: $$ \min\limits_{x\in X} \{ f(x) : h(x)\geq 0\} $$ or $$ \min\limits_{x\in X} \{ f(x) : g(x)=0,h(x)\geq 0\} $$ That said, they go about handling the inequality in different ways.</p> <p>In an active set method, we largely ignore the inequality constraint until we generate an iterate that would violate it. At this point, we momentarily stop, and then treat the blocking inequality constraint as an equality constraint. Those inequality constraints that we treat as equality constraints are called active. At this point, we continue with the algorithm, but we play a game to ensure that our new iterates lie in the nullspace of the total derivative of the constraint. Really, unless we want to generate some kind of hateful method, we just assume that the $h$ are affine constraints because moving in the nullspace of the total derivative means moving around in the nullspace of the operator that represents $h$. Now, at some point, the algorithm may want to move off of the inequality, so we also need a mechanism to recognize when this occurs. At this point, the offending inequality constraint becomes inactive and we largely ignore it again. Anyway, the simplex method is an example of an active set method specialized for linear programs. Generally speaking, these algorithms tend to be very reliable and robust. That said, excessive pivoting, which means adding and removing inequality constraints from the active set, can dramatically slow down performance. 
In a nonlinear setting, working in the nullspace of the derivative of $h$ can be a pain. Every time we modify the active set, we end up changing the Hessian and the gradient. This complicates iterative system solvers since we constantly modify the problem. In addition, we have to solve to get into the active set, which can be expensive. There are some tricks that can be played with updating QR factorizations that help this process and this is discussed in things like Nocedal and Wright's book Numerical Optimization on page 478.</p> <p>Interior point methods typically refer to primal-dual interior point methods. Well, there's probably a better name since sometimes people use primal or dual only methods. Anyway, part of the confusion on the name is that there are a couple of interior point methods such as primal-dual, reflective (from Coleman and Li), and even something like Zoutendijk's feasible direction method is an interior point method. Anyway, in more common usage, interior point methods attack the optimality conditions \begin{align*} \nabla f(x) + g^\prime(x)^*y - h^\prime(x)^*z =&amp; 0\\ g(x) =&amp; 0\\ h(x)z =&amp; 0\\ h(x) \geq&amp; 0\\ z \geq&amp; 0 \end{align*} by perturbing the complementary slackness condition as well as requiring strict inequality. This gives \begin{align*} \nabla f(x) + g^\prime(x)^*y - h^\prime(x)^*z =&amp; 0\\ g(x) =&amp; 0\\ h(x)z =&amp; \mu e\\ h(x) &gt;&amp; 0\\ z &gt;&amp; 0 \end{align*} where $e$ is the identity element and $\mu$ is the barrier parameter, which is carefully reduced to 0. The other way to derive it is to replace the inequality constraint with a log barrier function in the objective. That actually gives us the two different ways to visualize the problem. Personally, I prefer to just think of the perturbed problem, which is then fed to Newton's method. Most people prefer to think of how we modify the objective problem to represent inequality constraints. Really, the easy problem to graph is $\min\{x : x\geq 0\}$. 
With a log barrier, we change the objective $f(x)=x$ into $f(x)-\mu\log(x)$. If you graph that latter function, it'll give an idea of why it works. Anyway, interior point methods tend to work very efficiently and can solve many large scale problems, or really even small scale, faster than active set methods. Simply, rather than figuring out how to creatively pivot, we figure out how to creatively manipulate and manage the barrier parameter. That said, it can be a royal pain to do so. In addition, odd things can happen if we get too close to the boundary of the feasible set. As such, if we have to repeatedly solve an optimization problem where one problem is a perturbed version of another, interior point methods can be less optimal. In that case, an active set method can be preferable. All that said, the real advantage, in my opinion, that interior point methods have over active set methods is that the Hessian and gradient are only manipulated once per optimization iteration and not every time we hit the boundary. For nonlinear problems, this can be a big deal.</p> <h2>Cutting Plane Methods</h2> <p>Generally, these methods are used to help solve mixed integer linear programs (MILPs). In a MILP, we have a linear objective, linear equality constraints, linear inequality constraints, and integer constraints on the variables. Though there are many different algorithms to solve these formulations, the traditional, robust method is an algorithm called branch and cut. Essentially, solving an MILP is hard, so we relax it into a linear program, which, if we're minimizing, gives a lower bound. For example, if we had a MILP like $$ \min\{d^Tx : Ax=b, Bx\geq c, x\in\{0,1\}\} $$ we can relax it into $$ \min\{d^Tx : Ax=b, Bx\geq c, 0\leq x\leq 1\} $$ which gives a lower bound to the original problem. Now, the branch piece of branch and cut fixes the variables to integer quantities and then tries to bound the problem. 
It's called branch because we normally track these decisions with a tree where one branch fixes a variable one way and another branch fixes a variable another way. In addition, we add new inequality constraints to the relaxations to help strengthen the relaxation and give a better bound. These new constraints are called cutting planes. Essentially, we can add redundant, unnecessary inequality constraints to the original MILP, but these constraints may not be redundant for the relaxations. In fact, if we knew all the linear constraints necessary to represent the convex hull of the feasible set, we'd immediately be able to solve the MILP since we know the solution lies on the convex hull. Of course, we don't know this, so we try to be smart and add cutting planes like Gomory cuts. Long story short, cutting plane methods try to approximate some feasible set with new inequality constraints. Most often, we see this in MILPs, but there are other places where they arise.</p> <h2>Proximal Point Methods</h2> <p>In truth, I'm not super familiar with these algorithms and their properties. As such, the snarky answer is they're the algorithm that no one really used until compressed sensing became popular. Generally speaking, they're called proximal because there's a penalty term that keeps us close to a previous point. For example, in $\textrm{prox}(y) = \arg\min\{f(x) + \lambda \|x-y\|^2\}$, we have a term to keep us in proximity to $y$. Anyway, someone else should answer this one. Snarkiness aside, after the compressed sensing craze, they pop up in a variety of useful applications.</p>
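The central-path intuition behind the log barrier for $\min\{x : x\ge 0\}$ can be seen in a few lines. This is a toy sketch (plain Newton steps with a crude interior clamp), not how a production primal-dual code is organized:

```python
import math

def barrier_min(mu, iters=100):
    """Newton's method on f(x) = x - mu*log(x) over x > 0.

    f'(x) = 1 - mu/x and f''(x) = mu/x**2, so the exact minimizer is x = mu."""
    x = 1.0
    for _ in range(iters):
        x -= (1 - mu / x) / (mu / x ** 2)   # Newton step on f'
        x = max(x, 1e-12)                   # crude safeguard: stay in the interior
    return x

mus = (1.0, 0.1, 0.01, 0.001)
path = [barrier_min(mu) for mu in mus]
# the central path x*(mu) = mu marches toward x = 0, the solution of min{x : x >= 0}
```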
308,856
<p>A set $E\subseteq \mathbb{R}^d$ is said to be Jordan measurable if its inner measure $m_{*}(E)$ and outer measure $m^{*}(E)$ are equal. However, Lebesgue measure theory is developed using only the outer measure. </p> <p>A function is Riemann integrable iff its upper integral and lower integral are equal. However, in Lebesgue integration theory, we rarely use the upper Lebesgue integral.</p> <p>Why are the outer measure and lower integral more important than the inner measure and upper integral?</p>
Zvonimir Sikic
94,293
<p>Concerning Tao's comment that the symmetry is broken by declaring $0\cdot\infty = \infty\cdot 0 = 0$, I would like to add that this is the reason why the Lebesgue integral does not satisfy the Newton-Leibniz formula. Namely, for the Cantor-Lebesgue function $f$ we have $f(1)-f(0) = 1$ but $\int_0^1 f' = 0$, because $f' = \infty$ on the Cantor set $C$, which has measure $0$ (and $f' = 0$ on its complement). But if we realize that the measure of $C$ is $0 = (1, 2/3, 4/9, \ldots)$ and $f' = \infty = (1, 3/2, 9/4, \ldots)$, then we see that this particular $0\cdot\infty$ is not $0$ but exactly $1$, as it should be by the Newton-Leibniz formula. We have a similar problem with the countable additivity of limiting frequencies, which is usually contradicted by an infinite lottery with tokens $1,2,3,4,\ldots$. This contradiction also depends on $\infty\cdot 0 = 0$ and disappears if we really calculate the relevant $\infty\cdot 0$ ( <a href="https://www.researchgate.net/publication/290606552_A_Note_on_Probability_Frequency_and_Countable_Additivity" rel="nofollow noreferrer">https://www.researchgate.net/publication/290606552_A_Note_on_Probability_Frequency_and_Countable_Additivity</a> )</p>
2,519,620
<blockquote> <p><strong>Question :</strong> Three balls are to be randomly selected without replacement from an urn containing $20$ balls numbered $1$ through $20$. If we bet that at least one of the balls that are drawn has a number as large as or larger than $17$, what is the probability that we win the bet?</p> </blockquote> <p>I am sorry that I have neither an approach nor a solution. I am not getting the question at all. I am completely new to the random variable concept, thus will need some help initially. Please help me out.</p> <p>P.S - This is not a homework question.</p>
Cm7F7Bb
23,249
<p>Let us simplify the game a little bit. Suppose that balls $1,\ldots,16$ are red and $17,\ldots,20$ are green. We win if we draw a green ball. The distribution of green balls among the three drawn balls is <a href="https://en.wikipedia.org/wiki/Hypergeometric_distribution" rel="nofollow noreferrer">hypergeometric</a> since we are drawing without replacement. The probability to get $k$ green balls in the sample is given by $$ \frac{{4\choose k}{16\choose 3-k}}{{20\choose 3}} $$ for $k=0,1,2,3$. Hence, the probability that we win is given by $$ \frac{{4\choose 1}{16\choose 2}}{{20\choose 3}}+\frac{{4\choose 2}{16\choose 1}}{{20\choose 3}}+\frac{{4\choose 3}{16\choose 0}}{{20\choose 3}}\approx0.5088. $$</p>
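Exact arithmetic confirms the number:

```python
from fractions import Fraction
from math import comb

def hyper(k):
    # P(exactly k green) with 4 green balls, 16 red balls, 3 draws without replacement
    return Fraction(comb(4, k) * comb(16, 3 - k), comb(20, 3))

p_win = sum(hyper(k) for k in (1, 2, 3))
```

giving $p=\frac{29}{57}\approx0.5088$, which also equals $1-P(\text{no green ball})$.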
2,519,620
<blockquote> <p><strong>Question :</strong> Three balls are to be randomly selected without replacement from an urn containing $20$ balls numbered $1$ through $20$. If we bet that at least one of the balls that are drawn has a number as large as or larger than $17$, what is the probability that we win the bet?</p> </blockquote> <p>I am sorry that I have neither an approach nor a solution. I am not getting the question at all. I am completely new to the random variable concept, thus will need some help initially. Please help me out.</p> <p>P.S - This is not a homework question.</p>
NewBee
473,224
<p>First, find the probability of not winning, then take 1 minus that.</p> <p>Total no. of outcomes (A) : 20C1 x 19C1 x 18C1</p> <p>Favourable outcomes for not winning (B) : 16C1 x 15C1 x 14C1 (selecting balls numbered &lt;17)</p> <p>So the probability of not winning (C) : B/A = 28/57</p> <p>Thus, the required probability : 1 - C = 1 - (28/57) = <strong>29/57</strong>.</p>
3,135,386
<p>Our teacher tells us to convert it this way <span class="math-container">$ 3^x = e^{\ln 3^x}= e^{x\cdot\ln 3}$</span> and then use the rule <span class="math-container">$e^u\cdot u'$</span> but I can't understand where <span class="math-container">$\ln$</span> comes from and how <span class="math-container">$\ln 3^x$</span> = <span class="math-container">$x\cdot \ln 3$</span>.</p>
Sudeep Sapkota
650,719
<p>Any positive number <span class="math-container">$x$</span> can be expressed as a power of <span class="math-container">$e$</span>:</p> <p><span class="math-container">$$x=e^{\ln x}$$</span> </p> <p>This holds because <span class="math-container">$\ln x$</span> is, by definition, the exponent to which <span class="math-container">$e$</span> must be raised to give <span class="math-container">$x$</span>; that is where the <span class="math-container">$\ln$</span> comes from. The step <span class="math-container">$\ln 3^x = x\ln 3$</span> is just the logarithm power rule <span class="math-container">$\ln(a^b)=b\ln a$</span>.</p>
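A finite-difference check that the resulting derivative $\frac{d}{dx}3^x = 3^x\ln 3$ is correct:

```python
import math

def num_deriv(f, x, h=1e-6):
    # symmetric difference quotient
    return (f(x + h) - f(x - h)) / (2 * h)

# compare numeric derivative of 3^x against 3^x * ln(3) at a few points
checks = [(x, num_deriv(lambda t: 3 ** t, x), 3 ** x * math.log(3))
          for x in (-1.0, 0.0, 0.5, 2.0)]
```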
129,788
<blockquote> <p>Let $A$ and $B$ be two events from the same sample space. If $\space P(A)+P(B)=1$, can one say that they are opposite events?</p> </blockquote> <p>My thought:</p> <p>$\space P(A)+P(B)=1$</p> <p>$\space P(A)=1-P(B)$</p> <p>So they are opposite events. But my book says no! It says that this is not necessarily true.</p> <p>Can you explain to me why not?</p> <p>Thanks</p>
Brian M. Scott
12,042
<p>Suppose that you roll an ordinary six-sided die. Let $A$ be the event that you roll $1,2$, or $3$, and let $B$ be the event that you roll an even number ($2,4$, or $6$). $P(A)=P(B)=\frac12$, so $P(A)+P(B)=1$; are $A$ and $B$ opposite events? (By the way, a better word is <em>complementary</em> events.)</p>
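The example is small enough to enumerate exactly:

```python
from fractions import Fraction

outcomes = set(range(1, 7))   # one roll of a fair die
A = {1, 2, 3}                 # roll 1, 2, or 3
B = {2, 4, 6}                 # roll an even number

def P(event):
    return Fraction(len(event & outcomes), len(outcomes))
```

Here $P(A)+P(B)=1$, yet $B$ is not the complement of $A$; the two events even overlap at the roll $2$.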
4,644,186
<p>Let n be a positive integer. Find the value of <span class="math-container">$$\sum_{k=0}^n \frac{{n\choose k }}{k+1}$$</span>. Leave your answer in terms of n where appropriate.</p> <p>Remark. There is an alternative method for computing the sums described here: make use of integration.</p> <p>I can only list out the terms <span class="math-container">$$\sum_{k=0}^n \frac{{n\choose k }}{k+1}=1+\frac{\binom{n}{1}}{2}+\frac{\binom{n}{2}}{3}+...+\frac{1}{n+1}$$</span> I can't think of how to simplify them to get the answer.</p> <p>Also, the question said I can use integration to solve it, but I have no idea how to start. I would greatly appreciate it if someone could show how to solve this.</p>
David H
55,051
<p><strong>Hint:</strong> You can rewrite the <span class="math-container">$\frac{1}{1+k}$</span> factor using the integral</p> <p><span class="math-container">$$\int_0^1 x^k \,dx= \frac{1}{1+k}.$$</span></p> <p>Then pull the summation inside the integral.</p>
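Carrying the hint through gives $\sum_{k=0}^n \binom{n}{k}\frac{1}{k+1} = \int_0^1 (1+x)^n\,dx = \frac{2^{n+1}-1}{n+1}$, which exact arithmetic confirms:

```python
from fractions import Fraction
from math import comb

def lhs(n):
    return sum(Fraction(comb(n, k), k + 1) for k in range(n + 1))

def rhs(n):
    # the integral of (1+x)^n over [0, 1] is (2^(n+1) - 1)/(n + 1)
    return Fraction(2 ** (n + 1) - 1, n + 1)
```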
3,016,386
<p>Hi, I am struggling with this exercise, which may be perceived as simple. So I was trying to write the tangent as follows:</p> <p><span class="math-container">$$\tan(z)=-i\frac{e^{iz}-e^{-iz}}{e^{iz}+e^{-iz}}$$</span> and then <span class="math-container">$$z=a+bi$$</span>, which led me to <span class="math-container">$$ \tan z=-i\frac{\cos a(e^{-b}-e^{b})+i\sin a(e^{-b}+e^{b})}{\cos a(e^{-b}+e^{b})+i\sin a(e^{-b}-e^{b})}$$</span>, so I guess here I can multiply by the conjugate of the denominator, but this is really a complicated computation for an exam... help appreciated</p>
JosΓ© Carlos Santos
446,262
<p>It is much easier to deal with this problem using the fact that<span class="math-container">$$1+\tan^2(z)=\dfrac1{\cos^2(z)}.$$</span>So, which numbers can be written as <span class="math-container">$\dfrac1{\cos^2(z)}$</span>? Answer: all, except <span class="math-container">$0$</span>. It follows from this (and from the fact that <span class="math-container">$\tan$</span> is an odd function), that the range of <span class="math-container">$\tan$</span> is <span class="math-container">$\mathbb{C}\setminus\{\pm i\}$</span>.</p>
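The claim can also be checked with the explicit inverse: solving $\tan z = w$ via $e^{2iz}=\frac{1+iw}{1-iw}$ gives $z=\frac{1}{2i}\log\frac{1+iw}{1-iw}$, which works for every $w$ except $\pm i$ (where the fraction is $0$ or a pole). The sample points below are arbitrary:

```python
import cmath

def arctan(w):
    # tan(z) = w  <=>  e^{2iz} = (1 + i w)/(1 - i w); undefined exactly at w = +-i
    return cmath.log((1 + 1j * w) / (1 - 1j * w)) / 2j

test_points = [0.3 + 0.7j, -2.0 + 0j, 1.5 - 2.5j, 5.0 + 0j]
residuals = [abs(cmath.tan(arctan(w)) - w) for w in test_points]
```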
3,939,620
<p>Given a rational function of the form <span class="math-container">$R(z):=\frac{P(z)}{Q(z)}$</span> such that <span class="math-container">$Q(z)$</span> has no real zeros and <span class="math-container">$\deg(Q) \geq \deg(P) + 2$</span>, the integral can be expressed as</p> <p><span class="math-container">$$\int_{-\infty}^{+\infty} R(z)dz=2\pi i\sum_{a \in \Bbb H}{\mathrm{Res}\left ( R;a \right )}$$</span> where <span class="math-container">$\Bbb H:=\{z \in \Bbb C: Im(z)&gt;0\}$</span>.</p> <p>Now, to prove this statement, we first have to show that the following limits exist: <span class="math-container">$$\int_{-\infty}^{+\infty} R(z)dz=\lim_{r \to -\infty}\int_{r}^{0} R(z)dz+\lim_{r \to +\infty}\int_{0}^{r} R(z)dz$$</span> The argument used there is that <span class="math-container">$\exists M &gt;0$</span> such that <span class="math-container">$ \vert R(x)\vert \leq \frac{M}{x^2}$</span> for real <span class="math-container">$x$</span>. But I don't see why such an <span class="math-container">$M$</span> exists. Many thanks for some help!</p>
Community
-1
<p>We must have</p> <p><span class="math-container">$$\sin x(\sin x+1)\ge0.$$</span></p> <p>As <span class="math-container">$$\sin x+1\ge0$$</span> is always true, we seem to be left with</p> <p><span class="math-container">$$\sin x\ge 0.$$</span></p> <p>But there is a trap*: when <span class="math-container">$$\sin x+1=0$$</span> the sign of <span class="math-container">$\sin x$</span> is irrelevant and finally <span class="math-container">$$x\in[0,\pi]\cup\left\{\frac{3\pi}2,2\pi\right\}.$$</span></p> <p>*Credit to Buraian.</p> <hr /> <p>After the fact, a systematic solution is</p> <p><span class="math-container">$$\sin x&gt;0\lor\sin x=0\lor\sin x=-1,$$</span></p> <p><span class="math-container">$$x\in(0,\pi)\cup\{0,\pi,2\pi\}\cup\left\{\frac\pi2,\frac{3\pi}2\right\}.$$</span></p>
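A numerical cross-check of this solution set on $[0,2\pi]$ (added for illustration): the product is nonnegative on $[0,\pi]$, vanishes at the isolated point $3\pi/2$, and is negative elsewhere on $(\pi,2\pi)$.

```python
import math

def lhs(x):
    return math.sin(x) * (math.sin(x) + 1)

# Nonnegative on the whole of [0, pi].
for k in range(1001):
    assert lhs(math.pi * k / 1000) >= -1e-12

# The trap: sin x = -1 makes the product zero at the isolated point 3*pi/2.
assert abs(lhs(3 * math.pi / 2)) < 1e-12

# Strictly negative at sample points of (pi, 2*pi) away from 3*pi/2.
for x in (3.5, 4.0, 5.0, 6.0):
    assert lhs(x) < 0
```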
3,995,119
<p>I'm having difficulties calculating the following integral:</p> <p><span class="math-container">$$\int_z^\infty\mu\mathrm e^{-\mu y}(\mathrm e^{-\lambda z}-\mathrm e^{-\lambda y})\ \mathrm{d}y$$</span></p> <p>I'm going to use it to find the joint distribution of two random variables. I've tried the substitution <span class="math-container">$u=e^{\lambda y}$</span>, but I couldn't get much further. I appreciate any help. Thank you in advance, and sorry for my English.</p>
Kavi Rama Murthy
142,385
<p>Split it into two terms. <span class="math-container">$e^{-\lambda z}$</span> is constant in the first integral. So the answer is <span class="math-container">$e^{-\lambda z}e^{-\mu z}-\frac {\mu} {\lambda+\mu}e^{-(\lambda+\mu)z}$</span></p>
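A quick numerical check of this closed form (added for illustration; the values $\lambda=1$, $\mu=2$, $z=0.5$ are arbitrary samples):

```python
import math

lam, mu, z = 1.0, 2.0, 0.5

def integrand(y):
    return mu * math.exp(-mu * y) * (math.exp(-lam * z) - math.exp(-lam * y))

# Crude trapezoidal rule on [z, z + 40]; the integrand decays exponentially,
# so the truncated tail is negligible.
n, b = 200_000, z + 40.0
h = (b - z) / n
numeric = (integrand(z) + integrand(b)) / 2 + sum(integrand(z + i * h) for i in range(1, n))
numeric *= h

closed_form = math.exp(-lam * z) * math.exp(-mu * z) - mu / (lam + mu) * math.exp(-(lam + mu) * z)

assert abs(numeric - closed_form) < 1e-6
```

For these sample values the common factor is $e^{-(\lambda+\mu)z}\frac{\lambda}{\lambda+\mu} = e^{-1.5}/3$.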
21,156
<p>The title says it all, is there a way to get in contact which users who consistently post answers without using <span class="math-container">$\LaTeX$</span>? I've come across a user who does that and (as I had some free time) edited about 10-15 of his posts, some of his answers were barely readable; on each post I left a comment including a link to the MathJax tutorial. He still keeps posting answers without using <span class="math-container">$\LaTeX$</span>, so is there anything else besides editing and commenting one can do?</p>
Ron Gordon
53,268
<p>It works like this. People can post solutions to problems all they want without MathJax/LaTeX, which is the lingua franca here. I do not have to bother reading their solution. If the OP believes it unfair that his/her solution does not get the requisite attention, then they can continue to post as they always have with the same results. </p> <p>I had a situation today in which someone posted a poorly-formatted solution, which had fatal problems unrelated to the poor formatting. The poster was incredulous when I told him his derivation was too hard to follow for me to provide any useful feedback. Nothing I could do for that guy (who indeed deleted his answer eventually.) </p>
1,250,703
<p>I was given the following task: define a combinatorial problem to the following equation, and say how each side of the equation solves the given problem. The equation is: $$ n\binom{n}{r} -r\binom{n}{r}=(r+1)\binom{n}{r+1} $$ I tried to think of a problem that both sides solve, but couldn't think of any... I don't want the answer but some kind of a hint to the combinatorial problem. Thanks in advance! </p>
ajotatxe
132,456
<p>How about this?</p> <blockquote> <p>From a set of $n$ elements, choose one of them, and then choose $r$ more.</p> </blockquote> <p>(The first selected object must be somehow different from the others.)</p>
1,250,703
<p>I was given the following task: define a combinatorial problem to the following equation, and say how each side of the equation solves the given problem. The equation is: $$ n\binom{n}{r} -r\binom{n}{r}=(r+1)\binom{n}{r+1} $$ I tried to think of a problem that both sides solve, but couldn't think of any... I don't want the answer but some kind of a hint to the combinatorial problem. Thanks in advance! </p>
TravisJ
212,738
<p>If it helps, I usually think about these types of things as selecting a set of people from a group and then giving some of them "special titles." Also, notice that $n=\binom{n}{1}$. For example, if I had $\binom{n}{1}\binom{n}{r}$ I might say I'm going to select $r$ people to form a committee (from $n$ candidates), then select a president of the committee from the original $n$. The president may or may not already be in the committee, so your final committee might have $r$ or $r+1$ people in it.</p> <p>EDIT: Use thinking along the lines of what I said before, but re-write the LHS of that equation as $\binom{n}{r}(n-r)=\binom{n}{r}\binom{n-r}{1}$. Think about the committee selection and choosing a president. How big is the committee you form (with the president)? And do you choose the president first, or the "regular" committee members first?</p>
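The identity itself is easy to spot-check numerically before hunting for the combinatorial story (added for illustration; `math.comb` requires Python 3.8+):

```python
from math import comb

# n*C(n,r) - r*C(n,r) = (r+1)*C(n,r+1), equivalently (n-r)*C(n,r) = (r+1)*C(n,r+1)
for n in range(1, 40):
    for r in range(n):
        assert n * comb(n, r) - r * comb(n, r) == (r + 1) * comb(n, r + 1)
```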
1,154,763
<p>I'm given this function:</p> <p>$$ u(x,y) = \begin{cases} \dfrac{(x^3 - 3xy^2)}{(x^2 + y^2)}\quad&amp; \text{if}\quad (x,y)\neq(0,0)\\ 0\quad&amp; \text{if} \quad (x,y)=(0,0). \end{cases} $$</p> <p>It seems like L'Hôpital's rule has been used, but I'm confused because</p> <ol> <li>there is no limit here; it's just straight up $x$ and $y$ equal to zero.</li> <li>if I have to invoke a limit here to use L'Hôpital's rule, there are two variables $x$ and $y$. How do I take the limit in both of them?</li> </ol>
Fernando
186,454
<p>The formula $(x^3-3xy^2)/(x^2+y^2)$ is undefined at $(0,0)$, since it would be a division by $0$; that is exactly why the function is defined piecewise, with the value $0$ assigned there separately. You haven't given us enough information. What is the point of this definition? What is the problem you have to solve (continuity at the origin, differentiability, ...)?</p>
1,720,053
<p>The PDF describes the probability of a random variable to take on a given value:</p> <p>$f(x)=P(X=x)$</p> <p>My question is whether this value can become greater than $1$?</p> <p>Quote from wikipedia:</p> <p>"Unlike a probability, a probability density function can take on values greater than one; for example, the uniform distribution on the interval $[0, \frac12]$ has probability density $f(x) = 2$ for $0 \leq x \leq \frac12$ and $f(x) = 0$ elsewhere."</p> <p>This wasn't clear to me, unfortunately. The question has been asked/answered here before, yet used the same example. Would anyone be able to explain it in a simple manner (using a real-life example, etc)?</p> <p>Original question:</p> <p>"$X$ is a continuous random variable with probability density function $f$. Answer with either True or False.</p> <ul> <li>$f(x)$ can never exceed $1$."</li> </ul> <p>Thank you!</p> <p>EDIT: Resolved.</p>
H. Potter
289,192
<p>Discrete and continuous random variables are not defined the same way. We are more used to discrete random variables (example: for a fair coin, $X=-1$ if the coin shows tails and $+1$ if it shows heads; then $f(-1)=f(1)=\frac12$ and $f(x)=0$ elsewhere). As long as the probabilities of the outcomes of a discrete random variable sum up to 1, everything is fine, so each of them is at most 1.</p> <p>For a continuous random variable, the necessary condition is that $\int_{\mathbb{R}} f(x)dx=1$. Since an integral behaves differently than a sum, it's possible that $f(x)&gt;1$ on a small interval (but the length of this interval cannot exceed 1).</p> <p>The definition of $\mathbb{P}(X=x)$ is not $\mathbb{P}(X=x)=f(x)$ but rather $\mathbb{P}(X=x)=\mathbb{P}(X\leq x)-\mathbb{P}(X&lt;x)=F(x)-F(x^-)$. For a discrete random variable, $F(x^-)\not = F(x)$ at the atoms, so $\mathbb{P}(X=x)&gt;0$. However, in the case of a continuous random variable, $F(x^-)=F(x)$ (by the definition of continuity), so $\mathbb{P}(X=x)=0$. This can be seen as the fact that the probability of choosing exactly $\frac12$ when picking a number between 0 and 1 is zero.</p> <p>In summary, for continuous random variables $\mathbb{P}(X=x)\not= f(x)$.</p>
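To make the uniform-on-$[0,\frac12]$ example concrete (added for illustration): the density is $2$, larger than $1$, yet every probability it produces still lies in $[0,1]$.

```python
def f(x):                      # density of Uniform[0, 1/2]
    return 2.0 if 0.0 <= x <= 0.5 else 0.0

# Riemann sum of f over [-1, 1]: total probability is 1 even though f = 2 > 1.
n = 200_000
h = 2.0 / n
total = sum(f(-1.0 + i * h) for i in range(n)) * h
assert abs(total - 1.0) < 1e-3

# P(X <= 1/4) = integral of f over [0, 1/4] = 1/2: a genuine probability, at most 1.
p = sum(f(i * 0.25 / 50_000) for i in range(50_000)) * (0.25 / 50_000)
assert abs(p - 0.5) < 1e-3
```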
979,267
<p>Let $a_n$ be the $n$th term of the sequence $1, 2, 2, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 5, \ldots$ constructed by including the integer $k$ exactly $k$ times. Show that $a_n = \lfloor \frac12 + \sqrt{2n+\frac14} \rfloor$.</p> <p>Let $\lvert r\rvert &lt; 1$ be a real number. Evaluate $\sum_{i=0}^\infty ir^i. $</p>
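A quick numerical experiment with both parts (added for illustration; note that the floor formula appears to require indexing the sequence from $n=0$, and $r/(1-r)^2$ is the standard closed form for the second sum):

```python
import math

# The sequence 1,2,2,3,3,3,...: integer k repeated k times.
seq = [k for k in range(1, 60) for _ in range(k)]

# The stated formula matches when the sequence is indexed from n = 0.
for n, a in enumerate(seq):
    assert a == math.floor(0.5 + math.sqrt(2 * n + 0.25)), n

# Second part: sum_{i>=0} i r^i = r/(1-r)^2 for |r| < 1.
r = 0.3
partial = sum(i * r**i for i in range(200))
assert abs(partial - r / (1 - r) ** 2) < 1e-12
```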
Sawarnik
93,616
<p><strong>It is.</strong></p> <p>We are given any triangle with heights $h_a, h_b, h_c$, and we assume its sides to be $a,b,c$. To prove it is the only such triangle, we note that by the area formulae: $$a=\frac{2\triangle}{h_a}, b=\frac{2\triangle}{h_b}, c=\frac{2\triangle}{h_c}$$</p> <p>We now use the fact that if a triangle with sides $a,b,c$ has area $K$, then the triangle with sides $ma,mb,mc$ has area $Km^2$ (do you see why?).</p> <p>We introduce the notation that $[a,b,c]$ means the area of the triangle with sides $a,b,c$. So the previous fact could be written as $[ma,mb,mc]=m^2[a,b,c]$. Thus, if the triangle with sides $a,b,c$ has area $\triangle$, then:</p> <p>$$ \triangle=[a,b,c] \\ = \left[\frac{2\triangle}{h_a}, \frac{2\triangle}{h_b}, \frac{2\triangle}{h_c}\right] \\ =4\triangle ^2 \left[\frac1{h_a},\frac1{h_b},\frac1{h_c}\right] $$</p> <p>Thus $\triangle=\triangle^2 \times4[\frac1{h_a},\frac1{h_b},\frac1{h_c}]$, or: $$\triangle =\frac1{4[\frac1{h_a},\frac1{h_b},\frac1{h_c}]}$$</p> <p>Thus, we have proved that the area can be deduced from the heights alone, independently of the sides. Now that we have the area and the heights, the sides follow immediately from the area formulae. This proves that the triangle is unique, by the SSS criterion.</p> <p><em>Additionally, we now have a method to get the sides of the triangle, and the angles, etc., follow!</em> </p>
2,255,617
<p>I am trying to learn how to do proofs by contradiction. The problem is:</p> <p>"Prove by contradiction that there are no positive real roots of $x^6 + 2x^3 +4x + 5$."</p> <p>I understand that I should now assume that there <em>is</em> a positive real root of this polynomial, and then derive a contradiction. I just don't know where to start.</p>
Martin Argerami
22,857
<p>If we add, to the question, the reasonable requirement that the homomorphism is nonzero, the answer is still no. </p> <p>For instance consider $R=M_2 (\mathbb R) $, $R'=M_3 (\mathbb R) $. There are many nontrivial homomorphisms $R\to R'$: for any invertible $B $, $$A\longmapsto B\,\begin{bmatrix}A&amp;0\\0&amp;0\end{bmatrix}\,B^{-1}, $$ is a ring homomorphism. But there are no nonzero homomorphisms $R'\to R $.</p>
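A small numeric illustration of the map above (added for illustration, taking $B=I$ for simplicity): embedding a $2\times 2$ matrix as the top-left block of a $3\times 3$ matrix preserves addition and multiplication.

```python
def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)] for i in range(n)]

def phi(A):
    """Embed a 2x2 matrix as the top-left block of a 3x3 matrix (B = identity)."""
    return [[A[0][0], A[0][1], 0],
            [A[1][0], A[1][1], 0],
            [0,       0,       0]]

A = [[1, 2], [3, 4]]
B = [[0, 1], [-1, 5]]

# phi is additive and multiplicative (a ring homomorphism, though not unital).
assert phi(matmul(A, B)) == matmul(phi(A), phi(B))
assert phi([[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]) == \
       [[phi(A)[i][j] + phi(B)[i][j] for j in range(3)] for i in range(3)]
```

Note that $\varphi$ does not send the identity of $M_2$ to the identity of $M_3$, which is why it only works in the non-unital setting.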
1,203,269
<p>I am trying to compute the hitting time of a linear Brownian motion on a two-sided boundary. More specifically, let $W_t$ be a (one-dimensional) Wiener process. Let $T = \inf \{t: |W_t| = a \}$ for some $ a &gt; 0$. I want to find $\mathbb{P}\{ T &gt; t\}$. </p> <p>I know that the distribution of the hitting time of a positive level, $\inf \, \{t: W_t = b\}$ for $ b &gt; 0$, can be computed quite easily, but I am not sure how to deal with the two-sided hitting time, i.e. with the absolute value. I am thinking of the minimum of the hitting times of the levels $a$ and $-a$, but I can't reach a promising conclusion.</p>
Math-fun
195,344
<p>I will give this a try.</p> <p>For simplicity let $T_a=\inf \{t: |W_t| = a \}$</p> <p>\begin{align} Pr(|W(t)|&gt;a)&amp;=P(|W(t)|&gt;a|T_a&lt;t)Pr(T_a&lt;t)+P(|W(t)|&gt;a|T_a&gt;t)Pr(T_a&gt;t)\\ \end{align}</p> <p>$P(|W(t)|&gt;a|T_a&gt;t)=0$ since the time that $|W(t)|$ hits $a$ for the first time has not arrived, hence $|W(t)|$ can not be bigger than $a$. </p> <p>Also note that $P(|W(t)|&gt;a|T_a&lt;t)=\frac12+Pr(W(t)&lt;-2a)$ since we know that $|W(t)|$ has hit $a$ before $t$ (we have $T_a&lt;t$). Therefore the event $\{|W(t)|&gt;a|T_a&lt;t\}$ is equivalent to $\{|a+W(t)|&gt;a\}$.</p> <p>Thus $$Pr(T_a&lt;t)=\frac{2P(W(t)&gt;a)}{\frac12+P(W(t)&lt;-2a)}.$$</p>
1,285,443
<blockquote> <p>Let us denote solution to the equation</p> <p>$$(x+a)^{x+a}=x^{x+2a}$$</p> <p>with $X_a$.</p> <p>($a$ is a non-zero real number)</p> <p>Prove that:</p> <p>$$\lim_ {a \to 0} X_a = e$$</p> </blockquote> <p>This is something that I noticed while making numerical experiments for another problem. The statement looks interesting, I couldn't find anything close to it on the internet. I don't have the idea how to prove it, but numerical methods confirm the statement.</p>
zoli
203,663
<p>Taking the logarithm of both sides of $$(x+a)^{x+a}=x^{x+2a}$$ we get </p> <p>$$(x+a)\ln(x+a)=x\ln(x)+2a\ln(x)$$</p> <p>or $$\frac{(x+a)\ln(x+a)-x\ln(x)}{a}=2\ln(x). \tag 1$$ The left hand side tends to $\frac{d(x\ln(x))}{dx}=\ln(x)+1$ as $a$ tends to zero. The right hand side does not depend on $a$. That is,</p> <p>$$\ln(x)+1=2\ln(x).$$</p> <p>As a result, whatever the solution of the original equation is, it has to tend to $e$ as $a$ tends to zero.</p> <p><strong>EDITED</strong></p> <p>To be precise, let's start over again from $(1)$. First of all, note that $x$ is a solution of $(1)$ and, as a result, it depends on $a$; write it as $x_a$. Now subtract $\ln(x_a)+1$ from both sides of $(1)$. Having taken the absolute value of both sides, we have:</p> <p>$$\left|\frac{(x_a+a)\ln(x_a+a)-x_a\ln(x_a)}{a}-(\ln(x_a)+1)\right|=|\ln(x_a)-1|.$$</p> <p>For any $\delta&gt;0$ and $\epsilon&gt;0$ there exists an $a(\delta,\epsilon)$ such that $\epsilon&gt;a(\delta,\epsilon)&gt;0$ and $$|\ln(x_{a(\delta,\epsilon)})-1|&lt;\delta.\tag 2$$</p> <p>Now, assume that $$\lim\limits_{a\rightarrow 0}x_{a}=e+\xi,\tag 3$$ for some $\xi\neq 0$. Since $\ln(x)$ is continuous at $e$, $(2)$ and $(3)$ contradict each other.</p>
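A numerical experiment supporting this conclusion (added for illustration; the bracket $[2.6, 3]$ is chosen by hand and is only valid for small $a &gt; 0$):

```python
import math

def g(x, a):
    # log of both sides of (x+a)^(x+a) = x^(x+2a), rearranged to g = 0
    return (x + a) * math.log(x + a) - x * math.log(x) - 2 * a * math.log(x)

def root(a, lo=2.6, hi=3.0):
    for _ in range(80):               # plain bisection
        mid = (lo + hi) / 2
        if g(lo, a) * g(mid, a) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# The solution X_a approaches e as a -> 0.
assert abs(root(0.01) - math.e) < 0.1
assert abs(root(0.001) - math.e) < 0.02
assert abs(root(0.0001) - math.e) < 0.002
```

The observed gap behaves roughly like $a/2$, consistent with the expansion $g(x,a)\approx a(1-\ln x)+\frac{a^2}{2x}$.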
3,433,492
<p>I know that a function can admit multiple series representations (according to Eugène Catalan), but I wonder if there is a proof of the fact that each analytic function has only one Taylor series representation. I know that Taylor series are defined by derivatives of increasing order, and a function has one and only one derivative of each order. So can this fact be employed to prove that each function has only one Taylor series representation?</p>
11101
723,725
<p>You can prove that a power series is differentiable on the interior of its interval of convergence, with the derivative obtained by differentiating term by term. So you can conclude that the coefficient of <span class="math-container">$x^n$</span> must be <span class="math-container">$\frac{f^{(n)}(0)}{n!}$</span>. The coefficients are therefore determined uniquely, and so the Taylor series is unique. </p>
69,508
<p>I was just wondering, when I call the <code>CopulaDistribution</code> function in Mathematica, am I calling its cumulative function or its density function?</p> <p>I have looked up the help and am still a little bit unsure.</p> <p>EDIT: In particular, what does it mean when I take a RandomVariate from this CopulaDistribution? Surely I would have to sample from either the CDF or PDF.</p>
wolfies
898
<p>The question is of some interest because it captures rather nicely the difference between:</p> <p>A. <strong>mathematical statistics</strong> ... where we work with characterisations of distributions, such as starting with a pdf, or cdf, or cf ... e.g. Let $X$ be a random variable with pdf $f(x)$:</p> <p>$$f(x) = 1 -|x| \quad \text{for}\quad x\in(-1,1)$$</p> <p>and</p> <p>B. <strong><em>Mathematica</em>'s implementation of distributions</strong> ... which defines black box names to distributions that return nothing themselves ... but which we can then ask for a PDF or CDF or other characterisation from. </p> <p>In this regard:</p> <ul> <li><p>The poster asks whether <code>CopulaDistribution</code> returns the pdf or the cdf? <em>And the answer is neither.</em></p></li> <li><p>What does it return? <em>Generally nothing.</em> </p></li> </ul> <p>Rather, <code>CopulaDistribution</code> .... like <code>TransformedDistribution</code> or <code>MarginalDistribution</code>, ... are <em>Mathematica</em> functions that don't actually seem to do anything. Do they involve any computation? No. Do they take up any processing cycles? No. They just return exactly what you enter e.g.</p> <pre><code>CopulaDistribution[{"FGM", .2}, {NormalDistribution[-1, 2], NormalDistribution[1, 1/2]}] </code></pre> <p>returns instantly:</p> <blockquote> <p>CopulaDistribution[{"FGM", .2}, {NormalDistribution[-1, 2], NormalDistribution[1, 1/2]}]</p> </blockquote> <p>Similarly:</p> <pre><code>TransformedDistribution[x^4, x \[Distributed] NormalDistribution[0, 1] </code></pre> <p>returns instantly the same input we entered ... </p> <blockquote> <p>TransformedDistribution[x^4, x [Distributed] NormalDistribution[0, 1]</p> </blockquote> <p>It doesn't even try to compute anything.</p> <p>The only exception to this, that I am aware of, is where the solution is written up in advance ... much like a textbook appendix. 
So, for example:</p> <pre><code>TransformedDistribution[x^2, x \[Distributed] NormalDistribution[0, 1]] </code></pre> <blockquote> <p>ChiSquareDistribution[1]</p> </blockquote> <p>... but no calculation or derivation is involved in this. It is just an appendix lookup (which is actually rather un-<em>Mathematica</em>-like, in my view). </p> <p><strong>RE comments:</strong> I don't know what is meant by a 'distribution is a distribution', and I think it is inherently wrong to suggest that distributions are black boxes, because it is not the way we tend to work or think about distributions in mathematical statistics. The starting point in mathematical statistics is not to define a black box, but to define a pdf (or a cdf or a cf). In effect, this is what <em>Mathematica</em> ultimately has to do anyway when we define our own custom density ... except that you have to manually create this artificial black box or placeholder, using:</p> <pre><code>dist = ProbabilityDistribution[1 - Abs[x], {x, -1, 1}] </code></pre> <blockquote> <p>ProbabilityDistribution[1 - Abs[x], {x, -1, 1}]</p> </blockquote> <p>or <code>CopulaDistribution</code> or <code>MarginalDistribution</code> or ...</p> <p>Once the black box is created, you can then 'operate' on it using <code>PDF</code> or <code>CDF</code> etc. The same goes for <code>CopulaDistribution</code> ... you have created a black box, and if you want something calculated, you will have to apply <code>PDF</code> or <code>CDF</code> etc to the latter.</p>
120,687
<p>Consider the following code</p> <pre><code>styles = {Red, Blue, {Red, Dashed}, {Blue, Dashed}} pt1 = Plot[{x^2, 2 x^2, 1/x^2, 2/x^2}, {x, 0, 3}, Frame -&gt; True, PlotStyle -&gt; styles, PlotLegends -&gt; {"1", "2", "1", "2"}] </code></pre> <p>I would like the two red lines to carry the same label "1" and the two blue lines the same label "2". That is, in the legend I would like a red line and a red-dashed line below each other and then one label right of it. Similarly for the blue lines. Does anybody know how to do this?</p>
Mr.Wizard
121
<p>I got around to actually evaluating your code and I realized that <code>g</code> is <em>not</em> remembering its values; <code>DownValues[g]</code> only has a length of three. The "solution" is to restrict the function to numeric values, per <a href="https://mathematica.stackexchange.com/q/9971/121">The difference between &quot;SymbolicProcessing&quot; -&gt; 0 and restricting the function definition to numeric values only</a>, but doing that actually makes the integration much slower; the memoization simply has too large an overhead:</p> <pre><code>SetAttributes[numArgsQ, HoldFirst] numArgsQ[_[___?NumericQ]] := True mem : h[x_, y_, z_]?numArgsQ := mem = Exp[Sin[x]] + Cos[y + z]; NIntegrate[{h[x, y, z], Sqrt[h[x, y, z]] + x}, {x, 0, 10}, {y, 0, 10}, {z, 0, 10}, Method -&gt; {"LocalAdaptive", "SymbolicProcessing" -&gt; 0}, PrecisionGoal -&gt; 7] // Timing </code></pre> <blockquote> <pre><code>{9.12606, {1429.54, 6081.95 + 59.0571 I}} </code></pre> </blockquote> <p>(My timing for <code>f</code> or <code>g</code> is only ~ 4.83 seconds.)</p> <p>Extensive memoization has taken place as indicated by:</p> <pre><code>DownValues[h] // Length </code></pre> <blockquote> <pre><code>1108835 </code></pre> </blockquote> <p>For my use of <code>mem</code> and <code>numArgsQ</code> please see:</p> <ul> <li><a href="https://mathematica.stackexchange.com/q/52057/121">Quick way to use conditioned patterns when defining multi-argument function?</a></li> <li><a href="https://mathematica.stackexchange.com/questions/2639/what-does-the-construct-fx-fx-mean/2676#2676">What does the construct f[x_] := f[x] = ... mean?</a></li> </ul>
761,823
<blockquote> <p>Suppose that $G$ is a finite abelian group that does not contain a subgroup isomorphic to $\mathbb Z_p\oplus\mathbb Z_p$ for any prime $p$. Prove that $G$ is cyclic.</p> </blockquote> <p><strong>Attempt</strong>: Let $G$ be a finite abelian group and let $H$ be any subgroup of $G$.</p> <p>It's given that $H \not\simeq \mathbb{Z}_p \oplus \mathbb{Z}_p$, which can be due to a variety of reasons: $|H|$ may not be $p^2$, or $H$ may not contain any element of order $p$, etc.</p> <p>Hence, the process of finding an element $g$ such that $|g| = |G|$ seems difficult, so probably the best bet would be to first assume that $G$ is not cyclic.</p> <p>Then $O(g) \neq |G|~ \forall ~ g \in G$. </p> <p>Also, $G \not\simeq Z_{|G|}$ since $G$ is not cyclic.</p> <p>How do I arrive at a contradiction from here, namely that if $G$ is not cyclic, it must contain a subgroup $H \simeq \mathbb Z_p \oplus \mathbb Z_p$?</p> <p>Please note that this question occurs in Gallian before normal and factor groups are introduced.</p> <p>Thank you for your help.</p>
Kevin Arlin
31,228
<p>Factor $|G|$ into primes as $\prod_{i=1}^n p_i^{k_i}$ where the $p_i$ are distinct. Proceed by induction on $n$: the base case is the fact that an abelian $p$-group containing no $\mathbb{Z}_p\oplus \mathbb{Z}_p$ is cyclic. Then for the induction step, take a generator $g_1$ of a subgroup of order $p_1^{k_1}$ and a generator $g_2$ of a subgroup of order $|G|/p_1^{k_1}$. (By induction such a subgroup is cyclic, since if it contained $\mathbb{Z}_p\oplus \mathbb{Z}_p$ so would $G$.) </p> <p>I claim $g_1+g_2$ is a generator of $G$. For suppose $g_1+g_2$ has order $m$. Then $mg_1=-mg_2$. Now $mg_1$ has order $p_1^n$ for some $0\leq n\leq k_1$, so $p_1^n mg_2=0$ and $|G|/p_1^{k_1}$ divides $p_1^nm$. But $|G|/{p_1^{k_1}}$ is relatively prime to $p_1$, so $|G|/{p_1^{k_1}}$ divides $m$, and $mg_2=0$. Thus $mg_1=0$ as well, so $p_1^{k_1}$ also divides $m$; hence $|G|$ divides $m$ and $g_1+g_2$ is a generator.</p>
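A toy computation illustrating the key step (added for illustration): in the additive group $\mathbb Z_4\oplus\mathbb Z_9$ (cyclic of order 36, so it has no $\mathbb Z_p\oplus\mathbb Z_p$ subgroup), the generators of the two factors have coprime orders, and their sum generates the whole group.

```python
def order(g, mods):
    """Additive order of g in Z_{mods[0]} x Z_{mods[1]} x ..."""
    x, n = g, 1
    while any(x):
        x = tuple((xi + gi) % m for xi, gi, m in zip(x, g, mods))
        n += 1
    return n

mods = (4, 9)                 # |G| = 36
g1, g2 = (1, 0), (0, 1)       # orders 4 and 9, coprime
assert order(g1, mods) == 4
assert order(g2, mods) == 9
assert order((1, 1), mods) == 36   # g1 + g2 generates G
```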
62,790
<p>Among several possible definitions of ordered pairs - see below - I find Kuratowski's the least compelling: its membership graph (2) has one node more than necessary (compared to (1)), it is not as "symmetric" as possible (compared to (3) and (4)), and it is not as "intuitive" as (4) - which captures the intuition, that an ordered pair has a first and a second element.</p> <p><img src="https://i.stack.imgur.com/RVRjq.png" alt="alt text"><a href="http://epublius.de/mathoverflow/orderedpairs.png" rel="noreferrer">(source)</a></p> <p><sup><em>Membership graphs for possible definitions of ordered pairs (≙ top node, arrow heads omitted)</em></sup></p> <pre><code>1: (x,y) := { x , { x , y } } 2: (x,y) := { { x } , { x , y } } (Kuratowski's definition) 3: (x,y) := { { x } , { { x } , y } } 4: (x,y) := { { x , 0 } , { 1 , y } } (Hausdorff's definition) </code></pre> <p>So my question is: </p> <blockquote> <p>Are there good reasons to choose Kuratowski's definition (or did Kuratowski himself give any) instead of one of the more "elegant" - sparing, symmetric, or intuitive - alternatives?</p> </blockquote>
Joel David Hamkins
1,946
<p>Of course there are many pairing functions, and they all have the crucial property that from the pair $(x,y)$, one can reconstruct both $x$ and $y$. And although your question has been answered, let me point out that all four of the ordered pair definitions that you consider have the property that the von Neumann rank of the pair $(x,y)$ is strictly greater than the ranks of $x$ and $y$. Thus, for your functions, if $x$ and $y$ are in $V_\alpha$, then the pair $(x,y)$ can only be guaranteed to appear by $V_{\alpha+2}$.</p> <p>But actually, this rank-increasing feature is sometimes annoying, and there occasionally arises in set-theoretic argument the need or desire for a <em>flat</em> pairing function, a pairing function that does not increase rank in this way. Specifically, what is desired is a pairing function $\langle x,y\rangle$ such that whenever $x,y\in V_\alpha$ for infinite $\alpha$, then also $\langle x,y\rangle\in V_\alpha$, for the same rank $\alpha$. (Note that one cannot achieve this for finite $\alpha\gt 1$, since there are too many pairs to fit.) With such a flat pairing function, every infinite $V_\alpha$ is closed under pairing, and this is sometimes important or at least convenient in inductive arguments, or in arguments about $\alpha$-strong cardinals and in similar situations, where one wants to consider only sets of a given rank, but one also wants to use pairs.</p> <p>It is a fun exercise to prove that flat pairing functions exist, and I encourage you to try it on your own, before reading what I write below. But the definitions are all somewhat more involved than the comparatively simple definitions you provide, since they achieve the flatness property. As Hurkyl says, we ultimately care only about the existence of the function with the desired properties, rather than its exact nature.</p> <p>Here is one way to construct a flat pairing function. 
Define $\langle x,y\rangle=x^0\cup y^1$, where $x^0$ is obtained by replacing every natural number $n$ in any element of $x$ by $n+1$ and adding the object $0$, whereas $y^1$ just replaces $n$ inside elements of $y$ with $n+1$. Thus, we can tell from any element of $x^0\cup y^1$ whether it came from $x$ or $y$, by looking to see if it contains $0$ or not, and we can reconstruct the unmodified set by removing $0$ and replacing all $n+1$ with $n$ again, and so it is a pairing function. And one can check that $\langle x,y\rangle$ has the same rank as the maximum rank of $x$ and $y$, if this max is infinite, and so this is a flat pairing function, as desired.</p>
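A toy Python model of the crucial property, namely that $x$ and $y$ can be reconstructed from the pair (added for illustration, using Kuratowski's definition with frozensets; modelling the flat pairing over hereditarily finite sets would take more machinery):

```python
def pair(x, y):
    """Kuratowski pair {{x}, {x, y}}."""
    return frozenset({frozenset({x}), frozenset({x, y})})

def unpair(p):
    """Recover (x, y) from a Kuratowski pair."""
    parts = sorted(p, key=len)
    if len(parts) == 1:                 # x == y: p collapses to {{x}}
        (x,) = parts[0]
        return x, x
    (x,) = parts[0]                     # the singleton holds x
    (y,) = parts[1] - parts[0]          # remove x to reveal y
    return x, y

assert unpair(pair(1, 2)) == (1, 2)
assert unpair(pair(2, 1)) == (2, 1)
assert unpair(pair(5, 5)) == (5, 5)
assert pair(1, 2) != pair(2, 1)         # order matters
```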
3,772,923
<p>My child's teacher raised a question in class for students who are interested in proving it. The teacher says that the volume of a cube is the greatest among rectangular boxes with the same total edge length, and asks his students to prove this proposition.</p> <p>I considered the relationship between the side length of a cube and the side lengths of rectangular boxes in different situations. But when the calculations came down to polynomials, I couldn't proceed, due to the uncertainty of the variables in the polynomials.</p> <p>Can anyone please find a good way to prove the above proposition? Or is there already a proof? Thank you for your help!</p>
Mikael Helin
418,258
<p>Another solution to the one mjw posted, this time without the use of Lagrange multipliers, is as follows. Fix the &quot;perimeter&quot; <span class="math-container">$P$</span> so that <span class="math-container">$P=4(a+b+c)$</span> is constant; then the volume is</p> <p><span class="math-container">$$ V=ab(P/4-a-b). $$</span> Take partial derivatives to get <span class="math-container">$$ \frac{\partial V}{\partial a}=b(P/4-a-b)-ab=0 $$</span> and <span class="math-container">$$ \frac{\partial V}{\partial b}=a(P/4-a-b)-ab=0. $$</span> It is easy to see that <span class="math-container">$a=b\neq 0$</span>, which we insert into one of the equations to get <span class="math-container">$a(P/4-2a)=a^2$</span>, with solution <span class="math-container">$a=P/12$</span>. This gives <span class="math-container">$b=P/12$</span> and <span class="math-container">$c=P/4-2a=P/4-P/6=P/12$</span>, i.e. all sides are of equal length: a cube.</p>
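A brute-force check of this optimum (added for illustration; taking $P=12$ so that $a+b+c=3$ and the predicted cube has side $P/12=1$):

```python
P = 12.0
best = (0.0, 0.0, 0.0, -1.0)
steps = 300
for i in range(1, steps):
    for j in range(1, steps):
        a = 3.0 * i / steps           # grid over the feasible triangle
        b = 3.0 * j / steps
        c = 3.0 - a - b
        if c > 0:
            v = a * b * c
            if v > best[3]:
                best = (a, b, c, v)

a, b, c, v = best
assert abs(a - 1.0) < 0.02 and abs(b - 1.0) < 0.02 and abs(c - 1.0) < 0.02
assert abs(v - 1.0) < 1e-3            # maximum volume 1 at the cube
```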
3,772,923
<p>My child's teacher raised a quesion in class for students who are interested to prove. The teacher says that the volume of a cube is the greatest among rectangular-faced shapes of the same perimeter and asks his students to prove this proposition.</p> <p>I considered the relationship between the length of the sides of a cube and the lengths of the sides of rectangular-faced shapes in different situation. But when the calculations came down to polynomials, I couldn't proceed due to the uncertainty of the variables in the polynomials.</p> <p>Can anyone please find a good way to prove the above proposition? Or is there already a proof? Thank you for your help!</p>
Rezha Adrian Tanuharja
751,970
<p>Are elementary solutions permitted? By AM-GM,</p> <p><span class="math-container">$$ \frac{a+b+c }{3}\geq \sqrt[3]{abc}. $$</span></p> <p>Equality, i.e. the maximum volume for a given sum of side lengths, holds exactly when all sides are equal.</p>
7,981
<p>I've read so much about it but none of it makes a lot of sense. Also, what's so unsolvable about it?</p>
Matt E
221
<p>The prime number theorem states that the number of primes less than or equal to $x$ is approximately equal to $\int_2^x \dfrac{dt}{\log t}.$ The Riemann hypothesis gives a precise answer to how good this approximation is; namely, it states that the difference between the exact number of primes below $x$, and the given integral, is (essentially) $\sqrt{x} \log x$. </p> <p>(Here "essentially" means that one should actually take the absolute value of the difference, and also that one might have to multiply $\sqrt{x} \log x$ by some positive constant. Also, I should note that the Riemann hypothesis is more usually stated in terms of the location of the zeroes of the Riemann zeta function; the previous paragraph is giving an equivalent form, which may be easier to understand, and also may help to explain the interest of the statement. See <a href="http://en.wikipedia.org/wiki/Riemann_hypothesis#Distribution_of_prime_numbers">the wikipedia entry</a> for the formulation in terms of counting primes, as well as various other formlations.)</p> <p>The difficulty of the problem is (it seems to me) as follows: there is no approach currently known to understanding the distribution of prime numbers well enough to establish the desired approximation, other than by studying the Riemann zeta function and its zeroes. (The information about the primes comes from information about the zeta function via a kind of Fourier transform.) On the other hand, the zeta function is not easy to understand; there is no straightforward formula for it that allows one to study its zeroes, and because of this any such study ends up being somewhat indirect. So far, among the various possible such indirect approaches, no-one has found one that is powerful enough to control all the zeroes. 
</p> <p>A very naive comment, that nevertheless might give some flavour of the problem, is that there are an infinite number of zeroes that one must contend with, so there is no obvious finite computation that one can make to solve the problem; ingenuity of some kind is necessarily required.</p> <p>Finally, one can remark that the Riemann hypothesis, when phrased in terms of the location of the zeroes, is very simple (to state!) and very beautiful: it says that all the non-trivial zeros have real part $1/2$. This suggests that perhaps there is some secret symmetry underlying the Riemann zeta function that would "explain" the Riemann hypothesis. Mathematicians have had, and continue to have, various ideas about what this secret symmetry might be (in this they are inspired by an analogy with what is called "the function field case" and the deep and beautiful theory of <a href="http://en.wikipedia.org/wiki/Weil_conjectures">the Weil conjectures</a>), but so far they haven't managed to establish any underlying phenomenon which implies the Riemann hypothesis.</p>
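The approximation in the first paragraph can be illustrated numerically (added for illustration; a check at one value of $x$, of course, is not evidence for the hypothesis):

```python
import math

x = 100_000

# pi(x): count primes up to x with a sieve of Eratosthenes.
is_prime = bytearray([1]) * (x + 1)
is_prime[0] = is_prime[1] = 0
for p in range(2, math.isqrt(x) + 1):
    if is_prime[p]:
        is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
pi_x = sum(is_prime)

# Integrate dt/log(t) from 2 to x (trapezoidal rule).
n = 500_000
h = (x - 2) / n
li = sum(1 / math.log(2 + i * h) for i in range(1, n))
li += (1 / math.log(2) + 1 / math.log(x)) / 2
li *= h

assert pi_x == 9592                                   # known value of pi(10^5)
assert abs(pi_x - li) < math.sqrt(x) * math.log(x)    # difference well within sqrt(x)*log(x)
```

Here the actual gap is about 37, while $\sqrt{x}\log x \approx 3640$.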
188,102
<p>I have the following list: </p> <pre><code>m={{14, "extinguisher"}, {54, "virgule"}, {55, "turnoff"}, {51, "sofa"}, {77, "beachcomber"}, {61, "stoic"}, {6, "isomorphism"}, {34, "leftist"}, {84, "spline"}, {42, "heartiness"}, {35, "postnatal"}, {41, "stratified"}, {66, "silkworm"}, {95, "conformance"}, {38, "hemophiliac"}, {19, "abdication"}, {13, "reimpose"}, {82, "cowhide"}, {78, "banteringly"}, {26, "contention"}}; </code></pre> <p>I wonder if it is possible to make a spiral bubble chart of this on Mathematica, where the number is represented by how the bubble should be big and each bubble would be labeled by the corresponding words. </p> <p>In fact I am expecting to make something as follow: <a href="https://i.stack.imgur.com/nErYz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nErYz.png" alt="enter image description here"></a></p>
kglr
125
<p>Using the function <code>spiral</code> from <a href="https://mathematica.stackexchange.com/a/6869/125">this answer by Heike</a> to compute the centers of disks arranged on a spiral:</p> <pre><code>sm = SortBy[m, -#[[1]] &amp;]; radii = Normalize[sm[[All, 1]], Max] ; centers = spiral[radii]; labels = sm[[All, 2]]; Graphics[MapThread[{ColorData["Rainbow"]@#3, Disk[##2], Black, Text[Style[#, Max[8, Floor[32 #3]]], #2]} &amp;, {labels, centers, radii}]] </code></pre> <p><a href="https://i.stack.imgur.com/oyDZY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oyDZY.png" alt="enter image description here"></a></p> <p>Alternatively, use <code>spiral</code> to construct input data for <code>BubbleChart</code>:</p> <pre><code>BubbleChart[MapThread[Append, {spiral[sm[[All, 1]]], sm[[All, 1]]}], BubbleScale -&gt; "Diameter", BubbleSizes -&gt; {.025, .4}, ColorFunction -&gt; "Rainbow", ChartLabels -&gt; Placed[Style[#, Max[8, Floor[32 #2]]] &amp; @@@ Transpose[{labels, Normalize[sm[[All, 1]], Max]}], Center]] </code></pre> <p><a href="https://i.stack.imgur.com/aSBBl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aSBBl.png" alt="enter image description here"></a></p>
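Heike's <code>spiral</code> helper is only linked above, not shown. As a rough, language-agnostic sketch of the same placement idea (a hypothetical Python analogue, not the actual Mathematica code), one can walk an Archimedean spiral and drop each disk at the first position where it overlaps nothing already placed:

```python
import math

def spiral_pack(radii, pitch=0.05, dt=0.02):
    """Greedily place disks of the given radii along an Archimedean spiral,
    taking the first spot where the new disk fits without overlap."""
    placed = []  # list of (x, y, r)
    for r in radii:
        t = 0.0
        while True:
            x, y = pitch * t * math.cos(t), pitch * t * math.sin(t)
            if all(math.hypot(x - px, y - py) >= r + pr for px, py, pr in placed):
                placed.append((x, y, r))
                break
            t += dt
    return placed

# Largest first, normalized by the maximum, as in the answer's SortBy step
values = [95, 84, 77, 66, 61, 55, 54, 51, 42, 41]
disks = spiral_pack([v / 95 for v in sorted(values, reverse=True)])
```

By construction no two disks overlap, and the biggest bubbles end up near the center, mimicking the spiral layout in the plots above.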
2,508,011
<blockquote> <p>find the <span class="math-container">$x$</span> :</p> <p><span class="math-container">$$x^2(x-1)^2+x^2=8(x-1)^2$$</span></p> </blockquote> <hr /> <p>My Try :</p> <p><span class="math-container">$$x^2(x-1)^2+x^2=8(x-1)^2\\ x^2(x^2-2x+1)+x^2=8(x^2-2x+1)\\x^4-2x^3+x^2+x^2=8x^2-16x+8\\x^4-2x^3-6x^2+16x-8=0$$</span></p> <p>Now What ?</p>
A. Goodier
466,850
<p>Notice that $x=2$ is a solution. So $$x^4-2x^3-6x^2+16x-8 =(x-2)(x^3-6x+4)$$ $x=2$ is also a solution of $x^3-6x+4=0$. So $$x^4-2x^3-6x^2+16x-8 =(x-2)^2(x^2+2x-2)=0$$ The roots are $x=2,\,-1\pm\sqrt{3}$.</p>
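A quick numerical cross-check of the factorization (added here, not in the original answer): the quadratic factor $x^2+2x-2$ has roots $-1\pm\sqrt{3}$, and together with the double root $x=2$ these annihilate the quartic.

```python
def p(x):
    return x**4 - 2*x**3 - 6*x**2 + 16*x - 8

def factored(x):
    return (x - 2)**2 * (x**2 + 2*x - 2)

roots = [2, -1 + 3**0.5, -1 - 3**0.5]
```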
3,615,117
<p>I want to find the intersection of the sphere <span class="math-container">$x^2+y^2+z^2 = 1$</span> and the plane <span class="math-container">$x+y+z=0$</span>. </p> <p><span class="math-container">$z=-(x+y)$</span> that gives <span class="math-container">$x^2+y^2+xy= \frac 12$</span></p> <p>How do I represent this in the standard form of ellipse? Any help is appreciated to proceed further. Thanks in advance.</p>
Z Ahmed
671,540
<p>As this quadratic in <span class="math-container">$x$</span> and <span class="math-container">$y$</span> has an <span class="math-container">$xy$</span> term, it cannot represent a circle. <span class="math-container">$$x^2+y^2+xy=1/2~~~(1)$$</span> Next, write it as a quadratic in <span class="math-container">$y$</span>: <span class="math-container">$$y^2+xy+x^2-1/2=0 \implies y=\frac{-x\pm\sqrt{2-3x^2}}{2}.$$</span> For the curve to be real we need <span class="math-container">$$2-3x^2 \ge 0 \implies -\sqrt{2/3}\le x \le \sqrt{2/3}$$</span> Therefore it is a bounded conic, and it has to be an ellipse, as a circle was ruled out.</p>
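Two quick checks (added here, not part of the original answer): the conic discriminant $B^2-4AC$ of $x^2+xy+y^2=\tfrac12$ is negative, confirming an ellipse, and points recovered from the quadratic in $y$ really do lie on both the sphere and the plane.

```python
import math

# Conic x^2 + x*y + y^2 = 1/2, from substituting z = -(x+y) into the sphere
A, B, C = 1, 1, 1
disc = B**2 - 4 * A * C   # negative => ellipse

def on_curve(x):
    """The two y-values on the curve for a given x (requires 2 - 3x^2 >= 0)."""
    d = 2 - 3 * x**2
    return (-x + math.sqrt(d)) / 2, (-x - math.sqrt(d)) / 2

x0 = 0.3
y_plus, y_minus = on_curve(x0)
```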
1,407,641
<p>If $T$ is a linear transformation and is said to be one to one or onto- this only makes sense when we specify what domain and range is right? $T: V \rightarrow V$ may not be onto or one to one but $T: V \rightarrow Im(T)$ is certainly onto and may or may not be one to one. Is this right?</p>
EPS
133,563
<p>Perhaps this needs a bit more clarification:</p> <ol> <li>Your question is really about <strong>functions</strong> in general and not related to linear algebra.</li> <li>Any function should be thought of as a triple $(f, X, Y)$ which is normally denoted by $f\colon X\to Y$. In other words, whenever you're talking about a function, you should have fixed (at least implicitly) a domain and a codomain for it. Therefore, strictly speaking writing $$f\colon X\to \operatorname{Im} f$$ is not correct, because once you change the codomain you're dealing with a new function and you'd better use a different letter, say $g$, to denote it to avoid confusion. Of course, when you get comfortable with these notions, you can get a little sloppy and say things like ``any function is onto its image,'' etc.</li> </ol> <p>PS I just noticed that SRX has made the same point 2 in his comment earlier.</p>
2,040,041
<p>I was able to think that the numerator will always be positive and will overpower the denominator as well. But couldn't proceed from there.</p>
Robert Z
299,698
<p>Hint. Consider the power series expansion of $e^x=\sum_{k\geq 0}\frac{x^k}{k!}$. Then $$e^x-\frac{2(e^x-(1+x))}{x^2}=\sum_{k\geq 0}\frac{x^k}{k!}-2\sum_{k\geq 2}\frac{x^{k-2}}{k!}=\sum_{k\geq 0}\frac{x^k}{k!}-2\sum_{k\geq 0}\frac{x^{k}}{(k+2)!}\\=\sum_{k\geq 1}\frac{x^k}{k!}\left(1-\frac{2}{(k+2)(k+1)}\right).$$ Show that the coefficients of the resulting power series are all positive.</p>
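As a sanity check (not in the original hint), one can compare the closed form on the left with partial sums of the rearranged series numerically, and confirm positivity for sample $x>0$:

```python
import math

def f(x):
    return math.exp(x) - 2 * (math.exp(x) - (1 + x)) / x**2

def series(x, terms=60):
    """Sum over k >= 1 of x^k/k! * (1 - 2/((k+2)(k+1))); the k = 0 term vanishes."""
    total = 0.0
    for k in range(1, terms):
        total += x**k / math.factorial(k) * (1 - 2 / ((k + 2) * (k + 1)))
    return total
```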
2,644,910
<p>Ali Baba is trying to enter a cave. At the entrance, there is a drum with four openings, in each of which there is a pot with a herring inside. The herring may be lying with its tail up or down. Ali Baba can put his hands into any two openings, feel the herrings, and put any one or both of them either tail up or tail down as he pleases. After this, the drum rotates and once it stops, Ali Baba cannot determine into which openings he put his hands before. The door to the cave will open as soon as the four herrings are either all tail up or tail down. What should Ali Baba do?<br> This question is similar to a "binary" question, where I have to convert a series of 1s (up) and 0s (down) into all 1s or 0s, but I am not sure how to do that here with randomization.</p>
Davide Gallo
511,400
<p>Got it. Will prove by induction. We just have to show that the result holds when $n=mp$. Let $b \in \mathbb{Z}_m^*$. </p> <p>If $b \not\equiv 0 \pmod p$ then $(b,m)=1 \land (b,p)=1 \implies (b,n)=1 \implies b \in \mathbb{Z}_n^*$. Now we have $f(b)=b$. </p> <p>If $b \equiv 0 \pmod p$ then $m \not\equiv 0 \pmod p \implies b+m \not\equiv 0 \pmod p$. Therefore $(b+m,p)=1 \land (b+m,m)=1 \implies (b+m,n)=1 \implies b+m \in \mathbb{Z}_n^*$. Now we have $f(b+m)=b$.</p> <p>The inductive step is straightforward.</p>
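The core of the case split — that for each $b$ coprime to $m$, either $b$ or $b+m$ is coprime to $n=mp$ and reduces to $b$ modulo $m$ — can be brute-forced for small cases (a Python sketch added here, assuming $p$ is a prime not dividing $m$; not part of the original proof):

```python
from math import gcd

def check(m, p):
    """Verify the case split for every b in Z_m^* (assumes p prime, p does not divide m)."""
    n = m * p
    for b in range(m):
        if gcd(b, m) != 1:
            continue
        # If p | b, use b + m instead, exactly as in the proof above
        lift = b if b % p != 0 else b + m
        assert gcd(lift, n) == 1 and lift % m == b
    return True
```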
4,520,388
<p>I'm stuck on this multivariable equation:</p> <p><span class="math-container">$$ \frac{d}{dx}\left(\int^x_af(g(b,t),t)dt\right) $$</span></p> <p>where a and b are just constants.</p> <p>If this involved a single variable, it looks like one would just apply the fundamental theorem of calculus. Is there an equivalent for multiple variables.</p> <p>I know that the answer should just be</p> <p><span class="math-container">$$ f(g(b,x),x) $$</span></p> <p>but I'm hoping someone can explain / walk me through. Is there maybe some rule that lets me pass the <span class="math-container">$\frac{d}{dx}$</span> into the integral?</p> <p>Thanks</p>
Leonid
679,193
<p>This doesn't actually need multiple variables, and could be deduced from the single variable FTC. You also don't want to pass the derivative inside the integral, since the limit of integration depends on <span class="math-container">$x$</span> (the very variable you are differentiating with respect to).</p> <p>Recall that if you have a single variable function <span class="math-container">$h(t)$</span> then:</p> <p><span class="math-container">$$\dfrac{d}{dx} \int_a^x h(t)dt=h(x) $$</span></p> <p>in your case, for fixed <span class="math-container">$b$</span>, take <span class="math-container">$h(t)=f(g(b,t),t)$</span>. Notice this is just a single variable function. The fact that it is actually a composition of two single variable functions and that there's an extra constant <span class="math-container">$b$</span> doesn't change the fact that it's still a single variable function and hence the above analysis still applies:</p> <p><span class="math-container">$$\dfrac{d}{dx} \int_a^x h(t)dt=h(x) \implies \dfrac{d}{dx} \int_a^x f(g(b,t),t)dt=f(g(b,x),x)$$</span></p>
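A finite-difference sanity check with concrete, made-up choices $g(b,t)=b+t^2$ and $f(u,t)=\sin u + t$ (these functions are illustrative assumptions, not from the question):

```python
import math

B = 1.5   # the constant b (an arbitrary sample value)
A = 0.0   # the lower limit a

def g(b, t):
    return b + t**2

def f(u, t):
    return math.sin(u) + t

def integral(x, steps=20000):
    """Midpoint rule for the integral from A to x of f(g(B,t), t) dt."""
    h = (x - A) / steps
    return h * sum(f(g(B, A + (k + 0.5) * h), A + (k + 0.5) * h)
                   for k in range(steps))

x0, dh = 0.8, 1e-4
numeric = (integral(x0 + dh) - integral(x0 - dh)) / (2 * dh)
exact = f(g(B, x0), x0)   # the claimed derivative: f(g(b,x), x)
```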
1,284,938
<p>I was revising for one of my end of year maths exams, then I came across this example on how to find lines of tangents to ellipses outside the curve. Personally, I'd use differentiation and slopes to find such lines, but the lecturer does something simpler and more elegant.</p> <p>The question is: "Find the equations of the lines through (1, 4) which are tangents to the ellipse $x^2 + 2y^2 = 6$"</p> <p>And then we put the lines into the standard form, which comes out as $ y = mx βˆ’ (m βˆ’ 4)$ where $m$ is the slope of the line.</p> <p>Then, the lecturer substitutes the equation we got into the original equation of the curve, which we get $x^2 + 2[mx βˆ’ (m βˆ’ 4)]^2 = 6$.</p> <p>Now, the lecturer goes from the equation above to something I can't understand how to derive. With the explanation "We now look for repeated roots in the equation, as each tangent meets the line exactly once, we get":</p> <p>$[4m(m βˆ’ 4)]^2 βˆ’ 4(1 + 2m^2 )(2m^2 βˆ’ 16m + 26) = 0$</p> <p>Can you guys please help me understand how to get to the equation above? I tried using the quadratic formula, where x has repeated roots (in other words, the rational bit is zero), but I still got something entirely different.</p> <p>Thanks.</p>
doraemonpaul
30,938
<p>Follow the method in <a href="http://en.wikipedia.org/wiki/Method_of_characteristics#Example" rel="nofollow">http://en.wikipedia.org/wiki/Method_of_characteristics#Example</a>:</p> <p>$\dfrac{dx}{dt}=1$ , letting $x(0)=0$ , we have $x=t$</p> <p>$\dfrac{dy}{dt}=2e^x-y=2e^t-y$ , we have $y=e^t+y_0e^{-t}=e^x+y_0e^{-x}$</p> <p>$\dfrac{dz}{dt}=0$ , letting $z(0)=f(y_0)$ , we have $z(x,y)=f(y_0)=f(ye^x-e^{2x})$</p> <p>$z(0,y)=y$ :</p> <p>$f(y-1)=y$</p> <p>$f(y)=y+1$</p> <p>$\therefore z(x,y)=ye^x-e^{2x}+1$</p>
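The characteristic system above corresponds to the PDE $z_x + (2e^x - y)\,z_y = 0$ with $z(0,y)=y$; a quick finite-difference check (added here, not in the original answer) that $z(x,y)=ye^x-e^{2x}+1$ satisfies both:

```python
import math

def z(x, y):
    return y * math.exp(x) - math.exp(2 * x) + 1

def pde_residual(x, y, h=1e-5):
    """Central-difference evaluation of z_x + (2 e^x - y) z_y."""
    zx = (z(x + h, y) - z(x - h, y)) / (2 * h)
    zy = (z(x, y + h) - z(x, y - h)) / (2 * h)
    return zx + (2 * math.exp(x) - y) * zy
```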
4,312,323
<p>I get why <span class="math-container">$\sqrt{9} = \pm 3$</span>. But (at least I think) the Β± is there because there's a certain ambiguity as to which number was squared to obtain <span class="math-container">$9$</span>.</p> <p>Does that mean that if we remove the ambiguity <span class="math-container">$\sqrt{3^2} = 3$</span> ?</p> <p>One argument could be that since <span class="math-container">$\sqrt{3^2} = \sqrt{9} = \pm 3$</span>. Then again we could argue that we know for a fact that <span class="math-container">$9$</span> is the result of squaring the number <span class="math-container">$3$</span> and should therefore be <span class="math-container">$\sqrt{3^2} = 3$</span>.</p> <p>I apologize as I'm only a beginner and this may perhaps seem too basic.</p>
hyper-neutrino
457,091
<p>Your line of thinking makes sense, but it's not exactly like that - it's not that we &quot;don't know&quot; which value was squared to get it; rather, both are answers.</p> <hr /> <p>In most (almost all) contexts, <span class="math-container">$\sqrt n$</span> refers to <em>only</em> the positive value of the square root. So, <span class="math-container">$\sqrt{3^2}$</span> would just be <span class="math-container">$3$</span>, but so would <span class="math-container">$\sqrt{(-3)^2}$</span>. In this situation, you have to be careful because <span class="math-container">$\left(\sqrt n\right)^2=n$</span> but <span class="math-container">$\sqrt{n^2}=|n|$</span>.</p> <hr /> <p>If you think about it, <span class="math-container">$\sqrt n$</span> is just &quot;roots of the equation <span class="math-container">$x^2=n$</span>&quot; (hence why it's called <em>square root</em>) and so there will be two answers (well, except when <span class="math-container">$n=0$</span>, then there's only one distinct answer).</p> <hr /> <p>Overall, your line of thinking makes sense, but it's not that we &quot;don't know the original value&quot;. Depending on your definition, either only the positive value is correct, or occasionally both values are right, and either way, you <em>cannot</em> count on <span class="math-container">$\sqrt{n^2}$</span> to equal <span class="math-container">$n$</span>.</p>
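A tiny illustration of the convention (added for this write-up, not part of the answer): the radical always returns the principal root, while the equation $x^2=9$ keeps both.

```python
import math

n = -3
principal = math.sqrt(n ** 2)                 # sqrt(9): the principal (positive) root
roots = [t for t in (3, -3) if t ** 2 == 9]   # the equation x^2 = 9 has two solutions
```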
4,092,877
<p>I'm trying to find the solution for the following differential equation, however, I'm not sure how to derive the answer and so I would really appreciate some support!</p> <p><span class="math-container">$y'' - y' = x^2$</span></p> <p>I have tried splitting this into a quadratic polynomial: <span class="math-container">$Ax^2 + bx + C$</span></p> <p>Then taking its derivative:</p> <p><span class="math-container">$$y'' - y' = x^2 \implies 2Ax + 2A + B =x^2 $$</span></p> <p>This is the case when <span class="math-container">$A = \frac{1}{2}x$</span> and <span class="math-container">$B = -x$</span></p> <p><span class="math-container">$y_1(x) = x^2 + \frac{1}{2}x-x$</span></p> <p>Though this is not the solution, because when I place this back into the equation I do not get the right answer.</p> <p>I thought the solution would be: <span class="math-container">$y=c_1\cos(x) + c_2\sin(x) + x^2+\frac{1}{2}x-x$</span></p> <p>My expectation is: <span class="math-container">$y=c_1e^x+c_2-2x-x^2-\frac{1}{x}x^3$</span></p>
Lukas
844,079
<p>You are right with your expectation. Let's first solve the homogeneous equation <span class="math-container">$y''-y'=0$</span>. Its characteristic polynomial is <span class="math-container">$x^2-x=0$</span>, which has solutions <span class="math-container">$x_1=0, x_2=1$</span>. This means the solution of the homogeneous equation is <span class="math-container">$$y= c_1e^x + c_2e^0 = c_1e^x + c_2$$</span> For the inhomogeneous equation we have to add a special solution to this general solution of the homogeneous equation. For that we can write the right hand side as <span class="math-container">$x^2 e^{0x}$</span>. The theory of inhomogeneous linear differential equations with constant coefficients tells us that we get a polynomial of degree <span class="math-container">$\leq 2+1 = 3$</span> (because the degree of <span class="math-container">$x^2$</span> is <span class="math-container">$2$</span> and <span class="math-container">$x_1=0$</span> is a zero of order <span class="math-container">$1$</span> of the characteristic polynomial). So we try <span class="math-container">$u_{sp}(x) = ax^3+bx^2+cx+d$</span> to get: <span class="math-container">$$6ax+2b-3ax^2-2bx-c = x^2$$</span> which results in <span class="math-container">$a= \frac{-1}{3}, b=-1, c=-2$</span>, while <span class="math-container">$d$</span> can be chosen freely (by comparing the coefficients). So we get as the final solution</p> <p><span class="math-container">$$y(x) = c_1e^x + c_2 - \frac{1}{3}x^3-x^2-2x$$</span></p>
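One can confirm numerically (a check added here, not in the original answer) that the final formula satisfies $y''-y'=x^2$ for arbitrary sample constants:

```python
import math

c1, c2 = 0.7, -1.3   # arbitrary sample constants

def y(x):
    return c1 * math.exp(x) + c2 - x**3 / 3 - x**2 - 2 * x

def residual(x, h=1e-4):
    """Central-difference evaluation of y'' - y' - x^2; should be ~0."""
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    yp = (y(x + h) - y(x - h)) / (2 * h)
    return ypp - yp - x**2
```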
683,513
<p>There is much discussion both in the education community and the mathematics community concerning the challenge of (epsilon, delta) type definitions in real analysis and the student reception of it. My impression has been that the mathematical community often holds an upbeat opinion on the success of student reception of this, whereas the education community often stresses difficulties and their "baffling" and "inhibitive" effect (see below). A typical educational perspective on this was recently expressed by Paul Dawkins in the following terms: </p> <p><em>2.3. Student difficulties with real analysis definitions. The concepts of limit and continuity have posed well-documented difficulties for students both at the calculus and analysis level of instructions (e.g. Cornu, 1991; Cottrill et al., 1996; Ferrini-Mundy &amp; Graham, 1994; Tall &amp; Vinner, 1981; Williams, 1991). Researchers identified difficulties stemming from a number of issues: the language of limits (Cornu, 1991; Williams, 1991), multiple quantification in the formal definition (Dubinsky, Elderman, &amp; Gong, 1988; Dubinsky &amp; Yiparaki, 2000; Swinyard &amp; Lockwood, 2007), implicit dependencies among quantities in the definition (Roh &amp; Lee, 2011a, 2011b), and persistent notions pertaining to the existence of infinitesimal quantities (Ely, 2010). Limits and continuity are often couched as formalizations of approaching and connectedness respectively. However, the standard, formal definitions display much more subtlety and complexity. That complexity often baffles students who cannot perceive the necessity for so many moving parts. Thus learning the concepts and formal definitions in real analysis are fraught both with need to acquire proficiency with conceptual tools such as quantification and to help students perceive conceptual necessity for these tools. 
This means students often cannot coordinate their concept image with the concept definition, inhibiting their acculturation to advanced mathematical practice, which emphasizes concept definitions.</em> </p> <p>See <a href="http://dx.doi.org/10.1016/j.jmathb.2013.10.002" rel="nofollow noreferrer">http://dx.doi.org/10.1016/j.jmathb.2013.10.002</a> for the entire article (note that the online article provides links to the papers cited above).</p> <p>To summarize, in the field of education, researchers decidedly have <em>not</em> come to the conclusion that epsilon, delta definitions are either "simple", "clear", or "common sense". Meanwhile, mathematicians often express contrary sentiments. Two examples are given below. </p> <p><em>...one cannot teach the concept of limit without using the epsilon-delta definition. Teaching such ideas intuitively does not make it easier for the student it makes it harder to understand. Bertrand Russell has called the rigorous definition of limit and convergence the greatest achievement of the human intellect in 2000 years! The Greeks were puzzled by paradoxes involving motion; now they all become clear, because we have complete understanding of limits and convergence. Without the proper definition, things are difficult. With the definition, they are simple and clear.</em> (see Kleinfeld, Margaret; Calculus: Reformed or Deformed? Amer. Math. Monthly 103 (1996), no. 3, 230-232.) </p> <p><em>I always tell my calculus students that mathematics is not esoteric: It is common sense. (Even the notorious epsilon, delta definition of limit is common sense, and moreover is central to the important practical problems of approximation and estimation.)</em> (see Bishop, Errett; Book Review: Elementary calculus. Bull. Amer. Math. Soc. 83 (1977), no. 
2, 205--208.)</p> <p>When one compares the upbeat assessment common in the mathematics community and the somber assessments common in the education community, sometimes one wonders whether they are talking about the same thing. How does one bridge the gap between the two assessments? Are they perhaps dealing with distinct student populations? Are there perhaps education studies providing more upbeat assessments than Dawkins' article would suggest? </p> <p>Note 1. See also <a href="https://mathoverflow.net/questions/158145/assessing-effectiveness-of-epsilon-delta-definitions">https://mathoverflow.net/questions/158145/assessing-effectiveness-of-epsilon-delta-definitions</a></p> <p>Note 2. Two approaches have been proposed to account for this difference of perception between the education community and the math community: (a) sample bias: mathematicians tend to base their appraisal of the effectiveness of these definitions in terms of the most active students in their classes, which are often the best students; (b) student/professor gap: mathematicians base their appraisal on their own scientific appreciation of these definitions as the "right" ones, arrived at after a considerable investment of time and removed from the original experience of actually learning those definitions. Both of these sound plausible, but it would be instructive to have field research in support of these approaches.</p> <p>We recently published <a href="http://dx.doi.org/10.5642/jhummath.201701.07" rel="nofollow noreferrer">an article</a> reporting the result of student polling concerning the comparative educational merits of epsilon-delta definitions and infinitesimal definitions of key concepts like continuity and convergence, with students favoring the infinitesimal definitions by large margins.</p>
Paramanand Singh
72,031
<p>First let me focus on the reasons behind the difficulty in assimilating the $\epsilon, \delta$ definitions.</p> <p>For any beginner in calculus, assimilating the $\epsilon, \delta$ definition is a challenge. I have rarely seen any student for whom this definition seems natural. I don't think anyone would dispute this, given the fact that these definitions were arrived at a long time after Newton invented calculus.</p> <p>However, the reasons for the difficulty in assimilating these definitions are not so much related to the definitions themselves, but rather to the approach of presenting them to students. A student who is learning calculus for the first time normally has experience with algebraic manipulation but has had very little exposure to order relations or inequalities. Another block is the understanding of "infinite". A student needs to be trained first in order relations and some understanding of "infinite". I can illustrate my point with two examples:</p> <p>1) A student of 13 yrs of age would find it very easy to solve $x + 5 = 3$ and at the same time find it a bit difficult to solve $|x - 5| &lt; 3$.</p> <p>2) A student of 16 yrs of age would find it easy to show that there is no rational number whose square is $2$. But at the same time he will be hard pressed to show that we can find <strong>as good</strong> a rational approximation to $\sqrt{2}$ <strong>as we want</strong>, especially if you don't allow him the square root extraction method to find a decimal approximation of $\sqrt{2}$ to any number of digits.</p> <p>I would say that there is a huge gap between "algebraic manipulation of expressions" and "appreciation of inequalities and the infinite nature of integers and rationals" in terms of problem solving techniques and the related conceptual framework. 
Unless this gap is bridged by the student himself or through his teachers, it is natural to expect that the student will find it challenging to accept the $\epsilon, \delta$ definitions.</p> <p>Next I come to the question asked here. The mathematics community in general feels that these definitions of calculus are the most appropriate and natural ones, and that they are hugely successful in teaching a huge amount of further "mathematical analysis". This is simply because once you have understood these definitions you can't think of any more natural choice of definition. After the initial fight with $\epsilon, \delta$ is over, the general feeling is that these definitions are the simplest and most powerful tools to teach these topics. My own view is the same, but I can't forget my days when I was fighting with $\epsilon, \delta$ and crossed the chasm with the help of <a href="http://paramanand.blogspot.com/2005/11/book-review-course-of-pure-mathematics.html" rel="nofollow">Hardy's Pure Mathematics</a>.</p>
620,756
<p>Let $R$ be a commutative ring with $1\not =0$, and let $D\ni 1$ be a multiplicative subset of $R$. Consider the universal characterization of $D^{-1}R$:</p> <p>There is a morphism $\pi\colon R\to D^{-1}R$ such that for all rings and morphisms $\psi\colon R\to S$ satisfying</p> <ul> <li>$\psi(1)=1$</li> <li>$\psi(D)\subset S^{\times}$</li> </ul> <p>there is a unique morphism $\Psi\colon D^{-1}R\to S$ such that $\Psi\circ\pi=\psi$.</p> <hr> <p>Suppose $D^{-1}R\not=0$. Prove directly from the universal characterization that $\pi(D)\subset (D^{-1}R)^{\times}$.</p> <p><em>Note: See p. 707 in Dummit and Foote</em></p>
Louis
75,278
<p>I've been looking at this exercise for some minutes now, and I'm deeply confused. Please correct me if I am wrong (and I am sorry for that), I'm only writing as an "answer" here because this might be too long for a comment.</p> <p>I don't think you can solve this question purely by using the universal property given above. Why is this?</p> <p>There is a morphism $id: R \rightarrow R$ such that for all rings and morphisms $\psi: R \rightarrow S$ satisfying</p> <ul> <li>$\psi(1)=1$</li> <li>$\psi(D) \subset S^{\times}$</li> </ul> <p>there is a unique morphism $\Psi: R \rightarrow S$ such that $\Psi \circ id = \psi$.</p> <p>Hence from this point of view, we are in the same setting. However, the statement here is certainly not true.</p> <p>(I know it's not a healthy reference, but the universal property given in the wikipedia article on the topic includes the property $\pi(D) \subset (D^{-1}R)^{\times}$)</p>
620,756
<p>Let $R$ be a commutative ring with $1\not =0$, and let $D\ni 1$ be a multiplicative subset of $R$. Consider the universal characterization of $D^{-1}R$:</p> <p>There is a morphism $\pi\colon R\to D^{-1}R$ such that for all rings and morphisms $\psi\colon R\to S$ satisfying</p> <ul> <li>$\psi(1)=1$</li> <li>$\psi(D)\subset S^{\times}$</li> </ul> <p>there is a unique morphism $\Psi\colon D^{-1}R\to S$ such that $\Psi\circ\pi=\psi$.</p> <hr> <p>Suppose $D^{-1}R\not=0$. Prove directly from the universal characterization that $\pi(D)\subset (D^{-1}R)^{\times}$.</p> <p><em>Note: See p. 707 in Dummit and Foote</em></p>
Community
-1
<p>The universal "characterisation" you provided is only a piece of the true universal property. You can think of it as saying $\pi:R\to D^{-1}R$ is initial; but with respect to what property? Merely saying that every map $R\to S$ that inverts $D$ factors through $R\to D^{-1}R$ is clearly not enough to characterise $D^{-1}R$.</p> <p>Consider morphisms $\pi:R\to S$ such that: $\pi(d)$ is invertible for every $d\in D$, if $\pi(x) = 0$ for some $x$ then $dx = 0$ for some $d\in D$, and every element of $S$ is of the form $\pi(r)\pi(d)^{-1}$. Then there exists a ring $D^{-1}R$ together with a morphism $\pi:R\to D^{-1}R$ that is initial with these properties.</p> <p>In other words, for every $f:R\to S$ with the above properties, there is a unique $g:D^{-1}R\to S$ such that $f = g\circ\pi$. In short, the universal property needs to be stated with the extra properties as above for $\pi$. Factoring every morphism uniquely which inverts $D$ is insufficient to characterise $D^{-1}R$, but I can see how page 707 of Dummit and Foote might have led you to this conclusion. You can find a lucid discussion with proofs in Atiyah and MacDonald's Commutative Algebra pp.37-38.</p> <p>In this correct setting, your exercise is just part of the definition. </p>
1,424,273
<p>Let $(a_n)$ be a convergent sequence of positive real numbers. Why is the limit nonnegative?</p> <p>My try: For all $\epsilon &gt;0$ there is a $N\in \mathbb{N}$ such that $|a_n-L|&lt;\epsilon$ for all $n\ge N$. And we know $0&lt; a_n$ for all $n\in \mathbb{N}$, particularly $0&lt;a_n$ for all $n\ge N$. Maybe by contradiction: suppose that $L&lt;0$, then $L&lt;0&lt;a_n$ for all $n\in \mathbb{N}$, particularly for all $n\ge N$. Then $0&lt;-L&lt;a_n-L$ for all $n\in \mathbb{N}$, particularly for all $n\ge N$. It follows: for all $\epsilon &gt;0$, there is a $N\in \mathbb{N}$ such that $0&lt;|-L|=-L&lt;|a_n-L|&lt;\epsilon$ for all $n\ge N$, which can't be true.</p> <p>Is my proof ok?</p>
Yes
155,328
<p>Let $l &lt; 0$ be the limit of $(a_{n})$. Since each $a_{n} &gt; 0$, we have $|l-a_{n}| = a_{n}-l &gt; -l = |l|$ for every $n$, so there is no $n \geq 1$ such that $|l-a_{n}| &lt; |l|$. Taking $\epsilon = |l|$ in the definition of convergence then gives a contradiction.</p>
256,322
<p>Let $A$ be an abelian group of order $n = p_1^{\alpha_1} \cdot \ldots \cdot p_k^{\alpha_k}$ (i.e., $n$'s unique prime factorization). The Primary Decomposition Theorem states that $A \cong \mathbb{Z}_{p_1^{\alpha_1}} \times \ldots \times \mathbb{Z}_{p_k^{\alpha_k}}$. On the other hand, the Fundamental Theorem of Finitely Generated Albelian Groups states that $A \cong \mathbb{Z}_{n_1} \times \ldots \times \mathbb{Z}_{n_j}$ for some $\{n_j\}$ s.t. $n = n_1 \cdot \ldots \cdot n_j$ and $n_{i+1}\,|\,n_i$ for all $1 \le i &lt; j-1$. Now I'm confused because it initially seems to me that both of these statements cannot be true at once. </p> <p>For example, suppose that the order of $A$ gives rise to at least <em>two</em> unique isomorphism types given by the Fundamental Theorem of Finitely Generated Abelian Groups. That is, suppose that $|A_1| = |A_2| = n = p_1^{\alpha_1} \cdot \ldots \cdot p_k^{\alpha_k}$ whereby $A_1 \not\cong A_2$ so that by the Fundamental Theorem of Finitely Generated Groups we have</p> <p>$$ A_1 \cong \mathbb{Z}_{n_1} \times \ldots \times \mathbb{Z}_{n_j} $$</p> <p>and</p> <p>$$ A_2 \cong \mathbb{Z}_{n_1} \times \ldots \times \mathbb{Z}_{m_k} $$</p> <p>with $\{n_i\} \ne \{m_k\}$. But we know no matter what that by the Primary Decomposition Theorem we have that $A_1 \cong \mathbb{Z}_{p_1^{\alpha_1}} \times \ldots \times \mathbb{Z}_{p_k^{\alpha_k}} \cong A_2$, a contradiction.</p> <p>What am I missing?</p>
Alexander Gruber
12,952
<p>The invariant factors (the $\mathbb{Z}_{n_i}$ in your FTFGAG decomposition) are also uniquely determined up to isomorphism. The Chinese remainder theorem gives the equivalence of these statements. I think the problem you're having is that in the primary decomposition statement the $p_k$'s don't necessarily have to be distinct; it's not the prime factorization of $n$, though they do multiply to $n$.</p>
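The CRT bookkeeping behind this equivalence can be sketched concretely (an illustrative helper added here, not from the answer): to pass from a primary decomposition to invariant factors, repeatedly multiply together the largest remaining power of each prime.

```python
from collections import defaultdict

def invariant_factors(prime_powers):
    """E.g. [4, 3, 2] (Z_4 x Z_3 x Z_2, order 24) -> [12, 2], with each later
    factor dividing the previous one."""
    by_prime = defaultdict(list)
    for q in prime_powers:
        p = min(d for d in range(2, q + 1) if q % d == 0)  # the prime of which q is a power
        by_prime[p].append(q)
    for powers in by_prime.values():
        powers.sort(reverse=True)
    factors = []
    while any(by_prime.values()):
        n = 1
        for p in list(by_prime):
            if by_prime[p]:
                n *= by_prime[p].pop(0)   # take the largest remaining power of p
        factors.append(n)
    return factors
```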
256,322
<p>Let $A$ be an abelian group of order $n = p_1^{\alpha_1} \cdot \ldots \cdot p_k^{\alpha_k}$ (i.e., $n$'s unique prime factorization). The Primary Decomposition Theorem states that $A \cong \mathbb{Z}_{p_1^{\alpha_1}} \times \ldots \times \mathbb{Z}_{p_k^{\alpha_k}}$. On the other hand, the Fundamental Theorem of Finitely Generated Albelian Groups states that $A \cong \mathbb{Z}_{n_1} \times \ldots \times \mathbb{Z}_{n_j}$ for some $\{n_j\}$ s.t. $n = n_1 \cdot \ldots \cdot n_j$ and $n_{i+1}\,|\,n_i$ for all $1 \le i &lt; j-1$. Now I'm confused because it initially seems to me that both of these statements cannot be true at once. </p> <p>For example, suppose that the order of $A$ gives rise to at least <em>two</em> unique isomorphism types given by the Fundamental Theorem of Finitely Generated Abelian Groups. That is, suppose that $|A_1| = |A_2| = n = p_1^{\alpha_1} \cdot \ldots \cdot p_k^{\alpha_k}$ whereby $A_1 \not\cong A_2$ so that by the Fundamental Theorem of Finitely Generated Groups we have</p> <p>$$ A_1 \cong \mathbb{Z}_{n_1} \times \ldots \times \mathbb{Z}_{n_j} $$</p> <p>and</p> <p>$$ A_2 \cong \mathbb{Z}_{n_1} \times \ldots \times \mathbb{Z}_{m_k} $$</p> <p>with $\{n_i\} \ne \{m_k\}$. But we know no matter what that by the Primary Decomposition Theorem we have that $A_1 \cong \mathbb{Z}_{p_1^{\alpha_1}} \times \ldots \times \mathbb{Z}_{p_k^{\alpha_k}} \cong A_2$, a contradiction.</p> <p>What am I missing?</p>
user1770201
46,072
<p>Indeed, all of the comments indicating I was misquoting the Primary Decomposition Theorem were correct. </p> <p>Pg. 161 of Dummit &amp; Foote states the Primary Decomposition Theorem for finite abelian groups:</p> <blockquote> <p>Let $G$ be an abelian group of order $n &gt; 1$ and let the unique factorization of $n$ into distinct prime powers be $n = p_1^{\alpha_1} \cdot \ldots \cdot p_k^{\alpha_k}$. Then $G \cong A_1 \times A_2 \times \ldots \times A_k$, where $|A_i| = p_i^{\alpha_i}$. </p> </blockquote> <p>In other words, a finite abelian group can be decomposed into a direct product of its (unique) Sylow p-subgroups. From this we can see that the error I was making was in assuming that each of the $A_i$ were already cyclic (instead each of the $A_i$ can be further decomposed into cyclic groups via the FTFGAG so that there is no discord).</p>
3,066,446
<p>Let <span class="math-container">$\overline{X}$</span> be the average of a sample of <span class="math-container">$16$</span> independent normal random variables with mean <span class="math-container">$0$</span> and variance <span class="math-container">$1$</span>. Determine c such that <span class="math-container">$P(| \overline{X} | &lt; c) = .5$</span></p> <p>I am having a lot of trouble with this question. I know it is related to chi-square but I don't know how to even start. </p>
Mike_
632,850
<p>If you draw a plot of <span class="math-container">$x^2\sin x$</span>, you will see it has no minimum or maximum at <span class="math-container">$x=0$</span>. Neither does <span class="math-container">$x^{2n} \sin x$</span> for any <span class="math-container">$n$</span>. However, <span class="math-container">$x^{2n+1} \sin x$</span> reaches a minimum at <span class="math-container">$x=0$</span>.</p> <p>Calculate <span class="math-container">$f''$</span> and use the property that <span class="math-container">$f''(x)$</span> is negative at <span class="math-container">$x=x_0$</span> if there is a maximum at <span class="math-container">$x_0$</span>, positive in the case of a minimum, and equal to zero at an inflection point. Note: this is not always true, but in your case it's ok*. (see e.g. <a href="https://en.wikipedia.org/wiki/Inflection_point" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Inflection_point</a> )</p> <p>UPD: *in this case it is not. As Silent pointed out, <span class="math-container">$f''(0)=0$</span>, so other methods should be used (e.g. proving that <span class="math-container">$f_n(x_1)&gt;f_n(x_0)&lt;f_n(x_2)$</span> for <span class="math-container">$x_1&lt;x_0&lt;x_2$</span>) - see answer below.</p>
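A quick numeric look near $0$ (added for illustration, not in the original answer): $x^2\sin x$ changes sign across $0$, so there is no extremum there, while $x^3\sin x$ is positive on both sides, giving a local minimum at $0$.

```python
import math

def f_even(x):   # x^{2n} sin x with n = 1
    return x**2 * math.sin(x)

def f_odd(x):    # x^{2n+1} sin x with n = 1
    return x**3 * math.sin(x)

xs = [k * 1e-3 for k in range(1, 100)]   # sample points in (0, 0.1)
```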
171,690
<p>I am trying to make a projection on the <em>xy-plane</em> of the intersection of the surfaces from the functions: <code>1 + x^2 - y^2</code>, <code>3 Log[1 + x^2]</code>.</p> <p><a href="https://i.stack.imgur.com/XqC1g.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XqC1g.png" alt="Intersection of surfaces"></a></p> <p>Thanks.</p>
OkkesDulgerci
23,291
<pre><code>ContourPlot[1 + x^2 - y^2 == 3 Log[1 + x^2], {x, -1.5, 1.5}, {y, -1.5, 1.5}] </code></pre>
2,778,575
<p>Given the equation: $\sin^2{x}+\cos{x}=0$</p> <p>How is it solved?</p> <p>I think: $\sin^2{x}=1-\cos^2{x}$, but even if I get a quadratic equation with one function (cos), how can I solve it?</p>
giobrach
332,594
<p>You are on the right track. The equation $$\sin^2 x + \cos x = 0 $$ becomes $$-\cos^2 x + \cos x + 1 = 0 $$ with the substitution $\sin^2 x = 1 - \cos^2 x$. At this point, you may solve the quadratic equation in $\cos x$ to find $$\cos x = \frac{-1 \pm \sqrt{1 + 4}}{-2} = \frac{1 \mp\sqrt 5}{2},$$ that is, formally, $$\cos x = \begin{cases} \varphi \\ - \dfrac 1 \varphi \end{cases} $$ where $\varphi = 1.618033...$ is the <em>golden ratio</em>. However, since $\cos x \in [-1,+1]$, the option $\cos x = \varphi$ must be discarded, so that $$\cos x = - \frac 1 \varphi = \frac{1 - \sqrt 5}{2}. $$ One solution is $$x = \arccos \frac{1 - \sqrt 5}{2} \approx 2.237$$ in radians (in degrees, about $128.2^\circ$); however, since $\cos$ is an even function, $-x$ must be a solution too. Finally, $\cos$ is $2\pi$-periodic, therefore the other solutions may be found by adding a factor of $2 \pi n$ to $x$ and $-x$, with $n \in \mathbb Z$.</p>
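As a numeric sanity check (my addition, not part of the original answer), one can confirm that $x=\arccos\frac{1-\sqrt 5}{2}$ really satisfies the original equation, and that negating $x$ or shifting by $2\pi$ gives further solutions.

```python
import math

# The solution found above: cos x = (1 - sqrt(5))/2 = -1/phi.
x = math.acos((1 - math.sqrt(5)) / 2)   # ≈ 2.237 rad (about 128.2°)

def f(t):
    return math.sin(t)**2 + math.cos(t)

# Check x, -x, and 2π-shifts all satisfy sin²t + cos t = 0.
residuals = [f(x), f(-x), f(x + 2 * math.pi), f(-x + 2 * math.pi)]
max_residual = max(abs(r) for r in residuals)
```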
3,543,150
<p>My question: given two indefinite integrals of a function, how can one indefinite integral be expressed in terms of the other? </p> <p><a href="https://i.stack.imgur.com/VkMzJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VkMzJ.png" alt="enter image description here"></a></p>
David G. Stork
210,401
<p>The function is defined for positive and negative values of <span class="math-container">$x$</span>. The graph shows the real and imaginary parts:</p> <p><a href="https://i.stack.imgur.com/meg59.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/meg59.png" alt="Mathematica plot"></a></p> <p><span class="math-container">$${\rm Re}[(-1)^{4/3}] = -\frac{1}{2}$$</span></p> <p><span class="math-container">$${\rm Im}[(-1)^{4/3}] = - \frac{\sqrt{3}}{2}$$</span></p> <p>Given that there is no unique way to compute a partial root of a negative number, <em>Mathematica</em> seems to assume the most general complex form.</p> <p>There is no reason that the function should be (or is) symmetric with respect to the interchange <span class="math-container">$x \leftrightarrow -x$</span>.</p>
518,627
<p>Prove that: $ \sum\limits_{n=1}^{p} \left\lfloor \frac{n(n+1)}{p} \right\rfloor= \frac{2p^2+3p+7}{6} $ <br> where $p$ is a prime number such that $p \equiv 7 \mod{8}$. <br> <br>I tried to separate the sum into parts but it does not seem to go anywhere. I also tried to make a substitution for $p$, but I don't think it is entirely correct to set $p=7+8t$. Any ideas?</p>
Alexander Vlasev
11,998
<p>This is a partial answer. By the division algorithm let $n(n+1) = q_n p + r_n$ where $0\leq r_n &lt; p$. Then we see that</p> <p>$$\left\lfloor\frac{n(n+1)}{p}\right\rfloor = \left\lfloor q_n + \frac{r_n}{p}\right\rfloor = q_n + \left\lfloor\frac{r_n}{p}\right\rfloor = q_n$$</p> <p>So the problem transforms into finding the sum of the quotients $q_n$. Here</p> <p>$$\sum_{n=1}^p q_n =\sum_{n=1}^p\frac{n(n+1) - r_n}{p} = \frac{1}{3}(p+1)(p+2) - \frac{1}{p} \sum_{n=1}^p r_n$$</p> <p>Now compare this to what we need to obtain. We transformed this problem into the following one. Let $p \equiv 7 \pmod{8}$. Show that</p> <p>$$\sum_{n=1}^p r_n = \frac{p(p-1)}{2}$$</p> <p>where $r_n$ is the equivalence class of $n(n+1)$ modulo $p$. This I believe is an easier problem. The last two residues are $0$ so you have to show</p> <p>$$\sum_{n=1}^{p-2} r_n = \frac{p(p-1)}{2}$$</p>
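Both the original identity and the reduced claim are easy to check by brute force for small primes $p \equiv 7 \pmod 8$ (a verification sketch of mine, not part of the answer above):

```python
# Brute-force check of the floor-sum identity and of the reduced claim
# (sum of residues r_n = p(p-1)/2) for small primes p ≡ 7 (mod 8).
def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

checked = []
for p in [7, 23, 31]:
    assert is_prime(p) and p % 8 == 7
    floor_sum = sum(n * (n + 1) // p for n in range(1, p + 1))
    residue_sum = sum(n * (n + 1) % p for n in range(1, p + 1))
    checked.append(
        floor_sum == (2 * p * p + 3 * p + 7) // 6
        and residue_sum == p * (p - 1) // 2
    )
```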
1,693,045
<p>I know if $x=e^{\frac{2\pi i}{17}}$ then $x^{17}=1$ and $\Re(x)=\cos\left(\frac{2\pi}{17}\right)$.</p> <p>But how do I form a polynomial which has root $\cos\left(\frac{2\pi}{17}\right)$.</p> <p>I know you can consider de Moivre's theorem and expand the LHS using binomial theorem but that will take a long time.</p>
Wojowu
127,263
<p>By adding equalities $$\cos(n+1)x=\cos nx\cos x-\sin nx\sin x\\ \cos(n-1)x=\cos nx\cos x+\sin nx\sin x$$ we get an equality $$\cos(n+1)x+\cos(n-1)x=2\cos nx\cos x$$ If we now define, by induction, <a href="https://en.wikipedia.org/wiki/Chebyshev_polynomials" rel="nofollow">Chebyshev polynomials</a> $T_0(y)=1,T_1(y)=y,T_{n+1}(y)=2yT_n(y)-T_{n-1}(y)$ then it follows, by taking $y=\cos x$, that $$T_n(\cos x)=\cos nx$$ It follows that $\cos\frac{2\pi}{17}$ is a root of $T_{17}(x)-\cos 2\pi=T_{17}(x)-1$.</p> <p>Chebyshev polynomials are a bit tedious to calculate by hand, but thanks to the recurrence relation this can be done in quite a short amount of time. You can draw a Pascal-triangle like table containing their coefficients, which would make it even faster.</p>
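One can confirm numerically (my own check, not part of the answer) that the recurrence produces polynomials satisfying $T_n(\cos x)=\cos nx$, and in particular $T_{17}\!\left(\cos\frac{2\pi}{17}\right) = \cos 2\pi = 1$, so $\cos\frac{2\pi}{17}$ is indeed a root of $T_{17}(x)-1$.

```python
import math

def chebyshev_T(n, y):
    # T_0 = 1, T_1 = y, T_{n+1} = 2 y T_n - T_{n-1}
    t_prev, t_curr = 1.0, y
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2 * y * t_curr - t_prev
    return t_curr

c = math.cos(2 * math.pi / 17)
value = chebyshev_T(17, c)   # should equal cos(17 · 2π/17) = cos 2π = 1

# Spot-check the defining identity T_n(cos x) = cos(nx).
identity_holds = all(
    abs(chebyshev_T(n, math.cos(x)) - math.cos(n * x)) < 1e-9
    for n in range(8) for x in [0.3, 1.1, 2.5]
)
```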
3,053,975
<p><span class="math-container">$3^6-3^3 +1$</span> has the factors <span class="math-container">$37$</span> and <span class="math-container">$19$</span>, but how can this be found by factoring? Writing <span class="math-container">$3^3(3^3-1)+1$</span>, I can't somehow put the <span class="math-container">$1$</span> inside. </p>
Mark Bennet
2,906
<p><span class="math-container">$x^2-x+1$</span> factorises as <span class="math-container">$(x-\omega)(x+\omega^2)$</span> where <span class="math-container">$\omega^3=-1$</span>.</p> <p>Here <span class="math-container">$x=27$</span> and working modulo <span class="math-container">$27$</span> the cubes are <span class="math-container">$1,8,0,10,17,0,19,26, 0, 1, 8 \dots$</span> so <span class="math-container">$8^3\equiv -1$</span>, and we can take <span class="math-container">$\omega = 8, \omega^2=64\equiv 10$</span> and obtain the factorisation <span class="math-container">$$(27-8)(27+10)=19\times 37$$</span></p> <p>Since we have a cube root involved and the modulus is a power of <span class="math-container">$3$</span> there are some curiosities about the factorisation, but it checks out all the same.</p>
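A quick computational check of the claims above (my own sketch): $8^3 \equiv -1 \pmod{27}$, $8^2 \equiv 10 \pmod{27}$, and the factorisation $(27-8)(27+10)$ recovers $19 \times 37 = 703 = 3^6 - 3^3 + 1$.

```python
# Verify the modular facts used in the answer.
cubes = [n**3 % 27 for n in range(1, 9)]       # 1, 8, 0, 10, 17, 0, 19, 26
omega = 8                                       # ω = 8, since 8³ ≡ -1 (mod 27)
omega_sq = omega**2 % 27                        # ω² ≡ 10 (mod 27)

value = 3**6 - 3**3 + 1                         # 703
factorisation = (27 - omega) * (27 + omega_sq)  # (27-8)(27+10) = 19 · 37
```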
274,908
<p>I would like to plot a molecule in 3D and use different colors for the same atom type in the molecule. For example, by using:</p> <pre><code>MoleculePlot3D[Molecule[&quot;NC(=O)C[C@H](C(=O)O)N&quot;], ColorRules -&gt; {&quot;C&quot; -&gt; Black}] </code></pre> <p>all C atoms become Black. But how can I make, for example, the first C atom green, the second orange, etc?</p>
Domen
75,628
<p>Although I was suspecting Jason B. had an undocumented option up his sleeve, I will nevertheless post an alternative solution.</p> <pre><code>colors = {Green, Orange, Pink, Yellow}; (* Generate regular plot *) mol = MoleculePlot3D[Molecule[&quot;NC(=O)C[C@H](C(=O)O)N&quot;]] (* Extract atom indices of carbons *) ind = Flatten@Position[MoleculeValue[&quot;NC(=O)C[C@H](C(=O)O)N&quot;, &quot;FullAtomList&quot;], Atom[&quot;C&quot;]] (* Make colored spheres *) atoms = MapThread[{#1, Sphere[#2, radius]} &amp;, {colors, ind}] (* Replace carbons with colored spheres *) mol /. {{RGBColor[__], Sphere[ind, r_]} :&gt; (atoms /. radius -&gt; r)} </code></pre> <p><a href="https://i.stack.imgur.com/qC7X7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qC7X7.png" alt="Color molecule" /></a></p>
2,292,520
<p>I know that the logical negation of $$\neg(a \rightarrow b)= a \wedge \neg b $$ I am not clear what that means in the following simple setting:</p> <p>So its clear that $$x\geq 2 \to x^2\geq 4.$$ Now I can write the logical negation of $a\to b$ as $a \wedge \neg b$, but what does that intuitively mean? </p> <p>Suppose I want to prove "$a \wedge \neg b$", what do i need to prove mathematically?</p> <p>thnks</p>
Atakan Büyükoğlu
448,764
<p>Here $a$ is $x\geq 2$ and $\neg b$ is $x^2\lt 4$.</p> <p>So, the intuitive meaning of $a \wedge \neg b$ is that both of these happen at the same time: some $x$ satisfies $x\geq 2$ and $x^2\lt 4$ simultaneously.</p> <p>To prove $a \wedge \neg b$, you would exhibit an element $x$ of the domain satisfying both $x\geq 2$ and $x^2\lt 4$. However, these two conditions have no common elements in their solution sets, so this is impossible; that is why the negation of a true logical statement is false.</p>
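The propositional equivalence $\neg(a \rightarrow b) \equiv a \wedge \neg b$ itself can be checked over all truth assignments (a tiny verification sketch of my own):

```python
# Truth-table check that ¬(a → b) is logically equivalent to a ∧ ¬b.
def implies(a, b):
    return (not a) or b

equivalent = all(
    (not implies(a, b)) == (a and not b)
    for a in (False, True) for b in (False, True)
)
```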
2,710,703
<p>Given any non abelian group, how can I prove that every proper subgroup may be abelian? I know the definition of "abelian," but I don't know the difference between a group and a subgroup, nor do I understand how the two interconnect.</p>
N. S.
9,176
<p><strong>Hint</strong> If $H$ is a proper subgroup of $G$ then $|H|$ is a proper divisor of $|G|$.</p> <p><strong>Hint 2</strong> If all the proper divisors of $|G|$ are prime, then all the proper subgroups of $G$ are cyclic. </p>
1,245,775
<p>For example, if I have the fundamental solution set $\{x^2\}$, such that $y(x)=Cx^2$ is the solution to some unknown differential equation, is it guaranteed that only one such equation exists with this solution?</p> <p>I know I can work backwards to show that this solution satisfies $\dfrac{dy}{dx}-\dfrac{2}{x}y=0$, so this might not be a great example... Are there any cases where there can be more than one differential equation corresponding to any given solution?</p>
Archaick
191,173
<p>This generative model has some interesting properties. It has a fixed average degree with each node having degree at least one. It is also my intuition that it minimizes or comes very close to minimizing clustering for a fixed number of edges. I'm not familiar with any models which behave as the one you are describing. Good for you, it's hard to come up with new models nowadays =).</p>
1,245,775
<p>For example, if I have the fundamental solution set $\{x^2\}$, such that $y(x)=Cx^2$ is the solution to some unknown differential equation, is it guaranteed that only one such equation exists with this solution?</p> <p>I know I can work backwards to show that this solution satisfies $\dfrac{dy}{dx}-\dfrac{2}{x}y=0$, so this might not be a great example... Are there any cases where there can be more than one differential equation corresponding to any given solution?</p>
D Poole
83,727
<p>That random graph is denoted by $\mathbb{G}_{1-out}$. A common generalization is $\mathbb{G}_{k-out}$, where in step $j$, we choose $k$ vertices out of $V\setminus\{v_j\}$ and add the $k$ edges $\{v_j, \cdot\}$. Then at the end, delete multiple edges. </p> <p>One place that you can read about this model is Alan Frieze and Michal Karoński's new book on Random Graphs. An early copy of this book is at <a href="http://www.math.cmu.edu/~af1p/Book.html" rel="nofollow">http://www.math.cmu.edu/~af1p/Book.html</a></p>
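For reference, here is a minimal sampling sketch of the $\mathbb{G}_{1-out}$ model as described above (my own code, with arbitrary parameter choices): every vertex picks one uniformly random other vertex, and the resulting undirected edges are kept without multiplicity.

```python
import random

def sample_G_1_out(n, seed=0):
    """Each vertex j picks one uniform random neighbour k != j;
    edges are undirected and duplicates are discarded."""
    rng = random.Random(seed)
    edges = set()
    for j in range(n):
        k = rng.randrange(n - 1)
        if k >= j:            # skip j itself
            k += 1
        edges.add(frozenset((j, k)))
    return edges

edges = sample_G_1_out(50)
degrees = {v: sum(v in e for e in edges) for v in range(50)}
min_degree = min(degrees.values())
```

Note the two basic properties of the model: every vertex has degree at least one (it chose a neighbour itself), and there are at most $n$ edges.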
3,200,354
<p>How can I find the maximal value in the range <span class="math-container">$[-1,1]$</span> for <span class="math-container">$x$</span> and <span class="math-container">$y$</span> of the following expression:</p> <p><span class="math-container">$$\sin(\pi x)(y-3)/2.$$</span></p> <p>I tried taking the derivatives with respect to both <span class="math-container">$x$</span> and <span class="math-container">$y$</span>, but it seemed there could be an easier way.</p>
Kavi Rama Murthy
142,385
<p><span class="math-container">$B \subset \Phi^{-1}(B)$</span> is equivalent to <span class="math-container">$\Phi (B) \subset B$</span>. Both say the same thing: whenever <span class="math-container">$b \in B$</span> we also have <span class="math-container">$\Phi (b) \in B$</span>.</p>
878,961
<p>I'm asked to prove a theorem (if that is the right word) about double derivatives. I'm still struggling with understanding Leibniz notation and I could use a push in the right direction. It's easy enough for me to differentiate the function when I write it down as $f(g(x))$ but not so much with Leibniz notation.</p> <p>The problem is as follows: If $y = f(u)$ and $u = g(x)$, where $f$ and $g$ are twice differentiable functions, show that:</p> <p>$$\frac{d^2y}{dx^2} = \frac{d^2y}{du^2}\left(\frac{du}{dx}\right)^2 + \frac{dy}{du}\frac{d^2u}{dx^2}$$</p> <p>Could someone fill me in on the details of each part of this equation? For example, why is the second derivative of $\frac{dy}{dx}$ written as $\frac{d^2y}{dx^2}$ instead of $\frac{d^2y}{d^2x}$?</p> <p>Thanks!</p>
Avitus
80,800
<p>Let $y=f\circ g$ and $y(x)=f(g(x))$, with $u:=g(x)$. Then</p> <p>$$\frac{dy}{dx}:=\frac{d(f\circ g)}{dx}=\frac{df}{du}\frac{du}{dx};$$</p> <p>whenever we write $\frac{df}{du}$ we mean $\frac{df(u)}{du}$, i.e. $\frac{dy}{du}$. Introducing the function $h(u):=\frac{df(u)}{du}$ we arrive at</p> <p>$$\frac{d^2y}{dx^2}=\frac{d}{dx}\left(h(u)\frac{dg}{dx}\right)= \frac{dh}{dx}\frac{dg}{dx}+ h\frac{d^2g}{dx^2}=\\ \frac{dh}{du}\frac{du}{dx}\frac{dg}{dx}+ \frac{df}{du}\frac{d^2g}{dx^2}=\frac{d^2f}{du^2}\left(\frac{dg}{dx}\right)^2+ \frac{df}{du}\frac{d^2g}{dx^2}=\frac{d^2y}{du^2}\left(\frac{du}{dx}\right)^2+ \frac{dy}{du}\frac{d^2u}{dx^2}.$$</p>
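A concrete check of the final identity (my addition, with a hand-picked example): take $f(u)=u^2$ and $g(x)=x^3$, so $y=x^6$ and $y''=30x^4$; the right-hand side gives $2(3x^2)^2 + 2x^3\cdot 6x = 18x^4 + 12x^4 = 30x^4$ as well.

```python
# Check d²y/dx² = f''(u)·(g'(x))² + f'(u)·g''(x) for f(u)=u², g(x)=x³.
def rhs(x):
    u = x**3
    f1, f2 = 2 * u, 2          # f'(u) = 2u, f''(u) = 2
    g1, g2 = 3 * x**2, 6 * x   # g'(x) = 3x², g''(x) = 6x
    return f2 * g1**2 + f1 * g2

def lhs(x):
    return 30 * x**4           # y = (x³)² = x⁶, so y'' = 30x⁴

agree = all(abs(lhs(x) - rhs(x)) < 1e-9 for x in [-2.0, -0.5, 0.0, 1.0, 3.0])
```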
3,554,891
<p>Let's take a look back at this familiar "Law of Cosines":</p> <blockquote> <p>Consider the triangle <span class="math-container">$\triangle ABC$</span>. Let <span class="math-container">$a = BC, b = AC, c = AB$</span>; <span class="math-container">$\angle A, \angle B, \angle C$</span> are the angles of the triangle opposite to sides <span class="math-container">$a, b, c,$</span> respectively. By the Law of Cosines: <span class="math-container">$$a^{2} = b^{2} + c^{2} - 2bc \cdot \cos \angle A$$</span></p> </blockquote> <p>This formula applies to any triangle.</p> <p>But what about quadrilaterals? Is there a formula showing the relationship between sides and angles, similar to the Law of Cosines? Can we extend the Law of Cosines?</p> <p>Here is a way to approach the formula for quadrilaterals (<em>it's not (really) a proof</em>):</p> <blockquote> <p>Given the quadrilateral ABCD, let <span class="math-container">$a = BC, b = CD, c = AB, d = AD$</span>. Let <span class="math-container">$E = AB \cap CD$</span> and <span class="math-container">$G = AC \cap BD$</span>.</p> </blockquote> <p>Let us consider <span class="math-container">$\triangle ABC$</span> as a "special quadrilateral" (where <span class="math-container">$d=0$</span>). Then by the Law of Cosines:</p> <p><span class="math-container">$$a^{2} = b^{2} + c^{2} - 2bc \cdot \cos \angle BEC = b^{2} + c^{2} - 2bc \cdot \cos \angle BGC$$</span></p> <p>(because when <span class="math-container">$d=0$</span>, <span class="math-container">$E \equiv G \equiv A \Rightarrow \angle BEC = \angle BGC$</span>)</p> <p>Notice that when <span class="math-container">$d=0$</span>, then <span class="math-container">$CA = CD = CE = b$</span>; <span class="math-container">$BD = BE = BA = c$</span>.
So we can guess that the general formula for a quadrilateral will be one of these two formulas:</p> <blockquote> <p><span class="math-container">$$ a^{2} + Kd^{2} = b^{2} + c^{2} - 2 \cdot BE \cdot CE \cdot \cos \angle BEC \text{ (1)}$$</span> <span class="math-container">$$ a^{2} + Kd^{2} = b^{2} + c^{2} - 2 \cdot BD \cdot CA \cdot \cos \angle BGC \text{ (2)}$$</span></p> </blockquote> <p>(where <span class="math-container">$K$</span> is a constant)</p> <p>The reason we add <span class="math-container">$Kd^{2}$</span> is to make the formula homogeneous (since the Law of Cosines is also homogeneous), and when <span class="math-container">$d=0$</span>, the <span class="math-container">$Kd^{2}$</span> term vanishes. Moreover, intuitively, if the formula contains <span class="math-container">$\angle BEC$</span>, then the two sides that multiply its cosine have to be <span class="math-container">$BE$</span> and <span class="math-container">$CE$</span>. Otherwise, those two sides will be <span class="math-container">$BD$</span> and <span class="math-container">$CA$</span>, multiplied by <span class="math-container">$\cos \angle BGC$</span>.</p> <p>To see which one is possibly correct, we can try to apply the formula to a special quadrilateral: a square. In a square, <span class="math-container">$a=b=c=d$</span>, "<span class="math-container">$BE = CE = \infty$</span>", "<span class="math-container">$\angle BEC = \infty$</span>", <span class="math-container">$\angle BGC = 90^{\circ}$</span>. Applying <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span>:</p> <p><span class="math-container">$$(1): a^{2} + Ka^{2} = a^{2} + a^{2} - \infty$$</span> <span class="math-container">$$(2): a^{2} + Ka^{2} = a^{2} + a^{2}$$</span></p> <p><span class="math-container">$(1)$</span> is definitely wrong.
The formula <span class="math-container">$(2)$</span> can be true if <span class="math-container">$K=1$</span>, so let us rewrite it:</p> <p><span class="math-container">$$a^{2} + d^{2} = b^{2} + c^{2} - 2 \cdot BD \cdot CA \cdot \cos \angle BGC$$</span></p> <p>To be sure that this formula is correct, let's apply it to another quadrilateral, this time a rectangle in which <span class="math-container">$\angle BGC = 60^{\circ}$</span>. We have <span class="math-container">$a=d, b=c=a\sqrt{3}$</span>, <span class="math-container">$BD = AC = 2a$</span>. Applying the formula that we've just found, we get:</p> <p><span class="math-container">$$a^{2} + a^{2} = 3a^{2} + 3a^{2} - 2 \cdot 4a^{2} \cdot \frac{1}{2}$$</span></p> <p>And this is true. You can verify it with some other quadrilaterals, and it will also hold. So, our new extended "Law of Cosines" is:</p> <blockquote> <p><span class="math-container">$$a^{2} + d^{2} = b^{2} + c^{2} - 2 \cdot BD \cdot CA \cdot \cos \angle BGC$$</span></p> </blockquote> <p>That seems fine. But</p> <blockquote> <p><em>Is there a proof of the formula above</em>?</p> </blockquote> <p>Now, my main question (and my main focus) is:</p> <blockquote> <p><em><strong>Can we extend the formula (find a general formula) for polygons with n sides</strong></em>?</p> </blockquote> <p>This question is what I'm looking for (<em>this isn't a homework question</em>). I'm really curious about this. If you have an answer (or even just an idea of how to approach it), please share it. </p> <p>Thank you a lot and have a nice day :D</p>
Michael Rozenberg
190,319
<p>Let <span class="math-container">$\vec{BC}=\vec{a},$</span> <span class="math-container">$\vec{CD}=\vec{b},$</span> <span class="math-container">$\vec{DA}=\vec{d}$</span> and <span class="math-container">$\vec{AB}=\vec{c}.$</span></p> <p>Thus, since <span class="math-container">$$\vec{a}+\vec{c}=-\vec{b}-\vec{d},$$</span> we obtain: <span class="math-container">$$(\vec{a}+\vec{c})^2=(\vec{b}+\vec{d})^2,$$</span> which gives <span class="math-container">$$\vec{a}\vec{c}-\vec{b}\vec{d}=\frac{1}{2}(b^2+d^2-a^2-c^2).$$</span> On the other hand, <span class="math-container">$$BD\cdot AC\cos\measuredangle BGC=\vec{DB}\cdot\vec{AC}=(\vec{c}+\vec{d})(\vec{c}+\vec{a})=$$</span> <span class="math-container">$$=c^2+\vec{a}\vec{c}+\vec{d}(\vec{a}+\vec{c})=c^2+\vec{a}\vec{c}-\vec{d}(\vec{b}+\vec{d})=c^2-d^2+\vec{a}\vec{c}-\vec{b}\vec{d}=$$</span> <span class="math-container">$$=c^2-d^2+\frac{1}{2}(b^2+d^2-a^2-c^2)=\frac{1}{2}(b^2-d^2+c^2-a^2)$$</span> and we are done!</p>
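The identity can also be confirmed numerically (my own check, not part of the proof): since $\vec{DB}\cdot\vec{AC} = BD\cdot AC\cos\measuredangle BGC$, the formula reads $a^2+d^2 = b^2+c^2 - 2\,\vec{DB}\cdot\vec{AC}$ for any four points, with the question's labeling $a=BC$, $b=CD$, $c=AB$, $d=AD$.

```python
import math

def check(A, B, C, D):
    # Side lengths in the question's labeling: a = BC, b = CD, c = AB, d = AD.
    a, b, c, d = math.dist(B, C), math.dist(C, D), math.dist(A, B), math.dist(A, D)
    # DB · AC = BD · CA · cos∠BGC, where G = AC ∩ BD.
    dot = (B[0] - D[0]) * (C[0] - A[0]) + (B[1] - D[1]) * (C[1] - A[1])
    return abs((a**2 + d**2) - (b**2 + c**2 - 2 * dot))

errors = [
    check((0, 0), (2, 0), (2, 1), (0, 1)),   # rectangle
    check((0, 0), (4, 0), (5, 3), (1, 2)),   # generic convex quadrilateral
]
```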
3,554,891
<p>Let's take a look back at this familiar "Law of Cosines":</p> <blockquote> <p>Consider the triangle <span class="math-container">$\triangle ABC$</span>. Let <span class="math-container">$a = BC, b = AC, c = AB$</span>; <span class="math-container">$\angle A, \angle B, \angle C$</span> are the angles of the triangle opposite to sides <span class="math-container">$a, b, c,$</span> respectively. By the Law of Cosines: <span class="math-container">$$a^{2} = b^{2} + c^{2} - 2bc \cdot \cos \angle A$$</span></p> </blockquote> <p>This formula applies to any triangle.</p> <p>But what about quadrilaterals? Is there a formula showing the relationship between sides and angles, similar to the Law of Cosines? Can we extend the Law of Cosines?</p> <p>Here is a way to approach the formula for quadrilaterals (<em>it's not (really) a proof</em>):</p> <blockquote> <p>Given the quadrilateral ABCD, let <span class="math-container">$a = BC, b = CD, c = AB, d = AD$</span>. Let <span class="math-container">$E = AB \cap CD$</span> and <span class="math-container">$G = AC \cap BD$</span>.</p> </blockquote> <p>Let us consider <span class="math-container">$\triangle ABC$</span> as a "special quadrilateral" (where <span class="math-container">$d=0$</span>). Then by the Law of Cosines:</p> <p><span class="math-container">$$a^{2} = b^{2} + c^{2} - 2bc \cdot \cos \angle BEC = b^{2} + c^{2} - 2bc \cdot \cos \angle BGC$$</span></p> <p>(because when <span class="math-container">$d=0$</span>, <span class="math-container">$E \equiv G \equiv A \Rightarrow \angle BEC = \angle BGC$</span>)</p> <p>Notice that when <span class="math-container">$d=0$</span>, then <span class="math-container">$CA = CD = CE = b$</span>; <span class="math-container">$BD = BE = BA = c$</span>.
So we can guess that the general formula for a quadrilateral will be one of these two formulas:</p> <blockquote> <p><span class="math-container">$$ a^{2} + Kd^{2} = b^{2} + c^{2} - 2 \cdot BE \cdot CE \cdot \cos \angle BEC \text{ (1)}$$</span> <span class="math-container">$$ a^{2} + Kd^{2} = b^{2} + c^{2} - 2 \cdot BD \cdot CA \cdot \cos \angle BGC \text{ (2)}$$</span></p> </blockquote> <p>(where <span class="math-container">$K$</span> is a constant)</p> <p>The reason we add <span class="math-container">$Kd^{2}$</span> is to make the formula homogeneous (since the Law of Cosines is also homogeneous), and when <span class="math-container">$d=0$</span>, the <span class="math-container">$Kd^{2}$</span> term vanishes. Moreover, intuitively, if the formula contains <span class="math-container">$\angle BEC$</span>, then the two sides that multiply its cosine have to be <span class="math-container">$BE$</span> and <span class="math-container">$CE$</span>. Otherwise, those two sides will be <span class="math-container">$BD$</span> and <span class="math-container">$CA$</span>, multiplied by <span class="math-container">$\cos \angle BGC$</span>.</p> <p>To see which one is possibly correct, we can try to apply the formula to a special quadrilateral: a square. In a square, <span class="math-container">$a=b=c=d$</span>, "<span class="math-container">$BE = CE = \infty$</span>", "<span class="math-container">$\angle BEC = \infty$</span>", <span class="math-container">$\angle BGC = 90^{\circ}$</span>. Applying <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span>:</p> <p><span class="math-container">$$(1): a^{2} + Ka^{2} = a^{2} + a^{2} - \infty$$</span> <span class="math-container">$$(2): a^{2} + Ka^{2} = a^{2} + a^{2}$$</span></p> <p><span class="math-container">$(1)$</span> is definitely wrong.
The formula <span class="math-container">$(2)$</span> can be true if <span class="math-container">$K=1$</span>, so let us rewrite it:</p> <p><span class="math-container">$$a^{2} + d^{2} = b^{2} + c^{2} - 2 \cdot BD \cdot CA \cdot \cos \angle BGC$$</span></p> <p>To be sure that this formula is correct, let's apply it to another quadrilateral, this time a rectangle in which <span class="math-container">$\angle BGC = 60^{\circ}$</span>. We have <span class="math-container">$a=d, b=c=a\sqrt{3}$</span>, <span class="math-container">$BD = AC = 2a$</span>. Applying the formula that we've just found, we get:</p> <p><span class="math-container">$$a^{2} + a^{2} = 3a^{2} + 3a^{2} - 2 \cdot 4a^{2} \cdot \frac{1}{2}$$</span></p> <p>And this is true. You can verify it with some other quadrilaterals, and it will also hold. So, our new extended "Law of Cosines" is:</p> <blockquote> <p><span class="math-container">$$a^{2} + d^{2} = b^{2} + c^{2} - 2 \cdot BD \cdot CA \cdot \cos \angle BGC$$</span></p> </blockquote> <p>That seems fine. But</p> <blockquote> <p><em>Is there a proof of the formula above</em>?</p> </blockquote> <p>Now, my main question (and my main focus) is:</p> <blockquote> <p><em><strong>Can we extend the formula (find a general formula) for polygons with n sides</strong></em>?</p> </blockquote> <p>This question is what I'm looking for (<em>this isn't a homework question</em>). I'm really curious about this. If you have an answer (or even just an idea of how to approach it), please share it. </p> <p>Thank you a lot and have a nice day :D</p>
mathlove
78,967
<p>Let us consider convex <span class="math-container">$n$</span>-gon <span class="math-container">$A_1A_2\cdots A_n$</span> where <span class="math-container">$\overline{A_jA_{j+1}}=a_j$</span> with <span class="math-container">$\angle{A_jA_{j+1}A_{j+2}}=\theta_j$</span>.</p> <p>Now, let us put our <span class="math-container">$n$</span>-gon on the <span class="math-container">$xy$</span> plane in the following way :</p> <ul> <li><p><span class="math-container">$A_1$</span> is at the origin</p></li> <li><p>The side <span class="math-container">$A_1A_2$</span> is on the <span class="math-container">$x$</span>-axis</p></li> <li><p>The <span class="math-container">$x$</span>-coordinate of <span class="math-container">$A_2$</span> is positive</p></li> <li><p>The <span class="math-container">$y$</span>-coordinate of <span class="math-container">$A_3$</span> is positive.</p></li> </ul> <p>Here, if we consider the projection of each side on the <span class="math-container">$x$</span>-axis, then we get <span class="math-container">$$a_1+a_2\cos(\pi-\theta_1)+a_3\cos(2\pi-(\theta_1+\theta_2))+\cdots +a_n\cos((n-1)\pi-(\theta_1+\theta_2+\cdots +\theta_{n-1}))=0$$</span> which can be written as <span class="math-container">$$a_1=\sum_{k=1}^{n-1}(-1)^{k+1}a_{k+1}\cos\bigg(\sum_{j=1}^{k}\theta_j\bigg)\tag1$$</span></p> <p>Similarly, if we consider the projection of each side on the <span class="math-container">$y$</span>-axis, then we get <span class="math-container">$$a_2\sin(\pi-\theta_1)+a_3\sin(2\pi-(\theta_1+\theta_2))+\cdots +a_n\sin((n-1)\pi-(\theta_1+\theta_2+\cdots +\theta_{n-1}))=0$$</span> which can be written as <span class="math-container">$$0=\sum_{k=1}^{n-1}(-1)^{k+1}a_{k+1}\sin\bigg(\sum_{j=1}^{k}\theta_j\bigg)\tag2$$</span></p> <p>From <span class="math-container">$(1)(2)$</span>, we obtain <span 
class="math-container">$$a_1^2+0^2=\bigg(\sum_{k=1}^{n-1}(-1)^{k+1}a_{k+1}\cos\bigg(\sum_{j=1}^{k}\theta_j\bigg)\bigg)^2+\bigg(\sum_{k=1}^{n-1}(-1)^{k+1}a_{k+1}\sin\bigg(\sum_{j=1}^{k}\theta_j\bigg)\bigg)^2$$</span> which can be written as <span class="math-container">$$a_1^2=\sum_{k=1}^{n-1}a_{k+1}^2+\sum_{1\le p\lt q\le n-1}\bigg(2(-1)^{p+1}a_{p+1}\cos\bigg(\sum_{j=1}^{p}\theta_j\bigg)\times (-1)^{q+1}a_{q+1}\cos\bigg(\sum_{j=1}^{q}\theta_j\bigg)+2(-1)^{p+1}a_{p+1}\sin\bigg(\sum_{j=1}^{p}\theta_j\bigg)\times (-1)^{q+1}a_{q+1}\sin\bigg(\sum_{j=1}^{q}\theta_j\bigg)\bigg)$$</span> i.e. <span class="math-container">$$a_1^2=\sum_{k=1}^{n-1}a_{k+1}^2+\sum_{1\le p\lt q\le n-1}2(-1)^{p+q}a_{p+1}a_{q+1}\bigg(\cos\bigg(\sum_{j=1}^{p}\theta_j\bigg)\cos\bigg(\sum_{j=1}^{q}\theta_j\bigg)+\sin\bigg(\sum_{j=1}^{p}\theta_j\bigg)\sin\bigg(\sum_{j=1}^{q}\theta_j\bigg)\bigg)$$</span> i.e. <span class="math-container">$$a_1^2=\sum_{k=1}^{n-1}a_{k+1}^2+\sum_{1\le p\lt q\le n-1}2(-1)^{p+q}a_{p+1}a_{q+1}\cos\bigg(\sum_{j=1}^{q}\theta_j-\sum_{j=1}^{p}\theta_j\bigg)$$</span> Therefore, we get <span class="math-container">$$\color{red}{a_1^2=\sum_{k=1}^{n-1}a_{k+1}^2+\sum_{1\le p\lt q\le n-1}2(-1)^{p+q}a_{p+1}a_{q+1}\cos\bigg(\sum_{j=p+1}^{q}\theta_j\bigg)}$$</span></p> <hr> <p>For example, for pentagon <span class="math-container">$A_1A_2A_3A_4A_5\ (n=5)$</span>, we get</p> <p><span class="math-container">$$\color{red}{a_1^2=a_2^2+a_3^2+a_4^2+a_5^2-2a_{2}a_{3}\cos(\theta_2)+2a_{2}a_{4}\cos(\theta_2+\theta_3)-2a_{2}a_{5}\cos(\theta_2+\theta_3+\theta_4)-2a_{3}a_{4}\cos(\theta_3)+2a_{3}a_{5}\cos(\theta_3+\theta_4)-2a_{4}a_{5}\cos(\theta_4)}$$</span></p> <hr> <p><strong>Added</strong> : One can get several formulas.</p> <p><strong>For quadrilateral <span class="math-container">$A_1A_2A_3A_4\ (n=4)$</span> :</strong></p> <ul> <li><p>If we change <span class="math-container">$(1)(2)$</span> to <span class="math-container">$$(1)\implies 
a_4\cos(\theta_4)=a_1-a_2\cos(\theta_1)+a_3\cos(\theta_1+\theta_2)$$</span><span class="math-container">$$(2)\implies a_4\sin(\theta_4)=a_2\sin(\theta_1)-a_3\sin(\theta_1+\theta_2)$$</span>squaring and adding give <span class="math-container">$$a_4^2=a_1^2+a_2^2+a_3^2-2a_1a_2\cos(\theta_1)-2a_2a_3\cos(\theta_2)+2a_1a_3\cos(\theta_1+\theta_2)$$</span></p></li> <li><p>If we change <span class="math-container">$(1)(2)$</span> to <span class="math-container">$$(1)\implies a_1+a_3\cos(\theta_1+\theta_2)=a_2\cos(\theta_1)+a_4\cos(\theta_4)$$</span><span class="math-container">$$(2)\implies a_3\sin(\theta_1+\theta_2)=a_2\sin(\theta_1)-a_4\sin(\theta_4)$$</span>squaring and adding give <span class="math-container">$$a_1^2+a_3^2+2a_1a_3\cos(\theta_1+\theta_2)=a_2^2+a_4^2+2a_2a_4\cos(\theta_1+\theta_4)$$</span></p></li> <li><p>If we change <span class="math-container">$(1)(2)$</span> to <span class="math-container">$$(1)\implies a_3\cos(\theta_1+\theta_2)-a_2\cos(\theta_1)=a_4\cos(\theta_4)-a_1$$</span><span class="math-container">$$(2)\implies a_3\sin(\theta_1+\theta_2)-a_2\sin(\theta_1)=-a_4\sin(\theta_4)$$</span>squaring and adding give <span class="math-container">$$a_2^2+a_3^2-2a_2a_3\cos(\theta_2)=a_1^2+a_4^2-2a_1a_4\cos(\theta_4)$$</span></p></li> </ul> <p><strong>For pentagon <span class="math-container">$A_1A_2A_3A_4A_5\ (n=5)$</span> :</strong></p> <ul> <li>If we change <span class="math-container">$(1)(2)$</span> to <span class="math-container">$$(1)\implies a_1-a_2\cos(\theta_1)+a_3\cos(\theta_1+\theta_2)=a_5\cos(\theta_5)-a_4\cos(\theta_4+\theta_5)$$</span><span class="math-container">$$(2)\implies a_2\sin(\theta_1)-a_3\sin(\theta_1+\theta_2)=a_5\sin(\theta_5)-a_4\sin(\theta_4+\theta_5)$$</span>squaring and adding give <span class="math-container">$$a_1^2+a_2^2+a_3^2-2a_1a_2\cos(\theta_1)-2a_2a_3\cos(\theta_2)+2a_1a_3\cos(\theta_1+\theta_2)=a_4^2+a_5^2-2a_4a_5\cos(\theta_4)$$</span></li> </ul>
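The boxed general formula can be tested numerically. The sketch below (my own addition, not part of the answer) recomputes side lengths $a_j$ and interior angles $\theta_j$ from coordinates of a convex polygon listed counterclockwise, then checks the identity for a rectangle and for an irregular convex pentagon.

```python
import math

def sides_and_angles(vertices):
    # With vertices[0] = A_1, ..., listed counterclockwise:
    # a[j] is a_{j+1} = |A_{j+1} A_{j+2}| and theta[j] is θ_{j+1},
    # the interior angle at A_{j+2}, in the answer's 1-based notation.
    n = len(vertices)
    a = [math.dist(vertices[j], vertices[(j + 1) % n]) for j in range(n)]
    theta = []
    for j in range(n):
        P, Q, R = vertices[j], vertices[(j + 1) % n], vertices[(j + 2) % n]
        u = (P[0] - Q[0], P[1] - Q[1])
        v = (R[0] - Q[0], R[1] - Q[1])
        cos_t = (u[0] * v[0] + u[1] * v[1]) / (math.hypot(*u) * math.hypot(*v))
        theta.append(math.acos(max(-1.0, min(1.0, cos_t))))
    return a, theta

def law_of_cosines_error(vertices):
    # |a_1² - RHS| for the boxed generalized formula.
    a, theta = sides_and_angles(vertices)
    n = len(vertices)
    rhs = sum(a[k] ** 2 for k in range(1, n))
    for p in range(1, n - 1):          # 1-based p = this index
        for q in range(p + 1, n):      # 1-based q = this index
            rhs += 2 * (-1) ** (p + q) * a[p] * a[q] * math.cos(sum(theta[p:q]))
    return abs(a[0] ** 2 - rhs)

rect_err = law_of_cosines_error([(0, 0), (2, 0), (2, 1), (0, 1)])
pent_err = law_of_cosines_error([(0, 0), (4, 0), (5, 2), (2, 4), (-1, 2)])
```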
129
<p>Is there some criterion for whether a space has the homotopy type of a closed manifold (smooth or topological)? Poincare duality is an obvious necessary condition, but it's almost certainly not sufficient. Are there any other special homotopical properties of manifolds?</p>
Martin O
86
<p>In surgery theory (which is basically a whole field of mathematics that tries to answer questions like the one above), the next obstruction to the existence of a manifold in the homotopy type is that every finite complex with Poincaré duality is the base space of a certain distinguished fibration (Spivak normal fibration) whose fibre is homotopy equivalent to a sphere. (In order to get a unique such fibration, identify two fibrations if they are fiber homotopy equivalent or if one is obtained from the other by fiberwise suspension.)</p> <p>For manifolds, this fibration is the spherization of the normal bundle, so the Spivak normal fibration comes from a vector bundle. This is invariant under homotopy equivalence. Thus the next obstruction is: the Spivak normal fibration must come from a vector bundle.</p> <p>If I remember right, it was Novikov who first proved that for simply-connected spaces of odd dimension at least 5, this is the only further obstruction.</p> <p>In general, there is a further obstruction with values in a group <span class="math-container">$L_n(\pi_1,w)$</span> which depends on the fundamental group, first Stiefel-Whitney class and the dimension. See Lück's notes on surgery theory at <a href="https://www.him.uni-bonn.de/lueck/data/ictp.pdf" rel="noreferrer">https://www.him.uni-bonn.de/lueck/data/ictp.pdf</a></p>
202,699
<p><a href="https://i.stack.imgur.com/UqPw4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UqPw4.png" alt="enter image description here"></a></p> <p>I am trying to solve for "t" at the various "x" from the function </p> <pre><code>f[t_, x_] = 0.5 Erfc[(x - 0.0236454911650369 t)/Sqrt[4*0.0108274976811351*t]] + 0.5 Exp[0.0236454911650369 x/0.0108274976811351]* Erfc[(x + 0.0236454911650369 t)/Sqrt[4*0.0108274976811351*t]]; Table[{x, t /. NSolve[{f[t, x] == 0.05}, t, Reals ][[1]]}, {x, 0.5, 10, 0.5}] </code></pre> <p>and get coordinates as (x,t). But it does not work, as shown in the picture. Please give me any advice on how to solve this problem. Thank you </p>
user64494
7,152
<p>How about the following?</p> <pre><code>Plot3D[Re[SphericalHarmonicY[3, 1,\[Theta],\[Phi]]] /. {\[Phi] -&gt; ArcSin[x*y], \[Theta]-&gt;2*ArcTan[x*Sqrt[1 - (x/4)^2-(y/2)^2]/ 2/(2*(1 - (x/4)^2 - (y/2)^2) - 1)]}, {x, -4, 4}, {y, -Sqrt[1 - (x/4)^2], Sqrt[1 - (x/4)^2]}, BoxRatios -&gt; Automatic] </code></pre> <p><a href="https://i.stack.imgur.com/3TNOD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3TNOD.png" alt="enter image description here"></a> The formulas from <a href="https://en.wikipedia.org/wiki/Hammer_projection" rel="nofollow noreferrer">Wiki</a> are used. Compare with</p> <pre><code>Plot3D[Re[SphericalHarmonicY[3, 1, \[Theta], \[Phi]]], {\[Theta], 0, Pi}, {\[Phi], 0, 2*Pi}] </code></pre> <p><a href="https://i.stack.imgur.com/gvmAr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gvmAr.png" alt="enter image description here"></a></p>
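The substitution above relies on the Hammer projection formulas from the Wikipedia page. As an independent sanity check, here is a small Python sketch of the forward and inverse maps exactly as given there (with <code>atan2</code> used for the longitude so the correct quadrant is picked), verifying that they round-trip:

```python
import math

def hammer(lon, lat):
    # forward Hammer projection, lon in (-pi, pi), lat in (-pi/2, pi/2)
    d = math.sqrt(1.0 + math.cos(lat) * math.cos(lon / 2.0))
    x = 2.0 * math.sqrt(2.0) * math.cos(lat) * math.sin(lon / 2.0) / d
    y = math.sqrt(2.0) * math.sin(lat) / d
    return x, y

def hammer_inv(x, y):
    # inverse Hammer projection; atan2 handles the quadrant of lon
    z = math.sqrt(1.0 - (x / 4.0) ** 2 - (y / 2.0) ** 2)
    lon = 2.0 * math.atan2(z * x, 2.0 * (2.0 * z * z - 1.0))
    lat = math.asin(z * y)
    return lon, lat
```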
1,441,624
<p>Let <span class="math-container">$a$</span>, <span class="math-container">$b$</span> and <span class="math-container">$c$</span> be elements of a group <span class="math-container">$G$</span>, how can I prove that <span class="math-container">$abc$</span> and <span class="math-container">$cba$</span> do not necessarily have the same order?</p> <p>I know that a counterexample cannot be abelian (since then <span class="math-container">$abc = cba$</span>), but I am unsure how to start otherwise.</p> <p>Also, it is impossible to find a counterexample if we change the order to <span class="math-container">$cab$</span>, see here: <a href="https://math.stackexchange.com/q/2238562">Let <span class="math-container">$G$</span> be a group. Show that <span class="math-container">$\forall a, b, c \in G$</span>, the elements <span class="math-container">$abc, bca, cab$</span> have the same order.</a>.</p>
Chappers
221,811
<p>Consider the quaternion group, $$ H_{8} = \langle \pm 1,\pm i,\pm j, \pm k \mid i^2=j^2=k^2=ijk=-1 \rangle. $$ Then $ ijk = -1 $ has order $2$ (obviously, since $(-1)^2 = 1$), but $ jik = -ijk = 1 $ has order $1$.</p> <p>(To see this, note that $ij=(ijk)(-k)=k$, while $ji = ji(-ijk) = -j(i^2)jk = j^2k = -k = -ij$, hence $jik = -ijk = 1$.)</p>
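The claim is also easy to machine-check. A small Python sketch, representing quaternions $a+bi+cj+dk$ as integer 4-tuples with the Hamilton product:

```python
def qmul(p, q):
    # Hamilton product of quaternions (a + b*i + c*j + d*k)
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def order(q):
    # order of q in Q8; coordinates are integers, so equality is exact
    one = (1, 0, 0, 0)
    p, n = q, 1
    while p != one:
        p = qmul(p, q)
        n += 1
    return n

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
ijk = qmul(qmul(i, j), k)   # product i*j*k
jik = qmul(qmul(j, i), k)   # product j*i*k
```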
1,102,638
<p>Let $n\in \mathbb{N}$. Can someone help me prove this by induction:</p> <p>$$\sum _{i=0}^{n}{i} =\frac { n\left( n+1 \right) }{ 2 } .$$</p>
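(Before attempting the proof, a quick sanity check of the closed form in Python, with exact integer arithmetic:)

```python
def lhs(n):
    # left-hand side: 0 + 1 + 2 + ... + n
    return sum(range(n + 1))

def rhs(n):
    # claimed closed form n(n+1)/2; n(n+1) is even, so // is exact
    return n * (n + 1) // 2
```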
k170
161,538
<p>Here are the steps $$ \lim \limits_{x \to 0}\left[{\frac{\sqrt{1 + x + x^2} - 1}{x}}\right] $$ $$ =\lim \limits_{x \to 0}\left[{\frac{\sqrt{1 + x + x^2} - 1}{x}}\right] \left[{\frac{\sqrt{1 + x + x^2} + 1}{\sqrt{1 + x + x^2} + 1}}\right] $$ $$ =\lim \limits_{x \to 0}\left[{\frac{1 + x + x^2 - 1}{ x\left(\sqrt{1 + x + x^2} + 1\right)}}\right] $$ $$ =\lim \limits_{x \to 0}\left[{\frac{x\left(1+ x\right)}{x\left(\sqrt{1 + x + x^2} + 1\right)}}\right] $$ $$ =\lim \limits_{x \to 0}\left[{\frac{1+ x}{\sqrt{1 + x + x^2} + 1}}\right] $$ $$ ={\frac{1+ 0}{\sqrt{1 + 0+ 0} + 1}} $$ $$ ={\frac{1}{\sqrt{1} + 1}} $$ $$ =\frac{1}{2} $$</p>
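A quick numerical confirmation of the result (Python): evaluating the original quotient at shrinking $x$ approaches $1/2$.

```python
import math

def g(x):
    # the original quotient (sqrt(1 + x + x^2) - 1) / x
    return (math.sqrt(1.0 + x + x * x) - 1.0) / x

# evaluate at x = 0.1, 0.01, ..., 1e-7
samples = [g(10.0 ** -e) for e in range(1, 8)]
```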
2,042,428
<p>If I'm correct, hidden induction is when we use something along the lines of "etc." or "$\dots$" in a proof, without making the underlying induction explicit. Are there any examples of when this would be appropriate (or when it's not appropriate but used anyway)?</p>
Mathematician 42
155,917
<p>Here is an example. Suppose that $A$ is a diagonalizable matrix, i.e. $A=P^{-1}DP$ where $D$ is some diagonal matrix. Then $A^k=P^{-1}D^kP$. Indeed, we have that $$A^k=(P^{-1}DP)^k=P^{-1}D(PP^{-1})D(PP^{-1})\dots (PP^{-1})DP=P^{-1}D^kP.$$ Here we actually used induction in the dots. There are many examples of this kind.</p>
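The telescoping identity can be checked concretely; a minimal $2\times 2$ integer example in Python (I chose $P$ with an exact integer inverse, so all arithmetic is exact):

```python
def matmul(X, Y):
    # 2x2 integer matrix product
    return [[sum(X[i][t] * Y[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(X, k):
    # X^k by repeated multiplication (k >= 0)
    R = [[1, 0], [0, 1]]
    for _ in range(k):
        R = matmul(R, X)
    return R

P     = [[1, 1], [0, 1]]
P_inv = [[1, -1], [0, 1]]              # exact integer inverse of P
D     = [[2, 0], [0, 3]]               # diagonal matrix
A     = matmul(matmul(P_inv, D), P)    # A = P^{-1} D P, so A is diagonalizable
```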
3,065,818
<blockquote> <p>If <span class="math-container">$$z=\dfrac{\sqrt{3}-i}{2}$$</span> and <span class="math-container">$$(z^{95}+i^{67})^{94}=z^n,$$</span> find the smallest positive integral value of <span class="math-container">$n$</span>, where <span class="math-container">$i=\sqrt{-1}$</span>.</p> </blockquote> <p><span class="math-container">$\text{My Attempt:}$</span> First I converted <span class="math-container">$z$</span> to Euler's form, getting <span class="math-container">$z=e^{-i\pi/6}$</span>. Then I raised <span class="math-container">$z$</span> to the 95th power, but I am stuck and unable to proceed. Help.</p>
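(A quick numerical check of my Euler-form step in Python — it also confirms that $z^{12}=1$, so any exponent on $z$ can be reduced mod $12$:)

```python
import cmath
import math

z = (math.sqrt(3) - 1j) / 2

# claimed Euler form: z = e^{-i*pi/6}
euler_form = cmath.exp(-1j * math.pi / 6)

# since arg z = -pi/6 and |z| = 1, z should be a 12th root of unity
z_to_12 = z ** 12
```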
Bill Dubuque
242
<p>This is a <span class="math-container">$\rm\color{#0a0}{multiplicative}$</span> form of the following well-known <span class="math-container">$\rm\color{#90f}{additive}$</span> result about reduced fractions. The proof follows immediately by translating from additive to multiplicative form, as below.</p> <p><span class="math-container">$\overbrace{aj\!+\!bk=1}^{\large \gcd(a,b)\:=\:1},\ {xa=yb}\,\Rightarrow\, \overset{\rm\large Unique\ Fractionization_{\phantom{|}}\!}{\bbox[5px,border:1px solid red]{\dfrac{y}x = \dfrac{a}b\:\Rightarrow\begin{align}\,y = na\\ x = nb\end{align}}}\ \ \ {\rm for\ some}\,\ n\in\Bbb Z, \, $</span> with proof as follows</p> <p><span class="math-container">$\ \ \begin{align} \color{#c00}{xa=yb}\,\Rightarrow\,x &amp;= \color{#c00}x(\color{#c00}aj\!+\!bk) = (\color{#c00}yj\!+\!xk)\color{#c00}b = n b\ \ \ \ \color{#90f}{\text{[additive]}}\\[.3em] \color{#c00}{x^{\Large a}= y^{\Large b}}\Rightarrow\,x &amp;= \color{#c00}x^{\Large \color{#c00}aj\,+\,kb}\ =\ (\color{#c00}y^{\Large j}\! \cdot x^{\Large k})^{\Large \color{#c00}b} = n^{\Large b}\ \ \ \color{#0a0}{\text{[multiplicative]}} \end{align}$</span></p> <p>Note <span class="math-container">$\, n = y^{\large j} x^{\large k}\,$</span> is a rational root of <span class="math-container">$\,n^{\large a} = y\in\Bbb Z\,$</span> so <span class="math-container">$\,n\in\Bbb Z\,$</span> by the <a href="https://math.stackexchange.com/a/658058/242">Rational Root Test.</a></p> <p><strong>Remark</strong> <span class="math-container">$ $</span> The analogy between additive and multiplicative forms is clarified when we study abelian groups as <span class="math-container">$\,\Bbb Z\!\!-\!\!\text{modules}$</span>. Said fundamental result about principality of fractions is sometimes called <a href="https://math.stackexchange.com/a/714313/242">Unique Fractionization</a> to emphasize its equivalence with uniqueness of prime factorizations.</p>
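A concrete machine check of the boxed multiplicative statement, sketched in Python with exact rational arithmetic (the sample values <span class="math-container">$a=3,\ b=5,\ n=2$</span> and <span class="math-container">$a=2,\ b=7,\ n=3$</span> are mine):

```python
from fractions import Fraction

def bezout(a, b):
    # extended Euclid: returns (j, k) with a*j + b*k == gcd(a, b)
    if b == 0:
        return 1, 0
    j, k = bezout(b, a % b)
    return k, j - (a // b) * k

def recover_n(x, y, a, b):
    # given gcd(a, b) = 1 and x**a == y**b, the proof's n = y^j * x^k
    # (with aj + bk = 1) satisfies x = n**b and y = n**a, and n is an integer
    j, k = bezout(a, b)
    return Fraction(y) ** j * Fraction(x) ** k
```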