103,675
<p>I have defined a recursive sequence</p> <pre><code>a[0] := 1 a[n_] := Sqrt[3] + 1/2 a[n - 1] </code></pre> <p>because I want to calculate the <code>Limit</code> for this sequence when n tends towards infinity.</p> <p>Unfortunately I get a <code>recursion exceeded</code> error when doing:</p> <pre><code>Limit[a[n], n -&gt; Infinity] </code></pre> <p>How can I calculate the <code>Limit</code> for this sequence using Mathematica?</p>
Carl Woll
45,431
<p>Another possibility is to use <code>SequenceLimit</code>:</p> <pre><code>SequenceLimit[a /@ Range[10]] </code></pre> <blockquote> <p>2 Sqrt[3]</p> </blockquote>
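As a numerical cross-check of this answer (a Python sketch, not part of the original post): the recurrence's fixed point solves $L = \sqrt{3} + L/2$, giving $L = 2\sqrt{3}$, and direct iteration converges to it.

```python
import math

# Iterate a[n] = Sqrt[3] + 1/2 a[n-1] with a[0] = 1; the fixed point
# solves L = sqrt(3) + L/2, i.e. L = 2*sqrt(3), matching SequenceLimit.
a = 1.0
for _ in range(200):
    a = math.sqrt(3) + a / 2

print(a, 2 * math.sqrt(3))  # both ~ 3.4641016
```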
3,497,420
<p>Consider the function <span class="math-container">$$f(x,y)=x^6-2x^2y-x^4y+2y^2.$$</span> The point <span class="math-container">$(0,0)$</span> is a critical point. Observe, <span class="math-container">\begin{align*} f_x &amp; = 6x^5-4xy-4x^3y, f_x(0,0)=0\\ f_y &amp; = -2x^2-x^4+4y, f_y(0,0)=0\\ f_{xx} &amp; = 30x^4-4y-12x^2y, f_{xx}(0,0)=0\\ f_{xy} &amp; = -4x-4x^3, f_{xy}(0,0)=0\\ f_{yy} &amp; = 4, f_{yy}(0,0)=4 \end{align*}</span></p> <p>So, in order to determine the nature of the above critical point, we check the Hessian at <span class="math-container">$(0,0)$</span>, whose determinant is <span class="math-container">$0$</span>, and hence the second-derivative test is inconclusive. <span class="math-container">$$ H(x,y)= \det \begin{pmatrix} f_{xx} &amp; f_{xy}\\ f_{yx} &amp; f_{yy} \end{pmatrix}=\det \begin{pmatrix} 0 &amp; 0 \\ 0 &amp; 4 \end{pmatrix}=0$$</span> So I tried to examine the function on slices like <span class="math-container">$y=0$</span> and <span class="math-container">$y=x$</span>, but nothing worked. Please suggest how I can determine the nature of the critical point in this case.</p>
B. Goddard
362,009
<p>You might note that your function factors as</p> <p><span class="math-container">$$(x^2-y)(x^4-2y).$$</span></p> <p>So there are easy-to-find regions in the <span class="math-container">$xy$</span>-plane where the function is positive and negative. Close to the origin and between the curves <span class="math-container">$y=x^2$</span> and <span class="math-container">$y=x^4/2$</span>, the function is negative. This suggests trying the limit along the curve <span class="math-container">$y=x^3$</span>. It's not too horrible to analyse </p> <p><span class="math-container">$$f(x,x^3) = 3x^6-2x^5 -x^7$$</span></p> <p>around <span class="math-container">$x=0$</span> to see that it's negative there. </p> <p>Comparing with the curve given by <span class="math-container">$y=0$</span>, along which <span class="math-container">$f(x,0)=x^6&gt;0$</span>, we get a saddle point at <span class="math-container">$(0,0).$</span></p>
4,495,950
<blockquote> <p>Why does <span class="math-container">$-\frac{1}{17-x}$</span> equal <span class="math-container">$\frac{1}{x-17}$</span>?</p> </blockquote> <p>Is there any simple computation to make this seem a little bit more intuitive? Right now, I cannot wrap my head around the fact that I can just switch signs of the term in the denominator.</p>
Guillermo García Sáez
696,501
<p><span class="math-container">$17-x=-(x-17)$</span>, so <span class="math-container">$\frac{1}{17-x}=\frac{1}{-(x-17)}=-\frac{1}{x-17}$</span>; multiplying both sides by <span class="math-container">$-1$</span> gives <span class="math-container">$-\frac{1}{17-x}=\frac{1}{x-17}$</span>.</p>
38,659
<p>I know how to use matrix exponentiation to solve problems involving linear recurrence relations (for example, the Fibonacci sequence). I would like to know: can we use it for linear recurrences in more than one variable too? For example, can we use matrix exponentiation to calculate ${}_n C_r$, which follows the recurrence $C(n,k) = C(n-1,k) + C(n-1,k-1)$? Also, how do we get the required matrix for a general recurrence relation in more than one variable?</p>
Phira
9,325
<p>The likeliest interpretation of your confusion is that you have learned a very restrictive version of the product rule, one that is not appropriate for counting non-trivial things.</p> <p>You multiply things not only when the choices are independent, but also when the <em>number</em> of second choices is independent of the first choice. </p> <p>If you have learned such a restrictive version of the product rule, then there is no point in trying to fit things into that arbitrary definition.</p>
38,659
<p>I know how to use matrix exponentiation to solve problems involving linear recurrence relations (for example, the Fibonacci sequence). I would like to know: can we use it for linear recurrences in more than one variable too? For example, can we use matrix exponentiation to calculate ${}_n C_r$, which follows the recurrence $C(n,k) = C(n-1,k) + C(n-1,k-1)$? Also, how do we get the required matrix for a general recurrence relation in more than one variable?</p>
Qiaochu Yuan
232
<p>If you're asking what I think you're asking, here is an argument that only uses the "product rule": </p> <p>First, establish that the number of ways to order the numbers $\{ 1, 2, ... n \}$ is $n!$. Next, let ${n \choose k}$ denote the number of ways to choose $k$ numbers (not in any particular order) out of $\{ 1, 2, ... n \}$. Then:</p> <ul> <li>On the one hand, the number of ways to order $\{ 1, 2, ... n \}$ is $n!$.</li> <li>On the other hand, given any such order, the first $k$ elements of that order are a set of $k$ numbers (which we can choose in ${n \choose k}$ ways) together with an ordering of those numbers (which we can choose in $k!$ ways) together with an ordering of the rest of the numbers (which we can choose in $(n-k)!$ ways).</li> </ul> <p>It follows that $n! = {n \choose k} k! (n-k)!$. </p>
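The counting identity at the end of this answer is easy to sanity-check numerically (a Python sketch, not part of the original answer):

```python
import math

# Verify n! = C(n, k) * k! * (n-k)! for small n and all k.
for n in range(1, 12):
    for k in range(n + 1):
        assert math.factorial(n) == (
            math.comb(n, k) * math.factorial(k) * math.factorial(n - k)
        )
print("n! = C(n,k) * k! * (n-k)! holds for all n < 12")
```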
1,307,085
<p>How does one solve this equation?</p> <blockquote> <p>$$\cos {x}+\sin {x}-1=0$$</p> </blockquote> <p>I have no idea how to start it.</p> <p>Can anyone give me some hints? Is there an identity for $\cos{x}+\sin{x}$?</p> <p>Thanks in advance!</p>
Harish Chandra Rajpoot
210,295
<p>Given <span class="math-container">$$\color{blue}{\cos x+\sin x-1=0} $$</span><span class="math-container">$$\cos x+\sin x=1 $$</span> Dividing both sides by <span class="math-container">$\color{blue}{\sqrt{2}}$</span>, we get <span class="math-container">$$\frac{1}{\sqrt{2}}\cos x+ \frac{1}{\sqrt{2}}\sin x=\frac{1}{\sqrt{2}}$$</span> <span class="math-container">$$\cos x\cos\frac{\pi}{4}+\sin x\sin\frac{\pi}{4}=\cos\frac{\pi}{4}$$</span> Using the formula <span class="math-container">$\color{purple}{\cos A\cos B+\sin A\sin B=\cos(A-B)}$</span>, we get <span class="math-container">$$\color{green}{\cos\left(x-\frac{\pi}{4}\right)=\cos\frac{\pi}{4}}$$</span> Since there is no restriction on the unknown <span class="math-container">$x$</span>, we write the general solutions as follows <span class="math-container">$$x-\frac{\pi}{4}=2n\pi\pm \frac{\pi}{4}$$</span><span class="math-container">$$x=2n\pi\pm \frac{\pi}{4}+\frac{\pi}{4}$$</span><span class="math-container">$$ \color{}{x=2n\pi} \quad \text{Or}\quad \color{}{x=2n\pi+\frac{\pi}{2}} $$</span> <span class="math-container">$$\color{blue}{x\in\{2n\pi\}\cup\{2n\pi+\frac{\pi}{2}\}}$$</span> Where <span class="math-container">$\color{}{n \space \text{is any integer}}$</span>, i.e. <span class="math-container">$\ n=0, \pm1, \pm2,\pm3, \ldots$</span> </p>
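A quick numerical check of the general solution (Python, added for illustration; not in the original answer):

```python
import math

# Every x of the form 2*n*pi or 2*n*pi + pi/2 should satisfy
# cos(x) + sin(x) - 1 = 0 (up to floating-point error).
for n in range(-3, 4):
    for x in (2 * n * math.pi, 2 * n * math.pi + math.pi / 2):
        assert abs(math.cos(x) + math.sin(x) - 1) < 1e-9
```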
1,307,085
<p>How does one solve this equation?</p> <blockquote> <p>$$\cos {x}+\sin {x}-1=0$$</p> </blockquote> <p>I have no idea how to start it.</p> <p>Can anyone give me some hints? Is there an identity for $\cos{x}+\sin{x}$?</p> <p>Thanks in advance!</p>
Jack Tiger Lam
186,030
<p>By inspection, it is obvious that $1-\sin{x} \equiv (\cos{\frac{x}{2}} - \sin{\frac{x}{2}})^2$.</p> <p>From the half-angle expansions, $\cos{x} \equiv (\cos{\frac{x}{2}} - \sin{\frac{x}{2}})(\cos{\frac{x}{2}} + \sin{\frac{x}{2}})$.</p> <p>The equation is thus equivalent to:</p> <p>$(\cos{\frac{x}{2}} - \sin{\frac{x}{2}})(2\sin{\frac{x}{2}}) = 0,$</p> <p>which can then be solved using standard methods.</p>
713,521
<p>There are so many notations for differentiation. Some of them are: $$ f^\prime(x) \qquad \frac{d}{dx}(f(x))\qquad \frac{dy}{dx}\qquad \frac{df}{dx}\qquad D f(x)\qquad y^\prime\qquad D_x f(x) $$ Why are there so many ways to say "the derivative of $f(x)$"? Is there a specific use for each notation? What is the difference between $\dfrac{d}{dx}$ and $\dfrac{dy}{dx}$? I am only asking this because I am worried that I might use the wrong notation sometimes. For example, I don't know when I should use $\dfrac{dy}{dx}$ instead of $D_xf(x)$, or vice versa. I thank you in advance for your answers.</p>
Jacob Wakem
117,290
<p>A short answer is that in calculus you do lots of symbolic manipulation, so different notations are worth the bother: they minimize eyesore and give you the power you need. For instance, the fraction notation helps if you are doing cancellations or partial derivatives.</p>
11,178
<p>As far as I know, one way to take a homotopy colimit in a model category is to replace (up to acyclic fibration) all arrows in the diagram with cofibrations, and take the strict colimit of the resulting diagram.</p> <p>In Top with the model structure given by Serre fibrations, cofibrations, and weak equivalences, if one wants to obtain a homotopy pushout of the diagram <span class="math-container">$X \leftarrow A \rightarrow Y$</span>, it is &quot;enough&quot; to replace only one of these arrows with a cofibration: that is, there is a natural map (by the universal property of the pushout) <span class="math-container">$Cyl(X)\cup_A Cyl(Y) \to Cyl(X) \cup_A Y$</span> that is a homotopy equivalence of spaces.</p> <blockquote> <p>Question 1: What conditions on the model category <span class="math-container">$\mathcal{C}$</span> (or objects <span class="math-container">$X,Y,A$</span>) will guarantee that the natural map <span class="math-container">$Cyl(X) \cup_A Cyl(Y) \to Cyl(X) \cup_A Y$</span> is a weak equivalence?</p> <p>Question 2: This question is less precise, but if the map above is a weak equivalence, does that mean <span class="math-container">$Cyl(X) \cup_A Y$</span> is a good model for the homotopy pushout?</p> </blockquote>
Reid Barton
126,667
<p>Question 1: The model category $\mathcal{C}$ should be <em>left proper</em>, i.e. the pushout of a weak equivalence along a cofibration is again a weak equivalence. (Dually, there is a notion of right proper.) Top is left proper, as is any model category in which every object is cofibrant, such as SSet. There is some information on this notion of properness <a href="http://ncatlab.org/nlab/show/proper+model+category" rel="noreferrer">at the nlab</a>, and I think it's also discussed more thoroughly in Hirschhorn's book <em>Model Categories and their Localizations</em> (and probably many other places).</p> <p>Question 2: Yes. People often say that a square in a model category is a homotopy pushout square if the induced map from the (strict) pushout of a cofibrant replacement (meaning cofibrant objects and maps) of the "initial" three objects to the last object is a weak equivalence, and that is the case here.</p>
3,490,329
<blockquote> <p>Show that a 2-dimensional subspace of the space of <span class="math-container">$2\times2$</span> matrices contains a non-zero symmetric matrix. </p> </blockquote> <p>I don't know whether it should be written as the sum of a symmetric and a skew-symmetric matrix, or whether there is another way to show it. </p>
Mozhgan Farahani
736,783
<p>In general, I think I can argue as follows. If $A, B$ are two symmetric $2\times 2$ matrices, then $(A+B)^T = A^T + B^T = A + B$, so the set is closed under addition. Then, for any scalar $k$, $(kA)^T = k\,A^T = kA$, showing that it is closed under scalar multiplication. Thus, both conditions for a subspace hold. </p>
4,109,827
<p><span class="math-container">$$f(x,y)=\begin{cases}\dfrac{y^3}{x^2+y^2} &amp;(x,y) \neq \ \mathbb{(0,0)}\\ 0 &amp; (x,y)=(0,0) \\ \end{cases}$$</span></p> <p>Evaluate <span class="math-container">$f_x(0,0)$</span> and <span class="math-container">$f_y(0,0)$</span> and <span class="math-container">$D_\overrightarrow{u}f(0,0)$</span></p> <p>I tried directly taking the derivative to no avail (obviously) so then I tried to use the definition of partial derivative which also left me without a correct solution. Also I have proved that it is continuous (Sertoz Theorem) but how would I prove that it is also differentiable at <span class="math-container">$(0,0)$</span>?</p>
fwd
897,162
<p><span class="math-container">$$f_x(0,0) = \lim_{t\rightarrow 0} \frac{f(t,0) - f(0,0)}{t} = 0, \qquad f_y(0,0) = \lim_{t\rightarrow 0} \frac{f(0,t) - f(0,0)}{t} = 1 $$</span></p> <p>Next, I will show that <span class="math-container">$f$</span> is not differentiable at <span class="math-container">$(0,0)$</span>.</p> <p>Let <span class="math-container">$\vec{u}$</span> be a unit vector in <span class="math-container">$\mathbb{R}^2$</span> and let <span class="math-container">$\vec{u} = u_1e_1 + u_2e_2$</span>, where <span class="math-container">$e_1 = (1, 0)$</span> and <span class="math-container">$e_2=(0,1)$</span> are the canonical basis vectors. Then, <span class="math-container">$$ D_{\vec{u}}f(0,0) = \lim_{t\rightarrow 0} \frac{f((0,0) + t(u_1, u_2))-f(0,0)}{t}\\ = \lim_{t\rightarrow 0}\frac{f(tu_1, tu_2)}{t} \\ = \lim_{t\rightarrow 0}\frac{t^3u_2^3}{t(t^2u_1^2 + t^2u_2^2)} = u_2^3 $$</span> If <span class="math-container">$f$</span> were differentiable at <span class="math-container">$(0,0)$</span>, then <span class="math-container">$$D_{\vec{u}}f(0,0) = \nabla f(0,0) \cdot \vec{u} \\ = f_x(0,0)u_1 + f_y(0,0)u_2 = u_2 $$</span> However, this contradicts what we have above, as <span class="math-container">$\vec{u}$</span> can be chosen so that <span class="math-container">$u_2\ne u_2^3$</span>.</p>
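The mismatch between $D_{\vec{u}}f(0,0) = u_2^3$ and the gradient formula's $u_2$ can also be seen numerically (a Python sketch of the limit definition, not part of the original answer):

```python
import math

def f(x, y):
    # The piecewise function from the question.
    return y**3 / (x**2 + y**2) if (x, y) != (0, 0) else 0.0

# Directional derivative at the origin along u = (u1, u2), via the limit
# definition with a small t; compare with the claimed value u2**3.
theta = 1.0                      # any direction not along an axis
u1, u2 = math.cos(theta), math.sin(theta)
t = 1e-6
d = (f(t * u1, t * u2) - f(0, 0)) / t

assert abs(d - u2**3) < 1e-6     # matches u2^3 ...
assert abs(d - u2) > 0.1         # ... but not grad f . u = u2
```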
1,270,042
<p>$$(a+5)(b-1)=ab-a+5b-5=20-5=15.$$</p> <p>So, both $a + 5$ and $b-1$ divide $15$. </p> <p>Then, $a + 5$ is one of $15, -15, 3, -3, 5, -5, 1, -1$, so $a$ is one of $10, -20, -2, -8, 0, -10, -4, -6$; and $b - 1$ is one of $15, -15, 3, -3, 5, -5, 1, -1$, so $b = 16, -14, 4, -2, 6, -4, 2, 0$.</p> <p>Could all possibilities for $a, b$ found by considering $(a+5)(b-1)$ be just random (not in a probability sense) and not connected to $ab = a - 5b + 20$ at all? In other words, could it be that if some of the possible $a, b$ found this way happen to satisfy $ab = a - 5b + 20$, then it's just a coincidence? </p>
Dr. Sonnhard Graubner
175,066
<p>Hint: rewrite your equation into $$a=-5+\frac{15}{b-1}$$</p>
2,024,997
<blockquote> <p>$$\lim_{x \rightarrow +\infty}\frac{\log_{1.1}x}{x}$$</p> </blockquote> <p>I can solve this easily by generating the graph with my calculator, but is there is a way to do this analytically?</p>
DeepSea
101,504
<p><strong>Hint</strong>: L'Hospital's rule! Have you learned this rule yet?</p>
92,382
<p>I was working on a little problem and came up with a nice little equality which I am not sure if it is well-known (or) easy to prove (It might end up to be a very trivial one!). I am curious about other ways to prove the equality and hence I thought I would ask here to see if anybody knows any or can think of any. I shall hold off from posting my own answer for a couple of days to invite different possible solutions.</p> <blockquote> <p>Consider the sequence of functions: $$ \begin{align} g_{n+2}(x) &amp; = g_{n}(x) - \left \lfloor \frac{g_n(x)}{g_{n+1}(x)} \right \rfloor g_{n+1}(x) \end{align} $$ where $x \in [0,1]$ and $g_0(x) = 1, g_1(x) = x$. Then the claim is: $$x = \sum_{n=0}^{\infty} \left \lfloor \frac{g_n}{g_{n+1}} \right \rfloor g_{n+1}^2$$</p> </blockquote>
Community
-1
<p>For whatever it is worth, below is an explanation on why I was interested in this equality. Consider a rectangle of size $x \times 1$, where $x &lt; 1$. I was interested in covering this rectangle with squares of maximum size whenever possible (i.e. in a greedy sense).</p> <p>To start off, we can have $\displaystyle \left \lfloor \frac{1}{x} \right \rfloor$ squares of size $x \times x$. Area covered by these squares is $\displaystyle \left \lfloor \frac{1}{x} \right \rfloor x^2$.</p> <p>Now we will then be left with a rectangle of size $\left(1 - \left \lfloor \frac1x \right \rfloor x \right) \times x$.</p> <p>We can now cover this rectangle with squares of size $\left(1 - \left \lfloor \frac1x \right \rfloor x \right) \times \left(1 - \left \lfloor \frac1x \right \rfloor x \right)$.</p> <p>The number of such squares possible is $\displaystyle \left \lfloor \frac{x}{\left(1 - \left \lfloor \frac1x \right \rfloor x \right)} \right \rfloor$.</p> <p>The area covered by these squares is now $\displaystyle \left \lfloor \frac{x}{\left(1 - \left \lfloor \frac1x \right \rfloor x \right)} \right \rfloor \left(1 - \left \lfloor \frac1x \right \rfloor x \right)^2$.</p> <p>And so on.</p> <p>Hence, at $n^{th}$ stage if the sides are given by $g_{n-1}(x)$ and $g_n(x)$ with $g_n(x) \leq g_{n-1}(x)$, the number of squares with side $g_{n}(x)$ which can be placed in the rectangle of size $g_{n-1}(x) \times g_n(x)$, is given by $\displaystyle \left \lfloor \frac{g_{n-1}(x)}{g_{n}(x)} \right \rfloor$.</p> <p>These squares cover an area of $\displaystyle \left \lfloor \frac{g_{n-1}(x)}{g_{n}(x)} \right \rfloor g^2_{n}(x)$.</p> <p>Hence, at the $n^{th}$ stage using squares we cover an area of $\displaystyle \left \lfloor \frac{g_{n-1}(x)}{g_{n}(x)} \right \rfloor g^2_{n}(x)$.</p> <p>The rectangle at the $(n+1)^{th}$ stage is then given by $g_{n}(x) \times g_{n+1}(x)$ where $g_{n+1}(x)$ is given by $g_{n-1}(x) - \left \lfloor \frac{g_{n-1}(x)}{g_n(x)} \right \rfloor g_n(x)$.</p> 
<p>These squares end up covering the entire rectangle and hence the area of all these squares equals the area of the rectangle.</p> <p>This hence gives us $$x = x \times 1 = \sum_{n=1}^{\infty} \left \lfloor \frac{g_{n-1}(x)}{g_{n}(x)} \right \rfloor g^2_{n}(x)$$</p> <p>When I posted this question, I failed to see the simple proof which Srivatsan had.</p>
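For rational $x$ the recursion above is exactly the subtractive Euclidean algorithm, so it terminates, and the identity can be verified directly. A sketch using exact fractions (Python, not part of the original post):

```python
from fractions import Fraction
from math import floor

def greedy_square_sum(x: Fraction) -> Fraction:
    """Sum floor(g_n/g_{n+1}) * g_{n+1}^2 for g0 = 1, g1 = x.

    For rational x the recursion is the Euclidean algorithm and
    terminates, so the sum is finite and should equal x.
    """
    g_prev, g = Fraction(1), x
    total = Fraction(0)
    while g != 0:
        q = floor(g_prev / g)        # number of squares of side g
        total += q * g * g           # area those squares cover
        g_prev, g = g, g_prev - q * g
    return total

for x in [Fraction(3, 7), Fraction(5, 13), Fraction(22, 101)]:
    assert greedy_square_sum(x) == x
```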
635,077
<p>$$\sin(a+b) = \sin(a) \cos(b) + \cos(a) \sin(b)$$</p> <p>How can I prove this statement?</p>
Fly by Night
38,495
<p>I typed "<em>Proof of Trigonometric formulae</em>" into Google and the second hit was an extensive Wikipedia article which supplies proofs of many, many trigonometric identities.</p> <p><a href="https://en.wikipedia.org/wiki/Proofs_of_trigonometric_identities#Angle_sum_identities" rel="nofollow">Click here</a> for the section that you want.</p>
2,694,706
<p>A rational number cannot have an irrational value such as $\sqrt[n]{\frac xy}$ for some $n \in \mathbb{Z}$, but the two equalities give: $b^2=\frac{a^2+c^2}{2} \implies b = \sqrt{\frac {a^2+c^2}{2}}$.<br> To avoid this, we need $4\mid a$ and $a=c$, so that with $t=a/2$ we get $b = \sqrt{\frac {a^2+c^2}{2}} = 2t$.<br></p> <p>As an example, $a=c=8$, $t=4$, $b=8$.</p> <p>I am giving an idea for a particular case only, and am unable to pursue it further.</p>
Angina Seng
436,618
<p>The general solution of $b^2-a^2=5$ has $$b=\frac12\left(t+\frac 5t\right)$$ for $t\in\Bbb Q^*$. Then $$c^2=b^2+5=\frac{t^4+10t^2+25}{4t^2}+5=\frac{t^4+30t^2+25}{4t^2}.$$ The problem boils down to whether the genus-one curve $$y^2=x^4+30x^2+25$$ has rational points.</p>
2,694,706
<p>A rational number cannot have an irrational value such as $\sqrt[n]{\frac xy}$ for some $n \in \mathbb{Z}$, but the two equalities give: $b^2=\frac{a^2+c^2}{2} \implies b = \sqrt{\frac {a^2+c^2}{2}}$.<br> To avoid this, we need $4\mid a$ and $a=c$, so that with $t=a/2$ we get $b = \sqrt{\frac {a^2+c^2}{2}} = 2t$.<br></p> <p>As an example, $a=c=8$, $t=4$, $b=8$.</p> <p>I am giving an idea for a particular case only, and am unable to pursue it further.</p>
Oleg567
47,993
<p>We can rewrite these numbers with common denominator $D$: $a=\dfrac{A}{D}$, $\;b=\dfrac{B}{D}$, $\;c=\dfrac{C}{D}$. Then we'll get diophantine equation (system of diophantine equations): $$ B^2-A^2=C^2-B^2=5D^2.\tag{1} $$ The smallest solution of $(1)$ is $(A,B,C,D)=(31,41,49,12)$. <br>So, example of such three rational numbers $a,b,c$ is: $$a=\dfrac{31}{12},\quad b=\dfrac{41}{12}, \quad c=\dfrac{49}{12}.$$</p> <hr> <p>Short algorithm (to find $A,B,C,D$) description:</p> <ul> <li>for each $C$ (from some range) consider $B = 1,2,\ldots,C-1$; <ul> <li>if $2B^2-C^2$ is square number, then denote $A=\sqrt{2B^2-C^2}$; <ul> <li>if $\dfrac{B^2-A^2}{5}$ is square number, then denote $D=\sqrt{\frac{B^2-A^2}{5}}$ and output $(A,B,C,D)$.</li> </ul></li> </ul></li> </ul>
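The bullet-point algorithm above is easy to run directly. A brute-force sketch (Python; the function name and search bound are my own choices, not from the answer):

```python
import math

def find_progression(step: int, c_max: int):
    """Search for A < B < C and D > 0 with B^2 - A^2 = C^2 - B^2 = step * D^2,
    following the bullet-point algorithm described in the answer."""
    for C in range(1, c_max):
        for B in range(1, C):
            a2 = 2 * B * B - C * C          # A^2 if the middle term fits
            if a2 < 0:
                continue
            A = math.isqrt(a2)
            if A * A != a2:                  # 2B^2 - C^2 must be a square
                continue
            d2, rem = divmod(B * B - A * A, step)
            if rem:
                continue
            D = math.isqrt(d2)
            if D * D == d2 and D > 0:        # (B^2 - A^2)/step must be a square
                return A, B, C, D
    return None

print(find_progression(5, 60))  # -> (31, 41, 49, 12), the answer's smallest solution
```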
2,694,706
<p>A rational number cannot have an irrational value such as $\sqrt[n]{\frac xy}$ for some $n \in \mathbb{Z}$, but the two equalities give: $b^2=\frac{a^2+c^2}{2} \implies b = \sqrt{\frac {a^2+c^2}{2}}$.<br> To avoid this, we need $4\mid a$ and $a=c$, so that with $t=a/2$ we get $b = \sqrt{\frac {a^2+c^2}{2}} = 2t$.<br></p> <p>As an example, $a=c=8$, $t=4$, $b=8$.</p> <p>I am giving an idea for a particular case only, and am unable to pursue it further.</p>
Oleg567
47,993
<p><strong>Algebraic approach.</strong> </p> <p>First, note that $$a^2+c^2=2b^2,$$ and if denote $$p=\dfrac{a}{b}, \quad q=\dfrac{c}{b}, \tag{1}$$ then we will search rational points on the circle $$p^2+q^2=2.\tag{2}$$</p> <p>According to the article <a href="http://www.math.uconn.edu/~kconrad/ross2007/3squarearithprog.pdf" rel="nofollow noreferrer">Keith Conrad "Arithmetic progressions of three squares"</a>, each rational solution of eq. $(2)$ (by exception of $(p,q)=(1,-1)$) can be written in the parametric form $$ p = \dfrac{m^2-2m-1}{m^2+1}, \qquad q=\dfrac{-m^2-2m+1}{m^2+1},\\ \left( m = \dfrac{q-1}{p-1}\right),\tag{3} $$ where $m$ is rational parameter (positive or negative); <br>and vice versa (for each rational $m$ pair $(p,q)$ will have rational coordinates).</p> <p>If we require distance (step) $n=5$ between numbers $a^2,b^2,c^2$ in algebraic progression, then (according to Corollary $3.3$ of the article above) each rational solution of equation $$b^2-a^2=c^2-b^2=n\tag{4}$$ can be written in the form $$ a=\dfrac{x^2-2nx-n^2}{2y}, \quad b=\dfrac{x^2+n^2}{2y}, \quad c=\dfrac{x^2+2nx-n^2}{2y}, \\ \left(\; x=\dfrac{n(c-b)}{a-b}, \qquad y=\dfrac{n^2(2b-a-c)}{(a-b)^2} \right), \tag{5} $$<br> where $(x,y)$ is rational point on the <a href="https://en.wikipedia.org/wiki/Elliptic_curve" rel="nofollow noreferrer">elliptic curve</a> $$ y^2=x^3-nx;\tag{6} $$ and vice versa (each rational point of the EC $(6)$ detemines rational solution of eq. 
$(4)$).</p> <hr> <p>As was discussed before, one simple positive rational solution is $$(a,b,c) = \left(\dfrac{31}{12}, \dfrac{41}{12}, \dfrac{49}{12}\right);$$</p> <p>in fact, it means that there are $8$ rational solutions: $$(a,b,c)=\left(\pm\dfrac{31}{12}, \pm\dfrac{41}{12}, \pm\dfrac{49}{12}\right),$$ for which (see $(5), (6)$) we have the set of $8$ rational points on the EC $(6)$: $$ (x,y) = \left(-4,\pm 6\right), \\ (x,y) = \left( 45, \pm 300\right), \\ (x,y) = \left(-\frac{5}{9},\pm \frac{100}{27}\right), \\ (x,y) = \left( \frac{25}{4}, \pm \frac{75}{8}\right). $$</p> <p>Applying the beauty of <a href="https://en.wikipedia.org/wiki/Elliptic_curve#The_group_law" rel="nofollow noreferrer">the group law</a> of EC, we can derive other rational points from existing ones.</p> <p><strong>Example:</strong> <br> if we'll take starting point $P_1=(-4,6)$, then denote point $P_2$ as $P_2 = P_1+P_1$ <br>(note that we 'add' two points not coordinate-wise, but according to <a href="https://en.wikipedia.org/wiki/Elliptic_curve#The_group_law" rel="nofollow noreferrer">the group law</a> of EC): <br> $\; P_2 = \left( \frac{1681}{144} , -\frac{62279}{1728} \right)$, which leads to the solution $$(a,b,c) = \left(\frac{113279}{1494696}, - \frac{3344161}{1494696}, \frac{4728001}{1494696}\right);$$ then let's denote $P_3$ as $P_3 = P_1+P_2$: $\; P_3 = \left(-\frac{2439844}{5094049} , \frac{39601568754}{11497268593} \right)$, which leads to the solution $$(a,b,c) = \left( -\frac{518493692732129}{178761481355556}, \frac{654686219104361}{178761481355556}, \frac{767067390499249}{178761481355556} \right). $$</p> <hr> <p>This way, if focus on positive solutions $(a,b,c)$ of eq. 
$(4)$ in the form $(a,b,c)=\left(\frac{A}{D},\frac{B}{D},\frac{C}{D}\right)$, where $A,B,C,D\in\mathbb{N}$, we can construct series of rational points (with increasing size of common denominator):</p> <p>\begin{array}{|l|} \hline A=31; \\ B=41; \\ C=49; \\ D=12; \\ \hline A=113279; \\ B=3344161; \\ C=4728001; \\ D=1494696; \\ \hline A=518493692732129; \\ B=654686219104361; \\ C=767067390499249; \\ D=178761481355556; \\ \hline A=249563579992463717493803519; \\ B=249850594047271558364480641; \\ C=250137278774864229623059201; \\ D=5354229862821602092291248; \\ \hline A=115038188620995226180802686473825513089249; \\ B=160443526614433014168714029147613242401001; \\ C=195577262542844878506138849501555847171249; \\ D=50016678000996026579336936742637753055940; \\ \hline A=21214405287844054428542609853501469645112322962237848990081;\\ B=209239116668342644167838867143329714389679018137228536721441; \\ C=295147361324101461665473218814630755582253386512598845803201; \\ D=93092380947563478644577555596900542802151091304399908363272; \\ \hline \ldots \end{array}</p>
671,407
<p>I have a problem with the equation $4^x-3^x=1$. </p> <p>At once we can notice that $x=1$ is a solution. But is it the only one? How can I show that there aren't any other solutions? </p>
Mark Bennet
2,906
<p>Hint: one way of showing that a function takes a value only once is to show that it is increasing.</p> <p>Hint: For negative values of $x$ you need a different observation.</p>
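To make the first hint concrete (my own elaboration, not from the answer): dividing $4^x-3^x=1$ by $4^x$ gives $(3/4)^x+(1/4)^x=1$, whose left-hand side is strictly decreasing, so it equals $1$ for exactly one $x$, and $x=1$ works for all real $x$ at once.

```python
# Divide 4^x - 3^x = 1 by 4^x to get g(x) = (3/4)**x + (1/4)**x = 1.
# g is a sum of strictly decreasing exponentials, so it is strictly
# decreasing and takes the value 1 exactly once.
g = lambda x: 0.75**x + 0.25**x

xs = [i / 10 for i in range(-50, 51)]
values = [g(x) for x in xs]
assert all(a > b for a, b in zip(values, values[1:]))  # strictly decreasing
assert abs(g(1.0) - 1.0) < 1e-12                       # x = 1 is the root
```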
1,689,523
<p>I need help with this Laplace question. <span class="math-container">$$f(t) = e^{-t} \sin(t) $$</span></p> <hr /> <p>Answer should be <span class="math-container">$\dfrac{1}{s^2 + 2s + 2}$</span></p> <hr /> <p>What I'm currently doing is as follows:</p> <p><span class="math-container">$u = \sin(t)\qquad$</span> <span class="math-container">$dv = e^{-(s+1)t}dt$</span></p> <p><span class="math-container">$du = \cos(t)dt\qquad$</span> <span class="math-container">$v = \dfrac{e^{-(s+1)t}}{-(s+1)}$</span></p> <p><span class="math-container">$\dfrac{-\sin(t) e^{-(s+1)t}}{-(s+1)} - \int\dfrac{ e^{-(s+1)t}\cos(t)}{ -(s+1)} dt$</span></p> <p>But even if I solved the integral, I wouldn't get this (which is what I should, see picture).</p> <blockquote> <p><img src="https://i.stack.imgur.com/BhMOx.png" alt="enter image description here" /></p> </blockquote>
user321205
321,205
<p>Calculate the Laplace transforms of $f(t)$ and $e^{-t}\cos(t)$ (by parts, as you started); you'll get a system of equations in which the unknowns are the integrals $$\int\dfrac{ e^{-(s+1)t}\cos(t)}{ -(s+1)} dt \text{ and } \int\dfrac{ e^{-(s+1)t}\sin(t)}{ -(s+1)} dt. $$ From this you can conclude.</p>
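The target transform can also be confirmed numerically from the definition $\mathcal{L}\{f\}(s)=\int_0^\infty e^{-st}f(t)\,dt$ (a rough Python sketch, not part of the original answer; the cutoff and step count are arbitrary choices):

```python
import math

def laplace_numeric(f, s, T=60.0, n=200000):
    """Trapezoidal approximation of the Laplace integral over [0, T].

    T is chosen large enough that the e^{-(s+1)t} tail is negligible.
    """
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for i in range(1, n):
        t = i * h
        total += f(t) * math.exp(-s * t)
    return total * h

f = lambda t: math.exp(-t) * math.sin(t)
for s in (0.5, 1.0, 2.0):
    exact = 1.0 / (s * s + 2 * s + 2)   # the claimed transform 1/(s^2 + 2s + 2)
    assert abs(laplace_numeric(f, s) - exact) < 1e-6
```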
1,999,834
<p>Let $\varphi : G \rightarrow H$ be a group homomorphism with kernel $K$ and let $a,b \in \varphi(G)$. Let $X = \varphi^{-1}(a)$ and $Y = \varphi^{-1}(b)$. Fix $u \in X$. Let $Z=XY$. Prove that for every $w \in Z$ that there exists $v \in Y$ such that $uv=w$. This is Dummit and Foote exercise 3.1.2.</p> <p>My attempt:</p> <p>Suppose we assume that $v = u^{-1}w$</p> <p>I try to show that $v \in Y$</p> <p>$\varphi(v) = \varphi(u^{-1})\varphi(w) = a^{-1}\varphi(w)$</p> <p>If I could somehow show $\varphi(w) = ab$, then $\varphi(v) = b$ so that $v \in Y$ but I think I am going in circles.</p>
egreg
62,967
<p>Since $w\in Z$, you know that $w=xy$, for some $x\in X$ and $y\in Y$.</p> <p>By definition, $\varphi(x)=a$ and $\varphi(y)=b$.</p> <p>Also $\varphi(u)=a$, which implies $u^{-1}x\in\ker\varphi$.</p> <p>Then $$ w=xy=u(u^{-1}xy) $$ Can you finish?</p>
4,651,596
<p>I know the proof of the &quot;<a href="https://en.wikipedia.org/wiki/Doubling_the_cube" rel="nofollow noreferrer">Doubling the cube problem</a>&quot;. What is used there is the fact that if a number <span class="math-container">$a$</span> is constructible then <span class="math-container">$[\mathbb{Q}(a):\mathbb{Q}]$</span> is a power of <span class="math-container">$2$</span>.</p> <p>I found in a German textbook the remark that the inversion is not correct: If <span class="math-container">$[\mathbb{Q}(a):\mathbb{Q}]$</span> is a power of <span class="math-container">$2$</span>, then <span class="math-container">$a$</span> is not necessarily constructible.</p> <p>Do you know an example for an <span class="math-container">$a$</span> where <span class="math-container">$[\mathbb{Q}(a):\mathbb{Q}]$</span> is a power of <span class="math-container">$2$</span> and which is not constructible – or a textbook with an example?</p>
Arthur
15,500
<p>The set of constructible numbers is the smallest extension of <span class="math-container">$\Bbb Q$</span> where each positive number has a square root. (That's essentially what straightedge and compass constructions are able to do: field operations and square roots.) Every constructible number can be described by using rational numbers, field operations and square roots.</p> <p>I can't construct a concrete example of the following off the top of my head. But it is relatively well-known that there are irreducible degree 8 polynomials that aren't solvable through radicals. A root of such a polynomial won't be expressible with only the rational numbers, field operations and square roots. Thus it won't be constructible. Yet the field extension you get from adjoining such a root to <span class="math-container">$\Bbb Q$</span> has degree <span class="math-container">$8$</span>.</p>
106,126
<blockquote> <p><strong>Problem</strong> Prove that $n! &gt; \sqrt{n^n}, n \geq 3$. </p> </blockquote> <p>I currently have two ideas in mind: one is to use induction on $n$, the other is to find $\displaystyle\lim_{n\to\infty}\dfrac{n!}{\sqrt{n^n}}$. However, both methods don't seem to get close to the answer. I wonder whether there is another method to prove this that I'm not aware of? Any suggestion would be greatly appreciated.</p>
Amihai Zivan
22,409
<p>You can prove (à la Gauss) that $(n!)^2 = \prod_{k=1}^{n} k(n+1-k) \geq n^n$, since each paired factor satisfies $k(n+1-k) - n = (k-1)(n-k) \geq 0$, with strict inequality for $1 &lt; k &lt; n$; hence $(n!)^2 &gt; n^n$ for $n \geq 3$, and that finishes the proof.</p>
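The pairing bound behind this answer can be checked by brute force (Python, for illustration only):

```python
import math

# Each pair k*(n+1-k) is >= n because k*(n+1-k) - n = (k-1)*(n-k) >= 0,
# so (n!)^2 = prod_k k*(n+1-k) >= n^n; strictness for n >= 3 gives n! > sqrt(n^n).
for n in range(3, 40):
    pairs = [k * (n + 1 - k) for k in range(1, n + 1)]
    assert all(p >= n for p in pairs)
    assert math.prod(pairs) == math.factorial(n) ** 2
    assert math.factorial(n) ** 2 > n ** n
```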
106,126
<blockquote> <p><strong>Problem</strong> Prove that $n! &gt; \sqrt{n^n}, n \geq 3$. </p> </blockquote> <p>I currently have two ideas in mind: one is to use induction on $n$, the other is to find $\displaystyle\lim_{n\to\infty}\dfrac{n!}{\sqrt{n^n}}$. However, both methods don't seem to get close to the answer. I wonder whether there is another method to prove this that I'm not aware of? Any suggestion would be greatly appreciated.</p>
Jonas Meyer
1,424
<p>To show that $(n!)^2&gt;n^n$ for all $n\geq 3$ by induction, you first check that $(3!)^2&gt;3^3$. Then to get the inductive step, it suffices to show that when $n\geq 3$, $(n+1)^2\geq\frac{(n+1)^{n+1}}{n^n}=(n+1)\left(1+\frac{1}{n}\right)^n$. This is true, and in fact $\left(1+\frac{1}{n}\right)^n&lt;3$ for all $n$. It would be more than enough to use $\left(1+\frac{1}{n}\right)^n&lt;n$, which is proved in <a href="https://math.stackexchange.com/questions/89583/how-do-i-prove-1-frac1nn-n-by-mathematical-induction">another thread</a>.</p>
2,578,444
<blockquote> <p><span class="math-container">$\tan x&gt; -\sqrt 3$</span></p> </blockquote> <p>How do I solve this inequality?</p> <p>From the <a href="https://www.desmos.com/calculator/qb8bg1vbsf" rel="nofollow noreferrer">graph</a> it is evident that <span class="math-container">$\tan x&gt;-\sqrt 3$</span> for <span class="math-container">$\left(\dfrac{2\pi}3 , \dfrac{5\pi} 3\right)$</span> <span class="math-container">$\forall x\in (0, 2\pi)$</span>.</p> <p>Generalising this solution we get <span class="math-container">$\left(2n\pi +\dfrac{2\pi}3 , 2n\pi+\dfrac{5\pi} 3\right) \forall n \in \mathbb{Z}$</span> as the answer.</p> <p>But the answer given is: <span class="math-container">$\left(n\pi - \dfrac \pi 3, n\pi + \dfrac \pi 2\right)$</span></p> <p>Where have I gone wrong?</p>
Eric Fisher
476,420
<p>Tangent has period $\pi$, not $2\pi$. On the branch $(-\pi/2, \pi/2)$, $\tan(x) &gt; -\sqrt{3}$ exactly when $x &gt; -\pi/3$. The function is not defined at $\pi/2$, but it is positive for $x \in [0, \pi/2)$; so on this branch the solution set is $(-\pi/3, \pi/2)$, and shifting by $n\pi$ gives the stated answer. </p>
1,185,108
<p>The empty set is a subset of any set, and indeed of any collection of sets.</p> <p>I wonder about the case of the empty set being a member, not a subset, of a collection (family) of sets.</p>
Alberto Takase
146,817
<p>$\varnothing\in\mathcal{P}(\varnothing)$ but $\varnothing\notin \varnothing$.</p>
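The distinction can even be demonstrated in code (a Python sketch using `frozenset` to model sets whose elements are sets; the helper `power_set` is mine, not a standard library function):

```python
def power_set(s):
    # Build all subsets by folding in one element at a time.
    subsets = [frozenset()]
    for x in s:
        subsets += [sub | {x} for sub in subsets]
    return frozenset(subsets)

empty = frozenset()
print(empty in power_set(empty))  # True:  {} is a member of P({}) = { {} }
print(empty in empty)             # False: {} has no members
```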
4,402,839
<p>I need to prove that</p> <blockquote> <p>Let <span class="math-container">$f:[a,b] \to \mathbf{R}$</span> be a bounded map and let <span class="math-container">$f$</span> be an integrable map on the interval <span class="math-container">$[c,b]$</span> for all <span class="math-container">$c \in (a,b).$</span> Then, <span class="math-container">$f$</span> is integrable on <span class="math-container">$[a,b].$</span></p> </blockquote> <p>My attempt:</p> <p>By hypothesis, <span class="math-container">$f:[a,b] \to \mathbf{R}$</span> is bounded, so there exists <span class="math-container">$0 &lt; K \in \mathbf{R}$</span> such that <span class="math-container">$|f(x)| \leq K$</span>, for all <span class="math-container">$x \in [a,b].$</span> Let <span class="math-container">$\varepsilon&gt;0,$</span> and take <span class="math-container">$c \in (a,b)$</span> such that <span class="math-container">$K \cdot (c-a) \leq \frac{\varepsilon}{4}$</span>. By hypothesis, <span class="math-container">$f$</span> is integrable on <span class="math-container">$[c,b],$</span> then, there exists a partition <span class="math-container">$\left\{t_1, \dots, t_n \right\} \subset [c,b]$</span> such that <span class="math-container">$\sum_{i=2}^{n} \omega_i(t_i-t_{i-1}) &lt; \frac{\varepsilon}{2}$</span>. Let's make <span class="math-container">$t_{0}=a$</span>, then we get a partition <span class="math-container">$\left\{t_0, t_1, \dots, t_n \right\}$</span> of <span class="math-container">$[a,b]$</span>.</p> <p>Now I need to prove that <span class="math-container">$\omega_0 \leq 2K$</span>, for me to get that <span class="math-container">$f$</span> is integrable on <span class="math-container">$[a,b]$</span>. Am I following the correct idea?</p> <p>I don't know how to proceed. Any ideas would be appreciated!</p>
Alan
175,602
<p>I find it easier to talk about the equivalent in Darboux integral terms: a function is integrable if, for any <span class="math-container">$\epsilon$</span> you want, there is a partition whose upper sum minus lower sum is less than <span class="math-container">$\epsilon$</span>. So to show integrability over <span class="math-container">$[a,b]$</span> we start with <span class="math-container">$\epsilon&gt;0$</span>. The trick is to find the right <span class="math-container">$c$</span> so that on <span class="math-container">$[a,c]$</span> we are guaranteed to be less than <span class="math-container">$\frac \epsilon 2$</span>, since we can make the part on <span class="math-container">$[c,b]$</span> less than <span class="math-container">$\frac \epsilon 2$</span> by the fact that <span class="math-container">$f$</span> is integrable there. That's easy: just figure out the worst-case error. If <span class="math-container">$M&gt;0$</span> is the bound on <span class="math-container">$|f(x)|$</span>, then the biggest change from upper to lower is <span class="math-container">$2M$</span> over a width of <span class="math-container">$c-a$</span>, so the difference between upper and lower sums on <span class="math-container">$[a,c]$</span> is bounded by <span class="math-container">$2M(c-a)$</span>. I find it easier to focus on the width, so define <span class="math-container">$\delta=c-a$</span>.</p> <p>Setting that bound below <span class="math-container">$\frac \epsilon 2$</span> and solving for <span class="math-container">$\delta$</span> we get <span class="math-container">$$2M\delta&lt;\frac \epsilon 2$$</span> <span class="math-container">$$\delta&lt;\frac \epsilon {4M}$$</span> Now you just have to make sure that <span class="math-container">$a+\delta&lt;b$</span>, so use <span class="math-container">$\delta'=\min \left\{\delta,\frac{b-a}{2}\right\}$</span>.</p> <p>Now we can safely split <span class="math-container">$[a,b]$</span> as <span class="math-container">$[a,a+\delta']\cup [a+\delta',b]$</span> and be guaranteed that both parts contribute less than <span class="math-container">$\frac \epsilon 2$</span> to the difference between upper and lower sums.</p>
4,402,839
<p>I need to prove that</p> <blockquote> <p>Let <span class="math-container">$f:[a,b] \to \mathbf{R}$</span> be a bounded map and let <span class="math-container">$f$</span> be an integrable map on the interval <span class="math-container">$[c,b]$</span> for all <span class="math-container">$c \in (a,b).$</span> Then, <span class="math-container">$f$</span> is integrable on <span class="math-container">$[a,b].$</span></p> </blockquote> <p>My attempt:</p> <p>By hypothesis, <span class="math-container">$f:[a,b] \to \mathbf{R}$</span> is bounded, so there exists <span class="math-container">$0 &lt; K \in \mathbf{R}$</span> such that <span class="math-container">$|f(x)| \leq K$</span>, for all <span class="math-container">$x \in [a,b].$</span> Let <span class="math-container">$\varepsilon&gt;0,$</span> and take <span class="math-container">$c \in (a,b)$</span> such that <span class="math-container">$K \cdot (c-a) \leq \frac{\varepsilon}{4}$</span>. By hypothesis, <span class="math-container">$f$</span> is integrable on <span class="math-container">$[c,b],$</span> then, there exists a partition <span class="math-container">$\left\{t_1, \dots, t_n \right\} \subset [c,b]$</span> such that <span class="math-container">$\sum_{i=2}^{n} \omega_i(t_i-t_{i-1}) &lt; \frac{\varepsilon}{2}$</span>. Let's make <span class="math-container">$t_{0}=a$</span>, then we get a partition <span class="math-container">$\left\{t_0, t_1, \dots, t_n \right\}$</span> of <span class="math-container">$[a,b]$</span>.</p> <p>Now I need to prove that <span class="math-container">$\omega_0 \leq 2K$</span>, for me to get that <span class="math-container">$f$</span> is integrable on <span class="math-container">$[a,b]$</span>. Am I following the correct idea?</p> <p>I don't know how to proceed. Any ideas would be appreciated!</p>
Mathematician
971,859
<p>So I guess I've found out:</p> <p>By hypothesis, <span class="math-container">$f:[a,b] \to \mathbf{R}$</span> is bounded, so there exists <span class="math-container">$0 &lt; K \in \mathbf{R}$</span> such that <span class="math-container">$|f(x)| \leq K$</span> for all <span class="math-container">$x \in [a,b].$</span> Let <span class="math-container">$\varepsilon&gt;0,$</span> and take <span class="math-container">$c \in (a,b)$</span> such that <span class="math-container">$K \cdot (c-a) \leq \frac{\varepsilon}{4}$</span>. By hypothesis, <span class="math-container">$f$</span> is integrable on <span class="math-container">$[c,b],$</span> so there exists a partition <span class="math-container">$P=\left\{t_1, \dots, t_n \right\}$</span> of <span class="math-container">$[c,b]$</span>, with <span class="math-container">$t_1=c$</span> and <span class="math-container">$t_n=b$</span>, such that <span class="math-container">$\sum_{i=2}^{n} \omega_i(t_i-t_{i-1}) &lt; \frac{\varepsilon}{2}$</span>. Let <span class="math-container">$t_{0}=a$</span>; then <span class="math-container">$P'=\left\{t_0, t_1, \dots, t_n \right\}$</span> is a partition of <span class="math-container">$[a,b]$</span>, with oscillation <span class="math-container">$\omega_1'$</span> on the new subinterval <span class="math-container">$[t_0,t_1]=[a,c]$</span>.</p> <p>Note that <span class="math-container">$\omega_1'$</span> is bounded because <span class="math-container">$f$</span> is bounded on <span class="math-container">$[a,b]$</span>, by hypothesis: we get <span class="math-container">$\omega_1' \leq 2K$</span>. Therefore, <span class="math-container">$\omega_1' \cdot (t_1-t_0)=\omega_1'(c-a) \leq 2K(c-a) \leq \frac{\varepsilon}{2}$</span>.</p> <p>Now we conclude that <span class="math-container">$\omega_1'(t_1-t_0)+\sum_{i=2}^{n} \omega_i(t_i-t_{i-1}) &lt; \varepsilon$</span>, so <span class="math-container">$f$</span> is integrable on <span class="math-container">$[a,b]$</span>.</p>
4,177,829
<p>Given angles <span class="math-container">$0&lt;\theta_{ij}&lt;\pi$</span> for <span class="math-container">$1\leq i&lt;j\leq k$</span>, what conditions are there on the angles to ensure that there exists <span class="math-container">$k$</span> unit vector <span class="math-container">$v_i\in \mathbb R^k$</span> so that the angle between <span class="math-container">$v_i$</span> and <span class="math-container">$v_j$</span> is <span class="math-container">$\theta_{ij}$</span>?</p> <p>There are clearly problems when <span class="math-container">$\theta_{12},\theta_{13},\theta_{23}$</span> are all close to <span class="math-container">$\pi$</span>. What if I can ensure, for a given <span class="math-container">$\epsilon&gt;0$</span> that <span class="math-container">$|\theta_{ij}-\frac{\pi}2|&lt;\epsilon?$</span></p> <p>Intuitively, this is true for <span class="math-container">$k=3$</span> and the angles close to <span class="math-container">$\frac{\pi}2$</span>. We can easily pick <span class="math-container">$v_1,v_2$</span> and the locus of points for <span class="math-container">$v_3$</span> meeting the angle requirement with <span class="math-container">$v_1$</span> is a near-great circle in the unit sphere. With <span class="math-container">$v_2$</span>, the same. And these two near-great circles are near-perpendicular. We need them to intersect for <span class="math-container">$v_3$</span> to be found.</p> <p>But I can’t prove it, and my intuition for <span class="math-container">$k&gt;3$</span> spheres is negligible.</p> <p>This is a possible solution for <a href="https://math.stackexchange.com/q/4177715">this question</a>. 
(In fact, I only need <span class="math-container">$k&gt;3$</span> to complete that question - I’ve got an entirely different solution for <span class="math-container">$k=3$</span> in that question.)</p> <hr /> <p>I think a minimum necessary condition is, for all distinct <span class="math-container">$i,j,k$</span>: <span class="math-container">$$\theta_{ij}+\theta_{jk}\geq \theta_{ik}.\tag 1$$</span> (Here we need <span class="math-container">$\theta_{ij}=\theta_{ji}$</span> for <span class="math-container">$i&gt;j$</span> to include all the correct cases in (1).)</p> <p>Also, probably: <span class="math-container">$$\theta_{ij}+\theta_{jk}+\theta_{ik}\leq 2\pi\tag 2$$</span></p> <p>It seems like, when <span class="math-container">$k=3$</span>, (1) and (2) should be enough.</p>
NN2
195,378
<p><strong>Note</strong>: It's not the final answer. It provides a necessary condition (but not a sufficient condition).</p> <p>Let us represent the unit vectors <span class="math-container">$\{\boldsymbol{v}_i \}_{1 \le i \le k} \in \Bbb R^k$</span>, beginning from the origin <span class="math-container">$\boldsymbol{O}$</span>, by <span class="math-container">$\boldsymbol{v}_i = (x_{i1},\ldots,x_{ik})'$</span> for <span class="math-container">$1\le i \le k$</span>. We have then <span class="math-container">$$||\boldsymbol{v}_i||^2=\sum_{p=1}^k x_{ip}^2=1 \qquad \text{for } 1 \le i \le k \tag{1}$$</span></p> <p>The angle <span class="math-container">$\theta_{ij}$</span> between two vectors <span class="math-container">$\boldsymbol{v}_i$</span> and <span class="math-container">$\boldsymbol{v}_j$</span> is determined by the <a href="https://en.wikipedia.org/wiki/Dot_product#Geometric_definition" rel="nofollow noreferrer">dot product</a> as follows <span class="math-container">$$\cos(\theta_{ij})=\frac{\boldsymbol{v}_i.\boldsymbol{v}_j}{||\boldsymbol{v}_i||.||\boldsymbol{v}_j||}=\boldsymbol{v}_i.\boldsymbol{v}_j=\sum_{p=1}^kx_{ip}x_{jp} \qquad \text{for } 1 \le i&lt; j \le k \tag{2}$$</span></p> <p>In total, we have <span class="math-container">$\frac{k(k-1)}{2}$</span> equations <span class="math-container">$(2)$</span>, one for each pair <span class="math-container">$(\boldsymbol{v}_i,\boldsymbol{v}_j)_{1 \le i&lt; j \le k}$</span>.</p> <p><strong>First necessary condition</strong>: Summing <span class="math-container">$k$</span> equations <span class="math-container">$(1)$</span> and twice <span class="math-container">$\frac{k(k-1)}{2}$</span> equations <span class="math-container">$(2)$</span> (in total, we have <span class="math-container">$k + 2\times \frac{k(k-1)}{2} = k^2$</span> terms), we have <span class="math-container">$$k+2 \times\sum_{1 \le i &lt; j\le k} \cos(\theta_{ij})=\sum_{h=1}^k\left(\sum_{i=1}^k x_{ih} \right)^2 \ge 0$$</span> or <span 
class="math-container">$$\sum_{1 \le i &lt; j\le k} \cos(\theta_{ij}) \ge -\frac{k}{2} \tag{3}$$</span></p> <p><strong>Remark</strong>: if we have <span class="math-container">$\theta_{ij} = \theta$</span> for <span class="math-container">$1 \le i &lt; j\le k$</span>, then we can deduce from <span class="math-container">$(3)$</span> that</p> <p><span class="math-container">$$0 \le \theta \le \text{arcos}\left( -\frac{1}{k-1} \right)$$</span></p> <p>The equality occurs when <span class="math-container">$k$</span> equations <span class="math-container">$(1)$</span>, <span class="math-container">$\frac{k(k-1)}{2}$</span> equations <span class="math-container">$(2)$</span> and <span class="math-container">$k$</span> equations <span class="math-container">$(4)$</span> are satisfied. <span class="math-container">$$\sum_{i=1}^k x_{ih} = 0 \qquad \text{for } 1 \le h \le k \tag{4}$$</span></p> <p>(In this case, we have <span class="math-container">$\frac{k(k+3)}{2}$</span> equations/constraints for <span class="math-container">$k^2$</span> variables <span class="math-container">$\{x_{ij}\}_{1\le i,j\le k}$</span>).</p> <p><strong>Second necessary condition</strong>: Applying the triangle inequality for <span class="math-container">$ i \ne j \ne h \ne i $</span></p> <p><span class="math-container">$$\sqrt{\sum_{p=1}^k (x_{ip}-x_{hp})^2}+\sqrt{\sum_{p=1}^k (x_{jp}-x_{hp})^2} \ge \sqrt{\sum_{p=1}^k (x_{ip}-x_{jp})^2}$$</span></p> <p>then <span class="math-container">$$\sin(\frac{\theta_{ih}}{2}) +\sin(\frac{\theta_{jh}}{2}) \ge \sin(\frac{\theta_{ij}}{2}) \quad \text{for } i \ne j \ne h\tag{5}$$</span> (we have <span class="math-container">$\frac{k(k-1)(k-2)}{6}$</span> constraints of type <span class="math-container">$(5)$</span>)</p> <p><strong>Conclusion</strong>: <span class="math-container">$(3), (5)$</span> are 2 necessary conditions.</p>
4,407,210
<p><span class="math-container">$$y''-\frac{1}{x}y'=2x\cdot cos(x)$$</span></p> <p>For the homogeneous part I multiplied through with <span class="math-container">$x^2$</span> and got a second order Cauchy Euler equation with the general solution: <span class="math-container">$$y_h (x)=A x^2 +B$$</span></p> <p>Then for the particular solution I tried using the method of undetermined coefficients but the whole thing became too entangled to solve and I couldn't get anywhere!</p> <p>Maple tells me the solution is: <span class="math-container">$$2sin(x) - 2x\cdot cos(x)+ \frac{C_1\cdot x^2 }{2}+C_2$$</span> but I can't figure it out... Any help would be appreciated!</p>
Doug M
317,176
<p>This reduces the problem to a first order Diff Eq. <span class="math-container">$u= y'\\ xu' - u = 2x^2\cos x$</span></p> <p>Choose a candidate for the particular solution that could work.</p> <p><span class="math-container">$u_p = x\sin x$</span><br /> &quot;Generalize&quot; this by adding terms that might come up in the derivative that we hope will cancel out. Give every term a coefficient.</p> <p><span class="math-container">$u_p = Ax\sin x + Bx\cos x + C\sin x+ D\cos x$</span></p> <p>Differentiate and plug into the original equation.</p> <p><span class="math-container">$u_p' = A\sin x + Ax\cos x + B\cos x - Bx\sin x + C\cos x - D\sin x\\ u_p' = - Bx\sin x + Ax\cos x + (A-D)\sin x + (B+C)\cos x$</span></p> <p><span class="math-container">$xu_p' - u_p = - Bx^2\sin x + Ax^2\cos x -Dx\sin x + Cx\cos x - C\sin x - D\cos x = 2x^2\cos x$</span></p> <p><span class="math-container">$A = 2, B,C,D = 0$</span></p> <p><span class="math-container">$u_p = 2x\sin x\\ y_p' = 2x\sin x\\ y_p = \int 2x\sin x \ dx = -2x\cos x + 2\sin x + C$</span></p> <p><span class="math-container">$y = y_h + y_p = Ax^2 + B - 2x\cos x + 2\sin x$</span></p>
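One can double-check the particular solution numerically (a quick finite-difference sketch; the step size and tolerance are ad hoc choices of mine):

```python
import math

def y(x):
    # Particular solution found above (integration constants dropped).
    return -2.0 * x * math.cos(x) + 2.0 * math.sin(x)

def residual(x, h=1e-4):
    # Central differences for y' and y''; the residual of y'' - y'/x - 2x cos x
    # should be near zero if y really solves the ODE.
    y1 = (y(x + h) - y(x - h)) / (2.0 * h)
    y2 = (y(x + h) - 2.0 * y(x) + y(x - h)) / (h * h)
    return y2 - y1 / x - 2.0 * x * math.cos(x)

print(max(abs(residual(x)) for x in (0.5, 1.0, 2.0, 5.0)) < 1e-5)
```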
2,067,097
<p>Given are two points on a line with coordinates. How do we calculate the third forming a perfect 60 degree triangle? So we have X,Y, but need Z...</p> <p>X: 0,0 &emsp;&emsp;&emsp;( 0,0 i.e. horizontal, vertical )<br> Y: 50, 0 <br> Z: 25, ??</p> <p>How to calculate the missing coordinate for Z? Forming a perfect 60 degree triangle?</p> <p>EDIT: I'm trying to figure out a formula that by providing only 50 as the length of one side of the triangle all the point coordinates can be figured out. The first 2 and half of them are easy (0,0 50, 0, ??, 25)... but how to calculate the number at '??' is what I'm trying to figure out.</p> <p>EDIT2 (@Dennis): It would probably help if I explained that my coordinates are not of the actual point but a 100px wide circle that the point is in the middle of. I made a small video to show you guys exactly what I'm doing and will post it just as soon as it's done uploading.</p> <p>EDIT3: Here is a video showing exactly what I mean and why I came up with the question in the first place: <a href="http://archebian.org/videos/math/triangle-question.mp4" rel="nofollow noreferrer">http://archebian.org/videos/math/triangle-question.mp4</a></p>
Fred
380,717
<p>I gave your question to Pythagoras (a friend of mine). He said: the missing second coordinate is given by</p> <p>$$ \sqrt{50^2-25^2}=25\sqrt{3}\approx 43.30.$$</p>
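In code (a Python sketch; the function name and the convention that the base runs from the origin along the x-axis are my own):

```python
import math

def equilateral_apex(side):
    # Base from (0, 0) to (side, 0); the apex sits above the midpoint,
    # at height sqrt(side^2 - (side/2)^2) = side * sqrt(3) / 2 by Pythagoras.
    half = side / 2.0
    return (half, math.sqrt(side ** 2 - half ** 2))

x, y = equilateral_apex(50)
print(x, round(y, 4))  # 25.0 43.3013
```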
427,564
<p>I'm supposed to give a 30-minute math lecture tomorrow at my 3rd-grade daughter's class. Can you give me some ideas of mathematical puzzles, riddles, facts etc. that would interest kids at this age?</p> <p>I'll go first - Gauss' formula for the sum of an arithmetic sequence.</p>
ralu
20,840
<p>Probability, throwing dice and gambling. How likely is it to get a 6 when throwing a die, and what implications this has.</p>
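If a computer is handy in the classroom, a quick simulation (a Python sketch; the seed and trial count are arbitrary choices of mine) makes the 1/6 frequency tangible:

```python
import random

random.seed(0)  # fixed seed so the demo is repeatable in class

trials = 100_000
sixes = sum(1 for _ in range(trials) if random.randint(1, 6) == 6)
print(round(sixes / trials, 3))  # close to 1/6, i.e. about 0.167
```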
427,564
<p>I'm supposed to give a 30-minute math lecture tomorrow at my 3rd-grade daughter's class. Can you give me some ideas of mathematical puzzles, riddles, facts etc. that would interest kids at this age?</p> <p>I'll go first - Gauss' formula for the sum of an arithmetic sequence.</p>
Alexander Gruber
12,952
<p>Teach 'em how to play <a href="http://en.wikipedia.org/wiki/Sprouts_%28game%29">sprouts.</a></p>
427,564
<p>I'm supposed to give a 30-minute math lecture tomorrow at my 3rd-grade daughter's class. Can you give me some ideas of mathematical puzzles, riddles, facts etc. that would interest kids at this age?</p> <p>I'll go first - Gauss' formula for the sum of an arithmetic sequence.</p>
Robert Mastragostino
28,869
<p>Eeny-meeny-miney-mo is essentially counting up to $16$. I'm not sure if they'd be comfortable enough with multiplication and remainders to totally grasp it, but I think kids would like to be able to "cheat the system" and intentionally predict who to land on (by, say, realizing that 16 is one more than a multiple of 5).</p>
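The "cheat" can be written down explicitly (a Python sketch; the modular-arithmetic framing and helper name are mine):

```python
def rhyme_lands_on(children, words=16, start=0):
    # Saying `words` words around a circle, one child per word starting
    # at `start`, ends (words - 1) steps further around the circle.
    return (start + words - 1) % children

# With 5 children, a 16-word rhyme starting at child 0 lands back on
# child 0, since 16 is one more than a multiple of 5.
print(rhyme_lands_on(5))  # 0
```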
427,564
<p>I'm supposed to give a 30-minute math lecture tomorrow at my 3rd-grade daughter's class. Can you give me some ideas of mathematical puzzles, riddles, facts etc. that would interest kids at this age?</p> <p>I'll go first - Gauss' formula for the sum of an arithmetic sequence.</p>
Shivendra
83,703
<p>Maybe a class on improving multiplication speed would be helpful. Tell the students to use multiplication in day-to-day life via the distributive property of multiplication:</p> <ul> <li><p>multiply 19x15: it can be changed to 15x(20-1) in your head, and then 300-15 = 285. Such tricks will help speed up their calculation and will be fun to teach with many examples!</p></li> </ul>
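The trick is just the distributive law, which can be spelled out in a line of Python (a sketch; the function name is my own):

```python
def near_round_product(a, round_factor, correction):
    # a * (round_factor + correction) = a * round_factor + a * correction
    return a * round_factor + a * correction

# 19 x 15 done mentally as 15 x (20 - 1) = 300 - 15
print(near_round_product(15, 20, -1))  # 285
```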
1,722,995
<blockquote> <blockquote> <p>Question: Given the circle $x^2+y^2=25$ is inscribed in triangle $\triangle ABC$, where vertex $B$ lies on the first quadrant. Slope of $AB$ is $\sqrt 3$ and has a positive y-coordinate, and $|AB|=|AC|$. Find the equations for $AC$ and $BC$</p> </blockquote> </blockquote> <p>I found out the equation for the straight line passing through $AB$: Let the line be $y=\sqrt 3 x+c$. Then</p> <p>$3x^2+2\sqrt 3 cx+c^2+x^2=25$</p> <p>$\Delta =0$ (discriminant)</p> <p>$(2\sqrt 3 c)^2 - 4(4)(c^2-25)=0$</p> <p>$c=10$</p> <hr> <p>However, I don't see any simple way to find out the equations of line for $AC$ and $BC$. While it seems like there is enough information, I have tried using similar triangles, etc, but I can't find out the coordinates of the vertices. Can anyone give me some hints? Thank you!</p>
manshu
287,678
<p>Hint: Use the formula of incentre. It is given by $$(x,y)=(\dfrac{ax_1+bx_2+cx_3}{a+b+c},\dfrac{ay_1+by_2+cy_3}{a+b+c})$$</p> <p>Here $a,b,c $ are the lengths of the sides of the triangle. Length a is opposite to the point A. Length b is opposite to the point B. Length c is opposite to the point C.<br> Here you can easily see $x=y=0$ &amp; $b=c$</p>
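The incentre formula translates directly into code (a Python sketch; `math.dist` needs Python 3.8+, and the 3-4-5 example is mine):

```python
import math

def incenter(A, B, C):
    # Side lengths opposite each vertex: a = |BC|, b = |CA|, c = |AB|.
    a = math.dist(B, C)
    b = math.dist(C, A)
    c = math.dist(A, B)
    s = a + b + c
    return ((a * A[0] + b * B[0] + c * C[0]) / s,
            (a * A[1] + b * B[1] + c * C[1]) / s)

# For a 3-4-5 right triangle with legs on the axes, the incenter is (1, 1).
print(incenter((0, 0), (4, 0), (0, 3)))
```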
3,041,907
<p>I am unable to isolate the variable <span class="math-container">$x$</span> of this inequality <span class="math-container">$y \leq \sqrt{2x-x^2}$</span> ( where <span class="math-container">$0 \leq y \leq 1 $</span>)</p> <p>Is it correct doing this: <span class="math-container">$y^2 \leq 2x-x^2$</span>? I found that <span class="math-container">$y^2 \leq x \leq 2-y^2$</span> and <span class="math-container">$0 \leq x \leq 2$</span>. Is it correct?</p> <p>From here I am not sure how to proceed. Thanks in advance for any help.</p>
Joel Pereira
590,578
<p>When you get to <span class="math-container">$x^2-2x+y^2 = 0$</span>, you can complete the square for the x-terms and get <span class="math-container">$$ (x^2-2x+1) + y^2 = 1$$</span> <span class="math-container">$$ (x-1)^2 + y^2 = 1$$</span></p> <p>This is a circle of radius 1 centered at $(1,0)$. So now you just need to test 2 regions: the inside of the circle and the outside of the circle. Take a point in each region and see if the inequality holds. Try the center of the circle and a point on one of the axes that lies outside the circle.</p>
366,311
<blockquote> <p>Show that the sequence $\displaystyle (x_n)=\left( \sum_{i=1}^n\frac 1 i\right)$ diverges, directly from the definition of divergence.</p> </blockquote> <p>I'm not familiar with proving that a sequence diverges. Does anyone have any ideas? Thank you.</p>
Alex Ravsky
71,850
<p>Hint: Estimate from below the sums $A_n=\sum_{i=2^n}^{2^{n+1}-1}\frac 1i$.</p>
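The hint can be checked numerically (a Python sketch; the block indexing follows the hint, and the code only illustrates the estimate, it does not prove it):

```python
def block_sum(n):
    # A_n = sum of 1/i for 2^n <= i <= 2^(n+1) - 1: there are 2^n terms,
    # each at least 1/2^(n+1), so every block is at least 1/2.
    return sum(1.0 / i for i in range(2 ** n, 2 ** (n + 1)))

blocks = [block_sum(n) for n in range(12)]
print(all(b >= 0.5 for b in blocks))  # so the partial sums grow without bound
```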
278
<p>If you take a look at our status in the <a href="http://area51.stackexchange.com/proposals/64216/mathematics-educators">area51</a>, all criteria seem to be satisfied (soon) but not the number of questions asked (which seems to be decreasing, actually). Do you think this is a problem for us? Is it something we should / can take care of?</p>
Benjamin Dickman
262
<p>I think the fundamental piece is to accumulate <strong>users</strong> who ask high quality questions and/or provide high quality answers. The "we"/"us" you refer to is not static, of course, and the hope must be that the user-base on MESE continues to increase. How this accretion will manifest is still unclear (to me).</p> <p>I don't think the concern you are putting forth is that there is a danger of running out of meaningful and tractable questions in <em>Mathematics Education</em> (i.e., as an area of study); but I remark, nevertheless, that such questions are certain not to run out in our lifetimes.</p>
34,215
<p>How do professional mathematicians learn new things? How do they expand their comfort zone? By talking to colleagues? </p>
Deane Yang
613
<p>It seems to me that the most important thing to learn when you're a graduate student is how to learn more mathematics. Everything else is detail. So you do what you learned to do as a graduate student (in order of increasing effectiveness, at least for me):</p> <ul> <li>Read papers and books (I'm actually unable to do this. I fall asleep.)</li> <li>Sit in on courses</li> <li>Work out problems in books</li> <li>Try to work out the details of a paper yourself and refer back to the paper when you get stuck</li> <li>Set up working seminars with other people who want to learn the same thing</li> </ul> <p>And I'll repeat what one of the other answers said: ask every dumb question that comes to mind and that you can't figure out the answer to. This can be done in person, by phone, or by email. Or even on MathOverflow.</p>
1,858,529
<p>I know there are some threads dealing with this sum, but I want to solve it with the integral test for convergence (<a href="https://en.wikipedia.org/wiki/Integral_test_for_convergence" rel="nofollow">more</a>):</p> <blockquote> <p>$$\sum\limits_{n=3}^{\infty} \frac{1}{n\log(n)\log(\log(n))}$$</p> </blockquote> <p>I can't find the right substitution here: $$\int\limits_3^{\infty} \frac{1}{x\log(x)\log(\log(x))}dx$$ I used $t=\log(x)$ but it doesn't work. Any hints?</p>
Olivier Oloa
118,798
<p><strong>Hint</strong>. One has, for $x\ge3$, $$ (\log(\log x))'=\frac{\frac1x}{\log x}=\frac1{x\log x} $$ giving $$ \int\limits_3^{\infty} \frac{1}{x\log(x)\log(\log x)}dx=\int\limits_3^{\infty} \frac{\frac1{x\log x}}{\log(\log x)}dx=\int\limits_3^{\infty} \frac{(\log(\log x))'}{\log(\log x)}dx. $$</p>
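As a numerical sanity check of the hint (a Python sketch; the midpoint-rule parameters are arbitrary choices of mine), the antiderivative $\log(\log(\log x))$ matches the integral over a finite range:

```python
import math

def integrand(x):
    return 1.0 / (x * math.log(x) * math.log(math.log(x)))

def antiderivative(x):
    # From the hint: (log(log(log x)))' is exactly the integrand for x > e.
    return math.log(math.log(math.log(x)))

a, b, n = 3.0, 100.0, 500_000
h = (b - a) / n
midpoint = h * sum(integrand(a + (i + 0.5) * h) for i in range(n))
exact = antiderivative(b) - antiderivative(a)
print(abs(midpoint - exact) < 1e-3)
```

Since $\log(\log(\log x))\to\infty$, the integral (and hence the series) diverges.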
1,858,529
<p>I know, there are some threads dealing with this sum but I want to solve it with the integral test for convergence(<a href="https://en.wikipedia.org/wiki/Integral_test_for_convergence" rel="nofollow">more</a>)</p> <blockquote> <p>$$\sum\limits_{n=3}^{\infty} \frac{1}{n\log(n)\log(\log(n))}$$</p> </blockquote> <p>I can't find the right substitution here: $$\int\limits_3^{\infty} \frac{1}{x\log(x)\log(\log(x))}dx$$ I used $t=\log(x)$ but it doesn't work. Any hints?</p>
Ben Grossmann
81,360
<p>Make the substitution $t = \ln(n)$, so $dt = \frac{dn}{n}$. We find that $$ \int\limits_3^{\infty} \frac{1}{n\ln(n)\ln(\ln(n))}\,dn = \int_3^\infty \frac{1}{t(n)\ln(t(n))} \frac{1}n \,dn = \\ \int_{\ln 3}^\infty \frac{1}{t\ln(t)}\,dt $$ Integrate this using the further substitution $u = \ln(t)$.</p> <p>Alternatively, start with the substitution $t = \ln(\ln(n))$.</p>
696,285
<p>Let $X$ and $Y$ be some infinite dimensional Banach spaces. Let $T:X\longrightarrow Y$ be some compact linear operator. It is easy to see that $T$ cannot be surjective: the Open Mapping Theorem, due to Banach, states that surjective bounded operators are open, and it follows that the image of the unit ball $\mathbb{B}_{1,X}$ would contain an open ball $\{||y||&lt;r\}$ for some $r&gt;0$, so its closure can't be compact. Is it true that the image of $T$ cannot contain any closed subspace of infinite dimension?</p>
Daniel Fischer
83,702
<p>Let $F\subset T(X)$ be a closed (in $Y$) subspace. Then $E = T^{-1}(F)$ is a closed subspace of $X$, and $T\lvert_E \colon E \to F$ is a compact surjective operator. Since $F$ is closed in $Y$, the open mapping theorem implies that $F$ is finite-dimensional.</p> <p>So: the image of a compact operator cannot contain an infinite-dimensional Banach space.</p>
696,285
<p>Let $X$ and $Y$ be some infinite dimensional Banach spaces. Let $T:X\longrightarrow Y$ be some compact linear operator. It is easy to see that $T$ cannot be surjective: the Open Mapping Theorem, due to Banach, states that surjective bounded operators are open, and it follows that the image of the unit ball $\mathbb{B}_{1,X}$ would contain an open ball $\{||y||&lt;r\}$ for some $r&gt;0$, so its closure can't be compact. Is it true that the image of $T$ cannot contain any closed subspace of infinite dimension?</p>
user133339
133,339
<p>A compact operator has a closed range if and only if it has a finite-dimensional range. Without loss of generality, we can assume that the range of <span class="math-container">$T$</span> is closed; otherwise we consider the restriction of <span class="math-container">$T$</span> to <span class="math-container">$E$</span>, where <span class="math-container">$E=T^{-1}(F)$</span>. In all cases the theorem ensures that <span class="math-container">$F$</span> is finite dimensional. To prove the theorem, consider the canonical map associated with <span class="math-container">$T$</span>: the closedness of the range implies the invertibility of the canonical map, and hence the compactness of the identity on the range, which forces the range to be finite dimensional.</p>
2,445,655
<p>Challenge: A Good Deal</p> <p>You are currently learning some important aspects of collusion and cartels. This challenge puts you in the position of a bad guy, namely a price-fixing sales manager. Suppose that you find yourself in a so-called “smoke-filled room” to fix prices for the upcoming year with the sales manager of a competing firm, Snitch Inc.. Suddenly the door opens and one of the employees of Snitch Inc. enters the room. The employee knows that price-fixing is illegal and immediately grabs his cellphone to inform the competition authority. You are fully aware that you are now facing a serious risk of getting a fine or even a jail sentence. Whereas your fellow sales manager is simply in shock, not knowing what to do, you as a University School of Business and Economics alumni are quick to react and you try to save the situation. Your idea is to bribe the employee. You expect that the employee requires a bribe of at least €100 to remain silent. In other words, the reservation price of the employee is €100. For simplicity, assume in the following that this expectation is correct. At the same time, suppose that you are not willing to offer more than €200 for otherwise you would rather save your money and spend it on a good lawyer instead. In other words, your reservation price is €200. You consider your chances and are thinking about making an offer. For simplicity, assume in the following that all parties aim to maximize the gains from trade and that offers can be any positive real number (all values weakly above zero, that is).</p> <p>Try to provide a clear and concise answer to the following four questions. For the first two questions suppose that the employee is still slightly in shock and therefore can only respond by either accepting or rejecting your offer.</p> <ol> <li>How many Nash equilibria does this game have, if any? Explain your answer.</li> <li>How many subgame perfect Nash equilibria does this game have, if any? 
Explain your answer.</li> </ol> <p>Now suppose that you are dealing with a somewhat cocky employee who is brave enough to start negotiating with you. That is, in questions 3 and 4 below, the employee, instead of simply accepting or rejecting the offer, may now make a counteroffer instead. For simplicity, assume that both parties get a zero payoff in case of “no deal”.</p> <ol start="3"> <li>Suppose that the employee indeed does make a counteroffer and that you will either accept or reject (in other words, the game ends after your response to the counteroffer). What is the subgame perfect Nash equilibrium in this case? </li> </ol> <p>You fear that these negotiations may take quite some time, and as a businessman you know that time is money. Suppose that your impatience as well as that of the employee is given by a common discount factor 0 &lt; δ &lt; 1 (which is known to you and the employee). The interpretation is that the utility of a money amount K “tomorrow” is equal to the utility of an amount of δK “today”. Your goal is to make an acceptable offer in the first round while at the same time saving as much money as possible.</p> <ol start="4"> <li>What is the subgame perfect Nash equilibrium outcome in this case? Do you (i.e., the sales manager) or the employee benefit from your impatience? Explain your answer.</li> </ol>
Eric M. Schmidt
48,235
<p>Making the substitution $u = 1/x$, we aim to find $$\lim_{u\to 0^{\large{-}}} \left(2/u + \sqrt{4/u^2 + 1/u}\ \right).$$ When $u &lt; 0$, we have $\sqrt{u^2} = -u$, so $\sqrt{4/u^2 + 1/u} = \sqrt{(4+u)/u^2}=-\sqrt{4 + u}/u$. Hence, we obtain $$\lim_{u\to 0^{\large{-}}} \frac{2-\sqrt{4 + u}}{u}.$$ The numerator is $0$ at $u=0$. So let $f(u) = 2 - \sqrt{4+u}$. If we use the definition of the derivative, we see that our limit can be written as $$\lim_{u\to 0^{\large{-}}}\frac{f(u) - f(0)}{u} = f'(u)\Big|_{u=0}$$ provided the derivative exists. Applying the standard rules of differentiation, $f'(u) = -1/(2\sqrt{4+u})$. This equals $-1/4$ at $u=0$. Hence, $$\lim_{x\to-\infty} \left(2x + \sqrt{4x^2 +x}\right) = -1/4.$$</p>
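A numerical check (a Python sketch; the sample points are arbitrary choices of mine) agrees with the value $-1/4$:

```python
import math

def f(x):
    return 2.0 * x + math.sqrt(4.0 * x * x + x)

# The values settle toward -0.25 as x -> -infinity.
values = [f(-(10.0 ** k)) for k in range(2, 7)]
print(all(abs(v + 0.25) < 1e-2 for v in values))
print(abs(f(-1e6) + 0.25) < 1e-6)
```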
3,917,255
<p>Why does Chebyshev's inequality demand that <span class="math-container">$\mathbb{E(}X^2) &lt; \infty$</span>?</p>
José Carlos Santos
446,262
<p>Every closed subset of a compact metric space is compact. And a continuous map maps compact sets onto compact sets. And, finally, every compact subset of a metric space is closed. So, yes, your map is compact.</p>
477,477
<p>Prove that $e^x=-x^2+2x+5$ has exactly two solutions.</p> <p>Is it enough to argue that the vertex of the parabola lies above $y=e^x$ and that its arms point downward?</p>
bubba
31,744
<blockquote> <p>is there any trivial quad tessellation that minimizes distortion in this case</p> </blockquote> <p>As indicated in a comment, a cube is a tessellation of a sphere using squares as the tiles. Not a very interesting/useful tessellation, from a graphics point of view, though, unless the spherical object is very small or you're trying very hard to minimize the number of polygons used.</p> <p>You can get a low-distortion quad tessellation by "inflating" a cube to get 6 patches that lie on the sphere. Or, saying it another way, you project the faces of the cube radially onto the sphere. One of the faces of the cube can be represented by the equations $(u,v) \mapsto (u,v,1)$ for $-1 \le u \le 1$ and $-1 \le v \le 1$. After the radial projection, you get the parametrization $$ \mathbf x(u,v) = \left(\frac{u}{\sqrt{u^2 + v^2 +1}}, \frac{v}{\sqrt{u^2 + v^2 +1}}, \frac{1}{\sqrt{u^2 + v^2 +1}} \right) $$ for a patch that covers one sixth of the sphere. You tessellate this patch in the obvious way, by making constant-sized steps in $u$ and $v$. The other five faces can be handled similarly, or can be obtained by rotations.</p> <p>Here's a picture of one patch:</p> <p><img src="https://i.stack.imgur.com/epBkx.jpg" alt="sphere patch"></p> <p>I don't know if the distortion is minimal, but it seems fairly low, to me.</p> <p>Patches of this type can also be written in NURBS form (non-uniform rational b-spline), and you may be able to hand these to your graphics subsystem and have it do the tessellation for you.</p>
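A small verification (not part of the original answer) that the patch parametrization given above really lands on the unit sphere:

```python
import math

# Radial projection of the cube face z = 1 onto the unit sphere,
# exactly as in the parametrization x(u, v) from the answer.
def patch(u, v):
    r = math.sqrt(u * u + v * v + 1.0)
    return (u / r, v / r, 1.0 / r)

# Sample the face -1 <= u, v <= 1 on a grid and check |x(u,v)| = 1.
for i in range(-4, 5):
    for j in range(-4, 5):
        u, v = i / 4.0, j / 4.0
        x, y, z = patch(u, v)
        assert abs(x * x + y * y + z * z - 1.0) < 1e-12
print("all sample points lie on the unit sphere")
```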
3,154,032
<p>Suppose we have a 4 dimension positive signature clifford algebra. In <a href="https://math.stackexchange.com/questions/443555/calculating-the-inverse-of-a-multivector">Calculating the inverse of a multivector</a> and <a href="https://math.stackexchange.com/questions/556247/inverse-of-a-general-nonfactorizable-multivector">Inverse of a general nonfactorizable multivector</a>, the inverse of a multivector is presented as a solution when vectors/bivectors are present </p> <p><span class="math-container">$B^{-1} = \frac{B^\dagger}{B B^\dagger}$</span></p> <p>but the above is not true for any multivector. For example, how to know if </p> <p><span class="math-container">$(1+e_{1234})^{-1}$</span> </p> <p>exists and how to compute it?</p>
Nicholas Todoroff
1,068,683
<p><a href="https://doi.org/10.1007/s40314-021-01536-0" rel="nofollow noreferrer">This</a> paper by D. S. Shirokov (2021) claims to give a basis-free formula using only basic operations and various involutions on multivectors. <a href="https://arxiv.org/abs/2005.04015" rel="nofollow noreferrer">Preprint</a>.</p>
2,479,918
<p>Every vector space $V$ could be embedded into $V^{\ast}$ (see <a href="https://en.wikipedia.org/wiki/Dual_basis" rel="noreferrer">here</a>) after choosing a basis, for a given vector $v \in V$ denote this embedding by $v^{\ast}\in V^{\ast}$. Now for given vector spaces $V_1, \ldots, V_k$ over some field $F$, let $V = \{ \varphi : V_1 \times \ldots \times V_k \to F \mbox{ multilinear } \}$. Why not define the tensor product of $V_1, \ldots, V_k$ simply as $T = \{ \varphi^{\ast} \mid \varphi\in V\}$. Then the universal property is obviously fulfilled, for if we define $\pi : V_1 \times \ldots \times V_k \to T$ by $\pi(v_1, \ldots, v_k) = \Phi \in V^{\ast}$ with $$ \Phi(\varphi) = \varphi(v_1, \ldots, v_k). $$ Then if we have some multilinear $\varphi : V_1 \times \ldots \times V_k \to F$ define the linear map $h_{\varphi} : T \to F$ by $$ h_{\varphi}(\Phi) = \Phi(\varphi) $$ and we have $$ h_{\varphi}(\pi(v_1, \ldots, v_k)) = \varphi(v_1, \ldots, v_k) $$ i.e. it factors through $T$ by $\pi$ and $h_{\varphi}$. Then everything works out quite easily, no nasty "quotient constructions", it even appears too simple for me...</p> <p>I have nowhere seen this definition? So why not define it that way? Have I overlooked something? Note that we do not rely on reflexivity here, as $T$ does not has to be all of $V^{\ast}$, but just those elements that arise from elements of $V$ (the image of the embedding). Maybe the universal property breaks down because the linear map is not unique, but I do not see other choices for it?</p>
Eric Wofsey
86,856
<p>Let me mention something which has not been explicitly stated in any of the other answers: your construction is emphatically wrong for infinite-dimensional vector spaces (not just, you have to tweak it or do more work prove it works, but it just doesn't give the right answer at all).</p> <p>This is easiest to see by considering the case $k=1$. The tensor product of just a single vector space $V_1$ should, of course, be (isomorphic to) $V_1$ itself, since a multilinear map on the one-fold product $V_1$ is the same thing as a linear map on $V_1$. However, your definition of $T$ would make it isomorphic to $V$, which is just the dual space $V_1^*$ in this case. If $V_1$ is infinite-dimensional, then $V_1^*$ is not isomorphic to $V_1$!</p> <p>(The basic thing that goes wrong with your argument in this case is that you haven't checked <em>uniqueness</em> of your $h_\varphi$. You have only constrained the value of $h_\varphi$ on elements $\Phi$ of the form $\pi(v_1)$, but not every element of $T$ has this form, since $V=V_1^*$ is bigger than $V_1$.)</p>
760,926
<p>Show that $\binom{n}{0} - \binom{n}{1} + \binom{n}{2} - \cdots + (-1)^k \binom{n}{k} = (-1)^k \binom{n-1}{k}$.</p> <p>I know this has to do with permutation and combination problems, but I'm not sure how I would start with this problem. </p>
ml0105
135,298
<p>We have the identity $\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}$. Applying it to every term with $i \geq 1$, the series telescopes:</p> <p>$$\sum_{i=0}^{k} (-1)^{i} \binom{n}{i} = \binom{n}{0} + \sum_{i=1}^{k} (-1)^{i} \binom{n-1}{i-1} + \sum_{i=1}^{k} (-1)^{i} \binom{n-1}{i}$$ </p> <p>The first term is $\binom{n}{0} = 1$. Then $-\binom{n}{1} = -\binom{n-1}{0} - \binom{n-1}{1}$. For any $x$, $\binom{x}{0} = 1$, so $\binom{n}{0} - \binom{n-1}{0} = 0$.</p> <p>Now look at $\binom{n}{2} = \binom{n-1}{1} + \binom{n-1}{2}$. By telescoping, $-\binom{n-1}{1} + \binom{n-1}{1} = 0$.</p> <p>In general, the $(-1)^{i}\binom{n-1}{i-1}$ contributed by $(-1)^{i}\binom{n}{i}$ cancels the $(-1)^{i-1}\binom{n-1}{i-1}$ left over from $(-1)^{i-1}\binom{n}{i-1}$. The only term that survives is the last one, and we are left with $(-1)^{k}\binom{n-1}{k}$. </p>
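A brute-force numeric check of the identity (not part of the original answer) over a range of small parameters:

```python
from math import comb

# Verify sum_{i=0}^k (-1)^i C(n,i) = (-1)^k C(n-1,k) exhaustively
# for small n and all 0 <= k < n.
for n in range(1, 12):
    for k in range(0, n):
        lhs = sum((-1) ** i * comb(n, i) for i in range(k + 1))
        rhs = (-1) ** k * comb(n - 1, k)
        assert lhs == rhs
print("identity holds for all tested n, k")
```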
4,520,485
<p>Suppose there are 78 heroes. Only one of them is considered to be 'Tier 1'. At the beginning of some game you are given a choice between either 2 heroes or 4 heroes. The question is: how advantageous is it to choose out of 4 heroes compared to choosing out of 2, if by advantageous we mean to have a higher probability of getting a 'Tier 1' hero? My logic is to first compute the total number of 2-hero combinations and 4-hero combinations: <span class="math-container">$$C^2_{78}=\frac{78!}{(78-2)!2!}=3003; C^4_{78}=\frac{78!}{(78-4)!4!}=1426425$$</span> The probability of getting a 'Tier 1' hero if we choose between 2 is <span class="math-container">$\frac{77}{3003}\approx0.0256$</span>, while the probability of getting a 'Tier 1' hero if we choose between 4 is <span class="math-container">$\frac{77\cdot76\cdot75}{1426425}\approx0.3077$</span>. So, the advantage seems to be <span class="math-container">$\frac{77\cdot76\cdot75}{1426425}\cdot\frac{3003}{77}=12$</span>. So, you are 12 times more likely to get a 'Tier 1' hero if you choose out of 4 heroes than out of 2. Is this logic correct? The advantage seems to be too big.</p>
Robert Israel
8,508
<p>It goes up to <span class="math-container">$n$</span>, not <span class="math-container">$k+n$</span>. In this case the product has only one factor namely <span class="math-container">$k+1 = 3$</span>.</p>
1,568,233
<p>$$\sum_{n=1}^{\infty}n^210^{-n} = \frac{110}{3^6}$$ I noticed this while playing around on my calculator. Is it true and how come?</p>
Ron Gordon
53,268
<p>$$\sum_{n=1}^{\infty} n^2 x^n = x \frac{d}{dx} \left [x \frac{d}{dx} \left (\frac1{1-x} \right ) \right ] $$</p> <p>because </p> <p>$$\frac1{1-x} = \sum_{n=0}^{\infty} x^n$$</p> <p>Evaluating the derivatives, we get</p> <blockquote> <p>$$\sum_{n=1}^{\infty} n^2 x^n = \frac{x(1+ x)}{(1-x)^3}$$</p> </blockquote> <p>Plug in $x=1/10$ and the expected answer results.</p>
1,898,207
<blockquote> <p>A man speaks the truth $8$ out of $10$ times. A fair die is thrown. The man says that the number on the upper face is $5$. Find the probability that the original number on the upper face is $5$.</p> </blockquote> <p>While solving I find two ways (shown in the image). I think one of them is correct and other one is incorrect. Please tell me which is the correct one and why.</p> <p>Any advice on solving tricky problems (on Bayes theorem) is welcome.<a href="https://i.stack.imgur.com/DRzEk.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DRzEk.jpg" alt="enter image description here"></a></p>
lulu
252,071
<p>Surely the answer is $\frac 8{10}$ as all you are asking is: "is the fellow telling the truth?"</p> <p>Let's see this following the approach you took.</p> <p>There are two ways in which one might hear the answer "$5$" from the fellow. Either it is $5$, and he tells the truth about it, or it is not $5$ and he lies and happens to say that it is $5$. Now, here we must make an assumption because the problem doesn't specify how he lies. Absent any information to the contrary, I assume that his lying is "unbiased". that is, when he chooses to lie, he chooses uniformly at random from the available options. If, say, the true value is $4$ then he chooses uniformly at random from $\{1,2,3,5,6\}$. Hence if the die does not show $5$, and he chooses to lie about it, there is a $\frac 15$ probability that he will say $5$ (Note: in your work you appear to assume that when he lies he always says $5$, which seems arbitrary).</p> <p>The probability of the first path is: $\frac 16\times \frac 8{10}$</p> <p>The probability of the second path is $\frac 56 \times \frac 2{10} \times \frac 15$</p> <p>Thus the answer is $$ \frac {\frac 16\times \frac 8{10}}{\frac 16\times \frac 8{10}+\frac 56 \times \frac 2{10} \times \frac 15}=\frac {1\times 8}{1\times 8+2}=\frac 8{10}$$</p> <p>Note: if you make other assumptions about the lying then you can get other answers. Certainly you'll get a different answer if you assume that his dishonest reply is biased for or against saying $5$, which is what you implicitly assumed.</p>
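The two-path Bayes computation in this answer can be checked directly (a small script, not part of the original post):

```python
# Posterior probability that the die shows 5 given the man says "5",
# following the two paths described in the answer.
p_truth = 8 / 10   # probability he tells the truth
p_five = 1 / 6     # prior probability the die shows 5

# Path 1: die shows 5 and he tells the truth.
path1 = p_five * p_truth
# Path 2: die shows something else, he lies, and (assuming an unbiased
# lie) picks "5" out of the 5 remaining options.
path2 = (5 / 6) * (2 / 10) * (1 / 5)

posterior = path1 / (path1 + path2)
assert abs(posterior - 0.8) < 1e-12
print(posterior)
```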
2,210,893
<p>A lot of times when proving for example inequalities like $$x \leq y$$ for real numbers $x,y$ the argument looks like $$x \leq y + \varepsilon$$ for all $\varepsilon &gt; 0$, hence $x \leq y$. </p> <p>Now this is obviously very intuitive, but is there a "proof" that this conclusion is correct? And is it always sufficient in order to prove $x \leq y$ to show $x \leq y + \varepsilon$ for all $\varepsilon &gt; 0$? </p> <p>I'd appreciate any explanations!</p> <p><strong>NOTE:</strong> I know that these kinds of arguments are correct when dealing with sequences. But here we have no sequences so I wanted to understand this too. </p>
joeb
362,915
<p>Suppose $x \leqslant y + \varepsilon$ for every positive $\varepsilon$, and for the sake of contradiction, suppose $x &gt; y$. </p> <p>For the specific error $\varepsilon := \frac{1}{2}(x - y) &gt; 0$ we have that $$x \leqslant y + \varepsilon &lt; y + 2\varepsilon = y + (x - y) = x,$$ which is a desired contradiction.</p>
1,942,578
<p>Consider the following wedge</p> <p><a href="https://i.stack.imgur.com/xiaPX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xiaPX.png" alt=""></a> cut from a cylinder of radius r. The plane that cuts the wedge goes through the very bottom of the cylinder leading to an ellipse as the cross section of the wedge.</p> <p>The long axis of the ellipse is $2R$ in length. </p> <p>How can I prove that the minor axis of the ellipse is $2r$ in length where $r$ is the radius of the cylinder?</p>
Mark Viola
218,419
<blockquote> <p>Recall that the definition of the negation of uniform convergence states that a sequence of functions $f_n(x)$, which converges to $f(x)$, fails to converge uniformly to $f(x)$ for $x\in A$ if there exists an $\epsilon&gt;0$, such that for all $N\in \mathbb{N}$, there exists an $x\in A$ and a number $n&gt;N$ such that</p> <p>$$|f_n(x)-f(x)|\ge \epsilon$$</p> </blockquote> <p>If $f_n(x)=\frac1{nx}$, $f(x)=0$, and $A=\{x|\,0&lt;x\le 1\}$, then for $\epsilon =1/2$ and all $N$, there exists an $x=1/N\in (0,1]$ and a number $n=2N&gt;N$ such that </p> <p>$$\begin{align} \left|f_n(x)-0\right|&amp;=\left|\frac{1}{nx}-0\right|\\\\ &amp;=\left|\frac{N}{n}\right|\\\\ &amp;=\frac12\\\\ &amp;\ge \epsilon \end{align}$$</p> <blockquote> <p>Therefore, we conclude that $f_n(x)$ fails to converge uniformly to $0$ for $x\in (0,1]$.</p> </blockquote> <hr> <blockquote> <p>Recall that the definition of uniform convergence states that a sequence of functions $f_n(x)$, which converges to $f(x)$, converges uniformly to $f(x)$ for $x\in B$ if for all $\epsilon&gt;0$, there exists a number $N\in \mathbb{N}$, such that for all $x\in B$ and all $n&gt;N$ </p> <p>$$|f_n(x)-f(x)|&lt; \epsilon$$</p> </blockquote> <p>Let $B=\{x\,|\,\delta \le x\le 1\}$, for $\delta&gt;0$ and let $\epsilon&gt;0$ be given. Then, note that </p> <p>$$\left|f_n(x)-0\right|=\frac{1}{nx}\le \frac{1}{n\delta }&lt;\epsilon$$</p> <p>for all $x\in B$ and for all $n&gt;\frac{1}{\delta \epsilon}$. </p> <p>Hence, $f_n(x)$ converges uniformly to $0$ for $x\in [\delta,1]$ for $\delta &gt;0$.</p>
137,414
<p>I am trying to evaluate the following expression numerically $$\frac{d^2}{dt^2}\,e^{-2t^2}\int_0^\infty\frac{\operatorname{erf}(\xi/\sqrt{2})}{\xi^{3/2}}\,e^{-\xi^2/2-2\xi t}\,d\xi$$</p> <p>My code is as follows</p> <pre><code>f[t_]:=Exp[-2*t^2]*NIntegrate[Erf[\[Xi]/Sqrt[2]]/\[Xi]^(3/2)*Exp[-(\[Xi]^2/2)-2*\[Xi]*t],{\[Xi],0,Infinity}] Der[t_] := f''[t] </code></pre> <p>But when I evaluate, say by entering </p> <pre><code>Der[5] </code></pre> <p>I am getting an error that the integrand has evaluated to non-numerical values for all sampling points in the region. Now I know that this is a common error, and other threads have revealed that this is happening because Mathematica is not recognizing something in my expression (likely $\xi$) as a numerical variable. However, I am not sure how I would go about fixing this. I have tried putting in ?NumericQ beside all instances of $\xi$ but that didn't seem to work.</p> <p>How would I go about numerically evaluating this expression (and ideally, plotting it as a function of t) without the NIntegrate::inumr error?</p> <p>Thank you in advance!</p>
Bob Hanlon
9,362
<pre><code>Needs["NumericalCalculus`"] f[t_?NumericQ] := Exp[-2*t^2]* NIntegrate[ Erf[ξ/Sqrt[2]]/ξ^(3/2)* Exp[-(ξ^2/2) - 2*ξ*t], {ξ, 0, Infinity}] </code></pre> <p>Use <a href="http://reference.wolfram.com/language/NumericalCalculus/ref/ND.html" rel="nofollow noreferrer"><code>ND</code></a></p> <pre><code>Der[t_?NumericQ] := ND[f[x], {x, 2}, t] Der[5] (* 3.41091*10^-20 *) </code></pre>
3,430,305
<blockquote> <p>Let <span class="math-container">$P$</span> be a partition of <span class="math-container">$[0, b]$</span> defined as <span class="math-container">$P = \{ 0 = x_0 &lt; x_1 &lt; \ldots &lt; x_n = b\}$</span>, and let <span class="math-container">$c_i \in [x_{i-1}, x_i]$</span> for every <span class="math-container">$1 \leq i \leq n$</span>.</p> <p>Prove: <span class="math-container">$$\left|\Sigma^n_{i=1}c_i\Delta x_i - \frac{b^2}{2}\right|\leq \frac{1}{2}\Sigma^n_{i=1}(\Delta x_i)^2$$</span></p> </blockquote> <p>Here's my attempt, trying to go from left to right:</p> <p>Using the triangle inequality: <span class="math-container">$$\left|\Sigma^n_{i=1}c_i\Delta x_i +\left(- \frac{b^2}{2}\right)\right| \leq \left|\Sigma^n_{i=1}c_i\Delta x_i\right| + \left|-\frac{b^2}{2}\right|\\ = \Sigma^n_{i=1}c_i\Delta x_i + \frac{b^2}{2}=\Sigma^n_{i=1}c_i\Delta x_i + \frac{\left(\Sigma^n_{i=1}\Delta x_i\right)^2}{2} \\\leq \Sigma^n_{i=1}x_i \cdot \Delta x_i + \frac{\left(\Sigma^n_{i=1}\Delta x_i\right)^2}{2}$$</span></p> <p>This is as far as I could develop it. Any way to proceed?</p>
Pebeto
605,486
<p>The approach described above assumed <span class="math-container">$0 \notin P$</span>. Here is the general case.</p> <p>A hyperplane is a set of the form <span class="math-container">$\{ x \in \mathbb{R}^n\ s.t. \ c^t x = c_0 \},$</span> where <span class="math-container">$c \in \mathbb{R}^n, \ c \neq 0$</span> and <span class="math-container">$c_0 \in \mathbb{R}$</span>. </p> <p>There exists a hyperplane that satisfies: <span class="math-container">$$ c^t 0 \leq c_0$$</span> and <span class="math-container">$$c^t x \geq c_0, \ \forall x \in P.$$</span> Hence, we get: <span class="math-container">$$c_0 \geq 0.$$</span></p> <p>Using the resolution theorem you mentioned, we must also have:</p> <p><span class="math-container">$$c^t x^i \geq c_0,\ \forall i,$$</span></p> <p>and</p> <p><span class="math-container">$$c^t w^j \geq 0, \forall j.$$</span></p> <p>Therefore, to find the separating hyperplane, we could solve the following LP: <span class="math-container">$$ \max_{c, c_0} c_0 \ s.t. $$</span> <span class="math-container">$$c^t x^i \geq c_0,\ \forall i,$$</span> <span class="math-container">$$c^t w^j \geq 0, \forall j,$$</span> <span class="math-container">$$c_0 \geq 0.$$</span> </p> <p>This will not work. Indeed, <span class="math-container">$c=0$</span> and <span class="math-container">$c_0 = 0$</span> is always a solution. Moreover, if <span class="math-container">$0$</span> is an extreme point of <span class="math-container">$P$</span> then <span class="math-container">$\max c_0 = 0$</span>, therefore having a maximum of <span class="math-container">$0$</span> cannot be used to determine the existence or not of the hyperplane. The problem is that we do not include the constraint <span class="math-container">$c \neq 0.$</span> So, here is a modification. Introduce the variable <span class="math-container">$z$</span> such that <span class="math-container">$z_k \geq c_k$</span> and <span class="math-container">$z_k \geq -c_k$</span>. 
Clearly, <span class="math-container">$z \geq 0.$</span> Now consider the following LP.</p> <p><span class="math-container">$$ \min_{c, c_0, z} \sum_{k=1}^n z_k \ s.t. $$</span> <span class="math-container">$$c^t x^i \geq c_0,\ \forall i,$$</span> <span class="math-container">$$c^t w^j \geq 0, \forall j,$$</span> <span class="math-container">$$z_k \geq c_k, \forall k,$$</span> <span class="math-container">$$z_k \geq -c_k, \forall k,$$</span> <span class="math-container">$$c_0 \geq 0.$$</span> </p> <p>If there exists an hyperplane with <span class="math-container">$(c_0,c), \ c \neq 0$</span> then clearly <span class="math-container">$\min_{c, c_0, z} \sum_{k=1}^n z_k &gt; 0.$</span></p> <p>On the other hand, if <span class="math-container">$\min_{c, c_0, z} \sum_{k=1}^n z_k = 0$</span>, this implies that <span class="math-container">$z = 0$</span>, hence <span class="math-container">$c=0.$</span></p>
2,485,261
<blockquote> <p>$\displaystyle \sum_{k=0}^n k {n \choose k} p^k (1-p)^{n-k}$ with $0&lt;p&lt;1$</p> </blockquote> <p>I know of one way to evaluate it (from statistics) but I was wondering if there are any other ways. </p> <p>This is the way I know:</p> <p>Let </p> <p>$$M(t)=\displaystyle \sum_{k=0}^n e^{kt} {n \choose k} p^k (1-p)^{n-k}$$</p> <p>Then $$M(t)=\displaystyle \sum_{k=0}^n {n \choose k} (pe^t)^k (1-p)^{n-k}=(pe^t+1-p)^n$$</p> <p>$$M'(t)=\displaystyle \sum_{k=0}^n ke^{kt} {n \choose k} p^k (1-p)^{n-k}=pe^tn(pe^t+1-p)^{n-1}$$</p> <p>$$M'(0)=\displaystyle \sum_{k=0}^n k {n \choose k} p^k (1-p)^{n-k}=np$$</p>
Clement C.
75,808
<p>Yes.</p> <p>$$\begin{align} \sum_{k=0}^n k \binom{n}{k} p^k (1-p)^{n-k} &amp;= \sum_{k=1}^n k \binom{n}{k} p^k (1-p)^{n-k} = \sum_{k=1}^n k\frac{n!}{k!(n-k)!} p^k (1-p)^{n-k}\\ &amp;=\sum_{k=1}^n \frac{n!}{(k-1)!(n-1-(k-1))!} p^k (1-p)^{n-k}\\ &amp;=np\sum_{k=1}^n \frac{(n-1)!}{(k-1)!(n-1-(k-1))!} p^{k-1} (1-p)^{n-1-(k-1)}\\ &amp;=np\sum_{\ell=0}^{n-1} \frac{(n-1)!}{\ell!(n-1-\ell)!} p^{\ell} (1-p)^{n-1-\ell}\\ &amp;=np\sum_{\ell=0}^{n-1} \binom{n-1}{\ell} p^{\ell} (1-p)^{n-1-\ell}\\ &amp;= np. \end{align}$$</p>
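A quick numeric check of the result (not part of the original answer) for several values of $n$ and $p$:

```python
from math import comb

# Verify sum_{k=0}^n k C(n,k) p^k (1-p)^{n-k} = n p numerically.
for n in (1, 5, 12):
    for p in (0.1, 0.37, 0.9):
        mean = sum(k * comb(n, k) * p ** k * (1 - p) ** (n - k)
                   for k in range(n + 1))
        assert abs(mean - n * p) < 1e-12
print("E[X] = np confirmed numerically")
```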
4,284,803
<p>I am solving a question and I can't get over this step: proving <span class="math-container">$$\sin \frac{1}{k} &gt; \frac{1}{k} - \frac{1}{k^2}$$</span> where <span class="math-container">$k$</span> is a positive integer.</p> <p>I tried using induction, but I failed. One of my friends managed to prove it using derivatives but I am searching for a solution which does not involve calculus or series. Proving this would help me solve a convergence problem.</p>
Piquancy
979,182
<p>We just have to study the function <span class="math-container">$$f(x)=\sin x-x+x^2,\quad x\in[0,1].$$</span> Its derivative is <span class="math-container">$$f'(x)=\cos x-1+2x.$$</span> Let <span class="math-container">$g(x)=f'(x)$</span>; then <span class="math-container">$$g'(x)=-\sin x+2.$$</span> When <span class="math-container">$x\in[0,1]$</span>, <span class="math-container">$g'(x)&gt;0$</span>, so <span class="math-container">$f'(x)\,(=g(x))$</span> is monotonically increasing.<br /> Hence <span class="math-container">$$f'(x)\geq f'(0)=0,$$</span> with equality if and only if <span class="math-container">$x=0$</span>.<br /> Then <span class="math-container">$f(x)$</span> is monotonically increasing, so <span class="math-container">$$f(x)\geq f(0)=0,$$</span> with equality if and only if <span class="math-container">$x=0$</span>.<br /> Setting <span class="math-container">$x=\frac{1}{k}&gt;0$</span>, we reach the conclusion: <span class="math-container">$$\sin \frac{1}{k}&gt;\frac{1}{k}-\frac{1}{k^2}.$$</span></p>
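A numeric spot-check of the inequality (not part of the original answer) over a large range of integers:

```python
import math

# Check sin(1/k) > 1/k - 1/k^2 for positive integers k.
# The gap is about 1/k^2 - 1/(6 k^3), comfortably above float noise.
for k in range(1, 10001):
    assert math.sin(1 / k) > 1 / k - 1 / (k * k)
print("inequality holds for k = 1, ..., 10000")
```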
1,772,650
<p><strong>Note: in relation to the answer of the duplicate question, I see that the second picture below refers to the triangulation when we consider <em>simplicial complexes</em>. I do not understand why the triangles are used as they are, however, so would like some help trying to understand this</strong></p> <p>I am having a lot of trouble understanding triangulations. I know that a triangulation involves decomposing a 2-manifold into triangular regions.</p> <p>A common example is the torus, which can be constructed from the square. I understand this representation:</p> <p><a href="https://i.stack.imgur.com/muE2n.png" rel="noreferrer"><img src="https://i.stack.imgur.com/muE2n.png" alt="Torus"></a></p> <p>Since the torus is homeomorphic to the space obtained by identifying those edges together.</p> <p>What I do not understand is the triangulation given:</p> <p><a href="https://i.stack.imgur.com/XVDtq.png" rel="noreferrer"><img src="https://i.stack.imgur.com/XVDtq.png" alt="Torus"></a></p> <p>Why is this triangulation given in all the books and resources etc?</p> <p>I do not understand what all the triangles mean. Why could we not just split the square into 2 triangles?</p> <p>Many thanks for your help on this one</p>
Eric Towers
123,905
<p>In a triangulation (specifically, a simplicial complex), the three vertices of a triangle are <em>distinct</em>. (Technically, the two 0-cells at the boundary of each 1-cell are distinct, the three 1-cells at the boundary of each 2-cell are distinct, etc. This leads to: the vertex set of a $k$-cell contains $k+1$ distinct vertices.) That is, if I tell you three vertices you can immediately tell me that either "there is no triangle with those three vertices" or "there is exactly one triangle with those three vertices, and it's that one". (More generally, given a list of $k$ vertices, you can tell me whether there is no $k-1$-cell with those vertices or there is exactly one such $k-1$-cell and it's that one.) The intention is to make the vertex set of a $k$-cell into a unique label for the cell.</p> <p>If you divide the square in half, there is only one vertex after the identification of the labelled edges. This fails "distinctness". Call the one vertex, $v$. Both triangles have the same vertex collection, "$v,v,v$", so giving a valid vertex set does not pick out a <em>single</em> triangle.</p>
3,419,550
<p>Let <span class="math-container">$D ⊂ \mathbb{R}$</span> </p> <p>Let <span class="math-container">$D_A$</span> be the set of all accumulation points of <span class="math-container">$D$</span>. The set <span class="math-container">$\bar{D} := D \cup D_A$</span> is called the closure of <span class="math-container">$D$</span>.</p> <p>Show that if <span class="math-container">$D$</span> is bounded, then <span class="math-container">$\bar{D}$</span> is bounded</p> <hr> <p>My professor discussed with me that I could prove this by contrapositive. </p> <p>Let <span class="math-container">$x$</span> not be in <span class="math-container">$\bar{D}$</span>, thus <span class="math-container">$x$</span> is not in D.</p> <hr> <p>I can most certainly proof that if x is not in <span class="math-container">$\bar{D}$</span> , then x is not in <span class="math-container">$D$</span>, however I'm somewhat lost in how this shows that <span class="math-container">$\bar{D}$</span> is bounded.</p>
José Carlos Santos
446,262
<p>If <span class="math-container">$D$</span> is bounded, then <span class="math-container">$D\subset[a,b]$</span> for some interval <span class="math-container">$[a,b]$</span>. If <span class="math-container">$x\in\mathbb R\setminus[a,b]$</span>, then <span class="math-container">$x\notin D$</span> and <span class="math-container">$x$</span> is not an accumulation point of <span class="math-container">$D$</span>. Therefore, <span class="math-container">$x\notin\overline D$</span>. This proves that <span class="math-container">$\overline D\subset[a,b]$</span> and that therefore <span class="math-container">$\overline D$</span> is bounded.</p>
1,451,301
<p>We're given a function $P_n(x)$ for $-1\leq x\leq1$ as follows:</p> <p>$$P_n(x) = \int \limits_0^\pi \dfrac{1}{\pi}(x+i\sqrt{1-x^2} \cos\theta)^n \, d\theta$$</p> <p>for $n=(0,1,2,3,\ldots)$, we need to prove that $|P_n(x)| \leq 1$.</p> <p>I tried the following:</p> <p>Let $z=x+i\sqrt{1-x^2}\cos\theta$ </p> <p>I somehow want to prove that $|z|=|x+i\sqrt{1-x^2}\cos\theta|\leq1$, as that would imply that $|z^n|\leq1$, as $|z^n|=|z|^n$.</p> <p>$$|z| = \sqrt{x^2+(1-x^2)\cos^2 \theta} = \sqrt{x^2 \sin^2\theta + \cos^2\theta}.$$</p> <p>Now, $x^2\leq1$ and $\sin^2\theta \leq1 \Longrightarrow x^2\sin^2\theta \leq1$.</p> <p>Also, $\cos^2\theta \leq1 \Longrightarrow \sqrt{x^2\sin^2\theta+\cos^2\theta} \leq \sqrt{2}$, </p> <p>But " $\leq1$" condition is required...</p> <p>That's the point where I am stuck, could anyone help me with this?</p>
Przemysław Scherwentke
72,361
<p>HINT: $(x^2+(1-x^2) \cos^2\theta) \leq (x^2+(1-x^2))=1$, because $\theta$ is real.</p>
1,021,599
<p>Let $X$ be a metric space and $q \in X$. I want to show that the distance function $d(q,p)$ is a uniformly continuous function of $p$. </p> <p>I know how to show that $d$ is continuous, but I am stuck on how to show UC. </p> <p>Given $\epsilon &gt;0$ let $\delta =?$. Then if $d(x,y) &lt;\delta$, then $|d(q,x)-d(q,y)|&lt;\epsilon$. </p> <p>I cannot figure out how to choose $\delta$. </p> <p>Please help :). Thank you. </p>
Vera
169,789
<p>$d(q,x)\leq d(q,y)+d(y,x)$ so that $d(q,x)-d(q,y)\leq d(y,x)=d(x,y)$</p> <p>By symmetry: $d(q,y)-d(q,x)\leq d(x,y)$</p> <p>This together allows the conclusion that $|d(q,x)-d(q,y)|\leq d(x,y)$</p>
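The reverse triangle inequality derived here can be spot-checked numerically for a concrete metric (a hypothetical example using the Euclidean metric on $\mathbb{R}^2$, not part of the original answer):

```python
import math
import random

# Check |d(q,x) - d(q,y)| <= d(x,y) for random points in the plane.
random.seed(0)
d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
q = (0.3, -1.2)
for _ in range(1000):
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    y = (random.uniform(-5, 5), random.uniform(-5, 5))
    assert abs(d(q, x) - d(q, y)) <= d(x, y) + 1e-12
print("reverse triangle inequality verified on random samples")
```

Since the Lipschitz constant is 1 independently of $q$, taking $\delta = \epsilon$ in the uniform-continuity definition works.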
2,008,263
<p>Solve $$(1+y^2\sin2x) \;dx - 2y\cos^2x \;dy = 0$$</p> <p>Well, first of all I've written $M = 1+y^2\sin2x$ , $N = 2y\cos^2x$.</p> <p>Then, I noticed that $M'_y$ <strong>does not</strong> equal to $N'_x$.</p> <p>I'm trying to find something to multiply the equation with, but my math skills sucks. So I'm going for $\frac{M'_y - N'_x}{N} = h(x)$ now I need to find $h(x)$ which I'm kinda struggling to find, would love your help.</p> <p><strong>Edit: I just noticed that $M'_y = N'_x$</strong></p>
lab bhattacharjee
33,337
<p>HINT:</p> <p>$$y^2\sin2x\ dx-2y\cos^2x\ dy=y^2d(-\cos^2x)+d(y^2)(-\cos^2x)=d(-\cos^2x\cdot y^2)$$</p>
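The hint shows the left side is the exact differential $d(-y^2\cos^2 x)$, so the equation is exact with potential $F(x,y)=x-y^2\cos^2 x$. A finite-difference check (not part of the original answer):

```python
import math

# With M = 1 + y^2 sin(2x) and N = -2 y cos^2(x), exactness requires
# M_y = N_x; both should equal 2 y sin(2x).
M = lambda x, y: 1 + y * y * math.sin(2 * x)
N = lambda x, y: -2 * y * math.cos(x) ** 2
h = 1e-6
for x, y in [(0.3, 1.1), (-0.7, 0.4), (1.9, -2.2)]:
    My = (M(x, y + h) - M(x, y - h)) / (2 * h)
    Nx = (N(x + h, y) - N(x - h, y)) / (2 * h)
    assert abs(My - Nx) < 1e-6
    assert abs(My - 2 * y * math.sin(2 * x)) < 1e-6
print("exactness confirmed; the general solution is x - y^2 cos^2(x) = C")
```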
954,933
<p>Let $\phi\in\ell^\infty$. For $p\in[1,\infty]$, define $M_\phi:\ell^p\to\ell^p$ by</p> <p>$$M_\phi(f)=\phi f.$$</p> <p>Show that $\Vert M_\phi\Vert=\Vert\phi\Vert_\infty$, and $M_\phi$ is compact if and only if $\phi\in c_0$, i.e. $\phi$ is a sequence that converges to $0$.</p> <p>I only have problem with the part "$\phi\in c_0$ $\Rightarrow$ $M_\phi$ compact.</p> <p>I tried to prove by contradiction, assume $M_\phi$ is not compact, then there is a bounded sequence $(f_n)_{n\in\mathbb{N}}$ in $\ell^p$ s.t. $(\phi f_n)_{n\in\mathbb{N}}$ has no convergent subsequence, then it also has no Cauchy subsequence. Then we can define</p> <p>$$t:=\inf_{m\neq n}\Vert \phi(f_m-f_n)\Vert_p&gt;0$$</p> <p>At this point I am stuck, I tried to find some $n\neq m$ s.t. $\Vert \phi(f_m-f_n)\Vert_p&lt;t$, then we get a contradiction. Can someone give some hints? Thanks!</p> <p>P.S. It is required to use the definition of compactness to prove this question.</p>
Yiorgos S. Smyrlis
57,021
<p>If $\boldsymbol\varphi\in\ell^\infty(\mathbb N)\smallsetminus c_0(\mathbb N)$, then there is an $\varepsilon&gt;0$, and a subsequence $\{\varphi(k_n)\}_{n\in\mathbb N}$, such that $$ \lvert \varphi(k_n)\rvert\ge \varepsilon. $$ Let now $$ \boldsymbol{u}_n=\boldsymbol{e}_{k_n}, $$ where $\boldsymbol{e}_{n}$ is the sequence is which is zero for all $k\in\mathbb N$, except $k=n$, where it takes the value 1. Clearly, $\boldsymbol{e}_{n}\in\ell^p(\mathbb N)$, for all $p$, and $\|\boldsymbol{e}_{n}\|_p=1$. Now $\{\boldsymbol\varphi\boldsymbol{u}_n\}_{n\in\mathbb N}$ does not have a convergent subsequence as $$ \|\boldsymbol\varphi\boldsymbol{u}_m-\boldsymbol\varphi\boldsymbol{u}_n\|_p\ge 2^{1/p}\varepsilon, $$ for all $m\ne n$.</p>
2,114,446
<p>But, just to get across the idea of a generating function, here is how a generatingfunctionologist might answer the question: the nth Fibonacci number, $F_{n}$, is the coefficient of $x^{n}$ in the expansion of the function $\frac{x}{(1 − x − x^2)}$ as a power series about the origin.</p> <p>I am reading a book about generating functions; however, I got a little rusty about power series. Could anyone give me a quick review about what the statement above is saying?</p> <p>Namely, </p> <p>$F_{n}$ is the coefficient of $x^{n}$ in the expansion of the function $\frac{x}{(1 − x − x^2)}$ as a power series about the origin</p>
lab bhattacharjee
33,337
<p>$$(1-ax)^{-1}-(1-bx)^{-1}=\dfrac{x(a-b)}{1-(a+b)x+abx^2}$$</p> <p>Now the $r$th term of $(1-ax)^{-1}$ is $$(ax)^r$$</p> <p>So, the coefficient of $x^n$ in the left-hand side will be $$a^n-b^n$$ and hence the coefficient of $x^n$ in $\dfrac{x}{1-(a+b)x+abx^2}$ is $\dfrac{a^n-b^n}{a-b}$.</p> <p>Here $a+b=1, ab=-1$, i.e. $a,b=\dfrac{1\pm\sqrt5}{2}$, which gives Binet's formula $F_n=\dfrac{a^n-b^n}{a-b}$.</p>
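One can confirm by formal power series division (a sketch, not part of the original answer) that the coefficients of $x/(1-x-x^2)$ are exactly the Fibonacci numbers:

```python
# Divide x by (1 - x - x^2) as formal power series and compare the
# resulting coefficients with the Fibonacci recurrence.
N = 20
denom = [1, -1, -1]                 # 1 - x - x^2
numer = [0, 1] + [0] * (N - 2)      # x
coeffs = []
rem = numer[:]
for n in range(N):
    c = rem[n]                       # next series coefficient
    coeffs.append(c)
    for j, d in enumerate(denom):    # subtract c * x^n * denom
        if n + j < N:
            rem[n + j] -= c * d

fib = [0, 1]
for _ in range(N - 2):
    fib.append(fib[-1] + fib[-2])

assert coeffs == fib
print(coeffs[:10])
```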
267,236
<blockquote> <p><strong>Possible Duplicate:</strong><br /> <a href="https://math.stackexchange.com/questions/30156/why-is-this-entangled-circle-not-a-retract-of-the-solid-torus">Why is this entangled circle not a retract of the solid torus?</a></p> </blockquote> <p>I am stuck with exercise 16 (c), pag.39 of Hatcher's <em>Algebraic Topology</em>: prove that there is no retraction from <span class="math-container">$S^1\times D^2$</span> onto the set <span class="math-container">$A$</span>, which is described by an image in the book, and which you can see here below.</p> <p><img src="https://i.stack.imgur.com/MhCFv.jpg" alt="enter image description here" /></p>
Community
-1
<p>This is indeed a duplicate. Just note that the map $S^1 \cong A \to S^1 \times D^2$ induces a map</p> <p>$\pi_1 S^1 \to \pi_1 (S^1 \times D^2)$</p> <p>which is the zero map, $0: \mathbb{Z} \to \mathbb{Z}$. This is because $A$ can be shrunk to a point in $S^1 \times D^2$. (You're allowed to homotope $A$ through itself, meaning you can `unlink' it from itself.)</p> <p>A retraction $S^1 \times D^2 \to A$, however, would induce an isomorphism $\pi_1 A \cong \pi_1 A$ via the composition</p> <p>$$A \to S^1 \times D^2 \to A$$</p> <p>which is impossible because the first map induces the zero map on $\pi_1$.</p> <ul> <li>I would leave this as a comment but I don't have enough reputation. Matt, it is not true that $\pi_1 A \cong \pi_1(\ast)$. You mean to say that the inclusion $A \to S^1 \times D^2$ induces the zero map, but this does not mean that $\pi_1 A$ is trivial.</li> </ul>
1,102,758
<p><strong>Problem</strong></p> <p>Given a pre-Hilbert space $\mathcal{H}$.</p> <p>Consider unbounded operators: $$S,T:\mathcal{H}\to\mathcal{H}$$</p> <p>Suppose they're formal adjoints: $$\langle S\varphi,\psi\rangle=\langle\varphi,T\psi\rangle$$</p> <p>Regard the completion $\hat{\mathcal{H}}$.</p> <p>Here they're partial adjoints: $$S\subseteq T^*\quad T\subseteq S^*$$ In particular, both are closable: $$\hat{S}:=\overline{S}\quad\hat{T}:=\overline{T}$$</p> <blockquote> <p>But they don't need to be adjoints, or? $$\hat{S}^*=\hat{T}\quad\hat{T}^*=\hat{S}$$ <em>(I highly doubt it but miss a counterexample.)</em></p> </blockquote> <p><strong>Application</strong></p> <p>Given the pre-Fock space $\mathcal{F}_0(\mathcal{h})$.</p> <p>The ladder operators are pre-defined by: $$a(\eta)\bigotimes_{i=1}^k\sigma_i:=\langle\eta,\sigma_k\rangle\bigotimes_{i=1}^{k-1}\sigma_i\quad a^*(\eta)\bigotimes_{i=1}^k\sigma_i:=\bigotimes_{i=1}^k\sigma_i\otimes\eta$$ and extended via closure: $$\overline{a}(\eta):=\overline{a(\eta)}\quad\overline{a}^*(\eta):=\overline{a^*(\eta)}$$ regarding the full Fock space $\mathcal{F}(\mathcal{h})$.</p> <p>They are not only formally: $$\langle a(\eta)\varphi,\psi\rangle=\langle\varphi,a^*(\eta)\psi,\rangle$$ but really adjoint to eachother: $$\overline{a}(\eta)^*=\overline{a}^*(\eta)\quad\overline{a}^*(\eta)=\overline{a}(\eta)^*$$ <em>(The usual proof relies on Nelson's theorem, afaik.)</em></p>
m_gnacik
182,603
<p>Let $S_0$, $T_0$ be densely defined linear operators (this is required for adjoints to exist) on a Hilbert space such that $$(1) \ \qquad S_0 \subseteq T_0^* \quad \&amp; \quad T_0 \subseteq S_0^*. $$ Hence, we can conclude that $S_0$, $T_0$ are closable ($S_0^*$, $T_0^*$ are closed) and since the closure is the smallest closed extension we obtain that $$(2) \ \qquad \overline{S_0} \subseteq T_0^* \quad \&amp; \quad \overline{T_0} \subseteq S_0^*. $$ Also, the fact that $S_0$ and $T_0$ are closable implies that $S_0^*$ and $T_0^*$ are densely defined, and in particular $\overline{S_0}={S_0}^{**}$ and so $\overline{T_0}={T_0}^{**}$.</p> <p>Now for simplicity denote $S:=\overline{S_0}$ and $T:= \overline{T_0}$. Note that $$(3) \ \qquad S \subseteq T^* \quad \&amp; \quad T\subseteq S^*. $$</p> <p>The converse (reversed inclusion) is not true; a counterexample was given by @T.A.E.</p>
1,746,748
<p>My calculus teacher gave us this problem in class:</p> <p>Which is easier to integrate?</p> <p>$$\int \sin^{100}x\cos x dx$$</p> <p>or</p> <p>$$\int \sin^{50}xdx$$</p> <p>By easier, I assume the teacher means which integral would take less work. I'm unsure of how to approach this problem because of the relatively large exponents. I would guess the second because it has smaller exponents but I'm not sure.</p>
zz20s
213,842
<p>Which would you prefer to do?</p> <p>Let $u=\sin x$ and $du=\cos x\,dx$, transforming the integral into $\displaystyle \int u^{100} du$.</p> <p>Or use the reduction formula: $\displaystyle\int \sin^n x dx=\frac{-\sin^{n-1}x \cos x}{n}+\frac{n-1}{n}\int \sin^{n-2}x dx$ for $n=50$? I found the formula <a href="http://www.vias.org/calculus/07_trigonometric_functions_05_03.html" rel="nofollow">here</a>. </p> <p>Of course, both can be done, but the reduction formula is incredibly tedious to use. The power rule of integration is much simpler.</p>
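As a sanity check (my own addition, not from the original answer), the substitution result $\int_0^t \sin^{100}x\cos x\,dx = \sin^{101}(t)/101$ can be confirmed numerically in Python by comparing Simpson's rule against the closed form:

```python
import math

def f(x):
    # integrand sin^100(x) cos(x)
    return math.sin(x) ** 100 * math.cos(x)

def simpson(g, a, b, n=4000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(g(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

t = 1.3
numeric = simpson(f, 0.0, t)
exact = math.sin(t) ** 101 / 101  # u^101/101 with u = sin x
print(abs(numeric - exact))       # tiny discrepancy, pure quadrature error
```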
2,127,494
<p>Given two $3$D vectors $\mathbf{u}$ and $\mathbf{v}$ their cross-product $\mathbf{u} \times \mathbf{v}$ can be defined by the property that, for any vector $\mathbf{x}$ one has $\langle \mathbf{x} ; \mathbf{u} \times \mathbf{v} \rangle = {\rm det}(\mathbf{x}, \mathbf{u},\mathbf{v})$. From this a number of properties of the cross product can be obtained quite easily. It is less obvious that, for instance $|\mathbf{u} \times \mathbf{v}|^2 = |\mathbf{u}|^2 |\mathbf{v}|^2 - \langle \mathbf{u} ; \mathbf{v} \rangle ^2$, from which the norm of the cross-product can be deduced.</p> <p>Is it possible to obtain these properties nicely (i.e. without dealing with coordinates), but with elementary linear algebra only (i.e. without the exterior algebra stuff, only properties of determinants and matrix / vector multiplication).</p> <p>Thanks in advance! </p>
mechanodroid
144,766
<p>I believe I've found an elegant proof.</p> <ol> <li><p>Assume that <span class="math-container">$ \mathbf{u}\times \mathbf{v} \ne 0$</span>. For <span class="math-container">$\mathbf{x} = \mathbf{u}\times \mathbf{v}$</span> we have <span class="math-container">\begin{align} \|\mathbf{u}\times \mathbf{v}\|^4 &amp;= \langle \mathbf{u}\times \mathbf{v},\mathbf{u}\times \mathbf{v}\rangle^2 \\ &amp;=\det(\mathbf{u}\times \mathbf{v},\mathbf{u},\mathbf{v})^2\\ &amp;= \det\left(\begin{bmatrix} \mathbf{u}\times \mathbf{v} \\ \mathbf{u} \\ \mathbf{v}\end{bmatrix}\begin{bmatrix} \mathbf{u}\times \mathbf{v} ,\mathbf{u} ,\mathbf{v}\end{bmatrix}\right)\\ &amp;= \begin{vmatrix} \|\mathbf{u}\times \mathbf{v}\|^2 &amp; \langle \mathbf{u}\times \mathbf{v}, \mathbf{u}\rangle &amp; \langle \mathbf{u}\times \mathbf{v},\mathbf{v}\rangle \\ \langle \mathbf{u},\mathbf{u}\times \mathbf{v}\rangle &amp; \langle \mathbf{u},\mathbf{u}\rangle &amp; \langle \mathbf{u},\mathbf{v}\rangle \\ \langle \mathbf{v},\mathbf{u}\times \mathbf{v}\rangle &amp; \langle \mathbf{v},\mathbf{u}\rangle &amp; \langle \mathbf{v},\mathbf{v}\rangle\end{vmatrix}\\ &amp;= \begin{vmatrix} \|\mathbf{u}\times \mathbf{v}\|^2 &amp; 0 &amp; 0 \\ 0 &amp; \|\mathbf{u}\|^2 &amp; \langle \mathbf{u},\mathbf{v}\rangle \\ 0 &amp; \langle \mathbf{u},\mathbf{v}\rangle &amp; \|\mathbf{v}\|^2\end{vmatrix}\\ &amp;= \|\mathbf{u}\times \mathbf{v}\|^2(\|\mathbf{u}\|^2\|\mathbf{v}\|^2 - \langle \mathbf{u},\mathbf{v}\rangle^2). \end{align}</span> Dividing by <span class="math-container">$\|\mathbf{u}\times \mathbf{v}\|^2$</span> gives the result.</p> </li> <li><p>Assume that <span class="math-container">$ \mathbf{u}\times \mathbf{v} = 0$</span>. Then the <span class="math-container">$\det(\mathbf{x},\mathbf{u},\mathbf{v}) = 0$</span> for all vectors <span class="math-container">$\mathbf{x}\in\Bbb{R}^3$</span> so <span class="math-container">$\{\mathbf{u},\mathbf{v}\}$</span> are linearly dependent. 
The equality condition in the Cauchy-Schwarz inequality gives <span class="math-container">$$\|\mathbf{u}\times \mathbf{v}\|^2 = 0 = \|\mathbf{u}\|^2\|\mathbf{v}\|^2 - \langle \mathbf{u},\mathbf{v}\rangle^2.$$</span></p> </li> </ol>
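The identity $\|\mathbf{u}\times \mathbf{v}\|^2 = \|\mathbf{u}\|^2\|\mathbf{v}\|^2 - \langle \mathbf{u},\mathbf{v}\rangle^2$ can also be spot-checked numerically; here is a small Python sketch (my addition) testing it on random vectors. It does use coordinates, which the question deliberately avoids, but that is fine for verification:

```python
import random

def cross(u, v):
    # cross product in coordinates
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(0)
ok = True
for _ in range(1000):
    u = [random.uniform(-5, 5) for _ in range(3)]
    v = [random.uniform(-5, 5) for _ in range(3)]
    w = cross(u, v)
    lhs = dot(w, w)                                # |u x v|^2
    rhs = dot(u, u) * dot(v, v) - dot(u, v) ** 2   # |u|^2 |v|^2 - <u,v>^2
    ok = ok and abs(lhs - rhs) < 1e-8
print(ok)
```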
398,388
<p>The classification of finite simple groups has been called one of the great intellectual achievements of humanity, but I don't even know one single application of it. Even worse, I know a lot of applications of simple <em>modules</em> over some ring/algebra <span class="math-container">$A$</span>, but I can barely know an application of them for finite simple groups. When studying modules, one has, for example,</p> <ol> <li>If <span class="math-container">$S$</span> and <span class="math-container">$T$</span> are distinct simple modules, then <span class="math-container">$\operatorname{Hom}(S,T) = 0$</span>, and one can enhance this using Jordan-Holder to prove that, if <span class="math-container">$M$</span> and <span class="math-container">$N$</span> are modules whose Jordan-Holder decomposition don't have common factors, then <span class="math-container">$\operatorname{Hom}(M,N)=0$</span>. We may use this, for example, to try to compute some cohomology, also;</li> <li>The simple modules form a basis of the <span class="math-container">$K_0$</span> group, and therefore if we're interested in, for example, the multiplicative structure of <span class="math-container">$K_0$</span> it's enough to compute the (tensor) product of simple modules;</li> <li>If the algebra <span class="math-container">$A$</span> is basic (i.e. every simple representation is <span class="math-container">$1$</span>-dimensional), which happens for path algebras, then simple modules have a group structure with respect to the tensor product (so they are an analogue for the Picard group).</li> </ol> <p>For finite simple groups, the only application I know is for the (non)-solubility of polynomials, and it's a quite particular example which uses only <span class="math-container">$S_n$</span> and <span class="math-container">$A_n$</span>. 
So I have two questions:</p> <ol> <li>What are some (concrete) applications of (finite simple groups + Jordan-Holder) for general finite groups?</li> <li>What are some (concrete) applications of the classifications of finite simple groups?</li> </ol>
arsmath
3,711
<p>There's an entire book on this subject, &quot;Applying the Classification of Finite Simple Groups: A User’s Guide&quot; by Stephen D. Smith, published through the AMS, though you can find a draft version <a href="http://homepages.math.uic.edu/%7Esmiths/book.pdf" rel="noreferrer">here</a>.</p> <p>The applications are not as simple as they are for modules, but many questions can be settled by invoking the classification of finite simple groups. For example, you can invoke the classification to in turn classify 2-transitive groups. The only known proof of the Schreier conjecture relies on the classification. The book has many more applications.</p>
333,807
<p><strong>Notations</strong>: For a scalar $a\in\mathbb{R}$, denote $$\mathrm{sgn}(a)=\left\{ \begin{array}{l l} 1 &amp; \mbox{if } a&gt;0\\ 0 &amp; \mbox{if } a=0\\ -1 &amp; \mbox{if } a&lt;0 \end{array}.\right.$$ For a vector $r\in\mathbb{R}^n$, $\mathrm{sgn}(r)$ is element-wise.</p> <p>Now we have a set of vectors $r_1, \dots, r_p\in\mathbb{R}^n$, $p&lt;n$. The corresponding sign vectors are $v_1,\dots,v_p\in\mathbb{R}^n$ satisfying $$\mathrm{sgn}(r_i)=v_i$$ which implies that the elements of $v_i$ can only be $1$, $-1$ or $0$.</p> <p><strong>Question</strong>: Given a nonzero vector $x\in\mathbb{R}^n$ satisfying $x\perp \mathrm{span}\{r_1,\dots,r_p\}$, can we prove $x\notin\mathrm{span}\{v_1,\dots,v_p\}$? If not, can you given any counterexamples?</p>
adam W
43,193
<p>The following uses $p=n$ and thus, strictly speaking, is incorrect. It may serve, though, as an example of why that restriction is necessary for the question.</p> <p>Using row vectors, look at $$\{r_1,\dots,r_p\} = \pmatrix{1 &amp; 7 &amp; 1 \\ -1 &amp; 1 &amp; 1 \\ -2 &amp; -6 &amp; 0}$$ $$ x = \pmatrix{3 &amp; -1 &amp; 4 }$$</p> <p>Then $$\{v_1,\dots,v_p\} = \pmatrix{1 &amp; 1 &amp; 1 \\ -1 &amp; 1 &amp; 1 \\ -1 &amp; -1 &amp; 0}$$</p> <p>The $r$ do not have full span and $x$ is perpendicular to them. After applying the sgn(), the $v$ set has full span, as may be seen by reducing to upper triangular form with the simple row operation of adding the first row to each of the other rows: $$\pmatrix{1 &amp; 1 &amp; 1\\ 0 &amp; 2 &amp; 2 \\ 0 &amp; 0 &amp; 1 \\}$$ Thus a counterexample is given.</p> <p>It may be more difficult to find a counterexample under the additional restriction that the $r_i$ be independent, but I believe one could be found in that case as well.</p>
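The counterexample can be verified mechanically; here is a short Python check (my addition): $x$ is orthogonal to every $r_i$, while the sign matrix has nonzero determinant, so its rows span $\mathbb{R}^3$ and in particular their span contains $x$.

```python
def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

r = [[1, 7, 1], [-1, 1, 1], [-2, -6, 0]]
x = [3, -1, 4]

sgn = lambda t: (t > 0) - (t < 0)
v = [[sgn(e) for e in row] for row in r]  # element-wise sign vectors

# x is perpendicular to span{r_1, r_2, r_3}
perp = all(dot(row, x) == 0 for row in r)

def det3(m):
    # determinant of a 3x3 matrix by cofactor expansion
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(perp, det3(v))  # True 2: the v-rows span R^3, so x lies in their span
```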
1,621,019
<blockquote> <p>Let $a,b,$ and $c$ be the lengths of the sides of a triangle, prove that $$a(b^2+c^2-a^2)+b(a^2+c^2-b^2)+c(a^2+b^2-c^2) \le 3abc.$$</p> </blockquote> <p>I can't really factor this into something nice. Also using AM-GM or Cauchy-Schwarz doesn't look like it will help. I am thinking we need to bound the left and right side with something, but I don't know how to.</p>
Nikunj
287,774
<p>Note: Using the cosine rule, this inequality can be written as</p> <p>$2abc(\cos A+\cos B+\cos C)\le3abc,$</p> <p>so we have to prove that $\cos A+\cos B+\cos C\le3/2$.</p> <p>Proof: $f(A,B) = \cos(A) + \cos(B) + \cos(\pi-A-B)$</p> <p>$f(A,B) = \cos(A) + \cos(B) - \cos(A+B)$</p> <p>within the region $R$:</p> <p>$$A+B&lt;\pi,$$</p> <p>$$A,B&gt;0.$$</p> <p>Since it's an open region, we know the maximum cannot occur anywhere along the boundary of $R$ and must occur at some critical point(s) in the interior.</p> <p>So we just have to find the points where the total derivative is $0$. Of course, $A$ and $B$ aren't dependent on each other, so we can just set the two partials equal to zero to find the critical point(s):</p> <p>$$\sin(A+B) - \sin(A) = 0$$</p> <p>$$\sin(A+B) - \sin(B) = 0$$</p> <p>So $\sin A = \sin B$, thus $A=B$.</p> <p>But then the first equation reads $\sin A = \sin 2A = 2\sin A\cos A$</p> <p>Thus $$\cos A = 1/2.$$</p> <p>Note: we can divide by $\sin A$ because $A\neq 0$. This also excludes $A=0$ as a solution, since $A=0$ is in the boundary of $R$, which is not a part of our valid region.</p> <p>$$A = \pi/3$$</p> <p>So the maximum is at $$A=B=C=\pi/3.$$</p> <p>Plugging that in, we know the maximum value of $f$ is</p> <p>$3\cos(\pi/3) = 3/2.$</p> <p>Thus for the angles $A,B,C$ of a triangle:</p> <p>$$\cos(A)+\cos(B)+\cos(C) = f(A,B) \le 3/2.$$</p>
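A crude numerical check of $\cos A+\cos B+\cos C\le 3/2$ (my addition, not part of the proof): scan a grid of valid angle pairs and record the largest value found.

```python
import math

best = -10.0
N = 400  # grid: A = pi*i/N, B = pi*j/N with A + B < pi
for i in range(1, N):
    for j in range(1, N - i):
        A = math.pi * i / N
        B = math.pi * j / N
        C = math.pi - A - B
        best = max(best, math.cos(A) + math.cos(B) + math.cos(C))
print(best)  # just below 1.5, attained near A = B = C = pi/3
```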
773,880
<p>What approach would be ideal in finding the integral $\int4^{-x}dx$?</p>
Felix Marin
85,343
<p>$$ \int 4^{-x}\,{\rm d}x = -\,{1 \over \ln(4)}\int\left[-\ln(4)\,4^{-x}\right]\,{\rm d}x =-\,{1 \over \ln(4)}\int{{\rm d}\,4^{-x} \over {\rm d}x}\,{\rm d}x =-\,{4^{-x} \over \ln(4)} + \mbox{a constant} $$</p>
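A quick numeric confirmation (my addition): the derivative of the candidate antiderivative $-4^{-x}/\ln 4$ matches $4^{-x}$, checked with a central difference in Python.

```python
import math

F = lambda x: -(4.0 ** -x) / math.log(4)  # candidate antiderivative
f = lambda x: 4.0 ** -x                   # integrand

x0, h = 0.7, 1e-6
central = (F(x0 + h) - F(x0 - h)) / (2 * h)  # numerical derivative of F
print(abs(central - f(x0)))                  # small: F' agrees with f
```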
202,040
<p>I'd like to get separate plots for the functions in a list, and I'm trying the following, which doesn't work. What is the correct way to do that?</p> <pre><code>Table[ContourPlot3D[f, {x, -2, 2}, {y, -2, 2}, {z, -2, 2}], {f, {x + y + z + x y z == 0, x + y + z^2 + x y z^2 == 0, x + y^2 + z + x y^2 z == 0}}] </code></pre>
NonDairyNeutrino
46,490
<p>Instead of <code>Table</code> you could use <code>Map</code></p> <pre><code>ContourPlot3D[#, {x, -2, 2}, {y, -2, 2}, {z, -2, 2}] &amp; /@ {x + y + z + x y z == 0, x + y + z^2 + x y z^2 == 0, x + y^2 + z + x y^2 z == 0} </code></pre>
1,342,570
<p>So, this was my initial proof:</p> <hr> <p>Assume $R$ is a ring, and $a,b\in R$</p> <p>Let $x_1$ and $x_2$ be solutions of $ax=b$</p> <p>Hence, $ax_1=b=ax_2 \Rightarrow ax_1-ax_2=0_R \Rightarrow a(x_1-x_2)=0_R$</p> <p>Thus, we have $x_1-x_2=0_R \Rightarrow x_1=x_2$, and only one solution exists.</p> <hr> <p>Only now did I realize that I can only assume $x_1-x_2=0_R$ from $a(x_1-x_2)=0_R$ if $R$ was an integral domain. I didn't know why they provided that $R$ had an identity or why $a$ is a unit.</p>
Robert Lewis
67,071
<p>If</p> <p>$ax_1 = ax_2 = b, \tag{1}$</p> <p>with $a \in R$ a unit, then since we have $c \in R$ with $ac = ca = 1_R$, </p> <p>$x_1 = 1_R x_1 = (ca)x_1 = c(ax_1)$ $= c(ax_2) = (ca)x_2 = 1_R x_2 = x_2, \tag{2}$</p> <p>so the solution is unique. We further note that</p> <p>$ax = b \tag{3}$</p> <p>yields</p> <p>$x = 1_R x = (ca)x = c(ax) = cb; \tag{4}$</p> <p>the unique solution to (3) is thus $cb$.</p>
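To see both halves of the picture concretely, here is a small Python illustration (my own addition, not from the answer) in the finite ring $\mathbb{Z}/10\mathbb{Z}$: when $a$ is a unit, $ax=b$ has exactly one solution, while for a non-unit $a$ there may be several, which is why the hypothesis is needed.

```python
n = 10  # work in the ring Z/10Z

def solutions(a, b):
    # all x in Z/10Z with a*x = b
    return [x for x in range(n) if (a * x) % n == b % n]

# 3 is a unit mod 10 (3 * 7 = 21 = 1 mod 10), so 3x = 4 has the
# single solution x = 7*4 mod 10 = 8
unit_case = solutions(3, 4)
# 2 is a zero divisor mod 10, and 2x = 4 has more than one solution
nonunit_case = solutions(2, 4)
print(unit_case, nonunit_case)  # [8] [2, 7]
```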
1,416,998
<p>In the definition of martingales, one finds in Stroock and Varadhan (Multidimensional Diffusion processes - page 20) the strange request that it be a right-continuous process.</p> <p><a href="https://i.stack.imgur.com/0Nni7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0Nni7.png" alt="enter image description here"></a></p> <p>However, no such requirement is made in the wiki <a href="https://en.wikipedia.org/wiki/Martingale_%28probability_theory%29" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Martingale_%28probability_theory%29</a></p> <p><a href="https://i.stack.imgur.com/D1B7Y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D1B7Y.png" alt="enter image description here"></a></p> <p>nor in Kallenberg (Foundations of modern probability - page 96) </p> <p><a href="https://i.stack.imgur.com/KeK1H.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KeK1H.png" alt="enter image description here"></a></p> <p>Isn't this strange on the part of Stroock and Varadhan? Has the notion of martingales evolved? </p>
Inuyaki
244,122
<p>If $(a, b) = d$, then we have $$a+b = d\cdot\left(\frac{a}{d} + \frac{b}{d}\right)\text{ and }a-b = d\cdot\left(\frac{a}{d} - \frac{b}{d}\right)$$ and obviously then $$((a+b), (a-b)) = \left(d\cdot\left(\frac{a}{d} + \frac{b}{d}\right), d\cdot\left(\frac{a}{d} - \frac{b}{d}\right)\right) = d\cdot\left(\left(\frac{a}{d} + \frac{b}{d}\right), \left(\frac{a}{d} - \frac{b}{d}\right)\right),$$ which even more obviously is divisible by $d$.</p>
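A brute-force Python check of the divisibility claim (my addition): $(a+b,\,a-b)$ is always divisible by $(a,b)$ over a small range of pairs.

```python
from math import gcd

# check that gcd(a+b, a-b) is a multiple of gcd(a, b)
ok = all(gcd(a + b, abs(a - b)) % gcd(a, b) == 0
         for a in range(1, 60) for b in range(1, 60))
print(ok)
```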
3,250,061
<blockquote> <p>Prove that if <span class="math-container">$p\equiv 5\pmod{8}$</span>, <span class="math-container">$p&gt;5$</span> then <span class="math-container">$\zeta_p$</span> not constructible </p> </blockquote> <p>How to do this? There is a theorem in my book that says that the regular <span class="math-container">$n$</span>-gon is constructible iff <span class="math-container">$n=2^k\cdot n_0$</span> where <span class="math-container">$n_0$</span> is the product of distinct Fermat primes, but I don't know how to apply it here since we are talking about an infinitude of primes.</p>
lhf
589
<p><span class="math-container">$\zeta_p$</span> has degree <span class="math-container">$\phi(p)=p-1 \equiv 4 \bmod 8$</span>.</p> <p>Now, <span class="math-container">$p-1 &gt; 4$</span> because <span class="math-container">$p&gt;5$</span>. If <span class="math-container">$p-1$</span> were a power of <span class="math-container">$2$</span>, then <span class="math-container">$p-1 \equiv 0 \bmod 8$</span>.</p> <p>Therefore, <span class="math-container">$p-1$</span> cannot be a power of <span class="math-container">$2$</span> and so <span class="math-container">$\zeta_p$</span> cannot be constructible.</p>
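The arithmetic fact behind the argument (a power of $2$ greater than $4$ is $\equiv 0 \bmod 8$, so for $p>5$ the degree $p-1 \equiv 4 \bmod 8$ is never a power of $2$) can be spot-checked in Python; this is my own addition:

```python
def is_prime(m):
    # trial division, fine for small m
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

primes = [p for p in range(6, 2000) if is_prime(p) and p % 8 == 5]
is_pow2 = lambda m: m > 0 and m & (m - 1) == 0

print(primes[:5])                               # [13, 29, 37, 53, 61]
print(all(not is_pow2(p - 1) for p in primes))  # True: p-1 never a power of 2
```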
1,409,545
<p>I'm not sure if there is an actual solution to this problem or not, but thought I would give it a shot here to see if anyone has any ideas. So here goes:</p> <p>I basically have three vertices of a rigid triangle with known 3D coordinates. The vertices are projected onto a 2D plane (by projection, I mean that each vertex would basically have a fixed line drawn from it to the 2D plane, and that "line" would also stay rigid to the triangle so that the lines would move along with the triangle if it is transformed), in which I also know the 2D coordinates. A transformation matrix is applied to the original three points (can be a combination of rotation and translation) and I now know the new 2D projection coordinates.</p> <p>Is it possible to obtain either the unknown transformation matrix or the new coordinates? Any ideas are much appreciated. Thanks!</p>
haqnatural
247,767
<p>Your work is correct $$\frac { d\left( 7+5^{ x^{ 2 }+2x-1 } \right) }{ dx } =\frac { d\left( 7 \right) }{ dx } +\frac { d\left( 5^{ x^{ 2 }+2x-1 } \right) }{ dx } =0+\left( 5^{ x^{ 2 }+2x-1 } \right) \ln { 5 } \frac { d\left( x^{ 2 }+2x-1 \right) }{ dx } =\\ =\left( 5^{ x^{ 2 }+2x-1 } \right) \ln { 5 } \left( 2x+2 \right) \\ $$</p>
2,091,766
<p>Suppose $h:\mathbb{R} \longrightarrow \mathbb{R}$ is differentiable everywhere and $h'$ is continuous on $[0,1]$, $h(0) = -2$ and $h(1) = 1$. Show that</p> <p>$|h(x)|\leq \max\{|h'(t)| : t\in[0,1]\}$ for all $x\in[0,1]$.</p> <p>I attempted the problem the following way: Since $h(x)$ is differentiable everywhere, it is also continuous everywhere. $h(0) = -2$ and $h(1) = 1$ imply that $h(x)$ crosses the $x$-axis at some point (at least once). Denote that point by $c$ to get $h(c) = 0$ for some $c\in[0,1]$.</p> <p>Continuity of $h'(x)$ means that $\lim_{x\to a} h'(x) = h'(a)$, but then I am stuck and I don't see how what I have done so far can help me obtain the desired inequality.</p> <p>Thank you in advance!</p>
quid
85,306
<p>First, your definition of $S$ is not quite precise. Let us sidestep this and take for the final set the union of all the sets you write down. That is, </p> <p>$$(d=2) \to S_2= \{a, \frac{a+b}{2},b\}$$ $$(d=3) \to S_3= \{a, \frac{2a+b}{3}, \frac{a+2b}{3}, b\}$$ $$(d=4)\to S_4= \{a, \frac{3a+b}{4}, \frac{a+b}{2}, \frac{a+3b}{4}, b\}$$ $$(d=5)\to S_5= \{a, \frac{4a+b}{5}, \frac{3a+2b}{5}, \frac{2a+3b}{5}, \frac{a+4b}{5}, b\}$$ </p> <p>$$\dots$$</p> <p>And then $S_{\infty} = \bigcup_{d \ge 1} S_d$.</p> <p>This set $S_\infty$ is then equal to $\{a+ q (b-a)\colon q \in \mathbb{Q}, \, 0\le q \le 1\}$. </p> <p>In particular, if you choose $a=0$ and $b=1$ then your set is the set of all <strong>rational</strong> numbers from $0$ to $1$. And generally for <strong>rational</strong> $a,b$ it is the set of all <strong>rational</strong> numbers from $a$ to $b$. </p> <p>But this is not what one usually means by an interval; your set for $a=0$ and $b=1$ does not contain $\sqrt{2}-1$ for example.</p> <p>You could say though that for <strong>rational</strong> $a,b$ the set is the (closed) <strong>rational</strong> interval from $a$ to $b$. This is not very common terminology, but not completely unheard of either. </p> <p>To answer the comment: yes, the set $S_{\infty}$ is dense in $[a,b]$. That is, every number in $[a,b]$ can be approximated to any precision by elements of $S_{\infty}$. Put differently, $\overline{S_{\infty}}$, the topological closure, is $[a,b]$. </p> <p>Yet it is even true (and this is something slightly different) that the set of all elements you can write as a limit of a sequence $a_d$ with $a_d \in S_d$ for each $d$ is the interval $[a,b]$. </p> <p>If you meant this by taking the limit then it is indeed the interval. </p>
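The density claim is easy to test numerically; a small Python sketch (my addition) with $a=0$, $b=1$: the grid point $k/d$ nearest to the irrational $\sqrt2 - 1$ gets within $1/(2d)$ of it, for ever finer $d$.

```python
import math

target = math.sqrt(2) - 1  # an irrational point of [0, 1], so not in S_infinity
ds = [10, 100, 1000, 10000]
# distance from target to the nearest grid point k/d in S_d
errs = [abs(round(target * d) / d - target) for d in ds]
print(errs)  # shrinking errors, each at most 1/(2d)
```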
1,666,642
<p><a href="https://mathematica.stackexchange.com/questions/107859/puzzle-with-mathematica">This puzzle</a> was found from the <em>Hot Network Questions</em> on the right. A repost of a question in a way (from Mathematica SE), but I was wondering if the following puzzle could be done:</p> <p><a href="https://i.stack.imgur.com/VOj5a.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VOj5a.jpg" alt="the puzzle" /></a></p> <p>A quick translation:</p> <blockquote> <p>Can anyone solve this? It's so hard!</p> <p>Fill in the numbers <span class="math-container">$1–9$</span> in the boxes, the numbers cannot repeat.</p> </blockquote> <p>It has already been solved in the given link, the answers being (hidden for those who wish to try and solve it):</p> <blockquote class="spoiler"> <p> <span class="math-container">$17\times4=68+25=93$</span></p> </blockquote> <p>The answer to the Mathematica question stated there's a way to solve it without brute-forcing, but I didn't understand it too well. How would one—if possible—solve this without using too complex softwares?</p>
joriki
6,622
<p>You can reduce the effort to a level that can be handled without a computer.</p> <p>Let's write the calculation as</p> <p>\begin{array}{cc} &amp;A&amp;B\\ \times&amp;&amp;C\\\hline &amp;D&amp;E\\ +&amp;F&amp;G\\\hline &amp;H&amp;I \end{array}</p> <p>There are strong restrictions on $A$ and $C$, since their product must be single-digit. We can't have $C=1$, since otherwise $DE=AB$. So either $A=1$, or $(A,C)$ must be one of $(2,3)$, $(2,4)$, $(3,2)$ or $(4,2)$. We can exclude $(2,4)$ and $(4,2)$: This implies $D=8$, $F=1$ and $H=9$, but then adding $E$ and $G$ mustn't result in a carry, which is impossible, since $I$ can't be $8$ or $9$ and $E+G$ must be at least $3+5$.</p> <p>So unless $A=1$, $(A,C)$ must be either $(2,3)$ or $(3,2)$. Also $B\notin\{1,5\}$. That leaves $10$ possible triples for $(A,B,C)$, which each determine values for $D$ and $E$, and it shouldn't be too much effort to go through those $10$ cases, find the $4$ remaining options for $F$, $G$, $H$ and $I$ and show that they don't work out.</p> <p>That leaves us with $A=1$. In that case the options for $(C,B)$ can be enumerated as $(2,7)$, $(2,8)$, $(2,9)$, $(3,4)$, $(3,6)$, $(3,8)$, $(3,9)$, $(4,3)$, $(4,7)$, $(4,8)$, $(4,9)$, $(6,3)$, $(6,7)$, $(6,9)$, $(7,2)$, $(7,4)$, $(7,6)$, $(7,8)$, $(7,9)$, $(8,2)$, $(8,3)$, $(8,4)$, $(8,7)$, $(8,9)$. That's $24$ more cases, for a total of $34$ cases to be checked.</p>
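For comparison, the case analysis can be confirmed by a brute force over all $9!$ digit assignments; here is a short Python sketch (my addition) using the column layout above:

```python
from itertools import permutations

# digits A..I as in the column layout: AB * C = DE and DE + FG = HI
found = []
for A, B, C, D, E, F, G, H, I in permutations(range(1, 10)):
    if (10 * A + B) * C == 10 * D + E and (10 * D + E) + (10 * F + G) == 10 * H + I:
        found.append((10 * A + B, C, 10 * F + G, 10 * H + I))
print(found)  # [(17, 4, 25, 93)]: 17 * 4 = 68 and 68 + 25 = 93
```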
1,666,642
<p><a href="https://mathematica.stackexchange.com/questions/107859/puzzle-with-mathematica">This puzzle</a> was found from the <em>Hot Network Questions</em> on the right. A repost of a question in a way (from Mathematica SE), but I was wondering if the following puzzle could be done:</p> <p><a href="https://i.stack.imgur.com/VOj5a.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VOj5a.jpg" alt="the puzzle" /></a></p> <p>A quick translation:</p> <blockquote> <p>Can anyone solve this? It's so hard!</p> <p>Fill in the numbers <span class="math-container">$1–9$</span> in the boxes, the numbers cannot repeat.</p> </blockquote> <p>It has already been solved in the given link, the answers being (hidden for those who wish to try and solve it):</p> <blockquote class="spoiler"> <p> <span class="math-container">$17\times4=68+25=93$</span></p> </blockquote> <p>The answer to the Mathematica question stated there's a way to solve it without brute-forcing, but I didn't understand it too well. How would one—if possible—solve this without using too complex softwares?</p>
Ravi Yadav
773,069
<p><span class="math-container">$17\times 4 = 68$</span> and <span class="math-container">$68 + 25 = 93$</span> is the right answer!</p> <blockquote> <p><span class="math-container">$$17\times 4 = 68 + 25 = 93$$</span> </p> </blockquote> <p>There may be more combinations too!</p>
99,378
<p>The following equation in $\mathbb{C}$:</p> <p>$4z^2+8|z|^2-3=0$</p> <p>is not a polynomial equation in $z$ alone (it involves $|z|$) and has 4 solutions: $\pm\frac{1}{2}$ and $\pm i\frac{\sqrt{3}}{2}$. The Solve function in Mathematica only returns the 2 real values:</p> <pre><code>Solve[4 z^2 + 8 Abs[z]^2 - 3 == 0, Complexes] (* {{z -&gt; -(1/2)}, {z -&gt; 1/2}} *) </code></pre> <p>What am I missing?</p>
Georges Perros
35,555
<p>Actually, Reduce did it :</p> <pre><code>Reduce[4 z^2 + 8 Abs[z]^2 - 3 == 0,z, Complexes] </code></pre> <blockquote> <p>z == -(1/2) || z == 1/2 || z == -((I Sqrt[3])/2) || z == (I Sqrt[3])/2</p> </blockquote> <p>Or using the option Method-> Reduce in Solve :</p> <pre><code>Solve[ 4 z^2 + 8 Abs[z]^2 - 3 == 0, z, Complexes, Method -&gt; Reduce] </code></pre> <blockquote> <p>{{z -> -(1/2)}, {z -> 1/2}, {z -> -((I Sqrt[3])/2)}, {z -> ( I Sqrt[3])/2}}</p> </blockquote> <p>Or using an option introduced in version 8 :</p> <pre><code>Solve[ 4 z^2 + 8 Abs[z]^2 - 3 == 0, z, Complexes, MaxExtraConditions -&gt; All] </code></pre> <blockquote> <p>{{z -> -(1/2)}, {z -> 1/2}, {z -> -((I Sqrt[3])/2)}, {z -> ( I Sqrt[3])/2}}</p> </blockquote> <p>This way one gets replacement rules in the usual Solve way instead of a boolean expression, in case the former is more useful.</p> <p>I discovered that the variable name can be omitted both in Solve and Reduce. But it cannot be considered good practice. The mention of the domain (complexes) is also superfluous.</p>
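As an independent check outside Mathematica (my own addition), the four roots can be substituted back into $4z^2+8|z|^2-3=0$ in Python:

```python
roots = [0.5, -0.5, 1j * 3 ** 0.5 / 2, -1j * 3 ** 0.5 / 2]
# residual |4 z^2 + 8 |z|^2 - 3| should vanish at each root
residuals = [abs(4 * z ** 2 + 8 * abs(z) ** 2 - 3) for z in roots]
print(residuals)  # all numerically zero
```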
103,397
<p>Is there functionality in <em>Mathematica</em> to expand a function into a series with Chebyshev polynomials? </p> <p>The <code>Series</code> function only approximates with Taylor series.</p>
Jason B.
9,490
<p>You can just take <a href="https://mathematica.stackexchange.com/users/9362/bob-hanlon">Bob Hanlon</a>'s <a href="http://forums.wolfram.com/mathgroup/archive/2006/Aug/msg00165.html" rel="noreferrer">answer</a> from 2006 directly, and modify the plot just a bit to update it.</p> <pre><code>ChebyshevApprox[n_Integer?Positive, f_Function, x_] := Module[{c, xk}, xk = Pi (Range[n] - 1/2)/n; c[j_] = 2*Total[Cos[j*xk]*(f /@ Cos[xk])]/n; Total[Table[c[k]*ChebyshevT[k, x], {k, 0, n - 1}]] - c[0]/2]; f = 3*#^2*Exp[-2*#]*Sin[2 #*Pi] &amp;; ChebyshevApprox[3, f, x] // Simplify ((-(3/4))*((-E^(2*Sqrt[3]))*(Sqrt[3] - 2*x) - 2*x - Sqrt[3])*x* Sin[Sqrt[3]*Pi])/E^Sqrt[3] GraphicsGrid[ Partition[ Table[Plot[{f[x], ChebyshevApprox[n, f, x]}, {x, -1, 1}, Frame -&gt; True, Axes -&gt; False, PlotStyle -&gt; {Blue, Red}, PlotRange -&gt; {-2, 10}, Epilog -&gt; Text["n = " &lt;&gt; ToString[n], {0.25, 5}]], {n, 9}], 3], ImageSize -&gt; 500] </code></pre> <p><a href="https://i.stack.imgur.com/1yGlm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/1yGlm.png" alt="enter image description here"></a></p>
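For readers without Mathematica, the same construction used in <code>ChebyshevApprox</code> (coefficients from samples at the Chebyshev nodes, then evaluation by the Clenshaw recurrence) can be sketched in plain Python; here it approximates $e^x$ on $[-1,1]$ (my own example, not from the original answer):

```python
import math

def cheb_coeffs(f, n):
    # Chebyshev coefficients c_0..c_{n-1} from samples of f at the
    # Chebyshev nodes cos(pi*(k - 1/2)/n), as in ChebyshevApprox above
    xk = [math.pi * (k - 0.5) / n for k in range(1, n + 1)]
    c = [2.0 / n * sum(math.cos(j * t) * f(math.cos(t)) for t in xk)
         for j in range(n)]
    c[0] /= 2.0  # the j = 0 term enters the series with weight 1/2
    return c

def cheb_eval(c, x):
    # evaluate sum_j c_j T_j(x) with the Clenshaw recurrence
    b1 = b2 = 0.0
    for cj in reversed(c[1:]):
        b1, b2 = 2 * x * b1 - b2 + cj, b1
    return x * b1 - b2 + c[0]

c = cheb_coeffs(math.exp, 12)
err = max(abs(cheb_eval(c, k / 100) - math.exp(k / 100))
          for k in range(-100, 101))
print(err)  # very small: 12 terms already approximate exp on [-1,1] well
```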
103,397
<p>Is there functionality in <em>Mathematica</em> to expand a function into a series with Chebyshev polynomials? </p> <p>The <code>Series</code> function only approximates with Taylor series.</p>
Michael E2
4,999
<p>Here's a way to leverage the Clenshaw-Curtis rule of <code>NIntegrate</code> and Anton Antonov's answer, <a href="https://mathematica.stackexchange.com/questions/26401/determining-which-rule-nintegrate-selects-automatically/96663#96663">Determining which rule NIntegrate selects automatically</a>, to construct a piecewise Chebyshev series for a function. It also turns out that <code>InterpolatingFunction</code> implements a Chebyshev series approximation as one of its interpolating units (undocumented). With <code>IntegrationMonitor</code>, you can save the sampling on the subintervals and use <code>FourierDCT[]</code> to convert the function values to Chebyshev coefficients. The error estimate for the integral ($E \approx |I_{n/2}-I_{n}|$) is not exactly the same as an approximation norm, but most numerical procedures have pitfalls.</p> <pre><code>fn = 3*#^2*Exp[-2*#]*Sin[2 #*Pi] &amp;[x]; {ifn, {{series}}} = Reap[ chebApprox[fn, {x, 0, 2}], (* see below for code *) "ChebyshevSeries"]; // AbsoluteTiming (* {0.003718, Null} *) Plot[{fn, ifn[x]}, {x, 0, 2}, PlotStyle -&gt; {AbsoluteThickness[5], Automatic}], LogPlot[(fn - ifn[x])/fn // Abs, {x, 0, 2}] </code></pre> <p><img src="https://i.stack.imgur.com/723Nt.png" alt="Mathematica graphics"></p> <p>The reaped <code>series</code> are in a list of the form <code>{{{x1, x2}, {coefficients}},...}</code>.</p> <pre><code>Short[series, 12] </code></pre> <p><img src="https://i.stack.imgur.com/Z5GUb.png" alt="Mathematica graphics"></p> <p>In this example there are four series over the intervals</p> <pre><code>series[[All, 1]] (* {{0, 0.5}, {0.5, 1.}, {1., 1.5}, {1.5, 2}} *) </code></pre> <p><strong>Code dump</strong></p> <pre><code>ClearAll[chebInterpolation]; (* Constructs a piecewise InterpolatingFunction, * whose interpolating units are Chebyshev series *) (* data0 = {{x0,x1},c1},..} *) chebInterpolation[data0 : {{{_, _}, _List} ..}] := Module[{data = Sort@data0, domain1, coeffs1, domain, grid, ngrid, coeffs, order, y0, 
yp0}, domain1 = data[[1, 1]]; coeffs1 = data[[1, 2]]; domain = List @@ Interval @@ data[[All, 1]]; y0 = chebFunc[coeffs1, domain1, First@domain1]; yp0 = chebFunc[dCheb[coeffs1, domain1], domain1, First@domain1]; grid = Union @@ data[[All, 1]]; ngrid = Length@grid; coeffs = data[[All, 2]]; order = Length[coeffs[[1]]] - 1; InterpolatingFunction[ domain, {5, 1, order, {ngrid}, {4; order}, 0, 0, 0, 0, Automatic, {}, {}, False}, {grid}, {{y0, yp0}} ~Join~ coeffs, {{{{1}}~Join~Partition[Range@ngrid, 2, 1] ~Join~ {{ngrid - 1, ngrid}}, {Automatic } ~Join~ ConstantArray[ChebyshevT, ngrid]}}] /; Length[domain] == 1 &amp;&amp; ArrayQ@coeffs ]; Clear[chebApprox]; (* Uses NIntegrate's Clenshaw-Curtis Rule * to construct Chebyshev series approximations to a function * over the subintervals created by NIntegrate *) Options[chebApprox] = {"Points" -&gt; 17} ~Join~ Options[NIntegrate]; chebApprox[f_, {x_, a_, b_}, opts : OptionsPattern[]] := Module[{t, samp, sampling}, With[{pg = OptionValue[PrecisionGoal] /. Automatic -&gt; 14, ag = OptionValue[AccuracyGoal] /. 
Automatic -&gt; 15}, t = Reap[ NIntegrate[f, {x, a, b}, Method -&gt; {"ClenshawCurtisRule", "Points" -&gt; OptionValue["Points"]}, PrecisionGoal -&gt; pg, AccuracyGoal -&gt; ag, IntegrationMonitor :&gt; (Sow[ Map[{First[#1@"Boundaries"], #1@"GetValues"} &amp;, #1], samp] &amp;), Evaluate@FilterRules[{opts}, Options[NIntegrate]]], samp]; sampling = With[{steps = t[[2, 1]]}, Flatten[ Table[If[MemberQ[steps[[n + 1]], {{s[[1, 1]], _}, __}], Nothing, s], {n, Length@steps - 1}, {s, steps[[n]]}], 1] ~Join~ DeleteCases[Last@steps, {{-Infinity, Infinity}, __}] ]; sampling = Sort@MapAt[chebSeries@*Reverse, sampling, {All, 2}]; Sow[sampling, "ChebyshevSeries"]; chebInterpolation[sampling] ]]; chebSeries[y_] := Module[{cc}, cc = Sqrt[2/(Length@y - 1)] FourierDCT[y, 1]; (* get coeffs from values *) cc[[{1, -1}]] /= 2; (* adjust first &amp; last coeffs *) cc ] (* Differentiate a Chebyshev series *) (* Recurrence: $2 r c_r = c'_{r-1} - c'_{r+1}$ *) Clear[dCheb]; dCheb::usage = "dCheb[c, {a,b}] differentiates the Chebyshev series c scaled over \ the interval {a,b}"; dCheb[c_] := dCheb[c, {-1, 1}]; dCheb[c_, {a_, b_}] := Module[{c1 = 0, c2 = 0, c3}, 2/(b - a) MapAt[#/2 &amp;, Reverse@ Table[ c3 = c2; c2 = c1; c1 = 2 (n + 1)*c[[n + 2]] + c3, {n, Length[c] - 2, 0, -1}], 1] ]; </code></pre> <p>Notes and references:</p> <ul> <li><p><a href="https://mathematica.stackexchange.com/questions/34228/interpolating-data-with-a-step/98349#98349">Interpolating data with a step</a></p></li> <li><p><a href="https://mathematica.stackexchange.com/questions/28337/whats-inside-interpolatingfunction1-4/28341#28341">What&#39;s inside InterpolatingFunction[{{1., 4.}}, &lt;&gt;]?</a></p></li> <li><p><code>adaptiveChebSeries</code> of <a href="https://mathematica.stackexchange.com/questions/86860/find-all-roots-of-a-function-with-parabolic-cylinder-functions-in-a-range-of-the/113950#113950">Find all roots of a function with parabolic cylinder functions in a range of the variable</a> presents another 
approach.</p></li>
<li><p>The recurrence for the differentiation of Chebyshev series is probably well-known, but I got it from <a href="http://dx.doi.org/10.1093/comjnl/6.1.88" rel="nofollow noreferrer">Clenshaw and Norton (1963)</a>.</p></li>
<li><p>This approach was suggested by a comment I saw in <a href="http://dx.doi.org/10.1137/110838297" rel="nofollow noreferrer">Boyd (2013)</a> that an interval could be split in two "whenever the Clenshaw–Curtis strategy calls for $N$ larger than some user-specified limit." The <code>"ClenshawCurtisRule"</code> of <code>NIntegrate</code> does not adapt the order $N$ (which equals two times the value of <code>"Points"</code> in <code>NIntegrate</code>), but it does split the intervals.</p></li>
</ul>

<p>One can get a single Chebyshev series by setting <code>MaxRecursion -&gt; 0</code>.</p>

<pre><code>{{domain}, values} = Reap[
     NIntegrate[fn, {x, 0, 2},
      PrecisionGoal -&gt; 14, AccuracyGoal -&gt; 15,
      Method -&gt; {"ClenshawCurtisRule",
        "Points" -&gt; 1 + 2^5}, (* adjust "Points" to achieve desired accuracy *)
      MaxRecursion -&gt; 0, WorkingPrecision -&gt; 40,
      IntegrationMonitor :&gt; (Sow[
          Map[{#1@"Boundaries", #1@"GetValues"} &amp;, #1]] &amp;)]
     ][[2, 1, 1, 1]];
cs = Module[{n = 1, max, sum = 0, ser, len},
   ser = chebSeries[Reverse@values];
   max = Max[ser];
   len = LengthWhile[Reverse[ser], (sum += Abs@#) &lt; 10^-22*max &amp;];
   Drop[N@ser, -len]
   ];
approx[x_?NumericQ] :=
  cs.Table[ChebyshevT[n - 1, Rescale[x, domain, {-1, 1}]], {n, Length@cs}]

Plot[{fn, approx[x]}, {x, 0, 2},
  PlotStyle -&gt; {AbsoluteThickness[5], Automatic}],
LogPlot[(fn - approx[x])/fn // Abs, {x, 0, 2}]
</code></pre> <p><img src="https://i.stack.imgur.com/mr2Vm.png" alt="Mathematica graphics"></p>
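<p>As a cross-check of the coefficient recurrence used by <code>dCheb</code>, here is a minimal Python transcription (an illustration, not part of the original answer): it builds the derivative coefficients from the top down via $c'_{r-1} = c'_{r+1} + 2 r c_r$, halves the zeroth coefficient, and applies the chain-rule factor $2/(b-a)$ for a series scaled to $[a,b]$.</p>

```python
def dcheb(c, a=-1.0, b=1.0):
    """Differentiate a Chebyshev-T series c (lowest degree first) on [a, b]."""
    n = len(c)
    if n <= 1:
        return [0.0]
    cp = [0.0] * n
    for r in range(n - 1, 0, -1):          # c'_{r-1} = c'_{r+1} + 2*r*c_r
        cp[r - 1] = (cp[r + 1] if r + 1 < n else 0.0) + 2.0 * r * c[r]
    cp[0] /= 2.0                           # zeroth coefficient is halved
    return [2.0 / (b - a) * v for v in cp[:n - 1]]   # chain rule for [a, b]

assert dcheb([0.0, 0.0, 1.0]) == [0.0, 4.0]            # (T2)' = 4 T1 = 4x
assert dcheb([0.0, 0.0, 0.0, 1.0]) == [3.0, 0.0, 6.0]  # (T3)' = 3 T0 + 6 T2
assert dcheb([0.0, 1.0], 0.0, 2.0) == [1.0]            # x - 1 on [0,2] has slope 1
```

<p>The small cases above are exactly the identities $(T_2)' = 4T_1$ and $(T_3)' = 3T_0 + 6T_2$, so any transcription of the recurrence can be checked against them.</p>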
3,516,921
<p>Let <span class="math-container">$f : [−1, 0] → \mathbb{R}, x → x − x^2, n ∈ \mathbb{N}$</span> and let <span class="math-container">$P_n : x_0, . . . , x_n$</span> be an equal partition of <span class="math-container">$[−1, 0]$</span>.</p> <ul> <li>Compute the Riemann sum <span class="math-container">$S_{P_n} (f, z_1, . . . , z_n)$</span>, when <span class="math-container">$z_k$</span> is the midpoint of <span class="math-container">$[x_{k−1}, x_k]$</span> for every <span class="math-container">$k ∈ {1, . . . , n}$</span>.</li> </ul> <p>I'm not familiar with using midpoints and cannot get this to work. All help would be appreciated.</p>
Community
-1
<p><span class="math-container">$x_j=-1+j/n\implies z_j=(x_j-x_{j-1})/2+x_{j-1}=x_{j-1}+1/(2n)=-1+(j-1)/n+1/(2n)=-1+(2j-1)/(2n)$</span>. </p> <p>So <span class="math-container">$f(z_j)=-1+(2j-1)/(2n)-(-1+(2j-1)/(2n))^2=-2+3(2j-1)/(2n)-(2j-1)^2/(4n^2)$</span>.</p> <p>So <span class="math-container">$R=\sum_{j=1}^n f(z_j)∆x=\sum_{j=1}^n(-2+3(2j-1)/(2n)-(2j-1)^2/(4n^2))(1/n)$</span></p> <p>You should be able to get rid of the <span class="math-container">$j$</span>'s using the familiar formulas for <span class="math-container">$\sum i$</span> and <span class="math-container">$\sum i^2$</span>.</p>
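<p>A numeric sanity check of the formulas above (illustrative code, not part of the original answer), using exact rational arithmetic: the midpoint sum should approach $\int_{-1}^0 (x-x^2)\,dx = -5/6$, and for this quadratic it works out to exactly $-5/6 + 1/(12n^2)$.</p>

```python
from fractions import Fraction

def midpoint_sum(n):
    """Midpoint Riemann sum of f(x) = x - x^2 on [-1, 0] with n equal parts."""
    f = lambda x: x - x * x
    dx = Fraction(1, n)
    total = Fraction(0)
    for j in range(1, n + 1):
        z = Fraction(-1) + Fraction(2 * j - 1, 2 * n)  # midpoint of [x_{j-1}, x_j]
        total += f(z) * dx
    return total

assert midpoint_sum(1) == Fraction(-3, 4)                       # single midpoint -1/2
assert midpoint_sum(10) == Fraction(-5, 6) + Fraction(1, 1200)  # -5/6 + 1/(12 n^2)
assert abs(midpoint_sum(1000) + Fraction(5, 6)) < Fraction(1, 10**6)
```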
258,332
<blockquote> <p>Prove that if $a^x=b^y=(ab)^{xy}$, then $x+y=1$.</p> </blockquote> <p>How do I use logarithms to approach this problem?</p>
Brian M. Scott
12,042
<p>No logarithms are needed:</p> <p>$$a^x=(ab)^{xy}=a^{xy}b^{xy}=\left(a^x\right)^y\left(b^y\right)^x=\left(a^x\right)^y\left(a^x\right)^x=\left(a^x\right)^{x+y}$$</p>
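<p>A numeric illustration of the identity chain (not a proof): pick $a$ and $x$, set $y = 1 - x$, and choose $b$ so that $b^y = a^x$; then $(ab)^{xy}$ matches the common value, as the chain of equalities predicts.</p>

```python
import math

a, x = 4.0, 0.3
y = 1.0 - x                 # the conclusion forces x + y = 1
b = (a ** x) ** (1.0 / y)   # choose b so that b^y = a^x

common = a ** x
assert math.isclose(b ** y, common)
assert math.isclose((a * b) ** (x * y), common)  # = (a^x)^y (b^y)^x = common^(x+y)
```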
258,332
<blockquote> <p>Prove that if $a^x=b^y=(ab)^{xy}$, then $x+y=1$.</p> </blockquote> <p>How do I use logarithms to approach this problem?</p>
P.K.
34,397
<p>How about this?$$\begin{align} &amp;(ab)^{xy} \\ =&amp; a^{xy}\cdot b^{xy} \\ = &amp; (a^x)^y \cdot (b^y)^x \\ = &amp; (a^x)^y \cdot (a^x)^x \\ = &amp; (a^x)^{x + y} \end{align}$$It suffices to say that $x + y = 1.$</p>
712,736
<p>For any continuous function $f(t)$ of period $1$, show that $\varphi'=2\pi \varphi+f(t)$ has a unique solution of period $1$.</p> <p>Is this problem wrong, with the counterexample $\varphi(t)=e^{2\pi t}$? Or should we change it into $\varphi'=2\pi i\varphi+f(t)$?</p>
mookid
131,738
<p>Look for a solution $\varphi(t) = z(t)\exp(2\pi t)$; then $$ z'(t) = f(t) \exp(-2\pi t)\\ z(t) = z(0) + \int_0^t f(s) \exp(-2\pi s)\, ds\\ \varphi(t) = \varphi(0)\exp(2\pi t)+ \int_0^t f(s) \exp(2\pi (t-s))\,ds $$</p> <p>Now if the solution is $1$-periodic: $$ \varphi(0) =\varphi(1)\\ = \varphi(0)\exp(2\pi )+ \int_0^1 f(s) \exp(2\pi (1-s))\,ds $$ gives you a unique possibility for $\varphi(0)=y_0$.</p> <p>Now, for such a solution, $$ \varphi'(t) = 2\pi\varphi + f(t), \quad \varphi(0)=y_0\\ \frac d{dt}\varphi(t+1) = \varphi'(t+1) = 2\pi\varphi(t+1) + f(t+1) = 2\pi\varphi(t+1) + f(t), \quad \varphi(0+1)=\varphi(1)=y_0\\ $$ using the periodicity $f(t+1)=f(t)$. Since the solution of the Cauchy problem is unique, $\varphi(t+1)=\varphi(t)$.</p> <p>This solution is then a $1$-periodic solution.</p>
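<p>A numeric check of the formula for $\varphi(0)$, for the sample forcing $f(t)=\sin 2\pi t$ (my choice, not from the question), whose unique $1$-periodic solution is known in closed form to be $\varphi(t) = -(\sin 2\pi t+\cos 2\pi t)/(4\pi)$:</p>

```python
import math

TWO_PI = 2.0 * math.pi
f = lambda t: math.sin(TWO_PI * t)          # sample 1-periodic forcing (my choice)

# phi(0)*(1 - e^{2 pi}) = Integral_0^1 f(s) e^{2 pi (1-s)} ds, by the periodicity
# condition above; evaluate the integral with composite Simpson's rule.
g = lambda s: f(s) * math.exp(TWO_PI * (1.0 - s))
n = 2000                                    # even number of subintervals
h = 1.0 / n
I = g(0.0) + g(1.0) + sum((4 if k % 2 else 2) * g(k * h) for k in range(1, n))
I *= h / 3.0

phi0 = I / (1.0 - math.exp(TWO_PI))

# closed-form periodic solution gives phi(0) = -1/(4 pi)
assert math.isclose(phi0, -1.0 / (4.0 * math.pi), rel_tol=1e-6)
```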
3,279,878
<p>I got this equation while I was trying to solve a certain math Olympiad problem. I tried modulus and whatnot, but I haven't got anywhere. Is there a way to prove this?</p>
Will Jagy
10,400
<p>Just for culture:</p> <p><a href="https://i.stack.imgur.com/7kBhg.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7kBhg.jpg" alt="enter image description here"></a></p> <p>Showing what numbers are NOT represented is the easy part. These 102 ternary forms are special (among positive forms) in that each does represent all non-excluded numbers. </p>
3,840,699
<p>I need to calculate something of the form</p> <p><span class="math-container">\begin{equation} \int_{D} f(\mathbf{x}) d\mathbf{x} \end{equation}</span></p> <p>with <span class="math-container">$D \subseteq \mathbb{R^2}$</span>, but I only have available <span class="math-container">$f(\mathbf{x})$</span> at given samples of points in <span class="math-container">$D$</span>. What do you suggest to do the estimate? For example, I think Monte Carlo integration doesn't apply directly because I can't evaluate <span class="math-container">$f(\mathbf{x})$</span> at arbitrary <span class="math-container">$\mathbf{x}$</span>. Maybe it could be some kind of combination of Monte Carlo and interpolation?</p>
nicomezi
316,579
<p>There is no &quot;best&quot; estimation method in general. For an extreme example, if <span class="math-container">$f$</span> is linear over <span class="math-container">$\mathbb{R}^2$</span>, then knowing <span class="math-container">$f$</span> at three non-aligned points is enough to compute the integral exactly over every domain <span class="math-container">$D$</span>. Conversely, if <span class="math-container">$f$</span> oscillates strongly or is piecewise constant, you have no guarantee of converging quickly enough to the value of the integral, to a given precision, with the sample you have. If your sample contains some very close points and you know <span class="math-container">$f$</span> cannot vary very fast compared with their spacing, you can use higher-order methods to obtain better estimates. And so on ...</p> <p>Without additional information, Monte Carlo seems to be the only reasonable thing to do.</p>
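<p>A sketch of the basic estimator this suggests, under the (strong) assumption that the given sample points are uniformly distributed over $D$; for non-uniform samples one would need density weights. Here the toy setup (my choice) is $D=[0,1]^2$ and $f(x,y)=x+y$, so the true integral is $1$:</p>

```python
import random

random.seed(0)
f = lambda x, y: x + y                  # stand-in integrand (true integral is 1)

# pretend these uniform samples over D = [0,1]^2 are all we were handed
samples = [(random.random(), random.random()) for _ in range(20000)]

area = 1.0                              # area of D
estimate = area * sum(f(x, y) for x, y in samples) / len(samples)

assert abs(estimate - 1.0) < 0.02       # about 7 standard errors of slack
```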
4,618,433
<p>Just a heads up: &quot;<span class="math-container">$a$</span>&quot; and &quot;<span class="math-container">$α$</span>&quot; are different.</p> <p>Let <span class="math-container">$a,b \in \Bbb R$</span> and suppose <span class="math-container">$a^2 − 4b \neq 0$</span>. Let <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span> be the (distinct) roots of the polynomial <span class="math-container">$x^2 + ax + b$</span>. Prove that there is a real number <span class="math-container">$c$</span> such that either <span class="math-container">$\alpha − \beta = c$</span> or <span class="math-container">$\alpha − \beta = ci$</span>.</p> <p>I have no idea how to prove this mathematically. Can someone explain how they would do this, including how they would implement this using a proof tree?</p> <p>This is what I was trying to do.</p> <p><span class="math-container">$$(x - \alpha)(x - \beta) = x^2 + ax + b$$</span></p> <p><span class="math-container">$$x^2 - \alpha x - \beta x + \alpha \beta = x^2 + ax + b$$</span></p> <p><span class="math-container">$$-\alpha x - \beta x = ax$$</span></p> <p><span class="math-container">$$-x(\alpha + \beta) = ax$$</span></p> <p><span class="math-container">$$(\alpha + \beta) = -a$$</span></p> <p><span class="math-container">$$\alpha \beta = b$$</span></p> <p>However, I'm not sure where to go from here, and I wonder whether what I'm doing is wrong.</p>
Ethan Bolker
72,858
<p>Here's an answer that needs very little algebra.</p> <p>If the roots are real their difference <span class="math-container">$c$</span> is real.</p> <p>If the roots are complex they are conjugates, so their difference is a real multiple of <span class="math-container">$i$</span>.</p>
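<p>A numeric illustration (not a proof) using the quadratic formula over the complex numbers: the root difference is $\pm\sqrt{a^2-4b}$, which is real when $a^2-4b&gt;0$ (real roots) and purely imaginary when $a^2-4b&lt;0$ (conjugate roots).</p>

```python
import cmath
import math

def root_difference(a, b):
    """Difference of the roots of x^2 + a x + b for real a, b (a^2 - 4b != 0)."""
    d = cmath.sqrt(complex(a * a - 4.0 * b))     # principal complex square root
    return ((-a + d) / 2.0) - ((-a - d) / 2.0)   # = sqrt(a^2 - 4b)

for a, b in [(1.0, -6.0), (-3.0, 1.0), (0.0, 1.0), (2.0, 5.0)]:
    diff = root_difference(a, b)
    # real difference for real roots, purely imaginary for conjugate roots
    assert math.isclose(diff.imag, 0.0, abs_tol=1e-12) or \
           math.isclose(diff.real, 0.0, abs_tol=1e-12)
```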
3,391,280
<p>Prove by induction on <span class="math-container">$n$</span> that <span class="math-container">$\exists x,y,z \in Z$</span> s.t. <span class="math-container">$x\ge 2, y\ge 2, z\ge 2$</span> satisfy <span class="math-container">$x^2+y^2=z^{2n+1}$</span>.</p> <p>I'm a lot more comfortable with induction proofs of <span class="math-container">$\forall$</span> statements; I haven't really seen one in this format where there's an <span class="math-container">$\exists$</span>. Since the equation is obviously not satisfied by all <span class="math-container">$x,y,z\in Z$</span>, it's harder for me to figure out how to solve it.</p>
Batominovski
72,152
<p><strong>Remark.</strong> This problem is much easier to prove without induction. But, well, since it is required, I will oblige. However, if you look carefully, this is exactly the same as what I wrote in my comment under the OP's question.</p> <p>For each integer <span class="math-container">$n\geq 0$</span>, we want to find <span class="math-container">$(x_n,y_n,z_n)\in\mathbb{Z}^3$</span> such that <span class="math-container">$$x_n^2+y_n^2=z_n^{2n+1}\,.$$</span> For the basis of our induction, start with <span class="math-container">$(x_0,y_0,z_0):=(a,b,a^2+b^2)$</span> for an arbitrary pair <span class="math-container">$(a,b)\in\mathbb{Z}^2$</span>. For an integer <span class="math-container">$n\geq 1$</span>, suppose you have <span class="math-container">$\left(x_{n-1},y_{n-1},z_{n-1}\right)$</span>. Define <span class="math-container">$$(x_n,y_n,z_n):=\left(x_{n-1}z_{n-1},y_{n-1}z_{n-1},z_{n-1}\right)\,.$$</span> Prove that this triple <span class="math-container">$(x_n,y_n,z_n)$</span> satisfies <span class="math-container">$x_n^2+y_n^2=z_n^{2n+1}$</span>.</p> <p>In fact, the same argument shows that, for each <span class="math-container">$k\in\mathbb{Z}_{&gt;0}$</span>, there are infinitely many integers <span class="math-container">$(x,y,z)\in\mathbb{Z}^3$</span> or <span class="math-container">$(x,y,z)\in\mathbb{Z}_{&gt;0}^3$</span> such that <span class="math-container">$$x^2+y^2=z^k\,.$$</span> The case where <span class="math-container">$k$</span> is odd has been dealt with. For an even <span class="math-container">$k$</span>, we start with a Pythagorean triple <span class="math-container">$(a,b,c)\in\mathbb{Z}^3$</span> (or <span class="math-container">$(a,b,c)\in\mathbb{Z}_{&gt;0}^3$</span>), i.e., <span class="math-container">$a^2+b^2=c^2$</span>. Then, <span class="math-container">$$(x,y,z)=\left(ac^{\frac{k}{2}-1},bc^{\frac{k}{2}-1},c\right)$$</span> is a solution to <span class="math-container">$x^2+y^2=z^k$</span>. 
You can, of course, write this proof inductively as well.</p>
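<p>The inductive construction is easy to check by machine; a small sketch (illustrative only), starting from $(a,b)=(2,3)$ so that all of $x,y,z$ meet the problem's bound of $2$:</p>

```python
a, b = 2, 3
x, y, z = a, b, a * a + b * b           # base case n = 0: x^2 + y^2 = z^1
assert x * x + y * y == z

for n in range(1, 8):
    x, y = x * z, y * z                 # inductive step (x, y, z) -> (x z, y z, z)
    assert x * x + y * y == z ** (2 * n + 1)
    assert min(x, y, z) >= 2            # the problem's side condition
```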
3,905,331
<p>I need to prove that <span class="math-container">$\lim\limits_{n\to \infty}\sqrt{\frac{n^2+3}{2n+1}} = \infty$</span> (a sequence) by using the definition:</p> <p>&quot;A sequence <span class="math-container">$a_n$</span> converges to <span class="math-container">$\infty$</span> if, for every number <span class="math-container">$M$</span>, there exists <span class="math-container">$N∈N$</span> such that whenever <span class="math-container">$n≥N$</span> it follows that <span class="math-container">$a_n&gt;M$</span>.&quot;</p> <p>I came up with a proof which I'm not sure is valid, because I just learned the definition and maybe I haven't fully grasped it yet.</p> <p>Please let me know what you think of the following proof:</p> <p>I'll find an <span class="math-container">$N$</span> such that whenever <span class="math-container">$n≥N$</span> it follows that <span class="math-container">$a_n&gt;M$</span> for every <span class="math-container">$M$</span>:</p> <p><span class="math-container">$M &lt; \sqrt{\frac{n^2+3}{2n+1}} \Rightarrow M^2 &lt; \frac{n^2+3}{2n+1} &lt; \frac{n^2+n^2}{2n} = \frac{2n^2}{2n} = n$</span></p> <p>Hence, we got <span class="math-container">$n &gt; M^2$</span>, therefore we can say that when <span class="math-container">$N = M^2$</span> then <span class="math-container">$a_n&gt;M$</span> for every <span class="math-container">$n &gt; M$</span>. End of proof.</p> <p>Do you think it is valid or am I missing something? Any suggestion for a better proof using the definition?</p>
Ameet Sharma
122,510
<p>The proof isn't right. What you proved is that</p> <p><span class="math-container">$\frac{n^2+3}{2n+1}&gt;N \implies n&gt;M^2$</span> for <span class="math-container">$N=M^2$</span></p> <p>This strategy would have worked if you had double implication type deductions... so then your reasoning would work backwards as well as forwards... but your implications are one way.</p> <p>What you want to prove is that</p> <p><span class="math-container">$n&gt;N \implies \frac{n^2+3}{2n+1}&gt;M^2$</span>, for some <span class="math-container">$N$</span></p> <p>Also <span class="math-container">$n^2+3&lt;2n^2$</span> requires <span class="math-container">$n \ge 2$</span> in the integers.</p> <p>A way to check your proof is wrong is to try to go from <span class="math-container">$n&gt;M^2$</span> to <span class="math-container">$\frac{n^2+3}{2n+1}&gt;M^2$</span></p> <p>You should do this at the end anyway. That is really your proof. The rest of the work is preliminary.</p> <p>What you want to do is find an expression which is some constant times n which is less than <span class="math-container">$\frac{n^2+3}{2n+1}$</span> and force that expression to be greater than <span class="math-container">$M^2$</span></p> <p>First I'm going to assume <span class="math-container">$n \ge 1$</span>,</p> <p>So then</p> <p><span class="math-container">$\frac{n^2+3}{2n+1}&gt;\frac{n^2}{2n+1} \ge \frac{n^2}{2n+n} =\frac{n}{3}$</span> call this eqn.(1)</p> <p>So given an <span class="math-container">$M$</span> we want <span class="math-container">$\frac{n}{3}&gt;M^2$</span> so we want <span class="math-container">$n &gt; 3M^2$</span></p> <p>So here's the proof,</p> <p>given an <span class="math-container">$M&gt;0$</span>, we choose <span class="math-container">$N = \max \{1,3M^2+1\}$</span></p> <p><span class="math-container">$\begin{aligned}&amp;n \ge N \\ \\ \implies &amp;n \ge 3M^2+1 &gt; 3M^2 \\ \\ \implies &amp;\frac{n}{3} &gt; M^2 \end{aligned}$</span></p> <p>Since we know <span 
class="math-container">$n \ge 1$</span> we can use eqn. (1), so we know that</p> <p><span class="math-container">$\begin{aligned} &amp;\frac{n^2+3}{2n+1}&gt;M^2 \\ \\ \implies &amp;\sqrt{\frac{n^2+3}{2n+1}}&gt;M \\ \\ \implies &amp;a_n&gt;M\end{aligned}$</span></p> <p>EDIT: Note that I incorrectly had &quot;min&quot; before and fixed it to &quot;max&quot;.</p>
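<p>A spot-check of the choice $N=\max\{1,\,3M^2+1\}$ (illustrative code of mine; it cannot replace the proof, which covers all $n\ge N$ via $a_n &gt; \sqrt{n/3}$):</p>

```python
import math

def a(n):
    return math.sqrt((n * n + 3) / (2 * n + 1))

for M in [1, 2, 5, 10]:
    N = max(1, 3 * M * M + 1)
    # check a few hundred n >= N; the bound a_n > sqrt(n/3) covers the rest
    assert all(a(n) > M for n in range(N, N + 300))
```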
2,264,021
<p>Can you help me understand the basic difference between Interior Point Methods, Active Set Methods, Cutting Plane Methods and Proximal Methods?</p> <p>What is the best method and why? What are the pros and cons of each method? What is the geometric intuition for each algorithm type?</p> <p>I am not sure I understand what the differences are. Please provide examples of each type of algorithm: active set, cutting plane and interior point.</p> <p>I would consider 3 properties of each algorithm class: complexity, practical computation speed and convergence rate.</p>
littleO
40,119
<p>This is only a partial answer, but it's too long for a comment:</p> <p>Interior point methods are similar in spirit to Newton's method. Like Newton's method, they require solving a large linear system of equations at each iteration, and they converge to high accuracy in a small number of iterations (typically 30 or so). Interior point methods are great for small or medium sized problems, but often they can't be used for very large problems because solving the linear system at each iteration is too expensive. First-order methods, such as the proximal gradient method, are similar in spirit to gradient descent -- each iteration is cheap, and you can often get a low or medium accuracy solution in a few hundred iterations. These methods tend to be useful for very large scale problems where interior point methods can't be applied, and where high accuracy is not required.</p>
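<p>To make the &quot;cheap iterations&quot; point concrete, here is a minimal proximal gradient iteration on a one-dimensional lasso problem $\min_x \tfrac12(x-b)^2+\lambda|x|$ — a toy of my own choosing, not from the answer — whose exact minimizer is the soft-threshold $S_\lambda(b)$:</p>

```python
import math

def soft_threshold(v, t):
    """prox of t*|.|: shrink v toward zero by t."""
    return math.copysign(max(abs(v) - t, 0.0), v)

def prox_gradient(b, lam, step=0.5, iters=200):
    """Minimize 0.5*(x - b)**2 + lam*abs(x) by proximal gradient iteration."""
    x = 0.0
    for _ in range(iters):
        x = soft_threshold(x - step * (x - b), step * lam)  # grad step, then prox
    return x

# the exact minimizer of this toy problem is the soft-threshold of b
assert math.isclose(prox_gradient(3.0, 1.0), soft_threshold(3.0, 1.0), abs_tol=1e-9)
assert prox_gradient(0.5, 1.0) == 0.0   # small b is shrunk all the way to zero
```

<p>Each iteration here costs a handful of arithmetic operations — the contrast with solving a linear system per iteration, as an interior point method would, is the point of the answer.</p>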
700,004
<p>I have been working on this proof for a few hours and I cannot make it work out.</p> <p>$$\sum_{i=1}^{n}\frac{1}{i(i+1)}=1-\frac{1}{(n+1)}$$</p> <p>I need to get to $1-\frac{1}{k+2}$.</p> <p>I get as far as $$1-\frac{1}{k+1}+\frac{1}{(k+1)(k+2)}$$ then I have tried $1-\frac{(k+2)+1}{(k+1)(k+2)}$, which got me exactly nowhere.</p>
MCT
92,774
<p>To finish your method:</p> <p>We have $1 - \frac{1}{k+1} + \frac{1}{(k+1)(k+2)}$. You made a slight algebraic mistake -- you didn't distribute the negative when you combined fractions. If you did, you would have gotten $1 - \frac{(k+2) - 1}{(k+1)(k+2)} = 1 - \frac{1}{k+2}$, thus completing the proof.</p>
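<p>Both the closed form and the inductive step can be verified by machine with exact rationals (a check of my own, not a substitute for the induction):</p>

```python
from fractions import Fraction

# the closed form holds for the first several n
for n in range(1, 60):
    s = sum(Fraction(1, i * (i + 1)) for i in range(1, n + 1))
    assert s == 1 - Fraction(1, n + 1)

# the inductive step, with the negative distributed correctly
k = 7
assert 1 - Fraction(1, k + 1) + Fraction(1, (k + 1) * (k + 2)) \
       == 1 - Fraction(1, k + 2)
```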
308,856
<p>A set $E\subseteq \mathbb{R}^d$ is said to be Jordan measurable if its inner measure $m_{*}(E)$ and outer measure $m^{*}(E)$ are equal.However, Lebesgue mesure theory is developed with only outer measure. </p> <p>A function is Riemann integrable iff its upper integral and lower integral are equal.However, in Lebesgue integration theory, we rarely use upper Lebesgue integral.</p> <p>Why are outer measure and lower integral more important than inner measure and upper integral?</p>
Fedor Petrov
4,312
<p>I think, the reason is that if the ground space has infinite measure, you can not define the measurable sets as those for which inner measure equals the outer measure: it may happen that both are infinite, while the set is still not measurable.</p> <p>Note also (this may be related) that outer and inner regularity behave differently in general. For example, a sigma-finite Borel measure on a Polish space is inner regular, but not always outer regular (example: counting measure of rational numbers as a measure on <span class="math-container">$\mathbb{R}$</span>.)</p>
308,856
<p>A set $E\subseteq \mathbb{R}^d$ is said to be Jordan measurable if its inner measure $m_{*}(E)$ and outer measure $m^{*}(E)$ are equal.However, Lebesgue mesure theory is developed with only outer measure. </p> <p>A function is Riemann integrable iff its upper integral and lower integral are equal.However, in Lebesgue integration theory, we rarely use upper Lebesgue integral.</p> <p>Why are outer measure and lower integral more important than inner measure and upper integral?</p>
Martin Väth
165,275
<p>The asymmetry has purely historical reasons. It is possible to develop Lebesgue theory (moreover, all extension theorems and thus the theory of product measures) from the “inner approach”. This was done by Heinz König in a couple of papers and monographs.</p> <p>Although this “inner approach” is equivalent for the Lebesgue measure (more generally, in the <span class="math-container">$\sigma$</span>-finite case, IIRC), it has huge advantages in the non-<span class="math-container">$\sigma$</span>-finite case, since the &quot;outer approach&quot; loses a lot of information - intuitively, there are “bubbles of measure <span class="math-container">$\infty$</span>” in which all information is lost. For example, the product of Haar measures with the “inner” approach is compatible with the Fubini-Tonelli theorem one obtains from the approach by Haar integrals for functions with compact support. (For the “outer” approach this holds only for the <span class="math-container">$\sigma$</span>-finite case, in general.)</p>
2,446,282
<p>The maximum value of the function $f(x)= ax^2+bx+c$ is 10. Given that $f(3)=f(-1)=2$, find $f(2)$</p> <p>The answer is $f(2)=8$</p> <p>I thought that by maximum value it meant that c=10, but the equation I got gave as a result $f(2)=10$</p> <p>Any hint on how to solve it?</p>
Carl Christian
307,944
<p>Machine epsilon $\epsilon$ is the distance between 1 and the next floating point number.</p> <p>Machine precision $u$ is the accuracy of the basic arithmetic operations. This number is also known as the unit roundoff.</p> <p>When the precision is $p$ and the radix is $\beta$ we have $$ \epsilon = \beta^{1-p}.$$ To see this, simply add a 1 in the last digit. If we round to nearest, then $$ u = \frac{1}{2} \beta^{1-p}.$$</p> <p>The wrong formula gave you the right result for $\beta \in \{2,10\}$ because you rounded up, i.e. applied the default rounding mode for human calculators.</p> <p>To be fair, the literature is not in agreement and different terms abound. I recommend that you follow Higham's book: &quot;Accuracy and stability of numerical algorithms&quot;.</p>
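<p>For IEEE 754 binary64 ($\beta = 2$, $p = 53$), both formulas can be checked directly (a Python illustration of the definitions above, not from the original post):</p>

```python
import sys

beta, p = 2, 53                 # IEEE-754 binary64: radix 2, 53-bit precision
eps = beta ** (1 - p)           # machine epsilon: gap from 1.0 to the next float
u = eps / 2                     # unit roundoff under round-to-nearest

assert sys.float_info.epsilon == eps == 2 ** -52
assert (1.0 + eps) - 1.0 == eps     # 1 + eps is exactly representable
assert 1.0 + u == 1.0               # 1 + u is a tie and rounds back to 1
```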
2,705,794
<p>I ran across this problem on a practice Putnam worksheet. Completely stumped.</p> <p>Is $$\large \frac{m^{6} + 3m^{4} + 12m^{3} + 8m^{2}}{24}$$ an integer for all $m \in \mathbb{N}$?</p> <p>I suspect it is an integer for any $m$. It checks out for small cases.</p> <p>Any hints for proving the general case?</p>
VyshnavPT
546,126
<p>This can also be tackled using mathematical induction. It is clear that the claim holds for $m=1$. Assume it holds for some number $k$. Substitute $k+1$ in place of $k$ and cancel the terms that are divisible by 24; we are left with another polynomial. Then start the whole process over for this polynomial (i.e., prove that 24 divides the new polynomial). Eventually the statement is proved.</p>
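<p>Alternatively, since the polynomial has integer coefficients, its value mod $24$ depends only on $m \bmod 24$, so a finite check of the $24$ residue classes proves divisibility for every integer $m$ (a sketch of mine, not from the answer):</p>

```python
P = lambda m: m**6 + 3 * m**4 + 12 * m**3 + 8 * m**2

# P has integer coefficients, so P(m) mod 24 depends only on m mod 24;
# checking the 24 residue classes therefore settles every integer m
assert all(P(m) % 24 == 0 for m in range(24))
```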