3,344,766
<p>It is to be proved that <span class="math-container">$3(p^2)+61$</span> is a perfect square only for <span class="math-container">$$p=1 ; p=N$$</span> The question arises from a problem in arithmetic progression with a certain soft constraint of natural numbers. At the end a quadratic is formed, and since our constraint is natural numbers, the discriminant should be a perfect square. If we take <span class="math-container">$p= 1$</span> we are able to solve the question. Our teacher left it to us to prove <span class="math-container">$p=1$</span> is the only solution. I have not proceeded due to lack of ideas.</p>
J. W. Tanner
615,567
<p><span class="math-container">$q^2-3p^2=61$</span> is a <a href="http://mathworld.wolfram.com/PellEquation.html" rel="nofollow noreferrer">Pell-like equation</a>.</p> <p>(The classic Pell equation would have <span class="math-container">$1$</span> instead of <span class="math-container">$61.$</span>)</p> <p>I believe there are <em>infinitely many</em> solutions to it.</p> <p>Here are some: <span class="math-container">$p=1, 6, 10, 25, $</span> and <span class="math-container">$39.$</span></p> <hr> <p>Addendum (as suggested by <a href="https://math.stackexchange.com/users/467003/trancelocation">trancelocation</a> in a comment to <a href="https://math.stackexchange.com/users/10400/will-jagy">Will Jagy</a>'s answer):</p> <p><span class="math-container">$(2+\sqrt3)(2-\sqrt3)=1$</span> so <span class="math-container">$q^2-3p^2=61\implies(q+\sqrt3p)(q-\sqrt3p)=61\implies$</span></p> <p><span class="math-container">$ (q+\sqrt3p)(2+\sqrt3)(q-\sqrt3p)(2-\sqrt3)=61$</span></p> <p><span class="math-container">$=[(2q+3p)+\sqrt3(q+2p)][(2q+3p)-\sqrt3(q+2p)],$</span></p> <p>which means <span class="math-container">$(2q+3p)^2-3(q+2p)^2=61$</span>, </p> <p>so for any solution <span class="math-container">$(p,q)$</span> to <span class="math-container">$q^2-3p^2=61$</span> [including for example <span class="math-container">$(p,q)=(1,8)$</span>],</p> <p>there is a solution with larger integers, <span class="math-container">$(p,q)=(q+2p,\,2q+3p)$</span>.</p>
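Both claims are easy to check by machine. A brute-force search (my own sanity check, not part of the answer) recovers exactly the listed values of $p$, and the map $(p,q)\mapsto(q+2p,\,2q+3p)$ from the addendum visibly produces ever larger solutions:

```python
# Brute-force search for solutions of q^2 - 3p^2 = 61 with small p,
# then iterate the answer's recurrence (p, q) -> (q + 2p, 2q + 3p).
from math import isqrt

def small_solutions(limit):
    sols = []
    for p in range(limit):
        q2 = 3 * p * p + 61
        q = isqrt(q2)
        if q * q == q2:
            sols.append((p, q))
    return sols

sols = small_solutions(50)
print(sols)  # (1, 8), (6, 13), (10, 19), (25, 44), (39, 68)

# The recurrence sends each solution to a strictly larger one:
p, q = 1, 8
for _ in range(5):
    p, q = q + 2 * p, 2 * q + 3 * p
    assert q * q - 3 * p * p == 61
```

Starting from $(1,8)$ the recurrence gives $(10,19)$, $(39,68)$, and so on, matching the $p$-values in the answer.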
3,030,332
<p>I have started complex analysis and I am stuck on one definition, the 'extended complex plane'. The book says: 'To visualize the point at infinity, think of the complex plane passing through the equator of a unit sphere centred at 0. To each point z in the plane there corresponds exactly one point P on the surface of the sphere, obtained by intersecting the sphere with the line joining the point z to the north pole N of the sphere.' Now my questions: 1. Does the plane passing through the equator of the unit sphere mean our normal <span class="math-container">$xy$</span> plane, where the <span class="math-container">$z$</span> coordinate is 0? 2. If so, shouldn't all the points of the complex plane inside the unit circle get mapped to the north pole N? What is going wrong with my understanding?</p>
RandomMathGuy
624,016
<p>I think that description may be a typo... I think they want to say the following:</p> <p>Imagine a unit sphere resting on top of the complex plane. For each point <span class="math-container">$z\in\mathbb{C}$</span> draw a line segment connecting that point to the "north pole" of the sphere. This line will intersect the sphere in exactly one point besides the north pole, and is distinct for each <span class="math-container">$z\in\mathbb{C}$</span>. No point will have a line segment that is tangent to the sphere at the north pole, but we may add an extra point to <span class="math-container">$\mathbb{C}$</span> that accomplishes this. This is the visualization of the point at infinity.</p>
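For the book's setup (unit sphere centered at $0$, plane through the equator) the correspondence can be written down explicitly: the line from the north pole $N=(0,0,1)$ through $(x,y,0)$ meets the sphere at $\left(\frac{2x}{|z|^2+1},\frac{2y}{|z|^2+1},\frac{|z|^2-1}{|z|^2+1}\right)$. This is my own computation, but it directly resolves the asker's second worry: points inside the unit circle land on the southern hemisphere, and only the limit $|z|\to\infty$ approaches $N$:

```python
# Inverse stereographic projection for the unit sphere centered at 0,
# projecting from the north pole N = (0, 0, 1) onto the equatorial plane.
def plane_to_sphere(x, y):
    d = x * x + y * y + 1.0
    return (2 * x / d, 2 * y / d, (x * x + y * y - 1.0) / d)

# z = 0 maps to the SOUTH pole, not the north pole:
assert plane_to_sphere(0, 0) == (0.0, 0.0, -1.0)

# Points on the unit circle stay on the equator:
px, py, pz = plane_to_sphere(1, 0)
assert abs(pz) < 1e-12

# Every image lies on the unit sphere, and large |z| approaches N:
for x, y in [(0.3, -0.7), (2.0, 5.0), (100.0, 0.0)]:
    px, py, pz = plane_to_sphere(x, y)
    assert abs(px**2 + py**2 + pz**2 - 1.0) < 1e-12
```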
231,403
<p>Let $1 \le p &lt; \infty$. For all $\epsilon &gt; 0$, does there exist $C = C(\epsilon, p)$ such that $$\|u\|_{L^p(0, 1)} \le \epsilon \|u'\|_{L^1(0, 1)} + C\|u\|_{L^1(0, 1)} \text{ for all }u \in W^{1, 1}(0, 1)?$$</p>
Fedor Petrov
4,312
<p>Of course. </p> <p>For $u\in C^1$ let $v=u-\int_0^1 u$; then $v(x_0)=0$ for some $x_0\in (0,1)$ and for all $x\in (0,1)$ we have $$|v(x)|=|v(x)-v(x_0)|=\left|\int_{x_0}^x v'(t)dt\right|\leqslant \|v'\|_{L^1(0,1)}=\|u'\|_{L^1(0,1)},$$ thus $M:=\|v\|_{L^{\infty}(0,1)}\leqslant \|u'\|_{L^1(0,1)}$. This extends to $u\in W^{1,1}$ by continuity. We have $\|u\|_p\leqslant \|v\|_p+\|u\|_1$. Next, $$ \|v\|_p^p=\int |v(x)|^p dx\leqslant M^{p-1} \int |v(x)| dx= (\varepsilon^{-1} \|v\|_1^p)^{1/p}\cdot (M^{p} \varepsilon^{1/(p-1)})^{1-1/p}\leqslant \varepsilon^{-1} \|v\|_1^p+ M^{p} \varepsilon^{1/(p-1)}, $$ where the last step is Young's inequality $X^{1/p}Y^{1-1/p}\leqslant X+Y$, and the result follows from this inequality and $\|v\|_1\leqslant 2\|u\|_1$.</p>
1,149,907
<p>What are some different methods to evaluate</p> <p>$$ \int_{-\infty}^{\infty} x^4 e^{-ax^2} dx$$</p> <p>for $a &gt; 0$.</p> <p>This integral arises in a number of contexts in Physics and was the original motivation for my asking. It also arises naturally in statistics as a higher moment of the normal distribution.</p> <p><strong>I have given a few methods of evaluation below.</strong> Anyone know of others?</p>
Jack D'Aurizio
44,121
<p>Assuming $a&gt;0$, we have: $$ I = \frac{1}{a^{5/2}}\int_{0}^{+\infty}x^{3/2}e^{-x}\,dx = \frac{\Gamma\left(5/2\right)}{a^{5/2}}=\color{red}{\frac{3\sqrt{\pi}}{4\, a^{5/2}}}.$$</p>
glS
173,147
<p>Let $b$ be any positive even integer, and let $a&gt;0$. Then you have $$ \int_{-\infty}^\infty dx \, x^b e^{-ax^2} = 2 \int_0^\infty dx \, x^b e^{-ax^2} = a^{- (b+1)/2} \int_0^\infty dt \, e^{-t} t^{\frac{b-1}{2}} = \color{red}{a^{-(b+1)/2} \Gamma \left(\frac{b+1}{2} \right) }. $$ The particular case $b=4$ gives your result.</p>
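Both the $b=4$ value and the general $\Gamma$-formula are easy to sanity-check numerically; the sketch below (my own check, not from either answer) compares the closed form with a simple trapezoidal quadrature on a large interval:

```python
# Numeric check of ∫ x^b e^{-a x^2} dx = a^(-(b+1)/2) * Γ((b+1)/2) for even b.
from math import exp, gamma, sqrt, pi

def moment_numeric(b, a, R=12.0, n=40_000):
    # trapezoidal rule on [-R, R]; the tails beyond R are negligible here
    h = 2 * R / n
    total = 0.0
    for i in range(n + 1):
        x = -R + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * (x ** b) * exp(-a * x * x)
    return total * h

def moment_closed(b, a):
    return a ** (-(b + 1) / 2) * gamma((b + 1) / 2)

for a in (0.5, 1.0, 2.0):
    for b in (0, 2, 4, 6):
        assert abs(moment_numeric(b, a) - moment_closed(b, a)) < 1e-8

# b = 4 reproduces Jack D'Aurizio's 3√π/(4 a^(5/2)):
assert abs(moment_closed(4, 2.0) - 3 * sqrt(pi) / (4 * 2.0 ** 2.5)) < 1e-12
```

The trapezoidal rule is spectrally accurate for rapidly decaying smooth integrands, which is why such a crude scheme matches the closed form so tightly.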
2,045,812
<p>(gcd: greatest common divisor) I am pulling a night shift because I have trouble understanding the following task.</p> <p>Fibonacci is defined by this in our lectures:<br> I) $F_0 := 1$ and $F_1 := 1$</p> <p>II) For $n\in\mathbb{N}$, $n \gt 1$: $F_n=F_{n-1}+F_{n-2}$</p> <p><strong>Task</strong><br> For $n&gt;0$, let the numbers $F_n$ be the Fibonacci numbers defined as above.<br> Calculate $\gcd(F_n, F_{n+1})$ for $n\in\{3,4,5,6\}$ and display it as </p> <pre><code>aFₙ + bFₙ₊₁ </code></pre> <p>that is, find numbers a, b such that </p> <pre><code>gcd(Fₙ, Fₙ₊₁) = aFₙ + bFₙ₊₁ </code></pre> <p>holds.</p> <hr> <p>I know how to use the Euclidean algorithm, but I don't understand where I should find the a and b, because the task gives me {3,4,5,6} and every gcd of these gives me 1 (gcd(3,4)=1; gcd(4,5)=1). I need help solving this as I am hitting a wall.</p>
S. Y
364,140
<p>Here you want to replace $a$ and $b$ with $a_n$ and $b_n$. We have $$a_nF_{n} + b_n F_{n+1}=\gcd(F_n, F_{n+1}) =\gcd(F_{n-1}, F_n)=a_{n-1}F_{n-1}+ b_{n-1}F_n$$</p> <p>Replacing $F_{n+1}$ with $F_n+F_{n-1}$, we get $$(a_n + b_n - b_{n-1})F_n + (b_n-a_{n-1})F_{n-1}=0$$</p> <p>If we let $a_n + b_n - b_{n-1}=0$ and $b_n-a_{n-1}=0$, we get an $F$-sequence again. For example, replacing $a_n$ in the first equation with $b_{n+1}$ and letting $c_n=(-1)^nb_n$, we get $$c_{n+1}=c_n+c_{n-1}$$</p> <p>We can let $b_0=1$ and $b_1=-1=a_0$; then it can be shown that $c_n=F_{n+1}$ (assuming $F_0=0$, $F_1=1$) and thus $$b_n=(-1)^{n}F_{n+1}$$ and $$a_n = (-1)^{n+1} F_{n+2},$$ and we are looking at a famous identity: $$F_{n+1}^2-F_nF_{n+2} = (-1)^{n}\gcd(F_n, F_{n+1})=(-1)^{n}$$ </p> <p>Hope you find this interesting. </p>
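The coefficients can also be produced mechanically by the extended Euclidean algorithm, which is presumably what the exercise intends; this sketch (my own, using the question's convention $F_0=F_1=1$) verifies $aF_n+bF_{n+1}=1$ for $n\in\{3,4,5,6\}$:

```python
# Extended Euclidean algorithm: returns (g, a, b) with a*x + b*y == g == gcd(x, y)
def ext_gcd(x, y):
    if y == 0:
        return x, 1, 0
    g, a, b = ext_gcd(y, x % y)
    return g, b, a - (x // y) * b

# Fibonacci numbers with the question's convention F_0 = F_1 = 1
fib = [1, 1]
while len(fib) < 10:
    fib.append(fib[-1] + fib[-2])

for n in (3, 4, 5, 6):
    g, a, b = ext_gcd(fib[n], fib[n + 1])
    assert g == 1                          # consecutive Fibonacci numbers are coprime
    assert a * fib[n] + b * fib[n + 1] == 1
    print(n, fib[n], fib[n + 1], a, b)
```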
2,869,305
<p>I need some help evaluating this limit... Wolfram says it blows up to infinity but I don't think so. I just can't prove it yet.</p> <p>$$ \lim_{x\to\infty}(x!)^{1/x}-\frac{x}{e} $$</p>
Claude Leibovici
82,404
<p>In the same spirit as in previous answers, considering $$A=(x!)^{\frac{1}{x}}-\frac{x}{e}=B-\frac{x}{e}$$ $$B=(x!)^{\frac{1}{x}}\implies \log(B)={\frac{1}{x}}\log(x!)$$ Now, using the Stirling approximation $$\log(B)={\frac{1}{x}}\left(x (\log (x)-1)+\frac{1}{2} \left(\log (2 \pi )+\log \left({x}\right)\right)+\frac{1}{12 x}+O\left(\frac{1}{x^3}\right)\right)$$ $$\log(B)= (\log (x)-1)+\frac{1}{2x} \left(\log (2 \pi )+\log \left({x}\right)\right)+O\left(\frac{1}{x^2}\right)$$ Continuing with Taylor $$B=e^{\log(B)}=\frac{x}{e}+\frac{\log (2 \pi )+\log \left({x}\right)}{2 e}+O\left(\frac{1}{x}\right)$$ $$A=\frac{\log (2 \pi x)}{2 e}+O\left(\frac{1}{x}\right)\tag 1$$ Pushing the expansion of $B$ further, you would get $$A=\frac{\log (2 \pi x)}{2 e}+\frac{3 \log ^2\left({2 \pi x}\right)+2}{24 e x}+O\left(\frac{1}{x^2}\right)\tag 2$$ For illustration purposes, let $x=10^k$ and compute $$\left( \begin{array}{cccc} k &amp; (1) &amp; (2) &amp; \text{exact} \\ 1 &amp; 0.761595453 &amp; 0.843495044 &amp; 0.849934276 \\ 2 &amp; 1.185132311 &amp; 1.204528536 &amp; 1.204745228 \\ 3 &amp; 1.608669170 &amp; 1.612217034 &amp; 1.612222301 \\ 4 &amp; 2.032206029 &amp; 2.032770401 &amp; 2.032770506 \end{array} \right)$$</p>
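The table is easy to reproduce; computing $(x!)^{1/x}$ via `math.lgamma` avoids overflow for large $x$ (my own check of the asymptotics):

```python
# Compare A = (x!)^(1/x) - x/e with the leading asymptotic log(2πx)/(2e).
from math import lgamma, exp, log, pi, e

def A_exact(x):
    # (x!)^(1/x) computed as exp(log Γ(x+1) / x) to avoid overflow
    return exp(lgamma(x + 1) / x) - x / e

def A_approx1(x):
    return log(2 * pi * x) / (2 * e)

for k in (1, 2, 3, 4):
    x = 10 ** k
    print(k, A_exact(x), A_approx1(x))

# The error of the leading term shrinks roughly like (log x)^2 / x:
assert abs(A_exact(100) - A_approx1(100)) < 2e-2
assert abs(A_exact(10_000) - A_approx1(10_000)) < 1e-3
```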
825,531
<p>How many positive integers $n$ are there, such that both $2n$ and $3n$ are perfect squares? I tried to use modular arithmetic, but I'm stuck.</p>
lab bhattacharjee
33,337
<p>If $x\ne0,$</p> <p>$$2x=a^2,3x=b^2\implies \frac{a^2}{b^2}=\frac23\iff \left(\frac ab\right)^2=\frac23$$</p> <p>or, on multiplication, $$6x^2=a^2b^2\iff \left(\frac{ab}x\right)^2=6$$ </p> <p>which is impossible, as $a,b,x$ are integers and $\sqrt6$ is irrational</p>
lab bhattacharjee
33,337
<p>Let $x=r\cdot2^a\cdot3^b$ where $(r,6)=1$.</p> <p>As $2x$ is a perfect square, $r\cdot2^{a+1}\cdot3^b$ must be one, so $r$ must be a perfect square $=s^2$ (say) and $a+1,b$ must each be even $\implies a$ must be odd</p> <p>As $3x$ is a perfect square, $s^2\cdot2^a\cdot3^{b+1}$ must be one, so $a,b+1$ must each be even $\implies b$ must be odd</p> <p>So we reach a contradiction for the parity of $a,b$</p>
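Both arguments show that no positive $n$ works; an exhaustive search over a finite range (just a sanity check, not a proof) agrees:

```python
from math import isqrt

def is_square(m):
    r = isqrt(m)
    return r * r == m

# No positive n up to 10**5 has both 2n and 3n perfect squares.
hits = [n for n in range(1, 10**5 + 1) if is_square(2 * n) and is_square(3 * n)]
assert hits == []
```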
57,232
<p>I have huge matrices in the form of</p> <pre><code>mtx1 = {{24+24 FF[6,9] GG[5,10]+24 FF[7,8] GG[5,10]+24 FF[5,10] GG[6,9]+24 FF[6,9] GG[6,9]+24 FF[7,8] GG[6,9]+24 FF[5,10] GG[7,8]+24 FF[6,9] GG[7,8]+24 FF[7,8] GG[7,8],24+24 FF[5,10] GG[5,10]+24 FF[6,9] GG[5,10]+24 FF[7,8] GG[5,10]+24 FF[6,9] GG[6,9]+24 FF[7,8] GG[6,9]+24 FF[5,10] GG[7,8]+24 FF[6,9] GG[7,8]+24 FF[7,8] GG[7,8],24+24 FF[5,10] GG[5,10]+24 FF[6,9] GG[5,10]+24 FF[7,8] GG[5,10]+24 FF[5,10] GG[6,9]+24 FF[6,9] GG[6,9]+24 FF[7,8] GG[6,9]+24 FF[6,9] GG[7,8]+24 FF[7,8] GG[7,8]},{24+24 FF[5,10] GG[5,10]+24 FF[7,8] GG[5,10]+24 FF[5,10] GG[6,9]+24 FF[6,9] GG[6,9]+24 FF[7,8] GG[6,9]+24 FF[5,10] GG[7,8]+24 FF[6,9] GG[7,8]+24 FF[7,8] GG[7,8],24+24 FF[5,10] GG[5,10]+24 FF[6,9] GG[5,10]+24 FF[7,8] GG[5,10]+24 FF[5,10] GG[6,9]+24 FF[7,8] GG[6,9]+24 FF[5,10] GG[7,8]+24 FF[6,9] GG[7,8]+24 FF[7,8] GG[7,8],24+24 FF[5,10] GG[5,10]+24 FF[6,9] GG[5,10]+24 FF[7,8] GG[5,10]+24 FF[5,10] GG[6,9]+24 FF[6,9] GG[6,9]+24 FF[7,8] GG[6,9]+24 FF[5,10] GG[7,8]+24 FF[7,8] GG[7,8]},{24+24 FF[5,10] GG[5,10]+24 FF[6,9] GG[5,10]+24 FF[5,10] GG[6,9]+24 FF[6,9] GG[6,9]+24 FF[7,8] GG[6,9]+24 FF[5,10] GG[7,8]+24 FF[6,9] GG[7,8]+24 FF[7,8] GG[7,8],24+24 FF[5,10] GG[5,10]+24 FF[6,9] GG[5,10]+24 FF[7,8] GG[5,10]+24 FF[5,10] GG[6,9]+24 FF[6,9] GG[6,9]+24 FF[5,10] GG[7,8]+24 FF[6,9] GG[7,8]+24 FF[7,8] GG[7,8],24+24 FF[5,10] GG[5,10]+24 FF[6,9] GG[5,10]+24 FF[7,8] GG[5,10]+24 FF[5,10] GG[6,9]+24 FF[6,9] GG[6,9]+24 FF[7,8] GG[6,9]+24 FF[5,10] GG[7,8]+24 FF[6,9] GG[7,8]}}; </code></pre> <p>but the matrices I use are much bigger. Now I want to get rid of each term that contains <code>FF[___]</code> or <code>GG[___]</code>. Both always come together, therefore I used</p> <pre><code>mtx2 = mtx1 /. FF[___] -&gt; 0; (* mtx2={{24, 24, 24}, {24, 24, 24}, {24, 24, 24}} *) </code></pre> <p>and got the desired result in mtx2. Unfortunately it turns out that this zeroing is extremely time-consuming. For my huge matrices, it takes on the order of 100 seconds.
</p> <p><strong>Question:</strong></p> <p>Is there a more time-efficient way to zero all <code>FF[___]</code>-terms in <code>mtx1</code>?</p> <p><strong>Comparison:</strong></p> <p>I compared several approaches for three big test matrices. The timings also include the construction of the matrix.</p> <p><em>my original approach</em></p> <ul> <li>{174.8417751, 65.4913582, 25.3878123} seconds</li> </ul> <p><em>Coefficient-creation of matrix</em></p> <ul> <li>{134.4920621, 51.4260521, 19.6079772} seconds</li> </ul> <p><em>belisarius' Block-evaluation method</em></p> <ul> <li>{82.3675688, 31.5639078, 12.3822025} seconds</li> </ul> <p><em>eldo's Join/Partition</em></p> <ul> <li>{77.8615328, 29.0973367, 11.2742769} seconds</li> </ul> <p><em>kguler's Block-evaluation</em></p> <ul> <li>{75.8906436, 29.1315892, 11.6544345} seconds</li> </ul> <p><em>Mr.Wizard's mtx1[[All, All, 1]]</em></p> <ul> <li>{75.5589726, 29.0378220, 11.9491954} seconds</li> </ul> <p><strong>Edit</strong></p> <p>The full problem, including the matrix-creation, is posted here: <a href="https://mathematica.stackexchange.com/questions/57241/time-efficient-creation-of-matrix"> Time-efficient creation of matrix </a></p>
Mr.Wizard
121
<p>For the example you gave <code>FF[___]</code> and <code>GG[___]</code> are the only non-number terms, therefore by polynomial sort order you could use simply:</p> <pre><code>mtx1[[All, All, 1]] </code></pre> <blockquote> <pre><code>{{24, 24, 24}, {24, 24, 24}, {24, 24, 24}} </code></pre> </blockquote> <p>I shall now look at your newer question where I anticipate a more representative example.</p>
1,955,509
<p>There's this exercise in Hubbard's book:</p> <blockquote> <p>Let $ h:\Bbb R \to \Bbb R $ be a $C^1$ function, periodic of period $2\pi$, and define the function $ f:\Bbb R^2 \to \Bbb R $ by $$f\begin{pmatrix}r\cos\theta\\r\sin\theta \end{pmatrix}=rh(\theta)$$</p> <p>a. Show that $f$ is a continuous real-valued function on $\Bbb R^2$.</p> <p>b. Show that $f$ is differentiable on $\Bbb R^2 - \{\mathbf 0\}$.</p> <p>c. Show that all directional derivatives of $f$ exist at $\mathbf 0$ if and only if</p> <p>$$ h(\theta) = -h(\theta + \pi) \ \text{ for all } \theta $$</p> <p>d. Show that $f$ is differentiable at $ \mathbf 0 $ if an only if $h(\theta)=a \cos \theta + b \sin \theta$ for some number $a$ and $b$. </p> </blockquote> <p>I can't find how to prove $ f $ is continuous, I tried to prove $$ \lim_{\begin{pmatrix}r\cos\theta\\r\sin\theta \end{pmatrix} \to \begin{pmatrix}s\cos\phi\\s\sin\phi \end{pmatrix}} f\begin{pmatrix}r\cos\theta\\r\sin\theta \end{pmatrix}=s\ h(\phi) $$ for all $s$ and $\phi$. But I can't do much else.</p>
dxiv
291,201
<p>Hint: &nbsp;&nbsp;for $|x| \lt 1$</p> <p>$$f(x) = \sum_{n=0}^{\infty} x^{n+2} = \frac{x^2}{1-x} = -1 -x + \frac{1}{1-x}$$</p> <p>Now consider $f''(\frac{1}{4})$.</p>
1,546,599
<p>What method should I use for this limit? $$ \lim_{n\to \infty}{\frac{n^{n-1}}{n!}} $$</p> <p>I tried the ratio test but I ended up with the ugly expression $$\lim_{n\to \infty}\frac{(n+1)^{n-1}}{n^{n-1}} $$ which would go to 1? That would mean we cannot use the ratio test. I do not know how else I could find this limit.</p>
Sergio Adrián Lagunas Pinacho
238,699
<p>With $a_n=\frac{n^{n-1}}{n!}$, the ratio is $$\frac{a_{n+1}}{a_n}=\frac{(n+1)^{n}}{(n+1)!}\cdot\frac{n!}{n^{n-1}}=\left(\frac{n+1}{n}\right)^{n-1}$$ and $$\lim_{n \to\infty} \left(\frac{n+1}{n}\right)^{n-1} = e &gt; 1,$$ (not $1$), so by the ratio test the sequence diverges to $\infty$.</p>
4,597,679
<p>My textbook is <em>A Mathematical Introduction to Logic, 2nd Edition</em> by Enderton.</p> <p>The question initially arose when I was trying to prove Exercise 4 on pg. 99:</p> <p><span class="math-container">$$\text{Show that if }x \text{ does not occur free in }\alpha,\text{ then }\alpha \vDash \forall x \alpha$$</span></p> <p>One of the lines in the proof, as I worked it out, is:</p> <p>the assignment functions <span class="math-container">$s$</span> and <span class="math-container">$s(x|d)$</span> agree on every variable but <span class="math-container">$x$</span>; since <span class="math-container">$x$</span> does not occur free in <span class="math-container">$\alpha,$</span> hence <span class="math-container">$$\vDash_\mathfrak{U} \alpha[s] \Leftrightarrow \vDash_{\mathfrak{U}}\alpha[s(x|d)]$$</span></p> <p>Then, I referred back to <strong>Theorem 22A</strong>, which states: Assume <span class="math-container">$s_1,s_2$</span> are assignment functions from <span class="math-container">$V$</span> into <span class="math-container">$\mathfrak{U}$</span> which agree at all variables (if any) that occur free in the wff <span class="math-container">$\phi$</span>. Then <span class="math-container">$$\vDash_{\mathfrak{U}} \phi[s_1] \Leftrightarrow \vDash_{\mathfrak{U}} \phi[s_2]$$</span></p> <p>My question is: why is agreement on the free variables enough? Why don't we need to impose agreement on the bound variables of <span class="math-container">$s_1,s_2$</span> as well?</p> <p>The proof of this theorem uses induction, where the inductive hypothesis simply states the agreement on free variables rather than conveying the intuition behind it.</p> <p>I attempted to answer my own question, but when it comes to discerning the difference between free and bound variables, the sentence 'a bound variable is a variable that's being quantified' feels incomplete; could you please provide richer insight into this?</p> <p>Also, this question is closely related to the post: <a href="https://math.stackexchange.com/questions/274925/show-that-if-x-does-not-occur-free-in-%ce%b1-then-%ce%b1-vdash-%e2%88%80-x-%ce%b1/868815?noredirect=1#comment9683956_868815">Show that if $x$ does not occur free in $α$, then $α \vDash ∀ x α$.</a></p>
Mauro ALLEGRANZA
108,274
<p><em>In &quot;everyday-life&quot; there are no free variables</em>.</p> <p>In the language of predicate logic a free variable works like a pronoun in natural language: <span class="math-container">$\text{red}(x)$</span> is like &quot;it is red&quot;. Its meaning depends on the context.</p> <p>How do we specify the &quot;context&quot; in predicate logic? With variable assignment functions:</p> <blockquote> <p><span class="math-container">$\mathfrak A,s \vDash \text {red}(x)$</span></p> </blockquote> <p>holds if the chosen interpretation <span class="math-container">$\mathfrak A$</span> is my desk and the variable assignment function <span class="math-container">$s$</span> maps the variable <span class="math-container">$x$</span> occurring free to my red pencil.</p> <hr /> <p>The variable assignment function mechanism is a mathematical &quot;trick&quot; to define the semantical specifications of the language.</p> <p>The recursive clauses for quantifiers reduce the satisfaction of the formula to the cases without quantifiers, i.e. to formulas with free variables.</p> <p>The result that you refer to (<strong>Theorem 22A</strong>) simply amounts to proving that what matters in the semantical &quot;valuation&quot; of a formula is only the variables that occur free in it, and not all the other infinitely many variables of the language.</p> <p>Maybe a simple example will help.</p> <p>Consider the formula <span class="math-container">$(x=0)$</span> and the domain <span class="math-container">$\mathbb N$</span>. If we consider a variable assignment function <span class="math-container">$s$</span> such that <span class="math-container">$s(x)=0$</span>, we have that <span class="math-container">$\mathbb N,s \vDash (x=0)$</span>; if instead we consider <span class="math-container">$s'$</span> with <span class="math-container">$s'(x)=1$</span> we will have that <span class="math-container">$\mathbb N,s' \nvDash (x=0)$</span>.</p> <p>Consider now the following two functions: <span class="math-container">$s$</span> such that <span class="math-container">$s(x)=0$</span> and <span class="math-container">$s(y)=1$</span>, and <span class="math-container">$s'$</span> such that <span class="math-container">$s'(x)=0$</span> and <span class="math-container">$s'(y)=0$</span>: do you see any change?</p>
3,440,732
<p>How can I show that <span class="math-container">$P=\{\{2k-1,2k\},k\in \mathbb {N}\}$</span> is a basis for a topology on <span class="math-container">$\mathbb N$</span>? Obviously we have to show that the basis axioms hold. But the problem is that if I take two subsets belonging to <span class="math-container">$P$</span>, for example <span class="math-container">$P_1=\{1,2\}$</span>, <span class="math-container">$P_2=\{3,4\}$</span>, I get <span class="math-container">$P_1 \cap P_2=\emptyset$</span>; so how can I show that for every <span class="math-container">$n\in\mathbb N$</span> there is <span class="math-container">$P_3\in P$</span> such that <span class="math-container">$n\in P_3$</span>?</p>
Aldoggen
607,154
<p>For <span class="math-container">$P$</span> to be a basis of <span class="math-container">$\mathbb{N}$</span>, you need to show that </p> <ol> <li>Every natural number is contained in a basis element. Since <span class="math-container">$n\in\{n-1,n\}$</span> or <span class="math-container">$n\in\{n,n+1\}$</span>, depending on whether <span class="math-container">$n$</span> is even or odd, this requirement is always met.</li> <li>For every two basis elements <span class="math-container">$P_1$</span> and <span class="math-container">$P_2$</span>, every element in their intersection must be contained in another basis element. If <span class="math-container">$P_1=P_2$</span>, every element in their intersection is again contained in <span class="math-container">$P_1$</span>. Since two different basis elements are always disjoint (and hence, every element in their intersection will have every property you can imagine, including being contained in a basis element), this requirement is also met.</li> </ol>
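Both conditions can be checked mechanically on an initial segment of $\mathbb N$; this finite sanity check (my own, assuming $\mathbb N$ starts at 1) mirrors the two axioms above:

```python
# P = { {2k-1, 2k} : k in N } — check the two basis axioms on {1, ..., N}.
N = 1000
blocks = [frozenset({2 * k - 1, 2 * k}) for k in range(1, N // 2 + 1)]

# 1. Every point lies in some basis element:
for n in range(1, N + 1):
    assert any(n in B for B in blocks)

# 2. Distinct basis elements are pairwise disjoint, so the intersection
#    condition holds vacuously:
for i, B1 in enumerate(blocks):
    for B2 in blocks[i + 1:]:
        assert not (B1 & B2)
```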
2,576,801
<p>$$\frac {|A-B|}{15} = \frac {|B-A|}{10} = \frac {|A\cap B|}{6}$$</p> <ul> <li>What is the minimal value of $|A \cup B|$?</li> </ul> <p>This question is from my exam. I've tried to solve it by plugging in values and adding them up. </p> <p>Regards</p>
Community
-1
<p><strong>Hint:</strong></p> <p>Note that: $$|A \cup B| = |A-B| + |B-A | + |A \cap B|$$</p> <p>If we have: $$\frac{|A-B|}{15}=\frac{|B-A|}{10}=\frac{|A \cap B|}{6}= k$$ we will get: $$|A \cup B| = 15k+10k+6k = 31k$$ Can $k$ be $\leq 0$?</p>
524,568
<p>I'm working on a recursive function task which I'm a bit stuck on. I've tried to google how to solve this task, but with no luck.</p> <p>Here is the task:</p> <blockquote> <p><em>Provide a recursive function $r$ on $A$</em>* <em>which gives the number of characters in the string</em></p> </blockquote> <p>I hope I can get some help here, since I've tried nearly everything.</p> <p>Thanks a lot for your kind help!</p>
Carlos Eugenio Thompson Pinzón
99,344
<p>This seems like a programming question rather than a mathematical one, but the idea of recursion would be something like:</p> <ol> <li>if the string is empty, the answer is 0;</li> <li>otherwise, the answer is 1 + the length of the substring starting at the second position.</li> </ol> <p>This recursion is particularly natural in implementations such as <strong>C</strong>'s $0$-terminated strings.</p>
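In mathematical notation the recursion on $A^*$ is $r(\varepsilon)=0$ and $r(aw)=1+r(w)$. A direct transcription (a sketch in Python rather than C):

```python
# r(ε) = 0; r(a·w) = 1 + r(w)  — recursive character count on strings
def r(s):
    if s == "":          # base case: the empty string has length 0
        return 0
    return 1 + r(s[1:])  # one character plus the length of the tail

assert r("") == 0
assert r("abc") == 3
assert r("hello world") == 11
```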
3,194,169
<p>I am not a mathematician but I have a question.</p> <p>I found </p> <ul> <li><span class="math-container">$1 / 3 = 0.33333....$</span></li> <li><span class="math-container">$2 / 3 = 0.666666...$</span></li> <li><span class="math-container">$3 / 3 = 1$</span> while it should be <span class="math-container">$0.999999....$</span> </li> </ul> <p>Same when dividing by <span class="math-container">$9$</span>:</p> <ul> <li><span class="math-container">$1/9=0.11111..$</span></li> <li><span class="math-container">$2/9= 0.2222..$</span></li> <li><span class="math-container">$3/9=0.3333..$</span></li> <li><span class="math-container">$9/9=1$</span> while it should be <span class="math-container">$0.99999..$</span></li> </ul> <p>Dividing by <span class="math-container">$7$</span> is a little bit different:</p> <ul> <li><span class="math-container">$1/7= 0.142857 \, 142857 \, 142857$</span></li> <li><span class="math-container">$2/7= 0.285714 \, 285714\, 285714$</span></li> <li><span class="math-container">$3/7= 0.428571\, 428571\, 428571$</span></li> </ul> <p>The same pattern of digits repeats in the same arrangement. But with <span class="math-container">$7/7$</span> the answer is <span class="math-container">$1$</span>. </p> <p>I can understand the approximation, but as the number increases, the error increases.</p> <p>To explain it:</p> <pre><code> Should be 1/3=0.3333. 1/3=0.3333 2/3=0.6666. 2/3=0.6666 3/3=1. 3/3=0.9999 4/3=1.333. 4/3=1.2222 5/3=1.666. 5/3=1.55555 6/3=2 6/3=1.88888 </code></pre> <p>An explanation is appreciated </p>
E.H.E
187,799
<p><span class="math-container">$$0.9999999...=\frac{9}{10}(1+\frac{1}{10}+\frac{1}{10^2}+.....)=\frac{9}{10}\frac{1}{1-\frac{1}{10}}=1$$</span></p>
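The same geometric-series identity can be checked with exact rational arithmetic: the sum of the first $N$ nines is exactly $1-10^{-N}$, so the "error" is $10^{-N}$ and vanishes in the limit (a quick check, not part of the answer):

```python
from fractions import Fraction

# Partial sums of 0.99...9 (N nines) as exact rationals: 1 - 10^(-N).
for N in range(1, 20):
    s = sum(Fraction(9, 10 ** k) for k in range(1, N + 1))
    assert s == 1 - Fraction(1, 10 ** N)
```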
608,875
<p>solve this equation $$\sqrt{\sqrt{3}-\sqrt{\sqrt{3}+x}}=x$$</p> <p>My try: since $$\sqrt{3}-x^2=\sqrt{\sqrt{3}+x}$$ then $$(x^2-\sqrt{3})^2=x+\sqrt{3}$$</p>
Michael Hoppe
93,935
<p>You're on the right track. Squaring again gives $$x^4-2\sqrt3\,x^2-x+3-\sqrt3=0,$$ which factors as $$(x^2-x-\sqrt3)(x^2+x+1-\sqrt3)=0.$$ The original equation forces $x\ge 0$ and $x^2\le\sqrt3$, which rules out both roots of the first factor and the negative root of the second, leaving $$x=\frac{-1+\sqrt{4\sqrt3-3}}{2}.$$</p>
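Whatever route one takes through the algebra, the candidate can be checked numerically. The factorization $(x^2-x-\sqrt3)(x^2+x+1-\sqrt3)=0$ of the quartic is my own computation (worth double-checking), and only one of its roots survives the constraints $x\ge0$ and $x^2\le\sqrt3$:

```python
from math import sqrt

# Candidate root of x^2 + x + 1 - sqrt(3) = 0:
x = (-1 + sqrt(4 * sqrt(3) - 3)) / 2

# It satisfies the original nested-radical equation:
assert abs(sqrt(sqrt(3) - sqrt(sqrt(3) + x)) - x) < 1e-12

# The positive root of the other factor violates x^2 <= sqrt(3):
y = (1 + sqrt(1 + 4 * sqrt(3))) / 2
assert y * y > sqrt(3)
```

A pleasant byproduct of $x^2+x+1=\sqrt3$ is that $\sqrt{\sqrt3+x}=(x+1)$ exactly, which is why the nested radical collapses so cleanly.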
2,319,119
<p>I've been given a question and find it too vague to know what's going on. </p> <p>The question is: </p> <p>$f(x) = 2x + 2$. Define $f(x)$ recursively.</p> <p>I'm just quite puzzled, as there is no $f(0)$, $f(1)$ or $f(x-1)$ function to go by other than the original function.</p> <p>It is supposed to be in the form of $f(x-1)$.</p> <p>Any help appreciated, thanks.</p>
Red shoes
219,176
<p><strong>Hint</strong>: there is no sequence in $S$ converging to $f=0 \in C[0,1]$ (uniformly convergence)</p>
3,789,873
<p>Let <span class="math-container">$X, Y, X_n$</span> be random variables for which <span class="math-container">$X_n+\tau Y\to_D X+\tau Y$</span> for every fixed positive constant <span class="math-container">$\tau$</span>. Show that <span class="math-container">$X_n \to_D X$</span>.</p> <p>I don't think we can let <span class="math-container">$\tau\to0$</span> and claim the result. Any hint on this question?</p>
shalop
224,467
<p>Write</p> <p><span class="math-container">$$\Bbb E[e^{itX_n}] - \Bbb E[e^{itX}] = \bigg(\Bbb E[e^{it X_{n}}] - \Bbb E[e^{it X_{n} + i t \epsilon Y}]\bigg) + \bigg(\Bbb E[e^{it X_{n} + i t \epsilon Y}] - \Bbb E[e^{it X + i t \epsilon Y}]\bigg)+ \bigg(\Bbb E[e^{it X + i t \epsilon Y}]-\Bbb E[e^{it X}]\bigg).$$</span> Now let <span class="math-container">$n \to \infty$</span> in the right hand side. By the assumption of the problem statement, the middle term goes to zero. Note that <span class="math-container">$|e^{itX_n + i t\epsilon Y} - e^{i t X_n} | = |e^{i t\epsilon Y}-1| \leq \min\{2,t\epsilon|Y|\},$</span> and the same bound remains true if we replace <span class="math-container">$X_n$</span> by <span class="math-container">$X$</span>. Thus the first and third terms are bounded in absolute value by <span class="math-container">$\Bbb E[\min\{2,\epsilon t |Y|\}]$</span>. Hence we showed that <span class="math-container">$$\limsup_{n \to \infty} \big|\Bbb E[e^{i t X_n}] - \Bbb E[e^{it X}]\big| \leq 2 \Bbb E[ \min \{2,\epsilon t |Y|\}].$$</span> Here <span class="math-container">$\epsilon$</span> is arbitrary and it only appears on the right hand side, so we can let it tend to zero, and by DCT the right side converges to zero while the left side remains unchanged.</p>
4,095,715
<p>I know how to do these in a very tedious way using a binomial distribution, but is there a clever way to solve this without doing 31 binomial coefficients (with some equivalents)?</p>
Igor Rivin
109,865
<p>Your expression equals <span class="math-container">$$(1+x^2)^n (1+x^4)^n.$$</span> You should be able to finish using the binomial theorem.</p>
4,064,353
<blockquote> <p>The number <span class="math-container">$1.5$</span> is special because it is equal to one quarter of the sum of its digits, as <span class="math-container">$1+5=6$</span> and <span class="math-container">$\frac{6}{4}=1.5$</span>. Find all the numbers that are equal to one quarter of the sum of their own digits.</p> </blockquote> <p>I was puzzling over this question for a while, but I couldn't find a formula without using <span class="math-container">$\sum$</span>, and I can't really solve generalizations, only come up with them. The only thing I could come up with was to brute-force it, but I couldn't find any more 'special' numbers. Any help?</p>
Henry
6,460
<p>This is not much better than a brute force method:</p> <ul> <li><p>The sum of the digits <span class="math-container">$S$</span> is a non-negative integer, so a quarter of it <span class="math-container">$\frac{S}4$</span> is non-negative, of the form <span class="math-container">$x$</span> or <span class="math-container">$x.25$</span> or <span class="math-container">$x.5$</span> or <span class="math-container">$x.75$</span> for some non-negative integer <span class="math-container">$x$</span>.</p> </li> <li><p>We must have <span class="math-container">$x &lt; 10$</span> since if <span class="math-container">$10 \le x\lt 100$</span> then <span class="math-container">$40 \le 4x \le S \lt 412$</span> so the sum of the digits of <span class="math-container">$x$</span> must be at least <span class="math-container">$40-12=28$</span> but two-digit integers have sums of digits no more than <span class="math-container">$18$</span>. Similarly with larger <span class="math-container">$x$</span>.</p> <ul> <li>So a satisfactory <span class="math-container">$x$</span> has sum of digits <span class="math-container">$4x=S=x$</span> with solution <span class="math-container">$x=0$</span></li> <li>and a satisfactory <span class="math-container">$x.25$</span> has sum of digits <span class="math-container">$4(x+\frac14)=S=x+7$</span> with solution <span class="math-container">$x=2$</span></li> <li>and a satisfactory <span class="math-container">$x.5$</span> has sum of digits <span class="math-container">$4(x+\frac12)=S=x+5$</span> with solution <span class="math-container">$x=1$</span></li> <li>and a satisfactory <span class="math-container">$x.75$</span> has sum of digits <span class="math-container">$4(x+\frac34)=S=x+12$</span> with solution <span class="math-container">$x=3$</span></li> </ul> </li> </ul> <p>making the numbers which work <span class="math-container">$$0, \quad2.25,\quad 1.5,\quad 3.75$$</span></p>
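Since any solution is a multiple of $1/4$ and the answer's bound gives $x<10$, a brute-force scan over a generous range (my own confirmation) finds exactly the four numbers above:

```python
# Candidates are m/4 for integer m; the condition "number = digit sum / 4"
# becomes "digit sum of m/4 equals m".
def digit_sum(m):
    # decimal digits of m/4, written without a trailing ".0" for integers
    s = str(m // 4) if m % 4 == 0 else str(m / 4)
    return sum(int(c) for c in s if c.isdigit())

hits = [m / 4 for m in range(0, 400) if digit_sum(m) == m]
assert hits == [0.0, 1.5, 2.25, 3.75]
```

Quarters like `m / 4` are exact in binary floating point, so the string-based digit extraction here is safe.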
67,434
<p>Let $f: \mathbb{R}^n \to L^2(\mathbb{R}^d) $ be a Bochner-integrable function (all measures are the Lebesgue measure). Does then $ \left(\int_{\mathbb{R}^n} f(x) \,d\lambda^n (x)\right)(y) = \int_{\mathbb{R}^n} f(x)(y) \,d\lambda^n(x) $ hold for $\lambda^d$-almost all $y \in \mathbb{R}^d$? I.e. can one compute such Bochner integrals just by computing ordinary Lebesgue integrals?</p>
Gerald Edgar
454
<p>Answer: YES and NO. </p> <p>YES: In any practical situation you are likely to meet, your formula is correct. You would prove it using Fubini's Theorem, pairing your two sides with an arbitrary $h \in L^2(\mathbb R^d)$ and getting the same answer on both sides. The catch is, you have to be able to apply Fubini. </p> <p>NO: As stated, it can fail. $f(x) \in L^2(\mathbb R^d)$, so $f(x)$ is an equivalence class. For each $x$, CHOOSE some representative for that class, call it $f(x)(y)$. But now, for fixed $y$ it may fail that $f(x)(y)$ is a measurable function of $x$. Or even if those are all measurable, it may fail that $f(x)(y)$ is measurable in the product measure.</p>
2,811,991
<p>$$\int_{-3}^5 f(x)\,dx$$<br> for $$ f(x) =\begin{cases} 1/x^2, &amp; \text{if }x \neq 0 \\ -10^{17}, &amp; \text{if }x=0 \end{cases} $$</p> <p>I tried with the Newton-Leibniz formula; is this correct?</p> <p>$\int_{-3}^0 f(x)dx$ + $\int_{0}^5 f(x)dx$ =</p> <p>$1/x^2 |_{-3}^{0} $ $ + $ $1/x^2 |_0^5$=</p> <p>$3/(-3)^2+10^{17}+(-10^{17})-3/5^2$= $16/75$</p> <p>I know I made a mistake, but I don't know what; could someone please correct me and help me.</p>
Robert Israel
8,508
<p>The $-10^{17}$ is a red herring: the value of a function at a single point has no effect on the integral. On the other hand, this does signal that something funny is likely to be happening as $x \to 0$, and indeed it does: the integrand goes to $\infty$ there. Since the integrand is unbounded, this is not an ordinary Riemann integral, but rather an improper integral. For that, you want to take a limit:</p> <p>$$ \int_{-3}^5 f(x)\; dx = \lim_{a \to 0-} \int_{-3}^a f(x)\; dx + \lim_{b \to 0+} \int_b^5 f(x)\; dx $$ However, (if you use a correct antiderivative) you'll find that both of these limits are $+\infty$. So the conclusion is that the improper integral does not exist (or is $+\infty$, depending on your point of view).</p>
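<p>A numerical sketch makes the divergence concrete: a midpoint Riemann sum matches the closed form $\int_b^5 x^{-2}\,dx = \frac1b - \frac15$ for fixed $b&gt;0$, and that closed form blows up as $b \to 0^+$ (helper names are mine):</p>

```python
def exact_right_piece(b):
    """Closed form of the integral of 1/x^2 over [b, 5], from the antiderivative -1/x."""
    return 1.0 / b - 1.0 / 5.0

def midpoint_right_piece(b, n=100_000):
    """Midpoint Riemann sum of the same integral, as a sanity check."""
    h = (5.0 - b) / n
    return sum(h / (b + (i + 0.5) * h) ** 2 for i in range(n))

# The sum agrees with the closed form for fixed b > 0 ...
numeric_ok = abs(midpoint_right_piece(0.01) - exact_right_piece(0.01)) < 0.1
# ... but the value grows without bound as b -> 0+, so the improper integral diverges.
blow_up = [exact_right_piece(10.0 ** -k) for k in (1, 3, 6)]
```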
2,808,512
<p>Show that if an $n\times n$ matrix $A$ satisfies $A^T=-A$, then $x^TAx=0$ for any $n\times 1$ vector $x$.</p> <p>My attempt: Since the transpose doesn't affect the diagonal entries, matrix $A$ has only zeros on its diagonal. </p> <p>Then I tried to write $x$ in the form of $\begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{pmatrix}$ and $A$ in the form of $\begin{pmatrix} 0 &amp; a_{11} &amp; \cdots &amp; a_{1n}\\ a_{21} &amp; 0 &amp; \cdots &amp; \cdots \\ \vdots &amp;\ddots &amp;\ddots &amp; \vdots \\ a_{n1} &amp; a_{n2} &amp; \cdots &amp; 0 \end{pmatrix}$ and multiply them in this form, but it doesn't make any sense to me.</p>
Botond
281,471
<p>We have that $$x^TA^Tx=\sum_{i,j}x_i(A^T)_{ij}x_j=\sum_{i,j}x_iA_{ji}x_j=\sum_{i,j}x_jA_{ji}x_i=x^TAx$$ But we also have that $$x^TA^Tx=\sum_{i,j}x_i(-A)_{ij}x_j=-x^TAx$$ So $$x^TAx=-x^TAx$$ So it must be $0$.</p>
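<p>A numerical sanity check of the identity, with a randomly generated skew-symmetric matrix (a sketch, not part of the proof):</p>

```python
import random

random.seed(0)
n = 4

# Build a matrix with A^T = -A: zero diagonal and a_ji = -a_ij.
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        A[i][j] = random.uniform(-1.0, 1.0)
        A[j][i] = -A[i][j]

x = [random.uniform(-1.0, 1.0) for _ in range(n)]

# x^T A x = sum over i, j of x_i * a_ij * x_j; the (i, j) and (j, i)
# terms cancel in pairs, exactly as in the index computation above.
quad_form = sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))
```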
851,459
<p>I am trying to find an explanation of how to derive the Cahn–Hilliard equation:</p> <p>$$ u_t =\Delta (w'(u)-\epsilon ^2 \Delta u)$$ </p> <p>as the gradient flow of the energy functional $$ E[u]=\int \big( w(u)+\epsilon ^2 |\nabla u|^2 \big). $$ I tried to follow the definition of gradient flow from <a href="http://anhngq.wordpress.com/2010/11/05/what-is-a-gradient-flow/" rel="nofollow">http://anhngq.wordpress.com/2010/11/05/what-is-a-gradient-flow/</a> but I got stuck, and then</p> <p>I read that it is the $ H^{-1} $ gradient flow of the functional. Can anyone tell me what an $ H^{-1} $ gradient flow is? Thanks. </p>
QA Ngô
122,043
<p>Let $H$ be a Hilbert space and $F :H \to \mathbb R$. Suppose that $F$ is Gâteaux differentiable at $u \in H$; then the gradient of $F$ at $u$, denoted by $\text{grad}F(u)$, is given by the unique $w \in H$ which satisfies $$\langle F'(u), v \rangle_H = \langle w, v \rangle_H \quad \forall v \in H.$$ When we restrict $v$ to some set $X \subset H$, we have the so-called constrained gradient. Then we usually write $\text{grad}_H^X F(u)$ to denote the element of least norm in the following set $$\left\{ f \in H : \quad \lim_{t \to 0} \frac 1t \big( F(u+tv) - F(u) \big) = \langle f, v \rangle_H \quad \forall v \in X\right\}.$$ Hence, for each $X$ and $H$, we have certain flows which mainly depend on how we calculate $\text{grad}_H^X F(u)$.</p> <p>For example, say $$F(u) = \frac 12 \int_\Omega |du|^2 dx.$$ Clearly, with $X = C_0^\infty (\Omega)$, we have $$\text{grad}_{L^2}^X F(u) = -\Delta u$$ while we have $$\text{grad}_{H^1}^X F(u) = u$$ and $$\text{grad}_{H^{-1}}^X F(u) = \Delta^2 u.$$ In other words, the equation $$u_t = -\Delta u$$ will be an $L^2$-gradient flow associated to the energy functional $F$ above. However, it is not the $H^{-1}$-gradient flow associated to the energy functional $F$; that flow is instead the following equation $$u_t = \Delta^2 u.$$</p>
1,930,722
<p>$$\lim_{x\to 0}\frac{x}{x+x^2e^{i\frac{1}{x}}} = 1$$</p> <p>I have been trying to see that:</p> <p>$$e^{\frac{1}{x^2}i} = 1+\frac{i}{x^2}-\frac{1}{x^42!}-i\frac{1}{x^63!}+\frac{1}{x^84!}+\cdots\implies$$</p> <p>$$x^2e^{\frac{1}{x^2}i} = x^2+\frac{i}{1}-\frac{1}{x^22!}-i\frac{1}{x^43!}+\frac{1}{x^64!}+\cdots$$</p> <p>but I couldn't find a way to prove that the limit goes to $1$.</p>
Olivier Oloa
118,798
<p>We assume that $x \in \mathbb{R}$. One may write, as $x \to 0$, $$ \left|\frac{x}{x+x^2e^{i\frac{1}{x}}} \right|=\frac{1}{\left|1+xe^{i\frac{1}{x}} \right|}\le \frac{1}{1-\left|xe^{i\frac{1}{x}} \right|}=\frac{1}{1-\left|x\right|} \to 1 $$ since $$ ||a|-|b||\le |a+b| $$ and $\left|e^{i\frac1x}\right|=1$ for real $x$. In fact the same estimate gives $$ \left|\frac{x}{x+x^2e^{i\frac{1}{x}}}-1 \right|=\frac{\left|x\right|}{\left|1+xe^{i\frac{1}{x}} \right|}\le \frac{\left|x\right|}{1-\left|x\right|} \to 0, $$ so the limit is indeed $1$.</p>
1,130,645
<p>I am asked to prove: if <span class="math-container">$|a|&lt;\epsilon$</span> for all <span class="math-container">$\epsilon&gt;0$</span>, then <span class="math-container">$a=0$</span></p> <p>I can prove this as follows.</p> <p>Assume <span class="math-container">$a \not= 0$</span></p> <p>I want to show then that <span class="math-container">$|a| \geq \epsilon$</span> for some <span class="math-container">$\epsilon$</span></p> <p>We let <span class="math-container">$\epsilon = \frac{|a|}{2}$</span> and we are done.</p> <p>However, I am curious if you can use the idea of infinitesimals to prove this directly. Can I let <span class="math-container">$\epsilon$</span>=an (or maybe the?) <a href="http://mathworld.wolfram.com/Infinitesimal.html" rel="nofollow noreferrer">infinitesimal value</a> and then just show that:</p> <p><span class="math-container">$|a|&lt;\epsilon$</span> <span class="math-container">$\implies$</span> <span class="math-container">$-\epsilon&lt;a&lt;\epsilon$</span> <span class="math-container">$\implies$</span> <span class="math-container">$|a|=a=0$</span></p> <p>I don't know much about infinitesimals but wanted to try and prove this directly. This was the first thing that occurred to me for some reason. Can anyone shed some light on infinitesimals and whether or not I can use them this way? this might be completely off base because I really have almost no knowledge about how infinitesimals fit into mathematics and mathematical thinking.</p>
Rob Arthan
23,171
<p>Infinitesimals aren't really going to help. What you are trying to prove is true in any ordered field. A proof along the lines you suggest requires the extra assumption that $a$ is "standard" (see Hagen von Eitzen's answer) and so proves something weaker than you want. If you accept a proof by contradiction as "direct", then how about: <em>assume $a \not= 0$, then $|a| &gt; 0$ and hence, by assumption, with $\epsilon = |a|$, $|a| &lt; |a|$, a contradiction</em>.</p>
3,203,577
<p>What is the computational complexity of solving a linear program with <span class="math-container">$m$</span> constraints in <span class="math-container">$n$</span> variables?</p>
Jeff Linderoth
669,913
<p>The best possible (I believe) is by Michael Cohen, Yin Tat Lee, and Zhao Song: Solving linear program in the current matrix multiplication time. <a href="https://arxiv.org/abs/1810.07896" rel="noreferrer">https://arxiv.org/abs/1810.07896</a> (STOC 2019) Hope this helps.</p>
2,512,396
<p>I am trying to compute the inverse Laplace transform of the function $F(s) = \tan ^{-1} (2/s)$.</p>
Math Lover
348,257
<p>Hint: $$\mathcal{L}(tf(t))=-\frac{d}{ds}(F(s)).$$</p>
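<p>Carrying the hint through: $F(s)=\tan^{-1}(2/s)$ gives $F'(s)=-\frac{2}{s^2+4}$, so $\mathcal{L}(tf(t))=\frac{2}{s^2+4}$, i.e. $tf(t)=\sin 2t$ and $f(t)=\frac{\sin 2t}{t}$. A numerical cross-check of $\int_0^\infty e^{-st}\,\frac{\sin 2t}{t}\,dt=\tan^{-1}(2/s)$ at $s=1$ (a sketch; the truncation point and step count are my choices):</p>

```python
import math

def laplace_transform_numeric(f, s, upper=60.0, steps=300_000):
    """Midpoint approximation of the integral of e^(-s t) f(t) over [0, upper]."""
    h = upper / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += math.exp(-s * t) * f(t) * h
    return total

# Candidate inverse transform obtained from the hint: f(t) = sin(2t)/t.
f = lambda t: math.sin(2.0 * t) / t
approx = laplace_transform_numeric(f, s=1.0)   # tail past t = 60 is negligible for s = 1
exact = math.atan(2.0 / 1.0)                   # F(1) = arctan(2)
```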
2,441,793
<p><strong>Questions.</strong> </p> <p>(0) Is there a usual technical term in ring theory for the following kind of module?</p> <blockquote> <p>$M$ is a free $R$-module over a commutative ring $R$, and for each $R$-basis $B$ of $M$, each $b\in B$, and each <em>unit</em> $r$ of $R$, we have $r\cdot b=b$</p> </blockquote> <p>(1) Is there a classification of such modules, or a non-classifiability result in the literature?</p> <p><strong>Remarks.</strong></p> <p>--This is not a homework problem. I need to know more about this, yet it is not central to what I am doing currently, and I hope this is well-known and documented.</p> <p>--Trivial examples are </p> <ul> <li><p>the $R$-module $\{0\}$ whose only basis is $\{\}$, for which the condition is vacuously true,</p></li> <li><p>$R:=\mathbb{Z}/2\mathbb{Z}$, $\kappa:=$some cardinal, and $M:=\prod_{i\in\kappa} R$, </p></li> <li><p>$R:=\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}$, $\kappa:=$some cardinal, and $M:=\prod_{i\in\kappa} R$, </p></li> <li><p>$\kappa_0,\kappa_1$ arbitrary cardinals, $R:=\prod_{i\in\kappa_0} \mathbb{Z}/2\mathbb{Z}$, and $M:=\prod_{i\in\kappa_1} R$. </p></li> </ul> <p>EDIT: beware, the following is evidently <strong>not a free $R$-module</strong>; I keep it here since, with the warning, it seems instructive:</p> <ul> <li><p>$R:=\mathbb{Z}/4\mathbb{Z}$, whose units are $1+(4)_R$ and $3+(4)_R$, $\kappa:=$some cardinal, and $M:=\prod_{i\in\kappa}\mathbb{Z}/2\mathbb{Z}$.</p></li> </ul> <p>--Since $R$ need not be a principal ideal domain, or even a domain, no classification theorem known to me applies here.</p>
user334639
221,027
<p>It's nice that you have this disquietude at this point.</p> <p>Mathematical precision and clarity depend on the context, the writer, the intended reader and their common backgrounds.</p> <p>Absolute precision using just symbols is possible in theory, but in practice you will be much better off using words instead.</p> <p>Hence, whether the symbol "=" means an <strong>equation</strong> (so you are talking about whether there is some $x$ for which equality holds), a <strong>statement</strong> (so $x$ has been fixed before and you are asserting that both sides of the equality are the same number), an <strong>identity</strong> (so for all $x$ in a certain set the resulting numbers on both sides of the equality will be the same number), or a <strong>functional equality</strong> (so the functions implicitly defined by both expressions -which is often an imprecise way to define a function- are the same function), <strong>etc</strong>, should either be clear from the context or it should be made clear on the fly.</p> <p>When solving an equation such as $\sin(x)=0$, you should use &lt;--> because you want to describe the entire set of solutions, not more, not less.</p> <p>Now if you are proving an implication with a chain of implications, you'd normally use words instead of symbols. In any case, you will assert that two things are equivalent instead of just saying that one implies the other if this hint will produce a better impact on the reader. For the logical validity of a proof, if you show that a-->b-->c-->d then you have shown that a-->d, and that's all you need to argue. But telling the reader that actually b&lt;-->c can make their life easier or harder, depending on the context.</p> <p>In summary, good math writing is not about utmost symbolic precision, it's about understanding who you're writing to.</p>
2,795,652
<blockquote> <p>Given a set of decimal digits and a set of primes $\mathbb{P}$, find some $p \in \mathbb{P}$ and $n \in \mathbb{N}$ such that $p^n$ contains all the digits from the given set, in any order.</p> </blockquote>
lhf
589
<p>More is true:</p> <blockquote> <p>For any finite sequence of decimal digits, there is a power of $2$ whose decimal expansion begins with this sequence.</p> </blockquote> <p>See <a href="https://math.stackexchange.com/questions/13131/starting-digits-of-2n">here</a> for a proof. For algorithms, see <a href="https://math.stackexchange.com/questions/46100/fractional-part-of-b-log-a">this question</a>.</p> <p>On the other hand, I've just run a computer search and every allowed subset of digits occurs as the digits of a prime less than 304456880. Not all subsets of digits are allowed, of course: those whose sum is a multiple of $3$ are not. Nor are those solely composed of even digits. I've found $78$ forbidden subsets.</p>
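<p>A small search in the spirit of the computer run above, using the containment reading of the question (the digit set $\{1,2,4\}$ has sum $7$, not a multiple of $3$; all names are mine):</p>

```python
def is_prime(n):
    """Trial division; fine for small searches."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def smallest_prime_containing(digits):
    """Smallest prime whose decimal digits include every digit of `digits`."""
    wanted = set(digits)
    n = 2
    while True:
        if wanted <= set(str(n)) and is_prime(n):
            return n
        n += 1

# Every 3-digit arrangement of 1, 2, 4 below 241 is even, so 241 wins.
witness = smallest_prime_containing("124")
```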
661,542
<p>How many permutations of $1,2,3,4,5$ have no two consecutive numbers in adjacent positions, such as $1,2$ or $5,4$?</p>
Caleb Stanford
68,107
<p><strong>Hint:</strong> Split into cases on the position of the number $3$.</p> <ul> <li><p>If the number $3$ is not first or last, then $1$ and $5$ must be on either side of it. Then what?</p></li> <li><p>If the number $3$ is first or last, then $1$ or $5$ will be next to it. How many ways are to fill the other numbers in?</p></li> </ul>
2,197,065
<p>Baire category theorem is usually proved in the setting of a complete metric space or a locally compact Hausdorff space.</p> <p>Is there a version of Baire category Theorem for complete topological vector spaces? What other hypotheses might be required?</p>
fleablood
280,126
<p>Probably because average distance is considered too easy.</p> <p>I'm not entirely sure what you mean by an "average time problem", but the one thing that makes average speed problems different is that speed is not a cumulative measured value but a ratio between two cumulative values: $\text{speed} = \dfrac {\text{distance}}{\text{time}}$.</p> <p>As a consequence the average speed over several trips is NOT the average of the various speeds: $v_\text{average} \ne \frac {v_1 + v_2}2$. That quantity makes no sense. If one trip is at $60$ mph and the second trip is at $120$ mph, the average of the speeds is $90$ mph, but that doesn't "mean" anything. What if the first trip lasted only a few seconds and the second several hours? Even if you normalize them so that the two trips last the same time, or cover the same distance, the answer is pointless. What can we do or predict with the average of the speeds? Nothing.</p> <p>Average speed is the <em>total</em> distance divided by the total time, and that is not the average of the speeds: $$v_\text{average} = \frac {d_\text{total}}{t_\text{total}}= \frac {d_1+d_2}{t_1 + t_2} = \frac {v_1 t_1 + v_2 t_2}{t_1+t_2} \ne \frac{v_1 + v_2}2.$$</p> <p>Distance, on the other hand, <em>is</em> additive, and the average distance <em>is</em> the average of the distances:</p> <p>$$d_\text{average} = \frac {d_1 + d_2}2.$$</p> <p>Even if we express it in terms of speed, $\text{distance} = \text{speed} \times \text{time}$ distributes nicely:</p> <p>$$d_\text{average} = \frac {d_1+d_2}2 = \frac{v_1 t_1 + v_2 t_2}{2} = \frac {v_\text{average}(t_1 + t_2)}{2}, \quad\text{because}\quad \frac {d_\text{total}}{t_\text{total}} (t_1 + t_2)=d_\text{total}.$$</p> <p>Anyway, obviously we <em>could</em> have an average distance question: take so many trips, each at a certain speed lasting a certain time, and ask for the average distance of the trips. Straightforward to solve.</p> <p>===</p> <p>Okay, viewing the comments, I guess average time, distance, and speed are all okay.</p> <p>Say Tom does three trips. Trip 1 is $60$ miles at $45$ mph for $1$ hour $20$ minutes, trip 2 is $75$ miles at $60$ mph for $1$ hour $15$ minutes, and trip 3 is $60$ miles at $80$ mph for $45$ minutes.</p> <p>Find the average distance, time, and speed, pretending you don't know any of the distances, times, and speeds.</p> <p>Average distance $= \frac {d_\text{total}}3 = \frac {v_1 t_1 + v_2 t_2+v_3 t_3}{3} = \frac{45\cdot 1\frac 13 + 60\cdot 1.25 + 80\cdot 0.75}{3} = \frac {60+75 + 60}3 = 65$ miles. It <em>IS</em> the average of the distances.</p> <p>Average speed $= \frac{d_\text{total}}{t_\text{total}} = \frac {60+75+60}{1\frac 13 + 1.25 + 0.75} = \frac {195}{3\frac 13} = 58.5$ mph (the total time is $3$ hours $20$ minutes). It is <em>NOT</em> the average of the speeds.</p> <p>Average time $= \frac{t_\text{total}}3 = \frac{d_1/v_1 + d_2/v_2 + d_3/v_3}3 = \frac{60/45 + 75/60+60/80}3 = \frac {3\frac 13}3 = 1\frac 19$ hours $= 1$ hour $6 \frac 23$ minutes. It <em>IS</em> the average of the times.</p>
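<p>The worked example can be replayed in a few lines of Python (a sketch):</p>

```python
# Tom's three trips as (distance in miles, speed in mph).
trips = [(60, 45), (75, 60), (60, 80)]

distances = [d for d, v in trips]
times = [d / v for d, v in trips]              # 4/3, 5/4 and 3/4 hours

avg_distance = sum(distances) / len(trips)     # the mean of the distances
avg_time = sum(times) / len(trips)             # the mean of the times
avg_speed = sum(distances) / sum(times)        # total distance / total time

# The naive mean of the speeds is a different (and meaningless) number.
mean_of_speeds = sum(v for d, v in trips) / len(trips)
```

<p>Here <code>avg_speed</code> comes out to $58.5$ mph while the naive mean of the speeds is about $61.7$ mph, illustrating the point.</p>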
4,131,208
<p>Does the following converge to <span class="math-container">$0$</span> for <span class="math-container">$\theta&gt;-1/2$</span>?</p> <p><span class="math-container">$$\lim_{n\rightarrow\infty}\frac{\sum_{i=1}^ni^{3\theta}}{\left(\sum_{i=1}^ni^{2\theta}\right)^{3/2}}$$</span></p> <p>I'd like to use the comparison test but have no idea what to compare it to. For <span class="math-container">$\theta=1$</span>, code I wrote suggests the expression goes to <span class="math-container">$0$</span>, but I would like a mathematical proof.</p>
Claude Leibovici
82,404
<p>Another solution, using generalized harmonic numbers: <span class="math-container">$$a_n=\frac{\sum_{i=1}^ni^{3\theta}}{\Big[\sum_{i=1}^ni^{2\theta}\Big]^{\frac 32}}=\frac{H_n^{(-3 \theta )}}{\Big[H_n^{(-2 \theta )}\Big]^{\frac 32}}$$</span> Using asymptotics <span class="math-container">$$H_n^{(-3 \theta )}=n^{3 \theta } \left(\frac{n}{3 \theta +1}+\frac{1}{2}+\frac{\theta }{4 n}+O\left(\frac{1}{n^3}\right)\right)+\zeta (-3 \theta )$$</span> <span class="math-container">$$H_n^{(-2 \theta )}=n^{2 \theta } \left(\frac{n}{2 \theta +1}+\frac{1}{2}+\frac{\theta }{6 n}+O\left(\frac{1}{n^3}\right)\right)+\zeta (-2 \theta )$$</span></p> <p><span class="math-container">$$a_n\sim\frac{(2 \theta +1)^{3/2}}{3 \theta +1}\frac 1{\sqrt n}\Bigg[1-\frac1 {4n}+O\left(\frac{1}{n^2}\right)\Bigg]$$</span> and then ...</p>
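<p>A numerical check of the leading term for $\theta=1$, the case the OP experimented with (a sketch): the asymptotics predict $a_n\sqrt n \to \frac{(2\theta+1)^{3/2}}{3\theta+1}=\frac{3^{3/2}}{4}\approx 1.2990$, so $a_n \to 0$ like $n^{-1/2}$.</p>

```python
import math

def a(n, theta=1):
    """The ratio a_n for integer theta (exact integer sums, then one float division)."""
    num = sum(i ** (3 * theta) for i in range(1, n + 1))
    den = sum(i ** (2 * theta) for i in range(1, n + 1)) ** 1.5
    return num / den

theta = 1
limit_constant = (2 * theta + 1) ** 1.5 / (3 * theta + 1)  # (2θ+1)^{3/2} / (3θ+1)

# a_n * sqrt(n) should approach the constant as n grows.
n = 100_000
scaled = a(n, theta) * math.sqrt(n)
```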
492,125
<p>How many different equivalence relations can be defined on a set of five elements?</p>
André Nicolas
6,312
<p>We have $5$ people, and want to find the number of ways to break them up into groups (equivalence classes). With a number significantly bigger than $5$, it would be useful to develop some theory (and there is such theory). With $5$, we just count in a systematic way.</p> <p>(i) One big happy family, everybody equivalent to everybody else. Clearly there is $1$ way to do this.</p> <p>(ii) One loner, and four friends. The loner can be chosen in $\binom{5}{1}$ ways, giving $5$ ways.</p> <p>(iii) A group of three friends, and a group of two friends. The group of three can be chosen in $\binom{5}{3}$ ways, and then the other group is determined.</p> <p>(iv) Three, one, one. You can handle this.</p> <p>(v) Two, two, one. This one is tricky, it is easy to get the wrong count. The loner can be chosen in $5$ ways. For each of these ways, the remaining four can be broken up into two teams of two in $3$ ways, for a total of $15$. To see this, suppose the four people are A, B, C, D. Then A can be teamed with any of the $3$ others.</p> <p>(vi) Two, one, one, one. You can handle this.</p> <p>(vii) Everybody by herself. Easy. </p> <p>Add up. </p>
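<p>The total can be double-checked by generating all set partitions of a $5$-element set (a sketch; partitions correspond exactly to equivalence relations):</p>

```python
def set_partitions(elements):
    """Yield every partition of `elements` (a list) into nonempty blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partition in set_partitions(rest):
        yield [[first]] + partition            # `first` as a new singleton block
        for i in range(len(partition)):        # or joined to an existing block
            yield partition[:i] + [[first] + partition[i]] + partition[i + 1:]

num_relations = sum(1 for _ in set_partitions([1, 2, 3, 4, 5]))
case_total = 1 + 5 + 10 + 10 + 15 + 10 + 1     # cases (i) through (vii) above
```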
4,496,772
<p>Let <span class="math-container">$V$</span> be a vector space of dimension <span class="math-container">$n$</span> over a finite field <span class="math-container">$F$</span> with <span class="math-container">$q$</span> elements, and let <span class="math-container">$U\subseteq V$</span> be a subspace of dimension <span class="math-container">$k$</span>. How many subspaces <span class="math-container">$W\subseteq V$</span> of dimension <span class="math-container">$m$</span> such that <span class="math-container">$U\subseteq W $</span> do we have <span class="math-container">$(k\leq m\leq n)$</span>?</p> <p><strong>Hint</strong>: look at the set: <span class="math-container">$$ \{ (U,W) \mid U \subseteq W \subseteq V; \, \dim(U)=k, \, \dim(W)=m \} $$</span></p> <p>I have no idea how to use the hint. I know that the number of subspaces of dimension <span class="math-container">$k$</span> is: <span class="math-container">$$ \frac{\prod_{i=0}^{k-1} (q^{n-i}-1)}{\prod_{i=0}^{k-1} (q^{k-i}-1)} $$</span> and I don't know how to proceed from here.</p>
Mike Earnest
177,399
<p>Cpc's answer is fantastic, but here is a more high-level presentation. The fourth isomorphism theorem (the <a href="https://en.wikipedia.org/wiki/Correspondence_theorem" rel="nofollow noreferrer">correspondence theorem</a>) says that the subspaces of <span class="math-container">$V$</span> which contain <span class="math-container">$U$</span> are in bijection with subspaces of <span class="math-container">$V/U$</span>. Therefore, the number of <span class="math-container">$W$</span> with <span class="math-container">$U\subseteq W\subseteq V$</span> and <span class="math-container">$\dim W=m$</span> is the same as the number of subspaces of <span class="math-container">$V/U$</span> with dimension <span class="math-container">$m-k$</span>, which is just the <span class="math-container">$q$</span>-binomial coefficient <span class="math-container">$\binom{n-k}{m-k}_q$</span>.</p>
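<p>The $q$-binomial count can be cross-checked for small parameters by brute force over $\mathbb F_2$ (a sketch; function names are mine):</p>

```python
from itertools import combinations

def gaussian_binomial(n, k, q):
    """Number of k-dimensional subspaces of an n-dimensional space over GF(q)."""
    num = den = 1
    for i in range(k):
        num *= q ** (n - i) - 1
        den *= q ** (k - i) - 1
    return num // den

def count_subspaces_gf2(n, m):
    """Brute force over GF(2): vectors are n-bit masks, vector addition is XOR."""
    spans = set()
    for candidate in combinations(range(1, 2 ** n), m):
        span = {0}
        for v in candidate:
            span |= {s ^ v for s in span}
        if len(span) == 2 ** m:               # the candidate tuple was independent
            spans.add(frozenset(span))
    return len(spans)
```

<p>Consistently with the answer, the number of planes in $\mathbb F_2^4$ containing a fixed line should then be $\binom{4-1}{2-1}_2 = 7$.</p>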
4,496,772
<p>Let <span class="math-container">$V$</span> be a vector space of dimension <span class="math-container">$n$</span> over a finite field <span class="math-container">$F$</span> with <span class="math-container">$q$</span> elements, and let <span class="math-container">$U\subseteq V$</span> be a subspace of dimension <span class="math-container">$k$</span>. How many subspaces <span class="math-container">$W\subseteq V$</span> of dimension <span class="math-container">$m$</span> such that <span class="math-container">$U\subseteq W $</span> do we have <span class="math-container">$(k\leq m\leq n)$</span>?</p> <p><strong>Hint</strong>: look at the set: <span class="math-container">$$ \{ (U,W) \mid U \subseteq W \subseteq V; \, \dim(U)=k, \, \dim(W)=m \} $$</span></p> <p>I have no idea how to use the hint. I know that the number of subspaces of dimension <span class="math-container">$k$</span> is: <span class="math-container">$$ \frac{\prod_{i=0}^{k-1} (q^{n-i}-1)}{\prod_{i=0}^{k-1} (q^{k-i}-1)} $$</span> and I don't know how to proceed from here.</p>
JBL
1,080,305
<p>The point of the hint is to count the elements of that set: given what you already know, it is easy to do it by first choosing <span class="math-container">$W$</span> then choosing <span class="math-container">$U$</span>. Doing it the other way is easy in terms of the number you're trying to compute. And the two enumerations must be the same. This avoids repeating the argument that you used to establish the subspace counting formula in the first place.</p>
3,372,952
<blockquote> <p>I know that all the subspaces of <span class="math-container">$\mathbb{R}^{3}$</span> over <span class="math-container">$\mathbb{R}$</span> are:</p> <ul> <li><p>Zero subspace.</p></li> <li><p>Lines passing through origin.</p></li> <li><p>Planes passing through origin.</p></li> <li><p><span class="math-container">$\mathbb{R}^{3}$</span> itself.</p></li> </ul> <p>But how to prove these are the only subspaces ? </p> </blockquote> <p>I tried in the following way:</p> <p>If the subspace <span class="math-container">$S$</span> is zero, then done. So, assume a non-zero element <span class="math-container">$(x,y,z) \in S $</span>; then for any <span class="math-container">$\alpha \in \mathbb{R}, ~ \alpha(x,y,z) \in S.$</span> Now I need to prove that the set <span class="math-container">$$ S= \lbrace \alpha(x,y,z) : \alpha \in \mathbb{R} \rbrace $$</span> will be a line passing through the origin, and if there exists an element <span class="math-container">$ (x_{1},y_{1},z_{1}) \ne \alpha(x,y,z) $</span>, then we can generate a plane passing through the origin, and otherwise <span class="math-container">$\mathbb{R}^{3}.$</span> </p>
user
505,767
<p>We have considered all the possible dimensions of subspaces contained in <span class="math-container">$\mathbb{R}^3$</span>; therefore there is no other subspace to consider.</p> <p>It's a <a href="https://en.wikipedia.org/wiki/Proof_by_exhaustion" rel="nofollow noreferrer">proof by exhaustion</a>.</p>
4,381,737
<p><span class="math-container">$\mathbb{Z}[i] = \{a + bi : a, b \in \mathbb{Z}\}$</span> are the Gaussian integers. I want to show that there is no ideal <span class="math-container">$I \subseteq \mathbb{Z}[i]$</span> such that the quotient <span class="math-container">$\mathbb{Z}[i]/I$</span> is a field of size <span class="math-container">$3$</span> (i.e., is <span class="math-container">$\mathbb{F}_3$</span>).</p> <p>Here's my approach:</p> <p>Since <span class="math-container">$\mathbb{Z}[i]/I$</span> is a field, this would make <span class="math-container">$I$</span> a prime/maximal ideal in <span class="math-container">$\mathbb{Z}[i]$</span>. Moreover, in the reduction <span class="math-container">$\mathbb{Z}[i] \rightarrow \mathbb{Z}[i]/I = F_3$</span>, since <span class="math-container">$1 \mapsto 1$</span>, we would use homomorphism properties to see that <span class="math-container">$3 \mapsto 0$</span>, which is just a complicated way of saying that <span class="math-container">$3 \in I$</span>.</p> <p>Now <span class="math-container">$(3) \subseteq I$</span>. But by Fermat's sum of squares theorem, since <span class="math-container">$3 \equiv 3$</span> mod <span class="math-container">$4$</span>, we know that <span class="math-container">$3$</span> is a prime in <span class="math-container">$\mathbb{Z}[i]$</span> and thus maximal as well. This would mean that <span class="math-container">$I = (3)$</span>.</p> <p>Now, we obtain a contradiction in that <span class="math-container">$\mathbb{Z}[i]/(3)$</span> has size <span class="math-container">$9$</span>, not <span class="math-container">$3$</span> (i.e., its elements are of the form <span class="math-container">$a + bi$</span> with <span class="math-container">$0 \le a, b &lt; 3$</span>).</p> <p>This concludes the proof.</p> <p>Does this look correct? Is there another way to do it without wielding heavy results like Fermat's sum of squares?</p>
lhf
589
<p>Your proof is fine.</p> <p>Alternatively, <span class="math-container">$\mathbb{Z}[i]$</span> is a PID and so <span class="math-container">$I=\langle a+bi \rangle$</span>. Then <span class="math-container">$\mathbb{Z}[i]/I \cong \mathbb F_3$</span> implies <span class="math-container">$a^2+b^2=3$</span>, a contradiction.</p>
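<p>Both finite checks used above are trivial to verify mechanically (a sketch):</p>

```python
# Z[i]/(3): residues a + bi with a, b in {0, 1, 2} -- nine of them.
residues = [(a, b) for a in range(3) for b in range(3)]

# a^2 + b^2 = 3 would force |a| <= 1 and |b| <= 1, and those choices
# only realize the norms 0, 1 and 2 -- so no Gaussian integer has norm 3.
small_norms = {a * a + b * b for a in range(-1, 2) for b in range(-1, 2)}
```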
2,532,327
<p>Expand the following function in Legendre polynomials on the interval $[-1,1]$:</p> <p>$$f(x) = |x|$$</p> <p>The Legendre polynomials $p_n (x)$ are defined by the formula</p> <p>$$p_n (x) = \frac {1}{2^n n!} \frac{d^n}{dx^n}(x^2-1)^n$$</p> <p>for $n=0,1,2,3,...$</p> <p>My attempt:</p> <p>Using the fact that $|x|$ is an even function, we have $$a_0 = \frac {2}{\pi}$$ $$ a_n= \frac {2}{\pi} \int_{-1}^{1}x\cos(nx)\,dx$$</p> <p>Then what is the next step?</p>
PM.
416,252
<p>You are being asked to compute the coefficients $a_n$ in the expansion</p> <p>$$ f(x)=\sum_{n=0}^\infty a_n P_n(x) $$</p> <p>See for example <a href="http://mathworld.wolfram.com/Fourier-LegendreSeries.html" rel="nofollow noreferrer">here</a> for more information.</p> <p>Another very useful reference (in general) is Abramowitz and Stegun's Handbook of Mathematical Functions where you will find chapters on Legendre functions and orthogonal polynomials.</p>
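<p>For this particular $f(x)=|x|$, the coefficients $a_n = \frac{2n+1}{2}\int_{-1}^1 |x|\,P_n(x)\,dx$ can be approximated numerically; the classical values are $a_0=\tfrac12$, $a_2=\tfrac58$, $a_4=-\tfrac3{16}$, with all odd coefficients vanishing by symmetry. A sketch (helper names are mine):</p>

```python
def legendre(n, x):
    """P_n(x) via the recurrence (m+1) P_{m+1} = (2m+1) x P_m - m P_{m-1}."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for m in range(1, n):
        p_prev, p = p, ((2 * m + 1) * x * p - m * p_prev) / (m + 1)
    return p

def coefficient(n, steps=100_000):
    """a_n = (2n+1)/2 * integral of |x| P_n(x) over [-1, 1], midpoint rule."""
    h = 2.0 / steps
    total = 0.0
    for i in range(steps):
        x = -1.0 + (i + 0.5) * h
        total += abs(x) * legendre(n, x) * h
    return 0.5 * (2 * n + 1) * total
```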
769,272
<p>So, I'm trying to solve the wave equation with the Fourier transform, and I'm struggling to figure out how to apply the BC's. Here's the problem I considered:</p> <p>$$\frac{d^2u}{dt^2}=c^2\frac{d^2u}{dx^2}$$ $$u(x,0)=g(x)$$ $$\frac{du}{dt}=0$$ at $t = 0$.</p> <p>Running through the computations, I find that the Fourier transform solution is as follows:</p> <p>$$F(u(\lambda,t))=A(\lambda)e^{ic\lambda t}+B(\lambda)e^{-ic \lambda t}$$</p> <p>How would I apply boundary conditions to this and then transform back to $u$? Any detailed explanation on this would be appreciated (I'm still learning this stuff). I'm thinking it may be easier to work with cosines and sines.</p>
Ted Shifrin
71,348
<p>The first fundamental form determines the Gaussian curvature but definitely does <em>not</em> determine the second fundamental form. Indeed, the first fundamental form can be defined on an abstract surface (say, the hyperbolic plane) that does not even live inside $\Bbb R^3$.</p> <p>You need the Gauss equations in order to compute the Gaussian curvature. That is, you need to calculate the Christoffel symbols and then various combinations of them and their derivatives.</p> <p>Here you go (have fun): \begin{align*} \Gamma_{uu}^u &amp;=\frac{\tfrac12GE_u+F(\tfrac12E_v-F_u)}{EG-F^2} \\ \Gamma_{uu}^v &amp;=\frac{-\tfrac12FE_u+E(F_u-\tfrac12E_v)}{EG-F^2}\\ \Gamma_{uv}^u &amp;=\frac{GE_v-FG_u}{2(EG-F^2)} \\ \Gamma_{uv}^v &amp;=\frac{-FE_v+EG_u}{2(EG-F^2)}\\ \Gamma_{vv}^u &amp;=\frac{G(F_v-\tfrac12G_u)-\tfrac12FG_v}{EG-F^2} \\ \Gamma_{vv}^v &amp;=\frac{F(\tfrac12G_u-F_v)+\tfrac12EG_v}{EG-F^2} \\ \end{align*} and $$EK = \big(\Gamma_{uu}^v\big)_v - \big(\Gamma_{uv}^v\big)_u +\Gamma_{uu}^u\Gamma_{uv}^v+\Gamma_{uu}^v\Gamma_{vv}^v-\Gamma_{uv}^u\Gamma_{uu}^v-\big(\Gamma_{uv}^v\big)^2.$$</p>
72,876
<p>A quiver in representation theory is what is called in most other areas a directed graph. Does anybody know why Gabriel felt that a new name was needed for this object? I am more interested in why he might have felt graph or digraph was not a good choice of terminology than why he thought quiver is a good name. (I rather like the name myself.)</p> <p>On a related note, does anybody know why quiver representations, resp. morphisms of quiver representations, are not commonly defined as functors from the free category on the quiver to the category of finite dimensional vector spaces, resp. natural transformations? </p> <p><b>Added</b> I made this community wiki in case this will garner more responses. </p> <p>My motivation for asking this is that one of my students just defended her thesis, which involved quivers, and the Computer Scientist on the committee remarked that these are normally called directed graphs and using that term might make the thesis appeal to a wider community. Afterwards, some of us were wondering what prompted Gabriel to coin a new term for this concept.</p>
Qiaochu Yuan
290
<p>Here is some speculation. Some abstractions in mathematics can be used to study so many different things that even when you use the same abstraction you might want to give it a different name to indicate what sort of thing you're actually studying. </p> <p>In other words, the name you use declares an <em>intention:</em> when you say "quiver," you're declaring an intention to study quiver representations or quiver varieties, etc. When you say "graph," on the other hand, you might instead be declaring an intention to study algorithms for finding shortest paths or a million other classical graph-theoretic questions.</p> <p>As another example, consider the different things we might call a functor $F : C \to D$:</p> <ul> <li>A "diagram (of shape $C$)." This declares an intent to talk about the limit or colimit of the functor. Here the emphasis is on $D$, or perhaps $F$. </li> <li>A "model," a "representation," a "module," or an "algebra." This declares an intent to study and emphasize $C$, or perhaps $F$, by drawing an analogy to, respectively, models of logical theories, representations of groups, modules over algebras, or algebras over operads. </li> <li>A "presheaf" (when the functor is contravariant and lands in something like $\text{Set}$). This declares a few possible intents, like an eye towards sheafification or a perspective that $F$ should be thought of as a generalized object in $C$. </li> </ul> <p>It's a shame that we don't have mathematical nomenclature explicitly dedicated to declaring intentions like this, but using different terms for the same thing is better than nothing. </p>
2,662,717
<p>Let $(f_k)_{k=m}^\infty$ be a sequence of differentiable functions $f_k:[a,b]\rightarrow R$ whose derivatives are continuous. Suppose there exists a sequence $(M_k)_{k=m}^\infty$ in $R$ with $|f_k'|\le M_k$ for all $x\in X, k\geq m,$ and such that $\sum_{k=m}^\infty M_k$ converges. Assume also that there is some $x_0\in [a,b]$ such that $\sum_{k=m}^\infty f_k(x_0)$ converges. </p> <p>Show that $\sum_{k=m}^\infty f_k$ converges uniformly to a differentiable function $f:[a,b]\rightarrow R$ and that $f'(x)=\sum_{k=m}^\infty f_k'(x)$ for all $x\in [a,b]$.</p> <p>$Remark:$ So you are showing that under these assumptions, </p> <p>$\frac{d}{dx}\sum_{k=m}^\infty f_k= \sum_{k=m}^\infty \frac{d}{dx} f_k$</p> <p>$Hint:$ Combine the Weierstrass M-test with Theorem $3.7.1$: Let $[a, b]$ be an interval, and for every integer $n ≥ 1$, let $f_n : [a, b] → R$ be a differentiable function whose derivative $f_n' : [a, b] → R$ is continuous. Suppose that the derivatives $f_n'$ converge uniformly to a function $g : [a, b] → R$. Suppose also that there exists a point $x_0$ such that the limit lim$_{n→∞} f_n (x_0)$ exists. Then the functions $f_n$ converge uniformly to a differentiable function $f$, and the derivative of $f$ equals $g$. </p>
Robert Stuckey
440,720
<p>The trick I've learned for double induction is to do your base case $m=0$ and $n=0$ and then fix $n$ to an arbitrary value (I usually start by fixing the second one), and then do induction on $m$. Then fix $m$ to an arbitrary value and do induction on $n$. Once you've done this you know that for any value of $n$ your claim holds for all $m$ and for any one value of $m$ your claim holds for all $n$. This should be sufficient to show that your claim holds for every $(m,n)$ pair.</p>
2,662,717
<p>Let $(f_k)_{k=m}^\infty$ be a sequence of differentiable functions $f_k:[a,b]\rightarrow R$ whose derivatives are continuous. Suppose there exists a sequence $(M_k)_{k=m}^\infty$ in $R$ with $|f_k'|\le M_k$ for all $x\in [a,b], k\geq m,$ and such that $\sum_{k=m}^\infty M_k$ converges. Assume also that there is some $x_0\in [a,b]$ such that $\sum_{k=m}^\infty f_k(x_0)$ converges. </p> <p>Show that $\sum_{k=m}^\infty f_k$ converges uniformly to a differentiable function $f:[a,b]\rightarrow R$ and that $f'(x)=\sum_{k=m}^\infty f_k'(x)$ for all $x\in [a,b]$.</p> <p>$Remark:$ So you are showing that under these assumptions, </p> <p>$\frac{d}{dx}\sum_{k=m}^\infty f_k= \sum_{k=m}^\infty \frac{d}{dx} f_k$</p> <p>$Hint:$ Combine the Weierstrass M-test with Theorem $3.7.1$: Let $[a, b]$ be an interval, and for every integer $n ≥ 1$, let $f_n : [a, b] → R$ be a differentiable function whose derivative $f_n' : [a, b] → R$ is continuous. Suppose that the derivatives $f_n'$ converge uniformly to a function $g : [a, b] → R$. Suppose also that there exists a point $x_0$ such that the limit lim$_{n→∞} f_n (x_0)$ exists. Then the functions $f_n$ converge uniformly to a differentiable function $f$, and the derivative of $f$ equals $g$. </p>
N. Shales
259,568
<p>The process below is quite revealing if you bear in mind the Fibonacci sequence </p> <p>$$\begin{array}{ccccccc}F_1&amp;F_2&amp;F_3&amp;F_4&amp;F_5&amp;F_6&amp;\ldots\\1&amp;1&amp;2&amp;3&amp;5&amp;8&amp;\ldots\end{array}$$ </p> <p>and focus on the coefficients on the right hand sides:</p> <p>$$\begin{align} F_k&amp;=1F_{k-2}+1F_{k-1}\tag{write $F_{k-1}=F_{k-3}+F_{k-2}$}\\ &amp;=1F_{k-3}+2F_{k-2}\tag{write $F_{k-2}=F_{k-4}+F_{k-3}$}\\ &amp;=2F_{k-4}+3F_{k-3}\tag{write $F_{k-3}=F_{k-5}+F_{k-4}$}\\ &amp;=3F_{k-5}+5F_{k-4}\tag{write $F_{k-4}=F_{k-6}+F_{k-5}$}\\ &amp;=5F_{k-6}+8F_{k-5}\tag{etc.}\\&amp; \phantom{a}\vdots\end{align}$$</p> <p>It is natural to conjecture:</p> <p>$$F_k=c_mF_{k-m}+d_{m+1}F_{k-m+1}$$</p> <p>with $c_m=F_{m-1}$ and $d_{m+1}=c_{m+1}=F_m$ so that:</p> <p>$$F_{k}=F_{m-1}F_{k-m}+F_mF_{k-m+1}$$</p> <p>But not only that, we can see straight away from this process how to use induction to show that the coefficients $c_m$ obey the Fibonacci recurrence. </p> <p>If we finally define $n$ as $k=n+m$ then we get your result:</p> <p>$$F_{n+m}=F_{m-1}F_n+F_mF_{n+1}\tag*{}$$</p>
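A quick numerical sanity check of the identity above (a sketch; the function name `fib` and the ranges tested are my own choices, not part of the argument):

```python
# Check F_{n+m} = F_{m-1} * F_n + F_m * F_{n+1} for 1-indexed
# Fibonacci numbers F_1 = F_2 = 1 over a small range of n and m.
def fib(k):
    a, b = 1, 1  # F_1, F_2
    for _ in range(k - 1):
        a, b = b, a + b
    return a

checks = [
    fib(n + m) == fib(m - 1) * fib(n) + fib(m) * fib(n + 1)
    for n in range(1, 12)
    for m in range(2, 12)  # m >= 2 keeps the index m-1 at least 1
]
# all(checks) is True
```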
2,294,321
<blockquote> <p>Suppose $F(x)$ is a continuously differentiable (that is, first derivatives exist and are continuous) vector field in $\mathbb{R}^3$ that satisfies the bound $$|F(x)| \leq \frac{1}{1 + |x|^3}$$ Show that $\int\int\int_{\mathbb{R}^3} \text{div} F dx = 0$.</p> </blockquote> <p>Attempted proof - Suppose $F(x)$ is a continuously differentiable vector field in $\mathbb{R}^3$ such that $$|F(x)| \leq \frac{1}{1 + |x|^3}$$ From the <a href="http://nptel.ac.in/courses/122101003/downloads/Lecture-43.pdf" rel="nofollow noreferrer">definition of vector field</a> we have $F:D\subseteq \mathbb{R}^3\to \mathbb{R}^3$ where $D$ is an open subset of $\mathbb{R}^3$. For every $x\in \mathbb{R}^3$, we can write $$F(x) = F_1(x) i + F_2(x) j + F_3(x) k$$ where $ i,j,k$ are the unit vectors. Since $F(x)$ is continuously differentiable, so are the component functions. </p> <p>I am a bit lost on trying to go from here. I know that $$\text{div} F = \nabla \cdot F = \left(\frac{\partial}{\partial x},\frac{\partial}{\partial y},\frac{\partial}{\partial z}\right)\cdot \left(F_1,F_2,F_3\right) = \frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y} + \frac{\partial F_3}{\partial z}$$ I think I need to incorporate the bound on $|F(x)|$ in order to show the desired result. Perhaps this is best done using an epsilon-delta sort of proof but I am not sure. Any suggestions are greatly appreciated.</p> <p>Attempted proof 2 - Let $\{B(0,n)\}_{n=1}^{\infty}$ be a sequence of balls. Since $F(x)$ is a continuously differentiable vector field, its components are also continuously differentiable. We know that the surface area of $\partial B(0,n)$ is $4\pi n^2$ and $$|F|\leq \frac{1}{1 + n^3}$$ Thus from the divergence theorem we have $$\int\int\int_{\mathbb{R}^3}\text{div}F dx = \int\int_{B(0,n)}F \times 0 = 0$$</p> <p>Attempted proof 3 - Let $\{B(0,n)\}_{n=1}^{\infty}$ be a sequence of balls. 
Since $F(x)$ is a continuously differentiable vector field, its components are also continuously differentiable. We know that the surface area of $\partial B(0,n)$ is $4\pi n^2$ and $$|F|\leq \frac{1}{1 + n^3}$$ Let $S$ be the boundary surface of $B(0,n)$ with positive orientation; then from the divergence theorem $$\iint_{S}F dS = \iiint_{B(0,n)}F dB(0,n) $$ Thus if we let $n\to \infty$ we see that $$\iiint_{B(0,n)}F dB(0,n) = 0$$</p> <p>Attempted proof 4 - Let $\{B(0,n)\}_{n=1}^{\infty}$ be a sequence of balls. Since $F(x)$ is a continuously differentiable vector field, so are its components. Let $S$ be the boundary surface of $B(0,n)$ with positive orientation. We know that the surface area of $\partial B(0,n)$ is $4\pi n^2$ and that $$|F|\leq \frac{1}{1 + n^3}$$ Thus if we apply the divergence theorem on the sequence of balls then we have $$\Big|\int_{\mathbb{R}^3}\text{div}F\,dx\Big| = \lim_n\Big|\int_{B(0,n)}\text{div}F\,dx\Big| = \lim_n\Big|\int_{\partial B(0,n)}F \cdot \nu\,dS\Big| \le \lim_n\frac{4\pi n^2}{1 + n^3} = 0.$$ Thus the result follows.</p>
Gio67
355,873
<p>Apply the divergence theorem on a sequence of balls $B(0,n)$ where $n\to\infty$ and use the bound on $F$ to bound the boundary integral $\int_{\partial B(0,n)}F\cdot \nu\,dS$ </p>
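A numeric sketch of the estimate this hint suggests (the helper name `flux_bound` is mine): the boundary integral over $\partial B(0,n)$ is at most the surface area of the sphere times the pointwise bound on $|F|$, and that product vanishes as $n\to\infty$.

```python
import math

# |boundary integral| <= area(sphere of radius n) * max |F|
#                     <= 4*pi*n^2 / (1 + n^3)  ->  0 as n -> infinity
def flux_bound(n):
    return 4 * math.pi * n**2 / (1 + n**3)

bounds = [flux_bound(n) for n in (10, 100, 1000, 10**6)]
# the bounds shrink roughly like 4*pi/n
```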
702,879
<p>I am studying exponentials from MacLane-Moerdijk's book, "Sheaves in geometry and Logic". I do not understand the following: Induced by the product-exponent adjunction, consider the bijection $$\hom(Y\times X,Z)\to\hom(Y,Z^X)\;\;\;\;\;\;(\star)$$ They say: The existence of the above adjunction can be stated in elementary terms (i.e., without using Hom-sets). For, set $Y=Z^X$ in $(\star)$; The identity arrow $1:Z^X\to Z^X$ on the right in $(\star)$ then corresponds, under the adjunction, to an arrow $e:Z^X\times X\to Z$. <em>The bijection $f\mapsto f'$ of $(\star)$, by naturality, now becomes the statement that to each $f:Y\times X\to Z$ there is a unique $f':Y\to Z^X$ such that the diagram</em> </p> <p><img src="https://i.stack.imgur.com/xGHJ4.jpg" alt="enter image description here"></p> <p><em>commutes</em>.</p> <p>I do not understand how the definition of naturality gives the above statement. Naturality, if I am not mistaken, says (naturality in $Y$, if this is what is meant) that if $$\alpha_Y:\hom(Y\times X,Z)\to\hom(Y,Z^X) $$ then for all $g:Y\to W$,$$(-\circ g)\circ \alpha_W=\alpha_Y\circ(-\circ (g\times1)) $$ How does that give the statement? I am sure I am doing something wrong. Can anyone clarify, please?</p>
ye23der
133,737
<p>Ok, I believe I have understood this. Can someone check if this is a correct way of viewing it?</p> <p>Let <span class="math-container">$\phi_Y \colon \hom(Y, Z^X) \to \hom(Y \times X, Z)$</span> be the bijection. The identity <span class="math-container">$1 \colon Z^X \to Z^X$</span> goes to an arrow <span class="math-container">$e \colon Z^X \times X \to Z$</span>. By naturality, for any <span class="math-container">$f' \colon Y \to Z^X$</span> the following diagram: <span class="math-container">$$ \require{AMScd} \begin{CD} \hom(Z^X, Z^X) @&gt;{\phi_{Z^X}}&gt;&gt; \hom(Z^X \times X, Z) \\ @VVV @VVV \\ \hom(Y, Z^X) @&gt;{\phi_Y}&gt;&gt; \hom(Y \times X, Z) \end{CD} $$</span> commutes. Particularly, we have: <span class="math-container">$$ \require{AMScd} \begin{CD} 1_{Z^X} @&gt;{\phi_{Z^X}}&gt;&gt; e \\ @VVV @VVV \\ f' @&gt;{\phi_Y}&gt;&gt; \phi_Y(f') = e\circ(f' \times 1) \end{CD} $$</span> But, since <span class="math-container">$\phi_Y$</span> is a bijection, each <span class="math-container">$f \colon Y \times X \to Z$</span> has the form <span class="math-container">$f = e \circ (f' \times 1)$</span> for a unique <span class="math-container">$f'$</span>, as required.</p>
3,288,651
<p>I have a problem understanding a proof about ideals, which states that every ideal in the integers can be generated by a single integer. And with that I realized that I also don't really understand ideals in general and the intuition behind them. </p> <p>So let me start by the definition of an ideal. For <span class="math-container">$a, b \in \mathbb{Z}$</span>, the ideal generated by <span class="math-container">$a$</span> is the set <span class="math-container">$ (a) := \{ua : u \in \mathbb{Z}\} $</span> while the ideal generated by <span class="math-container">$a$</span> and <span class="math-container">$b$</span> is the set <span class="math-container">$(a, b) := \{ua + vb : u,v \in \mathbb{Z}\}$</span>. Here comes my first question: Are those "multiples" of the generators (<span class="math-container">$u$</span> and <span class="math-container">$v$</span>) all possible integers? Or does this apply only to a specific amount of predefined integers? </p> <p>And now comes the proof in question. I added the questions in parenthesis where I had problems following: </p> <p>The lemma states that for <span class="math-container">$a, b \in \mathbb{Z}$</span> (not both 0), <span class="math-container">$ \exists d \in \mathbb{Z}: (a,b) = (d) $</span>. This means in my understanding that every ideal in the integers, no matter how many integers were used to generate it, can be generated only by a single integer. </p> <p><em>Proof</em>: The set <span class="math-container">$(a,b)$</span> must contain some positive numbers (why? The definition of the ideal doesn't state that). By the well-ordering principle, we know that those positive numbers must have a smallest positive number. Let <span class="math-container">$d$</span> be that number. Because <span class="math-container">$d \in (a,b)$</span>, every multiple of <span class="math-container">$d$</span> must also be in <span class="math-container">$(a,b) $</span> (why? 
Is there any definition or lemma or theorem that states that?). Therefore, we have <span class="math-container">$(d) \subseteq (a,b)$</span>. And now to prove the other side <span class="math-container">$\supseteq$</span>: For any <span class="math-container">$c \in (a,b) $</span> , <span class="math-container">$\exists q,r $</span> (are those elements of the integers or of the set <span class="math-container">$(a,b)$</span>? and do any restrictions apply to <span class="math-container">$q$</span>?) where <span class="math-container">$0 \leq r &lt; d$</span> such that <span class="math-container">$c = qd + r$</span> (as far as my understanding goes, this comes from the fact that any integer can be divided by another integer yielding a remainder). Since both <span class="math-container">$c$</span> and <span class="math-container">$d$</span> are in <span class="math-container">$(a,b)$</span>, so is <span class="math-container">$r=c−qd$</span> . Since <span class="math-container">$0≤r&lt;d$</span> and <span class="math-container">$d$</span> is (by assumption) the smallest positive element in <span class="math-container">$(a, b)$</span>, we must have <span class="math-container">$r = 0$</span>. Thus <span class="math-container">$ c = qd ∈ (d)$</span> (how did we conclude that last step?). </p> <p>Thank you for the clarifications. </p>
Anurag A
68,092
<p>An ideal <span class="math-container">$I$</span> of <span class="math-container">$\Bbb{Z}$</span> generated by <span class="math-container">$a,b \in \Bbb{Z}$</span> consists of all possible integer linear combinations of <span class="math-container">$a$</span> and <span class="math-container">$b$</span> (<em>similar</em> to the notion of a span in a vector space). As an example, <span class="math-container">$a,b, a+2b, -a+3b, 5a-7b, \ldots \in \langle a,b\rangle$</span>. Thus, <span class="math-container">$$\langle a,b\rangle=\{ax+by \, | \, x,y \in \Bbb{Z}\}.$$</span></p> <p><strong>Q1).</strong> The reason <span class="math-container">$\langle a,b\rangle$</span> must contain a positive integer (assuming at least one of <span class="math-container">$a$</span> or <span class="math-container">$b$</span> is nonzero) is say if <span class="math-container">$a\neq 0$</span>, then either <span class="math-container">$a&gt;0$</span> or <span class="math-container">$a&lt;0$</span>. If <span class="math-container">$a&gt;0$</span>, then we already have <span class="math-container">$a \in \langle a,b\rangle$</span>, otherwise <span class="math-container">$-a \in \langle a,b\rangle$</span> will give us a positive element.</p> <p><strong>Q2).</strong> If <span class="math-container">$d \in \langle a,b\rangle$</span>, this means <span class="math-container">$\exists x,y \in \Bbb{Z}$</span> such that <span class="math-container">$d=ax+by$</span>. Consequently <span class="math-container">$nd=a(nx)+b(ny)$</span>, which is a linear combination of <span class="math-container">$a$</span> and <span class="math-container">$b$</span>. Thus <span class="math-container">$nd \in \langle a,b\rangle$</span>. 
</p> <p><strong>Q3).</strong> The division algorithm states that for <span class="math-container">$c,d \in \Bbb{Z}$</span> with <span class="math-container">$d \neq 0$</span>, <span class="math-container">$\exists$</span> integers <span class="math-container">$q$</span> (quotient) and <span class="math-container">$r$</span> (remainder) such that <span class="math-container">$c=dq+r$</span> with <span class="math-container">$0 \leq r &lt; d$</span>. There are no other restrictions on <span class="math-container">$q$</span>.</p> <p><strong>Q4).</strong> Since <span class="math-container">$r=c-dq$</span>, we already have <span class="math-container">$c,d \in \langle a,b\rangle$</span> so by closure under ring operations <span class="math-container">$r \in \langle a,b\rangle$</span>. If <span class="math-container">$r&gt;0$</span>, then we have a positive integer <span class="math-container">$r \in \langle a,b\rangle$</span> which is <strong>smaller</strong> than <span class="math-container">$d$</span>. This violates the fact that <span class="math-container">$d$</span> was the least positive integer in <span class="math-container">$\langle a,b\rangle$</span>. Thus the only possibility is that <span class="math-container">$r=0$</span>. This means <span class="math-container">$c=dq+0=dq$</span>. Since the ideal generated by <span class="math-container">$d$</span> contains all multiples of <span class="math-container">$d$</span>, therefore <span class="math-container">$c \in \langle d \rangle$</span>. This proves that <span class="math-container">$\langle a,b\rangle \subseteq \langle d \rangle$</span>.</p>
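A small computational illustration of the lemma itself (the numbers $a=12$, $b=18$ and the window are arbitrary choices of mine): within a window of integers, the set $\{ax+by\}$ coincides exactly with the multiples of $d=\gcd(a,b)$.

```python
from math import gcd

# Illustration: the ideal generated by a and b equals the ideal
# generated by d = gcd(a, b), at least within a finite window.
a, b = 12, 18
d = gcd(a, b)  # here d = 6
window = range(-60, 61)
combos = {a * x + b * y for x in range(-30, 31) for y in range(-30, 31)}
ideal_ab = {n for n in window if n in combos}
ideal_d = {n for n in window if n % d == 0}
# ideal_ab == ideal_d
```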
3,537,297
<p>I am asked the following question (<strong>non-calculator</strong>):</p> <blockquote> <p>Solve for <span class="math-container">$0 \leq x \leq 2 \pi$</span></p> <p><span class="math-container">$$4 \sin x =\sqrt{3} \csc x + 2 - 2\sqrt{3} $$</span> <strong>Answer:</strong> <span class="math-container">$x = \frac{\pi}{6}, \frac{5 \pi}{6}, \frac{4\pi}{3}, \frac{5\pi}{3}$</span></p> </blockquote> <p>My solution consists of multiplying both sides by <span class="math-container">$\sin x$</span> but I get stuck on a quadratic equation:</p> <p><a href="https://i.stack.imgur.com/f17zV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/f17zV.png" alt="enter image description here"></a></p> <p>How can I proceed?</p> <p>Thank you.</p>
Quanto
686,284
<p>Hint:</p> <p>Simplify your solutions with <span class="math-container">$\sqrt{4+2\sqrt3} = 1+\sqrt3$</span>. Therefore <span class="math-container">$\sin x= \frac12,\&gt; -\frac{\sqrt3}2$</span>.</p>
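A numerical check of this hint (a sketch; the helper `residual` is my own name): the simplification $\sqrt{4+2\sqrt3}=1+\sqrt3$ holds, and the four stated solutions do satisfy the original equation.

```python
import math

# Check the radical simplification sqrt(4 + 2*sqrt(3)) = 1 + sqrt(3)
lhs = math.sqrt(4 + 2 * math.sqrt(3))
rhs = 1 + math.sqrt(3)

# Residual of the original equation 4 sin x = sqrt(3) csc x + 2 - 2 sqrt(3)
def residual(x):
    return 4 * math.sin(x) - (math.sqrt(3) / math.sin(x) + 2 - 2 * math.sqrt(3))

xs = [math.pi / 6, 5 * math.pi / 6, 4 * math.pi / 3, 5 * math.pi / 3]
# residual(x) is ~0 for each stated solution
```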
221,667
<p>I'm taking a second course in linear algebra. Duality was discussed in the early part of the course. But I don't see any significance of it. It seems to be an isolated topic, and it hasn't been mentioned anymore. So what's exactly the point of duality?</p>
Tabes Bridges
36,394
<p>Expanding on Hew Wolff's comment: in algebraic topology, a major theme is <em>functoriality</em>, which roughly boils down to the idea that when we associate algebraic objects to topological spaces, we should similarly obtain any maps between the algebraic objects from maps between topological spaces so that the algebra truly reflects the topology.</p> <p>Homology attaches a graded sequence of (additive) abelian groups $H_*(X)$ to a space $X$, and associates a map between such groups $f_*:H_*(X)\to H_*(Y)$ to a continuous map $f:X\to Y$. If $X$ is an $n$-dimensional manifold, the sequence $H_*(X)$ contains a (potentially) nonzero graded piece $H_i(X)$ for each $i=0,...,n$ which tells us, roughly, the number of $i$-dimensional holes in $X$. If we appeal to some intuition from linear algebra, where we can multiply together two vector spaces of dimensions $m$ and $n$ to obtain a vector space of dimension $m+n$, this idea of functoriality motivates us to ask if we can somehow multiply lower-dimensional information in homology to obtain higher-dimensional information.</p> <p>To obtain such a product map $H_m(X)\times H_n(X)\to H_{n+m}(X)$, we would need a continuous map $X\times X\to X$; unfortunately, the only halfway decent choices we have <em>a priori</em> are the projection maps onto factors, which obviously throw away a good deal of information. On the other hand, the dual construction of cohomology has a product map $H^m(X)\times H^n(X)\to H^{m+n}(X)$ induced by the diagonal embedding $X\to X\times X$ which sends $x\mapsto (x,x)$.</p> <p>Mike, hopefully this is somewhat understandable. I realize that it goes somewhat far afield, but that is somewhat necessary to answer the question. The main thing to understand is that your confusion is due to the artifice of presentation; duality arose in nature before it was formalized; it is really quite ubiquitous. Also, I should note that most of this discussion sticks quite close to Allen Hatcher's book.</p>
2,479,305
<p>I'm studying first-order logic and I saw the textbook list the equality symbol $=$ as a logical symbol.</p> <p>It's not a logical connective symbol, so $=xy$ is an atomic formula. <a href="https://en.wikipedia.org/wiki/Atomic_formula" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Atomic_formula</a></p> <p>However, the textbook explained it this way: "For example, $=v_1v_2$ is an atomic formula, since $=$ is a two-place predicate symbol and...".</p> <p>But $=$ was listed as, and obviously is, a logical symbol (a "superset" of the equality symbol), not a parameter (a "superset" of predicate symbols).</p> <p>How could $=$ be both a logical symbol and a predicate symbol?</p>
Caleb Stanford
68,107
<p>This is a good question. Depending on who you ask, $=$ is either a special logical symbol, or a special predicate symbol. So it can be presented as either one.</p> <p>Syntactically, it behaves like a two-place predicate. Two-place predicates, like $\equiv, &lt;, \le$, etc. create a formula out of any two terms. $=$ is like this: for any terms $t_1$ and $t_2$, "$t_1 = t_2$" is a formula.</p> <p>However, $=$ is a bit special in that it is not just <em>any</em> two-place predicate, but it also obeys special rules. The special rules boil down to two facts:</p> <ol> <li><p>For every term $t$, $t = t$.</p></li> <li><p>For every two terms $t,t'$, if $t = t'$ then $t$ may be replaced by $t'$ (and vice versa) in any term.</p></li> </ol> <p>Because $=$ cannot be interpreted as any two-place predicate, and actually obeys special rules, it is therefore sometimes thought of as a logical symbol instead of as a two-place predicate. However, unlike other logical symbols, it operates on terms instead of formulas.</p> <p>The bottom line is that <strong>the equality symbol is special</strong> and doesn't quite fit in with all the other symbols. It may be thought of as part of the logic, or as a special predicate symbol that satisfies additional logical rules.</p>
777,691
<p>I would like to prove if $a \mid n$ and $b \mid n$ then $a \cdot b \mid n$ for $\forall n \ge a \cdot b$ where $a, b, n \in \mathbb{Z}$</p> <p>I'm stuck.<br> $n = a \cdot k_1$<br> $n = b \cdot k_2$<br> $\therefore a \cdot k_1 = b \cdot k_2$</p> <p>EDIT: so for <a href="http://en.wikipedia.org/wiki/Fizz_buzz" rel="nofollow">fizzbuzz</a> it wouldn't make sense to check to see if a number is divisible by 15 to see if it's divisible by both 3 and 5?</p>
Bill Dubuque
242
<p><strong>Hint</strong> $\,\ a,b\mid n\iff {\rm lcm}(a,b)\mid n\!\!\!\overset{\ \ \ \large \times\, (a,b)}\iff ab\mid n(a,b),\, $ which is not equivalent to $\,ab\mid n\ $ </p>
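A small search illustrating the hint (the ranges are arbitrary choices of mine): $a\mid n$ and $b\mid n$ do not force $ab\mid n$, while the corrected statement $ab\mid n\,(a,b)$ always holds.

```python
from math import gcd

# a | n and b | n do not force a*b | n; the smallest failure is a = b = n = 2.
counterexamples = [
    (a, b, n)
    for n in range(1, 50)
    for a in range(2, n + 1)
    for b in range(2, n + 1)
    if n % a == 0 and n % b == 0 and n % (a * b) != 0
]

# But a*b | n*gcd(a,b) does hold whenever a | n and b | n,
# since a*b = gcd(a,b)*lcm(a,b) and lcm(a,b) | n.
hint_holds = all(
    n * gcd(a, b) % (a * b) == 0
    for n in range(1, 50)
    for a in range(1, n + 1)
    for b in range(1, n + 1)
    if n % a == 0 and n % b == 0
)
```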
1,279,570
<p>I know that if $u\cdot v = 0$ then by definition, $u_1v_1 + u_2v_2 + \cdots + u_nv_n = 0$</p> <p>I also know that if $u$ and $v$ are linearly independent and a matrix $A$ has $u$ and $v$ as its successive column vectors then the equation $Ax = 0$ will have only the trivial solution. </p> <p>However I can't think of anything else that might help me answer this question.</p>
Peter Woolfitt
145,826
<p>You can use the fact that two vectors are linearly dependent iff one is a scalar multiple of the other:</p> <p>For the sake of contradiction suppose $u$ is nonzero and $u=cv$ (i.e. the vectors are dependent). Since $u\ne0$, we must have $c\ne0$, so $v=\frac{1}{c}u$ and $0=u\cdot v=\frac{1}{c}(u_1\cdot u_1+\dots+u_n\cdot u_n)=\frac{1}{c}(u_1^2+\dots+u_n^2)$, but this can only be $0$ if each $u_i^2=0$ (assuming we are working over the reals so that $x^2\ge0$ with equality iff $x=0$). Hence $u=0$, but this is a contradiction. Therefore, the two vectors $u$ and $v$ are in fact linearly independent.</p>
1,279,570
<p>I know that if $u\cdot v = 0$ then by definition, $u_1v_1 + u_2v_2 + \cdots + u_nv_n = 0$</p> <p>I also know that if $u$ and $v$ are linearly independent and a matrix $A$ has $u$ and $v$ as its successive column vectors then the equation $Ax = 0$ will have only the trivial solution. </p> <p>However I can't think of anything else that might help me answer this question.</p>
Math1000
38,584
<p>Assume that $$c_1u + c_2v = 0$$ for some scalars $c_1, c_2$. Then $$0 = \langle u, c_1u + c_2v\rangle = \langle u,c_1u\rangle + \langle u,c_2v\rangle = c_1\langle u,u\rangle + c_2\langle u,v\rangle = c_1\langle u,u\rangle,$$ since $\langle u,v\rangle=0$. Since $u\ne0$, it follows that $c_1=0$. From there we see immediately that $c_2=0$.</p>
1,387,454
<p>What is the sum of all <strong>non-real</strong>, <strong>complex roots</strong> of this equation -</p> <p>$$x^5 = 1024$$</p> <p>Also, please provide explanation about how to find sum all of non real, complex roots of any $n$ degree polynomial. Is there any way to determine number of real and non-real roots of an equation?</p> <hr> <p>Please not that I'm a high school freshman (grade 9). So please provide simple explanation. Thanks in advance!</p>
Américo Tavares
752
<ul> <li>Since $x^{5}=1024$ implies that $x=\sqrt[5]{1024}=4=x_0$ is one of the roots of $x^{5}-1024=0$, it follows that $$p(x)=x^{5}-1024=(x-4)q(x), $$ where $q(x)$ is a fourth degree polynomial with real coefficients and leading term equal to $x^{4}$. Hence it is of the form $$q(x)=x^{4}+bx^{3}+cx^{2}+dx+e. $$</li> <li>The coefficients $b,c,d, e$ can be found by <a href="https://en.m.wikipedia.org/wiki/Polynomial_long_division" rel="nofollow">polynomial long division</a> (see below<sup>1</sup>) or by <a href="https://en.wikipedia.org/wiki/Ruffini%27s_rule" rel="nofollow"><em>Ruffini's Rule</em></a>. Using the latter to compute $$\frac{p (x)}{x-{\color{blue}4}}=q (x), $$ i.e. \begin{array}{c|rrrrrl} &amp; x^{5} &amp; x^{4} &amp; x^{3} &amp; x^{2} &amp; x^{1} &amp; \phantom{-} x^{0} \\ &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; -1024\\ x_0= {\color{blue}4} &amp; \downarrow &amp; 4 &amp; 16 &amp; 64 &amp; 256 &amp;\phantom{-} 1024\\ \hline &amp; 1 &amp; 4 &amp; 16 &amp; 64 &amp; 256 &amp; \phantom{-102} {\color{blue}0}=\text{remainder}\\ &amp; x^{4} &amp; x^{3} &amp; x^{2} &amp; x^{1} &amp; x^{0} \end{array} yields \begin{equation*} q(x)=x^{4}+4x^{3}+16x^{2}+64x+256. \end{equation*} The polynomial $q(x)$ has four roots we denote as $x_{1},x_{2},x_{3},x_{4}$. 
Since the coefficient of $x^{4}$ is $1$, we know that it can be written as \begin{eqnarray*} q(x) &amp;=&amp;\left( x-x_{1}\right) \left( x-x_{2}\right) \left( x-x_{3}\right) \left( x-x_{4}\right) \\ &amp;=&amp;x^{4}-\left( x_{1}+x_{2}+x_{3}+x_{4}\right) x^{3} \\ &amp;&amp;+\left( x_{3}x_{4}+x_{2}x_{4}+x_{2}x_{3}+x_{1}x_{4}+x_{1}x_{3}+x_{1}x_{2}\right) x^{2} \\ &amp;&amp;-\left( x_{1}x_{2}x_{3}+x_{1}x_{2}x_{4}+x_{1}x_{3}x_{4}+x_{2}x_{3}x_{4}\right) x+x_{1}x_{2}x_{3}x_{4} \end{eqnarray*}</li> <li>Now by comparing coefficients of $x^{3}$ we see that \begin{equation*} -b=x_{1}+x_{2}+x_{3}+x_{4}=-4, \end{equation*} which is just <a href="https://en.wikipedia.org/wiki/Vieta%27s_formulas" rel="nofollow">Vieta's formula</a> for the sum of the roots of the polynomial $ q (x) $. The original equation $p(x)=0$ has thus five roots: $x_0=4$ and the four roots $x_{1},x_{2},x_{3},x_{4}$ of $q(x)$. We just need to prove that none of these $x_{1},x_{2},x_{3},x_{4}$ are real numbers.</li> </ul> <p>For this purpose we can use simple Calculus. 
Since $p'(x)=5x^4&gt;0$ for $x\neq 0$ and $p'(0)=0$, the polynomial $p(x)$ is an increasing function and</p> <ol> <li>if $x&lt; 4$, then $p(x)&lt;0$,</li> <li>if $x&gt;4$, then $p(x)&gt;0$,</li> <li>$p(4)=0$, </li> </ol> <p>all roots of $q(x)=0$ are non-real complex numbers, whose sum as computed above equals $-4$.</p> <p>--</p> <p><sup>1</sup> Polynomial long division $p (x)/(x-{\color{blue}4})=q(x)$</p> <p>\begin{array}{rl} &amp;\underline{\phantom{+x^{5}} \phantom{+0}x^{4}\phantom{0}+4x^{3}+16x^{2}\phantom{0}+64x\phantom{0}+256\phantom{0}} \\ x-{\color{blue}4}) &amp; \phantom{+}x^{5}\phantom{+0x^{4}0+4x^{3}+16x^{2}+064x}-1024 \\ &amp; \underline{-x^{5}+4x^4\phantom{0}} \\ &amp; \phantom{-x^{5}+}\;4x^4 \\ &amp; \phantom{-x^{5}}\underline{\;-4x^4+16x^3\phantom{0}} \\ &amp; \phantom{-x^{5}-4x^4+}16x^3 \\ &amp; \phantom{-x^{5}-4x^4}\underline{\;-16x^3+64x^{2}\phantom{0}} \\ &amp; \phantom{-x^{5}-4x^4+16x^3+}64x^{2}\\ &amp; \phantom{-x^{5}-4x^4+16x^3}\underline{\;-64x^{2}+256x\phantom{0}} \\ &amp; \phantom{-x^{5}-4x^4+16x^3-64x^{2}+}256x-1024 \\ &amp; \phantom{-x^{5}-4x^4+16x^3-64x^{2}}\underline{\;-256x+1024\phantom{0}} \\ &amp; \phantom{-x^{5}-4x^4+16x^3-64x^{2}-256x+102}{\color{blue}0} \end{array}</p>
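A numerical cross-check of this conclusion (a sketch of mine; it uses the polar form of the roots rather than the long division above):

```python
import cmath

# The five roots of x^5 = 1024 are 4*exp(2*pi*i*k/5) for k = 0,...,4;
# k = 0 gives the single real root x = 4.
roots = [4 * cmath.exp(2j * cmath.pi * k / 5) for k in range(5)]
non_real = [r for r in roots if abs(r.imag) > 1e-9]
total = sum(non_real)
# total is approximately -4 + 0i, matching the Vieta computation
```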
2,425,916
<p>What is this set describing?</p> <p>{$n∈\mathbb{N}|n\ne 1$ and for all $a∈\mathbb{N}$ and $b∈\mathbb{N},ab=n$ implies $a= 1$ or $b= 1$}</p> <p>Is it describing a subset of natural numbers, excluding 1, that is the product of two other natural numbers, of which one must be 1? Isn't that just every natural number except 1?</p>
fleablood
280,126
<p>"Is it describing a subset of natural numbers, excluding 1, that is the product of two other natural numbers, of which one must be 1? Isn't that just every natural number except 1?"</p> <p>Almost. It is saying that if $n = ab$, no matter which $a$ or $b$ you choose, it <em>must</em> be that either $a$ or $b$ is $1$.</p> <p>Example: $6$. Is $6$ in the set? Well $6 \ne 1$ so that's a start. And "if $6 = a*b$..." Okay, that could be $(a,b) = (1,6),(2,3),(3,2)$ or $(6,1)$ "it must be that $a = 1$ or $b = 1$". Well... we could have $a = 2$ and $b=3$ and neither $a$ nor $b$ must be $1$. So $6$ is not in the set.</p> <p>SPOILER ALERT: There is a word for numbers that must be in the set--- that is, numbers for which every pair of divisors must include $1$... numbers that cannot be a product of any pair of numbers of which neither is $1$....</p>
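A mechanical transcription of the set-builder condition (the helper name `in_set` is my own), which makes the spoiler explicit:

```python
def in_set(n):
    # n != 1, and every factorization n = a*b in the naturals
    # uses 1 as one of the factors
    if n == 1:
        return False
    return all(a == 1 or b == 1
               for a in range(1, n + 1)
               for b in range(1, n + 1)
               if a * b == n)

members = [n for n in range(1, 30) if in_set(n)]
# members == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29] -- the primes below 30
```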
1,591,278
<p>$$f(x) =\frac{ (x - 1) }{(x^2 - 1)}$$ $$g(x) = \frac{1}{(x + 1)}$$</p> <p>Is $f = g$? why?</p> <p>Solution: answer is no. I don't get it. why can't it be same when it is the alt form?? Can anyone explain this further...</p> <p>additional questions for further clarification: $$f(x) = \lim_{x\to 1}\frac{ (x^2 - 1) }{(x-1)} = infinite $$ $$f(x) = \lim_{x\to 1}\frac{ (x - 1)(x + 1) }{(x-1)} $$ $$f(x) = \lim_{x\to 1}(x+1)$$ $$ 1+1=2$$ How the function be justified that the limit can be apply to its alt form of a function which is not the same.</p>
SchrodingersCat
278,967
<p>No, they are not the same because the <a href="https://en.wikipedia.org/wiki/Domain_of_a_function" rel="nofollow">domains of definition</a> of the two functions are different.</p> <p>The domain of $f$ is $\{x \mid x\in \mathbb{R} \, \land \, x \not = 1,-1\}$ whereas the domain of $g$ is $\{x \mid x\in \mathbb{R} \, \land \, x \not = -1 \}$.</p> <p>In particular, $f(1)$ does not exist but $g(1)$ exists and is equal to $\frac{1}{2}$.</p>
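A direct numerical illustration of the domain difference (a sketch with the two formulas transcribed as written; the flag names are mine):

```python
def f(x):
    return (x - 1) / (x**2 - 1)   # blows up at x = 1 AND x = -1

def g(x):
    return 1 / (x + 1)            # blows up only at x = -1

g_at_1 = g(1)                     # 0.5, so g is defined at x = 1
try:
    f(1)
    f_defined_at_1 = True
except ZeroDivisionError:
    f_defined_at_1 = False        # f is NOT defined at x = 1
```

Away from $x=\pm1$ the two formulas agree, which is exactly why the distinction is easy to miss.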
147,181
<p>I can't figure out how to make ListDensityPlot with both logarithmic coloring and a logarithmic scale on both the x and y axes.</p> <p><a href="https://mathematica.stackexchange.com/questions/36830/logarithmic-scale-in-a-densityplot-and-its-legend">Logarithmic scale in a DensityPlot and its legend</a></p> <p>This question worked for ListDensityPlot as well - I was able to color plot points on a logarithmic coloring scale. But how do you add a logarithmic scale to the x and y axes as well?</p> <p>I would appreciate any help.</p>
Milad P.
39,744
<p>You can manually Plot the log of your function and specify <a href="https://reference.wolfram.com/language/ref/Ticks.html" rel="nofollow noreferrer">Ticks</a> (e.g. <code>Ticks -&gt; {{{1, 10}, {2, 100}, {3, 1000}}, {{1, 10}, {2, 100}, {3, 1000}}}</code>) and <a href="https://reference.wolfram.com/language/ref/ColorFunction.html" rel="nofollow noreferrer">ColorFunction</a>.</p>
147,181
<p>I can't figure out how to make ListDensityPlot with both logarithmic coloring and a logarithmic scale on both the x and y axes.</p> <p><a href="https://mathematica.stackexchange.com/questions/36830/logarithmic-scale-in-a-densityplot-and-its-legend">Logarithmic scale in a DensityPlot and its legend</a></p> <p>This question worked for ListDensityPlot as well - I was able to color plot points on a logarithmic coloring scale. But how do you add a logarithmic scale to the x and y axes as well?</p> <p>I would appreciate any help.</p>
Carl Woll
45,431
<p>In M11.2 you can use <a href="http://reference.wolfram.com/language/ref/ScalingFunctions" rel="nofollow noreferrer"><code>ScalingFunctions</code></a> to make the ticks logarithmic, for example:</p> <pre><code>DensityPlot[ Sin[x +y]^2 ,{x, 1, 20}, {y, 1, 20}, ScalingFunctions -&gt; {"Log", "Log", "Linear"}, PlotPoints -&gt; 40 ] </code></pre> <p><a href="https://i.stack.imgur.com/EX13o.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EX13o.png" alt="enter image description here"></a></p> <p>Change "Linear" to "Log" if you want the values to be logarithmic as well.</p>
2,486,095
<p>Given an acute-angled triangle $\Delta ABC$ having it's Orthocentre at $H$ and Circumcentre at $O$. Prove that $\vec{HA} + \vec{HB} + \vec{HC} = 2\vec{HO}$</p> <p>I realise that $\vec{HO} = \vec{BO} + \vec{HB} = \vec{AO} + \vec{HA} =\vec{CO} + \vec{HC}$ which leads to $3\vec{HO} = (\vec{HA} + \vec{HB} + \vec{HC}) + (\vec{AO} + \vec{BO} + \vec{CO})$</p> <p>How can I prove that $(\vec{AO} + \vec{BO} + \vec{CO}) = \vec{HO}$ in order to solve the problem?</p> <p>Thank you for answering!</p>
Lutz Lehmann
115,115
<p>You just need to iterate further. Using Python this is quickly implemented as</p> <pre><code>x,y,z = 0.,0.,0.; last=3.
for k in range(20):
    x=(6-2*y+z)/5; y=(x+13+2*z)/10; z=(26+y-4*x)/8
    norm = max(abs(u) for u in [x-1,y-2,z-3])
    print " %2d: (%.12f,%.12f,%.12f) (%+.8f, %+.8f, %+.8f), %.8f" %(k+1,x,y,z,x-1,y-2,z-3, norm/last)
    last=norm
</code></pre> <p>and gives the results</p> <pre><code>  k        (x,y,z)                                 (x-1,y-2,z-3)             norm quotient
  1: (1.200000000000,1.420000000000,2.827500000000) (+0.20000000, -0.58000000, -0.17250000), 0.19333333
  2: (1.197500000000,1.985250000000,2.899406250000) (+0.19750000, -0.01475000, -0.10059375), 0.34051724
  3: (0.985781250000,1.978459375000,3.004416796875) (-0.01421875, -0.02154063, +0.00441680), 0.10906646
  4: (1.009499609375,2.001833320313,2.995479360352) (+0.00949961, +0.00183332, -0.00452064), 0.44100899
  5: (0.998362543945,1.998932126465,3.000685243835) (-0.00163746, -0.00106787, +0.00068524), 0.17237088
  6: (1.000564198181,2.000193468585,2.999742084483) (+0.00056420, +0.00019347, -0.00025792), 0.34455775
  7: (0.999871029462,1.999935519843,3.000056425249) (-0.00012897, -0.00006448, +0.00005643), 0.22859084
  8: (1.000037077113,2.000014992761,2.999983335539) (+0.00003708, +0.00001499, -0.00001666), 0.28748514
  9: (0.999990670003,1.999995734108,3.000004131762) (-0.00000933, -0.00000427, +0.00000413), 0.25163763
 10: (1.000002532709,2.000001079623,2.999998868598) (+0.00000253, +0.00000108, -0.00000113), 0.27145874
 11: (0.999999341870,1.999999707907,3.000000292553) (-0.00000066, -0.00000029, +0.00000029), 0.25985204
 12: (1.000000175348,2.000000076045,2.999999921832) (+0.00000018, +0.00000008, -0.00000008), 0.26643375
 13: (0.999999953948,1.999999979761,3.000000020496) (-0.00000005, -0.00000002, +0.00000002), 0.26263113
 14: (1.000000012195,2.000000005319,2.999999994567) (+0.00000001, +0.00000001, -0.00000001), 0.26480487
 15: (0.999999996786,1.999999998592,3.000000001431) (-0.00000000, -0.00000000, +0.00000000), 0.26355463
 16: (1.000000000849,2.000000000371,2.999999999622) (+0.00000000, +0.00000000, -0.00000000), 0.26427120
 17: (0.999999999776,1.999999999902,3.000000000100) (-0.00000000, -0.00000000, +0.00000000), 0.26385963
 18: (1.000000000059,2.000000000026,2.999999999974) (+0.00000000, +0.00000000, -0.00000000), 0.26409535
 19: (0.999999999984,1.999999999993,3.000000000007) (-0.00000000, -0.00000000, +0.00000000), 0.26396053
 20: (1.000000000004,2.000000000002,2.999999999998) (+0.00000000, +0.00000000, -0.00000000), 0.26404207
</code></pre> <p>As you see from the norm quotient of the error vectors, the convergence is linear with a factor of about $0.26$ which gives you about 3 digits every 5 iterations, exhausting double floating point precision in about 25 iterations. For the displayed 12 decimals, 20 iterations are theoretically and also practically sufficient.</p>
4,221,545
<p>Let <span class="math-container">$f$</span> be a twice-differentiable function on <span class="math-container">$\mathbb{R}$</span> such that <span class="math-container">$f''$</span> is continuous. Prove that <span class="math-container">$f(x) f''(x) &lt; 0$</span> cannot hold for all <span class="math-container">$x.$</span></p> <p>I have been able to think of specific examples of <span class="math-container">$f(x)$</span> in which <span class="math-container">$f(x)f''(x) &lt;0$</span> does not hold, but I have not been able to come up with specific values of <span class="math-container">$x$</span> for which <span class="math-container">$f(x)f''(x)&lt;0$</span> does not hold.</p> <p>Any help is greatly appreciated!</p>
Hagen von Eitzen
39,174
<p>Assume wlog that <span class="math-container">$f''(0)&gt;0$</span>. Then <span class="math-container">$f(0)&lt;0$</span> and as neither factor can change sign, <span class="math-container">$f''(x)&gt;0&gt;f(x)$</span> for all <span class="math-container">$x\in\Bbb R$</span> and <span class="math-container">$f$</span> is strictly convex. Pick <span class="math-container">$a&lt;b$</span> with <span class="math-container">$f(a)\ne f(b)$</span>. Then the line through <span class="math-container">$(a,f(a))$</span> and <span class="math-container">$(b,f(b))$</span> intersects the <span class="math-container">$x$</span>-axis. By convexity, <span class="math-container">$f$</span> is above this line outside <span class="math-container">$[a,b]$</span>, hence must assume positive values.</p>
1,969,934
<p>Taken from <a href="https://en.wikipedia.org/wiki/E_(mathematical_constant)" rel="nofollow noreferrer">Wikipedia</a>:</p> <blockquote> <p>The number $e$ is the limit $$e = \lim_{n \to \infty} \left (1 + \frac{1}{n} \right)^n$$</p> </blockquote> <p><strong>Graph of $f(x) = \left (1 + \dfrac{1}{x} \right)^x$</strong> <a href="http://www.meta-calculator.com/online/paw750g2nx4z" rel="nofollow noreferrer">taken from here.</a></p> <p><a href="https://i.stack.imgur.com/4eHLu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4eHLu.png" alt="Graph"></a></p> <p>It's evident from the graph that the function actually approaches $e$ as $x$ approaches $\infty$. So I tried approaching the value algebraically. My attempt:</p> <p>$$\lim_{n \to \infty} \left (1 + \frac{1}{n} \right)^n$$ $$= \lim_{n \to \infty} \left(\frac{n + 1}{n}\right)^n$$ $$= \left(\lim_{n \to \infty} \left(\frac{n + 1}{n}\right) \right)^n$$ $$= 1^\infty$$</p> <p>which is an indeterminate form. I cannot think of any other algebraic manipulation. My question is: <strong>how can I solve this limit algebraically?</strong></p>
Community
-1
<p>Notice that</p> <p>$$\left(1+\dfrac{1}{n}\right)^n=e^{n\ln\left(1+\frac{1}{n}\right)}$$</p> <p>Can you solve the limit now?</p>
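As a quick numerical sanity check of this rewriting, and of the limit itself, one can tabulate $(1+1/n)^n$ against $e$ for growing $n$. This small Python sketch is an illustration, not part of the original hint:

```python
import math

# Tabulate (1 + 1/n)^n against e, and against exp(n*log(1 + 1/n)),
# which is the same quantity rewritten as in the hint above.
for n in [10, 1000, 100000, 10**7]:
    direct = (1 + 1/n) ** n
    via_exp = math.exp(n * math.log(1 + 1/n))
    print(n, direct, via_exp, abs(direct - math.e))
```

The gap shrinks roughly like $e/(2n)$, so the convergence is slow but unmistakable.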
1,521,739
<p>The following is the Meyers-Serrin theorem and its proof in Evans's <em>Partial Differential Equations</em>:</p> <blockquote> <p><a href="https://i.stack.imgur.com/XnzXY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XnzXY.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/7woqQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7woqQ.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/pBIqX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pBIqX.png" alt="enter image description here"></a></p> </blockquote> <p>Could anyone explain where (for which $x\in U$) is the convolution in step 2 defined and how to get (3) from Theorem 1? </p> <blockquote> <p><a href="https://i.stack.imgur.com/NdZWL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NdZWL.png" alt="enter image description here"></a></p> </blockquote>
heropup
118,193
<p>Assuming you have done the coordinate transformation correctly, then the basic idea is that you calculate the vertex and focus of the transformed parabola, then perform the inverse transformation on those coordinates to recover the vertex and focus in the untransformed (original) coordinates.</p> <p>So for example, if your transformation constituted first a translation of the form $$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} x - h \\ y - k \end{bmatrix},$$ followed by a counterclockwise rotation of $\theta$ $$\begin{bmatrix} x'' \\ y'' \end{bmatrix} = \begin{bmatrix} \cos \theta &amp; -\sin \theta \\ \sin \theta &amp; \cos \theta \end{bmatrix}\begin{bmatrix} x' \\ y' \end{bmatrix},$$ and the resulting parabola had vertex at $(x'', y'') = (p'',q'')$, then you'd apply the inverse of the rotation matrix to $(p'', q'')$, and then the inverse of the translation, giving $$\begin{bmatrix} p \\ q \end{bmatrix} = \begin{bmatrix} \cos \theta &amp; \sin \theta \\ -\sin \theta &amp; \cos \theta \end{bmatrix} \begin{bmatrix} p'' \\ q'' \end{bmatrix} + \begin{bmatrix}h \\ k \end{bmatrix}.$$ The same principle applies to the focus.</p>
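The inverse transformation described above can be sketched numerically. The function name and the sample numbers below are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical helper: map coordinates found in the transformed frame
# back to the original frame by undoing the rotation, then the translation.
def to_original(p_dd, q_dd, h, k, theta):
    c, s = np.cos(theta), np.sin(theta)
    R_inv = np.array([[c, s], [-s, c]])          # transpose of the CCW rotation
    p, q = R_inv @ np.array([p_dd, q_dd]) + np.array([h, k])
    return float(p), float(q)

# e.g. a vertex sitting at the transformed origin maps back to (h, k):
print(to_original(0.0, 0.0, h=2.0, k=3.0, theta=np.pi / 6))  # (2.0, 3.0)
```

The same call recovers the focus, since both are just points carried through the inverse maps.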
3,597,829
<p>Edit: I am using the natural logarithm in what follows.</p> <p>I want to figure out how to show by hand that the maximum of <span class="math-container">$$\log(4)c+\log(3)a+\log(2)x$$</span> when <span class="math-container">$$a\geq 0, c\geq 0, x \geq 0, y \geq 0,$$</span> <span class="math-container">$$a+c+x+y=1,$$</span> <span class="math-container">$$(a+c)^2+(x+y)^2+2xc\leq 1-2\gamma,$$</span> where <span class="math-container">$\gamma$</span> is a fixed constant such that <span class="math-container">$4/25\leq \gamma \leq 1/4$</span>, is <span class="math-container">$\frac{\log(6)}{2}+\frac{\sqrt{1-4\gamma}}{2}\log(3/2)$</span>, which is given by <span class="math-container">$a=\frac{1+\sqrt{1-4\gamma}}{2}$</span>, <span class="math-container">$x=1-a=\frac{1-\sqrt{1-4\gamma}}{2}$</span>, <span class="math-container">$c=y=0$</span>.</p> <p>I have tried using Lagrange multipliers by changing the last constraint to an equality, but it becomes very messy and I get stuck. </p> <p>I would also be happy if I could simply show that <span class="math-container">$\log(4)c+\log(3)a+\log(2)x \leq \frac{\log(6)}{2}+\frac{\sqrt{1-4\gamma}}{2}\log(3/2)$</span> for all such <span class="math-container">$a,c,x,y$</span>. A method I have attempted is to use the concavity of <span class="math-container">$\log(x)$</span>. The equation of the line that passes through <span class="math-container">$(2, \log(2))$</span> and <span class="math-container">$(3, \log(3))$</span> is <span class="math-container">$L(x)=\log(3/2)x-\log(9/8)$</span>. Then by concavity of <span class="math-container">$\log(x)$</span> we have <span class="math-container">$\log(x)\leq L(x)$</span> for all integers <span class="math-container">$x$</span>. 
Then</p> <p><span class="math-container">\begin{align*}&amp;\log(4)c+\log(3)a+\log(2)x\\ &amp;\leq L(4)c+L(3)a+L(2)x\\ &amp;=\log(3/2) (4c+3a+2x+y)-\log(9/8)\\ &amp;=\log(3/2)\left(\frac{5+3}{2}c+\frac{5+1}{2}a+\frac{5-1}{2}x+\frac{5-3}{2}y\right)-\log(9/8)\\ &amp;=\log(3/2)\left(\frac{5+\sqrt{1-4\gamma}}{2}\right)-\log(9/8)+\log(3/2)\left(\frac{a-x+3(c-y)-\sqrt{1-4\gamma}}{2}\right)\\ &amp;=\frac{\log(6)}{2}+\frac{\sqrt{1-4\gamma}}{2}\log(3/2)+\log(3/2)\left(\frac{a-x+3(c-y)-\sqrt{1-4\gamma}}{2}\right).\\ \end{align*}</span> That means that to obtain the inequality I want, it would suffice to show that <span class="math-container">$a-x\leq 3(y-c)+\sqrt{1-4\gamma}$</span>. However, I have not had success in proving this inequality. Any suggestions or help is immensely appreciated. </p> <p>Another option would be to use perturbation techniques, but I do not have experience in this. Thank you for your help.</p>
mathlove
78,967
<p><span class="math-container">$\frac{\log(6)}{2}+\frac{\sqrt{1-4\gamma}}{2}\log(3/2)$</span> is not the maximum of <span class="math-container">$$f(c,a,x):=\log(4)c+\log(3)a+\log(2)x$$</span></p> <p>For example, for <span class="math-container">$\gamma=4/25$</span>, we have <span class="math-container">$$f(4/5,0,0)=\frac{\log(256)}{5}\color{red}{\gt}\frac{\log(162)}{5}=\frac{\log(6)}{2}+\frac{\sqrt{1-4(4/25)}}{2}\log(3/2)$$</span></p> <hr> <p>In the following, let us prove that the maximum of <span class="math-container">$\log(4)c+\log(3)a+\log(2)x$</span> is <span class="math-container">$$\begin{cases}\log(2)(1+\sqrt{1-4\gamma})&amp;\text{if $\ 4/25\le \gamma\lt\frac{\log(2)\log(4/3)}{(\log(8/3))^2}$} \\\\\frac{\log(6)}{2}+\frac{\sqrt{1-4\gamma}}{2}\log(3/2)&amp;\text{if $\ \frac{\log(2)\log(4/3)}{(\log(8/3))^2}\le\gamma\le 1/4$}\end{cases}$$</span></p> <p>We want to find the maximum of <span class="math-container">$f(c,a,x)$</span> under the condition that <span class="math-container">$$a\geq 0, c\geq 0, x \geq 0, 1-a-c-x \geq 0,$$</span> <span class="math-container">$$(a+c)^2+(1-a-c)^2+2xc\leq 1-2\gamma,4/25\leq \gamma \leq 1/4$$</span></p> <p><strong>Case 1</strong> : <span class="math-container">$c=0$</span></p> <p>We want to find the maximum of <span class="math-container">$f(0,a,x)$</span> under the condition that <span class="math-container">$$0\le x\le 1-a,\frac{1-\sqrt{1-4\gamma}}{2}\le a\le\frac{1+\sqrt{1-4\gamma}}{2}, 4/25\leq \gamma \leq 1/4$$</span></p> <p>Therefore, we get <span class="math-container">$$\begin{align}f(0,a,x)&amp;\le f(0,a,1-a) \\\\&amp;=\log(3/2)a+\log(2) \\\\&amp;\le\log(3/2)\frac{1+\sqrt{1-4\gamma}}{2}+\log(2) \\\\&amp;=\frac{\log(6)}{2}+\frac{\sqrt{1-4\gamma}}{2}\log(3/2)\end{align}$$</span></p> <p><strong>Case 2</strong> : <span class="math-container">$c\gt 0$</span></p> <p>We want to find the maximum of <span class="math-container">$f(c,a,x)$</span> under the condition that <span class="math-container">$$a\ge 0, c\gt 0, 0\le 
1-a-c,$$</span> <span class="math-container">$$0\le x\leq \min\bigg(1-a-c,\frac{1-2\gamma-(a+c)^2-(1-a-c)^2}{2c}\bigg),$$</span> <span class="math-container">$$0\le \frac{1-2\gamma-(a+c)^2-(1-a-c)^2}{2c}, 4/25\leq \gamma \leq 1/4$$</span> Note here that <span class="math-container">$$\min\bigg(1-a-c,\frac{1-2\gamma-(a+c)^2-(1-a-c)^2}{2c}\bigg)=\begin{cases}1-a-c&amp;\text{if $\ a(1-a-c)\ge \gamma$} \\\\\frac{1-2\gamma-(a+c)^2-(1-a-c)^2}{2c}&amp;\text{if $\ a(1-a-c)\lt \gamma$}\end{cases}$$</span></p> <p><strong>Case 2-1</strong> : <span class="math-container">$a(1-a-c)\ge \gamma$</span></p> <p>We have <span class="math-container">$0\le x\leq 1-a-c$</span> from which we obtain <span class="math-container">$$f(c,a,x)\le f(c,a,1-a-c)=\log(2)c+\log(3/2)a+\log(2):=g(c,a)$$</span></p> <p>Suppose that <span class="math-container">$a=0$</span>. Then, there is no <span class="math-container">$\gamma$</span> such that <span class="math-container">$0\ge\gamma$</span> and <span class="math-container">$4/25\leq \gamma \leq 1/4$</span>. 
So, <span class="math-container">$a\gt 0$</span>.</p> <p>So, we want to find the maximum of <span class="math-container">$g(c,a)$</span> under the condition that <span class="math-container">$$0\lt c\le 1-a-\frac{\gamma}{a},\frac{1-\sqrt{1-4\gamma}}{2}\lt a\lt\frac{1+\sqrt{1-4\gamma}}{2}, 4/25\leq \gamma \leq 1/4$$</span> so <span class="math-container">$$g(c,a)\le g\bigg(1-a-\frac{\gamma}{a},a\bigg)=-\log(2)\frac{\gamma}{a}-\log(4/3)a+\log(4):=h(a)$$</span> We want to find the maximum of <span class="math-container">$h(a)$</span> under the condition that <span class="math-container">$$\frac{1-\sqrt{1-4\gamma}}{2}\lt a\lt\frac{1+\sqrt{1-4\gamma}}{2}, 4/25\leq \gamma \leq 1/4$$</span></p> <p>We see that <span class="math-container">$h'(a)$</span> is decreasing with <span class="math-container">$$h'(a)=0\iff a=\sqrt{\frac{\log(2)}{\log(4/3)}\gamma}$$</span></p> <p>If <span class="math-container">$4/25\le \gamma\lt \frac{\log(2)\log(4/3)}{(\log(8/3))^2}$</span>, then <span class="math-container">$$\frac{1-\sqrt{1-4\gamma}}{2}\lt \sqrt{\frac{\log(2)}{\log(4/3)}\gamma}\lt \frac{1+\sqrt{1-4\gamma}}{2}$$</span> from which we have <span class="math-container">$$h(a)\le h\bigg(\sqrt{\frac{\log(2)}{\log(4/3)}\gamma}\bigg)=\log(4)-2\sqrt{\log(2)\log(4/3)\gamma}$$</span></p> <p>If <span class="math-container">$\frac{\log(2)\log(4/3)}{(\log(8/3))^2}\le\gamma\le\frac 14$</span>, then <span class="math-container">$$\frac{1+\sqrt{1-4\gamma}}{2}\le \sqrt{\frac{\log(2)}{\log(4/3)}\gamma}$$</span> So, <span class="math-container">$h'(a)$</span> is positive in <span class="math-container">$\frac{1-\sqrt{1-4\gamma}}{2}\lt a\lt\frac{1+\sqrt{1-4\gamma}}{2}$</span>, so <span class="math-container">$h(a)$</span> is increasing from which we have <span class="math-container">$$h(a)\lt h\bigg(\frac{1+\sqrt{1-4\gamma}}{2}\bigg)=\frac{\log(6)}{2}+\frac{\sqrt{1-4\gamma}}{2}\log(3/2)$$</span></p> <p><strong>Case 2-2</strong> : <span class="math-container">$a(1-a-c)\lt \gamma$</span></p> <p>We 
have <span class="math-container">$$0\le x\leq \frac{1-2\gamma-(a+c)^2-(1-a-c)^2}{2c}$$</span> from which we obtain <span class="math-container">$$\begin{align}f(c,a,x)&amp;\le f\bigg(c,a,\frac{1-2\gamma-(a+c)^2-(1-a-c)^2}{2c}\bigg) \\\\&amp;=\log(2)c+\log(3/4)a+\log(2)\frac{-\gamma-a^2+a+c}{c}:=j(c,a)\end{align}$$</span> where <span class="math-container">$$\frac{\partial j}{\partial c}=\log(2)\frac{c^2+\gamma+a^2-a}{c^2}$$</span></p> <p>We want to maximize <span class="math-container">$j(c,a)$</span> under the condition that <span class="math-container">$$0\le a\lt \frac{1+\sqrt{1-4\gamma}}{2}, 0\lt c\le 1-a,\frac{a-a^2-\gamma}{a}\lt c$$</span> <span class="math-container">$$\frac{1-2a-\sqrt{1-4\gamma}}{2}\le c\le \frac{1-2a+\sqrt{1-4\gamma}}{2}, 4/25\leq \gamma \leq 1/4$$</span></p> <p><strong>Case 2-2-1</strong> : <span class="math-container">$\frac{1-\sqrt{1-4\gamma}}{2}\lt a\lt\frac{1+\sqrt{1-4\gamma}}{2}$</span></p> <p>Then, we have</p> <p><span class="math-container">$$\sqrt{a-a^2-\gamma}\ge\frac{a-a^2-\gamma}{a}$$</span></p> <p><span class="math-container">$$\small\frac{1-2a-\sqrt{1-4\gamma}}{2}\le 0\le \frac{a-a^2-\gamma}{a}\lt \frac{1-2a+\sqrt{1-4\gamma}}{2}\lt 1-a$$</span></p> <p>So, we want to maximize <span class="math-container">$j(c,a)$</span> under the condition that <span class="math-container">$$\frac{1-\sqrt{1-4\gamma}}{2}\lt a\le\frac{1+\sqrt{1-4\gamma}}{2},\frac{a-a^2-\gamma}{a}\lt c\le \frac{1-2a+\sqrt{1-4\gamma}}{2}, 4/25\leq \gamma \leq 1/4$$</span></p> <p>Using that <span class="math-container">$$\frac{\partial j}{\partial c}=\log(2)\frac{c^2+\gamma+a^2-a}{c^2}\ge 0\iff c\ge \sqrt{a-a^2-\gamma}$$</span> we have <span class="math-container">$$\begin{align}j(c,a)&amp;\le\max\bigg(j\bigg(\frac{a-a^2-\gamma}{a},a\bigg),j\bigg(\frac{1-2a+\sqrt{1-4\gamma}}{2},a\bigg)\bigg) \\\\&amp;=\begin{cases}j\bigg(\frac{a-a^2-\gamma}{a},a\bigg)&amp;\text{if $\ a\ge\frac{1+\sqrt{1-4\gamma}}{4}$} \\\\j\bigg(\frac{1-2a+\sqrt{1-4\gamma}}{2},a\bigg)&amp;\text{if 
$\ a\lt\frac{1+\sqrt{1-4\gamma}}{4}$}\end{cases}\end{align}$$</span></p> <p><strong>Case 2-2-1-1</strong> : <span class="math-container">$\frac{1+\sqrt{1-4\gamma}}{4}\le\frac{1-\sqrt{1-4\gamma}}{2}\ (\le a)$</span> which holds only when <span class="math-container">$2/9\le\gamma\le 1/4$</span></p> <p><span class="math-container">$$\begin{align}j(c,a)&amp;\le j\bigg(\frac{a-a^2-\gamma}{a},a\bigg) \\\\&amp;=h(a) \\\\&amp;\lt h\bigg(\frac{1+\sqrt{1-4\gamma}}{2}\bigg) \\\\&amp;=\frac{\log(6)}{2}+\frac{\sqrt{1-4\gamma}}{2}\log(3/2)\end{align}$$</span></p> <p><strong>Case 2-2-1-2</strong> : <span class="math-container">$\frac{1-\sqrt{1-4\gamma}}{2}\lt\frac{1+\sqrt{1-4\gamma}}{4}\lt\frac{1+\sqrt{1-4\gamma}}{2}$</span> which holds only when <span class="math-container">$4/25\le\gamma\lt 2/9$</span></p> <p><strong>Case 2-2-1-2-1</strong> : <span class="math-container">$\frac{1-\sqrt{1-4\gamma}}{2}\lt a\le\frac{1+\sqrt{1-4\gamma}}{4}$</span></p> <p><span class="math-container">$$\begin{align}j(c,a)&amp;\le j\bigg(\frac{1-2a+\sqrt{1-4\gamma}}{2},a\bigg) \\\\&amp;=\log(2)(1+\sqrt{1-4\gamma})-\log(4/3)a \\\\&amp;\lt \log(2)(1+\sqrt{1-4\gamma})-\log(4/3)\frac{1-\sqrt{1-4\gamma}}{2} \\\\&amp;=\frac{\log(3)}{2}+\frac{\sqrt{1-4\gamma}}{2}\log(16/3)\end{align}$$</span></p> <p><strong>Case 2-2-1-2-2</strong> : <span class="math-container">$\frac{1+\sqrt{1-4\gamma}}{4}\lt a\lt\frac{1+\sqrt{1-4\gamma}}{2}$</span></p> <p><span class="math-container">$$\begin{align}j(c,a)&amp;\le j\bigg(\frac{1-2a+\sqrt{1-4\gamma}}{2},a\bigg) \\\\&amp;=\log(2)(1+\sqrt{1-4\gamma})-\log(4/3)a \\\\&amp;\lt\log(2)(1+\sqrt{1-4\gamma})-\log(4/3)\frac{1+\sqrt{1-4\gamma}}{4} \\\\&amp;=\frac{\log(12)}{4}+\frac{\sqrt{1-4\gamma}}{4}\log(12)\end{align}$$</span></p> <p><strong>Case 2-2-2</strong> : <span class="math-container">$0\le a\le \frac{1-\sqrt{1-4\gamma}}{2}$</span></p> <p>Since <span class="math-container">$\frac{\partial j}{\partial c}=\log(2)\frac{c^2+\gamma+a^2-a}{c^2}\gt 0$</span>, we see that <span 
class="math-container">$j(c,a)$</span> is increasing on <span class="math-container">$c$</span>. We have</p> <p><span class="math-container">$$\small\frac{a-a^2-\gamma}{a}\le 0\le \frac{1-2a-\sqrt{1-4\gamma}}{2}\le \frac{1-2a+\sqrt{1-4\gamma}}{2}\le 1-a$$</span></p> <p>So, we want to maximize <span class="math-container">$j(c,a)$</span> under the condition that <span class="math-container">$$0\le a\le \frac{1-\sqrt{1-4\gamma}}{2}, \frac{1-2a-\sqrt{1-4\gamma}}{2}\le c\le \frac{1-2a+\sqrt{1-4\gamma}}{2}, 4/25\leq \gamma \leq 1/4$$</span></p> <p>Hence, we get <span class="math-container">$$\begin{align}j(c,a)&amp;\le j\bigg(\frac{1-2a+\sqrt{1-4\gamma}}{2},a\bigg) \\\\&amp;=\log(2)(1+\sqrt{1-4\gamma})-\log(4/3)a \\\\&amp;\le\log(2)(1+\sqrt{1-4\gamma})\end{align}$$</span></p> <hr> <p><strong>Conclusion</strong> : The maximum of <span class="math-container">$\log(4)c+\log(3)a+\log(2)x$</span> is <span class="math-container">$$\begin{cases}\log(2)(1+\sqrt{1-4\gamma})&amp;\text{if $\ 4/25\le \gamma\lt\frac{\log(2)\log(4/3)}{(\log(8/3))^2}$} \\\\\frac{\log(6)}{2}+\frac{\sqrt{1-4\gamma}}{2}\log(3/2)&amp;\text{if $\ \frac{\log(2)\log(4/3)}{(\log(8/3))^2}\le\gamma\le 1/4$}\end{cases}$$</span></p>
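A crude way to gain confidence in the case analysis above is a brute-force scan over a grid of feasible $(c,a,x)$ for one sample $\gamma$ in the second regime. This is only a numerical illustration, not a proof; the grid resolution and the round-off tolerance are arbitrary choices:

```python
import math

# Brute-force scan of log(4)c + log(3)a + log(2)x over a grid of
# (c, a, x) with y = 1 - a - c - x >= 0 by construction, keeping only
# points satisfying (a+c)^2 + (x+y)^2 + 2xc <= 1 - 2*gamma.
def brute_max(gamma, steps=100):
    best = -math.inf
    for i in range(steps + 1):
        a = i / steps
        for j in range(steps + 1 - i):
            c = j / steps
            for k in range(steps + 1 - i - j):
                x = k / steps
                lhs = (a + c)**2 + (1 - a - c)**2 + 2*x*c
                if lhs <= 1 - 2*gamma + 1e-12:   # small tolerance for float round-off
                    best = max(best, math.log(4)*c + math.log(3)*a + math.log(2)*x)
    return best

gamma = 0.24   # lies in the second regime of the conclusion
claimed = math.log(6)/2 + math.sqrt(1 - 4*gamma)/2 * math.log(1.5)
print(abs(brute_max(gamma) - claimed) < 0.02)  # True
```

For this $\gamma$ the maximizing point $(c,a,x)=(0,\ 0.6,\ 0.4)$ lies exactly on the grid, so the scan reproduces the claimed value up to round-off.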
145,046
<p>I'm a first-year graduate student in mathematics and I have an important question. I like studying math, and when I attend a course I try to study in the best way possible, with different textbooks; moreover, I try to understand the concepts rather than worry about the exams. Despite this, months after such intense study I inexorably forget most of the things I have learned. For example, if I study algebraic geometry, commutative algebra or differential geometry, only the main ideas remain in my mind at the end. Conversely, when I deal with subjects such as linear algebra, real analysis, abstract algebra or topology (simpler subjects that I studied in my first or second year), I'm comfortable. So my question is: what should remain in the mind of a student after a one-semester course? What is the point of learning and understanding many proofs if one then forgets them all?</p> <p>I'm sorry for my poor English.</p>
Brian Rushton
51,970
<p>When a newborn baby sees his mothers face</p> <p>The first of all objects, the eyes first light</p> <p>It may forget</p> <p>It may pass on</p> <p>But it will not forget all</p> <p>Something remains, a hidden sight.</p> <p>Seen again, again, again,</p> <p>Her face becomes familiar</p> <p>Then sought for</p> <p>Then loved.</p> <hr> <p>You may not remember the theorems,</p> <p>You may not remember the proofs,</p> <p>You may forget and pass on into your future studies,</p> <p>But seen again, again, again,</p> <p>Something will stir within you</p> <p>Something will change inside you</p> <p>It will become familiar,</p> <p>Then natural,</p> <p>Then loved.</p> <hr> <p>Children take time,</p> <p>So do students. Good luck!</p>
325,337
<p>There's a continuous function $f: \mathbb{Q} \rightarrow \mathbb{R}$</p> <p>Prove that $ \exists t \in \mathbb{R} \setminus \mathbb{Q} \ \ \exists g \in \mathbb{R} :\ \ \lim _{q \rightarrow t, \ q\in \mathbb{Q}}f(q)=g$.</p> <p>Prove that there are $2^{\aleph _0}$ such numbers $t$.</p> <p>I know that if a function is continuous on rational points, then it's continuous on whole $ \mathbb{R}$, but that isn't relevant to the problem, is it?</p> <p>I would appreciate all your help.</p> <p>Thanks.</p>
Brian M. Scott
12,042
<p>Let $\Bbb P=\Bbb R\setminus\Bbb Q$, and for $x\in\Bbb R$ let $B(x,\epsilon)=(x-\epsilon,x+\epsilon)$. Suppose that the first result is false. Then for each $x\in\Bbb P$ there is an $n(x)\in\Bbb N$ such that for each $\epsilon&gt;0$ there are $p,q\in B(x,\epsilon)\cap\Bbb Q$ with $|f(p)-f(q)|\ge 2^{-n}$. For $k\in\Bbb N$ let $A_k=\{x\in\Bbb R\setminus\Bbb Q:n(x)=k\}$; by the Baire category theorem there are an $m\in\Bbb N$ and an open interval $(a,b)$ in $\Bbb R$ such that $A_m$ is dense in $(a,b)\cap\Bbb P$ and hence in $(a,b)$. Fix $p\in(a,b)\cap\Bbb Q$. Since $f$ is continuous, there is an $\epsilon&gt;0$ such that $|f(q)-f(r)|&lt;2^{-m}$ for all $q,r\in B(p,\epsilon)\cap\Bbb Q$, and it follows that $A_m\cap(a,b)\cap B(p,\epsilon)=\varnothing$, which is a contradiction.</p> <p>In fact this shows that the sets $A_k$ are nowhere dense in $\Bbb P$. They are also closed in $\Bbb P$. To see this, suppose that $x\in\operatorname{cl}_{\Bbb P}A_k$, and let $\epsilon&gt;0$ be arbitrary; then there is a $y\in A_k\cap B(x,\epsilon/2)$, so there are $p,q\in B(y,\epsilon/2)\cap\Bbb Q\subseteq B(x,\epsilon)\cap\Bbb Q$ with $|f(p)-f(q)|\ge 2^{-k}$, and the claim follows. For $k\in\Bbb N$ let $U_k=\Bbb P\setminus A_k$; $U_k$ is a dense open subset of $\Bbb P$, so $G=\bigcap_{k\in\Bbb N}U_k$ is a dense $G_\delta$-set in $\Bbb P$, and clearly $f$ has a continuous extension to each $x\in G$. Thus, to finish the problem we need only show that $|G|=2^\omega$; this is done in <a href="https://math.stackexchange.com/a/81659/12042">this answer</a> to an earlier question, also noted by Martin Sleziak in his answer.</p>
898,034
<blockquote> <p>Solve $$(x-3)\left(\frac{\mathrm dy}{\mathrm dx}\right)+y=6e^x, x&gt;0$$</p> </blockquote> <p>I have a very similar problem like this on my homework, and I have no clue how to set it up or even start. How could I set this up?</p>
Sidharth Ghoshal
58,294
<p>This is a standard linear ordinary differential equation.</p> <p>To solve it we note:</p> <p>$$ \frac{dy}{dx} + \frac{1}{x - 3}y = \frac{1}{x-3}6e^x$$</p> <p>We now seek an integrating factor $w(x)$ which can be multiplied on both sides to form (just observing the left-hand side)</p> <p>$$w(x) \frac{dy}{dx} + \frac{w(x)}{x-3}y = p(x)y' + p'(x)y = (p(x)y)'$$</p> <p>It becomes clear that $p(x) = w(x)$ and that if $w(x) = e^{\int \frac{dx}{x-3}} = x - 3$ we will be good to go.</p> <p>The resulting expression, after integration, is</p> <p>$$((x-3)y)' = 6e^x \rightarrow (x-3)y = 6e^x + C$$</p> <p>And finally:</p> <p>$$y = \frac{6e^x + C}{x-3}$$</p> <p>Look up this general method; it's very standard.</p>
898,034
<blockquote> <p>Solve $$(x-3)\left(\frac{\mathrm dy}{\mathrm dx}\right)+y=6e^x, x&gt;0$$</p> </blockquote> <p>I have a very similar problem like this on my homework, and I have no clue how to set it up or even start. How could I set this up?</p>
evinda
75,843
<p>$$(x-3) \frac{dy}{dx}+y=0 \Rightarrow \frac{dy}{dx}=\frac{-y}{x-3} \Rightarrow -\frac{dy}{y}=\frac{dx}{x-3} \Rightarrow \int \left ( \frac{-1}{y} \right)dy=\int \frac{1}{x-3} dx \\ \Rightarrow -\ln |y| =\ln |x-3|+c \Rightarrow e^{-\ln |y|}=e^{\ln |x-3|+c} \Rightarrow \frac{1}{|y|}=C |x-3| \Rightarrow |y|=\frac{c'}{|x-3|}\\ \Rightarrow y= \pm \frac{c'}{x-3} \Rightarrow y=\frac{A}{x-3} $$</p> <p>So, for our non-homogeneous problem we have:</p> <p>$$y(x)=\frac{A(x)}{x-3}$$</p> <p>$$(x-3) \frac{dy}{dx}+y=6e^x \Rightarrow (x-3)\frac{A'(x)(x-3)-A(x)}{(x-3)^2}+\frac{A(x)}{x-3}=6e^x \\ \Rightarrow A'(x)-\frac{A(x)}{x-3}+\frac{A(x)}{x-3}=6e^x \\ \Rightarrow A'(x)=6e^x \Rightarrow A(x)=6e^x+D$$</p> <p>Therefore,the solution is: $$y(x)=\frac{6e^x+D}{x-3}+\frac{A}{x-3}=\frac{6e^x+E}{x-3}, \text{where E=A+D is a constant}$$</p>
898,034
<blockquote> <p>Solve $$(x-3)\left(\frac{\mathrm dy}{\mathrm dx}\right)+y=6e^x, x&gt;0$$</p> </blockquote> <p>I have a very similar problem like this on my homework, and I have no clue how to set it up or even start. How could I set this up?</p>
Mary Star
80,708
<p>$$(x-3)(\frac{dy}{dx})+y=6e^x \Rightarrow \frac{dy}{dx}+\frac{1}{x-3}y=\frac{6 e^x}{x-3} \Rightarrow y'(x)+\frac{1}{x-3}y(x)=\frac{6 e^x}{x-3} $$</p> <p>The solution of the homogeneous problem is the following:</p> <p>$$y'_h(x)+\frac{1}{x-3}y_h(x)=0 \Rightarrow \frac{dy_h}{dx}=-\frac{1}{x-3}y_h \Rightarrow \frac{dy_h}{y_h}=-\frac{dx}{x-3} \Rightarrow \int \frac{dy_h}{y_h}=-\int \frac{dx}{x-3} \\ \Rightarrow \ln |y_h|=-\ln{|x-3|} +c \Rightarrow \ln {|y_h|}=\ln{(|x-3|)^{-1}} +c \Rightarrow e^{\ln {|y_h|}}=e^{\ln{(|x-3|)^{-1}} +c} \\ \Rightarrow |y_h|=e^c \frac{1}{|x-3|} \Rightarrow y_h=\pm e^c \frac{1}{x-3} \overset{\pm e^c=C}{\Rightarrow} y_h(x)=\frac{C}{x-3}$$</p> <p>The solution of the non-homogeneous problem is the following:</p> <p>We suppose that the solution is of the form: $$y_p(x)=\frac{C(x)}{x-3}$$</p> <p>Substituting this into the problem we get:</p> <p>$$y'_p(x)+\frac{1}{x-3}y_p(x)=\frac{6 e^x}{x-3} \Rightarrow \frac{C'(x)(x-3)-C(x)}{(x-3)^2}+\frac{1}{x-3}\frac{C(x)}{x-3}=\frac{6 e^x}{x-3} \Rightarrow \frac{C'(x)(x-3)-C(x)+C(x)}{(x-3)^2}=\frac{6 e^x}{x-3} \Rightarrow C'(x)=6 e^x \Rightarrow C(x)=6 e^x+c_2$$</p> <p>So, $$y_p(x)=\frac{6e^x+c_2}{x-3}$$</p> <p>The solution of the initial problem is the sum of the homogeneous solution and the particular solution: $$y=y_h+y_p$$</p>
898,034
<blockquote> <p>Solve $$(x-3)\left(\frac{\mathrm dy}{\mathrm dx}\right)+y=6e^x, x&gt;0$$</p> </blockquote> <p>I have a very similar problem like this on my homework, and I have no clue how to set it up or even start. How could I set this up?</p>
izœc
83,639
<p>There are plenty of good standard-type answers here about solving the general linear differential equation of the form you are presented with; I just want to make a small note that, from a calculus perspective, finding a specific solution you needn't bother with the high-powered approach of integrating factors, but rather can recognize $(x-3) \frac{dy}{dx} + y = (x-3) \frac{dy}{dx} + \frac{d (x-3)}{dx} y$ as the product expansion form of $\frac{d}{dx} \left( (x-3) \cdot y \right)$. Then, the equation becomes $$ \frac{d}{dx} \left( (x-3) \cdot y \right) = 6 e^x = 6 \frac{d}{dx} e^x = \frac{d}{dx} 6 \cdot e^x . $$ Then via unique anti-derivatives (up to a constant of integration) or simply the fundamental theorem of calculus, we have that $$ (x-3)y = 6 \cdot e^x + C \implies y = \frac{6 e^x + C}{x-3} . $$</p> <p>This method is more directed towards a calculus student who has been given this problem without seeing the theory related to solving ordinary differential equations, and wants to know how they might be able to bring their calculus orientation, perspective, and techniques in order to solve the problem.</p> <p>To connect this with the other solutions offering the complete solution consisting of the so-called "homogenous solution" and the "particular solution", note that this solution is to that of the particular equation. The idea of the homogenous solution is that it is both consistent with the original equation (because, the homogenous solution is when the left hand of the equation vanishes, and so if you add that 'solution' to the solution satisfying the original equation, it still works), but tells you more information about what the solutions of the differential equation are/look like, than just the particular solution alone - it "completes the picture", if you will, by telling you on what functions the differential expression on the left vanishes.</p>
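Whichever route one takes, the resulting solution $y=(6e^x+C)/(x-3)$ can be checked numerically against the original equation. A small sketch (the constant $C=5$, the difference step, and the sample points are arbitrary choices for illustration):

```python
import math

# y(x) = (6*e^x + C)/(x - 3) with an arbitrary constant C = 5, checked
# against (x - 3)*y' + y = 6*e^x using a central finite difference for y'.
def y(x, C=5.0):
    return (6*math.exp(x) + C) / (x - 3)

def residual(x, h=1e-6):
    dy = (y(x + h) - y(x - h)) / (2*h)        # numerical derivative
    return (x - 3)*dy + y(x) - 6*math.exp(x)

for x in [0.5, 1.0, 2.0, 5.0]:
    print(x, residual(x))                     # residuals are tiny (limited by the step h)
```

Points are kept away from the singularity at $x=3$, where the finite difference would lose accuracy.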
4,031,633
<p>Let <span class="math-container">$R$</span> be a commutative ring and <span class="math-container">$M$</span> is an <span class="math-container">$R$</span>-module. The following statement is well-known.</p> <p>If <span class="math-container">$M$</span> is finitely presented flat module, <span class="math-container">$M/J(R)M$</span> is free over <span class="math-container">$R/J(R)$</span> then <span class="math-container">$M$</span> is free.</p> <p>Here <span class="math-container">$J(R)$</span> is the (Jacobson) radical of <span class="math-container">$R$</span>. Is the statement true for finite <span class="math-container">$M$</span>? I expect that it is not, but was not able to construct a counter-examples. Clearly, in a counter-example the ring <span class="math-container">$R$</span> is not Noetherian. Also, local <span class="math-container">$R$</span> won't work, since over a local ring any finite flat module is free.</p>
Minseon Shin
7,719
<p>Set <span class="math-container">$I := J(R)$</span>. Suppose <span class="math-container">$M/IM$</span> has rank <span class="math-container">$n$</span> as a free <span class="math-container">$A/I$</span>-module; let <span class="math-container">$x_{1},\dotsc,x_{n} \in M$</span> be elements whose images in <span class="math-container">$M/IM$</span> form a basis for it as a free <span class="math-container">$A/I$</span>-module, let <span class="math-container">$N \subseteq M$</span> be the <span class="math-container">$A$</span>-submodule generated by the <span class="math-container">$x_{i}$</span>. Then <span class="math-container">$M = N$</span> by the usual Nakayama argument (<span class="math-container">$M/IM = (N+IM)/IM$</span> implies <span class="math-container">$M = N+IM$</span> implies <span class="math-container">$M/N = (N+IM)/N = I(M/N)$</span> so <span class="math-container">$M/N = 0$</span>) so we have a surjection <span class="math-container">$\varphi : A^{\oplus n} \to M$</span>. For all maximal ideals <span class="math-container">$\mathfrak{m}$</span> of <span class="math-container">$A$</span>, the localization <span class="math-container">$M_{\mathfrak{m}}$</span> is a free <span class="math-container">$A_{\mathfrak{m}}$</span>-module (by e.g. 
the local case) of rank <span class="math-container">$n$</span> (since <span class="math-container">$(M/I)_{\mathfrak{m}}$</span> is a free <span class="math-container">$(A/I)_{\mathfrak{m}}$</span>-module of rank <span class="math-container">$n$</span>), so the localization <span class="math-container">$\varphi_{\mathfrak{m}}$</span> is an isomorphism (the &quot;determinant trick&quot;, see Corollary (10.4) <a href="http://web.mit.edu/18.705/www/13Ed.pdf" rel="nofollow noreferrer">here</a>), hence <span class="math-container">$\varphi$</span> is an isomorphism.</p> <p>I guess more-or-less equivalently you could also use a limit argument to say that <span class="math-container">$M$</span> is locally finitely presented, then use e.g. <a href="https://stacks.math.columbia.edu/tag/00EO" rel="nofollow noreferrer">Tag 00EO</a> to conclude that <span class="math-container">$M$</span> is (globally) finitely presented.</p>
1,834,756
<p>The Taylor expansion of the function $f(x,y)$ is:</p> <p>\begin{equation} f(x+u,y+v) \approx f(x,y) + u \frac{\partial f (x,y)}{\partial x}+v \frac{\partial f (x,y)}{\partial y} + uv \frac{\partial^2 f (x,y)}{\partial x \partial y} \end{equation}</p> <p>When $f=(x,y,z)$ is the following true?</p> <p>$$\begin{align} f(x+u,y+v,z+w) \approx f(x,y,z) &amp;+ u \frac{\partial f (x,y,z)}{\partial x}+v \frac{\partial f (x,y,z)}{\partial y} + w \frac{\partial f (x,y,z)}{\partial z} \\ &amp;+uv \frac{\partial^2 f (x,y,z)}{\partial x \partial y} + vw \frac{\partial^2 f (x,y,z)}{\partial y \partial z}+ uw \frac{\partial^2 f (x,y,z)}{\partial x \partial z} \\ &amp;+ uvw \frac{\partial^3 f (x,y,z)}{\partial x \partial y \partial z} \end{align}$$</p>
Fei PAN
677,479
<p>Let <span class="math-container">$f$</span> be an infinitely differentiable function in some open neighborhood around <span class="math-container">$(x,y,z)=(a,b,c)$</span>.</p> <p><span class="math-container">$f( x,y,z) =f\left( a,b,c \right) +\left( x-a,y-b,z-c \right) \cdot \left( \begin{array}{c} \frac{\partial f}{\partial x}\left( a,b,c \right)\\ \frac{\partial f}{\partial y}\left( a,b,c \right)\\ \frac{\partial f}{\partial z}\left( a,b,c \right)\\ \end{array} \right) +\frac{1}{2}\left( x-a,y-b,z-c \right) \cdot \left( \left[ \begin{matrix} \frac{\partial ^2f}{\partial x^2}&amp; \frac{\partial ^2f}{\partial x\partial y}&amp; \frac{\partial ^2f}{\partial x\partial z}\\ \frac{\partial ^2f}{\partial y\partial x}&amp; \frac{\partial ^2f}{\partial y^2}&amp; \frac{\partial ^2f}{\partial y\partial z}\\ \frac{\partial ^2f}{\partial z\partial x}&amp; \frac{\partial ^2f}{\partial z\partial y}&amp; \frac{\partial ^2f}{\partial z^2}\\ \end{matrix} \right] _{\left( x,y,z \right) =\left( a,b,c \right)}\cdot \left( \begin{array}{c} x-a\\ y-b\\ z-c\\ \end{array} \right) \right)$</span></p>
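<p>As a quick numerical sanity check (a sketch, not part of the original answer): for a hand-picked quadratic $f$ the second-order expansion above is exact, so a short script with a hand-computed gradient and Hessian can confirm the matrix formula.</p>

```python
# Second-order (Hessian) Taylor check for the quadratic
#   f(x, y, z) = x^2 + x*y + 2*y*z + z^2,
# where the expansion  f(a) + grad·d + (1/2) d^T H d  is exact.

def f(x, y, z):
    return x*x + x*y + 2*y*z + z*z

def grad(x, y, z):
    # (f_x, f_y, f_z) = (2x + y, x + 2z, 2y + 2z), computed by hand
    return (2*x + y, x + 2*z, 2*y + 2*z)

# Constant Hessian of the quadratic (rows of second partials).
H = [[2, 1, 0],
     [1, 0, 2],
     [0, 2, 2]]

def taylor2(a, x):
    d = [xi - ai for xi, ai in zip(x, a)]
    lin = sum(g * di for g, di in zip(grad(*a), d))
    quad = 0.5 * sum(d[i] * H[i][j] * d[j] for i in range(3) for j in range(3))
    return f(*a) + lin + quad

a, x = (1.0, -2.0, 0.5), (1.3, -1.6, 0.9)
assert abs(taylor2(a, x) - f(*x)) < 1e-12
```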
1,594,622
<p>As an example, calculate the number of $5$ card hands possible from a standard $52$ card deck. </p> <p>Using the combinations formula, </p> <p>$$= \frac{n!}{r!(n-r)!}$$</p> <p>$$= \frac{52!}{5!(52-5)!}$$</p> <p>$$= \frac{52!}{5!47!}$$</p> <p>$$= 2,598,960\text{ combinations}$$</p> <p>I was wondering what the logic is behind combinations? Is it because there are 52 cards to choose from, except we're only selecting $5$ of them, to which the person holding them can rearrange however they please, hence we divide by $5!$ to account for the permutations of those 5 cards? Then do we divide by $47!$ because the remaining cards are irrelevant?</p>
Alex R.
22,064
<p>$n!$: the number of ways to select $n$ cards (from $n$ cards), such that order matters.</p> <p>$\frac{n!}{(n-r)!}=n(n-1)(n-2)\cdots (n-r+1):$ the number of ways of selecting $r$ cards from $n$ cards such that order matters.</p> <p>$r!$: The number of possible orderings of $r$ cards.</p> <p>We <em>don't</em> want to account for order, hence we divide the second by the third as you said.</p>
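<p>A quick check of these three counts using Python's standard library (not part of the original answer):</p>

```python
from math import comb, factorial, perm

n, r = 52, 5
# ordered selections of r from n: n(n-1)...(n-r+1) = n!/(n-r)!
assert perm(n, r) == factorial(n) // factorial(n - r)
# divide out the r! orderings of the chosen cards
assert comb(n, r) == perm(n, r) // factorial(r)
assert comb(52, 5) == 2598960
```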
1,594,622
<p>As an example, calculate the number of $5$ card hands possible from a standard $52$ card deck. </p> <p>Using the combinations formula, </p> <p>$$= \frac{n!}{r!(n-r)!}$$</p> <p>$$= \frac{52!}{5!(52-5)!}$$</p> <p>$$= \frac{52!}{5!47!}$$</p> <p>$$= 2,598,960\text{ combinations}$$</p> <p>I was wondering what the logic is behind combinations? Is it because there are 52 cards to choose from, except we're only selecting $5$ of them, to which the person holding them can rearrange however they please, hence we divide by $5!$ to account for the permutations of those 5 cards? Then do we divide by $47!$ because the remaining cards are irrelevant?</p>
Graham Kemp
135,106
<p>There are $52!$ distinct ways to shuffle a deck of $52$ cards.</p> <p>However, we don't care about the order of the top $5$ or the bottom $47$ cards, we only wish to count ways to select distinct sets of cards for the hand.</p> <p>Every deck is one of a group of $5!$ which differ only by the order of the top $5$ cards. &nbsp; Every deck is also one of a group of $47!$ which differ only by the order of the bottom $47$. &nbsp; Therefore every deck is one of a group of $5!\,47!$ which differ only by the order of the top $5$ and bottom $47$ cards.</p> <p>So there are ${^{52!}\!{\big/}_{5!\,47!}}$ distinct ways to deal a hand of $5$ cards from a deck of $52$.</p>
639,665
<p>How can I calculate the inverse of $M$ such that:</p> <p>$M \in M_{2n}(\mathbb{C})$ and $M = \begin{pmatrix} I_n&amp;iI_n \\iI_n&amp;I_n \end{pmatrix}$, and I find that $\det M = 2^n$. I tried to find the $comM$ and apply $M^{-1} = \frac{1}{2^n} (comM)^T$ but I think it's too complicated.</p>
lennon310
120,707
<p>$M^{-1} = \begin{pmatrix} 0.5I_n&amp;-0.5iI_n \\-0.5iI_n&amp;0.5I_n \end{pmatrix}$</p>
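<p>A quick way to convince yourself is to check the smallest case $n=1$ numerically, where $M=\begin{pmatrix}1&amp;i\\i&amp;1\end{pmatrix}$ (a sketch, not part of the original answer):</p>

```python
# n = 1 case: M = [[1, i], [i, 1]], claimed inverse [[1/2, -i/2], [-i/2, 1/2]].

def matmul2(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

M    = [[1, 1j], [1j, 1]]
Minv = [[0.5, -0.5j], [-0.5j, 0.5]]

P = matmul2(M, Minv)
assert all(abs(P[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))

# det M = 1 - i^2 = 2, matching det M = 2^n for n = 1.
assert M[0][0] * M[1][1] - M[0][1] * M[1][0] == 2
```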
342,096
<p>$$\left( \left( 1/2\,{\frac {-\beta+ \sqrt{{\beta}^{2}-4\,\delta\, \alpha}}{\alpha}} \right) ^{i}- \left( -1/2\,{\frac {\beta+ \sqrt{{ \beta}^{2}-4\,\delta\,\alpha}}{\alpha}} \right) ^{i} \right) \left( \sqrt{{\beta}^{2}-4\,\delta\,\alpha} \right) ^{-1}$$</p> <p>What exactly is the connection between the above formula and $$x^2-x-c$$ c being a constant. I know about linear recursion, but I don't understand why the Fibonacci Numbers are associated with $$x^2-x-1$$ I know about $${\frac {\sqrt {5}+1}{{2}}}$$</p> <p>But I don't see the connection. If I were to represent these terms algebraically it seems I would do something like this:</p> <p>$${\alpha}^{-1}$$ $$-{\frac {\beta}{{\alpha}^{2}}}$$ $$-{\frac {-{\beta}^{2}+\delta\,\alpha}{{\alpha}^{3}}}$$ $${\frac {\beta\, \left( -{\beta}^{2}+2\,\delta\,\alpha \right) }{{ \alpha}^{4}}}$$ $${\frac {{\beta}^{4}-3\,{\beta}^{2}\delta\,\alpha+{\delta}^{2}{\alpha}^ {2}}{{\alpha}^{5}}}$$ $$-{\frac {\beta\, \left( {\beta}^{4}-4\,{\beta}^{2}\delta\,\alpha+3\,{ \delta}^{2}{\alpha}^{2} \right) }{{\alpha}^{6}}} $$</p> <p>There also seems to be a relationship with factorials when they are used as arrays in matrices.</p>
xavierm02
10,385
<p>Let $F_0=0$, $F_1=1$, $\forall n \in \Bbb N, F_{n+2}=F_{n+1}+F_n$</p> <p>It's the Fibonacci sequence.</p> <p>Now let $\forall n \in \Bbb N,V_n=\begin{pmatrix} F_n\\F_{n+1}\end{pmatrix}$</p> <p>Let $M=\begin{pmatrix}0 &amp; 1\\ 1 &amp; 1\end{pmatrix}$</p> <p>Then $MV_n=\begin{pmatrix}0 &amp; 1\\ 1 &amp; 1\end{pmatrix}\begin{pmatrix} F_n\\F_{n+1}\end{pmatrix}=\begin{pmatrix} F_{n+1}\\F_n+F_{n+1}\end{pmatrix}$</p> <p>By induction, $\forall n \in \Bbb N, V_n=M^nV_0$</p> <p>So the only thing we really need to compute is $M^n$. In order to do this, we try to diagonalize it.</p> <p>Its characteristic polynomial is $\chi_M(\lambda)=\det(M-\lambda I_2)=\det \begin{pmatrix}-\lambda &amp; 1\\ 1 &amp; 1-\lambda\end{pmatrix}=(-\lambda)(1-\lambda)-(1)(1)=\lambda^2-\lambda-1$</p> <p>Let $\varphi=\cfrac{1+\sqrt{5}}{2}$ and $\psi=\cfrac{1-\sqrt{5}}{2}$ be the two roots of that polynomial.</p> <p>Since $M\in M_2(\Bbb R)$, we can find $P\in GL_2(\Bbb R)$ so that $M=PDP^{-1}$ where $D=\begin{pmatrix}\varphi &amp; 0\\ 0 &amp; \psi\end{pmatrix}$</p> <p>$D$ is the matrix of the canonical endomorphism associated to $M$ in a basis $e=(e_1,e_2)$ where $Me_1=\varphi e_1$ and $Me_2=\psi e_2$</p> <p>We know that $e_1\in Ker(M-\varphi I_2) = Ker\left(\begin{pmatrix}-\varphi &amp; 1\\ 1 &amp; 1-\varphi\end{pmatrix}\right)=Vect\left(\begin{pmatrix}\cfrac{1}{\varphi}\\1\end{pmatrix}\right)$ because $\cfrac{1}{\varphi}\begin{pmatrix}-\varphi\\ 1 \end{pmatrix}+1\begin{pmatrix} 1\\ 1-\varphi\end{pmatrix}=\begin{pmatrix} 0\\0\end{pmatrix}$ so we can take $e_1=\varphi\begin{pmatrix}\cfrac{1}{\varphi}\\ 1\end{pmatrix}=\begin{pmatrix}1\\ \varphi\end{pmatrix}$</p> <p>Similarly, $e_2\in Ker(M-\psi I_2) = Ker\left(\begin{pmatrix}-\psi &amp; 1\\ 1 &amp; 1-\psi\end{pmatrix}\right)=Vect\left(\begin{pmatrix}\cfrac{1}{\psi}\\1\end{pmatrix}\right)$ because $\cfrac{1}{\psi}\begin{pmatrix}-\psi\\ 1 \end{pmatrix}+1\begin{pmatrix} 1\\ 1-\psi\end{pmatrix}=\begin{pmatrix} 0\\0\end{pmatrix}$ so we can take 
$e_2=\psi\begin{pmatrix}\cfrac{1}{\psi}\\ 1\end{pmatrix}=\begin{pmatrix}1\\ \psi\end{pmatrix}$</p> <p>And we get $P=\begin{pmatrix}1 &amp; 1\\ \varphi &amp; \psi\end{pmatrix}$</p> <p>Then we need $P^{-1}=\cfrac{\begin{pmatrix}\psi &amp; -1\\ -\varphi &amp; 1\end{pmatrix}}{\psi-\varphi}$</p> <p>Then $M^n=(PDP^{-1})^n=PD^nP^{-1}$ and $D^n=\begin{pmatrix}\varphi &amp; 0\\ 0 &amp; \psi\end{pmatrix}^n=\begin{pmatrix}\varphi^n &amp; 0\\ 0 &amp; \psi^n\end{pmatrix}$</p> <p>So $\begin{pmatrix} F_n\\F_{n+1}\end{pmatrix}=V_n=PD^nP^{-1}V_0=\cfrac{1}{\psi-\varphi}\begin{pmatrix}1 &amp; 1\\ \varphi &amp; \psi\end{pmatrix}\begin{pmatrix}\varphi^n &amp; 0\\ 0 &amp; \psi^n\end{pmatrix}\begin{pmatrix}\psi &amp; -1\\ -\varphi &amp; 1\end{pmatrix}\begin{pmatrix}0\\1\end{pmatrix}=\cfrac{1}{\psi-\varphi}\begin{pmatrix}1 &amp; 1\\ \varphi &amp; \psi\end{pmatrix}\begin{pmatrix}\varphi^n &amp; 0\\ 0 &amp; \psi^n\end{pmatrix}\begin{pmatrix}-1\\1\end{pmatrix}=\cfrac{1}{\psi-\varphi}\begin{pmatrix}1 &amp; 1\\ \varphi &amp; \psi\end{pmatrix}\begin{pmatrix}-\varphi^n\\\psi^n\end{pmatrix}=\cfrac{1}{\psi-\varphi}\begin{pmatrix}-\varphi^n+\psi^n\\-\varphi^{n+1}+\psi^{n+1}\end{pmatrix}=\begin{pmatrix}\cfrac{\psi^n-\varphi^n}{\psi-\varphi}\\ \cfrac{\psi^{n+1}-\varphi^{n+1}}{\psi-\varphi}\end{pmatrix}$</p> <p>From which we can deduce $F_n = \cfrac{\psi^n-\varphi^n}{\psi-\varphi}$</p> <p>It's more often written as $\cfrac{\varphi^n-\psi^n}{\varphi-\psi}$ because $\varphi-\psi=\sqrt{5}$ whereas $\psi-\varphi=-\sqrt{5}$.</p> <p>And we finally get $\boxed{F_n = \cfrac{\varphi^n-\psi^n}{\sqrt{5}}}$</p>
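<p>A short script can check the boxed formula against the recurrence (a numerical sketch, not part of the original answer; floating point suffices for small $n$):</p>

```python
# Compare Binet's formula F_n = (phi^n - psi^n)/sqrt(5) with the
# recurrence F_{n+2} = F_{n+1} + F_n, F_0 = 0, F_1 = 1.
from math import sqrt

phi = (1 + sqrt(5)) / 2
psi = (1 - sqrt(5)) / 2

def binet(n):
    return (phi**n - psi**n) / sqrt(5)

F = [0, 1]
while len(F) < 31:
    F.append(F[-1] + F[-2])

for n in range(31):
    assert round(binet(n)) == F[n]
```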
342,096
<p>$$\left( \left( 1/2\,{\frac {-\beta+ \sqrt{{\beta}^{2}-4\,\delta\, \alpha}}{\alpha}} \right) ^{i}- \left( -1/2\,{\frac {\beta+ \sqrt{{ \beta}^{2}-4\,\delta\,\alpha}}{\alpha}} \right) ^{i} \right) \left( \sqrt{{\beta}^{2}-4\,\delta\,\alpha} \right) ^{-1}$$</p> <p>What exactly is the connection between the above formula and $$x^2-x-c$$ c being a constant. I know about linear recursion, but I don't understand why the Fibonacci Numbers are associated with $$x^2-x-1$$ I know about $${\frac {\sqrt {5}+1}{{2}}}$$</p> <p>But I don't see the connection. If I were to represent these terms algebraically it seems I would do something like this:</p> <p>$${\alpha}^{-1}$$ $$-{\frac {\beta}{{\alpha}^{2}}}$$ $$-{\frac {-{\beta}^{2}+\delta\,\alpha}{{\alpha}^{3}}}$$ $${\frac {\beta\, \left( -{\beta}^{2}+2\,\delta\,\alpha \right) }{{ \alpha}^{4}}}$$ $${\frac {{\beta}^{4}-3\,{\beta}^{2}\delta\,\alpha+{\delta}^{2}{\alpha}^ {2}}{{\alpha}^{5}}}$$ $$-{\frac {\beta\, \left( {\beta}^{4}-4\,{\beta}^{2}\delta\,\alpha+3\,{ \delta}^{2}{\alpha}^{2} \right) }{{\alpha}^{6}}} $$</p> <p>There also seems to be a relationship with factorials when they are used as arrays in matrices.</p>
Will Jagy
10,400
<p>If you start with $x^2 - x - 1$ and make it homogeneous, $x^2 - x y - y^2,$ the result relates to Fibonacci numbers in that $$ F_{n+1}^2 - F_{n+1} F_n - F_n^2 = (-1)^n. $$ This uses numbering $$ F_0 = 0, \; F_1 = 1, \; F_2 = 1, \; F_3 = 2, \; F_4 = 3, \; F_5 = 5, \ldots $$ and $$ x = F_{n+1}, \; y = F_n. $$</p> <p>It's just one of those things. $x^2 - x y - y^2$ is called an (indefinite) quadratic form. </p>
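<p>A quick numerical check of the identity (not part of the original answer):</p>

```python
# F_{n+1}^2 - F_{n+1}*F_n - F_n^2 = (-1)^n  with  F_0 = 0, F_1 = 1.
F = [0, 1]
while len(F) < 26:
    F.append(F[-1] + F[-2])

for n in range(25):
    assert F[n + 1] ** 2 - F[n + 1] * F[n] - F[n] ** 2 == (-1) ** n
```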
23,312
<p>What is the importance of eigenvalues/eigenvectors? </p>
Herb
7,398
<p>I think if you want a better answer, you need to tell us more precisely what you may have in mind: are you interested in theoretical aspects of eigenvalues; do you have a specific application in mind? Matrices by themselves are just arrays of numbers, which take meaning once you set up a context. Without the context, it seems difficult to give you a good answer. If you use matrices to describe adjacency relations, then eigenvalues/vectors may mean one thing; if you use them to represent linear maps something else, etc.</p> <p>One possible application: In some cases, you may be able to diagonalize your matrix $M$ using the eigenvalues, which gives you a nice expression for $M^k$. Specifically, you may be able to decompose your matrix into a product $SDS^{-1}$ , where $D$ is diagonal, with entries the eigenvalues, and $S$ is the matrix with the associated respective eigenvectors. I hope it is not a problem to post this as a comment. I got a couple of Courics here last time for posting a comment in the answer site.</p> <p>Mr. Arturo: Interesting approach!. This seems to connect with the theory of characteristic curves in PDE's(who knows if it can be generalized to dimensions higher than 1), which are curves along which a PDE becomes an ODE, i.e., as you so brilliantly said, curves along which the PDE decouples. </p>
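<p>To illustrate the $M^k = SD^kS^{-1}$ remark concretely (a sketch with a hand-diagonalized $2\times 2$ matrix of my own choosing, not from the original answer):</p>

```python
# If A = S D S^{-1} with D diagonal, then A^k = S D^k S^{-1}.
# Hand-diagonalized example: A has eigenvalues 2 and 1 with
# eigenvectors (1, 0) and (-1, 1), placed as the columns of S.

def matmul2(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

A    = [[2, 1], [0, 1]]
S    = [[1, -1], [0, 1]]
Sinv = [[1, 1], [0, 1]]

k = 5
Dk = [[2**k, 0], [0, 1]]                 # D^k for D = diag(2, 1)
via_diag = matmul2(matmul2(S, Dk), Sinv)

# Compare with repeated multiplication.
power = [[1, 0], [0, 1]]
for _ in range(k):
    power = matmul2(power, A)

assert via_diag == power == [[32, 31], [0, 1]]
```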
23,312
<p>What is the importance of eigenvalues/eigenvectors? </p>
Doc
86,414
<p>I would like to direct you to an answer that I posted here: <a href="https://math.stackexchange.com/questions/357340/importance-of-eigenvalues/494168#494168">Importance of eigenvalues</a></p> <p>I feel it is a nice example to motivate students who ask this question, in fact I wish it were asked more often. Personally, I hold such students in very high regard. </p>
23,312
<p>What is the importance of eigenvalues/eigenvectors? </p>
Srijit
389,895
<p>Let's go back to the historical background to get the motivation!</p> <p>Consider a linear transformation $T:V\to V$.</p> <p>Given a basis of $V$, $T$ is characterized beautifully by a matrix $A$, which tells everything about $T$.</p> <p>The bad part about $A$ is that it changes with a change of basis of $V$.</p> <p>Why is that bad?</p> <p>Because the same linear transformation is given by two distinct matrices for two different choices of basis, and, believe me, you cannot relate the two matrices just by looking at their entries. Really not interesting!</p> <p>Intuitively, there should exist some strong relation between two such matrices.</p> <p>Our aim is to capture that common thing mathematically.</p> <p>Eigenvalues give a necessary condition here, but not a sufficient one.</p> <p>Let me make my statement clear.</p> <p>"The invariance of a subspace of $V$ under a linear transformation of $V$" is such a basis-independent property.</p> <p>That is, if $A$ and $B$ represent the same $T$, then they must have all their eigenvalues equal. But the converse is not true! Hence equal eigenvalues are necessary but not sufficient.</p> <p>And the relation is "$A$ is similar to $B$", i.e. $A=PBP^{-1}$ where $P$ is a non-singular matrix.</p>
885,129
<p>I want to speed up the convergence of a series involving rational expressions; the expression is $$\sum _{x=1}^{\infty }\left( -1\right) ^{x}\dfrac {-x^{2}-2x+1} {x^{4}+2x^{2}+1}$$ If I have not misunderstood anything, the error in truncating the infinite sum is at most the absolute value of the first neglected term. The formula for the $x$th term is $\dfrac {-x^{2}-2x+1} {x^{4}+2x^{2}+1}$ from the definition of the series. To get the series I used Maxima, the computer algebra system. I have noticed that to get 13 decimal places of the series one must wade through $312958$ terms of the series. I had to kill the computer GUI and some other system processes and run Maxima to compute the sum. It took about 5 minutes. The final sum I obtained was $0.3106137076850$. Is there any way to speed up the convergence of the sum? In general, is there any way to speed up the convergence of the sum of $$\sum _{x=1}^{\infty }\left( -1\right) ^{x}\dfrac {p(x)} {q(x)}$$ </p> <p>where both ${p(x)}$ and ${q(x)}$ are polynomials?</p>
Brad
108,464
<p>Here is the first step toward making the series evaluate quicker. You may need just as many terms but it will be faster. Break the series up through partial fractions.</p> <p>$$\dfrac {-x^{2}-2x+1} {x^{4}+2x^{2}+1} = -\frac{2 (x-1)}{\left(x^2+1\right)^2}-\frac{1}{x^2+1}$$</p> <p>I will focus on the second term in the sum.</p> <p>$$\sum_{x=0}^\infty \frac{(-1)^x}{x^2+1} = \sum_{n=0}^\infty \frac{1}{4n^2+1} - \sum_{k=0}^\infty \frac{1}{(2k+1)^2+1}$$</p> <p>Each of the two sums can be evaluated by contour methods. This approach can be found in my post <a href="https://math.stackexchange.com/questions/863174/how-to-find-sum-k-in-mathbbz-frac1kakb/863579#863579">here</a>. In short, for a sufficient rational function $f(z)$, </p> <p>$$\lim_{N\to\infty} \sum_{k = -N}^{k = N} f(k)$$</p> <p>is equal to the negative of the sum of the residues of $\pi f(z) \cot(\pi z)$ at the poles of $f(z)$. We can use this because we are summing an even function. This yields</p> <p>$$\sum_{n=0}^\infty \frac{1}{4n^2+1} = \frac{1}{4} \left(2+\pi \coth \left(\frac{\pi }{2}\right)\right)$$</p> <p>$$\sum_{k=0}^\infty \frac{1}{(2k+1)^2+1} = \frac{1}{4} \pi \tanh \left(\frac{\pi }{2}\right)$$</p> <p>Subtract these (and simplify) to find</p> <p>$$\sum_{x=1}^\infty \frac{(-1)^x}{x^2+1} = \frac{1}{2} (\pi \operatorname{csch}(\pi )-1)$$</p> <p>Now we can conclude our first step by writing...</p> <p>$$\sum _{x=1}^{\infty }\left( -1\right) ^{x}\dfrac {-x^{2}-2x+1} {x^{4}+2x^{2}+1} = -\frac{1}{2} (\pi \operatorname{csch}(\pi )-1) - \sum_{x=1}^\infty(-1)^x\frac{2 (x-1)}{\left(x^2+1\right)^2}$$</p> <p>It may be tempting to try the same approach for the remaining series but we are not summing an even function so it will not be as easy.</p>
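<p>A quick numerical check of both the partial-fraction split and the closed form above (a floating-point sketch, not part of the original answer):</p>

```python
# Numerically check the pieces above:
#  (1) the partial-fraction split of the summand, and
#  (2) sum_{x>=1} (-1)^x / (x^2+1) = (pi*csch(pi) - 1)/2.
from math import pi, sinh

# (1) (-x^2 - 2x + 1)/(x^4 + 2x^2 + 1) = -2(x-1)/(x^2+1)^2 - 1/(x^2+1)
for x in range(1, 50):
    lhs = (-x*x - 2*x + 1) / (x**4 + 2*x*x + 1)
    rhs = -2*(x - 1) / (x*x + 1)**2 - 1 / (x*x + 1)
    assert abs(lhs - rhs) < 1e-15

# (2) compare a long partial sum with the closed form; the alternating
# series error is at most the first omitted term, about 1e-10 here.
closed = 0.5 * (pi / sinh(pi) - 1)
s = sum((-1)**x / (x*x + 1) for x in range(1, 100001))
assert abs(s - closed) < 1e-9
```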
1,918,144
<p>Suppose $\sum\limits_{k=0}^{\infty} b_k $ converges absolutely and has the sum $b$. Suppose $a \in\mathbb R$ with $|a|&lt;1$. What is the sum of the series $\sum\limits_{k=0}^{\infty}(a^kb_0 +a^{k-1}b_1 +a^{k-2}b_2 +...+b_k)$?</p>
kobe
190,421
<p>It's $\frac{b}{1-a}$. Since the series $\sum\limits_{k = 0}^\infty a^k$ and $\sum\limits_{k = 0}^\infty b_k$ converge absolutely, the Cauchy product of the two series converges to $\sum\limits_{k = 0}^\infty a^k\cdot \sum\limits_{k = 0}^\infty b_k = \frac{1}{1-a}\cdot b = \frac{b}{1-a}$. The Cauchy product is </p> <p>$$\sum_{k = 0}^\infty \sum_{j = 0}^k a^{k-j} b_j = \sum_{k = 0}^\infty (a^kb_0 + a^{k-1}b_1 + \cdots + b_k)$$</p>
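<p>A quick numerical illustration (my choice of $a$ and $b_k$, not from the original answer): with $a=\frac12$ and $b_k=2^{-k}$ we have $b=2$, so the series should converge to $\frac{b}{1-a}=4$.</p>

```python
# a = 1/2, b_k = 2^{-k}  =>  b = 2 and the Cauchy-product value is b/(1-a) = 4.
a = 0.5
K = 60
b_k = [0.5**k for k in range(K)]

total = 0.0
for k in range(K):
    total += sum(a**(k - j) * b_k[j] for j in range(k + 1))

# Here the k-th inner sum is (k+1)/2^k, and sum_k (k+1)/2^k = 4.
assert abs(total - 4.0) < 1e-6
```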
4,079,711
<p>Calculate <span class="math-container">$$\iint_D (x^2+y)\mathrm dx\mathrm dy$$</span> where <span class="math-container">$D = \{(x,y)\mid -2 \le x \le 4,\ 5x-1 \le y \le 5x+3\}$</span> by definition.</p> <hr /> <p>Plotting the set <span class="math-container">$D$</span>, we notice that it is a parallelogram. I tried to divide it into equal parallelograms, i.e. take <span class="math-container">$x_i = -2 + \frac{6i}{n}$</span> and <span class="math-container">$y_j = 5x - 1+ \frac{4j}{m}$</span>. The definition requires to calculate the sum <span class="math-container">$$\sigma = \sum_{i=1}^n\sum_{j=1}^m f(\xi_i, \eta_j)\mu(D_{ij})$$</span> I was thinking about choosing <span class="math-container">$\xi_i = x_i$</span> and <span class="math-container">$\eta_j = y_j(x_i)$</span>. However, this seems a bit strange. Also, finding the area of <span class="math-container">$D_{ij}$</span> doesn't seem to be convenient.</p> <p>Any help is appreciated.</p>
D_S
28,556
<p>Suppose your definition of the product topology on <span class="math-container">$X \times Y$</span> is that whose open sets are unions of the sets of the form <span class="math-container">$U \times V$</span> for <span class="math-container">$U, V$</span> open in <span class="math-container">$X, Y$</span>.</p> <p>Let <span class="math-container">$\phi: A \rightarrow X \times Y$</span> be a continuous function in this sense. Your question is why the one variable functions <span class="math-container">$A \rightarrow X \times Y \rightarrow X$</span> and <span class="math-container">$A \rightarrow X \times Y \rightarrow Y$</span> are continuous, correct?</p> <p>This is obvious from the fact that each of the projections <span class="math-container">$X \times Y \rightarrow X$</span> and <span class="math-container">$X \times Y \rightarrow Y$</span> are continuous, since a composition of continuous functions remains continuous.</p> <p>To see that, say, <span class="math-container">$X \times Y \rightarrow X$</span> is continuous, you need only observe that the preimage of an open set <span class="math-container">$U \subset X$</span> is <span class="math-container">$U \times Y$</span>.</p>
1,472,916
<p>I am doing it in the following way. Is it correct?</p> <p>Set $S = \{1,\pi,\pi^2,\pi^3,...,\pi^n\}$ is LI over $\mathbb Q$</p> <p>Suppose $a_0\times 1 + a_1\times\pi + \dots + a_n\times \pi^n = 0$ where the $a_i$'s are not all $0$.</p> <p>Then $\pi$ is a root of $a_0 + a_1x + ... + a_nx^n = 0$ which is impossible since $\pi$ is a transcendental number.</p> <p>Therefore, S is LI. Hence $\mathbb R$ is of infinite dimension over $\mathbb Q$.</p>
Scientifica
164,983
<p>It seems correct to me, but you have to correct a few things:</p> <ol> <li><p>You said "Set $S=\{1,\pi,\dots,\pi^n\}$ is LI over $\mathbb Q$" and then you proved that it is LI and that's a bad way to write things. Instead you can say: "Set $S=\{1,\pi,\dots,\pi^n\}$. Let's prove that $S$ is LI over $\mathbb Q$".</p></li> <li><p>To prove that $\mathbb R$ is infinite dimensional over $\mathbb Q$, you used the property that:</p> <blockquote> <p>A vector space $V$ over a field $\mathbb F$ is infinite dimensional if and only if there is a sequence $v_1,v_2,\dots$ of vectors in $V$ such that $\{v_1,\dots ,v_n\}$ is linearly independent for every positive integer $n$.</p> </blockquote> <p>but you didn't make such a sequence explicit. You could then choose the notation $S_n$ for $S$ and after your proof involving polynomials you say "Therefore $S_n$ is a sequence of vectors in $\mathbb R$ which is LI for all $n\in\mathbb N$. Thus $\mathbb R$ is infinite dimensional over $\mathbb Q$".</p></li> </ol>
2,592,282
<p>If $f$ and $g$ are one to one, then ${f\times g}$ and ${f+g}$ are one to one.</p> <p>Is this true? If not, I need some clarification to understand.</p> <p>Thanks</p>
quasi
400,434
<p>Let $f,g:\mathbb{R}\to\mathbb{R}$ be given by $f(x) = x$, and $g(x) = x$. Then $f,g$ are one-to-one, but $fg$ is not one-to-one. <p> Let $f,g:\mathbb{R}\to\mathbb{R}$ be given by $f(x) = x$, and $g(x) = -x$. Then $f,g$ are one-to-one, but $f+g$ is not one-to-one. <p> To the OP: <p> What are the simplest functions which you know are one-to-one? Linear functions, right? To test the truth of a potential claim, it makes sense to first spot-check the claim by trying a few simple examples.</p>
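<p>The two counterexamples can be checked mechanically (a trivial sketch, not part of the original answer):</p>

```python
f  = lambda x: x      # injective
g1 = lambda x: x      # injective, but (f*g1)(x) = x^2 is not
g2 = lambda x: -x     # injective, but (f+g2)(x) = 0 is not

assert f(-1) != f(1) and g1(-1) != g1(1) and g2(-1) != g2(1)
assert f(-1) * g1(-1) == f(1) * g1(1)        # product identifies -1 and 1
assert f(2) + g2(2) == f(3) + g2(3) == 0     # sum identifies everything
```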
4,055,850
<p>I was thinking it could be plugging in <span class="math-container">$x^2+x$</span> to the <span class="math-container">$f'(x)$</span>, then using the Chain Rule to solve it, but I'm not sure if it is right. Please help!</p>
ryang
21,813
<p>Among</p> <ul> <li><span class="math-container">$r\,\mathrm{cis}(\pi+\theta),$</span></li> <li><span class="math-container">$r\,\mathrm{cis}(\pi-\theta),$</span> and</li> <li><span class="math-container">$r\,\mathrm{cis}(2\pi-\theta),$</span></li> </ul> <p>only the third one refers to the complex conjugate of <span class="math-container">$r\,\mathrm{cis}(\theta),$</span> according to the standard definition.</p> <p>Assuming that in this textbook <span class="math-container">$\mathrm{Re}(z)$</span> is indeed along the <span class="math-container">$x$</span>-axis (i.e., <span class="math-container">$z=x+iy$</span>), we can infer that the author</p> <ol> <li>uses <span class="math-container">$(0,2\pi]$</span> as the principal argument, and</li> <li> <ul> <li><p><em>either</em> writes <span class="math-container">$z$</span>'s conjugate as <span class="math-container">$(-z^*),$</span></p> </li> <li><p><em>or</em> writes <span class="math-container">$z$</span>'s conjugate as <span class="math-container">$(z^*)$</span> but defines it as <span class="math-container">$(-x+iy).$</span></p> </li> </ul> </li> </ol> <p>(1.) is fine (both <span class="math-container">$(-\pi,\pi]$</span> and <span class="math-container">$(0,2\pi]$</span> are conventional), but (2.) is strange and highly nonstandard.</p>
2,929,887
<p>Show that if a square matrix <span class="math-container">$A$</span> satisfies the equation <span class="math-container">$p(A)=0$</span>, where <span class="math-container">$p(x) = 2+a_1x+a_2x^2+...+a_kx^k$</span> where <span class="math-container">$a_1,a_2,...,a_k$</span> are constant scalars, then <span class="math-container">$A$</span> must be invertible. Find <span class="math-container">$A^{-1}$</span>.</p> <p>So I am aware of the fact that <span class="math-container">$A(-A-2I)=I$</span> but I don't see a way with the polynomial to get that form to then show that <span class="math-container">$A$</span> is invertible. The best I was able to do was that <span class="math-container">$p(A) = 2I+a_1A+a_2A^2+...+a_kA^k$</span> and <span class="math-container">$0=2I+a_1A+a_2A^2+...+a_kA^k$</span> since <span class="math-container">$p(A)=0$</span> but even from here I am not sure how to really approach the problem.</p>
Fred
380,717
<p>Let <span class="math-container">$u \in ker(A)$</span>, then <span class="math-container">$0=p(A)u=2u+a_1Au+...+a_kA^ku=2u$</span>, hence <span class="math-container">$u=0$</span>. This shows that <span class="math-container">$A$</span> is invertible.</p> <p>Can you proceed?</p>
2,929,887
<p>Show that if a square matrix <span class="math-container">$A$</span> satisfies the equation <span class="math-container">$p(A)=0$</span>, where <span class="math-container">$p(x) = 2+a_1x+a_2x^2+...+a_kx^k$</span> where <span class="math-container">$a_1,a_2,...,a_k$</span> are constant scalars, then <span class="math-container">$A$</span> must be invertible. Find <span class="math-container">$A^{-1}$</span>.</p> <p>So I am aware of the fact that <span class="math-container">$A(-A-2I)=I$</span> but I don't see a way with the polynomial to get that form to then show that <span class="math-container">$A$</span> is invertible. The best I was able to do was that <span class="math-container">$p(A) = 2I+a_1A+a_2A^2+...+a_kA^k$</span> and <span class="math-container">$0=2I+a_1A+a_2A^2+...+a_kA^k$</span> since <span class="math-container">$p(A)=0$</span> but even from here I am not sure how to really approach the problem.</p>
P Vanchinathan
28,915
<p>You have a polynomial with non-zero constant term satisfied by <span class="math-container">$A$</span>. What you have to do is reformulate <span class="math-container">$p(A)=0$</span> as <span class="math-container">$2I=-(a_1A+a_2A^2+\cdots +a_kA^k )$</span>. This means <span class="math-container">$I = -\frac12(a_1+a_2A+a_3A^2+\cdots +a_kA^{k-1}) A$</span>. This shows that <span class="math-container">$B=-\frac12(a_1+a_2A+a_3A^2+\cdots +a_kA^{k-1})$</span> is the inverse of <span class="math-container">$A$</span>.</p>
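<p>A concrete check (my example, not from the answer): <span class="math-container">$A=\begin{pmatrix}0&amp;1\\-2&amp;0\end{pmatrix}$</span> satisfies <span class="math-container">$A^2+2I=0$</span>, i.e. <span class="math-container">$p(A)=0$</span> with <span class="math-container">$a_1=0$</span>, <span class="math-container">$a_2=1$</span>, so the recipe gives <span class="math-container">$A^{-1}=-\frac12 A$</span>.</p>

```python
def matmul2(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [-2, 0]]
A2 = matmul2(A, A)
assert A2 == [[-2, 0], [0, -2]]            # so 2I + A^2 = 0, i.e. p(A) = 0

B = [[-0.5 * A[i][j] for j in range(2)] for i in range(2)]   # B = -(1/2)A
assert matmul2(A, B) == [[1.0, 0.0], [0.0, 1.0]]             # B = A^{-1}
```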
4,084,486
<p>I am studying different integral transform methods, and I am confused about why saying things such as <span class="math-container">$$ \mathcal{F}^{-1}[1] = \delta(x) $$</span> is valid. If you actually plug this in, <span class="math-container">$$ \mathcal{F}^{-1}[1] = \frac{1}{2\pi}\int_{-\infty}^{\infty}e^{ikx}dx $$</span> this does not converge for any <span class="math-container">${x}$</span> at all. It diverges everywhere. I understand the principal value of the integral when <span class="math-container">${x\neq 0}$</span> is <span class="math-container">$0$</span>, but I don't know why it's valid to just call it <span class="math-container">${\delta(x)}$</span> based on its principal value.</p>
Mark Viola
218,419
<p>The Fourier Transform is a <a href="https://en.wikipedia.org/wiki/Fourier_transform#Tempered_distributions" rel="noreferrer">Tempered Distribution</a> (See <a href="https://en.wikipedia.org/wiki/Distribution_(mathematics)#Tempered_distributions_and_Fourier_transform" rel="noreferrer">THIS</a>). Hence, we interpret the object <span class="math-container">$\mathscr{F^{-1}}\{1\}=\int_{-\infty}^\infty (1) e^{ikx}\,dx$</span> as such. That is to say, the object is not an integral.</p> <p>Rather, the Fourier Transform acts on a function <span class="math-container">$\phi\in \mathbb{S}$</span>, where <span class="math-container">$\mathbb{S}$</span> is the Schwartz space of functions. Here, the Fourier transform of <span class="math-container">$1$</span> acts on <span class="math-container">$\phi\in \mathbb{S}$</span> as follows:</p> <p><span class="math-container">$$\begin{align} \langle \mathscr{F^{-1}}\{1\}, \phi\rangle &amp;=\langle 1,\mathscr{F^{-1}}\{\phi\}\rangle\\\\ &amp;=\int_{-\infty}^\infty (1)\frac1{2\pi}\int_{-\infty}^\infty \phi(x)e^{ikx}\,dx\,dk\\\\ &amp;=\phi(0)\tag1 \end{align}$$</span></p> <p>where <span class="math-container">$(1)$</span> is due to the Inverse Transform Theorem.</p> <p>Therefore, we identify the object <span class="math-container">$\mathscr{F^{-1}}\{1\}$</span> as the Dirac Delta distribution, <span class="math-container">$\delta(k)$</span>, which has the property such that for any <span class="math-container">$\phi \in \mathbb{S}$</span> we have</p> <p><span class="math-container">$$\langle \delta,\phi\rangle =\phi(0)$$</span></p>
2,780,216
<p>How many integer solutions are there to the equation $a + b + c = 21$ if $a \geq3$, $b \geq 1$, and $c \geq 2b$?</p>
farruhota
425,072
<p>Brute forcing: $$\begin{align}(a,b,c)=&amp;(3,1,17), (3,2,16), (3,3,15), (3,4,14), (3,5,13), (3,6,12),\\ &amp;(4,1,16), (4,2,15), (4,3,14), (4,4,13), (4,5,12),\\ &amp;(5,1,15), (5,2,14), (5,3,13), (5,4,12), (5,5,11),\\ &amp;(6,1,14), (6,2,13), (6,3,12), (6,4,11), (6,5,10),\\ &amp;(7,1,13), (7,2,12), (7,3,11), (7,4,10),\\ &amp;(8,1,12), (8,2,11), (8,3,10), (8,4,9),\\ &amp;(9,1,11), (9,2,10), (9,3,9), (9,4,8),\\ &amp;(10,1,10), (10,2,9), (10,3,8),\\ &amp;(11,1,9), (11,2,8), (11,3,7),\\ &amp;(12,1,8), (12,2,7), (12,3,6),\\ &amp;(13,1,7), (13,2,6),\\ &amp;(14,1,6), (14,2,5),\\ &amp;(15,1,5), (15,2,4),\\ &amp;(16,1,4),\\ &amp;(17,1,3),\\ &amp;(18,1,2). \end{align}$$ Counting (note the boundary case $(6,5,10)$, where $c=2b$ holds with equality): $$6+3\cdot 5+3\cdot 4+3\cdot 3+3\cdot 2+3\cdot 1=51.$$</p>
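<p>As a double-check (not part of the original answer), a short loop over the constraints counts the solutions directly. Note the boundary case $(6,5,10)$, where $c \ge 2b$ holds with equality and which is easy to miss in a hand enumeration; with it the total is $51$:</p>

```python
# Count integer solutions of a + b + c = 21 with a >= 3, b >= 1, c >= 2b.
solutions = []
for a in range(3, 22):
    for b in range(1, 22):
        c = 21 - a - b
        if c >= 2 * b:
            solutions.append((a, b, c))

assert (6, 5, 10) in solutions
assert len(solutions) == 51
```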
1,500,156
<p>Call a function $f: \mathbb{R}^n \rightarrow \mathbb{R}$ nondecreasing if $x,y \in \mathbb{R}^n$ with $x \geq y$ implies $f(x) \geq f(y)$. Suppose $f$ is a nondecreasing, but not necessarily continuous, function on $\mathbb{R}^n$, and $D \subset \mathbb{R}^n$ is compact. Show that if $n = 1$, $f$ always has a maximum on $D$. Show also that if $n &gt; 1$, this need no longer be the case.</p> <p>I'm stuck at the second claim. How do I get a nondecreasing function $f: \mathbb{R}^n \rightarrow \mathbb{R}$ and a compact set on which it attains no maximum?</p> <p>Any thoughts would be appreciated. Thanks.</p>
user2566092
87,313
<p>So if you define $f(x) = -\infty, x \leq b$ and $f(x) = 1/(x-b),b &lt; x &lt; a$ and $f(x) = + \infty, x \geq a$ then $f$ is increasing on $(b,a)$ but does not attain a finite maximum.</p>
4,490,664
<p>I want to examine the number of non-trivial zeros of <span class="math-container">$\zeta(s)$</span> in the small substrip given by <span class="math-container">$a \leq \Re s \leq 1$</span>, where <span class="math-container">$a&gt;0$</span> is an absolute constant, and <span class="math-container">$U \leq \Im s \leq T$</span> by considering the following: <span class="math-container">\begin{equation}\label{one} \tag{1} N(a, T)-N(a, U), \end{equation}</span> where <span class="math-container">$N(\sigma, t)$</span> denotes the number of non-trivial zeros of <span class="math-container">$\zeta(s)$</span>, <span class="math-container">$\rho=\beta+i\gamma$</span>, with <span class="math-container">$\sigma \leq \beta$</span> and <span class="math-container">$\gamma \leq t$</span>. The issue is that, from what I've seen, all such formulas for <span class="math-container">$N(\sigma, t)$</span> are given in the form <span class="math-container">$O(\cdots)$</span>, which makes the study of \eqref{one} difficult, if not impossible. Does anyone have any suggestions or references to the literature that may be of use?</p>
davidlowryduda
9,754
<p>It is known that there are on the order of <span class="math-container">$T \log T$</span> zeros up to height <span class="math-container">$T$</span> lying <em>exactly on</em> the critical line <span class="math-container">$\mathrm{Re} s = \tfrac{1}{2}$</span>. It is possible to show that there are at most <span class="math-container">$O(T)$</span> zeros in the strip <span class="math-container">$\mathrm{Re} s \in (a, 1)$</span> for any <span class="math-container">$a &gt; \tfrac{1}{2}$</span>, so that one hundred percent of zeros lie arbitrarily close to the critical line.</p> <p>There are more powerful versions of this result. I think a good reference to check would be Levinson's <em>Almost all roots of <span class="math-container">$\zeta(s) = 0$</span> are arbitrarily close to <span class="math-container">$\sigma = 1/2$</span></em>, appearing in the Proceedings of Nat. Acad. Sci. in 1975. (<a href="https://www.pnas.org/doi/10.1073/pnas.72.4.1322" rel="nofollow noreferrer">Link to paper</a>).</p> <p>Works citing Levinson's work will lead to a paper trail of similar analysis. (If you'll forgive a bit of self-promotion, I very recently <a href="https://davidlowryduda.com/zeros-of-dirichlet-series-iv/" rel="nofollow noreferrer">wrote a note about related counting results</a>, which I tried to make pretty straightforward).</p> <p>The basic idea in these is to use some form of the argument principle to count zeros in regions and bounding the results. And the fundamental challenge is that we don't understand the actual distribution of zeros well enough to get anything except for upper bounds, as any actual zero would correspond to a main term in the integral analysis. There is no hope to perform this type of analysis without having results expressed as <span class="math-container">$O(\cdot)$</span> bounds with current ideas and techniques.</p>
2,370,716
<p>Given a positive integer $n\ge 2$ and $x_{i}\ge 0$ such that $$x_{1}+x_{2}+\cdots+x_{n}=1,$$ find the maximum value of $$x^2_{1}+x^2_{2}+\cdots+x^2_{n}+\sqrt{x_{1}x_{2}\cdots x_{n}}.$$</p> <p>I tried $$x_{1}x_{2}\cdots x_{n}\le\left(\dfrac{x_{1}+x_{2}+\cdots+x_{n}}{n}\right)^n=\dfrac{1}{n^n}.$$</p>
Community
-1
<p>When the function is symmetric in its variables, a natural candidate for an extremum is the symmetric point where the $x_i$ are all equal, i.e. $x_i=\frac1n$ for each $i$. There remains the question of whether this is a maximum or a minimum. Putting that aside, at this point we get $$n\cdot\frac{1}{n^2}+\sqrt{\left(\frac1n \right)^n}=\frac1n + \sqrt{\frac {1}{n^n}}.$$ For $n=2$ this equals $1$; for $n=3$ it equals $\frac13 + \sqrt{\frac1{27}}$. But the symmetric point is in fact a minimum here, not a maximum: for $n=3$ the vertex $(0,0,1)$ already gives the larger value $1$. (Even for $n=2$ the symmetric value $1$ is not maximal: choosing $x_1x_2=\frac1{16}$ gives $1-\frac18+\frac14=\frac98$.)</p>
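A quick numerical check (my sketch, not part of the original answer) confirms that the symmetric point is not the maximizer: for $n=2$ an interior point beats the symmetric value $1$, and for $n=3$ random sampling of the simplex never exceeds the vertex value $1$.

```python
import math
import random

def value(xs):
    # Objective: sum of squares plus square root of the product.
    return sum(x * x for x in xs) + math.sqrt(math.prod(xs))

# n = 2: the symmetric point gives exactly 1 ...
assert value([0.5, 0.5]) == 1.0
# ... but the interior point with x1*x2 = 1/16 gives the larger value 9/8.
t = (1 + math.sqrt(3) / 2) / 2
assert abs(value([t, 1 - t]) - 9 / 8) < 1e-9

# n = 3: the vertex gives 1, and random points on the simplex never beat it.
assert value([0.0, 0.0, 1.0]) == 1.0
for _ in range(5000):
    cuts = sorted(random.random() for _ in range(2))
    xs = [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]
    assert value(xs) <= 1.0 + 1e-9
```

For $n=3$ the bound $1$ can also be proved directly, since $\sum x_i^2 = 1-2\sum_{i<j}x_ix_j$ and AM-GM compares $\sum_{i<j}x_ix_j$ with the product.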
2,586,625
<p>Consider the set $$S=\{x\mid x\in S\}.$$</p> <p>For every element $x$, either $x\in S$ or $x\not\in S.$</p> <p>If we know that $x$ is in fact element of $S$, then, by definition, $x\in S$ so it is true that $x\in S$.</p> <p>If we know that $x\not\in S$, then, by definition, $x\not\in S$ so it is true that $x\not\in S$.</p> <p>But if my question is <em>what elements does $S$ contain,</em> then the answer is "not certain."</p>
Patrick Stevens
259,262
<p>You haven't actually defined a set there. You've written down some symbols and are hoping that they define a set, but they don't.</p> <p>The axiom schema which lets you define sets as "the things which satisfy some property" is the Axiom Schema of Comprehension: roughly, if $\phi$ is a predicate and $S$ is a set, then $\{x \in S: \phi(x)\}$ is a set. But you can't use that here because Comprehension selects a subset of a set which already exists; and $S$ isn't already known to be a set.</p> <p>The axiom schema which lets you define sets by producing them as the image of another set is the Axiom Schema of Replacement: roughly, if $f$ is a function of classes and $S$ is a set, then $\{f(x): x \in S \}$ is a set. But you can't use this here with $f = \mathrm{id}$ because $S$ isn't known a priori to be a set.</p>
2,586,625
<p>Consider the set $$S=\{x\mid x\in S\}.$$</p> <p>For every element $x$, either $x\in S$ or $x\not\in S.$</p> <p>If we know that $x$ is in fact element of $S$, then, by definition, $x\in S$ so it is true that $x\in S$.</p> <p>If we know that $x\not\in S$, then, by definition, $x\not\in S$ so it is true that $x\not\in S$.</p> <p>But if my question is <em>what elements does $S$ contain,</em> then the answer is "not certain."</p>
Ethan Bolker
72,858
<p>This is an interesting question.</p> <p>A set is determined by the elements it contains. That's the intuitive content of the assertion that two sets are equal if and only if they have the same elements.</p> <p>So the set you're asking about is just $S$. It contains what it contains.</p> <p>I think you are a little confused by the <em>set builder</em> notation $\{ \ldots | \ldots\}$.</p> <p>That notation is usually invoked to construct a set of objects that satisfy some condition. For example $$ E = \{ x | x = 2n \text{ for some integer } n\} $$ creates the set of even integers by telling you what it means for an integer to be even.</p> <p>In your example, the condition "$ x \in S$" does indeed tell you only what you already know by definition: the tautology "$x$ belongs to $S$ if and only if $x$ belongs to $S$".</p> <p><strong>Edit.</strong> @PatrickStevens's answer says about the same thing as this one, but more carefully and more formally.</p>
2,701,582
<p>I thought I was doing this right until I checked my answer online and got a different one. I worked through the problem again and got my original answer a second time, so this one is bothering me since the other similar ones I have done checked out okay. Please let me know if I'm doing something wrong, thanks!</p> <blockquote> <p>Find integers $x, y$ such that $475x+2018y=1$, then find $475^{-1} \pmod{2018}$.</p> </blockquote> <p>Since it is in the form $ax+by=1$, I know that $\gcd(a,b)=1$. I still did the division algorithm since that helps me with the back substitution. Here is what I got.</p> <p>Division algorithm:</p> <ul> <li><p>$2018=(4\times 475)+118$</p></li> <li><p>$475=(4\times 118)+3$</p></li> <li><p>$118=(39\times 3)+1$</p></li> </ul> <p>Back substitution:</p> <ul> <li><p>$1=118-(39\times 3)$</p></li> <li><p>$1=118-39(475-(4\times 118))$</p></li> <li><p>$1=(157\times 118)-(39\times 475)$</p></li> <li><p>$1=157\times (2018-(4\times 475))-(39\times 475)$</p></li> <li><p>$1=(157\times 2018)-(667\times 475)$</p></li> </ul> <p>So $x=667$ and $y=157$.</p> <p>For the second question I used the first part: $475x\equiv 1 \pmod{2018}$, so that would just be $667$ from the first part. Any help is appreciated, thanks!</p>
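As a quick sanity check of the arithmetic above (my addition, not part of the original post), Python's built-in modular arithmetic confirms the Bézout identity and pins down the sign of the coefficient:

```python
# Check the back-substitution with Python built-ins.
assert 157 * 2018 - 667 * 475 == 1       # the Bezout identity found above
assert (475 * -667) % 2018 == 1          # so the coefficient of 475 is -667
# Hence the inverse of 475 modulo 2018 is -667, i.e. 1351 (mod 2018):
assert pow(475, -1, 2018) == 1351
```

(The three-argument `pow` with exponent `-1` computes a modular inverse directly in Python 3.8+.)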
Ewan Delanoy
15,381
<p>Answer: no, of course not. For example, $\sigma=(1,2,\ldots,n)$ and $\tau=(1,2)$ together generate $S_n$, but the subgroups generated by subsets of $A=\lbrace \sigma, \tau \rbrace$ number only four, while $S_n$ has many more subgroups.</p>
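The generation claim can be confirmed computationally. The sketch below (my addition; the helper name `generated_group` is made up) closes the two generators under composition for $n=5$ and recovers all of $S_5$:

```python
from itertools import permutations

def generated_group(gens):
    # Closure of the generators under composition; permutations are stored
    # as tuples p with p[i] = image of i. In a finite group, closure under
    # multiplication already yields the generated subgroup.
    group = set(gens)
    frontier = list(gens)
    while frontier:
        p = frontier.pop()
        for g in gens:
            q = tuple(p[i] for i in g)   # q = p composed with g
            if q not in group:
                group.add(q)
                frontier.append(q)
    return group

n = 5
sigma = tuple(range(1, n)) + (0,)        # the n-cycle (1 2 ... n)
tau = (1, 0) + tuple(range(2, n))        # the transposition (1 2)
G = generated_group([sigma, tau])
assert G == set(permutations(range(n)))  # all 5! = 120 elements of S_5
```

Since the group is finite, closing the word set under right-multiplication by the generators suffices; the identity appears as a power of the cycle.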
4,362,221
<p>Show that for all <span class="math-container">$z$</span>, <span class="math-container">$\overline{e^z} = e^\bar{z}$</span>.</p> <p>I'm a little stuck with this one.</p> <p>First I defined the following to help solve it: <span class="math-container">$$ z = a + bi $$</span> then plugging that into the question gives <span class="math-container">$$ \overline{e^{a + bi}} = e^\overline{a + bi} $$</span></p> <p>which can then be simplified to <span class="math-container">$$ \overline{e^a(\cos(b) + i \sin(b))} = e^{a - bi} $$</span> and so, <span class="math-container">$$ e^a(\cos(b) - i\sin(b)) = e^a(\cos(-b) + i\sin(-b)) $$</span> therefore <span class="math-container">$$ \cos(b) - i\sin(b) = \cos(-b) + i\sin(-b) $$</span></p> <p>But these are unequal to each other, are they not? I must've made a mistake somewhere, but I can't figure out where.</p>
user2661923
464,411
<p><span class="math-container">$\displaystyle \overline{e^{x + iy}} = \overline {e^x \times e^{iy}} = \overline {e^x \times [\cos(y) + i\sin(y)]}$</span></p> <p><span class="math-container">$\displaystyle = e^x \times [\cos(y) - i\sin(y)].$</span></p> <hr /> <p><span class="math-container">$\displaystyle e^{\overline{x + iy}} = e^{x - iy} = e^x \times e^{-iy} = e^x \times [\cos(-y) + i\sin(-y)]$</span></p> <p><span class="math-container">$\displaystyle = e^x \times [\cos(y) - i\sin(y)].$</span></p>
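Numerically, the identity can be spot-checked with Python's <code>cmath</code> (my addition; a small rounding tolerance is used):

```python
import cmath

def conj_gap(z):
    # |conj(exp(z)) - exp(conj(z))| -- should vanish up to rounding.
    return abs(cmath.exp(z).conjugate() - cmath.exp(z.conjugate()))

for z in [1 + 2j, -0.5 + 3j, 2 - 1j, 3.7 + 0.1j]:
    assert conj_gap(z) < 1e-12
```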
556,423
<blockquote> <p>Determine using multiplication/division of power series (and not via WolframAlpha!) the first three terms in the Maclaurin series for $y=\sec x$.</p> </blockquote> <p>I tried to do it for $\tan(x)$ but then got kind of stuck. For our homework we have to do it for the $\sec(x)$. It is kind of tricky. Help would be awesome! </p> <p>Thanks!</p> <p>Taylor series for $\tan(x)$: \begin{align*} \tan (x) &amp;=\frac{\sin(x)}{\cos(x)}\\ &amp;=\frac{x-\frac {x^3}6+\frac{x^5}{120}-\cdots}{1-\frac{x^2}2+\frac{x^4}{24}-\cdots}\\ &amp;=x+\frac{x^3}3+\frac{2x^5}{15}+\cdots \end{align*}</p>
Community
-1
<p>Let's write</p> <p>$$1 = \cos{x} \sec{x} = \left(1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \dots\right)(a_0 + a_1 x + a_2x^2 + \dots)$$</p> <p>Expand the right hand side to find that</p> <p>$$1 = 1 (a_0) + x (a_1) + x^2 \left(a_2 - a_0 \frac{1}{2!}\right) + x^3 \left(a_3 - a_1 \frac{1}{2!}\right) + x^4 \left(a_4 - a_2 \frac{1}{2!} + a_0 \frac{1}{4!}\right)+\dots$$</p> <p>Equating coefficients gives (note that the left side is $1 + 0x + 0x^2 + 0x^3 + \dots$)</p> <p>\begin{align*} 1 &amp;= a_0 \\ 0 &amp;= a_1 \\ 0 &amp;= a_2 - \frac{a_0}{2} \\ 0 &amp;= a_3 - \frac{a_1}{2} \\ 0 &amp;= a_4 - \frac{a_2}{2} + \frac{a_0}{24} \end{align*}</p> <p>Solving this gives $a_0 = 1$, $a_1 = 0 = a_3$, $a_2 = \frac{1}{2}$ and $a_4 = \frac{5}{24}$, or</p> <p>$$\boxed{\sec{x} \approx 1 + \frac{1}{2} x^2 + \frac{5}{24} x^4}$$</p>
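A quick numerical check of these coefficients (my addition, not part of the original answer): the next term of the true series is $\frac{61x^6}{720}$, so the error of the three-term approximation should be $O(x^6)$:

```python
import math

def sec_approx(x):
    # The three terms computed above: 1 + x^2/2 + 5 x^4/24.
    return 1 + x**2 / 2 + 5 * x**4 / 24

# The omitted tail starts at 61 x^6 / 720, so the error is well below x^6.
for x in [0.01, 0.05, 0.1, 0.2]:
    err = abs(1 / math.cos(x) - sec_approx(x))
    assert err < x**6
```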
128,122
<p>Original Question: Suppose that $X$ and $Y$ are metric spaces and that $f:X \rightarrow Y$. If $X$ is compact and connected, and if to every $x\in X$ there corresponds an open ball $B_{x}$ such that $x\in B_{x}$ and $f(y)=f(x)$ for all $y\in B_{x}$, prove that $f$ is constant on $X$.</p> <p>Here's my attempt: Cover $X$ by $\bigcup _{x \in X}B_x$. Since $X$ is compact there is a finite subcover $\bigcup _{i=1}^N B_{x_i}$ of $X$. Given $x\in X$ there is an $i$ between $1$ and $N$ such that $x\in B_{x_{i}}$. By assumption $f(x)=f(x_{i})$. Since there are only finitely many balls covering $X$, $f(X)$ is finite, say $f(X)=\{a_{1},\ldots,a_{m}\}$.</p> <p>Where do I go from here? I want to show that $f(X)$ is a singleton. Is $X$ a singleton too?</p>
Brian M. Scott
12,042
<p>You’ve not yet used the fact that $X$ is connected, and that’s essential. Otherwise, $X$ and $Y$ could both be the two-point discrete space, and $f$ could be the identity map; all of the hypotheses except connectedness of $X$ would be satisfied, but $f$ wouldn’t be constant.</p> <p>Let’s go back to your finite cover $\{B_{x_1},B_{x_2},\dots,B_{x_N}\}$ of $X$, where $f$ is constant on each of the open balls $B_{x_k}$. Let $F=\{f(x_k):k=1,\dots,N\}$; $F$ is a finite subset of the metric space $Y$. In fact, $F$ is a discrete metric space with the metric inherited from $Y$, so you really have a continuous function $f:X\to F$. You also know that $X$ is connected. You probably know that compactness is preserved by continuous functions; what about connectedness? And what finite metric spaces are connected?</p> <p>I’ve added a full proof of a more general result, but since I didn’t want to give the full solution to the homework problem right away, I’ve spoiler-protected it; mouseover to see it.</p> <blockquote class="spoiler"> <p> <strong>Added:</strong> It’s worth noting that the conclusion follows from much weaker assumptions: $X$ need not be compact, and $f$ need not be continuous. Let $\mathscr{U}$ be the collection of open subsets of $X$ on which $f$ is constant, so that in particular $B_x\in\mathscr{U}$ for all $x\in X$. Fix $x_0\in X$, let $y=f(x_0)$, and let $$U_0=\bigcup\Big\{U\in\mathscr{U}:\forall x\in U\big[f(x)=y\big]\Big\}\;.$$ If $U_0=X$, we’re done: $f$ is constant on $X$ with value $y$. If not, let $$U_1=\bigcup\Big\{U\in\mathscr{U}:\exists x\in U\big[f(x)\ne y\big]\Big\}\;.$$ Then $f(x)=y$ for every $x\in U_0$, and $f(x)\ne y$ for every $x\in U_1$, so $U_0\cap U_1=\varnothing$. On the other hand, it’s clear that $U_0\cup U_1=X$, and $U_0$ and $U_1$, being unions of open sets, are certainly open, so $\{U_0,U_1\}$ is a disconnection of $X$. This contradicts the connectedness of $X$ and shows that in fact we must have $U_0=X$ and hence $f$ constant on $X$.</p> </blockquote>
1,354,015
<blockquote> <p>If <span class="math-container">$a, b, c, d, e, f, g, h$</span> are positive numbers satisfying <span class="math-container">$\frac{a}{b}&lt;\frac{c}{d}$</span> and <span class="math-container">$\frac{e}{f}&lt;\frac{g}{h}$</span> and <span class="math-container">$b+f&gt;d+h$</span>, then <span class="math-container">$\frac{a+e}{b+f} &lt; \frac{c+g}{d+h}$</span>.</p> </blockquote> <p>I thought it would be easy to prove, but I could not. How can this be proved? Thank you.</p> <p>The question is part of a bigger proof I am working on. There are two strictly concave, positive-valued, strictly increasing functions <span class="math-container">$f_1$</span> and <span class="math-container">$f_2$</span> (see Figure 1). Given 4 points <span class="math-container">$x_1$</span>, <span class="math-container">$x_2$</span>, <span class="math-container">$x_3$</span> and <span class="math-container">$x_4$</span> such that <span class="math-container">$x_1&lt; x_i$</span>, <span class="math-container">$i=2, 3,4$</span> and <span class="math-container">$x_4&gt; x_i$</span>, <span class="math-container">$i=1, 2, 3$</span>, let <span class="math-container">$d=x_2-x_1$</span>, <span class="math-container">$b=x_4-x_3$</span>, <span class="math-container">$c=f_1(x_2)-f_1(x_1)$</span>, <span class="math-container">$a=f_1(x_4)-f_1(x_3)$</span>. And given 4 points <span class="math-container">$y_1$</span>, <span class="math-container">$y_2$</span>, <span class="math-container">$y_3$</span> and <span class="math-container">$y_4$</span> such that <span class="math-container">$y_1&lt; y_i$</span>, <span class="math-container">$i=2, 3,4$</span> and <span class="math-container">$y_4&gt; y_i$</span>, <span class="math-container">$i=1, 2, 3$</span>, let <span class="math-container">$h=y_2-y_1$</span>, <span class="math-container">$f=y_4-y_3$</span>, <span class="math-container">$g=f_2(y_2)-f_2(y_1)$</span>, <span class="math-container">$e=f_2(y_4)-f_2(y_3)$</span>.</p> <p>Since the functions are concave, we have <span class="math-container">$\frac{a}{b}&lt;\frac{c}{d}$</span> and <span class="math-container">$\frac{e}{f}&lt;\frac{g}{h}$</span>. And I thought in this setting, it is true that <span class="math-container">$\frac{a+e}{b+f} &lt; \frac{c+g}{d+h}$</span> even without the restriction <span class="math-container">$b+f&gt;d+h$</span>.</p> <p><img src="https://i.stack.imgur.com/Onz2d.png" alt="Figure 1"></p>
Keith
244,951
<p>This is false, and it remains false even with the hypothesis $b+f&gt;d+h$.</p> <p>For example, take $$\frac{a}{b}=\frac{10}{2} &lt; \frac{6}{1}=\frac{c}{d}, \quad \frac{e}{f}=\frac{1}{10} &lt; \frac{1}{9}=\frac{g}{h},$$ so that $b+f=12&gt;10=d+h$, and yet $$\frac{a+e}{b+f}=\frac{11}{12} &gt; \frac{7}{10}=\frac{c+g}{d+h}.$$</p>
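For completeness, here is a quick exact-arithmetic check in Python (my addition; the numbers are my own, chosen so that all three hypotheses, including $b+f&gt;d+h$, hold while the conclusion fails):

```python
from fractions import Fraction as F

a, b, c, d = 10, 2, 6, 1      # a/b = 5 < 6 = c/d
e, f, g, h = 1, 10, 1, 9      # e/f = 1/10 < 1/9 = g/h
assert F(a, b) < F(c, d)
assert F(e, f) < F(g, h)
assert b + f > d + h          # 12 > 10
# ... and yet the claimed mediant inequality fails:
assert F(a + e, b + f) > F(c + g, d + h)   # 11/12 > 7/10
```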