3,600,139
<blockquote> <p>Solve <span class="math-container">$y''-y'-y=\cos x$</span>.</p> </blockquote> <p>After first solving the homogeneous equation we know that the solution to it is <span class="math-container">$$y(x)=a\sin(x)+b\cos(x).$$</span></p>
Ankita Pal
739,790
<p>I have used the D-operator method for this. <span class="math-container">$$y''-y'-y=\cos x\\ \implies(D^2-D-1)y=\cos x.$$</span> Let <span class="math-container">$$y=e^{mx}\\ \implies Dy=me^{mx}\\ \implies D^2y=m^2e^{mx}.$$</span> <span class="math-container">$$\therefore m^2-m-1=0\\ \implies m=\frac{1\pm\sqrt{1+4}}{2}=\frac{1\pm\sqrt{5}}{2}$$</span> So, C.F.=<span class="math-container">$c_1e^{\left(\frac{1+\sqrt{5}}{2}\right)x}+c_2e^{\left(\frac{1-\sqrt{5}}{2}\right)x}$</span> <span class="math-container">$$\begin{align}\\ P.I. &amp;=\frac{1}{D^2-D-1}\cos x\\ &amp; =\frac{1}{-1-D-1}\cos x\\ &amp; =-\frac{1}{D+2}\cos x\\ &amp; =-\frac{D-2}{D^2-4}\cos x\\ &amp; =-\frac{D-2}{-1-4}\cos x\\ &amp; =\frac{1}{5}(D-2)\cos x\\ &amp; =\frac{1}{5}(-\sin x-2\cos x)\\ \end{align}$$</span> Hence, the solution is <span class="math-container">$$y=\text{C.F.}+\text{P.I.}$$</span></p>
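As a sanity check on the answer's particular integral (my own sketch, using SymPy, which is assumed to be available), substituting it back into the left-hand side of the ODE should leave no residual:

```python
import sympy as sp

x = sp.symbols('x')

# Particular integral found by the D-operator method: (1/5)(-sin x - 2 cos x)
yp = sp.Rational(1, 5) * (-sp.sin(x) - 2 * sp.cos(x))

# Plug it back into the left-hand side of y'' - y' - y = cos x
residual = sp.simplify(yp.diff(x, 2) - yp.diff(x) - yp - sp.cos(x))
print(residual)  # 0, so yp is indeed a particular solution
```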
3,762,306
<p>I have the equation <span class="math-container">$$(e^x-1)-k\arctan(x) = 0$$</span> where <span class="math-container">$0&lt;k \leq \frac 2\pi$</span> and I was wondering how I would go about determining the number of real roots of this equation. So far I have just rearranged it into different equations for <span class="math-container">$x$</span>, but I'm unsure what to do with them.</p> <p>The rearrangements so far are <span class="math-container">$x = \ln(1+k\arctan(x))$</span> and <span class="math-container">$x = \tan\left(\frac{e^x-1}{k}\right)$</span></p>
Z Ahmed
671,540
<p>If <span class="math-container">$k&gt;0$</span>, let <span class="math-container">$$f(x)=e^x-1-k \tan^{-1} x \implies f'(x)=e^x-\frac{k}{1+x^2}.$$</span> Setting <span class="math-container">$f'(x)=0$</span> gives <span class="math-container">$e^x(1+x^2)=k$</span>, and since <span class="math-container">$\frac{d}{dx}\left[e^x(1+x^2)\right]=e^x(1+x)^2\geqslant 0$</span>, the left-hand side is increasing. So <span class="math-container">$f$</span> has at most one critical point (a minimum), and <span class="math-container">$f(x)=0$</span> has at most two real roots. As <span class="math-container">$f(-\infty)=-1+k\pi/2$</span>, for <span class="math-container">$0&lt;k\leq\frac{2}{\pi}$</span> we have <span class="math-container">$f(-\infty)\leq 0$</span>: <span class="math-container">$f$</span> stays negative on its decreasing branch, then increases through <span class="math-container">$f(0)=0$</span> towards <span class="math-container">$f(\infty)=\infty$</span>, so the one real root is <span class="math-container">$x=0$</span>.</p> <p>For <span class="math-container">$k&gt;2/\pi$</span>, both <span class="math-container">$f(-\infty)&gt;0$</span> and <span class="math-container">$f(\infty)&gt;0$</span>, so <span class="math-container">$f(x)=0$</span> has an even number of real roots, counted with multiplicity. Since <span class="math-container">$x=0$</span> is always a root and <span class="math-container">$f$</span> has at most one minimum, there are exactly two real roots when <span class="math-container">$k&gt;2/\pi$</span>.</p>
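As a numeric sanity check (my own sketch, not part of the answer), counting sign changes of $f$ on a grid reproduces the root counts for a sample $k$ on each side of $2/\pi$:

```python
import numpy as np

def f(x, k):
    """f(x) = e^x - 1 - k*arctan(x) from the answer."""
    return np.exp(x) - 1 - k * np.arctan(x)

def count_sign_changes(k, lo=-10.0, hi=10.0, n=20001):
    # Offset the grid slightly so the exact root x = 0 does not land on a node.
    xs = np.linspace(lo, hi, n) + 3e-4
    signs = np.sign(f(xs, k))
    return int(np.sum(signs[:-1] != signs[1:]))

print(count_sign_changes(0.5))  # one root (only x = 0), since 0 < 0.5 < 2/pi
print(count_sign_changes(1.5))  # two roots, since 1.5 > 2/pi
```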
7,354
<p>Some of my students refer to there being an invisible $-1$ in front of the expression $-(x + 4)$ or in the exponent of $x$. While it is not phrased mathematically, I am ok with them saying this because it reminds them to distribute fully before simplifying etc. It got me thinking, though: is there a reason not to teach students to always write in $1$ wherever there is a single variable/unknown/etc such as $1(x^1+40)-1(3x^1 - 2)$? I know that it is not done in general because it is incredibly repetitive and annoying, but is there any reason why not to teach students to do this? Eventually they will be comfortable enough that they can imply the $1$, but I feel like a lot of my students would benefit from this, especially when simplifying exponential expressions and distributing negative signs. Are there any downsides to this, or is it just de facto mathematics education? </p>
Dirk
3,398
<p>Actually, I don't think that it is quite right that there really "is an invisible $-1$" in front of $-x$. I would say that one <strong>defines the symbol $-x$ to be the one that fulfills $x + (-x) = 0$</strong> and then one shows that $(-1) x =-x$.</p> <p>Ok, you don't teach that at high-school level, but the message that $-x$ is a symbol on its own is important. If there were no such symbol $-x$ for any $x$, it would be rather strange to have it for the special case of $x=1$. </p> <p>The reason for teaching this way is that mathematics is precisely about the accurate and concise description of facts. Leaving out unnecessary parts is a crucial point on the way to mathematical thinking. </p> <p>As an example:</p> <blockquote> <p>If n is an integer greater than 1, then either n is prime or n is a finite product of primes.</p> </blockquote> <p>This formulation is not good in a mathematical sense, since it contains many unnecessary things. Much crisper and, I guess, even clearer, is:</p> <blockquote> <p>A positive integer is a finite product of primes.</p> </blockquote> <p>Writing concisely, omitting everything unnecessary, is part of mathematical education.</p>
7,354
<p>Some of my students refer to there being an invisible $-1$ in front of the expression $-(x + 4)$ or in the exponent of $x$. While it is not phrased mathematically, I am ok with them saying this because it reminds them to distribute fully before simplifying etc. It got me thinking, though: is there a reason not to teach students to always write in $1$ wherever there is a single variable/unknown/etc such as $1(x^1+40)-1(3x^1 - 2)$? I know that it is not done in general because it is incredibly repetitive and annoying, but is there any reason why not to teach students to do this? Eventually they will be comfortable enough that they can imply the $1$, but I feel like a lot of my students would benefit from this, especially when simplifying exponential expressions and distributing negative signs. Are there any downsides to this, or is it just de facto mathematics education? </p>
Jack M
115
<p>It depends what you mean by "teach them to do it". If you mean "insist that they do it, correct them every time they don't and take off marks on a test", then no. If you mean "let them know about this trick as a way to remember how to distribute correctly, but don't insist they do it if they can remember without it", then sure.</p> <p>You wouldn't take off marks if someone didn't write "Soh Cah Toa" next to all their trig problems, and you wouldn't insist someone use training wheels if they're perfectly capable of riding a bike without them. But if you have students distributing $5-5(2+x)$ as $5-10+5x$ or something rather than $5-10-5x$, then I can't really think of a <em>better</em> way of explaining to them why this is wrong than the "invisible $-1$" notion.</p>
1,652,381
<p>Suppose $f$ is differentiable on $\mathbb R$ and that $\lim_{t\to x}f'(t)=\ell$. Show that $$\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}=\ell.$$</p> <p>The proof in my course goes like this:</p> <p>By the mean value theorem, there is $y_h\in ]x,x+h[$ s.t. $$f(x+h)-f(x)=f'(y_h)h$$ and thus $$\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}=\lim_{h\to 0}f'(y_h)=\ell.$$</p> <p>QED.</p> <p><strong>Question 1: Didn't we only prove that $$\lim_{h\to 0^+}\frac{f(x+h)-f(x)}{h}=\ell\ \ ?$$</strong></p> <p><strong>Question 2: Maybe by $]x,x+h[$ they mean $\{y\mid x&lt;y&lt;x+h\}$ if $h&gt;0$ and $\{y\mid x+h&lt;y&lt;x\}$ if $h&lt;0$, no?</strong></p>
Dietrich Burde
83,966
<p>The next term is again $27$; then we have the palindromic sequence $$ 7,16,8,27,9,27,8,16,7,\ldots, $$ repeating like this. </p>
2,816,365
<p><a href="https://i.stack.imgur.com/T1W5r.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T1W5r.jpg" alt="enter image description here"></a></p> <p>I don't understand how it goes from line four to line five in the proof. Do you need to use induction? We haven't covered it yet.</p>
Michael Hardy
11,667
<p>It's similar to the concept of "almost everywhere". Suppose that for every subset $T\subseteq S,$ you write $$ \mu(T) \begin{cases} =1 &amp; \text{if } T\in F, \\ = 0 &amp; \text{if } S\smallsetminus T\in F, \\ \text{is undefined} &amp; \text{otherwise.} \end{cases} $$</p> <p>Then, according to the definition of "filter", you have \begin{align} &amp; \mu(\varnothing) = 0 \\[6pt] &amp; \mu(S) = 1 \\[6pt] &amp; \text{If } \mu(T_1), \mu(T_2) \text{ both exist, and } T_1\cap T_2 = \varnothing, \\ &amp; \text{then } \mu(T_1\cup T_2) = \mu(T_1) + \mu(T_2). \end{align}</p> <p>Saying $\{x\in S: P(x)\} \in F$ is the same as saying $P(x)$ for almost all $x\in S.$</p>
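For a concrete (principal) ultrafilter on a small finite set, this 0-1 "measure" can be written out directly; the set and generator below are my own illustration, not from the answer:

```python
from itertools import chain, combinations

S = frozenset({1, 2, 3, 4})
# Principal ultrafilter on S generated by the point 1: all subsets containing 1.
F = {T for T in map(frozenset, chain.from_iterable(
        combinations(S, r) for r in range(len(S) + 1))) if 1 in T}

def mu(T):
    """The finitely additive 0-1 'measure' from the answer (None = undefined)."""
    T = frozenset(T)
    if T in F:
        return 1
    if (S - T) in F:
        return 0
    return None

print(mu(set()), mu(S))        # 0 1
print(mu({1, 3}), mu({2, 4}))  # 1 0
```

For an ultrafilter, every subset or its complement is a member, so `mu` is defined everywhere and is additive on disjoint sets.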
2,816,365
<p><a href="https://i.stack.imgur.com/T1W5r.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T1W5r.jpg" alt="enter image description here"></a></p> <p>I don't understand how it goes from line four to line five in the proof. Do you need to use induction? We haven't covered it yet.</p>
Jesus Christ is True God
1,129,863
<p>Let <span class="math-container">$X=\{1,2,3\}$</span> and choose some subsets of <span class="math-container">$X$</span>, say <span class="math-container">$F=\{\{1\},\{1,2\},\{1,3\},\{1,2,3\}\}$</span>. Then every intersection of an element of <span class="math-container">$F$</span> with another element of <span class="math-container">$F$</span> is in <span class="math-container">$F$</span> again. Examples: <span class="math-container">$$\{1\}\cap\{1,2,3\}=\{1\},\qquad \{1,2\}\cap\{1,2,3\}=\{1,2\},$$</span> <span class="math-container">$$\{1,3\}\cap\{1,2,3\}=\{1,3\},\qquad \{1,2,3\}\cap\{1,2,3\}=\{1,2,3\}.$$</span></p> <p>Also, the original set <span class="math-container">$X=\{1,2,3\}$</span> is itself in <span class="math-container">$F$</span>. Here <span class="math-container">$F=\{\{1\},\{1,2\},\{1,3\},\{1,2,3\}\}$</span> is called a filter on <span class="math-container">$X=\{1,2,3\}$</span>.</p> <p>Suppose instead we have the collection <span class="math-container">$G=\{\{1\},\{1,2\},\{1,3\},\{2,3\},\{1,2,3\}\}$</span>. Then <span class="math-container">$\{1,3\}\cap\{2,3\}=\{3\}$</span>, but <span class="math-container">$\{3\}$</span> isn't in <span class="math-container">$G$</span>, so <span class="math-container">$G$</span> is not a filter.</p> <p>Now, with <span class="math-container">$F=\{\{1\},\{1,2\},\{1,3\},\{1,2,3\}\}$</span>, can we add any other subset so that the result is still a filter? Not in this case: any subset not containing <span class="math-container">$1$</span> would intersect <span class="math-container">$\{1\}$</span> in the empty set. So on <span class="math-container">$X=\{1,2,3\}$</span>, <span class="math-container">$F=\{\{1\},\{1,2\},\{1,3\},\{1,2,3\}\}$</span> is an ultrafilter.</p> <p>If we had started with, say, <span class="math-container">$H=\{\{1\},\{1,2\},\{1,2,3\}\}$</span>, this is still a filter on <span class="math-container">$X=\{1,2,3\}$</span>, but we can add <span class="math-container">$\{1,3\}$</span> and it is still a filter. So on <span class="math-container">$X=\{1,2,3\}$</span>, <span class="math-container">$F=\{\{1\},\{1,2\},\{1,3\},\{1,2,3\}\}$</span> is an ultrafilter, while <span class="math-container">$H=\{\{1\},\{1,2\},\{1,2,3\}\}$</span> is a filter but not an ultrafilter.</p> <p>Now suppose <span class="math-container">$X=\{1,2,3,4\}$</span> and let <span class="math-container">$F=\{\{1,4\},\{1,2,4\},\{1,3,4\},\{1,2,3,4\}\}$</span>. Every intersection of elements of <span class="math-container">$F$</span> is in <span class="math-container">$F$</span> again; for example <span class="math-container">$$\{1,4\}\cap\{1,2,4\}=\{1,4\},\qquad \{1,2,4\}\cap\{1,3,4\}=\{1,4\},\qquad \{1,3,4\}\cap\{1,3,4\}=\{1,3,4\}.$$</span> Also <span class="math-container">$X=\{1,2,3,4\}$</span> is in <span class="math-container">$F$</span>, and the empty set <span class="math-container">$\varnothing=\{\}$</span> is not in <span class="math-container">$F$</span>. We call this <span class="math-container">$F$</span> a filter, but not an ultrafilter, on <span class="math-container">$X=\{1,2,3,4\}$</span>, because we can still add subsets containing <span class="math-container">$1$</span> (for instance <span class="math-container">$\{1\}$</span>, then <span class="math-container">$\{1,2\}$</span> and <span class="math-container">$\{1,3\}$</span>) and keep a filter. Once the family consists of all subsets containing <span class="math-container">$1$</span>, no further subset of <span class="math-container">$X=\{1,2,3,4\}$</span> can be added without violating closure under intersection, and that family is an ultrafilter on <span class="math-container">$X=\{1,2,3,4\}$</span>.</p> <p>There is one more collection of subsets of <span class="math-container">$X=\{1,2,3,4\}$</span> worth mentioning: the power set <span class="math-container">$P=\{\{\},\{1\},\{2\},\{3\},\{4\},\{1,2\},\{1,3\},\{1,4\},\{2,3\},\{2,4\},\{3,4\},\{1,2,3\},\{1,2,4\},\{1,3,4\},\{2,3,4\},\{1,2,3,4\}\}$</span>. Every intersection of its elements is again in the power set, but it contains the empty set <span class="math-container">$\varnothing=\{\}$</span>, so it is not a proper filter according to the article in Wikipedia, and in particular not an ultrafilter.</p>
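The intersection checks above can be automated; the families $F$, $G$, $H$ below are the ones from the answer (on $X=\{1,2,3\}$), and the helper name is my own:

```python
from itertools import combinations

def closed_under_intersection(family):
    """True iff every pairwise intersection of members is again a member."""
    fam = [frozenset(s) for s in family]
    return all((a & b) in fam for a, b in combinations(fam, 2))

F = [{1}, {1, 2}, {1, 3}, {1, 2, 3}]
G = [{1}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}]
H = [{1}, {1, 2}, {1, 2, 3}]

print(closed_under_intersection(F))  # True
print(closed_under_intersection(G))  # False: {1,3} ∩ {2,3} = {3} is missing
print(closed_under_intersection(H))  # True, and H with {1,3} added stays closed
```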
2,065,027
<p>$$\newcommand{\gcd}{\text{gcd}}$$</p> <blockquote> <p>Prove: if $d=\gcd(m,n)$ then $\gcd\left(\frac{m}{d},\frac{n}{d}\right)=1$</p> </blockquote> <p>Intuitively it is obvious, but I am having a hard time formalizing the proof. What I have come to is this:</p> <p>$d=\gcd(m,n)$, so $d\mid m$ and $d\mid n$, therefore $m=dx$ and $n=dy$. Now if $\gcd\left(\frac{m}{d},\frac{n}{d}\right)\neq 1$, that means that $m$ and $n$ have a common factor after division by $d$, which is the greatest common divisor; contradiction.</p>
Leox
97,339
<p>Suppose that $\gcd\left(\frac{m}{d},\frac{n}{d}\right)=d'&gt;1.$ Then $d' \mid \frac{m}{d}$ and $d' \mid \frac{n}{d}$. It implies that $d'd \mid m$ and $d'd \mid n$, so we have found a common divisor $dd'$ which is greater than $d.$ Contradiction.</p>
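A quick brute-force confirmation of the statement (my own sketch, standard library only):

```python
from math import gcd

# Spot-check gcd(m/d, n/d) == 1 with d = gcd(m, n) over a small range.
for m in range(1, 60):
    for n in range(1, 60):
        d = gcd(m, n)
        assert gcd(m // d, n // d) == 1

print("gcd(m/d, n/d) = 1 verified for 1 <= m, n < 60")
```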
168,495
<p>I am being asked to calculate the area of a triangle with a side-length of 15.5 inches. </p> <p>The formula for calculating a regular polygon's area is $\frac{1}{2}Pa$,</p> <p>where $P$ is the perimeter of the polygon and $a$ is the apothem. I am completely lost.</p>
André Nicolas
6,312
<p><strong>Hint:</strong> The calculations are too elaborate. Why do we have $c_1$, $c_2$, and $c_3$? There is one and only one constant term $c$. It can be easily eliminated, resulting in two linear equations involving the unknowns $a$ and $b$.</p> <p>For the elimination process, solving and substituting works, but is not a good idea. Take the first two equations, subtract. Now $c$ is gone. Take the second equation and the third, subtract. Again, $c$ is gone. We now have two linear equations for $a$ and $b$. Eliminate $b$ by multiplying the first new equation through by $x_2-x_3$, and the second new equation by $x_1-x_2$, so that the coefficients of $b$ match. Subtract, and solve for $a$. Do not expand anything you do not need to. </p> <p>For a fancier approach that generalizes nicely, look up the <a href="http://en.wikipedia.org/wiki/Vandermonde_matrix" rel="nofollow">Vandermonde Determinant.</a> </p> <p><strong>Edit:</strong> After the comments about the multiple $c_i$, these were removed. The substitution process used is not the optimal approach, but it works. Back substitution is now needed to find the other two coefficients. </p>
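The elimination the hint describes can be sketched numerically; the sample points below are my own (generated from $y=x^2+2x+3$), not from the question:

```python
# Fit y = a*x^2 + b*x + c through three points by the elimination in the hint:
# subtract adjacent equations to remove c, then scale and subtract to remove b.
pts = [(1.0, 6.0), (2.0, 11.0), (4.0, 27.0)]  # generated from y = x^2 + 2x + 3
(x1, y1), (x2, y2), (x3, y3) = pts

# First differences: c is gone.
# (x1^2 - x2^2) a + (x1 - x2) b = y1 - y2, and similarly for the (2, 3) pair.
p, q, r = x1**2 - x2**2, x1 - x2, y1 - y2
s, t, u = x2**2 - x3**2, x2 - x3, y2 - y3

# Eliminate b: multiply the first equation by t, the second by q, subtract.
a = (r * t - u * q) / (p * t - s * q)
b = (r - p * a) / q          # back-substitute for b
c = y1 - a * x1**2 - b * x1  # and then for c
print(a, b, c)  # 1.0 2.0 3.0
```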
1,246,540
<p>$$ x ^{x } = e^{\ln x^x } $$ $$ \lim_{x\rightarrow 0} x^x = \;? $$</p> <p>I need to find the limit of $x$ to the power of $x$ as $x$ approaches $0$ using l'Hopital's rule. From a previous part there is a hint that I should use the first equation somehow; however, I am confused about how to rearrange the expression into a fraction where both numerator and denominator have limits of $0$ or infinity. </p>
5xum
112,884
<p>Use these two rules:</p> <ul> <li>For every $a&gt;0, b$: $$\ln(a^b) = b\ln(a)$$</li> <li>If the limit $$\lim_{x\to a} f(x)$$ exists and $g$ is continuous, then</li> </ul> <p>$$\lim_{x\to a}g(f(x)) = g\left(\lim_{x\to a} f(x)\right)$$</p>
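Applied to $x^x=e^{x\ln x}$: since $x\ln x\to 0$ as $x\to 0^+$ (which is where l'Hopital on $\frac{\ln x}{1/x}$ comes in), the two rules give a limit of $e^0=1$. A quick numeric check (my own sketch):

```python
# x^x = exp(x*ln x); as x -> 0+, x*ln x -> 0, so the limit is exp(0) = 1.
for x in (1e-2, 1e-4, 1e-8):
    print(x, x ** x)  # the second column approaches 1
```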
130,235
<p>I need to find </p> <p>$$f(n) = \int^\infty_0 t^{n-1} e^{-t} dt$$</p> <p>So I think I find the indefinite integral first? But what do I do with $n$, since I am integrating with respect to $t$?</p>
copper.hat
27,978
<p>If $n&gt;0$ is an integer, here is another approach: Let $I(x) = \int^\infty_0 e^{-x t} dt$, with $x&gt;0$. It is straightforward to evaluate $I(x) = \frac{1}{x}$, and notice that $f(1) = I(1) = 1$. To continue, notice that $\frac{d I(x)} {d x} = \int^\infty_0 (-t) e^{-x t} dt$, and by direct computation, $\frac{d I(x)} {d x} = -\frac{1}{x^2}$, so $f(2) = -\frac{d I(1)} {d x} = 1$. The process may be continued by induction.</p>
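A crude numeric sketch (my own, plain trapezoid rule on a truncated interval) confirming $f(n)=(n-1)!$ for small integer $n$, consistent with the induction above:

```python
import math

def f_n(n, T=60.0, steps=60_000):
    """Trapezoid approximation of the integral of t^(n-1) e^(-t) over [0, T]."""
    h = T / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0  # trapezoid endpoint weights
        total += w * t ** (n - 1) * math.exp(-t)
    return total * h

for n in (1, 2, 3, 4, 5):
    print(n, f_n(n), math.factorial(n - 1))  # the two columns agree
```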
130,235
<p>I need to find </p> <p>$$f(n) = \int^\infty_0 t^{n-1} e^{-t} dt$$</p> <p>So I think I find the indefinite integral first? But what do I do with $n$, since I am integrating with respect to $t$?</p>
Sidharth Iyer
25,645
<p><a href="http://en.wikipedia.org/wiki/Gamma_function" rel="nofollow">$$\int^\infty_0 t^{n-1} e^{-t} dt=\Gamma(n)$$</a></p>
130,235
<p>I need to find </p> <p>$$f(n) = \int^\infty_0 t^{n-1} e^{-t} dt$$</p> <p>So I think I find the indefinite integral first? But what do I do with $n$, since I am integrating with respect to $t$?</p>
Sangchul Lee
9,340
<p>Here is also another approach, though a posteriori.</p> <p>Tonelli's theorem enables us to exchange the order of integration and summation of a sequence of nonnegative functions. Thus for $ 0 &lt; r &lt; 1 $, $$\begin{align*} \sum_{n=0}^{\infty} \frac{r^n}{n!} \int_{0}^{\infty} x^n e^{-x} \; dx &amp; = \int_{0}^{\infty} \sum_{n=0}^{\infty} \frac{(rx)^n}{n!} e^{-x} \; dx = \int_{0}^{\infty} e^{-(1 - r)x} \; dx \\ &amp; = \frac{1}{1 - r} = \sum_{n=0}^{\infty} r^{n}. \end{align*}$$</p> <p>Actually, it's essentially identical to copper.hat's approach, since it is just the Taylor expansion of $f(1-r)$ with his/her $f$.</p>
24,412
<p>Trying to plot the following two functions to show points of intersection.</p> <pre><code>2 x + y - 1 == 0, x - y + 2 == 0
ContourPlot[{2 x + y - 1 == 0, x - y + 2 == 0}, {x, -5, 5}, {y, -5, 5}]
</code></pre> <p>The above shows the plots, but I find it difficult to see the point of intersection. I suspect that there is a better method than this. </p> <p>Please suggest a good method to plot such equations.</p>
Jonathan Shock
5,081
<p>If you want to know the value of the points then you can use <code>Solve</code>. If you want to be able to see the intersection point highlighted then you can add a point at that value of <code>x</code> and <code>y</code>:</p> <pre><code>eqs = {2 x + y - 1 == 0, x - y + 2 == 0}
sol = {x, y} /. Solve[eqs, {x, y}]
Show[ListPlot[sol, PlotStyle -&gt; PointSize[0.02]],
 ContourPlot[Evaluate[eqs], {x, -5, 5}, {y, -5, 5}]]
</code></pre>
2,505,216
<blockquote> <p>How can I evaluate<br> <span class="math-container">$$ \lim_{n \rightarrow \infty} \frac{n\sin n}{2n^2- 1}? $$</span></p> </blockquote> <hr> <p>Unsuccessful <strong>attempt</strong>:</p> <p>In the expression <span class="math-container">$\frac{n\sin n}{2n^2 - 1}$</span>, I divided the numerator and denominator by <span class="math-container">$n^2$</span>, but I got stuck with <span class="math-container">$\frac{\sin n}{n}$</span> and I do not know how to go on.</p> <p>Any help will be appreciated.</p>
Invisible
721,644
<p>With your work:</p> <p><span class="math-container">$L=\lim\limits_{n\to\infty}\frac{n\sin n}{2n^2-1}=\lim\limits_{n\to\infty}\frac{\frac{\sin n}n}{\color{red}{\underbrace{2-\frac1{n^2}}_{\in [1,2)\ \forall n\geqslant 1}}}$</span></p> <p>As mentioned: <span class="math-container">$$-1\leqslant\sin n\leqslant 1\Bigg/\cdot\frac1n\implies\boxed{-\underset{\Big\downarrow\\0}{\frac1n}\leqslant\frac{\sin n}n\leqslant\underset{\Big\downarrow\\0}{\frac1n}\implies \lim_{n\to\infty}\frac{\sin n}n=0}$$</span> <span class="math-container">$\implies L=\lim\limits_{n\to\infty}\frac{\frac{\sin n}n}{2-\frac1{n^2}}=0$</span></p>
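A quick numeric check of the squeeze (my own sketch): the terms are bounded in absolute value by $\frac{n}{2n^2-1}\sim\frac{1}{2n}$, so they shrink like $1/n$:

```python
import math

def a(n):
    """The sequence n*sin(n) / (2n^2 - 1) from the question."""
    return n * math.sin(n) / (2 * n ** 2 - 1)

# |a(n)| <= n / (2n^2 - 1), which behaves like 1/(2n) for large n.
for n in (10, 1000, 10 ** 6):
    print(n, a(n), 1 / (2 * n))
```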
2,223,788
<p>$R$ (1 is not assumed to be in $R$) is a prime right Goldie ring (finite uniform dimension and ACC on right annihilators) which contains a minimal right ideal. Show that $R$ must be a simple Artinian ring. </p> <p>This appeared in a past paper; the first two parts, which I managed to prove, were "Every essential right ideal of a semi-prime right Goldie ring contains a regular element" and "Every non-zero ideal of a prime ring must be essential as a right ideal". There is also an extra hint that I might have to use Artin-Wedderburn.</p> <p>I've been trying (and failing) to find a way to use these results, since I can't assume (or prove) that the minimal right ideal is an ideal (to use the previous result). I also have that the minimal right ideal is of the form $eR$ where $e$ is idempotent, but I feel like I'm barking up the wrong tree. Any help is appreciated, thanks.</p>
rschwieb
29,335
<p>By primeness of $R$, the minimal right ideal of $R$ must be faithful, so the ring is actually right primitive, and its Jacobson radical is zero.</p> <p>There exists, then, a collection of maximal regular right ideals $\{M_i\mid i\in I\}$ such that $J(R)=\bigcap_{i\in I} M_i=\{0\}$. This allows the natural embedding of $R_R\hookrightarrow\prod_{i\in I} R/M_i$.</p> <p>Going back to the assumptions, we know that $soc(R_R)$ is an essential right ideal of $R$. Since $R$ is Goldie, the semisimple module $soc(R_R)$ is finitely generated. It's well known that if $R$ has a finitely generated essential right socle, then it is finitely cogenerated as a module. The embedding $R_R\hookrightarrow\prod_{i\in I} R/M_i$ therefore restricts to a finite subset $F\subseteq I$ so that $R_R\hookrightarrow\prod_{i\in F} R/M_i$.</p> <p>But $\prod_{i\in F} R/M_i=\bigoplus_{i\in F}R/M_i$, so $R$ is embedded in this finitely generated completely reducible module, and therefore is completely reducible itself, hence Artinian.</p> <p>According to your notes, this prime Artinian ring has an identity and is a simple Artinian ring by the Artin-Wedderburn theorem.</p>
1,318,884
<blockquote> <p>Let S be a piecewise smooth oriented surface in <span class="math-container">$\mathbb{R}^3$</span> with positive oriented piecewise smooth boundary curve <span class="math-container">$\Gamma:=\partial S$</span> and <span class="math-container">$\Gamma : X=\gamma(t), t\in [a,b]$</span> a rectifiable parametrization of <span class="math-container">$\Gamma$</span>. Imagine <span class="math-container">$\Gamma$</span> is a wire in which a current I flows through. Then</p> <p><span class="math-container">$$m:=\frac{I}{2}\int_a^b\gamma(t)\times \dot{\gamma}(t)dt$$</span></p> <p>is the magnetic moment of the current.</p> <p>Show that for an arbitrary <span class="math-container">$u\in \mathbb{R}^3$</span></p> <p><span class="math-container">$$m\cdot u=I\int_Su\cdot d\Sigma$$</span> is true.</p> </blockquote> <p>I tried doing this with Stokes but I can't seem to get to the desired equation. The teacher gave us a hint: <span class="math-container">$k_u(x):=\frac{1}{2}u\times x$</span> is a vector field and <span class="math-container">$\operatorname{curl}k_u = u$</span>.</p> <p>Any tips or hints? I would appreciate it.</p>
Steven Alexis Gregory
75,410
<p>Let's just do this directly. Let $g = \gcd(m,n)$. We need to prove that $\operatorname{lcm}(m,n) = \dfrac{mn}{g}$.</p> <hr> <p>STEP $0$. (Preliminary stuff.)</p> <p>DEFINITION $1$. $L = \operatorname{lcm}(m,n)$ if and only if</p> <pre><code> 1. L is a multiple of m and of n. 2. If C is a multiple of m and of n, then C is a multiple of L. </code></pre> <p>LEMMA $2$. If $\gcd(a,b) = 1$ and $a \mid bc$, then $a \mid c$.</p> <p>PROOF. If $\gcd(a,b) = 1$, then there exist integers $A$ and $B$ such that $aA + bB = 1$. It follows that $acA + bcB = c$. Since $a \mid acA$ and $a \mid bcB$, then $a \mid c$.</p> <hr> <p>STEP $1$. $\dfrac{mn}{g}$ is a common multiple of $m$ and of $n$.</p> <p>This is true because $\dfrac m g$ and $\dfrac n g$ are integers and $\dfrac{mn}{g} = \dfrac{m}{g}n = m \dfrac{n}{g}$.</p> <hr> <p>STEP $2$. If $G$ is a common multiple of $m$ and of $n$, then $G$ is a multiple of $\dfrac{mn}{g}$. </p> <p>Suppose $G = mM = nN$ for some integers $M$ and $N$. Then $\dfrac G g = \dfrac m g M = \dfrac n g N$.</p> <p>Since $\gcd\left( \dfrac m g, \dfrac n g \right) = 1$, and $\dfrac m g M = \dfrac n g N$, then, by LEMMA $2$, $\dfrac m g \mid N$, say $N = \dfrac m g N'$ for some integer $N'$.</p> <p>So $\dfrac{G}{g} = \dfrac{n}{g} N = \dfrac{m}{g} \dfrac{n}{g} N'$. It follows that $G = \dfrac{mn}{g} N'$ and so $G$ is a multiple of $\dfrac{mn}{g}$.</p> <hr> <p>From STEP $1$, STEP $2$, and DEFINITION $1$, we can conclude that $\operatorname{lcm}(m,n) = \dfrac{mn}{\gcd(m,n)}$.</p>
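The identity just proved can be spot-checked by brute force; this sketch (my own) verifies both defining properties of the lcm on a small range:

```python
from math import gcd

def lcm(m, n):
    """lcm via the identity proved above: lcm(m, n) = m*n // gcd(m, n)."""
    return m * n // gcd(m, n)

for m in range(1, 40):
    for n in range(1, 40):
        L = lcm(m, n)
        assert L % m == 0 and L % n == 0                 # a common multiple...
        assert all(k % n != 0 for k in range(m, L, m))   # ...and the least one

print(lcm(12, 18))  # 36
```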
2,042,476
<p>In this question, I'm assuming the definition of Riemann Integrability.<br> a) Produce an example of a sequence $f_n \rightarrow 0$ pointwise on $[0,1]$ where $\lim_{n\to\infty}\int_0^1f_n$ does not exist.<br> b) Produce an example of a sequence $g_n$ with $\int_0^1g_n\to0$ but $g_n(x)$ does not converge to zero for any $x\in[0,1]$. Let's insist that $g_n(x)\geq 0$ for all $x$ and $n$.</p> <p>For part (a), I thought of the function sequence $$f_n(x) = \begin{cases}1/x &amp; \text{if }0&lt;x&lt;1/n\\0 &amp;\text{if }x=0 \text{ or }x\geq1/n\end{cases}$$ This seems to work. I'm not sure about part (b) though. I would appreciate it if someone could check part (a) and offer some help with part (b).</p>
Marcus M
215,322
<p>For part $a$, your answer works, but none of the integrals exist. Try to see if you can alter your example so that each $f_n$ is integrable, but the limit of the integrals doesn't exist.</p> <p>For $b$, try to think about each $g_n$ being an indicator function of a set; if the indicator functions get narrow quite slowly and slide across $[0,1]$, then the integrals will go to zero. It is possible to choose functions so that the $g_n$'s will not converge to zero pointwise.</p>
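One standard construction matching the hint for part (b) is the "typewriter" sequence of indicator functions of dyadic blocks sliding across $[0,1]$; this is my own sketch, not part of the answer:

```python
# g_n is the indicator of the n-th dyadic block: scale m = 0 gives [0,1),
# scale m = 1 gives [0,1/2), [1/2,1), and so on. The blocks shrink, so the
# integrals 1/2^m go to 0, yet every x in [0,1) lies in one block per scale,
# so g_n(x) = 1 infinitely often and g_n(x) does not converge to 0.
def block(n):
    """Return the n-th interval [k/2^m, (k+1)/2^m) in the enumeration."""
    m = 0
    while n >= 2 ** m:   # peel off complete scales to find the scale of n
        n -= 2 ** m
        m += 1
    return (n / 2 ** m, (n + 1) / 2 ** m)

integrals = [b - a for a, b in (block(n) for n in range(15))]
print(integrals)  # 1, then 1/2 twice, 1/4 four times, 1/8 eight times, ...

x = 0.3
hits = sum(a <= x < b for a, b in (block(n) for n in range(1023)))
print(hits)  # range(1023) covers scales 0..9 fully, and x is hit once per scale
```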
110,040
<p>Can someone explain how I can find the matrix $A$ from the matrix equation $$A^+BA=C$$ where $B$ and $C$ are known square matrices, $A^+$ denotes the Hermitian conjugate, and we are given a constraint: $\det(A)=a$ where $a$ is a known constant.</p>
draks ...
19,341
<p>Using J.D.'s result, I think it's possible to solve it via the superoperator formalism: $$ A=EAD \Rightarrow {\rm vec}A = (D^T\otimes E){\rm vec}A, $$ where ${\rm vec}A$ is a vector with all columns of $A$ stacked. So ${\rm vec}A$ is an eigenvector of $(D^T\otimes E)$ with eigenvalue $+1$.</p>
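The identity behind this step, ${\rm vec}(EAD)=(D^T\otimes E)\,{\rm vec}A$ with column-major stacking, can be checked numerically (my own sketch; `numpy` assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
E, A, D = (rng.standard_normal((3, 3)) for _ in range(3))

def vec(M):
    """Stack the columns of M into one vector (column-major flatten)."""
    return M.flatten(order="F")

# The superoperator identity: vec(E A D) = (D^T kron E) vec(A)
lhs = vec(E @ A @ D)
rhs = np.kron(D.T, E) @ vec(A)
print(np.allclose(lhs, rhs))  # True
```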
110,040
<p>Can someone explain how I can find the matrix $A$ from the matrix equation $$A^+BA=C$$ where $B$ and $C$ are known square matrices, $A^+$ denotes the Hermitian conjugate, and we are given a constraint: $\det(A)=a$ where $a$ is a known constant.</p>
Community
-1
<p>First, J.D.'s result $$A = B^{-1} B^{+} A (C^{+})^{-1} C$$ is valid for both singular and non-singular matrices if you use the Moore-Penrose inverse in place of the regular inverse.</p> <p>Then, as suggested by draks, the Kronecker-vec formalism can be employed to find an eigenvector $\vec{a} = {\rm vec}A$ associated with the $+1$ eigenvalue (if such an eigenvalue exists). </p> <p>Finally, column un-stacking of $\vec{a}$ yields $A$, which now only needs to be multiplied by an appropriate scalar to satisfy the constraint on $\det(A)$.</p> <p>As noted by Martin, the value of this constraint must itself satisfy a constraint: $\det(A) = \sqrt{\det(C)/\det(B)}$</p>
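The determinant relation follows from $\det(A^+BA)=\overline{\det A}\,\det B\,\det A=|\det A|^2\det B$; a numerical sanity check with randomly chosen matrices (my own sketch; `numpy` assumed):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random complex A and Hermitian B; set C = A^+ B A and check that
# det(C) = |det(A)|^2 det(B), i.e. |det(A)| = sqrt(det(C)/det(B)).
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = M + M.conj().T                      # Hermitian, so det(B) is real

C = A.conj().T @ B @ A
lhs = abs(np.linalg.det(A))
rhs = np.sqrt(np.linalg.det(C).real / np.linalg.det(B).real)
print(np.isclose(lhs, rhs))  # True
```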
1,379,899
<p>I'm trying to achieve a better conception of what it means to "divide out" a variable/number, because I currently have a lot of trouble justifying to myself why it actually works the way it does in certain contexts. I apologize if this is too much of an elementary question, but I can't seem to come to a satisfying conclusion.</p> <p>I understand division in the context of explicit numbers but <a href="http://www.econ.nyu.edu/user/ramseyj/textbook/pg215.219.pdf" rel="nofollow">here</a> (see pg. 3) is an example of where I get confused:</p> <p>"so the number of ways of selecting r objects and ignoring all permutations of the remaining $(n − r)$ objects not chosen is to divide out the unwanted permutations:</p> <p>No. of perms. $= \dfrac{1 \times 2 \times 3 \times···\times (n − r) \times (n − r + 1) \times··· \times(n − 1) \times n}{1 \times 2 \times 3 \times··· \times (n − r)}$"</p> <p>So basically, why does this work, and how would you explain it to a clever 5-year-old? </p> <p>Similarly, what does it mean to "divide out" $P(B)$ in the classic representation of Bayes' Theorem, i.e.</p> <p>$P(A|B) = \dfrac{P(B|A)P(A)}{P(B)}$?</p>
goblin GONE
42,339
<p>I won't attempt to explain this to a clever $5$-year-old; but anyway, you are (tacitly) looking for the following definition/theorem pair.</p> <blockquote> <p><strong>Definition.</strong> Let $X$ denote a non-empty set. Then given an equivalence relation $\sim$ on $X$, call $\sim$ <em>regular</em> iff there exists a natural number $n$ such that for all $x \in X,$ the corresponding cell $x^\sim$ has cardinality $n$.</p> <p>If $\sim$ is regular, write $|\!\sim\!|$ for the cardinality of any (and hence every) cell of $\sim$.</p> </blockquote> <p>POSSIBLE SOURCE OF CONFUSION: The number $|\!\sim\!|$ will usually be distinct from the number $|X/\!\sim\!|$, which is the number of cells corresponding to $\sim$. Nevertheless, these numbers are far from independent:</p> <blockquote> <p><strong>Theorem.</strong> Let $X$ denote a non-empty set and $\sim$ denote a regular equivalence relation on $X$. Then: $$|X/\!\sim\!| = |X|/|\!\sim\!|$$</p> </blockquote> <p>We can apply this to your example as follows. Given a set $X$ and cardinal numbers $b$ and $a$, write $\Pi_X(b,a)$ for the subset of $\mathcal{P}(X) \times \mathcal{P}(X)$ given as follows:</p> <p>$(B,A) \in \Pi_X(b,a)$ iff:</p> <ul> <li>$|B| = b$</li> <li>$|A| = a$</li> <li>$\{B,A\}$ partitions $X$.</li> </ul> <p>We're trying to show that:</p> <blockquote> <p><strong>Theorem.</strong> If $X$ is a finite set and $b,a$ are cardinal numbers with $b+a = |X|$, then: $$|\Pi_X(b,a)| = \frac{|\mathrm{Lo}(X)|}{\mathrm{Lo}(b)\mathrm{Lo}(a)}$$</p> </blockquote> <p>where I define $\mathrm{Lo}(X)$ as the set of linear orderings of $X$, and $\mathrm{Lo}(b)$ as the number of ways of linearly ordering any (and hence every) set with $b$ elements. Of course, we secretly know that $\mathrm{Lo}(b) = b!$, but this is really irrelevant to the argument that follows:</p> <p><em>Proof.</em> Fix $X$, $b$ and $a$. 
There is a function $\pi^b : \mathcal{P}(X) \leftarrow \mathrm{Lo}(X)$ that takes a linear ordering of $X$ to the set of all elements in the final $b$-long stretch, and another function $\pi_a : \mathcal{P}(X) \leftarrow \mathrm{Lo}(X)$ that takes a linear ordering of $X$ to the set of all elements in the initial $a$-long stretch. This yields a function:</p> <p>$$\langle \pi^b,\pi_a\rangle : \mathcal{P}(X) \times \mathcal{P}(X) \leftarrow \mathrm{Lo}(X)$$</p> <p>It turns out that this mapping has image equal to $\Pi_X(b,a)$. Hence:</p> <p>$$|\Pi_X(b,a)| = \left|\frac{\mathrm{Lo}(X)}{\mathrm{coimg}(\langle \pi^b,\pi_a\rangle)}\right|$$</p> <p>It also turns out that the equivalence relation induced on $\mathrm{Lo}(X)$, namely $\mathrm{coimg}(\langle \pi^b,\pi_a\rangle),$ is regular. Hence:</p> <p>$$|\Pi_X(b,a)| = \frac{|\mathrm{Lo}(X)|}{|\mathrm{coimg}(\langle \pi^b,\pi_a\rangle)|}$$</p> <p>In fact, we can find the number of elements in any (and hence every) cell of this partitioning explicitly:</p> <p>$$|\mathrm{coimg}(\langle \pi^b,\pi_a\rangle)| = \mathrm{Lo}(b)\mathrm{Lo}(a)$$</p> <p>Hence:</p> <p>$$|\Pi_X(b,a)| = \frac{|\mathrm{Lo}(X)|}{\mathrm{Lo}(b)\mathrm{Lo}(a)}$$</p> <p><strong>Edit.</strong> Another way of formulating this argument is with the notion of a $k$-bijection, see my answer <a href="https://math.stackexchange.com/questions/3926/a-1-1-function-is-called-injective-what-is-an-n-1-function-called/3956#3956">here</a>.</p>
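Stripped of the abstraction, the theorem counts two-block partitions: $|\Pi_X(b,a)| = |X|!/(b!\,a!)$. A brute-force check (my own illustration, standard library only):

```python
from itertools import combinations
from math import factorial

def count_partitions(X, b):
    """Count ordered pairs (B, A) with |B| = b that partition X."""
    X = list(X)
    return sum(1 for B in combinations(X, b))  # A = X \ B is then determined

X = range(7)
for b in range(8):
    a = 7 - b
    assert count_partitions(X, b) == factorial(7) // (factorial(b) * factorial(a))

print(count_partitions(X, 3))  # 35 = 7! / (3! 4!)
```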
3,646,694
<p>Let <span class="math-container">$M$</span> be an elliptic element of <span class="math-container">$SL_2(\mathbb R)$</span>. Then it is conjugate to a rotation <span class="math-container">$R(\theta)$</span>. Note that we can calculate <span class="math-container">$\theta$</span> in terms of the trace of <span class="math-container">$M$</span>; it means that we actually know <span class="math-container">$R(\theta)$</span> and we can write:</p> <p><span class="math-container">$$M=TR(\theta) T^{-1}$$</span></p> <p>If <span class="math-container">$S^1$</span> is the unit circle in <span class="math-container">$\mathbb R^2$</span>, it follows that <span class="math-container">$T(S^1)$</span> is the conic section <span class="math-container">$\mathcal C$</span> which is preserved by <span class="math-container">$M$</span>.</p> <blockquote> <p>Is there any explicit way to find the equation of <span class="math-container">$\mathcal C$</span> in general?</p> </blockquote> <p>My procedure is quite ineffective, because one has to find <span class="math-container">$T$</span> first (so a non-linear system) and then write down <span class="math-container">$T(S^1)$</span>, which is in general not obvious.</p>
Rew
707,748
<p>Given three roots <span class="math-container">$\alpha, \beta, \gamma$</span> of a polynomial, it can generally be written as <span class="math-container">$$(x-\alpha)(x-\beta)(x-\gamma)=0$$</span></p> <p>Note how each of the roots contributes to making the equality work. On expanding the brackets, </p> <p><span class="math-container">$$(x-\alpha)(x-\beta)(x-\gamma)=0$$</span></p> <p><span class="math-container">$$[x^2-x(\beta)-x(\alpha)+(\alpha\beta)](x-\gamma)=0$$</span></p> <p><span class="math-container">$$x^3-x^2(\gamma)-x^2(\beta)+x(\beta\gamma)-x^2(\alpha)+x(\alpha\gamma)+x(\alpha\beta)-(\alpha\beta\gamma)=0$$</span></p> <p><span class="math-container">$$x^3-(\alpha+\beta+\gamma)x^2+(\alpha\beta+\beta\gamma+\alpha\gamma)x -(\alpha\beta\gamma)=0$$</span></p> <p>Note that if any given equation is scaled down in such a way that the coefficient of <span class="math-container">$x^3$</span> is <span class="math-container">$1$</span>, then the coefficient of <span class="math-container">$x^2$</span> gives the negative of the sum of the roots. More precisely, sum of roots of a cubic equation = <span class="math-container">$-$</span>(coefficient of <span class="math-container">$x^2$</span>)/(coefficient of <span class="math-container">$ x^3$</span>)</p> <p>I hope you can take over from here. As pointed out already, the sum of roots is <span class="math-container">$-2$</span></p> <p>Bonus: It can also be seen that the product of the roots of any given cubic equation is equal to the negative of the constant term divided by the coefficient of <span class="math-container">$x^3$</span>, and the sum of products of roots taken two at a time is, well, (coefficient of <span class="math-container">$x$</span>)/(coefficient of <span class="math-container">$x^3$</span>)</p> <p>These are enough conditions on the given roots to get you started.</p>
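Vieta's relations above are easy to confirm numerically. The cubic below is purely illustrative (its coefficients are assumed for the example and are not taken from the question):

```python
import numpy as np

# Illustrative monic cubic x^3 + 2x^2 - 5x + 1 (coefficients assumed for the
# example). Check Vieta's relations against numerically computed roots.
coeffs = [1, 2, -5, 1]
r = np.roots(coeffs)

assert abs(r.sum() - (-2)) < 1e-9          # sum of roots = -(coeff of x^2)
assert abs(np.prod(r) - (-1)) < 1e-9       # product of roots = -(constant term)
assert abs(r[0]*r[1] + r[1]*r[2] + r[0]*r[2] - (-5)) < 1e-9  # pairwise sums = coeff of x
```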
3,646,694
<p>Let <span class="math-container">$M$</span> be an elliptic element of <span class="math-container">$SL_2(\mathbb R)$</span>. Then it is conjugate to a rotation <span class="math-container">$R(\theta)$</span>. Note that we can calculate <span class="math-container">$\theta$</span> in terms of the trace of <span class="math-container">$M$</span>; it means that we actually know <span class="math-container">$R(\theta)$</span> and we can write:</p> <p><span class="math-container">$$M=TR(\theta) T^{-1}$$</span></p> <p>If <span class="math-container">$S^1$</span> is the unit circle in <span class="math-container">$\mathbb R^2$</span>, it follows that <span class="math-container">$T(S^1)$</span> is the conic section <span class="math-container">$\mathcal C$</span> which is preserved by <span class="math-container">$M$</span>.</p> <blockquote> <p>Is there any explicit way to find the equation of <span class="math-container">$\mathcal C$</span> in general?</p> </blockquote> <p>My procedure is quite ineffective, because one has to find <span class="math-container">$T$</span> first (so a non-linear system) and then write down <span class="math-container">$T(S^1)$</span>, which is in general not obvious.</p>
Jack D'Aurizio
44,121
<p>Since by Vieta's formulas the sum of the roots is <span class="math-container">$-2$</span>, there is not much to do here: given that two roots are <span class="math-container">$\alpha$</span> and <span class="math-container">$\alpha^2+2\alpha-4$</span>, the third one has to be <span class="math-container">$-\alpha^2-3\alpha+2$</span>.</p> <blockquote> <p>The interesting question, however, is: <em>how do we realize that there are two roots <span class="math-container">$\alpha,\beta$</span> fulfilling <span class="math-container">$\beta=\alpha^2+2\alpha-4$</span></em> ?</p> </blockquote> <p>Well, the discriminant of the polynomial is <span class="math-container">$361=19^2$</span>, so all the roots are real and the Galois group over <span class="math-container">$\mathbb{Q}$</span> <em>is not</em> <span class="math-container">$S_3$</span> but something simpler. By considering the depressed cubic we have</p> <p><span class="math-container">$$ \frac{27}{38\sqrt{19}}\,\underbrace{p\left(\frac{2}{3}(x\sqrt{19}-1)\right)}_{q(x)}=4x^3-3x+\frac{7}{2\sqrt{19}} $$</span> so by trigonometry we have that a root is given by <span class="math-container">$$ \zeta = -\frac{2}{3}+\frac{2}{3}\sqrt{19}\cos\Big(\underbrace{\frac{1}{3}\arccos\left(\frac{-7}{2\sqrt{19}}\right)}_{\theta}\Big) $$</span> and the other roots are given by <span class="math-container">$$ -\frac{2}{3}+\frac{2}{3}\sqrt{19}\cos(\theta+2\pi/3)\qquad\text{and}\qquad -\frac{2}{3}+\frac{2}{3}\sqrt{19}\cos(\theta+4\pi/3) $$</span> i.e. by <span class="math-container">$$ -\frac{2}{3}-\frac{1}{3}\sqrt{19}\cos(\theta)-\frac{1}{\sqrt{3}}\sqrt{19}\sin(\theta)\qquad\text{and}\qquad -\frac{2}{3}-\frac{1}{3}\sqrt{19}\cos(\theta)+\frac{1}{\sqrt{3}}\sqrt{19}\sin(\theta) $$</span> which are clearly related (to each other, and to <span class="math-container">$\zeta$</span>) via the Pythagorean theorem <span class="math-container">$\sin^2\theta+\cos^2\theta=1$</span>.</p>
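Independently of the original polynomial, the trigonometric step can be verified directly: each of the three cosine values solves the depressed equation $4x^3-3x+\frac{7}{2\sqrt{19}}=0$, via the triple-angle identity $\cos 3t = 4\cos^3 t - 3\cos t$. A minimal numeric check:

```python
import math

# Each x = cos(theta + 2*pi*k/3) satisfies 4x^3 - 3x = cos(3*theta) = c,
# so it is a root of 4x^3 - 3x + 7/(2*sqrt(19)) = 0.
c = -7 / (2 * math.sqrt(19))
theta = math.acos(c) / 3

for k in range(3):
    x = math.cos(theta + 2 * math.pi * k / 3)
    assert abs(4 * x**3 - 3 * x - c) < 1e-10
```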
3,341,784
<p>How do you get to the general equation of <span class="math-container">$y = ax^2 + bx + c$</span> for a parabola?</p> <p>I have not found any resources that show how to get to that equation.</p> <p>This equation is shown 1 minute 30 into this video on Simpsons Rule <a href="https://www.youtube.com/watch?v=vpfy3sGw8tI" rel="nofollow noreferrer">https://www.youtube.com/watch?v=vpfy3sGw8tI</a>.</p>
Rory Daulton
161,807
<p>Since you show little work of your own, I'll just give an outline and let you fill in the details. If you need more detail, show some more work of your own then ask.</p> <p>Your question involves two parts. First, given a parabola with a horizontal directrix, show that an equation of the form <span class="math-container">$y = ax^2+bx+c$</span> with <span class="math-container">$a\neq 0$</span> determines it. Let's say that the parabola's focus is at <span class="math-container">$(r,s)$</span> and the directrix is the line <span class="math-container">$y=t$</span>. Note that we must have <span class="math-container">$s\neq t$</span>, which means the focus is not on the directrix. The geometric definition of the parabola is that the distance from any point on the parabola <span class="math-container">$(x,y)$</span> to the focus equals the distance from that point to the directrix. The distance formulas are easy, so equate the two and we get</p> <p><span class="math-container">$$\sqrt{(x-r)^2+(y-s)^2}=|y-t|$$</span></p> <p>We can convert that equation to an equivalent one by squaring both sides (I'll leave it to you to show this does not add any new solutions) and solving for <span class="math-container">$y$</span>. We end up with</p> <p><span class="math-container">$$y = \left(\frac{1}{2(s-t)}\right)x^2+\left(\frac{-r}{s-t}\right)x+\left(\frac{r^2+s^2-t^2}{2(s-t)}\right)$$</span></p> <p>Since we know that <span class="math-container">$s-t$</span> is not zero and it is clear that <span class="math-container">$\frac{1}{2(s-t)}$</span> is also not zero, this equation has the desired form <span class="math-container">$y=ax^2+bx+c$</span> with <span class="math-container">$a\neq 0$</span>.</p> <p>The next part is to show that the curve defined by <span class="math-container">$ax^2+bx+c$</span> where <span class="math-container">$a\neq 0$</span> is a geometric parabola. 
First complete the square to get the form</p> <p><span class="math-container">$$y=a(x-h)^2+k$$</span></p> <p>where again <span class="math-container">$a\neq 0$</span>. Then show that this is also the equation for the parabola with its focus at the point <span class="math-container">$(h,k+\frac{1}{4a})$</span> and directrix at the horizontal line <span class="math-container">$y=k-\frac{1}{4a}$</span>. These are well-defined since <span class="math-container">$a\neq 0$</span> and clearly the focus is not on the directrix. I'll leave this second part to you--you can use the first part above to help.</p>
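The claimed focus $(h,k+\frac{1}{4a})$ and directrix $y=k-\frac{1}{4a}$ can be spot-checked numerically; the values of $a,h,k$ below are arbitrary samples, not from the answer:

```python
import math

# Check: on y = a(x-h)^2 + k, every point is equidistant from the claimed
# focus (h, k + 1/(4a)) and the directrix y = k - 1/(4a).
a, h, k = 2.0, 1.0, -3.0          # arbitrary sample parabola
fx, fy = h, k + 1 / (4 * a)       # claimed focus
t = k - 1 / (4 * a)               # claimed directrix height

for x in (-5.0, -1.0, 0.0, 0.5, 3.0, 10.0):
    y = a * (x - h) ** 2 + k
    assert abs(math.hypot(x - fx, y - fy) - abs(y - t)) < 1e-9
```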
3,341,784
<p>How do you get to the general equation of <span class="math-container">$y = ax^2 + bx + c$</span> for a parabola?</p> <p>I have not found any resources that show how to get to that equation.</p> <p>This equation is shown 1 minute 30 into this video on Simpsons Rule <a href="https://www.youtube.com/watch?v=vpfy3sGw8tI" rel="nofollow noreferrer">https://www.youtube.com/watch?v=vpfy3sGw8tI</a>.</p>
Narasimham
95,860
<p>The relationship is basic, coming next after the linear relation, and it is better to learn such fundamentals as they are. If anyone asks to prove that <span class="math-container">$ y=ax+b$</span> for linear relations, then...you see?</p> <p>It is the integration step coming next after the straight line</p> <p><span class="math-container">$$ y'= 2 a x + b $$</span></p> <p><span class="math-container">$$ y = a x^2+ bx +c \tag1 $$</span></p> <p>where <span class="math-container">$c$</span> is a constant of integration.</p> <p>If two real or complex values <span class="math-container">$(p,q)$</span> for <span class="math-container">$x$</span> are both valid, then the product of the corresponding zero factors must vanish:</p> <p><span class="math-container">$$ x^2- (p+q)x + pq = (x-p) (x-q) =0 $$</span></p> <p>Setting the sum <span class="math-container">$ p+q = -b/a $</span> and the product <span class="math-container">$pq= c/a $</span> in the above, we obtain (1) in an alternate form.</p> <p>In uniform motion (going back to Newton's understanding of dynamic parabolic relations): if a particle, after annulling/compensating whatever length or distance <span class="math-container">$c$</span> it gained moving to the left, attempts to move with starting velocity <span class="math-container">$ b $</span> to the right while uniformly accelerating <span class="math-container">$a/2 $</span> towards the right, then after <span class="math-container">$x$</span> seconds the particle will stay put at the same location with zero displacement... i.e., it never moves... as indicated by a zero of the quadratic equation.</p> <p><span class="math-container">$$ c + b x + a x^2 =0 $$</span></p>
4,264,856
<p>I filled in from the definition of a Stirling number of the second kind that the following holds. <span class="math-container">$${2n\brace 2} = \frac{1}{2} \sum_{i=0}^{2} (-1)^i \binom{2}{i} (2-i)^{2n}$$</span></p> <p>And I've visually confirmed in Desmos that the following equality 'appears' to hold.</p> <p><span class="math-container">$${2n\brace 2} = 2^{2n-1}-1$$</span></p> <p>How do I prove that this equality does in fact hold?</p>
Paradox
969,042
<p>I guess the most straight-forward way is to expand the summation:<br /> <span class="math-container">\begin{align} {2n\brace 2} = &amp;\frac{1}{2} \sum_{i=0}^{2} (-1)^i \binom{2}{i} (2-i)^{2n} = \\ &amp; \frac{1}{2} \Bigg( \Big[(-1)^0 \binom{2}{0} (2-0)^{2n}\Big] + \Big[(-1)^1 \binom{2}{1} (2-1)^{2n}\Big] + \Big[(-1)^2 \binom{2}{2} (2-2)^{2n}\Big] \Bigg) = \\ &amp;\frac{1}{2} \Bigg( \Big[1 \cdot 1 \cdot 2^{2n}\Big] + \Big[(-1) \cdot 2 \cdot 1^{2n}\Big] + \Big[1 \cdot 1 \cdot 0^{2n}\Big] \Bigg) = \\ &amp;\frac{1}{2} \Bigg( 2^{2n} - 2 + 0 \Bigg) = \\ &amp; \frac{2}{2} \Bigg( 2^{2n-1} - 1 \Bigg) = 2^{2n-1} - 1 \end{align}</span></p>
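The same computation can be checked for many $n$ at once; a small sketch using the inclusion-exclusion formula from the question:

```python
import math

def stirling2_two(m):
    # inclusion-exclusion formula for S(m, 2), as stated in the question
    return sum((-1) ** i * math.comb(2, i) * (2 - i) ** m for i in range(3)) // 2

# verify S(2n, 2) = 2^(2n-1) - 1 for a range of n
for n in range(1, 13):
    assert stirling2_two(2 * n) == 2 ** (2 * n - 1) - 1
```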
171,602
<p>In all calculus textbooks, after the part about successive derivatives, the $C^k$ class of functions is defined. The definition says : </p> <blockquote> <p>A function is of class $C^k$ if it is differentiable $k$ times and the $k$-th derivative is continuous.</p> </blockquote> <p>Wouldn't be more natural to define them to be the class of functions that are differentiable $k$ times? Why is the continuity of the $k$th derivative is so important so as to justify a specific definition?</p>
Georges Elencwajg
3,217
<p>You can certainly consider $k$-times differentiable functions on, say, $[a,b]\subset \mathbb R$ and give them a notation, like $D^k[a,b]$.<br> The point however is that many well-known and interesting theorems, true for $C^k[a,b]$, will fail for $D^k[a,b]$or won't even make sense. Here are three examples from elementary calculus: </p> <p>a) Integration by parts for $u,v\in C^1[a,b]$: $$\int_{a}^{b}u(x)v^{\prime }(x)dx=\left( u({b})v(b)-u(a)v(a)\right) -\int_{a}^{b}u^{\prime }(x)v(x)dx$$ The integrals don't even make sense <em>a priori</em> if $u', v'$ are not continuous. </p> <p>b) Taylor's formula $$f(x)=\sum_{i=0}^k\frac{f^{(i)}(a)}{i!}(x-a)^i+\int_a^x\frac{f^{(k+1)}(t)}{k!}(x-t)^kdt\quad (x\in [a,b])$$ is valid for $f\in C^{k+1}([a,b])$ and again doesn't make sense for $f\in D^{k+1}([a,b])$ </p> <p>c) The change of variables formula $$ \int_a^b f(\phi (t))\phi'(t)dt=\int_{\phi(a)}^{\phi(b)} f(x)dx $$ again necessitates that $\phi$ be a $C^1$-function and not merely a differentiable one.</p>
36,083
<p>I'm trying to find a way to prove this:</p> <p>EDIT: without using L'Hôpital's theorem. $$\lim_{x\rightarrow 1}\frac{x^{1/m}-1}{x^{1/n}-1}=\frac{n}{m}.$$ Honestly, I didn't come up with any good idea.</p> <p>We know that $\lim_{x\rightarrow 1}x^{1/m}$ is $1$.</p> <p>I'd love your help with this.</p> <p>Thank you.</p>
Emre
9,901
<p>Are you aware of <a href="http://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule" rel="nofollow">L'Hôpital's rule</a>? It is useful in evaluating the limits of fractions such as yours.</p>
36,083
<p>I'm trying to find a way to prove this:</p> <p>EDIT: without using L'Hôpital's theorem. $$\lim_{x\rightarrow 1}\frac{x^{1/m}-1}{x^{1/n}-1}=\frac{n}{m}.$$ Honestly, I didn't come up with any good idea.</p> <p>We know that $\lim_{x\rightarrow 1}x^{1/m}$ is $1$.</p> <p>I'd love your help with this.</p> <p>Thank you.</p>
Alex Becker
8,173
<p>One thing about limits is that, if they exist, the "speed" at which you approach them doesn't matter. That is to say, $\lim_{x\rightarrow 1}\frac{x^{1/m}-1}{x^{1/n}-1} = \lim_{x^{1/n}\rightarrow 1}\frac{x^{1/m}-1}{x^{1/n}-1} = \lim_{y\rightarrow 1}\frac{y^{n/m}-1}{y-1}$. If you then apply L'Hopital's rule, you should get your answer.</p>
36,083
<p>I'm trying to find a way to prove this:</p> <p>EDIT: without using L'Hôpital's theorem. $$\lim_{x\rightarrow 1}\frac{x^{1/m}-1}{x^{1/n}-1}=\frac{n}{m}.$$ Honestly, I didn't come up with any good idea.</p> <p>We know that $\lim_{x\rightarrow 1}x^{1/m}$ is $1$.</p> <p>I'd love your help with this.</p> <p>Thank you.</p>
Zarrax
3,035
<p>You can rewrite the limit as $$\lim_{x \rightarrow 1} {{x^{1\over m} - 1 \over x - 1} \over {x^{1 \over n} - 1 \over x- 1}}$$ By the quotient rule for limits this is exactly $${\lim_{x \rightarrow 1} {x^{1 \over m} - 1 \over x - 1} \over \lim_{x \rightarrow 1} {x^{1 \over n} - 1 \over x - 1}}$$ But notice that for any $\alpha$, ${\displaystyle \lim_{x \rightarrow 1} {x^{\alpha} - 1 \over x - 1}}$ is just the limit of difference quotients giving the definition of the derivative of the function $x^{\alpha}$ when evaluated at $x = 1$. So the limit is $\alpha$. So the limit in this question will be ${\displaystyle {{1 \over m} \over {1 \over n}} = {n \over m}}$.</p>
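A numeric sanity check of the limit (with sample exponents $m=3$, $n=5$, chosen only for illustration):

```python
m, n = 3, 5   # sample exponents

for eps in (1e-3, 1e-5, 1e-7):
    x = 1 + eps
    ratio = (x ** (1 / m) - 1) / (x ** (1 / n) - 1)
    # the ratio approaches n/m, with error shrinking linearly as x -> 1
    assert abs(ratio - n / m) < 10 * eps
```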
3,779,996
<p>Let <span class="math-container">$(X_1,...,X_n)$</span> be a random sample with PDF <span class="math-container">$f(x;\theta) = \frac{x}{\theta}\exp(-x^2/(2\theta)), \theta &gt; 0$</span></p> <p>I want to show that the likelihood ratio test of <span class="math-container">$H_0 : \theta \le \theta_0$</span> against <span class="math-container">$H_1 : \theta &gt; \theta_0$</span> where <span class="math-container">$\theta_0&gt;0$</span> is given is a Chi-square test</p> <p>This gives that the the likelihood function <span class="math-container">$\displaystyle L(\theta) = \frac{\prod x_i}{\theta^n}\exp(-\sum x_i^2/2\theta)$</span></p> <p>I am going to set <span class="math-container">$t = \prod X_i$</span> and <span class="math-container">$s = \sum X_i^2$</span>. So we get <span class="math-container">$\displaystyle L(\theta) = \frac{t}{\theta^n}\exp(-s/2\theta)$</span>. And <span class="math-container">$\max_{\theta \ge 0 }L(\theta)$</span> occurs when <span class="math-container">$\theta = \frac{s}{2n}$</span></p> <p>And <span class="math-container">$\max_{0 \le \theta \le \theta_0} L(\theta) = \begin{cases} L(\frac{s}{2n})&amp;\text{if }\theta_0 \ge \frac{s}{2n}\\ L(\theta_0)&amp;\text{else} \end{cases}$</span></p> <p>Now we have</p> <p><span class="math-container">$$ \Lambda_{H_0} = \frac{\max_{0 \le \theta \le \theta_0} L(\theta)}{\max_{0 \le \theta } L(\theta)} = \begin{cases} 1 &amp;\text{if } \theta_0 \ge \frac{s}{2n}\\ \bigg (\frac{s}{2n\theta_0}\bigg)^n\exp(n - s/(2\theta_0))&amp;\text{else} \end{cases} $$</span></p> <p>Hopefully I have calculated both of those correct, now is where I run into my issue I don't quite see how this is a Chi-square test.</p>
tommik
791,458
<p>The given density is a Rayleigh. If a sufficient estimator exists, the test must be based on this estimator.</p> <p>It is very easy to verify, via factorization theorem, that this sufficient statistic is <span class="math-container">$T=\sum_{i} X_i^2$</span></p> <p>Now let's derive the density of <span class="math-container">$Y=X^2$</span></p> <p>Via fundamental transformation theorem you find</p> <p><span class="math-container">$$f_Y(y)=\frac{\sqrt{y}}{\theta}e^{-\frac{y}{2\theta}}\frac{1}{2\sqrt{y}}=\frac{1}{2\theta}e^{-\frac{y}{2\theta}}\sim Exp(\frac{1}{2\theta})=Gamma(1;\frac{1}{2\theta})$$</span></p> <p>Now</p> <p><span class="math-container">$$\sum_i X_i^2 \sim Gamma (n;\frac{1}{2\theta})$$</span></p> <p>And concluding...</p> <p><span class="math-container">$$\frac{1}{\theta}\sum_i X_i^2\sim \chi_{(2n)}^2$$</span></p> <p>To find the critical region, first observe that <span class="math-container">$\theta_0 &lt; \theta_1$</span> and</p> <p><span class="math-container">$$\frac{L(\theta_0|\mathbf{x})}{ L(\theta_1|\mathbf{x}) }\propto e^{(\frac{1}{2\theta_1}-\frac{1}{2\theta_0 })\sum_iX_i^2}$$</span></p> <p>It is evident that LR is a decreasing function of <span class="math-container">$T=\sum_iX_i^2$</span>.</p> <p>Now you can apply Theorem 9.6 taken from Mood Graybill Boes and define the critical region</p> <p><span class="math-container">$$C=\{\mathbf{x}:\sum_iX_i^2&gt;k\}$$</span></p> <p>getting a size <span class="math-container">$\alpha$</span> UMP Test for <span class="math-container">$\mathcal{H}_0:\theta \leq \theta_0$</span> against <span class="math-container">$\mathcal{H}_1:\theta &gt; \theta_0$</span> using a chi-square distribution as showed above.</p>
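The distributional claim can also be checked by simulation, using the fact derived above that $X_i^2 \sim Exp(\frac{1}{2\theta})$; the values of $\theta$, $n$ and the trial count below are arbitrary:

```python
import random

# (1/theta) * sum_i X_i^2 should behave like chi-square with 2n degrees of
# freedom, i.e. have mean 2n and variance 4n. We simulate X_i^2 directly as
# exponentials with rate 1/(2*theta), as derived above.
random.seed(0)
theta, n, trials = 2.0, 5, 200_000

samples = [sum(random.expovariate(1 / (2 * theta)) for _ in range(n)) / theta
           for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials

assert abs(mean - 2 * n) < 0.1   # chi^2_{2n} mean is 2n = 10
assert abs(var - 4 * n) < 0.5    # chi^2_{2n} variance is 4n = 20
```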
1,357,405
<p>Under what conditions will the cubic equation $ax^3 + bx^2 + cx + d$ where $a,b,c,d \in \mathbb R$ yield roots which have negative real parts? (All roots must have negative real parts.)</p> <p><em><strong>Motivation</strong>: I am studying a dynamical system, i.e., the Chua circuit, in $ \mathbb R^3 $ and wish to analyze its stability. For stability analysis, one needs to find the eigenvalues of the Jacobian matrix. If the eigenvalues have negative real parts, then the system will have a stable fixed point. I wish to synchronize the system and vary certain parameters so as to ensure that the eigenvalues always have negative real parts. For the equations, visit</em> <a href="http://www.chuacircuits.com/diagram.php" rel="nofollow">http://www.chuacircuits.com/diagram.php</a></p>
Jack D'Aurizio
44,121
<p>Assume we have the monic polynomial $$ p(x) = x^3 - s_1 x^2 + s_2 x -s_3 = (x-\xi_1)(x-\xi_2)(x-\xi_3).$$ Since $s_1=\xi_1+\xi_2+\xi_3$, $s_1&lt; 0$ is a necessary condition for the roots of $p(x)$ to lie in $\text{Re}(z)&lt;0$.</p> <hr> <p>If the discriminant of $p(x)$ is positive, then $p(x)$ has three real roots and they are all negative iff $p(0)=-s_3&gt;0$ and $p(x)$ is increasing on $\mathbb{R}^+$, i.e.: $$ \forall x\geq 0, \qquad p'(x) = 3x^2-2s_1 x + s_2 \geq 0,$$ but $s_1&lt; 0$ implies that the last condition is equivalent to $s_2\geq 0$. </p> <hr> <p>If the discriminant of $p(x)$ is negative, $p(x)$ has only one real root (say $\xi_1$) and two conjugated complex roots $\xi_2,\xi_3 = \sigma\pm it$. It happens that: $$ s_1 = \xi_1 + 2\sigma,\qquad s_2=2\xi_1\sigma+(\sigma^2+t^2),\qquad s_3 = \xi_1(\sigma^2+t^2) $$ so $s_3 &lt; 0$ and $s_2&gt;0$ are necessary conditions to have $\xi_1,\sigma&lt;0$.</p> <p>It happens that $\sigma &lt; 0$ is equivalent to $\xi_1 &gt; s_1$, so $p(x)$ must have a root in $(s_1,+\infty)$.</p> <p>That can be checked through <a href="https://en.wikipedia.org/wiki/Sturm%27s_theorem" rel="nofollow">Sturm's theorem</a> or by locating the stationary points of $p(x)$.</p>
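These conditions can be cross-checked empirically. The sketch below uses the classical Routh–Hurwitz criterion for a monic cubic $x^3+a_2x^2+a_1x+a_0$ (all roots in $\text{Re}(z)&lt;0$ iff $a_2&gt;0$, $a_0&gt;0$ and $a_2a_1&gt;a_0$; in the notation above, $a_2=-s_1$, $a_1=s_2$, $a_0=-s_3$) — stated here as an assumed standard fact, consistent with the necessary conditions derived above:

```python
import random
import numpy as np

# Compare "all roots have negative real part" (computed numerically) with the
# Routh-Hurwitz condition a2 > 0, a0 > 0, a2*a1 > a0 over random cubics.
random.seed(1)
for _ in range(2000):
    a2, a1, a0 = (random.uniform(-3, 3) for _ in range(3))
    roots = np.roots([1, a2, a1, a0])
    stable = all(r.real < 0 for r in roots)
    hurwitz = a2 > 0 and a0 > 0 and a2 * a1 > a0
    assert stable == hurwitz
```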
2,539,327
<p>The equation of a parabola is given by: $y= ax^2 + bx+c$</p> <p>Why is it that when the coefficient of $x^2$, i.e. $a$, is positive we get an upward parabola and when it's negative we get a downward parabola? </p> <p>Also, I <a href="https://www.desmos.com/calculator/ldyxmzfrws" rel="nofollow noreferrer">saw</a> that increasing the value of $|a|$ narrows the parabola, why? </p> <p>Lastly, what is the role of $b$ in determining the structure of this parabola? </p>
user
505,767
<p>In the quadratic equation, the &quot;<span class="math-container">$b$</span>&quot; and &quot;<span class="math-container">$c$</span>&quot; terms correspond to an axis translation; thus we can consider the simpler case <span class="math-container">$$y=ax^2$$</span> for which the role of &quot;<span class="math-container">$a$</span>&quot; in determining the sign of <span class="math-container">$y$</span> is clear.</p> <p>To clarify the first point, suppose we change coordinates by the translation <span class="math-container">$y\to(y+k)$</span> and <span class="math-container">$x\to(x+h)$</span>; then <span class="math-container">$$(y+k)=a(x+h)^2$$</span> <span class="math-container">$$y=ax^2+2ahx+ah^2-k$$</span> which is of the form <span class="math-container">$$y=ax^2+bx+c$$</span></p> <p>I think this way is simpler because you don't need any calculus knowledge.</p>
2,456,263
<h2>Question</h2> <blockquote> <p>In how many ways can <span class="math-container">$7$</span> girls be seated at a round table so that <span class="math-container">$2$</span> particular girls are separated?</p> </blockquote> <h2>My Approach</h2> <blockquote> <p><span class="math-container">$6!$</span> ways for the girls to sit at a round table.</p> <p>which <span class="math-container">$2$</span> girls will be separated <span class="math-container">$?\Rightarrow \binom{7}{2}$</span></p> <p>Let these <span class="math-container">$2$</span> girls sit together <span class="math-container">$5! \cdot \binom{7}{2}$</span></p> <p>Required answer=<span class="math-container">$6!-5! \cdot \binom{7}{2}$</span></p> </blockquote> <p>Am I right?</p>
Donald Splutterwit
404,247
<p>\begin{eqnarray*} \frac{1}{i+1}= \int_0^1 x^i dx \end{eqnarray*} Sub this into the sum &amp; interchange the order of the sum &amp; the integral \begin{eqnarray*} \sum_{i=0}^{n} (-1)^i\binom{n}{i} \frac{1}{i+1}&amp;=&amp; \int_0^1 \sum_{i=0}^{n} (-1)^i\binom{n}{i}x^i dx \\ &amp;=&amp; \int_0^1 (1-x)^n dx \\ &amp;=&amp; \left[ \frac{-(1-x)^{n+1}}{n+1} \right]^1_0 \\ &amp;=&amp; \color{blue}{ \frac{1}{n+1}} \\ \end{eqnarray*}</p>
2,456,263
<h2>Question</h2> <blockquote> <p>In how many ways can <span class="math-container">$7$</span> girls be seated at a round table so that <span class="math-container">$2$</span> particular girls are separated?</p> </blockquote> <h2>My Approach</h2> <blockquote> <p><span class="math-container">$6!$</span> ways for the girls to sit at a round table.</p> <p>which <span class="math-container">$2$</span> girls will be separated <span class="math-container">$?\Rightarrow \binom{7}{2}$</span></p> <p>Let these <span class="math-container">$2$</span> girls sit together <span class="math-container">$5! \cdot \binom{7}{2}$</span></p> <p>Required answer=<span class="math-container">$6!-5! \cdot \binom{7}{2}$</span></p> </blockquote> <p>Am I right?</p>
xpaul
66,420
<p>\begin{eqnarray} &amp;&amp;1 - \frac{1}{2}\binom{n}{1}+\frac{1}{3} \binom{n}{2}- \frac{1}{4}\binom{n}{3}+....+(-1)^n \frac{1}{n+1}\binom{n}{n}\\ &amp;=&amp;\sum_{k=0}^n(-1)^{k}\frac{1}{k+1}\binom{n}{k}\\ &amp;=&amp;\sum_{k=0}^n(-1)^{k}\frac{1}{n+1}\binom{n+1}{k+1}\\ &amp;=&amp;\frac{1}{n+1}\sum_{k=1}^{n+1}(-1)^{k}\binom{n+1}{k}\\ &amp;=&amp;-\frac{1}{n+1}\left[\sum_{k=0}^{n+1}(-1)^{k}\binom{n+1}{k}-1\right]\\ &amp;=&amp;\frac1{n+1} \end{eqnarray}</p>
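Both derivations of $\sum_{k=0}^n(-1)^k\binom{n}{k}\frac{1}{k+1}=\frac{1}{n+1}$ can be confirmed exactly in rational arithmetic for small $n$:

```python
from fractions import Fraction
from math import comb

# Exact rational check of sum_{k=0}^n (-1)^k C(n,k)/(k+1) = 1/(n+1)
for n in range(15):
    s = sum(Fraction((-1) ** k * comb(n, k), k + 1) for k in range(n + 1))
    assert s == Fraction(1, n + 1)
```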
1,373,927
<p>We know that a tree with n edges has n+1 nodes. So if $|B_{n+1}|$ is the number of all possible ordered trees with n+1 nodes, then it is true that $C_{n+1} = |B_{n+1}|$, where $C$ is the Catalan number. Let $|L_k|$ be the number of all possible ordered trees with n+1 nodes and k leaves; then it is true that $|B_{n+1}| = \displaystyle\sum_{i=1}^{n}|L_i|$.</p> <hr> <p>If my thinking is correct, how can I continue? Or, if you have a better idea, please let me know.</p>
Colm Bhandal
252,983
<p>Your thinking looks fairly sound there. All I can think of is the following complicated recurrence from looking at the leftmost sub-tree and enumerating over all possible values for its nodes and its leaves:</p> <p>$$L_{n, k} = \sum_{i=1}^{n - 1}\prod_{j=1}^{max(k, i)}L_{i, j}L_{n - i, k - j} = \sum_{i=1}^{k}\prod_{j=1}^{i}L_{i, j}L_{n - i, k - j} + \sum_{i = k + 1}^{n - 1}\prod_{j=1}^{k}L_{i, j}L_{n - i, k - j}$$</p> <p>where $L_{i, j}$ denotes the number of trees with $i$ nodes and $j$ leaves.</p>
867,207
<p>The Efron-Stein inequality says that if $X_1,\ldots,X_n$ are independent random variables on, say, $R^n$, and $f:R^n \rightarrow R$ is such that $Z:=f(X_1,\ldots,X_n)$ has finite variance, then</p> <p>$$\operatorname{Var}(Z)\le \sum_{i=1}^n E[(Z-E^{(i)}[Z])^2]$$</p> <p>where $E^{(i)}$ denotes conditional expectation taken w.r.t. $X_i$ by keeping the rest of the variables fixed.</p> <p>On going through the proof, it is not clear to me why we need the variables to be independent and where that is used in the proof. </p>
Adam
82,101
<p>Any set $$R \subseteq A \times A = \{(x,y) \mid x \in A ,\,y \in A\} $$ is a relation on $A $. Since $$\{(b,c), (b,d)\} \subseteq A \times A$$ holds, it is indeed a relation on $ A$. </p>
4,367,330
<p>Let <span class="math-container">$X_1, X_2,...$</span> be a sequence of independent uniform random variables on <span class="math-container">$(0,1)$</span>. Define: <span class="math-container">$$N := \text{min} \{n\geq 2: X_n &lt; X_{n-1}\}.$$</span> Calculate <span class="math-container">$E(N)$</span>.</p> <p>I think this problem asks for the expected index of the first decrease in the sequence. I also did a simulation and I think the answer is <span class="math-container">$e$</span>? But I'm not sure how to compute it. I tried using the definition of expectation and computed that <span class="math-container">$P(N=2)$</span> is <span class="math-container">$1/2$</span>, but I'm stuck computing <span class="math-container">$P(N=3)$</span>. Can anyone tell me how to do this?</p>
nejimban
206,936
<ol> <li>Check that <span class="math-container">$$N=2+\mathbf1_{\{X_1&lt;X_2\}}+\mathbf1_{\{X_1&lt;X_2&lt;X_3\}}+\cdots$$</span></li> <li>Check that <span class="math-container">$\Bbb P(X_1&lt;\cdots&lt;X_k)=\frac1{k!}$</span> for any <span class="math-container">$k\ge2$</span>.</li> <li>Take expectations &amp; Fubinize.</li> </ol>
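Both the resulting closed form $E(N)=2+\sum_{k\ge2}\frac{1}{k!}=e$ and a direct simulation of $N$ are easy to check; a small sketch:

```python
import math
import random

# Analytic check: E[N] = 2 + sum_{k>=2} P(X_1 < ... < X_k) = 2 + (e - 2) = e
analytic = 2 + sum(1 / math.factorial(k) for k in range(2, 20))
assert abs(analytic - math.e) < 1e-12

# Monte Carlo check of E[N], where N is the index of the first decrease
def first_descent(rng):
    prev, n = rng.random(), 2
    while True:
        cur = rng.random()
        if cur < prev:
            return n
        prev, n = cur, n + 1

rng = random.Random(0)
trials = 200_000
estimate = sum(first_descent(rng) for _ in range(trials)) / trials
assert abs(estimate - math.e) < 0.02
```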
412,051
<p>Suppose $X_i \sim N(0,1)$ (independent, identical normal distributions) </p> <p>Then by the law of large numbers, $$ \sqrt{1-\delta} \frac{1}{n}\sum_{i=1}^n e^{\frac{\delta}{2}X_i^2} \rightarrow \sqrt{1-\delta} \int e^{\frac{\delta}{2}x^2}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}dx =1 $$</p> <p>However, according to simulations, this approximation doesn't seem to work when $\delta$ is close to one. Is that true, or do I just need to run larger samples? Thanks!</p> <p>Update (6/6): As sos440 mentioned, there's a typo and it is now fixed.</p>
doraemonpaul
30,938
<p>Note that since this problem has only one condition, it is in fact an under-determined problem.</p> <p>You can add one more dummy condition so that it becomes a well-determined problem.</p> <p>For example, letting the dummy condition be $u(x,0)=g(x)$, it is more convenient to consider the general solution as $u(x,y)=c_1(x+iy)+c_2(x-iy)$ rather than as $u(x,y)=c_1(y+ix)+c_2(y-ix)$, and the solution with such conditions can be expressed using D’Alembert’s formula:</p> <p>$u(x,y)=\dfrac{g(x+iy)+g(x-iy)}{2}-\dfrac{i}{2}\int_{x-iy}^{x+iy}h(t)~dt$</p> <p>Note that this solution is suitable for $x,y\in\mathbb{C}$, not only for $-\infty&lt;x&lt;\infty$ and $y&gt;0$.</p> <p>Note that the ranges stated in such questions only provide the minimum requirements on the domain of the solutions required; you are always welcome, if you are smart enough, to find solutions whose domain is larger than the ranges stated in the question.</p>
222,312
<p>How to prove that if you have $x^*$ such that $x^*=\text{pseudoinverse}(A) b$, and $Ay=b$, then $$\Vert x^* \Vert_2 \leq \Vert y \Vert_2$$</p>
Community
-1
<p>Let me try a one-liner solution. </p> <p>$Ay=b$ $\Rightarrow$ $A(y-x^*)=0$ $\Rightarrow$ $\langle x^*,y-x^*\rangle=0$ $\Rightarrow$ $\|y\|^2=\|y-x^*\|^2+\|x^*\|^2\ge \|x^*\|^2$.</p> <p><strong>Remark.</strong> $R=A^t(AA^t)^{-1}$, $x^*=Rb$. Then we have $\langle x^*,y-x^*\rangle=\langle Rb,y-x^*\rangle=$ $\langle (AA^t)^{-1}b,A(y-x^*)\rangle=0$.</p>
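The orthogonality step $\langle x^*, y-x^*\rangle = 0$ and the resulting minimum-norm property can be checked numerically; the matrix sizes below are arbitrary samples:

```python
import numpy as np

# x* = pinv(A) @ b is the minimum-norm solution of Ay = b.
# A is a random wide (underdetermined) matrix, so Ay = b has many solutions.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
b = rng.standard_normal(3)

pinvA = np.linalg.pinv(A)
x_star = pinvA @ b
assert np.allclose(A @ x_star, b)

for _ in range(50):
    z = rng.standard_normal(5)
    null_part = z - pinvA @ (A @ z)    # component of z in the null space of A
    y = x_star + null_part             # another exact solution of Ay = b
    assert np.allclose(A @ y, b)
    assert abs(np.dot(x_star, null_part)) < 1e-9          # the key orthogonality
    assert np.linalg.norm(y) >= np.linalg.norm(x_star) - 1e-10
```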
3,772,371
<p>Write the polynomial of degree <span class="math-container">$4$</span> with <span class="math-container">$x$</span> intercepts of <span class="math-container">$(\frac{1}{2},0), (6,0) $</span> and <span class="math-container">$ (-2,0)$</span> and <span class="math-container">$y$</span> intercept of <span class="math-container">$(0,18)$</span>. The root (<span class="math-container">$\frac{1}{2},0)$</span> has multiplicity <span class="math-container">$2$</span>.</p> <p>I am to write the factored form of the polynomial with the above information. I get:</p> <p><span class="math-container">$f(x)=-6\big(x-\frac{1}{2}\big)^2(x+2)(x-6)$</span></p> <p>Whereas the provided solution is:</p> <p><span class="math-container">$f(x)=-\frac{3}{2}(2x-1)^2(x+2)(x-6)$</span></p> <p>Here's my working:</p> <p>Write out in factored form:</p> <p><span class="math-container">$f(x) = a\big(x-\frac{1}{2}\big)^2(x+2)(x-6)$</span></p> <p>I know that <span class="math-container">$f(0)=18$</span> so:</p> <p><span class="math-container">$$18 = a\big(-\frac{1}{2}\big)^2(2)(-6)$$</span></p> <p><span class="math-container">$$18 = a\big(\frac{1}{4}\big)(2)(-6)$$</span></p> <p><span class="math-container">$$18 = -3a$$</span></p> <p><span class="math-container">$$a = -6$$</span></p> <p>Thus my answer: <span class="math-container">$f(x)=-6\big(x-\frac{1}{2}\big)^2(x+2)(x-6)$</span></p> <p>Where did I go wrong and how can I arrive at:</p> <p><span class="math-container">$f(x)=-\frac{3}{2}(2x-1)^2(x+2)(x-6)$</span> ?</p>
Riccardo
662,881
<p><span class="math-container">$$\Big(x-\dfrac{1}{2}\Big)^2 = \Big(\color{red}{\frac{1}{2}}\Big(2x-1\Big)\Big)^2=\color{red}{\frac{1}{4}}\Big(2x-1\Big)^2$$</span></p>
3,772,371
<p>Write the polynomial of degree <span class="math-container">$4$</span> with <span class="math-container">$x$</span> intercepts of <span class="math-container">$(\frac{1}{2},0), (6,0) $</span> and <span class="math-container">$ (-2,0)$</span> and <span class="math-container">$y$</span> intercept of <span class="math-container">$(0,18)$</span>. The root (<span class="math-container">$\frac{1}{2},0)$</span> has multiplicity <span class="math-container">$2$</span>.</p> <p>I am to write the factored form of the polynomial with the above information. I get:</p> <p><span class="math-container">$f(x)=-6\big(x-\frac{1}{2}\big)^2(x+2)(x-6)$</span></p> <p>Whereas the provided solution is:</p> <p><span class="math-container">$f(x)=-\frac{3}{2}(2x-1)^2(x+2)(x-6)$</span></p> <p>Here's my working:</p> <p>Write out in factored form:</p> <p><span class="math-container">$f(x) = a\big(x-\frac{1}{2}\big)^2(x+2)(x-6)$</span></p> <p>I know that <span class="math-container">$f(0)=18$</span> so:</p> <p><span class="math-container">$$18 = a\big(-\frac{1}{2}\big)^2(2)(-6)$$</span></p> <p><span class="math-container">$$18 = a\big(\frac{1}{4}\big)(2)(-6)$$</span></p> <p><span class="math-container">$$18 = -3a$$</span></p> <p><span class="math-container">$$a = -6$$</span></p> <p>Thus my answer: <span class="math-container">$f(x)=-6\big(x-\frac{1}{2}\big)^2(x+2)(x-6)$</span></p> <p>Where did I go wrong and how can I arrive at:</p> <p><span class="math-container">$f(x)=-\frac{3}{2}(2x-1)^2(x+2)(x-6)$</span> ?</p>
fleablood
280,126
<p>You didn't go wrong anywhere.</p> <p><span class="math-container">$-6(x-\frac{1}{2})^2(x+2)(x-6) = -\frac{3}{2}(2x-1)^2(x+2)(x-6)$</span></p> <p>That is because</p> <p><span class="math-container">$-6(x- \frac 1{2})^2 = -6(\frac 12[2x-1])^2= -6\cdot (\frac 12)^2(2x-1)^2 = -6\cdot \frac 14(2x-1)^2 = -\frac 32(2x-1)^2$</span>.</p> <p>Is there some rule that says fractions in the <span class="math-container">$(ax + b)$</span> parts are frowned upon?</p> <p>If so, when you get <span class="math-container">$(x + \frac ab)^k$</span> you can replace it with <span class="math-container">$(\frac 1b)^k(bx + a)^k$</span>, but I don't see why you should have to.</p> <p>(In fact, I <em>much</em> prefer your notation as it <em>directly</em> indicates the roots and solutions... and indicates what the leading coefficient will actually be when expanded out. And what the heck kind of sense does removing a horrifying offensive fraction from one area make if you are just going to have to put an equally offensive fraction somewhere else?)</p> <p>But.... weren't you supposed to expand this out? So far as I can tell neither answer has done that. If you expand it out, you will see both answers are <em>exactly</em> the same.</p>
372,420
<p>Let <span class="math-container">$X$</span> be a finite ultrametric space and <span class="math-container">$P(X)$</span> be the space of probability measures on <span class="math-container">$X$</span> endowed with the Wasserstein-Kantorovich-Rubinstein metric (briefly WKR-metric) defined by the formula <span class="math-container">$$\rho(\mu,\eta)=\max\{|\int_X fd\mu-\int_X fd\eta|:f\in Lip_1(X)\}$$</span> where <span class="math-container">$Lip_1(X)$</span> is the set of non-expanding real-valued functions on <span class="math-container">$X$</span>.</p> <blockquote> <p><strong>Problem.</strong> Is there any fast algorithm for calculating this metric between two measures on a finite ultrametric space? Or at least for calculating some natural distance, which is not &quot;very far&quot; from the WKR-metric?</p> </blockquote> <p><strong>Added in Edit.</strong> There is a simple upper bound <span class="math-container">$\hat \rho$</span> for the WKR-metric, defined by recursion on the cardinality of the set <span class="math-container">$d[X\times X]=\{d(x,y):x,y\in X\}$</span> of values of the ultrametric on <span class="math-container">$X$</span>. If <span class="math-container">$d[X\times X]=\{0\}$</span>, then for any measures <span class="math-container">$\mu,\eta\in P(X)$</span> on <span class="math-container">$X$</span> put <span class="math-container">$\hat\rho(\mu,\eta)=0$</span>. Assume that for some natural number <span class="math-container">$n$</span> we have defined the metric <span class="math-container">$\hat\rho(\mu,\eta)$</span> for any probability measures <span class="math-container">$\mu,\eta\in P(X)$</span> on any ultrametric space <span class="math-container">$(X,d)$</span> with <span class="math-container">$|d[X\times X]|&lt;n$</span>.</p> <p>Take any ultrametric space <span class="math-container">$X$</span> with <span class="math-container">$|d[X\times X]|=n$</span>. 
Let <span class="math-container">$b=\max d[X\times X]$</span> and <span class="math-container">$a=\max(d[X\times X]\setminus\{b\})$</span>. Let <span class="math-container">$\mathcal B$</span> be the family of closed balls of radius <span class="math-container">$a$</span> in <span class="math-container">$X$</span>. Since <span class="math-container">$X$</span> is an ultrametric space, the balls in the family <span class="math-container">$\mathcal B$</span> either coincide or are disjoint.</p> <p>Given any probability measures <span class="math-container">$\mu,\eta$</span> on <span class="math-container">$X$</span>, let <span class="math-container">$$\hat\rho(\mu,\eta)=\tfrac12b\cdot\sum_{B\in\mathcal B}|\mu(B)-\eta(B)|+\sum_{B\in\mathcal B'}\min\{\mu(B),\eta(B)\}\cdot\hat\rho(\mu{\restriction}B,\eta{\restriction}B),$$</span> where <span class="math-container">$\mathcal B'=\{B\in\mathcal B:\min\{\mu(B),\eta(B)\}&gt;0\}$</span> and the probability measures <span class="math-container">$\mu{\restriction} B$</span> and <span class="math-container">$\eta{\restriction}B$</span> assign to each subset <span class="math-container">$S$</span> of <span class="math-container">$B$</span> the numbers <span class="math-container">$\mu(S)/\mu(B)$</span> and <span class="math-container">$\eta(S)/\eta(B)$</span>, respectively.</p> <p>It can be shown that <span class="math-container">$\rho\le\hat\rho$</span>.</p> <blockquote> <p><strong>Question.</strong> Is <span class="math-container">$\rho=\hat\rho$</span>?</p> </blockquote>
Carlo Beenakker
11,260
<p>There exists a variety of measures of uniformity of a point set. See, for example, <A HREF="https://www.bme.psu.edu/labs/Yang-lab/publications%20PDF/with%20Lizeng.pdf" rel="nofollow noreferrer">On assessing spatial uniformity of particle distributions...</A> for an overview, and a critical comparison when applied to real-world data.</p> <p>There are two distinct classes of uniformity measures: <em>Quadrat-based</em> measures divide the region into a number of small grids, called quadrats, and count the number of points falling into each grid. <em>Distance-based</em> methods focus on the distances between points, such as those between nearest neighbors or between randomly selected locations.</p> <IMG SRC="https://i.stack.imgur.com/dlOzn.png" WIDTH="400"/>
372,420
<p>Let <span class="math-container">$X$</span> be a finite ultrametric space and <span class="math-container">$P(X)$</span> be the space of probability measures on <span class="math-container">$X$</span> endowed with the Wasserstein-Kantorovich-Rubinstein metric (briefly WKR-metric) defined by the formula <span class="math-container">$$\rho(\mu,\eta)=\max\{|\int_X fd\mu-\int_X fd\eta|:f\in Lip_1(X)\}$$</span> where <span class="math-container">$Lip_1(X)$</span> is the set of non-expanding real-valued functions on <span class="math-container">$X$</span>.</p> <blockquote> <p><strong>Problem.</strong> Is there any fast algorithm for calculating this metric between two measures on a finite ultrametric space? Or at least for calculating some natural distance, which is not &quot;very far&quot; from the WKR-metric?</p> </blockquote> <p><strong>Added in Edit.</strong> There is a simple upper bound <span class="math-container">$\hat \rho$</span> for the WKR-metric, defined by recursion on the cardinality of the set <span class="math-container">$d[X\times X]=\{d(x,y):x,y\in X\}$</span> of values of the ultrametric on <span class="math-container">$X$</span>. If <span class="math-container">$d[X\times X]=\{0\}$</span>, then for any measures <span class="math-container">$\mu,\eta\in P(X)$</span> on <span class="math-container">$X$</span> put <span class="math-container">$\hat\rho(\mu,\eta)=0$</span>. Assume that for some natural number <span class="math-container">$n$</span> we have defined the metric <span class="math-container">$\hat\rho(\mu,\eta)$</span> for any probability measures <span class="math-container">$\mu,\eta\in P(X)$</span> on any ultrametric space <span class="math-container">$(X,d)$</span> with <span class="math-container">$|d[X\times X]|&lt;n$</span>.</p> <p>Take any ultrametric space <span class="math-container">$X$</span> with <span class="math-container">$|d[X\times X]|=n$</span>. 
Let <span class="math-container">$b=\max d[X\times X]$</span> and <span class="math-container">$a=\max(d[X\times X]\setminus\{b\})$</span>. Let <span class="math-container">$\mathcal B$</span> be the family of closed balls of radius <span class="math-container">$a$</span> in <span class="math-container">$X$</span>. Since <span class="math-container">$X$</span> is an ultrametric space, the balls in the family <span class="math-container">$\mathcal B$</span> either coincide or are disjoint.</p> <p>Given any probability measures <span class="math-container">$\mu,\eta$</span> on <span class="math-container">$X$</span>, let <span class="math-container">$$\hat\rho(\mu,\eta)=\tfrac12b\cdot\sum_{B\in\mathcal B}|\mu(B)-\eta(B)|+\sum_{B\in\mathcal B'}\min\{\mu(B),\eta(B)\}\cdot\hat\rho(\mu{\restriction}B,\eta{\restriction}B),$$</span> where <span class="math-container">$\mathcal B'=\{B\in\mathcal B:\min\{\mu(B),\eta(B)\}&gt;0\}$</span> and the probability measures <span class="math-container">$\mu{\restriction} B$</span> and <span class="math-container">$\eta{\restriction}B$</span> assign to each subset <span class="math-container">$S$</span> of <span class="math-container">$B$</span> the numbers <span class="math-container">$\mu(S)/\mu(B)$</span> and <span class="math-container">$\eta(S)/\eta(B)$</span>, respectively.</p> <p>It can be shown that <span class="math-container">$\rho\le\hat\rho$</span>.</p> <blockquote> <p><strong>Question.</strong> Is <span class="math-container">$\rho=\hat\rho$</span>?</p> </blockquote>
gmvh
45,250
<p>Judging from your pictures, it should be sufficient to consider the root-mean-square distance <span class="math-container">$\rho$</span> of the points <span class="math-container">$\vec{x}_k$</span> from their center of mass <span class="math-container">$\vec{\mu}$</span>, divided by the radius <span class="math-container">$R$</span> of the circle: <span class="math-container">\begin{align} \vec{\mu} &amp;= \frac{1}{N}\sum_{k=1}^N\vec{x}_k, \\ \rho &amp;= \sqrt{\left(\frac{1}{N}\sum_{k=1}^N\left\lVert\vec{x}_k-\vec{\mu}\right\rVert^2\right)}, \\ S &amp;=\frac{\rho}{R}, \end{align}</span> where <span class="math-container">$S$</span> is your measure of scattering within the circle.</p>
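<p>A minimal sketch of this scatter measure in code (the function name and sample points are illustrative):</p>

```python
import math

# S = rho / R: RMS distance of the points from their centroid,
# normalized by the radius R of the enclosing circle.
def scatter(points, R):
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    rho = math.sqrt(sum((x - mx) ** 2 + (y - my) ** 2 for x, y in points) / n)
    return rho / R

# A tight cluster scores lower than points spread toward the rim.
tight = [(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1), (0.0, -0.1)]
spread = [(0.9, 0.0), (-0.9, 0.0), (0.0, 0.9), (0.0, -0.9)]
assert abs(scatter(tight, 1.0) - 0.1) < 1e-9
assert scatter(tight, 1.0) < scatter(spread, 1.0)
```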
577,490
<p>I wonder if I have approached this in the right way. I'm not sure if I have interpreted the question correctly, or made correct use of the successor function. Thank you in advance</p> <blockquote> <p><strong>Question:</strong> Prove that $¬∃n∅ = n^+$</p> </blockquote> <p>My approach:</p> <p>Prove that $¬∃n∅ = n^+$</p> <p>(i) In other words, prove (by counterexample) that $∅ ≠ n^+$.</p> <p>(ii) Let $n^+ = 1 \Rightarrow n^+ = ∅^+ \Rightarrow n^+ = ∅^+ = \{∅\}$.</p> <p>(iii) But $∅ ≠ \{∅\} \Rightarrow ¬∃n∅ = n^+$.</p>
Asaf Karagila
622
<p>The proof is incorrect, because you assume that $n=1$. You need to assume that $n^+=\varnothing$, then derive a contradiction.</p> <p>For example,</p> <ol> <li>Assume $n^+=\varnothing$, then $n\cup\{n\}=\varnothing$.</li> <li>In particular $n\in\varnothing$.</li> <li>Contradiction.</li> </ol>
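<p>The point of step 2 can be illustrated with von Neumann naturals encoded as Python frozensets (an illustration only, not part of the proof): since $n\in n\cup\{n\}$, a successor is never empty.</p>

```python
# Von Neumann naturals as frozensets: 0 = {}, n+ = n ∪ {n}.
zero = frozenset()

def succ(n):
    return frozenset(n | {n})

n = zero
for _ in range(5):
    n_plus = succ(n)
    assert n in n_plus       # n ∈ n ∪ {n}
    assert n_plus != zero    # hence n+ ≠ ∅
    n = n_plus
```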
2,433,765
<p><a href="https://i.stack.imgur.com/AolxA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AolxA.png" alt="enter image description here"></a></p> <p>In the following diagram of a triangle, $\overline{AB} = \overline{BC} = \overline{CD}$ and $\overline{AD} = \overline{BD}$. Find the measure of angle $D$.</p> <p>I know this should be easy but I am stuck. I started by saying angle $\widehat{ACB} = \theta$ and that the supplement angle $\widehat{BCD} = 180^\circ-\theta$. I know that angle $\widehat{CAB}=\theta$ as well and that angle $\widehat{ABC} = 180^\circ-2\theta$. In addition, angles $\widehat{CDB}$ and $\widehat{CBD}$ are equal. I am not sure how to solve for angle $\widehat{CDB}$ ... is it possible to find an exact numerical measure? I hate overlooking something obvious. Thank you for your help.</p>
g.kov
122,782
<p><a href="https://i.stack.imgur.com/wEb4F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wEb4F.png" alt="enter image description here"></a></p> <p>\begin{align} 5\,\delta&amp;=180^\circ \end{align} </p> <p><strong>Edit</strong></p> <p>\begin{align} |AB|&amp;=|BC|=|CD| ,\\ |AD|&amp;=|BD| . \end{align} </p> <p>Let $\angle BDA=\delta$.</p> <p>Then, from isosceles $\triangle BDC$, $\angle CBD=\delta$, $\angle DCB=180^\circ-2\,\delta$.</p> <p>In $\triangle ABC$, $\angle BCA=180^\circ-\angle DCB=2\,\delta$, $\angle BAC=\angle BCA=2\,\delta$.</p> <p>Also, $\angle BAC=\angle BAD=\angle ABD$, hence $\angle ABD=2\,\delta$.</p>
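<p>A coordinate check of the angle chase with <span class="math-container">$\delta=36^\circ$</span> (the placement of the points is chosen here only to verify the stated equalities):</p>

```python
import math

# With delta = 36 deg (so 5*delta = 180), rebuild the configuration
# and verify AB = BC = CD and AD = BD numerically.
d = math.radians(36)
A, D = (0.0, 0.0), (1.0, 0.0)
# Triangle ABD: angle BAD = angle ABD = 2*delta, angle ADB = delta.
AB = math.sin(d) / math.sin(2 * d)              # law of sines with AD = 1
B = (AB * math.cos(2 * d), AB * math.sin(2 * d))
# Isosceles triangle BDC: angle BDC = angle DBC = delta, angle BCD = 180 - 2*delta.
DC = math.sin(d) / math.sin(math.pi - 2 * d)    # law of sines with BD = 1
C = (1.0 - DC, 0.0)                             # C lies on segment AD

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

assert abs(dist(A, B) - dist(B, C)) < 1e-9
assert abs(dist(B, C) - dist(C, D)) < 1e-9
assert abs(dist(A, D) - dist(B, D)) < 1e-9
```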
1,865,086
<p>$$T_n=2^{-n}$$</p> <p>How can I tell if this converges? With previous questions I have just let $n = \infty$; however, I'm unsure about this one.</p>
5xum
112,884
<p>You can <strong>tell</strong> (intuitively) that this converges by writing down a couple of elements and trying to find a pattern:</p> <p>$$\frac12, \frac14, \frac18,\frac1{16}\dots$$</p> <p>You should immediatelly see that the elements are becoming very small very fast. So, your intuition should tell you that the limit should be $0$. But that's only a small part of the deal.</p> <hr> <p>This is mathematics, after all. All your hunches and ideas and intuition are empty if you cannot <em>prove</em> it.</p> <p>Now, there are many ways to prove that the limit is $0$. The most basic way is to use the $\epsilon$-$\delta$ definition, which in this case is very easy to use. In fact:</p> <blockquote> <p>If it takes you more than one minute to figure out how the $\epsilon$-$\delta$ proof of this limit looks like, <strong>you need to practice more</strong>. An excellent way to practice is to write down this exact proof. It isn't hard once you get the hang of it, and if you want to do anything in math, you <em>need</em> practice.</p> </blockquote> <hr> <p>Of course, there are other ways of proving it. You could, for example, find some other sequence $S_n$ such that $0&lt;T_n&lt;S_n$ and $$\lim_{n\to\infty}S_n=0.$$ Then, if you know enough about limits, you can conclude that the limit of $T_n$ is also $0$.</p>
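<p>One concrete form the <span class="math-container">$\epsilon$</span>-<span class="math-container">$N$</span> argument can take (the witness <span class="math-container">$N=\lceil\log_2(1/\epsilon)\rceil$</span> is a standard choice, shown numerically rather than as the written proof):</p>

```python
import math

# For any eps > 0, N = ceil(log2(1/eps)) gives 2^(-n) < eps for all n > N.
for eps in (0.5, 1e-3, 1e-9):
    N = math.ceil(math.log2(1 / eps))
    assert all(2.0 ** (-n) < eps for n in range(N + 1, N + 100))
```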
1,865,086
<p>$$T_n=2^{-n}$$</p> <p>How can I tell if this converges? With previous questions I have just let $n = \infty$; however, I'm unsure about this one.</p>
operatorerror
210,391
<p>I think it is worth proving that $1/n\rightarrow 0$ rigorously with an epsilon-n proof at least once. After that, you can compare sequences like the one in your question to this sequence, since it is clear that $$ 2^{-n}\leq 1/n $$ Hint: use the Archimedean property.</p>
1,778,037
<p>$ 9^x-6^x=4^{x+1/2}$, solve for $x$</p> <p>Please don't solve this problem entirely, I just want some hints. I have tried to substitute $3^x=b$ and $2^x=a$. </p>
Niklas Rosencrantz
5,602
<p>Hint: $\ln (y ^x) =x\ln y$. Now you should see the first one. </p>
1,778,037
<p>$ 9^x-6^x=4^{x+1/2}$, solve for $x$</p> <p>Please don't solve this problem entirely, I just want some hints. I have tried to substitute $3^x=b$ and $2^x=a$. </p>
Siddd
274,503
<p>Consider this: The equation can be reduced to $$9^x - 6^x = 4^x \cdot 4^{1/2}$$ which is $$9^x = 6^x + 2\cdot 4^x$$ Now the RHS is divisible by $2$, but the LHS is not, for any values of $x\in \mathbb Z^+$. <br>For the rest, try using $\ln(y^x) = x\ln(y)$ as others have suggested. That should give the final answer.</p>
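<p>Over the reals there is still one solution, which the substitutions hinted at in the question produce: dividing by <span class="math-container">$4^x$</span> and setting <span class="math-container">$t=(3/2)^x$</span> gives <span class="math-container">$t^2-t-2=0$</span>, so <span class="math-container">$t=2$</span> and <span class="math-container">$x=\ln 2/\ln(3/2)$</span>. A quick numeric check:</p>

```python
import math

# Positive root of t^2 - t - 2 = 0 is t = 2, so x = ln 2 / ln(3/2).
x = math.log(2) / math.log(1.5)
assert abs(9 ** x - 6 ** x - 4 ** (x + 0.5)) < 1e-9
```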
3,987,911
<p>In the game of bridge, a player is given 13 cards from a deck of 52 cards, what is the probability that he/she gets exactly one king and one queen? Furthermore what is the expected value of the number of aces he/she gets?</p>
Community
-1
<p>This question can be solved 'in a flash' when looked at in the right way.</p> <p>If you look at the second term in your expression for time, you can simplify it to <span class="math-container">$$\frac{\sqrt{(x-37)^2+37^2}}{\sqrt2}.$$</span></p> <p>We have now, in effect, made it a route with two legs travelled at the same speed, <span class="math-container">$\sqrt2$</span>.</p> <p>Furthermore, you will be able to draw this as a route travelling from <span class="math-container">$(0,0)$</span> to <span class="math-container">$(37,100)$</span> via a point at <span class="math-container">$(x,63)$</span>.</p> <p>For a straight line route make <span class="math-container">$x/37=63/100$</span> and the problem is solved.</p>
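<p>A numeric confirmation of the straight-line condition (since both legs are travelled at the same speed, minimizing the time is the same as minimizing the two-leg distance):</p>

```python
import math

# D(x) = distance (0,0) -> (x,63) -> (37,100); minimized when the
# bend point lies on the straight line, i.e. x/37 = 63/100.
def D(x):
    return math.hypot(x, 63) + math.hypot(x - 37, 100 - 63)

xs = 37 * 63 / 100
assert all(D(xs) <= D(xs + h) for h in (-1, -0.01, 0.01, 1))
# At the optimum, D equals the straight-line distance to (37,100).
assert abs(D(xs) - math.hypot(37, 100)) < 1e-9
```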
778,140
<p>I want to draw a box plot, which requires that I know the median, the lower and upper quartiles, and the minimum and maximum values of my data.</p> <p>I understand that the quartiles are simply the value at a certain "percentile" of the cumulative frequency of the data.</p> <p>So lower quartile = the value of the observation on the 25th percentile of the data. Now my question (for AQA GCSE prep) is - what if taking 25% of my data ends up in a decimal number, let's say, $3.5$. And my data consists of classes in a grouped frequency table. And two of my classes are:</p> <p>$ class 1$ || $ 2 &lt;= h &lt; 3.5$</p> <p>$ class 2$ || $ 3.5 &lt;= h &lt; 5$</p> <p>So when I take 25%, the value 3.5 falls in between two classes. Which value should I choose as the lower quartile? Should it be $class 1$, or $class 2$? Should my rounding of 3.5 be the same as regular rounding, i.e. just rounding up to 4 (hence selecting $class 2$)? Or should I choose $class 1$ for some reason?</p>
jsk
147,497
<p>If your data consist of the frequency of observations within each class, then the best that you can do is say that the quartile is somewhere within the class. To find out which class the quartile belongs to, you need to first figure out the index of the quartile. For example, if you have 100 observations in your dataset, then the median would be the average of the 50th and the 51st observations of your dataset. You would then need to find which class contains both the 50th and 51st observations by examining the cumulative frequencies. Perhaps the cumulative frequency for class 2 is 45 and the cumulative frequency for class 3 is 55. Then you would know that the 50th and 51st observations were in class 3, so the median would be in class 3.</p>
778,140
<p>I want to draw a box plot, which requires that I know the median, the lower and upper quartiles, and the minimum and maximum values of my data.</p> <p>I understand that the quartiles are simply the value at a certain "percentile" of the cumulative frequency of the data.</p> <p>So lower quartile = the value of the observation on the 25th percentile of the data. Now my question (for AQA GCSE prep) is - what if taking 25% of my data ends up in a decimal number, let's say, $3.5$. And my data consists of classes in a grouped frequency table. And two of my classes are:</p> <p>$ class 1$ || $ 2 &lt;= h &lt; 3.5$</p> <p>$ class 2$ || $ 3.5 &lt;= h &lt; 5$</p> <p>So when I take 25%, the value 3.5 falls in between two classes. Which value should I choose as the lower quartile? Should it be $class 1$, or $class 2$? Should my rounding of 3.5 be the same as regular rounding, i.e. just rounding up to 4 (hence selecting $class 2$)? Or should I choose $class 1$ for some reason?</p>
David K
139,123
<p>If taking the $25$th percentile of the data gives you the value $3.5$, then the value of the first quartile is $3.5$. We would usually expect that the <em>number of observations</em> in each of your frequency classes is a whole number, but there is nothing wrong with having a quartile or median <em>value</em> that is not a whole number, so further rounding is neither needed nor desired.</p>
1,644,130
<p>I proved for a bounded set $\Omega$ and $1 \leq p \leq q \leq \infty$ that $L^{q}(\Omega) \subset L^{p}(\Omega)$. What is an example that shows the inclusion is strict when $ p&lt;q$, and that it fails if $\Omega$ is not bounded?</p>
DanielWainfleet
254,665
<p>For $1\leq p&lt;q&lt;\infty ,\quad $ $f(x)=x^{-1/q} $ belongs to $L^p(0,1)$ but not to $L^q(0,1)$.</p>
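<p>A numeric illustration with <span class="math-container">$p=1$</span>, <span class="math-container">$q=2$</span>, <span class="math-container">$f(x)=x^{-1/2}$</span> (using the exact antiderivatives, so no quadrature is involved):</p>

```python
import math

# ∫_eps^1 x^(-1/2) dx = 2(1 - sqrt(eps)) stays bounded as eps -> 0,
# while ∫_eps^1 x^(-1) dx = -log(eps) grows without bound.
eps_values = (1e-2, 1e-4, 1e-8)
int_p = [2 * (1 - math.sqrt(e)) for e in eps_values]
int_q = [-math.log(e) for e in eps_values]

assert all(v < 2 for v in int_p)        # f ∈ L^1(0,1)
assert int_q[0] < int_q[1] < int_q[2]   # f ∉ L^2(0,1)
```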
959,410
<p>I need to evaluate an expression similar to the following:</p> <p>$\frac{\partial\mathrm{log}(a+b)}{\partial a}$</p> <p>At this point I don't know how to proceed. $b$ is a constant so there should be some way to eliminate it. How would you proceed in this case?</p> <p>Actually, the original expression is much more complicated, and related to multiclass logistic regression, but I wanted to spare you tedious details.</p>
James S. Cook
36,530
<p>Suppose you know $\frac{d}{dx} e^x=e^x$. Define $f(x)=e^x$ and let $g(x) = \ln (x)$ be the inverse function of the exponential function. In particular this means we assume $e^{\ln(x)}=x$ for all $x \in (0, \infty)$ and $\ln(e^x) = x$ for all $x \in (-\infty, \infty)$. The existence of $g$ is no trouble as the exponential function is everywhere injective. </p> <p>Ok, assume $x&gt;0$ and let $y=\ln(x)$ then $e^y = e^{\ln(x)}=x$. Differentiate the equation $e^y=x$: $$ \frac{d}{dx} e^y = \frac{d}{dx} x \ \ \Rightarrow \ \ e^y \frac{dy}{dx}=1$$ where we have used the chain-rule and the observation that $y$ is a function of $x$. Finally, solve for $\frac{dy}{dx}$ (which is what we're after here) $$ \frac{dy}{dx} = \frac{1}{e^y} = \frac{1}{x} \ \ \Rightarrow \ \ \frac{d}{dx} \ln (x) = \frac{1}{x}.$$ Now, to solve your problem I merely apply this to $\ln(a+b)$ thinking of $a$ as $x$. $$ \frac{\partial}{\partial a} \ln (a+b) = \left[\frac{d}{dx} \ln (x+b) \right]_{x=a} = \left[\frac{1}{x+b} \right]_{x=a} = \frac{1}{a+b}.$$ </p>
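<p>A finite-difference sanity check of the final formula (the helper function is, of course, only an illustration):</p>

```python
import math

# Central-difference approximation of ∂/∂a ln(a+b), with b held constant.
def partial_a(f, a, b, h=1e-6):
    return (f(a + h, b) - f(a - h, b)) / (2 * h)

for a, b in [(1.0, 2.0), (0.5, 3.0), (4.0, -1.5)]:
    approx = partial_a(lambda a, b: math.log(a + b), a, b)
    assert abs(approx - 1 / (a + b)) < 1e-6
```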
2,895,129
<blockquote> <p>Let $f(x) \leq g(x)$ for all $x$ in an open set $D$, and let $a$ be an interior point of $D$ such that $f(a) = g(a)$ and both $f, g$ are differentiable at $a$. Does it follows that $f'(a)=g'(a)$?</p> </blockquote> <p>Is that right, and if so, why? If not then please give examples.</p>
Rigel
11,776
<p>By assumption, the differentiable function $h(x) := g(x) - f(x)$ has a minimum point at $a$. Since $a$ is an interior point, then $h'(a) = 0$, i.e. $f'(a) = g'(a)$.</p>
2,895,129
<blockquote> <p>Let $f(x) \leq g(x)$ for all $x$ in an open set $D$, and let $a$ be an interior point of $D$ such that $f(a) = g(a)$ and both $f, g$ are differentiable at $a$. Does it follows that $f'(a)=g'(a)$?</p> </blockquote> <p>Is that right, and if so, why? If not then please give examples.</p>
mengdie1982
560,634
<h1>Proof</h1> <p>I will give a proof for the problem as follows, which depends only on the most basic facts of calculus. According to the assumptions, <span class="math-container">$$\frac{f(x)-f(a)}{x-a}\leq \frac{g(x)-g(a)}{x-a},\tag1$$</span>where <span class="math-container">$x$</span> belongs to some right neighborhood of <span class="math-container">$a.$</span> And <span class="math-container">$$\frac{f(x)-f(a)}{x-a}\geq \frac{g(x)-g(a)}{x-a},\tag2$$</span>where <span class="math-container">$x$</span> belongs to some left neighborhood of <span class="math-container">$a.$</span> Thus, for <span class="math-container">$(1)$</span>, let <span class="math-container">$x \to a+.$</span> <span class="math-container">$$\lim_{x \to a+}\frac{f(x)-f(a)}{x-a}\leq \lim_{x \to a+}\frac{g(x)-g(a)}{x-a},\tag 3$$</span>which implies <span class="math-container">$$f'_+(a) \leq g'_+(a).\tag4$$</span>Likewise, for <span class="math-container">$(2)$</span>, let <span class="math-container">$x \to a-.$</span> <span class="math-container">$$\lim_{x \to a-}\frac{f(x)-f(a)}{x-a}\geq \lim_{x \to a-}\frac{g(x)-g(a)}{x-a},\tag 5$$</span>which implies <span class="math-container">$$f'_-(a) \geq g'_-(a).\tag6$$</span></p> <p>But by the definition of differentiability, we have <span class="math-container">$$f'(a)=f'_+(a)=f'_-(a),~~~~~g'(a)=g'_+(a)=g'_-(a).\tag7$$</span></p> <p>Thus, <span class="math-container">$(4)$</span> and <span class="math-container">$(6)$</span> claim that <span class="math-container">$$f'(a) \leq g'(a),~~~~~f'(a) \geq g'(a)\tag8$$</span>respectively, which demands <span class="math-container">$$f'(a)=g'(a).$$</span></p>
3,163,525
<p>How would I expand the following function as a power series, around <span class="math-container">$\eta=0$</span>?</p> <p><span class="math-container">$$g_0(1,\eta)=\frac{\left(\frac{PV}{NkT}\right)_0-1}{4\eta}$$</span></p> <p>Note that:</p> <p><span class="math-container">$$\left(\frac{PV}{NkT}\right)_0=1+\frac{3\eta}{\eta_c-\eta}+\sum_{k=1}^4kA_k\left(\frac{\eta}{\eta_c}\right)^k$$</span></p> <p>Then we have:</p> <p><span class="math-container">$$g_0(1,\eta)=\frac{\frac{3\eta}{\eta_c-\eta}+\sum_{k=1}^4kA_k\left(\frac{\eta}{\eta_c}\right)^k}{4\eta}$$</span></p>
Andrei
331,661
<p>Starting from your last equation: <span class="math-container">$$g_0(1,\eta)=\frac{\frac{3\eta}{\eta_c-\eta}+\sum_{k=1}^4kA_k\left(\frac{\eta}{\eta_c}\right)^k}{4\eta}$$</span> you have: <span class="math-container">$$g_0(1,\eta)=\frac{3}{4(\eta_c-\eta)}+\sum_{k=1}^4\frac k{4\eta_c^k}A_k\eta^{k-1}$$</span> The terms after the <code>+</code> sign are fine; you just need to deal with the expansion of <span class="math-container">$(\eta_c-\eta)^{-1}$</span>. You can factor out <span class="math-container">$\eta_c$</span>, which gives <span class="math-container">$$\frac3{4\eta_c}\left(1-\frac{\eta}{\eta_c}\right)^{-1}$$</span> For <span class="math-container">$|x|&lt;1$</span> you have <span class="math-container">$$\frac 1{1-x}=1+x+x^2+x^3+...=\sum_{n=0}^\infty x^n$$</span></p>
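<p>A quick check of the geometric-series expansion of the first term (the value of <span class="math-container">$\eta_c$</span> below is only illustrative):</p>

```python
# 3/(4*(eta_c - eta)) = (3/(4*eta_c)) * sum_{n>=0} (eta/eta_c)^n for |eta| < eta_c.
eta_c = 0.74   # illustrative value
eta = 0.05
exact = 3 / (4 * (eta_c - eta))
series = 3 / (4 * eta_c) * sum((eta / eta_c) ** n for n in range(60))
assert abs(exact - series) < 1e-12
```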
3,187,793
<p>We know that <span class="math-container">$221 = 17*13$</span>. So we can check if the system has roots to both of those equations separately, which it does:</p> <p><span class="math-container">$x^{5} \equiv 2$</span> mod <span class="math-container">$13$</span> has the solution <span class="math-container">$6 + 13n$</span> and <span class="math-container">$x^{5} \equiv 2$</span> mod <span class="math-container">$17$</span> has the solution <span class="math-container">$15 + 17n$</span>. </p> <p>I got these numbers from wolfram, I have no idea how to solve this problem WITHOUT a calculator. And even after finding these numbers. How would one obtain a solution modulo <span class="math-container">$221$</span>? I was thinking the Chinese Remainder Theorem but I am under the assumption that CRT only applies to problems with powers of <span class="math-container">$x$</span> which are <span class="math-container">$1$</span>.</p> <p>Thanks.</p>
Bill Dubuque
242
<p>Below we quickly <em>mentally</em> solve <span class="math-container">$\,x^{\large 5}\equiv 2\,$</span> by taking a <span class="math-container">$5$</span>'th root, i.e. raising both sides to power <span class="math-container">$\color{#c00}{1/5}$</span></p> <p>Suppose <span class="math-container">$a$</span> is coprime to <span class="math-container">$13$</span> &amp; <span class="math-container">$17$</span>. By little Fermat <span class="math-container">$\,a^{\large 12}\equiv 1\pmod{\!13},\, $</span> <span class="math-container">$a^{\large 16}\equiv 1\pmod{\!17}\,$</span> hence <span class="math-container">$\,a^{\large 48}\equiv 1\,$</span> mod <span class="math-container">$13\ \&amp;\ 17,\,$</span> so also mod <span class="math-container">$\,13\cdot 17 = 221\,$</span> by <a href="https://math.stackexchange.com/a/190522/242">CCRT (or lcm)</a>. <span class="math-container">$ $</span> Applying this: <span class="math-container">$\bmod{13\cdot 17}\!:\ x^{\large 5}\equiv 2\,$</span> <span class="math-container">$\Rightarrow\,x\,$</span> is coprime to <span class="math-container">$13,17\,$</span> so <span class="math-container">$\,x^{\large 48}\equiv 1.\,$</span> Similarly <span class="math-container">$\,\color{#0a0}{2^{\large 24}}\equiv 1\,$</span> by <span class="math-container">$\bmod 17\!:\ (2^{\large 4})^{\large 6}\equiv(-1)^{\large 6}\equiv 1$</span></p> <p>By Theorem below: <span class="math-container">$\,x^{\large\color{}{48}}\equiv 1\equiv 2^{\large 48}$</span> and <span class="math-container">$\,k'\equiv \color{#c00}{1/5 \equiv 29}\pmod{\!48}\ $</span> [computed below] implies</p> <p><span class="math-container">$$\ \ \ \ \ \ \ \ x^{\large 5}\equiv 2\iff x\equiv 2^{\large\color{#c00}{1/5}}\equiv 2^{\large\color{#c00}{29}}\equiv \bbox[5px,border:1px solid #c00]{2^{\large 5}}\,\ \ {\rm by}\ \ \color{#0a0}{2^{\large 24}}\equiv 1$$</span></p> <hr /> <p><strong>Theorem</strong> <span class="math-container">$ $</span> [Compute <span 
class="math-container">$\color{#c00}k$</span>'th root by raising to power <span class="math-container">$\frac{1}k\!\pmod{\!f}\,$</span> if <span class="math-container">$k$</span> is coprime to <span class="math-container">$\color{#d0f}{{\rm period}\ f}$</span>]</p> <p>Given <span class="math-container">$\ \color{#d0f}{a^f} \equiv 1\equiv \color{#d0f}{b^f}\pmod{\!n},\ $</span> and <span class="math-container">$\ k' \equiv \frac{1}k\equiv k^{-1}\pmod{\!f},\, $</span> so <span class="math-container">$\ kk' = 1 + jf,\ $</span> then</p> <p><span class="math-container">$$ \bbox[5px,border:1px solid #c00]{a^{\large\color{#c00} k} \equiv b \iff a \equiv b^{\large (\color{#c00}{1/k})_f}\equiv b^{\large k'}\!\!\!\pmod{\!n}}\qquad$$</span></p> <p><span class="math-container">$\begin{align}{\bf Proof}\ \ \bmod n\!:\ \ \ &amp;b \equiv a^{\large k}\,\Rightarrow\, b^{\large k'}\! \equiv a^{\large kk'}\! \equiv a^{\large 1+fj} \equiv a(\color{#d0f}{a^{\large f}})^{\large j} \equiv a\\ &amp;a \equiv b^{\large k'}\!\Rightarrow\, a^{\large k} \equiv b^{\large k'k} \equiv \,b^{\large 1+fj} \equiv \,b(\color{#d0f}{b^{\large f}})^{\large j} \equiv b \end{align}\ \ $</span> by <a href="https://math.stackexchange.com/a/879262/242">Congruence Laws</a></p> <p><strong>Remark</strong> <span class="math-container">$ $</span> Clearly the proof <a href="https://math.stackexchange.com/a/3877103/242">works in any group</a> using <span class="math-container">$\,\color{#d0f}{f = |G|}\,$</span> by Lagrange. 
Said in map language, the theorem shows that the <span class="math-container">$k$</span>'th power map <span class="math-container">$\,x^k$</span> has inverse <span class="math-container">$(k$</span>'th root) being <span class="math-container">$\,x^{k'}\,$</span> (on <span class="math-container">$\,\Bbb Z_n^{*})$</span></p> <hr /> <p>For completeness below we compute <span class="math-container">$\ 1/5 \pmod{\!48}\ $</span> using <a href="https://math.stackexchange.com/a/174687/242">Inverse Reciprocity</a></p> <p><span class="math-container">$\bmod 48\!:\,\ \dfrac{1}5\equiv \dfrac{1\!+\!48(\color{#c00}3)}5\equiv \dfrac{145}5\equiv 29\ $</span> by <span class="math-container">$\bmod 5\!:\ 0\equiv 1\!+\!48\color{#c00}x\equiv 1\!-\!2x\!\iff\! {\overbrace{2x\equiv1\equiv6}^{\large \color{#c00}{x\ \equiv\ 3}}}$</span></p> <hr /> <p><strong>Alternatively</strong> we can use CRT and compute the <span class="math-container">$5$</span>'th roots modulo each prime <span class="math-container">$13,17\,$</span> as follows, where the left &amp; rightmost equivalences are by CRT, and the middle one is by the Theorem</p> <p><span class="math-container">$x^{\large 5}\!\equiv 2\pmod{\!\!\!\overbrace{221}^{\large 13\,\cdot\, 17}\!\!} \!\!\rm\iff\!\! \begin{align} x^{\large 5}\!\equiv 2\!\!\!\pmod{\!13}\\ x^{\large 5}\!\equiv 2\!\!\!\pmod{\!17}\end{align}$</span> <span class="math-container">$\!\!\iff\!\! \begin{align} x&amp;\equiv\ \ 6\!\!\!\pmod{\!13}\\ x&amp;\equiv 15\!\!\!\pmod{\!17}\end{align} \!\!\iff\! x\equiv 32\pmod{\!\!\!\overbrace{221}^{\large 13\,\cdot\, 17}\!\!}$</span></p> <p>The first <span class="math-container">$\!\iff\!$</span> is by replacing <span class="math-container">$\,x^{\large 5}\,$</span> by <span class="math-container">$X$</span> then applying CRT (again we need only the trivial constant-case <a href="https://math.stackexchange.com/a/190522/242">CCRT or lcm)</a>. 
The fraction computations for <span class="math-container">$\,1/5\,$</span> in the Theorem in the middle arrow are quickly computed by Inverse Reciprocity as above (or the <a href="https://math.stackexchange.com/a/85841/242">Extended Euclidean Algorithm</a>)</p> <p><span class="math-container">$\!\bmod 12\!:\ \dfrac{1}5 \equiv \dfrac{1 + 12\,\cdot\, \color{#c00}2}5\ \equiv\ \color{#0a0}5,\ $</span> by <span class="math-container">$\bmod 5\!:\ 1\!+\!12\color{#c00}x \equiv 0 \iff x \equiv \dfrac{-1}{12}\, \equiv\, \dfrac{4}2\, =\, \color{#c00}2$</span></p> <p><span class="math-container">$\!\bmod 16\!:\ \dfrac{1}5 \equiv \dfrac{1\!+\!16(\color{#c00}{-1})}5\! \equiv\! \color{#f84}{-3},\ $</span> by <span class="math-container">$\bmod 5\!:\ 1\!+\!16\color{#c00}x \equiv 0 \iff x \equiv \dfrac{-1}{16} \equiv \dfrac{-1}1 = \color{#c00}{-1}$</span></p> <p>Plugging the above values of <span class="math-container">$\,1/5\,$</span> into the Theorem we obtain the residues <span class="math-container">$\,x\equiv 6,15\,\bmod\, 13,17$</span></p> <p>Thus <span class="math-container">$\bmod 13\!:\,\ x^{\large 5}\equiv 2\iff x\equiv 2^{\large\color{#0a0}{\:\! 5}}\equiv 6\,\ $</span> by the Theorem,</p> <p>and <span class="math-container">$\ \ \bmod 17\!:\,\ x^{\large 5}\equiv 2\iff x\equiv 2^{\large \color{#f84}{-3}}\equiv\dfrac{1}8\equiv\dfrac{-16}8\equiv -2\equiv 15 $</span></p> <p>Finally by <a href="https://math.stackexchange.com/a/20259/242">Easy CRT</a> <span class="math-container">$\,\ x\equiv 15+17\left[\dfrac{6\!-\!15}{17}\bmod{\!13}\right]$</span> <span class="math-container">$ \equiv15+17\left[\dfrac{4}{4}\right]\equiv \bbox[5px,border:1px solid #c00]{32}\,\ \pmod{\!13\cdot 17} $</span></p> <p>But this ends up being more work than the first direct method.</p> <p><strong>Remark</strong> <span class="math-container">$ $</span> See <a href="https://math.stackexchange.com/a/718709/242">here</a> for methods for the more general (non-coprime) case.</p>
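<p>All of the arithmetic above is easy to confirm by machine (a sanity check only, not part of the mental computation):</p>

```python
# 29 is 1/5 mod 48; 2^29 ≡ 2^5 = 32 (mod 221); and 32 is the unique
# fifth root of 2 mod 221, with CRT residues 6 mod 13 and 15 mod 17.
assert (5 * 29) % 48 == 1
assert pow(2, 29, 221) == 32
assert pow(32, 5, 221) == 2
assert [x for x in range(221) if pow(x, 5, 221) == 2] == [32]
assert 32 % 13 == 6 and 32 % 17 == 15
```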
2,755,091
<p>I know that I have to prove this by induction. I proved the base case, i.e. $n=1$, where $7^1-1=6$ is divisible by $6$. But I got stuck on how to proceed with the induction step, i.e. $n=k$.</p>
Delta-u
550,182
<p>Let us suppose that the result is true for $n=k$, i.e. $7^k-1$ is divisible by $6$.</p> <p>Then: $$7^{k+1}-1=7 \times (7^k-1)+7-1=7 \times (7^k-1)+6$$ but $(7^k-1)$ is divisible by $6$, so $7 \times (7^k-1)$ is also divisible by $6$, and finally $7 \times (7^k-1)+6$ is divisible by $6$.</p>
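The induction identity can be spot-checked numerically (a Python illustration, not part of the proof):

```python
# Check the induction identity 7^(k+1) - 1 = 7*(7^k - 1) + 6
# together with the divisibility claim, for a range of k.
for k in range(1, 30):
    assert 7 ** (k + 1) - 1 == 7 * (7 ** k - 1) + 6
    assert (7 ** k - 1) % 6 == 0
print("induction identity verified for k = 1..29")
```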
2,755,091
<p>I know that I have to prove this by induction. I proved the base case, i.e. $n=1$, where $7^1-1=6$ is divisible by $6$. But I got stuck on how to proceed with the induction step, i.e. $n=k$.</p>
Anastassis Kapetanakis
342,024
<p>We have: $$7^n-1=(7-1)(7^{n-1}+7^{n-2}+\dots +7+1) = 6(7^{n-1}+7^{n-2}+\dots +7+1) $$</p> <p>Which definitely is divisible by $6$.</p>
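This telescoping factorization can likewise be checked directly (Python, illustrative only):

```python
# Check the factorization 7^n - 1 = 6 * (7^(n-1) + ... + 7 + 1) directly.
for n in range(1, 25):
    geometric_sum = sum(7 ** j for j in range(n))  # 7^(n-1) + ... + 7 + 1
    assert 7 ** n - 1 == 6 * geometric_sum
print("factorization verified for n = 1..24")
```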
4,008,277
<p><a href="https://en.wikipedia.org/wiki/Vandermonde%27s_identity#Generalized_Vandermonde%27s_identity" rel="noreferrer">Vandermonde's identity</a> states that</p> <p><span class="math-container">$$ \sum_{k=0}^{r} {m \choose k}{n \choose r-k} = {m+n \choose r} $$</span></p> <p>I wonder if we can use this formula or otherwise to calculate:</p> <p><span class="math-container">$$ \sum_{k=0}^{r}k^l {m \choose k}{n \choose r-k}, l \in \mathbb{N} ?$$</span></p> <p>or if not, at least when <span class="math-container">$l=1,$</span> namely:</p> <p><span class="math-container">$$ \sum_{k=0}^{r}k {m \choose k}{n \choose r-k}?$$</span></p> <p>I'd appreciate the relevant calculations or some links where I can find the above?</p>
Trevor
493,232
<p>Nope, not unique.</p> <p>I hadn't checked far enough. The first counterexample is at <span class="math-container">$$f(38183)=f(38185)=258840858.$$</span></p> <p>It could plausibly still have only finitely many counterexamples, though, making the 'almost all' claim true. There are three more found so far through <span class="math-container">$n&lt; 6 \times 10^8$</span> with the form <span class="math-container">$(n,n+2)$</span>: <span class="math-container">$458009$</span>, <span class="math-container">$776111$</span>, and <span class="math-container">$65675407$</span>. As stated below, there was also one example of form <span class="math-container">$(n,n+4)$</span>: <span class="math-container">$113393278$</span>.</p> <p>Note <span class="math-container">$f(x)$</span> grows as the square of <span class="math-container">$x$</span>, so this may be what one would probabilistically expect. It's tough to say, but my guess is counterexamples may be very sparse, yet probably still infinite.</p>
4,008,277
<p><a href="https://en.wikipedia.org/wiki/Vandermonde%27s_identity#Generalized_Vandermonde%27s_identity" rel="noreferrer">Vandermonde's identity</a> states that</p> <p><span class="math-container">$$ \sum_{k=0}^{r} {m \choose k}{n \choose r-k} = {m+n \choose r} $$</span></p> <p>I wonder if we can use this formula or otherwise to calculate:</p> <p><span class="math-container">$$ \sum_{k=0}^{r}k^l {m \choose k}{n \choose r-k}, l \in \mathbb{N} ?$$</span></p> <p>or if not, at least when <span class="math-container">$l=1,$</span> namely:</p> <p><span class="math-container">$$ \sum_{k=0}^{r}k {m \choose k}{n \choose r-k}?$$</span></p> <p>I'd appreciate the relevant calculations or some links where I can find the above?</p>
Barry Cipra
86,747
<p>This is just a comment on the counterexample in the OP's answer, noting a relatively quick way to verify that <span class="math-container">$f(38183)=f(38185)$</span> without computing each to equal <span class="math-container">$258840858$</span>.</p> <p>A formula at the <a href="https://oeis.org/A004125" rel="noreferrer">OEIS entry</a> for the OP's sequence says</p> <p><span class="math-container">$$f(n)=n^2-\sum_{k=1}^n\sigma(k)$$</span></p> <p>where <span class="math-container">$\sigma(k)$</span> is the sum of the divisors of <span class="math-container">$k$</span>. To show that <span class="math-container">$f(m)=f(n)$</span> with <span class="math-container">$m\lt n$</span>, then, it suffices to show that</p> <p><span class="math-container">$$\sum_{k=m+1}^n\sigma(k)=n^2-m^2=(n-m)(n+m)$$</span></p> <p>For <span class="math-container">$m=38183$</span> and <span class="math-container">$n=38185$</span>, we have <span class="math-container">$(n-m)(n+m)=2\cdot76368=152736$</span> while</p> <p><span class="math-container">$$\begin{align} \sigma(38184)+\sigma(38185) &amp;=\sigma(2^3\cdot3\cdot37\cdot43)+\sigma(5\cdot7\cdot1091)\\ &amp;=15\cdot4\cdot38\cdot44+6\cdot8\cdot1092\\ &amp;=100320+52416\\ &amp;=152736 \end{align}$$</span></p> <p>Remark: The trickiest part of the verification is the factorization into primes (in particular, recognizing <span class="math-container">$1091$</span> as a prime number).</p>
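The verification can be automated with a naive divisor-sum routine (a Python sketch; the `sigma` helper below is an illustration, not from the original posts):

```python
def sigma(n):
    """Sum of divisors of n via trial division."""
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:       # avoid double-counting a square divisor
                total += n // d
        d += 1
    return total

# Reproduce the two divisor sums and the key identity
# sigma(38184) + sigma(38185) = 38185^2 - 38183^2 = 152736.
assert sigma(38184) == 100320
assert sigma(38185) == 52416
assert sigma(38184) + sigma(38185) == 38185 ** 2 - 38183 ** 2
```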
4,008,277
<p><a href="https://en.wikipedia.org/wiki/Vandermonde%27s_identity#Generalized_Vandermonde%27s_identity" rel="noreferrer">Vandermonde's identity</a> states that</p> <p><span class="math-container">$$ \sum_{k=0}^{r} {m \choose k}{n \choose r-k} = {m+n \choose r} $$</span></p> <p>I wonder if we can use this formula or otherwise to calculate:</p> <p><span class="math-container">$$ \sum_{k=0}^{r}k^l {m \choose k}{n \choose r-k}, l \in \mathbb{N} ?$$</span></p> <p>or if not, at least when <span class="math-container">$l=1,$</span> namely:</p> <p><span class="math-container">$$ \sum_{k=0}^{r}k {m \choose k}{n \choose r-k}?$$</span></p> <p>I'd appreciate the relevant calculations or some links where I can find the above?</p>
Joffan
206,402
<p>I feel this is worth an answer, using Barry Cipra's technique, to report the first such equality over a gap of more than <span class="math-container">$2$</span>:</p> <p><span class="math-container">$$f(113393278)=f(113393282)$$</span></p> <p>with:<br /> <span class="math-container">$\sigma(113393278)=171694080$</span> with <span class="math-container">$113393278 = 2\cdot 179\cdot 383\cdot 827$</span> (not needed)<br /> <span class="math-container">$\sigma(113393279)=117765120$</span> with <span class="math-container">$113393279 = 43\cdot 67\cdot 39359$</span><br /> <span class="math-container">$\sigma(113393280)=493516800$</span> with <span class="math-container">$113393280 = 2^7\cdot 3\cdot 5\cdot 7\cdot 11\cdot 13\cdot 59$</span><br /> <span class="math-container">$\sigma(113393281)=123621120$</span> with <span class="math-container">$113393281 = 17\cdot 47\cdot 139\cdot 1021$</span><br /> <span class="math-container">$\sigma(113393282)=172243200$</span> with <span class="math-container">$113393282 = 2\cdot 79\cdot 717679$</span></p>
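The same sigma-sum criterion can be checked in Python (with a naive trial-division `sigma`, an illustrative helper): the divisor sums over the gap must total $n^2-m^2$.

```python
def sigma(n):
    """Sum of divisors of n via trial division."""
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

# f(m) = f(n) holds iff sum of sigma(k) for k = m+1..n equals n^2 - m^2.
m, n = 113393278, 113393282
assert sum(sigma(k) for k in range(m + 1, n + 1)) == n ** 2 - m ** 2
print("gap-4 equality verified")
```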
3,774,263
<ol> <li>Is it possible that distance (<span class="math-container">$r$</span>) or angle (<span class="math-container">$θ$</span>) contains Imaginary or Complex number?</li> <li>If the answer is yes, how can I convert a number like that (Polar with complex argument) to Rectangular number? <br /> For example: <strong><span class="math-container">$(r,θ) = (5+2i, 3+4i)$</span></strong> how to convert to <strong><span class="math-container">$x+yi$</span></strong> ? <br /> <br /> Thank you.</li> </ol>
DonAntonio
31,254
<p>The fastest way is to invoke the dimension theorem, and in this case:</p> <p><span class="math-container">$$\dim\ker x^*+\dim\text{Im}\,x^*=\dim X$$</span></p> <p>and since <span class="math-container">$\;\dim\text{Im}\,x^*\le1\;,\;\;\dim X&gt;1\;$</span>, we get that</p> <p><span class="math-container">$$\;\dim\ker x^*\ge1\implies \,\exists\,0\neq x\in X\,\,s.t.\,\,x^*(x)=0\;$$</span></p>
1,726,745
<p>I am looking for information in regards to a couple particular functions: </p> <p>1) $P(x)=\sum_{p\in\mathbb{P}}\frac{x^p}{p!}$</p> <p>2) $Q(x)=\sum_{p\not\in\mathbb{P}}\frac{x^p}{p!}$ (assuming $0, 1$ are included powers in the series...)</p> <p>3) $R(x)=\frac{1}{P(x)}$</p> <p>4) $S(x)=\frac{1}{Q(x)}$</p> <p>I don't know if there is much literature on these, since I don't know if they are known, unknown, or what they are called.</p>
reuns
276,986
<p><strong>trying to relate $P(x) = \sum_{p \in \mathcal{P}} \frac{x^p}{p!}$ to the logarithm of the Riemann zeta function</strong>, I get : </p> <p>$$P(x) = \sum_{p \in \mathcal{P}} \frac{x^p}{p!} = \frac{x}{(2 i \pi)^2}\int_{|u|=r} \frac{1}{(u-2x)(u-x)}\left( \int_{c-i\infty}^{c+i\infty} (-\ln u)^{-s} \Gamma(s) \sum_{m=1}^\infty \frac{\mu(m)}{m} \ln \zeta(ms) ds\right) du$$ which I hope should be simplifiable.</p> <p>let : $$f(z) = \sum_{n=1}^\infty a_n z^n, \qquad\qquad g(z) = \sum_{n=1}^\infty \frac{a_n}{n!} z^n, \qquad \qquad F(s) = \sum_{n=1}^\infty a_n n^{-s}$$ </p> <p>(here $a_n = 1$ iff $n$ is prime, hence $F(s) = \sum_{m=1}^\infty \frac{\mu(m)}{m} \ln \zeta(ms)$)</p> <p>by the <a href="https://en.wikipedia.org/wiki/Cauchy%27s_integral_formula" rel="nofollow">Cauchy integral formula</a>, for any $r &lt; R$ the radius of convergence of $f$ : </p> <p>$$a_n = \frac{n!}{2 i \pi}\int_{|u|=r} \frac{f(u)}{(u-z)^{n+1}} du$$</p> <p>hence for any $|z| &lt; r/2$ : $$g(z) = \frac{1}{2 i \pi}\sum_{n=1}^\infty z^n \int_{|u|=r} \frac{f(u)}{(u-z)^{n+1}} du = \frac{1}{2 i \pi}\int_{|u|=r} f(u) \sum_{n=1}^\infty \frac{z^n}{(u-z)^{n+1}} du$$ $$ =\frac{z}{2 i \pi}\int_{|u|=r} \frac{f(u)}{(u-2z)(u-z)} du$$</p> <p>while from $n^{-s} \Gamma(s) = \int_0^\infty x^{s-1} e^{-nx} dx$ (extending $f(z)$ on $[0,1[$ by analytic continuation if necessary) : $$\Gamma(s) F(s) = \int_0^\infty x^{s-1} f(e^{-x})dx$$</p> <p>hence by <a href="https://en.wikipedia.org/wiki/Mellin_transform" rel="nofollow">inverse Mellin transform</a> : $$f(e^{-x}) = \frac{1}{2 i \pi}\int_{c-i\infty}^{c+i\infty} x^{-s} F(s) \Gamma(s) ds$$</p> <p>and $$g(z) = \frac{z}{2 i \pi}\int_{|u|=r} \frac{f(u)}{(u-2z)(u-z)} du = \frac{z}{(2 i \pi)^2}\int_{|u|=r} \frac{1}{(u-2z)(u-z)}\left( \int_{c-i\infty}^{c+i\infty} (-\ln u)^{-s} F(s) \Gamma(s) ds\right) du = \ldots$$</p>
254,926
<p>I was working on a problem involving perturbation methods and it asked me to sketch the graph of $\ln(x) = \epsilon x$ and explain why it must have 2 solutions. Clearly there is a solution near $x=1$ which depends on the value of $\epsilon$, but I fail to see why there must be a solution near $x \rightarrow \infty$. It was my understanding that $\ln x$ has no horizontal asymptote and continues to grow indefinitely, where for really small values of $\epsilon, \epsilon x$ should grow incredibly slowly. How can I 'see' that there are two solutions?</p> <p>Thanks!</p>
hmakholm left over Monica
14,366
<p>The <em>slope</em> of $\log x$ falls off as $1/x$. So for, say, $x&gt;1/\varepsilon$ the logarithm always grows strictly <em>slower</em> than $\varepsilon x$, and the latter will eventually overtake it.</p> <p>More precisely, if we put $a=2/\varepsilon$ then at any point to the right of $a$, the function $\varepsilon x$ gains at least $\varepsilon/2$ on the logarithm for each unit increase in $x$. Therefore $\varepsilon x$ will be larger than $\log x$ no later than at $x=a+\frac{2\log a}{\varepsilon}$.</p>
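Numerically, with $\varepsilon=0.1$ the bound $a+\frac{2\log a}{\varepsilon}\approx 79.9$ indeed lies beyond the second crossing. A small Python bisection sketch (illustrative only):

```python
import math

eps = 0.1
a = 2 / eps                          # a = 20
bound = a + 2 * math.log(a) / eps    # about 79.9, per the bound above

# g(x) = log(x) - eps*x changes sign on [a, bound], so the large root
# is bracketed; locate it by bisection.
lo, hi = a, bound
assert math.log(lo) - eps * lo > 0 and math.log(hi) - eps * hi < 0
for _ in range(60):
    mid = (lo + hi) / 2
    if math.log(mid) - eps * mid > 0:
        lo = mid
    else:
        hi = mid

root = (lo + hi) / 2
assert 35 < root < 36                # second crossing near x = 35.8
assert abs(math.log(root) - eps * root) < 1e-9
```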
1,963,214
<p>$\displaystyle\sum_{i=0}^{k-1}2^i=2^k-1$ for all $k \in\Bbb N$.</p> <p>Clearly, the first step here is easy. You can start with $k=1$ and solve to get $2^0=2^1-1$.</p> <p>What is a bit more challenging is the induction step. Where would I even begin?</p>
Tsemo Aristide
280,301
<p>We have $|f(0)|\leq |0|^2=0$, which implies that $f(0)=0$.</p> <p>$\left|\lim_{x\rightarrow 0}\frac{f(x)-f(0)}{|x-0|}\right|=\lim_{x\rightarrow 0}\left|\frac{f(x)}{|x|}\right|\leq \lim_{x\rightarrow 0}\frac{|x|^2}{|x|}=\lim_{x\rightarrow 0}|x|=0.$ This implies that the differential of $f$ at $0$ is $0$.</p>
1,145,902
<p>I've searched for the inner product definition and I saw that it should only satisfy some conditions (axioms) and thus there could be several operations representing an inner product, one of which is the usual multiplication, or the dot product. The question is: why is the inner product of 2 functions defined by $\int f_1(x)f_2(x)dx$? What is their choice of the operation based on? There could be other operations satisfying the inner product conditions, so why this one?</p>
Iulia
205,001
<p>The absolute value: $|\cdot|:\mathbb{R}\rightarrow \mathbb{R}_+$ is given by: $|x|=\begin{cases} x,\text{ if }x\geq 0\\-x\text{ if }x&lt;0\end{cases}$. I will write: $abs(x):=|x|$. If you want to compute: $abs(abs(x))$, you know that $abs(x)\geq 0$; therefore, using the definition, $abs(abs(x))=abs(x)$. Indeed, $abs(x)^2=x^2$, and $\sqrt{x^2}=abs(x)$ (since the radical function takes nonnegative values). The Euclidean norm: $\|\cdot\|:\mathbb{R}^n\rightarrow \mathbb{R}$ is defined by $\|(x_1,\dots,x_n)\|=\sqrt{x_1^2+\dots+x_n^2}$, therefore the absolute value is a particular case of the Euclidean norm, for $n=1$.</p>
673,946
<p>I have the function $f(x)=\frac {1}{2} \mathbf x^T Q \mathbf x$. </p> <p>I want to use the steepest descent algorithm where $Q$ is the diagonal matrix $\begin{bmatrix}1 &amp; 0\\0 &amp; 20\end{bmatrix}$ and $\mathbf x = \begin{bmatrix}0.7\\-0.2\end{bmatrix}$.</p> <p>I want to implement the ideal line search algorithm: for a starting $\mathbf x$ and direction $\mathbf d$ choose $\alpha &gt; 0$ so that $\mathbf d ^T\nabla f (\mathbf x + \alpha \mathbf d)=0$. </p> <p>I have the hint that I can find $\alpha$ by substituting the formula for $\nabla f(\mathbf z)$ and then solving for $\alpha$. </p> <p>I am to carry out 50 steps of the steepest descent method. </p> <p>Is this something that I just need Matlab for? I would appreciate any guidance on what I should do! </p>
gt6989b
16,192
<p>One does typically implement numerical algorithms in some computing environment. Matlab or Python or Mathematica will do fine, whatever you have available, or even a raw programming language without the mathy surroundings :-), like C or C++.</p> <p>You describe the algorithm pretty well. At each step,</p> <ol> <li>Determine the correct direction $\vec{d}$.</li> <li>Execute the line search to get the $\alpha$.</li> <li>Iterate $\vec{x}_{n+1} = \vec{x}_n + \alpha \vec{d}$.</li> </ol> <p>Start with $\vec{x}_0 = [0.7, -0.2]^T$ and perform $50$ steps.</p> <p><strong>Hint #2</strong></p> <p>The most optimal guess for the direction of motion at $\vec{x}_{n}$ is $-\nabla f \left(\vec{x}_n\right)$, so we set $\vec{d} = -\nabla f \left(\vec{x}_n\right)$.</p> <p>The remaining job is to define $\alpha$. You can use the hint you were given to do that.</p>
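For this specific quadratic, the hint works out in closed form: $\nabla f(\mathbf x)=Q\mathbf x$, so with $\mathbf d=-\nabla f(\mathbf x)$ the condition $\mathbf d^T\nabla f(\mathbf x+\alpha\mathbf d)=0$ gives $\alpha=\frac{\mathbf d^T\mathbf d}{\mathbf d^T Q\mathbf d}$. One possible sketch of the 50-step iteration in plain Python (Matlab would be analogous; this is an illustration, not the assigned solution):

```python
# Steepest descent with exact line search for f(x) = 0.5 * x^T Q x,
# Q = diag(1, 20), starting point x0 = (0.7, -0.2).
q = (1.0, 20.0)          # diagonal of Q
x = [0.7, -0.2]

for _ in range(50):      # 50 steps, as in the exercise
    g = [q[0] * x[0], q[1] * x[1]]    # gradient Q x
    d = [-g[0], -g[1]]                # steepest-descent direction
    dd = d[0] ** 2 + d[1] ** 2
    if dd == 0:
        break                         # already at the minimizer
    d_Qd = q[0] * d[0] ** 2 + q[1] * d[1] ** 2
    alpha = dd / d_Qd                 # exact line search step
    x = [x[0] + alpha * d[0], x[1] + alpha * d[1]]

print(x)  # converges toward the minimizer [0, 0]
assert x[0] ** 2 + x[1] ** 2 < 1e-12
```

The iterates zig-zag between two directions (the classic behavior when $Q$ is ill-conditioned), but 50 steps are far more than enough to reach the minimizer here.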
4,168,198
<p>If M is a connected n-manifold and N is a codim 2 submanifold then M-N is connected.</p> <p>Is this true?</p> <p>I want to show <span class="math-container">$H_0(M-N)=\mathbb Z$</span>. I think it's better to do it with <span class="math-container">$\mathbb Z_2$</span> coefficients because we get orientability. But because no compactness is mentioned I can't use Poincare duality. How can I resolve this?</p>
Moishe Kohan
84,907
<p>Here is an argument which works for a connected <span class="math-container">$n$</span>-dimensional topological manifold <span class="math-container">$M$</span> and a closed subset <span class="math-container">$C\subset M$</span> homeomorphic to a manifold of dimension <span class="math-container">$\le n-2$</span>. I am not sure what happens if you drop the assumption that <span class="math-container">$C$</span> is closed. My suspicion is that the complement need not be path-connected, but it might still be connected.</p> <p>First of all, it suffices to work with oriented manifolds <span class="math-container">$M$</span> (otherwise, you work in the orientation cover of <span class="math-container">$M$</span>). Next, using only the assumption that <span class="math-container">$C$</span> is a closed subset of <span class="math-container">$M$</span>, a form of the Poincaré-Lefschetz duality in this situation yields an isomorphism <span class="math-container">$$ \check{H}^*_c(C)\cong H_{n-*}(M, M-C), $$</span> where the left hand side is the Čech cohomology with compact support. For manifolds, Čech cohomology is isomorphic to the singular cohomology and, hence, the dimension assumption yields <span class="math-container">$$ \check{H}^{n-1}_c(C)={H}^{n-1}_c(C)=0. $$</span> Next, I will use the long exact sequence of the pair <span class="math-container">$(M, M-C)$</span>: <span class="math-container">$$ \to H_1(M)\to 0=H_1(M, M-C)\to \tilde{H}_0(M-C) \to \tilde{H}_0(M)=0 $$</span> Thus, <span class="math-container">$\tilde{H}_0(M-C)=0$</span> and, hence, <span class="math-container">$M-C$</span> is path-connected.</p>
28,348
<p>Thomson et al. provide a proof that <span class="math-container">$\lim_{n\rightarrow \infty} \sqrt[n]{n}=1$</span> in <a href="http://classicalrealanalysis.info/documents/TBB-AllChapters-Landscape.pdf#page=95" rel="noreferrer">this book (page 73)</a>. It has to do with using an inequality that relies on the binomial theorem: <a href="https://i.stack.imgur.com/n3dxw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/n3dxw.png" alt="enter image description here" /></a></p> <p>I have an alternative proof that I know (from elsewhere) as follows.</p> <hr /> <p><strong>Proof</strong>.</p> <p><span class="math-container">\begin{align} \lim_{n\rightarrow \infty} \frac{ \log n}{n} = 0 \end{align}</span></p> <p>Then using this, I can instead prove: <span class="math-container">\begin{align} \lim_{n\rightarrow \infty} \sqrt[n]{n} &amp;= \lim_{n\rightarrow \infty} \exp{\frac{ \log n}{n}} \newline &amp; = \exp{0} \newline &amp; = 1 \end{align}</span></p> <hr /> <p>On the one hand, it seems like a valid proof to me. On the other hand, I know I should be careful with infinite sequences. The step I'm most unsure of is: <span class="math-container">\begin{align} \lim_{n\rightarrow \infty} \sqrt[n]{n} = \lim_{n\rightarrow \infty} \exp{\frac{ \log n}{n}} \end{align}</span></p> <p>I know such an identity would hold for bounded <span class="math-container">$n$</span> but I'm not sure I can use this identity when <span class="math-container">$n\rightarrow \infty$</span>.</p> <p><strong>Question:</strong></p> <p>If I am correct, then would there be any cases where I would be wrong? 
Specifically, given any sequence <span class="math-container">$x_n$</span>, can I always assume: <span class="math-container">\begin{align} \lim_{n\rightarrow \infty} x_n = \lim_{n\rightarrow \infty} \exp(\log x_n) \end{align}</span> Or are there sequences that invalidate that identity?</p> <hr /> <p>(Edited to expand the last question) given any sequence <span class="math-container">$x_n$</span>, can I always assume: <span class="math-container">\begin{align} \lim_{n\rightarrow \infty} x_n &amp;= \exp(\log \lim_{n\rightarrow \infty} x_n) \newline &amp;= \exp(\lim_{n\rightarrow \infty} \log x_n) \newline &amp;= \lim_{n\rightarrow \infty} \exp( \log x_n) \end{align}</span> Or are there sequences that invalidate any of the above identities?</p> <p>(Edited to repurpose this question). Please also feel free to add different proofs of <span class="math-container">$\lim_{n\rightarrow \infty} \sqrt[n]{n}=1$</span>.</p>
Mark
687
<p>$\sqrt[n]{n}=\sqrt[n]{1\cdot\frac{2}{1}\cdot\frac{3}{2}\dots\cdot\frac{n-1}{n-2}\cdot\frac{n}{n-1}}$ so you have a sequence of geometric means of the sequence $a_{n}=\frac{n}{n-1}$. Since the geometric means of a convergent sequence of positive terms converge to the same limit (a Cesàro-type theorem), its limit is equal to $\lim_{n\to\infty}a_{n}=\lim_{n\to\infty}\frac{n}{n-1}=1$.</p>
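A numeric illustration of the telescoping-product view (Python, not part of the proof):

```python
import math

# n**(1/n) = exp(log(n)/n) drifts to 1 as n grows.
big = 10 ** 6
assert abs(big ** (1.0 / big) - 1.0) < 1e-4

# The telescoping product 1 * (2/1) * (3/2) * ... * (n/(n-1)) equals n,
# so its n-th root is exactly n**(1/n), the geometric mean in the answer.
n = 1000
prod = 1.0
for k in range(2, n + 1):
    prod *= k / (k - 1)
assert abs(prod - n) < 1e-9 * n
```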
2,268,225
<p>I want to investigate the convergence behavior of $$\int_{0}^{\infty} \cos(x^r)\, dx \hspace{5mm} \textrm{and} \hspace{5mm} \int_{\pi}^{\infty} \left(\frac{\cos x}{\log x}\right)\arctan\lfloor x\rfloor dx.$$ My theoretical tools are Abel's test and Dirichlet's test: Say I have an integral of the form $$\int_{a}^{b}f\cdot g \hspace{1.5mm} dx$$ with improperness (vertical or horizontal asymptote) at $b$.</p> <p>Abel's test guarantees convergence if $g$ is monotone and bounded on $(a,b)$, and $\int_{a}^{b}f $ converges. Dirichlet's test guarantees convergence if $g$ is monotone on $(a,b)$ and $\displaystyle\lim_{x\to b} g(x) = 0$, and $\displaystyle\lim_{\beta \to b}$ $\int_{a}^{\beta}f $ is bounded.</p> <p>For the first integral $\displaystyle\int_{0}^{\infty} \cos(x^r)\, dx $ I'm guessing a substitution $t = x^r $ will give me an expression of the form $f\cdot g$ with $\cos(t)$ as my $f$. For the second integral $\displaystyle\int_{\pi}^{\infty} \dfrac{\cos x}{\log x}\arctan\lfloor x\rfloor\, dx$, I'm (even more) clueless. Help please?</p>
Olivier Oloa
118,798
<p>One may start with the standard evaluation, $$ 1+x+x^2+...+x^n=\frac{1-x^{n+1}}{1-x}, \quad |x|&lt;1. \tag1 $$ Then by differentiating $(1)$ one gets $$ 1+2x+3x^2+...+nx^{n-1}=\frac{1-x^{n+1}}{(1-x)^2}-\frac{(n+1)x^{n}}{1-x}, \quad |x|&lt;1, \tag2 $$ by multiplying by $x^2$ and by making $n \to +\infty$ in $(2)$, using $|x|&lt;1$, one has</p> <blockquote> <p>$$ \sum_{n=1}^\infty nx^{n+1}=\frac{x^2}{(1-x)^2}. \tag3 $$ </p> </blockquote> <p>Then one may put $x=\dfrac12$ to obtain an answer to the given sum.</p>
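A quick numeric check of $(3)$ and of the value at $x=\tfrac12$ (Python, illustrative only):

```python
# Partial sums of sum_{n>=1} n*x^(n+1) approach x^2/(1-x)^2; at x = 1/2
# the closed form gives (1/4)/(1/4) = 1.
x = 0.5
partial = sum(n * x ** (n + 1) for n in range(1, 200))
closed_form = x ** 2 / (1 - x) ** 2
assert abs(partial - closed_form) < 1e-12
assert abs(closed_form - 1.0) < 1e-15
```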
3,536,120
<p>Textbook example says that:</p> <p><span class="math-container">$$\sum \limits_{i=k+1}^{\infty}(1-p)^{i-1}p = \frac{(1-p)^k p}{1-(1-p)}=(1-p)^k~~~~~k=1,2,3,\cdots$$</span></p> <p>I'm told that a geometric series identity used to obtain the above result is:</p> <p><span class="math-container">$$\sum \limits_{n=1}^{\infty} a r^{n-1} = \frac{a}{1-r}~~~~|r|&lt;1$$</span></p> <p>Now, I'm wondering how they used this identity to get the above result...</p> <p>So I pull out the p term because it doesn't depend on i in the summation, which gives:</p> <p><span class="math-container">$$\sum \limits_{i=k+1}^{\infty}(1-p)^{i-1}p = p\sum \limits_{i=k+1}^{\infty}(1-p)^{i-1} $$</span></p> <p>which looks similar to <span class="math-container">$\sum \limits_{n=1}^{\infty} a r^{n-1} = \frac{a}{1-r}$</span> where <span class="math-container">$r=(1-p)$</span> and <span class="math-container">$a=1$</span>... but then I'm not really sure how to handle the lower limit of the sum...maybe:</p> <p><span class="math-container">$$\sum \limits_{i=k+1}^{\infty}(1-p)^{i-1}p = -p\sum \limits_{i=1}^{k}(1-p)^{i-1} +p\sum \limits_{i=1}^{\infty}(1-p)^{i-1} $$</span></p> <p><span class="math-container">$$\sum \limits_{i=k+1}^{\infty}(1-p)^{i-1}p = -p\sum \limits_{i=1}^{k}(1-p)^{i-1} +p\frac{1}{1-(1-p)}$$</span></p> <p><span class="math-container">$$\sum \limits_{i=k+1}^{\infty}(1-p)^{i-1}p = 1 -p\sum \limits_{i=1}^{k}(1-p)^{i-1} $$</span></p> <p>???</p> <p>I'm not sure how they came up with:</p> <p><span class="math-container">$$\sum \limits_{i=k+1}^{\infty}(1-p)^{i-1}p = \frac{(1-p)^k p}{1-(1-p)}$$</span></p> <p>Second option: is this geometric summation with a variable lower bound just something I can look up in a table somewhere?</p> <hr> <p>The above was taken out of the exercise:</p> <p>Let X be a geometric r.v.
with parameter p.</p> <p>(a) show that <span class="math-container">$p_{X}(x) = P(X=x) = (1-p)^{x-1}p~~~~x=1,2,3,\cdots$</span></p> <p>satisfies the equation: <span class="math-container">$\sum \limits_{k} P_{X}(x_k) = 1$</span></p> <p>just in case you need the context of the question...</p>
Francis Adams
29,633
<p>The proof so far is correct.</p> <p>For the other direction, since <span class="math-container">$\mathbb{K}$</span> is a basis for the topology every open set <span class="math-container">$O\in\tau$</span> is a union of open sets from <span class="math-container">$\mathbb{K}$</span>, and since <span class="math-container">$\mathbb{K}$</span> is countable <span class="math-container">$O$</span> is necessarily a countable union of open sets from <span class="math-container">$\mathbb{K}$</span>. This shows that <span class="math-container">$O$</span> is in the sigma algebra generated by <span class="math-container">$\mathbb{K}$</span>, so <span class="math-container">$\tau\subseteq\sigma(\mathbb{K})$</span> and hence by minimality <span class="math-container">$\sigma(\tau)\subseteq\sigma(\mathbb{K})$</span>.</p>
4,434,225
<p>Prove or disprove, &quot;if <span class="math-container">$f$</span> is a concave function, then <span class="math-container">$|f|$</span> is also concave&quot;.</p> <p>I know the result is false for convex functions, but for concave functions, I guess it is true. But I am unable to do the proof.</p> <p>I have tried to show that the set <span class="math-container">$A=\{(x,y)\in \Bbb R^2 :|f(x)|\ge y\}$</span> is a convex set. For this, take two points <span class="math-container">$(x_1,y_1), (x_2,y_2) \in A$</span> and take <span class="math-container">$\lambda \in (0,1)$</span>.</p> <p>Now, <span class="math-container">$|f(\lambda x_1+(1-\lambda)x_2)|\ge f(\lambda x_1+(1-\lambda)x_2)\ge \lambda f(x_1)+(1-\lambda)f(x_2)$</span>, as <span class="math-container">$f$</span> is concave. From this how to proceed? Can I get any help please?</p>
MHMH
427,446
<p>Counterexample: <span class="math-container">$f(x)=x$</span> is concave, but <span class="math-container">$|f(x)|=|x|$</span> is convex (and it is not concave).</p>
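The failure of concavity shows up already in a single midpoint test; in code (illustrative only):

```python
# Concavity would require f((a+b)/2) >= (f(a)+f(b))/2.
# For f = abs with a = -1, b = 1 the midpoint value is 0 while the
# average is 1, so |x| is not concave.
f = abs
a, b = -1.0, 1.0
assert f((a + b) / 2) < (f(a) + f(b)) / 2   # 0 < 1: concavity fails
```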
2,240,616
<p>Let $f:[0,1] \to \mathbb{R}$ be a function such that for every $a \in [0,1)$ and $b \in (0,1]$ the one-sided limits $$f(a^+)=\lim _{x\to a^+}f(x) \in \mathbb{R}$$ $$f(b^-)=\lim _{x \to b^-} f(x) \in \mathbb {R}$$ exist. </p> <p>A) Show that $f$ is bounded. </p> <p>B) Does $f$ necessarily achieve its maximum at some $x \in [0,1]$?</p> <p>C) Suppose further that $f$ is continuous at $0$ and $1$, and that $f(0) f(1)&lt;0$. Prove that there exists some point $p \in (0,1)$ such that $f(p^-)f(p^+) \leq 0$. </p> <p>Intuitively, I can see why part A is true, but I am not sure how to prove this formally. For part B, I think the answer is no, but I haven't yet come up with a counterexample. My initial thoughts on part C are to somehow apply the intermediate value theorem, but I am not sure if this is the correct approach or not.</p>
Fnacool
318,321
<p>A) Let $x_n$ be such that $|f(x_n)|\to \sup |f|$. By compactness, there exists a convergent subsequence, and furthermore, it can be chosen to be monotone. By assumption, the limit along the monotone subsequence exists. Thus $\sup |f|$ is finite. </p> <p>B) Answered before. </p> <p>C) WLOG $f(0)&lt;0$. Then by continuity at $0$ and $1$, there exists $\epsilon\le \frac 12$ such that $f(p-)&lt;0$ for $p$ in $(0,\epsilon)$ and $f(p-)&gt;0$ for $p\in (1-\epsilon,1]$. Let $\bar p =\sup\{p:f(p-)&lt;0\}$. Then ${\bar p} \in [\epsilon,1-\epsilon]$. By assumption, for every $n$, there exists $p_n- \in (\bar p-\frac 1n,\bar p)$ such that $f(p_n-)&lt; 0$ and $p_n+\in (\bar p , \bar p + \frac 1n)$ such that $f(p_n+)\ge 0$. Therefore $f(\bar p-)= \lim f(p_n-)\le 0$, while $f(\bar p+)=\lim f(p_n+)\ge 0$. </p>
602,789
<p>Find the fundamental group of the surface of a cube with interior of all edges removed (i.e. the space which consists of the vertices and interior of the faces of the cube. </p> <p>Can I deformation retract the interior of each face into a cross meeting in the middle of the face made up of two diagonal lines which connects the each vertex to the vertex opposite of it. I can deform this into a sphere which has a lattice on it with 12 different loops. So the fundamental group is $\mathbb{Z}^{12}$? </p> <p>Thanks! </p>
Simon Rose
87,590
<p>The answer by rewritten is correct, although your intuition is also good.</p> <p>The specific deformation that you consider will give you the correct fundamental group, but as rewritten says, the loops do not commute so you will get $\mathbb{Z} * \cdots * \mathbb{Z}$ for some number of copies of $\mathbb{Z}$. What is that number?</p> <p>Well, if you think of the resulting object as a sphere less some points, you can see pretty easily that if $X_d = S^2 \setminus \{p_1, \ldots, p_d\}$, then:</p> <ul> <li>$\pi_1(X_1) = 0$, since this is just a sphere less a single point i.e. a disk.</li> <li>$\pi_1(X_2) = \mathbb{Z}$, since this is just a disk less a point (i.e. an annulus).</li> <li>$\pi_1(X_3) = \mathbb{Z} * \mathbb{Z}$ since this is just an annulus less a point, which deformation retracts onto a wedge of two circles.</li> </ul> <p>If you continue this on, you see that $\pi_1(X_d) = \underbrace{\mathbb{Z} * \cdots * \mathbb{Z}}_{d-1}$.</p>
324,307
<p><a href="http://en.wikipedia.org/wiki/Tucker%27s_lemma" rel="nofollow">Tucker's Lemma is here.</a></p> <p>Let's stay within the 2D case for now. A standard proof is constructive:</p> <p>(1) Pick an arced edge on the boundary of the circle. Note its labeling (for example, (1, 2)).</p> <p>(2) Walk into the circle through your chosen edge into a simplex of the triangle.</p> <p>(3) If the simplex carries three different labels, then two of them must be antipodal, so Tucker's Lemma is satisfied.</p> <p>(4) If instead the simplex is entirely labelled with 1's and 2's, then walk through the (1, 2) edge that you didn't enter from, and repeat step (3) or (4) on your new simplex.</p> <p>(5) Eventually, you will either dead-end in a tri-labeled simplex in the middle of the triangle, or you will walk out of the circle through another arced edge. If you leave the circle, then you've eliminated two edges; pick a new edge and try again.</p> <p>(6) Since there are an even number of vertices on the boundary of the circle, there are an odd number of edges on the boundary of the circle. So eventually, one of your paths must dead-end inside the circle.</p> <p>This proof does <em>not</em> rely on the fact that the triangulation of the circle is antipodal symmetric; instead, it only relies on the fact that there are an even number of vertices on the boundary of the circle. So why do we require antipodal symmetry, when the weaker "even number of edges" condition implies the same conclusion?</p>
Cameron Buie
28,900
<p>Your conclusion doesn't follow. For example, let $b=c=1,d=0$. Then $ad+b=cb+d$ doesn't imply <em>anything at all</em> about $a$.</p> <hr> <p>Let $$f(x)=a_1x+b_1,\;g(x)=a_2x+b_2,\;h(x)=a_3x+b_3,$$ where $a_1\neq 1$ or $b_1\neq 0$, and suppose that $fg=gf$ and $fh=hf$. Your work shows that this means $$a_1b_2+b_1=a_2b_1+b_2\tag{1}$$ and $$a_1b_3+b_1=a_3b_1+b_3\tag{2}$$ hold, and that you want to show that $$a_2b_3+b_2=a_3b_2+b_3.\tag{#}$$</p> <p>Let's rewrite $(1)$, $(2)$, and $(\#)$ as the equivalent equations $$(a_1-1)b_2=(a_2-1)b_1,\tag{$1'$}$$ $$(a_1-1)b_3=(a_3-1)b_1,\tag{$2'$}$$ and$$(a_2-1)b_3=(a_3-1)b_2.\tag{$\#'$}$$ and proceed casewise.</p> <p><strong>Case A</strong>: If $a_1=1$, then $b_1\neq 0$, so it follows by $(1')$ and $(2')$ that $a_2=a_3=1$, whence $(\#')$ holds.</p> <p><strong>Case B</strong>: If $a_1\neq 1$, then we can rewrite $(1')$ and $(2')$ as the equivalent equations $$b_2=\frac{a_2-1}{a_1-1}b_1\tag{$1''$}$$ and $$b_3=\frac{a_3-1}{a_1-1}b_1,\tag{$2''$}$$ whence $(\#')$ holds.</p> <p>At that point, we're done.</p>
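The casework can be sanity-checked numerically: picking $a_1\ne1$ and defining $b_2,b_3$ by $(1'')$ and $(2'')$ makes all three affine maps commute pairwise (Python, with arbitrary illustrative values):

```python
# Case B of the answer: a1 != 1, and b2, b3 chosen via (1'') and (2'').
a1, b1 = 3.0, 4.0
a2, a3 = 5.0, 7.0
b2 = (a2 - 1) / (a1 - 1) * b1   # equation (1''): b2 = 8
b3 = (a3 - 1) / (a1 - 1) * b1   # equation (2''): b3 = 12

f = lambda x: a1 * x + b1
g = lambda x: a2 * x + b2
h = lambda x: a3 * x + b3

for x in [-2.0, 0.0, 1.5, 10.0]:
    assert f(g(x)) == g(f(x))   # hypothesis (1)
    assert f(h(x)) == h(f(x))   # hypothesis (2)
    assert g(h(x)) == h(g(x))   # the conclusion (#)
```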
4,214,631
<p>I'm trying to understand a paragraph from the article <a href="https://www.sciencedirect.com/science/article/pii/S0022123600936875" rel="nofollow noreferrer">Christ, Kiselev: Maximal functions associated to filtrations</a>. After the proof of Theorem 1.1., the authors deduce a theorem of Menshov as a corollary:</p> <blockquote> <p>Let <span class="math-container">$(X, \mu)$</span> be a measure space and let <span class="math-container">$(\phi_n)_n$</span> be an orthonormal sequence in <span class="math-container">$L^2(X)$</span>. Let <span class="math-container">$1 \le p &lt; 2$</span>. Then for every sequence <span class="math-container">$(c_n)_n \in \ell^p$</span> the series <span class="math-container">$$\sum_{n=1}^\infty c_n\phi_n$$</span> converges a.e. in <span class="math-container">$X$</span>.</p> </blockquote> <p>Obviously the series <span class="math-container">$\sum_{n=1}^\infty c_n\phi_n$</span> converges in <span class="math-container">$L^2(X)$</span> due to Parseval since in particular we have <span class="math-container">$(c_n)_n \in \ell^2$</span>.</p> <p>Theorem 1.1. from the article gives that the (sublinear) map <span class="math-container">$$\ell^p \to L^2(X), \quad c = (c_n)_n \mapsto \sup_{N\in\Bbb{N}} \left|\sum_{n=1}^N c_n\phi_n\right|$$</span> is well-defined and bounded in the sense that there exists a constant <span class="math-container">$C&gt;0$</span> such that <span class="math-container">$$ \left\|\sup_{N\in\Bbb{N}} \left|\sum_{n=1}^N c_n\phi_n\right|\right\|_{L^2(X)} \le C\|(c_n)_n\|_p, \quad \text{ for all }(c_n)_n \in \ell^p.$$</span></p> <p>The authors now conclude that the statement of the theorem clearly follows from this fact and from the fact that <span class="math-container">$\sum_{n=1}^\infty c_n\phi_n$</span> obviously converges a.e. when <span class="math-container">$(c_n)_n$</span> is finitely-supported.</p> <p>I'm not sure how it follows. 
I noticed that for <span class="math-container">$M \ge N$</span> the Cauchy sums are also bounded by the supremum by reverse triangle inequality: <span class="math-container">$$\left|\sum_{n=N+1}^M c_n\phi_n\right| \le 2\sup_{K\in\Bbb{N}} \left|\sum_{n=1}^K c_n\phi_n\right|.$$</span> So this is bounded a.e., however we would need that it goes to <span class="math-container">$0$</span> as <span class="math-container">$M,N\to\infty$</span>. Am I missing something easy?</p>
Sangchul Lee
9,340
<p>For each <span class="math-container">$m \in \mathbb{N}$</span>, by applying the inequality to the residual tail <span class="math-container">$n \mapsto c_n \mathbf{1}_{\{n\geq m\}},$</span> we get</p> <p><span class="math-container">$$ \left\| \sup_{N\in\mathbb{N}} \left| \sum_{n=m}^{N} c_n\phi_n \right| \right\|_{L^2(X)} \leq C \left\| (c_n \mathbf{1}_{\{n\geq m\}})_{n\in\mathbb{N}} \right\|_{p}. $$</span></p> <p>In particular,</p> <p><span class="math-container">$$ \sup_{N\in\mathbb{N}} \left| \sum_{n=m}^{N} c_n\phi_n \right| \quad \xrightarrow[m \to \infty]{\text{in } L^2(X)} \quad 0, $$</span></p> <p>and then we can extract a subsequence <span class="math-container">$(m_k)_{k\in\mathbb{N}}$</span> such that</p> <p><span class="math-container">$$ \sup_{N\in\mathbb{N}} \left| \sum_{n=m_k}^{N} c_n\phi_n \right| \quad \xrightarrow[k \to \infty]{\text{a.e. in } X} \quad 0. $$</span></p> <p>This then guarantees the a.e. convergence of the sum <span class="math-container">$\sum_{n=1}^{\infty} c_n \phi_n$</span>.</p>
54,088
<p>I am studying KS (Kolmogorov-Sinai) entropy of order <em>q</em>, which can be defined as</p> <p>$$ h_q = \sup_P \left(\lim_{m\to\infty}\left(\frac 1 m H_q(m,ε)\right)\right) $$</p> <p>Why is it defined as a supremum over all possible partitions <em>P</em> and not a maximum? </p> <p>When do people use a supremum and when a maximum?</p>
Michael Hardy
11,667
<p>The set of all negative numbers has a sup, which is $0$, but not a max.</p> <p>$0$ is the smallest number that no negative number can exceed.</p>
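The same phenomenon can be illustrated numerically (a small sketch, not part of the original answer): every finite truncation of the set of negatives has a maximum, but those maxima are all negative and only creep up toward the supremum $0$ without attaining it.

```python
# A finite sample of the negatives {-1, -1/2, ..., -1/n} always has a
# maximum, but that maximum is still negative; as n grows it approaches
# the supremum 0 without ever reaching it.
def sample_max(n):
    return max(-1.0 / k for k in range(1, n + 1))

maxima = [sample_max(n) for n in (10, 1000, 100000)]
```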
2,392,841
<p>Assuming the sequence of functions $\{\phi_{n}(x)\}$ and the function $\phi(x)$ are in Schwartz space $S(\mathbb{R})$, my question is this:</p> <p>Does $\phi_{n}\to \phi$ as $n\to\infty$ in the Hilbert space $L^2(-\infty,\infty)$ imply $\phi_{n}\to \phi$ in the semi-norm topology of Schwartz space?</p>
Viktor Vaughn
22,912
<p>All right, I think I see how to use the direct approach now. Since $Z(e_i) = Z(P_i)$, then $\sqrt{(e_i)} = \sqrt{P_i} = P_i$. Since $A$ is Noetherian, then $(e_i) = P_i^{r_i}$ for some $r_i$, and since the ideals $P_i$ are pairwise comaximal, then so are $P_i^{r_i}$ (c.f., exercise 1.13 in Atiyah-MacDonald). (One can also show this directly: for $i \neq j$ we have $$ Z((e_i) + (e_j)) = Z(e_i) \cap Z(e_j) = \varnothing $$ so $(e_i) + (e_j) = 1$.) We also have \begin{align*} \DeclareMathOperator{\Nil}{Nil} (e_1 \cdots e_m) = \prod_i P_i^{r_i} \subseteq \bigcap_i P_i = \Nil(A) \end{align*} which implies that $e_1 \cdots e_m$ is nilpotent. Then $(e_1 \cdots e_m)^r = 0$ for some $r$, but since each $e_i$ is idempotent, then $$ 0 = e_1^r \cdots e_m^r = e_1 \cdots e_m \, . $$ Then \begin{align*} A \cong \frac{A}{(0)} \cong \frac{A}{(e_1) \cdots (e_m)} \cong \frac{A}{(e_1)} \times \cdots \times \frac{A}{(e_m)} \cong \frac{A}{P_1^{r_1}} \times \cdots \times \frac{A}{P_m^{r_m}} \end{align*} by the Chinese Remainder Theorem. Each factor $A_i := A/P_i^{r_i}$ is local with maximal ideal $P_i/P_i^{r_i}$.</p>
2,392,841
<p>Assuming the sequence of functions $\{\phi_{n}(x)\}$ and the function $\phi(x)$ are in Schwartz space $S(\mathbb{R})$, my question is this:</p> <p>Does $\phi_{n}\to \phi$ as $n\to\infty$ in the Hilbert space $L^2(-\infty,\infty)$ imply $\phi_{n}\to \phi$ in the semi-norm topology of Schwartz space?</p>
Viktor Vaughn
22,912
<p>I don't yet see how to get the direct approach to work, so here's an answer that uses the theory of Artinian rings. This is basically just Theorem 8.7 in Atiyah-MacDonald, but I'll try to add a little extra exposition.</p> <p>First note that since $A$ is a finite-dimensional $k$-algebra, then it is Artinian: each ideal is a vector subspace, so the length of a descending chain of ideals is bounded by $\operatorname{dim}_k(A)$. Now we apply the aforementioned theorem:</p> <p><strong>Theorem.</strong> <em>Every Artinian ring $A$ can be written as a finite direct product of local Artinian rings.</em></p> <p><em>Proof.</em> One can show that every prime ideal is maximal in an Artinian ring and that there are only finitely many maximal ideals (c.f., Prop 8.1 and 8.3 in A-M; Waterhouse has already showed this in your particular case). Denote these by $\newcommand{\m}{\mathfrak{m}} \m_1, \ldots, \m_n$. Then $$ \DeclareMathOperator{\Nil}{Nil} \newcommand{\p}{\mathfrak{p}} \Nil(A) = \bigcap_{\substack{\p \trianglelefteq A\\ \text{prime}}} \p = \bigcap_{\substack{\m \trianglelefteq A\\ \text{maximal}}} \m = \bigcap_{i=1}^n \m_i \, . $$ By Prop. 8.4 the nilradical is nilpotent, so $\Nil(A)^r = 0$ for some $r$, hence $$ \prod_{i=1}^n \m_i^r \subseteq \bigcap_{i=1}^n \m_i^r = \Nil(A)^r = 0 \, . $$ Since the ideals $\m_i^r$ are pairwise comaximal, then $A \cong \prod_{i=1}^n A/\m_i^r$ by the Chinese Remainder Theorem. Each $A/\m_i^r$ is a local Artinian ring with maximal ideal $\m_i/\m_i^r$. That $\m_i/\m_i^r$ consists of nilpotent elements follows from the fact that prime and maximal ideals coincide, since this implies that the nilradical and Jacobson radical ($=\m_i/\m_i^r$) are equal.</p>
2,417,029
<p>I have been told that a line segment is a set of points. How can infinitely many points, each of length zero, make up a line of positive length?</p> <p>Edit: As an undergraduate I assumed it was due to having uncountably many points. But the Cantor set has uncountably many elements and it has measure $0$.</p> <p>So having uncountably many points on a line is not sufficient for the measure to be positive.</p> <p>My question was: what else is needed? It appears from the answers I've seen that the additional thing needed is the topology and/or the sigma algebra within which the points are set.</p> <p>My thanks to those who have helped me figure out where to look for full answers to my question.</p>
John Wayland Bales
246,513
<p>At that level, you can only define length in terms of line segments, which should not present a problem.</p> <p>To approach the idea of "length" of a point you could use the idea of probability.</p> <p>You might use an example such as this. Suppose we randomly choose a number $x$ in the open interval $(0,1)$.</p> <ol> <li>What is the probability that $x$ will be in the interval $\left(\frac{1}{2},1\right)$?</li> <li>The interval $\left(\frac{1}{3},\frac{2}{3}\right)$?</li> <li>What is the probability that $x$ will exactly equal $\sqrt[3]{\frac{3}{\pi}}$?</li> </ol> <p>You can then relate the idea of "length" to probability.</p>
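A Monte Carlo sketch of this probability/length analogy (an illustration only, not part of the original answer): the chance of landing in a subinterval tracks its length, while the chance of hitting one prescribed point is essentially zero.

```python
import random

random.seed(0)
N = 100_000
xs = [random.random() for _ in range(N)]

p_half = sum(0.5 < x < 1 for x in xs) / N        # ~ length 1/2
p_third = sum(1/3 < x < 2/3 for x in xs) / N     # ~ length 1/3
# hitting one exact prescribed point essentially never happens
p_point = sum(x == 0.123456789 for x in xs) / N
```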
2,417,029
<p>I have been told that a line segment is a set of points. How can even infinitely many point, each of length zero, can make a line of positive length?</p> <p>Edit: As an undergraduate I assumed it was due to having uncountably many points. But the Cantor set has uncountably many elements and it has measure $0$.</p> <p>So having uncountably many points on a line is not sufficient for the measure to be positive.</p> <p>My question was: what else is needed? It appears from the answers I've seen that the additional thing needed is the topology and/or the sigma algebra within which the points are set.</p> <p>My thanks to those who have helped me figure out where to look for full answers to my question.</p>
Community
-1
<p>Elsewhere in comments, you've suggested you're familiar with topology. <em>In these terms</em>, I think I can describe more precisely what the issue is.</p> <p>Consider the usual subspace topology on the interval $[0,1]$ of the real line, and also the discrete topology on the same set of points.</p> <p>The notion of "an infinite collection of points" is really describing the latter topological space. It's only by considering those points <em>in place</em> as describing a subspace of the real line that we get something with line-like qualities.</p> <p>If we want to consider the points in isolation, we also have to remember the relevant structure (e.g. topology, metric, or whatever) if we want to talk about the points having any line-like qualities.</p> <p>So that's what's going on &mdash; a line <em>isn't</em> made out of points, and it's the extra bit the students are overlooking, such as a metric, that actually makes the set into a line segment one centimeter long.</p> <p>As to how to explain the difference to students... that's why people are suggesting you ask your question at <a href="http://matheducators.stackexchange.com">http://matheducators.stackexchange.com</a>!</p> <hr> <p>One could try to find a way to explain the difference between countable additivity and uncountable additivity of measures, but I strongly expect that would be missing the point. (also, measures forget <em>almost everything</em> about geometry, so trying to use them to explain how a set of points can be linelike is futile)</p>
1,072,302
<p>Let $f$ and $g$ be two functions from $[0,1]$ to $[0,1]$ with $f$ strictly increasing. Which of the following is true?</p> <blockquote> <blockquote> <p>(a). If $g$ is continuous, then $f\circ g$ is continuous.</p> <p>(b). If $f$ is continuous, then $f\circ g$ is continuous.</p> <p>(c). If $f$ and $f\circ g$ are continuous, then $g$ is continuous.</p> <p>(d). If $g$ and $f\circ g$ are continuous, then $f$ is continuous.</p> </blockquote> </blockquote> <p><strong>I guessed this</strong></p> <p>$f$ is strictly increasing $\implies$ $f$ is continuous on $[0,1]$. So, if $g$ is continuous then $f\circ g$ is continuous. Is my approach correct? If I am right, why are the others wrong? Can you give counterexamples for them?</p>
Brandon Humpert
30,417
<p>For (a), consider $g(x) = x$ and $f$ any discontinuous increasing function (for example $f(x) = \frac{1}{2} + \frac{1}{2}x, x &gt; 0$ and $f(0) = 0$).</p> <p>For (b), consider $f(x) = x$ and $g$ any discontinuous function.</p> <p>For (d), consider $g(x) = 0$ and $f$ any discontinuous increasing function.</p>
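A numerical sketch of the counterexample for (a) (an illustration, not part of the original answer): $g(x)=x$ is continuous, $f$ is strictly increasing but jumps at $0$, and the jump survives in $f\circ g$.

```python
def f(x):
    # increasing but discontinuous at 0: f(0) = 0, f(x) = 1/2 + x/2 for x > 0
    return 0.0 if x == 0 else 0.5 + 0.5 * x

def g(x):
    return x  # continuous

# f∘g jumps at 0: values just right of 0 stay near 1/2, but (f∘g)(0) = 0
values = [f(g(x)) for x in (0.0, 1e-9, 1e-6, 1e-3)]
```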
1,775,378
<p>So this question is inspired by the following thread: <a href="https://forums.factorio.com/viewtopic.php?f=5&amp;t=25008">https://forums.factorio.com/viewtopic.php?f=5&amp;t=25008</a></p> <p>In it, the poster is examining an $8$-belt balancer (more on that to come) which he shows fails to satisfy a desirable property, which he called universally throughput unlimited.</p> <p>So what is an $n$-belt balancer? It is a configuration of belts (which move items around) and splitters (which take two belts in and balance their items on the two belts on the output side), which will balance the input of all $n$ input belts across all $n$ output belts. They are frequently used in large factories to move large amounts of items to a variety of different areas, so that no one belt's worth of items getting backlogged (more items coming at it than it can use) results in other projects not getting full throughput (or at least as much as they can use).</p> <p>The desired property called <em>universally throughput unlimited</em> is the following: Suppose only $k$ of the $n$ input belts are getting input (assume full input; i.e., input belts are assumed saturated), and that all but $k$ of the output belts are backlogged and have no throughput (already full of items and nothing is moving on those belts). Then the full input on those $k$ input belts can be provided across the $k$ output belts (which have the same maximum throughput, hence no one output belt can handle more than one input belt's worth of throughput). This basically means that the $n$-belt balancer is never a bottleneck no matter the current input or output limitations (which lanes are getting input/available for output).</p> <p>The question I have is the following: Is it always possible to create an $n$-belt balancer satisfying the universally throughput unlimited condition for any $n$? If not, for which $n$'s is it possible? 
(clearly, $n=2$ works because of how splitters behave)</p> <p>I have some ideas on how to approach this problem, but am nowhere near having it solved. The first idea is about how to represent the problem: We can represent the input belts and output belts as vertices of a directed graph, the inputs being sources (in-degree=0) and the outputs being sinks (out-degree=0). The balancer is the input and output vertices together with a set of <em>intermediate</em> vertices which represent splitters, which have $1\leq$in-degree,out-degree$\leq$2 (one or two directed edges point to them and one or two coming from them), and the associated directed edges. Looking at the problem this way, it is easy to see that a <em>necessary</em> condition is that input on any belt can reach any output belt (it is necessary because if not, then consider the case of all input on one input belt and all but one output belt backlogged with 0 throughput; in such a case, if you can't route that input belt's input to the output belt you won't get any throughput), but this condition is not sufficient (multiple examples that satisfy this condition have been shown both theoretically and experimentally to fail to have the desired universally throughput unlimited property).</p> <p>An important thing to note is that belts can be routed <em>under</em> other belts via underground belts, hence planarity of the above described graph is not necessary. The fact that splitters have some very specific behaviors is important to this problem: They will always try to balance outputs provided there is no backlog; hence, in a no-backlog scenario the output on each belt leaving a splitter is half of the <em>total</em> input on both of its input belts. If, however, one of the output belts is backlogged with no throughput, then all of the throughput will be merged onto the 'free' belt <em>up to</em> its throughput limit. 
If more than one belt worth of throughput is coming into a splitter in this case, then <em>both</em> input belts will start to bottleneck (each belt's effective throughput will be half of the maximum because that's how much of the saturated output belt is coming from the given input belt). Sometimes a backlog is only a <em>reduction</em> in throughput (due to bottlenecking down the line somewhere); in such a case, a splitter will still split input equally up to the reduced throughput of the lowest-throughput belt; after that, <em>all</em> remaining throughput is thrown at the belt with additional capacity until that one is saturated too, and if there is any more input coming at the given splitter then <em>both</em> of its input belts will start to backlog.</p> <p>This backlog phenomenon can result in some very subtle behaviors, which makes simply assigning weights to the directed edges in the above described graph (constrained to a value of $[0,1]$ where $1$ is saturated and $0$ is no throughput) inadequate to describe the problem. For instance, a splitter causing a backlog with some throughput but not enough to avoid backlog can lead to a reduction in throughput for <em>another</em> splitter's output belt, shifting more of its input onto the other output belt (which might cause a splitter further down that belt to suddenly become a bottleneck and backlog, etc.)</p> <p>My suspicion from experimenting a tad as well as some theoretical work looking at how splitters are dividing inputs leads me to conjecture that it is not possible for all $n$, and that the most likely candidates are powers of $2$. Even then, for powers higher than $1$ it still might be impossible because of odd #s of belts having input needed to get to the same number of output belts (and if balancing odd #s of belts isn't possible, then the universally throughput unlimited condition might not be satisfiable because of these cases).</p>
Kieren Pearson
297,523
<p>This might be off from what you are talking about because I am slightly unclear as to what you mean by a 'universally throughput unlimited' Balance.</p> <p>In my proof I attempt to prove that:</p> <blockquote> <p>For all natural numbers {n, m} there exists a perfect balance that will take n inputs and m outputs </p> </blockquote> <p>I interpret <code>Perfect Balance</code> as meaning:</p> <blockquote> <p>A Balance that, regardless of input throughout each of the n 'belts', will split the given input equally in volume throughout each of the output belts</p> </blockquote> <p>And just to note: <code>n-&gt;m</code> denotes a perfect balance that takes n inputs and m outputs.</p> <p>First, assume we have <code>n-&gt;n</code>. </p> <p>It can be shown that using this, we can create <code>2n-&gt;2n</code>. Firstly take the first <code>n</code> inputs of the total of <code>2n</code> and send them through the <code>n-&gt;n</code> balance. Take the second set of <code>n</code> inputs of the total of <code>2n</code> and send them through another parallel <code>n-&gt;n</code> balance. If we number the original inputs </p> <blockquote> <p>{<code>n_1</code>, <code>n_2</code>, <code>n_3</code> ... <code>n_2n</code>} </p> </blockquote> <p>The outputs of these two <code>n-&gt;n</code> balances would be:</p> <blockquote> <p>{<code>n_1 / n</code> + <code>n_2 / n</code> + <code>n_3 / n</code> + ... + <code>n_n / n</code>}</p> </blockquote> <p>For each belt of the first <code>n-&gt;n</code> balance, and</p> <blockquote> <p>{<code>n_n+1 / n</code> + <code>n_n+2 / n</code> + <code>n_n+3 / n</code> + ... + <code>n_2n / n</code>}</p> </blockquote> <p>For each belt of the second <code>n-&gt;n</code> balance. 
Then simply taking the <code>1st</code> and <code>n+1th</code> inputs and using a <code>2-&gt;2</code> balance (This is assumed to exist, as it does in the game) we achieve two belts of output that consist of:</p> <blockquote> <p>{<code>n_1 / n</code> + <code>n_2 / n</code> + <code>n_3 / n</code> + ... + <code>n_n / n</code>} / 2 +<br> {<code>n_n+1 / n</code> + <code>n_n+2 / n</code> + <code>n_n+3 / n</code> + ... + <code>n_2n / n</code>} / 2 </p> </blockquote> <p>Which can be seen easily to equal:</p> <blockquote> <p>{<code>n_1 / (2 * n)</code> + <code>n_2 / (2 * n)</code> + <code>n_3 / (2 * n)</code> + ... + <code>n_2n / (2 * n)</code>}</p> </blockquote> <p>And hence repeating this <code>2-&gt;2</code> Split for the <code>2nd</code> and <code>n+2th</code> belts, <code>3rd</code> and <code>n+3th</code> belts ... <code>nth</code> and <code>2nth</code> Gives us perfectly balanced inputs, and hence</p> <blockquote> <p>Assuming we have <code>n-&gt;n</code>, we can create <code>2n-&gt;2n</code>.</p> </blockquote> <p>Now because the special case of <code>2-&gt;2</code> exists, by induction,</p> <blockquote> <p>For all natural numbers {n}, there exists <code>2^n-&gt;2^n</code> </p> </blockquote> <p>Or in layman terms, all the powers of 2 balances must (and do) exist.</p> <p>Now the one last piece of the puzzle is extending this to prove we can create <code>n-&gt;n</code> assuming <code>m-&gt;m</code> exists where <code>n&lt;m</code>. This can be done by taking the <code>m-&gt;m</code>, and using <code>n</code> inputs (and hence creating a <code>n-&gt;m</code> assuming <code>m-&gt;m</code> exists). 
From this, we take the excess outputs (m-n outputs) and re-input them into the unused m-n inputs (This might require some visualization), and hence create an <code>n-&gt;n</code> from an <code>m-&gt;m</code>.</p> <p>Because for all <code>n</code>, there exists <code>m</code> such that <code>n&lt;m</code> and <code>2^x = m</code> (m is a power of 2) where n, m, x are natural numbers, and we already proved all <code>2^x-&gt;2^x</code> exist (all powers of 2 exist), we can therefore construct all <code>n-&gt;n</code>!</p> <p>In layman's terms again, say we want to make a <code>5-&gt;5</code>, we take the <code>8-&gt;8</code> and take 3 outputs and re-input them into the 3 unused inputs, and hence create the <code>5-&gt;5</code> we wanted.</p> <p>I have proven that all <code>n-&gt;n</code> exist.</p>
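The doubling construction above can be sanity-checked with a small simulation (a sketch, not part of the original answer; it only models the ideal no-backlog splitter behavior and ignores throughput limits, so it says nothing about the universally-throughput-unlimited property):

```python
def balance(inputs):
    # Recursively build a 2^k -> 2^k balancer: balance each half with a
    # smaller balancer, then merge matching belts through 2->2 splitters.
    # An ideal 2->2 splitter puts (a + b) / 2 on each of its two outputs.
    n = len(inputs)
    if n == 1:
        return list(inputs)
    half = n // 2
    top = balance(inputs[:half])
    bot = balance(inputs[half:])
    out = [0.0] * n
    for i in range(half):
        s = (top[i] + bot[i]) / 2
        out[i] = s
        out[half + i] = s
    return out

# one saturated input spread evenly over 8 outputs
outputs = balance([1, 0, 0, 0, 0, 0, 0, 0])
```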
83,921
<p>Consider $J = \sum_{i=0}^{N}y_{i-1}x_{i}y_{i+1}$ where $+$ and $-$ in the indices are mod $N+1$. Let $x_{i} = 1 - y_{i} \in \{0,1\}$. What are some useful tools and relaxation techniques available to maximize $J$ or any other symmetric multivariate polynomial?</p>
Terry Loring
6,133
<p>I have not proven the following to work, but I've done similar calculations when debugging software in my 2009-10 work with Matt Hastings. If these seem fast enough, it should not be too much work to validate them.</p> <p>(1) I would take small, random perturbations of the identity matrix and then do four or five iterations of Newton's method to get the polar part of the resulting matrix. I offer this suggestion under the assumption that you can tolerate some samples falling outside the delta ball.</p> <p>By Newton's method I mean replacing $X$ by $\frac{1}{2}\left( X + (X^{-1})^\mathrm{T}\right) $ where $T$ is the transpose. This is a simplification of what is suggested by N. J. Higham for computing the polar part of an invertible matrix.</p> <p>The number of iterations needed depends on delta. I have no idea how big your matrices are. If they are bigger than 200 by 200 you might need to use a package that has a well parallelized matrix inversion routine. If you are looking at smaller matrices, you might just go ahead and apply 10 iterations.</p> <p>(2) If you cannot abide by matrices outside the delta ball, then generate a random diagonal orthogonal $D$ that is close to $I$ and a random orthogonal $W$ and multiply to get $U = WDW^\mathrm{T}$ as the desired random orthogonal, now known to be in the delta ball. I am assuming the operator norm, which is the default in pure math but costly to compute, I think.</p> <p>You need to generate the random unitary by taking polar decomposition of a random matrix. Now you need to read up on Newton's method for computing the polar part, as you may deal with badly conditioned matrices.</p> <p>I am not sure what distribution you want for the diagonal orthogonal matrix. So I am hoping (1) will work for you.</p>
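A minimal NumPy sketch of the Newton iteration described in (1) (an illustration under the assumption of a well-conditioned input, not production code):

```python
import numpy as np

def polar_orthogonal(X, iters=10):
    # Newton iteration X <- (X + X^{-T}) / 2 converges (quadratically,
    # for well-conditioned X) to the orthogonal polar factor of X
    for _ in range(iters):
        X = 0.5 * (X + np.linalg.inv(X).T)
    return X

rng = np.random.default_rng(0)
n = 5
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))  # small perturbation of I
Q = polar_orthogonal(A)
```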
394,489
<p>$$\int^L_{-L} x \sin\left(\frac{\pi nx}{L}\right)\,dx$$</p> <p>I've seen something like this in Fourier theory, but I'm still not sure how to approach this integral. Wolfram Alpha gives me the answer, but no method. Integrate by parts? Substitution?</p>
Cameron Buie
28,900
<p>If $n=0,$ this is simple, so suppose not. Use the substitution $$u=\frac{n\pi}lx,$$ so that $$x=\frac{l}{n\pi}u,$$ and so $$\int_{-l}^lx\sin\left(\frac{n\pi}lx\right)\,dx=\frac{l^2}{n^2\pi^2}\int_{-n\pi}^{n\pi}u\sin u\,du,$$ then integrate by parts.</p>
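Carrying out the integration by parts explicitly (a sketch, for a nonzero integer $n$; the boundary evaluation uses $\sin(\pm n\pi)=0$ and $\cos(\pm n\pi)=(-1)^n$):

```latex
\int_{-n\pi}^{n\pi} u\sin u\,du
  = \Big[\sin u - u\cos u\Big]_{-n\pi}^{n\pi}
  = -2n\pi(-1)^n
  = 2n\pi(-1)^{n+1},
\qquad\text{so}\qquad
\int_{-l}^{l} x\sin\left(\frac{n\pi}{l}x\right)dx
  = \frac{l^2}{n^2\pi^2}\cdot 2n\pi(-1)^{n+1}
  = \frac{2l^2(-1)^{n+1}}{n\pi}.
```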
2,426,394
<p>I'm sorry to bother you with this easy problem. But I'm working alone and totally confused. It is Problem 1378 from "Problems in Mathematical Analysis" by Demidovich. The standard answer is $$\frac{60(1+x)^{99}(1+6x)}{(1-2x)^{41}(1+2x)^{61}}$$</p> <p>But when I followed the routine $$\frac{d(P(x)/Q(x))}{dx}=\frac{\frac{dP(x)}{dx}Q(x)-P(x)\frac{dQ(x)}{dx}}{Q(x)^2}$$ I got $$\frac{100(1+x)^{99}(1-2x)^{40}(1+2x)^{60}-(1+x)^{100}(-80(1-2x)^{39}+120(1+2x)^{59})}{(1-2x)^{80}(1+2x)^{120}}$$</p> <p>and finally $$\frac{60(1+x)^{99}P(x)}{(1-2x)^{41}(1+2x)^{61}}$$</p> <p>$$P(x)=1-4x^2-\frac{3(1+x)}{(1-2x)^{39}}-\frac{2(1+x)}{(1+2x)^{59}}$$</p> <p>Would you tell me what mistake I made? Best regards.</p>
Nosrati
108,128
<p><strong>Hint:)</strong> If you are familiar with logarithms, write $$y=\frac{(1+x)^{100}}{(1-2x)^{40}(1+2x)^{60}}$$ so that $$\ln y=100\ln(1+x)-40\ln(1-2x)-60\ln(1+2x);$$ now take the derivative and simplify!</p>
2,426,394
<p>I'm sorry to bother you with this easy problem. But I'm working alone and totally confused. It is Problem 1378 from "Problems in Mathematical Analysis" by Demidovich. The standard answer is $$\frac{60(1+x)^{99}(1+6x)}{(1-2x)^{41}(1+2x)^{61}}$$</p> <p>But when I followed the routine $$\frac{d(P(x)/Q(x))}{dx}=\frac{\frac{dP(x)}{dx}Q(x)-P(x)\frac{dQ(x)}{dx}}{Q(x)^2}$$ I got $$\frac{100(1+x)^{99}(1-2x)^{40}(1+2x)^{60}-(1+x)^{100}(-80(1-2x)^{39}+120(1+2x)^{59})}{(1-2x)^{80}(1+2x)^{120}}$$</p> <p>and finally $$\frac{60(1+x)^{99}P(x)}{(1-2x)^{41}(1+2x)^{61}}$$</p> <p>$$P(x)=1-4x^2-\frac{3(1+x)}{(1-2x)^{39}}-\frac{2(1+x)}{(1+2x)^{59}}$$</p> <p>Would you tell me what mistake I made? Best regards.</p>
mechanodroid
144,766
<p>You can just take the derivative of the logarithm:</p> <p>$$y = \frac{(1+x)^{100}}{(1-2x)^{40}(1+2x)^{60}} \implies \ln y = 100\ln(1+x)-40\ln(1-2x)-60\ln(1+2x)$$</p> <p>$$\frac{y'}{y} = (\ln y)' = \frac{100}{1+x} + \frac{80}{1-2x} - \frac{120}{1+2x}$$</p> <p>\begin{align}y' &amp;= y\left(\frac{100}{1+x} + \frac{80}{1-2x} - \frac{120}{1+2x} \right)\\ &amp;= \frac{(1+x)^{100}}{(1-2x)^{40}(1+2x)^{60}}\left(\frac{100}{1+x} + \frac{80}{1-2x} - \frac{120}{1+2x} \right)\\ &amp;= \frac{(1+x)^{100}}{(1-2x)^{40}(1+2x)^{60}}\frac{100(1-4x^2)+80(1+x)(1+2x)-120(1+x)(1-2x)}{(1+x)(1-2x)(1+2x)}\\ &amp;= \frac{(1+x)^{100}}{(1-2x)^{40}(1+2x)^{60}}\frac{60(1+6x)}{(1+x)(1-2x)(1+2x)}\\ &amp;= \frac{60(1+x)^{99}(1+6x)}{(1-2x)^{41}(1+2x)^{61}} \end{align}</p>
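The final identity can be machine-checked symbolically (an illustration with sympy, not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x')
y = (1 + x)**100 / ((1 - 2*x)**40 * (1 + 2*x)**60)
target = 60 * (1 + x)**99 * (1 + 6*x) / ((1 - 2*x)**41 * (1 + 2*x)**61)

# cancel() puts the difference over a common denominator; a true
# rational-function identity collapses to exactly 0
difference = sp.cancel(sp.diff(y, x) - target)
```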
4,292,822
<blockquote> <p>Prove linear independence of <span class="math-container">$1+x^3-x^5,1-x^3,1+x^5$</span> in the Vector Space of Polynomials</p> </blockquote> <p>The attempts I found online all are quite easy. You just substitute something in for <span class="math-container">$x$</span> into the equation <span class="math-container">$a(1+x^3-x^5)+b(1-x^3)+c(1+x^5)=0$</span>, for example <span class="math-container">$x=1,0,-1$</span>, and this will give you three equations from which you can show that <span class="math-container">$a,b,c=0$</span>. But why can we substitute something in? If I define the Vector Space of Polynomials in a very abstract way, with <span class="math-container">$\sum_{i} \alpha_i x^{i}+\sum_{i} \beta_{i} x^{i}:=\sum_{i} (\alpha_{i}+\beta_{i})x^{i}$</span> and <span class="math-container">$(\sum_{i=0}^{n} \alpha_i x^{i})(\sum_{i=0}^{m} \beta_{i} x^{i} ):=\sum_{i=0}^{n+m} c_i x^i$</span> with <span class="math-container">$c_k=\alpha_0 \beta_k+\alpha_1 \beta_{k-1}+...+\alpha_{k} \beta_0$</span>, and <span class="math-container">$x$</span> is just an abstract symbol with absolutely no meaning, why should one be allowed to substitute something for <span class="math-container">$x$</span>, or even worse differentiate the equation?</p>
zwim
399,263
<p>In fact your independence condition can be rewritten as <span class="math-container">$(c-a)x^5+(a-b)x^3+(a+b+c)=0$</span></p> <p>It has to be true for all <span class="math-container">$x$</span>, so it is equivalent to solving the system <span class="math-container">$\begin{cases}c-a=0\\a-b=0\\a+b+c=0\end{cases}$</span></p> <p>And the solution is obviously <span class="math-container">$a=b=c=0$</span>.</p> <p>Also, in <span class="math-container">$\mathbb R$</span> a nonzero polynomial of degree <span class="math-container">$5$</span> has at most <span class="math-container">$5$</span> roots, so if it vanishes at <span class="math-container">$6$</span> distinct numbers the coefficients are forced to vanish; but I feel it is simpler to just identify the polynomial with the null polynomial.</p>
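The coefficient-matching argument can be reproduced symbolically (a sketch with sympy, not part of the original answer):

```python
import sympy as sp

a, b, c, x = sp.symbols('a b c x')
expr = a*(1 + x**3 - x**5) + b*(1 - x**3) + c*(1 + x**5)

# the nonzero coefficients of the polynomial in x (of x^5, x^3 and the
# constant term) must all vanish
coeffs = sp.Poly(expr, x).coeffs()
sol = sp.solve(coeffs, [a, b, c])
```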
228,283
<p>Could someone give a reference for, or construct, an example of a closed subspace $Y\subset L_1[0,1]$ such that $\operatorname{dist}(x,Y)$ is not attained for any $x\notin Y$.</p> <p>I read somewhere that $Y$ is necessarily of infinite dimension and codimension.</p>
Bill Johnson
2,554
<p>The $Y$ in Mikhail's answer has codimension one. Obviously $Y$ cannot be reflexive, but $Y$ can be of any non-zero finite codimension or of infinite codimension. (Let $Z$ be any separable Banach space and let $Q$ be an operator from $L_1$ to $Z$ that maps the closed unit ball of $L_1$ onto the open unit ball of $Z$. The kernel $Y$ of such a quotient map $Q$ is antiproximinal. It is easy to build such an operator from $\ell_1$ onto $Z$; to get one from $L_1$, compose the operator from $\ell_1$ with a norm one projection from $L_1$ onto a subspace that is isometric to $\ell_1$.)</p>
3,421,969
<p>The point <span class="math-container">$C = (1, 2)$</span> lies inside the circle <span class="math-container">$x^2 + y^2 = 9$</span>. What is the length of the shortest chord of the circle through <span class="math-container">$C$</span>?</p>
Bernard
202,857
<p>It simply results from the high-school identity: <span class="math-container">$$a^n-1=(a-1)(a^{n-1}+a^{n-2}+\dots +a+1),$$</span> which is often used as an example of an easy, but not trivial, proof by induction.</p>
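The telescoping identity is easy to machine-check for any fixed exponent (an illustration with sympy, not part of the original answer; the degree 7 is arbitrary):

```python
import sympy as sp

a = sp.symbols('a')
n = 7  # any fixed exponent works; 7 is just an illustration

# (a - 1)(a^{n-1} + ... + a + 1) expands, telescoping, to a^n - 1
lhs = (a - 1) * sum(a**k for k in range(n))
identity_holds = sp.expand(lhs) == sp.expand(a**n - 1)
```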
4,543,203
<p>I am going through Achim Klenke's Probability Theory textbook. In Section 7.4, he discusses what it means for two measures to be singular to one another and gives the following example.</p> <p>Let <span class="math-container">$\Omega = \{0, 1\}^\mathbb{N}$</span> and let <span class="math-container">$(\mathrm{Ber}_p)^{\otimes \mathbb{N}}$</span> and <span class="math-container">$(\mathrm{Ber}_q)^{\otimes \mathbb{N}}$</span> be the infinite product measures with parameters <span class="math-container">$p$</span> and <span class="math-container">$q$</span>, respectively. For <span class="math-container">$n \in \mathbb{N}$</span>, let <span class="math-container">$X_n$</span> be the <span class="math-container">$n$</span>-th coordinate map. Then under <span class="math-container">$(\mathrm{Ber}_r)^{\otimes \mathbb{N}}$</span>, <span class="math-container">$(X_n)_{n \in \mathbb{N}}$</span> is independent and Bernoulli distributed with parameter <span class="math-container">$r$</span>.</p> <p>Here is the step where I get confused:</p> <p>Klenke states that one can apply the strong law of large numbers such that for any <span class="math-container">$r \in \{p, q\}$</span>, there exists a measurable set <span class="math-container">$A_r \subset \Omega$</span> with <span class="math-container">$(\mathrm{Ber}_r)^{\otimes \mathbb{N}}(\Omega \backslash A_r) = 0$</span> and <span class="math-container">$\lim_{n \to \infty} n^{-1} \sum_1^n X_i(\omega) = r$</span> for all <span class="math-container">$\omega \in A_r$</span> and therefore in particular <span class="math-container">$A_p \cap A_q = \emptyset$</span> if <span class="math-container">$p \not = q$</span>, and thus <span class="math-container">$(\mathrm{Ber}_p)^{\otimes \mathbb{N}}$</span> and <span class="math-container">$(\mathrm{Ber}_q)^{\otimes \mathbb{N}}$</span> are singular in that case.</p> <p>Now I am completely confused by the last paragraph. 
First off, what guarantees the existence of such a measurable set <span class="math-container">$A_r$</span>? Then, how does it in particular follow that <span class="math-container">$A_p \cap A_q = \emptyset$</span> if <span class="math-container">$p \not = q$</span>? And finally, how does the latter imply singularity of the two measures? A nice and clear explanation would be greatly appreciated, thanks!</p>
Matija
1,096,797
<p>I'm also not sure what the problem is, so I'll argue why there is no problem. Consider <span class="math-container">$\{0,1\}^{\mathbb N}$</span> equipped with the Borel algebra induced by the discrete topology (which is the power set of <span class="math-container">$\{0,1\}^{\mathbb N}$</span>, cf. this <a href="https://math.stackexchange.com/questions/1497667/product-sigma-algebra-in-countable-case-proposition-1-3-in-folland">discussion</a> to see why it doesn't matter which product algebra we take, be it induced by projections or rectangles). For an integer <span class="math-container">$n&gt;0$</span> let <span class="math-container">$a_{n}:\{0,1\}^{\mathbb N}\rightarrow\mathbb R$</span>, <span class="math-container">$x\mapsto n^{-1}\sum_{m=1}^{n}x_m$</span>, where <span class="math-container">$\mathbb R$</span> is equipped with the canonical Borel algebra. Notice that <span class="math-container">$a_{n}$</span> is measurable since it is continuous (which is trivial for the discrete topology). So, for <span class="math-container">$\bar a\in\mathbb R$</span> and an integer <span class="math-container">$k&gt;0$</span> the set <span class="math-container">$$E_{n,k}(\bar a)=\{x\in\{0,1\}^{\mathbb N}:|a_{n}(x)-\bar a|\le 1/k\}\subseteq\{0,1\}^{\mathbb N}$$</span> is measurable, since this is the preimage of the measurable set <span class="math-container">$[\bar a-1/k,\bar a+1/k]$</span> under a measurable function. 
But then the set <span class="math-container">$E(\bar a)=\cap_{k&gt;0}\cup_{n_0&gt;0}\cap_{n\ge n_0}E_{n,k}(\bar a)$</span> is measurable, and we have <span class="math-container">$$E(\bar a)=\{x:\forall k\exists n_0\forall n\ge n_0\,|a_{n}(x)-\bar a|\le 1/k\}=\{x:\lim_{n\rightarrow\infty}a_{n}(x)=\bar a\}.$$</span> Now, the <a href="https://en.wikipedia.org/wiki/Law_of_large_numbers#Strong_law" rel="nofollow noreferrer">strong law of large numbers</a> (is well-defined and) yields <span class="math-container">$\mathbb P_r(E(r))=1$</span>, where <span class="math-container">$\mathbb P_r=\mathrm{Ber}_r^{\otimes\mathbb N}$</span>. And since the limit is unique (by definition of the limit) if it exists, we have <span class="math-container">$\bigcup_{s\in\mathbb R\setminus\{r\}}E(s)\subseteq \{0,1\}^{\mathbb N}\setminus E(r)$</span>. In particular, we have <span class="math-container">$\mathbb P_p(E(p))=1\neq 0=\mathbb P_q(E(p))$</span>, so <span class="math-container">$\mathbb P_p$</span> and <span class="math-container">$\mathbb P_q$</span> are <a href="https://en.wikipedia.org/wiki/Singular_measure" rel="nofollow noreferrer">singular</a> (notice that the definition on Wikipedia can be reduced to the existence of an event <span class="math-container">$A$</span> such that <span class="math-container">$\mu(A)=1$</span>, <span class="math-container">$\nu(A)=0$</span> since we consider non-negative normalized measures).</p> <p>I'm fairly confident that this discussion is reasonable. Thus, either the underlying <span class="math-container">$\sigma$</span>-algebra in Klenke's book was overly restrictive (I doubt that), or the wording was intended to be suggestive (e.g. to point towards a definition, result or the like), but if so, I still didn't get the message.</p>
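A quick simulation (my own addition, not part of the answer; the parameters $p=0.3$, $q=0.7$ and the sample sizes are arbitrary choices) illustrates why $A_p$ and $A_q$ must be disjoint: the running means $a_n$ concentrate at the true parameter, so a single $\omega$ cannot have its averages converge to both $p$ and $q$.

```python
import random

def empirical_mean(r, n, rng):
    # average of n independent Bernoulli(r) coordinates X_1, ..., X_n
    return sum(1 if rng.random() < r else 0 for _ in range(n)) / n

p, q = 0.3, 0.7          # arbitrary parameters with p != q
rng = random.Random(0)

# Under Ber_p the running mean heads to p, under Ber_q to q,
# so no single omega can lie in both A_p and A_q.
for n in (100, 10_000, 1_000_000):
    print(n, empirical_mean(p, n, rng), empirical_mean(q, n, rng))
```

The two empirical means separate already for moderate $n$; the strong law turns this dichotomy into the almost-sure statement used above.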
2,787,114
<p>Before going to my question, let me give two preliminary definitions.</p> <blockquote> <p><strong>Definition 1.</strong> Let $S\subseteq\mathbb{R}^n$ be a non-empty open set in $\mathbb{R}^n$ under the usual topology on $\mathbb{R}^n$ and $f:S\to \mathbb{R}^m$. Let $\mathbf{c}\in S$ and let $g:U(\subseteq \mathbb{R})\to S$ be such that</p> <ul> <li><p>$U$ is open in $\mathbb{R}$ under the usual topology on $\mathbb{R}$</p></li> <li><p>$g(0)=\mathbf{c}$</p></li> <li><p>$g$ is continuous at $0$</p></li> </ul> <p>Then $f$ will be said to have a <em>derivative along the curve $g$ at the point $\mathbf{c}$</em> if $$\displaystyle\lim_{h\to 0}\dfrac{(f\circ g)(h)-(f\circ g)(0)}{h}$$ exists. </p> <p><strong>Definition 2.</strong> Let $S\subseteq\mathbb{R}^n$ be a non-empty open set in $\mathbb{R}^n$ under the usual topology on $\mathbb{R}^n$ and $f:S\to \mathbb{R}^m$. Let $\mathbf{c}\in S$. Then $f$ will be said to have an <em>approach independent derivative at $\mathbf{c}$</em> if $$\displaystyle\lim_{h\to 0}\dfrac{(f\circ g)(h)-(f\circ g)(0)}{h}$$ exists for all $g$ satisfying the properties listed in the previous definition.</p> </blockquote> <p><strong>Question</strong></p> <p>If $f$ has an approach independent derivative at $\mathbf{c}$, then is it continuous at $\mathbf{c}$?</p> <hr> <p>I was trying to find a counterexample, i.e., a function $f$ as above that is not continuous at $\mathbf{c}$, but so far I have not been able to find one. Any help will be appreciated. </p>
Frank Lu
41,622
<p>It turns out that the answer is positive. We prove by contradiction. Assume $f$ is discontinuous at $\mathbf{c}$; then there exist $\epsilon&gt;0$ and a sequence $(\mathbf{c}_n)_{n=1}^\infty$ in $S$ such that $\mathbf{c}_n\to \mathbf{c}$ and $$|f(\mathbf{c}_n)-f(\mathbf{c})|&gt;\epsilon,\quad\forall n\in\mathbb{N}.$$ Now define $g:(-1,1)\to S$ via $$g(x)=\begin{cases}\mathbf{c}_n, &amp;\text{if }x=\frac{1}{n}\\ \mathbf{c}, &amp;\text{elsewhere}. \end{cases}$$ Then it's trivial that $g$ satisfies all conditions listed. However $$\left|(f\circ g)\left(\frac{1}{n}\right) -(f\circ g)(0)\right|=|f(\mathbf{c}_n)-f(\mathbf{c})|&gt;\epsilon,\quad\forall n\in\mathbb{N}.$$ This shows that $$\lim_{h\to 0}\left[(f\circ g)(h)-(f\circ g)(0)\right]$$ cannot be $0$. But if the difference quotient $\frac{(f\circ g)(h)-(f\circ g)(0)}{h}$ had a limit as $h\to 0$, its numerator would have to tend to $0$. Thus $f$ does not have a derivative along $g$ at $\mathbf{c}$, and this is a contradiction.</p>
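To see the mechanism of the proof numerically, here is a toy instance (my own example; the choice $f(0)=1$, $f(x)=0$ otherwise, with $\mathbf{c}=0$ and $\mathbf{c}_n=1/n$, is not from the answer): the difference quotient along the curve $g$ built in the proof has magnitude $n$ at $h=1/n$, so it cannot converge.

```python
def f(x):
    # discontinuous at c = 0: f(0) = 1 and f(x) = 0 otherwise,
    # so |f(c_n) - f(c)| = 1 > epsilon for every point c_n != 0
    return 1.0 if x == 0 else 0.0

c = 0.0
for n in (1, 10, 100, 1000):
    h = 1.0 / n
    c_n = 1.0 / n               # the sequence c_n -> c from the proof
    # along the curve g (g(1/n) = c_n, g(h) = c elsewhere) the
    # difference quotient at h = 1/n equals (f(c_n) - f(c)) / (1/n) = -n
    quotient = (f(c_n) - f(c)) / h
    print(n, quotient)          # magnitude grows like n: no limit as h -> 0
```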
481,017
<blockquote> <p>Find all $f(x)$ satisfying $f(f(x)) = x^2 - 2$.</p> </blockquote> <p>Presumably $f(x)$ is supposed to be a function from $\mathbb R$ to $\mathbb R$ with no further restrictions (we don't assume continuity, etc), but the text of the problem does not specify further. </p> <p><strong>Possibly Helpful Links:</strong> Information on similar problems can be found <a href="https://mathoverflow.net/questions/17605/how-to-solve-ffx-cosx">here</a> and <a href="https://mathoverflow.net/questions/17614/solving-ffx-gx">here</a>.</p> <p><strong>Source:</strong> <a href="https://math.stackexchange.com/questions/481000/some-old-russian-problems">This question.</a> It is about to be closed for containing too many problems in one question. I'm posting each problem separately. </p>
Hugh
77,570
<p><strong>There is no such $f$</strong>.</p> <p>From <a href="http://yaroslavvb.com/papers/rice-when.pdf">http://yaroslavvb.com/papers/rice-when.pdf</a> , the question of existence is determined by the theorem: </p> <p><strong>Theorem 6.</strong> <em>Let $\mathbb{R}$ be the real line. Let $g$ be a real quadratic polynomial, so that $$g(x)=ax^2+ (b + 1)x+c,$$ for all real $x$, where $a\ne 0$, $b$, and $c$ are in $\mathbb{R}$. ... set $\Delta(g)= b^2-4ac$. If $\Delta(g)&gt; 1$, then g has no iterative roots of any order whatever.</em> [That is, there is no $f$ such that $f\circ f = g$.] <em>If $\Delta(g) =1$, then $g$ can be embedded in a 2-sided flow on $\mathbb{R}$, all of whose members are continuous functions. If $\Delta(g) &lt;1$, then $g$ can be embedded in a 1-sided flow on $\mathbb{R}$, all of whose members are continuous functions; but $g$ cannot be embedded in any 2-sided flow on $\mathbb{R}$.</em></p> <p>As $g(x)=x^2-2$ has $a=1$, $b+1=0$ (so $b=-1$) and $c=-2$, we get $\Delta(g) = (-1)^2 - 4(1)(-2) = 9 &gt; 1$ in your case, so the question of existence is answered in the negative.</p> <p>Looking closely at the article, the main point is that no function with only one 2-cycle can have a square root. In our case that means that there can be no partial solution $f:D\to D$ of the functional equation $f(f(x))=x^2-2$ in $D\subset\Bbb{R}$ if $x_0=\frac{-1+\sqrt{5}}{2}\approx 0.61803$ or $x_2=\frac{-1-\sqrt{5}}{2}\approx -1.61803$ are in $D$. </p> <p>In fact, clearly $x_0^2-2=x_2$ and $x_2^2-2=x_0$ (this implies that $x_0\in D$ if and only if $x_2\in D$). 
</p> <p>There can be no other pair $y_1\ne y_2$ with $y_1^2-2=y_2$ and $y_2^2-2=y_1$, since then $$ \{-1,2,x_0,x_2,y_1,y_2\} $$ would all be roots of the polynomial $P(x)=x^4 - 4 x^2 - x + 2$: indeed, $y_1^2-2=y_2$ and $y_2^2-2=y_1$ imply $$ (y_1^2-2)^2-2=y_1\quad\Rightarrow \quad y_1^4-4y_1^2+2=y_1 \quad\Rightarrow \quad P(y_1)=0 $$ and similarly $P(y_2)=0$ (here $-1$ and $2$ are the fixed points of $x\mapsto x^2-2$, and fixed points satisfy $P(x)=0$ as well). But a polynomial of degree $4$ has at most four roots, and these six numbers would all be distinct (if $y_1\in\{-1,2\}$ then $y_2=y_1^2-2=y_1$, contradicting $y_1\ne y_2$), which is impossible.</p> <p>Now, if $x_0\in D$ (or $x_2\in D$) and $f:D\to D$ satisfies $f(f(x))=x^2-2$, then $x_1:=f(x_0)$ and $x_3:=f(x_2)$ would be such a pair, a contradiction that proves $x_0\notin D$ (and $x_2\notin D$).</p>
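A numerical sanity check of the algebra above (my own addition): the pair $x_0,x_2$ really is a 2-cycle of $g(x)=x^2-2$, and the four numbers $-1,2,x_0,x_2$ already exhaust the roots of $P$.

```python
import math

def g(x):
    return x * x - 2

x0 = (-1 + math.sqrt(5)) / 2    # ~ 0.61803
x2 = (-1 - math.sqrt(5)) / 2    # ~ -1.61803

# x0 and x2 form a 2-cycle of g
assert abs(g(x0) - x2) < 1e-12
assert abs(g(x2) - x0) < 1e-12

def P(x):
    # every 2-periodic point of g is a root of P(x) = x^4 - 4x^2 - x + 2
    return x**4 - 4 * x**2 - x + 2

# its four roots are the fixed points -1, 2 and the cycle {x0, x2},
# so a second 2-cycle {y1, y2} would give P six roots -- impossible
for r in (-1.0, 2.0, x0, x2):
    assert abs(P(r)) < 1e-10

print("2-cycle:", x0, x2)
```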
1,814,474
<p>Let's consider the series $$\sum_{n=2}^\infty\frac{\ln\left(\frac{n+1}{n-1}\right)}{\sqrt{n}}.$$</p> <p>I have absolutely no clue how to continue. I could probably use the integral criterion and evaluate the integral using the residue theorem, but that is too much of a hassle. Is there an easy way to prove the convergence of this series?</p>
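Not a proof, but a numerical sanity check (my own addition): since $\ln\frac{n+1}{n-1}=\ln\left(1+\frac{2}{n-1}\right)\le\frac{2}{n-1}$, the terms are bounded by roughly $2/n^{3/2}$, and the partial sums level off.

```python
import math

def term(n):
    return math.log((n + 1) / (n - 1)) / math.sqrt(n)

# comparison: log((n+1)/(n-1)) = log(1 + 2/(n-1)) <= 2/(n-1),
# so term(n) <= 2 / ((n-1) * sqrt(n)), which behaves like 2 / n^(3/2)
for n in (2, 10, 100, 1000):
    assert term(n) <= 2 / ((n - 1) * math.sqrt(n))

partial = 0.0
for n in range(2, 200_001):
    partial += term(n)
print(partial)   # the partial sums stabilize, consistent with convergence
```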
Jack D'Aurizio
44,121
<p>Some preliminary lemmas.<br> <strong>Lemma 1.</strong> $$ \sum_{n\geq 1} H_n x^n = -\frac{\log(1-x)}{1-x}. $$ <strong>Lemma 2.</strong> By Lemma $1$, $$ \sum_{n\geq 1}\frac{H_n}{n+1} x^{n+1} = \frac{1}{2}\log^2(1-x),\qquad \sum_{n\geq 1}\frac{H_n+H_{n+1}}{n+1}x^{n}=\frac{-x+\log^2(1-x)+\text{Li}_2(x)}{x}.$$ <strong>Lemma 3.</strong> Since $H_{n+1}^2-H_n^2 = \frac{H_n+H_{n+1}}{n+1}$, $$ \sum_{n\geq 1}H_{n}^2 x^n = \frac{\log^2(1-x)+\text{Li}_2(x)}{1-x}.$$ <strong>Lemma 4.</strong> By Lemma 3, $$ \sum_{n\geq 1}\frac{(-1)^{n+1} H_{n}^2}{n+1} = -\int_{0}^{1}\frac{\log^2(1+x)+\text{Li}_2(-x)}{1+x}\,dx=-\frac{\log^3(2)}{3}-\color{red}{\int_{0}^{1}\frac{\text{Li}_{2}(-x)}{1+x}\,dx}.$$ The problem boils down to the evaluation of the last integral. By integration by parts, it is: $$ \color{red}{\int_{0}^{1}\frac{\text{Li}_{2}(-x)}{1+x}\,dx}=-\frac{\pi^2}{12}\log(2)+\color{blue}{\int_{0}^{1}\frac{\log^2(1+x)}{x}\,dx}\tag{1} $$ but: $$ \color{blue}{\int_{0}^{1}\frac{\log^2(1+x)}{x}\,dx} = -2\int_{0}^{1}\frac{\log(1+x)\log(x)}{1+x}\,dx=\color{blue}{\frac{\zeta(3)}{4}}\tag{2}$$ so that $$ \sum_{n\geq 1}\frac{(-1)^{n+1} H_{n}^2}{n+1} = \frac{\pi^2}{12}\log(2)-\frac{\log^3(2)}{3}-\frac{\zeta(3)}{4}, $$ and the proof is complete.</p>
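A numerical cross-check (my own addition) of the identity that Lemma 4 together with $(1)$ and $(2)$ yields, namely $\sum_{n\ge1}\frac{(-1)^{n+1}H_n^2}{n+1}=\frac{\pi^2\log 2}{12}-\frac{\log^3 2}{3}-\frac{\zeta(3)}{4}$; the alternating series converges slowly, so consecutive partial sums are averaged.

```python
import math

# partial sums of S = sum_{n>=1} (-1)^(n+1) H_n^2 / (n+1)
N = 200_000
H = 0.0
s = prev = 0.0
for n in range(1, N + 1):
    H += 1.0 / n                      # harmonic number H_n
    prev = s
    s += (-1) ** (n + 1) * H * H / (n + 1)
approx = (s + prev) / 2               # average consecutive partial sums

# closed form from Lemma 4 together with (1) and (2)
zeta3 = sum(1.0 / k**3 for k in range(1, 100_000))   # crude zeta(3)
closed = math.pi**2 * math.log(2) / 12 - math.log(2)**3 / 3 - zeta3 / 4
print(approx, closed)                 # the two values agree
```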
3,671,941
<p>In <a href="https://sites.math.northwestern.edu/~mpopa/571/chapter3.pdf" rel="nofollow noreferrer">this chapter</a>, Example 2.1 notes that we can write <span class="math-container">$\mathbb{Z}_p$</span> as <span class="math-container">$$p\mathbb{Z}_p\cup(1+p\mathbb{Z}_p)\cup\cdots\cup (p-1+p\mathbb{Z}_p).$$</span> I understand that <span class="math-container">$\mathbb{Z}_p=\{x\in\mathbb{Q}_p:|x|_p\leq 1\}$</span> and that <span class="math-container">$p\mathbb{Z}_p=\{x\in\mathbb{Q}_p : |x|_p&lt;1\}=\{x\in\mathbb{Q}_p : |x|_p\leq p^{-1}\}$</span>, but I'm having trouble seeing why <span class="math-container">$\mathbb{Z}_p\subset \bigcup_{a}(a+p\mathbb{Z}_p)$</span> for <span class="math-container">$a$</span> ranging between <span class="math-container">$0$</span> and <span class="math-container">$p-1$</span>. I understand that each of these is the closed ball of radius <span class="math-container">$p^{-1}$</span> centered at <span class="math-container">$a$</span>, but I don't see how that leads to these balls being a cover for <span class="math-container">$\mathbb{Z}_p$</span>. It must be the case that any point in <span class="math-container">$\mathbb{Z}_p$</span> can be written as <span class="math-container">$a+x$</span> for <span class="math-container">$a$</span> between <span class="math-container">$0$</span> and <span class="math-container">$p-1$</span> and <span class="math-container">$|x|_p\leq p^{-1}$</span>, but I don't see why that's true. While showing this mathematically is helpful, I'm trying to understand it from a conceptual point of view.</p>
Rob Arthan
23,171
<p>You can think of <span class="math-container">$\Bbb{Q}_p$</span> as numbers written in base <span class="math-container">$p$</span> notation that extend infinitely to the left of the <span class="math-container">$p$</span>-ary point, but finitely to the right. So you'd think of the example: <span class="math-container">$$ x = a_mp^m + a_{m+1}p^{m+1} + \ldots $$</span> in your linked chapter as: <span class="math-container">$$ x = \ldots a_3a_2a_1a_0.a_{-1} \ldots a_{m} $$</span> Thinking of it this way, the arithmetic operations are done much as one does for finite base <span class="math-container">$p$</span> numbers, but carrying on to the left indefinitely. In particular, multiplication by <span class="math-container">$p$</span> is a left shift. <span class="math-container">$\Bbb{Z}_p$</span> comprises the <span class="math-container">$x$</span> as above where <span class="math-container">$m \ge 0$</span>, i.e., where there are no digits after the point. But then <span class="math-container">$$ x = \ldots a_3a_2a_1a_0.0 = a_0 + p(\ldots a_4a_3a_2a_1.0) \in a_0 + p\Bbb{Z}_p $$</span></p> <p>[Postscript: in case anyone is worried about how division works: you work from right to left killing successive least-significant digits.]</p>
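The "peel off the last digit" picture can be mimicked with ordinary nonnegative integers (a sketch of the idea only, since a genuine element of $\Bbb{Z}_p$ has infinitely many digits):

```python
def base_p_digits(x, p, k):
    # first k base-p digits a_0, a_1, ... of a nonnegative integer x,
    # i.e. a truncation of its p-adic expansion
    digits = []
    for _ in range(k):
        digits.append(x % p)   # a_0 = x mod p selects the coset a_0 + p*Z_p
        x //= p                # dropping a_0 and shifting right
    return digits

p, x = 5, 1234
digits = base_p_digits(x, p, 6)
print(digits)                          # [4, 1, 4, 4, 1, 0]
assert x % p == digits[0]              # x lies in the coset a_0 + p*Z_p
assert x == digits[0] + p * (x // p)   # x = a_0 + p * (right-shifted number)
```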
28,166
<p>Suppose $a&gt;b&gt;0$.</p> <p>I need to find $$\lim_{n\to\infty}\sqrt[n]{a^n-b^n}.$$</p> <p>I tried to bound the expression from below and above in order to use the Squeeze (sandwich, two policemen and a drunk, choose your favourite) theorem.</p>
TonyK
1,508
<p>Rearrange $(a^n-b^n)^{1/n}$ as $a(1-(b/a)^n)^{1/n}$. Can you do this?</p>
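A numerical illustration of the hint (my own addition): in the rearranged form the second factor visibly tends to $1$, so the whole expression tends to $a$.

```python
def nth_root_diff(a, b, n):
    # (a^n - b^n)^(1/n) rewritten as a * (1 - (b/a)^n)^(1/n);
    # since 0 < b/a < 1, the factor (1 - (b/a)^n)^(1/n) tends to 1
    return a * (1 - (b / a) ** n) ** (1.0 / n)

a, b = 3.0, 2.0   # any a > b > 0
for n in (1, 5, 50, 500):
    print(n, nth_root_diff(a, b, n))   # approaches a = 3
```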
2,744,008
<p>Since the isolated singularities are $z=k\pi$, $k \in \mathbb Z$, we divide the complex plane into disjoint annuli, i.e. $\{z: n\pi &lt;|z|&lt;(n+1)\pi\}$, $n \in \mathbb N \cup \{0\}$. On each of these annuli, $f$ is analytic, so it has a unique Laurent series there.</p> <p>First, let's consider $\{z: 0 &lt; |z|&lt;\pi\}$; there the solution says $$\begin{aligned} f(z)&amp; =\frac{1}{\sin z}\\ &amp; =\frac{1}{z-\frac{z^3}{3!}+\frac{z^5}{5!}-\cdots}\\ &amp; =\frac{1}{z}\cdot \frac{1}{1-\left(\frac{z^2}{3!}-\frac{z^4}{5!}+\cdots\right)}\\ &amp; =\frac{1}{z}\left[1+\left(\frac{z^2}{3!}-\frac{z^4}{5!}+\cdots\right)+\left(\frac{z^2}{3!}-\frac{z^4}{5!}+\cdots\right)^2+\cdots\right] \end{aligned}$$ My questions are:</p> <p>Do we need $\left|\frac{z^2}{3!}-\frac{z^4}{5!}+\cdots\right|&lt;1$ to get the last step? If we do, then why is it less than $1$?</p> <p>Thanks for any help!</p>
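For what it's worth, the quantity $\frac{z^2}{3!}-\frac{z^4}{5!}+\cdots$ vanishes at $z=0$, so by continuity its modulus is less than $1$ in a small punctured disk; there the geometric series is valid, and uniqueness of the Laurent expansion extends the resulting series to the whole annulus. Carrying the multiplication in the last line out to a few terms gives $\frac{1}{\sin z}=\frac{1}{z}+\frac{z}{6}+\frac{7z^3}{360}+\cdots$, which can be checked numerically (my own addition):

```python
import math

def laurent_trunc(z):
    # first terms of the expansion on the annulus obtained by
    # multiplying out the geometric series in w = z^2/3! - z^4/5! + ...
    return 1 / z + z / 6 + 7 * z**3 / 360

for z in (0.5, 0.2, 0.1):
    err = abs(1 / math.sin(z) - laurent_trunc(z))
    print(z, err)   # the error shrinks like z^5 as z -> 0
```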
heropup
118,193
<p>Yes, for a continuous uniform random variable $X$ with support on an interval $[a,b]$, where $a &lt; b$, the probability density function of $X$ is given by $$f_X(x) = \begin{cases} \frac{1}{b-a}, &amp; a \le x \le b, \\ 0, &amp; \text{otherwise}. \end{cases}$$ However, this only tells us the density. To get the expected value, you must calculate $$\operatorname{E}[X] = \int_{x = -\infty}^\infty x f_X(x) \, dx = \int_{x=a}^b x \cdot \frac{1}{b-a} \, dx = \left[\frac{x^2}{2(b-a)}\right]_{x=a}^b = \frac{b^2 - a^2}{2(b-a)} = \frac{a+b}{2}.$$ And this is how one gets, for $a = 0$, $b = 1$, $$\operatorname{E}[X] = \frac{0 + 1}{2} = \frac{1}{2}.$$</p>
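A quick check of the computation (my own sketch): a Monte Carlo estimate and a midpoint Riemann sum of $\int_a^b x\,\frac{1}{b-a}\,dx$ both land on $(a+b)/2$.

```python
import random

def mc_mean_uniform(a, b, n, rng):
    # Monte Carlo estimate of E[X] for X ~ Uniform(a, b)
    return sum(rng.uniform(a, b) for _ in range(n)) / n

a, b = 2.0, 5.0
est = mc_mean_uniform(a, b, 100_000, random.Random(0))

# midpoint Riemann sum of the integral of x * 1/(b-a) over [a, b]
m = 1000
riemann = sum((a + (i + 0.5) * (b - a) / m) / (b - a) * (b - a) / m
              for i in range(m))

print(est, riemann, (a + b) / 2)   # all three are close to 3.5
```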