[Dataset columns: qid (int64), question (string), author (string), author_id (int64), answer (string)]
425,400
<p>Consider the following integral expression: <span class="math-container">$$\mathcal I :=\iint_{\epsilon \leq|x-y| \leq 1/2} f(x) f(y) \frac{(g(x)-g(y))(x-y)}{|x-y|^{3}} d x d y $$</span> for <span class="math-container">$\epsilon&gt;0$</span>, <span class="math-container">$f \in L^\infty(\mathbb R)$</span>, and <span class="math-container">$g \in BV(\mathbb R)$</span>. Is it true that <span class="math-container">$$\mathcal I \lesssim TV(g)$$</span> or something of this nature (possibly adding the <span class="math-container">$\epsilon$</span> somewhere)?</p> <hr /> <p>Added later: does the dependence on <span class="math-container">$\epsilon$</span> in the answer below improve if we further assume <span class="math-container">$f$</span> to be compactly supported?</p> <hr /> <p>This is motivated by a question related to approximate differentiability.</p>
Iosif Pinelis
36,721
<p><span class="math-container">$\newcommand{\ep}{\epsilon}\newcommand{\R}{\mathbb R}$</span>Added later by the OP:</p> <blockquote> <p>does the dependence on <span class="math-container">$\ep$</span> in the answer below improve if we further assume <span class="math-container">$f$</span> to be compactly supported?</p> </blockquote> <p>The answer to this additional question is: of course, not.</p> <p>Indeed, suppose that <span class="math-container">$f=1_{[a,b]}$</span> for some real <span class="math-container">$a$</span> and <span class="math-container">$b$</span> such that <span class="math-container">$b-a&gt;1$</span>. Let <span class="math-container">$g$</span> be any nondecreasing function such that <span class="math-container">$g$</span> is constant on <span class="math-container">$(-\infty,a+1/2)$</span> and on <span class="math-container">$(b-1/2,\infty)$</span>, so that <span class="math-container">$$TV(g)=\int_{[a+1/2,\,b-1/2]}dg(z).$$</span></p> <p>Then for <span class="math-container">$\ep\in(0,1/2]$</span> we have <span class="math-container">\begin{equation} \mathcal I=2\|f\|_\infty^2\, J, \end{equation}</span> where <span class="math-container">\begin{equation} \begin{aligned} J &amp;:=\iint_{[a,b]^2}\,\frac{dx\, dy}{(x-y)^2}\,|g(x)-g(y)|\,1(\ep\le y-x\le1/2) \\ &amp;=\iint_{[a,b]^2}\,\frac{dx\, dy}{(x-y)^2}\,\int_x^y dg(z)\,1(\ep\le y-x\le1/2) \\ &amp;=\int_{[a,b]} dg(z) \\ &amp;\quad\times\iint_{[a,b]^2}\,\frac{dx\, dy}{(x-y)^2} \,1(x\le z\le y,\,\ep\le y-x\le1/2)\\ &amp;\ge\int_{[a+1/2,\,b-1/2]} dg(z)\int_{z-1/2}^z dx\,\int_{\max(z,x+\ep)}^{x+1/2}\frac{dy}{(x-y)^2} \\ &amp;=\Big(\ln\frac1{2\ep}\Big)\,\int_{[a+1/2,\,b-1/2]} dg(z)=\Big(\ln\frac1{2\ep}\Big)\,TV(g). \end{aligned} \end{equation}</span> Thus, <span class="math-container">\begin{equation} \mathcal I\ge2\Big(\ln\frac1{2\ep}\Big)\|f\|_\infty^2\, TV(g). \tag{4}\label{4} \end{equation}</span> (In view of inequality (3) in the previous answer, inequality \eqref{4} is actually the equality.)</p> <p>Thus, the form of the exact bound on <span class="math-container">$\mathcal I$</span> with the additional restriction that <span class="math-container">$f$</span> be compactly supported is exactly the same as the exact bound on <span class="math-container">$\mathcal I$</span> without this restriction.</p>
3,683,542
<p>My question is whether the following conjecture is true:</p> <p>If <span class="math-container">$f:{\mathbb R}_{\geq0}\to{\mathbb R}$</span> is continuous and <span class="math-container">$\lim_{x\to\infty}\bigl|f(x)\bigr|=\infty$</span>, then <span class="math-container">$$\lim_{x\to\infty}{1\over x}\int_0^x\bigl|\sin(f(t))\bigr|\,dt={2\over\pi}\ .$$</span></p> <p>In other words, it says that the average value of the function <span class="math-container">$|\sin(f(t))|$</span> from <span class="math-container">$0$</span> to infinity is equal to the average value of <span class="math-container">$|\sin(t)|$</span> from <span class="math-container">$0$</span> to infinity (which is <span class="math-container">$\frac{2}{\pi}$</span>), whenever <span class="math-container">$f(t)$</span> satisfies the conditions above.</p> <p>I don't know how to prove it. I've checked it numerically for a few cases using Desmos. There is a problem with proving it even for single cases, because the integral <span class="math-container">$\int_0^x|\sin(f(t))|dt$</span> is almost always very hard to calculate and requires special functions like the Fresnel S integral.</p> <p>If you have any ideas on how to prove it, please let me know.</p> <p>Thanks for all the help.</p>
Barry Cipra
86,747
<p>The conjecture is not true. Let <span class="math-container">$f$</span> be a continuous approximation to the step function <span class="math-container">$\pi\lfloor x\rfloor$</span>, where the portions of the curve that connect consecutive horizontal steps get steeper and steeper as <span class="math-container">$x\to\infty$</span> in a way that the total length of the intervals where <span class="math-container">$f(x)$</span> is not an integer multiple of <span class="math-container">$\pi$</span> is finite. Then <span class="math-container">$\lim_{x\to\infty}{1\over x}\int_0^x|\sin(f(t))|\,dt=0$</span>, not <span class="math-container">$2/\pi$</span>.</p>
3,683,542
<p>My question is whether the following conjecture is true:</p> <p>If <span class="math-container">$f:{\mathbb R}_{\geq0}\to{\mathbb R}$</span> is continuous and <span class="math-container">$\lim_{x\to\infty}\bigl|f(x)\bigr|=\infty$</span>, then <span class="math-container">$$\lim_{x\to\infty}{1\over x}\int_0^x\bigl|\sin(f(t))\bigr|\,dt={2\over\pi}\ .$$</span></p> <p>In other words, it says that the average value of the function <span class="math-container">$|\sin(f(t))|$</span> from <span class="math-container">$0$</span> to infinity is equal to the average value of <span class="math-container">$|\sin(t)|$</span> from <span class="math-container">$0$</span> to infinity (which is <span class="math-container">$\frac{2}{\pi}$</span>), whenever <span class="math-container">$f(t)$</span> satisfies the conditions above.</p> <p>I don't know how to prove it. I've checked it numerically for a few cases using Desmos. There is a problem with proving it even for single cases, because the integral <span class="math-container">$\int_0^x|\sin(f(t))|dt$</span> is almost always very hard to calculate and requires special functions like the Fresnel S integral.</p> <p>If you have any ideas on how to prove it, please let me know.</p> <p>Thanks for all the help.</p>
Severin Schraven
331,816
<p>No, this conjecture is not true. Consider a continuous function such that <span class="math-container">$f(x) = 2\pi n$</span> for <span class="math-container">$x\in [n+1/n^2, n+1]$</span>. Then we get for <span class="math-container">$n\leq x \leq n+1$</span> <span class="math-container">$$ \frac{1}{x} \int_0^x \vert \sin(f(t)) \vert dt \leq \frac{1}{n} \sum_{k=1}^\infty \int_{k}^{k+1/k^2} \vert \sin(f(t)) \vert dt \leq \frac{1}{n} \sum_{k\geq 1} \frac{1}{k^2} \rightarrow 0 $$</span> for <span class="math-container">$n\rightarrow \infty$</span>.</p>
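<p>For readers who want to see both effects numerically, here is a small sketch in Python (an editorial illustration: the test function $f(t)=t^2$, the grid sizes, and the vectorized linear-ramp version of the counterexample above are choices made here, not taken from the posts). For $f(t)=t^2$ the running average drifts toward $2/\pi\approx 0.6366$, while for the staircase-type $f$ it drifts toward $0$.</p> <pre><code>import numpy as np

def running_average(f, x, num=2_000_000):
    # approximate (1/x) * integral_0^x |sin(f(t))| dt by a uniform-grid mean
    t = np.linspace(0.0, x, num)
    return np.abs(np.sin(f(t))).mean()

def staircase(t):
    # continuous piecewise-linear f with f = 2*pi*n on [n + 1/n^2, n + 1]
    n = np.maximum(np.floor(t), 1.0)
    ramp = np.clip((t - n) * n**2, 0.0, 1.0)
    return 2 * np.pi * (n - 1 + ramp)

for x in (10, 100, 500):
    print(x,
          running_average(lambda t: t**2, x),   # tends to 2/pi ~ 0.6366
          running_average(staircase, x))        # tends to 0
</code></pre>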
2,161,911
<p>Find the limit :</p> <p>$$\lim_{ n\to \infty }\sqrt[n]{\prod_{i=1}^n \frac{1}{\cos\frac{1}{i}}}=\,\,?$$</p> <p>My try :</p> <p>$$\lim_{ n\to \infty }\sqrt[n]{a} =1\,\, \text {for} \,\,a&gt;0$$</p> <p>and;</p> <p>$$\prod_{i=1}^n \frac{1}{\cos\frac{1}{i}}&gt;0$$</p> <p>so :</p> <p>$$\lim_{ n\to \infty }\sqrt[n]{\prod_{i=1}^n \frac{1}{\cos\frac{1}{i}}}=1$$</p> <p>Is it right?</p>
Mark Viola
218,419
<p>Let $\epsilon&gt;0$ be given. Take $N$ so large that $|\log(\cos(1/i))|&lt;\epsilon/2$ whenever $i&gt;N$. Then, we can write for $n&gt;N+1$</p> <p>$$\begin{align} \left|\log\left(\sqrt[n]{\prod_{i=1}^n\sec(1/i)}\right)\right|&amp;=\frac1n\left|\sum_{i=1}^n\log(\cos(1/i))\right|\\\\ &amp;\le\frac1n\sum_{i=1}^N|\log(\cos(1/i))|+\frac1n\sum_{i=N+1}^n|\log(\cos(1/i))|\\\\ &amp;\le \frac1n\sum_{i=1}^N|\log(\cos(1/i))|+\frac\epsilon2 \left(1-\frac{N}{n}\right)\\\\ &amp;&lt;\epsilon \end{align}$$</p> <p>when $n&gt;\max\left(N+1,\frac{2N|\log(\cos(1/N))|}{\epsilon}\right)$.</p> <p>Therefore, we have $\lim_{n\to \infty}\log\left(\sqrt[n]{\prod_{i=1}^n\sec(1/i)}\right)=0$ and</p> <blockquote> <p>$$\lim_{n\to \infty}\sqrt[n]{\prod_{i=1}^n\sec(1/i)}=1 \tag 1$$</p> </blockquote> <hr> <blockquote> <p>NOTE: We could have applied the <a href="https://en.wikipedia.org/wiki/Stolz%E2%80%93Ces%C3%A0ro_theorem" rel="nofollow noreferrer">Stolz-Cesaro Theorem</a> and obtained directly</p> </blockquote> <p>$$\begin{align} \lim_{n\to \infty}\frac{\sum_{i=1}^n\log(\cos(1/i))}{n}&amp;=\lim_{n\to \infty}\frac{\sum_{i=1}^{n+1}\log(\cos(1/i))-\sum_{i=1}^n\log(\cos(1/i))}{(n+1)-(n)}\\\\ &amp;=\lim_{n\to \infty}\log(\cos(1/(n+1)))\\\\ &amp;=0 \end{align}$$</p> <p>from which we recover $(1)$ immediately.</p>
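<p>A quick numerical sanity check of the limit, as a Python sketch (the cutoff $n=10^6$ is an arbitrary choice): the geometric mean of $\sec(1/i)$ settles to $1$ because $\sum_i|\log\cos(1/i)|$ converges, so dividing by $n$ kills it.</p> <pre><code>import numpy as np

n_max = 10**6
i = np.arange(1, n_max + 1)
log_terms = -np.log(np.cos(1.0 / i))     # log sec(1/i), roughly 1/(2 i^2)
running = np.cumsum(log_terms) / i       # (1/n) * sum of log sec(1/i) for i up to n
for n in (10, 1_000, 1_000_000):
    print(n, np.exp(running[n - 1]))     # tends to 1
</code></pre>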
238,052
<p>If I have 4 different types of data such (that I get from an Excel file) as:</p> <pre><code>https://pastebin.com/j3Bgfxqm </code></pre> <p>I am trying to implement a <code>Do</code> loop that extracts the data from the Excel file, superimposes the data in two different regions (as done here: <a href="https://mathematica.stackexchange.com/questions/237742/superimposed-curves-in-two-regions">Superimposed curves in two regions</a>), plots the data individually (from the <code>Do</code> loop) and then plots the data all together and superimposed. Here's my approach to everything except superimposing the curves (that's the part I don't know how to implement in the <code>Do</code> loop):</p> <pre><code>Do[ datTCpc = Extract[Import[&quot;excel file.xlsx&quot;], 1][[3 ;; All, {i, i + 1}]]; store = AppendTo[tts1, datTCpc]; (*Stores the data*) (*Plotting individually in the do loop*) Show[ ListLinePlot[datTCpc, PlotStyle -&gt; {Red}, AxesLabel -&gt; {&quot;T (ºC)&quot;, &quot;Cp(J/gK)&quot;}, LabelStyle -&gt; {Black, Bold, 14}, PlotLabel -&gt; Style[q2 &quot;K/min&quot;, Black, 14]]] // Print, {i, 1, imax2, 2}] (*Plots the store data combined - superimposed*) Show[ListLinePlot[store, PlotStyle -&gt; {Red, Blue, Gray, Black}, LabelStyle -&gt; {Black, Bold, 14}, ImageSize -&gt; Large, Frame -&gt; True, Axes -&gt; False, GridLines -&gt; Automatic, GridLinesStyle -&gt; Lighter[Gray, .8], PlotRange -&gt; {{20, 110}, All}]] </code></pre> <p>How can I superimpose the data using the first region from 29 to 41 (in x-axis data) and the second region from 72 to 85 (in the x-axis data)?</p> <p><strong>Clarification:</strong> By superimposing the curves or the data, I mean simply placing the curves one on top of the other (taking one as the reference and putting the other on top based on two regions).</p> <p><strong>EDIT</strong></p> <p>This is how the data will look like superimposed (done manually in Excel), where the two regions are shown in red circles:</p> <p><a href="https://i.stack.imgur.com/iMzbf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iMzbf.png" alt="enter image description here" /></a></p> <p>It was done by translating or rotating the curves until minimizing the differences.</p>
Daniel Huber
46,318
<p>In MMA using Loops is not advised. Instead, use Map and a function. Here is the procedure.</p> <p>Assume that we want to overlay dataset2, dataset3,.. onto dataset1</p> <p>We again need data:</p> <pre><code>dat = Import[&quot;https://pastebin.com/j3Bgfxqm&quot;, &quot;Data&quot;][[1]]; dat = ToExpression[ StringCases[#, &quot;{&quot; ~~ NumberString ~~ &quot;,&quot; ~~ NumberString ~~ &quot;}&quot;]] &amp; /@ dat; </code></pre> <p>Then we define the intervals and truncate the first data set:</p> <pre><code>n1 = 12; n2 = 300; n3 = 450; n4 = 580; d1 = Join[Take[dat[[1]], {n1, n2}], Take[dat[[1]], {n3, n4}]]; </code></pre> <p>Then we define the affine transformation and error function:</p> <pre><code>affTrans[t_, r_, d_] := (t + r.#) &amp; /@ d; err[{tx_, ty_}, phi_, d_] := Total[(Norm /@ (affTrans[{tx, ty}, RotationMatrix[phi], d] - d1))^2]; </code></pre> <p>Using the error function, we define a function that does the actual work. We then map this function onto all the datasets with the exception of data set 1:</p> <pre><code>results = (d = Join[Take[#, {n1, n2}], Take[#, {n3, n4}]]; sol = {tx, ty} + RotationMatrix[phi].# &amp; /. FindMinimum[{err[{tx, ty}, phi, d]}, {{tx, 0}, {ty, 0}, {phi, 0}}][[2]]; sol /@ # ) &amp; /@ Rest[dat]; </code></pre> <p>Finally, we prepend dataset 1 to the results and plot them:</p> <pre><code>PrependTo[results, dat[[1]]]; ListLinePlot[results] </code></pre> <p><a href="https://i.stack.imgur.com/qtJD2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qtJD2.png" alt="enter image description here" /></a></p>
1,666,295
<p>I was wondering: when we add partial pivoting to an $LU$ factorization of a matrix $A$, it supposedly changes the data structure but improves the overall algorithm, since we get better numerical stability. I am curious as to why this is.</p> <p>Any feedback is appreciated. My apologies for not formally introducing the maths involved, but I was hoping more for a qualitative explanation; not to say a quantitative one is not appreciated.</p>
SplitInfinity
316,671
<p>The algorithm for $LU$ factorization of a matrix $A$ (at least, the only one that I'm aware of) involves dividing several elements in $A$ by $a_{11}$. Without pivoting, this element might be equal to 0, and we obviously can't have that. In addition, this element might be very small, and dividing by very small numbers in 32- or 64-bit floating point arithmetic can lead to numerical instability. To get around this, we add partial pivoting to place the largest element in the 1st column in the $(1, 1)$ position of A.</p>
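<p>Here is a minimal numerical illustration of this point, as a Python sketch (the $2\times 2$ system and the tiny pivot $10^{-20}$ are textbook placeholder choices, not taken from the question): eliminating without a row exchange divides by the tiny pivot, the remaining entries are swallowed in floating point, and the computed answer is badly wrong, while a pivoted solve is fine.</p> <pre><code>import numpy as np

def solve2x2_no_pivot(A, b):
    # Gaussian elimination on a 2x2 system with no row exchanges
    l21 = A[1, 0] / A[0, 0]          # divide by the (possibly tiny) pivot a_11
    u22 = A[1, 1] - l21 * A[0, 1]    # A[1, 1] is lost next to the huge l21 * A[0, 1]
    y2 = b[1] - l21 * b[0]
    x2 = y2 / u22
    x1 = (b[0] - A[0, 1] * x2) / A[0, 0]
    return np.array([x1, x2])

A = np.array([[1e-20, 1.0],
              [1.0,   1.0]])
b = np.array([1.0, 2.0])

print(solve2x2_no_pivot(A, b))   # roughly [0, 1]: completely wrong
print(np.linalg.solve(A, b))     # LAPACK pivots: roughly [1, 1], the true solution
</code></pre>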
4,314,443
<p>I'm trying to prove the following statement coming from a book:</p> <p>&quot;Pushforward of the vector field <span class="math-container">$\dfrac{d}{dx}$</span> by the exponential, i.e. <span class="math-container">$exp_{*}\dfrac{d}{dx}$</span> = <span class="math-container">$x\dfrac{d}{dx}$</span> on <span class="math-container">$R_{+}^*$</span>&quot;</p> <p>The only statement I have in my possession is that for <span class="math-container">$\phi$</span> a diffeomorphism <span class="math-container">$M \rightarrow N$</span> and <span class="math-container">$X$</span> a vector field (which is a section of <span class="math-container">$TM$</span>), one has the following property:</p> <p><span class="math-container">$\forall x$</span> in <span class="math-container">$N$</span></p> <p><span class="math-container">$(\phi_{*}X)(x) = d_{\phi^{-1}(x)}\phi$</span> . <span class="math-container">$X(\phi^{-1}(x))$</span></p> <p>Hence, I tried applying the formula: <span class="math-container">$exp_{*}\dfrac{d}{dx}(x)$</span> = <span class="math-container">$(d_{\exp^{-1}(x)}\exp)$</span> . <span class="math-container">$\dfrac{d}{dx}(\exp^{-1}(x))$</span> = <span class="math-container">$d_{\ln(x)}\exp$</span> . <span class="math-container">$\dfrac{d}{dx}(\ln(x))$</span> <br /> = <span class="math-container">$d_{\ln(x)}\exp$</span> . <span class="math-container">$\dfrac{1}{x}$</span> = <span class="math-container">$exp(1/x)$</span></p> <p>But if the result I'm trying to prove is right, I should have that <span class="math-container">$\forall x$</span> in <span class="math-container">$R_{+}^*$</span>, <span class="math-container">$x\dfrac{d}{dx}(x)$</span> = <span class="math-container">$x$</span>.</p> <p>Am I missing something more conceptual? Thank you in advance for any explanation.</p>
Masacroso
173,262
<p>The pushforward of a vector field is well-defined just by a diffeomorphism (or in a Lie group), that is, if <span class="math-container">$\varphi :M\to N$</span> is such diffeomorphism then <span class="math-container">$\varphi _*:\mathfrak{X}(M)\to \mathfrak{X}(N)$</span>, where <span class="math-container">$\mathfrak{X}(D)$</span> is the space of smooth vector fields on <span class="math-container">$D$</span>.</p> <p>The pushforward is defined by</p> <p><span class="math-container">$$ (\varphi _*X)_{\varphi (p)}f=X_p(f\circ \varphi ) $$</span></p> <p>In this case if <span class="math-container">$\varphi :\mathbb{R}\to (0,\infty ),\, x\mapsto e^x$</span> you have</p> <p><span class="math-container">$$ \left(\exp_*\frac{\partial}{\partial x}\right)_rf=\left .\frac{\partial}{\partial x}\right|_{\ln r}(f \circ \exp)=(f'\circ \exp)(\ln r)\exp'(\ln r)=r\cdot f'(r)=\left .t\frac{\partial}{\partial t}\right|_{r}f $$</span></p> <p>where <span class="math-container">$\frac{\partial}{\partial t}$</span> is the canonical basis of <span class="math-container">$\mathfrak{X}((0,\infty ))$</span> and <span class="math-container">$f$</span> is any smooth function in <span class="math-container">$(0,\infty )$</span>.</p>
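<p>As a quick consistency check of the formula $\left(\exp_*\frac{\partial}{\partial x}\right)_r f = r\,f'(r)$, here is a tiny numerical sketch in Python (the test function $f=\sin$ and the finite-difference step are arbitrary choices made for illustration): differentiate $f\circ\exp$ at $x=\log r$ and compare with $r f'(r)$.</p> <pre><code>import numpy as np

f, fp = np.sin, np.cos            # any smooth test function and its derivative

def pushforward_on_f(r, h=1e-6):
    # d/dx of (f o exp) evaluated at x = log(r), via a central difference
    x = np.log(r)
    return (f(np.exp(x + h)) - f(np.exp(x - h))) / (2 * h)

for r in (0.5, 1.0, 3.0, 10.0):
    print(r, pushforward_on_f(r), r * fp(r))   # the last two columns agree
</code></pre>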
4,314,443
<p>I'm trying to prove the following statement coming from a book:</p> <p>&quot;Pushforward of the vector field <span class="math-container">$\dfrac{d}{dx}$</span> by the exponential, i.e. <span class="math-container">$exp_{*}\dfrac{d}{dx}$</span> = <span class="math-container">$x\dfrac{d}{dx}$</span> on <span class="math-container">$R_{+}^*$</span>&quot;</p> <p>The only statement I have in my possession is that for <span class="math-container">$\phi$</span> a diffeomorphism <span class="math-container">$M \rightarrow N$</span> and <span class="math-container">$X$</span> a vector field (which is a section of <span class="math-container">$TM$</span>), one has the following property:</p> <p><span class="math-container">$\forall x$</span> in <span class="math-container">$N$</span></p> <p><span class="math-container">$(\phi_{*}X)(x) = d_{\phi^{-1}(x)}\phi$</span> . <span class="math-container">$X(\phi^{-1}(x))$</span></p> <p>Hence, I tried applying the formula: <span class="math-container">$exp_{*}\dfrac{d}{dx}(x)$</span> = <span class="math-container">$(d_{\exp^{-1}(x)}\exp)$</span> . <span class="math-container">$\dfrac{d}{dx}(\exp^{-1}(x))$</span> = <span class="math-container">$d_{\ln(x)}\exp$</span> . <span class="math-container">$\dfrac{d}{dx}(\ln(x))$</span> <br /> = <span class="math-container">$d_{\ln(x)}\exp$</span> . <span class="math-container">$\dfrac{1}{x}$</span> = <span class="math-container">$exp(1/x)$</span></p> <p>But if the result I'm trying to prove is right, I should have that <span class="math-container">$\forall x$</span> in <span class="math-container">$R_{+}^*$</span>, <span class="math-container">$x\dfrac{d}{dx}(x)$</span> = <span class="math-container">$x$</span>.</p> <p>Am I missing something more conceptual? Thank you in advance for any explanation.</p>
rych
73,934
<p><span class="math-container">$\def\R{\mathbb{R}}%$</span>@Masacroso's is the answer, but I think we could also use the Fréchet derivative of the map between two copies of <span class="math-container">$\R$</span>, <span class="math-container">$Dexp: \R_1\to\R_2$</span>, <span class="math-container">$Dexp_x=e^x=y$</span>; for an arbitrary <span class="math-container">$h\in \R_1$</span>, <span class="math-container">$Dexp_x(h)=e^xh\in \R_2.$</span></p> <p>The map derivative, on the other hand, is a pushforward of <strong>tangent</strong> vectors. We can use what's called canonical identification or natural isomorphism: <span class="math-container">$g_1: T\R_1\to\R_1$</span> and <span class="math-container">$g_2: T\R_2\to\R_2$</span>. The relation formula (see @levap's answer <a href="https://math.stackexchange.com/a/2158705/73934">here</a> for example) between the pushforward and the Fréchet derivative of our map <span class="math-container">$exp$</span> is</p> <p><span class="math-container">$$exp_{*p}=g_2^{-1}\circ Dexp\circ g_1:T\R_1\to T\R_2$$</span> And so we have <span class="math-container">$exp_{*x}(h\tfrac{d}{dx})=[g_2^{-1}\circ Dexp\circ g_1](h\tfrac{d}{dx})=[g_2^{-1}\circ Dexp_x](h)=g_2^{-1}(e^xh)=(e^xh)\tfrac{d}{dy}$</span>,</p> <p>where <span class="math-container">$\tfrac{d}{dy}$</span> is the basis vector on the <span class="math-container">$T\R_2$</span>. Taking <span class="math-container">$h=1$</span> we finally have <span class="math-container">$$exp_{*x}(\tfrac{d}{dx})=y\tfrac{d}{dy}$$</span></p>
69,655
<p>I'm facing a strange behavior of <code>HoldForm</code>.</p> <p>I need to display <code>1/2*3/4</code> in LaTeX like this: $$ \frac{1}{2} \times \frac{3}{4} $$</p> <p>So I use Mathematica: <code>1/2* 3/4 // HoldForm // TeXForm</code> BUT I get $$ \frac{3}{2\ 4} $$</p> <p>First, the writing <code>2 space 4</code> is ambiguous, and second, it does not hold the form at all :(</p> <p>Can you help me? Thank you! (Happy holidays!)</p> <p><strong>EDIT: I would need an automatic transformation of any input to correct TeX, or an automatic correction of any output to correct TeX.</strong></p>
bbgodfrey
1,063
<p>Use <code>HoldForm</code> applied to each fraction to keep the fractions from combining.</p> <pre><code>HoldForm[1/2] HoldForm[3/4] </code></pre> <p>to produce $$ \frac{1}{2} \frac{3}{4} $$</p> <p>or </p> <pre><code>HoldForm[(1/2) (3/4)] </code></pre> <p>to produce $$ \frac{3}{2 \times 4} $$</p> <p>Using <code>TeXForm</code> produces the desired LaTex code.</p> <pre><code>(HoldForm[1/2] HoldForm[3/4]) // TeXForm (* \frac{1}{2} \frac{3}{4} *) </code></pre> <p><strong>Addendum</strong></p> <p>Simpler is</p> <pre><code>Infix[f[1/2, 3/4], "\[Times]"] // TeXForm (* \frac{1}{2}\times \frac{3}{4} *) </code></pre> <p>which also provides the times sign. $$\frac{1}{2}\times \frac{3}{4}$$</p> <p><strong>Second Addendum</strong></p> <pre><code>z1 z2 /. Times -&gt; Cross /. {z1 -&gt; 1/2, z2 -&gt; 3/4} // TeXForm </code></pre> <p>also produces the desired output. (This is based on the third Answer to <a href="https://mathematica.stackexchange.com/questions/39061/multiplication-sign-in-texform">39061</a>.)</p>
69,655
<p>I'm facing a strange behavior of <code>HoldForm</code>.</p> <p>I need to display <code>1/2*3/4</code> in LaTeX like this: $$ \frac{1}{2} \times \frac{3}{4} $$</p> <p>So I use Mathematica: <code>1/2* 3/4 // HoldForm // TeXForm</code> BUT I get $$ \frac{3}{2\ 4} $$</p> <p>First, the writing <code>2 space 4</code> is ambiguous, and second, it does not hold the form at all :(</p> <p>Can you help me? Thank you! (Happy holidays!)</p> <p><strong>EDIT: I would need an automatic transformation of any input to correct TeX, or an automatic correction of any output to correct TeX.</strong></p>
Mr.Wizard
121
<p>The behavior you observe is due to the formatting rules associated with <code>Times</code>. Please start by reading my answer here: <a href="https://mathematica.stackexchange.com/q/7880/121">Returning an unevaluated expression with values substituted in</a>. We can apply a similar technique here though the result is not quite as desired if we merely block <code>Times</code> during Box creation. We get:</p> <blockquote> <p>$\left(1*\frac{1}{2}\right)*\left(3*\frac{1}{4}\right)$</p> </blockquote> <p>This form is due to the internal format of <code>1/2</code> and <code>3/4</code>:</p> <pre><code>Hold[1/2, 3/4] // FullForm </code></pre> <blockquote> <pre><code>Hold[Times[1, Power[2, -1]], Times[3, Power[4, -1]]] </code></pre> </blockquote> <p>One way to handle this is to post-process the Box form yield the format we desire:</p> <pre><code>SetAttributes[hf, HoldAll] MakeBoxes[hf[args__], fmt_] := Block[{Times}, MakeBoxes[HoldForm[args], fmt]] /. RowBox[{"(", RowBox[{n_, "*", FractionBox["1", d_]}], ")"}] :&gt; FractionBox[n, d] </code></pre> <p>Now using <code>hf</code> in place of <code>HoldForm</code>:</p> <pre><code>hf[1/2*3/4] // TeXForm </code></pre> <blockquote> <pre><code>\frac{1}{2}*\frac{3}{4} </code></pre> </blockquote> <p>Formatted:</p> <blockquote> <p>$\frac{1}{2}*\frac{3}{4}$</p> </blockquote>
2,771,823
<p>The question is whether the result of a multiplication modulo $n$, e.g. $a*a*a$ modulo $n$, is the same when we reduce modulo $n$ at each step of the multiplication. That is, does</p> <p>$(((a \text{ mod } n)* a \text{ mod } n) * a \text{ mod } n) = a*a*a \text{ mod } n$?</p>
AbstractNonsense
429,931
<p>If $a\equiv b(\mod n)$ and $a'\equiv b'(\mod n)$, then $aa'\equiv bb' (\mod n). $ So you can either multiply and then reduce $\mod n$ or vice versa. For an application, to see that you can reduce large computations to something with logarithmic runtime, e.g. have a look at <a href="https://en.wikipedia.org/wiki/Exponentiation_by_squaring" rel="nofollow noreferrer">square &amp; multiply</a></p>
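<p>Since the linked application is exponentiation by squaring, here is a short hedged sketch of it in Python (the sample values $a=7$, $e=128$, $n=13$ are arbitrary): reduce modulo $n$ after every multiplication, so intermediate results never grow, and the final answer matches reducing only at the end.</p> <pre><code>def power_mod(a, e, n):
    # compute a**e mod n by repeated squaring, reducing mod n at every step
    result, base = 1, a % n
    while e &gt; 0:
        if e &amp; 1:                        # current binary digit of the exponent
            result = (result * base) % n
        base = (base * base) % n         # square, then reduce immediately
        e &gt;&gt;= 1
    return result

# reducing at each step agrees with reducing only at the very end
assert power_mod(7, 128, 13) == pow(7, 128) % 13 == pow(7, 128, 13)
print(power_mod(7, 128, 13))
</code></pre>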
1,220,800
<blockquote> <p>Calculate the real root values $x$ of $ y(x)=\sqrt{x+1}-\sqrt{x-1}-\sqrt{4x-1} $</p> </blockquote> <p>$\bf{My\; Solution::}$ Here the domain of the equation is $\displaystyle x\geq 1$. Squaring both sides, we get</p> <p>$\displaystyle (x+1)+(x-1)-2\sqrt{x^2-1}=(4x-1)$.</p> <p>$\displaystyle (1-2x)^2=4(x^2-1)\Rightarrow 1+4x^2-4x=4x^2-4\Rightarrow x=\frac{5}{4}.$</p> <p>But when we put $\displaystyle x = \frac{5}{4}\;,$ we get $\displaystyle \frac{3}{2}-\frac{1}{2}=2\Rightarrow 1=2$ (false).</p> <p>So we get no solution.</p> <p>My question is: Can we solve the above question by comparing expressions?</p> <p>Something like $\sqrt{x+1}&lt;\sqrt{x-1}+\sqrt{4x-1}\; \forall x\geq 1?$ </p> <p>If that way is possible, please help me solve it. Thanks.</p>
Vincenzo Oliva
170,489
<p>Alternatively, isolating $ \sqrt{4x-1}$ and then multiplying both sides by $\sqrt{x+1}+\sqrt{x-1}$ makes it easier to conclude the LHS is smaller than the RHS: $$\require\cancel \cancel{x}+1-\cancel{x}+1=2&lt;\sqrt{4x-1}\left(\sqrt{x+1}+\sqrt{x-1}\right).\tag{$\star$}$$</p> <p>Once we check $(\star)$ holds for $x=1$ we are done, since clearly its RHS is increasing.</p>
1,220,800
<blockquote> <p>Calculate the real root values $x$ of $ y(x)=\sqrt{x+1}-\sqrt{x-1}-\sqrt{4x-1} $</p> </blockquote> <p>$\bf{My\; Solution::}$ Here the domain of the equation is $\displaystyle x\geq 1$. Squaring both sides, we get</p> <p>$\displaystyle (x+1)+(x-1)-2\sqrt{x^2-1}=(4x-1)$.</p> <p>$\displaystyle (1-2x)^2=4(x^2-1)\Rightarrow 1+4x^2-4x=4x^2-4\Rightarrow x=\frac{5}{4}.$</p> <p>But when we put $\displaystyle x = \frac{5}{4}\;,$ we get $\displaystyle \frac{3}{2}-\frac{1}{2}=2\Rightarrow 1=2$ (false).</p> <p>So we get no solution.</p> <p>My question is: Can we solve the above question by comparing expressions?</p> <p>Something like $\sqrt{x+1}&lt;\sqrt{x-1}+\sqrt{4x-1}\; \forall x\geq 1?$ </p> <p>If that way is possible, please help me solve it. Thanks.</p>
Community
-1
<p>For $x\ge1$,</p> <p>$$l(x):=\sqrt{x+1}-\sqrt{x-1}=\dfrac2{\sqrt{x+1}+\sqrt{x-1}}\le\sqrt2$$ and</p> <p>$$r(x):=\sqrt{4x-1}\ge\sqrt3.$$</p>
2,105,963
<p>Suppose ${{A_1}}$=[1,3] and ${{A_2}}$=[2,4]; then ${{A_1} \cup {A_2}}$=[1,4], and $\sup \left( {{A_1} \cup {A_2}} \right)$ is clearly 4. So $\sup \left( {{A_1} \cup {A_2}} \right) = \max \left( {\sup {A_1},\sup {A_2}} \right)$ is true.</p> <p>Confusion with the definition: <br/> <em>s</em> is the least upper bound for a set $A \subseteq R$ if two criteria are met: <br/> (1) <em>s</em> is an upper bound for <em>A</em><br/> (2) if <em>b</em> is any upper bound for <em>A</em>, then $s \le b$ <br/> In the proof, if I take ${s_1}$ to be ${\sup {A_1}}$ and ${s_2}$ to be ${\sup {A_2}}$, then applying the definition seems to say that the least of ${s_1}$ and ${s_2}$ is $\sup \left( {{A_1} \cup {A_2}} \right)$, which is certainly not true. What exactly am I missing here?</p> <p>Then, once this is proved, how can I extend it to $\sup \left( { \cup _{k = 1}^n{A_k}} \right)$? Maybe if I get clear on the base case this will not be required.<br/> Edit: ${{A_1}}$ and ${{A_2}}$ are nonempty sets which are bounded above.</p>
Andrew D. Hwang
86,418
<p>$\newcommand{\eps}{\varepsilon}$<strong>Suggestion</strong>: To prove that a specific real number&nbsp;$c$ is the supremum of a non-empty set&nbsp;$A$ of real numbers (that is bounded above), it's often helpful to use the second condition of the definition in contrapositive form:</p> <ol> <li><p>$c$ is an upper bound for&nbsp;$A$, i.e., for every&nbsp;$x$ in&nbsp;$A$, $x \leq c$;</p></li> <li><p>For every $\eps &gt; 0$, there exists an&nbsp;$x$ in&nbsp;$A$ such that $c - \eps &lt; x$.</p></li> </ol> <p>So, to show that $c := \max(\sup A_{1}, \sup A_{2}) = \sup(A_{1} \cup A_{2})$, it suffices to show:</p> <ol> <li><p>If $x \in A_{1} \cup A_{2}$, then $x \leq \max(\sup A_{1}, \sup A_{2})$;</p></li> <li><p>For every $\eps &gt; 0$, there exists an&nbsp;$x$ in&nbsp;$A_{1} \cup A_{2}$ such that $\max(\sup A_{1}, \sup A_{2}) - \eps &lt; x$.</p></li> </ol>
55,918
<blockquote> <p><strong>Zariski's Main Theorem</strong> (<a href="http://www.numdam.org/numdam-bin/fitem?id=PMIHES_1966__28__5_0" rel="noreferrer">EGA IV</a>, Thm 8.12.6): Suppose $Y$ is a quasi-compact and quasi-separated scheme, and $f:X\to Y$ is quasi-finite, separated, and finitely presented. Then $f$ factors as $X\xrightarrow{g} Z\xrightarrow{h} Y$, where $g$ is an open immersion and $h$ is finite.</p> </blockquote> <p>Is there a canonical choice for the factorization $f=h\circ g$, at least under some circumstances? </p> <p> For example, suppose $f$ factors as $X\to U\to Y$, where $X\to U$ is finite &eacute;tale and $U\to Y$ is a Stein open immersion (i.e. the pushforward of $\mathcal O_U$ is $\mathcal O_Y$). Then I'm pretty sure the Stein factorization $X\to \mathit{Spec}_Y(f_*\mathcal O_X)\to Y$ witnesses Zariksi's Main Theorem (i.e. is an open immersion followed by a finite map). </p> <p>In general, when does the Stein factorization witness ZMT? In the cases where it fails to witness ZMT (e.g. $X$ finite over an affine open in $Y$), is there some other canonical witness?</p>
Qing Liu
3,485
<p>I think an initial object exists if you work with integral excellent schemes (maybe integral is not really necessary, but then require that $X$ be schematically dense in $Z$). </p> <p>So suppose $X, Y$ are integral and excellent. Consider all possible factorizations $X\to Z_{\alpha} \to Y$ with $Z_{\alpha}$ integral. Then $K(Z_{\alpha})=K(X)$. For any pair $Z_{\alpha}, Z_{\beta}$, the closure $Z_{\gamma}$ of $X$ in $Z_{\alpha}\times_Y Z_{\beta}$ gives a factorization $X\to Z_{\gamma}\to Y$ with $Z_{\gamma}$ dominating $Z_{\alpha}$ and $Z_{\beta}$, finite over $Y$, and $X\to Z_{\gamma}$ is an open immersion (one checks that $X\to Z_{\gamma}$ is an immersion, hence open in some closed subscheme $F$, but $X$ is birational to $Z_{\gamma}$, so $F=Z_{\gamma}$). Thus we can consider the projective limit $Z$ of the $Z_{\alpha}$'s. </p> <p>By construction $Z$ is affine and integral over $Y$. As $Z_{\alpha}$ is dominated by the normalization $\widetilde{Y}$ of $Y$ in $K(X)$ and $\widetilde{Y}$ is finite over $Y$ by the excellence hypothesis, $Z$ is finite over $Y$. It remains to see that the canonical map $X\to Z$ is an open immersion. This property is local over $Y$. So we suppose $Y$ is affine. Cover $X$ by principal affine open subsets $D(h)$'s of some $Z_{\alpha_0}$. Then $D(h) \to D_Z(h)$ is a closed immersion because $D(h)\to D_{Z_{\alpha_0}}(h)$ is, and it is birational, so it is an isomorphism and we are done. </p> <p>It would be interesting to compute explicitly the projective limit in some concrete situations. For example, consider a surface $S$, finite over $\mathbb A^2_{\mathbb C}$, with non-normal locus $\Delta$. Let $X$ be an open subset of $S$ with $\Delta\cap X$ non-empty and not equal to $\Delta$. The inclusion $X\to S=Z_{\alpha_0}$ is a factorisation. But what is the $Z$ constructed as above? </p>
9,990
<p>Consider the following problem: </p> <ul> <li>Maria always buys ice-cream when she goes to the beach. She bought ice-cream today. So, she must have gone to the beach. </li> </ul> <p>Obviously this statement is wrong. Maria could have gone to other place and bought an ice-cream. You don't need any math tool to arrive at this conclusion, all you need is reasoning. </p> <p>However, several adults (18~50 years old) with difficulty in math, also have a really hard time to solve/understand such kind of problems. For them, learning math is the same as memorizing rules and formulas. Anything different than that (ex: reasoning) is extremely painful.</p> <p>So, is it possible to make such students correctly answer <a href="https://en.wikipedia.org/wiki/Graduate_Management_Admission_Test">GMAT</a> style questions?</p>
Daniel R. Collins
5,563
<p>You can say that this is &quot;just reasoning&quot;, but the truth is that this is a specific application of basic logic, in particular the implication (if/then) relation. I have a colleague with a PhD in logic who says, &quot;Implication is tricky!&quot; when I bring this up. And I do think that it's a major problem that schools don't teach basic logic as a high-school (or earlier) requirement; it really puts all their later coursework on a foundation of shifting sand without that.</p> <p>If I had complete dominion at the community college where I teach, then personally I would mandate a 1-credit seminar in basic logic (at least drills with and-or-not-if/then statements) for incoming students. At times I've tried to find an hour in my basic math classes to work on this, but unfortunately at the moment other priorities take precedence for that time.</p> <p>Some blog articles I've written on this subject:</p> <ul> <li><a href="http://www.madmath.com/2012/07/teach-logic.html" rel="noreferrer">http://www.madmath.com/2012/07/teach-logic.html</a></li> <li><a href="http://www.madmath.com/2013/11/branching-decisions-in-algebra.html" rel="noreferrer">http://www.madmath.com/2013/11/branching-decisions-in-algebra.html</a></li> <li><a href="http://www.madmath.com/2014/05/basic-logic-errors.html" rel="noreferrer">http://www.madmath.com/2014/05/basic-logic-errors.html</a></li> </ul>
887,656
<p>Is there a closed form (complex) solution $z(t)$ to the equation</p> <p>\begin{align} \frac{dz}{dt}=f(t)\bar{z}, \end{align}</p> <p>(the bar means complex conjugate) for any given complex valued function $f$ of a real variable $t$? The usual approach to deal with separable equations gives \begin{align} \int\frac{1}{\bar{z}}dz=\int{f(t)dt}. \end{align}</p> <p>However, the integral on the left is path dependent, so it is apparently not possible to obtain a solution (not even implicit) from this approach.</p>
MrSlunk
12,509
<p>Here is an alternative approach.<br> When you are given the equation $$z' = f(t) \overline{z},$$ you are also implicitly given $$\overline{z}' = \overline{f}(t) z$$ by conjugation. So let $Z =(z,\overline{z})$; then we have $$Z' = \left(\begin{matrix}0&amp;f(t)\\ \overline{f}(t)&amp;0 \end{matrix}\right)Z $$ This is a $2$d linear differential equation with time-dependent coefficients, and it is well established that no general closed form solution exists.<br> A series solution does exist however, in the form of a <a href="http://en.wikipedia.org/wiki/Magnus_series" rel="nofollow">Magnus Expansion</a>; i.e. $$Z(t) = \exp(\Omega(t))Z(0) $$ where $$\Omega(t) = \sum_{k=1} \Omega_k(t)$$ Each $\Omega_k$ is expressed in terms of integrals over $k-1$ nested commutators. You can read about this yourself, but I would just like to consider the case where $f(t)$ is real. The first term is given by $$\Omega_1(t) = \int_0^t\left(\begin{matrix}0&amp;f(\tau)\\ \overline{f}(\tau)&amp;0 \end{matrix}\right)d\tau$$ and the second by $$ \Omega_2(t) =\int_0^t\int_0^\tau \left(\begin{matrix}f(s)\overline{f}(\tau)-f(\tau)\overline{f}(s)&amp;0\\0&amp; f(\tau)\overline{f}(s)-f(s)\overline{f}(\tau) \end{matrix}\right) ds d\tau. $$ From this we see that if $f$ is real, $\Im[f(\tau)\overline{f}(s)]=0$ for all $\tau, s$ and hence $\Omega_2(t) =0$ (as do all $\Omega_k$ for $k&gt;2$). Once you put this all together you get the same thing as what JJacquelin posted.</p>
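<p>For real $f$ the coefficient matrices at different times commute, so exponentiating just $\Omega_1$ already gives the exact propagator; written out, $z(t)=\cosh(F)\,z(0)+\sinh(F)\,\overline{z(0)}$ with $F=\int_0^t f(\tau)\,d\tau$. Here is a small numerical check of that, as a Python sketch (the choice $f(t)=\cos t$, the initial value, and the tolerances are arbitrary test data):</p> <pre><code>import numpy as np
from scipy.integrate import solve_ivp

f = np.cos                          # real-valued f(t)
z0 = 1.0 + 2.0j
T = 2.0

def rhs(t, y):                      # y = [Re z, Im z];  z' = f(t) * conj(z)
    u, v = y
    return [f(t) * u, -f(t) * v]

sol = solve_ivp(rhs, (0.0, T), [z0.real, z0.imag], rtol=1e-10, atol=1e-12)
z_numeric = sol.y[0, -1] + 1j * sol.y[1, -1]

F = np.sin(T)                       # integral of cos(t) from 0 to T
z_magnus = np.cosh(F) * z0 + np.sinh(F) * np.conj(z0)   # exp(Omega_1) applied to (z0, conj(z0))

print(z_numeric, z_magnus)          # the two agree to solver tolerance
</code></pre>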
1,653,416
<p>We know that: <a href="https://www.youtube.com/watch?v=w-I6XTVZXww" rel="nofollow">https://www.youtube.com/watch?v=w-I6XTVZXww</a> $$S=1+2+3+4+\cdots = -\frac{1}{12}$$</p> <p>So multiplying each terms in the left hand side by $2$ gives: $$2S =2+4+6+8+\cdots = -\frac{1}{6}$$ This is the sum of the even numbers</p> <p>Furthermore, we can add it to itself but shifting the terms one place: $$ \begin{align} 1+2+3+4+\cdots &amp; \\ 1+2+3+\cdots &amp; \\ =1+3+5+7+\cdots &amp; =2S \end{align} $$ This is the sum of the odd numbers</p> <p>If we were to now sum the odd numbers and the even numbers like below: $$ 2+4+6+8+\cdots \\[6pt] 1+3+5+7+\cdots \\[6pt] \text{if we add the terms in a certain order we can get } 1+2+3+4+5+6+7+\cdots$$ This supposedly tells us that: $$4S = S\\[6pt] 4 \left(\frac{-1}{12}\right)=\frac{-1}{12} \\[6pt] \frac{-1}{3} = \frac{-1}{12} $$</p> <p>What is faulty with this proof.</p>
Count Iblis
155,436
<p>As pointed out in section 3, page 1191 of <a href="http://arxiv.org/abs/hep-ph/0510142">this article</a>, the rules for manipulating divergent series are more restrictive than those for convergent series. As pointed out in the article, to avoid problems you should work with the power series obtained by multiplying the nth term by $x^n$, instead.</p>
1,206,195
<p>I am trying to find the maximum of $x^{1/x}$. I don't know how to find the derivative of this. I have plugged in some numbers and found that $e^{1/e}$ seems to be the maximum, at around 1.44466786. I don't know if this is the maximum, and I would like an explanation of why it is/what the maximum is. Essentially, how do I solve ${{dy}\over{dx}}(x^{1/x})=0$?</p>
lab bhattacharjee
33,337
<p>$$y=x^{1/x}\implies \ln(y)=\frac{\ln x}x$$</p> <p>$$\frac{d(\ln y)}{dx}=\frac{d(\ln y)}{dy}\cdot\frac{dy}{dx}=\frac1y\cdot\frac{dy}{dx}$$</p> <p>and $$\frac{d\left(\frac{\ln x}x\right)}{dx}=\frac{1-\ln x}{x^2}$$</p> <p>So $$\frac{dy}{dx}=y\cdot\frac{1-\ln x}{x^2}=x^{1/x}\,\frac{1-\ln x}{x^2},$$ which vanishes exactly when $\ln x=1$, i.e. at $x=e$; the derivative is positive for $x&lt;e$ and negative for $x&gt;e$, so the maximum value is $e^{1/e}$.</p>
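<p>A quick numerical cross-check, as a Python sketch (the sampling grid is an arbitrary choice), that the critical point is at $x=e$ and the maximum value is $e^{1/e}\approx 1.44466786$:</p> <pre><code>import numpy as np

x = np.linspace(0.5, 10.0, 1_000_001)
y = x ** (1.0 / x)
i = y.argmax()
print(x[i], np.e)                  # argmax of the sampled curve vs. e
print(y[i], np.e ** (1 / np.e))    # maximum value vs. e^(1/e)
</code></pre>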
191,796
<blockquote> <p>I met with the following difficulty reading the paper <a href="http://www.cnki.com.cn/Article/CJFDTotal-ZZDZ198801008.htm" rel="nofollow">Li, Rong Xiu "The properties of a matrix order column" (1988)</a>:</p> <p>Define the matrix $A=(a_{jk})_{n\times n}$, where $$a_{jk}=\begin{cases} j+k\cdot i&amp;j&lt;k\\ k+j\cdot i&amp;j&gt;k\\ 2(j+k\cdot i)&amp; j=k \end{cases}$$ and $i^2=-1$.</p> <p>The author says it is easy to show that $rank(A)=n$. I have proved for $n\le 5$, but I couldn't prove for general $n$.</p> </blockquote> <p>Following is an attempt to solve this problem: let $$A=P+iQ$$ where $$P=\begin{bmatrix} 2&amp;1&amp;1&amp;\cdots&amp;1\\ 1&amp;4&amp;2&amp;\cdots&amp; 2\\ 1&amp;2&amp;6&amp;\cdots&amp; 3\\ \cdots&amp;\cdots&amp;\cdots&amp;\cdots&amp;\cdots\\ 1&amp;2&amp;3&amp;\cdots&amp; 2n \end{bmatrix},Q=\begin{bmatrix} 2&amp;2&amp;3&amp;\cdots&amp; n\\ 2&amp;4&amp;3&amp;\cdots &amp;n\\ 3&amp;3&amp;6&amp;\cdots&amp; n\\ \cdots&amp;\cdots&amp;\cdots&amp;\cdots&amp;\cdots\\ n&amp;n&amp;n&amp;\cdots&amp; 2n\end{bmatrix}$$</p> <p>and define $$J=\begin{bmatrix} 1&amp;0&amp;\cdots &amp;0\\ -1&amp;1&amp;\cdots&amp; 0\\ \cdots&amp;\cdots&amp;\cdots&amp;\cdots\\ 0&amp;\cdots&amp;-1&amp;1 \end{bmatrix}$$ then we have $$JPJ^T=J^TQJ=\begin{bmatrix} 2&amp;-2&amp;0&amp;0&amp;\cdots&amp;0\\ -2&amp;4&amp;-3&amp;\ddots&amp;0&amp;0\\ 0&amp;-3&amp;6&amp;-4\ddots&amp;0\\ \cdots&amp;\ddots&amp;\ddots&amp;\ddots&amp;\ddots&amp;\cdots\\ 0&amp;0&amp;\cdots&amp;-(n-2)&amp;2(n-1)&amp;-(n-1)\\ 0&amp;0&amp;0&amp;\cdots&amp;-(n-1)&amp;2n \end{bmatrix}$$ and $$A^HA=(P-iQ)(P+iQ)=P^2+Q^2+i(PQ-QP)=\binom{P}{Q}^T\cdot\begin{bmatrix} I&amp; iI\\ -iI &amp; I \end{bmatrix} \binom{P}{Q}$$</p>
Christian Remling
48,839
<p>OK, let me try again, maybe I'll get it right this time. I'll show that $P$ is positive definite. This will imply the claim because if $(P+iQ)(x+iy)=0$ with $x,y\in\mathbb R^n$, then $Px=Qy$, $Py=-Qx$, and by taking scalar products with $x$ and $y$, respectively, we see that $\langle x, Px \rangle = -\langle y, Py\rangle$, which implies that $x=y=0$. Here I use that $Q$ is symmetric.</p> <p>Let me now show that $P&gt;0$. Following math110's suggestion, we can simplify my original calculation as follows: Let $ B=B_n = P -\textrm{diag}(1,2,\ldots , n)$. For example, for $n=5$, this is the matrix $$ B_ 5= \begin{pmatrix} 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1\\ 1 &amp; 2 &amp; 2 &amp; 2 &amp; 2\\ 1 &amp; 2 &amp; 3 &amp; 3 &amp; 3\\ 1 &amp; 2 &amp; 3 &amp; 4 &amp; 4\\ 1 &amp; 2 &amp; 3 &amp; 4 &amp; 5 \end{pmatrix} . $$ I can now (in general) subtract the $(n-1)$st row from the last row, then the $(n-2)$nd row from the $(n-1)$st row etc. This confirms that $\det B_n=1$. Moreover, the upper left $k\times k$ submatrices of $B_n$ are of the same type; they equal $B_k$. This shows that $B&gt;0$, by Sylvester's criterion, and thus $P&gt;0$ as well.</p>
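<p>A quick numerical check of both claims, as a Python sketch (the tested sizes are arbitrary), building $P$ and $Q$ from the definition of $a_{jk}$ in the question: $P$ has only positive eigenvalues, and $A=P+iQ$ has full rank.</p> <pre><code>import numpy as np

def build_PQ(n):
    # real/imaginary parts of A: min(j,k) resp. max(j,k) off the diagonal, 2j on it
    j = np.arange(1, n + 1)
    J, K = np.meshgrid(j, j, indexing='ij')
    P = np.minimum(J, K).astype(float)
    Q = np.maximum(J, K).astype(float)
    np.fill_diagonal(P, 2.0 * j)
    np.fill_diagonal(Q, 2.0 * j)
    return P, Q

for n in (2, 5, 20, 100):
    P, Q = build_PQ(n)
    assert np.linalg.eigvalsh(P).min() &gt; 0          # P is positive definite
    assert np.linalg.matrix_rank(P + 1j * Q) == n   # A has full rank
print('P positive definite and rank(A) = n for all tested sizes')
</code></pre>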
2,820,696
<p>We were just discussing with colleagues the number of combinations you could get with two "normal", $6$-sided dice. Almost all of my colleagues were saying $36$ ($6^2$), which I agree with as such, but you will get almost half of the possible combinations counted twice. If I count, the number of different combinations with $2$ dice is $21.$ I'm not able though to get to the formula that would let me calculate this for $3,4,5$ or more dice.</p> <p>I remember from my old mathematics courses that there are several different formulas you can "pick" depending on repetition, order importance, etc... What would be the appropriate formula taking into account $n$ as the number of dice and $s$ the number of sides ? </p>
Arthur
15,500
<p>If order doesn't count, then consider the following setup (I will do it for three 6-sided dice, but it can easily be generalized):</p> <p>Put up five dividers ("bars"): $$ |\quad|\quad|\quad|\quad| $$ These five bars give us six regions (four gaps between the bars, as well as to the left and to the right). Each region corresponds to a number from 1 to 6 (say from left to right). Now distribute 3 $*$'s ("stars") in these six regions in any way you want. Each star corresponds to the result of a die. For instance, a throw of $3, 3, 4$ looks like $$ |\quad|{}*{}*{}|{}*{}|\quad| $$ This way each die throw corresponds to one ordering of 3 stars and 5 bars, and each ordering of stars and bars corresponds to a die throw.</p> <p>The ordering of $3$ stars and $5$ bars can be thought of this way: You have $8$ available spots for either of the two symbols, and you want to choose $3$ of those spots to contain a star (the remaining $5$ will get bars). That can be done in $\binom{8}{3} = 56$ ways.</p>
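<p>As a hedged cross-check, the stars-and-bars count $\binom{n+s-1}{n}$ can be compared against brute-force enumeration of unordered outcomes; a short Python sketch (the tested dice/side counts are arbitrary):</p> <pre><code>from itertools import combinations_with_replacement
from math import comb

def multisets(n_dice, sides):
    # number of outcomes of n_dice s-sided dice when order does not matter
    return comb(n_dice + sides - 1, n_dice)

# brute-force cross-check against the stars-and-bars formula
for n, s in [(2, 6), (3, 6), (4, 6), (3, 4)]:
    brute = sum(1 for _ in combinations_with_replacement(range(1, s + 1), n))
    assert brute == multisets(n, s)

print(multisets(2, 6), multisets(3, 6))   # 21 and 56
</code></pre>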
2,762,391
<p>Let $x,y&gt;0$ s.t. $x^3+y^3\geq 2$.</p> <p>Show that $x^2+y^2\geq x+y $.</p> <p>I analysed the case when $x,y\geq 1$, but I don't know how to solve the case when $x\geq 1\geq y $.</p>
Cesareo
397,348
<p>By symmetry, the extremal case is $x = y =: z$; then</p> <p>$$ 2z^3 \ge 2 \Rightarrow z^3 \ge 1 \Rightarrow z^2 \ge z. $$</p> <p>Comparison of the region $x^3+y^3 \ge 2$ (light blue) with the circle $(x-\frac{1}{2})^2+(y-\frac{1}{2})^2 = 1/2$ (red):</p> <p><a href="https://i.stack.imgur.com/uGGTb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uGGTb.jpg" alt="enter image description here"></a></p>
2,762,391
<p>Let $x,y&gt;0$ s.t. $x^3+y^3\geq 2$.</p> <p>Show that $x^2+y^2\geq x+y $.</p> <p>I analysed the case when $x,y\geq 1$, but I don't know how to solve the case when $x\geq 1\geq y $.</p>
A.Γ.
253,273
<p>Consider $f(x,y)=x^2+y^2-x-y$. The inequality is equivalent to the fact that the following minimum is non-negative $$ \min f(x,y)\quad\text{subject to }x^3+y^3\ge 2,\ x\ge 0,\ y\ge 0. $$</p> <ol> <li>As $f(x,y)=(x-\frac12)^2+(y-\frac12)^2-\frac12$ has compact sublevel sets, the minimum exists. Apply the necessary condition for optimality.</li> <li>The first constraint must be active at the minimum (otherwise we can make the variable that is greater than one a bit smaller, which makes $f(x,y)$ smaller, contradiction). It is also easy to rule out the case when the positivity constraints are active ($x=0$ or $y=0$), the function $f(x,y)\ge 0$ there. Thus the only interesting case is where the gradients of $f(x,y)$ and $x^3+y^3$ are parallel. It makes $$ \begin{vmatrix}2x-1 &amp; x^2\\2y-1 &amp; y^2\end{vmatrix}=(x-y)(x+y-2xy)=0. $$</li> <li>The case when $x+y=2xy$ is not interesting again since then $$ f(x,y)=x^2+y^2-2xy=(x-y)^2\ge 0. $$</li> <li>The case $x=y$ and $x^3+y^3=2$ gives $x=y=1$ where $f(1,1)=0$. Hence, it is the minimum.</li> </ol>
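<p>A numerical confirmation of where that constrained minimum sits, as a Python sketch using SciPy's SLSQP (the starting point is an arbitrary feasible guess, and this is a local solver, so it only corroborates the analysis above): the minimizer should come out at $(1,1)$ with minimum value $0$, matching step 4.</p> <pre><code>import numpy as np
from scipy.optimize import minimize

# minimize f(x, y) = x^2 + y^2 - x - y subject to x^3 + y^3 &gt;= 2, x &gt;= 0, y &gt;= 0
obj = lambda v: v[0]**2 + v[1]**2 - v[0] - v[1]
cons = [{'type': 'ineq', 'fun': lambda v: v[0]**3 + v[1]**3 - 2}]
res = minimize(obj, x0=[2.0, 0.5], bounds=[(0, None), (0, None)],
               constraints=cons, method='SLSQP')
print(res.x, res.fun)   # expected: close to (1, 1) with value 0
</code></pre>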
96,110
<p><span class="math-container">$A = \begin{pmatrix} 0 &amp; 1 &amp;1 \\ 1 &amp; 0 &amp;1 \\ 1&amp; 1 &amp;0 \end{pmatrix} $</span></p> <p>The matrix <span class="math-container">$(A+I)$</span> has rank <span class="math-container">$1$</span> , so <span class="math-container">$-1$</span> is an eigenvalue with an algebraic multiplicity of at least <span class="math-container">$2$</span> .</p> <p>I was reviewing my notes and I don't understand how the first statement implies the second one. </p> <p>Can anyone please explain how rank 1 of <span class="math-container">$(A + I)$</span> implies <span class="math-container">$-1$</span> is an eigenvalue with an algebraic multiplicity of <span class="math-container">$2$</span>?</p> <p>Thank you in advance.</p>
Pierre-Yves Gaillard
660
<p>It is clear that $A+I$ is diagonalizable with $(0,0,3)$ on the diagonal. </p> <p>Thus, $A$ is diagonalizable with $(-1,-1,2)$ on the diagonal. </p> <p>Justification of the first claim:</p> <p>A vector is in the kernel of $A+I$ if and only if the sum of its coordinates is zero. </p> <p>The vector $(1,1,1)$ is an eigenvector for $A+I$ with eigenvalue $3$.</p> <p><strong>EDIT.</strong> We can apply the following observation to $B:=A+I$:</p> <p>If $B$ is a rank one $n$ by $n$ matrix with entries in a field, then </p> <p>$\bullet$ either the trace of $B$ is zero, and $B$ is similar to the direct sum of $(\begin{smallmatrix}0&amp;1\\ 0&amp;0\end{smallmatrix})$ and a zero matrix, </p> <p>$\bullet$ or the trace $t$ of $B$ is nonzero, and $B$ is similar to the direct sum of the scalar $t$ and a zero matrix. </p> <p>Indeed, either the kernel (which is a hyperplane) contains the image (which is a line), or it doesn't.</p>
96,110
<p><span class="math-container">$A = \begin{pmatrix} 0 &amp; 1 &amp;1 \\ 1 &amp; 0 &amp;1 \\ 1&amp; 1 &amp;0 \end{pmatrix} $</span></p> <p>The matrix <span class="math-container">$(A+I)$</span> has rank <span class="math-container">$1$</span> , so <span class="math-container">$-1$</span> is an eigenvalue with an algebraic multiplicity of at least <span class="math-container">$2$</span> .</p> <p>I was reviewing my notes and I don't understand how the first statement implies the second one. </p> <p>Can anyone please explain how rank 1 of <span class="math-container">$(A + I)$</span> implies <span class="math-container">$-1$</span> is an eigenvalue with an algebraic multiplicity of <span class="math-container">$2$</span>?</p> <p>Thank you in advance.</p>
Tapu
17,142
<p>Here is my attempt (which may be less simple than the previous answers):</p> <p>We note that $A+I=ee^t$, i.e. $$A=ee^t-I,$$ where $e$ is the all-ones vector. Post-multiplication by $e$ readily shows that $e$ is an eigenvector of $A$ with eigenvalue $2$. Since we can find two other vectors orthogonal to $e$, post-multiplication by those shows that $-1,-1$ are the remaining eigenvalues of $A$. Note that this happens for any such $n\times n $ $A$.</p>
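<p>A one-line numerical check of the spectrum, and of the rank statement in the question, as a Python sketch:</p> <pre><code>import numpy as np

A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])

print(np.linalg.eigvalsh(A))                  # [-1. -1.  2.]
print(np.linalg.matrix_rank(A + np.eye(3)))   # 1, so -1 has algebraic multiplicity 2
</code></pre>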
4,054,428
<p>The question is</p> <p>Let <span class="math-container">$X$</span> be a discrete random variable with probability mass function</p> <p><a href="https://i.stack.imgur.com/DhJVI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DhJVI.png" alt="enter image description here" /></a></p> <p>(a) Find <span class="math-container">$E(X)$</span> <br/>I did <span class="math-container">$E(X) = -2*.3+ -1*6 + 12*.1 = -5.4$</span><br/> (b) Find <span class="math-container">$Var(X)$</span> <br/> I did <span class="math-container">$E(x^{2})-(E(x))^{2}= (-2^{2} * .3 + -1*.6 + 12*.1) - (-5.4)^{2} = -29.76$</span> <br/> (c) Find expected value of X(bar) (the sample mean) <br/> Would this just be the same thing as <span class="math-container">$E(X)$</span>? (d) if n = 100, what is the variance of X(bar)? <br/> would I just multiply the variance by 100? <br/></p> <p>Last question is if I wanted to put this all in R, how would I input my CDF into R and find the expected value and variance, etc. I understand it on paper (most of the time) but I am unsure how I would put it in R, any help is appreciated, thank you in advance.</p>
Graham Kemp
135,106
<p>(a) Your calculation was off. It seems that you dropped the decimal.<span class="math-container">$$\mathsf E(X)=−2\cdot 0.3+−1\cdot \underbrace{0.6}+12\cdot0.1=0.0$$</span></p> <p>(b) A negative variance is an error warning.   You need to square <em>each</em> supported value when you multiply by the probability mass.<span class="math-container">$$\mathsf{Var}(X)=((−2)^2\cdot 0.3+(−1)^2\cdot 0.6+(12)^2\cdot0.1) -(0.0)^2= 16.2$$</span></p> <p>(c) The <em>expected sample mean</em> is indeed the expected value of the distribution.   We just apply the Linearity of expectation, and <em>identical distribution</em> of the samples: <span class="math-container">$$\begin{align}\mathsf E(\bar X)&amp;=\mathsf E\left(\dfrac{\sum_{i=1}^nX_i}{n}\right)\\[1ex]&amp;=\dfrac{\sum_{i=1}^n\mathsf E(X_i)}{n}\\[1ex]&amp;=\mathsf E(X_1)\\[1ex]&amp;=\mathsf E(X)\end{align}$$</span></p> <p>(d) For the variance of the sample mean, you must apply the Bilinearity of Covariance, and the <em>independence</em> and identical distribution of the samples.</p> <p><span class="math-container">$$\begin{align}\mathsf {Var}(\overline X)&amp;=\mathsf{Var}\left(\frac{\sum_{i=1}^nX_i}{n}\right)\\[1ex]&amp;=\frac 1{n^2}\mathsf {Cov}(\sum_{i=1}^n X_i,\sum_{j=1}^n X_j)\\[1ex]&amp;~~\vdots\end{align}$$</span></p>
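<p>The question also asks how to do this in software. Here is a sketch of the same arithmetic in Python rather than R (the pmf values $P(X=-2)=0.3$, $P(X=-1)=0.6$, $P(X=12)=0.1$ are read off the corrected calculations above, and the simulation size is an arbitrary choice); the same few lines translate directly to R's <code>sum</code>, <code>sample</code> and <code>var</code>.</p> <pre><code>import numpy as np

values = np.array([-2.0, -1.0, 12.0])
probs  = np.array([0.3, 0.6, 0.1])

EX   = np.sum(values * probs)                   # E(X)   = 0.0
VarX = np.sum(values**2 * probs) - EX**2        # Var(X) = 16.2
n = 100
print(EX, VarX, VarX / n)                       # Var(Xbar) = Var(X)/n = 0.162

# optional: empirical check of Var(Xbar) by simulating many samples of size n
rng = np.random.default_rng(0)
sample_means = rng.choice(values, size=(20_000, n), p=probs).mean(axis=1)
print(sample_means.mean(), sample_means.var())  # close to 0.0 and 0.162
</code></pre>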
62,526
<p>I want to show the following:</p> <p>$X$ $n$-connected $\iff $ any continuous map $f:K \rightarrow X$ where $K$ is a cell complex of dimension $\leq n$ is homotopic to a constant map</p> <p>For this I think I can use the following: $X$ $n$-connected $\iff $ every continuous map $f: S^n \rightarrow X$ is homotopic to a constant map.</p> <p>Proof:</p> <p>"$\Leftarrow$"</p> <p>If any continuous map $f:K \rightarrow X$ where $K$ is a cell complex of dimension $\leq n$ is homotopic to a constant map then any $f: S^n \rightarrow X$ is homotopic to a constant map. So $X$ is $n$-connected.</p> <p>"$\Rightarrow$"</p> <p>I'm not sure how to proceed in this direction. I know $X$ is $n$-connected and so $\pi_i (X) = 0$ for all $i \leq n$. I also know any $f: S^i \rightarrow X$ is null-homotopic. </p> <p>How to proceed from here? Many thanks for your help!</p>
SFeesh
346,530
<p>We show that <span class="math-container">$K^m \hookrightarrow K \rightarrow X$</span> is nullhomotopic for every <span class="math-container">$m \leqslant n$</span> by induction on <span class="math-container">$m$</span>. This is clear for <span class="math-container">$m = 0$</span> since <span class="math-container">$X$</span> is path-connected. Suppose the map is nullhomotopic for some <span class="math-container">$m$</span> with <span class="math-container">$0 \leqslant m &lt; n$</span>. Since <span class="math-container">$K^m \hookrightarrow K \rightarrow X$</span> is nullhomotopic and since CW pairs have the homotopy extension property, there is a homotopy of <span class="math-container">$K^{m+1} \hookrightarrow K \rightarrow X$</span> to a map <span class="math-container">$f \colon K^{m+1} \rightarrow X$</span> that sends <span class="math-container">$K^m$</span> to a point <span class="math-container">$x_0 \in X$</span>. Hence, <span class="math-container">$f$</span> factors through a map <span class="math-container">$K^{m+1}/K^m \cong \vee \mathbb{S}^{m+1} \rightarrow X$</span>, which is nullhomotopic since <span class="math-container">$X$</span> is <span class="math-container">$n$</span>-connected.</p>
2,297,421
<p>Probably a really simple question, but I am trying to fit an air bed in a tent.</p> <p>The tent is circular with a diameter of $3$ m and a central vertical pole in the middle.</p> <p>The air bed measures $1.41$ m $\times$ $1.9$ m.</p> <p>Will the air bed fit fully inside the tent without being obstructed by the central pole?</p>
hmakholm left over Monica
14,366
<p>Here are some helpful facts:</p> <ol> <li><p>It is sufficient to prove the property when $n$ is an odd prime.</p></li> <li><p>A quadratic polynomial over a field is reducible exactly if it has a root.</p></li> <li><p>The polynomials for different $k$ (modulo $n$) cannot have roots in common.</p></li> <li><p>The polynomial for $k=0$ has two <em>different</em> roots.</p></li> </ol>
285,841
<p>I would like to solve the following problem:</p> <p>$$\begin{array}{ll} \text{minimize} &amp; \mathbf{x}^T \mathbf{A} \mathbf{x}\\ \text{subject to} &amp; \mathbf{x}^T\mathbf{B}\mathbf{x} = 0\\ &amp; \mathbf{x}^T \mathbf{x} = 1\end{array}$$</p> <p>where $\bf x$ is a vector, $\bf A, \bf B$ are square matrices, and $\bf A$ is symmetric. </p> <hr> <p>Here is my thinking:</p> <p>Use the Lagrange multiplier method, \begin{equation} \mathcal L (\bf x, \lambda, \mu) = \mathbf{x}^T \mathbf{A} \mathbf{x} - \lambda \mathbf{x}^T\mathbf{x} - \mu \mathbf{x}^T \mathbf{B} \mathbf{x}. \end{equation} Take the derivative with respect to $\bf x$, we get: \begin{equation} \bf{A x = \lambda x + \mu Bx} \end{equation} This is not exactly an eigenvalue problem or a generalized one. What's next?</p> <p>I can apply the constraints and get $\lambda = \bf x^TAx$, $\mu = \bf x^TB^TAx/(x^TB^TBx)$. But I am looking for a method that can turn the problem to a linear problem, e.g. generalized eigenvalue problem, so that I can apply the standard numerical linear algorithms. In principle, if I can solve $\det (A-\lambda I - \mu B) = 0$, I can eliminate, say, $\mu$. But this is not feasible, numerically. A perturbative solution with $|\mu|\ll 1$ is acceptable. </p> <p><em>Question: Are there any methods, ideally using standard numerical linear algorithm, to solve this problem?</em></p> <hr> <p>These problems are similar but not the same:</p> <p><a href="https://mathoverflow.net/questions/184538/linearly-constrained-eigenvalue-problem">Linearly constrained eigenvalue problem</a> </p> <p>Thank you in advance. </p> <p><strong>Edit</strong>: In viewing of the comments, I removed the "full rank" condition and does not requires $\bf A$ to be "positively defined". Hopefully, the problem may have a solution? </p> <p>The background of the problem is as follows: $\bf A$ is a Hamiltonian. $\bf x$ is its eigenvector with lowest energy. $\bf x^T Bx = 0$ represents a constraint imposed by a symmetry. In practice, $\bf A$ is truncated, and $\bf x^T B x \ne 0$. </p> <p>Now, I am trying to reformulate the problem to guarantee the symmetry constraint $\bf x^T B x = 0$. As a result, $\bf x$ may not be an eigenvector of $\bf A$, which is the price to pay. My hope is that as the symmetry violation is small enough, the problem may still have an efficient solution. Hope this helps. </p>
Mark L. Stone
75,420
<p>Because no one has offered a solution meeting your ideal of using a standard numerical linear algorithm, I will offer an approach using the global numerical nonlinear optimizer BARON.</p> <p>Here is a solution using BARON as the solver under YALMIP under MATLAB. I will use the B provided by @Federico Poloni in his comment above. I'm not sure what symmetric and positively defined is supposed to mean, so I chose a random A which is symmetric positive definite with all elements positive, which ought to comply with whatever it means.</p> <pre><code>n = 4; B = [zeros(n/2) eye(n/2);eye(n/2) zeros(n/2)]; A = rand(n); A = A*A'; % random instantiation of A x = sdpvar(n,1); % declare x an an optimization vector Constraints = [x'*B*x == 0,x'*x == 1] % the non-convex constraints Objective = x'*A*x % objective function to be minimnized % minimize the Objective, subject to the Constraints, using BARON optimize(Constraints,Objective,sdpsettings('solver','baron')) </code></pre> <p>For </p> <pre><code>A = 1.716800970124081 0.998289669825227 1.266317282130762 0.970191833948101 0.998289669825227 1.486118602130391 1.165572239200317 0.702280553602394 1.266317282130762 1.165572239200317 1.679161019401491 0.884294705407438 0.970191833948101 0.702280553602394 0.884294705407438 0.729460526019744 </code></pre> <p>The result is </p> <pre><code> optimal x = [-0.397502000000000 -0.061859500000000 -0.140779000000000 0.904625000000000]' optimal objective value = 0.116730782147915 </code></pre> <p>The constraints are satisfied to within a tolerance of less than 1e-6, but a tighter tolerance could be used.</p> <p>True, this will not scale in a friendly way as n increases.</p>
52,480
<p>The question comes from a statement in Concrete Mathematics by Graham, Knuth, and Patashnik on page 465.</p> <p>$$\sum_{k \geq n} \frac{(\log k)^2}{k^2} = O \left(\frac{(\log n)^2}{n} \right).$$</p> <p>How is this calculated?</p>
Qiaochu Yuan
232
<p>Here's a pretty straightforward solution. The idea is to split the sum up into a main part and an error term, and generalizes to many sums where the integral test won't work because the corresponding integral is difficult. </p> <p>First recall that we have $\sum_{k \ge n} \frac{1}{k^s} = O \left( \frac{1}{n^{s-1}} \right)$ (for example by the integral test, although for integer $s$ there is a more elementary way to see this). Split the sum as</p> <p>$$\sum_{k=n}^{n^2} \frac{(\log k)^2}{k^2} + \sum_{k \ge n^2+1} \frac{(\log k)^2}{k^2}.$$</p> <p>Since $(\log k)^2$ is eventually bounded by $k^{\epsilon}$ for all $\epsilon &gt; 0$, the second term is $O \left( \frac{1}{n^{4-2\epsilon}} \right)$ for all $\epsilon &gt; 0$. On the other hand, the first term is at most </p> <p>$$\sum_{k=n}^{n^2} \frac{(\log n^2)^2}{k^2} \le 4 (\log n)^2 \sum_{k=n}^{\infty} \frac{1}{k^2} = O \left( \frac{(\log n)^2}{n} \right).$$</p> <p>(Note that unlike the argument using the integral test, this argument doesn't optimize the constant as it stands. But one can actually replace $n^2$ with $n^{1+\epsilon}$ for any $\epsilon &gt; 0$ and the argument carries through, and taking the limit as $\epsilon \to 0$ gives the correct constant.)</p> <hr> <p>For a fun exercise in splitting sums, try getting an optimal bound for the sum described in <a href="http://qchu.wordpress.com/2010/02/08/optimizing-parameters/" rel="nofollow">this blog post</a>. </p>
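<p>A numerical look at the bound, as a Python sketch (the truncation point $N=10^7$ and the tested $n$ are arbitrary): the ratio of the tail sum to $(\log n)^2/n$ stays bounded, and in fact drifts toward $1$, which matches the constant coming from the integral $\int_n^\infty (\log x)^2/x^2\,dx = \big((\log n)^2 + 2\log n + 2\big)/n$.</p> <pre><code>import numpy as np

def tail_sum(n, N=10**7):
    k = np.arange(n, N, dtype=float)
    return np.sum(np.log(k)**2 / k**2)

for n in (10, 100, 1_000, 10_000):
    print(n, tail_sum(n) / (np.log(n)**2 / n))   # bounded ratio, slowly approaching 1
</code></pre>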
4,251,161
<p><strong>Objective</strong><br /> I need to find roots of <span class="math-container">$$f(x)=c$$</span> in the interval <span class="math-container">$[a,b]$</span>, where</p> <ul> <li><span class="math-container">$f(a)=0$</span> and <span class="math-container">$c&lt;f(b)&lt;1$</span></li> <li><span class="math-container">$f(x)$</span> is unknown outside of the interval</li> <li><span class="math-container">$f(x)$</span> is monotonic and has a decreasing derivative (i.e., <span class="math-container">$f''(x)&lt;0$</span>)</li> </ul> <p>(Overall it resembles a flipped hockey stick.)</p> <p><strong>What I tried</strong><br /> Right now I am using the bisection method, which is somewhat slow for my purposes, as evaluating the function is expensive.</p> <p>I tried other bracketing methods:</p> <ul> <li><a href="https://en.wikipedia.org/wiki/Regula_falsi" rel="nofollow noreferrer"><em>false position (regula falsi)</em></a> is slower due to the stagnant bound. <a href="https://en.wikipedia.org/wiki/Regula_falsi#The_Illinois_algorithm" rel="nofollow noreferrer"><em>Illinois</em></a> improves performance a bit, but is still slower than bisection.</li> <li><a href="https://en.wikipedia.org/wiki/Ridders%27_method" rel="nofollow noreferrer"><em>Ridder's method</em></a> converges in fewer steps, but is slower overall, as it requires more function evaluations (perhaps also due to the stagnant bound).</li> </ul> <p><strong>Question</strong><br /> Could I use my knowledge of the function properties outlined at the beginning to achieve faster convergence (fewer function evaluations)?</p> <hr /> <p><strong>Possibilities</strong>. As an idea, one could try using the transformation <span class="math-container">$$F(x) = \log (1-f(x)), C=\log (1-c),$$</span> in the hope that <em>false position</em> converges faster for the equation <span class="math-container">$$F(x)=C.$$</span></p> <p><strong>Supporting information:</strong> more details about the curve can be found <a href="https://math.stackexchange.com/q/4252758/765359">here</a>. Mine is not exactly the same, but very similar. Calculating it directly is expensive, and one resorts to resampling and averaging.</p>
Claude Leibovici
82,404
<p>I shall suppose that the computation of <span class="math-container">$f'(x)$</span> is more expensive than the computation of <span class="math-container">$f(x)$</span> itself.</p> <p>For this kind of problem, I extensively used the so-called RTSAFE subroutine from <em>&quot;Numerical Recipes&quot;</em> (have a look at the documentation <a href="http://s3.amazonaws.com/nrbook.com/book_C210.html" rel="nofollow noreferrer">here</a>, chapter <span class="math-container">$9$</span>). Just adapt the FUND subroutine to return the derivative by finite differences.</p> <p>This corresponds to a combination of bisection and Newton steps.</p>
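<p>For illustration only (this is not the Numerical Recipes routine, just a minimal Python sketch of the same idea: a Newton step that falls back to bisection whenever it would leave the current bracket, with the derivative estimated by finite differences so that only evaluations of <span class="math-container">$f$</span> are needed; all names are made up):</p> <pre><code>import math

def safe_newton(f, a, b, tol=1e-10, max_iter=100):
    """Hybrid Newton/bisection root finder on a bracket [a, b] with f(a)*f(b) &lt; 0."""
    fa, fb = f(a), f(b)
    if fa * fb &gt; 0:
        raise ValueError("root not bracketed")
    x = 0.5 * (a + b)
    for _ in range(max_iter):
        fx = f(x)
        if fa * fx &lt;= 0:               # keep [a, b] a bracket of the root
            b, fb = x, fx
        else:
            a, fa = x, fx
        h = 1e-7 * (1 + abs(x))
        dfx = (f(x + h) - fx) / h       # finite-difference derivative
        x_new = x - fx / dfx if dfx != 0 else 0.5 * (a + b)
        if not (a &lt; x_new &lt; b):         # Newton step left the bracket: bisect instead
            x_new = 0.5 * (a + b)
        if abs(x_new - x) &lt; tol:
            return x_new
        x = x_new
    return x

# example: a concave, increasing, hockey-stick-like function
root = safe_newton(lambda t: math.tanh(3 * t) - 0.5, 0.0, 1.0)
print(root, math.atanh(0.5) / 3)        # the two values should agree
</code></pre>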
1,276,264
<p>So, I was wondering if it is possible to solve for $n$ in $2^n=8$ (or any other question where $n$ is a power) using $9^{th}$ grade math. Please excuse my naïveté if this is extremely stupid/simple. </p> <p>Thanks so much in advance! –– come to think of it: Is it possible at all?</p>
Chris
235,548
<p>Note that $8=2^3$. Compare it to $2^n$ and conclude that $n=3$.</p>
1,276,264
<p>So, I was wondering if it is possible to solve for $n$ in $2^n=8$ (or any other question where $n$ is a power) using $9^{th}$ grade math. Please excuse my naïveté if this is extremely stupid/simple. </p> <p>Thanks so much in advance! –– come to think of it: Is it possible at all?</p>
passenger
23,187
<p>I don't know what is included in 9th-grade math, but here is an attempt to explain that $ 2^n, n \in \mathbb N $ tends to infinity as $n$ does.</p> <p>Obviously $n=3$ is a solution, since $2^3=8$. Now, if you try larger values of $n$, then $ 2^n$ becomes larger and larger, since each time you multiply by $2$ once more. So, it is impossible to reach $8$ again.</p> <p><strong>Edit</strong> (after Mathmo123's comment): Probably I misread the question and explained why the solution is unique instead. I apologize. If the question is "How can we solve this equation?", i.e. how to see that $n=3$ is a solution, then we can work out a few values of $n$, since $8$ is small compared to what we get from $2^n$ for large $n$. So, you try:</p> <p>$n=1$: then, $2^1=2$</p> <p>$n=2$: then, $ 2^2=4$</p> <p>$n=3$: then, $\boxed{2^3=8}$</p> <p>$n=4$: then, $2^4 =16&gt;8$</p> <p>$n=5$: then, $2^5= 32 &gt; 8$</p> <p>and so on, larger and larger.</p> <p>For general exponential equations, we use logarithms, as pointed out in Mathmo123's answer.</p>
3,471,684
<p>Is the following statement always correct: if <span class="math-container">$a,b \in \Bbb N$</span> are natural numbers for which LCM<span class="math-container">$(a,b)=16\cdot(a,b)$</span>, then <span class="math-container">$a|b$</span> or <span class="math-container">$b|a$</span>?</p> <p>I used the formula LCM<span class="math-container">$(a,b)=\frac{a\cdot b}{(a,b)}$</span>:</p> <p><span class="math-container">$\frac{a\cdot b}{(a,b)}=16\cdot(a,b) \implies a\cdot b= (4\cdot(a,b))^2$</span></p> <p>Is it somehow useful? What should I do next?</p>
Bill Dubuque
242
<p><strong>Hint</strong> <span class="math-container">$ $</span> <em>Conceptually</em> it is obvious: a prime <span class="math-container">$\,p\,$</span> occurs in <span class="math-container">$\,c = {\rm lcm}(a,b)/\gcd(a,b)\!\iff\! p\, $</span> occurs to <em>different</em> powers in <span class="math-container">$\,a\,$</span> and <span class="math-container">$\,b.\,$</span> Since only the prime <span class="math-container">$\,p=2\,$</span> occurs in OP's <span class="math-container">$\,c = 2^4,\,$</span> we deduce that <span class="math-container">$\,a,b\,$</span> have prime factorizations that differ only in the power of the prime <span class="math-container">$\,2,\,$</span> so the one with the least power of <span class="math-container">$\,2\,$</span> divides the other (we used existence and uniqueness of prime factorizations throughout). Hence it is clear that the result still holds true if we generalize <span class="math-container">$\,16=2^4\to \color{#c00}{p^k}\,$</span> any prime power.</p> <hr> <p><strong>Or</strong> cancelling their gcd reduces to case <span class="math-container">$\,a,b\,$</span> coprime so <span class="math-container">$\,{\rm lcm}(a,b)\!=\!ab = \color{#c00}{p^k}\smash{\overset{\rm wlog}\Rightarrow}^{\phantom{|^l}}\! a=1\,$</span> so <span class="math-container">$\,a\mid b$</span> (note the divisibilities are not affected by gcd cancellation since <span class="math-container">$\,ad\mid bd\iff a\mid b).\, $</span> </p> <p><strong>Homogeneous reduction</strong> For statements similarly <em>homogeneous</em> in <span class="math-container">$\,a,b\,$</span> we can cancel (a power of) the gcd to reduce to the case <span class="math-container">$\,a,b\,$</span> coprime, e.g. see <a href="https://math.stackexchange.com/a/1831214/242">here</a> and <a href="https://math.stackexchange.com/a/2950518/242">here</a> and <a href="https://math.stackexchange.com/a/346060/242">here</a> for further examples. In more advanced contexts one doesn't <em>explicitly</em> change variables and cancel; rather one simply writes: "being homogeneous in <span class="math-container">$\,a,b\,$</span> wlog we may reduce to the case <span class="math-container">$\,a,b\,$</span> coprime <span class="math-container">$\ldots$</span>".</p>
220,996
<p>I was wondering: for every NFA, does there exist an equivalent DFA? I think the answer is yes. How would one <em>prove</em> it? Since I'm just starting to learn about automata, I'm now confused about this, and especially about the proof of such a statement.</p>
HabiSoft
45,748
<p>Indeed, every NFA can be converted to an equivalent DFA. In fact, DFAs, NFAs and regular expressions are all equivalent. One approach would be to observe the NFA and, if it is simple enough, determine the regular expression that it recognizes, then convert the regular expression to a DFA.</p> <p>Yet, there are algorithms out there that can take an NFA and produce its equivalent DFA. For example, the powerset construction; check out this link:</p> <p><a href="http://en.wikipedia.org/wiki/Powerset_construction" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Powerset_construction</a></p> <p>Furthermore, every DFA has a unique equivalent minimum-state DFA, which recognizes the same language using a minimal number of states. This DFA state minimization also has an algorithm.</p> <p>Good luck!</p>
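<p>A minimal sketch of the powerset (subset) construction, just for illustration (Python, with the NFA assumed to be given as a dict mapping (state, symbol) pairs to sets of states; ε-transitions are omitted and all names are made up):</p> <pre><code>def nfa_to_dfa(alphabet, delta, start, accepting):
    """delta: dict mapping (state, symbol) -&gt; set of next NFA states."""
    start_set = frozenset([start])
    dfa_delta = {}
    dfa_accepting = set()
    todo, seen = [start_set], {start_set}
    while todo:
        S = todo.pop()
        if S &amp; accepting:
            dfa_accepting.add(S)
        for a in alphabet:
            # the DFA state reached from S on symbol a is the union of the NFA moves
            T = frozenset(q for s in S for q in delta.get((s, a), set()))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    return seen, dfa_delta, start_set, dfa_accepting

# tiny example: NFA accepting strings over {0,1} whose second-to-last symbol is 1
delta = {("p", "0"): {"p"}, ("p", "1"): {"p", "q"},
         ("q", "0"): {"r"}, ("q", "1"): {"r"}}
states, trans, start, acc = nfa_to_dfa("01", delta, "p", {"r"})
print(len(states), "reachable DFA states")
</code></pre>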
135,012
<p>How to prove (or to disprove) that all the roots of the polynomial of degree $n$ $$\sum_{k=0}^{k=n}(2k+1)x^k$$ belong to the disk $\{z:|z|&lt;1\}?$ Numerical calculations confirm that, but I don't see any approach to a proof of so simply formulated statement. It would be useful in connection with an irreducibility problem. </p>
Gabriel Furstenheim
5,556
<p>The idea is taken from this other question <a href="https://mathoverflow.net/questions/18094/polynomial-with-the-primes-as-coefficients-irreducible">Polynomial with the primes as coefficients irreducible?</a></p> <p>Show instead that $f(1/x)$ has all roots lying outside of the unit disk. For that, multiply by $(x-1)$ and equate to $0$, obtaining:</p> <p>$$x^{k+1}+\sum_1^k 2x^j=2k+1$$ Take absolute values and apply the triangle inequality, and one obtains:</p> <p>$$|x^{k+1}|+\sum_1^k 2|x^j|\geq \left|x^{k+1}+\sum_1^k 2x^j\right|=2k+1$$ This is clearly not possible if $|x|&lt;1$. Moreover, if $|x|=1$ there is equality, which means that all the terms are aligned. In particular $2x^2/2x$ is real, so the only possibility is $x=1,-1$. But neither is a root of $f(1/x)$, so you are done. </p>
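<p>A quick numerical check of the original claim for small degrees (a Python/numpy sketch, not part of the argument above):</p> <pre><code>import numpy as np

for n in range(1, 21):
    # numpy.roots wants coefficients from highest degree down to the constant term
    coeffs = [2 * k + 1 for k in range(n, -1, -1)]
    r = np.roots(coeffs)
    print(n, max(abs(r)))   # should be strictly less than 1 for every n
</code></pre>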
88,861
<p>If $n$ is an integer, how do you know whether $n^n$ is a perfect square, without a calculator?</p> <p>The actual question is: "<em>how many integers between $1$ and $100$ inclusive, raised to their own power, are perfect squares?</em>".</p>
Gaurav Tiwari
11,044
<p>'$n^n$ is a perfect square' means that the square root of this number is a whole number. Let me solve this for both cases, even and odd $n$, as suggested by @pete:</p> <p>If $n$ is an even number, then we may replace $n$ by $2m \ \forall m=1,2 \ldots$; and thus can write $n^n$ as ${(2m)}^{2m}$. Therefore, $\sqrt {{(2m)}^{2m}} = {(2m)}^m$, which is a whole number.</p> <p>Similarly, if $n$ is odd, then we can write $n$ as $2m+1 \ \forall m=0,1,2 \ldots$; and $\sqrt{{(2m+1)}^{2m+1}} = {(2m+1)}^{m+\frac{1}{2}} ={(2m+1)}^m \times \sqrt{2m+1}$, which is a whole number iff $n=2m+1$ is already a perfect square.</p>
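<p>For the original 1-to-100 question, a brute-force check along these lines (plain Python, using integer square roots to avoid floating-point issues; just a sanity check, not part of the answer above) agrees with the two cases: all 50 even $n$ work, plus the odd $n$ that are themselves perfect squares ($1, 9, 25, 49, 81$), giving 55 in total.</p> <pre><code>from math import isqrt

def is_square(v):
    r = isqrt(v)
    return r * r == v

print(sum(1 for n in range(1, 101) if is_square(n ** n)))   # prints 55
</code></pre>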
4,267,862
<p>If k balanced n-sided dice are rolled, what is the probability that each of the n different numbers will appear at least once?</p> <p>A case of this was discussed <a href="https://math.stackexchange.com/questions/264408">here</a>, but I’m not sure how to extend this. Specifically, I’m not sure how to calculate the “numbers that can repeat” term in the accepted answer.</p>
G Cab
317,234
<p>(<em>Please allow me to use <span class="math-container">$n$</span> in place of your <span class="math-container">$k$</span>, and <span class="math-container">$m$</span> in place of your <span class="math-container">$n$</span></em>)</p> <p>So we have <span class="math-container">$n$</span> fair <span class="math-container">$m$</span>-face dice. If you consider the dies to be distinct, by color or by launching them in sequence, then the space of events is given by <span class="math-container">$m^n$</span> equiprobable words (strings , m-tuples) of length <span class="math-container">$n$</span> formed out of the alphabet <span class="math-container">$\{1, 2, \ldots, m \}$</span>.</p> <p>Let's consider the development of <span class="math-container">$$ \begin{array}{l} \left( {x_{\,1} + x_{\,2} + \cdots + x_{\,m} } \right)^n = \\ = \cdots + x_{\,j_{\,1} } x_{\,j_{\,2} } \cdots x_{\,j_{\,n} } + \cdots \quad \left| {\;j_i \in \left\{ {1, \ldots ,m} \right\}} \right.\quad = \\ = \sum\limits_{\left\{ {\begin{array}{*{20}c} {0\, \le \,k_{\,j} \,\left( { \le \,n} \right)} \\ {k_{\,1} + k_{\,2} + \, \cdots + k_{\,m} \, = \,n} \\ \end{array}} \right.\;} {\left( \begin{array}{c} n \\ k_{\,1} ,\,k_{\,2} ,\, \cdots ,\,k_{\,m} \\ \end{array} \right)x_{\,1} ^{k_{\,1} } x_{\,2} ^{k_{\,2} } \cdots x_{\,m} ^{k_{\,m} } } \\ \end{array} $$</span> where <span class="math-container">$$x_{\,j} ^{k_{\,j} } $$</span> accounts for the <span class="math-container">$j$</span>th face (character) repeated <span class="math-container">$k_j$</span> times, and where putting the <span class="math-container">$x$</span>'s at 1 we get <span class="math-container">$$ \begin{array}{l} \left( {\underbrace {1 + 1 + \cdots + 1}_m} \right)^n = m^n = \\ = \sum\limits_{\left\{ {\begin{array}{*{20}c} {0\, \le \,k_{\,j} \,\left( { \le \,n} \right)} \\ {k_{\,1} + k_{\,2} + \, \cdots + k_{\,m} \, = \,n} \\ \end{array}} \right.\;} {\left( \begin{array}{c} n \\ k_{\,1} ,\,k_{\,2} ,\, \cdots ,\,k_{\,m} \\ \end{array} \right)1^{k_{\,1} } 1^{k_{\,2} } \cdots 1^{k_{\,m} } } = \\ = \sum\limits_{\left\{ {\begin{array}{*{20}c} {0\, \le \,k_{\,j} \,\left( { \le \,n} \right)} \\ {k_{\,1} + k_{\,2} + \, \cdots + k_{\,m} \, = \,n} \\ \end{array}} \right.\;} {\left( \begin{array}{c} n \\ k_{\,1} ,\,k_{\,2} ,\, \cdots ,\,k_{\,m} \\ \end{array} \right)} \\ \end{array} $$</span></p> <p>Out of these we want to number the cases in which <span class="math-container">$k_1, k_2 , \ldots, k_m$</span> are at least one, i.e. 
<span class="math-container">$$ N\left( {n,m} \right) = \sum\limits_{\left\{ {\begin{array}{*{20}c} {1\, \le \,k_{\,j} \,\left( { \le \,n} \right)} \\ {k_{\,1} + k_{\,2} + \, \cdots + k_{\,m} \, = \,n} \\ \end{array}} \right.\;} {\left( \begin{array}{c} n \\ k_{\,1} ,\,k_{\,2} ,\, \cdots ,\,k_{\,m} \\ \end{array} \right)} $$</span></p> <p>There are <span class="math-container">$\binom{m}{m-l} = \binom{m}{l}$</span> ways to choose <span class="math-container">$m-l$</span> characters not appearing and leaving <span class="math-container">$l$</span> to appear at least once so it shall be <span class="math-container">$$ \begin{array}{l} m^n = \sum\limits_{\left\{ {\begin{array}{*{20}c} {0\, \le \,k_{\,j} \,\left( { \le \,n} \right)} \\ {k_{\,1} + k_{\,2} + \, \cdots + k_{\,m} \, = \,n} \\ \end{array}} \right.\;} {\left( \begin{array}{c} n \\ k_{\,1} ,\,k_{\,2} ,\, \cdots ,\,k_{\,m} \\ \end{array} \right)} = \\ = \sum\limits_{\left( {0\, \le } \right)\,l\,\left( { \le \,m} \right)} {\left( \begin{array}{c} m \\ l \\ \end{array} \right)\sum\limits_{\left\{ {\begin{array}{*{20}c} {1\, \le \,c_{\,j} \,\left( { \le \,n} \right)} \\ {c_{\,1} + \,c_{\,2} + \cdots \, + c_{\,l} \, = \,n} \\ \end{array}} \right.} {\;\left( \begin{array}{c} n \\ c_{\,1} ,\,c_{\,2} , \cdots \,,c_{\,l} \, \\ \end{array} \right)} } \\ \end{array} $$</span></p> <p>But also it is, from the definition of the Stirling N. of 2nd kind <span class="math-container">$$ \;m^{\,n} = \sum\limits_{\left( {0\, \le } \right)\,k\,\left( { \le \,n} \right)} {\left\{ \begin{array}{c} n \\ k \\ \end{array} \right\}\,m^{\,\underline {\,k\,} } } = \sum\limits_{\left( {0\, \le } \right)\,k\,\left( { \le \,n} \right)} {k!\left\{ \begin{array}{c} n \\ k \\ \end{array} \right\}\,\left( \begin{array}{c} m \\ k \\ \end{array} \right)} $$</span> and therefore <span class="math-container">$$ N\left( {n,m} \right) = \sum\limits_{\left\{ {\begin{array}{*{20}c} {1\, \le \,k_{\,j} \,\left( { \le \,n} \right)} \\ {k_{\,1} + k_{\,2} + \, \cdots + k_{\,m} \, = \,n} \\ \end{array}} \right.\;} {\left( \begin{array}{c} n \\ k_{\,1} ,\,k_{\,2} ,\, \cdots ,\,k_{\,m} \\ \end{array} \right)} = m!\left\{ \begin{array}{c} n \\ m \\ \end{array} \right\} $$</span></p>
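<p>A small brute-force cross-check of the final count (a Python sketch, not part of the derivation above; it enumerates all $m^n$ outcomes for a few small $n$, $m$ and compares with the equivalent inclusion-exclusion sum $\sum_j (-1)^j \binom{m}{j}(m-j)^n = m!\left\{ n \atop m \right\}$; the probability asked for is this count divided by $m^n$):</p> <pre><code>from itertools import product
from math import comb

def brute_force(n, m):
    """Words of length n over {1..m} that use every symbol at least once."""
    return sum(1 for w in product(range(m), repeat=n) if len(set(w)) == m)

def surjections(n, m):
    # inclusion-exclusion; equals m! * Stirling2(n, m)
    return sum((-1) ** j * comb(m, j) * (m - j) ** n for j in range(m + 1))

for n, m in [(4, 3), (5, 4), (6, 4), (7, 5)]:
    assert brute_force(n, m) == surjections(n, m)
    print(n, m, surjections(n, m), surjections(n, m) / m ** n)
</code></pre>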
652,446
<p>I just ran into the following problem: The random variables $X$ and $Y$ are independent, where $X \sim Normal(1,1)$ and $Y \sim Gamma(\lambda,p)$ with $E(Y) = 1$ and $Var(Y) = 1/2$. How do we find $E[(X+Y)^3]$? I've tried a convolution, which leads to a really ugly looking integral from which I then have to get the third moment. I've tried characteristic functions and ran into the same problem; I'm sure there has to be some other easy way to solve this. Any ideas?</p>
Tom-Tom
116,182
<p>Write $$\mathrm E\left[(X+Y)^3\right]=\mathrm E\left[X^3+3X^2Y+3XY^2+Y^3\right]$$ and use the fact that $X$ and $Y$ are independent, <em>i.e.</em> $$\mathrm E\left[X^pY^q\right]=\mathrm E\left[X^p\right]\mathrm E\left[Y^q\right].$$</p>
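<p>For concreteness (not part of the original answer, and hedging on the parametrization: the stated mean $1$ and variance $1/2$ force shape $2$ and rate $2$ in the usual shape/rate convention), the individual moments are $E[X]=1$, $E[X^2]=2$, $E[X^3]=\mu^3+3\mu\sigma^2=4$, and $E[Y]=1$, $E[Y^2]=\tfrac32$, $E[Y^3]=\frac{2\cdot3\cdot4}{2^3}=3$, so $$\mathrm E\left[(X+Y)^3\right]=E[X^3]+3E[X^2]E[Y]+3E[X]E[Y^2]+E[Y^3]=4+6+\tfrac92+3=\tfrac{35}{2}.$$</p>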
1,731,382
<p>Notice that the parabola, defined by certain properties, is also the trajectory of a cannon ball. Does the same sort of thing hold for the catenary? That is, is the catenary, defined by certain properties, also the trajectory of something?</p>
pjs36
120,540
<p>From the right perspective, maybe.</p> <p><a href="https://i.stack.imgur.com/2ZPeI.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/2ZPeI.gif" alt="enter image description here"></a></p> <p>(image from <a href="https://en.wikipedia.org/wiki/Square_wheel" rel="noreferrer">Wikipedia</a>)</p> <p>I'm not exactly sure how to frame this as a trajectory problem, but certainly there is stuff moving and a catenary is traced! </p> <p>We have a square moving horizontally at a constant speed, and rotating at "the right" constant angular velocity (I'm not certain the angular velocity is fixed, but I suspect it is). Throughout a given quarter rotation starting with a vertex of the square at the bottom, the point directly below the radius will trace out an inverted catenary.</p>
4,353,958
<p><a href="https://i.stack.imgur.com/zxyBa.png" rel="nofollow noreferrer">In this example</a>, it says that the phase of the complex number <span class="math-container">$i$</span> is <span class="math-container">$\pi/2$</span> and <span class="math-container">$-1$</span> has phase <span class="math-container">$\pi$</span>.</p> <p>We use this formula to find the phase: <span class="math-container">$\varphi(z)=\text{arctan}\left(\frac{\text{Im}(z)}{\text{Re}(z)}\right)$</span></p> <p>Then, the phase of <span class="math-container">$i$</span> should be <span class="math-container">$\text{arctan}(1/0)$</span>, which is undefined, and that of <span class="math-container">$-1$</span> should be <span class="math-container">$\text{arctan}(0/(-1))=0$</span>. So why does it say otherwise in the example?</p>
José Carlos Santos
446,262
<p>If <span class="math-container">$z\in\Bbb C$</span> and <span class="math-container">$z=x+yi$</span>, with <span class="math-container">$x,y\in\Bbb R$</span>, then, if you want to write <span class="math-container">$z$</span> as <span class="math-container">$\rho\bigl(\cos(\varphi)+\sin(\varphi)i\bigr)$</span>, then you will have <span class="math-container">$\rho\cos(\varphi)=x$</span> and <span class="math-container">$\rho\sin(\varphi)=y$</span>. Therefore, if <span class="math-container">$x\ne0$</span>, you will have<span class="math-container">$$\frac yx=\frac{\rho\sin(\varphi)}{\rho\cos(\varphi)}=\tan(\varphi).$$</span>But this was done under the assumption that <span class="math-container">$x\ne0$</span>. If <span class="math-container">$x=0$</span>, then<span class="math-container">\begin{align}z&amp;=yi\\&amp;=\begin{cases}|y|\sin\left(\frac\pi2\right)i=|y|\left(\cos\left(\frac\pi2\right)+\sin\left(\frac\pi2\right)i\right)&amp;\text{ if }y\geqslant0\\|y|\sin\left(\frac{3\pi}2\right)i=|y|\left(\cos\left(\frac{3\pi}2\right)+\sin\left(\frac{3\pi}2\right)i\right)&amp;\text{ otherwise.}\end{cases}\end{align}</span>Besides, note that <span class="math-container">$\tan(\varphi)=\frac yx$</span> is <em>not</em> the same thing as <span class="math-container">$\varphi=\arctan\left(\frac yx\right)$</span>.</p>
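<p>A quick illustration of the same point in code (a small Python sketch, not from the original answer): the two-argument arctangent recovers the phase correctly even when <span class="math-container">$\operatorname{Re}(z)=0$</span> or the point lies in the left half-plane, whereas the naive <span class="math-container">$\arctan(\operatorname{Im}/\operatorname{Re})$</span> would divide by zero for <span class="math-container">$z=i$</span> and return <span class="math-container">$0$</span> for <span class="math-container">$z=-1$</span>.</p> <pre><code>import cmath, math

for z in (1j, -1, 1 + 1j):
    two_arg = math.atan2(z.imag, z.real)   # handles Re(z) = 0 and picks the right quadrant
    print(z, cmath.phase(z), two_arg)      # cmath.phase agrees with atan2
</code></pre>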
1,085,702
<p>It's said that a computer program &quot;prints&quot; a set <span class="math-container">$A$</span> (<span class="math-container">$A \subseteq \mathbb N$</span>, positive integers.) if it prints every element of <span class="math-container">$A$</span> in ascending order (even if <span class="math-container">$A$</span> is infinite.). For example, the program can &quot;print&quot;:</p> <ol> <li>All the prime numbers.</li> <li>All the even numbers from <span class="math-container">$5$</span> to <span class="math-container">$100$</span>.</li> <li>Numbers including &quot;<span class="math-container">$7$</span>&quot; in them.</li> </ol> <p>Prove there is a set that no computer program can print.</p> <p>I guess it has something to do with an algorithm meant to manipulate or confuse the program, or to create a paradox, but I can't find an example to prove this. Any help?</p> <p>Guys, this was given to me by my Set Theory professor, meaning this question does not regard computers but rather an algorithm that cannot exhaust <span class="math-container">$\mathcal{P}(\mathbb{N})$</span>. Everything you say about computers or the number of programs does not really help me with this... The proof has to contain Set Theory claims, and I probably have to find a set with terms that will make it impossible for the program to print. I am not saying your proofs involving computing are not good; on the contrary, I suppose they are wonderful, but I don't really understand them nor do I need to use them, for it's about sets.</p>
WillO
29,145
<p>Jihad's answer proves that some such $A$ exists. For an explicit example, let $A$ be the set of Godel numbers of true statements about arithmetic (after fixing your favorite encoding).</p>
1,085,702
<p>It's said that a computer program &quot;prints&quot; a set <span class="math-container">$A$</span> (<span class="math-container">$A \subseteq \mathbb N$</span>, positive integers.) if it prints every element of <span class="math-container">$A$</span> in ascending order (even if <span class="math-container">$A$</span> is infinite.). For example, the program can &quot;print&quot;:</p> <ol> <li>All the prime numbers.</li> <li>All the even numbers from <span class="math-container">$5$</span> to <span class="math-container">$100$</span>.</li> <li>Numbers including &quot;<span class="math-container">$7$</span>&quot; in them.</li> </ol> <p>Prove there is a set that no computer program can print.</p> <p>I guess it has something to do with an algorithm meant to manipulate or confuse the program, or to create a paradox, but I can't find an example to prove this. Any help?</p> <p>Guys, this was given to me by my Set Theory professor, meaning this question does not regard computers but rather an algorithm that cannot exhaust <span class="math-container">$\mathcal{P}(\mathbb{N})$</span>. Everything you say about computers or the number of programs does not really help me with this... The proof has to contain Set Theory claims, and I probably have to find a set with terms that will make it impossible for the program to print. I am not saying your proofs involving computing are not good; on the contrary, I suppose they are wonderful, but I don't really understand them nor do I need to use them, for it's about sets.</p>
KSmarts
192,747
<p>Any known model of computing, and thus any computer we can make, is equivalent to a Turing machine. Since the number of different possible inputs and states of a Turing machine is finite, there are, as Jihad said, countably many possible programs. Since there are uncountably many subsets of the natural numbers, there must be some subsets that the computer can not print.</p> <p>If you want to find something specific, you should look to uncomputable functions, or undecidable problems.</p> <p>For example, you could have it try to print <a href="http://oeis.org/A060843" rel="nofollow">A060843</a>, the Busy Beaver numbers. The Busy Beaver function determines the maximum number of steps that an $n$-state Turing machine can take before halting. Basically, it tells you how long a computer can run without getting stuck in an infinite loop. It grows extremely quickly: only the first $4$ elements of this sequence are known exactly, the fifth is known to be more than $47$ million, and the sixth is more than $8$ quadrillion.</p> <p>Finding a general-case algorithm for this function would be equivalent to solving the halting problem, which is undecidable. This means that while the sequence is well-defined, it is not computable.</p> <p>Or, given some encoding of logical statements to natural numbers, you could have it print the set of all $n$ where the statement represented by $n$ is true. This is Hilbert's Decision Problem, or Entscheidungsproblem, and is very similar (possibly equivalent, I'm not sure) to WillO's answer about G&ouml;del Numbers.</p> <p>You could also break it through self-reference, as Mark Bennet suggests.</p>
1,085,702
<p>It's said that a computer program &quot;prints&quot; a set <span class="math-container">$A$</span> (<span class="math-container">$A \subseteq \mathbb N$</span>, positive integers.) if it prints every element of <span class="math-container">$A$</span> in ascending order (even if <span class="math-container">$A$</span> is infinite.). For example, the program can &quot;print&quot;:</p> <ol> <li>All the prime numbers.</li> <li>All the even numbers from <span class="math-container">$5$</span> to <span class="math-container">$100$</span>.</li> <li>Numbers including &quot;<span class="math-container">$7$</span>&quot; in them.</li> </ol> <p>Prove there is a set that no computer program can print.</p> <p>I guess it has something to do with an algorithm meant to manipulate or confuse the program, or to create a paradox, but I can't find an example to prove this. Any help?</p> <p>Guys, this was given to me by my Set Theory professor, meaning this question does not regard computers but rather an algorithm that cannot exhaust <span class="math-container">$\mathcal{P}(\mathbb{N})$</span>. Everything you say about computers or the number of programs does not really help me with this... The proof has to contain Set Theory claims, and I probably have to find a set with terms that will make it impossible for the program to print. I am not saying your proofs involving computing are not good; on the contrary, I suppose they are wonderful, but I don't really understand them nor do I need to use them, for it's about sets.</p>
Hanno
81,567
<p>The following is an elementary and informal approach which does not use set theory:</p> <p>First, fix the syntax for your computer programs and enumerate them by natural numbers.</p> <p><em>Definition:</em> Denote by $A\subset{\mathbb N}$ the set of those numbers which correspond to a computer program that prints out infinitely many numbers (in ascending order), and call such a program <em>observably nonterminating</em>. </p> <p><em>Claim:</em> There is no computer program which prints $A$ in ascending order.</p> <p><em>Edit:</em> In the following I argue that the printability of $A$ implies the (contradictory) printability of a set as in <a href="https://math.stackexchange.com/a/1085881/81567">QuestionC's more direct answer</a>.</p> <p><em>Proof of claim:</em> For otherwise, note first that such a program could be used to decide, for any $n$, whether the program with number $n$ is observably nonterminating: namely, pick any $m\geq n$ numbering a program that you know to print out infinitely many numbers, wait for your hypothetical computer program to output $m$, and look if it printed $n$ until then. </p> <p>However, building on such a decision procedure for observable nontermination, you can write a computer program that prints out $n$ if either $n$ does not number an observably nonterminating program, or if $n$ does number an observably nonterminating program which however does <em>not</em> print $n$. This is well-defined since for any observably nonterminating program it is decidable whether it prints a given number: as above, wait long enough for a larger number to be printed, and look whether the number you are interested in has been printed before. Note that this program is observably nonterminating since there are infinitely many programs that print out only finitely many numbers, and the numbers of all these programs will be printed. Finally, this observably nonterminating program itself has a number, and looking at whether or not it prints its own number gives rise to a contradiction.</p>
1,274,717
<p>Say that $V$ is a finite dimensional vector space over a field and and $f : V \to V$ a linear map. There is an integer $i$ such that $\text{ker}(f^n) = \text{ker}(f^{n+1})$ for all $n \geq i$. You see that by noting that $\text{ker}(f^n) \subseteq \text{ker}(f^{n+1})$ for all $n$ and since $V$ is finite dimensional, they must stabilize at some point.</p> <p>I am having trouble seeing that for $n \leq s$ where $s$ is the least integer $i$ above, that $\text{ker}(f^n) \subsetneq \text{ker}(f^{n+1})$. How can I see the proper containment?</p> <p>I am pretty sure that the proof goes like this: If $n &lt; s$ and $\text{ker}(f^{n-1}) = \text{ker}(f^{n})$, then $\text{ker}(f^j) = \text{ker}(f^{j+1})$ for all $j \geq n$ contradicting that $s$ is the least such integer. But how do I show this? Thanks for your help.</p>
Alex W
230,729
<p>Let $K_i=\ker(f^i)$. Assume, that $n\in\mathbb{Z}_{\geq 0}$ and $K_n=K_{n+1}$. We show that $K_{n+1}=K_{n+2}$. Clearly, this will prove your claim. Obviously, $K_{n+1}\subset K_{n+2}$. Conversely, let $a\in K_{n+2}$ and let's show that $a\in K_{n+1}$. Since $a\in K_{n+2}$, then $0=f^{n+2}(a)=f^{n+1}(f(a))$, hence $f(a)\in K_{n+1}=K_n$. Therefore $0=f^n(f(a))=f^{n+1}(a)$, so $a\in K_{n+1}$.</p>
1,341,385
<p>I want to be a mathematician or computer scientist. I'm going to be a junior in high school, and I skipped precalc/trig to go straight to AP Calc since I've studied a lot of analysis and stuff on my own. My dad wants me to memorize about 30 trig identities (though some of them are very similar) since I'm missing trig. I've gone through and proved all of them, but memorizing them seems like a waste of effort. My dad is a physicist, so he is good at math, but I think he may be wrong here. Can't one just use deMoivre's theorem to get around memorizing the identities?</p>
Teoc
190,244
<p>To be honest, the only trig identities you really need are the definitions of the 6 trig functions, and Euler/De Moivre. You can prove almost all the trig identities from $e^{ix}$. Deriving an identity is much easier than just rote memorization, and with repeated derivation, it eventually becomes memorized. </p>
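<p>As one worked illustration of the point (not from the original answer): since $e^{i(a+b)}=e^{ia}e^{ib}$, $$\cos(a+b)+i\sin(a+b)=(\cos a+i\sin a)(\cos b+i\sin b)=(\cos a\cos b-\sin a\sin b)+i(\sin a\cos b+\cos a\sin b),$$ and matching real and imaginary parts gives both angle-addition formulas at once; most of the standard list of identities then follows from these together with $\cos^2 x+\sin^2 x=1$.</p>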
171,565
<p>Does anybody know whether there exists a proof by induction (or at least a proof that does not use Hilbert polynomials) that the degree of the Segre variety product of $n$ lines is $n!$ ?</p>
Francesco Polizzi
7,460
<p>Let me give a proof by induction.</p> <p>For $n=2$ we have $\mathbb{P}^1 \times \mathbb{P}^1$ embedded as a quadric in $\mathbb{P}^3$, so the claim is true in this case.</p> <p>Now let $X_n$ be the Segre embedding of the product of $n$ lines $\mathbb{P}^1 \times \ldots \times \mathbb{P}^1$. An easy computation using the definition of Segre embedding shows that there is a hyperplane section of $X_n$ given by the union of $n$ copies of $X_{n-1}$, one for each copy of $\mathbb{P}^1$ (for instance, there is a hyperplane section of the quadric $X_2$ given by the union of two lines).</p> <p>By the induction assumption, $X_{n-1}$ has degree $(n-1)!$. So the degree of $X_n$ is $$n \cdot(n-1)! = n!.$$</p>
171,565
<p>Does anybody know whether there exists a proof by induction (or at least a proof that does not use Hilbert polynomials) that the degree of the Segre variety product of $n$ lines is $n!$ ?</p>
abx
40,297
<p>The Segre embedding is defined by the line bundle $\ L:=\mathcal{O}_{\mathbb{P}^1}(1)\boxtimes\ldots \boxtimes \mathcal{O}_{\mathbb{P}^1}(1)\ $ on $(\mathbb{P}^1)^n$, therefore its degree is $L^n=(p_1^*h+\ldots +p_n^*h)^n$, where $h$ is the class of a point in $\mathrm{Pic}(\mathbb{P}^1)$ and $p_i$ the $i$-th projection. The only nonzero term in this expression is $p_1^*h\cdot \ldots \cdot p_n^*h=1$, with coefficient $n!$.</p>
3,567,662
<p>I have just learned how to convert a plane in R3 from Cartesian to parametric form, by setting 2 variables to 0 and solving for the 3rd one in order to obtain 3 points on the plane, and solve from there. However, this does not work when 1 or 2 of the variables are 0, as it is not possible to find 3 points on the plane in the same way (for example in the picture). How can this be solved for the particular question, and for other cases where there are variables that are 0? <a href="https://i.stack.imgur.com/C7fEp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C7fEp.png" alt="enter image description here"></a></p>
Community
-1
<p>Enumerate following the systematic sequence of <span class="math-container">$(x_i,y_j)$</span> indexes</p> <p><span class="math-container">$$11,\ 21,12,\ 31,22,13,\ 41,32,23,14,\ 51,42,33,24,15,\ \cdots$$</span></p> <p>or any other enumeration of <span class="math-container">$\mathbb N\times\mathbb N$</span>.</p>
232,276
<p>I can prove with the triangle inequality that the unit ball in $R^n$ is convex, but how can I show that it is strictly convex?</p>
Hagen von Eitzen
39,174
<p>For the euclidean metric, we see that the unit ball is <em>strictly</em> convex because for different vectors $a$ and $b$ we have that $$f(t):=||ta+(1-t)b||^2\\ =\langle ta+(1-t)b, ta+(1-t)b\rangle\\ = t^2||a||^2+(1-t)^2||b||^2 + 2t(1-t)\langle a,b\rangle\\= (\ldots) t^2+(\ldots)t + (\ldots)$$ is a quadratic function with positive leading term (because $f(t)\to+\infty$ as $t\to\infty$), hence $f$ assumes its maximum on the interval $[0,1]$ only at one or both of the endpoints. In other words: We make use of the fact that $f(t)=c_2t^2+c_1t+c_0$ with $c_2&gt;0$ is strictly convex, which implies $f(t)&lt;\max\{f(0),f(1)\}$ for $0&lt;t&lt;1$.</p>
78,368
<p>Please, can anybody give a reference(s) to some good recent review papers about copulas and time series?</p>
Fabrizio
18,614
<p>I would suggest looking at the following surveys (both available online):</p> <ul> <li>H. Manner and O. Reznikova, "A survey on time-varying copulas: Specification, simulations and estimation", Econometric Reviews, forthcoming.</li> <li>A. Patton, "Copula-Based Models for Financial Time Series", 2009, in T.G. Andersen, R.A. Davis, J.-P. Kreiss and T. Mikosch (eds.), Handbook of Financial Time Series, Springer Verlag.</li> </ul> <p>Regards</p>
3,274,766
<p>I have been trying to do this problem:</p> <p>Solve <span class="math-container">$$\sec(x)=\tan(x),\quad 0≤x&lt;2π$$</span></p> <p>I started by rewriting <span class="math-container">$\sec(x)$</span> as <span class="math-container">$\frac{1}{\cos(x)}$</span>.</p> <p>I then rewrote <span class="math-container">$\tan(x)$</span> to get <span class="math-container">$\frac{\sin(x)}{\cos(x)}$</span>.</p> <p>Therefore:</p> <p><span class="math-container">$$\frac{1}{\cos(x)} = \frac{\sin(x)}{\cos(x)}$$</span> Therefore:</p> <p><span class="math-container">$$1=\sin(x)$$</span></p> <p><span class="math-container">$$x=\frac{π}{2}$$</span></p> <p>However, this is not a solution to the original problem because both <span class="math-container">$y=\tan(x)$</span> and <span class="math-container">$y=\sec(x)$</span> have asymptotes as <span class="math-container">$x=\frac{π}{2}$</span></p> <p>So where have I gone wrong, or are there no solutions? </p> <p>If there are no solutions, how would I know (in other words, can we prove there are no solutions, or is <span class="math-container">$x=\frac{π}{2}$</span> not working enough to conclude there are no solutions?)</p>
José Carlos Santos
446,262
<p>There is nothing wrong with what you did. The problem has no solution since, precisely, if <span class="math-container">$x\in\mathbb R$</span> is such that both <span class="math-container">$\tan(x)$</span> and <span class="math-container">$\sec(x)$</span> are defined, then <span class="math-container">$\sec(x)=\tan(x)\iff\sin(x)=1$</span>. But at those numbers <span class="math-container">$x$</span> for which <span class="math-container">$\sin(x)=1$</span>, <span class="math-container">$\tan(x)$</span> and <span class="math-container">$\sec(x)$</span> are undefined.</p>
4,544,450
<p>I'm trying to solve this logical equation:</p> <p>p≡((p∧∼q)→q)→p</p> <p>I know I have to simplify the right side, and I'm pretty certain the final step must be the absorption law 10). But the ∼q is bugging me.</p>
Graham Kemp
135,106
<blockquote> <p>im pretty certain the final step must be the absorption law 10). But the ∼q is bugging me</p> </blockquote> <p>Yes it is, but there is no reason to be bugged.</p> <p>The relevant absorption law is that: for <em>any</em> predicates <span class="math-container">$A$</span> and <span class="math-container">$B$</span> , we have: <span class="math-container">$((A\land B)\lor A)\equiv A$</span> .</p> <p>Just substitute <span class="math-container">$p$</span> for <span class="math-container">$A$</span> and <span class="math-container">${\sim}q$</span> for <span class="math-container">$B$</span> . Since the law works for <em>any</em> <span class="math-container">$A,B$</span>, it shall work for those too.</p>
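<p>A brute-force truth-table check (a tiny Python sketch, not part of the answer above) confirms that the original equivalence is a tautology, i.e. the right-hand side reduces to $p$ for every valuation, which is exactly what the absorption step should leave you with:</p> <pre><code>from itertools import product

def implies(a, b):
    return (not a) or b

for p, q in product([False, True], repeat=2):
    rhs = implies(implies(p and (not q), q), p)
    assert rhs == p            # ((p ∧ ¬q) → q) → p  simplifies to p
print("tautology confirmed")
</code></pre>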
2,409,580
<p>Recall that </p> <blockquote> <p><strong>Theorem (Bessel inequality).</strong> Let $(e_k)$ be an orthonormal sequence in an inner product space $X$. Then for every $x \in X $, $$\sum_{k=1}^{\infty} |\langle x,e_k \rangle|^2 \le \|x\|^2 .$$</p> </blockquote> <p>The proof gives $\le$ and not just $=$. Can $\le$ be 'reduced' to $=$? And if so, how? By searching for examples I couldn't find any space $X$ such that the relation is just $&lt;$. </p> <p>For some spaces that I know, e.g. $X=\mathbb{R^n}$, equality holds. Is there any space such that $\sum_{k=1}^{\infty} |\langle x,e_k \rangle|^2 &lt; \|x\|^2 ?$</p>
Angina Seng
436,618
<p>In general, equality holds (<a href="https://en.wikipedia.org/wiki/Parseval%27s_identity" rel="nofollow noreferrer">Parseval's theorem</a>) when the $e_k$ form a <strong>complete</strong> orthonormal sequence. For instance the $e^{inx}$ for $n\in\Bbb Z$ is a complete orthonormal sequence in $L^2[0,1]$. If you take a complete orthonormal sequence, and discard an element, and call that $x$, then $\left&lt;x,e_k\right&gt;=0$ for all elements of the new sequence, but $\|x\|^2=1$.</p>
242,203
<p>What's the derivative of the integral $$\int_1^x\sin(t) dt$$</p> <p>Any ideas? I'm getting a little confused.</p>
TonyK
1,508
<p>If $f$ is <em>any function at all</em> that can be integrated, then the derivative of the integral of $f(t)dt$ from $1$ to $x$ is $f(x)$. This wonderful fact is the Fundamental Theorem of Calculus.</p>
2,406,107
<p><a href="https://i.stack.imgur.com/ccIr4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ccIr4.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/zCPUz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zCPUz.png" alt="enter image description here"></a></p> <p>What set or relation does y,x belong to?</p> <p>What set or thing does w belong to?</p> <p>What set or thing does z belong to?</p> <p>I have a hard time keeping track what set w,z,y are members of, are part of whose domain and codomain. What are all the codomains and domains here?</p> <p>How do we formally define using set builder notation R,S, and T?</p>
drhab
75,923
<p>In general if $R\subseteq A\times B$ and $S\subseteq B\times C$ then: $$S\circ R:=\{\langle a,c\rangle\mid\exists b\in B[\langle a,b\rangle\in R\wedge\langle b,c\rangle\in S]\}\subseteq A\times C$$</p> <p>So starting with relations $R\subseteq A\times B$, $S\subseteq B\times C$ and $T\subseteq C\times D$ we have $S\circ R\subseteq A\times C$ and $T\circ S\subseteq B\times D$.</p> <p>Continuing this we find that $T\circ(S\circ R)$ and $(T\circ S)\circ R$ are both subsets of $A\times D$ that satisfy: $$T\circ(S\circ R)=(T\circ S)\circ R$$</p>
3,135,440
<p>This is throwing me off a bit, I believe mainly because of the way the question is worded. Would this simply be $4$ out of $36$?</p>
Michael Rybkin
350,247
<p>It means that you're doing something wrong.</p> <p><span class="math-container">$$ \left(\frac{x+1}{x-1}\right)'= \frac{(x+1)'(x-1)-(x+1)(x-1)'}{(x-1)^2}=\\ \frac{x-1-(x+1)}{(x-1)^2}= \frac{x-1-x-1}{(x-1)^2}=-\frac{2}{(x-1)^2} $$</span> <hr> <span class="math-container">$$ [(x+1)(x-1)^{-1}]'=\\ (x+1)'(x-1)^{-1}+(x+1)[(x-1)^{-1}]'=\\ (x-1)^{-1}+(x+1)(-1)(x-1)^{-2}=\\ \frac{1}{x-1}-\frac{x+1}{(x-1)^2}= \frac{x-1}{(x-1)^2}-\frac{x+1}{(x-1)^2}=\\ \frac{x-1-x-1}{(x-1)^2}=-\frac{2}{(x-1)^2} $$</span> <hr> <span class="math-container">$$ \lim\limits_{\Delta x \rightarrow 0}\frac{\frac{x+\Delta x+1}{x+\Delta x-1}-\frac{x+1}{x-1}}{\Delta x}= \lim\limits_{\Delta x \rightarrow 0}\frac{\frac{(x+\Delta x+1)(x-1)-(x+1)(x+\Delta x-1)}{(x+\Delta x-1)(x-1)}}{\Delta x}=\\ \lim\limits_{\Delta x \rightarrow 0}\frac{x^2-x+x\Delta x-\Delta x+x-1-(x^2+x\Delta x-x+x+\Delta x-1)}{\Delta x(x+\Delta x-1)(x-1)}=\\ \lim\limits_{\Delta x \rightarrow 0}\frac{x^2+x\Delta x-\Delta x-1-(x^2+x\Delta x+\Delta x-1)}{\Delta x(x+\Delta x-1)(x-1)}= \lim\limits_{\Delta x \rightarrow 0}\frac{x^2+x\Delta x-\Delta x-1-x^2-x\Delta x-\Delta x+1}{\Delta x(x+\Delta x-1)(x-1)}=\\ \lim\limits_{\Delta x \rightarrow 0}\frac{-2\Delta x}{\Delta x(x+\Delta x-1)(x-1)}= \lim\limits_{\Delta x \rightarrow 0}\frac{-2}{(x+\Delta x-1)(x-1)}=\\ \frac{-2}{(x+0-1)(x-1)}=-\frac{2}{(x-1)^2} $$</span></p>
3,909,191
<p>I am not too sure as to what the relation is but I think <span class="math-container">$R = \{(1, 2), (2, 3), (3, 4), ..., (n - 1, n)\} $</span>.</p> <p>Any guidance would be appreciated.</p>
kevinkayaks
449,944
<p>In the next step, you want the cube root of the complex number <span class="math-container">$z = 1-i\sqrt{3}/2$</span>. First express this number in the form <span class="math-container">$z = re^{i\theta}$</span> using Euler's identity. For a number <span class="math-container">$z = a + i b$</span> we have <span class="math-container">$ r = \sqrt{a^2 + b^2}$</span> and <span class="math-container">$\theta = \tan^{-1}(b/a)$</span>:</p> <p><span class="math-container">$$ z = \sqrt{1^2 + (\sqrt{3}/2)^2} e^{i\tan^{-1}(-\sqrt{3}/2) + 2\pi i n} = \frac{1}{2}\sqrt{7}e^{i \tan^{-1}(-\sqrt{3}/2) + i 2\pi n} .$$</span> Here <span class="math-container">$n$</span> is an arbitrary integer which accounts for the periodicity of <span class="math-container">$e^{i x}$</span>.</p> <p>Then you can take the cube root:</p> <p><span class="math-container">$$ z^{1/3} = \frac{7^{1/6}}{2^{1/3}} e^{i \tan^{-1}(-\sqrt{3}/2)/3 + i 2 \pi n/3} = \frac{7^{1/6}}{2^{1/3}} \big[ \cos( \tan^{-1}(-\sqrt{3}/2)/3) + i \sin (\tan^{-1}(-\sqrt{3}/2)/3)\Big] e^{i 2 \pi n/3} $$</span></p> <p>which you can simplify as desired. Notice it's multi-valued (<span class="math-container">$n = 0, \pm 1, \dots$</span>). These different values are <a href="https://en.wikipedia.org/wiki/Branch_point" rel="nofollow noreferrer">branches</a>.</p>
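<p>A quick numerical check (Python's <code>cmath</code>, just for illustration) that the three branch values really do cube back to <span class="math-container">$1-i\sqrt{3}/2$</span>:</p> <pre><code>import cmath, math

z = 1 - 1j * math.sqrt(3) / 2
r = abs(z) ** (1 / 3)            # = 7**(1/6) / 2**(1/3)
theta = cmath.phase(z)           # = arctan(-sqrt(3)/2), since Re(z) &gt; 0
for n in range(3):
    w = r * cmath.exp(1j * (theta + 2 * math.pi * n) / 3)
    print(w, w ** 3)             # each w**3 should reproduce z
</code></pre>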
26,589
<p>Whenever you open a post in Mathstack, you cannot know how many answers that post has got. To know, you need to scroll down to the end of the post. In general, when I see a post I check how long the scroll bar is, to predict whether the question has got an answer or not. <strong>But sometimes</strong> the OP posts a very long question, so this prediction does not work every time (although it is only a prediction!).</p> <p>My suggestion is that in the top right side, where the stats of the post show (i.e. the asked and viewed part), there should be another bar where you can see <strong>how many answers have been given and whether any answer has been accepted or not</strong>.</p> <p>I suggest this because when you refresh the new questions page you do not expect to open every single one at the same time. You open a post, you see the problem, you are interested, you scroll down; otherwise, you go for the next post. Meanwhile people may have answered your next favourite post. When you open it, you see that it has been answered by someone already. If the post is very long, sometimes only after reading the whole question do you see that 9 or 10 answers have been posted. So to avoid that you need to remember every time to scroll down first.</p> <p>This is just a suggestion. Just adding one bar would make things simpler and easier to use too. That's all.</p>
Asaf Karagila
622
<p>Between the comments on the question and the answers themselves, this thing literally exists.</p> <p>The number of answers, and you can choose how to order them. Moreover, if there is an accepted answer, it will appear first (with an exception for when the answer was posted by the author of the question, in which case it might not appear first).</p> <p>Here is a screenshot:</p> <p><a href="https://i.stack.imgur.com/Nedv6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Nedv6.png" alt="enter image description here"></a></p>
26,589
<p>Whenever you open a post in Mathstack, you cannot know how many answers that post has got. To know, you need to scroll down to the end of the post. In general, when I see a post I check how long the scroll bar is, to predict whether the question has got an answer or not. <strong>But sometimes</strong> the OP posts a very long question, so this prediction does not work every time (although it is only a prediction!).</p> <p>My suggestion is that in the top right side, where the stats of the post show (i.e. the asked and viewed part), there should be another bar where you can see <strong>how many answers have been given and whether any answer has been accepted or not</strong>.</p> <p>I suggest this because when you refresh the new questions page you do not expect to open every single one at the same time. You open a post, you see the problem, you are interested, you scroll down; otherwise, you go for the next post. Meanwhile people may have answered your next favourite post. When you open it, you see that it has been answered by someone already. If the post is very long, sometimes only after reading the whole question do you see that 9 or 10 answers have been posted. So to avoid that you need to remember every time to scroll down first.</p> <p>This is just a suggestion. Just adding one bar would make things simpler and easier to use too. That's all.</p>
amWhy
9,003
<p>Yes: "Whenever you open a post" you will have first clicked on the post in order to open it. </p> <p>But every question listed on the main page (from which you open a post) <em>already shows</em> the net votes the question has earned thus far, the number of answers received, and when the number of answers received is green, that means that the question has an answer. </p> <p>And you'll also see the number of views the question has received up to that point.</p> <p>Look to the left column which sits to the left of the questions:</p> <p><a href="https://i.stack.imgur.com/2xJsB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2xJsB.png" alt="enter image description here"></a></p>
488,983
<p>I'm trying to prove the group isomorphism $(\Bbb Z[x]/(x^{n+1}))^\times\cong\Bbb Z/2\Bbb Z \times \prod _{i=1}^n \Bbb Z$.</p> <p>Obviously I tried to establish a ring isomorphism from $\Bbb Z[x]/(x^{n+1})$ to some ring $R$, a direct product of easier rings, and prove that $R^\times$ equals the RHS of the original isomorphism. In the solutions of similar problems I've seen, one defines a surjective ring homomorphism $\phi : R^\prime[x]\rightarrow R$ by substituting $x$ with an element of $R$ such that the kernel of $\phi$ equals the dividing ideal, and uses the fundamental homomorphism theorem and CRT to show the ring isomorphism.</p> <p>However this doesn't work for this particular problem; since $x^{n+1}$ has a (unique) multiple root, the kernel of the substitution homomorphism does not equal the dividing ideal. Moreover, since $\Bbb Z[x]$ is not a Euclidean domain, I have no idea what the units of $\Bbb Z[x]$ modulo some ideal are like.</p> <p>I would appreciate your help.</p>
Ryan Reich
3,547
<p>This problem is stated deceptively in the sense that it gives a highly polished <em>description</em> of what $\def\Z{\mathbb{Z}}(\Z[x]/(x^{n + 1}))^\times$ is, but doesn't actually say what it is as a subset of $\Z[x]/(x^{n + 1})$ itself. You will have an easier time proving it if you sit down and compute what the units actually are, rather than trying to find an abstract homomorphism that happens to work.</p> <p>First of all: the typical representation of $\Z[x]/(x^{n + 1})$ takes it to be the set of polynomials of degree $n$ or less, with the natural product reduced mod $x^{n + 1}$. So among its elements are all integers $n \in \Z$, with their normal product since they all have degree zero, and the only units there are $\pm 1$. That's your $\Z/2\Z$.</p> <p>Second, you have to figure out what makes a polynomial $p(x)$ invertible mod $x^n$. I claim these are just all the polynomials with constant term $\pm 1$; I leave it up to you to figure out how to find an inverse for such a thing.</p> <p>Now, there is a <em>set-theoretic</em> isomorphism between $\{p(x) \mid \deg p \leq n, p(0) = \pm 1\}$ and $\Z/2\Z \times \prod_{i = 1}^n \Z$, namely, just take the coefficients of $p$, but this is not a group homomorphism because polynomial multiplication mixes the coefficients. So there is the issue of representing this set in some other way than as a $\Z$-linear combination of powers of $x$ that makes it clearer what the multiplicative structure is.</p> <p>The goal is to <em>factor</em> each polynomial in $\Z[x]/(x^{n + 1})$ into powers of $\pm 1$ and of $n$ other, predetermined polynomials. The $\pm 1$ part is easy: it's $p(0)$. For the next, consider the following trick:</p> <p>$$(1 + kx + O(x^2))(1 - kx + O(x^2)) = 1 + O(x^2).$$</p> <p>(This is seventh-grade algebra.) Note that $(1 - x)^k = 1 - kx + O(x^2)$. In other words, if we let $p_1(x) = 1 - x$, then every polynomial $p(x)$ has a <em>unique</em> factor of a power of $p_1(x)$ such that the quotient has no linear term. Now, replace $x$ by $x^2$ and do the same with the quadratic term and $p_2(x) = 1 - x^2$, and so on up to $p_n(x) = 1 - x^n$. (This is closely related to proving that $\Z[x]/(x^{n + 1})$ is what I said it was.) If you work out the details, that is your proof.</p>
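<p>To make the &quot;figure out how to find an inverse&quot; step concrete, here is a small sketch (plain Python, not part of the original answer; the function name is made up) that computes the inverse of a polynomial with constant term $\pm1$ modulo $x^{n+1}$ by solving for one coefficient at a time; e.g. the inverse of $1-x$ comes out as $1+x+\cdots+x^n$, the truncated geometric series:</p> <pre><code>def inverse_mod_x_pow(p, n):
    """Inverse of p (coefficient list, constant term first, p[0] = +/-1) mod x^(n+1)."""
    p = p + [0] * (n + 1 - len(p))
    q = [0] * (n + 1)
    q[0] = p[0]                          # 1/p[0] equals p[0] when p[0] is +1 or -1
    for k in range(1, n + 1):
        # coefficient of x^k in p*q must vanish: sum_{j=0}^{k} p[j]*q[k-j] = 0
        q[k] = -p[0] * sum(p[j] * q[k - j] for j in range(1, k + 1))
    return q

print(inverse_mod_x_pow([1, -1], 4))     # [1, 1, 1, 1, 1], i.e. 1 + x + x^2 + x^3 + x^4
</code></pre>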
1,732,526
<p>I start by expanding the denominator and separating the real and imaginary but get stuck when deciding what my $u$ and $v$ should be.</p> <p>Thanks.</p>
Robert Israel
8,508
<p>Avoid looking at real and imaginary parts. Sum and product of analytic functions are analytic. Quotient is analytic where the denominator is nonzero.</p>
440,844
<p>Suppose we have a linear map $A \colon V \to V$ on a finite- dimensional vector space, and $W \leq V$ it's invariant subspace. Then we have obviously $\operatorname{Ker} A + W \subseteq A^{-1}(W)$.</p> <p>Is it then necessary $\operatorname{Ker} A + W = A^{-1}(W)$ ?</p> <p>I can prove it in case $A$ is a projector. How to prove it in general? Or is there a counteexample?</p>
Asaf Karagila
622
<p>Suppose that $f\colon A\to B$ is surjective, then for every $b\in B$ the set $F_b=\{a\in A\mid f(a)=b\}$ is non-empty. Therefore, using the axiom of choice, there is some $g$ which selects an element from $F_b$, that is $g(F_b)\in F_b$.</p> <p>Now show that $g$ is actually a function from $B$ into $A$, and that $g$ is injective.</p> <p>(You can't avoid the axiom of choice in this proof, because in fact this statement is equivalent to the axiom of choice, and often is taken as the statement of the axiom of choice.)</p>
4,193,578
<p>This is maybe a very easy one, but I can't find a solution...</p> <p>I'm looking for a sequence <span class="math-container">$a_1,...,a_n$</span> such that <span class="math-container">$0\leq a_1&lt;\cdots&lt;a_n&lt;1$</span> and <span class="math-container">$\sum_{k=1}^na_k=1$</span>. Of course, this should work for any choice of <span class="math-container">$n\in\mathbb{N}$</span>.</p> <p>My first option was a finite geometric series, but couldn't come up with the right parameters.</p> <p>Another option is to use some discrete probability distribution like Binomial, but this does not guarantee the increasing condition.</p> <p>Any help will be appreciated.</p>
Arthur
15,500
<p>Here is a very general, pretty easy way to get what you want.</p> <p>Take any strictly increasing sequence of <span class="math-container">$n$</span> positive numbers. Find their sum. Divide each term by that sum. You now have a strictly increasing sequence of positive numbers wich sums to <span class="math-container">$1$</span>.</p>
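<p>A one-line instance of this recipe (Python, for illustration only): start from any strictly increasing positive numbers, say $1,2,\dots,n$, and divide each by their sum.</p> <pre><code>def increasing_partition_of_one(n):
    a = list(range(1, n + 1))    # any strictly increasing positive sequence works here
    s = sum(a)
    return [x / s for x in a]

seq = increasing_partition_of_one(5)
print(seq, sum(seq))             # strictly increasing terms summing to 1.0 (up to rounding)
</code></pre>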
4,006,571
<p>Prove this statement using a proof by contradiction: <br /> Let <span class="math-container">$n$</span> be a natural number. If <span class="math-container">$x_1,\ldots,x_n \in \mathbb{N} \cup \{0\}$</span> and <span class="math-container">$\sum_{i=1}^{n}{x_i} = n+1$</span> then there is an <span class="math-container">$i \in [n]$</span> such that <span class="math-container">$x_i \geq 2$</span></p> <p>I'm not sure how to approach this problem with the proof by contradiction. So far, I assume that there is no <span class="math-container">$i \in [n] : x_i \geq 2$</span>. Then, <span class="math-container">$\sum_{i=1}^{1}{x_i} &lt; 2$</span>. Finally, because <span class="math-container">$\sum_{i=1}^{1}{x_i} = n + 1 = 2$</span>, I can conclude that by contradiction this is false.</p> <p>I'm not entirely sure if this logic is a valid proof by contradiction or what a better proof would be. Any help is appreciated</p>
J. Dunivin
203,407
<p>The mistake is that you are focusing only on <span class="math-container">$\sum_{i=1}^1 x_i = x_1$</span> to get your contradiction, but we don't know anything about this sum. The sum that we do know a lot about is <span class="math-container">$\sum_{i=1}^n x_i = x_1 + x_2 + \cdots + x_n = n+1$</span>, so we should try and contradict this assumption.</p> <p>To be clear, the reason why we don't know anything about <span class="math-container">$\sum_{i=1}^1 x_i$</span> is because <span class="math-container">$n$</span> was given and is fixed. We don't know what the value of <span class="math-container">$n$</span> is, nor do we have control over what value it is, just that we <em>have it</em></p> <p>So with proof by contradiction, we would want to try to contradict some assumption or some known fact.</p> <p>As you pointed out, we assume that there is no <span class="math-container">$i \in [n]$</span> such that <span class="math-container">$x_i \geq 2$</span>. What's another way to say this? Well, this is saying <strong>for every</strong> <span class="math-container">$i \in [n], \, x_i &lt; 2$</span>. Now we <strong>do</strong> have control of this <span class="math-container">$i$</span>, so we know that <span class="math-container">$x_i = 0,1$</span>, as each is a nonnegative integer.</p> <p>Now if we consider the sum <span class="math-container">$\sum_{i=1}^n x_i = x_1+x_2 + \cdots + x_n$</span>, we note that each <span class="math-container">$x_i$</span> could equal <span class="math-container">$0$</span> or it could equal <span class="math-container">$1$</span>. Obviously, the maximal possibility is that each <span class="math-container">$x_i = 1$</span>, so we have an inequality: <span class="math-container">$$\sum_{i=1}^n x_i = \underbrace{x_1}_{\leq 1} + \underbrace{x_2}_{\leq 1} + \cdots + \underbrace{x_n}_{\leq 1} \,\, \leq \,\, \underbrace{1 + 1 + \cdots + 1}_{n-\text{times}} = n.$$</span> So the sum can be at most <span class="math-container">$n$</span>. But this is a contradiction, as we know that <span class="math-container">$\sum_{i=1}^n x_i = n+1$</span>.</p>
362,801
<blockquote> <p>$f:[0,1]\to\mathbb{R}^2$ is continuous, $f(0) \in B_{1}(0,0)$ and $f(1) \in B_{1}(10,10)$. Prove there exists $t \in [0,1]$ such that $f(t) \in \{(x,y): x+y=5\}$. </p> </blockquote> <p>I am thinking we need to use extreme value theorem or intermediate value theorem. Which one and how? </p> <p>Just for information $B_1$(x,y) is the circle of radius 1 around pt (x,y)</p>
Hagen von Eitzen
39,174
<p><strong>Hint:</strong> $g\colon\mathbb R^2\to\mathbb R$, $(x,y)\mapsto x+y-5$ is continuous and so is $g\circ f$.</p> <p><strong>Second hint:</strong> If $(x,y)\in B_r(a,b)$, then $a-r&lt;x&lt;a+r$ and $b-r&lt;y&lt;b+r$.</p>
350,910
<p>Is there any condition based on the coefficients of terms that guarantees all real solutions to a general cubic polynomial? e.g. $$ax^3+bx^2+cx+d=0\, ?$$</p> <p>If not, are there methods rather than explicit formula to determine it?</p> <p>Thank You.</p>
lsp
64,509
<p>The discriminant is $D = 18abcd - 4b^3d + b^2c^2 - 4ac^3 - 27a^2d^2$.</p> <p>If $D &lt; 0$, then the polynomial has one real root and two (non-real) complex conjugate roots,</p> <p>If $D = 0$, then there are three real roots, and two of them are definitely equal,</p> <p>If $D &gt; 0$, then there are three distinct real roots.</p> <p>Refer to <a href="http://en.wikipedia.org/wiki/Casus_irreducibilis" rel="nofollow">http://en.wikipedia.org/wiki/Casus_irreducibilis</a> for more information.</p>
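<p>A direct transcription of this test (a small Python sketch, not from the original answer), with a few sample cubics:</p> <pre><code>def classify_cubic(a, b, c, d):
    D = 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2
    if D &gt; 0:
        return D, "three distinct real roots"
    if D == 0:
        return D, "three real roots, at least two equal"
    return D, "one real root and two complex conjugate roots"

print(classify_cubic(1, 0, -1, 0))   # x^3 - x = x(x-1)(x+1): D &gt; 0
print(classify_cubic(1, 0, 0, 0))    # x^3: triple root, D = 0
print(classify_cubic(1, 0, 1, 0))    # x^3 + x: only x = 0 is real, D &lt; 0
</code></pre>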
2,300,049
<p>These are my thoughts:</p> <p>$$z^2 = 1 + 2i \Longrightarrow (x+yi)(x+yi) = 1 + 2i$$</p> <p>so: $x^2-y^2 = 1$ and $2xy = 2$</p> <p>Then I got that $x = 1/y$, but I can't continue to find the real and imaginary parts of $z$ from there. Any help is appreciated.</p>
sharding4
254,075
<p>An alternative approach would be to use DeMoivre's Theorem. $1+2i=\sqrt{5}e^{i\arctan{2}}$ so $z=\sqrt[4]{5}e^{i\arctan 2/2}$ or $z=\sqrt[4]{5} e^{(i\arctan 2 +2\pi i)/2}$ If $\tan \theta =2, \tan \frac{\theta}{2} = \frac{2}{1+\sqrt{5}}$ giving $$z=\sqrt[4]{5}\frac{1+\sqrt{5}}{\sqrt{10+2\sqrt{5}}}+i\sqrt[4]{5}\frac{2}{\sqrt{10+2\sqrt{5}}}$$ The other root being $-z$.</p>
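<p>A quick numerical check (an addition, not from the original answer) confirms that squaring the displayed closed form recovers $1+2i$; Python's <code>cmath</code> module is used only to compare with the principal square root.</p> <pre><code>import cmath

# Evaluate z = 5^(1/4) * ((1 + sqrt 5) + 2i) / sqrt(10 + 2 sqrt 5)
r = 5 ** 0.25
den = (10 + 2 * 5 ** 0.5) ** 0.5
z = r * complex(1 + 5 ** 0.5, 2) / den

print(z)                    # approximately 1.272 + 0.786i
print(z * z)                # approximately 1 + 2i
print(cmath.sqrt(1 + 2j))   # principal square root, for comparison
</code></pre>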
2,543,215
<p>I’ve read a post asking whether a subring of a PID is always a PID. The answer is no, but the post itself gave me more questions.</p> <ol> <li><p>Is that possible for a PID that is a subring of a non-PID?</p></li> <li><p>Is that possible for a subring of a PID that is not a UFD? </p></li> </ol> <p>Some hints or examples are really appreciated!</p> <p>Thank you!</p>
Hagen von Eitzen
39,174
<ol> <li>$\Bbb Z\times \Bbb Z$ has $\Bbb Z\times 0\cong \Bbb Z$ as a subring.</li> <li>Every PID is a UFD</li> </ol>
3,214,331
<p>If i take the complex number <span class="math-container">$e^{i(3+2i)}$</span>, it's conjugate is <span class="math-container">$e^{i(-3+2i)}$</span>.</p> <p>However, the conjugate of the function f, defined as <span class="math-container">$f(x+iy)=e^{i(x+iy)}$</span>, is, according to my book: <span class="math-container">$\overline{f(x+iy)}=e^{i(x-iy)}$</span>.</p> <p>I can't understand this difference ...</p>
MSDG
447,520
<p>Well write it out: <span class="math-container">\begin{align} \overline{ e^{i(x+iy)} } &amp;= \overline{e^{ix-y}} = e^{-y}\overline{e^{ix}} = e^{-y}\overline{(\cos x+ i \sin x)} = e^{-y}(\cos x-i\sin x)\\ &amp;= e^{-y}(\cos(-x) + i \sin (-x)) = e^{-y}e^{-ix} = e^{i(-x+iy)}, \end{align}</span> so the book likely has a typo.</p>
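<p>As an added sanity check (not part of the original answer), one can plug in the question's example $x=3$, $y=2$ and compare $\overline{e^{i(x+iy)}}$ with $e^{i(-x+iy)}$ numerically:</p> <pre><code>import cmath

x, y = 3.0, 2.0
lhs = cmath.exp(1j * complex(x, y)).conjugate()   # conjugate of e^{i(x+iy)}
rhs = cmath.exp(1j * complex(-x, y))              # e^{i(-x+iy)}

print(lhs)
print(rhs)
print(abs(lhs - rhs))   # should be ~0 up to floating point error
</code></pre>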
3,082,635
<p>Prove that for a given prime <span class="math-container">$p$</span> and each <span class="math-container">$0 &lt; r &lt; p-1$</span>, there exists a <span class="math-container">$q$</span> such that </p> <p><span class="math-container">$$rq \equiv 1 \bmod p$$</span></p> <p>I've only taken one intro number theory course (years ago), and this just popped up in a computer science class (homework). I was assuming that this proof would be elementary since my current class in an algorithm cours, but after the few basic attempts I've tried it didn't look promising. Here's a couple approaches I thought of:</p> <hr> <p>(<em>reverse engineer</em>)</p> <p>To arrive at the conclusion we would need</p> <p><span class="math-container">$$rq - 1 = kp$$</span></p> <p>for some <span class="math-container">$k$</span>. A little manipulation:</p> <p><span class="math-container">$$qr - kp = 1$$</span></p> <p>That looks familiar, but I can't see anything from it.</p> <hr> <p>(<em>sum on <span class="math-container">$r$</span></em>)</p> <p><span class="math-container">$$\sum_{r=1}^{p-2} r = \frac{(p-2)(p-1)}{2} = p\frac{p - 3}{2} + 1 \equiv 1 \bmod p$$</span></p> <p>which looks good but I don't know how to incorporate <span class="math-container">$r$</span> int0 the final equality. </p> <hr> <p>(<em>Wilson's Theorem—proved by Lagrange</em>) </p> <p>I vaguely recall this theorem, but I was looking at it in an old book and it wasn't easy to see how we arrived there. Anyways, <span class="math-container">$p$</span> is prime <em>iff</em> <span class="math-container">$$(p-1)! \equiv -1 \bmod p$$</span></p> <p>Here the <span class="math-container">$r$</span> multiplier is built in to the factorial expression so I was thinking of adding <span class="math-container">$2$</span> to either side</p> <p><span class="math-container">$$(p-1)! + 2 \equiv 1 \bmod p$$</span></p> <p>which is a dead end (pretty sure). But then I was thinking, maybe multiplying Wilson't Thm by <span class="math-container">$(p+1)$</span>? Then getting</p> <p><span class="math-container">$$(p+1)(p-1)! = -(p+1) \bmod p$$</span></p> <p>which I think results in</p> <p><span class="math-container">$$(p+1)(p-1)! = 1 \bmod p$$</span></p> <p>of which <span class="math-container">$r$</span> is a multiple and <span class="math-container">$q$</span> is obvious. But I'm not sure if that's valid.</p>
Will Jagy
10,400
<p>Continued fraction for <span class="math-container">$\frac{137}{73}$</span></p> <p><span class="math-container">$$ \frac{ 137 }{ 73 } = 1 + \frac{ 64 }{ 73 } $$</span> <span class="math-container">$$ \frac{ 73 }{ 64 } = 1 + \frac{ 9 }{ 64 } $$</span> <span class="math-container">$$ \frac{ 64 }{ 9 } = 7 + \frac{ 1 }{ 9 } $$</span> <span class="math-container">$$ \frac{ 9 }{ 1 } = 9 + \frac{ 0 }{ 1 } $$</span> Simple continued fraction tableau:<br> <span class="math-container">$$ \begin{array}{cccccccccc} &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 7 &amp; &amp; 9 &amp; \\ \frac{ 0 }{ 1 } &amp; \frac{ 1 }{ 0 } &amp; &amp; \frac{ 1 }{ 1 } &amp; &amp; \frac{ 2 }{ 1 } &amp; &amp; \frac{ 15 }{ 8 } &amp; &amp; \frac{ 137 }{ 73 } \end{array} $$</span> <span class="math-container">$$ 137 \cdot 8 - 73 \cdot 15 = 1 $$</span> </p>
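<p>An added remark (not by the original answerer): the same computation is what the extended Euclidean algorithm automates, and it also answers the original question by producing an inverse of $r$ modulo $p$. A minimal sketch:</p> <pre><code># Extended Euclidean algorithm: returns (g, s, t) with s*a + t*b = g = gcd(a, b).
def ext_gcd(a, b):
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

g, s, t = ext_gcd(137, 73)
print(g, s, t, 137 * s + 73 * t)   # 1 8 -15 1

# Inverse of r modulo a prime p (r must not be divisible by p).
def inverse_mod(r, p):
    g, s, _ = ext_gcd(r, p)
    assert g == 1
    return s % p

q = inverse_mod(73, 137)
print(q, (73 * q) % 137)   # q with 73*q = 1 (mod 137)
</code></pre>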
2,572,032
<p>I'm looking for help with <strong>(b)</strong> and <strong>(c)</strong> specifically. I'm posting <strong>(a)</strong> for completeness.</p> <p><strong>(a)</strong> Show convergence for $a_n=\sqrt{n+1}-\sqrt{n}$ towards $0$ and test $\sqrt{n}a_n$ for convergence.</p> <p><strong>(b)</strong> Show $b_n=\sqrt[k]{n+1}-\sqrt[k]{n}$ converges towards $0$ for all $k \geq 2$.</p> <p><strong>(c)</strong> For which $\alpha\in\mathbb{Q}_+$ does $n^\alpha b_n$ converge?</p> <hr> <p>I'm pretty sure I solved <strong>(a)</strong>. I have proven the convergence of $a_n$ by using the fact that $$\sqrt{n}&lt;\sqrt{n+1}\leq\sqrt{n}+\frac{1}{2\sqrt{n}}$$ which holds true since $$(\sqrt{n}+\frac{1}{2\sqrt{n}})^2=n+1+\frac{1}{4n}\geq n+1\,.$$ This gives us $$0&lt;\sqrt{n+1}-\sqrt{n}\leq\frac{1}{2\sqrt{n}}$$ and after applying the squeeze theorem with noting that $\frac{1}{2\sqrt{n}}\longrightarrow0$ we can tell that also $a_n\longrightarrow0$.</p> <p>Now $x_n=\sqrt{n}a_n=\sqrt{n}(\sqrt{n+1}-\sqrt{n})$.</p> <p>We have \begin{align*}\sqrt{n}(\sqrt{n+1}-\sqrt{n})&amp;=\sqrt{n}\sqrt{n+1}-\sqrt{n}\sqrt{n}\\&amp;=\sqrt{n(n+1)}-n\\&amp;=\sqrt{n^2+n}-n\\&amp;=\frac{(\sqrt{n^2+n}-n)(\sqrt{n^2+n}+n)}{\sqrt{n^2+n}+n}\\&amp;=\frac{n^2+n-n^2}{\sqrt{n^2+n}+n}\\&amp;=\frac{n}{\sqrt{n^2+n}+n}\\&amp;=\frac{n}{n\sqrt{1+\frac{1}{n}}+n}\\&amp;=\frac{1}{\sqrt{1+\frac{1}{n}}+1}\end{align*}</p> <p>and hence since the harmonic sequence $\frac{1}{n}$ converges towards 0 we have $$\text{lim}_{n\rightarrow\infty} \frac{1}{\sqrt{1+\frac{1}{n}}+1} = \frac{1}{1+1} = \frac{1}{2}\,._{\,\,\square}$$</p>
Nosrati
108,128
<p><strong>Hint:</strong> It might be useful to apply the mean-value theorem to the differentiable function $f(x)=\sqrt[k]{x}$ on $[n,n+1]$: there exists $\xi\in[n,n+1]$ such that $$\sqrt[k]{n+1}-\sqrt[k]{n}=f'(\xi)=\dfrac{1}{k\,\xi^{(k-1)/k}}\leq\dfrac{1}{k\,n^{(k-1)/k}}$$</p>
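<p>Added illustration (not part of the original hint): the bound can be checked numerically for a few values of $k$ and $n$, which also makes it plausible that $b_n\to 0$.</p> <pre><code># Check the MVT bound (n+1)^(1/k) - n^(1/k) is at most 1 / (k * n^((k-1)/k)).
for k in (2, 3, 5):
    for n in (1, 10, 100, 10**6):
        diff = (n + 1) ** (1 / k) - n ** (1 / k)
        bound = 1 / (k * n ** ((k - 1) / k))
        assert diff &lt;= bound + 1e-12   # small tolerance for rounding
        print(k, n, diff, bound)
</code></pre>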
24,704
<p>It seems that often in using counting arguments to show that a group of a given order cannot be simple, it is shown that the group must have at least <span class="math-container">$n_p(p^n-1)$</span> elements, where <span class="math-container">$n_p$</span> is the number of Sylow <span class="math-container">$p$</span>-subgroups.</p> <blockquote> <p>It is explained that the reason this is the case is because distinct Sylow <span class="math-container">$p$</span>-subgroups intersect only at the identity, which somehow follows from Lagrange's Theorem.</p> </blockquote> <p>I cannot see why this is true. </p> <p>Can anyone quicker than I tell me why? I know it's probably very obvious. </p> <p>Note: This isn't a homework question, so if the answer is obvious I'd really just appreciate knowing why. </p> <p>Thanks!</p>
Myself
5,189
<p>It simply isn't true, Sylow p-subgroups can very well intersect non-trivially, Plop gave an example thereof.</p> <p>Well, it seems like you actually cannot say the following, see comments. I'm just leaving it here as a mistake one shouldn't make, so I won't mind if a moderator deletes it since it's not an actual answer.</p> <p>[wrong]You could say that the number of elements of order $p$ is at least $n_p(p^n-p^{n-1})+p^{n-1}$, which is the case when all $p$-groups intersect maximally. [/wrong] Note that in this case however the intersection of all Sylow $p$-subgroups is a normal subgroup (even a characteristic subgroup, it is called $\mathbf O_p(G)$), so this cannot occur in a simple group.</p>
624,002
<p>Determine whether $\mathbb{Z}\times \mathbb{Z}$ and $\mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}$ are isomorphic groups or not.</p> <p>pf) Suppose that these are isomorphic. Note that $\mathbb{Z}\times \mathbb{Z}$ is a subgroup of $\mathbb{Z}\times \mathbb{Z}$ and $\mathbb{Z}\times \mathbb{Z}\times\left \{ 0 \right \}$ is a subgroup of $\mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}$. Since $\mathbb{Z}\times \mathbb{Z}$ and $\mathbb{Z}\times \mathbb{Z}\times\left \{ 0 \right \}$ are isomorphic, $\mathbb{Z}\times \mathbb{Z}/\mathbb{Z}\times \mathbb{Z}$ and $\mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}/\mathbb{Z}\times \mathbb{Z}\times \left \{ 0 \right \}$ are isomorphic. But the first one is isomorphic to the trivial group and the second one is isomorphic to $\mathbb{Z}$. It is a contradiction.</p> <p>Is my proof right? If not, is there another proof?</p>
Edward ffitch
26,243
<p>Suppose $\mathbb{Z}\times \mathbb{Z}$ is isomorphic to $\mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}$, via a map $\phi$. Then, as $(1,0)$, $(0,1)$ generate $\mathbb{Z}\times \mathbb{Z}$, $\phi(1,0)$ and $\phi(0,1)$ generate $\mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}$. But $\mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}$ cannot be generated by fewer than 3 elements, a contradiction. So $\mathbb{Z}\times \mathbb{Z}$ and $\mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}$ are not isomorphic.</p>
458,922
<p>Recently I've stumbled across this claim:</p> <blockquote> <p>Peano axioms can be deduced in ZFC</p> </blockquote> <p>I found a lot of info regarding this claim (e.g. what would (one version of) the natural numbers look like within the universe of sets: $0 = \emptyset$, $n + 1 = n \cup \{n\}$), but not what the deductions (ZFC $\vdash$ PA) actually look like. What do they look like? </p>
Carl Mummert
630
<p>The most direct, model-theoretic method to prove the existence of a model of PA within ZFC is as follows:</p> <ol> <li><p>First, we formalize the syntax of PA within ZFC. The method is similar to the one used for formalize syntax for the second incompleteness theorem. The main result is a formula $S(n,m)$ which says that $n,m\in \omega$, and $n$ codes a formula $\phi_n(x_1)$ in the language of PA with one free variable and $$(\omega, +,\times,0,1, =) \vDash \phi_n(\ulcorner m\urcorner).$$ Developing this formula $S$ takes some time but the methods are completely standard</p></li> <li><p>Then, using the axiom of separation, one uses ZFC to form the set $$I = \{(n,m) \in \omega^2 : \text{$m$ is minimal such that } S(n,m) \}$$</p></li> <li><p>Now one must verify <em>in a single ZFC proof</em> that $(\omega, +,\times,0,1,=) \vDash PA$. This goes as follows. Using the formalization from part 1, ZFC proves, for some $k$, that there are coded axioms $n_1, \ldots, n_k$ of PA such that for any coded axiom $n$ of PA, either $n=n_i$ for some $i &lt; k$ or $n$ is a coded induction axiom. In the first case, we can do a proof by cases to verify that $(\omega, +,\times,0,1,=)$ satisfies the axiom coded by $n_i$. In the latter case, where $n$ is an induction axiom, we use the single set $I$ from part 2 as a parameter to verify that the axiom coded by $n$ holds in $(\omega, +,\times,0,1,=)$. Namely: if the induction axiom failed for some coded formula $\phi(x)$ then if we let $\phi_r(x)$ be the negation of $\phi(x)$ the set $\{ m \in \omega : (r,m) \in I\}$ would have no least element. </p></li> </ol>
3,045,491
<p>The set <span class="math-container">$\{(x,y,z) \in R^3: x^8+y^4+z^8-16=0\}$</span> is a bounded set? I guess it isn't a bounded set because from <span class="math-container">$x,y,z \geq 0$</span> i suppose it's only inferiorly bounded. Is it correct? Please tell me the correct answer.</p>
Clive Newstead
19,542
<p><strong>Hint:</strong> Note that <span class="math-container">$x^8$</span>, <span class="math-container">$y^4$</span> and <span class="math-container">$z^8$</span> are all nonnegative. What can you say about the quantity <span class="math-container">$x^8+y^4+z^8$</span> if <span class="math-container">$|x|&gt;\sqrt{2}$</span>, <span class="math-container">$|y|&gt;2$</span> or <span class="math-container">$|z|&gt;\sqrt{2}$</span>?</p>
2,741,229
<p>I have searched a lot, but i haven't found any proof about that statement. I have checked the proof of</p> <blockquote> <p>If <span class="math-container">$f$</span> is differentiable, then <span class="math-container">$f$</span> is continuous</p> </blockquote> <p>but it's not the same argument I think. Also, I want to know what's your opinion about the statement</p> <blockquote> <p>If derivative of <span class="math-container">$f$</span> is not continuous, then <span class="math-container">$f$</span> is not continuous</p> </blockquote>
José Carlos Santos
446,262
<p>If <span class="math-container">$f$</span> is differentiable, then <span class="math-container">$f$</span> is continuous. The continuity of <span class="math-container">$f'$</span> is irrelevant here.</p> <p>In particular, <em>even if <span class="math-container">$f'$</span> is discontinuous, <span class="math-container">$f$</span> is continuous</em>.</p>
2,741,229
<p>I have searched a lot, but i haven't found any proof about that statement. I have checked the proof of</p> <blockquote> <p>If <span class="math-container">$f$</span> is differentiable, then <span class="math-container">$f$</span> is continuous</p> </blockquote> <p>but it's not the same argument I think. Also, I want to know what's your opinion about the statement</p> <blockquote> <p>If derivative of <span class="math-container">$f$</span> is not continuous, then <span class="math-container">$f$</span> is not continuous</p> </blockquote>
arp
370,974
<p>Answering only part of the question: "If derivative of f is not continuous, then f is not continuous": As perhaps the simplest counter-example, the absolute value function is continuous but not continuously differentiable. </p> <p>This does not disprove the opposite statement, of course.</p>
129,132
<p>Both the ratio test and the root test define a number (via a limit).</p> <p>If both limits exist (and shows that the series is convergent), what (if any) is the relation between the 2 numbers ? are they equal ? What is the relation (if any) between them and the original series (other than the fact that they say the series is convergent) ?</p>
David Mitra
18,986
<p>For your first question:</p> <p>If both limits exist, they must be equal to each other. In fact, for a sequence of positive terms <span class="math-container">$(a_n)$</span>, if <span class="math-container">$\lim\limits_{n\rightarrow\infty} {a_{n+1}\over a_n}$</span> exists, then so does <span class="math-container">$\lim\limits_{n\rightarrow\infty}\root n \of {a_n}$</span> and moreover, in this case, the two limits are equal to each other. This follows from a more general fact contained in <a href="http://web.archive.org/web/20130728110356/http://math.uga.edu/~pete/243series4.pdf" rel="nofollow noreferrer">these notes</a> of Pete L. Clark.</p> <br> <p>I'm not sure if the following answers your second question, but:</p> <p>In general, there is no relationship between the value of the limit <span class="math-container">$\lim\limits_{n\rightarrow\infty} {a_{n+1}\over a_n}$</span> and the value of the sum <span class="math-container">$\sum\limits_{n=1}^\infty a_n$</span>.<br /> Indeed, here is a silly example showing this:</p> <p>Suppose <span class="math-container">$(a_n)$</span> is a sequence of positive terms and that <span class="math-container">$\lim\limits_{n\rightarrow\infty} {a_{n+1}\over a_n}=r&lt;1$</span>. Then <span class="math-container">$\sum\limits_{n=1}^\infty a_n$</span> converges, say to <span class="math-container">$S\ne 0$</span>. Now let <span class="math-container">$a&gt;0$</span> and consider the sequence <span class="math-container">$(b_n)$</span> defined by <span class="math-container">$b_n=a\cdot a_n$</span>. Here we have <span class="math-container">$\lim\limits_{n\rightarrow\infty} {b_{n+1}\over b_n} =\lim\limits_{n\rightarrow\infty} {a_{n+1}\over a_n}= r$</span>. But, <span class="math-container">$\sum\limits_{n=1}^\infty b_n=aS$</span>.</p> <p>So if, <span class="math-container">$\lim\limits_{n\rightarrow\infty} {a_{n+1}\over a_n}=r&lt;1$</span>, the corresponding series could possibly converge to any given positive number. The same remark holds for the limit in the Root test.</p>
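<p>Added illustration (not part of the original answer; the sequence $a_n = n/2^n$ is an arbitrary example): the ratio $a_{n+1}/a_n$ and the root $\sqrt[n]{a_n}$ can be compared numerically, and both approach $1/2$.</p> <pre><code># Compare ratio and root "test quantities" for a_n = n / 2^n.
def a(n):
    return n / 2.0 ** n

for n in (10, 100, 500):
    ratio = a(n + 1) / a(n)
    root = a(n) ** (1.0 / n)
    print(n, ratio, root)
# Both columns approach 0.5 as n grows.
</code></pre>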
2,794,962
<p>Give is: $C$ which is a closed curve which forms the surface $\Sigma$., $\vec{v} $ which is a constant vector. </p> <p>I should prove the following expression without using Stokes' Theorem: </p> <p>$$\oint_C \vec{v} \cdot d\vec{l} = 0$$</p> <p>How do I go about doing it for an arbitrarily closed (even overlapping) curve ? </p>
Alecto Irene Perez
242,788
<p><strong>Problem statement:</strong></p> <ul> <li>You have a biased coin. </li> <li>If you flip the coin, then with probability $p$ the coin will come up heads, and otherwise it'll come up tails.</li> <li>You're allowed to continue flipping the coin until it comes up tails, or you've flipped it $N$ times (whichever comes first). </li> </ul> <p><strong>There are two approaches to this problem. The algebraic approach, and the pure probability approach.</strong> I'll cover the algebraic approach first, as it may have more familiar notation for a beginner, however the pure probability approach is <em>simpler and cleaner</em>. </p> <p>Let's look at an example case. If $N=3$, the possible results are: <code>T</code> (tails on first flip), <code>HT</code> (1 head, then 1 tail), <code>HHT</code> (2 heads, then 1 tail), and <code>HHH</code> (3 heads).</p> <p>The probability of getting all heads is pretty straightforward: it's just $p^N$. Otherwise, the probability of getting $n$ heads (with $n&lt;N$) is either $0$ (in the case of negative heads or more than $N$ heads), or it's $p^n(1-p)$. In formal probability notation, we can write this as follows.</p> <p>Let $X$ represent the number of heads you get from carrying out this process. For a given number of heads $n$,</p> <p>$$\Pr(X=n)=\begin{cases} p^n(1-p)&amp; \text{if}\; 0\leq n&lt;N,\\ p^n&amp; \text{if}\; n=N,\\ 0&amp; \text{otherwise} \end{cases}$$</p> <p>This means that the expected number of heads $\text E(X)$ is given by:</p> <p>$$\text E(X)=N p^N+\sum_{n=0}^{N-1}n p^n(1-p)=N p^N+(1-p)\sum_{n=0}^{N-1}n p^n$$. </p> <p>Simplifying this formula, we obtain:</p> <p>$$\text E(X) = \frac {p(1-p^N)} {(1-p)}$$</p> <p>This formula gives the answer you calculated:</p> <p>$$0.3 (1 - 0.3^2)\,/\,0.7=0.39$$</p> <p><strong>Pure probability approach:</strong> Let's generalize. What is the expected number of heads, assuming there's no limit to the number of tosses? (Your problem is easy to solve once we know this). </p> <p>When there's no limit on the number of flips, then the formula becomes $\text E(X)=\frac p {1-p}$. This fits our intuition: if $p$ is closer to $1$, then on average you'll have a lot more successes. </p> <p>Your problem asked </p> <blockquote> <p>What is the expected number of heads, given that we know we got $N$ heads or fewer?</p> </blockquote> <p>And we can write this as $\text E(X\,|\,X\leq N)$. This means the expectation of $X$, given that the number of heads $X$ is less than or equal to our limit. </p> <p>Basically, we're removing all the cases where there were more than $N$ heads. To solve you're problem, just subtract out all cases where there were more than $N$ heads:</p> <p>$$\text E(X\,|\,X\leq N)=\text E(X)-\text E(X\,|\,X&gt;N)$$</p> <p>This particular distribution is geometric. That means that $\text E(X\,|\,X&gt;N+1)=p\, \text E(X\,|\,X&gt;N)$. Through induction, we obtain $$\text E(X\,|\,X&gt;N)=p^N\,\text E(X)$$</p> <p>It follows</p> <p>$$\text E(X\,|\,X\leq N)=\text E(X)\;-\;p^N\,\text E(X)$$</p> <p>Which simplifies to</p> <p>$$\text E(X\,|\,X\leq N)=(1-p^N)\,\text E(X)$$</p> <p>We have that $\text E(X)=\frac p {1-p}$, so</p> <p>$$\text E(X\,|\,X\leq N)=\frac {p\,(1-p^N)} {1-p}$$</p>
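<p>Added check (not part of the original answer): the closed form $\mathrm E(X)=\frac{p(1-p^N)}{1-p}$ can be confirmed by directly simulating the capped flipping process; the trial count below is an arbitrary choice.</p> <pre><code>import random

def simulate(p, N, trials=200_000):
    total = 0
    for _ in range(trials):
        heads = 0
        # flip until a tail appears or N heads have been flipped
        while heads &lt; N and random.random() &lt; p:
            heads += 1
        total += heads
    return total / trials

p, N = 0.3, 2
print(simulate(p, N))                 # approximately 0.39
print(p * (1 - p ** N) / (1 - p))     # exact: 0.39
</code></pre>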
3,728,963
<p>(Exercise 21 Chapter 2, Baby Rudin) I am trying to prove</p> <blockquote> <p>Let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be separated subsets of some <span class="math-container">$\mathbb{R}^k$</span>, suppose <span class="math-container">$\textbf{a} \in A$</span>, <span class="math-container">$\textbf{b} \in B$</span> and define <span class="math-container">$\textbf{p}(t) = (1-t)\textbf{a} + t\textbf{b}$</span>, for <span class="math-container">$t \in \mathbb{R}$</span>. Put <span class="math-container">$A_0 = \textbf{p}^{-1}(A), B_0 = \textbf{p}^{-1}(B)$</span>. [Thus, <span class="math-container">$t \in A_o$</span> iff <span class="math-container">$\textbf{p}(t) \in A$</span>.]</p> </blockquote> <blockquote> <p>Prove that <span class="math-container">$A_0$</span> and <span class="math-container">$B_0$</span> are separated subsets of <span class="math-container">$\mathbb{R}$</span>. My attempt so far:</p> </blockquote> <blockquote> <p>a. Assume to the contrary that <span class="math-container">$\exists y$</span> such that <span class="math-container">$y \in A_0 \cap \overline{B_0}$</span> which implies <span class="math-container">$y \in A_0$</span> and <span class="math-container">$y \in \overline{B_0}$</span>. Then, <span class="math-container">$\textbf{p}(y) \in A$</span> and either <span class="math-container">$y \in B_0$</span> or <span class="math-container">$y$</span> is a limit point of <span class="math-container">$B_0$</span>. If <span class="math-container">$y \in B_0$</span>, then <span class="math-container">$\textbf{p}(y) \in B$</span> which would contradict that <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are separated. If <span class="math-container">$y$</span> is a limit point of <span class="math-container">$B_0$</span>, ...</p> </blockquote> <p><strong>My question</strong>: I am having trouble completing the proof. <strong>Can someone please suggest how this proof can be completed?</strong></p> <p>P.S. I found <a href="https://math.stackexchange.com/questions/1731901/proof-for-separated-sets">this</a> proof but I have no idea why the idea of continuity was introduced in the first place, or even how one knows that <span class="math-container">$p$</span> is continuous, as the answer claims. I would like to complete this proof without using the concept of continuity, ideally, since Rudin hasn't introduced the concept of continuity so far (till Chapter 2).</p> <p><strong>Edit</strong>: We now claim that <span class="math-container">$\mathbf{p}(t)$</span> is continuous on all of <span class="math-container">$\mathbb{R}$</span>.</p> <p>Proof: Let <span class="math-container">$\epsilon &gt; 0$</span> and <span class="math-container">$c \in \mathbb{R}$</span>. Suppose <span class="math-container">$\left|t-c\right| &lt; \delta$</span> where <span class="math-container">$\delta = \frac{\epsilon}{\left|b-a\right|} &gt; 0$</span>. 
Then, we have</p> <p><span class="math-container">$$\left|\mathbf{p}(t)-\mathbf{p}(c)\right| = \left|(1-t)\mathbf{a} + t\mathbf{b}-\mathbf{a}(1-c)-c\mathbf{b}\right| = (t-c)\left|\mathbf{b}-\mathbf{a}\right| &lt; \frac{\epsilon}{\left|\mathbf{b}-\mathbf{a}\right|} \cdot \left|\mathbf{b}-\mathbf{a}\right| = \epsilon$$</span> and we are done.</p> <p>Definition of a continuous function:</p> <blockquote> <p>Suppose <span class="math-container">$X, Y$</span> are metric spaces, <span class="math-container">$E \subset X, p \in E$</span> and <span class="math-container">$f$</span> maps <span class="math-container">$E$</span> into <span class="math-container">$Y$</span>. Then, <span class="math-container">$f$</span> is said to be continuous at <span class="math-container">$p$</span> if for every <span class="math-container">$\epsilon &gt; 0, \exists \delta &gt; 0$</span> such that <span class="math-container">$d_Y(f(x), f(p))&lt; \epsilon$</span> for all points <span class="math-container">$x \in E$</span> for which <span class="math-container">$d_X(x, p) &lt; \delta$</span></p> </blockquote> <p>Definition of a closed set:</p> <blockquote> <p><span class="math-container">$E$</span> is closed if every limit point of <span class="math-container">$E$</span> is a point of <span class="math-container">$E$</span>.</p> </blockquote> <p>Definition of closure of a set (denoted by <span class="math-container">$\bar{E}$</span>):</p> <blockquote> <p><span class="math-container">$\bar{E} = E \cup E'$</span> where <span class="math-container">$E'$</span> is the set of limit points of <span class="math-container">$E$</span>.</p> </blockquote> <p>Definition of a limit point</p> <blockquote> <p>A point <span class="math-container">$p$</span> is a limit point of a set <span class="math-container">$E$</span> if every neighborhood of <span class="math-container">$p$</span> contains a point <span class="math-container">$q \neq p$</span> such that <span class="math-container">$q \in E$</span>.</p> </blockquote>
Justin Young
17,892
<p>Ok, here we go: this is a general proof of the following:</p> <p>If <span class="math-container">$p:X\to Y$</span> is a continuous function and <span class="math-container">$S\subseteq Y$</span> is a subset, then <span class="math-container">$\overline{p^{-1}(S)} \subseteq p^{-1}\left (\overline S \right )$</span>.</p> <p><strong>Proof:</strong> Assume that <span class="math-container">$x \in \overline{p^{-1}(S)}$</span>. If <span class="math-container">$x\in p^{-1}(S)$</span>, then clearly we are done since <span class="math-container">$S\subset \overline S$</span>. If <span class="math-container">$x\not\in p^{-1}(S)$</span>, then <span class="math-container">$x$</span> is a limit point of <span class="math-container">$p^{-1}(S)$</span>. Consider <span class="math-container">$p(x)$</span>. We want to show <span class="math-container">$p(x) \in \overline S$</span>, then we need to show in this case that <span class="math-container">$p(x)$</span> is a limit point of <span class="math-container">$S$</span>. Consider a neighborhood <span class="math-container">$B(p(x), \epsilon)$</span>, for <span class="math-container">$\epsilon &gt;0$</span>. By continuity, there exists a <span class="math-container">$\delta &gt;0$</span> such that if <span class="math-container">$d(x,z) &lt; \delta$</span>, then <span class="math-container">$d(p(x), p(z)) &lt; \epsilon$</span> (I omit the decoration on the metrics for readability, do not assume the metrics are the same). Now, <span class="math-container">$x$</span> is a limit point of <span class="math-container">$p^{-1}(S)$</span>, and <span class="math-container">$B(x,\delta)$</span> is a neighborhood of <span class="math-container">$x$</span>, therefore, by definition there exists <span class="math-container">$q\in B(x,\delta)\cap p^{-1}(S)$</span>. It follows that <span class="math-container">$d(p(q),p(x)) &lt; \epsilon$</span>, and <span class="math-container">$p(q) \in S$</span>. We have now shown that <span class="math-container">$p(x)$</span> is a limit point of <span class="math-container">$S$</span>. This completes the proof of <span class="math-container">$\overline{p^{-1}(S)} \subseteq p^{-1}\left (\overline S \right )$</span>.</p> <p>Applying this to your specific function, we conclude: <span class="math-container">$\overline{A_0}\cap B_0 = \overline{p^{-1}(A)}\cap B_0 \subseteq p^{-1}\left (\overline A \right )\cap B_0 = p^{-1}\left (\overline A \cap B \right )= \emptyset$</span>, and by symmetry we get the corresponding inequality by switching the roles of <span class="math-container">$A$</span> and <span class="math-container">$B$</span>.</p> <p>Here is my counterexample for equality: Let <span class="math-container">$k=2$</span> and define <span class="math-container">$A = \{(-1,0)\}\cup \{0\}\times (0,1]$</span>, and <span class="math-container">$B = \{(1,0)\}$</span>, then let <span class="math-container">$\textbf a = (-1,0)$</span> and <span class="math-container">$\textbf b = (1,0)$</span>. If we define <span class="math-container">$p:\mathbb R \to \mathbb R^2$</span> by <span class="math-container">$p(t) = (1-t)\textbf a + t\textbf b$</span>, then you can verify that <span class="math-container">$p^{-1}(A) = \{0\}$</span>, which is closed, so <span class="math-container">$\overline{p^{-1}(A)}= \{0\}$</span>, but <span class="math-container">$A' =\{(0,0)\}$</span>, so <span class="math-container">$p^{-1}(\overline A) = \{0, 1/2\}$</span>. 
Thus, the inclusion <span class="math-container">$\overline{p^{-1}(A)} \subseteq p^{-1}\left (\overline A \right )$</span> is strict in this case.</p>
3,728,963
<p>(Exercise 21 Chapter 2, Baby Rudin) I am trying to prove</p> <blockquote> <p>Let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be separated subsets of some <span class="math-container">$\mathbb{R}^k$</span>, suppose <span class="math-container">$\textbf{a} \in A$</span>, <span class="math-container">$\textbf{b} \in B$</span> and define <span class="math-container">$\textbf{p}(t) = (1-t)\textbf{a} + t\textbf{b}$</span>, for <span class="math-container">$t \in \mathbb{R}$</span>. Put <span class="math-container">$A_0 = \textbf{p}^{-1}(A), B_0 = \textbf{p}^{-1}(B)$</span>. [Thus, <span class="math-container">$t \in A_o$</span> iff <span class="math-container">$\textbf{p}(t) \in A$</span>.]</p> </blockquote> <blockquote> <p>Prove that <span class="math-container">$A_0$</span> and <span class="math-container">$B_0$</span> are separated subsets of <span class="math-container">$\mathbb{R}$</span>. My attempt so far:</p> </blockquote> <blockquote> <p>a. Assume to the contrary that <span class="math-container">$\exists y$</span> such that <span class="math-container">$y \in A_0 \cap \overline{B_0}$</span> which implies <span class="math-container">$y \in A_0$</span> and <span class="math-container">$y \in \overline{B_0}$</span>. Then, <span class="math-container">$\textbf{p}(y) \in A$</span> and either <span class="math-container">$y \in B_0$</span> or <span class="math-container">$y$</span> is a limit point of <span class="math-container">$B_0$</span>. If <span class="math-container">$y \in B_0$</span>, then <span class="math-container">$\textbf{p}(y) \in B$</span> which would contradict that <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are separated. If <span class="math-container">$y$</span> is a limit point of <span class="math-container">$B_0$</span>, ...</p> </blockquote> <p><strong>My question</strong>: I am having trouble completing the proof. <strong>Can someone please suggest how this proof can be completed?</strong></p> <p>P.S. I found <a href="https://math.stackexchange.com/questions/1731901/proof-for-separated-sets">this</a> proof but I have no idea why the idea of continuity was introduced in the first place, or even how one knows that <span class="math-container">$p$</span> is continuous, as the answer claims. I would like to complete this proof without using the concept of continuity, ideally, since Rudin hasn't introduced the concept of continuity so far (till Chapter 2).</p> <p><strong>Edit</strong>: We now claim that <span class="math-container">$\mathbf{p}(t)$</span> is continuous on all of <span class="math-container">$\mathbb{R}$</span>.</p> <p>Proof: Let <span class="math-container">$\epsilon &gt; 0$</span> and <span class="math-container">$c \in \mathbb{R}$</span>. Suppose <span class="math-container">$\left|t-c\right| &lt; \delta$</span> where <span class="math-container">$\delta = \frac{\epsilon}{\left|b-a\right|} &gt; 0$</span>. 
Then, we have</p> <p><span class="math-container">$$\left|\mathbf{p}(t)-\mathbf{p}(c)\right| = \left|(1-t)\mathbf{a} + t\mathbf{b}-\mathbf{a}(1-c)-c\mathbf{b}\right| = (t-c)\left|\mathbf{b}-\mathbf{a}\right| &lt; \frac{\epsilon}{\left|\mathbf{b}-\mathbf{a}\right|} \cdot \left|\mathbf{b}-\mathbf{a}\right| = \epsilon$$</span> and we are done.</p> <p>Definition of a continuous function:</p> <blockquote> <p>Suppose <span class="math-container">$X, Y$</span> are metric spaces, <span class="math-container">$E \subset X, p \in E$</span> and <span class="math-container">$f$</span> maps <span class="math-container">$E$</span> into <span class="math-container">$Y$</span>. Then, <span class="math-container">$f$</span> is said to be continuous at <span class="math-container">$p$</span> if for every <span class="math-container">$\epsilon &gt; 0, \exists \delta &gt; 0$</span> such that <span class="math-container">$d_Y(f(x), f(p))&lt; \epsilon$</span> for all points <span class="math-container">$x \in E$</span> for which <span class="math-container">$d_X(x, p) &lt; \delta$</span></p> </blockquote> <p>Definition of a closed set:</p> <blockquote> <p><span class="math-container">$E$</span> is closed if every limit point of <span class="math-container">$E$</span> is a point of <span class="math-container">$E$</span>.</p> </blockquote> <p>Definition of closure of a set (denoted by <span class="math-container">$\bar{E}$</span>):</p> <blockquote> <p><span class="math-container">$\bar{E} = E \cup E'$</span> where <span class="math-container">$E'$</span> is the set of limit points of <span class="math-container">$E$</span>.</p> </blockquote> <p>Definition of a limit point</p> <blockquote> <p>A point <span class="math-container">$p$</span> is a limit point of a set <span class="math-container">$E$</span> if every neighborhood of <span class="math-container">$p$</span> contains a point <span class="math-container">$q \neq p$</span> such that <span class="math-container">$q \in E$</span>.</p> </blockquote>
DanielWainfleet
254,665
<p>This is a response to a query by the proposer in a comment to the A from @WilliamElliot.</p> <p>Sets <span class="math-container">$A,B$</span> are separated iff <span class="math-container">$A\cap \bar B=B\cap \bar A=\phi.$</span> Sets <span class="math-container">$A,B$</span> are completely separated iff there exist disjoint open <span class="math-container">$U,V$</span> with <span class="math-container">$A\subseteq U$</span> and <span class="math-container">$B\subseteq V.$</span></p> <p>If <span class="math-container">$(X,d)$</span> is a metric space and <span class="math-container">$A, B$</span> are separated subsets of <span class="math-container">$X$</span> then <span class="math-container">$A, B$</span> are completely separated.</p> <p>PROOF: For each <span class="math-container">$a\in A$</span> take <span class="math-container">$r_a\in \Bbb R^+$</span> such that <span class="math-container">$B\cap B_d(a,r_a)=\phi.$</span> For each <span class="math-container">$b\in B$</span> take <span class="math-container">$s_b\in \Bbb R^+$</span> such that <span class="math-container">$A\cap B_d(b,s_b)=\phi.$</span></p> <p>Let <span class="math-container">$U=\cup_{a\in A}B_d(a,r_a/2)$</span> and <span class="math-container">$V=\cup_{b\in B}B_d(b,s_b/2).$</span></p> <p>To show <span class="math-container">$U\cap V=\phi,$</span> suppose instead that <span class="math-container">$c\in U\cap V.$</span> Take <span class="math-container">$a\in A$</span> such that <span class="math-container">$c\in B_d(a,r_a/2).$</span> Take <span class="math-container">$b\in B$</span> such that <span class="math-container">$c\in B_d(b,s_b/2).$</span> Then <span class="math-container">$$d(a,b)\le d(a,c)+d(c,b)&lt;r_a/2+s_b/2\le \max \{r_a,s_b\}=^{def}K\in \{r_a,s_b\}.$$</span></p> <p>If <span class="math-container">$K=r_a$</span> then <span class="math-container">$d(a,b)&lt;K=r_a,$</span> contrary to the def'n of <span class="math-container">$r_a.$</span></p> <p>If <span class="math-container">$K=s_b$</span> then <span class="math-container">$d(b,a)&lt;K=s_b,$</span> contrary to the def'n of <span class="math-container">$s_b.$</span></p> <p>So <span class="math-container">$c\in U\cap V$</span> cannot exist.</p>
2,508,499
<p>How does it hold that $\mathbb R \subseteq B(0,2)$, where $\big\langle\mathbb R,d\big\rangle$ is a metric space and $d$ is the discrete metric?</p> <p>By showing this we conclude that $\mathbb R$ is bounded.</p>
K.Power
306,685
<p>let $x\in \mathbb R$. Then $d(x,0)=1$ if $x\neq 0$, and $d(x,0)=0$ otherwise. Hence $d(x,0)\leq 1 &lt;2$ for all $x\in \mathbb R$.</p>
2,144,140
<p>We know that $(a,b)$ are open by definition. How do you prove that some arbitrary union of $(a,b)$ cannot give you $[c,d]$ ?</p>
User8976
98,414
<p>In $\Bbb R$ the only subsets that are both open and closed are $\Bbb R$ and $\emptyset$. Now the complement of $[c,d]$ is $(-\infty ,c) \cup (d, + \infty)$, which is open, hence $[c,d]$ is closed. A union of open intervals is open, so if such a union were equal to $[c,d]$, then $[c,d]$ would be a nonempty proper subset of $\Bbb R$ that is both open and closed, which is impossible.</p>
4,187,498
<p>I am studying the proof of the Prime Number Theorem and I want to show that the function <span class="math-container">$\frac{\zeta'(s)}{\zeta(s)}$</span> has a simple pole at <span class="math-container">$s=1$</span>.</p> <p>I think that if I can find the Laurent series expansion of <span class="math-container">$\zeta(s)$</span>, I could then find the same for <span class="math-container">$\frac{\zeta'(s)}{\zeta(s)}$</span> and then conclude that it has a simple pole at <span class="math-container">$s=1$</span>.(Correct me if I am wrong.)</p> <p>But, how do I find the Laurent expansion ? I know that <span class="math-container">$\zeta(s)$</span> has a simple pole at <span class="math-container">$s=1$</span> but how can I use this to find the complete expansion ? Also, do I even need to find the complete expansion to show that <span class="math-container">$\frac{\zeta'(s)}{\zeta(s)}$</span> has a simple pole at <span class="math-container">$s=1$</span> ? Is there any other way ?</p> <p>Please help. Any help/hint shall be highly appreciated.</p>
TravorLZH
748,964
<p>Finding Laurent expansion for <span class="math-container">$\zeta(s)$</span> is equivalent to finding a power series representation for</p> <p><span class="math-container">$$ F(s)=\zeta(s)-{1\over s-1} $$</span></p> <p>at <span class="math-container">$s=1$</span>. This means that we need to develop strategies allowing us to deduce a formula for <span class="math-container">$F^{(k)}(s)$</span>. This can be done by plugging the Dirichlet series representation of <span class="math-container">$\zeta(s)$</span> into Euler-Maclaurin formula:</p> <p><span class="math-container">$$ \zeta(s)=\sum_{n=1}^\infty{1\over n^s}={1\over s-1}+\frac12-s\int_1^\infty{\overline B_1(x)\over x^{s+1}}\mathrm dx $$</span></p> <p>Consequently we have for all <span class="math-container">$k&gt;1$</span> that</p> <p><span class="math-container">$$ F^{(k)}(1)={\mathrm d^k\over\mathrm ds^k}\left[-s\int_1^\infty{\overline B_1(x)\over x^{s+1}}\mathrm dx\right]_{s=1} $$</span></p> <p>After simplifications, we can observe that</p> <p><span class="math-container">$$ \begin{aligned} {\partial^k\over\partial s^k}[-sx^{-s-1}] &amp;=(-s)(-\log x)^kx^{-s-1}-k(-\log x)^{k-1}x^{-s-1} \\ &amp;=(-1)^kx^{-s-1}[(-s)\log^kx+k\log^{k-1}x] \\ \end{aligned} $$</span></p> <p>As a result, we have</p> <p><span class="math-container">$$ \begin{aligned} F^{(k)}(1) &amp;=(-1)^k\int_1^\infty{\overline B_1(x)[k\log^{k-1}x-\log^kx]\over x^2}\mathrm dx \\ &amp;=(-1)^k\int_1^\infty\overline B_1(x)\mathrm d\left(\log^kx\over x\right) \end{aligned} $$</span></p> <blockquote> <p>One can verify that this quantity converges</p> </blockquote> <p>To eliminate <span class="math-container">$\overline B_1(x)$</span> in the above integral, we apply Euler-Maclaurin formula to <span class="math-container">$\log^kx/x$</span> (for <span class="math-container">$k&gt;1$</span>):</p> <p><span class="math-container">\begin{aligned} \sum_{n=1}^N{\log^kn\over n} &amp;={\log^{k+1}N\over k+1}+{\log^kN\over2N}+\int_1^N\overline B_1(x)\mathrm d\left(\log^kx\over x\right) \\ &amp;={\log^{k+1}N\over k+1}+\gamma_k+o(1) \end{aligned}</span></p> <p>where <span class="math-container">$\gamma_k$</span> is the Stieltjes constants:</p> <p><span class="math-container">$$ \gamma_k=\int_1^\infty\overline B_1(x)\mathrm d\left(\log^kx\over x\right) $$</span></p> <p>As a result, we can plug Stieltjes constants back into <span class="math-container">$F(s)$</span> to get</p> <p><span class="math-container">$$ F(s)=F(1)+\sum_{k=1}^\infty{(-1)^k\gamma_k\over k!}(s-1)^k $$</span></p> <p>Now it remains to determine <span class="math-container">$F(1)$</span>, and using Euler-Maclaurin again on harmonic series allows us to determine <span class="math-container">$F(1)=\gamma$</span>. Consequently, the Laurent expansion of <span class="math-container">$\zeta(s)$</span> at <span class="math-container">$s=1$</span> is as follows</p> <p><span class="math-container">$$ \zeta(s)={1\over s-1}+\sum_{k=0}^\infty{(-1)^k\gamma_k\over k!}(s-1)^k $$</span></p> <p>where</p> <p><span class="math-container">$$ \gamma_k=\lim_{N\to\infty}\left\{\sum_{n=1}^N{\log^kn\over n}-{\log^{k+1}N\over k+1}\right\} $$</span></p> <blockquote> <p><span class="math-container">$\gamma_0=\gamma$</span> makes the above expression valid for <span class="math-container">$k\ge0$</span>.</p> </blockquote>
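<p>Added check (not part of the original answer): the limiting formula for $\gamma_k$ at the end can be evaluated numerically and compared with the known values $\gamma_0\approx 0.577216$ and $\gamma_1\approx -0.072816$; the cutoff $N=10^6$ is an arbitrary choice.</p> <pre><code>import math

def stieltjes_partial(k, N):
    # partial version of the limit defining gamma_k
    s = sum(math.log(n) ** k / n for n in range(1, N + 1))
    return s - math.log(N) ** (k + 1) / (k + 1)

for k in (0, 1):
    print(k, stieltjes_partial(k, 10**6))
# k = 0 gives roughly 0.577216 (Euler-Mascheroni), k = 1 roughly -0.072816.
</code></pre>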
1,298,730
<p>Find functions $f$ and $\alpha$ such that the improper Riemann-Stieltjes integral $\int_1^{\infty}|f|d\alpha$ converges, but $\int_1^{\infty}fd\alpha$ does not exist?</p> <p>I'm really not sure how to start this problem, and I haven't been able to find another post on here that has considered this.</p> <p>EDIT: I know that $\alpha$ needs to be some function which is not increasing or differentiable on $[1,\infty )$ since then absolute convergence implies conditional convergence</p> <p>Thank you,</p>
Community
-1
<p>How about the function:</p> <p>$$f(x)=\begin{cases}\frac{1}{\sqrt{n}} \text{ for } x \in [n,n+1) \text{ and odd } n\\ -\frac{1}{\sqrt{n}} \text{ for } x \in [n,n+1) \text{ and even } n \end{cases}$$</p> <p>and let $\alpha(x)=xf(x)$. </p> <p>Then for odd $n$, </p> <p>$$\int_{n}^{n+1} f(x) d\alpha(x)=\int_{n}^{n+1} f(x)\alpha'(x) dx=\int_{n}^{n+1} \frac{1}{n} dx=\frac{1}{n}$$</p> <p>$$\int_{n}^{n+1} |f(x)| d\alpha(x)=\int_{n}^{n+1} \frac{1}{n} dx=\frac{1}{n}$$</p> <p>For even $n$:</p> <p>$$\int_{n}^{n+1} f(x) d\alpha(x)=\frac{1}{n}$$ $$\int_{n}^{n+1} |f(x)| d\alpha(x)=-\frac{1}{n}$$</p> <p>So then $$\int_{1}^{\infty} |f(x)| d\alpha(x)=\sum\limits_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}=\ln 2&lt;\infty$$ $$\int_{1}^{\infty} f(x) d\alpha(x)=\sum\limits_{n=1}^{\infty} \frac{1}{n}=\infty$$</p>
1,711,087
<p>The sum of the numbers in a set is $30$. Find the number of elements in the set if the average of these numbers is seven less than the number of elements in the set.</p>
Sam
286,799
<p>Let the number of elements in the set be $x$.</p> <p>Then we have $$\frac{30}{x}=x-7$$ Implying, $$30 = x^2 - 7x$$ $$x^2-7x-30=0$$ $$(x-10)(x+3)=0$$ $$x=10,-3$$</p> <p>We cannot have a negative number of elements in the set so there are 10 elements.</p>
3,528,370
<p>Maybe this is too obvious, but I what to be sure... Let <span class="math-container">$Y$</span> be a <span class="math-container">$p\times p$</span> symmetric random matrix (i.e. you can think about <span class="math-container">$Y$</span> as a matrix with random entries). Define <span class="math-container">$E[Y]$</span>, the expectation of <span class="math-container">$Y$</span>, as the matrix with entries <span class="math-container">$(E[Y])_{ij} = E[Y_{ij}]$</span>. I think that the next affirmation is true:</p> <blockquote> <p>If <span class="math-container">$E[Y] = 0_{p\times p}$</span> then <span class="math-container">$\lambda_{\max}(Y)\geq 0$</span> a.s., where <span class="math-container">$\lambda_{\max}(Y)$</span> is the greatest eigenvalue of <span class="math-container">$Y$</span> (which is real since <span class="math-container">$Y$</span> is symmetric). </p> </blockquote> <p>My argument is as follows. Suppose that <strong>all</strong> the eigenvalues are negative. Then <span class="math-container">$tr(Y)&lt;0$</span>, which implies that <span class="math-container">$E[tr(Y)]&lt;0$</span> and <span class="math-container">$tr(E[Y])&lt;0$</span>. This is a contradiction since <span class="math-container">$E[Y] = 0_{p\times p}$</span>. Then there exist at least one non-negative eigenvalue, one of which is <span class="math-container">$\lambda_{\max}(Y)$</span>.</p> <p>Is my argument correct? In that case, is there a generalization of this result?</p>
Ian
83,396
<p>Consider <span class="math-container">$p=1$</span> and <span class="math-container">$Y$</span> equal to the 1x1 matrix 1 with probability 1/2 or the 1x1 matrix -1 with probability 1/2.</p>
1,438,999
<p>If $(x-a)^2=(x+a)^2$ for all values of $x$, then what is the value of $a$?</p> <p>At the end when you get $4ax=0$, can I divide by $4x$ to cancel out $4$ and $x$?</p>
Ennar
122,131
<p>If $(x-a)^2=(x+a)^2$ for all $x$, then graphs of functions $x\mapsto (x-a)^2$ and $x\mapsto (x+a)^2$ coincide but these are just parabolas with roots at $a$ and $-a$, respectively. Since they must coincide, $a = -a$ which implies $a = 0$.</p>
1,514,628
<p>I've been looking over some old assignments in my analysis course to get ready for my upcoming exam - I've just run into something that I have no idea how to solve, though, mainly because it looks nothing like anything I've done before. The assignment is as follows:</p> <p>"Let $H$ be a Hilbert space, and let $(e_n)_{n\in\mathbb{N}}$ be an orthonormal basis for $H$. Let $E$ be the linear subspace spanned by the three elements $e_1 + e_2$, $e_3 + e_4$, $e_2 + e_3$. Let $P_E : H \to E$ be the projection onto $E$."</p> <p>How would one then do the following three things:</p> <ol> <li>Determine an orthonormal basis for $E$</li> <li>Compute $P_E e_1$</li> <li>Calculate $\|e_1\|^2$, $\|P_E e_1\|^2$ and $\|e_1 - P_E e_1\|^2$</li> </ol> <p>Usually when we've looked at these types of assignments we've gotten actual basis vectors, $e$. How does one do these things symbollically?</p> <p>I've tried doing Gram-Schmidt for the first part, but I've no idea if it's right, what I'm doing. I end up with three basis vectors looking something like</p> <p>$u_1 = \frac{e_1+e_2}{2}$ , $u_2 = \frac{e_3+e_4}{2}$ , $u_3 = \frac{e_1+e_4}{2}$</p> <p>Any help would be much appreciated, right now I'm getting nowhere, haha.</p>
Mark Fischler
150,362
<p>Your idea of using Gram-Schmidt for determining the orthonormal basis is absolutely right, but the answer you present has gone wrong. All you know about the $e_i$ is that for all $i$, $e_i \cdot e_i = 1$ and for all $i \neq j$, $e_i \cdot e_j = 0$. But that is quite a lot to know, and enough to do G.S.</p> <p>Start from the first spanning vector $e_1+e_2$; normalizing gives $$ b_1 = \frac{1}{\sqrt{2}} (e_1 + e_2) $$ Now, following G.S., take $b_2$ proportional to $(e_3+e_4) - \left(b_1 \cdot (e_3+e_4)\right) b_1$. Since in this case the dot product is zero, $b_2$ is just the normalization $$ b_2 = \frac{1}{\sqrt{2}} (e_3 + e_4) $$ Lastly, take $b_3$ proportional to $$(e_2+e_3) - \left(b_1 \cdot (e_2+e_3)\right) b_1 - \left(b_2 \cdot (e_2+e_3)\right) b_2 = (e_2+e_3) - \tfrac{1}{2}(e_1+e_2) - \tfrac{1}{2}(e_3+e_4) = \tfrac{1}{2} \left( -e_1 + e_2 + e_3 -e_4 \right) $$ This vector already has norm $1$, so $$b_3 = \frac{1}{2} \left( -e_1 + e_2 + e_3 -e_4 \right) $$ Other bases are of course possible. For example, if you had started with the vector $(e_2+e_3)$ you would have gotten a different basis.</p> <p>For the second question, $$P_E e_1 = (e_1 \cdot b_1)\, b_1 + (e_1 \cdot b_2)\, b_2 + (e_1 \cdot b_3)\, b_3 = \tfrac{1}{\sqrt{2}}\, b_1 + 0 - \tfrac{1}{2}\, b_3 = \tfrac{1}{4}\left( 3e_1 + e_2 - e_3 + e_4 \right) $$</p> <p>The third part is then straightforward: $\|e_1\|^2 = 1$, $\|P_E e_1\|^2 = \tfrac{9+1+1+1}{16} = \tfrac{3}{4}$, and $\|e_1 - P_E e_1\|^2 = 1 - \tfrac{3}{4} = \tfrac{1}{4}$, since $e_1 - P_E e_1$ is orthogonal to $P_E e_1$.</p>
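<p>Added verification (not part of the original answer): since only $e_1,\dots,e_4$ are involved, everything can be checked in the finite-dimensional model $\mathbb{R}^4$ with NumPy.</p> <pre><code>import numpy as np

e = np.eye(4)                                  # e_1, ..., e_4 as rows
v = [e[0] + e[1], e[2] + e[3], e[1] + e[2]]    # spanning vectors of E

# Gram-Schmidt
basis = []
for w in v:
    for b in basis:
        w = w - np.dot(w, b) * b
    basis.append(w / np.linalg.norm(w))

b1, b2, b3 = basis
print(np.round(b3, 4))              # 0.5 * (-1, 1, 1, -1)

proj_e1 = sum(np.dot(e[0], b) * b for b in basis)
print(np.round(proj_e1, 4))         # (0.75, 0.25, -0.25, 0.25)
print(np.dot(proj_e1, proj_e1))     # ||P_E e_1||^2 = 0.75
print(np.dot(e[0] - proj_e1, e[0] - proj_e1))   # 0.25
</code></pre>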
1,705,453
<p>I have a list of prime numbers which can be expressed in the form of $3x+1$. One such prime of form $3x+1$ satisfies the expression: $a^2+b^2-ab$.</p> <p>Now I am having list of prime numbers of form $3x+1$ (i.e., $7,19 \ldots$). But I am unable to find the $a$ and $b$ which satisfy the above expression.</p> <p>Thanks for your help in advance.</p>
mathreadler
213,607
<p><strong>Hint</strong> You get cancellation effects. A difference which is close enough to zero will lose almost all precision the terms could have. An example: say you have 16 bits precision and subtract two numbers which are of same magnitude and equal in the first 12 bits. That leaves only 4 possible bits of precision left (the least significant bits). The problem fast gets catastrophic as you start getting close to all bits of the subtraction being cancelled out.</p>
3,403,272
<p><p> I'm currently taking abstract algebra and I'm very lost.</p> <blockquote> <p>Let <span class="math-container">$G = (\Bbb Z/18\Bbb Z, +)$</span> be a cyclic group of order <span class="math-container">$18$</span>.</p> <p>(1) Find a subgroup <span class="math-container">$H$</span> of <span class="math-container">$G$</span> with <span class="math-container">$|H|= 3.$</span></p> <p>(2) What are the elements of <span class="math-container">$G/H$</span>?</p> <p>(3) Find a familiar group that is isomorphic to <span class="math-container">$G/H$</span>.</p> </blockquote> <p><p> For one I think I understand that since it is a cyclic group we need a generator so I choose <span class="math-container">$\langle [6]\rangle$</span>. <span class="math-container">$[6]+[6]=[12]$</span> and <span class="math-container">$[6]+[6]+[6]=[18]=[0]$</span> so <span class="math-container">$H=\langle [6]\rangle=\{[0],[6],[12]\}$</span>. Here we see <span class="math-container">$18$</span> divided by <span class="math-container">$6$</span> is <span class="math-container">$3$</span> so <span class="math-container">$|H| = 3.$</span> <p> The next part are the elements <span class="math-container">$G/H$</span> just the subgroup I wrote down before? <p> The last question is confusing me the most. In order to be isomorphic to one another the group that I select must have three elements as well, correct? The problem is there is no other subgroup of <span class="math-container">$G$</span> that has an order <span class="math-container">$3$</span>.</p>
Locally unskillful
494,915
<p><span class="math-container">$G/H$</span> has 6 elements since <span class="math-container">$|G/H| =|G|/|H|=\frac{18}{3}=6$</span>. We we are looking for a group with 6 elements. We say that <span class="math-container">$x,y \in G$</span> are in the same equivalence class if <span class="math-container">$x-y \in H$</span>, ie <span class="math-container">$x-y=0,6$</span> or <span class="math-container">$12$</span>, so:</p> <p><span class="math-container">$0=6=12$</span></p> <p><span class="math-container">$1 = 7 = 13$</span></p> <p><span class="math-container">$2=8=14$</span></p> <p><span class="math-container">$3=9=15$</span></p> <p><span class="math-container">$4=10=16$</span></p> <p><span class="math-container">$5=11=17$</span></p> <p>Therefore, the elements of <span class="math-container">$G/H=\{\bar{0},\bar{1},\bar{2},\bar{3},\bar{4},\bar{5}\}$</span>.</p>
2,722,609
<p>In a past thread it was mentioned that $x \in A$ is a predicate. I know $\exists x$ and $\forall x$ are quantifiers but are they also predicates themselves? What about when combined with "in" itself (or whatever this operator is called)? e.g. $\exists x \in A$ or $\forall x \in A$</p>
Bram28
256,001
<p>No, the quantifiers are not predicates. Rather, combined with predicates, quantifiers can form claims. E.g. $\exists x \ x \in A$ would be the claim that there is some object $x$ that is an element of $A$</p> <p>This is not the same as $\exists x \in A$ though, which is a restricted quantifier. You'd need to combine that quantifier with some predicate (or formula in general) to get a claim again. For example, $\exists x \in A \ P(x)$ is the claim that there is some element of $A$ that has property $P$.</p>
3,691,147
<p>Consider the wave equation in one dimension <span class="math-container">$u_{tt}-u_{xx}=0$</span> together with a Fourier Transform along <span class="math-container">$t$</span>, ie <span class="math-container">$$\text{FT}[u](x,\omega)=\int_{-\infty}^{+\infty}u(x,t)\exp(-i\omega t)\mathrm{d}t.\tag{1}$$</span> The above PDE transforms into <span class="math-container">$\partial_{xx}\text{FT}[u]+\omega^2\text{FT}[u]=0$</span> whose general solution reads <span class="math-container">$$\text{FT}[u](x,\omega)=A(\omega)\cos\omega x+B(\omega)\sin\omega x\tag{2}$$</span> which is essentially the Fourier Transform of d'Alembert's solution.</p> <p>Under which conditions on <span class="math-container">$u(x,t)$</span> is the <em>classical</em> differentiation of <span class="math-container">$\text{FT}[u](x,\omega)$</span> with respect to <span class="math-container">$x$</span> meaningful? When it is meaningful, is <span class="math-container">$\partial_x \text{FT}[u]$</span> the Fourier Transform of <span class="math-container">$u_x(x,t)$</span> that is <span class="math-container">$\text{FT}[u_x]$</span>? It is a classical result which is always used when solving PDE via Fourier Transform (and used above in the quantity <span class="math-container">$\partial_{xx} FT[u]$</span>), however I would like to read the exact assumptions on <span class="math-container">$u$</span>. For instance, is this differentiation acceptable when <span class="math-container">$u_{xx}(x,t)$</span> should be read in the sense of distributions because <span class="math-container">$u_x(x,t)$</span> is discontinuous?</p>
pluton
30,598
<p>A partial answer to the above question is available in the book "Fourier Analysis, by TW Körner, Cambridge University Press, 1988, page 268, Theorem 53.5" (where <span class="math-container">$x$</span> and <span class="math-container">$t$</span> should be interchanged to comply with the question):</p> <p>Let <span class="math-container">$g:\mathbb{R}\times\mathbb{R}\to\mathbb{C}$</span> be a continuous function such that <span class="math-container">$g_2$</span> exists and is continuous. Suppose <span class="math-container">$\int_{-\infty}^{+\infty}|g(x,t)|\mathrm{d}x$</span> and <span class="math-container">$\int_{-\infty}^{+\infty}|g_2(x,t)|\mathrm{d}x$</span> exist for each <span class="math-container">$t$</span> and that <span class="math-container">$\int_{|x|&gt;R}|g_2(x,t)|\mathrm{d}x\to 0$</span> as <span class="math-container">$R\to \infty$</span> uniformly in <span class="math-container">$t$</span> on each <span class="math-container">$[a,b]$</span>. Then <span class="math-container">$\int_{-\infty}^{+\infty}g(x,t)\mathrm{d}x$</span> is differentiable with <span class="math-container">$$\frac{d}{dt}\int_{-\infty}^{+\infty}g(x,t)\mathrm{d}x=\int_{-\infty}^{+\infty}\frac{\partial g}{\partial t}(x,t)\mathrm{d}x$$</span></p> <p>[note by OP] where <span class="math-container">$g_2$</span> is the first partial derivative of <span class="math-container">$g$</span> with respect to its second argument.</p>
1,103,239
<p>For example, if I multiply the value of a base squared by four, I also get twice the base if it's squared. Look:$$6^2\cdot4=12^2$$ because $$36\cdot4=144$$and $36$ is the square of $6$ and $144$ is the square of $12$. Why does this always happen?</p>
Neal
20,569
<p>Because $4 = 2\cdot 2$ and multiplication works like this: $$ 6^2 \cdot 4 = 6\cdot 6 \cdot 4 = 6\cdot 6 \cdot 2\cdot 2 = 6\cdot 2\cdot 6\cdot 2 = (6\cdot 2)^2$$</p>
200,658
<p>What is the value of :</p> <p>$$\sum_{n=1}^{\infty}\frac{n^2+n+1}{3^n}$$</p>
Beni Bogosel
7,327
<p>You have for $|x|&lt;1$ </p> <p>$$ \sum_{k=0}^\infty x^k=\frac{1}{1-x}$$</p> <p>$$ \sum_{k=0}^\infty kx^k=x\cdot \left(\frac{1}{1-x}\right )'$$</p> <p>$$ \sum_{k=0}^\infty k^2x^k=x \cdot \left( x \cdot \left( \frac{1}{1-x}\right )'\right )'$$</p> <p>Replace $x$ with $1/3$ and you will get the result.</p>
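<p>Added check (not part of the original answer): evaluating the closed forms at $x=1/3$ and comparing with a partial sum of the series. Note the required sum starts at $n=1$, so the constant part contributes $\sum_{n\ge 1}x^n=\frac{x}{1-x}$ rather than $\frac{1}{1-x}$.</p> <pre><code>x = 1.0 / 3.0

partial = sum((n * n + n + 1) / 3.0 ** n for n in range(1, 200))

# Closed forms obtained from the derivatives above:
#   sum n^2 x^n = x(1+x)/(1-x)^3,  sum n x^n = x/(1-x)^2,  sum_{n&gt;=1} x^n = x/(1-x)
closed = x * (1 + x) / (1 - x) ** 3 + x / (1 - x) ** 2 + x / (1 - x)

print(partial, closed)   # both approximately 2.75
</code></pre>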
2,661,443
<p>For the equation $2^x = 7$</p> <p>The textbook says to use log base ten to solve it like this $\log 2^x = \log 7$. </p> <p>I then re-arrange it so that it reads $x \log 2 = \log 7$ then divide the RHS by $\log 2$ to isolate the $x$. I understand this part.</p> <p>I can alternatively solve it in an easier way by simply using $\log_2 7$ on my calculator.</p> <p>Using both methods the answer comes to the same which is $2.807$</p> <p>My question is twofold:</p> <ol> <li><p>Why would the textbook suggest to use log base ten rather than simply using log base two?</p></li> <li><p>I can see how using log base ten and the suggested method in the textbook makes me arrive at the same answer but I don't understand WHY this is so. How does base ten play a factor in the whole scope of things.</p></li> </ol> <p>Thank you</p>
James K
92,207
<p>Another common type of question in logarithms is</p> <p>$$2^{x+2} = 3^{2x}$$</p> <p>Now suppose you have learnt always to use $\log_2$ if the base is 2, and $\log_3$ if the base is 3. You are now stuck! Which base should you use?</p> <p>On the other hand suppose you have learned always to use $\log_{10}$, then you get $$(x+2)\log2 = 2x\log3$$ which you can solve. It is important that you know the $\log(a^b) = b\log(a)$ rule, and solving using base 10 helps you to practice this method.</p> <p>So a student who learns only one technique for solving equations with logarithms can learn to use base 10. Better students can then learn to use different bases, if another base is more convenient. This is why the textbook may teach you to use base 10: It is more general and practices an important formula.</p> <p>As for why "10", as other answers have noted, any base is possible, but base 10 is convenient since decimal numbers are based around 10.</p>
77,379
<p>The task is to show, for $a\in \mathbb{C}^{\ast}$, that $aB_{1}(1)= B_{|a|}(a)$, </p> <p>where $B_{r}(c)$ denotes the disc of radius $r$ centred at $c$. </p> <p>Okay, maybe this is correct: </p> <p>$aB_{1}(1) = a(e^{i\phi}) = ae^{i\phi} = |a|e^{i\phi} = B_{|a|}(a)$</p> <p>But this seems very wrong! </p> <p>V</p>
VVV
18,298
<p>Following the second attempt: </p> <p>A point of $aB_{1}(1)$ has the form $az$ with $|z-1|&lt;1$, and what we want to show is that these are exactly the points $w$ with $|w-a|&lt;|a|$.</p> <p>The missing step is the identity $|az-a|=|a|\,|z-1|$: since $a\neq 0$, the condition $|z-1|&lt;1$ holds if and only if $|az-a|&lt;|a|$, so $aB_{1}(1)=B_{|a|}(a)$.</p> <p>Following the first attempt: </p> <p>for $a \in \mathbb{C}^{*} = \mathbb{C}\backslash \{0\}$ we look at $B_{1}(0)$: from $|z|&lt;1$ we get $|az|&lt; |a|$, so $aB_{1}(0)=B_{|a|}(0)$. Since $B_{1}(1)=1+B_{1}(0)$, this gives $aB_{1}(1)=a+aB_{1}(0)=a+B_{|a|}(0)=B_{|a|}(a)$.</p>
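<p>A quick numerical sanity check of the identity (the particular value of $a$ and the random samples are arbitrary choices):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
a = 2.0 - 1.5j                                  # an arbitrary nonzero complex number

# sample 1000 points z in B_1(1), i.e. points with |z - 1| strictly below 1
r, phi = rng.uniform(0, 1, 1000), rng.uniform(0, 2 * np.pi, 1000)
z = 1 + r * np.exp(1j * phi)

print(np.allclose(np.abs(a * z - a), np.abs(a) * r))   # True: |az - a| = |a| |z - 1|
print(bool((np.abs(a * z - a) &lt; np.abs(a)).all()))   # True: every az lies in B_{|a|}(a)
</code></pre>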
218,933
<p>The space $L^2 (\mathbb{R}^2; \mathbb{C})$ can be decomposed as $$ L^2 (\mathbb{R}^2; \mathbb{C}) = \bigoplus_{k \in \mathbb{Z}} L^2_k (\mathbb{R}^2; \mathbb{C}), $$ where $$ L^2_k (\mathbb{R}^2; \mathbb{C}) = \bigl\{ f \in L^2 (\mathbb{R}^2;\mathbb{C}) : \text{for almost every \(z \in \mathbb{R}^2 \simeq \mathbb{C}\) and \(\theta \in \mathbb{R}\), }\\ f (e^{i \theta} z) = e^{i k \theta} f (z) \bigr\}, $$ and the summands are mutually orthogonal (see for example Stein and Weiss, <em>Introduction to Fourier analysis on Euclidean spaces</em>, 1971, §IV.2).</p> <p>The summands are invariant under the Fourier transform, and on each summand the Fourier transform can be described by Bessel functions. This decomposition also appears implicitly in separation-of-variables arguments.</p> <p>As Stein and Weiss do not give any name to this decomposition, I was wondering under which name(s) it is known in the literature. </p>
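<p>For concreteness, here is a quick numerical sanity check of the decomposition (the test function and the sample point below are arbitrary choices): the $k$-th component is given by $f_k(z)=\frac{1}{2\pi}\int_0^{2\pi}f(e^{i\theta}z)\,e^{-ik\theta}\,\mathrm{d}\theta$, each $f_k$ satisfies the equivariance condition above, and the components sum back to $f$.</p>
<pre><code>import numpy as np

def f(z):                        # arbitrary test function on R^2, viewed as C
    return z.real * np.exp(-np.abs(z)**2)

def component(k, z, n=512):      # f_k(z) = (1/2 pi) integral of f(e^{i t} z) e^{-i k t} dt
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.mean(f(np.exp(1j * t) * z) * np.exp(-1j * k * t))

z0 = 0.7 + 0.4j
print(sum(component(k, z0) for k in range(-5, 6)), f(z0))   # the components sum back to f

alpha = 0.9                      # equivariance: f_1(e^{i alpha} z) = e^{i alpha} f_1(z)
print(component(1, np.exp(1j * alpha) * z0), np.exp(1j * alpha) * component(1, z0))
</code></pre>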
Abdelmalek Abdesselam
7,410
<p>Maybe: isotypical decomposition of the representation $L^2(\mathbb{R}^2,\mathbb{C})$ of the group $U(1)$ might do.</p>
1,712,457
<blockquote> <p>Assume $f$ is differentiable over an open interval $I$. Suppose $a&lt;b$ are two numbers in $I$ with $f'(a) &lt; f'(b)$. Show that if $f'(a) &lt; 0 &lt;f'(b)$, then neither $f(a)$ nor $f(b)$ can be the minimum value of $f$ over $[a,b]$.</p> </blockquote> <p>Intuitively this makes sense: $f$ must change concavity on $[a,b]$ and thus it will have a relative minimum point where the first derivative is $0$. Since $f$ isn't the constant function and the function changes concavity at least once, neither $f(a)$ nor $f(b)$ can be the minimum on the interval.</p> <p>Is this reasoning fine or do I need to be more mathematical?</p>
bgins
20,321
<p>First, by considering $f(-x)$ or $f(a+b-x)$, it's enough to show that $f(a)$ cannot be the minimum on $[a,b]$. Now the derivative of $f$ at $a$ is equal to the right derivative there, $$0&gt;f'(a)=f'(a^+)=\lim_{h\to0^+}\frac{f(a+h)-f(a)}h.$$ But this means that for all $h&gt;0$ sufficiently small, the numerator must be negative: $$\forall\epsilon&gt;0,\,\exists\delta&gt;0\quad\text{so that}\quad 0&lt;h&lt;\delta\implies\left|\frac{f(a+h)-f(a)}h-f'(a)\right|&lt;\epsilon.$$ We only need to take some positive $\epsilon&lt;|f'(a)|$ to guarantee that the fraction representing average slope above is within $\epsilon$ of $f'(a)$ for any positive $h$ less than some $\delta$. For example, take $\epsilon=-\frac12f'(a)&gt;0$. Then there is a $\delta&gt;0$ so that for any $0&lt;h&lt;\delta$, the inequalities hold. Fix $\delta$, and set $h$ to be any positive number less than $\delta$ such that $a+h\in[a,b]$; for example, $h=\min(\frac\delta2,b-a)$. But this means that $$ f'(a)-\epsilon &lt; \frac{f(a+h)-f(a)}h &lt; f'(a)+\epsilon &lt; 0 $$ $$ f(a+h)-f(a) &lt; 0 $$ $$ f(a+h) &lt; f(a) $$ so that $f(a)$ cannot be the minimum, which completes the proof.</p>
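<p>A small numerical illustration of the argument (the function is an arbitrary example satisfying the hypotheses, not part of the proof): take $f(x)=x^2-x$ on $[0,1]$, so $f'(0)=-1&lt;0&lt;1=f'(1)$; points just inside the interval already beat both endpoint values.</p>
<pre><code>f = lambda x: x**2 - x     # on [0, 1]: f'(0) = -1 and f'(1) = 1, while f(0) = f(1) = 0
h = 1e-3
print(f(0 + h) &lt; f(0))   # True: a point just to the right of a = 0 beats f(a)
print(f(1 - h) &lt; f(1))   # True: a point just to the left of b = 1 beats f(b)
</code></pre>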