qid | question | author | author_id | answer |
|---|---|---|---|---|
59,567 | <p>I am looking for a way to add a legend showing the identity of various atoms (with different colours) to this picture. Any Clues?</p>
<pre><code>Import["ExampleData/1PPT.pdb", "Rendering" -> "BallAndStick"]
</code></pre>
<p><img src="https://i.stack.imgur.com/FSFoH.png" alt="enter image description here"></p>
| rcollyer | 52 | <p>Bob Hanlon's <a href="https://mathematica.stackexchange.com/a/59587/52">answer</a> works very well, but in some ways it is the hard way of doing things. If you have v9 or v10, then it is arguably easier to use the legend constructs within it. Similar to his answer, we get the image and element names:</p>
<pre><code>img = Import["ExampleData/1PPT.pdb", "Rendering" -> "BallAndStick", ImageSize -> 500];
elements = Import["ExampleData/1PPT.pdb", "ResidueAtoms"] // Flatten // Union;
</code></pre>
<p>But, we deviate with the legend construction, e.g.</p>
<pre><code>legend = SwatchLegend[
ElementData[#, "IconColor"] & /@ elements,
ElementData[#, "StandardName"] & /@ elements,
LegendMarkers -> Graphics@Disk[]
]
</code></pre>
<p>and to display the legend, we use</p>
<pre><code>Legended[img, legend]
</code></pre>
<p><img src="https://i.stack.imgur.com/T754S.png" alt="enter image description here"></p>
<p>At this point, there is not much in the way of savings, but it simplifies a number of things, such as repositioning the legend using <code>Placed</code>, or adding additional legends. For instance, if the pdb file also displayed hydrogen bonds in addition to covalent bonds, it would be straightforward to add a second legend:</p>
<pre><code>legendRow = SwatchLegend[
ElementData[#, "IconColor"] & /@ elements,
ElementData[#, "StandardName"] & /@ elements,
LegendMarkers -> Graphics@Disk[],
LegendLayout -> "Row"
];
legendBonds = LineLegend[
{Gray, Directive[Red, Dashed]},
{"Covalent", "Hydrogen"}
];
Legended[img, {Placed[legendRow, Top], legendBonds}]
</code></pre>
<p><img src="https://i.stack.imgur.com/L8SbY.png" alt="enter image description here"></p>
|
4,415,907 | <p>I need to evaluate the Fourier inverse integral</p>
<p><span class="math-container">$\displaystyle \int_{-\infty}^{\infty}\frac{\sinh\left(y\sqrt{\alpha^2-\omega^2}\right)}{\sinh\left(H\sqrt{\alpha^2-\omega^2}\right)}e^{i\alpha x}d\alpha \tag*{}$</span></p>
<p>which arose while solving a PDE.</p>
<p>Here, <span class="math-container">$H>0,x\in\mathbb{R},y\in[0,H]$</span>. The domain of <span class="math-container">$\omega$</span> was not given in the original problem, but I am going to assume <span class="math-container">$\omega>0$</span> for simplicity.</p>
<p>The problem asks us to introduce proper branch cuts for the square root function before evaluating the integral.</p>
<p>For reference, <a href="https://imgur.com/a/U341iyL" rel="nofollow noreferrer">this is the original question</a>. My attempt up until I get the integral is also shown in the link.</p>
<p><strong>My Attempt</strong></p>
<p>The branch points of the square root functions are at <span class="math-container">$\alpha =\pm\omega$</span>. So, I considered the following branch cuts and contours.</p>
<p><a href="https://i.stack.imgur.com/YtLxf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YtLxf.png" alt="enter image description here" /></a></p>
<p>(In the figure, <span class="math-container">$\omega$</span> appears inside absolute-value bars; you can ignore them and assume <span class="math-container">$|\omega|=\omega$</span>.)</p>
<p>We first need to find the poles of the integrand in the upper half-plane. These are given by the equation</p>
<p><span class="math-container">$H\sqrt{\alpha^2-\omega^2}=n\pi i\tag*{}$</span></p>
<p>Solving this, we obtain</p>
<p><span class="math-container">$\displaystyle \alpha =\pm\sqrt{\omega^2-\frac{n^2\pi^2}{H^2}}\tag*{} $</span></p>
<p>where <span class="math-container">$n=1,2,\cdots$</span> (for <span class="math-container">$n=0$</span> the singularity is removable).</p>
<p>The problem is that some of those poles are on the branch cuts depending on the parameters. I have been told this is not permissible, so I am not sure how to proceed.</p>
<p>Edit: The statement that "some of those poles are on the branch cuts" is not correct. They lie on the real axis, between <span class="math-container">$-\omega$</span> and <span class="math-container">$\omega$</span>.</p>
<p>Edit2: <em>Tables of Fourier Transforms and Fourier Transforms of Distributions</em> by Fritz Oberhettinger states (p. 37, eq. 7.48) that if <span class="math-container">$a<b$</span>, we have</p>
<p><span class="math-container">$\displaystyle \int_{0}^{\infty} \frac{\sinh{(a\sqrt{k^2+x^2}})}{\sinh{(b\sqrt{k^2+x^2}})}\cos{xy}dx = -\pi b^{-1} \sum_{n=0}^{\infty}(-1)^nc_n\sin{(ac_n)}v_n^{-1}e^{-yv_n}$</span></p>
<p>where <span class="math-container">$c_n=n\pi/b$</span>, <span class="math-container">$v_n = (k^2+c_n^2)^{1/2}$</span>. I would guess our integral would have a similar form.</p>
| Kaira | 691,829 | <p>For completeness, I will provide the complete solution I can imagine.</p>
<p>We calculate the integral for <span class="math-container">$x>0$</span>. The case <span class="math-container">$x<0$</span> can be solved with a similar method.</p>
<p>We firstly note that the poles of the integrand are located at <span class="math-container">$\alpha =\pm p_n(n=1,2,3\cdots)$</span> where <span class="math-container">$p_n= \sqrt{\omega^2-\frac{n^2\pi^2}{H^2}}$</span>. If <span class="math-container">$\omega H<\pi$</span> then every pole is on the imaginary axis, but if <span class="math-container">$\omega H>\pi$</span> then some poles are on the real axis. We consider the case <span class="math-container">$\omega H<\pi$</span>. Otherwise we can consider the contour going around the poles.</p>
<p>The integrand has branch points at <span class="math-container">$\alpha=\pm \omega$</span>, so we define branch cuts on the real axis going away from the branch points. We then consider the following rectangular contour, as @Diger advises.</p>
<p><a href="https://i.stack.imgur.com/i8UGd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/i8UGd.png" alt="enter image description here" /></a></p>
<p>Here, <span class="math-container">$\epsilon>0$</span> is an arbitrarily small number, and <span class="math-container">$R$</span> is an arbitrarily large number (chosen so that the rectangle does not pass through any poles). For <span class="math-container">$x>0$</span> we take the rectangle in the upper half-plane. For <span class="math-container">$x<0$</span> we use the dotted rectangle.</p>
<p>We now calculate the residues of the poles at <span class="math-container">$\alpha=p_n$</span>. We note that the residue of <span class="math-container">$\frac{N(z)}{D(z)}$</span> at <span class="math-container">$z=z_0$</span> is <span class="math-container">$\frac{N(z_0)}{D'(z_0)}$</span> (where <span class="math-container">$N, D$</span> are analytic near <span class="math-container">$z=z_0$</span> and <span class="math-container">$D$</span> has a simple zero at <span class="math-container">$z=z_0$</span>). Applying the formula, we obtain that the residue at <span class="math-container">$\alpha=p_n$</span> is</p>
<p><span class="math-container">$\displaystyle -H^{-1}(-1)^n c_n\sin{(yc_n)}p_n^{-1}e^{ip_nx}\tag*{}$</span></p>
<p>Here, <span class="math-container">$c_n=n\pi /H$</span>. Note that we have a similar formula in the question.</p>
<p>Thus, by the residue theorem, the integral on the rectangle is</p>
<p><span class="math-container">$\displaystyle -2\pi iH^{-1}\sum_{|p_n|<R}(-1)^n c_n\sin{(yc_n)}p_n^{-1}e^{ip_nx}\tag*{}$</span></p>
<p>Now we calculate the integrals on each contour piece. The integrals on <span class="math-container">$C_1, C_2$</span> and <span class="math-container">$C_3$</span> go to <span class="math-container">$0$</span> as <span class="math-container">$R\to\infty$</span>, as @Diger proves. So we need to prove that the integrals on <span class="math-container">$E_1$</span> and <span class="math-container">$E_2$</span> go to <span class="math-container">$0$</span> as <span class="math-container">$\epsilon \to 0$</span>.</p>
<p>On <span class="math-container">$E_1$</span>, we have <span class="math-container">$\alpha =-\omega +\epsilon e^{i\theta}\;(\theta:\pi\to 0)$</span>. Thus, <span class="math-container">$d\alpha =i\epsilon e^{i\theta}\,d\theta$</span> and <span class="math-container">$K:=\alpha^2-\omega^2=-2\omega \epsilon e^{i\theta}+\epsilon^2 e^{2i\theta} = -2\omega \epsilon e^{i\theta}+O(\epsilon^2)$</span>. Thus, to leading order, <span class="math-container">$K=-2\omega \epsilon \cos{\theta}-2i\omega \epsilon\sin{\theta}$</span>.</p>
<p>We have <span class="math-container">$|\sinh(x+yi)|^2=\sinh^2(x)+\sin^2(y)$</span> and <span class="math-container">$\frac{4}{\pi^2}c^2\leq \sin^2{c}\leq c^2\leq \sinh^2{c}$</span> for small <span class="math-container">$c$</span>, so</p>
<p><a href="https://i.stack.imgur.com/1vpCn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1vpCn.png" alt="enter image description here" /></a>
(Forgive the image; I got too lazy)</p>
<p>The integral on <span class="math-container">$E_2$</span> also goes to zero by the same method. Thus, by taking <span class="math-container">$\epsilon\to 0, R\to\infty$</span>, we have</p>
<p><span class="math-container">$\displaystyle \int_{-\infty}^{\infty}\frac{\sinh\left(y\sqrt{\alpha^2-\omega^2}\right)}{\sinh\left(H\sqrt{\alpha^2-\omega^2}\right)}e^{i\alpha x}d\alpha = \displaystyle -2\pi iH^{-1}\sum_{n=1}^{\infty}(-1)^n c_n\sin{(yc_n)}p_n^{-1}e^{ip_nx}\tag*{}$</span></p>
<p>If <span class="math-container">$\omega H<\pi$</span>, every <span class="math-container">$p_n$</span> is purely imaginary, so we can write <span class="math-container">$p_n=iv_n$</span> where <span class="math-container">$v_n=\sqrt{n^2\pi^2/H^2-\omega^2}$</span>. Thus we have</p>
<p><span class="math-container">$\displaystyle \int_{-\infty}^{\infty}\frac{\sinh\left(y\sqrt{\alpha^2-\omega^2}\right)}{\sinh\left(H\sqrt{\alpha^2-\omega^2}\right)}e^{i\alpha x}d\alpha = \displaystyle -2\pi H^{-1}\sum_{n=1}^{\infty}(-1)^n c_n\sin{(yc_n)}v_n^{-1}e^{-xv_n}\tag*{}$</span></p>
<p>which is exactly the same formula as the one in the question.</p>
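<p>As a numerical sanity check (my addition, not part of the original derivation), the closed form for the case <span class="math-container">$\omega H<\pi$</span> can be compared against direct quadrature. The integrand depends on <span class="math-container">$\alpha$</span> only through <span class="math-container">$\alpha^2$</span>, so it is even and the integral reduces to a cosine transform; for <span class="math-container">$|\alpha|<\omega$</span> the root is imaginary and <span class="math-container">$\sinh(it)=i\sin t$</span>. The parameter values below are arbitrary choices satisfying <span class="math-container">$\omega H<\pi$</span>, <span class="math-container">$0\le y\le H$</span>, <span class="math-container">$x>0$</span>.</p>

```python
import numpy as np
from scipy.integrate import quad

# arbitrary test values with omega*H < pi, 0 <= y <= H, x > 0
H, w, y, x = 2.0, 1.0, 1.0, 1.5

def integrand(a):
    # for |alpha| < omega the square root is imaginary: sinh(i t) = i sin(t)
    if abs(a) < w:
        s = np.sqrt(w**2 - a**2)
        return np.sin(y * s) / np.sin(H * s) * np.cos(a * x)
    s = np.sqrt(a**2 - w**2)
    return np.sinh(y * s) / np.sinh(H * s) * np.cos(a * x)

# the integrand is even in alpha, so the integral is twice the half-line integral;
# the tail decays like exp(-(H - y) * alpha), so truncating at 80 is harmless here
direct = 2 * (quad(integrand, 0, w)[0] + quad(integrand, w, 80, limit=200)[0])

# the series -2*pi/H * sum (-1)^n c_n sin(y c_n) exp(-x v_n) / v_n
n = np.arange(1, 200)
c = n * np.pi / H
v = np.sqrt(c**2 - w**2)
series = -2 * np.pi / H * np.sum((-1.0) ** n * c * np.sin(y * c) * np.exp(-x * v) / v)

print(direct, series)
```

<p>The two numbers agree to quadrature accuracy, which also matches the Oberhettinger table entry quoted in the question (with <span class="math-container">$k^2=-\omega^2$</span>).</p>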
|
47,926 | <p>Is there any known two-dimensional Conway's game of life variation where each cell can not be just on/off but able to hold more states, maybe 4 or 5?</p>
| paul garrett | 12,291 | <p>I wrote a "spatial ecology" variant
<a href="http://www.math.umn.edu/~garrett/a05/Life1.html" rel="nofollow">here</a>
in which different populations (with various birth and death rates) in-effect compete for "space". It's in Java, with source available (tho' was written a long time ago, with the 'original' windowing system).</p>
|
<p>I was reading some examples about linear functionals from the book <em>Introductory Functional Analysis with Applications</em> by Kreyszig, and one of the examples states the following </p>
<p>Let <span class="math-container">$L:C[0,1]\rightarrow C[0,1]$</span></p>
<p><span class="math-container">$$L[f](x)=\int_{0}^{x}f(s)\,ds$$</span> which is linear, with <span class="math-container">$R(L)\subseteq C^{1}[0,1]$</span> such that <span class="math-container">$(Lf)(0)=0$</span>.</p>
<p>My question is: how can I calculate <span class="math-container">$L^{-1}: R(L)\rightarrow C[0,1] $</span>?</p>
<p>Could you give me some suggestions?</p>
<p>Thanks!</p>
| mich95 | 229,072 | <p>The function is not bijective, as for any function $f$ in $C[0,1]$ $L(f(0))=\int\limits_{0}^{0}f(s)ds=0$, so the function $x \to x+1$ does not have a preimage, as $0+1=1$</p>
|
88,363 | <p>It is easy to truncate Series upto some order, say $n$. My question is how do I remove low orders? Let us say my series is a power series in $x$. I want to remove the terms with negative powers because they diverge at $x = 0$. I can simply write</p>
<p><code>s1 - s2</code>, where</p>
<p><code>s1 = Normal[Series[blah, {x, 0, n}]]</code></p>
<p><code>s2 = Normal[Series[blah, {x, 0, -1}]]</code></p>
<p>but Mathematica does not automatically cancel the removed terms because they are complicated. The solution would be to use <code>Collect[s1 - s2, x, Simplify]</code>, but this is horribly slow as I increase $n$ above even 2. I suppose I could simply delete the terms by hand, but the outputs are very messy, and there must exist a proper way to do this.</p>
| Dr. belisarius | 193 | <p>If everything else fails, you can always do</p>
<pre><code>Total[SeriesCoefficient[f@x, {x, 0, #}] x^# & /@ Range[0, 10]]
</code></pre>
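<p>For readers outside Mathematica, a rough analogue of the same idea in Python with SymPy (my addition, not from the original answer): expand far enough, then keep only the nonnegative powers. The function and order here are placeholders; the point is the coefficient filter.</p>

```python
from sympy import symbols, cos, Rational

x = symbols('x')
f = cos(x) / x**2            # Laurent series with a divergent 1/x^2 term
n = 6
s = f.series(x, 0, n).removeO()

# keep only the terms with nonnegative powers of x
trimmed = sum(s.coeff(x, k) * x**k for k in range(n))
print(trimmed)
```

<p>For this placeholder input the surviving part is <span class="math-container">$-\frac12+\frac{x^2}{24}-\frac{x^4}{720}$</span>; the <span class="math-container">$x^{-2}$</span> term is dropped without any subtraction or simplification step.</p>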
|
3,583,117 | <p>I would like to understand clearly why the following equality is true</p>
<p><span class="math-container">$P[X+Y \leq z] = E_Y[P[X+Y \leq z \mid Y]]$</span></p>
<p>I wrote the right-hand side of the equation as follows:</p>
<p><span class="math-container">$E_Y[P[X+Y \leq z \mid Y]] = \sum_y P[X+y \leq z]\,P[Y=y]$</span></p>
<p>and I have tried it with a toy example where <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are two <span class="math-container">$r.v.$</span>s that model the throw of a die, and it works, but I would like to clearly understand why it is true. I know that it is linked with the law of total probability, right?</p>
| Masoud | 653,056 | <p>It is clear that <span class="math-container">$E(1_A)=P(A)$</span> so
<span class="math-container">$$P(X+Y\leq z)=P(A)=E(1_A)\overset{(1)}{=}EE(1_A|Y)=E\bigg( E(1_A|Y)\bigg)=E\bigg(P(X+Y\leq z |Y)\bigg)$$</span>
when <span class="math-container">$A=\{X+Y\leq z\}$</span></p>
<p>In <span class="math-container">$(1)$</span> we use <a href="https://en.wikipedia.org/wiki/Law_of_total_expectation" rel="nofollow noreferrer">Law_of_total_expectation</a>.</p>
<p>Now you can write </p>
<p><span class="math-container">$$E\bigg(P(X+Y\leq z |Y)\bigg)=\sum_{y} P(X+Y \leq z|Y=y) P_Y(y)=\sum_{y} P(X+y \leq z|Y=y) P_Y(y)$$</span></p>
<p><span class="math-container">$$=\sum_{y} P(X \leq z-y|Y=y) P_Y(y)$$</span></p>
<p>since <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are independent </p>
<p><span class="math-container">$$=\sum_{y} P(X \leq z-y) P_Y(y)$$</span></p>
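<p>The dice example mentioned in the question can be written out explicitly; this small check (my addition) confirms that both sides agree exactly.</p>

```python
from fractions import Fraction
from itertools import product

# X, Y independent fair dice
pmf = {k: Fraction(1, 6) for k in range(1, 7)}
z = 7

# left side: P(X + Y <= z) by direct enumeration of the joint distribution
lhs = sum(pmf[a] * pmf[b] for a, b in product(pmf, pmf) if a + b <= z)

# right side: E[ P(X + Y <= z | Y) ] = sum_y P(X <= z - y) P(Y = y),
# using independence to drop the conditioning
rhs = sum(sum(pmf[a] for a in pmf if a <= z - y) * pmf[y] for y in pmf)

print(lhs, rhs)  # both 7/12
```

<p>Exact rational arithmetic makes the equality literal rather than approximate, which is why <code>Fraction</code> is used instead of floats.</p>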
|
176,260 | <blockquote>
<p>Let $\left\{ f_{n}\right\} $ denote the set of functions on
$[0,\infty) $ given by $f_{1}\left(x\right)=\sqrt{x} $ and
$f_{n+1}\left(x\right)=\sqrt{x+f_{n}\left(x\right)} $ for $n\ge1 $.
Prove that this sequence is convergent and find the limit function.</p>
</blockquote>
<p>We can easily show that this sequence is nondecreasing. Originally, I was trying to apply the fact that “every bounded monotonic sequence must converge,” but then it hit me that this is true for $\mathbb{R}^{n} $. Does this fact still apply on $C[0,\infty) $, the set of continuous functions on $[0,\infty) $? If yes, what bound would we use?</p>
| Did | 6,179 | <p><strong>Hints:</strong></p>
<ul>
<li>For every $x\gt0$, the function $u_x:t\mapsto \sqrt{x+t}$ is continuous hence every convergent sequence defined by $x_{n+1}=u_x(x_n)$ for every $n\geqslant0$ has limit $z_x$ such that $u_x(z_x)=z_x$. Here, $z_x=\frac12(1+\sqrt{1+4x})$.</li>
<li>For every $x\gt0$, $z_x-u_x(t)=c_x(t)\cdot(z_x-t)$, with $c_x(t)=1/(z_x+u_x(t))$ hence $0\lt c_x(t)\leqslant 1/z_x$.</li>
<li>For every $x\gt0$, applying the preceding remark to any sequence $(x_n)_n$ such that $x_0\leqslant z_x$ and $x_{n+1}=u_x(x_n)$ for every $n\geqslant0$ shows that $z_x-z_x^{-n}\cdot(z_x-x_0)\leqslant x_n\leqslant z_x$ for every $n\geqslant0$.</li>
<li>For every $x\gt0$, $z_x\gt1$.</li>
</ul>
<p><strong>Conclusion:</strong> </p>
<ul>
<li>For every $x\gt0$, $f_n(x)\to z_x$. On the other hand, $f_n(0)=0$ for every $n\geqslant0$ hence $f_n(0)\to0\ne z_0$ and the limit is discontinuous at $x=0^+$.</li>
</ul>
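<p>A quick numerical illustration of these hints (my addition): iterating $x_{n+1}=u_x(x_n)$ from $x_1=\sqrt{x}$ converges to $z_x=\frac12(1+\sqrt{1+4x})$ for every $x\gt0$, e.g. $z_2=\frac12(1+\sqrt 9)=2$.</p>

```python
import math

def limit(x, n=60):
    """Iterate f_{n+1} = sqrt(x + f_n) starting from f_1 = sqrt(x)."""
    t = math.sqrt(x)
    for _ in range(n):
        t = math.sqrt(x + t)
    return t

# the fixed point z_x of t -> sqrt(x + t), i.e. the positive root of z^2 - z - x = 0
z = lambda x: 0.5 * (1 + math.sqrt(1 + 4 * x))

for x in (0.1, 1.0, 2.0, 100.0):
    assert abs(limit(x) - z(x)) < 1e-12
print(limit(2.0))  # ≈ 2.0 (= z_2)
```

<p>The local contraction rate is $u_x'(z_x)=1/(2z_x)\lt\frac12$, so 60 iterations are far more than enough; at $x=0$ the iteration stays at $0\ne z_0=1$, matching the discontinuity noted in the conclusion.</p>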
|
24,186 | <p>Consider the code below:</p>
<pre><code>s = Solve[(3 - Cos[4*x])*(Sin[x] - Cos[x]) == 2, x, InverseFunctions -> True];
Select[s[[All, 1, 2]], Element[#, Reals] &]
</code></pre>
<p>In MMA 8.0, I get </p>
<pre><code>{-\[Pi], \[Pi]/2, \[Pi]}
</code></pre>
<p>but in MMA 9.0, I get an empty set { }</p>
<p>Assuming the solution by MMA 8.0 is correct, can someone show me a workaround for MMA 9.0? </p>
| Michael E2 | 4,999 | <p>If the question is about converting general math-book expressions to pure functions, you could use something like</p>
<pre><code>SetAttributes[convert, HoldAll];
convert[expr_, vars_List] :=
With[{variables = Unevaluated@vars},
Block[variables,
Evaluate@(Hold[expr] /. Thread[vars -> Slot /@ Range@Length@vars]) & // ReleaseHold
]]
</code></pre>
<p>To apply,</p>
<pre><code>convert[Sqrt[(Abs[u[[1]]] - Abs[v[[1]]])^2 + (Abs[u[[2]]] - Abs[v[[2]]])^2], {u, v}]
</code></pre>
<p>and you get</p>
<blockquote>
<pre><code>Sqrt[(Abs[#1[[1]]] - Abs[#2[[1]]])^2 + (Abs[#1[[2]]] - Abs[#2[[2]]])^2] &
</code></pre>
</blockquote>
<p>Alternative definition:</p>
<pre><code>convert[expr_, vars_List] := Function @@@ Hold[{vars, expr}] // ReleaseHold
</code></pre>
<p>and the output would be the other kind of pure function:</p>
<blockquote>
<pre><code>Function[{u, v}, Sqrt[(Abs[u[[1]]] - Abs[v[[1]]])^2 + (Abs[u[[2]]] - Abs[v[[2]]])^2]]
</code></pre>
</blockquote>
<p>Of course one could simply type that to begin with.</p>
<hr>
<p>I'm not confident I understand the question. I'm not sure why one would want to decompose a formula into component functions, but here are some variations à la @ThiesHeidecke's answer:</p>
<pre><code>u = {-3, 3}; v = {1, 5};
Composition[Sqrt, Total, #^2 &, Subtract @@ # &, Abs[{##}] &][u, v]
(* 2 Sqrt[2] *)
</code></pre>
<p>Beware: Things go awry if <code>u</code>, <code>v</code> are not vectors. A way to avoid such a pitfall is</p>
<pre><code>Composition[Sqrt, Total, #^2 &, Subtract @@ # &, Abs, Outer[Part, {##}, {1, 2}, 1] &][u, v]
</code></pre>
<p>But now this is limited to 2-dimensional vectors. If we abandon pure functions, we can define a function to handle all cases:</p>
<pre><code>distAbs[u_?VectorQ, v_?VectorQ] :=
Composition[Sqrt, Total, #^2 &, Subtract @@ # &, Abs[{##}] &][u, v]
</code></pre>
<p>As others have pointed out, this particular function can be encoded less ornately as</p>
<pre><code>EuclideanDistance @@ Abs[{##}] &
</code></pre>
<p>For other formulas, the following is straightforward and easily adapted:</p>
<pre><code>distAbs[u_?VectorQ, v_?VectorQ] :=
Sqrt[(Abs[u[[1]]] - Abs[v[[1]]])^2 + (Abs[u[[2]]] - Abs[v[[2]]])^2];
</code></pre>
<p>I would not be surprised if this is the sort of thing you're looking for. This is the easiest way to make function out of a formula, even if the formula might be expressed more simply in terms of other <em>Mathematica</em> functions.</p>
|
1,579,521 | <p>Find the value of
<span class="math-container">$$ \iint_{\Sigma} \langle x, y^3, -z\rangle. d\vec{S} $$</span>
where <span class="math-container">$ \Sigma $</span> is the sphere <span class="math-container">$ x^2 + y^2 + z^2 = 1 $</span> oriented outward, by using the divergence theorem.</p>
<p>So I calculate <span class="math-container">$\operatorname{div}\vec{F} = 3y^2 $</span> and then I convert <span class="math-container">$ x, y, z $</span> into <span class="math-container">$ x = \rho\sin \phi \cos \theta, y = \rho\sin \phi \sin \theta, z = \rho\cos \phi $</span>, but then I get stuck at that point.</p>
| Fundamental | 218,829 | <p>$$\int_{0}^{2\pi} \int_{0}^{\pi} \int_{0}^1 \left(\rho^2 \sin \phi \right)\underbrace{(\sqrt{3}\rho \sin \phi \sin \theta)^2}_{3y^2 \ \textrm{in spherical}} \ d\rho \ d\phi \ d\theta$$</p>
|
545,003 | <p>I have a proof that I am trying to prove and I am getting stuck at the inductive hypothesis. This is my theorem:</p>
<blockquote>
<p>For all real numbers $n>3$, the following is true: $n + 3 < n!$.</p>
</blockquote>
<p>I have proven true for $n = 4$, and will assume true for some arbitrary value $k$, i.e.,</p>
<p>$$k + 3 < k!,$$</p>
<p>and I want to prove for $k+1$, i.e.,</p>
<p>$$(k+1) + 3 < (k+1)!.$$</p>
<p>Consider the $k+1$ term:</p>
<p>$$(k+1)+3 = ?$$</p>
<p>I am confused on how to approach the next step.</p>
<p>OK, here is how I am proceeding. It seems really long, so if anyone has a better way let me know:
$$(k+1)+3 = (k+3)+1 < k!+1 < k!+k! = 2\cdot k! < (k+1)\cdot k! = (k+1)!$$
Therefore $(k+1)+3 < (k+1)!$, which completes the inductive step.</p>
| Newb | 98,587 | <p>As you are trying to solve this problem, I'll only give you a hint.</p>
<p>Inductive Step: we want to show $(n+1)+3 < (n+1)!$</p>
<p>That's equivalent to $n+4 < (n+1)\cdot n!$ by the property of the factorial.</p>
<p>We can distribute: $n+4 < (n\cdot n!) + (1\cdot n!)$</p>
<p>Can you take it from here?</p>
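<p>A quick brute-force check (my addition) of both the target inequality and the distributed form in the hint above:</p>

```python
from math import factorial

# the claim itself: n + 3 < n! for n >= 4
for n in range(4, 30):
    assert n + 3 < factorial(n)

# the inductive step's comparison after distributing: n + 4 < n*n! + 1*n!
for n in range(4, 30):
    assert n + 4 < n * factorial(n) + factorial(n)
print("ok")
```
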
|
23,566 | <p>I love math, and I used to be very good at it. The correct answers came fast and intuitively. I never studied, and redid the demonstration live for the tests (sometimes inventing new ones). I was the one who answered the tricky questions in class (8 hours of math/week in high school)... You get the idea.</p>
<p>As such I used to have a lot of confidence in my math abilities, and didn't think twice about saying the first idea that came to mind when answering a question.</p>
<p>That was more than 10 years ago, and I (almost) haven't done any math since then. I've graduated in a scientific field that requires little of it (I prefer not to give details) and worked for some time.</p>
<p>Now I'm back at school (master of statistics) and I need to do math, once again. I make mistakes upon blunders with the same confidence I used to have when I was good, which is extremely embarrassing when it happens in class. </p>
<p>I feel like a tone deaf musician and an ataxic painter at the same time.</p>
<p>One factor that probably plays a role is that I learnt math in my mother tongue, and I'm now using it in English, but I wouldn't expect that to make such a difference. </p>
<p>I know that it will require practice and hard work, but I need direction.</p>
<p>Any help is welcome.</p>
<p>Kind regards,</p>
<p>-- Mathemastov</p>
| 0x0 | 6,335 | <p>Revisit all of your High school books. It will be easier to grasp and remind you of many stuff you forgot.</p>
|
<p>I was reading about linear dependence between vectors, where I came across the explanation below:</p>
<hr>
<p>In a rectangular xy-coordinate system every vector in the plane can be expressed in
exactly one way as a linear combination of the standard unit vectors. For example, the
only way to express the vector (3, 2) as a linear combination of i = (1, 0) and j = (0, 1)
is</p>
<blockquote>
<p>(3, 2) = 3(1, 0) + 2(0, 1) = 3i + 2j ...formula(1)</p>
</blockquote>
<p>Suppose, however, that we were to introduce a third coordinate axis that makes an angle of 45◦ with the x-axis. The unit vector along the w-axis is</p>
<blockquote>
<p>w = $(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})$</p>
</blockquote>
<p>Whereas Formula (1) shows the only way to express the vector (3, 2) as a linear combination of i and j, there are infinitely many ways to express this vector as a linear combination of i, j, and w. Three possibilities are</p>
<p>(3, 2) = 3(1, 0) + 2(0, 1) + 0$(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})$ = 3i + 2j + 0w</p>
<p>(3, 2) = 2(1, 0) + (0, 1) + $\sqrt{2}$$(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})$ = 3i + j + $\sqrt{2}$w</p>
<p>(3, 2) = 4(1, 0) + 3(0, 1) - $\sqrt{2}$$(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})$ = 4i + 3j - $\sqrt{2}$w</p>
<hr>
<p>What I did not understand is </p>
<ul>
<li>How these last three expressions of (3,2) are formed; I just did not get any of it. Maybe I am missing some elementary maths.</li>
<li>How does introducing another axis allow us to express any vector in <strong>infinitely many ways</strong>, and how do these last three expressions prove that?</li>
</ul>
| Jihad | 191,049 | <p><strong>Hint</strong>. Prove that $\forall \varepsilon > 0\exists N \forall n>N: \sqrt{\frac{n+1}{n}} > 1 - \varepsilon$ and $\sqrt{\frac{n+1}{n}} < 1 + \varepsilon$.</p>
|
184,682 | <p>I have difficulties with a rather trivial topological question: </p>
<p>$A$ is a discrete subset of $\mathbb{C}$ (complex numbers) and $B$ a compact subset of $\mathbb{C}$. Why is $A \cap B$ finite? I can see that it's true if $A \cap B$ is compact, i.e. closed and bounded, but is it obvious that $A \cap B$ is closed?</p>
| Cameron Buie | 28,900 | <p>I'm assuming that by "discrete set" you mean that $A$ has no accumulation points--that is (since we're in a Hausdorff space), that every point in the plane has a neighborhood containing at most one point of $A$. Thus, $A$ is vacuously closed (as it contains all of its accumulation points), so since $B$ is closed, then so is $A\cap B$.</p>
<p>Compactness isn't enough for finiteness all by itself (consider the unit disk), so let's proceed a little further. For each point $z\in A$, we have by definition of discrete set that there is a neighborhood $U$ of $z$ such that $A\cap U=\{z\}$. To make this explicit, there is some least positive integer $n$ such that $A\cap\{w\in\Bbb C:|w-z|<\frac1n\}=\{z\}$, and given this $n$ (which depends on $z$), we'll let $U_z:=\{w\in\Bbb C:|w-z|<\frac1n\}$.</p>
<p>Now, the set $\mathcal{U}=\{U_z:z\in A\}$ is an open cover of $A$, and so an open cover of $A\cap B$, which we've already determined to be compact. Thus $\mathcal{U}$ has a finite subcover of $A\cap B$, say $\mathcal{V}=\{V_1,...,V_n\}$. We know by our construction of $\mathcal{U}$ that each open set $V_k\in\mathcal{V}\subseteq\mathcal{U}$ has the property that $A\cap V_k$ contains exactly one point, so $A\cap B\cap V_k$ contains at most one point. Hence, $$(A\cap B)\cap\bigcup_{k=1}^nV_k=\bigcup_{k=1}^n(A\cap B\cap V_k)$$ contains at most $n$ points. But $\mathcal{V}=\{V_1,...,V_n\}$ covers $A\cap B$, so $A\cap B\subseteq\bigcup_{k=1}^nV_k$, so $A\cap B=(A\cap B)\cap\bigcup_{k=1}^nV_k$. Thus, $A\cap B$ has at most $n$ points, so is finite.</p>
|
<p>I need to prove/show that $n^3 \leq 3^n$ for all natural numbers $n$ by strong induction. I have no clue where to begin!!!! :( I know how to do the beginning steps of showing that it's true for $k = 0$ and $k = 1$, etc., but get stuck on how to start the strong inductive step.</p>
| user103828 | 103,828 | <p>Assume true for $n=k$. Now let's try for $n=k+1$,
\begin{align*}
(k+1)^3 &=k^3+3k^2+3k+1 \\
&\leq 3^k + 2k^3 \\
&\leq 3^k+2 \cdot 3^k=3^{k+1}
\end{align*}
where the second step used $2k^3-3k^2-3k-1 =k^3+(k-1)^3-6k \geq 0$ (for $k\geq 3$) and the second and third steps used the assumption that $k^3 \leq 3^k$.</p>
<p>P.S. you would need to check the beginning steps up to $n=3$.</p>
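<p>The algebraic identity used in the second step, and the claim itself for small $n$ (including the base cases up to $n=3$), can be verified mechanically (my addition):</p>

```python
from sympy import symbols, expand

k = symbols('k')
# the rewriting used in the answer: 2k^3 - 3k^2 - 3k - 1 = k^3 + (k-1)^3 - 6k
assert expand(k**3 + (k - 1)**3 - 6*k) == expand(2*k**3 - 3*k**2 - 3*k - 1)

# the claim n^3 <= 3^n for a range of n (equality holds at n = 3)
for n in range(0, 30):
    assert n**3 <= 3**n
print("ok")
```
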
|
2,664,286 | <p>I am confused between the usage of two words <em>for all</em> and <em>any</em>. Let us consider the example of the definition normal subgroups, A subgroup $H$ is said to be normal if $\forall g \in G, g^{-1}Hg = H$ but if I rephrase the definition of normal subgroup to $H$ is normal in $G$ if for any $g \in G, g^{-1}Hg = H$.</p>
<p>Are these two definitions correct?</p>
| user | 505,767 | <p><strong>HINT</strong></p>
<p>R is that point on the line such that the angles of PR and QR with the line are equal.</p>
<p>A trick is to consider the reflection $\bar P$ of $P$ (or Q) with respect to the line and then consider the intersection between $\bar P Q$ and the line. The intersection point is R.</p>
<p><a href="https://i.stack.imgur.com/imNcW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/imNcW.jpg" alt="enter image description here"></a></p>
|
2,664,286 | <p>I am confused between the usage of two words <em>for all</em> and <em>any</em>. Let us consider the example of the definition normal subgroups, A subgroup $H$ is said to be normal if $\forall g \in G, g^{-1}Hg = H$ but if I rephrase the definition of normal subgroup to $H$ is normal in $G$ if for any $g \in G, g^{-1}Hg = H$.</p>
<p>Are these two definitions correct?</p>
| Peter Szilas | 408,605 | <p>Refer to Gimusi's drawing:</p>
<p>Let me write $\overline{AC} :=$ the length of $AC.$</p>
<p>Let $A'$ be the reflection of $A$ with respect to the given line. </p>
<p>If $A',B,C$ are collinear then </p>
<p>$\overline {AC}+ \overline {CB}$ is shortest.</p>
<p>Proof:</p>
<p>Line $A'B$ intersects the given line at $C.$</p>
<p>Choose any other path from $A$ to $B$, </p>
<p>i.e. a point $D \not = C$.</p>
<p>Consider the $\triangle A'DB$ .</p>
<p>The sum of 2 sides of a triangle is greater than the third side.</p>
<p>Hence: </p>
<p>$\overline {A'D} + \overline{DB} \gt \overline{ A'B}$, </p>
<p>or, since $\overline{ A'D }= \overline{ AD}$ , and </p>
<p>$\overline{ A'B} = \overline {AC} + \overline{CB}$, </p>
<p>we have:</p>
<p>$\overline{AD} + \overline {DB} > \overline{AC} + \overline{CB}.$</p>
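<p>The reflection argument can also be checked numerically (the example points below are mine, not from the answer): minimizing $\overline{AD}+\overline{DB}$ over points $D$ on the line lands on the intersection $C$ of $A'B$ with the line, and the minimum equals $\overline{A'B}$.</p>

```python
import math

A, B = (0.0, 1.0), (4.0, 2.0)      # two points on the same side of the line y = 0
A_ref = (A[0], -A[1])              # A', the reflection of A across the line

def path(c):
    # length of the broken path A -> (c, 0) -> B
    return math.dist(A, (c, 0.0)) + math.dist((c, 0.0), B)

# C = intersection of segment A'B with y = 0
t = -A_ref[1] / (B[1] - A_ref[1])
c_star = A_ref[0] + t * (B[0] - A_ref[0])   # = 4/3 for these points

# crude grid search for the minimizer of the path length
c_min = min((c / 1000.0 for c in range(-2000, 6000)), key=path)
print(c_star, c_min, path(c_star), math.dist(A_ref, B))
```

<p>For these points $A'=(0,-1)$, $C=(4/3,0)$, and the minimal length is $\overline{A'B}=\sqrt{16+9}=5$, which the grid search reproduces.</p>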
|
2,407,820 | <p>$$\ln^q (1+x) \le \frac{q}{p} x^p \quad (x \ge 0, \; 0 < p \le q)$$</p>
<p>For $p=q$ this reduces to the familiar $\ln(1+x) \le x$. Otherwise I haven't had much success in proving it. General suggestions would be appreciated.</p>
| Ahmad | 411,780 | <p>put $p=\frac{1}{\ln x}$ and when $x > e$ then $p <1$ and we arrive at </p>
<p>$\ln^q(x+1) \leq q \ln x * x^{p}$ which is $\ln^q (x+1) \leq q \ln x *e$</p>
<p>when $q \geq 2$ we get that $\ln^q(x+1) \leq e q \ln x$</p>
<p>Because $\ln^q(x+1)\geq \ln^q x \geq e q \ln x$ divide by $\ln x$</p>
<p>We get that $\ln^{q-1} x \geq e q $ and since $q \geq 2$</p>
<p>then we have that $\ln^{q-1} x \geq \ln x \geq e q$ which will be true whenever </p>
<p>$\ln x \geq e q$ exponent -ate both sides we get that </p>
<p>$x \geq e^{e q}$ for example if we put $q=3$ then the inequality will be false for all $ x \geq e^{3e} \approx 3480.2$</p>
|
1,715,358 | <p>Being fascinated by the approximation $$\sin(x) \simeq \frac{16 (\pi -x) x}{5 \pi ^2-4 (\pi -x) x}\qquad (0\leq x\leq\pi)$$ proposed, more than 1400 years ago by Mahabhaskariya of Bhaskara I (a seventh-century Indian mathematician) (see <a href="https://math.stackexchange.com/questions/976462/a-1-400-years-old-approximation-to-the-sine-function-by-mahabhaskariya-of-bhaska">here</a>), I considered the function $$\sin \left(\frac{1}{2} \left(\pi -\sqrt{\pi ^2-4 y}\right)\right)$$ which I expanded as a Taylor series around $y=0$. This gives $$\sin \left(\frac{1}{2} \left(\pi -\sqrt{\pi ^2-4 y}\right)\right)=\frac{y}{\pi }+\frac{y^2}{\pi ^3}+\left(\frac{2}{\pi ^5}-\frac{1}{6 \pi ^3}\right)
y^3+O\left(y^4\right)$$ Now, I made the substitution (and maybe this is not allowed) $y=(\pi-x)x$. Replacing, I obtain
$$\sin(x)=\frac{(\pi -x) x}{\pi }+\frac{(\pi -x)^2 x^2}{\pi ^3}+\left(\frac{2}{\pi ^5}-\frac{1}{6 \pi ^3}\right) (\pi -x)^3 x^3+\cdots$$ I did not add the $O\left(.\right)$ on purpose, since I do not feel very comfortable about it.</p>
<p>What is really beautiful is that the last expansion matches almost exactly the function $\sin(x)$ for the considered range $(0\leq x\leq\pi)$ and it can be very useful for easy and simple approximate evaluations of definite integrals such as$$I_a(x)=\int_0^x \frac{\sin(t)}{t^a}\,dt$$ under the conditions $(0\leq x\leq \pi)$ and $a<2$.</p>
<p>I could do the same with the simplest Padé approximant and obtain $$\sin(x)\approx \frac{(\pi -x) x}{\pi \left(1-\frac{(\pi -x) x}{\pi ^2}\right)}=\frac{5\pi(\pi -x) x}{5\pi ^2-5(\pi -x) x}$$ which, for sure, is far from being as good as the magnificent approximation given at the beginning of the post, but which is not too bad (except around $x=\frac \pi 2$).</p>
<p>The problem is that I am not sure that I have the right to do things like this.</p>
<p><strong>I would greatly appreciate if you could tell me what I am doing wrong and/or illegal using such an approach.</strong></p>
<p><strong>Edit</strong></p>
<p>After robjohn's answer and recommendations, I improved the approximation, writing as an approximant $$f_n(x)=\sum_{i=1}^n a_i \big((\pi-x)x\big)^i$$ and minimizing $$S_n=\int_0^\pi\big(\sin(x)-f_n(x)\big)^2\,dx$$ with respect to the $a_i$'s.</p>
<p>What is obtained is $$a_1=\frac{60480 \left(4290-484 \pi ^2+5 \pi ^4\right)}{\pi ^9} \approx 0.31838690$$ $$a_2=-\frac{166320 \left(18720-2104 \pi ^2+21 \pi ^4\right)}{\pi ^{11}}\approx 0.03208100$$ $$a_3=\frac{720720 \left(11880-1332 \pi ^2+13 \pi ^4\right)}{\pi ^{13}}\approx 0.00127113$$ These values are not very far from those given by Taylor ($\approx 0.31830989$), ($\approx 0.03225153$), ($\approx 0.00116027$) but, as shown below, they change very drastically the results.</p>
<p>The errors oscillate above and below the zero line and, for the considered range, are all smaller than $10^{-5}$.</p>
<p>After minimization, $S_3\approx 8.67\times 10^{-11}$ while, for the above Taylor series, it was $\approx 6.36\times 10^{-7}$.</p>
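<p>These closed forms are easy to check numerically; a minimal sketch in Python (the grid resolution is an arbitrary choice of mine):</p>

```python
import math

pi = math.pi
# Closed forms for the least-squares coefficients quoted above
a1 = 60480 * (4290 - 484 * pi**2 + 5 * pi**4) / pi**9
a2 = -166320 * (18720 - 2104 * pi**2 + 21 * pi**4) / pi**11
a3 = 720720 * (11880 - 1332 * pi**2 + 13 * pi**4) / pi**13

def f3(x):
    # Cubic approximant f_3 in the variable y = (pi - x) x
    y = (pi - x) * x
    return a1 * y + a2 * y**2 + a3 * y**3

# Maximum absolute error over a fine grid of [0, pi]
err = max(abs(math.sin(pi * k / 10000) - f3(pi * k / 10000))
          for k in range(10001))
print(a1, a2, a3, err)
```

<p>The printed coefficients reproduce the decimal values above, and the grid maximum of the error stays of the order of $10^{-5}$.</p>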
| robjohn | 13,854 | <p><strong>A few approximations</strong></p>
<p>When making approximations, there is no legal or illegal. There are things that work better and things that don't. When making approximations that are supposed to work over a large range of values, often the plain Taylor series is not the best way to go. Instead, a polynomial or rational function that matches the function at a number of points is better.
$$
\frac{\pi(\pi-x)x}{\pi^2-\left(4-\pi\right)(\pi-x)x}\tag{1}
$$
matches the values and slopes of $\sin(x)$ at $0$, $\frac\pi2$, and $\pi$. However, it is always low.</p>
<p><img src="https://i.stack.imgur.com/nx4Iv.png" alt="enter image description here"></p>
<p>If instead, we match the values at $0$, $\frac\pi6$,$\frac\pi2$, $\frac{5\pi}6$, and $\pi$ we get Mahabhaskariya's approximation
$$
\frac{16(\pi-x)x}{5\pi^2-4(\pi-x)x}\tag{2}
$$
which is both high and low, and the maximal error is about $\frac13$ of the one-sided error.</p>
<p><img src="https://i.stack.imgur.com/nr6w9.png" alt="enter image description here"></p>
<p>A good quadratic polynomial approximation also matches the values at $0$, $\frac\pi6$,$\frac\pi2$, $\frac{5\pi}6$, and $\pi$
$$
\frac{31}{10\pi^2}(\pi-x)x+\frac{18}{5\pi^4}(\pi-x)^2x^2\tag{3}
$$
<img src="https://i.stack.imgur.com/0TVkQ.png" alt="enter image description here"></p>
<p>The maximal error is about $\frac23$ that of Mahabhaskariya's.</p>
<p>If we want to extend to a cubic polynomial, we can try to match values at $0$, $\frac\pi6$, $\frac\pi4$, $\frac\pi2$
$$
\tfrac{9711-6400\sqrt2}{210\pi^2}(\pi-x)x+\tfrac{-7194+5120\sqrt2}{15\pi^4}(\pi-x)^2x^2+\tfrac{43488-30720\sqrt2}{35\pi^6}(\pi-x)^3x^3\tag{4}
$$
<img src="https://i.stack.imgur.com/hqjnH.png" alt="enter image description here"></p>
<p>The maximum error of approximation $(4)$ is about $\frac1{40}$ that of approximation $(3)$.</p>
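<p>The relative-error claims for $(2)$ and $(3)$ are easy to confirm numerically; a short sketch (the grid resolution is an arbitrary choice):</p>

```python
import math

pi = math.pi

def bhaskara(x):   # approximation (2)
    return 16 * (pi - x) * x / (5 * pi**2 - 4 * (pi - x) * x)

def quad(x):       # approximation (3)
    y = (pi - x) * x
    return 31 / (10 * pi**2) * y + 18 / (5 * pi**4) * y**2

xs = [pi * k / 10000 for k in range(10001)]
err2 = max(abs(math.sin(x) - bhaskara(x)) for x in xs)
err3 = max(abs(math.sin(x) - quad(x)) for x in xs)
print(err2, err3, err3 / err2)  # the ratio comes out near 2/3
```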
<hr>
<p><strong>Analysis of the functions in the question</strong></p>
<p>The function
$$
\frac{\pi(\pi-x)x}{\pi^2-(\pi-x)x}\tag{5}
$$
has a maximum error about $40\times$ as big as $(3)$</p>
<p><img src="https://i.stack.imgur.com/lVNkR.png" alt="enter image description here"></p>
<p>The function
$$
\frac{(\pi-x)x}\pi+\frac{(\pi-x)^2x^2}{\pi^3}+\left(\frac2{\pi^5}-\frac1{6\pi^3}\right)(\pi-x)^3x^3\tag{6}
$$
has $30\times$ the maximum error of $(4)$. However, the coefficients of $(6)$ are more appealing.</p>
<p><img src="https://i.stack.imgur.com/ZOA1u.png" alt="enter image description here"></p>
|
157,992 | <p>Please, help me</p>
<p>Prove that $(1, i);(1,-i)$ are characteristic vectors of $\begin{bmatrix} a & b \\ -b & a \end{bmatrix}$</p>
<p>I've found the polynomial characteristic: $\lambda^2-2a\lambda+a^2+b^2$ and the roots are:</p>
<p>$\lambda_{1} = a+ib \\
\lambda_{2} = a-ib$</p>
<p>But, I don't know how to resolve the system and find characteristic vectors.</p>
<p>Thanks so much.</p>
| Arturo Magidin | 742 | <p>In order to test if a <em>given</em> nonzero vector $\mathbf{v}$ is a characteristic vector (aka eigenvector) of a <em>given</em> matrix $A$, you do <strong>not</strong> need to find the eigenvalues, or characteristic polynomial of the matrix! All you have to do is compute $A\mathbf{v}$ and see if what you get is a scalar multiple of $\mathbf{v}$ or not.</p>
<p>So you need to compute
$$\left(\begin{array}{cc}a&b\\-b&a\end{array}\right)\left(\begin{array}{c}1\\i\end{array}\right) = \left(\begin{array}{c}a+bi\\-b+ai\end{array}\right).$$
Now, is $(a+bi,-b+ai)$ a scalar multiple of $(1,i)$? Yes: check that if $\lambda=a+bi$, then $\lambda(1,i) = (a+bi,-b+ai)$. Since $A(1,i)^t$ is a scalar multiple of $(1,i)$, then $(1,i)$ is a characteristic vector of $A$.</p>
<p>Similarly with $(1,-i)$. </p>
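<p>If you want to script the check, here it is with the concrete values $a=2$, $b=3$ (chosen arbitrarily for illustration):</p>

```python
# Multiply A = [[a, b], [-b, a]] by a vector and compare with lam * v
a, b = 2.0, 3.0

def matvec(v):
    return (a * v[0] + b * v[1], -b * v[0] + a * v[1])

for v, lam in [((1, 1j), a + b * 1j), ((1, -1j), a - b * 1j)]:
    Av = matvec(v)
    assert all(abs(Av[k] - lam * v[k]) < 1e-12 for k in range(2))
print("both (1, i) and (1, -i) are eigenvectors")
```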
<p>(I'm reminded of how the vast majority of my Calculus students will, when faced with a problem like "Verify that $f(x)=2\sin x-3\cos x$ is a solution to the differential equation $y''+y=0$" will proceed to try to find the general solution to the equation and see if the given $f(x)$ is of that general form, instead of simply <em>plugging in</em> $f$ and checking the equality... even after I tell them to just plug in... then again, if they are asked to verify that $17$ is a solution to $x^2 -27x + 170=0$, they will proceed to solve the quadratic instead of just plugging in, too)</p>
|
120,260 | <p>Let $X$ be a simply connected smooth projective variety, whose Picard group is generated by the classes of the irreducible codimension 1 loci $D_1, \ldots, D_k$. Let $E_1, \ldots, E_r$ be other irreducible codimension 1 loci, and suppose that $X^0$ is the complement in $X$ of the divisors $D_i$ and $E_j$.</p>
<p>Suppose now that $X^0$ is the complement of $n$ irreducible loci of codimension $1$ in $Y$, a smooth projective variety.</p>
<p>Question: Can I conclude that the Picard group of $Y$ has rank $n-r$?</p>
<p>I can answer the question affirmatively over $\mathbb{C}$, by using the long exact sequence with compact support associated with the inclusion $Y \setminus X^0 \to Y$, but I would like to know if there is an algebraic proof of this (valid over any algebraically closed field $k$).</p>
<p>EDIT: As pointed out in the answer, I am actually assuming that the Picard group of $X$ is FREELY generated by the $D_1, \ldots, D_k$.</p>
| Daniel Litt | 6,950 | <p>There has indeed been exciting recent work in this area, by Bhargava and Shankar (see <a href="http://www-math.mit.edu/~poonen/papers/Exp1049.pdf">this Bourbaki expose by Poonen</a>) and also by <a href="http://www.math.harvard.edu/~gross/preprints/stable18.pdf">Bhargava and Gross</a>. Briefly, the work of Bhargava and Shankar bounds the average rank of the group of rational points of elliptic curves over $\mathbb{Q}$, while the Bhargava and Gross paper does the same for Jacobians of hyperelliptic curves.</p>
<p>Section 4 of the (quite readable) write-up by Poonen explains why I refer to these results as recent advances in the geometry of numbers: both of these results boil down to (subtle!) computations of adelic volumes! It's worth noting that the <a href="http://arxiv.org/abs/1006.1002">work of Bhargava and Shankar</a> does not use adelic language, and so is more obviously related to the "classical" geometry of numbers.</p>
|
2,633,720 | <blockquote>
<p>Prove by induction that $$ (k + 2)^{k + 1} \leq (k+1)^{k +2}$$ for $ k > 3 .$</p>
</blockquote>
<p>I have been trying to solve this, but I am not getting the sufficient insight. </p>
<p>For example, $(k + 2)^{k + 1} = (k +2)^k (k +2) , (k+1)^{k +2}= (k+1)^k(k +1)^2.$</p>
<p>$(k +2) < (k +1)^2 $ but $(k+1)^k < (k +2)^k$, so what I want would clearly not be immediate from using something like: if $ 0 < a < b$ and $0<c<d$, then $0 < ac < bd $. The formula is valid for $n = 4$, so if it is valid for $n = k$ I would have to use</p>
<p>$ (k + 2)^{k + 1} \leq (k+1)^{k +2} $ somewhere in order to get that $ (k + 3)^{k + 2} \leq (k+2)^{k +3} $ is also valid. This seems tricky.</p>
<p>I also tried expanding $(k +2)^k $ using the binomial formula and multiplying this by $(k + 2)$, and I expanded $(k+1)^k$ and multiplied it by $(k + 1)^2 $ term by term. I tried to compare these sums, but it also gets tricky. I would appreciate a hint for this problem, thanks. </p>
| Anne Bauval | 386,889 | <p>A simpler (and a little stronger) statement is:
<span class="math-container">$$\forall n\ge3\quad(n+1)^n<n^{n+1}.$$</span>
We first check that <span class="math-container">$(3+1)^3=64<81=3^{3+1}.$</span>
Then, for the induction step, it is sufficient to prove (for all <span class="math-container">$n\ge3$</span>) that
<span class="math-container">$$\frac{(n+2)^{n+1}}{(n+1)^{n+2}}\le\frac{(n+1)^n}{n^{n+1}},$$</span>
i.e.
<span class="math-container">$$(n(n+2))^{n+1}\le(n+1)^{2n+2},$$</span>
i.e.
<span class="math-container">$$n(n+2)\le(n+1)^2,$$</span>
which is obvious.</p>
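<p>As a sanity check, the inequality (and its failure for $n=1,2$, which is why the base case matters) can be verified exhaustively for small $n$ with exact integer arithmetic:</p>

```python
# Exact integer check of (n+1)^n < n^(n+1); note it fails for n = 1, 2
assert not (1 + 1)**1 < 1**2
assert not (2 + 1)**2 < 2**3
for n in range(3, 200):
    assert (n + 1)**n < n**(n + 1), n
print("holds for all 3 <= n < 200")
```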
|
<p>How should I state the general solution of the equation $\sin(4\phi)=\cos(2\phi)$?
The angles are $15^\circ$, $45^\circ$, $75^\circ$ and $135^\circ$ if I restrict myself to the range $[0^\circ,360^\circ]$.</p>
| Empy2 | 81,790 | <p>$\sin 4\phi=2\sin 2\phi\cos 2\phi=\cos 2\phi$ so either $\cos 2\phi=0$ or $\sin 2\phi=1/2$. So $2\phi = n180^\circ+90^\circ$ or $2\phi=n360^\circ+30^\circ$ or $2\phi=n360^\circ+150^\circ$</p>
|
322,302 | <p>Conjectures play important role in development of mathematics.
Mathoverflow gives an interaction platform for mathematicians from various fields, while in general it is not always easy to get in touch with what happens in the other fields.</p>
<p><strong>Question</strong> What are the conjectures in your field proved or disproved (counterexample found) in recent years, which are noteworthy, but not so famous outside your field?</p>
<p>Answering the question you are welcome to give some comment for outsiders of your field which would help to appreciate the result.</p>
<p>Asking the question I keep in mind by "recent years" something like a dozen years before now, by a "conjecture" something which was known as an open problem for something like at least dozen years before it was proved and I would say the result for which the Fields medal was awarded like a proof of <a href="https://en.wikipedia.org/wiki/Fundamental_lemma_(Langlands_program)" rel="noreferrer">fundamental lemma</a> would not fit "not so famous", but on the other hand these might not be considered as strict criteria, and let us "assume a good will" of the answerer.</p>
| skupers | 798 | <p>In <a href="https://arxiv.org/abs/1812.02448" rel="noreferrer">https://arxiv.org/abs/1812.02448</a>, Tadayuki Watanabe announced a disproof of the Smale conjecture in dimension 4. In particular, he shows that the inclusion <span class="math-container">$O(5) \hookrightarrow \mathrm{Diff}(S^4)$</span> is <em>not</em> a homotopy equivalence. This was the last remaining dimension in which it was not known whether the inclusion <span class="math-container">$O(n+1) \hookrightarrow \mathrm{Diff}(S^n)$</span> was a homotopy equivalence (it is for <span class="math-container">$n \leq 3$</span> and it is not for <span class="math-container">$n \geq 5$</span>).</p>
|
322,302 | <p>Conjectures play important role in development of mathematics.
Mathoverflow gives an interaction platform for mathematicians from various fields, while in general it is not always easy to get in touch with what happens in the other fields.</p>
<p><strong>Question</strong> What are the conjectures in your field proved or disproved (counterexample found) in recent years, which are noteworthy, but not so famous outside your field?</p>
<p>Answering the question you are welcome to give some comment for outsiders of your field which would help to appreciate the result.</p>
<p>Asking the question I keep in mind by "recent years" something like a dozen years before now, by a "conjecture" something which was known as an open problem for something like at least dozen years before it was proved and I would say the result for which the Fields medal was awarded like a proof of <a href="https://en.wikipedia.org/wiki/Fundamental_lemma_(Langlands_program)" rel="noreferrer">fundamental lemma</a> would not fit "not so famous", but on the other hand these might not be considered as strict criteria, and let us "assume a good will" of the answerer.</p>
| Martin Väth | 165,275 | <p>The famous Nussbaum conjecture stated that every continuous map of a closed ball in a Banach space with a compact iterate (i.e. the iterate has relatively compact range) has a fixed point. Again Robert Cauty (see my previous post) proved it 2015 in the positive by showing that even a Lefschetz type fixed point theorem for maps with compact iterates holds:</p>
<ul>
<li>Cauty, Robert, Un théorème de Lefschetz–Hopf pour les fonctions à itérées compactes, Crelle Journal für die reine und angewandte Mathematik <strong>2017</strong> (729), <a href="https://doi.org/10.1515/crelle-2014-0134" rel="nofollow noreferrer">https://doi.org/10.1515/crelle-2014-0134</a></li>
</ul>
<p>The conjecture was formulated in about 1970.</p>
<p>As Robert Nussbaum once pointed out, the attractiveness of this conjecture lay in the fact that it is apparently so simple to prove, and that it can in fact be shown relatively easily under mild additional hypotheses (differentiability is such an “obviously” sufficient hypothesis, or that the map is even condensing, or that the range of some iterate has a locally nice topological structure, ...), but the longer one works on the problem, the harder it seems, and the less likely that one does not need <em>any</em> additional hypothesis. Many novelties in the field were inspired by proofs under such additional hypotheses.</p>
|
122,848 | <p>Is my calculation correct for this rotation around a point?</p>
<p>A point a(-19.94,392.11) is rotated -49.45 degrees, what is the new coordinates of point a?</p>
<p>My solution:</p>
<pre><code>x' = x*cos(0) - y*sin(0)
y' = x*sin(0) + y*cos(0)
x' = (-12.961) - (-298.0036)
y' = (15.15) + (254.92)
x' = 285.04
y' = 270.07
</code></pre>
| Peđa | 15,660 | <p>Result of my calculation is slightly different than yours . </p>
<p>Let's denote :</p>
<p>$\theta=-49.45^\circ $</p>
<p>$x'=r\cos \alpha ~\text{ and }~y'=r\sin \alpha$</p>
<p>where $~r=\sqrt{x^2+y^2} =392.5$</p>
<p>Note that :</p>
<p>$\alpha=\arctan\left|\frac{y}{x}\right|+\frac{\pi}{2}-|\theta|$</p>
<p>$\alpha=42.07^\circ$</p>
<p>hence : </p>
<p>$(x',y')=(291.69,262.64)$</p>
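<p>Assuming the rotation is about the origin (as the question's formulas do), either computation can be double-checked directly:</p>

```python
import math

x, y = -19.94, 392.11
theta = math.radians(-49.45)

# Standard rotation-about-the-origin formulas
xp = x * math.cos(theta) - y * math.sin(theta)
yp = x * math.sin(theta) + y * math.cos(theta)
print(round(xp, 2), round(yp, 2))  # 284.98 270.07
```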
|
25,100 | <p>Suppose one has a set $S$ of positive real numbers, such that the usual numerical ordering on $S$ is a well-ordering. Is it possible for $S$ to have any countable ordinal as its order type, or are the order types that can be formed in this way more restricted than that?</p>
| Pietro Majer | 6,101 | <p>To complete the picture (the obvious remaining part). If ${S\subset\mathbb R}$ is well ordered, then it is countable: indeed it has countable cofinality. Thus well-ordered subsets of <strong>R</strong> are <em>exactly</em> countable ordinals.</p>
|
3,479,883 | <p>I know that (I might be wrong):</p>
<ul>
<li>Symbol for the empty or null set: Ø or {}</li>
<li>The null or empty set is a subset of all sets, as well as of the empty or null set itself</li>
<li>So, { {} } is the same as { Ø }</li>
</ul>
<p>I just want to know: is { {} } or { Ø } an empty set or not? And if yes, then we can conclude that if a set contains only a null set (which, by definition, it always does as a subset), then it must be a null or empty set.</p>
<p>(Here I am assuming empty and null are same, because I've read that they sometimes taken as different.)</p>
| MPW | 113,214 | <p>A set whose only element is the empty set is not empty (an empty set contains no element).</p>
<p>Think of sets a boxes. If you put a small empty box into a big box, the big box isn't empty anymore. It doesn't matter if the small box is empty or not. That's the beauty of the <span class="math-container">$\{\;\}$</span> notation -- it "looks" like a box.</p>
<p>If you remember that <span class="math-container">$\varnothing$</span> is just another name for <span class="math-container">$\{\;\}$</span>, then you immediately know that <span class="math-container">$\varnothing$</span> and <span class="math-container">$\{\varnothing\}$</span> are not the same thing, but <span class="math-container">$\{\varnothing\}$</span> and <span class="math-container">$\{\{\;\}\}$</span> are the same thing.</p>
|
4,182,153 | <p>Let A be a nonempty compact subset of <span class="math-container">$R$</span> (real numbers) and let B be a nonempty closed subset of R. Recall
that <span class="math-container">$\operatorname{dist}(A, B) = \inf\{|x − y| : x ∈ A, y ∈ B\}$</span>. Show that there exist <span class="math-container">$a ∈ A$</span> and <span class="math-container">$b ∈ B$</span> such that
<span class="math-container">$|a − b| = \operatorname{dist}(A, B)$</span>.</p>
<p>How to prove this question?</p>
| usr25 | 944,317 | <p>Consider the closed set <span class="math-container">$S = A - B$</span>, which is <span class="math-container">$A + (-B)$</span>, if it contains <span class="math-container">$0$</span>, then <span class="math-container">$a = b$</span>, and <span class="math-container">$A \cap B \neq \emptyset$</span>.
Otherwise, consider <span class="math-container">$s = dist(0,S)$</span>, if <span class="math-container">$\pm s \in S$</span> then the statement is proven, this is a simpler version beacuse it is only one closed set and a point. You can take a look at this other question for a prove for that. <a href="https://math.stackexchange.com/questions/83505/distance-to-a-closed-set">Distance to a closed set</a></p>
|
3,299,661 | <p>I am familiar with all 3 of the entities I have listed in my question. I know the definitions of "reflexive", "symmetric", and "transitive". However, I am afraid I do not mechanistically understand the "flow" of how we ultimately generate equivalence classes from a particular relation that exhibits the 3 properties of equivalence.</p>
<p>To help illustrate my confusion, consider the following example:</p>
<p><span class="math-container">$S=\{1,2,3,4,5,6\}$</span></p>
<p>Let <span class="math-container">$R_1$</span> be a relation on <span class="math-container">$S$</span> such that <span class="math-container">$x-y$</span> is divisible by <span class="math-container">$3$</span></p>
<p>So, firstly, from what I understand about relations, I am going to find all of the order pairs that satisfy this (these ordered pairs are a subset of the cartesian product <span class="math-container">$S$</span> x <span class="math-container">$S$</span>). </p>
<p><span class="math-container">$R_1 = \{(1,4), (2,5), (3,6), (4,1), (5, 2), (6, 3), (1,1), (2,2), (3,3), (4,4), (5,5), (6,6)\}$</span></p>
<p>Ok cool. These are all of the ordered pairs that "satisfy" or "make the <span class="math-container">$R_1$</span> relation true".</p>
<p>For this given relation, I can observe the following:</p>
<p><strong>1 -</strong> The reflexive property is satisfied because of the presence of <span class="math-container">$(1,1), (2,2),\ etc$</span></p>
<p><strong>1st question</strong>: If, for example, (6,6) was not in this set, <span class="math-container">$R_1$</span> could not be deemed reflexive because <span class="math-container">$(3,6)$</span> and <span class="math-container">$(6,3)$</span> are present, correct? (i.e. because the element "6" shows up in an ordered pair, <span class="math-container">$(6,6)$</span> MUST show up as well in order to declare this relation reflexive)</p>
<p><strong>2 -</strong> The symmetric property is satisfied because of the presence of <span class="math-container">$(1,4) \& (4,1)$</span>, <span class="math-container">$(2,5)\&(5,2),\ etc$</span></p>
<p><strong>3 -</strong> The transitive property is satisfied because...</p>
<p><strong>2nd question</strong>: I actually do not immediately see why the transitive property is satisfied (I believe that the transitive property <em>should be</em> satisfied because the "congruence modulo n" relation is an equivalence relation...and I'm fairly certain that the relation <span class="math-container">$R_1$</span> that I described is of that form). Is it just because my set is too small to see the transitive property in its stereotypical form?</p>
<p>So, assuming that this relation IS an equivalence relation (I believe that it is...for the reason mentioned above), I really do not understand how we go from this single set of ordered pairs to equivalence classes. From example videos I have seen, I know that a set of integers mod 3 will create three equivalence classes...namely, the integers with remainder 0, 1, and 2 when divided by 3. </p>
<p><strong>3rd question</strong>:
However, I do not really understand, mechanistically, how we "separate" these ordered pairs. All of the ordered pairs are initially grouped together. How do we decide, from this initial <span class="math-container">$R_1$</span> set, which ordered pairs belong to which equivalence class? Obviously, if you know how mod 3 works, you could sort of intuit that 1 and 4 go together because </p>
<p><span class="math-container">$1 \bmod 3 = 4 \bmod 3$</span></p>
<p>...however, if I knew nothing about how <span class="math-container">$\operatorname{mod} 3$</span> worked, how would I know how to make the appropriate partitions? </p>
| José Carlos Santos | 446,262 | <ol>
<li>If <span class="math-container">$(6,6)\notin R_1$</span>, then <span class="math-container">$R_1$</span> would not be reflexive simply because <span class="math-container">$R_1$</span> being reflexive <em>means</em> that <span class="math-container">$(\forall n\in\{1,2,3,4,5,6\}):(n,n)\in R_1$</span>.</li>
<li>It is transitive because if <span class="math-container">$3\mid x-y$</span> and <span class="math-container">$3\mid y-z$</span>, then <span class="math-container">$3\mid(x-y)+(y-z)$</span>. But this means that <span class="math-container">$3\mid x-z$</span>. So, this proves that<span class="math-container">$$x\mathrel{R_1}y\text{ and }y\mathrel{R_1}z\implies x\mathrel{R_1}z.$$</span></li>
<li>Consider the number <span class="math-container">$1$</span>. For which numbers <span class="math-container">$n\in\{1,2,3,4,5,6\}$</span> do we have <span class="math-container">$1\mathrel{R_1}n$</span>? It is easy to see that this occurs if and only if <span class="math-container">$n=1$</span> or <span class="math-container">$n=4$</span>. So, the equivalence class of <span class="math-container">$1$</span> is <span class="math-container">$\{1,4\}$</span>. Now take some element of <span class="math-container">$\{1,2,3,4,5,6\}\setminus\{1,4\}$</span>. Suppose that you have taken <span class="math-container">$5$</span>. For which numbers <span class="math-container">$n\in\{1,2,3,4,5,6\}$</span> do we have <span class="math-container">$5\mathrel{R_1}n$</span>? It is easy to see that this occurs if and only if <span class="math-container">$n=2$</span> or <span class="math-container">$n=5$</span>. So, the equivalence class of <span class="math-container">$5$</span> is <span class="math-container">$\{2,5\}$</span>. And now you start all over again, taking some element from <span class="math-container">$\{1,2,3,4,5,6\}\setminus\bigl(\{1,4\}\cup\{2,5\}\bigr)$</span>…</li>
</ol>
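<p>The "start all over again" procedure in step 3 is easy to automate; a minimal sketch (the loop details are my choice):</p>

```python
S = {1, 2, 3, 4, 5, 6}

def related(x, y):
    # The relation R1: x - y is divisible by 3
    return (x - y) % 3 == 0

# Repeatedly pick an unclassified element and collect everything related to it
classes = []
remaining = set(S)
while remaining:
    x = min(remaining)
    cls = {y for y in S if related(x, y)}
    classes.append(cls)
    remaining -= cls

print(classes)  # [{1, 4}, {2, 5}, {3, 6}]
```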
|
<p>The exercise states: prove that the limit of the sequence $$a_{n+2}=(a_na_{n+1})^{1/2},\qquad a_1 \ge 0,\ a_2 \ge 0$$</p>
<p>is $L = (a_1a_2^2)^{1/3}$</p>
<p>The solution says: let $b_n = \frac{a_{n+1}}{a_n}$; then $$b_{n+1}= 1/\sqrt{b_n} \quad \text{for all } n,$$ which implies that
$$b_{n+1}= b_1^{(-1/2)^n} \rightarrow 1 \ \text{as} \ n \rightarrow \infty$$</p>
<p>Consider
$$\prod_{j=2}^{n+1}b_j = \prod_{j=1}^{n}(b_j)^{-1/2} $$</p>
<p>This implies that:$$(a_1^{1/2}a_2)^{-2/3}a_{n+1} = \left( \frac{1}{b_{n+1}} \right)^{2/3}$$...</p>
<p>I am having problems obtaining this last implication, I see that $$\prod_{j=2}^{n+1}b_j = \frac{a_{n+2}}{a_2} \ and \ \prod_{j=1}^{n}(b_j)^{-1/2}= \frac{a_{n+1}}{a_1} $$ But still I struggle.</p>
<p>Any help?</p>
| mike | 75,218 | <p>Consider squaring the LHS and RHS of the following equation:</p>
<p>$$\prod_{j=2}^{n+1}b_j = \prod_{j=1}^{n}b_j^{-1/2}\tag{0}$$</p>
<p>We obtain:</p>
<p>$$\prod_{j=2}^{n+1}(b_j)^2 = (b_1)^{c(n)}, c(n)=\frac{2}{3}(-1+(-1/2)^n)\tag{1}$$</p>
<p>$$\prod_{j=1}^{n}b_j^{-1} = (b_1)^{d(n)}, d(n)=c(n)\tag{2}$$</p>
|
4,215 | <p>I suspect it is impossible to split a (any) 3d solid into two, such that each of the pieces is identical in shape (but not volume) to the original. How can I prove this?</p>
| Mariano Suárez-Álvarez | 274 | <p>It seems that Puppe and others proved that this is impossible for <em>any</em> strictly convex solid. See [B. L van den Waerden, Aufgabe Nr 51, <em>Elem. Math.</em> <strong>4</strong> (1949) <strong>18</strong>, 140]</p>
<p>The reference comes from <em>Unsolved problems in geometry</em> by Hallard T. Croft, K. J. Falconer and Richard K. Guy.</p>
|
433,403 | <ol>
<li>Let F(x,y) be the statement, “x can fool y,” where the domain consists of all of the people in the world. Translate this statement into symbolic logic.
a. Everyone can be fooled by somebody.</li>
</ol>
<p>Would it be: for every x, y in W, F(x,y) is in W?</p>
<p>I am not getting the gist of this...</p>
| Cameron Buie | 28,900 | <p>"For every $x,y$ in $W$, $F(x,y)$ is in $W$" translates to "For every person $x$ and person $y$ in the world, the statement '$x$ can fool $y$' is a person in the world." Does this statement make sense?</p>
<p>What you're trying to say is that for every person in the world, there is some person in the world who can fool them. That is, for every person $x$ in $W$, there is some person $y$ in $W$ such that $y$ can fool $x$. Symbolically: $$\forall x\in W\:\exists y\in W\:F(y,x)$$</p>
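<p>On a finite toy domain, the symbolic statement can be checked mechanically; here $F$ is a small relation invented purely for illustration:</p>

```python
W = {"alice", "bob", "carol"}
# F(x, y): "x can fool y" -- a toy relation invented for illustration
F = {("alice", "bob"), ("bob", "carol"), ("carol", "alice")}

# forall x in W, exists y in W : F(y, x) -- "everyone can be fooled by somebody"
everyone_foolable = all(any((y, x) in F for y in W) for x in W)
print(everyone_foolable)  # True: each person occurs as a second coordinate
```

<p>Note how the `all`/`any` nesting mirrors the quantifier order: swapping them would assert instead that some single person can fool everyone.</p>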
|
354,250 | <p><strong>Remark:</strong> All the answers so far have been very insightful and on point but after receiving public and private feedback from other mathematicians on the MathOverflow I decided to clarify a few notions and add contextual information. 08/03/2020.</p>
<h2>Motivation:</h2>
<p>I recently had an interesting exchange with several computational neuroscientists on whether organisms with spatiotemporal sensory input can simulate physics without computing partial derivatives. As far as I know, partial derivatives offer the most quantitatively precise description of spatiotemporal variations. Regarding feasibility, it is worth noting that a number of computational neuroscientists are seriously considering the possibility that human brains perform reverse-mode automatic differentiation, or what some call backpropagation [7].</p>
<p>Having said this, a large number of computational neuroscientists (even those that have math PhDs) believe that complex systems such as brains may simulate classical mechanical phenomena without computing approximations to partial derivatives. Hence my decision to share this question.</p>
<h2>Problem definition:</h2>
<p>Might there be an alternative formulation of mathematical physics which does not employ partial derivatives? I think that this may be a problem in reverse mathematics [6]. But, in order to define equivalence, a couple of definitions are required:</p>
<p><strong>Partial Derivative as a linear map:</strong></p>
<p>If the derivative of a differentiable function <span class="math-container">$f: \mathbb{R}^n \rightarrow \mathbb{R}^m$</span> at <span class="math-container">$x_o \in \mathbb{R}^n$</span> is given by the Jacobian <span class="math-container">$\frac{\partial f}{\partial x} \Bigr\rvert_{x=x_o} \in \mathbb{R}^{m \times n}$</span>, the partial derivative with respect to <span class="math-container">$i \in [n]$</span> is the <span class="math-container">$i$</span>th column of <span class="math-container">$\frac{\partial f}{\partial x} \Bigr\rvert_{x=x_o}$</span> and may be computed using the <span class="math-container">$i$</span>th standard basis vector <span class="math-container">$e_i$</span>:</p>
<p><span class="math-container">\begin{equation}
\frac{\partial{f}}{\partial{x_i}} \Bigr\rvert_{x=x_o} = \lim_{n \to \infty} n \cdot \big(f(x+\frac{1}{n}\cdot e_i)-f(x)\big) \Bigr\rvert_{x=x_o}. \tag{1}
\end{equation}</span></p>
<p>This is the general setting of numerical differentiation [3].</p>
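<p>Definition $(1)$ translates directly into a forward-difference routine; a minimal sketch (the step size and test function are my choices):</p>

```python
def partial_diff(f, x, i, h=1e-6):
    # Forward difference per (1), with n replaced by 1/h
    xh = list(x)
    xh[i] += h
    return (f(xh) - f(x)) / h

f = lambda x: x[0] ** 2 * x[1]         # f(x, y) = x^2 y
print(partial_diff(f, [2.0, 3.0], 0))  # close to 12 (exact: 2xy)
print(partial_diff(f, [2.0, 3.0], 1))  # close to 4  (exact: x^2)
```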
<p><strong>Partial Derivative as an operator:</strong></p>
<p>Within the setting of automatic differentiation [4], computer scientists construct algorithms <span class="math-container">$\nabla$</span> for computing the dual program <span class="math-container">$\nabla f: \mathbb{R}^n \rightarrow \mathbb{R}^m$</span> which corresponds to an operator definition for the partial derivative with respect to the <span class="math-container">$i$</span>th coordinate:</p>
<p><span class="math-container">\begin{equation}
\nabla_i = e_i \frac{\partial}{\partial x_i} \tag{2}
\end{equation}</span></p>
<p><span class="math-container">\begin{equation}
\nabla = \sum_{i=1}^n \nabla_i = \sum_{i=1}^n e_i \frac{\partial}{\partial x_i}. \tag{3}
\end{equation}</span></p>
<p>Given these definitions, a constructive test would involve creating an open-source library for simulating classical and quantum systems that doesn’t contain a method for numerical or automatic differentiation.</p>
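<p>By contrast with the numerical scheme, forward-mode automatic differentiation evaluates the operator $(2)$ exactly up to floating point; a minimal dual-number sketch, not a production implementation:</p>

```python
class Dual:
    # Dual number a + b*eps with eps^2 = 0; b carries the derivative
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def grad_i(f, x, i):
    # Partial derivative along e_i: seed coordinate i with derivative 1
    args = [Dual(v, 1.0 if j == i else 0.0) for j, v in enumerate(x)]
    return f(args).der

f = lambda x: x[0] * x[0] * x[1] + 3 * x[0]   # f = x^2 y + 3x
print(grad_i(f, [2.0, 3.0], 0))  # 15.0  (2xy + 3)
print(grad_i(f, [2.0, 3.0], 1))  # 4.0   (x^2)
```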
<h2>The special case of classical mechanics:</h2>
<p>For concreteness, we may consider classical mechanics as this is the general setting of animal locomotion, and the vector, Hamiltonian, and Lagrangian formulations of classical mechanics have concise descriptions. In all of these formulations the partial derivative plays a central role. But, at the present moment I don't have a proof that rules out alternative formulations. Has this particular question already been addressed by a mathematical physicist?</p>
<p>Perhaps a reasonable option might be to use a probabilistic framework such as Gaussian Processes that are provably universal function approximators [5]?</p>
<h2>Koopman Von Neumann Classical Mechanics as a candidate solution:</h2>
<p>After reflecting upon the answers of Ben Crowell and <a href="https://mathoverflow.net/a/354289">gmvh</a>, it appears that we require a formulation of classical mechanics where:</p>
<ol>
<li>Everything is formulated in terms of linear operators.</li>
<li>All problems can then be recast in an algebraic language.</li>
</ol>
<p>After doing a literature search it appears that Koopman Von Neumann Classical Mechanics might be a suitable candidate as we have an operator theory in Hilbert space similar to Quantum Mechanics [8,9,10]. That said, I just recently came across this formulation so there may be important subtleties I ignore.</p>
<h2>Related problems:</h2>
<p>Furthermore, I think it may be worth considering the following related questions:</p>
<ol>
<li>What would be left of mathematical physics if we could not compute partial derivatives?</li>
<li>Is it possible to accurately simulate any non-trivial physics without computing partial derivatives?</li>
<li>Are the operations of multivariable calculus necessary and sufficient for modelling classical mechanical phenomena?</li>
</ol>
<h2>A historical note:</h2>
<p>It is worth noting that more than 1000 years ago as a result of his profound studies on optics the mathematician and physicist Ibn al-Haytham(aka Alhazen) reached the following insight:</p>
<blockquote>
<p>Nothing of what is visible, apart from light and color, can be
perceived by pure sensation, but only by discernment, inference, and
recognition, in addition to sensation.-Alhazen</p>
</blockquote>
<p>Today it is known that even color is a construction of the mind as photons are the only physical objects that reach the retina. However, broadly speaking neuroscience is just beginning to catch up with Alhazen’s understanding that the physics of everyday experience is simulated by our minds. In particular, most motor-control scientists agree that to a first-order approximation the key purpose of animal brains is to generate movements and consider their implications. This implicitly specifies a large class of continuous control problems which includes animal locomotion.</p>
<p>Evidence accumulated from several decades of neuroimaging studies implicates the role of the cerebellum in such internal modelling. This isolates a rather uniform brain region whose processes at the circuit-level may be identified with efficient and reliable methods for simulating classical mechanical phenomena [11, 12].</p>
<p>As for the question of whether the mind/brain may actually be modelled by Turing machines, I believe this was precisely Alan Turing’s motivation in conceiving the Turing machine [13]. For a concrete example of neural computation, it may be worth looking at recent research that a single dendritic compartment may compute the xor function: [14], <a href="https://www.reddit.com/r/MachineLearning/comments/ejbwvb/r_single_biological_neuron_can_compute_xor/" rel="nofollow noreferrer">Reddit discussion</a>.</p>
<h2>References:</h2>
<ol>
<li>William W. Symes. Partial Differential Equations of Mathematical Physics. 2012.</li>
<li>L.D. Landau & E.M. Lifshitz. Mechanics (Volume 1 of A Course of Theoretical Physics). Pergamon Press 1969.</li>
<li>Lyness, J. N.; Moler, C. B. (1967). "Numerical differentiation of analytic functions". SIAM J. Numer. Anal. 4: 202–210. <a href="https://doi.org/10.1137/0704019" rel="nofollow noreferrer">doi:10.1137/0704019</a>.</li>
<li>Naumann, Uwe (2012). The Art of Differentiating Computer Programs. Software-Environments-tools. SIAM. ISBN 978-1-611972-06-1.</li>
<li>Michael Osborne. Gaussian Processes for Prediction. Robotics Research Group, Department of Engineering Science, University of Oxford. 2007.</li>
<li>Connie Fan. Reverse Mathematics. University of Chicago. 2010.</li>
<li>Richards, B.A., Lillicrap, T.P., Beaudoin, P. et al. A deep learning framework for neuroscience. Nat Neurosci 22, 1761–1770 (2019). <a href="https://doi.org/10.1038/s41593-019-0520-2" rel="nofollow noreferrer">doi:10.1038/s41593-019-0520-2</a>.</li>
<li>Wikipedia contributors. "Koopman–von Neumann classical mechanics." Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 19 Feb. 2020. Web. 7 Mar. 2020.</li>
<li>Koopman, B. O. (1931). "Hamiltonian Systems and Transformations in Hilbert Space". Proceedings of the National Academy of Sciences. 17 (5): 315–318. Bibcode:1931PNAS...17..315K. <a href="https://doi.org/10.1073/pnas.17.5.315" rel="nofollow noreferrer">doi:10.1073/pnas.17.5.315</a>. PMC 1076052. PMID 16577368.</li>
<li>Frank Wilczek. Notes on Koopman von Neumann Mechanics, and a Step Beyond. 2015.</li>
<li>Daniel McNamee and Daniel M. Wolpert. Internal Models in Biological Control. Annual Review of Control, Robotics, and Autonomous Systems. 2019.</li>
<li>Jörn Diedrichsen, Maedbh King, Carlos Hernandez-Castillo, Marty Sereno, and Richard B. Ivry. Universal Transform or Multiple Functionality? Understanding the Contribution of the Human Cerebellum across Task Domains. Neuron review. 2019.</li>
<li>Turing, A.M. (1936). "On Computable Numbers, with an Application to the Entscheidungsproblem". Proceedings of the London Mathematical Society. 2 (published 1937). 42: 230–265. <a href="https://doi.org/10.1112/plms/s2-42.1.230" rel="nofollow noreferrer">doi:10.1112/plms/s2-42.1.230</a>. (And Turing, A.M. (1938). "On Computable Numbers, with an Application to the Entscheidungsproblem: A correction". Proceedings of the London Mathematical Society.)</li>
<li>Albert Gidon, Timothy Adam Zolnik, Pawel Fidzinski, Felix Bolduan, Athanasia Papoutsi, Panayiota Poirazi, Martin Holtkamp, Imre Vida, Matthew Evan Larkum. <a href="https://doi.org/10.1126/science.aax6239" rel="nofollow noreferrer">Dendritic action potentials and computation in human layer 2/3 cortical neurons</a>. Science. 2020.</li>
</ol>
| gmvh | 45,250 | <p>As to question 2, there are certainly plenty of non-trivial discrete models in statistical physics, such as the Ising or Potts models, or lattice gauge theories with discrete gauge groups, that require no partial derivatives (or indeed any operations of differential calculus) at all to formulate and simulate.</p>
<p>Similarly, quantum mechanics can be formulated entirely in the operator formalism, and an entity incapable of considering derivatives could still contemplate the time-independent Schrödinger equation and solve it algebraically for the harmonic oscillator (using the number operator) or the hydrogen atom (using the Laplace-Runge-Lenz-Pauli vector operator).</p>
<p>So an answer to question 1 might be "at least anything that can be written as a discrete-time Markov chain with a discrete state space, as well as anything that can be recast as an eigenvalue problem", and other problems that can be recast in purely probabilistic or algebraic language should also be safe (although it might be hard to come up with their formulations without using derivatives at some intermediate step).</p>
<p>As to question 3, I personally don't believe that an approach to classical mechanics or field theory can be correct if it isn't equivalent (at least at a sufficiently high level of abstraction) to formulating and solving differential equations. But the level of abstraction could conceivably be quite high -- for an attempt to formulate classical mechanics without explicitly referring to <strong>numbers</strong> (!) cf. Hartry Field's philosophical treatise "<a href="https://global.oup.com/academic/product/science-without-numbers-9780198777915?cc=de&lang=en&#" rel="noreferrer">Science without Numbers</a>".</p>
|
102,738 | <p>I imported two data sets. The first:</p>
<pre><code>data1={{0., 5.02512*10^-10}, {0.06668, 2.99284*10^-8}, {0.13336,
3.22116*10^-8}, {0.20004, 2.58191*10^-8}, {0.26672,
1.99125*10^-7}, {0.3334, 1.21646*10^-8}, {0.40008,
3.35916*10^-7}, {0.46676, 3.79768*10^-7}, {0.53344,
1.02102*10^-7}, {0.60012, 1.17535*10^-6}, {0.6668,
1.72507*10^-7}, {0.73348, 1.23789*10^-6}, {0.80016,
1.9808*10^-6}, {0.86684, 1.39616*10^-7}, {0.93352,
4.60649*10^-6}, {1.0002, 1.39262*10^-6}, {1.06688,
3.83127*10^-6}, {1.13356, 0.0000101002}, {1.20024,
3.26005*10^-8}, {1.26692, 0.0000229263}, {1.3336,
0.0000144712}, {1.40028, 0.000020778}, {1.46696,
0.000134013}, {1.53364, 4.94753*10^-6}, {1.60032,
0.00250851}, {1.667, 0.00326501}, {1.73368, 0.0000968109}, {1.80036,
0.000207831}, {1.86704, 7.79724*10^-6}, {1.93372,
0.0000459028}, {2.0004, 0.0000321442}, {2.06708,
2.43685*10^-6}, {2.13376, 0.0000276559}, {2.20044,
3.87948*10^-6}, {2.26712, 9.62673*10^-6}, {2.3338,
0.0000130072}, {2.40048, 1.53889*10^-7}, {2.46716,
0.0000116171}, {2.53384, 3.36691*10^-6}, {2.60052,
3.53838*10^-6}, {2.6672, 8.3132*10^-6}, {2.73388,
2.36251*10^-8}, {2.80056, 6.58432*10^-6}, {2.86724,
3.33096*10^-6}, {2.93392, 1.45936*10^-6}, {3.0006,
6.35157*10^-6}, {3.06728, 2.69642*10^-7}, {3.13396,
4.25243*10^-6}, {3.20064, 3.49319*10^-6}, {3.26732,
5.50908*10^-7}, {3.334, 5.33684*10^-6}, {3.40068,
6.86369*10^-7}, {3.46736, 2.92315*10^-6}, {3.53404,
3.88476*10^-6}, {3.60072, 1.32685*10^-7}, {3.6674,
4.88858*10^-6}, {3.73408, 1.2985*10^-6}, {3.80076,
2.10915*10^-6}, {3.86744, 4.63201*10^-6}, {3.93412,
9.45702*10^-10}, {4.0008, 4.94888*10^-6}, {4.06748,
2.37468*10^-6}, {4.13416, 1.60386*10^-6}, {4.20084,
6.40728*10^-6}, {4.26752, 1.82055*10^-7}, {4.3342,
6.14228*10^-6}, {4.40088, 5.175*10^-6}, {4.46756,
1.4092*10^-6}, {4.53424, 0.000013092}};
</code></pre>
<p>The second:</p>
<pre><code>{{0., 5.02512*10^-10}, {0.06668, 6.99284*10^-8}, {0.13336,
9.22116*10^-8}, {0.20004, 9.58191*10^-8}, {0.26672,
6.99125*10^-7}, {0.3334, 7.21646*10^-8}, {0.40008,
1.35916*10^-7}, {0.46676, 8.79768*10^-7}, {0.53344,
9.02102*10^-7}, {0.60012, 5.17535*10^-6}, {0.6668,
9.72507*10^-7}, {0.73348, 0.23789*10^-6}, {0.80016,
5.9808*10^-6}, {0.86684, 9.39616*10^-7}, {0.93352,
1.60649*10^-6}, {1.0002, 5.39262*10^-6}, {1.06688,
7.83127*10^-6}, {1.13356, 0.0000101002}, {1.20024,
5.26005*10^-8}, {1.26692, 0.0000229263}, {1.3336,
0.0000144712}, {1.40028, 0.000020778}, {1.46696,
0.000134013}, {1.53364, 4.94753*10^-6}, {1.60032,
0.00250851}, {1.667, 0.00326501}, {1.73368, 0.0000968109}, {1.80036,
0.000207831}, {1.86704, 7.79724*10^-6}, {1.93372,
0.0000459028}, {2.0004, 0.0000321442}, {2.06708,
7.43685*10^-6}, {2.13376, 0.0000276559}, {2.20044,
9.87948*10^-6}, {2.26712, 9.62673*10^-6}, {2.3338,
0.0000130072}, {2.40048, 1.53889*10^-7}, {2.46716,
0.0000116171}, {2.53384, 3.36691*10^-6}, {2.60052,
3.53838*10^-6}, {2.6672, 8.3132*10^-6}, {2.73388,
2.36251*10^-8}, {2.80056, 6.58432*10^-6}, {2.86724,
3.33096*10^-6}, {2.93392, 1.45936*10^-6}, {3.0006,
6.35157*10^-6}, {3.06728, 2.69642*10^-7}, {3.13396,
4.25243*10^-6}, {3.20064, 3.49319*10^-6}, {3.26732,
5.50908*10^-7}, {3.334, 5.33684*10^-6}, {3.40068,
6.86369*10^-7}, {3.46736, 2.92315*10^-6}, {3.53404,
3.88476*10^-6}, {3.60072, 1.32685*10^-7}, {3.6674,
4.88858*10^-6}, {3.73408, 1.2985*10^-6}, {3.80076,
2.10915*10^-6}, {3.86744, 4.63201*10^-6}, {3.93412,
9.45702*10^-10}, {4.0008, 4.94888*10^-6}, {4.06748,
2.37468*10^-6}, {4.13416, 1.60386*10^-6}, {4.20084,
6.40728*10^-6}, {4.26752, 1.82055*10^-7}, {4.3342,
6.14228*10^-6}, {4.40088, 5.175*10^-6}, {4.46756,
1.4092*10^-6}, {4.53424, 0.000013092}};
</code></pre>
<p>The goal is to plot both sets in one <code>ListLogPlot</code>. But, before plotting, the second set must be multiplied by <code>100</code> to prevent the two curves from overlapping each other; the factor <code>100</code> must be applied to the second column of the second data set. I mean: </p>
<pre><code>{0., 100*5.02512*10^-10}, {0.06668, 100*6.99284*10^-8}, {0.13336,
100*9.22116*10^-8}, {0.20004, 100*9.58191*10^-8}, {0.26672,
100*6.99125*10^-7}, {0.3334, 100*7.21646*10^-8}, ...
</code></pre>
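<p>One way to do the scaling and the combined plot (a sketch; <code>data2</code> is my name for the second list, which is left unnamed in the question):</p>
<pre><code>(* multiply the second column of the second data set by 100 *)
data2scaled = {#[[1]], 100 #[[2]]} & /@ data2;

(* show both data sets in one log-scale plot *)
ListLogPlot[{data1, data2scaled}, Joined -> True,
 PlotLegends -> {"data1", "100 data2"}]
</code></pre>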
| B flat | 33,996 | <p>This works. Thank you for the help!</p>
<pre><code>SetOptions[InputNotebook[],
CounterAssignments -> {{"ItemNumbered", -1}}]
SetOptions[InputNotebook[],
StyleDefinitions ->
Notebook[{Cell[StyleData[StyleDefinitions -> "Default.nb"]],
Cell[StyleData["ItemNumbered"],
CellDingbat ->
Cell[TextData[{CounterBox["ItemNumbered"], "."}],
FontWeight -> "Bold"], CellMargins -> {{81, 10}, {4, 8}},
StyleKeyMapping -> {"Tab" -> "SubitemNumbered"},
CellFrameLabelMargins -> 4,
CellChangeTimes -> {3.657516045744032`*^9},
CounterIncrements -> {"ItemNumbered", "ItemNumbered"},
CounterAssignments -> {{"SubitemNumbered",
0}, {"SubsubitemNumbered", 0}}, MenuSortingValue -> 1630,
FontFamily -> "Arial", FontSize -> 15,
Global`ReturnCreatesNewCell -> True]}, WindowSize -> {808, 751},
WindowMargins -> {{Automatic, 20}, {16, Automatic}},
FrontEndVersion ->
"10.3 for Mac OS X x86 (32-bit, 64-bit Kernel) (October 9, 2015)",
StyleDefinitions -> "PrivateStylesheetFormatting.nb"]]
</code></pre>
<p><a href="https://i.stack.imgur.com/rmolA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rmolA.png" alt="enter image description here"></a></p>
|
207,865 | <p>It is known that all $B$, $C$ and $D$ are $3 \times 3$ matrices. And the eigenvalues of $B$ are $1, 2, 3$; $C$ are $4, 5, 6$; and $D$ are $7, 8, 9$. What are the eigenvalues of the $6 \times 6$ matrix
$$\begin{pmatrix}
B & C\\0 & D
\end{pmatrix}$$
where $0$ is the $3 \times 3$ matrix whose entries are all $0$.
From my intuition, I think the eigenvalues of the new $6 \times 6$ matrix are the eigenvalues of $B$ and $D$. But how can I show that? </p>
| Jacob | 825 | <p>By definition, an eigenvalue $\lambda$ of the block matrix $A$ satisfies</p>
<p>$$\det \begin{pmatrix} B-\lambda I & C \\ 0 & D-\lambda I \end{pmatrix} = 0.$$</p>
<p>Using a <a href="http://en.wikipedia.org/wiki/Determinant#Block_matrices">property of block matrix determinants</a>, we have</p>
<p>$$\det \begin{pmatrix} B-\lambda I & C \\ 0 & D-\lambda I \end{pmatrix} = \det(B-\lambda I)\det(D-\lambda I) = 0$$</p>
<p>Thus the eigenvalues of $B,D$ are also the eigenvalues of $A$.</p>
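<p>As a quick sanity check (my own illustrative special case, not taken from the question): let $B=\operatorname{diag}(1,2,3)$, $D=\operatorname{diag}(7,8,9)$ and let $C$ be arbitrary. Then</p>

<p>$$A = \begin{pmatrix} B & C \\ 0 & D \end{pmatrix}$$</p>

<p>is upper triangular, so its eigenvalues are its diagonal entries $1,2,3,7,8,9$, which are exactly the eigenvalues of $B$ and $D$, in agreement with the determinant argument.</p>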
|
2,802,156 | <p>I have a function:</p>
<p>$${{\mathop{\rm F}\nolimits} _i}\left( {\bf{\xi }} \right) = \sum\limits_k^N {{\mathop{\rm D}\nolimits} \left( {\frac{1}{N}\sum\limits_j^N {{\mathop{\rm G}\nolimits} \left( {j,{\mathop{\rm I}\nolimits} \left( {j,{\bf{\xi }}} \right)} \right)} - {\mathop{\rm G}\nolimits} \left( {k,{\mathop{\rm I}\nolimits} \left( {k,{\bf{\xi }}} \right)} \right)} \right)}$$</p>
<p>$${\rm F}_i(\xi)=\sum_k^N {\rm D}_k\left(\frac1N\sum_j^N{\rm G}_j({\rm I}_j(\xi))-{\rm G}_k({\rm I}_k(\xi))\right).$$</p>
<p>$\xi$ is a vector.</p>
<p>How do I calculate the partial derivative using the chain rule?</p>
<p>$$\frac{\partial{\rm F}_i}{\partial\xi}=? $$</p>
<p>I guess...</p>
<p>$$\frac{\partial{\rm F}_i}{\partial\xi}=\sum_k^N\frac{\partial{\rm F}_i}{\partial{\rm D}_k}\left(\frac1N\sum_j^N\frac{\partial {\rm D}_k}{\partial {\rm G}_j}\frac{\partial {\rm G}_j}{\partial {\rm I}_j}\frac{\partial {\rm I}_j}{\partial \xi}-\frac{\partial {\rm D}_k}{\partial {\rm G}_k}\frac{\partial {\rm G}_k}{\partial {\rm I}_k}\frac{\partial {\rm I}_k}{\partial \xi}\right). $$</p>
<ul>
<li>full version</li>
</ul>
<p><a href="https://i.stack.imgur.com/GBOTf.jpg" rel="nofollow noreferrer">enter image description here</a></p>
| Christian Sykes | 322,386 | <p>$${\rm F}^\prime_i(\xi)=\sum_k^N {\rm D}^\prime_k\left(\frac1N\sum_j^N\left({\rm G}_j({\rm I}_j(\xi))-{\rm G}_k({\rm I}_k(\xi))\right)\right)\cdot \frac1N\sum_j^N\left({\rm G}^\prime_j({\rm I}_j(\xi)){\rm I}^\prime_j(\xi)-{\rm G}^\prime_k({\rm I}_k(\xi)){\rm I}^\prime_k(\xi)\right)$$</p>
<p>Edit: This is the derivative of what was originally asked about but is certainly not the same as the derivative of the function contained in the image of the updated question.</p>
|
2,860,156 | <p>Let $A\in \mathbb{M}_3(\mathbb{R})$ be a symmetric matrix whose eigen-values are $1,1$ and $3$. Express $A^{-1}$ in the form $\alpha I +\beta A$, where $\alpha, \beta \in \mathbb{R}$.</p>
| Fred | 380,717 | <p>The minimal polynomial of $A$ is $p(x)=(x-1)(x-3)=x^2-4x+3$. By Cayley-Hamilton:</p>
<p>$$A^2-4A+3I=0.$$</p>
<p>This gives</p>
<p>$$A-4I+3A^{-1}=0,$$</p>

<p>so $A^{-1}=\frac{1}{3}(4I-A)$, i.e. $\alpha=\frac{4}{3}$ and $\beta=-\frac{1}{3}$.</p>
|
3,964,910 | <p>Let <span class="math-container">$E$</span> be a metric space, <span class="math-container">$(\mu_n)_{n\in\mathbb N}$</span> be a sequence of finite nonnegative measures on <span class="math-container">$\mathcal B(E)$</span> and <span class="math-container">$\mu$</span> be a probability measure on <span class="math-container">$\mathcal B(E)$</span> s.t. <span class="math-container">$$\frac{\mu_n}{\mu_n(E)}\xrightarrow{n\to\infty}\mu\tag1$$</span> with respect to the to the topology of <a href="https://en.wikipedia.org/wiki/Convergence_of_measures#Weak_convergence_of_measures" rel="nofollow noreferrer">weak convergence of measures</a>.</p>
<p>Assuming that <span class="math-container">$c:=\sup_{n\in\mathbb N}\mu_n(E)<\infty$</span>, are we able to deduce that the nonnormalized sequence <span class="math-container">$(\mu_n)_{n\in\mathbb N}$</span> is convergent as well?</p>
<p>If not, does the other direction of this assertion hold, i.e. can we infer that the normalized sequence (i.e. the left-hand side of <span class="math-container">$(1)$</span>) is convergent from knowing that the nonnormalized sequence is convergent? It's easy to see that <span class="math-container">$(\mu_n(E))_{n\in\mathbb N}$</span> is convergent in that case (since the function constantly equal to <span class="math-container">$1$</span> is bounded and continuous). So, this should be possible to obtain.</p>
<p><em>Remark</em>: Without assuming <span class="math-container">$c<\infty$</span>, the first implication is clearly wrong. We simply can consider any finite nonnegative measure <span class="math-container">$\nu$</span> on <span class="math-container">$\mathcal B(E)$</span> and set <span class="math-container">$\nu_n:=n\nu$</span> for <span class="math-container">$n\in\mathbb N$</span>.</p>
| 0xbadf00d | 47,771 | <p>By the same argument as <a href="https://math.stackexchange.com/a/3964926/47771">presented by Botnakov N.</a> we can even show more:</p>
<p>Let <span class="math-container">$\mathcal M(E)$</span> denote the space of finite signed measures on <span class="math-container">$\mathcal B(E)$</span> equipped with the total variation norm <span class="math-container">$\left\|\;\cdot\;\right\|$</span>, <span class="math-container">$(\mu_t)_{t\in I}$</span> be a net in <span class="math-container">$\mathcal M(E)$</span> with <span class="math-container">$$c:=\lim_{t\in I}\left\|\mu_t\right\|\ne0\tag2$$</span> and <span class="math-container">$\mu\in\mathcal M(E)$</span> with <span class="math-container">$$\left(\frac{\mu_t}{\left\|\mu_t\right\|}\right)_{t\in I}\to\mu\tag3$$</span> with respect to the topology of <a href="https://en.wikipedia.org/wiki/Convergence_of_measures#Weak_convergence_of_measures" rel="nofollow noreferrer">weak convergence of measures</a>. Then, <span class="math-container">$$\mu_tf=\left\|\mu_t\right\|\frac{\mu_t}{\left\|\mu_t\right\|}f\xrightarrow{t\in I}c\mu f\;\;\;\text{for all }f\in C_b(E)\tag4;$$</span> i.e. <span class="math-container">$$(\mu_t)_{t\in I}\to c\mu\tag5.$$</span></p>
|
354,124 | <p>A friend stumped me with a basic calculus question.</p>
<p>The question first asks to find unit vectors $v,w$ s.t $|u+v|$ is
maximal and $|u-w|$ is minimal where $u=(-2,5,3)$.</p>
<p>Then the question asks to find unit vectors $v,w$ s.t $u\cdot v$
is maximal and $|u\cdot w|$ is minimal.</p>
<p>It's easy to write out the equations in all parts, for example for
the first part: denote $v=(x,y,z)$ then we wish to find the maximum
$$(z+3)^{2}+(y+5)^{2}+(x-2)^{2}$$ under $$x^{2}+y^{2}+z^{2}=1$$</p>
<p>and similarly for the second part with the minimum. But this doesn't
seem like the right way to go about it, since I only know how to solve
such a question with Lagrange multipliers, and they haven't studied those
(yet). </p>
<p>Can anyone please help point me out in the right direction ?</p>
| Heberto del Rio | 71,372 | <p>Given $u=(-2,5,3)$, consider the function $f_\pm(v)=<u\pm v,u\pm v>$, which is the inner product of $u\pm v$ with itself. The critical points of $f_\pm$ coincide with the critical points of $|u\pm v|$ (why?).</p>
<p>Now $f_\pm(v)=<u\pm v,u\pm v>=<u,u>\pm 2<u,v>+<v,v>=|u|^2+|v|^2\pm 2<u,v>$; since $u=(-2,5,3)$ and $|v|=1$ we have that $f_\pm(v)=39\pm 2<u,v>$. Since $u$ is a fixed vector and $v$ is allowed to move on the unit sphere, $f_\pm (v)$ will be biggest when $u$ and $v$ are multiples of each other (observe that $f_\pm(v)=39\pm 2\sqrt{38}\cos\theta$ where $\theta$ is the angle between $u$ and $v$, so in order for $f_\pm(v)$ to be biggest, $\cos\theta$ has to be biggest, that is $\theta=0$ or $\theta=\pi$), that is when $v=\dfrac{u}{|u|}$ or $v=-\dfrac{u}{|u|}$. </p>
<p>This is the reasoning behind the suggestion made above.</p>

<p>P.S.: for each function you will have to check which of the two possible answers is the solution. </p>
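<p>Making this explicit (my own computation from the argument above): with $u=(-2,5,3)$ and $|u|=\sqrt{38}$, the maximizer is $$v=\frac{u}{|u|}=\frac{1}{\sqrt{38}}(-2,5,3),\qquad |u+v|=\sqrt{38}+1,$$ and the minimizer is $w=\dfrac{u}{|u|}$ as well, giving $$|u-w|=\sqrt{38}-1.$$</p>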
|
1,114,258 | <p>I am new to differential geometry and Riemannian geometry. </p>
<p>I have on two separate occasions (separated by 6 months) encountered exercises where I feel like I am not giving a complete answer. </p>
<p>Problem 1: </p>
<p><em>Show that the Gaussian curvature of the surface of a cylinder is zero.</em></p>
<p>Problem 2: </p>
<p><em>Use Cartesian coordinates to write out and solve the geodesic equations for a two-dimensional flat plane and show the solutions are straight lines.</em> </p>
<p>In both cases, my argument went something like </p>
<ol>
<li>Define the metric.</li>
<li>Show the Christoffel symbols are zero.</li>
<li>Use this to show there is no curvature. </li>
</ol>
<p>But I feel like this is just me avoiding the real stuff. In other words, I resort to calculus because I don't actually know what I'm doing. So I make a roundabout argument rather than diving right into the mathematics itself. </p>
<p>Is my argument sensible? Is there a more formal way to approach this type of problem? If you could do an example that would be fantastic. </p>
| Olórin | 187,521 | <p>These two groups are vector spaces over $\mathbf{Q}$, and as they have the same cardinality, any basis (over $\mathbf{Q}$) of one of them has the same cardinality as any basis (over $\mathbf{Q}$) of the other one. This allows you to show that these two $\mathbf{Q}$-vector spaces are isomorphic as $\mathbf{Q}$-vector spaces. As any such isomorphism is also a group isomorphism, you're done.</p>
<p>Take for instance a Hamel basis $(e_i)_{i\in I}$, and a bijection $f : I \to I\coprod I$; then the $\mathbf{Q}$-linear map $F$ defined on basis vectors by $F(e_i)=(e_{f(i)},0)$ or $(0,e_{f(i)})$, according as $f(i)$ lies in the first or the second copy of $I$, is an isomorphism from $\mathbf{R}$ to $\mathbf{R}^2$, and there is an obvious group isomorphism from $\mathbf{R}^2$ to $\mathbf{C}$, the one sending $(x,y)$ to $x+iy$...</p>
|
1,557,165 | <p>Prove that
$$\int_1^\infty\frac{e^x}{x (e^x+1)}dx$$
does not converge.</p>
<p>How can I do that? I thought about turning it into the form of $\int_b^\infty\frac{dx}{x^a}$, but I find no easy way to get rid of the $e^x$.</p>
| mathochist | 215,292 | <p>If we divide the top and bottom by $e^x$, we have</p>
<p>$$\int_{1}^{\infty}\frac{dx}{x+x/e^x}$$</p>
<p>For large values of $x$, $x/e^x < x$, so $x+x/e^x < 2x$ and therefore $1/(x+x/e^x) > 1/2x$. Then the tail of $1/2x$ lies under the curve of $1/(x+x/e^x)$. Then since
$$\int_{1}^{\infty}\frac{dx}{2x}$$
diverges, we know that the first integral diverges as well.</p>
|
1,029,485 | <p>I wish to show the following statement:</p>
<p>$
\forall x,y \in \mathbb{R}
$</p>
<p>$$
(x+y)^4 \leq 8(x^4 + y^4)
$$</p>
<p>What is the scope for generalisation?</p>
<p><strong>Edit:</strong></p>
<p>Apparently the above inequality can be shown using the Cauchy-Schwarz inequality. Could someone please elaborate, stating the vectors you are using in the Cauchy-Schwarz inequality: </p>
<p>$\ \ \forall \ \ v,w \in V, $ an inner product space,</p>
<p>$$|\langle v,w\rangle|^2 \leq \langle v,v \rangle \cdot \langle w,w \rangle$$</p>
<p>where $\langle v,w\rangle$ is an inner product.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>We have $8(x^4+y^4)-(x+y)^4=7x^4-4x^3y-6x^2y^2-4xy^3+7y^4=(7x^2+10xy+7y^2)(x-y)^2\geq 0$, since $7x^2+10xy+7y^2\geq 0$ (its discriminant $10^2-4\cdot 7\cdot 7=-96$ is negative), so the inequality is true.</p>
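<p>For the Cauchy-Schwarz route asked about in the question, here is one possible choice of vectors (a sketch; this particular choice is my own suggestion): taking $v=(1,1)$ and $w=(x,y)$ in $\mathbb{R}^{2}$ with the standard inner product gives $(x+y)^{2}=|\langle v,w\rangle|^{2}\leq 2(x^{2}+y^{2})$, and taking $w=(x^{2},y^{2})$ instead gives $(x^{2}+y^{2})^{2}\leq 2(x^{4}+y^{4})$. Combining,</p>

<p>$$(x+y)^{4}\leq 4(x^{2}+y^{2})^{2}\leq 8(x^{4}+y^{4}).$$</p>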
|
3,451,374 | <p>Given a random variable <span class="math-container">$\max\{K-X, 0\}$</span>, where <span class="math-container">$K>0$</span> is a constant and <span class="math-container">$X$</span> is uniformly distributed on <span class="math-container">$[-K, K]$</span> (or, I guess, more generally has any distribution), how does one go about finding the expectation of such a random variable? Some ideas come to mind, like <span class="math-container">$P(K-X>0)\cdot(K-X)+P(K-X<0)\cdot 0$</span>, but that is itself a random variable. Maybe the expectation is obtained by taking the expectation of this one, but I can't justify that.</p>
| Ragib Zaman | 14,657 | <p>I believe the approach you are trying to use is the Law of Total Expectation:</p>
<p><span class="math-container">$$\mathbb{E}[g(X)] = \mathbb{E}[g(X) \ | \ A] \ \mathbb{P}(A) + \mathbb{E}[g(X) \ | \ A^c] \ \mathbb{P}(A^c).$$</span> In your case, taking <span class="math-container">$g(X) = \max(K-X,0)$</span> and <span class="math-container">$A$</span> being the event that <span class="math-container">$X \leq K,$</span> we get </p>
<p><span class="math-container">$$\mathbb{E}[\max(K-X,0)] = \mathbb{E}[\max(K-X,0) \ | \ X\leq K] \ \mathbb{P}(X\leq K) + \mathbb{E}[\max(K-X,0) \ | \ X>K] \ \mathbb{P}(X>K)$$</span></p>
<p><span class="math-container">$$ = \mathbb{E}[K-X \ | \ X\leq K] \ \mathbb{P}(X\leq K) + \mathbb{E}[0 \ | \ X>K] \ \mathbb{P}(X>K) $$</span> <span class="math-container">$$= \mathbb{E}[K-X \ | \ X\leq K] \ \mathbb{P}(X\leq K)$$</span></p>
<p>If <span class="math-container">$X$</span> is uniform over <span class="math-container">$[-K, K]$</span> then by linearity of expectation this evaluates to <span class="math-container">$K.$</span></p>
<p>Another approach is to use </p>
<p><span class="math-container">$$ \mathbb{E}[g(X)] = \int_{-\infty}^{\infty} g(x) p(x) dx$$</span></p>
<p>where <span class="math-container">$p(x)$</span> is the density of the random variable <span class="math-container">$X.$</span> So in your example with <span class="math-container">$X$</span> uniform over <span class="math-container">$[-K, K]$</span> we have <span class="math-container">$$ \mathbb{E}[\max(K-X,0)] = \int^K_{-K} \max(K-x, 0) \cdot \frac{1}{2K} \, dx = K$$</span></p>
|
3,275,423 | <p>How do I see that for a <span class="math-container">$K$</span>-vector space <span class="math-container">$V$</span> the map</p>
<blockquote>
<p><span class="math-container">$\bigwedge^d(V^*) \times \bigwedge^d(V) \rightarrow K, (f_1 \wedge ... \wedge f_d, x_1 \wedge ... \wedge x_d) \mapsto \det\big(f_i(x_j)\big)_{i,j}$</span></p>
</blockquote>
<p>is bilinear?</p>
| dan_fulea | 550,003 | <p>I try to write an answer that should clear the definition of the map in the OP. By definition, it is a (multi)linear map. The main instrument is using "universality" when working with elements in the category of vector spaces and (multi)linear applications. (This would not fit as a comment, and it would be hard to type without markup control.)</p>
<p>(1)
Let us fix some field <span class="math-container">$K$</span> (of characteristic <span class="math-container">$\ne 2$</span>, or maybe even <span class="math-container">$=0$</span> to exclude any problems with the definition of the wedge space).</p>
<p>We work in the category of vector spaces over <span class="math-container">$K$</span>. </p>
<p>Functorially, if <span class="math-container">$f,g$</span> are (linear) maps <span class="math-container">$V\to V'$</span> and <span class="math-container">$W\to W'$</span>, then <span class="math-container">$f\otimes g$</span> is a linear map <span class="math-container">$V\otimes W\to V'\otimes W'$</span>. The same holds for more tensor factors.</p>
<p>(2) Consider now "the <span class="math-container">$V$</span>" from the OP.
We have an evaluation map <span class="math-container">$V^*\otimes V\to K$</span>.</p>
<p>Using it we can define for a fixed pair <span class="math-container">$(i_0,j_0)$</span> the map
<span class="math-container">$$
\left(\times_{i=1}^dV^*\right)\ \otimes\
\left(\times_{j=1}^dV\right)
\to K\ ,
$$</span>
<span class="math-container">$$
(f_1,f_2,\dots,f_d)\otimes(x_1,x_2,\dots,x_d)
\to f_{i_0}(x_{j_0})\ .
$$</span></p>
<p>(3)
Putting together all the above maps for all possible values of
<span class="math-container">$(i_0, j_0)$</span>, so that the image space is the space of <span class="math-container">$d\times d$</span> matrices,
we obtain a map:
<span class="math-container">$$
\left(\times_{i=1}^dV^*\right)\ \otimes\
\left(\times_{j=1}^dV\right)
\to M_{d\times d}(K)\ ,
$$</span>
<span class="math-container">$$
(f_1,f_2,\dots,f_d)\otimes(x_1,x_2,\dots,x_d)
\to \Big[\ f_{i_0}(x_{j_0})\ \Big]_{1\le i_0,j_0\le d}\ .
$$</span>
As it is so far, this map is linear (in each component), but it is not "balanced", i.e. we cannot move a scalar from one <span class="math-container">$f$</span>-component to another, or from one <span class="math-container">$x$</span>-component to another.</p>
<p>(4)
Consider now the composition
<span class="math-container">$$
\left(\times_{i=1}^dV^*\right)\ \otimes\
\left(\times_{j=1}^dV\right)
\to M_{d\times d}(K)
\overset\det\longrightarrow
K\ .
$$</span>
This composition is now balanced by the properties of the determinant.
For instance,
<span class="math-container">$(af_1,f_2,\dots,f_d)\otimes x$</span>
and/or
<span class="math-container">$f\otimes(ax_1,x_2,\dots,x_d)$</span>
is mapped via (3) to the matrix obtained from the one for <span class="math-container">$f\otimes x=(f_1,f_2,\dots,f_d)\otimes (x_1,x_2,\dots,x_d)$</span> by multiplying the first matrix row/column with the scalar <span class="math-container">$a\in K$</span>.</p>
<p>Applying the <span class="math-container">$\det$</span>, <span class="math-container">$a$</span> becomes now a factor of the result.</p>
<p>The same computation can be done when <span class="math-container">$a$</span> is on another component position of <span class="math-container">$f$</span> and/or of <span class="math-container">$x$</span>.</p>
<p>So the balancing property is valid after applying <span class="math-container">$\det$</span>.</p>
<p>(5)
The balancing implies we have an induced map <span class="math-container">$\bar\Phi$</span>:
<span class="math-container">$$
\bar\Phi\ :\
\left(\bigotimes_{i=1}^dV^*\right)\ \otimes\
\left(\bigotimes_{j=1}^dV\right)
\to
K\ .
$$</span>
On elements <span class="math-container">$f_1\otimes f_2\otimes\dots\otimes f_d$</span> algebraically tensored with <span class="math-container">$x_1\otimes x_2\otimes\dots\otimes x_d$</span> it is defined by lifting this to (linear combinations of) <span class="math-container">$(f_1,f_2,\dots,f_3)\otimes(x_1,x_2,\dots,x_d)$</span> and applying <span class="math-container">$\Phi$</span> (followed by linear assembly).</p>
<p>The result does not depend on the lifts.
Every relation that has to be tested for <span class="math-container">$\bar\Phi$</span> has an equivalent counterpart at the level of <span class="math-container">$\Phi$</span>. </p>
<p>(6)
It remains to observe that <span class="math-container">$\bar\Phi$</span> is alternating in its <span class="math-container">$f$</span>-components, and also in its <span class="math-container">$x$</span>-components.
We only need to show this at the level of <span class="math-container">$\Phi$</span>.</p>
<p>So we have to compare first the result of applying <span class="math-container">$\Phi$</span> on the elements</p>
<p><span class="math-container">$(\color{blue}{f_1,f_2},\dots,f_d)\otimes(x_1,x_2,\dots,x_d)$</span> and respectively</p>
<p><span class="math-container">$(\color{blue}{f_2,f_1},\dots,f_d)\otimes(x_1,x_2,\dots,x_d)$</span></p>
<p>(and on all other cases of a change implemented by a transposition of two indices).</p>
<p>We apply <span class="math-container">$\Phi$</span> to the above two elements; the intermediate matrix stage delivers two matrices with interchanged first and second rows, and applying <span class="math-container">$\det$</span> then leads to a sign difference. The same happens in the other cases, where a transposition of two indices of the <span class="math-container">$f$</span>-component is used.</p>
<p>This shows the alternating relation for the
<span class="math-container">$f$</span>-components.</p>
<p>The similar argument applied for the comparison of <span class="math-container">$\Phi$</span>-values on</p>
<p><span class="math-container">$(f_1,f_2,\dots,f_d)\otimes(\color{blue}{x_1,x_2},\dots,x_d)$</span> and respectively</p>
<p><span class="math-container">$(f_1,f_2,\dots,f_d)\otimes(\color{blue}{x_2,x_1},\dots,x_d)$</span></p>
<p>and on the values in the more general case, when we are using a transposition <span class="math-container">$(j_1,j_2)$</span> instead of <span class="math-container">$(1,2)$</span> as above,</p>
<p>leads to the comparison of the determinants of two matrices with two exchanged columns, and again we deduce the alternating relation,
this time on the <span class="math-container">$x$</span>-components.</p>
<p>(7)
We can thus factorize through the wedge-product, getting
a final map <span class="math-container">$\hat\Phi$</span>:
<span class="math-container">$$
\hat\Phi\ :\
\left(\wedge_{i=1}^dV^*\right)\ \otimes\
\left(\wedge_{j=1}^dV\right)
\to
K\ .
$$</span>
(The wedge product can be realized in characteristic zero either as a subobject, or as a quotient of the tensor product. The factorization makes sense when the quotient is taken, and the map is already alternating.)</p>
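<p>A concrete low-dimensional check (my own illustration, for <span class="math-container">$d=2$</span>): the map is</p>

<p><span class="math-container">$$\hat\Phi(f_1\wedge f_2,\ x_1\wedge x_2)=\det\begin{pmatrix}f_1(x_1)&f_1(x_2)\\f_2(x_1)&f_2(x_2)\end{pmatrix}=f_1(x_1)f_2(x_2)-f_1(x_2)f_2(x_1),$$</span></p>

<p>where linearity in each slot (say in <span class="math-container">$x_1$</span>) is visible term by term, and swapping <span class="math-container">$x_1\leftrightarrow x_2$</span> or <span class="math-container">$f_1\leftrightarrow f_2$</span> flips the sign, as required for factoring through the wedge products.</p>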
|
3,371,104 | <p>How could I show <span class="math-container">$$\int_{\mathbb{R}}\dfrac{1}{\sqrt{1+t^{2}}}dt=\infty?$$</span> </p>
<p>I tried to use the comparison test, so that <span class="math-container">$$\dfrac{1}{\sqrt{1+t^{2}}}\geq \dfrac{C}{t},$$</span> for some <span class="math-container">$C$</span>, and we can use the fact that <span class="math-container">$$\int_{\mathbb{R}}\dfrac{1}{t}dt=\infty$$</span> to conclude, but this requires me to find a <span class="math-container">$C$</span> such that <span class="math-container">$$1+t^{2}\leq\dfrac{1}{C^{2}}t^{2},$$</span> which is <span class="math-container">$$\dfrac{1-C^{2}}{C^{2}}t^{2}\geq 1$$</span> for all <span class="math-container">$t$</span>.</p>
<p>How could I find such a value?</p>
<p>Thank you!</p>
| Bernard | 202,857 | <p><strong>Hint</strong>:</p>
<p>Its antiderivative is known:
<span class="math-container">$$\int\!\frac{\mathrm dt}{\sqrt{1+t^2}}=\operatorname{argsinh}t=\ln(t+\sqrt{t^2+1}).$$</span></p>
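<p>One way to finish from the hint (my own completion, not part of the original answer): on <span class="math-container">$[0,R]$</span>,</p>

<p><span class="math-container">$$\int_0^R\frac{\mathrm dt}{\sqrt{1+t^2}}=\ln\left(R+\sqrt{R^2+1}\right)\to\infty\quad\text{as }R\to\infty,$$</span></p>

<p>so the integral over all of <span class="math-container">$\mathbb{R}$</span> diverges as well.</p>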
|
232,540 | <p>I'm trying to prove this conclusion but have some problems with one of the steps.</p>
<p>Assume $X_1,\ldots,X_n,\ldots$ is a sequence of Gaussian random variables, converging almost surely to $X$, prove that $X$ is Gaussian.</p>
<p>We use characteristic functions here. Since $|\phi_{X_n}(t)|\leq 1$, by the dominated convergence theorem, we have for any $t$</p>
<p>$$
\lim_{n\rightarrow\infty}e^{it\mu_n-t^2\sigma_n^2/2}=\lim_{n\rightarrow \infty}\phi_{X_n}(t) = \lim_{n\rightarrow \infty}\mathbb{E}\left[e^{itX_n}\right] = \mathbb{E}\left[e^{itX}\right] = \phi_X(t)
$$</p>
<p><strong>this is the step that I cannot figure out</strong>: $e^{it\mu_n-t^2\sigma_n^2/2}$ converges for any $t$ if and only if $\mu_n$ and $\sigma_n$ converges. </p>
<p>Let $\mu=\lim_n \mu_n$, and $\sigma=\lim_n\sigma_n$, then $\phi_X(t)=e^{it\mu-t^2\sigma^2/2}$, which proves that $X$ is a Gaussian random variable.</p>
<p>Why can we get that $\mu_n$ and $\sigma_n$ converge? This looks intuitive to me, but I cannot make it rigorous. </p>
| Shashi | 349,501 | <p>Although this question is old and it has a perfect answer already, I provide here a slightly different proof. A proof which mainly shows the convergence of <span class="math-container">$\mu_n$</span> in a funny way (which is the whole point of writing this). </p>
<p>Notice first that we have the existence and finiteness of the limit of <span class="math-container">$\phi_{X_n}$</span> and therefore using continuity of <span class="math-container">$\log|\cdot|$</span> we also find the existence and finiteness of the limit
<span class="math-container">\begin{align}
\lim_{n\to\infty}-2\log|\phi_{X_n}(1)|=\lim_{n\to\infty}\sigma_n^2
\end{align}</span>
which we will call <span class="math-container">$\sigma^2$</span> (note that it might be <span class="math-container">$0$</span>). </p>
<p>To show that <span class="math-container">$\mu_n$</span> is bounded, we assume that is unbounded so that there is unbounded subsequence <span class="math-container">$\mu_{n_k}$</span>
<span class="math-container">\begin{align*}
\lim_{k\to\infty}F_{X_{n_k}}(x)=\lim_{k\to\infty}\int^x_{-\infty}\frac{1}{\sqrt{2\pi\sigma_{n_k}^2}}e^{-\frac{(t-\mu_{n_k})^2}{2\sigma_{n_k}^2}}\,dt = \lim_{k\to\infty}\int^\infty_{-\infty}\frac{1}{\sqrt{\pi}}\,\mathbf{1}\left\{r\leq \frac{x-\mu_{n_k}}{\sqrt{2\sigma_{n_k}^2}}\right\}e^{-r^2}\,dr
\end{align*}</span>
Using DCT we get either <span class="math-container">$0$</span> or <span class="math-container">$1$</span> which contradicts the fact that <span class="math-container">$F_{X_n}$</span> converges to a CDF. So <span class="math-container">$\mu_n$</span> is bounded.</p>
<p>By considering the convergence of the limit of <span class="math-container">$\phi_{X_n}(t)e^{\sigma_n^2t^2/2}$</span> we conclude the convergence of <span class="math-container">$$\lim_{n\to\infty}\cos(\mu_nt) \ \ \ \text{ and } \ \ \ \lim_{n\to\infty} \sin(\mu_nt) $$</span>
for all <span class="math-container">$t\in\mathbb R$</span>. </p>
<p>But now we consider the convergence of the following two integrals using DCT*
<span class="math-container">\begin{align}
\color{red}{\lim_{n\to\infty}\int_\mathbb R\frac{\cos(\mu_nt)}{t^2+1}\,dt=\lim_{n\to\infty}\pi e^{-|\mu_n|}} \ \ \ \text{ and } \ \ \ \color{blue}{\lim_{n\to\infty}\int_\mathbb R\frac{\sin(\mu_nt)}{t(t^2+1)}\,dt = \lim_{n\to\infty}\pi (1-e^{-|\mu_n|})\text{sgn}(\mu_n)}
\end{align}</span>
But now it is straightforward to see that the convergence of both can happen iff <span class="math-container">$\mu_n$</span> itself converges. First deduce the convergence of <span class="math-container">$|\mu_n|$</span> and then split cases where the limit is zero or nonzero. For the nonzero case we can deduce the convergence of <span class="math-container">$\text{sgn}(\mu_n)$</span> using the convergence of <span class="math-container">$(1-e^{-|\mu_n|})^{-1}$</span> and the blue limit. So <span class="math-container">$\mu_n=\text{sgn}(\mu_n)|\mu_n|\to\mu$</span> for some <span class="math-container">$\mu\in\mathbb R$</span>. This then implies that <span class="math-container">$X$</span> is normally distributed with parameters <span class="math-container">$\mu$</span> and <span class="math-container">$\sigma^2$</span>.</p>
<p><sup>*: The dominating function of the blue integral can be derived through <span class="math-container">$|\sin(\mu_nt)|\leq |\mu_nt|\leq \sup_n |\mu_n||t|$</span>.</sup></p>
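The red closed form can be spot-checked numerically; the following Python sketch (truncation length, step count, and tolerance are ad hoc editorial choices) compares a trapezoid approximation of the integral with $\pi e^{-|\mu|}$.

```python
import math

def red_integral(mu, T=500.0, n=250_000):
    # trapezoid rule for the truncated integral of cos(mu t)/(t^2 + 1)
    # over [-T, T]; the tail beyond T is bounded by 2/T in absolute value
    h = 2 * T / n
    total = 0.5 * (math.cos(-mu * T) + math.cos(mu * T)) / (T * T + 1)
    for k in range(1, n):
        t = -T + k * h
        total += math.cos(mu * t) / (t * t + 1)
    return total * h

for mu in (0.5, 1.0, 2.0):
    assert abs(red_integral(mu) - math.pi * math.exp(-abs(mu))) < 0.02
```

The agreement supports the stated evaluation used in the red limit.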
|
397,274 | <p>Suppose you have a group isomorphism given by the first isomorphism theorem:</p>
<p><span class="math-container">$$G/\ker(\phi) \simeq \operatorname{im}(\phi)$$</span></p>
<p>What can we say about the group <span class="math-container">$\ker(\phi)\times \operatorname{im}(\phi)$</span>? In particular, when does the following hold:</p>
<p><span class="math-container">$$G\simeq \ker(\phi)\times \operatorname{im}(\phi)?$$</span></p>
<p>I ask this question because I want to prove that <span class="math-container">$GL_n^+(\mathbb{R}) \simeq SL_n(\mathbb{R}) \times \mathbb{R}^*_{>0}$</span>, with <span class="math-container">$GL_n^+(\mathbb{R})$</span> the group of matrices with positive determinant. I proved that <span class="math-container">$SL_n(\mathbb{R})$</span> is a normal subgroup and that <span class="math-container">$GL_n^+(\mathbb{R})/ SL_n(\mathbb{R}) \simeq \mathbb{R}^*_{>0}$</span>, using the surjective homomorphism <span class="math-container">$\det(M)$</span>. I tried something with semidirect products but I got stuck.</p>
| Edoardo Lanari | 77,181 | <p>Even if it can't be applied to your example, I would like to point out that in the abelian case (more generally, in any abelian category) it's equivalent to having a split exact sequence: $0 \to \ker(\phi) \to G \to \operatorname{im}(\phi) \to 0$</p>
|
4,008,141 | <p>Given <span class="math-container">$G= \{ z \in \mathbb{C} \mid \exists \ n \in \mathbb{Z}^{+} \text{ such that } z^n=1\}$</span>.
Define a map <span class="math-container">$f : G \to G $</span> by <span class="math-container">$$f(z)=z^k$$</span></p>
<p>where <span class="math-container">$k>1$</span> is fixed and <span class="math-container">$k \in \mathbb{Z}^+$</span></p>
<p><strong>My question</strong>: Prove that <span class="math-container">$f(z)$</span> is an onto group homomorphism.</p>
<p><strong>My attempt</strong>: No. Take <span class="math-container">$z^{n/k} \in G$</span>; then <span class="math-container">$f(z^{n/k})=z^n=1$</span> for all <span class="math-container">$z^{n/k} \in G$</span>.</p>
<p><span class="math-container">$\implies$</span> <span class="math-container">$f$</span> becomes a constant, i.e. <span class="math-container">$f=1$</span>.</p>
<p>We know that a constant function is never onto;</p>
<p>therefore <span class="math-container">$f$</span> is not an onto homomorphism.</p>
| Numbra | 743,703 | <p>The problem comes from the fact that <span class="math-container">$n$</span> is not fixed ! For <span class="math-container">$z \in G$</span>, <em>a priori</em>, the "corresponding <span class="math-container">$n$</span>" could be anything.</p>
<p>A good idea to start would be to understand what the elements of <span class="math-container">$G$</span> look like. In fact, you can try to show (or may be you already know) that if <span class="math-container">$z^n = 1$</span>, then you can write <span class="math-container">$z = e^{\frac{2ir\pi}n}$</span> for some <span class="math-container">$0 \leq r < n$</span>.</p>
<p>This means that <span class="math-container">$G = \{e^{\frac{2ir\pi}n}\;|\; n \in \mathbb N^\ast , 0\leq r < n\}$</span>. Now you can work out more precisely how your function acts on this set !</p>
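The surjectivity this hint leads to can be illustrated numerically with cmath (an editorial sketch; the exponent $k=3$ and the sample values of $n$ are arbitrary choices): every target $e^{2ir\pi/n}$ has the preimage $e^{2ir\pi/(kn)}$, which again lies in $G$ because it is a $(kn)$-th root of unity.

```python
import cmath

def root(r, n):
    # e^{2 i r pi / n}, an n-th root of unity
    return cmath.exp(2j * cmath.pi * r / n)

k = 3  # the fixed exponent in f(z) = z^k
for n in (1, 2, 5, 7, 12):
    for r in range(n):
        target = root(r, n)
        preimage = root(r, k * n)
        assert abs(preimage ** k - target) < 1e-9       # f(preimage) = target
        assert abs(preimage ** (k * n) - 1) < 1e-9      # preimage lies in G
```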
|
4,008,141 | <p>Given <span class="math-container">$G= \{ z \in \mathbb{C} \mid \exists \ n \in \mathbb{Z}^{+} \text{ such that } z^n=1\}$</span>.
Define a map <span class="math-container">$f : G \to G $</span> by <span class="math-container">$$f(z)=z^k$$</span></p>
<p>where <span class="math-container">$k>1$</span> is fixed and <span class="math-container">$k \in \mathbb{Z}^+$</span></p>
<p><strong>My question</strong>: Prove that <span class="math-container">$f(z)$</span> is an onto group homomorphism.</p>
<p><strong>My attempt</strong>: No. Take <span class="math-container">$z^{n/k} \in G$</span>; then <span class="math-container">$f(z^{n/k})=z^n=1$</span> for all <span class="math-container">$z^{n/k} \in G$</span>.</p>
<p><span class="math-container">$\implies$</span> <span class="math-container">$f$</span> becomes a constant, i.e. <span class="math-container">$f=1$</span>.</p>
<p>We know that a constant function is never onto;</p>
<p>therefore <span class="math-container">$f$</span> is not an onto homomorphism.</p>
| jasmine | 557,708 | <p>From Numbra's answer.</p>
<p>Motive: to show <span class="math-container">$f$</span> is onto.</p>
<p><span class="math-container">$G = \{e^{\frac{2ir\pi}n}\;|\; n \in \mathbb N^\ast , 0\leq r < n\}$</span>.</p>
<p>Now take any <span class="math-container">$e^{\frac{2ir\pi}n} \in G$</span> and let <span class="math-container">$a=e^{\frac{2ir\pi}{kn}}\in G$</span>.</p>
<p>Then, by the definition of <span class="math-container">$f$</span>, <span class="math-container">$f(a)=a^k=e^{\frac{2ir\pi}n}$</span>,</p>
<p>so <span class="math-container">$f$</span> is onto.</p>
|
43,505 | <p>I am looking to make a physics-based Mathematica project. Ideally the project would take around 12 hours: gathering experimental data and analysing the findings.</p>
<p>I'd have full access to university physics labs. The project would be for 2nd year physics students in the end and would aim to introduce using Mathematica in their work.</p>
| Zviovich | 1,096 | <p>Please check BobtheChemist's site for some ideas.</p>
<p><a href="http://www.bobthechemist.com/">BobtheChemist's projects</a></p>
<p>Also, some other simple physics experiments done interfacing with Sensors and Arduino here.</p>
<p><a href="http://community.wolfram.com/groups/-/m/t/181641?p_p_auth=PThZ9lzq">An experiment in moment of inertia</a></p>
<p><a href="http://community.wolfram.com/groups/-/m/t/193779?p_p_auth=PThZ9lzq">Simple Pendulum Experiment</a></p>
|
1,894,199 | <p>Evaluate definite integral: $$\int_{-\pi/2}^{\pi/2} \cos \left[\frac{\pi n}{2} +\left(a \sin t+b \cos t \right) \right] dt$$</p>
<p>$n$ is an integer. $a,b$ real numbers.</p>
<p>The purpose of the integral is to compute matrix elements of an electron Hamiltonian in an elliptic ring in the quantum-box basis.</p>
<p>Before you ask me what I've done already, I've got this integral from the original, much more complicated one.</p>
<p>Originally I just gave up and computed it numerically.</p>
<blockquote>
<p>But I wonder - is it possible to express the integral in closed form with Bessel functions? </p>
</blockquote>
<p>Or maybe some series, still better than numerical integration.</p>
<p>I'm not asking for a full solution, some hint would be fine. Or even just a reassurance that a closed form exists.</p>
| Elias Costa | 19,266 | <p>This is not a complete answer, but I think it might be helpful toward the desired solution. Fix $x_n=\frac{\pi n}{2}$. Use the Taylor series $\cos (x_n+h)=\lim_{m\to \infty}\sum^{m}_{k=0} \frac{(-1)^k}{(2k)!}(x_n+h)^{2k} $ for $h=h(t)=a \sin t+b \cos t$ in the interval $ [-|a|-|b|,|a|+|b|]$.
Set
$
I_n=\int_{-\frac{\pi}{2}}^{+\frac{\pi}{2}}
\cos \left(
x_n+h(t)
\right)\;
\mathrm{d}t
$.
$$
I_n
=
\int_{-\frac{\pi}{2}}^{+\frac{\pi}{2}}
\left[
\lim_{m\to \infty}\sum^{m}_{k=0} \frac{(-1)^k}{(2k)!}
\left(
\frac{\pi n}{2} +a \sin t+b \cos t
\right)^{2k}
\right]
\mathrm{d}t
$$
By the <a href="https://en.wikipedia.org/wiki/Dominated_convergence_theorem" rel="nofollow">dominated convergence theorem</a>,
\begin{align}
I_n=&
\lim_{m\to \infty}\sum^{m}_{k=0}
\int_{-\frac{\pi}{2}}^{+\frac{\pi}{2}}
\frac{(-1)^k}{(2k)!}
\left(
\frac{\pi n}{2} +a \sin t+b \cos t
\right)^{2k}
\mathrm{d}t
\\
=&
\lim_{m\to \infty}\sum^{m}_{k=0}
\frac{(-1)^k}{(2k)!}
\int_{-\frac{\pi}{2}}^{+\frac{\pi}{2}}
\left(
\frac{\pi n}{2} +a \sin t+b \cos t
\right)^{2k}
\mathrm{d}t
\end{align}
By the <a href="https://en.wikipedia.org/wiki/Multinomial_theorem" rel="nofollow">multinomial theorem</a>,
\begin{align}
I_n=
&
\lim_{m\to \infty}\sum^{m}_{k=0}
\frac{(-1)^k}{(2k)!}
\sum_{i_1+i_2+i_3=2k}
\frac{(2k)!}{i_1!\cdot i_2!\cdot i_3! }
\left(
\frac{\pi n}{2}
\right)^{i_1}
\cdot
\left(
a
\right)^{i_2}
\cdot
\left(
b
\right)^{i_3}
\cdot
\int_{-\frac{\pi}{2}}^{+\frac{\pi}{2}}
\left(\sin t\right)^{i_2}
\cdot
\left(\cos t\right)^{i_3}
\mathrm{d}t
\end{align}
Now you can search a table of definite integrals (for example the <a href="http://rads.stackoverflow.com/amzn/click/0123822513" rel="nofollow">Handbook of Mathematical Formulas and Integrals</a>) for a recursive calculation of $\int_{-\frac{\pi}{2}}^{+\frac{\pi}{2}}
\left(\sin t\right)^{i_2}
\cdot
\left(\cos t\right)^{i_3}
\mathrm{d}t$. Let's say
$$
\int \sin^{u}(x)\cos^v(x) \mathrm{d} x
=
\begin{cases}
-\frac{\sin^{u-1}(x)\cos^{v+1}(x)}{u+v}+\frac{u-1}{u+v}\int \sin^{u-2}(x)\cos^v(x) \mathrm{d} x
\\
\frac{\sin^{u+1}(x)\cos^{v-1}(x)}{u+v}+\frac{v-1}{u+v}\int \sin^{u}(x)\cos^{v-2}(x) \mathrm{d} x
\end{cases}
$$
I strongly suspect that this integral can be expressed in terms of
<a href="https://en.wikipedia.org/wiki/Gamma_function" rel="nofollow">Gamma</a> and <a href="https://en.wikipedia.org/wiki/Beta_function" rel="nofollow">Beta</a> functions.</p>
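That suspicion can be supported numerically (an editorial Python sketch; the quadrature resolution, sample exponents, and tolerances are ad hoc choices): for even $i_2$ the integral $\int_{-\pi/2}^{\pi/2}\sin^{u}t\,\cos^{v}t\,dt$ appears to equal $B\left(\frac{u+1}{2},\frac{v+1}{2}\right)$, and it vanishes for odd $i_2$ by symmetry.

```python
import math

def trig_moment(u, v, n=50_000):
    # trapezoid rule for the integral of sin(t)^u cos(t)^v over [-pi/2, pi/2]
    a, b = -math.pi / 2, math.pi / 2
    h = (b - a) / n
    total = 0.5 * (math.sin(a) ** u * math.cos(a) ** v
                   + math.sin(b) ** u * math.cos(b) ** v)
    for k in range(1, n):
        t = a + k * h
        total += math.sin(t) ** u * math.cos(t) ** v
    return total * h

def beta(x, y):
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

# odd powers of sin integrate to zero by symmetry
assert abs(trig_moment(1, 2)) < 1e-9
# even powers match the Beta function, supporting the closing remark
for u, v in [(0, 0), (2, 0), (2, 3), (4, 2)]:
    assert abs(trig_moment(u, v) - beta((u + 1) / 2, (v + 1) / 2)) < 1e-6
```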
|
160,542 | <p>I suspect the following integration to be wrong. My answer is coming out to be $3/5$, but the solution says $1$.</p>
<p>$$\int_0^1\frac{2(x+2)}{5}\,dx=\left.\frac{(x+2)^2}{5}\;\right|_0^1=1.$$</p>
<p>Please help out. Thanks.</p>
| Pedro | 23,350 | <p>The integration is obtained as follows:</p>
<p>$$\int 2\frac{x+2}{5}dx=\frac{2}{5}\int (x+2)d(x+2)=\frac{2}{5}\int udu=\frac{2}{5}\frac{u^2}{2}=\frac 1 5 (x+2)^2$$</p>
<p>Since $\frac 1 5 (x+2)^2$ is a primitive of $2\frac{x+2}{5}$ we can use FTCII, and get</p>
<p>$$\int_0^1 2\frac{x+2}{5}\,dx=\frac{(\color{red}{1}+2)^2}{5}-\frac{(\color{red}{0}+2)^2}{5}= \frac 9 5- \frac 4 5 = 1$$
It seems what you did was this:</p>
<p>$$\int 2\frac{x+2}{5}dx=\frac{(0+2)^2}{5}-\frac{(0+1)^2}{5}= \frac 4 5- \frac 1 5 = \frac 3 5$$</p>
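Both evaluations can be checked mechanically (a small editorial Python sketch):

```python
def F(x):
    # the antiderivative (x + 2)^2 / 5
    return (x + 2) ** 2 / 5

# correct evaluation via the fundamental theorem: F(1) - F(0) = 9/5 - 4/5 = 1
assert abs((F(1) - F(0)) - 1.0) < 1e-12

# the mistaken evaluation reproduced above gives 3/5
wrong = (0 + 2) ** 2 / 5 - (0 + 1) ** 2 / 5
assert abs(wrong - 3 / 5) < 1e-12
```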
|
160,542 | <p>I suspect the following integration to be wrong. My answer is coming out to be $3/5$, but the solution says $1$.</p>
<p>$$\int_0^1\frac{2(x+2)}{5}\,dx=\left.\frac{(x+2)^2}{5}\;\right|_0^1=1.$$</p>
<p>Please help out. Thanks.</p>
| Cameron Buie | 28,900 | <p>Here's another potential approach that you will likely find useful in the future (though it isn't really necessary, here), called "$u$-substitution".</p>
<p>Let's put $u=x+2$. Now, $x=0$ if and only if $u=2$, and $x=1$ if and only if $u=3$. Also, $$\frac{du}{dx}=\frac{d}{dx}[x+2]=1,$$ and if we treat $\frac{du}{dx}$ just like any other fraction and "solve" for $dx$, we get $du=dx$. Now we'll go through and substitute everything $x$-related with the corresponding $u$-related thing, so that $$\int_0^1\frac{2(x+2)}{5}\,dx=\int_2^3\frac{2u}{5}\,du=\left.\frac{u^2}{5}\right|_2^3=1.$$</p>
<p>Now, $u$-substitutions typically require a bit more finagling than this one did, but as a preview (and an alternate approach), I think it serves its purpose.</p>
|
373,906 | <p>(This question is <a href="https://math.stackexchange.com/questions/3859476">originally from Math.SE</a> where it was suggested that I ask the question here)</p>
<p>Let <span class="math-container">$G$</span> be a finite group with fewer than <span class="math-container">$p^2$</span> Sylow <span class="math-container">$p$</span>-subgroups, and let <span class="math-container">$p^n$</span> be the power of <span class="math-container">$p$</span> dividing <span class="math-container">$\lvert G\rvert$</span>. I can show that if <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> are any two distinct Sylow <span class="math-container">$p$</span>-subgroups of <span class="math-container">$G$</span> then <span class="math-container">$\lvert P\cap Q\rvert=p^{n-1}$</span>. I was wondering if this intersection is necessarily the same across all Sylow <span class="math-container">$p$</span>-subgroups of <span class="math-container">$G$</span>.</p>
<blockquote>
<p>Is the intersection <span class="math-container">$P\cap Q$</span> the same for any two distinct Sylow <span class="math-container">$p$</span>-subgroups <span class="math-container">$P$</span> and <span class="math-container">$Q$</span>?</p>
</blockquote>
<p>We might as well assume that <span class="math-container">$G$</span> has more than one Sylow <span class="math-container">$p$</span>-subgroup, in which case here are two equivalent formulations:</p>
<blockquote>
<p>Does the intersection of all Sylow <span class="math-container">$p$</span>-subgroups of <span class="math-container">$G$</span> necessarily have order <span class="math-container">$p^{n-1}$</span>?</p>
</blockquote>
<blockquote>
<p>Must there exist a normal subgroup of <span class="math-container">$G$</span> of order <span class="math-container">$p^{n-1}$</span>?</p>
</blockquote>
<p>I'm looking for a proof or counterexample of this conjecture.</p>
<p>I know that the conjecture holds in the case where <span class="math-container">$G$</span> has <span class="math-container">$p+1$</span> Sylow <span class="math-container">$p$</span>-subgroups.</p>
<p>There is some good partial progress in the comments and answers of the Math.SE link.</p>
| Thomas Browning | 95,685 | <p>Now that I understand things better, let me also give a direct proof (using essentially the same idea as Brodkey's theorem).</p>
<p>Let <span class="math-container">$P,Q,R$</span> be Sylow <span class="math-container">$p$</span>-subgroups of <span class="math-container">$G$</span>, let <span class="math-container">$N=N_G(P\cap Q)$</span>. We know that <span class="math-container">$P\cap Q$</span> has order <span class="math-container">$p^{n-1}$</span>, so <span class="math-container">$P\leq N$</span> and <span class="math-container">$Q\leq N$</span>. In other words, <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> are Sylow <span class="math-container">$p$</span>-subgroups of <span class="math-container">$N$</span>.
The intersection <span class="math-container">$R\cap N$</span> is a <span class="math-container">$p$</span>-subgroup of <span class="math-container">$N$</span>, so <span class="math-container">$R\cap N\leq P^g$</span> for some <span class="math-container">$g\in N$</span>. We have
<span class="math-container">$$R\cap Q^g=R\cap(N\cap Q^g)=(R\cap N)\cap Q^g\leq P^g\cap Q^g=(P\cap Q)^g=P\cap Q.$$</span>
Now suppose that <span class="math-container">$P\neq Q$</span>. Then <span class="math-container">$P\cap Q$</span> has order <span class="math-container">$p^{n-1}$</span>, so <span class="math-container">$R\cap Q^g$</span> must also have order <span class="math-container">$p^{n-1}$</span>, and in fact we must have <span class="math-container">$R\cap Q^g=P\cap Q$</span>. Thus, <span class="math-container">$P\cap Q\leq R$</span>.
But <span class="math-container">$R$</span> is any arbitrary Sylow <span class="math-container">$p$</span>-subgroup of <span class="math-container">$G$</span>, so the conjecture is proven.</p>
|
644,494 | <p><strong>Question</strong></p>
<blockquote>
<p>If for some real number $a$, $\lim_{x\to 0}\frac{\sin 2x + a\sin x}{x^3}$ exists, then the limit is equal to:</p>
</blockquote>
<p>Here is what I have done:</p>
<p>Since it is of $0/0$ form, applying L'Hospital's rule gives $$\lim_{x\to0}\frac{\sin 2x + a \sin x}{x^3} = \lim_{x\to0}\frac{2\cos 2x + a\cos x}{3x^2}.$$ Now I am stuck here; please help.</p>
<p>Thanks</p>
<p>Akash</p>
| mathlove | 78,967 | <p>Since $3x^2\to 0,$ the numerator also has to go to $0$ when $x\to 0$. </p>
<p>Hence, $2+a=0\iff a=-2.$</p>
<p>So, you'll have
$$\lim_{x\to 0}\frac{2\cos{2x}-2\cos x}{3x^2}.$$</p>
<p>You can use L' Hospital's rule twice.</p>
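The conclusion can be spot-checked numerically (an editorial Python sketch; the sample points and tolerances are arbitrary choices): with $a=-2$ the ratio tends to $-1$, while any other $a$ makes it blow up like $(a+2)/x^2$.

```python
import math

def g(x, a=-2.0):
    return (math.sin(2 * x) + a * math.sin(x)) / x ** 3

# with a = -2 the ratio approaches -1 as x -> 0
for x in (1e-1, 1e-2, 1e-3):
    assert abs(g(x) - (-1.0)) < 0.01

# for any other a the ratio blows up, so the limit cannot exist
assert abs(g(1e-4, a=-1.0)) > 1e6
```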
|
644,494 | <p><strong>Question</strong></p>
<blockquote>
<p>If for some real number $a$, $\lim_{x\to 0}\frac{\sin 2x + a\sin x}{x^3}$ exists, then the limit is equal to:</p>
</blockquote>
<p>Here is what I have done:</p>
<p>Since it is of $0/0$ form, applying L'Hospital's rule gives $$\lim_{x\to0}\frac{\sin 2x + a \sin x}{x^3} = \lim_{x\to0}\frac{2\cos 2x + a\cos x}{3x^2}.$$ Now I am stuck here; please help.</p>
<p>Thanks</p>
<p>Akash</p>
| Mhenni Benghorbal | 35,472 | <p>You can use the Taylor series</p>
<p>$$ \frac{\sin2x + a\sin x}{x^3} = \frac{(2x-(2x)^3/3!+\dots)+a (x-x^3/3!+\dots)}{x^3}$$</p>
<p>$$ \sim_{x\to 0} \frac{(a+2)x-(2^3+1)x^3/3!}{x^3}\dots\,. $$</p>
<p>Can you finish it?</p>
|
644,494 | <p><strong>Question</strong></p>
<blockquote>
<p>If for some real number $a$, $\lim_{x\to 0}\frac{\sin 2x + a\sin x}{x^3}$ exists, then the limit is equal to:</p>
</blockquote>
<p>Here is what I have done:</p>
<p>Since it is of $0/0$ form, applying L'Hospital's rule gives $$\lim_{x\to0}\frac{\sin 2x + a \sin x}{x^3} = \lim_{x\to0}\frac{2\cos 2x + a\cos x}{3x^2}.$$ Now I am stuck here; please help.</p>
<p>Thanks</p>
<p>Akash</p>
| user44197 | 117,158 | <p>This is not an answer as others have done a great job of it. I will try and explain <em>how to think</em> about the problem. In the end each of us approach a problem differently so how we think may be quite different. So, this is just my thinking.</p>
<p>The sine function has only odd powers since it is an odd function. So it will have $x$, $x^3$, $x^5$. Denominator has $x^3$, so if we can choose $a$ so that the $x$ term goes away, then we will be left with only $x^3$, $x^5$ etc. and after dividing by $x^3$, the $x^3$ term will be the limit and the rest will go to zero. </p>
<p>The $x$ term of $\sin 2x$ is $2x$ and $x$ term of $a\sin x$ is $a x$. So to get rid of $x$ term, we need $a = -2$.
The $x^3$ term of $\sin 2x$ is $-(2x)^3/3!$ and $x^3$ of $a\sin x$ is $-a x^3/3!$. So the limit is
$$
-8/3! -a /3! = -6/3! = -1$$</p>
|
1,221,221 | <p>Suppose $f:\mathbb{R}\rightarrow \mathbb{R}$ is continuous and has a left derivative, $f^-$, everywhere in a neighborhood of $x.$ Suppose $f^-$ is continuous at $x.$ Does this imply that $f$ is differentiable at $x$?</p>
| Rolf Hoyer | 228,612 | <p>Only if it's 'conditionally' divergent in the sense that the positive terms form a divergent series, and also the negative terms form a divergent series. You would also need $a_n\to 0$, of course. In this case, you can use the same algorithm for rearrangement in order to force convergence to some (arbitrary) value.</p>
|
2,050,867 | <p>I would like to prove for all $x, y \in \mathbb{R}$ that $\dfrac{e^{x}+e^{y}}{2} \geq e^{\frac{x+y}{2}}$. My idea, is to show that $f(x,y) \ge 0$, it means that $(0,0)$ is the minimum of $f(x,y)$. So, I compute the equation: $\nabla f(x,y)=\begin{pmatrix}0 \\0 \end{pmatrix}$. I find that the solutions are $x=y$. <strong>My first question is may I choose $\textbf{x=y=0}$?</strong> After I have made this assumption, I compute the eigenvalues of $\nabla^2f(x,y)_{(0,0)}$ and got $\lambda_1=0$ and $\lambda_2=\dfrac{1}{2}$. Thus, I can't conclude anything about $(0,0)$ from this point since one of the eigenvalues is zero. <strong>Do you have any idea about what can I do further using this method or a different path to prove it?</strong> Thank you.</p>
| Fred | 380,717 | <p>$0 \le (e^{x/2}-e^{y/2})^2=e^x-2e^{\frac{x+y}{2}}+e^y$</p>
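This one-line proof can be spot-checked numerically (an editorial Python sketch; the random sample is an arbitrary choice): the gap $\frac{e^x+e^y}{2}-e^{(x+y)/2}=\frac12(e^{x/2}-e^{y/2})^2$ is nonnegative, with equality when $x=y$.

```python
import math
import random

def midpoint_gap(x, y):
    # (e^x + e^y)/2 - e^{(x+y)/2}; equals (e^{x/2} - e^{y/2})^2 / 2 >= 0
    return (math.exp(x) + math.exp(y)) / 2 - math.exp((x + y) / 2)

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    assert midpoint_gap(x, y) >= -1e-12  # small slack for rounding

# equality exactly when x = y
assert midpoint_gap(2.0, 2.0) == 0.0
```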
|
469,947 | <blockquote>
<p>Show that the presentation $G=\langle a,b,c\mid a^2 = b^2 = c^3 = 1, ab = ba, cac^{-1} = b, cbc^{-1} =ab\rangle$ defines a group of order $12$.</p>
</blockquote>
<p>I tried to let $d=ab\Rightarrow G=\langle d,c\mid d^2 =c^3 = 1, c^2d=dcdc\rangle$. But I don't know how to find the order from the new presentation. I mean, I am not sure what the elements of the new $G$ look like. (Certainly not of the form $c^id^j$ or $d^kc^l$; otherwise $|G|\leq 5$.)</p>
<p>Is reducing the number of generators a good step, or is it unnecessary?</p>
| Martin Brandenburg | 1,650 | <p><em>Direct proof.</em></p>
<p>$N:=\langle a,b \rangle$ is clearly a normal subgroup of $G$ with $G/N = \langle c : c^3 = 1 \rangle = C_3$, and $N$ is a quotient of the Klein four-group $V_4 = \langle a,b : a^2=b^2=1, ab=ba \rangle$. Therefore $|G|$ divides $12$. Now one checks that the permutations $a=(12)(34)$, $b = (14)(23)$, $c = (123) \in A_4$ satisfy the relations which define $G$, and that they generate $A_4$. Hence we get a surjective homomorphism $G \to A_4$. Since $|G|$ divides $12$, this shows that $G \cong A_4$ is an isomorphism and $|G|=12$.</p>
<p><em>A more conceptual proof using semidirect products.</em></p>
<p>The automorphism group of the Klein four-group $V_4$ is $S_3$ (since you can permute $a,b,ab$ freely, you can also see this from linear algebra applied to $V_4 \cong \mathbb{F}_2^2$). In particular there is an automorphism $c$ of order $3$, namely the one mapping $a \mapsto b, b \mapsto ab, ab \mapsto a$. This means that the cyclic group $C_3 = \langle c : c^3 = 1 \rangle$ of order $3$ acts on this group. The corresponding semidirect product $V_4 \rtimes C_3$ has the desired presentation $\langle a,b,c : a^2=b^2=c^3=1, ab=ba, c a c^{-1} = b, c b c^{-1} = ab \rangle$. See <a href="https://mathoverflow.net/questions/96078/are-semi-direct-products-categorical-limits">MO/96078</a> for the universal property and the resulting group presentation of a semidirect product. But the usual construction of the semidirect product $N \rtimes H$ shows that $|N \rtimes H|= |N| |H|$. In particular, $G \cong V_4 \rtimes C_3$ has order $12$.</p>
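The check that the permutations $a=(12)(34)$, $b=(14)(23)$, $c=(123)$ satisfy the defining relations (and generate a group of order $12$) can be automated; below is an editorial Python sketch using $0$-based tuples for permutations.

```python
def compose(p, q):
    # (p ∘ q)(i) = p(q(i)); permutations of {0, 1, 2, 3} as tuples
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    inv = [0] * 4
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

e = (0, 1, 2, 3)
a = (1, 0, 3, 2)   # (12)(34) in 0-based form
b = (3, 2, 1, 0)   # (14)(23)
c = (1, 2, 0, 3)   # (123)

# the defining relations of G hold for these permutations
assert compose(a, a) == e and compose(b, b) == e
assert compose(c, compose(c, c)) == e
assert compose(a, b) == compose(b, a)
assert compose(c, compose(a, inverse(c))) == b
assert compose(c, compose(b, inverse(c))) == compose(a, b)

# and they generate a subgroup of order 12 (namely A4)
gens = [a, b, c]
group, frontier = {e}, {e}
while frontier:
    new = {compose(g, x) for g in gens for x in frontier} - group
    group |= new
    frontier = new
assert len(group) == 12
```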
|
1,252,591 | <p>How many odd numbers can be formed using digits $0,4,5,7$.
I am getting the answer $12$, but the actual answer is $14$. </p>
| gnasher729 | 137,175 | <p>There are 24 permutations of the digits, six each starting with 0, 4, 5 and 7. Four-digit numbers don't start with a 0, leaving $3\times 6$ permutations. If the first digit is 5 or 7, then we remove the two numbers ending in 4, so $3\times 6 - 2\times 2 = 14$. </p>
|
288,974 | <p>Alright, this may be really funny, but I want to know why it is wrong. We often come across identities which we prove by multiplying both sides of the identity by a certain quantity, so why don't we multiply by $0$? That way every identity would be proved in one single line. That is clearly absurd; by the same logic we could also say that $1=2=3$. I know it is wrong. But why? I mean, if we can multiply both sides by $2$, then why not by $0$? For example, consider the following trigonometric identity:</p>
<p>Prove the identity : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta$</p>
<h2>Usual way</h2>
<p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p>
<p>$\displaystyle \implies {\sin^2 \theta \over \cos^2 \theta } = {\tan^2 \theta \cos ^2 \theta \over \cos^2 \theta}$ (multiplying both the sides by $\displaystyle 1 \over \cos^2 \theta$)</p>
<p>$\implies \tan ^2 \theta = \tan^2\theta$</p>
<p>$\implies LHS=RHS$</p>
<p>$\therefore proved$</p>
<h2>Funny way</h2>
<p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p>
<p>$\displaystyle \implies {\sin^2 \theta \times 0} = {\tan^2 \theta \cos ^2 \theta \times 0}$ (multiplying both the sides by $0$)</p>
<p>$\implies 0 = 0$</p>
<p>$\therefore proved$</p>
<p>Please explain why this is wrong.</p>
| Nick | 60,044 | <p>Zero differs from other numbers and mathematical entities which represent some sort of value because it is the absence of value, <em>not a value itself</em>. Thus, it would be incorrect to say $a \cdot 0 = b \cdot 0$ means that $a = b$. Similarly, in the case of infinity: let's say that $a$ does equal $b$. However, $a \cdot \infty \neq b \cdot \infty$ because infinity does not have a definite or specific value. Rather, it is a concept of something without limit.</p>
|
1,579,170 | <p>This problem is dependent because it matters which one you choose, so I don't think we can do the multiplication approach in this one. </p>
<ul>
<li>Probability of ( non defective ) = 6/10 </li>
</ul>
<p>What does the question mean when it says all will be non-defective? Is "all" the 2 randomly chosen telephones? How would I do this problem? Since 2 are chosen at random and 6 are non-defective, I first thought of computing $2/6$, because 2 were chosen and 6 in total are non-defective. But to find the probability that a single telephone is non-defective, would I just use $6/10$? I feel like I don't understand this question.</p>
| David | 119,775 | <p>Use the Chinese Remainder Theorem as you have suggested. But the easiest way is to use it to <strong>check</strong> your answer, not to <strong>find</strong> the answer. Let's write $l={\rm lcm}(p-1,q-1)$.</p>
<p>We have
$$a^{p-1}\equiv1\pmod p\ ,$$
and since $l$ is a multiple of $p-1$,
$$a^l\equiv1\pmod p\ .$$
Similarly
$$a^l\equiv1\pmod q\ .$$
Therefore $x=a^l$ is a solution of the simultaneous congruences
$$x\equiv1\pmod p\ ,\quad x\equiv1\pmod q\ ;$$
but by CRT this system has a unique solution modulo $pq$, and $x=1$ is clearly a solution, so we have $a^l\equiv1\pmod{pq}$.</p>
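The conclusion $a^{l}\equiv 1 \pmod{pq}$ for $l=\operatorname{lcm}(p-1,q-1)$ can be checked by brute force (an editorial Python sketch; the sample primes and bases are arbitrary choices):

```python
from math import gcd

def lcm(x, y):
    return x * y // gcd(x, y)

# for distinct primes p, q and any a coprime to pq,
# a^lcm(p-1, q-1) is congruent to 1 mod pq
for p, q in [(3, 5), (5, 7), (11, 13), (101, 103)]:
    l = lcm(p - 1, q - 1)
    for a in range(2, 50):
        if gcd(a, p * q) == 1:
            assert pow(a, l, p * q) == 1
```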
|
3,231,869 | <p>I am a little confused about what this actually means: </p>
<p><span class="math-container">$e^{x+e^x}$</span></p>
<p>It is obviously not the same if, for example, I set
<span class="math-container">$$e^{x}:= \lambda \\
e^{x+e^x} \neq \lambda^\lambda
$$</span></p>
| Dr. Sonnhard Graubner | 175,066 | <p>It is the same as <span class="math-container">$$e^x\cdot e^{e^x},$$</span> by the rule <span class="math-container">$e^{a+b}=e^a e^b$</span> applied with <span class="math-container">$a=x$</span> and <span class="math-container">$b=e^x$</span>.</p>
|
1,612,353 | <blockquote>
<p>In how many ways out of $20$ students you can select $1$ treasurer, $1$ secretary and $3$ more representatives?</p>
</blockquote>
<p>I understand that for single selections I can multiply with the availability of the persons. Like for treasurer I can have $20$ options, for secretary then I have $19$ options. So I can select a secretary and a treasurer in $20\cdot 19$ ways. but for $3$ more representatives? Should I multiply up to $16$? This is exactly where I am stuck.</p>
| vrugtehagel | 304,329 | <p>There are $20\choose 1$ options for the treasurer. Assuming one students can't have multiple roles, we have $19\choose 1$ options for the secretary and $18\choose 3$ options for the representatives, making the total number of possibilities $20\cdot 19\cdot 816=310080$.</p>
<p>Hope this helped!</p>
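The count can be confirmed by brute-force enumeration (an editorial Python sketch):

```python
from itertools import permutations
from math import comb

students = range(20)
count = 0
# enumerate ordered (treasurer, secretary) pairs, then count the ways
# to pick 3 representatives from the remaining 18 students
for treasurer, secretary in permutations(students, 2):
    rest = [s for s in students if s not in (treasurer, secretary)]
    count += comb(len(rest), 3)

assert count == 20 * 19 * comb(18, 3) == 310080
```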
|
292,122 | <p>This question actually came out of another question. In some other post, I saw a reference and, going through it, found this problem, with $n>0$.</p>
<p>Solve for $n$ explicitly, without a calculator:
$$\frac{3^n}{n!}\le10^{-6}$$</p>
<p>And I would appreciate a hint rather than an explicit solution.</p>
<p>Thank You.</p>
| mjqxxxx | 5,546 | <p>Note that, for $n=3m$, $$3^{-3m}{(3m)!}=\left[m\left(m-\frac{1}{3}\right)\left(m-\frac{2}{3}\right)\right]\cdots\left[1\cdot\frac{2}{3}\cdot\frac{1}{3}\right] <\frac{2}{9}\left(m!\right)^3.$$
So you have to go at least far enough so that
$$
\frac{2}{9}\left(m!\right)^3>10^{6},
$$
or $m! > \sqrt[3]{4500000} > 150$. So $m=5$ (corresponding to $n=15$) isn't far enough; the smallest $n$ satisfying your inequality will be at least $16$.</p>
<p>Similarly, for $n=3m+1$,
$$
3^{-3m-1}(3m+1)!=\left[\left(m+\frac{1}{3}\right)m\left(m-\frac{1}{3}\right)\right]\cdots \left[\frac{4}{3}\cdot1\cdot\frac{2}{3}\right]\cdot\frac{1}{3} < \frac{1}{3}(m!)^3,$$
so you need $m!>\sqrt[3]{3000000}> 140$, and $m=5$ (that is, $n=16$) is still too small.</p>
<p>Finally, for $n=3m+2$,
$$
3^{-3m-2}(3m+2)!=\left[\left(m+\frac{2}{3}\right)\left(m+\frac{1}{3}\right)m\right]\cdots \left[\frac{5}{3}\cdot\frac{4}{3}\cdot1\right]\cdot\frac{2}{3}\cdot\frac{1}{3} > \frac{560}{729}(m!)^3,
$$
where the coefficient comes from the last eight terms, so it is sufficient that $m! > 100\cdot\sqrt[3]{729/560}.$ To show that $m=5$ is large enough, we need to verify that $(12/10)^3=216/125 > 729/560$. Carrying out the cross-multiplication, you can check without a calculator that $216\cdot 560 =120960$ is larger than $729\cdot 125=91125$, and conclude that $m=5$ (that is, $n=17$) is large enough. The inequality therefore holds for exactly all $n\ge 17$.</p>
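The cutoff $n\ge 17$ can be confirmed in exact rational arithmetic (an editorial Python sketch, which of course uses a computer rather than the hand estimates above):

```python
from fractions import Fraction
from math import factorial

def smallest_n(bound=Fraction(1, 10 ** 6)):
    # first n > 0 with 3^n / n! <= bound, checked exactly with Fractions
    n = 1
    while Fraction(3 ** n, factorial(n)) > bound:
        n += 1
    return n

assert smallest_n() == 17
# 3^n / n! decreases once n >= 3, so it stays below the bound afterwards
assert all(Fraction(3 ** n, factorial(n)) <= Fraction(1, 10 ** 6)
           for n in range(17, 40))
```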
|
801,562 | <p>We consider that $R$ is a commutative ring with $1_R$.</p>
<p>Each $c \in R^*$ (if we see it as a constant polynomial) divides every polynomial in $R[X]$.</p>
<p>($c \in R^*$ means that $c$ is invertible.)</p>
<p>I haven't understood it. Could you explain it to me?</p>
<p>Does it mean that if we have a polynomial $p(X) \in R[X]$, then $\frac{p}{c} \in R[X]$?
If yes, why is it like that?</p>
| drhab | 75,923 | <p>If $c\in R^*$ then the polynomial $f=a_{0}+\cdots+a_{n}X^{n}$ can be written as $cg$, where
$g=c^{-1}a_{0}+\cdots+c^{-1}a_{n}X^{n}$.</p>
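A concrete instance (an editorial sketch with ad hoc values): in $R=\mathbb{Z}/5\mathbb{Z}$ the unit $c=2$ has inverse $3$, and dividing a polynomial by $c$ just means multiplying each coefficient by $c^{-1}$.

```python
# example in R = Z/5Z: c = 2 is a unit with inverse 3, since 2 * 3 = 6 = 1 mod 5
p = 5
c, c_inv = 2, 3
assert (c * c_inv) % p == 1

f = [4, 0, 3, 1]                    # coefficients of f = 4 + 3x^2 + x^3 over Z/5Z
g = [(c_inv * a) % p for a in f]    # g = c^{-1} f, coefficientwise

# then c * g recovers f, i.e. c divides f in R[x]
assert [(c * b) % p for b in g] == f
```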
|
1,148,760 | <p>$\displaystyle \int x^7\cos x^4 dx$</p>
<p>I tried first by letting $x^4 = u$ and then using integration by parts by assigning f(x) to $u^\frac74$ and cos(u) to g'(x) and I end up getting after applying parts twice, the same integral on the RHS as what we are looking for. So I bring it in on the LHS and add it over and get $\displaystyle \cos x^4 \bigg (\frac{\displaystyle 4 \displaystyle u^\frac{11}{4}}{11} \bigg)$</p>
| abel | 9,252 | <p>make the substitution $u = x^4$, $du = 4x^3\, dx$. Then
$$\int x^7 \cos x^4 \, dx = \frac14\int u\cos u\, du
= \frac14 \int u \, d(\sin u) = \frac14 \left( u\sin u -\int \sin u \,du\right)
=\frac14 \left( u\sin u + \cos u\right) + C = \frac14 \left( x^4\sin x^4 + \cos x^4\right) +C $$ </p>
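As a numerical sanity check (my own Python sketch; the sample point $x=1.3$ is arbitrary), a central difference of the antiderivative reproduces the integrand:

```python
import math

# Finite-difference check that F'(x) = x^7 cos(x^4) for the antiderivative above.
def F(x):
    return 0.25 * (x**4 * math.sin(x**4) + math.cos(x**4))

def f(x):
    return x**7 * math.cos(x**4)

x, h = 1.3, 1e-6
approx = (F(x + h) - F(x - h)) / (2 * h)  # central difference ≈ F'(x)
print(abs(approx - f(x)))  # tiny, limited only by floating-point error
```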
|
3,142,339 | <p>Let <span class="math-container">$p$</span> be a real number. I am looking for all <span class="math-container">$(x,y)$</span> such that <span class="math-container">$\ln[e^{x}+e^{y}]=px+(1-p)y$</span>. My effort:</p>
<p>Take exponent of both sides to obtain <span class="math-container">$e^{x}+e^{y}=e^{px}e^{(1-p)y}$</span> and then let <span class="math-container">$X=e^{x}, Y=e^{y}$</span>, so that <span class="math-container">$X+Y=X^{p}Y^{1-p}$</span>. How can I proceed from here?</p>
| Eric Towers | 123,905 | <p>For future complex continued fractions...</p>
<p>For a continued fraction to converge, the sequence of convergents (values finite initial segments of the partial fraction expression) must converge to a particular complex number, that is, to a specific argument and magnitude. For the real continued fraction you mention, the algebraic expression allows the argument to be either <span class="math-container">$0$</span> (containing "<span class="math-container">$+\sqrt{5}$</span>") or <span class="math-container">$\pi$</span> (containing "<span class="math-container">$-\sqrt{5}$</span>"). The same thing can be done with the complex version. Your two expressions have arguments <span class="math-container">$-\pi/6$</span> and <span class="math-container">$7\pi/6$</span>. If your continued fraction had any hope of converging (which, as others have shown, it does not) it converges to something with a specific argument. You generically hope that one of the algebraic solutions you have has an argument matching the limit argument of the convergents. Finding that limit argument usually requires another computation, typically solving another recurrence.</p>
|
1,675,329 | <p>What's the value of $\sum_{i=1}^\infty \frac{1}{i^2 i!}(= S)$?</p>
<p>I tried to calculate the value as follows.</p>
<p>$$\frac{e^x - 1}{x} = \sum_{i=1}^\infty \frac{x^{i-1}}{i!}.$$
Taking the integral gives
$$ \int_{0}^x \frac{e^t-1}{t}dt = \sum_{i=1}^\infty \frac{x^{i}}{i i!}. $$</p>
<p>In the same way, we get the following equation</p>
<p>$$ \int_{s=0}^x \frac{1}{s} \int_{t=0}^s \frac{e^t-1}{t}dt ds= \sum_{i=1}^\infty \frac{x^{i}}{i^2 i!}. $$</p>
<p>So we have</p>
<p>$$S = \int_{s=0}^1 \frac{1}{s} \int_{t=0}^s \frac{e^t-1}{t}dt ds.$$</p>
<p>Does this last integral have an elementary closed form or other expression?</p>
| TOM | 118,685 | <p>By A.S.'s comment, we get
$$\int_{s=0}^x \frac{1}{s} \int_{t=0}^s \frac{e^t-1}{t}dt ds = \int_{t=0}^x \frac{e^t-1}{t}\int_{s=t}^x \frac{1}{s}ds dt = \int_{0}^x \frac{(e^t-1) (\log{x} - \log{t})}{t}dt.$$</p>
<p>So, we have
$$S = - \int_{0}^1 \frac{(e^t-1) \log{t}}{t}dt = - \int_{- \infty}^0 (e^{e^u}-1) u du.$$</p>
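A numerical cross-check (a Python sketch; the midpoint rule avoids the integrable singularity of $\log t$ at $t=0$) shows the partial sums and the integral agree:

```python
import math

# Compare a partial sum of S with a midpoint-rule value of the last integral.
S = sum(1.0 / (i * i * math.factorial(i)) for i in range(1, 30))

N = 200_000
total = 0.0
for k in range(N):
    t = (k + 0.5) / N
    total += (math.exp(t) - 1.0) * math.log(t) / t
I = -total / N
print(S, I)  # the two values agree to about 1e-4
```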
|
934,660 | <p>Prove that for $ n \geq 2$, n has at least one prime factor.</p>
<p>I'm trying to use induction. For n = 2, 2 = 1 x 2. For n > 2, n = n x 1, where 1 is a prime factor. Is this sufficient to prove the result? I feel like I may be mistaken here.</p>
| Sheheryar Zaidi | 131,709 | <p>Inductive case: assume every integer from $2$ to $n$ has a prime factor, and consider $n+1$. Either $n+1$ is prime, in which case it has a prime factor (itself), or it is composite, in which case the Fundamental Theorem of Arithmetic says it has a prime factorization and hence a prime factor. </p>
<p>Induction seems a bit useless here. </p>
|
4,491,251 | <p>Per the question title, what's the easiest way to evaluate the following?
<span class="math-container">$$\int_0^{\pi/6}\sec x\,dx$$</span></p>
<p>You can do something like computing the derivatives of <span class="math-container">$\sec x$</span> and <span class="math-container">$\tan x$</span>, adding them up, computing the derivative of the logarithm of the absolute value of the sum of <span class="math-container">$\sec x$</span> and <span class="math-container">$\tan x$</span>, and then completing the integration-by-parts, getting the final answer of <span class="math-container">$\ln(\sqrt{3})$</span>.</p>
<p>But that feels like pulling something out of thin air.</p>
<p>I'm wondering if there's an easier way to compute the integral.</p>
| Bob Dobbs | 221,315 | <p>Let's make the famous <span class="math-container">$z=\tan(\frac{x}{2})$</span> substitution. Then <span class="math-container">$\sec x=\frac{1+z^2}{1-z^2}$</span> and <span class="math-container">$dx=\frac{2dz}{1+z^2}$</span>. Knowing that <span class="math-container">$\tan(\frac{\pi}{12})=2-\sqrt{3}$</span>, we compute <span class="math-container">$$\int_0^{\frac{\pi}{6}}\sec{x}dx=\int_0^{2-\sqrt{3}}\frac{-2dz}{z^2-1}=\ln\left (\left|\frac{z+1}{z-1}\right|\right )|_0^{2-\sqrt{3}}=\ln\left(\frac{3-\sqrt{3}}{\sqrt{3}-1}\right)=\ln(\sqrt{3}).$$</span></p>
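A quick midpoint-rule check (Python sketch) of the value $\ln\sqrt{3}\approx 0.549306$:

```python
import math

# Midpoint rule for the integral of sec(x) from 0 to pi/6.
N = 100_000
a, b = 0.0, math.pi / 6
h = (b - a) / N
val = h * sum(1.0 / math.cos(a + (k + 0.5) * h) for k in range(N))
print(val, 0.5 * math.log(3))  # both ≈ 0.549306
```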
|
4,491,251 | <p>Per the question title, what's the easiest way to evaluate the following?
<span class="math-container">$$\int_0^{\pi/6}\sec x\,dx$$</span></p>
<p>You can do something like computing the derivatives of <span class="math-container">$\sec x$</span> and <span class="math-container">$\tan x$</span>, adding them up, computing the derivative of the logarithm of the absolute value of the sum of <span class="math-container">$\sec x$</span> and <span class="math-container">$\tan x$</span>, and then completing the integration-by-parts, getting the final answer of <span class="math-container">$\ln(\sqrt{3})$</span>.</p>
<p>But that feels like pulling something out of thin air.</p>
<p>I'm wondering if there's an easier way to compute the integral.</p>
| Quanto | 686,284 | <p><span class="math-container">$$\int \sec x\,dx= \int \frac{\sec^2x}{\sqrt{1+\tan^2x}}dx=\sinh^{-1}(\tan x)+C
$$</span></p>
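Evaluating at the limits, $\sinh^{-1}(\tan(\pi/6)) = \sinh^{-1}(1/\sqrt{3}) = \ln(1/\sqrt{3} + 2/\sqrt{3}) = \ln\sqrt{3}$, which a one-line numeric check (Python sketch) confirms:

```python
import math

# asinh(tan(pi/6)) should equal ln(sqrt(3)).
lhs = math.asinh(math.tan(math.pi / 6))
rhs = 0.5 * math.log(3)
print(lhs, rhs)  # both ≈ 0.549306
```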
|
1,249,707 | <blockquote>
<p>Assume <span class="math-container">$V$</span> to be a finite dimensional vector space. Define the algebraic multiplicity <span class="math-container">$am(\lambda)$</span> of an eigenvalue <span class="math-container">$\lambda$</span> of a linear operator <span class="math-container">$T:V\to V$</span> as the maximum index of the factor <span class="math-container">$(t-\lambda)$</span> appearing in the characteristic polynomial of <span class="math-container">$T$</span>. Also define <span class="math-container">$G_\lambda=\{v\in V:(T-\lambda I)^kv=0 \text{ for some } k\in\mathbb{Z}^+\}$</span>. I want to show that <span class="math-container">$\dim(G_\lambda)=am(\lambda)$</span> without using Jordan Form.</p>
</blockquote>
<p>Sheldon Axler in "Linear Algebra Done Right" specifically defined the "multiplicity" of <span class="math-container">$\lambda$</span> as <span class="math-container">$\dim(G_\lambda)$</span>, hence I could not get any help from it. I am not very conversant with the properties of the Jordan form, hence I would like a more elementary proof. Please note that I cannot use the decomposition of <span class="math-container">$V$</span> into a direct sum of generalized eigenspaces because I will need to prove that indeed, <span class="math-container">$am(\lambda)=\dim(G_\lambda)$</span> to prove this.</p>
<p>I started by assuming that <span class="math-container">$f(t)=(t-\lambda)^kp(t)$</span> where <span class="math-container">$f$</span> is the characteristic polynomial of <span class="math-container">$T$</span>, <span class="math-container">$p$</span> is any other polynomial not containing the factor <span class="math-container">$(t-\lambda)$</span>. So I will have to show that <span class="math-container">$\dim(G_\lambda)=k$</span>.</p>
<p>By Cayley Hamilton Theorem, <span class="math-container">$f(T)=0\implies (T-\lambda I)^kp(T)=0$</span> hence <span class="math-container">$p(T)v\in G_\lambda \forall v\in V$</span>. Now consider the collection <span class="math-container">$\{p(T)v,(T-\lambda I)p(T)v,...,(T-\lambda I)^{k-1}p(T)v\}$</span> for a nonzero <span class="math-container">$v\in V$</span> which I know is linearly independent (based on the previous exercise) and hence <span class="math-container">$\dim(G_\lambda)\geq k$</span>.</p>
<p>How will the other direction follow?</p>
| JR2 | 1,066,996 | <p>I would like to complement Marc van Leeuwen's answer. I am going to use some results from the PDF that Marc linked (<a href="https://www.maa.org/sites/default/files/pdf/awards/Axler-Ford-1996.pdf" rel="nofollow noreferrer">Axler's paper</a>). First, we know that <span class="math-container">$V$</span> can be written as the direct sum of the generalized eigenspaces, <span class="math-container">$G(\lambda_i)$</span>, <span class="math-container">$V = G(\lambda_1) \oplus \cdots \oplus G(\lambda_r)$</span>, because generalized eigenvectors corresponding to different eigenvalues are linearly independent, and because the generalized eigenvectors span <span class="math-container">$V$</span>. What is the shape of the matrix associated with <span class="math-container">$T$</span> under this basis? The direct sum translates into a block diagonal matrix
<span class="math-container">$$
T = \left[
\begin{array}{cccc}
M_1 & 0 & 0 & 0 \\
0 & M_2 & 0 & 0 \\
0 & 0 & \ddots & 0 \\
0 & 0 & 0 & M_r \\
\end{array}
\right],
$$</span>
where <span class="math-container">$M_i$</span> is the operator <span class="math-container">$T$</span> restricted to <span class="math-container">$G(\lambda_i)$</span>.
What is the shape of each <span class="math-container">$M_i$</span>? Because we are working on <span class="math-container">$G(\lambda_i)=\{v\in V:(T-\lambda_i I)^kv=0 \text{ for some }k \in \mathbb{Z}^+\}$</span>, we'll choose a basis constructed in the following way. If <span class="math-container">$(T-\lambda_i I)^kv=0$</span> for <span class="math-container">$k=1$</span>, then <span class="math-container">$Tv=\lambda_iv$</span> and this translates into a diagonal element <span class="math-container">$\lambda_i$</span> in the matrix <span class="math-container">$M_i$</span> if we choose <span class="math-container">$v$</span> as an element of our basis. Now extract as many linearly independent vectors <span class="math-container">$v$</span> as you can that satisfy <span class="math-container">$(T-\lambda_i I)v=0$</span>. Call them <span class="math-container">$B_i(k=1) = \{x_1,\ldots,x_p\}$</span>. The <span class="math-container">$i$</span> makes reference to the <span class="math-container">$i$</span>-th block <span class="math-container">$M_i$</span> and the <span class="math-container">$k$</span> indicates that we are working on a subspace of <span class="math-container">$G(\lambda_i)$</span> associated with the equation <span class="math-container">$(T-\lambda_i I)^kv=0$</span>. So far, the matrix <span class="math-container">$M_i$</span> has the following structure with respect to <span class="math-container">$B_i(k=1)$</span>.
<span class="math-container">$$
M_i = \left[
\begin{array}{ccccc}
\lambda_i & 0 & 0 & ? & ? \\
0 & \ddots & 0 & ? & ? \\
0 & 0 & \lambda_i & ? & ? \\
0 & 0 & 0 & ? & ? \\
0 & 0 & 0 & ? & ? \\
\end{array}
\right],
$$</span></p>
<p>Now repeat the same process for <span class="math-container">$(T-\lambda_i I)^kv=0$</span> for <span class="math-container">$k=2$</span>. Note that <span class="math-container">$(T-\lambda_i I)^2v=0$</span> implies that the vector <span class="math-container">$w=(T-\lambda_i I)v$</span> satisfies <span class="math-container">$(T-\lambda_i I)w=0$</span>. Hence, <span class="math-container">$w$</span> can be written as a linear combination of the <span class="math-container">$x_1,\ldots,x_p$</span>, which implies that there are scalars <span class="math-container">$c_1,\ldots,c_p$</span>, such that
<span class="math-container">$$
c_1x_1 + \cdots + c_px_p = w = Tv - \lambda_i v,
$$</span>
and consequently that
<span class="math-container">$$
Tv = \lambda_i v + c_1x_1 + \cdots + c_px_p.
$$</span>
This means that <span class="math-container">$T$</span> acting on <span class="math-container">$v$</span> produces a multiple of <span class="math-container">$v$</span> plus a combination of vectors in <span class="math-container">$B_i(k=1)$</span>. Thus, the matrix <span class="math-container">$M_i$</span> now looks like
<span class="math-container">$$
M_i = \left[
\begin{array}{ccccc}
\lambda_i & 0 & 0 & \times & ? \\
0 & \ddots & 0 & \times & ? \\
0 & 0 & \lambda_i & \times & ? \\
0 & 0 & 0 & \lambda_i & ? \\
0 & 0 & 0 & 0 & ? \\
\end{array}
\right],
$$</span>
with respect to the basis <span class="math-container">$\{x_1,\ldots,x_p,v\}$</span>. The symbol <span class="math-container">$\times$</span> indicates a potentially nonzero entry. Continue building and appending the bases <span class="math-container">$B_i(k)$</span>, <span class="math-container">$k=2,3,\ldots$</span>, until you span the whole <span class="math-container">$G(\lambda_i)$</span>. Finally, note that the resulting matrix <span class="math-container">$M_i$</span> is upper-triangular with the eigenvalues in the main diagonal. Hence, the multiplicity of <span class="math-container">$\lambda_i$</span>, <span class="math-container">$m(\lambda_i)$</span>, corresponds to the number of diagonal entries in <span class="math-container">$M_i$</span>, i.e., the size of the block, and by construction, this same number tells us the dimension of <span class="math-container">$G(\lambda_i)$</span>.</p>
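<p>A small computational illustration of the conclusion (my own pure-Python sketch, not part of the argument above): for the upper-triangular $T$ below, with characteristic polynomial $(t-2)^2(t-3)$, the dimension of each generalized eigenspace, computed as $\dim\ker(T-\lambda I)^n$, matches the algebraic multiplicity.</p>

```python
from fractions import Fraction

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def rank(M):
    # Row-reduce over the rationals and count pivots.
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

T = [[2, 1, 0], [0, 2, 0], [0, 0, 3]]  # characteristic polynomial (t-2)^2 (t-3)
n = len(T)
dims = {}
for lam, am in [(2, 2), (3, 1)]:
    A = [[T[i][j] - (lam if i == j else 0) for j in range(n)] for i in range(n)]
    P = A
    for _ in range(n - 1):
        P = mat_mul(P, A)        # P = (T - lam*I)^n
    dims[lam] = n - rank(P)      # dim ker (T - lam*I)^n = dim G(lam)
print(dims)  # {2: 2, 3: 1}, matching the algebraic multiplicities
```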
|
173,387 | <p>How can I indent properly long code in <em>Mathematica</em>?
Are there some best practices?</p>
| Fraccalo | 40,354 | <p>As already said in other answers, this is very subjective, but here is a tip I find very useful for coding plots: I put every command on a different line, and I use the comma separator at the beginning of the line.
This is quite handy for commenting parts of the code, to enable/disable some plot options quickly (i.e. just commenting the whole line and not commenting through 2 lines).</p>
<p>This is an example of what I mean: </p>
<pre><code>DensityPlot[Exp[-(x^2 + y^2)], {x,-4,4}, {y,-4,4}
,PlotRange -> All
,PlotPoints -> 150
,LabelStyle -> {24, Black}
,FrameStyle -> Black
,ImageSize -> 300
]
</code></pre>
<p>If I want to comment the ImageSize for example, I just select the whole line and comment it:</p>
<pre><code>DensityPlot[Exp[-(x^2 + y^2)], {x,-4,4}, {y,-4,4}
,PlotRange -> All
,PlotPoints -> 150
,LabelStyle -> {24, Black}
,FrameStyle -> Black
(*,ImageSize -> 300*)
]
</code></pre>
<p>However, for doing the same thing having the commas at the end of the line, I would have to do this:</p>
<pre><code>DensityPlot[Exp[-(x^2 + y^2)], {x,-4,4}, {y,-4,4},
PlotRange -> All,
PlotPoints -> 150,
LabelStyle -> {24, Black},
FrameStyle -> Black(*,
ImageSize -> 300*)
]
</code></pre>
<p>This of course holds just for the last option line (the other ones are easily commented in both cases), but if you have nested options, like having Epilog settings in your code, you may have "more than one last option line" in the code, and then having the comma at the beginning of the line becomes more useful :)</p>
|
4,280,328 | <p>I think the substitution <span class="math-container">$x=\xi+\eta,$</span> <span class="math-container">$y=\xi-\eta$</span> can be done. Then the equation takes the form <span class="math-container">$$ \begin{gathered} 38(\xi^{2}+\eta^{2})=221+33(\xi^{2}-\eta^{2}) \\ 5 \xi^{2}+71 \eta^{2}=221 \end{gathered} $$</span></p>
<p>whence <span class="math-container">$\xi^{2}=30-71 n$</span>, <span class="math-container">$\eta^{2}=1+5 n$</span>. For <span class="math-container">$n=0$</span> we obtain noninteger solutions and for the rest one of the equalities has a negative right-hand side. Am I wrong?</p>
| sirous | 346,566 | <p>A simpler approach:</p>
<p><span class="math-container">$19y^2-(33x)y+19x^2-221=0$</span></p>
<p>we solve this for y:</p>
<p><span class="math-container">$\Delta=(33x)^2-4\times 19\times (19x^2-221)$</span></p>
<p>Or:</p>
<p><span class="math-container">$\Delta=16796-355x^2\geq 0\Rightarrow x^2\leq 47.31$</span></p>
<p>that is, the values <span class="math-container">$x^2 \in \{0,1,4,9,16,25,36\}$</span> can be checked (<span class="math-container">$x^2=0$</span> gives no integer <span class="math-container">$y$</span>). Only for <span class="math-container">$x^2=25$</span> and <span class="math-container">$x^2=4$</span> do we have integers for <span class="math-container">$\sqrt \Delta$</span>, namely 89 and 124 respectively, which gives <span class="math-container">$x=5, y=2$</span>. Since the equation is symmetric in x and y we also have <span class="math-container">$x=2, y=5$</span></p>
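A brute-force scan over the range allowed by the discriminant bound (a Python sketch; the equation being solved is the quadratic above, $19x^2-33xy+19y^2=221$) confirms these are the only integer solutions up to sign:

```python
# Search within |x|, |y| <= 7, since the discriminant bound gives x^2 <= 47.
sols = [(x, y) for x in range(-7, 8) for y in range(-7, 8)
        if 19 * x * x - 33 * x * y + 19 * y * y == 221]
print(sols)
```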
|
293,047 | <p>When I am reading through higher Set Theory books I am frequently met with statements such as '$V$ is a model of ZFC' or '$L$ is a model of ZFC' where $V$ is the Von Neumann Universe, and $L$ the Constructible Universe. For instance, in Jech's 'Set Theory' pg 176, in order to prove the consistency of the Axiom of Choice with ZF, he constructs $L$ and shows that it models the ZF axioms plus AC. </p>
<p>However isn't this strictly inaccurate as $V$ and $L$ are proper classes? For instance, by this very method we might as well take it as a $Theorem$ in ZFC that ZFC is consistent since $V$ models ZFC. However this is obviously impossible as ZFC cannot prove its own consistency. I highly doubt that Jech would make a mistake in such classic textbook, so I must be missing something.</p>
<p>How could we, for instance, show Con(ZF) $\implies$ Con(ZF + AC) without invoking the use of proper classes? I imagine, for instance, that we would start with some (set sized) model $M$ of ZFC and apply some sort of 'constructible universe' construction to $M$. </p>
| Joel David Hamkins | 1,946 | <p>What is shown in the cases you mention is not that the model is a model of ZFC, made as a single statement, but rather the <em>scheme</em> of statements that the model satisfies every individual axiom of ZFC, as a separate statement for each axiom. </p>
<p>The difference is between asserting "$L$ is a model of ZFC" and the scheme of statements "$L$ satisfies $\phi$" for every axiom $\phi$ of ZFC. </p>
<p>This difference means that from the scheme, you cannot deduce Con(ZFC).</p>
<p>For the proof that Con(ZF) implies Con(ZFC), one assumes Con(ZF), and so there is a set model $M$ of ZF. The $L$ of this model, which is a class in $M$ but a set for us in the meta-theory, is a model of ZFC, since it satisfies every individual axiom of ZFC. So we've got a model of ZFC, and thus Con(ZFC).</p>
|
1,480,671 | <p>How to prove $\int_{0}^{\infty}{h(t)\mathbb{E}(I(X>t))dt}=\mathbb{E}(\int_{0}^{\infty}{h(t)I(X>t)dt})$.
Can I treat $h(t)$ as a constant with respect to $X$ and then directly get the result?</p>
<p>The point is I do not understand what $\mathbb{E}(\int_{0}^{\infty}{h(t)I(X>t)dt})$ is.</p>
| John Dawkins | 189,130 | <p>The integral $\int_0^\infty h(t)I(X>t)\,dt$ is a random variable, call it $Y$. The role of the indicator random variable $I(X>t)$ is to restrict the $t$-integration to the (random) interval $(0,X)$. In other words,
$$
Y(\omega) =\int_0^{X(\omega)} h(t)\,dt,
$$
for each sample point $\omega$ in the sample space. You are then forming the expectation of $Y$. If $h$ takes only non-negative values, then Tonelli's theorem can be used to justify the change in order of expectation and integration (in $t$).</p>
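A Monte-Carlo sanity check of the identity for one concrete choice (my own example, not from the question): $X \sim \mathrm{Exponential}(1)$ and $h(t)=e^{-t}$, where both sides equal $1/2$:

```python
import math
import random

# Estimate E[Y] with Y = ∫_0^X e^{-t} dt = 1 - e^{-X}; compare with
# ∫_0^∞ h(t) P(X > t) dt = ∫_0^∞ e^{-t} e^{-t} dt = 1/2.
random.seed(0)
N = 200_000
acc = 0.0
for _ in range(N):
    X = random.expovariate(1.0)
    acc += 1 - math.exp(-X)
lhs = acc / N   # Monte-Carlo estimate of E[Y]
rhs = 0.5
print(lhs, rhs)
```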
|
1,894,867 | <p>Let $n=3^{1000}+1$. Is n prime?</p>
<p>My working so far:</p>
<p>$n=3^{1000}+1 \equiv 1 \pmod 3$</p>
<p>I notice that $n$ is of the form $3^k+1$.</p>
<p>Seeking advice, tips, and methods for progressing with this.</p>
| Mythomorphic | 152,277 | <p>Taking binomial expansion,</p>
<p>\begin{align}
3^{1000}+1&=(2+1)^{1000}+1\\
&=1+\sum_{k=0}^{1000}{1000\choose k}2^k1^{1000-k}\\
&=1+{1000\choose 0}+\sum_{k=1}^{1000}{1000\choose k}2^k\\
&=2\left[1+\sum_{k=1}^{1000}{1000\choose k}2^{k-1}\right]
\end{align}</p>
<p>So $3^{1000}+1$ is composite.</p>
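Both the parity claim and the binomial identity above can be checked directly (a Python sketch with exact integer arithmetic):

```python
from math import comb

# n is even, and n equals twice the bracketed sum in the expansion above.
n = 3**1000 + 1
inner = 1 + sum(comb(1000, k) * 2**(k - 1) for k in range(1, 1001))
print(n % 2 == 0, n == 2 * inner)  # True True
```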
|
1,102,928 | <p>Let $\mathcal{H}$ be a Hilbert space. I am trying to show that every self-adjoint idempotent continuous linear transformation is the orthogonal projection onto some closed subspace of $\mathcal{H}$. If $P$ is such an operator, the obvious thing is to consider $S=\{Px:x\in\mathcal{H}\}$. However, I'm having trouble showing that S is in fact closed even though I'm sure this should be almost trivial. I tried to show that if $x_n\to x$ and $x_n\in S$ then $x\in S$ but somehow I just can't quite do it...</p>
| copper.hat | 27,978 | <p>A convenient way to check for closure of subspaces is to try to write the subspace as the kernel of some continuous operator.</p>
<p>Note that $(I-P)x = 0$ <strong>iff</strong> $Px=x$.</p>
<p>Note that $x \in S$ <strong>iff</strong> $x = Py$ for some $y$ <strong>iff</strong> $Px = x$, and so
$S = \ker (I-P)$. Hence $S$ is closed.</p>
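A concrete finite-dimensional illustration (my own sketch): for the $2\times 2$ orthogonal projection onto the line $y=x$, every vector of the form $Px$ satisfies $(I-P)(Px)=0$, i.e. lies in $\ker(I-P)$:

```python
# P is self-adjoint (symmetric) and idempotent: orthogonal projection onto y = x.
P = [[0.5, 0.5], [0.5, 0.5]]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

ok = True
for v in [[1.0, 0.0], [2.0, 2.0], [3.0, -1.0]]:
    Pv = apply(P, v)
    residual = [a - b for a, b in zip(Pv, apply(P, Pv))]  # Pv - P(Pv)
    ok = ok and residual == [0.0, 0.0]
print(ok)  # True: every Pv satisfies (I - P)x = 0
```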
|
1,375,365 | <p>Find all polynomials for which </p>
<p>What I have done so far:
for $x=8$ we get $p(8)=0$
for $x=1$ we get $p(2)=0$</p>
<p>So there exists a polynomial $p(x) = (x-2)(x-8)q(x)$</p>
<p>This is where I get stuck. How do I continue?</p>
<p><strong>UPDATE</strong></p>
<p>After substituting and simplifying I get
$(x-4)(2ax+b)=4(x-2)(ax+b)$</p>
<p>For $x = 2,8$ I get</p>
<p>$x= 2 \to -8a+b=0$</p>
<p>$x= 8 \to 32a+5b=0$</p>
<p>which gives $a$ and $b$ equal to zero.</p>
| drhab | 75,923 | <p>The route you take is fruitful.</p>
<p>$p\left(x\right)=\left(x-2\right)\left(x-8\right)q\left(x\right)$
leads to:</p>
<p>$$\left(x-4\right)q\left(2x\right)=2\left(x-2\right)q\left(x\right)$$</p>
<p>Then $4$ must be a root of $q$, so $q\left(x\right)=\left(x-4\right)r\left(x\right)$ leading to:</p>
<p>$$r\left(2x\right)=r\left(x\right)$$</p>
<p>Then $r\left(x\right)$ must be a constant polynomial and we end up with: $$p\left(x\right)=c\left(x-2\right)\left(x-4\right)\left(x-8\right)$$</p>
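A sample-point check (Python sketch) of the key identity $(x-4)q(2x)=2(x-2)q(x)$ for $q(x)=x-4$; both sides are quadratics, so agreement at 21 integer points proves they are equal:

```python
# q is the solution found above, up to a constant factor.
def q(x):
    return x - 4

holds = all((x - 4) * q(2 * x) == 2 * (x - 2) * q(x) for x in range(-10, 11))
print(holds)  # True
```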
|
1,375,365 | <p>Find all polynomials for which </p>
<p>What I have done so far:
for $x=8$ we get $p(8)=0$
for $x=1$ we get $p(2)=0$</p>
<p>So there exists a polynomial $p(x) = (x-2)(x-8)q(x)$</p>
<p>This is where I get stuck. How do I continue?</p>
<p><strong>UPDATE</strong></p>
<p>After substituting and simplifying I get
$(x-4)(2ax+b)=4(x-2)(ax+b)$</p>
<p>For $x = 2,8$ I get</p>
<p>$x= 2 \to -8a+b=0$</p>
<p>$x= 8 \to 32a+5b=0$</p>
<p>which gives $a$ and $b$ equal to zero.</p>
| Eric Towers | 123,905 | <p>The following is essentially @drhab's solution, but uses only one idea repeatedly.</p>
<p>From $$ (x-8)p(2x) = 8(x-1)p(x) $$ we see $x-8$ divides $p(x)$. Let $p(x) = (x-8)p_1(x)$ and substitute, yielding $$ 2(x-8)(x-4)p_1(2x) = 8(x-1)(x-8)p_1(x) $$
From this we see $x-4$ divides $p_1(x)$. Let $p_1(x) = (x-4)p_2(x)$ and substitute, yielding $$ 4(x-8)(x-4)(x-2)p_2(2x) = 8(x-1)(x-4)(x-8)p_2(x) $$
From this we see $x-2$ divides $p_2(x)$. Let $p_2(x) = (x-2)p_3(x)$ and substitute, yielding $$ 8(x-8)(x-4)(x-2)(x-1)p_3(2x) = 8(x-1)(x-2)(x-4)(x-8)p_3(x) $$
( ... and our recursive process stops because the new $x-1$ factor divides the $x-1$ that's been lingering on the right all along.)
But now we simplify to $p_3(2x) = p_3(x)$ and the rest of @drhab's argument finishes the argument.</p>
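A sample-point check (Python sketch) that $p(x)=(x-2)(x-4)(x-8)$ satisfies the functional equation $(x-8)p(2x)=8(x-1)p(x)$; both sides are degree-4 polynomials, so agreement at 21 points proves the identity:

```python
def p(x):
    return (x - 2) * (x - 4) * (x - 8)

holds = all((x - 8) * p(2 * x) == 8 * (x - 1) * p(x) for x in range(-10, 11))
print(holds)  # True
```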
|
444,486 | <p>I am teaching myself real analysis, and in this particular set of lecture notes, the <a href="http://www.math.louisville.edu/~lee/RealAnalysis/IntroRealAnal-ch01.pdf" rel="nofollow">introductory chapter on set theory</a>, when explaining that not all sets are countable, states the following:</p>
<blockquote>
<p>If $S$ is a set, $\operatorname{card}(S) < \operatorname{card}(\mathcal{P}(S))$.</p>
</blockquote>
<p>Can anyone tell me what this means? It is theorem 1.5.2 found on page 13 of the article.</p>
| jwg | 64,062 | <p>To augment Kendra Lynne's answer, what does it mean to say that signal analysis in $\mathbb{R}^2$ isn't as 'clean' as in $\mathbb{C}$?</p>
<p>Fourier series are the decomposition of periodic functions into an infinite sum of 'modes' or single-frequency signals. If a function defined on $\mathbb{R}$ is periodic, say (to make the trigonometry easier) that the period is $2\pi$, we might as well just consider the piece whose domain is $(-\pi, \pi]$.</p>
<p>If the function is real-valued, we can decompose it in two ways: as a sum of sines and cosines (and a constant):
$$ f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos(nx) + \sum_{n=1}^{\infty} b_n \sin(nx)$$
There is a formula for the $a_k$ and the $b_k$. There is an asymmetry in that $k$ starts at $0$ for $a_k$ and at $1$ for $b_k$. There is a formula in terms of $\int_{-\pi}^{\pi} f(x) \cos(kx)dx$ for the $a_k$ and a similar formula for the $b_k$. We can write a formula for $a_0$ which has the same integral but with $\cos(0x) = 1$, but unfortunately we have to divide by 2 to make it consistent with the other formulae. $b_0$ would always be $0$ if it existed, and doesn't tell us anything about the function. </p>
<p>Although we wanted to decompose our function into modes, we actually have two terms for each frequency (except the constant frequency). If we wanted to, say, differentiate the series term-by-term, we would have to use different rules to differentiate each term, depending on whether it's a sine or a cosine term, and the derivative of each term would be a different type of term, since sine goes to cosine and vice versa.</p>
<p>We can also express the Fourier series as a single series of shifted cosine waves, by transforming
$$ a_k \cos(kx) + b_k \sin(kx) = r_k \cos(kx + \theta_k) .$$
However, we have now lost the property of expressing all functions as a sum of the same components. If we want to add two functions expressed like this, we have to separate the $r$ and $\theta$ back into $a$ and $b$, add, and transform back. We also still have a slight asymmetry - $r_k$ has a meaning but $\theta_0$ is always $0$.</p>
<p>The same Fourier series using complex numbers is the following:
$$ \sum_{n=-\infty}^{\infty} a_n e^{inx} .$$ This expresses a function $(-\pi, \pi] \rightarrow \mathbb{C}$. We can add two functions by adding their coefficients, and we can even work out the energy of a signal as a simple calculation (each component $e^{ikx}$ has the same energy). Differentiating or integrating term-by-term is easy, since we are within a constant of differentiating $e^x$. A real-valued function has $a_{-n} = \overline{a_n}$ for all $n$ (which is easy to check). $a_n$ all being real, $a_{2n}$ being zero for all $n$, or $a_{n}$ being zero for all $n < 0$ all express important and simple classes of periodic functions.</p>
<p>We can also define $z = e^{ix}$ and now the Fourier series is actually a Laurent series:
$$ \sum_{n=-\infty}^{\infty} a_n z^{n} .$$ </p>
<p>The Fourier series with $a_n = 0$ for all $n < 0$ is a Taylor series, and the one with $a_n$ all real is a Laurent series for a function $\mathbb{R} \rightarrow \mathbb{R}$. We are drawing a deep connection between the behavior of a complex function on the unit circle and its behavior on the real line - either of these is enough to specify the function uniquely, given a couple of quite general conditions.</p>
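The conjugate symmetry $a_{-n}=\overline{a_n}$ of a real signal's complex coefficients is easy to see numerically (a Python sketch with an arbitrary real trigonometric polynomial; the Riemann sum is essentially exact here):

```python
import cmath
import math

# Complex Fourier coefficients a_n = (1/2π) ∫ f(x) e^{-inx} dx of a real signal,
# approximated by a Riemann sum over (-π, π].
def f(x):
    return 1.0 + math.cos(x) + 0.5 * math.sin(2 * x)  # arbitrary real signal

N = 4096

def coeff(n):
    s = 0j
    for k in range(N):
        x = -math.pi + (k + 0.5) * 2.0 * math.pi / N
        s += f(x) * cmath.exp(-1j * n * x)
    return s / N

# For real-valued f, a_{-n} should equal the complex conjugate of a_n.
diffs = [abs(coeff(-n) - coeff(n).conjugate()) for n in range(1, 4)]
print(diffs)  # each ≈ 0
```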
|
444,486 | <p>I am teaching myself real analysis, and in this particular set of lecture notes, the <a href="http://www.math.louisville.edu/~lee/RealAnalysis/IntroRealAnal-ch01.pdf" rel="nofollow">introductory chapter on set theory</a>, when explaining that not all sets are countable, states the following:</p>
<blockquote>
<p>If $S$ is a set, $\operatorname{card}(S) < \operatorname{card}(\mathcal{P}(S))$.</p>
</blockquote>
<p>Can anyone tell me what this means? It is theorem 1.5.2 found on page 13 of the article.</p>
| G Cab | 317,234 | <p>Electrical engineers are deeply invested in the complex number field because a fundamental circuit block, such as an RC stage, "works perfectly" with complex numbers: its <a href="https://en.wikipedia.org/wiki/Electrical_impedance" rel="nofollow noreferrer">impedance</a> is "naturally" complex.<br />
Linear analog circuits then naturally lead to Fourier and Laplace transforms, transfer functions, Bode diagrams, and so on.</p>
|
654,408 | <p>I know that the volume form on $S^1$ is $\omega= ydx-xdy$. But how I can derive that? The only things that I know are the definition of differential q-form, and the fact that the vector field $v= y \frac{\partial}{\partial x}-x\frac{\partial}{\partial y}$ never vanishes on $S^1$.</p>
| Martín-Blas Pérez Pinilla | 98,199 | <p>See the proof of Proposition 12.6 in <a href="http://www.math.toronto.edu/mat1300/orientation.11.pdf" rel="nofollow">http://www.math.toronto.edu/mat1300/orientation.11.pdf</a>.</p>
<p>EDIT: Wikipedia gives the following reference for the deduction of the generalization of your formula: Flanders, Harley (1989). Differential forms with applications to the physical sciences. </p>
<p>EDIT2:
$$x=\cos\theta, y=\sin\theta$$
$$xdy-ydx=\cos\theta\cos\theta d\theta - \sin\theta(-\sin\theta)d\theta=d\theta$$
and now, go back.</p>
|
2,038,520 | <p>I know that the series b. converges, as $\sum \frac{1}{n^p}$ converges for $p>1$, so a. also converges. I want to know the sum.</p>
<blockquote>
<blockquote>
<p>a.$1+\frac{1}{9}+\frac{1}{25}+\frac{1}{49}+.....$</p>
<p>$b.1+\frac{1}{4}+\frac{1}{9}+\frac{1}{16}+.....$</p>
</blockquote>
</blockquote>
| Ethan Alwaise | 221,420 | <p>The Riemann zeta function is defined as
$$\zeta(s) = \sum_{n=1}^{\infty}\frac{1}{n^s}.$$
The value of $\zeta(2)$ is known to be $\frac{\pi^2}{6}$. Thus
$$\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}.$$
The series in a. in your post can be written as
$$\sum_{n=1}^{\infty}\frac{1}{n^2} - \sum_{n=1}^{\infty}\frac{1}{(2n)^2} = \frac{3}{4}\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{8}.$$</p>
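A quick partial-sum check (Python sketch; the tail after $N$ terms is below $1/(4N)$):

```python
import math

# Partial sum of 1 + 1/9 + 1/25 + ... (reciprocals of odd squares).
N = 1_000_000
s = sum(1.0 / (2 * k + 1) ** 2 for k in range(N))
print(s, math.pi ** 2 / 8)  # both ≈ 1.2337
```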
|
4,107,920 | <p>the question tells me that <span class="math-container">$P(A|B)>P(A)$</span> and needs me to prove: <Br></p>
<ol>
<li><span class="math-container">$P(B|A)>P(B)$</span> <br></li>
<li><span class="math-container">$P(B^c|A)<P(B^c)$</span></li>
</ol>
<p>In general all I want to ask is do I need to care that <span class="math-container">$P(B)>0$</span> or <span class="math-container">$P(A)>0$</span> for the conditional probability, and how do I do it if I do need to. <br>
<strong>Here's how I proved</strong>: <br>
Knowing that <span class="math-container">$P(A|B)>P(A)$</span> I know that <span class="math-container">$\frac{P(A\cap B)}{P(B)}>P(A)\Rightarrow P(A\cap B)>P(A)P(B)$</span>. <br></p>
<ol>
<li>I can write it as <span class="math-container">$P(B\cap A) > P(B)P(A)$</span>, and dividing by <span class="math-container">$P(A)>0$</span> gives (1). I assumed <span class="math-container">$P(A)>0$</span> because otherwise <span class="math-container">$P(B|A)$</span> would not be defined; and exactly here is my question: can I do this? If not, say <span class="math-container">$P(A)=0$</span>, how do I prove (1)? <br></li>
</ol>
<p>I did prove (2) also using (1), but I think that getting an answer about (1) will be enough for me to keep going. <br>
Thanks in advance.</p>
| Kavi Rama Murthy | 142,385 | <p>If <span class="math-container">$P(A)=0$</span> then the hypothesis becomes <span class="math-container">$0 >0$</span> which is false. So we must have <span class="math-container">$P(A) >0$</span> and 1) follows from the hypothesis by your argument.</p>
|
189,069 | <p>The Survival Probability for a walker starting at the origin is defined as the probability that the walker stays positive through n steps. Thanks to the Sparre-Andersen Theorem I know this probability, as a function of n, is given by</p>
<pre><code>Plot[Binomial[2 n, n]*2^(-2 n), {n, 0, 100}]
</code></pre>
<p>However, I want to validate this empirically. </p>
<p>My attempt to validate this for <code>n=100</code>:</p>
<pre><code>FoldList[
If[#2 < 0, 0, #1 + #2] &,
Prepend[Accumulate[RandomVariate[NormalDistribution[0, 1], 100]], 0]]
</code></pre>
<p>I want <code>FoldList</code> to stop if <code>#2 < 0</code> evaluates to <code>True</code>, not just substitute in 0. </p>
| Mr.Wizard | 121 | <p>Something seems odd to me about your code. You are summing twice, once with <a href="https://reference.wolfram.com/language/ref/Accumulate.html" rel="noreferrer"><code>Accumulate</code></a> and once with <a href="https://reference.wolfram.com/language/ref/FoldList.html" rel="noreferrer"><code>FoldList</code></a>. If this is really what you want then you could use:</p>
<pre><code>SeedRandom[26]
sum = Prepend[Accumulate[RandomVariate[NormalDistribution[0, 1], 100]], 0];
TakeWhile[sum, NonNegative] // Accumulate
</code></pre>
<blockquote>
<pre><code>8
{0, 1.10708, 1.23211, 2.28173, 3.30295, 4.05759, 5.26123, 6.62964}
</code></pre>
</blockquote>
<p>This is equivalent to your <a href="https://reference.wolfram.com/language/ref/FoldList.html" rel="noreferrer"><code>FoldList</code></a> construct up to the appropriate point:</p>
<pre><code>FoldList[If[#2 < 0, 0, #1 + #2] &, sum]
</code></pre>
<blockquote>
<pre><code>{0, 1.10708, 1.23211, 2.28173, 3.30295, 4.05759, 5.26123, 6.62964, 0, ...
</code></pre>
</blockquote>
<p>Perhaps you meant to only sum once. In that case <code>TakeWhile[sum, NonNegative]</code> is a direct solution but also sub-optimal as it does not provide early exit behavior, which I suspect is what you're actually after here. It is not clear to me if you need the cumulative sum (walk) itself or only its length; if the latter consider this:</p>
<pre><code>SeedRandom[26]
dist = RandomVariate[NormalDistribution[0, 1], 100];
Module[{i = 0},
Fold[If[# < 0, Return[i, Fold], i++; # + #2] &, 0, dist]
]
</code></pre>
<blockquote>
<pre><code>8
</code></pre>
</blockquote>
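<p>The Sparre-Andersen prediction can also be checked outside Mathematica. A minimal Monte Carlo sketch in Python (sample sizes are arbitrary choices, not from the post): for symmetric continuous steps, the probability that all partial sums stay positive through $n$ steps should equal $\binom{2n}{n}4^{-n}$.</p>

```python
import math
import random

def sparre_andersen(n):
    # exact survival probability: C(2n, n) / 4^n
    return math.comb(2 * n, n) / 4 ** n

def survival_estimate(n, trials, seed=0):
    # fraction of seeded Gaussian walks whose partial sums S_1..S_n stay positive
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = 0.0
        for _ in range(n):
            s += rng.gauss(0.0, 1.0)
            if s <= 0.0:
                break
        else:
            hits += 1
    return hits / trials
```

<p>For $n=10$ the exact value is $\binom{20}{10}/4^{10}\approx 0.176$, and the seeded estimate lands within Monte Carlo error of it.</p>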
|
1,121,845 | <p>Let $G$ be the multiplicative group of non-zero complex numbers. Consider the group homomorphism $\phi:G\rightarrow G$ defined by $\phi(z)=z^4$.</p>
<p>1.Identify kernel of $\phi=H$.</p>
<p>2.Identify $G/H$</p>
<p>My try:</p>
<p>let $z\in \ker \phi$ then $\phi(z)=1\implies z^4=1$
let $z=re^{i\theta}\implies r^4\cos 4\theta =1;r^4\sin 4\theta =0$ </p>
<p>then $r=1$ and $\tan 4\theta=0\implies 4\theta=0\implies \theta=\frac{n\pi}{2}$</p>
<p>Is it correct?</p>
<p>I cant proceed in the 2nd problem.</p>
<p>Any hints in this regard.</p>
| Aaron Maroja | 143,413 | <p>Hint: </p>
<ol>
<li><p>If $z \in \ker \phi$ then $\phi (z) = 1$. Think of the fourth roots of unity. </p></li>
<li><p>Use the <a href="http://en.wikipedia.org/wiki/Isomorphism_theorem" rel="nofollow">Isomorphism Theorem</a>.</p></li>
</ol>
<p>If $\xi$ is an $n$-th root of unity and $z_0^n = a$, then $z_0\xi$ is also a root of $z^n - a = 0$.</p>
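<p>A quick numerical check (a Python sketch): the kernel is exactly the set of fourth roots of unity $\{1, i, -1, -i\}$, i.e. $e^{in\pi/2}$ for $n=0,1,2,3$, each of which satisfies $\phi(z)=z^4=1$.</p>

```python
import cmath

# the four fourth roots of unity, written as exp(i*n*pi/2)
kernel = [cmath.exp(1j * n * cmath.pi / 2) for n in range(4)]

# each kernel element satisfies phi(z) = z^4 = 1 (up to rounding)
checks = [abs(z ** 4 - 1) < 1e-12 for z in kernel]
```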
|
2,137,591 | <blockquote>
<p>$$\int \frac{1}{x+x\log x}\,dx$$</p>
</blockquote>
<p>I couldn't use any of the integration techniques to solve this, any help will be appreciated!</p>
| Chinny84 | 92,628 | <p>$$
\int \frac{1}{x}\frac{1}{1+\log x}dx
$$
let $1+ \log x = u\implies du =\frac{1}{x}\frac{1}{\ln 10}\,dx$ (taking $\log$ to be the base-$10$ logarithm, so that $\frac{d}{dx}\log x = \frac{1}{x\ln 10}$) </p>
<p>then we have
$$
\int \ln 10\,\frac{1}{u}\,du = \ln 10 \,\ln|u| + C = \ln 10\,\ln\left|1+\log x\right| + C
$$
If $\log$ denotes the natural logarithm instead, the same substitution gives $\ln\left|1+\ln x\right| + C$.</p>
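<p>A numerical sanity check (a Python sketch, taking $\log$ to be the natural logarithm, in which case the antiderivative is $\ln|1+\ln x|+C$): differentiating the antiderivative should recover the integrand.</p>

```python
import math

def integrand(x):
    # 1 / (x + x*log x), with log the natural logarithm
    return 1.0 / (x + x * math.log(x))

def antiderivative(x):
    return math.log(abs(1.0 + math.log(x)))

# central-difference derivative of the antiderivative at x = 2
x, h = 2.0, 1e-6
numeric_derivative = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
```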
|
1,456,224 | <p>I've been asked to compute the Euler-Lagrange equation and second variation of the functional $$I[y]=\int_{a}^{b}(y'^2+y^4)dx$$
with boundary conditions $y(a)=\alpha$, $y(b)=\beta$. It's easy to see that $$I[y+\delta y]=I[y]+\int_{a}^{b}\delta y(4y^{3}-2y'') dx+\int_{a}^{b}(6y^{2}\delta y^{2}+\delta y'^{2})dx$$
So the Euler-Lagrange equation integrates to give $y^{4}-y'^{2}=k$, where $k$ is a constant of integration. We're then asked to solve this equation when $\alpha=\beta=0$. The equation is separable, but to my shame I can't do the integration (I think it involves special functions), so I looked for a different way. Completing the square on $I$ gives $$\int_{a}^{b}(y'^2+y^4)dx=\int_{a}^{b}(y'+y^2)^{2}dx-\int_{a}^{b}2y^{2}y'dx$$
But the final term is just $\left[\frac{2}{3}y^{3}\right]^{y=0}_{y=0}=0$, and the other two integrals are non-negative. The only way to extremise the RHS (I think) is to minimise it, and besides the second variation is non-negative, so we make the RHS zero by allowing $y'=-y^{2}$. But then the LHS forces both $y'=0$ and $y=0$ on all of $[a,b]$. Does this mean that the only solution is the zero function? Then, since $\delta^{2}I=\int_{a}^{b}\delta y'^{2}dx$, and this is positive unless $\delta y$ is constant (and hence $0$, by the boundary condition), do we have the zero function as the actual solution? </p>
<p>Sorry if this sounds a little incoherent, when I started writing the answer I forgot to consider $y(x)=0$ and found the other solutions to $y'=-y^{2}$, which can't possibly satisfy the boundary conditions. In fact, looking at it now, I'm starting to think that even completing the square was unnecessary.</p>
| Qmechanic | 11,127 | <p>OP has essentially already proven that there is only a trivial solution $y\equiv 0$. See also the answer by John Ma. Now OP is pondering <em>Why?</em> Perhaps he would appreciate a bit of physics intuition: The <a href="http://en.wikipedia.org/wiki/Lagrangian_mechanics" rel="nofollow">Lagrangian</a> $L=T-V$ describes a non-relativistic point particle in 1D in an inverted (=unstable) quartic potential $V \propto -y^4$, where $x$ and $y$ play the roles of time and position, respectively. The particle is initially and finally at the unstable equilibrium $y=0$. Intuitively (and/or by examining the equation of motion), if the particle leaves/crosses the unstable equilibrium $y=0$, then it would never return. Hence it never left in the first place. In other words, it sits still at the unstable equilibrium $y\equiv0$. </p>
|
51,509 | <p>Here is a problem due to Feynman. If you take 1 divided by 243 you get 0.004115226337 .... It goes a little cockeyed after 559 when you're carrying out the decimal expansion, but it soon straightens itself out and repeats itself nicely. Now I want to see how many times it repeats itself. Does it do this indefinitely, or does it stop after a certain number of repetitions? Can you write a simple <em>Mathematica</em> program to verify one conjecture or the other?</p>
| Dr. belisarius | 193 | <pre><code>RealDigits[1/243]
(*
{{{4, 1, 1, 5, 2, 2, 6, 3, 3, 7, 4, 4, 8, 5, 5, 9, 6, 7, 0, 7, 8, 1, 8, 9, 3, 0, 0}}, -2}
*)
</code></pre>
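<p>A cross-check outside Mathematica (a Python sketch): since $\gcd(10,243)=1$, the expansion of $1/243$ is purely periodic, and its period equals the multiplicative order of $10$ modulo $243$ — which matches the 27-digit repeating block that <code>RealDigits</code> reports, so the pattern does repeat indefinitely.</p>

```python
def multiplicative_order(a, n):
    # smallest k >= 1 with a^k ≡ 1 (mod n); assumes gcd(a, n) == 1
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

period = multiplicative_order(10, 243)  # length of the repeating block of 1/243
```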
|
51,509 | <p>Here is a problem due to Feynman. If you take 1 divided by 243 you get 0.004115226337 .... It goes a little cockeyed after 559 when you're carrying out the decimal expansion, but it soon straightens itself out and repeats itself nicely. Now I want to see how many times it repeats itself. Does it do this indefinitely, or does it stop after a certain number of repetitions? Can you write a simple <em>Mathematica</em> program to verify one conjecture or the other?</p>
| srgntoptics | 63,808 | <p>When you examine the repetitions over a larger scale, another interesting repetition shows up.</p>
<p>using the same code as eldo:</p>
<pre><code>Count[#, Max@#] &[ StringLength /@ Rest@StringSplit[ToString@N[1/243, 10^2], "00"]]
</code></pre>
<p>with <code>10^2</code> digits all the way to <code>10^9</code> showing repetitions of:</p>
<blockquote>
<p>3</p>
<p>37</p>
<p>370</p>
<p>3703</p>
<p>37037</p>
<p>370370</p>
<p>3703703</p>
<p>37037037</p>
</blockquote>
|
2,027,337 | <p>My homework sets up the problem accordingly:</p>
<blockquote>
<p>An object moves horizontally in one dimension with a velocity given by
v(t) = $8\cos\left(\frac{\pi \cdot t}{6}\right)$ m/s.</p>
<p>The position of the object is given by
$s\left(t\right)=\int _0^t\:v\left(y\right)\:dy\:$ for $t\ge 0$. Find
the position function for all $t\ge 0$.</p>
</blockquote>
<p>I find this problem differently worded than any other u-substitution problem I've worked on, and I'm having trouble figuring it out. Apparently I can use this relationship:</p>
<blockquote>
<p>$\int_a^b\:f\left(g\left(x\right)\right)g'\left(x\right)dx\:=\:\int_{g\left(a\right)}^{g\left(b\right)}f\left(u\right)du\:$</p>
</blockquote>
<p>...Which I've used before. I assume g(x) would equal my u-substitution, which is $\frac{\pi \cdot t}{6}$ I presume - but what confuses me are the boundaries, one of which is a variable. Could someone walk me through this?</p>
<p>There is also a follow up question: </p>
<blockquote>
<p>What is the period of the motion - that is, starting at any point,
how long does it take for the object to return to that position?</p>
</blockquote>
<p>Since the period of the sine function is $2\pi$, do I just set the resulting equation to that and solve? </p>
| A.D. | 294,708 | <p>You need to find the position function $s(t)$; since you know the velocity function $v(t)$, your problem amounts to finding the indefinite integral $\int v(t)\,dt$. Let $g(t) = \frac{\pi t}{6}$, which implies that $\frac{dg}{dt} = \frac{\pi}{6}$ and therefore
$$ v(t) = \frac{48}{\pi}\, g'\cos{g}.$$
This implies that </p>
<p>$$\int v(t)\, dt = \int \frac{48}{\pi}\, g'\cos(g)\, dt = \frac{48}{\pi} \int\cos(g)\,dg = \frac{48}{\pi} \sin\left(\frac{\pi t}{6}\right) + C. $$ </p>
|
2,027,337 | <p>My homework sets up the problem accordingly:</p>
<blockquote>
<p>An object moves horizontally in one dimension with a velocity given by
v(t) = $8\cos\left(\frac{\pi \cdot t}{6}\right)$ m/s.</p>
<p>The position of the object is given by
$s\left(t\right)=\int _0^t\:v\left(y\right)\:dy\:$ for $t\ge 0$. Find
the position function for all $t\ge 0$.</p>
</blockquote>
<p>I find this problem differently worded than any other u-substitution problem I've worked on, and I'm having trouble figuring it out. Apparently I can use this relationship:</p>
<blockquote>
<p>$\int_a^b\:f\left(g\left(x\right)\right)g'\left(x\right)dx\:=\:\int_{g\left(a\right)}^{g\left(b\right)}f\left(u\right)du\:$</p>
</blockquote>
<p>...Which I've used before. I assume g(x) would equal my u-substitution, which is $\frac{\pi \cdot t}{6}$ I presume - but what confuses me are the boundaries, one of which is a variable. Could someone walk me through this?</p>
<p>There is also a follow up question: </p>
<blockquote>
<p>What is the period of the motion - that is, starting at any point,
how long does it take for the object to return to that position?</p>
</blockquote>
<p>Since the period of the sine function is $2\pi$, do I just set the resulting equation to that and solve? </p>
| MPW | 113,214 | <p>To evaluate $$\int_a^bk\cos cx\; dx$$
Put $$u=cx$$
$$du = c\; dx$$
so
$$dx = \frac1c (c\; dx) = \frac1c \; du$$
$$x=a \iff u = ca$$
$$x=b \iff u = cb$$
and
$$\int_a^b k\cos cx\; dx = \frac kc\int_{ca}^{cb}\cos u\; du
= \left[\frac kc\sin u\right]_{ca}^{cb}= \frac kc(\sin cb - \sin ca)$$
In your case, $a=0$, $b=t$, $k=8$, and $c=\pi/6$, so the value is
$$\boxed{\frac{48}{\pi}\sin \frac{\pi t}{6}}$$</p>
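<p>A numerical sanity check of this result (a Python sketch using a composite Simpson rule, not part of the original answer):</p>

```python
import math

def v(t):
    # the given velocity, 8*cos(pi*t/6)
    return 8.0 * math.cos(math.pi * t / 6.0)

def simpson(f, a, b, n=1000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += f(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3.0

def s_exact(t):
    # the closed-form position, (48/pi)*sin(pi*t/6)
    return 48.0 / math.pi * math.sin(math.pi * t / 6.0)
```

<p>For the follow-up question, the same formula shows the motion repeats every $12$ time units, since $\sin$ has period $2\pi$ and $\frac{\pi(t+12)}{6} = \frac{\pi t}{6} + 2\pi$.</p>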
|
3,888,365 | <p>I have been trying to understand this limit:</p>
<p><span class="math-container">$$\lim_{x \to 0}\frac{\tan(x)-\sin(x)}{x^2}$$</span></p>
<p>When applying l'Hopital's rule I arrive at the limit being <span class="math-container">$0$</span>, but when doing things organically I get an indeterminate form:</p>
<p><span class="math-container">$$
\lim_{x \to 0}\frac{\tan(x)-\sin(x)}{x^2}=\lim_{x \to 0}\frac{\tan(x)}{x^2}-\frac{\sin(x)}{x^2}= \lim_{x \to 0} \frac{\sin(x)}{\cos(x)x^2}-\frac{\sin(x)}{x^2}= \lim_{x \to 0}\frac{\sin(x)}{x^2}\left(\frac{1}{\cos(x)}-1\right)
$$</span></p>
<p>Clearly <span class="math-container">$\lim_{x \to 0} \frac{1}{\cos(x)}=1$</span>, hence <span class="math-container">$\frac{1}{\cos(x)}-1\to 0$</span>, and I could well apply <span class="math-container">$\lim_{x \to 0}\frac{\sin(x)}{x}=1$</span>, but that still leaves <span class="math-container">$\lim_{x \to 0}\frac{1}{x}$</span>, which is undetermined because it has different limits on <span class="math-container">$0^-$</span> and <span class="math-container">$0^+$</span>.</p>
<p>Is there something I'm missing?</p>
| Somos | 438,089 | <p>In the numerator, since both <span class="math-container">$\tan(x)$</span> and <span class="math-container">$\sin(x)$</span> are odd functions, the difference is also an odd function. The denominator <span class="math-container">$x^2$</span> is an even function. The quotient is an odd function. If the limit as <span class="math-container">$x$</span> approaches <span class="math-container">$0$</span> from positive reals exists, then it is the negative of the limit from negative reals. Thus, the bidirectional limit exists and it must be equal to its negative. That implies the limit is zero if it exists.</p>
<p>Your last step is not valid since the first factor goes to infinity and the second goes to zero. This is an indeterminate form and hence we can't tell what the limit is from that.</p>
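<p>A quick numerical sketch in Python supports both points: near $0$ the quotient behaves like $x/2$ (so it tends to $0$), and it is an odd function.</p>

```python
import math

def f(x):
    return (math.tan(x) - math.sin(x)) / x ** 2

small = f(1e-3)                  # roughly 1e-3 / 2, i.e. close to 0
odd_check = f(1e-3) + f(-1e-3)   # an odd function gives (essentially) 0
```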
|
2,384,422 | <p>I'm really stuck on how to go about solving the following first order ODE; I've got little idea on how to approach it, and I'd really appreciate if someone could give me some hints and/or working for a solution so I can have a reference point on how to approach these sorts of problems.</p>
<p>The following is one of many ODE's I've gotten off a problem set I found in a textbook at a library:</p>
<p>$$y' = xe^{-\sin(x)} - y\cos(x)$$</p>
<p>Can anyone help?</p>
| Frieder Jäckel | 440,045 | <p>I always like to think of this type of ODE in terms of the product rule.
\begin{equation}x=y'e^{\sin(x)}+y\cos(x)e^{\sin(x)}=\left(ye^{\sin(x)}\right)'
\end{equation}
So integrating both sides and dividing by $e^{\sin(x)}$ yields\begin{equation}y=e^{-\sin(x)}\left(\frac{1}{2}x^2+c\right).
\end{equation}</p>
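<p>The closed form can be checked numerically (a Python sketch, with the hypothetical initial condition $y(0)=1$, so $c=1$): a Runge-Kutta integration of the ODE should agree with $e^{-\sin x}\left(\frac{x^2}{2}+1\right)$.</p>

```python
import math

def f(x, y):
    # right-hand side of y' = x*exp(-sin x) - y*cos x
    return x * math.exp(-math.sin(x)) - y * math.cos(x)

def rk4(f, x0, y0, x1, n=2000):
    # classical fourth-order Runge-Kutta from x0 to x1 in n steps
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

def exact(x):
    # closed-form solution with c = y(0) = 1
    return math.exp(-math.sin(x)) * (x * x / 2 + 1.0)
```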
|
2,503,306 | <p>Suppose $g^{n}=e$. Show the order of $g$ divides $n$.</p>
<p>Would I use Euler's Theorem?</p>
<p>$a^{\phi(p)}\equiv1 \pmod p$</p>
<p>$a^{p-1}\equiv1 \pmod p$</p>
<p>$a^{p}\equiv a\pmod p$</p>
<p>So then I would have </p>
<p>$g^{n}\equiv g\pmod n$</p>
<p>then I think you use the $\gcd$, which states $\gcd(a,b) = 1$</p>
<p>or </p>
<p>$a=nq+r$ and $b=nq+r$</p>
<p>which is $a\equiv b\pmod n$??</p>
| José Carlos Santos | 446,262 | <p>Let $o$ be the order of $g$. Then $n$ can be written as $oq+r$, with $r\in\{0,1,\ldots,o-1\}$. Therefore\begin{align}e&=g^n\\&=g^{oq+r}\\&=(g^o)^q\cdot g^r\\&=g^r.\end{align} Since $g^r=e$, since $o$ is the order of $g$, and since $r<o$, $r$ can only be equal to $0$. And this means that $o\mid n$.</p>
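<p>A small numerical illustration of the result (a Python sketch) in the multiplicative group mod $7$: every element satisfies $g^6 \equiv 1$, and each brute-force order divides $6$.</p>

```python
def order(g, p):
    # order of g in the multiplicative group mod p (assumes p prime, p does not divide g)
    k, x = 1, g % p
    while x != 1:
        x = (x * g) % p
        k += 1
    return k

p = 7
n = p - 1  # by Fermat's little theorem, g^(p-1) ≡ 1 for every g in the group
divides = [n % order(g, p) == 0 for g in range(1, p)]
```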
|
203,505 | <p>Let <span class="math-container">$P(x)$</span> be a non-constant polynomial with real coefficients.</p>
<p>Can <a href="http://en.wikipedia.org/wiki/Natural_density" rel="noreferrer">natural density</a> of</p>
<p><span class="math-container">$$\{n\ |\ \lfloor P(n)\rfloor \ \text{is prime.}\}$$</span></p>
<p>be positive?</p>
| Terry Tao | 766 | <p>No. There are two cases. Firstly, suppose that one of the non-constant coefficients of $P$ is irrational. Then, by the Weyl equidistribution theorem, $\lfloor P(n) \rfloor$ is equidistributed mod $W$ for any modulus $W$, which already limits the natural density of the prime-producing $n$ to be at most $\phi(W)/W$ for any $W$, which implies zero density by taking $W$ to be a product of all the primes less than a large threshold $w$.</p>
<p>If the non-constant coefficients are all rational, then by passing to a suitable arithmetic progression one can make them all integer, at which point one may as well make the constant coefficient integer as well. Then one can sieve using the Chebotarev density theorem (or <a href="http://en.wikipedia.org/wiki/Landau_prime_ideal_theorem">Landau prime ideal theorem</a>) as in David's answer. (One should probably get an upper bound of $O(x/\log x)$ for the number of $n \leq x$ with $P(n)$ prime by this method, where the implied constants depend on the coefficients of $P$ of course.)</p>
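<p>The key quantitative point in the first case — that the sieve bound $\phi(W)/W = \prod_{p \le w}(1-1/p)$ tends to $0$ as the threshold $w$ grows — is easy to illustrate numerically (a Python sketch with small, arbitrary thresholds):</p>

```python
def primes_up_to(w):
    # simple sieve of Eratosthenes
    sieve = [True] * (w + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(w ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, ok in enumerate(sieve) if ok]

def density_bound(w):
    # phi(W)/W for W = product of primes up to w
    bound = 1.0
    for p in primes_up_to(w):
        bound *= 1.0 - 1.0 / p
    return bound
```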
|
1,267,395 | <p>Julie is required to pay a 2 percent tax on all income over \$3,000. She also has to pay 2.5 percent on all income over \$20,000. She earned more than \$20,000 and paid \$992.50 in tax. What was her total income?</p>
| DeepSea | 101,504 | <p>Let $x$ be her total income, then we have: $0.02(x-3,000) + 0.025(x-20,000) = 992.5$ Can you solve this linear equation?</p>
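<p>Solving that equation (a quick Python sketch): collecting the $x$ terms gives $0.045x = 992.5 + 60 + 500$.</p>

```python
# 0.02*(x - 3000) + 0.025*(x - 20000) = 992.50
x = (992.50 + 0.02 * 3000 + 0.025 * 20000) / (0.02 + 0.025)

# sanity check: the tax actually paid at this income
tax = 0.02 * (x - 3000) + 0.025 * (x - 20000)
```

<p>So her total income was \$34,500.</p>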
|