| qid | question | author | author_id | answer |
|---|---|---|---|---|
259,795 | <p>Consider the function $f \colon\mathbb R \to\mathbb R$ defined by
$f(x)=
\begin{cases}
x^2\sin(1/x); & \text{if }x\ne 0, \\
0 & \text{if }x=0.
\end{cases}$</p>
<p>Use the $\varepsilon$-$\delta$ definition to prove that $f'(0)=0$.</p>
<p>Now I see that $h$ should equal $\delta$, and $\delta$ should equal $\varepsilon$ in this case. Thanks to everyone who contributed!</p>
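For intuition, the claim can be checked numerically: since $f(0)=0$, the difference quotient at $0$ is $h\sin(1/h)$, whose absolute value is at most $|h|$, so taking $\delta=\varepsilon$ works. A short Python sketch (an illustration, not part of the original question):

```python
import math

def f(x):
    # The piecewise function from the question: x^2 sin(1/x) for x != 0, and 0 at x = 0.
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

# Difference quotient at 0: (f(h) - f(0)) / h = h * sin(1/h),
# so its absolute value is bounded by |h|.
hs = [0.1, 0.01, 0.001, 1e-6]
quotients = [abs((f(h) - f(0)) / h) for h in hs]
bounds_hold = all(q <= abs(h) for q, h in zip(quotients, hs))
```

As $h$ shrinks, the quotient visibly tends to $0$, matching $f'(0)=0$.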
| Alex Youcis | 16,497 | <p>I would post this as a comment, since I need to check it a little more before I am totally ok with it--alas, it is too long. So, an answer it shall stay.</p>
<p>EDIT: I should emphasize that where I am using finite dimensionality is that every linear map in sight is continuous. In particular, a linear isomorphism is a homeomorphism.</p>
<p>I think this should suffice. Let $V$ and $W$ be finite dimensional real vector spaces and $T:V\to W$ a surjective linear map.</p>
<p>EDIT EDIT: As Matt E points out below, it is prudent to mention the following. Below I prove that the map $\pi$ is open when we give $V/Y$ the quotient topology, but not that it is open when we give it the standard metric topology which makes $\widetilde{T}$ a homeomorphism. But, the fact that the quotient topology on $V/Y$ coincides with the linear topology is deducible from the following: a) we're in finite dimensions, so $Y$ is closed, and b) give $V$ your favorite norm; since $Y$ is closed, it is commonly known that the quotient topology on $V/Y$ is the topology induced by the quotient norm, and so the quotient topology is the linear topology (all normed topologies are the same in finite dimensions). There are other ways to see that the two topologies coincide if you're interested. :)</p>
<p>We first claim that the projection map $V\to V/Y$ is open for any $Y$ a subspace of $V$. To see this, we note that it suffices to prove that $\pi^{-1}(\pi(U))$ (where $\pi$ is the projection map) is open for each open set $U$ in $V$. But, note that $$\pi^{-1}(\pi(U))=\bigcup_{v\in Y}(v+U)$$</p>
<p>which, being the union of open sets, is open. </p>
<p>Now that we know the projection map is open, this should be easy. Namely, let $Y=\ker T$. We know then that $T$ induces a linear isomorphism $\widetilde{T}:V/Y\to W$. Since this is a linear isomorphism, it is, in particular, a homeomorphism and so an open map. So, if we take $U\subseteq V$ open we have that $T(U)$ is just $\widetilde{T}(\pi(U))$ where $\pi$ is the projection $V\to V/Y$. So, we see that $\pi(U)$ is open in $V/Y$ since $\pi$ is an open map, and $\widetilde{T}(\pi(U))$ is open because $\widetilde{T}$ is a homeomorphism. So, $T(U)=\widetilde{T}(\pi(U))$ is open as desired.</p>
|
2,843,560 | <p>If $\sin x +\sin 2x + \sin 3x = \sin y\:$ and $\:\cos x + \cos 2x + \cos 3x =\cos y$, then $x$ is equal to</p>
<p>(a) $y$</p>
<p>(b) $y/2$</p>
<p>(c) $2y$</p>
<p>(d) $y/6$</p>
<p>I expanded the first equation to reach $2\sin x(2+\cos x-2\sin x)= \sin y$, but I doubt it leads me anywhere. A little hint would be appreciated. Thanks!</p>
| Batominovski | 72,152 | <p>Note that $$\sin(x)+\sin(2x)+\sin(3x)=\sin(2x)\,\big(1+2\cos(x)\big)$$
and
$$\cos(x)+\cos(2x)+\cos(3x)=\cos(2x)\,\big(1+2\cos(x)\big)\,.$$
Thus, for the required equalities to be true, we need
$$1+2\cos(x)=s\in\{-1,+1\}\text{ and }2x=\left\{\begin{array}{ll}
2k\pi+y\,,&\text{if }s=+1\\
(2k+1)\pi-y\,,&\text{if }s=-1\,.
\end{array}\right.$$
for some integer $k$. This means
$$x=\left\{\begin{array}{ll}
k\pi+\frac{y}{2}\,,&\text{if }s=+1\\
k\pi+\frac{\pi-y}{2}\,,&\text{if }s=-1\,.
\end{array}\right.$$
The problem statement seems to suggest that (b) is the correct answer (with option $s=+1$ and $k=0$). However, there is a caveat, as Saucy O'Path mentioned.</p>
<p>To be precise, the possible values of $(x,y)$ are
$$(x,y)=\left(\frac{\pi}{2}+m\pi,(2n+1)\pi\right)\text{ and }(x,y)=\big((2m+1)\pi,(2n+1)\pi\big)$$
where $m,n\in\mathbb{Z}$.
In addition, $x=\pi$ is a solution; if (b) were correct, then $y=2x=2\pi$ would have to work with $x=\pi$, but it does not. When $x=\pi$, the only working values of $y$ are odd multiples of $\pi$. </p>
<p><strong>Conclusion:</strong> This is a poorly designed problem, and should be ignored. None of the provided choices is a (solely) correct answer. Each except (c) can be correct, e.g., (a) with $(x,y)=(\pi,\pi)$, (b) with $(x,y)=\left(\frac{\pi}{2},\pi\right)$, and (d) with $(x,y)=\left(\frac{\pi}{2},3\pi\right)$. </p>
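As a numeric sanity check of the factorizations and of the sample pairs above, here is a short Python sketch (illustrative, not part of the original answer):

```python
import math

def lhs_sin(x): return math.sin(x) + math.sin(2*x) + math.sin(3*x)
def lhs_cos(x): return math.cos(x) + math.cos(2*x) + math.cos(3*x)

# Check the factorizations sin x + sin 2x + sin 3x = sin(2x)(1 + 2cos x)
# and the cosine analogue at a few sample points.
xs = [0.3, 1.1, 2.7, -0.9]
sin_ok = all(abs(lhs_sin(x) - math.sin(2*x)*(1 + 2*math.cos(x))) < 1e-12 for x in xs)
cos_ok = all(abs(lhs_cos(x) - math.cos(2*x)*(1 + 2*math.cos(x))) < 1e-12 for x in xs)

# Spot-check the pairs cited in the conclusion.
def solves(x, y):
    return abs(lhs_sin(x) - math.sin(y)) < 1e-12 and abs(lhs_cos(x) - math.cos(y)) < 1e-12

pair_a = solves(math.pi, math.pi)        # supports option (a): x = y
pair_b = solves(math.pi/2, math.pi)      # supports option (b): x = y/2
```

Both pairs satisfy the system, confirming that more than one answer choice can be made to work.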
|
1,013,346 | <p>A box contains $3$ red balls and $7$ blue balls. A ball is drawn from the box, and a ball of the other color is then put into the box. A second ball is drawn. What is the probability that the second ball is blue? </p>
<p>could anyone provide me any hint? </p>
<p>Please, don't offer a complete sketch of the solution, a hint is enough for me as this is a homework problem. </p>
| Felix Marin | 85,343 | <p>$\newcommand{\angles}[1]{\left\langle\,{#1}\,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\Li}[1]{\,\mathrm{Li}_{#1}}
\newcommand{\mrm}[1]{\,\mathrm{#1}}
\newcommand{\ol}[1]{\overline{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
As shown by $\texttt{@Timbuc}$, you just need to know the sum
$\ds{\sum_{n = 1}^{\infty}{1 \over n^{4}}}$. It has an interesting evaluation by starting from the $\ds{\cot\pars{z}}$ expansion
$\ds{\pars{~\mbox{with}\ z \not= 0, \pm\pi,\pm 2\pi,\ldots~}}$:</p>
<p>\begin{align}
\cot\pars{z} & =
{1 \over z} + 2z\sum_{n = 1}^{\infty}{1 \over z^{2} - n^{2}\pi^{2}} =
{1 \over z} - {2z \over \pi^{2}}\sum_{n = 1}^{\infty}{1 \over n^{2}}
{1 \over 1 - z^{2}/ \pars{n^{2}\pi^{2}}}
\\[5mm] & =
{1 \over z} - {2z \over \pi^{2}}\sum_{n = 1}^{\infty}{1 \over n^{2}}
\pars{1 + {z^{2} \over n^{2}\pi^{2}} + \cdots} =
{1 \over z} - {2z \over \pi^{2}}\sum_{n = 1}^{\infty}{1 \over n^{2}}
-{2z^{3} \over \pi^{4}}\color{#f00}{\sum_{n = 1}^{\infty}{1 \over n^{4}}} - \cdots
\end{align}
<hr>
\begin{align}
z\cot\pars{z} & =
1 - {2z^{2} \over \pi^{2}}\sum_{n = 1}^{\infty}{1 \over n^{2}}
-\bracks{{48 \over \pi^{4}}\color{#f00}{\sum_{n = 1}^{\infty}{1 \over n^{4}}}}{z^{4} \over 4!} - \cdots
\end{align}
<hr>
$$
\color{#f00}{\sum_{n = 1}^{\infty}{1 \over n^{4}}} =
\left.-\,{\pi^{4} \over 48}\,\totald[4]{\bracks{z\cot\pars{z}}}{z}
\right\vert_{\ z\ \to\ 0} = \pars{-\,{\pi^{4} \over 48}}\pars{-\,{8 \over 15}} =
\color{#f00}{\pi^{4} \over 90}
$$</p>
|
81,267 | <p>I have the following problem: I have a (a lot)*3 table, meaning that I have 3 columns, say X, Y and Z, with real values. In this table some of the rows have the same (X,Y) values, but with different value of Z. For instance</p>
<pre><code>{{12.123, 4.123, 513.423}, {12.123, 4.123, 33.43}}
</code></pre>
<p>have the same (X,Y) but different Z. This is a case of multiplicity=2, but in principle it could be higher. What I want to do is to take all the unique rows, AND in case they have multiplicity >1 (i.e. repeated (X,Y)), pick the one with minimum Z value. In the previous example it would be the second one.</p>
<p>I hope I have been clear! Thank you very much indeed!</p>
| sacratus | 23,055 | <pre><code>DeleteDuplicates[SortBy[data, Last],( #1[[1]]==#2[[1]] && #1[[2]]==#2[[2]] & ) ]
</code></pre>
<p>Explanation:
<code>data</code> is your data of the form <code>{{x1,y1,z1},...,{xn,yn,zn}}</code>. With <code>SortBy[data, Last]</code> we sort this dataset with respect to its last coordinate, i.e. z. </p>
<p>With <code>DeleteDuplicates</code> you can simply delete duplicate elements of a list. By specifying a <code>SameTest</code> one can define which elements are treated as equal.</p>
<p>The following <code>SameTest</code></p>
<pre><code>#1[[1]]==#2[[1]] && #1[[2]]==#2[[2]] &
</code></pre>
<p>says that two elements are treated as the same when both their x- and y-coordinates are equal. </p>
<p>This is a shorter form of this test:</p>
<pre><code>#1[[1;;2]]==#2[[1;;2]] &
</code></pre>
<p><strong>EDIT:</strong>
Probably the simplest form of this approach:</p>
<pre><code>DeleteDuplicates[SortBy[data, Last], Most[#1] ==Most[#2]&]
</code></pre>
<p>as suggested by <a href="https://mathematica.stackexchange.com/users/9362/bob-hanlon">Bob-Hanlon</a> </p>
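For readers outside Mathematica, the same keep-the-minimum-z logic can be sketched in plain Python (an illustrative translation of the approach above; the function name <code>min_z_rows</code> is made up here):

```python
# Keep, for each (x, y) pair, only the row with the smallest z -- a plain-Python
# analogue of DeleteDuplicates[SortBy[data, Last], Most[#1] == Most[#2] &].
def min_z_rows(data):
    best = {}
    for x, y, z in sorted(data, key=lambda row: row[2]):
        # After sorting by z, the first row seen for each (x, y) has minimal z.
        best.setdefault((x, y), (x, y, z))
    return list(best.values())

data = [
    (12.123, 4.123, 513.423),
    (12.123, 4.123, 33.43),
    (1.0, 2.0, 5.0),
]
result = min_z_rows(data)
```

On the example from the question, the row with z = 33.43 survives and the duplicate with z = 513.423 is dropped.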
|
353 | <p>One of the challenges of undergraduate teaching is logical implication. The case-by-case definition, in particular, is quite disturbing for most students, who have trouble accepting "false implies false" and "false implies true" as true sentences. </p>
<blockquote>
<p><strong>Question:</strong> What are good points of view, methods, and tips to help students grasp the concept of logical implication?</p>
</blockquote>
<p>To focus the question, I would like to restrict to math majors, although the question is probably equally interesting for other kinds of students.</p>
| Anschewski | 199 | <p>To supplement Brendan's idea:
I like to connect quantified statements to unquantified ones in the following way:
Consider a statement like "If X is a dog, then X has a head". Now, once you have found this to be true, you might want the truth not to depend on X. Thus replacing X with anything, like "my car", should still give you a true statement. Hence "If my car is a dog, then my car has a head" should be true.</p>
<p>A second remark:
At this point one should make clear, that the truth of statements is not given by nature, but something we define. So the question should not be which truth value is right, but which one makes more sense.</p>
<p>Edit: third remark:
There are statements which feel "more true" than others, although they are of the form "A=>B" with A and B both being false. Compare "If I am the pope, then I am a woman." and "If I am the pope, then I live in Rome". For me (not the pope, male, not living in Rome), all parts of the statements are false. However, the second feels true, whereas the first one feels wrong. So it is worth a discussion if and how one should calculate truth values of conditional statements from the truth values of their parts.</p>
|
353 | <p>One of the challenges of undergraduate teaching is logical implication. The case-by-case definition, in particular, is quite disturbing for most students, who have trouble accepting "false implies false" and "false implies true" as true sentences. </p>
<blockquote>
<p><strong>Question:</strong> What are good points of view, methods, and tips to help students grasp the concept of logical implication?</p>
</blockquote>
<p>To focus the question, I would like to restrict to math majors, although the question is probably equally interesting for other kinds of students.</p>
| user26872 | 4,665 | <p><em>Here is an example I sometimes use to motivate the truth table for implication. The example taps into our ability to recognize cheaters.</em> </p>
<p>Suppose you go to the vending machine. The price of a soda is one dollar. </p>
<ol>
<li><p>Suppose you put a dollar in the vending machine and receive a soda. Do you feel cheated? <strong>No!</strong> </p></li>
<li><p>Suppose you put a dollar in the vending machine and do not receive a soda. Do you feel cheated? <strong>Yes!</strong> </p></li>
<li><p>Suppose you do not put a dollar in the vending machine but receive a soda anyways. Do you feel cheated? <strong>No!</strong> </p></li>
<li><p>Suppose you do not put a dollar in the vending machine and do not receive a soda. Do you feel cheated? <strong>No!</strong> </p></li>
</ol>
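The four cases are exactly the truth table of material implication "paid implies soda", which a short Python sketch makes explicit (illustrative, not part of the original answer):

```python
# Material implication: p -> q is equivalent to (not p) or q.
# "Cheated" corresponds to the one row where the implication is false.
def implies(p, q):
    return (not p) or q

table = {(p, q): implies(p, q) for p in (True, False) for q in (True, False)}
cheated_cases = [(p, q) for (p, q), ok in table.items() if not ok]
```

Only the case "paid a dollar, got no soda" comes out false; both rows with a false antecedent are true, matching the vending-machine intuition.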
|
246,808 | <p>Trying to solve the following PDE with BC <code>T==1</code> on a spherical cap of a unit sphere and <code>T==0</code> at infinity (approximated as <code>r==(x^2 + y^2 + z^2)^0.5==40^0.5</code>), with the flux over the remaining surfaces taken to be zero (only half of the domain has been specified for symmetry reasons):</p>
<pre><code>p = 0.2;
Pe = 20;
<< NDSolve`FEM`
boundaries = {-x^2 - y^2 - z^2 + 1, x^2 + y^2 + z^2 - 40, -y,
z - p + 1};
\[CapitalOmega] =
ToElementMesh[
ImplicitRegion[And @@ (# <= 0 & /@ boundaries), {x, y, z}]];
Show[\[CapitalOmega][
"Wireframe"["MeshElement" -> "MeshElements", Boxed -> True]],
Axes -> True, AxesLabel -> {"x", "y", "z"},
PlotRange -> {{-7, 7}, {-0.5, 7}, {-7, 1}}]
Show[\[CapitalOmega]["Wireframe"], Axes -> True,
AxesLabel -> {"x", "y", "z"},
PlotRange -> {{-7, 7}, {-0.5, 7}, {-7, 1}}]
</code></pre>
<p><a href="https://i.stack.imgur.com/VrfMr.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/VrfMr.gif" alt="dominio" /></a></p>
<p>The free surface is located at <code>z==-0.8</code> in the chosen reference system.
The differential equation is:</p>
<pre><code>sol = NDSolveValue[{D[T[x, y, z], x] ==
1/Pe Laplacian[T[x, y, z], {x, y, z}], {DirichletCondition[
T[x, y, z] == 1., boundaries[[1]] == 0.],
DirichletCondition[T[x, y, z] == 0., boundaries[[2]] == 0.]}},
T, {x, y, z} \[Element] \[CapitalOmega]]
</code></pre>
<p>I noticed a non-smooth behavior of the solution <code>sol</code>, as confirmed by the density plots:</p>
<pre><code>z1 = -0.8;
DensityPlot[sol[x, y, z1], {x, -4, 4}, {y, 0, 2}, PlotRange -> All,
PlotPoints -> 100, AspectRatio -> 1/2]
DensityPlot[sol[x, 0, z], {x, -4, 4}, {z, -0.8, -2}, PlotRange -> All,
PlotPoints -> 100, AspectRatio -> 1/2]
</code></pre>
<p><a href="https://i.stack.imgur.com/S4c63.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/S4c63.gif" alt="fig1" /></a>
<a href="https://i.stack.imgur.com/5P7Dm.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/5P7Dm.gif" alt="fig2" /></a></p>
<p>and by some plots of the solution</p>
<pre><code>Plot[sol[x, 0, -0.8], {x, 0.6, 6.1}, Frame -> True,
PlotRange -> {{-0.1, 7}, {-0.1, 1.2}}]
Plot[sol[x, 0, -0.8], {x, -6.1, -0.6}, Frame -> True,
PlotRange -> {{-7, 0.1}, {-0.1, 1.2}}]
</code></pre>
<p><a href="https://i.stack.imgur.com/NRs8c.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/NRs8c.gif" alt="fig3" /></a>
<a href="https://i.stack.imgur.com/Sw4XW.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/Sw4XW.gif" alt="fig4" /></a></p>
<p>The derivatives of the solution show an even worse behavior. I report here as example just the derivative with respect to <code>x</code>:</p>
<pre><code>Dr[x_, y_, z_] = D[sol[x, y, z], x]
Plot[Dr[x, 0, -0.8], {x, 0.6, 8}, Frame -> True]
Plot[Dr[x, 0, -0.8], {x, -8, -0.6}, Frame -> True]
</code></pre>
<p><a href="https://i.stack.imgur.com/I0bTo.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/I0bTo.gif" alt="fig5" /></a>
<a href="https://i.stack.imgur.com/6EKBK.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/6EKBK.gif" alt="fig6" /></a></p>
<p>I am interested in the derivative because I am trying to calculate the total flux of the gradient of <code>sol</code> through the two portions of the spherical cap, <code>SC1</code> and <code>SC2</code>, which border the domain. As is known, this is proportional to the heat flow through <code>SC1</code> and <code>SC2</code>. The definition of <code>SC1</code> and <code>SC2</code> is:</p>
<pre><code>SC1 = ImplicitRegion[
x^2 + y^2 + z^2 == 1 && z <= p - 1 && y >= 0 && x >= 0, {x, y, z}];
SC2 = ImplicitRegion[
x^2 + y^2 + z^2 == 1 && z <= p - 1 && y >= 0 && x <= 0, {x, y, z}];
Show[DiscretizeRegion[SC1, {{-5, 5}, {-5, 5}, {-3, 3}},
MaxCellMeasure -> 0.0001], Axes -> True,
AxesLabel -> {"x", "y", "z"},
PlotRange -> {{-1, 1}, {-0.2, 1}, {-0.8, -1}}]
Show[DiscretizeRegion[SC2, {{-5, 5}, {-5, 5}, {-3, 3}},
MaxCellMeasure -> 0.0001], Axes -> True,
AxesLabel -> {"x", "y", "z"},
PlotRange -> {{-1, 1}, {-0.2, 1}, {-0.8, -1}}]
</code></pre>
<p>and the heat flow through <code>SC1</code> and <code>SC2</code> should be:</p>
<pre><code>NIntegrate[#, {x, y, z} \[Element] SC1] & /@ (Grad[
sol[x, y, z], {x, y, z}].{x, y, z})
NIntegrate[#, {x, y, z} \[Element] SC2] & /@ (Grad[
sol[x, y, z], {x, y, z}].{x, y, z})
</code></pre>
<p>Unfortunately, I get a lot of errors from the last calculation; I don't know whether this happens because of the non-smooth behavior of the solution and of the derivative, or because of other mistakes I am making. Thank you in advance.</p>
| Tim Laska | 61,809 | <p>The basic problem appears to be a convective-diffusive heat transfer problem of X-directed fluid flow across a heated spherical cap tip. To study this type of problem, it probably is easier to construct a virtual cuboid wind tunnel. When simulating virtual wind tunnels, the upstream section is typically much shorter than the downstream wake region, so there really is not any spherical symmetry.</p>
<p>The following workflow will show how to construct a virtual wind tunnel and add refinement zones so that gradients may be captured near the object of interest without blowing up the total model size.</p>
<h1>Virtual wind tunnel construction with refinement zones.</h1>
<p>As discussed <a href="https://mathematica.stackexchange.com/a/173762/61809">here</a>, a <code>MeshRefinementFunction</code> will not necessarily refine the surface mesh in 3D. The suggested workaround was to use <a href="https://reference.wolfram.com/language/ref/BoundaryDiscretizeRegion.html" rel="nofollow noreferrer"><code>BoundaryDiscretizeRegion</code></a> to obtain a finely discretized surface mesh on the input geometry before applying the <code>MeshRefinementFunction</code>.</p>
<p>The following workflow creates a refinement region with a finely discretized tip and a less refined wind-tunnel domain:</p>
<pre><code><< NDSolve`FEM`
Pe = 15;
tip = BoundaryDiscretizeRegion[Ball[], MaxCellMeasure -> .00125,
Axes -> True, AxesLabel -> {"X", "Y", "Z"}];
refCuboid =
BoundaryDiscretizeRegion[Cuboid[{-1.5, 0, -1.5}, {3.5, 1.5, -0.8}],
MaxCellMeasure -> .01, Axes -> True, AxesLabel -> {"X", "Y", "Z"}];
reftip = RegionDifference[refCuboid, tip, Axes -> True,
AxesLabel -> {"X", "Y", "Z"}]
domCuboid =
BoundaryDiscretizeRegion[Cuboid[{-2.5, 0, -3.8}, {7.5, 4, -0.8}],
Axes -> True, AxesLabel -> {"X", "Y", "Z"}];
domref = RegionDifference[domCuboid, refCuboid, Axes -> True,
AxesLabel -> {"X", "Y", "Z"}]
</code></pre>
<p><a href="https://i.stack.imgur.com/1fkuK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1fkuK.png" alt="Basic wind tunnel shapes" /></a></p>
<h2>Create volume mesh with refinement zones</h2>
<p>The following workflow creates a refinement zone with a high level of surface discretization at the tip. The total number of elements is about 134,000, which does not take too long to solve.</p>
<pre><code>(* Create Mesh Refinement Function *)
mrf = With[{rmf = RegionMember[reftip]},
Function[{vertices, volume},
Block[{x, y, z}, {x, y, z} = Mean[vertices];
If[rmf[{x, y, z}], volume > (0.07/1.5)^3, volume > (0.3)^3]]]];
(* Create and Display Volumetric Mesh *)
mesh = ToElementMesh[RegionUnion[domref, reftip],
MeshQualityGoal -> "Maximal", "MeshElementConstraint" -> 40,
"MaxBoundaryCellMeasure" -> {"Length" -> .1},
MeshRefinementFunction -> mrf]
mesh["Wireframe"[
"MeshElement" -> "MeshElements",
"ElementMeshDirective" -> Directive[EdgeForm[Black]],
PlotRange -> {{-2.5`, 7.5`}, {0.2,
4.`}, {-3.8`, -0.7999999999999998`}}]]
groups = mesh["BoundaryElementMarkerUnion"];
temp = Most[Range[0, 1, 1/(Length[groups])]];
colors = ColorData["BrightBands"][#] & /@ temp
mesh["Wireframe"["MeshElementStyle" -> FaceForm /@ colors]]
</code></pre>
<p><a href="https://i.stack.imgur.com/gKeYS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gKeYS.png" alt="Mesh" /></a></p>
<h1>Solution</h1>
<p>We now solve and compare our solution on the refined mesh with the OP's mesh.</p>
<pre><code>sol = NDSolveValue[{D[T[x, y, z], x] ==
1/Pe Laplacian[T[x, y, z], {x, y, z}], {DirichletCondition[
T[x, y, z] == 1., ElementMarker == 3],
DirichletCondition[T[x, y, z] == 0., ElementMarker == 4]}},
T, {x, y, z} ∈ mesh];
z1 = -0.8;
DensityPlot[sol[x, y, z1], {x, -4, 4}, {y, 0, 2}, PlotRange -> All,
PlotPoints -> 100, AspectRatio -> 1/2]
DensityPlot[sol[x, 0, z], {x, -4, 4}, {z, -0.8, -2}, PlotRange -> All,
PlotPoints -> 100, AspectRatio -> 1/2]
</code></pre>
<p><a href="https://i.stack.imgur.com/SJ6Q4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SJ6Q4.png" alt="Density plots" /></a></p>
<p>Visually, the solution appears to be much smoother than the images shown in the OP.</p>
<h2>Other plots</h2>
<p>The other plots appear much smoother with the refined mesh.</p>
<pre><code>Plot[sol[x, 0, -0.8], {x, 0.6, 6.1}, Frame -> True,
PlotRange -> {{-0.1, 7}, {-0.1, 1.2}}]
Plot[sol[x, 0, -0.8], {x, -6.1, -0.6}, Frame -> True,
PlotRange -> {{-7, 0.1}, {-0.1, 1.2}}]
Dr[x_, y_, z_] = D[sol[x, y, z], x];
Plot[Dr[x, 0, -0.8], {x, 0.6, 8}, Frame -> True]
Plot[Dr[x, 0, -0.8], {x, -8, -0.6}, Frame -> True]
</code></pre>
<p><a href="https://i.stack.imgur.com/9PimR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9PimR.png" alt="Other plots" /></a></p>
<h1>Integration of total flux</h1>
<h2>Using <a href="https://reference.wolfram.com/language/ref/NIntegrate.html" rel="nofollow noreferrer"><code>NIntegrate</code></a></h2>
<p>The integration strategy given in the OP does run, albeit quite slowly due to convergence issues. When I evaluated the OP code, I obtained:</p>
<pre><code>NIntegrate[#, {x, y, z} ∈ SC1] & /@ (Grad[
sol[x, y, z], {x, y, z}] . {x, y, z})
NIntegrate[#, {x, y, z} ∈ SC2] & /@ (Grad[
sol[x, y, z], {x, y, z}] . {x, y, z})
(* -0.95414 *)
(* -2.96035 *)
</code></pre>
<h2>Summing up the discretized data</h2>
<p>Using a variety of Mathematica functions, we can extract the normals, areas, and thermal gradients of each triangle on the mesh corresponding to the spherical cap. This should be enough information to estimate an integrated flux.</p>
<p>The following workflow shows how to extract the necessary information for the left and right-hand sides of the spherical cap.</p>
<pre><code>(*Element info shortcuts*)
ebi = ElementIncidents[#["BoundaryElements"]][[1]] &;
ebm = ElementMarkers[#["BoundaryElements"]][[1]] &;
ebn = #["BoundaryNormals"][[1]] &;
ei = ElementIncidents[#["MeshElements"]][[1]] &;
em = ElementMarkers[#["MeshElements"]][[1]] &;
epi = ElementIncidents[#["PointElements"]][[1]] &;
epm = Flatten@ElementMarkers[#["PointElements"]] &;
(*extract boundary mesh from element mesh*)
bmesh = ToBoundaryMesh[mesh];
bcrd = bmesh["Coordinates"];
bi = ebi[bmesh];(*boundary element incidents*)
bm = ebm[bmesh];(*boundary element markers*)
bn = ebn[bmesh];(*boundary normals*)
(*find markers corresponding to the spherical cap*)
mrk3pos = Flatten@Position[bm, 3, 1];
(*generate necessary info to estimate surface integral*)
bn3 = bn[[mrk3pos]];
polys = Map[Polygon,
GetElementCoordinates[bcrd, #] & /@ bi[[mrk3pos]]];
area3 = Area /@ polys;
center3 = Map[Mean, GetElementCoordinates[bcrd, #] & /@ bi[[mrk3pos]]];
(*find positions of left and right side of spherical cap*)
posXids = Position[center3[[All, 1]], _?(# >= 0 &), 1] // Flatten;
negXids = Complement[Range[Length[mrk3pos]], posXids];
Show[{Graphics3D[{Red, polys[[posXids]]}],
Graphics3D[{Blue, polys[[negXids]]}]}]
</code></pre>
<p><a href="https://i.stack.imgur.com/o9tCF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o9tCF.png" alt="Left and right-hand side of spherical cap" /></a></p>
<p>There is a bit of jaggedness at the seam of the left and right-hand sides. Hopefully, the errors will average out. We could have put a seam at the interface with more elaborate model construction. Now, we can estimate the fluxes using the following:</p>
<pre><code>gradT[x_, y_, z_] = {Derivative[1, 0, 0][sol][x, y, z],
Derivative[0, 1, 0][sol][x, y, z],
Derivative[0, 0, 1][sol][x, y, z]};
f = NDSolve`FEM`MapThreadDot[(gradT @@@ center3[[#]]), bn3[[#]]] .
area3[[#]] &;
f[posXids]
f[negXids]
f[posXids~Join~negXids]
(* -0.952311 *)
(* -2.96147 *)
(* -3.91378 *)
</code></pre>
<p>These results agree with the OP integral formulation quite well and are significantly faster.</p>
<h1>Comparison to another code</h1>
<p>When possible, it is always good to compare the <em>Mathematica</em> FEM results with another FEM code. In this case, I will show that the Mathematica results compare favorably with the FEM code COMSOL. First, I will create a <a href="https://reference.wolfram.com/language/ref/SliceContourPlot3D.html" rel="nofollow noreferrer"><code>SliceContourPlot3D</code></a> for comparison purposes.</p>
<pre><code>surf = {{x^2 + y^2 + z^2 ==
1.001^2}, {"XStackedPlanes", {7.5}}, {"YStackedPlanes", {0}}, \
{"ZStackedPlanes", {-0.8}}, {"BackPlanes"}};
SliceContourPlot3D[sol[x, y, z], surf, {x, y, z} ∈ mesh,
Contours -> 11, PlotPoints -> 100, BoxRatios -> Automatic,
ColorFunction -> "ThermometerColors", PlotRange -> {-0.001, 1},
PlotLegends -> Automatic]
</code></pre>
<p><a href="https://i.stack.imgur.com/6BCSl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6BCSl.png" alt="Mathematica slice contour plot" /></a></p>
<p>Here is the comparable COMSOL plot:</p>
<p><a href="https://i.stack.imgur.com/2MXTx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2MXTx.png" alt="COMSOL plot" /></a></p>
<p>As you can see, the visualizations compare favorably.</p>
<p>We can also see that the integrations of gradT on the left and right hemispherical cap surfaces agree to within about 2%.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Application</th>
<th style="text-align: left;">Left</th>
<th style="text-align: left;">Right</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;"><em>Mathematica</em></td>
<td style="text-align: left;">-2.96147</td>
<td style="text-align: left;">-0.95414</td>
</tr>
<tr>
<td style="text-align: left;">COMSOL</td>
<td style="text-align: left;">-3.0255</td>
<td style="text-align: left;">-0.952755</td>
</tr>
<tr>
<td style="text-align: left;">%Diff</td>
<td style="text-align: left;">2.16226</td>
<td style="text-align: left;">0.0465888</td>
</tr>
</tbody>
</table>
</div><h1>Update in response to a comment about low Péclet numbers</h1>
<p>The Péclet number is defined by:</p>
<p><span class="math-container">$$Pe=\frac{Advective\ Transport\ Rate}{Diffusive\ Transport\ Rate}$$</span></p>
<p>So, as the Péclet number goes down, the advective component becomes less significant, indicating that a "wind tunnel" is perhaps not the best model to study this problem. Although you may want to consider modifying the geometry, you can substantially mitigate the effects of low Péclet numbers by changing the default wall conditions to <code>DirichletCondition</code>s. Here is a logarithmic sweep of Pe numbers from 0.01 to 100 (note that this process is slow):</p>
<pre><code>pfun = ParametricNDSolveValue[{D[T[x, y, z], x] ==
1/P Laplacian[T[x, y, z], {x, y, z}], {DirichletCondition[
T[x, y, z] == 1., ElementMarker == 3],
DirichletCondition[T[x, y, z] == 0.,
Or @@ (ElementMarker == # & /@ {4, 6, 7})]}},
T, {x, y, z} ∈ mesh, {P}];
surf = {{x^2 + y^2 + z^2 ==
1.001^2}, {"XStackedPlanes", {7.5}}, {"YStackedPlanes", {0}}, \
{"ZStackedPlanes", {-0.8}}, {"BackPlanes"}};
frames = SliceContourPlot3D[pfun[#][x, y, z],
surf, {x, y, z} ∈ mesh, Contours -> 11,
PlotPoints -> 100, BoxRatios -> Automatic,
ColorFunction -> "ThermometerColors", PlotRange -> {-0.001, 1},
PlotLegends -> Automatic, PlotLabel -> N@#] & /@ (10^# & /@
Subdivide[-2, 2, 20]);
ListAnimate[frames]
</code></pre>
<p><a href="https://i.stack.imgur.com/0wb4I.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0wb4I.gif" alt="Parametric sweep of Péclet numbers" /></a></p>
|
1,897,538 | <p>Next week I will start teaching Calculus for the first time. I am preparing my notes, and, as a pure mathematician, I cannot come up with a good real-world example of the following.</p>
<p>Are there good examples of
\begin{equation}
\lim_{x \to c} f(x) \neq f(c),
\end{equation}
or of cases when $c$ is not in the domain of $f(x)$?</p>
<p>The only thing that came to my mind is the study of physics phenomena at temperature $T=0 \,\mathrm{K}$, but I am not very satisfied with it.</p>
<p>Any ideas are more than welcome!</p>
<p><strong>Warning</strong></p>
<p>The more approachable the examples are (e.g. to a freshman college student), the more grateful I will be! In particular, I would like the examples to come from the natural or social sciences. Indeed, in a first class in Calculus the importance of indicator functions, etc., is not clear.</p>
<p><strong>Edit</strong></p>
<p>As B. Goddard pointed out, a very subtle point in calculus is the one of removable singularities. If possible, I would love to have some example of this phenomenon. Indeed, most of the examples from physics are of functions with poles or indeterminacy in the domain.</p>
| B. Goddard | 362,009 | <p>Well, it's your chance to make a point about removable singularities. Students tend to think that the functions $x+3$ and $(x^2-9)/(x-3)$ are the same, but there is that one point where they're different. "Meh! What's one little point among so many?" (I wish) they would ask. All of differential calculus is about that one point, I (would) answer. The derivative is exactly this sort of limit.
It's not really a "real world" example, but it's pretty darn concrete.</p>
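That one-point difference can be made concrete numerically; a short Python sketch (illustrative only, not part of the original answer):

```python
# g(x) = (x^2 - 9)/(x - 3) agrees with x + 3 everywhere except x = 3,
# where g is undefined; yet the limit as x -> 3 is still 6.
def g(x):
    return (x * x - 9) / (x - 3)

# Approach x = 3 from the right; g(3 + h) equals 6 + h exactly in real arithmetic.
samples = [3 + 10.0**(-k) for k in range(1, 8)]
values = [g(x) for x in samples]
gap = max(abs(v - 6.0) for v in values)
```

Evaluating `g(3)` itself raises a division-by-zero error, while the nearby values close in on 6 — exactly the removable-singularity picture, and the same kind of limit that defines the derivative.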
|
1,897,538 | <p>Next week I will start teaching Calculus for the first time. I am preparing my notes, and, as a pure mathematician, I cannot come up with a good real-world example of the following.</p>
<p>Are there good examples of
\begin{equation}
\lim_{x \to c} f(x) \neq f(c),
\end{equation}
or of cases when $c$ is not in the domain of $f(x)$?</p>
<p>The only thing that came to my mind is the study of physics phenomena at temperature $T=0 \,\mathrm{K}$, but I am not very satisfied with it.</p>
<p>Any ideas are more than welcome!</p>
<p><strong>Warning</strong></p>
<p>The more approachable the examples are (e.g. to a freshman college student), the more grateful I will be! In particular, I would like the examples to come from the natural or social sciences. Indeed, in a first class in Calculus the importance of indicator functions, etc., is not clear.</p>
<p><strong>Edit</strong></p>
<p>As B. Goddard pointed out, a very subtle point in calculus is the one of removable singularities. If possible, I would love to have some example of this phenomenon. Indeed, most of the examples from physics are of functions with poles or indeterminacy in the domain.</p>
| Lubin | 17,760 | <p>A simpler version of the example of @AOrtiz:<br>
Consider a glass of water, and the function $\Delta(h)=$ density at a point $h$ cm above the surface, in gm/ml. It’s $1$ when $h<0$, $0$ (or a very small $\varepsilon$) when $h>0$, and I’m sure that you’ll agree that any reasonable definition of density at a point will return $\Delta(0)=1/2$.</p>
<p>I’ve never used this example in teaching, but it seems to me that it would be very thought-provoking.</p>
|
588,214 | <p>What is the meaning of $[G:C_G(x)]$ in group theory? Is this equivalent to $\frac{|G|}{|Z_G(x)|}$, or to $|Z_G(x)|$?</p>
| Nicky Hekster | 9,605 | <p>You might want to know that $Z(G) \subseteq C_G(x)$ for all $x \in G$. In fact $Z(G) = \bigcap_{x \in G} C_G(x)$.</p>
|
468,487 | <p>I have calculated the likelihood of an event to be $1$ in $1.07 \times 10^{2867}$.</p>
<p>I'm looking for a way to describe to a layperson how unlikely this event is to occur, but the number is so mind-bogglingly large that I can't find a way to put it into words.</p>
<p>Any suggestions would be appreciated</p>
| Hagen von Eitzen | 39,174 | <p>It is the probability of the proverbial monkey hacking a typewriter and producing the first page of Hamlet.</p>
|
907,851 | <p>I am really new to math. Why is $-2^2 = -4 $ and $(-2)^2 = 4 $? </p>
| Ahaan S. Rungta | 85,039 | <p>$ -2^2 = - \left( 2^2 \right) = -4 $, whereas $ \left( -2 \right)^2 = \left( -2 \right) \cdot \left( -2 \right) = 4 $, because the negatives cancel. </p>
|
907,851 | <p>I am really new to math. Why is $-2^2 = -4 $ and $(-2)^2 = 4 $? </p>
| John Joy | 140,156 | <p>By convention brackets are evaluated first, then exponentiation, division/multiplication, and finally addition/subtraction. Think of the unary negation operator as a shorthand for "multiply by -1". These rules are usually remembered by students with the BEDMAS acronym.</p>
<p>I prefer to think of the rules as those such that the number of brackets are minimized when writing out a polynomial. For example, consider
$$x^4+5x^3-3x^2+2x+6$$
Were multiplication to have a higher precedence than exponentiation, the expression would look like this
$$x^4+5(x^3)-3(x^2)+2x+6$$
Were the expression just evaluated left to right (no precedence rules), the expression would look like this
$$(x^4)+(5(x^3))-(3(x^2))+(2x)+6$$
Obviously the first example is the easiest to read.</p>
<p>Another approach is to evaluate the "strongest" operations first. Exponentiation is defined as repeated multiplication, so exponentiation is higher on the pecking order. Similarly, multiplication and divisions are defined as repeated additions (or subtractions), so multiplication and divisions would be evaluated before the additions/subtractions. If we need to change the order of operations, we use brackets as follows.
$$x^3-4x^2-11x+30 = (x-2)(x+3)(x-5)$$
Notice how the brackets force us to evaluate the additions and subtractions before the multiplications.</p>
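<p>These precedence conventions are mirrored in most programming languages. For instance, in Python the exponentiation operator binds tighter than unary minus, which is exactly the situation in the original question:</p>

```python
# Exponentiation binds tighter than unary minus, so -2**2 == -(2**2)
print(-2**2)    # -4
print((-2)**2)  # 4

# Brackets override the default precedence, exactly as in the polynomial example
x = 3
assert x**4 + 5*x**3 - 3*x**2 + 2*x + 6 == (x**4) + (5*(x**3)) - (3*(x**2)) + (2*x) + 6
```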
|
939,212 | <p>Apologies if this isn't at the level of questions expected here!</p>
<p>I've got two simultaneous equations to solve.</p>
<p>(Equation 1): $ x y = 4 $</p>
<p>(Equation 2): $ x + y = 2 $</p>
<p>They produce the following curves:</p>
<p><img src="https://i.stack.imgur.com/dMrNi.png" alt="enter image description here"></p>
<p><strong>Question:</strong> Whilst it's graphically obvious that they do not make contact, what is the <em>algebraic</em> indicator that these two lines do not meet? How do you prove that?</p>
| Alice Ryhl | 132,791 | <p>So the curves are</p>
<p>$$\frac4x\quad\text{and}\quad2-x$$</p>
<p>To prove that these curves do not meet, we must show that</p>
<p>$$\frac4x=2-x$$</p>
<p>has no solutions, to solve this we multiply by $x$</p>
<p>$$4=2x-x^2$$</p>
<p>Then multiply by $-1$ and swap the two sides of the equality</p>
<p>$$x^2-2x=-4$$</p>
<p>Then I'm going to add one to each side</p>
<p>$$x^2-2x+1=-3$$</p>
<p>The left hand side can be rewritten as</p>
<p>$$(x-1)^2=-3$$</p>
<p>No number squared can be negative, so the equation has no solutions.</p>
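<p>As a quick numeric cross-check of this conclusion (a small sketch of mine, not part of the original answer), it is enough to look at the discriminant of the equivalent quadratic $x^2-2x+4=0$:</p>

```python
# x**2 - 2*x + 4 = 0 is equivalent to 4/x = 2 - x; a real solution exists
# iff the discriminant b**2 - 4*a*c is nonnegative.
a, b, c = 1, -2, 4
disc = b * b - 4 * a * c
print(disc)      # -12 < 0, so the curves never meet
assert disc < 0
```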
|
939,212 | <p>Apologies if this isn't at the level of questions expected here!</p>
<p>I've got two simultaneous equations to solve.</p>
<p>(Equation 1): $ x y = 4 $</p>
<p>(Equation 2): $ x + y = 2 $</p>
<p>They produce the following curves:</p>
<p><img src="https://i.stack.imgur.com/dMrNi.png" alt="enter image description here"></p>
<p><strong>Question:</strong> Whilst it's graphically obvious that they do not make contact, what is the <em>algebraic</em> indicator that these two lines do not meet? How do you prove that?</p>
| amWhy | 9,003 | <p>If the two curves intersect (meet), they do so wherever they have a point in common. This will only happen when $y_1=y_2$, i.e., when $$\frac 4x = 2-x \iff 2x - x^2 = 4 \iff x^2 -2x + 4 = 0$$ for some real $x$.</p>
<p>Use the quadratic formula to show that there is no real solution to this equation. Indeed, you need only check the discriminant of the quadratic $$\underbrace{4 - 16}_{b^2 - 4ac} = -12\lt 0 $$ to see that its square root is undefined in the reals, and hence that there is no real $x$ making the equation true. </p>
<p>I.e., the lines cannot intersect.</p>
|
570,740 | <p>Hi there I'm having some trouble with the following problem:</p>
<p>I have a $3\times3$ symmetric matrix
$$
A=\pmatrix{1+t&1&1\\ 1&1+t&1\\ 1&1&1+t}.
$$
I am trying to determine the values of $t$ for which the vector $b = (1,t,t^2)^\top$ (this is a column vector) is in the column space of $A$.</p>
<p>I think I'm fairly aware of how to go about it: forming the augmented matrix $[A|b]$ and basically using row ops to find a solution, from which I could solve for the value(s) of $t$. But I've been trying this and have had no luck. Am I missing something?</p>
<p>Thank you</p>
| Community | -1 | <p>Swapping two columns of A will not change its column space. Consider the matrix
$$
A' = \left( \begin{array}{ccc}
1 & 1 & 1+t \\
1 & 1+t & 1 \\
1+t & 1 & 1 \end{array} \right),
$$
which is obtained by interchanging the first and last column of A. Forming the augmented matrix $[A', b]$ and subtracting the first row from the second row and the first row times $1+t$ from the third row yields
$$
\left( \begin{array}{ccc|c}
1 & 1 & 1+t & 1 \\
0 & t & -t & t-1 \\
0 & -t & -t^2-2t & t^2-t-1
\end{array}\right).
$$
Notice that if $t=0$, the second row will make the system inconsistent, so $t \ne 0$. Adding the second row to the third and then dividing the second row by $t$ gives
$$
\left( \begin{array}{ccc|c}
1 & 1 & 1+t & 1 \\
0 & 1 & -1 & 1-\frac{1}{t} \\
0 & 0 & -t^2-3t & t^2-2
\end{array} \right).
$$
Now, if $t=-3,0$, the third row will make the system inconsistent, so $t \ne -3,0$. Thus, we can divide the third row by $-t^2 -3t$ to obtain the echelon form matrix
$$
R = \left(\begin{array}{ccc|c}
1 & 1 & 1+t & 1 \\
0 & 1 & -1 & 1-\frac{1}{t} \\
0 & 0 & 1 & \frac{t^2-2}{-t^2-3t}
\end{array} \right).
$$
Thus, $[1, t, t^2]^T$ is in the column space of A whenever $t \ne 0$ and $t \ne -3$.</p>
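<p>As a numerical spot-check of this conclusion (my own sketch, not part of the original solution): $b$ lies in the column space of $A$ exactly when appending $b$ to $A$ does not raise the rank, which should hold precisely for $t \notin \{0,-3\}$.</p>

```python
import numpy as np

def in_column_space(t, tol=1e-9):
    """True iff b = (1, t, t^2) lies in the column space of A(t)."""
    A = np.array([[1 + t, 1, 1],
                  [1, 1 + t, 1],
                  [1, 1, 1 + t]], dtype=float)
    b = np.array([[1.0], [t], [t**2]])
    # b is in col(A) iff appending it does not increase the rank
    return np.linalg.matrix_rank(np.hstack([A, b]), tol=tol) == np.linalg.matrix_rank(A, tol=tol)

print(in_column_space(2.0))   # True
print(in_column_space(0.0))   # False
print(in_column_space(-3.0))  # False
```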
|
1,979,876 | <p>Currently I started studying about ray-casting when I came across this following problem based on ray-triangle intersection. The problem was:</p>
<p>You are provided with a triangle with vertices <strong>(x1,y1,z1)</strong>, <strong>(x2,y2,z2)</strong> and <strong>(x3,y3,z3)</strong>. A ray with origin <strong>(a1,b1,c1)</strong> and direction <strong>(a2,b2,c2)</strong> is also given. Your task is to find: </p>
<p>1) Whether or not the ray intersects the triangle</p>
<p>2) If the ray intersects the triangle, what is the point of intersection, and what is the distance of that point from the origin of the ray?</p>
<p>Examples:</p>
<p>1) For <strong>(x1,y1,z1)</strong>=<strong>(-2,2,6)</strong>, <strong>(x2,y2,z2)</strong>=<strong>(2,2,6)</strong>, <strong>(x3,y3,z3)</strong>=<strong>(0,-4,6)</strong> , <strong>(a1,b1,c1)</strong>=<strong>(1,0,0)</strong>,<strong>(a2,b2,c2)</strong>=<strong>(-0.2,0,1)</strong> the answer is: coordinates of intersection:<strong>(-0.2,0,6)</strong> and the distance of the origin of the ray from the point of intersection is <strong>6.12</strong></p>
<p>2) For <strong>(x1,y1,z1)</strong>=<strong>(-2,2,1)</strong>, <strong>(x2,y2,z2)</strong>=<strong>(2,2,1)</strong>, <strong>(x3,y3,z3)</strong>=<strong>(0,-4,1)</strong> , <strong>(a1,b1,c1)</strong>=<strong>(0,0,0)</strong>,<strong>(a2,b2,c2)</strong>=<strong>(0,0,1)</strong> the answer is: coordinates of intersection:<strong>(0,0,1)</strong> and the distance of the origin of the ray from the point of intersection is <strong>1</strong> </p>
<p>3) For <strong>(x1,y1,z1)</strong>=<strong>(-10,-2.3,0)</strong>, <strong>(x2,y2,z2)</strong>=<strong>(4.4,20.3,9.5)</strong>, <strong>(x3,y3,z3)</strong>=<strong>(9.8,-10,0)</strong> , <strong>(a1,b1,c1)</strong>=<strong>(0,0,0)</strong>,<strong>(a2,b2,c2)</strong>=<strong>(0.68,-1.14,1.82)</strong> the answer is: coordinates of intersection:<strong>(0.67, -1.12, 1.79)</strong> and the distance of the origin of the ray from the point of intersection is <strong>2.22</strong> </p>
<p>All I need is to understand the general equations to solve this question. I need to know how the equations are formed and how the problem is solved. This seems a pretty hard question for me.</p>
| Nominal Animal | 318,422 | <p>The way this is done in most raytracers and raycasters is actually quite simple. The intersection distance (from ray origin) is calculated first, and it is typically calculated for several objects for the same ray, so that the objects can be handled in order of increasing distance (unless fully reflected). Then, the intersection point on the relevant surface(s) is parametrized if needed, for example, for shading purposes.</p>
<p>The direction vector for the ray is always a unit vector (length 1), so that it can be parametrized as
$$\vec{p}(t) = \vec{p}_0 + t \hat{d}$$
with $\lVert\hat{d}\rVert = 1$. In your case,
$$\vec{p}_0 = (a_1, \; b_1, \; c_1)$$
and
$$\hat{d} = \left(\frac{a_2}{\sqrt{a_2^2 + b_2^2 + c_2^2}}, \; \frac{b_2}{\sqrt{a_2^2 + b_2^2 + c_2^2}}, \; \frac{c_2}{\sqrt{a_2^2 + b_2^2 + c_2^2}}\right)$$
If $\hat{d}$ were not a unit vector, then $t$ would be the distance in units of the length of the direction vector.</p>
<p>For each planar surface, the four components of the plane are stored separately; the equation for the plane being
$$x n_x + y n_y + z n_z = n_d$$
where $\hat{n} = (n_x, n_y, n_z)$ is the plane normal (typically pointing <em>outwards</em> for one-sided surfaces), preferably a unit vector to make the math simpler, and so that $n_d$ is the minimum distance from origin. In your case,
$$\vec{n} = (x_2 - x_1, y_2 - y_1, z_2 - z_1) \times (x_3 - x_1, y_3 - y_1, z_3 - z_1)$$
where $\times$ represents a vector cross product; the plane unit normal is then
$$\hat{n} = \frac{\vec{n}}{\lVert\vec{n}\rVert} = \frac{\vec{n}}{\sqrt{\vec{n} \cdot \vec{n}}}$$
and $n_d$ is the dot product between $\hat{n}$ and any point on the plane, usually calculated using one of
$$\begin{array}{rl}
n_d \; =& \hat{n} \cdot (x_1, y_1, z_1) \\
\; =& \hat{n} \cdot (x_2, y_2, z_2) \\
\; =& \hat{n} \cdot (x_3, y_3, z_3)
\end{array}$$
Note that with numerical calculations using floating-point values the three may not be exactly the same value due to rounding.</p>
<p>When $\vec{p}_0$, $\hat{d}$, $\hat{n}$, and $n_d$ are known, we can easily solve the (signed) distance $t$ at which the ray intersects the plane:
$$t = \frac{n_d - \vec{p}_0 \cdot \hat{n}}{\hat{d} \cdot \hat{n}}$$
Note that if $\hat{d}$ and $\hat{n}$ are perpendicular, the plane is parallel to the ray, and the denominator is zero. If $t \lt 0$, the intersection occurs "before" the ray starts, and if $t = 0$, at ray origin.</p>
<p>For $t \ge 0$, the point where the ray intersects the plane is
$$\vec{p}(t) = \vec{p}_0 + t \hat{d}$$</p>
<p>The next step is to determine whether the point is within the triangle or not. There are several different methods to do this, depending on whether you need the 2D coordinates for the intersection point on the plane (for shading purposes or similar), or if you only are interested whether the point is inside the triangle or not.</p>
<p>The typical method is to find the planar cartesian coordinates $(u,v)$ for the intersection point. Typically, we choose $(0,0)$ to be the vertex at the clockwise end of the longest edge, $(w,0)$ at the counterclockwise end of the longest edge, and $(g,h)$ to be the third vertex for a triangle, with $h \gt 0$. Also, $0 \le w \le g$. Below, I shall assume $\vec{v}_1$ is $(0,0)$, $\vec{v}_2$ is $(w,0)$, and $\vec{v}_3$ is $(g,h)$ -- but note that this assumes the edge between the first two is the longest one; relabel/renumber the vertices if necessary.</p>
<p>The planar unit vectors are also often precalculated for each plane, and stored separately:
$$\hat{e}_u = \frac{\vec{v}_2 - \vec{v}_1}{\left\lVert \vec{v}_2 - \vec{v}_1 \right\rVert}$$
The second unit vector must be perpendicular to the first, which can easily be done assuming the three vertex vectors are not collinear:
$$\vec{e}_v = (\vec{v}_3 - \vec{v}_1) - \hat{e}_u \left( \hat{e}_u \cdot (\vec{v}_3 - \vec{v}_1) \right)$$
and normalizing the result:
$$\hat{e}_v = \frac{\vec{e}_v}{\left\lVert\vec{e}_v\right\rVert}$$</p>
<p>The planar coordinates for the three vertices are then $(0,0)$ for $\vec{v}_1$, $(w,0)$ for $\vec{v}_2$ where
$$w = \left\lVert \vec{v}_2 - \vec{v}_1 \right\rVert = \hat{e}_u \cdot \left( \vec{v}_2 - \vec{v}_1 \right)$$
and $(g,h)$ for $\vec{v}_3$ where
$$\begin{cases}
g = \hat{e}_u \cdot \left( \vec{v}_3 - \vec{v}_1 \right ) \\
h = \hat{e}_v \cdot \left( \vec{v}_3 - \vec{v}_1 \right )
\end{cases}$$
Note that the slopes $k_1 = h/g$ and $k_2 = h/(w-g)$ may be useful later, and will not change unless the shape or size of the triangle changes.</p>
<p>Similarly, the coordinates $(u,v)$ for the intersection point $\vec{p}$ are
$$\begin{cases}
u = \hat{e}_u \cdot \left( \vec{p} - \vec{v}_1 \right ) \\
v = \hat{e}_v \cdot \left( \vec{p} - \vec{v}_1 \right )
\end{cases}$$</p>
<p>If the edge 12 is the longest one in the triangle, then $0 \le g \le w$, $0 \le h$, and it is trivial to check if $(u,v)$ is within the triangle:
$$\begin{array}{}
\text{if } u \lt 0 \text{ then } \vec{p} \text{ is outside } \\
\text{if } u \gt w \text{ then } \vec{p} \text{ is outside } \\
\text{if } v \lt 0 \text{ then } \vec{p} \text{ is outside } \\
\text{if } v \gt h \text{ then } \vec{p} \text{ is outside } \\
\text{if } u \le g \text{ and } v \gt u \frac{h}{g} \text{ then } \vec{p} \text{ is outside } \\
\text{if } u \ge g \text{ and } v \gt (w - u)\frac{h}{w - g} \text{ then } \vec{p} \text{ is outside } \\
\text{else } \vec{p} \text{ is inside }
\end{array}$$</p>
<p>For other planar polygons, and for generic texture support for triangles, you can instead store the 2D $(u,v)$ coordinates for each vertex, and rely on a 2D point-in-polygon test instead.</p>
<p>Although the above are pretty annoying to calculate by hand, if you write the functions needed to compute these in the general case, you'll find that the resulting code needs relatively few basic arithmetic operations per ray per triangle, especially if you precalculate the plane normal, plane distance from origin, the $u$ and $v$ planar unit normals, and the two slopes.</p>
<p>If you program in C, I recommend you start with types</p>
<pre><code>typedef struct {
double x;
double y;
double z;
} Vec3D;
static Vec3D vec3d(const double x, const double y, const double z)
{
Vec3D result = { x, y, z };
return result;
}
typedef struct {
Vec3D vertex[3];
Vec3D unit_u;
Vec3D unit_v;
double u1, u2, v2; /* w, g, h */
} Triangle;
Triangle triangle(Vec3D v1, Vec3D v2, Vec3D v3)
{
const double dd12 = (v2.x - v1.x)*(v2.x - v1.x)
+ (v2.y - v1.y)*(v2.y - v1.y)
+ (v2.z - v1.z)*(v2.z - v1.z);
const double dd23 = (v3.x - v2.x)*(v3.x - v2.x)
+ (v3.y - v2.y)*(v3.y - v2.y)
+ (v3.z - v2.z)*(v3.z - v2.z);
const double dd31 = (v1.x - v3.x)*(v1.x - v3.x)
+ (v1.y - v3.y)*(v1.y - v3.y)
+ (v1.z - v3.z)*(v1.z - v3.z);
Triangle result;
if (dd12 >= dd23 && dd12 >= dd31) {
result.vertex[0] = v1;
result.vertex[1] = v2;
        result.vertex[2] = v3;
} else
if (dd23 >= dd12 && dd23 >= dd31) {
result.vertex[0] = v2;
result.vertex[1] = v3;
result.vertex[2] = v1;
} else {
result.vertex[0] = v3;
result.vertex[1] = v1;
result.vertex[2] = v2;
}
/* TODO: precalculate rest of Triangle result.
result.vertex[0] to result.vertex[1]
is the longest edge in the triangle;
use result.vertex[0..2] instead of v1/v2/v3.
*/
return result;
}
</code></pre>
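<p>For comparison, here is a compact Python sketch using the Möller–Trumbore algorithm — a standard alternative to the plane-parametrization route above (the algorithm choice is mine, not the answer's) — checked against the examples from the question:</p>

```python
import math

def ray_triangle(orig, direc, v0, v1, v2, eps=1e-9):
    """Return (distance, point) or None; `direc` need not be normalized."""
    length = math.sqrt(sum(c * c for c in direc))
    d = tuple(c / length for c in direc)                 # unit direction
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    cross = lambda a, b: (a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0])
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(d, e2)
    det = dot(e1, h)
    if abs(det) < eps:               # ray parallel to the triangle's plane
        return None
    f = 1.0 / det
    s = sub(orig, v0)
    u = f * dot(s, h)                # first barycentric coordinate
    if u < 0 or u > 1:
        return None
    q = cross(s, e1)
    v = f * dot(d, q)                # second barycentric coordinate
    if v < 0 or u + v > 1:
        return None
    t = f * dot(e2, q)               # true distance because d is a unit vector
    if t < eps:
        return None
    return t, tuple(o + t * c for o, c in zip(orig, d))

# Example 2 from the question: expected point (0, 0, 1) at distance 1
print(ray_triangle((0, 0, 0), (0, 0, 1), (-2, 2, 1), (2, 2, 1), (0, -4, 1)))
```

<p>Running it on Example 1 from the question reproduces the stated intersection $(-0.2, 0, 6)$ at distance about $6.12$.</p>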
|
467,574 | <p>Using permutation or otherwise, prove that $\displaystyle \frac{(n^2)!}{(n!)^n}$ is an integer,where $n$ is a positive integer.</p>
<p>I have no idea how to prove this! I am not even able to start. Can you give some hints or the solution? Cheers!</p>
| walcher | 89,844 | <p>Let $H$ be the subgroup of $S_{n^2}$ consisting of all permutations $\sigma$ for which the following holds: if $kn \lt m \le (k+1)n$, then $kn \lt \sigma (m) \le (k+1)n.$ Then $H$ separately permutes the numbers $$1, 2, ..., n; n+1, ..., n+n=2n; 2n+1, ..., 3n; ...; kn+1, kn+2, ..., (k+1)n; ...; n(n-1), ..., n^2,$$ so $H$ is isomorphic to $(S_n)^n$. Hence $|H| = |(S_n)^n| = |S_n|^n = (n!)^n$, but by Lagrange's theorem $|H|$ divides $|S_{n^2}|=(n^2)!$</p>
|
467,574 | <p>Using permutation or otherwise, prove that $\displaystyle \frac{(n^2)!}{(n!)^n}$ is an integer,where $n$ is a positive integer.</p>
<p>I have no idea how to prove this! I am not even able to start. Can you give some hints or the solution? Cheers!</p>
| Christoph | 86,801 | <p>Looking up the sequence on <a href="http://oeis.org/A034841" rel="nofollow">OEIS</a> you notice this is the number of arrangements of $1, 2, 3, \ldots, n^2$ in an $n\times n$ matrix such that each row is increasing and thus is an integer.</p>
<p>You can verify this by using combinatorics to calculate the number of such arrangements. Given an empty $n\times n$ matrix and a bucket with the numbers $1, 2, 3, \ldots, n^2$, to fill each row of the matrix with increasing numbers you have to choose $n$ numbers from the bucket for each row of the matrix. So the number of such arrangements has to be
$\binom{n^2}{n} \binom{n(n-1)}{n} \binom{n(n-2)}{n} \cdots \binom{n}{n}$. When you write out the binomial coefficients in terms of factorials you notice that the denominator of each factor contains the numerator of the next factor, and the whole product simplifies to $\frac{(n^2)!}{(n!)^n}$.</p>
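<p>Both arguments are easy to sanity-check for small $n$: the quotient is an integer, and it equals the product of binomial coefficients above (an illustrative script):</p>

```python
from math import comb, factorial

for n in range(1, 7):
    q, r = divmod(factorial(n**2), factorial(n)**n)
    assert r == 0                       # the quotient is an integer
    prod = 1                            # binom(n^2, n) * binom(n^2 - n, n) * ...
    remaining = n * n
    for _ in range(n):
        prod *= comb(remaining, n)
        remaining -= n
    assert prod == q                    # matches the counting argument
print("verified for n = 1..6")
```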
|
1,290,111 | <p>How one can prove the following statement:</p>
<p>$k(n-1)<n^2-2n$ for all odd $n$ and $k<n$</p>
<p><em>Tried so far</em>: induction on $n$, graphing, and rewriting $n^2−2n$ as $(n−1)^2−1$.</p>
| NickC | 189,951 | <p>This isn't true.</p>
<p>Take $k=4$, $n=5$. We have $4(5-1)=16$ and $5^2-2\cdot 5=15$.</p>
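<p>A small brute-force search (illustrative only) shows that such counterexamples are plentiful:</p>

```python
# Search odd n and 0 < k < n for violations of k*(n-1) < n*n - 2*n
violations = [(k, n) for n in range(1, 20, 2)
              for k in range(1, n)
              if k * (n - 1) >= n * n - 2 * n]
print(violations[:4])   # [(2, 3), (4, 5), (6, 7), (8, 9)]
```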
|
2,338,123 | <p>Function $f(z)$ is an entire function such that $$|f(z)| \le |z^{n}|$$ for $z \in \mathbb{C}$ and some $n \in \mathbb{N}$.</p>
<p>Show that the singularities of the function $$\frac {f(z)}{z^{n}}$$ are removable. What can be implied about the function $f(z)$ if moreover $f(1) = i$? Draw a far-reaching conclusion.</p>
<p>My attempt: If the singularities of $\frac {f(z)}{z^{n}}$ are removable, it is entire (not sure, need help with the justification) and bounded, so it is constant by Liouville's theorem; the constant value of the function is $i$, hence $f(z)=iz^{n}$. </p>
<p>But what about the $n$ here, is it arbitrary? Could somebody help me prove the removability of the singularities and suggest if my attempt is going the right way?</p>
| Community | -1 | <p>Consider $\frac{f(z)}{z^n}$. Since $f$ is entire, we just need to show that the singularity at $0$ is removable. By Riemann's theorem (see <a href="https://en.wikipedia.org/wiki/Removable_singularity#Riemann.27s_theorem" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Removable_singularity#Riemann.27s_theorem</a>) this is equivalent to showing that</p>
<p>$$\lim_ {z \to 0}z\frac{f(z)}{z^n} = \lim_ {z \to 0}\frac{f(z)}{z^{n-1}} =0 $$
But this follows from your information that $|f(z)| \leq |z|^n$.</p>
<p>The rest of your proof is correct. As far as $n$ is concerned, it will be equal to your earlier $n$. (Prove this!)</p>
|
1,988,517 | <p>$$\binom{1}{0},\binom{1}{1},\binom{1}{2}$$
What does this mean, and how do I obtain a numerical value when trying to solve a proof or problem in this form? </p>
| sdd | 179,397 | <p>As you know, in general, $\binom{n}{k}$ is the number of ways $k$ objects can be selected out of $n$ objects (when we are not interested in the order of selection, but just in the set of elements that are chosen).</p>
<p>Respectively:</p>
<ul>
<li><p>$\binom{1}{0} = 1$ since there is only one way to select 0 elements out of 1 - just do not select anything. </p></li>
<li><p>$\binom{1}{1} = 1$ too, since we must just select the only element we have. We know also that $\binom{1}{1} = \binom{1}{0}$, because $1=1-0$.</p></li>
<li><p>$\binom{1}{2} = 0$ since we cannot select 2 elements out of 1, at all. In general, $\binom{n}{k} = 0$, if $k>n$ or $k<0$.</p></li>
</ul>
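<p>These conventions are exactly what Python's standard library implements in <code>math.comb</code>, which returns 0 whenever k > n:</p>

```python
from math import comb

print(comb(1, 0), comb(1, 1), comb(1, 2))   # 1 1 0

# comb(n, k) == 0 whenever k > n: there is no way to choose more
# elements than are available
assert all(comb(n, k) == 0 for n in range(5) for k in range(n + 1, 8))
```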
|
1,988,517 | <p>$$\binom{1}{0},\binom{1}{1},\binom{1}{2}$$
What does this mean, and how do I obtain a numerical value when trying to solve a proof or problem in this form? </p>
| Marble | 302,943 | <p>An expression for ${n}\choose{k}$ that you can use to compute it is $\frac{n!}{k!(n-k)!}$.</p>
<p>The actual interpretation of the values is:</p>
<ol>
<li><p>$\binom{1}{0} = 1$ means there is only one way to select 0 elements out of 1 element</p></li>
<li><p>$\binom{1}{1} = 1$ means we have one element and select the only element we have.</p></li>
<li><p>$\binom{1}{2} = 0$ means we cannot select 2 elements out of 1, so the count is 0</p></li>
</ol>
|
1,776,726 | <p>I'm trying to determine whether or not </p>
<blockquote>
<p>$$\sum_{k=1}^\infty \frac{2+\cos k}{\sqrt{k+1}}$$ </p>
</blockquote>
<p>converges or not. </p>
<p>I have tried using the ratio test but this isn't getting me very far. Is this a sensible way to go about it or should I be doing something else?</p>
| SchrodingersCat | 278,967 | <p>Your function will be maximal when the denominator is minimal.</p>
<p>So your task is to solve $$\frac{d}{dx}\left[\left\{(1-x)-\frac{4}{1-x}+1\right\}^2+1\right]=0$$</p>
<p>It will come down to $$2\left\{(1-x)-\frac{4}{1-x}+1\right\}\left\{-1-\frac{4}{(1-x)^2}\right\}=0$$
This will give rise to $2$ cases.</p>
<p>Case-$1$:</p>
<p>$$-1-\frac{4}{(1-x)^2}=0 \Rightarrow x \not \in \mathbb{R}$$</p>
<p>Hence $-1-\frac{4}{(1-x)^2}\not =0$ for any real $x$.</p>
<p>Case-$2$:</p>
<p>$$(1-x)-\frac{4}{1-x}+1=0$$
$$(1-x)^2+(1-x)-4=0$$</p>
<p>So $$1-x=\frac{-1\pm \sqrt{1+16}}{2}=\frac{\sqrt{17}\pm 1}{2}$$
Or $$x=1-\frac{\sqrt{17}\pm 1}{2}$$</p>
<p>This will give $2$ values of $x$. </p>
<p>Compute $\frac{d^2}{dx^2}\left[\left\{(1-x)-\frac{4}{1-x}+1\right\}^2+1\right]$ and check for which value of $x$, the expression comes out to be positive.</p>
<p>That is where the function is minimised, or, <em>your actual function</em> is maximised.</p>
<p>Hope this helps.</p>
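<p>A quick numeric check (my own sketch): at both roots of $(1-x)^2+(1-x)-4=0$ the bracketed expression vanishes, so the inner function attains its smallest possible value $1$ there, which is where <em>your actual function</em> is maximised:</p>

```python
import math

def inner(x):
    # the quantity being minimized: [(1-x) - 4/(1-x) + 1]**2 + 1
    u = 1 - x
    return (u - 4 / u + 1) ** 2 + 1

# the two solutions of (1-x)**2 + (1-x) - 4 = 0, i.e. 1 - x = (-1 +/- sqrt(17))/2
roots = [1 - (-1 + math.sqrt(17)) / 2, 1 - (-1 - math.sqrt(17)) / 2]
for x in roots:
    assert abs(inner(x) - 1) < 1e-9   # the bracket vanishes, so inner(x) = 1
print(roots)
```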
|
908,309 | <p>I'm finding the principal value of $$ i^{2i} $$</p>
<p>And I know it's solved like this:</p>
<p>$$ (e^{ i\pi /2})^{2i} $$ </p>
<p>$$ e^{ i^{2} \pi} $$</p>
<p>$$ e^{- \pi} $$</p>
<p>I understand the process, but I don't understand, for example, where the $ i $ in $ 2i $ goes.</p>
<p>Is this some kind of property of Euler's number? If so, please explain it to me. </p>
| Cookie | 111,793 | <p>The $i$ in $2i$ was combined with the $i$ inside the parentheses. Hence, you got $$i\cdot i = i^2$$ due to the exponent laws. Applied to your case:
$$(e^{ i\pi /2})^{2i}=e^{i \cdot i \cdot \pi}=e^{i^2\cdot \pi}.$$</p>
|
908,309 | <p>I'm finding the principal value of $$ i^{2i} $$</p>
<p>And I know it's solved like this:</p>
<p>$$ (e^{ i\pi /2})^{2i} $$ </p>
<p>$$ e^{ i^{2} \pi} $$</p>
<p>$$ e^{- \pi} $$</p>
<p>I understand the process, but I don't understand, for example, where the $ i $ in $ 2i $ goes.</p>
<p>Is this some kind of property of Euler's number? If so, please explain it to me. </p>
| syusim | 138,951 | <p>$$\bigl(e^{i\pi /2}\bigr)^{2i} = e^{(i\pi /2) \cdot 2i} = e^{i^2\pi}.$$</p>
<p>This is just an application of the exponent laws. Don't overthink it!</p>
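<p>Python's complex power uses the same principal branch, so the result can be confirmed directly:</p>

```python
import cmath

z = 1j ** (2j)          # Python uses the principal branch, just like above
print(z)                # real, approximately e**(-pi) = 0.0432...
assert abs(z - cmath.exp(-cmath.pi)) < 1e-12
```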
|
1,037,068 | <p>If two sequences converge to the same limit, we have
$$\lim_{n\rightarrow \infty }\left ( a_{n} \right )=\lim_{n\rightarrow \infty }\left ( b_{n} \right )$$</p>
<p>As a follow up, is the following equality also true?
$$\lim_{n\rightarrow \infty }\left ( \ln a_{n} \right )=\lim_{n\rightarrow \infty }\left ( \ln b_{n} \right )$$</p>
<p>Notice that I didn't put absolute value brackets, because I am working with sequences involving only positive terms at the moment.</p>
| JohnD | 52,893 | <p>So it is enough to show that if Parseval's Equality holds for all $x\in H$, then any $x\in H$ has a unique set of coefficients $c_n$.</p>
<p>Suppose Parseval's holds for $f,g\in H$ and $f\not=g$ but that they have the same coefficients:
$$
f=\sum_{j=1}^\infty \langle f,v_j\rangle v_j \qquad g=\sum_{j=1}^\infty \langle g,v_j\rangle v_j, \qquad \langle f,v_j\rangle=\langle g,v_j\rangle.
$$</p>
<p>Then $0\not=f-g\in H$ and
\begin{align}
f-g&=\sum_{j=1}^\infty \langle f-g,v_j\rangle v_j\\
0\not=\|f-g\|^2&=\sum_{j=1}^\infty |\langle f-g,v_j\rangle|^2\\
&=\sum_{j=1}^\infty |\langle f,v_j\rangle-\langle g,v_j\rangle|^2\\
&=0,
\end{align}
which is a contradiction. Thus, the coefficients (relative to the orthonormal set $\{v_j\}$) are uniquely determined by $x\in H$. (Technically, this is true after we have identified functions that only differ on a set of measure zero.)</p>
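<p>The finite-dimensional version of this argument can be illustrated numerically (a sketch of mine: the orthonormal basis is built by QR factorization of a random matrix):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # columns: orthonormal basis
f = rng.standard_normal(n)

coeffs = Q.T @ f                    # <f, v_j> for each basis vector
# Parseval: ||f||^2 equals the sum of squared coefficients
assert np.isclose(np.linalg.norm(f) ** 2, np.sum(coeffs ** 2))
# The coefficients determine f uniquely: reconstruction recovers f
assert np.allclose(Q @ coeffs, f)
print("Parseval and uniqueness verified in R^6")
```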
|
712,697 | <p>What is the radius of convergence?</p>
<p>$$\sum_{n=0}^{\infty} n^3 (5x+10)^n$$</p>
| Asaf Karagila | 622 | <p>Yes, we can.</p>
<p>Consider $\mathcal U$ as the open cover whose elements are <em>all</em> the open neighborhoods witnessing the local noetherian property. Since every $x\in X$ has such neighborhood, this is certainly an open cover. By quasi-compactness, we have some finite subcover, $V_1,\ldots,V_n$.</p>
<p>Now let $\mathcal V$ be any non-empty family of open sets. Write $\mathcal V_i=\{U\cap V_i\mid U\in\mathcal V\}$. Then there is at least one $i$ such that $\mathcal V_i$ is not empty, and by the noetherian property of $V_i$ there is a maximal open set in $\mathcal V_i$. Of course, there might be more than one $j$ such that $\mathcal V_j$ is non-empty.</p>
<p>For each $U\in\mathcal V$ write $m(U)=|\{i\mid U\cap V_i\text{ is maximal}\}|$, that is $m(U)$ is the number of $i$'s in which $U\cap V_i$ is maximal in $\mathcal V_i$. Since the values of $m(U)$ are between $0$ and $n$ there is some maximal value $m$, and it has to be at least $1$.</p>
<p>Finally, consider $\{U\in\mathcal V\mid m(U)=m\}$. Somewhere in this set we have a maximal open set in $\mathcal V$. Pick some $U_0$, if it is maximal then we are done. Otherwise, consider $\mathcal U_0=\{V\in\mathcal V\mid U_0\subseteq V\}$, and note that for each such $V$ we have $m(V)=m(U_0)=m$, and in particular where $U_0\cap V_i$ is maximal, $V\cap V_i=U_0\cap V_i$.</p>
<p>Let $i$ be the least index such that $U_0\cap V_i$ is not maximal, and there is some $V$ such that $U_0\cap V_i\subsetneq V\cap V_i$ (such an index exists, otherwise $U_0$ is maximal), now consider $\{V\cap V_i\mid V\in\mathcal U_0\}$ and let $\mathcal U_1$ be the set of $V$ such that $V\cap V_i$ is maximal. Pick some $U_1\in\mathcal U_1$, if it is maximal, then we are done.</p>
<p>Otherwise we proceed to repeat the process on the least index $i$ such that for some $V\in\mathcal U_1$ we have $U_1\cap V_i\subsetneq V\cap V_i$. We observe that $U_0\subseteq U_1$, and the induction process guarantees that we have an increasing chain of open sets.</p>
<p>However the induction can only proceed $n-m$ steps, so it must halt and we must have a maximal element.</p>
|
3,413,253 | <p>Can anyone give me some examples and non-examples of Lindelöf or second countable spaces, and of spaces that are Lindelöf but not second countable? I understand the definitions but find them hard to visualize.
I have tried googling it, but I only found some trivial examples like finite sets or the empty set.</p>
<p>In general, how can one construct a topological space that is Lindelöf or second countable?</p>
<p>Someone on Stack Exchange said the real line with the discrete topology is Lindelöf, but I do not think so. We can simply take the open cover consisting of all the singleton sets; this cover is well defined since singletons are open in the discrete topology, and it has no countable subcover. Hence, by definition, the space is not Lindelöf.</p>
<p>Last question: is $(0,1)$ in the real line, equipped with the usual topology, Lindelöf? I think it is, but I could not give a formal proof.
$(0,1)$ fails to be compact, since the open cover by the intervals $(1/n,1-1/n)$ has no finite subcover; but this cover does not help for the Lindelöf property, since the rational numbers are dense in $(0,1)$. So intuitively I think it is Lindelöf.</p>
<p>I wrote a pretty long question, and my mother tongue is not English; hopefully you can understand me.</p>
| Henno Brandsma | 4,280 | <p>Lindelöf and second countable are saying that a space is "small" in some sense; so one way to find non-examples is to take products of lots of spaces, such products (or powers) are "big".</p>
<p><span class="math-container">$\Bbb R^I$</span> is not Lindelöf for <span class="math-container">$I$</span> uncountable. (it's also not first countable at any point). It is not normal (which is one of the easier ways to see it's not Lindelöf). Of course you're right that the discrete reals are not Lindelöf (take the open cover by singleton sets). BTW, it's a non-trivial fact that if <span class="math-container">$I$</span> has size at most that of <span class="math-container">$\Bbb R$</span>, this product is still separable, so it's also an example of a separable non-Lindelöf space for such <span class="math-container">$I$</span>.</p>
<p><span class="math-container">$[0,1]^I$</span> for <span class="math-container">$I$</span> uncountable is compact (Tychonoff's theorem) so Lindelöf but not first countable at any point too, so certainly not second countable either. But it <em>is</em> normal, of course. And separable iff <span class="math-container">$|I| \le |\Bbb R|$</span>.</p>
<p>For metric spaces: Lindelöf, second countable and separable are equivalent properties (see a more general fact in my answer <a href="https://math.stackexchange.com/a/2812134/4280">here</a>. So <span class="math-container">$(0,1)$</span> in the usual topology is certainly Lindelöf, as the rationals in it are dense.</p>
|
72,854 | <p>Hi everybody,</p>
<p>Does there exist an explicit formula for the Stirling Numbers of the First Kind which are given by the formula
$$
x(x-1)\cdots (x-n+1) = \sum_{k=0}^n s(n,k)x^k.
$$</p>
<p>Otherwise, what is the computationally fastest formula one knows?</p>
| joro | 12,481 | <p>Stirling Numbers of the First Kind are treated in the book <a href="http://www.jjj.de/fxt/fxtpage.html#fxtbook" rel="nofollow">"Matters Computational" (was: "Algorithms for Programmers")</a> by Jörg Arndt. A C++ implementation of Arndt is at <a href="http://www.jjj.de/fxt/demo/comb/stirling1-demo.cc" rel="nofollow">stirling1-demo.cc</a>. The author is known for writing fast algorithms. </p>
<p>Another resource for formulas is the <a href="https://oeis.org/Seis.html" rel="nofollow">The On-Line Encyclopedia of Integer Sequences</a> - search for your terms.</p>
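<p>If all that is needed is a table of values, the standard signed recurrence $s(n+1,k)=s(n,k-1)-n\,s(n,k)$ (which follows directly from the defining product) computes them in $O(n^2)$ time; a plain-Python sketch:</p>

```python
def stirling1(nmax):
    """Signed Stirling numbers s(n, k) of the first kind for 0 <= k <= n <= nmax."""
    s = [[0] * (nmax + 1) for _ in range(nmax + 1)]
    s[0][0] = 1
    for n in range(nmax):
        for k in range(n + 2):
            # s(n+1, k) = s(n, k-1) - n * s(n, k)
            s[n + 1][k] = (s[n][k - 1] if k > 0 else 0) - n * s[n][k]
    return s

s = stirling1(4)
# x(x-1)(x-2) = 2x - 3x^2 + x^3, so s(3,1) = 2, s(3,2) = -3, s(3,3) = 1
print(s[3][:4])   # [0, 2, -3, 1]
```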
|
298,029 | <p>How can we convert the sine function into a continued fraction?</p>
<p>For example:</p>
<p><a href="http://mathworld.wolfram.com/EulersContinuedFraction.html">http://mathworld.wolfram.com/EulersContinuedFraction.html</a></p>
<p>How can we convert $\sin$ to a similar continued fraction?</p>
<p>And what about $\sinh$ and $\cosh$? $\arcsin$? $\arctan$? $\cos$? $\arccos$?</p>
<p>In general, how can we convert any function to a continued fraction?</p>
<p>My friend asked me this question, so I hope you can help me help him.</p>
<p>Thanks to all of you.</p>
| robjohn | 13,854 | <p>We will proceed as in <a href="https://math.stackexchange.com/a/287987">this answer</a>.</p>
<p>Define
<span class="math-container">$$
P_n(x)=\sum_{k=0}^\infty\frac{4^{k+n}-\sum\limits_{j=1}^n\binom{2k+2n+1}{2j-1}}{(2k+2n+1)!}(-x^2)^k\tag{1}
$$</span>
Then
<span class="math-container">$$
\begin{align}
\frac{P_{n-1}(x)}{P_n(x)}
&=\frac
{\displaystyle\sum_{k=0}^\infty\frac{4^{k+n-1}-\sum\limits_{j=1}^{n-1}\binom{2k+2n-1}{2j-1}}{(2k+2n-1)!}(-x^2)^k}
{\displaystyle\sum_{k=0}^\infty\frac{4^{k+n}-\sum\limits_{j=1}^n\binom{2k+2n+1}{2j-1}}{(2k+2n+1)!}(-x^2)^k}\\[12pt]
&=\color{#C00000}{-x^2+}\frac
{\displaystyle\sum_{k=0}^\infty\frac{\color{#C00000}{\binom{2k+2n-1}{2n-1}}}{(2k+2n-1)!}(-x^2)^k}
{\displaystyle\sum_{k=0}^\infty\frac{4^{k+n}-\sum\limits_{j=1}^n\binom{2k+2n+1}{2j-1}}{(2k+2n+1)!}(-x^2)^k}\\[12pt]
&=-x^2+\frac
{\displaystyle\sum_{k=0}^\infty\color{#C00000}{\frac{2n(2n+1)\binom{2k+2n+1}{2n+1}}{(2k+2n+1)!}}(-x^2)^k}
{\displaystyle\sum_{k=0}^\infty\frac{4^{k+n}-\sum\limits_{j=1}^n\binom{2k+2n+1}{2j-1}}{(2k+2n+1)!}(-x^2)^k}\\[12pt]
&=\color{#C00000}{2n(2n+1)}-x^2\color{#C00000}{-}\frac
{\displaystyle\sum_{k=0}^\infty\frac{2n(2n+1)\color{#C00000}{\left[4^{k+n}-\sum\limits_{j=1}^{n+1}\binom{2k+2n+1}{2j-1}\right]}}{(2k+2n+1)!}(-x^2)^k}
{\displaystyle\sum_{k=0}^\infty\frac{4^{k+n}-\sum\limits_{j=1}^n\binom{2k+2n+1}{2j-1}}{(2k+2n+1)!}(-x^2)^k}\\[12pt]
&=2n(2n+1)-x^2\color{#C00000}{+2n(2n+1)x^2}\frac
{\displaystyle\sum_{k=0}^\infty\color{#C00000}{\frac{4^{k+n+1}-\sum\limits_{j=1}^{n+1}\binom{2k+2n+3}{2j-1}}{(2k+2n+3)!}}(-x^2)^k}
{\displaystyle\sum_{k=0}^\infty\frac{4^{k+n}-\sum\limits_{j=1}^n\binom{2k+2n+1}{2j-1}}{(2k+2n+1)!}(-x^2)^k}\\[12pt]
&=2n(2n+1)-x^2+2n(2n+1)x^2\color{#C00000}{\left/\frac{P_n(x)}{P_{n+1}(x)}\right.}\tag{2}
\end{align}
$$</span>
As I <a href="http://chat.stackexchange.com/transcript/message/8028723#8028723">suggested in chat</a>, consider
<span class="math-container">$$
\begin{align}
\sin(x)
&=\frac{\sin(2x)}{2\cos(x)}\\
&=\frac
{\displaystyle x\sum_{k=0}^\infty\frac{4^k(-x^2)^k}{(2k+1)!}}
{\displaystyle\sum_{k=0}^\infty\frac{(-x^2)^k}{(2k)!}}\\
&=x\left/\left(\frac
{\displaystyle\sum_{k=0}^\infty\frac{(-x^2)^k}{(2k)!}}
{\displaystyle \sum_{k=0}^\infty\frac{4^k(-x^2)^k}{(2k+1)!}}
\right)\right.\\
&=x\left/\left(1+x^2\left/\frac{P_0(x)}{P_1(x)}\right.\right)\right.\tag{3}
\end{align}
$$</span>
<span class="math-container">$(2)$</span> and <span class="math-container">$(3)$</span> lead us to the continued fraction
<span class="math-container">$$
\sin(x)=\cfrac{x}{1+\cfrac{x^2}{2\cdot3-x^2+\cfrac{2\cdot3x^2}{\ddots\lower{6pt}{2n(2n+1)-x^2+\cfrac{2n(2n+1)x^2}{P_n(x)/P_{n+1}(x)}}}}}\tag{4}
$$</span></p>
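<p>A numerical check of <span class="math-container">$(4)$</span>: evaluating the continued fraction from the innermost level outward, simply dropping the tail <span class="math-container">$P_n(x)/P_{n+1}(x)$</span> at some depth, converges to <span class="math-container">$\sin(x)$</span> very quickly. A small Python sketch (the truncation scheme is an assumption of this sketch, not part of the derivation above):</p>

```python
import math

def sin_cf(x, depth=12):
    # Evaluate the continued fraction (4) from the innermost level outward,
    # truncating the tail P_n(x)/P_{n+1}(x) at the given depth.
    t = 2 * depth * (2 * depth + 1) - x * x
    for n in range(depth - 1, 0, -1):
        t = 2 * n * (2 * n + 1) - x * x + 2 * n * (2 * n + 1) * x * x / t
    return x / (1 + x * x / t)

for x in (0.5, 1.0, 2.0):
    print(x, sin_cf(x), math.sin(x))
```

<p>At depth 12 the two columns agree to machine precision for moderate <span class="math-container">$x$</span>.</p>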
|
34,204 | <p>I have several contour lines and one point. How can I find a point in one of those contour lines which is nearest to the given point?</p>
<pre><code>(*Create the implicit curves*)
Data={{10,20,1},{10,40,2},{10,60,3},{10,80,4},{20,25,2},{20,45,3},{20,65,4},{30,30,3},{30,50,4},{40,35,4},{40,55,5},{50,20,4},{50,40,5},{60,25,5}};
U=NonlinearModelFit[Data,a x^b (y^(1-b))+c,{a,b,c},{x,y}];
L = {ContourPlot[U[x, y] == 1, {x, 0, 100}, {y, 0, 100}, ContourStyle -> Red],
  ContourPlot[U[x, y] == 2, {x, 0, 100}, {y, 0, 100}, ContourStyle -> Magenta],
  ContourPlot[U[x, y] == 3, {x, 0, 100}, {y, 0, 100}, ContourStyle -> Brown],
  ContourPlot[U[x, y] == 4, {x, 0, 100}, {y, 0, 100}, ContourStyle -> Blue],
  ContourPlot[U[x, y] == 5, {x, 0, 100}, {y, 0, 100}, ContourStyle -> Green]};
(*Point nearest to which we need to find the points on the curves*)
pt={30,50};
(*Graphic*)
Show[L,Graphics[{PointSize[Large],Blue,Point[pt]}],FrameLabel->{"X","Y"}]
</code></pre>
<p><img src="https://i.stack.imgur.com/kElkt.jpg" alt="enter image description here"></p>
| Michael E2 | 4,999 | <p>My original idea was the same as <a href="https://mathematica.stackexchange.com/users/61/cormullion">cormullion's</a>, and then followed by <code>FindMinimum</code>. But here is another, still related, way using <code>MeshFunctions</code>. It's not as efficient in general as other ways, perhaps, but as a method, it is sometimes convenient.</p>
<p>The idea is to create a mesh function that has the same sign as the angle between the vector from the <code>pt</code> to the curve and the normal to the curve. Then use <code>Mesh</code> to include the points where the angle is zero (thereby overcoming the limitation on cormullion's <code>ContourPlot</code>/<code>Nearest</code> method). To avoid including the mesh points in <code>Nearest</code>, I set <code>MeshStyle -> None</code>. The mesh is still computed, so I have to select those points that are on the contours. These appear in <code>Lines</code>.</p>
<pre><code>clevels = {3, 6, 9, 12, 15, 18, 24, 35};
test[a_, b_, c_] = a x^b (y^(1 - b)) + c;
meshfn = Numerator @ Together[
Cross[(pt - {x, y})].D[Rationalize@test[.42, .55, .1], {{x, y}}]]
plot = ContourPlot[Evaluate@test[.42, .55, .1],
{x, 0, 100}, {y, 0, 100},
Contours -> clevels,
ContourStyle -> ColorData[1] /@ Range@Length@clevels,
ContourShading -> None,
Mesh -> {{0}}, MeshStyle -> None,
MeshFunctions -> {Function[{x, y, z}, Evaluate[meshfn]]}];
cpts = First @ Cases[plot,
GraphicsComplex[pts_, __] :>
pts[[Flatten @ Cases[plot, Line[p_] :> p, Infinity]]],
Infinity];
nf = Nearest @ cpts;
Show[
plot,
Graphics[{Blue, PointSize@Large, Point@pt, Red, Point@nf[pt]}]
]
</code></pre>
<p><img src="https://i.stack.imgur.com/nT0RJ.png" alt="Mathematica graphics"></p>
<p>You can make a nice plot by including the mesh line.</p>
<pre><code>nf2 = Nearest[Function[{x, y}, Evaluate[meshfn]] @@@ cpts -> cpts];
ContourPlot[Evaluate@test[.42, .55, .1],
{x, 0, 100}, {y, 0, 100},
Contours -> clevels,
ContourStyle -> ColorData[1] /@ Range @ Length @ clevels,
ContourShading -> None,
Mesh -> {{0}},
MeshFunctions -> {Function[{x, y, z}, Evaluate[meshfn]]},
Epilog -> {Red, PointSize[Large], Point@pt, PointSize[Medium],
Thin, {Point[#], Line[{pt, #} & /@ #]} &@
nf2[0., 2 + Length @ clevels]}]
</code></pre>
<p><img src="https://i.stack.imgur.com/GnMsw.png" alt="Mathematica graphics"></p>
|
619,564 | <p>I have to prove if this function is differentiable.</p>
<p>$$f(x,y)= \begin{cases} \frac{\cos x-\cos y}{x-y} \iff x \neq y \\-\sin x \iff x=y \end{cases}$$</p>
<p>If $x \neq y$ it is continuous, but I want to see whether it is continuous at $x=y$ too.</p>
<p>I can rewrite $f$ as
$$ f(x,y)= \begin{cases} \frac{g(x)-g(y)}{x-y} \iff x \neq y \\
g'(x)=g'(y) \iff x=y \end{cases}$$</p>
<p>and see that $\lim_{(x,y) \to (x_0,x_0)} f(x,y)=g'(x_0)$. Thus, it is continuous.
Also, the partial derivatives exist:
$$f_x(x,y)=\begin{cases} \frac{-\sin x(x-y)-\cos x+\cos y}{(x-y)^2} \\ -\cos(x) \end{cases}$$
$$f_y(x,y)=\begin{cases} \frac{\sin y(x-y)+\cos x-\cos y}{(x-y)^2} \\ 0 \end{cases}$$
If I proved that they are continuous, too, for the theorem of the total differential, the function would be differentiable. Still, I'm not sure this is the right way of reasoning.</p>
| Christian Blatter | 1,303 | <p>One has
$$f(x,y)=-\int_0^1\sin\bigl((1-t)y+t\,x\bigr)\ dt\qquad\forall\ (x,y)\in{\mathbb R}^2\ .$$
This shows that $f\in C^\infty({\mathbb R}^2)$.</p>
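<p>The integral representation is easy to verify numerically, e.g. with a midpoint rule (a hedged sketch, with arbitrarily chosen test values):</p>

```python
import math

def f_via_integral(x, y, n=20000):
    # midpoint rule for  -integral_0^1 sin((1-t)*y + t*x) dt
    h = 1.0 / n
    s = sum(math.sin((1 - (k + 0.5) * h) * y + (k + 0.5) * h * x)
            for k in range(n))
    return -s * h

x, y = 0.7, -1.3
print(f_via_integral(x, y), (math.cos(x) - math.cos(y)) / (x - y))
print(f_via_integral(x, x), -math.sin(x))   # the integrand is constant when x = y
```

<p>The second line also shows why the formula covers the diagonal: for $x=y$ the integrand is the constant $\sin(x)$, so the integral is exactly $-\sin(x)$.</p>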
|
1,582,275 | <p>Suppose that $B = S^{-1}AS$ for some $n \times n$ matrices $A$, $B$, and $S$.</p>
<ol>
<li>Show that if $x \in \ker(B)$ then $Sx \in \ker(A)$.</li>
</ol>
<p>Proof: $B = S^{-1}AS$ implies that $SB = AS$ which implies that $SBx = ASx = 0$, that is $Sx \in \ker(A)$.</p>
<ol start="2">
<li>Show that the linear transformation $T : \ker(B) \to \ker(A), \, x \mapsto Sx$ is an isomorphism. </li>
</ol>
<p>I know how to prove part 1, but I am not sure what to do for part 2. </p>
| seeker | 267,945 | <p>$T:Ker\ (B)\rightarrow Ker\ (A)$ is given by $T(x)=S(x)$. First of all by part 1, this is well defined.</p>
<ol>
<li><p>Check that it is linear.</p></li>
<li><p>Let $x\in Ker\ (T)$. Then $T(x)=S(x)=0\implies x\in Ker (S)$. But $S$ is invertible $\implies x=0$. So $T$ is one-one.</p></li>
<li><p>Let $x\in Ker\ (A)\implies A(x)=0$. So consider the vector $S^{-1}x$. Then $BS^{-1}(x)=S^{-1}A(x)=0\implies S^{-1}(x)\in Ker\ (B)$ and $T(S^{-1}(x))=S(S^{-1}(x))=x$. So $T$ is onto.</p></li>
</ol>
<p>Hence $T$ is an isomorphism.</p>
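<p>A quick numerical illustration of both parts with a hypothetical random example in NumPy (the SVD is used to extract $\ker(B)$):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
S = rng.standard_normal((n, n))                               # invertible with probability 1
A = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n)) # rank 2, so ker(A) is 2-dimensional
B = np.linalg.inv(S) @ A @ S

# ker(B): right singular vectors belonging to (numerically) zero singular values
_, s, Vt = np.linalg.svd(B)
rank = int(np.sum(s > 1e-10))
kerB = Vt[rank:].T                                            # columns span ker(B)

# Part 1: S maps ker(B) into ker(A)
print(np.allclose(A @ (S @ kerB), 0, atol=1e-8))

# Consistent with part 2: the isomorphism forces dim ker(B) = dim ker(A)
print(kerB.shape[1] == n - np.linalg.matrix_rank(A, tol=1e-10))
```
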
|
3,339,647 | <p>I had a class in algebraic topology; our main book is Allen Hatcher. Our professor defined a term called the "Exponential Law" as follows:</p>
<p><span class="math-container">$Hom (X \times Y, Z) \cong Hom (X, Hom (Y, Z))$</span> </p>
<p><span class="math-container">$\alpha : X \times Y \rightarrow Z $</span></p>
<p><span class="math-container">$\tilde{\alpha} : X \rightarrow Hom (Y, Z)$</span></p>
<p><span class="math-container">$\tilde{\alpha} (x)(y) = \alpha (x, y) $</span></p>
<p>(I may have made errors in copying from my professor; forgive me if I have.)</p>
<p><strong>My questions are:</strong></p>
<p>1. Where can I find this topic in Allen Hatcher or any other book? (Actually I asked my professor, who said I might find it in Munkres under the title "Mapping spaces"; I assumed this means Munkres's general topology book, but I did not find the exponential law there.) Could anyone help me with this, please?</p>
<p>2. Why is it called the exponential law?</p>
| Noah Riggenbach | 482,732 | <p>1.) This has already been answered in the comments, but as an alternative source, Davis and Kirk discuss it when treating compactly generated weak Hausdorff spaces, which I prefer.</p>
<p>2.) If you write <span class="math-container">$\operatorname{Hom}(X,Y)$</span> as <span class="math-container">$Y^X$</span>(which is standard) then the statement becomes <span class="math-container">$$Z^{X×Y}=(Z^Y)^X$$</span></p>
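<p>In programming terms the exponential law is exactly <em>currying</em>: a function of two arguments corresponds to a function that returns a function. A tiny Python illustration (hypothetical function names, ignoring all topology on the mapping spaces):</p>

```python
def curry(alpha):
    # Hom(X x Y, Z) -> Hom(X, Hom(Y, Z))
    return lambda x: (lambda y: alpha(x, y))

def uncurry(f):
    # Hom(X, Hom(Y, Z)) -> Hom(X x Y, Z)
    return lambda x, y: f(x)(y)

alpha = lambda x, y: x + 2 * y
print(curry(alpha)(3)(4))           # 11, same as alpha(3, 4)
print(uncurry(curry(alpha))(3, 4))  # 11
```

<p>The two directions are mutually inverse, which is the bijection <span class="math-container">$\operatorname{Hom}(X\times Y,Z)\cong\operatorname{Hom}(X,\operatorname{Hom}(Y,Z))$</span> at the level of bare sets.</p>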
|
192,636 | <p>Suppose I have some 3D points, e.g. <code>{{0, 0, 1}, {0, 0, 1.3}, {0, 1, 0}, {1.2, 0, 0}}</code>. Now I want to find the smallest and largest distance between two points.</p>
<p>A trivial way is to compute all possible distances and then look for the smallest and largest number. This becomes very time-consuming for large data sets.</p>
<p>Could you please suggest any alternative?</p>
| Carl Woll | 45,431 | <p>I think Henrik meant the following approach using <a href="http://reference.wolfram.com/language/ref/Nearest" rel="noreferrer"><code>Nearest</code></a>:</p>
<pre><code>min[pts_] := Min @ Nearest[pts->"Distance", pts, 2][[All, 2]]
</code></pre>
<p>Let's compare the above approach with a simple version based on <a href="http://reference.wolfram.com/language/ref/DistanceMatrix" rel="noreferrer"><code>DistanceMatrix</code></a>:</p>
<pre><code>min2[pts_] := With[{dm = DistanceMatrix[pts]},
Min[dm + Max[dm] IdentityMatrix[Length[pts], SparseArray]]
]
</code></pre>
<p>Sample data:</p>
<pre><code>SeedRandom[1]
pts = RandomReal[10,{1000,2}];
</code></pre>
<p>Timing comparison:</p>
<pre><code>min[pts] //RepeatedTiming
min2[pts] //RepeatedTiming
</code></pre>
<blockquote>
<p>{0.000693, 0.009433}</p>
<p>{0.00629, 0.009433}</p>
</blockquote>
<p>A similar treatment is possible for the maximum distance, but is much slower:</p>
<pre><code>max[pts_] := Max @ Nearest[pts->"Distance", pts, All][[All, -1]]
max2[pts_] := Max @ DistanceMatrix[pts]
</code></pre>
<p>Comparison:</p>
<pre><code>max[pts] //RepeatedTiming
max2[pts] //RepeatedTiming
</code></pre>
<blockquote>
<p>{0.019, 13.7336}</p>
<p>{0.00296, 13.7336}</p>
</blockquote>
<p>Note that methods to compute the maximum distance based on <a href="http://reference.wolfram.com/language/ref/ConvexHullMesh" rel="noreferrer"><code>ConvexHullMesh</code></a> will be slower than using <a href="http://reference.wolfram.com/language/ref/DistanceMatrix" rel="noreferrer"><code>DistanceMatrix</code></a>, e.g.:</p>
<pre><code>ConvexHullMesh[pts]; //AbsoluteTiming
</code></pre>
<blockquote>
<p>{0.012535, Null}</p>
</blockquote>
<p>which is already 4 times slower, without computing any distances yet. Also, methods computing a bounding ball will not yield the correct result. For example, consider an equilateral triangle in 2 dimensions:</p>
<pre><code>With[{eq = SSSTriangle[1, 1, 1]},
Graphics[{eq, Circumsphere @@ eq}]
]
</code></pre>
<blockquote>
<p><a href="https://i.stack.imgur.com/S08y4.png" rel="noreferrer"><img src="https://i.stack.imgur.com/S08y4.png" alt="enter image description here"></a></p>
</blockquote>
<p>Clearly the diameter of the circle is larger than the maximum distance.</p>
|
4,063,337 | <p>In an exercise I'm asked the following:</p>
<blockquote>
<p>a) Find a formula for <span class="math-container">$\int (1-x^2)^n dx$</span>, for any <span class="math-container">$n \in \mathbb N$</span>.</p>
<p>b) Prove that, for all <span class="math-container">$n \in \mathbb N$</span>: <span class="math-container">$$\int_0^1(1-x^2)^n dx = \frac{2^{2n}(n!)^2}{(2n + 1)!}$$</span></p>
</blockquote>
<p>I used the binomial theorem in <span class="math-container">$a$</span> and got:</p>
<p><span class="math-container">$$\int (1-x^2)^n dx = \sum_{k=0}^n \left( \begin{matrix} n \\ k \end{matrix} \right) (-1)^k \ \frac{x^{2k + 1}}{2k+1} \ \ \ + \ \ C$$</span></p>
<p>and so in part (b) I got:</p>
<p><span class="math-container">$$\int_0^1 (1-x^2)^n dx = \sum_{k=0}^n \left( \begin{matrix} n \\ k \end{matrix} \right) \ \frac{(-1)^k}{2k+1}$$</span></p>
<p>I have no clue on how to arrive at the expression that I'm supposed to arrive. How can I solve this?</p>
| DonAntonio | 31,254 | <p>An idea: substitution <span class="math-container">$\;t=\sin x\,,\,\,dt=\cos x\,dx\;$</span> , so</p>
<p><span class="math-container">$$\color{red}{I_n}:=\int_0^1(1-t^2)^ndt=\int_0^{\pi/2}(\cos x)^{2n+1}dx=\int_0^1\left(1-\sin^2x\right)(\cos x)^{2n-1} dx=$$</span></p>
<p><span class="math-container">$$=I_{n-1}-\int_0^{\pi/2}\sin^2x(\cos x)^{2n-1}dx\stackrel{\text{by parts:}\\u=\sin x,\,v'=\sin x(\cos x)^{2n-1}}=\color{red}{I_{n-1}}+\overbrace{\left.\frac{\sin x(\cos x)^{2n}}{2n}\right|_0^1}^{=0}-\color{red}{\frac1{2n}I_n}$$</span></p>
<p><span class="math-container">$$\implies I_n=\frac{2n}{2n+1}I_{n-1}$$</span></p>
<p>For <span class="math-container">$\;n=0\;$</span> we get</p>
<p><span class="math-container">$$I_0=\int_0^1(1-t^2)^0dt=\int_0^1dt=1=\frac{2^0(0!)^2}{1!}\;\;\checkmark$$</span></p>
<p>Now induction...and the result follows at once:</p>
<p><span class="math-container">$$I_n=\frac{2n}{2n+1}I_{n-1}\stackrel{\text{Ind. Hyp.}}=\frac{(2n)^2\cdot2^{2n-2}((n-1)!)^2}{(2n-1)!\cdot2n\cdot(2n+1)}=\frac{2^{2n}(n!)^2}{(2n+1)!}$$</span></p>
|
3,162,056 | <p>In Kleene's "Mathematical Logic" and "Introduction to Metamathematics" for a classical predicate calculus the following two rules of inference are chosen.</p>
<p>If <span class="math-container">$A(x) \Rightarrow C$</span> then <span class="math-container">$(\exists xA(x)) \Rightarrow (C)$</span> and
if <span class="math-container">$C \Rightarrow A(x)$</span> then <span class="math-container">$(C) \Rightarrow (\forall xA(x))$</span> where <span class="math-container">$C$</span> does not contain variable <span class="math-container">$x$</span> free. </p>
<p>I tried motivating these choices but unfortunately I could not. Because it is a classical predicate calculus I tried considering truth table semantics to somehow see why these results should be valid, but what I found (not sure if correct) is that the following results are semantically valid as well.</p>
<p>If <span class="math-container">$A(x) \Rightarrow C$</span> then <span class="math-container">$(\forall x(A(x)) \Rightarrow (C)$</span> and also if <span class="math-container">$C \Rightarrow A(x)$</span> then <span class="math-container">$(C) \Rightarrow (\exists x A(x))$</span> where <span class="math-container">$C$</span> again does not contain variable <span class="math-container">$x$</span> free. </p>
<p>If this is indeed true then I am confused as one sees that <span class="math-container">$\exists$</span> and <span class="math-container">$\forall$</span> act in inference rules in exactly the same way while intuitively I would think that these two logical symbols should act differently.</p>
<p>I would appreciate your advice or thoughts about this.</p>
| frabala | 53,208 | <p>An intuitive answer, in a world of match sticks and fire and no magic, i.e. one can not make fire out of nothing.</p>
<p>Consider a variable <span class="math-container">$x$</span> to mean a match stick. Let <span class="math-container">$A(x)$</span> mean "Stick <span class="math-container">$x$</span> smokes" and <span class="math-container">$C$</span> mean "There is fire".</p>
<p>The rule "if <span class="math-container">$A(x)\Rightarrow C$</span> then <span class="math-container">$(\exists x A(x))\Rightarrow C$</span>" states the very obvious: "if stick x smokes then there's fire" means "if there is some stick <span class="math-container">$x$</span> that smokes, then there's fire". This sentence holds in any universe of match sticks.</p>
<p>Now, consider your rule "if <span class="math-container">$A(x)\Rightarrow C$</span> then <span class="math-container">$(\forall x A(x))\Rightarrow C$</span>". It states the following: "if stick x smokes, then there's fire" means also that "if all sticks <span class="math-container">$x$</span> are smoking, then there's fire". This derivation is valid only in models that contain at least one object. Because in an empty universe that has no match sticks, all match sticks (which are actually none) are smoking. However, there can be no fire without match sticks!</p>
|
4,535,612 | <blockquote>
<p>Is a group of order <span class="math-container">$2^kp$</span> not simple, where <span class="math-container">$p$</span> is a prime and <span class="math-container">$k$</span> is an positive integer?</p>
</blockquote>
<p>I did this for the groups of order <span class="math-container">$2^k 3$</span>. Here the intersection of two distinct Sylow <span class="math-container">$2$</span>-subgroups (if the number of Sylow <span class="math-container">$2$</span>-subgroups is more than <span class="math-container">$1$</span>) is a nontrivial proper normal subgroup of the group.
So the group is not simple.</p>
<p>But if <span class="math-container">$p$</span> is any prime greater than <span class="math-container">$3$</span>, is such a group always non-simple?</p>
<p>Please give some hint for solving this.</p>
<p>For <span class="math-container">$\;p=3\;$</span> I solved like this: <a href="https://math.stackexchange.com/a/2525990/934187">https://math.stackexchange.com/a/2525990/934187</a></p>
| Viktor Vaughn | 22,912 | <p>Yes, this can be done using elimination theory. See the reference to Cox, Little, and O'Shea's book I've given <a href="https://math.stackexchange.com/a/1318832">here</a> or <a href="https://math.stackexchange.com/a/1490119">here</a>.</p>
<p>We can find the desired polynomial by computing a Gröbner basis using <a href="https://sagecell.sagemath.org/" rel="nofollow noreferrer"><em>SageMath</em></a>. For the given <span class="math-container">$x$</span> and <span class="math-container">$y$</span>, we find that they satisfy the polynomial <span class="math-container">$P(x,y) = x^3 - y^2 + 2 y - 1 = x^3 - (y-1)^2$</span>.</p>
|
2,468,155 | <p>This problem is from Challenge and Thrill of Pre-College Mathematics:
Prove that $$ (a^3+b^3)^2\le (a^2+b^2)(a^4+b^4)$$</p>
<p>It would be really great if somebody could come up with a solution to this problem.</p>
| Kenny Lau | 328,173 | <p>$$\begin{array}{rrcl}
& (a^3+b^3)^2 &\le& (a^2+b^2)(a^4+b^4) \\
\iff& a^6 + 2a^3b^3 + b^6 &\le& a^6+a^2b^4+b^2a^4+b^6 \\
\iff& 2a^3b^3 &\le& a^2b^4+b^2a^4 \\
\iff& 2ab &\le& b^2+a^2 \\
\iff& 0 &\le& b^2-2ab+a^2 \\
\iff& 0 &\le& (b-a)^2 \\
\end{array}$$</p>
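<p>Since every step above is a reversible equivalence, the inequality holds for all real $a,b$, with equality exactly when $a=b$. As a sanity check, here is a brute-force verification over exact integers (a hedged illustration, not a proof):</p>

```python
from itertools import product

# exact integer arithmetic, so no floating-point tolerance is needed
for a, b in product(range(-10, 11), repeat=2):
    lhs = (a ** 3 + b ** 3) ** 2
    rhs = (a ** 2 + b ** 2) * (a ** 4 + b ** 4)
    assert lhs <= rhs, (a, b)
print("holds for all integer pairs in [-10, 10]^2")
```
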
|
2,468,155 | <p>This problem is from Challenge and Thrill of Pre-College Mathematics:
Prove that $$ (a^3+b^3)^2\le (a^2+b^2)(a^4+b^4)$$</p>
<p>It would be really great if somebody could come up with a solution to this problem.</p>
| Guy Fsone | 385,707 | <p>$$(a^3+b^3)^2 = a^6 + 2a^3b^3 + b^6$$</p>
<p>But we know that $$(X-Y)^2\ge 0\Longleftrightarrow X^2+Y^2 \ge 2XY$$</p>
<p>taking $X= a^3$ and $Y=b^3$ we get
$$2a^3b^3 \le a^2b^4+b^2a^4 $$
so
$$(a^3+b^3)^2 = a^6 + 2a^3b^3 + b^6 \le a^6 + \color{red}{a^2b^4+b^2a^4 } + b^6 = (a^2+b^2)(a^4+b^4) $$</p>
|
16,290 | <p>Hi I am new here and have a calculus question that came up at work.</p>
<p>Suppose you have a $4' \times 8'$ piece of plywood. You need 3 circular pieces, all of equal diameter. What is the maximum size of the circles you can cut from this piece of material?
I would have expected I could write a function for the area of the 3 circles in terms of $x$ and $y$, then differentiate it, find a point of maxima/minima, and go from there.</p>
<p>My coworker did cut three $33''$ circles and that solved the real-world problem. But my passion would be to find the mathematical answer to this. I hope that my new stackexchange.com friends share the same passion and can help me find the answer to this in general terms.</p>
<p>What I mean by that is: if someone says "I have a piece of material $Q$ units by $2Q$ units," what are the three circles of maximum size? I hope you understand what I am asking.
I am looking to be a friend and contributor.
BD</p>
| Listing | 3,123 | <p>Isaac had the right intuition. </p>
<p><img src="https://i.stack.imgur.com/JEswV.png" alt="alt text"></p>
<p>I used Matlab to globally optimize the radius under the constrait that all three circles have the same radius, are in the rectangle and don't intersect each other. What I got is shown above,</p>
<p>R = 1.41699, </p>
<p>Pos1 = (6.58301,2.58301)</p>
<p>Pos2 = (1.41699,2.58301)</p>
<p>Pos3 = (4.0,1.41699)</p>
<p>Note that 1.41699 ft = 17.00388 inches, so Isaac found the analytic solution.</p>
<p>By request, here is the Matlab source (yes, Matlab is indeed more powerful than you think):</p>
<p>circleradius.m:</p>
<pre><code>%% x = [pos1X,pos1Y,pos2X,pos2Y,pos3X,pos3Y,r]
function f = circleradius(x)
f = -x(7); %% minimize the maximum :D
</code></pre>
<p>constraints.m</p>
<pre><code>%% x = [pos1X,pos1Y,pos2X,pos2Y,pos3X,pos3Y,r]
function [c, ceq] = constraints(x)
c = [4*x(7)^2 - ((x(1) - x(3))^2 + (x(2) - x(4))^2); %% d(Circle1,Circle2)^2 >= (2r)^2
4*x(7)^2 - ((x(1) - x(5))^2 + (x(2) - x(6))^2);
4*x(7)^2 - ((x(3) - x(5))^2 + (x(4) - x(6))^2);
x(7) - x(1); %% Circles are in the rectangle
x(7) - x(3);
x(7) - x(5);
x(7) - x(2);
x(7) - x(4);
x(7) - x(6);
x(7) + x(1) - 8; %% Width is 8
x(7) + x(3) - 8;
x(7) + x(5) - 8;
x(7) + x(2) - 4; %% Height is 4
x(7) + x(4) - 4;
x(7) + x(6) - 4;
-x(1); -x(2); -x(3); -x(4); -x(5); -x(6); -x(7)]; %% No negative values
ceq = [ ];
</code></pre>
<p>Later in main window:</p>
<pre><code>[x,fval,exitflag] = patternsearch(@circleradius,[0.5000 1 1.5000 1 2.5000 1 0.5000],[],[],[],[],[],[],@constraints)
output of x >> 4.0000 2.5830 1.4170 1.4171 6.5830 1.4171 1.4170
</code></pre>
<p>Note that the values are truncated and the above is a solution which is symmetric to what I posted before. You can make it use a mesh of size 10e-10, so you are almost sure to get the global maximum, as the function is continuous. Also the result is faster and more reliable than NMinimize of Mathematica.</p>
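<p>For the record, the analytic value behind this configuration is easy to derive: with the two side circles in the upper corners at $(r, 4-r)$ and $(8-r, 4-r)$ and the middle circle at $(4, r)$, tangency gives $(4-r)^2+(4-2r)^2=4r^2$, i.e. $r^2-24r+32=0$, so $r = 12-4\sqrt7 \approx 1.41699$ feet. A quick Python check of this (assuming that configuration):</p>

```python
import math

r = 12 - 4 * math.sqrt(7)               # root of r^2 - 24r + 32 = 0 lying in (0, 2)
c1, c2, c3 = (r, 4 - r), (8 - r, 4 - r), (4.0, r)

dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
assert abs(dist(c1, c3) - 2 * r) < 1e-9  # each side circle is tangent to the middle one
assert abs(dist(c2, c3) - 2 * r) < 1e-9
assert dist(c1, c2) >= 2 * r             # the side circles do not overlap
print(r, 12 * r)                         # about 1.41699 feet, i.e. about 17.004 inches
```
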
|
4,467,763 | <p>I have the equation</p>
<p><span class="math-container">$A\vec{x} = \vec{b} \tag{1}.$</span></p>
<p>where <span class="math-container">$A$</span> is an <span class="math-container">$m\times n$</span> matrix of rank <span class="math-container">$m$</span>, so that <span class="math-container">$m<n$</span> and the system is underdetermined. As I understand it, I can get the minimum L2 norm solution of the system by premultiplying <span class="math-container">$\vec{b}$</span> with the Moore-Penrose right inverse:</p>
<p><span class="math-container">$\vec{x} = A^T(AA^T)^{-1}\vec{b}\tag{2}.$</span></p>
<p>What I don't understand is how I get from (1) to (2), i.e. what are the algebraic steps? I'm specifically interested in knowing how to do it with just basic matrix manipulations, without calculus.</p>
| blamocur | 991,086 | <p><span class="math-container">$$|Ax-b|^2 = \left(Ax-b \right)^{T}\cdot\left(Ax-b \right)$$</span>
<span class="math-container">$$ = \left(x^TA^T-b^T \right)\left(Ax-b \right)$$</span>
<span class="math-container">$$ = x^T(A^TA)x - 2b^TAx +|b|^2$$</span> By differentiating the last line over <span class="math-container">$x$</span> and setting the
resulting gradient to 0 you get the required expression.</p>
|
4,467,763 | <p>I have the equation</p>
<p><span class="math-container">$A\vec{x} = \vec{b} \tag{1}.$</span></p>
<p>where <span class="math-container">$A$</span> is an <span class="math-container">$m\times n$</span> matrix of rank <span class="math-container">$m$</span>, so that <span class="math-container">$m<n$</span> and the system is underdetermined. As I understand it, I can get the minimum L2 norm solution of the system by premultiplying <span class="math-container">$\vec{b}$</span> with the Moore-Penrose right inverse:</p>
<p><span class="math-container">$\vec{x} = A^T(AA^T)^{-1}\vec{b}\tag{2}.$</span></p>
<p>What I don't understand is how I get from (1) to (2), i.e. what are the algebraic steps? I'm specifically interested in knowing how to do it with just basic matrix manipulations, without calculus.</p>
| Klaas van Aarsen | 134,550 | <p>We're looking for the Moore-Penrose pseudo inverse <span class="math-container">$A^+$</span> so that we have <span class="math-container">$\vec x = A^+ \vec b$</span>.</p>
<p>Since <span class="math-container">$A$</span> is of full row rank <span class="math-container">$m$</span>, we can have that <span class="math-container">$$AA^+=I \tag 3$$</span>
Due to the full row rank we also have that <span class="math-container">$AA^T$</span> is invertible. So <span class="math-container">$$AA^T (AA^T)^{-1}=I \tag 4$$</span>
If we compare equations (3) and (4), we can see that <span class="math-container">$$A^+=A^T (AA^T)^{-1}\tag 5$$</span> satisfies equation (3).</p>
<p>In theory there could still be a different matrix that is "better". However, the Moore-Penrose pseudo inverse is unique, and we can verify that the matrix in (5) satisfies the 4 conditions of the definition as given on <a href="https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse" rel="nofollow noreferrer">wiki</a>.</p>
<p>Note that this approach only works if <span class="math-container">$A$</span> is of full row rank. Otherwise the formulas don't hold.</p>
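<p>A quick numerical confirmation that <span class="math-container">$(5)$</span> agrees with the pseudo-inverse and with the minimum-norm least-squares solution, using a hypothetical random example in NumPy:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))   # full row rank with probability 1
b = rng.standard_normal(3)

x = A.T @ np.linalg.inv(A @ A.T) @ b                          # formula (5) applied to b
print(np.allclose(A @ x, b))                                  # it really solves Ax = b
print(np.allclose(x, np.linalg.pinv(A) @ b))                  # it matches the pseudo-inverse
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))   # minimum-norm solution
```

<p>(For underdetermined systems, <code>np.linalg.lstsq</code> returns the minimum-norm solution, which is why the last comparison is meaningful.)</p>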
|
2,933,753 | <p>Given two finite groups <span class="math-container">$G, H$</span>, we are going to say that <span class="math-container">$G<_oH$</span> if either</p>
<p>a. <span class="math-container">$|G|<|H|$</span></p>
<p>or </p>
<p>b. <span class="math-container">$|G|=|H|$</span> and <span class="math-container">$\displaystyle\sum_{g\in G} o(g)<\sum_{h\in H} o(h)$</span>,</p>
<p>where <span class="math-container">$o(g)$</span> denotes the order of the element <span class="math-container">$g$</span> (has this ordering a name?).</p>
<p>What is the smallest example (in this ordering) of a pair of nonisomorphic groups such that <span class="math-container">$G$</span> and <span class="math-container">$H$</span> are incomparable, i.e., such that they have same cardinal and same sum of orders of elements?</p>
| Travis Willse | 155,629 | <p>The following Maple function (code here requires the Maple package <code>GroupTheory</code>) takes as argument a group and returns the sum of the orders of its elements:</p>
<pre><code>sumOfOrders := G -> add(u, u in map(PermOrder, convert(Elements(G), list), G));
</code></pre>
<p>This function takes as argument a positive integer <span class="math-container">$n$</span> and returns the multiset of sums of orders of elements of groups of order <span class="math-container">$n$</span>:</p>
<pre><code>sumOfOrdersList := n -> sort(map(sumOfOrders, AllSmallGroups(n)));
</code></pre>
<p>Checking the lowest orders manually we find that groups incomparable in your sense occur first in order 16: Executing <code>sumOfOrdersList(16)</code> shows there are three groups with sum <span class="math-container">$47$</span>, three with sum <span class="math-container">$55$</span> and two with <span class="math-container">$87$</span>.</p>
<p>(Cf. <a href="https://groupprops.subwiki.org/wiki/Groups_of_order_16" rel="nofollow noreferrer">https://groupprops.subwiki.org/wiki/Groups_of_order_16</a> )</p>
|
114,487 | <p>I have a stack of images (usually ca 100) of the same sample. The images have intrinsic variation of the sample, which is my signal, and a lot of statistical noise. I did a principal components analysis (PCA) on the whole stack and found that components 2-5 are just random noise, whereas the rest is fine. How can I produce a new stack of images where the noise components are filtered out?</p>
<p>EDIT:</p>
<p>I am sorry I was not as active as you yesterday. I must admit I am bit overwhelm by the depth and yet simplicity of your answers. It is hard for me to choose one, since all of them work great and give what I actually wanted.</p>
<p>I feel that I need to elaborate a bit more the problem I am working on. Unfortunately, my supervisor does not allow me to upload nay data before we have published the final results, so I have to work in abstract terms. We have an atomic cloud cooled to a temperature of 10 µK. Due to inter-atomic and laser interaction, the atomic cloud (all of the atoms a whole) is excited and starts to oscillates in different vibrational modes. This dynamic behavior is of great interest to us, since it provides and insight to the inter-atomic physics. </p>
<p>The problem is that most of the relevant variations are obscured by noise due to the imaging process. The noise usually is greatly suppressed if you take two images one with Noise+Signal and Noise only and then subtract them. However, this does not work if the noise in the two images is not correlated, which sadly is our case. Therefore, we decided to use PCA, because there you can clearly see the oscillation modes and filter everything that is crap. If you are interested in using PCA to visualize dynamics, you can have a look at this paper by different group:</p>
<p><a href="http://iopscience.iop.org/article/10.1088/1367-2630/16/12/122001" rel="noreferrer">http://iopscience.iop.org/article/10.1088/1367-2630/16/12/122001</a></p>
<p>I deeply thank everybody who contributed.</p>
| Community | -1 | <p>As requested by Anton. I halved the amount of noise because otherwise some images have barely any signal left. As you can see below, we are still putting in a significant amount of noise.</p>
<p>(To conserve space I'm only visualizing the first ten images in this answer, but the denoising is happening over all 100 test images.)</p>
<pre><code>SeedRandom[2016] (* for reproducibility *)
MNISTdigits = ExampleData[{"MachineLearning", "MNIST"}, "TestData"];
images = RandomSample[Cases[MNISTdigits, (im_ -> 0) :> im], 100];
n = Length@images;
noisyImages = (ImageEffect[#, {"GaussianNoise", RandomReal[0.5]}] &) /@ images;
Take[noisyImages, 10]
</code></pre>
<p><a href="https://i.stack.imgur.com/PtIIG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PtIIG.png" alt="enter image description here"></a></p>
<p>Convert the data to vectors and find their mean:</p>
<pre><code>toVector = Flatten@*ImageData;
fromVector = Image[Partition[#, 28]] &;
data = toVector /@ noisyImages;
mean = Mean[data];
fromVector[mean]
</code></pre>
<p><a href="https://i.stack.imgur.com/UxxTs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UxxTs.png" alt="enter image description here"></a></p>
<p>Center the data by subtracting the mean from all the data points. Now, what we have is how much each image differs from the mean, i.e. the variations in how the letters are drawn, plus the noise in each image.</p>
<pre><code>centeredData = (# - mean &) /@ data;
(ImageAdjust@*fromVector) /@ Take[centeredData, 10]
</code></pre>
<p><a href="https://i.stack.imgur.com/wjYZg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wjYZg.png" alt="enter image description here"></a></p>
<p>The singular value decomposition extracts the patterns in these variations, from most to least common.</p>
<pre><code>{u, s, v} = SingularValueDecomposition[centeredData, n];
</code></pre>
<p>Apparently you already know which components are important, but if you didn't, you could look at the distribution of singular values:</p>
<pre><code>ListPlot[Diagonal@s, PlotRange -> All]
</code></pre>
<p><a href="https://i.stack.imgur.com/gQaXJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gQaXJ.png" alt="enter image description here"></a></p>
<p>It looks like the first five stand out from the rest. Let's look at what variations they encode:</p>
<pre><code>(ImageAdjust@*fromVector) /@ Take[Transpose[v], 5]
</code></pre>
<p><a href="https://i.stack.imgur.com/pnw8c.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pnw8c.png" alt="enter image description here"></a></p>
<p>Zero out all the rest of the singular values and reconstruct the modified data matrix, then the images themselves:</p>
<pre><code>snew = DiagonalMatrix@Table[If[i <= 5, s[[i, i]], 0], {i, n}]
denoisedData = (# + mean &) /@ (u.snew.Transpose[v])
denoisedImages = fromVector /@ denoisedData;
Take[denoisedImages, 10]
</code></pre>
<p><a href="https://i.stack.imgur.com/sL8FR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sL8FR.png" alt="enter image description here"></a></p>
<p>Let's review, because the actual denoising code is quite short and simple:</p>
<pre><code>data = toVector /@ noisyImages;
mean = Mean[data];
centeredData = (# - mean &) /@ data;
{u, s, v} = SingularValueDecomposition[centeredData, n];
snew = DiagonalMatrix@Table[If[i <= 5, s[[i, i]], 0], {i, n}]
denoisedData = (# + mean &) /@ (u.snew.Transpose[v])
denoisedImages = fromVector /@ denoisedData;
</code></pre>
<p><a href="https://i.stack.imgur.com/PtIIG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PtIIG.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/sL8FR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sL8FR.png" alt="enter image description here"></a></p>
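<p>For readers outside Mathematica, the same pipeline can be sketched in Python with NumPy; the 28×28 pattern and noise level below are made-up stand-ins for the noisy letter images, which are not reproduced here:</p>
<pre><code class="lang-python">import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 200 noisy copies of one fixed 28x28 pattern.
pattern = np.zeros((28, 28))
pattern[5:23, 13:15] = 1.0
data = np.array([(pattern + 0.3 * rng.standard_normal((28, 28))).ravel()
                 for _ in range(200)])

mean = data.mean(axis=0)                        # Mean[data]
centered = data - mean                          # centeredData
u, s, vt = np.linalg.svd(centered, full_matrices=False)

k = 5                                           # keep the dominant components
s_kept = np.where(np.arange(len(s)) < k, s, 0.0)
denoised = (u * s_kept) @ vt + mean             # u.snew.Transpose[v] + mean
</code></pre>
<p>Here the rows of <code>vt</code> play the role of <code>Transpose[v]</code> in the Mathematica code.</p>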
|
724,045 | <p>I believe I'm not correctly understanding the concept of unique factorization and irreducibles.</p>
<p>Consider $R = \mathbb{F}_7$ and $h \in R[x]$ where $h = x^4 + 4x^3 + 3x^2 + 5x + 6$. Now $h$ has the following factorizations:</p>
<p>$h_1 = (3x^2 + 3x + 4)(5x^2 + x + 5)$</p>
<p>and</p>
<p>$h_2 = (x^2 + x + 6)(x^2 + 3x + 1)$</p>
<p>All four of those quadratics are irreducible in $R[x]$.</p>
<p>So factorization does not appear to be unique in $R[x]$. But I read on Wikipedia (which may be wrong) that if $R$ is a UFD, then $R[x]$ is a UFD.</p>
<p>I must have a misunderstanding because $R = \mathbb{F}_7$ is a UFD, but we have shown two distinct factorizations of a polynomial in $R[x]$. Where am I going wrong?</p>
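<p>A quick computational check (Python sketch, with coefficient lists written constant term first) confirms that both products really equal $h$ over $\mathbb{F}_7$, and also that the factors differ only by the units $3$ and $5$ (note $3\cdot 5\equiv 1\pmod 7$):</p>
<pre><code class="lang-python">def polymul(a, b, p=7):
    """Multiply two polynomials over F_p; coefficients listed constant term first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

h  = [6, 5, 3, 4, 1]                  # x^4 + 4x^3 + 3x^2 + 5x + 6
h1 = polymul([4, 3, 3], [5, 1, 5])    # (3x^2 + 3x + 4)(5x^2 + x + 5)
h2 = polymul([6, 1, 1], [1, 3, 1])    # (x^2 + x + 6)(x^2 + 3x + 1)

# Associates: 3*(x^2 + x + 6) = 3x^2 + 3x + 4 and 5*(x^2 + 3x + 1) = 5x^2 + x + 5 mod 7.
assoc1 = [(3 * c) % 7 for c in [6, 1, 1]]
assoc2 = [(5 * c) % 7 for c in [1, 3, 1]]
</code></pre>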
| ncmathsadist | 4,154 | <p>You can get in trouble if the convergence is not absolute, i.e. $$\sum_n \|x_n\| = +\infty$$ but
$$\lim_{N\to\infty} \sum_{n\le N} x_n $$
exists.</p>
|
422,084 | <p>Are two copies of $\mathbb{R}^2$ with different topologies on them homeomorphic as topological spaces?
For example, with the discrete topology and the usual topology: what I need is a continuous bijection whose inverse is also continuous. Any continuous map from the usual topology to the discrete topology is constant, so I think they are not homeomorphic.
Thank you for your help and discussion.</p>
| Brian M. Scott | 12,042 | <p>No, they are not homeomorphic. $\Bbb R^2$ with the discrete topology is not connected, and $\Bbb R^2$ with the usual topology is connected, so there isn’t even a continuous map from $\Bbb R^2$ with the usual topology onto $\Bbb R^2$ with the discrete topology: continuous maps preserve connectedness. There are many other ways to see that they aren’t homeomorphic. For instance, the usual topology is separable, and the discrete topology isn’t. The usual topology has a countable base; the discrete topology does not.</p>
|
2,180,102 | <p>Suppose you have 3 labeled points on the surface of a piece of paper, like </p>
<pre><code> 1
2 3
</code></pre>
<p>This makes a perfect equilateral triangle.</p>
<p>From this perspective I can say that the camera is on top of the paper looking down. We can say the camera is at coordinate $(0,0,100)$, which is a 0 degree rotation about the Z axis and a 90 degree rotation about the Y axis.</p>
<p>Then, I move the camera to some arbitrary spot. Now, the points are like</p>
<pre><code> 1
2 3
</code></pre>
<p>This looks like the camera is farther back from the paper and was lowered, say at location $(0, -100, 50)$, which is about a -90 degree rotation about the Z axis and a 45 degree rotation about the Y axis.</p>
<p>So my question is basically, given the $(x_1,y_1), (x_2,y_2), (x_3,y_3)$. Is there some formula that can take these arbitrary points, compare it with the original 3, to know how much of a X,Y,Z rotation it is of the camera?</p>
<p>I can also rotate on angles like this. For example, I can take the second example from above, then I can rotate my head clockwise, making the numbers flip like</p>
<pre><code>2
1
3
</code></pre>
<p>I think it might be better to find a normal vector from the center.</p>
| arctic tern | 296,782 | <p>The proof is valid if you're allowed to use the fact that $\phi$ is multiplicative. Personally, the Chinese Remainder Theorem is what I consider to be <em>the reason</em> that $\phi$ is multiplicative, so this would be circular for me.</p>
<p>Yes, we do have $(\prod_i R_i)^\times=\prod_i R_i^\times$. It is actually straightforward to prove they are equal as subsets of $\prod_i R_i$: show every element of the former is an element of the latter and vice-versa.</p>
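<p>A brute-force check (Python sketch) of the multiplicativity being discussed, using the naive totient definition:</p>
<pre><code class="lang-python">from math import gcd

def phi(n):
    """Naive Euler totient: count of 1 <= k <= n coprime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)
</code></pre>
<p>Multiplicativity holds for coprime arguments but fails in general, e.g. $\phi(4)\phi(6)=4$ while $\phi(24)=8$.</p>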
|
1,523,427 | <p>Is it possible to cover all of $\mathbb{R}^2$ using balls $\{ B(x_n,n^{-1/2})\}_{n=1}^\infty$ of decreasing radius $n^{-1/2}$? I know that if we chose e.g. radius $n^{-1}$ it could never work because $\sum \pi (n^{-1})^2 < \infty$. But in this case the balls cover an infinite amount of area, so it seems that it may be possible to construct.</p>
<p>Edit: It can definitely be done with balls of radius $n^{-1/2+\epsilon}$ for any $\epsilon > 0$.</p>
| robjohn | 13,854 | <p>With disks centered on an $n{+}1\times n{+}1$ grid, we can cover $[0,1]^2$ with $(n+1)^2$ disks of radius $\frac1{n\sqrt2}$.</p>
<p>Therefore, we can cover $[0,1]^2$ with $\left(\left\lceil\sqrt{\frac n2}\,\right\rceil+1\right)^2$ disks of radius $\frac1{\sqrt{n}}$.</p>
<p>For $n\ge294$, $\left(\left\lceil\sqrt{\frac n2}\,\right\rceil+1\right)^2\le\frac23n$. Thus, we have shown</p>
<blockquote>
<p>For $n\ge294$, we can cover $[0,1]^2$ with at most $\frac23n$ disks of radius $\frac1{\sqrt{n}}$.</p>
</blockquote>
<p>Enumerate the squares $\big\{[i,i+1]\times[j,j+1]:i,j\in\mathbb{Z}\big\}=\big\{S_k:k=1,2,3,\dots\big\}$</p>
<p>Cover $S_k$ with $196\cdot3^{k-1}$ disks with radii from $\frac1{\sqrt{98\cdot3^{k-1}+1}}$ to $\frac1{\sqrt{294\cdot3^{k-1}}}$.</p>
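<p>The counting bound in the blockquoted claim can be sanity-checked numerically (a Python sketch; the tested range is arbitrary):</p>
<pre><code class="lang-python">import math

def disks_needed(n):
    """Disks of radius 1/sqrt(n) used to cover [0,1]^2 by the grid construction."""
    return (math.ceil(math.sqrt(n / 2)) + 1) ** 2
</code></pre>
<p>At $n=294$ the bound $\left(\left\lceil\sqrt{n/2}\,\right\rceil+1\right)^2\le\frac23 n$ holds with equality: $196 = \frac23\cdot 294$.</p>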
|
3,708,243 | <p>Equation: </p>
<blockquote>
<p><span class="math-container">$x^2-x-6=0$</span></p>
</blockquote>
<p>The two roots of this equation are <span class="math-container">$3$</span> and <span class="math-container">$-2$</span>. When writing the answer can I also write it as <span class="math-container">$-2, 3$</span> or do I have to maintain a certain order?</p>
| Peter Shor | 2,501 | <p>This isn't the way the simplex algorithm is usually presented, but it is certainly equivalent to the usual presentation. I'm going to use the standard terminology (pivots, phase I, basic feasible solution); if wherever you got this from doesn't use this, feel free to ask about it in comments.</p>
<p>This isn't a regular pivot step; this is a pivot step that is preparing for phase I of the simplex algorithm, in which we are trying to find a basic feasible solution of the original problem. In phase I, we don't use <em>z</em> so I will drop it. We had</p>
<p><span class="math-container">\begin{align}
& \max w \\
x_3 &= -1 +x_0 + x_2\\
x_4 &= -3 +x_0+x_1+x_2\\
x_5 &= 4 + x_0 - 2 x_1 - x_2\\
w &= -x_0\\
&\text{enters}\,x_0, \quad \text{exits:}\,x_4.\\
\end{align}</span></p>
<p>Let's figure out what the next step is.</p>
<p><span class="math-container">\begin{align}
& \max w \\
x_3 &= 2 - x_1 + x_4 \\ x_0 &= 3 - x_1 - x_2 + x_4 \\ x_5 &= 7 - 3x_1 - 2x_2 + x_4 \\ w &= -x_0.\\
\end{align}</span></p>
<p>Now, if we set all the variables not in the basis to <span class="math-container">$0$</span>, we get <span class="math-container">$x_3 = 2$</span>, <span class="math-container">$x_0=3$</span>, <span class="math-container">$x_5 = 7$</span>, and they're all positive, so we can apply a standard pivot. </p>
<p>If instead, we had taken <span class="math-container">$x_3$</span> out, we would have:
<span class="math-container">\begin{align}
& \max w \\
x_0 &= 1 - x_2 + x_3 \\ x_4 &= -2 + x_1 + x_3 \\ x_5 &= 5 - 2x_1 - 2x_2 +x_3 \\
w &= -x_0,
\end{align}</span></p>
<p>but this gives <span class="math-container">$x_4 = -2$</span>, which means we can't apply a standard pivot.</p>
|
354,986 | <p>I have to show that $h$ is measurable as well as $\int h d(\mu \times \nu) < \infty$ .</p>
<p>I tried showing by contradiction that $\int h$ had to be finite but I'm stuck with showing how it is measurable.</p>
| Julien | 38,053 | <p><strong>1- Measurability:</strong></p>
<p><strong>Fact:</strong> a function is measurable if and only if it is the pointwise limit of a sequence of <a href="http://en.wikipedia.org/wiki/Simple_function" rel="noreferrer">simple functions</a>.</p>
<p>So take $s_n, t_n$ simple such that $f(x)=\lim s_n(x)$ for every $x$ and $g(y)=\lim t_n(y)$ for every $y$. Then $h(x,y)=f(x)g(y)=\lim s_n(x)t_n(y)$ for every $(x,y)$. It only remains to observe that each $(x,y)\longmapsto s_n(x)t_n(y)$ is a simple function to conclude that $h$ is measurable. This follows readily from
$$
1_A(x)1_B(y)=1_{A\times B}(x,y).
$$</p>
<p><strong>2- Integrability:</strong></p>
<p>We will use the <a href="http://en.wikipedia.org/wiki/Monotone_convergence_theorem" rel="noreferrer">monotone convergence theorem</a>, assuming you have not proved Fubini yet. Otherwise, this is trivial. But how can you have Fubini if you don't have measurability?</p>
<p>Take two nondecreasing sequences of nonnegative simple functions $s_n(x)$, $t_n(y)$ converging pointwise to $|f(x)|$ and $|g(y)|$ respectively. Then $s_n(x)t_n(y)$ is a nondecreasing sequence of nonnegative simple functions converging pointwise to $|h|$. By the monotone convergence theorem
$$
\int |h| \;d(\mu\times \nu)=\lim\int s_n(x)t_n(y)\;d(\mu\times \nu)(x,y).
$$
By definition of the product measure
$$
\int 1_{A\times B}\;d(\mu\times \nu)=(\mu\times \nu) (A\times B)=\mu(A)\nu (B)=\int 1_Ad\mu\int 1_Bd\nu.
$$
By linearity, this extends to simple functions. Hence, by monotone convergence again,
$$
\int s_n(x)t_n(y)\;d(\mu\times \nu)(x,y)=\int s_nd\mu\int t_nd\nu\longrightarrow \int |f|d\mu\int |g|d\nu<\infty.
$$
So $h$ is integrable with
$$
\int |h| \;d(\mu\times \nu)=\int |f|d\mu\int |g|d\nu.
$$
Note that applying the above to $f_{\pm}$ and $g_{\pm}$, we can deduce
$$
\int h \;d(\mu\times \nu)=\int fd\mu\int gd\nu.
$$</p>
|
821,768 | <p><img src="https://i.stack.imgur.com/bS4PE.png" alt="enter image description here"></p>
<p>In the rectangle ABCD,
$$1. \, BE = EF = FC = AB$$
$$2. \, \angle AEB = \beta , \angle AFB = \alpha , \angle ACB = \theta. $$
Prove that $\alpha + \theta = \beta$.</p>
<p>I have so far obtained that - $$1. \cos\beta = \sin \beta$$ $$2.\cos\alpha = 2\sin\alpha$$ $$3. \cos \theta = 3\sin \theta$$
But I am not able to understand what to do next. Please help.</p>
| DSinghvi | 148,018 | <p>You have $\beta = 45^\circ$ because $ABE$ is an isosceles right triangle, so $\tan\beta = 1$. From the given ratios, $\tan\alpha = \tfrac12$ and $\tan\theta = \tfrac13$. Then
$$\tan(\alpha + \theta) = \frac{\tan\alpha + \tan\theta}{1 - \tan\alpha\tan\theta} = \frac{\tfrac12 + \tfrac13}{1 - \tfrac16} = \frac{5/6}{5/6} = 1 = \tan\beta,$$
which gives $\alpha + \theta = \beta$.</p>
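<p>A quick numerical check of this answer's claim (Python; with $AB$ taken as the unit length, so the tangents are $1$, $1/2$ and $1/3$):</p>
<pre><code class="lang-python">import math

beta  = math.atan2(1, 1)   # angle AEB: opposite AB = 1, adjacent BE = 1
alpha = math.atan2(1, 2)   # angle AFB: adjacent BF = BE + EF = 2
theta = math.atan2(1, 3)   # angle ACB: adjacent BC = 3
</code></pre>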
|
147,378 | <p>I have the following equation:</p>
<p>$$\frac{dx}{dt}+x=4\sin(t)$$</p>
<p>To solve it, I first find the homogeneous part:
$$f(h)=C*e^{-t}$$</p>
<p>Then finding $f(a)$ and $df(a)$:
$$f(a)=4A\sin(t)+4B\cos(t)$$
$$df(a)=4A\cos(t)-4B\sin(t)$$</p>
<p>Substituting in the original equation:</p>
<p>$$4A\cos(t)-4B\sin(t)+4A\sin(t)+4B\cos(t)=4\sin(t)$$</p>
<p>I have to find numerical values of $A$ and $B$, but I have absolutely no idea how to solve this; I am also not sure whether the steps I did are correct. Would someone please help with this equation?</p>
<p>The final answer should be substituted in:
$$x=f(h)+f(a)$$</p>
| Jon | 20,667 | <p>At the stage</p>
<p>$$4A\cos(t)-4B\sin(t)+4A\sin(t)+4B\cos(t)=4\sin(t)$$</p>
<p>you are done. Collecting common factors you will get</p>
<p>$$4(A+B)\cos(t)+4(A-B-1)\sin(t)=0$$</p>
<p>but this must be independent and so you have a system of equations</p>
<p>$$A-B=1$$</p>
<p>$$A+B=0$$</p>
<p>giving $A=-B=\frac{1}{2}$.</p>
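<p>A numerical sanity check (sketch) that the resulting particular solution $f(a) = 2\sin(t) - 2\cos(t)$ really satisfies $\frac{dx}{dt}+x=4\sin(t)$, using a central difference for the derivative:</p>
<pre><code class="lang-python">import math

def x_p(t):
    # Particular solution with A = 1/2, B = -1/2: 4A sin(t) + 4B cos(t)
    return 2 * math.sin(t) - 2 * math.cos(t)

def residual(t, h=1e-6):
    dx = (x_p(t + h) - x_p(t - h)) / (2 * h)   # numerical dx/dt
    return dx + x_p(t) - 4 * math.sin(t)
</code></pre>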
|
335,483 | <p>Let $N$ be the set of non-negative integers. Of course we know that $a+b=0$ implies that $a=b=0$ for $a, b \in N$.</p>
<p>How do (or can) we prove this fact if we don't know the subtraction or order?</p>
<p>In other words, we can only use the addition and multiplication.</p>
<p>Please give me advice.</p>
<p>EDIT</p>
<p>The addition law means that for $a, b \in N$, there is an element $a+b$ in $N$ and this operation is associative.
The multiplication law means that for $a, b \in N$, there is an element $ab$ in $N$ and this operation is associative.
Also the distribution laws hold.</p>
<hr>
<p>EDIT2</p>
<p>Let me rephrase the question since I don't want arguments on orders.</p>
<p>Let $N$ be a set with operation $+$ and $\times$.</p>
<p>$N$ is a monoid under each of the operations $+$ and $\times$. There is a unit element $0\in N$.</p>
<p>The distribution laws hold as in the case of the set of integers.</p>
<p>Can we prove the fact above with this assumption?</p>
| Ross Millikan | 1,827 | <p>Please set out what you mean by the addition law. You need the axiom that there is no number whose successor is $0$ or this fails. That is what distinguishes the integers from the naturals. It allows you to define order as $x \le y \leftrightarrow \exists (z) x+z=y$</p>
|
215,864 | <p>I ran into the following problem, which says that if you have a smooth curve evolving over time (say of finite length at the beginning), then</p>
<p>$$\frac{d}{dt}(curve \; length \; at \; time \; t)=-\int_{curve} k\cdot v \; ds,$$</p>
<p>where $k$ is the curvature of the curve and $v$ is the velocity of a point on the curve; $ds$ denotes integration with respect to arc length.</p>
<p>I have tried proving this, but I cannot manage it without reparametrizing the curves. Is there some neater way to do this?</p>
| cactus314 | 4,997 | <p>Here's a nice crib-sheet for <a href="http://www.mpi-inf.mpg.de/~ag4-gm/handouts/06gm_curves.pdf" rel="nofollow noreferrer">differential geometry of curves in space</a>.
<hr>
Let $\gamma: [a,b]\times [0,1] \to \mathbb{R}^3$ be a family of curves. The rate of change of arc-length is:
$$ \frac{d}{dt}(\mathrm{arc length})=\frac{d}{dt}\int_{a}^{b} ||\gamma'(s,t)|| ds
= \int_{a}^{b} \frac{d}{dt} ||\gamma'(s,t)|| ds $$
<del>I will ignore the last term since there's no dependence on endpoints in your question.</del> The length of the tangent vector is the square root of the inner product of $\gamma' = \frac{d\gamma}{ds}$ with itself:
$$ ||\gamma'(s,t)|| = ( \gamma'\cdot \gamma')^{1/2}$$
Differentiate both sides with respect to time and use the chain rule from calculus:
$$ \frac{d}{dt} ||\gamma'(s,t)|| = \frac{\frac{d\gamma'}{dt}\cdot \gamma'}{\sqrt{ \gamma'\cdot \gamma'}} = \frac{dv}{ds} \cdot T $$</p>
<p>where $T$ is the unit tangent vector. $$ T = \frac{\gamma'}{ ( \gamma'\cdot \gamma')^{1/2}} $$ Velocity is taken with respect to the deformation parameter $t$, not the curve parameter $s$. Since partial derivatives commute, we can relate $\gamma'$ and $v$:
$$ \frac{d \gamma'}{dt} = \frac{\partial^2 \gamma}{\partial s \partial t} = \frac{\partial}{\partial s} \frac{\partial \gamma}{\partial t} = \frac{dv}{ds}$$</p>
<p>Then... integration by parts:
$$\frac{d}{dt}(\mathrm{arc length})= \int \frac{dv}{ds} \cdot T ds = -\int v \cdot \frac{dT}{ds}ds= -
\int v \cdot (\kappa N) ds $$
Curvature is the derivative of the unit tangent vector $\fbox{$\frac{dT}{ds} = \kappa N$}$. It was missing in your problem that curvature points in the direction of the normal.
<hr>
<hr>
For small pieces of arc, the curvature is constant and the curve can be approximated by the arc of a circle. The arc-length is just $ds = r d\theta$.</p>
<p>The maximum potential for growth is when the arc moves outward in the normal direction. If each curve moves tangentially, the arc-length doesn't change. In general, the arc grows according to the normal component of the velocity, $\Delta (ds) = (v \Delta t \cdot N)d\theta $.
<img src="https://i.stack.imgur.com/MLrs0.gif" alt="enter image description here"></p>
<p>Globally we can imagine the curve swept out by $\gamma(\cdot,t)$. Our change in arc-length is $\frac{d^2A}{dt^2}$. This area should grow the most if it expands out in the normal direction, so it should be proportional to $v\cdot N$. Sharper turns should grow faster and we just showed it should be proportional to curvature $k$. So we get $\frac{d}{dt}(\mathrm{arclength}) =-\int v\cdot (kN) ds$.</p>
<p><img src="https://i.stack.imgur.com/PBnap.gif" alt="enter image description here"></p>
|
388,561 | <p>I am trying to do this question in Bredon's <em>Topology and geometry</em> about using the transversality theorem to show that the intersection of two manifolds is a manifold.</p>
<p>Now it goes as follows:</p>
<p>Let $f(x,y,z)=(2-(x^2+y^2)^{1/2})+z^2$ on $\mathbb{R}^3 - (0,0,z)$. Then one can show that $M=f^{-1}(1)$ is a manifold.</p>
<p>Now let $N = \{ (x,y,z) \in \mathbb{R}^3 | x^2+y^2=4 \}$. I need to show that $M \cap N$ is a manifold by using the transversality theorem, but I'm not quite sure how to do this.</p>
<p>I thought about finding the tangent planes to each of these manifolds and then showing that there are points in one of them that aren't in the other, but I don't know if this will work. So I was wondering if I could get some hints on this.</p>
<p>Thank you</p>
| Ted Shifrin | 71,348 | <p>Try writing down a function $g\colon\mathbb R^3-\{z\text{-axis}\}\to \mathbb R^2$ so that, for example, $g^{-1}(1,4) = M\cap N$ and $(1,4)$ is a regular value of $g$.</p>
|
3,501,052 | <p>I want to find the number of real roots of the polynomial <span class="math-container">$x^3+7x^2+6x+5$</span>.
Using Descartes' rule, this polynomial has either exactly one real root or 3 real roots (all negative). How can we settle on one answer without some long process?</p>
| Ross Millikan | 1,827 | <p>If there are three real roots, the value of the function must be of opposite signs at the points the derivative is zero. </p>
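<p>This criterion is easy to apply here (a Python sketch): the derivative $3x^2+14x+6$ vanishes at two real points, and $f$ has the same sign at both, so there is exactly one real root.</p>
<pre><code class="lang-python">import math

def f(x):
    return x**3 + 7 * x**2 + 6 * x + 5

# Critical points: roots of f'(x) = 3x^2 + 14x + 6
d = math.sqrt(14**2 - 4 * 3 * 6)
c1, c2 = (-14 - d) / 6, (-14 + d) / 6
same_sign = f(c1) * f(c2) > 0   # same sign at both critical points => one real root
</code></pre>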
|
3,501,052 | <p>I want to find the number of real roots of the polynomial <span class="math-container">$x^3+7x^2+6x+5$</span>.
Using Descartes' rule, this polynomial has either exactly one real root or 3 real roots (all negative). How can we settle on one answer without some long process?</p>
| 2'5 9'2 | 11,123 | <p>I note that this is "close" to <span class="math-container">$$(x+5)(x+1)(x+1)=x^3+7x^2+11x+5$$</span> which has a repeated root at <span class="math-container">$-1$</span>, and another root at <span class="math-container">$-5$</span>. The repeated root at <span class="math-container">$-1$</span> is a local minimum, considering the general shape of a cubic with positive leading coefficient.</p>
<p>So you have <span class="math-container">$$(x+5)(x+1)(x+1)-5x$$</span></p>
<p>Adding that <span class="math-container">$-5x$</span> is going to push the local minimum upward, since <span class="math-container">$-5x$</span> is positive near <span class="math-container">$-1$</span>. The doubled root will be perturbed into two non-real complex conjugate roots. And only the perturbed root near <span class="math-container">$-5$</span> will still be real.</p>
|
646,010 | <p>So I kinda think I have figured this out; I'm not very good at math, and need a formula to figure out some stats for a game I'm playing.</p>
<p>I have a weapon with a reload speed of X seconds; however, I also have a modifier attached that makes the weapon reload faster by +Y%.</p>
<p>I made this formula, mostly by guessing, as I have no clue what I am doing.</p>
<pre><code>100/(100+Y)*X
</code></pre>
<p>The results I am getting look right to me, but is the formula OK?</p>
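<p>To make the formula concrete, here it is as a tiny helper (Python sketch; the example numbers are made up). One sanity check: a +100% reload-speed bonus should halve the reload time.</p>
<pre><code class="lang-python">def reload_time(base_seconds, bonus_percent):
    """Effective reload time after a +Y% reload-speed modifier: 100/(100+Y)*X."""
    return 100 / (100 + bonus_percent) * base_seconds
</code></pre>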
| mkl314 | 123,304 | <p>The textbook's answer is wrong, and there is no way to prove that this integral diverges. Instead, there are ways to establish its convergence.</p>
<p>Since the integrand is continuous on segment $[0,1]$, it suffices to verify convergence on $(1,\infty)$, which can be established by substituting $x=\sqrt{t}$ followed by integrating by parts, thus reducing the integral to an absolutely convergent one:$$\int\limits_1^{\infty}(\ln{x})^2\sin(x^2)\,dx=\frac{1}{8}\cdot\!\!\int\limits_1^{\infty}\frac{(\ln{t})^2}{\sqrt{t}}\!\cdot \sin{t}\,dt=-\frac{1}{8}\cdot\!\!\int\limits_1^{\infty}\frac{(\ln{t})^2}{\sqrt{t}}\,d(\cos{t})=\dots$$</p>
|
430,654 | <p>Show that this sequence converges and find the limit.
$a_1 = 0$, $a_{n+1} = \sqrt{5+2a_{n} }$ </p>
| Brian M. Scott | 12,042 | <p>Suppose that we’ve shown — somehow — that the sequence converges to some limit $L$. Finding $L$ is then quite easy. Let $f(x)=\sqrt{5+2x}$; then $f$ is a continuous function on its domain, so</p>
<p>$$L=\lim_{n\to\infty}a_n=\lim_{n\to\infty}a_{n+1}=\lim_{n\to\infty}f(a_n)=f\left(\lim_{n\to\infty}a_n\right)=f(L)\;.$$</p>
<p>This is an argument that you’ll see over and over in problems of this and related kinds. Solve the equation $L=f(L)$ for $L$: it’s $L=\sqrt{5+2L}$, so $L^2=5+2L$, and after a little algebra we find that $L=1\pm\sqrt6$. Clearly $a_n\ge 0$ for all $n\in\Bbb N$; why? This means that the sequence cannot possibly have a negative limit, so if it has any limit at all, that limit must be $L=1+\sqrt6$.</p>
<p>One of the standard ways to show that a sequence converges is to show that it’s both monotone and bounded.</p>
<ul>
<li>Show that if $0\le x<1+\sqrt6$, then $x<f(x)$. This is an easy exercise in quadratic inequalities. </li>
<li>Show that if $0\le x<1+\sqrt6$, then $0<f(x)<1+\sqrt6$. One way is to show that $f$ is a strictly increasing function of $x$.</li>
</ul>
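<p>The two bullet points together give monotone convergence, and the limit can be confirmed numerically (a quick Python sketch):</p>
<pre><code class="lang-python">import math

a = 0.0                       # a_1 = 0
for _ in range(60):
    a = math.sqrt(5 + 2 * a)  # a_{n+1} = sqrt(5 + 2 a_n)

L = 1 + math.sqrt(6)          # the fixed point found above
</code></pre>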
|
3,278,797 | <p>I tried to solve it and got the answer $3$, but that is just my intuition; I don't have a concrete method to prove my answer. I reasoned like this: in order to maximize the fraction, we need to minimize the denominator, so if we plug $1$ into the expression, the denominator becomes $1$. With the denominator minimized, the value of the expression is $3$. That's how I reached my conclusion, but I can't prove mathematically that $3$ is the maximum.
Can anyone show me how to prove it properly?</p>
| José Carlos Santos | 446,262 | <p>Yes, the maximum is <span class="math-container">$3$</span>. Note that <span class="math-container">$x^2-x+1$</span> is always greater than <span class="math-container">$0$</span> and that<span class="math-container">$$\frac{x^2+x+1}{x^2-x+1}-3=-2\frac{x^2-2x+1}{x^2-x+1}=-2\frac{(x-1)^2}{x^2-x+1}<0,$$</span>unless <span class="math-container">$x=1$</span>.</p>
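<p>A quick numerical scan (Python sketch) agrees: the ratio never exceeds $3$, and attains it at $x=1$.</p>
<pre><code class="lang-python">def g(x):
    return (x * x + x + 1) / (x * x - x + 1)

samples = [i / 100 for i in range(-1000, 1001)]   # x in [-10, 10]
values = [g(x) for x in samples]
</code></pre>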
|
4,518,908 | <p>For sufficiently large integer <span class="math-container">$m$</span>, in order to prove</p>
<p><span class="math-container">$\frac{(m+1)}{m}<\log(m)$</span></p>
<p>is it sufficient to point out that</p>
<p><span class="math-container">$ \displaystyle\lim_{m \to \infty} \frac{(m+1)}{m}=1 $</span></p>
<p>while</p>
<p><span class="math-container">$ \displaystyle\lim_{m \to \infty} \log(m)=\infty $</span>?</p>
| Dark Rebellion | 858,891 | <p><span class="math-container">$\forall x(Px\implies\forall y:P(x+y))$</span></p>
<p><span class="math-container">$\iff \forall x(Px\implies\forall y(True \implies P(x+y)))$</span></p>
<p><span class="math-container">$\iff \forall x(Px\implies\forall y(\exists z:z=x+y \implies P(x+y)))$</span>, this is true because the universe is the set of real numbers</p>
<p><span class="math-container">$\iff \forall x(Px\implies\forall y\forall z(z=x+y \implies P(x+y)))$</span></p>
<p><span class="math-container">$\iff \forall x(Px\implies\forall y\forall z(z=x+y \implies Pz))$</span></p>
<p><span class="math-container">$\iff \forall x(Px\implies\forall y\forall z(y=z-x \implies Pz))$</span></p>
<p><span class="math-container">$\iff \forall x(Px\implies\forall z(\exists y:y=z-x \implies Pz))$</span></p>
<p><span class="math-container">$\iff \forall x(Px\implies\forall z(True \implies Pz))$</span>, this is also true because the universe is the set of real numbers</p>
<p><span class="math-container">$\iff \forall x(Px\implies\forall z:Pz)$</span></p>
<p><span class="math-container">$\iff \exists x:Px\implies\forall z:Pz$</span></p>
<p><span class="math-container">$\iff \neg\exists x:Px\lor\forall z:Pz$</span></p>
<p><span class="math-container">$\iff \forall x:\neg Px\lor\forall z:Pz$</span></p>
<p><span class="math-container">$\iff \forall x:\neg Px\lor\forall x:Px$</span></p>
<p>In conclusion, yes, both statements are equivalent.</p>
|
130,806 | <p><strong>Question:</strong> Let $f$ be a continuous and differentiable function on $[0, \infty[$, with $f(0) = 0$ and such that $f'$ is an increasing function on $[0, \infty[$. Show that the function $g$, defined on $[0, \infty[$ by $$g(x) = \begin{cases} \frac{f(x)}{x}, & \text{if }x\gt0, \\ f'(0), & \text{if }x=0, \end{cases}$$ is an increasing function.</p>
<p>I have tried to solve this problem but I don't know whether I have done it right or not. </p>
<p><strong>Solution:</strong> I have applied the mean value theorem on the interval $[0, x]$. Then, since $f(0)=0$, $$\frac{f(x)}{x} = \frac{f(x)-f(0)}{x-0} = f'(c), \quad 0\lt c \lt x.$$ </p>
<p>It is given that $f'$ is an increasing function. So I deduce that $\frac{f(x)}{x}$ is also increasing.</p>
<p>Further, $$g(x) = f'(c) \text{ such that } 0<c<x.$$ Therefore, $$g(0) = f'(c) \text{ such that } 0<c<0,$$
so $c=0$.</p>
<p>Thus $g(x) = f'(0)$ at $x=0$</p>
| Davide Giraudo | 9,849 | <p>$c$ depends on $x$, and what you did doesn't prove that if $x_1\leq x_2$ then $c_{x_1}\leq c_{x_2}$. </p>
<p>But we can write for $x>0$, since $f'$ is increasing hence integrable over finite intervals
$$\frac{f(x)}x=\frac{f(x)-f(0)}x=\frac 1x\int_0^xf'(t)dt=\int_0^1f'(xs)ds$$
by the substitution $t=xs$. This formula also works for $x=0$, and now it's easy to deduce that $g$ is increasing: if $x_1\leq x_2$, for all $0\leq s\leq 1$ we have $sx_1\leq sx_2$ and since $f'$ is increasing...</p>
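<p>This can be illustrated numerically (a sketch with the sample convex choice $f(x)=e^x-1$, which satisfies the hypotheses: $f(0)=0$, $f'=e^x$ increasing, $f'(0)=1$):</p>
<pre><code class="lang-python">import math

def g(x):
    # g(x) = f(x)/x for x > 0, g(0) = f'(0), with f(x) = e^x - 1
    return (math.exp(x) - 1) / x if x > 0 else 1.0

xs = [i / 100 for i in range(0, 501)]   # grid on [0, 5]
vals = [g(x) for x in xs]
</code></pre>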
|
40,709 | <p>Wolfram's MathWorld website, at the page on <a href="http://mathworld.wolfram.com/Function.html" rel="nofollow">functions</a>, makes the following claim about the notation $f(x)$ for a function:</p>
<blockquote>
<p>While this notation is deprecated by professional mathematicians, it is the more familiar one for most nonprofessionals.</p>
</blockquote>
<p>From context, it appears that this is referring to the use of $f(x)$ to refer to <em>the actual function</em>, rather than just to a particular value, when $x$ is (in the context) a dummy variable.</p>
<p>Is this true? Do professional mathematicians "deprecate" this notation?</p>
<p>To avoid long and windy discussions as to the values or otherwise of this notation (which would be much more appropriate in a blog), this question should be viewed as a poll. As MO runs on StackExchange 1.0, it doesn't have the feature whereby the actual "up" and "down" votes for an answer can be easily seen. Therefore I shall post two answers, one in favour and one against, the following statement. Please <strong>only vote up</strong>. A vote for one answer will be taken as a vote against the other. The Law of the Excluded Middle does not hold here. The motion is:</p>
<blockquote>
<p>This house believes that the notation $f(x)$ to refer to a function has value in professional mathematics and that there is no need to apologise or feel embarrassed when using it thus.</p>
</blockquote>
<p><strong>This poll has now run its course. The final tally can be seen below.</strong></p>
| Andrew Stacey | 45 | <p>Vote for this answer if you <strong>disagree</strong> with the statement:</p>
<blockquote>
<p>This house believes that the notation $f(x)$ to refer to a function has value in professional mathematics and that there is no need to apologise or feel embarrassed when using it thus.</p>
</blockquote>
<p>(Note: the answer is CW so that this is a genuine poll)</p>
|
2,097,557 | <blockquote>
<p>If $0< \alpha, \beta< \pi$ and $\cos\alpha + \cos\beta-\cos (\alpha + \beta) =3/2$ then prove $\alpha = \beta= \pi/3$</p>
</blockquote>
<p>How do I solve for $\alpha$ and $\beta$ when only one equation is given? By simplification I came up with something like
$$
\sin\frac{\alpha}{2} \sin\frac{\beta}{2} \cos \frac {\alpha +\beta}{2}=\frac{1}{8}.
$$
I don't know if this helps. How to do this?</p>
| lab bhattacharjee | 33,337 | <p>HINT:</p>
<p>Use <a href="http://mathworld.wolfram.com/ProsthaphaeresisFormulas.html" rel="nofollow noreferrer">Prosthaphaeresis</a> Formula on $\cos\alpha,\cos\beta$</p>
<p>and Double angle formula on $$\cos 2\cdot\dfrac{\alpha+\beta}2$$ to get</p>
<p>$$2\cos^2\dfrac{\alpha+\beta}2-2\cos\dfrac{\alpha-\beta}2\cos\dfrac{\alpha+\beta}2+\dfrac12=0\ \ \ \ (1)$$ which is a Quadratic Equation in $\cos\dfrac{\alpha+\beta}2$ whose discriminant must be $\not<0$</p>
<p>i.e., $$\left(2\cos\dfrac{\alpha-\beta}2\right)^2-4=-4\sin^2\dfrac{\alpha-\beta}2\ge0$$</p>
<p>But for real $\dfrac{\alpha-\beta}2,$ $$\sin^2\dfrac{\alpha-\beta}2\ge0$$</p>
<p>So, we have $$\sin^2\dfrac{\alpha-\beta}2=0\iff\sin\dfrac{\alpha-\beta}2=0$$</p>
<p>$\implies\dfrac{\alpha-\beta}2=n\pi$ where $n$ is any integer</p>
<p>But as $0<\alpha,\beta<\pi$, we must have $n=0$, so $\alpha-\beta=0$, i.e. $\alpha=\beta$.</p>
<p>So, $(1)$ is reduced to $$0=2\cos^2\beta-2\cos\beta+\dfrac12=\dfrac{(2\cos\beta-1)^2}2$$</p>
<p>So, $\cos\beta=\dfrac12=\cos\dfrac\pi3\implies\beta=2m\pi\pm\dfrac\pi3$ where $m$ is any integer.</p>
<p>But as $0<\beta<\pi$, this forces $\beta=\dfrac\pi3$, and hence $\alpha=\beta=\dfrac\pi3$.</p>
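<p>A numerical check (Python sketch): $\alpha=\beta=\pi/3$ gives exactly $3/2$, and a coarse grid search over $(0,\pi)^2$ finds no larger value.</p>
<pre><code class="lang-python">import math

def F(a, b):
    return math.cos(a) + math.cos(b) - math.cos(a + b)

peak = F(math.pi / 3, math.pi / 3)

# Coarse grid search over (0, pi) x (0, pi)
grid = [k * math.pi / 200 for k in range(1, 200)]
best = max(F(a, b) for a in grid for b in grid)
</code></pre>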
|
1,087,080 | <p>By definition, a closed set is a set that contains its limit points. However, by the time the closed set contains its limit points, those points are no longer limit points and become isolated points. For example:</p>
<p>$\mathbf A = \{\frac{1}{n}: n \in \mathbb N \}$. The limit of this set (set $\mathbf A$) is clearly equal to $0$. This is because the $\epsilon$ -neighborhood $\mathbf V_{\epsilon}(0) \cap \mathbf A = \{\frac{1}{n} \}$, and $\frac{1}{n} \neq 0$. However, when $0$ is included, the $\epsilon$ -neighborhood $\mathbf V_{\epsilon}(0) \cap \mathbf A = \{0 \}$ for $\mathbf A=[0,\frac{1}{n} ]$. This contradicts the definition of a limit point of set $\mathbf A$, and hence $0$ must be an isolated point.</p>
<p>Another example: $\left(a,b\right)$ is an open interval with limit $a$ and $b$. Then its closure $\bar A$ will be $\left[a,b\right]$. By definition, the $\epsilon$ -neighborhood of any point in $\left[a,b\right]$ intersects the closure $\bar A$ at that same point, and hence, no points in that closure set is a limit point: A contradiction that closure sets are closed sets.</p>
<p>Also, I am trying to prove the lemma: if $x$ is a limit point of $A$ and $A \subseteq A'$, then $x$ is a limit point of $A'$.
Proof: Suppose $x$ is a limit point of $A$. Then there exists a sequence $(a_n) \subset A \subseteq A'$ with $\lim(a_n)=x$ and $a_n \neq x$ for all $n \in \mathbb N$. Then since $(a_n) \subset A'$, it follows that $x$ must be a limit point of $A'$.</p>
<p>So my questions are:</p>
<p><strong>1. What is wrong with my contradiction in the 2 examples? Please explain them to me.
2. Is my proof for the lemma correct? I am going to use it for the proof that closure set is closed.</strong></p>
<p>My background: I am studying elementary real analysis, starting with Abbott. I thank you very much for your help.</p>
<p><strong>Extra question</strong>: We have this theorem: $x$ is a limit point of set $A$ if and only if there exists a sequence $(a_n) \subset A$ such that $\lim (a_n)=x$ with $a_n \neq x$ for all $n$. So, if $a_n = x$ for some finite $n \in \mathbb N$, is $x$ still a limit point of set $A$? I thought that $x$ would be an isolated point, since we need $a_n \neq x$ $\forall n \in \mathbb N$.</p>
<p>I thank you again for your answers.</p>
| Sultan of Swing | 144,369 | <p><strong>Regarding the lemma</strong>: Suppose $x$ is a limit point of $A$. Then <em>every</em> <strong>neighborhood</strong> of $x$ contains a point in $A$ that is different than $x$. Since $A \subseteq A'$, then we can clearly see that these <strong>neighborhoods</strong> certainly contain a point in $A'$. The reason for this is because these neighborhoods contained at least a point in $A$, and we know $A$ is a subset of $A'$, so every point in $A$ is also in $A'$.</p>
<p>Using your proof: suppose there exists $(a_n) \rightarrow x$, and $(a_n)$ is a sequence in $A$. We know $a_n \neq x$ $\forall n \in \mathbb N$. Since $A$ is subset of $A'$, we must have that $(a_n)$ is a sequence in $A'$. Therefore, since we have the existence of a sequence in $A'$ that converges to $x$ where $a_n \neq x$ $\forall n \in \mathbb N$, we must have that $x$ is a limit point of $A'$.</p>
<p><strong>In your second example</strong>, remember a set is closed if and only if it contains all of its limit points. Your "contradiction" is wrong; $[a,b]$ contains its limit points (you mentioned that $(a,b)$ has limits $a$ and $b$), therefore it is closed. You based your contradiction on the fact that "the neighborhood of any point in $\bar A = [a,b]$ will intersect $\bar A = [a,b]$ at that same point," but those same neighborhoods will also intersect <em>other</em>, distinct points in $[a,b]$. This means that <em>every</em> point in $[a,b]$ is a limit point, whereas you said <em>no points</em> were limit points. They <em>all</em> are.</p>
<p><strong>Extra question</strong>: No, if the sequence has a term that is <em>equal</em> to $x$, then $x$ is not a limit point. The sequence of points must be in $A\backslash \{x\}$. The theorem is: "$x$ is a limit point of $A$ if and only if there exists a sequence in $A$ whose limit is $x$ and none of the terms in the sequence are equal to $x$". Or "$x$ is a limit point of $A$ if and only if there exists a sequence in $A\backslash \{x\}$ whose limit is $x$."</p>
|
1,087,080 | <p>By definition, a closed set is a set that contains its limit points. However, by the time the closed set contains its limit points, those points are no longer limit points and become isolated points. For example:</p>
<p>$\mathbf A = \{\frac{1}{n}: n \in \mathbb N \}$. The limit of this set (set $\mathbf A$) is clearly equal to $0$. This is because the $\epsilon$ -neighborhood $\mathbf V_{\epsilon}(0) \cap \mathbf A = \{\frac{1}{n} \}$, and $\frac{1}{n} \neq 0$. However, when $0$ is included, the $\epsilon$ -neighborhood $\mathbf V_{\epsilon}(0) \cap \mathbf A = \{0 \}$ for $\mathbf A=[0,\frac{1}{n} ]$. This contradicts the definition of a limit point of set $\mathbf A$, and hence $0$ must be an isolated point.</p>
<p>Another example: $\left(a,b\right)$ is an open interval with limits $a$ and $b$. Then its closure $\bar A$ will be $\left[a,b\right]$. By definition, the $\epsilon$ -neighborhood of any point in $\left[a,b\right]$ intersects the closure $\bar A$ at that same point, and hence, no point in that closure set is a limit point: a contradiction with the fact that closure sets are closed sets.</p>
<p>Also, I am trying to prove the lemma: If x is a limit point of $A \subseteq A'$, then x is a limit point of $A'$.
Proof: Suppose x is a limit point of $A$; then there exists a sequence $(a_n) \subset A \subseteq A'$ with $\lim(a_n)=x$ and $a_n \neq x$ for all $n \in \mathbb N$. Then since $(a_n) \subset A'$, it follows that x must be a limit point of $A'$.</p>
<p>So my questions are:</p>
<p><strong>1. What is wrong with my contradiction in the 2 examples? Please explain them to me.
2. Is my proof for the lemma correct? I am going to use it for the proof that closure set is closed.</strong></p>
<p>My background: I am studying elementary Real Analysis by starting with Abbot. I thank you very much for your help.</p>
<p><strong>Extra question</strong>: We have this theorem: x is a limit point of set $A$ if and only if there exists a sequence $(a_n) \subset A$ such that $\lim (a_n)=x$ with $a_n \neq x$ for all $n$. So, if $a_n = x$ for some finite $n \in \mathbb N$, is x still a limit point of set $A$? I thought that x would be an isolated point, since we need $a_n \neq x$ $\forall n \in \mathbb N$.</p>
<p>I thank you again for your answers.</p>
| egreg | 62,967 | <p>First of all, recall that the notion of limit point has a meaning only for <em>subsets</em> of the real numbers (or, more generally but not too generally, of a metric space).</p>
<p>A point $x$ is a limit point of $A$ if and only if, for every $\varepsilon>0$, the interval $(x-\varepsilon,x+\varepsilon)$ contains a point of $A$ <em>different from</em> $x$.</p>
<p>Equivalently, $x$ is a limit point of $A$ if and only if there exists a sequence $(a_n)$ in $A$, with $a_n\ne x$ for all $n$, and $\lim_{n\to\infty}a_n=x$.</p>
<p>This is linked to your extra question. If a finite number of terms of a sequence $(a_n)$ equal $x$, then there exists $k$ such that $a_n\ne x$ for all $n>k$. Then define $b_n=a_{n+k}$: the sequence $(b_n)$ has all terms different from $x$ and, if $(a_n)$ converges to $x$, also $(b_n)$ converges to $x$.</p>
<p>Let's look at your “contradictions”. Note that nowhere in the definition is it required that a limit point of $A$ doesn't belong to $A$.</p>
<p>For instance, <em>every point of $[0,1]$ is a limit point of $[0,1]$.</em> Indeed, if $0< x\le1$, let $n_0$ be the first natural number such that $1/n_0<x$. Then $x-1/n_0>0$ and the sequence $$a_n=x-\frac{1}{n+n_0}$$ is in $[0,1]$ and converges to $x$, without ever assuming the value $x$. For $0$, just consider $a_n=1/n$.</p>
<p>Similarly, $0$ is a limit point of $\{0\}\cup\{1/n:n\in\mathbb{N},n>0\}$. In this case, however, no other point is a limit point. So this set contains each of its limit points and is closed.</p>
<p>The important thing to note is that <em>a sequence in $A$ converging to $x$ without ever assuming the value $x$ must exist</em>, not that every sequence in $A$ that converges to $x$ shouldn't assume the value $x$. A sequence in $A$ showing that $x$ is a limit point is good for proving that $x$ is a limit point of the closure of $A$ as well!</p>
<p>Your proof of the lemma is good. In particular, it confirms that a limit point of $A$ is also a limit point of the closure of $A$.</p>
|
4,638,490 | <p>Given functions f and g, as above, what exactly does it mean? Does it mean, for example, that g(n) is <em>exactly</em> equal to <span class="math-container">$2^{h(n)}$</span> for some function h contained in <span class="math-container">$O(f(n))$</span> - or does it rather mean that <span class="math-container">$g(n) = O(2^{h(n)})$</span> for some function h contained in <span class="math-container">$O(f(n))$</span>? Any help would be much appreciated!</p>
| Axo | 1,012,859 | <p>Remember big-Oh <span class="math-container">$(O)$</span> is an asymptotic upper bound. So if for some <span class="math-container">$f,g$</span> you have <span class="math-container">$f(x) = O(g(x))$</span>, then as <span class="math-container">$x$</span> approaches infinity, <span class="math-container">$f$</span> is bounded by a constant multiple of <span class="math-container">$g$</span>. This tells us that there exists a natural number <span class="math-container">$N$</span> and a point <span class="math-container">$x_0$</span> such that,</p>
<p><span class="math-container">$$ |f(x)| \leq N\cdot g(x), \tag{$x \geq x_0$}$$</span></p>
<p>This can be used to show that every element of <span class="math-container">$O(g(x))$</span> is bounded by a constant multiple of <span class="math-container">$g(x)$</span>, so up to constant factors they can be considered "equivalent". So you can say,</p>
<p><span class="math-container">$$ 2^{O(f(n))} \equiv 2^{h(n)} \tag{$h(n) \in O(f(n))$}$$</span> This is not the same as <span class="math-container">$O(2^{f(n)})$</span>, as then for any <span class="math-container">$g(n)$</span> in this set, there is a natural number <span class="math-container">$N'$</span> and a point <span class="math-container">$n_0$</span> such that,</p>
<p><span class="math-container">$$ |g(n)| \leq N' \cdot 2^{f(n)} \tag{$n \geq n_0$}$$</span></p>
<p>but this is not the same as membership in <span class="math-container">$2^{O(f(n))}$</span>: raising the bound <span class="math-container">$|h(n)| \leq N \cdot f(n)$</span> to a power of 2 only gives <span class="math-container">$2^{|h(n)|} \leq \left(2^{f(n)}\right)^{N}$</span> (for <span class="math-container">$f(n) \geq 0$</span>), an exponential blow-up rather than a constant factor.</p>
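<p>A concrete illustration of the gap (a hypothetical example, not from the question): take <span class="math-container">$f(n) = n$</span> and <span class="math-container">$h(n) = 2n \in O(f(n))$</span>; then <span class="math-container">$2^{h(n)} = 4^n$</span> lies in <span class="math-container">$2^{O(f(n))}$</span>, but <span class="math-container">$4^n \notin O(2^n)$</span>, since the ratio <span class="math-container">$4^n/2^n = 2^n$</span> is unbounded.</p>

```python
# Hypothetical illustration: h(n) = 2n is in O(n), so 2**h(n) = 4**n belongs
# to 2**O(n), yet 4**n is NOT in O(2**n): no constant N' can bound the ratio.
def ratio(n):
    return 4**n // 2**n   # exact integer division; equals 2**n

ratios = [ratio(n) for n in range(1, 11)]
assert all(b == 2 * a for a, b in zip(ratios, ratios[1:]))  # keeps doubling
print(ratios[:5])  # -> [2, 4, 8, 16, 32]
```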
|
1,102,668 | <p><a href="https://math.stackexchange.com/q/67994/198434">This question</a> shows how dividing both sides of an equation by some $f(x)$ may eliminate some solutions, namely $f(x)=0$. Naturally, all examples admit $f(x)=0$ as a solution to prove the point.</p>
<p>I tried to find a simple example of an equation that could be solved by dividing both sides by some $f(x)$ but where $f(x)=0$ was not a solution, and failed miserably. Sure, I can divide both sides of, let's say, $x^2-1=0$ by $x$ ($x=0$ is not a solution), but that doesn't help me solve the equation.</p>
<p>I started wondering if actually the equations that can be solved (or at least simplified) by dividing both sides by some $f(x)$ were precisely those where $f(x)=0$ is a solution: by removing a solution, the division reduces the equation to a simpler form. This is particularly obvious with this example given in that question's <a href="https://math.stackexchange.com/a/68001/198434">accepted answer</a>:</p>
<p>$$(x-1)(x-2)(x-3)(x-4)(x-5)(x-6)=0.$$</p>
<p>By successively dividing by $x-1$, $x-2$ and so on, the equation becomes simpler as the solutions are removed, until there's no solution left ($1=0$).</p>
<p>However, both the accepted answer and the quote in the question itself say that $f(x)=0$ <em>may</em> be a solution, which I also understand as it <em>may not</em> be one.</p>
<p>So, are there equations where dividing by some $f(x)$ significantly improves the equation resolution, without $f(x)=0$ being a solution?</p>
| user141592 | 178,602 | <p>Well, it depends. In general, over any field, every root of a divisor $g(x)$ of $f(x)$ is also a root of $f(x)$: if $g(x)$ divides $f(x)$, then $f(x) = q(x)g(x)$, so $f(x)$ can only be zero, by the null factor law, if $g(x) = 0$ or $q(x) = 0$. However, $g(x)=0$ may not have any solutions in the field you are working in. Take for example $f(x) = (x^2 + 1)(x-2)$. In this case, $x^2+1$ divides $f(x)$, but it gives no solutions if you only want to find the <strong>real</strong> roots. If however you are working over an algebraically closed field, like $\mathbb C$, then every factor of $f(x)$ also gives a zero of $f(x)$.</p>
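<p>The example can be checked numerically (an illustrative sketch, added here):</p>

```python
# Checking the example f(x) = (x^2 + 1)(x - 2) numerically.
def f(x):
    return (x * x + 1) * (x - 2)

# x = 2 is the only real root; x^2 + 1 > 0 for every real x, so dividing
# f by that factor removes no *real* solutions.
assert f(2) == 0
assert all(x * x + 1 > 0 for x in [-10.0, -1.0, 0.0, 1.0, 10.0])

# Over the complex numbers, the factor x^2 + 1 contributes the roots +-i.
assert f(1j) == 0 and f(-1j) == 0
print("real root:", 2, "roots of the extra factor:", 1j, -1j)
```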
|
42,617 | <p>Let $f(x,y)$ define a surface $S$
in $\mathbb{R}^3$ with a unique local minimum at $b \in S$.
Suppose <a href="http://en.wikipedia.org/wiki/Gradient_descent" rel="noreferrer">gradient descent</a> from any start point $a \in S$
follows a geodesic on $S$ from $a$ to $b$.
(<b>Q1</b>.)
What is the class of functions/surfaces
whose gradient-descent paths are geodesics?</p>
<p>Certainly if $S$ is a surface of revolution
about a $z$-vertical line through $b$,
its "meridians"
are geodesics, and these would be the paths followed
by gradient descent down to $b$.
So the class of surfaces includes surfaces of
revolution. But surely it is wider than that?</p>
<p>(<b>Q2</b>.)
One could ask the same question about paths followed by
<a href="http://en.wikipedia.org/wiki/Newton%27s_method_in_optimization" rel="noreferrer">Newton's method</a>, which in general are different from gradient-descent
paths, as indicated in this Wikipedia image:
<br />
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/d/da/Newton_optimization_vs_grad_descent.svg/220px-Newton_optimization_vs_grad_descent.svg.png" alt="Newton's vs. Gradient">
<em>Gradient descent: green.
Newton's method: red.</em>
<br /></p>
<p>(<b>Q3</b>.) These questions make sense in arbitrary dimensions,
although my primary interest is for surfaces in $\mathbb{R}^3$.</p>
<p>Any ideas on how to formulate my question as constraints on $f(\;)$,
or pointers to relevant literature,
would be appreciated. Thanks!</p>
| Willie Wong | 3,948 | <p>For (Q1). The tangent space of $S$ is generated by the gradient flow vector field $v = (|\nabla f|^2, \nabla f)$ and the tangents to the level sets $w= (0, \nabla^\perp f)$. The geodesic constraint can be imposed as the condition "no sideways acceleration", which means that $[(\nabla f \cdot \nabla )v] \cdot w = 0$. This implies that $\nabla^2_{ij} f \nabla^if \nabla^{(\perp)j}f = 0$. In other words, the eigendirections of the Hessian of $f$ must be $\nabla f$ and its orthogonal, or that $\nabla f$ is parallel to $\nabla |\nabla f|^2$. So this means that <strong>$f$ and $|\nabla f|^2$ share the same level sets</strong>. (This same characterization is valid for any dimension; so also answers (Q3). )</p>
<p>In particular, this answers Denis Serre's (Q4) in the positive. </p>
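<p>The criterion is easy to test on examples; the sketch below (an illustrative addition with hand-computed derivatives, not part of the original argument) confirms it for the surface of revolution $f = x^2 + y^2$ and shows it failing for $f = x^2 + 2y^2$:</p>

```python
# Numeric check of the criterion: gradient-flow lines are geodesics iff
# grad(f) is parallel to grad(|grad f|^2), i.e. their 2D cross product vanishes.

def cross_of_gradients(fx, fy, gx, gy, x, y):
    # z-component of grad f x grad |grad f|^2 at the point (x, y)
    return fx(x, y) * gy(x, y) - fy(x, y) * gx(x, y)

# Surface of revolution f = x^2 + y^2:
#   grad f = (2x, 2y), |grad f|^2 = 4x^2 + 4y^2, whose gradient is (8x, 8y)
rot = cross_of_gradients(lambda x, y: 2*x, lambda x, y: 2*y,
                         lambda x, y: 8*x, lambda x, y: 8*y, 1.3, -0.7)

# Non-symmetric f = x^2 + 2y^2:
#   grad f = (2x, 4y), |grad f|^2 = 4x^2 + 16y^2, whose gradient is (8x, 32y)
non = cross_of_gradients(lambda x, y: 2*x, lambda x, y: 4*y,
                         lambda x, y: 8*x, lambda x, y: 32*y, 1.3, -0.7)

print(rot, non)   # 0.0 for the surface of revolution; 32*x*y otherwise
```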
|
3,882,457 | <p><a href="https://arxiv.org/pdf/quant-ph/0208163" rel="nofollow noreferrer">These notes</a> are a great introduction to deformation quantization but I failed to check the validity of the statement p.9, right before (5.18).</p>
<p><strong>Context:</strong> let <span class="math-container">$(\mathcal{A},+,\mu)$</span> be an algebra. <span class="math-container">$\mu:\mathcal{A}\times \mathcal{A} \to \mathcal{A}$</span> standing for multiplication. Deformation consists in considering a family (parametrized by <span class="math-container">$\nu$</span> in a yet to be chosen space) of products on <span class="math-container">$\mathcal{A}[[\nu]]$</span> (formal power series with coefficients in <span class="math-container">$\mathcal{A}$</span>) generically given by
<span class="math-container">$$ \forall\ f,g\in \mathcal{A},\quad \mu_{\nu}(f,g) := \mu(f,g) + \sum_{k=1}^{+\infty} \nu^k \mu_k (f,g) \label{1}\tag{1}$$</span>
i.e. by a family of bilinear maps <span class="math-container">$\mu_k:\mathcal{A}\times \mathcal{A} \to \mathcal{A}$</span> satisfying some conditions and extended to elements <span class="math-container">$F, G\in \mathcal{A}[[\nu]]$</span> of the form <span class="math-container">$F=\sum_{k=1}^{+\infty} \nu^k f_{k},\ f_k \in \mathcal{A}$</span> by <span class="math-container">$\mathbb{K}[[\nu]]$</span>-bilinearity. (the idea behind formal power series, as far as I understand, is to ignore convergence issues but still have a structure where one can compare terms of the same degree in <span class="math-container">$\nu$</span>").</p>
<p>Two of these <strong>star-product</strong> are equivalent if there exists an invertible algebra isomorphism (transition map) <span class="math-container">$T:(\mathcal{A}[[\nu]],+,\mu_{\nu}) \longrightarrow (\mathcal{A}[[\nu]],+,\rho_{\nu})$</span>, i.e. a map such that
<span class="math-container">$$ \forall\ F,G \in \mathcal{A}[[\nu]], \quad T\big(\mu_{\nu}(F,G)\big)= \rho_{\nu}(T(F),T(G))$$</span></p>
<hr />
<p><strong>Question:</strong> Let <span class="math-container">$\mathcal{A}=\mathcal{C}^{\infty}(\mathbb{R}^2)$</span> and denote <span class="math-container">$(a, \overline{a})$</span> or <span class="math-container">$(b, \overline{b}$</span>) the variables of the functions. I want to check that the <strong>normal product</strong> ( (5.4) p.8; with the more usual notation for products)
<span class="math-container">$$f \ast_N g := \sum_{k=0}^{+\infty} \frac{\hbar^k}{k!} \frac{\partial^k f}{\partial a^k} \frac{\partial^k g}{\partial \overline{a}^k} = f\, e^{\hbar \overleftarrow{\partial}_a \overrightarrow{\partial}_{\overline{a}}}\, g \label{2}\tag{2}$$</span>
is equivalent to the <strong>Moyal product</strong> ((5.15) p.9, one can consider <span class="math-container">$\hbar$</span> as the deformation parameter... although there is usually the factor as in (\ref{5}))
<span class="math-container">$$ f \ast_M g := \sum_{k=0}^{+\infty} \left(\frac{\hbar}{2} \right)^k \frac{1}{k!} \left. \left( \frac{\partial }{\partial a} \frac{\partial }{\partial \overline{b}} - \frac{\partial }{\partial \overline{a}} \frac{\partial }{\partial b}\right)^k f\big(a, \overline{a}\big) g\big(b, \overline{b}\big) \right|_{\genfrac{}{}{0pt}{1}{a=b}{\overline{a}=\overline{b}}} = f\, e^{\frac{\hbar}{2}\big( \overleftarrow{\partial}_a \overrightarrow{\partial}_{\overline{a}} - \overleftarrow{\partial}_{\overline{a}} \overrightarrow{\partial}_{a} \big)}\, g \label{3}\tag{3}$$</span>
i.e.
<span class="math-container">$$ T\big(f \ast_N g \big) = T(f) \ast_M T(g)\quad \text{with}\quad T = \genfrac{}{}{0pt}{0}{"}{}\!\!\exp\left(-\frac{\hbar}{2} \frac{\partial^2}{\partial a\, \partial \overline{a}} \right)\genfrac{}{}{0pt}{0}{"}{} \label{4}\tag{4}$$</span></p>
<p><strong>Remarks:</strong></p>
<ul>
<li>In fact I already checked (\ref{4}) up to second order in <span class="math-container">$\hbar$</span> but it did not work at order 3 (although I'm not sure, as the calculations were quite tedious...). It was not a priori clear that (\ref{4}) holds; one could have the other way round, <span class="math-container">$ T\big(f \ast_M g \big) = T(f) \ast_N T(g)$</span>, instead, but this seems to fail at order 1. I only want to check the first few orders, but I would gladly take a proof for all orders. I will soon write what I have done, but as I mentioned, it's tedious.</li>
<li>The Moyal product is first defined in the text (3.5-3.6) p.5 by
<span class="math-container">$$ f \ast_M g := \sum_{k=0}^{+\infty} \frac{\nu^k}{k!} \underbrace{\left(\frac{\partial }{\partial q_1} \frac{\partial }{\partial p_{2}} - \frac{\partial }{\partial p_{1}} \frac{\partial }{\partial q_2} \right)^k f(q_1,p_1)\, g(q_2,p_2)}_{\mu_k(f,g)}\left.\vphantom{\frac{T}{T}}\right|_{\genfrac{}{}{0pt}{1}{q_1=q_2}{p_{1}=p_{2}}}\quad \text{with}\quad \nu = \frac{i\hbar}{2} \label{5}\tag{5}$$</span>
and it does coincide with (\ref{3}) via (these are the correct <span class="math-container">$\sqrt{2}$</span> factors...)
<span class="math-container">$$ \left\lbrace \begin{aligned}
a & := \frac{1}{\sqrt{2}} \left(q + i\hspace{.5pt} p \right) \\
\overline{a} & := \frac{1}{\sqrt{2}} \left( q - i\hspace{.5pt} p \right)
\end{aligned} \right. \enspace \Longrightarrow\quad
\left\lbrace \begin{aligned}
\frac{\partial}{\partial\hspace{.7pt} q} & = \frac{\partial\hspace{.7pt} a}{\partial\hspace{.7pt} q} \frac{\partial}{\partial\hspace{.7pt} a} + \frac{\partial\hspace{.7pt} \overline{a}}{\partial \hspace{.7pt} q} \frac{\partial}{\partial\hspace{.7pt} \overline{a}} =\frac{1}{\sqrt{2}} \left( \frac{\partial}{\partial\hspace{.7pt} a} + \frac{\partial}{\partial\hspace{.7pt} \overline{a}} \right) \\
\frac{\partial}{\partial\hspace{.7pt} p} & = \frac{\partial\hspace{.7pt} a}{\partial\hspace{.7pt} p} \frac{\partial}{\partial\hspace{.7pt} a} + \frac{\partial\hspace{.7pt} \overline{a}}{\partial \hspace{.7pt} p} \frac{\partial}{\partial\hspace{.7pt} \overline{a}} = \frac{i}{\sqrt{2}} \left( \frac{\partial}{\partial\hspace{.7pt} a} - \frac{\partial}{\partial\hspace{.7pt} \overline{a}}\right)
\end{aligned} \right. $$</span></li>
<li>To make (\ref{3}) (same for (\ref{5})) more explicit, let me write the <span class="math-container">$k=2$</span> term: (notation <span class="math-container">$\displaystyle \partial_a=\frac{\partial}{\partial a},\ \partial_{ab}= \frac{\partial^2}{\partial a \partial b}$</span> etc.)
<span class="math-container">$$\begin{split}
\mu_2(f,g) &= \Big(\partial_{aa\overline{b}\overline{b}} - 2 \partial_{a\overline{a}b\overline{b}} + \partial_{\overline{a}\overline{a}bb} \Big) f(a,\overline{a})g(b,\overline{b})\left.\vphantom{\frac{T}{T}}\right|_{\genfrac{}{}{0pt}{1}{a=b}{\overline{a}=\overline{b}}}\\
&= (\partial_{aa}f)(\partial_{\overline{a}\overline{a}}g) - 2 (\partial_{a\overline{a}}f)(\partial_{a\overline{a}}g) + (\partial_{\overline{a}\overline{a}}f) (\partial_{aa}g)
\end{split} \label{6}\tag{6}$$</span>
One can also use the <span class="math-container">$\overleftarrow{\partial}$</span> or <span class="math-container">$\overrightarrow{\partial}$</span> notations or a tensorial notation.</li>
</ul>
| Cosmas Zachos | 362,193 | <p>Your reference has this explained in its ref [26], namely <a href="https://doi.org/10.1063/1.533395" rel="nofollow noreferrer">Zachos (2000) J Math Phys 41, 5129–5134, hep-th/9912238</a>.</p>
<p>In any case, it is straightforward to prove your (4) through elementary Fourier analysis. That is, use test/sample functions
<span class="math-container">$$
f=\exp (ma+n\bar a), ~~~~ g=\exp (ka+s\bar a),
$$</span>
so (4) presents as
<span class="math-container">$$
\exp (\hbar ms -\hbar(m+k)(n+s)/2) ~ \overset{?}{=} ~ \exp (-\hbar mn/2-\hbar ks/2 +\hbar(ms-nk)/2),
$$</span>
indeed, an identity.</p>
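<p>The exponent identity can also be spot-checked mechanically; the sketch below (an illustrative addition, not from the reference) compares the two exponents at random values of $m,n,k,s,\hbar$:</p>

```python
import random

# Numeric spot-check of the exponent identity behind (4):
#   hbar*m*s - hbar*(m+k)*(n+s)/2 == -hbar*m*n/2 - hbar*k*s/2 + hbar*(m*s - n*k)/2
random.seed(0)
ok = True
for _ in range(100):
    m, n, k, s, hbar = (random.uniform(-2, 2) for _ in range(5))
    lhs = hbar*m*s - hbar*(m + k)*(n + s)/2               # T(f *_N g) side
    rhs = -hbar*m*n/2 - hbar*k*s/2 + hbar*(m*s - n*k)/2   # T(f) *_M T(g) side
    ok = ok and abs(lhs - rhs) < 1e-12
print("identity holds numerically:", ok)
```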
<p>You might, or might not, appreciate the geometrical features associated with it.</p>
<p>For the mainstream review of <em>all</em> such moves, see <a href="https://www.hep.anl.gov/czachos/aaa.pdf" rel="nofollow noreferrer">this booklet</a>.</p>
|
483,442 | <p>I am trying to learn about velocity vectors but this word problem is confusing me.</p>
<p>A boat is going 20 mph northeast; the velocity u of the boat is the direction of the boat's motion, and its length is 20, the boat's speed. If the positive y axis represents north and x is east, the boat's direction makes an angle of 45 degrees. You can compute the components of u by using trig:</p>
<p>$$u_1 = 20 \cos 45$$
$$u_2 = 20 \sin 45$$</p>
<p>Why? How did this happen? Why sin and why cos? What does this represent? Why two points? What are these two points? It says that these are in $R^2$, which I am not sure what that means, and my book does not explain. I think the R means all real numbers and the squared is referencing 2d, maybe, so x and y, but the book doesn't say, so I am not so sure. My book mentions none of these things.</p>
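<p>To see why cos and sin appear (a quick numeric sketch added for illustration, not from the book): the two numbers are the east and north components of the velocity arrow, and the Pythagorean theorem recombines them into the speed of 20.</p>

```python
import math

# The two components of the velocity arrow, and the check that they
# recombine into the original speed.
speed = 20.0
angle = math.radians(45)        # 45 degrees measured from the east (x) axis

u1 = speed * math.cos(angle)    # east component
u2 = speed * math.sin(angle)    # north component

length = math.hypot(u1, u2)     # Pythagorean theorem: back to the speed
print(u1, u2, length)           # u1 = u2 ~ 14.142, length ~ 20.0
```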
| André Nicolas | 6,312 | <p>It is little known, but socks have individual identities. There are $\binom{16}{6}$ equally likely ways to choose $6$ socks from the $16$.</p>
<p>Now we find the number of ways to choose $6$ socks, so that there is no pair among them. There are $\dbinom{8}{6}$ ways to choose the "types" of sock we will have. For each choice of $6$ types, there are $2^6$ ways to choose the actual socks. For at each chosen "type" of sock, we have $2$ choices as to which of the two socks of that type to take.</p>
<p>Thus the probability there is <strong>no pair</strong> is $p=\dfrac{\binom{8}{6}2^6}{\binom{16}{6}}$.</p>
<p>The probability there is at least one pair is therefore $1-p$.</p>
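<p>The count can be confirmed by brute-force enumeration (an illustrative check using the same labeling of the socks):</p>

```python
from itertools import combinations
from math import comb

# Label the 16 socks 0..15, with sock i belonging to pair-type i // 2.
no_pair = sum(
    1 for choice in combinations(range(16), 6)
    if len({s // 2 for s in choice}) == 6   # all six chosen types distinct
)
total = comb(16, 6)
print(no_pair, total, no_pair / total)   # 1792 8008 ~0.2238
```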
|
182,527 | <p>I have the following question:</p>
<p>Let $X$: $\mu(X)<\infty$, and let $f \geq 0$ on $X$. Prove that $f$ is Lebesgue integrable on $X$ if and only if $\sum_{n=0}^{\infty}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) < \infty $.</p>
<p>I have the following ideas, but am a little unsure. For the forward direction:</p>
<p>By our hypothesis, we are taking $f$ to be Lebesgue integrable. Assume $\sum_{n=0}^{\infty}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) = \infty $. Then for any n, no matter how large, $\mu(\lbrace x \in X : f(x) \geq 2^n \rbrace)$ has positive measure. Otherwise, the sum will terminate for a certain $N$, giving us $\sum_{n=0}^{N}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) < \infty $. Thus we have $f$ unbounded on a set of positive measure, which in combination with $f(x) \geq 0$, gives us that $\int_E f(x) d\mu=\infty$. This is a contradiction to $f$ being Lebesgue integrable. So our summation must be finite.</p>
<p>For the reverse direction: </p>
<p>We have that $\sum_{n=0}^{\infty}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) < \infty$. Assume that $f$ is not Lebesgue integrable, then we have $\int_E f(x) d\mu=\infty$. Since we are integrating over a set $X$ of finite measure, this means that $f(x)$ must be unbounded on a set of positive measure, which makes our summation infinite, a contradiction.</p>
<p>Any thoughts as to the validity of my proof? I feel as if there is an easier, direct way to do it.</p>
| Kevin Arlin | 31,228 | <p>This won't quite do. Your argument doesn't actually use the $2^n$ at all: it would read exactly the same if we asked only for $\sum_{n=0}^{\infty}\mu(\{x \in X: f(x) \geq 2^n\}) < \infty$. Yet $\frac{1}{x}$ satisfies this weaker hypothesis on $[0,1]$: the infinite sum becomes $\sum_{i=0}^\infty \frac{1}{2^i}=2$, while we know the integral diverges. See if you can try a similar approach that explicitly uses the exponential decay of the measure of sets on which $f$ is big.</p>
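<p>To make the counterexample fully explicit (an exact computation added for illustration): on $(0,1]$, $\mu\{x : 1/x \geq 2^n\} = 2^{-n}$, so the weaker sum of measures converges to $2$, while every weighted term $2^n \mu(\cdot)$ equals $1$ and the sum in the problem diverges, consistent with $\int_0^1 \frac{dx}{x} = \infty$.</p>

```python
from fractions import Fraction

# Exact check of the 1/x example on (0, 1]: the set where 1/x >= 2**n
# is (0, 2**-n], of Lebesgue measure 2**-n.
mus = [Fraction(1, 2**n) for n in range(50)]

weak_sum = sum(mus)                                   # converges toward 2
weighted = [2**n * mu for n, mu in enumerate(mus)]    # every term equals 1

print(float(weak_sum), set(weighted))   # ~2.0, {Fraction(1, 1)}
```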
|
182,527 | <p>I have the following question:</p>
<p>Let $X$: $\mu(X)<\infty$, and let $f \geq 0$ on $X$. Prove that $f$ is Lebesgue integrable on $X$ if and only if $\sum_{n=0}^{\infty}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) < \infty $.</p>
<p>I have the following ideas, but am a little unsure. For the forward direction:</p>
<p>By our hypothesis, we are taking $f$ to be Lebesgue integrable. Assume $\sum_{n=0}^{\infty}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) = \infty $. Then for any n, no matter how large, $\mu(\lbrace x \in X : f(x) \geq 2^n \rbrace)$ has positive measure. Otherwise, the sum will terminate for a certain $N$, giving us $\sum_{n=0}^{N}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) < \infty $. Thus we have $f$ unbounded on a set of positive measure, which in combination with $f(x) \geq 0$, gives us that $\int_E f(x) d\mu=\infty$. This is a contradiction to $f$ being Lebesgue integrable. So our summation must be finite.</p>
<p>For the reverse direction: </p>
<p>We have that $\sum_{n=0}^{\infty}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) < \infty$. Assume that $f$ is not Lebesgue integrable, then we have $\int_E f(x) d\mu=\infty$. Since we are integrating over a set $X$ of finite measure, this means that $f(x)$ must be unbounded on a set of positive measure, which makes our summation infinite, a contradiction.</p>
<p>Any thoughts as to the validity of my proof? I feel as if there is an easier, direct way to do it.</p>
| Did | 6,179 | <p>$$\frac12\left(1+\sum_{n=0}^{+\infty}2^n\,\mathbf 1_{f\geqslant2^n}\right)\leqslant f\lt1+\sum_{n=0}^{+\infty}2^n\,\mathbf 1_{f\geqslant2^n}$$</p>
|
2,206,938 | <p>Context: <a href="http://www.hairer.org/notes/Regularity.pdf" rel="nofollow noreferrer">http://www.hairer.org/notes/Regularity.pdf</a>, section 4.1 (pages 15-16)</p>
<blockquote>
<p>Define
$$(\Pi_x\Xi^0)(y)=1 \qquad (\Pi_x\Xi)(y)=0 \qquad (\Pi_x\Xi^2)(y)=c$$
and
$$(\Pi^{(n)}_x\Xi^0)(y)=1 \qquad (\Pi^{(n)}_x\Xi)(y)=\sqrt{2c}\sin(nx) \qquad (\Pi^{(n)}_x\Xi^2)(y)=2c\sin^2(nx).$$
As a model, $\Pi^{(n)}$ converges to $\Pi$.</p>
</blockquote>
<p>I don't see how this convergence is supposed to take place. Isn't the limit of $\sin(nx)$, as $n\to \infty$, undefined? </p>
| 5xum | 112,884 | <blockquote>
<p>It seems to me that these functions have to be dominated by something like $x^\alpha$ for some $\alpha > 0$</p>
</blockquote>
<p>False. Take the function $f(x)=e^x$ as a counterexample.</p>
<hr>
<p>Also, there is no such thing as "Lipschitz continuous at some point". Lipschitz continuity is defined on a <strong>set</strong>, not at a point:</p>
<p>A function $f:X\to Y$ (where $X$ and $Y$ are metric spaces with metrics $d_X$ and $d_Y$) is Lipschitz continuous if there exists some $K\in\mathbb R$ such that $d_Y(f(x_1),f(x_2))\leq K\cdot d_X(x_1,x_2)$ for all pairs $x_1,x_2\in X$.</p>
|
1,333,637 | <p>Where X is a space obtained by pasting the edges of a polygonal region together in pairs. </p>
<p>Alternatively: Show that X is homeomorphic to exactly one of the spaces in the following list: $S^2,T_n, P^2, K_m, P^2\#K_m$, where $K_m$ is the m-fold connected sum of $K$ (the Klein bottle) with itself and $m \geq 0$.</p>
<p>We have the classification theorem: If X is the space obtained from a polygonal region in the plane by pasting its edges together in pairs. Then X is homeomorphic to either $S^2, T_n$ or $P_m$. Where $P_m$ is the m-fold projective plane.</p>
<p>It seems I have to show that one of the things on the list that is not $S^2$ or $T_n$ is homeomorphic to $P_m$? The likely candidates seem to be, for the list in the title: $T_n\#P^2$. For the second list: $P^2\#K_m$. But I'm not sure if these are correct and how to formally show that those are homeomorphic to $P_m$</p>
| goblin GONE | 42,339 | <p>Rather than solving the problem for you, I collect here all relevant facts. Given these facts, actually solving it should be fairly straightforward.</p>
<p><strong>Convention.</strong> Write $T_0$ for the $2$-sphere.</p>
<p><strong>Classification Theorem.</strong> Suppose $X$ is a compact surface.</p>
<ul>
<li>If $X$ is orientable, then $X \cong T_n$ for some integer $n \geq 0$.</li>
<li>If $X$ is non-orientable, then $X \cong P_n$ for some integer $n \geq 1$.</li>
</ul>
<blockquote>
<p><strong>Definition.</strong> Letting $\chi$ denote the Euler characteristic function, define $\psi$ as the function on compact surfaces given as
follows. $$\psi(X) = 2 - \chi(X).$$</p>
</blockquote>
<p><strong>Proposition 0.</strong></p>
<ul>
<li>$\psi(X \,\#\,Y) = \psi(X)+\psi(Y),$ for all compact surfaces $X$ and $Y$.</li>
<li>$\psi(T_0) = 0$</li>
</ul>
<p><strong>Proposition 1.</strong> $\psi(T) = 2, \psi(P) = 1$</p>
<p><strong>Corollary.</strong> $\psi(T_n) = 2n, \psi(P_n) = n$</p>
<p><strong>Proposition 2.</strong> (Dyck's theorem.) $T \,\#\, P = P \,\#\,P\,\#\,P$</p>
<p><strong>Proposition 3.</strong> If $K$ is the Klein bottle, then:</p>
<p>$$K \cong P \,\#\, P$$</p>
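<p>As one worked instance of how these facts combine (an addition for illustration; the remaining list items are handled the same way):</p>

```latex
% Since K = P # P (Proposition 3), psi(K) = 2 by Propositions 0-1, hence
\psi\bigl(P^2 \,\#\, K_m\bigr) = \psi(P) + m\,\psi(K) = 1 + 2m,
% and, as P^2 # K_m is non-orientable, the Classification Theorem gives
P^2 \,\#\, K_m \;\cong\; P_{1+2m}.
```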
|
2,555,463 | <p>Given a line $l$ and two points $p_1$ and $p_2$, identify the point $v$ which is equidistant from $l$, $p_1$, and $p_2$, assuming it exists.</p>
<p>My idea is to: (1) identify the parabolas containing all points equidistant from each point and the line, then (2) intersect these parabolas. As $v$ is equidistant from all three and each parabola contains all points equidistant from $l$ and each point, the intersection of these parabolas must be $v$. However, I have had no luck in finding a way to compute, much less represent, these parabolas.</p>
| Narasimham | 95,860 | <p>I think you are almost there. The parabola has a property you already know. There are $two$ solutions/points for the circle centers, not one; they are found by a direct procedure as follows:</p>
<p>Intersections of a <em>properly/conveniently placed parabola</em> (wlog, $y=-f$ is the chosen directrix) and the perpendicular bisector of $P_1P_2$ should be found.</p>
<p>$$ P_1:(a,b)\, ; \quad P_2: (0,f); $$ where $f$ is the parabola's focal length.</p>
<p>Bisector equation is</p>
<p>$$(x-a)^2+(y-b)^2= x^2+(y-f)^2$$</p>
<p>Simplifying </p>
<p>$$ y(f-b)-ax+Q=0, \quad\text{where}\quad Q=(a^2+b^2-f^2)/2 \tag 1 $$</p>
<p>Parabola equation</p>
<p>$$ y = \frac{x^2}{4f} \tag2 $$</p>
<p>Plug (2) into (1) and solve the quadratic in $x$, getting two solutions, meaning two points satisfy the given condition.</p>
<p>The key point to realize is that a <em>single parabola</em>, not two, contains the centers of these <em>two circles</em>; there are two circles, not one.</p>
<p>$$ x_1 = \frac{2}{1 - b/f}\left(a + \sqrt{a^2 - Q (1 - b/f)}\right); \quad y_1 = x_1^2/(4 f); $$</p>
<p>$$ x_2 = \frac{2}{1 - b/f}\left(a - \sqrt{a^2 - Q (1 - b/f)}\right); \quad y_2 = x_2^2/(4 f); $$</p>
<p>For the numerical example given in the graph, I have taken the values $ (a,b,f)=(1,3,1)$ as numerical input.</p>
<p>It results in the coordinates of the circumcircle centers; by evaluation, approximately they are:</p>
<p>$$C_1=(-4.16228,4.33114);\, C_2=(2.16228,1.16886) ; \,$$ which depicts two circumcircles as you desired.</p>
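<p>The numbers above can be verified directly; the sketch below (added for illustration) recomputes the centers from equations (1)-(2) with $(a,b,f)=(1,3,1)$ and checks that each is equidistant from $P_1$, the focus $P_2$, and the directrix $y=-f$:</p>

```python
import math

# Verify the numerical example (a, b, f) = (1, 3, 1): each intersection
# point should be equidistant from P1 = (a, b), the focus P2 = (0, f),
# and the directrix y = -f.
a, b, f = 1.0, 3.0, 1.0
Q = (a*a + b*b - f*f) / 2                 # Q = 4.5, from equation (1)

disc = math.sqrt(a*a - Q*(1 - b/f))       # sqrt(10)
xs = [2/(1 - b/f)*(a + disc), 2/(1 - b/f)*(a - disc)]
centers = [(x, x*x/(4*f)) for x in xs]    # points on the parabola (2)

for x, y in centers:
    d_p1 = math.hypot(x - a, y - b)       # distance to P1
    d_p2 = math.hypot(x, y - f)           # distance to the focus P2
    d_line = y + f                        # distance to the directrix y = -f
    assert abs(d_p1 - d_p2) < 1e-9 and abs(d_p2 - d_line) < 1e-9

print(centers)   # ~(-4.16228, 4.33114) and ~(2.16228, 1.16886)
```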
<p>EDIT1:</p>
<p>Another possibility for $l$ seems to exist, as shown schematically; however, it is not an independent situation but the reflection of the given line about the line of centers.</p>
<p><a href="https://i.stack.imgur.com/Q3DzA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Q3DzA.png" alt="Parabola and directrix sketch (MSE)"></a></p>
<p>EDIT2:</p>
<p>If we take $(P_1,P_2)$ as $(-c,0),(c,0)$ and the given line as $ y= m x + c, $ the equations to the (tilted double parabola ) would be simplest.</p>
|
2,178,395 | <p>In how many ways can the letters of the word CHROMATIC be arranged?</p>
<p>Find the probability that the string of letters begins with the letter M.</p>
<p>I don't understand how to single out M so the possibilities would only begin with M?</p>
| mvw | 86,776 | <p>Consider the number of ways to arrange the letters of CHROATIC, as you have the M already given.</p>
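<p>The hint can be checked by brute force (an illustrative addition; note that CHROMATIC has a repeated C, but the repetition cancels in the probability): of the $9!/2! = 181440$ distinct arrangements, exactly $8!/2! = 20160$ begin with M, giving probability $1/9$.</p>

```python
from itertools import permutations
from fractions import Fraction

word = "CHROMATIC"
distinct = set(permutations(word))               # 9!/2! = 181440 distinct strings
start_m = sum(1 for p in distinct if p[0] == "M")

prob = Fraction(start_m, len(distinct))
print(len(distinct), start_m, prob)              # 181440 20160 1/9
```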
|
767,686 | <p>Let $f:A\rightarrow B$ be a function and let $C_1,C_2\subset A$
Prove that</p>
<p>$f(C_1\cap C_2)=f(C_1)\cap f(C_2) \leftrightarrow$ $f$ is injective</p>
<p>Attempt:</p>
<p>$(\leftarrow)$ Let $f(x)\in f(C_1\cap C_2)$. Then there exists $x\in C_1\cap C_2$ because $f$ is injective. So $x\in C_1$ and $x\in C_2$. So
$f(x)\in f(C_1)$ and $f(x)\in f(C_2)$. So $f(x)\in f(C_1)\cap f(C_2)$. Thus $f(C_1\cap C_2)=f(C_1)\cap f(C_2)$ I got to this but don't have an idea on how to prove the other direction..</p>
| user41281 | 145,477 | <p>Hint: Draw lines of slope $\pm 1$ at $x = 0$ and $x = 2$. By the Mean Value Theorem, no point in the set $\{ (x, y) \, | \, x \in (0, 2), \, y = f(x) \}$ lies outside that parallelogram. </p>
|
767,686 | <p>Let $f:A\rightarrow B$ be a function and let $C_1,C_2\subset A$
Prove that</p>
<p>$f(C_1\cap C_2)=f(C_1)\cap f(C_2) \leftrightarrow$ $f$ is injective</p>
<p>Attempt:</p>
<p>$(\leftarrow)$ Let $f(x)\in f(C_1\cap C_2)$. Then there exists $x\in C_1\cap C_2$ because $f$ is injective. So $x\in C_1$ and $x\in C_2$. So
$f(x)\in f(C_1)$ and $f(x)\in f(C_2)$. So $f(x)\in f(C_1)\cap f(C_2)$. Thus $f(C_1\cap C_2)=f(C_1)\cap f(C_2)$ I got to this but don't have an idea on how to prove the other direction..</p>
| Hagen von Eitzen | 39,174 | <p>Assume $h(1)>1$. Then there exists $\xi\in(0,1)$ with $(1-0)h'(\xi)=h(1)-h(0)$, i.e. $h'(\xi)>1$ contrary to assumption.</p>
<p>Assume $h(1)=1$ and let $h'(1)<1$ then for sufficiently small positive $k$ we have $\frac{h(1-k)-h(1)}{-k}<1$, i.e. $h(1-k)>1-k$, and then for some $\xi\in(0,1-k)$ we find $h'(\xi)>1$ as above.
The same works if $h'(1)>-1$ and working on the right half interval. As at least one of the inequalities holds, we are done.</p>
|
64,395 | <p>Let G be a directed graph on N vertices chosen at random, conditional on the requirement that the out-degree of each vertex is 1 and the in-degree of each vertex is either 0 or 2. The "periodic" points of G are those contained in a cycle. What do we know about the statistics of G? For instance, what is the mean number of periodic points, and how do the cycle lengths look?</p>
<p>By comparison, a random directed graph with all out-degrees 1 (which is to say, the graph of a random function from vertices to vertices) has on the order of sqrt(N) periodic points on average.</p>
<p>(Motivation: the graph of a quadratic rational function f acting on P^1(F_q) looks like this, and I'm wondering what the "expected" dynamics are.) </p>
| Derek | 752 | <p>Let $G$ be a directed graph on $N$ vertices such that the out-degree of each vertex is 1 and the in-degree is either 0 or $n$. Letting $N=nt$, there are $t$ vertices with in-degree $n$ and $(n-1)t$ vertices with in-degree 0. Assuming the vertices are labeled, the number of such graphs is
$$
\binom{nt}{t}\frac{(nt)!}{(n!)^t}.
$$
For $1\leq m\leq t$, the number of such graphs with a fixed $m$-cycle is
$$
\binom{nt-m}{t-m}\frac{(nt-m)!}{((n-1)!)^m(n!)^{t-m}},
$$
so the number of $m$-periodic points amongst all such graphs is
$$
m\cdot\binom{nt-m}{t-m}\frac{(nt-m)!}{((n-1)!)^m(n!)^{t-m}}\cdot\frac{(nt)!}{m(nt-m)!}=\binom{nt-m}{t-m}\frac{(nt)!}{((n-1)!)^m(n!)^{t-m}}.
$$
Summing over $1\leq m\leq t$, the average we want is
$$
\binom{nt}{t}^{-1}\sum_{m=1}^{t}{n^m\binom{nt-m}{t-m}}=-1+\binom{nt}{t}^{-1}\sum_{m=0}^{t}{n^m\binom{nt-m}{t-m}}.
$$
When $n=1$, the average is $t$ (since all points would be periodic). When $n=2$, as in the question, the sum is
$$
-1+\frac{4^t}{\binom{nt}{t}}\sim-1+\sqrt{\frac{\pi}{2}}\sqrt{N},
$$
so the number of periodic points does indeed look random. When $n>2$, I haven't found the identity I need yet.</p>
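<p>The $n=2$ evaluation relies on the identity $\sum_{m=0}^{t}2^m\binom{2t-m}{t-m}=4^t$; a quick numeric verification (Python, using <code>math.comb</code>; the range of $t$ is an arbitrary choice):</p>

```python
from math import comb

# Check sum_{m=0}^{t} 2^m * C(2t - m, t - m) == 4^t for small t
for t in range(1, 30):
    s = sum(2**m * comb(2 * t - m, t - m) for m in range(t + 1))
    assert s == 4**t, (t, s)
print("identity holds for t = 1..29")
```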
|
1,840,485 | <p>I am an undergraduate really passionate about the mathematics and microbiology. I have few big problems in learning which I would like to seek your advice. </p>
<p>Whenever I study mathematical books (Rudin, Hoffman/Kunze, etc.), I always try to prove every theorem, lemma, corollary, and their relationships in the book. Unfortunately, that determination has been demanding huge time consumption; sometimes, it takes me days to fully understand and able to prove materials in the few pages of book. I am willing to devote my time to understand the topics, but I also wanted to devote time to my undergraduate research projects and other courses. Recently, I started to depend a lot more to the proofs presented in books and websites (like MSE), which has been causing a huge guilt and fear that I am not making the knowledge into my own.</p>
<p>Despite my effort to prove/solve every problem per chapter, I found myself skipping some of the problems and moving on to the next chapter, which resulted in a huge fear that I had not fully understood the material.</p>
<ul>
<li>How do you read the mathematics books and make knowledge on your own?</li>
<li>Is it absolutely recommended to prove everything and solve every problems in the book? </li>
<li>Also is it recommended to devote more time to the problems than exposition preceding the problems? I found myself devoting a lot time to the actual expositions in the book as I like to play around with definitions and theorems, try to come up with my own ideas, and formulate my own problems (I actually found that making my own problems is much more fun than problems presented in the book).</li>
</ul>
| avz2611 | 142,634 | <p>I would suggest you read every problem; if you can see the direction pretty clearly in your head, there is no need to work it out in full. Big texts generally do contain repetition, but concise books meant only for problem solving, without any theory, do try to make sure each problem is unique.<br>
As far as the second part goes, I believe your approach gives you more fundamental clarity, but it is definitely not an approach optimized for 'grades'. If you're planning on doing research you should carry on with this attitude; if not, I would suggest more problem solving.</p>
|
1,482,205 | <p>Show that $\sigma(AB) \cup \{0\} = \sigma(BA) \cup \{0\}$ in general, and that $\sigma(AB) = \sigma(BA)$ if $A$ is bijective. </p>
<p>I studied the associative statement of this somewhere but it did not include the zeroth element.
If you assume the bijection, how can you show the first part?</p>
<h2>My attempt</h2>
<p>Let us show that $A$ commutes with the self-adjoint linear operator $B$, i.e. that $AB = BA$.
\begin{equation*}
AB = A^{\ast} B^{\ast} = (BA)^{\ast} = (ABA)^{\ast} = A^{\ast} B^{\ast} A^{\ast} = ABA = BA.
\end{equation*}
In general, the zeroth element follows directly in the left-hand-side if $A$ is bijective. $\square$</p>
<p>Comments</p>
<ul>
<li>not sure if $\sigma$ should be carried along; I was reading Kreyszig for the application</li>
</ul>
<hr>
<p>How can you show the first part with the zeroth element?</p>
| NoChance | 15,180 | <blockquote>
<p>"So I got 1b/1b+a/b that simplified is b/b+a/b. So then I added those
two fractions together and got ab/b"</p>
</blockquote>
<p>You have figured out correctly up to this point:</p>
<p>$\frac{b}{b}+\frac{a}{b}$</p>
<p>The next step in your statement is where the error was made. Since the denominators are equal, you just add the numerators and divide by the common denominator to get:</p>
<p>$\frac{b}{b}+\frac{a}{b}=\frac{b+a}{b}$</p>
<p>The general rule (where $x$ and $y$ are not zero) is:</p>
<p>$\frac{A}{x}+\frac{B}{y}=\frac{Ay+Bx}{xy}$</p>
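<p>The general rule is easy to sanity-check with exact rational arithmetic (Python's <code>fractions</code> module; the sample values are arbitrary):</p>

```python
from fractions import Fraction

def add_fractions(A, B, x, y):
    """A/x + B/y via the rule (A*y + B*x) / (x*y); x and y must be nonzero."""
    return Fraction(A * y + B * x, x * y)

# The worked example b/b + a/b = (b + a)/b, with (a, b) = (3, 7) chosen arbitrarily
a, b = 3, 7
assert add_fractions(b, a, b, b) == Fraction(b + a, b)
print(add_fractions(b, a, b, b))  # 10/7
```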
|
467,268 | <p>Does anybody know the meaning of this form of the expectation $E[g(x)]$?</p>
<p>$E[g(x)]=Pr(g(x) >\varepsilon)E[g(x)|g(x) > g(\varepsilon)]+Pr(g(x) \leq\varepsilon)E[g(x)|g(x)\leq g(\varepsilon)]$</p>
| iostream007 | 76,954 | <p>HINT:</p>
<p>equation of line $y-y_1=m(x-x_1)$</p>
<p>Now you have $m$ (the gradient) and a point $(x_1,y_1)$, which is the midpoint of the two given points; just work out that midpoint.</p>
<p>For the midpoint: if $C(x,y)$ is the midpoint of $A(x_1,y_1)$ and $B(x_2,y_2)$,</p>
<p>then $x=\dfrac {x_1+x_2}{2}$ and $y=\dfrac {y_1+y_2}{2}$</p>
|
467,268 | <p>Does anybody know the meaning of this form of the expectation $E[g(x)]$?</p>
<p>$E[g(x)]=Pr(g(x) >\varepsilon)E[g(x)|g(x) > g(\varepsilon)]+Pr(g(x) \leq\varepsilon)E[g(x)|g(x)\leq g(\varepsilon)]$</p>
| Avitus | 80,800 | <ul>
<li>A brief introduction</li>
</ul>
<p>A line in $\mathbb R^2$ is described by an equation of the form </p>
<p>$$y=mx+q~~~ (*)$$</p>
<p>(in cartesian coordinates), where the parameters $m$ and $q$ are called the slope (you called it gradient) and the intercept.</p>
<p>Why is the slope called...slope? For any two distinct points $P=(x_0,y_0)$ and $Q=(x_1,y_1)$ on the line $(*)$ the ratio</p>
<p>$$\frac{y_1-y_0}{x_1-x_0} $$</p>
<p>is constant, and it is equal to $m$! If you draw $P$ and $Q$ in $\mathbb R^2$ you surely can understand why such ratio, i.e. $m$, is then called the slope.</p>
<p>The other parameter, called the intercept, is the $y$-coordinate of the point where the line crosses the $y$-axis.</p>
<p>In summary, to find a line in $\mathbb R^2$ you need to find these 2 parameters. To do so, you could specify that the line passes through 2 distinct points or, (this is your case!) you can specify the slope of the line you are searching for <em>and</em> 1 point lying on it.</p>
<p>Concretely, to solve your exercise you need to find such point, as the slope is given as datum. The exercise says that the point lying on the line is the mid-point of the segment in $\mathbb R^2$ with endpoints given by $(5,-2)$ and $(-3,4)$.</p>
<p>All you need is to find such mid-point and arrive at $q$ by putting the coordinates of the mid-point in (*), with $m=-2$.</p>
<ul>
<li><strong>Explicit solution (Spoilers)</strong></li>
</ul>
<p>The mid point of the segment with endpoints $(5,-2)$ and $(-3,4)$ is</p>
<p>$$M:=\left(\frac{5-3}{2},\frac{-2+4}{2}\right)=(1,1).$$</p>
<p>Using the datum $m=-2$, we search for the line $y=mx+q$ in $\mathbb R^2$ passing through $M$. Using $M$'s coordinates we arrive at</p>
<p>$$1=-2\cdot 1+q \Leftrightarrow 1+2=q\Leftrightarrow 3=q, $$</p>
<p>and the line is $y=-2x+3$.</p>
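<p>A minimal numeric re-check of the solution above (Python; nothing here beyond the numbers already derived):</p>

```python
# Given data: segment endpoints and the slope of the sought line
(x1, y1), (x2, y2) = (5, -2), (-3, 4)
m = -2

# Midpoint of the segment
mx, my = (x1 + x2) / 2, (y1 + y2) / 2

# Intercept of y = m*x + q through the midpoint
q = my - m * mx
print((mx, my), q)  # (1.0, 1.0) 3.0
```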
|
165,560 | <p>To find the volume of the following region:</p>
<pre><code>fn[x_, y_, z_]:= Abs[0.7*x*Exp[I*y] + 0.3*Sqrt[x^2 + 8*10^-5]
+ Sqrt[x^2 + 3*10^-3]*0.02*Exp[I*z]]
R = ImplicitRegion[fn[x, y, z]<=3*10^-3, {{x, 0, 0.015}, {y, 0, 2*Pi}, {z, 0, 2*Pi}}]
RegionPlot3D[
fn[x, y, z] <= 3*10^-3, {x, 0, 0.015}, {y, 0, 2*Pi}, {z, 0, 2*Pi},
PlotPoints -> 50, AxesLabel -> {x, y, z},
PlotStyle -> Directive[Yellow, Opacity[0.5]], Mesh -> None]
</code></pre>
<p><a href="https://i.stack.imgur.com/DXSbo.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DXSbo.jpg" alt="Plot of region of interest"></a></p>
<p>Using <code>Volume</code>:</p>
<pre><code>Volume[R]
</code></pre>
<blockquote>
<p>Volume::nmet: Unable to compute the volume of region</p>
</blockquote>
<p>Using <code>NIntegrate</code>:</p>
<pre><code>NIntegrate[
Boole[fn[x, y, z] <= 3*10^-3], {x, 0, 0.015}, {y, 0, 2*Pi}, {z, 0,
2*Pi}, Method -> {"GlobalAdaptive", "MaxErrorIncreases" -> 50000},
PrecisionGoal -> 3] // Timing
</code></pre>
<blockquote>
<p>NIntegrate::slwcon:
Numerical integration converging too slowly; suspect one of the following: singularity, value of the integration is 0, highly oscillatory integrand, or WorkingPrecision too small.</p>
<p>NIntegrate::eincr: The global error of the strategy GlobalAdaptive has increased more than 50000 times. The global error is expected to decrease monotonically after a number of integrand evaluations... NIntegrate obtained 0.12219642052793653 and 0.007515502934285486 for the integral and error estimates.</p>
<p>{19.25, 0.122196}</p>
</blockquote>
<p>The result does not converge.</p>
<p>Using <code>MonteCarlo</code>:</p>
<pre><code>NIntegrate[
Boole[fn[x, y, z] <= 3*10^-3], {x, 0, 0.015}, {y, 0, 2*Pi}, {z, 0,
2*Pi}, Method -> {"MonteCarlo", "MaxPoints" -> 10^12,
"RandomSeed" -> 9}, PrecisionGoal -> 3] // Timing
</code></pre>
<blockquote>
<p>{8.39063, 0.121479}</p>
</blockquote>
<p>This is slow, and if <code>PrecisionGoal -> 5</code> is set,</p>
<pre><code>NIntegrate[
Boole[fn[x, y, z] <= 3*10^-3], {x, 0, 0.015}, {y, 0, 2*Pi}, {z, 0,
2*Pi}, Method -> {"MonteCarlo", "MaxPoints" -> 10^12,
"RandomSeed" -> 9}, PrecisionGoal -> 5]
</code></pre>
<p>It does not give a result even after running for more than 2 minutes.</p>
<p><strong>I hope to find a better way to calculate the volume of this region with:</strong></p>
<ol>
<li>High accuracy: at least <code>PrecisionGoal -> 5</code></li>
<li>Good error estimate</li>
<li>Fast</li>
</ol>
| Greg Hurst | 4,346 | <p>I think the main issue is that the axes bounds are quite disproportionate and that's affecting the sampling.</p>
<p>Here's your region scaled to the unit cube:</p>
<pre><code>R2 = ImplicitRegion[fn[3 x/200, 2π y, 2π z] <= 3*10^-3, {{x, 0, 1}, {y, 0, 1}, {z, 0, 1}}];
reg = BoundaryDiscretizeRegion[R2, {{0, 1}, {0, 1}, {0, 1}}, MaxCellMeasure -> .0001]
</code></pre>
<p><a href="https://i.stack.imgur.com/CBmdd.png" rel="noreferrer"><img src="https://i.stack.imgur.com/CBmdd.png" alt="enter image description here"></a></p>
<pre><code>0.015 * (2π)^2 * Volume[reg]
</code></pre>
<blockquote>
<pre><code>0.121605
</code></pre>
</blockquote>
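<p>As an independent sanity check on this value, here is a seeded Monte Carlo estimate in Python rather than Mathematica (a rough sketch: the sample count, seed, and tolerance are arbitrary choices of mine):</p>

```python
import cmath, math, random

def fn(x, y, z):
    # Same integrand as the Mathematica code above
    w = (0.7 * x * cmath.exp(1j * y)
         + 0.3 * math.sqrt(x * x + 8e-5)
         + 0.02 * math.sqrt(x * x + 3e-3) * cmath.exp(1j * z))
    return abs(w)

rng = random.Random(9)            # fixed seed, so the run is reproducible
n, hits = 200_000, 0
for _ in range(n):
    x = rng.uniform(0.0, 0.015)
    y = rng.uniform(0.0, 2 * math.pi)
    z = rng.uniform(0.0, 2 * math.pi)
    if fn(x, y, z) <= 3e-3:
        hits += 1

box = 0.015 * (2 * math.pi) ** 2  # volume of the bounding box
vol = box * hits / n
print(vol)                        # should land near 0.1216, up to Monte Carlo noise
```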
<p>If we clean up <code>fn</code> a bit, we get a noticeable speedup in <code>BoundaryDiscretizeRegion</code>:</p>
<pre><code>f2[x_, y_, z_] = ComplexExpand[Rationalize[fn[x, y, z]]]^2;
R3 = ImplicitRegion[f2[3 x/200, 2π y, 2π z] <= (3*10^-3)^2, {{x, 0, 1}, {y, 0, 1}, {z, 0, 1}}];
</code></pre>
<p>Compare:</p>
<pre><code>timeDisc[r_] := First[AbsoluteTiming[BoundaryDiscretizeRegion[r, {{0, 1}, {0, 1}, {0, 1}}, MaxCellMeasure -> .0001]]];
{timeDisc[R2], timeDisc[R3]}
</code></pre>
<blockquote>
<pre><code>{1.92265, 1.18681}
</code></pre>
</blockquote>
<p>Note that this approach works on the original unscaled region, but there are strange fringes and it's very slow:</p>
<pre><code>BoundaryDiscretizeRegion[R, {{0, 0.015}, {0, 2π}, {0, 2π}},
BoxRatios -> {1, 1, 1}, MaxCellMeasure -> .0001]; // AbsoluteTiming
</code></pre>
<p><a href="https://i.stack.imgur.com/45AtZ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/45AtZ.png" alt="enter image description here"></a></p>
|
3,346,676 | <blockquote>
<p><strong>Question.</strong> Find a divergent sequence <span class="math-container">$\{X_n\}$</span> in <span class="math-container">$\mathbb{R}$</span> such that for any <span class="math-container">$m\in\mathbb{N}$</span>,
<span class="math-container">$$\lim_{n\to\infty}|X_{n+m}-X_n|=0$$</span></p>
</blockquote>
<p>I don't really know; if someone could walk me through this, it'd be really appreciated.
Edit: I'm dumb af ignore what I said before I deleted it. lmao</p>
| Sangchul Lee | 9,340 | <p>Here is another type of counter-example.</p>
<p>Let <span class="math-container">$x_n = \sin \big(\frac{\pi}{2}\sqrt{n}\big)$</span>. Then <span class="math-container">$x_n$</span> does not converge since <span class="math-container">$x_n$</span> oscillates. This is easily seen by noting that the values of <span class="math-container">$(x_n)$</span> at odd squares are given by <span class="math-container">$x_{(2n+1)^2} = (-1)^n$</span>. On the other hand, by the mean-value theorem,</p>
<p><span class="math-container">$$ \left| x_{n+m} - x_n \right| \leq \frac{\pi}{2} \left| \sqrt{n+m} - \sqrt{n} \right| = \frac{\pi}{2} \frac{m}{\sqrt{n+m} + \sqrt{n}}, $$</span></p>
<p>hence the difference <span class="math-container">$\left| x_{n+m} - x_n \right|$</span> converges to <span class="math-container">$0$</span> as <span class="math-container">$n\to\infty$</span> for each fixed <span class="math-container">$m$</span>.</p>
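<p>Both features of this example are easy to observe numerically (Python; the particular $n$ and $m$ below are arbitrary):</p>

```python
import math

def x(n):
    # x_n = sin(pi * sqrt(n) / 2)
    return math.sin(0.5 * math.pi * math.sqrt(n))

# At odd squares the sequence hits +1 and -1 alternately: x_{(2n+1)^2} = (-1)^n
for n in range(6):
    assert abs(x((2 * n + 1) ** 2) - (-1) ** n) < 1e-9

# For fixed m, the gap obeys the mean-value bound (pi/2) * m / (sqrt(n+m) + sqrt(n))
m = 5
for n in (10**2, 10**4, 10**6):
    gap = abs(x(n + m) - x(n))
    bound = 0.5 * math.pi * m / (math.sqrt(n + m) + math.sqrt(n))
    assert gap <= bound + 1e-12
    print(n, gap, bound)
```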
|
3,429,623 | <p>Is the union of <span class="math-container">$\emptyset$</span> with another set, <span class="math-container">$A$</span> say, disjoint? Even though <span class="math-container">$\emptyset \subseteq A$</span>?</p>
<p>I would say, yes - vacuously. But some confirmation would be great.</p>
| William Elliot | 426,203 | <p>There is no such thing as a disjoint union of two sets.<br>
Two sets are disjoint when their intersection is empty.<br>
By this definition, the empty set and any set are disjoint.<br>
Usually disjointness is restricted to nonempty sets. </p>
<p>A collection K of sets is collectively disjoint when <span class="math-container">$\cap$</span>K is empty.<br>
A more useful notion of a disjoint collection is pairwise disjointness.<br>
A collection K of sets is pairwise disjoint when any two distinct sets in K have empty intersection.<br>
If K is pairwise disjoint, so is K <span class="math-container">$\cup$</span> {empty set}.<br>
Again for this notion, the empty set is usually discarded. </p>
<p>Exercise. Present a collection of sets that's collectively disjoint but not pairwise disjoint. </p>
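<p>For the closing exercise, one standard example can be checked directly (Python; the particular elements are arbitrary, and this is of course a spoiler for the exercise):</p>

```python
from itertools import combinations

# Three sets with empty total intersection, yet every pair overlaps
K = [{1, 2}, {2, 3}, {1, 3}]

assert set.intersection(*K) == set()               # collectively disjoint
assert all(a & b for a, b in combinations(K, 2))   # no pair is disjoint
print("collectively disjoint, but not pairwise disjoint")
```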
|
767,304 | <p>Prove that there are no real numbers $x$ such that</p>
<p>$$\sum_{n\,=\,0}^\infty \frac {(-1)^{n + 1}} {n^x} = 0$$</p>
<p>Can I have a hint please?</p>
| Felix Marin | 85,343 | <p>$\newcommand{\+}{^{\dagger}}
\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack}
\newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,}
\newcommand{\dd}{{\rm d}}
\newcommand{\down}{\downarrow}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,{\rm e}^{#1}\,}
\newcommand{\fermi}{\,{\rm f}}
\newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{{\rm i}}
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\isdiv}{\,\left.\right\vert\,}
\newcommand{\ket}[1]{\left\vert #1\right\rangle}
\newcommand{\ol}[1]{\overline{#1}}
\newcommand{\pars}[1]{\left(\, #1 \,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\pp}{{\cal P}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,}
\newcommand{\sech}{\,{\rm sech}}
\newcommand{\sgn}{\,{\rm sgn}}
\newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}
\newcommand{\wt}[1]{\widetilde{#1}}$
$\ds{\sum_{n = \color{#f00}{\LARGE 1}}^{\infty}{\pars{-1}^{n + 1} \over n^{x}} = 0:\
{\large ?}}$</p>
<blockquote>
<p>\begin{align}
&\sum_{n = 1}^{\infty}{\pars{-1}^{n + 1} \over n^{x}}
=\sum_{n = 1}^{\infty}\bracks{{1 \over \pars{2n - 1}^{x}} - {1 \over \pars{2n}^{x}}}
\\[3mm]&={1 \over 2^{x}}\sum_{n = 1}^{\infty}
\bracks{{1 \over \pars{n - 1/2}^{x}} - {1 \over n^{x}}} > 0\quad
\mbox{when}\quad x > 0.\qquad\qquad\mbox{So ?...}
\end{align}</p>
</blockquote>
<p>$\ds{x \leq 0}$ cases are not considered for obvious reasons.</p>
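<p>The pairing argument can also be checked numerically (Python; the truncation at $10^5$ pairs and the sample exponents are arbitrary). Each bracketed term is positive for $x > 0$, and at $x = 1$ the sum is the alternating harmonic series, so it should approach $\ln 2$:</p>

```python
import math

def eta_paired(x, pairs=100_000):
    # Partial sum of sum_n [ 1/(2n-1)^x - 1/(2n)^x ]; every bracket is > 0 for x > 0
    return sum((2 * n - 1) ** (-x) - (2 * n) ** (-x) for n in range(1, pairs + 1))

for x in (0.5, 1.0, 2.0):
    s = eta_paired(x)
    assert s > 0
    print(x, s)

# Cross-check at x = 1: the alternating harmonic series sums to ln 2
assert abs(eta_paired(1.0) - math.log(2)) < 1e-4
```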
|
139,417 | <p>I have a polygon defined by a list of nodes (x,y). I want to cut the polygon by a horizontal line at position y = a and get the new polygon above the position y = a. I am using the RegionIntersection function, but it seems very slow when I combine it with the Manipulate function. Is there any way to improve my code to get better speed?</p>
<pre><code>R2 = Polygon[{{0, 0}, {300, 0}, {300, 500}, {0, 750}}] ;
Manipulate[
R1 = ImplicitRegion[{0 <= x <= 300, a <= y <= 700}, {x, y} ];
R3 = RegionIntersection[R1, R2];
RegionPlot[R3], {a, 1, 499}]
</code></pre>
| Marchi | 29,455 | <p>On my machine doing everything in one step and discretizing the regions appears to smooth things out a bit.</p>
<pre><code>Manipulate[
RegionPlot[
DiscretizeRegion@
RegionIntersection[
DiscretizeRegion@
Polygon[{{0, 0}, {300, 0}, {300, 500}, {0, 750}}],
DiscretizeRegion@
ImplicitRegion[{0 <= x <= 300, a <= y <= 700}, {x, y}]]], {a, 0, 500}]
</code></pre>
|
2,558,988 | <blockquote>
<p>Let us consider a function $f(x,y)=4x^2-xy+4y^2+x^3y+xy^3-4$. Then find the maximum and minimum value of $f$.</p>
</blockquote>
<p>My attempt. $f_x=0$ implies $8x-y+3x^2y+y^3=0$, $f_y=0$ implies $-x+8y+x^3+3xy^2=0$, and $f_{xy}=3x^2+3y^2-1$. Now $f_x+f_y=0$ implies $(x+y)((x+y)^2+7)=0$, which implies $x=-y$ as $x$ and $y$ are real. Putting this in $f_x=0$ gives the three values of $x$ as $0$, $(-3+3\sqrt{5})/2$ and $(-3-3\sqrt{5})/2$. Then $f_{xy}(0,0)<0$ implies $f$ has a maximum at $(0,0)$. But at the other two points $f_{xy}$ gives a positive value, so how can I proceed with the problem?
Please help me to solve it.</p>
| José Carlos Santos | 446,262 | <p>Fix a $x_0\in X$. Consider these subsets of $X\times\mathbb R$:</p>
<ul>
<li>$\{x_0\}\times(0,+\infty)$;</li>
<li>$\left\{\bigl(x,f(x)+k\bigr)\right\}$ ($k>0$).</li>
</ul>
<p>They are all connected. So, for each $k>0$ the set$$G_k=\{x_0\}\times(0,+\infty)\cup\left\{\bigl(x,f(x)+k\bigr)\right\}$$is connected, since it's the union of two connected sets with a common point, namely $\bigl(x_0,f(x_0)+k\bigr)$. But$$\bigl\{(x,y)\in X\times\mathbb{R}\,|\,f(x)>y\bigr\}=\bigcup_{k>0}G_k.$$Since each $G_k$ is connected and $\bigcap_{k>0}G_k=\{x_0\}\times(0,+\infty)$, which is not empty, $\bigl\{(x,y)\in X\times\mathbb{R}\,|\,f(x)>y\bigr\}$ is connected.</p>
|
2,073,230 | <p>I thought I'd might use induction, but that seems too hard, then I tried to take the derivative and show that that's positive $\forall$n. But I can't figure out how to do that either, I've tried induction there too.</p>
| user399601 | 399,601 | <p>You can do it directly with the binomial formula: since \begin{align*} \Big(\frac{n+0.06}{n} \Big)^n &= 1 + \frac{0.06}{n} \binom{n}{1} + ... + \frac{0.06^n}{n^n} \binom{n}{n} \\ &= \sum_{k=0}^n \frac{0.06^k}{k!} \Big( 1 - \frac{1}{n}\Big) \cdot ... \cdot \Big( 1 - \frac{k-1}{n}\Big) \end{align*} and each $1 - \frac{m}{n+1} > 1 - \frac{m}{n}$, it follows that \begin{align*} \Big(\frac{n+1+0.06}{n+1} \Big)^{n+1} &= \sum_{k=0}^n \frac{0.06^k}{k!} \Big( 1 - \frac{1}{n+1}\Big)...\Big(1 - \frac{k-1}{n+1}\Big) \\ &\quad \quad + \underbrace{\frac{0.06^{n+1}}{(n+1)!} \Big( 1 - \frac{1}{n+1}\Big)...\Big( 1 - \frac{n}{n+1}\Big)}_{>0} \\ &> \Big(\frac{n+0.06}{n}\Big)^n.\end{align*}</p>
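<p>The claimed monotonicity of $\big(\frac{n+0.06}{n}\big)^n$ is also easy to confirm numerically (Python; the cutoff $n \le 2000$ is arbitrary), along with the fact that the sequence stays below its limit $e^{0.06}$:</p>

```python
import math

# (1 + 0.06/n)^n should increase strictly with n and stay below e^0.06
seq = [(1 + 0.06 / n) ** n for n in range(1, 2001)]
assert all(a < b for a, b in zip(seq, seq[1:]))
assert seq[-1] < math.exp(0.06) and abs(seq[-1] - math.exp(0.06)) < 1e-4
print(seq[0], seq[-1], math.exp(0.06))
```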
|
3,337,210 | <p>I am struggling with the following equation, which I need to prove by induction:</p>
<p><span class="math-container">$$\sum_{k=1}^{2n}\frac{(-1)^{k+1}}{k}= \sum_{k=n+1}^{2n}\frac{1}{k}$$</span></p>
<p><span class="math-container">$n\in \mathbb{N}$</span>.<br/> I tried a few times and always got stuck.</p>
<p>Help would be appreciated. </p>
| poetasis | 546,655 | <p>I don't know if this helps, but I offer a proof from a paper I am writing, in hopes that it shows that such proofs can be presented intuitively. Some authors seem to think the reader has the same background knowledge from [sometimes] years of research (and insights gained) that went into developing a proof. My proof could have been done by merely presenting the equations and showing how they relate. I hope mine shows consideration for the reader and lets you see that there are ways to prove-with-insight. Let me know, good or bad, because it will help me too.</p>
<p><a href="https://i.stack.imgur.com/FH5Gz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FH5Gz.png" alt="enter image description here"></a></p>
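<p>Whatever proof route is taken, the identity itself can be verified exactly for small $n$ with rational arithmetic (Python; the cutoff $n < 60$ is arbitrary):</p>

```python
from fractions import Fraction

# Check sum_{k=1}^{2n} (-1)^{k+1}/k == sum_{k=n+1}^{2n} 1/k exactly
for n in range(1, 60):
    lhs = sum(Fraction((-1) ** (k + 1), k) for k in range(1, 2 * n + 1))
    rhs = sum(Fraction(1, k) for k in range(n + 1, 2 * n + 1))
    assert lhs == rhs, n
print("identity verified exactly for n = 1..59")
```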
|
3,337,210 | <p>I am struggling with the following equation, which I need to prove by induction:</p>
<p><span class="math-container">$$\sum_{k=1}^{2n}\frac{(-1)^{k+1}}{k}= \sum_{k=n+1}^{2n}\frac{1}{k}$$</span></p>
<p><span class="math-container">$n\in \mathbb{N}$</span>.<br/> I tried a few times and always got stuck.</p>
<p>Help would be appreciated. </p>
| NazimJ | 533,809 | <p>What I like to do when a proof is long and abstract is break it up into chunks that I can each describe intuitively in one sentence. These sentences then form the outline of the proof, which serves as an explanation. The skill here is deciding how much detail to include in each sentence.</p>
<p>For example, I will do this for your cosine formula proof, including as much detail as I need to make it intuitive to myself.</p>
<ul>
<li>The goal is to show "Pythagorean theorem is almost true for all triangle, minus some error". In other words, that any side length can be expressed in terms of the other side lengths (<span class="math-container">$b $</span> and <span class="math-container">$c $</span>), and the opposite angle (<span class="math-container">$\alpha $</span>)</li>
<li>First we can simplify the problem; split our triangle into 2 right angled triangles, which are familiar to work with</li>
<li>On the right triangle, we can use the familiar Pythagorean theorem with <span class="math-container">$a$</span>, <span class="math-container">$h $</span> and <span class="math-container">$b-r $</span></li>
<li>But since <span class="math-container">$h $</span> and <span class="math-container">$r $</span> are shared by the left triangle, we can easily convert them using the familiar SOHCAHTOA to expressions involving <span class="math-container">$b $</span>, <span class="math-container">$c $</span> and <span class="math-container">$\alpha $</span></li>
</ul>
|
3,337,210 | <p>I am struggling with the following equation, which I need to prove by induction:</p>
<p><span class="math-container">$$\sum_{k=1}^{2n}\frac{(-1)^{k+1}}{k}= \sum_{k=n+1}^{2n}\frac{1}{k}$$</span></p>
<p><span class="math-container">$n\in \mathbb{N}$</span>.<br/> I tried a few times and always got stuck.</p>
<p>Help would be appreciated. </p>
| David K | 139,123 | <p>For the law of cosines, we know that two sides and the included angle determine a triangle, so <span class="math-container">$a$</span> is definitely determined by <span class="math-container">$b,$</span> <span class="math-container">$c,$</span> and <span class="math-container">$\alpha.$</span></p>
<p>If you had no idea what formula related <span class="math-container">$a$</span> to <span class="math-container">$b,$</span> <span class="math-container">$c,$</span> and <span class="math-container">$\alpha,$</span> you could still try to find a formula by dividing the triangle into two right triangles as shown in the proof from brilliant.org.
Once you have drawn the two right triangles it is just a matter of applying the Pythagorean Theorem twice in order to get <em>some</em> formula relating <span class="math-container">$a$</span> to <span class="math-container">$b,$</span> <span class="math-container">$c,$</span> and <span class="math-container">$\alpha.$</span> That much is guaranteed.</p>
<p>So you can easily get to the formula</p>
<p><span class="math-container">$$ a^2 = (c\sin\alpha)^2 + (b - c\cos\alpha)^2. $$</span></p>
<p>Here's where experience helps: this is not the only place where there is a fortunate opportunity to add <span class="math-container">$\sin^2 + \cos^2$</span> and get <span class="math-container">$1$</span> (or add <span class="math-container">$k\sin^2 + k\cos^2$</span> for some common factor <span class="math-container">$k$</span> and get <span class="math-container">$k$</span>).
So when we see a sine and a cosine of the same angle each inside a squared term, one obvious thing to try is to combine their squares in this way.</p>
<p>Or we might not see that far ahead, but we might try multiplying out the terms on the right side of the equation anyway, because who knows, sometimes you stumble over something that way:</p>
<p><span class="math-container">$$ a^2 = c^2\sin^2\alpha + b^2 - 2bc\cos\alpha + c^2\cos^2\alpha. $$</span></p>
<p>And now if it didn't occur to us before to look for an opportunity to simplify <span class="math-container">$c^2\sin^2\alpha + c^2\cos^2\alpha,$</span> it's a lot more obvious now that there is one.</p>
<p>This may not seem earth-shatteringly beautiful like the proof of the infinitude of primes, but then it is after all a much more pedestrian result:
just a formula for computing one of the sides of a triangle knowing two others and the angle between them, or alternatively computing one of the angles knowing the three sides.
The result is computational, so it's not so out of place for the proof to be computational. Much like proving that <span class="math-container">$114 + 265 = 379,$</span> we just do the arithmetic and we get an answer.</p>
<p>Too bad this proof only works for acute angles and we still have to do more work to show that the same formula happens to apply to obtuse angles as well.</p>
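<p>Both the intermediate identity $a^2=(c\sin\alpha)^2+(b-c\cos\alpha)^2$ and the final law of cosines can be checked against a direct coordinate computation (Python; the sample triangles are arbitrary acute-angle cases, matching the proof's setting):</p>

```python
import math

def check(b, c, alpha):
    # Place A at the origin, C on the x-axis, B at angle alpha from side AC
    Cx, Cy = b, 0.0
    Bx, By = c * math.cos(alpha), c * math.sin(alpha)
    a_sq = (Bx - Cx) ** 2 + (By - Cy) ** 2               # |BC|^2 from coordinates

    split = (c * math.sin(alpha)) ** 2 + (b - c * math.cos(alpha)) ** 2
    law = b * b + c * c - 2 * b * c * math.cos(alpha)
    assert math.isclose(a_sq, split) and math.isclose(a_sq, law)

for b, c, alpha in [(3, 4, 1.0), (2, 5, 0.3), (7, 7, 1.4)]:
    check(b, c, alpha)
print("a^2 matches both forms")
```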
|
2,131,679 | <p>Let $f: \mathbb{R} \to \mathbb{R}$ be continuous and $D \subset \mathbb{R}$ be a dense subset of $\mathbb{R}$. Furthermore, $\forall y_1,y_2 \in D \ f(y_1)=f(y_2)$. Should $f$ be a constant function?</p>
<p>My attempt:
Since $f$ is continuous
$$\forall x_0 \ \forall \varepsilon >0 \ \exists \delta>0 \ \forall x \in \mathbb{R} \ \left(|x-x_0|<\delta \Longrightarrow |f(x)-f(x_0)|<\varepsilon \right)$$
Let $f$ be non-constant function.
Since $D$ is dense $\exists x_1 \in (x_0-\delta, x_0+\delta) \ : \ x_1 \in D$.
Let's take $x_2 \in (x_0-\delta, x_0+\delta)$ such that $f(x_2) \ne f(x_1)$.
Let $\varepsilon = \frac{|f(x_2)-f(x_1)|}{2}>0$.
Therefore, we have
$$|f(x_1)-f(x_0)|<\frac{|f(x_2)-f(x_1)|}{2} \ \ \ |f(x_2)-f(x_0)|<\frac{|f(x_2)-f(x_1)|}{2}$$
Adding the expressions above, we obtain
$$|f(x_2)-f(x_1)|\le |f(x_1)-f(x_0)|+|f(x_2)-f(x_0)|<|f(x_2)-f(x_1)|$$
which is a contradiction.
Are my musings correct?</p>
| Futurologist | 357,211 | <p>Triangles say $ABD$ and $BCE$ are congruent because $AB = BC, \,\, AD = \frac{1}{3} AC = \frac{1}{3} AB = BE$ and $\angle \, BAD = \angle \, CBE = 60^{\circ}$. Hence $\angle \, ADO = \angle \, ADC = \angle \, CEB = \angle \, OEB = \theta$. This implies that $\angle \, ADO + \angle \, AEO = \angle \, ADO + 180^{\circ} - \angle OEB = \theta + 180^{\circ} - \theta = 180^{\circ}.$ Therefore quad $ADOE$ is inscribed in a circle and $\angle \, AOE = \angle \, ADE$. </p>
<p>Now, draw a line through point $E$ parallel to $BC$ and denote by $E'$ its intersection point with $AC$. Then triangle $AEE'$ is also equilateral and $AE' = \frac{2}{3} AC$. However, $AD = \frac{1}{3}AC$ so $D$ is the midpoint of $AE'$. Thus $ED$ is the median in the equilateral triangle $AEE'$ from vertex $E$ and is therefore an altitude so $\angle \, ADE = 90^{\circ}$. As already proved $\angle \, AOE = \angle \, ADE = 90^{\circ}$ so $\angle \, AOC = 90^{\circ}$ as well.</p>
<p><strong>Edit.</strong> If you want to prove that $PQ= PO =OQ$ you can simply take the center $G$ of triangle $ABC$ (point $G$ is the intersection of the three altitudes, which are the three medians, which are the three angle bisector, which are the three orthogonal bisectors of the edges) and perform a $120^{\circ}$ rotation around it (say counterclockwise). Then triangle $CAF$ is rotated to triangle $ABD$ which in its own turn is rotated to triangle $BCE$. Consequently, segment $AF$ is rotated to $BD$ which is rotated to $CE$. Therefore, since the pair of segments $AF,\, BD$ is rotated to the pair of segments $BD, \, CE$, the intersection point $P = AF \cap BD$ is rotated to intersection point $O = BD \cap CE$. Analogously, you see that point $O$ is rotated to point $Q$. Therefore, the points $P, O, Q$ form an equilateral triangle. </p>
<p>Or, alternatively, since $AF$ is $120^{\circ}$-rotated to $BD$ the intersection angle between the two should be $60^{\circ}$, i.e. $\angle \, FPB = \angle \, QPO = 60^{\circ}$. By the same argument, $\angle \, POQ = 60^{\circ}$ and $\angle \, OQP = 60^{\circ}$. Therefore $OPQ$ is an equilateral triangle. </p>
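<p>The right-angle claim in the first part can also be checked in coordinates. The labels below are my reconstruction of the (unreproduced) figure — equilateral $ABC$, $D$ on $CA$ with $AD=AC/3$, $E$ on $AB$ with $BE=AB/3$, and $O=BD\cap CE$ (Python):</p>

```python
import math

# Equilateral triangle ABC
A = (0.0, 0.0)
B = (1.0, 0.0)
C = (0.5, math.sqrt(3) / 2)

# D on CA with AD = AC/3; E on AB with BE = AB/3 (my reading of the figure)
D = (A[0] + (C[0] - A[0]) / 3, A[1] + (C[1] - A[1]) / 3)
E = (B[0] + (A[0] - B[0]) / 3, B[1] + (A[1] - B[1]) / 3)

def intersect(p, q, r, s):
    # Intersection point of line p->q with line r->s (assumed non-parallel)
    d1 = (q[0] - p[0], q[1] - p[1])
    d2 = (s[0] - r[0], s[1] - r[1])
    det = -d1[0] * d2[1] + d2[0] * d1[1]
    t = (-(r[0] - p[0]) * d2[1] + d2[0] * (r[1] - p[1])) / det
    return (p[0] + t * d1[0], p[1] + t * d1[1])

O = intersect(B, D, C, E)                    # O = BD intersect CE
dot = (A[0] - O[0]) * (C[0] - O[0]) + (A[1] - O[1]) * (C[1] - O[1])
print(O, dot)                                # dot is 0, i.e. angle AOC = 90 degrees
```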
|
3,384,280 | <p>I'm trying to find the limit of this sequence without the use of an upper bound or asymptotic methods:</p>
<p><span class="math-container">$$\lim_{n\longrightarrow\infty}\frac{\sqrt{4n^2+1}-2n}{\sqrt{n^2-1}-n}=\left(\frac{\infty-\infty}{\infty-\infty}\right)$$</span></p>
<p>Here there are my differents methods:</p>
<ol>
<li>assuming <span class="math-container">$f(n)=\sqrt{4n^2+1}, \,$$\ g(n)=2n$</span>, <span class="math-container">$h(n)=\sqrt{n^2-1}$</span>, <span class="math-container">$\ \psi(n)= n$</span> <span class="math-container">$$f(n)-g(n)=\frac{\dfrac{1}{g(n)}-\dfrac{1}{f(n)}}{\dfrac{1}{f(n)\cdot g(n)}}, \quad h(n)-\psi(n)=\frac{\dfrac{1}{\psi(n)}-\dfrac{1}{h(n)}}{\dfrac{1}{h(n)\cdot \psi(n)}}$$</span>
I always end up with an indeterminate form.</li>
<li>I've done some rationalizations:
<span class="math-container">$$\lim_{n\longrightarrow\infty}\frac{\sqrt{4n^2+1}-2n}{\sqrt{n^2-1}-n}=\lim_{n\longrightarrow\infty}\frac{\sqrt{4n^2+1}-2n}{\sqrt{n^2-1}-n}\cdot \frac{\sqrt{4n^2+1}+2n}{\sqrt{4n^2+1}+2n}$$</span> where the numerator becomes <span class="math-container">$1$</span> but the denominator is still an indeterminate form. A similar situation arises with
<span class="math-container">$$\lim_{n\longrightarrow\infty}\frac{\sqrt{4n^2+1}-2n}{\sqrt{n^2-1}-n}=\lim_{n\longrightarrow\infty}\frac{\sqrt{4n^2+1}-2n}{\sqrt{n^2-1}-n}\cdot \frac{\sqrt{n^2-1}+n}{\sqrt{n^2-1}+n}$$</span></li>
<li><span class="math-container">$$\frac{\sqrt{4n^2+1}-2n}{\sqrt{n^2-1}-n}=\frac{n\left(\sqrt{4+\dfrac{1}{n^2}}-2\right)}{n\left(\sqrt{1-\dfrac{1}{n^2}}-1\right)}\rightsquigarrow \left(\frac{0}{0}\right)$$</span>
At the moment I cannot think of any other simple approach.</li>
</ol>
| user | 505,767 | <p>From here</p>
<p><span class="math-container">$$\frac{\sqrt{4n^2+1}-2n}{\sqrt{n^2-1}-n}=\frac{\sqrt{4+\dfrac{1}{n^2}}-2}{\sqrt{1-\dfrac{1}{n^2}}-1}$$</span></p>
<p>we can use that</p>
<p><span class="math-container">$$\sqrt{4+\dfrac{1}{n^2}}=2\sqrt{1+\dfrac{1}{4n^2}}\sim 2\left(1+\dfrac{1}{8n^2}\right)=2+\dfrac{1}{4n^2}$$</span></p>
<p><span class="math-container">$$\sqrt{1-\dfrac{1}{n^2}}\sim 1-\dfrac{1}{2n^2}$$</span></p>
<p>so the numerator behaves like <span class="math-container">$\dfrac{1}{4n^2}$</span>, the denominator like <span class="math-container">$-\dfrac{1}{2n^2}$</span>, and the limit is <span class="math-container">$-\dfrac{1}{2}$</span>.</p>
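A quick numeric sanity check of the limit in plain Python (illustrative only; the expression is evaluated directly at a moderately large $n$):

```python
import math

def ratio(n):
    """Evaluate (sqrt(4n^2 + 1) - 2n) / (sqrt(n^2 - 1) - n) directly."""
    return (math.sqrt(4 * n * n + 1) - 2 * n) / (math.sqrt(n * n - 1) - n)

# Direct evaluation at a moderately large n is already close to -1/2.
print(ratio(10**4))
```

Note that for much larger $n$ the direct evaluation breaks down because of floating-point cancellation in $\sqrt{4n^2+1}-2n$, which is one more reason to prefer the rationalized or expanded forms above.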
|
300,435 | <p>Let $V,V',V''$ and $W$ be vector spaces over $k$. Then, it is known that $\operatorname{Hom}(\cdot,W)$ is a contravariant exact functor, i.e. for each exact sequence</p>
<p>$0\to V'\to V\to V'' \to 0$,</p>
<p>and each $W$, the induced sequence</p>
<p>$0\to \operatorname{Hom}(V'',W)\to\operatorname{Hom}(V,W)\to\operatorname{Hom}(V',W)\to 0$ </p>
<p>is exact.</p>
<p>But what if all spaces involved are Banach spaces (over $\mathbb{C}$ or $\mathbb{R}$) and we replace $\operatorname{Hom}(\cdot,W)$ by $\operatorname{L}(\cdot,W)$, i.e. the <em>continuous</em> linear maps with codomain $W$? Is this functor still exact? What if we restrict ourselves even further to Hilbert spaces? I'm particularly interested in the special case $W=\mathbb{R}$ or $W=\mathbb{C}$, where we get the dual spaces (again, with only <em>continuous</em> maps considered in the Banach or Hilbert case).</p>
<p>I'm studying for an exam and the books I've been reading say nothing about this...</p>
<p>Thank you!</p>
| Martin | 49,437 | <p>To say that$\DeclareMathOperator{Hom}{Hom}$
$$
0 \to V' \xrightarrow{i} V \xrightarrow{p} V'' \to 0
$$
is exact is equivalent to saying that $i$ is a kernel of $p$ and $p$ is a cokernel of $i$. This amounts to the automatic exactness of
$$
0 \to \Hom(W,V') \xrightarrow{i_\ast}\Hom(W,V) \xrightarrow{p_\ast} \Hom(W,V'')
$$
and
$$
0 \to \Hom(V'',W) \xrightarrow{p^\ast} \Hom(V,W) \xrightarrow{i^\ast} \Hom(V',W)
$$
for all Banach spaces $W$, where I write $\Hom(V,W) = L(V,W)$ for the space of <em>continuous</em> linear maps.</p>
<hr>
<p>Consider the special case $W = \mathbb{R}$ (or $\mathbb{C}$):</p>
<p>Giving a morphism $f \colon \mathbb R \to X$ is the same as choosing a vector $x = f(1) \in X$ because for a scalar $\lambda$ we have $f(\lambda) = f(\lambda 1) = \lambda f(1) = \lambda x$. So: $\Hom(\mathbb R, X) = X$ for every Banach space and the sequence
$$
0 \to \Hom(\mathbb R,V') \xrightarrow{i_\ast}\Hom(\mathbb R,V) \xrightarrow{p_\ast} \Hom(\mathbb R,V'') \to 0
$$
really is the sequence $0 \to V' \xrightarrow{i} V \xrightarrow{p} V'' \to 0$ we started with.</p>
<p>Since $\operatorname{im} i = \ker{p}$, the map $i\colon V' \to V$ is a homeomorphism onto its range $i(V')$ and $i(V')$ is a closed subspace of $V$. A linear map $f' \colon V' \to \mathbb R$ thus corresponds to a linear functional on the subspace $i(V')$ of $V$ and the Hahn-Banach theorem allows us to extend that linear functional to all of $V$. In other words, $i^\ast \colon \Hom(V,\mathbb R) \to \Hom(V',\mathbb R)$ is always surjective and therefore the dual sequence
$$
0 \to (V'')^\ast \to V^\ast \to (V')^\ast \to 0
$$
is exact.</p>
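In finite dimensions all of this is automatic and can be checked concretely with matrices. A small numpy sketch (the maps $i$ and $p$ below are an illustrative choice, not taken from the question):

```python
import numpy as np

# Short exact sequence 0 -> R^1 -i-> R^3 -p-> R^2 -> 0
# with im(i) = ker(p).
i = np.array([[1.0],
              [0.0],
              [0.0]])                 # inclusion of the first axis
p = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])      # projection onto the last two axes

assert np.allclose(p @ i, 0)                  # p o i = 0
assert np.linalg.matrix_rank(i) == 1          # i is injective
assert np.linalg.matrix_rank(p) == 2          # p is surjective

# Dual sequence 0 -> (R^2)* -p^T-> (R^3)* -i^T-> (R^1)* -> 0,
# realized by the transposed matrices.
assert np.allclose(i.T @ p.T, 0)              # i^T o p^T = 0
assert np.linalg.matrix_rank(p.T) == 2        # p^T is injective
assert np.linalg.matrix_rank(i.T) == 1        # i^T is surjective
# Exactness in the middle: rank(p^T) + rank(i^T) = 3 = dim (R^3)*.
assert np.linalg.matrix_rank(p.T) + np.linalg.matrix_rank(i.T) == 3
```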
<hr>
<p>For general Banach spaces $W$, the answer is that neither $p_\ast$ nor $i^\ast$ need to be surjective: For $V' = c_0$ and $V = \ell_\infty$ and $V'' = \ell_\infty / c_0$, the sequence
$$
0 \to c_0 \xrightarrow{i} \ell_\infty \xrightarrow{p} \ell_\infty/c_0 \to 0
$$
is exact. <em>Phillips' lemma</em> says that for $W = c_0$ the identity $V' \to W$ cannot be extended to a morphism $V = \ell_\infty \to c_0$ and, equivalently, the identity $\ell_\infty/c_0 \to V''$ cannot be lifted to $\ell_\infty/c_0 \to \ell_\infty$.</p>
<p>See</p>
<ul>
<li><a href="https://math.stackexchange.com/q/96739/">Do continuous linear functions between Banach spaces extend?</a></li>
<li><a href="https://math.stackexchange.com/q/132520/">Complement of $c_{0}$ in $\ell^{\infty}$</a></li>
</ul>
<p>for proofs and further discussion of this last point.</p>
|
300,435 | <p>Let $V,V',V''$ and $W$ be vector spaces over $k$. Then, it is known that $\operatorname{Hom}(\cdot,W)$ is a contravariant exact functor, i.e. for each exact sequence</p>
<p>$0\to V'\to V\to V'' \to 0$,</p>
<p>and each $W$, the induced sequence</p>
<p>$0\to \operatorname{Hom}(V'',W)\to\operatorname{Hom}(V,W)\to\operatorname{Hom}(V',W)\to 0$ </p>
<p>is exact.</p>
<p>But what if all spaces involved are Banach spaces (over $\mathbb{C}$ or $\mathbb{R}$) and we replace $\operatorname{Hom}(\cdot,W)$ by $\operatorname{L}(\cdot,W)$, i.e. the <em>continuous</em> linear maps with codomain $W$? Is this functor still exact? What if we restrict ourselves even further to Hilbert spaces? I'm particularly interested in the special case $W=\mathbb{R}$ or $W=\mathbb{C}$, where we get the dual spaces (again, with only <em>continuous</em> maps considered in the Banach or Hilbert case).</p>
<p>I'm studying for an exam and the books I've been reading say nothing about this...</p>
<p>Thank you!</p>
| Nate Eldredge | 822 | <p>I'm going to adjust your notation, because $V'$ looks too much like a dual space to me.</p>
<p>In the category of Banach spaces, where the morphisms are the continuous linear maps, one should perhaps interpret "image" in the categorical sense, as the <em>closure</em> of the image. (See the comments on Martin's answer.) If we interpret "exact" according to this sense, then a sequence $$0 \to X_1 \overset{S}{\to} X_2 \overset{T}{\to} X_3 \to 0$$ is exact iff $\ker S = 0$, $\overline{S(X_1)} = \ker T$, and $T(X_2)$ is dense in $X_3$.</p>
<p>In this setting, the functor $L(\cdot, W)$ is not exact, not even when $W = \mathbb{R}$. For instance, let's take $X_1 = \ell^1$, $X_2 = \ell^2$, $S$ the inclusion map, and $X_3 = 0$. Since $S$ is injective and has dense image, this sequence is exact. If $L(\cdot, \mathbb{R})$ were an exact functor, then $S^* : (\ell^2)^* \to (\ell^1)^*$ should also be injective and have dense image. $S^*$ is injective, but since it is a map from the separable Banach space $(\ell^2)^* = \ell^2$ to the non-separable Banach space $(\ell^1)^* = \ell^\infty$, it cannot have dense image. (In fact, $S^*$ is just the inclusion map $\ell^2 \to \ell^\infty$ and the closure of $\ell^2$ in $\ell^\infty$ is $c_0$, the sequences which vanish at infinity.)</p>
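One ingredient implicit in this example is that the inclusion $S \colon \ell^1 \to \ell^2$ really is a continuous linear map, which follows from the inequality $\|x\|_2 \le \|x\|_1$. A small numpy illustration on random finitely supported sequences (a sanity check, not a proof):

```python
import numpy as np

# The inclusion S: l^1 -> l^2 is continuous because ||x||_2 <= ||x||_1
# for every sequence x; check the inequality on random finitely
# supported sequences.
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.standard_normal(100)
    assert np.linalg.norm(x, 2) <= np.linalg.norm(x, 1) + 1e-12
print("||x||_2 <= ||x||_1 held on all 1000 samples")
```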
<p>If you work in the category of <em>reflexive</em> Banach spaces, then $L(\cdot, \mathbb{R})$ is an exact functor; this is a fairly straightforward diagram chase using the Hahn-Banach theorem.</p>
|