| qid | question | author | author_id | answer |
|---|---|---|---|---|
95,242 | <p>Is it possible to use <code>ProbabilityScalePlot</code> to show different plot markers in a single dataset, such as in going from <code>plot2</code> to <code>plot3</code> below?</p>
<pre><code>nPoints = 10;
x = RandomVariate[NormalDistribution[1, 1], nPoints];
y = RandomVariate[LogNormalDistribution[1, 1], nPoints];
z = RandomVariate[WeibullDistribution[1, 1], nPoints];
plot1 = SmoothHistogram[{x, y, z}, Filling -> Axis]
plot2 = ProbabilityScalePlot[{x, y, z}]
plot3 = ProbabilityScalePlot[Flatten[{x, y, z}]]
</code></pre>
| george2079 | 2,079 | <p>More complicated, but I thought it interesting to see how to generate the plot from first principles:</p>
<pre><code>nPoints = 10;
x = RandomVariate[NormalDistribution[1, 1], nPoints];
y = RandomVariate[LogNormalDistribution[1, 1], nPoints];
z = RandomVariate[WeibullDistribution[1, 1], nPoints];
data = {x, y, z};
nn = Length@Flatten[data];
ordereddata =
MapIndexed[ {(First@#2 - .3)/(nn + .4), Sequence @@ #1} &,
Sort[Join @@
MapIndexed[ Function[{dat, ind}, {#, First@ind} & /@ dat],
data ]]];
dataprime =
Table[{#[[2]], 1 - Sqrt[2] InverseErfc[2 #[[1]]]} & /@
Select[ ordereddata , #[[3]] == k &], {k, Length@data}];
Show[ ProbabilityScalePlot[Flatten[data] ],
ListPlot[dataprime, PlotStyle -> {Blue, Red, Green}] ]
</code></pre>
<p><a href="https://i.stack.imgur.com/1mVBD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1mVBD.png" alt="enter image description here"></a> </p>
<p>Note this is drawing the color markers on top of the probability-plot markers. <code>ProbabilityScalePlot</code> for some reason does not respect <code>PlotMarkers->None</code>, but you can hide them with something like <code>PlotMarkers -> Graphics@{PointSize[0], Point[{0, 0}]}</code>.</p>
<p>Note also this is specific to the default normal distribution assumed by <code>ProbabilityScalePlot</code>.</p>
|
95,242 | <p>Is it possible to use <code>ProbabilityScalePlot</code> to show different plot markers in a single dataset, such as in going from <code>plot2</code> to <code>plot3</code> below?</p>
<pre><code>nPoints = 10;
x = RandomVariate[NormalDistribution[1, 1], nPoints];
y = RandomVariate[LogNormalDistribution[1, 1], nPoints];
z = RandomVariate[WeibullDistribution[1, 1], nPoints];
plot1 = SmoothHistogram[{x, y, z}, Filling -> Axis]
plot2 = ProbabilityScalePlot[{x, y, z}]
plot3 = ProbabilityScalePlot[Flatten[{x, y, z}]]
</code></pre>
| kglr | 125 | <pre><code>colorF = Piecewise[{{Red, MemberQ[x, #[[1]]]}, {Green, MemberQ[y, #[[1]]]}}, Blue] &;
Normal[ProbabilityScalePlot[Flatten[{x, y, z}]]] /.
Point[p_] :> ({colorF @ #, PointSize[.015], Point @ #} & /@ p)
</code></pre>
<p><a href="https://i.stack.imgur.com/SQGSh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SQGSh.png" alt="enter image description here"></a></p>
|
7,025 | <p>Many commutative algebra textbooks establish that every ideal of a ring is contained in a maximal ideal by appealing to Zorn's lemma, which I dislike on grounds of non-constructivity. For Noetherian rings I'm told one can replace Zorn's lemma with countable choice, which is nice, but still not nice enough - I'd like to do without choice entirely.</p>
<p>So under what additional hypotheses on a ring $R$ can we exhibit one of its maximal ideals while staying in ZF? (I'd appreciate both hypotheses on the structure of $R$ and hypotheses on what we're given in addition to $R$ itself, e.g. if $R$ is a finitely generated algebra over a field, an explicit choice of generators.)</p>
<p>Edit: I guess it's also relevant to ask whether there are decidability issues here. </p>
| Guillermo Mantilla | 2,089 | <p>There are certain kinds of rings that came to my mind when I saw this question: $K[V]:=$ the coordinate ring of an affine variety $V$ over a field $K$, and $C(X, F):=$ the ring of continuous $F$-valued ($F$ a topological field) functions on a compact space $X$. In both examples one can construct maximal ideals as zero sets of minimal closed subsets of a certain topological space (in one case $V$, in the other $X$). So for $C(X, F)$ some maximal ideals correspond to points of $X$, and for $K[V]$ some maximal ideals correspond to points of $V$. </p>
<p>Another way to think about the question is the following analogy. Under what hypotheses can we exhibit a basis of a vector space without using any form of choice? Here the clear answer should be finite-dimensional spaces! This gives a clear picture of which rings $R$ we should consider: the Artinian rings. </p>
<p>About decidability I'd say think about Boolean rings. The category of Boolean rings is equivalent to the category of Boolean algebras. Under that equivalence one has a correspondence between ideals and filters that takes maximal ideals to ultrafilters. If I'm not mistaken the existence of ultrafilters is equivalent to some form of choice (see <a href="http://www3.interscience.wiley.com/cgi-bin/fulltext/103520653/PDFSTART" rel="nofollow">http://www3.interscience.wiley.com/cgi-bin/fulltext/103520653/PDFSTART</a>),
so the same form of choice should be equivalent to the existence of maximal ideals.</p>
|
7,025 | <p>Many commutative algebra textbooks establish that every ideal of a ring is contained in a maximal ideal by appealing to Zorn's lemma, which I dislike on grounds of non-constructivity. For Noetherian rings I'm told one can replace Zorn's lemma with countable choice, which is nice, but still not nice enough - I'd like to do without choice entirely.</p>
<p>So under what additional hypotheses on a ring $R$ can we exhibit one of its maximal ideals while staying in ZF? (I'd appreciate both hypotheses on the structure of $R$ and hypotheses on what we're given in addition to $R$ itself, e.g. if $R$ is a finitely generated algebra over a field, an explicit choice of generators.)</p>
<p>Edit: I guess it's also relevant to ask whether there are decidability issues here. </p>
| David E Speyer | 297 | <p>It seems to me like there are two points here. </p>
<p>(1) A ring is called noetherian if any ascending chain of ideals terminates. I think all the standard facts about noetherian rings can be proved without choice: $\mathbb{Z}$ is noetherian; fields are noetherian; $A$ noetherian implies $A[x]$ noetherian; quotients of noetherian rings are noetherian; localizations of noetherian rings are noetherian; completions of noetherian rings are noetherian.</p>
<p>That should take care of most of the rings we need in algebraic geometry. </p>
<p>However, I am worried about another issue:</p>
<p>(2) The usual proof that noetherian rings have maximal ideals goes as follows. Let $A$ be a ring without maximal ideals. Take the ideal $I_0 = \{ 0 \}$. Since it is not maximal, choose an ideal $I_1$ which properly contains it. Choose an ideal $I_2$ which properly contains $I_1$. Continue in this manner to produce a chain $I_0 \subsetneq I_1 \subsetneq I_2 \subsetneq \cdots$. This doesn't terminate, so $A$ is not noetherian.</p>
<p>This proof, of course, uses countable choice. My gut feeling is that this can be eliminated for the same sort of rings I address in (1). But does anyone know a reference for this?</p>
|
7,025 | <p>Many commutative algebra textbooks establish that every ideal of a ring is contained in a maximal ideal by appealing to Zorn's lemma, which I dislike on grounds of non-constructivity. For Noetherian rings I'm told one can replace Zorn's lemma with countable choice, which is nice, but still not nice enough - I'd like to do without choice entirely.</p>
<p>So under what additional hypotheses on a ring $R$ can we exhibit one of its maximal ideals while staying in ZF? (I'd appreciate both hypotheses on the structure of $R$ and hypotheses on what we're given in addition to $R$ itself, e.g. if $R$ is a finitely generated algebra over a field, an explicit choice of generators.)</p>
<p>Edit: I guess it's also relevant to ask whether there are decidability issues here. </p>
| Neel Krishnaswami | 1,610 | <p>You should take a look at Coquand and Lombardi's "A Logical Approach to Abstract Algebra". </p>
<p>They observe that commutative rings have a purely equational description, and so there are very strong metatheorems that apply to this theory: Birkhoff's completeness theorem for equational logic, of course; and also Barr's theorem, which states that if a geometric sentence is a consequence of a geometric theory in classical logic plus choice, it is also intuitionistically valid. (And all equational theories are also geometric theories.) </p>
<p>They strengthen Barr's theorem a bit, by characterizing the relevant intuitionistic proofs, and then "de-Noetherian-ize" several basic theorems which are typically proved using maximal ideals. </p>
|
7,025 | <p>Many commutative algebra textbooks establish that every ideal of a ring is contained in a maximal ideal by appealing to Zorn's lemma, which I dislike on grounds of non-constructivity. For Noetherian rings I'm told one can replace Zorn's lemma with countable choice, which is nice, but still not nice enough - I'd like to do without choice entirely.</p>
<p>So under what additional hypotheses on a ring $R$ can we exhibit one of its maximal ideals while staying in ZF? (I'd appreciate both hypotheses on the structure of $R$ and hypotheses on what we're given in addition to $R$ itself, e.g. if $R$ is a finitely generated algebra over a field, an explicit choice of generators.)</p>
<p>Edit: I guess it's also relevant to ask whether there are decidability issues here. </p>
| Ingo Blechschmidt | 31,233 | <p>If the ring is countable (or the image of a linear well-ordering), then no choice of any kind (not even countable choice) and in fact not even the law of excluded middle is required: There is an explicit construction, admissible by the standards of constructive mathematics.</p>
<p>This result is due to Krivine and was elucidated by Berardi and Valentini. See here for an introduction: <a href="https://arxiv.org/abs/2207.03873" rel="nofollow noreferrer">https://arxiv.org/abs/2207.03873</a></p>
<p>In the general case, we can force the ring to become countable by passing to a suitable extension of the universe. In the extended universe, we can apply the Krivine construction again. From the point of view of the extended universe, we will have succeeded in constructing a maximal ideal. From the point of view of the base universe, we will only have constructed a suitable sheaf of ideals.</p>
<p>The base universe and the extended universe validate the same first-order statements. (Constructively, this is a nontrivial fact.) Hence, even though a maximal ideal itself might not constructively exist, its first-order consequences will hold constructively.</p>
<p>This phenomenon is briefly discussed in Section 4 of the linked paper.</p>
|
4,066,601 | <p>The question is</p>
<blockquote>
<p>Find all solutions <span class="math-container">$z\in \mathbb C$</span> for the following equation: <span class="math-container">$z^2 +3\bar{z} -2=0$</span></p>
</blockquote>
<p>I have attempted numerous methods of approaching this question, from trying to substitute <span class="math-container">$x+iy$</span> and <span class="math-container">$x-iy$</span> respectively, in addition to substituting <span class="math-container">$z^2$</span> for <span class="math-container">$z\bar z$</span>, but with no luck. I would really appreciate if you were able to provide some direction so I know where to start. Thank you!</p>
| Lukas | 844,079 | <p>Plugging in <span class="math-container">$x+iy$</span> for <span class="math-container">$z$</span> seems to be a good idea actually. You get
<span class="math-container">$$(x^2-y^2+3x-2)+i(2xy-3y)=0$$</span>
For a complex number to be zero, both the real and the imaginary part have to be zero, so we get <span class="math-container">$2xy=3y$</span> and <span class="math-container">$x^2-y^2+3x-2=0$</span>. For <span class="math-container">$y \neq 0$</span> the first equation gives <span class="math-container">$x=1.5$</span>, and substituting into the second yields <span class="math-container">$y= \pm \sqrt{4.75}$</span>.</p>
<p>If <span class="math-container">$y=0$</span> the second equation is a quadratic in <span class="math-container">$x$</span> with two solutions that can easily be calculated with the quadratic formula. All in all we will have four solutions: <span class="math-container">$$(1.5, \sqrt{4.75}), (1.5, -\sqrt{4.75}), (..., 0), (..., 0)$$</span></p>
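<p>As an independent numerical check (not part of the original answer), the four roots can be verified in Python; the two roots with <span class="math-container">$y=0$</span> come from the quadratic formula applied to <span class="math-container">$x^2+3x-2=0$</span>:</p>

```python
import math

# Roots with y != 0: x = 1.5, y = ±sqrt(4.75)
roots = [complex(1.5, math.sqrt(4.75)), complex(1.5, -math.sqrt(4.75))]
# Roots with y = 0: x^2 + 3x - 2 = 0, so x = (-3 ± sqrt(17)) / 2
roots += [complex((-3 + math.sqrt(17)) / 2, 0),
          complex((-3 - math.sqrt(17)) / 2, 0)]

# Each root should satisfy z^2 + 3 conj(z) - 2 = 0 up to rounding error
for z in roots:
    assert abs(z * z + 3 * z.conjugate() - 2) < 1e-9
```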
|
215,834 | <p>I have the following plot </p>
<pre><code>Show[Graphics[Axes -> True], ParametricPlot[al[t], {t, 0, 1}],
ParametricPlot[be[t], {t, 0, 1}]]
</code></pre>
<p>where <code>al</code> and <code>be</code> are <code>BezierFunction</code>s. </p>
<p>I would like to add arrows at the midpoint of the parametric plots. I have tried using the <code>Arrow</code> command but this does not work. Is there any way to do this easily?</p>
| kglr | 125 | <pre><code>SeedRandom[1]
al = BezierFunction[RandomReal[{-1, 1}, {14, 2}]];
be = BezierFunction[RandomReal[{-1, 1}, {20, 2}]];
</code></pre>
<p>You can temporarily redefine <code>Line</code> as <code>Arrow</code> using <code>Block</code> and use <code>ParametricPlot</code>:</p>
<pre><code>Block[{Line = Arrow},
ParametricPlot[{al[t], be[t]}, {t, 0, 1}, PlotRange -> All, Frame -> True, Axes -> False]]
</code></pre>
<p><a href="https://i.stack.imgur.com/ake8U.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ake8U.png" alt="enter image description here"></a></p>
<p>Alternatively, you can use <code>Graphics</code></p>
<pre><code>Graphics[{Thick,
MapThread[{#, Arrow[#2 /@ Subdivide[200]]} &,
{ColorData[97] /@ {1, 2}, {al, be}}]},
Frame -> True]
</code></pre>
<blockquote>
<p>same picture</p>
</blockquote>
<p>You can specify the size and position of the arrow heads using the directive <code>Arrowheads[{{size, pos}}]</code>:</p>
<pre><code>Block[{Line = Arrow},
ParametricPlot[{al[t], be[t]}, {t, 0, 1},
PlotStyle -> Arrowheads[{{.05, .75}}], PlotRange -> All,
Frame -> True, Axes -> False]]
</code></pre>
<p><a href="https://i.stack.imgur.com/KRsDX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/KRsDX.png" alt="enter image description here"></a></p>
<p>Alternatively, with <code>Graphics</code>:</p>
<pre><code>Graphics[{Arrowheads[{{.05, .75}}], Thick,
MapThread[{#, Arrow[#2 /@ Subdivide[200]]} &, {ColorData[97] /@ {1, 2}, {al, be}}]},
Frame -> True]
</code></pre>
<blockquote>
<p>same picture</p>
</blockquote>
|
215,834 | <p>I have the following plot </p>
<pre><code>Show[Graphics[Axes -> True], ParametricPlot[al[t], {t, 0, 1}],
ParametricPlot[be[t], {t, 0, 1}]]
</code></pre>
<p>where <code>al</code> and <code>be</code> are <code>BezierFunction</code>s. </p>
<p>I would like to add arrows at the midpoint of the parametric plots. I have tried using the <code>Arrow</code> command but this does not work. Is there any way to do this easily?</p>
| wmora2 | 40,277 | <p>To put arrows on a curve I usually use this code:</p>
<pre><code>ParametricPlot[...] /. Line[fig___] :> {Arrowheads[ConstantArray[0.06, 4]], Arrow[fig]};
</code></pre>
<p><a href="https://i.stack.imgur.com/2B8SW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2B8SW.png" alt="enter image description here" /></a></p>
<p>The complete code:</p>
<pre><code>r1[t_] = {-1 + 2 Cos[t], 2 Sin[t], 0};
r2[t_] = {0, 2 + 2 Sinh[t], -2 + 2 Cosh[t]};
p1 = {0, 2 - Sqrt[1/2], 1 - Sqrt[1/2]}; p2 = {0, 2 + Sqrt[1/2],
1 + Sqrt[1/2]};
q1 = {0, 2 + Sqrt[1/2], 1 - Sqrt[1/2]}; q2 = {0, 2 - Sqrt[1/2],
1 + Sqrt[1/2]};
c2 = ParametricPlot3D[{0, 2 + Sqrt[1/2] Sinh[t],
1 + Sqrt[1/2] Cosh[t]}, {t, -2, 2},
Mesh -> None, PlotStyle -> {AbsoluteThickness[3.5]}] /.
Line[fig___] :> {Arrowheads[ConstantArray[0.06, 4]], Arrow[fig]};
c3 = ParametricPlot3D[{0, 2 + Sqrt[1/2] Sinh[t], 1 -
Sqrt[1/2] Cosh[t]}, {t, -2, 2},
Mesh -> None, PlotStyle -> {AbsoluteThickness[3.5]}] /.
Line[fig___] :> {Arrowheads[ConstantArray[0.06, 4]], Arrow[fig]};
Graphics3D[{
First@c2,
First@c3,
AbsolutePointSize[8.5], Orange, Point[{{0, 2, 1}}],
Thick, Line[{ p1 - (p2 - p1), p1 + 2 (p2 - p1)}],
Line[{ q1 - (q2 - q1), q1 + 2 (q2 - q1)}],
Dashed, Thick, Gray, Line[{p1, q1, p2, q2, p1}]},
Boxed -> False, PlotRange -> All]
</code></pre>
|
153,448 | <p>On the complex plane, I have a transformation "T" such that:</p>
<p>$z' = (m+i)z + m - 1 - i$ ($z'$ is the image and $z$ the preimage; $z$ and $z'$ are both complex numbers)</p>
<p>and $m$ is a real number. </p>
<p>I'd need to determine "$m$" such that this transformation "T" is a rotation.</p>
<p>I know a rotation can be written in the form $z'- w = k (z - w)$,
with "$w$" the complex number associated with the center and "$k$" a complex number of modulus 1. But I can't find how to put "T" in the form of a rotation.</p>
<p>Any hint would be much appreciated. Thanks.</p>
| Gigili | 181,853 | <p>$$\int_{0}^{f(x)}t^2dt=\frac{{f(x)}^3}{3}=x^3(1+x)^2$$</p>
<p>$$\Downarrow$$</p>
<p>$${f(x)}^3=3x^3(1+x)^2$$
$$\Downarrow$$</p>
<p>$${f(2)}^3=3 \cdot2^3(1+2)^2=3^3 \cdot 2^3$$
$$\Downarrow$$</p>
<p>$$f(2)=3 \cdot 2 =6$$</p>
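<p>A two-line numerical sanity check of the arithmetic above (an independent verification, not part of the original argument):</p>

```python
# f(x)^3 = 3 x^3 (1 + x)^2, so f(2)^3 = 3 * 2**3 * (1 + 2)**2 = 216 = 6**3
f2_cubed = 3 * 2**3 * (1 + 2)**2
assert f2_cubed == 6**3
```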
|
2,252,206 | <p>This question is related to <a href="https://math.stackexchange.com/questions/1574196/units-of-group-ring-mathbbqg-when-g-is-infinite-and-cyclic">this</a> one, in that I am asking about the same problem, but not necessarily about the same aspect of the problem.</p>
<p>I need to identify all units of the group ring $\mathbb{Q}(G)$ where $G$ is an infinite cyclic group.</p>
<p>Now, as I understand it, if $R$ is a ring and $G$ is a group, then if we consider the set of all formal sums </p>
<p>$$r_{1}g_{1}+r_{2}g_{2}+\cdots + r_{k}g_{k},$$ </p>
<p>$r_{i} \in R$, $g_{i} \in G$, where we allow the empty sum to play the part of the zero element $0$, </p>
<p>then if we consider two formal sums to be equivalent if they have the same reduced form, the group ring $R(G)$ refers to the set of equivalence classes of such sums with respect to this equivalence relation.</p>
<p>In this case, then, since $\mathbb{Q} = R$ and $G$ is some infinite cyclic group, say $\langle x \rangle$ (although, if it is an infinite cyclic group, couldn't we say that it is isomorphic to $\mathbb{Z}$?), so our sums look like </p>
<p>$$q_{1}x_{1} + q_{2}x_{2} + \cdots + q_{k}x_{k}$$</p>
<p>for some rationals $q_{i}$ and elements of $\langle x \rangle$, $x_{i}$. </p>
<p>Now, the units of this group ring are the nonzero, invertible elements, and that various relationships exist among units, principal ideals, and associate elements. I am not sure how to apply any of this information to this situation, though, as I am relatively inexperienced with working with group rings.</p>
<p>Moreover, I did find the answered question I linked to above, but this answer uses some terminology that I am unfamiliar with: for example, I do not know what it means to be a "localization of $\mathbb{Q}[x]$", and I only know a little bit about Laurent polynomials from Complex Analysis, which I'm assuming is where he is getting the negative powers from in his answer.</p>
<p>Now, among my questions is: <strong>1. How do you know that $\mathbb{Q}(G)$ is isomorphic to $\mathbb{Q}[x, x^{-1}]$?</strong> That it is seems weird to me, since $G$ here is supposed to be cyclic, and he seems to be saying that a group ring on a cyclic group is isomorphic to a group ring on a group with two generators, but perhaps my confusion just stems from my inexperience with group rings? If someone could please explain this to me, I would be forever grateful. Also, <strong>2. what is the actual isomorphism used or how do you show that the two group rings are isomorphic? 3. How does this tell us what the units are?</strong></p>
<p>I'm extremely confused and I thank you very much in advance for your time, help, and patience!</p>
| Travis Willse | 155,629 | <p><strong>Hint</strong> The Hermitian isometry condition $||T v|| = || v ||$ is equivalent to $\langle T v, T v \rangle = \langle v, v \rangle$, and one can use the <a href="https://en.wikipedia.org/wiki/Polarization_identity#For_vector_spaces_with_complex_scalars" rel="nofollow noreferrer">Hermitian polarization identity</a> to write a Hermitian inner product $\langle a, b \rangle$ as a certain $\Bbb C$-linear combination of expressions of the form $\langle c, c \rangle$.</p>
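<p>For reference, one common form of the identity (conventions differ; this version assumes the inner product is linear in the first argument and conjugate-linear in the second) is
$$\langle a, b \rangle = \frac{1}{4}\sum_{k=0}^{3} i^{k}\, || a + i^{k} b ||^{2}.$$
Applying this with $a, b$ replaced by $Ta, Tb$ and using $||Tv|| = ||v||$ for every $v$ then gives $\langle Ta, Tb \rangle = \langle a, b \rangle$.</p>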
|
10,468 | <p>I know that many graph problems can be solved very quickly on graphs of bounded degeneracy or arboricity. (It doesn't matter which one is bounded, since they're at most a factor of 2 apart.) </p>
<p>From Wikipedia's article on the clique problem I learnt that finding cliques of any constant size k takes linear time on graphs of bounded arboricity. That's pretty cool.</p>
<p>I wanted to know more examples of algorithms where the bounded arboricity condition helps. This might even be well-studied enough to have a survey article written on it. Unfortunately, I couldn't find much about my question. Can someone give me examples of such algorithms and references? Are there some commonly used algorithmic techniques that exploit this promise? How can I learn more about these results and the tools they use?</p>
| David Eppstein | 440 | <p>Bounded degeneracy or arboricity just means that the graph is sparse (number of edges is proportional to number of vertices in all subgraphs).</p>
<p>Some ideas that have been used for fast algorithms on these graphs:</p>
<ul>
<li><p>Order the vertices so that each vertex has only d neighbors that are later in the ordering, where d is the degeneracy. Then if one can similarly order the structure one is looking for, there are not too many different choices to try. For instance (though this is not the method of the Chiba & Nishizeki paper that you indirectly refer to) one can find all cliques by trying all subsets of later neighbors of each vertex. This idea also works to color these graphs with at most d+1 colors: just choose colors for vertices one at a time in the opposite of the above ordering. See e.g. <a href="http://dx.doi.org/10.1145/2402.322385" rel="noreferrer">Matula and Beck, JACM 1983</a>.</p></li>
<li><p>Find a low degree vertex, do something to it to reduce the size of the graph while preserving its overall sparsity, and continue. This is how one finds an ordering as above (repeatedly remove the smallest degree vertex) and is also how many planar coloring algorithms work.</p></li>
<li><p>Find a big independent set (or a big independent set of bounded-degree vertices), do something on it, and repeat on the remaining smaller graph. This often leads to linear time algorithms because every graph of bounded degeneracy has an independent set of Ω(n) vertices, so the size of the graph goes down by a constant factor at each repetition and the total time can be bounded by a geometric series. This is a variation of the "low degree vertex" idea that works better in the parallel algorithms setting.</p></li>
<li><p>Observe that there can only be very few vertices with high degree (O(dk) vertices with degree greater than n/k) or else they would have too many edges. So if you are looking for a structure that needs high degree vertices you don't have many choices to try. See e.g. <a href="http://dx.doi.org/10.1007/s00453-008-9204-0" rel="noreferrer">Alon and Gutner, Algorithmica 2009</a>.</p></li>
</ul>
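<p>The first two bullet points can be sketched concretely. The following Python sketch (my own illustration, not taken from the answer) computes a degeneracy ordering by repeatedly removing a minimum-degree vertex, then colors greedily in the reverse of that ordering, using at most d+1 colors:</p>

```python
def degeneracy_ordering(adj):
    """Repeatedly remove a minimum-degree vertex.

    Returns (order, d): every vertex has at most d neighbors that
    appear later in `order`, where d is the degeneracy of the graph.
    """
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    removed = set()
    order, d = [], 0
    for _ in range(len(adj)):
        v = min((u for u in adj if u not in removed), key=deg.get)
        d = max(d, deg[v])
        order.append(v)
        removed.add(v)
        for w in adj[v]:
            if w not in removed:
                deg[w] -= 1
    return order, d


def greedy_color(adj, order):
    """Color in reverse degeneracy order: when a vertex is colored, at
    most d of its neighbors are already colored, so d+1 colors suffice."""
    color = {}
    for v in reversed(order):
        used = {color[w] for w in adj[v] if w in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color


# Demo on a 5-cycle: degeneracy 2, so at most 3 colors are used.
c5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
order, d = degeneracy_ordering(c5)
colors = greedy_color(c5, order)
```

<p>This naive version runs in quadratic time; with a bucket queue the ordering can be found in linear time, as in the Matula and Beck reference above.</p>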
|
10,468 | <p>I know that many graph problems can be solved very quickly on graphs of bounded degeneracy or arboricity. (It doesn't matter which one is bounded, since they're at most a factor of 2 apart.) </p>
<p>From Wikipedia's article on the clique problem I learnt that finding cliques of any constant size k takes linear time on graphs of bounded arboricity. That's pretty cool.</p>
<p>I wanted to know more examples of algorithms where the bounded arboricity condition helps. This might even be well-studied enough to have a survey article written on it. Unfortunately, I couldn't find much about my question. Can someone give me examples of such algorithms and references? Are there some commonly used algorithmic techniques that exploit this promise? How can I learn more about these results and the tools they use?</p>
| anonymous | 17,256 | <p>One article that provides algorithms for the MDS problem on graphs of bounded arboricity is "Minimum Dominating Set Approximation in Graphs of Bounded Arboricity" by Lenzen and Wattenhofer <a href="http://www.disco.ethz.ch/publications/disc10_LW_204.pdf" rel="nofollow">http://www.disco.ethz.ch/publications/disc10_LW_204.pdf</a></p>
|
1,634,741 | <p>$22+22=4444$</p>
<p>$43+46=618191$</p>
<p>$77+77=?$</p>
<p>What should come in place of $?$</p>
<p>I cannot see any logic in $43+46=618191$. Is there any?</p>
| 2.71828-asy | 302,548 | <p>Let $S = a + ar + ar^2 + \cdots + ar^n$</p>
<p>Then $S-Sr = (a + ar + ar^2 + ar^3 ... ar^n) - (ar + ar^2 + ar^3 + ar^4 ... ar^{n+1}) = a - ar^{n+1}$</p>
<p>Factoring out an S we have $S(1-r) = a-ar^{n+1}$</p>
<p>Finally, $$S = {(a - ar^{n+1})\over(1-r)}$$</p>
<p>In your case, you are trying to find $5^4 + 5^5 + 5^6 ... 5^n$</p>
<p>You can factor out a $5^4$ to get $5^4(1 + 5 + 5^2 + \cdots + 5^{n-4})$</p>
<p>Plugging in corresponding values of $a$ and $r$ into the equation above we have:
$$S = 5^4 \times {5^{n-3}-1\over4} $$</p>
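<p>A quick numerical check of the closed form (illustrative, not part of the original answer):</p>

```python
# S = 5^4 + 5^5 + ... + 5^n should equal 5^4 * (5^(n-3) - 1) / 4
for n in range(4, 12):
    direct = sum(5**k for k in range(4, n + 1))
    closed = 5**4 * (5**(n - 3) - 1) // 4
    assert direct == closed
```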
|
4,531,652 | <p>In my school book, I read this theorem</p>
<blockquote>
<p>Let <span class="math-container">$n>0$</span> be an odd natural number (or an odd positive integer); then the equation <span class="math-container">$$x^n=a$$</span> has exactly one real root.</p>
</blockquote>
<p>But the book doesn't provide a proof; it only states <span class="math-container">$x=\sqrt [n]a$</span>.
How can I prove this theorem?</p>
<p>I tried to prove some special cases</p>
<p><span class="math-container">$$x^3=8$$</span>
<span class="math-container">$$(x-2)(x^2+2x+4)=0$$</span>
<span class="math-container">$$x=2 \vee x^2+2x+4=0$$</span></p>
<p>But the discriminant of <span class="math-container">$x^2+2x+4=0$</span> equals <span class="math-container">$2^2-4×4=-12<0$</span>. So <span class="math-container">$x=2$</span> is the only real root. But for <span class="math-container">$x^5=32$</span>, I got <span class="math-container">$x=2$</span> and <span class="math-container">$x^4+2x^3+4x^2+8x+16=0$</span>.</p>
<p>I don't know how I can proceed.</p>
| nonstudent | 1,089,358 | <p>The case <span class="math-container">$a=0$</span> is obviously trivial. Suppose that <span class="math-container">$a>0$</span>. This implies <span class="math-container">$x^n>0\implies x>0$</span>, where <span class="math-container">$n$</span> is an odd positive integer.</p>
<p>Thus, we can apply the <em>real-valued</em> logarithm rules to both sides:</p>
<p><span class="math-container">$$
\begin{aligned}x^n=a,\,a>0
&\implies \ln x^n =\ln a\\
&\implies n \ln x= \ln a\\
&\implies \ln x =\frac {\ln a}{n}\\
&\implies x=e^{\frac {\ln a}{n}}\end{aligned}$$</span></p>
<p>Then, note that <span class="math-container">$f(x)=e^x$</span> is an exponential function. By properties of the real-valued exponential function, <span class="math-container">$f(x)=e^x$</span> is strictly increasing, hence injective. This implies that <span class="math-container">$x=e^{\frac {\ln a}{n}}$</span> is the unique solution.</p>
<p>On the other hand,</p>
<p><span class="math-container">$$
\begin{aligned}x=\left(e^{\ln a}\right)^{\frac 1n}=a^{\frac 1n}=\sqrt [n]{a}.\end{aligned}
$$</span></p>
<p>Now suppose that <span class="math-container">$a<0$</span>. Since <span class="math-container">$n$</span> is an odd positive integer, we have:</p>
<p><span class="math-container">$$
\begin{aligned} x^n=a\iff(-x)^n=-a>0 \end{aligned}
$$</span></p>
<p>Now applying the same logarithm rules, we get</p>
<p><span class="math-container">$$-x=e^{\frac {\ln (-a)}{n}}\implies x=-e^{\frac {\ln (-a)}{n}}$$</span></p>
<p>This means the value <span class="math-container">$x=-e^{\frac {\ln (-a)}{n}}$</span> is also unique.</p>
<p>On the other hand,</p>
<p><span class="math-container">$$
\begin{aligned}
(-x)^n=-a,\,-a>0&\implies (-x)=\sqrt [n]{-a}\\
&\implies -x=-\sqrt [n]{a}\\
&\implies x=\sqrt[n]{a}.\end{aligned}
$$</span></p>
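<p>The uniqueness claim can also be checked numerically. A small Python sketch (my own illustration, assuming the odd-root convention <span class="math-container">$\sqrt[n]{a}=-\sqrt[n]{-a}$</span> for <span class="math-container">$a<0$</span>):</p>

```python
import math

def real_odd_root(a, n):
    """Real n-th root for odd n, via the convention root(a) = -root(-a) for a < 0."""
    assert n % 2 == 1 and n > 0
    return math.copysign(abs(a) ** (1.0 / n), a)

for a in (-243.0, -32.0, -1.5, 0.0, 2.0, 32.0):
    for n in (1, 3, 5, 7):
        x = real_odd_root(a, n)
        # x is a real solution of x^n = a up to rounding error;
        # by strict monotonicity of t -> t^n (n odd) it is the only one.
        assert abs(x**n - a) <= 1e-9 * max(1.0, abs(a))
```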
|
4,052,760 | <blockquote>
<p>Prove that <span class="math-container">$\int\limits^{1}_{0} \sqrt{x^2+x}\,\mathrm{d}x < 1$</span></p>
</blockquote>
<p>I'm guessing it would not be too difficult to solve by just calculating the integral, but I'm wondering if there is any other way to prove this, like comparing it with an easy-to-calculate integral. I tried comparing it with <span class="math-container">$\displaystyle\int\limits^{1}_{0} \sqrt{x^2+1}\,\mathrm{d}x$</span>, but this greater than <span class="math-container">$1$</span>, so I'm all out of ideas.</p>
| Dr. Wolfgang Hintze | 198,592 | <p>Still simpler.</p>
<p>For <span class="math-container">$0<x<1$</span> we have <span class="math-container">$x^2<x$</span>.</p>
<p>Hence <span class="math-container">$\sqrt{x^2+x}\lt \sqrt{x+x}=\sqrt{2}\sqrt{x}$</span>, and the integral can be estimated as <span class="math-container">$\int_0^1 \sqrt{x^2+x}\,dx<\sqrt{2}\int_0^1 \sqrt{x}\,dx=\frac{2}{3}\sqrt{2}=\sqrt{\frac{8}{9}}\lt 1$</span>.
QED.</p>
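<p>The estimate is easy to confirm numerically, e.g. with a midpoint Riemann sum (an independent check, not part of the proof):</p>

```python
import math

def midpoint_integral(f, a, b, n=100_000):
    """Midpoint Riemann sum for the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

val = midpoint_integral(lambda x: math.sqrt(x * x + x), 0.0, 1.0)
bound = 2.0 * math.sqrt(2.0) / 3.0  # = sqrt(8/9) < 1
assert val < bound < 1.0
```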
|
121,653 | <p>What information is available about the existence of rational points on hyperelliptic curves over finite fields?</p>
| Michael Zieve | 30,412 | <p>[Edited to remove material subsumed and improved by Felipe's answer.]</p>
<p>Here is some historical info. Dickson studied this question in his 1909 paper "Definite forms in a finite field". For Dickson, a "definite form" is a homogeneous $f(x,z)\in\mathbb{F}_q[x,z]$ which takes nonzero square values for all $(x,z)$ in $\mathbb{F}_q\times\mathbb{F}_q$ except $(0,0)$. If $q$ is odd and $f(x,z)$ is not a square then Dickson's condition is equivalent to saying that the hyperelliptic curve $y^2=f(x,1)$ has $2q+2$ points over $\mathbb{F}_q$, or equivalently, its quadratic twist $y^2=nf(x,1)$ has no points (where $n$ is any nonsquare in $\mathbb{F}_q$).</p>
<p>In modern language, Dickson showed that there are no pointless genus-$2$ curves over $\mathbb{F}_q$ if $q$ is odd and $q\ge 13$. Carlitz took up this topic in a series of papers, and among other things made the connection with Weil's bound, which implies that a pointless hyperelliptic curve over $\mathbb{F}_q$ has genus at least $(q+1)/(2\sqrt{q})$, or roughly $\sqrt{q}/2$. As Felipe's answer indicates, this bound is essentially best possible when $q$ is an odd square. It can be improved by a factor of roughly $\sqrt{2}$ (and possibly much more) when $q$ is prime.</p>
<p>It is known that genus-$2$ pointless hyperelliptic curves exist over $\mathbb{F}_q$ precisely when $q\le 11$, and in genus-$3$ the analogous result is $q\le 25$ (the latter is due to Howe, Lauter, and Top). Further experimental results over small prime fields appear in papers by Glazunov.</p>
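<p>To make the Weil-bound discussion concrete, here is an illustrative Python sketch (my own, not from the answer) that brute-force counts points on an odd-degree hyperelliptic model $y^2=f(x)$ over a prime field $\mathbb{F}_p$ (one point at infinity) and checks the bound $|N-(q+1)|\le 2g\sqrt{q}$:</p>

```python
import math

def count_points(f_coeffs, p):
    """Count points on y^2 = f(x) over F_p (p an odd prime), plus the
    single point at infinity of an odd-degree hyperelliptic model."""
    # sqrt_count[a] = number of y in F_p with y^2 = a
    sqrt_count = [0] * p
    for y in range(p):
        sqrt_count[y * y % p] += 1

    def f(x):  # Horner evaluation; coefficients from leading to constant
        v = 0
        for c in f_coeffs:
            v = (v * x + c) % p
        return v

    return sum(sqrt_count[f(x)] for x in range(p)) + 1

# y^2 = x^5 + 1 over F_7: genus 2, one point at infinity
p, g = 7, 2
n = count_points([1, 0, 0, 0, 0, 1], p)
assert abs(n - (p + 1)) <= 2 * g * math.sqrt(p)  # Weil bound
```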
|
1,068,103 | <p>The question is about approximating a continuous function on an interval $[a, b]$. If we consider the linear space of all such functions endowed with the norm</p>
<p>$$||f|| = \max_{x \in [a, b]}|f(x)|$$ </p>
<p>then the best error in approximating a function $f$ by polynomials $p \in \pi_n$ (where $\pi_n$ denotes the set of all polynomials of degree $\leq n$) is</p>
<p>$$E_n(f) = \inf_{p\in \pi_n} ||f-p||$$</p>
<p>I need to show that $E_n(f + g) \leq E_n(f) + E_n(g)$. I don't know how to approach the problem. Any hints?</p>
| Robert Israel | 8,508 | <p>Hint: if $p$ is a good approximation to $f$ and $q$ is a good approximation to $g$,
try approximating $f+g$ by $p + q$.</p>
|
1,068,103 | <p>The question is about approximating a continuous function on an interval $[a, b]$. If we consider the linear space of all such functions endowed with the norm</p>
<p>$$||f|| = \max_{x \in [a, b]}|f(x)|$$ </p>
<p>then the best error in approximating a function $f$ by polynomials $p \in \pi_n$ (where $\pi_n$ denotes the set of all polynomials of degree $\leq n$) is</p>
<p>$$E_n(f) = \inf_{p\in \pi_n} ||f-p||$$</p>
<p>I need to show that $E_n(f + g) \leq E_n(f) + E_n(g)$. I don't know how to approach the problem. Any hints?</p>
| Exodd | 161,426 | <p>Let
$$E_n(f)=a,\qquad E_n(g)=b$$
There exist two polynomials $p_1$ and $p_2$ such that
$$\|f-p_1\|<a+\epsilon,\qquad \|g-p_2\|<b+\epsilon$$
so
$$E_n(f+g)\le\|f+g-p_1-p_2\|\le a+b+2\epsilon$$
for all $\epsilon>0$, so
$$E_n(f+g)\le a+b=E_n(f)+E_n(g)$$</p>
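This triangle-inequality argument can also be watched numerically. The sketch below is my own illustration (not part of the answer): it uses Chebyshev interpolants, which are linear in the function being approximated and near-best in the uniform norm, so the interpolation error obeys the same subadditivity as $E_n$:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def sup_error(func, deg, xs):
    # uniform-norm error of the degree-`deg` Chebyshev interpolant on [-1, 1]
    p = Chebyshev.interpolate(func, deg, domain=[-1, 1])
    return np.max(np.abs(func(xs) - p(xs)))

xs = np.linspace(-1, 1, 2001)
f, g = np.exp, np.sin
h = lambda x: np.exp(x) + np.sin(x)   # h = f + g

for deg in (2, 4, 6):
    ef, eg, eh = sup_error(f, deg, xs), sup_error(g, deg, xs), sup_error(h, deg, xs)
    # interpolation is linear, so eh <= ef + eg, mirroring E_n(f+g) <= E_n(f) + E_n(g)
    print(deg, eh <= ef + eg + 1e-12)  # True for every degree
```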
|
175,723 | <p>I am reading Goldstein's Classical Mechanics and I've noticed there is copious use of the $\sum$ notation. He even writes the chain rule as a sum! I am having a real hard time following his arguments where this notation is used, often with differentiation and multiple indices thrown in for good measure. How do I get some working insight into how sums behave without actually saying "Now imagine n=2. What does the sum become in this case?" Is there an easier way to do this? Is there an "algebra" or "calculus" of sums, like a set of rules for manipulating them? I've seen some documents on the web but none of them seem to come close to Goldstein's usage in terms of sophistication. Where can I get my hands on practice material for this notation?</p>
| Robert Israel | 8,508 | <p>The minimal polynomial of $(\theta^2-\theta)/2$ is ${z}^{3}+11\,{z}^{2}+36\,z+4$.</p>
<p>One way to get this is: if $t = (\theta^2-\theta)/2$, express $t^3 + b t^2 + c t + d$ as a rational linear combination of $1$, $\theta$ and $\theta^2$, and solve the system of equations that say that the coefficients of $1$, $\theta$ and $\theta^2$ are all $0$.</p>
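The question as archived here does not say which $\theta$ is meant, but the recipe in the answer (express powers of $t$ in the basis $1,\theta,\theta^2$ and solve the linear system) is exactly what SymPy's `minimal_polynomial` automates. Below is a sketch with a hypothetical $\theta=\sqrt[3]{2}$, chosen purely for illustration:

```python
from sympy import Rational, Symbol, expand, minimal_polynomial

theta = 2 ** Rational(1, 3)        # hypothetical theta, a root of x**3 - 2
t = (theta**2 - theta) / 2
z = Symbol('z')

p = minimal_polynomial(t, z)       # degree-3 integer polynomial vanishing at t
print(p)
assert expand(p.subs(z, t)) == 0   # sanity check: p really annihilates t
```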
|
81,728 | <p>The question is to compute or estimate the following probabilty.</p>
<p>Suppose that you have $N$ (e.g. $30$) tasks, each of which repeats every $t$ min (e.g. $30$ min) and lasts $l$ min (e.g. $5$ min). If the tasks started at uniformly random point in time yesterday, what is the probability that there is a time today at which at least $m$ (e.g. $10$) of the tasks run.</p>
| John Jiang | 4,923 | <p>Consider a circle of length $t+\ell$. Then I think your problem is asking: if I drop $N$ points uniformly at random onto that circle, what is the probability that at least $m$ of them are in an interval of length $\ell$? When $m/(N-m) \gg \ell / t$, so that the tail event that there are $m$ points in a fixed interval of size $\ell$ is exponentially small, you can use the union bound</p>
<p>$$ N \sum_{j=m}^{N} \left(\frac{t}{t+\ell}\right)^{N-j} \left(\frac{\ell}{t+\ell}\right)^{j} \binom{N}{j}. $$</p>
<p>You could also consider using the Brownian bridge (from $(0,0)$ to $(1,1))$ approximation of the partial sum $\sum_{j=1}^k X_j$ where $X_j$ is the distance between the $j$th and $j+1$st points in say the clockwise direction, with a particular chosen first point $p$. Then the question roughly becomes what is the chance that there is some $s \in [0,1]$ such that $B_{s + \ell/(\ell + t)} - B_s$ exceeds $m/N$. So for instance if $m/N \le \ell / (t + \ell) + o(\sqrt{\ell / (t + \ell)})$ then this probability should be very close to $1$. </p>
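The circle reformulation in the first paragraph is easy to sanity-check by simulation. The sketch below (function names are my own) drops $N$ uniform points on a circle of circumference $t+\ell$ and records how often some arc of length $\ell$ holds at least $m$ of them:

```python
import random

def max_in_window(points, ell, circ):
    # largest number of points contained in some arc of length `ell`
    # on a circle of circumference `circ` (sorted two-pointer sweep,
    # with the point list "unwrapped" once to handle wraparound)
    pts = sorted(points)
    n = len(pts)
    ext = pts + [p + circ for p in pts]
    best, j = 0, 0
    for i in range(n):
        while ext[j] <= ext[i] + ell:
            j += 1
        best = max(best, j - i)
    return best

def prob_at_least(N, t, ell, m, trials=20000, seed=1):
    rng = random.Random(seed)
    circ = t + ell
    hits = sum(
        max_in_window([rng.uniform(0, circ) for _ in range(N)], ell, circ) >= m
        for _ in range(trials)
    )
    return hits / trials

print(prob_at_least(30, 30, 5, 10))   # the N=30, t=30, l=5, m=10 example
```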
|
196,902 | <p>Hello fellow Ace Users.</p>
<p>Currently I'm working on a project to implement Peridynamics.
This is a discretization technique in the fashion of a meshless particle method.
AceGen/AceFEM provides the feature of arbitrary nodes per element, which suits my needs perfectly, as such a peridynamic particle interacts with an arbitrary number of neighbouring particles.
To use the benefits of this method such as modelling discontinuities I'm aiming to utilize an explicit solution procedure.</p>
<p>I appreciate any thoughts on this! I have some code running in AceGen/AceFEM so far, still struggling on some design decisions which lead to the following specific questions:</p>
<ul>
<li><ol>
<li>What exactly is parallelized in AceFEM? My recent experience indicates that SMSStandardModule["Tasks"] is not. Is that correct? How about SMSStandardModule["Tangent and residual"]? (I'm talking about evaluating the elements in parallel, not solving the global equation system.)</li>
</ol></li>
<li><ol start="2">
<li>Is there any known (maybe approximate) limit to the performance regarding arbitrary nodes per element?</li>
</ol></li>
<li><ol start="3">
<li>Does anyone have experiences with explicit simulations in AceFEM/AceGen?</li>
</ol></li>
<li><ol start="4">
<li>I expect a lot of data due to the particle discretization. Visualisation in post-processing will be too hard a task to do in Mathematica. Does anyone have experience with exporting the simulation data for use in e.g. Paraview? If so, what's the most performant way to write these to a file without significantly slowing down the simulation? I'm aware of the SMTPut[] feature by the way, but to my knowledge this binds me to Mathematica again.</li>
</ol></li>
</ul>
<p>As always You have my kudos in advance and I'm excited for your comments and answers !</p>
<p>Thanks for the response so far.</p>
<p>I'm back with a 'minimal' example that shows my main concerns.</p>
<p>The code is provided in <a href="https://github.com/5A5H/SimplePD" rel="nofollow noreferrer">SimplePDImplementation</a> on GitHub:</p>
<p>The element contains a very basic implementation of explicit peridynamics following two steps for each time step:</p>
<ul>
<li><ol>
<li>Compute force density for each node (based on its neighbours)</li>
</ol></li>
<li><ol start="2">
<li>Integrate in time: acceleration = force density / density (per node)</li>
</ol></li>
</ul>
<p>These two tasks are implemented twice (using the same code): once in the SKR subroutine and once as individual element tasks.
As the code is explicit, I do not want (or have) a system of equations to solve, but I definitely want to go over all elements in parallel to gain speedup.
The results for both implementations are the same, as expected; however, the SKR implementation runs significantly slower (I guess due to the solution of the linear system, which is completely zero in this case).
<a href="https://i.stack.imgur.com/7Aby0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7Aby0.png" alt="enter image description here"></a></p>
<p>While performing the analysis, I checked my CPU usage.</p>
<p>For the SKR implementation I get:
<a href="https://i.stack.imgur.com/h4bXs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/h4bXs.png" alt="CPU usage for SKR Implementation."></a>
While for the Task implementation as reported I have:
<a href="https://i.stack.imgur.com/5vvhv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5vvhv.png" alt="enter image description here"></a></p>
<p>My conclusion so far is that the parallelization only works on the solution of the linear system and at least does not parallelize the loop over all elements for tasks.</p>
<p>It would be great if one of you guys could confirm this, or, even better, tell me what I did wrong, so that I know whether AceFEM works for my purpose at all.</p>
<p>Best,
S</p>
| BHudobivnik | 47,826 | <p>I can mostly help with the implementation aspect of meshfree methods and answer point two. Mathematica can plot anything you want, although you might need to program it yourself.</p>
<ol start="2">
<li>In the new version of AceGen/FEM 6.923, Prof. Korelc introduced new SMSIO functions, improved <code>SMSArray</code> usage, and some new global fields. With that, the formulation of elements with a variable number of nodes is greatly simplified and the efficiency is independent of the maximum number of nodes. You can put in 1000 nodes if you want; the efficiency will not change by more than a couple of percent. To see the improvement you need to introduce your vector of DOFs as an array and not as a list with a fixed length of the maximum possible size; additionally, there are some fields that will give you the actual number of nodes (I haven't tested that part). Here is a simple code extract of the important things:</li>
</ol>
<p>The function <code>additionalNodes</code> needs to pad the actual number of nodes with <code>Null</code> to match the maximum length for each element:</p>
<pre><code>maxNodes=1000;
ClearAll[additionalNodes];
additionalNodes[nodes_, maxNodes_] := Module[{n},
If[Length[nodes] > maxNodes,
SMCError = {"Maximum number of nodes:", maxNodes, "Nodes:", nodes};
SMCAbort["Given number of nodes exceeded the maximum number.", "",""];
];
Join[ConstantArray[Null, maxNodes - Length[nodes]]]
];
</code></pre>
<p>You need to specify maximum no. of nodes, all nodes should be given as dummy-nodes and <code>SMSAdditionalNodes</code> should be assigned the above function:</p>
<pre><code>SMSTemplate[
"SMSTopology" -> "QX"
, "SMSNoNodes" -> maxNodes
, "SMSDOFGlobal" -> 2
, "SMSNodeID" -> "D -D"
,...,
, "SMSAdditionalNodes" -> Function[additionalNodes[{##},maxNodes]]
, "SMSMMAInitialisation" -> {{Definition[additionalNodes]}, Null}
]
</code></pre>
<p>Inside the <code>SMSStandardModule</code> we can extract the actual number of nodes/DOFs from the new fields (or, if you like, from the ed$$["Data",_] field):</p>
<pre><code>nNodes ⊢ SMSLastTrueNode[];
nDOF ⊢ SMSLastTrueDOF[];
nDOFNode = SMSNoDimensions;
</code></pre>
<p>The vector of all unknowns can then be defined as follows with SMSArray:</p>
<pre><code>pe ⊨ SMSArray[nDOF,
Function[{dof}, SMSIO["Nodal DOFs"[(dof - 1)/nDOFNode + 1,SMSMod[dof-1,nDOFNode] + 1]]]
];
</code></pre>
<p>Then we can loop over arbitrary points/edges... and read the displacement and coordinates of each point, with <code>node</code> defined as an <code>SMSInteger</code> or a loop parameter, as:</p>
<pre><code>XNode ⊢ Table[SMSReal[nd$$[node, "X", j]], {j, SMSNoDimensions}];
uNode ⊨ Table[SMSPart[pe, (node - 1) nDOFNode + j], {j, nDOFNode}];
</code></pre>
<p>After you define your potential, the residual and tangent can be derived the same way as before, but looped only over the actual number of DOFs defined by nDOF:</p>
<pre><code>SMSDo[Rgi ⊨ SMSD[W, pe, i, "Constant"->SMSVariables[pseudoWConstants]];
SMSIO[Rgi, "Add to", "Residual"[i]];
SMSDo[
Kgij ⊨SMSD[Rgi, \[DoubleStruckP]e, j];
SMSIO[Kgij, "Add to", "Tangent"[i, j]];
,{j, i, nDOF}
];
,{i, 1, nDOF}
];
</code></pre>
|
3,489,345 | <p>My goal is to find the values of <span class="math-container">$N$</span> such that <span class="math-container">$10N \log N > 2N^2$</span></p>
<p>I know for a fact this question requires discrete math. </p>
<p>I think the problem revolves around manipulating the logarithm. The thing is, I forgot how to manipulate the logarithm using discrete math. </p>
<p>My question is how do I manipulate this equation in a way such that I can find the values of N such that the equation is true? </p>
| Claude Leibovici | 82,404 | <p>In the real domain, consider the function
<span class="math-container">$$f(x)=5\log(x)-x$$</span> (dividing your inequality <span class="math-container">$10N \log N > 2N^2$</span> by <span class="math-container">$2N$</span> shows it is equivalent to <span class="math-container">$f(N)>0$</span>). The first derivative vanishes at <span class="math-container">$x=5$</span> and, by the second derivative test, this is a maximum. So there is a limited range of <span class="math-container">$x$</span> where <span class="math-container">$f(x) >0$</span>.</p>
<p>Sooner or later, you will learn that the zeros of <span class="math-container">$f(x)$</span> are given in terms of the Lambert function, that is to say that <span class="math-container">$f(x) >0$</span> if
<span class="math-container">$$-5 W\left(-\frac{1}{5}\right) < x < -5 W_{-1}\left(-\frac{1}{5}\right)$$</span> which, numerically, are <span class="math-container">$1.30$</span> and <span class="math-container">$12.71$</span>.</p>
<p>So, for your problem with integer numbers <span class="math-container">$2 \leq n \leq 12$</span>.</p>
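A brute-force check over the integers (my own addition) agrees with the Lambert-function bounds $1.30 < x < 12.71$:

```python
import math

# integers N with 10*N*log(N) > 2*N**2, i.e. f(N) = 5*log(N) - N > 0
sols = [n for n in range(1, 100) if 10 * n * math.log(n) > 2 * n * n]
print(sols)  # [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
```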
|
53,185 | <p>Let us consider a noncompact Kähler manifold with vanishing scalar curvature but nonzero Ricci tensor. I'm wondering what can it tell us about the manifold. The example (coming from physics) has the following Kähler form</p>
<p><span class="math-container">$$K = \bar{X} X + \bar{Y} Y + \log(\bar{X} X + \bar{Y} Y)$$</span></p>
<p>e.g. this is a 2D complex manifold. I claim that its Ricci form is nonzero, whereas its scalar curvature is identically zero.</p>
<p>I'm wondering if such manifolds possess any interesting properties and how can we classify them.</p>
<p><strong>UPD</strong>.</p>
<p>Partly the answer for 4 manifolds (2d complex manifolds) is given in the paper by C Lebrun "Counter-examples to the generalized positive action conjecture'' <a href="https://projecteuclid.org/journals/communications-in-mathematical-physics/volume-118/issue-4/Counter-examples-to-the-generalized-positive-action-conjecture/cmp/1104162166.full" rel="nofollow noreferrer">paper</a>. The author considers vanishing scalar curvature and derives the most generic form of the Kähler potential such that it vanishes. There are several integration constants in the final answer, playing with them we can get different manifolds including the one I was talking above. For that case the Kähler metric is the metric of a standard blow-up in the origin</p>
<p><span class="math-container">$$K = \bar{X}X+\bar{Y}Y+a\log(\bar{X}X+\bar{Y}Y)$$</span></p>
<p>where <span class="math-container">$a>0$</span>.</p>
<p>Now one can ask the same question about manifolds of higher dimension if they all with vanishing scalar curvature (but nonvanishing Ricci tensor) are described by the blow-ups of <span class="math-container">$\mathbb{C}^n$</span>'s. In particular, I'm interested in the following Kähler potential</p>
<p><span class="math-container">$$K = \sum\limits_{i=1}^N \sum\limits_{j=1}^{\tilde N}|X^i Y^j|^2 + a \log \sum\limits_{i=1}^N|X^i|^2.$$</span></p>
| Zatrapilla | 4,129 | <p>The Ricci scalar is the average Gaussian curvature over all the two-dimensional subspaces passing through the point, I believe. Whence you can derive the 'meaning'.</p>
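The claim in the question — nonzero Ricci form but identically zero scalar curvature for the potential $K=\bar XX+\bar YY+\log(\bar XX+\bar YY)$ — can be verified symbolically. The sketch below is my own check: it treats $X,\bar X,Y,\bar Y$ as independent (positive) symbols, the usual trick for holomorphic/antiholomorphic derivatives; this suffices here because all the expressions involved are rational:

```python
import sympy as sp

X, Xb, Y, Yb = sp.symbols('X Xb Y Yb', positive=True)
z, zb = [X, Y], [Xb, Yb]
s = X*Xb + Y*Yb
K = s + sp.log(s)                                  # the Kaehler potential

# metric g_{i jbar} = d^2 K / dz^i dzbar^j
g = sp.Matrix(2, 2, lambda i, j: sp.diff(K, z[i], zb[j]))

# Ricci form R_{i jbar} = -d^2 log(det g) / dz^i dzbar^j
logdet = sp.log(sp.cancel(g.det()))
Ric = sp.Matrix(2, 2, lambda i, j: sp.simplify(-sp.diff(logdet, z[i], zb[j])))

scal = sp.simplify((g.inv() * Ric).trace())        # scalar curvature, up to a convention factor
print(sp.simplify(Ric[0, 0]))                      # a nonzero rational expression
print(scal)                                        # 0
```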
|
53,185 | <p>Let us consider a noncompact Kähler manifold with vanishing scalar curvature but nonzero Ricci tensor. I'm wondering what can it tell us about the manifold. The example (coming from physics) has the following Kähler form</p>
<p><span class="math-container">$$K = \bar{X} X + \bar{Y} Y + \log(\bar{X} X + \bar{Y} Y)$$</span></p>
<p>e.g. this is a 2D complex manifold. I claim that its Ricci form is nonzero, whereas its scalar curvature is identically zero.</p>
<p>I'm wondering if such manifolds possess any interesting properties and how can we classify them.</p>
<p><strong>UPD</strong>.</p>
<p>Partly the answer for 4 manifolds (2d complex manifolds) is given in the paper by C Lebrun "Counter-examples to the generalized positive action conjecture'' <a href="https://projecteuclid.org/journals/communications-in-mathematical-physics/volume-118/issue-4/Counter-examples-to-the-generalized-positive-action-conjecture/cmp/1104162166.full" rel="nofollow noreferrer">paper</a>. The author considers vanishing scalar curvature and derives the most generic form of the Kähler potential such that it vanishes. There are several integration constants in the final answer, playing with them we can get different manifolds including the one I was talking above. For that case the Kähler metric is the metric of a standard blow-up in the origin</p>
<p><span class="math-container">$$K = \bar{X}X+\bar{Y}Y+a\log(\bar{X}X+\bar{Y}Y)$$</span></p>
<p>where <span class="math-container">$a>0$</span>.</p>
<p>Now one can ask the same question about manifolds of higher dimension if they all with vanishing scalar curvature (but nonvanishing Ricci tensor) are described by the blow-ups of <span class="math-container">$\mathbb{C}^n$</span>'s. In particular, I'm interested in the following Kähler potential</p>
<p><span class="math-container">$$K = \sum\limits_{i=1}^N \sum\limits_{j=1}^{\tilde N}|X^i Y^j|^2 + a \log \sum\limits_{i=1}^N|X^i|^2.$$</span></p>
| diverietti | 9,871 | <p>On a $n$-dimensional Kähler manifold $(X,\omega)$, the Ricci form is (minus) the curvature of the canonical bundle $K_X$ endowed with the induced metric. Thus, if $X$ has zero Ricci curvature then its canonical bundle is flat. Thus, the structure group can be reduced to a subgroup of the special linear group $SL(n,\mathbb C)$. </p>
<p>However, Kähler manifolds already possess holonomy in $U(n)$, and so the (restricted) holonomy of a Ricci flat Kähler manifold is contained in $SU(n)$. Conversely, if the (restricted) holonomy of a $2n$-dimensional Riemannian manifold is contained in $SU(n)$, then the manifold is a Ricci-flat Kähler manifold.</p>
<p>In the case when $X$ is compact the celebrated solution of Yau to the Calabi problem asserts that if $c_1(X)=0$ then $X$ posses a metric with vanishing Ricci curvature. For the non compact case, there are some (among others) results by Tian and Yau which concerns the existence of complete Ricci-flat Kähler metrics on quasiprojective varieties. One of their main theorems is the following:</p>
<p>Suppose that $X$ is a smooth complex projective variety with ample anticanonical line bundle (i.e. a Fano manifold), and that $D\subset X$ is a smooth anticanonical divisor. Then $X\setminus D$ admits a complete Ricci-flat Kähler metric. </p>
|
53,185 | <p>Let us consider a noncompact Kähler manifold with vanishing scalar curvature but nonzero Ricci tensor. I'm wondering what can it tell us about the manifold. The example (coming from physics) has the following Kähler form</p>
<p><span class="math-container">$$K = \bar{X} X + \bar{Y} Y + \log(\bar{X} X + \bar{Y} Y)$$</span></p>
<p>e.g. this is a 2D complex manifold. I claim that its Ricci form is nonzero, whereas its scalar curvature is identically zero.</p>
<p>I'm wondering if such manifolds possess any interesting properties and how can we classify them.</p>
<p><strong>UPD</strong>.</p>
<p>Partly the answer for 4 manifolds (2d complex manifolds) is given in the paper by C Lebrun "Counter-examples to the generalized positive action conjecture'' <a href="https://projecteuclid.org/journals/communications-in-mathematical-physics/volume-118/issue-4/Counter-examples-to-the-generalized-positive-action-conjecture/cmp/1104162166.full" rel="nofollow noreferrer">paper</a>. The author considers vanishing scalar curvature and derives the most generic form of the Kähler potential such that it vanishes. There are several integration constants in the final answer, playing with them we can get different manifolds including the one I was talking above. For that case the Kähler metric is the metric of a standard blow-up in the origin</p>
<p><span class="math-container">$$K = \bar{X}X+\bar{Y}Y+a\log(\bar{X}X+\bar{Y}Y)$$</span></p>
<p>where <span class="math-container">$a>0$</span>.</p>
<p>Now one can ask the same question about manifolds of higher dimension if they all with vanishing scalar curvature (but nonvanishing Ricci tensor) are described by the blow-ups of <span class="math-container">$\mathbb{C}^n$</span>'s. In particular, I'm interested in the following Kähler potential</p>
<p><span class="math-container">$$K = \sum\limits_{i=1}^N \sum\limits_{j=1}^{\tilde N}|X^i Y^j|^2 + a \log \sum\limits_{i=1}^N|X^i|^2.$$</span></p>
| Gunnar Þór Magnússon | 4,054 | <p>Dear Peter, I don't think one can say anything about such manifolds because the scalar curvature is too weak an invariant to be of use. Here is an infinite family of non-diffeomorphic compact examples to support my claim; for non-compact ones, remove a subvariety.</p>
<p>Let $Y$ be a projective manifold of dimension $n$ with ample canonical bundle. By the Calabi-Yau theorem, $Y$ admits a Kahler-Einstein metric $\omega_Y$ with $Ric \omega_Y = - \omega_Y$. Recall that the projective space $\mathbb P^n$ admits the Fubini-Study metric $\omega_{FS}$ that has $Ric \omega_{FS} = \omega_{FS}$. We set $X = \mathbb P^n \times Y$ and equip this space with the product metric $\omega = \omega_{FS} \oplus \omega_Y$. (Here and everywhere we should write $pr_1^\ast\omega_{FS} \oplus pr_2^*\omega_Y$ for the appropriate projection maps.) By varying $Y$ among projective manifold with ample bundle (which are legion) we get non-diffeomorphic $X$.</p>
<p><strong>Claim.</strong> The space $X$ has non-zero Ricci curvature but zero scalar curvature.</p>
<p><em>Proof.</em> The dimension of $X$ is $2n$. We have $\omega^{2n} = \binom{2n}{n} \omega_{FS}^n \wedge \omega_Y^n$. A calculation in local coordinates then gives that
$$Ric \omega = Ric \omega_{FS} + Ric \omega_Y = \omega_{FS} - \omega_Y \not= 0.$$
The scalar curvature $s$ of $\omega$ satisfies
$$
2n s dV = Ric \omega \wedge \omega^{2n-1} / (2n-1)!,
$$
where $dV = \omega^{2n}/(2n)!$ is the volume form of $\omega$. Since
$$
\omega^{2n-1} = \binom{2n-1}{n} \bigl( \omega_{FS}^{n-1}\wedge \omega_Y^n + \omega_{FS}^n \wedge \omega_Y^{n-1} \bigr)$$
we get
$$
2n s dV = \frac{1}{(2n-1)!}\binom{2n-1}{n}\bigl( (2n)! dV - (2n)! dV\bigr) = 0,
$$
whence $s = 0$.</p>
|
2,211,075 | <p>I don't understand the following example from Math book.</p>
<p>Solve for the equation <code>sin(theta) = -0.428</code> for <code>theta</code> in <code>radians</code> to 2 decimal places. where <code>0<= theta<= 2PI</code>.</p>
<p>And this is the answer:</p>
<p><code>theta=-0.44 + 2PI = 5.84rad and theta = PI-(0.44) = 3.58rad</code> </p>
<p>I don't understand the part why we need to add <code>2PI</code> in the first answer and add <code>PI</code> in second answer?</p>
| Matrefeytontias | 392,482 | <p>You cannot just replace the tangent formula by the derivative; doing so would mean that you are actually taking the limit as x -> 0, but in that case you must also divide by $\sin(0)$, which you obviously cannot do.</p>
<p>What you should do here is use equivalent functions, namely the following :</p>
<p>$sin(x) \underset{x \rightarrow 0}{=}x + o(x^2)$</p>
<p>$cos(x) \underset{x \rightarrow 0}{=}1 - \frac{x^2}{2} + o(x^3)$</p>
<p>Where $o(x^n)$ is a term that represents negligible values. Technically speaking, $o(x^n)$ is a non-zero function $g$ such that $\frac{g(x)}{x^n} \underset{x \rightarrow 0}{\rightarrow} 0$. When replacing functions in products or divisions by their equivalents, you can ignore the $o(x^n)$ term, but not when replacing functions in sums or differences. For example, this is true:</p>
<p>$\frac{sin(x)cos(x)}{x} \underset{x \rightarrow 0}{=} \frac{x \times cos(x)}{x} = cos(x) \underset{x \rightarrow 0}{\rightarrow} 1$</p>
<p>But this is false:</p>
<p>$(1 + \frac{1}{n})^n \underset{n \rightarrow \infty}{=} (1 + 0)^n = 1$</p>
<p>In your case, just don't forget to include the negligible terms when replacing terms of the sum with equivalent functions, and you should be good - remember that squaring is doing a product.</p>
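SymPy makes the $o(x^n)$ bookkeeping explicit; the sketch below (my addition) reproduces the valid product example and shows that the "false" sum manipulation actually hides the limit $e$:

```python
from sympy import symbols, sin, cos, limit, series, oo, E

x = symbols('x')
n = symbols('n', positive=True)

print(series(sin(x), x, 0, 4))        # x - x**3/6 + O(x**4)
print(limit(sin(x)*cos(x)/x, x, 0))   # 1: replacing sin(x) by x in the product is fine
print(limit((1 + 1/n)**n, n, oo))     # E, not 1: dropping o(1/n) inside the sum was illegal
```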
|
956,680 | <p>$\displaystyle\lim_{x\to0}\frac{x^2+1}{\cos x-1}$</p>
<p>My solution is:</p>
<p>$\displaystyle\lim_{x\to0}\frac{x^2+1}{\cos x-1}\cdot\frac{\cos x+1}{\cos x+1}$</p>
<p>$\displaystyle\lim_{x\to0}\frac{(x^2+1)(\cos x+1)}{\cos^2 x-1}$</p>
<p>$\displaystyle\lim_{x\to0}\frac{(x^2+1)(\cos x+1)}{-(1-\cos^2 x)}$</p>
<p>Since $\sin^2 x=1-\cos^2 x$</p>
<p>$\displaystyle\lim_{x\to0}\frac{(x^2+1)(\cos x+1)}{-\sin^2 x}$</p>
<p>I'm stuck here. What next?</p>
| Paul | 17,980 | <p>This limit doesn't exist as a finite number: as $x\to 0$, $x^2+1 \to 1$ while $\cos x-1 \to 0^-$, so the quotient diverges to $-\infty$.</p>
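A quick symbolic check (my addition) of the one-sided behaviour from both directions:

```python
from sympy import symbols, cos, limit, oo

x = symbols('x')
f = (x**2 + 1) / (cos(x) - 1)

# cos(x) - 1 approaches 0 through negative values on either side of 0,
# so the quotient diverges to -infinity and no finite limit exists
print(limit(f, x, 0, '+'))  # -oo
print(limit(f, x, 0, '-'))  # -oo
```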
|
3,905,629 | <p>I need to compute a limit:</p>
<p><span class="math-container">$$\lim_{x \to 0+}(2\sin \sqrt x + \sqrt x \sin \frac{1}{x})^x$$</span></p>
<p>I tried to apply the L'Hôpital rule, but the emerging terms become too complicated and don't seem to simplify.</p>
<p><span class="math-container">$$
\lim_{x \to 0+}(2\sin \sqrt x + \sqrt x \sin \frac{1}{x})^x \\
= \exp (\lim_{x \to 0+} x \ln (2\sin \sqrt x + \sqrt x \sin \frac{1}{x})) \\
= \exp (\lim_{x \to 0+} \frac
{\ln (2\sin \sqrt x + \sqrt x \sin \frac{1}{x})}
{\frac 1 x}) \\
= \exp \lim_{x \to 0+} \dfrac
{\dfrac {\cos \sqrt x} {\sqrt x} + \dfrac {\sin \dfrac 1 x} {2 \sqrt x}
- \dfrac {\cos \dfrac 1 x} {x^{3/2}}}
{- \dfrac {1} {x^2} \left(2\sin \sqrt x + \sqrt x \sin \frac{1}{x} \right)}
$$</span></p>
<p>I've calculated several values of this function, and it seems to have a limit of <span class="math-container">$1$</span>.</p>
| robjohn | 13,854 | <p>For <span class="math-container">$x\in\left(0,\frac\pi2\right]$</span>, the concavity of <span class="math-container">$\sin(x)$</span> says
<span class="math-container">$$
\frac2\pi\le\frac{\sin(x)}x\le1
$$</span>
Therefore,
<span class="math-container">$$
\underbrace{\left(\frac4\pi\sqrt{x}-\sqrt{x}\right)^x}_{\left(\frac4\pi-1\right)^x\sqrt{x^x}}\le\left(2\sin\left(\sqrt{x}\right)+\sqrt{x}\sin\left(\frac1x\right)\right)^x\le\underbrace{\left(2\sqrt{x}+\sqrt{x}\right)^x}_{3^x\sqrt{x^x}}
$$</span>
The Squeeze Theorem says
<span class="math-container">$$
\lim_{x\to0^+}\left(2\sin\left(\sqrt{x}\right)+\sqrt{x}\sin\left(\frac1x\right)\right)^x=1
$$</span></p>
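The squeeze is easy to confirm numerically. A small sketch (mine) evaluating the two bounds $\bigl((4/\pi-1)\sqrt{x}\bigr)^x$ and $\bigl(3\sqrt{x}\bigr)^x$ alongside the original expression:

```python
import math

def f(x):
    return (2*math.sin(math.sqrt(x)) + math.sqrt(x)*math.sin(1/x))**x

for x in (1e-1, 1e-3, 1e-6, 1e-9):
    lower = ((4/math.pi - 1) * math.sqrt(x))**x
    upper = (3 * math.sqrt(x))**x
    assert lower <= f(x) <= upper          # the squeeze bounds hold
    print(x, lower, f(x), upper)           # all three columns tend to 1
```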
|
2,823,758 | <p>I was learning the definition of continuous as:</p>
<blockquote>
<p>$f\colon X\to Y$ is continuous if $f^{-1}(U)$ is open for every open $U\subseteq Y$</p>
</blockquote>
<p>For me this translates to the following implication:</p>
<blockquote>
<p>IF $U \subseteq Y$ is open THEN $f^{-1}(U)$ is open</p>
</blockquote>
<p>however, I would have expected the definition to be the other way round, i.e. with the 1st implication I defined. The reason for that is that just by looking at the metric space definition of continuous:</p>
<blockquote>
<p>$\exists q = f(p) \in Y, \forall \epsilon>0,\exists \delta >0, \forall x \in X, 0 < d(x,p) < \delta \implies d(f(x),q) < \epsilon$</p>
</blockquote>
<p>seems to be talking about balls (i.e. open sets) in X and then has a forward arrow for open sets in Y, so it seems natural to expect the direction of the implication to go that way round. However, it does not. Why does it not go that way? What is wrong with the implication going from open in X to open in Y? And of course, why is the current direction the correct one?</p>
<p>I think conceptually I might even be confused about why the topological definition of continuous requires us to start from things in the target space Y and then require things in the domain. Can't we just say map things from X to Y and have them be close? <strong>Why do we need to posit things about Y first in either definition for the definition of continuous to work properly</strong>?</p>
<hr>
<p>I can't help but point out that this question <a href="https://math.stackexchange.com/questions/323610/the-definition-of-continuous-function-in-topology">The definition of continuous function in topology</a> seems to be similar but perhaps lack the detailed discussion on the direction on the implication for me to really understand why the definition is not reversed or what happens if we do reverse it. The second answer there tries to make an attempt at explaining why we require $f^{-1}$ to preserve the property of openness but its not conceptually obvious to me why thats the case or whats going on. Any help?</p>
<hr>
<p>For whoever suggests closing the question, the question is quite clear:</p>
<blockquote>
<p><strong>why is the reverse implication not the "correct" definition of continuous?</strong></p>
</blockquote>
<hr>
<p>As an additional important point: pointing out <strong>the difference between an open mapping and a continuous function would be very useful</strong>.</p>
<hr>
<p>Note: I encountered this in baby Rudin, so that's as far as my background in analysis goes, i.e. metric spaces are my place of understanding. </p>
<hr>
<p>Extra confusion/Appendix:</p>
<p>Conceptually, I think I've managed to nail down what my main confusion is. In conceptual terms, continuous functions are supposed to map "nearby points to nearby points", so for me its metric space definition makes sense in that sense. However, that doesn't seem obvious to me unless we equate "open sets" with "close by". Balls are open, but there are plenty of sets that are open but are not "close by", for example the union of two open balls. I think this is what is confusing me most. How is the topological definition respecting that conceptual requirement? </p>
| Evan Wilson | 570,598 | <p>The two definitions are equivalent to each other for metric spaces. To see that the first definition implies the second, let $\epsilon>0$ and $y=f(x)$. The open ball $B_\epsilon(y)$ is open in $Y$. Therefore $f^{-1}(B_\epsilon(y))$ must be open in $X$. Therefore, it contains the open ball $B_\delta(x)$ for small enough $\delta>0$. Since $B_\delta(x)\subset f^{-1}(B_\epsilon(y))$, we have found $\delta>0$ such that $c\in X, d(x,c)<\delta \implies d(f(x),f(c))<\epsilon$.</p>
<p>The reverse implication also uses an argument using open balls.</p>
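A concrete way to see why the forward direction would be the wrong definition (my own illustration): a continuous map can send an open set to a non-open set. For $f(x)=x^2$, the image of the open interval $(-1,1)$ is $[0,1)$, which contains its boundary point $0$:

```python
# f(x) = x**2 is continuous, yet it maps the open interval (-1, 1)
# onto [0, 1): the value 0 = f(0) is attained, so the image is not open.
def f(x):
    return x * x

xs = [i / 1000 for i in range(-999, 1000)]   # a fine sample of (-1, 1)
image = [f(x) for x in xs]
print(min(image))  # 0.0 -- the boundary point of the image is attained
```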
|
205,671 | <p>How would one go about showing the polar version of the Cauchy Riemann Equations are sufficient to get differentiability of a complex valued function which has continuous partial derivatives? </p>
<p>I haven't found any proof of this online.</p>
<p>One of my ideas was writing out $r$ and $\theta$ in terms of $x$ and $y$, then taking the partial derivatives with respect to $x$ and $y$ and showing the Cauchy Riemann equations in the Cartesian coordinate system are satisfied. A problem with this approach is that derivatives get messy.</p>
<p>What are some other ways to do it?</p>
| James S. Cook | 36,530 | <p>A less standard approach: Take $\{e_{r},e_{\theta}\}$ as a basis in polar coordinates then the Jacobian matrix for any function on $\mathbb{R}^2$ has the form:
$$ [df] = \left[ \begin{array}{cc} U_r & U_{\theta} \\ V_r & V_{\theta} \end{array} \right] $$
<strong>Define</strong> complex-differentiability via complex linearity of the differential; $df(vw)=df(v)w$ at the point in question for all $v,w \in \mathbb{C}$. In particular this gives the beautiful formula: $df(v)=df(1)v$; this means the first column of the Jacobian fixes the second by complex multiplication. It is geometrically clear that $\frac{1}{r}e_{\theta} = ie_r$. Observe that $e_{\theta} = ire_r$ thus $df(e_{\theta}) =df(ire_{r})=irdf(e_r)$. On the other hand, $f=U+iV$ and
$$ df(e_r) = U_r+iV_r \qquad \& \qquad df(e_{\theta})=f_{\theta}=U_{\theta}+iV_{\theta} $$
Thus, $U_{\theta}+iV_{\theta}=ir(U_r+iV_r)$ and we derive
$$ \boxed{ U_{\theta} = -rV_r \qquad \& \qquad V_{\theta}=rU_r. } $$</p>
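A quick SymPy check of the boxed equations (my addition), using $f(z)=z^2$, for which $U=r^2\cos 2\theta$ and $V=r^2\sin 2\theta$:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

# f(z) = z**2 with z = r*exp(I*theta): real and imaginary parts
U = r**2 * sp.cos(2*th)
V = r**2 * sp.sin(2*th)

# boxed polar Cauchy-Riemann equations: U_theta = -r V_r and V_theta = r U_r
eq1 = sp.simplify(sp.diff(U, th) + r * sp.diff(V, r))
eq2 = sp.simplify(sp.diff(V, th) - r * sp.diff(U, r))
print(eq1, eq2)  # 0 0
```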
|
2,615,185 | <p>The title is not complete, since it would be too long. Consider the following statement:</p>
<blockquote>
<p>Let $U \subset \mathbb{R}^n$ be open, connected and such that its one-point compactification is a manifold. Then, this compactification must be (homeomorphic to) the sphere $S^n$.</p>
</blockquote>
<p>Is the statement above true? If so, why?</p>
| Moishe Kohan | 84,907 | <p>I do not expect any simple proofs of this result. A good exercise would be to prove this for domains in $R^2$ without using anything about the classification of surfaces or the Schoenflies theorem in $R^2$. </p>
<p>Here is a proof of your statement (in the topological category). I will assume that $n\ge 2$ since there is nothing to prove in dimension 1. </p>
<p>Let $U$ be any open (connected, which is not really necessary) domain in $R^n$ whose 1-point compactification is an $n$-manifold $M$. First of all, it is easy to see that the complement of $U$ has to be connected (since a point cannot locally separate a manifold of dimension $\ge 2$). </p>
<p>Let $p\in M$ be such that $M-p\cong U$. Let $B$ be a metric ball centered at $p$ (with respect to a Euclidean metric on a neighborhood of $p$ in $M$). Let $Y=\partial B$. Then $Y$ as an $n-1$-dimensional tame sphere
is contained in $U$ and separating $R^n$ in two components, a bounded component $C$ and an unbounded one. The bounded component is necessarily contained in $U$ (since the complement of $U$ is connected). Then, by topological Schoenflies theorem, $C$ is homeomorphic to $B^n$. Thus, $M$ is obtained by gluing two balls ($B$ and $C$) along their common boundary sphere $Y$ and, hence, homeomorphic to $S^n$. </p>
|
2,402,410 | <p>I defined the "function":</p>
<p>$$f(t)=t \delta(t)$$</p>
<p>I know that Dirac "function" is undefined at $t=0$ (see <a href="http://web.mit.edu/2.14/www/Handouts/Convolution.pdf" rel="nofollow noreferrer">http://web.mit.edu/2.14/www/Handouts/Convolution.pdf</a>).</p>
<p>In Wolfram I get $0 \delta(0)=0$ (<a href="http://www.wolframalpha.com/input/?i=0" rel="nofollow noreferrer">http://www.wolframalpha.com/input/?i=0</a>*DiracDelta(0)). Why? I expect $0 \delta(0)=undefined$ (if $\delta(0)=\infty$, thus I will have an indeterminate form $0 \infty$).</p>
<p>Thank you for your time.</p>
| Ethan Bolker | 72,858 | <p>You do understand that the Dirac delta "function" isn't a function, since you too put the word in quotes. To justify assertions about it you have to see how those assertions behave in the integrals that involve the delta function. ("Behavior inside integrals" is the idea behind distributions.) That's the essence of @Cauchy 's answer, and what Wolfram knows.</p>
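<p>The "behavior inside integrals" can be checked numerically: replace $\delta$ by a nascent delta and watch the smeared pairing shrink. A minimal sketch, assuming a Gaussian nascent delta and the (arbitrary) test function $\varphi(t)=e^{t}$, chosen asymmetric so the result is not forced to zero by symmetry alone:</p>

```python
import math

def smeared_pairing(eps, phi, lo=-1.0, hi=1.0, steps=100_000):
    """Midpoint-rule approximation of  integral of t * delta_eps(t) * phi(t) dt,
    where delta_eps is a Gaussian of width eps approximating the delta."""
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        t = lo + (k + 0.5) * h
        delta_eps = math.exp(-t * t / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))
        total += t * delta_eps * phi(t) * h
    return total

# the pairing <t*delta, phi> tends to 0 as the nascent delta narrows
pairings = [abs(smeared_pairing(eps, math.exp)) for eps in (0.1, 0.03, 0.01)]
print(pairings)
```

<p>The values shrink roughly like $\varepsilon^2$, consistent with $t\,\delta(t)=0$ as a distribution.</p>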
|
1,554,285 | <p>Here's my problem:</p>
<blockquote>
<p>In Ohio, 55% of the population support the republican candidate in an
upcoming election. 200 people are polled at random. If we suppose that
each person’s vote (for or against) is a Bernoulli random variable
with probability p, and votes are independent,</p>
<p>(a) Show that the number of people polled that support the democratic
candidate X has distribution Bin(200, .45) and calculate the mean and
variance.</p>
<p>(b) Calculate directly the probability that more than half of the
polled people will vote for the democratic candidate. Tell me the
equation that you used to solve this.</p>
<p>(c) Use the CLT to approximate the Binomial probability and calculate
the approximate probability that half of the polled people will vote
for the democratic candidate</p>
</blockquote>
<p>And here's what I got so far:</p>
<p><strong>Part a:</strong>
Let us suppose $X$ people support the democratic candidate; then there are $\binom {200} {X}$ possible ways to select those people, giving
$\binom {200} {X} (0.45)^X (0.55)^{200-X}$
Therefore the given distribution is a binomial distribution with $n=200$, $p = 0.45$ and $1-p = 0.55$</p>
<p>According to the theorem, the mean of the probability distribution is given as
$E(X) = n*p = 200 * 0.45 = 90$</p>
<p>The variance of probability distribution is given as $E(X^2) - (E(X))^2 = np(1-p)$</p>
<p>For this problem,</p>
<p>$200*(0.45)*(1-0.45) = 49.5$</p>
<p><strong>Part b:</strong>
More than half of the people voting for the democratic candidate would be equal to $\sum\limits_{i=101}^{i=200} \binom {200} {i} (0.45)^i (0.55)^{200-i}$</p>
<p><strong>Part c</strong> I'm at a total loss. </p>
<p>I'm very new to these sorts of problems and suspect I might be way off the mark on every part. Any guidance would be appreciated. (Apologies if this is way too long a problem, I can split it up.)</p>
| Rowan | 229,922 | <p>Hints:</p>
<p>Let $X=\sum_{i=1}^{200}X_i$, where $X_i=\{\text{No.i person votes for the democratic candidate}\}$, so that $$X_i\sim
\begin{pmatrix}
1 & 0\\
p & 1-p\\
\end{pmatrix}$$</p>
<p>You have already calculated $E(X)$ and $D(X)$. So according to CLT, $X\sim?$</p>
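<p>For a concrete check of parts (b) and (c), here is a standard-library sketch comparing the exact binomial tail with the CLT approximation (the half-unit continuity correction is an extra choice, not required by the problem):</p>

```python
import math

n, p = 200, 0.45
mean = n * p                  # 90
var = n * p * (1 - p)         # 49.5
sd = math.sqrt(var)

# Part (b): exact P(X > 100) = sum_{i=101}^{200} C(200, i) p^i (1-p)^(200-i)
exact = sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(101, n + 1))

def normal_sf(z):
    """P(Z > z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Part (c): CLT approximation with a continuity correction
approx = normal_sf((100.5 - mean) / sd)

print(exact, approx)
```

<p>The exact tail and the normal approximation agree to about two decimal places here.</p>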
|
2,490,128 | <p>Over the domain of integers, if $(a-c)|(ab+cd)$ then $(a-c)|(ad+bc)$.</p>
<p>Note: $x|y$ means "$x$ divides $y$," i.e. $\exists k\in \mathbb{Z}. y=x\cdot k$</p>
<p>This is part of an assignment on GCD, Euclidean algorithm, and modular arithmetic.</p>
<p>My approach:</p>
<p>If $a-c$, divides a linear combination of $a$ and $c$, then $a-c$ is a common divisor of $a$ and $c$. This comes from the definition of a common divisor: that if a certain $d$ divides two integers $x$ and $y$, then $d$ divides a linear combination of $x$ and $y$. Both $ab+cd$ and $ad+bc$ are linear combinations of $a$ and $c$ so $a-c$ must divide both of them.</p>
| Nosrati | 108,128 | <p>Let $x=\dfrac1u$ then
\begin{align}
I
&= \int\dfrac{-u}{\sqrt{u^2+u+1}}du \\
&= -\int\dfrac{2u+1}{2\sqrt{u^2+u+1}}du+\dfrac12\int\dfrac{1}{\sqrt{(u+\frac12)^2+\frac34}}du \\
&= -\sqrt{u^2+u+1}+\dfrac12\operatorname{arcsinh}\dfrac{2u+1}{\sqrt{3}}+C \\
&= -\dfrac{\sqrt{x^2+x+1}}{x}+\dfrac12\operatorname{arcsinh}\dfrac{x+2}{x\sqrt{3}}+C
\end{align}</p>
|
1,879,395 | <p>I am trying to learn generating functions so I am trying this recurrence:</p>
<p>$$F(n) = 1 + \frac{n-1}{n}F(n-1)$$</p>
<p>But I am struggling with it. Luckily the base case can be anything since $F(1)$ will multiply it by $0$ anyway, so let's say $F(0) = 0$. Then I tried this:</p>
<p>$$G(x) = \sum_{n=0}^{\infty} F(n)x^n$$</p>
<p>Remove base case $n=0$, split $F(n)$ into its parts:</p>
<p>$$G(x) = 0 + \sum_{n=1}^{\infty} x^n + \sum_{n=1}^{\infty} \frac{n-1}{n} F(n-1) x^{n}$$ </p>
<p>Simplify the first sum (accounting for $n=0$), pull $x$ out of the right sum and shift index:</p>
<p>$$G(x) = -1 + \frac{1}{1-x} + x\sum_{n=0}^{\infty} \frac{n}{n+1} F(n) x^{n}$$ </p>
<p>At this point I don't know how to simplify the right sum any further because I cannot simply pull out $\frac{n}{n+1}$ and replace the sum with $G(x)$ like I normally can with constant coefficients.</p>
<p>Just looking for hints because I want to solve this myself (as much as I can, anyway), please. What are the typical methods people use at this point?</p>
| alans | 80,264 | <p>If you are interested in an easier proof:</p>
<p>Observe: </p>
<p>$F(1)=1$, $F(2)=\frac{3}{2}$, $F(3)=2$, $F(4)=\frac{5}{2}$ $\dots$ </p>
<p>Now it is easy to get the pattern $F(n)=\frac{n+1}{2}$ and prove it by induction. </p>
<p>First you can check that the induction base $F(1)=1$ holds. </p>
<p>In the induction step,
assume $F(n)=\frac{n+1}{2}$.
Then $$F(n+1)= 1 + \frac{n+1-1}{n+1}F(n)=1+\frac{n}{n+1}\frac{n+1}{2}=\frac{n+2}{2},$$ where we used the induction assumption.</p>
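<p>A quick exact-arithmetic check of the guessed closed form $F(n)=\frac{n+1}{2}$:</p>

```python
from fractions import Fraction

F = {1: Fraction(1)}                              # base case F(1) = 1
for n in range(2, 51):
    F[n] = 1 + Fraction(n - 1, n) * F[n - 1]      # the recurrence F(n) = 1 + (n-1)/n * F(n-1)

closed_form_holds = all(F[n] == Fraction(n + 1, 2) for n in F)
print(closed_form_holds)  # True
```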
|
2,214,030 | <p>$\mathbb{R}^{13}$ has two subspaces such that dim(S)=7 and dim(T)=8 <br/></p>
<p>⒜ max dim (S∩T)=?<br/>
⒝ min dim (S∩T)=?<br/>
⒞ max dim (S+T)=?<br/>
⒟ min dim (S+T)=?<br/>
⒠ dim(S∩T) + dim (S+T)=?</p>
| Jaideep Khare | 421,580 | <p>It is '$0 <|x-a| $' , not '$0 \le |x-a|$'. This suggests that :
$$|x-a| \neq 0 \implies x \neq a$$</p>
<p>Id est, $x$ tends to $a$ but is never ever exactly equal to $a$.</p>
<p>Also, absolute values aren't always positive, they are always <strong>NON-NEGATIVE</strong>.</p>
|
733,553 | <p>It's been a long time since high school, and I guess I forgot my rules of exponents. I did a web search for this rule but I could not find a rule that helps me explain this case:</p>
<p>$ 2^n + 2^n = 2^{n+1} $</p>
<p>Which rule of exponents is this?</p>
| Abraham Zhang | 112,045 | <p>If you realise that there are $2$ of $2^n$, then we have
$$2^1\times2^n$$
If we are multiplying $2$ by itself <strong>n</strong> times and then multiplying the result by another $2$, we get $2$ multiplied by itself <strong>n+1</strong> times, which is $$2^{n+1}$$</p>
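<p>The rule at work is $a^m\cdot a^n=a^{m+n}$ with $a=2$ and $m=1$; a one-line sanity check with exact integer arithmetic:</p>

```python
# 2^n + 2^n = 2 * 2^n = 2^(n+1), checked for many n
identity_holds = all(2**n + 2**n == 2**(n + 1) for n in range(200))
print(identity_holds)  # True
```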
|
2,469,720 | <p>Math problem:</p>
<blockquote>
<p>Find $x$, given that $ \, 2^2 \times 2^4 \times 2^6 \times 2^8 \times \ldots \times 2^{2x} = \left( 0.25 \right)^{-36}$</p>
</blockquote>
<p>To solve this question, I changed the left side of the equation to $2^{2+4+6+ \ldots + 2x}$ and the right side to: $\frac{2^{74}}{3^{36}}$.</p>
<p>My question is how can $3$ to a power (in this case $36$) be changed to $2$ to a power? (algebraically-without a calculator)</p>
<p>By checking with a calculator and doing $\log$, I found that it is not a whole number and therefore the wrong method for this question.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>use that $$2+4+6+8+...+2x=x(x+1)$$ and $$\left(\frac{1}{4}\right)^{-36}=2^{72}$$ and you will have $$2^{x(x+1)}=2^{72},$$ hence $x(x+1)=72$ and $x=8$</p>
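<p>Since $x(x+1)=72$ gives $x=8$, the original equation can be verified exactly with integers:</p>

```python
import math

lhs = math.prod(2**(2 * k) for k in range(1, 9))   # 2^2 * 2^4 * ... * 2^16
rhs = 4**36                                        # (0.25)^(-36) = 4^36, kept as an exact integer
print(lhs == rhs, lhs == 2**72)
```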
|
4,146,081 | <p>How can I demonstrate the Jacobi identity:</p>
<p><span class="math-container">\begin{equation}
[S_{i}, [S_{j},S_{k}]] + [S_{j}, [S_{k},S_{i}]] + [S_{k}, [S_{i},S_{j}]] = 0 ~,
\end{equation}</span></p>
<p>using the infinitesimal generators <span class="math-container">$S_{\kappa}$</span> for a continuous group, where the generators satisfies the Lie algebra, such that:</p>
<p><span class="math-container">$$[S_{\alpha},S_{\beta}] = \sum_{\gamma} f_{\alpha \beta \gamma}S_{\gamma}$$</span></p>
<p>where <span class="math-container">$f_{\alpha \beta \gamma}$</span> are the structure constants ?</p>
<p>I was doing the following:</p>
<p><span class="math-container">$$\begin{align*}
[S_{i}, [S_{j},S_{k}]] &+ [S_{j}, [S_{k},S_{i}]] + [S_{k}, [S_{i},S_{j}]] = \\
&=
[S_{i}, \sum_{l} f_{jkl}S_{l}] + [S_{j}, \sum_{l} f_{kil}S_{l}] + [S_{k}, \sum_{l} f_{ijl}S_{l}]\\
&= \sum_{l} f_{jkl} [S_{i}, S_{l}] + \sum_{l} f_{kil}[S_{j}, S_{l}] + \sum_{l} f_{ijl}[S_{k}, S_{l}]\\
&= \sum_{l} f_{jkl} \sum_{m} f_{ilm}S_{m} + \sum_{l} f_{kil}\sum_{n} f_{jln}S_{n}\\
&\qquad + \sum_{l} f_{ijl}\sum_{p} f_{klp}S_{p}\\
&= \sum_{l,~m} f_{jkl} f_{ilm}S_{m} + \sum_{l,~n} f_{kil}f_{jln}S_{n} + \sum_{l,~p} f_{ijl}f_{klp}S_{p}\\
&= f_{jk}^{l} f_{il}^{m}~S_{m} + f_{ki}^{l}f_{jl}^{n}~S_{n} + f_{ij}^{l}f_{kl}^{p}~S_{p}
\end{align*}$$</span>
I know that these structure constants are antisymmetric:</p>
<p><span class="math-container">\begin{equation}
f_{\alpha \beta}^{\gamma} = - f_{\beta \alpha}^{\gamma} ~~.
\end{equation}</span></p>
<p>Are there a way to go further and show that the expression will be equal to zero ?</p>
| Dietrich Burde | 83,966 | <p>The Jacobi identity can be derives formally by expanding <span class="math-container">$[S_i,S_j]=S_iS_j-S_jS_i$</span>. Define the <em>associator</em> of <span class="math-container">$S_i,S_j,S_k$</span> by
<span class="math-container">$$
(S_i,S_j,S_k)=(S_iS_j)S_k-S_i(S_jS_k)
$$</span>
This is zero if the product is associative. However, a direct computation shows that we always have
<span class="math-container">$$
[[S_i,S_j],S_k]+[[S_j,S_k],S_i]+[[S_k,S_i],S_j]]=
$$</span></p>
<p><span class="math-container">$$
(S_i,S_j,S_k)+(S_j,S_k,S_i)+(S_k,S_i,S_j)-(S_j,S_i,S_k)-(S_i,S_k,S_j)-(S_k,S_j,S_i).
$$</span>
In particular, the Jacobi identity holds if the associator is always zero (but also, if it is, say, nonzero and left-symmetric, and so on).</p>
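<p>This can be checked concretely: for matrices the product is associative, so the commutator satisfies the Jacobi identity on the nose. A small self-contained sketch with $2\times 2$ integer matrices (the sample matrices are arbitrary):</p>

```python
def mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def comm(A, B):
    """Commutator [A, B] = AB - BA."""
    return sub(mul(A, B), mul(B, A))

Si = [[0, 1], [0, 0]]
Sj = [[0, 0], [1, 0]]
Sk = [[1, 0], [0, -1]]

jacobi = add(add(comm(Si, comm(Sj, Sk)),
                 comm(Sj, comm(Sk, Si))),
             comm(Sk, comm(Si, Sj)))
print(jacobi)  # the zero matrix
```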
|
370,007 | <p>A river boat can travel a 20km per hour in still water. The boat travels 30km upstream against the current then turns around and travels the same distance back with the current. IF the total trip took 7.5 hours, what is the speed of the current? Solve this question algebraically as well as graphically..</p>
<p>I started the algebra solution with this:
$x=(V_{still}-V_{current})\,t_1$ (when it goes upstream),
$x=(V_{still}+V_{current})\,t_2$ (when it goes back downstream)...</p>
<p>I have the same question on a quiz in 1 hours and I need to know how to do this please show a solution :D thanks</p>
| lab bhattacharjee | 33,337 | <p>$ord_pa=m$ means $m$ is the smallest positive integer with $a^m\equiv1\pmod p$, and $ord_pb=n$ means $n$ is the smallest positive integer with $b^n\equiv1\pmod p$</p>
<p>$\implies a^{lcm(m,n)}\equiv1\pmod p,b^{lcm(m,n)}\equiv1\pmod p$</p>
<p>$\implies (ab)^{lcm(m,n)}\equiv1\pmod p\implies ord_p(ab)$ divides lcm$(m,n)$</p>
<p>Conversely, let $ord_p(ab)=h$ and $(m,n)=d$ and $\frac mM=\frac nN=d$</p>
<p>As $(ab)^h\equiv1\pmod p\implies (ab)^{mh}\equiv1,$
$\implies (a^m)^h\cdot b^{mh}\equiv1$</p>
<p>$\implies b^{mh}\equiv1\implies n$ divides $mh$ as $ord_pb=n$</p>
<p>$\implies Nd$ divides $Mdh \implies N$ divides $Mh \implies N$ divides $h$ as $(M,N)=1$</p>
<p>Similarly, $M$ divides $h\implies $lcm$(M,N)$ divides $h=ord_p(ab)$</p>
<p>But $ord_p(ab)$ divides lcm$(m,n)\implies ord_p(ab)=$ lcm$(m,n)$</p>
<p>If $(m,n)=1,$ lcm$(m,n)=mn$</p>
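<p>The divisibility statement, and the equality in the coprime case, are easy to confirm by brute force modulo a prime (the prime $61$ is an arbitrary choice):</p>

```python
from math import gcd

def order(a, p):
    """Multiplicative order of a modulo the prime p (assumes p does not divide a)."""
    k, x = 1, a % p
    while x != 1:
        x = (x * a) % p
        k += 1
    return k

def lcm(m, n):
    return m * n // gcd(m, n)

p = 61
all_pairs_ok = True
for a in range(2, p):
    for b in range(2, p):
        m, n = order(a, p), order(b, p)
        h = order(a * b % p, p)
        if lcm(m, n) % h != 0:                # ord_p(ab) must divide lcm(m, n)
            all_pairs_ok = False
        if gcd(m, n) == 1 and h != m * n:     # and equals mn when (m, n) = 1
            all_pairs_ok = False
print(all_pairs_ok)  # True
```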
|
269,665 | <p>Is the Klein bottle an algebraic variety? I guess not, but how to prove it. How about other non-orientable manifolds? </p>
<p>If we change to the Zariski topology, which manifolds can be algebraic varieties? </p>
| Zhen Lin | 5,191 | <p>Any complex manifold is not merely an orientable manifold but an <em>oriented</em> manifold. Hence the Klein bottle cannot be a complex manifold (and so not complex algebraic).</p>
<p>Indeed, consider the holomorphic tangent bundle $T M$ of a complex manifold $M$. We define an orientation as follows: take a complex basis $e_1, \ldots, e_n$, and declare the <em>real</em> basis $e_1, \ldots, e_n, i e_1, \ldots, i e_n$ to be positively oriented. One can check that this is independent of the choice of complex basis, so this defines a global orientation of $M$.</p>
|
3,416,895 | <p>here's the relevant question: <a href="https://math.stackexchange.com/q/193157/716946">If $\sigma_n=\frac{s_1+s_2+\cdots+s_n}{n}$ then $\operatorname{{lim sup}}\sigma_n \leq \operatorname{lim sup} s_n$</a></p>
<p>In the accepted answer, <strong>doesn't the last inequality only work if <span class="math-container">$\sup_{l\geq k}s_l$</span> is nonnegative?</strong>
The "last inequality" I'm referring to is this:
<span class="math-container">$$\frac 1n\sum_{j=1}^ks_j+\frac{n-k}n\sup_{l\geqslant k}s_l\leqslant \frac 1n\sum_{j=1}^ks_j+\sup_{l\geqslant k}s_l.$$</span></p>
<p>I ran into this issue when trying to prove the analagous statement for liminf, because in the case of liminf I could only get a similar inequality if <span class="math-container">$\inf_{l\geq k}s_l \leq 0$</span>, as follows:</p>
<p><span class="math-container">$$\sigma_n=
\frac 1n\sum_{j=1}^ks_j+\frac 1n\sum_{j=k+1}^ns_j
\geqslant \frac 1n\sum_{j=1}^ks_j+\frac{n-k}n\inf_{l\geqslant k}s_l
$$</span>
From here, if <span class="math-container">$\inf_{l\geq k}s_l \leq 0$</span> then I could continue and write
<span class="math-container">$\geq\frac 1n\sum_{j=1}^ks_j+\inf_{l\geqslant k}s_l$</span>.</p>
<p>Could someone clarify please?</p>
| Martin R | 42,969 | <p>You have that
<span class="math-container">$$ \tag{*}
\sigma_n\geqslant \frac 1n\sum_{j=1}^ks_j+\frac{n-k}n\inf_{l\geqslant k}s_l
$$</span>
and you are right that this is <span class="math-container">$\ge \frac 1n\sum_{j=1}^ks_j+\inf_{l\geqslant k}s_l$</span> only if <span class="math-container">$\inf_{l\geqslant k}s_l \le 0$</span>.</p>
<p>But that estimate is actually not needed: For fixed <span class="math-container">$k$</span> you can take the <span class="math-container">$\liminf_{n \to \infty}$</span> in <span class="math-container">$(*)$</span>, this gives
<span class="math-container">$$
\liminf_{n \to \infty}\sigma_n\geqslant \inf_{l\geqslant k}s_l
$$</span>
because the right-hand side has a limit for <span class="math-container">$n \to \infty$</span>.
Then take the limit for <span class="math-container">$k \to \infty$</span> and conclude that
<span class="math-container">$$
\liminf_{n \to \infty}\sigma_n\geqslant\liminf_{n \to \infty}s_n\, .
$$</span></p>
<p>The same approach works for <span class="math-container">$\limsup$</span> in the referenced Q&A.</p>
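<p>A numeric illustration, using the sample sequence $s_n=(-1)^n+1/n$ (an arbitrary choice), for which $\liminf s_n=-1$ while the Cesàro averages satisfy $\sigma_n\to 0$:</p>

```python
def cesaro(s):
    """Running averages sigma_n = (s_1 + ... + s_n) / n."""
    out, total = [], 0.0
    for k, v in enumerate(s, start=1):
        total += v
        out.append(total / k)
    return out

N = 10_000
s = [(-1)**n + 1.0 / n for n in range(1, N + 1)]
sigma = cesaro(s)

# tail infima approximate the liminf: here liminf s = -1 while liminf sigma = 0
tail_inf_s = min(s[N // 2:])
tail_inf_sigma = min(sigma[N // 2:])
print(tail_inf_s, tail_inf_sigma)
```

<p>As the inequality predicts, the tail infimum of $\sigma_n$ sits well above that of $s_n$.</p>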
|
2,649,283 | <p>There are these two questions that my professor posted, and they absolutely stumped me:</p>
<p>$ \vdash (\exists x. \bot) \implies P $
and
$(\exists x. \top) \vdash (\forall x. \bot ) \implies P$.</p>
<p>What do I even do with the $(\exists x. \bot)$ part? It got me stuck for quite some time. Any help will be appreciated.</p>
| Bram28 | 256,001 | <p>Here are some proofs in the Fitch system:</p>
<p>$\def\fitch#1#2{\begin{array}{|l}#1 \\ \hline #2\end{array}}$ </p>
<p>$\fitch{
1.
}{
\fitch{
2.\exists x. \bot}{
\fitch{
3.\bot
}{
4.P \quad \bot \text{ Elim } 3}
\\
5.P \quad \exists \text{ Elim } 2, 3-4} \\
6. \exists x. \bot \rightarrow P \quad \rightarrow \text{ Intro } 2-5}$</p>
<p>$\fitch{
1. \exists x. \top
}{
\fitch{
2.\forall x. \bot}{
3.\bot \quad \forall \text{ Elim } 2\\
4.P \quad \bot \text{ Elim } 3}
\\
5. \forall x. \bot \rightarrow P \quad \rightarrow \text{ Intro } 2-4}$</p>
<p>Note that for the second proof you never use the $\exists x. \top$. Indeed, $\forall x. \bot \rightarrow P$ is valid all by itself.</p>
|
1,970,235 | <p>If I remember right, $f(x)$ is continuous at $x=a$ if</p>
<ol>
<li><p>$\lim_{x \to a} f(x)$ exists</p></li>
<li><p>$f(a)$ exists</p></li>
<li><p>$f(a) = \lim_{x \to a} f(x)$</p></li>
</ol>
<p>So $\lim_{x \to 0^{-}} \sqrt{x}$ exists? Thus $\lim_{x \to 0^{-}} \sin(\sqrt{x})$ <a href="https://math.stackexchange.com/questions/1929450/prove-lim-x-to-0-sin-sqrtx-does-not-exist">exists</a>?</p>
| operatorerror | 210,391 | <p>Alright, I'll bite with the classic proof for the limit from the right, where the square root function has real value. </p>
<p>Fix $\epsilon>0$. Then taking $\delta=\epsilon^2$ we get
$$
|x|<\epsilon^2\Rightarrow |\sqrt{x}|<\epsilon
$$</p>
|
1,970,235 | <p>If I remember right, $f(x)$ is continuous at $x=a$ if</p>
<ol>
<li><p>$\lim_{x \to a} f(x)$ exists</p></li>
<li><p>$f(a)$ exists</p></li>
<li><p>$f(a) = \lim_{x \to a} f(x)$</p></li>
</ol>
<p>So $\lim_{x \to 0^{-}} \sqrt{x}$ exists? Thus $\lim_{x \to 0^{-}} \sin(\sqrt{x})$ <a href="https://math.stackexchange.com/questions/1929450/prove-lim-x-to-0-sin-sqrtx-does-not-exist">exists</a>?</p>
| Alex M. | 164,025 | <p>You make a single mistake in your question: you forget to specify the domain of definition of $\sqrt \cdot$. You know that this is $[0, \infty)$, so "limit in $0$" here means, necessarily, "limit from the right" - simply because there is nothing to the left of $0$ in $[0, \infty)$.</p>
<p>If you prefer to be more pedantic, you could say that $[0, \infty)$ carries the subspace topology induced by the topology of $\Bbb R$: a subset $U \subseteq [0, \infty)$ is open (in the topology of $[0, \infty)$) if and only if there exist some open subset $V \subseteq \Bbb R$ (in the topology of $\Bbb R$) such that $U = V \cap [0, \infty)$. It is enough to notice now that all these open subsets $U$ are "truncated to the left of $0$".</p>
|
1,618,373 | <p>Prove that $S_4$ cannot be generated by $(1 3),(1234)$</p>
<p>I have checked some combinations of $(13),(1234)$ and found out that those combinations cannot generate 3-cycles.</p>
<p>Updated idea:<br>
Let $A=\{\{1,3\},\{2,4\}\}$<br>
Note that $(13)A=A,(1234)A=A$<br>
Hence, $\sigma A=A,\forall\sigma\in \langle(13),(1234)\rangle$<br>
In particular, $(12)\notin \sigma A,\forall\sigma\in \langle(13),(1234)\rangle$<br>
So we conclude that $S_4\neq\langle(13),(1234)\rangle$</p>
| CopyPasteIt | 432,081 | <p>Here we copy/paste/modify another <a href="https://math.stackexchange.com/a/3843575/432081">answer</a>:</p>
<p>Define</p>
<p><span class="math-container">$\; \tau = (13)$</span></p>
<p><span class="math-container">$\;\sigma = (1234)$</span>
<br>
<br>
<span class="math-container">$1^{st} \text {group of calculations:}$</span></p>
<p><span class="math-container">$\;\tau^1 = (13)$</span></p>
<p><span class="math-container">$\;\tau^2 = \tau^0 \quad \text { - the identity permutation}$</span>
<br>
<br>
<span class="math-container">$2^{nd} \text{ group of calculations:}$</span></p>
<p><span class="math-container">$\;\sigma^1 = (12)\,(23) \,(34)$</span></p>
<p><span class="math-container">$\;\sigma^2 = (13)\,(24)$</span></p>
<p><span class="math-container">$\;\sigma^3 = (14)\,(24)\,(34)$</span></p>
<p><span class="math-container">$\;\sigma^4 = \sigma^0 \quad \text { - the identity permutation}$</span>
<br>
<br>
<span class="math-container">$3^{rd} \text{ group of calculations:}$</span></p>
<p><span class="math-container">$\tau\sigma = (12) \,(34)$</span></p>
<p><span class="math-container">$\tau\sigma^2 = (24)$</span></p>
<p><span class="math-container">$\tau\sigma^3 = (14) \,(23)$</span>
<br>
<br>
<span class="math-container">$4^{th} \text{ group of calculations:}$</span></p>
<p><span class="math-container">$\sigma\tau = (14) \,(23)$</span></p>
<p><span class="math-container">$\sigma^2\tau = (24)$</span></p>
<p><span class="math-container">$\sigma^3\tau = (12)\,(34)$</span>
<br>
<br>
<br>
So far we've identified exactly <span class="math-container">$8$</span> permutations that are in the group generated by <span class="math-container">$\tau$</span> and <span class="math-container">$\sigma$</span>.</p>
<p>When we run the <span class="math-container">$4^{th} \text{ group of calculations}$</span> we get the same permutations as the <span class="math-container">$3^{rd} \text{ group of calculations}$</span>, and we can now write these symbolic (defining) rules,</p>
<p><span class="math-container">$\tag 1 \tau^2 = 1_d$</span>
<span class="math-container">$\tag 2 \sigma^4 = 1_d$</span>
<span class="math-container">$\tag 3 \sigma\tau = \tau\sigma^3$</span></p>
<p>Given any word (string) in the letters <span class="math-container">$\tau$</span> and <span class="math-container">$\sigma$</span> we can 'move' all the <span class="math-container">$\tau$</span> letters to the left and standardize (present/represent) the permutation to have the form</p>
<p><span class="math-container">$\tag 4 \tau^n \sigma^m \quad n \in \{0,1\} \text{ and } m \in \{0,1,2,3\}$</span></p>
<p>We conclude that the group generated by <span class="math-container">$\tau$</span> and <span class="math-container">$\sigma$</span> has exactly <span class="math-container">$8$</span> elements.</p>
<p>Since</p>
<p><span class="math-container">$\; \sigma \tau = \tau \sigma^3$</span></p>
<p>and</p>
<p><span class="math-container">$\; \tau \sigma \ne \tau \sigma^3$</span></p>
<p>we also know that this is a non-abelian subgroup of <span class="math-container">$S_4$</span>.</p>
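<p>The eight elements can also be enumerated by brute force; in this sketch permutations act on $\{0,1,2,3\}$ and <code>compose(f, g)</code> applies <code>g</code> first:</p>

```python
from itertools import product

def compose(f, g):
    """(f∘g)(i) = f(g(i))."""
    return tuple(f[g[i]] for i in range(4))

tau = (2, 1, 0, 3)      # the transposition (13), 0-indexed
sigma = (1, 2, 3, 0)    # the 4-cycle (1234), 0-indexed

group = {tau, sigma}
while True:
    new = {compose(f, g) for f, g in product(group, repeat=2)} | group
    if new == group:     # closed under composition, so it is the generated subgroup
        break
    group = new

print(len(group))                                  # 8
print((1, 0, 2, 3) in group)                       # (12) is not generated: False
print(compose(tau, sigma) == compose(sigma, tau))  # non-abelian: False
```

<p>This matches the $8$ standardized forms $\tau^n\sigma^m$ above.</p>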
|
139,575 | <p>I use Magma to calculate the L-value:</p>
<pre><code>E:=EllipticCurve([1, -1, 1, -1, 0]);
E;
Evaluate(LSeries(E),1),RealPeriod(E),Evaluate(LSeries(E),1)/RealPeriod(E);
</code></pre>
<p>which yields</p>
<pre><code>Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - x over Rational Field
0.386769938387780043302394751243 3.09415950710224034641915800995
0.125000000000000000000000000000
</code></pre>
<p><span class="math-container">$\#\mathrm{tor}(E) = 4$</span>, <span class="math-container">$c_{17}(E)=1$</span>.</p>
<p>But the strong BSD conjecture predicts that</p>
<p><span class="math-container">$L(E,1)/\Omega_{\infty} = (\#Sha(E)/\#\mathrm{tor}(E)^2)\cdot c_{17}(E)$</span></p>
<p>We will get <span class="math-container">$L(E,1)/\Omega_{\infty}=1/16$</span>, not <span class="math-container">$1/8$</span>.
Why does that happen? Thanks a lot.</p>
| Tim Dokchitser | 3,132 | <p>To expand my comment, there are at least 3 subtle ways to get BSD wrong:</p>
<p>1) The BSD period over ${\mathbb Q}$ is the real period $\Omega_\infty$ when $E({\mathbb R})$ is connected ($\Delta(E)<0$) and $2\Omega_\infty$ when it has two connected components ($\Delta(E)>0$). The same thing happens over number fields, at every real place. So in your example you have to divide the $L$-value by $2\Omega_\infty$ to get $1/16$.</p>
<p>An alternative way of phrasing this is that the Tamagawa number at an Archimedean place is $2$ if real and split ($\Delta>0$), and $1$ otherwise. Phrased that way it is harder to forget to include it, and you can always use $\Omega_\infty$. Magma does not have Tamagawa numbers at infinite places directly.</p>
<p>The other two concern BSD over number fields:</p>
<p>2) The height pairing in BSD depends on the ground field. For instance, $E=37A1$ has $E({\mathbb Q})={\mathbb Z}\cdot P$ and $E({\mathbb Q(i)})={\mathbb Z}\cdot P$, where $P$ is the same point $(0,0)$. The regulator is $\approx 0.051$ over ${\mathbb Q}$ but $\approx 2\cdot 0.051=0.102$ over ${\mathbb Q(i)}$; the factor $2$ is $[{\mathbb Q(i)}:{\mathbb Q}]$.</p>
<p>3) Over ${\mathbb Q}$ there is this luxury of having a global everywhere minimal model, so there is a canonical differential to integrate. Over number fields you cannot do this, so one usually takes any invariant differential and introduces a correction term that measures its failure to be minimal at all primes. The point is that if you start with a curve over ${\mathbb Q}$ with additive reduction at $p$ and go up to a number field $K$ where $p$ ramifies, e.g. $50A3$ over $K={\mathbb Q}(\sqrt 5)$, the minimal model might stop being minimal, and this correction factor comes in for BSD over $K$. </p>
<p>The functions ConjecturalRegulator and ConjecturalSha in Magma take care of these normalizations - it's actually quite nice to experiment with them. </p>
<p>Hope this helps. </p>
<p>P.S. You would not believe how many times each of these mistakes was made!</p>
|
111,795 | <p>I need to add a small graphics on top of a larger one, and the small graphics should stick very close to the large one, with their axis aligned. Here's a minimal code to work with, using some elements from this question/answer :</p>
<p><a href="https://mathematica.stackexchange.com/questions/22521/how-to-make-a-plot-on-top-of-other-plot">How to make a plot on top of other plot?</a></p>
<pre><code>Intensity[p_, q_, phi_] := Plot[
(If[p > 0, Sin[2Pi p^2 x]/(2Pi p^2 x), 1]Cos[2Pi p^2 q x + phi/2])^2,
{x, -30, 30},
PlotPoints -> 400,
MaxRecursion -> 4,
PlotRange -> {{-30, 30}, {0, 1}},
Axes -> False,
AspectRatio -> 1,
Frame -> True,
ImageSize -> {600, 600}
]
LumIntensity[p_, q_, phi_] := DensityPlot[
(If[p > 0, Sin[2Pi p^2 x]/(2Pi p^2 x), 1]Cos[2Pi p^2 q x + phi/2])^2,
{x, -30, 30}, {y, 0, 1},
AspectRatio -> 0.1,
PlotPoints -> {1000, 2},
Frame -> None,
ImageSize -> 600
]
GraphicsColumn[
{LumIntensity[0.25, 5, 0], Intensity[0.25, 5, 0]},
Spacings -> 0
]
</code></pre>
<p>Here's what I want to achieve (which the question/answer above don't solve) :</p>
<p><img src="https://s22.postimg.org/b3ad8su2p/interference.jpg" alt="interference"></p>
<p>Also, how can I add a black frame around the small graphics ? Using <code>Frame -> True</code> or <code>Framed[...]</code> gives an ugly output.</p>
<p>The combination would be used for a Manipulate box, since <code>p</code>, <code>q</code> and <code>phi</code> are variables.</p>
<p><strong>EDIT :</strong> Actually, it would be better if the small graphics was placed at the bottom of the large one.</p>
| Cham | 6,260 | <p>This is a partial solution, using <code>Epilog</code> and <code>Inset</code>. It has an alignment problem, especially after we resize the picture by hand inside the <code>Manipulate</code> box. Also, without resizing the whole, playing with the parameters may give an alignment problem after a while. How to fix this ?</p>
<pre><code>LumIntensity[x_, p_, q_] := (If[p > 0, Sin[2Pi p^2 x]/(2Pi p^2 x), 1]Cos[2Pi p^2 q x])^2
Intensity1[p_, q_] := Inset[
DensityPlot[LumIntensity[x, p, q],
{x, -30, 30}, {y, 0, 1},
ColorFunction -> GrayLevel,
AspectRatio -> 0.1,
Frame -> None,
PlotPoints -> {1000, 2},
ImageSize -> 600],
{0, -0.1}
]
Intensity2[p_, q_] := Plot[
LumIntensity[x, p, q],
{x, -30, 30},
PlotPoints -> 400,
MaxRecursion -> 4,
PlotRange -> {{-30, 30}, {-0.2, 1}},
Axes -> None,
AspectRatio -> 1,
Frame -> True,
Epilog -> Intensity1[p, q],
ImageSize -> {600, 600}
]
Manipulate[
Intensity2[p, q],
{{p, 0.25, Style["Diffraction : p", 12]}, 0, 0.5, 0.01,
ImageSize -> Large, Appearance -> "Labeled"},
{{q, 1, Style["Interference : q", 12]}, 1, 10, 0.01,
ImageSize -> Large, Appearance -> "Labeled"},
ControlPlacement -> Bottom,
FrameMargins -> None
]
</code></pre>
<p>Preview :</p>
<p><img src="https://s22.postimg.org/sic3kyhe9/Manip_Box.jpg" alt="Box"></p>
<p>So how can I make the bottom graphics always well aligned with the graphics above it, even after we resize the whole by hand?</p>
|
3,553,644 | <p>I am taking a Introduction to Calculus course and am struggling to understand how derivatives can represent tangent lines.</p>
<p>I learned that derivatives are the rate of change of a function but that they can also represent the slope of the tangent at a point. I also learned that a derivative will always be one order lower than the original function.</p>
<p>For example: <span class="math-container">$f(x) = x^3$</span> and <span class="math-container">$f'(x) = 3x^2$</span></p>
<p>What I fail to understand is that how can <span class="math-container">$3x^2$</span> represent the slope of the tangent line if it is not a linear function?</p>
<p>Wouldn't this example mean that the slope or the tangent itself is a parabola?</p>
| Tryst with Freedom | 688,539 | <p>The derivative represents the slope of the tangent, not the equation of a tangent line. </p>
<p>To understand why this is so, we delve into the question of "what is the derivative?". The fundamental idea behind the derivative is to take a point on the curve and another point extremely close to it, and to compute the slope of the line through those two points. This is reflected in the definition of the derivative, which I assume you are familiar with:</p>
<p><span class="math-container">$$\lim_{h \to 0} \frac{ f(x+h)-f(x)}{h}$$</span></p>
<p>If you look at any curve, you will notice that a line tangent to the curve at one point won't be tangent to it at another, by the very definition of the tangent. Hence it is understandable that the derivative of a function is in fact another function, one which relates the x-coordinate of a point on the curve to the slope of the line tangent to it.</p>
<p>Finally, if you really wanted, you could find the equation of the tangent as well. For this, you simply use the point-slope formula of the line</p>
<p><span class="math-container">$$\frac{y-y_o}{x-x_o} = {\frac{dy}{dx}}\biggr\rvert_{x_o}$$</span></p>
<p>where the slope is the derivative evaluated at x-coordinate of the point where the tangent meets the curve.</p>
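<p>A small numeric illustration for $f(x)=x^3$ at $x_0=2$ (the step sizes are arbitrary): the difference quotients approach $f'(2)=12$, and the point-slope formula then gives the tangent line itself:</p>

```python
def f(x):
    return x**3

x0 = 2.0
# difference quotients (f(x0+h) - f(x0)) / h approach f'(x0) = 3*x0^2 = 12 as h -> 0
quotients = [(f(x0 + h) - f(x0)) / h for h in (1e-1, 1e-3, 1e-5)]

slope = 3 * x0**2              # the derivative evaluated at x0

def tangent(x):
    """Point-slope form: y = f(x0) + slope * (x - x0)."""
    return f(x0) + slope * (x - x0)

print(quotients, tangent(3.0))
```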
|
3,223,705 | <p>I have a task for school and we need to plot a polar function with MATLAB. The function is <span class="math-container">$r = 1-2\cos(6\theta)$</span>.</p>
<p>I did this and I'm getting exactly the same as on Wolfram Alpha: <a href="https://www.wolframalpha.com/input/?i=polar+plot+r%3D1-2" rel="nofollow noreferrer">https://www.wolframalpha.com/input/?i=polar+plot+r%3D1-2</a>\cos(6theta)</p>
<p>I've used for <span class="math-container">$\theta$</span> a value between <span class="math-container">$0$</span> and <span class="math-container">$2\pi$</span>. Now the question explicitly says we need to take into account the period and the domain of the function and that we can use the logical condition <span class="math-container">$r>0$</span> for this. But I didn't do this because I just took a value between <span class="math-container">$0$</span> and <span class="math-container">$2\pi$</span> for <span class="math-container">$\theta$</span>. Am I doing something wrong here?</p>
<p>Thanks.</p>
| the_candyman | 51,370 | <p>Matlab code:</p>
<pre><code> clear all
close all
nPoints = 500; % Number of points for the plot
theta = linspace(0, 2*pi, nPoints); % Define the theta points
r = 1 - 2*cos(6*theta); % Evaluate the radius for each theta
x = r.*cos(theta); % Evaluate x for each theta
y = r.*sin(theta); % Evaluate y for each theta
plot(x,y) % Plot it!!!
</code></pre>
<p>Output: a nice flower!</p>
<p><a href="https://i.stack.imgur.com/vAcs5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vAcs5.png" alt="enter image description here"></a></p>
|
2,062,706 | <p>I have the following function:</p>
<p>\begin{equation}
f(q,p) = q \sqrt{p} + (1-q) \sqrt{1 - p}
\end{equation}</p>
<p>Here, $q \in [0,1]$ and $p \in [0,1]$.</p>
<p>Now, given some value $q \in [0,1]$ what value should I select for $p$ in order to maximize $f(q,p)$? That is, I need to define some function $g(q)$ such that $f(q, g(q))$ is a local maximum.</p>
<p>I've been thinking about this problem for days and I don't know where to begin. Any help will be greatly appreciated.</p>
| yurnero | 178,464 | <p>By Cauchy-Schwarz,
$$
[q \sqrt{p} + (1-q) \sqrt{1 - p}]^2\leq(q^2+(1-q)^2)(p+1-p)=q^2+(1-q)^2.
$$
To have equality, we require
$$
\frac{\sqrt{p}}{\sqrt{1-p}}=\frac{q}{1-q}\iff \boxed{p=\frac{q^2}{1-2q+2q^2}}\in[0,1].
$$</p>
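<p>A grid-search check of the boxed formula (the grid resolution and the sample values of $q$ are arbitrary):</p>

```python
import math

def f(q, p):
    return q * math.sqrt(p) + (1 - q) * math.sqrt(1 - p)

def argmax_p(q, steps=100_000):
    """Brute-force maximizer of p -> f(q, p) over a uniform grid on [0, 1]."""
    return max((k / steps for k in range(steps + 1)), key=lambda p: f(q, p))

results = []
for q in (0.2, 0.5, 0.9):
    p_star = argmax_p(q)
    p_formula = q**2 / (1 - 2 * q + 2 * q**2)
    results.append((q, p_star, p_formula, f(q, p_star),
                    math.sqrt(q**2 + (1 - q)**2)))   # the Cauchy-Schwarz bound
print(results)
```

<p>The grid maximizer matches the closed form, and the maximum value matches the Cauchy-Schwarz bound $\sqrt{q^2+(1-q)^2}$.</p>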
|
1,186,825 | <p>Prove $$\lim_{n\to\infty}\int_0^1 \left(\cos{\frac{1}{x}} \right)^n\mathrm dx=0$$</p>
<p>I tried, but failed. Any help will be appreciated.</p>
<p>At most points $(\cos 1/x)^n\to 0$, but how can I prove that the integral tends to zero clearly and convincingly?</p>
| Siminore | 29,672 | <p>Your integral coincides with
$$
\int_1^{+\infty} \frac{(\cos u)^n}{u^2}\mathrm{d}u.
$$
For almost every $u>1$, $\lim_n (\cos u)^n =0$, and $$ \frac{|\cos u|^n}{u^2} \leq \frac{1}{u^2}.$$
Since $(u \mapsto u^{-2} ) \in L^1(1,+\infty)$, by the Dominated Convergence Theorem the integral converges to zero. Actually this is just a little variant of Villetaneuse's proof...</p>
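<p>After the substitution $u=1/x$, the decay is also easy to see numerically; a rough midpoint-rule sketch (the truncation point, step count and even exponents are arbitrary choices, with even $n$ keeping the integrand nonnegative):</p>

```python
import math

def I(n, U=200.0, steps=40_000):
    """Midpoint approximation of the integral of cos(u)^n / u^2 over [1, U]
    (the tail beyond U is smaller than 1/U)."""
    h = (U - 1.0) / steps
    total = 0.0
    for k in range(steps):
        u = 1.0 + (k + 0.5) * h
        total += (math.cos(u) ** n) / (u * u) * h
    return total

vals = [I(n) for n in (2, 20, 200)]
print(vals)  # decreasing toward 0
```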
|
833,376 | <p>I know very little in the way of math history, but I question that was bothering me recently is where the terms open and closed came from in topology. I know that it's easy to ascribe a sense of openness/closedness to said sets, but I feel like there are a lot of other, more appropriate words that could have been used. On a related note, I was wondering who first developed the idea of open/closed sets, and who first used those words to describe them.</p>
| hugo mancera | 247,838 | <p>The following archives correspond to the doctoral thesis of Henri Lebesgue:</p>
<p>You can search for <em>Analysis situs</em>, the old name for what is now called topology. Lebesgue and de Rham (French mathematicians) wrote about that subject.</p>
<p>Sincerely,</p>
<p>Hugo Mancera
Colombia</p>
|
1,410,163 | <p>Show that the limit of the function, $f(x,y)=\frac{xy^2}{x^2+y^4}$, does not exist when $(x,y) \to (0,0)$.</p>
<p>I had attempted to prove this by approaching $(0, 0)$ from $y = mx$, assuming $m = -1$ and $m = 1$. The result was $f(y, -y) = \frac{y}{1+y^2}$ and $f(y, y) = \frac{y}{1+y^2}$ as the limits which are obviously different. Essentially, I was just wondering what is the correct working out for a solution to this question.</p>
| Siminore | 29,672 | <p>This is elementary logic. Assume that $p \in B^c$ <em>and</em> $p \in A$. Then, by assumption, $p \in B$, a contradiction. Hence $B^c \subset A^c$. To conclude, just reverse the argument.</p>
<p>As you see, everything reduces to the properties of logical negation.</p>
|
3,320,193 | <blockquote>
<p>If given <span class="math-container">$P(B\mid A) =4/5$</span>, <span class="math-container">$P(B\mid A^\complement)= 2/5$</span> and <span class="math-container">$P(B)= 1/2$</span>, what is the probability of <span class="math-container">$A$</span>?</p>
</blockquote>
<p>I know I need to apply Bayes' theorem here to figure this out, but I'm struggling a bit to understand how. </p>
<p>So far I've considered this formula:
<span class="math-container">$$P(B\mid A) = \dfrac{P (B \cap A) }{ P (B \cap A) + P(B^\complement \cap A)}$$</span></p>
<p>From this formula, I understand that <span class="math-container">$P(B \cap A) = P(A) \cdot P(B\mid A)$</span> so I plug in the given values but then only find that <span class="math-container">$P(B^\complement |A)$</span> is <span class="math-container">$2/25$</span>. But this does not get me any closer to my goal, <span class="math-container">$P(A)$</span>.</p>
<p>I imagine my understanding of this is quite backward. Any pointers would be helpful.</p>
<p>Thank you</p>
| mathsdiscussion.com | 694,428 | <p>Using a Venn diagram is one easy way to find the solution (<a href="https://i.stack.imgur.com/ZiYVm.jpg" rel="nofollow noreferrer">https://i.stack.imgur.com/ZiYVm.jpg</a>)</p>
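<p>The Venn-diagram picture amounts to the law of total probability, <span class="math-container">$P(B)=P(B\mid A)P(A)+P(B\mid A^\complement)\,(1-P(A))$</span>; solving this linear equation for <span class="math-container">$P(A)$</span> is one line (an editorial sketch, not part of the original reply):</p>

```python
from fractions import Fraction

p_b_given_a  = Fraction(4, 5)
p_b_given_ac = Fraction(2, 5)
p_b          = Fraction(1, 2)

# P(B) = P(B|A) P(A) + P(B|A^c) (1 - P(A))  =>  solve for P(A)
p_a = (p_b - p_b_given_ac) / (p_b_given_a - p_b_given_ac)
assert p_a == Fraction(1, 4)
```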
|
374,909 | <p>If <span class="math-container">$A\subseteq\mathbb{N}$</span> is a subset of the positive integers, we let <span class="math-container">$$\mu^+(A) = \lim\sup_{n\to\infty}\frac{|A\cap\{1,\ldots,n\}|}{n}$$</span> be the <em>upper density</em> of <span class="math-container">$A$</span>.</p>
<p>For <span class="math-container">$n\in\mathbb{N}$</span> we let <span class="math-container">$\sigma(n)$</span> be the number of divisors of <span class="math-container">$n$</span>, the numbers <span class="math-container">$1$</span> and <span class="math-container">$n$</span> included.</p>
<p>Do we have <span class="math-container">$\mu^+\big(\sigma^{-1}(\{k\})\big) = 0$</span> for all <span class="math-container">$k\in\mathbb{N}$</span>? If not, what is the value of <span class="math-container">$\sup\big\{\mu^+\big(\sigma^{-1}(\{k\})\big):k\in\mathbb{N}\big\}$</span>?</p>
| reuns | 84,768 | <p><span class="math-container">$\tau(n) \le k$</span> implies that <span class="math-container">$n=\prod_{i=1}^j p_i$</span> with <span class="math-container">$j\le k$</span>, thus <span class="math-container">$$\sum_{n=1,\tau(n)\le k}^\infty n^{-s} \le (1+\sum_{p \ prime} p^{-s})^k=\sum_n a_k(n) n^{-s}$$</span></p>
<p>(a coefficient-wise bound)</p>
<p><span class="math-container">$1+\pi(x)=O(x/\log x)=O(\sum_{n\le x} 1/\log n)$</span> and <span class="math-container">$x/\log x=O(\pi(x))$</span> imply that <span class="math-container">$$f_k(x)=\sum_{n\le x} a_k(n) =\sum_{n\le x} \frac1{\log n}f_{k-1}(x/n)$$</span> <span class="math-container">$$=O(\sum_{n\le x} \frac1{\log n} \frac{x/n}{\log x/n}(\log \log x/n)^{k-1})=O(\frac{x (\log\log x)^{k-1}}{\log x})$$</span></p>
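<p>An editorial empirical check of how thin the level sets <span class="math-container">$\sigma^{-1}(\{k\})$</span> are (a divisor-count sieve up to <span class="math-container">$10^5$</span>; for <span class="math-container">$k=2$</span> the set is exactly the primes):</p>

```python
N = 100_000
tau = [0] * (N + 1)                  # tau[n] = number of divisors of n
for d in range(1, N + 1):            # classic divisor-count sieve, O(N log N)
    for m in range(d, N + 1, d):
        tau[m] += 1

def density(k):
    # |sigma^{-1}({k}) ∩ {1,...,N}| / N
    return sum(1 for n in range(1, N + 1) if tau[n] == k) / N

assert sum(1 for n in range(1, N + 1) if tau[n] == 2) == 9592   # primes below 10^5
assert density(2) < 0.1 and density(3) < density(2)
```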
|
4,176,646 | <p>I need to find the directional derivatives for all vectors <span class="math-container">$u=[u_1\ \ u_2]\in \mathbb R^2$</span> with <span class="math-container">$\|u\|=1$</span> at <span class="math-container">$P_0=(0,0)$</span>, and determine whether <span class="math-container">$f$</span> is differentiable at <span class="math-container">$P_0$</span>.</p>
<p><span class="math-container">$$f(x,y)=\begin{cases}
1 & y=x^2,x\neq 0\\
0 & \text{else}
\end{cases}$$</span></p>
<p>First of all, if <span class="math-container">$f$</span> is not continuous then can I always say it isn't differentiable?</p>
<p>And my attemp was this:</p>
<p><span class="math-container">$$\lim_{t\rightarrow 0} \frac {f(P_0+tu)-f(P_0)} t = \lim_{t\rightarrow 0}
\begin{cases}
\frac{1}{t} & \text{else}\\
0 & u_1=0 \text{ or } u_1^2\neq u_2\\
\end{cases}$$</span>
Does the fact that <span class="math-container">$\lim_{t\rightarrow 0}\frac {1}{t}$</span> does not exist say anything about f being differentiable? Because <span class="math-container">$D_if(P_0)$</span> both exist for <span class="math-container">$i=1,2$</span>.</p>
<p>So I'd like to know if my calculation is correct, and if the continuous statement is true.</p>
<p>Thanks!</p>
| BillyJoe | 573,047 | <p>What you are trying to compute is <span class="math-container">$\nu_2(x)$</span>, the <span class="math-container">$2$</span>-adic valuation of <span class="math-container">$x$</span>, i.e. the highest exponent <span class="math-container">$\nu_2(x)$</span> such that <span class="math-container">$2^{\nu_2(x)}$</span> divides <span class="math-container">$x$</span> (see <a href="https://en.wikipedia.org/wiki/P-adic_order" rel="nofollow noreferrer">here</a>).</p>
<p>If you like a fanciful formula you can get this one:</p>
<p><span class="math-container">$$\nu_2(x) = \log_2 \left[x - \sum_{k=0}^{\lfloor \log_2{x} \rfloor}\left(\left\lfloor\frac{2x-1+2^{k+1}}{2^{k+2}}\right\rfloor - \left\lfloor\frac{2x-1+2^{k+2}}{2^{k+3}}\right\rfloor - \left\lfloor \frac{x}{2^{k+2}} \right\rfloor\right)2^k \right] + \frac{1+(-1)^x}{2}$$</span></p>
<p>For an explanation see <a href="https://math.stackexchange.com/q/3611016/573047">here</a>.</p>
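<p>In practice <span class="math-container">$\nu_2$</span> is computed directly from the definition, or via the trailing-zero bit trick, rather than from the fanciful closed formula above; an editorial sketch:</p>

```python
def nu2(x: int) -> int:
    """2-adic valuation of a positive integer: the largest e with 2**e | x."""
    assert x > 0
    return (x & -x).bit_length() - 1   # x & -x isolates the lowest set bit

assert nu2(40) == 3        # 40 = 2^3 * 5
assert nu2(12) == 2
assert nu2(7) == 0
# sanity check against the definition on a range of inputs
assert all(x % 2**nu2(x) == 0 and x % 2**(nu2(x) + 1) != 0 for x in range(1, 1000))
```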
|
244,433 | <p>I have a list:</p>
<pre><code>data = {{2.*10^-9, 0.0025}, {4.*10^-9, 0.0025}, {6.*10^-9, 0.0025}, {8.*10^-9, 0.0025}, {1.*10^-8, 0.0025}, {7.*10^-9, 0.0023}, {3.*10^-9, 0.0025},...}
</code></pre>
<p>And I wanted to remove every third pair and get</p>
<pre><code> newdata = {{2.*10^-9, 0.0025}, {4.*10^-9, 0.0025}, {8.*10^-9, 0.0025}, {1.*10^-8, 0.0025}, {3.*10^-9, 0.0025},...}
</code></pre>
| Rohit Namjoshi | 58,370 | <p>Another way</p>
<pre><code>MapIndexed[If[Divisible[First@#2, 3], Nothing, #1] &, data]
</code></pre>
<p><strong>Update</strong></p>
<p>One way to iterate is to use <code>Nest</code>.</p>
<pre><code>filter = MapIndexed[If[Divisible[First@#2, 3], Nothing, #1] &, #] &;
data = Range[20]; (* Easy to see what is removed *)
Nest[filter, data, 5]
(* {1, 2, 14, 20} *)
</code></pre>
<p>To see intermediate steps</p>
<pre><code>NestList[filter, data, 5] // Column
</code></pre>
|
244,433 | <p>I have a list:</p>
<pre><code>data = {{2.*10^-9, 0.0025}, {4.*10^-9, 0.0025}, {6.*10^-9, 0.0025}, {8.*10^-9, 0.0025}, {1.*10^-8, 0.0025}, {7.*10^-9, 0.0023}, {3.*10^-9, 0.0025},...}
</code></pre>
<p>And I wanted to remove every third pair and get</p>
<pre><code> newdata = {{2.*10^-9, 0.0025}, {4.*10^-9, 0.0025}, {8.*10^-9, 0.0025}, {1.*10^-8, 0.0025}, {3.*10^-9, 0.0025},...}
</code></pre>
| AsukaMinato | 68,689 | <pre><code>Riffle[data[[;; ;; 3]], data[[2 ;; ;; 3]]]
</code></pre>
|
244,433 | <p>I have a list:</p>
<pre><code>data = {{2.*10^-9, 0.0025}, {4.*10^-9, 0.0025}, {6.*10^-9, 0.0025}, {8.*10^-9, 0.0025}, {1.*10^-8, 0.0025}, {7.*10^-9, 0.0023}, {3.*10^-9, 0.0025},...}
</code></pre>
<p>And I wanted to remove every third pair and get</p>
<pre><code> newdata = {{2.*10^-9, 0.0025}, {4.*10^-9, 0.0025}, {8.*10^-9, 0.0025}, {1.*10^-8, 0.0025}, {3.*10^-9, 0.0025},...}
</code></pre>
| Sjoerd Smit | 43,522 | <p>If you ask me, this is the most direct approach:</p>
<pre><code>Delete[data, List /@ Range[3, Length[data], 3]]
</code></pre>
|
244,433 | <p>I have a list:</p>
<pre><code>data = {{2.*10^-9, 0.0025}, {4.*10^-9, 0.0025}, {6.*10^-9, 0.0025}, {8.*10^-9, 0.0025}, {1.*10^-8, 0.0025}, {7.*10^-9, 0.0023}, {3.*10^-9, 0.0025},...}
</code></pre>
<p>And I wanted to remove every third pair and get</p>
<pre><code> newdata = {{2.*10^-9, 0.0025}, {4.*10^-9, 0.0025}, {8.*10^-9, 0.0025}, {1.*10^-8, 0.0025}, {3.*10^-9, 0.0025},...}
</code></pre>
| imida k | 34,532 | <p>My two answers :-)</p>
<pre><code>data[[Select[Range[Length[data]], Mod[#, 3] != 0 &]]]
</code></pre>
<p>and</p>
<pre><code>Transpose[Select[Transpose[{data, Range[Length[data]]}], Mod[#[[2]], 3] != 0 &]][[1]]
</code></pre>
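<p>For comparison outside Mathematica, the same every-third-element filter is a one-liner in Python (an editorial aside, using 1-based positions as in the answers above and stand-in data):</p>

```python
data = list(range(1, 21))   # stand-in for the {time, value} pairs in the question
newdata = [x for pos, x in enumerate(data, start=1) if pos % 3 != 0]

assert newdata[:6] == [1, 2, 4, 5, 7, 8]   # elements at positions 3, 6, ... are dropped
assert len(newdata) == 14                  # 20 elements minus the 6 removed
```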
|
704,917 | <p>I need your help evaluating this integral:
<span class="math-container">$$I=\int_0^\infty F(x)\,F\left(x\,\sqrt2\right)\frac{e^{-x^2}}{x^2} \, dx,\tag1$$</span>
where <span class="math-container">$F(x)$</span> represents <a href="http://mathworld.wolfram.com/DawsonsIntegral.html" rel="nofollow noreferrer">Dawson's function/integral</a>:
<span class="math-container">$$F(x)=e^{-x^2}\int_0^x e^{y^2} \, dy = \frac{\sqrt{\pi}}{2} e^{-x^{2}} \operatorname{erfi}(x).\tag2$$</span></p>
<p>Dawson's function can also be represented by the infinite integral <span class="math-container">$$F(x) = \frac{1}{2} \int_{0}^{\infty} e^{-t^{2}/4} \sin(xt) \, dt.$$</span></p>
<p>Since <span class="math-container">$F(x)$</span> behaves like <span class="math-container">$x$</span> near <span class="math-container">$x=0$</span> and like <span class="math-container">$\frac{1}{2x}$</span> for large values of <span class="math-container">$x$</span>, we know that integral <span class="math-container">$(1)$</span> converges.</p>
| Random Variable | 16,033 | <p>Notice that for <span class="math-container">$a>0$</span>, we have <span class="math-container">$$F(ax) = e^{-a^{2}x^{2}}\int_{0}^{ax} e^{y^{2}} \mathrm dy = e^{-a^{2}x^{2}} \int_{0}^{a} u e^{x^{2}u^{2}} \, \mathrm du . \tag{1}$$</span></p>
<p>Then using <span class="math-container">$(1)$</span>, we get</p>
<p><span class="math-container">$$ \begin{align} I &= \int_{0}^{\infty} F(x) F(x \sqrt{2}) \, \frac{e^{-x^{2}}}{x^{2}} \, \mathrm dx \\&= \int_{0}^{\infty} \int_{0}^{\sqrt{2}} \int_{0}^{1} x e^{-x^{2}} e^{x^{2} y^{2}} x e^{-2x^{2}} e^{x^{2}z^{2}} \, \frac{e^{-x^{2}}}{x^{2}} \, \mathrm dy \, \mathrm dz \, \mathrm dx \\ &= \int_{0}^{\sqrt{2}} \int_{0}^{1} \int_{0}^{\infty} e^{-(4-y^{2}-z^{2})x^{2}} \, \mathrm dx \, \mathrm dy \, \mathrm dz \\ &= \frac{\sqrt{\pi}}{2} \int_{0}^{\sqrt{2}} \int_{0}^{1} \frac{1}{\sqrt{4-y^{2}-z^{2}}} \, \mathrm dy \, \mathrm dz \\&= \frac{\sqrt{\pi}}{2} \int_{0}^{\sqrt{2}} \int_{0}^{\arcsin ( \frac{1}{\sqrt{4-z^{2}}})} \, \mathrm d \theta \, \mathrm d z \tag{2} \\ &= \frac{\sqrt{\pi}}{2} \int_{0}^{\sqrt{2}} \arcsin \left( \frac{1}{\sqrt{4-z^{2}}} \right) \, \mathrm dz \\ &= \frac{\sqrt{\pi}}{2} \left( \frac{ \sqrt{2} \pi}{4} - \int_{0}^{\sqrt{2}} \frac{z^{2}}{\sqrt{3-z^{2}} (4-z^{2})} \, \mathrm dz \right) \tag{3} \\ &= \frac{\sqrt{\pi}}{2} \left( \frac{\sqrt{2} \pi}{4} - \int_{1 / \sqrt{2}}^{\infty} \frac{1}{\sqrt{3u^{2}-1} (4u^{2}-1)} \frac{\mathrm du}{u}\right) \tag{4} \\ &= \frac{\sqrt{\pi}}{2} \left( \frac{\sqrt{2} \pi}{4} - 3 \int_{1 /\sqrt{2}}^{\infty} \frac{1}{ (4w^{2}+1)(w^{2}+1)} \, \mathrm dw\right) \tag{5}\\ &=\frac{\sqrt{\pi}}{2} \left( \frac{\sqrt{2} \pi}{4} - 4 \int_{1/ \sqrt{2}}^{\infty} \frac{1}{4w^{2}+1} \, \mathrm dw + \int_{1/ \sqrt{2}}^{\infty} \frac{1}{w^{2}+1} \, \mathrm dw\right) \\ &= \frac{\sqrt{\pi}}{2} \left[ \frac{\sqrt{2} \pi}{4} - \pi +2 \arctan \left( \sqrt{2} \right) +\frac{\pi}{2} - \arctan \left( \frac{1}{\sqrt{2}} \right) \right] \\ &= \frac{\sqrt{\pi}}{2} \left( \frac{\sqrt{2} \pi}{4} - \pi + 3 \arctan{\sqrt{2}} \right). \end{align}$$</span></p>
<hr />
<p><span class="math-container">$(2)$</span> Let <span class="math-container">$y=\sqrt{4-z^{2}}\sin \theta$</span>.</p>
<p><span class="math-container">$(3)$</span> Integrate by parts.</p>
<p><span class="math-container">$(4)$</span> Let <span class="math-container">$z = \frac{1}{u}$</span>.</p>
<p><span class="math-container">$(5)$</span> Let <span class="math-container">$w^{2}=3u^2-1$</span>.</p>
<hr />
<p><strong>EDIT</strong>:</p>
<p>Using the same approach, I get</p>
<p><span class="math-container">$$ \int_{0}^{\infty} F(ax) F(bx) \, \frac{e^{-p^{2}x^{2}}}{x^{2}} \, \mathrm dx $$</span></p>
<p><span class="math-container">$$ = \frac{\sqrt{\pi}}{2} \left[b \arcsin \left( \frac{a}{\sqrt{a^{2}+p^{2}}} \right) - \sqrt{a^{2}+b^{2}+p^{2}} \arctan \left(\frac{ab}{p \sqrt{a^{2}+b^{2}+p^{2}}} \right) + a \arctan \left( \frac{b}{p} \right)\right]$$</span></p>
<p>where <span class="math-container">$a, b,$</span> and <span class="math-container">$p$</span> are all positive parameters.</p>
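<p>An editorial numerical check of both closed forms (assuming SciPy, whose <code>dawsn</code> implements <span class="math-container">$F$</span>):</p>

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import dawsn

def integrand(x):
    return dawsn(x) * dawsn(np.sqrt(2) * x) * np.exp(-x**2) / x**2

numeric, _ = quad(integrand, 0, np.inf)
closed = np.sqrt(np.pi) / 2 * (np.sqrt(2) * np.pi / 4 - np.pi + 3 * np.arctan(np.sqrt(2)))
assert abs(numeric - closed) < 1e-7          # ~0.7401 either way

def general(a, b, p):
    # the parametric result quoted above
    return np.sqrt(np.pi) / 2 * (
        b * np.arcsin(a / np.sqrt(a**2 + p**2))
        - np.sqrt(a**2 + b**2 + p**2) * np.arctan(a * b / (p * np.sqrt(a**2 + b**2 + p**2)))
        + a * np.arctan(b / p)
    )

# a = 1, b = sqrt(2), p = 1 recovers the special case
assert abs(general(1.0, np.sqrt(2), 1.0) - closed) < 1e-12
```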
|
4,459,439 | <p>Suppose <span class="math-container">$G$</span> is an abelian finite group, and the number of order-2 elements in <span class="math-container">$G$</span> is denoted by <span class="math-container">$N$</span>.</p>
<p>I have found that <span class="math-container">$N= 2^n-1$</span> for some <span class="math-container">$n$</span> satisfying <span class="math-container">$2^n \mid |G|$</span>. I write my proof below. Would you tell me if this proof is correct? Moreover, can we say more about the number <span class="math-container">$N$</span>?</p>
<h2>My Proof</h2>
<p>I: The set of all elements of order 2, together with <span class="math-container">$\{e \}$</span>, is a subgroup of <span class="math-container">$G$</span> because for all <span class="math-container">$x \ne y: (xy)^2=x^2y^2=e$</span>, and all elements are self-inverse. Therefore, <span class="math-container">$N+1$</span> divides <span class="math-container">$|G|$</span> by Lagrange's theorem.</p>
<p>II: If <span class="math-container">$N=1$</span> (i.e. there is only one element of order 2 in <span class="math-container">$G$</span>), namely <span class="math-container">$x$</span>, everything will be fine. However, if we have <span class="math-container">$x$</span> and <span class="math-container">$y$</span> as elements of order 2 in <span class="math-container">$G$</span>, then <span class="math-container">$xy$</span> has order 2 and <span class="math-container">$N=3$</span>. If there exists another element of order 2 in <span class="math-container">$G$</span>, namely <span class="math-container">$z$</span>, then <span class="math-container">$xz,\ yz,\ xyz$</span> have order 2 and <span class="math-container">$N=7$</span>. If there exists another element of order 2 in <span class="math-container">$G$</span>, namely <span class="math-container">$w$</span>, then <span class="math-container">$xw,\ yw,\ zw,\ xyw,\ xzw,\ yzw, \ xyzw$</span> have order 2 and <span class="math-container">$N=15$</span>. By induction, <span class="math-container">$N=\binom{n}{1}+\cdots+\binom{n}{n} = 2^n-1$</span> for some <span class="math-container">$n$</span>.</p>
<p>With I and II, <span class="math-container">$N= 2^n-1$</span> for some <span class="math-container">$n$</span> that satisfy <span class="math-container">$2^n| \ |G|$</span>.</p>
<p>Obviously, if |G| is odd, <span class="math-container">$N=0$</span>. Or if <span class="math-container">$|G|=36$</span>, <span class="math-container">$N=1$</span> or <span class="math-container">$N=3$</span>.</p>
<p>Is what I wrote correct?</p>
<p>Can we be more specefic about the number of elements of order 2 in <span class="math-container">$G$</span>?</p>
| Mark | 470,733 | <p>By the fundamental theorem of finite abelian groups, <span class="math-container">$G$</span> can be decomposed:</p>
<p><span class="math-container">$G\cong\mathbb{Z_{2^{n_1}}}\times\mathbb{Z_{2^{n_2}}}\times...\times\mathbb{Z_{2^{n_k}}}\times H$</span></p>
<p>Where <span class="math-container">$H$</span> is a group of odd order. A cyclic group of even order has exactly one element of order <span class="math-container">$2$</span>. So let's say <span class="math-container">$a_i$</span> is the element of order <span class="math-container">$2$</span> of the group <span class="math-container">$\mathbb{Z_{2^{n_i}}}$</span>. Then clearly an element <span class="math-container">$g\in G$</span> satisfies <span class="math-container">$2g=0$</span> if and only if it has the form <span class="math-container">$g=(\epsilon_1a_1, \epsilon_2a_2,...,\epsilon_ka_k, 0)$</span> where <span class="math-container">$\epsilon_i\in\{0,1\}$</span>. So the number of such elements is <span class="math-container">$2^k$</span>. Thus the number of elements of order <span class="math-container">$2$</span> is <span class="math-container">$2^k-1$</span>. So what you wrote is correct, and <span class="math-container">$k$</span> is the number of <span class="math-container">$2$</span>-groups in the unique decomposition of <span class="math-container">$G$</span>. (and for each <span class="math-container">$k$</span> there exists such a group where <span class="math-container">$N=2^k-1$</span>, we can't say more)</p>
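<p>An editorial brute-force check of the count <span class="math-container">$N=2^k-1$</span>, where <span class="math-container">$k$</span> is the number of even cyclic factors in the decomposition:</p>

```python
from itertools import product

def count_order_two(moduli):
    """Number of elements of order exactly 2 in Z_{m1} x ... x Z_{mr}."""
    count = 0
    for g in product(*(range(m) for m in moduli)):
        # nonzero tuple with 2g = 0 componentwise
        if any(g) and all(2 * gi % m == 0 for gi, m in zip(g, moduli)):
            count += 1
    return count

assert count_order_two([4, 2, 3]) == 2**2 - 1     # two even factors  -> N = 3
assert count_order_two([8, 2, 2, 5]) == 2**3 - 1  # three even factors -> N = 7
assert count_order_two([9, 3]) == 0               # odd order -> N = 0
```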
|
3,526,586 | <p>1) Let <span class="math-container">$A \in \mathbb{R}^{n \times n}$</span> be a matrix with nonzero determinant. Show that there exists <span class="math-container">$c>0$</span> so that for every <span class="math-container">$v \in \mathbb{R}^{n},\|A v\| \geq c\|v\|$</span></p>
<p>My attempt:
Since <span class="math-container">$A$</span> is invertible, we have <span class="math-container">$\frac{\|Av\|}{\|v\|}>0$</span> for all <span class="math-container">$v \neq 0$</span>. But how can we fix a constant <span class="math-container">$c>0$</span> ?</p>
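<p>A standard resolution (an editorial note, since the attached reply addresses a different question): one may take <span class="math-container">$c$</span> to be the smallest singular value <span class="math-container">$\sigma_{\min}(A)$</span>, which is positive precisely because <span class="math-container">$\det A \neq 0$</span>. A numerical sketch:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
assert abs(np.linalg.det(A)) > 1e-12             # invertible with probability 1

c = np.linalg.svd(A, compute_uv=False).min()     # smallest singular value of A
for _ in range(1000):
    v = rng.standard_normal(4)
    # ||A v|| >= sigma_min(A) ||v|| for every v (up to floating-point slack)
    assert np.linalg.norm(A @ v) >= c * np.linalg.norm(v) - 1e-10
```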
| Community | -1 | <p>It must be <span class="math-container">$1$</span>, because <span class="math-container">$(0,1,0)^t\in U$</span>.</p>
|
3,202,797 | <p>Why is solving the system of equations
<span class="math-container">$$1+x-y^2=0$$</span>
<span class="math-container">$$y-x^2=0$$</span>
the same as minimizing
<span class="math-container">$$f(x,y)=(1+x-y^2)^2 + (y-x^2)^2$$</span></p>
<p>Originally I thought it was because if you take the partial derivatives of <span class="math-container">$f(x,y)$</span> and set them equal to zero that is what you are doing in the system. But when I worked out the partial derivatives it was not clear that that is what was going on. </p>
<p>Can someone clarify why they are equivalent?</p>
| Rohit Pandey | 155,881 | <p>We can say <span class="math-container">$f(x,y)=g(x,y)^2+h(x,y)^2$</span>. It is clear that being the sum of two square terms, <span class="math-container">$f(x,y)\geq 0$</span>. So, the minimum value of <span class="math-container">$f(x,y)$</span> (which is <span class="math-container">$0$</span>) comes about when <span class="math-container">$g(x,y)=0$</span> and <span class="math-container">$h(x,y)=0$</span>.</p>
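<p>An editorial numerical sketch (assuming SciPy): minimizing <span class="math-container">$f$</span> drives both residuals to <span class="math-container">$0$</span>, landing exactly on a root of the system.</p>

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(v):
    x, y = v
    return np.array([1 + x - y**2, y - x**2])

# least_squares minimizes (1/2) * sum of squared residuals, i.e. f up to a factor
sol = least_squares(residuals, x0=[1.0, 1.0])
assert sol.cost < 1e-12                            # the minimum value 0 is attained ...
assert np.all(np.abs(residuals(sol.x)) < 1e-6)     # ... at a solution of the system
```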
|
39,684 | <p>In order to have a good view of the whole mathematical landscape one might want to know a deep theorem from the main subjects (I think my view is too narrow so I want to extend it).</p>
<p>For example in <em>natural number theory</em> it is good to know quadratic reciprocity and in <em>linear algebra</em> it's good to know the Cayley-Hamilton theorem (to give two examples).</p>
<p>So, what is one (<strong>per post</strong>) deep and representative theorem of each subject that one can spend a couple of months or so to learn about? (In Combinatorics, Graph theory, Real Analysis, Logic, Differential Geometry, etc.)</p>
| Community | -1 | <ul>
<li>In Algebraic Number theory you have the Kronecker-Weber theorem.</li>
</ul>
|
39,684 | <p>In order to have a good view of the whole mathematical landscape one might want to know a deep theorem from the main subjects (I think my view is too narrow so I want to extend it).</p>
<p>For example in <em>natural number theory</em> it is good to know quadratic reciprocity and in <em>linear algebra</em> it's good to know the Cayley-Hamilton theorem (to give two examples).</p>
<p>So, what is one (<strong>per post</strong>) deep and representative theorem of each subject that one can spend a couple of months or so to learn about? (In Combinatorics, Graph theory, Real Analysis, Logic, Differential Geometry, etc.)</p>
| Yuval Filmus | 1,277 | <p>In Combinatorics you have for example Szemerédi's regularity lemma and all sorts of "related" results, such as the Green-Tao theorem.</p>
|
39,684 | <p>In order to have a good view of the whole mathematical landscape one might want to know a deep theorem from the main subjects (I think my view is too narrow so I want to extend it).</p>
<p>For example in <em>natural number theory</em> it is good to know quadratic reciprocity and in <em>linear algebra</em> it's good to know the Cayley-Hamilton theorem (to give two examples).</p>
<p>So, what is one (<strong>per post</strong>) deep and representative theorem of each subject that one can spend a couple of months or so to learn about? (In Combinatorics, Graph theory, Real Analysis, Logic, Differential Geometry, etc.)</p>
| Arturo Magidin | 742 | <p>In <strong>Finite Group Theory,</strong> while the Odd Order Theorem and the Classification are major results, I would put a major landmark at the Sylow Theorems and Hall's Theorem (a generalization of the Sylow Theorems). Especially the former come up all the time, and there are many interesting corollaries that often are not discussed. </p>
<p>Another good possibility is the <strong>O'Nan-Scott Theorem</strong> for the study of permutation groups.</p>
<p>(Also, I think it would take a lot longer than "a couple of months or so" to really learn and understand the Classification of Finite Simple Groups...)</p>
|
39,684 | <p>In order to have a good view of the whole mathematical landscape one might want to know a deep theorem from the main subjects (I think my view is too narrow so I want to extend it).</p>
<p>For example in <em>natural number theory</em> it is good to know quadratic reciprocity and in <em>linear algebra</em> it's good to know the Cayley-Hamilton theorem (to give two examples).</p>
<p>So, what is one (<strong>per post</strong>) deep and representative theorem of each subject that one can spend a couple of months or so to learn about? (In Combinatorics, Graph theory, Real Analysis, Logic, Differential Geometry, etc.)</p>
| Eric Naslund | 6,075 | <p>In subject $X$ the Fundamental Theorem of $X$ is always pretty important.</p>
<p>For example: </p>
<ul>
<li>Fundamental Theorem of Finitely Generated Abelian Groups. </li>
<li>Fundamental Theorem of Calculus.</li>
<li>Fundamental Theorem of Arithmetic.</li>
<li>Fundamental Theorem of Algebra.</li>
<li>Fundamental Theorem of Galois Theory</li>
<li><a href="http://en.wikipedia.org/wiki/Fundamental_theorem" rel="nofollow">Fundamental Theorem of.....</a></li>
</ul>
|
39,684 | <p>In order to have a good view of the whole mathematical landscape one might want to know a deep theorem from the main subjects (I think my view is too narrow so I want to extend it).</p>
<p>For example in <em>natural number theory</em> it is good to know quadratic reciprocity and in <em>linear algebra</em> it's good to know the Cayley-Hamilton theorem (to give two examples).</p>
<p>So, what is one (<strong>per post</strong>) deep and representative theorem of each subject that one can spend a couple of months or so to learn about? (In Combinatorics, Graph theory, Real Analysis, Logic, Differential Geometry, etc.)</p>
| Community | -1 | <p>In graph theory you have <a href="http://en.wikipedia.org/wiki/K%C3%B6nig%27s_theorem_%28graph_theory%29" rel="nofollow">Konig's theorem</a>.</p>
|
39,684 | <p>In order to have a good view of the whole mathematical landscape one might want to know a deep theorem from the main subjects (I think my view is too narrow so I want to extend it).</p>
<p>For example in <em>natural number theory</em> it is good to know quadratic reciprocity and in <em>linear algebra</em> it's good to know the Cayley-Hamilton theorem (to give two examples).</p>
<p>So, what is one (<strong>per post</strong>) deep and representative theorem of each subject that one can spend a couple of months or so to learn about? (In Combinatorics, Graph theory, Real Analysis, Logic, Differential Geometry, etc.)</p>
| Community | -1 | <ul>
<li>In field theory: the impossibility of trisecting the angle and doubling the cube.</li>
</ul>
|
2,530,458 | <p>Find Range of $$ y =\frac{x}{(x-2)(x+1)} $$</p>
<p>Why is the range all real numbers?</p>
<p>The denominator cannot be $0$; hence, isn't the range supposed to be $y$ not equal to $0$?</p>
| StephenG - Help Ukraine | 298,172 | <p>I think this range is more properly $\bar{\mathbb{R}}$ which is the Extended Real Number line ( or the <a href="http://mathworld.wolfram.com/AffinelyExtendedRealNumbers.html" rel="nofollow noreferrer">Affinely Extended Real Numbers</a> if you prefer ) and not $\mathbb{R}$.</p>
<p>$\mathbb{R}$ does not include $\pm\infty$ and at the two poles ( $x=-1$ and $x=2$ ) this function can be said to take on these "values".</p>
<p>$\bar{\mathbb{R}}$ does include these values.</p>
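<p>An editorial check of why <span class="math-container">$0$</span> (and every other real) is attained: for <span class="math-container">$y \neq 0$</span>, the equation <span class="math-container">$y = x/((x-2)(x+1))$</span> becomes the quadratic <span class="math-container">$yx^2-(y+1)x-2y=0$</span>, whose discriminant <span class="math-container">$(y+1)^2+8y^2$</span> is always positive; and <span class="math-container">$y=0$</span> is attained at <span class="math-container">$x=0$</span>.</p>

```python
import numpy as np

def f(x):
    return x / ((x - 2) * (x + 1))

assert f(0) == 0                                  # 0 is in the range, at x = 0
for y in [-5.0, -1.0, -0.3, 0.7, 2.0, 10.0]:
    disc = (y + 1)**2 + 8 * y**2                  # discriminant of y x^2 - (y+1) x - 2y
    roots = np.roots([y, -(y + 1), -2 * y])
    assert disc > 0 and np.all(np.isreal(roots))  # a real preimage always exists
    x = roots[0].real
    assert abs(f(x) - y) < 1e-9                   # so every real y is attained
```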
|
410,905 | <p>If $A$ is a real, symmetric, regular, positive definite matrix in $\mathbb{R}^{n\times n}$ and $x,h\in \mathbb{R}^n$, why is $\langle Ah,x\rangle = \langle h,A^T x\rangle =\langle Ax,h\rangle$?
Is there some rule or theorem for this?</p>
| pritam | 33,736 | <p>Note that inner product can be written as: $\langle x,y\rangle=y^Tx$. So $\langle Ah, x\rangle=x^T Ah$ and $\langle h, A^T x\rangle=(A^Tx)^Th=(x^TA)h=\langle Ah,x\rangle.$ Also $A^T=A$ as $A$ is symmetric and this gives the last equality.</p>
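<p>The rule at work is the adjoint identity <span class="math-container">$\langle Ah,x\rangle=\langle h,A^Tx\rangle$</span>, which together with <span class="math-container">$A=A^T$</span> gives all three equalities; an editorial numerical check:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
A = M @ M.T + 3 * np.eye(3)        # symmetric positive definite
h = rng.standard_normal(3)
x = rng.standard_normal(3)

assert np.isclose(np.dot(A @ h, x), np.dot(h, A.T @ x))   # adjoint identity
assert np.isclose(np.dot(h, A.T @ x), np.dot(A @ x, h))   # uses A = A^T
```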
|
2,799,123 | <p>Prove the following equation by counting the non-empty subsets of $\{1,2,\ldots,n\}$ in $2$ different ways:</p>
<p>$1+2+2^2+2^3\ldots+2^{n-1}=2^n-1$.</p>
<p>Let $A=\{1,2\ldots,n\}$. I know from theory that it has $2^n-1$ non-empty subsets, which is the right-hand side of the equation but, how do count the left one?</p>
<p>I've proven it using induction but how can i get to the first part of the equation by counting subsets differently?</p>
| user | 505,767 | <p>For the LHS we need to sum</p>
<ul>
<li>$n$ subsets with $1$ element</li>
<li>$\binom{n}{2}$ subsets with $2$ elements</li>
<li>$\binom{n}{3}$ subsets with $3$ elements</li>
<li>etc.</li>
</ul>
<p>that is by binomial theorem $\sum_{k=0}^{n} \binom{n}{k}a^kb^{n-k}=(a+b)^n$</p>
<p>$$\sum_{k=1}^{n} \binom{n}{k} =\sum_{k=0}^{n} \binom{n}{k} -\binom{n}{0}=(1+1)^n-1=2^n-1$$</p>
<p>By direct check note that</p>
<p>$$1+2+2^2+\dots+2^{n-1}=(2-1)(1+2+2^2+\dots+2^{n-1} )=$$$$=2(1+2+2^2+\dots+2^{n-1})-1(1+2+2^2+\dots+2^{n-1})=$$
$$=2+2^2+\dots+2^{n}-1-2-2^2-\dots-2^{n-1}=2^n-1$$</p>
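<p>The missing "second count" can be made explicit (an editorial note): grouping the non-empty subsets of <span class="math-container">$\{1,\ldots,n\}$</span> by their largest element <span class="math-container">$k$</span> gives <span class="math-container">$2^{k-1}$</span> subsets for each <span class="math-container">$k$</span>, which is exactly the left-hand side. A brute-force check:</p>

```python
from itertools import combinations

n = 8
subsets = [s for r in range(1, n + 1) for s in combinations(range(1, n + 1), r)]
assert len(subsets) == 2**n - 1                         # right-hand side

# group by largest element: max element k leaves 2^(k-1) choices among {1,...,k-1}
by_max = [sum(1 for s in subsets if max(s) == k) for k in range(1, n + 1)]
assert by_max == [2**(k - 1) for k in range(1, n + 1)]  # 1, 2, 4, ..., 2^(n-1)
assert sum(by_max) == 2**n - 1                          # left-hand side = right-hand side
```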
|
145,303 | <p>Another question about the convergence notes by Dr. Pete Clark:</p>
<p><a href="http://alpha.math.uga.edu/%7Epete/convergence.pdf" rel="nofollow noreferrer">http://alpha.math.uga.edu/~pete/convergence.pdf</a></p>
<p>(I'm almost at the filters chapter! Getting very excited now!)</p>
<p>On page 15, Proposition 4.6 states that for the following three properties of a topological space <span class="math-container">$X$</span>,</p>
<p><span class="math-container">$(i)$</span> <span class="math-container">$X$</span> has a countable base.</p>
<p><span class="math-container">$(ii)$</span> <span class="math-container">$X$</span> is separable.</p>
<p><span class="math-container">$(iii)$</span> <span class="math-container">$X$</span> is Lindelof (every open cover admits a countable subcover).</p>
<p>we always have <span class="math-container">$(i)\Rightarrow (ii)$</span> and <span class="math-container">$(i)\Rightarrow (iii)$</span>.</p>
<p>Also, we if <span class="math-container">$X$</span> is metrizable, we have <span class="math-container">$(iii)\Rightarrow (i)$</span>, and <em>thus all three are equivalent</em>.</p>
<p>This last part confuses me. We establish all the implications claimed in the proof, but there seems to be a missing link in the claim that all three are equivalent: namely <span class="math-container">$(ii)\Rightarrow (iii)$</span>.</p>
| Brian M. Scott | 12,042 | <p>Carl’s approach is almost certainly the easiest way to patch the gap in the notes. A nice variant is to let $D=\{x_n:n\in\Bbb N\}$ be a countable dense subset of $X$, define a map $$h:X\to\Bbb R^{\Bbb N}:x\mapsto\langle d(x,x_n):n\in\Bbb N\rangle\;,$$ and prove that $h$ is s homeomorphism of $X$ onto a subspace of $\Bbb R^{\Bbb N}$: being a countable product of second countable spaces, $\Bbb R^{\Bbb N}$ is easily shown to be second countable.</p>
<p>It is possible to give direct proofs of $(ii)\implies(iii)$ and $(iii)\implies(ii)$, but every one that I can think of either (a) smuggles in what amounts to a proof of second countability along the way or (b) uses much higher-powered machinery. </p>
<p>As an example of (a), consider the following proof that $(iii)\implies(ii)$.</p>
<blockquote>
<p>For each $n\in\Bbb N$ let $\mathscr{U}_n=\{B(x,2^{-n}):x\in X\}$, the set of open $2^{-n}$-balls in $X$; $\mathscr{U}_n$ is an open cover of $X$, so it has a countable subcover $\mathscr{V}_n=\{B(x_n(k),2^{-n}):k\in\Bbb N\}$. Let $$D=\{x_n(k):n,k\in\Bbb N\}\;;$$ clearly $D$ is countable. Let $W$ be any non-empty open set in $X$. Pick $x\in W$; there is an $n\in\Bbb N$ such that $B(x,2^{-n})\subseteq W$. $\mathscr{V}_n$ covers $X$, so there is some $k\in\Bbb N$ such that $x\in B(x_n(k),2^{-n})$; but then $x_n(k)\in D\cap B(x,2^{-n})\subseteq D\cap W$, and $D$ is dense in $X$. $\dashv$</p>
</blockquote>
<p>With just a little more work this becomes the argument that Carl suggested to prove that $(iii)$ implies $(i)$.</p>
<p>As an example of (b), the fact that every metric space has a $\sigma$-discrete base almost immediately implies that every separable metric space is Lindelöf: </p>
<blockquote>
<p>Let $\mathscr{B}=\bigcup\{\mathscr{B}_n:n\in\Bbb N\}$ be a base for $X$ such that each $\mathscr{B}_n$ is discrete. Suppose that $\mathscr{U}$ is an open cover of $X$ with no countable subcover. Let $\mathscr{R}\subseteq\mathscr{B}$ be a refinement of $\mathscr{U}$ covering $X$. $\mathscr{R}$ has no countable subcover, so $\mathscr{R}\cap\mathscr{B}_n$ is uncountable for some $n\in\Bbb N$. But then $\mathscr{R}\cap\mathscr{B}_n$ is an uncountable family of pairwise disjoint, non-empty open sets, and $X$ cannot be separable. $\dashv$</p>
</blockquote>
<p>The reason for the difficulty in going directly between $(ii)$ and $(iii)$ is that in general separability and Lindelöfness are very far from being equivalent; in the metric setting it’s second countability that ties them together by being equivalent to each. Here are a couple of examples illustrating their independence, even in rather nice spaces.</p>
<p>If you retopologize $\Bbb R$ by making every $x\in\Bbb R\setminus\{0\}$ an isolated point and giving $0$ a base of nbhds of the form $\Bbb R\setminus C$, where $C$ is any countable subset of $\Bbb R\setminus\{0\}$, then $X$ is a very nice space that is Lindelöf but not separable.</p>
<p>On the other hand, a slightly more complicated retopologization of $\Bbb R$ yields a very nice space that is separable but not Lindelöf. Make each rational an isolated point. To each irrational $x$ associate a sequence $\langle q_x(k):k\in\Bbb N\rangle$ of rational numbers converging monotonically to $x$ in the usual topology; a base of the topology at $x$ consists of the sets $B_n(x)=\{x\}\cup\{q_x(k):k\ge n\}$ for $n\in\Bbb N$. Note that if $x$ and $y$ are distinct irrationals, the sets $\{q_x(k):k\in\Bbb N\}$ and $\{q_y(k):k\in\Bbb N\}$ have at most finitely many points in common; this ensures that the space is completely regular and Hausdorff. $\Bbb Q$ is a countable dense subset, so the space is separable. And $$\{B_0(x):x\in\Bbb R\setminus\Bbb Q\}\cup\Big\{\{q\}:q\in\Bbb Q\Big\}$$ is an open cover with no countable subcover, so the space is not Lindelöf.</p>
|
1,156,874 | <p>How to show that $\mathbb{Z}[i]/I$ is a finite field whenever $I$ is a prime ideal? Is it possible to find the cardinality of $\mathbb{Z}[i]/I$ as well?</p>
<p>I know how to show that it is an integral domain, because that follows very quickly.</p>
| Arthur | 99,272 | <p>Let $\alpha \in I$ be an element, with norm $N(\alpha)$. Any element $x \in \mathbb{Z}[i]$ can be written as $q \alpha + r$ with $N(r) < N(\alpha)$. So every element of $\mathbb{Z}[i]/I$ (viewed as an equivalence class) contains an element of norm smaller than $N(\alpha)$, and there are only finitely many such elements.</p>
<p>We don't have to use the assumption that $I$ is prime for the finiteness.</p>
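<p>The counting argument can also be checked by brute force. The sketch below (function name and approach are mine, not from the answer) uses the fact that $\alpha \mid z$ in $\mathbb{Z}[i]$ exactly when $N(\alpha)$ divides both coordinates of $z\bar\alpha$, and counts the residue classes that actually occur:</p>

```python
def gaussian_residue_count(a, b, box):
    """Count residue classes of Z[i] modulo alpha = a + b*i by brute force.

    box must be at least |alpha|, so that the grid contains a representative
    of norm < N(alpha) from every class (as in the division argument above).
    """
    n = a * a + b * b                      # N(alpha)

    def congruent(p, q):
        # alpha | (p - q)  <=>  N(alpha) divides both parts of (p - q)*conj(alpha)
        x, y = p[0] - q[0], p[1] - q[1]
        return (x * a + y * b) % n == 0 and (y * a - x * b) % n == 0

    reps = []
    for x in range(-box, box + 1):
        for y in range(-box, box + 1):
            if not any(congruent((x, y), r) for r in reps):
                reps.append((x, y))
    return len(reps)
```

<p>For the Gaussian primes $1+i$, $2+i$ and $3$ this counts $2$, $5$ and $9$ residue classes respectively, matching the norms.</p>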
|
1,123,777 | <p><strong><span class="math-container">$U$</span> here represents the upper Riemann Integral.</strong></p>
<p><img src="https://i.stack.imgur.com/GbNm2.jpg" alt="enter image description here" /></p>
<p><img src="https://i.stack.imgur.com/KtRI4.jpg" alt="enter image description here" /></p>
<p><img src="https://i.stack.imgur.com/wlmAk.jpg" alt="enter image description here" /></p>
<p><img src="https://i.stack.imgur.com/pvKC3.jpg" alt="enter image description here" /></p>
<p><strong>I understand the vast majority of this proof</strong>, however the part underlined in orange states <span class="math-container">$\forall \varepsilon>0 $</span> should it not be <span class="math-container">$\forall \varepsilon\geq0 $</span> so that we have</p>
<p><span class="math-container">$U(f)\leq S(f,\Delta_\varepsilon ^1) \leq U(f)+\frac{\varepsilon}{2}$</span>?</p>
<p>For the green part , if the statement works <span class="math-container">$\forall\varepsilon>0$</span> surely it could work in the case <span class="math-container">$U(f+g)=50$</span>, <span class="math-container">$U(f)+U(g)=49$</span>, <span class="math-container">$\varepsilon=2$</span></p>
| heropup | 118,193 | <p>For a given positive integer $n$, let $p_i(n) = (a_1, a_2, \ldots, a_k)$ be a given partition of $n$ that satisfies the criteria. For each such partition $p_i(n)$, how many ways are there to generate a unique partition $p_i(n+1)$? Is there a bijection? </p>
|
749,926 | <p>I have a group of 10 players and I want to form two groups with them.Each group must have atleast one member.In how many ways can I do it?</p>
| André Nicolas | 6,312 | <p>We solve first a different problem. We want to divide our people into two teams, one to wear blue uniforms, the other to wear red. Our set has $2^{10}$ subsets. Throw away the empty set and the full set. That leaves $2^{10}-2$ ways to choose the team that will wear blue uniforms.</p>
<p>However, there are no coloured uniforms in our actual problem. So the number of ways to divide our $10$ people into two non-empty groups is $\frac{2^{10}-2}{2}$.</p>
<p><strong>Remark:</strong> One could, less plausibly, interpret the problem as meaning that some people may remain unpicked for either group. Again, we count first the number of ways to split the people into <em>uniformed</em> groups, and then divide by $2$. </p>
<p>Call the groups B, R, and U (unpicked). It is convenient to first count the ways we can split into these groups, with no restriction. There are $3^{10}$ ways to do this. Now we remove the forbidden configurations, in which there are no B, or no R, or both. There are $2^{10}$ with no B, $2^{10}$ with no R. The sum $2\cdot 2^{10}$ double-counts the configurations in which there are no B <strong>and</strong> no R. It follows that there are $3^{10}-2\cdot 2^{10}+1$ legal configurations. Like before, divide by $2$. We get that the number of ways to choose $2$ groups neither of which is empty is $\frac{3^{10}-2\cdot 2^{10}+1}{2}$. </p>
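<p>Both counts are small enough to confirm by exhaustive enumeration; the sketch below (function names are mine) checks the two closed forms for $n=10$:</p>

```python
from itertools import product

def two_groups_all_picked(n):
    # choose the set of people in the first group: any subset except the
    # empty set and the full set, then divide by 2 to forget which group
    # is "first"
    return (2 ** n - 2) // 2

def two_groups_some_unpicked(n):
    # brute force: assign each person a label B, R or U (unpicked),
    # require B and R non-empty, then identify B/R-swapped assignments
    count = sum(1 for labels in product("BRU", repeat=n)
                if "B" in labels and "R" in labels)
    return count // 2
```

<p>Both agree with the formulas above: $511$ splits into two non-empty groups, and $28501$ splits when some people may remain unpicked.</p>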
|
4,112,958 | <p>This is a Number Theory problem about the extended Euclidean Algorithm I found:</p>
<p>Use the extended Euclidean Algorithm to find all numbers smaller than <span class="math-container">$2040$</span> so that <span class="math-container">$51 | 71n-24$</span>.</p>
<p>As the eEA always involves two variables so that <span class="math-container">$ax+by=gcd(a,b)$</span>, I am not entirely sure how it is applicable in any way to this problem. Can someone point me to a general solution to this kind of problem by using the extended Euclidean Algorithm?
Also, is there maybe any other more efficient way to solve this than using the eEA?</p>
<p>(Warning: I'm afraid I'm fundamentally not getting something about the eEA, because that section of the worksheet features a number of similiar one variable problems, which I am not able to solve at all.)</p>
<p>I was thinking about using <span class="math-container">$71n-24=51x$</span>, rearranging that into
<span class="math-container">$$71n-51x=24.$$</span> It now looks more like the eEA with <span class="math-container">$an+bx=gcd(a,b)$</span>, but <span class="math-container">$24$</span> isn't the <span class="math-container">$gcd$</span> of <span class="math-container">$71$</span> and <span class="math-container">$51$</span>...</p>
| J. W. Tanner | 615,567 | <p>You are looking for numbers such that <span class="math-container">$71n\equiv24\bmod51$</span>.</p>
<p>The extended Euclidean algorithm gives the Bezout relation <span class="math-container">$23\times71-32\times51=1$</span>,</p>
<p>so <span class="math-container">$23\times71\equiv1\bmod51$</span>. Therefore, you are looking for <span class="math-container">$n\equiv23\times24\bmod51$</span>.</p>
<hr />
<p>Alternatively, you could say <span class="math-container">$20n\equiv24\bmod51$</span>, so <span class="math-container">$5n\equiv6\bmod 51$</span>,</p>
<p>and <span class="math-container">$5\times10=50\equiv-1\bmod51$</span>, so <span class="math-container">$n\equiv6(-10)=-60\bmod51$</span>.</p>
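<p>For completeness, a short sketch of the whole computation (variable names are mine): run the extended Euclidean algorithm on $71$ and $51$, invert $71$ modulo $51$, and list every solution below $2040$.</p>

```python
def extended_gcd(a, b):
    # returns (g, x, y) with a*x + b*y == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(71, 51)          # Bezout: 71*x + 51*y == 1
inv71 = x % 51                           # inverse of 71 modulo 51 (= 23)
n0 = (inv71 * 24) % 51                   # the unique solution mod 51
solutions = list(range(n0, 2040, 51))    # all n < 2040 with 51 | 71n - 24
```

<p>This yields $n \equiv 42 \pmod{51}$, i.e. the $40$ values $42, 93, \dots, 2031$.</p>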
|
644,935 | <p>I'm having trouble integrating $3^x$ using the $px + q$ rule. Can some please walk me through this?</p>
<p>Thanks</p>
| Martín-Blas Pérez Pinilla | 98,199 | <p>And the $px+q$ rule is...?</p>
<p>You <strong>can</strong> use $3^x = e^{x\log 3}$ and the obvious change of variable.</p>
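<p>A quick numerical check of where that substitution leads (the resulting antiderivative $3^x/\ln 3$ is standard; the midpoint-rule comparison below is just a sanity test):</p>

```python
import math

# 3^x = e^{x ln 3}, so the change of variable u = x ln 3 gives
F = lambda x: 3 ** x / math.log(3)        # an antiderivative: F'(x) = 3^x

# compare the exact definite integral over [0, 1] with a midpoint sum
steps = 100000
h = 1.0 / steps
approx = sum(3 ** ((i + 0.5) * h) for i in range(steps)) * h
exact = F(1) - F(0)                        # = 2 / ln 3
```
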
|
12,878 | <p>I wish the comments didn't have a lower bound for characters. Many times all I want to say is "yes". Can someone explain to me what the purpose of this lower bound is?</p>
| user127096 | 127,096 | <p>This is the site humbly suggesting you to use more characters. I'd like to also encourage you to do this. And also to consider the following observations: </p>
<ol>
<li>Users who habitually type things like "n-mfld", "height fcn", "cohomol", "sing. coho", " Lebesgue meas."... are making the site harder to use for others: these abbreviated keywords will not come up in search. </li>
<li>Users who habitually omit punctuation and ignore capitalization rules are making the site look less professional.</li>
<li>Users who habitually post incomplete questions, lacking in explanation of notation and terms in the question, are wasting the time of those who read and try to answer the question. </li>
</ol>
|
3,219,428 | <p>Sorry for the strange title, as I don't really know the proper terminology.</p>
<p>I need a formula that returns 1 if the supplied value is anything from 10 to 99, returns 10 if the value is anything from 100 to 999, returns 100 if the value is anything from 1000 to 9999, and so on.</p>
<p>I will be translating this to code and will ensure the value is never less than 1, in case that changes anything.</p>
<p>It's probably something really simple but I can't wrap my head around a nice way to do this so... thanks!</p>
| Sharky Kesa | 398,185 | <p>I think your function is
<span class="math-container">$$f(x) = 10^{\left \lfloor \log_{10}(x) \right \rfloor - 1}$$</span>
where <span class="math-container">$\lfloor r \rfloor$</span> denotes the largest integer less than or equal to <span class="math-container">$r$</span>.</p>
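<p>A sketch of this formula in code (the digit-count variant is my own addition, included because floating-point $\log_{10}$ can be off by one right at powers of ten):</p>

```python
import math

def f_float(x):
    # direct transcription of the formula above
    return 10 ** (math.floor(math.log10(x)) - 1)

def f_int(x):
    # exact integer version for integers x >= 10: len(str(x)) - 1 equals
    # floor(log10(x)) for positive integers
    return 10 ** (len(str(x)) - 2)
```

<p>So $f(10)=\dots=f(99)=1$, $f(100)=\dots=f(999)=10$, and so on, as requested.</p>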
|
398,857 | <p>Please help me solve this and please tell me how to do it..</p>
<p>$12345234 \times 23123345 \pmod {31} = $?</p>
<p>edit: please show me how to do it on a calculator not a computer thanks:)</p>
| lab bhattacharjee | 33,337 | <p>As $10^1\equiv 10\pmod{31},$</p>
<p>$10^2=100\equiv7,$</p>
<p>$10^3\equiv10\cdot7\equiv8,$</p>
<p>$10^4\equiv49\equiv18,$</p>
<p>$10^5=10^2\cdot10^3\equiv 7\cdot8\equiv25,$</p>
<p>$10^6=(10^3)^2\equiv8^2\equiv2,$</p>
<p>$10^7=10^4\cdot10^3\equiv18\cdot8\equiv20,$</p>
<p>$$12345234=4+3\cdot10+2\cdot10^2+5\cdot10^3+4\cdot10^4+3\cdot10^5+2\cdot10^6+1\cdot10^7$$
$$\equiv4+3\cdot10+2\cdot7+5\cdot8+4\cdot18+3\cdot25+2\cdot2+1\cdot20\pmod{31}$$</p>
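<p>The table of powers and the remaining steps can be checked mechanically. A sketch (variable names mine) that finishes the product the question asked for, using the same digit-by-digit reduction:</p>

```python
# powers of 10 modulo 31, matching the list above
pows = [pow(10, k, 31) for k in range(8)]

digits = [4, 3, 2, 5, 4, 3, 2, 1]   # digits of 12345234, least significant first
first = sum(d * p for d, p in zip(digits, pows)) % 31

second = 23123345 % 31              # the same reduction works for the other factor
answer = (first * second) % 31      # the product asked for
```

<p>The reduction gives $12345234 \equiv 11$ and $23123345\equiv 11$, so the product is $\equiv 121 \equiv 28 \pmod{31}$.</p>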
|
3,043,598 | <p>I have seen a procedure to calculate <span class="math-container">$A^{100}B$</span> like products without actually multiplying where <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are matrices. But the procedure will work only if <span class="math-container">$A$</span> is diagonalizable because the procedure attempts to find <span class="math-container">$B$</span> such that</p>
<p><span class="math-container">$B = a_1X_1 + a_2X_2 + ...$</span>, where <span class="math-container">$X_1,X_2,...$</span> are independent eigen-vectors (or basis) of <span class="math-container">$A$</span> and <span class="math-container">$a_1,a_2,...$</span> are scalars. </p>
<p>Is there any other procedure to multiply such matrices where there is no restriction on the matrix <span class="math-container">$A$</span>, i.e. one that also works when <span class="math-container">$A$</span> is defective (non-diagonalizable)?</p>
| Aaron | 9,863 | <p>When a matrix is not diagonalizable, you can instead use Jordan Normal Form (JNF). Instead of picking a basis of eigenvectors, you use "approximate eigenvectors." While eigenvectors are things in the kernel of <span class="math-container">$A-\lambda I$</span>, approximate eigenvectors are things in the kernel of <span class="math-container">$(A-\lambda I)^k$</span>. Essentially, we can reduce the problem to when <span class="math-container">$A$</span> is a single Jordan block.</p>
<p>Suppose that <span class="math-container">$A=\lambda I + N$</span> where <span class="math-container">$N$</span> is nilpotent, and let <span class="math-container">$x$</span> be a cyclic vector for <span class="math-container">$N$</span>, so that our vector space has a basis of <span class="math-container">$x, Nx, N^2 x, \ldots, N^{k-1} x$</span>. For simplicity of notation, set <span class="math-container">$x_i=N^i x$</span>. This is essentially an abstract form of what it means to be a Jordan block, as <span class="math-container">$A$</span> is put into Jordan form when we take the <span class="math-container">$x_i$</span> as a basis.</p>
<p>To mimic what you had for diagonalizable matrices, we need to compute <span class="math-container">$A^n x_i$</span>. Since <span class="math-container">$N$</span> commutes with <span class="math-container">$\lambda I$</span>, we can use the binomial theorem to compute <span class="math-container">$(\lambda I + N)^n = \sum \binom{n}{i} \lambda^{n-i} N^i$</span>. Then </p>
<p><span class="math-container">$$(\lambda I + N)^n x_j = \sum \binom{n}{i} \lambda^{n-i} N^i x_j=\sum \binom{n}{i} \lambda^{n-i} x_{j+i}.$$</span></p>
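<p>A small sketch checking this formula numerically for a single $4\times4$ Jordan block (the matrix helpers are hand-rolled so nothing beyond the standard library is needed; the specific $\lambda$, $k$ and $n$ are arbitrary choices of mine):</p>

```python
from math import comb

k, lam, n = 4, 2, 10

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(k)]
            for i in range(k)]

def matpow(A, m):
    R = [[int(i == j) for j in range(k)] for i in range(k)]   # identity
    for _ in range(m):
        R = matmul(R, A)
    return R

N = [[int(j == i + 1) for j in range(k)] for i in range(k)]   # nilpotent part
J = [[lam * int(i == j) + N[i][j] for j in range(k)] for i in range(k)]

# (lam*I + N)^n = sum_i C(n, i) * lam^(n-i) * N^i, and N^i = 0 once i >= k,
# so only k terms survive in the sum
Npows = [matpow(N, i) for i in range(k)]
binomial = [[sum(comb(n, i) * lam ** (n - i) * Npows[i][r][c] for i in range(k))
             for c in range(k)] for r in range(k)]
```

<p>Exact equality with repeated multiplication confirms that once the $x_i$ are chosen, a large power like $A^{100}B$ costs only $k$ binomial terms per basis vector.</p>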
|
2,315,647 | <p>Compute the gravitational attraction on a unit mass at the origin due to the mass (of constant density) occupying the volume inside the sphere $r = 2a$ and above the plane $z=a$. Use spherical coordinates.</p>
<p>So I know the function should be
$$(G/r^2) dM$$
What are the limits of integration? What should the integral look like?</p>
| Community | -1 | <p>While Rafa gave a good answer, I just wanted to expand a bit on the derivation here. That way you're less likely to get lost. Let me know if I need to explicate further on some part.</p>
<hr>
<p>A couple of physics formulas necessary for this:</p>
<ul>
<li>The gravitational force on a mass $m$ due to a mass $M$ at a distance $r$ is $$\mathbf F = \mathbf{\hat r}\frac{GmM}{r^2}$$</li>
<li>The density of an object is generally defined implicitly by $M = \int_V \rho\ dV$, but in the case of constant density objects this reduces to $$\rho = \frac{M}{V}$$ where $M$ is the mass of the object and $V$ is its volume. We will use the fact that $M=\rho V \implies dM = \rho\ dV$ later in the exercise.</li>
</ul>
<hr>
<p>Given that this is a spherical cap</p>
<p><a href="https://i.stack.imgur.com/UBO1Im.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UBO1Im.png" alt="enter image description here"></a></p>
<p>where the radius is $2a$ and the height of cap is $a$, we see that the polar angle $\theta$ varies over $0$ to whatever $\theta$ is at the bottom of the cap. To figure that out, notice that a triangle is made (look at the shape in the picture above, but ignore the letters -- those don't correspond to this exercise) where the hypotenuse is $2a$ and the adjacent side is has length $a$. Hence $\cos(\theta) = \dfrac{a}{2a} = \dfrac 12 \implies \theta = \dfrac{\pi}{3}$.</p>
<p><a href="https://i.stack.imgur.com/C5XBV.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C5XBV.jpg" alt="enter image description here"></a></p>
<p>The azimuthal angle $\varphi$ goes all the way around the circle -- i.e. it varies over $0$ to $2\pi$. And $r$ is going to be a bit trickier -- it'll be a function of $\theta$. When $\theta = 0$, $r$ can vary over $a$ to $2a$, but when $\theta=\frac{\pi}{3}$, $r$ can only be $2a$. In general, for a given $\theta$, $r$ can go no lower than the plane $z=a$ -- which in spherical coordinates is $r\cos(\theta) =a$. I.e. $r$ varies over $a\sec(\theta)$ to $2a$.</p>
<p>Now that we have our bounds, we can set up our integral:</p>
<p>$$\begin{align}\mathbf F &= \iiint_\text{cap} d\mathbf F \\
&= \iiint_\text{mass of cap} \mathbf {\hat r}\frac{G(1)dM}{r^2} \\
&= G\rho \int_0^{2\pi}\int_0^{\pi/3}\int_{a\sec(\theta)}^{2a} \mathbf {\hat r}\frac{r^2\sin(\theta)\ drd\theta d\varphi}{r^2} \\
&= G\rho \int_0^{2\pi}\int_0^{\pi/3}\int_{a\sec(\theta)}^{2a} \left(\cos(\varphi)\sin(\theta)\mathbf {\hat x} + \sin(\varphi)\sin(\theta)\mathbf {\hat y} + \cos(\theta)\mathbf {\hat z}\right)\sin(\theta)\ drd\theta d\varphi \\
&= G\rho\mathbf {\hat x}\int_0^{2\pi}\int_0^{\pi/3}\big(2a-a\sec(\theta)\big) \cos(\varphi)\sin(\theta)^2\ d\theta d\varphi \\
&\ \ \ \ + G\rho\mathbf {\hat y}\int_0^{2\pi}\int_0^{\pi/3}\big(2a-a\sec(\theta)\big)\sin(\varphi)\sin(\theta)^2\ d\theta d\varphi \\
&\ \ \ \ + G\rho\mathbf {\hat z}\int_0^{2\pi}\int_0^{\pi/3}\big(2a-a\sec(\theta)\big)\cos(\theta)\sin(\theta)\ d\theta d\varphi \\
&= 0\mathbf {\hat x} + 0\mathbf{\hat y} + G\rho\mathbf {\hat z}\left(a\frac {\pi}2\right) \\
&= \frac{\pi G\rho a}{2}\mathbf{\hat z}
\end{align}$$</p>
<hr>
<p>But it might be useful to go a step further. The exercise doesn't give us a value for the density or mass of the spherical cap. However, it may be more convenient to express it in terms of mass than density (feel free to skip this part and just use the above if you don't think your professor wants you to do this). Hence we use the definition $$\rho = \frac{M}{V}$$ to plug in for the $\rho$. That will require us to find the volume of a spherical cap. You can do so with an integral very similar to the one we calculated above ... or you can use the fact that it's been calculated before and just <a href="https://en.wikipedia.org/wiki/Spherical_cap#Volume_and_surface_area" rel="nofollow noreferrer">look up the formula</a>. In this case we have $$V = \frac {\pi(a)}{6}\left(3((2a)^2-a^2)+a^2\right) = \frac{5}{3}\pi a^3$$</p>
<p>Thus we have $\rho = \dfrac{m}{\frac{5}{3}\pi a^3} = \dfrac{3m}{5\pi a^3}$ and hence the force on our mass is</p>
<p>$$\mathbf F = \frac{3G m}{10a^2}\mathbf{\hat z}$$</p>
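<p>As a sanity check on the angular integral, one can set $G=\rho=a=1$, in which case the $z$-component should come out to $\pi/2$. The midpoint-rule sketch below is just a verification device, not part of the derivation:</p>

```python
import math

# z-component with G = rho = a = 1:
# F_z = 2*pi * Integral_0^{pi/3} (2 - sec t) cos t sin t dt
def integrand(t):
    return (2 - 1 / math.cos(t)) * math.cos(t) * math.sin(t)

steps = 100000
h = (math.pi / 3) / steps
inner = sum(integrand((i + 0.5) * h) for i in range(steps)) * h  # should be 1/4
Fz = 2 * math.pi * inner                                         # should be pi/2
```
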
|
1,057,675 | <p>I was asked to prove that $\lim\limits_{x\to\infty}\frac{x^n}{a^x}=0$ when $n$ is some natural number and $a>1$. However, taking second and third derivatives according to L'Hôpital's rule didn't bring any fresh insights nor did it clarify anything. How can this be proven? </p>
| Asaf Karagila | 622 | <p>When you "close" the bracket, it implicitly means that you're working in the space $\Bbb R\cup\{\pm\infty\}$ (and we omit the $+$ from the positive infinity sign), and in that space $[0,\infty)$ is not closed, since $\infty$ is indeed a limit point of that set.</p>
<p>[It might be the case that you are working in the $1$-point compactification of $\Bbb R$, which is like "tying" both ends of $\Bbb R$ into a single point denoted by $\infty$, but then the space is not ordered, so talking about intervals becomes a bit awkward.]</p>
<p>Remember that "open" and "close" are relative to a space and to a topology. $\{1\}$ is not open as a subset of $\Bbb R$, but it is open as a subset of $\Bbb N$ (in their standard topologies).</p>
|
1,599,886 | <p>What is the proper way of proving : the density operator $\hat{\rho}$ of a pure state has exactly one non-zero eigenvalue and it is unity, i.e,</p>
<p>the density matrix takes the form (after diagonalizing):
\begin{equation}
\hat{\rho}=
{\begin{bmatrix}
1 & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 0 \\
\end{bmatrix}}
\end{equation}</p>
<p>For mixed state: $\hat{\rho}=\sum \limits_{i}P_{i}|\psi_{i}\rangle\langle\psi_{i}|$ </p>
<p>For any state: $Tr(\hat{\rho})=\sum\limits_{i}P_{i}=1$</p>
<p>For pure state: $\hat{\rho}=|\psi\rangle\langle\psi|$</p>
<p>$|\psi\rangle$ is the statevector of the system</p>
<p>$P_{i}$ is the probability to be in the state $|\psi_{i}\rangle$, which are the eigenvalues of the density operator.</p>
| user247327 | 247,327 | <p>Do you not know how to find the eigenvalues of a matrix by solving the "characteristic equation" of the matrix? The characteristic equation of any diagonal matrix is just the product of linear terms, each of the form (diagonal entry minus $x$), so the eigenvalues <strong>are</strong> the numbers on the diagonal.</p>
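<p>A concrete sketch of this argument (the $2\times2$ state and the exact-fraction arithmetic are my own choices): for $\rho=|\psi\rangle\langle\psi|$ one has $\rho^2=\rho$ and $\operatorname{Tr}\rho=1$, so the characteristic polynomial $x^2-(\operatorname{Tr}\rho)x+\det\rho$ reduces to $x^2-x=x(x-1)$, and the eigenvalues are exactly $1$ and $0$:</p>

```python
from fractions import Fraction as F

psi = [F(3, 5), F(4, 5)]                   # real pure state, exactly normalised
rho = [[a * b for b in psi] for a in psi]  # rho = |psi><psi|

trace = rho[0][0] + rho[1][1]
det = rho[0][0] * rho[1][1] - rho[0][1] * rho[1][0]

# idempotence: rho^2 == rho, so every eigenvalue satisfies x^2 = x
rho2 = [[sum(rho[i][t] * rho[t][j] for t in range(2)) for j in range(2)]
        for i in range(2)]
```

<p>After diagonalizing, $\rho$ is therefore exactly the matrix displayed in the question: a single $1$ and zeros elsewhere.</p>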
|
1,556,298 | <p>If we have $p\implies q$, then the only case the logical value of this implication is false is when $p$ is true, but $q$ false.</p>
<p>So suppose I have a broken soda machine - it will never give me any can of coke, no matter if I insert some coins in it or not.</p>
<p>Let $p$ be 'I insert a coin', and $q$ - 'I get a can of coke'.</p>
<p>So even though the logical value of $p \implies q$ is true (when $p$ and $q$ are false), it doesn't mean the implication itself is true, right? As I said, $p \implies q$ has a logical value $1$, but implication is true when it matches the truth table of implication. And in this case, it won't, because $p \implies q$ is <strong>false</strong> for true $p$ (the machine doesn't work).</p>
<p>That's why I think it's not right to say the implication is true based on only one row of the truth table. Does it make sense?</p>
| hmakholm left over Monica | 14,366 | <p>Your distinction between "true" and "logical value 1" is not one that formal logic generally observes. Here "1" and "true" are synonyms for the same concept.</p>
<p>The meaning of the $\Rightarrow$ connective is what its truth table says it is, neither more nor less -- the truth table <em>defines</em> the connective (in classical logic). Fancy words such as "implication" or "if ... then" are just mnemonics to help you remember what the truth table is, and what the connective is <em>good for</em> -- but when there's a conflict between your intuitive understanding of those words and the truth table, the truth table wins over the words.</p>
<p>The important thing to realize is that $\Rightarrow$ is designed to be used <em>together with a $\forall$</em>. If you try to understand its naked truth table it doesn't seem very motivated -- certainly it can't express any notions of cause and effect, because the truth values of $p$ and $q$ just <em>are what they are</em> in any given world. As long as we're only looking at <em>one</em> possible state of the world, there's not much intuitive meaning in asking "what if $p$ held?" because that implies a wish to consider a world where the truth value of $p$ were different.</p>
<p>The device of standard formal logic that allows us to speak about different worlds is <em>quantifiers</em>. What we want to say is something like</p>
<blockquote>
<p>In every possible world where I put in a coin, the machine will spit out a soda.</p>
</blockquote>
<p>(though that is a little simplified -- we want to consider a "possible world" to be one where I made a different decision about my coins, not to be one where the machine had inexplicably stopped working even though it does work <em>now</em>. But let's sweep that problem aside for now).</p>
<p>This is the same as saying</p>
<blockquote>
<p>In every possible world <em>period</em>, it is true that either I don't put in a coin, or I get a soda.</p>
</blockquote>
<p>which logically becomes, using the truth table</p>
<blockquote>
<p>For all worlds $x$, the proposition (In world $x$ I put in a coin) $\Rightarrow$ (In world $x$ I get a soda) is true.</p>
</blockquote>
<p>Since there's a quantification going on, the truth value of the whole thing is not spoiled by the fact that there are some possible worlds with a broken machine where the $\Rightarrow$ evaluates to true. What interests us is just whether the $\Rightarrow$ evaluates to true <em>every time</em> or <em>not every time</em>. As long as we're in the "not every time" context, the machine is broken, and that conclusion is not affected by the "spurious" local instances of $\Rightarrow$ evaluating to true in particular worlds.</p>
<p>The construction that models (more or less) our intuition about cause and effect (or "if ... then") is not really $\Rightarrow$, but the <em>combination</em> of $\forall\cdots\Rightarrow$.</p>
<p>Unfortunately in the usual style of mathematical prose it is often considered acceptable to leave the quantification <em>implicit</em>, but logically it is there nevertheless. (And to add insult to injury, many systems of <em>formal</em> logic will implicitly treat formulas with free variables as universally quantified too, so even there you get to be sloppy and not call attention to the fact that there's quantification going on.)</p>
<hr>
<p>Note also that this is the case even in propositional logic where there are no explicit quantifiers at all. To claim that $P\to Q$ is logically valid is to say that <em>in all valuations</em> where $P$ is true, $Q$ will also be true -- there's a quantification built into the meta-logical concept of "logically valid".</p>
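<p>That last point is easy to see mechanically. A small sketch (function names mine) that treats a valuation as an assignment of booleans and "logically valid" as "true under every valuation":</p>

```python
from itertools import product

def implies(p, q):
    # the truth table of "=>": false only when p is true and q is false
    return (not p) or q

def valid(formula, nvars):
    # logically valid = true under every valuation
    return all(formula(*vals) for vals in product([False, True], repeat=nvars))
```

<p>For example, <code>valid(lambda p, q: implies(p and q, p), 2)</code> holds, while <code>valid(lambda p, q: implies(p, q), 2)</code> fails at the valuation where $p$ is true and $q$ is false: the quantification over valuations, not the connective alone, carries the logical content.</p>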
|
1,556,298 | <p>If we have $p\implies q$, then the only case the logical value of this implication is false is when $p$ is true, but $q$ false.</p>
<p>So suppose I have a broken soda machine - it will never give me any can of coke, no matter if I insert some coins in it or not.</p>
<p>Let $p$ be 'I insert a coin', and $q$ - 'I get a can of coke'.</p>
<p>So even though the logical value of $p \implies q$ is true (when $p$ and $q$ are false), it doesn't mean the implication itself is true, right? As I said, $p \implies q$ has a logical value $1$, but implication is true when it matches the truth table of implication. And in this case, it won't, because $p \implies q$ is <strong>false</strong> for true $p$ (the machine doesn't work).</p>
<p>That's why I think it's not right to say the implication is true based on only one row of the truth table. Does it make sense?</p>
| CiaPan | 152,299 | <p>The fact that some specific values satisfy the formula doesn't mean the formula is true in general. "It is noon now and it rains" is true right now and right here, but at another place or at another time it will turn out false.</p>
<p>Your implication will turn out true only if you check that it remains satisfied under every possible combination of its components' values.</p>
|
539,448 | <p>$a,b,c,d,e>0$. Show that</p>
<p>$$ a^{b+c+d+e}+ b^{c+d+e+a}+c^{ d+e+a+b}+ d^{e+a+b+c}+e^{a+b+c+d}>1$$ </p>
| math110 | 58,742 | <p>Oh, I asked my teacher (tian27546), and he told me this is his inequality: <a href="http://www.artofproblemsolving.com/Forum/viewtopic.php?f=52&t=484816&p=2718780#p2718780" rel="nofollow">http://www.artofproblemsolving.com/Forum/viewtopic.php?f=52&t=484816&p=2718780#p2718780</a></p>
|
1,734,419 | <p>I have tried to show that $2730 \mid n^{13}-n$ using Fermat's little theorem, but I can't succeed, or at least write $2730$ in the form $n^p-n$.</p>
<p><strong>My question here</strong>: How do I show that $2730$ divides $n^{13}-n$ for every integer $n$?</p>
<p>Thank you for any help </p>
| Quentchen | 325,651 | <p>First you calculate the prime factorisation of $2730$. You will find that it splits into 5 prime factors, $2730 = 2\cdot3\cdot5\cdot7\cdot13$. Then use the Chinese Remainder theorem and show that $n^{13}\equiv n \pmod{p}$ for each of the prime factors $p$. </p>
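<p>The claim is also cheap to verify directly, since $p-1$ divides $12$ for each prime factor $p$, which is exactly what makes Fermat's little theorem apply to the exponent $13$ (a brute-force sketch, names mine):</p>

```python
primes = [2, 3, 5, 7, 13]                 # 2730 = 2 * 3 * 5 * 7 * 13

# Fermat gives n^p = n (mod p); because p - 1 divides 12 for each p here,
# n^13 = n (mod p) for every n, and the CRT combines the five congruences
def check(limit):
    return all(pow(n, 13, 2730) == n % 2730 for n in range(limit))
```
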
|
8,695 | <p>I have a parametric plot showing a path of an object in x and y (position), where each is a function of t (time), on which I would like to put a time tick, every second let's say. This would be to indicate where the object is moving fast (widely spaced ticks) or slow (closely spaced ticks). Each tick would just be short line that crosses the plot at that point in time, where that short line is normal to the plotted curve at that location.</p>
<p>I'm sure I can figure out a way to do it using lots of calculations and graphics primitives, but I'm wondering if there is something built-in that I have missed in the documentation that would make this easier.</p>
<p>(Note: this is about ticks on the plotted curve itself -- this doesn't have anything to do with the ticks on the axes or frame.)</p>
| s0rce | 65 | <p>I couldn't get nice little lines so I've used filled circles instead. I hope this works.</p>
<p>I've spaced the points out by Pi/4 on the curve in this example.</p>
<pre><code>f = {Sin[#], Sin[2 #]} &
Show[
ParametricPlot[f[u], {u, 0, 2 \[Pi]}],
ListPlot[f /@ Range[0, 2 \[Pi], \[Pi]/4],
PlotStyle -> Directive[PointSize[0.02], Black]]
]
</code></pre>
<p><img src="https://i.imgur.com/nppKs.png" alt="Mathematica graphic"></p>
|
974,656 | <p><img src="https://i.stack.imgur.com/LyqzL.jpg" alt="enter image description here"></p>
<p>One way to solve this and my book has done it is by : </p>
<p><img src="https://i.stack.imgur.com/2wYSn.jpg" alt="enter image description here"></p>
<hr>
<p>This is a well known way, but I have a different method, and it seems logical to me (but I don't know what the mistake is). And yes it's wrong, but I don't understand what's wrong with my following method :</p>
<p>For 1 component the mean is $E(X)=2.5$ so for 5 components it's : </p>
<p>$$E(5X)=5E(X)=5(2.5)=12.5$$</p>
<p>So for 5 items we can say : </p>
<p>$$\lambda= 1/E(5X)= 1/12.5$$</p>
<p>$$X ~ Expo (1/12.5)$$</p>
<p>$$P(T \geq 3)=1-e^{-3/12.5}$$
$$P(T \geq3)=0.21$$</p>
<p>Which is not the same as in the book. Please help me, what is wrong with my method.</p>
| André Nicolas | 6,312 | <p>Without loss of generality we may assume that $a_n\gt 0$. For if it is not, we can multiply $P(x)$ by $-1$ wiithout changing the roots. Then $a_0\lt 0$.</p>
<p>Note that $P(0)\lt 0$. If we can show that $P(a)\gt 0$ for some positive $a$, it will follow by the Intermediate Value Theorem that $P(x)=0$ for some $x$ between $0$ and $a$, that is, for some positive $x$.</p>
<p>By dividing top and bottom of
$$\frac{a_nx^n+a_{n-1}x^{n-1}+\cdots+a_0}{a_nx^n}$$ by $a_nx^n$ we can show that
$$\lim_{x\to\infty} \frac{P(x)}{a_nx^n}=1.\tag{1}$$
In particular, if $x$ is large enough positive, $P(x)$ is positive, so there is a positive $a$ such that $P(a)\gt 0$.</p>
<p>We now show that there is a negative number $-b$ such that $P(-b)\gt 0$. That will imply that $P(x)=0$ has a root between $-b$ and $0$, that is, a negative root.
An argument essentially identical to the argument we used for (1) shows that
$$\lim_{x\to-\infty} \frac{P(x)}{a_nx^n}=1.\tag{2}$$
Since $n$ is even, for negative $x$ we have that $a_nx^n$ is positive. So if $x$ is large enough negative, by (2) $P(x)$ is positive. This completes the proof.</p>
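<p>The sign-change argument is also constructive: bisection on any interval where $P$ changes sign converges to a root. A sketch with a sample even-degree polynomial satisfying the hypotheses ($a_n = 2 > 0$, $a_0 = -5 < 0$; the example polynomial is my own):</p>

```python
def bisect_root(P, lo, hi, iters=80):
    # P(lo) and P(hi) must have opposite signs; the Intermediate Value
    # Theorem then guarantees a root in between
    assert P(lo) * P(hi) < 0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if P(lo) * P(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

P = lambda x: 2 * x**4 + 3 * x**3 - 5      # even degree, a_0 < 0 < a_n
pos = bisect_root(P, 0.0, 2.0)             # P(0) = -5 < 0 < 51 = P(2)
neg = bisect_root(P, -2.0, 0.0)            # P(-2) = 3 > 0 > -5 = P(0)
```

<p>Here the positive root happens to be exactly $x=1$, and the negative root lies near $-1.88$, matching the two sign changes the proof predicts.</p>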
|
28,532 | <p><code>MapIndexed</code> is a very handy built-in function. Suppose that I have the following list, called <code>list</code>:</p>
<pre><code>list = {10, 20, 30, 40};
</code></pre>
<p>I can use <code>MapIndexed</code> to map an arbitrary function <code>f</code> across <code>list</code>:</p>
<pre><code>MapIndexed[f, list]
(* {f[10, {1}], f[20, {2}], f[30, {3}], f[40, {4}]} *)
</code></pre>
<p>where the second argument to <code>f</code> is the part specification of each element of the list.</p>
<p>But, now, what if I would like to use <code>MapIndexed</code> only at certain elements? Suppose, for example, that I want to apply <code>MapIndexed</code> to only the second and third elements of <code>list</code>, obtaining the following:</p>
<pre><code>{10, f[20, {2}], f[30, {3}], 40}
</code></pre>
<p>Unfortunately, there is no built-in "<code>MapAtIndexed</code>", as far as I can tell. What is a simple way to accomplish this? Thanks for your time.</p>
| amr | 950 | <p>Here's a form similar to Kuba's approach:</p>
<pre><code>mapAtIndexed[f_, list_, pos_] :=
ReplacePart[list, # :> f[list[[Sequence @@ #]], #] & /@ pos];
</code></pre>
<p>A pure pattern version:</p>
<pre><code>mapAtIndexed[f_, list_, pos_] :=
ReplacePart[list,
i : (Alternatives @@ pos) :> f[list[[Sequence @@ i]], i]];
</code></pre>
<p>And I assume you're familiar with <code>Position</code>.</p>
|
2,337,583 | <p>I cannot understand the inductive dimension properly. I read something on Google, but mostly there are only conditions or properties, not a definition. I got to know about it from the book “The Fractal Geometry of Nature”. (I am a 12th grader.)</p>
| Theo Bendit | 248,286 | <p>It's a recursive definition. We define all the spaces of dimension $-1$ (we define there to be only one: the empty set with its one and only topology). We then define the spaces of dimension $0$ to be the spaces which have the following properties:</p>
<ul>
<li>It's not of dimension $-1$ (i.e. non-empty), and</li>
<li>Every open subset of the space contains a (typically smaller) open subset, whose boundary is contained in the first open set, but whose boundary has dimension $-1$ (i.e. has empty boundary).</li>
</ul>
<p>Dimension $1$ spaces are defined similarly, with $-1$ replaced with $0$, etc, defining spaces of all integer dimensions. Not every space is covered by this recursive definition, so we define their inductive dimension to be $\infty$.</p>
<p>As an example, take the real line. To show it has dimension $1$, we should first show it does not have dimension $-1$ or $0$.</p>
<p>Showing it doesn't have dimension $-1$ is easy, since it's non-empty. Suppose it had dimension $0$. Then if I take an open set, say $(0, 1)$ for example, I should be able to find a non-empty open subset of $(0, 1)$ with empty boundary. Suppose $U \subseteq (0, 1)$ is such a set. Then $U$ is bounded and non-empty, so we must have a supremum $\alpha$ of $U$. But then $\alpha$ lies in the boundary of $U$, so the boundary isn't empty after all. Thus $\mathbb{R}$ is not $0$-dimensional.</p>
<p>Let's verify the second property. Take an arbitrary open subset $V \subseteq R$. Then $V$ must contain an open interval $I = (\alpha, \beta)$, and by shrinking the open interval as necessary, I can assume without loss of generality that $\alpha, \beta \in V$. Note that the boundary of $I$ is $\lbrace \alpha, \beta \rbrace$. I claim that this is a $0$-dimensional subspace of $\mathbb{R}$.</p>
<p>Note that $X := \lbrace \alpha, \beta \rbrace$ is not empty, so it's not $-1$-dimensional. The subspace topology on $X$ is discrete, so the full list of open sets are $\emptyset, \lbrace \alpha \rbrace, \lbrace \beta \rbrace, \lbrace \alpha, \beta \rbrace$, all of which have empty boundaries. So, $X$ has dimension $0$.</p>
<p>Thus, I have shown, by definition, that $\mathbb{R}$ has dimension $1$.</p>
|
325,964 | <p>This question may be trivial, or overly optimistic. I do not know (but I guess the latter...). I am a group theorist by trade, and the set-up I describe cropped up in something I want to prove. So this question is out of my comfort zone, but I am happy to clarify anything if needed.</p>
<p>I have a countable set <span class="math-container">$S$</span> equipped with a partial order <span class="math-container">$<$</span> and a minimum element <span class="math-container">$0$</span> (so <span class="math-container">$0<x$</span> for all <span class="math-container">$x\in S\setminus\{0\}$</span>). I want to perform induction on chains which contain <span class="math-container">$0$</span>, so <span class="math-container">$0<x<\dotsb<y<z$</span>, in this order. As in: if property <span class="math-container">$P$</span> holds for <span class="math-container">$0, x, \dotsc, y$</span> then <span class="math-container">$P$</span> holds for <span class="math-container">$z$</span>. Obviously I can perform induction on a finite chain. My question is:</p>
<blockquote>
<p>What are my options if I want to perform induction on infinite chains?</p>
</blockquote>
<p>I <em>believe</em> one option would be transfinite induction, and in order to apply this I would need to prove that every chain not containing <span class="math-container">$0$</span> has a minimum element. But this condition on chains is unlikely to hold in my setting. So I am wondering: do I have any other options?</p>
<p>[An example to keep in mind is the chain with elements from <span class="math-container">$\{2^n\mid n\leq m\}\cup\{0\}$</span> for some fixed integer <span class="math-container">$m$</span>, with the natural ordering inherited from <span class="math-container">$\mathbb{Q}$</span>. So the chain <span class="math-container">$0<\dotsb<2^{m-1}< 2^m$</span>. This example makes me think the question is not trivial - standard induction will not work.]</p>
| Dirk | 109,932 | <p>You can't just do induction on chains without adding extra info. If you could, I could give you the chain <span class="math-container">$0 < z$</span> and thus see without much inductive work that the property holds. To have a chance at classical induction, you need a concept of successor, i.e. a function <span class="math-container">$f : S \to S$</span>, such that for every two elements <span class="math-container">$a,b \in S$</span> with <span class="math-container">$a < b$</span>, you can construct a chain <span class="math-container">$a = a_0$</span>, <span class="math-container">$a_{i+1} = f(a_i)$</span> and <span class="math-container">$a_m = b$</span> for a finite <span class="math-container">$m$</span>.<br>
As you come from group theory, try to think about composition series. You might be able to prove a result along such a series, but if you consider an arbitrary sequence of subgroups, things get much harder, perhaps even impossible.</p>
<p>For completeness' sake, note that multiple functions <span class="math-container">$f$</span> are allowed. This plays a role, for example, in context-free languages and regular expressions: you show that the desired property holds for the starting element(s) and that every rule of the language preserves it, hence it holds for the whole language.</p>
<p>Regarding transfinite induction, we normally assume that a property holds for all <span class="math-container">$y < z$</span>. That is much stronger than just assuming that it holds along a single chain (even if it is a fine one) up to <span class="math-container">$z$</span>.</p>
<p>Unfortunately I don't fully understand your example. The set you have given contains all <span class="math-container">$2^n$</span> without any restrictions imposed by <span class="math-container">$m$</span> (the restriction <span class="math-container">$n \leq m$</span> is void as both <span class="math-container">$n$</span> and <span class="math-container">$m$</span> run through all of <span class="math-container">$\mathbb{Z}$</span>), so the set is just <span class="math-container">$\{2^n \mid n \in \mathbb{Z}\}$</span>. Of course standard induction from small to big will not work here, as you don't have a smallest element in <span class="math-container">$\mathbb{Z}$</span>, but two inductions, starting at <span class="math-container">$n = 0$</span> and going up and down, should do the trick, depending on the property you want to show.</p>
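The successor-function requirement can be made concrete with a small sketch (all names here are invented for illustration, not part of the answer): classical induction along a chain generated by $f$ is just iterating $f$ from $a$ to $b$ and checking that the property propagates one step at a time.

```python
# Illustrative sketch (invented names): classical induction along a
# chain generated by a successor function f.  We check that a property
# P holds at the base element a and is preserved by each f-step until
# the chain reaches b.

def holds_along_chain(a, b, f, base_case, step):
    """Return True if P(a) holds and the inductive step P(x) => P(f(x))
    succeeds along the chain a, f(a), f(f(a)), ..., b."""
    if not base_case(a):
        return False
    x = a
    while x != b:
        nxt = f(x)
        if not step(x, nxt):   # inductive step fails at x -> f(x)
            return False
        x = nxt
    return True

# Toy chain 1 < 2 < 4 < ... < 2^10 with successor f(x) = 2x and
# P(x) = "x is a power of two".
is_pow2 = lambda x: x > 0 and x & (x - 1) == 0
print(holds_along_chain(1, 2 ** 10, lambda x: 2 * x,
                        base_case=is_pow2,
                        step=lambda x, y: is_pow2(x) and is_pow2(y)))
# prints True
```

Without such an $f$ connecting $a$ to $b$ in finitely many steps, there is no chain to walk, which is exactly the obstruction described above.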
|
325,964 | <p>This question may be trivial, or overly optimistic. I do not know (but I guess the latter...). I am a group theorist by trade, and the set-up I describe cropped up in something I want to prove. So this question is out of my comfort zone, but I am happy to clarify anything if needed.</p>
<p>I have a countable set <span class="math-container">$S$</span> equipped with a partial order <span class="math-container">$<$</span> and a minimum element <span class="math-container">$0$</span> (so <span class="math-container">$0<x$</span> for all <span class="math-container">$x\in S\setminus\{0\}$</span>). I want to perform induction on chains which contain <span class="math-container">$0$</span>, so <span class="math-container">$0<x<\dotsb<y<z$</span>, in this order. As in: if property <span class="math-container">$P$</span> holds for <span class="math-container">$0, x, \dotsc, y$</span> then <span class="math-container">$P$</span> holds for <span class="math-container">$z$</span>. Obviously I can perform induction on a finite chain. My question is:</p>
<blockquote>
<p>What are my options if I want to perform induction on infinite chains?</p>
</blockquote>
<p>I <em>believe</em> one option would be transfinite induction, and in order to apply this I would need to prove that every chain not containing <span class="math-container">$0$</span> has a minimum element. But this condition on chains is unlikely to hold in my setting. So I am wondering: do I have any other options?</p>
<p>[An example to keep in mind is the chain with elements from <span class="math-container">$\{2^n\mid n\leq m\}\cup\{0\}$</span> for some fixed integer <span class="math-container">$m$</span>, with the natural ordering inherited from <span class="math-container">$\mathbb{Q}$</span>. So the chain <span class="math-container">$0<\dotsb<2^{m-1}< 2^m$</span>. This example makes me think the question is not trivial - standard induction will not work.]</p>
| Andrej Bauer | 1,176 | <p>You seem to be asking about <em>well-founded induction</em>. It generalizes many forms of induction, including the usual induction on numbers and transfinite induction on ordinals.</p>
<p>Consider a relation <span class="math-container">$<$</span> on a set <span class="math-container">$A$</span>. Say that <span class="math-container">$S \subseteq A$</span> is <em><span class="math-container">$<$</span>-progressive</em> when, for all <span class="math-container">$x \in A$</span>,
<span class="math-container">$$(\forall y < x \,.\, y \in S) \Rightarrow x \in S.$$</span>
In words, an element is in <span class="math-container">$S$</span> as soon as all of its predecessors are.
There is a logical counter-part: say that <span class="math-container">$\phi$</span> is a property of elements of <span class="math-container">$A$</span>, then <span class="math-container">$\phi$</span> is <em><span class="math-container">$<$</span>-progressive</em> when, for all <span class="math-container">$x \in A$</span>,
<span class="math-container">$$(\forall y < x \,.\, \phi(y)) \Rightarrow \phi(x).$$</span></p>
<p>A <em>well-founded</em> relation is a relation <span class="math-container">$<$</span> on a set <span class="math-container">$A$</span> such that, if <span class="math-container">$S \subseteq A$</span> is <span class="math-container">$<$</span>-progressive then <span class="math-container">$S = A$</span>. A well-founded relation enjoys the following induction principle: <em>If <span class="math-container">$\phi$</span> is a <span class="math-container">$<$</span>-progressive property then <span class="math-container">$\phi(x)$</span> holds for all <span class="math-container">$x \in A$</span>.</em> In fact, the induction principle is just a reformulation of the definition of well-foundedness.</p>
<p>We have the following characterization:</p>
<p><strong>Theorem.</strong> Let <span class="math-container">$<$</span> be relation on <span class="math-container">$A$</span>. The following are equivalent:</p>
<ol>
<li><span class="math-container">$<$</span> is well-founded,</li>
<li>every nonempty subset <span class="math-container">$S \subseteq A$</span> has a <span class="math-container">$<$</span>-minimal element,</li>
<li>there are no infinite descending chains <span class="math-container">$\cdots < x_3 < x_2 < x_1$</span> in <span class="math-container">$A$</span>.</li>
</ol>
<p>To summarize, a relation <span class="math-container">$<$</span> without infinite descending chains gives us the following induction principle: <em>Suppose that for every <span class="math-container">$x \in A$</span> we have <span class="math-container">$(\forall y < x . \phi(y)) \Rightarrow \phi(x)$</span>. Then <span class="math-container">$\forall z \in A. \phi(z)$</span>.</em></p>
<p>The descending chain condition is useful for figuring out whether induction is valid. For example, we cannot use induction on <span class="math-container">$A = \{0\} \cup \{2^{-m} \mid m \in \mathbb{N}\}$</span> when we order <span class="math-container">$A$</span> using <span class="math-container">$<$</span>, but we can if we order it with <span class="math-container">$>$</span>.</p>
<p>A final remark: a linearly ordered well-founded relation is just a well-ordered relation. Induction on well-ordered relations is a bit more familiar, as it is just ordinal induction.</p>
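For finite relations, condition 3 of the theorem is decidable: an infinite descending chain in a finite set must revisit an element, i.e. produce a cycle, so well-foundedness reduces to acyclicity of the graph of $<$. A sketch (illustrative names, not from the answer):

```python
# Sketch: for a FINITE relation, "no infinite descending chain" is
# equivalent to "the graph of < is acyclic", so well-foundedness can
# be checked by DFS cycle detection.  Names are illustrative.

def is_well_founded(elements, lt):
    """lt(y, x) means y < x.  Detect a descending cycle via DFS."""
    elements = list(elements)
    preds = {x: [y for y in elements if lt(y, x)] for x in elements}
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {x: WHITE for x in elements}

    def dfs(x):
        color[x] = GRAY
        for y in preds[x]:
            if color[y] == GRAY:          # back edge: cycle x > ... > x
                return False
            if color[y] == WHITE and not dfs(y):
                return False
        color[x] = BLACK
        return True

    return all(dfs(x) for x in elements if color[x] == WHITE)

# Strict divisibility on {1, ..., 12} is well-founded ...
print(is_well_founded(range(1, 13), lambda y, x: y != x and x % y == 0))

# ... but a relation with a cycle 0 < 1 < 2 < 0 is not.
cyc = {(0, 1), (1, 2), (2, 0)}
print(is_well_founded(range(3), lambda y, x: (y, x) in cyc))
```

On infinite sets no such algorithm exists, of course; there the descending-chain condition is something one proves, as in the theorem above.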
|
43,172 | <p>I am trying to solve $\frac{dx}{dt} + \alpha x = 1$, $x(0) = 2$, $\alpha > 0$ where $\alpha$ is a constant. </p>
<p>[some very badly done mathematics deleted]</p>
<p>Continuing with Gerry's suggestion:</p>
<p>$\log|1-\alpha x | = -t\alpha + \log|1-2\alpha|$</p>
<p>$1-\alpha x = e^{-t\alpha}(1-2\alpha)$</p>
<p>$x(t) = \frac{1 - e^{-t\alpha}(1-2\alpha)}{\alpha}$</p>
<p>Then, for the asymptotic behaviour of $x(t)$ as $t$ goes to infinity: $e^{-t\alpha}$ approaches zero, so $x(t)$ approaches $\frac{1}{\alpha}$. </p>
| Gerry Myerson | 8,269 | <p>joriki's approach is fine. Alternatively, the equation is "variables separable" and can be solved by rewriting as $${1\over1-\alpha x}\,dx=dt$$ and then integrating; $$\int{1\over1-\alpha x}\,dx=\int\,dt,\qquad -{1\over\alpha}\log|1-\alpha x|=t+C$$ stick in $t=0$ to get $$C=-{1\over\alpha}\log|1-2\alpha|,\qquad -{1\over\alpha}\log|1-\alpha x|=t-{1\over\alpha}\log|1-2\alpha|$$ and now solve for $x$. </p>
|
464,426 | <p>Find the limit of $$\lim_{x\to 1}\frac{x^{1/5}-1}{x^{1/3}-1}$$</p>
<p>How should I approach it? I tried to use L'Hopital's Rule but it just keeps giving me 0/0.</p>
| Amr | 29,267 | <p><strong>Hint:</strong> $$\frac{x^{\frac{1}{5}}-1}{x^{\frac{1}{3}}-1}=\frac{(x^{\frac{1}{15}})^3-1}{(x^{\frac{1}{15}})^5-1}=\frac{((x^{\frac{1}{15}})-1)(x^{\frac{1}{15}})^2+(x^{\frac{1}{15}})+1)}{((x^{\frac{1}{15}})-1)((x^{\frac{1}{15}})^4+(x^{\frac{1}{15}})^3+(x^{\frac{1}{15}})^2+(x^{\frac{1}{15}})+1)}$$</p>
<p>This is equal to:
$$\frac{(x^{\frac{1}{15}})^2+(x^{\frac{1}{15}})+1}{(x^{\frac{1}{15}})^4+(x^{\frac{1}{15}})^3+(x^{\frac{1}{15}})^2+(x^{\frac{1}{15}})+1}$$</p>
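The hint's conclusion can be confirmed numerically (a quick check of my own): substituting $x = 1$ into the simplified quotient gives $\frac{1+1+1}{1+1+1+1+1} = \frac{3}{5}$, and evaluating the original expression near $1$ agrees.

```python
# Original quotient; as x -> 1 it should approach 3/5 = 0.6.
f = lambda x: (x ** (1 / 5) - 1) / (x ** (1 / 3) - 1)

for x in (1.1, 1.01, 1.0001):
    print(x, f(x))          # values approach 0.6
print(3 / 5)
```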
|