| qid | question | author | author_id | answer |
|---|---|---|---|---|
23,471 | <p>I'm trying to find an explanation for the different sizes I'm seeing for fonts added to graphics in different ways, and haven't yet located an easy-to-understand explanation. Here's a minimal example:</p>
<pre><code>Graphics[
{LightGray,
Rectangle[{0, 0}, {72, 72}],
Red,
Style[
Text["Hig", {0, 0}, {-1, -1}], 72,
FontFamily -> "Times New Roman"],
Black,
First[
First[
ImportString[
ExportString[
Style["Hig",
FontFamily -> "Times New Roman",
FontSize -> 72], "PDF"],If[$VersionNumber>=13,{"PDF","PageGraphics"},"PDF"],
"TextMode" -> "Outlines"]]]
},
PlotRange -> {{0, 100}, {0, 100}},
Axes -> True,
Ticks -> {Table[x, {x, 0, 100, 10}], Table[x, {x, 0, 100, 10}]},
Epilog -> {Text["72", {200, 75}], Line[{{0, 72}, {200, 72}}]}]
</code></pre>
<p><img src="https://i.stack.imgur.com/l2wT7.png" alt="fonts" /></p>
<p>The red text doesn't change size when you resize the graphic, although everything else changes. The black text resizes along with everything else. But neither text seems to be the result of specifying 72.</p>
<h2>Update</h2>
<p>After changing the screen resolution to Automatic following @Sjoerd's suggestion, I can see how the red text is basically displaying at a fixed size that's independent of <em>Mathematica</em>. In this picture, the left image shows 72 dpi, where the red font uses 70-ish pixels of vertical space. The right image is at 'Automatic' (so presumably 133.51 for my machine, according to <a href="http://members.ping.de/%7Esven/dpi.html" rel="nofollow noreferrer">this site</a>), and it similarly uses up 70-ish pixels.</p>
<p><img src="https://i.stack.imgur.com/Qjbuv.png" alt="resolutions" /></p>
<p>I'm still puzzled by the size of the black font, which doesn't seem to be related to the specified font size or to the screen resolution. Perhaps the PDF translation introduces another scaling factor.</p>
| Sjoerd C. de Vries | 57 | <p>I will not duplicate geordie's explanation of the scaling-with-graphics-resize part of the question.</p>
<p>The reason the displayed font looks too small is the setting of the "ScreenResolution" option (part of <code>FontProperties</code>) to 72, which used to be the default for decades, but is incorrect for most screens nowadays.</p>
<p>If you set it to <code>Automatic</code> (or perhaps the value you know is correct for your display)</p>
<pre><code>SetOptions[$FrontEnd, FontProperties -> {"ScreenResolution" -> Automatic}]
</code></pre>
<p>you'll get a better match. Compare the default setting </p>
<p><img src="https://i.stack.imgur.com/notPK.png" alt="enter image description here"></p>
<p>with the <code>Automatic</code> one on my display</p>
<p><img src="https://i.stack.imgur.com/NY2GY.png" alt="enter image description here"></p>
|
216,031 | <p>Using image analysis, I have found the positions of a circular ring and imported them as <code>xx</code> and <code>yy</code> coordinates. I am using <code>ListInterpolation</code> to interpolate the data:</p>
<pre><code>xi = ListInterpolation[xx, {0, 1}, InterpolationOrder -> 4, PeriodicInterpolation -> True, Method -> "Spline"];
yi = ListInterpolation[yy, {0, 1}, InterpolationOrder -> 4, PeriodicInterpolation -> True, Method -> "Spline"];
</code></pre>
<p>I plot the results as:</p>
<pre><code>splinePlot = ParametricPlot[{xi[s], yi[s]} , {s, 0, 1}, PlotStyle -> {Red}]
</code></pre>
<p>and the result looks like: </p>
<p><a href="https://i.stack.imgur.com/c2wRi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/c2wRi.png" alt="interpolation of data"></a></p>
<p>I am trying to study this shape as it deforms, and I will need to look at the derivatives of this interpolation (notably, second derivatives). I know that there are physical constraints that will not let the local curvature at any point be <strong>larger</strong> than, for example, <code>1/10</code> in the units shown (so, a radius of curvature of <code>10</code>). <strong>Is there a way that I can constrain the interpolation so that the local curvature never exceeds a given value?</strong></p>
<p>Here is the data (Dropbox Link): <a href="https://www.dropbox.com/s/g9vajch0obbcplk/testShape.csv?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/s/g9vajch0obbcplk/testShape.csv?dl=0</a></p>
| Michael E2 | 4,999 | <p>An approach to this problem may be found by searching for "person curve" on this site. In particular, I will adapt <a href="https://mathematica.stackexchange.com/a/17780/4999">this answer by @SimonWoods</a> and <a href="https://mathematica.stackexchange.com/a/19200/4999">this answer by @J.M.</a> to <a href="https://mathematica.stackexchange.com/questions/17704/how-to-create-a-new-person-curve">How to create a new "person curve"?</a> @MikeY's approach is fundamentally the same, but the details (esp. polar vs. rectangular coordinates) differ. I might be able to add a curvature-based approach, if I have time.</p>
<pre><code>ClearAll[fourierCoefficients, fourierFun, fourierInterpolate,
truncate];
(* Fourier coefficients of the coordinates of a list of points *)
fourierCoefficients[x_?MatrixQ] :=
fourierCoefficients /@ Transpose@x;
fourierCoefficients[x_?VectorQ] := Module[{fc},
fc = 2 Chop[
Take[Fourier[x, FourierParameters -> {-1, 1}],
Ceiling[Length[x]/2]]];
fc[[1]] /= 2;
fc
];
(* Function from Fourier coefficients
* fourierFun[fc][t] is converted to a symbolic-algebraic expression *)
fourierFun[fc_][t_] := fourierFun[fc, t];
fourierFun[fc_, t_] :=
Abs[#].Cos[Pi (2 Range[0, Length[#] - 1] t - Arg[#]/Pi)] & /@ fc;
(* construct Fourier interpolation *)
fourierInterpolate[x_] := fourierFun[fourierCoefficients@x];
(* truncate a Fourier series equivalent to a least squares projection
* onto the low order subspace *)
truncate[fourierFun[fc_], {order_}] := fourierFun[
   Take[fc, All, Min[1 + order, Length@First@fc]]
   ];
</code></pre>
<p>Example:</p>
<pre><code>xydata = Get["https://pastebin.com/raw/htNS50qt"];
fourierInterpolate[xydata];
{xFN[t_], yFN[t_]} = truncate[fourierInterpolate[xydata], {10}][t]
(*
{113.477 + 32.3787 Cos[π (0.530648 + 2 t)] +
0.495141 Cos[π (0.392001 + 4 t)] +
0.238509 Cos[π (-0.534649 + 6 t)] +
0.258527 Cos[π (-0.288496 + 8 t)] +
0.344561 Cos[π (-0.404418 + 10 t)] +
0.119943 Cos[π (-0.576067 + 12 t)] +
0.0223725 Cos[π (-0.852096 + 14 t)] +
0.0350636 Cos[π (0.117051 + 16 t)] +
0.0514668 Cos[π (0.516506 + 18 t)] +
0.0284801 Cos[π (0.9183 + 20 t)],
203.572 + 33.1083 Cos[π (-0.973853 + 2 t)] +
0.23916 Cos[π (0.619213 + 4 t)] +
0.361602 Cos[π (-0.905991 + 6 t)] +
0.135998 Cos[π (0.511184 + 8 t)] +
0.341424 Cos[π (0.136349 + 10 t)] +
0.0831821 Cos[π (-0.066907 + 12 t)] +
0.0387512 Cos[π (0.137154 + 14 t)] +
0.0488503 Cos[π (-0.0273644 + 16 t)] +
0.0234458 Cos[π (-0.377519 + 18 t)] +
0.0300021 Cos[π (0.601693 + 20 t)]}
*)
ParametricPlot[{xFN[t], yFN[t]}, {t, 0, 1}]
</code></pre>
<p><img src="https://i.stack.imgur.com/vrbA0.png" width="300"></p>
|
2,278,018 | <p>Let $\;\;\displaystyle \sum_{n=1}^\infty U_n\;$ be a divergent series of positive real numbers.</p>
<p>Then, show that the series $\;\displaystyle\sum_{n=1}^\infty \dfrac{U_n}{1+U_n}\;$ is divergent.</p>
<p>Is there is any easy method to prove it? </p>
| hamam_Abdallah | 369,188 | <p>Suppose that $U_n\geq 0$ and that $\sum \frac {U_n}{1+U_n}$ is convergent.</p>
<p>$$\begin{aligned}\sum\left(1-\frac {1}{1+U_n}\right)\ \text{convergent}
&\Rightarrow\lim_{n\to+\infty}\left(1-\frac {1}{1+U_n}\right)=0\\
&\Rightarrow\lim_{n\to+\infty} U_n=0\\
&\Rightarrow \frac {U_n}{1+U_n}\sim U_n.
\end{aligned}$$
By the limit comparison test, $\sum U_n$ would then be convergent, contradicting the hypothesis that it diverges. Hence $\sum \frac {U_n}{1+U_n}$ must be divergent.</p>
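A quick sanity check of the statement on a concrete series (my addition, not part of the original answer):

```latex
% Take U_n = 1/n, so \sum U_n is the divergent harmonic series. Then
\sum_{n=1}^\infty \frac{U_n}{1+U_n}
  = \sum_{n=1}^\infty \frac{1/n}{1+1/n}
  = \sum_{n=1}^\infty \frac{1}{n+1},
% the harmonic series with its first term dropped, which indeed diverges.
```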
|
1,868,797 | <blockquote>
<p><strong>Question:-</strong></p>
<p>Three points represented by the complex numbers $a,b$ and $c$ lie on a circle with center $O$ and radius $r$. The tangent at $c$ cuts the chord joining the points $a$ and $b$ at $z$. Show that $$z=\dfrac{a^{-1}+b^{-1}-2c^{-1}}{a^{-1}b^{-1}-c^{-2}}$$</p>
</blockquote>
<hr>
<p><strong>Attempt at a solution:-</strong> To simplify our problem let $O$ be the origin, then the equation of circle becomes $|z|=r$.</p>
<p>Now, the equation of chord passing through $a$ and $b$ can be given by the following determinant</p>
<p>$$\begin{vmatrix}
z & \overline{z} & 1 \\
a & \overline{a} & 1 \\
b & \overline{b} & 1 \\
\end{vmatrix}= 0$$
which simplifies to $$z(\overline{a}-\overline{b})-\overline{z}(a-b)+(\overline{a}b-a\overline{b})=0 \tag{1}$$</p>
<p>Now, for the equation of the tangent through $c$, I used the cartesian equation of tangent to a circle $xx_1+yy_1=r^2$ from which I got $$z\overline{c}+\overline{z}c=2r^2\tag{2}$$</p>
<p>Now, from equation $(1)$, we get
$$\overline{z}=\dfrac{z\left(\overline{a}-\overline{b}\right)+\left(a\overline{b}-\overline{a}b\right)}{(a-b)}$$</p>
<p>Putting this in equation $(2)$, we get $$z=\dfrac{2r^2(a-b)+\left(a\overline{b}-\overline{a}b\right)c}{\left(a\overline{c}+\overline{a}c\right)-\left(b\overline{c}+\overline{b}c\right)}$$</p>
<blockquote>
<p>After this I am not able to get to anything of much value, so your help would be appreciated. And as always, more solutions are welcomed.</p>
</blockquote>
| Kemei Kimutai | 1,042,350 | <p>The above question is not degenerate since, if you work this out by the simplex method, <span class="math-container">$X_1$</span> is not a basic variable, so it can take a value <span class="math-container">$\geq 0$</span>. In the final tableau, the solution could be degenerate if either <span class="math-container">$X_2$</span> or <span class="math-container">$S_2$</span> were equal to 0.</p>
|
103,545 | <p>It is well-known that, given a normalized eigenform $f=\sum a_n q^n$, its coefficients $a_n$ generate a number field $K_f$. </p>
<p>In their 1995 <a href="http://www.math.mcgill.ca/darmon/pub/Articles/Expository/05.DDT/paper.pdf">paper</a> "Fermat's Last Theorem", Darmon, Diamond, and Taylor remark that, at the time of writing, very little was known about what sort of number fields could arise as some $K_f$. They do claim, however, that $K_f$ must be totally real or CM. This claim is made just before Lemma 1.37, on page 40 of the copy I linked to.</p>
<p>This is probably standard knowledge among experts, but I'm having trouble finding a reference, so my questions are:</p>
<p>1) Can someone please provide a reference for this claim?</p>
<p>2) Is this still the state of the art, or do we now know more about what types of fields can appear as $K_f$ for some $f$? What if we restrict our attention to weight $k=2$?</p>
<p>Thank you!</p>
<blockquote>
<p>Edit: In my question, I originally just wrote "modular form" instead of "normalized eigenform". Thanks to @Stopple for pointing this out! Also, I originally claimed the paper was published in 2007, but Kevin Buzzard pointed out it was published in 1995. Thanks Kevin!</p>
</blockquote>
| Stopple | 6,756 | <p>It's not true for any old modular form. Since the forms live in a vector space over $\mathbb C$, you can achieve any complex number as a coefficient.</p>
<p>Here's a partial answer to what is true. You need to have a cusp form that is an eigenfunction of the Hecke operators, normalized so the leading coefficient is $1$. Since the Hecke operators are self-adjoint in the Petersson inner product, the eigenvalues are real, and one can show these are the coefficients in the $q$ expansion as follows: for $p$ prime, the $m$th coefficient of $T_p f$ is $a_{mp}$, for all $m$, more or less from the definition of $T_p$. This is also $\lambda_p a_m$, and from this and $a_1=1$ one deduces $a_p=\lambda_p$ (take $m=1$.) The general case follows from the recursion for powers of primes, and multiplicativity.</p>
<p>This answer is not quite right because it doesn't explain how CM extensions can arise, but it's a start.</p>
|
176,893 | <p>Suppose I have a polynomial
$$
p(x)=\sum_{i=0}^n p_ix^i.
$$
For simplicity furthermore assume $p_n=1$. </p>
<p>As it is well known we may use Gershgorin circles to give an upper bound for the absolute values of the roots of $p(x)$. The theorem states that all roots are contained within a circle with radius
$$
r=\max\{|p_0|, 1+|p_1|,\ldots, 1+|p_{n-1}|\}.
$$</p>
<hr>
<p>Now I wonder if there is something like an inverse of this theorem. Suppose I know that all roots are contained within a circle of radius $r$. Is there anything that can be said about the maximum coefficient, i.e.
$$
\max |p_i|\leq \text{some function of }r?
$$</p>
<p>I would also be grateful for a counterexample.</p>
| Robert Israel | 13,650 | <p>Since the coefficients are $\pm 1$ times sums of products of the roots, this is obvious: for a polynomial of degree $d$ with all roots of absolute value $\le r$,</p>
<p>$$ |p_i| \le {d \choose i} r^{d-i} $$</p>
<p>The maximum of the right side is at $i = \left\lfloor \dfrac{d-r}{1+r} \right\rfloor$ or $\left\lceil \dfrac{d-r}{1+r} \right\rceil$ if $r < d$, or at $i=0$ if $r \ge d$.</p>
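The bound in the display can be spot-checked numerically. The following is a minimal pure-Python sketch (my addition, not part of the original answer; it assumes Python 3.8+ for `math.comb`): sample random roots inside the disk of radius $r$, expand the monic polynomial, and confirm $|p_i| \le \binom{d}{i} r^{d-i}$.

```python
import math
import random

def coeffs_from_roots(roots):
    # Expand prod(x - root). c[i] holds the coefficient of x^i; c[d] = 1 (monic).
    c = [complex(1)]
    for rt in roots:
        c = [complex(0)] + c               # multiply the polynomial by x
        for i in range(len(c) - 1):
            c[i] -= rt * c[i + 1]          # subtract rt times the old polynomial
    return c

random.seed(0)
d, r = 6, 1.5
roots = [complex(rho * math.cos(phi), rho * math.sin(phi))
         for rho, phi in ((r * random.random(), 2 * math.pi * random.random())
                          for _ in range(d))]
c = coeffs_from_roots(roots)
# |p_i| is (+/-) an elementary symmetric function of the roots with
# C(d, i) terms, each of modulus at most r^(d - i)
bound_holds = all(abs(c[i]) <= math.comb(d, i) * r ** (d - i) + 1e-9
                  for i in range(d + 1))
```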
|
1,225,655 | <p>Find the equation of the line that is tangent to the curve at the point $(0,\sqrt{\frac{\pi}{2}})$. Given your answer in slope-intercept form.</p>
<p>I don't know how can I get the tangent line, without a given equation!!, this is part of cal1 classes.</p>
| Emilio Novati | 187,568 | <p>If we suppose that your curve is the graph of a function $y=f(x)$ such that $f(0) = \sqrt{\pi/2}$, then the equation of the tangent at $x=0$ is:</p>
<p>$
y-\sqrt{\pi/2}=f'(0)(x-0)
$</p>
<p>i.e.</p>
<p>$y=f'(0) x+\sqrt{\pi/2}$</p>
|
813,825 | <p>In strong induction, for the induction hypothesis you assume p(k) for all k < n.</p>
<p>If, for example, I am working with trees and not natural numbers, can I still use this style of proof?</p>
<p>For example, if I want my induction hypothesis to be that p(k) holds for all k < n, where n is a node in the tree and everything smaller than/below it (the node's children, k) is assumed to be true.</p>
<p>In the proof would I have to define what the < operator does for two nodes?</p>
| Robert Israel | 8,508 | <p>Yes, strong induction will work on a partially ordered set such as a tree, as long as there are no infinite downward chains
$a_1 > a_2 > a_3 > \ldots$.</p>
|
25,285 | <p>I am just looking for a basic introduction to the Podleś sphere and its topology. All I know is that it's a $q$-deformation of $S^2$. </p>
| mathphysicist | 2,149 | <p>I am not an expert but perhaps the paper <a href="http://www.fsr.ac.ma/GNPHE/ajmpvolume3-2006/ajmp0607.pdf" rel="nofollow">Spectral geometry: case of quantum spheres</a> by Andrzej Sitarz could serve as a reasonably good starting point.</p>
|
1,000,448 | <p>One of the $x$-intercepts of the function $f(x)=ax^2-3x+1$ is at $x=-1$. Determine $a$ and the other $x$-intercept.</p>
<p>I happen to know that $a=-4$ and the other $x$-intercept is at $x=\frac{1}{4}$ but I don't know how to get there. I tried substituting $x=-1$ into the quadratic formula.</p>
<p>$$
-1=\frac{-(-3) \pm \sqrt{(-3)^2-4a}}{2a}
$$</p>
<p>Solving for $a$ I managed to come up with $a=-\frac {5}{2}$ by some convoluted process but obviously that doesn't work.</p>
<p>How do I properly solve for $a$, taking into account the known root of $x$?</p>
| Sammy Black | 6,509 | <p>If $r$ and $s$ are roots of $f(x) = ax^2 + bx + c$ then $x-r$ and $x-s$ are <a href="http://en.wikipedia.org/wiki/Factor_theorem" rel="nofollow">factors</a> of $f(x)$. Therefore,
$$
f(x) = a(x - r)(x - s) = a \bigl( x^2 - (r + s)x + rs \bigr),
$$
so
$$
\left\{
\begin{align}
-a(r + s) &= b \\
ars &= c
\end{align}
\right.
$$
If the coefficients $b$ and $c$ are known, as well as one root $s$, then we have a system of two equations with two unknowns. Multiplying the first equation by $s$ and adding the two equations together yields
$$
-as^2 = bs + c,
$$
so the leading coefficient$\Large^\dagger$ is
$$
a = - \frac{bs + c}{s^2}.
$$
Then the other root is
$$
r = \frac{c}{as} = - \frac{cs}{bs + c}.
$$</p>
<p>$\Large^\dagger$ I am assuming that that $c \ne 0$ and, hence, that neither of the roots were zero. I leave that special (and easier) case to you.</p>
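To connect these formulas back to the original question (my addition, a quick check with exact rational arithmetic), plug in $b=-3$, $c=1$, and the known root $s=-1$:

```python
from fractions import Fraction

b, c = Fraction(-3), Fraction(1)
s = Fraction(-1)                      # the known root x = -1

a = -(b * s + c) / s**2               # leading coefficient from the formula above
r = -(c * s) / (b * s + c)            # the other root

def f(x):
    return a * x**2 + b * x + c       # the reconstructed quadratic
```

This recovers $a=-4$ and $r=\frac14$, in agreement with the values quoted in the question, and both roots satisfy $f(x)=0$ exactly.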
|
481,173 | <p>The most common way to find the inverse matrix is $M^{-1}=\frac1{\det(M)}\mathrm{adj}(M)$. However, it is very troublesome to compute when the matrix is large.</p>
<p>I found a very interesting way to get the inverse matrix, and I want to know why it can be done like this. For example, if you want to find the inverse of $$M=\begin{bmatrix}1 & 2 \\ 3 & 4\end{bmatrix}$$</p>
<p>First, write an identity matrix on the right hand side and carry out some steps:</p>
<p>$$\begin{bmatrix}1 & 2 &1 &0 \\ 3 & 4&0&1\end{bmatrix}\to\begin{bmatrix}1 & 2 &1 &0 \\ 3/2 & 2&0&1/2\end{bmatrix}\to\begin{bmatrix}1/2 & 0 &-1 &1/2 \\ 3/2 & 2&0&1/2\end{bmatrix}\to\begin{bmatrix}3/2 & 0 &-3 &3/2 \\ 3/2 & 2&0&1/2\end{bmatrix}$$
$$\to\begin{bmatrix}3/2 & 0 &-3 &3/2 \\ 0 & 2&3&-1\end{bmatrix}\to\begin{bmatrix}1 & 0 &-2 &1 \\ 0 & 2&3&-1\end{bmatrix}\to\begin{bmatrix}1 & 0 &-2 &1 \\ 0 & 1&3/2&-1/2\end{bmatrix}$$</p>
<p>You can 1. swap any two rows of the matrix, 2. multiply any row by a nonzero constant, 3. add one row to another row, just like when you are doing Gaussian elimination. When the identity matrix has shifted to the left, the right-hand side becomes</p>
<p>$$M^{-1}=\begin{bmatrix}-2 &1 \\3/2&-1/2\end{bmatrix}$$</p>
<p>How can one prove that this method works?</p>
| Brian M. Scott | 12,042 | <p>This is a very standard method; if you discovered it on your own, congratulations! It works because each of the elementary row operations that you’re performing is equivalent to multiplication by an <a href="http://en.wikipedia.org/wiki/Elementary_matrix">elementary matrix</a>. To convert $A$ to $I$, you perform some sequence of elementary row operations, which in effect is multiplying $A$ by a sequence of elementary matrices:</p>
<p>$$I=(E_mE_{m-1}\ldots E_2E_1)A\;,$$</p>
<p>say, if it took $m$ row operations. This says that $E_mE_{m-1}\ldots E_2E_1$ is $A^{-1}$. (Well, actually it says that $E_mE_{m-1}\ldots E_2E_1$ is a left inverse for $A$, but there’s a theorem that says that a left inverse of a square matrix is actually the inverse of the matrix.)</p>
<p>You’ve performed exactly the same operations to $I$ on the other side of the augmented matrix, so on that side you end up with</p>
<p>$$E_mE_{m-1}\ldots E_2E_1I=E_mE_{m-1}\ldots E_2E_1\;,$$</p>
<p>which we just saw is $A^{-1}$.</p>
<p>Thus, if you start with $\begin{bmatrix}A\mid I\end{bmatrix}$, you’re guaranteed to end up with $\begin{bmatrix}I\mid A^{-1}\end{bmatrix}$, exactly as you discovered.</p>
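The whole procedure can be sketched in a few lines of code. Below is a minimal Gauss–Jordan implementation with exact rational arithmetic (my addition, not part of the answer; it assumes the input matrix is square and invertible). Combining row operations 2 and 3 lets each elimination step add a multiple of one row to another.

```python
from fractions import Fraction

def inverse(M):
    # Row-reduce the augmented matrix [M | I]; when the left half has
    # become the identity, the right half is M^-1.
    n = len(M)
    A = [[Fraction(M[i][j]) for j in range(n)]
         + [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = next(row for row in range(col, n) if A[row][col] != 0)  # pick a pivot row
        A[col], A[piv] = A[piv], A[col]                               # op 1: swap rows
        p = A[col][col]
        A[col] = [x / p for x in A[col]]                              # op 2: scale the row
        for row in range(n):
            if row != col and A[row][col] != 0:
                f = A[row][col]
                A[row] = [x - f * y for x, y in zip(A[row], A[col])]  # op 3: eliminate
    return [r[n:] for r in A]

Minv = inverse([[1, 2], [3, 4]])
```

Running it on the matrix from the question reproduces the inverse computed by hand.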
|
3,196,797 | <p>Suppose <span class="math-container">$gcd(m,n)=1$</span>, and let <span class="math-container">$F :Z_n→Z_n$</span> be defined by <span class="math-container">$F([a])=m[a]$</span>. Prove that <span class="math-container">$F$</span> is an automorphism of the additive group <span class="math-container">$Z_n$</span>. I find it difficult to prove that <span class="math-container">$F$</span> is injective and surjective. Could you please help me prove it with all the details? I typed it roughly; I am sorry, and I am sincerely looking for a result.</p>
| Lada Dudnikova | 477,927 | <p>I shall prove two things</p>
<ol>
<li>The only solution of <span class="math-container">$am = 0 \pmod n$</span> is the trivial one </li>
<li>(1) means that the map is injective.</li>
</ol>
<hr>
<p>Let there be a non-zero element s.t. <span class="math-container">$am = 0$</span> modulo <span class="math-container">$n$</span>.
Multiplication by an integer in the group means consecutive addition (subtraction) of the element to itself. Knowing that <span class="math-container">$gcd(m,n) = 1$</span> means that there is a pair <span class="math-container">$(x',y')$</span> such that <span class="math-container">$x'm+y'n = 1 \implies x'm = 1$</span> modulo <span class="math-container">$n$</span></p>
<p>This means <span class="math-container">$1 = a(x'm) = x'(am) = 0 $</span> contradiction.</p>
<hr>
<p>Let the injectivity be violated. Then there are <span class="math-container">$a \neq b$</span> with <span class="math-container">$ma = mb$</span> modulo <span class="math-container">$n$</span></p>
<p>that means <span class="math-container">$m(a-b) = 0$</span> modulo <span class="math-container">$n$</span> with <span class="math-container">$a-b \neq 0$</span>, which is a contradiction with the previous statement.</p>
<p>I admit that I am being somewhat informal in commuting terms without justifying why I can do that in a cyclic group. Do you need it to be justified?</p>
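A brute-force check of the statement (my addition, not part of the answer): enumerate all of $\mathbb Z_n$ for small $n$ and confirm that $[a]\mapsto m[a]$ is an additive bijection exactly when $\gcd(m,n)=1$.

```python
import math

def is_additive_bijection(m, n):
    # F([a]) = m*[a] on Z_n: test bijectivity and the homomorphism property.
    image = {m * a % n for a in range(n)}
    additive = all((m * ((a + b) % n)) % n == (m * a + m * b) % n
                   for a in range(n) for b in range(n))
    return image == set(range(n)) and additive

# F is always additive, so it is an automorphism iff it is a bijection,
# which should happen precisely when gcd(m, n) = 1
agrees = all(is_additive_bijection(m, n) == (math.gcd(m, n) == 1)
             for n in range(2, 16) for m in range(1, n))
```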
|
2,668,826 | <p>I am stuck on this result, which the professor wrote as "trivial", but I don't find a way out.</p>
<p>I have the function </p>
<p>$$f_{\alpha}(t) = \frac{1}{2\pi} \sum_{k = 1}^{+\infty} \frac{1}{k}\int_0^{\pi} (\alpha(p))^k \sin^{2k}(\epsilon(p) t)\ dp$$</p>
<p>and he told us that for $t\to +\infty$ we have:</p>
<p>$$f_{\alpha}(t) = \frac{1}{2\pi} \sum_{k = 1}^{+\infty} \frac{1}{4^k k}\binom{2k}{k}\int_0^{\pi} (\alpha(p))^k\ dp$$</p>
<p>Now, it's all about the sine, since it's the only term with a dependence on $t$. Yet I cannot find a way to send</p>
<p>$$\sin^{2k}(\epsilon(p) t)$$</p>
<p>into</p>
<p>$$\frac{1}{4^k}\binom{2k}{k}$$</p>
<p>Any help? Thank you so much.</p>
<p><strong>More Details</strong></p>
<p>$$\epsilon(p)$$</p>
<p>is a positive, bounded and continuous function.</p>
<p>The "true" starting point was</p>
<p>$$f_{\alpha}(t) = -\frac{1}{2\pi}\int_0^{\pi} \log\left(1 - \alpha(p)\sin^2(\epsilon(p)t)\right)\ dp$$</p>
<p>Then I thought I could expand the logarithm in series. Maybe I shouldn't have...</p>
| Jack D'Aurizio | 44,121 | <p>We may consider that for any $\alpha>0$
$$ \frac{d}{d\alpha}\int_{0}^{+\infty}\frac{\log(1+x^\alpha)}{(1+x^2)\log x}\,dx =\int_{0}^{+\infty}\frac{x^{\alpha}}{(1+x^2)(1+x^{\alpha})}\,dx=\frac{\pi}{2}-\int_{0}^{+\infty}\frac{dx}{(1+x^2)(1+x^{\alpha})}$$
and with or without the Beta function it is well-known that $\int_{0}^{+\infty}\frac{dx}{(1+x^2)(1+x^{\alpha})}=\frac{\pi}{4}$ does not really depend on $\alpha$. It follows that $\int_{0}^{+\infty}\frac{\log(1+x^\alpha)}{(1+x^2)\log x}\,dx=\frac{\pi\alpha}{4}$ and
$$ \int_{0}^{+\infty}\frac{\log\left(\frac{1+x^{11}}{1+x^3}\right)}{(1+x^2)\log x}\,dx=\color{red}{2\pi}.$$</p>
<hr>
<p>In any case,
$$ \int_{0}^{+\infty}\frac{dx}{(1+x^2)(1+x^\alpha)}\stackrel{x\mapsto\tan\theta}{=}\int_{0}^{\pi/2}\frac{\cos^\alpha(\theta)}{\sin^\alpha(\theta)+\cos^\alpha(\theta)}d\theta\stackrel{\theta\mapsto\frac{\pi}{2}-\varphi}{=}\int_{0}^{\pi/2}\frac{\sin^\alpha(\varphi)}{\sin^\alpha(\varphi)+\cos^\alpha(\varphi)}d\varphi. $$</p>
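The $\alpha$-independence of this trigonometric integral (its value is $\pi/4$ by the reflection $\theta\mapsto\frac\pi2-\varphi$) can be spot-checked numerically; here is a crude midpoint-rule sketch (my addition, not from the original answer):

```python
import math

def integral(alpha, N=20000):
    # Midpoint rule for int_0^{pi/2} cos^a(t) / (sin^a(t) + cos^a(t)) dt.
    h = (math.pi / 2) / N
    total = 0.0
    for k in range(N):
        t = (k + 0.5) * h
        ca, sa = math.cos(t) ** alpha, math.sin(t) ** alpha
        total += ca / (sa + ca)
    return total * h

vals = [integral(a) for a in (0.5, 1.0, 3.0, 11.0)]
# each value should be close to pi/4, independently of alpha
```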
|
2,707,514 | <p>I'm reading <em>A First Course in Modular Forms</em> by Diamond and Shurman and am confused on a small point in Chapter 2. Let <span class="math-container">$\Gamma$</span> be a congruence subgroup of <span class="math-container">$\operatorname{SL}_2(\mathbb Z)$</span>. <span class="math-container">$\gamma \in \mathscr H$</span> is called an <em>elliptic point</em> for <span class="math-container">$\Gamma$</span> if the stabilizer of <span class="math-container">$\gamma$</span> in <span class="math-container">$\operatorname{PSL}_2$</span> is nontrivial.</p>
<blockquote>
<p>Proposition 2.1.1 Let <span class="math-container">$\tau_1, \tau_2 \in \mathscr H$</span> be given. There exist open neighborhoods <span class="math-container">$U_i$</span> of <span class="math-container">$\tau_i$</span> in <span class="math-container">$\mathscr H$</span> such that if <span class="math-container">$\gamma \in \operatorname{SL}_2(\mathbb Z), \gamma(U_1) \cap U_2 \neq \emptyset$</span>, then <span class="math-container">$\gamma(\tau_1) = \tau_2$</span>.</p>
<p>Corollary 2.2.3 Let <span class="math-container">$\Gamma$</span> be a congruence subgroup of <span class="math-container">$\operatorname{SL}_2(\mathbb Z)$</span>. Each point <span class="math-container">$\tau \in \mathscr H$</span> has a neighborhood <span class="math-container">$U$</span> in <span class="math-container">$\mathscr H$</span> such that <span class="math-container">$\gamma \in \Gamma, \gamma(U) \cap U \neq \emptyset$</span> implies <span class="math-container">$\gamma \in \operatorname{Stab} \tau$</span>. Such a neighborhood has no elliptic points except possibly <span class="math-container">$\tau$</span>.</p>
</blockquote>
<p>Taking <span class="math-container">$\tau = \tau_1 = \tau_2$</span> and <span class="math-container">$U = U_1 \cap U_2$</span> in the proposition implies everything in the corollary except for the last sentence. How do we know that we can choose <span class="math-container">$U$</span> small enough to exclude all elliptic points? In other words, how do we know that the elliptic points in <span class="math-container">$\mathscr H$</span> form a discrete set?</p>
| Lee Mosher | 26,501 | <p>Since each elliptic fixed point of an element of $\Gamma$ is an elliptic fixed point of an element of $SL_2(\mathbb{Z})$, it suffices to prove the stronger statement that the elliptic fixed points of $SL_2(\mathbb{Z})$ form a discrete set. And in fact what I'll prove is even stronger, they form a discrete <em>closed</em> set.</p>
<p>Under the action of $SL_2(\mathbb{Z})$ on $\mathscr H$, there are exactly two orbits of elliptic points: the orbit of $z_0=0+1i$ which is the elliptic fixed point of $f_0(z)=-1/z$; and the orbit of $z_1=\frac{1}{2} + \frac{\sqrt{3}}{2}i$ which is the fixed point of $f_1(z) = \frac{z-1}{z}$. You can check this by verifying for yourself that if the trace of $f \in SL_2(\mathbb{Z})$ equals $0$ then it is conjugate to a power of $f_0$; if the trace is $\pm 1$ then it is conjugate to a power of $f_1$; and if the trace has absolute value $\ge 2$ then it is not elliptic.</p>
<p>Suppose that the elliptic fixed points did not form a discrete closed subset of $\mathscr H$. There would exist, therefore, a point $p \in \mathscr H$ and a sequence of infinitely many distinct elliptic fixed points $q_1,q_2,q_3,\ldots$ converging to $p$. Passing to a subsequence, we may suppose that there exists $i \in \{0,1\}$ such that each of $q_1,q_2,q_3,\ldots$ is in the orbit of $z_i$ (and so is fixed by a conjugate of $f_i$). Passing to a further subsequence, we may suppose that each $q_n$ has distance $\le 1/2$ from $p$. We therefore obtain a sequence of infinitely many distinct nontrivial elements of $SL_2(\mathbb{Z})$, which I'll denote $g_1,g_2,g_3,\ldots$ (each of which is conjugate to $f_i$) such that for each $n$ the point $q_n$ is fixed by $g_n$, and furthermore we have
\begin{align*}
d(g_n(p),p) &\le d(g_n(p),g_n(q_n)) + d(g_n(q_n),p) \\
&= d(p,q_n) + d(q_n,p) \\
& \le 1
\end{align*}
The subset of all $g \in SL_2(\mathbb{R})$ such that $d(g(p),p) \le 1$ is compact. Therefore, the sequence $g_n \in SL_2(\mathbb{Z})$ has a convergent subsequence in $SL_2(\mathbb{R})$. But that is a contradiction, because $SL_2(\mathbb{Z})$ is a discrete subgroup of $SL_2(\mathbb{R})$: $\mathbb{Z}$ is discrete in $\mathbb{R}$, and so no sequence of distinct elements of $SL_2(\mathbb{Z})$ has a convergent subsequence in $SL_2(\mathbb{R})$. </p>
|
1,624 | <p>For example, to change the color of each pixel to the mean color of the three channels, I tried</p>
<pre><code>i = ExampleData[{"TestImage", "Lena"}];
Mean[i]
</code></pre>
<p>but it just remains unevaluated:</p>
<p><img src="https://i.stack.imgur.com/K1RRR.png" alt="enter image description here"></p>
<p>How can I read the colors of an image into a list or matrix, change the color values, and save the result back to an image?</p>
| acl | 16 | <p>Certainly. For instance, here's how to reduce the number of colours to 10 (randomly chosen in RGB space):</p>
<pre><code>i = Import["ExampleData/lena.tif"]
</code></pre>
<p><img src="https://i.stack.imgur.com/4uKUl.png" alt="Mathematica graphics"></p>
<p>You can try <code>ImageData[i]</code> to see the actual RGB values for each pixel. Now produce ten random triplets of reals between <code>0.</code> and <code>1.</code>, and construct a function to quickly pick the one closest to some given number:</p>
<pre><code>colours = RandomReal[{0, 1}, {10, 3}];
nf = Nearest[colours];
</code></pre>
<p>Then map the thing over the RGB values of the image and look at it:</p>
<pre><code>Map[First[nf[#]] &, ImageData[i], {-2}] // Image
</code></pre>
<p><img src="https://i.stack.imgur.com/kwhwN.png" alt="Mathematica graphics"></p>
<p>Try increasing the number of randomly selected colours to see what happens:</p>
<pre><code>Manipulate[
Module[{colours = RandomReal[{0, 1}, {num, 3}], nf},
nf = Nearest[colours];
Map[First[nf[#]] &, ImageData[i], {-2}] // Image
],
{{num, 10}, 1, 1000, 1}
]
</code></pre>
<p><img src="https://i.stack.imgur.com/lwxSL.png" alt="Mathematica graphics"></p>
|
90,263 | <p>Let $\mathcal{E} = \lbrace v^1 ,v^2, \dotsm, v^m \rbrace$ be the set of right
eigenvectors of $P$ and let $\mathcal{E^*} = \lbrace \omega^1 ,\omega^2,
\dotsm, \omega^m \rbrace$ be the set of left eigenvectors of $P.$ Let any two
vectors $v \in \mathcal{E}$ and $ \omega \in \mathcal{E^*}$ be given, corresponding to
the eigenvalues $\lambda_1$ and $\lambda_2$ respectively. If $\lambda_1 \neq
\lambda_2$, then $\langle v, \omega^\tau\rangle = 0.$ </p>
<p>Proof. For any eigenvector $v\in \mathcal{E}$ and $ \omega \in
\mathcal{E^*}$ which correspond to the eigenvalues $\lambda_1$ and $\lambda_2$
where $\lambda_1 \neq \lambda_2$ we have, \begin{equation*} \begin{split}\langle\omega,v\rangle = \frac{1}{\lambda_2}\langle \lambda_2 \omega, v\rangle = \frac{1}{\lambda_2} \langle P^ \tau \omega ,v\rangle = \frac{1}{\lambda_2}\langle\omega,P v\rangle = \frac{1}{\lambda_2} \langle \omega,\lambda_1v\rangle = \frac{\lambda_1}{\lambda_2}\langle \omega,v\rangle .\end{split}
\end{equation*} This implies $(\frac{\lambda_1}{\lambda_2} - 1)\langle\omega,v\rangle =
0.$ If $\lambda_1 \neq \lambda_2$ then $\langle \omega,v\rangle = 0.$</p>
<p>My question: what if $ \lambda_2 = 0 \neq \lambda_1$? How can I include this case in my proof?</p>
| BDH | 24,315 | <p>With a completely eclectic sense of culture within a school, there's a question that can be brought up, being, "What is the purpose of this course for each and every student that walks into my classroom?" Many answers will arise, whether it is a pure fascination of the subject or because it can be seen as a small stepping stone towards a much larger goal (college, a decent job, etc.) With all of these different responses a teacher becomes much more open-minded, seeing and appreciating the different viewpoints on life each student has. A teacher should feel a sense of urgency; ultimately taking that appreciation and channeling that into the passion and determination required to make sure that every student has the opportunity to succeed in mathematics as well as life. </p>
<p>You can use the different types of culture seen throughout a classroom to your advantage. Analogies between mathematics and their lifestyle is a phenomenal way to attract students into your teaching, for it is referencing something they are already comfortable with. And by constantly making connections to the material as well as their lifestyles and where they come from, you are also implicitly making the students more open minded, for they are also listening about the other cultures throughout the room.</p>
|
245,049 | <p>I am trying to do an $n$-fold convolution of a function. The code is posted below, but it is not working. Is there a solution?</p>
<pre><code>p[x_] := 1/(x + 1)*UnitStep[x]
p1[x_] := Convolve[p[y], p[y], y, x]
p2[x_] := Convolve[p[y], p1[y], y, x]
</code></pre>
<p><code>p1</code> succeeded, but the output of <code>p2</code> just returns the expression unevaluated:</p>
<pre><code>Convolve[UnitStep[y]/(1 + y),
Convolve[UnitStep[y]/(1 + y), UnitStep[y]/(1 + y), y, y], y, x]
</code></pre>
| Roman | 26,598 | <p>Using <a href="https://mathematica.stackexchange.com/a/201001/26598">partial memoization</a>:</p>
<pre><code>Clear[p];
p[0] = Function[x, 1/(x + 1)*UnitStep[x]];
p[n_Integer?Positive] := p[n] =
Function[x, Evaluate@Convolve[p[n - 1][y], p[0][y], y, x]]
p[0][x]
(* UnitStep[x]/(1 + x) *)
p[1][x]
(* ((-I \[Pi] + Log[-1 - x] + Log[1 + x]) UnitStep[x])/(2 + x) *)
p[2][x]
(* -(1/(3 (3 + x)))(2 \[Pi]^2 + 3 I \[Pi] Log[-2 - x] +
3 I \[Pi] Log[1 + x] + 3 Log[-1 - x] Log[1/(2 + x)] +
3 Log[1 + x] Log[1/(2 + x)] - 3 I \[Pi] Log[(1 + x)/(2 + x)] -
3 Log[1 + x] Log[2 + x] - 3 PolyLog[2, -1 - x] -
6 PolyLog[2, 1/(2 + x)] + 6 PolyLog[2, (1 + x)/(2 + x)] +
3 PolyLog[2, 2 + x]) UnitStep[x] *)
Plot[{p[0][x], p[1][x], p[2][x]}, {x, 0, 2}]
</code></pre>
<p><a href="https://i.stack.imgur.com/rvB2T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rvB2T.png" alt="enter image description here" /></a></p>
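<p>As a numerical cross-check (a Python sketch, not part of the original answer): for <code>x &gt; 0</code> the expression for <code>p[1][x]</code> simplifies to <code>2 Log[1+x]/(x+2)</code> (since <code>Log[-1-x] = Log[1+x] + I Pi</code> there), which can be compared against a direct midpoint-rule evaluation of the convolution integral:</p>

```python
import math

def p0(x):
    # p[0][x] = UnitStep[x]/(1+x)
    return 1.0 / (x + 1.0) if x >= 0 else 0.0

def conv(f, g, x, n=4000):
    # midpoint rule for (f*g)(x) = integral_0^x f(y) g(x-y) dy
    # (both factors vanish for negative arguments)
    h = x / n
    return h * sum(f((k + 0.5) * h) * g(x - (k + 0.5) * h) for k in range(n))

x = 1.5
numeric = conv(p0, p0, x)
closed = 2 * math.log(1 + x) / (x + 2)   # p[1][x] simplified for x > 0
print(numeric, closed)
```

<p>The two values agree to several digits, which supports the symbolic result for <code>p[1]</code>.</p>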
|
84,711 | <p>This is a homework question I was asked to do</p>
<p>Of a twice differentiable function $ f : \mathbb{R} \to \mathbb{R} $ it is given that $f(2) = 3, f'(2) = 1$ and $f''(x) = \frac{e^{-x}}{x^2+1}$ . Now I have to prove that $$ \frac{7}{2} \leq f\left(\frac{5}{2}\right) \leq \frac{7}{2} + \frac{e^{-2}}{40} . $$ I tried this by computing the third Taylor polynomial of $f$ near $a=2$, setting $x = \frac{5}{2}$, which gave me $$f(5/2) \approx 7/2 + \frac{e^{-2}}{40} - \frac{ - e^{-5/2}}{48} $$, but now I don't know what to do next. I guess one has to do something with finding the error of the first and second order Taylor polynomials, but I'm not sure how to do so.
Can you help me?</p>
<p>Thanks in advance,</p>
| Community | -1 | <p>The calculation can be done in the following way:
$f'''(x)=(f''(x))'= -\frac{e^{-x}}{x^2+1} -\frac{2xe^{-x}}{(x^2+1)^2}$ which at $x=2$ yields $f'''(2)=-\frac{9e^{-2}}{25}$ and so \begin{eqnarray*}
f(5/2) & = & f(2)+f'(2)(5/2-2) + f''(2)(5/2-2)^2/2! +f'''(2)(5/2-2)^3/3!+\dots \\ & = &3+1/2+e^{-2}/40+ \frac{-3e^{-2}}{400}+\dots \end{eqnarray*}
and the remaining terms are smaller in magnitude than $\frac{3e^{-2}}{400}$, so you obtain your inequalities.</p>
|
84,711 | <p>This is a homework question I was asked to do</p>
<p>Of a twice differentiable function $ f : \mathbb{R} \to \mathbb{R} $ it is given that $f(2) = 3, f'(2) = 1$ and $f''(x) = \frac{e^{-x}}{x^2+1}$ . Now I have to prove that $$ \frac{7}{2} \leq f\left(\frac{5}{2}\right) \leq \frac{7}{2} + \frac{e^{-2}}{40} . $$ I tried this by computing the third Taylor polynomial of $f$ near $a=2$, setting $x = \frac{5}{2}$, which gave me $$f(5/2) \approx 7/2 + \frac{e^{-2}}{40} - \frac{ - e^{-5/2}}{48} $$, but now I don't know what to do next. I guess one has to do something with finding the error of the first and second order Taylor polynomials, but I'm not sure how to do so.
Can you help me?</p>
<p>Thanks in advance,</p>
| Sasha | 11,069 | <p>Start with $f^\prime(x) = f^\prime(2) + \int_2^x f^{\prime\prime}(y) \mathrm{d} y = 1 + \int_2^x \frac{\exp(-u)}{1+u^2} \mathrm{d} u$. Then
$$ \begin{eqnarray}
f(x) &=& f(2) + \int_2^x f^\prime(z) \mathrm{d} z = 3 + \int_2^x \left( 1 + \int_2^z \frac{\exp(-u)}{1+u^2} \mathrm{d} u \right) \mathrm{d} z \\
&=& 3 + (x-2) + \int_2^x \int_2^z \frac{\exp(-u)}{1+u^2} \mathrm{d} u \mathrm{d} z
\end{eqnarray} $$</p>
<p>Since the double integral is a non-negative quantity (as an integral of non-negative function), it follows $f\left( \frac{5}{2} \right) \ge 3 + \left( \frac{5}{2} - 2\right) = \frac{7}{2}$.</p>
<p>On the other hand, since $\frac{\exp(-u)}{1+u^2}$ is decreasing for $u>0$:
$$\begin{eqnarray}
\int_2^\frac{5}{2} \int_2^z \frac{\exp(-u)}{1+u^2} \mathrm{d} u \mathrm{d} z &\le& \int_2^\frac{5}{2} \int_2^z \frac{\exp(-2)}{1+2^2} \mathrm{d} u \mathrm{d} z = \int_2^{\frac{5}{2}} \frac{\exp(-2)}{5} (z-2) \mathrm{d} z \\ &=& \frac{1}{5 \mathrm{e}^{2}} \cdot \left. \frac{1}{2} (z-2)^2 \right|_2^\frac{5}{2} = \frac{1}{5 \mathrm{e}^{2}} \cdot \frac{1}{8} = \frac{1}{40 \mathrm{e}^{2}}
\end{eqnarray}
$$
It, thus, follows that
$$
\frac{7}{2} \le f\left( \frac{5}{2} \right) \le \frac{7}{2} + \frac{1}{40 \mathrm{e}^{2}}
$$</p>
|
1,892 | <p>Although whether $$ P = NP $$ is important from a theoretical computer science point of view, I fail to see any practical implication of it.</p>
<p>Suppose that we can prove that all questions that can be verified in polynomial time have polynomial time solutions; it won't help us find the actual solutions. Conversely, if we can prove that $$ P \ne NP,$$ then this doesn't mean that our current NP-hard problems have no polynomial time solutions. </p>
<p>From a practical point of view (practical in the sense that we can immediately apply the solution to a real-world scenario), it shouldn't bother me whether P vs NP is proved or disproved any more than whether my <strong><em>current problem</em></strong> has a polynomial time solution. </p>
<p>Am I right?</p>
| Casebash | 123 | <p>Actually $P \ne NP$ <em>does</em> mean that our current NP-hard problems have no polynomials time solutions. NP-complete problems are the hardest problems in NP and NP-hard problems are at least as hard as this. So if $P \ne NP$, then all these NP-hard problems must be harder than P.</p>
<p>Whether the proof helps us find solutions will of course depend on the proof. If $P \ne NP$, then we know not to waste time looking for polynomial solutions.</p>
<p>If $P=NP$, then the real practical benefits would of course come from the solution, rather than the proof. That is fine - there is no reason why all theoretical computer science needs to be <em>directly</em> practical.</p>
|
1,892 | <p>Although whether $$ P = NP $$ is important from a theoretical computer science point of view, I fail to see any practical implication of it.</p>
<p>Suppose that we can prove that all questions that can be verified in polynomial time have polynomial time solutions; it won't help us find the actual solutions. Conversely, if we can prove that $$ P \ne NP,$$ then this doesn't mean that our current NP-hard problems have no polynomial time solutions. </p>
<p>From a practical point of view (practical in the sense that we can immediately apply the solution to a real-world scenario), it shouldn't bother me whether P vs NP is proved or disproved any more than whether my <strong><em>current problem</em></strong> has a polynomial time solution. </p>
<p>Am I right?</p>
| JDH | 413 | <p>Many of the problems we know to be in NP or NP-complete are problems that we actually want to solve, problems that arise, say, in circuit design or in other industrial design applications. Furthermore, since the diverse NP-complete problems are all polynomial time related to one another, if we should ever learn a feasible means of solving any of them, we would have feasible means for all of them. The result of this would be extraordinary, something like a second industrial revolution. It would be as though we suddenly had a huge permanent increase in computational power, allowing us to solve an enormous array of practical problems heretofore out of our computational reach. The P vs. NP question is important in part because of this tantalizing possibility. </p>
<p>If it were proved that P = NP and the proof provided a specific polynomial time algorithm for an NP-complete problem, then because of the existing reduction proofs, we could immediately produce polynomial time algorithms for all our other favorite NP problems. Of course, a proof may be indirect, and not provide a specific polynomial time algorithm, but you can be sure that if we have a proof of P=NP, then enormous resources will be put into extracting from the proof a speciffic algorithm.</p>
<p>Conversely, if someone were to prove $P \neq NP$, then it would mean that there could be no polynomial time solution for any NP complete problem. (In particular, the last sentence of your second paragraph is not correct.) </p>
|
1,892 | <p>Although whether $$ P = NP $$ is important from a theoretical computer science point of view, I fail to see any practical implication of it.</p>
<p>Suppose that we can prove that all questions that can be verified in polynomial time have polynomial time solutions; it won't help us find the actual solutions. Conversely, if we can prove that $$ P \ne NP,$$ then this doesn't mean that our current NP-hard problems have no polynomial time solutions. </p>
<p>From a practical point of view (practical in the sense that we can immediately apply the solution to a real-world scenario), it shouldn't bother me whether P vs NP is proved or disproved any more than whether my <strong><em>current problem</em></strong> has a polynomial time solution. </p>
<p>Am I right?</p>
| Charles Stewart | 100 | <p>Currently, if a manager asks their software engineering team to look at implementing some utility, and the team says that requirements are NP hard, that's a reason that the project requirements need to be changed before work on implementation can begin. That's because no-one knows how to give feasible solutions to such problems.</p>
<p>Furthermore, a plurality of complexity theorists believe $P \ne NP$, so there is a widespread belief among experts that feasible solutions to these problems will never be found.</p>
<p>If someone shows $P=NP$, and the team says the requirements are NP-complete, then the manager and the team will start to move from talk of theory to possible realisations and their efficiency. </p>
|
1,892 | <p>Although whether $$ P = NP $$ is important from a theoretical computer science point of view, I fail to see any practical implication of it.</p>
<p>Suppose that we can prove that all questions that can be verified in polynomial time have polynomial time solutions; it won't help us find the actual solutions. Conversely, if we can prove that $$ P \ne NP,$$ then this doesn't mean that our current NP-hard problems have no polynomial time solutions. </p>
<p>From a practical point of view (practical in the sense that we can immediately apply the solution to a real-world scenario), it shouldn't bother me whether P vs NP is proved or disproved any more than whether my <strong><em>current problem</em></strong> has a polynomial time solution. </p>
<p>Am I right?</p>
| n0vakovic | 956 | <p>Not directly related to the question, but definitely relevant. </p>
<p>Three days ago a <a href="http://www.hpl.hp.com/personal/Vinay_Deolalikar/Papers/pnp_preliminary.pdf" rel="nofollow">proof</a> of P != NP was published. The community thinks it looks serious.</p>
|
3,274,807 | <p>Given an <span class="math-container">$n \times n$</span> symmetric matrix with real coefficients, it has <span class="math-container">$n$</span> eigenvalues. I was wondering whether the eigenvalues are continuous with respect to the coefficients of the matrix. I have seen somewhere that the eigenvalues of matrices are continuous, but it was for complex matrices. </p>
<p>Does this hold for real symmetric matrices? I would guess that it does, but I was not sure how to show it. Any reference or comments would be appreciated. Thank you. </p>
<p>Added after comment: For the complex case I understand it, as the characteristic polynomial is a polynomial in the coefficients of the matrix, and the roots (in <span class="math-container">$\mathbb{C}$</span>) of a polynomial over <span class="math-container">$\mathbb{C}$</span> vary continuously. </p>
<p>However, over <span class="math-container">$\mathbb{R}$</span> I wasn't sure if I could still say the same, as, for example, a polynomial like <span class="math-container">$x^2 + a$</span> no longer has real roots once <span class="math-container">$a > 0$</span>. So I wasn't sure if I could still say the same about the real eigenvalues of real matrices, and hence I wasn't sure about the real eigenvalues of symmetric real matrices either... </p>
<p>I know that the reals are a subset of the complexes, but I wasn't sure if that was enough to obtain the statement for the real eigenvalues of symmetric real matrices... Any clarification would be appreciated. Thank you. </p>
| cangrejo | 86,383 | <p>The set of real numbers is a subset of the set of complex numbers, if we consider that real numbers are complex numbers with imaginary part equal to zero. Therefore, whatever holds for all complex numbers holds for real numbers.</p>
<p>As you observe, a polynomial might not have real roots. However, all the eigenvalues of a symmetric real matrix are real. By definition, they are the roots of the characteristic polynomial, so you can be sure that an example like the one you proposed will not arise.</p>
|
3,274,807 | <p>Given an <span class="math-container">$n \times n$</span> symmetric matrix with real coefficients, it has <span class="math-container">$n$</span> eigenvalues. I was wondering whether the eigenvalues are continuous with respect to the coefficients of the matrix. I have seen somewhere that the eigenvalues of matrices are continuous, but it was for complex matrices. </p>
<p>Does this hold for real symmetric matrices? I would guess that it does, but I was not sure how to show it. Any reference or comments would be appreciated. Thank you. </p>
<p>Added after comment: For the complex case I understand it, as the characteristic polynomial is a polynomial in the coefficients of the matrix, and the roots (in <span class="math-container">$\mathbb{C}$</span>) of a polynomial over <span class="math-container">$\mathbb{C}$</span> vary continuously. </p>
<p>However, over <span class="math-container">$\mathbb{R}$</span> I wasn't sure if I could still say the same, as, for example, a polynomial like <span class="math-container">$x^2 + a$</span> no longer has real roots once <span class="math-container">$a > 0$</span>. So I wasn't sure if I could still say the same about the real eigenvalues of real matrices, and hence I wasn't sure about the real eigenvalues of symmetric real matrices either... </p>
<p>I know that the reals are a subset of the complexes, but I wasn't sure if that was enough to obtain the statement for the real eigenvalues of symmetric real matrices... Any clarification would be appreciated. Thank you. </p>
| Nitin Uniyal | 246,221 | <p>For an order <span class="math-container">$2$</span> matrix <span class="math-container">$A=\begin{pmatrix}a&b\\b&c\\\end{pmatrix}$</span>, the eigenvalues <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span> satisfy <span class="math-container">$\alpha+\beta=a+c$</span> and <span class="math-container">$\alpha\beta=ac-b^2$</span>. Suppose <span class="math-container">$\epsilon>0$</span> is the desired change in the eigenvalues of <span class="math-container">$A$</span>; then there always exists a <span class="math-container">$\delta>0$</span> (namely <span class="math-container">$\delta=\epsilon$</span>, applied as a
change to the diagonal of <span class="math-container">$A$</span>) such that</p>
<p><span class="math-container">$A+\delta I=\begin{pmatrix}a+\epsilon &b\\b&c+\epsilon\\\end{pmatrix}$</span>has characteristic equation <span class="math-container">$t^2-(a+2\epsilon+c)t+[{ac-b^2+(a+c)\epsilon+\epsilon^2}]=0$</span> which clearly suggests that the new roots are <span class="math-container">$\alpha+\epsilon$</span> and <span class="math-container">$\beta+\epsilon$</span>.</p>
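<p>To make this concrete, here is a small numerical sketch (in Python; not part of the original answer) confirming that adding <span class="math-container">$\epsilon$</span> to the diagonal of a symmetric <span class="math-container">$2\times 2$</span> matrix shifts both eigenvalues by exactly <span class="math-container">$\epsilon$</span>, using the closed-form roots of the characteristic polynomial:</p>

```python
import math

def eig_sym2(a, b, c):
    # eigenvalues of the symmetric matrix [[a, b], [b, c]]:
    # roots of t^2 - (a+c) t + (ac - b^2) = 0
    m = (a + c) / 2.0
    r = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    return m - r, m + r

a, b, c, eps = 2.0, 1.0, -1.0, 1e-3
lo,  hi  = eig_sym2(a, b, c)
lo2, hi2 = eig_sym2(a + eps, b, c + eps)   # eigenvalues of A + eps*I
print(lo2 - lo, hi2 - hi)                  # both shifts equal eps
```

<p>The mean <code>m</code> shifts by <code>eps</code> while the radius <code>r</code> is unchanged, which is exactly the shift-by-<span class="math-container">$\epsilon$</span> behaviour derived above.</p>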
|
1,281,627 | <p>Today I completed the chapter on 'Limits' in my school, and I found it very fascinating. But the only problem I have with limits and derivatives is that I don't know how I can use them in my daily life. (Any book recommendations?)</p>
| wythagoras | 236,048 | <p>Derivatives have some applications in economics, for example marginal cost, and I believe that marginal cost is used a lot in business applications.</p>
|
3,929,063 | <p>I always want to apply category theory to structures that involve "time" or "stepping"/"increment"(discrete "time").</p>
<p>I visualize it as a sequence of categories that are somehow connected(generally the objects are the same and only the morphisms will change, not sure if they are functorially related or not in all cases).</p>
<p>In my cases they are discrete systems in the sense that a morphism does not continuously flow, although, I suppose, it should be valid for such systems too (analogous to the calculus of limits).</p>
<p>Is there anything in category theory that deals with this topic? I find that category theory is more relevant to real systems when time seems to be included, but I haven't been able to really apply it with time. In fact, one thing I sort of want to do is have Markovian-like structures on categories: a set of categories with probabilistic transitions between them. This too is a sort of category with, I guess, the transitions being functors that are selected with a probability (but in a sequence of steps).</p>
<p>It seems that category theory is generally thought of as a static structure, but this is quite limiting for problems that are actually relatively simple statically but change over time. E.g., imagine a cell phone network where the objects are cell phones and the morphisms are connections. The connections may be off or on (a phone call) and also last for any length of time. At each instant in time there is a category/graph of the network, but over time the graph is dynamically changing. We can even do things like calculus over it.</p>
<p>So, in such ways it looks more like graph theory, BUT, of course, these happen to be categorical structures (identities, composition, inclusion, etc.)... so the goal is to extract some time-dependent structural variation. E.g., we can think of a typical "basic" category as static in time, analogous to a constant function whose derivative is 0. In this case the categorical structure is dynamic and its "derivative" is not constant but shows how the structure is evolving. Ultimately I want to be able to extract some meta-structure from the evolution of the changing categorical structure (and ultimately derive some type of meta^n-structural deductive chains which would, I hope, ultimately be categorical in nature).</p>
| Mozibur Ullah | 26,254 | <p>Time is a physical concept, and it's a question of how it is to be modelled. It's quite possible that time invariance is an artifact of our modelling, since causal nets, a model in quantum gravity, allow for modelling time as open - a relatively recent result.</p>
<p>This to me seems a prime suspect for modelling by category theory: the problem being to say something new that hasn't already been said by causal set theory.</p>
|
3,929,063 | <p>I always want to apply category theory to structures that involve "time" or "stepping"/"increment"(discrete "time").</p>
<p>I visualize it as a sequence of categories that are somehow connected(generally the objects are the same and only the morphisms will change, not sure if they are functorially related or not in all cases).</p>
<p>In my cases they are discrete systems in the sense that a morphism does not continuously flow, although, I suppose, it should be valid for such systems too (analogous to the calculus of limits).</p>
<p>Is there anything in category theory that deals with this topic? I find that category theory is more relevant to real systems when time seems to be included, but I haven't been able to really apply it with time. In fact, one thing I sort of want to do is have Markovian-like structures on categories: a set of categories with probabilistic transitions between them. This too is a sort of category with, I guess, the transitions being functors that are selected with a probability (but in a sequence of steps).</p>
<p>It seems that category theory is generally thought of as a static structure, but this is quite limiting for problems that are actually relatively simple statically but change over time. E.g., imagine a cell phone network where the objects are cell phones and the morphisms are connections. The connections may be off or on (a phone call) and also last for any length of time. At each instant in time there is a category/graph of the network, but over time the graph is dynamically changing. We can even do things like calculus over it.</p>
<p>So, in such ways it looks more like graph theory, BUT, of course, these happen to be categorical structures (identities, composition, inclusion, etc.)... so the goal is to extract some time-dependent structural variation. E.g., we can think of a typical "basic" category as static in time, analogous to a constant function whose derivative is 0. In this case the categorical structure is dynamic and its "derivative" is not constant but shows how the structure is evolving. Ultimately I want to be able to extract some meta-structure from the evolution of the changing categorical structure (and ultimately derive some type of meta^n-structural deductive chains which would, I hope, ultimately be categorical in nature).</p>
| N. Virgo | 27,193 | <p>I don't know if the following are exactly what you're looking for, since they are about modelling dynamical systems <em>within</em> category theory, rather than having a dynamical system <em>of</em> categories, but they are likely to be useful for inspiration, if nothing else.</p>
<p>There's a classic way to look at discrete-time dynamical systems in category theory. One is just to model a dynamical system as a set <span class="math-container">$X$</span> with an endomap, that is, a function <span class="math-container">$f\colon X\to X$</span>, which you think of as the update function. This can be seen as a functor <span class="math-container">$\mathcal{N}\to\mathsf{Set}$</span>, where <span class="math-container">$\mathcal{N}$</span> is a monoid (i.e. a category with a single object) whose morphisms are the natural numbers and the composition of two morphisms is given by addition. Natural transformations between such functors give a natural notion of morphism between dynamical systems, and the resulting category (being a presheaf category and hence a topos) has a lot of very nice properties. This is explored in Lawvere and Schanuel's classic book 'Conceptual Mathematics', among other places.</p>
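<p>As a toy illustration of this view (a Python sketch, not from the cited book): a discrete-time system is just a set with an endomap, and a morphism of systems is a map that commutes with the updates, i.e. <span class="math-container">$h\circ f = g\circ h$</span>:</p>

```python
# Two discrete-time dynamical systems: a set X with endomap f, and Y with g.
X, f = range(8), (lambda x: (x + 1) % 8)   # rotation of an 8-element cycle
Y, g = range(4), (lambda y: (y + 1) % 4)   # rotation of a 4-element cycle

# Candidate morphism h: (X, f) -> (Y, g); equivariance means h(f(x)) == g(h(x)).
h = lambda x: x % 4
is_morphism = all(h(f(x)) == g(h(x)) for x in X)
print(is_morphism)   # True: stepping then projecting equals projecting then stepping
```

<p>Here the 8-cycle maps onto the 4-cycle, a simple instance of the natural transformations mentioned above.</p>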
<p>For Markov processes, a nice place to start is Tobias Fritz' recent paper <a href="https://arxiv.org/abs/1908.07021" rel="nofollow noreferrer">A synthetic approach to Markov kernels, conditional independence and theorems on sufficient statistics</a>. This paper gives a modern framework for thinking about probability in category theory, based on what he calls "Markov categories", which generally behave like categories of sets and stochastic functions (i.e. Markov kernels). Similarly to the above construction discrete-time, time-homogeneous Markov process is a functor <span class="math-container">$\mathcal{N}\to \mathscr{M}$</span>, where <span class="math-container">$\mathscr{M}$</span> is a Markov category. Fritz mentions Markov processes only briefly in his paper, but still I think it's a good place to start in thinking about them.</p>
<p>In both of these cases, if you want to consider non-time-homogeneous systems (i.e. where the update function can change on every time step) you can look at functors from <span class="math-container">$\mathbf{N}$</span> instead of <span class="math-container">$\mathcal{N}$</span>, where <span class="math-container">$\mathbf{N}$</span> is the natural numbers seen as a preorder instead of a monoid. (I'm making this notation up - I'm not sure if there are standard symbols for these categories.)</p>
<p>This kind of approach can also be used for continuous-time systems, by replacing the monoid of natural numbers with a monoid of real numbers, although things tend to get significantly more complicated due to the need to consider topology. See this <a href="https://jadeedenstarmaster.wordpress.com/2019/03/31/dynamical-systems-with-category-theory-yes/" rel="nofollow noreferrer">blog post by Jade Master</a> for such a construction.</p>
<p>Since you seem to be looking for a discrete-time dynamical system where the state space consists of categories, it could be that you would be interested in functors <span class="math-container">$\mathcal{N}\to\mathsf{Cat}$</span> or <span class="math-container">$\mathbf{N}\to\mathsf{Cat}$</span>. This seems like it should be related to the <a href="https://en.wikipedia.org/wiki/Grothendieck_construction" rel="nofollow noreferrer">Grothendieck construction</a>. I have no intuition for what it would be like, and I also don't immediately know how one would make it into a stochastic process.</p>
<p>Going in a somewhat different direction, there is a fair bit of current research in category theory in something called "open dynamical systems", which may be in discrete or continuous time. The idea here is to define dynamical systems with inputs and outputs (in various different senses), such that they can be composed with each other. If you search YouTube you can find some lectures by David Jaz Myers that give a good overview, and you can also find some papers on this topic by John Baez' group, among others.</p>
|
997,999 | <p>I read that integration is the opposite of differentiation AND at the same time is a summation process to find the area under a curve. But I can't understand how these things combine together and actually an integral can be the same time those two things. If the integration is the opposite of differentiation, then the result of the integration should be the initial function from which we derived the derivative function. How can this be at the same time the area under the curve of the (which?) function?</p>
| 5xum | 112,884 | <p>What you are asking is, essentially, the fundamental theorem of calculus.</p>
<p>The process in analysis is such that first, for continuous functions (actually, a slightly larger class of functions, not only continuous, but that isn't all that important), we can calculate the area under their graphs by taking the limit of <a href="http://en.wikipedia.org/wiki/Riemann_sum" rel="nofollow">Riemann</a> sums. That is, for a function $f$ defined on $[a,b]$, we have a strictly defined method with which to calculate the surface area under the graph of $f$. We use the notation </p>
<p>$$\int_a^bf(t)dt$$
to denote this surface area. (note: no "integration" has taken part so far. We only took some limits and defined a number which is the surface area).</p>
<p>Now, for a continuous function $f$ defined on $[a,b]$, we can define the function </p>
<p>$$F(x) = \int_a^x f(t)dt$$
which is equal to the area under the curve $\{(t, f(t)) \mid t\in[a,x]\}$</p>
<p>For this function, the fundamental theorem states that the function $F$ is an antiderivative of $f$, meaning that $F'=f$, and it is obvious that $F(b) = \int_a^b f(t) dt$.</p>
<p><em>Note</em>: What the theorem basically says is "the area under the function $f$ is an antiderivative of $f$".</p>
<hr>
<p>This theorem has huge consequences, and the one you want is this one:</p>
<p>For a continuous function $f$, it is easy to show that if $F' = f$ and $G' = f$ (if $F$ and $G$ are antiderivatives of $f$), then $F(x) = G(x) + C$ for some constant $C\in\mathbb R$.</p>
<p>The consequence of this is that if you find <strong>any one</strong> antiderivative $G$ of $f$, you can immediatelly be sure that $$G(x) = \int_a^xf(t)dt + C,$$</p>
<p>meaning that $G(b) - G(a)$ is equal to
$$\int_a^bf(t)dt + C - \int_a^a f(t)dt - C = \int_a^b f(t)dt,$$
precisely the surface area under $f$ on the interval $[a,b]$.</p>
<hr>
<p><strong>EDIT</strong>:</p>
<p>You said to give you an example using $f(x) = 3x^2$, so here you go:</p>
<p>I will look at $f$ on the interval $[0,1]$. We know that <em>one</em> antiderivative of $f$ is the function $g(x) = x^3 + 5$ (or any other function of the form $x^3 + C$, including $x^3$). Now, define the function $F(x)$ as:</p>
<p>$$F(x) = \int_0^xf(t)dt,$$
where the definite integral is calculated using Riemann sums. We <em>do not yet know</em> how to calculate $F(x)$, but we know that $F$ is a well-defined function. We also know that $F(0) = 0$, since there is no area under one single point.</p>
<p>What we want to evaluate is $F(1)$, since that is the area under $f$ on the interval $[0,1]$.</p>
<p>Now what do we know?</p>
<ol>
<li>Using the fundamental theorem of calculus, we know that $F$ is an antiderivative of $f$, i.e. we know that $F'(x) = f(x)$.</li>
<li>We know that $g'(x) = f(x)$.</li>
<li>We can also prove that if $F_1$ and $F_2$ are both antiderivatives of $f$, then they differ only by a constant, i.e. if $F_1' = F_2' = f$, then $F_1 = F_2 + C$. Strictly speaking, if $F_1' = F_2',$ then $$\exists C_1 \in\mathbb R\forall x\in[0,1]: F_1(x) = F_2(x) + C_1.$$</li>
</ol>
<p>From these three points, we can conclude, without knowig what $F$ is, that $F$ differs from $g$ by only a constant, i.e. there exists some constant $C$ such that $g(x) = F(x) + C$. Now, what is the value $g(1) - g(0)$ equal to?</p>
<p>Well, we know that $g(1) = F(1) + C$ and $g(0) = F(0) + C$, so
$$g(1)-g(0) = (F(1)+C) - (F(0) + C) = F(1) + C - F(0) - C = F(1) - F(0).$$</p>
<p>We also know that $F(0) = 0$, so</p>
<p>$$g(1) - g(0) = F(1) - F(0) = F(1) - 0 = F(1).$$</p>
<p>This means that if we calculate $g(1) - g(0)$ (and we can do that, since we know what $g$ is - remember, $g$ is <em>any</em> antiderivative of $f$!), we will actually calculate the number $F(1)$. But by definition, $F(1)$ is equal to the area under $f$ on the interval $[0,1]$, so we have, by plugging the values $1$ and $0$ into the <em>indefinite</em> integral of $f$, calculated the <em>definite</em> integral of $f$.</p>
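<p>A quick numerical illustration of the argument above (a Python sketch, not part of the original answer): approximating $F(1)=\int_0^1 3t^2\,dt$ by a midpoint Riemann sum reproduces $g(1)-g(0)$ for the antiderivative $g(x)=x^3+5$:</p>

```python
def f(x):
    return 3 * x ** 2

def g(x):
    return x ** 3 + 5        # one antiderivative of f

n = 100_000
# midpoint Riemann sum approximating integral_0^1 f(t) dt
riemann = sum(f((k + 0.5) / n) for k in range(n)) / n
exact = g(1) - g(0)          # = 1, independent of the constant +5
print(riemann, exact)
```

<p>The constant $+5$ cancels in the difference, exactly as in the proof: <em>any</em> antiderivative works.</p>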
|
173,131 | <p>Let's suppose that for the following expression:</p>
<p>$\qquad \alpha\,\beta +\alpha+\beta$</p>
<p>I know that $\alpha$ and $\beta$ are of small magnitude (e.g., 0 < $\alpha$ < 0.02 and 0 < $\beta$ < 0.02). Therefore, the magnitude of $\alpha\,\beta$ is negligible, i.e., the original expression can be approximated by</p>
<p>$\qquad \alpha+\beta$</p>
<p>Is there any command in Mathematica to do such an operation?</p>
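<p>A quick numeric sanity check of the approximation (a Python sketch, not a Mathematica command; the sample values are hypothetical but within the stated range): the dropped product term $\alpha\beta$ contributes well under one percent:</p>

```python
a, b = 0.02, 0.015           # sample values in the stated range (0, 0.02)
full = a * b + a + b         # original expression
linear = a + b               # first-order approximation
rel_err = abs(full - linear) / full
print(rel_err)               # about 0.0085, i.e. under 1%
```
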
<hr>
<p>Reminder from the original question: if $\alpha$ and $\beta$ are of small magnitude, we may approximate the original expression by disregarding nonlinear terms such as $\alpha^n$, $\beta^n$ ($n=2,3,\ldots$), and $\alpha\beta$.</p>
<p>My observation is that applying the good code suggestion of @Henrik Schumacher to a fraction does not seem to generate a proper result for the whole fraction. For instance, if</p>
<p>$\text{numerator}=\alpha \beta +\alpha +\beta ^2+\beta -\lambda \epsilon q(t)+\beta q(t)$ </p>
<pre><code>numerator = α*β + β^2 + α + β + β q[t] - ϵ*λ*q[t]
f = numerator;
(f /. {α -> 0, β -> 0}) + (D[f, {{α, β}, 1}] /. {α -> 0, β -> 0}) . {α, β}
</code></pre>
<p>The code generates the correct elimination on the numerator: $\alpha +\beta -\lambda \epsilon q(t)+\beta q(t)$</p>
<p>and</p>
<p>$\text{denominator}=\alpha -\alpha \beta \,\text{LD}(t)+\alpha \beta q(t)+\beta q(t)+\epsilon +1$:</p>
<pre><code>denominator= 1 +α +ϵ -α*β*LD[t] +β*q[t] +α*β*q[t]
f = denominator;
(f /. {α -> 0, β -> 0}) + (D[f, {{α, β}, 1}] /. {α -> 0, β -> 0}) . {α, β}
</code></pre>
<p>The code generates the correct elimination on the denominator: $\alpha +\beta q(t)+\epsilon +1$ </p>
<p>Nevertheless, when applying the suggested first order Taylor Series expansion in the fraction as a whole: </p>
<pre><code>f = (numerator /denominator);
(f /. {α -> 0, β -> 0}) + (D[
f, {{α, β}, 1}] /. {α -> 0, β ->
0}).{α, β} // Simplify
</code></pre>
<p>The result generated is incorrect:
$\frac{(\epsilon +1) (\alpha +\beta )-q(t) (\lambda \epsilon (-\alpha +\epsilon +1)+\beta ^2 q(t))+\beta \lambda \epsilon q(t)^2+(-\alpha +\epsilon +1) \beta q(t)}{(\epsilon +1)^2}$</p>
<p>That can be observed by, for instance, noticing that in the output above (i) the numerator has $\beta^2$, and (ii) the code generated $q[t]^2$, which did not exist in the original equation.</p>
<p>I hope that this time I could express my concern in a proper format.
Thank you all for your support!</p>
<hr>
<p>Questions related to @Akku14's code suggestion</p>
<p><strong>Question 1</strong>:
@Akku14, I was trying to use your code suggestion with a slight modification of the original purpose (instead of eliminating α and β, now eliminating α, β and LD[t]), but I had no success. I think the reason is that I could not find a way of writing LD[t] as a parameter of the function g: </p>
<p>for the following equation: </p>
<pre><code>\[CapitalDelta]p[t] = (α*β + β^2 + α + β + β*q[t] - ϵ*λ*q[t])/(1 + α + ϵ -α*LD[t] + β*q[t] + α*β*q[t])
</code></pre>
<p>$\text{$\Delta $p}(t)=\frac{\alpha \beta +\alpha +\beta ^2+\beta +\beta q(t)-\lambda \epsilon q(t)}{\alpha +\alpha (-\text{LD}(t))+\alpha \beta q(t)+\beta q(t)+\epsilon +1}$</p>
<pre><code>g[α_, β_, LD[t] _] = \[CapitalDelta]p[t]
</code></pre>
<p>By using @Akku14's suggestion:</p>
<pre><code>ser = (Series[g[α eps, β eps, LD[t] eps], {eps, 0, 1}] //
Normal) /. eps -> 1 // Simplify
</code></pre>
<p>I get the following output:</p>
<p>$\frac{(\epsilon +1) (\alpha +\beta )+q(t) \left(\lambda \epsilon (\alpha -\epsilon -1)+\beta (\epsilon +1)-\alpha \lambda \epsilon \text{LD}(t)\right)+\beta \lambda \epsilon q(t)^2}{(\epsilon +1)^2}$</p>
<p>which is incorrect since $\alpha \lambda \epsilon LD[t]$ is present in the numerator of <code>ser</code>.</p>
<p>Again, I think that the problem is that my <code>g</code> is not recognizing LD[t] as a parameter; would any of you know how to approach this issue?</p>
<p><strong>Question 2.1</strong>: in case I wanted the second-order Taylor series for <code>g[α, β, LD[t]]</code>, would changing <code>{eps, 0, 1}</code> to <code>{eps, 0, 2}</code> in <code>ser</code> be enough to get the correct result? Like:</p>
<pre><code>ser = (Series[g[α eps, β eps, LD[t] eps], {eps, 0, 2}] //
Normal) /. eps -> 1 // Simplify
</code></pre>
<p><strong>Question 2.2</strong>: in case I wanted the $n$th-order Taylor series for <code>g[α, β, LD[t]]</code>, would changing <code>{eps, 0, 1}</code> to <code>{eps, 0, n}</code> in <code>ser</code> be enough to get the correct result?</p>
| Akku14 | 34,287 | <p>Let me give a slightly different form that is, as far as I can see, equivalent to that of Henrik Schumacher, and show why a q[t]^2 term appears.</p>
<pre><code>f[a_, b_] := a b + a + b
</code></pre>
<p>Take series for small eps, and fix the result with eps->1</p>
<pre><code>(Series[f[a eps, b eps], {eps, 0, 1}] // Normal) /. eps -> 1
(* a + b *)
</code></pre>
<p>Higher orders give the original function</p>
<pre><code>(Series[f[a eps, b eps], {eps, 0, 5}] // Normal) /. eps -> 1
(* a + b + a b *)
</code></pre>
<p>Now the rational function</p>
<pre><code>numerator[a_, b_] :=
a*b + b^2 + a + b + b q[t] - ϵ*λ*q[t]
denominator[a_, b_] :=
1 + a + ϵ - a*b*LD[t] + b*q[t] + a*b*q[t]
ser0 = (Series[
numerator[a eps, b eps]/denominator[a eps, b eps], {eps, 0,
1}] // Normal) /. eps -> 1 // Together
(* (1/((1 + ϵ)^2))(a + b + a ϵ + b ϵ +
b q[t] + b ϵ q[t] - ϵ λ q[t] +
a ϵ λ q[t] - ϵ^2 λ q[t] +
b ϵ λ q[t]^2) *)
</code></pre>
<p>Taking separate series</p>
<pre><code>ser1 = (Series[numerator[a eps, b eps], {eps, 0, 1}] // Normal) /.
eps -> 1 // Together
(* a + b + b q[t] - ϵ λ q[t] *)
ser2 = (Series[denominator[a eps, b eps], {eps, 0, 1}] // Normal) /.
eps -> 1 // Together
(* 1 + a + ϵ + b q[t] *)
</code></pre>
<p>Although ser1/ser2 has no q[t]^2 term, it has a term <code>q[t]/(1 + b q[t])</code>, which yields a q[t]^2 term for small b</p>
<pre><code>(Series[q[t]/(1 + b q[t]) /. {a -> a eps, b -> b eps}, {eps, 0, 1}] //
Normal) /. eps -> 1
(* q[t] - b q[t]^2 *)
</code></pre>
<p>Therefore a further series expansion ser3 has to be done, which yields the same result as ser0</p>
<pre><code>ser3 = (Series[ser1/ser2 /. {a -> a eps, b -> b eps}, {eps, 0, 1}] //
Normal) /. eps -> 1 // Together
(* (1/((1 + ϵ)^2))(a + b + a ϵ + b ϵ +
b q[t] + b ϵ q[t] - ϵ λ q[t] +
a ϵ λ q[t] - ϵ^2 λ q[t] +
b ϵ λ q[t]^2) *)
ser3 == ser0
(* True *)
</code></pre>
|
2,559,260 | <p>There exists a function $f$ such that $\lim_{x \rightarrow \infty} \frac{f(x)}{x^2} = 25$ and $\lim_{x \rightarrow \infty} \frac{f(x)}{x} = 5$</p>
<p>I am confused; I do not know whether it is true or not.</p>
<p>I have a counter-example, but I think there might be such a function.</p>
| Contestosis | 462,389 | <p>There is no such function.
To prove this, let $f$ be a solution.
The conditions on the limits impose that $f$ does not vanish near $+ \infty $.
Therefore the ratio of the two quantities, $\frac{f(x)/x}{f(x)/x^2} = x$, would have to tend to $5/25$; yet the limit of $x$ near $+\infty$ is $+\infty$, a contradiction.
Another way of seeing it:
if $f(x)/x$ has $5$ as a limit, then $f(x)/x^2 = \frac{f(x)}{x}\cdot\frac{1}{x}$ has $0$ as a limit, not $25$.</p>
|
302,243 | <p>Let $f:[0,1]\to\mathbb{R}$ be a Lipschitz function, and $\pi f$ be its piecewise linear interpolant on an equispaced grid with $n$ points.</p>
<p>It should be true (if I am not making mistakes with the constant) that
$$
\int_0^1 |f - \pi f| \leq \frac{1}{4n} \operatorname{Lip}(f).
$$</p>
<p>Do you have a reference that I can cite for this result, without having to re-prove it? All the references I have found by looking around assume better regularity.</p>
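<p>For what it is worth, the claimed bound is easy to sanity-check numerically. The sketch below is my own construction (it assumes the grid has $n$ equal subintervals, and uses $f(x)=|x-0.37|$, so $\operatorname{Lip}(f)=1$); it is of course no substitute for a citable proof.</p>

```python
# Numeric sanity check of  int_0^1 |f - pi f| dx <= Lip(f)/(4n)
# with n equal subintervals; f(x) = |x - 0.37| has Lip(f) = 1.
def f(x):
    return abs(x - 0.37)

n = 10
h = 1.0 / n

def interp(x):
    # piecewise linear interpolant of f on the equispaced grid
    i = min(int(x / h), n - 1)
    x0 = i * h
    t = (x - x0) / h
    return (1 - t) * f(x0) + t * f(x0 + h)

m = 100_000  # fine midpoint rule for the L1 error
err = sum(abs(f((j + 0.5) / m) - interp((j + 0.5) / m)) for j in range(m)) / m
bound = 1.0 / (4 * n)
print(err, bound)
```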
| Jason Starr | 13,265 | <p>That is false. I am taking a break from something else, so I will mostly refer to other MO answers. I gather from your example that you allow $X$ to be singular. Then Simpson proved that every finitely presented group $G$ is isomorphic to the fundamental group of a (usually singular) complex projective variety $X$, Theorem 12.1 of the following.</p>
<p>MR2918179 <br>
Simpson, Carlos <br>
Local systems on proper algebraic V-manifolds. <br>
Pure Appl. Math. Q. 7 (2011), no. 4, <br>
Special Issue: In memory of Eckart Viehweg, 1675–1759. <br></p>
<p>I found this reference from Andy Putman's answer to this MO question.<br> <a href="https://mathoverflow.net/questions/270023/fundamental-groups-of-compact-k%C3%A4hler-manifolds">Fundamental groups of compact Kähler manifolds</a></p>
<p>Let $G$ be a nontrivial finitely presented group whose Abelianization is trivial, e.g., the alternating group $\mathfrak{A}_n$ for $n\geq 5$. Then $\text{Pic}^0(X)$ is trivial. Thus the structure sheaf of $X$ has the invariance property. Yet the structure sheaf is not the pushforward of a coherent sheaf, because the rank of the structure sheaf, $1$, is not divisible by the order of $G$.</p>
<p>Let me anticipate the next question: "What if $E_\rho\otimes F$ is isomorphic to $F^{\oplus r}$ for every complex representation $\rho$ of $\pi_1(X)$ with finite dimension $r$ and with associated vector bundle $E_\rho$ on $X$?" However, the fantastic answers to the following question resolve this negatively; there are many finitely presented groups having no nontrivial representation, and then you can take $F$ to be the structure sheaf. <br><a href="https://mathoverflow.net/questions/9628/finitely-presented-sub-groups-of-gln-c/14244">Finitely presented sub-groups of GL(n,C)</a> </p>
<p><B>Edit.</B> I just noticed the edit by the OP that allows (and insists) that $X$ be highly reducible. That makes things much simpler. Let $<g_1,\dots,g_m|p_1,\dots, p_n>$ be a presentation of $G$. For each generator $g_i$, let $X'_i \cong C_i\times \mathbb{P}^3$ be an irreducible component, where $(C_i,x_i)$ is a nodal plane cubic with a marked point $p_i$. Let $(C_0,y_1,\dots,y_m)$ be an $m$-marked curve of genus $0$, and let $X'_0$ be $C_0\times \mathbb{P}^3$. </p>
<p>Begin with $X'$, the reducible $4$-fold obtained by gluing each $X'_i,$ $i=1,\dots,m$, to $X'_0$ by identifying $\{x_i\}$ with $\{y_i\}$, inducing an identification of $\{x_i\}\times \mathbb{P}^3$ with $\{y_i\}\times \mathbb{P}^3$. The fundamental group of $X'$ is the free group on $m$-generators $<g_1,\dots,g_m>$. </p>
<p>Moreover, for every element $p$ of the fundamental group, the subgroup $<p>$ of the fundamental group is the image fundamental group under pushforward for an unramified, finite morphism of complex projective schemes, $$f_p:D_p\times \mathbb{P}^3\to X',$$ where $D_p$ is an $n$-gon of $\mathbb{P}^1$s that lifts to $\widetilde{X}'$ once we separate a single node. Choose a closed immersion, $$e_p:D_p\hookrightarrow \mathbb{P}^3.$$ The graph of $e_p$ in $D_p\times \mathbb{P}^3$ is a copy of $D_p$. </p>
<p>For each $j=1,\dots,n$, let $W_j$ denote the image under $f_{p_j}$ of the graph of $e_{p_j}$. This is an $n$-gon in $X'$ whose image fundamental group in $<g_1,\dots,g_m>$ equals $<p_j>$. If the embeddings $e_{j}$ are chosen sufficiently general, then these curves $W_j$ are pairwise disjoint. Let $X_j$ be a copy of $\mathbb{P}^3$, and glue this to $X'$ by identifying $W_j$ with the image of $e_{p_j}$ in $X_j$. Then the reduced, complex projective scheme $X$ obtained by gluing each $X_j$ to $X'$ has fundamental group $< g_1,\dots,g_m|p_1,\dots,p_n>$. </p>
|
2,214,236 | <p>The question:</p>
<blockquote>
<p>An object is dropped from a cliff. How far does the object fall in the 3rd second?"</p>
</blockquote>
<p>I calculated that a ball dropped from rest from a cliff will fall $45\text{ m}$ in $3 \text{ s}$, assuming $g$ is $10\text{ m/s}^2$.</p>
<p>$$s = (0 \times 3) + \frac{1}{2}\cdot 10\cdot (3\times 3) = 45\text{ m}$$</p>
<p>But my teacher is telling me $25\text{ m}$! </p>
<p>EDITS: His reasoning was that from $t=0$ to $t=1$, $s=10\text{ m}$, and from $t=1$ to $t =2$, $s=20$...</p>
<p>The mark scheme also says $25\text{ m}$</p>
| Sophie | 431,586 | <p>Using the formula $s=ut+1/2at^2$</p>
<p>Where $a$ is the acceleration, $u$ is the initial velocity, $t$ is the time and $s$ is the displacement. </p>
<p>We can deduce that the displacement will be $45m$. </p>
<p>You are indeed correct!</p>
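<p>For concreteness, both readings of the question can be computed side by side with $s=ut+\frac12 at^2$ (this numeric sketch is my own addition, with $g=10\ \mathrm{m/s^2}$ as in the question): the total distance after $3$ s, and the distance covered during the 3rd second, $s(3)-s(2)$, which is the teacher's reading.</p>

```python
def s(t, u=0.0, a=10.0):
    # s = u t + (1/2) a t^2, dropped from rest with g = 10 m/s^2
    return u * t + 0.5 * a * t * t

total_after_3s = s(3)             # distance fallen in the first 3 seconds
during_3rd_second = s(3) - s(2)   # distance fallen between t = 2 s and t = 3 s
print(total_after_3s, during_3rd_second)
```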
|
615,093 | <p>How to prove that the following sequence converges to $0.5$?
$$a_n=\int_0^1{nx^{n-1}\over 1+x}dx$$
What I have tried:
I calculated the integral $$a_n=1-n\left(-1\right)^n\left[\ln2-\sum_{i=1}^n {\left(-1\right)^{i+1}\over i}\right]$$
I also noticed ${1\over2}<a_n<1$ $\forall n \in \mathbb{N}$.</p>
<p>Then I wrote a C program and verified that $a_n\to 0.5$ (I didn't know the answer before) by calculating $a_n$ up to $n=9990002$ (starting from $n=2$ and increasing $n$ by $10^4$ each time). I can't think of how to prove $\{a_n\}$ is monotone decreasing, which is clear from direct calculation.</p>
| Beni Bogosel | 7,327 | <p>Define $I_n =\displaystyle \int_0^1 \frac{x^n}{1+x}\,dx$. Then you can obtain immediately that $I_{n+1}+I_n = \displaystyle \frac{1}{n+1}$. Next note that $0\leq I_{n+1}\leq I_n$ since for $0\leq x \leq 1$ the inequality $0\leq \frac{x^{n+1}}{1+x} \leq \frac{x^n}{1+x}$ holds.</p>
<p>Therefore $I_n \to 0$ as $n \to \infty$. Thus we have
$$ a_{n+1}+a_n = (n+1)I_{n}+nI_{n-1} = 1+I_n \to 1 $$</p>
<p>Now if you prove that $a_n$ converges, you are done, since $a_{n+1}+a_n \to 1$.</p>
<p>(maybe this is more intricate...)</p>
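<p>A quick numeric check of this argument (a sketch of my own): the recurrence $I_k + I_{k-1} = 1/k$ with $I_0=\ln 2$ gives $a_n = nI_{n-1}$ directly, and one can watch it approach $1/2$ from above.</p>

```python
import math

def a(n):
    # a_n = n * I_{n-1}, using I_k = 1/k - I_{k-1} with I_0 = ln 2
    I = math.log(2.0)
    for k in range(1, n):
        I = 1.0 / k - I
    return n * I

print(a(10), a(100), a(1000))
```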
|
377,152 | <p>Let $n$ be a fixed natural number. Show that:
$$\sum_{r=0}^m \binom {n+r-1}r = \binom {n+m}{m}$$</p>
<p>(A): using a combinatorial argument and (B): by induction on $m$?</p>
| Brian M. Scott | 12,042 | <p>Finding this combinatorial argument isn’t altogether straightforward if you’ve not yet had much experience. The righthand side is easy to interpret: it’s the number of ways of choosing $m$ numbers from the set $[n+m]=\{1,2,\dots,n+m\}$. Similarly, each term on the lefthand side is easy to interpret: $\binom{n+r-1}r$ is the number of ways of choosing $r$ numbers from the set $[n+r-1]$. What’s not so clear is how to relate the two.</p>
<p>Suppose that I choose my set $S$ of $m$ integers from the set $[n+m]$. Let $k$ be the largest integer that I <strong>don’t</strong> choose: $k=\max\left([n+m]\setminus S\right)$. The other $n-1$ numbers not in $S$ must all be smaller than $k$, and there are $k-1$ positive integers smaller than $k$, so $S$ must contain $(k-1)-(n-1)=k-n$ members of the set $[k-1]$ of positive integers less than $k$. Thus, there are $\binom{k-1}{k-n}$ ways to choose the part of $S$ below $k$. And since $k$ is the largest number <strong>not</strong> in $S$, the rest of $S$ is already known: it’s $\{k+1,\dots,n+m\}$. Thus, the total number of ways of choosing an $m$-element subset of $[n+m]$ must be</p>
<p>$$\sum_k\binom{k-1}{k-n}\;.$$</p>
<p>To finish the combinatorial argument, answer the following questions:</p>
<ul>
<li>What is the range of possible values of $k$? </li>
<li>What is the relationship between my $k$ and the $r$ of the problem?</li>
</ul>
<hr>
<p>The proof by induction is very standard and very straightforward; the only tool that you need for the induction step is Pascal’s identity.</p>
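<p>The identity itself can at least be sanity-checked numerically for small values (a sketch of my own):</p>

```python
import math

def lhs(n, m):
    # sum_{r=0}^{m} C(n+r-1, r)
    return sum(math.comb(n + r - 1, r) for r in range(m + 1))

# compare against the right-hand side C(n+m, m) for a range of n and m
ok = all(lhs(n, m) == math.comb(n + m, m)
         for n in range(1, 8) for m in range(0, 8))
print(ok)
```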
|
1,456,411 | <p>When we're introduced to $\mathbb{R}^3$ in multivariable calculus, we first think of it as a collection of points. Then we're taught that you can have these things called <em>vectors</em>, which are (equivalence classes of) arrows that start at one point and end up at another.</p>
<p>At this point $\mathbb{R}^3$ is an affine space, not a vector space: for two points $x, y \in \mathbb{R}^3$, the operation $x + y$ is meaningless (my professor likes to say: "You can't add Chicago and New York!") but the operation $x - y$ gives a vector (the vector which points from New York to Chicago). You can also add a point and a vector, which gives you a translated point.</p>
<p>The distinction between the point $(0, 1, 2)$ and the vector $\langle 0, 1, 2 \rangle$ is sometimes made.</p>
<p>But then we quickly move on to treating $\mathbb{R}^3$ as a <em>vector</em> space, where instead of a point $A$, you have vectors starting at the origin with their tip at $A$. For example, parameterized curves such as</p>
<p>$$r(t) = (t, t^2, 3t)$$</p>
<p>are called "vector-valued functions" and not "point-valued functions". So, my question is, what is the reason that we historically don't define two spaces -- $\mathbb{R}^3$ and $\mathbb{R}^3_{\text{affine}}$? (I'm sure there's better notation).</p>
<p>For example, my "point-valued function" $r(t)$ would be a function $\mathbb{R} \rightarrow \mathbb{R}^3_{\text{affine}}$, but its derivative $r'(t)$ (the velocity <em>vector</em>) would be a function $\mathbb{R} \rightarrow \mathbb{R}^3$. What would this make more difficult?</p>
<p>In particular, I know that $\mathbb{R}^3 \iff \mathbb{R}^3_{\text{affine}}$ is a bijection, and that we use this sometimes, but how often in multivariable calculus? If we are using it all the time, then it wouldn't make sense to emphasize the distinction.</p>
| Ivo Terek | 118,056 | <p>In differential geometry, we deal with tangent spaces to manifolds. Let's make a precise distinction in our case. There is more than one definition of tangent space, and one can show certain equivalences between them.</p>
<p>Take $p\in \Bbb R^3$ and define: $$T_p(\Bbb R^3) = \{(p,v) \mid v \in \Bbb R^3\}.$$ So far, $p$ and $v$ are just triples of real numbers, but we think of $p$ as the point and $v_p \equiv (p,v)$ as a tangent vector to $\Bbb R^3$ in $p$. The set $T_p(\Bbb R^3)$ has a natural vector space structure given by $(p,v_1)+\lambda(p,v_2) = (p, v_1+\lambda v_2)$, and is then isomorphic to $\Bbb R^3$ via: $$T_p(\Bbb R^3) \ni (p,v)\mapsto v \in \Bbb R^3.$$ The issue is that we have $\Bbb R^3 \cong T_p(\Bbb R^3)$, but it is not true that given a manifold $M$, we have $M \cong T_p(M)$, since $M$ may not even be a vector space. And thus, confusion arises. Also, note that $T_p(\Bbb R^3) \neq T_q(\Bbb R^3)$ if $p\neq q$, so we can't "add Chicago to New York", but we'll have these tangent spaces isomorphic all right. For example, if you have a curve $r: I \to M$, now in general, we have that $r'(t) \in T_{r(t)}(M)$. When you start working with ambients other than $\Bbb R^3$ the distinction point/vector will be more clear and we'll be unable to hide all these isomorphisms under the carpet (when they exist).</p>
|
141,423 | <p>Let $V \subset H \subset V'$ be a Hilbert triple.</p>
<p>We can define a weak derivative of $u \in L^2(0,T;V)$ as the element $u' \in L^2(0,T;V')$ satisfying
$$\int_0^T u(t)\varphi'(t)=-\int_0^T u'(t)\varphi(t)$$
for all $\varphi \in C_c^\infty(0,T)$.</p>
<p>Then we define the space $W = \{u \in L^2(0,T;V) : u' \in L^2(0,T;V')\}$. We know for example that for $u, v \in W$,
$$\frac{d}{dt}(u(t),v(t))_H = \langle u'(t), v(t) \rangle + \langle v'(t), u(t) \rangle.\tag{1}$$</p>
<p>Now suppose I change my space of test functions and define a weak derivative as an element $u' \in L^2(0,T;V')$ satisfying
$$\int_0^T u(t)\varphi'(t)=-\int_0^T u'(t)\varphi(t)$$
for all $\varphi \in C_c^1(0,T)$.</p>
<p>Define $\tilde W = \{u \in L^2(0,T;V) : u' \in L^2(0,T;V')\}$ where the derivative is now with respect to these new test functions. How is this related to $W$? Do properties like (1) still hold? </p>
<p>I think yes, by uniqueness of weak derivatives. But I wanted to check in case I missed something.</p>
| Igor Khavkine | 2,622 | <p>The main property that you would want in your weak derivative is that it defines a closed, unbounded operator on $L^2$. Suppose you have an unbounded operator $A$ (for you it would be the derivative) defined on a dense domain $D(A)$ in $L^2$. As such, $A$ need not be closed (its graph in $L^2\times L^2$ is not a closed subspace). So the next step is to consider its closed extensions (operators whose graphs are closed subspaces of $L^2\times L^2$ which contain the graph of $A$ itself). Among these, there exists a minimal closed extension $\bar{A}$ with domain $D(\bar{A})\supseteq D(A)$ (its graph is the closure of the graph of $A$).</p>
<p>So, when it comes to the minimal closed extension $\bar{A}$, it doesn't matter which domain you pick for $D(A)$ as long as it produces the same $\bar{A}$. Two unbounded operators $A_0$ and $A_1$ (say the derivative on $C^\infty_c$ or $C^1_c$) define the same minimal closed extension, $\bar{A}_0 = \bar{A}_1$, if for instance the graph of $A_1$ is contained and dense in the graph of $\bar{A}_0$ (as it happens to be the case in your example).</p>
|
62,581 | <p>I have a 2D coordinate system defined by two non-perpendicular axes. I wish to convert from a standard Cartesian (rectangular) coordinate system into mine. Any tips on how to go about it?</p>
| J. M. ain't a mathematician | 498 | <p>I'll make the assumption that: </p>
<ol>
<li><p>The <em>oblique coordinate</em> system <span class="math-container">$(u,v)$</span> with angle <span class="math-container">$\varphi$</span> and the Cartesian system <span class="math-container">$(x,y)$</span> share an origin.</p></li>
<li><p>Axes <span class="math-container">$u$</span> and <span class="math-container">$v$</span> share the same unit of length with the Cartesian system.</p></li>
</ol>
<p>Consider the following diagram:</p>
<p><img src="https://i.stack.imgur.com/00qBO.png" alt="oblique coordinates"></p>
<p>We have at once the relationship <span class="math-container">$x=u+h$</span>. The acute angle formed by the <span class="math-container">$v$</span>-axis and <span class="math-container">$h$</span> is <span class="math-container">$\varphi$</span>, since alternate interior angles in a pair of parallel lines are congruent. We use trigonometry to deduce the relation <span class="math-container">$y=h\tan\,\varphi$</span>. Eliminating <span class="math-container">$h$</span> gives the equation for <span class="math-container">$u$</span>: <span class="math-container">$u=x-y\cot\,\varphi$</span>.</p>
<p>To find an expression for <span class="math-container">$v$</span>, we see that this length is equal to the length of the hypotenuse of the right triangle with legs <span class="math-container">$y$</span> and <span class="math-container">$h$</span>. This leads to the equation <span class="math-container">$y=v\sin\,\varphi$</span>. Solving for <span class="math-container">$v$</span> gives the equation <span class="math-container">$v=y\csc\,\varphi$</span>.</p>
<p>Thus, the conversion formulae from Cartesian to oblique coordinates with angle <span class="math-container">$\varphi$</span> are</p>
<p><span class="math-container">$$\begin{align*}u&=x-y\cot\,\varphi\\v&=y\csc\,\varphi\end{align*}$$</span></p>
<p>For completeness, the formulae for converting from oblique to Cartesian coordinates are</p>
<p><span class="math-container">$$\begin{align*}x&=u+v\cos\,\varphi\\y&=v\sin\,\varphi\end{align*}$$</span></p>
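<p>The two pairs of formulae are straightforward to implement and to check against each other (the code below is my own sketch):</p>

```python
import math

# Cartesian -> oblique with angle phi (shared origin and unit of length)
def to_oblique(x, y, phi):
    u = x - y / math.tan(phi)   # u = x - y*cot(phi)
    v = y / math.sin(phi)       # v = y*csc(phi)
    return u, v

# Oblique -> Cartesian (the inverse pair of formulae)
def to_cartesian(u, v, phi):
    return u + v * math.cos(phi), v * math.sin(phi)

phi = math.radians(60)
u, v = to_oblique(3.0, 2.0, phi)
x, y = to_cartesian(u, v, phi)   # round-trips back to (3, 2)
print(u, v, x, y)
```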
|
62,581 | <p>I have a 2D coordinate system defined by two non-perpendicular axes. I wish to convert from a standard Cartesian (rectangular) coordinate system into mine. Any tips on how to go about it?</p>
| Jiri Kriz | 12,741 | <p>Let us denote by "old" the usual cartesian system with orthogonal axes and by "new" the system with the skew axes $(\alpha_1, \alpha_2)^T, (\beta_1, \beta_2)^T$ (expressed in the old system). An old vector $(x,y)^T$ can be expressed as a linear combination of the skew vectors:
$$
\left( \begin{array}{c} x \\ y \end{array}\right) =
a \left( \begin{array}{c} \alpha_1 \\ \alpha_2\end{array}\right) +
b \left( \begin{array}{c} \beta_1 \\ \beta_2\end{array}\right) =
\left( \begin{array}{cc} \alpha_1 & \beta_1 \\ \alpha_2 & \beta_2 \end{array} \right) \left( \begin{array}{c} a \\ b \end{array}\right) =
A \left( \begin{array}{c} a \\ b \end{array}\right)
$$
Using this equation we can transform vectors from the new skew system to the old orthogonal system. Note that the columns of the matrix A are built from the coordinates of the skew basis vectors.</p>
<p>In order to transform from the old orthogonal system to the new skew system we need to invert the above equation and we get:</p>
<p>\begin{equation}
\left( \begin{array}{c} a \\ b \end{array}\right) =
A^{-1} \left( \begin{array}{c} x \\ y \end{array}\right) =
\frac{1}{\alpha_1 \beta_2 - \alpha_2 \beta_1}
\left( \begin{array}{cc} \beta_2 & -\beta_1 \\ -\alpha_2 & \alpha_1 \end{array}\right)
\left( \begin{array}{c} x \\ y \end{array}\right)
\end{equation}</p>
<p>The above inverse matrix $A^{-1}$ should be computed once and then used for the transformation of all needed points. The inversion is possible when the respective denominator is not zero, i.e. when the new skew axes are not parallel.</p>
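<p>A small numeric sketch of this matrix method (my own code; the skew basis vectors below are arbitrary examples):</p>

```python
# New -> old:  (x, y)^T = A (a, b)^T, columns of A are the skew basis vectors.
# Old -> new:  invert A by hand with the 2x2 cofactor formula.
alpha = (1.0, 0.0)   # first skew basis vector (example)
beta = (0.5, 2.0)    # second skew basis vector (example)

def new_to_old(a, b):
    return (a * alpha[0] + b * beta[0], a * alpha[1] + b * beta[1])

def old_to_new(x, y):
    det = alpha[0] * beta[1] - alpha[1] * beta[0]   # must be nonzero
    a = (beta[1] * x - beta[0] * y) / det
    b = (-alpha[1] * x + alpha[0] * y) / det
    return (a, b)

x, y = new_to_old(2.0, 3.0)
a, b = old_to_new(x, y)   # round-trips back to (2, 3)
print(x, y, a, b)
```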
|
3,300,793 | <p>Having an immense amount of trouble trying to figure this problem out, and the more I think and ask about it the more confused I seem to get. I think I have finally figured it out so can someone who truly knows the correct answer justify this?</p>
<p>Problem:Let <span class="math-container">$A=\{a,b,c\}$</span>,Let <span class="math-container">$\mathcal{P}(A)=\{S:S\subseteq A\}$</span></p>
<p>Q1.Is the relation "is a subset of" a relation on <span class="math-container">$A$</span>?</p>
<p>Q2.Is the relation "is a subset of" a relation on <span class="math-container">$\mathcal{P}(A)$</span>?</p>
<p>Q1.Attempt:</p>
<p>I would say the relation "is a subset of" is <strong>NOT</strong> a relation on <span class="math-container">$A$</span></p>
<p>Since <span class="math-container">$A \times A=\{(a,a),(a,b),(a,c),(b,a),(b,b),(b,c),(c,a),(c,b)(c,c)\}$</span></p>
<p>Knowing <span class="math-container">$R \subseteq A \times A$</span></p>
<p>It is clear to me <span class="math-container">$a,b,c$</span> are elements, not sets, so an element in the relation on <span class="math-container">$A$</span> takes the form of <span class="math-container">$(a,a)$</span> for <span class="math-container">$a \in A$</span>.So a "subset relation" cannot be defined on <span class="math-container">$A$</span>.</p>
<p>Since <span class="math-container">$a \subseteq a$</span> is false for all <span class="math-container">$a \in A$</span> because <span class="math-container">$a$</span> is an element not a set</p>
<p>Q2.However a subset relation <strong>can</strong> be defined on <span class="math-container">$\mathcal{P}(A)$</span></p>
<p>Since elements of a relation on <span class="math-container">$\mathcal{P}(A)$</span> are actually ordered pairs of sets, now it <strong>is</strong> possible to define a "subset relation" on <span class="math-container">$\mathcal{P}(A)$</span>.</p>
<p>For example <span class="math-container">$(\{a\},\{a,b,c\})\in \mathcal{P}(A) \times \mathcal{P}(A)$</span></p>
<p>and since for a relation on <span class="math-container">$\mathcal{P}(A)$</span>, <span class="math-container">$R \subseteq \mathcal{P}(A) \times \mathcal{P}(A)$</span> and <span class="math-container">$\{a\} \subseteq \{a,b,c\}$</span></p>
<p>Now it is possible to define the relation "is a subset" of on <span class="math-container">$\mathcal{P}(A)$</span></p>
| JMoravitz | 179,297 | <p>To summarize my comments above, all that a set needs to be to be called a relation is to be a subset of a cartesian product of sets. Nothing more, nothing less. With regards to Q1, after some rewording the question asks if the set of ordered pairs <span class="math-container">$\{(x,y)\in A\times A~:~x\subseteq y\}$</span> is a relation.</p>
<p>Regardless of your point of view or how pedantic you want to be about whether the elements of <span class="math-container">$A$</span> are able to themselves be sets or not, as soon as you have a decided upon interpretation of how the elements of <span class="math-container">$A$</span> behave and are defined you should be able to agree that for each <span class="math-container">$x,y\in A$</span> the statement <span class="math-container">$x\subseteq y$</span> has a truth value as being clearly true or clearly false with no ambiguity. We can say that <span class="math-container">$5\not\subseteq 3$</span> for instance, whether that is because you treat <span class="math-container">$5$</span> and <span class="math-container">$3$</span> as sets and you see that <span class="math-container">$5$</span> is not a subset of <span class="math-container">$3$</span>, or whether you choose to treat them as atomic elements who are not sets and you say that <span class="math-container">$5\not\subseteq 3$</span> for the reason that these are not sets.</p>
<p>@MatthewLeingang is correct when he points out that <span class="math-container">$(\{a\},\{a\})\not\in A\times A$</span>, however he is incorrect in assuming that this has any relevance to the problem at hand whatsoever. We never cared about whether <span class="math-container">$(\{a\},\{a\})$</span> was an element of our relation or not (<em>and it not being in our relation wouldn't have mattered anyways</em>), we only cared about seeing which elements (<em>if any</em>) of <span class="math-container">$A\times A$</span> are included in our relation.</p>
<p>The only reason why the above would <em>not</em> have been a relation is in the event that the condition for an ordered pair to be in the set is ill-defined or undefined. The set <span class="math-container">$\{(x,y)\in A\times A~:~x\heartsuit y\}$</span> is such an example since we don't yet know the meaning of the symbol <span class="math-container">$\heartsuit$</span>.</p>
<p>Otherwise, just because a relation is empty does not stop it from being a relation. Again, even if you disagree on whether or not <span class="math-container">$a,b,c$</span> should be considered <a href="https://en.wikipedia.org/wiki/Urelement" rel="nofollow noreferrer">pure "elements" and not sets</a> or not, that does not stop the described relation from being well defined.</p>
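<p>As a concrete sanity check (my own sketch), one can literally enumerate the relation $\{(S,T)\in\mathcal P(A)\times\mathcal P(A) : S\subseteq T\}$ for $A=\{a,b,c\}$; it is a perfectly well-defined (and nonempty) set of ordered pairs:</p>

```python
from itertools import combinations

A = ["a", "b", "c"]
# the power set of A, as frozensets
power = [frozenset(c) for r in range(len(A) + 1)
         for c in combinations(A, r)]

# the relation "is a subset of" on P(A)
R = [(S, T) for S in power for T in power if S <= T]
print(len(power), len(R))
```

(Each element of $A$ is independently absent from both sets, in $T$ only, or in both, so the relation has $3^3=27$ pairs.)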
<hr>
<p>That all being said, depending on which style of set theory you are studying, the most common foundations to set theory that people study in the modern age is <a href="https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory" rel="nofollow noreferrer">Zermelo-Fraenkel set theory</a>. In an attempt to make things as clean and unambiguous as possible, we restrict our attention to <a href="https://en.wikipedia.org/wiki/Hereditary_set" rel="nofollow noreferrer">pure sets</a> and every mathematical object, number, set, function, operation, statement, etc... are all also considered sets. (<em>There are things that ZF can't handle, such as proper classes. Read about other set theories in the wiki link above such as ZFG</em>)</p>
<p>Under this interpretation, even if we didn't intend it, <span class="math-container">$a,b,c$</span> are all actually sets too! And as such, not only is the relation defined, just like before, it is even nonempty as it is guaranteed to contain at least the pairs <span class="math-container">$(a,a),(b,b)$</span> and <span class="math-container">$(c,c)$</span> and depending on the values of <span class="math-container">$a,b,c$</span> possibly more.</p>
|
<p>What is the probability of getting $6$ $K$ times in a row when rolling a die $N$ times?</p>
<p>I thought it's $(1/6)^k*(5/6)^{n-k}$ and that times $N-K+1$ since there are $N-K+1$ ways to place an array of consecutive elements to $N$ places.</p>
| Lazar Šćekić | 519,090 | <p>So, let's say that X is a random variable for tracing the number of 6s.
Let's say that C is the condition for the 6s to be consecutive.
Since the task is to find the probability of getting at least K subsequent 6s, we can look for the probability of the event A (at least K subsequent 6s fell) this way:
P(A)=P(X=K|C)+P(X=K+1|C)+...+P(X=K+N|C)+...,
where P(X=K|C) is the probability of K 6s falling under the condition that they are consecutive.
Since the event that K 6s have fallen is independent from the event that all the 6s are in order, we can say that:
P(X=K|C)=P(X=K)*P(C).
Is this the right way to go?</p>
<p>Edit:
Doesn't seem right, because the events are NOT independent: K 6s need to fall in a row for them to be consecutive.</p>
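<p>For reference, the quantity the original question asks for (the probability of at least $K$ consecutive 6s somewhere in $N$ rolls) can be computed exactly with a small dynamic program over the current run length. This sketch is my own and is not the formula attempted above:</p>

```python
# P(at least k consecutive sixes in n rolls of a fair die),
# tracked with a DP over the current run length 0..k-1.
def prob_run(n, k, p=1.0 / 6.0):
    state = [0.0] * k   # state[j] = P(current run length is j, no success yet)
    state[0] = 1.0
    success = 0.0
    for _ in range(n):
        new = [0.0] * k
        for j, mass in enumerate(state):
            new[0] += mass * (1 - p)       # not a six: run resets
            if j + 1 == k:
                success += mass * p        # run of length k completed
            else:
                new[j + 1] += mass * p     # run extends
        state = new
    return success

print(prob_run(2, 2), prob_run(2, 1))
```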
|
2,951,242 | <p>1) <span class="math-container">$\operatorname{cl}(\mathbb R)$</span></p>
<p>2) <span class="math-container">$\operatorname{int}([1, \infty) \cup \{3\})$</span></p>
<p>3) <span class="math-container">$\partial\bigl((-1,\infty)\bigr) \cap \{-3\}$</span> (here the boundary operator is meant)</p>
<p>My solution:
1) it is the same, <span class="math-container">$\mathbb R$</span>;
2) <span class="math-container">$(1,\infty) \cap \{3\}$</span>;
3) <span class="math-container">$\{-1,-3\}$</span>.
Correct or no?</p>
| José Carlos Santos | 446,262 | <ol>
<li>Correct: it is <span class="math-container">$\mathbb R$</span>.</li>
<li>Correct, but why didn't you just write that the interior is <span class="math-container">$(-1,\infty)$</span>?</li>
<li>Wrong. Since <span class="math-container">$\partial(-1,\infty)=\{-1\}$</span>, <span class="math-container">$\bigl(\partial(-1,\infty)\bigr)\cap\{3\}=\emptyset$</span>. Unless you meant <span class="math-container">$\partial\bigl((-1,\infty)\cap\{3\}\bigr)$</span>, in which case the answer is <span class="math-container">$\{3\}$</span>.</li>
</ol>
|
2,640,477 | <p>According to <a href="https://rads.stackoverflow.com/amzn/click/0073383090" rel="nofollow noreferrer">Rosen</a>, an infinite set A is countable if $|A|= |\mathbb{Z}^+|$ which in turn can be established by finding a bijection from A to $\mathbb{Z}^+$.</p>
<p>Also, a sequence is defined as a function from $\mathbb{Z}^+$ (or $\{0\} \cup \mathbb{Z}^+$) to some set.</p>
<p>With the above, a sequence is certainly enumerable. However, it need not be a bijection, e.g. Fibonacci(1) = Fibonacci(2) = 1.</p>
<p>This implies that not every sequence is countable, which seems counterintuitive. Are there any results in this regard? Is there a mistake in the reasoning above?</p>
| Netchaiev | 517,746 | <p>Every sequence has a countable or a finite set of values. </p>
<p>Besides, you are mixing two ideas: a sequence $(u_n)_n$ is a function $n\mapsto u_n\in F$ ($F$ being any possible set) and is almost never a bijection, but the set of all its values is finite or countable.</p>
|
1,480,720 | <p>How many times do you have to flip a coin such that the probability of getting $2$ heads in a row is at least $1/2$?</p>
<p>I tried using a Negative Binomial:
$P(X=2)=\binom{n-1}{r-1}p^r\times(1-p)^{n-r} \geq 1/2$ where $r = 2$ and $p = 1/4$. However, I don't get a value of $n$ that makes sense.</p>
<p>Thank You</p>
| OFRBG | 42,793 | <p>Well, I think you may think of it as a binary tree. The tree either splits toward $H$ or toward $T$. We want to find the level of the tree where $2^{n-1}$ nodes have at least double heads. There might be a neater way of doing this, but this works:</p>
<ul>
<li>Take the first trial. You get either $T$ or $H$.</li>
<li>The next trial for each branch is either $T$ or $H$, giving the combinations $HH,HT,TT,TH$. We have 1 out of 4 branches that have a consecutive $H$.</li>
<li>For the next step, we expand the branches again. Now we have 2 "original" branches that cover the requirement, spawning from the former $HH$ and a new one from $TH$, so that's $3$ out of $8$.</li>
<li>We follow this idea and spawn doubles from the branches that fulfilled the condition. That's $4$ from the $HH$, and $2$ from the $HT$. That's $6$. Then we add $2$ from the branches of $TTH$ and $HTH$. That's eight out of sixteen.</li>
</ul>
<p>So you need at least $4$ trials to be half sure of getting at least two consecutive heads.</p>
<hr>
<p>Don't take this too seriously yet! Work in progress. Suggestions are welcome.</p>
<p>Maybe you could model it as a propagating condition. So first you propagate one out of four, and with each level extra you get $2^{n-1}$ successful branches. Also, for each new level you add, you add $(n-2)$ successful branches. Then you make a sum a certain level:</p>
<p>$$
2^M + \sum_{k=1}^{M}k\ 2^{M-k}\
$$</p>
<p>where $M$ is the number of trials minus the number of consecutive heads you want. I'll try to prove it when I get some extra time.</p>
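<p>The claim that four flips suffice is easy to confirm by brute force. A short Python check that enumerates every flip sequence and finds the smallest $n$ for which the probability of seeing $HH$ is at least $1/2$:</p>

```python
from itertools import product

def prob_two_heads(n):
    """Probability that n fair-coin flips contain HH somewhere."""
    hits = sum(1 for s in product("HT", repeat=n) if "HH" in "".join(s))
    return hits / 2 ** n

n = 1
while prob_two_heads(n) < 0.5:
    n += 1
# n == 4: exactly 8 of the 16 length-4 sequences contain a consecutive HH
```

<p>This agrees with the tree count above: $3$ out of $8$ branches at three flips, then $8$ out of $16$ at four.</p>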
|
3,464,282 | <p>I have a heat type equation
<span class="math-container">$$\frac{d}{dt}V + \frac{1}{2} \sigma^{2} S^{2} \frac{d^{2}}{dS^{2}}V + (r-D) S \frac{d}{ds} V - rV = 0$$</span></p>
<p>Am asked to prove the solution is separable
<span class="math-container">$$V=A(t) B(s)$$</span>
and that A(t) is 1st order diff eq and B(S) 2nd order diff eq.</p>
<p>I did
<span class="math-container">$$\frac{d}{dt}V=B(s) \frac{d}{dt}A(t)$$</span>
and
<span class="math-container">$$\frac{d}{dS}V=A(t) \frac{d}{dS}B(S)$$</span>
and
<span class="math-container">$$\frac{d^2}{dS^2}V=A(t) \frac{d^2}{dS^2}B(S)$$</span></p>
<p>Plugged it in, got</p>
<p><span class="math-container">$$\frac{\frac{d}{dt} A(t)}{A(t)} + \frac{1}{2} \sigma^{2} S^{2} \frac{\frac{d^{2}}{dS^{2}} B(S)}{B(S)} + (r-D) S \frac{\frac{d}{dS} B(S)}{B(S)} - r = 0$$</span></p>
<p>but don't know where to go from here?</p>
| gt6989b | 16,192 | <p>For simpler notation, denote <span class="math-container">$F_x(x,t)$</span> the partial of <span class="math-container">$F$</span> wrt <span class="math-container">$x$</span>. Your PDE is then
<span class="math-container">$$V_t + \frac{\sigma^2 S^2}{2} V_{SS} + (r-D)S V_S - rV = 0,$$</span>
which looks like the Black-Scholes equation BTW.</p>
<p>Under the assumption <span class="math-container">$V(t,S) = A(t) B(S)$</span> you have <span class="math-container">$V_t = A_t B, V_S = A B_S$</span> and <span class="math-container">$V_{SS} = AB_{SS}$</span>, substituting you get
<span class="math-container">$$
A_t B + \frac{\sigma^2 S^2}{2} AB_{SS} + (r-D)S AB_S - rAB = 0
$$</span>
Dividing both sides by <span class="math-container">$V = AB$</span> you get
<span class="math-container">$$
\frac{A_t}{A} + \frac{\sigma^2 S^2}{2} \frac{B_{SS}}{B} + (r-D)S \frac{B_S}{B} - r = 0
$$</span>
which is equivalent to
<span class="math-container">$$
\frac{\sigma^2 S^2}{2} \frac{B_{SS}}{B} + (r-D)S \frac{B_S}{B} = r - \frac{A_t}{A}
$$</span>
but now the LHS only depends on <span class="math-container">$S$</span> and the RHS only depends on <span class="math-container">$t$</span>, for all values of <span class="math-container">$S,t$</span>, which is only possible if both LHS and RHS are equal to the same constant, say <span class="math-container">$c$</span>.</p>
<p>So you get two independent ODEs out of your PDE:
<span class="math-container">$$
\begin{split}
\frac{\sigma^2 S^2}{2} \frac{B_{SS}}{B} + (r-D)S \frac{B_S}{B} &= c\\
r - \frac{A_t}{A} &= c
\end{split}
$$</span></p>
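<p>As a sanity check, one concrete separable solution is easy to verify numerically. The trial <span class="math-container">$B(S)=S$</span> forces <span class="math-container">$c=r-D$</span> in the second ODE, and then <span class="math-container">$A(t)=e^{(r-c)t}=e^{Dt}$</span> solves the first, so <span class="math-container">$V(t,S)=e^{Dt}S$</span> should satisfy the PDE. A minimal Python sketch with finite differences (the parameter values below are arbitrary assumptions):</p>

```python
import math

sigma, r, D = 0.4, 0.05, 0.02       # arbitrary test parameters

def V(t, S):
    # B(S) = S gives c = r - D in the S-equation, and then A(t) = e^{(r-c)t} = e^{Dt}
    return math.exp(D * t) * S

h = 1e-5
def pde_residual(t, S):
    Vt  = (V(t + h, S) - V(t - h, S)) / (2 * h)
    VS  = (V(t, S + h) - V(t, S - h)) / (2 * h)
    VSS = (V(t, S + h) - 2 * V(t, S) + V(t, S - h)) / h ** 2
    return Vt + 0.5 * sigma**2 * S**2 * VSS + (r - D) * S * VS - r * V(t, S)

assert abs(pde_residual(0.7, 1.3)) < 1e-4   # residual is zero up to numerical noise
```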
|
1,439,850 | <p>So the problem states that the centre of the circle is in the first quadrant and that circle passes through $x$ axis, $y$ axis and the following line: $3x-4y=12$. I have only one question. The answer denotes $r$ as the radius of the circle and then assumes that centre is at $(r,r)$ because of the fact that the circle passes through $x$ and $y$ axis. I was thinking that this single fact does not permit one to assume that centre must be at $(r,r)$, simply because the centre may be positioned in such a manner that the distance to $y$ and $x$ axis is not the same and not necessarily $r$. Is my thinking correct? If not, why? </p>
| E.H.E | 187,799 | <p>$$x^2y'+3xy=1$$
divide by $x$
$$xy'+3y=\frac{1}{x}$$
We can solve this ODE by the Euler–Cauchy method.</p>
<p>1- to find the complementary solution
$$xy'+3y=0$$
assume
$$y_c=x^m$$
$$y'=mx^{m-1}$$
substitute it to get
$$m=-3$$
hence
$$y_c=C_1x^{-3}=\frac{C_1}{x^3}$$ </p>
<p>2- to find the particular solution
$$y_p=\frac{A}{x}$$
$$y'_p=-\frac{A}{x^2}$$
substitute it to get
$$A=\frac{1}{2}$$</p>
<p>$$y=y_c+y_p=\frac{C_1}{x^3}+\frac{1}{2x}$$</p>
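<p>A quick numerical check that $y=\frac{C_1}{x^3}+\frac{1}{2x}$ really satisfies $x^2y'+3xy=1$, using a central difference for $y'$ (the value of $C_1$ is an arbitrary choice):</p>

```python
C1 = 5.0                       # arbitrary constant of integration
y = lambda x: C1 / x**3 + 1 / (2 * x)

h = 1e-6
def lhs(x):
    yprime = (y(x + h) - y(x - h)) / (2 * h)   # central-difference derivative
    return x**2 * yprime + 3 * x * y(x)

for x in (0.5, 1.0, 2.0):
    assert abs(lhs(x) - 1.0) < 1e-4            # x^2 y' + 3x y = 1 as required
```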
|
1,821,186 | <p>Why is the solution of $|1+3x|<6x$ only $x>1/3$? After applying the properties of modulus, I get $-6x<1+3x<6x$. And after solving each inequality, I get $x>-1/9$ and $x>1/3$, but why is $x>-1/9$ rejected? </p>
| MPW | 113,214 | <p><strong>Hint:</strong> Well, you have a countable collection of points to use as centers of balls, and you have a countable basis $\{B(x,\tfrac1n):n\in\mathbb N\}$ at each such point $x$, so...</p>
|
2,900,454 | <p>There are so many different methods I've found on SE and through Matlab, and they're all giving me different results.</p>
<p>Specifically, I have {v1} = (1,2,1) and {v2} = (2,1,0) in set S. What is the method to find {v3} vectors that are orthogonal to both v1 and v2?</p>
<p>I'm preparing for a final and I'm trying to find a flexible method for many cases. The answer I got for above was v3 = {1,-2,3} but different methods are returning different results.</p>
| PackSciences | 588,260 | <p>What I would do:</p>
<ul>
<li>Compute the planes orthogonal to both of your vectors.</li>
</ul>
<p>A plane orthogonal to the vector $(a,b,c)$ has the equation $ax + by + cz + d = 0, \forall d \in \mathbb{R}$</p>
<ul>
<li>Compute the intersection of the two planes by replacing the first plane equation in the second one</li>
</ul>
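<p>Equivalently, the intersection direction of those two planes is just the cross product of $v_1$ and $v_2$, which is the standard way to get a vector orthogonal to both. A small Python sketch for the vectors in the question; note that any nonzero scalar multiple of the result is equally valid, which is why different methods return different-looking answers:</p>

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

v1, v2 = (1, 2, 1), (2, 1, 0)
v3 = cross(v1, v2)          # (-1, 2, -3): a scalar multiple of (1, -2, 3)
assert dot(v3, v1) == 0 and dot(v3, v2) == 0
```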
|
2,901,783 | <p>I am having trouble solving a multi part question.</p>
<p>Express $ \frac x{x^2-3x + 2} $ in the partial fraction form.</p>
<p>The answer I got was $\frac2{x-2}-\frac1{x-1}$ .</p>
<p>The problem comes when they asked:</p>
<p>Show that, if $x$ is so small that $x^3$ and higher powers of $x$ can be neglected, then:</p>
<p>$$\frac x{x^2-3x +2}\approx\frac12x+\frac34x^2$$</p>
<p>I understand that I had to expand the partial fraction using Generalized Binomial Theorem such that I needed to expand: $-\left[1+\left(-\frac12x\right)\right]^{-1}+\left[1+\left(-x\right)\right]^{-1}$ since I needed to manipulate the equation into $\left(1+a\right)^n$ format.</p>
<p>I got $$\frac12x+\frac34x^2+\frac78x^3+\frac{15}{16}x^4+\cdots$$</p>
<p>but I am not sure how to continue to <strong>show that , if $x$ is so small that $x^3$ and higher powers of $x$ can be neglected</strong>. Do i just sub in numbers ranging from -1 to 1 and that's it or is there a structured way to show?</p>
| Ennar | 122,131 | <p>I believe you are already done and just have a linguistic issue; I think that "if $x$ is so small that $x^3$ and higher powers of $x$ can be neglected, then [...]" should be interpreted as "if $x$ has property $P(x)$, then [...]" where $P(x)\equiv$ "$x^3$ and higher powers can be neglected". That is, you are not supposed to show that you can neglect $x^3$, it's an assumption. Thus, it's the same question as "show that the power series expansion is $x/2 + 3x^2/4\, + $ higher order terms."</p>
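<p>To see the "neglect $x^3$" statement numerically: the difference between $\frac{x}{x^2-3x+2}$ and $\frac12 x+\frac34 x^2$ shrinks like $x^3$, with leading coefficient $7/8$ — exactly the next term you computed. A quick Python check:</p>

```python
f = lambda x: x / (x**2 - 3*x + 2)
approx = lambda x: x / 2 + 3 * x**2 / 4

x = 0.01
err = abs(f(x) - approx(x))
assert err < 1e-6                       # the error is invisible at order x^2
assert abs(err / x**3 - 7/8) < 0.05     # ...because it shrinks like (7/8) x^3
```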
|
628,681 | <p>The computed moments of log normal distribution can be found <a href="http://en.wikipedia.org/wiki/Log-normal_distribution#Arithmetic_moments">here</a>. How to compute them?</p>
| heropup | 118,193 | <p>If $X$ is lognormal, then $Y = \log X$ is normal. So consider $${\rm E}[X^k] = {\rm E}[e^{kY}] = \int_{y=-\infty}^\infty e^{ky} \frac{1}{\sqrt{2\pi}\sigma} e^{-(y-\mu)^2/(2\sigma^2)} \, dy. $$ Now observe that $$\begin{align*} ky - \frac{(y-\mu)^2}{2\sigma^2} &= - \frac{-2k\sigma^2 y + y^2 - 2\mu y + \mu^2}{2\sigma^2} \\ &= -\frac{1}{2\sigma^2}\left(y^2 - 2(\mu + k\sigma^2)y + (\mu + k \sigma^2)^2 + \mu^2 - (\mu + k \sigma^2)^2\right) \\ &= -\frac{\left(y - (\mu+k\sigma^2)\right)^2}{2\sigma^2} + \frac{k(2\mu + k \sigma^2)}{2}. \end{align*}$$ Thus the $k^{\rm th}$ raw moment is simply $${\rm E}[X^k] = e^{k(2\mu + k\sigma^2)/2} \int_{y=-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma} e^{-(y - \mu')^2/(2\sigma^2)} \, dy,$$ where $\mu' = \mu + k \sigma^2$. But this latter integral is equal to 1, being the integral of a normal density with mean $\mu'$ and variance $\sigma^2$. So ${\rm E}[X^k] = e^{k(2\mu + k\sigma^2)/2}$. The variance of $X$ is then easily calculated from ${\rm Var}[X] = {\rm E}[X^2] - {\rm E}[X]^2$.</p>
<p>In fact, the expression for the $k^{\rm th}$ raw moment of $X$ that we derived is actually also the moment generating function of $Y = \log X$.</p>
<hr>
<p><strong>Addendum.</strong> A somewhat different computation can be made from the observation that $$\frac{Y - \mu}{\sigma} = Z \sim \operatorname{Normal}(0,1),$$ so $$\operatorname{E}[X^k] = \operatorname{E}[e^{kY}] = \operatorname{E}[e^{k(\sigma Z + \mu)}] = e^{k \mu + (k \sigma)^2/2} \operatorname{E}[e^{(k\sigma) Z - (k\sigma)^2/2}].$$ Then $$\operatorname{E}[e^{(k \sigma) Z - (k \sigma)^2/2}] = \int_{z=-\infty}^\infty \frac{e^{-z^2/2 + (k \sigma) z - (k \sigma)^2/2}}{\sqrt{2\pi}} \, dz = \int_{z=-\infty}^\infty \frac{e^{-(z-k\sigma)^2/2}}{\sqrt{2\pi}} \, dz = 1,$$ and the result is proven.</p>
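<p>The closed form ${\rm E}[X^k] = e^{k(2\mu + k\sigma^2)/2}$ is easy to confirm by simulation; a minimal Python sketch (the parameter values are arbitrary test choices):</p>

```python
import math
import random

random.seed(0)
mu, sigma, k = 0.1, 0.2, 2          # arbitrary test parameters
n = 200_000

# X = e^Y with Y ~ N(mu, sigma^2); estimate E[X^k] = E[e^{kY}] by simulation
est = sum(math.exp(k * random.gauss(mu, sigma)) for _ in range(n)) / n
exact = math.exp(k * mu + (k * sigma) ** 2 / 2)   # = e^{k(2 mu + k sigma^2)/2}
assert abs(est / exact - 1) < 0.01
```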
|
628,681 | <p>The computed moments of log normal distribution can be found <a href="http://en.wikipedia.org/wiki/Log-normal_distribution#Arithmetic_moments">here</a>. How to compute them?</p>
| Machinato | 240,067 | <p>First let us write <span class="math-container">$X = \exp Y$</span> where <span class="math-container">$Y$</span> is normal. Let us denote <span class="math-container">$\operatorname{E}\left[Y\right]=\mu$</span> and <span class="math-container">$\operatorname{Var}\left[Y\right]=\sigma^2$</span>. We can write <span class="math-container">$Y = \mu + \sigma Z$</span> where <span class="math-container">$Z\sim N(0,1)$</span>. Therefore</p>
<blockquote>
<p><span class="math-container">$$\operatorname{E}\left[X^k\right]=\operatorname{E}\left[e^{kY}\right]=\operatorname{E}\left[e^{k\mu + k\sigma Z}\right]=e^{k\mu}\operatorname{E}\left[e^{k\sigma Z}\right]=e^{k\mu}\sum_{n=0}^\infty\frac{(k\sigma)^n}{n!}\operatorname{E}\left[Z^n\right]$$</span></p>
</blockquote>
<p>Let us find the moments of normal distribution. Note from the symmetry that</p>
<blockquote>
<p><span class="math-container">$$\operatorname{E}\left[Z^{2n+1}\right]=0\qquad ,n=0,1,2,\ldots$$</span></p>
</blockquote>
<p>From the Central Limit Theorem on a set <span class="math-container">$\{Z_1,Z_2,\ldots Z_m\}$</span> of i.i.d. random variables with <span class="math-container">$Z_1 \sim N(0,1)$</span> we have</p>
<blockquote>
<p><span class="math-container">$$\frac{\bar{Z}-0}{1/\sqrt{m}}= \frac{1}{\sqrt{m}}\sum_{i=1}^m Z_i \overset{m\rightarrow\infty}{\longrightarrow} Z \sim N(0,1)$$</span></p>
</blockquote>
<p>Therefore (only even exponents)
<span class="math-container">$$\operatorname{E}\left[Z^{2n}\right]\approx \frac{1}{m^n}\operatorname{E}\left[\sum_{i_1,i_2,\ldots,i_{2n}} Z_{i_1}Z_{i_2}\cdots Z_{i_{2n}}\right]$$</span></p>
<p>In the limit <span class="math-container">$m\rightarrow\infty$</span> the only dominant term comes from a summation of <strong>most distinct</strong> combinations. The most populous combinations are ones which contain only pairs of <span class="math-container">$Z_i$</span> and <span class="math-container">$Z_j$</span> with the same <span class="math-container">$i=j$</span>. The combinations with quadruples <span class="math-container">$Z_iZ_iZ_iZ_i$</span> and higher order of repetition are even less present. We will therefore count the number of possible pairings. Since <span class="math-container">$m$</span> is large and <span class="math-container">$\operatorname{E}\left[Z_i^2\right]=1$</span> for any <span class="math-container">$i$</span>, the problem is equivalent to counting the number of words which consist of <span class="math-container">$2n$</span> letters, the letters are <span class="math-container">$L_1,L_2,L_3,\ldots,L_n$</span> and each letter appears twice in the word (the factor <span class="math-container">$m^n$</span> is cancelled, for each <span class="math-container">$L_i$</span> there is one <span class="math-container">$m$</span>). According to the formula for repeated permutations, there are exactly</p>
<p><span class="math-container">$$\frac{(2n)!}{2!2!\cdots 2!}=\frac{(2n)!}{2!^n}$$</span></p>
<p>words. These words represent different "colorings", or assignments, for the <span class="math-container">$i_{j}$</span>. Since there are exactly <span class="math-container">$n$</span> of these <span class="math-container">$i_j$</span>'s and since they can be interchanged in the sum, we get</p>
<p><span class="math-container">$$\operatorname{E}\left[Z^{2n}\right] = \frac{(2n)!}{2!^n n!}$$</span></p>
<p>Substituting this result into the formula for <span class="math-container">$\operatorname{E}\left[X^k\right]$</span>, we get</p>
<blockquote>
<p><span class="math-container">$$\operatorname{E}\left[X^k\right]=e^{k\mu}\sum_{n=0}^\infty\frac{(k\sigma)^{2n}}{(2n)!}\frac{(2n)!}{2!^n n!} = e^{k\mu}\sum_{n=0}^\infty\frac{1}{n!}\left(\frac{k^2\sigma^2}{2}\right)^{n} = e^{k\mu}e^{\frac12 k^2 \sigma^2} = \exp\left(k\mu+\frac12 k^2 \sigma^2\right)$$</span></p>
</blockquote>
|
2,988,089 | <p>Let A, B, C, and D be sets. Prove or disprove the following:</p>
<pre><code> (A ∩ B) ∪ (C ∩ D)= (A ∩ D) ∪ (C ∩ B)
</code></pre>
<p>I am just wondering can i simply prove it using a membership table ( seems to easy ) or do i have to use setbuilder notation?</p>
<p>Thank you!</p>
| Mohammad Riazi-Kermani | 514,496 | <p>Let <span class="math-container">$$A=B=\{1,2,3,4,5\}$$</span> and <span class="math-container">$$C=D=\{6,7,8,9,10\}$$</span> We have <span class="math-container">$$(A ∩ B) ∪ (C ∩ D)= \{1,2,3,4,5,6,7,8,9,10\}$$</span></p>
<p>while <span class="math-container">$$ (A ∩ D) ∪ (C ∩ B) =\emptyset $$</span></p>
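<p>The counterexample is immediate to verify with Python's set operations:</p>

```python
A = B = {1, 2, 3, 4, 5}
C = D = {6, 7, 8, 9, 10}

lhs = (A & B) | (C & D)
rhs = (A & D) | (C & B)
assert lhs == set(range(1, 11))
assert rhs == set()
assert lhs != rhs        # the claimed identity fails, so it is disproved
```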
|
262,745 | <p>I need to find the normal vector of the form Ax+By+C=0 of the plane that includes the point (6.82,1,5.56) and the line (7.82,6.82,6.56) +t(6,12,-6), with A=1.</p>
<p>Of course, this is easy to do by hand, using the cross product of two lines and the point. There's supposed to be an automated way of doing it, though, and I can't find it.
Any ideas on an efficient way of doing it?</p>
| Michael E2 | 4,999 | <p>An algebraic approach to add to the mix of answers:</p>
<pre><code>n = {1, b, c}; (* unknown normal with a=1 *)
p = {x, y, z}; (* free point on the plane *)
pt = {6.82, 1, 5.56}; (* the given point *)
lineeq = {7.82, 6.82, 6.56} + t {6, 12, -6}; (* the given line *)
coeff = SolveAlways[n.(p-pt) == 0 /. {Thread[p -> pt], Thread[p -> lineeq]}, t]
(* {{b -> -0.255754, c -> 0.488491}} *)
n . (p - pt) == 0 /. First[coeff] // Expand
(* -9.28026 + x - 0.255754 y + 0.488491 z == 0 *)
</code></pre>
|
3,789,676 | <p>I am try to calculate the derivative of cross-entropy, when the softmax layer has the temperature T. That is:
<span class="math-container">\begin{equation}
p_j = \frac{e^{o_j/T}}{\sum_k e^{o_k/T}}
\end{equation}</span></p>
<p>This question here was answered at T=1: <a href="https://math.stackexchange.com/questions/945871/derivative-of-softmax-loss-function">Derivative of Softmax loss function</a></p>
<p>Now what would be the final derivative in terms of <span class="math-container">$p_i$</span>, <span class="math-container">$q_i$</span>, and T? Please see the linked question for the notations.</p>
<p>Edit: Thanks to Alex for pointing out a typo</p>
| greg | 357,854 | <p><span class="math-container">$
\def\o{{\tt1}}\def\p{\partial}
\def\F{{\cal L}}
\def\L{\left}\def\R{\right}
\def\LR#1{\L(#1\R)}
\def\fracLR#1#2{\L(\frac{#1}{#2}\R)}
\def\Diag#1{\operatorname{Diag}\LR{#1}}
\def\trace#1{\operatorname{Tr}\LR{#1}}
\def\qiq{\quad\implies\quad}
\def\grad#1#2{\frac{\p #1}{\p #2}}
$</span>Before taking derivatives, define the <em>all-ones</em> vector <span class="math-container">$(\o)$</span>
plus a few more vectors
<span class="math-container">$$\eqalign{
x &= \fracLR{o}{T} &&\qiq
dx &= \fracLR{do}{T} \\
e &= \exp(x),\;\;&E=\Diag e &\qiq
de &= E\;dx \\
p &= \frac{e}{\o:e},\;&P= \Diag p &\qiq
dp &= \LR{P-pp^T}dx \\
}$$</span>
and also introduce the Frobenius product <span class="math-container">$(:)$</span>, which is a concise notation
for the trace
<span class="math-container">$$\eqalign{
A:B &= \sum_{i=1}^m\sum_{j=1}^n A_{ij}B_{ij} \;=\; \trace{A^TB} \\
A:A &= \big\|A\big\|^2_F \\
}$$</span>
Write the objective function using the above notation.
<span class="math-container">$$\eqalign{
\F &= -y:\log(p) \qquad\qquad \\
}$$</span>
Then calculate its differential and gradient.
<span class="math-container">$$\eqalign{
d\F &= -y:d\log(p) \\
&= -y:P^{-1}\,dp \\
&= -y:P^{-1}\LR{P-pp^T}dx \\
&= -y:\LR{I-\o p^T}dx \\
&= \LR{p\o^T-I}y:dx \\
&= \LR{p-y}:\fracLR{do}{T} \\
&= \fracLR{p-y}{T}:do \\
\grad{\F}{o}
&= \fracLR{p-y}{T} \\
}$$</span>
This result is about as nice as one could hope.</p>
<p>Setting <span class="math-container">$\;T=1\,$</span> recovers the answer in the linked post.</p>
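<p>The result <span class="math-container">$\partial\mathcal{L}/\partial o = (p-y)/T$</span> can be checked against numerical differentiation. A minimal Python sketch; the particular logits, target, and temperature below are arbitrary test values:</p>

```python
import math

def softmax_T(o, T):
    """Softmax with temperature: p_j = exp(o_j/T) / sum_k exp(o_k/T)."""
    m = max(oi / T for oi in o)                       # shift for numerical stability
    e = [math.exp(oi / T - m) for oi in o]
    s = sum(e)
    return [ei / s for ei in e]

def loss(o, y, T):
    """Cross-entropy L = -sum_i y_i log p_i."""
    p = softmax_T(o, T)
    return -sum(yi * math.log(pi) for yi, pi in zip(y, p))

o = [1.0, 2.0, 0.5]   # arbitrary test logits
y = [0.0, 1.0, 0.0]   # one-hot target
T = 2.5               # arbitrary temperature

p = softmax_T(o, T)
analytic = [(pi - yi) / T for pi, yi in zip(p, y)]    # claimed gradient (p - y)/T

h = 1e-6
for i in range(len(o)):
    op, om = o[:], o[:]
    op[i] += h
    om[i] -= h
    numeric = (loss(op, y, T) - loss(om, y, T)) / (2 * h)
    assert abs(numeric - analytic[i]) < 1e-5          # matches the central difference
```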
|
574,041 | <p>Consider a set of linear equations described by $A\vec{X}=\vec{B}$ is given, where $A$ is an $n\times n$ matrix and $\vec{X}$ and $\vec{B}$ are n-row vectors. Also suppose that this system of equations have a unique solution and this solution is given.</p>
<p>Imagine a new set of linear equations $A'\vec{X}=\vec{B}$, where all elements of $A'$ is equal to those of $A$ but one element $A_{ij}$ which is increased by $k$. I am interested to know if I could somehow relate the solution of the first problem to the second problem by knowing the value of $k$ and $A$. In other words, I would like to derive the new solution without resolving the set of linear equations.</p>
| D Left Adjoint to U | 26,327 | <p>Let's look at the $3\times 3$ case when $a_{21}$ is changed by $+k$. The determinant $\det(A)$ is in every entry in $A^{-1}$, so let's look at that. Expand along the second row to see that $\det(A + k e_{21}) = -(a_{21} + k)|C_{21}| + a_{22} |C_{22}| - a_{23}|C_{23}|$ where $|C_{ik}|$ is the determinant of the matrix $A$ with row $i$ and column $k$ deleted, a $2\times 2$ matrix. This formula equals $det(A) + (-1)^{i+j}k|C_{ij}|$.</p>
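<p>The cofactor formula generalizes verbatim to a bump in any single entry: $\det(A + k\,e_{ij}) = \det(A) + (-1)^{i+j}k\,|C_{ij}|$. A small Python check for the $3\times 3$ case discussed above (the matrix and $k$ are arbitrary test values):</p>

```python
def det2(M):
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def minor(M, i, j):
    """Delete row i and column j (0-based)."""
    return [[v for l, v in enumerate(row) if l != j]
            for k, row in enumerate(M) if k != i]

A = [[2, 1, 3], [4, 5, 6], [7, 8, 10]]   # arbitrary test matrix
k = 5
Ak = [row[:] for row in A]
Ak[1][0] += k                            # bump a_21 (row 2, column 1, 1-based)

# det(A + k e_21) = det(A) + (-1)^{2+1} k |C_21|
lhs = det3(Ak)
rhs = det3(A) + (-1) ** (2 + 1) * k * det2(minor(A, 1, 0))
assert lhs == rhs
```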
|
802,877 | <blockquote>
<p>Find $\displaystyle\lim_{n\to\infty} n(e^{\frac 1 n}-1)$ </p>
</blockquote>
<p>This should be solved without LHR. I tried to substitute $n=1/k$ but still get an indeterminate form like $\displaystyle\lim_{k\to 0} \frac {e^k-1} k$. Is there a way to solve it without LHR, Taylor, or integrals?</p>
<p>Maybe with the definition of a limit ?</p>
<p>EDIT:</p>
<p>$f(x)'=\displaystyle\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}=\lim_{h\to 0}\frac{(x+h)(e^{1/x+h}-1)-x(e^{\frac 1 x}-1)}{h}=
\lim_{h\to 0}\frac{xe^{1/x+h}+he^{1/x+h}-x-h-xe^{\frac 1 x}+x}{h}=
\lim_{h\to 0}\frac{xe^{1/x+h}+he^{1/x+h}-h-xe^{\frac 1 x}}{h}$</p>
| Тимофей Ломоносов | 54,117 | <p>Why should one use Taylor where we don't need it at all?</p>
<p>$$L = \lim\limits_{n\to\infty}n\left(e^\frac{1}{n}-1\right)=\lim\limits_{x\to0}\frac{e^x-1}{x}$$</p>
<p>Substitute $u=e^x-1$. Then $x=\ln(u+1)$</p>
<p>$$L=\lim\limits_{u \to 0}\frac{u}{\ln(1+u)}=\lim\limits_{u\to0}\frac{1}{\frac{1}{u}\ln(1+u)}=\lim\limits_{u\to0} \frac{1}{\ln \left ( 1+u\right)^{1/u}}=\frac{\lim\limits_{u\to0} 1}{\lim\limits_{v \to \infty} \ln\left(1+\frac{1}{v}\right)^v}=\frac{1}{\ln e}=1$$</p>
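<p>Numerically the convergence is clearly visible; a quick Python check of both the original expression and the substituted form $u/\ln(1+u)$:</p>

```python
import math

vals = [n * (math.exp(1 / n) - 1) for n in (10, 1000, 100000)]
# n(e^{1/n} - 1) = 1 + 1/(2n) + O(1/n^2)  ->  1
assert abs(vals[-1] - 1) < 1e-4

# the substitution u = e^x - 1 gives the same limit as u/ln(1+u) -> 1
u = 1e-6
assert abs(u / math.log(1 + u) - 1) < 1e-5
```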
|
2,945,367 | <p><a href="https://i.stack.imgur.com/MGzHc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MGzHc.png" alt="enter image description here"></a></p>
<p>We were given a couple formulas, but the one that immediately stood out to me was the Vfinal = Vinitial + at</p>
<p>so we know the patrol will constantly accelerate "until he pulls next to the speeding car" so Vfinal = 30m/s and Vinitial = 0(cop at rest) and the acceleration is a constant 3m/s^2</p>
<p>so 30 = 3t, t = 10. However when I continue through this lecture, it turns out t = 20. he uses the formula x = xo + v0t + 1/2at^2 and somehow gets a different time than me. I don't understand how it's possible that we are both solving for time but getting different results. </p>
<p>Is it safe to assume that when xo = 0 and v0 = 0 that</p>
<p>sqrt(2x/a) = Vfinal / a </p>
<p>since we cancel out terms and they both equal t?</p>
| Keen-ameteur | 421,273 | <p>Observe that if <span class="math-container">$P:=\{ \xi_i \}_{i=0}^n$</span> is a partition of <span class="math-container">$[a,b]$</span>, and <span class="math-container">$\xi_j= \zeta_0 < \zeta_1<...< \zeta_{l_j}=\xi_{j+1}$</span>, then by (a) you know that:</p>
<p><span class="math-container">$\underset{x\in [\xi_j,\xi_{j+1}]}{\sup} f(x)\cdot (\xi_{j+1}-\xi_{j})\geq \underset{r=0}{\overset{l_j-1}{\sum}} \Bigg( \underset{x\in [\zeta_{r},\zeta_{r+1}]}{\sup} f(x)\cdot (\zeta_{r+1}-\zeta_{r})\Bigg)$</span> </p>
<p>Considering <span class="math-container">$P'$</span> which is a refinement of <span class="math-container">$P$</span> given by <span class="math-container">$\xi_j=\zeta_0^j<....<\zeta_{l_j}^j=\xi_{j+1}$</span> for all <span class="math-container">$0\leq j\leq n-1$</span>, then:</p>
<p><span class="math-container">$U(f,P) = \underset{j=0}{\overset{n-1}{\sum}} \Bigg( \underset{x\in [\xi_j,\xi_{j+1}]}{\sup} f(x)\cdot (\xi_{j+1}-\xi_{j})\Bigg)\leq$</span></p>
<p><span class="math-container">$\leq \underset{j=0}{\overset{n-1}{\sum}} \Bigg( \underset{r=0}{\overset{l_j-1}{\sum}} \Big( \underset{x\in [\zeta^j_{r},\zeta^j_{r+1}]}{\sup} f(x)\cdot (\zeta^j_{r+1}-\zeta^j_{r})\Big) \Bigg)=U(f,P')$</span></p>
|
2,717,821 | <p>Since we have 4 digits there is a total of 10000 Password combinations possible.</p>
<p>Now after each trial the chance for a successful guess increases by a slight percentage because we just tried one password and now we remove that password from the "guessing set". That being said I am struggling with the actual calculation.</p>
<p>I first calculate the probability of me NOT guessing the password and then subtract that from 1.</p>
<p>\begin{align}
1-\frac{9999}{10000} \cdot \frac{9998}{10000} \cdot \frac{9997}{10000} = 0.059\%
\end{align}</p>
| Siong Thye Goh | 306,553 | <p>In the second trial, there are $9998$ ways out of $9999$ ways we can miss the right pin.</p>
<p>$$1-\frac{9999}{10000}\cdot \frac{9998}{9999}\cdot \frac{9997}{9998}$$</p>
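<p>Exact arithmetic with Python's <code>fractions</code> module confirms that the product telescopes to $9997/10000$, so the probability of a hit in three tries is exactly $3/10000$:</p>

```python
from fractions import Fraction

p_miss = Fraction(9999, 10000) * Fraction(9998, 9999) * Fraction(9997, 9998)
p_hit = 1 - p_miss
assert p_miss == Fraction(9997, 10000)   # the product telescopes
assert p_hit == Fraction(3, 10000)       # 3 guesses out of 10000 possible pins
```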
|
58,870 | <p>I am teaching a introductory course on differentiable manifolds next term. The course is aimed at fourth year US undergraduate students and first year US graduate students who have done basic coursework in
point-set topology and multivariable calculus, but may not know the definition of differentiable manifold. I am following the textbook <a href="http://rads.stackoverflow.com/amzn/click/0132126052">Differential Topology</a> by
Guillemin and Pollack, supplemented by Milnor's <a href="http://rads.stackoverflow.com/amzn/click/0691048339">book</a>.</p>
<p>My question is: <strong>What are good topics to cover that are not in assigned textbooks?</strong> </p>
| John Sidles | 11,394 | <p><b>Update:</b> it may be Spivak's new book <a href="http://rads.stackoverflow.com/amzn/click/0914098322" rel="nofollow"><i>Physics for Mathematicians: Mechanics I</i></a> covers most of the material that this answer had in mind. I've just ordered a copy, and will report on it when it arrives.</p>
<hr>
<p>Neither Milnor's book nor Guillemin and Pollack's book contains the word "symplectic" ... which is a great pity! </p>
<p>Since the manifolds under study are smooth, they have a cotangent bundle; this bundle is associated to a tautological one-form whose exterior derivative is a (canonical) symplectic form. </p>
<p>If in addition the base manifold has a metric, then a canonical (quadratic) Hamiltonian function too is defined on the tangent bundle. </p>
<p>Hmmm ... what might be the integral curves of this Hamiltonian function? It is instructive for students to discover for themselves that the curves are simply the geodesics of the base manifold. </p>
<p>In this way, students gain an appreciation that all of dynamics (both classical and quantum) is intimately linked to the geometry and topology of smooth manifolds ... this appreciation is good preparation for many careers in math, science, and engineering.</p>
|
195,832 | <p>I want to download the content of the website(contains text) and only a few lines from the content (from a specific number of the line up to the last line minus specific offset). Unfortunately, I do not know how to get the number of line in the content. For example, I want to replace 59 with the length of the content minus specific offset.</p>
<pre><code>data1 = Import["https://en.wikipedia.org/wiki/Segment_tree"];
Snippet[data1, 51 ;; 59]
</code></pre>
| dionys | 20,144 | <p>One approach is to split the imported data on newlines using <code>StringSplit</code> so the length of the resulting list is the line count. Then we can use <code>Riffle</code> and <code>StringJoin</code> to get the selected lines:</p>
<pre><code>Module[{start = 51, offset = 85, raw, data, linecount, stop},
raw = Import["https://en.wikipedia.org/wiki/Segment_tree"];
data = StringTrim@StringSplit[raw, "\n"];
linecount = Length@data;
stop = linecount - offset;
Print["# lines: ", linecount];
Print[" start: ", start];
Print[" stop: ", stop];
Print["# saved: ", stop - start + 1];
data = Take[data, {start, stop}];
Riffle[data, " "] // StringJoin]
</code></pre>
<p>Alternatively, we could count newlines in the raw data and use <code>Snippet</code> as corey979 points out in the comments:</p>
<pre><code> Module[{start = 51, offset = 85, raw, linecount, stop, counter = 0},
raw = Import["https://en.wikipedia.org/wiki/Segment_tree"];
linecount = Block[{str = StringToStream[raw]},
While[Read[str, Record, NullRecords -> True] =!= EndOfFile,
counter++];
Close[str]; counter];
stop = linecount - offset;
Print["# lines: ", linecount];
Print[" start: ", start];
Print[" stop: ", stop];
Print["# saved: ", stop - start + 1];
Snippet[raw, start ;; stop]]
</code></pre>
<p>The result for the website specified in the question gives:</p>
<blockquote>
<p>This section describes the query operation of a segment tree in a one-dimensional space. A query for a segment tree, receives a point q x (should be one of the leaves of tree), and retrieves a list of all the segments stored which contain the point q x . Formally stated; given a node (subtree) v and a query point q x , the query can be done using the following algorithm: Report all the intervals in I ( v ). If v is not a leaf: If q x is in Int(left child of v ) then</p>
</blockquote>
|
2,782,109 | <blockquote>
<p>If a positive integer $m$ was increased by $20$%, decreased by $25$%, and then increased by $60$%, the resulting number would be what percent of $m$?</p>
</blockquote>
<p>A common step-by-step calculation will take time.</p>
<p>After $20$% increase, $6m/5$.<br>
After $25$% decrease, $9m/10$.<br>
After $60$% increase, $144m/100$.<br>
Finally, $m \cdot \frac{x}{100} = \frac{144m}{100} = 144$%</p>
<p>what is the faster (or, fastest) method to solve this?</p>
| Matti P. | 432,405 | <p>This is what I would put into my calculator:
$$
1.2 \times \underbrace{0.75}_{=1-0.25} \times 1.6 = 1.44
$$</p>
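<p>The same computation in exact arithmetic, using Python fractions:</p>

```python
from fractions import Fraction

m = Fraction(1)
result = m * Fraction(6, 5) * Fraction(3, 4) * Fraction(8, 5)
assert result == Fraction(36, 25)   # = 144/100, i.e. 144% of m
assert result * 100 == 144
```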
|
2,187,765 | <p>This is part of Exercise 2.7.9 of F. M. Goodman's <em>"Algebra: Abstract and Concrete"</em>.</p>
<blockquote>
<p>Let $C$ be the commutator subgroup of a group $G$. Show that if $H$ is a normal subgroup of $G$ with $G/H$ abelian, then $C\subseteq H$.</p>
</blockquote>
<p>The following seems to be wrong.</p>
<h2>My Attempt:</h2>
<p>The commutator subgroup $C$ of $G$ is the subgroup generated by all elements of the form $xyx^{-1}y^{-1}$ for $x, y\in G$.</p>
<p>Since $G/H$ is abelian, we have for $x, y\in G$,
$$\begin{align}
xyx^{-1}y^{-1}H&=xyy^{-1}x^{-1}H \\
&=H,
\end{align}$$ so that all elements of the form $xyx^{-1}y^{-1}$ are in $H$. Thus $C\subseteq H$.</p>
<blockquote>
<p>But I don't use the fact that $H$ is normal. What have I done wrong and what is the right proof?</p>
</blockquote>
| Adam Hughes | 58,831 | <p>$G/H=\{gH: g\in G\}$ by definition. this is only a group under $(gH)(g'H) = (gg')H$ if $Hg' = g'H$. But this is just another way of stating the definition of $H$ being normal. In your proof you just neglected to note that $xyx^{-1}y^{-1}H$ is only relevant because it is equal to $(xH)(yH)(x^{-1}H)(y^{-1}H)$ because $H$ is normal.</p>
<p>I would call this "incomplete" rather than "wrong" if anything, as the problem is a few steps beyond reproving the basic fact that $G/H$ is only a group when $H$ is normal. I think you just forgot that that's what makes $G/H$'s group operation well-defined.</p>
|
2,187,765 | <p>This is part of Exercise 2.7.9 of F. M. Goodman's <em>"Algebra: Abstract and Concrete"</em>.</p>
<blockquote>
<p>Let $C$ be the commutator subgroup of a group $G$. Show that if $H$ is a normal subgroup of $G$ with $G/H$ abelian, then $C\subseteq H$.</p>
</blockquote>
<p>The following seems to be wrong.</p>
<h2>My Attempt:</h2>
<p>The commutator subgroup $C$ of $G$ is the subgroup generated by all elements of the form $xyx^{-1}y^{-1}$ for $x, y\in G$.</p>
<p>Since $G/H$ is abelian, we have for $x, y\in G$,
$$\begin{align}
xyx^{-1}y^{-1}H&=xyy^{-1}x^{-1}H \\
&=H,
\end{align}$$ so that all elements of the form $xyx^{-1}y^{-1}$ are in $H$. Thus $C\subseteq H$.</p>
<blockquote>
<p>But I don't use the fact that $H$ is normal. What have I done wrong and what is the right proof?</p>
</blockquote>
| drhab | 75,923 | <p>The fact that $G/H$ is abelian gives us the second equality of:$$xyH=(xH)(yH)=(yH)(xH)=yxH$$
Consequently we find: $$x^{-1}y^{-1}xy=(yx)^{-1}xy\in H$$</p>
<p>This holds for every $x,y\in G$, so we may conclude that $H$ contains the commutator subgroup.</p>
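<p>For a concrete instance, take $G=S_3$ and $H=A_3$ (normal, with $G/H\cong\mathbb{Z}_2$ abelian). A brute-force Python sketch that generates the commutator subgroup $C$ from all commutators and checks $C\subseteq H$; permutations are represented as tuples of images:</p>

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

G = list(permutations(range(3)))  # S_3
# closure of the set of commutators x y x^-1 y^-1
C = {compose(compose(x, y), compose(inverse(x), inverse(y))) for x in G for y in G}
changed = True
while changed:
    changed = False
    for a in list(C):
        for b in list(C):
            if compose(a, b) not in C:
                C.add(compose(a, b))
                changed = True

A3 = {p for p in G if sign(p) == 1}  # H = A_3, normal with abelian quotient
assert C <= A3                       # the commutator subgroup sits inside H
```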
|
1,910,983 | <p>What conditions are equivalent to <strong>singularity</strong> of matrix $A\in \mathbb{R}^{n,n}$.<br>
<strong>a.</strong> $\dim(ker A) \ge 0$<br>
<strong>b.</strong> There exists a vector $b$ such that $Ax=b$ is inconsistent (has no solution).<br>
<strong>c.</strong> $rank(A^T) < n$ </p>
<p><strong>a.</strong> is true for each matrix, in other words $\dim$ can't be negative.<br>
<strong>c.</strong> $rank(A^T) = rank(A)$. Singularity means that some vector (row) is linearly <strong>dependent</strong> on the other rows. Then, using elementary row operations, we may zero out this row, so it is true that $rank(A) < n$
<p>Is it correct ?<br>
When it comes to <strong>b.</strong> I suppose that it is true, however I can't prove it. </p>
| Drew N | 178,098 | <p>A bit of modular arithmetic reveals a simple way to compute the answer, at least for the following special case:</p>
<p>Assume that the non-periodic string has no elements in common with the periodic part, and also that the repeated string has no duplicates e.g. your periodic part can be {4,7,0} but not {4,7,4,0}. </p>
<p>If the period is $n$ then you can just replace the periodic part with increasing integers mod $n$, the block $\{1,2,...,n\}$, so that $a_{k+1} = a_k + 1 \bmod 4$ in your case. Since your period starts with the 11th term, you want $a_{11} \equiv 1 \bmod 4$. Writing $a_k = k+2 \bmod 4$ works to achieve this.</p>
<p>(we could have just written $a_k = k$, so that starting at $k=11$ gives the block $\{3,4,1,2\}$ rather than $\{1,2,3,4\}$ as I made it. It doesn't really matter.)</p>
<p>Now we want to equate elements in the periodic part. Since the <em>non</em> period part is of length $10$, we want the smallest $k > 10$ such that $a_k \equiv a_{2k} \bmod 4$.</p>
<p>Comes out to be $k+2 \equiv 2k + 2 \bmod 4 \Rightarrow k \equiv 2k \bmod 4 \Rightarrow 0 \equiv k \bmod 4$. Then the answer is $k = 12$; the smallest integer, greater than 10, which is divisible by 4, the period of your sequence. </p>
|
1,261,825 | <p>How can I find the inverse function of $f(x) = x^x$? I cannot seem to find the inverse of this function, or any function in which there is both an $x$ in the exponent as well as the base. I have tried using logs, differentiating, etc, etc, but to no avail. </p>
| Kushashwa Ravi Shrimali | 232,558 | <p>As the other user mentioned, it is basically the application of <a href="https://cs.uwaterloo.ca/research/tr/1993/03/W.pdf" rel="noreferrer">Lambert W Function</a>.</p>
<p>Say, $x^x = z$ which implies, $x \ln x = \ln z$.</p>
<p>Now, I can write: $x = e^{\ln x} $ using the properties of logarithms and exponential functions.</p>
<p>Therefore, $(\ln x)\,e^{\ln x} = \ln z$, so by the definition of $W$, $$\ln x = W(\ln z) \\ x = e^{W(\ln z)} $$ </p>
<p>which is indeed the inverse of $x^x$ .</p>
<p>I suggest you go through <a href="http://en.wikipedia.org/wiki/Lambert_W_function#Applications" rel="noreferrer">Wikipedia's page on applications of the Lambert W function</a>. </p>
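<p>Since the Python standard library has no Lambert $W$, here is a minimal sketch of the inversion using a hand-rolled Newton iteration for the principal branch (the iteration and its starting guess are my own assumptions, not part of the answer; for serious use, a library routine such as SciPy's <code>lambertw</code> would be the usual choice):</p>

```python
import math

def lambert_w(y, tol=1e-12):
    """Principal branch of W: solves w * exp(w) = y for y > 0 (Newton's method)."""
    w = math.log(1.0 + y)  # rough starting guess, adequate for y > 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - y) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

# Inverting f(x) = x^x: given z = x^x, recover x = exp(W(ln z)).
z = 3.0 ** 3.0  # 27, so the recovered value should be x = 3
x = math.exp(lambert_w(math.log(z)))
print(x)  # -> 3.0 (up to rounding)
```
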
<p>Hope it helps!</p>
|
1,006,354 | <ul>
<li>A multiple choice exam has 175 questions. </li>
<li>Each question has 4 possible answers. </li>
<li>Only 1 answer out of the 4 possible answers is correct. </li>
<li>The pass rate for the exam is 70% (123 questions must be answered correctly). </li>
<li>We know for a fact that 100 questions were answered correctly. </li>
</ul>
<p>Questions: What is the probability of passing the exam, if one were to guess on the remaining 75 questions? That is, pick at random one of the 4 answers for each of the 75 questions. </p>
| user4568 | 173,559 | <p>We have already answered 100 questions, so there are only 75 questions left to answer.
Since we are guessing our way through the multiple choice questions, our probability of success in each question will be $\frac{1}{4}$
Since the pass mark is $\frac{123}{175}$, we need at least $\frac{23}{75}$ in the final 75 questions.
This is the same as saying that we need to find $P(X\ge23)$, i.e. "What is the probability of getting 23 or more questions correct?"</p>
<p>The information we have so far suggests that we can use the binomial distribution.
$X \sim B(n,p)$. Where $n=75$ and $p=\frac{1}{4}$ in your question.
However, we may have a slight problem.
75 is too large for us to use the $nCr$ formula by hand, and binomial tables don't generally include $n=75$. Unless you have a graphical calculator or some sort of statistical software, we will need to use a <em>normal approximation</em> in order to answer your question.</p>
<p><strong>When do you need to normally approximate?</strong></p>
<ul>
<li>Look at $np$ and $nq$. (For your question, n=75 and p=$\frac{1}{4}$)</li>
<li>Look at n, is it "Large"? ($n\ge30$ is normally a candidate).</li>
<li>if $np>5$ and $nq>5$ and $n$ is large, you can try a normal approximation.</li>
<li>Also, if $p$ is close to $\frac{1}{2}$, this is another indication.</li>
</ul>
<p>$X \sim B \left(75,\frac{1}{4} \right)$</p>
<p>we are looking for $P(X\ge23)$</p>
<p><strong>Normally approximating:</strong></p>
<p>$Y \sim N(18.75,14.0625)$ as $Y \sim N(np,npq)$ is the normal approximation to $X \sim B(n,p)$ where $q=1-p$</p>
<p>Applying continuity correction, $P(X \le 22)$ becomes $P(Y \le 22.5)$</p>
<p><strong>Normal distribution</strong>: $Z = \frac{X-\mu}{\sigma}$</p>
<p>$Z= \frac{22.5-18.75}{\sqrt{14.0625}} = \frac{3.75}{3.75} = 1$</p>
<p>We need to find $1-P(Z<1)$</p>
<p>Look for $\varPhi(1)$ on normal tables, the answer is $0.84134$.</p>
<p>Therefore, the normal approximation gives us an answer of $1-0.84134=0.15866$</p>
<p>The normally approximated answer to your question is $0.159$ (to 3 d.p.).</p>
<p><strong>Continuity correction:</strong></p>
<p>$P(X\ge23)$ is the same as $1-P(X\le22)$</p>
<p>$P(X\le a)$ becomes $P(Y\le a+0.5)$ after the continuity correction.</p>
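<p>The whole computation above is easy to cross-check against the exact binomial tail, since $P(X\ge23)=\sum_{k=23}^{75}\binom{75}{k}(1/4)^k(3/4)^{75-k}$; the sketch below (standard library only) compares it with the continuity-corrected normal value $1-\varPhi(1)$:</p>

```python
import math
from fractions import Fraction

n, p = 75, Fraction(1, 4)

# Exact tail P(X >= 23) for X ~ B(75, 1/4), computed with exact rationals.
exact = float(sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
                  for k in range(23, n + 1)))

# Normal approximation with continuity correction:
# Y ~ N(18.75, 14.0625), so P(X >= 23) ≈ P(Y >= 22.5) = 1 - Phi(1).
approx = 1.0 - 0.5 * (1.0 + math.erf(1.0 / math.sqrt(2.0)))

print(exact, approx)  # approx -> 0.15865..., the complement of the tabled 0.84134
```
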
|
812,778 | <p>Prove that $(4/5)^{\frac{4}{5}}$ is irrational.</p>
<p><strong>My proof so far:</strong></p>
<p>Suppose for contradiction that $(4/5)^{\frac{4}{5}}$ is rational.</p>
<p>Then $(4/5)^{\frac{4}{5}}$=$\dfrac{p}{q}$, where $p$,$q$ are integers.</p>
<p>Then $\dfrac{4^4}{5^4}=\dfrac{p^5}{q^5}$</p>
<p>$\therefore$ $4^4q^5=5^4p^5$</p>
<p>I've got to this point and now I don't know where to go from here.</p>
| André Nicolas | 6,312 | <p><strong>Outline:</strong> If the number $\alpha$ is rational, there exist integers $p$ and $q$ which are <strong>relatively prime</strong> such that $\alpha=\frac{p}{q}$.</p>
<p>From your $4^4q^5=5^4p^5$, argue that $5$ divides $q$, and then that $5$ divides $p$. </p>
|
812,778 | <p>Prove that $(4/5)^{\frac{4}{5}}$ is irrational.</p>
<p><strong>My proof so far:</strong></p>
<p>Suppose for contradiction that $(4/5)^{\frac{4}{5}}$ is rational.</p>
<p>Then $(4/5)^{\frac{4}{5}}$=$\dfrac{p}{q}$, where $p$,$q$ are integers.</p>
<p>Then $\dfrac{4^4}{5^4}=\dfrac{p^5}{q^5}$</p>
<p>$\therefore$ $4^4q^5=5^4p^5$</p>
<p>I've got to this point and now I don't know where to go from here.</p>
| user3294068 | 140,502 | <p>This is essentially the same as the standard proof that $\sqrt 2$ is irrational.</p>
<p>Let $z = (4/5)^{(4/5)}$. Now to calculate $z^5$:</p>
<p>$$
z^5 = \left(\left({4\over 5}\right)^{4\over 5}\right)^5 = \left({4\over 5}\right)^{4}
= {4^4\over 5^4}
$$</p>
<p>Clearly $z \neq 0$ as $0^{(5/4)} = 0 \neq 4/5$.</p>
<p>Let a rational number $r = p/q$ where $p$ and $q$ are positive integers. </p>
<p>Every positive integer is known to have a unique prime factorization. Specifically, we know there exist integers $i \geq 0$ and $t \geq 1$ such that $p = t \times 5^i$ and $t \not \equiv 0 \mod 5$.</p>
<p>Likewise, $q = u \times 5^j$. Thus, we have
$$
r = \frac pq = \frac tu \times 5^{(i-j)}
$$
and neither $t$ nor $u$ is divisible by $5$.</p>
<p>Now, we calculate $r^5$:
$$
r^5 = \frac{t^5}{u^5} \times 5^{5 (i-j)}
$$</p>
<p>If $r^5 = z^5$, then we must have $5(i-j) = -4$. There are clearly no integers $i,j$ which satisfy that. Thus we have shown that for any rational number $r$ it cannot be the case that $r = z = (4/5)^{(4/5)}$.</p>
<p>QED</p>
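<p>The heart of this proof is a statement about the exponent of $5$ (the $i-j$ above): a rational fifth power always has a $5$-exponent divisible by $5$, while $4^4/5^4$ has $5$-exponent $-4$. A small sanity check of that step (a finite spot check, of course, not a substitute for the proof):</p>

```python
from fractions import Fraction

def v5(r):
    """Exponent of 5 in the factorisation of a nonzero rational (the i - j above)."""
    num, den, v = r.numerator, r.denominator, 0
    while num % 5 == 0:
        num //= 5; v += 1
    while den % 5 == 0:
        den //= 5; v -= 1
    return v

assert v5(Fraction(4**4, 5**4)) == -4              # z^5 has 5-exponent -4 ...
for num in range(1, 30):
    for den in range(1, 30):
        assert v5(Fraction(num, den)**5) % 5 == 0  # ... but r^5 never does
print("no rational r satisfies r^5 = 4^4 / 5^4")
```
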
|
119,456 | <p>I generated a 2d random array in $x-y$ plane with</p>
<pre><code>L = 10;
random = Table[{x, y, RandomReal[{-1, 1}]}, {x, 0, L, L/10}, {y, 0, L, L/10}];
</code></pre>
<p>Now I want to save it for later use via</p>
<pre><code>iniF = Interpolation[Flatten[random, 1]];
inif[x_, y_] = c+iniF[x, y];
</code></pre>
<p>where $c$ is constant. How do you save the random data with a convenient format?
Thank you! </p>
| Wjx | 6,084 | <p>Adding this simple line before you run everything will make the random functions generate exactly the same result every time you run. If you look carefully, you may find this in lots of posts with randomly generated input.</p>
<pre><code>SeedRandom["Whatever you write here, keep it the same in multiple runs!"]
</code></pre>
<p>Hope this can help you!</p>
|
2,231,388 | <blockquote>
<p>Consider a ring map $B \rightarrow A$. Consider the map $f:A \otimes_{B}A \rightarrow A$, where $x \otimes y$ goes to $xy$. Let $I$ be the kernel of $f$. Why is it true that $I/I^2$ is isomorphic to $I \otimes_{A \otimes_{B}A} A$?</p>
</blockquote>
<p>This is what I've been able to prove till now:</p>
<p>Let $R=A \otimes_{B}A$. Now, consider the $R$-module homomorphism $\phi$ from $I$ to $I\otimes (R/I)$, $\phi(a)=a\otimes 1$. It is easy to see that the kernel of $\phi$ contains $I^2$. How do I prove it is exactly $I^2$?</p>
| Georges Elencwajg | 3,217 | <p>Given an $R$-module $M$ and an ideal $I\subset R$ we have $R$-module morphisms $$M\otimes_R R/I\to M/IM: m\otimes \tilde r\mapsto \overline {rm} \quad \text{and} \quad M/IM \to M\otimes_R R/I:\overline {m}\mapsto m\otimes \tilde 1$$ which are easily seen to be inverse to each other and yield an isomorphism $M\otimes_R R/I\cong M/IM$.<br>
In particular for $M=I$ we get the required isomorphism of $R$-modules $I\otimes_R R/I\cong I/I^2.$</p>
|
746 | <p>There have been a number of questions in the Close part of Review lately which were basically asking for help creating an algorithm to do some mundane task (see <a href="https://mathoverflow.net/questions/140585/how-to-perform-divide-step-of-in-place-quicksort#comment362909_140585">here</a>, <a href="https://mathoverflow.net/questions/139681/using-the-affine-maxima-package">here</a>, <a href="https://mathoverflow.net/questions/140705/numerical-solutions-of-ode-by-free-parameters-with-matlab">here</a> for example). I wonder if some of these people could be helped by migrating them over to <a href="https://softwareengineering.stackexchange.com/">https://softwareengineering.stackexchange.com/</a>. Even if you don't think these questions in particular should go there, I'll bet that those who are okay with migration in general would be okay with the option to migrate to the programmers website. Hence my question: are others in favor of adding this feature? If so, can someone make a request to the appropriate powers?</p>
| François G. Dorais | 2,000 | <p>Yes, it's possible, but as Anna Lear <a href="https://meta.mathoverflow.net/a/163">explained in an earlier answer</a>, there are some requirements. There needs to be a clear pattern of questions to be migrated and the migrated posts need to have low rejection rate on the target site. In the mean time, if you need to have a post migrated, just flag the moderators and explain the situation. Moderators can migrate any new post to any site in the network.</p>
|
1,575,671 | <p>The whole question is that <br>
If $f(x) = -2\cos^2 x$, then what is $d^6y \over dx^6$ for $x = \pi/4$?</p>
<p>The key here is what does $d^6y \over dx^6$ mean?</p>
<p>I know that $d^6y \over d^6x$ means 6th derivative of y with respect to x, but I've never seen it before.</p>
| Community | -1 | <p>For convenience, first transform</p>
<p>$$-2\cos^2(x)=-1-\cos(2x).$$</p>
<p>Then the sixth derivative is $$2^6\cos(2x),$$ because $(\cos x)''=-\cos x$ and because each differentiation of $\cos(2x)$ contributes a factor of $2$ from the scaled variable.</p>
<p>At $x=\dfrac\pi4$, $$2^6\cos\frac\pi2 = 0.$$</p>
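<p>The closed form $f^{(6)}(x)=2^6\cos(2x)$, and the value $0$ at $x=\pi/4$, can be confirmed with a central finite-difference stencil; the step size here is my own arbitrary choice, small enough for the check but large enough to avoid rounding noise:</p>

```python
import math

def f(x):
    return -2.0 * math.cos(x) ** 2

def sixth_derivative(g, x, h=0.05):
    """Central 6th difference: (delta^6 g)(x) / h^6 approximates the 6th derivative."""
    c = [1, -6, 15, -20, 15, -6, 1]   # alternating binomial coefficients of (1-1)^6
    return sum(ck * g(x + (k - 3) * h) for k, ck in enumerate(c)) / h**6

for x in (0.0, math.pi / 4, 1.0):
    print(x, sixth_derivative(f, x), 64.0 * math.cos(2 * x))
# at x = pi/4 both columns are ~0, as derived above
```
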
|
151,937 | <p>In <code>FindGraphCommunities</code>, how can one find the vertices associated with the edges that are found to connect one or more communities?</p>
| kglr | 125 | <p><strong>Update:</strong> Functions for finding the edges that connect communities and for tabulating the results:</p>
<pre><code>ClearAll[connectingEdgesF, tabulateF]
connectingEdgesF = Module[{g = #}, Complement[EdgeList[#],
Flatten[EdgeList[Subgraph[g, #]] & /@ FindGraphCommunities[g]]]] &;
tabulateF = Module[{rule = Join @@ MapIndexed[Thread[# -> #2[[1]]] &,
FindGraphCommunities[#], 1], edges = connectingEdgesF[#]},
TableForm[(List @@@ edges) /. {a_, b_} :> Join[{a, a /. rule}, {b, b /. rule}],
TableHeadings -> {None, {"From Vertex", "in Community",
"To Vertex", "in Community"}}, TableAlignments -> Center]] &;
</code></pre>
<p>Examples:</p>
<pre><code>SeedRandom[5]
g2 = RandomGraph[{20, 50}, DirectedEdges -> True];
connectingEdgesF[g2]
</code></pre>
<blockquote>
<p>{2 -> 12, 2 -> 14, 4 -> 7, 6 -> 7, 6 -> 17, 7 -> 15, 8 -> 11, 8 -> 13,
11 -> 19, 13 -> 16, 15 -> 11, 16 -> 2, 17 -> 10, 18 -> 17, 19 -> 15,
20 -> 5, 20 -> 15}</p>
</blockquote>
<pre><code>CommunityGraphPlot[g2, EdgeStyle -> Thread[connectingEdgesF[g2] -> Directive[Red, Thick]],
VertexLabels -> "Name"]
</code></pre>
<p><a href="https://i.stack.imgur.com/PqRlP.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PqRlP.jpg" alt="enter image description here"></a></p>
<pre><code>tabulateF[g2]
</code></pre>
<p><a href="https://i.stack.imgur.com/tkUQS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tkUQS.jpg" alt="enter image description here"></a></p>
<pre><code>edges = {start -> 1, start -> 17, start -> 18, start -> 19,
start -> 15, 14 -> goal, 1 -> 2, 2 -> 3, 2 -> 13, 2 -> 14, 3 -> 5,
3 -> 11, 3 -> 12, 4 -> 3, 5 -> 7, 6 -> 12, 7 -> 8, 7 -> 9, 7 -> 14,
8 -> 9, 8 -> 14, 9 -> 10, 9 -> 14, 10 -> 14, 11 -> 6, 12 -> 7,
13 -> 3, 15 -> 7, 15 -> 16, 16 -> 7, 17 -> 1, 17 -> 2, 17 -> 3,
17 -> 4, 17 -> 5, 17 -> 14, 18 -> 1, 18 -> 2, 18 -> 3, 18 -> 4,
18 -> 14, 19 -> 1, 19 -> 2, 19 -> 3, 19 -> 4, 19 -> 5, 19 -> 7,
19 -> 8, 19 -> 9, 19 -> 10, 19 -> 12, 19 -> 14, 19 -> 16};
g3 = Graph[edges];
connectingEdgesF[g3]
</code></pre>
<blockquote>
<p>{2 -> 3, 2 -> 13, 2 -> 14, 3 -> 5, 4 -> 3, 7 -> 8, 7 -> 9, 7 -> 14,
12 -> 7, 17 -> 3, 17 -> 5, 17 -> 14, 18 -> 3, 18 -> 14, 19 -> 3,
19 -> 5, 19 -> 7, 19 -> 8, 19 -> 9, 19 -> 10, 19 -> 12, 19 -> 14,
19 -> 16, start -> 15}</p>
</blockquote>
<pre><code>CommunityGraphPlot[g3, EdgeStyle -> Thread[connectingEdgesF[g3] -> Directive[Red, Thick]],
VertexLabels -> "Name"]
</code></pre>
<p><a href="https://i.stack.imgur.com/eZOlf.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eZOlf.jpg" alt="enter image description here"></a></p>
<pre><code>tabulateF[g3]
</code></pre>
<p><a href="https://i.stack.imgur.com/6D8wJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6D8wJ.jpg" alt="enter image description here"></a></p>
<p><strong>Original answer:</strong></p>
<pre><code>SeedRandom[5]
g = RandomGraph[{20, 50}];
mycommunitylists = FindGraphCommunities[g];
</code></pre>
<p>The edges with both vertices in the same community:</p>
<pre><code>withinedges = Flatten[EdgeList[Subgraph[g, #]] & /@ mycommunitylists];
</code></pre>
<p>Remaining edges have each vertex in a different community:</p>
<pre><code>communityconnectors = Complement[EdgeList[g], withinedges]
(* or communityconnectors = EdgeList[EdgeDelete[g, withinedges]] *)
</code></pre>
<blockquote>
<p>{1 <-> 7, 1 <-> 14, 1 <-> 17, 2 <-> 4, 2 <-> 6, 2 <-> 17, 3 <-> 4, 3 <-> 5, 3 <-> 16,<br>
4 <-> 20, 5 <-> 17, 5 <-> 20, 6 <-> 16, 6 <-> 17, 7 <-> 16, 9 <-> 12, 10 <-> 11,<br>
10 <-> 12, 11 <-> 12, 12 <-> 15, 13 <-> 14, 14 <-> 17, 14 <-> 19, 18 <-> 20}</p>
</blockquote>
<p>Highlighting the edges that connect different communities:</p>
<pre><code>CommunityGraphPlot[g, EdgeStyle -> Thread[communityconnectors -> Directive[Red, Thick]],
VertexLabels -> "Name"]
</code></pre>
<p><a href="https://i.stack.imgur.com/SV6PF.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SV6PF.jpg" alt="enter image description here"></a></p>
|
3,037,296 | <p>I'm confused of what <span class="math-container">$\sqrt {3 + 4i}$</span> would be after I used quadratic formula to simplify <span class="math-container">$z^2 + iz - (1 + i)$</span></p>
| user | 505,767 | <p>Recall that</p>
<p><span class="math-container">$$z=x+iy=|z|(\cos \theta+i\sin \theta)$$</span><span class="math-container">$$\implies \sqrt z=\sqrt{|z|}\left(\cos \left(\frac{\theta}2+k\pi\right)+i\sin \left(\frac{\theta}2+k\pi\right)\right),\,k=0,1$$</span></p>
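<p>For the particular value in the question, the formula (with $k=0$) gives $\sqrt{3+4i} = 2+i$, since $(2+i)^2 = 3+4i$. A quick <code>cmath</code> check, including plugging the result back into the quadratic formula for $z^2+iz-(1+i)=0$ (the tolerance tests are my own sketch, not part of the answer):</p>

```python
import cmath

w = cmath.sqrt(3 + 4j)                 # principal square root, approximately 2 + i
assert abs(w * w - (3 + 4j)) < 1e-12

# Quadratic formula: z = (-i ± sqrt(i^2 + 4(1 + i))) / 2 = (-i ± sqrt(3 + 4i)) / 2
roots = [(-1j + w) / 2, (-1j - w) / 2]
for z in roots:
    assert abs(z * z + 1j * z - (1 + 1j)) < 1e-12

print(w, roots)   # roots come out as 1 and -1 - i (up to rounding)
```
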
|
3,238,670 | <p>Could someone explain <strong>how to get from: <span class="math-container">$x-\frac{1}{x}=A$</span> to <span class="math-container">$x+\frac{1}{x}=\sqrt{A^2+4}$</span></strong> ? It is one of the Algebra II tricks.</p>
<p>Thanks.</p>
| Dave | 334,366 | <p>Start by squaring both sides:
<span class="math-container">$$\begin{align}x-\frac{1}{x}&=A\\\left(x-\frac{1}{x}\right)^2&=A^2\\x^2-2+\frac{1}{x^2}&=A^2.\end{align}$$</span>
Then try adding <span class="math-container">$4$</span> to both sides and "reversing" the processes above.</p>
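<p>A numeric spot check of the completed identity, with an arbitrary sample value (note $x>0$ is assumed here so that $x+\frac1x$ matches the <em>positive</em> square root):</p>

```python
import math

x = 3.0
A = x - 1 / x                  # A = 8/3
lhs = x + 1 / x                # 10/3
rhs = math.sqrt(A * A + 4)     # sqrt(64/9 + 36/9) = 10/3
print(lhs, rhs)                # both ≈ 3.3333...
```
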
|
1,913,689 | <blockquote>
<p>Let $f: X \rightarrow Y$ be a function. $A \subset X$ and $B \subset Y$.
Prove $A \subset f^{-1}(f(A))$.</p>
</blockquote>
<p>Here is my approach. </p>
<p>Let $x \in A$. Then there exists some $y \in f(A)$ such that $y = f(x)$. By the definition of inverse function, $f^{-1}(f(x)) = \{ x \in X$ such that $y = f(x) \}$. Thus $x \in f^{-1}(f(A)).$</p>
<p>Does this look OK, and how can I improve it?</p>
| drhab | 75,923 | <p>A nice mnemonic on preimages is:$$x\in f^{-1}(C)\iff f(x)\in C\tag1$$
It is evident that: $$\forall x\left[x\in A\implies f\left(x\right)\in f\left(A\right)\right]$$</p>
<p>According to $(1)$ here $f(x)\in f(A)$ can be replaced by $x\in f^{-1}(f(A))$.</p>
<p>This results in:$$\forall x\left[x\in A\implies x\in f^{-1}(f(A))\right]$$
or equivalently: $$A\subseteq f^{-1}(f(A))$$</p>
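<p>A tiny concrete example (the function here is invented purely for illustration) shows both the inclusion and why it can be strict when $f$ is not injective:</p>

```python
# f : X -> Y with X = {1, 2, 3}, f(1) = f(2) = 'a', f(3) = 'b'
f = {1: 'a', 2: 'a', 3: 'b'}

def image(A):
    return {f[x] for x in A}

def preimage(C):
    return {x for x in f if f[x] in C}

A = {1}
print(image(A))              # -> {'a'}
print(preimage(image(A)))    # -> {1, 2}, a strict superset of A
assert A <= preimage(image(A))   # the inclusion A ⊆ f^(-1)(f(A)) always holds
```
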
|
2,007,373 | <p>At some point in your life you were explained how to understand the dimensions of a line, a point, a plane, and a n-dimensional object. </p>
<p>For me the first instance that comes to memory was in 7th grade in a inner city USA school district. </p>
<p>Getting to the point, my geometry teacher taught,</p>
<p>"a point has no length width or depth in any dimensions, if you take a string of points and line them up for "x" distance you have a line, the line has "x" length and zero height, when you stack the lines on top of each other for "y" distance you get a plane"</p>
<p>Meanwhile I'm experiencing cognitive dissonance, how can anything with zero length or width be stacked on top of itself and build itself into something with width of length?</p>
<p>I quit math. </p>
<p>Cut to a few years after high school, I'm deep in the math's. </p>
<p>I rationalized geometry with my own theory which didn't conflict with any of geometry or trigonometry. </p>
<p>I theorized that a point in space was infinitely small space in every dimension such that you can add them together to get a line, or add the lines to get a plane. </p>
<p>Now you can say that the line has infinitely small height approaching zero but not zero.</p>
<p>What really triggered me is a Linear Algebra professor at my school said that lines have zero height and didn't listen to my argument. . . </p>
<p>I don't know if my intuition is any better than hers . . . if I'm wrong, if she's wrong . . . </p>
<p>I would very much appreciate some advice on how to deal with these sorts of things. </p>
| Henricus V. | 239,207 | <p>Viewpoint from measure theory:</p>
<p>The length/area/volume/hypervolume of a set $S \subseteq \mathbb{R}^n$ is merely its Lebesgue measure $\lambda(S)$.</p>
<p>Since $\lambda$ is a continuous measure, the measure of any single point is $0$, so $\lambda(\{x\}) = 0$ for all $x$, but uncountable unions of points, such as $S = \bigcup_{x \in S} \{x\}$, may have non-zero measure.</p>
<p>The important thing here is uncountability. If you can iterate through the points in $S$ one by one such that any point in $S$ will eventually be encountered, then $S$ must have $0$ measure.</p>
|
2,939,585 | <p>I want to prove that if <span class="math-container">$ \gamma$</span> is a closed path and <span class="math-container">$\gamma\subseteq B_R(0) $</span> then <span class="math-container">$\mathbb{C}\setminus B_R(0)\subseteq \operatorname{Ext}_\gamma$</span> where <span class="math-container">$ \operatorname{Ext}_\gamma=\{a\not \in \gamma : \operatorname{Ind}_\gamma a=0\} $</span> and <span class="math-container">$ \operatorname{Ind}_\gamma a=\frac{1}{2\pi i}\oint_\gamma \frac{dz}{z-a}$</span>.</p>
<p>I think that one has to bound the index like this:
<span class="math-container">$$ \left|\operatorname{Ind}_\gamma a\right|=\left|\frac{1}{2\pi i}\oint_\gamma \frac{dz}{z-a}\right| \leq \operatorname{length}(\gamma)\frac{1}{2\pi } \frac{1}{\min\{|z-a|:z\in \gamma\}}$$</span>
and then use the fact that I can make <span class="math-container">$ \min\{|z-a|:z\in \gamma\}$</span> as large as I want, but I don't know how to proceed.</p>
| zhw. | 228,045 | <p>The function</p>
<p><span class="math-container">$$\text { Ind}_\gamma(z) = \frac{1}{2\pi i}\int_\gamma \frac{dw}{w-z}$$</span></p>
<p>is a continuous integer valued function defined on <span class="math-container">$\Omega =\mathbb C\setminus \gamma^*,$</span> where <span class="math-container">$\gamma^*$</span> denotes the range of <span class="math-container">$\gamma.$</span> It follows that on any connected subset of <span class="math-container">$\mathbb C\setminus \gamma^*,$</span> <span class="math-container">$\text { Ind}_\gamma$</span> is constant. Since <span class="math-container">$\mathbb C \setminus B_R(0)$</span> is such a connected subset, <span class="math-container">$\text { Ind }_\gamma$</span> is constant there. But as <span class="math-container">$z\to \infty$</span> in this subset, <span class="math-container">$\text { Ind}_\gamma(z)\to 0.$</span> Thus <span class="math-container">$\text { Ind}_\gamma$</span> is the constant <span class="math-container">$0$</span> in this set, which is the desired conclusion.</p>
|
2,109,832 | <p>This is for beginners in probability!</p>
<p>Could someone give me a step by step on how to find the MGF of the binomial distribution?</p>
| spaceisdarkgreen | 397,125 | <p>You can use the double angle formula to write $$\begin{eqnarray}\sin(2\arcsin(3/5)) &=& 2\sin(\arcsin(3/5))\cos(\arcsin(3/5))\\ &=& 2\sin(\arcsin(3/5))\sqrt{1-\sin^2(\arcsin(3/5))}\end{eqnarray}$$</p>
<p>We have $\sin(\arcsin(3/5)) =3/5$, so can plug this in to get the answer.</p>
|
2,109,832 | <p>This is for beginners in probability!</p>
<p>Could someone give me a step by step on how to find the MGF of the binomial distribution?</p>
| marty cohen | 13,079 | <p>$\begin{array}{ll}
\sin(2\arcsin(x))
&=2\sin(\arcsin(x))\cos(\arcsin(x))\\
&=2x\sqrt{1-\sin^2(\arcsin(x))}\\
&=2x\sqrt{1-x^2}\\
\end{array}
$</p>
<p>Putting $x = 3/5$,
since
$\sqrt{1-x^2} = 4/5$,
I get
$2\cdot\dfrac35 \cdot\dfrac45
=\dfrac{24}{25}
$.</p>
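<p>Both answers agree on $\sin(2\arcsin(3/5)) = \frac{24}{25}$; a one-line numeric confirmation:</p>

```python
import math

value = math.sin(2 * math.asin(3 / 5))
print(value, 24 / 25)   # both ≈ 0.96
```
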
|
999,147 | <p>I'm looking to gain a better understanding of how the cofinite topology applies to R.
I know the definition for this topology but I'm specifically looking to find some properties such as the closure, interior, set of limit points, or the boundary set and how these change based on whether a subset A in R is closed, open, or clopen. </p>
<p>Any help would be appreciated. </p>
<p>Note: I have only the most basic of definitions for closure, interior, etc. </p>
<p>Thank you! </p>
| Community | -1 | <p>So, to answer your first question: if $O$ is a non-empty open set in the co-finite topology, then by definition $\bar{O}$ is the smallest closed set that contains $O$. Now we know that $O$ has infinitely many elements, and the only closed set with infinitely many elements is $\mathbb{R}$, so $\bar{O}$ must be $\mathbb{R}$.</p>
<p>The boundary of an open set $O$ is $\bar{O} \backslash O$ so if $O$ isn't empty then in this topology its boundary is $\mathbb{R} \backslash O$.</p>
|
3,982,103 | <p>Let <span class="math-container">$(X,\tau)$</span> be a topological space. Prove that <span class="math-container">$\tau$</span> is the finite-closed topology on <span class="math-container">$X$</span> if and only if (i)<span class="math-container">$(X,\tau)$</span> is a <span class="math-container">$T_1$</span>-space, and (ii) every infinite subset of <span class="math-container">$X$</span> is dense in <span class="math-container">$X$</span>.</p>
<p>I already proved the forward direction but I'm stuck on the backward direction. We know that every singleton is closed because of (i), and from (ii) every non-empty open set intersects any infinite set non-trivially. Now I need to figure out how to show that every open set is the complement of a finite set, so that the topology is finite-closed.</p>
| Henno Brandsma | 4,280 | <p>Let <span class="math-container">$O$</span> be an open set in <span class="math-container">$(X,\tau)$</span>, so <span class="math-container">$O \in \tau$</span>.</p>
<ul>
<li>If <span class="math-container">$O=\emptyset$</span>, <span class="math-container">$O \in \tau_{cf}$</span>, as required.</li>
<li>If <span class="math-container">$O=X$</span>, likewise, <span class="math-container">$O \in \tau_{cf}$</span>.</li>
<li>In the final case <span class="math-container">$O$</span> is non-empty and is disjoint from <span class="math-container">$O^\complement$</span> by definition, and by <span class="math-container">$(ii)$</span>, <span class="math-container">$O$</span> must intersect <strong>every</strong> infinite set. So <span class="math-container">$O^\complement$</span> is <strong>not</strong> infinite, hence <span class="math-container">$O^\complement$</span> is finite, i.e. <span class="math-container">$O \in \tau_{cf}$</span>.</li>
</ul>
<p>Conversely, by (i) every singleton, and hence every finite set, is closed, so every set with finite complement is open: <span class="math-container">$\tau_{cf} \subseteq \tau$</span>. QED</p>
|
2,305,656 | <p>I solved this problem on my own, months ago, but the solution seems to me completely forgotten, a little help on it would be appreciated:</p>
<p>Suppose $\alpha= \alpha(t)$ on an interval $I$ is a smooth (of class C$^1$) parametric representation of the curve $C$, and for any $t \in I$ we have $\space\space\frac{d}{dt}\alpha(t).v=0$, where $v$ is a constant vector, furthermore $\alpha(0)$ is perpendicular to $v$.
Then $\forall t\in I: \space \alpha(t).v=0$.</p>
| Michael Rozenberg | 190,319 | <p>I think you mean $a=b$.
Thus, $2a>c$ or $\frac{a}{c}>\frac{1}{2}$ and
$$k=\frac{r}{R}=\frac{\frac{2S}{a+b+c}}{\frac{abc}{4S}}=\frac{16S^2}{2abc(a+b+c)}=$$
$$=\frac{(a+b-c)(a+c-b)(b+c-a)}{2abc}=\frac{(2a-c)c^2}{2a^2c}=\frac{\frac{2a}{c}-1}{\frac{2a^2}{c^2}}.$$
Hence,
$$\frac{2ka^2}{c^2}-\frac{2a}{c}+1=0,$$
which gives
$$\frac{a}{c}=\frac{1\pm\sqrt{1-2k}}{2k}.$$</p>
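<p>The chain of identities for $k=r/R$ (which rests on Heron's formula $16S^2=(a+b+c)(a+b-c)(a+c-b)(b+c-a)$ and the assumption $a=b$) can be spot-checked on a concrete isosceles triangle; the side lengths below are arbitrary:</p>

```python
import math

a = b = 3.0
c = 4.0

s = (a + b + c) / 2                              # semi-perimeter
S = math.sqrt(s * (s - a) * (s - b) * (s - c))   # area by Heron's formula
r = S / s                                        # inradius,  r = S / s
R = a * b * c / (4 * S)                          # circumradius, R = abc / (4S)

k = r / R
formula = (2 * a / c - 1) / (2 * a**2 / c**2)    # closed form from the answer
ratio = (1 - math.sqrt(1 - 2 * k)) / (2 * k)     # the "-" branch of the final quadratic

print(k, formula, ratio, a / c)
```

<p>Here the "$-$" branch of the final formula recovers $a/c = 3/4$; the "$+$" branch corresponds to the other root of the quadratic.</p>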
|
189,014 | <p>Ok, I know the simple answer is to set some form of Hold attribute to the function but bear with me for a bit while I explain my motivation and why that is not quite what I want.</p>
<hr>
<p>I have a collection of data that is naturally grouped together and some functions that operate on that data. To me, the obvious way to represent this is a struct-like data-type and an <a href="https://reference.wolfram.com/language/ref/Association.html" rel="noreferrer">Association</a> seemed perfect for the job. There are a couple of questions and answers describing this approach on this site. </p>
<p>Great, now I have an association as follows: (Just an example)</p>
<pre><code><|
"Atom Names" -> {"N","C","O","C","H","H"},
"Atom Nr" -> {56,23,117,81,211,5},
"Resname" -> {ALA,ALA,TYR,LEU,GLY,GLY},
"Bias Value" -> {1,5,1,5,1,1},
"getRandomAtom" :> RandomChoice[{56,23,117,81,211,5}],
"atomExists" :> (MemberQ[{"N","C","O","C","H","H"},#]&)
|>
</code></pre>
<p>I have some other functions that operate on this collection of data. </p>
<pre><code>f1[a_Association, args__] := a["getRandomAtom"] + Total[a["Atom Nr"]]
f2[a_Association, atom_String] := If[a["atomExists"][atom], a["Bias Value"] + 1]
(*etc*)
</code></pre>
<p>Ideally these function would not just accept <em>any</em> <code>Association</code> but only the 'correct' type of <code>Association</code>. I can ensure this in a couple of ways. Eg. create a helper function and use <code>Condition</code> or <code>PatternTest</code> to check if the <code>Association</code> has all the correct keys or much more simply, I can just wrap the entire <code>Association</code> with a inert head.</p>
<pre><code>Protect[atomData]
atomData[<|
"Atom Names" -> {"N","C","O","C","H","H"},
"Atom Nr" -> {56,23,117,81,211,5},
"Resname" -> {ALA,ALA,TYR,LEU,GLY,GLY},
"Bias Value" -> {1,5,1,5,1,1},
"getRandomAtom" :> RandomChoice[{56,23,117,81,211,5}],
"atomExists" :> (MemberQ[{"N","C","O","C","H","H"},#]&)
|>]
</code></pre>
<p>And in my functions, I can just check the heads. </p>
<pre><code>f3[a_atomData,args__] := "Do Stuff"
</code></pre>
<p>But I'd like to retain the functionality of <code>Association</code> transparently. We can somewhat achieve this through the use of upvalues.</p>
<pre><code>atomData[a_Association][key_] := a[key];
atomData /: h_[atomData[a_Association],args___] := h[a,args]
(*And some additional stuff*)
atomData /: ToString[atomData[a_Association]] := a["Atom Names"]
atomData[a_Association][] := a
</code></pre>
<p>As <a href="https://mathematica.stackexchange.com/questions/109657/">this</a> question notes, this will not work for all functions (eg. Lookup) as they have the <code>HoldAllComplete</code> attribute. (Coincidentally, my motivation is almost exactly the same as the OP of that question)</p>
<hr>
<p>Here comes the problem. Since I defined these upvalues, my functions which check for the <code>Head</code> <code>atomData</code> won't work anymore. </p>
<pre><code>f3[myatomdata,3,4] (* myatomdata has head atomData *)
</code></pre>
<p>The upvalues associated with atomData will be applied first and this will result in <code>f3[<|...(*underlying association*)...|>,3,4]</code>, which will not be evaluated as the first argument nolonger has the <code>Head</code> <code>atomData</code>.</p>
<pre><code>SetAttributes[f3,HoldFirst]
</code></pre>
<p>won't help either, as in the function call <code>f3[myatomdata,3,4]</code>, the evaluator will leave <code>myatomdata</code> alone, which means that it will have <code>Head</code> <code>Symbol</code>, and once again f3 will not be evaluated.</p>
<hr>
<p>It seems that I have defeated my entire purpose by setting these upvalues. Is there a better way to do what I want? </p>
<p>I can think of 2 ways, both of which seem quite ugly. </p>
<ol>
<li><p>Modify the upvalue definition to exclude certain functions. Something like </p>
<pre><code> atomData /:
(h : Except[f1|f2|f3])[atomData[a_Association], args___] := h[a, args]
</code></pre></li>
</ol>
<p>This seems particularly inelegant, as I'd have to modify this every time I add another function that will use <code>atomData</code>.</p>
<ol start="2">
<li><p>Do the head checking yourself. </p>
<pre><code> SetAttributes[f3,HoldFirst]
atomData /: (h : Except[Head])[atomData[a_Association], args___] := h[a, args]
f3[a_ /; Head[a]===atomData] := "Do Stuff"
</code></pre></li>
</ol>
<p>The ideal solution would be a way to prevent <em>just</em> the upvalue from evaluating and leaving all others (OwnValues,DownValues etc.) alone.</p>
<hr>
<p>PS. I'm also open to the idea that this whole approach is rubbish if someone can suggest a better way. I come from a background of C++, Java, and Python; Thinking of everything in terms of objects has been ingrained in me. Apologies for the long-winded explanation. </p>
| Mr.Wizard | 121 | <p>You might consider putting the "type" label inside an Association itself. This will complicate key addressing but simplify other handling.</p>
<pre><code>asc = <|"atomData" ->
<|"Atom Names" -> {"N", "C", "O", "C", "H", "H"},
"Atom Nr" -> {56, 23, 117, 81, 211, 5},
"Resname" -> {ALA, ALA, TYR, LEU, GLY, GLY},
"Bias Value" -> {1, 5, 1, 5, 1, 1},
"getRandomAtom" :> RandomChoice[{56, 23, 117, 81, 211, 5}],
"atomExists" :> (MemberQ[{"N", "C", "O", "C", "H", "H"}, #] &)|>|>;
aDtest[a_Association /; Keys[a] === {"atomData"}] := True;
f1[a_?aDtest, args__] := a[[1]]["getRandomAtom"] + Total[a[[1]]["Atom Nr"]]
f1[asc, 2]
</code></pre>
<blockquote>
<pre><code>549
</code></pre>
</blockquote>
<p>Using <code>a[[1]]</code> each time is only one way to approach this; others include:</p>
<pre><code>f1[aa_?aDtest, args__] :=
With[{a = aa[[1]]}, a["getRandomAtom"] + Total[a["Atom Nr"]]]
</code></pre>
<p>Or:</p>
<pre><code>f1[a_?aDtest, args__] := f1core[a[[1]], args]
f1core[a_, args__] := a["getRandomAtom"] + Total[a["Atom Nr"]]
</code></pre>
<hr>
<h2>For <em>Mathematica</em> 10.4 or later</h2>
<p>The method above can be improved for recent versions of <em>Mathematica</em> as follows:</p>
<pre><code>ClearAll[f1]
p1 = <|"atomData" -> a_|>;
f1[p1, args__] := a["getRandomAtom"] + Total[a["Atom Nr"]]
f1[asc, 2]
</code></pre>
<blockquote>
<pre><code>549
</code></pre>
</blockquote>
<p>Reference:</p>
<ul>
<li><a href="https://mathematica.stackexchange.com/q/55526/121">MatchQ-ing Associations (MMA 10)</a></li>
</ul>
|
1,791,146 | <p>I know that a set G with a binary operation $*$ is a group, if:</p>
<ol>
<li><p>$a*b\in G$, for all $a, b \in G$.</p></li>
<li><p>$*$ is associative:</p></li>
</ol>
<p>$$(a*b)*c=a*(b*c) \\ \text{for all }a, b, c\in G.$$</p>
<ol start="3">
<li>An identity element $e \in G$ exists, such that</li>
</ol>
<p>$$a*e = e*a = a\\ \text{for all }a\in G.$$</p>
<ol start="4">
<li>For all elements $a \in G$, there exists an $a^{-1} \in G$, such that:</li>
</ol>
<p>$$a*a^{-1} = a^{-1}*a=e.$$</p>
<p>Can I use that to show that the empty set is a group?</p>
| goblin GONE | 42,339 | <p>As BrianO says, <span class="math-container">$\emptyset$</span> is not a group, because every group has an identity element. This also means that <span class="math-container">$\emptyset$</span> is not a vector space, it's not a ring, it's not a module, and it's not a boolean algebra. However, <span class="math-container">$\emptyset$</span> <em>is</em> a perfectly good: semilattice, <a href="https://en.wikipedia.org/wiki/Band_%28mathematics%29" rel="nofollow noreferrer">band</a>, and affine space. Also, it's best to drop the non-emptiness condition from the usual definition of a <a href="https://en.wikipedia.org/wiki/Heap_%28mathematics%29" rel="nofollow noreferrer">heap</a>, in which case <span class="math-container">$\emptyset$</span> is a perfectly good heap.</p>
|
2,393,525 | <p>I have two questions which I think both concern the same problem I am having. Is $...121212.0$ a rational number and is $....12121212....$ a rational number? The reason I was thinking it could be a number is when you take the number $x=0.9999...$, then $10x=9.999...$ . Therefore, we conclude $9x=9$ which means $x=1$. Why could or couldn't you do the same thing and divide the first number in similar fashion by defining it as $x$ and then taking $x/100$?</p>
| fleablood | 280,126 | <p>$0.a_1a_2a_3\ldots = \sum\limits_{k=1}^{\infty} a_k\cdot 10^{-k} = \lim\limits_{n\rightarrow \infty }\sum\limits_{k=1}^{n} a_k\cdot 10^{-k}$. This limit exists. For one thing, the terms $a_k\cdot 10^{-k}$ get <em>small</em> and approach $0$. (But more importantly, the differences between the finite partial sums become arbitrarily small.)</p>
<p>So this is a valid number. It may or may not be rational.</p>
<p>$\ldots a_2a_1a_0.0$, if it were to mean anything, would have to mean $\sum\limits_{k=0}^{\infty} a_k\cdot 10^{k} = \lim\limits_{n\rightarrow \infty}\sum\limits_{k=0}^{n} a_k\cdot 10^{k}$. This limit does <em>NOT</em> exist. The terms $a_k\cdot 10^{k}$ grow without bound, and the partial sums do not converge to any finite number.</p>
<p>So this is not a number of any sort or any meaningful concept.</p>
<p>When the terms get infinitely <em>small</em>, they approach $0$, and it is then possible (though it doesn't always happen) to add infinitely many of them and have the limit exist. (Decimal expansions, however, <em>can</em> always be added infinitely and converge. I won't go into details.)</p>
<p>When the terms get infinitely <em>large</em>, they blow up. They do not converge to anything but increase without bound. We can not <em>ever</em> add infinitely many of them and have the limit exist.</p>
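As a concrete illustration of the two limits discussed above (not part of the original answer), here is a small Python sketch: the partial sums of $0.121212\ldots$ converge to $4/33$, while the "partial sums" of $\ldots 121212.0$ grow without bound.

```python
from fractions import Fraction

# Partial sums of 0.121212... : sum of digit_k * 10^(-k) for k = 1, 2, ...
digits = [1, 2] * 10          # the first 20 digits after the decimal point
s = Fraction(0)
partials = []
for k, d in enumerate(digits, start=1):
    s += Fraction(d, 10**k)
    partials.append(s)

# The partial sums approach 4/33 = 0.121212... ; the remainder shrinks to 0.
print(float(partials[-1]))             # ~0.121212...
print(Fraction(4, 33) - partials[-1])  # a tiny positive fraction

# "Partial sums" of ...121212.0 : sum of digit_k * 10^k for k = 0, 1, 2, ...
# (digits read right to left are 2, 1, 2, 1, ...). These blow up.
t = 0
for k, d in enumerate([2, 1] * 10):
    t += d * 10**k
print(t)  # a 20-digit integer; adding more terms only makes it larger
```

The first limit exists (and here happens to be rational); the second sequence of partial sums increases without bound, which is the sense in which $\ldots 121212.0$ denotes no number.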
|
1,042,227 | <p>I want to verify that the solution to the difference equation</p>
<p>$m_x - 2pqm_{x-2} = p^2 + q^2$</p>
<p>with boundary conditions</p>
<p>$m_0 = 0$</p>
<p>$m_1 = 0$</p>
<p>is</p>
<p>$$m_x = -\frac{1}{2}(\frac{1}{\sqrt{2pq}} +1)(\sqrt{2pq})^x + \frac{1}{2}(\frac{1}{\sqrt{2pq}} - 1)(-\sqrt{2pq})^x + 1$$</p>
<p><strong>General solution to inhomogeneous equation</strong></p>
<p>I know that that general solution for $m_x$ will be equal to the general solution to the homogeneous equation plus a particular solution to the inhomogeneous equation. So the general solution to the homogeneous equation</p>
<p>$m_x - 2pqm_{x-2} = 0$</p>
<p>ends up being </p>
<p>$m_x = A(\sqrt{2pq})^x + B(-\sqrt{2pq})^x$</p>
<p>Using $m_0 = 0$ we have that $A = -B$ giving us</p>
<p>$m_x = A(\sqrt{2pq})^x - A(-\sqrt{2pq})^x$</p>
<p>Using $m_1 = 0$ we have that </p>
<p>$0 = -B(\sqrt{2pq}) + B(-\sqrt{2pq})$</p>
<p>$0 = -B(\sqrt{2pq}) - B(\sqrt{2pq})$</p>
<p>$0 = -2B(\sqrt{2pq})$</p>
<p>$=> B = -A = 0$</p>
<p>So I must have done something wrong? And where is the "$1$" at the end of the correct general solution above coming from? Can someone show how to verify the solution correctly?</p>
| Bumblebee | 156,886 | <p><strong>HINT:</strong> Since you already have the candidate solution, we can use mathematical induction to verify it. Also note that $(-1)^x=(-1)^{x-2}$ for all $x\in\mathbb{N}.$</p>
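Beyond induction, one can sanity-check the claimed closed form numerically. The sketch below (not part of the hint) assumes $q = 1-p$, which is what makes the particular solution equal to $1$, since then $p^2+q^2 = 1-2pq$.

```python
import math

def m(x, p):
    """Claimed closed-form solution, under the assumption q = 1 - p."""
    q = 1 - p
    s = math.sqrt(2 * p * q)
    return (-0.5 * (1 / s + 1) * s**x
            + 0.5 * (1 / s - 1) * (-s)**x
            + 1)

p = 0.3
q = 1 - p

# Boundary conditions: m_0 = m_1 = 0
assert abs(m(0, p)) < 1e-12
assert abs(m(1, p)) < 1e-12

# Recurrence: m_x - 2*p*q*m_{x-2} = p^2 + q^2 for every x >= 2
for x in range(2, 20):
    assert abs(m(x, p) - 2 * p * q * m(x - 2, p) - (p**2 + q**2)) < 1e-9

print("closed form checks out for p =", p)
```

The homogeneous parts cancel in the recurrence because $(\pm\sqrt{2pq})^x = 2pq\,(\pm\sqrt{2pq})^{x-2}$, and the constant $1$ accounts for the right-hand side when $p+q=1$.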
|
25,162 | <p>Suppose we have a smooth dynamical system on $R^n$ (defined by a system of ODEs).
Assume that:</p>
<p>(1) The system has an absorbing ball, that is every trajectory eventually enters this ball
and stays in it. </p>
<p>(2) The system has a unique stationary point, and this stationary point is locally
asymptotically stable.</p>
<p>(3) The system has no periodic orbits.</p>
<p>Can we conclude that the stationary point is in fact <em>globally</em> stable?</p>
| coudy | 6,129 | <p>No. You could have in the ball a compact attractor K containing no periodic orbits. In fact there are attractors on which the dynamics are minimal (all trajectories are dense in K) and conjugated to
(the suspension of) an adding machine.</p>
<p>Examples of such attractors even appear in the one-dimensional setting, for unimodal maps. I think that Bruin, Keller, Liverani (1997, Erg. Th. Dyn. Sys.) give such an example. Adding an attracting fixed point to these examples is not difficult.</p>
|
25,162 | <p>Suppose we have a smooth dynamical system on $R^n$ (defined by a system of ODEs).
Assume that:</p>
<p>(1) The system has an absorbing ball, that is every trajectory eventually enters this ball
and stays in it. </p>
<p>(2) The system has a unique stationary point, and this stationary point is locally
asymptotically stable.</p>
<p>(3) The system has no periodic orbits.</p>
<p>Can we conclude that the stationary point is in fact <em>globally</em> stable?</p>
| Martin M. W. | 1,227 | <p>As the questioner notes in a comment, the answer is Yes for n<3. </p>
<p>One way to create counterexamples for larger n is to use the work on the Seifert Conjecture. Start with a vector field pointing inward to the origin, and replace a little piece of it with an "aperiodic plug." This "plug" looks from the outside like a constant flow, has no periodic orbits in the interior, but there is at least one orbit that goes in and never comes out.</p>
<p>For details on various plug constructions, <a href="http://www.geom.uiuc.edu/docs/forum/seifert/se2.html" rel="nofollow">this note from the Geometry Center</a> is very readable and also has references to the original papers of Wilson and Kuperberg. </p>
|
3,545,250 | <p>Being new to calculus, I'm trying to understand Part 1 of the Fundamental Theorem of Calculus. </p>
<p>Ordinarily, this first part is stated using an "area function" <em>F</em> mapping every <em>x</em> in the domain of <em>f</em> to the number "integral from a to x of f(t)dt".</p>
<p>However, I have <strong><em>difficulty understanding the status of this area function, which is apparently neither an indefinite integral nor a definite integral</em></strong> (for, I think, a definite integral is a number, not a function); if this "area function" is not an "integral" of some sort, I do not understand how asserting that <em>F'=f</em> amounts to saying "integration and differentiation are inverse processes", as it is said informally.</p>
<p>Hence my question: is there an easier-to-understand version of FTC Part 1 that does not make use of the area-function concept?</p>
<p>Note: I think I understand in which way the area function is a function and what it "does". What I do not understand is the role it plays in proving that "integration and differentiation are inverse processes" (given that this function is neither a definite integral nor an indefinite integral, as the MSE answers I got previously tend to show).</p>
| Paramanand Singh | 72,031 | <p>I think the key issue here is that you are unable to understand how integration and differentiation are reverse processes.</p>
<p>In order to understand and appreciate it fully you need to know the definition of derivative (easy) and that of integral (difficult and mostly avoided in beginner's calculus texts).</p>
<p>Just as derivative is defined as a limit, the integral <span class="math-container">$\int_{a} ^{b} f(x) \, dx$</span> is also defined as a complicated limit based on <span class="math-container">$a, b, f$</span>. There are some technicalities involved here and you can have a look at <a href="https://math.stackexchange.com/a/1834341/72031">this answer</a> for more details.</p>
<p>The link between derivatives and integrals is then understood by analyzing the integral <span class="math-container">$\int_{a} ^{x} f(t) \, dt$</span>. The idea is to understand how the integral varies as the interval of integration varies. And there you have the Fundamental Theorem of Calculus part 1 which says that</p>
<blockquote>
<p><strong>FTC Part 1</strong>: Let <span class="math-container">$f:[a, b] \to\mathbb {R} $</span> be Riemann integrable on <span class="math-container">$[a, b] $</span>. Then the function <span class="math-container">$F:[a, b] \to \mathbb {R}$</span> defined by <span class="math-container">$$F(x) =\int_{a} ^{x} f(t) \, dt$$</span> is continuous on <span class="math-container">$[a, b] $</span>. And further if <span class="math-container">$f$</span> is continuous at some point <span class="math-container">$c\in[a, b] $</span> then <span class="math-container">$F$</span> is differentiable at <span class="math-container">$c$</span> with <span class="math-container">$F'(c) =f(c) $</span>.</p>
</blockquote>
<p>In simpler terms if the function <span class="math-container">$f$</span> being integrated is continuous on entire interval of integration then <span class="math-container">$F'(x) =f(x) $</span> in entire interval. Thus we are able to figure out the rate at which the integral varies as the interval of integration varies.</p>
<p>And this gives us a way of evaluating integrals without using the complicated definition of integral. Rather one hopes to find an anti-derivative and just subtract its values at end points of the interval. More formally we have</p>
<blockquote>
<p><strong>FTC Part 2</strong>: Let <span class="math-container">$f:[a, b] \to\mathbb {R} $</span> be Riemann integrable on <span class="math-container">$[a, b] $</span> and further assume that <span class="math-container">$f$</span> possesses an anti-derivative <span class="math-container">$F$</span> on <span class="math-container">$[a, b] $</span>, i.e., there exists a function <span class="math-container">$F:[a, b] \to \mathbb {R} $</span> such that <span class="math-container">$F'(x) =f(x) $</span> for all <span class="math-container">$x\in[a, b] $</span>. Then <span class="math-container">$$\int_{a} ^{b} f(x) \, dx=F(b) - F(a) $$</span></p>
</blockquote>
|
3,356,544 | <p>A lot of calculators actually agree with me saying that it is defined and the result equals 1, which makes sense to me because:</p>
<p><span class="math-container">$$ (-1)^{2.16} = (-1)^2 \cdot (-1)^{0.16} = (-1)^2\cdot\sqrt[100]{(-1)^{16}}\\
= (-1)^2 \cdot \sqrt[100]{1} = (-1)^2 \cdot 1 = 1$$</span></p>
<p>However, there are certain calculators (WolframAlpha among them) which contest this answer, and instead claim it is equal to:</p>
<p><a href="https://i.stack.imgur.com/XB8nG.png" rel="noreferrer"><img src="https://i.stack.imgur.com/XB8nG.png" alt="enter image description here"></a></p>
<p>Graphing this as an exponential function was not possible.</p>
<p>What's going on?</p>
| Conrad | 298,272 | <p><span class="math-container">$(-1)^{2.16}=(-1)^{\frac{54}{25}}=\exp({\frac{54}{25}(2k+1)i\pi})=\exp({\frac{4}{25}(2k+1)i\pi})$</span> </p>
<p>is a set of <span class="math-container">$25$</span> numbers corresponding to <span class="math-container">$k=0,...24$</span> as the exponential above has period <span class="math-container">$25$</span>. </p>
<p>Choosing <span class="math-container">$k=12$</span> shows that <span class="math-container">$1$</span> is indeed in this set, though it doesn't correspond to the usual "principal" value, which is obtained for <span class="math-container">$k=0$</span> and which in this case gives <span class="math-container">$\exp({\frac{4}{25}i\pi})$</span>; this is what Wolfram Alpha gave.</p>
<p>Edit later - just to make it clear, here <span class="math-container">$\exp(z)=\sum{\frac{z^n}{n!}}$</span> is the uniquely defined usual entire exponential function </p>
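To see the multivaluedness concretely, here is a short numerical sketch (not part of the answer) enumerating all $25$ values and confirming that both $1$ (at $k=12$) and the principal value (at $k=0$) belong to the set.

```python
import cmath

# The 25 values exp(4*pi*i*(2k+1)/25), k = 0, ..., 24
values = [cmath.exp(1j * cmath.pi * 4 * (2 * k + 1) / 25) for k in range(25)]

# k = 12 gives 2k + 1 = 25, so the exponent is 4*pi*i and the value is 1
assert abs(values[12] - 1) < 1e-12

# k = 0 gives the principal value exp(4*pi*i/25), which Wolfram Alpha reports
assert abs(values[0] - cmath.exp(1j * cmath.pi * 4 / 25)) < 1e-12

# Every value is a 25th root of unity, since ((-1)^(54/25))^25 = (-1)^54 = 1
assert all(abs(v**25 - 1) < 1e-9 for v in values)
print("all 25 candidate values verified")
```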
|
385,789 | <p>Can anybody please help me this problem?</p>
<p>Let $K = \mathbb{F}_p$ be the field of integers module an odd prime $p$, and $G = \mathcal{M}^*_n(\mathbb{F}_p)$ the set of $n\times n$ invertible matrices with components in $\mathbb{F}_p$. Based on the linear (in)dependence of the columns of a matrix $M\in G$, get the number of matrices in $G$.</p>
<p>Thanx in advance.</p>
| anon | 11,763 | <p>The process for creating an arbitrary invertible matrix is as follows:</p>
<ul>
<li>The first column of an invertible matrix can be selected to be any nonzero vector.</li>
<li>Second column can be picked to be any vector not in the span of the first.</li>
<li>Third column can be picked to be any vector not in the span of the first two.</li>
<li>$\cdots\cdots\cdots$</li>
</ul>
<p>Convince yourself that this will always create a matrix whose columns are independent (hence is invertible), and any invertible matrix can be obtained in this way.</p>
<p>At each step, figure out how many vectors can be picked. (For this, you'll need to find out how many vectors are in a subspace of a given dimension.) Then multiply all of these counts together.</p>
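Multiplying the counts from this process gives $(p^n-1)(p^n-p)\cdots(p^n-p^{n-1})$. As a sketch (not part of the answer), here is a brute-force confirmation for $n=2$, $p=3$, using the fact that a matrix over $\mathbb{F}_p$ is invertible iff its determinant is nonzero mod $p$.

```python
from itertools import product

p, n = 3, 2

# Product formula: step k picks any vector outside a k-dimensional subspace,
# which has p^k elements, leaving p^n - p^k choices.
formula = 1
for k in range(n):
    formula *= p**n - p**k

# Brute force over all 2x2 matrices [[a, b], [c, d]] with entries mod p
brute = sum(1 for a, b, c, d in product(range(p), repeat=4)
            if (a * d - b * c) % p != 0)

assert brute == formula == 48
print(formula)  # 48
```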
|
273,798 | <p>I am writing a large numerical code where I care a lot about performance, so I am trying to write compiled functions that are as fast as possible.</p>
<p>I need to write a function that does the following. Consider a list of positive integers, for example {5,3}, and take its flattened binary form (with a given number of digits, let's say 5), which is {0, 0, 1, 0, 1, 0, 0, 0, 1, 1} in our example. Then count how many 1s there are starting from the left and stopping at some index1, then at some index2, then at some index3, then at some index4, etc. Finally, sum all the results and return the total. The list {index1, index2, index3, index4, ...} is given as an input, and in all cases it contains at most 4 indexes.
For example, if index1=4 we encounter the number 1 just once, and if index2=6 we encounter the number 1 twice, so the function should return 1+2=3.
Here's my code so far:</p>
<pre><code>CCSign = Compile[{
   {L, _Integer}, {f, _Integer}, {indexes, _Integer, 1}, {state, _Integer, 1}
   },
  With[{
    binarystate = Flatten[IntegerDigits[#, 2, L] & /@ state]
    },
   Total[
    Total@Take[binarystate, #] & /@ indexes
    ]
   ], CompilationTarget -> "C"
  ];
</code></pre>
<p>Is there some way to improve it and make it faster?</p>
<p>Thank you!</p>
| Henrik Schumacher | 38,178 | <p>The post linked by MarcoB contains a link to this Wikipedia page which is very illuminating:</p>
<p><a href="https://en.wikipedia.org/wiki/Hamming_weight" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Hamming_weight</a></p>
<p>There I also found the very useful remark that a population count function has been introduced in C++20. So the only thing we need is one (or several) bitmasks. The C++ code for that is very simple (about 5 lines), but in order to call the function from <em>Mathematica</em> we need quite a lot of boilerplate code:</p>
<pre><code>Needs["CCompilerDriver`"];
(*Unload the LibraryFunction cf in the case that it is already loaded.*)
Quiet[LibraryFunctionUnload[cf]];
(*Create LibraryFunction cf.*)
cf = Module[{lib, file, path, name},
name = "cf";
path = $TemporaryDirectory;
(*Generate a string of C++ code and save it to file.*)
file = Export[FileNameJoin[{path, name <> ".cpp"}],
"
#include \"WolframLibrary.h\"
#include <bit>
EXTERN_C DLLEXPORT int " <>name<>"(WolframLibraryData libData, mint Argc, MArgument *Args, MArgument Res)
{
// Get a pointer to the input array and the array's size.
MTensor states_ = MArgument_getMTensor(Args[0]);
const int64_t * __restrict const states = libData->MTensor_getIntegerData(states_);
const int64_t n = libData->MTensor_getDimensions(states_)[0];
// Retrieve the second input as the bit mask.
const int64_t mask = MArgument_getInteger(Args[1]);
// Prepare the output vector.
MTensor results_;
int64_t dims [1] = {n};
int err = libData->MTensor_new(MType_Integer, 1, dims, &results_);
if(err)
{
return err;
}
int64_t * __restrict const results = libData->MTensor_getIntegerData(results_);
// Now we start to process all the numbers in the input.
for( int64_t i = 0; i < n; ++i )
{
// Erase all nonzero bits of states[i] that are not present in mask and convert to an unsigned integer.
uint64_t s = static_cast<uint64_t>(states[i] & mask);
// Count the number of nonzero bits. Works only with C++20.
results[i] = std::popcount(s);
}
// Push results to output.
MArgument_setMTensor(Res, results_);
return LIBRARY_NO_ERROR;
}",
"Text"
];
(*Compile the library*)
lib = CreateLibrary[{file}, name
, "TargetDirectory" -> path
(*,"ShellCommandFunction"\[Rule]Print*)
, "ShellOutputFunction" -> Print
, "CompileOptions" -> "-std=c++20"
];
(*Load the desired function from the library into the Mathematica session.*)
LibraryFunctionLoad[lib, name,
{{Integer, 1, "Constant"},Integer}, (*first input is a 1D array of integers; second input is an integer*)
{Integer, 1}(*output is a 1D array of integers*)
]
];
</code></pre>
<p>So how to use that? Let's suppose you have a bit sequence like this random one:</p>
<pre><code>bitseq = RandomInteger[{0, 1}, 63];
</code></pre>
<p>For an integer <code>x</code> you want to count all nonzero bits for which the corresponding bit in the bitmask is also nonzero. Then you simply convert <code>bitseq</code> to an integer as well and call <code>cf</code> like this:</p>
<pre><code>mask = FromDigits[bitseq, 2];
bitcounts = cf[{x}, mask][[1]];
</code></pre>
<p>How does it perform? Well, let's see how this works for an array of input integers (for which it was originally designed):</p>
<pre><code>n = 100000000;
x = RandomInteger[{1, 2^32 - 1}, n]; // AbsoluteTiming // First
bitseq = RandomInteger[{0, 1}, 63];
mask = FromDigits[bitseq, 2];
bitcounts = cf[x, mask]; // AbsoluteTiming // First
</code></pre>
<blockquote>
<p>0.345124</p>
<p>0.106136</p>
</blockquote>
<p>You see that creating the random array <code>x</code> took significantly longer than counting the bits.</p>
<p>If you know in advance that the integers lie in a small, finite range from <code>a</code> to <code>b</code>, then you can simply compute the results for all elements in that range once and place them into a lookup table <code>A</code>. Then for any other <code>x</code> you just take <code>A[[x-a+1]]</code>.</p>
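The heart of the library function — mask the bits, then take the population count — can be mirrored in a few lines of Python (far slower, but handy for validating results from <code>cf</code>; this sketch is not part of the answer).

```python
def masked_popcount(x: int, mask: int) -> int:
    """Count the 1-bits of x at positions where mask also has a 1-bit."""
    return bin(x & mask).count("1")

# The question's example: state {5, 3} with L = 5 flattens to the bit string
# 0010100011 (leftmost bit most significant), i.e. the integer 163.
state = 0b0010100011

# "First 4 bits from the left" and "first 6 bits from the left" as masks:
mask4 = 0b1111000000
mask6 = 0b1111110000

total = masked_popcount(state, mask4) + masked_popcount(state, mask6)
print(total)  # 1 + 2 = 3, matching the example in the question
```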
|
3,069,987 | <p>I know that whatever numbers you choose for $x$ and $y$, if their sum equals $1$ they will satisfy the equation <span class="math-container">$x^2 + y = y^2 + x$</span>.</p>
<p>Algebraic proof: </p>
<p>Given: <span class="math-container">$x + y = 1$</span></p>
<p><span class="math-container">$$LS = x^2+ y
= (1-y)^2 + y
= 1 - 2y+y^2 + y
= y^2 - y + 1$$</span></p>
<p><span class="math-container">$$RS = y^2 + x
= y^2 + (1-y)
= y^2 - y + 1$$</span></p>
<p>Therefore,<span class="math-container">$$ LS = RS $$</span></p>
<p>How can this be proved geometrically? (Ex. in a diagram of rectangular areas)</p>
<p>I tried combining a square piece with side length $y$ and a rectangle with side lengths $x$ and $x+y$, but I can't seem to prove it geometrically.</p>
<p>Can someone help? </p>
| Michael Rozenberg | 190,319 | <p>Because <span class="math-container">$$0=x^2+y-(y^2+x)=(x-y)(x+y)-(x-y)=(x-y)(x+y-1).$$</span></p>
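A quick numerical sketch (not part of the answer) confirming that the vanishing factor $x+y-1$ forces the identity whenever $x+y=1$:

```python
import random

for _ in range(1000):
    x = random.uniform(-10, 10)
    y = 1 - x  # enforce x + y = 1
    # (x - y)(x + y - 1) = 0 because the second factor is 0
    assert abs((x**2 + y) - (y**2 + x)) < 1e-9

print("x^2 + y == y^2 + x whenever x + y = 1")
```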
|
1,121,205 | <p>Can we find an bijective continuous map $f:X\to Y$ from a disconnected topological space $X$ to a connected topological space $Y$?</p>
<p>It seems counter intuitive for me, but I am not able to prove that $f(X)$ will be disconnected. I cannot think of any counterexample either. Can someone help?</p>
| Forever Mozart | 21,137 | <p>Let $X=Y=\{0,1\}$. Give $X$ the discrete topology and give $Y$ the indiscrete topology. The identity function from $X$ to $Y$ is a continuous bijection, $X$ is not connected, and $Y$ is connected.</p>
|
132,862 | <p>Is it true that given a matrix $A_{m\times n}$, $A$ is regular / invertible if and only if $m=n$ and the columns of $A$ form a basis of $\mathbb{R}^n$?</p>
<p>Seems so to me, but I haven't seen anything in my book yet that says it directly.</p>
| Frank | 460,691 | <p>Note: I'm still learning about the Peano Axioms, so if any part of my answer is inaccurate, please let me know.</p>
<p>First, consider why we even bother thinking about the natural numbers. Of course, it's because they obey unique properties; namely, the natural numbers are "natural" in the sense that they represent how we count real-life objects. Clearly, the set of natural numbers is no ordinary set, so we might say that <span class="math-container">$\mathbb{N}$</span> is a "set with structure," where the "structure" encapsulates all the unique properties that make the natural numbers special.</p>
<p>Think of the Peano Axioms as a means of formulating this "structure." Any set obeying this "structure" should either be the natural numbers, or a set isomorphic to the natural numbers. Therefore, it doesn't make much sense to try and prove that <span class="math-container">$\{0, 1, 2, 3, \dots\} \subseteq \mathbb{N}$</span> (as you do in your answer); instead, you should be trying to prove that the set <span class="math-container">$\mathbb{N} = \{0, 1, 2, 3, \dots \}$</span> <em>paired with the canonical successor function</em> obeys the Peano Axioms, and thus represents the natural numbers.</p>
<p>If you view the axioms in this way, your question can be reframed as, "Why are the first four Peano Axioms insufficient in defining the 'structure' of <span class="math-container">$\mathbb{N}$</span>?" A simple counterexample suffices to answer this, which Tao's half-integers does beautifully. Tao defines the half-integers to be</p>
<p><span class="math-container">$$\mathbb{N} := \{0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, \dots\}.$$</span></p>
<p>and its successor function <span class="math-container">$S$</span> to be</p>
<p><span class="math-container">$$
S : \mathbb{N} \to \mathbb{N} \\
S(n) = n + 1
$$</span></p>
<p>where <span class="math-container">$+$</span> is the standard real addition operator. It isn't difficult to check that this obeys the first four Peano Axioms. But this set is not the natural numbers, nor is it isomorphic to them! Therefore, we add a fifth axiom, which (in plain English) states that, "All natural numbers are some eventual successor of <span class="math-container">$0$</span>."</p>
|
179,581 | <p><strong>Problem:</strong></p>
<p>(a). If $f$ is continuous on $[a,b]$ and $\int_a^x f(t) dt = 0$ for all $x \in [a,b]$, show that $f(x) = 0$ for all $x \in [a,b]$.</p>
<p>(b). If $f$ is continuous on $[a,b]$ and $\int_a^x f(t)dt = \int_x^b f(t)dt$ for all $x \in [a,b]$, show that $f(x)=0$ for all $x\in [a,b]$.</p>
<p><strong>Work so far:</strong></p>
<p>For (a), I think I am supposed to use Leibniz's rule and differentiate both sides to get $f(x)\frac{dx}{dx} - f(a)\frac{da}{dx} = 0$, so $f(x)-0=0$ and $f(x)=0$. For (b), I think I am supposed to use Leibniz's rule and differentiate both sides to get $f(x)\frac{dx}{dx} - f(a)\frac{da}{dx} = f(b)\frac{db}{dx} - f(x)\frac{dx}{dx}$, thus
$f(x) - 0 = 0 - f(x)$,
$2f(x) = 0$, and
$f(x) = 0$. Am I going about this correctly?</p>
| EuYu | 9,246 | <p>Some hints:</p>
<p>a) Recall the first part of the Fundamental Theorem of Calculus
$$f(x) = \frac{d}{dx}\int_a^x f(t)\ dt$$</p>
<p>b) Write $$\int_{a}^{x}f(t)\ dt + \int_{b}^{x}f(t)\ dt = 0$$ and do something similar to part a.</p>
|
3,366,064 | <p>I have a baking recipe that calls for 1/2 tsp of vanilla extract, but I only have a 1 tsp measuring spoon available, since the dishwasher is running. The measuring spoon is very nearly a perfect hemisphere. </p>
<p>My question is, to what depth (as a percentage of hemisphere radius) must I fill my teaspoon with vanilla such that it contains precisely 1/2 tsp of vanilla? Due to the shape, I obviously have to fill it more than halfway, but how much more?</p>
<p>(I nearly posted this in the Cooking forum, but I have a feeling the answer will involve more math knowledge than baking knowledge.)</p>
| TonyK | 1,508 | <p>It makes things a bit simpler if we turn your measuring spoon upside down, and model it as the solid set of points <span class="math-container">$\{(x,y,z):x^2+y^2+z^2\le 1, z\ge 0\}$</span>. The area of a cross-section at height <span class="math-container">$z$</span> is then <span class="math-container">$\pi(1-z^2)$</span>, so the volume of the spoon between the planes <span class="math-container">$z=0$</span> and <span class="math-container">$z=h$</span> is</p>
<p><span class="math-container">$$\pi\int_0^h(1-z^2)dz = \pi\left(h-\frac13h^3\right)$$</span></p>
<p>The volume of the hemisphere is <span class="math-container">$\frac23\pi$</span>, and we want the integral to be equal to half this, i.e.
<span class="math-container">$$\pi\left(h-\frac13h^3\right)=\frac{\pi}{3}$$</span>
or
<span class="math-container">$$h^3-3h+1=0$$</span>
This cubic equation doesn't factorize nicely, so we <a href="https://www.wolframalpha.com/input/?i=h%5E3-3h%2B1%3D0" rel="noreferrer">ask Wolfram Alpha</a> what it thinks. The relevant root is <span class="math-container">$h\approx 0.34730$</span>. Remember that we turned the spoon upside down, so you should fill it to a height of <span class="math-container">$1-h=0.65270$</span>, or <span class="math-container">$65.27\%$</span>.</p>
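Not part of the original answer — for readers without Wolfram Alpha, the relevant root is easy to obtain by bisection, since $h^3-3h+1$ changes sign on $[0,1]$:

```python
def f(h):
    return h**3 - 3*h + 1

# f(0) = 1 > 0 and f(1) = -1 < 0, so bisection on [0, 1] finds the root.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

h = (lo + hi) / 2
fill_depth = 1 - h  # remember the spoon was modeled upside down
print(round(h, 5), round(fill_depth, 5))  # ~0.3473 and ~0.6527, i.e. fill ~65.27%
```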
|
3,366,064 | <p>I have a baking recipe that calls for 1/2 tsp of vanilla extract, but I only have a 1 tsp measuring spoon available, since the dishwasher is running. The measuring spoon is very nearly a perfect hemisphere. </p>
<p>My question is, to what depth (as a percentage of hemisphere radius) must I fill my teaspoon with vanilla such that it contains precisely 1/2 tsp of vanilla? Due to the shape, I obviously have to fill it more than halfway, but how much more?</p>
<p>(I nearly posted this in the Cooking forum, but I have a feeling the answer will involve more math knowledge than baking knowledge.)</p>
| Community | -1 | <p>Alternative: use two teaspoons.</p>
<p>Use water as you develop your skill. Fill tsp A, and pour into tsp B until the contents appear equal. Each now contains half a tsp. And now you know what half a tsp looks like in practice.</p>
<p>And you don't have to calculate cosines against thumb-sized hardware.</p>
|
2,912,570 | <p>Let $X$ and $Y$ be jointly normal standard random variables with correlation $-0.72$. Compute $E(3X+Y\mid X-Y=1)$.</p>
<p>My solution: Conditioning on $X-Y=1$, we have $E(3X+Y\mid X-Y=1) = E(4Y+3\mid X-Y=1) = 3+4E(Y\mid X-Y=1) = 3$.</p>
<p>(1) Is my solution correct? My intuition is that the conditional density of $Y$ remains symmetric about 0 conditioning on $X-Y=1$.</p>
<p>(2) How to solve $E(Y\mid X-Y=1)$ more rigorously?</p>
<p>Thank you, guys!</p>
| heropup | 118,193 | <p>Unfortunately, $$\operatorname{E}[Y \mid X-Y = 1] \ne 0.$$ You can see this if you look at this picture:</p>
<p><a href="https://i.stack.imgur.com/YMhJv.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YMhJv.jpg" alt="enter image description here"></a></p>
<p>Here, the ellipses represent curves of constant bivariate density, and the blue line is the equation $X - Y = 1$. Consequently, along this line, the density is symmetric about $X + Y = 0$, which means that the expected value of $Y$ given that $X - Y = 1$ is $-1/2$, not $0$.</p>
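This can be cross-checked with the standard regression formula for jointly normal, mean-zero variables, $E[Y\mid Z=z]=\frac{\operatorname{Cov}(Y,Z)}{\operatorname{Var}(Z)}\,z$ (a sketch, not part of the answer):

```python
rho = -0.72

# With Z = X - Y: Var(Z) = 2 - 2*rho, Cov(Y, Z) = rho - 1, Cov(X, Z) = 1 - rho
var_z = 2 - 2 * rho
e_y = (rho - 1) / var_z * 1   # E[Y | X - Y = 1] = -1/2 (for any rho < 1)
e_x = (1 - rho) / var_z * 1   # E[X | X - Y = 1] = +1/2

# Hence E[3X + Y | X - Y = 1] = 3*(1/2) + (-1/2) = 1, not 3.
answer = 3 * e_x + e_y
print(e_y, answer)  # -0.5 1.0
```

Equivalently, substituting $X = Y + 1$ on the conditioning event gives $E[4Y+3\mid X-Y=1] = 3 + 4\cdot(-\tfrac12) = 1$, consistent with the picture above.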
|