| qid | question | author | author_id | answer |
|---|---|---|---|---|
2,203,066 | <p>The definition I have is the following:</p>
<blockquote>
<p>A vector space V is said to be <strong>finite-dimensional</strong> if there is a finite set of vectors in V that spans V and is said to be <strong>infinite-dimensional</strong> if no such set exists.</p>
</blockquote>
<p>However, with this definition I can't determine whether the vector space $\mathbb{R}^3$ is finite-dimensional or infinite-dimensional. (I am assuming that it is finite-dimensional, since the dimension of $\mathbb{R}^3$ is $3$.)</p>
<p>Going with my thought process, though, I know that $(1,0,0),(0,1,0),(0,0,1)$ spans $\mathbb{R}^3$. However we can also check that $(2,0,0),(0,2,0),(0,0,2)$ spans $\mathbb{R}^3$. Also note that $(3,0,0),(0,3,0),(0,0,3)$ spans $\mathbb{R}^3$. This process could be continued over and over to show that there are infinitely many vectors that span $\mathbb{R}^3$. </p>
<p>Wouldn't this mean that $\mathbb{R}^3$ is infinite-dimensional? Because there isn't a finite number of vectors that span $\mathbb{R}^3$. (Again I want to say this isn't the case and that there is something I am overlooking.) </p>
| P Vanchinathan | 28,915 | <p>Definition: A city is said to be food-friendly if one can get three different cuisines in a single restaurant.</p>
<p>In London, there is Restaurant A, which serves Indian, Chinese and Continental. And also Restaurant B, which serves Thai, Mexican and Greek food. And Restaurant C, serving Indian, Russian and Japanese.</p>
<p>Now, does London meet the above definition of a food-friendly city? Or do you say there are three restaurants A, B, C instead of a single restaurant, so London is not food-friendly?</p>
|
2,507,864 | <blockquote>
<p>Check if for any two set families $\mathcal A $ and $\mathcal B $ the
following is true: $\bigcup (\mathcal A \cap \mathcal B) = \bigcup
\mathcal A \cap \bigcup \mathcal B$</p>
</blockquote>
<p>First of all I considered an example: $\mathcal A = \{ \{1,2\}, \{1,3\} \}$ and $\mathcal B = \{\{1,2\},\{3,5\}\}$<br>
Now, $\mathcal A \cap \mathcal B = \{\{1,2 \}\}$, and so $\bigcup(\mathcal A \cap \mathcal B) =\{1,2 \}$. On the other hand, $\bigcup \mathcal A= \{1,2,3 \} $ and $\bigcup \mathcal B = \{1,2,3,5 \}$, so their intersection is $\{1,2,3 \}$, and so my thesis is that the statement is not true. But now, when I start to evaluate it using the Axiom of Extensionality, I get: $$(\exists X)((X\in \mathcal A \land X\in \mathcal B )\land x \in X)$$
$$\iff(\exists X)(X \in \mathcal A\land x\in X) \land (\exists X)(X \in \mathcal B \land x\in X)$$
$$\iff x \in \bigcup \mathcal A \land x \in \bigcup \mathcal B \iff x \in (\bigcup \mathcal A \cap \bigcup \mathcal B)$$
So, on the one hand the example I provided shows that the theorem is false, but on the other hand, the definitions says that it is actually true. Therefore I must have erred somewhere in my reasoning but I can't see where. Could you steer me towards this error?</p>
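<p>The counterexample can be checked mechanically. Here is a small Python sketch (not part of the original post) that models the set families as sets of frozensets:</p>

```python
from functools import reduce

# the two set families from the example
A = {frozenset({1, 2}), frozenset({1, 3})}
B = {frozenset({1, 2}), frozenset({3, 5})}

def big_union(family):
    """Union of all members of a family of sets."""
    return reduce(set.union, (set(s) for s in family), set())

lhs = big_union(A & B)               # U(A ∩ B)
rhs = big_union(A) & big_union(B)    # (UA) ∩ (UB)

assert lhs == {1, 2}
assert rhs == {1, 2, 3}
assert lhs != rhs and lhs <= rhs     # only the inclusion U(A ∩ B) ⊆ UA ∩ UB holds
```

<p>The check confirms the counterexample: the left side is only ever a <em>subset</em> of the right side. Indeed, the first $\iff$ in the displayed derivation only holds from left to right: knowing that some $X\in\mathcal A$ contains $x$ and some $X\in\mathcal B$ contains $x$ does not produce a single $X$ lying in both families.</p>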
| Kim Jong Un | 136,641 | <p>Consider the concave functions $f(x)=-x^2+3x$ and $g(x)=-x$. Their pointwise maximum $h(x)\equiv\max\{f(x),g(x)\}$ is, however, <em>not</em> concave.</p>
<p><a href="https://i.stack.imgur.com/AF9Kj.png" rel="noreferrer"><img src="https://i.stack.imgur.com/AF9Kj.png" alt="enter image description here"></a></p>
|
2,826,313 | <p>So I have been given the following equation : $z^6-5z^3+1=0$. I have to calculate the number of zeros (given $|z|>2$). I already have the following:</p>
<p>$|z^6| = 64$ and $|-5z^3+1| \leq 41$ for $|z|=2$. By Rouché's theorem, since $|z^6|>|-5z^3+1|$ on $|z|=2$ and $z^6$ has six zeros inside the disk counted with multiplicity (one zero of order six), the function $z^6-5z^3+1$ also has six zeros there. However, how do I count the zeros <em>outside</em> the disk? Is there a standard way to do this? </p>
<p>Thanks in advance.</p>
| José Carlos Santos | 446,262 | <p>Since it is a polynomial of degree $6$ and since it has $6$ zeros inside the disk, it has no zeros outside the disk.</p>
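<p>This conclusion is easy to confirm numerically; a quick NumPy check (not part of the original answer):</p>

```python
import numpy as np

# z^6 - 5 z^3 + 1: coefficients from degree 6 down to 0
roots = np.roots([1, 0, 0, -5, 0, 0, 1])

assert len(roots) == 6
assert all(abs(r) < 2 for r in roots)   # every zero lies inside |z| < 2
```

<p>(Substituting $w=z^3$ gives $w^2-5w+1=0$, so $|z|=|w|^{1/3}\approx 1.686$ or $0.594$, both less than $2$.)</p>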
|
2,136,791 | <p>I got a following minimization problem</p>
<p>$$\min_{\mathbf{X}^{(1)}, \, \mathbf{X}^{(2)}} \;\left\| \mathbf{B} - \mathbf{A} (\mathbf{X}^{(1)} \odot \mathbf{X}^{(2)}) \right\|^{2}_{F},$$</p>
<p>where the matrices $\mathbf{B}\in \mathbb{R}^{100 \times 3}$, $\mathbf{A}\in \mathbb{R}^{100\times 36}$, $\mathbf{X}^{(1)}\in \mathbb{R}^{9 \times 3}$ and $\mathbf{X}^{(2)}\in \mathbb{R}^{4 \times 3}$. The operation $\odot$ refers to the <a href="https://en.wikipedia.org/wiki/Kronecker_product#Khatri.E2.80.93Rao_product" rel="nofollow noreferrer">Khatri-Rao product</a>.</p>
<p>Given matrices $\mathbf{A}$ and $\mathbf{B}$, my problem is to find out matrices $\mathbf{X}^{(1)}$ and $\mathbf{X}^{(2)}$ such that</p>
<p>$$\mathbb{f} = \left\| \mathbf{B} - \mathbf{A} (\mathbf{X}^{(1)} \odot \mathbf{X}^{(2)}) \right\|^{2}_{F}$$</p>
<p>is minimized.</p>
<p>My idea is to compute the gradient of $\mathbb{f}$ with respect to $\mathbf{X}^{(1)}$ and $\mathbf{X}^{(2)}$ respectively.</p>
<p>My question is, how to do differentiation with respect to a matrix? I have consulted a <a href="http://www4.ncsu.edu/~pfackler/MatCalc.pdf" rel="nofollow noreferrer">reference</a> but the situation seems different because it involves a Khatri-rao product in $\mathbb{f}$. Thanks in advance.</p>
<p>$\dfrac{\partial \mathbf{f}}{\partial \mathbf{X}^{(1)}}$ and $\dfrac{\partial \mathbf{f}}{\partial \mathbf{X}^{(2)}} $?</p>
| greg | 357,854 | <p><span class="math-container">$\def\p{\partial} \def\bb{\mathbb}$</span>
Given two matrices with the same number of columns, e.g.
<span class="math-container">$$A\in{\bb R}^{a\times n} \qquad B\in{\bb R}^{b\times n}$$</span>
their Khatri-Rao <span class="math-container">$(\boxtimes)$</span> product can be written in terms of the Kronecker <span class="math-container">$(\otimes)$</span> and Hadamard <span class="math-container">$(\odot)$</span> products and all-ones vectors <span class="math-container">${\tt1}_p\in{\bb R}^{p}\;$</span>
<span class="math-container">$$\eqalign{
A\boxtimes B &= (A\otimes {\tt1}_b)\odot({\tt1}_a\otimes B) \\
}$$</span>
We'll also need one more product, the trace/Frobenius product, i.e.
<span class="math-container">$$\eqalign{
A:C &= {\rm Tr}(A^TC)\qquad &\big({\rm for}\;\, C\in{\bb R}^{a\times n}\big) \\
A:C &= C:A \\
A:A &= \big\|A\big\|^2_F &\big({\rm useful\,identity}\big) \\
A:(C\odot M) &= (A\odot C):M &\big({\rm also\,useful}\big) \\
}$$</span></p>
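<p>The first identity is easy to verify numerically; a small NumPy sketch (the column-wise Khatri-Rao definition is assumed):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, n = 3, 4, 2
A = rng.normal(size=(a, n))
B = rng.normal(size=(b, n))

# column-wise Khatri-Rao: the j-th column is the Kronecker product of the j-th columns
kr = np.column_stack([np.kron(A[:, j], B[:, j]) for j in range(n)])

# A ⊠ B = (A ⊗ 1_b) ∘ (1_a ⊗ B)
rhs = np.kron(A, np.ones((b, 1))) * np.kron(np.ones((a, 1)), B)

assert np.allclose(kr, rhs)
```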
<p>Since this derivation is going to be complicated enough without the distraction of superscripts, let's define the following matrices for ease of typing
<span class="math-container">$$\eqalign{
X &= X^{(1)}\in{\bb R}^{9\times 3}\qquad &Y = X^{(2)}\in{\bb R}^{4\times 3} \\
Z &= X\boxtimes Y\in{\bb R}^{36\times 3}\qquad &W = A^T(AZ-B)\in{\bb R}^{36\times 3} \\
w &= {\rm vec}(W)\qquad &{\cal W} = {\rm Diag}(w)\in{\bb R}^{108\times 108} \\
}$$</span>
Write the objective function in terms of these variables
then calculate the differential.
<span class="math-container">$$\eqalign{
f &= \tfrac 12(AZ-B):(AZ-B) \\
df &= (AZ-B):(A\,dZ) \\
&= W:dZ \\
&= W:(dX\boxtimes Y) + W:(X\boxtimes dY) \\
}$$</span>
Now calculate the gradient with respect to <span class="math-container">$X$</span>.
<span class="math-container">$$\eqalign{
df &= W:(dX\boxtimes Y) \\
&= W:(dX\otimes{\tt1}_4)\odot({\tt1}_9\otimes Y) \\
&= W\odot({\tt1}_9\otimes Y):(dX\otimes{\tt1}_4) \\
&= w\odot{\rm vec}({\tt1}_9\otimes Y):{\rm vec}(dX\otimes{\tt1}_4) \\
&= {\cal W}\cdot{\rm vec}({\tt1}_9\otimes Y):{\rm vec}(dX\otimes{\tt1}_4) \\
&= {\cal W}\cdot{\rm vec}({\tt1}_9\otimes Y)
:(I_{27}\otimes{\tt1}_4)\cdot{\rm vec}(dX) \\
&= (I_{27}\otimes{\tt1}_4^T)\cdot{\cal W}\cdot{\rm vec}({\tt1}_9\otimes Y):dx \\
\frac{\p f}{\p x}
&= (I_{27}\otimes{\tt1}_4^T)\cdot{\cal W}\cdot{\rm vec}({\tt1}_9\otimes Y) \\
}$$</span>
And with respect to <span class="math-container">$Y$</span>.
<span class="math-container">$$\eqalign{
df
&= {\cal W}\cdot{\rm vec}(X\otimes{\tt1}_4)
:{\rm vec}({\tt1}_9\otimes dY) \\
&= {\cal W}\cdot{\rm vec}(X\otimes{\tt1}_4)
:({\tt1}_9\otimes I_{12})\cdot{\rm vec}(dY) \\
&= ({\tt1}_9^T\otimes I_{12})\cdot{\cal W}\cdot{\rm vec}(X\otimes{\tt1}_4):dy \\
\frac{\p f}{\p y}
&= ({\tt1}_9^T\otimes I_{12})\cdot{\cal W}\cdot{\rm vec}(X\otimes{\tt1}_4) \\
}$$</span>
The gradients can be converted between vector and matrix forms, e.g.
<span class="math-container">$$\eqalign{
\frac{\p f}{\p y} &= {\rm vec}\left(\frac{\p f}{\p Y}\right)
\qquad\iff\qquad
\frac{\p f}{\p Y} = {\rm unvec}\left(\frac{\p f}{\p y}\right) \\\\
}$$</span>
These results are in a different form than Florian's,
but they should be equivalent.</p>
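<p>The gradient with respect to $X$ can be checked against finite differences. The NumPy sketch below (not part of the original answer) uses smaller, hypothetical shapes, a column-major <code>vec</code>, and the factor $\tfrac12$ carried by the derivation above:</p>

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 3, 3, 4                       # X: m x n, Y: p x n (toy sizes)
A = rng.normal(size=(10, m * p))
B = rng.normal(size=(10, n))
X = rng.normal(size=(m, n))
Y = rng.normal(size=(p, n))

def kr(X, Y):                           # column-wise Khatri-Rao product
    return np.column_stack([np.kron(X[:, j], Y[:, j]) for j in range(X.shape[1])])

def f(X):                               # objective with the 1/2 factor
    return 0.5 * np.linalg.norm(A @ kr(X, Y) - B, 'fro') ** 2

vec = lambda M: M.flatten(order='F')    # column-major vectorization

# gradient formula: (I_{mn} ⊗ 1_p^T) · Diag(vec(W)) · vec(1_m ⊗ Y)
W = A.T @ (A @ kr(X, Y) - B)
g = np.kron(np.eye(m * n), np.ones((1, p))) @ (vec(W) * vec(np.kron(np.ones((m, 1)), Y)))

# central finite differences on vec(X)
num = np.zeros(m * n)
h = 1e-6
for j in range(n):
    for i in range(m):
        E = np.zeros_like(X)
        E[i, j] = h
        num[j * m + i] = (f(X + E) - f(X - E)) / (2 * h)

assert np.allclose(g, num, rtol=1e-4, atol=1e-6)
```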
|
632,043 | <p>tl;dr: why is raising by $(p-1)/2$ not always equal to $1$ in $\mathbb{Z}^*_p$?</p>
<p>I was studying the proof of why generators are not quadratic residues, and I stumbled on one step of the proof that I thought might make a good question to help other people in the future when raising powers modulo $p$.</p>
<p>Let $p$ be prime and as usual, $\mathbb{Z}^*_p$ be the integers mod $p$ with inverses.</p>
<p>Consider raising the generator $g$ to the power of $(p-1)/2$:</p>
<p>$$g^{(p-1)/2}$$</p>
<p>then, I was looking for a somewhat rigorous argument (or very good intuition) on why that is <strong>not always</strong> equal to $1$ by Fermat's little theorem (when I say always, I mean even when you do NOT assume the generator is a quadratic residue).</p>
<p>i.e. why is this logic flawed:</p>
<p>$$ g^{(p-1)/2} = \left(g^{p-1}\right)^{\frac{1}{2}} = 1^{\frac{1}{2}} \pmod{p}$$</p>
<p>To resolve the last step, find an $x$ such that $x \equiv 1 \pmod{p}$. Such an $x$ is obviously $1$, which completes the (wrong) proof that raising anything to $(p-1)/2$ is always equal to $1$. This obviously should not be the case, especially for a generator, since the only power of a generator that should yield $1$ is $p-1$; otherwise, it couldn't generate every element of the cyclic group. </p>
<p>The reason I thought this step was illegal is that you can only raise to integer powers modulo $p$, and $1/2$ is obviously not an integer. Also, if I recall correctly, not every number in the group has a $k$-th root, right? And raising to the power $1/2$ actually just means taking a square root... right? Or maybe it was a notational confusion, where the power $1/2$ actually just means a function/algorithm that "finds" a $z$ such that $z^2 \equiv x \pmod{p}$. So is the illegal step claiming that you can separate the powers because, at that step, you would be raising to a power not allowed in the set?</p>
| André Nicolas | 6,312 | <p>Note that $1$ has <strong>two</strong> square roots modulo $p$ if $p\gt 2$. </p>
<p>So from $g^{p-1}\equiv 1\pmod{p}$, we conclude that
$$\left(g^{(p-1)/2}\right)^2\equiv 1\pmod{p},$$
and therefore
$$g^{(p-1)/2}\equiv \pm 1\pmod{p}.$$</p>
<p>If $g$ is a primitive root of $p$, and $p\gt 2$, then $g^{(p-1)/2}\equiv 1\pmod{p}$ is not possible, so $g^{(p-1)/2}\equiv -1\pmod{p}$.</p>
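<p>A quick numeric illustration of this dichotomy (not from the original answer), taking $3$ as a primitive root of $7$:</p>

```python
p, g = 7, 3

# 3 really is a primitive root: its powers hit every nonzero residue mod 7
assert {pow(g, k, p) for k in range(1, p)} == set(range(1, p))

# for the primitive root, g^((p-1)/2) is -1 mod p, not +1
assert pow(g, (p - 1) // 2, p) == p - 1

# a quadratic residue (here g^2 = 2) gives +1 instead
assert pow(g * g % p, (p - 1) // 2, p) == 1
```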
|
1,090,620 | <p>I don't know how to solve this limit</p>
<p>$$ \lim_{y\to0} \frac{x e^ { \frac{-x^2}{y^2}}}{y^2}$$</p>
<p>$\frac{1}{e^ { \frac{x^2}{y^2}}} \to 0$</p>
<p>but $\frac{x}{y^2} \to +\infty$</p>
<p>Does this limit present the indeterminate form $0 \cdot \infty$?</p>
| syockit | 53,159 | <p>Here's one sloppy way to work it:
Assume $x>0$, let
$$k= \lim_{y\to0} \frac{xe^{-\frac{x^2}{y^2}}}{y^2}$$
Take natural logarithm of both sides:
$$\ln(k)= \lim_{y\to0} \left[\ln{x}-2\ln{y}-\frac{x^2}{y^2}\right]=-\infty$$
Therefore
$$ k = e^{-\infty} = 0 $$
For $x<0$, substitute $x\to -x$, and you'll end up with:
$$ -k = e^{-\infty} = 0 $$</p>
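<p>Numerically, the exponential factor indeed overwhelms the $1/y^2$ growth; a quick Python check for fixed $x=1$ (not part of the original answer):</p>

```python
import math

x = 1.0
vals = [x * math.exp(-x**2 / y**2) / y**2 for y in (0.5, 0.2, 0.1)]

assert vals[0] > vals[1] > vals[2]   # shrinking as y -> 0
assert vals[2] < 1e-40               # e^{-100} / 0.01 is astronomically small
```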
|
513,500 | <p>Suppose $f,g$ are analytic functions in domain $D$.If $fg=0$, I want to prove either $f(z)=0$ or $g(z)=0$. </p>
| njguliyev | 90,209 | <p>Hint: If $f(z_0) \ne 0$ then $f(z) \ne 0$ at some neighborhood of $z_0$.</p>
|
50,994 | <p>I am trying to calculate the following integral. </p>
<pre><code>sigma1 = 10.0; sigma2 = 5.0; delta = 0.5;
t[x1_, y1_, x_, y_] := 100*HeavisideLambda[sigma1^-1*(x - x1), sigma2^-1*(y - y1)];
B2[x1_, y1_, x_, y_] := HeavisideTheta[(delta/2)^2 - (x - x1)^2, (delta/2)^2 - (y - y1)^2];
trans[x1_, y1_, x2_, y2_] :=
NIntegrate[B2[x1, y1, xz, yz]*t[xz, yz, xp, yp]*
(B2[x2, y2, xz, yz] - B2[x2, y2, xp, yp]),
{xp, x2 - 2.0*sigma1, x2 + 2.0*sigma1},
{yp, y2 - 2.0*sigma2, y2 + 2.0*sigma2},
{xz, x1 - 0.5*delta, x1 + 0.5*delta},
{yz, y1 - 0.5*delta, y1 + 0.5*delta},
WorkingPrecision -> 12, AccuracyGoal -> 8, MinRecursion -> 8, MaxRecursion -> 100];
</code></pre>
<p>I am interested in the value of the integral for the following inputs:</p>
<pre><code>trans[0, 0, delta, 0]
</code></pre>
<p>Here is my problem: for values of delta greater than 0.5 (I tried 1.0, 0.9, 0.8, 0.7, 0.6, 0.515), the result is a negative number, and it takes some time for <em>Mathematica</em> to come up with the result. For any value of delta smaller than 0.5, <em>Mathematica</em> immediately returns 0, and doesn't give any hints about what is wrong.</p>
<p>This is a part of a refinement study and I need to choose smaller and smaller values for delta. Do you know how I can make this work?</p>
| Michael E2 | 4,999 | <p>The Heaviside functions are essentially piecewise functions, and <code>NIntegrate</code> knows how to handle <code>Piecewise</code> functions but not Heaviside functions. In particular, it will analyze the domain of <code>Piecewise</code> functions and adjust its sampling accordingly. Here are two rules for conversion, ignoring boundary points which won't affect the integral anyway:</p>
<pre><code>heaviside2piecewise = {
HeavisideTheta -> (Piecewise[{{1, #1 > 0 && #2 > 0}}, 0] &),
HeavisideLambda -> (Piecewise[{{#1 + 1, -1 < #1 < 0}, {1 - #1, 0 <= #1 < 1}}, 0] *
Piecewise[{{#2 + 1, -1 < #2 < 0}, {1 - #2, 0 <= #2 < 1}}, 0] &)};
</code></pre>
<p>Then we can apply them to the integrand:</p>
<pre><code>trans[x1_, y1_, x2_, y2_] := NIntegrate[
B2[x1, y1, xz, yz] * t[xz, yz, xp, yp] * (B2[x2, y2, xz, yz] - B2[x2, y2, xp, yp]) /.
heaviside2piecewise,
{xp, x2 - 2.0*sigma1, x2 + 2.0*sigma1},
{yp, y2 - 2.0*sigma2, y2 + 2.0*sigma2},
{xz, x1 - 0.5*delta, x1 + 0.5*delta},
{yz, y1 - 0.5*delta, y1 + 0.5*delta}]
</code></pre>
<p>It evaluates rather quickly, too:</p>
<pre><code>trans[0, 0, delta, 0]
(* -5.73958 *)
Needs["GeneralUtilities`"]; (* V10 *)
trans[0, 0, delta, 0] // AccurateTiming
(* 0.0757289 *)
</code></pre>
<p><strong>Exact solution</strong></p>
<p>In fact, this integral may be solved exactly, with exact values for the parameters.</p>
<pre><code>transExact[x1_, y1_, x2_, y2_] := Integrate[
B2[x1, y1, xz, yz] * t[xz, yz, xp, yp] * (B2[x2, y2, xz, yz] - B2[x2, y2, xp, yp]) /.
heaviside2piecewise,
{xp, x2 - 2*sigma1, x2 + 2*sigma1},
{yp, y2 - 2*sigma2, y2 + 2*sigma2},
{xz, x1 - 1/2*delta, x1 + 1/2*delta},
{yz, y1 - 1/2*delta, y1 + 1/2*delta}];
transExact[0, 0, delta, 0]
(* -(551/96) *)
</code></pre>
|
17,143 | <p>My next project I'd like to start working on is Domain Coloring. I am aware of the beautiful discussion at:</p>
<p><a href="https://mathematica.stackexchange.com/questions/7275/how-can-i-generate-this-domain-coloring-plot">How can I generate this "domain coloring" plot?</a></p>
<p>And I am studying it. However, a lot of the articles on domain coloring refer back to Hans Lundmark's page at:</p>
<p><a href="http://www.mai.liu.se/~halun/complex/domain_coloring-unicode.html" rel="nofollow noreferrer">http://www.mai.liu.se/~halun/complex/domain_coloring-unicode.html</a></p>
<p>So, I would like to begin my work by using Mathematica to draw these three images based on Hans' notes. I'd appreciate if anyone can provide some code that will produce these images, as I could use it to start my study of the rest of Hans' page.</p>
<p><img src="https://i.stack.imgur.com/FuqMb.jpg" alt="arg"></p>
<p><img src="https://i.stack.imgur.com/9S0I6.jpg" alt="abs"></p>
<p><img src="https://i.stack.imgur.com/8cqhp.png" alt="blend"></p>
<p>A very small adjustment. Still learning.</p>
<pre><code>g[{f_, cf_}] :=
DensityPlot[f, {x, -1, 1}, {y, -1, 1}, PlotPoints -> 51,
ColorFunction -> cf, Frame -> False];
g /@ {{Arg[-(x + I y)], "SolarColors"},
{Mod[Log[2, Abs[x + I y]], 1], GrayLevel}}
ImageMultiply @@ %
</code></pre>
<p><img src="https://i.stack.imgur.com/115CH.png" alt="scheme-blend-1"></p>
<p>Not sure where to put my current question, so I'll update here. Just came back to visit and discovered some wonderful answers at the bottom of this list. I do understand the opening code:</p>
<pre><code>f[z_] := (z + 2)^2*(z - 1 - 2 I)*(z + I)
paint[z_] :=
Module[{x = Re[z], y = Im[z]},
color = Blend[{Black, Red, Orange, Yellow},
Rescale[ArcTan[-x, -y], {-Pi, Pi}]];
shade = Mod[Log[2, Abs[x + I y]], 1];
Darker[color, shade/4]]
</code></pre>
<p>But then I encounter difficulty with the following code:</p>
<pre><code>ParametricPlot[{x, y}, {x, -3, 3}, {y, -3, 3},
ColorFunctionScaling -> False,
ColorFunction -> Function[{x, y}, paint[f[x + y I]]], Frame -> False,
Axes -> False, MaxRecursion -> 1, PlotPoints -> 50, Mesh -> 400,
PlotRangePadding -> 0, MeshStyle -> None, ImageSize -> 300]
</code></pre>
<p>I'm good with the first few lines. Looks like ParametricPlot is plotting points, where x and y both range from -3 to 3 (correct me if I am wrong). I also understand the ColorFunctionScaling and the ColorFunction lines. I understand Axes, PlotRangePadding, MeshStyle, and ImageSize. Where I am having trouble is with what PlotPoints->50 and Mesh->400 are doing. </p>
<p>First of all, my image size is 300. What does PlotPoints->50 mean? Does that mean it will sample an array of 50x50 points out of 300x300 and scale the results to fit in the domain [-3,3]x[-3,3]? My next question is, then those points get colored? And if so, how are the remaining points in the image colored? For example, I tried:</p>
<pre><code>Table[ParametricPlot[{x, y}, {x, -3, 3}, {y, -3, 3},
ColorFunctionScaling -> False,
ColorFunction -> Function[{x, y}, paint[f[x + y I]]],
PlotPoints -> n, MeshStyle -> None], {n, 10, 50, 10}]
</code></pre>
<p>And the images got a little sharper as PlotPoints->n increased. </p>
<p>Here's another question. What does Mesh->400 do in this situation. For example, I tried lowering the mesh number:</p>
<pre><code>ParametricPlot[{x, y}, {x, -3, 3}, {y, -3, 3},
ColorFunctionScaling -> False,
ColorFunction -> Function[{x, y}, paint[f[x + y I]]], Frame -> False,
Axes -> False, MaxRecursion -> 1, PlotPoints -> 50, Mesh -> 100,
PlotRangePadding -> 0, MeshStyle -> None, ImageSize -> 300]
</code></pre>
<p>And was completely surprised that it had an effect on the image, particularly when MeshStyle->None. Here's the image I get:</p>
<p><img src="https://i.stack.imgur.com/4jqEj.png" alt="today"></p>
<p>Why does setting Mesh->100 decrease the sharpness of the image?</p>
<p>One final question I have regards adding the mesh lines. Simon suggested something like <code>Mesh->{Range[-5,5],Range[-5,5]}, MeshStyle->Opacity[0.5], MeshFunctions->{(Re@f[#1+I #2]&),(Im@f[#1+I #2]&)}</code>, and cormullion added them to produce a beautiful result, but I tried this:</p>
<pre><code>ParametricPlot[{x, y}, {x, -3, 3}, {y, -3, 3},
ColorFunctionScaling -> False,
ColorFunction -> Function[{x, y}, paint[f[x + y I]]], Frame -> False,
Axes -> False, MaxRecursion -> 1, PlotPoints -> 50,
Mesh -> {Range[-5, 5], Range[-5, 5]}, PlotRangePadding -> 0,
MeshStyle -> Opacity[0.5],
MeshFunctions -> {(Re@f[#1 + I #2] &), (Im@f[#1 + I #2] &)},
ImageSize -> 300]
</code></pre>
<p>And got this resulting image.</p>
<p><img src="https://i.stack.imgur.com/zamAO.png" alt="today2"></p>
<p>So I am clearly missing something. Maybe someone could post the code that gives cormullion's last image?</p>
<p>OK, just purchased and installed Presentations package. Tried this:</p>
<pre><code>With[{f = Function[z, (z + 2)^2 (z - 1 - 2 I) (z + I)],
zmin = -2 - 2 I, zmax = 2 + 2 I,
colorFunction = Function[arg, HotColor[Rescale[arg, {-Pi, Pi}]]],
imgSize = 400},
Draw2D[{ComplexDensityDraw[Arg[f[z]], {z, zmin, zmax},
ColorFunction -> colorFunction, ColorFunctionScaling -> False,
Mesh -> 50, MeshFunctions -> {Function[{x, y}, Abs[f[x + I y]]]},
PlotPoints -> {50, 50}]}, Frame -> True, FrameLabel -> {Re, Im},
PlotLabel -> Row[{"Arg coloring and Abs mesh of ", f[z]}],
RotateLabel -> False, BaseStyle -> 12, ImageSize -> imgSize]]
</code></pre>
<p>But got this colorless image.</p>
<p><img src="https://i.stack.imgur.com/xSlX8.png" alt="today3"></p>
<p>Any thoughts on how to fix this?</p>
| cormullion | 61 | <p>I was hoping that this question would get some good answers, but it must have been asked at a time when everyone was feeling a bit curmudgeonly after Christmas... :) </p>
<p>My understanding of <a href="http://www.mai.liu.se/~halun/complex/domain_coloring-unicode.html" rel="noreferrer">Hans Lundmark's linked page</a> is that there's basically a painting method that colors each point of the final plot depending on the value of a function which is applied to xy coordinates. So you don't have to start off with these images, you just specify how the points are going to be painted. (Of course, you can start off with these images and transform them using <code>ImageApply</code>, but that's not what he had in mind, I think.)</p>
<p>Unfortunately I couldn't find many <code>ColoringFunction</code>s that accepted x and y coordinates - some of the plot functions accept only a single value for coloring. But apparently <code>ParametricPlot</code> can take a <code>ColorFunction</code> that processes x and y coordinates. So here's a function, a painting method, and a <code>ParametricPlot</code> that draws something similar to what the linked page is doing:</p>
<pre><code>f[z_] := z ^3;
paint[z_] :=
Module[{x = Re[z], y = Im[z]},
color =
Blend[{Black, Red, Orange, Yellow},
Rescale[ArcTan[-x , -y], {-Pi, Pi}]];
shade = Mod[Log[2, Abs[x + I y]], 1];
Darker[color, shade/4]]
ParametricPlot[
{x, y}, {x, -2, 2}, {y, -2, 2},
ColorFunctionScaling -> False,
ColorFunction -> Function[{x, y}, paint[f[x + y I]]],
Frame -> False, Axes -> False, MaxRecursion -> 1, PlotPoints -> 50,
Mesh -> 400, PlotRangePadding -> 0,
MeshStyle -> None, ImageSize -> 300
]
</code></pre>
<p><img src="https://i.stack.imgur.com/d37nq.png" alt="domain coloring"></p>
<p>which doesn't look <em>too</em> different from the original (given some fiddling and fudging, perhaps):</p>
<p><img src="https://i.stack.imgur.com/y0nqN.png" alt="original"></p>
<p>and his fourth degree polynomial (whatever that might be):</p>
<p>f(z) = (z + 2)<sup>2</sup> (z − 1 − 2i) (z + i),</p>
<p>converts to this in <em>Mathematica</em>:</p>
<pre><code>f[z_] := (z + 2)^2 (z - 1 - 2 I) (z + I);
</code></pre>
<p>and looks like this:</p>
<p><img src="https://i.stack.imgur.com/IizLf.png" alt="fourth degree polynomial"></p>
<p>compared with his original:</p>
<p><img src="https://i.stack.imgur.com/n7pVC.png" alt="fourth degree polynomial"></p>
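<p>For readers without <em>Mathematica</em>, the same painting scheme can be sketched with NumPy (a hypothetical port, not part of the original answer; the four blend stops and the logarithmic shading mimic <code>paint</code> above):</p>

```python
import numpy as np

def paint(z):
    # hue parameter: Rescale[ArcTan[-x, -y], {-Pi, Pi}] mapped to [0, 1]
    t = (np.arctan2(-z.imag, -z.real) + np.pi) / (2 * np.pi)
    # brightness: fractional part of log2 |z| (assumes z != 0 on the grid)
    shade = np.mod(np.log2(np.abs(z)), 1.0)
    # linear blend through Black -> Red -> Orange -> Yellow
    stops = np.array([[0, 0, 0], [1, 0, 0], [1, 0.5, 0], [1, 1, 0]])
    idx = np.clip(t * 3, 0, 3 - 1e-9)
    lo = idx.astype(int)
    frac = (idx - lo)[..., None]
    color = (1 - frac) * stops[lo] + frac * stops[lo + 1]
    return color * (1 - shade / 4)[..., None]   # Darker[color, shade/4]

n = 200
xs = np.linspace(-2, 2, n)
x, y = np.meshgrid(xs, xs)
img = paint((x + 1j * y) ** 3)                  # f(z) = z^3

assert img.shape == (n, n, 3)
assert img.min() >= 0 and img.max() <= 1        # valid RGB values
```

<p>(<code>matplotlib.pyplot.imshow(img, extent=(-2, 2, -2, 2))</code> would then display it.)</p>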
<p>Now, I know what you're all going to say — where are the mesh lines? Well, I've been looking at the mesh lines documentation for some minutes now, and it makes little sense to me yet. That's something which you can perhaps look into - an exercise for the questioner...? Or perhaps a more experienced answerer.</p>
<p><strong>Update</strong>: Simon's mesh functions get us closer to the original:</p>
<p><img src="https://i.stack.imgur.com/2k7bO.png" alt="domain with mesh functions"></p>
<p>My intuitive understanding of the <code>PlotPoints</code> and <code>Mesh</code> options is that they determine the resolution or quality of the plot. You can use <em>Mathematica</em> to help you explore different combinations of settings (this takes a minute or so on my machine):</p>
<p><img src="https://i.stack.imgur.com/dAPNJ.png" alt="grid"></p>
<pre><code>f[z_] := (z + 2)^2 (z - 1 - 2 I) (z + I);
paint[z_] :=
Module[{x = Re[z], y = Im[z]},
colour =
Blend[{Black, Red, Orange, Yellow},
Rescale[ArcTan[-x , -y + 0.0001], {-Pi, Pi}]];
shade = Mod[Log[2, Abs[x + I y]], 1];
Darker[colour, shade/4]]
g = Grid[Table[
ParametricPlot[
{x, y}, {x, -3., 3.}, {y, -3., 3.},
ColorFunctionScaling -> False,
ColorFunction -> Function[{x, y}, paint[f[x + y I]]],
Frame -> False, Axes -> False, MaxRecursion -> 3,
PlotRangePadding -> 0,
PlotPoints -> pp,
Mesh -> mesh,
PlotLegends ->
Placed[{StringJoin["☝", "Plot Points: ", ToString[pp],
", Mesh: ", ToString[mesh]]}, Below],
MeshStyle -> Directive[Opacity[0.3], Yellow],
MeshFunctions -> {(Re@f[#1 + I #2] &), (Im@f[#1 + I #2] &)},
ImageSize -> 500
],
{pp, {25, 75, 150}},
{mesh, {25, 150, 400}}]]
</code></pre>
|
929,598 | <p>A rectangle is 4 times as long as it is wide. If the length is increased by 4 inches and the width is decreased by 1 inch, the area will be 60 square inches. What were the dimensions of the original rectangle? Explain your answer.</p>
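<p>Let the width be $w$, so the length is $4w$; the condition $(4w+4)(w-1)=60$ pins down $w$. A quick Python check of this algebra (not part of the original thread):</p>

```python
# width w, length 4w; new dimensions (4w + 4) and (w - 1) give area 60:
# (4w + 4)(w - 1) = 4w^2 - 4 = 60  =>  w^2 = 16  =>  w = 4 (width must be positive)
w = 4
length = 4 * w

assert (length + 4) * (w - 1) == 60
assert (w, length) == (4, 16)   # original rectangle: 4 inches by 16 inches
```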
| Jack D'Aurizio | 44,121 | <p>By multiplying both sides by $(2k)!! = (2k)(2k-2)\cdot\ldots\cdot 2 = 2^k\cdot k!$, you get: $$(2k+1)! = (2k+1)!.$$</p>
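<p>The underlying identity (presumably $(2k+1)!! = \frac{(2k+1)!}{2^k\, k!}$) can be confirmed numerically; a Python check (not part of the original answer):</p>

```python
import math

def double_factorial(n):
    # n!! = n (n-2) (n-4) ... down to 1 or 2
    return math.prod(range(n, 0, -2))

for k in range(1, 8):
    # (2k)!! = 2^k k!, so multiplying (2k+1)!! by it yields (2k+1)!
    assert double_factorial(2 * k) == 2**k * math.factorial(k)
    assert double_factorial(2 * k + 1) * double_factorial(2 * k) == math.factorial(2 * k + 1)
```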
|
1,413,150 | <p>So for a periodic function <span class="math-container">$f$</span> (of period <span class="math-container">$1$</span>, say), I know the Riemann-Lebesgue Lemma which states that if <span class="math-container">$f$</span> is <span class="math-container">$L^1$</span> then the Fourier coefficients <span class="math-container">$F(n)$</span> go to zero as <span class="math-container">$n$</span> goes to infinity. And as far as I know, the converse of this is not true. My question, then, is this:</p>
<blockquote>
<p>Under what conditions on the Fourier coefficients <span class="math-container">$F(n)$</span> is the function <span class="math-container">$f$</span>, defined pointwise as the Fourier series with <span class="math-container">$F(n)$</span> as coefficients,</p>
<ol>
<li>integrable,</li>
<li>continuous, and</li>
<li>differentiable?</li>
</ol>
</blockquote>
| Arin Chaudhuri | 404 | <p>Here is another way to approach this problem.
The function $$f(z) = 1 - z/2 + z^2/3 + \ldots + (-1)^{k} z^k/(k+1) + \ldots $$ is analytic on the unit disc $ \{ z : |z| < 1\}$, which implies $ g(z) = \exp f(z)$ is also analytic on $ \{ z : |z| < 1\}$ and hence can be expanded as a power series $$g(z) = a_0 + a_1 z + a_2 z^2 + \cdots $$ in $ \{ z : |z| < 1\}$. We can easily compute the first few coefficients: $a_0 = \exp f(0) = e$, $a_1 = \exp f(0)\, f^{'}(0) = -e/2$, and similarly $a_2 = 11e/24.$</p>
<p>However, $f(x) = \log(1+x)/x$ for all real $x$ with $ 0 < x < 1$, so $g(x) = (1+x)^{1/x}$ for $ 0 < x < 1$, and the above series for real $x$ is an analytic extension of $(1+x)^{1/x}$ to $-1 < x < 1$.</p>
<p>Writing $$(1+x)^{1/x} = e - \frac{ex}{2} + \frac{11ex^2}{24} + \cdots $$ we get
$(1+x)^{1/x} + cx = e + (c - e/2)\, x + \frac{11ex^2}{24} + \cdots$.</p>
<p>The derivative of the above function at $0$ is $c - e/2$, which is negative if $c < e/2$; by the continuity of the derivative, there is an interval $[0,\epsilon]$ on which the derivative is strictly negative, hence the function decreases there. Since $1/n$ decreases and lies in $[0,\epsilon]$ for all large $n$, this means $(1+1/n)^{n} + c/n$ increases eventually for any $ c < e/2$. The same holds for any $x_n$ that strictly decreases to $0$, not only $1/n$: $(1+x_n)^{1/x_n} + c x_n$ eventually increases if $ c < e/2$. We can argue similarly that $(1+x_n)^{1/x_n} + c x_n$ eventually decreases if $ c > e/2$. For $c = e/2$, the positivity of the coefficient of $x^2$ implies that $(1+x_n)^{1/x_n} + e x_n / 2$ eventually starts decreasing.</p>
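<p>The first three terms of the expansion can be sanity-checked numerically (a Python check, not part of the original answer):</p>

```python
import math

x = 0.01
exact = (1 + x) ** (1 / x)
series = math.e * (1 - x / 2 + 11 * x**2 / 24)   # e - ex/2 + 11e x^2/24

# agreement up to the omitted O(x^3) term
assert abs(exact - series) < 1e-5
```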
|
1,705,481 | <blockquote>
<p>$$\sum_{n=1}^{\infty} \frac{\ln(n)}{n^2}$$</p>
</blockquote>
<p>I have tried the limit comparison test: with $\frac{1}{n}$ I got $0$, and with $\frac{1}{n^2}$ I got $\infty$.</p>
<p>What should I try?</p>
| Community | -1 | <p>Like others suggest, estimating $\ln(n)$ is probably the best way to visualize it.</p>
<p>An alternative is using the integral test (and if you got this problem in a first-year calculus course, this is probably what you're expected to do).</p>
<p>It is not hard to check that $f(x)=\frac{\ln(x)}{x^2}$ is continuous and eventually decreasing (by computing $f'(x)$).</p>
<p>The integral</p>
<p>$$\int_{1}^{\infty} \frac{\ln(x)}{x^2}\, dx,$$</p>
<p>Let $u=\ln(x)$, $dv=\frac{1}{x^2}dx$, $du=\frac{1}{x} dx$, $v=-\frac{1}{x}$</p>
<p>$$\int_{1}^{\infty} \frac{\ln(x)}{x^2} dx\\=\frac{-\ln(x)}{x}\Big|_{1}^{\infty}+\int_{1}^{\infty}\frac{1}{x^2} dx\\=0+\frac{-1}{x}\Big|_{1}^{\infty}\\=1$$ converges, and therefore the original series converges.</p>
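<p>The integration by parts can be double-checked numerically (a Python check, not part of the original answer): the antiderivative obtained above is $F(x) = -\frac{\ln(x)+1}{x}$, and $F(x)\to 0$ as $x\to\infty$:</p>

```python
import math

F = lambda x: -(math.log(x) + 1) / x   # antiderivative of ln(x)/x^2

# F'(x) = ln(x)/x^2, checked by central differences
h = 1e-6
for x in (2.0, 5.0, 10.0):
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - math.log(x) / x**2) < 1e-8

# the improper integral: F(R) - F(1) -> 0 - (-1) = 1 as R -> infinity
assert abs((F(1e9) - F(1)) - 1) < 1e-6
```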
|
1,074,341 | <p>Prove that a Covering map is proper if and only if it is finite-sheeted.</p>
<p>First suppose the covering map $q:E\to X$ is proper, i.e. the preimage of any compact subset of $X$ is again compact. Let $y\in X$ be any point, and let $V$ be an evenly covered nbhd of $y$. Then since $q$ is proper and $\{y\}$ is compact, $q^{-1}( \{ y\})$ is also compact. In particular the sheets $\bigsqcup_{\alpha\in I}U_\alpha$ over $V$ form an open cover of $q^{-1}( \{ y\})$ and must therefore contain a finite subcover $\{U_1,...,U_n\}$. Since each sheet contains exactly one point of the fiber, the cardinality of $q^{-1}( \{ y\})$ is at most $n$, so $q$ is finite-sheeted.</p>
<p>Conversely, we suppose that $q$ is finite-sheeted. Let $C\subset X$ be a compact set, and let $\{U_a\}_{a\in I}$ be an open cover of $q^{-1}(C)$...</p>
<p>Now how do I continue?</p>
| Tommaso Seneci | 320,104 | <p>"$\Rightarrow$" Since a singleton $\{x\}\subset X$ is compact wrt every topology, $q^{-1}(\{x\})$ is compact. The covering property makes $q^{-1}(\{x\})$ discrete, in the sense that there exists an open neighborhood $V \ni x$ such that each connected component of $q^{-1}(V)$ is homeomorphic to $V$ itself. Then clearly the connected components are disjoint open sets. This means that for each $e\in q^{-1}(\{x\})$ there exists an open neighborhood $U\ni e$ such that $U\cap q^{-1}(\{x\}) = \{e\}$ (definition of discrete set). Compactness implies sequential compactness, so if $q^{-1}(\{x\})$ weren't finite we could extract a sequence which does not converge to any of its points (because they are separated by open sets). This contradiction shows that $q^{-1}(\{x\})$ is finite.</p>
<p>"$\Leftarrow$" Let $K\subset X$ be compact and $L=q^{-1}(K)$. If $K$ were empty then the result would be trivial, so assume $K\neq \emptyset$. Let $\{T_\alpha\}$ be an open covering of $L$. Openness of $q$ implies that $\{q(T_\alpha)\}$ is an open covering of $K$. For each $x\in K$ there exists an open neighborhood $U_x \subset q(T_{\alpha})$ (for some $\alpha$) such that each connected component of $q^{-1}(U_x)$ is homeomorphic to $U_x$. There are components $E_1,\ldots,E_n \subset q^{-1}(U_x)$ containing $\{e_1,\ldots,e_n\} = q^{-1}(\{x\})$. The other components cannot contain them and are therefore not bijective onto $U_x$, since they map nothing to $x$ itself. Since these are finite, we can shrink them in such a way that if a component of $q^{-1}(U_x)$ has non-empty intersection with $L$, then it is a subset of $T_\alpha$ for some $\alpha$. Cover $K$ by finitely many such sets: $K\subset \cup_{i=1}^m U_{x_i}$. Now $L\subset q^{-1}(\cup_{i=1}^m U_{x_i}) = \cup_{i=1}^m q^{-1}(U_{x_i})$ is covered by finitely many connected components, and each of these is inside a $T_\alpha$; therefore $L$ is covered by finitely many $T_\alpha$, i.e. it is compact.</p>
|
588,930 | <p>I want help with this question.</p>
<blockquote>
<p>Show that for all $x>0$, $$ \frac{x}{1+x^2}<\tan^{-1}x<x.$$</p>
</blockquote>
<p>Thank you.</p>
| Mikasa | 8,581 | <p>I know a method based on the Mean Value Theorem from calculus. Let $f(x)=\arctan(x)$. Since $f'(x)=\frac{1}{1+x^2}$, the mean value theorem gives a $\xi\in(0,x)$ such that $$\frac{\tan^{-1}(x)-\tan^{-1}(0)}{x-0}=f'(\xi).$$ But $0<\xi<x$ forces $f'(\xi)$ to satisfy $$f'(x)<f'(\xi)<1.$$ I think the rest is clear. Indeed, multiplying through by $x>0$ gives the result. :-)</p>
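<p>A quick numerical spot-check of the inequality (not part of the original answer):</p>

```python
import math

# x/(1+x^2) < arctan(x) < x should hold strictly for every x > 0
for x in (0.01, 0.5, 1.0, 3.0, 10.0, 100.0):
    assert x / (1 + x**2) < math.atan(x) < x
```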
|
2,103,602 | <p>What is the maximum value of
$\displaystyle{{1 + 3a^{2} \over \left(a^{2} + 1\right)^{2}}}$, given that $a$ is a real number, and for what values of $a$ does it occur?</p>
| dxiv | 291,201 | <p>Writing it as $\cfrac{1+3a^2}{(a^2+1)^2}= \cfrac{3(a^2+1)-2}{(a^2+1)^2}= \cfrac{3}{a^2+1}-\cfrac{2}{(a^2+1)^2}\,$ gives a quadratic in $x=\cfrac{1}{a^2+1}\,$, with a maximum at $x = \cfrac{3}{4}\,$ ($\iff a^2 = 1/3\,$) of value $\cfrac{3 \cdot 3}{4}- \cfrac{2 \cdot 9}{16} = \cfrac{9}{8}\,$.</p>
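As a quick numerical sanity check (an added sketch, not part of the original solution), scanning a fine grid of real values of $a$ reproduces the claimed maximum $9/8 = 1.125$ near $a = \pm 1/\sqrt3 \approx 0.577$:

```python
# Evaluate f(a) = (1+3a^2)/(a^2+1)^2 on a grid and locate its maximum.
f = lambda a: (1 + 3 * a * a) / (a * a + 1)**2

grid = [k / 1000 for k in range(-3000, 3001)]
best_a = max(grid, key=f)
print(round(f(best_a), 6), round(abs(best_a), 3))  # 1.125 0.577
```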
|
182,510 | <p>Is there a continuous probability measure on the unit circle in the complex plane - $\sigma$ with full support, such that $\hat{\sigma}(n_k)\rightarrow1$ as $k\rightarrow\infty$ for some increasing sequence of integers $\ n_k$ </p>
| Sean Eberhard | 23,805 | <p>Define a sequence $(X_m)_{m\geq 1}$ of independent $\{0,1\}$-valued random variables by </p>
<p>$$\mathbf{P}(X_m = 0) = p_m > 1/2,$$</p>
<p>where the sequence $p_m\to1$ slowly. Define</p>
<p>$$X = X_1/2 + X_2/4 + \cdots.$$</p>
<p>Then the distribution $\sigma$ of $X$ is continuous on $[0,1]$ provided only that $\prod_{m\geq1} p_m = 0$. Moreover $\sigma$ has full support provided $0<p_m<1$ for all $m$.</p>
<p>Define</p>
<p>$$Y_m = 2^m X \text{(mod $1$)} = X_{m+1}/2 + X_{m+2}/4 + \cdots.$$</p>
<p>Then $\mathbf{P}(Y_m\leq 1/2^k) = p_{m+1}\cdots p_{m+k}\to 1$ as $m\to\infty$, for each fixed $k$. Thus $Y_m$ tends to $0$ in distribution. It follows that</p>
<p>$$\hat{\sigma}(2^m) = \int e^{-i2\pi 2^m x} d\sigma(x) = \mathbf{E}(e^{-i2\pi Y_m}) \to 1.$$</p>
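To see this concretely, independence of the $X_j$ gives the product formula $\hat{\sigma}(2^m)=\prod_{j\ge1}\bigl(p_{m+j}+(1-p_{m+j})e^{-2\pi i/2^j}\bigr)$. A numerical sketch (with the assumed concrete choice $p_m=\frac{m+1}{m+2}$, which satisfies $p_m>1/2$, $p_m\to1$ and $\prod_m p_m=0$):

```python
# Evaluate sigma-hat(2^m) via the independence product, truncated at 60
# factors; the later factors differ from 1 by a negligible amount.
import cmath, math

def sigma_hat(m, terms=60):
    val = 1 + 0j
    for j in range(1, terms + 1):
        p = (m + j + 1) / (m + j + 2)            # p_{m+j} = (m+j+1)/(m+j+2)
        val *= p + (1 - p) * cmath.exp(-2j * math.pi / 2**j)
    return val

print(abs(sigma_hat(100) - 1) < abs(sigma_hat(1) - 1))  # True: closer to 1 as m grows
```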
|
182,510 | <p>Is there a continuous probability measure on the unit circle in the complex plane - $\sigma$ with full support, such that $\hat{\sigma}(n_k)\rightarrow1$ as $k\rightarrow\infty$ for some increasing sequence of integers $\ n_k$ </p>
| Robert Israel | 8,508 | <p>Define the sequence $n_k$, a sequence of positive reals $r_k$ and a sequence of nested subsets $A_k$ of the circle $\mathbb T$ as follows. Each $A_k$ will be the union of $2^k$ open intervals of length $r_k$ on which $|e^{in_k t} - 1| < 2^{-k}$, and each of these intervals will contain two intervals of $A_{k+1}$. This can be done inductively: all we need is to take $n_{k+1}$ large enough so that each interval of $A_k$ contains at least two points where $e^{i n_{k+1} t} = 1$, and take intervals of small enough length $r_k$ around two of those to form $A_{k+1}$. I will also choose $n_{k+1}$ to be a multiple of $n_k$.</p>
<p>Now let $\mu_k$ be normalized Lebesgue measure on $A_k$, and $\mu$ a weak limit point of $\mu_k$. Then $|\hat{\mu}(n_k) - 1| \le 2^{-k}$ for all $k$. $\mu$ is a singular continuous probability measure. But you wanted one with full support.</p>
<p>OK, take $\sigma = \sum_{j = 1}^\infty 2^{-j} T_{t_j}\mu$ where $T_t$ is translation by $t \in [0,2 \pi)$, choosing $t_j$ a sequence dense in $[0,2 \pi]$ such that $n_j t_j/(2 \pi)$ is an integer. This is again a singular continuous probability measure, but with full support. For $k \ge j$, $n_k$ is a multiple of $n_j$ and so $\widehat{T_{t_j}\mu}(n_k) = \hat{\mu}(n_k)$. Thus $$|\hat{\sigma}(n_k) - 1| \le \sum_{j=1}^k 2^{-j} |\hat{\mu}(n_k) - 1| + \sum_{j=k+1}^\infty 2^{-j} |\widehat{T_{t_j}\mu}(n_k) - 1|
\le 2^{2-k}$$</p>
|
4,309,797 | <p>I have a question which asks me to compute the double integral
<span class="math-container">$$\iint_By^2-x^2\,dA$$</span> where B is the region enclosed by <span class="math-container">$$y=x,\quad y=x+2,\quad y=\frac{1}{x},\quad y=\frac{2}{x}$$</span> I made a change of variables by letting <span class="math-container">$$u=xy \qquad \text{and}\qquad v=y-x$$</span> which gives me a very nice region in the <span class="math-container">$uv$</span>-plane, <span class="math-container">$$1\le u\le 2\qquad\text{and}\qquad 0\le v\le 2.$$</span> However, I am having a hard time representing the integrand in terms of <span class="math-container">$u$</span> and <span class="math-container">$v$</span>: <span class="math-container">$$y^2-x^2=(y-x)(y+x)=v(y+x)$$</span> Is it possible to express this in terms of <span class="math-container">$u$</span> and <span class="math-container">$v$</span>, or am I wasting my time?</p>
| Ninad Munshi | 698,724 | <p>I'm assuming your bounds were actually</p>
<p><span class="math-container">$$y=x \hspace{15 pt} y=x+2 \hspace{15 pt} y = \frac{1}{x} \hspace{15 pt} y = \frac{2}{x}$$</span></p>
<p>It is completely possible to find <span class="math-container">$x+y$</span>, the trick is noticing the degree of the terms. <span class="math-container">$u$</span> is quadratic in <span class="math-container">$xy$</span> while <span class="math-container">$v$</span> is linear, so we should expect another linear term to involve a square root of a <span class="math-container">$u$</span> and <span class="math-container">$v^2$</span> somehow. For example</p>
<p><span class="math-container">$$v^2 = y^2-2xy+x^2$$</span></p>
<p><span class="math-container">$$v^2+4u = y^2+2xy+x^2 $$</span></p>
<p><span class="math-container">$$\sqrt{v^2+4u} =x+y$$</span></p>
<p>Now that you have formulas for <span class="math-container">$y-x$</span> and <span class="math-container">$y+x$</span> you may be tempted to invert and solve for the Jacobian, but that would be a waste of your time. It's simpler, derivative-wise, to compute the inverse Jacobian instead</p>
<p><span class="math-container">$$J^{-1} = \left|\begin{vmatrix}y & x \\ -1 & 1\end{vmatrix}\right| = y+x \implies J = \frac{1}{J^{-1}} = \frac{1}{x+y}$$</span></p>
<p>In other words finding the formula for <span class="math-container">$x+y$</span> was unnecessary because the Jacobian would have canceled it out. I showed the trick for solving for <span class="math-container">$x+y$</span> because it can still be useful for other problems. This means our integral is</p>
<p><span class="math-container">$$\int_1^2\int_0^2 v\:dv\,du$$</span></p>
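As a numerical sanity check (an added sketch), one can invert the map explicitly (from <span class="math-container">$y-x=v$</span> and <span class="math-container">$xy=u$</span>, <span class="math-container">$x$</span> is the positive root of <span class="math-container">$x^2+vx-u=0$</span>), integrate over the <span class="math-container">$(u,v)$</span>-rectangle with the midpoint rule, and confirm that the Jacobian really cancels <span class="math-container">$x+y$</span> and that the integral equals <span class="math-container">$2$</span>:

```python
from math import sqrt

def xy_from_uv(u, v):
    # From y - x = v and x*y = u: x solves x^2 + v*x - u = 0 with x > 0.
    x = (-v + sqrt(v * v + 4 * u)) / 2
    return x, x + v

n = 400
du, dv = 1 / n, 2 / n
total = 0.0
for i in range(n):
    for j in range(n):
        u = 1 + (i + 0.5) * du
        v = (j + 0.5) * dv
        x, y = xy_from_uv(u, v)
        # Integrand (y^2 - x^2) times the Jacobian 1/(x+y); this reduces to v.
        total += (y * y - x * x) / (x + y) * du * dv

print(round(total, 6))  # 2.0
```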
|
935,506 | <p>I'm a bit puzzled by this one.</p>
<p>The domain $X = S(0,1)\cup S(3,1)$ (where $S(\alpha, \rho)$ is a circular area with it's center at $\alpha$ and radius $\rho$). So the domain is basically two circles with radius 1 and centers at 0 and 3.</p>
<p>I'm supposed to find analytic function $f$ defined on $X$ where the imaginary part of $f$ is a constant but $f$ is not constant.</p>
<p>Where do I start?</p>
| Barry Cipra | 86,747 | <p>I recommend starting from scratch, with the substitution $x^2=\sin\theta$, so that $2x\,dx=\cos\theta \,d\theta$ and $\sqrt{1-x^4}=\sqrt{1-\sin^2\theta}=\cos\theta$. Thus, using a couple of other standard trig identities,</p>
<p>$$\int4x\sqrt{1-x^4}\,dx=\int2\cos^2\theta\,d\theta=\int(1+\cos2\theta)\,d\theta\\
=\theta+{1\over2}\sin2\theta+C\\ =\theta+\sin\theta\cos\theta+C\\=\arcsin(x^2)+x^2\sqrt{1-x^4}+C$$</p>
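A quick derivative check (an added sketch) confirms the antiderivative: differentiating $\arcsin(x^2)+x^2\sqrt{1-x^4}$ numerically should recover the integrand $4x\sqrt{1-x^4}$.

```python
from math import asin, sqrt

F = lambda x: asin(x * x) + x * x * sqrt(1 - x**4)   # proposed antiderivative
f = lambda x: 4 * x * sqrt(1 - x**4)                 # original integrand

h = 1e-6
for x in (0.1, 0.3, 0.5, 0.7):
    # Central difference approximation of F'(x) should match f(x).
    assert abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) < 1e-6
print("F'(x) = 4x*sqrt(1-x^4) verified at sample points")
```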
|
1,904,903 | <p>Taken from Soo T. Tan's Calculus textbook Chapter 9.7 Exercise 27-</p>
<p>Define $$a_n=\frac{2\cdot 4\cdot 6\cdot\ldots\cdot 2n}{3\cdot 5\cdot7\cdot\ldots\cdot (2n+1)}$$
One needs to prove the convergence or divergence of the series $$\sum_{n=1}^{\infty} a_n$$</p>
<p>upon finding the radius of convergence for $\sum_{n=1}^{\infty}\frac{2\cdot 4\cdot 6\cdot\ldots\cdot 2n}{3\cdot 5\cdot7\cdot\ldots\cdot (2n+1)}\cdot x^{2n+1}$ to be $1$ and checking the endpoints. Also, please use tests and methods that are taught in introductory courses.</p>
<p>The answer key shows divergence, but without explanation. </p>
| Jack D'Aurizio | 44,121 | <p>By using <a href="https://en.wikipedia.org/wiki/Beta_function" rel="nofollow noreferrer">Euler's beta function</a> we have
$$ \frac{(2n)!!}{(2n+1)!!} = \frac{4^n n!^2}{(2n+1)!} = 4^n B(n+1,n+1) $$
hence:
$$\begin{eqnarray*} \sum_{n=1}^{N}\frac{(2n)!!}{(2n+1)!!} = \int_{0}^{1}\sum_{n=1}^{N}\left(4x(1-x)\right)^n\,dx&=&\int_{-1/2}^{1/2}\sum_{n=1}^{N}(1-4x^2)^n\,dx\\&=&\int_{0}^{1}\sum_{n=1}^{N}(1-x^2)^n\,dx\end{eqnarray*}$$
but the last integrand function, over the interval $(0,1)$, is bounded below by
$$ \max\left(0,N-\frac{N(N+1)}{2}x^2\right) $$
hence it follows that
$$ \sum_{n=1}^{N}\frac{(2n)!!}{(2n+1)!!} \geq \int_{0}^{\sqrt{\frac{2}{N+1}}}\left(N-\frac{N(N+1)}{2}x^2\right)\,dx = \frac{2\sqrt{2}\,N}{3\sqrt{N+1}}$$
and the original series is divergent. By telescoping we also have the identity
$$ \sum_{n=1}^{N}\frac{(2n)!!}{(2n+1)!!} = -2+\frac{\Gamma(N+2)}{\Gamma\left(N+\frac{3}{2}\right)}\sqrt{\pi} $$
and the improved bound
$$\boxed{\; \sum_{n=1}^{N}\frac{(2n)!!}{(2n+1)!!} > -2+\sqrt{\pi(N+1)}\;}$$
follows from <a href="https://math.stackexchange.com/questions/98348/how-do-you-prove-gautschis-inequality-for-the-gamma-function">Gautschi's inequality</a>. </p>
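Both the telescoping identity and the boxed lower bound can be checked numerically (an added sketch) with the standard-library gamma function:

```python
from math import gamma, pi, sqrt

def partial_sum(N):
    """Sum_{n=1}^{N} (2n)!!/(2n+1)!!, with the term built incrementally."""
    s, term = 0.0, 1.0
    for n in range(1, N + 1):
        term *= (2 * n) / (2 * n + 1)
        s += term
    return s

for N in (1, 5, 50):
    closed = -2 + gamma(N + 2) / gamma(N + 1.5) * sqrt(pi)   # telescoped form
    assert abs(partial_sum(N) - closed) < 1e-9
    assert partial_sum(N) > -2 + sqrt(pi * (N + 1))          # boxed lower bound
print("identity and bound verified for N = 1, 5, 50")
```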
|
246,606 | <p>I have matrix:</p>
<p>$$
A = \begin{bmatrix}
1 & 2 & 3 & 4 \\
2 & 3 & 3 & 3 \\
0 & 1 & 2 & 3 \\
0 & 0 & 1 & 2
\end{bmatrix}
$$</p>
<p>And I want to calculate $\det{A}$, so I have written:</p>
<p>$$
\begin{array}{|cccc|ccc}
1 & 2 & 3 & 4 & 1 & 2 & 3 \\
2 & 3 & 3 & 3 & 2 & 3 & 3 \\
0 & 1 & 2 & 3 & 0 & 1 & 2 \\
0 & 0 & 1 & 2 & 0 & 0 & 1
\end{array}
$$</p>
<p>From this I get that:</p>
<p>$$
\det{A} = (1 \cdot 3 \cdot 2 \cdot 2 + 2 \cdot 3 \cdot 3 \cdot 0 + 3 \cdot 3 \cdot 0 \cdot 0 + 4 \cdot 2 \cdot 1 \cdot 1) - (3 \cdot 3 \cdot 0 \cdot 2 + 2 \cdot 2 \cdot 3 \cdot 1 + 1 \cdot 3 \cdot 2 \cdot 0 + 4 \cdot 3 \cdot 1 \cdot 0) = (12 + 0 + 0 + 8) - (0 + 12 + 0 + 0) = 8
$$</p>
<p>But WolframAlpha is saying that <a href="http://www.wolframalpha.com/input/?i=det+%7B%7B1%2C2%2C3%2C4%7D%2C%7B2%2C3%2C3%2C3%7D%2C%7B0%2C1%2C2%2C3%7D%2C%7B0%2C0%2C1%2C2%7D%7D&dataset=" rel="nofollow">it is equal 0</a>. So my question is where am I wrong?</p>
| Inquest | 35,001 | <p>$$
A = \begin{bmatrix}
1 & 2 & 3 & 4 \\
2 & 3 & 3 & 3 \\
0 & 1 & 2 & 3 \\
0 & 0 & 1 & 2
\end{bmatrix}
$$
$$
P_1A = \begin{bmatrix}
1 & 2 & 3 & 4 \\
0 & -1 & -3 & -5 \\
0 & 1 & 2 & 3 \\
0 & 0 & 1 & 2
\end{bmatrix}
$$
$$
P_2P_1A = \begin{bmatrix}
1 & 2 & 3 & 4 \\
0 & -1 & -3 & -5 \\
0 & 0 & -1 & -2 \\
0 & 0 & 1 & 2
\end{bmatrix}
$$
$$
P_3P_2P_1A = \begin{bmatrix}
1 & 2 & 3 & 4 \\
0 & -1 & -3 & -5 \\
0 & 0 & -1 & -2 \\
0 & 0 & 0 & 0
\end{bmatrix}
$$</p>
<p>$$\det(P_3P_2P_1A)=\det(P_3).\det(P_2).\det(P_1).\det(A)=0$$
$$\det(P_3)\neq0,\det(P_2)\neq0,\det(P_1)\neq0$$
$$\implies \det(A)=0$$</p>
|
246,606 | <p>I have matrix:</p>
<p>$$
A = \begin{bmatrix}
1 & 2 & 3 & 4 \\
2 & 3 & 3 & 3 \\
0 & 1 & 2 & 3 \\
0 & 0 & 1 & 2
\end{bmatrix}
$$</p>
<p>And I want to calculate $\det{A}$, so I have written:</p>
<p>$$
\begin{array}{|cccc|ccc}
1 & 2 & 3 & 4 & 1 & 2 & 3 \\
2 & 3 & 3 & 3 & 2 & 3 & 3 \\
0 & 1 & 2 & 3 & 0 & 1 & 2 \\
0 & 0 & 1 & 2 & 0 & 0 & 1
\end{array}
$$</p>
<p>From this I get that:</p>
<p>$$
\det{A} = (1 \cdot 3 \cdot 2 \cdot 2 + 2 \cdot 3 \cdot 3 \cdot 0 + 3 \cdot 3 \cdot 0 \cdot 0 + 4 \cdot 2 \cdot 1 \cdot 1) - (3 \cdot 3 \cdot 0 \cdot 2 + 2 \cdot 2 \cdot 3 \cdot 1 + 1 \cdot 3 \cdot 2 \cdot 0 + 4 \cdot 3 \cdot 1 \cdot 0) = (12 + 0 + 0 + 8) - (0 + 12 + 0 + 0) = 8
$$</p>
<p>But WolframAlpha is saying that <a href="http://www.wolframalpha.com/input/?i=det+%7B%7B1%2C2%2C3%2C4%7D%2C%7B2%2C3%2C3%2C3%7D%2C%7B0%2C1%2C2%2C3%7D%2C%7B0%2C0%2C1%2C2%7D%7D&dataset=" rel="nofollow">it is equal 0</a>. So my question is where am I wrong?</p>
| Cameron Buie | 28,900 | <p>The method that you're using works just fine for $3\times 3$ matrices, but fails to work with $n\times n$ matrices for other $n$. You're going to have to do it another way.</p>
<p>For example, expanding the determinant along the first column, we find that $$\begin{align}\det A &=1\cdot\det\left[\begin{array}{ccc}3 & 3 & 3\\1 & 2 & 3\\0 & 1 & 2\end{array}\right]-2\cdot\det\left[\begin{array}{ccc}2 & 3 & 4\\1 & 2 & 3\\0 & 1 & 2\end{array}\right]+0\cdot\det\left[\begin{array}{ccc}2 & 3 & 4\\3 & 3 & 3\\0 & 1 & 2\end{array}\right]-0\cdot\det\left[\begin{array}{ccc}2 & 3 & 4\\3 & 3 & 3\\1 & 2 & 3\end{array}\right]\\ &= \det\left[\begin{array}{ccc}3 & 3 & 3\\1 & 2 & 3\\0 & 1 & 2\end{array}\right]-2\det\left[\begin{array}{ccc}2 & 3 & 4\\1 & 2 & 3\\0 & 1 & 2\end{array}\right].\end{align}$$</p>
<p>At that point, you can use your method of calculating determinants of $3\times 3$ matrices to get the rest of the way.</p>
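As a cross-check (an added sketch), a tiny exact cofactor expansion in Python confirms that the determinant is indeed $0$:

```python
def det(m):
    """Determinant by cofactor expansion along the first row (exact integers)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

A = [[1, 2, 3, 4],
     [2, 3, 3, 3],
     [0, 1, 2, 3],
     [0, 0, 1, 2]]
print(det(A))  # 0
```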
|
3,503,999 | <p>Consider the function <span class="math-container">$$f(x):=\frac{x-x_0}{\Vert x-x_0 \Vert^2} + \frac{x-x_1}{\Vert x-x_1 \Vert^2}$$</span></p>
<p>for two fixed <span class="math-container">$x_0,x_1 \in \mathbb R^2$</span> and <span class="math-container">$x \in \mathbb R^2$</span> as well. </p>
<p>Does anybody know what the Hessian of the function </p>
<p><span class="math-container">$$g(x):=\Vert f(x) \Vert^2$$</span> </p>
<p>is? It is such a difficult composition of functions that I find it very hard to compute.</p>
<p>The bounty is for a person who fully derives the Hessian of <span class="math-container">$g$</span>. Please let me know if you have any questions.</p>
| Michael Hoppe | 93,935 | <p>Avoid coordinates. Here's a solution that works with all dot products in any dimension: </p>
<p>Define <span class="math-container">$h_k(x)=\frac{x-x_k}{\|x-x_k\|^2}$</span> for <span class="math-container">$k\in\{0,1\}$</span>.
Then we have
<span class="math-container">$\|h_k(x)\|^2=1/\|x-x_k\|^2$</span> and
<span class="math-container">$$d_ph_k(x)=p\|h_k(x)\|^2-2h_k(x)\langle p,h_k(x)\rangle$$</span>
and<span class="math-container">$$d_p\|h_k(x)\|^2=-2\|h_k(x)\|^2\langle p,h_k(x)\rangle.$$</span></p>
<p>After a straightforward calculation I get
<span class="math-container">$$\frac12\nabla g(x)=\bigl(\|h_0(x)\|^2-\|h_1(x)\|^2\bigr)\cdot\bigl(h_1(x)-h_0(x)\bigr)-2\langle h_0(x),h_1(x)\rangle\bigl(h_0(x)+h_1(x)\bigr).$$</span></p>
<p>From here feel free to calculate the Hessian: Differentiating <span class="math-container">$\frac12\nabla g(x)$</span> again at <span class="math-container">$q$</span> we get (omitting the argument <span class="math-container">$x$</span> for the sake of readability)
<span class="math-container">$$\begin{align}
-&q\left(2\langle h_0,h_1\rangle(\|h_0\|^2+\|h_1\|^2)+(\|h_0\|^2-\|h_1\|^2)^2\right)\\
+&h_0\langle q,4h_0\|h_0\|^2-(2h_0+h_1)\|h_1-h_0\|^2\rangle\\
+&h_1\langle q,4h_1\|h_1\|^2-(2h_1+h_0)\|h_0-h_1\|^2\rangle,
\end{align}
$$</span>
hence the Hessian at <span class="math-container">$(p,q)$</span> is
<span class="math-container">$$\begin{align}
2\langle p,-&q\left(2\langle h_0,h_1\rangle(\|h_0\|^2+\|h_1\|^2)+(\|h_0\|^2-\|h_1\|^2)^2\right)\\
+&h_0\langle q,4h_0\|h_0\|^2-(2h_0+h_1)\|h_1-h_0\|^2\rangle\\
+&h_1\langle q,4h_1\|h_1\|^2-(2h_1+h_0)\|h_0-h_1\|^2\rangle\rangle.
\end{align}
$$</span></p>
|
3,002,114 | <blockquote>
<p>Prove that
<span class="math-container">$$
\binom{n}{1}^2+2\binom{n}{2}^2+\cdots + n\binom{n}{n}^2
= n \binom{2n-1}{n-1}.
$$</span></p>
</blockquote>
<p>So
<span class="math-container">$$
\sum_{k=1}^n k \binom{n}{k}^2
= \sum_{k=1}^n k \binom{n}{k}\binom{n}{k}
= \sum_{k=1}^n n \binom{n-1}{k-1} \binom{n}{k}
= n \sum_{k=0}^{n-1} \frac{(n-1)!n!}{(n-k-1)!k!(n-k-1)!(k+1)!}
= n^2 \sum_{k=0}^{n-1} \frac{(n-1)!^2}{(n-k-1)!^2k!^2(k+1)}
=n^2 \sum_{k=0}^{n-1} \binom{n-1}{k}^2\frac{1}{k+1}.
$$</span>
I do not know what to do with <span class="math-container">$\frac{1}{k+1}$</span>, how to get rid of that.</p>
| Henno Brandsma | 4,280 | <p>A combinatorial proof:</p>
<p>We have <span class="math-container">$n$</span> men and <span class="math-container">$n$</span> women and I want to choose a <span class="math-container">$n$</span>-person committee and a committee president among those, with the condition that the president must be a woman.</p>
<p>One way to do that is to choose a president among the <span class="math-container">$n$</span> women and then choose <span class="math-container">$n-1$</span> committee members from the remaining <span class="math-container">$2n-1$</span> persons. </p>
<p>So that way gives <span class="math-container">$n \binom{2n-1}{n-1}$</span> ways, the right hand side.</p>
<p>On the other hand I can split the count on the number of women in the committee, so there can be <span class="math-container">$i=1,2,\ldots,n$</span> many women.</p>
<p>To count those for a fixed <span class="math-container">$i$</span>, first pick the <span class="math-container">$i$</span> women in <span class="math-container">$\binom{n}{i}$</span> ways, pick the president among those in <span class="math-container">$i$</span> ways, and then pick the men in <span class="math-container">$\binom{n}{n-i} = \binom{n}{i}$</span> ways. So for a fixed <span class="math-container">$i$</span> we have <span class="math-container">$i \binom{n}{i}^2$</span> committees with <span class="math-container">$i$</span> women. Summing them gives us the left hand side.</p>
|
3,002,114 | <blockquote>
<p>Prove that
<span class="math-container">$$
\binom{n}{1}^2+2\binom{n}{2}^2+\cdots + n\binom{n}{n}^2
= n \binom{2n-1}{n-1}.
$$</span></p>
</blockquote>
<p>So
<span class="math-container">$$
\sum_{k=1}^n k \binom{n}{k}^2
= \sum_{k=1}^n k \binom{n}{k}\binom{n}{k}
= \sum_{k=1}^n n \binom{n-1}{k-1} \binom{n}{k}
= n \sum_{k=0}^{n-1} \frac{(n-1)!n!}{(n-k-1)!k!(n-k-1)!(k+1)!}
= n^2 \sum_{k=0}^{n-1} \frac{(n-1)!^2}{(n-k-1)!^2k!^2(k+1)}
=n^2 \sum_{k=0}^{n-1} \binom{n-1}{k}^2\frac{1}{k+1}.
$$</span>
I do not know what to do with <span class="math-container">$\frac{1}{k+1}$</span>, how to get rid of that.</p>
| lab bhattacharjee | 33,337 | <p><span class="math-container">$$k \binom nk^2=\binom nk\cdot k\binom nk$$</span></p>
<p>For <span class="math-container">$k\ge1,$</span> <span class="math-container">$$k\binom nk=k\cdot\dfrac{n!}{k!\cdot(n-k)!}=n\dfrac{(n-1)!}{(k-1)!\{n-1-(k-1)\}!}=n\binom{n-1}{k-1}$$</span></p>
<p>Now in the identity <span class="math-container">$(1+x)^{2n-1}=(x+1)^n(1+x)^{n-1},$</span> compare coefficients of <span class="math-container">$x^n$</span></p>
<p><span class="math-container">$$\binom{2n-1}n=\sum_{k=0}^n\binom nk\binom{n-1}{k-1}$$</span></p>
|
3,002,114 | <blockquote>
<p>Prove that
<span class="math-container">$$
\binom{n}{1}^2+2\binom{n}{2}^2+\cdots + n\binom{n}{n}^2
= n \binom{2n-1}{n-1}.
$$</span></p>
</blockquote>
<p>So
<span class="math-container">$$
\sum_{k=1}^n k \binom{n}{k}^2
= \sum_{k=1}^n k \binom{n}{k}\binom{n}{k}
= \sum_{k=1}^n n \binom{n-1}{k-1} \binom{n}{k}
= n \sum_{k=0}^{n-1} \frac{(n-1)!n!}{(n-k-1)!k!(n-k-1)!(k+1)!}
= n^2 \sum_{k=0}^{n-1} \frac{(n-1)!^2}{(n-k-1)!^2k!^2(k+1)}
=n^2 \sum_{k=0}^{n-1} \binom{n-1}{k}^2\frac{1}{k+1}.
$$</span>
I do not know what to do with <span class="math-container">$\frac{1}{k+1}$</span>, how to get rid of that.</p>
| Marko Riedel | 44,883 | <p>We present a slight variation using formal power series and the
coefficient-of operator. Starting from</p>
<p><span class="math-container">$$\sum_{k=1}^n k {n\choose k}^2
= \sum_{k=1}^n k {n\choose k} [z^{n-k}] (1+z)^n
\\ = [z^n] (1+z)^n \sum_{k=1}^n k {n\choose k} z^k
= n [z^n] (1+z)^n \sum_{k=1}^n {n-1\choose k-1} z^k
\\ = n [z^n] z (1+z)^n \sum_{k=0}^{n-1} {n-1\choose k} z^k
= n [z^{n-1}] (1+z)^n (1+z)^{n-1}
\\ = n [z^{n-1}] (1+z)^{2n-1} = n \times {2n-1\choose n-1}$$</span></p>
<p>which is the claim.</p>
|
2,831,731 | <p>I don't know how I should define a homotopy on a set.
I think {{},{a,b,c}} should work, but I don't know how to write the homotopy between the identity map and a constant map.
(Sorry for this basic question.)</p>
| E. KOW | 443,898 | <p>Hint: In the trivial topology you mentioned, any map $X\to\left\{a,b,c\right\}$ is continuous.</p>
|
2,751,909 | <blockquote>
<p>Let $f$ be a non-negative differentiable function such that $f'$ is continuous and
$\displaystyle\int_{0}^{\infty}f(x)\,dx$ and $\displaystyle\int_{0}^{\infty}f'(x)\,dx$ exist.</p>
<p>Prove or give a counter example: $f'(x)\overset{x\rightarrow
\infty}{\rightarrow} 0$</p>
</blockquote>
<p><strong>Note:</strong> I think it is not true but I couldn't find a counter example.</p>
| innerz09 | 554,811 | <p>The function is continuous on this interval, so it is integrable by definition.</p>
<p>Using Riemann integral theory you can pick any partition $P_n$.</p>
<p>EDIT: I can relate to what has been said in the comment, but it depends on which equivalent definition you use.</p>
|
39,424 | <p>I need to teach an intro course on number theory in 1 month. I was just notified. Since I have never studied it, what are good books to learn it quickly?</p>
| William Stein | 8,441 | <p>Stein's book may be useful (and it is free): <a href="http://wstein.org/ent/" rel="nofollow">http://wstein.org/ent/</a> </p>
|
1,424,124 | <p>If $a,b$ be two positive integers , where $b>2 $ , then is it possible that $2^b-1\mid2^a+1$ ? I have figured out that if $2^b-1\mid 2^a+1$, then $2^b-1\mid 2^{2a}-1$ , so $b\mid2a$ and also $a >b$ ; but nothing else. Please help. Thanks in advance</p>
| Hagen von Eitzen | 39,174 | <p>If $b$ is odd, then $b\mid 2a$ implies $b\mid a$, then $2^b-1\mid 2^a-1$ and hence $2^b-1\nmid 2^a+1$ (as $2^a+1$ is between $2^a-1$ and $2^a-1+(2^b-1)$).</p>
<p>If $b$ is even, $b=2c$ say, then $c\mid a$, hence $2^c-1\mid 2^a-1$ and $2^c-1\mid 2^b-1$. As $c>1$ this shows $d:=\gcd(2^a-1,2^b-1)>1$ (and of course odd) and so $\gcd(2^a+1,2^b-1)<\frac{2^b-1}{d}<2^b-1$.</p>
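A brute-force search (an added sketch) supports the conclusion for small exponents, restricting to $a>b$ as the question observes:

```python
# No pair (a, b) with 2 < b <= 12 and b < a <= 60 satisfies (2^b - 1) | (2^a + 1).
hits = [(a, b)
        for b in range(3, 13)
        for a in range(b + 1, 61)
        if (2**a + 1) % (2**b - 1) == 0]
print(hits)  # []
```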
|
3,232,341 | <p>How would I show this? I know a directed graph with no cycles has at least one node of outdegree zero (because a graph where every node has outdegree one contains a cycle), but do not know where to go from here.</p>
| Jithin Mathews | 984,391 | <ul>
<li>For undirected graphs:</li>
</ul>
<p>Since every vertex has indegree <span class="math-container">$\ge 1$</span>, the sum of the indegrees over all <span class="math-container">$n$</span> vertices is <span class="math-container">$\ge n$</span>, so the graph has at least <span class="math-container">$n$</span> edges. However, a graph (connected or not) on <span class="math-container">$n$</span> vertices with more than <span class="math-container">$n-1$</span> edges must contain at least one cycle. Hence proved.</p>
<ul>
<li>For directed graphs: (existential proof)</li>
</ul>
<p>Start at any vertex and repeatedly step backwards along an incoming edge (one always exists, since every vertex has indegree <span class="math-container">$>0$</span>). By the pigeonhole principle, after at most <span class="math-container">$n+1$</span> steps some vertex is visited twice, and the portion of the walk between its two visits forms a cycle.</p>
|
2,098,693 | <p>Full Question: Five balls are randomly chosen, without replacement, from an urn that contains $5$
red, $6$ white, and $7$ blue balls. What is the probability of getting at least one ball of
each colour?</p>
<p>I have been trying to answer this by taking the complement of the event but it is getting quite complex. Any help?</p>
| barak manos | 131,263 | <p>First, use <em>inclusion/exclusion</em> principle in order to count the number of desired combinations:</p>
<ul>
<li>Include the total number of combinations: $\binom{5+6+7}{5}=8568$</li>
<li>Exclude the number of combinations without red balls: $\binom{6+7}{5}=1287$</li>
<li>Exclude the number of combinations without white balls: $\binom{5+7}{5}=792$</li>
<li>Exclude the number of combinations without blue balls: $\binom{5+6}{5}=462$</li>
<li>Include the number of combinations without red and white balls: $\binom{7}{5}=21$</li>
<li>Include the number of combinations without red and blue balls: $\binom{6}{5}=6$</li>
<li>Include the number of combinations without white and blue balls: $\binom{5}{5}=1$</li>
</ul>
<p>Then, in order to compute probability, divide the result by the total number of combinations:</p>
<p>$$\frac{8568-1287-792-462+21+6+1}{8568}\approx70.66\%$$</p>
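The count can be double-checked by brute force (an added sketch):

```python
from itertools import combinations
from math import comb

urn = ['R'] * 5 + ['W'] * 6 + ['B'] * 7
favourable = sum(1 for draw in combinations(urn, 5)
                 if set(draw) == {'R', 'W', 'B'})

# Inclusion/exclusion, mirroring the bullet list above.
by_formula = (comb(18, 5) - comb(13, 5) - comb(12, 5) - comb(11, 5)
              + comb(7, 5) + comb(6, 5) + comb(5, 5))

print(favourable, by_formula)              # 6055 6055
print(round(favourable / comb(18, 5), 4))  # 0.7067
```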
|
4,635,416 | <p>Let <span class="math-container">$X$</span> be a symmetric random variable, that is <span class="math-container">$X$</span> and <span class="math-container">$-X$</span> have the same distribution function <span class="math-container">$F$</span>. Suppose that <span class="math-container">$F$</span> is continuous and strictly increasing in a neighborhood of <span class="math-container">$0$</span>. Then prove that the median <span class="math-container">$m$</span> of <span class="math-container">$F$</span> is equal to <span class="math-container">$0$</span>, where we define <span class="math-container">$m:=\inf\{x\in \mathbb{R}|F(x)\ge \frac{1}{2}\}$</span>.</p>
<p>This definition of the median kind of annoys me. I could easily show that <span class="math-container">$\mathbb{P}(X\le 0)\ge \frac{1}{2}$</span> and <span class="math-container">$\mathbb{P}(X\ge 0)\ge \frac{1}{2}$</span>, and by the usual definition of the median I would be done, but I don't know how to deal with that <span class="math-container">$\inf$</span>. I could only observe that my first inequality implies that <span class="math-container">$m\le 0$</span>. I think that the point of the question is to use that <span class="math-container">$F$</span> is invertible on that neighborhood, but I can't make any progress.</p>
| GReyes | 633,848 | <p>If you know how to prove that <span class="math-container">$P(X\le 0)=1/2$</span>, then you can prove that <span class="math-container">$0$</span> satisfies the "inf" definition of median by contradiction. If you assume that <span class="math-container">$\textrm{inf}\, \{x, F(x)\ge 1/2\}>0$</span> then, by definition of infimum, there will be some <span class="math-container">$x_0>0$</span> such that <span class="math-container">$F(x_0)=P(X\le x_0)<1/2$</span>. But then <span class="math-container">$P(X\le 0)\le P(X\le x_0)<1/2$</span>, contradicting the fact that <span class="math-container">$P(X\le 0)=1/2$</span>. A similar contradiction arises if you assume that the infimum is negative. So it has to be zero.</p>
|
248,313 | <p>Assume that $f:\mathbb R \rightarrow \mathbb R$ is continuous and $h\in \mathbb R$. Let $\Delta_h^n f(x)$ be a finite difference of $f$ of order $n$, i.e</p>
<p>$$
\Delta_h^1 f(x)=f(x+h)-f(x),
$$
$$
\Delta_h^2f(x)=\Delta_h^1f(x+h)-\Delta_h^1 f(x)=f(x+2h)-2f(x+h)+f(x),
$$
$$
\Delta_h^3 f(x)=\Delta_h^2f(x+h)-\Delta_h^2f(x)=f(x+3h)-3f(x+2h)+3f(x+h)-f(x),
$$
etc.
There is an explicite formula for $n$-th difference:
$$
\Delta_h^n f(x)=\sum_{k=0}^n (-1)^{n-k}\frac{n!}{k!(n-k)!} f(x+kh).
$$</p>
<p>Assume now that $n\in \mathbb N$ and $f:\mathbb R \rightarrow \mathbb R$ are such that for each $x \in \mathbb R$:
$$
\frac{\Delta_h^n f(x)}{h^n} \rightarrow 0 \textrm{ as } h \rightarrow 0.
$$
Is it then $f$ a polynomial of degree $\leq n-1$?</p>
<p>It is clear if $n=1$, because then $f'(x)=0$ for $x\in \mathbb R$.</p>
<p>Edit. Without the continuity assumption on $f$ it is not true: for an $(n-1)$-additive function $F$ which is not $(n-1)$-linear we have $\Delta_h^n f(x)=0$, where $f(x)=F(x,\ldots,x)$.</p>
| Ewan Delanoy | 15,381 | <p>This is actually a comment too long to fit in the usual format.
WimC’s claim about the uniform convergence case is correct : suppose that $\Gamma(h,x)=\frac{\Delta_h^2f(x)}{h^2} \to 0$, uniformly in $x$ on an interval $[a,b]$. </p>
<p>Let us put $\beta(h)={\sf sup}_{x\in[a,b]}(\big| \Gamma(h,x)\big|)$ for $h>0$. Then the hypothesis states that $\beta(h) \to 0$ when $h \to 0$.</p>
<p>Now, the identity</p>
<p>$$
\Delta_{2h}^{2}f(x)=\Delta_h^{2}f(x+2h)+2\Delta_h^{2}f(x+h)+\Delta_h^{2}f(x)
$$</p>
<p>yields</p>
<p>$$
\Gamma(2h,x)=\frac{\Gamma(h,x+2h)+2\Gamma(h,x+h)+\Gamma(h,x)}{4}
$$</p>
<p>Taking sups above, we see that $\beta(2h) \leq \beta(h)$. So if the bound $|\beta(h)| \leq \varepsilon $ holds for $h\in [0,\eta]$, it will also hold for $h \in [0,2\eta]$ ; it will even hold everywhere, by induction. Since this holds for every $\varepsilon >0$, we see that $\beta=0$, as wished.</p>
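To make the question's hypothesis concrete, the explicit formula for $\Delta_h^n$ gives a quick numerical illustration (an added sketch): for a polynomial of degree $\le n-1$ the difference quotient vanishes identically, while for degree exactly $n$ it equals $n!$ times the leading coefficient for every $h$, so it does not tend to $0$.

```python
from math import comb

def diff_quot(f, x, h, n):
    """The n-th difference quotient from the explicit formula for Delta_h^n."""
    return sum((-1)**(n - k) * comb(n, k) * f(x + k * h)
               for k in range(n + 1)) / h**n

# Degree 2 < n = 3: the quotient is ~0 (up to floating-point noise).
print(diff_quot(lambda t: t**2, 1.0, 1e-2, 3))
# Degree 3 = n: the quotient is ~3! = 6 for every h, so it does not vanish.
print(diff_quot(lambda t: t**3, 1.0, 1e-2, 3))
```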
|
1,338,980 | <p>Suppose you have a set of data $\{x_i\}$ and $\{y_i\}$ with $i=0,\dots,N$. In order to find two parameters $a,b$ such that the line
$$
y=ax+b,
$$
gives the best linear fit, one proceeds by minimizing the quantity
$$
\sum_i^N[y_i-ax_i-b]^2
$$
with respect to $a,b$, obtaining the well-known results. </p>
<p>Imagine now that we want a fit with a function like
$$
y=ax^p+b.
$$
After some manipulation one obtains the following relations
$$
a=\frac{N\sum_i(y_ix_i^p)-\sum_iy_i\cdot\sum_ix_i^p}{N\sum_i(x_i^p)^2-(\sum_ix_i^p)^2},
$$
$$
b=\frac{1}{N}[\sum_iy_i-a\sum_ix_i^p]
$$
and
$$
\frac{1}{N}[N\sum_i(y_ix_i^p\ln x_i)-\sum_iy_i\cdot\sum_ix_i^p\ln x_i]=\frac{a}{N}[N\sum_i(x_i^p)^2\ln x_i-\sum_ix_i^p\cdot\sum_ix_i^p\ln x_i].
$$
To me it seems that from this it is nearly impossible to extract the exponent $p$. Am I correct?</p>
| Ross Millikan | 1,827 | <p>If you want to determine $p$ instead of assuming a value for $p$ and fitting $a$ and $b$, you have moved from linear curve fitting to non-linear curve fitting. For linear curve fitting it is not required that the curve be a straight line, but that the model be linear in the parameters. Fitting data to $y=ax^2+bx+c$ is linear because $y$ depends linearly on $a,b,c$. Now you need to minimize an error function numerically instead of solving a matrix equation. A discussion is free online in chapter 15 of <a href="http://apps.nrbook.com/c/index.html" rel="nofollow">Numerical Recipes</a> and probably in most other numerical analysis texts.</p>
|
1,345,643 | <p>In an exercise it seems I must use Pascal's triangle to solve this $(z^1+z^2+z^3+z^4)^3$. The result would be $z^3 + 3z^4 + 6z^5 + 10z^6 + 12z^7 + 12z^8 + 10z^9 + 6z^{10} + 3z^{11} + z^{12}$. But how do I use the triangle to get to that result? Personally I can only solve things like $(x+y)^2$ and $(x+y)^3$.</p>
<p>Thanks for any tips that may be given.</p>
| Community | -1 | <p>The expression factors as</p>
<p>$$z^3(1+z)^3(1+z^2)^3=z^3(1+3z+3z^2+z^3)(1+3z^2+3z^4+z^6).$$</p>
<p>Then I see no better way than to perform the multiply (though there is a symmetry)
$$\begin{align}
&1+3z+&3z^2+&z^3\\
&&3z^2+&9z^3+&9z^4+&3z^5\\
&&&&3z^4+&9z^5+&9z^6+&3z^7\\
&&&&&&z^6+&3z^7+&3z^8+z^9\\
\end{align}$$</p>
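The expansion can also be verified by multiplying coefficient lists directly (an added sketch):

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (index = power)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

p = [0, 1, 1, 1, 1]                  # z + z^2 + z^3 + z^4
cube = poly_mul(poly_mul(p, p), p)
print(cube)  # [0, 0, 0, 1, 3, 6, 10, 12, 12, 10, 6, 3, 1]
```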
|
1,619,371 | <p>I was working on a problem and reduced it to evaluating</p>
<p>$$\int_{0}^{1}\sqrt{1+x^a}\,dx~~a>0$$</p>
<p>your suggestion? Thanks</p>
| Tom-Tom | 116,182 | <p>We have
$$ I=\int_0^1\sqrt{1+x^a}\,\mathrm dx=\int_0^1\sum_{k=0}^\infty\binom{1/2}{k}x^{ka}\mathrm dx,$$
where
$$\binom{1/2}{k}=\frac{(1/2)(1/2-1)\dots(1/2-k+1)}{k!}.$$
We get
$$I=\sum_{k=0}^\infty\binom{1/2}{k}\frac1{1+ka}.$$
Let us rewrite $(1/2)(1/2-1)\cdots(1/2-k+1)=(-1)^k(-\frac12)_k$ where the rising Pochhammer symbol $(x)_n=(x)(x+1)\cdots(x+n-1)$ is used. We can also rewrite $1+ka=a(\frac1a+k)$ as $$1+ka=a\frac{(\frac1a)_{k+1}}{(\frac1a)_k}=\frac{(1+\frac1a)_k}{(\frac1a)_k}.$$
Therefore the integral rewrites
$$I=\sum_{k=0}^\infty\frac{(-\frac12)_k(\frac1a)_k}{(1+\frac1a)_k}\frac{(-1)^k}{k!}={}_2F_1\left(-\frac12,\frac1a;\,1+\frac1a\,\middle|\,-1\right)$$
by definition of the hypergeometric function ${}_2F_1$. </p>
|
1,030,335 | <blockquote>
<p>Let <span class="math-container">$n$</span> and <span class="math-container">$r$</span> be positive integers with <span class="math-container">$n \ge r$</span>. Prove that:</p>
<p><span class="math-container">$$\binom{r}{r} + \binom{r+1}{r} + \cdots + \binom{n}{r} = \binom{n+1}{r+1}.$$</span></p>
</blockquote>
<p>Tried proving it by induction but got stuck. Any help with proving it by induction or any other proof technique is appreciated.</p>
| Mike | 193,928 | <p>Let $E = \{1,2, \dots , n+1\}$. The number $\binom{n+1}{r+1}$ is the number of subsets $A$ of $E$ with $r + 1$ elements. </p>
<p>Classify these subsets $A$ according to their largest element $b$, which can be any number among $r + 1$, $r + 2$, ..., $n + 1$. The number of $(r+1)$-element subsets of $E$ with largest element $b$ is the same as the number of $r$-element subsets of $\{1, 2, \dots, b-1\}$, which is $\binom{b-1}{r}$. </p>
<p>Now $b$ can be any number $r + 1, r + 2, \ldots, n + 1$, and there are $\binom{r}{r}$, $\binom{r + 1}{r}, \ldots, \binom{n}{r}$ possible subsets $A$ in each case. This proves the desired equality.</p>
|
1,334,527 | <p>The integral in hand is
$$
I(n) = \frac{1}{\pi}\int_{-1}^{1} \frac{(1+2x)^{2n}}{\sqrt{1-x^2}}\, dx
$$
I don't know whether it has a closed form or not, but currently I only want to know its asymptotic behavior. Setting $x=\cos\theta$, we get
$$
I(n) = \frac{1}{\pi}\int_{0}^{\pi/2} \Big[(1+2\cos\theta)^{2n}+(1-2\cos\theta)^{2n}\Big]\, d\theta
$$
The second term can be neglected, therefore
$$
I(n) \sim \frac{1}{\pi}\int_{0}^{\pi/2}(1+2\cos\theta)^{2n}\, d\theta
$$
How can I move on?</p>
| Dr. Wolfgang Hintze | 198,592 | <p>This is not a solution but a comment following the comment of Claude pointing out an interesting generating function and a shorter recursion.</p>
<p>In <a href="http://oeis.org/A082758" rel="nofollow noreferrer">http://oeis.org/A082758</a> Paul Barry gives the simple g.f.</p>
<p><span class="math-container">$$g(x)=\frac{1}{\sqrt{(1+x)(1-3x)}}\tag{1}$$</span></p>
<p>so that (Michael Somos)</p>
<p><span class="math-container">$$I(n) = \frac{1}{(2n)!}\frac{\partial ^{2n}}{\partial x^{2n}}g(x)|_{x\to 0}\tag{2}$$</span></p>
<p>Putting this SeriesCoefficient expression into Mathematica returns a DifferenceFunction corresponding to the following recurrence</p>
<p><span class="math-container">$$req=\{(-3-3 n) y(n)+(-3-2 n) y(n+1)+(2+n) y(n+2)=0,y(0)=1,y(1)=1\}\tag{3}$$</span></p>
<p>The first few terms of the solution are</p>
<p><span class="math-container">$$y(n) = \{1,1,3,7,19,51,141,393,1107,3139,8953\}$$</span></p>
<p>and every second term gives the value of the integral, i.e.</p>
<p><span class="math-container">$$y(2n) = I(n)$$</span></p>
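<p>The recurrence (3) is easy to check against the listed terms; a sketch (integer division is exact here because all terms are integers):</p>

```python
def seq(N):
    # rearrangement of recurrence (3):
    # (n+2) y(n+2) = (2n+3) y(n+1) + 3(n+1) y(n),  y(0) = y(1) = 1
    y = [1, 1]
    for n in range(N - 2):
        y.append(((2 * n + 3) * y[n + 1] + 3 * (n + 1) * y[n]) // (n + 2))
    return y

print(seq(11))  # [1, 1, 3, 7, 19, 51, 141, 393, 1107, 3139, 8953]
```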
|
2,659,448 | <p>The following question is an exercise from Munkres' Analyis on Manifolds (Chapter 4 - Section 20):</p>
<p>Consider the vectors $a_i$ in $R^3$ such that:</p>
<p>$[a_1\ a_2\ a_3\ a_4] = \begin{bmatrix} 1 & 0 & 1 & 1 \\ 1 & 0 & 1 & 1 \\ 1 & 1 & 2 & 0 \end{bmatrix}$</p>
<p>Let $V$ be the subspace of $R^3$ spanned by $a_1$ and $a_2$. Show that $a_3$ and $a_4$ also span $V$, and that the frames $(a_1,a_2)$ and $(a_3,a_4)$ belong to opposite orientations of $V$.</p>
<p>My initial approach was to span $a_1$ and $a_2$ to determine $V$. Since two vectors can span at most $R^2$, the third term of any general vector within the span must be 0. So, $(x,x,x+y)=0\Longrightarrow x=0,\ y=0$.
And so, $V$ is simply the origin $(0,0)$. Spanning $a_3$ and $a_4$ also yields this, showing that they both span $V$. This seems like a very odd answer for the first part in my opinion.</p>
<p>However, in trying to answer the second part I am completely lost. In Munkres, we see that the orientation of some $n$-tuple $(x_1,\ldots,x_n)$ is determined by the sign of the determinant of the matrix they form. But surely the determinant of a $3\times2$ matrix is not defined? One thought I had was to simply take the determinant of the upper two rows of the matrix (so the determinant is defined), but this determinant is 0, which isn't covered by Munkres' definition, so the frame has no orientation?</p>
<p>Any help / clarification / solutions would be massively appreciated.</p>
| amd | 265,466 | <p>$\mathbb R^2$ is not a subspace of $\mathbb R^3$, so there’s no way for any of these vectors to combine to span the former. On the other hand, two of them <em>could</em> span a two-dimensional subspace of $\mathbb R^3$. There are many such subspaces besides the one that consists of vectors with last coordinate equal to zero (which is certainly <em>isomorphic</em> to, but not equal to, $\mathbb R^2$). These misconceptions appear to have led you to the absurd conclusion that neither $a_1$ nor $a_2$ belong to their own span—after all, neither of their third coordinates vanish. </p>
<p>The span of $a_1$ and $a_2$ consists of all linear combinations $\lambda a_1+\mu a_2$. They are obviously linearly independent, so this is indeed a two-dimensional subspace of $\mathbb R^3$. Similarly, $a_3$ and $a_4$ are obviously linearly-independent, too. We can find by inspection that $a_3 = a_1+a_2$ and $a_4=a_1-a_2$, so the span of the latter pair is contained in the span of the former. You can solve these equations for $a_1$ and $a_2$ to show that the inclusion goes in the other direction as well, and hence the two subspaces are identical, but it’s enough to note that they, too, are linearly independent without explicitly inverting the equations. </p>
<p>If you didn’t happen to spot these relationships among the vectors, you could instead proceed systematically by computing the row-reduced echelon form of the matrix in your question, which is $$\begin{bmatrix}1&0&1&1 \\ 0&1&1&-1 \\ 0&0&0&0 \end{bmatrix}.$$ The first two columns verify that $a_1$ and $a_2$ are linearly independent, while the second two columns show that the other two vectors are both elements of their span and in fact give the coordinates of $a_3$ and $a_4$ relative to the $(a_1,a_2)$ basis of $V$. This last fact will come in handy for the second part. </p>
<p>These two bases of $V$ have opposite orientations iff the matrix that maps between them has a negative determinant. This will be a $2\times2$ matrix since you’re mapping from a two-dimensional space to another two-dimensional space. Recalling that the columns of a transformation matrix are the images of the basis vectors, this means that the matrix that maps coordinates of elements of $V$ from the $(a_3,a_4)$ basis to the $(a_1,a_2)$ basis is just the upper-right $2\times2$ submatrix of the above rref matrix. Its determinant is $-2$, therefore this change of basis is orientation-reversing.</p>
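<p>A quick numerical confirmation of these relations (a sketch; numpy assumed available):</p>

```python
import numpy as np

# columns of the matrix in the question
a1 = np.array([1, 1, 1])
a2 = np.array([0, 0, 1])
a3 = np.array([1, 1, 2])
a4 = np.array([1, 1, 0])

# a3 and a4 lie in the span of (a1, a2):
assert np.array_equal(a3, a1 + a2)
assert np.array_equal(a4, a1 - a2)

# coordinates of (a3, a4) in the (a1, a2) basis, as columns
M = np.array([[1.0, 1.0],
              [1.0, -1.0]])
print(np.linalg.det(M))  # negative (-2), so the frames are oppositely oriented
```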
|
452,306 | <p>I am trying to be able to find the radius of a cone combined with a cylinder.
see my other question
(Solving for radius of a combined shape of a cone and a cylinder where the cone is base is concentric with the cylinder? part2 )</p>
<p>I have a volume calculation that Has been reduced as far as I know how to.</p>
<p>Know values:</p>
<p>$$v=65712.4$$
$$x=3$$
$$y=2$$
$$\theta=30$$
$$r=\text{unknown}$$</p>
<p>$$v=\pi r^3\left(2y-\frac{2}{3}\tan\theta-\frac{x}{r}\right)$$</p>
<p>Since I haven't solved a cubic equation in a while, I would appreciate it explained in steps.</p>
<p>Thank You For Your Time.</p>
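<p>Multiplying out, the equation is a cubic in $r$: $\pi\left(2y-\frac{2}{3}\tan\theta\right)r^3-\pi x\,r^2-v=0$. A hedged numerical sketch using numpy's root finder (I am assuming $\theta=30$ means degrees; the variable names are mine):</p>

```python
import numpy as np

v, x, y = 65712.4, 3.0, 2.0
theta = np.radians(30.0)          # assumption: theta is given in degrees

# pi*(2y - (2/3)tan(theta)) r^3 - pi*x r^2 - v = 0
a = np.pi * (2.0 * y - (2.0 / 3.0) * np.tan(theta))
b = -np.pi * x
roots = np.roots([a, b, 0.0, -v])

# keep the real positive root
r = [z.real for z in roots if abs(z.imag) < 1e-8 and z.real > 0][0]
print(r)
```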
| John | 30,229 | <p>Let $B=\{g^{n}:n\in\mathbb{Z}\}$. Clearly $\bar{B}$ is also a subgroup of $G$. </p>
<p>If $1$ is an isolated point in $\bar{B}$ then all points of $\bar{B}$ are isolated, which means that $\bar{B}$ is compact and discrete, and hence finite. Thus, $g^{n}=1$ for some $n$ and so $\bar{A}$ is a subgroup of $G$.</p>
<p>On the other hand, if $1$ is a limit point in $\bar{B}$ then for any symmetric neighborhood $V$ of $1$ there is a positive integer $n$ such that $g^{n}\in V$ ($n>0$ is possible since $V$ is symmetric). Then $g^{n-1}\in(g^{-1}V)\cap A$, and since the $g^{-1}V$ form a neighborhood basis at $g^{-1}$ we have that $g^{-1}\in \bar{A}$. This means that $\bar{A}=\bar{B}$ and so $\bar{A}$ is a subgroup of $G$. </p>
|
127,086 | <p>I am struggling with an integral pretty similar to one already resolved in MO (link: <a href="https://mathoverflow.net/questions/101469/integration-of-the-product-of-pdf-cdf-of-normal-distribution">Integration of the product of pdf & cdf of normal distribution </a>). I will reproduce the calculus bellow for the sake of clarity, but I want to stress the fact that my computatons are essentially a reproduction of the discussion of the previous thread.</p>
<p>In essence, I need to solve:
<span class="math-container">$$\int_{-\infty}^\infty\Phi\left(\frac{f-\mathbb{A}}{\mathbb{B}}\right)\phi(f)\,df,$$</span>
where <span class="math-container">$\Phi$</span> is cdf of a standard normal, and <span class="math-container">$\phi$</span> its density. <span class="math-container">$\mathbb{B}$</span> is a negative constant.</p>
<p>As done in the aforementioned link, the idea here is to compute the derivative of the integral with respect to <span class="math-container">$\mathbb{A}$</span> (thanks to the Dominated Convergence Theorem, integral and derivative can switch positions). With this,
<span class="math-container">\begin{align*}
\partial_A\left[\int_{-\infty}^\infty\Phi\left(\frac{f-A}{B}\right)\phi(f)\,df\right]&=\int_{-\infty}^\infty\partial_A\left[\Phi\left(\frac{f-A}{B}\right)\phi(f)\right]\,df=\int_{-\infty}^\infty-\frac{1}{B}\phi\left(\frac{f-A}{B}\right)\phi(f)\,df
\end{align*}</span>
We note now that </p>
<p><span class="math-container">$$\phi\left(\frac{f-A}{B}\right)\phi(f)=\frac{1}{2\pi}\exp\left(-\frac{1}{2}\left[\frac{(f-A)^2}{B^2}+f^2\right]\right)=\frac{1}{2\pi}\exp\left(-\frac{1}{2B^2}\left[f^2(1+B^2)+A^2-2Af\right]\right)$$</span>
<span class="math-container">$$=\frac{1}{2\pi}\exp\left(-\frac{1}{2B^2}\left[\left(f\sqrt{1+B^2}-\frac{A}{\sqrt{1+B^2}}\right)^2+\frac{B^2}{1+B^2}A^2\right]\right)$$</span></p>
<p>Finally, then,</p>
<p><span class="math-container">$$\partial_A\left[\int_{-\infty}^\infty\Phi\left(\frac{f-A}{B}\right)\phi(f)\,df\right]$$</span></p>
<p><span class="math-container">$$\ \ \ \ \ =-\frac{1}{\sqrt{2\pi}B}\exp\left(-\frac{A^2}{2(1+B^2)}\right)\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty\exp\left(-\frac{1}{2B^2}\left[f\sqrt{1+B^2}-\frac{A}{\sqrt{1+B^2}}\right]^2\right)\,df$$</span></p>
<p>and with the change of variable
<span class="math-container">\begin{align}
\left[y\longmapsto f\frac{\sqrt{1+B^2}}{B}-\frac{A}{B\sqrt{1+B^2}}\Longrightarrow df=\frac{B}{\sqrt{1+B^2}}\,dy\right]
\end{align}</span>
we get
<span class="math-container">\begin{align}
\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty\exp\left(-\frac{1}{2B^2}\left[f\sqrt{1+B^2}-\frac{A}{\sqrt{1+B^2}}\right]^2\right)\,df=\frac{B}{\sqrt{1+B^2}}\int_{-\infty}^{\infty}\phi(y)\,dy=\frac{B}{\sqrt{1+B^2}}
\end{align}</span>
This means that
<span class="math-container">\begin{align}
\partial_A\left[\int_{-\infty}^\infty\Phi\left(\frac{f-A}{B}\right)\phi(f)\,df\right]&=-\frac{1}{\sqrt{2\pi}B}\exp\left(-\frac{A^2}{2(1+B^2)}\right)\frac{B}{\sqrt{1+B^2}}=-\frac{1}{\sqrt{1+B^2}}\phi\left(\frac{A}{\sqrt{1+B^2}}\right)
\end{align}</span>
At this point, given that (as <span class="math-container">$\mathbb{B}$</span> is negative)
<span class="math-container">$$\Phi\left(\frac{f-A}{\mathbb{B}}\right)\phi(f)=0$$</span>
when <span class="math-container">$\mathbb{A}\rightarrow-\infty$</span>, the integral we are looking for is equal to
<span class="math-container">\begin{align}
\int_{-\infty}^{\mathbb{A}}-\frac{1}{\sqrt{1+\mathbb{B}^2}}\phi\left(\frac{A}{\sqrt{1+\mathbb{B}^2}}\right)\,dA
\end{align}</span>
Again with the obvious change of variables
<span class="math-container">$$\left[y\longmapsto\frac{A}{\sqrt{1+\mathbb{B}^2}}\Longrightarrow\sqrt{1+\mathbb{B}^2}\,dy=dA\right]$$</span>
one gets
<span class="math-container">\begin{align}
\int_{-\infty}^{\mathbb{A}}-\frac{1}{\sqrt{1+\mathbb{B}^2}}\phi\left(\frac{A}{\sqrt{1+\mathbb{B}^2}}\right)\,dA=-\frac{1}{\sqrt{1+\mathbb{B}^2}}\sqrt{1+\mathbb{B}^2}\int_{-\infty}^{\mathbb{A}/\sqrt{1+\mathbb{B}^2}}\phi(y)\,dy=-\Phi({\mathbb{A}/\sqrt{1+\mathbb{B}^2}}).
\end{align}</span>
The problem here is that this number should obviously be positive, so at some point I am missing a signal. As the computations seem sound to me, I would like to see if anyone could help me to find my mistake. </p>
<p>Many thanks to you all.</p>
| Hugh Perkins | 115,210 | <p>When you do the change of variable, you are doing:</p>
<p>\begin{align}
\left[y\longmapsto f\frac{\sqrt{1+B^2}}{B}-\frac{A}{B\sqrt{1+B^2}}\Longrightarrow df=\frac{B}{\sqrt{1+B^2}}\,dy\right]
\end{align}</p>
<p>.. which is correct. However, we need to apply the same change of variable to the bounds, ie to $-\infty$ and $+\infty$. Given that $\mathbb{B}$ is a negative constant, the bounds will change sign, becoming $+\infty$ and $-\infty$. So we will have:</p>
<p>\begin{align}
\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty\exp\left(-\frac{1}{2B^2}\left[f\sqrt{1+B^2}-\frac{A}{\sqrt{1+B^2}}\right]^2\right)\,df=\frac{B}{\sqrt{1+B^2}}\int_{\infty}^{-\infty}\phi(y)\,dy=-\frac{B}{\sqrt{1+B^2}}
\end{align}</p>
<p>This sign change will then propagate through to the final answer, reversing its sign too.</p>
<p>Full working, based off your working above, but with some buggettes removed:</p>
<p>Start with:</p>
<p>$$
I =
\def\A{\mathbb{A}}
\def\B{\mathbb{B}}
\int_{-\infty}^\infty \Phi\left(
\frac{f - \A}
{\B}
\right)\phi(f)\, df
$$</p>
<p>Take derivative wrt $\A$:</p>
<p>$$
\partial_\A I = \int_{-\infty}^\infty
\partial_\A \left(
\Phi \left(
\frac{f - \A}{\B}
\right)
\phi(f)
\right)
\, df
$$</p>
<p>$$
=\int_{-\infty}^\infty
\left(
\frac{-1}{\B}
\right)
\phi\left(
\frac{f - \A}{\B}
\right)
\phi(f)
\,
df
$$</p>
<p>Looking at $E_1 = \phi\left(\frac{f - \A}{\B}\right) \phi(f)$:</p>
<p>$$
E_1 = \frac{1}{2\pi}
\exp \left(
- \frac{1}{2}
\left(
\frac{f^2 - 2f\A + \A^2 + \B^2f^2}
{B^2}
\right)
\right)
$$</p>
<p>$$
=\frac{1}{2\pi} \exp\left( - \frac{1}{2\B^2}
\left(
\left(
\sqrt{(1 + \B^2)}f - \A\frac{1}{\sqrt{1 + \B^2}}
\right)^2
- \frac{\A^2}
{1 + \B^2}
+ \A^2
\right)
\right)
$$</p>
<p>Simplifying the constant terms in the exponent,
$$
- \frac{\A^2}{1+\B^2} + \A^2
$$</p>
<p>$$
= \frac{-\A^2 + \A^2 + \A^2\B^2}
{1 + \B^2}
$$</p>
<p>$$
= \frac{\A^2\B^2}
{1 + \B^2}
$$</p>
<p>Therefore $E_1$ is:</p>
<p>$$
\frac{1}{2\pi}
\exp \left(
-\frac{1}{2\B^2}
\frac{\A^2\B^2}{1 + \B^2}
\right)
\exp \left(
- \frac{1}{2\B^2} \left(
f\sqrt{1 + \B^2} - \frac{\A}{\sqrt{1 + \B^2}}
\right)^2
\right)
$$</p>
<p>$$
=
\frac{1}{2\pi}
\exp \left(
-\frac{\A^2}{2(1 + \B^2)}
\right)
\exp \left(
- \frac{1}{2\B^2} \left(
f\sqrt{1 + \B^2} - \frac{\A}{\sqrt{1 + \B^2}}
\right)^2
\right)
$$</p>
<p>Make change of variable:</p>
<p>$$
y = f\frac{\sqrt{1 + \B^2}}{\B} - \frac{A}{\B\sqrt{1 + \B^2}}
$$</p>
<p>Therefore:</p>
<p>$$
dy = \frac{\sqrt{1 + \B^2}}{\B}\,df
$$</p>
<p>For the limits, we have $f_1 = -\infty$, and $f_2 = \infty$</p>
<p>$\sqrt{1 + \B^2}$ is always positive. $\B$ is always negative. Therefore:</p>
<p>$$
y_1 = +\infty, y_2 = -\infty
$$</p>
<p>Therefore:</p>
<p>$$
\partial_\A I =
\frac{-1}{\B}\int_{+\infty}^{-\infty} \frac{1}{2\pi}
\exp \left(
- \frac{\A^2} {2(1+\B^2)}
\right)
\exp \left(
- \frac{1}{2} y^2
\right)
\frac{\B}{\sqrt{1 + \B^2}}
\,
dy
$$</p>
<p>$$
=
\frac{1}{\sqrt{2\pi}}
\frac{1}{\B}
\frac{\B}{\sqrt{1 + \B^2}}
\exp \left(
- \frac{\A^2}{2(1 + \B^2)}
\right)
\int_{-\infty}^\infty
\frac{1}{\sqrt{2\pi}}
\exp\left(
-\frac{1}{2} y^2
\right)
\,
dy
$$</p>
<p>$$
= \frac{1}{\sqrt{2\pi}}
\frac{1}{\sqrt{1+\B^2}}
\exp \left(
- \frac{\A^2}{2(1 + \B^2)}
\right)
(1)
$$</p>
<p>$$
= \frac{1}{\sqrt{1 + \B^2}}
\phi\left(
\frac{\A}{\sqrt{1 + \B^2}}
\right)
$$</p>
<p>Now we need to re-integrate back up again, since we currently have $\partial_\A I$, and we need $I$.</p>
<p>Since we don't have limits, we'll need to find at least one known point.</p>
<p>We have the following integral:</p>
<p>$$
I = \frac{1}{\sqrt{1 + \B^2}} \int \phi\left(
\frac{\A}{\sqrt{1 + \B^2}}
\right)
\,d\A
$$</p>
<p>$$
= \frac{1}{\sqrt{1 + \B^2}}
\sqrt{1 + \B^2}
\Phi\left(
\frac{\A}
{\sqrt{1 + \B^2}}
\right)
+ C
$$
... where $C$ is a constant of integration</p>
<p>$$
= \Phi \left(
\frac{\A}{\sqrt{1 + \B^2}}
\right)
+ C
$$</p>
<p>Looking at the original integral, we had/have:</p>
<p>$$
I = \int_{-\infty}^\infty \Phi\left(
\frac{f - \A}{\B}
\right)
\phi(f)
\,
df
$$</p>
<p>We can see that, given that $\B$ is negative, as $\A \rightarrow \infty$, $\Phi\left(\frac{f-\A}{\B}\right) \rightarrow \Phi(\infty) = 1$.</p>
<p>Therefore, as $\A \rightarrow \infty$, $\int_{-\infty}^\infty \Phi(\cdot)\phi(f)\,df \rightarrow 1$</p>
<p>Meanwhile, looking at the later expression for $I$, ie:</p>
<p>$$
I= \Phi\left(
\frac{\A}{\sqrt{1 + \B^2}}
\right) + C
$$</p>
<p>... as $\A \rightarrow +\infty$, $\Phi\left( \frac{\A}{\sqrt{1 + \B^2}} \right) \rightarrow 1$</p>
<p>But we know that as $\A \rightarrow +\infty$, $I \rightarrow 1$.</p>
<p>Therefore, $C = 0$</p>
<p>Therefore:</p>
<p>$$
I = \Phi \left(
\frac{\A}
{\sqrt{1 + \B^2}}
\right)
$$</p>
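<p>As a sanity check of the final formula, a direct quadrature sketch (with $\Phi$ built from <code>math.erf</code>; the test values of $A$ and $B$ are arbitrary, with $B$ negative as assumed throughout):</p>

```python
from math import erf, exp, pi, sqrt

def Phi(z):                        # standard normal cdf
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def phi(z):                        # standard normal pdf
    return exp(-z * z / 2.0) / sqrt(2.0 * pi)

A, B = 0.7, -1.3                   # arbitrary test values, B negative

# midpoint rule for int Phi((f - A)/B) phi(f) df over [-12, 12];
# the tails beyond +-12 are negligible because phi decays very fast
m, lo, hi = 50000, -12.0, 12.0
h = (hi - lo) / m
I_num = h * sum(Phi((lo + (i + 0.5) * h - A) / B) * phi(lo + (i + 0.5) * h)
                for i in range(m))

print(I_num, Phi(A / sqrt(1.0 + B * B)))  # the two values agree
```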
|
332,760 | <blockquote>
<p>For an odd prime, prove that a primitive root of $p^2$ is also a primitive root of $p^n$ for $n>1$. </p>
</blockquote>
<p>I have proved the other way round that any primitive root of $p^n$ is also a primitive root of $p$ but I have not been able to solve this one. I have tried the usual things that is I have assumed the contrary that there does not exist the primitive root following the above condition and then proceeded but couldn't solve it.<br>
Please help.</p>
| awllower | 6,792 | <p>Let us give a more elementary answer, while still using some binomial theorem. But we shall employ no more than a binomial lemma. It states that, for a prime $p$, and an integer $1\leq a\leq p-1$, we have $p\mid \binom{p}{a}$.<br>
Now you know that $a$ is a primitive root of $p^2$, so the order of $a$ modulo $p^2$ is $p(p-1)$. And we know that $a^{p-1}=1+kp$ for some $k$. Then the assumption that $a$ is a primitive root of $p^2$ implies that $k$ is not divisible by $p$. Therefore $$a^{p^n(p-1)}=(a^{p-1})^{p^n}=1+kp^{n+1}+mp^{n+2}$$ for some $m$.(This is the place where we make use of the binomial lemma.) This directly tells you that $a$ is a primitive root of $p^n$ for every $n\ge1$. </p>
<blockquote class="spoiler">
<p> <strong>More Details.</strong> We elaborate upon two things: Firstly, we show the centered equation, and, secondly, we show how that implies the primitivity of $a$. For the first, use the lemma to deduce that $(a^{p-1})^{p}=1+kp\times p+\text{terms of higher powers of $p$}$. Now, by induction, the result follows. For the second, just divide $(a^{p-1})^{p^r}$ by $p^n$ for $k=0,1,\ldots,n-1$, to see that, for lower powers of $a$ than $(a^{p-1})^{p^{n-1}}$, it cannot be congruent to $1$ modulo $p^n$. So $a$ is indeed a primitive root of $p^n$. </p>
</blockquote>
<p>If there is any error in the above proof, please inform me; if there is any ambiguity, please point it out, thanks.</p>
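<p>The statement is also easy to confirm by brute force for small odd primes; a sketch:</p>

```python
def order(a, m):
    # multiplicative order of a modulo m (assumes gcd(a, m) == 1)
    k, x = 1, a % m
    while x != 1:
        x = x * a % m
        k += 1
    return k

def phi(p, n):
    # Euler's totient of p^n for prime p
    return p ** (n - 1) * (p - 1)

for p in (3, 5, 7):
    for a in range(2, p * p):
        if a % p != 0 and order(a, p * p) == phi(p, 2):
            # a is a primitive root mod p^2: check it stays primitive mod p^3, p^4
            assert order(a, p ** 3) == phi(p, 3)
            assert order(a, p ** 4) == phi(p, 4)
print("verified for p = 3, 5, 7 up to p^4")
```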
|
581,257 | <p>I would like to see a proof of when equality holds in <a href="https://en.wikipedia.org/wiki/Minkowski_inequality" rel="nofollow noreferrer">Minkowski's inequality</a>.</p>
<blockquote>
<p><strong>Minkowski's inequality.</strong> If <span class="math-container">$1\le p<\infty$</span> and <span class="math-container">$f,g\in L^p$</span>, then <span class="math-container">$$\|f+g\|_p \le \|f\|_p + \|g\|_p.$$</span></p>
</blockquote>
<p>The proof is quite different for when <span class="math-container">$p=1$</span> and when <span class="math-container">$1<p<\infty$</span>. Could someone provide a reference? Thanks!</p>
| Samantha Wyler | 723,878 | <p>Let <span class="math-container">$q$</span> be a conjugate exponent of <span class="math-container">$p$</span>, meaning <span class="math-container">$\frac{1}{q} + \frac{1}{p} = 1$</span>. Now <span class="math-container">$\|f + g\|_p = \|f\|_p + \|g\|_p$</span> iff <span class="math-container">$\|f+ g\|_p^p = \|f + g\|_p^{p - 1}(\|f\|_p + \|g\|_p)$</span>.</p>
<p>So since
<span class="math-container">$$
\begin{split}
\|f + g\|_p^p & = \int_X |f + g|^p d\mu = \int_X (|f + g|)|f + g|^{p - 1} d\mu \\
&\leq \int_X \big(|f| + |g|\big)|f + g|^{p - 1} d\mu = \int_X |f||f + g|^{p-1} d\mu + \int_X |g||f + g|^{p-1} d\mu \\
& = \|f(f + g)^{p - 1}\|_{L^1} + \|g(f + g)^{p - 1}\|_{L^1} \\
&\leq \|f\|_p\|(f + g)^{p - 1}\|_q + \|g\|_p \|(f + g)^{p - 1}\|_q \\
& = \big(\|f\|_p + \|g\|_p\big)\|(f + g)^{p - 1}\|_q \\
& = \big(\|f\|_p + \|g\|_p\big)\left(\int_X \big(|f(x) + g(x)|^{p - 1}\big)^q d\mu\right)^{1/q} \\
& = \big(\|f\|_p + \|g\|_p\big)\left(\int_X |f(x) + g(x)|^{pq - q} d\mu\right)^{1/q} \\
& = \big(\|f\|_p + \|g\|_p\big)\left(\int_X |f(x) + g(x)|^{p} d\mu\right)^{1/q} \\
& = \big(\|f\|_p + \|g\|_p\big)\left(\int_X |f(x) + g(x)|^{p} d\mu\right)^{1 - 1/p} \\
& = \big(\|f\|_p + \|g\|_p\big)\left(\int_X |f(x) + g(x)|^{p} d\mu\right)^{\frac{p - 1}{p}} = \|f + g\|_p^{p - 1}\big(\|f\|_p + \|g\|_p\big)
\end{split}
$$</span>
where the <span class="math-container">$3^{rd}$</span> inequality follows from the triangle inequality and the <span class="math-container">$6^{th}$</span> inequality follows from Hölder's inequality.</p>
<p>Therefore Minkowski's inequality is an equality iff the triangle inequality is an equality almost everywhere and Hölder's inequality is an equality. Now the triangle inequality is an equality almost everywhere iff <span class="math-container">$g(x)$</span> and <span class="math-container">$f(x)$</span> have the same sign almost everywhere, or if on every set of positive measure where the two differ in sign at least one of them is zero. We have equality in Hölder's inequality if <span class="math-container">$|f|^p$</span> is a constant multiple of <span class="math-container">$|g|^q$</span> almost everywhere.</p>
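<p>A finite-dimensional illustration of the equality condition (a sketch with arbitrary test vectors; $\ell^p$ on $\mathbb{R}^3$ stands in for $L^p$):</p>

```python
import numpy as np

def norm_p(v, p):
    return (np.abs(v) ** p).sum() ** (1.0 / p)

p = 3.0
f = np.array([1.0, 2.0, 0.5])
g = 2.5 * f                        # nonnegative multiple of f: equality case
h = np.array([2.0, -1.0, 0.3])     # different sign pattern: strict inequality

assert np.isclose(norm_p(f + g, p), norm_p(f, p) + norm_p(g, p))
assert norm_p(f + h, p) < norm_p(f, p) + norm_p(h, p)
print("equality for g = 2.5 f, strict inequality otherwise")
```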
|
<p>How can I find the values of <span class="math-container">$n\in \mathbb{N}$</span> that make the fraction <span class="math-container">$\frac{2n^{7}+1}{3n^{3}+2}$</span> reducible?</p>
<p>I don't know any ideas or hints how I solve this question.</p>
<p>I think we must write <span class="math-container">$2n^{7}+1=k(3n^{3}+2)$</span> with <span class="math-container">$k≠1$</span></p>
| saulspatz | 235,128 | <p>I have a solution, but I'm sure there's a better way to do this. The greatest common divisor <span class="math-container">$g$</span> of <span class="math-container">$2n^7+1$</span> and <span class="math-container">$3n^3+2$</span> must also divide <span class="math-container">$$3(2n^7+1)-2n^4(3n^3+2)=3-4n^4$$</span> then <span class="math-container">$g$</span> must also divide <span class="math-container">$$4n(3n^3+2)-3(4n^4-3)=8n+9$$</span> Then <span class="math-container">$g$</span> must divide <span class="math-container">$$ 3n^2(8n+9)-8(3n^3+2)=27n^2-16$$</span></p>
<p>Continuing in this manner, we eventually find that <span class="math-container">$g$</span> must divide <span class="math-container">$1163$</span> which is prime. So any solution satisfies <span class="math-container">$$3n^3+2\equiv0\pmod{1163}$$</span></p>
<p>The only solution to this is <span class="math-container">$n\equiv435\pmod{1163}$</span>, which I found with a python script, though I imagine there's a way to do it with a pencil.</p>
<p>It's easy to verify that also <span class="math-container">$2\cdot435^7+1\equiv0\pmod{1163}$</span>, so the complete solution is <span class="math-container">$$n\equiv435\pmod{1163}.$$</span> </p>
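<p>The conclusion can be verified with a short script re-running the brute-force search; a sketch:</p>

```python
from math import gcd

# n = 435 makes numerator and denominator both divisible by 1163
assert (3 * 435 ** 3 + 2) % 1163 == 0
assert (2 * 435 ** 7 + 1) % 1163 == 0

# and it is the only residue class mod 1163 for which the fraction is reducible
sols = [n for n in range(1163) if gcd(2 * n ** 7 + 1, 3 * n ** 3 + 2) > 1]
print(sols)  # [435]
```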
<p><strong>EDIT</strong></p>
<p>Daniel Wainfleet's answer shows the right way to find <span class="math-container">$435$</span>. </p>
|
3,631,042 | <p>Probably, <span class="math-container">$y = x^2$</span> plots a parabola only given certain assumptions that structure a cartesian coordinate plane, and it does not plot a parabola in e.g. the polar coordinate plane.</p>
<p>Now, why exactly does a parabola share an equation with the area of a square? 'Why' here is to be understood as inquiring at the equation's suggestion of a -geometrical- correspondence between the two given certain assumptions, but only the equation suggests this and not the actual shapes. Is this completely accidental, i.e., does the geometry of a parabola have nothing to do with that of a square, or does the equation <span class="math-container">$y = x^2$</span> indeed suggests some sort of relationship between the two shapes? </p>
<p>Most of all, I want to know: can we manage to identify any geometrical correspondence between a square and a parabola due to the equation?</p>
<p>(The equation of a circle in cartesian coordinates similarly bothers me, but at least we can speak of some sort of relationship between pythagorean triples.)</p>
| marty cohen | 13,079 | <p>The parabola is the graph of the function which plots the square of the abscissa. That happens to be the same as the function which gives the area of a square given its side.</p>
<p>The name should give you a hint why.</p>
|
3,360,914 | <p>Let <span class="math-container">$A$</span>, <span class="math-container">$B$</span> and <span class="math-container">$C$</span> be symmetric, positive semi-definite matrices. Is it true that
<span class="math-container">$$ \|(A + C)^{1/2} - (B + C)^{1/2}\| \leq \|A^{1/2} - B^{1/2}\|,$$</span>
in either the 2 or Frobenius norm? </p>
<p>It is clearly true when <span class="math-container">$A, B$</span> and <span class="math-container">$C$</span> commute, but the general case is less clear to me. In fact, even the particular case <span class="math-container">$B = 0$</span> does not seem obvious.</p>
<hr>
<p>Without loss of generality, it is clear that we can assume that <span class="math-container">$C$</span> is diagonal.
We show that it is sufficient to prove the inequality for the matrix with zeros everywhere except at one position <span class="math-container">$k$</span> on the diagonal,
<span class="math-container">$$
(C_k)_{ij} = \begin{cases} 1 & \text{if } i=j=k\\ 0 & \text{otherwise} \end{cases}
$$</span>
Clearly, if the inequality is true for one <span class="math-container">$C_k$</span>, it is true for any <span class="math-container">$C_k$</span>, by flipping the axes, and also for <span class="math-container">$C = \alpha C_k$</span>, for any <span class="math-container">$\alpha \geq 0$</span>, because
<span class="math-container">\begin{align}
\|(A + \alpha \, C_k)^{1/2} - (B + \alpha C_k)^{1/2}\|
&= \sqrt{\alpha} \|(A/\alpha + C_k)^{1/2} - (B/\alpha + C_k)^{1/2}\| \\
&\leq \sqrt{\alpha} \|(A/\alpha)^{1/2} - (B/\alpha)^{1/2}\|
= \sqrt{\alpha} \|A^{1/2} - B^{1/2}\|
\end{align}</span>
Now, a general diagonal <span class="math-container">$C$</span> can be decomposed as <span class="math-container">$C = \sum_{k=1}^{n} \alpha_k C_k$</span>.
Applying the previous inequality (specialized for a matrix <span class="math-container">$C$</span> with only one nonzero diagonal element) repeatedly,
we can remove the diagonal elements one by one
<span class="math-container">\begin{align}
&\|(A + \sum_{k=1}^{n}\alpha_k \, C_k)^{1/2} - (B + \sum_{k=1}^{n}\alpha_k \, C_k)^{1/2}\| \\
&\qquad = \|((A + \sum_{k=1}^{n-1}\alpha_k \, C_k) + \alpha_n C_n)^{1/2} - ((B + \sum_{k=1}^{n-1}\alpha_k \, C_k) + \alpha_n C_n)^{1/2}\| \\
&\qquad \leq \|(A + \sum_{k=1}^{n-1}\alpha_k \, C_k)^{1/2} - (B + \sum_{k=1}^{n-1}\alpha_k \, C_k)^{1/2}\| \\
&\qquad \leq \|(A + \sum_{k=1}^{n-2}\alpha_k \, C_k)^{1/2} - (B + \sum_{k=1}^{n-2}\alpha_k \, C_k)^{1/2}\| \\
&\qquad \leq \dots \leq \|A^{1/2} - B^{1/2}\|.
\end{align}</span></p>
<hr>
<p>Here are three ways of proving the inequality in 1 dimension,
which I tried to generalize to the multidimensional case without success.
Let us write <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, <span class="math-container">$c$</span> instead of <span class="math-container">$A$</span>, <span class="math-container">$B$</span>, <span class="math-container">$C$</span>,
to emphasize that we are working in one dimension,
and let us assume without loss of generality that <span class="math-container">$a \leq b$</span>.</p>
<ul>
<li><p>Let us write:
<span class="math-container">$$ f(c) = \sqrt{b + c} - \sqrt{a + c} $$</span>
We calculate that the derivative of <span class="math-container">$f$</span> is given by
<span class="math-container">$$
f'(c) = \frac{1}{2} \left( \frac{1}{\sqrt{b + c}} - \frac{1}{\sqrt{a + c}} \right) \leq 0,
$$</span>
and so <span class="math-container">$f(c) = f(0) + \int_{0}^{c} f'(x) \, d x \leq f(0)$</span>.</p></li>
<li><p>We have, by the fundamental theorem of calculus and a change of variable
<span class="math-container">\begin{align}
\sqrt{b + c} - \sqrt{a + c} &= \int_{a + c}^{b + c} \frac{1}{2 \sqrt{x}} \, d x = \int_{a}^{b} \frac{1}{2 \sqrt{x + c}} \, d x \\
&\leq \int_{a}^{b} \frac{1}{2 \sqrt{x}} \, d x = \sqrt{b} - \sqrt{a}.
\end{align}</span></p></li>
<li><p>Squaring the two sides of the inequality, we obtain
<span class="math-container">$$
a + c - 2 \sqrt{a+ c} \, \sqrt{b + c} + b + c \leq a + b - 2 \sqrt{a} \sqrt{b}.
$$</span>
Simplifying and rearranging,
<span class="math-container">$$
c + \sqrt{a} \sqrt{b} \leq \sqrt{a+ c} \, \sqrt{b + c} .
$$</span>
Squaring again
<span class="math-container">$$
\require{cancel} \cancel{c^2 + a b} + 2 c \sqrt{a b} \leq \cancel{c^2 + ab} + ac + bc,
$$</span>
leading to
<span class="math-container">$$ a + b - 2 \sqrt{ab} = (\sqrt{b} - \sqrt{a})^2 \geq 0$$</span>.</p></li>
</ul>
<p>Numerical experiments suggest that the inequality is true in both the 2 and the Frobenius norm.
(One realization of) the following code prints 0.9998775.</p>
<pre><code>import numpy as np
import scipy.linalg as la

n, d, ratios = 100000, 3, []
for i in range(n):
    # random symmetric positive semi-definite matrices A, B and a small C
    A = np.random.randn(d, d)
    B = np.random.randn(d, d)
    C = .1 * np.random.randn(d, d)
    A, B, C = A.dot(A.T), B.dot(B.T), C.dot(C.T)
    lhs = la.norm(la.sqrtm(A + C) - la.sqrtm(B + C), ord='fro')
    rhs = la.norm(la.sqrtm(A) - la.sqrtm(B), ord='fro')
    ratios.append(lhs / rhs)
# a maximum below 1 is consistent with the conjectured inequality
print(np.max(ratios))
</code></pre>
| bsbb4 | 337,971 | <p>In short, the number of lattice points of given bounded magnitude grows linearly, giving a contribution proportional to <span class="math-container">$n \cdot \frac{1}{n^2} = \frac{1}{n}$</span>, making the series diverge.</p>
<p>I give a proof that <span class="math-container">$\sum_{\omega \in \Lambda^*} \omega^{-s}$</span> converges absolutely iff <span class="math-container">$s>2$</span>:</p>
<p>Fix a lattice <span class="math-container">$\Lambda \subset \Bbb C$</span> and set <span class="math-container">$\Lambda_r = \{m\lambda_1 + n\lambda_2 \mid m,n \in \Bbb Z \,\,\text{and}\,\, \max(|m|, |n|) = r \}$</span>.</p>
<p>Then <span class="math-container">$\Lambda^*$</span> is the disjoint union of the <span class="math-container">$\Lambda_r$</span>, <span class="math-container">$r > 0$</span>. Observe that <span class="math-container">$|\Lambda_r| = 8r$</span>.</p>
<p>Let <span class="math-container">$D$</span> and <span class="math-container">$d$</span> be the greatest and least moduli of the elements of the parallelogram <span class="math-container">$\Pi_1$</span> containing <span class="math-container">$\Lambda_1$</span>. Then we have <span class="math-container">$rD \geq |\omega| \geq rd$</span> for all <span class="math-container">$\omega \in \Lambda_r$</span>.</p>
<p>Define <span class="math-container">$\sigma_{r, s} = \sum_{\omega \in \Lambda_r} |\omega|^{-s}$</span>.</p>
<p><span class="math-container">$\sigma_{r, s}$</span> lies between <span class="math-container">$8r(rD)^{-s}$</span> and <span class="math-container">$8r(rd)^{-s}$</span>. Therefore <span class="math-container">$\sum_{r=1}^\infty \sigma_{r, s}$</span> converges iff <span class="math-container">$\sum r^{1-s}$</span> converges, i.e. iff <span class="math-container">$s > 2$</span>. </p>
<p>The claim follows.</p>
<p>This proof follows the one in Jones and Singerman's Complex Functions, p. 91.</p>
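<p>The count of <span class="math-container">$8r$</span> lattice points on each square ring, which drives the linear growth, can be checked directly; a sketch:</p>

```python
def ring_count(r):
    # lattice points (m, n) with max(|m|, |n|) exactly r
    return sum(1 for m in range(-r, r + 1)
                 for n in range(-r, r + 1)
                 if max(abs(m), abs(n)) == r)

for r in range(1, 30):
    assert ring_count(r) == 8 * r
print("ring count is 8r for r = 1, ..., 29")
```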
|
2,020,128 | <p>For $r$ is a real number, I can write $r \in \mathbb{R}$.</p>
<p>For $\varepsilon$ is an infinitesimal, I'd like to write something like $\varepsilon \in something$ Is there a symbol for "the set of infinitesimals"? Or alternatively, a commonly used abbreviation for "infinitesimal"?</p>
<p>For $H$ is an infinite (hyperreal) number, I'd like to write something like $H \in \infty$ Is there a symbol for "the set of infinite hyperreals", or a common abbreviation?</p>
| achille hui | 59,379 | <p>In non-standard analysis,
a <a href="https://en.wikipedia.org/wiki/Monad_%28non-standard_analysis%29" rel="nofollow noreferrer">monad</a> (also called halo) is the set of points infinitesimally close to a given point.</p>
<p>One model for extending the real numbers is the <a href="https://en.wikipedia.org/wiki/Hyperreal_number" rel="nofollow noreferrer">hyperreal numbers</a>. The set of hyperreals is usually denoted as ${}^*\mathbb{R}$.
Given $x \in {}^*\mathbb{R}$, the monad of $x$ is the set</p>
<p>$$\mathrm{monad}(x) = \{\; y \in {}^*\mathbb{R} : x - y \text{ is infintesimal }\;\}$$</p>
<p>For those $x$ where $|x| < n$ for $n \in \mathbb{N}$, we call $x$ finite (or limited). For such a $x$, there is a unique real number belongs to the monad of $x$. It will be called the standard part of $x$ (also known as shadow of $x$).</p>
<p>To specify a number $x$ is infinitesimally small, one can use the notation
$x \in \mathrm{monad}(0)$ or $x \in \mathrm{hal}(0)$.</p>
<p>If one want to go beyond this single use of notations for infinitesimals, I'll suggest one pick a textbook on this topic and stick to it. For example, I use following book as reference</p>
<blockquote>
<p>Lectures on the Hyperreals (an introduction to Nonstandard Analysis) by Robert Goldblatt</p>
</blockquote>
<p>It uses following notations</p>
<ul>
<li><p>Hyperreal $b$ is infinitely close to hyperreal $c$, denoted by $b \simeq c$ if $b - c$ is infinitesimal. This define an equivalent relations
on ${}^*\mathbb{R}$. The halo of a point $b$ is the $\simeq$-equivalence class
$$\mathrm{hal}(b) = \{ \; c \in {}^*\mathbb{R} : b \simeq c \; \}$$</p></li>
<li><p>Hyperreals $b$ and $c$ are of limited distance apart, denoted by $b \sim c$, if $b - c$ is limited (i.e. $|b-c| < n$ for some $n \in \mathbb{N}$). The galaxy of $b$ is the $\sim$-equivalence class
$$\mathrm{gal}(b) = \{ \; c \in {}^*\mathbb{R} : b \sim c \; \}$$</p></li>
<li><p>The standard part of $x$ is denoted by $\mathrm{sh}(x)$.</p></li>
</ul>
<p>Your mileage may vary.</p>
|
2,020,128 | <p>For $r$ is a real number, I can write $r \in \mathbb{R}$.</p>
<p>For $\varepsilon$ is an infinitesimal, I'd like to write something like $\varepsilon \in something$ Is there a symbol for "the set of infinitesimals"? Or alternatively, a commonly used abbreviation for "infinitesimal"?</p>
<p>For $H$ is an infinite (hyperreal) number, I'd like to write something like $H \in \infty$ Is there a symbol for "the set of infinite hyperreals", or a common abbreviation?</p>
| Mikhail Katz | 72,694 | <p>For infinite numbers there is a fairly common notation in the context of integers $\mathbb N$ and hyperintegers ${}^\ast\mathbb N$. Namely, a hyperinteger is infinite if it belongs to the set complement $${}^\ast\mathbb N\setminus\mathbb N.$$ This is not particularly elegant but introducing special notation for this set may cause even greater confusion.</p>
|
936,200 | <p>Suppose that $x_0$ is a real number and $x_n = \frac{1+x_{n-1}}{2}$ for all natural $n$. Use the Monotone Convergence Theorem to prove $x_n \to 1$ as $n$ grows.</p>
<p>Can someone please help me? I don't know what to assume since I don't know if it is increasing or decreasing when $x_0 < 1$ and when $x_0 > 1$.
Any hint/help would really help. Thank you.</p>
| Amitai Yuval | 166,201 | <p>"You belong to me", "I belong to you"... Possession always causes confusion.</p>
<p>A set, by definition, is a collection of elements. A given element $x$ can either be <em>in</em> a given collection, or <em>not in</em> the collection. I read "$y\in\{0,1,2,3\}$" as "$y$ is one of the elements $0,1,2,3$". No belonging involved.</p>
|
1,465,627 | <p>The problem is to maximize the determinant of a $3 \times 3$ matrix with elements from $1$ to $9$.<br>
Is there a method to do this without resorting to brute force?</p>
| copper.hat | 27,978 | <p>Swapping rows and columns leaves the absolute value of the determinant unchanged, so we can
assume that the middle cell contains 1. Then, since the absolute value of the determinant is unaffected by taking transposes (and swapping the top & bottom rows or
the left and right columns), we can assume that the 2 occurs in either a
corner or the middle of the top row.</p>
<p>This leaves $2 \cdot 7! = 10,080$ possibilities to check, which is an
improvement on $9!=362,880$.</p>
<p>After fixing the 1,2, one could notice that there are only 4 essentially
different places to put the 3, hence we only need to check $2 \cdot 4 \cdot 6! = 5,760$ possibilities.</p>
<p>Brute force (computing all 9! possibilities), always a favourite of mine, works too:</p>
<pre><code># python 2.7.6
import numpy
import itertools
sup = 0
for p in itertools.permutations(range(1,10)):
m = [ p[0:3], p[3:6], p[6:10] ]
d = abs(numpy.linalg.det(m))
if d > sup:
sup = d
print sup
</code></pre>
<p>This prints 412.0 after 8 seconds on my old X201 tablet.</p>
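<p>The symmetry reduction described above can also be turned into code. A sketch (placing the 1 at flat index 4 and the 2 at index 0 or 1 in row-major order is one concrete way to realize the argument; it reproduces the same maximum as the full search):</p>

```python
from itertools import permutations

def det3(m):
    # exact integer determinant of a 3x3 matrix given as a flat row-major list
    return (m[0] * (m[4] * m[8] - m[5] * m[7])
            - m[1] * (m[3] * m[8] - m[5] * m[6])
            + m[2] * (m[3] * m[7] - m[4] * m[6]))

best = 0
for pos2 in (0, 1):  # 2 in a corner or in the middle of the top row
    free = [i for i in range(9) if i not in (4, pos2)]
    for perm in permutations([3, 4, 5, 6, 7, 8, 9]):
        cells = [0] * 9
        cells[4] = 1          # 1 in the middle cell
        cells[pos2] = 2
        for i, v in zip(free, perm):
            cells[i] = v
        best = max(best, abs(det3(cells)))
print(best)  # 412
```

<p>Only $2 \cdot 7! = 10{,}080$ determinants are computed, as counted above.</p>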
|
1,465,627 | <p>The problem is to maximize the determinant of a $3 \times 3$ matrix with elements from $1$ to $9$.<br>
Is there a method to do this without resorting to brute force?</p>
| gnoodle | 934,371 | <p>The question asks for alternatives to brute force, but just to illustrate the difficulties of using brute force, here is python code for brute forcing it:</p>
<pre><code>import math  # np.math was removed in NumPy 2.0; use the stdlib module instead
import numpy as np
import itertools
import time
MATRIX_SIZE = 3 #3 for a 3x3 matrix, etc
best_matrices = []
highest_det = 0
start_time = time.process_time()
def report():
    print(round(time.process_time() - start_time,4) ,"\t",
          iteration, "/", num_permutations, round(100*iteration/ num_permutations,2), "% \t",
          permutation, "\tdet=",det,"(prev best det",highest_det,")")
num_permutations = math.factorial(MATRIX_SIZE**2)
iteration = 0
for permutation in itertools.permutations(range(1, MATRIX_SIZE*MATRIX_SIZE+1)):
iteration += 1
matrix = np.array(permutation).reshape((MATRIX_SIZE, MATRIX_SIZE))
det = round((np.linalg.det(matrix)) )
if det > highest_det:
report()
highest_det = det
best_matrices = [permutation]
elif det==highest_det:
report()
best_matrices.append(permutation)
total_time = time.process_time() - start_time
#print("List of the matrices with the highest determinant:\n", *best_matrices, sep='\n')
print(len(best_matrices), "matrices found")
print("which all have determinant", highest_det)
print("\ntime taken:", total_time)
</code></pre>
<p>Took my machine 3 seconds for a 3x3 matrix. Seems like it would take several years for a 4x4 matrix...<br />
Although with a bit of thought you could shorten the process.<br />
There are 36 matrices with a determinant of 412 and 36 with a determinant of -412.</p>
<p>All of the matrices with det 412 have 7,8,9 in separate cols and separate rows, as someone else mentioned.</p>
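<p>For reference, one maximizing matrix can be checked in exact integer arithmetic (this particular matrix is one of the 36 with determinant $+412$; the others follow from it by row/column permutations and transposition):</p>

```python
def det3(m):
    # exact integer determinant of a 3x3 matrix given as a list of rows
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

M = [(9, 4, 2), (3, 8, 6), (5, 1, 7)]  # entries are a permutation of 1..9
print(det3(M))  # 412
```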
|
1,039,563 | <p>Whether the graphs G and G' given below are isomorphic?</p>
<p><img src="https://i.stack.imgur.com/0evn6.jpg" alt="enter image description here"></p>
| Hagen von Eitzen | 39,174 | <p>As an improper integral, this should be
$$ \lim_{a\to+\infty}\underbrace{\int_0^af(t)\sin t\,\mathrm dt}_{=:F(a)}.$$
Assume $f$ is a nonconstant polynomial. Then for $a$ big enough, $f(t)>1$ for all $t>a$ or $f(t)<-1$ for all $t>a$.
Hence $|F((k+1)\pi)-F(k\pi)|>\left|\int_{k\pi}^{(k+1)\pi}\sin t\,\mathrm dt\right|=2$ for $k$ big enough, making convergence impossible.</p>
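<p>For the simplest nonconstant polynomial, $f(t)=t$, the increments can be checked concretely using the exact antiderivative $\int t\sin t\,\mathrm dt = \sin t - t\cos t$ (a numerical sketch):</p>

```python
import math

def F(a):
    # antiderivative of t*sin(t) evaluated at a
    return math.sin(a) - a * math.cos(a)

# |F((k+1)pi) - F(k*pi)| equals (2k+1)*pi here, so each increment exceeds 2
increments = [abs(F((k + 1) * math.pi) - F(k * math.pi)) for k in range(1, 8)]
print(all(inc > 2 for inc in increments))  # True
```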
|
966,798 | <p>How I solve the following equation for $0 \le x \le 360$:</p>
<p>$$
2\cos2x-4\sin x\cos x=\sqrt{6}
$$</p>
<p>I tried different methods. The first was to get things in the form of $R\cos(x \mp \alpha)$:</p>
<p>$$
2\cos2x-2(2\sin x\cos x)=\sqrt{6}\\
2\cos2x-2\sin2x=\sqrt{6}\\
R = \sqrt{4} = 2 \\
\alpha = \arctan \frac{2}{2} = 45\\
\therefore \cos(2x + 45) = \frac{\sqrt6}{2}
$$</p>
<p>which is impossible. I then tried to use t-substitution, where:</p>
<p>$$
t = \tan\frac{x}{2}, \sin x=\frac{2t}{1+t^2}, \cos x =\frac{1-t^2}{1+t^2}
$$</p>
<p>but the algebra got unreasonably complicated. What am I missing?</p>
| Andrew D. Hwang | 86,418 | <p>Here's a general answer:</p>
<p>The definitions of analysis are formulated in terms of conditions depending on a positive real number $\delta$ that "remain true if $\delta$ is made smaller". For example, the precise definition of the statement $\lim\limits_{x \to a} f(x) = L$ includes the condition
$$
\text{If $|x - a| < \delta$, then $|f(x) - L| < \varepsilon$,}
$$
which we might denote $P(\delta)$, regarding $f$, $a$, $L$, and $\varepsilon$ as given/known.</p>
<p>If the condition $P(\delta)$ is true for some $\delta > 0$, and if $0 < \delta' < \delta$, then $P(\delta')$ is also true, because its hypothesis is logically more strict.</p>
<p>Now suppose you have finitely many such conditions satisfied by positive numbers $\delta_{1}, \dots, \delta_{k}$, and you want a <em>single</em> $\delta > 0$ that satisfies <em>all</em> your conditions. It suffices to take a positive $\delta$ that does not exceed $\delta_{1}, \dots, \delta_{k}$. The standard idiom of analysis is to take
$$
\delta = \min(\delta_{1}, \dots, \delta_{k}).
$$</p>
<p>To be picky, it's not that we <em>need</em> to use the minimum, but it's <em>sufficient</em> or <em>enough</em> to use the minimum.</p>
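<p>A toy numerical illustration (the functions and thresholds below are made up for the example; they are not from any particular problem): if $\delta_1$ works for one condition and $\delta_2$ for another, their minimum works for both.</p>

```python
delta1, delta2 = 0.2, 0.1                # hypothetical thresholds for two conditions
cond1 = lambda x: abs(x**2 - 4) < 1      # holds whenever |x - 2| < delta1
cond2 = lambda x: abs(3*x - 6) < 0.3     # holds whenever |x - 2| < delta2

delta = min(delta1, delta2)
xs = [2 + delta * (i / 1000) for i in range(-999, 1000)]  # sample points with |x-2| < delta
print(all(cond1(x) and cond2(x) for x in xs))  # True
```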
|
245,312 | <p>Let $\kappa>0$ be a cardinal and let $(X,\tau)$ be a topological space. We say that $X$ is $\kappa$-<em>homogeneous</em> if</p>
<ol>
<li>$|X| \geq \kappa$, and</li>
<li>whenever $A,B\subseteq X$ are subsets with $|A|=|B|=\kappa$ and $\psi:A\to B$ is a bijective map, then there is a homeomorphism $\varphi: X\to X$ such that $\varphi|_A = \psi$.</li>
</ol>
<p><strong>Questions</strong>: Is it true that for $0<\alpha < \beta$ there is a space $X$ such that $|X|\geq \beta$, and $X$ is $\alpha$-homogeneous, but not $\beta$-homogeneous? Is there even such a space that is $T_2$? Also it would be nice to see an example for $\alpha=1, \beta=2$. And I was wondering whether there is a standard name for $\kappa$-homogeneous spaces. (Not all of these questions have to be answered for acceptance of answer.)</p>
| Will Brian | 70,618 | <p>The sort of space you describe is usually called <em>strongly</em> $\kappa$-homogeneous. If you google that phrase you will find some interesting results about these kinds of spaces (mostly concerning how this property relates to other homogeneity properties).</p>
<p>The earliest reference I could find to strongly $n$-homogeneous spaces (only finite values of $n$ are considered) is in a 1953 paper by C. E. Burgess (available <a href="http://www.ams.org/journals/proc/1954-005-01/S0002-9939-1954-0061367-1/S0002-9939-1954-0061367-1.pdf" rel="noreferrer">here</a>). </p>
<p>Despite the fact that these kinds of spaces have been in the literature for well over half a century, it is unknown whether there is a topological space that is strongly $4$-homogeneous but not strongly $5$-homogeneous. (This is stated explicitly in <a href="https://arxiv.org/pdf/1607.00103.pdf" rel="noreferrer">this paper</a> by Ancel and Bellamy from last year -- see the second paragraph of the second page.) As far as I can tell (although I don't have an authoritative reference), it is also unknown whether there is a strongly $n$-homogeneous space that is not also strongly $(n+1)$-homogeneous for any finite $n \geq 4$.</p>
<p>Therefore Joel's answer (along with some of Andreas's comments below it) constitutes the state-of-the-art knowledge on this question. </p>
|
2,405,205 | <p>The Wikipedia article on <a href="https://en.wikipedia.org/wiki/Fraction_(mathematics)#Complex_fractions" rel="nofollow noreferrer">Fractions</a> says:</p>
<blockquote>
<p>If, in a complex fraction, there is no unique way to tell which fraction lines takes precedence, then this expression is improperly formed, because of ambiguity. So 5/10/20/40 is not a valid mathematical expression, because of multiple possible interpretations [...]</p>
</blockquote>
<p>The first sentence makes sense, but does the second sentence follow? WolframAlpha <a href="http://wolframalpha.com/input/?i=5%2F10%2F20%2F40" rel="nofollow noreferrer">interprets that input without issue</a>, as do popular programming languages.</p>
<p>Is the order of operations not accepted in formal math?</p>
| Franklin Pezzuti Dyer | 438,055 | <p>Yes, but that's because Wolfram Alpha does that <em>by convention</em>. When you type something like that in, you're probably "confusing" WA, and so it has to use its last resort, which is to apply the operations in the order in which they are typed. Even though WA can interpret it, it's still bad mathematics to write a complex fraction that way if you want anyone to know what you mean.</p>
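<p>Python is one such language: <code>/</code> is left-associative, so a chain of divisions is evaluated strictly left to right (a quick check):</p>

```python
# Left-to-right evaluation, as Python's grammar defines "/":
chained = 5 / 10 / 20 / 40
explicit = ((5 / 10) / 20) / 40   # the conventional left-to-right grouping
print(chained == explicit)  # True
```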
|
985,212 | <p>Can you row reduce the matrix before computing $\det(\lambda I-A)$? Will this still give an equivalent characteristic polynomial?</p>
| hardmath | 3,111 | <p>A typical presentation of <a href="http://en.wikipedia.org/wiki/Elementary_matrix#Operations" rel="noreferrer">elementary row operations</a> sets out three kinds:</p>
<p>(1) Multiply a row by a nonzero scalar.</p>
<p>(2) Add a multiple of one row to another.</p>
<p>(3) Swap two rows.</p>
<p>The effects on the determinant of a (square) matrix when these are applied are easily determined. (1) multiplies the determinant by the same scalar used to multiply the row. (2) leaves the determinant unchanged. (3) changes the sign of the determinant.</p>
<p>However even if the cumulative effects of a series of row operations were contrived to leave the determinant of $A$ unchanged, this does not imply that the characteristic polynomial is preserved. For the characteristic polynomial to remain unchanged, we would need all the elementary symmetric invariants of characteristic roots (the coefficients of the characteristic polynomial, effectively) to stay the same.</p>
<p>For simplicity let's consider only the trace of $A$, the sum of characteristic roots, which determines the coefficient of $\lambda^{n-1}$, and which is also the sum of the diagonal entries of $A$. Operation (1) adds $(r-1)a_{ii}$ to the trace, when the <em>i</em>th row is multiplied by $r$. Operation (2) adds $r a_{ij}$ to the trace, when $r$ times the <em>i</em>th row is added to the <em>j</em>th row. Operation (3) adds $a_{ij}+a_{ji}-a_{ii}-a_{jj}$ to the trace when the <em>i</em>th and <em>j</em>th rows are swapped. All these are fairly unpredictable effects on the trace, and hence on the characteristic polynomial.</p>
<p>Considering the case of a $2\times 2$ matrix, we see that the reduced row-echelon form of a matrix $A$ has characteristic polynomial either $\lambda^2$, $\lambda(\lambda-1)$, or $(\lambda-1)^2$. Reconstructing even the characteristic polynomial of $A$ from the characteristic polynomial of its reduced row-echelon form seems unwieldy.</p>
<p>On the other hand it does make sense to consider computing $\det(\lambda I - A)$ by applying elementary row operations to row reduce $\lambda I - A$. However since the matrix entries are polynomials, say from $\mathbb{R}[\lambda]$, the ring operations are not as easy to carry out as in the case of row reducing real matrix $A$.</p>
<p>For example consider the matrix $A$ and its reduced row echelon form $R$:</p>
<p>$$ A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \; R = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} $$</p>
<p>The characteristic polynomial for $A$ is $\lambda^2 - 1$, while the characteristic polynomial for $R$ is $(\lambda - 1)^2$. Only a single elementary row operation, swapping the two rows, was required to change $A$ into $R$.</p>
<p>However the polynomial matrix:</p>
<p>$$ \lambda I - A = \begin{pmatrix} \lambda & -1 \\ -1 & \lambda \end{pmatrix} $$</p>
<p>may be reduced by a sequence of three elementary row operations (one of each kind!) to upper triangular form:</p>
<p>$$ \begin{pmatrix} 1 & -\lambda \\ 0 & \lambda^2 - 1 \end{pmatrix} $$</p>
<p>whose determinant is evidently $\lambda^2 - 1$. Thus elementary row operations applied to $\lambda I - A$ can provide us the characteristic polynomial of $A$.</p>
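<p>The $2\times 2$ example can be verified numerically with NumPy (a sketch; <code>np.poly</code> returns the characteristic polynomial's coefficients, highest degree first):</p>

```python
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 0.0]])
R = np.eye(2)    # the reduced row-echelon form of A

pA = np.poly(A)  # coefficients of det(lambda I - A): lambda^2 - 1
pR = np.poly(R)  # coefficients of det(lambda I - R): (lambda - 1)^2

print(np.allclose(pA, [1, 0, -1]), np.allclose(pR, [1, -2, 1]))  # True True
```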
|
2,030,547 | <p>The following expression came up in a proof I was reading, where it is said "It is easily shown: $$\lim_{x\to\infty} x(1-\frac{\ln (x-1)}{\ln x})=0."$$</p>
<p>Unfortunately I'm not having an easy time showing it. I guess it should come down to showing that the ratio $\frac{\ln (x-1)}{\ln x}$ converges to 1 superlinearly, which seems intuitive but I don't know how to prove it formally. Any tips?</p>
<p>Edit: original question had an implicit typo - I had $\ln x - 1$ rather than the intended $\ln(x-1)$.</p>
| egreg | 62,967 | <p>Let me start with a different example. Consider all maps from a set $X$ to $\mathbb{R}$ and push them together in a set, say $M(X,\mathbb{R})$.</p>
<p>This set can be given a structure of vector space by
$$
f+g\colon x\mapsto f(x)+g(x),
\qquad
\alpha f\colon x\mapsto \alpha f(x)
$$
for $f,g\in M(X,\mathbb{R})$ and $\alpha\in\mathbb{R}$.</p>
<p>If $X$ has more structure, then we may identify some useful subspaces of $M(X,\mathbb{R})$. For instance, if $X$ is a metric space, we could consider the <em>continuous</em> maps, $C(X,\mathbb{R})$; if $X$ is a differentiable manifold, we could consider the <em>infinitely differentiable</em> maps, $C^\infty(X,\mathbb{R})$; if $X$ is a measure space, we could consider the <em>integrable</em> maps $\mathscr{L}^1(X,\mathbb{R})$ or the <em>square integrable</em> maps $\mathscr{L}^2(X,\mathbb{R})$.</p>
<p>If we fix $x_0\in X$, we have an interesting map $e_{x_0}\colon M(X,\mathbb{R})\to\mathbb{R}$, simply defined by
$$
e_{x_0}(f)=f(x_0)
$$
Guess what? This map is <em>linear</em>. So we get a map, the <em>evaluation map</em>,
$$
e\colon X\to\operatorname{L}(M(X,\mathbb{R}),\mathbb{R})
\qquad
x_0\mapsto e_{x_0}
$$
The codomain is the set of all linear maps from $M(X,\mathbb{R})$ to $\mathbb{R}$.</p>
<p>Yes, there is another example of interesting subspace I didn't mention before (on purpose). If $X=V$ is a <em>vector space</em>, then the set of linear maps $V\to\mathbb{R}$ is a subspace of $M(V,\mathbb{R})$. This is usually denoted by $V^*$, the <em>dual space</em> of $V$. So the map $e$ is
$$
e\colon X\to M(X,\mathbb{R})^*
$$</p>
<p>In all examples above, $e_{x_0}$ restricts to a map from the “interesting” subspace to $\mathbb{R}$ and so we can define an evaluation map, say,
$$
e\colon X\to\operatorname{L}(C^\infty(X,\mathbb{R}),\mathbb{R})=
C^\infty(X,\mathbb{R})^*
$$
in the case when $X$ is a differentiable manifold. Or, when $X=V$ is a vector space, an evaluation map
$$
e\colon V\to\operatorname{L}(V^*,\mathbb{R})=(V^*)^*=V^{**}
$$
Yes, in this case the codomain is now the dual of $V^*$. And, guess what? The map $e$ is <em>linear</em> (which is easy to prove).</p>
<p>How does the map work? Exactly in the same way as before: it is a particular case, after all: if $v_0\in V$, $e(v_0)$ is an element of $V^{**}$, that is, a linear map $V^*\to\mathbb{R}$; which one? Like before
$$
e\colon V\to V^{**}
$$
sends $v_0$ to $e_{v_0}$, which is defined by
$$
e_{v_0}(f)=f(v_0)
$$
which is exactly the same as saying $e_{v_0}=v_0^*$ as in your notes (although I'd not use a $^*$ here, but probably something like $\widehat{v_0}$).</p>
<p>Getting rid of the noughts, we have a map
$$
e\colon V\to V^{**},\quad v\mapsto e_v
$$
where $e_v(f)=f(v)$.</p>
<p>This can also be seen as a <em>bilinear</em> map $(v,f)\mapsto f(v)$, from $V\times V^*$ to $\mathbb{R}$, which has some interesting uses as well. But, concentrating on $e$, we see that it is injective (provided we accept that every vector space has a basis) and an isomorphism precisely in the case when $V$ is finite-dimensional.</p>
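<p>The evaluation map is easy to mimic with higher-order functions; here is a toy sketch over $V=\mathbb{R}^2$, with vectors as pairs and elements of $V^*$ as Python functions (the sample functionals are made up for illustration):</p>

```python
def e(v):
    # e(v) is the functional f -> f(v), i.e. an element of V**
    return lambda f: f(v)

f = lambda v: 2 * v[0] + 3 * v[1]   # a sample element of V*
g = lambda v: v[0] - v[1]           # another sample functional

v, w = (1.0, 2.0), (4.0, -1.0)
vw = (v[0] + w[0], v[1] + w[1])     # the vector v + w

# e is linear: e(v + w) agrees with e(v) + e(w) on each functional,
# and e(v) simply evaluates a functional at v.
print(e(vw)(f) == e(v)(f) + e(w)(f), e(v)(g) == g(v))  # True True
```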
|
2,359,621 | <p>Consider $f:\mathbb{R}^2 \rightarrow \mathbb{R}$ where</p>
<p>$$f(x,y):=\begin{cases}
\frac{x^3}{x^2+y^2} & \textit{ if } (x,y)\neq (0,0) \\
0 & \textit{ if } (x,y)= (0,0)
\end{cases} $$</p>
<p>If one wants to show the continuity of $f$, I mainly want to show that </p>
<p>$$ \lim\limits_{(x,y)\rightarrow0}\frac{x^3}{x^2+y^2}=0$$</p>
<p>But what does $\lim\limits_{(x,y)\rightarrow0}$ mean? Is it equal to $\lim\limits_{(x,y)\rightarrow0}=\lim\limits_{||(x,y)||\rightarrow0}$ or does it mean $\lim\limits_{x\rightarrow0}\lim\limits_{y\rightarrow0}$?</p>
<p>If so, how does one show that the above function tends to zero?</p>
| Mark Viola | 218,419 | <p>Note that we have </p>
<p>$$\left|\frac{x^3}{x^2+y^2}\right|\le |x|$$</p>
<p>The limit as $(x,y)\to(0,0)$ is therefore $0$.</p>
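<p>The bound $\left|\frac{x^3}{x^2+y^2}\right|\le |x|$ can be spot-checked numerically (random sampling is only a sanity check, not a proof):</p>

```python
import random

def f(x, y):
    return x**3 / (x**2 + y**2)

random.seed(0)
samples = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(10_000)]
# small tolerance absorbs floating-point rounding in the division
ok = all(abs(f(x, y)) <= abs(x) + 1e-12 for x, y in samples if (x, y) != (0.0, 0.0))
print(ok)  # True
```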
<hr>
<p>The limit $\lim_{(x,y)\to(0,0)}f(x,y)=L$ means that for all $\epsilon>0$, there exists a deleted neighborhood $N_{0,0}$ (e.g., there exists a $\delta>0$, such that $0<\sqrt{x^2+y^2}<\delta$), such that whenever $(x,y)\in N_{0,0}$, $|f(x,y)-L|<\epsilon$.</p>
<p>Note that the iterated limits $\lim_{x\to0}\lim_{y\to0}f(x,y)$ and $\lim_{y\to0}\lim_{x\to0}f(x,y)$ are not necessarily equal to each other or equal to the limit $\lim_{(x,y)\to(0,0)}f(x,y)$.</p>
<p>In <a href="http://math.stackexchange.com/questions/2183291/exchange-of-two-limits-of-a-function/2183554#2183554">THIS ANSWER</a>, I referenced the Moore-Osgood Theorem, which gives sufficient conditions when the limit and the iterated limits are equal.</p>
|
4,074,718 | <p>The angle bisectors of <span class="math-container">$\angle B$</span> and <span class="math-container">$\angle C_{ex}$</span> intersect at point <span class="math-container">$E$</span>. If <span class="math-container">$\angle A=70^\circ$</span>, what is <span class="math-container">$\angle E$</span> equal to?</p>
<p><a href="https://i.stack.imgur.com/dOPmK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dOPmK.png" alt="enter image description here" /></a></p>
<p>I tried to solve this question as follows:</p>
<p><span class="math-container">$a=\angle EBC$</span> and <span class="math-container">$b=\angle BCA$</span></p>
<p><span class="math-container">$2a+b=110^{\circ}$</span></p>
<p>Also <span class="math-container">$\angle ACE= 90^\circ-\frac{b}{2}$</span></p>
<p>This is as far as I got. I don't know how to work out what <span class="math-container">$\angle E$</span> is equal to. Could you please explain to me how to solve this question?</p>
| lhf | 589 | <p>The polynomial with integer coefficients with least degree that has <span class="math-container">$\frac{\sqrt{3}}{2}$</span> as a root is <span class="math-container">$4 x^2 - 3$</span>.</p>
<p>Every polynomial with rational coefficients that has <span class="math-container">$\frac{\sqrt{3}}{2}$</span> as a root is a rational polynomial multiple of <span class="math-container">$4 x^2 - 3$</span>.</p>
<p>Likewise, for <span class="math-container">$\frac{\sqrt{2}}{3}$</span>, the polynomial is <span class="math-container">$9 x^2 - 2$</span>.</p>
<p>Since these two polynomials are coprime, every polynomial with rational coefficients that has <span class="math-container">$\frac{\sqrt{3}}{2}$</span> and <span class="math-container">$\frac{\sqrt{2}}{3}$</span> as roots is a multiple of <span class="math-container">$(4 x^2 - 3)(9 x^2 - 2)$</span>.</p>
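<p>A quick floating-point check of these root claims (a sanity check only, not a proof of minimality):</p>

```python
import math

def p(x): return 4 * x**2 - 3   # vanishes at sqrt(3)/2
def q(x): return 9 * x**2 - 2   # vanishes at sqrt(2)/3

r1, r2 = math.sqrt(3) / 2, math.sqrt(2) / 3
# the product polynomial (4x^2-3)(9x^2-2) vanishes at both roots
values = [p(r1), q(r2), p(r1) * q(r1), p(r2) * q(r2)]
print(all(math.isclose(v, 0.0, abs_tol=1e-12) for v in values))  # True
```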
|
4,377,771 | <blockquote>
<p>Find all natural numbers <span class="math-container">$a, b, c$</span> such that <span class="math-container">$a\leq b\leq c$</span> and <span class="math-container">$a^3+b^3+c^3-3abc=2017$</span>.</p>
</blockquote>
<h2><strong>My Attempt</strong></h2>
<p><span class="math-container">$$a^3+b^3+c^3-3abc=2017$$</span>
<span class="math-container">$$(a+b+c)(a^2+b^2+c^2-ab-bc-ca)=2017*1$$</span>
Now, <span class="math-container">$a+b+c$</span> can't be equal to <span class="math-container">$1$</span> as <span class="math-container">$a, b, c$</span> are natural numbers.<br />
So, <span class="math-container">$$a+b+c=2017$$</span> <span class="math-container">$$a^2+b^2+c^2-ab-bc-ca=1$$</span>
How should I proceed after this?</p>
| Saturday | 1,015,303 | <p><span class="math-container">$$a^2+b^2+c^2-ab-bc-ca=\frac{1}{2} \bigg( (a-b)^2 + (b-c)^2 + (c-a)^2 \bigg)=1$$</span>
<span class="math-container">$$ \implies (a-b)^2 + (b-c)^2 + (c-a)^2 =2$$</span></p>
<p>So, two between <span class="math-container">$a,b,c$</span> are equal and the other has the difference of <span class="math-container">$1$</span> from the others.</p>
<p>WLOG, assume <span class="math-container">$b=a$</span> and <span class="math-container">$c=a \pm 1$</span> (Ignoring <span class="math-container">$a\le b\le c$</span> here.).</p>
<p><span class="math-container">$$a+b+c=3a\pm1 = 2017 = 3\times672 \,+1$$</span></p>
<p>Thus <span class="math-container">$c=a+1$</span>, <span class="math-container">$a=b=672$</span> and <span class="math-container">$c=673$</span>.</p>
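<p>The algebra can be double-checked directly, including the identity $a^2+b^2+c^2-ab-bc-ca=\frac12\big((a-b)^2+(b-c)^2+(c-a)^2\big)$ used above:</p>

```python
a, b, c = 672, 672, 673

lhs = a**3 + b**3 + c**3 - 3 * a * b * c
factor1 = a + b + c                                  # 2017
factor2 = a*a + b*b + c*c - a*b - b*c - c*a          # 1
half_squares = ((a - b)**2 + (b - c)**2 + (c - a)**2) // 2

print(lhs, factor1 * factor2, factor2 == half_squares == 1)  # 2017 2017 True
```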
|
4,377,771 | <blockquote>
<p>Find all natural numbers <span class="math-container">$a, b, c$</span> such that <span class="math-container">$a\leq b\leq c$</span> and <span class="math-container">$a^3+b^3+c^3-3abc=2017$</span>.</p>
</blockquote>
<h2><strong>My Attempt</strong></h2>
<p><span class="math-container">$$a^3+b^3+c^3-3abc=2017$$</span>
<span class="math-container">$$(a+b+c)(a^2+b^2+c^2-ab-bc-ca)=2017*1$$</span>
Now, <span class="math-container">$a+b+c$</span> can't be equal to <span class="math-container">$1$</span> as <span class="math-container">$a, b, c$</span> are natural numbers.<br />
So, <span class="math-container">$$a+b+c=2017$$</span> <span class="math-container">$$a^2+b^2+c^2-ab-bc-ca=1$$</span>
How should I proceed after this?</p>
| Steffen Jaeschke | 629,541 | <p><a href="https://i.stack.imgur.com/DtQ6U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DtQ6U.png" alt="enter image description here" /></a></p>
<p>These are surfaces over $\{a,b\}$, with $a$ and $b$ taken as reals. The calculations are straightforward.</p>
<p><a href="https://i.stack.imgur.com/YcjRp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YcjRp.png" alt="enter image description here" /></a></p>
<p>The problem is that the second and third solutions are complex. The graph shows only the real parts.</p>
<p><a href="https://i.stack.imgur.com/CCrgh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CCrgh.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/BxwnY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BxwnY.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/bw4SE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bw4SE.png" alt="enter image description here" /></a></p>
<p>Still calculations are straightforward.</p>
|
4,377,771 | <blockquote>
<p>Find all natural numbers <span class="math-container">$a, b, c$</span> such that <span class="math-container">$a\leq b\leq c$</span> and <span class="math-container">$a^3+b^3+c^3-3abc=2017$</span>.</p>
</blockquote>
<h2><strong>My Attempt</strong></h2>
<p><span class="math-container">$$a^3+b^3+c^3-3abc=2017$$</span>
<span class="math-container">$$(a+b+c)(a^2+b^2+c^2-ab-bc-ca)=2017*1$$</span>
Now, <span class="math-container">$a+b+c$</span> can't be equal to <span class="math-container">$1$</span> as <span class="math-container">$a, b, c$</span> are natural numbers.<br />
So, <span class="math-container">$$a+b+c=2017$$</span> <span class="math-container">$$a^2+b^2+c^2-ab-bc-ca=1$$</span>
How should I proceed after this?</p>
| Shridhar Sharma | 988,232 | <p><a href="https://i.stack.imgur.com/BESmD.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BESmD.jpg" alt="This is the Solution" /></a></p>
<p>It was an easy question; just one important result and a bit of number theory were needed.</p>
|
4,003,948 | <p>In the Book that I'm reading (Mathematics for Machine Learning), the following para is given, while listing the properties of a matrix determinant:</p>
<blockquote>
<p>Similar matrices (Definition 2.22) possess the same determinant.
Therefore, for a linear mapping <span class="math-container">$Φ : V → V$</span> all transformation matrices
<span class="math-container">$A_Φ$</span> of <span class="math-container">$Φ$</span> have the same determinant. Thus, the determinant is invariant
to the choice of basis of a linear mapping.</p>
</blockquote>
<p>I know that matrices <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are similar if they satisfy <span class="math-container">$B=C^{-1}AC$</span>.
I can prove that determinants of such <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are equal using other properties of a determinant.</p>
<p>But beyond that I don't understand what this paragraph is saying. I can understand all matrices <span class="math-container">$Y$</span> such that <span class="math-container">$Y=X^{-1}AX$</span> have the same determinant as <span class="math-container">$A$</span>, for varying <span class="math-container">$X$</span>s.</p>
<p>But how do I connect this to linear mappings of the form <span class="math-container">$Φ : V → V$</span>. What does <span class="math-container">$Φ : V → V$</span> mean here? Maybe someone can give me an example.</p>
<p>EDIT:
This video is pretty basic, but it helped me understand better
<a href="https://www.youtube.com/watch?v=s4c5LQ5a4ek" rel="nofollow noreferrer">https://www.youtube.com/watch?v=s4c5LQ5a4ek</a></p>
| Alekos Robotis | 252,284 | <p>Given a linear transformation <span class="math-container">$T:V\to V$</span>, if we choose a basis <span class="math-container">$\mathcal{B}$</span> for <span class="math-container">$V$</span> we get an induced matrix <span class="math-container">$\Phi_{\mathcal{B}}:\Bbb{R}^n\to \Bbb{R}^n$</span> representing the linear transformation with respect to this basis. Given another choice of basis <span class="math-container">$\mathcal{B}'$</span>, there exists a change of basis matrix <span class="math-container">$P:\Bbb{R}^n\to \Bbb{R}^n$</span> such that <span class="math-container">$\Phi_{\cal{B}}=P^{-1}\Phi_{\cal{B}'}P$</span>. Hence, in light of this fact about determinants of similar matrices, we get
<span class="math-container">$$
\det \Phi_{\cal{B}}=\det(P^{-1}\Phi_{\mathcal{B}'}P)=\det\Phi_{\mathcal{B}'}.
$$</span>
Remark: You can think of choosing a basis as choosing coordinates on <span class="math-container">$V$</span> (like you might do for a surface in calculus). Then the point is that we can define the determinant by choosing any coordinate system and writing the matrix representation of <span class="math-container">$T$</span> in coordinates, then computing there. The fact above says that this is independent of coordinate system and hence well defined as an invariant of the transformation <span class="math-container">$T$</span>. The same applies for the trace of a map which you will probably encounter later if you haven't yet.</p>
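<p>The basis-independence of the determinant is easy to sanity-check numerically (the random matrices below are arbitrary; a generic random $P$ is invertible):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))   # plays the role of a change-of-basis matrix

B = np.linalg.inv(P) @ A @ P      # similar to A: same map, expressed in a new basis
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))  # True
```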
|
2,227,047 | <p>For any $x=x_1, \dotsc, x_n$, $y=y_1, \dotsc, y_n$ in $\mathbf E^n$, define $\|x-y\|=\max_{1 \le k \le n}|x_k-y_k|$. Let $f\colon\mathbf E^n \to \mathbf E^n$ be given by $f(x)=y$, where $y_k= \sum_{i=1}^n a_{ki} x_i + b_k$ where $k =1,2, \dotsc,n$. Under what conditions is $f$ a contraction mapping?</p>
<p>Any hint or solution for this question? I am beginner for this course, I can not understand clearly. </p>
| MCS | 378,686 | <p>Praise be! I just figured it out!</p>
<p>$\int_{-\infty}^{\infty}\frac{dx}{\left|1+\alpha x^{2}\right|}$</p>
<p>is the squared $L^{2}$ norm of $\left(1+\alpha x^{2}\right)^{-\frac{1}{2}}$.
The Fourier transform ($\int_{-\infty}^{\infty}f\left(x\right)e^{-2\pi i\xi x}dx$ convention) of $\left(1+\alpha x^{2}\right)^{-\frac{1}{2}}$ is $\frac{2}{\sqrt{\alpha}}K_{0}\left(\frac{2\pi\left|\xi\right|}{\sqrt{\alpha}}\right)$, where $K_{0}$ is the modified Bessel function of the second kind.</p>
<p>Wolfram Alpha gives</p>
<p>$\int_{-\infty}^{\infty}\left|K_{0}\left(\left|x\right|\right)\right|^{2}dx=\frac{\pi^{2}}{2}$</p>
<p>So, using Parseval's Identity, splitting the integral in half, and performing the change of variables then yields the answer:</p>
<p>$\int_{-\infty}^{\infty}\frac{dx}{\left|1+\alpha x^{2}\right|}=\frac{\pi}{\sqrt{\alpha}}$</p>
<p>Woo!</p>
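<p>For real $\alpha>0$, the claimed value $\pi/\sqrt{\alpha}$ is easy to sanity-check numerically (the truncation range and step size are arbitrary choices; this does not address the complex-$\alpha$ contour issue raised next):</p>

```python
import numpy as np

alpha = 2.0
dx = 0.01
x = np.arange(-10000, 10000, dx) + dx / 2   # midpoint rule on a truncated range
integral = np.sum(1.0 / np.abs(1.0 + alpha * x**2)) * dx

# tail error beyond |x| = 10000 is about 1/(alpha*10000), well under 1e-3
print(abs(integral - np.pi / np.sqrt(alpha)) < 1e-3)  # True
```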
<p>Wait... dammit. This doesn't take into account the fact that $\alpha$ is complex. The change of variables leads to a contour integral on a ray from 0. So there's still more work to be done.</p>
<p>Turns out you need to integrate the square modulus of $K_{0}$ along the ray from 0 to $\frac{\infty}{\sqrt{\alpha}}$. Any thoughts as to how to do this?</p>
|
7,108 | <p>I need help to make a diagram (a square); can someone teach me how to do it? </p>
<p>I know that I could look at the posts to see a model, but I have been blocked from editing questions for 7 days.</p>
<p>Thanks in advance.</p>
| Willie Wong | 1,543 | <ol>
<li><p>As long as the answer is on topic, answers the question, is not offensive, and not spam, then most users won't have a problem with you posting a YouTube Link. </p></li>
<li><p>It is perhaps better, however (since the YouTube link URLs are usually rather cryptic, and I often hesitate to click on random links on the internet), to give a little bit more than a link: say that you are linking to a video you made yourself to demonstrate the answer, and perhaps a few quick words about the methods used (to solve the problem). </p></li>
<li><p>As usual, we have no control (besides some basic vote-fraud detection system) on how users will vote in response to your proposed answer. As Pavel <a href="http://meta.math.stackexchange.com/questions/7105/is-it-allowed-to-answer-a-question-with-a-youtube-video#comment26202_7105">wrote</a>, users may have legitimate reasons to find a video response less useful. </p></li>
</ol>
|
959,393 | <p>Let's use the following example:</p>
<p>$$17! = 16!*17 \approx 2 \cdot 10^{13} * 17 = 3.4 \cdot 10^{14} $$</p>
<p>Are you allowed to do this? I am in doubt whether or not this indicates that $17! = 3.4 \cdot 10^{14}$, which is obviously not true, but I think it doesn't.</p>
| C_Guy | 179,154 | <p>It is allowed. It is like asking: does
$$10 = 5*2 \geq 5$$
indicate that $10=5$?</p>
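<p>As a quick numerical illustration (a hypothetical Python check, not part of the original exchange), the chain $17! = 16!\cdot 17 \approx 2 \cdot 10^{13} \cdot 17$ produces an estimate close to, but not equal to, the exact value — which is exactly the point:</p>

```python
import math

# Exact value of 17!
exact = math.factorial(17)  # 355687428096000

# The rounded chain from the question: 16! ~ 2e13, so 17! ~ 2e13 * 17
estimate = 2e13 * 17  # 3.4e14

# The estimate is a few percent off, so "=" would be wrong but "~" is fine
relative_error = abs(exact - estimate) / exact
```

<p>Each $=$ in the chain is exact; the single $\approx$ step is where the rounding enters, so the final value inherits "approximately equal", not "equal".</p>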
|
186,553 | <p><strong>Problem:</strong> Test the convergence of $\sum_{n=0}^{\infty} \frac{n^{k+1}}{n^k + k}$, where $k$ is a positive constant.</p>
<p>I'm stumped. I've tried to apply several different convergence tests, but still can't figure this one out.</p>
| André Nicolas | 6,312 | <p>It seems likely that you are expected to use the "guess and check" procedure. </p>
<p>The product is $144$, so there are not too many <strong>integer</strong> possibilities for $a$, $b$, $c$.
Without loss of generality you may assume $a \ge b\ge c$. Also, the sum of squares condition tells you that $a \le 12$.</p>
<p>Why <em>integers</em>? Because the problem is meant to be solved easily by guess and check. </p>
|
68,618 | <p>My calculus teacher assigns us online homework to do. He never went over any question that looks like this (he in fact said we shouldn't be concerned with this):<img src="https://i.stack.imgur.com/N31ML.png" alt="What is this?"></p>
<p>Yet, I need to answer this right to progress with my homework. It stinks because if I get it wrong, I lose points on my homework average.</p>
<p>So, could someone help explain to me what's going on here, and perhaps guide me to a point where I can try to figure out the solution myself? I'm not asking for a straight answer (although if thats what you want to provide, go for it [since I was told I don't <em>have</em> to know this stuff]), but this stuff really confuses me. Thank you for your help.</p>
<p>Oh, and in case you need it, here's the original prompt (the question I posted above is just a part of a series of questions that go along with this prompt):</p>
<p><img src="https://i.stack.imgur.com/cqXkn.png" alt="enter image description here"></p>
| André Nicolas | 6,312 | <p>Without loss of generality we may assume that $f(c)$ is positive. </p>
<p>Let $\epsilon =f(c)$. By the definition of continuity of $f$ at $c$, there is a $\delta>0$ such that if $|x-c|<\delta$ (and $a\lt c-\delta$, and $c+\delta \lt b$, to make sure we stay in our interval) then $|f(x)-f(c)|<\epsilon$.</p>
<p>Now if you have some experience with inequalities, you should be able to reach the conclusion.</p>
<p>If the "without loss of generality" is not persuasive, if $f(c)<0$, let $g(x)=-f(x)$, apply the above argument to $g(x)$, and see what this says about $f(x)$. </p>
<p>I did not use precisely the language of the question. But you should now be able to see enough of what is going on to be able to answer that question.</p>
<p><strong>Comment</strong>: It will be helpful to draw a picture while figuring out what the second paragraph is saying. </p>
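<p>To make the second paragraph concrete, here is a hypothetical numerical illustration in Python (using $f(x)=\cos x$, $c=0$, so $\epsilon=f(c)=1$, and $\delta=1.5<\pi/2$ happens to work):</p>

```python
import math

f = math.cos
c = 0.0
eps = f(c)     # epsilon = f(c) = 1, which is positive
delta = 1.5    # any delta < pi/2 works for this particular f and c

# Sample the open interval (c - delta, c + delta)
xs = [c - delta + k * (2 * delta) / 1000 for k in range(1, 1000)]

# |f(x) - f(c)| < eps on the interval ...
close = all(abs(f(x) - f(c)) < eps for x in xs)
# ... which forces f(x) to have the same (positive) sign as f(c)
same_sign = all(f(x) > 0 for x in xs)
```

<p>The inequality $|f(x)-f(c)|<f(c)$ unpacks to $0<f(x)<2f(c)$, which is exactly the sign conclusion the proof is after.</p>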
|
68,618 | <p>My calculus teacher assigns us online homework to do. He never went over any question that looks like this (he in fact said we shouldn't be concerned with this):<img src="https://i.stack.imgur.com/N31ML.png" alt="What is this?"></p>
<p>Yet, I need to answer this right to progress with my homework. It stinks because if I get it wrong, I lose points on my homework average.</p>
<p>So, could someone help explain to me what's going on here, and perhaps guide me to a point where I can try to figure out the solution myself? I'm not asking for a straight answer (although if thats what you want to provide, go for it [since I was told I don't <em>have</em> to know this stuff]), but this stuff really confuses me. Thank you for your help.</p>
<p>Oh, and in case you need it, here's the original prompt (the question I posted above is just a part of a series of questions that go along with this prompt):</p>
<p><img src="https://i.stack.imgur.com/cqXkn.png" alt="enter image description here"></p>
| hmakholm left over Monica | 14,366 | <p>I think the question is quite confusingly worded. It took me several minutes to figure out what it meant -- and it's not as if I don't know the subject matter well.</p>
<p>What must be going on is that you're supposed to imagine reading something like this in a proof:</p>
<blockquote>
<p>bla bla bla, and therefore we know that $f$ is continuous, and that $f(c)\ne 0$. We can then apply the definition of continuity with $\varepsilon = $ <code>_______</code> to find a $\delta$ such that $f(x)$ has the same sign as $f(c)$ for every $x\in(c-\delta,c+\delta)$. Thus, bla bla bla</p>
</blockquote>
<p>One of $|f(c)|$ and $|c|$ will make this into a valid argument if you fill it into the blank, and one will produce nonsense. Your task is to select the valid one.</p>
<p>In order to answer the question you need (1) to remember the definition of continuity, and (2) to be able to distinguish a nonsense argument from a valid one. The second of these abilities is often considered too advanced a skill to demand of pre-university students these days (they're supposed to be satisfied with accepting the teacher's judgement in each case), which is probably why your teacher is not allowed to say you <em>must</em> be able to do it...</p>
|
1,024,794 | <p>I have this equation: $x+7-(\frac{5x}8 + 10) = 3 $</p>
<p>I've used step-by-step calculators online but I simply don't understand it. Here is how I've tried to solve the problem: </p>
<p>$$x+7-\left(\frac{5x}8+10\right) = x + 7 - \frac{5x}8 - 10 = 3$$</p>
<p>$$x + 7 - \frac{5x}8 - 10 + 10 = 3 + 10$$</p>
<p>$$x + 7 - 7 - \frac{5x}8 = 13 - 7$$</p>
<p>$$x - \frac{5x}8 = 6$$</p>
<p>$$x - 8\times\frac{5x}8 = 6\times8$$</p>
<p>$$x - 5x = 48$$</p>
<p>$$\frac{-4x}{-4} = \frac{48}4$$</p>
<p>$$x = -12$$</p>
<p>Now obviously, it's wrong. The right answer is $16$, but I don't know how to get to that answer. Therefore, I'm extremely thankful if someone truly can show what I need to do, and why I need to do it, because I'm completely lost right now. Thanks.</p>
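<p>Since the equation is linear, the root can be checked mechanically; here is a hypothetical Python sketch (not part of the original question) using exact rational arithmetic:</p>

```python
from fractions import Fraction

def f(x):
    # Left side minus right side of: x + 7 - (5x/8 + 10) = 3
    return x + 7 - (Fraction(5, 8) * x + 10) - 3

# f is linear, f(x) = slope*x + f(0), so its unique root is -f(0)/slope
slope = f(1) - f(0)   # 3/8
root = -f(0) / slope  # expected: 16
```

<p>This confirms $x=16$. The misstep in the attempt above is the line $x - 8\times\frac{5x}8 = 6\times8$: multiplying by $8$ must hit <em>every</em> term, giving $8x-5x=48$, i.e. $3x=48$, i.e. $x=16$.</p>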
| Arian | 172,588 | <p>Consider the following propositions:
$$P(z): "\xi(k(z))=k(\xi(z))=0"$$
$$Q(z): "k(z)=\zeta(z)=0"$$
$$R(z): "\text{RH is false}"$$
So your problem can be stated as asking whether the following biconditional is true:
$$(P(z)\to Q(z))\leftrightarrow R(z)$$
Now the above biconditional statement is true if and only if $R(z)\equiv T$ and $P(z)\to Q(z)\equiv T$, or $R(z)\equiv F$ and $P(z)\to Q(z)\equiv F$, where $T$ and $F$ stand for "true" and "false" respectively.</p>
<p>First take $R(z)\equiv T$ then there exists some $z_0$ with $\Re(z)\neq 1/2$ such that $\zeta(z_0)=0$. It is well known that zeros of $\xi(z)$ are precisely the non-trivial zeros of zeta function $\zeta(z)$. Therefore $\xi(z_0)=0$. By your definition $k(z)=\xi(1-z)$ and using the symmetry $\xi(z)=\xi(1-z)$ you just get $\xi(k(z))=k(\xi(z))=\xi(\xi(z))$. At $z_0$ we have $\xi(\xi(z_0))=\xi(0)=1/2$ a result you get from the functional definition of $\xi(z)$ without assuming any additional hypothesis. So $P(z)\equiv F$ and whatever the truth value of $Q(z)$ is one has $P(z)\to Q(z)\equiv T$. Therefore we have shown so far
\begin{equation}
R(z)\to (P(z)\to Q(z))
\end{equation}
To check the other direction, notice that $\xi(\xi(z))=0$ implies $\xi(z)=w_0$ where $w_0$ is a non-trivial zero of the zeta function on the critical strip. Since there is no non-trivial zero on the interval $(0,1)$, the statement $\xi(z)=\zeta(z)=w_0=0$ is false. Therefore the compound proposition $P(z)\to Q(z)$ is false. So for $P(z)\to Q(z)$ to be true you now necessarily need $P(z)\equiv F$, which also leaves $Q(z)$ free to take either truth value, because if $P(z)\equiv F$ then $P(z)\to Q(z)\equiv T$ whatever the truth value of $Q(z)$ is. But $P(z)\equiv F$ means
$$\xi(\xi(z))\neq 0$$
implying that $\xi(z)\neq w_0$ where $w_0$ is any non-trivial zero of zeta function. However this condition does not pin down any thing on the validity of Riemann hypothesis. Therefore
$$(P(z)\to Q(z))\to R(z)$$
is not assured to be true. As a result your biconditional statement is not assured to hold true. But of course you have that the following is true:
\begin{equation}
R(z)\to (P(z)\to Q(z))
\end{equation}</p>
|
70,429 | <p>For a $n$-dim smooth projective complex algebraic variety $X$, we can form the complex line bundle $\Omega^n$ of holomorphic $n$-form on $X$. Let $K_X$ be the divisor class of $\Omega^n$, then $K_X$ is called the canonical class of $X$.</p>
<p><strong>Question</strong>: Is homology class of $K_X$ in $H_{2n-2}(X)$ a topological invariant? If it's true, please tell me the idea of proof or some references. If not, please give me the counterexamples.</p>
| Tim Perutz | 2,356 | <p>This answer is about the case of complex surfaces $X$ and their diffeomorphisms (all my diffeos are assumed to be orientation-preserving!). </p>
<p><b>(1) Examples of self-diffeomorphisms that reverse the sign of the canonical class.</b> </p>
<p>Take $X=\mathbb{C}P^1\times \mathbb{C}P^1$. Let $\tau$ be reflection in the equator of $S^2=\mathbb{C}P^1$. Then $\tau \times \tau$ preserves orientation and acts as $-I$ on $H^2(X)$. It therefore sends $K_X$ to $-K_X$.</p>
<p>One can also realise the automorphism $-I$ of $H^2(X)$ by a diffeomorphism when $X$ is the blow-up of the projective plane at $k$ points, $k = 2,3,\dots,9$. This follows from a result of C.T.C. Wall from</p>
<p><i>Diffeomorphisms of 4-manifolds</i>, J. London Math. Soc. 39 (1964) 131–140, MR0163323</p>
<p>Wall says that if $N$ is a simply connected, closed oriented 4-manifold with $b_2(N)<9$, and $X$ is the connected sum of $N$ with $S^2 \times S^2$, then all automorphisms of the intersection form of $X$ are realised by diffeos. To apply this, recall that the 1-point blow-up of $\mathbb{C}P^1\times \mathbb{C}P^1$ is the 2-point blow up of the projective plane. (Wall's strategy, by the way, is to factor the automorphism into reflections along hyperplanes, and to realise those.)</p>
<p><b>(2) Results from Seiberg-Witten theory.</b> </p>
<p>These results tie complex geometry amazingly closely to differential topology. They say that the unsigned pair $\pm K_X$ is invariant under diffeomorphisms (Witten <a href="http://arxiv.org/abs/hep-th/9411102">http://arxiv.org/abs/hep-th/9411102</a> and others); so too is the Kodaira dimension; so too are the plurigenera (Friedman-Morgan <a href="http://arxiv.org/abs/alg-geom/9502026">http://arxiv.org/abs/alg-geom/9502026</a>). </p>
<p>In Kodaira dimension $<2$, one can take this further and prove that oriented-diffeomorphic surfaces are actually deformation-equivalent (to be safe, let me specify the simply connected case). But that's <i>not</i> the explanation in general: there are pairs of simply connected general-type surfaces that are diffeomorphic (by diffeos preserving the canonical class), which are not deformation-equivalent (Catanese-Wajnryb <a href="http://arxiv.org/abs/math/0405299">http://arxiv.org/abs/math/0405299</a>).</p>
<p><b>(3) How it happens.</b></p>
<p>The Seiberg-Witten invariant (for an oriented 4-manifold with $b^+(X)>1$) is a map
$$SW: Spin^c(X)\to\mathbb{Z}$$
defined on the $H^2(X)$-torsor of $Spin^c$-structures. The overall sign is equivalent to a "homology orientation". It's natural under diffeomorphisms. It's also invariant under "conjugation" $\mathfrak{s}\mapsto \bar{\mathfrak{s}}$ of $Spin^c$-structures.</p>
<p>For algebraic surfaces, there's a canonical spin-c structure $\mathfrak{s}$, so $Spin^c(X)$ is identified with $H^2(X)$. Witten (<a href="http://arxiv.org/abs/hep-th/9411102">http://arxiv.org/abs/hep-th/9411102</a>) observed that the elliptic equations that define $SW$ simplify drastically in the algebraic case; in evaluating $SW$ on a cohomology class represented by a complex line bundle $L\to X$, you're led to consider a moduli space of pairs consisting of a holomorphic structure on the line bundle and a holomorphic section of it, with an obstruction bundle on the moduli space. Conjugation-invariance becomes Serre duality. </p>
<p>For general type surfaces, $\pm SW(\mathfrak{s}) = \pm SW(\bar{\mathfrak{s}}) = \pm 1$; all other spin-c structures have vanishing invariant. Since $c_1(\mathfrak{s})=-c_1(\bar{\mathfrak{s}})=-K$, one deduces diffeomorphism-invariance of $\pm K$. For lower Kodaira dimension, a more complicated analysis is needed.</p>
|
2,252,206 | <p>This question is related to <a href="https://math.stackexchange.com/questions/1574196/units-of-group-ring-mathbbqg-when-g-is-infinite-and-cyclic">this</a> one, in that I am asking about the same problem, but not necessarily about the same aspect of the problem.</p>
<p>I need to identify all units of the group ring $\mathbb{Q}(G)$ where $G$ is an infinite cyclic group.</p>
<p>Now, as I understand it, if $R$ is a ring and $G$ is a group, then if we consider the set of all formal sums </p>
<p>$$r_{1}g_{1}+r_{2}g_{2}+\cdots + r_{k}g_{k},$$ </p>
<p>$r_{i} \in R$, $g_{i} \in G$, where we allow the empty sum to play the part of the zero element $0$, </p>
<p>then if we consider two formal sums to be equivalent if they have the same reduced form, the group ring $R(G)$ refers to the set of equivalence of classes of such sums with respect to this equivalence relation.</p>
<p>In this case, then, since $\mathbb{Q} = R$ and $G$ is some infinite cyclic group, say $\langle x \rangle$ (although, if it is an infinite cyclic group, couldn't we say that it is isomorphic to $\mathbb{Z}$?), so our sums look like </p>
<p>$$q_{1}x_{1} + q_{2}x_{2} + \cdots + q_{k}x_{k}$$</p>
<p>for some rationals $q_{i}$ and elements of $\langle x \rangle$, $x_{i}$. </p>
<p>Now, the units of this group ring are the nonzero, invertible elements, and that various relationships exist among units, principal ideals, and associate elements. I am not sure how to apply any of this information to this situation, though, as I am relatively inexperienced with working with group rings.</p>
<p>Moreover, I did find the answered question I linked to above, but this answer uses some terminology that I am unfamiliar with: for example, I do not know what it means to be a "localization of $\mathbb{Q}[x]$", and I only know a little bit about Laurent polynomials from Complex Analysis, which I'm assuming is where he is getting the negative powers from in his answer.</p>
<p>Now, among my questions is: <strong>1. How do you know that $\mathbb{Q}(G)$ is isomorphic to $\mathbb{Q}[x, x^{-1}]$?</strong> That it is seems weird to me, since $G$ here is supposed to be cyclic, and he seems to be saying that a group ring on a cyclic group is isomorphic to a group ring on a group with two generators, but perhaps my confusion just stems from my inexperience with group rings? If someone could please explain this to me, I would be forever grateful. Also, <strong>2. what is the actual isomorphism used or how do you show that the two group rings are isomorphic? 3. How does this tell us what the units are?</strong></p>
<p>I'm extremely confused and I thank you very much in advance for your time, help, and patience!</p>
| Community | -1 | <p>The key word is <a href="https://en.wikipedia.org/wiki/Polarization_identity" rel="nofollow noreferrer">polarization identity</a>. </p>
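<p>For reference (an addition spelling out the keyword, which the linked article states), the real polarization identity recovers an inner product from its norm:</p>
<p>$$\langle x,y\rangle=\frac{1}{4}\left(\lVert x+y\rVert^{2}-\lVert x-y\rVert^{2}\right)$$</p>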
|
201,820 | <p>Suppose we have in <code>~/time-data/time-data.org</code> the following data:</p>
<pre><code>* Parent1
:LOGBOOK:
CLOCK: [2019-07-09 Tue 00:00]--[2019-07-09 Tue 00:20] => 0:20
:END:
** Child1
:LOGBOOK:
CLOCK: [2019-07-10 Wed 00:02]--[2019-07-10 Wed 00:40] => 0:38
:END:
** Child2
:LOGBOOK:
CLOCK: [2019-07-11 Thu 00:02]--[2019-07-11 Thu 06:40] => 0:38
:END:
</code></pre>
<p>We then can use <a href="https://github.com/atheriel/org-clock-csv" rel="nofollow noreferrer">atheriel/org-clock-csv</a> to to pull this data via</p>
<pre><code>(org-clock-csv-to-file "~/time-data/time-data.csv" '("~/time-data/time-data.org"))
</code></pre>
<p>which populates <code>time-data.csv</code> with</p>
<pre><code>task,parents,category,start,end,effort,ishabit,tags
Parent1,,,2019-07-09 00:00,2019-07-09 00:20,,,
Child1,Parent1,,2019-07-10 00:02,2019-07-10 00:40,,,
Child2,Parent1,,2019-07-11 00:02,2019-07-11 06:40,,,
</code></pre>
<p>so that in Mathematica we can run:</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/43DSa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/43DSa.png" alt="enter image description here"></a></p>
</blockquote>
<p><strong>Question:</strong> How do we get a <code>DateListPlot</code> out of this that shows, i.e., hours spent per day?</p>
<hr>
<p><strong>EDIT:</strong> I fed everyone's answers through my actual data (which spans several months) and <a href="https://www.wolframcloud.com/obj/george.w.singer/Published/time-data" rel="nofollow noreferrer">published them here</a>. I get lots of errors and (mostly) unparsable graphs. I think these answers are getting me closer to something usable though!</p>
| M.R. | 403 | <p>If Mathematica was perfect, <code>DateHistogram[..., "Day", "Hour"]</code> would work, making what you want a one-liner. I believe that a <code>DateInterval</code> function might be coming in the next version (12.1) which would presumably work with <code>DateHistogram</code> and <code>TimelinePlot</code>.</p>
<p>All that aside, let's see how to chart a temporal histogram across all of your tasks. First, let's import your dataset:</p>
<pre><code>csv = "task,parents,category,start,end,effort,ishabit,tags
Parent1,,,2019-07-07 00:00,2019-07-07 00:20,,,
Child1,Parent1,,2019-07-8 00:02,2019-07-8 00:40,,,
Child2,Parent1,,2019-07-9 00:02,2019-07-9 06:40,,,
Parent2,,,2019-07-08 00:00,2019-07-08 00:20,,,
Child21,Parent2,,2019-07-9 00:02,2019-07-9 00:40,,,
Child22,Parent2,,2019-07-10 00:02,2019-07-10 06:40,,,
Parent3,,,2019-07-09 00:00,2019-07-09 00:20,,,
Child31,Parent3,,2019-07-10 00:02,2019-07-10 00:40,,,
Child32,Parent3,,2019-07-11 00:02,2019-07-11 06:40,,,";
ds = ImportString[csv, {"CSV", "Dataset"}, HeaderLines -> 1];
</code></pre>
<p><a href="https://i.stack.imgur.com/k1Qso.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k1Qso.png" alt="enter image description here"></a></p>
<p>Now with a single <code>GroupBy[]</code> command, we change the raw data into the form we need:</p>
<pre><code>data = GroupBy[ds[All, <|"p" -> If[#parents == "", #task, #parents],
"d" -> (DateObject /@ {#"start", #"end"})|> &], First -> Last,
Map[{CurrentDate[#[[1]], "Hour"], DateDifference[#[[1]], #[[2]], "Hour"]} &]]
</code></pre>
<p><a href="https://i.stack.imgur.com/ypSNy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ypSNy.png" alt="enter image description here"></a></p>
<p>and then visualize it by simply calling:</p>
<pre><code>Row @ {DateListPlot[data, Filling -> Axis, ImageSize -> Medium, PlotLegends -> None],
StackedDateListPlot[data, PlotTheme -> "Detailed", ImageSize -> Medium]}
</code></pre>
<p><a href="https://i.stack.imgur.com/4MNLb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4MNLb.png" alt="enter image description here"></a></p>
<p>Another (non-dataset based) way to do this is as follows:</p>
<pre><code>ds = ImportString[csv, "CSV"];dates = Map[DateObject, ds[[2 ;;, {4, 5}]], {-1}];
dr = Flatten[DateRange[##, "Minute"] & @@@ dates];
DateHistogram[dr, "Day", DateReduction -> "Week", FrameLabel -> {None, "Minutes"}, Frame -> True,
LabelingFunction -> (Column@{Quantity[#/60., "Hours"], Quantity[#, "Seconds"]} &)]
</code></pre>
<p><a href="https://i.stack.imgur.com/BefEZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BefEZ.png" alt="enter image description here"></a></p>
<p>Or we can discretize by "Hours":</p>
<pre><code>dr = DeleteDuplicates[DateObject[#, "Hour"] & /@ Flatten[DateRange[##, "Hours"] & @@@ dates]];
DateHistogram[dr, "Day", DateReduction -> "Week", FrameLabel -> {None, "Hours"}, Frame -> True]
</code></pre>
<p><a href="https://i.stack.imgur.com/BUDAQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BUDAQ.png" alt="enter image description here"></a></p>
<p>Yet another way to analyze it (with a different coding style to boot) is to read it as a graph and plot a weighted tree-map. To further break your data down by parent task, try this:</p>
<pre><code>edges = Normal[(Reverse /@ Rule @@@ ds[[2 ;;, {1, 2}]]) /. "" -> "Root"];
TreePlot[edges, Top, "Root", VertexLabels -> "Name", DirectedEdges -> True]
</code></pre>
<p><a href="https://i.stack.imgur.com/ZfAVx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZfAVx.png" alt="enter image description here"></a></p>
<pre><code>taskParent[t_] := With[{parent = FirstCase[edges, Verbatim[Rule][p_, t] :> p]}, If[parent == "Root", t, parent]];
dr = DateRange[##, "Minute"] & @@@ dates;
groups = Flatten /@ GroupBy[Thread[{taskParent /@ ds[[2 ;;, 1]], dr}], First -> Last];
DateHistogram[Values[groups], "Day", ChartLayout -> "Stacked", ChartLegends -> Keys[groups], DateReduction -> "Week", FrameLabel -> {None, "Hours"}, PlotTheme -> "Marketing"]
</code></pre>
<p><a href="https://i.stack.imgur.com/YpEFk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YpEFk.png" alt="enter image description here"></a></p>
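<p>As a cross-check outside Mathematica, the same per-day aggregation can be sketched with Python's standard library (hypothetical code, assuming — as in the sample data — that each clock entry starts and ends on the same day):</p>

```python
import csv
from datetime import datetime
from io import StringIO

CSV_TEXT = """task,parents,category,start,end,effort,ishabit,tags
Parent1,,,2019-07-09 00:00,2019-07-09 00:20,,,
Child1,Parent1,,2019-07-10 00:02,2019-07-10 00:40,,,
Child2,Parent1,,2019-07-11 00:02,2019-07-11 06:40,,,
"""

FMT = "%Y-%m-%d %H:%M"
minutes_per_day = {}
for row in csv.DictReader(StringIO(CSV_TEXT)):
    start = datetime.strptime(row["start"], FMT)
    end = datetime.strptime(row["end"], FMT)
    day = start.date().isoformat()
    # Accumulate the entry's duration (in minutes) under its start date
    minutes_per_day[day] = minutes_per_day.get(day, 0) + (end - start).total_seconds() / 60
```

<p>The resulting <code>{day: minutes}</code> mapping is exactly the series any per-day date plot needs; entries spanning midnight would have to be split across days first.</p>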
|
244,769 | <p>I am DMing a game of DnD and one of my players is really into fear effects, which is cool, but the effect of having monsters suffer from the "panicked" condition gets tedious to render via dice rolls.</p>
<p>The rule is, on the battle grid the monster will run for 1 square in a random direction, then from that new position it will move into another random adjacent square. Repeat this process until it has moved its full move speed.</p>
<pre><code>movespeed = 6;
points = Point[
NestList[{(#[[1]] + RandomChoice[{-1, 0, 1}]), #[[2]] +
RandomChoice[{-1, 0, 1}]} &, {11/2, 11/2}, movespeed]];
Graphics[{PointSize[Large], points},
GridLines -> {Range[0, 11], Range[0, 11]},
PlotRange -> {{0, 11}, {0, 11}}, Axes -> True]
</code></pre>
<p>I have written some code that shows me the squares the monster moves through, but I would love to replace the little black dots with numbers like "1", "2",...,"6" so that I know the path it actually took.</p>
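<p>For what it's worth, the panicked-movement step itself is easy to sketch language-agnostically; a hypothetical Python version follows (the Mathematica analogue of the labeling idea would presumably replace the <code>Point</code> primitive with <code>Text[i, pos]</code> labels at each visited square):</p>

```python
import random

def panicked_path(start, movespeed):
    """Random walk: each step changes each coordinate by -1, 0, or +1."""
    path = [start]
    for _ in range(movespeed):
        x, y = path[-1]
        path.append((x + random.choice((-1, 0, 1)),
                     y + random.choice((-1, 0, 1))))
    return path

path = panicked_path((5, 5), 6)
# Pair each visited square with its step number, ready to draw as labels
labels = list(enumerate(path))
```

<p>Note this mirrors the code above, which also permits a "step" of $(0,0)$ (the monster stands still) and can walk off the grid; clamping or re-rolling would be needed to enforce the tabletop rule exactly.</p>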
| A.G. | 7,060 | <p>Here is a solution that uses <code>AffineTransform</code> and <code>Solve</code>s for coefficients. You can specify a scale.</p>
<pre><code>{a, b, c, p} = {{0.2, 0.8}, {0.1, 0.15}, {0.8, 0.25}, {0.6, 0.7}};
(* Specify scale here; for example, u and v's lengths are 1/2 *)
scale = 1/2;
(* Define the parameters *)
ClearAll[u0, ux, uy, v0, vx, vy, sol, UV];
m := {{ux, uy}, {vx, vy}};
V := {u0, v0};
UV = AffineTransform[{m, V}];
(* Find values of the parameters *)
sol = FindInstance[
(* Call w the origin of the uv coordinates *)
(* a and b are on the wv axis, c is on the wu axis *)
UV[a][[1]] == 0 && UV[b][[1]] == 0 && UV[c][[2]] == 0 &&
(* normalize / scale *)
ux^2 + uy^2 == 1/scale^2 &&
vx^2 + vy^2 == 1/scale^2 &&
(* u and v are orthogonal: scalar product is 0 *)
First@m . Last@m == 0,
{u0, ux, uy, v0, vx, vy}];
(* assign the results to parameters *)
{u0, ux, uy, v0, vx, vy} = {u0, ux, uy, v0, vx, vy} /. First@sol;
TableForm[{UV[{x, y}], UV[a], UV[b], UV[c], UV[p]},
TableHeadings -> {{"Formula", "a", "b", "c", "p"}, {"u", "v"}}]
</code></pre>
<p><a href="https://i.stack.imgur.com/shFPv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/shFPv.png" alt="enter image description here" /></a></p>
<pre><code>(* Inverse transformation *)
XY = InverseFunction[UV];
w = XY[{0, 0}]; (* origin of the uv axes *)
TableForm[{XY[{u, v}], w},
TableHeadings -> {{"Formula", "w"}, {"x", "y"}}]
"Just checking -- the following should = {x,y}:"
XY[UV[{x, y}]] // Simplify
</code></pre>
<p><a href="https://i.stack.imgur.com/2A2zM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2A2zM.png" alt="enter image description here" /></a></p>
<pre><code>(* sketch of the system *)
Graphics[{
AbsolutePointSize[5],
{Point[{a, b, c}]},
{Blue, Point[p]},
{Green, AbsolutePointSize[10], Point[w]},
(* Extend both u and v axes a bit *)
{Black, Dashed,
Line[{a - (b - a), b + (b - a)}]},
{Black, Dashed, Line[{w - (c - w), c + (c - w)}]},
{Black, Arrow[{w, XY[{1, 0}]}]},
{Black, Arrow[{w, XY[{0, 1}]}]},
{Blue, Point[{0, 0}]},
{Blue, Arrow[{{0, 0}, {1, 0}}]},
{Blue, Arrow[{{0, 0}, {0, 1}}]}
}, Axes -> True, PlotRange -> {{-0.1, 1.5}, {-0.1, 1.5}},
AspectRatio -> 1, Frame -> True]
</code></pre>
<p><a href="https://i.stack.imgur.com/y0kbS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y0kbS.png" alt="enter image description here" /></a></p>
|
3,858,414 | <p>I need help solving this task; if anyone has had a similar problem, their insight would help me.</p>
<p>The task is:</p>
<p>Calculate using the rule <span class="math-container">$\lim\limits_{x\to \infty}\left(1+\frac{1}{x}\right)^x=\large e $</span>:</p>
<p><span class="math-container">$\lim_{x\to0}\left(\frac{1+\mathrm{tg}\: x}{1+\sin x}\right)^{\Large\frac{1}{\sin x}}
$</span></p>
<p>I tried this:</p>
<p><span class="math-container">$ \lim_{x\to0}\left(\frac{1+\mathrm{tg}\: x}{1+\sin x}\right)^{\Large\frac{1}{\sin x}}=\lim_{x\to0}\left(\frac{1+\frac{\sin x}{\cos x}}{1+\sin x}\right)^{\Large\frac{1}{\sin x}}=\lim_{x\to0}\left(\frac{\sin x+\cos x}{\cos x\cdot(1+\sin x)}\right)^{\Large\frac{1}{\sin x}}
$</span></p>
<p>But I do not know how to solve this task.
Thanks in advance!</p>
| fleablood | 280,126 | <p>Let <span class="math-container">$x,y\in A$</span>.</p>
<p>Either <span class="math-container">$x R y$</span> of <span class="math-container">$x \not R y$</span>.</p>
<p>Case 1: <span class="math-container">$x R y$</span>.</p>
<p><span class="math-container">$[x]=\{z\in A| z R x\}$</span>, <span class="math-container">$[y]=\{z\in A|z R y\}$</span>.</p>
<p>If <span class="math-container">$m\in [x]$</span> then <span class="math-container">$m Rx$</span> and as <span class="math-container">$R$</span> is transitive and <span class="math-container">$xR y$</span> then <span class="math-container">$m Ry$</span> so <span class="math-container">$m \in [y]$</span> so <span class="math-container">$[x]\subset [y]$</span> but if <span class="math-container">$n \in[y]$</span> then <span class="math-container">$n Ry$</span> but as <span class="math-container">$R$</span> is symmetric and <span class="math-container">$xRy$</span> the <span class="math-container">$yR x$</span> and as <span class="math-container">$R$</span> is transitive <span class="math-container">$nRx$</span> so <span class="math-container">$n\in [x]$</span> and <span class="math-container">$[y]\subset [x]$</span> so <span class="math-container">$[x]=[y]$</span>.</p>
<p>Case 2: <span class="math-container">$x \not R y$</span>.</p>
<p>If <span class="math-container">$n \in [x]\cap [y]$</span> then <span class="math-container">$n Rx$</span> and <span class="math-container">$nR y$</span>. As <span class="math-container">$R$</span> is symmetric than <span class="math-container">$xRn$</span> and <span class="math-container">$nRy$</span>. As <span class="math-container">$R$</span> is transitive <span class="math-container">$x R y$</span> which is a contradiction. So <span class="math-container">$[x]\cap [y] = \emptyset$</span>.</p>
<p>So <span class="math-container">$x Ry \implies [x]=[y]$</span></p>
<p>And <span class="math-container">$x \not R y \implies [x]\cap [y] = \emptyset$</span>.</p>
<p>Those are the only two options.</p>
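<p>The two cases together say that the classes $[x]$ partition $A$; a hypothetical Python illustration using congruence mod $3$ (which is reflexive, symmetric, and transitive) on a small set:</p>

```python
def equivalence_classes(elements, related):
    """Group `elements` into the classes of the equivalence relation `related`."""
    classes = []
    for x in elements:
        for cls in classes:
            if related(x, next(iter(cls))):  # Case 1: x R y, so x joins [y]
                cls.add(x)
                break
        else:                                # Case 2: x starts a new class
            classes.append({x})
    return classes

A = range(12)
classes = equivalence_classes(A, lambda x, y: (x - y) % 3 == 0)
```

<p>The resulting classes are pairwise disjoint and their union is all of $A$, exactly as the two cases predict.</p>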
|
1,634,741 | <p>$22+22=4444$</p>
<p>$43+46=618191$</p>
<p>$77+77=?$</p>
<p>What should come in place of $?$</p>
<p>I cannot see any logic in $43+46=618191$. Is there any?</p>
| stackoverflowuser2010 | 9,177 | <p>The problem asks for a closed-form solution to:</p>
<p><span class="math-container">$$\sum_{i=4}^{N} 5^i = 5^4 + 5^5 + ... + 5^N$$</span></p>
<p>The OP's original intuition was correct:
<span class="math-container">$$\sum_{i=4}^{N} 5^i = \sum_{i=0}^{N} 5^i - \sum_{i=0}^{3} 5^i$$</span></p>
<p>More generally, for summing a geometric series starting at an arbitrary index <span class="math-container">$m$</span>:
<span class="math-container">$$
\sum_{i=m}^{N} r^i = \sum_{i=0}^{N} r^i - \sum_{i=0}^{m-1} r^i \\
$$</span></p>
<p>To get a closed form for the above expression, let's start with the closed-form equation for a geometric series:
<span class="math-container">$$
\sum_{i=0}^{N} r^i = \frac{r^{N+1}-1}{r-1}
$$</span></p>
<p>So:</p>
<p><span class="math-container">$$
\begin{align*}
\sum_{i=m}^{N} r^i &= \sum_{i=0}^{N} r^i - \sum_{i=0}^{m-1} r^i \\
&= (\frac{r^{N+1}-1}{r-1}) - (\frac{r^{m-1+1}-1}{r-1}) \\
&= \frac{r^{N+1} - r^m}{r-1}
\end{align*}
$$</span></p>
<p>An alternative and equivalent form can be found if we multiply the top and bottom by <span class="math-container">$-1$</span>:
<span class="math-container">$$
\sum_{i=m}^{N} r^i = \frac{r^m - r^{N+1}}{1-r}
$$</span></p>
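<p>The closed form is easy to verify exactly for the case in the original question ($r=5$, $m=4$); a hypothetical Python check with exact arithmetic:</p>

```python
from fractions import Fraction

def geometric_sum(r, m, N):
    """Closed form for sum_{i=m}^{N} r^i, valid for r != 1."""
    r = Fraction(r)
    return (r**m - r**(N + 1)) / (1 - r)

closed = geometric_sum(5, 4, 10)
brute = sum(5**i for i in range(4, 11))  # direct summation for comparison
```

<p>The two agree for every starting index and endpoint, not just this instance.</p>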
|
4,052,760 | <blockquote>
<p>Prove that <span class="math-container">$\int\limits^{1}_{0} \sqrt{x^2+x}\,\mathrm{d}x < 1$</span></p>
</blockquote>
<p>I'm guessing it would not be too difficult to solve by just calculating the integral, but I'm wondering if there is any other way to prove this, like comparing it with an easy-to-calculate integral. I tried comparing it with <span class="math-container">$\displaystyle\int\limits^{1}_{0} \sqrt{x^2+1}\,\mathrm{d}x$</span>, but this greater than <span class="math-container">$1$</span>, so I'm all out of ideas.</p>
| saulspatz | 235,128 | <p>HINT:</p>
<p><span class="math-container">$x^2+x<x^2+x+\frac14=\left(x+\frac12\right)^2$</span></p>
|
4,052,760 | <blockquote>
<p>Prove that <span class="math-container">$\int\limits^{1}_{0} \sqrt{x^2+x}\,\mathrm{d}x < 1$</span></p>
</blockquote>
<p>I'm guessing it would not be too difficult to solve by just calculating the integral, but I'm wondering if there is any other way to prove this, like comparing it with an easy-to-calculate integral. I tried comparing it with <span class="math-container">$\displaystyle\int\limits^{1}_{0} \sqrt{x^2+1}\,\mathrm{d}x$</span>, but this greater than <span class="math-container">$1$</span>, so I'm all out of ideas.</p>
| Unit | 196,668 | <p>Well, you could use the AM-GM inequality:
<span class="math-container">$$\sqrt{x^2 + x} = \sqrt{x(x+1)} < \frac{x + (x+1)}{2} = x + \frac{1}{2}$$</span>
and then
<span class="math-container">$$\int_0^1 \sqrt{x^2 + x} \, dx < \int_0^1 x + \frac{1}{2} \, dx = \frac{1}{2} + \frac{1}{2} = 1.$$</span></p>
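<p>A quick numerical sanity check (hypothetical Python, midpoint rule) confirms both the pointwise AM-GM bound and the resulting integral bound — the integral comes out near $0.84$, comfortably below $1$:</p>

```python
import math

n = 10000
h = 1.0 / n
mids = [(k + 0.5) * h for k in range(n)]

integrand = [math.sqrt(x * x + x) for x in mids]  # sqrt(x^2 + x)
bound = [x + 0.5 for x in mids]                   # AM-GM upper bound x + 1/2

integral = h * sum(integrand)        # ~ 0.84
bound_integral = h * sum(bound)      # = 1 (midpoint rule is exact for linear f)
```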
|
1,476,456 | <p>How many positive integers less than $1000$ have digit sum $11$ and are divisible by $11$?</p>
<p>There are $\lfloor 1000/11 \rfloor = 90$ numbers less than $1000$ divisible by $11$.</p>
<p>$N = 100a + 10b + c$ where $a + b + c = 11$ and $0 \le a, b, c \le 9$</p>
<p>I got $\binom{13}{2} - 9 = 69$ solutions.</p>
| JMoravitz | 179,297 | <p>Digitsum is related to the modulo 9 operation. A weakening of the conditions given is that you are counting how many $0\leq n\leq 1000$ satisfy the coungruencies:</p>
<p>$\begin{array}{} n\equiv 2\pmod{9}\\
n\equiv 0\pmod{11}\end{array}$</p>
<p>By the <a href="https://en.wikipedia.org/wiki/Chinese_remainder_theorem" rel="nofollow">chinese remainder theorem</a>, we get that</p>
<p>$n\equiv 11\pmod{99}$</p>
<p>So, we can look at the possible solutions and trim the ones that don't meet the stronger requirement that the digit sum be $11$ (as opposed to $2$ or $20$ or $29$ or $37$)</p>
<p>We have the list then $\{11,110,209,308,407,506,605,704,803,902\}$</p>
<p>All but the first two have digitsum 11 (whereas the first two have only digit sum equaling 2).</p>
<p>The answer is then $8$.</p>
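<p>A brute-force check (hypothetical Python) confirms both the CRT reduction and the final count:</p>

```python
# Numbers below 1000 that are divisible by 11 with digit sum 11
hits = [n for n in range(1, 1000)
        if n % 11 == 0 and sum(map(int, str(n))) == 11]

# The CRT candidates: n == 11 (mod 99)
crt = [n for n in range(1, 1000) if n % 99 == 11]
```

<p>All of the hits appear among the CRT candidates, and exactly the first two candidates (digit sum $2$) are trimmed.</p>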
|
124,660 | <p>I'm solving a fairly simple equation :</p>
<pre><code>w[p1_, p2_, xT_] :=
94.8*cv*p1*y[(p1 - p2)/p1, xT]*Sqrt[(p1 - p2)/p1*mw/t1];
y[r_, xT_] := 1 - (1.4 r)/(3 xT*γ) /. γ -> 1.28;
sol = NSolve[{w[p1, p2, 0.66] == 30, p2 == 1.07}, {p1, p2}] /. {cv ->
1.77, t1 -> 318, mw -> 38};
</code></pre>
<p>Mathematica has no problem solving this case. However, when I change the y function to:</p>
<pre><code>y[r_, xT_] :=
Max[0.667, 1 - (1.4 r)/(3 xT*γ)] /. γ -> 1.28
</code></pre>
<p>Then the calculation does not seem to converge. Is there a reason why this should be difficult ? Can it be implemented differently ?</p>
| Feyre | 7,312 | <p>The discontinuity makes it so that</p>
<blockquote>
<p>NSolve was unable to solve the system with inexact coefficients</p>
</blockquote>
<p>What <code>NSolve[]</code> falls back to in this case is:</p>
<blockquote>
<p>The answer was obtained by solving a corresponding exact system and numericizing the result.</p>
</blockquote>
<p>However, that is impossible when the parameter rules are only applied to the result afterwards, so assign the parameter values before calling <code>NSolve</code>:</p>
<pre><code>cv = 1.77; t1 = 318; mw = 38; γ = 1.28;
w[p1_, p2_, xT_] :=
94.8*cv*p1*y[(p1 - p2)/p1, xT]*Sqrt[(p1 - p2)/p1*mw/t1];
y[r_, xT_] :=
Max[0.667, 1 - (1.4 r)/(3 xT*γ) /. γ -> 1.28];
sol = NSolve[{w[p1, p2, 0.66] == 30, p2 == 1.07}, {p1, p2}]
</code></pre>
<blockquote>
<p><code>{{p1 -> 1.32278, p2 -> 1.07}}</code></p>
</blockquote>
<p>This corresponds to the third solution obtained for the original problem.
Note that numericizing an exact system doesn't leave spurious imaginary rounding errors the way the original approach does; with the original approach, don't forget to <code>Chop[]</code>.</p>
|
232,777 | <p>Let $F$ be an ordered field.</p>
<p>What is the least ordinal $\alpha$ such that there is no order-embedding of $\alpha$ into any bounded interval of $F$?</p>
| JMP | 70,355 | <p>A union of $\dbinom kr$ sets with cardinality $r$ has cardinality $\ge k$, and there are two of them due to there being $\ge k$ elements in $X$.</p>
|
1,343,722 | <p>Note: I am looking at the sequence itself, not the sequence of partial sums.</p>
<p>Here's my attempt...</p>
<p>Setting up:</p>
<p>$$\left\{\frac{2(n+1)}{2(n+1)-1}\right\} - \left\{\frac{2n}{2n-1}\right\}$$</p>
<p>Simplifying:</p>
<p>$$\left\{\frac{2n+2}{2n+1}\right\} - \left\{\frac{2n}{2n-1}\right\}$$</p>
<p>$$\frac{(2n+2)(2n-1)-(2n)(2n+1)}{(2n+1)(2n-1)}$$</p>
<p>$$\frac{4n^2+2n-2-(4n^2+2n)}{(2n+1)(2n-1)}$$</p>
<p>$$\frac{-2}{4n^2-1}$$</p>
<p>How should I proceed from this point? I think I need to get rid of the ratio, so that I can judge whether or not it'll be positive or negative. Or can I just judge from this point that it will be a positive value? When I use the $\frac{a_{n+1}}{a_{n}}$ test, I get a result that the sequence is strictly decreasing.</p>
| MCT | 92,774 | <p>Hint:</p>
<p>$$\sum_{n=1}^{\infty} \frac{n^2}{2^n} = \sum_{i=1}^{\infty} (2i - 1) \sum_{j=i}^{\infty} \frac{1}{2^j}.$$</p>
<p>Start by using the geometric series formula on $\displaystyle \sum_{j=i}^{\infty} \frac{1}{2^j}$ to simplify the double series into a single series. Then you will have a series that looks like $\displaystyle \sum_{i=1}^{\infty} \frac{i}{2^i}$. Just as I broke your initial series with quadratic term $n^2$ into a double series with linear term $i$, you can break this series with linear term $i$ into a double series with constant term $c$.</p>
<p>In a clearer form:</p>
<p>\begin{align}
\frac{1}{2} + \frac{4}{4} + \frac{9}{8} + \frac{16}{16} + \dots
&= \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \dots \\
&\quad + \frac{3}{4} + \frac{3}{8} + \frac{3}{16} + \dots \\
&\qquad + \frac{5}{8} + \frac{5}{16} + \dots \\
&\qquad\quad + \frac{7}{16} + \dots
\end{align}</p>
<p>Notice that each of the sums is a geometric series, which can be evaluated easily.</p>
<hr>
|
1,343,722 | <p>Note: I am looking at the sequence itself, not the sequence of partial sums.</p>
<p>Here's my attempt...</p>
<p>Setting up:</p>
<p>$$\left\{\frac{2(n+1)}{2(n+1)-1}\right\} - \left\{\frac{2n}{2n-1}\right\}$$</p>
<p>Simplifying:</p>
<p>$$\left\{\frac{2n+2}{2n+1}\right\} - \left\{\frac{2n}{2n-1}\right\}$$</p>
<p>$$\frac{(2n+2)(2n-1)-(2n)(2n+1)}{(2n+1)(2n-1)}$$</p>
<p>$$\frac{4n^2+2n-2-(4n^2+2n)}{(2n+1)(2n-1)}$$</p>
<p>$$\frac{-2}{4n^2-1}$$</p>
<p>How should I proceed from this point? I think I need to get rid of the ratio, so that I can judge whether or not it'll be positive or negative. Or can I just judge from this point that it will be a positive value? When I use the $\frac{a_{n+1}}{a_{n}}$ test, I get a result that the sequence is strictly decreasing.</p>
| Tom-Tom | 116,182 | <p>For $x$ such that $|x|<1$ we have
$$f(x)=\sum_{n=0}^\infty x^n=\frac1{1-x}.$$
The derivative of $f$ is
$$f'(x)=\sum_{n=0}^\infty nx^{n-1}=\frac{1}{\left(1-x\right)^2},$$
such that
$$xf'(x)=\sum_{n=1}^\infty nx^n=\frac x{\left(1-x\right)^2}.$$
A second derivative gives
$$xf''(x)+f'(x)=\sum_{n=1}^\infty n^2x^{n-1}=\frac1{\left(1-x\right)^2}+\frac{2x}{\left(1-x\right)^3}$$
so you deduce that
$$\sum_{n=0}^\infty n^2x^n=\frac{x+x^2}{\left(1-x\right)^3}.$$
With $x=1/2$, this gives $\sum_n n^22^{-n}=6$. </p>
<p><strong>EDIT: Another equivalent solution</strong></p>
<p>Write the series for $-1<x<1$
$$f(x)=\sum_{n=0}^\infty x^n=\sum_{n=0}^\infty \mathrm e^{n\ln x}=\frac1{1-x}=\frac1{1-\mathrm e^{\ln x}}.$$
Thus, the series we look for is $$\frac{\mathrm d^2f}{\mathrm d(\ln x)^2}=\sum_{n=0}^\infty n^2\mathrm e^{n\ln x}=\frac{\mathrm d}{\mathrm d\ln x}\left(\frac{\mathrm e^{\ln x}}{\left(1-\mathrm e^{\ln x}\right)^2}\right)=\frac{2\mathrm e^{2\ln x}}{\left(1-\mathrm e^{\ln x}\right)^3}+\frac{\mathrm e^{\ln x}}{\left(1-\mathrm e^{\ln x}\right)^2}=\frac{x+x^2}{\left(1-x\right)^3}.$$
The result is obtained setting $x=1/2$ and we get
$$\sum_{n=0}^\infty \frac{n^2}{2^n}=6.$$</p>
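Both derivations give the value $6$; a quick numeric check (my own addition):

```python
# Partial sum of n^2 / 2^n; the tail beyond n = 200 is negligible.
s = sum(n * n / 2**n for n in range(1, 200))
assert abs(s - 6) < 1e-10
```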
|
81,728 | <p>The question is to compute or estimate the following probabilty.</p>
<p>Suppose that you have $N$ (e.g. $30$) tasks, each of which repeats every $t$ min (e.g. $30$ min) and lasts $l$ min (e.g. $5$ min). If the tasks started at uniformly random point in time yesterday, what is the probability that there is a time today at which at least $m$ (e.g. $10$) of the tasks run.</p>
| sbacallado | 19,438 | <p>Each block of $t$ minutes today will have the same pattern of tasks. Think of the middle point of a task as a uniform random variable on the unit circle. Then, $m$ tasks overlap if their corresponding random variables fall within a ball of radius $l/t$. This is a continuous generalization of the birthday problem. </p>
<p>If instead of the unit circle, the variable fell on the unit interval $[0,1]$, the case $m=2$ has a simple solution in Feller Vol II. (p. 42):</p>
<p>$$p(\text{At least }2\text{ tasks coincide}) =
\begin{cases}
1-\left[1-(N-1)\frac{l}{t}\right]^N & \text{if } \frac{l}{t} < \frac{1}{N-1} \\
1 & \text{if } \frac{l}{t} \geq \frac{1}{N-1}.
\end{cases}
$$</p>
<p>If $l/t$ is small, you can probably approximate the probability you are looking for with this probability. In this case, you could also find approximations using the discrete birthday problem (see Wolf Schwarz, Comparing Continuous and Discrete Birthday Coincidences: “Same-Day” versus “Within 24 Hours”, The American Statistician, 2010). This may allow you to treat the case $m>2$.</p>
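A Monte Carlo sketch (my own addition) of Feller's closed form for the case $m=2$ on the unit interval, with hypothetical parameters $N=5$ and $d=l/t=0.05$:

```python
import random

random.seed(0)
N, d = 5, 0.05          # N tasks, d = l/t (assumed values for illustration)
trials = 100_000
hits = 0
for _ in range(trials):
    pts = sorted(random.random() for _ in range(N))
    # "at least 2 tasks coincide" = some pair of points within d,
    # i.e. the minimum gap between the sorted points is below d
    if any(pts[i + 1] - pts[i] < d for i in range(N - 1)):
        hits += 1

exact = 1 - (1 - (N - 1) * d) ** N   # Feller's formula, valid for d < 1/(N-1)
assert abs(hits / trials - exact) < 0.01
```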
|
81,728 | <p>The question is to compute or estimate the following probabilty.</p>
<p>Suppose that you have $N$ (e.g. $30$) tasks, each of which repeats every $t$ min (e.g. $30$ min) and lasts $l$ min (e.g. $5$ min). If the tasks started at uniformly random point in time yesterday, what is the probability that there is a time today at which at least $m$ (e.g. $10$) of the tasks run.</p>
| Alex Levine | 23,247 | <p>Probability that a task Is operating at a given instant = l/t.</p>
<p>Probability that a task isn’t operating at a given instant = (t-l)/t.</p>
<p>Probability that at least m of N tasks are operating at a given instant is C(N,i)[ (l/t)^i][(t-l)/t]^(N-i) summed for i=m to N where C(N,i) is the combination of N objects taken i at a time.</p>
<p>Integrate the sum over (0,t] . Result is (((t-l)^N)/t^(N-1))∑i=m to N [C(N,i)(l/(t-l))^i] which further simplifies to </p>
<p>(((t-l)^N)/t^(N-1))[(t/(t-l))^N – ((t/(t-l))^m)]</p>
<p>This further simplifies to t[1-((t-l)/t)^(N-m)]</p>
<p>We only have to compute the probability for one interval of length t since the start times for each task has fixed period t and task lasts for same interval l. So if the event of m tasks happening in an instant over an interval of length t doesn’t happen once, it never happens.</p>
|
611,361 | <p>Let's have function $f$ defined by:
$$f(x)=2\sum_{k=1}^{\infty}\frac{e^{kx}}{k^3}-x\sum_{k=1}^{\infty}\frac{e^{kx}}{k^2},\quad x\in(-2\pi,0\,\rangle$$
My question:
Can somebody expand it into a correct Maclaurin series, but using an unconventional way? Conventional is e.g. using $n$-th derivative of $f(x)$ in zero.</p>
<p>Reedited:
Let me explain the reason for my question. This will be like conventionaly use of expansion of $e^{kx}$, but using incorrect arguments(using zetas for divergent series).
Nice thing is, that the final result looks correct!
We have: \begin{align}f(x)&=2\sum_{k=1}^{\infty}\sum_{m=0}^{\infty}\frac{k^{m-3}x^m}{m!}-\sum_{k=1}^{\infty}\sum_{m=0}^{\infty}\frac{k^{m-2}x^{m+1}}{m!}=\\&=
\sum_{m=0}^{\infty}\frac{x^m}{m!}2\sum_{k=1}^{\infty}k^{m-3}-\sum_{m=0}^{\infty}\frac{x^{m+1}}{m!}\sum_{k=1}^{\infty}k^{m-2}=\\&=\sum_{m=0}^{\infty}\frac{x^m}{m!}2\zeta(3-m)-\sum_{m=0}^{\infty}\frac{x^{m+1}}{m!}\zeta(2-m)=\\&=\sum_{m=0}^{\infty}\frac{x^m}{m!}2\zeta(3-m)-\sum_{m=1}^{\infty}\frac{x^{m}}{(m-1)!}\zeta(3-m)=\\&=\sum_{m=0}^{\infty}\frac{x^m}{m!}(2-m)\zeta(3-m)\end{align}
So we get nice Maclaurin series containing Zetas:
$$f(x)=\sum_{m=0}^{\infty}\frac{x^m}{m!}(2-m)\zeta(3-m)$$
And now, if somebody will find expansion, but using some unconventional technique, there is a chance to get some interesting formula for $\zeta(3)$. That's motivation for my question.</p>
| Farshad Nahangi | 50,728 | <p>let $g(x)=\sum_{k=1}^{\infty}\frac{e^{kx}}{k^3}$ then your problem is converted to
$$f(x)=2g(x)-xg'(x)$$ then
\begin{align*}
f'(x)&=g'(x)-xg''(x)\\
f''(x)&=-x\cdot g'''(x)
\end{align*}
where
$$g'''(x)=\sum_{k=1}^{\infty} e^{kx}$$
Thus
$$f''(x)=-x\cdot\sum_{k=1}^{\infty} e^{kx}=-x\cdot\sum_{k=1}^{\infty}\sum_{m=0}^{\infty}\frac{(kx)^m}{m!}=-\sum_{k=1}^{\infty}\sum_{m=0}^{\infty}\frac{k^mx^{m+1}}{m!}$$
Now you can integrate.
$$f(x)=-\sum_{k=1}^{\infty}\sum_{m=0}^{\infty}\frac{k^mx^{m+3}}{(m+3)(m+2)m!}$$</p>
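As a numeric sanity check (my own addition): integrating $f''$ term by term twice from $0$, the $k=1$ mode of the double sum should equal $2e^{x}-xe^{x}-2-x$, the double integral of $-xe^{x}$ taken from $0$:

```python
import math

def series_piece(x, terms=60):
    # -sum_m x^{m+3} / ((m+3)(m+2) m!), the k = 1 mode of the double sum
    s, fact = 0.0, 1.0
    for m in range(terms):
        if m > 0:
            fact *= m
        s -= x ** (m + 3) / ((m + 3) * (m + 2) * fact)
    return s

def closed_form(x):
    # double integral of -x e^x from 0: 2 e^x - x e^x - 2 - x
    return 2 * math.exp(x) - x * math.exp(x) - 2 - x

for x in (-0.1, -0.5, -1.0):
    assert abs(series_piece(x) - closed_form(x)) < 1e-12
```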
|
3,963,884 | <p>Suppose <span class="math-container">$(X_n)_n$</span> are i.i.d. random variables and let <span class="math-container">$W_n = \sum_{k=1}^n X_k$</span>. Assume that there exist <span class="math-container">$u_n>0 , v_n \in \mathbb{R}$</span> such that</p>
<p><span class="math-container">$$\frac{1}{u_n}W_n-v_n\Rightarrow W$$</span></p>
<p>where <span class="math-container">$W$</span> is not degenerate. Show that</p>
<p><span class="math-container">$$u_n\to \infty , \frac{u_n}{u_{n+1}}\to 1$$</span></p>
<p>What happens to <span class="math-container">$u_n$</span> if <span class="math-container">$W$</span> is degenerate?</p>
<p><strong>Hint:</strong> You may need to consider <span class="math-container">${u_{2n}}/{u_n}$</span>.</p>
<p>Is the following attempt, for the first part, true?</p>
<p>In order to remove <span class="math-container">$v_n,$</span> we can consider <span class="math-container">$\frac{1}{u_n}\sum_{k=1}^n(X_{2k+1}-X_{2k})$</span> which converges in distribution to a non-degenerate random variable <span class="math-container">$Y.$</span> So we can suppose that <span class="math-container">$\frac{1}{u_n}W_n$</span> converges in distribution to <span class="math-container">$W$</span>.</p>
<p>If <span class="math-container">$W$</span> is non-degenerate then there exist <span class="math-container">$x \in \mathbb{R};|\phi_W(x)|<1,$</span> since <span class="math-container">$\frac{1}{u_n}W_n-v_n\Rightarrow W$</span> then there exist <span class="math-container">$k \in \mathbb{N};|\phi_{X_1}(\frac{x}{u_k})|<1$</span> which means that <span class="math-container">$X_1$</span> is not degenerate, if <span class="math-container">$(u_n)_n$</span> is bounded from above then there exist a subsequence <span class="math-container">$(u_{k_n})$</span> such that <span class="math-container">$W_{k_n}$</span> converges in distribution.
Let <span class="math-container">$(u_{q_n})_n$</span> be an arbitrary subsequence, since <span class="math-container">$X_1$</span> is not-degenerate then <span class="math-container">$\sum_{l=1}^{q_n}X_l=W_{q_n}$</span> doesn't converges in distribution, so <span class="math-container">$u_{q_n}$</span> is not bounded from above and we can extract a subsequence from <span class="math-container">$u_{q_n}$</span> diverging to <span class="math-container">$+\infty.$</span></p>
<p>In case <span class="math-container">$W$</span> is degenerate, is it possible to know the behavior of <span class="math-container">$u_n$</span>?</p>
| Kavi Rama Murthy | 142,385 | <p>Suppose <span class="math-container">$\frac {W_n} {c_n} $</span> converges in distribution to <span class="math-container">$W$</span> and <span class="math-container">$(c_n)$</span> does not tend to <span class="math-container">$\infty$</span>. Then there is a subsequence <span class="math-container">$c_{n_k}$</span> converging to some real number <span class="math-container">$c$</span>. This implies that <span class="math-container">$W_{n_k}$</span> converges in distribution to <span class="math-container">$cW$</span>. Hence <span class="math-container">$(\phi(t))^{n_k} \to Ee^{itcW}$</span> where <span class="math-container">$\phi$</span> is the characteristic function of the <span class="math-container">$X_i$</span>'s. Since <span class="math-container">$Ee^{itcW}$</span> does not vanish for <span class="math-container">$t$</span> near <span class="math-container">$0$</span>, it follows that <span class="math-container">$|\phi (t)|=1$</span> for all <span class="math-container">$t$</span> near <span class="math-container">$0$</span>. This implies that the <span class="math-container">$X_i$</span>'s are a.s. constants. In this case <span class="math-container">$c_n \sim nc$</span>.</p>
<p>If <span class="math-container">$W$</span> is allowed to be degenerate, take <span class="math-container">$X_n=0$</span> for all <span class="math-container">$n$</span>. Obviously nothing can be said about <span class="math-container">$c_n$</span> in this case.</p>
|
4,264,496 | <p>So we have the jensen's inequality: <span class="math-container">$$|EX| \leq E|X|$$</span></p>
<p><strong>Any bound</strong> on the Jensen gap (upper bound or lower bound)? <span class="math-container">$$\text{gap}=E|X| - |EX|$$</span></p>
| Reijo Jaakkola | 737,246 | <p>The gap can be arbitrarily large. For instance, if <span class="math-container">$X$</span> is a random variable so that <span class="math-container">$X(0) = -N$</span> and <span class="math-container">$X(1)=N$</span>, and the events <span class="math-container">$0$</span> and <span class="math-container">$1$</span> have probability <span class="math-container">$1/2$</span>, then <span class="math-container">$|E(X)| = |\frac{1}{2}N - \frac{1}{2}N|=0$</span>, but <span class="math-container">$E(|X|) = N$</span>.</p>
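The same example in code (my own sketch, with an arbitrary $N$):

```python
# X takes the values -N and N, each with probability 1/2
N = 1000.0
outcomes = [-N, N]
EX = sum(outcomes) / len(outcomes)                      # E[X] = 0
E_abs = sum(abs(x) for x in outcomes) / len(outcomes)   # E|X| = N
gap = E_abs - abs(EX)
assert gap == N   # the Jensen gap grows without bound with N
```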
|
3,690,185 | <p>By <span class="math-container">$a_n \sim b_n$</span> I mean that <span class="math-container">$\lim_{n \rightarrow \infty} \frac{a_n}{b_n} = 1$</span>.</p>
<p>I don't know how to do this problem. I have tried to apply binomial theorem and I got
<span class="math-container">$$\int_{0}^{1}{(1+x^2)^n dx} = \int_0^1 \sum_{k=0}^n{\binom{n}{k}x^{2k}dx} = \sum_{k=0}^n \int_0^1{ \binom{n}{k}x^{2k}dx} = \sum_{k=0}^n \frac {\binom{n}{k}}{2k+1}$$</span>
But I don't know what I could do with this, nor if it is a correct approach. </p>
| mr_snazzly | 572,048 | <p>You can also write <span class="math-container">$ \int_0^1 (1+x^2)^ndx = \int_0^1 e^{n\log(1+x^2)}dx $</span> and use a generalization of Laplace's method to handle the boundary case.</p>
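A numeric sketch of that boundary-Laplace estimate (my own addition). Since $\log(1+x^2)$ has slope $1$ at the right endpoint $x=1$, the method suggests $\int_0^1(1+x^2)^n\,dx\sim 2^n/n$, which the computation below supports:

```python
def integral(n, steps=100_000):
    # midpoint rule for the integral of (1 + x^2)^n over [0, 1]
    h = 1.0 / steps
    return h * sum((1 + (h * (i + 0.5)) ** 2) ** n for i in range(steps))

n = 100
ratio = n * integral(n) / 2**n   # should be close to 1 for large n
assert abs(ratio - 1) < 0.01
```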
|
3,489,345 | <p>My goal is to find the values of <span class="math-container">$N$</span> such that <span class="math-container">$10N \log N > 2N^2$</span></p>
<p>I know for a fact this question requires discrete math. </p>
<p>I think the problem revolves around manipulating the logarithm. The thing is, I forgot how to manipulate the logarithm using discrete math. </p>
<p>My question is how do I manipulate this equation in a way such that I can find the values of N such that the equation is true? </p>
| Community | -1 | <p>First divide by <span class="math-container">$2N$</span> on both sides, </p>
<p><span class="math-container">$5\log N> N$</span> (since <span class="math-container">$N>0$</span>, the direction of the inequality is preserved)</p>
<p>Then by raising to the <span class="math-container">$e$</span> power on both sides (the exponential is an increasing function) you'll get</p>
<p><span class="math-container">$e^{5\log N}>e^{N}\implies e^{\log N^5}>e^N \implies N^5>e^{N}$</span></p>
<p>Can you end it from here?</p>
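From $N^5>e^N$ one can simply enumerate the integer solutions (assuming the natural logarithm, as in the answer):

```python
import math

# 10 N log N > 2 N^2  is equivalent to  N^5 > e^N  for integers N >= 1
sols = [N for N in range(1, 50) if 10 * N * math.log(N) > 2 * N * N]
assert sols == list(range(2, 13))   # holds exactly for N = 2, ..., 12
```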
|
956,680 | <p>$\displaystyle\lim_{x\to0}\frac{x^2+1}{\cos x-1}$</p>
<p>My solution is:</p>
<p>$\displaystyle\lim_{x\to0}\frac{x^2+1}{\cos x-1}\cdot\frac{\cos x+1}{\cos x+1}$</p>
<p>$\displaystyle\lim_{x\to0}\frac{(x^2+1)(\cos x+1)}{\cos^2 x-1}$</p>
<p>$\displaystyle\lim_{x\to0}\frac{(x^2+1)(\cos x+1)}{-(1-\cos^2 x)}$</p>
<p>Since $\sin^2 x=1-\cos^2 x$</p>
<p>$\displaystyle\lim_{x\to0}\frac{(x^2+1)(\cos x+1)}{-\sin^2 x}$</p>
<p>I'm stuck here. What next?</p>
| idm | 167,226 | <p>There is no indeterminate form here: the numerator tends to $1$ while $\cos x-1\to 0^-$, so </p>
<p>$$\lim_{x\to 0}\frac{x^2+1}{\cos x-1}=\frac{1}{0^-}=-\infty $$</p>
|
4,349,582 | <p>Its rather easy to show that <span class="math-container">$a_n=\frac{n^{1/n}}{n}$</span> ist monotonic (which means <span class="math-container">$a_{n+1}<a_n$</span> for each <span class="math-container">$n$</span>) using derivations. But how can I do it without them? Thanks.</p>
| Ethan Bolker | 72,858 | <p>The mathematical function you seek does not have a special name, nor does it have a formula. If you search for <a href="https://www.google.com/search?q=decimal2binary" rel="nofollow noreferrer"><em>decimal2binary</em></a> you will find algorithms for pen and paper calculation and coded in many languages.</p>
<p>You can give that function any name you like.</p>
<p>For single digits just use a table:</p>
<pre><code>0 0
1 1
2 10
3 11
...
9 1001
</code></pre>
|
2,823,758 | <p>I was learning the definition of continuous as:</p>
<blockquote>
<p>$f\colon X\to Y$ is continuous if $f^{-1}(U)$ is open for every open $U\subseteq Y$</p>
</blockquote>
<p>For me this translates to the following implication:</p>
<blockquote>
<p>IF $U \subseteq Y$ is open THEN $f^{-1}(U)$ is open</p>
</blockquote>
<p>however, I would have expected the definition to be the other way round, i.e. with the 1st implication I defined. The reason for that is that just by looking at the metric space definition of continuous:</p>
<blockquote>
<p>$\exists q = f(p) \in Y, \forall \epsilon>0,\exists \delta >0, \forall x \in X, 0 < d(x,p) < \delta \implies d(f(x),q) < \epsilon$</p>
</blockquote>
<p>seems to be talking about Balls (i.e. open sets) in X and then has a forward arrow for open sets in Y, so it seems natural to expect the direction of the implication to go in that way round. However, it does not. Why does it not go that way? Whats is wrong with the implication going from open in X to open in Y? And of course, why is the current direction the correct one?</p>
<p>I think conceptually I might be even confused why the topological definition of continuous requires to start from things in the target space Y and then require things in the domain. Can't we just say map things from X to Y and have them be close? <strong>Why do we require to posit things about Y first in either definition for the definition of continuous to work properly</strong>?</p>
<hr>
<p>I can't help but point out that this question <a href="https://math.stackexchange.com/questions/323610/the-definition-of-continuous-function-in-topology">The definition of continuous function in topology</a> seems to be similar but perhaps lack the detailed discussion on the direction on the implication for me to really understand why the definition is not reversed or what happens if we do reverse it. The second answer there tries to make an attempt at explaining why we require $f^{-1}$ to preserve the property of openness but its not conceptually obvious to me why thats the case or whats going on. Any help?</p>
<hr>
<p>For whoever suggest to close the question, the question is quite clear:</p>
<blockquote>
<p><strong>why is the reverse implication not the "correct" definition of continuous?</strong></p>
</blockquote>
<hr>
<p>As an additional important point I noticed is, pointing out <strong>the difference between open mapping and continuous function would be very useful</strong>.</p>
<hr>
<p>Note: I encountered this in baby Rudin, so thats as far as my background in analysis goes, i.e. metric spaces is my place of understanding. </p>
<hr>
<p>Extra confusion/Appendix:</p>
<p>Conceptually, I think I've managed to nail what my main confusion is. In conceptual terms continuous functions are suppose to map "nearby points to nearby points" so for me its metric space definition makes sense in that sense. However, that doesn't seem obvious to me unless we equate "open sets" to be the definition of "close by". Balls are open but there are plenty of sets that are open but are not "close by", for example the union of two open balls. I think this is what is confusing me most. How is the topological def respecting that conceptual requirement? </p>
| Anonimo | 570,174 | <p>I think I understand you.</p>
<p>You have two topological spaces $(X,\tau)$ and $(Y,\tau')$ and a continuous map $f\colon X \rightarrow Y$.</p>
<p>From the general definition of continuity you can say:</p>
<p>$\forall x \in X , \forall G' \in \tau' : f(x) \in G', \exists G \in \tau : x \in G, f(G) \subseteq G' $.
You can prove this by taking $G=f^{-1}(G')$.</p>
<p>If you apply this to metric spaces you obtain (assuming your $p$ satisfies $f(p)=q$) your definition of a continuous function between metric spaces.</p>
<p>You ask why the implication goes from opens in $Y$ to opens in $X$, and not from opens in $X$ to opens in $Y$.</p>
<p>Here are some reasons:</p>
<p>1. The implication from opens in $Y$ to opens in $X$ is more general, because $f^{-1}(G')$ can be $\varnothing$, a case the other direction does not cover.</p>
<p>2. The implication from opens in $X$ to opens in $Y$ only says that <em>some</em> open set with the required property exists, without saying which one; in the implication from opens in $Y$ to opens in $X$ you know exactly which open set it is, namely $f^{-1}(G')$.</p>
<p>If we changed the definition of continuity to:
$f\colon X\rightarrow Y$ is continuous if $f(U) \in \tau', \forall U \in \tau$,</p>
<p>then, for example, a constant function could fail to be continuous:</p>
<p>take the constant function $1$ from $\mathbb{R}$ to $\mathbb{R}$; then $f((0,1))=\{1\}$, which is not open, so $f$ would not be continuous.</p>
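A toy finite example (my own, not from the answer) makes the preimage formulation concrete:

```python
# X = {1,2,3} with topology tau; Y = {"a","b"} with topology tau_p
X = {1, 2, 3}
tau = [set(), {1}, {1, 2}, {1, 2, 3}]     # the open sets of X
tau_p = [set(), {"a"}, {"a", "b"}]        # the open sets of Y
f = {1: "a", 2: "a", 3: "b"}              # a map X -> Y

def preimage(U):
    return {x for x in X if f[x] in U}

# f is continuous iff the preimage of every open set of Y is open in X
assert all(preimage(U) in tau for U in tau_p)
```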
|
205,671 | <p>How would one go about showing the polar version of the Cauchy Riemann Equations are sufficient to get differentiability of a complex valued function which has continuous partial derivatives? </p>
<p>I haven't found any proof of this online.</p>
<p>One of my ideas was writing out $r$ and $\theta$ in terms of $x$ and $y$, then taking the partial derivatives with respect to $x$ and $y$ and showing the Cauchy Riemann equations in the Cartesian coordinate system are satisfied. A problem with this approach is that derivatives get messy.</p>
<p>What are some other ways to do it?</p>
| thebluegiraffe | 510,820 | <p>With only partial differentiation and algebra. From what we already know of complex functions:</p>
<p>$ƒ(z) = u(r,\theta) + iv(r,\theta)$</p>
<p>$z = re^{i\theta} = r\cos\theta +ir\sin\theta = x + iy$</p>
<p>$x(r,\theta)= r\cos\theta,\quad y(r,\theta) = r\sin\theta $</p>
<p>Apply partial derivative and chain rule:</p>
<p>$\frac{∂f}{∂r} = \frac{df}{dz}\frac{∂z}{∂r} = \frac{df}{dz}e^{i\theta} = \frac{df}{dz}\frac1rz$</p>
<p>$\frac{∂f}{∂\theta} = \frac{df}{dz}\frac{∂z}{∂\theta} = \frac{df}{dz}ire^{i\theta} = \frac{df}{dz}iz$</p>
<p>which are also equal to:</p>
<p>$\frac{∂f}{∂r} = \frac{∂u}{∂r} + i\frac{∂v}{∂r}$</p>
<p>$\frac{∂f}{∂\theta} = \frac{∂u}{∂\theta} + i\frac{∂v}{∂\theta}$</p>
<p>then set the equality of $r\frac{∂f}{∂r} = \frac1i\frac{∂f}{∂\theta}$:</p>
<p>$r(\frac{∂u}{∂r} + i\frac{∂v}{∂r}) = \frac1i(\frac{∂u}{∂\theta} + i\frac{∂v}{∂\theta}) $</p>
<p>and separate the real and imaginary:</p>
<p>$\frac{∂u}{∂r} = \frac1r\frac{∂v}{∂\theta}, \quad \frac{∂v}{∂r} = -\frac1r\frac{∂u}{∂\theta}$</p>
<p>yey :3</p>
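A quick finite-difference check of the polar equations (my own addition), using the analytic function $f(z)=z^2$:

```python
import cmath

def f(r, t):
    # f(z) = z^2 written in polar coordinates, z = r e^{i t}
    return (r * cmath.exp(1j * t)) ** 2

r, t, h = 1.3, 0.7, 1e-6
df_dr = (f(r + h, t) - f(r - h, t)) / (2 * h)   # u_r + i v_r
df_dt = (f(r, t + h) - f(r, t - h)) / (2 * h)   # u_t + i v_t

# polar Cauchy-Riemann: u_r = (1/r) v_t  and  v_r = -(1/r) u_t
assert abs(df_dr.real - df_dt.imag / r) < 1e-6
assert abs(df_dr.imag + df_dt.real / r) < 1e-6
```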
|
2,218,914 | <p>What is a boundary point when solving for a max/min using Lagrange Multipliers?
After you solve the required system of equation and get the critical maxima and minima, when do you have to check for boundary points and how do you identify them?</p>
<p>e.g. Optimise (1+a)(1+b)(1+c) given constraint a+b+c=1, with a,b,c all non-negative.</p>
<p>After using the Lagrange multiplier equating the respective partial derivatives, I get (a,b,c)=(1/3, 1/3, 1/3). Clearly there must be both a maximum and minimum, and I assume this is the maximum. Where is the minimum? (0,0,1) optimises best for the minimum, and I assume using 0 is a boundary point but why? And what effect does the restriction to non-negative reals have? </p>
| Yuri Negometyanov | 297,350 | <p>At first - about elementary way.
$$(1+a) + (1+b) + (1+c) = 4.$$
Using AM-GM, one can get:
$$(1+a)(1+b)(1+c)\le \left(\dfrac{(1+a)+(1+b)+(1+c)}3\right)^3=\left(\dfrac43\right)^3=\dfrac{64}{27},$$
with equality exactly at $\left(\dfrac13,\dfrac13,\dfrac13\right)$, so that point is the maximum.
Note that the problem's constraints are significant in this case.</p>
<p>The partial derivatives in the Lagrange multiplier method, for
$$f(a,b,c,\lambda) = (1+a)(1+b)(1+c)+\lambda(a+b+c-1)$$
can give
$$\begin{cases}
(1+b)(1+c) + \lambda = 0\\
(1+a)(1+c) + \lambda = 0\\
(1+a)(1+b) + \lambda = 0\\
a+b+c = 1
\end{cases}$$</p>
<p>$$\begin{cases}
(b-a)(1+c) = 0\\
(1+a)(c-b) = 0\\
a+b+c =1,
\end{cases}$$
and one can conclude that
$\left(\dfrac13,\dfrac13,\dfrac13\right)$
is the only critical point, giving the maximum.</p>
<p>So the function has no interior minimum, and the minimum is attained on the boundary.</p>
|
2,218,914 | <p>What is a boundary point when solving for a max/min using Lagrange Multipliers?
After you solve the required system of equation and get the critical maxima and minima, when do you have to check for boundary points and how do you identify them?</p>
<p>e.g. Optimise (1+a)(1+b)(1+c) given constraint a+b+c=1, with a,b,c all non-negative.</p>
<p>After using the Lagrange multiplier equating the respective partial derivatives, I get (a,b,c)=(1/3, 1/3, 1/3). Clearly there must be both a maximum and minimum, and I assume this is the maximum. Where is the minimum? (0,0,1) optimises best for the minimum, and I assume using 0 is a boundary point but why? And what effect does the restriction to non-negative reals have? </p>
| farruhota | 425,072 | <p>First of all, if the non negativity condition is not given (if a,b,c can be any real numbers), then there is no minimum. Indeed, let c=0, a be a large negative number, b be a large positive number such that a+b=1. Hence (1+a)(1+b)(1+c) tends to $-\infty$.</p>
<p>When it is solved by the Lagrange multipliers method, four (not one) constraints must be considered.</p>
<p>Optimize $(1+a)(1+b)(1+c)$ subject to $a+b+c=1, a,b,c\geq0$. </p>
<p>Then the Kuhn-Tucker conditions must be checked by considering various cases...</p>
<p>Another approach (to imagine better): let's look at the 2-variable function:</p>
<p>Optimize $z=(1+x)(1+y)$ subject to $x+y=1, x,y\geq0$.</p>
<p>Substitute $y=1-x$ into the objective function: $z=(1+x)(1+1-x)=-x^2+x+2.$</p>
<p>Equivalent problem: Optimize $z=-x^2+x+2$ subject to $x\geq0$.</p>
<p>According to the Extreme Point Theorem, the extreme values of the function occur either at the border or the critical point(s).</p>
<p>Border: x=0.
Critical point(s): $z'_x=0 \Rightarrow -2x+1=0 \Rightarrow x=\frac{1}{2}.$</p>
<p>Evaluation: $z(0)=2 - min$; $z(\frac{1}{2})=\frac{9}{4} - max.$</p>
<p>Or referring to the initial two variable objective function $z=(1+x)(1+y):$</p>
<p>$z(0,1)=2 - min; z(\frac{1}{2},\frac{1}{2})=\frac{9}{4} - max$.</p>
<p>Note: Now it can be generalized to the 3-variable function.</p>
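A brute-force grid over the simplex $a+b+c=1$, $a,b,c\ge 0$ (my own sketch) confirms both the interior maximum $64/27$ and the boundary minimum $2$:

```python
n = 300   # grid resolution; divisible by 3 so (1/3, 1/3, 1/3) lies on the grid
vals = []
for i in range(n + 1):
    for j in range(n + 1 - i):
        a, b = i / n, j / n
        c = 1 - a - b
        vals.append((1 + a) * (1 + b) * (1 + c))

assert abs(max(vals) - 64 / 27) < 1e-9   # attained at a = b = c = 1/3
assert abs(min(vals) - 2.0) < 1e-9       # attained at a vertex, e.g. (0, 0, 1)
```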
|
4,552,723 | <p>Assume the following angles are known:
<span class="math-container">$ABD$</span>,<span class="math-container">$DBC$</span>,<span class="math-container">$BAC$</span>,<span class="math-container">$ACD$</span>.</p>
<p>Is it possible to compute <span class="math-container">$CDA$</span>?</p>
<p><a href="https://i.stack.imgur.com/nrpQL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nrpQL.png" alt="enter image description here" /></a></p>
| Sam | 530,289 | <p>No: whatever equations you set up, you always end up with the sum
<span class="math-container">$$\measuredangle BDC+\measuredangle DCA$$</span>
which cannot be isolated and depends on the length of <span class="math-container">$[AB]$</span>.
However, if the quadrilateral were a parallelogram, then the answer is definitely yes, since in that case
<span class="math-container">$$\measuredangle BCA=\measuredangle DAC$$</span></p>
|
1,586,354 | <p>I did the following exercise:</p>
<blockquote>
<p>Suppose $n$ is an even positive integer and $H$ is a subgroup of $\mathbb{Z}_n$ (integers mod n with addition). Prove that either every member of $H$ is even or exactly half of the members of $H$ are even.</p>
</blockquote>
<p>My answer:</p>
<p>Since $\mathbb{Z}_n$ is cyclic so is $H$. If $k$ generates $H$ when $k$ is even then every element in $H$ is even. If $k$ is odd then exactly every other element is even which proves the claim. </p>
<p>Assuming my proof is correct I was wondering how else to do this. The exercise appears before the chapter about cyclic groups. </p>
<blockquote>
<p>How to answer this question without using any knowledge of cyclic
groups, generators, etc.?</p>
</blockquote>
| Noah Schweber | 28,111 | <p>Here's a sketch of an alternate alternate proof: consider the map $f: x\mapsto x+x$, with $ran(f):=A$ the set of even elements. </p>
<blockquote>
<p>For every $a, b\in A$, $f^{-1}(a)$ has the same number of elements as $f^{-1}(b)$.</p>
</blockquote>
<p>Proof sketch: Fix $c+c=a$ and $d+d=b$, and consider the map $g: x\mapsto x+d-c$. It's not hard to check that $g$ is a bijection from $f^{-1}(a)$ to $f^{-1}(b)$. $\Box$</p>
<p>Let $\xi$ be the number of elements in $f^{-1}(a)$ for $a\in A$.</p>
<blockquote>
<p>$\xi=1$ or $\xi=2$.</p>
</blockquote>
<p>Proof sketch: Clearly, $\xi\ge 1$, so we just have to show that $\xi\le 2$. To do this, note that for $0\le i<n$ we have $0\le 2i<2n$; so if $a$ is even, then $i+i\equiv a$ implies $i+i=a$ or $i+i=n+a$. $\Box$</p>
<blockquote>
<p>If $\xi=1$, then every element is even.</p>
</blockquote>
<p>Proof: Then $x\mapsto x+x$ is injective, hence bijective. $\Box$</p>
<blockquote>
<p>If $\xi=2$, then exactly half the elements are even.</p>
</blockquote>
<p>Proof: Consider the equivalence relation $\approx$ on $H$ given by $a\approx b$ if $a+a=b+b$. Since $\xi=2$, the $\approx$-classes all have exactly two elements, that is, $\approx$ partitions $H$ into pairs. The number of pairs is $\vert H\vert/2$, and each pair corresponds to a unique element of $A$. $\Box$</p>
|
2,483,231 | <p>If $F(x)=f(g(x))$, where $f(5) = 8$, $f'(5) = 2$, $f'(−2) = 5$, $g(−2) = 5$, and
$g'(−2) = 9$, find $F'(−2)$. I'm totally lost on this problem, I'm assuming to incorporate the Chain Rule. I get $5(5) * 9 = 225$ but I am incorrect.</p>
<p>Update: Thanks guys, I see where I messed up thanks!</p>
| Community | -1 | <p>$F'(x)=f'(g(x))\cdot g'(x)$, by the chain rule, so $F'(-2)=f'(g(-2))\cdot g'(-2)=f'(5)\cdot 9=2\cdot 9=18$.</p>
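A numeric sanity check (my own, with hypothetical linear $f$ and $g$ chosen to match the given data):

```python
g = lambda x: 9 * (x + 2) + 5   # g(-2) = 5, g'(-2) = 9
f = lambda x: 2 * (x - 5) + 8   # f(5) = 8,  f'(5) = 2

h = 1e-6
Fp = (f(g(-2 + h)) - f(g(-2 - h))) / (2 * h)   # central difference for F'(-2)
assert abs(Fp - 18) < 1e-6
```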
|