qid int64 1 4.65M | question large_stringlengths 27 36.3k | author large_stringlengths 3 36 | author_id int64 -1 1.16M | answer large_stringlengths 18 63k |
|---|---|---|---|---|
3,716,439 | <p>I'm having trouble solving for x in this type of equation: <span class="math-container">$a^x + b^x = c^x$</span> (where x is the only unknown).</p>
<p>I saw an instance by Presh Talwalkar in MindYourDecisions. I tried solving for x and used an example with a known answer. <span class="math-container">$3^x + 4^x = 5^x$</span> </p>
<p>We all know that x = 2, but I don't know the correct approach. I tried <span class="math-container">$\ln(3^x + 4^x) = x\ln 5$</span> and got stuck.</p>
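A quick numeric sanity check (a Python sketch, not part of the original post): dividing through by $5^x$ gives $(3/5)^x + (4/5)^x = 1$, whose left side is strictly decreasing in $x$, so the root $x=2$ is unique and can be located by bisection.

```python
def g(x):
    # strictly decreasing in x, since both bases lie in (0, 1)
    return (3 / 5) ** x + (4 / 5) ** x - 1

# bisection: g(0) = 1 > 0 and g(10) < 0, so the sign change is bracketed
lo, hi = 0.0, 10.0
for _ in range(200):
    mid = (lo + hi) / 2
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2
print(root)  # ~2.0
```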
| user577215664 | 475,762 | <p>Are you sure you copied the system correctly? Maybe it's a minus sign instead of a plus sign in one of the differential equations?</p>
<p>Because if you subtract the two DEs you get:</p>
<p><span class="math-container">$$x+3y=\sin t $$</span></p>
<p>Use this to solve the second differential equation:
<span class="math-container">$$x'+y'=x+y$$</span></p>
|
108,781 | <p>I want to add arrowheads and labels to the following plot. </p>
<pre><code>P1 = Plot[x, {x, 0, 2}, PlotStyle -> {Dashed, Red}, Filling -> Bottom];
P2 = Plot[6 x, {x, 0, 2}, PlotStyle -> {Dashed, Blue}];
Show[P1, P2, AspectRatio -> Automatic, Frame -> True,
PlotRangePadding -> None, AxesOrigin -> {0, 0}, Axes -> None,
FrameStyle -> Directive[Black],
LabelStyle -> {18, Bold},
FrameLabel -> {{None, "T"}, {"w", None} }, ImageSize -> 500]
</code></pre>
<p>I want the arrowheads towards the origin and the label (say, "w=T") to be on the curve. </p>
| kglr | 125 | <p>As an alternative to <code>Epilog</code>, you can use the <code>PlotStyle</code> settings to add arrows: </p>
<pre><code>Plot[{x, 6 x}, {x, 0, 2}, Filling -> {1 -> Bottom},
PlotRange -> {0, 2}, AspectRatio -> 1, PlotRangePadding -> None,
AxesOrigin -> {0, 0}, Axes -> None, Frame -> True,
FrameStyle -> Directive[Black], LabelStyle -> {18, Bold},
FrameLabel -> {{None, "T"}, {"w", None}}, ImageSize -> 400,
PlotStyle -> {({Arrowheads[{{-.05, 0.}, {.5, 1/2,
Graphics[Text[Style["w = T\n", Large, Bold, Italic, Black]]]}}],
Red, Dashed, Arrow @@ #} &),
({Arrowheads[{{-.05, 0.}}], Blue, Dashed, Arrow @@ #} &)}]
</code></pre>
<p><img src="https://i.stack.imgur.com/upXT5.png" alt="Mathematica graphics"></p>
<p>Alternatively, you can provide the <code>Arrowheads</code> specs as part of <code>PlotStyle</code> and post-process the <code>Plot</code> output to replace <code>Line</code>s with <code>Arrow</code>s to get the same picture. </p>
<pre><code>Plot[{x, 6 x}, {x, 0, 2}, Filling -> {1 -> Bottom},
PlotRange -> {0, 2}, AspectRatio -> 1, PlotRangePadding -> None,
AxesOrigin -> {0, 0}, Axes -> None, Frame -> True,
FrameStyle -> Directive[Black], LabelStyle -> {18, Bold},
FrameLabel -> {{None, "T"}, {"w", None}}, ImageSize -> 400,
PlotStyle -> {{Arrowheads[{{-.05, 0.}, {.5, 1/2,
Graphics[Text[Style["w = T\n", Large, Bold, Italic, Black]]]}}], Red, Dashed},
{Arrowheads[{{-.05, 0.}}], Blue, Dashed}}] /. Line -> Arrow
</code></pre>
|
1,645,211 | <p>The problem statement is,</p>
<p>Find a linear transformation $T: \mathbb R^3 \to \mathbb R^3$ such that the set of all vectors satisfying $4x_1-3x_2+x_3=0$ is the (i) null space of $T$ (ii) range of $T$.</p>
<p>For (i),</p>
<p>I found out the basis of the null space for the system,</p>
<p>$$\begin{pmatrix}4&-3&1\end{pmatrix} \begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix} =0$$</p>
<p>viz, $\mathbf v_1 = \begin{pmatrix}3\\4\\0\end{pmatrix} , \mathbf v_2 = \begin{pmatrix}-1\\0\\4\end{pmatrix}$</p>
<p>So, basically, I have to find a linear transformation with $T\begin{pmatrix}3\\4\\0\end{pmatrix} =0$ and $T\begin{pmatrix}-1\\0\\4\end{pmatrix}=0$, so that every vector $\mathbf v \in \operatorname{span}\{\mathbf v_1, \mathbf v_2\}$ satisfies $T\left(\mathbf v\right) =0$.</p>
<p>Now I'm stuck at this point: I'm not able to describe such linear transformations.</p>
<p>Further, for (ii), I don't know how to approach the problem.</p>
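The thread above does not resolve the question, but one concrete choice can be spot-checked numerically (a Python sketch; the specific matrices are an editorial illustration, not from the original exchange): for (i), take $T(x)=(4x_1-3x_2+x_3,\,0,\,0)$, whose null space is exactly the plane; for (ii), take the matrix whose columns are $v_1$, $v_2$ and $0$, so its range is the plane.

```python
def apply(M, v):
    # multiply a 3x3 matrix (list of rows) by a vector
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# (i) null space of T1 is the plane 4x1 - 3x2 + x3 = 0
T1 = [[4, -3, 1], [0, 0, 0], [0, 0, 0]]
v1, v2 = [3, 4, 0], [-1, 0, 4]           # the basis of the plane found above
print(apply(T1, v1), apply(T1, v2))      # both [0, 0, 0]

# (ii) range of T2 is the plane: the columns of T2 are v1, v2, 0
T2 = [[3, -1, 0], [4, 0, 0], [0, 4, 0]]
w = apply(T2, [2, 5, 7])                 # image of an arbitrary vector: 2*v1 + 5*v2
print(4 * w[0] - 3 * w[1] + w[2])        # 0: the image lies in the plane
```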
| user311576 | 311,576 | <p>If $M(s)$ is supposed to be the $\sigma$-algebra generated by $s$, then
it should suffice to use the definition of a measure and the general description of an element of $M(s)$.
If $M(s)$ is not said algebra, please clarify.</p>
|
1,371,921 | <p>The intersection defined by the two planes $v \bullet \begin{pmatrix} 8 \\ 1 \\ -12 \end{pmatrix} = 35$
and
$v \bullet \begin{pmatrix} 6 \\ 7 \\ -9 \end{pmatrix} = 70$
is a line. What is the equation of this line?</p>
<p>This is what I have so far: I set $v = \begin{pmatrix} a \\ b \\ c \end{pmatrix}$. I was able to simplify the left-hand sides of the two given equations to: </p>
<p>$8a+b-12c = 35$</p>
<p>$6a+7b-9c = 70$</p>
<p>I could find the values of $a, b, c$, but I don't know if it will be helpful. How can I find the equation of the line of intersection?</p>
| David Quinn | 187,299 | <p>There are two basic methods: either</p>
<ol>
<li>You can obtain the direction of the line by doing the cross-product of the normals $$\left(\begin {matrix}8\\1\\-12\end {matrix}\right)\times\left(\begin {matrix}6\\7\\-9\end {matrix}\right)=\left(\begin {matrix}75\\0\\50\end {matrix}\right) \text{which is parallel to}\left(\begin {matrix}3\\0\\2\end {matrix}\right)$$</li>
</ol>
<p>And then somehow obtain a point which satisfies both plane equations, perhaps by assigning a value to one of the variables and solving for the other two.</p>
<p>Or, a more systematic way is:</p>
<ol start="2">
<li>Start with your plane equations (I am using $x$, $y$ and $z$) and eliminate any one of the variables,
So, for example, if you eliminate $y$ you get $$2x-3z=7$$
Now separate the variables onto opposite sides of the equation and introduce $=\lambda$. So now you have $$2x=7+3z=\lambda$$</li>
</ol>
<p>Now get each of $x$, $y$ and $z$ in terms of $\lambda$ and you end up with the equation of the line in one of many possible vector forms; in this case $$\underline{r}=\left(\begin {matrix}0\\7\\\frac{-7}{3}\end {matrix}\right)+\lambda \left(\begin {matrix}\frac 12\\0\\\frac 13\end {matrix}\right)$$</p>
<p>Of course you can write the direction vector as $$\left(\begin {matrix}3\\0\\2\end {matrix}\right)$$ as before.</p>
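A quick numeric check of the result (a Python sketch, not part of the original answer): every point of the parametrized line should satisfy both plane equations, and indeed it does for any $\lambda$.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

point, direction = (0, 7, -7 / 3), (0.5, 0, 1 / 3)
for lam in (-2, 0, 1, 5):
    r = [p + lam * d for p, d in zip(point, direction)]
    # both plane equations v.n1 = 35 and v.n2 = 70 must hold
    print(dot(r, (8, 1, -12)), dot(r, (6, 7, -9)))  # 35 and 70, up to rounding
```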
|
2,354,094 | <p>I've been stumped by this problem:</p>
<blockquote>
<p>Find three non-constant, pairwise unequal functions $f,g,h:\mathbb R\to \mathbb R$ such that
$$f\circ g=h$$
$$g\circ h=f$$
$$h\circ f=g$$
or prove that no three such functions exist.</p>
</blockquote>
<p>I highly suspect, by now, that no non-trivial triplet of functions satisfying the stated property exists... but I don't know how to prove it.</p>
<p>How do I prove this, or how do I find these functions if they do exist?</p>
<p>All help is appreciated!</p>
<p>The functions should also be continuous.</p>
| stewbasic | 197,161 | <p>Here is a proof that no such continuous functions exist. I'll use juxtaposition to denote function composition. As noted in the comments,
$$
h^2=fgh=f^2=ghf=g^2.
$$
Let $e=f^4=g^4=h^4$. Note that $fgf=hf=g$ and $gfg=gh=f$, so
$$
ef=g^4f=gf^2gf=gfg=f,
$$
$$
fe=fg^4=fgf^2g=gfg=f.
$$
Similarly $eg=g=ge$ and $eh=h=he$. In particular $e^2=ef^4=f^4=e$, so $e$ is idempotent. Consider its image $X=\mathrm{im}\,e\subseteq\mathbb R$, so $e|_X$ is the identity. We have $\mathrm{im}\,f\subseteq X$ since $ef=f$. Moreover $f|_X^{4}=e|_X=\mathrm{id}_X$, so $f|_X$ is a permutation of $X$, and similarly for $g$ and $h$. (aside: it now follows that $f|_X$, $g|_X$ and $h|_X$ generate a quotient of the quaternion group).</p>
<p>Suppose $|X|>1$. Since $e$ is continuous, $X$ is a (possibly infinite) interval. Any continuous permutation of such a set is strictly monotone (either increasing or decreasing). Moreover the only strictly increasing involution of $X$ is the identity. Indeed suppose $\sigma$ is such a function. If $\sigma(x)>x$ then
$$
x=\sigma^2(x)>\sigma(x)>x,
$$
a contradiction. Similarly we cannot have $\sigma(x)<x$, so $\sigma$ is the identity.</p>
<p>In particular $f|_X$, $g|_X$ and $h|_X$ are strictly monotone. Since $f|_X=g|_Xh|_X$, they can't all be decreasing. Without loss of generality, suppose $f|_X$ is increasing. Then $f|_X^2$ is increasing. Since $f|_X^4=\mathrm{id}_X$, applying the above result twice gives $f|_X^2=\mathrm{id}_X$ and then $f|_X=\mathrm{id}_X$.</p>
<p>If $|X|=1$, then we also have $f|_X=\mathrm{id}_X$. In either case, for any $x\in\mathbb R$ we have $g(x)\in X$, so
$$
h(x)=f(g(x))=g(x),
$$
whence $h=g$.</p>
|
3,702,520 | <p>I've used cosine & sine rule, and after playing around with them, I'll always end up with an equation involving <span class="math-container">$\sin\theta,\cos\theta$</span> and other weird (and wonderful expressions). I've approached the question in many different ways: by showing <span class="math-container">$AD=x$</span> (since <span class="math-container">$\angle DAC=\angle DCA=30^\circ$</span> it forms an isosceles triangle), also tried to show <span class="math-container">$\sin\theta=1/2$</span> ... but I never got the correct answer.</p>
<p>This has been bothering me for weeks.</p>
<p>Thanks in advance. <a href="https://i.stack.imgur.com/tQope.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tQope.jpg" alt="enter image description here"></a></p>
| AryanSonwatikar | 571,692 | <p><span class="math-container">$$x+2=-\sqrt{3}i$$</span>
Squaring,
<span class="math-container">$$x^2 +4x+4=-3$$</span>
Or,
<span class="math-container">$$x^2+4x+7=0\Rightarrow (x^2+4x+7)^2=0$$</span>
Expand the brackets and use this to simplify.</p>
|
3,702,520 | <p>I've used cosine & sine rule, and after playing around with them, I'll always end up with an equation involving <span class="math-container">$\sin\theta,\cos\theta$</span> and other weird (and wonderful expressions). I've approached the question in many different ways: by showing <span class="math-container">$AD=x$</span> (since <span class="math-container">$\angle DAC=\angle DCA=30^\circ$</span> it forms an isosceles triangle), also tried to show <span class="math-container">$\sin\theta=1/2$</span> ... but I never got the correct answer.</p>
<p>This has been bothering me for weeks.</p>
<p>Thanks in advance. <a href="https://i.stack.imgur.com/tQope.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tQope.jpg" alt="enter image description here"></a></p>
| user376343 | 376,343 | <p>Real coefficients in the quartic inspired me to multiply <span class="math-container">$x + 2 + \sqrt{3}i\;$</span> by the conjugate, which gives</p>
<p><span class="math-container">$$(x + 2 + \sqrt{3}i)(x + 2 - \sqrt{3}i)=x^2+4x+7.$$</span></p>
<p>Dividing the given expression by <span class="math-container">$x^2+4x+7$</span> (just to see if we get a simple form) leads to <span class="math-container">$$2x^4+3x^3-x^2-15x+36=\underbrace{(x^2+4x+7)}_{0}\cdot(2x^2-5x+5)+1$$</span>
Yes, if <span class="math-container">$x + 2 + \sqrt{3}i=0\;$</span> then <span class="math-container">$2x^4+3x^3-x^2-15x+36=1.$</span></p>
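The division identity is easy to confirm numerically (a Python sketch, not part of the original answer), using the root $x=-2-\sqrt3\,i$ of $x^2+4x+7$:

```python
import cmath

x = -2 - cmath.sqrt(3) * 1j                      # root of x^2 + 4x + 7
quad = x**2 + 4 * x + 7                          # should vanish
quartic = 2 * x**4 + 3 * x**3 - x**2 - 15 * x + 36
print(abs(quad))       # ~0
print(quartic)         # ~(1+0j), matching the identity above
```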
|
2,829,121 | <p>I get how to derive the ellipse equation, but I'm struggling to understand what it means intuitively. </p>
<p>You see, a circle equation can be understood very intuitively. The circle equation models how the radius of the circle can be represented using the Pythagorean theorem. But I don't understand what the ellipse equation means at such a level. Does it model how an ellipse can be drawn out using a stretched rope? What exactly does it model? Can someone please explain? </p>
<p>Can you please explain it as simply as possible, as I'm still a beginner? </p>
| Mohammad Riazi-Kermani | 514,496 | <p>You may derive the equation for the ellipse from the equation of the circle by scaling your $x$ and $y$:</p>
<p>$$ x^2 + y^2 = R^2 $$</p>
<p>Let $$x\to x/a \quad\text{and}\quad y\to y/b$$</p>
<p>You get $$ (x/a)^2 + (y/b)^2 = R^2 $$</p>
<p>$$ (\frac {x}{aR})^2 + (\frac {y}{bR})^2 = 1 $$
$$ \frac {x^2}{A^2} + \frac {y^2}{B^2} = 1, $$ where $A=aR$ and $B=bR$.</p>
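The scaling can be checked numerically (a Python sketch, not part of the original answer): points on the circle of radius $R$, stretched by $a$ in $x$ and $b$ in $y$, satisfy the ellipse equation with semi-axes $aR$ and $bR$.

```python
import math

R, a, b = 2.0, 3.0, 0.5
A, B = a * R, b * R
for k in range(8):
    t = 2 * math.pi * k / 8
    # a point on the circle of radius R, then stretched by (a, b)
    x, y = a * (R * math.cos(t)), b * (R * math.sin(t))
    print((x / A) ** 2 + (y / B) ** 2)  # 1.0 each time
```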
|
26,074 | <p>Okay, so this was labeled as a "fun problem", but I'm having trouble knowing how to approach it.</p>
<p>I'm given: $\lim\limits_{x \to 0^+} f(x) = A$ and $\lim\limits_{x \to 0^-} f(x) = B$.</p>
<p>I need to find (or at least know where to start):</p>
<p>a) $\lim\limits_{x \to 0^+} f(x^3 - x)$</p>
<p>b) $\lim\limits_{x \to 0^-} f(x^3 - x)$</p>
<p>Any insight on how to approach this (or even a solution) would be greatly appreciated.</p>
<p>Thanks</p>
| Mariano Suárez-Álvarez | 274 | <p>When $x$ goes to zero, $x^3-x$ also goes to zero. But we can be more precise: in fact,</p>
<ul>
<li><p>when $x$ goes to zero from the left, $x^3-x$ goes to zero from the right, and</p></li>
<li><p>when $x$ goes to zero from the right, $x^3-x$ goes to zero from the left.</p></li>
</ul>
<p>Can you see how to use this to solve the problem? </p>
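The sign flip in the hint is easy to verify numerically (a Python sketch, not from the original answer): for small $x>0$ we have $x^3-x<0$, and for small $x<0$ we have $x^3-x>0$, so the one-sided limits swap, giving (a) $B$ and (b) $A$.

```python
for x in (0.1, 0.01, 0.001):
    assert x**3 - x < 0          # the right of 0 maps to the left of 0
    assert (-x) ** 3 - (-x) > 0  # the left of 0 maps to the right of 0
print("x -> x^3 - x swaps the one-sided approaches to 0")
```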
|
1,609,443 | <p>So my textbook states that it is $(-\infty,0)$ that is concave down, but why not $(-\infty,0]$? Could someone help?</p>
| Bob Krueger | 228,620 | <p>Let me give a more pedagogical explanation without referring to any specific definitions. This question is very similar to "Is $x^2$ increasing on $(0,\infty)$ or $[0,\infty)$?" Here is my answer to that question: if $x^2$ is increasing on $[0,\infty)$, then we would also have to agree by symmetry that $x^2$ is decreasing on $(-\infty,0]$. This means that at $0$ this function is both increasing and decreasing, which simply seems kind of weird. We normally think of increasing as <em>strictly</em> going up, and decreasing as <em>strictly</em> going down, so that there is a third option of going nowhere, neither increasing nor decreasing, staying flat. We could modify our definitions of increasing and decreasing (which one usually calls monotone in one fashion or another), but for the sake of making sense of the English, most keep it this way.</p>
<p>Then to go back to your question, at $0$ your function would be both concave up and concave down, which just doesn't seem definitive to me. So we say that it is neither, and so $x^3$ is concave down on $(-\infty,0)$ only. </p>
|
1,609,443 | <p>So my textbook states that it is $(-\infty,0)$ that is concave down, but why not $(-\infty,0]$? Could someone help?</p>
| Jendrik Stelzner | 300,783 | <p>This depends on your definition of concavity. I prefer the following:</p>
<blockquote>
<p><strong><em>Definition:</em></strong> Let $I \subseteq \mathbb{R}$ be an interval and $f$ a real-valued function defined on $I$. Then $f$ is <em>convex</em> on $I$ if for all $x,y \in I$ and $\lambda \in [0,1]$
$$
f(\lambda x + (1-\lambda) y) \leq \lambda f(x) + (1-\lambda) f(y),
$$
and <em>concave</em> on $I$ if for all $x,y \in I$ and $\lambda \in [0,1]$
$$
f(\lambda x + (1-\lambda) y) \geq \lambda f(x) + (1-\lambda) f(y).
$$
We say $f$ is <em>strictly convex</em> on $I$ if for all $x,y \in I$ with $x \neq y$ and $\lambda \in (0,1)$
$$
f(\lambda x + (1-\lambda) y) < \lambda f(x) + (1-\lambda) f(y),
$$
and <em>strictly concave</em> on $I$ if for all $x,y \in I$ with $x \neq y$ and $\lambda \in (0,1)$
$$
f(\lambda x + (1-\lambda) y) > \lambda f(x) + (1-\lambda) f(y).
$$</p>
</blockquote>
<p>When using this definition it is true that $f \colon \mathbb{R} \to \mathbb{R}$, $x \mapsto x^3$ is concave on $(-\infty,0]$ (even strictly concave), as well as (strictly) convex on $[0,\infty)$.</p>
<p>(This definition of convexity has the advantage that $f \colon I \to \mathbb{R}$ is convex on $I$ if and only if the epigraph $\{(x,y) \in I \times \mathbb{R} \mid y \geq f(x)\}$ is a convex subset of $I \times \mathbb{R}$. It also easily generalizes to functions defined on arbitrary convex subsets of vector spaces, e.g. convex subsets of $\mathbb{R}^n$.)</p>
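Under this definition, strict concavity of $x^3$ on $(-\infty,0]$ can be spot-checked numerically (a Python sketch, not part of the original answer):

```python
def f(x):
    return x**3

# strict concavity: f(l*x + (1-l)*y) > l*f(x) + (1-l)*f(y) for x != y, l in (0,1)
pairs = [(-5.0, -1.0), (-3.0, 0.0), (-10.0, -0.5)]
for x, y in pairs:
    for lam in (0.25, 0.5, 0.75):
        mid = lam * x + (1 - lam) * y
        assert f(mid) > lam * f(x) + (1 - lam) * f(y)
print("x^3 satisfies the strict concavity inequality at the sampled points")
```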
|
171,697 | <blockquote>
<p>Let $f \in C^1(\mathbb R, \mathbb R)$ and suppose
$$ \tag{H}
\vert f(x) \vert\le \frac{1}{2}\vert x \vert + 3, \quad \forall x \in \mathbb R.
$$</p>
<p>Then, every solution of
$$
x'(t)+x(t)+f(x(t))=0, \quad t \in \mathbb R
$$
is bounded on $[0,+\infty]$. </p>
</blockquote>
<p>First of all, I would like to say that the text has been copied correctly: I mean, it's really $[0,+\infty]$, so I suppose the author wants me to prove $\displaystyle \lim_{t \to +\infty} x(t) <\infty$. Indeed, boundedness on $[0,+\infty)$ is quite obvious, because we have global existence (it's enough to write the equation as $x'=-x-f(x)$ and to observe that the RHS is sublinear thanks to $(H)$).</p>
<p>So, we have to prove $\displaystyle \lim_{t \to +\infty} x(t) <\infty$. How can we do this? I've got some ideas but I can't conclude. First, I observe that the problem is autonomous: this implies that solutions are either constant or monotonic.</p>
<p>First idea: I've fixed $x_0 \in \mathbb R$ and I've written the equivalent integral equation:
$$
x(t) = x_0 - \int_0^t [x(s)+f(x(s))]ds
$$</p>
<p>Taking the absolute value and making some rough estimates, we get
$$
\vert x(t) \vert \le \vert x_0 \vert + \left\vert \int_0^t [x(s)+f(x(s))]ds \right\vert \le \vert x_0 \vert + \int_0^t \frac{3}{2}\left\vert x(s) \right\vert +3 ds
$$
but now I don't know how to conclude. Gronwall's lemma? But how can I use it? </p>
<p>Second idea: if $x_0 \in \mathbb R$ is s.t. $x_0 +f(x_0) \neq 0$, the solution of the Cauchy problem is not constant. I can divide both members of equation and I obtain (integrating on $[0,t]$)
$$
-t=\int_0^t \frac{dx}{x+f(x)}
$$
Now I let $t \to +\infty$ but... what can I conclude?</p>
| copper.hat | 27,978 | <p>The goal is to show that the solution is bounded. (I have no idea what the $+\infty$ bit is about, it adds nothing as far as I can tell.)</p>
<p>We first need to show that the solution from any initial condition $x_0$ (at $t_0$) is defined on $[t_0,\infty)$. It is not immediately clear to me that 'we have global existence'. After that, it is fairly straightforward to show boundedness using a 'Lyapunov' function of sorts.</p>
<p>Let $\phi(x) =-x -f(x)$. Since $f$ is $C^1$, we have that $\phi$ is $C^1$, hence Lipschitz on any bounded set. From the usual considerations of ODEs, it is straightforward to establish that, for any $B>0$, if $|x_0| < B$, then there exists a $\delta>0$ (which depends only on $B$ and $f$) such that the solution to $\dot{x} = \phi(x)$ with initial condition $x(t_0) = x_0$ is defined on $[t_0,t_0+\delta]$. If we can establish that $|x(t_0+\delta)| < B$, then it is clear that the solution is defined on $[t_0,\infty)$. Furthermore, the solution is $C^1$.</p>
<p>Using the bound provided for $f$, we have:</p>
<p>If $x\geq 8$, $\phi(x) = -x -f(x) \leq -x +\frac{1}{2} |x| +3 = -\frac{1}{2} x+3 \leq -1$.</p>
<p>If $x\leq -8$, $\phi(x) = -x -f(x) \geq -x -\frac{1}{2} |x| -3 = -\frac{1}{2} x-3 \geq +1$.</p>
<p>Now let $x$ be a solution to $\dot{x} = \phi(x)$, and define the 'Lyapunov' function $V(t) = \frac{1}{2} x(t)^2$. Then it is easy to see that $V$ is $C^1$ and $\dot{V}(t) = x(t)\,\phi(x(t))$. Consequently, if $|x(t)| \geq 8$, then $\dot{V}(t) \leq -8$.</p>
<p>Thus, if $|x_0| \geq 8$, then the solution will satisfy $|x(t)| < 8$ in finite time.</p>
<p>Now suppose $|x_0| < 8$, and that at some $t \in [t_0,t_0+\delta]$, we have $|x(t)| \geq 8$. Let $t_1$ be the first time at which $|x(t_1)| = 8$ ($x$ is continuous, so this is well defined). Since $V$ is $C^1$, we have, for some suitably small $\epsilon>0$, $\dot{V}(t) \leq -7$, for $t \in [t_1-\epsilon, t_1]$. Since $|x(t_1-\epsilon)| < 8$, this contradicts $|x(t_1)| = 8$, since $V(t_1) = V(t_1-\epsilon)+\int_{t_1-\epsilon}^{t_1} \dot{V}(\tau) d \tau$. </p>
<p>Hence $|x(t)| < 8$ for all $t \in [t_0,t_0+\delta]$, and from the considerations above, we see that $x$ is defined on $[t_0, \infty)$, and $|x(t)| < 8$, for all $t\geq t_0$.</p>
<p>If $\phi(x(t')) = 0$ for some $t'$, then it is clear that $x(t) = x(t')$, for all $t \geq t'$ (by uniqueness of solution). If $\phi(x(t')) \neq 0$ for all $t$, then it is clear that $\phi(x(t))$ has the same sign as $\phi(x(t'))$ for all $t\geq t'$. Hence either $x(t)$ is increasing and bounded above, or decreasing and bounded below. Hence $\lim_{t \to \infty} x(t)$ exists (and is in $[-8,8]$, of course).</p>
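The argument can be illustrated numerically (a Python sketch with a hypothetical $f$ satisfying (H), not part of the original answer): a crude explicit-Euler integration from a large initial condition enters and stays in the band $|x|<8$.

```python
import math

def f(x):
    # hypothetical C^1 nonlinearity satisfying (H): |f(x)| <= 2 <= |x|/2 + 3
    return 2.0 * math.sin(x)

x, dt = 50.0, 0.001                    # start well outside the band |x| < 8
history = []
for _ in range(int(20 / dt)):          # explicit Euler on [0, 20]
    x += dt * (-x - f(x))
    history.append(x)

tail = history[len(history) // 2:]     # trajectory on [10, 20]
print(max(abs(v) for v in tail))       # far below the bound 8
```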
|
2,840,425 | <p>I am trying to solve the assignment below with the quotient rule. Every time I get stuck on a similar assignment which has a square root inside the numerator of a fraction. What do I do with the square root in the numerator and how do I find the derivative?</p>
<p>$$\frac{\sqrt x+3}{x}$$</p>
| Peter Szilas | 408,605 | <p>An option:</p>
<p>$\dfrac{1}{ x^2-a^2}=\dfrac{1}{(x-a)(x+a)}=$</p>
<p>$\dfrac{1}{2a}[ \dfrac{1}{x-a} -\dfrac{1}{x+a}]$.</p>
|
2,873,636 | <p>I have been self-studying mathematical logic through <em>A Concise Introduction to Mathematical Logic</em> by Wolfgang Rautenberg and got stuck. I am unable to understand how the value matrix corresponds to the truth table of a given operation, for instance the AND operation. I know that in a matrix the position of an element is important. Is there some kind of relation based on the position in the matrix, for example the $i$th column and $j$th row? Why are the values ordered as they are within the matrix?
<a href="https://i.stack.imgur.com/Floii.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Floii.png" alt="value matrix 1"></a>
<a href="https://i.stack.imgur.com/ondwq.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ondwq.jpg" alt="value matrix"></a></p>
| Rob Arthan | 23,171 | <p>The author is using an abbreviated form of the notation often used for binary operations on finite algebraic structures, leaving out the row and column labels. Drawn up in full, the table for conjunction would look like this:
$$
\begin{array}{c|cc}
{\land} & 1 & 0 \\\hline
1 & 1 & 0 \\
0 & 0 & 0
\end{array}
$$</p>
<p>The idea is that to evaluate $a \land b$, you look for the row labelled $a$ and the column labelled $b$ and the result is the corresponding entry in the matrix: $1 \land 1 = 1$, $1 \land 0 = 0$, $0 \land 1 = 0$ and $0 \land 0 = 0$. </p>
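The row/column lookup can be mimicked directly (a Python sketch, not part of the book): index the matrix by the row label $a$ and the column label $b$.

```python
# rows labelled by a, columns by b, as in the table above
AND = {1: {1: 1, 0: 0},
       0: {1: 0, 0: 0}}

for a in (1, 0):
    for b in (1, 0):
        print(f"{a} AND {b} = {AND[a][b]}")
        assert AND[a][b] == (a and b)   # agrees with Python's own conjunction
```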
|
2,788,002 | <p>If I convert, say, #FF$1919$ to binary, I can do it in groups of three bytes like:</p>
<p>FF: $1111$ $1111$</p>
<p>$19$: $0001$ $1001$</p>
<p>$19$: $0001$ $1001$</p>
<p>So can I write that #FF$1919 =$ $1111$ $1111$ $0001$ $1001$ $0001$ $1001$?</p>
<p>Or do I have to write them as separate bytes like above?</p>
| mvw | 86,776 | <p>Yes, the positional system works as expected:
$$
(FF1919)_{16}
= 15 \cdot 16^5 + 15 \cdot 16^4 + 1 \cdot 16^3 + 9 \cdot 16^2 + 1 \cdot 16^1 + 9 \cdot 16^0
$$
Then we have
$$
16^k = (2^4)^k = 2^{4k}
$$
so
\begin{align}
(FF1919)_{16}
&=
(1111)_2 \cdot 2^{4\cdot 5} +
(1111)_2 \cdot 2^{4\cdot 4} +
(0001)_2 \cdot 2^{4\cdot 3} +
(1001)_2 \cdot 2^{4\cdot 2} + \\
& \quad \,
(0001)_2 \cdot 2^{4\cdot 1} +
(1001)_2 \cdot 2^{4\cdot 0} \\
&=
1\cdot 2^{23} +
1\cdot 2^{22} +
1\cdot 2^{21} +
1\cdot 2^{20} +
1\cdot 2^{19} +
\dotsb \\
& \quad \,
1\cdot 2^{3} +
0\cdot 2^{2} +
0\cdot 2^{1} +
1\cdot 2^{0} \\
&=
(1111 \, 1111 \, 0001 \, 1001 \, 0001 \, 1001)_2
\end{align}</p>
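The digit-by-digit conversion can be confirmed programmatically (a Python sketch, not part of the original answer): translating each hex digit to a four-bit group and concatenating gives the same number as converting the whole string at once.

```python
hex_str = "FF1919"
# translate each hex digit to a 4-bit group, then concatenate
grouped = "".join(format(int(d, 16), "04b") for d in hex_str)
print(grouped)                          # 111111110001100100011001
assert int(grouped, 2) == int(hex_str, 16) == 0xFF1919
```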
|
1,332,433 | <blockquote>
<p>Let <span class="math-container">$f:\mathbb R\to \mathbb R$</span> be such that for every sequence <span class="math-container">$(a_n)$</span> of real numbers,</p>
<p><span class="math-container">$\sum a_n$</span> converges <span class="math-container">$\implies \sum f(a_n)$</span> converges</p>
<p>Prove there is some open neighborhood <span class="math-container">$V$</span>, such that <span class="math-container">$0\in V$</span> and <span class="math-container">$f$</span> restricted to <span class="math-container">$V$</span> is a linear function.</p>
</blockquote>
<p>This was given to a friend at his oral exam this week. He told me the problem is hard. I tried it myself, but I haven't made any significant progress.</p>
| Martin Sleziak | 8,297 | <p>I have added an answer to collect various resources where this fact is proven. This answer is <a href="http://meta.math.stackexchange.com/tags/community-wiki/info">community-wiki</a> - feel free to add further references you are aware of.</p>
<p><strong>Papers:</strong></p>
<ul>
<li>Gerald Wildenberg: <em>Convergence-Preserving Functions,</em> The American Mathematical Monthly Vol. 95, No. 6 (Jun. - Jul., 1988), pp. 542-544, <a href="http://www.jstor.org/stable/2322761" rel="nofollow noreferrer">jstor link</a>, doi: <a href="http://dx.doi.org/10.2307/2322761" rel="nofollow noreferrer">10.2307/2322761</a>, <a href="http://www.ams.org/mathscinet-getitem?mr=0945022" rel="nofollow noreferrer">MR0945022</a>, <a href="https://zbmath.org/?q=an:0664.40001" rel="nofollow noreferrer">Zbl 0664.40001</a>.</li>
<li>Arthur Smith: <em>Convergence-Preserving Functions: An Alternative Discussion,</em> The American Mathematical Monthly Vol. 98, No. 9 (Nov., 1991), pp. 831-833, <a href="http://www.jstor.org/stable/2324271" rel="nofollow noreferrer">jstor link</a>, doi: <a href="http://dx.doi.org/10.2307/2324271" rel="nofollow noreferrer">10.2307/2324271</a>, <a href="http://www.ams.org/mathscinet-getitem?mr=1133000" rel="nofollow noreferrer">MR1133000</a>, <a href="https://zbmath.org/?q=an:0783.40002" rel="nofollow noreferrer">Zbl 0783.40002</a></li>
<li>Rado, R. <em>A theorem on infinite series.</em> J. London Math. Soc. 35 1960 273–276; doi: <a href="http://dx.doi.org/10.1112/jlms/s1-35.3.273" rel="nofollow noreferrer">10.1112/jlms/s1-35.3.273</a>, <a href="http://www.ams.org/mathscinet-getitem?mr=0145321" rel="nofollow noreferrer">MR145321</a>, <a href="https://zbmath.org/?q=an:0098.26703" rel="nofollow noreferrer">Zbl 0098.26703</a></li>
</ul>
<p><strong>Books</strong></p>
<ul>
<li>Teodora-Liliana Radulescu, Vicentiu D. Radulescu and Titu Andreescu: <em>Problems in Real Analysis: Advanced Calculus on the Real Axis</em>, Problem 2.5.30, <a href="https://books.google.sk/books?hl=en&lr=&id=_RhAAAAAQBAJ&oi=fnd&pg=PA108" rel="nofollow noreferrer">page 108</a></li>
</ul>
<p><strong>Online resources</strong></p>
<ul>
<li><a href="https://math.stackexchange.com/questions/116964/the-set-of-functions-which-map-convergent-series-to-convergent-series">The set of functions which map convergent series to convergent series</a></li>
</ul>
|
33,778 | <p>Let $X$ and $Y$ be independent exponential variables with rates $\alpha$ and $\beta$, respectively. Find the CDF of $X/Y$.</p>
<p>I tried out the problem, and wanted to check whether my answer of $\frac{\alpha}{ \beta/t + \alpha}$ is correct, where $t$ is the time, which we need in our final answer since we need a CDF.</p>
<p>Can someone verify if this is correct?</p>
| Did | 6,179 | <p>Recall one of the most important characterizations of the exponential distribution:</p>
<blockquote>
<p>The random variable $Y$ is exponentially distributed with rate $\beta$ if and only if $P(Y\geqslant y)=\mathrm{e}^{-\beta y}$ for every $y\geqslant0$.</p>
</blockquote>
<p>Let $Z=X/Y$ and $t\gt0$. Conditioning on $X$ and applying our characterization to $y=X/t$, one gets
$$
P(Z\leqslant t)=P(Y\geqslant X/t)=E(\mathrm{e}^{-\beta X/t}).
$$
Now, the density of the distribution of $X$ is $\alpha\mathrm{e}^{-\alpha x}$ on $x\geqslant0$, hence for every $\gamma\geqslant0$,
$$
E(\mathrm{e}^{-\gamma X})=\int_0^{+\infty}\alpha\mathrm{e}^{-(\alpha+\gamma) x}\mathrm{d}x=\frac{\alpha}{\alpha+\gamma}\left[-\mathrm{e}^{-(\alpha+\gamma) x}\right]_{0}^{+\infty}=\frac{\alpha}{\alpha+\gamma}.
$$
Substituting $\gamma=\beta/t$ yields the formula.</p>
|
33,778 | <p>Let $X$ and $Y$ be independent exponential variables with rates $\alpha$ and $\beta$, respectively. Find the CDF of $X/Y$.</p>
<p>I tried out the problem, and wanted to check whether my answer of $\frac{\alpha}{ \beta/t + \alpha}$ is correct, where $t$ is the time, which we need in our final answer since we need a CDF.</p>
<p>Can someone verify if this is correct?</p>
| dohmatob | 168,758 | <p>Here is a one-line proof.
<span class="math-container">$$
\mathbb P(X/Y \le t) = \mathbb P(Y \ge X/t) = \mathbb E[\exp(-\beta X/t)] = \text{MGF}_X(-\beta/t) = \left(1 + \frac{\beta}{\alpha t}\right)^{-1} = \frac{\alpha}{\alpha + \beta/t} = \frac{\alpha t}{\alpha t + \beta}.
$$</span></p>
<p><strong>N.B.:</strong> For the MGF of an exponential variable, <a href="https://en.wikipedia.org/wiki/Moment-generating_function#Examples" rel="nofollow noreferrer">see this table</a>.</p>
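A Monte Carlo spot check of the formula (a Python sketch, not part of the original answers):

```python
import random

random.seed(0)
alpha, beta, t = 2.0, 3.0, 1.5
n = 200_000
hits = sum(random.expovariate(alpha) / random.expovariate(beta) <= t
           for _ in range(n))
empirical = hits / n
predicted = alpha * t / (alpha * t + beta)   # = alpha / (alpha + beta/t) = 0.5 here
print(empirical, predicted)                  # the two values nearly coincide
```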
|
1,783,439 | <p>What will the basis vectors of the subspace of $\mathbb{R}^n$ consisting of those vectors $A=(a_1,\cdots,a_n)$ such that $a_1+\cdots+a_n=0$ look like?</p>
<p>The initial problem was: what is the dimension of the subspace of $\mathbb{R}^n$ consisting of those vectors $A=(a_1,\cdots,a_n)$ such that $a_1+\cdots+a_n=0$?</p>
<p>I solved this question using the linear mapping $L:\mathbb{R}^n \rightarrow \mathbb{R}$ s.t. $L_{I}(X)=I\cdot X$, where $I=(1,\cdots,1)$ and $X=(x_1,\cdots,x_n)$. The question asks for $\dim[\ker(L)]$, which is equal to $n-1$.</p>
<p>Now, I wanted to solve this question not by using $\dim[V]=\dim[\operatorname{Im}(F)]+\dim[\ker(F)]$ but by looking at the subspace itself. What will a basis of the subspace asked about in the question look like?</p>
| MUH | 214,956 | <p>To prove that they are linearly independent, all we have to do is prove that
$$
x_1v_1+\cdots+x_{n-1}v_{n-1}=0\\
\Rightarrow x_1e_1-x_1e_n+x_2e_2-x_2e_n+\cdots+x_{n-1}e_{n-1}-x_{n-1}e_{n}=0\\
\Rightarrow x_1e_1+x_2e_2+\cdots+x_{n-1}e_{n-1}-(x_1e_n+x_2e_n+\cdots+x_{n-1}e_{n})=0 \\
\Rightarrow x_1=\cdots=x_{n-1}=0 \quad (\text{since } e_1,\dots,e_n \text{ are linearly independent})
$$</p>
<p>where $v_1=e_1-e_n,\cdots,v_{n-1}=e_{n-1}-e_n $</p>
<p>All we need to show now is that any element of the subspace can be expressed in terms of the basis of the subspace i.e.</p>
<p>$v=a_1v_1+\cdots+a_{n-1}v_{n-1}=(a_1,a_2,\cdots,a_{n-1},-(a_1+a_2+\cdots+a_{n-1}))$</p>
<p>which means that $a_1+a_2+\cdots+a_{n}=0$</p>
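The basis $v_i=e_i-e_n$ can be sanity-checked for a small case (a Python sketch, not from the original answer), here with $n=4$:

```python
n = 4

def e(i):
    # standard basis vector (0-indexed)
    return [1 if j == i else 0 for j in range(n)]

basis = [[a - b for a, b in zip(e(i), e(n - 1))] for i in range(n - 1)]
for v in basis:
    assert sum(v) == 0               # each v_i lies in the subspace

# a generic combination a_1 v_1 + a_2 v_2 + a_3 v_3
coeffs = [2, -5, 7]
v = [sum(c * b[j] for c, b in zip(coeffs, basis)) for j in range(n)]
print(v)          # [2, -5, 7, -4]: the last entry is -(2 - 5 + 7)
assert sum(v) == 0
```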
|
1,006,080 | <p>How to solve:
$$\left|\frac{x+4}{ax+2}\right| > \frac{1}{x}$$</p>
<p>What I have done:<br><br></p>
<p>I) $x < 0$:</p>
<p>Obviously, for this case the solution set is $x\in(-\infty, 0), x\neq \frac{-2}{a}$, since the left-hand side is nonnegative while $\frac1x<0$.<br><br></p>
<p>II) $x > 0$:</p>
<p>$$\left|\frac{x+4}{ax+2}\right| > \frac1x$$
$$\frac{|x+4|}{|ax+2|} > \frac1x$$
because $x > 0$ we can transform it to:</p>
<p>$x^2 + 4x > |ax + 2|$<br>
$(x^2 + 4x)^2 > (ax + 2)^2$<br>
$(x^2 + 4x)^2 - (ax + 2)^2 > 0$<br>
$(x^2 + 4x - ax - 2) (x^2 +4x +ax + 2) > 0$<br><br>
$(x^2 + (4-a)x - 2)(x^2 + (4+a)x + 2) > 0$<br><br></p>
<p>I am kind of stuck here. I also need to discuss the solution for various values of the parameter a. What is the easiest way out of this step? Something with the discriminants, perhaps?</p>
| Community | -1 | <p>This is not an answer, just an alternative exposition of your findings as far as I was able to reproduce them.</p>
<p>First, the video describes Archimedes' method for approximating $\pi$ via the following recurrence for the side length of a regular $3\cdot2^n$-gon inscribed in a circle, which I'll call $\ell_n$:
$$ \ell_1=1 \text{ ,}\qquad \ell_{n+1} = \sqrt{2-\sqrt{4-\ell_n^2}} \tag1 $$
Then we approximate $\pi$ as half the perimeter of that $3\cdot2^n$-gon, that is,
$$ h_n = 3\cdot 2^{n-1} \ell_n $$</p>
<p>What you've noticed numerically is that
$$ \frac{h_n-h_{n-1}}{h_{n-1}-h_{n-2}} \approx \frac14 \tag2 $$
(Nice observation, by the way. If you do the trigonometry, it turns out that
$$ \frac{h_n-h_{n-1}}{h_{n-1}-h_{n-2}} = \frac1{2\cos\theta_n(1+\cos\theta_n)}
\text{ ,}\quad\text{ where } \theta_n = \frac{\pi}{3\cdot2^n} $$
which confirms your finding because $\cos\theta\to 1$ as $\theta\to 0$. This also shows that the LHS is always $\ge\frac14$.)</p>
<p>Next, you started approximating $h_n$ using (2) instead of the Archimedean recurrence (1). Actually, you say you have something more precise than $\frac14$ on the RHS of (2), but you don't say what it is, so <strong>I'll just proceed as if you used $\frac14$.</strong></p>
<p>Let's call the values obtained by this method $\hat h_n$. Since you started using this method after the $96$-gon, the sequence is given by the rules
$$ \hat h_4 = h_4 \text{ ,}\quad \hat h_5 = h_5 \text{ ,}\quad \frac{\hat h_n-\hat h_{n-1}}{\hat h_{n-1}-\hat h_{n-2}} = \frac14 $$
By telescoping product, we get
$$ \hat h_n - \hat h_{n-1} = (\hat h_5 - \hat h_4) \prod_{k=6}^n \frac{\hat h_k-\hat h_{k-1}}{\hat h_{k-1}-\hat h_{k-2}}
= \frac{\hat h_5 - \hat h_4}{4^{n-5}} $$
and then, by telescoping sum,
$$ \hat h_n = \hat h_5 + \sum_{k=6}^n (\hat h_k-\hat h_{k-1})
= \hat h_5 + (\hat h_5-\hat h_4)\sum_{k=6}^n \frac1{4^{k-5}}
= \hat h_5 + \frac13 (\hat h_5-\hat h_4) \Big(1-\frac1{4^{n-5}}\Big)
$$
Now you can see that as $n\to\infty$, we have
$$ \hat h_n \to \hat h_5 + \frac13 (\hat h_5-\hat h_4)
= \frac{4h_5-h_4}{3}
\approx 3.1415925335
$$
So, <em>this</em> sequence is not actually tending to $\pi$, but to something a bit smaller.</p>
<p><strong>Update:</strong> I don't know whether Archimedes knew your method, but I will note that the summation of the infinite geometric series used here was absolutely something he knew — he did that as part of his computation of the area of a parabolic sector.</p>
<p><strong>Update:</strong> It occurs to me that another way to use your idea is to compute $h_n$ via (1) as usual, but rather than report that as your estimate, report $(4h_n-h_{n-1})/3$. This is as if you had used (2) starting from $n$ instead of starting from $6$, and it gives a sharper lower bound for $\pi$. (It's a lower bound because, as noted above, in (2) we have $\ge$.) You still have to use (1), unfortunately.</p>
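<p>As a numerical sanity check (a Python sketch added here, not part of the original argument): the code below runs the Archimedean recurrence, confirms that the ratio of successive differences is close to $1/4$, and evaluates the extrapolated limit $(4h_5-h_4)/3$.</p>

```python
import math

def half_perimeters(n_max):
    """h_n = 3 * 2**(n-1) * l_n, where l_n is the side length of a regular
    3*2**n-gon inscribed in the unit circle (l_1 = 1 for the hexagon)."""
    l, h = 1.0, {}
    for n in range(1, n_max + 1):
        h[n] = 3 * 2 ** (n - 1) * l
        l = math.sqrt(2 - math.sqrt(4 - l * l))
    return h

h = half_perimeters(8)

# successive differences shrink by a factor close to 1/4 ...
ratio = (h[6] - h[5]) / (h[5] - h[4])
# ... so the geometric-series limit of the hat-h sequence is (4*h_5 - h_4)/3
extrapolated = (4 * h[5] - h[4]) / 3
```

<p>Both checks agree with the text: the ratio is about $0.25005$, and the extrapolated value is about $3.1415925$, slightly below $\pi$.</p>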
|
3,962,427 | <p>I have proven that the sequence is convergent.</p>
<p>Let <span class="math-container">$x_n=\frac{(2n)!!}{(2n+1)!!}$</span></p>
<p><span class="math-container">$\frac{x_{n+1}}{x_n}$</span> <span class="math-container">$=\frac{2n+2}{2n+3}<1$</span> Therefore the sequence decreases. On the other hand
<span class="math-container">$x_n>0$</span>. Therefore the sequence is bounded below.
Since the sequence is bounded below and decreasing, it must converge.</p>
<p>But from here I don’t know how to find the limit. Could you please help me? And could you please say if what I’ve done is necessary or not?</p>
<p>Thanks in advance!</p>
| Varun Vejalla | 595,055 | <p>Similar to another answer, you have that <span class="math-container">$$x_n = \prod_{k=1}^{n}\frac{2k}{2k+1}$$</span></p>
<p>Then
<span class="math-container">$$\ln(x_n) = \sum_{k=1}^{n}\ln\left( \frac{2k}{2k+1} \right)$$</span></p>
<p>Each term will be negative and with the comparison test, you have that <span class="math-container">$$\sum_{k=1}^{n}\ln\left( \frac{2k}{2k+1} \right) < -\sum_{k=1}^n \frac{1}{2k+1} \to -\infty$$</span></p>
<p>Then <span class="math-container">$\ln(x_n)$</span> must go to <span class="math-container">$-\infty$</span> as well, and <span class="math-container">$x_n = \exp(\ln(x_n)) \to 0$</span>.</p>
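<p>A quick numerical check of this conclusion (a Python sketch; the product form makes the decay to $0$ visible directly):</p>

```python
def x(n):
    """x_n = (2n)!! / (2n+1)!!, computed as the product of 2k/(2k+1)."""
    p = 1.0
    for k in range(1, n + 1):
        p *= 2 * k / (2 * k + 1)
    return p

# decreasing, bounded below by 0, and tending to 0
vals = [x(n) for n in (1, 10, 100, 1000, 10000)]
```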
|
1,025,139 | <p>How to prove the following? Should I use induction or something else?</p>
<blockquote>
<p>Let $n$ and $r$ be positive integers with $n \ge r$. Prove that
$$\binom{r}{r} + \binom{r+1}{r} + \cdots + \binom{n}{r} = \binom{n+1}{r+1}.$$</p>
</blockquote>
<p><em>Attempted start:</em></p>
<p>Basis step: ${\binom{1}{1}} = {\binom{2}{2}}$ true.</p>
<p>Where do I go from here?</p>
| Hypergeometricx | 168,053 | <p>Note that</p>
<p>$$\begin{align}
\binom ab+\binom a{b+1}&=\binom{a+1}{b+1}\\
\Rightarrow \binom ab&=\binom{a+1}{b+1}-\binom a{b+1}\end{align}$$</p>
<p>Hence
$$\begin{align}
\binom ir&=\binom {i+1}{r+1}-\binom i{r+1}\\
\sum_{i=r}^n\binom ir&=\sum_{i=r}^n\binom {i+1}{r+1}-\sum_{i=r}^n\binom i{r+1}\\
&=\sum_{i=r+1}^{n+1}\binom {i}{r+1}-\sum_{i=r+1}^n\binom i{r+1}\\
&=\binom {n+1}{r+1}\qquad\blacksquare
\end{align}$$</p>
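<p>The identity is also easy to verify mechanically for small parameters; a Python sketch:</p>

```python
from math import comb

# sum_{i=r}^{n} C(i, r) == C(n+1, r+1) for all n >= r
for n in range(1, 20):
    for r in range(1, n + 1):
        assert sum(comb(i, r) for i in range(r, n + 1)) == comb(n + 1, r + 1)
```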
|
951,414 | <p>Consider the absolute value equation:</p>
<p>|x| + |x-2| +|x-4|= 6</p>
<p>How to find the solution(s)?</p>
<p>My attempt:</p>
<p>For |x|, we get x for x >= 0, and -x for x < 0</p>
<p>For |x-2|, we get x-2 for x >= 2, and -(x-2) for x < 2</p>
<p>For |x-4|, we get x-4 for x >= 4, and -(x-4) for x < 4</p>
<p>After this, I'm confused about how to find the solutions. Is there an easy way to find them?</p>
<p>Thanks</p>
| Alexander Vigodner | 92,276 | <p>You have 3 "switching points" for $|x|$, $|x-2|$ and $|x-4|$, namely $x=0,2,4$. The easiest way is to picture the real line with these points and split the problem into the intervals $(-\infty,0],[0,2],[2,4],[4,\infty)$, treated separately. On each interval you can remove the absolute values with the appropriate signs and immediately solve the resulting linear equation, checking that the final solution lies in the corresponding interval.</p>
|
951,414 | <p>Consider the absolute value equation:</p>
<p>|x| + |x-2| +|x-4|= 6</p>
<p>How to find the solution(s)?</p>
<p>My attempt:</p>
<p>For |x|, we get x for x >= 0, and -x for x < 0</p>
<p>For |x-2|, we get x-2 for x >= 2, and -(x-2) for x < 2</p>
<p>For |x-4|, we get x-4 for x >= 4, and -(x-4) for x < 4</p>
<p>After this, I'm confused about how to find the solutions. Is there an easy way to find them?</p>
<p>Thanks</p>
| drhab | 75,923 | <ul>
<li>$x<0$ leads to $6-3x=6$ </li>
</ul>
<p>Solve this equation and check whether the solution satisfies $x<0$. If so, then you have found a solution of the original equation. If not, then no solution exists that satisfies $x<0$.</p>
<p>Do the same for the cases:</p>
<ul>
<li><p>$0\leq x<2$</p></li>
<li><p>$2\leq x<4$</p></li>
<li><p>$4\leq x$</p></li>
</ul>
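<p>The case split can be carried out mechanically; a Python sketch (each interval contributes an affine equation, and a candidate is kept only when it lies in its interval):</p>

```python
def f(x):
    return abs(x) + abs(x - 2) + abs(x - 4)

# (slope, intercept, membership test) of f on each interval
pieces = [(-3, 6, lambda x: x <= 0),           # x <= 0:      6 - 3x
          (-1, 6, lambda x: 0 <= x <= 2),      # 0 <= x <= 2: 6 - x
          (1, 2, lambda x: 2 <= x <= 4),       # 2 <= x <= 4: x + 2
          (3, -6, lambda x: x >= 4)]           # x >= 4:      3x - 6

solutions = set()
for slope, intercept, in_interval in pieces:
    x = (6 - intercept) / slope                # solve slope*x + intercept = 6
    if in_interval(x):
        solutions.add(x)
```

<p>This yields the two solutions $x=0$ and $x=4$.</p>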
|
951,414 | <p>Consider the absolute value equation:</p>
<p>|x| + |x-2| +|x-4|= 6</p>
<p>How to find the solution(s)?</p>
<p>My attempt:</p>
<p>For |x|, we get x for x >= 0, and -x for x < 0</p>
<p>For |x-2|, we get x-2 for x >= 2, and -(x-2) for x < 2</p>
<p>For |x-4|, we get x-4 for x >= 4, and -(x-4) for x < 4</p>
<p>After this, I'm confused about how to find the solutions. Is there an easy way to find them?</p>
<p>Thanks</p>
| Mark Bennet | 2,906 | <p>Think of the modulus function measuring the distance from fixed points. So that $|x|=|x-0|$ is the distance from $0$.</p>
<p>So your sum measures the total distance from the points $0,2,4$</p>
<p>If $x$ is greater than $4$, the first two distances are greater than $4$ and $2$, so the total already exceeds $6$.</p>
<p>If $x$ is less than $0$ we reach the same conclusion with the last two points by symmetry.</p>
<p>If $x$ is between $0$ and $2$ the total distance to $0$ and $2$ is always $2$, and the remaining distance is less than $4$. Similarly by symmetry for the points between $2$ and $4$.</p>
<p>Now check the points $0$, $2$, $4$ themselves</p>
|
1,483,191 | <p>It's now known that to reach the solved position of the Rubik's cube from any other position you need at most $20$ moves (a $180^{\circ}$ turn of a face is counted as one move, not two).</p>
<p>See for example here : <a href="http://www.cube20.org/" rel="nofollow">http://www.cube20.org/</a> </p>
<blockquote>
<p>Now I'll define $d(A,B)$ be the <em>distance</em> between the two positions of the cube $A$ and $B$.That is the minimal number of moves to reach $B$ from $A$ .</p>
</blockquote>
<p>Let's denote with $P$ the set of all the possible positions and with $\phi$ the solved position .</p>
<p>From the above :</p>
<blockquote>
<p>$$\max_{A \in P} d(A,\phi)=20$$ </p>
</blockquote>
<p>Now my question is :</p>
<blockquote>
<p>What is $$\max_{A,B \in P} d(A,B)$$
That is: how <em>far</em> can two positions be from each other?</p>
</blockquote>
<p>What I have :</p>
<p>It's obvious that $d(A,B) \leq d(A,\phi)+d(B,\phi)$ (triangle inequality :))</p>
<p>So $$d(A,B) \leq 20+20=40$$ but I kind of doubt that $40$ is the maximum .</p>
<p>My intuition is that it's in the $20$-$30$ range .</p>
<p>What do you think?</p>
<p>Thanks in advance for all the help !</p>
| Servaes | 30,382 | <p>Let $G$ be the group of legal moves of the Rubik's cube under concatenation. Then there is a one-to-one correspondence between the set $P$ of legal positions of the Rubik's cube and the group $G$, by associating to every position $X\in P$ a concatenation of moves $\varphi(X)\in G$ leading from the starting position to $X$. This immediately yields a distance function $d'$ on $G$ defined by
$$d'(g,h):=d(\varphi^{-1}(g),\varphi^{-1}(h)),$$
for all $g,h\in G$. Now it is clear that for all $g_1,g_2,h\in G$ we have
$$d'(g_1,g_2)=d'(g_1h,g_2h),$$
because a series of moves leading from $\varphi^{-1}(g_1)$ to $\varphi^{-1}(g_2)$ also leads from $\varphi^{-1}(g_1h)$ to $\varphi^{-1}(g_2h)$, and vice versa, as such series of moves represent $(g_2h)(g_1h)^{-1}=g_2g_1^{-1}\in G$. Hence for all $A,B\in P$ we have
\begin{eqnarray*}
d(A,B)&=&d'(\varphi(A),\varphi(B))=d'(\varphi(A)\varphi(B)^{-1},\varphi(B)\varphi(B)^{-1})\\
&=&d'(\varphi(A)\varphi(B)^{-1},\varphi(\phi))=d(AB^{-1},\phi),
\end{eqnarray*}
which shows that $d(A,B)\leq20$ holds for all $A,B\in P$.</p>
<hr>
<p>On a more conceptual note; considering the 'shifted' distance function
$$d''(A_1,A_2):=d'(\varphi(A_1)\varphi(B)^{-1},\varphi(A_2)\varphi(B)^{-1}),$$
which I implicitly use in the last part of my answer, comes down to considering $B$ as the 'solved state' of the Rubik's cube instead of $\phi$. The rest of the argument then says that a series of moves leads from $A$ to $B$ if and only if it leads from "$AB^{-1}$" to $\phi$. This is also the basic idea of the other answer by Mike Haskel.</p>
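<p>The key step (right-invariance of the word metric, hence $d(A,B)=d(AB^{-1},\phi)$) can be illustrated on a toy group. The Python sketch below uses $S_3$ with the two adjacent transpositions as "moves" in place of the Rubik's cube group (an assumption made purely to keep the state space tiny); the word metric is computed by breadth-first search on the Cayley graph.</p>

```python
from collections import deque

def compose(p, q):                 # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

gens = [(1, 0, 2), (0, 2, 1)]      # adjacent transpositions, each self-inverse
e = (0, 1, 2)

# word length of every group element, by BFS from the identity
dist = {e: 0}
queue = deque([e])
while queue:
    g = queue.popleft()
    for s in gens:
        h = compose(s, g)
        if h not in dist:
            dist[h] = dist[g] + 1
            queue.append(h)

def d(a, b):                       # moves needed to get from position a to b
    return dist[compose(b, inverse(a))]

group = list(dist)
# right-invariance: d(ak, bk) = d(a, b), so the max over all pairs equals the
# max distance to the identity (the "diameter" -- 20 for the real cube)
assert all(d(compose(a, k), compose(b, k)) == d(a, b)
           for a in group for b in group for k in group)
assert max(d(a, b) for a in group for b in group) == max(dist.values())
```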
|
211,052 | <p>I want to modify a list of table rows by adding an extra column. For that I <a href="https://reference.wolfram.com/language/ref/Map.html" rel="nofollow noreferrer"><code>Map</code></a> the data with a pure function that evaluates the new column value from the existing ones and reconstruct a new list from that and the initial <a href="https://reference.wolfram.com/language/ref/Part.html" rel="nofollow noreferrer"><code>Part</code>s</a> of the list:</p>
<pre><code>b = 5000
data = {{1, 45000., 27500., "Inverted"},
{2, 22500., 18333.3, ""},
{3, 15000., 13750., "Inverted"},
{4, 11250., 11000., ""},
{5, 9000., 9166.67, "Inverted"},
{6, 7500., 7857.14, ""},
{7, 6428.57, 6875., "Inverted"}}
{#[[1;;3]], #[[1]]> 2b, #[[4]]} & /@ data
</code></pre>
<blockquote>
<pre><code>{{{1,45000.,27500.},False,Inverted},
{{2,22500.,18333.3},False,},
{{3,15000.,13750.},False,Inverted},
{{4,11250.,11000.},False,},
{{5,9000.,9166.67},False,Inverted},
{{6,7500.,7857.14},False,},
{{7,6428.57,6875.},False,Inverted}}
</code></pre>
</blockquote>
<p>The problem is that <code>#[[1;;3]]</code> returns a list, so I end up with nested lists instead of flat records.</p>
<p>As a workaround, I <a href="https://reference.wolfram.com/language/ref/Flatten.html" rel="nofollow noreferrer"><code>Flatten</code></a> each record:</p>
<pre><code> Flatten[{#[[1;;3]], #[[1]]> 2b, #[[4]]}] & /@ data
</code></pre>
<blockquote>
<pre><code>{{1,45000.,27500.,False,Inverted},
{2,22500.,18333.3,False,},
{3,15000.,13750.,False,Inverted},
{4,11250.,11000.,False,},
{5,9000.,9166.67,False,Inverted},
{6,7500.,7857.14,False,},
{7,6428.57,6875.,False,Inverted}}
</code></pre>
</blockquote>
<p>It works <em>in that particular</em> case. But it is not entirely satisfactory since, if the initial records already contained list items, they would be flattened too.</p>
<p>Is there a more generic solution to build a list from items and list spans?</p>
| Sylvain Leroux | 68,791 | <p>While posting my question, the system suggested the <a href="/questions/tagged/concatenation" class="post-tag" title="show questions tagged 'concatenation'" rel="tag">concatenation</a> tag, which in turn led me to the <a href="https://reference.wolfram.com/language/ref/Catenate.html" rel="nofollow noreferrer"><code>Catenate</code></a> function. It works at the expense of a few additional brackets:</p>
<pre><code>Catenate[{#[[1;;3]], {#[[1]]> 2b}, {#[[4]]}}] & /@ data
(*       ^                  ^              ^        ^ *)
</code></pre>
<blockquote>
<pre><code>{{1,45000.,27500.,False,Inverted},{2,22500.,18333.3,False,},{3,15000.,13750.,False,Inverted},{4,11250.,11000.,False,},{5,9000.,9166.67,False,Inverted},{6,7500.,7857.14,False,},{7,6428.57,6875.,False,Inverted}}
</code></pre>
</blockquote>
|
211,052 | <p>I want to modify a list of table rows by adding an extra column. For that I <a href="https://reference.wolfram.com/language/ref/Map.html" rel="nofollow noreferrer"><code>Map</code></a> the data with a pure function that evaluates the new column value from the existing ones and reconstruct a new list from that and the initial <a href="https://reference.wolfram.com/language/ref/Part.html" rel="nofollow noreferrer"><code>Part</code>s</a> of the list:</p>
<pre><code>b = 5000
data = {{1, 45000., 27500., "Inverted"},
{2, 22500., 18333.3, ""},
{3, 15000., 13750., "Inverted"},
{4, 11250., 11000., ""},
{5, 9000., 9166.67, "Inverted"},
{6, 7500., 7857.14, ""},
{7, 6428.57, 6875., "Inverted"}}
{#[[1;;3]], #[[1]]> 2b, #[[4]]} & /@ data
</code></pre>
<blockquote>
<pre><code>{{{1,45000.,27500.},False,Inverted},
{{2,22500.,18333.3},False,},
{{3,15000.,13750.},False,Inverted},
{{4,11250.,11000.},False,},
{{5,9000.,9166.67},False,Inverted},
{{6,7500.,7857.14},False,},
{{7,6428.57,6875.},False,Inverted}}
</code></pre>
</blockquote>
<p>The problem is that <code>#[[1;;3]]</code> returns a list, so I end up with nested lists instead of flat records.</p>
<p>As a workaround, I <a href="https://reference.wolfram.com/language/ref/Flatten.html" rel="nofollow noreferrer"><code>Flatten</code></a> each record:</p>
<pre><code> Flatten[{#[[1;;3]], #[[1]]> 2b, #[[4]]}] & /@ data
</code></pre>
<blockquote>
<pre><code>{{1,45000.,27500.,False,Inverted},
{2,22500.,18333.3,False,},
{3,15000.,13750.,False,Inverted},
{4,11250.,11000.,False,},
{5,9000.,9166.67,False,Inverted},
{6,7500.,7857.14,False,},
{7,6428.57,6875.,False,Inverted}}
</code></pre>
</blockquote>
<p>It works <em>in that particular</em> case. But it is not entirely satisfactory since, if the initial records already contained list items, they would be flattened too.</p>
<p>Is there a more generic solution to build a list from items and list spans?</p>
| kglr | 125 | <pre><code>{## & @@ #[[1 ;; 3]], #[[1]] > 2 b, #[[4]]} & /@ data
</code></pre>
<p>or</p>
<pre><code>{Sequence @@ #[[1 ;; 3]], #[[1]] > 2 b, #[[4]]} & /@ data
</code></pre>
<blockquote>
<p>{{1, 45000., 27500., False, "Inverted"},<br>
{2, 22500., 18333.3, False,
""},<br>
{3, 15000., 13750., False, "Inverted"},<br>
{4, 11250., 11000.,
False, ""},<br>
{5, 9000., 9166.67, False, "Inverted"},<br>
{6, 7500.,
7857.14, False, ""},<br>
{7, 6428.57, 6875., False, "Inverted"}}</p>
</blockquote>
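<p>As an aside (a sketch in Python, not Mathematica): the same "splice a span in flat" idea is iterable unpacking, where <code>*row[:3]</code> plays the role of <code>Sequence @@ #[[1;;3]]</code>:</p>

```python
b = 5000
data = [[1, 45000.0, 27500.0, "Inverted"],
        [2, 22500.0, 18333.3, ""],
        [3, 15000.0, 13750.0, "Inverted"]]

# *row[:3] splices the slice in flat instead of nesting it as a sublist
result = [[*row[:3], row[0] > 2 * b, row[3]] for row in data]
```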
|
2,846,928 | <blockquote>
<p>If I have a complex number $z \in \mathbb{C}$ with absolute value $|z| = 1$, how do I show that $-i \frac {z-1}{z+1}$ is real? </p>
</blockquote>
| fleablood | 280,126 | <p>Well $\overline{z + 1} =Re(z+1) - iIm(z+1) = (Re z + 1) - iIm(z) = Re(z) - iIm(z) + 1 = \overline z + 1$ for all complex numbers.</p>
<p>So</p>
<p>$-i \frac {z-1}{z + 1}= -i\frac {(z-1)(\overline z +1)}{(z+1)\overline{(z+1)}}$</p>
<p>$=-i \frac {z\overline z -\overline z + z - 1}{|z + 1|^2}$</p>
<p>$= -i \frac {|z|^2 +i2Im(z) - 1}{|z + 1|^2}$</p>
<p>$=-i \frac {i2Im(z)}{|z+1|^2} = \frac {2Im(z)}{|z+1|^2}$ which is real.</p>
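<p>A numerical spot check of this computation (a Python sketch; points on the unit circle are generated as $e^{it}$, avoiding $z=-1$ where the expression is undefined):</p>

```python
import cmath

def w(z):
    return -1j * (z - 1) / (z + 1)

for t in (0.1, 1.0, 2.5, -2.0):
    z = cmath.exp(1j * t)                     # |z| = 1
    assert abs(w(z).imag) < 1e-12             # the value is real
    # and it matches the closed form 2 Im(z) / |z+1|^2
    assert abs(w(z).real - 2 * z.imag / abs(z + 1) ** 2) < 1e-12
```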
|
3,164,343 | <p>Machine generates one random integer in range <span class="math-container">${[0;40)}$</span> on every spin.<br>
You should choose 5 numbers in that range. </p>
<p>Then the machine will spit out <strong>5</strong> numbers (<em>numbers are independent of each other</em>). </p>
<p><strong>what is the probability that you will get exactly two numbers correct?</strong></p>
<hr>
<p><strong>My logic:</strong><br>
You should get two of them right. chance of that is: <span class="math-container">$r = { \left( 1 \over 40 \right)^ 2 }$</span><br>
You should get 3 of wrong. Chance of that is: <span class="math-container">$w = { \left( 39 \over 40 \right)^3 }$</span><br>
As order doesn't matter answer should be: <span class="math-container">$$ans = { rw \over 2!3!}$$</span>
Simulator tells me I'm wrong. Where is my logic flawed?</p>
<p><strong>P.s. Machine can spit out duplicates</strong></p>
| Balakrishnan Rajan | 652,935 | <p>Let's say you have chosen 5 distinct numbers. Then the probability that a single spin produces one of your chosen numbers is <span class="math-container">$\frac{5}{40}$</span>. This must happen on exactly two of the spins and fail on the other three, giving <span class="math-container">$\left(\frac{5}{40}\right)^2\times\left(\frac{35}{40}\right)^3$</span>. The two successful spins can be chosen in <span class="math-container">$\frac{5!}{2! \times 3!}$</span> (i.e. <span class="math-container">$5 \choose 2$</span>) ways. </p>
<p>So, is it <span class="math-container">$\left(\left(\frac{5}{40}\right)^2 \times \left(\frac{35}{40}\right)^3 \times {5 \choose 2} \right)$</span>?</p>
<p>However, this assumes that you choose 5 distinct numbers in that range, which is the strategy that maximizes the "correct" score. If you are picking at random without a strategy, then <a href="https://math.stackexchange.com/a/3164437/652935">lulu's</a> answer is the one you are looking for. </p>
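<p>This value can be confirmed exactly (a Python sketch; the brute-force part uses a scaled-down machine with range $8$ and a single chosen value, which gives the same per-spin hit probability $5/40 = 1/8$):</p>

```python
from fractions import Fraction
from itertools import product
from math import comb

p = Fraction(5, 40)                       # one spin hits a chosen number
formula = comb(5, 2) * p**2 * (1 - p)**3  # exactly two of the five spins hit

# brute force: enumerate all 8**5 outcomes of the scaled-down machine
hits = sum(1 for spins in product(range(8), repeat=5)
           if sum(s == 0 for s in spins) == 2)
assert formula == Fraction(hits, 8**5)
```

<p>Both give $3430/32768 \approx 0.1047$.</p>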
|
3,745,452 | <p>I'm evaluating the surface area obtained by rotating a parametric curve, and I'm stuck at this point:
<span class="math-container">$$S=2\pi \int_{0}^{2\pi} x\cos(x)\sqrt{x^2+1}\, \mathrm{d}x.$$</span>
Is there any way to work this integral further without a calculator?</p>
| Frosty | 780,483 | <p>This integral cannot be evaluated in closed form by hand; the assignment only asked to <em>set up</em> the integral, which was stated in the instructions and I overlooked it. Here is the original problem for anyone curious.
<a href="https://www.bartleby.com/solution-answer/chapter-102-problem-57e-calculus-early-transcendentals-8th-edition/9781285741550/set-up-an-integral-that-represents-the-area-of-the-surface-obtained-by-rotating-the-given-curve/4001ed01-52f2-11e9-8385-02ee952b546e" rel="nofollow noreferrer">https://www.bartleby.com/solution-answer/chapter-102-problem-57e-calculus-early-transcendentals-8th-edition/9781285741550/set-up-an-integral-that-represents-the-area-of-the-surface-obtained-by-rotating-the-given-curve/4001ed01-52f2-11e9-8385-02ee952b546e</a></p>
|
7,354 | <p>Some of my students refer to there being an invisible $-1$ in front of the expression $-(x + 4)$ or in the exponent of $x$. While it is not phrased mathematically, I am OK with them saying this because it reminds them to distribute fully before simplifying, etc. It got me thinking, though: is there a reason not to teach students to always write in the $1$ wherever there is a single variable/unknown/etc., such as $1(x^1+40)-1(3x^1 - 2)$? I know that it is not done in general because it is incredibly repetitive and annoying, but is there any reason not to teach students to do this? Eventually they will be comfortable enough that they can leave the $1$ implicit, but I feel like a lot of my students would benefit from writing it, especially when simplifying exponential expressions and distributing negative signs. Are there any downsides to this, or is it just de facto mathematics education?</p>
| Karl | 4,668 | <p>I see this as two separate questions:</p>
<p>1) Are there any downsides?</p>
<p>2) Should I condone it for my students?</p>
<p>1) As mentioned in other answers, it can make the transition into deeper areas more difficult than need be. (Understanding for groups)</p>
<p>2) I think the answer to the questions can only be answered by the students you teach.</p>
<p>Some students have so little confidence in themselves that building this up should be a priority for any teacher. If leaving a $-1x$ is right for my students at that time then I will do it. </p>
|
7,354 | <p>Some of my students refer to there being an invisible $-1$ in front of the expression $-(x + 4)$ or in the exponent of $x$. While it is not phrased mathematically, I am OK with them saying this because it reminds them to distribute fully before simplifying, etc. It got me thinking, though: is there a reason not to teach students to always write in the $1$ wherever there is a single variable/unknown/etc., such as $1(x^1+40)-1(3x^1 - 2)$? I know that it is not done in general because it is incredibly repetitive and annoying, but is there any reason not to teach students to do this? Eventually they will be comfortable enough that they can leave the $1$ implicit, but I feel like a lot of my students would benefit from writing it, especially when simplifying exponential expressions and distributing negative signs. Are there any downsides to this, or is it just de facto mathematics education?</p>
| Rusty Core | 7,930 | <p>Like another poster, I also see two separate questions here, but different ones.</p>
<p>1) Should we explicitly write the <strong>identity value</strong>? 1 is the identity for multiplication and division, while 0 is the identity for addition and subtraction. Adding 0 does not change the result; likewise, multiplying or dividing by 1 does not change the result. This is why they are omitted for brevity, but if one wants to write them out, it will not change the result, although when I was in school I would get half a grade off for an extra constant or extra parentheses.</p>
<p>2) Should one distinguish subtraction of a positive value from addition of its negative opposite? There was once a custom to use inline + and - for addition and subtraction, and to write the sign of a term above the term. But I guess because mathematicians are a lazy bunch, they use the property that "subtracting a number is the same as adding its opposite," so two operations (binary subtraction and unary sign) have been condensed into one. This is, in fact, quite a hard idea for many students. I would not mind if some wrote -(5+x) as (-1) * (5+x) or something like this, if it helps.</p>
|
2,065,027 | <p>$$\newcommand{\gcd}{\text{gcd}}$$</p>
<blockquote>
<p>Prove: if $d=\gcd(m,n)$ so $\gcd\left(\frac{m}{d},\frac{n}{d}\right)=1$</p>
</blockquote>
<p>Intuitively it is obvious, but I am having a hard time formalizing the proof. What I have come up with is this:</p>
<p>$d=\gcd(m,n)$, so $d\mid m$ and $d\mid n$; therefore $m=dx$ and $n=dy$. Now if $\gcd\left(\frac{m}{d},\frac{n}{d}\right)= c \neq 1$, then $\frac{m}{d}$ and $\frac{n}{d}$ have the common factor $c>1$, so $cd$ divides both $m$ and $n$, which contradicts the fact that $d$ is the <em>greatest</em> common divisor.</p>
| Anurag A | 68,092 | <p>If $d=\gcd(m,n)$, then $\exists \, x,y \in \mathbb{Z}$ such that $d=mx+ny$. From this we can also write
$$1=\frac{m}{d}x+\frac{n}{d}y.$$
Thus $\gcd(m/d,n/d)=1$.</p>
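<p>The Bezout argument can be checked computationally (a Python sketch using the extended Euclidean algorithm):</p>

```python
from math import gcd

def extended_gcd(m, n):
    """Return (g, x, y) with g = gcd(m, n) = m*x + n*y."""
    if n == 0:
        return m, 1, 0
    g, x, y = extended_gcd(n, m % n)
    return g, y, x - (m // n) * y

for m, n in [(12, 18), (35, 14), (100, 7), (48, 180)]:
    d, x, y = extended_gcd(m, n)
    assert d == gcd(m, n) and m * x + n * y == d
    # dividing Bezout's identity by d gives 1 = (m/d)x + (n/d)y
    assert (m // d) * x + (n // d) * y == 1
    assert gcd(m // d, n // d) == 1
```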
|
1,736,502 | <p>I start with a filtered probability space $(\Omega,\mathcal{F},(\mathcal{F}_t),P)$. I assume that the filtration is right-continuous. On this probability space I define a supermartingale $M$. Now suppose that I can find another process $N$ such that $N$ is a cadlag modification of $M$. Hence it is also a supermartingale. Does this imply that the mapping $t\mapsto E[M_t]$ is right-continuous?</p>
<p>I reasoned as follows. I consider an arbitrary sequence of rationals $t_n \downarrow t$ for some $t\geq 0$. I need to show $\lim_{t_n \downarrow t} E[M_{t_n}-M_t] = 0$.</p>
<p>$$\lim_{t_n \downarrow t} E[M_{t_n}-M_t] = \lim_{t_n \downarrow t} E[M_{t_n}-M_t+N_t-N_t] $$
It feels very tempting to say $\lim_{t_n \downarrow t} E[M_{t_n}-N_t] = 0$ because of something like the Levy-Doob downward convergence theorem. This would then give
$$\lim_{t_n \downarrow t} E[M_{t_n}-M_t] = \lim_{t_n \downarrow t} E[-M_t+N_t] = 0 $$
by the fact that $M$ and $N$ are modifications.</p>
<p>Can someone tell me whether what I am saying is sensible and if possible, what I would need to do to make the argument rigorous?</p>
| John Dawkins | 189,130 | <p>Because $N$ is a modification of $M$, $E(M_t)=E(N_t)$ for all $t\ge 0$. What do you know about the right continuity of $t\mapsto E(N_t)$?</p>
|
153,186 | <p>How can I prove that the double negation elimination is not provable in constructive logic?<br>
To clarify, double negation elimination is the following statement: </p>
<p>$$\neg\neg q \rightarrow q$$</p>
| Zhen Lin | 5,191 | <p>There are two ways to go about this:</p>
<ol>
<li><p><strong>Proof theory.</strong> We analyse the formal proofs in the inference system given, and show that there can be no formal proof of double negation elimination. This is difficult and depends a lot on the details of the inference system.</p></li>
<li><p><strong>Model theory.</strong> We construct a model of intuitionistic propositional logic in which double negation elimination is visibly false. This is much easier and less sensitive to details, but is conceptually more challenging.</p></li>
</ol>
<p>First of all, what is a model of intuitionistic propositional logic? It is an algebraic structure $\mathfrak{A}$ equipped with constants $\top$ and $\bot$ as well as binary operations $\land$, $\lor$, $\to$, plus a partial order $\le$, such that the following soundness rule is valid: given formulae $\phi$ and $\psi$ in the language of intuitionistic propositional logic with propositional variables $x, y, z, \ldots$, if the sequent $\phi \vdash \psi$ is provable, then $\phi \le \psi$ in $\mathfrak{A}$ for all choices of $x, y, z, \ldots$ in $\mathfrak{A}$.</p>
<p>For example, $\mathfrak{A}$ could be a <strong>Heyting algebra</strong>, which is a algebraic structure satisfying some axioms. First of all $\mathfrak{A}$ is a (bounded) lattice:</p>
<ul>
<li>Unit laws:
\begin{align}
\top \land x & = x &
\bot \lor x & = x \\
x \land \top & = x &
x \lor \bot & = x
\end{align}</li>
<li>Associativity, commutativity, and idempotence:
\begin{align}
(x \land y) \land z & = x \land (y \land z) &
(x \lor y) \lor z & = x \lor (y \lor z) \\
x \land y & = y \land x &
x \lor y & = y \lor x \\
x \land x & = x &
x \lor x & = x \\
\end{align}</li>
<li>Absorption law:
\begin{align}
x \land (x \lor y) & = x &
x \lor (x \land y) & = x
\end{align}</li>
</ul>
<p>One can then verify that $x \le y$ defined by $x \land y = x$ is a partial order on $\mathfrak{A}$ and that $\top$, $\bot$, $\land$, $\lor$ have their usual order-theoretic meanings. We then add some axioms for $\to$:</p>
<ul>
<li>Distributive law:
$$x \to (y \land z) = (x \to y) \land (x \to z)$$</li>
<li>Internal tautology:
$$x \to \top = \top$$</li>
<li>Internal weakening:
$$y \to (x \land y) = y$$</li>
<li>Internal modus ponens:
$$x \land (x \to y) = x \land y$$</li>
</ul>
<p><strong>Exercise.</strong> Show that $y \le z$ implies $(x \to y) \le (x \to z)$, and $x \le ((x \land y) \to y)$ and $((x \to y) \land x) \le y$, and hence or otherwise that $x \land y \le z$ if and only if $x \le (y \to z)$.</p>
<p><strong>Exercise.</strong> Verify the soundness rule for interpreting intuitionistic propositional logic in a Heyting algebra.</p>
<p><strong>Proposition.</strong> Double negation elimination is not valid in intuitionistic propositional logic.</p>
<p><em>Proof.</em> We construct a three-element Heyting algebra to falsify double negation elimination. Let $\mathfrak{A} = \{ \bot, \omega, \top \}$ and define the binary operations as below:
\begin{align}
\begin{array}{|r|ccc|}
\hline
\land & \bot & \omega & \top \\
\hline
\bot & \bot & \bot & \bot \\
\omega & \bot & \omega & \omega \\
\top & \bot & \omega & \top \\
\hline
\end{array} &&
\begin{array}{|r|ccc|}
\hline
\lor & \bot & \omega & \top \\
\hline
\bot & \bot & \omega & \top \\
\omega & \omega & \omega & \top \\
\top & \top & \top & \top \\
\hline
\end{array} &&
\begin{array}{|r|ccc|}
\hline
\to & \bot & \omega & \top \\
\hline
\bot & \top & \top & \top \\
\omega & \bot & \top & \top \\
\top & \bot & \omega & \top \\
\hline
\end{array}
\end{align}
Then, observe that $((\omega \to \bot) \to \bot) = \top$, but $\top \nleq \omega$, so $((\omega \to \bot) \to \bot) \to \omega = \omega \neq \top$. Therefore $(\neg\neg x) \to x$ cannot be a theorem of intuitionistic propositional logic.</p>
<p><em>Remark.</em> De Morgan's laws remain valid in this Heyting algebra. Find one which falsifies part of De Morgan's laws. (See <a href="https://math.stackexchange.com/questions/120187/does-de-morgans-laws-hold-in-propositional-intuitionistic-logic">this question</a>.)</p>
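<p>The three-element algebra above can be checked mechanically (a Python sketch; since the algebra is a chain $\bot < \omega < \top$, the implication reduces to $x \to y = \top$ if $x \le y$ and $y$ otherwise, which reproduces the table):</p>

```python
elems = ["bot", "w", "top"]                    # the chain bot < w < top
rank = {"bot": 0, "w": 1, "top": 2}
leq = lambda x, y: rank[x] <= rank[y]
meet = lambda x, y: x if leq(x, y) else y
join = lambda x, y: y if leq(x, y) else x
imp = lambda x, y: "top" if leq(x, y) else y   # Heyting implication on a chain

# the residuation law characterising Heyting implication:
#   x /\ y <= z   iff   x <= (y -> z)
for x in elems:
    for y in elems:
        for z in elems:
            assert leq(meet(x, y), z) == leq(x, imp(y, z))

# double negation of w is top, but top is not <= w: ~~x -> x fails at x = w
neg = lambda x: imp(x, "bot")
assert neg(neg("w")) == "top" and not leq("top", "w")
```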
|
153,186 | <p>How can I prove that the double negation elimination is not provable in constructive logic?<br>
To clarify, double negation elimination is the following statement: </p>
<p>$$\neg\neg q \rightarrow q$$</p>
| H. Kabayakawa | 32,428 | <p>There is an intuitionistic idea which becomes more valuable with the arrival of computational mathematics. Roughly speaking, in constructive logic or math, when you say "object A exists" you are saying "we know some way to build up A". This position implies that the law of the excluded middle, and with it the rule $\neg(\neg q)\rightarrow q$, fails.</p>
<p>The classic example belongs to Brouwer (1925). We write the decimal expansion of $\pi$, and below it the number $\rho=0.333\ldots$, which we truncate at the place where the sequence $0123456789$ first appears in $\pi$. With classical logic we say that $\rho$ is rational. If the sequence $0123456789$ never appears, $\rho$ will equal $\frac{1}{3}$. With the constructive approach you can't prove $\rho$ is rational before you either find the first occurrence of $0123456789$ in $\pi$ or have a proof that this sequence never appears in the decimal expansion of $\pi$.</p>
<p>But in constructive math you can prove that it is contradictory that $\rho$ is not rational. If we suppose $\rho$ isn't rational (equivalently, that we have a construction to tell $\rho$ apart from any rational number), then $\rho=0.3333\ldots 3$ must be impossible, so the sequence $0123456789$ doesn't appear in $\pi$. Therefore $\rho=\frac{1}{3}$, which is impossible too. So it is contradictory that $\rho$ is not rational, but we don't have a proof that $\rho$ is rational.</p>
<p>EDIT:</p>
<p>My answer is complementary to the other, more formal answers. It goes to the main point of giving up $\neg(\neg q)\rightarrow q$. Essentially my answer is intuitionistic, not a formalism.</p>
|
130,235 | <p>I need to find </p>
<p>$$f(n) = \int^\infty_0 t^{n-1} e^{-t} dt$$</p>
<p>So I think I find the indefinite integral first? But what do I do with $n$, since I am integrating with respect to $t$?</p>
| kspacja | 22,289 | <p>I think $n$ is a parameter. So you have to integrate with respect to $t$, treating $n$ like an ordinary number.</p>
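<p>Treating $n$ as a fixed parameter, the integral can be evaluated numerically; for positive integers it comes out as $(n-1)!$ (the integral is the well-known Gamma function $\Gamma(n)$). A rough Python sketch (trapezoid rule, truncating the integral at $t=60$, where the tail is negligible):</p>

```python
import math

def f(n, upper=60.0, steps=200_000):
    """Numerically integrate t**(n-1) * exp(-t) on [0, upper]."""
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * t ** (n - 1) * math.exp(-t)
    return total * h

# f(n) matches (n-1)! for small integer n
for n in range(1, 6):
    assert abs(f(n) - math.factorial(n - 1)) < 1e-4
```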
|
1,815,918 | <p>Let $G$ be a group and $|G|=p^n$ for some prime $p$. If $f:G\to H$ is a surjective homomorphism, how do I know $H=f(G)$ also has cardinality a power of $p$?</p>
| wgrt | 317,545 | <p>I did it like this.</p>
<p>Let $X = P \cap I$, with $P$ the even functions and $I$ the odd functions on $[-1,1]$.</p>
<p>Every $f$ on $[-1,1]$ splits into an even part plus an odd part:</p>
<p>Thus $f(t)= \dfrac{f(t)+f(-t)}{2}+\dfrac{f(t)-f(-t)}{2}$</p>
<p>Let $ z \in P \cap I$. Then $z(t)=z(-t)=-z(-t)$ for any $t \in [-1,1]$.</p>
<p>So $z = 0$.</p>
<p>Is this correct?</p>
|
3,066,762 | <p>This is a very interesting identity, but I don't know how to prove it.
note that <span class="math-container">$$A_1,\ldots,A_J,B \in \mathbb{R}^{n \times n}$$</span>
and <span class="math-container">$m$</span> is the number of diagonal blocks in <span class="math-container">$\mathbf{A}$</span>, so consider
<span class="math-container">$$
\mathbf{A} =
\begin{bmatrix}
A_1 & A_2 & \ldots &A_J & & & & \\
& & & \ddots \\
& & & & A_1 & A_2 & \ldots & A_J
\end{bmatrix} \in \mathbb{R}^{nm\times (Jnm)}$$</span>
<span class="math-container">$$
\mathbf{B} = \begin{bmatrix}
B & & \\
& \ddots & \\
& & B
\end{bmatrix} \in \mathbb{R}^{(Jnm)\times (Jnm)}$$</span>
<span class="math-container">$$
\mathbf{A}^\prime =
\left[\begin{smallmatrix}
A_1 & & & & A_2 & & & & \ldots & & & & A_J\\
& A_1 & & & & A_2 & & & & \ldots & & & & A_J\\
& & \ddots & & & & \ddots & & & & \ldots & & & & \ddots \\
& & & A_1 & & & & A_2 & & & & \ldots & & & & A_J
\end{smallmatrix}\right] \in \mathbb{R}^{mn\times (Jmn)}$$</span></p>
<p><span class="math-container">$$
\mathbf{B}^\prime =
\left[\begin{smallmatrix}
\begin{bmatrix}B \\ 0\\ \vdots \\ 0 \\ \end{bmatrix} & & & & \begin{bmatrix}0 \\ B\\ \vdots \\ 0 \\ \end{bmatrix} & & & & \ldots & & & & \begin{bmatrix}0 \\ 0\\ \vdots \\ B \\ \end{bmatrix}\\
& \begin{bmatrix}B \\ 0\\ \vdots \\ 0 \\ \end{bmatrix} & & & & \begin{bmatrix}0 \\ B\\ \vdots \\ 0 \\ \end{bmatrix} & & & & \ldots & & & & \begin{bmatrix}0 \\ 0\\ \vdots \\ B \\ \end{bmatrix}\\
& & \ddots & & & & \ddots & & & & \ldots & & & & \ddots \\
& & & \begin{bmatrix}B \\ 0\\ \vdots \\ 0 \\ \end{bmatrix} & & & & \begin{bmatrix}0 \\ B\\ \vdots \\ 0 \\ \end{bmatrix} & & & & \ldots & & & & \begin{bmatrix}0 \\ 0\\ \vdots \\ B \\ \end{bmatrix}
\end{smallmatrix}\right] \in \mathbb{R}^{Jmn\times (Jmn)}$$</span></p>
<p>It seems to be true that
<span class="math-container">$$\mathbf{AB} = \mathbf{A^\prime B^\prime}$$</span></p>
<h2>what I guess might work</h2>
<p>Passing from <span class="math-container">$\mathbf{A}$</span> to <span class="math-container">$\mathbf{A}^\prime$</span> is a column permutation <span class="math-container">$\mathbf{C}$</span>, and explicitly finding its inverse should give us <span class="math-container">$$\mathbf{AB} = \mathbf{A CC^{-1} B}= \mathbf{A^\prime B^\prime}$$</span></p>
<p>But it seems to me, very difficult to write down the <span class="math-container">$\mathbf{C}$</span></p>
<h2>Update:</h2>
<p>It seems that <span class="math-container">$\mathbf{C}$</span> can be written down, but so far I can only justify it in a <em>hand-waving</em> way...</p>
| Robert Israel | 8,508 | <p>Maple agrees with G&R on the formula, however it has considerable difficulty with the numerical evaluation of the integral as this oscillates very rapidly. But the integral from <span class="math-container">$0$</span> to <span class="math-container">$20$</span> is given as <span class="math-container">$.6536634068$</span>, agreeing quite nicely with the theoretical result. </p>
|
2,982,205 | <p>I need help with a question that asks this:</p>
<p>Let <span class="math-container">$f(x)= -1/x$</span> and <span class="math-container">$g(x)=e^x$</span>; find the domain, range, and monotonicity intervals. Draw the graph of <span class="math-container">$g(f(x))$</span>. </p>
<p>Everything else except the graph is pretty easy but trying to find the graph has left me pretty confused.</p>
<p>Any hints would be greatly appreciated.</p>
| user | 505,767 | <p>We have that</p>
<p><span class="math-container">$$(h+z)^2 (\bar h +\bar z) - z^2\bar z=(h^2+2hz+z^2)(\bar h +\bar z) - z^2\bar z=$$</span></p>
<p><span class="math-container">$$=h^2\bar h+2h\bar hz+z^2\bar h+h^2\bar z+2hz\bar z+z^2\bar z- z^2\bar z=$$</span></p>
<p><span class="math-container">$$=h^2\bar h+2h\bar hz+h^2\bar z+z^2\bar h+2z\bar zh=z^2\bar h+2z\bar zh \iff h^2\bar h+2h\bar hz+h^2\bar z=0$$</span></p>
<p>and as noticed by Lord Shark it seems we are neglecting the higher order terms for <span class="math-container">$h$</span> (and <span class="math-container">$\bar h$</span>) since we are assuming <span class="math-container">$|h|<<|z|$</span>.</p>
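<p>A quick numerical sanity check of this expansion, using random complex values of $h$ and $z$ (an illustration only, not part of the original answer):</p>

```python
import random

random.seed(0)

def rand_c():
    # a random complex number with real and imaginary parts in (-1, 1)
    return complex(random.uniform(-1, 1), random.uniform(-1, 1))

max_err = 0.0
for _ in range(200):
    h, z = rand_c(), rand_c()
    lhs = (h + z) ** 2 * (h + z).conjugate() - z ** 2 * z.conjugate()
    # grouped form from the answer: z^2 h-bar + 2 z z-bar h plus the terms in h of higher order
    rhs = (z ** 2 * h.conjugate() + 2 * z * z.conjugate() * h
           + h ** 2 * h.conjugate() + 2 * h * h.conjugate() * z
           + h ** 2 * z.conjugate())
    max_err = max(max_err, abs(lhs - rhs))
```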
|
3,646,694 | <p>Let <span class="math-container">$M$</span> be an elliptic element of <span class="math-container">$SL_2(\mathbb R)$</span>. Then it is conjugate to a rotation <span class="math-container">$R(\theta)$</span>. Note that we can calculate <span class="math-container">$\theta$</span> in terms of the trace of <span class="math-container">$M$</span>; it means that we actually know <span class="math-container">$R(\theta)$</span> and we can write:</p>
<p><span class="math-container">$$M=TR(\theta) T^{-1}$$</span></p>
<p>If <span class="math-container">$S^1$</span> is the unit circle in <span class="math-container">$\mathbb R^2$</span>, it follows that <span class="math-container">$T(S^1)$</span> is the conic section <span class="math-container">$\mathcal C$</span> which is preserved by <span class="math-container">$M$</span>.</p>
<blockquote>
<p>Is there any explicit way to find the equation <span class="math-container">$\mathcal C$</span> in general?</p>
</blockquote>
<p>My procedure is quite uneffective, because one has to find <span class="math-container">$T$</span> first (so non-linear system) and then write down <span class="math-container">$T(S^1)$</span>, which is in general not obvious.</p>
| lhf | 589 | <p>Here is another, slightly more sophisticated, take:</p>
<p>The other root is <span class="math-container">$\beta= \gamma ^2+2\gamma-4$</span>, where <span class="math-container">$\gamma = \alpha^2+2\alpha-4$</span>.
Expanding <span class="math-container">$\beta$</span> in terms of <span class="math-container">$\alpha$</span> and reducing it mod <span class="math-container">$\alpha^3+2\alpha^2-5\alpha+1$</span> gives <span class="math-container">$\beta=-\alpha^2 - 3 \alpha + 2$</span>. Note that <span class="math-container">$\beta=g(\gamma)=g(g(\alpha))$</span>, where <span class="math-container">$g(x)=x^2+2x-4$</span>.</p>
<p>(This is because <span class="math-container">$\mathbb Q(\alpha)$</span> must be the splitting field of <span class="math-container">$x^3+2x^2-5x+1$</span> since it already contains two roots and so must contain the third. The Galois group is cyclic of order <span class="math-container">$3$</span> and so the roots are <span class="math-container">$\alpha$</span>, <span class="math-container">$g(\alpha)$</span>, <span class="math-container">$g^2(\alpha)$</span>.)</p>
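<p>This computation is easy to confirm with a short script (an illustrative sketch, not part of the original argument): reduce $g(g(\alpha))$ modulo the minimal polynomial $\alpha^3+2\alpha^2-5\alpha+1$, and check that $g(\alpha)$ is again a root.</p>

```python
# Polynomials are coefficient lists, lowest degree first.
# Minimal polynomial p(x) = x^3 + 2x^2 - 5x + 1, so x^3 = -2x^2 + 5x - 1.

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def pmod(a):
    # reduce modulo x^3 + 2x^2 - 5x + 1 using x^3 = -2x^2 + 5x - 1
    a = a[:]
    while len(a) > 3:
        c = a.pop()
        for shift, r in enumerate([-1, 5, -2]):
            a[len(a) - 3 + shift] += c * r
    return a

def g(t):
    # g(t) = t^2 + 2t - 4, reduced modulo the minimal polynomial
    return pmod(padd(padd(pmul(t, t), [2 * c for c in t]), [-4]))

def p_of(t):
    # evaluate x^3 + 2x^2 - 5x + 1 at the polynomial t, reduced
    t2 = pmul(t, t)
    t3 = pmul(t2, t)
    return pmod(padd(padd(padd(t3, [2 * c for c in t2]),
                          [-5 * c for c in t]), [1]))

alpha = [0, 1]        # the class of x
gamma = g(alpha)      # alpha^2 + 2*alpha - 4
beta = g(gamma)       # g(g(alpha))
```

<p>Here <code>beta</code> comes out as <code>[2, -3, -1]</code>, i.e. $\beta=-\alpha^2-3\alpha+2$ as claimed, and <code>p_of(gamma)</code> reduces to zero, so $g(\alpha)$ is indeed a root.</p>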
|
1,850,978 | <p>My question concerns the following problem:</p>
<blockquote>
<p>Let $K=\mathbb F_7[T]/(T^3-2)$. Show that $X^3-2$ splits into linear factors in $K[X]$.</p>
</blockquote>
<p>Write $K\simeq \mathbb F_7[\alpha]$ for a root $\alpha\in \overline{\mathbb F_7}$ of $X^3-2$. I found out through experimentation that $2\alpha$, $4\alpha$ are also roots of this polynomial (since $2^3\equiv 4^3 \equiv 1 \ \mathrm{mod}\ (7)$). Is there a more conceptual way to see this?</p>
| C. Falcon | 285,416 | <p>Notice that $X^3-2$ does not have any root in $\mathbb{F}_7$. Moreover polynomials of $\mathbb{F}_7[X]$ are invariant under the action of the <a href="https://en.wikipedia.org/wiki/Frobenius_endomorphism" rel="nofollow">Frobenius morphism</a> with respect to $\mathbb{F}_7$. Therefore, $\alpha^7$ and $\alpha^{49}$ are roots of $X^3-2$. Besides, $\alpha,\alpha^7,\alpha^{49}$ are distincts, otherwise $\alpha\in\mathbb{F}_7$.</p>
<p><strong>Remark.</strong> This is analogous to: if a real polynomial has a complex root, its conjugate is also a root.</p>
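<p>A short computation in $K=\mathbb F_7[T]/(T^3-2)$ (an illustration only) confirms that $\alpha$, $\alpha^7$ and $\alpha^{49}$ are three distinct roots of $X^3-2$:</p>

```python
P = 7  # work in F_7[T]/(T^3 - 2); elements are coefficient triples (c0, c1, c2)

def mul(a, b):
    out = [0] * 5
    for i in range(3):
        for j in range(3):
            out[i + j] = (out[i + j] + a[i] * b[j]) % P
    # reduce using T^3 = 2 and T^4 = 2T
    return ((out[0] + 2 * out[3]) % P, (out[1] + 2 * out[4]) % P, out[2] % P)

def power(a, n):
    r = (1, 0, 0)
    for _ in range(n):
        r = mul(r, a)
    return r

alpha = (0, 1, 0)  # the class of T
roots = [alpha, power(alpha, 7), power(alpha, 49)]
cubes = [power(r, 3) for r in roots]   # each should equal 2, i.e. (2, 0, 0)
```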
|
686,296 | <p>Let $S$ be a set of rational numbers that is closed under addition and multiplication, and having the property that for every rational number $r$ exactly one the following three statements is true: $r\in S$, $-r\in S$, $r=0$.</p>
<p>There are a couple of questions I have to prove but I am having trouble with the first one:</p>
<p>Prove that $0$ does not belong to $S$. I think I am confused by the last statement, where it says that exactly one of the three statements holds for $r$: it seems to say that $r=0$ is allowed, but that seems to conflict with $r\in S$ and $-r \in S$, since $0 = -0$. Or is there something else I am missing completely? Thank you for any help with this!</p>
| Eka | 130,847 | <p>Suppose $0 \in S$ (our hypothesis).</p>
<p>Then $0 \in S$ and $-0 \in S$.</p>
<p>So, at least two (and not exactly one) of the statements is true.</p>
<p>So, $0 \notin S$.</p>
<p>Because our hypothesis leads to a contradiction, the hypothesis cannot be true.</p>
|
2,459,263 | <p>If $p$ and $q$ are two points on a $n$-dimensional manifold $M$, then their tangent spaces are two $n$-dimensional vector spaces. So algebraically they have the same structure, but they are not the same because $p$ and $q$ are different points. I can understand they are not the same by visualization. How is it stated algebraically? </p>
<p>I mean, if $M$ is 2-dimensional, I can picture two different planes, one at $p$ and one at $q$. How is that difference stated mathematically?</p>
<p>Thanks.</p>
| anomaly | 156,999 | <p>The general construction you're looking for is that of the tangent bundle (or, more generally, a vector or fiber bundle). The individual tangent spaces are isomorphic; they're vector spaces of the same dimension, and there's a canonical basepoint. The important point, though, is that they vary continuously. More specificially, each point $p\in M$ has a neighborhood $U$ with a homeomorphism $TU \to U \times \mathbb{R}^n$ such that the projection $TU \to U \times \mathbb{R}^n \to U$ is just the map sending the tangent space $T_p M$ to $p$. (Unravelling the definition of tangent space for a manifold will show where this map comes from.) It's not true that $TM = M\times \mathbb{R}^n$; the point is that the tangent bundle is (generally) locally trivial but not actually trivial, i.e., a product. Even in the particular case of tangent spaces to manifolds, it may be easier to think of them abstractly rather than as embedded in some large $\mathbb{R}^N$.</p>
|
1,900,365 | <p>Apparently this is a true statement, but I cannot figure out how to prove this. I have tried setting </p>
<p>$$(m)(m + 1)(m + 2)(m + 3) = (m + 4)^2 - 1 $$</p>
<p>but to no avail. Could someone point me in the right direction? </p>
| Siong Thye Goh | 306,553 | <p>\begin{align}
m(m+1)(m+2)(m+3)&=\left[(m)(m+3) \right] \left[ (m+1)(m+2)\right]\\
&=\left[m^2+3m \right] \left[ m^2+3m+2\right]\\
&=\left[(m^2+3m+1)-1 \right] \left[ (m^2+3m+1)+1\right]\\
&=(m^2+3m+1)^2-1
\end{align}</p>
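<p>A quick brute-force check of the identity above over a range of integers (illustration only):</p>

```python
# verify m(m+1)(m+2)(m+3) = (m^2 + 3m + 1)^2 - 1 for many integers m
ok = all(
    m * (m + 1) * (m + 2) * (m + 3) == (m * m + 3 * m + 1) ** 2 - 1
    for m in range(-100, 101)
)
```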
|
36,083 | <p>I'm trying to find a way to prove this:</p>
<p>EDIT: without using L'Hopital's theorem.
$$\lim_{x\rightarrow 1}\frac{x^{1/m}-1}{x^{1/n}-1}=\frac{n}{m}.$$
Honestly, I didn't come up with any good idea.</p>
<p>We know that $\lim_{x\rightarrow 1}x^{1/m}$ is $1$.</p>
<p>I'd love your help with this.</p>
<p>Thank you.</p>
| Community | -1 | <p>$$\frac{x^{1/m}-1}{x^{1/n}-1} = \frac{e^{\log(x^{1/m})}-1}{e^{\log(x^{1/n})}-1} = \frac{e^{\frac1{m}\log(x)}-1}{e^{\frac1{n}\log(x)}-1} = \frac{e^{\frac1{m}\log(x)}-1}{\log(x)} \frac{\log(x)}{e^{\frac1{n}\log(x)}-1}$$</p>
<p>$$\lim_{x \rightarrow 1} \frac{x^{1/m}-1}{x^{1/n}-1} = \lim_{\log(x) \rightarrow 0} \frac{e^{\frac1{m}\log(x)}-1}{\log(x)} \frac{\log(x)}{e^{\frac1{n}\log(x)}-1} = \lim_{y \rightarrow 0} \frac{e^{\frac{y}{m}}-1}{y} \frac{y}{e^{\frac{y}{n}}-1} = \frac1{m} \frac1{\frac1{n}} = \frac{n}{m}$$</p>
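<p>The limit can also be checked numerically (an illustration, not a proof): evaluating the quotient at a point close to $1$ gives approximately $n/m$.</p>

```python
def quotient(x, m, n):
    return (x ** (1.0 / m) - 1.0) / (x ** (1.0 / n) - 1.0)

x = 1.0 + 1e-6  # close to 1, where the limit is taken
samples = {(m, n): quotient(x, m, n) for (m, n) in [(2, 3), (5, 2), (4, 7)]}
errors = {k: abs(v - k[1] / k[0]) for k, v in samples.items()}  # compare with n/m
```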
|
468,510 | <p>Given $n^2$, how many fourth powers $(x^4)$ are between 0 and $n^2$? </p>
<p>$n,x\in \mathbb{Z}$</p>
<p>Does this just reduce down to how many squares are below $n$?</p>
| Hagen von Eitzen | 39,174 | <p>There are so many as there are squares between $0$ and $n$, so the answer is $\sqrt n$, with up or downrounding depending on whether you count $0$ and $n$ itself (if it is square)</p>
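<p>Indeed, $x^4 \le n^2 \iff x^2 \le n \iff x \le \sqrt n$, so counting positive $x$ there are exactly $\lfloor\sqrt n\rfloor$ fourth powers. A brute-force check of this count (illustration only):</p>

```python
import math

def count_fourth_powers(n):
    # number of positive integers x with x^4 <= n^2
    c, x = 0, 1
    while x ** 4 <= n * n:
        c += 1
        x += 1
    return c

ok = all(count_fourth_powers(n) == math.isqrt(n) for n in range(1, 300))
```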
|
<p>Definition from Munkres, the textbook we are following: If <span class="math-container">$Y$</span> is a compact Hausdorff space and <span class="math-container">$X$</span> is a proper subspace of <span class="math-container">$Y$</span> whose closure equals <span class="math-container">$Y$</span>, then <span class="math-container">$Y$</span> is said to be a compactification of <span class="math-container">$X$</span>. If <span class="math-container">$Y \setminus X$</span> equals a single point, then <span class="math-container">$Y$</span> is called the one-point compactification of <span class="math-container">$X$</span>. From this definition, I understood it this way: I just need to add one single point to the given space and it would be compact and Hausdorff. But don’t I need to add 4 points? Namely 0, 1, 2 and 3. Because these are the limit points that the given space lacks in order to be closed, and hence compact. What am I missing?..</p>
| celtschk | 34,930 | <p>You are missing that you have to consider the intervals in isolation, not as subsets of <span class="math-container">$\mathbb R$</span>. That means there's no interval <span class="math-container">$[1,2]$</span> sitting in between, and also no intervals on either side. And you may move them around and bend them as much as you like (as long as you don't tear them apart).</p>
<p>In particular, you can bend them around to get an 8-like figure with only the crossing point missing. The one-point compactification then adds exactly that crossing point.</p>
<p>A more methodical way would be to first compactify the set in any way you want (with the only restriction that your original set must be open in the compactification), and then to identify all the points you added with each other. In your example, you could use your four-point compactification, and then identify the four points with each other.</p>
<p>The ultimate methodical way is, of course, to just apply the definition of the one-point compactification, and then think about what the resulting space looks like.</p>
|
<p>I am not sure if my proof here is sound; could I please have some opinions on it? If you disagree, I would appreciate some advice on how to fix my proof. Thanks!</p>
<p><span class="math-container">$X_1, X_2, ..., X_n$</span> are countably infinite sets.</p>
<p>Let <span class="math-container">$X_1 = \{{x_1}_1, {x_1}_2, {x_1}_3, ... \}$</span></p>
<p>Let <span class="math-container">$X_2 = \{{x_2}_1, {x_2}_2, {x_2}_3, ... \}$</span></p>
<p>...</p>
<p>Let <span class="math-container">$X_n = \{{x_n}_1, {x_n}_2, {x_n}_3, ... \}$</span></p>
<p>Let <span class="math-container">$P_n$</span> be the list of the first <span class="math-container">$n$</span> ordered primes: <span class="math-container">$P_n = (2,3,5,...,p_n) = (p_1,p_2,p_3,...,p_n)$</span></p>
<p>Define the injection: <span class="math-container">$\sigma: X_1 \times X_2 \times ... \times X_n \to \mathbb{N}$</span></p>
<p><span class="math-container">$\sigma (({x_1}_A, {x_2}_B, {x_3}_C,...,{x_n}_N)) = p_1^A\cdot p_2^B \cdot p_3^C \cdot ... \cdot p_n^N$</span></p>
<p>By the Fundamental Theorem of Arithmetic, <span class="math-container">$\sigma$</span> is an injection, because if two elements in the domain map to the same element in the codomain, they must be the same element.</p>
<p>Clearly, the image is infinite. So by definition, the Cartesian product of <span class="math-container">$n$</span> sets which are all countably infinite is itself countably infinite.</p>
<p>EDIT: Is it worth noting that my <span class="math-container">$X_n$</span> sets should be ordered or does that not matter?</p>
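<p>For what it is worth, the injectivity of $\sigma$ is easy to test on a small example, say three index sets with subscripts $1,\dots,10$ (the code below is only an illustration of the prime-power encoding, not part of the proof):</p>

```python
from itertools import product

primes = [2, 3, 5]

def sigma(indices):
    # encode the tuple of subscripts (A, B, C) as 2^A * 3^B * 5^C
    v = 1
    for p, e in zip(primes, indices):
        v *= p ** e
    return v

codes = [sigma(t) for t in product(range(1, 11), repeat=3)]
injective = len(codes) == len(set(codes))   # distinct tuples get distinct codes
```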
| H A Helfgott | 169,068 | <p>Let me carry out @Ninad_Munshi's suggestion in the first case, so that we can compare it with what I did yesterday.</p>
<p>First of all,
<span class="math-container">$$\iint_U \frac{\log x \log y}{x^\alpha y^\alpha \max(x,y)} dx dy =
2 \frac{\partial}{\partial a} \frac{\partial}{\partial b}
\mathop{\iint_U}_{x>y} \frac{dx dy}{x^{a+1} y^b} \bigg|_{(a,b)=(\alpha,\alpha)},$$</span>
where <span class="math-container">$U=\{(x,y)\in [1,\infty)^2: (x y)^\alpha \max(x,y)>R\}$</span> as before. Since <span class="math-container">$x^{\alpha+1} y^\alpha>R \iff x>(R/y^{\alpha})^{1/(\alpha+1)}$</span> and
<span class="math-container">$(R/y^\alpha)^{1/(\alpha+1)}>y \iff R>y^{2\alpha+1}$</span>,
<span class="math-container">$$\begin{aligned}\mathop{\iint_U}_{x>y} \frac{dx dy}{x^{a+1} y^b} &=
\int_{1}^{R^{\frac{1}{2\alpha+1}}} \frac{1}{y^b}\int_{(R/y^{\alpha})^{1/(\alpha+1)}}^\infty \frac{dx}{x^{a+1}} dy+
\int_{R^{\frac{1}{2\alpha+1}}}^\infty \frac{1}{y^{b}}
\int_y^\infty \frac{dx}{x^{a+1}} dy\\
&= \int_{1}^{R^{\frac{1}{2\alpha+1}}} \frac{1}{y^b}\frac{1}{a (R/y^{\alpha})^{a/(\alpha+1)}} dy +
\int_{R^{\frac{1}{2\alpha+1}}}^\infty \frac{1}{y^{b}} \frac{1}{a y^a} dy \\ &= \frac{1}{a R^{a/(\alpha+1)}}
\int_{1}^{R^{\frac{1}{2\alpha+1}}} \frac{dy}{y^{b-\alpha a/(\alpha+1)}} +
\frac{1}{a} \int_{R^{\frac{1}{2\alpha+1}}}^\infty \frac{dy}{y^{a+b}}\\
&= \frac{1}{a R^{a/(\alpha+1)}} \cdot \frac{-\frac{1}{b-1-\alpha a/(\alpha+1)}}{y^{b-1-\alpha a/(\alpha+1)}}\bigg|_1^{R^{\frac{1}{2\alpha+1}}} + \frac{1}{a} \frac{1}{(a+b-1)R^{\frac{a+b-1}{2\alpha+1}}}\\
&= \left(\frac{1}{a (a+b-1)} - \frac{\alpha+1}{a
\left((\alpha+1)(b-1)-\alpha a\right)}\right)
\frac{1}{R^{\frac{a+b-1}{2\alpha+1}}}
+\frac{\alpha+1}{a \left((\alpha+1)(b-1)-\alpha a\right) R^{\frac{a}{\alpha+1}}}
\end{aligned}$$</span>
Here the coefficient of <span class="math-container">$\frac{1}{R^{\frac{a+b-1}{2\alpha+1}}}$</span> simplifies to <span class="math-container">$\frac{2\alpha+1}{(a+b-1) (\alpha a - (\alpha+1) (b-1))}$</span>. Since <span class="math-container">$\frac{\partial}{\partial a} \frac{\partial}{\partial b} \frac{1}{R^{\frac{a+b-1}{2\alpha+1}}} = \frac{(\log R)^2}{(2\alpha+1)^2} \frac{1}{R^{\frac{a+b-1}{2\alpha+1}}}$</span>, it is clear that the main term will be <span class="math-container">$\frac{1}{(2\alpha+1) (2\alpha-1)} \frac{(\log R)^2}{R^{\frac{2\alpha-1}{2\alpha+1}}}$</span> (multiplied by <span class="math-container">$2$</span>, in the end; let's leave that part out for all terms). It is also clear that there will be no term proportional to <span class="math-container">$\frac{(\log R)^2}{R^{\frac{\alpha}{\alpha+1}}}$</span>, as <span class="math-container">$\frac{\partial}{\partial b} \frac{1}{R^{\frac{a}{\alpha+1}}}=0$</span>. The coefficient of <span class="math-container">$\frac{\log R}{R^{\frac{2\alpha-1}{2\alpha+1}}}$</span> will be
<span class="math-container">$$\begin{aligned}- \left(\frac{\partial}{\partial a} + \frac{\partial}{\partial b}\right) \frac{1}{(a+b-1) (\alpha a - (\alpha+1)(b-1))} \bigg|_{(a,b)=(\alpha,\alpha)} &=
\frac{2}{(a+b-1)^2 (\alpha a - (\alpha+1)(b-1))}\bigg|_{(a,b)=(\alpha,\alpha)} + \frac{\alpha-(\alpha+1)}{(a+b-1) (\alpha a - (\alpha+1)(b-1))^2} \bigg|_{(a,b)=(\alpha,\alpha)} \\ &= \frac{3-2\alpha}{(2\alpha-1)^2}.
\end{aligned}$$</span>
The coefficient of <span class="math-container">$\frac{1}{R^{\frac{2\alpha-1}{2\alpha+1}}}$</span> will be
<span class="math-container">$2\alpha+1$</span> times
<span class="math-container">$$\begin{aligned}\frac{\partial}{\partial a} \frac{\partial}{\partial b} \frac{1}{(a+b-1) (\alpha a - (\alpha+1)(b-1))} &=
\frac{2}{(a+b-1)^3 (\alpha a - (\alpha+1)(b-1))}
- \frac{2 \alpha (\alpha+1)}{(a+b-1) (\alpha a - (\alpha+1)(b-1))^3}
+ \frac{\alpha-(\alpha+1)}{(a+b-1)^2 (\alpha a - (\alpha+1)(b-1))^2}
\end{aligned}$$</span>
evaluated at <span class="math-container">$(a,b)=(\alpha,\alpha)$</span>, and that is
<span class="math-container">$$\begin{aligned}(2\alpha+1) \left(\frac{2}{(2\alpha-1)^3}-\frac{2 \alpha(\alpha+1)}{2\alpha-1}- \frac{1}{(2\alpha-1)^2}\right) &=
\frac{2\alpha+1}{(2\alpha-1)^3} (-8 \alpha^4+6\alpha^2-4\alpha+3 )\\&= - (2\alpha^2+4\alpha+3) - \frac{16(\alpha^2-\alpha)}{(2\alpha-1)^3},
\end{aligned}$$</span>
which agrees with the coefficient we had before.
The coefficient of <span class="math-container">$\frac{\log R}{R^{\frac{a}{a+1}}}$</span> is
<span class="math-container">$$-\frac{\partial}{\partial b} \frac{1}{a((a+1)(b-1)-\alpha a)}\bigg|_{(a,b)=(\alpha,\alpha)} = \frac{a+1}{a ((a+1)(b-1)-\alpha a)^2}\bigg|_{(a,b)=(\alpha,\alpha)} = \frac{\alpha+1}{\alpha},$$</span> which agrees with what we had before. The coefficient of <span class="math-container">$\frac{1}{R^{\frac{a}{a+1}}}$</span> is
<span class="math-container">$$\begin{aligned}\frac{\partial}{\partial a} \frac{\partial}{\partial b} \frac{\alpha+1}{a \left((\alpha+1)(b-1)-\alpha a\right)} \bigg|_{(a,b)=(\alpha,\alpha)}&=
\frac{\partial}{\partial a} \frac{-(\alpha+1)^2}{a \left((\alpha+1)(b-1)-\alpha a\right)^2}\bigg|_{(a,b)=(\alpha,\alpha)}\\
&= \frac{(\alpha+1)^2}{a^2 \left((\alpha+1)(b-1)-\alpha a\right)^2}\bigg|_{(a,b)=(\alpha,\alpha)} +
\frac{2 (\alpha+1)^2\cdot (-\alpha)}{a \left((\alpha+1)(b-1)-\alpha a\right)^3}\bigg|_{(a,b)=(\alpha,\alpha)} \\
&= \frac{(\alpha+1)^2}{\alpha^2} + 2 (\alpha+1)^2 = 2\alpha^2 + 4 \alpha+3 + \frac{2\alpha+1}{\alpha^2},
\end{aligned}$$</span> which is exactly what we had before.</p>
<p>So all is well. But, as you can see, this is not really shorter or much easier than what we had before. (It seems to be a bit longer and a bit easier.)</p>
|
<p>Prove that if the <span class="math-container">$q_i$</span>'s are polynomials of degree <span class="math-container">$i$</span>, then <span class="math-container">$q_0,...,q_n$</span> are linearly independent.</p>
<hr />
<p>Assume they are not linearly independent, so there is a scalar <span class="math-container">$\lambda_m \ne 0$</span> with <span class="math-container">$0\le m\le n$</span> such that <span class="math-container">$$\sum_{i=0}^{n}\lambda_iq_{i}\left(x\right)=0$$</span>
<span class="math-container">$$\sum_{i=0}^{n}\frac{\lambda_i}{\lambda_m}q_{i}\left(x\right)=0$$</span></p>
<p>How to get a contradiction?</p>
| Kavi Rama Murthy | 142,385 | <p>Suppose some <span class="math-container">$\lambda_i$</span> is not <span class="math-container">$0$</span>. There is a largest integer <span class="math-container">$m \leq n$</span> such that <span class="math-container">$\lambda_m \neq 0$</span> and <span class="math-container">$\lambda_k=0$</span> for <span class="math-container">$ k>m$</span>. [<span class="math-container">$m$</span> could be equal to <span class="math-container">$n$</span>]. Now look at the coefficient of <span class="math-container">$x^{m}$</span> to get a contradiction.</p>
|
3,676,719 | <p>I've recently started to learn about differential equations and I am having a hard time solving any of them.</p>
<p>I feel like I'm missing some steps.</p>
<p>Therefore, could I please know how to solve:</p>
<p><span class="math-container">$(ye^x +y)dx+ye^{(x+y)}dy=0$</span></p>
<p>So far, I've looked around the book and some websites that could give the final answer, to at least know what way should I go, but I feel like I'm going nowhere. All I was able to find is that the equation above is something called "an equation with separable variables".</p>
<p>Equation:
<a href="https://www.symbolab.com/solver/ordinary-differential-equation-calculator/%5Cleft(ye%5E%7Bx%7D%2By%5Cright)dx%2Bye%5E%7Bx%2By%7Ddy%3D0" rel="nofollow noreferrer">here</a></p>
<p>Thank you very much.</p>
| J. W. Tanner | 615,567 | <p><span class="math-container">$A^2=\pmatrix{a&&b\\c&&d}^2=\pmatrix{a^2+bc&&ab+bd\\ac+dc&&bc+d^2}=\pmatrix{0&0\\0&0}$</span></p>
<p><span class="math-container">$\implies b(a+d)=0$</span> and <span class="math-container">$ c(a+d)=0 \implies a+d=0 $</span> or <span class="math-container">$b=c=0$</span>,</p>
<p>but if <span class="math-container">$b=c=0$</span> then <span class="math-container">$a^2+bc=0, bc+d^2=0\implies a=d=0\implies a+d=0$</span> anyways.</p>
<p>So indeed it can be shown based on the definitions of matrix trace and multiplication that </p>
<p>the trace of a <span class="math-container">$2\times2$</span> matrix whose square is the <span class="math-container">$0$</span> matrix is <span class="math-container">$0.$</span> </p>
|
4,482,637 | <p>Is there are a term for a generalized exponential function? The series expansions of sine and cosine look very similar to the exponential function's series expansion</p>
<p><span class="math-container">\begin{align}
& \sum_{n = 0}^{\infty} \frac{x^n}{n!} \\
& \sum_{n = 0}^{\infty} \frac{(-x)^n}{n!} \\
& \sum_{n = 0}^{\infty} \frac{x^{2n}}{(2n)!} & & \sum_{n = 0}^{\infty} \frac{x^{2n + 1}}{(2n + 1)!} & \\
& \sum_{n = 0}^{\infty} \frac{(-1)^nx^{2n}}{(2n)!} & & \sum_{n = 0}^{\infty} \frac{(-1)^nx^{2n + 1}}{(2n + 1)!} \\
& \sum_{n = 0}^{\infty} \frac{x^{3n}}{(3n)!} & & \sum_{n = 0}^{\infty} \frac{x^{3n + 1}}{(3n + 1)!} & & \sum_{n = 0}^{\infty} \frac{x^{3n + 2}}{(3n + 2)!} \\
& \sum_{n = 0}^{\infty} \frac{(-1)^nx^{3n}}{(3n)!} & & \sum_{n = 0}^{\infty} \frac{(-1)^nx^{3n + 1}}{(3n + 1)!} & & \sum_{n = 0}^{\infty} \frac{(-1)^nx^{3n + 2}}{(3n + 2)!} \\
& \vdots && \vdots && \vdots
\end{align}</span></p>
<p>and so I was wondering as to whether or not there is a name for all of these types of infinite sums and what properties they all share.</p>
| Átila Correia | 953,679 | <p>The first integral is wrong because <span class="math-container">$x\in\mathbb{R}$</span> need not be a natural number. So the multiplication cannot be split as you did. Similar reasoning applies to the second case, that is to say, since <span class="math-container">$x$</span> need not be a natural number, the interpretation for <span class="math-container">$2^{x}$</span> is wrong.</p>
|
4,482,637 | <p>Is there are a term for a generalized exponential function? The series expansions of sine and cosine look very similar to the exponential function's series expansion</p>
<p><span class="math-container">\begin{align}
& \sum_{n = 0}^{\infty} \frac{x^n}{n!} \\
& \sum_{n = 0}^{\infty} \frac{(-x)^n}{n!} \\
& \sum_{n = 0}^{\infty} \frac{x^{2n}}{(2n)!} & & \sum_{n = 0}^{\infty} \frac{x^{2n + 1}}{(2n + 1)!} & \\
& \sum_{n = 0}^{\infty} \frac{(-1)^nx^{2n}}{(2n)!} & & \sum_{n = 0}^{\infty} \frac{(-1)^nx^{2n + 1}}{(2n + 1)!} \\
& \sum_{n = 0}^{\infty} \frac{x^{3n}}{(3n)!} & & \sum_{n = 0}^{\infty} \frac{x^{3n + 1}}{(3n + 1)!} & & \sum_{n = 0}^{\infty} \frac{x^{3n + 2}}{(3n + 2)!} \\
& \sum_{n = 0}^{\infty} \frac{(-1)^nx^{3n}}{(3n)!} & & \sum_{n = 0}^{\infty} \frac{(-1)^nx^{3n + 1}}{(3n + 1)!} & & \sum_{n = 0}^{\infty} \frac{(-1)^nx^{3n + 2}}{(3n + 2)!} \\
& \vdots && \vdots && \vdots
\end{align}</span></p>
<p>and so I was wondering as to whether or not there is a name for all of these types of infinite sums and what properties they all share.</p>
| ryang | 21,813 | <p>The above comments and answer point out that you can't take a variable out of an integral, and that <span class="math-container">$x$</span> is not a natural number, but in fact, in the given examples, the error could be more fundamental.</p>
<p>Let <span class="math-container">$f:\mathbb Z \to \mathbb R$</span> and <span class="math-container">$g:\mathbb Z \to \mathbb R$</span> such that <span class="math-container">$f(x)=2x$</span> and <span class="math-container">$g(x)=2^x.$</span> Giving the benefit of the doubt that the author indeed meant to work in <span class="math-container">$\mathbb Z,$</span> that is, that
<span class="math-container">$$\int 2x \,\mathrm dx=\int f,\\
\int 2^x \,\mathrm dx=\int g,$$</span> then both integrals immediately equal <span class="math-container">$0,$</span> since the domain of integration in each case is a set of isolated points.</p>
<p>If the intention, however, was for <em>interval</em> integration domains, then it is of course invalid to assert that <span class="math-container">$2x=\underbrace{(2 + 2 + 2 + \dots + 2)}_{x \text{ times}}.$</span></p>
|
372,420 | <p>Let <span class="math-container">$X$</span> be a finite ultrametric space and <span class="math-container">$P(X)$</span> be the space of probability measures on <span class="math-container">$X$</span> endowed with the Wasserstein-Kantorovich-Rubinstein metric (briefly WKR-metric) defined by the formula
<span class="math-container">$$\rho(\mu,\eta)=\max\{|\int_X fd\mu-\int_X fd\eta|:f\in Lip_1(X)\}$$</span> where <span class="math-container">$Lip_1(X)$</span> is the set of non-expanding real-valued functions on <span class="math-container">$X$</span>.</p>
<blockquote>
<p><strong>Problem.</strong> Is there any fast algorithm for calculating this metric between two measures on a finite ultrametric space? Or at least for calculating some natural distance, which is not "very far" from the WKR-metric?</p>
</blockquote>
<p><strong>Added in Edit.</strong> There is a simple upper bound <span class="math-container">$\hat \rho$</span> for the WKR-metric, defined by recursion on the cardinality of the set <span class="math-container">$d[X\times X]=\{d(x,y):x,y\in X\}$</span> of values of the ultrametric on <span class="math-container">$X$</span>. If <span class="math-container">$d[X\times X]=\{0\}$</span>, then for any measures <span class="math-container">$\mu,\eta\in P(X)$</span> on <span class="math-container">$X$</span> put <span class="math-container">$\hat\rho(\mu,\eta)=0$</span>. Assume that for some natural number <span class="math-container">$n$</span> we have defined the metric <span class="math-container">$\hat\rho(\mu,\eta)$</span> for any probability measures <span class="math-container">$\mu,\eta\in P(X)$</span> on any ultrametric space <span class="math-container">$(X,d)$</span> with <span class="math-container">$|d[X\times X]|<n$</span>.</p>
<p>Take any ultrametric space <span class="math-container">$X$</span> with <span class="math-container">$|d[X\times X]|=n$</span>. Let <span class="math-container">$b=\max d[X\times X]$</span> and <span class="math-container">$a=\max(d[X\times X]\setminus\{b\})$</span>. Let <span class="math-container">$\mathcal B$</span> be the family of closed balls of radius <span class="math-container">$a$</span> in <span class="math-container">$X$</span>. Since <span class="math-container">$X$</span> is an ultrametric space, the balls in the family <span class="math-container">$\mathcal B$</span> either coincide or are disjoint.</p>
<p>Given any probability measures <span class="math-container">$\mu,\eta$</span> on <span class="math-container">$X$</span>, let
<span class="math-container">$$\hat\rho(\mu,\eta)=\tfrac12b\cdot\sum_{B\in\mathcal B}|\mu(B)-\eta(B)|+\sum_{B\in\mathcal B'}\min\{\mu(B),\eta(B)\}\cdot\hat\rho(\mu{\restriction}B,\eta{\restriction}B),$$</span>
where <span class="math-container">$\mathcal B'=\{B\in\mathcal B:\min\{\mu(B),\eta(B)\}>0\}$</span> and the probability measures <span class="math-container">$\mu{\restriction} B$</span> and <span class="math-container">$\eta{\restriction}B$</span> assign to each subset <span class="math-container">$S$</span> of <span class="math-container">$B$</span> the numbers <span class="math-container">$\mu(S)/\mu(B)$</span> and <span class="math-container">$\eta(S)/\eta(B)$</span>, respectively.
<p>It can be shown that <span class="math-container">$\rho\le\hat\rho$</span>.</p>
<blockquote>
<p><strong>Question.</strong> Is <span class="math-container">$\rho=\hat\rho$</span>?</p>
</blockquote>
| oliversm | 112,077 | <h1>Star discrepancy</h1>
<p>The <a href="https://mathworld.wolfram.com/StarDiscrepancy.html" rel="nofollow noreferrer">star discrepancy</a> is usually used when thinking about random numbers and low discrepancy sequences, and seems to fit the bill for your task.</p>
|
345,652 | <p>I try to understand why by definition </p>
<ol>
<li>$[c_0,c_1,\ldots,c_n]=[c_0,[c_1,\ldots,c_n]]$ and also </li>
<li>$[c_0,c_1,\ldots,c_n]=[c_0,c_1,\ldots,c_{n-2},[c_{n-1},c_n]]$ .</li>
</ol>
<p>Those are continued fractions, and $1$ and $2$ are notes I have in the lecture summary.</p>
<p>But can we add brackets wherever we want? For example:<br>
$[c_0,c_1,\ldots,c_n]=[c_0,c_1,[c_2,\ldots,c_n]]$ </p>
<p>Thanks!</p>
| Clive Newstead | 19,542 | <p>It's a fairly easy inductive proof that you can write
$$[c_0, c_1, c_2, \dots, c_{n-1}, c_n] = [c_0, [c_1, [c_2, \cdots [c_{n-1}, c_n] \cdots ]]]$$
and you can add or remove any of the sets of square brackets that appear on the right-hand side as you please. That is, you can stick a $[$ where you like as long as the closing $]$ is right at the end, not somewhere in the middle.</p>
<p>What you <em>can't</em> do is add square brackets into the left-hand side willy-nilly. For instance, in general,
$$c_0+\dfrac{1}{c_1+\frac{1}{c_2}} = [c_0, [c_1, c_2]] \ne [[c_0, c_1], c_2] = c_0+\dfrac{1}{c_1}+\dfrac{1}{c_2}$$</p>
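<p>The rule is easy to verify mechanically; here is a small illustrative evaluator using exact rational arithmetic:</p>

```python
from fractions import Fraction

def cf(*cs):
    # evaluate [c_0, c_1, ..., c_n] = c_0 + 1/(c_1 + 1/(... + 1/c_n))
    val = Fraction(cs[-1])
    for c in reversed(cs[:-1]):
        val = c + 1 / val
    return val

full = cf(1, 2, 3, 4)
head_grouped = 1 + 1 / cf(2, 3, 4)   # [c0, [c1, c2, c3]]
mid_grouped = cf(1, 2, cf(3, 4))     # [c0, c1, [c2, c3]]
bad_grouped = cf(cf(1, 2), 3)        # [[c0, c1], c2]: closing bracket not at the end
```

<p>The first three agree, while the last grouping gives a different value, matching the answer.</p>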
|
17,103 | <p>I'm reading some very old papers (by Birch et al) on quadratic forms and I don't get the following point: </p>
<blockquote>
<p>If <span class="math-container">$f$</span> is a quadratic form in <span class="math-container">$X_1,X_2,\cdots,X_n$</span> over a
finite field, then one can change variables such that <span class="math-container">$f$</span> can be written as <span class="math-container">$\sum_{i = 1}^s Y_{2i - 1}Y_{2i} + g$</span>, where <span class="math-container">$g$</span> is a quadratic form which involves
variables other than
<span class="math-container">$Y_1,Y_2,\cdots,Y_{2s}$</span> and has order at
most 2 (i.e. can be written using at
most two linear forms).</p>
</blockquote>
<p>So either this is a well-known result - but I don't find a reference - or either this is easy to see, but in that case I'm just missing the point. By the way, is this really true in characteristic 2? </p>
<p>(And in fact I'm not sure what role is played by the fact that the field is finite...)</p>
| damiano | 4,344 | <p>I will sketch below a standard argument to show what you need, because I find it very neat!</p>
<p>Let $V$ be a finite dimensional vector space over a field $k$ and let $q \colon V \to k$ be a quadratic form on $V$. Denote by $b$ the symmetric bilinear form associated to $q$: thus for vectors $v,w \in V$ define $b(v,w) := q(v+w)-q(v)-q(w)$. Suppose that $v$ is a non-singular zero of $q$. Since $v$ is non-singular, it follows that there is a $w' \in V$ such that $\alpha := b (v , w') \neq 0$. Let $w := \frac{1}{\alpha^2} (\alpha w' - q(w') v)$; it is immediate to check that $q(w)=0$ and $b (v , w) = 1$. Observe that the "orthogonal complement" of $v,w$ with respect to the form $q$ has codimension two and does not contain the span of $v,w$. Thus, we conclude that we can find a basis of $V$ such that $q(x_1,\ldots,x_n) = x_1 x_2 + q'$, where $q'$ is a quadratic form over a space of dimension two less than the dimension of $V$.</p>
<p>The statement about finite fields follows at once, since over a finite field, any quadratic form in three or more variables admits a non-trivial zero. This is a consequence of the Chevalley-Warning Theorem. More generally, any field such that quadratic forms in three variables always admit a zero has the property you need, e.g. any $C_1$-field would work.</p>
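<p>The Chevalley-Warning step can be checked by brute force over a small finite field. The sketch below (the function and variable names are mine, not from any paper) verifies that arbitrary quadratic forms in three variables over $\mathbb{F}_5$ always have a nontrivial zero, while a binary form need not:</p>

```python
import random
from itertools import product

def has_nontrivial_zero(coeffs, p, n):
    # coeffs maps (i, j) with i <= j to the coefficient of x_i * x_j over GF(p)
    for v in product(range(p), repeat=n):
        if any(v) and sum(c * v[i] * v[j] for (i, j), c in coeffs.items()) % p == 0:
            return True
    return False

random.seed(0)
p, n = 5, 3
for _ in range(20):
    # degree 2 < 3 variables, so Chevalley-Warning guarantees a nontrivial zero
    coeffs = {(i, j): random.randrange(p) for i in range(n) for j in range(i, n)}
    assert has_nontrivial_zero(coeffs, p, n)

# In only two variables the theorem gives nothing: x^2 + y^2 over GF(3)
# has no nontrivial zero, so no hyperbolic plane can be split off.
assert not has_nontrivial_zero({(0, 0): 1, (1, 1): 1}, 3, 2)
```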
|
2,552,720 | <p>How would you prove algebraically that the assertion $\exists (x,y) \in (0,1)^2. x +y > 1 + xy$ <a href="http://www.wolframalpha.com/input/?i=x%20%2B%20y%20%3E%20%201%20%2B%20xy,%200%20%3C%20x%20%3C%201,%200%20%3C%20y%20%3C%201" rel="nofollow noreferrer">is false</a>?</p>
| stressed out | 436,477 | <p>Use the factorization
$$1-x-y+xy=(1-x)(1-y)$$</p>
<p>Now note that:</p>
<p>$0<x<1 \implies 0<1-x<1$</p>
<p>$0<y<1 \implies 0<1-y<1$</p>
<p>Since both are positive,</p>
<p>$$1-x-y+xy>0$$
$$1+xy>x+y$$</p>
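<p>A quick numerical sanity check of the factorization argument (a sketch of mine, not part of the original answer):</p>

```python
import random

random.seed(1)
# On the open unit square, (1 - x)(1 - y) > 0 forces 1 + xy > x + y.
for _ in range(10_000):
    x, y = random.random(), random.random()
    assert (1 - x) * (1 - y) > 0
    assert 1 + x * y >= x + y  # strict in exact arithmetic; >= guards float rounding
```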
|
2,552,720 | <p>How would you prove algebraically that the assertion $\exists (x,y) \in (0,1)^2. x +y > 1 + xy$ <a href="http://www.wolframalpha.com/input/?i=x%20%2B%20y%20%3E%20%201%20%2B%20xy,%200%20%3C%20x%20%3C%201,%200%20%3C%20y%20%3C%201" rel="nofollow noreferrer">is false</a>?</p>
| zipirovich | 127,842 | <p>Hint: move all terms to the same side to get the equivalent inequality $1-x-y+xy<0$, and then factor the expression on the left-hand side.</p>
|
15,994 | <p>I am trying to solve this problem <a href="https://mathematica.stackexchange.com/questions/15922/help-me-to-solve-a-condition-from-an-equation-that-requires-many-reductions">here</a> but first need to understand the basics. Suppose I have a product such as <span class="math-container">$a*(b+c+d+...+x)$</span>. How can I expand it to obtain <span class="math-container">$a*b+a*c+...+a*x$</span>?</p>
<p><strong>Trials</strong></p>
<blockquote>
<p><strong>[Not working]</strong> Reduce[(a + b)*c]</p>
<p><strong>[Not working]</strong> Simplify[(a + b)*c] or FullSimplify[(a + b)*c]</p>
</blockquote>
| rcollyer | 52 | <p>While covered by the comments and <a href="https://mathematica.stackexchange.com/a/16008/52">another answer</a>, I would like to expand on this a bit. The word "simplify" has very different connotations between what you intend and what <em>Mathematica</em> intends. For <em>Mathematica</em>, it means simply the least complex solution by <a href="http://reference.wolfram.com/mathematica/ref/ComplexityFunction.html" rel="nofollow noreferrer">some measure</a>. This defaults to the number of subexpressions and integer digits, but it can be set to whatever you wish in an attempt to coerce <code>Simplify</code> to generate a specific form (or avoid others), such as this example from the docs</p>
<pre><code>f[e_] := 100 Count[e, _ChebyshevT, {0, Infinity}] + LeafCount[e]
FullSimplify[ChebyshevT[n, x], ComplexityFunction -> f]
(* Cos[n ArcCos[x]] *)
</code></pre>
<p>where <code>ChebyshevT</code> is being made more expensive than its alternative forms. I would also look at the <a href="http://wolfram.com/xid/0bnhtcjqmulci-di60se" rel="nofollow noreferrer">example</a> for <code>Abs</code> below that. Using this definition, then, the suggestions of <code>Expand</code> make sense as it is not simplest (least complex) form.</p>
<p>Your use of the word simplify is very different. In assigning problems to students, we use the word simplify imprecisely, and what we mean is transform it into a specific form which is not necessarily the simplest in form. I think this is where the confusion lies; this is not what <em>Mathematica</em> means, but it may be convinced that it is correct.</p>
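<p>The same distinction shows up in other computer algebra systems. Here is a small sketch in SymPy (SymPy, not <em>Mathematica</em>; the variable names are mine): <code>simplify</code> keeps the factored form it judges least complex, while <code>expand</code> performs the distribution the asker wanted.</p>

```python
from sympy import symbols, expand, simplify

a, b, c = symbols('a b c')
expr = (a + b) * c

# simplify() scores candidates by a complexity measure and keeps the
# factored form; expand() distributes the product explicitly.
assert simplify(expr) == (a + b) * c
assert expand(expr) == a*c + b*c
```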
|
2,738,138 | <p>I know it's probably a stupid question, but I'm confused. I have a set {$x\in\mathbb R, \frac{1}{x} \le 1$} that I want to represent as interval/s.</p>
<p>Thinking about it logically, I know that the set is $x\in]-\infty, 0[$U$[1, +\infty[$.</p>
<p>However, when trying to solve the inequality, I can't seem to get the answer. What am I doing wrong?</p>
<p>I take $\frac{1}x \le 1$, and I split it into 2 cases:</p>
<ol>
<li>if $x > 0$, then $x \ge 1$,</li>
<li>if $x < 0$, then $x \le 1$,
which is every element of $\mathbb R$. Where am I going wrong? Thanks.</li>
</ol>
| user | 505,767 | <p>Your work is correct: indeed the solutions are $x<0$ and $x\ge 1$, since</p>

<ol>
<li>for $x > 0$ the inequality is equivalent to $x \ge 1$, which gives $x\ge1$;</li>
<li>for $x < 0$ the inequality always holds (since $\frac1x<0\le 1$), which gives $x<0$.</li>
</ol>
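<p>A numeric spot-check of the combined solution set $(-\infty,0)\cup[1,\infty)$ (a quick sketch, not part of the original answer):</p>

```python
def in_solution_set(x):
    # claimed solution set of 1/x <= 1, namely (-inf, 0) U [1, inf)
    return x < 0 or x >= 1

for x in [-10, -0.5, -1e-9, 0.3, 0.999, 1.0, 2.5, 100]:
    assert (1 / x <= 1) == in_solution_set(x)
```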
|
2,738,138 | <p>I know it's probably a stupid question, but I'm confused. I have a set {$x\in\mathbb R, \frac{1}{x} \le 1$} that I want to represent as interval/s.</p>
<p>Thinking about it logically, I know that the set is $x\in]-\infty, 0[$U$[1, +\infty[$.</p>
<p>However, when trying to solve the inequality, I can't seem to get the answer. What am I doing wrong?</p>
<p>I take $\frac{1}x \le 1$, and I split it into 2 cases:</p>
<ol>
<li>if $x > 0$, then $x \ge 1$,</li>
<li>if $x < 0$, then $x \le 1$,
which is every element of $\mathbb R$. Where am I going wrong? Thanks.</li>
</ol>
| Mohammad Riazi-Kermani | 514,496 | <p>For positive values of $x$, $$1/x \le 1 \iff x\ge 1$$</p>
<p>For negative values of $x$, $1/x$ is negative so it is less than $1.$</p>
<p>For $x=0$, $1/x$ is undefined </p>
<p>Thus, combining the two cases, the answer is $$x\in (-\infty,0)\cup[1,\infty)$$</p>
|
13,861 | <p>I am currently working with a weighted adjacency matrix for a directed graph, and it contains several 0 columns and rows. With the unaltered matrix, I am able to monitor the relations between vertices with,</p>
<pre><code>TableForm[Normal @ WeightedAdjacencyMatrix[graph],
TableHeadings -> {a = VertexList[graph], a}]
</code></pre>
<p>This outputs a table with the corresponding vertex list labeling the rows and columns. I want to delete the 0 rows and columns while altering the labels to reflect the change. My matrix is currently $85\times 85$, and eliminating the necessary rows and columns reduces the size to $77\times 38$. I could theoretically go through by hand and track the eliminated entries, but that sounds way too time consuming for something that I'm sure has a simple solution. Any help is appreciated. </p>
| kglr | 125 | <pre><code>nonzerorowsF = Function[{grph}, Pick[Range[VertexCount[grph]],
Tr@Abs[#] != 0 & /@ WeightedAdjacencyMatrix[grph]]];
nonzerocolsF = Function[{grph}, Pick[Range[VertexCount[grph]],
Tr@Abs[#] != 0 & /@ Transpose[WeightedAdjacencyMatrix[grph]]]];
</code></pre>
<p>example:</p>
<pre><code> options = Sequence[VertexStyle -> LightYellow,
VertexSize -> 0.2,
VertexLabels -> Placed["Name", {1/2, 1/2}],
VertexLabelStyle -> Directive[16, Red, Bold, Italic],
EdgeLabelStyle -> Directive[16, Blue, Bold],
ImageSize -> 350, EdgeStyle -> Blue];
ew = RandomReal[{-5, 5}, 4];
(* define edges first, so EdgeLabels can reference them before g exists *)
edges = {2 -> 3, 3 -> 1, 1 -> 2, 1 -> 4};
g = Graph[{3, 4, 5, 1, 2, 6}, edges,
   EdgeWeight -> ew, options,
   EdgeLabels -> Thread[edges -> ew]]
</code></pre>
<p><img src="https://i.stack.imgur.com/JqAG3.png" alt="enter image description here"></p>
<pre><code>rows = nonzerorowsF[g];
columns = nonzerocolsF[g];
TableForm[Normal@WeightedAdjacencyMatrix[g][[rows, columns]],
TableHeadings -> {VertexList[g][[rows]], VertexList[g][[columns]]}]
</code></pre>
<p><img src="https://i.stack.imgur.com/G1PZW.png" alt="enter image description here"></p>
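<p>For comparison, the same pruning is a pair of boolean masks in NumPy (a sketch outside <em>Mathematica</em>; the labels and weights below are made up to mirror the small example above):</p>

```python
import numpy as np

# Weighted adjacency matrix with labeled rows/columns; vertices 4, 5 and 6
# touch no outgoing edge, so some rows and columns are identically zero.
labels = np.array([3, 4, 5, 1, 2, 6])
m = np.zeros((6, 6))
m[3, 4] = 1.5   # edge 1 -> 2
m[4, 0] = -2.0  # edge 2 -> 3
m[0, 3] = 0.5   # edge 3 -> 1
m[3, 1] = 4.0   # edge 1 -> 4

row_keep = np.abs(m).sum(axis=1) != 0
col_keep = np.abs(m).sum(axis=0) != 0
pruned = m[np.ix_(row_keep, col_keep)]

print(labels[row_keep])  # rows kept:    [3 1 2]
print(labels[col_keep])  # columns kept: [3 4 1 2]
print(pruned.shape)      # (3, 4)
```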
|
4,517,597 | <p>Suppose <span class="math-container">$\{f_n\}_{n \in \mathbb{N}}$</span> is a family of bounded, differentiable, monotone increasing functions on <span class="math-container">$[0,1]$</span>, which converge uniformly to a limit <span class="math-container">$f$</span>. Also, suppose we know that <span class="math-container">$f_n'$</span> is <span class="math-container">$\alpha$</span>-Lipschitz continuous for some constant <span class="math-container">$\alpha$</span> (not depending on <span class="math-container">$n$</span>). I want to analyze the differentiability of <span class="math-container">$f$</span> and the convergence of <span class="math-container">$f_n'$</span>, if this is even possible.</p>
<p>Of course, the monotone increasing property of <span class="math-container">$f_n$</span> implies that <span class="math-container">$f$</span> is also monotone increasing, so it is differentiable Lebesgue almost surely on <span class="math-container">$[0,1]$</span>. Is it possible to say that <span class="math-container">$f_n'$</span> converges to <span class="math-container">$f'$</span> pointwise, wherever <span class="math-container">$f'$</span> exists (even if only on a subsequence)?</p>
| Yuval Peres | 360,408 | <p>The answer is positive, and weaker assumptions suffice.</p>
<p><strong>Claim:</strong> Suppose <span class="math-container">$\{f_n\}_{n \in \mathbb{N}}$</span> is a family of bounded, differentiable, functions on <span class="math-container">$[0,1]$</span>, which converge pointwise to a limit <span class="math-container">$f$</span>. Also, suppose we know that <span class="math-container">$\{f_n'\}$</span> are a uniformly equicontinuous<span class="math-container">$^{(*)}$</span> family of functions.
Then <span class="math-container">$f$</span> is differentiable on <span class="math-container">$[0,1]$</span> and <span class="math-container">$f_n' \to f'$</span> uniformly.</p>
<p><span class="math-container">$(*)$</span> (Equicontinuity, defined in [1], certainly holds if <span class="math-container">$f_n'$</span> are <span class="math-container">$\alpha$</span>-Lipschitz continuous for some constant <span class="math-container">$\alpha$</span> not depending on <span class="math-container">$n$</span>. In fact, Holder continuity with uniform constants also suffices.).</p>
<p><strong>Proof of claim:</strong> Since <span class="math-container">$f_n$</span> are uniformly bounded and <span class="math-container">$\{f_n'\}$</span> are uniformly equicontinuous, it follows<span class="math-container">$^{\bf (**)}$</span> that the derivatives <span class="math-container">$f'_n$</span> are uniformly bounded. The Ascoli-Arzela theorem [1] implies that <span class="math-container">$f_n'$</span> has a subsequence <span class="math-container">$f'_{n(k)}$</span> that converges uniformly to some <span class="math-container">$g \in C[0,1]$</span>.
Therefore, for all <span class="math-container">$t$</span> in <span class="math-container">$[0,1]$</span>,
<span class="math-container">$$f(t)-f(0)=\lim_k \int_0^t f'_{n(k)}(x) \,dx = \int_0^t g(x) \,dx \,,$$</span>
so <span class="math-container">$f'=g$</span> in <span class="math-container">$[0,1]$</span> by the fundamental theorem of calculus.</p>
<p>If <span class="math-container">$f'_n$</span> do not converge uniformly to <span class="math-container">$g=f'$</span>, then there exists some <span class="math-container">$\epsilon>0$</span> and another subsequence <span class="math-container">$f'_{m(k)}$</span> of <span class="math-container">$f_n'$</span>, such that <span class="math-container">$\|f'_{m(k)}-f'\|_\infty >\epsilon$</span> for all <span class="math-container">$k$</span>.
But <span class="math-container">$f'_{m(k)}$</span> must also have a uniformly convergent subsequence by [1],
and the argument above shows that the limit of this subsequence must be <span class="math-container">$f'$</span>, a contradiction.
Thus <span class="math-container">$f'_n \to f'$</span> uniformly in <span class="math-container">$[0,1]$</span>.</p>
<p><strong><span class="math-container">$(**)$</span> Addendum:</strong> Since there exists <span class="math-container">$M$</span> such that <span class="math-container">$|f_n| \le M$</span> in <span class="math-container">$[0,1]$</span> for all <span class="math-container">$n$</span>, the mean value theorem implies that for each <span class="math-container">$n$</span>, there is some <span class="math-container">$t_n \in (0,1)$</span> such that <span class="math-container">$|f_n'(t_n)|=|f_n(1)-f_n(0)| \le 2M$</span>. Since <span class="math-container">$f_n'$</span> are uniformly equicontinuous, there is some <span class="math-container">$\delta_1<1$</span> so that <span class="math-container">$|x-y|<\delta_1$</span> implies <span class="math-container">$|f_n'(x)-f_n'(y)|<1$</span> for all <span class="math-container">$n$</span>. Thus for all <span class="math-container">$n \ge 1$</span> and all <span class="math-container">$x \in [0,1]$</span>, we have
<span class="math-container">$|f_n'(x)-f_n'(t_n)| \le 1+1/\delta_1$</span>, so <span class="math-container">$|f_n'(x)| \le 2M+1+1/\delta_1$</span>.</p>
<p>[1] <a href="https://en.wikipedia.org/wiki/Arzel%C3%A0%E2%80%93Ascoli_theorem" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Arzel%C3%A0%E2%80%93Ascoli_theorem</a></p>
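<p>A concrete family illustrating the claim (my example, not from the question): <span class="math-container">$f_n(x)=x^2+\sin(nx)/n^2$</span> converges uniformly to <span class="math-container">$x^2$</span>, and <span class="math-container">$f_n'(x)=2x+\cos(nx)/n$</span> is <span class="math-container">$3$</span>-Lipschitz for every <span class="math-container">$n$</span> since <span class="math-container">$|f_n''|\le 3$</span>. The claim then predicts <span class="math-container">$\sup_{[0,1]}|f_n'-f'|\to 0$</span>, which a grid check confirms:</p>

```python
import math

xs = [i / 1000 for i in range(1001)]  # grid on [0, 1]

def sup_err(n):
    # sup over the grid of |f_n'(x) - f'(x)| with f'(x) = 2x
    return max(abs((2 * x + math.cos(n * x) / n) - 2 * x) for x in xs)

errs = [sup_err(n) for n in (10, 100, 1000)]
assert errs[0] > errs[1] > errs[2]  # decreasing in n
assert errs[2] < 1e-2               # going to 0 (the sup is exactly 1/n, at x = 0)
```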
|
2,406,049 | <p>I would appreciate help understanding what $\aleph_1$ is according to this definition:</p>
<blockquote>
<blockquote>
<p>If $\alpha$ is an ordinal, then $\aleph_{\alpha}$ is the unique infinite cardinal such that:
$\{\kappa:\kappa\text{ is an infinite cardinal and }\kappa\lt\aleph_{\alpha}\}$
is isomorphic to $\alpha$ as a well-ordered set.</p>
</blockquote>
</blockquote>
<p>My question specifically is:</p>
<p>With $\alpha=1$ an ordinal ($=\{0\}$ according to von Neumann), what would the set $\{\kappa:\kappa\text{ is an infinite cardinal and }\kappa\lt\aleph_1\}$ look like?</p>
<p><strong>EDIT</strong> I think I should emphasize that the aspect that especially confuses me is "isomorphic to $\alpha=1$."</p>
<p>Thanks </p>
| Angina Seng | 436,618 | <p>As $\aleph_1$ is the second infinite cardinal, the only infinite cardinal
$<\aleph_1$ is $\aleph_0$. So the set in your definition is $\{\aleph_0\}$, a one-element
well-ordered set, which is indeed isomorphic to the ordinal $1=\{0\}$.</p>
|
3,792,983 | <p>As far as I can tell the definition of a source and a sink respectively are given in terms of the divergence operator.</p>
<p>That is, given a vector field <span class="math-container">$\vec{D}$</span>, it has a <em>source</em> in point <span class="math-container">$P$</span> if its divergence <span class="math-container">$\text{div}\vec{D}$</span> is pozitive in <span class="math-container">$P$</span> or a <em>sink</em> if it's negative. For example, in electromagnetism, one says <span class="math-container">$\text{div}\vec{D} = \rho_v$</span> where <span class="math-container">$\rho_v$</span> is the volume charge density and <span class="math-container">$\vec{D}$</span> is the electric flux density.</p>
<p>But let's say <span class="math-container">$\vec{D}$</span> is given by a positive point charge <span class="math-container">$q$</span> located at <span class="math-container">$(0,0,0)$</span> which creates the field</p>
<p><span class="math-container">$$\vec{D} = \text{const} \frac{\vec{R}}{|\vec{R}|^3}$$</span></p>
<p>where <span class="math-container">$\vec{R}=x\vec{i}+y\vec{j}+z\vec{k}$</span>.</p>
<p>In this case, <span class="math-container">$\text{div}\vec{D}=0$</span> everywhere away from the origin; however, the origin is a sort of source, as the field "emerges" from there and the net flux over each surface enclosing the charge is positive.</p>
<p>My question is: are there any other definitions of a source and sink? Possibly some that are a bit more general and encompass more particular cases such as the one I've last mentioned?</p>
| Daniel Fischer | 83,702 | <p><span class="math-container">$A$</span> must be assumed nonempty of course. Then we can use induction on the dimension.</p>
<p>In <span class="math-container">$\mathbb{R}^1$</span>, a nonempty closed convex set <span class="math-container">$A$</span> that contains no line has one of the forms <span class="math-container">$(-\infty, a]$</span>, <span class="math-container">$[a, +\infty)$</span>, or <span class="math-container">$[a,b]$</span> (with <span class="math-container">$a \leqslant b$</span>), and for all these <span class="math-container">$a$</span> is an extreme point of <span class="math-container">$A$</span>.</p>
<p>For the induction step, let <span class="math-container">$x \in A$</span> and consider an arbitrary line <span class="math-container">$L$</span> passing through <span class="math-container">$x$</span>. Since <span class="math-container">$L \not\subset A$</span> there is a point <span class="math-container">$y \in L\setminus A$</span>. Let <span class="math-container">$s = \max \{ t \in [0,1] : x + t(y-x) \in A\}$</span> and <span class="math-container">$z = x + s(y-x)$</span>. Then there is a supporting hyperplane for <span class="math-container">$A$</span> passing through <span class="math-container">$z$</span>. This is given by
<span class="math-container">$$ H = \{\xi : \langle \xi, \eta\rangle = \langle z, \eta\rangle\}$$</span>
for some <span class="math-container">$\eta \in \mathbb{R}^n$</span> with <span class="math-container">$\langle \eta, \eta\rangle = 1$</span>. We can without loss of generality assume that <span class="math-container">$\langle \xi, \eta\rangle \leqslant \langle z, \eta\rangle$</span> for all <span class="math-container">$\xi \in A$</span>.</p>
<p>Now <span class="math-container">$A_H = A \cap H$</span> is a closed convex set in the hyperplane <span class="math-container">$H$</span> (which we can identify with <span class="math-container">$\mathbb{R}^{n-1}$</span>) that contains no line and is nonempty (for <span class="math-container">$z \in A_H$</span>). By the induction hypothesis, <span class="math-container">$A_H$</span> has extreme points. But an extreme point of <span class="math-container">$A_H$</span> is also an extreme point of <span class="math-container">$A$</span>, for if a point <span class="math-container">$p$</span> of <span class="math-container">$A_H$</span> is represented as a convex combination of two points of <span class="math-container">$A$</span>, then these two points must both lie in <span class="math-container">$A_H$</span>. Thus <span class="math-container">$A$</span> has extreme points.</p>
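<p>The construction of <span class="math-container">$w$</span> in the first paragraph can be verified exactly on a concrete example (my choice of quadratic form; exact rational arithmetic avoids any rounding):</p>

```python
from fractions import Fraction as F

def q(u):
    # Concrete quadratic form q(x, y, z) = x*y + z^2 (an illustrative choice)
    x, y, z = u
    return x * y + z * z

def b(u1, u2):
    # Associated bilinear form, with the answer's convention b(v,w) = q(v+w)-q(v)-q(w)
    return q([s + t for s, t in zip(u1, u2)]) - q(u1) - q(u2)

v = [F(1), F(0), F(0)]        # a non-singular zero of q
w_prime = [F(2), F(3), F(1)]  # any vector with b(v, w') != 0
alpha = b(v, w_prime)
w = [(alpha * wp - q(w_prime) * vi) / alpha**2 for wp, vi in zip(w_prime, v)]

assert q(v) == 0 and q(w) == 0 and b(v, w) == 1
```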
|
1,393,955 | <p>I have the following evolution equations realted to mean curavture flow, with the induced metric $g=\{g_{ij}\}$, measure $d\mu$ and second fundamental form $A=\{h_{ij}\}$:</p>
<p>1)$\frac{\partial}{\partial t}g_{ij}=-2Hh_{ij}$</p>
<p>2)$\frac{\partial}{\partial t}d\mu=-H^2d\mu$</p>
<p>3)$\frac{\partial}{\partial t}h_{ij}=\Delta h_{ij}-2Hh_{il}h^l_j+|A|^2h_{ij}$</p>
<p>Now we consider the Weingarten map $W:T_pM\to T_pM$ associated with $A$ and $g$, given by the matrix $\{h^i_j\}=\{g^{il}h_{lj}\}$, and let $P$ be any invariant symmetric homogeneous polynomial.</p>
<p>Then I wish to prove the result.</p>
<p>If $W=\{h^i_j\}$ is the Weingarten map and $P(W)$ is an invariant polynomial of degree $\alpha$, i.e. $P(\rho W)=\rho^{\alpha}P(W)$, then</p>
<p>1) $\frac{\partial}{\partial t}h^i_j=\Delta h_j^i+|A|^2h_j^i$</p>
<p>2)$\frac{\partial}{\partial t}P=\Delta P- \frac{\partial^2 P}{\partial h_{ij}\partial h_{pq}}\nabla_lh_{ij}\nabla_lh_{pq}+\alpha|A|^2P$</p>
<p>I'm able to prove the first part as follows,</p>
<p>$\frac{\partial}{\partial t}h^i_j=\frac{\partial}{\partial t}(g^{ik}h_{jk})
=\frac{\partial}{\partial t}(g^{ik})h_{jk}+\frac{\partial}{\partial t}(h_{jk})g^{ik}$</p>
<p>$=2Hg^{is}h_{sl}g^{lk}h_{jk}+(\Delta h_{jk}-2Hh_{jl}h^l_k+|A|^2h_{jk})g^{ik}$</p>

<p>$=\Delta h_j^i+|A|^2h_j^i$.</p>

<p>To work out $\frac{\partial}{\partial t}g^{ij}$ you just need to use the fact that $g^{ik}g_{kj}=\delta^i_j$.</p>
<p>I think I'm missing a simple useful result to get part 2) out?</p>
| Anthony Carapetis | 28,513 | <p>I'm guessing the equality you need is</p>
<p>$$h^i_j \frac{\partial P}{\partial h^i_j} = \alpha P.$$</p>
<p>To see that this is true, note that the LHS is the derivative of $P$ in the outwards radial direction; i.e.</p>
<p>$$ h^i_j \frac{\partial P}{\partial h^i_j} = \frac{d}{dt}\Big|_{t=0} P((1+t)W).$$</p>
<p>If we apply the $\alpha$-homogeneity of $P$ to the RHS we get $$\frac{d(1+t)^\alpha}{dt}\Big|_{t=0} P(W)=\alpha P(W).$$</p>
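<p>The Euler-type identity behind this hint, $\sum_{i,j} h^i_j\,\partial P/\partial h^i_j=\alpha P$, can be checked exactly for $P(W)=\operatorname{tr}(W^2)$ (degree $\alpha=2$). The sketch below uses my own choice of matrix and exact rationals:</p>

```python
from fractions import Fraction as F

def P(W):
    # P(W) = tr(W^2): an invariant polynomial, homogeneous of degree alpha = 2
    n = len(W)
    return sum(W[i][k] * W[k][i] for i in range(n) for k in range(n))

W = [[F(1), F(-3)], [F(2), F(5)]]
rho = F(7, 3)
# Homogeneity: P(rho W) = rho^alpha P(W)
assert P([[rho * e for e in row] for row in W]) == rho**2 * P(W)

# Euler's identity, read off from the exact difference quotient of
# t |-> P((1+t)W), which equals (2 + t) P(W) and tends to alpha P(W):
t = F(1, 1000)
diff_quot = (P([[(1 + t) * e for e in row] for row in W]) - P(W)) / t
assert diff_quot == (2 + t) * P(W)
```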
|
4,168,198 | <p>If M is a connected n-manifold and N is a codim 2 submanifold
then M-N is connected.</p>
<p>Is this true?</p>
<p>I want to show <span class="math-container">$H_0(M-N)=\mathbb Z$</span>. I think it's better to do in <span class="math-container">$\mathbb Z_2$</span> coefficient because we get orientability. But because no compactness is mentioned I can't use Poincare duality. How can I resolve this?</p>
| Lee Mosher | 26,501 | <p>This has little to do with orientability, nor compactness, Poincare duality, nor <span class="math-container">$\mathbb Z_2$</span> coefficients. It's just proving path connectivity of <span class="math-container">$M-N$</span>. Also, the proof works in any codimension <span class="math-container">$q \ge 2$</span>, and I'll write it that way.</p>
<p>Consider any <span class="math-container">$x,y \in M-N$</span>.</p>
<p>Start by choosing a continuous path <span class="math-container">$\gamma : [0,1] \to M$</span> such that <span class="math-container">$\gamma(0)=x$</span> and <span class="math-container">$\gamma(1)=y$</span>. Of course the path <span class="math-container">$\gamma$</span> might hit <span class="math-container">$N$</span>.</p>
<p>The intuition is that because <span class="math-container">$N$</span> is of codimension <span class="math-container">$q \ge 2$</span>, when the path <span class="math-container">$\gamma$</span> comes close to <span class="math-container">$N$</span> you can push it away. The result is a path <span class="math-container">$\delta : [0,1] \to M-N$</span>. The endpoints <span class="math-container">$\gamma(0)$</span> and <span class="math-container">$\gamma(1)$</span> will be far enough away that they don't get moved when you push other portions of the path away from <span class="math-container">$N$</span>, hence <span class="math-container">$\delta(0)=\gamma(0)$</span> and <span class="math-container">$\delta(1)=\gamma(1)$</span>, and we obtain a path <span class="math-container">$\delta$</span> in <span class="math-container">$M-N$</span> from <span class="math-container">$x$</span> to <span class="math-container">$y$</span></p>
<p>To make this intuition work one applies the <a href="https://en.wikipedia.org/wiki/Tubular_neighborhood" rel="nofollow noreferrer">Tubular Neighborhood Theorem</a> and the same <a href="https://en.wikipedia.org/wiki/Lebesgue%27s_number_lemma" rel="nofollow noreferrer">Lebesgue number lemma</a> techniques that are used elsewhere in algebraic topology.</p>
<p>Here are some details. The <a href="https://en.wikipedia.org/wiki/Tubular_neighborhood" rel="nofollow noreferrer">Tubular Neighborhood Theorem</a> applied to the submanifold <span class="math-container">$N \subset M$</span> produces an open <span class="math-container">$U \subset M$</span> with <span class="math-container">$N \subset U$</span>, and a continuous retract <span class="math-container">$f : U \to N$</span>, such that <span class="math-container">$f$</span> is a <a href="https://en.wikipedia.org/wiki/Fiber_bundle" rel="nofollow noreferrer">fiber bundle</a> in which the fibers are homeomorphic to the open q-dimensional disc <span class="math-container">$D^q$</span>. Each <span class="math-container">$p \in N$</span> has a path connected open neighborhood <span class="math-container">$V_p \subset N$</span> and a homeomorphism <span class="math-container">$h_p : U_p = f^{-1}(V_p) \approx V_p \times D^q$</span> such that <span class="math-container">$h_p(U_p \cap N) = V_p \times \{0\}$</span>. Because both <span class="math-container">$V_p$</span> and <span class="math-container">$D^q - \{0\}$</span> are path connected (this is where we use <span class="math-container">$q \ge 2$</span>), it follows that <span class="math-container">$h_p(U_p - N) = V_p \times (D^q - \{0\})$</span> is path connected. Because <span class="math-container">$h_p$</span> is a homeomorphism, it follows that <span class="math-container">$U_p - N$</span> is path connected.</p>
<p>You can choose <span class="math-container">$U$</span> to be a subset of any prechosen open neighborhood of <span class="math-container">$N$</span>; let's choose <span class="math-container">$U$</span> to be a subset of <span class="math-container">$M - \{x,y\}$</span>.</p>
<p>Consider the following open covering of <span class="math-container">$M$</span>:
<span class="math-container">$$\mathcal U = \{M-N\} \cup \{U_p \mid p \in N\}
$$</span>
By pullback we get an open covering of <span class="math-container">$[0,1]$</span>:
<span class="math-container">$$\gamma^{-1}(\mathcal U) = \{\gamma^{-1}(M-N)\} \cup \{\gamma^{-1}(U_p) \mid p \in N\}
$$</span>
Applying the Lebesgue number lemma, we obtain a subdivision
<span class="math-container">$$0 = x_0 < x_1 < ... < x_K = 1
$$</span>
such that for each <span class="math-container">$k=1,\ldots,K$</span> the restriction <span class="math-container">$\gamma[x_{k-1},x_k]$</span> has image contained in some element of the open covering <span class="math-container">$\gamma^{-1}(\mathcal U)$</span>.</p>
<p>Using all this structure, let's construct a path <span class="math-container">$\delta : [0,1] \to M-N$</span> with <span class="math-container">$\delta(0)=x$</span>, <span class="math-container">$\delta(1)=y$</span>.</p>
<p><strong>Step 1:</strong> If <span class="math-container">$\gamma(x_k) \not\in N$</span>, let <span class="math-container">$\delta(x_k)=\gamma(x_k)$</span>. And if <span class="math-container">$\gamma[x_{k-1},x_k]$</span> is disjoint from <span class="math-container">$N$</span>, let <span class="math-container">$\delta(x)=\gamma(x)$</span> for all <span class="math-container">$x \in [x_{k-1},x_k]$</span>. Note that this case applies whenever <span class="math-container">$\gamma[x_{k-1},x_k]$</span> contains a point that is not in <span class="math-container">$U$</span>, because <span class="math-container">$M-N$</span> is the only element of the open cover <span class="math-container">$\mathcal U$</span> that has nonempty intersection with <span class="math-container">$M-U$</span>. So, for example, <span class="math-container">$\delta$</span> equals <span class="math-container">$\gamma$</span> on the subintervals <span class="math-container">$[x_0,x_1]$</span> and <span class="math-container">$[x_{K-1},x_K]$</span>.</p>
<p><strong>Step 2:</strong> For any <span class="math-container">$x_k$</span>, if <span class="math-container">$\gamma(x_k) \in N$</span> then choose <span class="math-container">$\delta(x_k)$</span> to be a point of <span class="math-container">$U_{\gamma(x_k)} - N$</span> such that <span class="math-container">$f(\delta(x_k)) = \gamma(x_k)$</span>.</p>
<p><strong>Step 3:</strong> For any <span class="math-container">$k=1,...,K$</span>, if <span class="math-container">$\gamma[x_{k-1},x_k] \cap N \ne \emptyset$</span> then there exists <span class="math-container">$p \in N$</span> such that <span class="math-container">$\gamma[x_{k-1},x_k] \subset U_p$</span>. In Steps 1 and 2 we have already found endpoint values <span class="math-container">$\delta(x_{k-1})$</span>, <span class="math-container">$\delta(x_k)$</span> in <span class="math-container">$U_p - N$</span>. Since <span class="math-container">$U_p - N$</span> is path connected, we can define <span class="math-container">$\delta$</span> on <span class="math-container">$[x_{k-1},x_k]$</span> with those endpoint values.</p>
|
1,811,567 | <p>We had a high school mathematics teacher who taught us a cool technique that I've forgotten. It can be used, for example, for developing a formula for the sum of squares for the first "n" integers. You start by making a column for Sn, and then determine the differences until you get a constant. See the picture.</p>
<p><img src="https://i.stack.imgur.com/1iEYU.jpg" alt="example - sorry about the rotated picture">
(sorry about the rotated picture)</p>
<p><strong>How do you proceed from here to the formula?</strong></p>
| Abr001am | 223,829 | <ul>
<li>Based on the fact that the difference of two consecutive squares is an odd number:</li>
</ul>
<p>$(n+1)^2-n^2=2n+1$</p>
<p>and the difference of two consecutive odd numbers is always $2$, which is exactly the constant second difference appearing in the table.</p>

<p>So to obtain a square $n^2$ we sum all odd numbers from $1$ to $2n-1$, an arithmetic sequence with common difference $2$:</p>
<p>$n^2=\frac{n(2n)}{2}=\frac{\frac{(2n)(2n)}{2}}{2}=\frac{\frac{(2n-1)(2n)}{2}+n}{2}=\frac{\binom{2n}{2}+n}{2}$</p>
<p>Also we note that </p>
<p>$n^2=\frac{\frac{(2n)(2n)}{2}}{2}=\frac{\frac{(2n+1)(2n)}{2}-n}{2}=\frac{\binom{2n+1}{2}-n}{2}$</p>
<p>Adding the two formulas and summing over $i=1,\dots,n$ gives $$\sum_{i=1}^{n}{\frac{\binom{2i}{2}+i}{2}}+\sum_{i=1}^{n}{\frac{\binom{2i+1}{2}-i}{2}}=\sum_{i=1}^{n}{\frac{\binom{2i}{2}+\binom{2i+1}{2}}{2}}=\sum_{j=2}^{2n+1}{\frac{\binom{j}{2}}{2}}=2\sum_{i=1}^{n}{i^2},$$ which is just the sum of the values in the second column of Pascal's triangle (even and odd binomials) up to row $2n+1$, divided by $2$.</p>
<pre><code> 1
1 1
1 2 |1 |
1 3 |3 | 1
1 4 |6 | 4 1
1 5 |10| 10 5 1
1 6 |15| 20 15 6 1
. . \ \
. . \ \
. . \ \
1 7 21 \35\ . . . .
</code></pre>
<p>Summing up the second column yields the Pascal value one row down and one column to the right (the hockey-stick identity), which is known to be $\binom{2n+2}{3}$.</p>

<p>So $$2\sum_{i=1}^{n}{i^2}=\frac{\binom{2n+2}{3}}{2}=\frac{1}{2}\cdot\frac{2n(2n+1)(2n+2)}{6}$$</p>
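<p>The identity is easy to confirm numerically (a quick sketch, not part of the derivation):</p>

```python
from math import comb

# 2 * sum_{i=1}^n i^2 = C(2n+2, 3) / 2, equivalently 4 * sum i^2 = C(2n+2, 3)
for n in range(1, 200):
    assert 4 * sum(i * i for i in range(1, n + 1)) == comb(2 * n + 2, 3)
```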
|
2,936,356 | <p>Let <span class="math-container">$a,b>0$</span>. How can I calculate the roots of the following polynomial?</p>
<p><span class="math-container">$$2bx^6 + 4bx^5+(4b-a)x^4 + 4(b+a)x^3 + (2b-6a)x^2+4ax-a=0$$</span></p>
| Robert Israel | 8,508 | <p>There is really only one parameter here rather than two: if <span class="math-container">$s = a/b$</span>, you can divide by <span class="math-container">$b$</span> and write the equation as
<span class="math-container">$$ 2\,{x}^{6}+4\,{x}^{5}- \left( s-4 \right) {x}^{4}+4\, \left( s+1
\right) {x}^{3}-2\, \left( 3\,s-1 \right) {x}^{2}+4\,sx-s=0$$</span></p>
<p>But in general you're not going to get a nice closed-form solution.
E.g. for <span class="math-container">$s=1$</span> this is an irreducible sextic over the rationals with Galois group <span class="math-container">$S_6$</span>. There is no solution in radicals. Of course you can use numerical methods for specific values of <span class="math-container">$s$</span>. Or you might use a series expansion: for small <span class="math-container">$s$</span>, one root is
<span class="math-container">$$ {\frac {\sqrt {2}\sqrt {s}}{2}}-{\frac {3s}{2}}+{
\frac {25\,\sqrt {2}{s}^{3/2}}{8}}-{\frac {61\,{s}^{2}}{4
}}+{\frac {2615\,\sqrt {2}{s}^{5/2}}{64}}-{\frac {
1863\,{s}^{3}}{8}}+{\frac {177433\,\sqrt {2}{s}^{7/2}}{256}}-{\frac {68165\,{s}^{4}}{16}}+{\frac {54967659\,
\sqrt {2}{s}^{9/2}}{4096}}-{\frac {2758467\,{s}^{5}}{32}}+{\frac {4607808079\,\sqrt {2}{s}^{11/2}}{16384}}+\ldots
$$</span>
and another is obtained by replacing <span class="math-container">$s^{k/2}$</span> by <span class="math-container">$-s^{k/2}$</span> for odd <span class="math-container">$k$</span>.</p>
|
2,694,122 | <blockquote>
<p>Given that a function $f$ is defined as $$f(x)=1+2x+3x^2+4x^3+...$$. We have to prove that $f$ is continuous on $[0,\frac{1}{8}]$ and evaluate $$\int_{0}^{\frac{1}{8}}f(x)dx$$ </p>
</blockquote>
<p>I am not sure in a particular area. I have proved $f$ is continuous on $[0,\frac{1}{8}]$. But the evaluation of integral gives </p>
<p>$$\int_{0}^{\frac{1}{8}}f(x)dx=\sum_{n=1}^{\infty}\frac{1}{8^{n}}=\frac{8}{7}(1-({\frac{1}{8})^n})= \frac{8}{7}(1-\frac{1}{8^n})$$. </p>
<p>I think I am wrong somewhere in calculations. Please guide me where I am wrong. Any help or suggestion will be precious.</p>
| user284331 | 284,331 | <p>For $f(x)=\displaystyle\sum_{n=0}^{\infty}(n+1)x^{n}$, $x\in[0,1/8]$, we have $f(x)\leq\displaystyle\sum_{n=0}^{\infty}(n+1)/8^{n}<\infty$ by the ratio test, and by the Weierstrass M-Test, the series is uniformly convergent, so we are allowed to swap the sum and the integral: $\displaystyle\int_{0}^{1/8}f(x)dx=\displaystyle\sum_{n=0}^{\infty}(n+1)\int_{0}^{1/8}x^{n}dx=\sum_{n=0}^{\infty}(1/8)^{n+1}=\dfrac{(1/8)}{1-(1/8)}=1/7$.</p>
<p>The mistake there is that there should be no $n$ left, there is the limit taken.</p>
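<p>Both routes to $1/7$ are easy to check numerically (a sketch; the truncation level and grid size below are arbitrary choices of mine):</p>

```python
# Partial sums of sum_{n>=0} (1/8)^(n+1):
series = sum((1 / 8) ** (n + 1) for n in range(60))
assert abs(series - 1 / 7) < 1e-12

# Midpoint Riemann sum of f(x) = sum (n+1) x^n = 1/(1-x)^2 over [0, 1/8]:
N = 10_000
h = (1 / 8) / N
riemann = sum(1 / (1 - (k + 0.5) * h) ** 2 for k in range(N)) * h
assert abs(riemann - 1 / 7) < 1e-9
```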
|
2,694,122 | <blockquote>
<p>Given that a function $f$ is defined as $$f(x)=1+2x+3x^2+4x^3+...$$. We have to prove that $f$ is continuous on $[0,\frac{1}{8}]$ and evaluate $$\int_{0}^{\frac{1}{8}}f(x)dx$$ </p>
</blockquote>
<p>I am not sure in a particular area. I have proved $f$ is continuous on $[0,\frac{1}{8}]$. But the evaluation of integral gives </p>
<p>$$\int_{0}^{\frac{1}{8}}f(x)dx=\sum_{n=1}^{\infty}\frac{1}{8^{n}}=\frac{8}{7}(1-({\frac{1}{8})^n})= \frac{8}{7}(1-\frac{1}{8^n})$$. </p>
<p>I think I am wrong somewhere in calculations. Please guide me where I am wrong. Any help or suggestion will be precious.</p>
| Jack D'Aurizio | 44,121 | <p>As an alternative approach, it is not difficult to notice that $(1-x)^2 f(x)$ has a simple expression.<br>
In general, if $f(x)=\sum_{n\geq 0}p(n) x^n$ with $p$ being a polynomial with degree $d$, $(1-x)^{d+1}f(x)$ has a simple expression, since the multiplication by $(1-x)$ acts on the coefficients of $f(x)$ as the backward difference operator. In our case
$$(1-x)^2\sum_{n\geq 0}(n+1)x^n =1$$
(massive cancellation!) for any $x\in(-1,1)$, hence you are just asked to check that $\frac{1}{(1-x)^2}$ is continuous on $\left[0,\frac{1}{8}\right]$, which is obvious since $(1-x)^2$ is continuous and bounded below by a positive constant, and to compute
$$ \int_{0}^{1/8}\frac{dx}{(1-x)^2} = \int_{7/8}^{1}\frac{dx}{x^2}=\frac{8}{7}-1=\frac{1}{7}.$$
Of course, the RHS can also be computed from
$$ \int_{0}^{1/8}\sum_{n\geq 0}(n+1) x^n\,dx = \sum_{n\geq 0}(n+1)\int_{0}^{1/8}x^n\,dx = \sum_{n\geq 0}\frac{1}{8^{n+1}} = \frac{1}{7}\quad\text{(geometric series).}$$</p>
|
2,268,225 | <p>I want to investigate the convergence behavior of $$\int_{0}^{\infty} \cos(x^r)\, dx \hspace{5mm} \textrm{and} \hspace{5mm} \int_{\pi}^{\infty} \left(\frac{\cos x}{\log x}\right)\arctan\lfloor x\rfloor dx.$$ My theoretical tools are Abel's test and Dirichlet's test:
Say I have an integral of the form $$\int_{a}^{b}f\cdot g \hspace{1.5mm} dx$$
with improperness (vertical or horizontal asymptote) at $b$.</p>
<p>Abel's test guarantees convergence if $g$ is monotone and bounded on $(a,b)$, and $\int_{a}^{b}f $ converges. Dirichlet's test guarantees convergence if $g$ is monotone on $(a,b)$ and $\displaystyle\lim_{x\to b} g(x) = 0$, and $\displaystyle\lim_{\beta \to b}$ $\int_{a}^{\beta}f $ is bounded.</p>
<p>For the first integral $\displaystyle\int_{0}^{\infty} \cos(x^r)\, dx $ I'm guessing a substitution $t = x^r $ will give me an expression of the form $f\cdot g$ with $\cos(t)$ as my $f$. For the second integral $\displaystyle\int_{\pi}^{\infty} \dfrac{\cos x}{\log x}\arctan\lfloor x\rfloor\, dx$, I'm (even more) clueless. Help please?</p>
| k.Vijay | 428,609 | <p>Apply $(1-x)^{-2}=1+2x+3x^2+4x^3+\cdots\infty$.</p>
<p>Now, </p>
<p>$\dfrac{1}{4}+\dfrac{2}{8}+\dfrac{3}{16}+\dfrac{4}{32}+\dfrac{5}{64}+\cdots\infty\\
=\dfrac{1}{4}\left(1+\dfrac{2}{2}+\dfrac{3}{2^2}+\dfrac{4}{2^3}+\dfrac{5}{2^4}+\cdots\infty\right)\\
=\dfrac{1}{4}\left(1-\dfrac{1}{2}\right)^{-2}\hspace{25pt}\text{ here }x=\dfrac{1}{2}.\\
=\dfrac{1}{4}\times4=1.$</p>
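<p>A quick partial-sum check (Python, illustrative only) confirms the value:</p>

```python
# Partial sums of 1/4 + 2/8 + 3/16 + ... = (1/4) * sum_{n>=0} (n+1) (1/2)^n.
# The identity (1-x)^{-2} = sum (n+1) x^n at x = 1/2 predicts the value 1.
partial = sum((n + 1) / 2 ** n for n in range(200)) / 4
print(partial)  # ~1.0
```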
|
54,088 | <p>I am studying KS (Kolmogorov-Sinai) entropy of order <em>q</em> and it can be defined as</p>
<p>$$
h_q = \sup_P \left(\lim_{m\to\infty}\left(\frac 1 m H_q(m,ε)\right)\right)
$$</p>
<p>Why is it defined as supremum over all possible partitions <em>P</em> and not maximum? </p>
<p>When do people use supremum and when maximum?</p>
| Eric Naslund | 6,075 | <p>If the maximum exists, then the supremum and maximum are the same. However sometimes the maximum does not exist, and there is no maximal element. In this case it still makes sense to talk about a least upper bound.</p>
<p>The classic example is the set of all rationals whose square is less than or equal to $2$. That is the set $$A=\left\{ r\in\mathbb{Q}:\ r^{2}\leq2\right\}.$$</p>
<p>$A$ has no maximal element, however it does have a supremum and $\sup A=\sqrt{2}$.</p>
<p>An even simpler example is the set of all reals that are strictly less than $2$: $$B=\left\{ r\in\mathbb{R}:\ r<2\right\}.$$ This set has no maximum since for any $x\in B$ the element $\frac{x+2}{2}$ satisfies $x<\frac{x+2}{2}<2$. However it is not hard to see that $\sup B=2$.</p>
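<p>The first example can also be watched numerically (a Python sketch, not part of the original answer): among rationals with bounded denominator, the largest element of $A$ creeps up toward $\sqrt{2}$ without ever reaching it — a supremum with no maximum.</p>

```python
# For each denominator bound q_max, find the largest rational p/q with
# q <= q_max and (p/q)^2 <= 2. These maxima increase toward sqrt(2)
# but never equal it.
from fractions import Fraction
from math import isqrt, sqrt

def best_below(q_max):
    # isqrt(2*q*q) is the largest p with p^2 <= 2*q^2
    return max(Fraction(isqrt(2 * q * q), q) for q in range(1, q_max + 1))

for q_max in (1, 10, 100, 1000):
    r = best_below(q_max)
    print(q_max, float(r), sqrt(2) - float(r))
```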
|
211,297 | <p>Let $ \mathbb{F} $ be an uncountable field. Suppose that $ f: \mathbb{F}^{2} \rightarrow \mathbb{F} $ satisfies the following two properties:</p>
<ol>
<li>For each $ x \in \mathbb{F} $, the function $ f(x,\cdot): y \mapsto f(x,y) $ is a polynomial function on $ \mathbb{F} $.</li>
<li>For each $ y \in \mathbb{F} $, the function $ f(\cdot,y): x \mapsto f(x,y) $ is a polynomial function on $ \mathbb{F} $.</li>
</ol>
<p>Is it necessarily true that $ f $ is a bivariate polynomial function on $ \mathbb{F}^{2} $? What if $ \mathbb{F} $ is merely countably infinite?</p>
| Gerry Myerson | 8,269 | <p>Maybe this works for the countably infinite case. Order the rationals (or whatever countably infinite field you have) as $r_1,r_2,\dots$. Let $$f(x,y)=(x-r_1)(y-r_1)+(x-r_1)(x-r_2)(y-r_1)(y-r_2)+\cdots$$ Then if $r$ is any rational, say, $r=r_j$, then $f(r,y)$ is a polynomial of degree $j-1$ in $y$, and similarly for $f(x,r)$. But clearly $f$ is not a polynomial function --- what would be its degree?</p>
|
2,497,072 | <p>Write using logical symbols: "Every even number greater than two is the sum of two prime numbers" <br>
This is how I attempted to write this:
$$((n=2m \land n > 2) \Rightarrow n=p_1+p_2)\land(x|p_1 \Rightarrow x=1 \lor x=p_1) \land (y|p_2 \Rightarrow y=1 \lor y=p_2) \land (p_1 >1 \land p_2 >1)$$
I am not sure if I haven't confused logical operations. I also think whether I should have used "iff" in the first parentheses instead. What should I change in my solution?</p>
| fonini | 113,664 | <p>(I'm assuming this is homework; by the way, you should provide more context in your question, like where the question came from, and any restrictions like "quantifiers are not allowed".)</p>
<p>Anyway, I don't know what your teacher wants you to do, but I think you should be aware that you <em>can't</em> write "Every even number etcetc" without quantifiers. Look at the statement $$\left(2\mid n\right)\implies\left(\mbox{$n$ is the sum of two primes}\right)$$
for example. It does <em>not</em> mean that "every even $n$ is the sum of two primes". It means, by definition, the same as:
$$\left(2\nmid n\right)\lor\left(\mbox{$n$ is the sum of two primes}\right).$$
What this means is: for some given $n$ (I didn't say what's the value of $n$, but it's some fixed value), either this $n$ is not even, or it is the sum of two primes. Now, what we want to do is state this for <em>all</em> $n$. Is is commonly assumed, when there are just a few variables (like in my example), that all variables that were not declared have an universal quantifier attached to it. That is, it is commmonly assumed that this:
$$\left(2\mid n\right)\implies\left(\mbox{$n$ is the sum of two primes}\right)$$
is just a shorthand for this:
$$\forall n, \left(2\mid n\right)\implies\left(\mbox{$n$ is the sum of two primes}\right).$$</p>
<p>Ok, now on to your question. The statement I wrote above is, as you know, wrong, since $2$ is not a sum of two primes. The statement should read:
$$\forall n, \left(2\mid n\land n>2\right)\implies\left(\mbox{$n$ is the sum of two primes}\right).$$
Now that it's not <a href="https://en.wikipedia.org/wiki/Goldbach%27s_conjecture" rel="nofollow noreferrer">obviously wrong</a> anymore, we must write "$n$ is the sum of two primes" using logic symbols. This is done like "There exist two primes such that $n$ is their sum". Again, I don't believe you could do it without quantifiers. Anyway, you would write it like:
$$\exists p_1,\exists p_2, \mbox{$p_1$ is prime} \land \mbox{$p_2$ is prime} \land n=p_1+p_2.$$</p>
<p>You <em>could</em> use the fact that $\forall x$ is the same as $\neg\exists x\neg$ to rewrite it as
$$\neg\forall p_1,\forall p_2, \left( \mbox{$p_1$ is prime} \land \mbox{$p_2$ is prime}\right) \implies n\neq p_1+p_2$$
And then you could drop the quantifiers, since they <em>could</em> be regarded as implicit:
$$\neg\left[ \left( \mbox{$p_1$ is prime} \land \mbox{$p_2$ is prime}\right) \implies n\neq p_1+p_2\right]$$
but I hope you agree that this is quite unreadable.</p>
<p>Getting back to our problem, our statement is:
$$\forall n, \left(2\mid n\right)\implies\left(\mbox{$n$ is the sum of two primes}\right)$$
and we have already rewritten it as:
$$\forall n, \left(2\mid n\right)\implies
\exists p_1,\exists p_2, \mbox{$p_1$ is prime} \land \mbox{$p_2$ is prime} \land n=p_1+p_2$$
Now, we just have to write "$p$ is prime" using logic. Writing it as "if $n$ is a positive divisor of $p$, then $n$ is either $1$ or $p$", I hope you can do it by yourself, after all of this (as you have already done in your question).</p>
<hr>
<p>As for the way you have written it, I can see where you want to get, but I wouldn't consider it "correct". The way it's written, it means "$p_1$ and $p_2$ are prime, and $n$ is a natural number such that either it isn't an even greater than $2$, or else it equals $p_1+p_2$". You still haven't said in your logic statement that $p_1$ and $p_2$ depend on the value of $n$, and that this happens for any value of $n$ we come up with.</p>
<hr>
<p>Finally, you asked whether you should have used $\iff$ instead of $\implies$ in $\left(2\mid n\right)\implies\left(\mbox{$n$ is the sum of two primes}\right)$. No, you shouldn't. The point is to write that "every even greater than two is a sum of two primes", not that "$n$ is an even greater than $2$ iff it is a sum of two primes". Actually, the converse is just wrong. If a number is a sum of two primes, it doesn't mean that it is "even and greater than two". Even if the converse were right, you could still write just the forward implication, and it would be correct.</p>
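<p>The finished statement is easy to check by brute force for small $n$ (a Python sketch, not part of the original answer): the inner existential quantifier becomes a search over pairs.</p>

```python
# Verify: for every even n with 2 < n <= 100 there exist primes p1, p2
# with n = p1 + p2.
def is_prime(m):
    if m < 2:
        return False
    return all(m % d for d in range(2, int(m ** 0.5) + 1))

for n in range(4, 101, 2):
    assert any(is_prime(p) and is_prime(n - p) for p in range(2, n)), n
print("every even n in (2, 100] is a sum of two primes")
```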
|
2,417,029 | <p>I have been told that a line segment is a set of points.
How can even infinitely many points, each of length zero, make a line of positive length?</p>
<p>Edit:
As an undergraduate I assumed it was due to having uncountably many points.
But the Cantor set has uncountably many elements and it has measure $0$.</p>
<p>So having uncountably many points on a line is not sufficient for the measure to be positive.</p>
<p>My question was: what else is needed? It appears from the answers I've seen that the additional thing needed is the topology and/or the sigma algebra within which the points are set.</p>
<p>My thanks to those who have helped me figure out where to look for full answers to my question.</p>
| gen-ℤ ready to perish | 347,062 | <p>Perhaps you create a mapping from the length of each segment to the number of segments necessary, namely $n= (1 \ \mathrm{cm})/{\ell}$. Graph this function with $n$ on the vertical axis and $\ell / \mathrm{cm}$ on the horizontal axis. Show them that where $\ell = 0 \ \mathrm{cm}$, which is the “length” of a point, it is implied / we can conclude / for the purpose of completeness, we declare / et cetera that $n=\infty$.</p>
<p>I think this would be a wonderful introduction to limits. I think you could reinforce this claim by proving out or proving that the graph is valid for all other positive values of $n$. I know that this helped me think through calculus when I was learning.</p>
|
2,417,029 | <p>I have been told that a line segment is a set of points.
How can even infinitely many points, each of length zero, make a line of positive length?</p>
<p>Edit:
As an undergraduate I assumed it was due to having uncountably many points.
But the Cantor set has uncountably many elements and it has measure $0$.</p>
<p>So having uncountably many points on a line is not sufficient for the measure to be positive.</p>
<p>My question was: what else is needed? It appears from the answers I've seen that the additional thing needed is the topology and/or the sigma algebra within which the points are set.</p>
<p>My thanks to those who have helped me figure out where to look for full answers to my question.</p>
| Cort Ammon | 153,899 | <p>First off, consider instead trying to argue that a line can be made up of infinitely many small lines, each of infintessimally short length. This will be closer to the way calculus handles lines, so will serve them well in the future.</p>
<p>If you must use points, one approach you might be able to take is to start with two points, $0$ and $1$. Then draw in the mid point ($\frac{1}{2}$). Then draw in the $2$ midpoints from there ($\frac{1}{4}$ and $\frac{3}{4}$). Then the $4$ midpoints, then $8$, and so forth. In each stage, you draw more and more midpoints. If you do this an infinite number of times, you will fill in the line completely. (<em>I'm pretty sure you just need to do a countably infinite number of steps here, for iteration $N$ of this midpoint filling process fills in $2^N$ points, so by the continuum hypothesis, that process will produce a number of points equal to the cardinality of the real numbers. Stronger mathematicians, please check!</em>).</p>
<p>As a bonus, this process <em>visually</em> fills in the line rather quickly, while the formalism showing that the line is connected at the end of the process can be left for higher mathematics.</p>
|
657,992 | <p>I understand how to explain but can't put it down on paper.</p>
<p>$\displaystyle \frac{a}{b}$ and $\displaystyle \frac{c}{d}$ are rational numbers. For there to be any rational between two numbers I assume $\displaystyle \frac{a}{b} < \frac{c}{d}$.</p>
<p>I let $\displaystyle x = \frac{a}{b}$ and $\displaystyle y = \frac{c}{d}$ so $x < y$. The number in the middle of $x$ and $y$ is definitely a rational number, so $\displaystyle x < \frac{x+y}{2} < y$. I know that there is another middle between $x$ and $\displaystyle \frac{x+y}{2}$ and it keeps on going. How would I write it as a proof?</p>
| Joe Anderson | 249,953 | <p>I wanted to add to Cameran Buie's answer with the actual proof by induction, since it seems that the question was "How do I word the proof?" I'm going overboard here, being fully precise; in any normal situation, much of this reasoning can be brushed over and still get the point across.</p>
<h3>First, a precise wording of what the original post has so far.</h3>
<p>Let $x=a/b$ and $y=c/d$ be two distinct rational numbers. We may as well assume $x < y$, since we can easily swap the letter assignment otherwise. We may also assume $b,d$ are positive since they are nonzero and we can multiply be $\frac{-1}{-1}$ to swap signs. We want to show that there are infinitely many other rational numbers in between these.</p>
<p>We will start by showing that the average of $x$ and $y$ is in between $x$ and $y$. (This can be done in many ways. I'll do it the most tedious but direct way here.) The average is
$$
q=\frac{x+y}{2} = \frac{\frac ab + \frac cd}{2} = \frac{ad+bc}{2bd}
$$
which is a rational number because $x,y$ are rational imply $b,d$ are nonzero imply $2bd$ is nonzero.</p>
<p>To show that $x\lt q\lt y$, we can show the left is true:
$$
\begin{aligned}
x &\lt y && \text{given} \\
\frac ab &\lt \frac cd && \text{substitution} \\
ad &\lt bc &&\text{by multiplying by $bd$}\\
&&&\text{(where $b,d$ are positive)} \\
2ad&\lt ad+bc && \text{by adding $ad$}\\
\frac ab&\lt \frac{ad+bc}{2bd} && \text{by dividing by $2bd$} \\
&&&\text{(where $b,d$ are positive)} \\
x&\lt q && \text{substitution}
\end{aligned}
$$</p>
<p>The right side, $q\lt y$, is shown to be true in a very similar fashion. (To figure out these steps, work them backwards, starting from what you want to show and simplifying to what you know.)</p>
<p>So, we now know that between any distinct rationals $x,y$, there is a rational $q$ between them.</p>
<h3>Now, a precise wording of the proof by induction.</h3>
<p>Let $x,y$ be given distinct rationals as before, and we will show that there are infinitely many distinct rationals between them.</p>
<p>In the base step, we show that there is $1$ distinct rational between $x,y$. We proved that for $q$ above.</p>
<p>In the induction step, we assume that we have shown there are $n$ distinct rationals between $x,y$ for some positive integer $n$, and will show that there are $n+1$ distinct rationals between $x,y$. Let $y_0$ be the minimum of all the $n$ distinct rationals between $x,y$. By the above proof, there must be a rational $q$ between $x,y_0$, and since it is less than the smallest of the others between $x,y$, it must be a new distinct rational between $x,y$. Thus there are $n+1$ distinct rationals between $x,y$.</p>
<p>By induction, there are infinitely many distinct rationals between $x,y$. (The induction is saying we know there is one of them, and for every positive-integer number of them there is one more, so we must have every-positive-integer number of them, which is the same as "infinitely many".)</p>
<h3>A note on proof by contradiction.</h3>
<p>For some people, proof by contradiction is reserved as a last resort. Sure, a proof by contradiction can be shorter, but if you can avoid it, such as here with the proof by induction, then you usually gain more insight into how to actually <em>construct</em> your answer.</p>
<p>In this case, that's shown by how, if you actually had two rationals, say $0$ and $1$, then the proof by induction tells you exactly what infinite sequence of rationals you are finding, say $1/2^n$ for each positive integer $n$. The proof by contradiction tells you there's infinitely many, but not how to find them.</p>
<p>This is all a matter of personal preference. Whichever you find most intuitive should stay close to your heart; whichever you find most convincing to others should be what you write down.</p>
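<p>The induction step can be made completely concrete (a Python sketch using exact rational arithmetic; the choice $x=0$, $y=1$ is just an example): each pass takes the midpoint of $x$ and the smallest rational found so far, producing one new rational strictly between $x$ and $y$.</p>

```python
# Generate 10 distinct rationals strictly between x = 0 and y = 1 by
# repeatedly halving toward x, exactly as in the induction step.
from fractions import Fraction

x, y = Fraction(0), Fraction(1)
found = []
smallest = y
for _ in range(10):
    q = (x + smallest) / 2
    assert x < q < y and q not in found
    found.append(q)
    smallest = q
print(found)  # Fraction(1, 2), Fraction(1, 4), ..., Fraction(1, 1024)
```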
|
2,938,491 | <p>I have been pondering this question for a little while, and unfortunately the Google has not given me an answer.</p>
<p>I understand that for example you had a table or graph that crosses or touches the x axis at say x= -2, 0, 3 you could form an equation as</p>
<p>f(x) = ax(x+2)(x-3)</p>
<p>Then solve for a and you have your function.</p>
<p>I have considered transforming a graph to force zeros, and in the couple of attempts I made, it was successful, but I am unsure if this would be the mathematically proper way to do so.</p>
<p>So my question is: if you have a graph or table of coordinates similar to my example above, but the points never cross zero, what would be the proper mathematical procedure to find the equation?</p>
<p>UPDATE</p>
<p>y = x^2. Vertex = 0,0 and zero = 0 </p>
<p>In comparison to:</p>
<p>y = (x-1)^2+1 vertex = 1,1 and zero = null</p>
<p>Its the same form but in a different position. In this situation the functions were provided, but for clarification of what I am looking for, I thought this would help. </p>
<p>Thank you.</p>
| Andrei | 331,661 | <p><span class="math-container">$$S(n)=(1+2n)^2=1+4n+4n^2$$</span>
You can now use the following <span class="math-container">$$\sum_{n=0}^m1=m+1\\\sum_{n=0}^mn=\frac{m(m+1)}{2}\\\sum_{n=0}^mn^2=\frac{m(m+1)(2m+1)}{6}$$</span></p>
<p>Alternatively, compute the first 4-5 elements. The sum of a polynomial of order <span class="math-container">$p$</span> will be a polynomial of order <span class="math-container">$p+1$</span> in the number of terms. Find the coefficients, then prove by induction.</p>
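<p>Combining the three standard sums gives a closed form for <span class="math-container">$\sum_{n=0}^{m}(1+2n)^2$</span>, which is easy to sanity-check (Python, illustrative only):</p>

```python
# Closed form (m+1) + 4*m(m+1)/2 + 4*m(m+1)(2m+1)/6 against the direct sum.
def closed_form(m):
    return (m + 1) + 4 * m * (m + 1) // 2 + 4 * m * (m + 1) * (2 * m + 1) // 6

for m in range(50):
    assert closed_form(m) == sum((1 + 2 * n) ** 2 for n in range(m + 1))
print("closed form agrees for m = 0..49")
```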
|
148,612 | <p>You are given a rectangular paper sheet. The diagonal vertices of the sheet are brought together and folded so that a line (mark) is formed on the sheet. If this mark length is same as the length of the sheet, what is the ratio of length to breadth of the sheet?</p>
<p>This is my first question on this site, so if this is not a good question please help.</p>
| TonyK | 1,508 | <p>$\hskip 2in$ <img src="https://i.stack.imgur.com/EjFV4.jpg" alt="enter image description here"></p>
<p>We are given that $ZY = BC$, and we want to find $BC/AB$. To keep things simple, choose units so that $AB=1$; and let $s = AC$ and $t = BC$. Then we want to find $t$.</p>
<p>Triangle $XYC$ is similar to triangle $BAC$, so $CX/XY = BC/AB$.<br>
$ZY=2XY$ and $AC=2CX$, so $AC/ZY = BC/AB$.<br>
And we are given that $ZY=BC$, so $AC/BC = BC/AB$.</p>
<p>Thus $s = t^2$. And by Pythagoras, $s^2 = 1 + t^2$. Substituting for $s$, we get $t^4 - t^2 - 1 = 0$, which (as Marvis already showed) gives $t = \sqrt \phi$.</p>
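<p>Numerically (Python, just a check of the algebra above): $t=\sqrt\phi\approx 1.272$ indeed solves $t^4-t^2-1=0$.</p>

```python
# The length-to-breadth ratio is sqrt(phi), with phi the golden ratio.
from math import sqrt

phi = (1 + sqrt(5)) / 2
t = sqrt(phi)
print(t)                    # ~1.2720196
print(t ** 4 - t ** 2 - 1)  # ~0 (floating-point residual)
```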
|
162,364 | <p>Assuming we have a list below, it has real number elements and complex numbers, how can I quickly find if the list has any Real number that its value is less than 1.0? </p>
<pre><code> lis = {3 Sqrt[354], Sqrt[2962], Sqrt[2746], 3 Sqrt[282], Sqrt[2338],
Sqrt[2146], 3 Sqrt[218], Sqrt[1786], Sqrt[1618], 27 Sqrt[2], Sqrt[
1306], Sqrt[1162], 3 Sqrt[114], Sqrt[898], Sqrt[778], 3 Sqrt[74],
Sqrt[562], Sqrt[466], 3 Sqrt[42], Sqrt[298], Sqrt[226], 9 Sqrt[2],
Sqrt[106], Sqrt[58], 3 Sqrt[2], I Sqrt[14], I Sqrt[38], 3 I Sqrt[6],
I Sqrt[62], I Sqrt[62], 3 I Sqrt[6], I Sqrt[38], I Sqrt[14], Sqrt[
1.2], Sqrt[58], Sqrt[1.06], 9 Sqrt[2], Sqrt[226]}
</code></pre>
<p>I am also considering if we can find methods to filter any real or complex numbers with specific values in any types list (e.g. there are some string elements mixed with real and complex number elements in a given list). But this could be another question and isn't necessary for my example. Please leave some advice if you interested! Thanks in advance!</p>
| aardvark2012 | 45,411 | <p>There's always <code>Select</code>:</p>
<pre><code>Select[lis, Negative[# - 1] &]
</code></pre>
<p><a href="http://reference.wolfram.com/language/ref/Negative.html" rel="nofollow noreferrer"><code>Negative</code></a> is nice because it'll work on pretty much anything, but only return <code>True</code> if the argument is a negative real number. Testing it on a longer list:</p>
<pre><code>SeedRandom[123456]
lis = RandomSample[
Join[RandomReal[2, 10], RandomComplex[{-1 - I, 1 + I}, 10]],
20];
Select[lis, Negative[# - 1] &]
(* {0.982588, 0.86323, 0.0961836, 0.963281, 0.780961, 0.225647, 0.34709} *)
</code></pre>
<p>But the question is a little ambiguous -- the phrase "how can I quickly find if the list has any Real number that its value is less than 1.0?" indicates that you don't actually care what the numbers <em>are</em>, just whether they're there or not. If that's what you're really after, you can do:</p>
<pre><code>CountsBy[lis, Negative[# - 1] &][True]
(* 7 *)
</code></pre>
<p>As far as generalization goes, <code>Select</code> is so incredibly generalizable that it's hard to know where to start. If, say, there are strings involved in your list, these approaches still work as they are:</p>
<pre><code>SeedRandom[123456]
lis = RandomSample[
Join[RandomReal[2, 10], RandomComplex[{-1 - I, 1 + I}, 10],
RandomChoice[CharacterRange["a", "z"], 10]], 30]
Select[lis, Negative[# - 1] &]
CountsBy[lis, Negative[# - 1] &][True]
(* {0.0961836, 0.982588, 0.225647, 0.780961, 0.963281}
5 *)
</code></pre>
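<p>For readers outside the Wolfram Language, the same filter-and-count idea looks like this in Python (a rough analogue with made-up sample data, not a translation of the list above). Since Python raises an error when ordering a complex number, the predicate must check the type first, much as <code>Negative</code> only answers <code>True</code> for negative reals:</p>

```python
# Keep the entries of a mixed list that are real numbers less than 1.
mixed = [3 * 354 ** 0.5, 1.2 ** 0.5, complex(0, 14 ** 0.5), 1.06 ** 0.5,
         0.3, "a", complex(1, 2), 0.95]

def is_real_below_one(v):
    # analogue of Negative[v - 1]: only real numbers can compare with 1
    return isinstance(v, (int, float)) and v < 1

selected = [v for v in mixed if is_real_below_one(v)]
print(selected)       # [0.3, 0.95]
print(len(selected))  # 2
```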
|
4,292,822 | <blockquote>
<p>Prove linear independence of <span class="math-container">$1+x^3-x^5,1-x^3,1+x^5$</span> in the Vector Space of Polynomials</p>
</blockquote>
<p>The attempts I found online all are quite easy. You just substitute something in for <span class="math-container">$x$</span> into the equation <span class="math-container">$a(1+x^3-x^5)+b(1-x^3)+c(1+x^5)=0$</span> for example <span class="math-container">$x=1,0,-1$</span> and this will give you three equations where you can show that <span class="math-container">$a,b,c=0$</span>. But why can we substitute something in? If I define the Vector Space of Polynomials in a very abstract way that <span class="math-container">$\sum_{i} \alpha_i x^{i}+\sum_{i} \beta_{i} x^{i}:=\sum_{i} (\alpha_{i}+\beta_{i})x^{i})$</span> and <span class="math-container">$(\sum_{i}^{n} \alpha_i x^{i})(\sum_{i}^{m} \alpha_{i} x^{i} ):=\sum_{i=0}^{n+m} c_i x^i$</span> with <span class="math-container">$c_k=a_0 b_k+a_1 b_{k-1}+...+a_{k} b_0$</span> and a <span class="math-container">$x$</span> is just an abstract symbol with absolutely no meaning why should one be allowed to substitute something for <span class="math-container">$x$</span> or even worse differentiate the equation?</p>
| Sebastián P. Pincheira | 721,724 | <p>Let's play a little with notation: let <span class="math-container">$$\begin{align}&p_1\colon x\mapsto 1+x^3-x^5,\\&p_2\colon x\mapsto 1-x^3\text{ and}\\&p_3\colon x\mapsto 1+x^5.\end{align}$$</span></p>
<p>What you want to show is that, for <span class="math-container">$a,b,c\in\mathbb{R}$</span>
<span class="math-container">$$ap_1+bp_2+cp_3=\mathbf{0}\implies \begin{bmatrix}a\\ b\\ c\end{bmatrix}=0$$</span>
where <span class="math-container">$\mathbf{0}$</span> is the function <span class="math-container">$\mathbf{0}\colon x\mapsto 0$</span>.<br />
Since <span class="math-container">$\mathbb{R}[x]$</span> is a vector space, <span class="math-container">$P:=ap_1+bp_2+cp_3\in \mathbb{R}[x]$</span>.<br />
Now, what you want to show is that
<span class="math-container">$$P=\mathbf{0}\implies \begin{bmatrix}a\\ b\\ c\end{bmatrix}=0$$</span>
But <span class="math-container">$P=\mathbf{0}\iff \forall x\in \mathbb{R},\; P(x)=0$</span>.<br />
Clearly, if <span class="math-container">$a\in \mathbb{R}$</span>, then we have that <span class="math-container">$P(a)=0$</span> and we can substitute values of <span class="math-container">$x$</span>.<br />
For differentiation, since <span class="math-container">$P=\mathbf{0}$</span>, both sides must have the same derivative, so <span class="math-container">$P'=\mathbf{0}$</span> and we can differentiate.</p>
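<p>Since the three polynomials live in the span of <span class="math-container">$1,x^3,x^5$</span>, independence also follows from a <span class="math-container">$3\times3$</span> determinant; a small check (Python, supplementary to the proof above):</p>

```python
# Coefficients on the basis (1, x^3, x^5):
#   p1 = 1 + x^3 - x^5 -> (1, 1, -1)
#   p2 = 1 - x^3       -> (1, -1, 0)
#   p3 = 1 + x^5       -> (1, 0, 1)
# Independence <=> nonzero determinant of the coefficient matrix.
rows = ((1, 1, -1), (1, -1, 0), (1, 0, 1))

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

print(det3(rows))  # -3 (nonzero, so p1, p2, p3 are linearly independent)
```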
|
68,147 | <p>While I was investigating some specific types of prime numbers I have faced with the following infinite sequence :</p>
<p>$1,2,8,9,15,20,26,38,45,65,112,244,303,393,560,....$</p>
<p>I tried to find recursive formula using Maple and it's listtorec command, so up to $393$ I got the next output:</p>
<p>$ f(n+3) = ((-10604990407411886564453040+8614360900967683126093782*n$ $-1437788330056801496567841*n^2-20019334790519891406942*n^3$ $+10676199651161684501481*n^4)*f(n+1)$ $+(-1637719982644311036922320-2457276199701830407970234*n$ $-480059310080505210547097*n^2+383671472063948372228234*n^3$ $-33849767081583104776903*n^4)*f(n+2))$ $/(-936042047504931985146406*n -3812415630664251269364960$ $+337414858035611215686569*n^2+50641450188283496191324*n^3$ $-8211420729473965803551*n^4) $</p>
<p>but when I added $560$ to list Maple sent me message FAIL.</p>
<p>So, my question is : how can I find pattern for this sequence if it exists ?</p>
| spin | 12,623 | <p>You can look up integer sequences at OEIS: <a href="http://oeis.org/A056805">http://oeis.org/A056805</a></p>
<p>So your sequence is "Numbers $n$ such that $6*10^n+1$ is prime". I assume you're looking for a formula, but if there was a closed-form expression for these numbers, we could find arbitrarily large prime numbers! The largest known prime has 12978189 digits and right now there is a 250,000 dollar prize to whoever finds a prime number with at least 1,000,000,000 digits (see <a href="http://www.eff.org/awards/coop">http://www.eff.org/awards/coop</a>). So if you find a formula for these numbers, please tell me.</p>
|
<p>In a freshman course in mathematics for biology students, I have had the issue that basic algebra (e.g. simplifying $\frac{a/b}{c/d}$) is far from being mastered. On the one hand, it seems ill-suited to teach more advanced matters to students failing on the basics. On the other hand, if we take too much time to cover the material allegedly covered in high school, we can hardly hope to cover what is needed by these students, and we reinforce the impression that not mastering high-school mathematics is ok.</p>
<blockquote>
<p>How much effort should we spend in class for studying material that is
supposed to be mastered but in practice is not?</p>
</blockquote>
| András Bátkai | 61 | <p>I think if you notice this problem, you should briefly revise it, go on with the material, stress that it is essential that they know the basic stuff, and hand out extra exercises on the basic material which will not be graded but which you are ready to discuss in your office hour.</p>
<p>This way you offer a chance to learn the basics but stress that your job is to teach something more advanced. </p>
<p>My experience, however, is mixed. They acknowledge my goodwill to help and even recognize that they have serious problems, but it is very very rare that someone comes to my office. They are not mathematics students, they have more important things to do...</p>
|
1,867,695 | <blockquote>
<p>$\lim_{n\rightarrow \infty}n\left ( 1-\sqrt{1-\frac{5}{n}} \right )$</p>
</blockquote>
<p>$\lim_{n\rightarrow \infty} n *\lim_{n\rightarrow \infty}\left ( 1-\sqrt{1-\frac{5}{n}} \right ) = \infty * \left ( 1-\sqrt{1-0} \right ) = \infty * 0 = 0$</p>
<p>Did I do it correctly?</p>
<p>The problem is that when I use my calculator and plug in big values for $n$, I get $2.5$ as the result (if I take very, very big values, it's $0$ :D).</p>
<p>Anyway, this made me feel a bit insecure and that's why I'm asking if I did it right.</p>
| egreg | 62,967 | <p>No, you can't do it like that: when you have a product where one factor has limit $\infty$ and the other has limit $0$, you cannot apply a theorem on products of limits.</p>
<p>The theorem helps when both limits are finite and you just multiply them; it also works when one limit is $\infty$ (or $-\infty$) and the other one is either infinite or *finite and not $0$”. Also in this case you can “multiply”: the limit will be either $\infty$ or $-\infty$, depending on the signs of the factors.</p>
<p>Instead of the sequence (which is implied by the use of $n$), try and find the limit of the function:
$$
\lim_{x\to\infty}x\left(1-\sqrt{1-\frac{5}{x}}\right)=
\lim_{t\to0^+}\frac{1-\sqrt{1-5t}}{t}
$$
with the substitution $x=1/t$. This limit is much easier to manage; if it exists, then the sequence will have the same limit. Note however that the limit of the function may not exist whereas the limit of the sequence exists. Not in this case: take your pick below.</p>
<h2>1. Rationalization</h2>
<p>$$
\lim_{t\to0^+}\frac{1-\sqrt{1-5t}}{t}
=
\lim_{t\to0^+}\frac{1-(1-5t)}{t(1+\sqrt{1-5t})}=
\lim_{t\to0^+}\frac{5}{1+\sqrt{1-5t}}=\frac{5}{2}
$$</p>
<h2>2. Taylor expansion</h2>
<p>$$
\lim_{t\to0^+}\frac{1-\sqrt{1-5t}}{t}
=
\lim_{t\to0^+}\frac{1-(1-\frac{5}{2}t+o(t))}{t}
=
\lim_{t\to0^+}\left(\frac{5}{2}+o(1)\right)=\frac{5}{2}
$$</p>
<h2>3. Derivative</h2>
<p>The limit is the derivative at $0$ of $f(t)=1-\sqrt{1-5t}$ and
$$
f'(t)=\frac{5}{2\sqrt{1-5t}}
$$
so
$$
f'(0)=\frac{5}{2}
$$</p>
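<p>A numeric check (Python, illustrative) agrees with all three derivations — and also shows why a calculator eventually reports $0$: once $5/n$ drops below machine precision, $1-\sqrt{1-5/n}$ rounds to exactly $0$.</p>

```python
# n * (1 - sqrt(1 - 5/n)) tends to 5/2 = 2.5 as n grows.
from math import sqrt

for n in (10, 100, 10_000, 1_000_000):
    print(n, n * (1 - sqrt(1 - 5 / n)))

# At n ~ 1e17, 1 - 5/n rounds to 1.0 in double precision,
# so the expression evaluates to 0 -- the "calculator" effect.
print(1e17 * (1 - sqrt(1 - 5 / 1e17)))
```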
|
183,237 | <blockquote>
<p>Does</p>
<p><span class="math-container">$$\int_{-\infty}^\infty \text{e}^{\ a\ (x+b)^2}\ \text dx=\int_{-\infty}^\infty \text{e}^{\ a\ x^2}\ \text dx\ \ \ \ \ ?$$</span></p>
<p>hold, even if the imaginary part of <span class="math-container">$b$</span> is nonzero?</p>
</blockquote>
<p>What I really want to understand is what the phrase "<a href="http://en.wikipedia.org/wiki/Common_integrals_in_quantum_field_theory#Integrals_with_a_complex_argument_of_the_exponent" rel="nofollow noreferrer">By analogy with the previous integrals</a>" means in that link. There, the expression <span class="math-container">$\frac{J}{a}$</span> is complex but they seem to imply the integral can be solved like above anyway.</p>
<p>The result tells us that the integral is really independent of $J$, which is assumed to be real here. I wonder if we can also generalize this integral to include complex $J$. If the shift above is possible, this should work out.</p>
<p>But even if the idea is here to perform that substitution, how to get rid of the complex <span class="math-container">$a$</span> to obtain the result. If everything is purely real or imaginary, then <a href="https://math.stackexchange.com/questions/163946/are-complex-substitutions-legal-in-integration/166359#166359">this</a> solves the rest of the problem.</p>
| paul garrett | 12,291 | <p>Another approach that sometimes simplifies this sort of issue is to invoke the "identity principle": when the integral is holomorphic as a function of the parameter (some potential danger here!), the outcome can be computed in a convenient range, and then invoke the identity principle to know that the same formula holds for all (!) parameter values.</p>
<p>In the case at hand, this approach does succeed.</p>
<p>Beware, in cases like Cauchy's formula for $z$ inside a circle $\gamma$, $f(z)={1\over 2\pi i}\int_\gamma {f(\zeta)\;d\zeta\over \zeta-z}$, the integrand is <em>not</em> holomorphic in the parameter $z$ as it crosses the circle. That is, certainly the integral represents $f(z)$ for $z$ <em>inside</em>, but not <em>outside</em>, where it is $0$.</p>
<p>Also, in the context in which such questions would arise, it might be reasonable to be more careful about "moving contours": note that $\int_{-\infty}^{+\infty}$ is really a limit of integrals $\int_{-M}^N$, and, thus, a contour-shift uses an integral over a rectangle (or parallelogram!) with one side the interval $[-M,N]$. This, too, legitimizes the change of variables for complex parameter values.</p>
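<p>A quick numerical illustration (mine, not part of the answer; the concrete choice $a=-1$, $b=i$ is an assumption): Simpson's rule applied to $\int e^{-(x+b)^2}\,dx$ returns $\sqrt{\pi}$ whether $b$ is $0$ or purely imaginary, consistent with the contour-shift argument.</p>

```python
import cmath, math

def shifted_gauss(b, upper=10.0, steps=40000):
    # Composite Simpson's rule for the integral of e^{-(x+b)^2} over [-upper, upper];
    # the Gaussian tail beyond |x| = 10 is negligible at double precision.
    h = 2 * upper / steps
    f = lambda x: cmath.exp(-(x + b) ** 2)
    total = f(-upper) + f(upper)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(-upper + i * h)
    return total * h / 3

no_shift = shifted_gauss(0)     # ≈ sqrt(pi)
imag_shift = shifted_gauss(1j)  # shifting by b = i gives the same value
```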
|
2,787,114 | <p>Before going to my question, let me give two prelimilary definitions</p>
<blockquote>
<p><strong>Definition 1.</strong> Let $S\subseteq\mathbb{R}^n$ be a non-empty open set in $\mathbb{R}^n$ under the usual topology on $\mathbb{R}^n$ and $f:S\to \mathbb{R}^m$. Let $\mathbf{c}\in S$ and let $g:U(\subseteq \mathbb{R})\to S$ be such that,</p>
<ul>
<li><p>$U$ is open in $\mathbb{R}$ under the usual topology on $\mathbb{R}$</p></li>
<li><p>$g(0)=\mathbf{c}$</p></li>
<li><p>$g$ is continuous at $0$</p></li>
</ul>
<p>Then $f$ will be said to have <em>derivative along the curve $g$ at the point $\mathbf{c}$</em> if $$\displaystyle\lim_{h\to 0}\dfrac{(f\circ g)(h)-(f\circ g)(0)}{h}$$ exists. </p>
<p><strong>Definition 2.</strong> Let $S\subseteq\mathbb{R}^n$ be a non-empty open set in $\mathbb{R}^n$ under the usual topology on $\mathbb{R}^n$ and $f:S\to \mathbb{R}^m$. Let $\mathbf{c}\in S$. Then $f$ will be said to have <em>approach independent derivative at $\mathbf{c}$</em> if, $$\displaystyle\lim_{h\to 0}\dfrac{(f\circ g)(h)-(f\circ g)(0)}{h}$$ exists for all $g$ satisfying the properties listed in the previous definition.</p>
</blockquote>
<p><strong>Question</strong></p>
<p>If $f$ has approach independent derivative at $\bf{c}$ then is it continuous at $\mathbf{c}$?</p>
<hr>
<p>I was trying to find a counter example of such a function $f$ but till now I have not been able to find such an example. Any help will be appreciated. </p>
| Eric Wofsey | 86,856 | <p>If $f$ has an approach independent derivative at $c$, then $f$ is constant in a neighborhood of $c$ (and therefore rather trivially is continuous at $c$).</p>
<p>Indeed, suppose $f$ is not constant in any neighborhood of $c$. That means there is a sequence $(c_n)$ approaching $c$ such that $f(c_n)\neq f(c)$ for all $n$.</p>
<p>Now choose a descending sequence of real numbers $t_n$ approaching $0$ such that $0<t_n<|f(c)-f(c_n)|/n$ for all $n$. Define $g:(-1,t_1)\to \mathbb{R}^n$ by $g(t)=c$ if $t\leq 0$, $g(t_n)=c_n$, and $g$ interpolates linearly on each interval $(t_{n+1},t_n)$. This is obviously continuous everywhere except possibly at $0$; continuity at $0$ follows from the fact that $c_n$ converges to $c$. Letting $U=g^{-1}(S)$, we can restrict $g$ to $U$ and it will satisfy all of your conditions.</p>
<p>However, observe that for each $n$, $$\frac{(f\circ g)(t_n)-(f\circ g)(0)}{t_n}=\frac{f(c_n)-f(c)}{t_n}$$ has absolute value greater than $n$ by our choice of $t_n$. Since $t_n\to 0$ as $n\to\infty$, this proves that the derivative of $f$ along $g$ at $c$ does not exist.</p>
<hr>
<p>Note that this conclusion should not be surprising. You are asking for $f\circ g$ to be differentiable at $0$. But you only assume that $g$ is continuous, so it is totally unreasonable to expect $f\circ g$ to be <em>differentiable</em>, even if $f$ is differentiable. For instance, the identity function $f:\mathbb{R}\to\mathbb{R}$ trivially fails to satisfy your property, since you could take $g$ to be any function that is continuous but not differentiable at $0$. For $f$ to have your property, it must somehow be "extremely differentiable" (at least along curves) such that its differentiability is preserved by composing it with any continuous (not necessarily differentiable!) function. It turns out that the only way this can happen is if $f$ is constant (at least locally).</p>
|
3,687,355 | <p>I have trouble with how to find eigenvectors when you have a complex eigenvalue</p>
<p>For example the matrix
<span class="math-container">$$ \begin{pmatrix}
0 & 1\\
-2 & -2
\end{pmatrix}$$</span></p>
<p>Here you get the eigenvalues <span class="math-container">$-1 \pm i$</span></p>
<p>Where do I go from here to find an eigenvector? The solution says it should be the 2x1 matrix:
<span class="math-container">$$\begin{pmatrix}
1\pm i \\
-2
\end{pmatrix}$$</span></p>
| José Carlos Santos | 446,262 | <p>You do it just as you would for real eigenvalues.</p>
<p>For instance, if you want an eigenvector corresponding to the eigen value <span class="math-container">$-1+i$</span>, you solve the system<span class="math-container">$$\left\{\begin{array}{ll}y=(-1+i)x\\-2x-2y=(-1+i)y,\end{array}\right.$$</span>you will get that the solutions are the vectors of the form <span class="math-container">$\bigl((1+i)x,-2x\bigr)$</span>. In particular, <span class="math-container">$(1+i,-2)$</span> is an eigenvector corresponding to the eigenvalue <span class="math-container">$-1+i$</span>.</p>
|
75,704 | <p><strong>Bug introduced in 10.0 and fixed in 10.0.2</strong></p>
<hr>
<p>I was looking at integrals like:</p>
<pre><code>Integrate[HermiteH[50, x]*Exp[-x^2], {x, 0, Infinity}]
</code></pre>
<p>which gave me a "does not converge on $(0,\infty)$" error. On the other hand something like </p>
<pre><code>Integrate[(x^50)*Exp[-x^2]], {x, 0, Infinity}]
Integrate[HermiteH[4,x]*Exp[-x^2]], {x, 0, Infinity}]
</code></pre>
<p>works just fine. In general, for $n\geq 16$, $\int_0^\infty H_n(x)e^{-x^2}dx$ is reported as divergent. </p>
<p>Is there an easy way to resolve this bug? </p>
<p>I should mention that I'm using Mathematica 10.0.0.0.</p>
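<p>The integrals themselves are perfectly convergent; here is a plain-Python numerical check (my own, not part of the question) using the recurrence <span class="math-container">$H_{k+1}(x)=2xH_k(x)-2kH_{k-1}(x)$</span> and Simpson's rule. For even <span class="math-container">$n\ge 2$</span> the half-line integral is exactly <span class="math-container">$0$</span> (the integrand is even and <span class="math-container">$H_n$</span> is orthogonal to <span class="math-container">$H_0$</span>), and for <span class="math-container">$n=5$</span> it equals <span class="math-container">$12$</span>.</p>

```python
import math

def hermite(n, x):
    # Physicists' Hermite polynomials via H_{k+1} = 2x H_k - 2k H_{k-1}
    h_prev, h_cur = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h_cur = h_cur, 2.0 * x * h_cur - 2.0 * k * h_prev
    return h_cur

def half_line_integral(n, upper=10.0, steps=20000):
    # Composite Simpson's rule for the integral of H_n(x) e^{-x^2} over [0, upper];
    # the e^{-x^2} tail beyond x = 10 is negligible at double precision.
    h = upper / steps
    total = hermite(n, 0.0) + hermite(n, upper) * math.exp(-upper * upper)
    for i in range(1, steps):
        x = i * h
        total += (4 if i % 2 else 2) * hermite(n, x) * math.exp(-x * x)
    return total * h / 3.0

val4 = half_line_integral(4)                    # exact value: 0
val5 = half_line_integral(5)                    # exact value: 12
val16 = half_line_integral(16, steps=100000)    # finite, and ~0 (n even)
```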
| Silvia | 17 | <p>You can use the new <code>RunProcess</code> as you have already noticed:</p>
<pre><code>RunProcess[{"powershell", "Get-Help"}, "StandardOutput"]
</code></pre>
<p>Or you can stick to the old fashion (which is basically invoke powershell from cmd):</p>
<pre><code>RunInCmd[cmd_] :=
FromCharacterCode[ReadList["!" <> cmd, Byte],
"CP936" (*change it to whatever codepage you want*)
] // StringSplit[#1, "\n"] &
RunInCmd["powershell Get-Help"]
</code></pre>
|
75,704 | <p><strong>Bug introduced in 10.0 and fixed in 10.0.2</strong></p>
<hr>
<p>I was looking at integrals like:</p>
<pre><code>Integrate[HermiteH[50, x]*Exp[-x^2], {x, 0, Infinity}]
</code></pre>
<p>which gave me a "does not converge on $(0,\infty)$" error. On the other hand something like </p>
<pre><code>Integrate[(x^50)*Exp[-x^2]], {x, 0, Infinity}]
Integrate[HermiteH[4,x]*Exp[-x^2]], {x, 0, Infinity}]
</code></pre>
<p>works just fine. In general, for $n\geq 16$, $\int_0^\infty H_n(x)e^{-x^2}dx$ is reported as divergent. </p>
<p>Is there an easy way to resolve this bug? </p>
<p>I should mention that I'm using Mathematica 10.0.0.0.</p>
| Gustavo Delfino | 251 | <p>This is another way of doing it:</p>
<pre><code>Import["!powershell.exe \"Get-Help\"", "TEXT"]
</code></pre>
|
872,893 | <p>How can I prove that the equation $r=16^r$ has no solution for any real value of $r$?
I have tried:</p>
<p>\begin{align}
&r=16^r &&\implies \log_r r = \log_r 16^r \\
&&& \implies 1 = r\log_r 16 \\
&&& \implies 1/r = \log_r 16 \\
&&& \implies r^{1/r} = r^{\log_r 16}\\
&&& \implies \sqrt[r]{r} = 16
\end{align}
I am stuck here.</p>
| user121591 | 121,591 | <p>Draw a graph of y=x and y=16^x on the same axes, then I'm sure you can find a reason why there can't be a solution</p>
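<p>To spell out the graph argument with a small check (my addition, not the answerer's): for $r \le 0$ we have $16^r > 0 \ge r$ directly, and for $r > 0$ the bound $e^u \ge 1+u$ gives $16^r \ge 1 + r\ln 16 > r$ since $\ln 16 > 1$, so the two graphs never meet.</p>

```python
# For r <= 0: 16**r > 0 >= r.  For r > 0: e^u >= 1 + u gives
# 16**r >= 1 + r*log(16) > r, since log(16) > 1.
# Grid check that 16**r - r stays strictly positive on [-5, 5]:
gap = min(16 ** r - r for r in [i / 100 for i in range(-500, 501)])
```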
|
1,314,802 | <p>True.</p>
<p>Since $f$ is continuous (because every uniformly continuous function is continuous), we can assume:</p>
<p>$$ f\left(\lim_{n\to\infty} \frac{1}{n}\right) $$</p>
<p>Since $ \lim_{n\to\infty} \frac{1}{n} $ is bounded in $ (0,1] $ and $ I \subset (0,1] $, we have by hypothesis $f$ uniformly continuous then</p>
<p>$$ \lim_{n\to\infty} f\left(\frac{1}{n}\right) \text{, exists.} $$</p>
| marwalix | 441 | <p><strong>Hint</strong> look at the $\log$ then use $\cos{x}=1-\frac{x^2}{2}+o(x^3)$ and $\log(1+x)=x-\frac{x^2}{2}+o(x^2)$</p>
<p>Let's look in more details</p>
<p>$$\log\left(\left(1+\frac{1-\cos x}{x}\right)^{\frac{1}{x}}\right)=\frac{1}{x}\log\left(1+\frac{1-\cos x}{x}\right)$$</p>
<p>Using the expansion of $\cos$ we get</p>
<p>$$\log\left(\left(1+\frac{1-\cos x}{x}\right)^{\frac{1}{x}}\right)=\frac{1}{x}\log\left(1+\frac{x}{2}+o(x)\right)$$</p>
<p>And using the expansion of $\log$ we get</p>
<p>$$\log\left(\left(1+\frac{1-\cos x}{x}\right)^{\frac{1}{x}}\right)=\frac{1}{2}+o(1)$$</p>
|
1,314,802 | <p>True.</p>
<p>Since $f$ is continuous (because every uniformly continuous function is continuous), we can assume:</p>
<p>$$ f\left(\lim_{n\to\infty} \frac{1}{n}\right) $$</p>
<p>Since $ \lim_{n\to\infty} \frac{1}{n} $ is bounded in $ (0,1] $ and $ I \subset (0,1] $, we have by hypothesis $f$ uniformly continuous then</p>
<p>$$ \lim_{n\to\infty} f\left(\frac{1}{n}\right) \text{, exists.} $$</p>
| xpaul | 66,420 | <p>You can use this way:
\begin{eqnarray}
\lim\limits_{x\to 0} \left(1+\frac{1-\cos x}{x}\right)^{\frac{1}{x}}&=&\lim\limits_{x\to 0} \left(\left(1+\frac{1-\cos x}{x}\right)^{\frac{x}{1-\cos x}}\right)^{\frac{1-\cos x}{x^2}}=e^{\frac12}
\end{eqnarray}
This is because
$$ \lim_{x\to0}\frac{1-\cos x}{x}=0, \lim_{x\to0}\frac{x}{1-\cos x}=\infty, \lim_{x\to0}(1+x)^{\frac1x}=e,\lim_{x\to0}\frac{1-\cos x}{x^2}=\frac12. $$</p>
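<p>Not part of either answer, but a direct numerical check that the limit is $e^{1/2}\approx 1.6487$:</p>

```python
import math

def g(x):
    # the expression whose limit at 0 is being computed
    return (1 + (1 - math.cos(x)) / x) ** (1 / x)

vals = [g(10.0 ** -k) for k in range(2, 6)]
sqrt_e = math.exp(0.5)  # e^{1/2}
```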
|
2,393,130 | <blockquote>
<p><strong>Problem:</strong> <span class="math-container">$ABCD$</span> is a rectangle. A point <span class="math-container">$P$</span> is <span class="math-container">$11$</span> from <span class="math-container">$A$</span>, <span class="math-container">$13$</span> from <span class="math-container">$B$</span> and <span class="math-container">$7$</span> from <span class="math-container">$C$</span>. What is the length of <span class="math-container">$DP=x?$</span> (Note: <span class="math-container">$P$</span> can be inside the rectangle or outside of it.)</p>
<p><a href="https://i.stack.imgur.com/wzBoW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wzBoW.jpg" alt="enter image description here" /></a></p>
</blockquote>
<p>I drew this scenario as best as I could, but I only have triangles with two sides and no angles. How do I begin? Any weird and relatively unknown theorems I should use?</p>
| farruhota | 425,072 | <p>Hint: draw lines through <span class="math-container">$P$</span> parallel to the sides of rectangle. Use Pythagoras theorem for the lines <span class="math-container">$PA, PB, PC, PD$</span> to make up the system of four equations. Then you get the answer <span class="math-container">$1$</span>.</p>
<p>Details:</p>
<p><a href="https://i.stack.imgur.com/CASp2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CASp2.png" alt="enter image description here" /></a></p>
<p><span class="math-container">$$\begin{cases} a^2+d^2=x^2 \\ a^2+b^2=11^2 \\ b^2+c^2=13^2 \\ c^2+d^2=7^2\end{cases} \stackrel{(4)-(3)+(2)}{\Rightarrow} x^2=7^2-13^2+11^2=1 \Rightarrow x=1.$$</span></p>
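<p>The identity obtained in the last step is the British flag theorem, <span class="math-container">$PA^2+PC^2=PB^2+PD^2$</span>; plugging in the given distances (a check I added):</p>

```python
import math

# British flag theorem: PA^2 + PC^2 = PB^2 + PD^2 for any point P
# and rectangle ABCD, so DP = sqrt(PA^2 - PB^2 + PC^2)
pa, pb, pc = 11, 13, 7
dp = math.sqrt(pa ** 2 - pb ** 2 + pc ** 2)  # sqrt(121 - 169 + 49) = 1
```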
|
2,744,008 | <p>Since the isolated singularities are $z=k\pi, k \in \mathbb Z$, so we divided the complex plane into the disjoint annulus, i.e. $\{z: n\pi <|z|<(n+1)\pi\}, n \in \mathbb N \cup \{0\}$. On these annuli, $f$ is analytic, so has a unique Laurent series.</p>
<p>First, let's consider $\{z: 0 < |z|<\pi\}$, then the solution says
$$\begin{aligned}
f(z)& =\frac{1}{\sin z}\\
& =\frac{1}{z-\frac{z^3}{3!}+\frac{z^5}{5!}-\cdots}\\
& =\frac{1}{z}\cdot \frac{1}{1-\left(\frac{z^2}{3!}-\frac{z^4}{5!}+\cdots\right)}\\
& =\frac{1}{z}\left[1+\left(\frac{z^2}{3!}-\frac{z^4}{5!}+\cdots\right)+\left(\frac{z^2}{3!}-\frac{z^4}{5!}+\cdots\right)^2+\cdots\right]
\end{aligned}$$
My questions are:</p>
<p>Do we need $\left|\frac{z^2}{3!}-\frac{z^4}{5!}+\cdots\right|<1$ to get the last step? If we do, then why is it less than 1?</p>
<p>Thanks for any help!</p>
| Andrew Gross | 370,491 | <p>By $f(x)$ you're referring to the probability density function (PDF), which gives the density of the distribution at each value (for a continuous variable, the probability of any single exact value is zero). </p>
<p>The expected value, call it $\mathbb{E}[X]$, is the long-run average. For a continuous uniform distribution running from 0 to 1 this is defined as:
$$\mathbb{E}[X]=\int_{0}^{1}xf(x)dx=\int_{0}^{1}xdx$$</p>
<p>This is just $\frac{1}{2}.$</p>
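<p>A midpoint Riemann-sum check (added by me, not in the original answer) that $\int_0^1 x\,dx = \tfrac12$:</p>

```python
# Midpoint Riemann sum for E[X] = integral of x * f(x) over [0, 1], f(x) = 1
n = 10_000
expected_value = sum(i + 0.5 for i in range(n)) / n / n
```

The midpoint rule is exact for linear integrands, so this recovers $\tfrac12$ up to floating-point rounding.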
|
1,840,176 | <p>I got a task in front of me but I don't really understand it. If someone could explain, I think I would be able to solve it myself.</p>
<blockquote>
<p>$P(x) = \sum_{k=0}^{\infty}a_{k}x^{k}$ is a power series. There exists a $k_{0} \in \mathbb{N}$ with $a_{k} \neq 0$ for all $k \geq k_{0}$.</p>
<p>Prove that: If the sequence $\left ( \left | \frac{a_{k+1}}{a_{k}} \right | \right )_{k \geq k_{0}}$ converges to a number in $\mathbb{R}$ or diverges to $\infty$, and if $a:= \lim_{k\rightarrow \infty} \left | \frac{a_{k+1}}{a_{k}} \right | \in \mathbb{R} \cup \left \{ \infty \right \}$ denotes this limit, then the following holds for the radius of convergence $R$ of $P$:</p>
<p>$R=\left\{\begin{matrix}
0, & a = \infty\\
\infty, & a = 0 \\
\frac{1}{a}, & otherwise
\end{matrix}\right.$</p>
</blockquote>
<hr>
<p>What is meant by $k_{0}$ ? It's just any unknown variable which seems to be smaller or equal $k$, right? Oh and it cannot be smaller than zero.</p>
<p>What is $a_{k}$ ? It's just any sequence that cannot be zero, right?</p>
<p>So first I take the sequence $a_{n}$, use the ratio test to see if it converges. Okay after that is done, I check if in the ratio test, I get + or - $\infty$.</p>
<p>Is it right so far?</p>
<p>But what confuses me most is this: </p>
<blockquote>
<p>$a:= \lim_{k\rightarrow \infty} \left | \frac{a_{k+1}}{a_{k}} \right | \in \mathbb{R} \cup \left \{ \infty \right \}$</p>
</blockquote>
<p>What is it saying with infinity?</p>
<p>Sorry I haven't started with the task but first I try to understand everything, then start. </p>
| user21820 | 21,820 | <p><strong>Hint</strong>: Since you've gotten explanation of the notation already, here's the sketch of how to prove the result. Radius of convergence means that the power series converges for anything <strong>strictly</strong> inside. So if $|x| = r < \frac1a$, then let $s$ be such that $r < s < \frac1a$, and so as $k \to \infty$ eventually $|\frac{a_{k+1}}{a_k}| \to a$ and hence $|\frac{a_{k+1}}{a_k}| < s^{-1}$, which by induction gives $|a_{m+k}| < |a_m| s^{-k}$ for every natural $k$ where $m$ is some (sufficiently large) constant natural number. Therefore for any natural $q \ge p \ge m$ we have $| \sum_{k=p}^q a_k x^k | \le \sum_{k=p}^\infty |a_k| r^k = \sum_{k=p}^\infty |a_m| s^{m-k} r^k = |a_m| s^m \sum_{k=p}^\infty |a_m| (\frac{r}{s})^k$ which is finite since $\frac{r}{s} < 1$. Thus by Cauchy convergence the original power series converges.</p>
|
1,789,051 | <p>I know how to prove when the two equivalence classes are not disjoint, i.e. $[a]=[b]$. I see that the proof works for proving that two equivalence classes are disjoint, but I don't get it. Can someone explain it to me? </p>
| ervx | 325,617 | <p>Suppose we have two equivalence classes $[a]\not=[b]$. We'll show they are disjoint. Suppose $x\in[a]\cap[b]$. Then, $x\sim a$ and $x\sim b$. By symmetry, $a\sim x$, and hence, by transitivity, $a\sim b$. Therefore, $[a]=[b]$, which is a contradiction. </p>
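<p>A small illustration of the dichotomy (mine, not part of the answer): for the relation "same remainder mod 3" on $\{0,\dots,19\}$, any two classes are identical or disjoint, and together they partition the set.</p>

```python
# Equivalence classes of "x ~ y iff x = y (mod 3)" on {0, ..., 19}
universe = range(20)
classes = {frozenset(x for x in universe if x % 3 == a % 3) for a in universe}

# Any two classes are either equal or disjoint...
dichotomy = all(c == d or c.isdisjoint(d) for c in classes for d in classes)
# ...and their union recovers the whole set
partition = set().union(*classes) == set(universe)
```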
|
1,789,051 | <p>I know how to prove when the two equivalence classes are not disjoint, i.e. $[a]=[b]$. I see that the proof works for proving that two equivalence classes are disjoint, but I don't get it. Can someone explain it to me? </p>
| student forever | 328,820 | <p>Thinking as equivalence relation on $\mod n$:</p>
<p>If we have, for fixed integers $k,l$, $$a+kn=b+ln$$ holds then for arbitrary integer $t$ we have $$b+tn=a+(k-l+t)n$$ and hence $$[b] \subseteq [a] \text{ and then similarly } [a] \subseteq [b] .$$</p>
<p>General is similar.</p>
|
120,281 | <p>Does this outline of a proof work?</p>
<p>Consider the ball and the bidisc in $\mathbb{C}^2$. Give each space its Bergman metric. To show that the ball and the bidisc are not biholomorphically equivalent, it is enough to show that they are not isometric. </p>
<p>One way to distinguish the two spaces is their sectional curvature. I think I have shown that the sectional curvature of the Bergman metric of the ball is constant and negative, whereas the sectional curvature of the bidisc is nonpositive non constant. For example in the plane generated by the vectors $\langle 1,0 \rangle$ and $\langle 0,1 \rangle$ the section curvature is 0, but in the plane generated by $\langle 1,0\rangle$ and $\langle i,0 \rangle $ the sectional curvature is negative.</p>
<p>Is this true? Is there anything subtle I might have missed? I have seen a lot of pretty convoluted proofs of this fact, and I would think that this basic outline would be recorded in print somewhere if it is true, but I cannot seem to find it.</p>
| Claudio Gorodski | 15,155 | <p>In general, the ball $B^n$ with the Bergmann metric is isometric to the Hermitian symmetric space $SU(1,n)/S(U(1)\times U(n))$ where $SU(1,n)$ denotes the special pseudo-unitary group
with respect to the indefinite Hermitian inner product with signature $(1,n)$ and $S(U(1)\times SU(n))$ denotes the subgroup of block matrices of sizes $1\times 1$ and $n\times n$. Note that this subgroup is the fixed point set of the involution of $SU(1,n)$
given by conjugation by the diagonal matrix with entries $(-1,1,\ldots,1)$. </p>
<p>For an arbitrary Hermitian symmetric space given as $G/K$ where $G$ is connected and $K$
is a symmetric subgroup (i.e. open subgroup in the fixed point set of an invoutive automorphism of $G$), the identity components of the isometry group and the group of holomorphic automorphisms both coincide with $G$. Hence in the case of $B^n$ this group is $SU(1,n)$. </p>
<p>Next, the $n$-polydisc is the product $B^1\times\cdots\times B^1$ ($n$-times) and it
carries the structure of a Hermitian symmetric space, product of $n$ copies of
$SU(1,1)/S(U(1)\times U(1))\cong SL(2,\mathbb R)/SO(2)$, the unit disk in $\mathbb C$.
As such, the identity component of its group of holomorphic automorphism is $SU(1,1)\times\cdots SU(1,1)$ ($n$ copies). It is readily seen that this group is not isomorphic to $SU(1,n)$ if $n>1$ (for instance, they have different dimensions, $3n$ and $n^2+2n$). </p>
|