| qid | question | author | author_id | answer |
|---|---|---|---|---|
405,772 | <p>I encountered a conformal mapping on the complex plane:$$z\rightarrow e^{i\pi z}$$
and I am not sure where it sends the point at infinity. If I could say something along the lines of: $$\text{Im}(\infty) = \infty$$
then it would map it to the origin, but there is still a voice in my head saying that this equality is nonsense. And from the usual definition of the imaginary part:$$\text{Im}(z)=\frac{z-\bar{z}}{2i}$$ it makes even less sense.
I'd appreciate some enlightenment.</p>
| Cameron Buie | 28,900 | <p>If we allow $z$ to approach the point at infinity along the positive imaginary axis, we find that $e^{i\pi z}$ tends to $0$. Approaching along the negative imaginary axis, we find that $e^{i\pi z}$ gets big without bound. It turns out that we can make $e^{i\pi z}$ tend toward anything we like, just by allowing $z$ to approach the point at infinity along an appropriate path, and along <em>some</em> paths (such as the positive real axis) it won't tend toward anything at all. Thus, there's no nice way to extend the map to the point at infinity.</p>
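<p>A quick numerical sketch of this path dependence (Python; the sample points are arbitrary choices of ours):</p>

```python
import cmath

def f(z):
    # the map z -> e^{i*pi*z}
    return cmath.exp(1j * cmath.pi * z)

# along the positive imaginary axis z = it, |e^{i*pi*z}| = e^{-pi*t} -> 0
print(abs(f(10j)))
# along the negative imaginary axis it grows without bound
print(abs(f(-10j)))
# along the real axis it stays on the unit circle and never settles
print(abs(f(10.25)))
```

Three different approach paths, three different limiting behaviors, which is exactly why no value at infinity can be assigned.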
|
2,184,593 | <p>By Cauchy's criterion of limit (not the sequential criterion), show that $$\lim_{x\to 0}(\sin{\frac{1}{x}}+x\cos{\frac{1}{x}})$$ does not exist.</p>
<p>Cauchy's criterion of limit </p>
<p>$\lim_{x\to c}f(x)=l$ iff for every $\epsilon>0$, there exists $\delta>0$ such that $$|f(x_2)-f(x_1)|<\epsilon$$ whenever $0<|x_1-c|<\delta$ and $0<|x_2-c|<\delta$. </p>
<p>Please suggest $x_2, x_1$ and help me to solve the problem.</p>
| Juniven Acapulco | 44,376 | <p>One part of Cauchy's Criterion says that</p>
<blockquote>
<p><strong>RESULT:</strong> If $\exists \epsilon>0$ such that $\forall \delta >0$, we can find $x_1,x_2$ satisfying $0<|x_1-a|<\delta$ and $0<|x_2-a|<\delta$ but $|f(x_2)-f(x_1)|\geq \epsilon$ then $\lim_{x\to a}f(x)$ does not exist. </p>
</blockquote>
<p>Let us write $$f(x)=\sin\frac{1}{x}+x\cos\frac{1}{x}.$$</p>
<p>Take $\epsilon\leq 2$. Let $\delta>0$. By the Archimedean property, we can find $n\in\Bbb N$ such that $\frac{1}{n}<\delta$. Take
$$x_1=\frac{1}{\frac{3\pi}{2}+2\pi n}\qquad\text{and}\qquad x_2=\frac{1}{\frac{\pi}{2}+2\pi n}.$$ Notice that $0<x_1<\frac{1}{n}<\delta$ and $0<x_2<\frac{1}{n}<\delta$. Notice also that $\sin\frac{1}{x_2}=1$, $\sin\frac{1}{x_1}=-1$, $\cos\frac{1}{x_2}=0$, and $\cos\frac{1}{x_1}=0$. With this, we get
$$|f(x_2)-f(x_1)|=|1-(-1)|=2\geq\epsilon.$$<br>
Apply the result and we are done.</p>
<p><strong>NOTE:</strong> That was the idea behind the hint given by @5xum. The only difference is that our delta and epsilon were interchanged. Since the OP wanted clarification, I posted an answer rather than putting all of this in the comments.</p>
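<p>The chosen witnesses can be checked numerically (a Python sketch; the helper name <code>gap</code> is ours):</p>

```python
import math

def f(x):
    return math.sin(1/x) + x * math.cos(1/x)

def gap(n):
    # |f(x2) - f(x1)| for the witnesses chosen above
    x1 = 1 / (3*math.pi/2 + 2*math.pi*n)
    x2 = 1 / (math.pi/2 + 2*math.pi*n)
    assert 0 < x1 < 1/n and 0 < x2 < 1/n  # both lie within delta of 0 once 1/n < delta
    return abs(f(x2) - f(x1))

for n in (1, 10, 1000):
    print(n, gap(n))  # stays at ~2 however large n gets
```

However close to $0$ we look, we keep finding pairs of points whose function values differ by $2$, which is precisely the negation of Cauchy's criterion.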
|
4,613,214 | <p>I have to do a large modulo but my answer is incorrect.<br />
I am given:<br />
<span class="math-container">$$ 111^{4733} \mod 9467 $$</span></p>
<ul>
<li>9467 prime</li>
<li>111 and 9467 are coprime</li>
<li>Also note that 4733*2=9466<br />
So we can apply Euler's theorem</li>
</ul>
<p><span class="math-container">$$ 111^{4733} = 111^{9466 \cdot \frac{1}{2}} = (111^{9466})^{\frac{1}{2}}=(1)^{\frac{1}{2}}=1 \mod 9467 $$</span></p>
<p>However, the correct answer is
<span class="math-container">$$ 111^{4733} \equiv 9466 \mod 9467 $$</span>
What is the approach to solving it?<br />
<strong>EDIT1:</strong> I understand that the exponent 1/2 is not allowed, and I also do not want to use the Legendre symbol, as we have not studied it in the course. Besides, I want to solve it without a calculator. Moreover, I should mention that this is not homework, but rather homework solutions that were given to us to prepare for the exam. I am just trying to understand the approach for solving this and similar exponents. Hence, a complete solution would be fine.<br />
<strong>EDIT2:</strong> It turned out that a calculator was allowed in this specific exercise. Other than that I will mark the Legendre symbol, as the correct solution for this.</p>
| Giorgos Giapitzakis | 907,711 | <p>In modulo arithmetic, fractional powers are not well defined. For example, <span class="math-container">$1^{1/2}$</span> can just as easily be <span class="math-container">$1 \pmod{9467}$</span> or <span class="math-container">$9466 \pmod{9467}$</span>. In this case, it's best to notice that you are asked to calculate
<span class="math-container">$$111^{(p-1)/2} \pmod p$$</span>
where <span class="math-container">$p=9467$</span>. But this is the Legendre symbol <span class="math-container">$\left(\frac{111}{9467}\right)$</span>. To calculate it you can write <span class="math-container">$111=3\cdot 37$</span> and then use the law of quadratic reciprocity to work with something more manageable. The only other thing you'll need to use here is the well known fact that
<span class="math-container">$$\left(\frac{2}{p}\right)= \begin{cases}1 \qquad \text{if }p\equiv \pm 1 \mod 8 \\
-1 \,\,\quad \text{if } p\equiv \pm 3 \mod 8\end{cases}$$</span></p>
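<p>For what it's worth, the whole computation can be confirmed by fast modular exponentiation, and Euler's criterion reads the Legendre symbol off the same power (a Python sketch; the helper <code>legendre</code> is ours):</p>

```python
p = 9467
assert pow(111, 4733, p) == p - 1   # 111^4733 ≡ 9466 ≡ -1 (mod 9467)

def legendre(a, p):
    # Euler's criterion: a^((p-1)/2) ≡ (a/p) (mod p) for an odd prime p
    r = pow(a, (p - 1) // 2, p)
    return r - p if r > 1 else r    # map p-1 back to -1

print(legendre(111, p))                   # -1
print(legendre(3, p) * legendre(37, p))   # multiplicativity: also -1
```

The exponent here is exactly $(p-1)/2 = 4733$, which is why the exercise amounts to a Legendre symbol computation.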
|
37,013 | <p><strong>Question:</strong> What are some interesting or useful applications of the Hahn-Banach theorem(s)?</p>
<p><strong>Motivation:</strong> Most of the time, I dislike most of Analysis. During a final examination, a question sparked my interest in the Hahn-Banach theorem(s). One of my favorite things to do is to write a math blog (mlog?) post about various topics so that I can better understand them, but I know very little about Hahn-Banach and a quick google search didn't seem to point to anything neat. I was interested in seeing what you all liked (if anything!) about the Hahn-Banach Theorems. </p>
<p>Also, I can't seem to make this a community wiki, but I think it ought to be one. If someone could either fix this, I would appreciate it! (If not, please delete this!)</p>
| AD - Stop Putin - | 1,154 | <p>How about the Wiener Tauberian theorem: </p>
<p><strong>Theorem (N. Wiener 1932).</strong> For $f\in L^1(\mathbb{R})$, let $X= \operatorname{span}\{f_t:t\in\mathbb{R}\}$ (that is the linear subspace spanned by the translates of $f$). Then the closure of $X$ in $L^1$ is $L^1$ if and only if the Fourier transform of $f$ has no zero.</p>
<p>Which, in itself, has applications in many different fields running from number theory to PDE.</p>
|
230,971 | <p>At the moment I use <code>Length[ DeleteDuplicates[ array ] ] == 1</code> to check whether an array is constant, but I'm not sure whether this is optimal.</p>
<p>What would be the quickest way to test whether an array consists of equal elements?</p>
<p>What if the elements would be integers?</p>
<p>What if they are floats?</p>
| kglr | 125 | <p><code>Statistics`Library`ConstantVectorQ</code> is quite fast.</p>
<p>Using Sjoerd's input examples:</p>
<pre><code>const = ConstantArray[1, 100000];
nonconst = Append[const, 2];
nonconst2 = Prepend[const, 2];
t11 = Statistics`Library`ConstantVectorQ@const // RepeatedTiming;
t21 = CountDistinct[const] == 1 // RepeatedTiming;
t31 = MatchQ[const, {Repeated[x_]}] // RepeatedTiming;
t41 = Length[DeleteDuplicates@const] == 1 // RepeatedTiming;
t51 = Equal @@ MinMax[const] // RepeatedTiming;
t61 = Equal @@ const // RepeatedTiming;
t12 = Statistics`Library`ConstantVectorQ@nonconst // RepeatedTiming
t22 = CountDistinct[nonconst] == 1 // RepeatedTiming;
t32 = MatchQ[nonconst, {Repeated[x_]}] // RepeatedTiming;
t42 = Length[DeleteDuplicates@nonconst] == 1 // RepeatedTiming;
t52 = Equal @@ MinMax[nonconst] // RepeatedTiming;
t62 = Equal @@ nonconst // RepeatedTiming;
t13 = Statistics`Library`ConstantVectorQ@nonconst2 // RepeatedTiming
t23 = CountDistinct[nonconst2] == 1 // RepeatedTiming;
t33 = MatchQ[nonconst2, {Repeated[x_]}] // RepeatedTiming;
t43 = Length[DeleteDuplicates@nonconst2] == 1 // RepeatedTiming;
t53 = Equal @@ MinMax[nonconst2] // RepeatedTiming;
t63 = Equal @@ nonconst2 // RepeatedTiming;
TableForm[{{t11, t12, t13}, {t21, t22, t23}, {t31, t32, t33}, {t41,
t42, t43}, {t51, t52, t53}, {t61, t62, t63}},
TableHeadings -> {{"ConstantVectorQ", "CountDistinct", "MatchQ",
"Length+DeleteDuplicates", "Equal + MinMax", "Apply[Equal]"},
{"const", "nonconst", "nonconst2"}}]
</code></pre>
<p><a href="https://i.stack.imgur.com/yRC9v.png" rel="noreferrer"><img src="https://i.stack.imgur.com/yRC9v.png" alt="enter image description here" /></a></p>
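<p>The advantage of <code>ConstantVectorQ</code> on <code>nonconst2</code> is presumably an early exit at the first mismatch. A rough Python analogue of that comparison (not from the original benchmark):</p>

```python
def constant_vector_q(v):
    # early-exit scan: stops at the first element differing from v[0]
    return all(x == v[0] for x in v) if v else True

const = [1] * 100000
nonconst = const + [2]     # mismatch at the very end
nonconst2 = [2] + const    # mismatch at the very start: one comparison suffices

assert constant_vector_q(const)
assert not constant_vector_q(nonconst)
assert not constant_vector_q(nonconst2)
# dedup-based check (analogue of Length@DeleteDuplicates == 1) agrees,
# but it always scans the whole list before answering
assert (len(set(nonconst2)) == 1) == constant_vector_q(nonconst2)
```

For integer or float arrays the same early-exit idea applies; only the element comparison changes (exact equality versus a tolerance).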
|
2,086,006 | <p>You have $7$ boxes in front of you and $140$ kittens are sitting side-by-side inside the
boxes, $20$ in each box. You want to take some kittens as your pets. However the
kittens are very cowardly. Each time you choose a kitten from a box, the kittens in that
box to its left move to the box on the left, and the kittens in that box to its right move
to the box on the right. If they don’t find a box in that direction, they
simply run away. After taking a few kittens, you see that all other kittens have run
away. At least how many kittens have you taken?</p>
| Olivier Oloa | 118,798 | <p>One may write, as $n \to \infty$,
$$
\begin{align}
a_n&=n\left(e^{\frac{\large\ln(ea)}n}-e^{\large\frac{\ln(a)}n} \right)
\\&=n\left(e^{\large\frac{1+\ln(a)}n}-e^{\large\frac{\ln(a)}n} \right)
\\&=n \cdot e^{\large\frac{\ln(a)}n}\left(e^{\large\frac{1}n}-1 \right)
\\&= e^{\large\frac{\ln(a)}n}\cdot\frac{e^{\frac{1}n}-1}{\frac{1}n}
\end{align}
$$ then one may conclude with
$$
\lim_{n \to \infty}\frac{e^{\frac{1}n}-1}{\frac{1}n}=1.
$$</p>
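<p>A numerical check that $a_n\to1$ (a Python sketch; $a=2$ is an arbitrary choice, and we take the sequence to be $a_n=n\left((ea)^{1/n}-a^{1/n}\right)$ as in the first line above):</p>

```python
import math

def a_n(n, a):
    # n * ((e*a)^(1/n) - a^(1/n)), the sequence from the answer
    return n * ((math.e * a) ** (1/n) - a ** (1/n))

for n in (10, 1000, 100000):
    print(n, a_n(n, 2.0))   # approaches 1
```

The value of $a$ washes out in the limit, since the factor $e^{\ln(a)/n}$ tends to $1$.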
|
2,520,768 | <p>How would I approach this problem? </p>
<p>Let $(a, b, c) \in \mathbb{Z^3}$ with $a^2 + b^2 = c^2$. Show that:
$$
60 \,\mid\, abc
$$</p>
| user502959 | 502,959 | <p>If one of them is 0, then the product is 0, divisible by 60.</p>
<p>Assume that $abc\neq 0$, WLOG $a,b,c>0$.</p>
<p>Then, up to swapping $a$ and $b$ and up to a common factor (which only helps the divisibility), $a=m^2-n^2$, $b=2mn$, $c=m^2+n^2$ for some $m,n\in\mathbb{N}$ (the well-known parametrization of Pythagorean triples, <a href="https://en.wikipedia.org/wiki/Pythagorean_triple" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Pythagorean_triple</a>), so it suffices to show that $60$ divides $abc=2mn(m^2-n^2)(m^2+n^2)$.</p>
<p>$60=4\cdot 3\cdot 5$ and $4,3,5$ relatively prime.</p>
<p>If $2|m$ or $2|n$, then $4|2mn$; else $2|m^2-n^2$ and $2|m^2+n^2$, so $4|(m^2-n^2)(m^2+n^2)$; hence the divisibility by 4 is proved.</p>
<p>If $3|m$ or $3|n$, then $3|mn$; else $m^2\equiv 1\equiv n^2\pmod{3}$, so $3|m^2-n^2$ and the divisibility by 3 is proved.</p>
<p>If $5|m$ or $5|n$, then $5|mn$; else $m^2$ and $n^2$, as squares, are 1 or 4 mod 5. If $m^2\equiv n^2\pmod{5}$, then $5|m^2-n^2$, if not, then $5|m^2+n^2$, since in this case $m^2+n^2\equiv 1+4$. This gives the divisibility by 5.</p>
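<p>An exhaustive check of the claim for small hypotenuses (a Python sketch; the bound $c\le100$ is arbitrary):</p>

```python
# all integer triples a <= b < c <= 100 with a^2 + b^2 = c^2
triples = [(a, b, c)
           for c in range(1, 101)
           for b in range(1, c)
           for a in range(1, b + 1)
           if a*a + b*b == c*c]

assert (3, 4, 5) in triples
assert all((a * b * c) % 60 == 0 for a, b, c in triples)
print(len(triples), "triples checked; 60 | abc in every case")
```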
|
2,595,247 | <p>What is equation of circle when two lines y=x and y=x-4 are tangent to a circle at (2,2) and (4,0) respectively.</p>
| Ѕᴀᴀᴅ | 302,797 | <p>If it's a Lebesgue integral, it's apparent by the dominated convergence theorem. If it's a Riemann integral, it can be proved by showing$$
\lim_{n \to \infty} \int_0^{A} x^{n + 1} f'(x) \,\mathrm{d}x = 0
$$
for every $0 < A <1$.</p>
|
3,355,542 | <p>Let <span class="math-container">$f \in L^{1} [0,1]$</span> such that for all smooth function <span class="math-container">$h: [0,1] \to \mathbb R$</span> with <span class="math-container">$h(0) = h(1) = 0$</span> one has <span class="math-container">$\int_{0}^{1} f(t) h'(t) = 0$</span>. Prove that <span class="math-container">$f$</span> admits a representative which is almost everywhere differentiable on <span class="math-container">$[0,1]$</span> with <span class="math-container">$f' =0$</span>. </p>
<p>I know that without the boundary conditions <span class="math-container">$h(0)=h(1) =0$</span>, the above is a well-known statement. </p>
<p>(from comments below)
My goal of asking this question was in fact to clarify the answer provided here <a href="http://www.mathoverflow.net/a/341462/108824" rel="nofollow noreferrer">MO link</a> See my comments under the answer. </p>
| David C. Ullrich | 248,223 | <p><strong>Hehe:</strong> If <span class="math-container">$n$</span> is a non-zero integer, let <span class="math-container">$h_n(t)=\frac{e^{2\pi int}-1}{2\pi in}$</span>, so that <span class="math-container">$h_n(0)=h_n(1)=0$</span> and <span class="math-container">$e^{2\pi int}=h_n'(t)$</span> (apply the hypothesis to the real and imaginary parts of <span class="math-container">$h_n$</span>). So <span class="math-container">$\hat f(n)=0$</span> for <span class="math-container">$n\ne0$</span>, hence <span class="math-container">$f=\hat f(0)$</span> almost everywhere...</p>
|
203,464 | <p>I would like to exclude the point <code>{x=0,y=0}</code> in the function definition</p>
<pre><code>f = Function[{x, y}, {x/(x^2 + y^2), -(y/(x^2 + y^2))}]
</code></pre>
<p>So far I tried <code>ConditionalExpression</code>and <code>/;</code> without success.</p>
<p>Thanks!</p>
| kglr | 125 | <p>You can use <a href="https://reference.wolfram.com/language/ref/Outer.html" rel="nofollow noreferrer"><code>Outer</code></a> or <a href="https://reference.wolfram.com/language/ref/Tuples.html" rel="nofollow noreferrer"><code>Tuples</code></a> as follows:</p>
<pre><code>Join @@ Outer[List @* Plus, {a, b, c}, {d, e, f}]
</code></pre>
<blockquote>
<p>{{a + d}, {a + e}, {a + f}, {b + d}, {b + e}, {b + f}, {c + d}, {c + e}, {c + f}}</p>
</blockquote>
<pre><code>Map[List @* Total] @ Tuples[ {{a, b, c}, {d, e, f}}]
</code></pre>
<blockquote>
<p>{{a + d}, {a + e}, {a + f}, {b + d}, {b + e}, {b + f}, {c + d}, {c + e}, {c + f}} </p>
</blockquote>
<p>Also</p>
<pre><code>List /@ Total /@ Tuples[ {{a, b, c}, {d, e, f}}] (* or *)
Tuples[foo[{a, b, c}, {d, e, f}]] /. foo -> List @* Plus
</code></pre>
<blockquote>
<p>{{a + d}, {a + e}, {a + f}, {b + d}, {b + e}, {b + f}, {c + d}, {c + e}, {c + f}} </p>
</blockquote>
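<p>For comparison only, the same cross-sum can be written in Python via <code>itertools.product</code> (an analogue, not Wolfram Language; the numeric stand-ins are ours):</p>

```python
from itertools import product

xs = [10, 20, 30]   # stand-ins for a, b, c
ys = [1, 2, 3]      # stand-ins for d, e, f

# analogue of Join @@ Outer[List @* Plus, xs, ys]:
# product iterates the last list fastest, matching Outer's order
sums = [[x + y] for x, y in product(xs, ys)]
print(sums)   # [[11], [12], [13], [21], [22], [23], [31], [32], [33]]
```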
|
4,359,372 | <p>My question is: Does there exist <span class="math-container">$x_n$</span> (<span class="math-container">$n\geq 0$</span>) such that <span class="math-container">$x_n$</span> is a bounded and divergent sequence with <span class="math-container">$$x_{n+m}\leq (x_n+x_m)/2$$</span> for all <span class="math-container">$n,m\geq 0$</span>?</p>
<p>I'm guessing that such an example does exist but can't seem to find an example.</p>
<p>Since we require <span class="math-container">$x_n$</span> to be bounded, we can't have something with <span class="math-container">$\lim_{n\to\infty} x_n\to\pm \infty$</span>, so it has to look something like <span class="math-container">$x_n=(-1)^n$</span>, although this doesn't work since
<span class="math-container">$$(-1)^{1+1}=1\not\leq ((-1)^1+(-1)^1)/2=-1$$</span></p>
<p>Assuming that such a sequence can't be found, I'm not sure how I'd prove it either.</p>
| user2661923 | 464,411 | <p>Extending the answer of bjcolby15:</p>
<p>It is assumed that all of the <em>weights</em> are only allowed on one side of the scale and the object to be weighed is on the other side of the scale.</p>
<p>Any positive integer can be expressed in base <span class="math-container">$2$</span> format. In such a format, each digit in the expression will be either <span class="math-container">$1$</span> or <span class="math-container">$0$</span>. This <strong>bijects</strong> to determining which weights to use to weigh the object.</p>
<p>For example, <span class="math-container">$7$</span> equals <span class="math-container">$111$</span>, written in base <span class="math-container">$2$</span>. This corresponds to using the three weights of <span class="math-container">$4$</span>, <span class="math-container">$2$</span>, and <span class="math-container">$1$</span> to equal a weight of <span class="math-container">$7$</span>. Note, that under the assumption that each weight is either used on a specific scale or not, there are only <span class="math-container">$2$</span> choices for each weight. Either use it or not.</p>
<p>This is why base <span class="math-container">$2$</span> is so relevant. Because for each digit in a base <span class="math-container">$2$</span> representation, either the digit is <span class="math-container">$0$</span> (which corresponds to not using the weight) or <span class="math-container">$1$</span> which corresponds to using the weight.</p>
<p>See also, the Addendum below, which discusses an alternative assumption re the use of the weights.</p>
<hr />
<p><strong>Addendum</strong><br>
Changing the assumption of how the weights are to be used:</p>
<p>Suppose instead, that you place an object on the left scale, and that the weights may be placed on either the left scale or the right scale. This means that instead of having only <span class="math-container">$2$</span> choices for how to use each weight, you have <span class="math-container">$3$</span> choices:</p>
<ul>
<li><p>Use the weight on the left scale, along with the object to be weighed. Imagine assigning the number <span class="math-container">$-1$</span> to the weight here.</p>
</li>
<li><p>Use the weight on the right scale, opposite the object to be weighed. Imagine assigning the number <span class="math-container">$+1$</span> to the weight here.</p>
</li>
<li><p>Don't use the weight at all. Imagine assigning the number <span class="math-container">$0$</span> to the weight here.</p>
</li>
</ul>
<p>So, you have <span class="math-container">$3$</span> choices. This corresponds to expressing numbers in a <strong>variant</strong> of the normal base <span class="math-container">$3$</span> format known as <em>balanced ternary</em>: instead of using the digits <span class="math-container">$0,1,2$</span>, you use the digits <span class="math-container">$0,+1,-1$</span>.</p>
<p>This is why (for example) <span class="math-container">$4$</span> weights are sufficient to weigh any object up to <span class="math-container">$40$</span>. Because the weights of <span class="math-container">$1,3,9,27$</span> are used. Note that <span class="math-container">$\displaystyle 40 = \frac{3^4 - 1}{2}.$</span></p>
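<p>The balanced-ternary digit assignment above (<span class="math-container">$-1$</span>: weight with the object, <span class="math-container">$+1$</span>: weight opposite, <span class="math-container">$0$</span>: unused) can be sketched in a few lines of Python (the helper name is ours):</p>

```python
def balanced_ternary(n):
    # digits over {-1, 0, +1}, least significant first,
    # for the weights 1, 3, 9, 27, ...
    digits = []
    while n:
        r = n % 3
        if r == 2:
            r = -1          # write 2 as 3 - 1: digit -1, carry 1
        digits.append(r)
        n = (n - r) // 3
    return digits

# every target weight 1..40 needs at most the four weights 1, 3, 9, 27
for w in range(1, 41):
    d = balanced_ternary(w)
    assert len(d) <= 4
    assert w == sum(di * 3**i for i, di in enumerate(d))

print(balanced_ternary(5))   # [-1, -1, 1]: put 1 and 3 with the object, 9 opposite
```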
|
298,284 | <p>Let $\zeta(M,s)$ be the Minakshisundaram-Pleijel zeta function, which encodes the eigenvalues of the Laplace-Beltrami operator. Where can I find a proof or reference of the following identity? If $M$ is a surface:
$$\zeta'(\Delta, 0) = \frac{1}{12}\int_M K dA$$</p>
<p>Where $K $ is the Gaussian Curvature.</p>
| Sylvain JULIEN | 13,625 | <p>If you read French, it seems Marcel Berger addresses this question in his review article in Development of Mathematics 1950-2000, Birkhäuser.</p>
|
298,284 | <p>Let $\zeta(M,s)$ be the Minakshisundaram-Pleijel zeta function, which encodes the eigenvalues of the Laplace-Beltrami operator. Where can I find a proof or reference of the following identity? If $M$ is a surface:
$$\zeta'(\Delta, 0) = \frac{1}{12}\int_M K dA$$</p>
<p>Where $K $ is the Gaussian Curvature.</p>
| Alex M. | 54,780 | <p>Look for "A Panoramic View of Riemannian Geometry" by Marcel Berger: you will find what you are looking for in sub-subchapter 9.7.2 "Great Hopes" (pages 421-422). Taking a look at 1.8.5 "Second Way: the Heat Equation" (page 100) where things are done for surfaces with boundary embedded in Euclidean spaces will improve your understanding. Berger's text does not give all the details, but is full of further references that will lead you further.</p>
|
1,048,526 | <p>I'm trying to bound the quantity
<span class="math-container">$\langle \nabla \Psi(x),\bar{x}-x \rangle$</span> above, with the bound depending on <span class="math-container">$\|x-\bar{x}\|$</span> and perhaps also on <span class="math-container">$\|x-y\|$</span> for fixed (but not varying) points <span class="math-container">$y$</span>. Here <span class="math-container">$\Psi:X\rightarrow \mathbb{R}$</span> with <span class="math-container">$X$</span> a finite-dimensional Banach space (or <span class="math-container">$\mathbb{R}^n$</span>, whatever),
and <span class="math-container">$\Psi$</span> is a <span class="math-container">$\mu$</span>-strongly convex function (with <span class="math-container">$\mu>0$</span>) that can be written as <span class="math-container">$\Psi=f+g$</span>, with <span class="math-container">$f$</span> convex and differentiable, <span class="math-container">$\nabla f$</span> <span class="math-container">$L$</span>-Lipschitz continuous, and <span class="math-container">$g$</span> <span class="math-container">$\mu$</span>-strongly convex.</p>
<p>I know that if <span class="math-container">$\Psi$</span> was differentiable and its gradient was <span class="math-container">$L$</span>-Lipschitz continuous one could fix some point <span class="math-container">$x^*$</span> on the optimal set and bound as</p>
<p><span class="math-container">$\langle \nabla \Psi(x), \bar{x}-x \rangle \leq \|\nabla \Psi(x)\|\|\bar{x}-x\| = \|\nabla \Psi(x)-\nabla \Psi(x^*)\|\|\bar{x}-x\| \leq L\|x-x^*\|\|\bar{x}-x\|$</span></p>
<p>And the bound is done.
So my question is, is there an analogous of this property on the non-differentiable case? Like, I know that I can pick a point <span class="math-container">$x^*$</span> on the optimal set such that <span class="math-container">$0 \in \partial \Psi(x^*)$</span>, but then can I say that for a <span class="math-container">$v \in \partial \Psi(x)$</span> it holds</p>
<p><span class="math-container">$\|v\| = \|v-0\| \leq L\|x-x^*\|$</span> or something on that line?</p>
<p>Any help is appreciated</p>
| xel | 418,533 | <p>I think the property you are looking for is Lipschitz continuity of the function $\Psi$, as this is equivalent to bounded subgradients.</p>
|
2,501,518 | <p>$\begin{pmatrix}
a \\
b
\end{pmatrix}
\begin{pmatrix}
a \\
b
\end{pmatrix}^T
\begin{pmatrix}
C & D \\
D^T & E
\end{pmatrix}=
\begin{pmatrix}
I_m & 0\\
0 & I_n
\end{pmatrix}
$</p>
<p>$a$ and $b$ are vectors with length $m$ and $n$ respectively. C has dimension $m$ by $m$ and E $n$ by $n$. </p>
<p>From above, I get $4$ matrix equations in $4$ unknowns:</p>
<p>$$ a a^T C + a b^T D^T = I_m \tag{1}$$</p>
<p>$$ b a^T C + b b^T D^T = 0 \tag{2}$$</p>
<p>$$ a a^T D + a b^T E^T = 0 \tag{3}$$</p>
<p>$$ b a^T D + b b^T E^T = I_n \tag{4}$$</p>
<p>where $I$ is an identity matrix and $a$ is known. Is it possible to get $b, C, D, E$?</p>
<p>The matrix $\begin{pmatrix}
C & D \\
D^T & E
\end{pmatrix}$ is symmetric and the sum of a single row(or column) of this matrix is zero. </p>
<p>I know $C$, $aa^T$, and $bb^T$ are symmetric, but I'm not sure if this information would help. Thank you. </p>
| Dustin G. Mixon | 442,087 | <p>There are no solutions in the case where $A$ and $B$ are square. By 1 and 4, $A^{-1}=A^TC+B^TD^T$ and $B^{-1}=A^TD+B^TE^T$. This implies that $A$, $B$, and their inverses all have full rank, as do any product of these. But 2 and 3 report $BA^{-1}=AB^{-1}=0$, a contradiction.</p>
|
3,858,362 | <p>Solve <span class="math-container">$$\dfrac{x^3-4x^2-4x+16}{\sqrt{x^2-5x+4}}=0.$$</span>
We have <span class="math-container">$D_x:\begin{cases}x^2-5x+4\ge0\\x^2-5x+4\ne0\end{cases}\iff x^2-5x+4>0\iff x\in(-\infty;1)\cup(4;+\infty).$</span> Now I am trying to solve the equation <span class="math-container">$x^3-4x^2-4x+16=0.$</span> I have not studied how to solve cubic equations. Thank you in advance!</p>
| QED | 91,884 | <p><span class="math-container">$$\frac{x^3-4x^2-4x+16}{\sqrt{x^2-5x+4}}=\frac{(x-4)(x-2)(x+2)}{\sqrt{(x-4)(x-1)}}$$</span>
The fraction is <span class="math-container">$0$</span> exactly when the numerator is <span class="math-container">$0$</span> and the radicand is positive, i.e. <span class="math-container">$x^2-5x+4>0$</span>. The numerator vanishes at <span class="math-container">$x=-2,2,4$</span>, but <span class="math-container">$x=2$</span> and <span class="math-container">$x=4$</span> lie outside the domain <span class="math-container">$(-\infty;1)\cup(4;+\infty)$</span>, so the only solution is <span class="math-container">$x=-2$</span>.</p>
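<p>A quick numeric check of the three candidate roots against the sign of the radicand (a Python sketch; the helper names are ours):</p>

```python
def numerator(x):
    return x**3 - 4*x**2 - 4*x + 16

def radicand(x):
    return x**2 - 5*x + 4

for r in (-2, 2, 4):
    print(r, numerator(r), radicand(r))
# x = -2: numerator 0, radicand 18 > 0  -> admissible
# x =  2: numerator 0, radicand -2 < 0  -> square root undefined over the reals
# x =  4: numerator 0, radicand 0       -> denominator vanishes, excluded
```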
|
3,858,362 | <p>Solve <span class="math-container">$$\dfrac{x^3-4x^2-4x+16}{\sqrt{x^2-5x+4}}=0.$$</span>
We have <span class="math-container">$D_x:\begin{cases}x^2-5x+4\ge0\\x^2-5x+4\ne0\end{cases}\iff x^2-5x+4>0\iff x\in(-\infty;1)\cup(4;+\infty).$</span> Now I am trying to solve the equation <span class="math-container">$x^3-4x^2-4x+16=0.$</span> I have not studied how to solve cubic equations. Thank you in advance!</p>
| user2661923 | 464,411 | <p>Alternate approach</p>
<p>My algebra abilities are limited. I will show you how I would attack the
problem.</p>
<p>Given <span class="math-container">$$\frac{f(x)}{\sqrt{g(x)}} = 0$$</span></p>
<p>where (if I understand correctly) <span class="math-container">$x$</span> may be any real number</p>
<p>then my first step is to <strong>automatically</strong>
convert the equation to</p>
<p><span class="math-container">$$\frac{[f(x)]^2}{g(x)} = 0$$</span></p>
<p>with the understanding that any values of <span class="math-container">$x$</span> that satisfy the second equation
must be <strong>manually examined</strong> to see if they also satisfy the first equation.</p>
<p>My next step, which I consider <strong>mandatory</strong> in this problem is to <strong>meta-cheat.</strong></p>
<p>It can be presumed that you would not have been given this problem unless a
solution could be arrived at through the reasonable use of the tools that you
have been offered in your class.</p>
<p>Furthermore, attacking cubic equations (let alone <span class="math-container">$6^{\text{th}}$</span> degree equations) through
brute force is generally considered off limits, especially if you have not
been studying cubic equations in class.</p>
<p>At this point, there are only two possibilities:<br></p>
<ol>
<li><p>The teacher or book author is not of sound mind.</p>
</li>
<li><p>There is some hidden elegance that you are expected to discover.</p>
</li>
</ol>
<p>At this point, the only possible elegance <em>that I can imagine</em> (which allows for the possibility that my imagination is too narrow) consists of seeing if <span class="math-container">$f(x)$</span> and
<span class="math-container">$g(x)$</span> can be factored without much trouble. If so, then you can remove
common factors, which will simplify the examination of</p>
<p><span class="math-container">$$\frac{[f(x)]^2}{g(x)}.$$</span></p>
<p>There are two ways to handle this. One way is to notice that
<span class="math-container">$g(x) = (x-1)(x-4)$</span> and then ask yourself whether either of those two factors
is also a factor of <span class="math-container">$f(x)$</span>.</p>
<p>The other alternative, given that you are not supposed to use brute force
against a cubic, is to <strong>accidentally</strong> notice that the coefficients of <span class="math-container">$f(x)$</span> are</p>
<p><span class="math-container">$$ 1, -4, -4, 16$$</span></p>
<p>This <em>suggests</em> in and of itself that <span class="math-container">$(x-4)$</span> <strong>might</strong> be a factor of <span class="math-container">$f(x)$</span>.</p>
<p>However you determine the common factor(s), and simplify the problem, at
this point the <strong>meta-cheating</strong> is concluded, and you can then attack the
simplified problem more easily.</p>
<p><strong>By the way</strong> <br>
Once you factor <span class="math-container">$g(x) = (x-1)(x-4)$</span> <br>
you must then immediately presume that <br>
neither <span class="math-container">$x=1$</span> nor <span class="math-container">$x=4$</span> can be considered a satisfactory
answers.</p>
<p>This is because either of those two values for <span class="math-container">$x$</span> would
cause the denominator in the original problem to <span class="math-container">$= 0$</span>, which is forbidden.</p>
|
3,536,671 | <p>I have the following mathematical operations to use: Add, Divide, Minimum, Minus, Modulo, Multiply and Round.</p>
<p>With these I need to get a number, run it through a combination of these and return 0 if the number is negative or equal to 0 and the number itself if the number is greater than 0.</p>
<p>Is that possible?</p>
<p>EDIT: Minus is Subtract</p>
| Jaap Scherphuis | 362,967 | <p>Here is a slightly different way.</p>
<p>What you really want is <span class="math-container">$\max(0,x)$</span>, but you don't have the max function available. Fortunately however you do have the min function, and you can use the fact that <span class="math-container">$\max(a,b) = -\min(-a,-b)$</span>.</p>
<p>So you can use <span class="math-container">$-\min(0,-x)$</span>.</p>
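<p>Spelled out with the allowed operations (a Python sketch; the function name is ours):</p>

```python
def clamp_nonnegative(x):
    # max(0, x) expressed with only Minimum and Minus: -min(0, -x)
    return -min(0, -x)

for x in (-3, 0, 5):
    print(x, clamp_nonnegative(x))   # -3 -> 0, 0 -> 0, 5 -> 5
```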
|
4,280,426 | <blockquote>
<p>We have a bag with <span class="math-container">$3$</span> black balls and <span class="math-container">$5$</span> white balls. What is the probability of picking out two white balls if at least one of them is white?</p>
</blockquote>
<p>If <span class="math-container">$A$</span> is the event of first ball being white and <span class="math-container">$B$</span> the second ball being white, could it be <span class="math-container">$p\bigl((A|B)\cup(B|A)\bigr)$</span>?
Although <span class="math-container">$B$</span> depends on <span class="math-container">$A$</span>, I don't understand why <span class="math-container">$A$</span> depends on <span class="math-container">$B$</span>, as <span class="math-container">$B$</span> occurs after <span class="math-container">$A$</span> has occurred.</p>
<p>Thank you very much for your help.</p>
<p>Edit: and the probability of obtaining two white balls if I have only one white (regardless if it’s the first or the second one)?
Thank you very much for your help!</p>
| Atticus Stonestrom | 663,661 | <p>Yes, it is the case that <span class="math-container">$\operatorname{cl}(A)=\mathbb{R}$</span>. Here is an argument that works in more general contexts: by definition, a subset <span class="math-container">$X\subseteq\mathbb{R}$</span> is closed if and only if either <span class="math-container">$X$</span> is countable or <span class="math-container">$X=\mathbb{R}$</span>. Now, <span class="math-container">$\operatorname{cl}(A)$</span> is a closed set containing <span class="math-container">$A$</span>; since <span class="math-container">$A$</span> is uncountable, this leaves only one option for what <span class="math-container">$\operatorname{cl}(A)$</span> can be: namely, <span class="math-container">$\mathbb{R}$</span> itself. More generally, this same argument shows that <span class="math-container">$\operatorname{cl}(B)=\mathbb{R}$</span> for any uncountable subset <span class="math-container">$B\subseteq\mathbb{R}$</span>.</p>
|
2,511,095 | <p>Let $p$ be an odd prime. We know that the polynomial $x^{p-1}-1$ splits into linear factors modulo $p$. If $p$ is of the form $4k+1$ then we can write
$$x^{p-1}-1=x^{4k}-1=(x^{2k}+1)(x^{2k}-1).$$
The theorem of Lagrange tells us that any polynomial congruence of degree $n$ mod $p$ has at most $n$ solutions. Hence we can deduce from this factorization that $-1$ is a quadratic residue modulo $p$. Similarly if $p$ is of the form $3k+1$ we can write $4(x^{p-1}-1)=4(x^{3k}-1)=(x^k-1)((2x^{k}+1)^2+3)$ and deduce that $-3$ is a quadratic residue mod $p$. </p>
<p><strong>Can we prove in this fashion that $-2$ is a quadratic residue mod $p$ if $p$ is of the form $8k+1$ or $8k+3$?</strong></p>
<p>Note that I am interested only in this specific method. I know how to prove this using different means.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>Setting $a=bt$, we get
$$b^3t^3+39b^3t-18=0$$
$$3b^3t^2+13b^3-5=0.$$ Eliminating $b^3$, we get
$$5t^3-54t^2+195t-234=0.$$
One solution is $t=3$, which gives $a=3b$;
plugging this into the given equations, we get $$a=\frac{3}{2},\qquad b=\frac{1}{2}.$$</p>
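<p>Exact rational arithmetic confirms both the root and the final values (a Python sketch; we take the system being solved to be <span class="math-container">$a^3+39ab^2=18$</span>, <span class="math-container">$3a^2b+13b^3=5$</span>, as the substitution above suggests):</p>

```python
from fractions import Fraction

t = 3
assert 5*t**3 - 54*t**2 + 195*t - 234 == 0   # t = 3 is a root of the resolvent cubic

a, b = Fraction(3, 2), Fraction(1, 2)
assert a == 3 * b
assert a**3 + 39*a*b**2 == 18
assert 3*a**2*b + 13*b**3 == 5
print("a =", a, " b =", b)
```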
|
2,511,095 | <p>Let $p$ be an odd prime. We know that the polynomial $x^{p-1}-1$ splits into linear factors modulo $p$. If $p$ is of the form $4k+1$ then we can write
$$x^{p-1}-1=x^{4k}-1=(x^{2k}+1)(x^{2k}-1).$$
The theorem of Lagrange tells us that any polynomial congruence of degree $n$ mod $p$ has at most $n$ solutions. Hence we can deduce from this factorization that $-1$ is a quadratic residue modulo $p$. Similarly if $p$ is of the form $3k+1$ we can write $4(x^{p-1}-1)=4(x^{3k}-1)=(x^k-1)((2x^{k}+1)^2+3)$ and deduce that $-3$ is a quadratic residue mod $p$. </p>
<p><strong>Can we prove in this fashion that $-2$ is a quadratic residue mod $p$ if $p$ is of the form $8k+1$ or $8k+3$?</strong></p>
<p>Note that I am interested only in this specific method. I know how to prove this using different means.</p>
| Ennar | 122,131 | <p><em>Disclaimer: This answer is probably not appropriate considering (algebra-precalculus) tag, but I'll write it anyway since it might be useful to others stumbling upon this question.</em></p>
<p>The system is easier to solve when considering original equation $$(a+b\sqrt{13})^3=18+5\sqrt{13}.$$</p>
<p>If we look at norm $N(x+y\sqrt{13}) = x^2-13y^2$, it turns out that $N(18+5\sqrt{13}) = -1$. By multiplicativity of $N$ it follows that $N(a+b\sqrt{13}) = -1$, which gives us $a^2-13b^2 = -1$. Substitute $13b^2 = a^2 + 1$ in $a^3+39ab^2 - 18 = 0$ to get equation $$4a^3+3a-18 = 0.$$</p>
<p>By rational root theorem, the only rational root of the last equation is $a=\frac 32$. It follows that $13b^2 = (3/2)^2+1$, or $b^2 = \frac 14$. From the equation $3a^2b + 13b^3 = 5$, we know that $b$ must be positive, so $b = \frac 12$.</p>
<p>Note that there are complex solutions as well.</p>
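<p>As a quick sanity check (my own, not part of the argument), the claimed solution can be verified with exact rational arithmetic in Python:</p>

```python
from fractions import Fraction

# (a + b*sqrt(13))^3 = (a^3 + 39 a b^2) + (3 a^2 b + 13 b^3) * sqrt(13)
a, b = Fraction(3, 2), Fraction(1, 2)

assert a**3 + 39 * a * b**2 == 18      # rational part
assert 3 * a**2 * b + 13 * b**3 == 5   # coefficient of sqrt(13)
assert a**2 - 13 * b**2 == -1          # the norm condition N(a + b*sqrt(13)) = -1
print("verified")
```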
|
300,163 | <p>I need to integrate $z/\bar z$ (where $\bar z$ is the conjugate of $z$) counterclockwise in the upper half ($y>0$) of a donut-shaped ring. The outer circle is $|z|=4$ and the inner circle is $|z|=2$. </p>
<p><strong>My method:</strong></p>
<p>$z/\bar z = e^{2i\theta}$ - which is entire over the complex plane.
So with respect to $d\theta$, we get the integral $re^{i3\theta} d\theta$, which we can then evaluate at r=4 (from pi to 0) and at r=2 (from 0 to pi)</p>
<p><strong>Two questions:</strong></p>
<p>1) As I am integrating in the counterclockwise direction, surely I shouldn't be getting a negative number?</p>
<p>2) Via the deformation theorem, as the function is holomorphic on both circles and the region between them, should I not be getting 0? </p>
| Ron Gordon | 53,268 | <p>In the ccw direction, there are 4 contributions to this integral:</p>
<p>$$\begin{align}\oint_C dz \frac{z}{z^*} &= 4i \int_0^{\pi} d\theta \: e^{i 3 \theta} - 2i\int_0^{\pi} d\theta \: e^{i 3 \theta} + \int_{-4}^{-2} dt + \int_2^4 dt\\ &= -\frac{8}{3} + \frac{4}{3} + 2 + 2 \\ &= \frac{8}{3} \end{align} $$</p>
<p>(On the arcs, $z = re^{i\theta}$ gives $dz = ire^{i\theta}\,d\theta$, which supplies the factors of $i$, and $\int_0^{\pi} e^{3i\theta}\,d\theta = \frac{2i}{3}$.)</p>
<p>The fact that this is $\ne 0$ has something to do with the fact that $z^*$ is not holomorphic within the integration region.</p>
|
878,785 | <p>I know that the common approach in order to find an angle is to calculate the dot product between 2 vectors and then calculate arcus cos of it. But in this solution I can get an angle only in the range(0, 180) degrees. What would be the proper way to get an angle in range of (0, 360)?</p>
| MvG | 35,416 | <p><em>I'm adapting <a href="https://stackoverflow.com/a/16544330/1468366">my answer on Stack Overflow</a>.</em></p>
<h1>2D case</h1>
<p>Just like the <a href="http://en.wikipedia.org/wiki/Dot_product" rel="noreferrer">dot product</a> is proportional to the cosine of the angle, the <a href="http://en.wikipedia.org/wiki/Determinant" rel="noreferrer">determinant</a> is proportional to its sine. And if you know the cosine and the sine, then you can compute the angle. Many programming languages provide a function <code>atan2</code> for this purpose, e.g.:</p>
<pre><code>dot = x1*x2 + y1*y2 # dot product
det = x1*y2 - y1*x2 # determinant
angle = atan2(det, dot) # atan2(y, x) or atan2(sin, cos)
</code></pre>
<h1>3D case</h1>
<p>In 3D, two arbitrarily placed vectors define their own axis of rotation, perpendicular to both. That axis of rotation does not come with a fixed orientation, which means that you cannot uniquely fix the direction of the angle of rotation either. One common convention is to let angles be always positive, and to orient the axis in such a way that it fits a positive angle. In this case, the dot product of the normalized vectors is enough to compute angles.</p>
<h1>Plane embedded in 3D</h1>
<p>One special case is the case where your vectors are not placed arbitrarily, but lie within a plane with a known normal vector $n$. Then the axis of rotation will be in direction $n$ as well, and the orientation of $n$ will fix an orientation for that axis. In this case, you can adapt the 2D computation above, including $n$ into the determinant to make its size $3\times3$.
One condition for this to work is that the normal vector $n$ has unit length. If not, you'll have to normalize it.
The determinant could also be expressed as the <a href="http://en.wikipedia.org/wiki/Triple_product#Scalar_triple_product" rel="noreferrer">triple product</a>:</p>
<p>$$\det(v_1,v_2,n) = n \cdot (v_1 \times v_2)$$</p>
<p>This might be easier to implement in some APIs, and gives a different perspective on what's going on here: The cross product is proportional to the sine of the angle, and will lie perpendicular to the plane, hence be a multiple of $n$. The dot product will therefore basically measure the length of that vector, but with the correct sign attached to it.</p>
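<p>A concrete Python sketch of the 2D recipe above (function names are my own, not a standard API), including an optional fold into the $[0, 2\pi)$ range the question asks about:</p>

```python
import math

def signed_angle_2d(x1, y1, x2, y2):
    """Signed angle from (x1, y1) to (x2, y2), in (-pi, pi]."""
    dot = x1 * x2 + y1 * y2   # proportional to cos(angle)
    det = x1 * y2 - y1 * x2   # proportional to sin(angle)
    return math.atan2(det, dot)

def angle_0_to_2pi(x1, y1, x2, y2):
    """Same angle folded into [0, 2*pi), i.e. 0..360 degrees."""
    return signed_angle_2d(x1, y1, x2, y2) % (2 * math.pi)

print(math.degrees(angle_0_to_2pi(1, 0, 0, 1)))  # ~ 90 degrees
print(math.degrees(angle_0_to_2pi(0, 1, 1, 0)))  # ~ 270 degrees
```

The 3D plane-embedded case is the same with <code>det</code> replaced by the triple product $n \cdot (v_1 \times v_2)$.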
|
436,225 | <p><a href="http://en.wikipedia.org/wiki/Incidence_matrix">The incidence matrix</a> of a graph is a way to represent the graph. Why go through the trouble of creating this representation of a graph? In other words what are the applications of the incidence matrix or some interesting properties it reveals about its graph?</p>
| Lord Soth | 70,323 | <p>Because then one may apply matrix theoretical tools to graph theory problems. One area where it is useful is when you consider flows on a graph, e.g. <a href="http://ocw.mit.edu/courses/mathematics/18-06sc-linear-algebra-fall-2011/ax-b-and-the-four-subspaces/graphs-networks-incidence-matrices/MIT18_06SCF11_Ses1.12sum.pdf" rel="noreferrer">the flow of current on an electrical circuit and the associated potentials</a>.</p>
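<p>As a small toy illustration (my own example, not taken from the linked notes): in a node–edge incidence matrix of a directed graph, each column carries a $-1$ at the tail and a $+1$ at the head of one edge, so applying the matrix to an edge-flow vector yields the net flow into each node — exactly the bookkeeping used for circuits:</p>

```python
def incidence_matrix(num_nodes, edges):
    """Node-edge incidence matrix: column j has -1 at the tail and
    +1 at the head of edge j (one common sign convention)."""
    B = [[0] * len(edges) for _ in range(num_nodes)]
    for j, (tail, head) in enumerate(edges):
        B[tail][j] = -1
        B[head][j] = 1
    return B

# Triangle with edges 0->1, 1->2, 0->2.
edges = [(0, 1), (1, 2), (0, 2)]
B = incidence_matrix(3, edges)

# Send one unit of flow along 0 -> 1 -> 2: net flow is -1 at node 0,
# 0 at node 1 (in = out), and +1 at node 2.
x = [1, 1, 0]
net = [sum(B[i][j] * x[j] for j in range(len(x))) for i in range(3)]
print(net)  # [-1, 0, 1]
```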
|
2,506,279 | <blockquote>
<p>If $\lim_{x\to \infty}xf(x^2+1)=2$ then find
$$\lim_{x\to 0}\dfrac{2f'(1/x)}{x\sqrt{x}}=?$$</p>
</blockquote>
<p>My Try :
$$g(x):=xf(x^2+1)\\g'(x)=f(x^2+1)+2xf'(x^2+1)$$
Now what?</p>
| vinc17 | 459,608 | <p>The sigma notation is correct, and maybe the best one as it is non-ambiguous.</p>
<p>The <a href="https://en.wikipedia.org/wiki/Floating-point_arithmetic#Floating-point_numbers" rel="nofollow noreferrer">Wikipedia page</a> excluded $2^{-3}$, $2^{-5}$ and $2^{-6}$ because their corresponding bits were 0's: $0 × 2^{-3} = 0$, etc. However, this could be confusing.</p>
<p>I also agree that "the first bit" was a poor choice, because bits can be read left-to-right or right-to-left (both can be useful, depending on the context).</p>
<p>I think that the various issues on the Wikipedia page have now been solved with the corrections you and I did.</p>
<p>Moreover, be careful that in addition to the fact that the truncated binary expansion is an approximation to the real value, the binary-to-decimal conversion is also approximated (unless you take into account all the decimal digits of the powers of two).</p>
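<p>Both effects can be seen in a small experiment (a toy illustration of mine, not tied to any particular number above): a value like $0.625 = 2^{-1} + 2^{-3}$ is reconstructed exactly from its bits, whereas truncating the binary expansion of $0.1$ after $k$ bits only approximates it:</p>

```python
# 0.625 has a finite binary expansion: 0.101 = 2^-1 + 2^-3.
assert 2**-1 + 2**-3 == 0.625

def truncated_binary(value, k):
    """Sum of the first k fractional bits of value's binary expansion."""
    total, rest = 0.0, value
    for i in range(1, k + 1):
        bit = int(rest * 2)
        rest = rest * 2 - bit
        total += bit * 2.0**-i
    return total

# 0.1 = 0.000110011... in binary; a truncation is only an approximation,
# and the error shrinks as more bits are kept.
print(truncated_binary(0.1, 10))  # 0.099609375
print(truncated_binary(0.1, 30))  # much closer to 0.1
```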
<p>Note that for better understanding, you may also be interested in:</p>
<ul>
<li><a href="https://en.wikipedia.org/wiki/Radix_point" rel="nofollow noreferrer">Radix point</a></li>
<li><a href="https://en.wikipedia.org/wiki/Positional_notation" rel="nofollow noreferrer">Positional notation</a></li>
<li><a href="https://en.wikipedia.org/wiki/Binary_number" rel="nofollow noreferrer">Binary number</a></li>
<li><a href="https://en.wikipedia.org/wiki/Decimal" rel="nofollow noreferrer">Decimal</a></li>
</ul>
|
387,268 | <p>Let <span class="math-container">$A$</span> be an <span class="math-container">$N\times N$</span> nonnegative matrix with all diagonal entries equal to zero and such that there is <span class="math-container">$n_0$</span> such that all entries of <span class="math-container">$A^{n_0}$</span> are strictly positive. Let <span class="math-container">$\lambda_1,\ldots, \lambda_N$</span> be its eigenvalues ordered in the decreasing order with respect to their real parts, and <span class="math-container">$v_1,\ldots, v_N$</span> be the corresponding (left) eigenvectors. Perron and Frobenius tell us that <span class="math-container">$\lambda_1$</span> is a strictly positive real number and therefore (since the sum of eigenvalues must be zero) there must also be eigenvalues with strictly negative real part; let <span class="math-container">$\lambda_{k_0},\ldots, \lambda_N$</span> be those.</p>
<p>Questions:</p>
<p>(1) is it true that the "smallest" (with respect to the real part of the corresponding eigenvalue) eigenvector <span class="math-container">$v_N$</span> can be chosen in such a way that all of its entries are nonzero?</p>
<p>(2) if the above doesn't hold, is it at least true that for any <span class="math-container">$j\in \{1,\ldots,N\}$</span> we can find <span class="math-container">$m\geq k_0$</span> such that <span class="math-container">$v_m$</span> has nonzero <span class="math-container">$j$</span>th component (that is, the set of eigenvectors corresponding to eigenvalues with negative real part cannot have a common all-zero entry index)?</p>
| Jochen Glueck | 102,946 | <p><em>Partial answer:</em> For the special case of self-adjoint matrices, the answer to (2) is <strong>yes</strong>. Funnily enough, this has nothing to do with the non-negativity of the matrix:</p>
<p><strong>Proposition.</strong> Let <span class="math-container">$A \not= 0$</span> be a self-adjoint complex <span class="math-container">$N \times N$</span> matrix with all diagonal entries equal to <span class="math-container">$0$</span>, let <span class="math-container">$\lambda_1 \ge \dots \ge \lambda_N \in \mathbb{R}$</span> be its eigenvalues, and let <span class="math-container">$v_1, \dots, v_N \in \mathbb{C}^N$</span> be an orthonormal basis of eigenvectors. Assume that every entry of <span class="math-container">$v_1$</span> is non-zero.</p>
<p>Then for each <span class="math-container">$j \in \{1,\dots,N\}$</span> there exists an eigenvector for a negative eigenvalue whose <span class="math-container">$j$</span>-th component is non-zero.</p>
<p><em>Proof.</em> Fix <span class="math-container">$j$</span> and write the <span class="math-container">$j$</span>-th canonical unit vector <span class="math-container">$e_j$</span> as
<span class="math-container">$$
e_j = \sum_{k=1}^N \alpha_k v_k,
$$</span>
where <span class="math-container">$\alpha_k = \langle v_k, e_j \rangle$</span> for each <span class="math-container">$k$</span> (here, I use the "physical" convention that the inner product be linear in the second component). We have
<span class="math-container">$$
0
=
\langle e_j, A e_j \rangle
=
\sum_{k,\,\ell=1}^N \overline{\alpha_k} \alpha_\ell \langle v_k, Av_\ell \rangle
=
\sum_{k=1}^N \lvert \alpha_k \rvert^2 \lambda_k.
$$</span>
Since <span class="math-container">$\lambda_1 > 0$</span> and <span class="math-container">$\alpha_1 \not= 0$</span> by assumption, it follows that there exists <span class="math-container">$k_0$</span> such that <span class="math-container">$\lambda_{k_0} < 0$</span> and <span class="math-container">$\alpha_{k_0} \not= 0$</span>. <span class="math-container">$\square$</span></p>
<p><strong>Remark.</strong> The assumption that every entry of <span class="math-container">$v_1$</span> be non-zero can be weakened; without this assumption, the following is still true:</p>
<p>Fix <span class="math-container">$j$</span>. If there exists an eigenvector for a positive eigenvalue whose <span class="math-container">$j$</span>-th component is non-zero, then there also exists an eigenvector for a negative eigenvalue whose <span class="math-container">$j$</span>-th component is non-zero. The proof is the same.</p>
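<p>A minimal concrete instance of the Proposition (my own sketch, hard-coding the spectral data of the $2\times 2$ matrix with zero diagonal and off-diagonal entries $1$) checks the key identity $\langle e_j, Ae_j\rangle = \sum_k \lvert\alpha_k\rvert^2\lambda_k = 0$:</p>

```python
import math

# A = [[0, 1], [1, 0]] is self-adjoint with zero diagonal; its eigenvalues
# are +1 and -1, with orthonormal eigenvectors (1,1)/sqrt(2), (1,-1)/sqrt(2).
s = 1 / math.sqrt(2)
lam = [1.0, -1.0]
v = [[s, s], [s, -s]]  # v[k] is the eigenvector for lam[k]

for j in range(2):
    e = [1.0 if i == j else 0.0 for i in range(2)]
    alphas = [sum(v[k][i] * e[i] for i in range(2)) for k in range(2)]
    total = sum(lam[k] * alphas[k] ** 2 for k in range(2))
    assert abs(total) < 1e-12  # <e_j, A e_j> = 0, matching the zero diagonal

# The eigenvector for the negative eigenvalue has every component
# non-zero, as the Proposition predicts.
assert all(abs(c) > 0 for c in v[1])
print("checked")
```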
|
426,974 | <p>Suppose the dynamical system <span class="math-container">$(X,T)$</span> has only proper factors (i.e. not <span class="math-container">$(X,T)$</span> itself) of zero topological entropy. Does the system <span class="math-container">$(X,T)$</span> also have zero entropy?</p>
| Ronnie Pavlov | 116,357 | <p>This question is very related to the question of <strong>lowering topological entropy</strong>, introduced in ``Can one always lower topological entropy?'' by Shub and Weiss and then very nearly solved by Lindenstrauss in "Lowering topological entropy" and "Mean Dimension, Small Entropy Factors, and an Embedding Theorem." There's too much to state everything here, but here's a quick overview:</p>
<p>The question is: does every <span class="math-container">$(X,T)$</span> of positive entropy possess a nontrivial (i.e. not one point) factor <span class="math-container">$(Y,S)$</span> with <span class="math-container">$h(Y) < h(X)$</span>?</p>
<ol>
<li><p>Shub and Weiss showed that the answer is "yes" for subshifts and more generally, systems with the so-called <strong>small boundary property</strong>.</p>
</li>
<li><p>Lindenstrauss showed that the answer is "yes" for systems <span class="math-container">$(X,T)$</span> where <span class="math-container">$X$</span> is finite-dimensional. He then showed that the same is true when <span class="math-container">$(X,T)$</span> has zero <strong>mean dimension</strong> and a nontrivial minimal factor. Mean dimension is too long to define here, but in particular, <span class="math-container">$(X,T)$</span> has zero mean dimension if <span class="math-container">$(X,T)$</span> has finite entropy!</p>
</li>
<li><p>Germane to your question: Lindenstrauss also gives examples of <span class="math-container">$(X,T)$</span> (with infinite entropy and <span class="math-container">$X$</span> infinite-dimensional) for which every nontrivial factor <span class="math-container">$(Y,S)$</span> factors onto the original system <span class="math-container">$(X,T)$</span>! The easiest is <span class="math-container">$X = [0,1]^{\mathbb{Z}}$</span> and <span class="math-container">$T$</span> the shift <span class="math-container">$\sigma$</span>, but he also constructs a minimal example.</p>
</li>
</ol>
<p>To summarize: any finite-entropy system (with a nontrivial minimal factor) has a nontrivial factor of smaller entropy, thus not conjugate to the original. So for systems with a nontrivial minimal factor, the only possibilities for your condition are zero or infinite entropy.</p>
<p>For infinite entropy, we can't yet rule out your condition, since technically the example above doesn't satisfy your criteria (the nontrivial factors are not isomorphic to the original system). However, the factors are in a way "bigger" than the original system, which is perhaps even more surprising!</p>
|
3,266,930 | <blockquote>
<p>Let <span class="math-container">$X$</span> be a positive random variable on <span class="math-container">$(\Omega,\mathscr{A},P)$</span>. Show that if <span class="math-container">$X\in L_p$</span> for <span class="math-container">$1<p<\infty$</span>, then
<span class="math-container">$\lim_{x\to\infty} x^p P(X>x)=0$</span>.</p>
</blockquote>
<p>Using Chebyshev inequality:</p>
<p><span class="math-container">$\lim_{x\to\infty} x^p P(X>x)\leqslant\lim_{x\to\infty} x^p\frac{1}{x^p} \int_{X>x}|X|^p dP=\lim_{x\to\infty} \int_{X>x}|X|^p dP $</span></p>
<p>Is it true <span class="math-container">$ \lim_{x\to\infty} \int_{X>x}|X|^p dP=0 $</span>?</p>
<p><strong>Questions:</strong></p>
<p>Is my reasoning right? How do I prove <span class="math-container">$ \lim_{x\to\infty} \int_{X>x}|X|^p dP=0 $</span>?</p>
<p>Thanks in advance!</p>
| Bernard | 202,857 | <p><em>Euler's theorem</em> asserts that every element <span class="math-container">$a$</span> which is coprime to <span class="math-container">$n$</span>, i.e. which is a unit mod. <span class="math-container">$n$</span>, satisfies <span class="math-container">$\:a^{\varphi(n)}\equiv 1\mod n$</span>. </p>
<p>This does not mean the order of <span class="math-container">$a$</span> is <span class="math-container">$\varphi(n)$</span>, only that it is a <em>divisor</em> of <span class="math-container">$\varphi(n)$</span>.</p>
<p>Now, let's use the high school factorisation:
<span class="math-container">$$0\equiv 1-a^{\varphi(n)}=(1-a)(1+a+a^2+\dots+a^{\varphi(n)-1}).$$</span>
By hypothesis, <span class="math-container">$1-a$</span> is a unit mod. <span class="math-container">$n$</span>, so this implies the second factor is congruent to <span class="math-container">$0$</span>.</p>
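<p>The argument is easy to spot-check by brute force (my own check over a few moduli): whenever both $a$ and $1-a$ are units mod $n$, the second factor $1+a+\dots+a^{\varphi(n)-1}$ is divisible by $n$:</p>

```python
from math import gcd

def phi(n):
    """Euler's totient, by direct count (fine for small n)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

for n in (7, 9, 10, 15, 21):
    for a in range(2, n):
        if gcd(a, n) == 1 and gcd(a - 1, n) == 1:
            geometric_sum = sum(pow(a, i, n) for i in range(phi(n)))
            assert geometric_sum % n == 0
print("factorisation argument verified")
```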
|
3,854,286 | <p>This was an exercise in my class, please help:</p>
<blockquote>
<p>Put <span class="math-container">$A = {\mathbb Q}[x,y]$</span> and <span class="math-container">$B = {\mathbb Q}[x,z]$</span>. Consider the morphism <span class="math-container">$f \colon A \to B$</span> of <span class="math-container">${\mathbb Q}$</span>-algebras given by <span class="math-container">$x \mapsto x$</span>, <span class="math-container">$y \mapsto x z$</span>. Then <span class="math-container">$B$</span> is an <span class="math-container">$A$</span>-module. Is <span class="math-container">$B$</span> flat ? [Hint: Consider the inclusion <span class="math-container">$(x,y) \subset A$</span>.]</p>
</blockquote>
<p>My guess is that it isn't flat, by using their hint I found that the map</p>
<p><span class="math-container">$(x,y)\otimes_A B\to A\otimes_A B$</span></p>
<p>sends <span class="math-container">$x\otimes z - y\otimes 1$</span> to <span class="math-container">$0$</span>, so if this was nonzero I would be done. But I have a hard time proving that <span class="math-container">$x\otimes z - y\otimes 1$</span> is nonzero.</p>
| David Holmes | 618,250 | <p>I don't follow Mindlack's proof, as I'm not sure what their <span class="math-container">$C$</span> is. The proof I had in mind when posting the question was the following:</p>
<p>For the justification that <span class="math-container">$$y \otimes 1 \neq x \otimes z,$$</span> one could use the "truncated Koszul complex" <span class="math-container">$$A \to A \oplus A \to (x,y) \to 0;$$</span> here the first map sends $1$ to <span class="math-container">$(y,-x)$</span> and the second map sends <span class="math-container">$$(1,0) \mapsto x \text{ and }(0,1) \mapsto y.$$</span> One can check without too much trouble that this is an exact sequence (actually the first map is also injective).</p>
<p>Now we use that the tensor product is right exact. Tensoring the sequence over <span class="math-container">$A$</span> with <span class="math-container">$B$</span> yields
<span class="math-container">$$B \to B \oplus B \to (x,y) \otimes_A B \to 0$$</span> where the first map sends <span class="math-container">$1$</span> to <span class="math-container">$(xz,-x)$</span>, and the second sends <span class="math-container">$$(1,0) \mapsto x\otimes 1 \text{ and } (0,1)\mapsto y\otimes 1.$$</span> We want to show that <span class="math-container">$$y \otimes 1 - x \otimes z$$</span> does not lift to the first copy of <span class="math-container">$B$</span>. We choose a lift to <span class="math-container">$ B \oplus B$</span>, say given by <span class="math-container">$ (-z,1)$</span>. This is not in the image of the first map because neither component is divisible by <span class="math-container">$x$</span> (note that <span class="math-container">$B$</span> is a UFD).</p>
|
3,410,802 | <p>I was trying to prove that a surjective endomorphism <span class="math-container">$f:A \to A$</span> of a noetherian ring is also injective. I would like to know why this argument is not correct:
<span class="math-container">$A/\rm{Ker}f \cong \rm{Im}f=A \Rightarrow \rm{Ker}f=\{0\}$</span></p>
| Community | -1 | <p>It doesn't work because there isn't any general principle of ring theory according to which <span class="math-container">$R/I$</span> should be isomorphic to <span class="math-container">$R$</span> only if <span class="math-container">$I=0$</span>.</p>
<p>If <span class="math-container">$f$</span> were surjective but not injective, then the sequence of ideals <span class="math-container">$$0\subseteq\ker( f)\subseteq \ker (f^2)\subseteq \ker (f^3)\subseteq\cdots$$</span> would be <em>strictly</em> increasing, which is inconsistent with noetherianity of <span class="math-container">$A$</span>.</p>
|
3,410,802 | <p>I was trying to prove that a surjective endomorphism <span class="math-container">$f:A \to A$</span> of a noetherian ring is also injective. I would like to know why this argument is not correct:
<span class="math-container">$A/\rm{Ker}f \cong \rm{Im}f=A \Rightarrow \rm{Ker}f=\{0\}$</span></p>
| Claudius | 218,931 | <p>Your argument cannot be correct, since it doesn't use the noetherian hypothesis; this is because for a non-noetherian ring the statement is wrong:<br>
Consider the polynomial ring <span class="math-container">$A = \mathbb Z[x_1,x_2,x_3,\dotsc]$</span> in infinitely many variables. It is clearly not noetherian. Now, consider the ring homomorphism <span class="math-container">$f\colon A\to A$</span> given on variables by <span class="math-container">$f(x_1) = 0$</span> and <span class="math-container">$f(x_n) = x_{n-1}$</span> for <span class="math-container">$n>1$</span>.
It is surjective, but not injective. </p>
|
529,260 | <p>Let $V$ be a complex vector space of dimension $n$ with a scalar product, and let $u$ be an unitary vector in $V$. Let $H_u: V \to V$ be defined as</p>
<p>$$H_u(v) = v - 2 \langle v,u \rangle u$$</p>
<p>for all $v \in V$. I need to find the minimal polynomial and the characteristic polynomial of this linear operator, but the only way I know to find the characteristic polynomial is using the associated matrix of the operator.</p>
<p>I don't know how to find this matrix because I don't know how to deal with the scalar product. Is there some other way to find the characteristic polynomial? If not, how can I find the associated matrix of this linear operator?</p>
<p>Thanks in advance.</p>
| Felix Marin | 85,343 | <p>$$
\left\langle \nu',H_{u}\left(\nu\right)\right\rangle
=
\left\langle \nu', \nu\right\rangle
-
2\left\langle \nu, u\right\rangle
\left\langle \nu', u\right\rangle
$$</p>
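<p>In the same spirit, one can check numerically that $H_u$ is an involution whenever $u$ is a unit vector (so its minimal polynomial divides $x^2-1$); the conventions and names below are my own, not from the question:</p>

```python
import random

def inner(v, u):
    """Hermitian inner product, linear in the first slot."""
    return sum(a * b.conjugate() for a, b in zip(v, u))

def H(v, u):
    """H_u(v) = v - 2 <v, u> u, as in the question."""
    c = 2 * inner(v, u)
    return [a - c * b for a, b in zip(v, u)]

random.seed(0)
n = 4
u = [complex(random.random(), random.random()) for _ in range(n)]
norm = abs(inner(u, u)) ** 0.5
u = [a / norm for a in u]  # make u a unit vector
v = [complex(random.random(), random.random()) for _ in range(n)]

# Applying H_u twice returns v: H_u is an involution.
w = H(H(v, u), u)
print(max(abs(a - b) for a, b in zip(w, v)))  # ~ 0 up to rounding
```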
|
1,943,351 | <p>Good day,</p>
<p>In class we said that if a random variable <span class="math-container">$X-Y$</span> is independent of random variables <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> then <span class="math-container">$X-Y$</span> is almost surely constant, i.e. there exists a <span class="math-container">$c \in \mathbb{R}$</span> such that <span class="math-container">$P(X-Y=c)=1$</span>.</p>
<p>First, I don't know exactly how to prove this. I know that <span class="math-container">$X$</span> is constant if it is independent of itself. Therefore I could prove that <span class="math-container">$X-Y$</span> is independent of itself (but the other direction doesn't hold, I suppose). Do I know that <span class="math-container">$X-Y$</span> is independent of itself?</p>
<blockquote>
<p>Is it correct to say: If <span class="math-container">$Z$</span> is independent of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> then it is independent of <span class="math-container">$g(X,Y)$</span> where <span class="math-container">$g$</span> is a measurable function.</p>
</blockquote>
<p>I don't think so. The definition of independence doesn't give this property.</p>
<p>Then how do I prove that <span class="math-container">$X-Y$</span> is almost sure constant? Another approach through expectations:</p>
<p><span class="math-container">$$E(X-Y|X)=E(X-Y|Y)=E(X-Y)=EX-EY $$</span></p>
<p>But it does not seem to lead anywhere.</p>
<blockquote>
<p>So: Why is the random variable <span class="math-container">$X-Y$</span> almost surely constant if it is independent of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>?</p>
<p>Is this valid for a general random variable <span class="math-container">$f(X,Y)$</span> (where <span class="math-container">$f$</span> is measurable, for example)? I.e., is <span class="math-container">$f(X,Y)$</span> almost surely constant if it is independent of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>?</p>
</blockquote>
<p>If not I would ask for a counterexample.</p>
<p>Thanks a lot for your help,
Marvin</p>
| zhoraster | 262,269 | <p>Extending Landon Carter's answer,</p>
<blockquote>
<p>i.e. $f(X,Y)$ is almost surely constant if it is independent of $X$ and $Y$?</p>
</blockquote>
<p>No. Flip a coin twice, let $X_i = \mathbf{1}_{\text{H on $i$th flip}}$, $i=1,2$, $Z = \mathbf{1}_{X_1=X_2}$. Then $Z$ is independent of both $X_1$ and $X_2$, but is not constant. (Note that here $Z$ is moreover a function of the difference $X_1-X_2$.)</p>
<p>As Landon Carter wrote, joint independence is enough. A weaker sufficient condition is that $f(X,Y)$ is independent of the <em>vector</em> $(X,Y)$. Indeed, in this case $f(X,Y)$ is independent of any transformation of the vector $(X,Y)$. In particular, it is independent of itself, which means that it is constant a.s.</p>
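<p>The counterexample can be verified by enumerating the four equally likely outcomes (a mechanical check of mine; for $0$–$1$ variables, checking the single cell $\{Z=1, X_1=1\}$ already settles pairwise independence):</p>

```python
from fractions import Fraction
from itertools import product

outcomes = list(product([0, 1], repeat=2))  # (X1, X2), each with prob 1/4
p = Fraction(1, 4)

def prob(event):
    return sum((p for (x1, x2) in outcomes if event(x1, x2)), Fraction(0))

P_Z = prob(lambda a, b: a == b)    # P(Z = 1) = 1/2
P_X1 = prob(lambda a, b: a == 1)   # P(X1 = 1) = 1/2

# Z is independent of X1 (and of X2, by symmetry) ...
assert prob(lambda a, b: a == b and a == 1) == P_Z * P_X1

# ... but not of the pair (X1, X2):
assert prob(lambda a, b: a == b and (a, b) == (1, 0)) \
       != P_Z * prob(lambda a, b: (a, b) == (1, 0))
print("Z is pairwise independent of X1, X2 but not of the pair (X1, X2)")
```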
|
1,506,726 | <p>Let $V$ be a finite dimensional vector space, and $H_{1}, H_{2}$ be subspaces of $V$ such that $V=H_{1}\oplus H_{2}$. Now $V/H_{1}$ is isomorphic to $H_{2}$.</p>
<p>If we replace the vector space $V$ by a group $G$ and consider $H_1$ ,a normal subgroup of $G$, then <strong>under what conditions</strong> can we say that if $G=H_{1}\oplus H_{2}$ (which is same as $G = H_1 H_2)$, for some subgroup $H_2$ in $G$, then $G/H_{1}$ is isomorphic to $H_{2}$?</p>
<p>Any help will be appreciated!</p>
| Remy | 284,272 | <blockquote>
<p><strong>Lemma.</strong> If $H_1, H_2 \subseteq G$ are two subgroups, with $H_1$ normal, such that $H_1 \cap H_2 = 0$ and $H_1 H_2 = G$, then $G/H_1 \cong H_2$.</p>
</blockquote>
<p>Note that when $H_1$ is normal, the set $H_1 H_2$ is just the set of products $h_1 h_2$ with $h_1 \in H_1$ and $h_2 \in H_2$ (the statement here is that the latter subset is already a subgroup).</p>
<p><em>Proof.</em> Consider the quotient map $G \to G/H_1$, and restrict it to a map $\pi \colon H_2 \to G/H_1$. Then the assumptions imply that $\pi$ is injective and surjective (think!).</p>
|
1,506,726 | <p>Let $V$ be a finite dimensional vector space, and $H_{1}, H_{2}$ be subspaces of $V$ such that $V=H_{1}\oplus H_{2}$. Now $V/H_{1}$ is isomorphic to $H_{2}$.</p>
<p>If we replace the vector space $V$ by a group $G$ and consider $H_1$ ,a normal subgroup of $G$, then <strong>under what conditions</strong> can we say that if $G=H_{1}\oplus H_{2}$ (which is same as $G = H_1 H_2)$, for some subgroup $H_2$ in $G$, then $G/H_{1}$ is isomorphic to $H_{2}$?</p>
<p>Any help will be appreciated!</p>
| BCLC | 140,308 | <p>Different proof of Remy's Lemma:</p>
<ol>
<li><p>Observe <span class="math-container">$g \in G$</span> is uniquely <span class="math-container">$g=ab$</span> for some <span class="math-container">$(a,b) \in H_1 \times H_2$</span></p></li>
<li><p>Define the projection map onto <span class="math-container">$H_2$</span>, <span class="math-container">$\phi_2:G \to H_2$</span>, <span class="math-container">$\phi_2(g=ab):=b$</span>.</p></li>
<li><p><span class="math-container">$\phi_2$</span> is a surjective group homomorphism with <span class="math-container">$\ker \phi_2 = H_1$</span>.</p></li>
<li><p>By (3), apply the 1st isomorphism theorem.</p></li>
</ol>
<hr>
<p>Edit: Proof <span class="math-container">$\phi_2$</span> is a group homomorphism (wow that was hard): Let <span class="math-container">$abcd \in G$</span> for <span class="math-container">$a,c \in H_1$</span> and <span class="math-container">$b,d \in H_2$</span>. We must show <span class="math-container">$\phi_2(abcd)=\phi_2(ab)\phi_2(cd)=bd$</span>. This is true if <span class="math-container">$abcd=ebd$</span> for some <span class="math-container">$e \in H_1$</span>. Choose <span class="math-container">$e=abcb^{-1}$</span>, which we can do because <span class="math-container">$H_1$</span>'s normality gives us <span class="math-container">$bcb^{-1} \in H_1$</span>.</p>
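<p>Steps 1–4 can be checked mechanically on a small non-abelian example, $G=S_3$ with $H_1=A_3$ (normal) and $H_2=\{e,(1\,2)\}$ — a toy verification with my own conventions:</p>

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p[q[i]]: composition of permutations given as tuples."""
    return tuple(p[i] for i in q)

def parity(p):
    inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
                     if p[i] > p[j])
    return (-1) ** inversions

G = list(permutations(range(3)))        # S_3
identity = (0, 1, 2)
H1 = [p for p in G if parity(p) == 1]   # A_3, normal in S_3
H2 = [identity, (1, 0, 2)]              # subgroup generated by the swap (1 2)

# Step 1: every g in G is uniquely g = a b with a in H1, b in H2.
decomp = {}
for a in H1:
    for b in H2:
        g = compose(a, b)
        assert g not in decomp           # uniqueness
        decomp[g] = (a, b)
assert len(decomp) == len(G)             # existence

# Steps 2-3: phi2(g) := b is a homomorphism with kernel H1.
def phi2(g):
    return decomp[g][1]

for g in G:
    for h in G:
        assert phi2(compose(g, h)) == compose(phi2(g), phi2(h))
assert sorted(g for g in G if phi2(g) == identity) == sorted(H1)
print("phi2 is a surjective homomorphism with kernel H1")
```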
|
1,473,513 | <p>The motion of a pendulum is described by the differential equation</p>
<p><span class="math-container">$$ \ddot\theta +\frac gl \sin \theta = 0$$</span></p>
<p>if we integrate this equation with respect to <span class="math-container">$\theta$</span> we obtain</p>
<p><span class="math-container">$$ \frac 12 \dot \theta ^2 - \frac gl \cos \theta = C $$</span></p>
<p>Would anyone please shed some light on how to integrate the first term? It seems that:
<span class="math-container">$$\int \ddot \theta\,d\theta = \frac 12 \dot \theta ^2$$</span></p>
<p>Or in other words<br>
<span class="math-container">$$\int{\frac{d^2\theta}{dt^2}}\,d\theta =\frac{1}{2}\left( \frac{d\theta}{dt} \right) ^2$$</span></p>
<p>I don't really buy it</p>
| Ron Gordon | 53,268 | <p>Multiply the equation through by $\dot{\theta}$:</p>
<p>$$\dot{\theta}\, \ddot{\theta} +\frac{g}{\ell} \dot{\theta} \sin{\theta} = 0$$</p>
<p>Integrate with respect to $t$.</p>
<p>$$\int dt \, \dot{\theta}\, \ddot{\theta} = \int d\dot{\theta} \, \dot{\theta} = \frac12 \dot{\theta}^2 + C$$</p>
<p>$$\int dt\, \dot{\theta} \sin{\theta} = \int d\theta \, \sin{\theta} $$</p>
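<p>A numerical sketch (my own; RK4 with a made-up value of $g/l$) confirms that $E=\tfrac12\dot\theta^2-\tfrac{g}{l}\cos\theta$ indeed stays constant along solutions of $\ddot\theta+\tfrac{g}{l}\sin\theta=0$:</p>

```python
import math

g_over_l = 9.81  # assumed value of g/l for the demo

def deriv(state):
    theta, omega = state
    return (omega, -g_over_l * math.sin(theta))

def rk4_step(state, dt):
    def nudge(s, k, h):
        return (s[0] + h * k[0], s[1] + h * k[1])
    k1 = deriv(state)
    k2 = deriv(nudge(state, k1, dt / 2))
    k3 = deriv(nudge(state, k2, dt / 2))
    k4 = deriv(nudge(state, k3, dt))
    return (state[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def energy(state):
    theta, omega = state
    return 0.5 * omega**2 - g_over_l * math.cos(theta)

state = (1.0, 0.0)  # released from rest at 1 radian
e0 = energy(state)
for _ in range(10_000):
    state = rk4_step(state, 1e-3)
drift = abs(energy(state) - e0)
print(drift)  # tiny: the integral of motion is (numerically) conserved
```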
|
1,993,693 | <blockquote>
<p>$$\lim_{x \rightarrow +\infty} \frac{2^x}{x}$$ $$\lim_{x \rightarrow
\infty} \frac{x^{50}}{e^x}$$</p>
</blockquote>
<p>I don't really know how to solve this.</p>
<p>As for the first one, I know that $\lim_{x \rightarrow \infty} a^x=0$ , I supposed that helps...?</p>
<p>How do I solve these (preferably analytically, but I'll also accept otherwise)?</p>
| hamam_Abdallah | 369,188 | <p>Hint for the first.</p>
<p>taking logarithm we get</p>
<p>$$\lim_{x\to +\infty}(x\ln(2)-\ln(x))=$$</p>
<p>$$\lim_{x\to +\infty} x\left(\ln(2)-\frac{\ln(x)}{x}\right)=$$</p>
<p>$$+\infty$$</p>
<p>since $\lim_{x\to+\infty}\frac{\ln(x)}{x}=0$.</p>
<p>thus the first limit is $+\infty$.</p>
<p>the same approach gives $0$ for the second.</p>
|
1,993,693 | <blockquote>
<p>$$\lim_{x \rightarrow +\infty} \frac{2^x}{x}$$ $$\lim_{x \rightarrow
\infty} \frac{x^{50}}{e^x}$$</p>
</blockquote>
<p>I don't really know how to solve this.</p>
<p>As for the first one, I know that $\lim_{x \rightarrow \infty} a^x=0$ , I supposed that helps...?</p>
<p>How do I solve these (preferably analytically, but I'll also accept otherwise)?</p>
| E.H.E | 187,799 | <p>Hint:
$$2^x=\sum_{k=0}^{\infty }\frac{(x\log 2)^k}{k!}$$
and
$$\frac{x^{50}}{e^x}=\frac{50!\,x^{50}}{x^{50}+50!\left(1+x+\frac{x^2}{2!}+\dots+\frac{x^{49}}{49!}+\frac{x^{51}}{51!}+\dots\right)}$$</p>
|
404,472 | <p>Let $F$ and $F′$ be two finite fields with nine and four elements respectively.
How many field homomorphisms are there from $F$ to $F′$?</p>
| Jared | 65,034 | <p>Hint $1$: A homomorphism of fields is injective. Can you see why?</p>
<p>Hint $2$: Hint $1$ answers your question. Can you see why?</p>
|
123,918 | <p>Someone <a href="https://stackoverflow.com/questions/9851628/minimal-positive-number-divisible-to-n">asked this question</a> in SO:</p>
<blockquote>
<p><span class="math-container">$1\le N\le 1000$</span></p>
<p>How to find the minimal positive number, that is divisible by N, and
its digit sum should be equal to N.</p>
</blockquote>
<p>I'm wondering if for every integer <span class="math-container">$N$</span>, can we always find a positive number <span class="math-container">$q$</span> such that, it is dividable by <span class="math-container">$N$</span> and the sum of its digits is <span class="math-container">$N$</span>?</p>
| Monoide | 27,545 | <p>If $10$ doesn't divide $N$, we can define the set $A = \{(n=x_1x_2\cdots x_i0) : \sum\limits_{k} x_k = N\}$, which is a set containing at least one solution (if it exists) of the problem above (in particular the minimal one). The size of $A$ can be brutally bounded by $10^N$. It is easy to construct all elements of $A$ with an algorithm and then try them one by one.
In the other case, we can easily reduce the problem to the first case.</p>
<p>In fact, a good program will always find a solution (or report that none exists) in a finite number of steps.</p>
<p>The question of existence still resists me.</p>
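<p>For small $N$ the original SO question can be settled by brute force (a naive sketch of mine — it steps through multiples rather than enumerating the set $A$ above):</p>

```python
def min_multiple_with_digit_sum(n):
    """Smallest positive multiple of n whose decimal digit sum equals n.
    Naive search: fine for small n, impractical for large n."""
    k = n
    while True:
        if sum(int(d) for d in str(k)) == n:
            return k
        k += n

print(min_multiple_with_digit_sum(1))   # 1
print(min_multiple_with_digit_sum(10))  # 190
print(min_multiple_with_digit_sum(11))  # 209
```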
|
2,384,538 | <p>I am studying Linear Algebra Done Right, chapter 2 problem 6 states:</p>
<blockquote>
<p>Prove that the real vector space consisting of all continuous real valued functions on the interval $[0,1]$ is infinite dimensional.</p>
</blockquote>
<p><strong>My solution:</strong></p>
<p>Consider the sequence of functions $x, x^2, x^3, \dots$
This is a linearly independent infinite sequence of functions so clearly this space cannot have a finite basis.
However, this proof relies on the fact that no $x^n$ is a linear combination of the previous terms. In other words, is it possible for a polynomial of degree $n$ to be equal to a polynomial of degree less than $n$? I believe this is not possible, but does anyone know how to prove this? More specifically, could the following equation ever be true for all $x$?</p>
<p>$x^n = \sum\limits_{k=1}^{n-1} a_kx^k$ where each $a_k \in \mathbb R$</p>
| Rodrigo A. Pérez | 88,190 | <p>Suppose $x^n = \sum\limits_{k=1}^{n-1} a_kx^k$. Consider what happens when $x$ is much larger than $|a_1|+\ldots+|a_{n-1}|$. By the triangle inequality, you will get a contradiction. Essentially, you are showing that $x^n$ grows faster than any degree $n-1$ polynomial...</p>
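<p>The hint can be checked numerically: once $x > 1 + \sum_k |a_k|$, we have $x^n = x \cdot x^{n-1} > \left(\sum_k |a_k|\right) x^{n-1} \ge \left|\sum_k a_k x^k\right|$, so the two sides cannot agree for all $x$. A small sketch (the coefficients are my own arbitrary choice):</p>

```python
# Hypothetical coefficients a_1, ..., a_4 for n = 5; any choice works.
a = [3.0, -7.0, 2.5, 4.0]
n = len(a) + 1

# Pick x comfortably past the bound 1 + sum |a_k|.
x = 1 + sum(abs(c) for c in a) + 1
lhs = x ** n
rhs = abs(sum(c * x ** (k + 1) for k, c in enumerate(a)))
assert lhs > rhs > 0   # x^n strictly dominates the degree n-1 side
```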
|
1,970,305 | <p>I have just begun reading through Section 3.2 of Hatcher's Algebraic Topology. While I reasonably understood the computations relating to the cup product, I was unsure of the purpose of the cup product. From what I knew, it does not help us to compute cohomology groups, given that we need the cohomology groups to compute the cup product. </p>
<p>In a nutshell, why do we care about the cup product? </p>
| Curious Math Student | 333,346 | <p>I was able to figure it out. So I thought I would post it in case someone else searched for help on a similar problem!</p>
<p>Solve $6x+15y+10z = 53 \rightarrow 6x+5(3y+2z)=53.$ Let $w=3y+2z.$ Solve:</p>
<p>$$3y+2z=w\ (1)
\\6x+5w=53\ (2)$$</p>
<p>Solution to $(2)$: $\gcd(6,5) = 1.$ So forming a linear combo, get $6(1)+5(-1)=1$. Multiply by 53, get $6(53)+5(-53)=53$. So,
$$x=x_0+(b/d)n = 53+5n$$
$$w=w_0-(a/d)n = -53-6n.$$</p>
<p>Solution to $(1)$: First, solve $3y+2z=1.$ So, $\gcd(3,2)=1$; form linear combo, get $3(1)+2(-1)=1.$ Multiply by $w=-53-6n$ to get</p>
<p>$$3(w)+2(-w)=w \rightarrow 3(-53-6n)+2(53+6n)=-53-6n$$</p>
<p>Then, $y_0=-53-6n$ and $z_0=53+6n$. So,</p>
<p>$$y=y_0+(b/d)m =-53-6n+2m$$
$$z=z_0 -(a/d)m = 53+6n-3m $$</p>
<p>Similarly, you can let $w=2x+5y$ and you'll get the same answer. Hope this process makes some sense!</p>
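<p>The two-parameter family above can be checked mechanically (the function name is mine):</p>

```python
def solution(m, n):
    # The (m, n)-parametrized family derived above.
    x = 53 + 5 * n
    y = -53 - 6 * n + 2 * m
    z = 53 + 6 * n - 3 * m
    return x, y, z

# Every choice of parameters satisfies 6x + 15y + 10z = 53.
for m in range(-20, 21):
    for n in range(-20, 21):
        x, y, z = solution(m, n)
        assert 6 * x + 15 * y + 10 * z == 53
```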
|
2,280,052 | <p>Wolfram Alpha says:
$$i\lim_{x \to \infty} x = i\infty$$</p>
<p>I'm having a bit of trouble understanding what $i\infty$ means. In the long run, it seems that whatever gets multiplied by $\infty$ doesn't really matter. $\infty$ sort of takes over, and the magnitude of whatever is being multiplied is irrelevant. I.e., $\forall a \gt 0$:</p>
<p>$$a\lim_{x \to \infty} x = \infty, -a\lim_{x \to \infty} x = -\infty$$</p>
<p>What's so special about imaginary numbers? Why doesn't $\infty$ take over when it gets multiplied by $i$? Thanks.</p>
| Level River St | 137,034 | <p>Real numbers and imaginary numbers are different things. <code>∞</code> is different from <code>∞i</code>, just like infinity oranges and infinity bottles of juice are different things. There are operations that will convert one to the other, but that is beside the point.</p>
<p>I think a good way to see this is to see where the reals and imaginaries lie on the Argand Diagram (complex numbers and nonintegers omitted for brevity / clarity.)</p>
<p><a href="https://i.stack.imgur.com/D4Ly3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D4Ly3.png" alt="enter image description here" /></a></p>
|
1,317,610 | <p>Let $u = u(t,x)$ satisfy the PDE
$$
\frac{\partial u}{\partial t} = \frac{1}{2}c^2\frac{\partial^2 u}{\partial x^2} + (a + bx)\frac{\partial u}{\partial x} + f u,
$$
where $a,b,c,f \in \mathbb{R}$ are constant.</p>
<p>I'm aware of solution methods for when $c \propto x^2$ (so not constant) and $a = 0$, for which I would make the change of variables $x \mapsto \log x$ to make it constant coefficient, use the Fourier transform to make it an ODE and solve from there. This seemingly easier PDE has got me stumped, though, and I would appreciate a push in the right direction!</p>
| Brian Tung | 224,454 | <p>An alternative to Joffan's solution is to count up all the ways there could be exactly $k$ vowels (as suggested by André Nicolas). We then get</p>
<p>$$
N = \sum_{k=1}^8 \binom{8}{k} 5^k 21^{8-k}
$$</p>
<p>All the methods yield $N = 171004205215$, confirming the expression $26^8-21^8$ you originally derived.</p>
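<p>Both the sum and the complementary count can be confirmed with a short computation:</p>

```python
from math import comb

# Sum over words with exactly k vowels, k = 1..8 (5 vowels, 21 consonants).
n = sum(comb(8, k) * 5 ** k * 21 ** (8 - k) for k in range(1, 9))

# Agrees with the complement count: all words minus all-consonant words.
assert n == 26 ** 8 - 21 ** 8 == 171004205215
```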
|
202,247 | <p>I'm working on some problem in algebraic geometry. I need a reference to the following result:</p>
<p>Let $h\in\mathbb{N}$ with $h\geq1$ and let $F\in\mathbb{C}\left[x_{1},\ldots,x_{h}\right]$
be a non zero polynomial. The complement manifold $\mathbb{C}^{h}\setminus\left\lbrace F=0\right\rbrace$ is a
nonempty open connected subspace of $\mathbb{C}^{h}.$</p>
<p>Probably this is contained in some old work of Zariski (or even older). Please do not hesitate to suggest some bibliographical references.</p>
| Peter Michor | 26,935 | <p>$F$ is non zero, thus the complement is not empty. The regular part of the zero set, namely $\{z:F(z)=0, dF(z)\ne 0\}$, has complex codimension 1, thus real codimension 2; so the complement of this is connected. The singular part $\{z:F(z)=0, dF(z)=0\}$ is of higher codimension. There are only finitely many such parts, thus the complement is connected. </p>
|
1,255,334 | <p>A classmate and I are studying this following question from Stein-Shakarchi, Chapter 2, Exercise 12:</p>
<blockquote>
<p>Show that there are <span class="math-container">$f \in L^1(\mathbb{R}^d)$</span> and a sequence <span class="math-container">$\{f_n\}$</span> with <span class="math-container">$f_n \in L^1(\mathbb{R}^d)$</span> such that <span class="math-container">$$\|f-f_n\|_{L^1(\mathbb{R}^d)} \to 0,$$</span> but <span class="math-container">$f_n(x) \to f(x)$</span> for no <span class="math-container">$x$</span>.</p>
<p>[Hint: In <span class="math-container">$\mathbb{R}$</span>, let <span class="math-container">$f_n=\chi_{I_n}$</span>, where <span class="math-container">$I_n$</span> is an appropriately chosen sequence of intervals with <span class="math-container">$m(I_n)\to 0$</span>.]</p>
</blockquote>
<p>Our attempt:</p>
<p>First, I defined <span class="math-container">$I_n := [\frac{k-1}{2^n}, \frac{k}{2^n}]$</span> for all <span class="math-container">$k \in \mathbb{R}$</span>, so that <span class="math-container">$m(I_n)\to 0$</span> as <span class="math-container">$n \to \infty$</span>.
From these intervals, the sequence of functions is defined to be:
<span class="math-container">$$f_n := \chi_{I_n}(x) + \chi_{I_n}(-x)$$</span>
Then <span class="math-container">$f=0$</span>, and so
<span class="math-container">$$\int_{\mathbb{R}} |f_n - f| = \int_{\mathbb{R}} f_n = \frac{1}{2^{n}} +\frac{1}{2^{n}} \to 0.$$</span></p>
<p>But I am left to show that <span class="math-container">$f_n \to f$</span> for no <span class="math-container">$x$</span>. I do not see this from my example. But does this example work? If so, why does <span class="math-container">$f_n$</span> not converge to <span class="math-container">$f$</span> for any <span class="math-container">$x$</span>.</p>
| zhw. | 228,045 | <p>In one dimension, consider in order </p>
<p>$$\chi_{[0,1]}, \chi_{[0,1/2]}, \chi_{[1/2,1]},\chi_{[0,1/3]},\chi_{[1/3,2/3]},\chi_{[2/3,1]}, \dots $$</p>
<p>This sequence $\to 0$ in $L^1,$ and pointwise nowhere.</p>
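<p>A small sketch of why this works (the cutoff of 50 blocks is arbitrary): every $x \in [0,1]$ lies inside at least one interval $[j/k, (j+1)/k]$ of each block and outside at least one for every $k \ge 2$, so the indicator sequence at $x$ takes both values infinitely often, while the interval lengths $1/k \to 0$ give the $L^1$ convergence.</p>

```python
from fractions import Fraction

def intervals(max_k):
    # The "typewriter" intervals, block by block: [j/k, (j+1)/k].
    for k in range(1, max_k + 1):
        for j in range(k):
            yield Fraction(j, k), Fraction(j + 1, k)

x = Fraction(1, 3)  # any point of [0, 1] behaves the same way
hits = sum(1 for a, b in intervals(50) if a <= x <= b)
misses = sum(1 for a, b in intervals(50) if not (a <= x <= b))
assert hits >= 50 and misses >= 49  # indicator at x is 1 and 0 again and again
```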
|
3,276,572 | <p>Let be <span class="math-container">$\lVert \cdot \rVert$</span> a matrix norm (submultiplicative).</p>
<p>Do we have for all matrices of determinant 1, the following lower bound:</p>
<p><span class="math-container">$$\lVert M \rVert \geq 1$$</span></p>
<p>I'm very confused and could not find any counterexample and I find this statement very fishy, I tried to experiment with:</p>
<p><span class="math-container">\begin{bmatrix}
1& x \\
0& 1
\end{bmatrix}</span></p>
<p>But, its Frobenius norm cannot be small enough.</p>
| José Carlos Santos | 446,262 | <p>If <span class="math-container">$\lVert M\rVert<1$</span>, then, if <span class="math-container">$B$</span> is the closed unit ball, the volume of <span class="math-container">$M(B)$</span> will be smaller than the volume of <span class="math-container">$B$</span>. But that cannot happen because, since <span class="math-container">$\det M=1$</span>, the map <span class="math-container">$v\mapsto M.v$</span> must preserve volumes.</p>
|
3,276,572 | <p>Let be <span class="math-container">$\lVert \cdot \rVert$</span> a matrix norm (submultiplicative).</p>
<p>Do we have for all matrices of determinant 1, the following lower bound:</p>
<p><span class="math-container">$$\lVert M \rVert \geq 1$$</span></p>
<p>I'm very confused and could not find any counterexample and I find this statement very fishy, I tried to experiment with:</p>
<p><span class="math-container">\begin{bmatrix}
1& x \\
0& 1
\end{bmatrix}</span></p>
<p>But, its Frobenius norm cannot be small enough.</p>
| Theo Bendit | 248,286 | <p><a href="https://math.stackexchange.com/questions/2855044/why-is-the-norm-of-a-matrix-larger-than-its-eigenvalue/">The norm of a matrix is larger than its eigenvalues</a>. The determinant is the product of its eigenvalues. So, if the determinant of <span class="math-container">$M$</span> is <span class="math-container">$1$</span>, there must be at least one eigenvalue <span class="math-container">$\lambda$</span> such that <span class="math-container">$|\lambda| \ge 1$</span>, which implies
<span class="math-container">$$\|M\| \ge |\lambda| \ge 1,$$</span>
regardless of the matrix norm used.</p>
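<p>A quick numerical sanity check with the shear matrix from the question, using the Frobenius and spectral norms as two submultiplicative examples:</p>

```python
import numpy as np

M = np.array([[1.0, 100.0], [0.0, 1.0]])   # det M = 1, large shear
assert abs(np.linalg.det(M) - 1.0) < 1e-9

# Both norms are submultiplicative, and both are bounded below by 1.
for norm in (np.linalg.norm(M, 'fro'), np.linalg.norm(M, 2)):
    assert norm >= 1.0
```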
|
3,324,647 | <p>Say you have the following matrix A in <span class="math-container">$R^2 \rightarrow R^2$</span>:</p>
<p><span class="math-container">$
\begin{bmatrix}
7 & -10 \\
5 & -8
\end{bmatrix}
$</span></p>
<p>Thus the eigenvalues/eigenvectors are: 2 <span class="math-container">$\begin{bmatrix} 2 \\ 1 \end{bmatrix}$</span> and -3 <span class="math-container">$\begin{bmatrix} 1 \\ 1 \end{bmatrix}$</span>.</p>
<p>Thus the eigenspace matrix is <span class="math-container">$\begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}$</span>. </p>
<p>Say you have the vector v(x,y) of (2,3), thus Ax = [-16, -14]. </p>
<p>I'm confused as to how does the eigenspace and eigenvalues allow me to easily see what A is doing to the vector (2,3)? </p>
<p>How do I apply the eigenvalues/eigenspace on vector v(2,3) to see what A is doing to it?</p>
| angryavian | 43,949 | <p>You need to write <span class="math-container">$(2,3)$</span> as a linear combination of the eigenvectors.</p>
<p>In this case, <span class="math-container">$(2,3) = -(2,1) + 4 (1,1)$</span>, so
<span class="math-container">$$A \begin{bmatrix}2 \\ 3 \end{bmatrix} = - A \begin{bmatrix}2 \\ 1 \end{bmatrix} + 4 A \begin{bmatrix}1\\ 1 \end{bmatrix}$$</span>
and then you can use the fact that these are eigenvectors to easily complete the computation and understand how the "stretching" along the two "eigen-directions" describes the action of <span class="math-container">$A$</span> on <span class="math-container">$(2,3)$</span>.</p>
<p>More generally, you should read about diagonalizing <span class="math-container">$A$</span>.</p>
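<p>The decomposition can be verified with NumPy:</p>

```python
import numpy as np

A = np.array([[7.0, -10.0], [5.0, -8.0]])
v1 = np.array([2.0, 1.0])   # eigenvector with eigenvalue 2
v2 = np.array([1.0, 1.0])   # eigenvector with eigenvalue -3
v = np.array([2.0, 3.0])

# (2,3) = -1*(2,1) + 4*(1,1), so A acts by scaling each eigen-direction:
assert np.allclose(v, -1 * v1 + 4 * v2)
assert np.allclose(A @ v, -1 * 2 * v1 + 4 * (-3) * v2)
assert np.allclose(A @ v, [-16.0, -14.0])
```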
|
4,114,180 | <p>The Theorem is as follows:</p>
<p>For any numbers x and y, the following statements are true:</p>
<ol>
<li><span class="math-container">$|x|<y$</span> if and only if <span class="math-container">$-y<x<y$</span></li>
<li><span class="math-container">$|x|\leq{y}$</span> if and only if <span class="math-container">$-y\leq{x}\leq{y}$</span></li>
<li><span class="math-container">$|x|\geq{y}$</span> if and only if either <span class="math-container">$x\leq{-y}$</span> or <span class="math-container">$x\geq{y}$</span></li>
<li><span class="math-container">$|x|>y$</span> if and only if either <span class="math-container">$x<{-y}$</span> or <span class="math-container">$x>{y}$</span></li>
</ol>
<p>Here's my progress so far,
<span class="math-container">$2|x|-3\geq{|x-1|}\\
\implies 2\left(|x|-\frac{3}{2}\right)\geq{|x-1|}\\
\implies |x|\geq{\frac{|x-1|}{2}+\frac{3}{2}}\\
\implies |x|\geq{\frac{|x-1|+3}{2}} \text{ can now use part 3.}\\
\implies x\leq{\frac{-|x-1|-3}{2}} \lor x\geq{\frac{|x-1|+3}{2}}\\$</span></p>
<p>I had this idea of replacing <span class="math-container">$|x-1|$</span> with 1 since it is the distance from 1 to x on the number line, but that is flawed since we could've done this initially and gotten the answer. I'm not sure where to go from here. Maybe I could've applied part 2 of the proof to the beginning of <span class="math-container">$|x-1|$</span> but then I'd arrive at <span class="math-container">$-(2|x|-3)\leq{|x-1|}\leq{2|x|-3}$</span> which doesn't help me at all. Any tips or hints?</p>
| user0102 | 322,814 | <p>First, let us consider that <span class="math-container">$x\geq 1$</span>. Then it results that
<span class="math-container">\begin{align*}
2|x| - 3 \geq |x-1| & \Longleftrightarrow 2x - 3 \geq x - 1\\\\
& \Longleftrightarrow x \geq 2
\end{align*}</span></p>
<p>Thus the first solution set is given by <span class="math-container">$S_{1} = [2,\infty)$</span>.</p>
<p>Second, let us consider that <span class="math-container">$0\leq x\leq 1$</span>. Then it results that
<span class="math-container">\begin{align*}
2|x| - 3 \geq |x-1| & \Longleftrightarrow 2x - 3 \geq 1 - x\\\\
& \Longleftrightarrow 3x \geq 4\\\\
& \Longleftrightarrow x\geq \frac{4}{3}
\end{align*}</span></p>
<p>Thus the second solution set is given by <span class="math-container">$S_{2} = \varnothing$</span>.</p>
<p>Finally, we can consider the case where <span class="math-container">$x\leq 0$</span>. Then it results that</p>
<p><span class="math-container">\begin{align*}
2|x| - 3 \geq |x-1| & \Longleftrightarrow -2x - 3 \geq 1 - x\\\\
& \Longleftrightarrow x\leq -4
\end{align*}</span></p>
<p>Hence the third solution set is given by <span class="math-container">$S_{3} = (-\infty,-4]$</span>.</p>
<p>Gathering all results, we arrive at the solution set <span class="math-container">$S = S_{1}\cup S_{2}\cup S_{3} = (-\infty,-4]\cup[2,+\infty)$</span>.</p>
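<p>The solution set can be sanity-checked on a grid:</p>

```python
def holds(x):
    # The original inequality 2|x| - 3 >= |x - 1|.
    return 2 * abs(x) - 3 >= abs(x - 1)

# Compare against S = (-inf, -4] U [2, inf) at steps of 0.1.
for i in range(-200, 201):
    x = i / 10
    assert holds(x) == (x <= -4 or x >= 2)
```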
<p>Hopefully this helps!</p>
|
1,946,881 | <p>Looking around I have found lots of material on continuous time Markov processes on finite or countable state spaces, say $\{0,1,\ldots,J\}$ for some $J\in\mathbb{N}$ or just $\mathbb{N}$. Similarly I have earlier worked with (discrete time) Markov chains on general state spaces, following the modern classic by Meyn & Tweedie. </p>
<p>My question concerns monographs on continuous time Markov processes on general state spaces, say some subset of $\mathbb{R}^k$, $k\in\mathbb{N}$. Are there any good references - preferably but not necessarily suited for an ambitious master's student - on this topic?</p>
| Sean Roberson | 171,839 | <p>You've actually defined an <strong>identity element</strong>, not an inverse.</p>
<p>Anyway, in your second case, you're saying that the choice of $e$ is dependent upon the chosen $x$, so the identity is not unique. The first says you can find an $e$ for which any $x$ will yield $xe = ex = x$. This $e$ works for any choice of $x \in G.$</p>
|
2,655,518 | <p>$2ac=bc$
find the ratio ( $K$ )
what is the ratio of their area?
<a href="https://i.stack.imgur.com/9NPRi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9NPRi.png" alt="enter image description here"></a>I found that it is $2$ or $1/2$.
Is that correct? </p>
<p>If the question isn't clear, please notify me and I will make an effort to make it understandable.</p>
| Emilio Novati | 187,568 | <p>Hint:</p>
<p>By <a href="https://en.wikipedia.org/wiki/Square_root#Properties_and_uses" rel="nofollow noreferrer">definition, in the real numbers,</a> $\sqrt{25}=+5$ (the <strong>positive</strong> number such that its square is $25$). </p>
<p>And the same for any root of even index.</p>
<p>Note that if we define:</p>
<p>$b=\{$a real number such that $b^2=25\}$ than $b$ can have the two different values $b=\pm5$ and we cannot write an identity as $a=b$ or $c=b$.</p>
|
1,802,497 | <p>If $f(x)=2 [x]+\cos x$
Then $f:R \to R$ is: </p>
<p>$(A)$ One-One and onto</p>
<p>$(B)$ One-One and into</p>
<p>$(C)$ Many-One and into</p>
<p>$(D)$ Many-One and onto</p>
<p>$[ .]$ represent floor function (also known as greatest integer function
)</p>
<p>Clearly $f(x)$ is into as $2[x]$ is an even integer and $f(x)$ will not be able to achieve every real number.</p>
<p>The answer says option $(C)$ is correct, but I cannot see why $f(x)$ is many-one, as it does not look like $f(x)$ takes the same value at any two values of $x$.</p>
<p>e.g. $f(x)= [x]+\cos x$, then $f(0)=f(\frac{\pi}{2})=1$ making the function many-one but can't see it happening for $f(x)= 2[x]+\cos x$</p>
<p>Could someone help me with this?</p>
| Eff | 112,061 | <p>You are right with respect to surjectiveness (it is not onto).</p>
<p><strong>Hint:</strong></p>
<p>For injectiveness (one to one), look in a neighbourhood around $x = 3\pi$ for example.</p>
|
1,802,497 | <p>If $f(x)=2 [x]+\cos x$
Then $f:R \to R$ is: </p>
<p>$(A)$ One-One and onto</p>
<p>$(B)$ One-One and into</p>
<p>$(C)$ Many-One and into</p>
<p>$(D)$ Many-One and onto</p>
<p>$[ .]$ represent floor function (also known as greatest integer function
)</p>
<p>Clearly $f(x)$ is into as $2[x]$ is an even integer and $f(x)$ will not be able to achieve every real number.</p>
<p>The answer says option $(C)$ is correct, but I cannot see why $f(x)$ is many-one, as it does not look like $f(x)$ takes the same value at any two values of $x$.</p>
<p>e.g. $f(x)= [x]+\cos x$, then $f(0)=f(\frac{\pi}{2})=1$ making the function many-one but can't see it happening for $f(x)= 2[x]+\cos x$</p>
<p>Could someone help me with this?</p>
| copper.hat | 27,978 | <p>Note that $\pi \in (3,4)$ hence $f$ has a strict local minimum on $[3,4]$ at
$\pi$. It follows that $f$ is not injective.</p>
<p>Note that if $x \ge 0$ then $f(x) >0$ and if $x< 0$ then $f(x) <0$. Hence $0$ is not in the range and so it is not surjective.</p>
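<p>Both observations can be illustrated numerically: points placed symmetrically about $\pi$ inside $[3,4)$ take equal values, and $0$ is never attained.</p>

```python
import math

def f(x):
    return 2 * math.floor(x) + math.cos(x)

# Not injective: cos(pi - t) = cos(pi + t), and both points stay in [3, 4).
a, b = math.pi - 0.1, math.pi + 0.1
assert a != b and abs(f(a) - f(b)) < 1e-12

# Not surjective: f > 0 for x >= 0 and f < 0 for x < 0, so 0 is missed.
assert f(0) == 1.0 and f(-0.5) < 0
```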
|
1,077,284 | <p>I am trying to find the equation of a 3D surface as illustrated below. The boundaries of this surface is comprised of two planar elliptical arcs $AB$ and $AC$ as well as a 3D arc $BC$ which is a 3D curve on an elliptical surface described nicely in <a href="https://math.stackexchange.com/a/1075515/62050">this post</a>. Could someone kindly help me how this surface bounded by $AB$, $AC$, and $BC$ can be put into an equation? Thanks in advance.</p>
<p><img src="https://i.stack.imgur.com/wG2XK.png" alt="enter image description here"></p>
| dindoun | 202,614 | <p>I found $\int\limits^1_0 t^5(t^2-1)^2 dt = \frac{1}{60}$ with the same use of the theorem than JimmyK4542</p>
|
1,463,419 | <p>A letter has come from exclusively LONDON or CLIFTON, but on the postmark only $2$ consecutive letters ''ON'' are found to be visible. What is the probability that the letter came from LONDON?</p>
<hr>
<p>This is a question of conditional probability. Let $A$ be the event that the letter has come from LONDON. Let $B$ be the event that consecutive letters ''ON'' are found to be visible. $A\cap B$ is the event that the letter has come from LONDON and consecutive letters ''ON'' are visible. We have to find $P(A\mid B)
=\frac{P(A\cap B)}{P(B)}$.</p>
<p>But then i am stuck. Please help me. Thanks.</p>
| Graham Kemp | 135,106 | <p>You wish to find the posterior probability that the word is <code>LONDON</code> given that the letters <code>ON</code> are visible. This is: $\mathsf P(A\mid B)$.</p>
<p>You should be able to evaluate the conditional probability that the letters <code>ON</code> are visible given that the word is <code>LONDON</code>. This is: $\mathsf P(B\mid A)$.</p>
<p>You should also be able to evaluate the conditional probability that the letters <code>ON</code> are visible given that the word is <code>CLIFTON</code>. This is: $\mathsf P(B\mid \neg A)$.</p>
<p>You also need to assign the prior probabilities that the envelop is from <code>LONDON</code> or <code>CLIFTON</code>. $\mathsf P(A), \mathsf P(\neg A)$</p>
<p>$$\mathsf P(A\mid B) = \frac{\mathsf P(B\mid A)\,\mathsf P(A)}{\mathsf P(B\mid A)\,\mathsf P(A)+\mathsf P(B\mid \neg A)\,\mathsf P(\neg A)}$$</p>
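<p>Carrying the calculation through requires modelling choices the formula deliberately leaves open. Assuming equal priors, and that the visible pair is a uniformly chosen pair of consecutive letters of the word, one gets the classic value $12/17$:</p>

```python
from fractions import Fraction

def p_on(word):
    # P("ON" visible | word), under the uniform-consecutive-pair assumption.
    pairs = [word[i:i + 2] for i in range(len(word) - 1)]
    return Fraction(pairs.count("ON"), len(pairs))

p_A = Fraction(1, 2)            # assumed prior P(from LONDON)
p_B_A = p_on("LONDON")          # 2 of the 5 consecutive pairs are "ON"
p_B_notA = p_on("CLIFTON")      # 1 of the 6 consecutive pairs is "ON"

posterior = p_B_A * p_A / (p_B_A * p_A + p_B_notA * (1 - p_A))
assert posterior == Fraction(12, 17)
```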
|
46,905 | <p>I need to draw a set of curves on one graph (characteristics equations). As you can see they have exchanged x and y axes. My goal is to plot all those curves on one graph. Are there ways to do that? </p>
<pre><code>f[t_, t0_] := -(2 - 4/Pi*ArcTan[2])*Exp[-t]*(t - t0);
g[x_, x0_] := (x - x0)/(-(2 - 4/Pi*ArcTan[x + 2]));
Show[Table[Plot[f[t, t0], {t, 0, 1},
PlotRange -> {0, -0.3},
AxesLabel -> {t, x}], {t0, 0, 1, 0.1}]]
</code></pre>
<p><img src="https://i.stack.imgur.com/6BDw9.jpg" alt="enter image description here"></p>
<pre><code>Show[
Table[
Plot[g[x, x0], {x, 0, -0.3}, PlotRange -> {0, 1}, AxesLabel -> {x, t}],
{x0, 0, -0.3, -0.05}]]
</code></pre>
<p><img src="https://i.stack.imgur.com/F9h7Q.jpg" alt="enter image description here"></p>
| qwerty | 5,861 | <p>Another way:</p>
<pre><code>f[t_, t0_] := -(2 - 4/Pi*ArcTan[2])*Exp[-t]*(t - t0);
g[x_, x0_] := (x - x0)/(-(2 - 4/Pi*ArcTan[x + 2]));
curveset1 = Show[Table[ Plot[f[t, t0], {t, 0, 1}, PlotRange -> {0, -0.3}],
{t0, 0, 1, 0.1}]] // First;
curveset2 = Show[Table[ Plot[g[x, x0], {x, 0, -0.3}, PlotRange -> {0, 1}],
{x0, 0, -0.3, -0.05}]] // First;
Graphics[{
GeometricTransformation[Scale[curveset1, {1, 10/3}, {0, 0}],
TranslationTransform[{0, 1}]],
GeometricTransformation[Scale[curveset2, {10/3, 1}, {0, 0}],
TranslationTransform[{1, 0}]]},
Frame -> True, FrameTicks -> {#, Reverse@#} &@{Range[0, 1, 0.2],
{#, NumberForm[N@(3/10) (# - 1), {5, 2}]} & /@ Range[0, 1, 1/6]}]
</code></pre>
<p><img src="https://i.stack.imgur.com/kd61n.png" alt="enter image description here"></p>
|
4,278,763 | <p>Let <span class="math-container">$X_1,...$</span> be a sequence of independent and identically distributed random variables with mean <span class="math-container">$0$</span> and variance <span class="math-container">$\sigma^2$</span>. Let <span class="math-container">$S_n=\sum^n_{i=1}X_i$</span> and show that <span class="math-container">$\{Z_n,n\geq1\}$</span> is a martingale when <span class="math-container">$$Z_n=S_n^2-n\sigma^2$$</span></p>
<p>I know I need to show that <span class="math-container">$E(Z_{n+1}|\mathcal{F}_n)=Z_n$</span> to prove that <span class="math-container">$Z_n$</span> is a martingale. I know that if <span class="math-container">$Z_n=\sum^n_{i=1}X_i$</span>, then <span class="math-container">$\{Z_n\}$</span> is a martingale. We show this by the following calculations:
<span class="math-container">\begin{align}
E(Z_{n+1}|\mathcal{F}_n)&=E(Z_n+X_{n+1}|\mathcal{F}_n)\\
&=E(Z_n|\mathcal{F}_n)+E(X_{n+1}|\mathcal{F}_n)\\
&=Z_n+E(X_{n+1})\\
&=Z_n+0=Z_n
\end{align}</span>
I can see that this problem is analogous to that problem, but I am not sure how to go about it. Could I get some pointers? Thank you.</p>
| angryavian | 43,949 | <ul>
<li>Prove that <span class="math-container">$S_{n+1}^2 = S_n^2 + 2X_{n+1} S_n + X_{n+1}^2$</span>.</li>
<li>Show that <span class="math-container">$E[S_{n+1}^2 \mid \mathcal{F}_n] = S_n^2 + \sigma^2$</span>.</li>
</ul>
|
4,278,763 | <p>Let <span class="math-container">$X_1,...$</span> be a sequence of independent and identically distributed random variables with mean <span class="math-container">$0$</span> and variance <span class="math-container">$\sigma^2$</span>. Let <span class="math-container">$S_n=\sum^n_{i=1}X_i$</span> and show that <span class="math-container">$\{Z_n,n\geq1\}$</span> is a martingale when <span class="math-container">$$Z_n=S_n^2-n\sigma^2$$</span></p>
<p>I know I need to show that <span class="math-container">$E(Z_{n+1}|\mathcal{F}_n)=Z_n$</span> to prove that <span class="math-container">$Z_n$</span> is a martingale. I know that if <span class="math-container">$Z_n=\sum^n_{i=1}X_i$</span>, then <span class="math-container">$\{Z_n\}$</span> is a martingale. We show this by the following calculations:
<span class="math-container">\begin{align}
E(Z_{n+1}|\mathcal{F}_n)&=E(Z_n+X_{n+1}|\mathcal{F}_n)\\
&=E(Z_n|\mathcal{F}_n)+E(X_{n+1}|\mathcal{F}_n)\\
&=Z_n+E(X_{n+1})\\
&=Z_n+0=Z_n
\end{align}</span>
I can see that this problem is analogous to that problem, but I am not sure how to go about it. Could I get some pointers? Thank you.</p>
| James Anderson | 784,963 | <p>Firstly, <span class="math-container">$S_{n+1}^2=(S_n+X_{n+1})^2=S_n^2+2S_nX_{n+1}+X_{n+1}^2$</span></p>
<p>Then
<span class="math-container">\begin{align}
E[Z_{n+1}-Z_n|\mathcal{F}_n]&=E[S_n^2+2S_nX_{n+1}+X_{n+1}^2-(n+1)\sigma^2-(S_n^2-n\sigma^2)|\mathcal{F}_n]\\
&=E[2S_nX_{n+1}+X_{n+1}^2-\sigma^2|\mathcal{F}_n]\\
&=2E[S_nX_{n+1}|\mathcal{F}_n]+E[X_{n+1}^2|\mathcal{F}_n]-\sigma^2\\
&=2S_nE[X_{n+1}]+E[X_{n+1}^2]-\sigma^2\\
&=2S_n\cdot0+(\text{Var}(X_{n+1})+E[X_{n+1}]^2)-\sigma^2\\
&=0
\end{align}</span>
The last steps use independence (so the conditional expectations reduce to ordinary ones), <span class="math-container">$E[X_{n+1}]=0$</span>, and <span class="math-container">$E[X_{n+1}^2]=\text{Var}(X_{n+1})+E[X_{n+1}]^2=\sigma^2$</span>.</p>
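<p>The identity can be checked exactly in the simplest case <span class="math-container">$X_i=\pm 1$</span> (so <span class="math-container">$\sigma^2=1$</span>): averaging over <span class="math-container">$X_{n+1}=\pm 1$</span> gives <span class="math-container">$\tfrac{1}{2}\big((S+1)^2+(S-1)^2\big)=S^2+1$</span>.</p>

```python
from itertools import product

# One-step martingale identity, exactly: E[Z_{n+1} | S_n = S] = Z_n.
for n in range(6):
    for S in range(-n, n + 1):
        z_now = S * S - n
        z_next_avg = ((S + 1) ** 2 + (S - 1) ** 2) // 2 - (n + 1)
        assert z_next_avg == z_now

# Exhaustively over all 2^10 equally likely sign paths: E[Z_10] = 0.
total = sum(sum(signs) ** 2 - 10 for signs in product((-1, 1), repeat=10))
assert total == 0
```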
|
1,765,530 | <p>How many $5$-digit numbers (including leading $0$'s) are there with no digit appearing exactly $2$ times? The solution is supposed to be derived using Inclusion-Exclusion.</p>
<p>Here is my attempt at a solution:</p>
<p>Let $A_0$= sequences where there are two $0$'s that appear in the sequence.</p>
<p>...</p>
<p>$A_{9}$=sequences where there are two $9$'s that appear in the sequence.</p>
<p>I want the intersection of $A_0^{'}A_1^{'}...A_9^{'}$= $N-S_1+S_2$ because you can only have at most two digits who are used exactly two times each in a $5$ digit sequence.</p>
<p>$N=10^5$, $S_1=10\cdot \binom{5}{2}\cdot[9+9\cdot 8 \cdot 7]$, and $S_2=10 \cdot 9 \cdot \binom{5}{4} \cdot8$.</p>
<p>The $S_1$ term comes from selecting which of the ten digits to use twice, selecting which two places those two digits take, and then either having the same digit used three times for the other three places, or having different digits used for the other three digits.</p>
<p>The $S_2$ term comes from selecting which two digits are used twice, selecting where those four digits go, and then having eight choices for the remaining spot.</p>
<p>So my answer becomes $10^5 -10 \cdot \binom{5}{2} \cdot [9+9 \cdot 8 \cdot 7]+10 \cdot 9 \cdot \binom{5}{4} \cdot 8$.</p>
<p>Am I doing this correctly?</p>
| Marko Riedel | 44,883 | <p>Suppose we ask about $N$ digit numbers including leading zeros where
no digit appears exactly two times.</p>
<p>The species of set partitions with no two-element sets is
$$\mathfrak{P}(\mathcal{U}\mathfrak{P}_{\ne 2, \ge 1}(\mathcal{Z})).$$</p>
<p>This yields the generating function</p>
<p>$$\sum_{n\ge 0} {n\brace k}_{\ne 2}\, \frac{z^n}{n!} =
\frac{(\exp(z)-z^2/2-1)^k}{k!}.$$</p>
<p>We get as the answer for the problem</p>
<p>$$\sum_{q=1}^N {10\choose q} {N\brace q}_{\ne 2} q!$$</p>
<p>which simplifies in terms of ordinary Stirling numbers of the second
kind to</p>
<p>$$N! [z^N] \sum_{q=1}^N {10\choose q}
(\exp(z)-z^2/2-1)^q
\\ = N! [z^N] \sum_{q=1}^N {10\choose q}
\sum_{p=0}^q {q\choose p} (\exp(z)-1)^{q-p} (-1)^p \frac{z^{2p}}{2^p}
\\ = N! \sum_{q=1}^N {10\choose q}
\sum_{p=0}^{\min(q,\lfloor N/2\rfloor)}
{q\choose p} [z^{N-2p}] (\exp(z)-1)^{q-p} \frac{(-1)^p}{2^p}
\\ = N! \sum_{q=1}^N {10\choose q}
\sum_{p=0}^{\min(q,\lfloor N/2\rfloor)}
{q\choose p} \frac{(-1)^p}{2^p} \frac{(q-p)!}{(N-2p)!}
{N-2p\brace q-p}.$$</p>
<p>This yields the sequence</p>
<p>$$10, 90, 730, 5410, 37900, 264250,
1908910, 14322520, \\ 110305720,
875799550, 7203731050, 60866700940, 527138423380,
\\ 4696469283970, 42797376850150, \ldots$$</p>
<p><strong>Remark.</strong> Starting from the alternate species</p>
<p>$$\mathfrak{S}_{=10}(\mathfrak{P}_{\ne 2}(\mathcal{Z}))$$</p>
<p>we obtain the closed form</p>
<p>$$N! [z^N] (\exp(z)-z^2/2)^{10}.$$</p>
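<p>The first few terms of the sequence given above can be confirmed by brute force over all $10^N$ strings:</p>

```python
from itertools import product
from collections import Counter

def no_digit_exactly_twice(N):
    # Count length-N digit strings in which no digit occurs exactly twice.
    return sum(
        1
        for digits in product(range(10), repeat=N)
        if 2 not in Counter(digits).values()
    )

assert [no_digit_exactly_twice(N) for N in range(1, 6)] == [10, 90, 730, 5410, 37900]
```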
|
2,865,122 | <p><a href="http://math.sfsu.edu/beck/complex.html" rel="nofollow noreferrer">A First Course in Complex Analysis by Matthias Beck, Gerald Marchesi, Dennis Pixton, and Lucas Sabalka</a> Exer 3.8</p>
<blockquote>
<p>Suppose <span class="math-container">$f$</span> is holomorphic in region <span class="math-container">$G$</span>, and <span class="math-container">$f(G) \subseteq \{ |z|=1 \}$</span>. Prove <span class="math-container">$f$</span> is constant.</p>
</blockquote>
<p>(I guess we may assume <span class="math-container">$f: G \to \mathbb C$</span> s.t. image(f)=<span class="math-container">$f(G)$</span>. I guess it doesn't matter if we somehow have <span class="math-container">$f: A \to \mathbb C$</span> for any <span class="math-container">$A$</span> s.t. <span class="math-container">$G \subseteq A \subseteq \mathbb C$</span> as long as <span class="math-container">$G$</span> is a region and <span class="math-container">$f$</span> is holo there.)</p>
<p>I will now attempt to elaborate the following proof at a <a href="https://math.oregonstate.edu/%7Edeleenhp/teaching/winter17/MTH483/hw2-sol.pdf" rel="nofollow noreferrer">Winter 2017 course in Oregon State University</a>.</p>
<p><strong>Question 1</strong>: For the following elaboration of the proof, what errors if any are there?</p>
<p><strong>Question 2</strong>: Are there more elegant ways to approach this? I have a feeling this can be answered with Ch2 only, i.e. Cauchy-Riemann or differentiation/holomorphic properties instead of having to use Möbius transformations.</p>
<blockquote>
<p>OSU Pf (slightly paraphrased): Let <span class="math-container">$g(z)=\frac{1+z}{1-z}$</span>, and define <span class="math-container">$h(z)=g(f(z)), z \in G \setminus \{z : f(z) = 1\}$</span>. Then <span class="math-container">$h$</span> is holomorphic on its domain, and <span class="math-container">$h$</span> is imaginary valued by Exer 3.7. By a variation of Exer 2.19, <span class="math-container">$h$</span> is constant. QED</p>
</blockquote>
<p>My (elaboration of OSU) Pf: <span class="math-container">$\because f(G) \subseteq C[0,1]$</span>, let's consider the Möbius transformation in the preceding Exer 3.7 <span class="math-container">$g: \mathbb C \setminus \{z = 1\} \to \mathbb C$</span> s.t. <span class="math-container">$g(z) := \frac{1+z}{1-z}$</span>:</p>
<p>If we plug in <span class="math-container">$C[0,1] \setminus \{1\}$</span> in <span class="math-container">$g$</span>, then we'll get the imaginary axis by Exer 3.7. Precisely: <span class="math-container">$$g(\{e^{it}\}_{t \in \mathbb R \setminus \{0\}}) = \{is\}_{s \in \mathbb R}. \tag{1}$$</span> Now, define <span class="math-container">$G' := G \setminus \{z \in G | f(z) = 1 \}$</span> and <span class="math-container">$h: G' \to \mathbb C$</span> s.t. <span class="math-container">$h := g \circ f$</span> s.t. <span class="math-container">$h(z) = \frac{1+f(z)}{1-f(z)}$</span>. If we plug in <span class="math-container">$G'$</span> in <span class="math-container">$h$</span>, then we'll get the imaginary axis. Precisely: <span class="math-container">$$h(G') := \frac{1+f(G')}{1-f(G')} \stackrel{(1)}{=} \{is\}_{s \in \mathbb R}. \tag{2}$$</span></p>
<p>Now Exer 2.19 says that a real valued holomorphic function over a region is constant: <span class="math-container">$f(z)=u(z) \implies u_x=0=u_y \implies f'=0$</span> to conclude by Thm 2.17 that <span class="math-container">$f$</span> is constant or simply by partial integration that <span class="math-container">$u$</span> is constant. Actually, an imaginary valued holomorphic function over a region is constant too: <span class="math-container">$f(z)=iv(z) \implies v_x=0=v_y \implies f'=0$</span> again by Cauchy-Riemann Thm 2.13 to conclude by Thm 2.17 that <span class="math-container">$f$</span> is constant or simply by partial integration that <span class="math-container">$v$</span> is constant.</p>
<p><span class="math-container">$(2)$</span> precisely says that <span class="math-container">$h$</span> is imaginary valued over <span class="math-container">$G'$</span>. <span class="math-container">$\therefore,$</span> if <span class="math-container">$G'$</span> is a region (A) and if <span class="math-container">$h$</span> is holomorphic on <span class="math-container">$G'$</span> (B), then <span class="math-container">$h$</span> is constant on <span class="math-container">$G'$</span> with value I'll denote <span class="math-container">$Hi, H \in \mathbb R$</span>:</p>
<p><span class="math-container">$\forall z \in G',$</span></p>
<p><span class="math-container">$$Hi = \frac{1+f(z)}{1-f(z)} \implies f(z) = \frac{Hi-1}{Hi+1}, \tag{3}$$</span></p>
<p>where <span class="math-container">$Hi+1 \ne 0 \forall H \in \mathbb R$</span>.</p>
<p><span class="math-container">$\therefore, f$</span> is constant on <span class="math-container">$G'$</span> (Q4) with value given in <span class="math-container">$(3)$</span>.</p>
<p>QED except possibly for (C)</p>
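<p>A quick numerical sanity check of the value in $(3)$ (my own addition, not part of the OSU proof): for any real $H$ we have $|Hi-1| = \sqrt{H^2+1} = |Hi+1|$, so the constant found in $(3)$ really does land on the unit circle, as it must.</p>

```python
# |(Hi - 1)/(Hi + 1)| should equal 1 for every real H
values = [(H * 1j - 1) / (H * 1j + 1) for H in (-3.0, -0.5, 0.0, 1.0, 7.25)]
```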
<hr />
<blockquote>
<p>(A) <span class="math-container">$G'$</span> is a region</p>
</blockquote>
<p>I guess if <span class="math-container">$G \setminus G'$</span> is finite, then <span class="math-container">$G'$</span> is a region. I'm thinking <span class="math-container">$D[0,1]$</span> is a region and then <span class="math-container">$D[0,1] \setminus \{0\}$</span> is still a region.</p>
<blockquote>
<p>(B) To show <span class="math-container">$h$</span> is holomorphic in <span class="math-container">$G'$</span>:</p>
</blockquote>
<p>Well <span class="math-container">$h(z)$</span> is differentiable <span class="math-container">$\forall z \in G'$</span> and <span class="math-container">$f(z) \ne 1 \forall z \in G'$</span> and <span class="math-container">$f'(z)$</span> exists in <span class="math-container">$G' \subseteq G$</span> because <span class="math-container">$f$</span> is differentiable in <span class="math-container">$G$</span> because <span class="math-container">$f$</span> is holomorphic in <span class="math-container">$G$</span>.</p>
<p><span class="math-container">$$h'(z) = g'(f(z)) f'(z) = \frac{2}{(1-w)^2}|_{w=f(z)} f'(z) = \frac{2 f'(z)}{(1-f(z))^2} $$</span></p>
<p>Now, <span class="math-container">$f'(z)$</span> exists on an open disc <span class="math-container">$D[z,r_z] \ \forall z \in G$</span> where <span class="math-container">$r_z$</span> denotes the radius of the open disc s.t. <span class="math-container">$f(z)$</span> is holomorphic at <span class="math-container">$z$</span>. So, I guess <span class="math-container">$\frac{2 f'(z)}{(1-f(z))^2} = h'(z)$</span> exists on an open disc with the same radius <span class="math-container">$D[z,r_z] \ \forall z \in G'$</span>, and <span class="math-container">$\therefore, h$</span> is holomorphic in <span class="math-container">$G'$</span>.</p>
<blockquote>
<p>(C) Possible flaw:</p>
</blockquote>
<p>It seems that on <span class="math-container">$G'$</span>, <span class="math-container">$f$</span> has value <span class="math-container">$\frac{Hi-1}{Hi+1}$</span> while on <span class="math-container">$G \setminus G'$</span>, <span class="math-container">$f$</span> has value <span class="math-container">$1$</span>.</p>
<p><span class="math-container">$$\therefore, \forall z \in G, f(z) = \frac{Hi-1}{Hi+1} 1_{G'}(z) + 1_{G \setminus G'}(z)$$</span></p>
<p>It seems then that we've actually shown only that <span class="math-container">$f$</span> is constant on <span class="math-container">$G$</span> except for the subset of <span class="math-container">$G$</span> where <span class="math-container">$f=1$</span>.</p>
| trying | 309,917 | <p>In every point on a line in the plane the possible tangent vectors form a real monodimensional vector space. An holomorphic function has in every point of its domain a derivative map that is a complex linear, that is a roto-homothetic real transformation of the plane.</p>
<p>Turning attention to your problem: in every point of the region $G$ the tangent vectors form a bidimensional real vector space. When a tangent vector is transformed by the derivative map it is rotated and/or enlarged in the plane of the tangent vectors in the image of the point. But the only possible tangent vectors in any point of the image (subset of the unit circle, that is a set of arcs and/or points) are in a real monodimensional vector space (for points on an arc) or zero-dimensional vector space (for isolated points). So for that roto-homothetic transformation to be fitted, it must make any tangent vector in the region $G$ vanish, that is, the derivative map must be $0$.</p>
|
2,865,122 | <p><a href="http://math.sfsu.edu/beck/complex.html" rel="nofollow noreferrer">A First Course in Complex Analysis by Matthias Beck, Gerald Marchesi, Dennis Pixton, and Lucas Sabalka</a> Exer 3.8</p>
<blockquote>
<p>Suppose <span class="math-container">$f$</span> is holomorphic in region <span class="math-container">$G$</span>, and <span class="math-container">$f(G) \subseteq \{ |z|=1 \}$</span>. Prove <span class="math-container">$f$</span> is constant.</p>
</blockquote>
<p>(I guess we may assume <span class="math-container">$f: G \to \mathbb C$</span> s.t. image(f)=<span class="math-container">$f(G)$</span>. I guess it doesn't matter if we somehow have <span class="math-container">$f: A \to \mathbb C$</span> for any <span class="math-container">$A$</span> s.t. <span class="math-container">$G \subseteq A \subseteq \mathbb C$</span> as long as <span class="math-container">$G$</span> is a region and <span class="math-container">$f$</span> is holo there.)</p>
<p>I will now attempt to elaborate the following proof at a <a href="https://math.oregonstate.edu/%7Edeleenhp/teaching/winter17/MTH483/hw2-sol.pdf" rel="nofollow noreferrer">Winter 2017 course in Oregon State University</a>.</p>
<p><strong>Question 1</strong>: For the following elaboration of the proof, what errors if any are there?</p>
<p><strong>Question 2</strong>: Are there more elegant ways to approach this? I have a feeling this can be answered with Ch2 only, i.e. Cauchy-Riemann or differentiation/holomorphic properties instead of having to use Möbius transformations.</p>
<blockquote>
<p>OSU Pf (slightly paraphrased): Let <span class="math-container">$g(z)=\frac{1+z}{1-z}$</span>, and define <span class="math-container">$h(z)=g(f(z)), z \in G \setminus \{z : f(z) = 1\}$</span>. Then <span class="math-container">$h$</span> is holomorphic on its domain, and <span class="math-container">$h$</span> is imaginary valued by Exer 3.7. By a variation of Exer 2.19, <span class="math-container">$h$</span> is constant. QED</p>
</blockquote>
<p>My (elaboration of OSU) Pf: <span class="math-container">$\because f(G) \subseteq C[0,1]$</span>, let's consider the Möbius transformation in the preceding Exer 3.7 <span class="math-container">$g: \mathbb C \setminus \{z = 1\} \to \mathbb C$</span> s.t. <span class="math-container">$g(z) := \frac{1+z}{1-z}$</span>:</p>
<p>If we plug in <span class="math-container">$C[0,1] \setminus \{1\}$</span> in <span class="math-container">$g$</span>, then we'll get the imaginary axis by Exer 3.7. Precisely: <span class="math-container">$$g(\{e^{it}\}_{t \in \mathbb R \setminus \{0\}}) = \{is\}_{s \in \mathbb R}. \tag{1}$$</span> Now, define <span class="math-container">$G' := G \setminus \{z \in G | f(z) = 1 \}$</span> and <span class="math-container">$h: G' \to \mathbb C$</span> s.t. <span class="math-container">$h := g \circ f$</span> s.t. <span class="math-container">$h(z) = \frac{1+f(z)}{1-f(z)}$</span>. If we plug in <span class="math-container">$G'$</span> in <span class="math-container">$h$</span>, then we'll get the imaginary axis. Precisely: <span class="math-container">$$h(G') := \frac{1+f(G')}{1-f(G')} \stackrel{(1)}{=} \{is\}_{s \in \mathbb R}. \tag{2}$$</span></p>
<p>Now Exer 2.19 says that a real valued holomorphic function over a region is constant: <span class="math-container">$f(z)=u(z) \implies u_x=0=u_y \implies f'=0$</span> to conclude by Thm 2.17 that <span class="math-container">$f$</span> is constant or simply by partial integration that <span class="math-container">$u$</span> is constant. Actually, an imaginary valued holomorphic function over a region is constant too: <span class="math-container">$f(z)=iv(z) \implies v_x=0=v_y \implies f'=0$</span> again by Cauchy-Riemann Thm 2.13 to conclude by Thm 2.17 that <span class="math-container">$f$</span> is constant or simply by partial integration that <span class="math-container">$v$</span> is constant.</p>
<p><span class="math-container">$(2)$</span> precisely says that <span class="math-container">$h$</span> is imaginary valued over <span class="math-container">$G'$</span>. <span class="math-container">$\therefore,$</span> if <span class="math-container">$G'$</span> is a region (A) and if <span class="math-container">$h$</span> is holomorphic on <span class="math-container">$G'$</span> (B), then <span class="math-container">$h$</span> is constant on <span class="math-container">$G'$</span> with value I'll denote <span class="math-container">$Hi, H \in \mathbb R$</span>:</p>
<p><span class="math-container">$\forall z \in G',$</span></p>
<p><span class="math-container">$$Hi = \frac{1+f(z)}{1-f(z)} \implies f(z) = \frac{Hi-1}{Hi+1}, \tag{3}$$</span></p>
<p>where <span class="math-container">$Hi+1 \ne 0 \forall H \in \mathbb R$</span>.</p>
<p><span class="math-container">$\therefore, f$</span> is constant on <span class="math-container">$G'$</span> (Q4) with value given in <span class="math-container">$(3)$</span>.</p>
<p>QED except possibly for (C)</p>
<hr />
<blockquote>
<p>(A) <span class="math-container">$G'$</span> is a region</p>
</blockquote>
<p>I guess if <span class="math-container">$G \setminus G'$</span> is finite, then <span class="math-container">$G'$</span> is a region. I'm thinking <span class="math-container">$D[0,1]$</span> is a region and then <span class="math-container">$D[0,1] \setminus \{0\}$</span> is still a region.</p>
<blockquote>
<p>(B) To show <span class="math-container">$h$</span> is holomorphic in <span class="math-container">$G'$</span>:</p>
</blockquote>
<p>Well <span class="math-container">$h(z)$</span> is differentiable <span class="math-container">$\forall z \in G'$</span> and <span class="math-container">$f(z) \ne 1 \forall z \in G'$</span> and <span class="math-container">$f'(z)$</span> exists in <span class="math-container">$G' \subseteq G$</span> because <span class="math-container">$f$</span> is differentiable in <span class="math-container">$G$</span> because <span class="math-container">$f$</span> is holomorphic in <span class="math-container">$G$</span>.</p>
<p><span class="math-container">$$h'(z) = g'(f(z)) f'(z) = \frac{2}{(1-w)^2}|_{w=f(z)} f'(z) = \frac{2 f'(z)}{(1-f(z))^2} $$</span></p>
<p>Now, <span class="math-container">$f'(z)$</span> exists on an open disc <span class="math-container">$D[z,r_z] \ \forall z \in G$</span> where <span class="math-container">$r_z$</span> denotes the radius of the open disc s.t. <span class="math-container">$f(z)$</span> is holomorphic at <span class="math-container">$z$</span>. So, I guess <span class="math-container">$\frac{2 f'(z)}{(1-f(z))^2} = h'(z)$</span> exists on an open disc with the same radius <span class="math-container">$D[z,r_z] \ \forall z \in G'$</span>, and <span class="math-container">$\therefore, h$</span> is holomorphic in <span class="math-container">$G'$</span>.</p>
<blockquote>
<p>(C) Possible flaw:</p>
</blockquote>
<p>It seems that on <span class="math-container">$G'$</span>, <span class="math-container">$f$</span> has value <span class="math-container">$\frac{Hi-1}{Hi+1}$</span> while on <span class="math-container">$G \setminus G'$</span>, <span class="math-container">$f$</span> has value <span class="math-container">$1$</span>.</p>
<p><span class="math-container">$$\therefore, \forall z \in G, f(z) = \frac{Hi-1}{Hi+1} 1_{G'}(z) + 1_{G \setminus G'}(z)$$</span></p>
<p>It seems then that we've actually shown only that <span class="math-container">$f$</span> is constant on <span class="math-container">$G$</span> except for the subset of <span class="math-container">$G$</span> where <span class="math-container">$f=1$</span>.</p>
| zhw. | 228,045 | <p>Note that $f\overline {f}=1$ in $G.$ This implies $\overline {f} =1/f$ in $G.$ Hence $\overline {f}$ is holomorphic in $G.$ This implies both $f+\overline f= 2\text { Re } f$ and $f-\overline f=2i\text { Im } f$ are holomorphic in $G.$ By the remarks you made right after $(2)$ in your question, these functions are constant, which implies $f$ is constant.</p>
|
2,865,122 | <p><a href="http://math.sfsu.edu/beck/complex.html" rel="nofollow noreferrer">A First Course in Complex Analysis by Matthias Beck, Gerald Marchesi, Dennis Pixton, and Lucas Sabalka</a> Exer 3.8</p>
<blockquote>
<p>Suppose <span class="math-container">$f$</span> is holomorphic in region <span class="math-container">$G$</span>, and <span class="math-container">$f(G) \subseteq \{ |z|=1 \}$</span>. Prove <span class="math-container">$f$</span> is constant.</p>
</blockquote>
<p>(I guess we may assume <span class="math-container">$f: G \to \mathbb C$</span> s.t. image(f)=<span class="math-container">$f(G)$</span>. I guess it doesn't matter if we somehow have <span class="math-container">$f: A \to \mathbb C$</span> for any <span class="math-container">$A$</span> s.t. <span class="math-container">$G \subseteq A \subseteq \mathbb C$</span> as long as <span class="math-container">$G$</span> is a region and <span class="math-container">$f$</span> is holo there.)</p>
<p>I will now attempt to elaborate the following proof at a <a href="https://math.oregonstate.edu/%7Edeleenhp/teaching/winter17/MTH483/hw2-sol.pdf" rel="nofollow noreferrer">Winter 2017 course in Oregon State University</a>.</p>
<p><strong>Question 1</strong>: For the following elaboration of the proof, what errors if any are there?</p>
<p><strong>Question 2</strong>: Are there more elegant ways to approach this? I have a feeling this can be answered with Ch2 only, i.e. Cauchy-Riemann or differentiation/holomorphic properties instead of having to use Möbius transformations.</p>
<blockquote>
<p>OSU Pf (slightly paraphrased): Let <span class="math-container">$g(z)=\frac{1+z}{1-z}$</span>, and define <span class="math-container">$h(z)=g(f(z)), z \in G \setminus \{z : f(z) = 1\}$</span>. Then <span class="math-container">$h$</span> is holomorphic on its domain, and <span class="math-container">$h$</span> is imaginary valued by Exer 3.7. By a variation of Exer 2.19, <span class="math-container">$h$</span> is constant. QED</p>
</blockquote>
<p>My (elaboration of OSU) Pf: <span class="math-container">$\because f(G) \subseteq C[0,1]$</span>, let's consider the Möbius transformation in the preceding Exer 3.7 <span class="math-container">$g: \mathbb C \setminus \{z = 1\} \to \mathbb C$</span> s.t. <span class="math-container">$g(z) := \frac{1+z}{1-z}$</span>:</p>
<p>If we plug in <span class="math-container">$C[0,1] \setminus \{1\}$</span> in <span class="math-container">$g$</span>, then we'll get the imaginary axis by Exer 3.7. Precisely: <span class="math-container">$$g(\{e^{it}\}_{t \in \mathbb R \setminus \{0\}}) = \{is\}_{s \in \mathbb R}. \tag{1}$$</span> Now, define <span class="math-container">$G' := G \setminus \{z \in G | f(z) = 1 \}$</span> and <span class="math-container">$h: G' \to \mathbb C$</span> s.t. <span class="math-container">$h := g \circ f$</span> s.t. <span class="math-container">$h(z) = \frac{1+f(z)}{1-f(z)}$</span>. If we plug in <span class="math-container">$G'$</span> in <span class="math-container">$h$</span>, then we'll get the imaginary axis. Precisely: <span class="math-container">$$h(G') := \frac{1+f(G')}{1-f(G')} \stackrel{(1)}{=} \{is\}_{s \in \mathbb R}. \tag{2}$$</span></p>
<p>Now Exer 2.19 says that a real valued holomorphic function over a region is constant: <span class="math-container">$f(z)=u(z) \implies u_x=0=u_y \implies f'=0$</span> to conclude by Thm 2.17 that <span class="math-container">$f$</span> is constant or simply by partial integration that <span class="math-container">$u$</span> is constant. Actually, an imaginary valued holomorphic function over a region is constant too: <span class="math-container">$f(z)=iv(z) \implies v_x=0=v_y \implies f'=0$</span> again by Cauchy-Riemann Thm 2.13 to conclude by Thm 2.17 that <span class="math-container">$f$</span> is constant or simply by partial integration that <span class="math-container">$v$</span> is constant.</p>
<p><span class="math-container">$(2)$</span> precisely says that <span class="math-container">$h$</span> is imaginary valued over <span class="math-container">$G'$</span>. <span class="math-container">$\therefore,$</span> if <span class="math-container">$G'$</span> is a region (A) and if <span class="math-container">$h$</span> is holomorphic on <span class="math-container">$G'$</span> (B), then <span class="math-container">$h$</span> is constant on <span class="math-container">$G'$</span> with value I'll denote <span class="math-container">$Hi, H \in \mathbb R$</span>:</p>
<p><span class="math-container">$\forall z \in G',$</span></p>
<p><span class="math-container">$$Hi = \frac{1+f(z)}{1-f(z)} \implies f(z) = \frac{Hi-1}{Hi+1}, \tag{3}$$</span></p>
<p>where <span class="math-container">$Hi+1 \ne 0 \forall H \in \mathbb R$</span>.</p>
<p><span class="math-container">$\therefore, f$</span> is constant on <span class="math-container">$G'$</span> (Q4) with value given in <span class="math-container">$(3)$</span>.</p>
<p>QED except possibly for (C)</p>
<hr />
<blockquote>
<p>(A) <span class="math-container">$G'$</span> is a region</p>
</blockquote>
<p>I guess if <span class="math-container">$G \setminus G'$</span> is finite, then <span class="math-container">$G'$</span> is a region. I'm thinking <span class="math-container">$D[0,1]$</span> is a region and then <span class="math-container">$D[0,1] \setminus \{0\}$</span> is still a region.</p>
<blockquote>
<p>(B) To show <span class="math-container">$h$</span> is holomorphic in <span class="math-container">$G'$</span>:</p>
</blockquote>
<p>Well <span class="math-container">$h(z)$</span> is differentiable <span class="math-container">$\forall z \in G'$</span> and <span class="math-container">$f(z) \ne 1 \forall z \in G'$</span> and <span class="math-container">$f'(z)$</span> exists in <span class="math-container">$G' \subseteq G$</span> because <span class="math-container">$f$</span> is differentiable in <span class="math-container">$G$</span> because <span class="math-container">$f$</span> is holomorphic in <span class="math-container">$G$</span>.</p>
<p><span class="math-container">$$h'(z) = g'(f(z)) f'(z) = \frac{2}{(1-w)^2}|_{w=f(z)} f'(z) = \frac{2 f'(z)}{(1-f(z))^2} $$</span></p>
<p>Now, <span class="math-container">$f'(z)$</span> exists on an open disc <span class="math-container">$D[z,r_z] \ \forall z \in G$</span> where <span class="math-container">$r_z$</span> denotes the radius of the open disc s.t. <span class="math-container">$f(z)$</span> is holomorphic at <span class="math-container">$z$</span>. So, I guess <span class="math-container">$\frac{2 f'(z)}{(1-f(z))^2} = h'(z)$</span> exists on an open disc with the same radius <span class="math-container">$D[z,r_z] \ \forall z \in G'$</span>, and <span class="math-container">$\therefore, h$</span> is holomorphic in <span class="math-container">$G'$</span>.</p>
<blockquote>
<p>(C) Possible flaw:</p>
</blockquote>
<p>It seems that on <span class="math-container">$G'$</span>, <span class="math-container">$f$</span> has value <span class="math-container">$\frac{Hi-1}{Hi+1}$</span> while on <span class="math-container">$G \setminus G'$</span>, <span class="math-container">$f$</span> has value <span class="math-container">$1$</span>.</p>
<p><span class="math-container">$$\therefore, \forall z \in G, f(z) = \frac{Hi-1}{Hi+1} 1_{G'}(z) + 1_{G \setminus G'}(z)$$</span></p>
<p>It seems then that we've actually shown only that <span class="math-container">$f$</span> is constant on <span class="math-container">$G$</span> except for the subset of <span class="math-container">$G$</span> where <span class="math-container">$f=1$</span>.</p>
| BCLC | 140,308 | <p>i guess <a href="https://en.wikipedia.org/wiki/Open_mapping_theorem_(complex_analysis)" rel="nofollow noreferrer">open mapping theorem</a>, as suggested by Angina Seng, is <a href="https://academia.stackexchange.com/q/116019">inadmissible</a>, but anyhoo</p>
<p><span class="math-container">$f$</span> is either constant or non-constant. If constant, then done. If non-constant, then by open mapping theorem, <span class="math-container">$f$</span> is open. Then <span class="math-container">$G$</span> is open implies <span class="math-container">$f(G)$</span> open (in <span class="math-container">$\mathbb C$</span>). However no non-empty subset of the unit circle is open (in <span class="math-container">$\mathbb C$</span>).</p>
<p>remarks:</p>
<ol>
<li><p>so it seems pretty easy with open mapping theorem, but the proof for open mapping theorem seems to use either Rouché's theorem or cauchy's differentiation formula, sooo...yeah. too high at this point of the text.</p>
</li>
<li><p>seems kinda overkill to use open mapping. it's like using functional analysis to prove pythagorean theorem.</p>
</li>
</ol>
<ul>
<li>Update: Trivia: <a href="https://en.wikipedia.org/wiki/Parseval%27s_identity" rel="nofollow noreferrer">Parseval's identity is a generalisation of pythaogrean</a>. There's a <a href="https://math.stackexchange.com/a/4111029">new answer</a> that uses Parseval's identity.</li>
</ul>
|
223,176 | <p>I made this problem: </p>
<p>$f(x)=e^{f^{\prime \prime}}$ </p>
<p>I have just been taught the first derivative, and was thinking about what would happen if a derivative depended upon its own derivative. I understand that $e^x$ is its "own" derivative, but in the problem I made, the first derivative seems not logical: to know the first derivative you must already know the 2nd derivative. It seems self-referencing. </p>
<p>Is the problem I made a real problem, or just some abstract idea? </p>
| Berci | 41,488 | <p>Such kind of problems usually ask for the function $f$ itself (of course, then its derivative can be calculated, too). </p>
<p>And, such is called a <a href="http://en.wikipedia.org/wiki/Differential_equation" rel="nofollow">Differential equation</a>.</p>
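<p>For what it is worth, $f = e^{f''}$ has at least one easy solution: if $f$ is a constant $c$, then $f'' = 0$ and the equation forces $c = e^0 = 1$. A minimal check of this (my own sketch, not part of the answer above):</p>

```python
import math

f = lambda x: 1.0     # constant candidate solution f ≡ 1
f_pp = lambda x: 0.0  # its second derivative
# residual of f(x) = e^{f''(x)} at a few sample points
residuals = [abs(f(x) - math.exp(f_pp(x))) for x in (-1.0, 0.0, 2.5)]
```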
|
2,860,360 | <p>It is a general question about simple examples of calculating class numbers in quadratic fields. Here are an excerpt from Frazer Jarvis' book <em>Algebraic Number Theory</em>:</p>
<p>"<em>Example 7.20</em> For $K=\mathbb{Q}(\sqrt[3]{2} )$, the discriminant is 108, and $r_{2}=1$. So the Minkowski bound is $\approx 2.940$. So every ideal is equivalent to one whose norm is at most 2. The only ideal of norm 1 is the full ring of integers, which is principal; the ideal $(2)=\mathfrak{p}_{2}^{3}$, where $\mathfrak{p}_{2}=(\sqrt[3]{2})$ is also principal. Thus every ideal is equivalent to a principal ideal, so the class group is trivial."</p>
<p>The question is why does it suffice to look at the principal ideal generated by 2?</p>
| RayDansh | 572,459 | <p>The probability at least one person succeeds out of $6$ equals $1$ minus the probability that all of the $6$ fail. So if the success rate is $p$, then the probability at least one person succeeds out of $n$ people is $1-(1-p)^n$. </p>
<p>Going to your example of $20$% success and $6$ people, we get $1-(1-0.20)^6 \approx 0.738$.</p>
<p>Probabilities aren't cumulative; only expected values are.</p>
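<p>The arithmetic is easy to check directly (a small sketch of the formula above; the variable names are mine):</p>

```python
p, n = 0.20, 6
# probability that at least one of the n independent attempts succeeds
p_at_least_one = 1 - (1 - p) ** n
```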
|
2,860,360 | <p>It is a general question about simple examples of calculating class numbers in quadratic fields. Here are an excerpt from Frazer Jarvis' book <em>Algebraic Number Theory</em>:</p>
<p>"<em>Example 7.20</em> For $K=\mathbb{Q}(\sqrt[3]{2} )$, the discriminant is 108, and $r_{2}=1$. So the Minkowski bound is $\approx 2.940$. So every ideal is equivalent to one whose norm is at most 2. The only ideal of norm 1 is the full ring of integers, which is principal; the ideal $(2)=\mathfrak{p}_{2}^{3}$, where $\mathfrak{p}_{2}=(\sqrt[3]{2})$ is also principal. Thus every ideal is equivalent to a principal ideal, so the class group is trivial."</p>
<p>The question is why does it suffice to look at the principal ideal generated by 2?</p>
| saulspatz | 235,128 | <p>If six people each try $1,000,000$ times, the total number of successes is approximately $1,200,000.$ The success rate is approximately $$
\frac{1200000}{6000000}=0.2$$</p>
<p>You seem to have overlooked the fact that there are six million trials.</p>
|
71,031 | <p>In 1976 Cappell and Shaneson gave some examples of knots in homotopy 4-spheres and for some time these examples were considered as possible counter-examples to the smooth 4-dimensional Poincare conjecture.</p>
<p>In a series of papers, Akbulut and Gompf have shown most of these Cappell-Shaneson knots actually are knots in the standard <span class="math-container">$S^4$</span>, the most recent reference being <a href="https://arxiv.org/abs/0908.1914" rel="nofollow noreferrer">this</a>.</p>
<p>In principle, one should be able to work through their arguments to derive a picture of these 2-knots in the 4-sphere. Has anyone done this, for <em>any</em> of the Cappell-Shaneson knots?</p>
<p>I know various people have created censi of 2-knots, does anyone know if any Cappell-Shaneson knots appear in those censi? (I have a hard time accepting censuses as plural of census, sorry, it sounds so wrong!)</p>
<p>I'd be happy with any fairly explicit geometric picture of a Cappell-Shaneson knot sitting in <span class="math-container">$S^4$</span>. The two I'm most familiar with is the Whitneyesque motion-diagram, and the "resolution of a knotted 4-valent graph in <span class="math-container">$S^3$</span>" picture. What I want to avoid is the "attach a handle and fuss about and argue that the manifold you've constructed is diffeomorphic to <span class="math-container">$S^4$</span>" situation.</p>
| Min Hoon Kim | 21,694 | <p>I think the explicit embedding of Cappell-Shaneson knot is given in the following paper:</p>
<p>S. Akbulut and R. Kirby, A potential smooth counterexample in dimension 4 to the Poincaré conjecture, the Schoenflies conjecture, and the Andrews-Curtis conjecture, Topology 24 (1985) 375--390. (See Figure 16 of that paper.)</p>
<p>The paper of Aitchison and Rubinstein mentioned by Scott Carter points out an error (in the $\mathbb{Z}/2$-framing of the $\gamma$-curve, which turns out to be 1) in S. Akbulut and R. Kirby's earlier paper "An exotic involution on $S^4$", Topology 18 (1979) 1--15. Hence, what Akbulut and Kirby really showed (in 1979) is that the specific (or simplest) Cappell-Shaneson sphere is obtained from the Gluck construction on a smooth 2-knot in the standard $S^4$. Figure 16 of the 1985 Topology paper of Akbulut and Kirby shows that this smooth 2-knot is obtained by gluing two ribbon disks of the knot $8_9$.</p>
<p>Finally, I would like to mention that the same construction is given in Figure 6.2, page 17 of Kirby's famous book "The topology of 4-manifolds", Springer Lecture Notes in Mathematics 1374.</p>
|
1,967,847 | <blockquote>
<p>A vector space $V$ is called <strong>finite-dimensional</strong> if there is a finite subset of $V$ that is a basis for $V$. If there is no such finite subset of $V$, then $V$ is called <strong>infinite-dimensional</strong>.</p>
<hr>
<p>We now establish some results about finite-dimensional vector spaces that will tell about the number of vectors in a basis, compare two different bases, and give properties of bases. First, we observe that if $\{\mathbf{v}_1, \mathbf v_2,\dotsc, \mathbf v_k\}$ is a basis for a vector space $V$, then $\{c\mathbf v_1, \mathbf v_2, \dotsc, \mathbf v_k\}$ is also a basis when $c\neq 0$ (Exercise 35). Thus a basis for a nonzero vector space is never unique.<br>
<a href="https://i.stack.imgur.com/ghEh9.png" rel="nofollow noreferrer">Image.</a></p>
</blockquote>
<p>I am confused if $\mathbb R^2$ is a finite dimensional vector space. $[1 \ 0],[0 \ 1]$ will be the standard basis of $\mathbb R^2$. However, there are also $[2 \ 0],[0 \ 1]$ and I can find infinite many to be the basis of $\mathbb R^2$. So, $\mathbb R^2$ is an infinite dimensional vector space? </p>
| Faraad Armwood | 317,914 | <p>I don't understand why this question was downvoted since it is an honest one. For a vector space $V$, the existence of a finite basis is all you need to determine the dimension (which is unique). This is because you can show that if $A$ is a basis for $V$ and $|A| = n$, then any other basis $B$ for $V$ also has to have cardinality $n$. So "finite-dimensional" refers to the size of a single basis, not to how many bases there are: $\mathbb R^2$ has infinitely many bases, but each of them has exactly two vectors, so it is finite-dimensional. </p>
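<p>The two bases from the question can be checked mechanically: two vectors form a basis of $\mathbb R^2$ exactly when the determinant of the matrix they form is nonzero, and every basis has the same number of vectors. A small sketch (the helper <code>det2</code> is mine):</p>

```python
def det2(u, v):
    """Determinant of the 2x2 matrix with rows u and v."""
    return u[0] * v[1] - u[1] * v[0]

standard = ([1, 0], [0, 1])
scaled = ([2, 0], [0, 1])  # c*v1, v2 with c = 2 — still a basis
```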
|
2,963,886 | <p>What do these statements mean in discrete mathematics?</p>
<p><strong>Example 1:</strong> Let <span class="math-container">$P:\mathbb{Z}\times \mathbb{Z}\to \{T,F\}$</span>, where <span class="math-container">$P(x,y)$</span> denotes "<span class="math-container">$x+y=5$</span>".</p>
<p><strong>Example 2:</strong> Let <span class="math-container">$B=\{T,F\}$</span>. Let <span class="math-container">$P(p,q,r,\ldots )$</span> be a proposition. Then, <span class="math-container">$$P:=(p\rightarrow q)\rightarrow r:B\times B\times B\to B$$</span></p>
| Bertrand Wittgenstein's Ghost | 606,249 | <p><strong>Example 1</strong>: $P$ is a 2-place predicate defined by $x+y=5$ (one may also write $P(x,y)$ as $xPy$). It sends an ordered pair of integers to $T$ if the sum of the first and the second is 5, and to $F$ otherwise.</p>
<p><strong>Example 2</strong>: The second one is the Boolean view of a predicate. In essence, it says that $(p\rightarrow q)\rightarrow r$ defines a function from ordered triples in $B\times B\times B$ to $B=\{T,F\}$. That is, the predicate $P$ will be either true or false given the truth values of $(p,q,r,\ldots)$. </p>
<p>This is what I understood of it. </p>
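<p>Both examples can be modelled directly as Boolean-valued functions (a sketch; the helper <code>implies</code> and the function names are mine):</p>

```python
def P(x: int, y: int) -> bool:
    """Example 1: the predicate 'x + y = 5' on Z x Z."""
    return x + y == 5

def implies(a: bool, b: bool) -> bool:
    """Material implication a -> b."""
    return (not a) or b

def P2(p: bool, q: bool, r: bool) -> bool:
    """Example 2: (p -> q) -> r as a map B x B x B -> B."""
    return implies(implies(p, q), r)
```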
|
1,572,593 | <p>Let $G$ be a group, and let $H$ and $K$ be two subgroups of $G$ of finite order such that $H \cap K = \{1_G\}$. </p>
<p>I have already solved the first exercise, which says that the cardinality of $HK$ is $|H||K|$.</p>
<p>The second exercise asks to deduce that if $|G|=pm$ where $p$ is a prime number and $p>m$, then $G$ has at most one subgroup of order $p$, and to show that if such a subgroup exists, it is normal in $G$.</p>
<p>I think I can use the Cauchy and Sylow theorems, but I'm a bit stuck on this problem. Is anyone able to give me a hint?</p>
| kccu | 255,727 | <p>First notice that if $p>m$, $p \nmid m$. So the $p$-Sylows of $G$ will have order $p$. The Sylow theorems tell you that the number of $p$-Sylows is congruent to $1$ modulo $p$ and is also a divisor of $m$. Using the fact that $p>m$, what are/is the possible number(s) of $p$-Sylows? Now consider conjugates of the $p$-Sylows to show normality.</p>
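<p>The counting step in this hint can be made concrete (my own sketch): the number $n_p$ of $p$-Sylows divides $m$ and satisfies $n_p \equiv 1 \pmod p$, and since every divisor of $m$ is at most $m < p$, the only candidate is $n_p = 1$.</p>

```python
def possible_sylow_counts(p, m):
    """Divisors d of m with d ≡ 1 (mod p): the candidates for the
    number of p-Sylows when |G| = p*m and p is prime."""
    return [d for d in range(1, m + 1) if m % d == 0 and d % p == 1]
```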
|
479,594 | <p>I was wondering what is the best way to generate the various combinations of $x_i$ such that $$\sum\limits_{i=1}^7 x_i = 1.0$$</p>
<p>where $ x_i \in \{0.0, 0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0\}$</p>
<p>I can generate these using brute force, i.e. checking all $ 11^7$ combinations and taking only those which satisfy the constraint; however, I am interested to know if there is another approach for this. Any ideas?</p>
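<p>One alternative to scanning all $11^7$ tuples is to recurse on the remaining total, so only valid prefixes are ever visited. Working in integer tenths ($x_i = t_i/10$) avoids floating-point drift, and stars and bars predicts $\binom{16}{6} = 8008$ solutions. A sketch (the function name is mine):</p>

```python
from math import comb

def compositions(total, parts):
    """Yield all tuples of `parts` nonnegative integers summing to `total`."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

# x_i = t_i / 10 with t_1 + ... + t_7 = 10
solutions = list(compositions(10, 7))
```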
| Marko Riedel | 44,883 | <p>You could use generating functions. Put $$f(z) = \sum_{n\ge 1} a_n z^n.$$</p>
<p>Summing your recurrence for $n\ge 2$ and multiplying by $z^n,$ we get
$$ f(z) - \frac{1}{4} z = \frac{1}{4} \sum_{n\ge 2} n z^n
- \frac{1}{2} z \sum_{n\ge 2} z^{n-1} \sum_{k=1}^{n-1} a_k.$$</p>
<p>Simplify to obtain
$$ f(z) - \frac{1}{4} z = \frac{1}{4} \frac{z^2(2-z)}{(1-z)^2}
- \frac{1}{2} z \frac{1}{1-z} f(z).$$</p>
<p>Now solve for $f(z).$ This yields
$$ f(z) = \frac{1}{2} \frac{z}{(1-z)(2-z)}
= \frac{1}{2} \frac{1}{1-z} - \frac{1}{2} \frac{1}{1-z/2}.$$</p>
<p>Finally read off the coefficients, which is now easy, to get
$$a_n = \frac{1}{2} \left(1 - \frac{1}{2^n}\right).$$ </p>
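<p>As a cross-check (my own addition), the closed form agrees with the recurrence implicit in the derivation above — as I read it, $a_1 = \frac14$ and $a_n = \frac n4 - \frac12 \sum_{k=1}^{n-1} a_k$ for $n \ge 2$ — in exact rational arithmetic:</p>

```python
from fractions import Fraction

def a_recursive(n, _memo={1: Fraction(1, 4)}):
    # a_n = n/4 - (1/2) * sum_{k=1}^{n-1} a_k, as read off the derivation
    if n not in _memo:
        _memo[n] = Fraction(n, 4) - Fraction(1, 2) * sum(
            a_recursive(k) for k in range(1, n))
    return _memo[n]

def a_closed(n):
    return Fraction(1, 2) * (1 - Fraction(1, 2 ** n))
```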
|
1,259,853 | <p>Why the derivative of $n^{1/n} = \sqrt[n]{n}$ is $n^{1/n} \left( \frac{1}{n^2} - \frac{\log(n)}{n^2}\right)$ (according to Maxima and other tools online)?</p>
<p>I have tried to apply the chain rule, but I get something completely different:</p>
<p>$$\frac{1}{n} n^{\frac{1}{n} - 1} \cdot 1 = \frac{1}{n} n^\frac{1}{n}n^{-1} = \frac{1}{n^2} n^\frac{1}{n} = \frac{\sqrt[n]{n}}{n^2}$$</p>
<p>Sincerely, I am not seeing where that $\log$ and the rest of the stuff come from. I have a more difficult problem that is similar and whose solution contains a $\log$ somewhere, but I am not seeing where it comes from.</p>
| Kevin Church | 229,638 | <p>The previous answers have explained how to do the calculation correctly. I want to comment on why your method didn't work.</p>
<p>The reason you have the wrong answer is that you haven't applied the chain rule correctly. You started with $f(n)=n^{1/n}$ and tried to apply the chain rule. Your first calculation was $$\frac{d}{dn}n^{1/n}=\frac{1}{n}n^{\frac{1}{n}-1}.$$ It looks to me like you've assumed that the $\frac{1}{n}$ in the exponent is constant, allowing you to apply the power rule. The problem is that $\frac{1}{n}$ is not constant.</p>
<p>By analogy, at some point you should have seen that $\frac{d}{dx}e^x=e^x$. We could calculate the derivative of $e^{1/x}$ using the chain rule;
$$\frac{d}{dx}e^{1/x}=\frac{-1}{x^2}e^{1/x}.$$ By the incorrect method that you used earlier, by pretending that the $x$ in the exponent is constant, I would get the incorrect result $$\frac{d}{dx}e^{1/x}=0.$$</p>
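A quick finite-difference check of the correct derivative $\frac{d}{dx}x^{1/x}=x^{1/x}\frac{1-\ln x}{x^2}$ (treating $n$ as a real variable; the evaluation point $x=3$ is an arbitrary choice):

```python
import math

def f(x):
    return x ** (1.0 / x)          # f(x) = x^(1/x), defined for x > 0

def f_prime(x):
    # Derivative via logarithmic differentiation:
    # f'(x) = x^(1/x) * (1 - ln x) / x^2
    return f(x) * (1 - math.log(x)) / x**2

x, h = 3.0, 1e-6
central_diff = (f(x + h) - f(x - h)) / (2 * h)
assert abs(central_diff - f_prime(x)) < 1e-7
print(central_diff, f_prime(x))
```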
|
3,089,326 | <p>9 people randomly enter 3 different rooms. What is the probability that</p>
<p>a) the first room has 3 people?</p>
<p>b) every room has 3 people?</p>
<p>c) the first room has n people, the second room 3 people, and the third room 2 people?</p>
<p>What I want to know is which probability techniques I need to use when dealing with this type of question; I have only learnt a few techniques, such as combinations and permutations.</p>
| Community | -1 | <p>Continuity of <span class="math-container">$f$</span> on <span class="math-container">$\mathbb{R}$</span> is obtained iff <span class="math-container">$\lim_{h\to 0} f(x+h)-f(x)=0$</span>. As I noted in my comment, this expression is equal to <span class="math-container">$\lim_{h\to 0} f(x)+f(h)-f(x)=\lim_{h\to 0}f(h)$</span>. By assumption, we know that <span class="math-container">$f$</span> is continuous at <span class="math-container">$x=0$</span> thus proving: <span class="math-container">$\lim_{h\to 0} f(h)=0$</span></p>
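As for the room-assignment counts themselves: assuming (the problem does not say so explicitly) that each of the 9 people chooses one of the 3 rooms independently and uniformly at random, a brute-force count over all $3^9$ equally likely outcomes confirms the binomial/multinomial formulas:

```python
from fractions import Fraction
from itertools import product
from math import comb, factorial

total = 3 ** 9
count_first_has_3 = 0   # part (a): first room gets exactly 3 people
count_all_have_3 = 0    # part (b): every room gets exactly 3 people

for assignment in product(range(3), repeat=9):   # all 3^9 equally likely outcomes
    sizes = [assignment.count(r) for r in range(3)]
    if sizes[0] == 3:
        count_first_has_3 += 1
    if sizes == [3, 3, 3]:
        count_all_have_3 += 1

# Brute force agrees with the closed-form counts.
assert count_first_has_3 == comb(9, 3) * 2**6                    # choose who enters room 1
assert count_all_have_3 == factorial(9) // (factorial(3) ** 3)   # 9!/(3!3!3!)
print(Fraction(count_first_has_3, total))  # probability for part (a)
print(Fraction(count_all_have_3, total))   # probability for part (b)
```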
|
2,915,685 | <blockquote>
<p>Let $X$ be Banach space and $Y$ be a normed vector space and suppose that I find a linear map $T \in L(X,Y)$. With the aid of $T$, I wonder under what condition I can conclude that $Y$ is also a Banach space.</p>
</blockquote>
<p>Is it enough that $T$ be a linear isomorphism? I encountered this question when thinking of an example like the $\ell^p$-space equipped with the norm
$$
\|a\|_p := \left(\sum_{n=1}^\infty |a_n|^p\right)^{1/p}
$$
and $\ell^p(w)$, a weighted $\ell^p$-space, equipped with norm
$$
\|a\|_{p,w}:=\left(\sum_{n=1}^\infty |a_n|^p w(n)\right)^{1/p}
$$
for $p \geq 1$ and $w: \mathbb{N} \to [0,\infty)$. It seems to me these two spaces are isomorphic, and I was trying to argue that $\ell^p(w)$ is a Banach space without using the standard Cauchy-sequence argument. Does this thought make sense, or is it completely off track? Any comment or feedback is appreciated.</p>
<p><strong>Edit</strong> What if $w(n) > 0$ is required? Does the idea above survive?</p>
| Community | -1 | <p>This is not true: if you take $\omega(1) = 1$ and $\omega(n) = 0$ for $n>1$, then $\ell^p(\omega)$ is not a Hausdorff topological vector space.</p>
|
1,749,909 | <p>I need help with something:
I need to determine whether each of these two integrals converges, and whether it converges absolutely:</p>
<p>$\int _1^{\infty }\frac{\ln x}{\sqrt{x^3-x^2-x+1}}dx$, $\int _0^{\infty \:}\:\frac{\left(x^{\frac{1}{4}}+1\right)\cdot \sin\left(2\sqrt{x}\right)}{x}dx$
I computed these and found that both converge, but I don't know how to check
whether they also converge absolutely.
Thanks.</p>
| CiaPan | 152,299 | <p>If you remind them that integration is somewhat like a 'continuous' analogue of a 'discrete' summation, it becomes quite obvious that the integral of a zero function should be zero, similar to the sum of a sequence of zero terms. And the zero value of the integral corresponds to 'no area' of a degenerate figure 'between' the graph of $f(x)=0$ and the $X$ axis.</p>
<p>Show some physical applications, like integrating a (one-dimensional) density into a mass (say, the mass of a wire with varying diameter) or integrating a velocity over time into a distance travelled. These should make clear the need for additivity (twice the density, twice the mass; 10 mph more, ten miles farther in an hour).</p>
<p>In the next step show the sum of negative terms and the integral of negative function. And then a sum of an alternating-sign sequence (possibly reducing to zero) and an integral of an alternating-sign function (like sine). You can show a 3D example of earthworks and excavations (as an illustration, not calculation) with balancing volumes integrated from the heights and depths relative to ground level.</p>
<p>Then show that, by additivity, for a finite integral there is always a constant which, when added to the integrated function, makes the integral zero; this corresponds to shifting the graph vertically so that the 'positive' and 'negative' areas above and below the axis balance.</p>
<p>That should let them grasp (alongside the formal definitions) how additivity works and where the zero 'should be' in integration, and especially why zero is at the axis level.</p>
|
3,493,519 | <p>Can I get a verification that this is the right way to approach this problem?</p>
<blockquote>
<p>Give an example of a linear map <span class="math-container">$T$</span> such that <span class="math-container">$\dim(\operatorname{null}T) = 3$</span> and <span class="math-container">$\dim(\operatorname{range}T) = 2$</span>.</p>
</blockquote>
<p>By the fundamental theorem of linear maps,
<span class="math-container">$$\dim V = \dim \operatorname{range}T + \dim\operatorname{null}T,$$</span>
thus <span class="math-container">$\dim V=5$</span>. Let <span class="math-container">$e_1,e_2,e_3,e_4,e_5$</span> be a basis for <span class="math-container">$\mathbb{R}^5$</span>.
Let <span class="math-container">$f_1,f_2$</span> be a basis for <span class="math-container">$\mathbb{R}^2$</span>. Define a linear map <span class="math-container">$T \in \mathcal{L}(\mathbb{R}^5,\mathbb{R}^2)$</span> by <span class="math-container">$$T(a_1e_1+a_2e_2+a_3e_3+a_4e_4+a_5e_5)=a_1f_1+a_2f_2.$$</span></p>
<p>Thus <span class="math-container">$\dim(\operatorname{null}T) = 3$</span> and <span class="math-container">$\dim(\operatorname{range}T) = 2$</span>.</p>
| Ben Grossmann | 81,360 | <p>Yes, your example is correct (though like the other answerer, I would tend to prefer something more specific).</p>
<p>Based on your previous post and your use of the word "thus", I assume that you're also trying to prove that the range and nullspace have the dimension you claim (even though such a step is typically considered unnecessary). I would say that you have <em>not</em> done so in the answer you have presented. One proof would be as follows:</p>
<blockquote>
<p>We see that the range is given by <span class="math-container">$\{a_1 f_1 + a_2 f_2 : a_1,a_2 \in \Bbb R\}$</span>. This is the span of the linearly independent set <span class="math-container">$\{f_1,f_2\}$</span>. Thus, the dimension of the range is <span class="math-container">$2$</span>.</p>
<p>On the other hand, we note that
<span class="math-container">$$
T(a_1e_1+a_2e_2+a_3e_3+a_4e_4+a_5e_5) = 0 \iff\\
a_1f_1+a_2f_2 = 0 \iff\\
a_1 = 0\text{ and } a_2 = 0.
$$</span>
Thus, the kernel of <span class="math-container">$T$</span> is given by <span class="math-container">$\{0e_1 + 0e_2 + a_3e_3+a_4e_4+a_5e_5 : a_3,a_4,a_5 \in \Bbb R\}$</span>. This is the span of the linearly independent set <span class="math-container">$\{e_3,e_4,e_5\}$</span>. Thus, the dimension of the kernel is <span class="math-container">$3$</span>.</p>
</blockquote>
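The two dimensions can also be checked computationally from the matrix of $T$ in the standard bases; here is a small sketch using exact rational Gaussian elimination:

```python
from fractions import Fraction

def rank(rows):
    """Row rank via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                factor = m[i][col] / m[r][col]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Matrix of T(a1 e1 + ... + a5 e5) = a1 f1 + a2 f2 in the standard bases.
A = [[1, 0, 0, 0, 0],
     [0, 1, 0, 0, 0]]

dim_range = rank(A)
dim_null = 5 - dim_range        # rank-nullity theorem
assert (dim_range, dim_null) == (2, 3)
print(dim_range, dim_null)
```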
|
207,807 | <p>Is there an explicit example of a non-commutative monoid $M$ such that for all elements $m,n \in M$ and $p \in \mathbb{N}$ we have $(m \cdot n)^p=m^p \cdot n^p$?</p>
<p>It suffices to construct a semigroup $H$ with an absorbing element $0$ such that $a^2=0$ for all $a$, because then $M := H \cup \{e\}$ will do the job. This sounds easy at first glance, but I cannot find an example.</p>
| Boris Novikov | 62,565 | <p>If this question is still of interest to you:</p>
<p>Let $P(\le)$ be a poset. Define a multiplication on pairs $\{(a,b)|a\le b\}\subset P\times P$ with an extra zero by the rule:
$(a,b)(c,d)=(a,d)$ if $b=c$, otherwise $(a,b)(c,d)=0$.</p>
|
2,713,201 | <p>How would you work something like this out? </p>
<p>Are there similar problems to
$$\frac{d\left( (\cos(x))^{\cos(x)}\right)}{dx}$$
which could be worked out the same way?</p>
| MPW | 113,214 | <p>A generalized power rule is
$$(f^g)'=gf^{g-1}\cdot f' + f^g\log f \cdot g'$$</p>
<p>This generalizes the power rule and the exponential rule simultaneously. It is easily obtained by using implicit differentiation. But it is <em>extremely</em> easy to remember if you look at the components of the sum: when $g$ is constant, the factor $g'$ vanishes and you recover the familiar power rule for constant exponents; when $f$ is constant, the factor $f'$ vanishes and you recover the familiar exponential rule for constant bases. Knowing those simple rules, you can piece together the rule easily without rederiving it.</p>
<p>In your case, $f(x)=g(x)=\cos x$, so the result is
$$((\cos x)^{\cos x})'=\cos x(\cos x)^{\cos x - 1}(-\sin x) + (\cos x)^{\cos x}\log\cos x(-\sin x)$$
$$=\boxed{-(\cos x)^{\cos x}\cdot \sin x\cdot(1 + \log\cos x)}$$</p>
<p>Implicitly, this is defined where everything makes sense ($\cos x > 0$, e.g.).</p>
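As a quick numerical sanity check of the boxed formula (the evaluation point $x=0.7$, where $\cos x>0$, is an arbitrary choice):

```python
import math

def g(x):
    return math.cos(x) ** math.cos(x)      # defined where cos x > 0

def g_prime(x):
    # Boxed result: -(cos x)^(cos x) * sin x * (1 + ln cos x)
    return -g(x) * math.sin(x) * (1 + math.log(math.cos(x)))

x, h = 0.7, 1e-6
central_diff = (g(x + h) - g(x - h)) / (2 * h)
assert abs(central_diff - g_prime(x)) < 1e-7
print(central_diff, g_prime(x))
```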
|
3,443,094 | <blockquote>
<p>If <span class="math-container">$$\lim_{x\to 0}\frac{ae^x-b}{x}=2$$</span> then find <span class="math-container">$a,b$</span></p>
</blockquote>
<p><span class="math-container">$$
\lim_{x\to 0}\frac{ae^x-b}{x}=\lim_{x\to 0}\frac{a(e^x-1)+a-b}{x}=\lim_{x\to 0}\frac{a(e^x-1)}{x}+\lim_{x\to 0}\frac{a-b}{x}=\boxed{a+\lim_{x\to 0}\frac{a-b}{x}=2}\\
\lim_{x\to 0}\frac{a-b}{x} \text{ must be finite}\implies \boxed{a=b}\\
$$</span>
Now I think I am stuck, how do I proceed ?</p>
| QC_QAOA | 364,346 | <p>Going off of your work, we know that <span class="math-container">$a=b$</span>. Thus, the question is: what <span class="math-container">$a$</span> makes</p>
<p><span class="math-container">$$\lim_{x\to 0}a\frac{e^x-1}{x}=2?$$</span></p>
<p>Using the Taylor Series for <span class="math-container">$e^x$</span>, we see</p>
<p><span class="math-container">$$2=\lim_{x\to 0}a\frac{e^x-1}{x}=\lim_{x\to 0}a\frac{\sum_{n=0}^\infty\frac{x^n}{n!}-1}{x}=\lim_{x\to 0}a\frac{\sum_{n=1}^\infty\frac{x^n}{n!}}{x}$$</span></p>
<p><span class="math-container">$$=\lim_{x\to 0}a\sum_{n=1}^\infty\frac{x^{n-1}}{n!}=a+\lim_{x\to 0}x\left(\sum_{n=2}^\infty\frac{x^{n-2}}{n!}\right)=a.$$</span></p>
<p>Thus, <span class="math-container">$a=b=2$</span></p>
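With $a=b=2$ the limit is easy to check numerically (a crude finite check, not a proof):

```python
import math

def quotient(x, a=2.0, b=2.0):
    return (a * math.exp(x) - b) / x

# As x -> 0 the quotient approaches 2 ...
for x in (1e-2, 1e-4, 1e-6):
    assert abs(quotient(x) - 2.0) < 2 * x   # error behaves like a*x/2 for small x
# ... while any a != b blows up near 0:
assert abs(quotient(1e-8, a=2.0, b=1.0)) > 1e7
print("limit checks pass")
```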
|
3,811,154 | <p>Prove that this integral is less than infinity. If <span class="math-container">$0<a<c$</span> and <span class="math-container">$0<b$</span>: <span class="math-container">$$\int_0^\infty \frac{|x|^a}{(x+b)^{c+1}} dx.$$</span></p>
<p>From inspection, because $a<c$ and $|x| < |x+b|$, if I were checking absolute convergence the numerator would be dominated by the denominator, so it should converge. But I am not used to testing convergence of integrals; I am used to series.</p>
<p>I tried doing integration by parts and substituting the original integral back in, but that did not work. I can show that here if someone wants to see.</p>
<p>Any suggestions on how to proceed?</p>
| user58697 | 58,697 | <p>Notice that since <span class="math-container">$b > 0$</span>, we have <span class="math-container">$\int_0^\infty \frac{|x|^a}{(x+b)^{c+1}} dx < \int_0^\infty \frac{|x + b|^a}{(x+b)^{c+1}} dx = \int_0^\infty \frac{1}{(x+b)^{c - a+1}} dx$</span>. Now, since <span class="math-container">$c > a$</span> (and <span class="math-container">$b>0$</span>, so there is no singularity at <span class="math-container">$0$</span>), the last integral obviously converges.</p>
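The comparison can be illustrated numerically for one concrete choice of parameters (say $a=1$, $b=2$, $c=3$, my own choice): the integrand is dominated pointwise, and the dominating integral is elementary:

```python
import math

a, b, c = 1.0, 2.0, 3.0        # any 0 < a < c and b > 0 would do

def integrand(x):
    return x**a / (x + b)**(c + 1)

def dominating(x):
    return 1.0 / (x + b)**(c - a + 1)

# Pointwise domination: x^a < (x+b)^a since b > 0.
xs = [k * 0.1 for k in range(1, 2000)]
assert all(integrand(x) < dominating(x) for x in xs)

# The dominating integral is elementary:
#   int_0^inf (x+b)^(-(c-a+1)) dx = b^(a-c) / (c-a), finite since c > a.
exact = b ** (a - c) / (c - a)                      # equals 1/8 here
# Crude midpoint-rule estimate on [0, 200] (tail beyond 200 is negligible).
n, hi = 200000, 200.0
h = hi / n
approx = sum(dominating((i + 0.5) * h) for i in range(n)) * h
assert abs(approx - exact) < 1e-3
print(approx, exact)
```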
|
4,405,145 | <p>Let <span class="math-container">$x_1, x_2, x_3, x_4, x_5$</span> be independent standard normal random variables and <span class="math-container">$\bar x$</span> the sample mean <span class="math-container">$\bar x= (x_1 + x_2 + x_3 + x_4 + x_5)/5$</span>. Consider <span class="math-container">$\Pr(\bar x\leqslant c)$</span>.</p>
<p>What is c?</p>
<p>I tried to rewrite the sample mean as <span class="math-container">$\bar x=\sum_{i=1}^{5}x_i/5$</span> with <span class="math-container">$x_i\sim\mathcal N(0,1)$</span>, then got stuck. What should I do next?</p>
| user97357329 | 630,243 | <p><strong>A second solution by Cornel Ioan Valean</strong> (in large steps)</p>
<p>Let's start with the variable change <span class="math-container">$\displaystyle x\mapsto i\frac{1-\sqrt{x}}{1+\sqrt{x}}$</span> and with understanding the resulting integral as a PV integral, and then we have
<span class="math-container">$$I=\int_0^1 \frac{\arctan^2(x)}{x}\log\left(\frac{x}{(1-x)^2}\right)\textrm{d}x$$</span>
<span class="math-container">$$=\lim_{\epsilon\to 0^{+}}\biggr\{-\frac{1}{16}\int_{-1}^{-\epsilon}\frac{\log^2(x)}{\sqrt{x}(1-x)}\log\left(\frac{1-x}{2(\sqrt{x}-i)^2}\right)\textrm{d}x$$</span>
<span class="math-container">$$+\frac{1}{16}\int_{\epsilon}^1\frac{\log^2(x)}{\sqrt{x}(1-x)}\log\left(\frac{2(\sqrt{x}-i)^2}{1-x}\right)\textrm{d}x\biggr\}$$</span>
<span class="math-container">$$=\frac{1}{16}\Re\biggr\{\int_0^1\frac{\log^2(x)}{\sqrt{x}(1-x)}\log\left(\frac{2(\sqrt{x}-i)^2}{1-x}\right)\textrm{d}x\biggr\}$$</span>
<span class="math-container">$$=\frac{1}{2}\log(2)\underbrace{\int_0^1 \frac{\log^2(x)}{1-x^2}\textrm{d}x}_{\displaystyle \text{Trivial}}+\underbrace{\frac{1}{16}\color{blue}{\int_0^1\frac{\log^2(x)\log(1+x)}{\sqrt{x}(1-x)}\textrm{d}x}}_{\displaystyle \text{Reducible to Derivatives form of Beta function}}-\underbrace{\frac{1}{16}\int_0^1\frac{\log^2(x)\log(1-x)}{\sqrt{x}(1-x)}\textrm{d}x}_{\displaystyle \text{Derivatives form of Beta function}}$$</span>
<span class="math-container">$$=G^2.$$</span></p>
<p><strong>A first note</strong>: For the blue integral one might observe that
<span class="math-container">$$\color{blue}{\int_0^1\frac{\log^2(x)\log(1+x)}{\sqrt{x}(1-x)}\textrm{d}x}=$$</span>
<span class="math-container">$$=\int_0^1\frac{\log^2(x)(\log(1-x^2)-\log(1-x))}{\sqrt{x}(1-x)}\textrm{d}x$$</span>
<span class="math-container">$$=\int_0^1\frac{(1+x)\log^2(x)\log(1-x^2)}{\sqrt{x}(1-x^2)}\textrm{d}x-\int_0^1\frac{\log^2(x)\log(1-x)}{\sqrt{x}(1-x)}\textrm{d}x,$$</span>
and at this point it is crystal clear how the resulting integrals are expressible in terms of the derivatives of the Beta function.</p>
<p><strong>A second note</strong>: The third equality is obtained by the use of <a href="https://math.stackexchange.com/q/3866385">this integral (very nicely and easily calculated)</a>, which is already well-known on the site, given in a trigonometric form.</p>
<p>More precisely, we have that
<span class="math-container">$$\Re\biggr\{\int_{-1}^0 \frac{\log^2(x)}{\sqrt{x}(1-x)}\log\left(\frac{1-x}{2(\sqrt{x}-i)^2}\right)\textrm{d}x\biggr\}$$</span>
<span class="math-container">$$=-8\log(2)\pi \underbrace{\int_0^1\frac{\log(x)}{1+x^2}\textrm{d}x}_{\displaystyle-G}+8\pi \int_0^1\frac{\log(x)}{1+x^2}\log\left(\frac{1+x^2}{(1-x)^2}\right)\textrm{d}x$$</span>
<span class="math-container">$$\overset{x\mapsto \tan(x)}{=}8\log(2)\pi G-16\pi\underbrace{\int_0^{\pi/4}\log(\cos(x)-\sin(x))\log(\tan(x))\textrm{d}x}_{\displaystyle \log(2)G/2}=0,$$</span>
where the last integral is given at the previously mentioned link.</p>
<p><strong>End of story</strong></p>
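One ingredient used above, $\int_0^1\frac{\log x}{1+x^2}\,dx=-G$, is easy to confirm numerically; the following crude midpoint-rule check is my own addition, not part of the derivation:

```python
import math

# Catalan's constant via its defining alternating series (error < next term).
G = sum((-1) ** k / (2 * k + 1) ** 2 for k in range(10**5))

# Midpoint rule for int_0^1 log(x)/(1+x^2) dx; the log singularity at 0
# is integrable and the midpoint rule never samples the endpoint itself.
n = 200000
h = 1.0 / n
integral = sum(math.log((i + 0.5) * h) / (1 + ((i + 0.5) * h) ** 2)
               for i in range(n)) * h

assert abs(integral + G) < 5e-3   # integral should be close to -G
print(integral, -G)
```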
|
1,034,698 | <p>I have an assignment with the following question:</p>
<pre><code>Does an Orthogonal Matrix exist such that its first row consists of the
following values:
</code></pre>
<p>$(1/\sqrt{3},\ 1/\sqrt{3},\ 1/\sqrt{3})$</p>
<pre><code>If there is, find one.
</code></pre>
<p>I know I can solve this question with the Gram Schmidt algorithm, but it includes a lot of complicated calculations.</p>
<p>Is there any other way to prove this statement without the Gram Schmidt algorithm?</p>
<p>Thanks,</p>
<p>Alan </p>
| Learnmore | 294,365 | <p>I think an easy way to solve this is as follows:</p>
<p>dim $(\mathbb R^3)/W$=dim $\mathbb R^3$-dim $W$</p>
<p>Now let $(a,b,c)\in W$ then $2a+3b-c=0$
so $c=2a+3b$</p>
<p>so a basis for $W$ is $\{(1,0,2)^t,(0,1,3)^t\}$ so dim$W$=2</p>
<p>So you get your answer.</p>
<p>Another way is to try the linear map $f:\mathbb R^3\rightarrow \mathbb R$
defined by $f(x,y,z)=2x+3y-z$ (check that it is linear), whose kernel is $W$. Then $\mathbb R^3/W\cong \operatorname{Im}f$, and $\operatorname{Im}f$ is a subspace of $\mathbb R$ of dimension $1$.</p>
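The claims about $W$ are straightforward to verify directly; a small sketch checking that the listed basis vectors lie in the kernel of $f(x,y,z)=2x+3y-z$:

```python
def f(v):
    x, y, z = v
    return 2 * x + 3 * y - z

v1, v2 = (1, 0, 2), (0, 1, 3)

# Both basis vectors are killed by f, so they lie in W = ker f.
assert f(v1) == 0 and f(v2) == 0

# Linear independence: a*v1 + b*v2 has first two coordinates exactly (a, b),
# so it vanishes only when a = b = 0. Hence dim W = 2 and dim(R^3/W) = 3 - 2 = 1.
# Spot-check that combinations of v1, v2 stay inside W:
for a in range(-3, 4):
    for b in range(-3, 4):
        w = tuple(a * p + b * q for p, q in zip(v1, v2))
        assert f(w) == 0
print("basis checks pass")
```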
|
3,583,879 | <blockquote>
<p>a) <span class="math-container">$P_5=11$</span></p>
<p>b) <span class="math-container">$P_1+P_2+P_3+P_4+P_5 =26$</span></p>
</blockquote>
<p>For the first part
<span class="math-container">$$\alpha^5+\beta ^5$$</span>
<span class="math-container">$$=(\alpha^3+\beta ^3)^2-2(\alpha \beta )^3$$</span></p>
<p>I found the value of <span class="math-container">$\alpha^3+\beta^3=4$</span></p>
<p>So <span class="math-container">$$16-2(-1)=18$$</span> which doesn’t match.</p>
<p>The second part depends on the value obtained from part 1, so I need to get that cleared up first.</p>
<p>I have checked the computation many times but cannot spot the error. Also, is there a more efficient way to do this?</p>
| Z Ahmed | 671,540 | <p>if <span class="math-container">$$x^2-x-1=0~~~(1)$$</span> has roots as <span class="math-container">$a,b$</span> then <span class="math-container">$P_k=a^k+b^k,P_0=2,P_1=a+b=1$</span> and
<span class="math-container">$$a^2-a-1=0~~~(2),~~ b^2-b-1=0~~~(3)$$</span>
Multiply Eq.(2) once by <span class="math-container">$a^k$</span> and (3) by <span class="math-container">$b^k$</span>. Adding these two Eqs. you get
<span class="math-container">$$(a^{k+2}+b^{k+2})-(a^{k+1}+b^{k+1})-(a^k+b^k)=0 \implies P_{k+2}= P_{k+1}+P_k~~~(4)$$</span>
So <span class="math-container">$P_5=P_4+P_3,\ P_4=P_3+P_2,\ P_3=P_2+P_1,\ P_2=P_1+P_0=a+b+2=3 \implies P_3=3+1=4 \implies P_4=4+3=7,\ P_5=7+4=11$</span>
So <span class="math-container">$P_1+P_2+P_3+P_4+P_5=1+3+4+7+11=26$</span></p>
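The recurrence (4) makes both parts a one-liner to check (these are the Lucas numbers):

```python
import math

# P_k = a^k + b^k with a, b the roots of x^2 - x - 1 = 0 satisfies
# P_0 = 2, P_1 = 1, P_{k+2} = P_{k+1} + P_k  (the Lucas numbers).
P = [2, 1]
for _ in range(2, 6):
    P.append(P[-1] + P[-2])

assert P[5] == 11                  # part (a)
assert sum(P[1:6]) == 26           # part (b)

# Cross-check against the roots themselves.
a = (1 + math.sqrt(5)) / 2
b = (1 - math.sqrt(5)) / 2
assert all(abs(a**k + b**k - P[k]) < 1e-9 for k in range(6))
print(P)  # [2, 1, 3, 4, 7, 11]
```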
|
2,805,312 | <p>Suppose $E$ is a vector bundle over $M, d^E$ a covariant derivative, $\sigma\in\Omega^p(E)$ and $\mu$ a q-form.</p>
<p>I have seen the following pair of formulae for wedge products:</p>
<p>$d^E(\mu\wedge \sigma)=d\mu\wedge\sigma+(-1)^q\mu\wedge d^E\sigma$</p>
<p>$d^E(\sigma\wedge\mu)=d^E\sigma\wedge\mu+(-1)^p\sigma\wedge d\mu$</p>
<p>I am quite happy with the first one but I can only get the second one if $q$ is even.</p>
<p>Here is my calculation when we write $\sigma=\omega\otimes s$:</p>
<p>$d^E((\omega\wedge\mu)\otimes s)=(d\omega\wedge\mu+(-1)^p\omega\wedge d\mu)\otimes s+(-1)^{p+q}(\omega\wedge\mu)d^E s$</p>
<p>$=(-1)^p\sigma\wedge d\mu+(d\omega+(-1)^{p+q}\omega d^Es)\wedge\mu$</p>
<p>I think I must have something wrong as there is a similar formula involving the connection on the endomorphism bundle.</p>
| mcwiggler | 450,808 | <p>You need an extra step of permuting forms to get $\mu$ all the way to the right in your second term, permuting past the one-form $d^Es$ changes the sign by a factor $(-1)^q$, leaving the correct sign $(-1)^p$ in your expression.</p>
|
33,993 | <p>I am given the parameters for a bivariate normal distribution ($\mu_x, \mu_y, \sigma_x, \sigma_y,$ and $\rho$). How would I go about finding the Var($Y|X=x$)? I was able to find E[$Y|X=x$] by writing $X$ and $Y$ in terms of two standard normal variables and finding the expectation in such a manner. I am unsure how to do this for the variance.</p>
<p>Also, how do I find the probability that both $X$ and $Y$ exceed their mean values (i.e., $P(X>\mu_x, Y > \mu_y)$)?</p>
<p>Thanks for the help!</p>
| GWu | 8,829 | <p>First, the joint PDF $f(x,y)$ is obvious, just plug in your parameters. <a href="http://en.wikipedia.org/wiki/Multivariate_normal_distribution#Bivariate_case" rel="nofollow">Bivariate Normal</a>.
Then you can find the marginal density for $X$, which gives you the conditional density of $Y$ given $X=x$:
$$f_{Y|X}(y|x)=\frac{f(x,y)}{f_X(x)}.$$
Now, using the conditional density, you can evaluate both the conditional expectation and the <a href="http://en.wikipedia.org/wiki/Conditional_variance" rel="nofollow">conditional variance</a>:
$$\mathbb{E} (Y|X=x)=\int_{-\infty}^\infty y f_{Y|X}(y|x)dy,$$
and
$$\text{Var} (Y|X=x)=\int_{-\infty}^\infty (y-h(x))^2 f_{Y|X}(y|x)dy=\frac14,$$
where $h(x)=\mathbb{E} (Y|X=x)=-\frac{\sqrt 3}4(x-2)-1$.</p>
<p>And with the joint PDF, $P(X>\mu_x, Y > \mu_y)$ is just an integration:
$$P(X>\mu_x, Y > \mu_y)=\int_{\mu_x}^\infty\int_{\mu_y}^\infty f(x,y)dydx=\frac1{12},$$
though I guess there's an easier way to compute it.</p>
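The specific numbers above are consistent with the parameters $\mu_x=2$, $\sigma_x=2$, $\mu_y=-1$, $\sigma_y=1$, $\rho=-\sqrt3/2$; this reconstruction is my own guess, since the question as quoted omits the values. With it, the closed forms $\text{Var}(Y|X=x)=\sigma_y^2(1-\rho^2)$ and $P(X>\mu_x, Y>\mu_y)=\frac14+\frac{\arcsin\rho}{2\pi}$ reproduce $\frac14$ and $\frac1{12}$, and a Monte Carlo run agrees:

```python
import math
import random

# Reconstructed parameters (an assumption, consistent with the answer's numbers).
mu_x, sigma_x = 2.0, 2.0
mu_y, sigma_y = -1.0, 1.0
rho = -math.sqrt(3) / 2

# Closed forms for the bivariate normal.
cond_var = sigma_y**2 * (1 - rho**2)                 # Var(Y | X = x)
p_quad = 0.25 + math.asin(rho) / (2 * math.pi)       # P(X > mu_x, Y > mu_y)
assert abs(cond_var - 0.25) < 1e-12
assert abs(p_quad - 1 / 12) < 1e-10

# Monte Carlo check of the quadrant probability.
random.seed(0)
n, hits = 200000, 0
s = math.sqrt(1 - rho**2)
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x = mu_x + sigma_x * z1
    y = mu_y + sigma_y * (rho * z1 + s * z2)
    if x > mu_x and y > mu_y:
        hits += 1
assert abs(hits / n - 1 / 12) < 0.01
print(hits / n, 1 / 12)
```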
|
131,283 | <p>I came across a question which required us to find $\displaystyle\sum_{n=3}^{\infty}\frac{1}{n^5-5n^3+4n}$. I simplified it to $\displaystyle\sum_{n=3}^{\infty}\frac{1}{(n-2)(n-1)n(n+1)(n+2)}$ which simplifies to $\displaystyle\sum_{n=3}^{\infty}\frac{(n-3)!}{(n+2)!}$. I thought it might have something to do with partial fractions, but since I am relatively inexperienced with them I was unable to think of anything useful to do. I tried to check WolframAlpha and it gave $$\sum_{n=3}^{m}\frac{(n-3)!}{(n+2)!}=\frac{m^4+2m^3-m^2-2m-24}{96(m-1)m(m+1)(m+2)}$$
From this it is clear that as $m\rightarrow \infty$ the sum converges to $\frac{1}{96}$, however I have no idea how to get there. Any help would be greatly appreciated!</p>
| Kirthi Raman | 25,538 | <p>If you know partial fractions, this should be
$$\frac{(n-3)!}{(n+2)!}=\frac{1}{4n}+\frac{1}{24(n-2)}+\frac{1}{24(n+2)}-\frac{1}{6(n-1)}-\frac{1}{6(n+1)}$$</p>
<p>And you might have to simplify the finite sum to get an expression like</p>
<p>$$\sum_{n=3}^{m}\frac{(n-3)!}{(n+2)!}=\frac{m^4+2m^3-m^2-2m-24}{96(m-1)m(m+1)(m+2)}$$</p>
|
2,713,038 | <p>I've seen some solutions to this problem but I'm wondering what is incorrect about an argument like this:</p>
<p>$S = \{x \in \mathbb{R}^d: |x| = 1\}$, then $\delta S = \{x \in \mathbb{R}^d: |\delta x| = 1\}$, and so</p>
<p>\begin{align*}
\{ x \in \mathbb{R}^d: |\delta ||x| = 1\} & = \{ x \in \mathbb{R}^d: |x| = 1 / |\delta | \}
\end{align*}</p>
<p>As we take $\delta $ large, then $\delta S$ becomes the set in which $|x| = 0$, i.e. the origin in $\mathbb{R}^d$ , which has zero measure, and since: $m(\delta S) = \delta^d m(S)$ where $m$ is the Lebesgue measure, it follows that $m(S) = 0$</p>
| p4sch | 530,357 | <p>For any $\delta>0$, $\delta S$ is a sphere of radius $\delta$; no matter how big $\delta$ is chosen, it never becomes the set $\{0\}$. The slip is in the second line: $\delta S = \{\delta x : |x| = 1\} = \{y \in \mathbb{R}^d : |y| = \delta\}$, whereas $\{x : |\delta x| = 1\}$ is the sphere of radius $1/\delta$, i.e. $\frac{1}{\delta} S$.</p>
|
290,910 | <p>Which sequences of adjacent edges of a polyhedron could be considered to be a geodesic? The edges of a face most surely will not, but the "equator" of the octahedron eventually will. But for what reasons? How do the defining property of a geodesic - having zero geodesic curvature - apply to a sequence of edges?</p>
<p>(One crude guess: any sequence of edges that pairwise don't share a face? What does this have to do with curvature?)</p>
| Joseph O'Rourke | 237 | <p>It so happens I drew the "equator of a dodecahedron" for one of my papers, so I can't resist including it here: <br />
<img src="https://i.stack.imgur.com/1fXvS.png" alt="Dodecahedron equator"><br /></p>
<p>Two points I'd like to make. First, a geodesic is a curve that has $\le \pi$ surface to each side at every point. This is Alexandrov's definition, and it is the right way to think of geodesics on polyhedra. He and Pogorelov called these <em>quasigeodesics</em> (Alexandrov and Zalgaller, <em>Intrinsic Geometry of Surfaces</em>, 1967, p.16; Pogorelov, <em>Extrinsic Geometry of Convex Surfaces</em>, 1973, p.28).</p>
<p>Second, if you allow a doubly covered polygon as a polyhedron of zero volume (as Alexandrov did), then indeed the edges of a face could be a geodesic: consider a doubly covered square, for example. Then the edges of one square face have $\pi$ at all interior-edge points to either side, and $\pi/2$ to either side at the four corners. </p>
|
290,910 | <p>Which sequences of adjacent edges of a polyhedron could be considered to be a geodesic? The edges of a face most surely will not, but the "equator" of the octahedron eventually will. But for what reasons? How do the defining property of a geodesic - having zero geodesic curvature - apply to a sequence of edges?</p>
<p>(One crude guess: any sequence of edges that pairwise don't share a face? What does this have to do with curvature?)</p>
| Hans-Peter Stricker | 1,792 | <p>There seems to be an almost trivial definition (which relies on a specific realization of a polyhedron): if the polyhedron is convex and inscribed into a sphere and the central projections of the edges onto the sphere sum up to one great arc or circle, then the edges are geodesic. (Note that each single edge is projected onto a single great arc.)</p>
<p>Due to this definition, the obvious "equators" of the regular octahedron are geodesic, but also the edges of any face of any inscribable polyhedron can be geodesic (in a specific realization). On the other hand, some sequences of edges cannot be geodesic in any realization of the polyhedron under this definition, e.g. the zig-zag sequence around the "equator" of the dodecahedron (?). </p>
<p>It remains open, whether there is a more combinatorial definition of geodesics, say in a polyhedral graph.</p>
|
1,248,068 | <p>Let $S$ be a set of cardinality $\aleph_1$. Consider the directed family $\mathcal{C}$ (here <em>directed</em> means <em>directed with respect to the inclusion</em>) of all countably infinite subsets of $S$. Suppose that</p>
<p>$$\mathcal{C} = \bigcup_{n=1}^\infty \mathcal{C}_n$$</p>
<p>for some families $\mathcal{C}_n$. Does it follow that for some $n_0$ the family $\mathcal{C}_{n_0}$ contains an uncountable, directed subfamily?</p>
<p>Of course, at least one $\mathcal{C}_n$ is uncountable, so let us take this one. Must it contain an uncountable directed subfamily?</p>
| Guesta | 234,404 | <p>Improving on hot-queens result, note that this is true even if S is countably infinite. To show this identify S with rationals and note that some $C_n$ must contain uncountably many reals where we view a real x as the set of rationals less than x.</p>
|
45,771 | <p>Hi, it seems like a big field and I'm having trouble getting some solid/classic references to get me started.</p>
<p>If $U \subset \mathbb{R}^d$ is a bounded domain with, say, $C^2$-boundary $\partial U$ and $(S(t),t \ge 0)$ is the Dirichlet heat semigroup on $L^p(U)$ then $(S(t) f)(x) = \int_U G_U(t,x,y) f(y)\,dy$ for $f \in L^p(U)$ where $G_U$ is the Dirichlet heat kernel.</p>
<p>I would like to find a good reference for bounds of the type:
$$
\left|\frac{\partial G_U}{\partial \nu_y} (t,x,y)\right| \le C_1 t^{-(d + k)/2} \exp\left(- \frac{|x-y|^2}{C_2 t}\right)
$$
It seems like it should be classic result? I found Aronson's 1968 paper but it only contains estimates for the kernel and not the 'derivatives'. I can find lots of recent papers on manifolds and such but I am just looking for a solid and accessible reference for my simple case.</p>
<p>Further, if one has a semigroup $(T(t), t \ge 0)$ with a kernel $k(t,x,y)$ that has a pointwise Gaussian estimate $|k(t,x,y)| \le c_1 e^{\omega t} t^{-d/2} e^{-|x-y|^2/(c_2 t)}$, do the estimates for the 'derivatives' as above follow readily? or are more assumptions needed? Again, a reference to point me in the right direction would be greatly appreciated.</p>
<p>Thanks.</p>
| Suvrit | 8,430 | <p>Warning: the following response is that of a "googlist" not of an expert. </p>
<p>Although not classical (as per your request), perhaps the following two offer reasonable pointers? If you find these to be unhelpful, please let me know. I also had a longer listing available, if I find it, I will update my answer.</p>
<ol>
<li>Heat kernel expansion: user's manual: <a href="http://arxiv.org/abs/hep-th/0306138" rel="nofollow">arXiv link</a></li>
<li>Several of the papers of Grigoryan, e.g., <a href="http://www.math.uni-bielefeld.de/~grigor/grad.pdf" rel="nofollow">this one</a></li>
</ol>
|
59,846 | <p>In "The New Book of Prime Number Records", Ribenboim reviews the known results on the degree and number of variables of prime-representing polynomials (those are polynomials such that the set of positive values they obtain for nonnegative integral values of the variables coincides with the set of primes). For example, it is known that there is such a polynomial with 42 variables and degree 5, as well as one with 10 variables and astronomical degree.</p>
<p>Ribenboim mentions that it's an open problem to determine the least number of variables possible for such a polynomial, and remarks "it cannot be 2". It's a fairly simple exercise to show that it cannot be 1, but why can't it be 2?</p>
<p>EDIT: here's the relevant excerpt from Ribenboim's book. Given that nobody seems to be familiar with such a proof, I'm inclined to assume that this is a typo and he just meant "it cannot be 1". </p>
<p><img src="https://i.stack.imgur.com/R1Hcw.png" alt="Excerpt from "The New Book of Prime Number Records""></p>
| Charles | 1,778 | <p>I <a href="https://mathoverflow.net/questions/75637/is-there-a-two-variable-prime-representing-polynomial-in-the-sense-of-jones-sato">asked the same question at MathOverflow</a> (linking here) where I noted that, at least as of 1982, the problem was still open because even universal Diophantine equations were not known to be impossible with two variables.</p>
<p>On further searching I found a FOM posting (see link above) which shows that the universal Diophantine equation problem is still open, so it looks like Ribenboim's book is in error (probably a typo, as Alon suggests).</p>
|
950,485 | <p>I have been trying to solve the following limit but am completely stuck.</p>
<p>$$\lim_{\alpha \rightarrow \infty} 1-\left( \frac{y+\alpha}{\alpha-1} \right)^{-\alpha}$$</p>
<p>I have tried inverting the ratio and came up with the following expression:</p>
<p>$$ 1 - \lim_{\alpha \rightarrow \infty} \left( 1-\frac{y+1}{y+\alpha}\right)^\alpha$$</p>
<p>Which roughly resembles the exponential function:</p>
<p>$$\lim_{\alpha \rightarrow \infty} \left( 1- \frac{x}{\alpha} \right)^\alpha = \exp(-x)$$</p>
<p>Except for the additive term in the denominator. Is there a u-substitution type trick to this?</p>
| Rogelio Molina | 87,320 | <p>You can try the $u =y + \alpha$ substitution, in this way you don't need L'Hopital's rule: </p>
<p>\begin{eqnarray}
\lim_{\alpha \to \infty} \left( 1- \frac{1+y}{y+\alpha} \right)^{\alpha} = \lim_{u \to \infty} \left( 1- \frac{1+y}{u} \right)^{u-y} = \lim_{u \to \infty} \left( 1- \frac{1+y}{u} \right)^{u}/ \lim_{u\to \infty} \left(1 -\frac{1+y}{u} \right)^y = e^{-(1+y)}/1 = e^{-(1+y)}
\end{eqnarray}</p>
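Numerically, the convergence to $1-e^{-(1+y)}$ is easy to observe (a finite check for one value of $y$, not a proof):

```python
import math

def original(y, alpha):
    # 1 - ((y+alpha)/(alpha-1))^(-alpha) from the question
    return 1 - ((y + alpha) / (alpha - 1)) ** (-alpha)

y = 2.0
target = 1 - math.exp(-(1 + y))    # claimed limit: 1 - e^{-(1+y)}
for alpha in (1e2, 1e4, 1e6):
    # the error shrinks roughly like a constant times 1/alpha
    assert abs(original(y, alpha) - target) < (1 + y) ** 2 / alpha
print(original(y, 1e6), target)
```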
|
3,965,164 | <p>I know the standard and expanded forms of the equation of the circle in the simple 2d space,</p>
<p><span class="math-container">${(x-a)}^2+{(y-b)}^2=r^2$</span></p>
<p><span class="math-container">$x^2-2ax+y^2-2by=c$</span></p>
<p>So, in 3D space, what are the equations for a circle lying in an arbitrary plane, and what is the 3D version of the polar form of the circle equation</p>
<p><span class="math-container">$$(a+r\cos(t),\, b+r\sin(t))$$</span></p>
<p>How do the two equations above look in 3D space, and are there more abstract formulae for the circle equation in more than 3 dimensions?</p>
| azif00 | 680,927 | <p>For example, if <span class="math-container">$\mathbb Q = \{q_n\}_{n \in \mathbb N}$</span>, then <span class="math-container">$$\mathbb R \setminus \{q_0\} \supseteq \mathbb R \setminus \{q_0,q_1\} \supseteq \mathbb R \setminus \{q_0,q_1,q_2\} \supseteq \cdots$$</span> and <span class="math-container">$$\bigcap_{n \in \mathbb N} (\mathbb R \setminus \{q_0,\dots,q_n\}) =
\mathbb R \setminus \bigg( \bigcup_{n \in \mathbb N}\{q_0,\dots,q_n\} \bigg) = \mathbb R \setminus \mathbb Q.$$</span>
Also, you forgot the words "These conditions are not enough to identify exactly one sequence of sets" in @Gae. S.'s comment. Another comment: there is no algorithm for approaching this kind of question.</p>
<p>This example came to my mind when I saw that <span class="math-container">$\bigcap_{n \in \mathbb N}A_n = \mathbb R \setminus \mathbb Q$</span> is equivalent to <span class="math-container">$\bigcup_{n \in \mathbb N} (\mathbb R \setminus A_n) = \mathbb Q$</span>, thanks to De Morgan's laws.</p>
|
3,298,516 | <p>I have trouble with understanding proof of next theorem:</p>
<blockquote>
<p>Let <span class="math-container">$X,Y \in L_{2} ( \Omega, P)$</span>. Then
<span class="math-container">$$ | \mathbb{E} (XY) | \le \sqrt{\mathbb{E} X^{2} \mathbb{E} Y^{2}} .$$</span></p>
<p>Proof:
Let <span class="math-container">$\Omega = \{ \omega_{n} \colon n \in\mathbb{N} \}$</span>. Then, for every <span class="math-container">$n \in\mathbb{N}$</span>
<span class="math-container">$$ | \mathbb{E} (XY) | \le \mathbb{E} | XY | = \sum_{k=1}^{n} | X (\omega_{k}) | \cdot | Y (\omega_{k}) | \cdot P(\{\omega_{k}\})$$</span>
Further, using the Cauchy–Schwarz inequality and letting <span class="math-container">$n \to \infty$</span>, the proof is done.</p>
</blockquote>
<p>This
<span class="math-container">$$ | \mathbb{E} (XY) | \le \mathbb{E} | XY | = \sum_{k=1}^{n} |X (\omega_{k})| \cdot |Y(\omega_{k})| P(\{\omega_{k}\})$$</span> is what is bothering me. How can we observe <span class="math-container">$
\mathbb{E} | XY | $</span> as a finite sum?</p>
| Dominik Kutek | 601,852 | <p>The step with the equality <span class="math-container">$|\mathbb{E} (XY) | \leq \mathbb{E} | XY | = \sum_{k=1}^{n} ( | X (\omega_{k}) | | Y (\omega_{k}) | P({\omega_{k}}))$</span> isn't correct, since there can be more than <span class="math-container">$n$</span> elements of <span class="math-container">$\Omega$</span> at which <span class="math-container">$X,Y$</span> attain nonzero values with nonzero probability (so you have to take a limit there, or argue differently).</p>
<p>I would do it as follows. Let <span class="math-container">$\Omega = \{ \omega_n : n\in \mathbb N \} $</span>. Let <span class="math-container">$p_k = \mathbb P(\omega_k), k\in \mathbb N $</span>. Given that, we can define an inner product:</p>
<p><span class="math-container">$ \rho(X,Y) = \sum_{k=1}^\infty X(\omega_k)Y(\omega_k)p_k = \sum_{k=1}^{\infty} (XY)(\omega_k)p_k = \mathbb E[XY] $</span></p>
<p>By Cauchy–Schwarz:</p>
<p><span class="math-container">$ |\mathbb E[XY] |^2 \leq \mathbb E[|X||Y|]^2 = \rho(|X|,|Y|)^2 \leq \rho(|X|,|X|)\rho(|Y|,|Y|) = (\sum_{k=1}^\infty |X|^2(\omega_k)p_k)(\sum_{k=1}^\infty |Y|^2(\omega_k)p_k) = (\sum_{k=1}^\infty X^2(\omega_k)p_k)(\sum_{k=1}^\infty Y^2(\omega_k)p_k) = \mathbb E[X^2] \mathbb E[Y^2]$</span></p>
<p>Taking square-root, you get what you wanted.</p>
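<p>As an aside (a sketch of my own, not from the answer): the inequality is easy to spot-check numerically on a small finite sample space with randomly chosen weights and values.</p>

```python
import random

random.seed(0)
n = 6
w = [random.random() for _ in range(n)]
total = sum(w)
p = [q / total for q in w]                      # a probability mass function on 6 points
X = [random.uniform(-2, 2) for _ in range(n)]   # values of X on the sample space
Y = [random.uniform(-2, 2) for _ in range(n)]   # values of Y on the sample space

E = lambda Z: sum(z * q for z, q in zip(Z, p))  # expectation under p
exy = E([x * y for x, y in zip(X, Y)])
ex2 = E([x * x for x in X])
ey2 = E([y * y for y in Y])
print(abs(exy) <= (ex2 * ey2) ** 0.5)
```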
|
3,298,516 | <p>I have trouble with understanding proof of next theorem:</p>
<blockquote>
<p>Let <span class="math-container">$X,Y \in L_{2} ( \Omega, P)$</span>. Then
<span class="math-container">$$ | \mathbb{E} (XY) | \le \sqrt{\mathbb{E} X^{2} \mathbb{E} Y^{2}} .$$</span></p>
<p>Proof:
Let <span class="math-container">$\Omega = \{ \omega_{n} \colon n \in\mathbb{N} \}$</span>. Then, for every <span class="math-container">$n \in\mathbb{N}$</span>
<span class="math-container">$$ | \mathbb{E} (XY) | \le \mathbb{E} | XY | = \sum_{k=1}^{n} | X (\omega_{k}) | \cdot | Y (\omega_{k}) | \cdot P(\{\omega_{k}\})$$</span>
Further, using the Cauchy–Schwarz inequality and letting <span class="math-container">$n \to \infty$</span>, the proof is done.</p>
</blockquote>
<p>This
<span class="math-container">$$ | \mathbb{E} (XY) | \le \mathbb{E} | XY | = \sum_{k=1}^{n} |X (\omega_{k})| \cdot |Y(\omega_{k})| P(\{\omega_{k}\})$$</span> is what is bothering me. How can we observe <span class="math-container">$
\mathbb{E} | XY | $</span> as a finite sum?</p>
| drhab | 75,923 | <p>This is not really an answer to your question, but is too much for a comment.</p>
<p>I think it is better just to drop this proof (which is dubious and at least not general).</p>
<hr>
<p>Observe that for every <span class="math-container">$t\in\mathbb R$</span> we have:<span class="math-container">$$\mathbb E(tX-Y)^2\geq0$$</span> or equivalently:<span class="math-container">$$t^2\mathbb EX^2-2t\,\mathbb E(XY)+\mathbb EY^2\geq0\tag1$$</span></p>
<p>This indicates that the discriminant here is not positive:<span class="math-container">$$4(\mathbb E(XY))^2-4\,\mathbb EX^2\,\mathbb EY^2\leq0$$</span></p>
<p>Hence:<span class="math-container">$$|\mathbb E(XY)|\leq\sqrt{\mathbb EX^2\,\mathbb EY^2}$$</span></p>
<hr>
<p>P.S. If <span class="math-container">$\mathbb EX^2=0$</span> then we are not dealing with a quadratic in <span class="math-container">$(1)$</span>, but that case is trivial: then <span class="math-container">$X=0$</span> almost surely, so automatically <span class="math-container">$\mathbb E(XY)=0$</span>.</p>
|
2,653,483 | <p>Let $a =111 \ldots 1$, where the digit $1$ appears $2018$ consecutive times.</p>
<p>Let $b = 222 \ldots 2$, where the digit $2$ appears $1009$ consecutive times.</p>
<p>Without using a calculator, evaluate $\sqrt{a − b}$.</p>
| Donald Splutterwit | 404,247 | <p>Let
\begin{eqnarray*}
N=\sum_{i=0}^{1008} 10^i =\underbrace{11\cdots 1}_{\text{1009 ones}} \\
M=10^{1009}+1
\end{eqnarray*}
note that $M-2=9N$ and<br>
\begin{eqnarray*}
a=NM \\
b=2N
\end{eqnarray*}
So $a-b=9N^2$ and
\begin{eqnarray*}
\sqrt{a-b} = 3N=\underbrace{\color{red}{33\cdots 3}}_{\text{1009 threes}}.
\end{eqnarray*}</p>
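<p>The identity is easy to confirm with exact integer arithmetic (a check of my own, not part of the answer):</p>

```python
from math import isqrt

a = int("1" * 2018)   # 2018 ones
b = int("2" * 1009)   # 1009 twos
r = isqrt(a - b)      # exact integer square root of a - b

assert r * r == a - b           # a - b is a perfect square
assert r == int("3" * 1009)     # and its root is 1009 threes
print("ok")
```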
|
2,653,483 | <p>Let $a =111 \ldots 1$, where the digit $1$ appears $2018$ consecutive times.</p>
<p>Let $b = 222 \ldots 2$, where the digit $2$ appears $1009$ consecutive times.</p>
<p>Without using a calculator, evaluate $\sqrt{a − b}$.</p>
| Tiago Emilio Siller | 526,875 | <p>$a= \overbrace{11\ldots11}^{2018} = \dfrac{10^{2018}-1}{9}$</p>
<p>$b= \overbrace{22\ldots22}^{1009} = 2\cdot\dfrac{10^{1009}-1}{9}$</p>
<p>$\Rightarrow a-b = \dfrac{10^{2018}-1}{9}-2\cdot\dfrac{10^{1009}-1}{9} = \dfrac{10^{2018}-2\cdot10^{1009}+1}{9} = \left(\dfrac{10^{1009}-1}{3} \right)^2$</p>
<p>$\Rightarrow \sqrt{a-b} = \dfrac{10^{1009}-1}{3} = \overbrace{33\ldots33}^{1009}$</p>
|
2,209,438 | <p>I am trying to find this limit,</p>
<blockquote>
<p>$$\lim_{x \rightarrow 0} \frac{1}{x^4} \int_{\sin{x}}^{x} \arctan{t}dt$$</p>
</blockquote>
<p>Using the fundamental theorem of calculus, part 1:
$\arctan$ is a continuous function, so
$$F(x):=\int_0^x \arctan{t}\,dt$$
is differentiable with $F'(x)=\arctan x$, and I can rewrite the limit as
$$\lim_{x \rightarrow 0} \frac{F(x)-F(\sin x)}{x^4}$$</p>
<p>I keep getting $+\infty$, but when I actually integrate $\arctan$ (integration by parts) and plot the function inside the limit, the graph tends to $-\infty$ as $x \rightarrow 0+$.</p>
<p>I tried using l'Hospital's rule, but the calculation gets tedious.</p>
<p>Can anyone give me hints?</p>
<p><strong>EDIT</strong></p>
<p>I kept thinking about the problem, and I thought of power series and solved it, returned to the site and found 3 great answers. Thank You!</p>
| Paramanand Singh | 72,031 | <p>Note that by the Mean Value Theorem for integrals we have $$\int_{\sin x}^{x}\arctan t\,dt = (x - \sin x)\arctan c$$ for some $c$ between $x$ and $\sin x$. Then we have $$\lim_{x \to 0}\frac{1}{x^{4}}\int_{\sin x}^{x}\arctan t\,dt = \lim_{x \to 0}\frac{x - \sin x}{x^{3}}\cdot\frac{\arctan c}{x}$$ The first factor on the right tends to $1/6$ (via L'Hospital's Rule or Taylor series) and we show that next factor tends to $1$. For this we assume that $x \to 0^{+}$ so that $\sin x < c < x$ and therefore $$\arctan \sin x < \arctan c < \arctan x$$ and dividing by $x$ we get $$\frac{\arctan \sin x}{\sin x}\cdot\frac{\sin x}{x} < \frac{\arctan c}{x} < \frac{\arctan x}{x}$$ By Squeeze theorem we see that $(\arctan c)/x \to 1$ as $x \to 0^{+}$. A similar argument can be given for $x \to 0^{-}$ and we have the desired limit as $1/6$.</p>
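<p>A numerical spot-check of the value <span class="math-container">$1/6$</span> (my own snippet, using the standard antiderivative <span class="math-container">$\int \arctan t\,dt = t\arctan t - \tfrac12\ln(1+t^2)$</span>):</p>

```python
import math

def F(t):
    # antiderivative of arctan t
    return t * math.atan(t) - 0.5 * math.log(1 + t * t)

x = 1e-2
ratio = (F(x) - F(math.sin(x))) / x**4
print(ratio, 1 / 6)  # ratio should be close to 1/6
```

<p>Taking <span class="math-container">$x$</span> much smaller than <span class="math-container">$10^{-2}$</span> would run into floating-point cancellation, since <span class="math-container">$F(x)-F(\sin x)$</span> is tiny compared to each term.</p>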
|
2,934,028 | <blockquote>
<p>A particle moves along the top of the
parabola <span class="math-container">$y^2 = 2x$</span> from left to right at a constant speed of 5 units
per second. Find the velocity of the particle as it moves through
the point <span class="math-container">$(2, 2)$</span>. </p>
</blockquote>
<p>So I isolate <span class="math-container">$y$</span>, giving me <span class="math-container">$y=\sqrt{2x}$</span>. I then find the derivative of <span class="math-container">$y$</span>, which is <span class="math-container">$1/\sqrt{2x}$</span>. And <span class="math-container">$\sqrt{2x}=y$</span>, so the derivative is also equal to <span class="math-container">$1/y$</span>. At <span class="math-container">$(2,2)$</span> the derivative is 0.5. Not sure where to go from there though. Any help is appreciated. </p>
| Peter Szilas | 408,605 | <p><span class="math-container">$(2y) dy/dx=2$</span>; <span class="math-container">$dy/dx =1/y$</span>.</p>
<p>Slope at <span class="math-container">$(2,2)$</span>: <span class="math-container">$dy/dx =1/2= \tan \alpha$</span>.</p>
<p><span class="math-container">$\cos \alpha =2/\sqrt{5}$</span>, <span class="math-container">$\sin \alpha =1/\sqrt{5}.$</span>
(Pythagoras).</p>
<p><span class="math-container">$v_x= v \cos \alpha$</span>; <span class="math-container">$v_y = v \sin \alpha$</span>, where <span class="math-container">$v = |\vec v|$</span> .</p>
<p><span class="math-container">$\vec v = (v_x,v_y) $</span>.</p>
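<p>Putting in the numbers myself (speed <span class="math-container">$5$</span>, slope <span class="math-container">$1/2$</span>, which the answer leaves to the reader), this gives <span class="math-container">$\vec v = (2\sqrt5,\sqrt5) \approx (4.47, 2.24)$</span>:</p>

```python
import math

v = 5.0             # given speed
slope = 0.5         # dy/dx at (2, 2)
alpha = math.atan(slope)
vx, vy = v * math.cos(alpha), v * math.sin(alpha)
print(vx, vy)       # (2*sqrt(5), sqrt(5)), about (4.472, 2.236)

assert math.isclose(math.hypot(vx, vy), v)   # speed is still 5
assert math.isclose(vy / vx, slope)          # direction matches the tangent
```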
|