| qid | question | author | author_id | answer |
|---|---|---|---|---|
1,540,082 | <p>I need to prove that the following sequence converges as $n\to\infty$:
$\sum\limits_{k=n+1}^{2n} 1/k$</p>
<p>The problem is that I've only ever seen sums running from $i=1$ to $n$, for example. I'm confused because not only is the upper limit $2n$ alien to me, but the lower limit also ties $k$ to $n$.</p>
<p>So far I've tried substituting $k = n+1$ to simplify the expression, but it led nowhere. I also tried entering this into a spreadsheet in the hope of seeing where the limit might be, but the result just grows and grows. I almost expected this, because the sequence is reminiscent of the harmonic series, which diverges. The key difference is obviously the beginning and end of the sum, but I can't figure out how to tackle this.</p>
<p>Any ideas or pointers are appreciated.</p>
| frog | 84,997 | <p>Observe that the sum has $n$ terms, so you can bound it by $n$ times the biggest term, which gives
$$
\sum_{k=n+1}^{2n}\frac{1}{k}\leqslant \frac{n}{n+1}\stackrel{n\to\infty}{\longrightarrow}1
$$
Hence the sequence is bounded, and since it is monotonically increasing, the convergence follows.</p>
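<p>A quick numerical check (a Python sketch, not part of the original answer) confirms both the bound and the monotonicity; in fact the sums creep up to $\ln 2 \approx 0.693$:</p>

```python
from math import log

def tail_sum(n):
    """Sum of 1/k for k = n+1, ..., 2n (exactly n terms)."""
    return sum(1.0 / k for k in range(n + 1, 2 * n + 1))

sums = [tail_sum(n) for n in range(1, 2001)]

# Each value respects the bound n/(n+1) (up to a tiny float tolerance) ...
assert all(s <= n / (n + 1.0) + 1e-12 for n, s in enumerate(sums, start=1))
# ... and the sequence is monotonically increasing,
assert all(a < b for a, b in zip(sums, sums[1:]))
# so it converges; the limit turns out to be ln 2.
assert abs(sums[-1] - log(2)) < 1e-3
```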
|
3,779,674 | <p>I've been working on this question for some time now, and have not had any significant progress towards a solution:</p>
<blockquote>
<p>Let <span class="math-container">$\Omega = \{z = x +iy \in \mathbb{C} : |y| \leq 1\}.$</span> If <span class="math-container">$f(z) = z^2 +2$</span>, then draw a sketch of
<span class="math-container">$$f(\Omega) = \{ f(z) : z \in \Omega\}.$$</span>
Justify your answer.</p>
</blockquote>
<p>Naturally I first determined that the set <span class="math-container">$\Omega$</span> essentially refers to the strip of complex numbers in the plane whose imaginary part lies between <span class="math-container">$-1$</span> and <span class="math-container">$1$</span>, so I then set about attempting to plot the particular set. However, I'm not sure how to go about drawing this — I tried 'mapping' random points on the strip by drawing arrows from <span class="math-container">$z$</span> to <span class="math-container">$f(z)$</span>, and found some success:</p>
<ul>
<li>The purely imaginary numbers 'map' to the region <span class="math-container">$[1, 2)$</span> on the real line.</li>
<li>The purely real numbers 'map' to the region <span class="math-container">$[0, \infty)$</span> on the real line.</li>
<li>The complex numbers of the form <span class="math-container">$x+i$</span> trace out the path of the curve <span class="math-container">$y=2\sqrt{x+1}$</span>, as would be expected.</li>
</ul>
<p>After this, though, I'm having a really hard time trying to find any sort of pattern between these observations, and I'm nowhere near the drawing of the region that I'd like to produce. The closest I've come is this crude sketch on Desmos.</p>
<p> <a href="https://i.stack.imgur.com/jVIa4l.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jVIa4l.png" alt="Sketch" /></a></p>
<hr />
<p>Just as an add-on, this question is expected to be solved on a normal sheet of paper. It appeared in <a href="https://www.isical.ac.in/%7Eadmission/IsiAdmission2017/PreviousQuestion/BStat-BMath-UGB-2019.pdf" rel="nofollow noreferrer">this test</a> last year which was expected to be solved by high-schoolers.</p>
| Dietrich Burde | 83,966 | <p><em>Hint:</em> The result is a matrix with integer coefficients, namely
<span class="math-container">$$
\begin{pmatrix} 2070 & 3722 & 4631 \cr
3722 & 6701 & 8353 \cr
4631 & 8353 & 10423 \end{pmatrix}
$$</span>
So you need to evaluate at a certain point.</p>
|
3,779,674 | <p>I've been working on this question for some time now, and have not had any significant progress towards a solution:</p>
<blockquote>
<p>Let <span class="math-container">$\Omega = \{z = x +iy \in \mathbb{C} : |y| \leq 1\}.$</span> If <span class="math-container">$f(z) = z^2 +2$</span>, then draw a sketch of
<span class="math-container">$$f(\Omega) = \{ f(z) : z \in \Omega\}.$$</span>
Justify your answer.</p>
</blockquote>
<p>Naturally I first determined that the set <span class="math-container">$\Omega$</span> essentially refers to the strip of complex numbers in the plane whose imaginary part lies between <span class="math-container">$-1$</span> and <span class="math-container">$1$</span>, so I then set about attempting to plot the particular set. However, I'm not sure how to go about drawing this — I tried 'mapping' random points on the strip by drawing arrows from <span class="math-container">$z$</span> to <span class="math-container">$f(z)$</span>, and found some success:</p>
<ul>
<li>The purely imaginary numbers 'map' to the region <span class="math-container">$[1, 2)$</span> on the real line.</li>
<li>The purely real numbers 'map' to the region <span class="math-container">$[0, \infty)$</span> on the real line.</li>
<li>The complex numbers of the form <span class="math-container">$x+i$</span> trace out the path of the curve <span class="math-container">$y=2\sqrt{x+1}$</span>, as would be expected.</li>
</ul>
<p>After this, though, I'm having a really hard time trying to find any sort of pattern between these observations, and I'm nowhere near the drawing of the region that I'd like to produce. The closest I've come is this crude sketch on Desmos.</p>
<p> <a href="https://i.stack.imgur.com/jVIa4l.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jVIa4l.png" alt="Sketch" /></a></p>
<hr />
<p>Just as an add-on, this question is expected to be solved on a normal sheet of paper. It appeared in <a href="https://www.isical.ac.in/%7Eadmission/IsiAdmission2017/PreviousQuestion/BStat-BMath-UGB-2019.pdf" rel="nofollow noreferrer">this test</a> last year which was expected to be solved by high-schoolers.</p>
| mechanodroid | 144,766 | <p>You can calculate the last expression as
<span class="math-container">$$(A+I)^9A+A = (((A+I)^3)^3+I)A = \begin{bmatrix} 2070 & 3722 & 4631 \cr
3722 & 6701 & 8353 \cr
4631 & 8353 & 10423 \end{bmatrix}$$</span>
with only <span class="math-container">$4$</span> matrix multiplications to build <span class="math-container">$((A+I)^3)^3$</span>, one final product with <span class="math-container">$A$</span>, and <span class="math-container">$2$</span> additions with the identity <span class="math-container">$I$</span>, which shouldn't be hard.</p>
|
3,779,674 | <p>I've been working on this question for some time now, and have not had any significant progress towards a solution:</p>
<blockquote>
<p>Let <span class="math-container">$\Omega = \{z = x +iy \in \mathbb{C} : |y| \leq 1\}.$</span> If <span class="math-container">$f(z) = z^2 +2$</span>, then draw a sketch of
<span class="math-container">$$f(\Omega) = \{ f(z) : z \in \Omega\}.$$</span>
Justify your answer.</p>
</blockquote>
<p>Naturally I first determined that the set <span class="math-container">$\Omega$</span> essentially refers to the strip of complex numbers in the plane whose imaginary part lies between <span class="math-container">$-1$</span> and <span class="math-container">$1$</span>, so I then set about attempting to plot the particular set. However, I'm not sure how to go about drawing this — I tried 'mapping' random points on the strip by drawing arrows from <span class="math-container">$z$</span> to <span class="math-container">$f(z)$</span>, and found some success:</p>
<ul>
<li>The purely imaginary numbers 'map' to the region <span class="math-container">$[1, 2)$</span> on the real line.</li>
<li>The purely real numbers 'map' to the region <span class="math-container">$[0, \infty)$</span> on the real line.</li>
<li>The complex numbers of the form <span class="math-container">$x+i$</span> trace out the path of the curve <span class="math-container">$y=2\sqrt{x+1}$</span>, as would be expected.</li>
</ul>
<p>After this, though, I'm having a really hard time trying to find any sort of pattern between these observations, and I'm nowhere near the drawing of the region that I'd like to produce. The closest I've come is this crude sketch on Desmos.</p>
<p> <a href="https://i.stack.imgur.com/jVIa4l.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jVIa4l.png" alt="Sketch" /></a></p>
<hr />
<p>Just as an add-on, this question is expected to be solved on a normal sheet of paper. It appeared in <a href="https://www.isical.ac.in/%7Eadmission/IsiAdmission2017/PreviousQuestion/BStat-BMath-UGB-2019.pdf" rel="nofollow noreferrer">this test</a> last year which was expected to be solved by high-schoolers.</p>
| Disintegrating By Parts | 112,478 | <p><span class="math-container">\begin{align}
\det(\lambda I-A)&=\left|
\begin{array}{ccc}
\lambda & -1 & 0 \\
-1 & \lambda & -1 \\
0 & -1 & \lambda-1
\end{array}
\right| \\
& = \lambda(\lambda(\lambda-1)-1)+((-1)(\lambda-1)) \\
& = \lambda^2(\lambda-1)-(\lambda-1) \\
& =(\lambda-1)^2(\lambda+1).
\end{align}</span>
Therefore <span class="math-container">$(A-I)^2(A+I)=0$</span>. You are asked to compute <span class="math-container">$(A+I)^{10}(A-I)^2+A$</span>, which must be the same as <span class="math-container">$A$</span> because <span class="math-container">$(A+I)^{10}(A-I)^2$</span> has <span class="math-container">$(A-I)^2(A+I)$</span> as a factor.</p>
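<p>The mechanics can be sanity-checked numerically. The matrix below is only a hypothetical stand-in whose characteristic polynomial is $(\lambda-1)^2(\lambda+1)$ (it is <em>not</em> the $A$ from the exercise), but it shows why any polynomial multiple of $(A-I)^2(A+I)$ vanishes:</p>

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(X, p):
    n = len(X)
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(p):
        R = matmul(R, X)
    return R

def matadd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X))] for i in range(len(X))]

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# Hypothetical A with eigenvalues 1, 1, -1, i.e. char. poly (λ-1)²(λ+1);
# NOT the matrix from the original question.
A = [[1, 1, 0], [0, 1, 0], [0, 0, -1]]

AmI = matadd(A, [[-x for x in row] for row in I3])   # A - I
ApI = matadd(A, I3)                                  # A + I

# Cayley–Hamilton for this A: (A - I)²(A + I) = 0, and since polynomials
# in A commute, (A + I)^10 (A - I)² vanishes as well.
assert matmul(matpow(AmI, 2), ApI) == [[0] * 3 for _ in range(3)]
result = matadd(matmul(matpow(ApI, 10), matpow(AmI, 2)), A)
assert result == A                                   # the whole expression is A
```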
|
2,232,677 | <p>A parallelogram $ABCD$ is given, where $E$ is the midpoint of $BC$ and $F$ is a point on $AD$ such that $|FD| = 3|AF|$. $G$ is the point where $BF$ and $AE$ intersect.</p>
<p>Express the vector $AG$ in terms of vectors $AB$ and $AD$.</p>
<p>My solution to the problem is the following: Impose an <em>affine transformation</em> so that $ABCD$ becomes a unit square with point $A$ at the origin. Then $BF$ lies on the line $y = \frac{1}{4} - \frac{1}{4}x$ and $AE$ on $y = \frac{1}{2}x$. The lines' intersection gives me the required coefficients to express $AG$ in terms of $AB$ and $AD$. </p>
<p>My question is how would you solve this problem without the affine transformation? The reason I'm asking is that this problem was given at an early stage of the course, before affine transformations were introduced. So I want to know if there is a simpler or "more intuitive" way to solve it which I haven't learned.</p>
| DHMO | 413,023 | <p>Forgive my lack of vector symbols below.</p>
<hr>
<p>$AE = AB+\dfrac12AD$</p>
<p>$BF = -AB+\dfrac14AD$</p>
<hr>
<p>Let $AG = hAB+kAD$.</p>
<p>Since $AG // AE$, we have $h=2k$.</p>
<p>Since $BG // BF$, we have $-(h-1)=4k$.</p>
<hr>
<p>Solving for $h$ and $k$ gives $h=\dfrac13$ and $k = \dfrac16$, so:</p>
<p>$$AG=\dfrac13AB+\dfrac16AD$$</p>
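<p>A coordinate sanity check (a small Python sketch using the asker's unit-square model, $AB=(1,0)$ and $AD=(0,1)$) reproduces $h=\dfrac13$, $k=\dfrac16$ exactly:</p>

```python
from fractions import Fraction as Fr

# Unit-square model: A=(0,0), B=(1,0), C=(1,1), D=(0,1), so AB=(1,0), AD=(0,1).
# E = midpoint of BC = (1, 1/2);  F on AD with |FD| = 3|AF|, so F = (0, 1/4).
# Line AE: y = x/2.  Line BF (through B=(1,0) and F): y = 1/4 - x/4.
# Intersection: x/2 = 1/4 - x/4  =>  2x = 1 - x  =>  x = 1/3, y = 1/6.
x = Fr(1, 3)
y = x / 2
assert y == Fr(1, 4) - x / 4          # G really lies on both lines

h, k = x, y                           # AG = h*AB + k*AD in this basis
assert (h, k) == (Fr(1, 3), Fr(1, 6))
```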
|
3,933,135 | <p>Let <span class="math-container">$m$</span> and <span class="math-container">$n$</span> be positive integers.</p>
<blockquote>
<p>Can it be shown that, for every <span class="math-container">$m\ge 5$</span>,</p>
<p><span class="math-container">$$\sum_{i=1}^ni^m-m^i>0\iff n=2,3,...,m$$</span></p>
</blockquote>
<p>Example: let <span class="math-container">$m=5$</span>, choose any <span class="math-container">$n$</span> between <span class="math-container">$2$</span> to <span class="math-container">$5$</span>, now let <span class="math-container">$n=2$</span> then <span class="math-container">$\sum_{i=1}^2i^5-5^i=(1^5-5^1)+(2^5-5^2)=-4+7=3>0$</span></p>
<p>A further observation: <span class="math-container">$\sum_{i=1}^ni^m-m^i=0$</span> holds only for <span class="math-container">$(m,n)=\{(1,1),(2,3),(2,4)\}$</span></p>
<p>Pari/GP check:</p>
<pre><code>for(m=5,50,for(n=1,50,if(sum(i=1,n,i^m-m^i)>0,print([m,n]))))
</code></pre>
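<p>The Pari/GP loop above ports directly to Python (a rough translation, kept in exact integer arithmetic); for each $m$ it finds a positive sum exactly for $n = 2, \ldots, m$:</p>

```python
def positive_pairs(m_max, n_max):
    """Pairs (m, n) with m >= 5 and sum_{i=1}^{n} (i**m - m**i) > 0,
    mirroring the Pari/GP loop in the question."""
    return [(m, n)
            for m in range(5, m_max + 1)
            for n in range(1, n_max + 1)
            if sum(i**m - m**i for i in range(1, n + 1)) > 0]

hits = positive_pairs(12, 30)
for m in range(5, 13):
    # exactly n = 2, ..., m for each m in the tested range
    assert [n for mm, n in hits if mm == m] == list(range(2, m + 1))
```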
| John Omielan | 602,049 | <p>You're asking about proving for every <span class="math-container">$m \ge 5$</span> that</p>
<p><span class="math-container">$$\sum_{i=1}^{n}(i^m - m^i) \gt 0 \iff n = 2, 3, \, \ldots \, , m \tag{1}\label{eq1A}$$</span></p>
<p>First, consider the <span class="math-container">$\impliedby$</span> direction, starting at <span class="math-container">$n = 2$</span>. Define</p>
<p><span class="math-container">$$f_1(m) = 1 - m + 2^m - m^2 \tag{2}\label{eq2A}$$</span></p>
<p>As you've already shown, <span class="math-container">$f_1(5) = 3 \gt 0$</span>. Differentiating <span class="math-container">$f_1(m)$</span> gives</p>
<p><span class="math-container">$$f'_1(m) = -1 + \ln(2)2^{m} - 2m \tag{3}\label{eq3A}$$</span></p>
<p>You can easily verify <span class="math-container">$f'_1(5) \gt 0$</span> and that <span class="math-container">$f'_1(m)$</span> is an increasing function of <span class="math-container">$m$</span> (e.g., by induction using <span class="math-container">$2m \gt m + 1$</span>). Thus, <span class="math-container">$f_1(m) \gt 0$</span> for <span class="math-container">$m \ge 5$</span>. Next, since the natural logarithm is a strictly increasing function, consider the values of <span class="math-container">$i$</span> where</p>
<p><span class="math-container">$$i^m \gt m^i \iff m\ln(i) \gt i\ln(m) \tag{4}\label{eq4A}$$</span></p>
<p>Define</p>
<p><span class="math-container">$$f_2(i) = m\ln(i) - i\ln(m) \tag{5}\label{eq5A}$$</span></p>
<p>It's fairly easy to confirm <span class="math-container">$f_2(3) \gt 0$</span> for all <span class="math-container">$m \ge 5$</span>. Differentiating here gives</p>
<p><span class="math-container">$$f'_2(i) = \frac{m}{i} - \ln(m) \tag{6}\label{eq6A}$$</span></p>
<p>With <span class="math-container">$m = 5$</span>, this gives <span class="math-container">$f'_2(3) \approx 0.057$</span>. Also, <span class="math-container">$g(m) = \frac{m}{3} - \ln(m) \implies g'(m) = \frac{1}{3} - \frac{1}{m} \gt 0$</span>, so <span class="math-container">$f'_2(3) \gt 0$</span> for all <span class="math-container">$m \ge 5$</span>. Note <span class="math-container">$f'_2(i)$</span> is a monotonically decreasing function, with <span class="math-container">$\lim_{i \to \infty}f'_2(i) = -\ln{m} \lt 0$</span>. Thus, there's only <em>one</em> value of <span class="math-container">$i$</span> where <span class="math-container">$f'_2(i) = 0$</span>.</p>
<p>We therefore have <span class="math-container">$f_2(3) \gt 0$</span>, with <span class="math-container">$f_2(i)$</span> reaching a maximum at <span class="math-container">$i = \frac{m}{\ln(m)} \lt m$</span> and then decreasing. Since <span class="math-container">$f_2(m) = 0$</span>, this means <span class="math-container">$f_2(i) \gt 0$</span> for all <span class="math-container">$3 \le i \le m - 1$</span>. This shows \eqref{eq4A} is true, so the value being summed in \eqref{eq1A} is positive, for all these <span class="math-container">$i$</span>. As such, the left side summation in \eqref{eq1A} is positive for all <span class="math-container">$3 \le n \le m$</span>.</p>
<p>Combining this with <span class="math-container">$f_1(m) \gt 0$</span> in \eqref{eq2A} means the <span class="math-container">$\impliedby$</span> direction of \eqref{eq1A} is true for all <span class="math-container">$2 \le n \le m$</span>.</p>
<hr />
<p>With the <span class="math-container">$\implies$</span> direction, <span class="math-container">$n = 1$</span> gives a summation of <span class="math-container">$1 - m \lt 0$</span>. As discussed above, with \eqref{eq5A} giving <span class="math-container">$f_2(i) \lt 0$</span> for <span class="math-container">$i \gt m$</span>, this means each extra term being summed is negative for <span class="math-container">$n \ge m + 1$</span>. Thus, if the summation is negative for <span class="math-container">$n = m + 1$</span>, it'll be negative for all <span class="math-container">$n \ge m + 1$</span>. To check on this, using the <span class="math-container">$m^i$</span> summation being of a geometric series gives</p>
<p><span class="math-container">$$\begin{equation}\begin{aligned}
\sum_{i=1}^{m+1}(i^m - m^i) & = \sum_{i=1}^{m+1}i^m - \sum_{i=1}^{m+1}m^i \\
& = \sum_{i=1}^{m+1}i^m - \frac{m(m^{m+1} - 1)}{m - 1}
\end{aligned}\end{equation}\tag{7}\label{eq7A}$$</span></p>
<p>Since <span class="math-container">$i^{m} \lt x^{m}$</span> for <span class="math-container">$i \lt x \le i + 1$</span> means <span class="math-container">$\int_{i}^{i+1}i^{m}dx \lt \int_{i}^{i+1}x^m dx \implies i^{m} \lt \int_{i}^{i+1}x^m dx$</span>, then using this and combining the integrals for all <span class="math-container">$1 \le i \le m - 1$</span> in the summation gives</p>
<p><span class="math-container">$$\begin{equation}\begin{aligned}
& \sum_{i=1}^{m+1}i^m - \frac{m(m^{m+1} - 1)}{m - 1} \\
& \lt \int_{1}^{m}x^m dx + m^m + (m + 1)^m - \frac{m(m^{m+1} - 1)}{m - 1} \\
& = \left. \frac{x^{m+1}}{m + 1} \right|_{1}^{m} + m^m + (m + 1)^m - \frac{m(m^{m+1} - 1)}{m - 1}\\
& = \frac{m^{m+1} - 1}{m + 1} + m^m + (m + 1)^m - \frac{m(m^{m+1} - 1)}{m - 1} \\
& = \frac{m(m^{m}) + (m + 1)(m^m) + (m + 1)^{m+1} - 1}{m + 1} - \frac{m(m^{m+1} - 1)}{m - 1} \\
& = \frac{(m - 1)((2m + 1)(m^m) + (m + 1)^{m+1} - 1) - (m + 1)m(m^{m+1} - 1)}{(m + 1)(m - 1)} \\
\end{aligned}\end{equation}\tag{8}\label{eq8A}$$</span></p>
<p>Define</p>
<p><span class="math-container">$$\begin{equation}\begin{aligned}
f_3(m) & = (m - 1)((2m + 1)(m^m) + (m + 1)^{m+1} - 1) - (m + 1)m(m^{m+1} - 1) \\
& = (2m^2 - m - 1)(m^m) + (m - 1)(m + 1)^{m+1} - (m - 1) \\
& \; \; \; \; - (m^3 + m^2)(m^{m}) + (m^2 + m) \\
& = (-m^3 + m^2 - m - 1)(m^m) + (m - 1)(m + 1)^{m+1} + m^2 + 1
\end{aligned}\end{equation}\tag{9}\label{eq9A}$$</span></p>
<p>This gives <span class="math-container">$f_3(5) = -144600$</span>. Next, simplify \eqref{eq9A} by using <span class="math-container">$0 \gt (-m - 1)(m^m) + m^2 + 1$</span> to get</p>
<p><span class="math-container">$$\begin{equation}\begin{aligned}
f_3(m) & \lt (-m^3 + m^2)(m^m) + (m - 1)(m + 1)^{m+1} \\
& = (-m + 1)(m^{m+2}) + (m - 1)(m + 1)^{m+1} \\
& = (m - 1)((m + 1)^{m+1} - m^{m+2})
\end{aligned}\end{equation}\tag{10}\label{eq10A}$$</span></p>
<p>As done before using the natural logarithm,</p>
<p><span class="math-container">$$(m + 1)^{m+1} \lt m^{m+2} \iff (m + 1)\ln(m + 1) \lt (m + 2)\ln(m) \tag{11}\label{eq11A}$$</span></p>
<p>Next, define</p>
<p><span class="math-container">$$f_4(m) = (m + 1)\ln(m + 1) - (m + 2)\ln(m) \tag{12}\label{eq12A}$$</span></p>
<p>Note <span class="math-container">$f_4(5) \approx -0.5155$</span>. Taking the derivative, plus using the logarithm <a href="https://en.wikipedia.org/wiki/Natural_logarithm#Properties" rel="nofollow noreferrer">property</a> that <span class="math-container">$\ln(x) \le x - 1$</span> for <span class="math-container">$x \gt 0$</span>, gives</p>
<p><span class="math-container">$$\begin{equation}\begin{aligned}
f'_4(m) & = \ln(m + 1) + \frac{m + 1}{m + 1} - \ln(m) - \frac{m + 2}{m} \\
& = 1 - \left(1 + \frac{2}{m}\right) + \ln\left(\frac{m + 1}{m}\right) \\
& = - \frac{2}{m} + \ln\left(1 + \frac{1}{m}\right) \\
& \le - \frac{2}{m} + \frac{1}{m} \\
& = -\frac{1}{m} \\
& \lt 0
\end{aligned}\end{equation}\tag{13}\label{eq13A}$$</span></p>
<p>This shows <span class="math-container">$f_4(m) \lt 0$</span> for all <span class="math-container">$m \ge 5$</span>, so <span class="math-container">$f_3(m) \lt 0$</span> in \eqref{eq10A}. This confirms <span class="math-container">$\sum_{i=1}^{m+1}i^m - \frac{m(m^{m+1} - 1)}{m - 1} \lt 0$</span> in \eqref{eq7A}, which proves the <span class="math-container">$\implies$</span> direction in \eqref{eq1A} always holds.</p>
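<p>The two quoted base-case values are easy to confirm numerically (a Python sketch of the checks, not part of the proof):</p>

```python
from math import log

def f3(m):
    # f3(m) = (-m^3 + m^2 - m - 1) m^m + (m - 1)(m + 1)^(m+1) + m^2 + 1
    return (-m**3 + m**2 - m - 1) * m**m + (m - 1) * (m + 1)**(m + 1) + m**2 + 1

def f4(m):
    # f4(m) = (m + 1) ln(m + 1) - (m + 2) ln(m)
    return (m + 1) * log(m + 1) - (m + 2) * log(m)

assert f3(5) == -144600                  # exact integer value quoted above
assert abs(f4(5) - (-0.5155)) < 5e-4     # f4(5) ≈ -0.5155
# f4 stays negative for larger m, as the derivative argument shows:
assert all(f4(m) < 0 for m in range(5, 500))
```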
|
2,911,849 | <p>Let $S_i$ be a countable family of sets and $f$ a continuous map (say everything is in the complex plane).
Is it true that
$$
f( \overline{ \cup_i S_i } ) =
\overline{ \cup_i f( S_i ) }
$$
or perhaps the inclusion $\subseteq$?</p>
| José Carlos Santos | 446,262 | <p>The inclusion holds. You have$$f\left(\overline{\bigcup_iU_i}\right)\subset\overline{f\left(\bigcup_iU_i\right)}$$and this has nothing to do with unions. It's just that, if $f$ is continuous and $A\subset D_f$, $f\left(\overline A\right)\subset\overline{f(A)}$.</p>
<p>However, the equality doesn't hold in general. Just take, say, $f(z)=\frac1{1+|z|^2}$. Then $f(\mathbb{C})=(0,1]$, which is not closed. But$$f(\mathbb{C})=f\left(\bigcup_{z\in\mathbb C}\overline{\{z\}}\right)=(0,1]$$whereas $\overline{\bigcup_{z\in\mathbb{C}}f\bigl(\{z\}\bigr)}$ is obviously a closed set (it turns out that it is $[0,1]$).</p>
|
63,601 | <p>I know that if a ring has a multiplicative identity, then the multiplicative identity must be unique. Are there simple-to-describe examples of rings with two (or more) multiplicative right-identities?</p>
| Qiaochu Yuan | 290 | <p>Take the semigroup ring of a semigroup with two or more multiplicative right identities. For example, the semigroup
$$S = \langle a, b | ab = aa = a, ba = bb = b \rangle$$
works (it is the universal example, so if it fails then no example can work). Multiplication in this semigroup can be described as follows: every word evaluates to the first letter in it. The resulting semigroup ring is literally the free ring on two not necessarily identical right identities. </p>
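<p>The universal example is small enough to tabulate (a Python sketch): multiplication is "keep the first letter", it is associative, and both elements are right identities:</p>

```python
from functools import reduce

# Left-zero semigroup on {'a', 'b'}:  x * y = x,  i.e. ab = aa = a, ba = bb = b.
def mul(x, y):
    return x

S = ['a', 'b']

# Associativity holds for all triples.
assert all(mul(mul(x, y), z) == mul(x, mul(y, z))
           for x in S for y in S for z in S)

# Both elements are right identities: x * e = x for every x.
right_identities = [e for e in S if all(mul(x, e) == x for x in S)]
assert right_identities == ['a', 'b']

# Any word evaluates to its first letter.
assert reduce(mul, "babba") == 'b'
```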
|
3,907,519 | <p>Prove that for all <span class="math-container">$a,b \in\mathbb Z$</span>, if <span class="math-container">$a+b$</span> is even then <span class="math-container">$a-b$</span> is even.</p>
<p>I started off by assuming that <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are both odd integers, <span class="math-container">$2k+1$</span> and <span class="math-container">$2m+1$</span>, so that their sum is <span class="math-container">$2(k+m+1)$</span>, which is an even integer. I then concluded that <span class="math-container">$a-b$</span> is <span class="math-container">$2(k-m)$</span>, which is also even; hence the statement is true.
My question is: is this enough to prove that the statement is true, or do I also have to prove it when both <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are even, and when one of <span class="math-container">$a$</span> and <span class="math-container">$b$</span> is even and the other is odd? Thanks.</p>
| Rubenscube | 848,029 | <p>I think this is definitely a good start, but it is always a good idea to cover all possible cases, which means mentioning even and even, but also even and odd. I hope this helps!</p>
<p>Another way to prove it is without using cases:
If <span class="math-container">$a+b$</span> is even, then there exists a whole number <span class="math-container">$n$</span> such that <span class="math-container">$a+b=2n$</span>. If you subtract <span class="math-container">$2b$</span> from both sides, you'll see that <span class="math-container">$a-b=2n-2b=2(n-b)$</span>, which is even, and that is exactly what you were trying to prove.</p>
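<p>Both arguments can be spot-checked exhaustively over a window of integers (a trivial Python sketch):</p>

```python
# Whenever a + b is even, a - b is even too.
window = range(-25, 26)
for a in window:
    for b in window:
        if (a + b) % 2 == 0:
            assert (a - b) % 2 == 0

# The case-free identity behind the second argument: a - b = (a + b) - 2b,
# and subtracting the even number 2b from an even number stays even.
a, b = 17, 5
assert a - b == (a + b) - 2 * b
```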
|
3,907,519 | <p>Prove that for all <span class="math-container">$a,b \in\mathbb Z$</span>, if <span class="math-container">$a+b$</span> is even then <span class="math-container">$a-b$</span> is even.</p>
<p>I started off by assuming that <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are both odd integers, <span class="math-container">$2k+1$</span> and <span class="math-container">$2m+1$</span>, so that their sum is <span class="math-container">$2(k+m+1)$</span>, which is an even integer. I then concluded that <span class="math-container">$a-b$</span> is <span class="math-container">$2(k-m)$</span>, which is also even; hence the statement is true.
My question is: is this enough to prove that the statement is true, or do I also have to prove it when both <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are even, and when one of <span class="math-container">$a$</span> and <span class="math-container">$b$</span> is even and the other is odd? Thanks.</p>
| Peter Szilas | 408,605 | <p><span class="math-container">$a+b$</span> is even.</p>
<p>Assume <span class="math-container">$a-b$</span> is odd.</p>
<p>Add: <span class="math-container">$(a+b)+(a-b)=2a;$</span></p>
<p>Adding an odd number <span class="math-container">$(a-b)$</span> to an even number <span class="math-container">$(a+b) $</span> gives an even number <span class="math-container">$2a$</span>. A contradiction.</p>
|
1,736,360 | <p>Using the binomial expansion, find the taylor series of the function $$f(z) = \frac{1}{(1+z)^3}$$ and find its radius of convergence.</p>
<hr>
<p>The solution is</p>
<p>$$f(z) = \sum_{n=0}^\infty (-1)^n\frac{(n+1)(n+2)}2z^n$$</p>
<p>How do we find this? I thought the binomial expansion was</p>
<p>$$(1+z)^n = \sum_{r=0}^n \binom{n}{r}z^r$$</p>
<p>but here $n=-3$ so how do we progress?</p>
| David Quinn | 187,299 | <p>Using the Binomial Theorem, for $|x|<1$, the coefficient of $x^r$ is $$\frac{(-3)(-4)\cdots(-(r+2))}{r!}=(-1)^r\frac{1\cdot 2\cdot 3\cdots(r+2)}{1\cdot 2\cdot r!}$$
$$=(-1)^r\frac{(r+1)(r+2)}{2!}$$</p>
<p>Hence the result given.</p>
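<p>The closed form matches a direct computation of the generalized binomial coefficient $\binom{-3}{r}$ (a Python sketch):</p>

```python
from math import comb

def binom_minus3(r):
    """(-3)(-4)...(-(r+2)) / r!  — the generalized binomial coefficient C(-3, r)."""
    num = 1
    for j in range(r):
        num *= -(3 + j)
    fact = 1
    for j in range(1, r + 1):
        fact *= j
    return num // fact          # always an exact integer division

for r in range(20):
    # Claimed closed form: (-1)^r (r+1)(r+2)/2, i.e. (-1)^r C(r+2, 2).
    assert binom_minus3(r) == (-1)**r * (r + 1) * (r + 2) // 2
    assert abs(binom_minus3(r)) == comb(r + 2, 2)
```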
|
918,693 | <p>I've been at this for a few hours now, and it's frighteningly similar to the problem stated here: </p>
<p><a href="https://math.stackexchange.com/questions/388356/how-to-prove-int-0-infty-e-x2cos2bx-dx-frac-sqrt-pi2-e-b2">How to prove $\int_0^\infty e^{-x^2}cos(2bx) dx = \frac{\sqrt{\pi}}{2} e^{-b^2}$</a></p>
<p>but with enough change that it's still proving problematic. I also had some parts of the solution fly right over my head! I've been trying to differentiate w.r.t. b and find a clever u-sub to no avail. I'm also curious of using Euler's identity to exchange the cos for some exponentials, but I keep hitting a wall there as well. Help on either front would be great.</p>
<p>Edit: a>0 and b>0 are arbitrary constants just to be complete about this.</p>
<p>Update: Solved thanks to voldemort's solution and the Differential Equation solution from the linked problem, thanks all!</p>
| André Nicolas | 6,312 | <p>In your integral, $a$ is presumably positive, else we would not have convergence.</p>
<p>Make the change of variable $t=x\sqrt{a}$ and you will be at an integral very close to the one linked to. </p>
|
138,173 | <p>$f(x, y) = 0$ and $g(x, y) = 0$,
where both $f$ and $g$ are cubic polynomials (at most 10 coefficients each).</p>
<p>Is there a systematic method to solve this system of equations?
Thanks.</p>
| Peter Mueller | 18,739 | <p>Resultants are good, but for practical computations the Bezout Lemma (it's the same as the extended Euclidean algorithm) can be less expensive: Assume that $f(x,y)$ and $g(x,y)$ are relatively prime. Consider $f$ and $g$ as polynomials in $y$ over the field $k(x)$, where $k$ is your (unspecified) base field. Then there are $r,s\in k(x)[y]$ with $r(y)f(x,y)+s(y)g(x,y)=1$. Multiplying with the least common multiple $u(x)$ of the denominators of the coefficients of $r(y)$ and $s(y)$ yields $R(x,y)f(x,y)+S(x,y)g(x,y)=u(x)$, where $R,S\in k[x,y]$. So if $f=g=0$, then $u=0$.</p>
<p>Of course $u(x)$ is essentially the resultant of $f$ and $g$ with respect to $y$.</p>
|
2,581,361 | <p>For even $n \in \mathbb{N}$, prove $\binom{n}{i}< \binom{n}{j} $ if $0\leq i<j\leq \frac{n}{2}$.</p>
<p>So far all I have been able to come up with are a bunch of seemingly useless inequalities. </p>
<p>Any hints would be greatly appreciated.</p>
| P. Koymans | 385,053 | <p>Try proving
$$
\binom{n}{i} < \binom{n}{i + 1}
$$
by using the formula
$$
\binom{n}{i} = \frac{n!}{i! \cdot (n - i)!}.
$$</p>
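<p>The hinted inequality (and the ratio that the factorial formula produces) can be confirmed with exact integer binomials (a Python sketch):</p>

```python
from math import comb

# For even n:  C(n, i) < C(n, i+1) whenever 0 <= i < i+1 <= n/2; chaining
# these gives C(n, i) < C(n, j) for all 0 <= i < j <= n/2.
for n in range(2, 41, 2):
    for i in range(n // 2):
        assert comb(n, i) < comb(n, i + 1)
        # ratio form coming out of n!/(i!(n-i)!):
        assert comb(n, i + 1) * (i + 1) == comb(n, i) * (n - i)
```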
|
962,301 | <p>$$(\sqrt 2-\sqrt 1)+(\sqrt 3-\sqrt 2)+(\sqrt 4-\sqrt 3)+(\sqrt 5-\sqrt 4)…$$</p>
<p>I have found partial sums equal to natural numbers. The first 3 addends sum to 1. The first 8 sum to 2. The first 15 sum to 3. When the minuend in an addend is the square root of a perfect square, the partial sum is a natural number. So I believe this series to be divergent.</p>
<p>Am I right? Have I used correct terminology? How would this be expressed using sigma notation? Is there a proof that this series diverges?</p>
| Ron Gordon | 53,268 | <p>The $n$th term may be rewritten as </p>
<p>$$\frac1{\sqrt{n}+\sqrt{n+1}}$$</p>
<p>which behaves as $1/(2 \sqrt{n})$ as $n \to\infty$, so by comparison with the harmonic series, this series diverges.</p>
|
13,478 | <p>This may be a silly question - but are there interesting results about the invariant: the minimal size of an open affine cover? For example, can it be expressed in a nice way? Maybe under some additional hypotheses?</p>
| Alicia Garcia-Raboso | 1,797 | <p>This is not a complete answer by any means, but here are the two most basic arguments. First of all, you have that every projective scheme that can be embedded in $\mathbb{P}^n$ can be covered by $n+1$ open affines, namely the closed subschemes of the affines $U_i = \lbrace z_i \neq 0 \rbrace \cong \mathbb{A}^n$.</p>
<p>For a lower bound, think Cech-cohomologically: if $X$ can be covered by $k$ affine opens, then $\check{H}^l(X) = 0$ for every $l > k$. If $X$ is Noetherian separated, then Cech cohomology coincides with sheaf cohomology, which indicates that you need at least $\max\lbrace l \;|\; H^l(X) \neq 0 \rbrace$ open affines to cover it.</p>
|
2,839,823 | <p>I don't know where to ask, but I'm trying. I just think we cannot use hocus-pocus methods like this.</p>
<hr>
<p>Solve $3^x+4^x=5^x$</p>
<p>Okay, so my friend gave me this equation, and his solution. But I don't believe it holds. Here it is:</p>
<p>"Solution: </p>
<p>$3^x+4^x=5^x\Leftrightarrow \frac{3^x}{3^x}+\frac{4^x}{3^x}=\frac{5^x}{3^x}\Leftrightarrow 1+\left(\frac{4}{3}\right )^x=\left(\frac{5}{3}\right )^x\Leftrightarrow 1+\left(\frac{4}{3}\right )^{\frac{x}{2}\cdot 2}=\left(\frac{5}{3}\right )^{\frac{x}{2}\cdot 2}\Leftrightarrow \left(\frac{4}{3}\right )^{\frac{x}{2}\cdot 2}-\left(\frac{5}{3}\right )^{\frac{x}{2}\cdot 2}=-1\Leftrightarrow \left(\left(\frac{4}{3}\right)^{\frac{x}{2}}\right )^2-\left(\left(\frac{5}{3}\right )^{\frac{x}{2}}\right)^2=-1$</p>
<p>Let $a=\left(\frac{4}{3}\right )^{\frac{x}{2}}$ and let $b=\left(\frac{5}{3}\right )^{\frac{x}{2}}$ then</p>
<p>$a^2-b^2=\frac{-1}{3}\cdot 3 \Leftrightarrow (a-b)(a+b)=\frac{-1}{3}\cdot 3$</p>
<p>Now
$a-b=\frac{-1}{3}$ and $a+b=3$ solving the system of equations, we get $a=\frac{4}{3}$ and $b=\frac{5}{3}$ hence we put back.</p>
<p>$a=\left(\frac{4}{3}\right )^{\frac{x}{2}} \Rightarrow \frac{4}{3}=\left(\frac{4}{3}\right )^{\frac{x}{2}} $ and $b=\left(\frac{5}{3}\right )^{\frac{x}{2}} \Rightarrow \frac{5}{3}=\left(\frac{5}{3}\right )^{\frac{x}{2}} $ we see that $x=2$ in both cases, which satisfies the equation."</p>
| Phil H | 554,494 | <p>Another method..........</p>
<p>$f(x) = 3^x + 4^x - 5^x$</p>
<p>$f'(x) = \ln(3)3^x + \ln(4)4^x - \ln(5)5^x$</p>
<p>$x_0 = 3$</p>
<p>$x_1 = 3 - \frac{27 + 64 -125}{-82.79} = 2.59$</p>
<p>$x_2 = 2.59 - \frac{-11.15}{-40.73} = 2.32$</p>
<p>$x_3 = 2.32 - \frac{-4.12}{-18.72} = 2.10$</p>
<p>$x_4 = 2.10 - \frac{-.94}{-10.75} = 2.01$</p>
<p>$x_5 = 2.01 - \frac{-.08}{-8.40} = 2.00$</p>
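<p>The same Newton iteration is easy to reproduce (a Python sketch); from $x_0 = 3$ it settles on the Pythagorean solution $x = 2$:</p>

```python
from math import log

def f(x):
    return 3**x + 4**x - 5**x

def fprime(x):
    return log(3) * 3**x + log(4) * 4**x - log(5) * 5**x

x = 3.0
for _ in range(30):          # Newton update: x <- x - f(x)/f'(x)
    x -= f(x) / fprime(x)

assert abs(x - 2.0) < 1e-9
assert f(2) == 0             # 9 + 16 = 25 exactly
```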
|
1,028,301 | <p>What I've first done is show that $Aut(\mathbb Z / 24\mathbb Z)$ is isomorphic to $\mathbb Z_2\oplus\mathbb Z_2\oplus\mathbb Z_2$, since it has 8 elements and the largest order of any element is 2.</p>
<p>Now $M_2(\mathbb Z/3\mathbb Z)$ is isomorphic to $\mathbb Z_3\oplus\mathbb Z_3\oplus\mathbb Z_3\oplus\mathbb Z_3$ (right?) so I need to find a homomorphism between those two groups.</p>
<p>I'm stuck here.
Any help is appreciated.</p>
| yatima2975 | 360 | <p>You're looking for a non-trivial homomorphism $\phi$ of a group (let's call it $G$) of size 81 to one of size 8 (your isomorphisms are indeed correct!). </p>
<p>What can you say about $|Ker(\phi)|$, $|G/Ker(\phi)|$ and $|Im(\phi)|$ in terms of their factorisations? </p>
|
142,007 | <p>Assume: $p$ is a prime that satisfies $p \equiv 3 \pmod 4$</p>
<p>Show: $x^{2} \equiv -1 \pmod p$ has no solutions $\forall x \in \mathbb{Z}$.</p>
<p>I know this problem has something to do with Fermat's Little Theorem, that $a^{p-1} \equiv 1\pmod p$. I tried to do a proof by contradiction, assuming a solution exists and trying to derive a contradiction, but I just ran into a wall. Any help would be greatly appreciated.</p>
| Arturo Magidin | 742 | <p>Suppose <span class="math-container">$x^2\equiv -1\pmod{p}$</span>. Then <span class="math-container">$x^4\equiv 1\pmod{p}$</span>. Since <span class="math-container">$p = 4k+3$</span>, we have
<span class="math-container">$$x^{p-1} = x^{4k+2} = x^2x^{4k} \equiv -1(x^4)^k\equiv -1\pmod{p},$$</span>
which contradicts Fermat's Little Theorem (which applies here because <span class="math-container">$x^2\equiv -1\pmod{p}$</span> forces <span class="math-container">$p\nmid x$</span>).</p>
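<p>A quick brute-force check of the statement (not part of the original answer, just a computational sanity check; it also confirms the converse pattern for primes congruent to $1$ modulo $4$):</p>

```python
def is_prime(n):
    # trial division; fine for the small range below
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def has_sqrt_of_minus_one(p):
    # does x^2 ≡ -1 (mod p) have a solution?
    return any((x * x) % p == p - 1 for x in range(1, p))

# primes p ≡ 3 (mod 4) never admit a square root of -1,
# while odd primes p ≡ 1 (mod 4) always do
for p in filter(is_prime, range(3, 200)):
    if p % 4 == 3:
        assert not has_sqrt_of_minus_one(p)
    elif p % 4 == 1:
        assert has_sqrt_of_minus_one(p)
print("verified for all odd primes below 200")
```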
|
185,900 | <p>What is the <strong>fastest way</strong> to find the smallest positive root of the following transcendental equation:</p>
<p><span class="math-container">$$a + b\cdot e^{-0.045 t} = n \sin(t) - m \cos(t)$$</span></p>
<pre><code>eq = a + b E^(-0.045 t) == n Sin[t] - m Cos[t];
</code></pre>
<p>where
<span class="math-container">$a,b,n,m$</span> are some real constants.</p>
<p>for instance I tried :</p>
<pre><code>eq = 5 E^(-0.045 t) + 0.1 == -0.3 Cos[t] + 0.009 Sin[t];
sol = FindRoot[eq, {t, 1}]
{t -> 117.349}
</code></pre>
<p>There is an answer, but that doesn't mean it is the smallest positive root.</p>
<p>I don't like <code>FindRoot[]</code> because it needs a starting point, which changes for different parameters <span class="math-container">$(a,b,n,m)$</span>.</p>
<p>Is there a way to find the <strong>smallest positive root</strong> of equation for any <span class="math-container">$(a,b,n,m)$</span> (if there exist the solution), without <em>starting points</em>?</p>
<p>If No. how to determine automatically starting point for a given parameters?</p>
<p>there is a numerical and graphical answers in <em>Wolfram Alpha</em></p>
<p><a href="https://i.stack.imgur.com/Y90wg.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y90wg.jpg" alt="enter image description here"></a></p>
| Bob Hanlon | 9,362 | <p>Amplifying on comment by @bills; for your specific example there are no roots</p>
<pre><code>eq = 11 + 5 E^(-0.045 t) == 0.03 Sin[t] - 1.2 Cos[t] // Rationalize;
</code></pre>
<p>The <a href="https://reference.wolfram.com/language/ref/FunctionRange.html" rel="noreferrer"><code>FunctionRange</code></a> of the LHS and RHS of the equation do not intersect</p>
<pre><code>FunctionRange[#, t, y] & /@ List @@ eq // N
(* {y > 11., -1.20037 <= y <= 1.20037} *)
</code></pre>
<p>Or look at the <a href="https://reference.wolfram.com/language/ref/Plot.html" rel="noreferrer"><code>Plot</code></a></p>
<pre><code>Plot[Evaluate[List @@ eq], {t, 0, 100},
Frame -> True,
PlotLegends -> Placed["Expressions", {0.5, 0.45}]]
</code></pre>
<p><a href="https://i.stack.imgur.com/rvfE0.png" rel="noreferrer"><img src="https://i.stack.imgur.com/rvfE0.png" alt="enter image description here"></a></p>
<p>For values of the parameters for which the plots intersect, the <code>Plot</code> will provide an initial value for <a href="https://reference.wolfram.com/language/ref/FindRoot.html" rel="noreferrer"><code>FindRoot</code></a> or constraint for use with <a href="https://reference.wolfram.com/language/ref/NSolve.html" rel="noreferrer"><code>NSolve</code></a>.</p>
<pre><code>eq = 1 + 5 E^(-0.045 t) == 0.03 Sin[t] - 1.2 Cos[t] // Rationalize;
Plot[Evaluate[List @@ eq], {t, 60, 100},
Frame -> True,
PlotLegends -> "Expressions"]
</code></pre>
<p><a href="https://i.stack.imgur.com/cc7Qg.png" rel="noreferrer"><img src="https://i.stack.imgur.com/cc7Qg.png" alt="enter image description here"></a></p>
<pre><code>FindRoot[eq, {t, 72}]
(* {t -> 72.1339} *)
NSolve[{eq, 60 < t < 80}, t][[1]]
(* {t -> 72.1339} *)
</code></pre>
<p>Note that for <code>NSolve</code> the constraint can be loose; whereas, for <code>FindRoot</code> the initial estimate must be closer to the actual value.</p>
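<p>Outside Mathematica, the same idea (bracket first, then refine) can be written as a language-agnostic sketch. The following Python version is my own illustration, not from the thread; it assumes the smallest root lies below <code>t_max</code> and that adjacent roots are separated by more than the scan step.</p>

```python
import math

def g(t):
    # difference of the two sides of  1 + 5 e^(-0.045 t) == 0.03 sin t - 1.2 cos t
    return 0.03 * math.sin(t) - 1.2 * math.cos(t) - 1.0 - 5.0 * math.exp(-0.045 * t)

def smallest_positive_root(fun, t_max=1000.0, step=0.01, tol=1e-10):
    # scan for the first sign change, then bisect; returns None when no
    # sign change is found (roots closer together than `step` may be missed)
    a, fa = step, fun(step)
    t = step + step
    while t <= t_max:
        ft = fun(t)
        if fa == 0.0:
            return a
        if fa * ft < 0.0:
            lo, hi = a, t
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if fun(lo) * fun(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)
        a, fa = t, ft
        t += step
    return None

root = smallest_positive_root(g)
print(root)  # about 72.1339, agreeing with the FindRoot/NSolve values above
```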
|
618,166 | <p>Does there exists a real valued function which is everywhere continuous and differentiable at exactly one point on the real line?</p>
| Martin Argerami | 22,857 | <p>Let <span class="math-container">$f(t) $</span> be the <a href="https://en.wikipedia.org/wiki/Weierstrass_function" rel="nofollow noreferrer">Weierstrass function</a>, continuous everywhere and nowhere differentiable. Define
<span class="math-container">$$
g(t)=t^2f(t).
$$</span>
Then <span class="math-container">$g$</span> is continuous everywhere and differentiable only at <span class="math-container">$t=0$</span>.</p>
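<p>One can see the differentiability at $t=0$ numerically, too. The sketch below (my own, not part of the answer) uses a truncated partial sum of the Weierstrass series with $a=1/2$, $b=13$; the key bound is $|g(h)/h| = |h|\,|f(h)| \le |h|/(1-a)$, which forces $g'(0)=0$.</p>

```python
import math

def weierstrass(t, a=0.5, b=13, terms=60):
    # partial sum of the Weierstrass function sum_n a^n cos(b^n * pi * t);
    # with 0 < a < 1, b odd, ab > 1 + 3*pi/2 the full series is nowhere differentiable
    return sum(a ** n * math.cos(b ** n * math.pi * t) for n in range(terms))

def g(t):
    return t * t * weierstrass(t)

# difference quotients of g at 0 shrink like |h|, so g'(0) = 0:
# |g(h)/h| = |h| * |f(h)| <= |h| * sum_n a^n = |h| / (1 - a)
for h in (1e-2, 1e-4, 1e-6):
    q = g(h) / h
    print(h, q)
    assert abs(q) <= abs(h) / (1 - 0.5)
```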
|
<p>For $a>0$, can someone give me some clues to prove this inequality? </p>
<p><img src="https://i.stack.imgur.com/WFQA2.png" alt="enter image description here"></p>
<p>I found its derivative, but I can't express $f(0)$ in a way that lets me use the Mean Value Theorem. Can you at least give me a point to start from?</p>
| ASB | 111,607 | <p>$ \dfrac{f(x+\delta x)-f(x)}{(x+\delta x)-x}=\dfrac{\sqrt{x+\delta x}-\sqrt{x}}{\delta x}=\dfrac{(\sqrt{x+\delta x}-\sqrt{x})(\sqrt{x+\delta x}+\sqrt{x})}{\delta x (\sqrt{x+\delta x}+\sqrt{x})}=\dfrac{1}{\sqrt{x+\delta x}+\sqrt{x}} $</p>
|
1,463,887 | <p>$f(x,y)=x^2-y^2$ constraint to $x^2+y^2=1$</p>
<p>$f_x=2x$ and $f_y=-2y$ $\implies$ the critical point is at $(0,0)$</p>
<p>However, $(0,0)$ does not occur in the constraint. does that mean i don't have to consider it?</p>
<p>So if we work of the boundary curves $y=\pm\sqrt{1-x^2}$</p>
<p>we get a min at $(0,1)$ with the value $-1$ and a max at either $(1,0)$ or $(-1,0)$ with the value $1$</p>
<p>did i do it correctly?</p>
| Jack D'Aurizio | 44,121 | <p>A bit of an overkill, don't you think? The problem is equivalent to finding the extreme values of $z-(1-z)$, i.e. of a line, subject to the constraint $z\in[0,1]$.</p>
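<p>Not part of the original answer: a quick numerical check of the claimed extremes, using the trigonometric parametrization of the constraint circle instead of the substitution $z=x^2$.</p>

```python
import math

# parametrize the constraint x^2 + y^2 = 1 as (cos t, sin t);
# then f = cos^2 t - sin^2 t = cos(2t), whose extremes are clearly +/- 1
N = 100_000
vals = [math.cos(2 * math.pi * k / N) ** 2 - math.sin(2 * math.pi * k / N) ** 2
        for k in range(N)]
print(max(vals), min(vals))  # 1.0 (attained at (1, 0)) and about -1.0 (at (0, 1))
```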
|
2,901,914 | <blockquote>
<p>Given the function $f:[0,2\pi)\to S^1$, $\varphi\mapsto (\cos(\varphi), \sin(\varphi))^t$. Show that $f$ is continuous and a bijection, but not a homeomorphism.</p>
</blockquote>
<p>That $f$ is continuous is clear, since every component is continuous.
Furthermore, it is continuously differentiable.</p>
<p>When I want to show that $f$ is a bijection, it is easy to see that $f$ is injective, since</p>
<p>$f(x)=f(y)\Leftrightarrow (\cos(x), \sin(x))=(\cos(y),\sin(y))\Leftrightarrow \cos(x)=\cos(y)\wedge\sin(x)=\sin(y)\stackrel{x,y\in [0,2\pi)}{\Leftrightarrow} x=y$</p>
<p>But how can I show, that $f$ is a surjection?</p>
<p>To show that $f$ is not a homeomorphism, I have to verify that $f^{-1}$ is not continuous.
Can I use the inverse function theorem?</p>
<p>I get:</p>
<p>$Df(\varphi)=\begin{pmatrix}-\sin(\varphi)&0\\0&\cos(\varphi)\end{pmatrix}$</p>
<p>With determinant $\operatorname{det}Df(\varphi)=-\sin(\varphi)\cos(\varphi)$</p>
<p>Where $Df(\varphi)$ is not invertible for $\varphi=0$.</p>
<p>Thanks in advance for hints and comments.</p>
| Sri-Amirthan Theivendran | 302,692 | <p>To see that the inverse is not continuous note that there exists a sequence of points $(y_n)$ s.t $y_n\to (1,0)$ but $f^{-1}(y_n)\to 2\pi\ne f^{-1}(1,0)=0.$ Consider the sequence of points given by
$$
(\cos (2\pi-n^{-1}), \sin (2\pi-n^{-1}) )
$$
for example.</p>
|
2,901,914 | <blockquote>
<p>Given the function $f:[0,2\pi)\to S^1$, $\varphi\mapsto (\cos(\varphi), \sin(\varphi))^t$. Show that $f$ is continuous and a bijection, but not a homeomorphism.</p>
</blockquote>
<p>That $f$ is continuous is clear, since every component is continuous.
Furthermore, it is continuously differentiable.</p>
<p>When I want to show that $f$ is a bijection, it is easy to see that $f$ is injective, since</p>
<p>$f(x)=f(y)\Leftrightarrow (\cos(x), \sin(x))=(\cos(y),\sin(y))\Leftrightarrow \cos(x)=\cos(y)\wedge\sin(x)=\sin(y)\stackrel{x,y\in [0,2\pi)}{\Leftrightarrow} x=y$</p>
<p>But how can I show, that $f$ is a surjection?</p>
<p>To show that $f$ is not a homeomorphism, I have to verify that $f^{-1}$ is not continuous.
Can I use the inverse function theorem?</p>
<p>I get:</p>
<p>$Df(\varphi)=\begin{pmatrix}-\sin(\varphi)&0\\0&\cos(\varphi)\end{pmatrix}$</p>
<p>With determinant $\operatorname{det}Df(\varphi)=-\sin(\varphi)\cos(\varphi)$</p>
<p>Where $Df(\varphi)$ is not invertible for $\varphi=0$.</p>
<p>Thanks in advance for hints and comments.</p>
| José Carlos Santos | 446,262 | <p>The function $f$ is surjective becasue if $(x,y)\in S^1$, there is a $\theta\in[0,2\pi)$ such that $(x,y)=(\cos\theta,\sin\theta)$; just take $\theta=\arccos x$ if $y\geqslant0$ and $\theta=2\pi-\arccos x$ otherwise.</p>
<p>And $f^{-1}$ is discontinuous because $\lim_{n\to\infty}\left(\cos\left(2\pi-\frac1n\right),\sin\left(2\pi-\frac1n\right)\right)=(1,0)=f(0)$, but $\lim_{n\to\infty}f^{-1}\left(\cos\left(2\pi-\frac1n\right),\sin\left(2\pi-\frac1n\right)\right)$ doesn't exist (in $[0,2\pi)$).</p>
|
<p>Compute the flux integral of the vector field
$\vec F(x,y,z)= \langle 2x,\, y,\, 3z\rangle$
over a sphere of radius 36 centered at the point $(1,2,-1)$.</p>
<p>So what I did was use the divergence theorem, which says</p>
<p>$\iint_S \vec F \cdot \vec n \, dS = \iiint_V (\operatorname{div} \vec F)\, dV$</p>
<p>I used the right-hand side, and since $\operatorname{div}\vec F = 2+1+3 = 6$, this becomes</p>
<p>$\iiint_V 6\, dV$</p>
<p>From here I can see the answer is $6 \cdot (\text{volume of the sphere})$, and hence my answer doesn't change even if the center of the sphere is different. So my question is: does the flux integral change when the center of the sphere changes (assuming the volume does not change)?</p>
| user1337 | 62,839 | <p>Since the divergence of the field $\vec{F}$ is constant, the given integral doesn't depend on the center of the sphere. Moreover, the surface needn't even be sphere-shaped, as long as it encloses the same volume.</p>
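<p>A Monte Carlo spot check of this center-independence (my own illustration, not from the answer; it uses radius $2$ rather than $36$ to keep the numbers readable, which doesn't affect the point):</p>

```python
import math
import random

def mc_flux(center, radius, n=100_000, seed=0):
    # Monte Carlo estimate of the outward flux of F = (2x, y, 3z) through a
    # sphere: average F·n over uniformly sampled surface points, times 4*pi*r^2
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = [rng.gauss(0.0, 1.0) for _ in range(3)]        # random direction
        norm = math.sqrt(sum(c * c for c in u))
        u = [c / norm for c in u]
        p = [center[i] + radius * u[i] for i in range(3)]  # point on the sphere
        F = (2.0 * p[0], p[1], 3.0 * p[2])
        total += sum(F[i] * u[i] for i in range(3))        # F · n
    return (total / n) * 4.0 * math.pi * radius ** 2

r = 2.0
expected = 6.0 * (4.0 / 3.0) * math.pi * r ** 3  # div F = 6, times the ball's volume
est_centered = mc_flux((0.0, 0.0, 0.0), r)
est_shifted = mc_flux((1.0, 2.0, -1.0), r)
print(est_centered, est_shifted, expected)  # both estimates approximate 64*pi
```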
|
1,836,998 | <p>$$
\lim_{x\to 0^{+}}\left[\left(1+\frac{1}{x}\right)^x+\left(\frac{1}{x}\right)^x+\left(\tan(x)\right)^{\frac{1}{x}}\right]$$</p>
<p>Attempt: I used $\tan(x)\approx x$ and $\lim_{n\to 0}(1+n)^{1/n}=e$, so I let $x=0+h$ and the limit becomes a limit as $h$ tends to $0$. But that gives me $e+\infty+h^{1/h}$, while the answer is an integer between $0$ and $9$. Where is my mistake? Thanks.</p>
| Mark Viola | 218,419 | <p>Note that we can write </p>
<p>$$\begin{align}
\left(1+\frac1x\right)^x&=e^{x\log\left(1+\frac1x\right)}\\\\
&=e^{x\left(\log(1+x)-\log(x)\right)}\\\\
&=e^{-x\log(x)}e^{x\log(1+x)}
\end{align}$$</p>
<p>Since $\lim_{x\to 0^+}x\log(x)=0$, we find using the continuity of the exponential function that</p>
<p>$$\lim_{x\to 0^+}\left(1+\frac1x\right)^x=1$$</p>
<hr>
<p>To evaluate the limit $\lim_{x\to 0^+}\left(\frac1x\right)^x$, we proceed as before. We write simply</p>
<p>$$\begin{align}
\lim_{x\to 0^+}\left(\frac1x\right)^x&=\lim_{x\to 0^+}e^{-x\log(x)}\\\\&=1
\end{align}$$ </p>
<hr>
<p>To evaluate the limit $\lim_{x\to 0^+}\left(\tan (x)\right)^{1/x}$, we use the analogous approach and write</p>
<p>$$\lim_{x\to 0^+}\left(\tan (x)\right)^{1/x}=\lim_{x\to 0^+}e^{\frac1x \log(\tan(x))}$$</p>
<p>Note that $1/x\to \infty$ while $\log(\tan(x))\to -\infty$. Thus, the product $\frac1x \log(\tan(x)) \to -\infty$ and we find that</p>
<p>$$\lim_{x\to 0^+}\left(\tan (x)\right)^{1/x}=0$$ </p>
<p>Putting the three pieces together, the original limit is $1+1+0=2$.</p>
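<p>As a numerical sanity check (my own addition, not part of the answer), one can evaluate the three terms at small $x>0$; the tangent term underflows to $0$ very quickly in floating point:</p>

```python
import math

def h(x):
    # the three terms of the limit, evaluated at small x > 0
    return (1 + 1 / x) ** x + (1 / x) ** x + math.tan(x) ** (1 / x)

for x in (1e-2, 1e-4, 1e-6):
    print(x, h(x))  # approaches 1 + 1 + 0 = 2
```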
|
204,780 | <p>Let $X$ and $Y$ be independent discrete random variables, each taking values $1$ and $2$ with probability $1/2$ each. How do I calculate the covariance between $max(X,Y)$ and $min(X,Y)$?</p>
| Brian M. Scott | 12,042 | <p>Let $W=\min\{X,Y\}$ and $Z=\max\{X,Y\}$; then the desired covariance is $\mathrm{E}[WZ]-\mathrm{E}[W]\mathrm{E}[Z]$. $W=1$ unless $X=Y=2$, so $\mathrm{Pr}(W=1)=\frac34$ and $\mathrm{Pr}(W=2)=\frac14$, and therefore</p>
<p>$$\mathrm{E}[W]=\frac34\cdot1+\frac14\cdot2=\frac54\;.$$ </p>
<p>The calculation of $\mathrm{E}[Z]$ is similar. So is that of $\mathrm{E}[WZ]$: just average the four possibilities according to their respective probabilities.</p>
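<p>The "average the four possibilities" step can be carried out mechanically. This short enumeration (an illustration, not part of the answer) gives the exact covariance:</p>

```python
from itertools import product

outcomes = list(product([1, 2], repeat=2))   # the four equally likely (X, Y) pairs

def E(h):
    # expectation over the uniform distribution on the four outcomes
    return sum(h(x, y) for x, y in outcomes) / len(outcomes)

EW = E(lambda x, y: min(x, y))               # 5/4
EZ = E(lambda x, y: max(x, y))               # 7/4
EWZ = E(lambda x, y: min(x, y) * max(x, y))  # 9/4, since min * max = X * Y
cov = EWZ - EW * EZ
print(cov)  # 1/16
```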
|
459,124 | <p>Title says it all, I have an itch about series like this that seem to fall in the gray area where classical proofs that rational partial sums that converge too quickly must converge to transcendental don't seem to apply. This is such a special case, does any one know if the series</p>
<p>$$\sum_{i=0}^{\infty} \frac1{k^{2^i}}$$</p>
<p>converges to algebraic or transcendental, perhaps the answer/lack of an answer depending on the choice of integer $k > 1$?</p>
| Jared | 65,034 | <p>(What I learned reading the Wikipedia article on <a href="http://en.wikipedia.org/wiki/Transcendental_number" rel="nofollow">transcendental numbers</a>)</p>
<p>If $a$ is an algebraic number in $(-1,1)$, it has been shown that any number of the form:</p>
<p>$$\sum_{i=0}^{\infty}a^{2^i}$$</p>
<p>is transcendental. Let $a=\frac{1}{k}$ to answer your question. The article attributes this result to Loxton and references the $13$th chapter of the book <em>New Advances in Transcendence Theory</em> by Alan Baker.</p>
|
3,820,947 | <p>Now I have always been rather intrigued with factorial, at first, in high school, teachers told me that factorials are only defined for whole numbers. As I studied, I found factorials for positive reals and negative fractions. But the integral with which we define factorial falls flat on the negative integers.</p>
<p>why is that we can find the factorial of (-1/2) and root(3) but not for -1 or -2? Does this go against the definition of a factorial? If yes, what IS the definition of a factorial because children are never taught it and it clouds their reasoning and perception of the topic.</p>
| Jordan Mitchell Barrett | 649,843 | <p>The recursive definition of the factorial <span class="math-container">$(n+1)! = n! \cdot (n+1)$</span> can be rearranged to deduce <span class="math-container">$$n! = \dfrac{(n+1)!}{n+1}$$</span></p>
<p>This is one way to work out that <span class="math-container">$0!=1$</span>, taking <span class="math-container">$n=0$</span> in the above. However, if we try and do this for <span class="math-container">$(-1)!$</span>, we get a problem: <span class="math-container">$$(-1)! = \dfrac{(-1+1)!}{-1+1} = \dfrac{0!}{0} = \dfrac{1}{0}$$</span></p>
<p>But we can't divide by <span class="math-container">$0$</span>, so we can't define <span class="math-container">$(-1)!$</span>.</p>
<p>As noted in the comments, the gamma function <span class="math-container">$\Gamma$</span> is the "right" way to extend the factorial to non-integers, via <span class="math-container">$n! = \Gamma(n+1)$</span>. <span class="math-container">$\Gamma$</span> diverges at the non-positive integers, since it is required to satisfy <span class="math-container">$\Gamma(x+1) = x \cdot \Gamma(x)$</span>, i.e. <span class="math-container">$\Gamma(x) = \Gamma(x+1)/x$</span>, so the same division-by-zero argument as above applies.</p>
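<p>Standard math libraries expose exactly this behavior. A small demonstration (my own addition, using Python's built-in <code>math.gamma</code>) shows the extension agreeing with the factorial, being defined at the half-integers the question mentions, and failing at the negative integers:</p>

```python
import math

# gamma extends the factorial: gamma(n + 1) == n! for non-negative integers n
for n in range(6):
    assert abs(math.gamma(n + 1) - math.factorial(n)) < 1e-9

# it is defined at non-integers, e.g. (-1/2)! = gamma(1/2) = sqrt(pi)
print(math.gamma(0.5) ** 2)  # approximately pi

# but at 0, -1, -2, ... gamma diverges; math.gamma signals this with an error
try:
    math.gamma(-1.0)
except ValueError:
    print("gamma(-1) is undefined")
```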
|
84,255 | <p>It is just like the linear algebra over commutative ring (maybe advanced linear algebra), that is a nature extension and can make the structure of Lie algebra more algebraic, but I find little book discussing this topic.</p>
<p>I just know Weibel’s book “Homological Algebra” discuss something, but that is just a form, do not pay more attention to the Lie algebra and show some difference between field coefficient and ring coefficient. </p>
<p>Does anybody know something else?</p>
| Primoz | 6,339 | <p>Lie algebras over rings (Lie rings) are important in group theory. For instance, to every group $G$ one can associate a Lie ring</p>
<p>$$L(G)=\bigoplus _{i=1}^\infty \gamma _i(G)/\gamma _{i+1}(G),$$</p>
<p>where $\gamma _i(G)$ is the $i$-th term of the lower central series of $G$. The addition is defined by the additive structure of $\gamma _i{G}/\gamma _{i+1}(G)$, and the Lie product is defined on homogeneous elements by $[x\gamma _{i+1}(G),y\gamma _{j+1}(G)]=[x,y]\gamma _{i+j+1}(G)$, and then extended to L(G) by linearity.</p>
<p>There are several other ways of constructing Lie rings associated to groups, and there are numerous applications of these. One of the most notable is the solution of the Restricted Burnside Problem by Zelmanov; see the book M. R. Vaughan-Lee, "The Restricted Burnside Problem". There are other books related to these rings, for example,
Kostrikin, "Around Burnside",
Huppert, Blackburn, "Finite groups II",
Dixon, du Sautoy, Mann, Segal, "Analytic pro-$p$ groups".</p>
|
84,255 | <p>It is just like the linear algebra over commutative ring (maybe advanced linear algebra), that is a nature extension and can make the structure of Lie algebra more algebraic, but I find little book discussing this topic.</p>
<p>I just know Weibel’s book “Homological Algebra” discuss something, but that is just a form, do not pay more attention to the Lie algebra and show some difference between field coefficient and ring coefficient. </p>
<p>Does anybody know something else?</p>
| Fernando Muro | 12,166 | <p>Integral Lie algebras are also important in homotopy theory, if $X$ is a simply connected space and $\pi_*(X)=\bigoplus_{n\geq 1}\pi_n(X)$ are its homotopy groups, $\pi_*(X)[-1]$ is a graded Lie algebra. The Lie bracket is the Whitehead product. This Lie algebras satisfy the feature that $[x,x]\neq 0$ in general since over the integers this is not equivalent to antisymmetry.</p>
|
1,898,839 | <p>Is the following inequality true?</p>
<p><span class="math-container">$$\mbox{Tr} \left( \mathrm P \, \mathrm M^T \mathrm M \right) \leq \lambda_{\max}(\mathrm P) \, \|\mathrm M\|_F^2$$</span></p>
<p>where <span class="math-container">$\mathrm P$</span> is a positive definite matrix with appropriate dimension. How about the following?</p>
<p><span class="math-container">$$\mbox{Tr}(\mathrm A \mathrm B)\leq \|\mathrm A\|_F \|\mathrm B\|_F$$</span></p>
| Rodrigo de Azevedo | 339,790 | <p><span class="math-container">$$\mbox{tr}(\mathrm A \mathrm B) = \left\langle \mathrm A^\top, \mathrm B \right\rangle = \langle \mbox{vec} (\mathrm A^\top), \mbox{vec} (\mathrm B) \rangle \leq \|\mbox{vec} (\mathrm A^\top)\|_2 \|\mbox{vec} (\mathrm B)\|_2 = \|\mathrm A\|_F \|\mathrm B\|_F$$</span></p>
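<p>This is just Cauchy-Schwarz on the flattened matrices, so it is easy to spot-check numerically. The sketch below (my own addition) uses plain Python lists for small matrices:</p>

```python
import math
import random

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def fro(A):
    # Frobenius norm: sqrt of the sum of squared entries
    return math.sqrt(sum(x * x for row in A for x in row))

# tr(AB) = <A^T, B> is an inner product of the flattened matrices, so
# Cauchy-Schwarz gives tr(AB) <= ||A||_F ||B||_F; check random samples
rng = random.Random(1)
for _ in range(100):
    A = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
    B = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
    assert trace(matmul(A, B)) <= fro(A) * fro(B) + 1e-12
print("tr(AB) <= ||A||_F ||B||_F held on all samples")
```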
|
4,802 | <p>Let $z^i(X, m)$ be the free abelian group generated by all codimension $i$ subvarieties on $X \times \Delta^m$ which intersect all faces $X \times \Delta^j$ properly for all $j < m$. Then, for each $i$, these groups assemble to give, with the restriction maps to these faces, a simplicial group whose homotopy groups are the higher Chow groups $CH^i(X,m)$ ($m=0$ gives the classical ones).</p>
<p>Does anyone have an intuition to share about these higher Chow groups? What do they measure/mean? If I pass from the simplicial group to a chain complex, what does it mean to be in the kernel/image of the differential?</p>
<p>Could one say that the higher Chow groups keep track of in how many ways two cycles can be rationally equivalent (and which of these different ways are then equivalent etc.)?</p>
<p>Finally: I don't see any reason why the definition shouldn't make sense over the integers or worse base schemes. Is this true? Does it maybe still make sense but lose its intended meaning?</p>
| Benjamin Antieau | 100 | <p>This may not be particularly helpful, but when $X=\mathrm{spec} k$, for a field $k$, then $CH^{2n}(X,n)=K_n^M(k)$, the Milnor $K$-theory of $k$. I do not know if there are any other useful characterizations. Very little is known, except for the formal properties like $A^1$-invariance and such.</p>
<p>I believe that one can certainly define the higher Chow groups over other base schemes. For a reference, I would check Levine's book on motivic cohomology.</p>
|
4,802 | <p>Let $z^i(X, m)$ be the free abelian group generated by all codimension $i$ subvarieties on $X \times \Delta^m$ which intersect all faces $X \times \Delta^j$ properly for all $j < m$. Then, for each $i$, these groups assemble to give, with the restriction maps to these faces, a simplicial group whose homotopy groups are the higher Chow groups $CH^i(X,m)$ ($m=0$ gives the classical ones).</p>
<p>Does anyone have an intuition to share about these higher Chow groups? What do they measure/mean? If I pass from the simplicial group to a chain complex, what does it mean to be in the kernel/image of the differential?</p>
<p>Could one say that the higher Chow groups keep track of in how many ways two cycles can be rationally equivalent (and which of these different ways are then equivalent etc.)?</p>
<p>Finally: I don't see any reason why the definition shouldn't make sense over the integers or worse base schemes. Is this true? Does it maybe still make sense but lose its intended meaning?</p>
| Jinhyun Park | 3,168 | <p>Benjamin Antieau's answer needs a minor correction. For $X= \operatorname{spec} (k)$, we have $CH^n (X, n) = K^M _n (k)$, not $CH^{2n} (X, n)$. Indeed not too much is known, but it is a very interesting subject (at least for me) to pursue. A good start would be Burt Totaro's paper 'Milnor K-theory is the simplest part of algebraic K-theory', where you can find the cubical version of it. Using $\mathbb{A}^1$-invariance with some spectral sequence arguments, one can prove that the above "simplicial version" and "cubical version" are isomorphic, thus equivalent.</p>
<p>Going back to Peter Arndt's question about 'intuition', the easiest one would be to look at it as an algebro-geometric version of singular homology theory.</p>
<p>For instance, when $X$ is a topological space, a singular $n$-simplex is given by a continuous map $s: \Delta ^n \to X$. We collect their formal finite sums over the integers, and apply some simplicial formalisms. That's how we get the singular complex.</p>
<p>When $X$ is a variety, this naive approach runs into problems, even if we take $\Delta^n$ to be the algebraic $n$-simplex. One problem would be that there aren't enough morphisms of varieties $s: \Delta^n \to X$ to begin with. So, a way out is to take all "correspondences", i.e. closed subvarieties in the product space $\Delta^n \times X$. One problem that still persists here is that, to be able to apply the simplicial formalism, one has to have a good intersection property of correspondences with the faces of $\Delta^n$, but by taking all algebraic cycles, one may not get it. Consequently, we put conditions such as proper intersection with all faces. </p>
<p>That's why we define things in this way.</p>
<p>Regarding the question of what kernel/image does: it is difficult to explain everything, but the easiest case might worth paying attention: for instance, $z^i (X, 0)$ is the codimension i algebraic cycles on $X$, and the boundary map $z^i (X, 1) \to z^i (X, 0)$ by definition gives the rational equivalence of cycles on $X$. In this way, from the cokernel for instance, we recover the Chow group. </p>
|
352,501 | <p>There are several things that confuse me about this proof, so I was wondering if anybody could clarify them for me.</p>
<blockquote>
<p><strong>Lemma</strong> Let <span class="math-container">$G$</span> be a group of order <span class="math-container">$30$</span>. Then the <span class="math-container">$5$</span>-Sylow subgroup of <span class="math-container">$G$</span> is
normal.</p>
</blockquote>
<blockquote>
<p><strong>Proof</strong> We argue by contradiction. Let <span class="math-container">$P_5$</span> be a <span class="math-container">$5$</span>-Sylow subgroup of <span class="math-container">$G$</span>. Then the
number of conjugates of <span class="math-container">$P_5$</span> is congruent to <span class="math-container">$1 \bmod 5$</span> and divides <span class="math-container">$6$</span>. Thus, there must be six conjugates of <span class="math-container">$P_5$</span>. Since the number of conjugates is the index of the normalizer, we see that <span class="math-container">$N_G(P_5) = P_5$</span>.</p>
</blockquote>
<p>Why does the fact that the order of <span class="math-container">$N_G(P_5)$</span> is 5 mean that it is equal to <span class="math-container">$P_5$</span>?</p>
<blockquote>
<p>Since the <span class="math-container">$5$</span>-Sylow subgroups of <span class="math-container">$G$</span> have order <span class="math-container">$5$</span>, any two of them intersect in the
identity element only. Thus, there are <span class="math-container">$6\cdot4 = 24$</span> elements in <span class="math-container">$G$</span> of order <span class="math-container">$5$</span>. This leaves <span class="math-container">$6$</span> elements whose order does not equal <span class="math-container">$5$</span>. We claim now that the <span class="math-container">$3$</span>-Sylow subgroup, <span class="math-container">$P_3$</span>, must be normal in <span class="math-container">$G$</span>.</p>
</blockquote>
<blockquote>
<p>The number of conjugates of <span class="math-container">$P_3$</span> is congruent to <span class="math-container">$1 \bmod 3$</span> and divides <span class="math-container">$10$</span>. Thus, if
<span class="math-container">$P_3$</span> is not normal, it must have <span class="math-container">$10$</span> conjugates. But this would give <span class="math-container">$20$</span> elements of order <span class="math-container">$3$</span> when there cannot be more than <span class="math-container">$6$</span> elements of order unequal to <span class="math-container">$5$</span> so that <span class="math-container">$P_3$</span> must indeed be normal.</p>
</blockquote>
<blockquote>
<p>But then <span class="math-container">$P_5$</span> normalizes <span class="math-container">$P_3$</span>, and hence <span class="math-container">$P_5P_3$</span> is a subgroup of <span class="math-container">$G$</span>. Moreover, the Second Noether Theorem gives</p>
</blockquote>
<blockquote>
<p><span class="math-container">$(P_5P_3)/P_3 \cong P_5/(P_5 \cap P_3)$</span>.</p>
</blockquote>
<blockquote>
<p>But since <span class="math-container">$|P_5|$</span> and <span class="math-container">$|P_3|$</span> are relatively prime, <span class="math-container">$P_5 \cap P_3 = 1$</span>, and hence <span class="math-container">$P_5P_3$</span> must have order <span class="math-container">$15$</span>.</p>
</blockquote>
<p>Why do we need to use the second Noether theorem? Why can't we just use the formula <span class="math-container">$\frac{|P_5||P_3|}{|P_3 \cap P_5|}$</span> to compute the order?</p>
<blockquote>
<p>Thus, <span class="math-container">$P_5P_3 \cong Z_{15}$</span>, by Corollary <span class="math-container">$5.3.17$</span>. But then <span class="math-container">$P_5P_3$</span> normalizes <span class="math-container">$P_5$</span>, which contradicts the earlier statement that <span class="math-container">$N_G(P_5)$</span> has order <span class="math-container">$5$</span>.</p>
</blockquote>
<p>Why do we have to realize that <span class="math-container">$P_5P_3$</span> is isomorphic to <span class="math-container">$Z_{15}$</span>? Also, how can we conclude that <span class="math-container">$P_5P_3$</span> normalizes <span class="math-container">$P_5$</span>?</p>
<p>Thanks in advance.</p>
| pavan | 423,856 | <p><span class="math-container">$o(G) = 30 = 2*3*5$</span></p>
<p>Since the number of <span class="math-container">$5$</span> - Sylow subgroups is of form, <span class="math-container">$N_G(5) = 5k +1 $</span> and <span class="math-container">$N_G(5) \Big | o(G)$</span>,</p>
<p><span class="math-container">$\implies N_G(5) = 1$</span> or <span class="math-container">$N_G(5) = 6$</span></p>
<p>Similarly, <span class="math-container">$N_G(3) = 1$</span> or <span class="math-container">$N_G(3) = 10$</span>.</p>
<p>First we will show that, <span class="math-container">$5$</span> - Sylow subgroup or <span class="math-container">$3$</span> - Sylow subgroup is normal is <span class="math-container">$G$</span>.</p>
<p>If <span class="math-container">$N_G(5) = 6$</span> and <span class="math-container">$N_G(3) = 10$</span>, then,</p>
<p>There are <span class="math-container">$(6\cdot 4 = 24)$</span> elements of order <span class="math-container">$5$</span> and <span class="math-container">$(10\cdot 2 = 20)$</span> elements of order <span class="math-container">$3$</span>.</p>
<p>So a total of <span class="math-container">$24+20 =44$</span> elements. Which is a contradiction.</p>
<p>Thus, <span class="math-container">$N_G(5) = 1$</span> or <span class="math-container">$N_G(3) = 1$</span> or both.</p>
<p>If <span class="math-container">$N_G(5) = 1 $</span>, then we are done. If not,</p>
<p>We have <span class="math-container">$N_G(3) = 1$</span> and <span class="math-container">$N_G(5) = 6$</span>, let <span class="math-container">$H_3$</span> and <span class="math-container">$H_5$</span> be <span class="math-container">$3$</span>-sylow and <span class="math-container">$5$</span>-sylow subgroups respectively. Since <span class="math-container">$N_G(3) = 1$</span>, <span class="math-container">$H_3$</span> is normal.</p>
<p>It is easy to see that <span class="math-container">$H_3H_5$</span> is a subgroup in <span class="math-container">$G$</span> and <span class="math-container">$o(H_3 H_5) = 15$</span>.</p>
<p>By the argument used earlier, <span class="math-container">$H_5$</span> is normal in <span class="math-container">$H_3 H_5$</span>. (Because the number of <span class="math-container">$5$</span>-Sylow subgroups in <span class="math-container">$H_3H_5$</span> is <span class="math-container">$N_{H_3H_5}(5) = 1$</span>.)</p>
<p><span class="math-container">$\implies$</span> There are only <span class="math-container">$4$</span> elements of order <span class="math-container">$5$</span> in <span class="math-container">$H_3H_5$</span>.</p>
<p>Thus, the remaining <span class="math-container">$20$</span> elements of order <span class="math-container">$5$</span> lie in <span class="math-container">$G - H_3H_5$</span>, but <span class="math-container">$\Big | G- H_3 H_5 \Big | = 15$</span>, a contradiction.</p>
<p>Hence <span class="math-container">$N_G(5) = 1$</span></p>
<p>QED</p>
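<p>The number-theoretic constraints used at the start of this argument are easy to tabulate mechanically. The snippet below (my own addition) lists the Sylow-count candidates for a group of order 30 and rechecks the element-counting contradiction:</p>

```python
def sylow_counts(order, p):
    # candidates permitted by Sylow's theorems: n_p divides |G| and n_p ≡ 1 (mod p)
    return [n for n in range(1, order + 1) if order % n == 0 and n % p == 1]

# the case analysis above, for |G| = 30:
assert sylow_counts(30, 5) == [1, 6]
assert sylow_counts(30, 3) == [1, 10]

# if neither Sylow subgroup were normal, elements of order 5 and 3 alone
# would already outnumber the group: 6*4 + 10*2 = 44 > 30
assert 6 * (5 - 1) + 10 * (3 - 1) > 30
print("Sylow candidate counts check out")
```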
|
723,355 | <p>To win the lottery you must pick the one winning ticket.
Given the option of drawing 1 out of 10 tickets or 10 out of 100 tickets, which is the better option? Are they both 1 out of 10, or is it better to pick 1 out of 100 first, then 1 out of 99, then 1 out of 98 (assuming you don't pick the winner), up to your 10 picks? Both should be 10%, but I don't know why the 10 separate picks out of 100 don't add up to more than 10%. Can someone explain the probability equation for this? </p>
| Mike Parks | 496,458 | <p>Your intuition is correct: the two options give exactly the same odds, assuming the 10 picks out of 100 are distinct (without replacement).</p>
<p>1 pick of 10 = 10% chance of selecting the winning ticket.</p>
<p>For 10 picks out of 100, the conditional chances do grow as losing tickets are eliminated: 1 of 100 is 1%, 1 of 99 is about 1.01%, and so on, up to 1 of 91 at about 1.10%. But these cannot simply be added, because each conditional chance only applies when all earlier picks missed. The unconditional probability of winning on pick $k$ is</p>
<p>$$P(\text{win on pick } k) = \underbrace{\frac{99}{100}\cdot\frac{98}{99}\cdots\frac{101-k}{102-k}}_{\text{first } k-1 \text{ picks miss}}\cdot\frac{1}{101-k} = \frac{1}{100},$$</p>
<p>the same for every pick. Summing over the 10 picks gives exactly $10/100 = 10\%$, so neither option is better.</p>
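<p>As a sanity check (my own addition, not part of any answer here), a short computation confirms that the 10-of-100 option wins with probability exactly $10\%$, both via the complement and via the per-pick telescoping product:</p>

```python
from math import comb

# option A: a single pick out of 10 tickets
p_a = 1 / 10

# option B: ten distinct picks out of 100; you miss the winner exactly when
# all ten picks land among the 99 losing tickets
p_b = 1 - comb(99, 10) / comb(100, 10)
print(p_a, p_b)  # both are 0.1

# per-pick view: the unconditional chance of winning on pick k is always 1/100,
# because the growing conditional chance is weighted by the chance of reaching pick k
terms = []
survive = 1.0                      # probability that all earlier picks missed
for k in range(10):
    remaining = 100 - k
    terms.append(survive / remaining)
    survive *= (remaining - 1) / remaining
print(sum(terms))  # the same 10%
```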
|
2,007,123 | <p>Suppose $R=\mathbb{Q}[x]$ is a ring and define $T$ to be $T=\{f(x) \in R : f(0) \neq 0\}$ </p>
<p>Prove that $\frac{x}{1}$ is the only irreducible element in $T^{-1}R$ (disregarding associates).</p>
<hr>
<p>My approach: assume $\frac{x}{1}$ is reducible. So there exist non-unit elements $\frac{a}{b}, \frac{c}{d}$ in $T^{-1}R $ such that $\frac{x}{1}=\frac{a}{b} \frac{c}{d}$. So $x=ac$ and since the degree of $x$ is $1$ it follows that the degree of $a$ or $c$ needs to be $0$. So one of the elements $\frac{a}{b}, \frac{c}{d}$ is a unit and therefore our assumption that $\frac{x}{1}$ is reducible can't be true. Is my reasoning thus far correct?</p>
<p>Now I need to show that $\frac{x}{1}$ is the <strong>only</strong> irreducible element. But how can one prove that?</p>
| Nominal Animal | 318,422 | <p>Rewritten on 2016-11-12. The OP raised very good questions in the comments. Note that this is not intended as an exhaustive answer (as one might expect from, say, a mathematician?), but more like observations from someone who routinely uses bilinear interpolation for numerical data.</p>
<blockquote>
<p>How can a bilinear interpolation be defined for an arbitrary quadrilateral,
i.e. without running into singularities?</p>
</blockquote>
<p>Bilinear interpolation is usually defined as
$$f(u,v) = (1-u) (1-v) F_{00} + (1-u) v \, F_{01} + u (1 - v) F_{10} + u v \, F_{11}$$
where $0 \le u, v \le 1$ and
$$\begin{array}{}
f(0,0) = F_{00} \\
f(0,1) = F_{01} \\
f(1,0) = F_{10} \\
f(1,1) = F_{11} \\
f(\frac{1}{2},\frac{1}{2}) = \frac{F_{00}+F_{01}+F_{10}+F_{11}}{4}
\end{array}$$</p>
<p>If we use notation
$$p(t; p_0, p_1) = (1-t) p_0 + t p_1 = p_0 + t (p_1 - p_0)$$
for the simplest form of linear interpolation, with $0 \le t \le 1$, $p(0;p_0,p_1) = p_0$, $p(1;p_0,p_1) = p_1$, then bilinear interpolation can be written as
$$f(u,v) = p(u; p(v; F_{00}, F_{01}), p(v; F_{10}, F_{11}))$$
so this simply extends the single-variable linear interpolation to two variables and $2^2 = 4$ samples.</p>
<p>Bilinear interpolation is not often used for arbitrary quadrilaterals. After pondering the questions OP posed in the comments, I realized that the typical form used for interpolation,
$$\begin{cases}
x(u,v) = x_{00} + u ( x_{10} - x_{00}) + v ( x_{01} - x_{00} ) \\
y(u,v) = y_{00} + u ( y_{10} - y_{00}) + v ( y_{01} - y_{00} ) \\
f(u,v) = (1-v) \left ( (1-u) f_{00} + u f_{10} \right ) + (v) \left ( (1-u) f_{01} + u f_{11} \right )
\end{cases}$$
is not applicable to arbitrary quadrilaterals, as it assumes it to be a parallelogram, i.e. with
$$\begin{cases}
x_{11} = x_{10} + x_{01} - x_{00} \\
y_{11} = y_{10} + y_{01} - y_{00}
\end{cases}$$
Solving $x = x(u,v)$, $y = y(u,v)$ for $u$ and $v$ yields
$$\begin{cases}
A = x_{00} (y_{01} - y_{10}) + x_{01} (y_{10} - y_{00}) + x_{10} (y_{00} - y_{01}) \\
u = \frac{ (x_{01} - x_{00}) y - (y_{01} - y_{00}) x + x_{00} y_{01} - y_{00} x_{01} }{A} \\
v = \frac{ (x_{00} - x_{10}) y - (y_{00} - y_{10}) x - x_{00} y_{10} + y_{00} x_{10} }{A}
\end{cases}$$
where $$A = \left(\vec{p}_{01} - \vec{p}_{00}\right) \times \left(\vec{p}_{10} - \vec{p}_{00}\right)$$
where $\times$ signifies the 2D analog of vector cross product, so $\lvert A \rvert$ is the area of the parallelogram. Thus, exactly one solution exists for all non-degenerate parallelograms.</p>
<p>For the most common use case, a regular rectangular axis-aligned grid of samples $p_{ji}$, $0 \le j, i \in \mathbb{Z}$, we have
$$\begin{cases}
x = a_x + b_x i \\
y = a_y + b_y j
\end{cases}$$
with $b_x \ne 0$, $b_y \ne 0$, corresponding to interpolation parameters
$$\begin{cases}
i = \left\lfloor \frac{x - a_x}{b_x} \right\rfloor \\
j = \left\lfloor \frac{y - a_y}{b_y} \right\rfloor \\
u = \frac{x - a_x}{b_x} - i \\
v = \frac{y - a_y}{b_y} - j
\end{cases}$$
so that
$$p(x,y) = (1-v) \left ( (1-u) p_{j,i} + (u) p_{j,i+1} \right ) + (v) \left ( (1-u) p_{j+1,i} + (u) p_{j+1,i+1} \right )$$</p>
<p><br>
To apply bilinear interpolation to an arbitrary quadrilateral, we need to use
$$\begin{cases}
x(u,v) = (1-u)(1-v) x_{00} + (u)(1-v) x_{10} + (1-u)(v) x_{01} + (u)(v) x_{11} \\
y(u,v) = (1-u)(1-v) y_{00} + (u)(1-v) y_{10} + (1-u)(v) y_{01} + (u)(v) y_{11} \\
f(u,v) = (1-u)(1-v) f_{00} + (u)(1-v) f_{10} + (1-u)(v) f_{01} + (u)(v) f_{11}
\end{cases}$$
In some cases it is sufficient to produce additional samples, for example so that each quadrilateral can be split into four sub-quadrilaterals, doubling the resolution. Then, we do not need to solve for $x$ and $y$, and only need to compute
$$\begin{array}{cc}
x\left(\frac{1}{2},0\right), & y\left(\frac{1}{2},0\right), & f\left(\frac{1}{2},0\right) \\
x\left(\frac{1}{2},1\right), & y\left(\frac{1}{2},1\right), & f\left(\frac{1}{2},1\right) \\
x\left(0,\frac{1}{2}\right), & y\left(0,\frac{1}{2}\right), & f\left(0,\frac{1}{2}\right) \\
x\left(1,\frac{1}{2}\right), & y\left(1,\frac{1}{2}\right), & f\left(1,\frac{1}{2}\right)
\end{array}$$</p>
<p>However, solving $(u,v)$ for some specific $(x,y)$ is quite complicated. Indeed, I was surprised how complicated it turns out to be! (I apologize for misrepresenting this case as "easy" in a previous edit. Mea culpa.)</p>
<p>In practice, we first try to solve for $u$ or $v$, and then obtain the other by substituting into one of the equations above. If we decide we wish to solve for $u$ first, we need to solve
$$U_2 u^2 + U_1 u + U_0 = 0$$
where
$$\begin{cases}
U_2 = (x_{00}-x_{10}) (y_{01}-y_{11}) - (y_{00}-y_{10}) (x_{01}-x_{11}) \\
U_1 = (x_{00}-x_{01}-x_{10}+x_{11}) y - (y_{00}-y_{01}-y_{10}+y_{11}) x + (y_{11}-2 y_{01}) x_{00} + (2 y_{00}-y_{10}) x_{01} + x_{10} y_{01} - x_{11} y_{00} \\
U_0 = (x_{01}-x_{00}) y - (y_{01}-y_{00}) x + x_{00} y_{01} - y_{00} x_{01}
\end{cases}$$
The possible solutions are
$$\begin{cases}
u = \frac{-U_1 \pm \sqrt{ U_1^2 - 4 U_2 U_0}}{2 U_2}, & U_2 \ne 0 \\
u = \frac{-U_0}{U_1}, & U_2 = 0, U_1 \ne 0 \\
u = 0, & U_2 = 0, U_0 = 0
\end{cases}$$
If we find $0 \le u \le 1$, we solve for $v$ by substituting into $Y(u,v) = y$,
$$v = \frac{ (y_{00} - y_{10}) u + y - y_{00} }{ (y_{00} - y_{01} - y_{10} + y_{11}) u - y_{00} + y_{01} }$$
or into $X(u,v) = x$,
$$v = \frac{ (x_{00} - x_{10}) u + x - x_{00} }{ (x_{00} - x_{01} - x_{10} + x_{11}) u - x_{00} + x_{01} }$$</p>
<p>If we find no solutions, we try to solve for $v$ in
$$V_2 v^2 + V_1 v + V_0 = 0$$
where
$$\begin{cases}
V_2 = (x_{00}-x_{01}) (y_{10}-y_{11}) - (y_{00}-y_{01}) (x_{10}-x_{11}) \\
V_1 = (x_{00}-x_{01}-x_{10}+x_{11}) y - (y_{00}-y_{01}-y_{10}+y_{11}) x + (y_{11}-2 y_{10}) x_{00} + (2 y_{00}-y_{01}) x_{10} + x_{01} y_{10} - x_{11} y_{00} \\
V_0 = (x_{10}-x_{00}) y - (y_{10}-y_{00}) x + x_{00} y_{10} - y_{00} x_{10}
\end{cases}$$
The possible solutions are similar to those for $u$:
$$\begin{cases}
v = \frac{-V_1 \pm \sqrt{ V_1^2 - 4 V_2 V_0}}{2 V_2}, & V_2 \ne 0 \\
v = \frac{-V_0}{V_1}, & V_2 = 0, V_1 \ne 0 \\
v = 0, & V_2 = 0, V_0 = 0
\end{cases}$$
If you find $0 \le v \le 1$, you solve for $u$ by substituting into $X(u,v) = x$,
$$u = \frac{(x_{00} - x_{01}) v + x - x_{00} }{ (x_{00} - x_{01} - x_{10} + x_{11}) v - x_{00} + x_{10} }$$
or into $Y(u,v) = y$,
$$u = \frac{(y_{00} - y_{01}) v + y - y_{00} }{ (y_{00} - y_{01} - y_{10} + y_{11}) v - y_{00} + y_{10} }$$</p>
<p>It is also possible to solve $(u,v)$ numerically, by calculating $X(u,v)$ and $Y(u,v)$ repeatedly with different $u$, $v$, until $\lvert X(u,v) - x \rvert \le \epsilon$ and $\lvert Y(u,v) - y \rvert \le \epsilon$, where $\epsilon$ is the maximum acceptable error in $x$ and $y$ (maximum distance to correct $(x,y)$ being $\sqrt{2}\epsilon$).</p>
<p>There are a number of different methods for the numerical search. Some of the following observations may come in handy, when implementing a numerical search:
$$\begin{array}{rl}
\frac{\partial \, X(u,v)}{\partial u} = & x_{10} - x_{00} + v ( x_{11} - x_{01} - x_{10} + x_{00} ) \\
\frac{\partial \, X(u,v)}{\partial v} = & x_{01} - x_{00} + u ( x_{11} - x_{01} - x_{10} + x_{00} ) \\
\frac{\partial \, Y(u,v)}{\partial u} = & y_{10} - y_{00} + v ( y_{11} - y_{01} - y_{10} + y_{00} ) \\
\frac{\partial \, Y(u,v)}{\partial v} = & y_{01} - y_{00} + u ( y_{11} - y_{01} - y_{10} + y_{00} ) \\
X(u + du, v) - X(u, v) = & du \left ( x_{10} - x_{00} + v ( x_{11} - x_{01} - x_{10} + x_{00} ) \right ) \\
X(u, v + dv) - X(u, v) = & dv \left ( x_{01} - x_{00} + u ( x_{11} - x_{01} - x_{10} + x_{00} ) \right ) \\
Y(u + du, v) - Y(u, v) = & du \left ( y_{10} - y_{00} + v ( y_{11} - y_{01} - y_{10} + y_{00} ) \right ) \\
Y(u, v + dv) - Y(u, v) = & dv \left ( y_{01} - y_{00} + u ( y_{11} - y_{01} - y_{10} + y_{00} ) \right )
\end{array}$$</p>
<p>In other words, it is true that the bilinear interpolation is quite difficult for arbitrary quadrilaterals, and very problematic for self-intersecting quadrilaterals. However, the most common quadrilateral types -- rectangles and parallelograms -- are easy, and even the general case is solvable at least numerically, even in the presence of singularities.</p>
<hr>
<blockquote>
<p>Why bilinear interpolation with quadrilaterals?</p>
</blockquote>
<p>As I've shown above, for the rectangles and parallelograms -- the only quadrilaterals I've used bilinear interpolation with in real solutions --, bilinear interpolation is easy and simple.</p>
<p>Indeed, the emphasis on <em>quadrilaterals</em> (in the sense of arbitrary quadrilaterals) seems incorrect, as bilinear interpolation is mostly used with rectangles or parallelograms.</p>
<p>Perhaps the emphasis should be on that bilinear interpolation uses two variables to interpolate between four known values; or more generally, $k$-linear interpolation uses $k$ variables to interpolate between $2^k$ values. Trilinear interpolation is similarly common for cuboids with vertices
$$\begin{cases}
\vec{p}_{011} = \vec{p}_{010} + \vec{p}_{001} - \vec{p}_{000} \\
\vec{p}_{101} = \vec{p}_{100} + \vec{p}_{001} - \vec{p}_{000} \\
\vec{p}_{110} = \vec{p}_{100} + \vec{p}_{010} - \vec{p}_{000} \\
\vec{p}_{111} = \vec{p}_{100} + \vec{p}_{010} + \vec{p}_{001} - 2 \vec{p}_{000}
\end{cases}$$
i.e. cuboids defined by one vertex and three edge vectors.</p>
<p>Regular grids are ubiquitous, and linear mapping is the simplest interpolation method, with easy properties. Cubic interpolation and other interpolation methods do produce better results, but are computationally more expensive, and the properties may produce unwanted behaviour: most typically, the interpolated value is no longer guaranteed to reside within the range spanned by the constants.</p>
|
3,401 | <p>Consider an idealized classical particle confined to a two-dimensional surface that is frictionless. The particle's initial position on the surface is randomly selected, a nonzero velocity vector is randomly assigned to it, and the direction of the particle's movement changes only at the surface boundaries where perfectly elastic collisions occur (i.e. there is no information loss over time).</p>
<p>My question is - Does there exist such a bounded surface where the probability of the particle visiting any given position at some time 't', P(x,y,t), becomes equal to unity at infinite time? In other words, no matter where we initialize the particle, and no matter the velocity vector assigned to it, are there surfaces that will always be 'everywhere accessible'?</p>
<p>(Once again, I welcome any help asking this question in a more appropriate manner...)</p>
| Igor Rivin | 11,142 | <p>I think the answers are not to the question asked (at least <em>as</em> it is asked). The ergodicity of the geodesic flow (which, by the way, holds for all negatively curved surfaces -- a fact surprisingly not mentioned in any of the above answers) does not mean that a fixed geodesic will hit <em>every</em> point on the surface eventually, but merely that it will become dense (well, more than that, but less than hitting every point). The OP asks for every point to be hit.</p>
|
4,253,485 | <p>I understand that if you roll <span class="math-container">$10$</span> <span class="math-container">$6$</span>-faced dice (D6), the number of different outcomes should be <span class="math-container">$6^{10}=60,466,176$</span>, right? (meaning each particular outcome has probability <span class="math-container">$1/60,466,176$</span>)</p>
<p>But if you are playing a board game, you don't care about that, right? What matters is the sum of the numbers in the end, so it should anyway be <span class="math-container">$1/60$</span> instead, right? Because it's a sum, not <span class="math-container">$10$</span> individual rolls.</p>
<p>so a <span class="math-container">$60$</span> face die (D60) should be the same as rolling all the dice right? if not can someone please explain? I've been looking all over the net for a compelling answer but there is none. I've seen a D60 can be used as <span class="math-container">$2$</span> D6 but also there is no explanation for this.</p>
<p>if not, what would be the number of faces needed for an equivalent single die that can replace <span class="math-container">$10$</span> D6. I know so far the D120 is the last possible die that can be made, but it doesn't matter, I just want to know too...</p>
<p>EDIT: Sorry I forgot to mention in some tabletop games Dice can have <span class="math-container">$0$</span>, so you'd treat all <span class="math-container">$6$</span> sided dice as going from <span class="math-container">$0$</span> to <span class="math-container">$5$</span>, making minimum value of <span class="math-container">$0$</span> possible in all <span class="math-container">$10$</span> rolls. But still that changes the maximum in the same way minimum used to be, now the max you get in 6 sided dice is <span class="math-container">$50$</span>, while on the 60 sided die it would be <span class="math-container">$59$</span> if we apply a <span class="math-container">$-1$</span> to all results... it gets messy, and this detail doesn't really change much</p>
| Alan Abraham | 823,763 | <p>No, the probability distribution of the sum of the faces of rolling <span class="math-container">$10$</span> <span class="math-container">$6$</span>-sided dice is different from that of rolling a single <span class="math-container">$60$</span>-sided die.</p>
<p>Most obviously, you can roll any number between <span class="math-container">$1$</span> and <span class="math-container">$9$</span> with a <span class="math-container">$60$</span>-sided die, but you cannot attain such a sum with <span class="math-container">$10$</span> <span class="math-container">$6$</span>-sided dice, whose minimum possible sum is <span class="math-container">$10$</span>.</p>
<p>The probability distribution of a single <span class="math-container">$6$</span>-sided die can be modeled with the generating function
<span class="math-container">$$\frac{1}{6}(x+x^2+x^3+x^4+x^5+x^6)$$</span>
Where the coefficient of <span class="math-container">$x^k$</span> is the probability of rolling a <span class="math-container">$k$</span>. Moreover, the probability distribution of the sum of rolling <span class="math-container">$10$</span> <span class="math-container">$6$</span>-sided dice can be modeled as
<span class="math-container">$$\frac{1}{6^{10}}(x+x^2+x^3+x^4+x^5+x^6)^{10}$$</span>
Where the coefficient of <span class="math-container">$x^k$</span> is the probability of rolling a sum of <span class="math-container">$k$</span>. It can be clearly seen that this expansion is completely different than the generating function that models the probability distribution of rolling a <span class="math-container">$60$</span>-sided die i.e.
<span class="math-container">$$\frac{1}{60}(x+x^2+x^3+\ldots+x^{59}+x^{60})$$</span></p>
<p>However, there are some interesting ideas that you can get from using these generating functions. For example, we have that
<span class="math-container">$$\frac{1}{6}(x+x^2+x^3+x^4+x^5+x^6)$$</span>
<span class="math-container">$$=\frac{1}{6}x(1+x+x^2+x^3+x^4+x^5)$$</span>
<span class="math-container">$$=\frac{1}{6}x(1+x^3)(1+x+x^2)$$</span>
<span class="math-container">$$=\frac{1}{6}x(1+x)(1-x+x^2)(1+x+x^2)$$</span>
We can take two dice, and say that their generating function for their probability distribution is
<span class="math-container">$$=\frac{1}{6^2}x^2(1+x)^2(1-x+x^2)^2(1+x+x^2)^2$$</span>
<span class="math-container">$$=\frac{1}{6^2}[(1+x)(1+x+x^2)x^2][(1+x)(1+x+x^2)(1-x+x^2)^2]$$</span>
<span class="math-container">$$=\frac{1}{6^2}[x^2+2x^3+2x^4+x^5][1+x^2+x^3+x^4+x^5+x^7]$$</span>
This means we can replace two <span class="math-container">$6$</span>-sided dice with, for example, a die with faces of <span class="math-container">$2,3,3,4,4,5$</span> and a die with faces of <span class="math-container">$0,2,3,4,5,7$</span>, and the sum from rolling these two dice will have the same probability distribution as rolling two standard <span class="math-container">$6$</span>-sided dice. We can extend this process to the generating functions for more <span class="math-container">$6$</span>-sided dice and create some wacky dice that still have the same probability distribution.</p>
|
119,492 | <p>First I define a function, just a sum of a few sin waves at different angular frequencies:</p>
<pre><code>ubdat = 50;
ws = 10*{2, 5, 10, 20, 40}
fn = Table[Sum[Sin[w*x], {w, ws}], {x, 0, ubdat, .001}];
pts = Length@fn
ListPlot[fn, Joined -> True, PlotRange -> {{0, 1000}, All}]
{20, 50, 100, 200, 400}
</code></pre>
<p><a href="https://i.stack.imgur.com/ZCcaO.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ZCcaO.png" alt="enter image description here"></a></p>
<p>If I take the Fourier transform and scale it correctly, you can see the correct peaks:</p>
<pre><code>fnft = Abs@Fourier@fn;
fnftnormed = Table[{2*Pi*i/ubdat, fnft[[i]]}, {i, Length@fnft}];
ListPlot[fnftnormed, Joined -> True, PlotRange -> {{0, 500}, All}]
</code></pre>
<p><a href="https://i.stack.imgur.com/adDJS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/adDJS.png" alt="enter image description here"></a></p>
<p>Now, I want to do a low pass filter on it, for, say, $\omega_c=140$. This should get rid of the peaks at 200 and 400, ideally. Doing it this way returns the same plots as above:</p>
<pre><code>fnfilt = LowpassFilter[fn, 140];
ListPlot[fnfilt, Joined -> True, PlotRange -> {{0, 1000}, All}]
fnfiltft = Abs@Fourier@fnfilt;
fnfiltftnormed =
Table[{2*Pi*i/ubdat, fnfiltft[[i]]}, {i, Length@fnfiltft}];
ListPlot[fnfiltftnormed, Joined -> True, PlotRange -> {{0, 500}, All}]
</code></pre>
<p>I assume the problem is something to do with defining SampleRate, but the documentation explaining how it's defined or how to use it on the <a href="https://reference.wolfram.com/language/ref/LowpassFilter.html" rel="noreferrer">LowpassFilter page</a> is <em>very</em> sparse:</p>
<blockquote>
<p>By default, SampleRate->1 is assumed for images as well as data. For a
sampled sound object of sample rate of r, SampleRate->r is used. With
SampleRate->r, the cutoff frequency should be between 0 and $r*\pi$.</p>
</blockquote>
<p>It appears to have a broken link at the bottom, so maybe that had something helpful. The page for SampleRate itself has even less info.</p>
<p>My naive attempt at choosing a sample rate would be dividing the number of samples by the total range, so in this case, <code>Floor[pts/ubdat]=1000</code>. Using this <em>does</em> affect the FT, but not a whole lot:</p>
<pre><code>fnfilt = LowpassFilter[fn, 140, SampleRate -> 1000];
ListPlot[fnfilt, Joined -> True, PlotRange -> {{0, 1000}, All}]
fnfiltft = Abs@Fourier@fnfilt;
fnfiltftnormed =
Table[{2*Pi*i/ubdat, fnfiltft[[i]]}, {i, Length@fnfiltft}];
ListPlot[fnfiltftnormed, Joined -> True, PlotRange -> {{0, 500}, All}]
</code></pre>
<p><a href="https://i.stack.imgur.com/XUqiC.png" rel="noreferrer"><img src="https://i.stack.imgur.com/XUqiC.png" alt="enter image description here"></a></p>
<p>So what am I missing? I've tried googling for some sort of guide on using filters in Mathematica, but I can't find anything and it's very frustrating.</p>
| Hugh | 12,558 | <p>I could not make LowpassFilter work for one-dimensional time histories either. I suggest you use a Butterworth filter, which is more standard. Also, I like to use random noise for testing because you can then look at the transfer function. For the test below I add a few zeros at the end of the random signal to allow for filter ringing. </p>
<p>Here is your code modified for random.</p>
<pre><code>ubdat = 50;
fn = Join[Table[RandomReal[{-1, 1}], {x, 0, ubdat - 10, .001}],
ConstantArray[0, Round[10/0.001]]];
pts = Length@fn
ListPlot[fn, Joined -> True, PlotRange -> {{0, 1000}, All}]
</code></pre>
<p><img src="https://i.stack.imgur.com/9iVsS.png" alt="Mathematica graphics"></p>
<p>The Fourier transform is less clear than with sine waves but stay with it</p>
<pre><code>fnft = Abs@Fourier@fn;
fnftnormed = Table[{2*Pi*i/ubdat, fnft[[i]]}, {i, Length@fnft}];
ListPlot[fnftnormed, Joined -> True, PlotRange -> {{0, 500}, All}]
</code></pre>
<p><img src="https://i.stack.imgur.com/9xPjz.png" alt="Mathematica graphics"></p>
<p>Now I make a low pass filter, convert it to a time model and then use it on the data.</p>
<pre><code>lowpass = ButterworthFilterModel[{3, 140}];
filt = ToDiscreteTimeModel[lowpass, 0.001];
fnfilt = RecurrenceFilter[filt, fn];
fnfiltft = Abs@Fourier@fnfilt;
fnfiltftnormed =
Table[{2*Pi*i/ubdat, fnfiltft[[i]]}, {i, Length@fnfiltft}];
ListPlot[fnfiltftnormed, Joined -> True, PlotRange -> {{0, 500}, All}]
</code></pre>
<p><img src="https://i.stack.imgur.com/jenGx.png" alt="Mathematica graphics"></p>
<p>This is looking better with a clear reduction in level. Now we look at the relationship between the output and the input. I use your frequency axis.</p>
<pre><code>ListLinePlot[
Transpose[{fnfiltftnormed[[All, 1]], Abs[fnfiltft/fnft]}][[1 ;;
6000]], PlotRange -> All]
</code></pre>
<p><img src="https://i.stack.imgur.com/T7420.png" alt="Mathematica graphics"></p>
<p>Now there is a nice transfer function with a clear drop at 140 radians/sec. </p>
<p>If you repeat this with your sine waves you will get the result you want but you can't do the transfer function because the spectra are zero between the peaks. </p>
<p>Hope that helps.</p>
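<p>For readers outside Mathematica, the same idea can be sketched in Python with a brick-wall frequency-domain filter. Note this is <em>not</em> a Butterworth response, just the crude zero-the-bins version, and the code is my own illustration:</p>

```python
import numpy as np

fs = 1000                                    # samples per second
t = np.arange(0, 1, 1 / fs)
# Angular frequencies 2*pi*5 ~ 31.4 rad/s (kept) and 2*pi*50 ~ 314 rad/s (removed)
signal = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 50 * t)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)   # bin frequencies in Hz
spectrum[2 * np.pi * freqs > 140] = 0            # cutoff at omega_c = 140 rad/s
filtered = np.fft.irfft(spectrum, n=len(signal))

print(np.allclose(filtered, np.sin(2 * np.pi * 5 * t)))  # True
```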
|
2,966,959 | <blockquote>
<p>Consider the simultaneous system of differential equations:
<span class="math-container">$$ \begin{equation}
x'(t)=y(t) -x(t)/2\\
y'(t)=x(t)/4-y(t)/2
\end{equation} $$</span>
If <span class="math-container">$ x(0)=2 $</span> and <span class="math-container">$ y(0)=3 $</span>, then what is <span class="math-container">$ \lim_{t\to\infty}(x(t)+y(t)) $</span>?</p>
</blockquote>
<p>Here is what I do:</p>
<p><span class="math-container">$$ \frac{dy}{dx}=\frac{\frac{1}{4}x-\frac{1}{2}y}{-\frac{1}{2}x+y}=-\frac{1}{2} $$</span>
So <span class="math-container">$$ y=-\frac{1}{2}x+4 $$</span> and <span class="math-container">$$ x(t)+y(t)=\frac{1}{2}x(t)+4 .$$</span>
Now solve for <span class="math-container">$ x(t) $</span>, we have <span class="math-container">$$ x(t)=4-\frac{2}{e^t} .$$</span>
Hence <span class="math-container">$ \lim_{t\to\infty}(x(t)+y(t))=2+4=6 $</span>.</p>
<p>However, there should be another method involving using matrices in the standard way. How to do it via matrices?</p>
<p>The question is from:(14) of <a href="https://math.uchicago.edu/~min/GRE/files/week2.pdf" rel="nofollow noreferrer">https://math.uchicago.edu/~min/GRE/files/week2.pdf</a></p>
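<p>For what it's worth, here is a sketch of the matrix method the question asks about (my own illustration, not from the linked notes): the coefficient matrix has eigenvalues <span class="math-container">$0$</span> and <span class="math-container">$-1$</span>, so <span class="math-container">$e^{At}x_0$</span> tends to the component of <span class="math-container">$x_0$</span> along the null eigenvector.</p>

```python
import numpy as np

A = np.array([[-0.5, 1.0],
              [0.25, -0.5]])
w, P = np.linalg.eig(A)                  # eigenvalues: 0 and -1

# As t -> infinity, e^{lambda t} -> 1 for lambda = 0 and -> 0 for lambda = -1
limits = np.where(np.isclose(w, 0.0), 1.0, 0.0)
x0 = np.array([2.0, 3.0])
steady = P @ np.diag(limits) @ np.linalg.inv(P) @ x0

print(steady, steady.sum())              # approximately [4, 2]; x + y -> 6
```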
| Aleksas Domarkas | 562,074 | <p>Integrating factor is <span class="math-container">$\mu=y^3$</span>. Then solution of equation <span class="math-container">$$xydx+(2x^2+3y^2)dy=0$$</span> is
<span class="math-container">$$y^6+x^2y^4=C$$</span></p>
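<p>The claimed integrating factor is easy to verify numerically: after multiplying through by <span class="math-container">$y^3$</span>, the form <span class="math-container">$M\,dx+N\,dy$</span> with <span class="math-container">$M=xy^4$</span> and <span class="math-container">$N=2x^2y^3+3y^5$</span> should satisfy <span class="math-container">$\partial M/\partial y=\partial N/\partial x$</span>. A quick finite-difference check in Python (illustration only):</p>

```python
def M(x, y):
    return x * y**4                      # y^3 * (x y)

def N(x, y):
    return 2 * x**2 * y**3 + 3 * y**5    # y^3 * (2 x^2 + 3 y^2)

# Central differences for the mixed partials at an arbitrary test point
h = 1e-6
x0, y0 = 1.3, 0.7
dM_dy = (M(x0, y0 + h) - M(x0, y0 - h)) / (2 * h)
dN_dx = (N(x0 + h, y0) - N(x0 - h, y0)) / (2 * h)
print(abs(dM_dy - dN_dx) < 1e-6)   # True: the multiplied form is exact
```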
|
219,763 | <p><strong>The function $f$ is defined on $\mathbb{R}$ such that for every $\delta\gt0$, $|f(y)-f(x)|\lt\delta^2$ for all $x,y\in\mathbb{R}$ with $|y-x|\lt\delta$. Prove that $f$ is a constant function.</strong> </p>
<p>So, what I know, is that I need to show that $f(a)=f(b)$ for all points $a,b\in\mathbb{R}$. Or for every $\epsilon\gt0$, $|f(a)-f(b)|\lt\epsilon$. </p>
<p>I'm at a loss here. I got a hint to divide the interval $[a, b]$ into $n$ smaller intervals. But I don't understand why, and how this would help me. </p>
<p>Hints and preferably a proof are very much appreciated. Thanks in advance. </p>
| Pedro | 23,350 | <p>I have what I think is an equivalent situation:</p>
<blockquote>
<p>Suppose $f$ is such that $f(x)-f(y)\leq (x-y)^2$ for all $x,y$. Then $f$ is constant. </p>
</blockquote>
<p><strong>PROOF</strong> Choose any $x,y$. The hypothesis means that</p>
<p>$f(y)-f(x)=-(f(x)-f(y))\leq (y-x)^2$ whence $|f(x)-f(y)|\leq (x-y)^2$ for each $x,y$. Divide $[x,y]$ into $n$ intervals $[x_{k-1},x_k]$ such that $x_k-x_{k-1}=\dfrac{y-x}n$, $x_0=x$; $x_n=y$. That is, $$x_k=x+\frac k n (y-x)$$</p>
<p>Then</p>
<p>$$|f(x)-f(y)|=\left|\sum_{k=1}^n f(x_{k-1})-f(x_k)\right|\\ \leq \sum_{k=1}^n\left|f(x_{k-1})-f(x_k)\right| \\ \leq \sum\limits_{k = 1}^n {{{\left( {{x_{k - 1}} - {x_k}} \right)}^2}} \\ = \frac{1}{{{n^2}}}\sum\limits_{k = 1}^n {{{\left( {y - x} \right)}^2}} = \frac{{{{\left( {y - x} \right)}^2}}}{n}$$</p>
<p>Thus</p>
<p>$$\tag 1 \left| {f\left( x \right) - f\left( y \right)} \right| \leq \frac{{{{\left( {y - x} \right)}^2}}}{n}$$</p>
<p>for every $n\in \Bbb N$. Now suppose $f(x)\neq f(y)$. $(1)$ means $$\tag 2 \frac{{\left| {f\left( x \right) - f\left( y \right)} \right|}}{{{{\left( {y - x} \right)}^2}}} \leq \frac{1}{n}$$</p>
<p>But since $|f(x)-f(y)|>0$ we have $$\frac{{\left| {f\left( x \right) - f\left( y \right)} \right|}}{{{{\left( {y - x} \right)}^2}}} > 0$$whence there must be an $n$ such that $$\frac{1}{n} < \frac{{\left| {f\left( x \right) - f\left( y \right)} \right|}}{{{{\left( {y - x} \right)}^2}}}$$</p>
<p>But this contradicts $(2)$. Then it must be $f(x)=f(y)$ for each choice of $x,y$.</p>
|
219,763 | <p><strong>The function $f$ is defined on $\mathbb{R}$ such that for every $\delta\gt0$, $|f(y)-f(x)|\lt\delta^2$ for all $x,y\in\mathbb{R}$ with $|y-x|\lt\delta$. Prove that $f$ is a constant function.</strong> </p>
<p>So, what I know, is that I need to show that $f(a)=f(b)$ for all points $a,b\in\mathbb{R}$. Or for every $\epsilon\gt0$, $|f(a)-f(b)|\lt\epsilon$. </p>
<p>I'm at a loss here. I got a hint to divide the interval $[a, b]$ into $n$ smaller intervals. But I don't understand why, and how this would help me. </p>
<p>Hints and preferably a proof are very much appreciated. Thanks in advance. </p>
| wj32 | 35,914 | <p><strong>Step 1.</strong> Let $x,y\in\mathbb{R}$ with $x \ne y$. For all $r>1$ we have $|x-y|<r|x-y|$, so $|f(x)-f(y)|<r^2|x-y|^2$. This is true for all $r>1$, so we must have $|f(x)-f(y)| \le |x-y|^2$.</p>
<p><strong>Step 2.</strong> This is just Exercise 5.1 in Rudin; rewrite the above as
$$\left\vert\frac{f(x)-f(y)}{x-y}\right\vert \le |x-y|$$
and it's easy to see that $f'(x)=0$ for all $x\in\mathbb{R}$ directly from the definition of the derivative as a limit. Apply the mean value theorem, and you're done.</p>
|
2,709,532 | <p>(True/False) If $S$ is a non-empty subset of $\mathbb{N}$, then there exists an element $m \in S$ such that $m\ge k$ for all $k \in S$.</p>
<p>So my reasoning is that the above statement is false. Since $S$ is only required to be a non-empty subset of $\mathbb{N}$, it may be the case that $S=\mathbb{N}$, in which case there is no $m \in S$ such that $m \ge k$ for all $k \in S$. I wanted to know if I'm on the right track, and if not, if someone could provide a hint. </p>
| Mohammad Riazi-Kermani | 514,496 | <p>You are on the right track but your counter-example is an overkill.</p>
<p>Every infinite subset of $\mathbb N$ can serve as a counter-example. </p>
<p>For example, multiples of 5 or the set of prime numbers or the set of perfect squares are such counter-examples. </p>
|
4,626,019 | <p>I recently encountered a quant interview question, which I think is not super hard, but still I am not very sure about my results.</p>
<p>Could you guys give me some hints on it?</p>
<p>The question goes as follows:</p>
<blockquote>
<p>An ant is put on a vertex of a simplex, and at each vertex, it has equal probability to move to any other one. Once it goes back to the original start point and also it has visited all the vertices, it stops.</p>
<p>What is the expected number of steps for its move?</p>
</blockquote>
<p>The way I did is to consider a 'state', <span class="math-container">$(n,x)$</span>, where <span class="math-container">$n$</span> represents the number of vertices it has visited, and <span class="math-container">$x$</span> is a binary represents whether it is at the original or not. It should be a regular Markov Chain.</p>
<p>Therefore, the ant starts in the <span class="math-container">$(1,\text{True})$</span> state and moves to the <span class="math-container">$(2,\text{False})$</span> state with probability <span class="math-container">$1$</span> <span class="math-container">$\ldots$</span> the <span class="math-container">$(4,\text{True})$</span> state is the only absorbing state.</p>
<p>After some calculations, my result is <span class="math-container">$\mathbf{7}$</span>. I am wondering whether it is correct or not and do you guys have other insights?</p>
<p>Appreciate the community!!!</p>
<hr />
<p><strong>Edit.</strong> I'm sorry. I messed up with the matrix. The correct answer should be <span class="math-container">$8.5$</span> from my side.</p>
| Sangchul Lee | 9,340 | <p><em>Sketch of Proof.</em> Suppose the ant is moving on an <span class="math-container">$n$</span>-simplex. Then the crucial observation is that we can come up with a Markov chain which moves in a single direction:</p>
<p><span class="math-container">$$ 0
\overset{\frac{n}{n}}{\longrightarrow} 1
\overset{\frac{n-1}{n}}{\longrightarrow} 2
\overset{\frac{n-2}{n}}{\longrightarrow} 3
\overset{\frac{n-3}{n}}{\longrightarrow} \quad\cdots\quad
\overset{\frac{1}{n}}{\longrightarrow} n
\overset{\frac{1}{n}}{\longrightarrow} \texttt{origin},
$$</span></p>
<p>Here are some explanations:</p>
<ul>
<li><p>Each <span class="math-container">$i=0,1,\ldots,n$</span> represents the state at which the ant has visited <span class="math-container">$i$</span> vertices, not counting the origin.</p>
</li>
<li><p>The state <span class="math-container">$\texttt{origin}$</span> represents the state at which the ant has visited all the vertices and is located at the origin. We are interested in the expected hitting time of this state.</p>
</li>
<li><p>The arrow <span class="math-container">$i \overset{p}{\longrightarrow} j$</span> means that the chain jumps from <span class="math-container">$i$</span> to <span class="math-container">$j$</span> with probability <span class="math-container">$p$</span>.</p>
</li>
<li><p>Each loop from <span class="math-container">$i$</span> to <span class="math-container">$i$</span> itself is omitted from the above diagram, since drawing them is super painful with vanilla MathJax. However, it is easy to recover them from the other arrows.</p>
</li>
<li><p>Just at the moment the ant has visited all the <span class="math-container">$n$</span> vertices (not counting the origin), the location of the ant cannot be the origin. This explains the last transition
<span class="math-container">$$n \overset{\frac{1}{n}}{\longrightarrow} \texttt{origin}.$$</span></p>
</li>
</ul>
<p>From this, we find that</p>
<ul>
<li><p>The transition <span class="math-container">$0 \to 1$</span> takes <span class="math-container">$T_{0\to1} = 1$</span> unit of time.</p>
</li>
<li><p>The transition <span class="math-container">$1 \to 2$</span> takes <span class="math-container">$T_{1\to2} \sim \operatorname{Geometric}(\frac{n-1}{n})$</span> unit of time.</p>
</li>
<li><p>The transition <span class="math-container">$2 \to 3$</span> takes <span class="math-container">$T_{2\to3} \sim \operatorname{Geometric}(\frac{n-2}{n})$</span> unit of time.</p>
</li>
</ul>
<p>and likewise for all the other transitions. So, the expected time from the state <span class="math-container">$0$</span> to <span class="math-container">$\texttt{origin}$</span> is</p>
<p><span class="math-container">\begin{align*}
&\mathbf{E}[T_{0\to1} + T_{1\to2} + T_{2\to3} + \cdots + T_{(n-1)\to n} + T_{n\to\texttt{origin}}] \\
&= \frac{n}{n} + \frac{n}{n-1} + \frac{n}{n-2} + \cdots + \frac{n}{1} + \frac{n}{1} \\
&= n\left( \frac{1}{n} + \frac{1}{n-1} + \frac{1}{n-2} + \cdots + 1 + 1 \right) \\
&= n(H_n + 1).
\end{align*}</span></p>
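<p>For <span class="math-container">$n=3$</span> (the tetrahedron of the original question) this gives <span class="math-container">$3(H_3+1)=\tfrac{17}{2}=8.5$</span>, matching the asker's edit. A short Python check, exact formula plus a seeded Monte Carlo run (the simulation code is my own):</p>

```python
import random
from fractions import Fraction

def expected_steps(n):
    # n (H_n + 1), from the chain above
    return n * (sum(Fraction(1, k) for k in range(1, n + 1)) + 1)

print(expected_steps(3))          # 17/2

random.seed(0)
def one_walk(n=3):
    """Simulate the ant on an n-simplex until all vertices are seen and it is home."""
    pos, visited, steps = 0, {0}, 0              # vertex 0 is the origin
    while True:
        others = [w for w in range(n + 1) if w != pos]
        pos = others[int(random.random() * n)]   # uniform over the other n vertices
        visited.add(pos)
        steps += 1
        if pos == 0 and len(visited) == n + 1:
            return steps

runs = 20000
est = sum(one_walk() for _ in range(runs)) / runs
print(est)                        # close to 8.5
```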
|
973,101 | <p>I am trying to find a way to generate random points uniformly distributed on the surface of an ellipsoid.</p>
<p>If it was a sphere there is a neat way of doing it: Generate three $N(0,1)$ variables $\{x_1,x_2,x_3\}$, calculate the distance from the origin</p>
<p>$$d=\sqrt{x_1^2+x_2^2+x_3^2}$$</p>
<p>and calculate the point </p>
<p>$$\mathbf{y}=(x_1,x_2,x_3)/d.$$</p>
<p>It can then be shown that the points $\mathbf{y}$ lie on the surface of the sphere and are uniformly distributed on the sphere surface, and the argument that proves it is just one word, "isotropy". No preferred direction.</p>
<p>Suppose now we have an ellipsoid</p>
<p>$$\frac{x_1^2}{a^2}+\frac{x_2^2}{b^2}+\frac{x_3^2}{c^2}=1$$</p>
<p>How about generating three $N(0,1)$ variables as above, calculate</p>
<p>$$d=\sqrt{\frac{x_1^2}{a^2}+\frac{x_2^2}{b^2}+\frac{x_3^2}{c^2}}$$</p>
<p>and then using $\mathbf{y}=(x_1,x_2,x_3)/d$ as above. That way we get points guaranteed on the surface of the ellipsoid but will they be uniformly distributed? How can we check that?</p>
<p>Any help greatly appreciated, thanks.</p>
<p>PS I am looking for a solution without accepting/rejecting points, which is kind of trivial.</p>
<p>EDIT:</p>
<p>Switching to polar coordinates, the surface element is $dS=F(\theta,\phi)\ d\theta\ d\phi$ where $F$ is expressed as
$$\frac{1}{4} \sqrt{r^2 \left(16 \sin ^2(\theta ) \left(a^2 \sin ^2(\phi )+b^2 \cos
^2(\phi )+c^2\right)+16 \cos ^2(\theta ) \left(a^2 \cos ^2(\phi )+b^2 \sin
^2(\phi )\right)-r^2 \left(a^2-b^2\right)^2 \sin ^2(2 \theta ) \sin ^2(2 \phi
)\right)}$$</p>
| Martín-Blas Pérez Pinilla | 98,199 | <p>Idea for an approximate solution: divide the ellipsoid into small enough, flat enough quasi-rectangles $P_i$ and choose an almost-area-preserving parametrization of each piece:
$$f_i:R_i\longrightarrow P_i$$
with $R_i\subset\Bbb R^2$ a rectangle. Choose randomly an index $j$ with probability $\text{area}(R_j)/\sum_i\text{area}(R_i)$, choose a point in $R_j$ and finally apply $f_j$.</p>
|
973,101 | <p>I am trying to find a way to generate random points uniformly distributed on the surface of an ellipsoid.</p>
<p>If it was a sphere there is a neat way of doing it: Generate three $N(0,1)$ variables $\{x_1,x_2,x_3\}$, calculate the distance from the origin</p>
<p>$$d=\sqrt{x_1^2+x_2^2+x_3^2}$$</p>
<p>and calculate the point </p>
<p>$$\mathbf{y}=(x_1,x_2,x_3)/d.$$</p>
<p>It can then be shown that the points $\mathbf{y}$ lie on the surface of the sphere and are uniformly distributed on the sphere surface, and the argument that proves it is just one word, "isotropy". No preferred direction.</p>
<p>Suppose now we have an ellipsoid</p>
<p>$$\frac{x_1^2}{a^2}+\frac{x_2^2}{b^2}+\frac{x_3^2}{c^2}=1$$</p>
<p>How about generating three $N(0,1)$ variables as above, calculate</p>
<p>$$d=\sqrt{\frac{x_1^2}{a^2}+\frac{x_2^2}{b^2}+\frac{x_3^2}{c^2}}$$</p>
<p>and then using $\mathbf{y}=(x_1,x_2,x_3)/d$ as above. That way we get points guaranteed on the surface of the ellipsoid but will they be uniformly distributed? How can we check that?</p>
<p>Any help greatly appreciated, thanks.</p>
<p>PS I am looking for a solution without accepting/rejecting points, which is kind of trivial.</p>
<p>EDIT:</p>
<p>Switching to polar coordinates, the surface element is $dS=F(\theta,\phi)\ d\theta\ d\phi$ where $F$ is expressed as
$$\frac{1}{4} \sqrt{r^2 \left(16 \sin ^2(\theta ) \left(a^2 \sin ^2(\phi )+b^2 \cos
^2(\phi )+c^2\right)+16 \cos ^2(\theta ) \left(a^2 \cos ^2(\phi )+b^2 \sin
^2(\phi )\right)-r^2 \left(a^2-b^2\right)^2 \sin ^2(2 \theta ) \sin ^2(2 \phi
)\right)}$$</p>
| Artashes | 1,054,066 | <p>Here's the code. This works in ANY dimension:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

dim = 2
r = 1
a = 10
b = 4
A = np.array([[1/a**2, 0],
              [0, 1/b**2]])
L = np.linalg.cholesky(A).T             # upper-triangular factor, L.T @ L == A
x = np.random.normal(0, 1, (200, dim))  # standard normal samples
z = np.linalg.norm(x, axis=1)
z = z.reshape(-1, 1).repeat(x.shape[1], axis=1)
y = x/z * r                             # uniform points on the sphere of radius r
y_new = np.linalg.inv(L) @ y.T          # map the sphere onto the ellipse
plt.figure(figsize=(15, 15))
plt.plot(y[:, 0], y[:, 1], linestyle="", marker='o', markersize=2)
plt.plot(y_new.T[:, 0], y_new.T[:, 1], linestyle="", marker='o', markersize=5)
plt.gca().set_aspect(1)
plt.grid()
</code></pre>
<p><a href="https://i.stack.imgur.com/013FD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/013FD.png" alt="enter image description here" /></a></p>
<p>And here's the theory : <a href="https://freakonometrics.hypotheses.org/files/2015/11/distribution_workshop.pdf" rel="nofollow noreferrer">https://freakonometrics.hypotheses.org/files/2015/11/distribution_workshop.pdf</a></p>
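A quick sanity check (my own sketch, assuming NumPy) that points produced by the Cholesky construction really do land on the ellipse. Note this confirms membership on the curve only; whether the resulting distribution is area-uniform is the harder part of the question.

```python
import numpy as np

a, b = 10.0, 4.0
A = np.array([[1/a**2, 0.0], [0.0, 1/b**2]])
L = np.linalg.cholesky(A).T                        # upper factor: L.T @ L == A
x = np.random.default_rng(0).normal(size=(200, 2))
y = x / np.linalg.norm(x, axis=1, keepdims=True)   # uniform on the unit circle
y_new = np.linalg.inv(L) @ y.T                     # mapped onto the ellipse
# evaluate x1^2/a^2 + x2^2/b^2 for every mapped point
vals = np.einsum('in,ij,jn->n', y_new, A, y_new)
assert np.allclose(vals, 1.0)
```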
|
3,930,373 | <p>Hi, so I'm working on a question about finding the <span class="math-container">$rank(A)$</span> and the <span class="math-container">$dim(Ker(A))$</span> of a <span class="math-container">$7\times 5$</span> matrix, without being given an actual matrix to work from.</p>
<p>I have been told that that the homogeneous equation <span class="math-container">$A\vec x=\vec0$</span> has general solution <span class="math-container">$\vec x=\lambda \vec v$</span> for some non zero <span class="math-container">$\vec v$</span> in <span class="math-container">$R^{5}$</span>.</p>
<p>So my thinking so far: for an <span class="math-container">$m\times n$</span> matrix we know that</p>
<p><span class="math-container">$\operatorname{rk}(A)+\dim\ker(A)=n$</span>,
which must mean that <span class="math-container">$\operatorname{rk}(A)+\dim\ker(A)=5$</span>,</p>
<p>but this is where I get stuck and don't know how to proceed.</p>
<p>Any help is greatly appreciated.</p>
<p><a href="https://i.stack.imgur.com/2kLWC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2kLWC.png" alt="enter image description here" /></a>
.</p>
<p>This is the exact question for the person who asked.</p>
| robjohn | 13,854 | <p>From the definition of a limit in a metric space:
<span class="math-container">$$
\forall\epsilon\gt0,\exists\delta\gt0:0\lt|x-0|\le\delta\implies|f(x)-0|\le\epsilon\tag1
$$</span>
Since there are no <span class="math-container">$x$</span> in the domain of <span class="math-container">$f(x)=\sqrt{-x^2}$</span> that satisfy <span class="math-container">$0\lt|x-0|\le\delta$</span>, the implication is vacuously true.</p>
<hr />
<p><strong>However...</strong></p>
<p>Thanks to Adam Rubinson for pointing out my error. As described in <a href="https://en.wikipedia.org/wiki/Limit_of_a_function#More_general_subsets" rel="nofollow noreferrer">Wikipedia</a>, to take the limit of a function at a point <span class="math-container">$p\in S\subset\mathbb{R}$</span>, it is required that <span class="math-container">$p\in\overline{S\setminus\{p\}}$</span>. Unfortunately, in this case, <span class="math-container">$S=\{p\}$</span>, so <span class="math-container">$p\not\in\overline{S\setminus\{p\}}=\emptyset$</span>. So, although condition <span class="math-container">$(1)$</span> is satisfied, we cannot say that
<span class="math-container">$$\require{cancel}
\cancel{\lim_{x\to0}\sqrt{-x^2}=0}\tag2
$$</span>
When <span class="math-container">$S=\mathbb{R}$</span>, <span class="math-container">$p\in\overline{S\setminus\{p\}}$</span> for all <span class="math-container">$p\in S$</span>, so <span class="math-container">$(1)$</span> is sufficient.</p>
|
1,123,513 | <p>The question is$$\text{Is }\mathbb{R}^2\text{ a subspace of }\mathbb{C}^2?$$My first thought is: $$\text{Is }\mathbb{R}^2\text{ a subset of }\mathbb{C}^2?$$ I think no, because what is $\mathbb{R}^2$? It's all those pairs like $(0,1)$, right? And what is $\mathbb{C}^2$? It's pairs like $((a,b),(c,d))$ (because a complex number is a pair of two numbers with multiplication defined as $(a,b)(c,d)=(ac-bd,ad+bc)$ and addition $(a,b)+(c,d)=(a+c,b+d)$), so they aren't the same. Another point: of course $\mathbb{R}^2$ is a vector space over $\mathbb{R}$ with specific operations, and $\mathbb{C}^2$ is a vector space over $\mathbb{C}$ with different operations, so my gut feeling tells me that they can't be the same. In general, if $V$ is a vector space over $D$ with operations $+,\times$ and $U$ is a vector space over $B$ with operations $\oplus,\otimes$ and $B\ne D$, can it be the case that $U$ is a subset of $V$? Over what?</p>
| TonyK | 1,508 | <p>According to most of the usual definitions, $\mathbb R$ is not a subset of $\mathbb C$. You are absolutely correct about this. In the same way, $\mathbb N$ is not a subset of $\mathbb Z$, which is not a subset of $\mathbb Q$, which is not a subset of $\mathbb R$.</p>
<p>However, $\mathbb N$ has a <em>natural embedding</em> in $\mathbb Z$, which has a <em>natural embedding</em> in $\mathbb Q$, etc. For instance, the integer $n \in \mathbb Z$ is mapped by this natural embedding to (the equivalence class of) the rational $\dfrac{n}{1} \in \mathbb Q$. And the real number $x \in \mathbb R$ is mapped to the complex number $x+0i \in \mathbb C$.</p>
<p>Often $-$ in fact, almost always $-$ this natural embedding is left implicit. So when $\mathbb R^2$ is treated as if it were a subset of $\mathbb C^2$, this is to be understood as $\mathbb R'^2$ being a subset of $\mathbb C^2$, where $\mathbb R'$ is the image of the natural embedding $\mathbb R \to \mathbb C$.</p>
|
1,401,116 | <p>An aeroplane is observed by two persons travelling at $60 \frac{km}{hr} $ in two vehicles moving in opposite directions on a straight road. To an observer in one vehicle the plane appears to cross the road track at right angles while to the observer in the other vehicle the angle appears to be $45°$. At what angle does the plane actually cross the road track and what is its speed relative to the ground.<br>
My attempt :<br>
Now for vehicle 1 $$ \vec{V_{p1}} =\vec{V_{p}}-\vec{V_{1}} $$ where $ \vec{V_{p}}$ is velocity of plane.<br>
Similarly for vehicle 2
$$ \vec{V_{p2}} =\vec{V_{p}}-\vec{V_{2}} $$
How should I proceed further? I am stuck at this step.</p>
| Robert Israel | 8,508 | <p>You do not remember correctly. Try $f(x,y) = x$. Perhaps there was an assumption that $f$ is bounded?</p>
|
728,503 | <p>\begin{align*}
f(x) = \left\{\begin{array}{ll}
0 & \text{ if } x=0\\
x^\alpha \sin(x^{-\beta}) & \text{ otherwise }
\end{array}\right.
\end{align*}</p>
<p>Determine the values of $\alpha$ and $\beta$ for which this function is differentiable at $x=0$.</p>
<p>I found the derivative, but I don't know what to do after...</p>
| Community | -1 | <p>Remember that the delta function has the special property that</p>
<p>$$\int_{-\infty}^{\infty} \delta(t - a) f(t) dt = f(a)$$</p>
<p>(or more particularly, $\int_0^{\infty} \delta(t - a) f(t) dt = f(a)$ whenever $a > 0$) together with the definition of the Laplace transform. Can you take it from here?</p>
|
2,369,133 | <p>I just found out that
if you want to get 1 from the fraction $$\frac{5}{2}$$
then you multiply it by $$ \frac{2}{5} $$
Does anyone have a good way to think about this? </p>
| Shuri2060 | 243,059 | <p>$$\frac{a}{b}\times\frac{b}{a}=\frac{a\times b}{b\times a}=\frac{ab}{ab}=1\quad\quad\quad a\neq 0\,,\, b\neq0$$</p>
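The same fact can be checked concretely with Python's exact rational arithmetic (a small illustration, not part of the argument):

```python
from fractions import Fraction

x = Fraction(5, 2)
assert x * Fraction(2, 5) == 1   # a nonzero fraction times its reciprocal is 1
assert 1 / x == Fraction(2, 5)   # the reciprocal just swaps numerator and denominator
```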
|
2,922,494 | <p>I came across the following problem today.</p>
<blockquote>
<p>Flip four coins. For every head, you get <span class="math-container">$\$1$</span>. You may reflip one coin after the four flips. Calculate the expected returns.</p>
</blockquote>
<p>I know that the expected value without the extra flip is <span class="math-container">$\$2$</span>. However, I am unsure of how to condition on the extra flips. I am tempted to claim that having the reflip simply adds <span class="math-container">$\$\frac{1}{2}$</span> to each case with tails since the only thing which affects the reflip is whether there are tails or not, but my gut tells me this is wrong. I am also told the correct returns is <span class="math-container">$\$\frac{79}{32}$</span> and I have no idea where this comes from.</p>
| abc... | 470,090 | <p>If the reflip always added $\frac12$, your answer would be $\frac {80}{32}$. Subtract $\frac1 {32}$ for the case where all four flips are heads: there is no tail to reflip, so the expected extra $\frac12$ is lost there, and that case occurs with probability $\frac1{16}$, i.e. $\frac1{16}\cdot\frac12=\frac1{32}$. This gives $\frac{79}{32}$.</p>
|
2,922,494 | <p>I came across the following problem today.</p>
<blockquote>
<p>Flip four coins. For every head, you get <span class="math-container">$\$1$</span>. You may reflip one coin after the four flips. Calculate the expected returns.</p>
</blockquote>
<p>I know that the expected value without the extra flip is <span class="math-container">$\$2$</span>. However, I am unsure of how to condition on the extra flips. I am tempted to claim that having the reflip simply adds <span class="math-container">$\$\frac{1}{2}$</span> to each case with tails since the only thing which affects the reflip is whether there are tails or not, but my gut tells me this is wrong. I am also told the correct returns is <span class="math-container">$\$\frac{79}{32}$</span> and I have no idea where this comes from.</p>
| Siong Thye Goh | 306,553 | <p>Let $X_i=1$ if $i$-th toss is head and $0$ otherwise.</p>
<p>The reward is $$\sum_{i=1}^4X_i + X_5\left( 1-\prod_{i=1}^4X_i\right)=\sum_{i=1}^5X_i-\prod_{i=1}^5X_i$$ </p>
<p>Hence </p>
<p>$$\mathbb{E}\left(\sum_{i=1}^5X_i-\prod_{i=1}^5X_i\right)=\left(\sum_{i=1}^5\mathbb{E}[X_i]-\prod_{i=1}^5\mathbb{E}[X_i]\right)=\frac52-\frac1{32}=\frac{79}{32}$$ </p>
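The value $\frac{79}{32}$ is easy to sanity-check by Monte Carlo simulation (a sketch of mine; the natural strategy — reflip only when some coin shows tails — is built in):

```python
import random

random.seed(1)
trials = 200_000
total = 0
for _ in range(trials):
    flips = [random.randint(0, 1) for _ in range(4)]   # 1 = head, worth $1
    if 0 in flips:
        flips[flips.index(0)] = random.randint(0, 1)   # reflip one tail
    total += sum(flips)
mean = total / trials   # should be close to 79/32 = 2.46875
```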
|
392,441 | <p>Assume that $f$ is infinitely differentiable. Let $\delta$ be the (Dirac) delta functional.</p>
<p>I know that $f\delta = f(0)\delta$, but I'm not sure how to derive the equation $f\delta′=f(0)\delta′−f′(0)\delta$.</p>
| mrf | 19,440 | <p>You're missing an $f$ in the left hand side.</p>
<p>Compute $(f\delta)'$ in two different ways.</p>
<p>On one hand $(f\delta)' = f'\delta + f\delta' = f'(0)\delta + f\delta'$.</p>
<p>On the other hand $(f\delta)' = (f(0)\delta)' = f(0)\delta'$.</p>
<p>Hence $f\delta' = f(0)\delta' - f'(0)\delta$.</p>
|
392,441 | <p>Assume that $f$ is infinitely differentiable. Let $\delta$ be the (Dirac) delta functional.</p>
<p>I know that $f\delta = f(0)\delta$, but I'm not sure how to derive the equation $f\delta′=f(0)\delta′−f′(0)\delta$.</p>
| Mårten W | 58,780 | <p>(I suppose that with $\delta$ you mean the Dirac function.)
Just evaluate $(f(t)\delta(t))'$ in two ways.</p>
<p>First, use Leibniz' rule:
$$
(f(t)\delta(t))'=f(t)\delta'(t)+f'(t)\delta(t)=f(t)\delta'(t)+f'(0)\delta(t).
$$
We also have
$$
(f(t)\delta(t))'=(f(0)\delta(t))'=f(0)\delta'(t).
$$</p>
<p>Since the two expressions must be equal, we have
$$
f(t)\delta'(t)=f(0)\delta'(t)-f'(0)\delta(t).
$$</p>
|
43,646 | <blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://math.stackexchange.com/questions/43531/havel-hakimi-theorem">Havel-Hakimi Theorem</a> </p>
</blockquote>
<p>Hi. I'm a beginner at graph theory, and I recently came across the Havel-Hakimi Theorem which is used to determine whether a sequence of integers is graphical. I am using Chartrand and Zhang's <em>Introduction to Graph Theory</em>, but I feel that the proof they provide is lacking. I am wondering whether anyone is aware of a proof for this theorem or where I can find one, preferably an easier one.</p>
<p>Thanks.</p>
| Dai | 1,864 | <p>The proof in the book "Pearls in Graph Theory" <a href="http://books.google.ca/books?id=R6pq0fbQG0QC&lpg=PP1&dq=pearls%20of%20graph%20theory&pg=PA10#v=onepage&q=pearls%20of%20graph%20theory&f=false" rel="nofollow">here</a> is quite clear. </p>
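Since the theorem is constructive, it doubles as an algorithm for testing whether a degree sequence is graphical. Here is a short sketch in Python (the function name is mine):

```python
def is_graphical(degrees):
    """Havel-Hakimi test: can `degrees` be realized by a simple graph?"""
    seq = sorted(degrees, reverse=True)
    while seq:
        d = seq.pop(0)           # largest remaining degree
        if d == 0:
            return True          # all remaining degrees are zero
        if d > len(seq):
            return False         # not enough vertices left to connect to
        # connect the removed vertex to the d next-largest-degree vertices
        for i in range(d):
            seq[i] -= 1
            if seq[i] < 0:
                return False
        seq.sort(reverse=True)
    return True

assert is_graphical([3, 3, 2, 2, 2])      # realized e.g. by C_5 plus a chord
assert not is_graphical([4, 4, 4, 1, 1])  # three degree-4 vertices force both leaves
```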
|
4,541,392 | <p>I have an integral that looks like the following:
<span class="math-container">$$
\int_a^b \frac{1}{\left(1 + cx^2\right)^{3/2}} \mathrm{d}x
$$</span>
I have seen a method of solving it being to substitute <span class="math-container">$x = \frac{\mathrm{tan}(u)}{\sqrt{c}}$</span>; however, this seems somewhat sloppy to me. Is there perhaps a better way of tackling this integral?</p>
| MathDona | 1,084,986 | <p>Let
<span class="math-container">$$
I=\int \frac{1}{\left(1 + cx^2\right)^\frac{3}{2}} \mathrm{d}x= \int \frac{\mathrm{d}x}{x^3(1/x^2+c)^{3/2}}.$$</span>
Let <span class="math-container">$1/x=u \implies -\mathrm{d}x/x^2=\mathrm{d}u$</span> (taking $x>0$; the final formula can be checked for all $x$ by differentiation) and then <span class="math-container">$u^2=v$</span>:
<span class="math-container">$$I=-\int \frac{u\,\mathrm{d}u}{(u^2+c)^{3/2}}=\frac{-1}{2} \int (v+c)^{-3/2}\, \mathrm{d}v=(v+c)^{-1/2}.$$</span>
Back-substituting <span class="math-container">$v=u^2=1/x^2$</span>, finally <span class="math-container">$$I=\frac{x}{\sqrt{1+cx^2}}+C$$</span></p>
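The result can be verified numerically by differentiating the claimed antiderivative (a throwaway check of mine, for one arbitrary value of $c$):

```python
import math

c = 2.0
F = lambda x: x / math.sqrt(1 + c*x*x)   # claimed antiderivative
f = lambda x: (1 + c*x*x) ** -1.5        # original integrand
h = 1e-6
for x0 in [0.3, 1.0, 2.5]:
    deriv = (F(x0 + h) - F(x0 - h)) / (2*h)   # central-difference derivative
    assert abs(deriv - f(x0)) < 1e-6
```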
|
732,132 | <blockquote>
<p>Suppose $P(x)$ is a polynomial of degree $n \geq 1$ such that $\int_{0}^{1}x^{k}P(x)\,dx = 0$ for $k = 1, 2, \ldots, n$. Show that $$\int_{0}^{1}\{P(x)\}^{2}\,dx = (n + 1)^{2}\left(\int_{0}^{1}P(x)\,dx\right)^{2}$$</p>
</blockquote>
<p>If we assume that $P(x) = a_{0}x^{n} + \cdots + a_{n - 1}x + a_{n}$ then we can easily see that $$\int_{0}^{1}\{P(x)\}^{2}\,dx = a_{n}\int_{0}^{1}P(x)\,dx$$ and therefore to solve the given problem we need to show that $$\int_{0}^{1}P(x)\,dx = \frac{a_{n}}{(n + 1)^{2}}$$ Direct integration of the polynomial gives the expression $$\frac{a_{0}}{n + 1} + \frac{a_{1}}{n} + \cdots + \frac{a_{n - 1}}{2} + a_{n}$$ and simplifying this to $a_{n}/(n + 1)^{2}$ does not seem possible. I think there is some nice "integration by parts" trick which will give away the solution, but I am not able to think of it.</p>
| orangeskid | 168,051 | <p>Note that <span class="math-container">$P(x)$</span> is a linear combination of <span class="math-container">$1$</span> and <span class="math-container">$x, x^2$</span>, <span class="math-container">$\ldots$</span>, <span class="math-container">$x^n$</span> and is perpendicular to the subspace generated by <span class="math-container">$x, x^2$</span>, <span class="math-container">$\ldots$</span>, <span class="math-container">$x^n$</span>. Therefore <span class="math-container">$P$</span> gives the direction of the vertical component of <span class="math-container">$1$</span>.</p>
<p>Now, we must show that
<span class="math-container">$$\frac{\langle 1, P\rangle ^2 }{\|P\|^2} = \frac{1}{(n+1)^2}$$</span>
that is, <span class="math-container">$d(1, \langle x,x^2, \ldots, x^n\rangle)^2=\frac{1}{(n+1)^2}$</span>,
in words, the distance from <span class="math-container">$1$</span> to the subspace generated by <span class="math-container">$x,x^2, \ldots,x^n$</span> equals <span class="math-container">$\frac{1}{n+1}$</span>. Now we have
<span class="math-container">$$d^2 = \frac{G(1,x,\ldots, x^n)}{G(x,x^2, \ldots, x^n)}$$</span>
where the <span class="math-container">$G$</span>'s are <a href="https://en.wikipedia.org/wiki/Gramian_matrix#Gram_determinant" rel="nofollow noreferrer">Gram determinants</a> that can be calculated with the formula for <a href="https://en.wikipedia.org/wiki/Cauchy_matrix#Cauchy_determinants" rel="nofollow noreferrer">Cauchy determinants</a>.</p>
<p><span class="math-container">$\bf{Added:}$</span> One sees easily that the Gram matrix for the system of functions <span class="math-container">$(x^{\alpha_i})$</span> is the symmetric Cauchy matrix <span class="math-container">$(\frac{1}{\alpha_i + \alpha_j+1})$</span>, with determinant <span class="math-container">$V(\alpha_i)\cdot\prod_{i,j} \frac{1}{\alpha_i + \alpha_j+1}$</span>, where <span class="math-container">$V$</span> is the Vandermonde determinant. We get for the distance squared <span class="math-container">$d^2$</span> from <span class="math-container">$x^{\alpha}$</span> to the span of <span class="math-container">$(x^{\alpha_i})_i$</span></p>
<p><span class="math-container">$$d^2=\frac{G(x^{\alpha},x^{\alpha_i})}{G(x^{\alpha_i})}=\frac{1}{2\alpha+1}\prod_i \frac{(\alpha_i-\alpha)^2}{(\alpha_i+\alpha+1)^2}$$</span>
For <span class="math-container">$\alpha=0$</span> and <span class="math-container">$\alpha_i=i$</span>, with <span class="math-container">$1\le i \le n$</span>, we get the desired value
<span class="math-container">$\frac{(n!)^2}{(n+1)!^2} = \frac{1}{(n+1)^2}$</span>.</p>
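The identity is cheap to test numerically for a small $n$ (my own sketch, assuming NumPy): compute the one-dimensional null space of the moment constraints to get $P$, then compare both sides.

```python
import numpy as np

n = 3
# moment constraints: M[k-1, j] = int_0^1 x^k * x^j dx = 1/(k+j+1), k = 1..n
M = np.array([[1.0 / (k + j + 1) for j in range(n + 1)]
              for k in range(1, n + 1)])
_, _, Vt = np.linalg.svd(M)
c = Vt[-1]                       # coefficients of P(x) = sum_j c[j] x^j
int_P = sum(c[j] / (j + 1) for j in range(n + 1))            # int_0^1 P
int_P2 = sum(c[i] * c[j] / (i + j + 1)                       # int_0^1 P^2
             for i in range(n + 1) for j in range(n + 1))
assert abs(int_P2 - (n + 1) ** 2 * int_P ** 2) < 1e-9
```

Both sides scale quadratically in $P$, so the arbitrary normalization of the null vector does not matter.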
|
119,527 | <p>I'm trying to prove the following:</p>
<blockquote>
<p>Let $(a_n)_{n\in\mathbb{N}}$ and $(b_n)_{n\in\mathbb{N}}$ be two sequences such that $(a_n)_{n\in\mathbb{N}}$ converges and $(b_n)_{n\in\mathbb{N}}$ is bounded. If $a=\lim_{n\to\infty} a_n$, prove that $$\limsup_{n\to\infty} (a_n+b_n) = a +\limsup_{n\to\infty} b_n$$</p>
</blockquote>
<p>It's easy to show that $\limsup_{n\to\infty} (a_n+b_n) \leq a +\limsup_{n\to\infty} b_n$ since the limit superior is subadditive, but I'm at a loss on how to prove the other inequality.</p>
| Aryabhata | 1,102 | <p>Let $\displaystyle b_{n_k}$ be a subsequence of $\displaystyle b_n$ which converges to $\displaystyle \limsup b_n$.</p>
<p>Show that $\displaystyle a_{n_k} + b_{n_k}$ converges. </p>
<p>How does this limit relate to $\displaystyle \limsup (a_n + b_n)$?</p>
|
1,227,330 | <p>I have the following hypotheses: $\alpha_n \to 0, \beta_n \to 0, \alpha_n < 0 < \beta_n$. I need to show that the following two limits exist <strong>so that I may add them</strong>:</p>
<p>$$\lim \limits_{n \to \infty} \frac{-\alpha_n}{\beta_n -\alpha_n}, \lim \limits_{n \to \infty} \frac{\beta_n}{\beta_n - \alpha_n}$$</p>
<p>Well, I know that each limit is between $0$ and $1$. But I'm not sure how to show that these limits actually converge and don't fluctuate? (Or, it's also a possibility that I don't need the fact that the limits exist to add them, but I'm not sure if that's true)</p>
<p>This question is motivated by Exercise 5.19 in Baby Rudin.</p>
<hr>
<p>Edit: So either I'm doing something wrong by wanting to add the limits, or <strong>we don't need existence of the limits</strong> because I found a counter example: take $\beta_n = \frac{1}{n} \cos^2 n, \alpha_n = - \frac{1}{n} \sin^2 n$. Then $ \beta_n < 0 < \alpha_n$ (strict because $\pi$ is irrational), and the quotients do not converge (they are equal to $\cos^2 n$ and $\sin^2 n$).</p>
<p>Thus, here is my whole solution so someone can point out where I'm going wrong. I need to prove that for $D_n = \frac{f(\beta_n) - f(\alpha_n)}{\beta_n - \alpha_n}$, $f$ defined in $(-1,1)$, $f'$ exists at $0$, and $-1 < \alpha_n < 0 < \beta_n < 1$, $\alpha_n, \beta_n \to 0$, we have $\lim D_n = f'(0)$.</p>
<p>\begin{align}
\lim D_n &= \lim \frac{f(\beta_n) - f(0) + f(0) - f(\alpha_n)}{\beta_n - \alpha_n}\\
&= \lim \frac{f(\beta_n) - f(0)}{\beta_n - \alpha_n} + \lim \frac{f(0) - f(\alpha_n)}{\beta_n - \alpha_n}\\
&= \lim \frac{f(\beta_n) - f(0)}{\beta_n - 0} \lim \frac{\beta_n}{\beta_n - \alpha_n} + \lim \frac{f(0) - f(\alpha_n)}{0 - \alpha_n} \lim \frac{-\alpha_n}{\beta_n - \alpha_n}\\
&= \lim f'(0) \lim \frac{\beta_n - \alpha_n}{\beta_n - \alpha_n} = f'(0)
\end{align}</p>
| robjohn | 13,854 | <p>In <a href="https://math.stackexchange.com/a/75151">this answer</a>, it is shown that for $0\le x\le\frac\pi2$,
$$
\cos(x)\le\frac{\sin(x)}x\le1\tag{1}
$$
So, for $0\lt x\le\frac\pi2$, and symmetrically for $-\frac\pi2\le x\lt0$, subtract $1$ from $(1)$ and divide by $x$:
$$
0=\lim_{x\to0}\frac{\cos(x)-1}{x}\le\lim_{x\to0}\frac{\frac{\sin(x)}x-1}{x-0}\le0\tag{2}
$$
We have the leftmost equality because
$$
\begin{align}
\lim_{x\to0}\frac{1-\cos(x)}x
&=\lim_{x\to0}\frac1x\frac{\sin^2(x)}{1+\cos(x)}\\
&=\lim_{x\to0}\frac{\sin(x)}x\lim_{x\to0}\frac{\sin(x)}{1+\cos(x)}\\[6pt]
&=1\cdot0\tag{3}
\end{align}
$$
Therefore, by the Squeeze Theorem, $(2)$ says
$$
f'(0)=0\tag{4}
$$</p>
|
467,318 | <blockquote>
<p><span class="math-container">$ABC$</span> is a triangle with integral side lengths. Given that <span class="math-container">$\angle A=3\angle B$</span>, find the minimum possible perimeter of <span class="math-container">$ABC$</span>.</p>
</blockquote>
<p>I got this problem from an old book (which did not provide even a hint). I can think of some approaches, but all of them result in complicated Diophantine equations that would not be solvable without the help of a computer. Any suggestions?</p>
| wendy.krieger | 78,024 | <p>Such triangles exist. I found one 3:8:10, where the angle opposite 8 is 1/3 of that opposite 10. The trick here is to pick really small primes in the form of $x+y\sqrt{-n}$, and cube the result. Note here that we're using $n=7$, and the prime is $1\frac 12 + \frac 12\sqrt{-7}$. This is a pretty tiny cube, one gets then a matrix</p>
<pre><code> ( 3 -7 ) (3) (2) (-36)
( 1 3 ) (1) (6) (20)
A B = A^3
</code></pre>
<p>You then divide through by common factors, to get coordinates at $A =0,0$ $B=+6,0$, and $C=-9,5\sqrt{7}$. The three sides are AB=6, AC=16, and BC=20, which gives the indicated triangle. </p>
|
467,318 | <blockquote>
<p><span class="math-container">$ABC$</span> is a triangle with integral side lengths. Given that <span class="math-container">$\angle A=3\angle B$</span>, find the minimum possible perimeter of <span class="math-container">$ABC$</span>.</p>
</blockquote>
<p>I got this problem from an old book (which did not provide even a hint). I can think of some approaches, but all of them result in complicated Diophantine equations that would not be solvable without the help of a computer. Any suggestions?</p>
| cosmo5 | 818,799 | <p>Giving it a try using elementary techniques.</p>
<p>In <span class="math-container">$\triangle ABC$</span>, let <span class="math-container">$B=\theta$</span>, <span class="math-container">$A=3\theta$</span>, <span class="math-container">$C=\pi-4\theta$</span>. By sine-rule
<span class="math-container">$$\frac{a}{\sin 3\theta}=\frac{b}{\sin \theta}=\frac{c}{\sin 4\theta}$$</span></p>
<p>so that <span class="math-container">$$a=b(3-4\sin^2\theta) \quad , \quad c=b(4\cos \theta \cos 2\theta)$$</span></p>
<p>On further simplification,
<span class="math-container">$$a=b(4\cos^2\theta-1) \quad , \quad c=b \cdot 4\cos \theta (2\cos^2 \theta-1)$$</span></p>
<p><span class="math-container">$c>0$</span> places bound on <span class="math-container">$\cos \theta$</span> : <span class="math-container">$\cos \theta > 1/\sqrt{2} \Rightarrow \theta < 45^{\circ}$</span> which makes sense since <span class="math-container">$45^{\circ}+3\cdot 45^{\circ}=180^{\circ}$</span></p>
<p>We may choose any suitable value of <span class="math-container">$\cos \theta > 1/\sqrt{2} \approx 0.7071$</span>.</p>
<p>Taking <span class="math-container">$\cos \theta = 3/4=0.75$</span>, yields <span class="math-container">$a/b=5/4$</span>, <span class="math-container">$c/b=3/8$</span>. So one class of triangles would be multiples of
<span class="math-container">$$(a,b,c)=(10,8,3)$$</span></p>
<p>Taking <span class="math-container">$\cos \theta = 5/6$</span>, yields <span class="math-container">$a/b=16/9$</span>, <span class="math-container">$c/b=35/27$</span>. So another class of triangles would be multiples of
<span class="math-container">$$(a,b,c)=(48,27,35)$$</span></p>
<p>and so on.</p>
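Both families can be double-checked against the law of cosines (a small script of mine; function name is my own):

```python
import math

def opposite_angles(a, b, c):
    # law of cosines: angles opposite sides a and b, in radians
    A = math.acos((b*b + c*c - a*a) / (2*b*c))
    B = math.acos((a*a + c*c - b*b) / (2*a*c))
    return A, B

for a, b, c in [(10, 8, 3), (48, 27, 35)]:
    A, B = opposite_angles(a, b, c)
    assert abs(A - 3*B) < 1e-9   # angle A is three times angle B
```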
|
1,296,669 | <p>A tapas bar serves 15 dishes, of which 7 are vegetarian, 4 are fish and 4 are meat. A table of customers decides to order 8 dishes, possibly including repetitions.</p>
<p>a) Calculate the number of possible dish combinations. </p>
<p>b) The customers decide to order 3 vegetarian dishes, 3 fish and 2 meat. Calculate the number of possible orders.</p>
<hr>
<p><em>Progress</em>. For a) I think that the answer would be $15^8$ as this would be the number of different ordered sequences of 8 elements from the 15 possible dishes.</p>
| eloiprime | 180,579 | <p><strong>a)</strong> First, we note that repetition is allowed, and the order in which we order the dishes is unimportant. Therefore, we use the formula ${k+n-1 \choose k}$. In this case, $n=15$ and $k=8$. So, the number of possible combinations of dishes is
$${8+15-1 \choose 8} = {22 \choose 8} = 319770.$$</p>
<p><strong>b)</strong> We will need to apply the above formula three times. For the vegetarian dishes, we have $n_V = 7$ and $k_V=3$. For the fish dishes, we have $n_F = 4$ and $k_F=3$. For the meat dishes, we have $n_M = 4$ and $k_M=2$. So, the number of possible combinations of dishes under these conditions is
$${k_V+n_V-1 \choose k_V}{k_F+n_F-1 \choose k_F}{k_M+n_M-1 \choose k_M} = {9 \choose 3}{6 \choose 3}{5 \choose 2}=16800.$$</p>
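Both stars-and-bars counts can be confirmed directly with `math.comb` (Python 3.8+):

```python
from math import comb

# part (a): 8 dishes from 15 with repetition -> C(8+15-1, 8)
assert comb(22, 8) == 319770
# part (b): product of three stars-and-bars counts
assert comb(9, 3) * comb(6, 3) * comb(5, 2) == 16800
```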
|
3,251,928 | <p><span class="math-container">$$\frac{1}{a(a-b)(a-c)} + \frac{1}{b(b-a)(b-c)} + \frac{1}{c(c-a)(c-b)} $$</span></p>
<p>I tried to get everything to the same denominator, and then simplify numerators first but it is very complicated and long if I just use brute force, to multiply all the expressions given from the previous unification of denominator.</p>
| Thomas Andrews | 7,933 | <p>Via partial fractions, you have:
<span class="math-container">$$\begin{align}\frac{1}{a(a-b)(a-c) }&= \frac{1}{bc}\cdot\frac{1}{a}+\frac{1}{b(b-c)}\cdot\frac{1}{(a-b)} + \frac{1}{c(c-b)}\cdot\frac{1}{(a-c)}\\
&=\frac{1}{abc}-\frac{1}{b(b-a)(b-c)}-\frac{1}{c(c-a)(c-b)}
\end{align}$$</span></p>
<p>[Idea stolen from a deleted answer from user @auscrypt that started with partial fractions but got the coefficients wrong.]</p>
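Rearranged, the identity says the original three-term sum equals $\frac{1}{abc}$. An exact-arithmetic spot check (my own, for arbitrary distinct nonzero values):

```python
from fractions import Fraction

def term(x, y, z):
    # one summand 1/(x(x-y)(x-z)), computed exactly
    return Fraction(1, x * (x - y) * (x - z))

a, b, c = 2, 3, 5
total = term(a, b, c) + term(b, a, c) + term(c, a, b)
assert total == Fraction(1, a * b * c)   # the three terms sum to 1/(abc)
```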
|
2,385,949 | <p>While playing around with random binomial coefficients , I observed that the following <em>identity</em> seems to hold for all positive integers $n$:</p>
<p>$$ \sum_{k=0}^{2n} (-1)^k \binom{4n}{2k}\cdot\binom{2n}{k}^{-1}=\frac{1}{1-2n}.$$</p>
<p>However, I am unable to furnish a proof for it (though this result is just a conjecture).<br> Any ideas/suggestions/solutions are welcome.</p>
| epi163sqrt | 132,007 | <p>When going through Jack's nice answer I did some intermediate steps to better see what's going on. Here is a somewhat more elaborated version, which might also be convenient for other readers.</p>
<blockquote>
<p>We obtain
\begin{align*}
\color{blue}{\sum_{k=0}^{2n}}&\color{blue}{(-1)^k\binom{4n}{2k}\binom{2n}{k}^{-1}}\\
&=\sum_{k=0}^{2n}(-1)^k\binom{4n}{2k}(2n+1)\int_0^1(1-x)^kx^{2n-k}\,dx\tag{1}\\
&=(2n+1)\int_0^1x^{2n}\sum_{k=0}^{2n}\binom{4n}{2k}\left(-\frac{1-x}{x}\right)^k\,dx\tag{2}\\
&=(2n+1)\int_0^1x^{2n}\cdot\frac{1}{2}
\left(\left(1+i\sqrt{\frac{1-x}{x}}\right)^{4n}+\left(1-i\sqrt{\frac{1-x}{x}}\right)^{4n}\right)\,dx\tag{3}\\
&=\frac{2n+1}{2}\int_0^1\left(\sqrt{x}+i\sqrt{1-x}\right)^{4n}+\left(\sqrt{x}-i\sqrt{1-x}\right)^{4n}\,dx\tag{4}\\
&=(2n+1)\int_{0}^{\frac{\pi}{2}}
\left[(\cos \theta+i\sin \theta)^{4n}+(\cos\theta-i\sin \theta)^{4n}\right]\cos \theta\sin \theta\,d\theta\tag{5}\\
&=(2n+1)\int_{0}^{\frac{\pi}{2}}
\left[e^{4ni\theta}+e^{-4ni\theta}\right]\cos \theta\sin \theta\,d\theta\tag{6}\\
&=(2n+1)\int_{0}^{\frac{\pi}{2}}\cos(4n\theta)\sin(2\theta)\,d\theta\tag{7}\\
&=\frac{2n+1}{2}\int_{0}^{\frac{\pi}{2}}\left[\sin( (4n+2)\theta)-\sin ((4n-2)\theta))\right]\,d\theta\tag{8}\\
&=\frac{2n+1}{2}\left[-\frac{1}{4n+2}\cos((4n+2)\theta)
+\frac{1}{4n-2}\cos((4n-2)\theta)\right]_0^{\frac{\pi}{2}}\\
&=\frac{2n+1}{2}\left(\frac{1}{2n+1}-\frac{1}{2n-1}\right)\\
&\color{blue}{=\frac{1}{1-2n}}
\end{align*}</p>
</blockquote>
<p><em>Comment:</em></p>
<ul>
<li><p>In (1) we write the reciprocal of a binomial coefficient using the <em><a href="https://en.wikipedia.org/wiki/Beta_function" rel="nofollow noreferrer">beta function</a></em>
\begin{align*}
\binom{n}{k}^{-1}=(n+1)\int_0^1z^k(1-z)^{n-k}\,dz
\end{align*}</p></li>
<li><p>In (2) we do some rearrangements in order to apply the binomial theorem.</p></li>
<li><p>In (3) we consider the even function derived from $(1+z)^{2n}$
\begin{align*}
\sum_{k=0}^{n}\binom{2n}{2k}z^{2k}=\frac{1}{2}\left((1+z)^{2n}+(1-z)^{2n}\right)
\end{align*}
Replacing $n$ with $2n$ and $z$ with $i\sqrt{\frac{1-x}{x}}$ and the application of de Moivre's theorem in (6) becomes plausible.</p></li>
<li><p>In (4) we do some simplifications.</p></li>
<li><p>In (5) we substitute $x=\cos ^2\theta, dx=-2\cos\theta\sin\theta\,d\theta$.</p></li>
<li><p>In (6) we apply De Moivre's theorem and in (7) and (8) trigonometric sum formulas.</p></li>
</ul>
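Before hunting for a proof, the conjectured identity can be verified exactly for small $n$ with rational arithmetic (a quick sketch of mine):

```python
from fractions import Fraction
from math import comb

# exact check of sum_{k=0}^{2n} (-1)^k C(4n,2k)/C(2n,k) = 1/(1-2n)
for n in range(1, 8):
    s = sum(Fraction((-1)**k * comb(4*n, 2*k), comb(2*n, k))
            for k in range(2*n + 1))
    assert s == Fraction(1, 1 - 2*n)
```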
|
2,925,450 | <p>I am trying to solve the following problem: <span class="math-container">$|4-x| \leq |x|-2$</span>. I am trying to do it algebraically, but I'm getting a solution to the problem that makes no sense. I fail to see the error in my reasoning though. I hope to get an explanation where I went wrong.</p>
<p><span class="math-container">$|4-x| \leq |x|-2$</span></p>
<p><span class="math-container">$|4-x|-|x| \leq -2$</span></p>
<p>If <span class="math-container">$4-x$</span> and <span class="math-container">$x$</span> are both negative, then for them to be equal to <span class="math-container">$2$</span>, we need to multiply both expressions by <span class="math-container">$-1$</span>.</p>
<p><span class="math-container">$-4+x+x \leq -2$</span></p>
<p><span class="math-container">$-4+2x \leq -2$</span></p>
<p><span class="math-container">$2x \leq 2$</span></p>
<p><span class="math-container">$x \leq 1$</span></p>
<p>But if you sub in any <span class="math-container">$x$</span> less than or equal to <span class="math-container">$1$</span>, the inequality doesn't work! Can you please explain where in my logic, where in the steps, have I gone wrong? Thank you!</p>
| John Wayland Bales | 246,513 | <p>Subtract all terms to the left side of the inequality to obtain</p>
<p><span class="math-container">$$ |4-x|-|x|+2\le0 $$</span></p>
<p>Then define <span class="math-container">$f(x)=|4-x|-|x|+2$</span>. We want to find all values of <span class="math-container">$x$</span> for which <span class="math-container">$f(x)\le0$</span>.</p>
<p>Important changes occur in the function at <span class="math-container">$x=0$</span> and at <span class="math-container">$x=4$</span></p>
<p><span class="math-container">$$|x|=\begin{cases} x&\text{ for }x\ge0\\-x&\text{ for }x<0\end{cases}$$</span>
<span class="math-container">$$|4-x|=\begin{cases} 4-x&\text{ for }x\le4\\x-4&\text{ for }x>4\end{cases}$$</span></p>
<p>This allows us to re-write <span class="math-container">$f(x)$</span> as a piecewise defined function:
<span class="math-container">$$f(x)=\begin{cases} 6&\text{ for }x<0\\6-2x&\text{ for }0\le x<4\\-2&\text{ for }x\ge4\end{cases}$$</span></p>
<p>In the middle interval we see that <span class="math-container">$f(x)$</span> is decreasing and reaches a value of <span class="math-container">$0$</span> at <span class="math-container">$x=3$</span> and we get <span class="math-container">$f(x)\le0$</span> for <span class="math-container">$x\ge3$</span>. So the solution set is <span class="math-container">$[3,\infty)$</span>.</p>
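A quick numeric scan agrees with the piecewise analysis (illustration only):

```python
f = lambda x: abs(4 - x) - abs(x) + 2

# sample a grid: f(x) <= 0 should hold exactly on [3, oo) within the range
xs = [k / 10 for k in range(-100, 101)]
sol = [x for x in xs if f(x) <= 0]
assert min(sol) == 3.0
assert all(f(x) > 0 for x in xs if x < 3)
```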
|
1,808,183 | <p>Consider the Cauchy problem:
$$\left\{\begin{array}{lll}
x^2\partial_x u+y^2\partial_yu=u^2\\
u(x,2x)=1
\end{array}\right.$$
It is easy to show that the characteristic equations are given by:
$$\frac{dx}{x^2}=\frac{dy}{y^2}=\frac{dz}{px^2+qy^2}=\frac{dp}{2pu-2xp}=\frac{dq}{2qu-2yq}=dt$$
By the Lagrange–Charpit method, we need a first integral $\Psi$ different from $\Phi=x^2p+y^2q-u^2$.</p>
<p>How can I find $\Psi$? Maybe we should directly find the characteristic curves of the previous system?</p>
<p>Many thanks!</p>
| Canardini | 341,007 | <p>In equation (3), let $E_1$ denote the conditional expectation $E(\,\cdot\mid X(1)=0)$. We use iterated expectations, that is, $E_{1}(X(s)X(t))=E_{1}(E(X(s)X(t)\mid X(t)))$.</p>
<p>In equation (4), since we conditioned on $X(t)$, its value is known; hence $X(t)$ is no longer random under that conditioning, and we can pull it out of the expectation as if it were deterministic.</p>
<p>$E_{1}(X(s)X(t))=E_{1}(X(t)E(X(s)|X(t)))$</p>
|
2,070,353 | <p>Let there be two arbitrary functions $f$ and $g$, and let $g \circ f$ be their composition.</p>
<ol>
<li><p>Suppose $g \circ f$ is one-one. Then prove that this implies $f$ is also one-one.</p></li>
<li><p>Suppose $g \circ f$ is onto. Then prove that this implies $g$ is also onto.</p></li>
</ol>
<p>I tried to do both of the proofs by contradiction but got stuck in that approach. Please help me with these proofs. Proofs done in more than one way are welcome.</p>
| user160738 | 160,738 | <p>Suppose that $m$ is the number of terms in the sequence. Denote the first term by $a_1$; then you can see that </p>
<p>$$
a_m=5+2m
$$</p>
<p>Then, setting $a_m=2n+1$ gives $5+2m=2n+1$, so $2m=2n-4$, whence $m=n-2$.</p>
|
1,410,185 | <p>I observe that if we claim that $\sqrt[3]{-1}=-1$, we reach a contradiction.</p>
<p>Let's, indeed, suppose that $\sqrt[3]{-1}=-1$. Then, since the properties of powers are preserved, we have: $$\sqrt[3]{-1}=(-1)^{\frac{1}{3}}=(-1)^{\frac{2}{6}}=\sqrt[6]{(-1)^2}=\sqrt[6]{1}=1$$ which is a clear contradiction to what we assumed...</p>
| hunter | 108,129 | <p>You say</p>
<blockquote>
<p>since the properties of powers are preserved</p>
</blockquote>
<p>but this is not true, and you have given a proof that exactly the opposite is true:</p>
<p>There is no rule that, for $a$ negative, one has $a^{bc} = (a^b)^c$. Indeed, we can see that this is not a rule even without using cube roots:
$$
-1 = (-1)^{1/1} = (-1)^{2/2} ``=" ((-1)^2)^{1/2} = \sqrt{1} = 1.
$$
This rule is only true when $a$ is a positive real number.</p>
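<p>As a side note (an illustrative sketch, not part of the original answer): the failure of the rule $a^{bc}=(a^b)^c$ for negative $a$ is tied to branch choices for roots. For instance, the principal complex cube root of $-1$ is $\tfrac12+\tfrac{\sqrt3}{2}i$, not $-1$:</p>

```python
import math

# Python's complex power uses the principal branch, so (-1)^(1/3)
# comes out as cos(pi/3) + i*sin(pi/3) = 1/2 + (sqrt(3)/2) i, not -1.
z = (-1 + 0j) ** (1 / 3)
assert abs(z - complex(0.5, math.sqrt(3) / 2)) < 1e-12

# It is still a genuine cube root of -1, up to rounding:
assert abs(z ** 3 - (-1)) < 1e-12
```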
|
2,137,269 | <p>I just sat in on a lecture on exponential generating functions in combinatorics (I have no formal education in combinatorics myself). It was quite interesting, but I'm afraid I don't actually understand what the generating function is/does. I've tried doing some minimal research online, but everything I've seen seems to be either too complex or too general to understand well. For example, I know how to find the generating function for permutations of a finite set, $\frac{1}{1-x}$. But what role does $x $ play here, and what does the generating function tell us? I don't see how it's at all related to the species of permutations itself.</p>
| awkward | 76,172 | <p>This is really a too-long comment, not a solution, but I can't resist sharing what the statistician Frederick Mosteller had to say about his first encounter with generating functions. He was talking about ordinary power series generating functions (I think), not the exponential generating functions asked about in the OP, but the ideas are similar. </p>
<p>"A key moment in my life occurred in one of those classes during my sophomore year. We had the question: When three dice are rolled what is the chance that the sum of the faces will be 10? The students in this course were very good, but we all got the answer largely by counting on our fingers. When we came to class, I said to the teacher, "That's all very well - we got the answer - but if we had been asked about six dice and the probability of getting 18, we would still be home counting. How do you do problems like that?" He said, "I don't know, but I know a man who probably does and I'll ask him." One day I was in the library and Professor Edwin G Olds of the Mathematics Department came in. He shouted at me, "I hear you're interested in the three dice problem." He had a huge voice, and you know how libraries are. I was embarrassed. "Well, come and see me," he said, and I'll show you about it." "Sure, " I said. But I was saying to myself, "I'll never go." Then he said, "What are you doing?" I showed him. "That's nothing important," he said. "Let's go now."</p>
<p>"So we went to his office, and he showed me a generating function. It was the most marvelous thing I had ever seen in mathematics. It used mathematics that, up to that time, in my heart of hearts, I had thought was something that mathematicians just did to create homework problems for innocent students in high school and college. I don't know where I had got ideas like that about various parts of mathematics. Anyway, I was stunned when I saw how Olds used this mathematics that I hadn't believed in. He used it in such an unusually outrageous way. It was a total retranslation of the meaning of the numbers." [Albers, More Mathematical People].</p>
|
4,645,582 | <h2><strong>Problem</strong></h2>
<p>Let <span class="math-container">$f$</span> be a continuous function, <span class="math-container">$f:[0,1]\to\mathbb{R}$</span> with <span class="math-container">$\int_{0}^1 (2x-1)f(x) dx = 0$</span>.</p>
<p>Prove that there exists a point c between <span class="math-container">$(0, 1)$</span> such that <span class="math-container">$\int_{0}^c (x-c)(f(x)-f(c)) dx = 0$</span>.</p>
<p>This problem was given at a regional competition in Romania for <span class="math-container">$12$</span>th grade students.</p>
<h2><strong>Attempt</strong></h2>
<p>From Rolle's theorem (applied to an antiderivative of <span class="math-container">$(2x-1)f(x)$</span>) we get that there exists <span class="math-container">$q$</span> such that <span class="math-container">$(2q - 1)f(q) = 0$</span>; when <span class="math-container">$q \ne \tfrac12$</span> this gives <span class="math-container">$f(q)=0$</span>.</p>
<p>My other attempt was: fix <span class="math-container">$p = 0.5$</span> and, by the mean value theorem for integrals, split <span class="math-container">$\int_{0}^1 (2x-1)f(x)\, dx = 0$</span> into <span class="math-container">$-f(c_1) \cdot \int_{0}^p (1-2x)\, dx$</span> <span class="math-container">$+$</span> <span class="math-container">$f(c_2) \cdot \int_{p}^1 (2x-1)\, dx = 0$</span>, from which we get <span class="math-container">$f(c_1)=f(c_2)$</span>.</p>
| FShrike | 815,585 | <p>Meaning no disrespect to the existing answer, there is a much simpler solution.</p>
<p>Consider the continuously differentiable function <span class="math-container">$\Lambda:[0,1]\to\Bbb R$</span> given by: <span class="math-container">$$c\mapsto\int_0^c(2xc-c^2)f(x)\,\mathrm{d}x$$</span></p>
<p><span class="math-container">$\Lambda(0)=0=\Lambda(1)$</span> by assumption. By “Rolle’s” theorem, <span class="math-container">$\Lambda’(c)=0$</span> for some distinguished <span class="math-container">$c$</span> in <span class="math-container">$(0,1)$</span>. For this <span class="math-container">$c$</span>, we then know (Leibniz integral rule): <span class="math-container">$$0=(2c^2-c^2)f(c)(1)+\int_0^c\partial_c[(2xc-c^2)f(x)]\,\mathrm{d}x=2\int_0^c(x-c)(f(x)-f(c))\,\mathrm{d}x$$</span>As desired.</p>
|
2,557,608 | <p>Let $X$ be integer valued with characteristic function $\phi$. How can one show that</p>
<p>$P(X = k)= \frac{1}{2\pi }\int _{-\pi} ^{\pi} e^{-ikt} \phi(t)\, dt$</p>
<p>$P(S_n =k) =\frac{1}{2\pi }\int _{-\pi} ^{\pi} e^{-ikt} (\phi(t))^n\, dt$</p>
| Badam Baplan | 164,860 | <p>Recall the angle addition formula $$\sin(\alpha + \beta) = \sin(\alpha)\cos(\beta) + \sin(\beta)\cos(\alpha)$$ for arbitrary angles.</p>
<p>From what you are given,</p>
<p>$$\frac{2}{3}\sin(\alpha)+\frac{2}{3}\sin(\beta)=\sin(\alpha+\beta)$$</p>
<p>So let's say $\cos(\beta) = \frac{2}{3} = \cos(\alpha)$.</p>
<p>Now your question is reduced to showing that $$ \frac{1}{\sqrt{5}} = \tan\big(\frac{\cos^{-1}(\frac{2}{3})}{2}\big)$$</p>
<p>Option one, reason directly from a triangle.
Option two, you can now use the relevant half-angle formula, </p>
<p>$$\tan(\frac{\alpha}{2}) = \sqrt{\frac{1-\cos(\alpha)}{1+\cos(\alpha)}}$$</p>
<p>which shows $$\tan(\frac{\alpha}{2}) = \sqrt{\frac{1-2/3}{1+2/3}} = \sqrt{1/5}$$</p>
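<p>A quick numerical confirmation of the computation (illustrative, not part of the original answer):</p>

```python
import math

alpha = math.acos(2 / 3)   # the angle with cos(alpha) = 2/3

# half-angle formula result: tan(alpha/2) = 1/sqrt(5)
assert abs(math.tan(alpha / 2) - 1 / math.sqrt(5)) < 1e-12

# the addition-formula identity it came from, with beta = alpha:
# (2/3) sin(alpha) + (2/3) sin(beta) = sin(alpha + beta)
assert abs((2 / 3) * 2 * math.sin(alpha) - math.sin(2 * alpha)) < 1e-12
```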
|
4,610,640 | <p>I have this equation :</p>
<p><span class="math-container">$$\frac{d\alpha}{dz} = - \frac{dr}{dz} * \frac{\tan(\alpha)}r $$</span></p>
<p>I searched for some similar examples, but none of those equations was like this one.
I'm confused by it. As far as I know, RK4 is used for equations of the form <span class="math-container">$$y'(t) = F(t,y(t))$$</span></p>
<p>Thank you for helping me !</p>
<hr />
<p>Here's the context for the equation.
<a href="https://i.stack.imgur.com/UsOkt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UsOkt.png" alt="enter image description here" /></a></p>
<p>I just deleted the lambda part to make the equation easier, but I still can't figure out how to solve it!</p>
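<p>One possible approach (an illustrative sketch under assumptions): once $r(z)$, and hence $dr/dz$, is known, the equation is already in the standard form $\alpha'(z) = F(z,\alpha)$ with $F(z,\alpha) = -\frac{r'(z)}{r(z)}\tan(\alpha)$, so RK4 applies directly. In the sketch below the profile $r(z)=1+0.1z$ is a made-up placeholder; separating variables shows that $r(z)\sin\alpha(z)$ is conserved along solutions, which is used to check the result.</p>

```python
import math

def r(z):
    return 1.0 + 0.1 * z  # placeholder profile; substitute the real r(z)

def drdz(z, h=1e-6):
    return (r(z + h) - r(z - h)) / (2 * h)  # central difference

def F(z, alpha):
    # right-hand side of d(alpha)/dz = -(dr/dz) * tan(alpha) / r
    return -drdz(z) * math.tan(alpha) / r(z)

def rk4(alpha0, z0, z1, n=1000):
    h = (z1 - z0) / n
    z, a = z0, alpha0
    for _ in range(n):
        k1 = F(z, a)
        k2 = F(z + h / 2, a + h * k1 / 2)
        k3 = F(z + h / 2, a + h * k2 / 2)
        k4 = F(z + h, a + h * k3)
        a += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        z += h
    return a

a1 = rk4(0.5, 0.0, 1.0)
# the invariant r(z) * sin(alpha(z)) should be conserved along the solution
assert abs(r(1.0) * math.sin(a1) - r(0.0) * math.sin(0.5)) < 1e-8
```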
| B. Goddard | 362,009 | <p><span class="math-container">$$\frac{s+1}{s+2} =\frac{(s+2)-1}{s+2}=\frac{s+2}{s+2}-\frac{1}{s+2}.$$</span></p>
<p>You can also do long division of polynomials. The quotient is <span class="math-container">$1$</span> and the remainder is <span class="math-container">$1$</span>.</p>
|
2,675,954 | <p>Fractals, when viewed as functions, are everywhere continuous and nowhere differentiable. Can this also be used as a definition for fractals?</p>
<p>i.e. Are all fractals everywhere continuous and nowhere differentiable? And also: Are all functions that are everywhere continuous and nowhere differentiable fractals?</p>
| flawr | 109,451 | <p>Hint: Note that your matrix has the form</p>
<p>$$\pmatrix{a&b&c\cr b&c&a\cr c&a&b\cr }$$</p>
<p>Which has the determinant</p>
<p>$$3abc-c^3-b^3-a^3$$</p>
<p>which can again be factored into</p>
<p>$$-\left(a+b+c\right)\,\left(a^2+b^2+c^2-ab-bc-ac\right)$$</p>
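<p>A quick numerical check of this factorization (illustrative):</p>

```python
import random

def det3(m):
    # cofactor expansion of a 3x3 determinant
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

for _ in range(100):
    a, b, c = (random.uniform(-5, 5) for _ in range(3))
    lhs = det3([[a, b, c], [b, c, a], [c, a, b]])
    rhs = -(a + b + c) * (a * a + b * b + c * c - a * b - b * c - a * c)
    assert abs(lhs - rhs) < 1e-9
```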
|
4,631,014 | <blockquote>
<p>Finding range of function</p>
</blockquote>
<blockquote>
<p><span class="math-container">$\displaystyle f(x)=\cos(x)\sin(x)+\cos(x)\sqrt{\sin^2(x)+\sin^2(\alpha)}$</span></p>
</blockquote>
<p>I have used the algebraic inequality</p>
<p><span class="math-container">$\displaystyle -(a^2+b^2)\leq 2ab\leq (a^2+b^2)$</span></p>
<p><span class="math-container">$\displaystyle -(\cos^2(x)+\sin^2(x))\leq 2\cos(x)\sin(x)\leq \cos^2(x)+\sin^2(x)\cdots (1)$</span></p>
<p>And</p>
<p><span class="math-container">$\displaystyle -[\cos^2(x)+\sin^2(x)+\sin^2(\alpha)]\leq 2\cos(x)\sqrt{\sin^2(x)+\sin^2(\alpha)}\leq [\cos^2(x)+\sin^2(x)+\sin^2(\alpha)]\cdots (2)$</span></p>
<p>Adding <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span></p>
<p><span class="math-container">$\displaystyle -(1+1+\sin^2(\alpha))\leq 2\cos(x)[\sin(x)+\sqrt{\sin^2(x)+\sin^2(\alpha)}]\leq (1+1+\sin^2(\alpha))$</span></p>
<p><span class="math-container">$\displaystyle -\bigg(1+\frac{\sin^2(\alpha)}{2}\bigg)\leq f(x)\leq \bigg(1+\frac{\sin^2(\alpha)}{2}\bigg)$</span></p>
<p>I do not know where my attempt goes wrong.</p>
<p>Please have a look at it.</p>
<p>But actual answer is <span class="math-container">$\displaystyle -\sqrt{1+\sin^2(\alpha)}\leq f(x)\leq \sqrt{1+\sin^2(\alpha)}$</span></p>
| mathlove | 78,967 | <p>I'm going to have a look at your attempt, and then add a claim and an example which might help you.</p>
<p>From
<span class="math-container">$$-(\cos^2x+\sin^2x)\leqslant 2\cos x\sin x\leqslant \cos^2x+\sin^2x\tag1$$</span>
and
<span class="math-container">$$\small -(\cos^2 x+\sin^2 x+\sin^2\alpha)\leqslant 2\cos x\sqrt{\sin^2x+\sin^2\alpha}\leqslant \cos^2x+\sin^2x+\sin^2\alpha\tag2$$</span>
you got
<span class="math-container">$$-2-\sin^2\alpha\leqslant 2f(x)\leqslant 2+\sin^2\alpha$$</span>
and
<span class="math-container">$$-1-\frac{\sin^2\alpha}{2}\leqslant f(x)\leqslant 1+\frac{\sin^2\alpha}{2}\tag3$$</span></p>
<p>To see if <span class="math-container">$(3)$</span> is the range of <span class="math-container">$f(x)$</span>, let us see if there is <span class="math-container">$x$</span> such that, <em>for any given <span class="math-container">$\alpha$</span></em>, <span class="math-container">$f(x)=1+\dfrac{\sin^2\alpha}{2}$</span>, i.e.
<span class="math-container">$$2f(x)=2+\sin^2\alpha\tag4$$</span>
holds.</p>
<p>From <span class="math-container">$(1)(2)$</span>, <span class="math-container">$x$</span> satisfying <span class="math-container">$(4)$</span> has to satisfy both
<span class="math-container">$$2\cos x\sin x= \cos^2x+\sin^2x\tag5$$</span>
and
<span class="math-container">$$2\cos x\sqrt{\sin^2x+\sin^2\alpha}=\cos^2x+\sin^2x+\sin^2\alpha\tag6$$</span></p>
<p><span class="math-container">$(5)$</span> is equivalent to
<span class="math-container">$$(\cos x-\sin x)^2=0\iff \cos x=\sin x\iff x=\frac{\pi}{4}+m\pi\ (m\in\mathbb Z)$$</span></p>
<p><span class="math-container">$(6)$</span> is equivalent to
<span class="math-container">$$\cos x\geqslant 0\quad\text{and}\quad \cos^2x=\sin^2x+\sin^2\alpha\iff \cos x\geqslant 0\quad\text{and}\quad \cos(2x)=\sin^2\alpha$$</span>
So, we see that
<span class="math-container">$$(5)\ \ \text{and}\ \ (6)\iff x=\dfrac{\pi}{4}+2m\pi\quad\text{and}\quad \alpha=n\pi\quad (m,n\in\mathbb Z)$$</span></p>
<p>This means that there is no <span class="math-container">$x$</span> such that, <em>for any given <span class="math-container">$\alpha$</span></em>, <span class="math-container">$f(x)=1+\dfrac{\sin^2\alpha}{2}$</span> holds.</p>
<p>So, <span class="math-container">$(3)$</span> is not the range of <span class="math-container">$f(x)$</span>.</p>
<hr />
<p>In the following, I'm going to add a claim and an example which might help you.</p>
<p>In the following, I consider only continuous functions.</p>
<p><strong>Claim</strong> : If</p>
<ul>
<li><p><span class="math-container">$f(x)\leqslant F(x)\leqslant g(x)\quad$</span> (<span class="math-container">$f(x),g(x)$</span> are not necessarily constant functions)</p>
</li>
<li><p><span class="math-container">$h(x)\leqslant G(x)\leqslant i(x)\quad$</span> (<span class="math-container">$h(x),i(x)$</span> are not necessarily constant functions)</p>
</li>
<li><p>both <span class="math-container">$f(x)+h(x)$</span> and <span class="math-container">$g(x)+i(x)$</span> are constant functions</p>
</li>
<li><p>there is <span class="math-container">$\alpha$</span> such that <span class="math-container">$F(\alpha)=f(\alpha)$</span> and <span class="math-container">$G(\alpha)=h(\alpha)$</span></p>
</li>
<li><p>there is <span class="math-container">$\beta$</span> such that <span class="math-container">$F(\beta)=g(\beta)$</span> and <span class="math-container">$G(\beta)=i(\beta)$</span></p>
</li>
</ul>
<p>then, <span class="math-container">$\operatorname{range}(F(x)+G(x))=\left[f(x)+h(x),g(x)+i(x)\right]$</span>.</p>
<p><em>Proof</em> :</p>
<p>For <span class="math-container">$c$</span> such that <span class="math-container">$f(x)+h(x)\leqslant c\leqslant g(x)+i(x)$</span>, letting <span class="math-container">$H(x)=F(x)+G(x)-c$</span>, since we have
<span class="math-container">$$H(\alpha)=F(\alpha)+G(\alpha)-c=f(\alpha)+h(\alpha)-c\leqslant 0$$</span>
<span class="math-container">$$H(\beta)=F(\beta)+G(\beta)-c=g(\beta)+i(\beta)-c\geqslant 0$$</span>
we can say that, by the intermediate value theorem, there is <span class="math-container">$\gamma$</span> such that
<span class="math-container">$$H(\gamma)=0\qquad \text{and}\qquad \min(\alpha,\beta)\leqslant \gamma\leqslant \max(\alpha,\beta).\quad\blacksquare$$</span></p>
<hr />
<p><strong>Example for which the claim above works</strong>:</p>
<p>Find the range of <span class="math-container">$\cos x+\dfrac{x^2+8\pi x-2\pi^2}{3x^2+6\pi^2}$</span>.</p>
<p>Let <span class="math-container">$f(x)=\cos x$</span> and <span class="math-container">$g(x)=\dfrac{x^2+8\pi x-2\pi^2}{3x^2+6\pi^2}$</span>. Then, <span class="math-container">$g'(x)=\dfrac{-8 \pi (x-2\pi)(x+\pi)}{3 (x^2 + 2 \pi^2)^2}$</span> and <span class="math-container">$\displaystyle\lim_{x\to\pm\infty}g(x)=\frac 13$</span>.</p>
<p>Since <span class="math-container">$-1\leqslant f(x)\leqslant 1$</span> and <span class="math-container">$-1=g(-\pi)\leqslant g(x)\leqslant g(2\pi)=1$</span>, adding these gives
<span class="math-container">$$-2\leqslant f(x)+g(x)\leqslant 2\tag7$$</span>
Since <span class="math-container">$f(-\pi)=g(-\pi)=-1$</span> and <span class="math-container">$f(2\pi)=g(2\pi)=1$</span>, we can say, by the claim above, that <span class="math-container">$(7)$</span> is the range.</p>
|
406,252 | <p>How should one count the number of additions in, e.g., $1+2+3+4+5+6+7+8+9$?</p>
<p>Are there $9$ additions, because nine numbers are added together?
Or can you also say that there are $8$ additions, because there are only $8$ '$+$' signs?</p>
| response | 76,635 | <p>Addition is an operation and hence we are performing 8 additions as we repeat the operation of 'adding' 8 times. In contrast, I would say that we are adding 9 numbers.</p>
|
961,889 | <p>In exercise I.5.2 in Harshorne there is the following definition of intersection multiplicity for two curves in $\mathbb{A}^2$:
\begin{equation}
\mathrm{length}_{\mathcal{O}_P}\mathcal{O}_P / (f,g)
\end{equation}
where $P$ is the point in the intersection we are interested in, $\mathcal{O}_P$ is its local ring in $\mathbb{A}^2$ and $f$ and $g$ are the polynomials giving the two curves.</p>
<p>On the other hand I found in various references, such as Fulton's Algebraic Curves, that the length is taken over $k$, the residue field of $\mathcal{O}_P$.</p>
<p>Since $k$ embeds in $\mathcal{O}_P$ naturally, I can view any sequence of $\mathcal{O}_P$-modules as a sequence of $k$-modules, and then I get that the length over $k$ is greater than or equal to the length over $\mathcal{O}_P$.</p>
<p>At this point I guess the two definitions should be equivalent, but I am stuck in trying.</p>
<p>Any hint or answer about this is welcome!</p>
| Slade | 33,433 | <p>We have $\operatorname{length}_{\mathcal{O}_P} \mathcal{O}_P/(f,g) = \operatorname{length}_{\mathcal{O}_P/(f,g)} \mathcal{O}_P/(f,g)$. So it is enough to show that the latter equals $\operatorname{length}_k \mathcal{O}_P/(f,g)$.</p>
<p>Let $A = \mathcal{O}_P / (f,g)$, $\mathfrak{m}$ its maximal ideal, and assume that the two curves have no components in common, so that $(A,\mathfrak{m})$ is an artinian local $k$-algebra.</p>
<p>For some $n$, we have a chain of $A$-modules $(0) = \mathfrak{m}^n \subset \mathfrak{m}^{n-1} \subset \cdots \subset \mathfrak{m} \subset A$. Furthermore, $\mathfrak{m}^j/\mathfrak{m}^{j+1}$ is finite-dimensional as a $k$-vector space for each $j\geq 0$, so we have $\dim_k A = \sum_{j=0}^{n-1} \dim_k \mathfrak{m}^j/\mathfrak{m}^{j+1}$.</p>
<p>If $k$ is algebraically closed (or, as Georges has pointed out, if $P$ is a rational point), then $A/\mathfrak{m} \cong k$. Each $\mathfrak{m}^j/\mathfrak{m}^{j+1}$ is a finite-dimensional vector space over $k$, so $\dim_k A = \sum_{j=0}^{n-1} \operatorname{length}_k \mathfrak{m}^j/\mathfrak{m}^{j+1} = \sum_{j=0}^{n-1}\operatorname{length}_{A/\mathfrak{m}} \mathfrak{m}^j/\mathfrak{m}^{j+1} = \sum_{j=0}^{n-1}\operatorname{length}_A \mathfrak{m}^j/\mathfrak{m}^{j+1} \leq \operatorname{length}_A A$. </p>
|
961,889 | <p>In exercise I.5.2 in Harshorne there is the following definition of intersection multiplicity for two curves in $\mathbb{A}^2$:
\begin{equation}
\mathrm{length}_{\mathcal{O}_P}\mathcal{O}_P / (f,g)
\end{equation}
where $P$ is the point in the intersection we are interested in, $\mathcal{O}_P$ is its local ring in $\mathbb{A}^2$ and $f$ and $g$ are the polynomials giving the two curves.</p>
<p>On the other hand I found in various references, such as Fulton's Algebraic Curves, that the length is taken over $k$, the residue field of $\mathcal{O}_P$.</p>
<p>Since $k$ embeds in $\mathcal{O}_P$ naturally, I can view any sequence of $\mathcal{O}_P$-modules as a sequence of $k$-modules, and then I get that the length over $k$ is greater than or equal to the length over $\mathcal{O}_P$.</p>
<p>At this point I guess the two definitions should be equivalent, but I am stuck in trying.</p>
<p>Any hint or answer about this is welcome!</p>
| Georges Elencwajg | 3,217 | <p>Given a finitely generated module $M$ over a noetherian ring $A$, there exists a filtration of $M$ by submodules $M=M_0\supset M_1\cdots \supset M_n=0$ such that $M_i/M_{i+1}\cong A/\mathfrak p_i$ for some prime ideals $\mathfrak p_i\subset A$ (Bourbaki, <em>Commutative Algebra</em>, Chapter IV, §1, Theorem 1, page 261) </p>
<p>Now $M$ has finite length iff all the $\mathfrak p_i$'s are maximal and this applies in our case, where $A=\mathcal O_P$ and $M=\mathcal{O}_P / (f,g)$.<br>
We then have $\mathfrak p_i=\mathfrak m_P\subset A=\mathcal O_P$ and finally $$\mathrm{length}_A M=\sum_i \mathrm{length}_A(M_i/M_{i+1})=\sum_i \mathrm{length}_A(A/\mathfrak m)=\sum_i \mathrm{length}_Ak=\sum_i 1=n=\mathrm{dim}_kM$$ </p>
<p><strong>Remarks</strong><br>
1) I have assumed that $M=\mathcal{O}_P / (f,g)$ has finite length: this follows from its finite $k$-dimensionality which itself follows from the finiteness of the intersection of two curves with no common irreducible component.<br>
2) In all of the above I have not assumed $k$ algebraically closed.</p>
|
2,179,012 | <p>Let $V_1$ be a Banach space and let $A_1 \subset V_1$ be a compact subset of $V_1$</p>
<p>Let $V_2$ be a Banach space and let $A_2 \subset V_2$ be a compact subset of $V_2$</p>
<p>Let $f: A_1 \to A_2 $ be a homeomorphism</p>
<p>Let $x_1,y_1 \in A_1$ such that $$\left\| x_1-y_1 \right\|=\sup_{v,u \in A_1} \left\| v-u \right\|$$</p>
<p>Let $x_2,y_2 \in A_2$ such that $$\left\| x_2-y_2 \right\|=\sup_{v,u \in A_2} \left\| v-u \right\|$$</p>
<p>I would like to know if it is true that
$$ f(x_1)=x_2 \text{ and } f(y_1)=y_2$$</p>
<p>thanks</p>
| Arpit Kansal | 175,006 | <p>The following are good books on complex analysis; you can have a look at them and decide which one suits your taste:</p>
<p><strong>1:</strong> Tristan Needham's Visual Complex Analysis.</p>
<p><strong>2:</strong> Complex Analysis by Stein and Shakarchi</p>
<p><strong>3:</strong> Complex Analysis by Ahlfors</p>
<p><strong>4:</strong> Theory of Complex Functions by Reinhold Remmert </p>
|
627,718 | <p>I need help evaluating the integrals in Fourier Series.</p>
<p>For example, for the function <span class="math-container">$\cos^{2}x$</span>, I can evaluate <span class="math-container">$a_0$</span>, <span class="math-container">$a_n$</span>, and <span class="math-container">$b_n$</span>, where <span class="math-container">$a_n$</span> is the coefficients of the cosine terms and <span class="math-container">$b_n$</span> the coefficients of the sine terms. In this case, because it is an even function, only cosine terms will exist, and the integral for calculating it will become:</p>
<p><span class="math-container">$$a_n=(1/2 {\pi})\int_{-\pi}^{\pi} cos^2xcosnx dx$$</span></p>
<p>How can this be solved?</p>
<p>Similarly, could someone please show the step-by-step calculation for the Fourier Series of <span class="math-container">$\cos^nx$</span>?</p>
| user44197 | 117,158 | <p>This was answered just a few minutes back. I don't know how to link to that so I copied and pasted it here. Not sure how you can give someone else the credit!</p>
<p>Following up on lab Bhattacharjee's answer (Please give him the credit).</p>
<p>If you use complex Fourier series, then you just have to use Binomial theorem. </p>
<p>If you want to avoid a lot of algebra, let $u = e^{ix}$ and $v=e^{-ix}$. Later we will use the fact that $u v = 1$ or $v=1/u$.</p>
<p>Now
$$\cos x=(u+v)/2$$</p>
<p>Hence
$$\cos^4 x =
1/16 (u+v)^4 = 1/16(u^4 + 4 u^3 v + 6 u^2 v^2 + 4 u v^3 + v^4) \\
=1/16 (u^4 + u^2 + 6 + v^2 + v^4)$$
Using Euler's identity again we have
$$
2 \cos 4x = (u^4 + v^4) \\
2 \cos2x = (u^2 + v^2)
$$
So
$$ \cos^4 x= 1/16 (2 \cos 4x +8 \cos 2x + 6)$$</p>
<p>If you have $\sin$ term then you need
$$
\sin x = (u-v)/(2i) \\
2i \sin 2x = u^2 - v^2\\
2i \sin 3x = u^3 -v^3
$$
etc.</p>
<p>So all you need is the binomial theorem and a good, quiet, distraction-free place to work.</p>
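<p>A quick numerical check of the resulting identity $\cos^4 x = \frac{1}{16}\left(2\cos 4x + 8\cos 2x + 6\right)$ (illustrative):</p>

```python
import math

for k in range(101):
    x = -5 + 0.1 * k                      # x from -5 to 5
    lhs = math.cos(x) ** 4
    rhs = (2 * math.cos(4 * x) + 8 * math.cos(2 * x) + 6) / 16
    assert abs(lhs - rhs) < 1e-12
```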
|
2,387,501 | <p>I'm struggling with the following integral:</p>
<p>$$
I = \int_{a}^{b}
{\frac{\mathrm{Erf}\left(\,{x/c}\,\right)}{\,\sqrt {\,{1 - {x^2}}\,}\,}\,\mathrm{d}x}
$$</p>
<p>Honestly, I do not know any approaches to solve it, except trying with Mathematica and searching for a possible solution in tables of integrals involving the Erf function, but all failed.</p>
<p>Can somebody give me a hint?</p>
<p>Thank you very much.</p>
<p>Best regards.</p>
| Robert Israel | 8,508 | <p>Since $a$ and $b$ are unspecified, you're looking for an antiderivative.
Integration by parts gives</p>
<p>$$ \arcsin(x) \text{erf}(x/c) - \frac{2}{c \sqrt{\pi}} \int \arcsin(x)\; e^{-x^2/c^2}\; dx $$</p>
<p>But this does not seem to have an elementary antiderivative.</p>
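<p>One can confirm the integration by parts numerically (an illustrative sketch): differentiating $\arcsin(x)\,\operatorname{erf}(x/c)$ and subtracting $\frac{2}{c\sqrt{\pi}}\arcsin(x)\,e^{-x^2/c^2}$ recovers the original integrand.</p>

```python
import math

c = 2.0                                          # arbitrary test value
G = lambda x: math.asin(x) * math.erf(x / c)     # the "parts" term

for x in (-0.7, -0.2, 0.3, 0.8):
    h = 1e-6
    dG = (G(x + h) - G(x - h)) / (2 * h)         # central difference
    rest = (2 / (c * math.sqrt(math.pi))) * math.asin(x) * math.exp(-(x / c) ** 2)
    integrand = math.erf(x / c) / math.sqrt(1 - x * x)
    assert abs((dG - rest) - integrand) < 1e-7
```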
|
3,687,521 | <p>Recently I came across this integral:
<span class="math-container">$$ \int \:\frac{1}{\sqrt[3]{\left(1+x\right)^2\left(1-x\right)^4}}dx $$</span>
I evaluated it like this: first, start with the substitution:
<span class="math-container">$$ x=\cos(2u) $$</span>
<span class="math-container">$$ dx=-2\sin(2u)du $$</span>
Our integral now becomes:
<span class="math-container">$$\int \:\frac{-2\sin \left(2u\right)du}{\sqrt[3]{\left(\cos \left(2u\right)+1\right)^2\left(\cos \:\left(2u\right)-1\right)^4}}$$</span>
<span class="math-container">$$\cos(2u)=\cos(u)^2-\sin(u)^2$$</span>
Thus:
<span class="math-container">$$\cos(2u)+1=2\cos(u)^2$$</span>
<span class="math-container">$$\cos(2u)-1=-2\sin(u)^2$$</span>
Thus our integral now becomes:
<span class="math-container">$$\int \:\frac{-\sin \left(2u\right)du}{\sqrt[3]{4\cos \left(u\right)^416\sin \left(u\right)^8}}=\frac{1}{2}\int \:\frac{-\sin \left(2u\right)du}{\sqrt[3]{\cos \left(u\right)^4\sin \left(u\right)^8}}$$</span>
we know:
<span class="math-container">$$\sin \left(u\right)=\cos \left(u\right)\tan \left(u\right)$$</span>
Thus our integral becomes:
<span class="math-container">$$\int \:\frac{-\tan \left(u\right)\cos \left(u\right)^2du}{\cos \:\left(u\right)^4\sqrt[3]{\tan \left(u\right)^8}}=\int \frac{-\tan \:\left(u\right)\sec \left(u\right)^2du}{\sqrt[3]{\tan \:\left(u\right)^8}}\:$$</span>
By letting <span class="math-container">$$v=\tan \:\left(u\right)$$</span>
<span class="math-container">$$dv=\sec \left(u\right)^2du$$</span>
Our integral now becomes:
<span class="math-container">$$\int -v\:^{1-\frac{8}{3}}dv=-\frac{v^{2-\frac{8}{3}}}{2-\frac{8}{3}}+C=\frac{3}{2\sqrt[3]{v^2}}+C$$</span>
Undoing all our substitutions:
<span class="math-container">$$\frac{3}{2\sqrt[3]{\tan \left(u\right)^2}}+C$$</span>
<span class="math-container">$$\tan \:\left(u\right)^2=\frac{1}{\cos \left(u\right)^2}-1=\frac{2}{1+\cos \left(2u\right)}-1=\frac{2}{1+x}-1$$</span>
Our integral therefore:
<span class="math-container">$$\frac{3}{2\sqrt[3]{\frac{2}{1+x}-1}}+C$$</span>
However, online integral give me anti-derivative of <span class="math-container">$$\frac{-3\sqrt[3]{\frac{2}{x-1}+1}}{2}+C$$</span> so I want to know where I went wrong</p>
| Quanto | 686,284 | <p>Your result is correct, which can be obtained alternatively by substituting <span class="math-container">$t= \frac{1+x}{1-x}$</span> to arrive at</p>
<p><span class="math-container">$$ \int \:\frac{dx}{\sqrt[3]{\left(1+x\right)^2\left(1-x\right)^4}}
=\frac12\int t^{-2/3}dt = \frac32 t^{1/3}+C=\frac32 \sqrt[3]\frac{1+x}{1-x} +C$$</span></p>
|
567,428 | <p>Let $A \ne V$ be a subspace of $V$ and $B$ a linearly independent subset of $V$. Prove that $B$ can be completed to a basis of $V$ with vectors from $V \setminus A$.</p>
<p>OK, I started with: </p>
<p>$K=\operatorname{lin}(v_1,v_2,v_3,\dotsc)$</p>
<p>and if any of $v_1,v_2,\dotsc$ belongs to $A$ we can replace this vector by a linear combination of vectors which don't belong to $A$ </p>
<p>So $v_{x_1},v_{x_2},\dotsc \in A$</p>
<p>$v_{x_1}\in\operatorname{lin}(w_{1,1},w_{1,2},\dotsc)$<br>
$v_{x_2}\in\operatorname{lin}(w_{2,1},w_{2,2},\dotsc)$<br>
etc.</p>
<p>Hence $V=\operatorname{lin}(w_{1,1},w_{1,2},\dots,w_{2,1},\dots,w_{i,j})$</p>
<p>and I don't know what to do next, but what I know is that I should use the Steinitz exchange lemma.</p>
<p>Does anybody have an idea how to finish this prove?</p>
| egreg | 62,967 | <p>I assume that in the hypotheses you have that $V$ is finite dimensional.</p>
<p><strong>Lemma.</strong> <em>The only subspace of $V$ containing $V\setminus A$ is $V$.</em></p>
<p><em>Proof.</em> Let $v\in V$. If $v\notin A$ we have nothing to prove. If $v\in A$, then $v=(v-w)+w$ where $w\in V\setminus A$ (one such vector exists by hypothesis). Since $v-w\notin A$, we are done. QED</p>
<p>If $B=\{v_1,\dots,v_m\}$ spans $V$ we have nothing to prove, because we extend it to a basis by adjoining no vectors at all from $V\setminus A$.</p>
<p>If $B$ doesn't span $V$, then the lemma shows that $V\setminus A$ is not contained in the span of $B$. Take $v_{m+1}\in V\setminus A$ which doesn't belong to the span of $B$. Then $B_1=B\cup\{v_{m+1}\}$ is linearly independent.</p>
<p>Now we can reapply the argument starting from $B_1$. Finite dimensionality of $V$ ensures this algorithm terminates.</p>
<p>This can be formalized in the following way. Suppose $k$ is the maximum number of vectors from $V\setminus A$ that can be added to $B$ to still give a linearly independent set (it's possible that $k=0$). Choose one such set of vectors $\{v_{m+1},\dots,v_{m+k}\}$ and consider $B'=\{v_1,\dots,v_{m+k}\}$. If $B'$ doesn't span $V$, then the argument above shows that there exists $v\in V\setminus A$ such that $B'\cup\{v\}$ is linearly independent, contrary to the maximality of $k$.</p>
<hr>
<p>This holds also for infinite dimensional spaces (but we need Zorn's lemma).</p>
<p>Consider the set $\mathcal{F}$ of subsets $C$ of $V\setminus A$ such that $B\cup C$ is linearly independent and order it by inclusion. Since $\emptyset\in\mathcal{F}$, $\mathcal{F}$ is not empty.</p>
<p>Let $\mathcal{G}$ be a totally ordered subset of $\mathcal{F}$ and let $D=\bigcup\mathcal{G}$. Let's prove that $B\cup D$ is linearly independent.</p>
<p>If it isn't, then there are $w_1,w_2,\dots,w_k\in D$ such that $B\cup\{w_1,\dots,w_k\}$ is linearly dependent. Since $\mathcal{G}$ is totally ordered by inclusion, there is $C\in\mathcal{G}$ such that $w_1,\dots,w_k\in C$. But then $B\cup C$ is linearly dependent: contradiction.</p>
<p>Thus $D$ is an upper bound of $\mathcal{G}$ and we can apply Zorn's lemma to find $E$ maximal in $\mathcal{F}$. If $B\cup E$ doesn't span $V$, we find $v\in V\setminus A$ such that $B\cup E\cup\{v\}$ is linearly independent. In particular $v\notin E$ so $E\cup\{v\}\in\mathcal{F}$ contrary to the maximality of $E$.</p>
|
23,980 | <p>I am producing huge data files with an external program. I would then like to import the data into Mathematica for analysis. The files have 2 columns and up to many millions of rows.</p>
<p>So for small data files I have just been using:</p>
<pre><code>dataTable = Import["data.txt", "Table"];
</code></pre>
<p>However once the files gets so large (into the millions), Mathematica and my computer slow down considerably. Now, I don't really care about most of the data in these files and in Mathematica I end up only using several thousand entries. So my question is, can I import a random sampling from the large data file (say 10000 elements only) instead of importing the entire file? So if I had a file with 10^6 rows, I would like to import just 10^4 of those rows (preferably randomly).</p>
| chuy | 237 | <p>Surely not the fastest or ideal solution, but this should work (it assumes that you have two numbers in each row):</p>
<pre><code>randomline[str_InputStream, num_] := (
SetStreamPosition[ str, 0];
Skip[str, Record, num-1];
Read[str, {Number, Number}])
str = OpenRead["data.txt"];
</code></pre>
<p>Now, if you have 2000 lines and want 100 random samples:</p>
<pre><code>Map[randomline[str, #] &, RandomSample[Range[2000], 100]]
</code></pre>
<p>This might provide a jumping-off point for a more sophisticated answer.</p>
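<p>In case it helps to prototype the same idea outside Mathematica, here is a hypothetical Python sketch of the approach (draw the random line numbers first, then read only those lines in a single pass); the layout of two whitespace-separated numbers per row is an assumption carried over from the question:</p>

```python
import random

def sample_rows(path, k, n_rows):
    """Read only k distinct, randomly chosen rows of a two-column text file.

    One sequential pass is made over the file, keeping just the rows whose
    (0-based) line numbers were drawn in advance.
    """
    wanted = set(random.sample(range(n_rows), k))
    rows = []
    with open(path) as fh:
        for i, line in enumerate(fh):
            if i in wanted:
                rows.append(tuple(float(x) for x in line.split()))
    return rows
```

<p>Unlike repeated <code>SetStreamPosition</code>/<code>Skip</code> calls, this touches the file only once, which matters when it has millions of rows.</p>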
|
745,382 | <p>I'm interested in hypercomplexes, or number systems with many square roots of $-1$. Now, I know that <a href="http://en.wikipedia.org/wiki/Quaternion" rel="nofollow">quaternions</a> are non-commutative, but associative. I'm wondering if it's possible to have a number system with more "elements" (or square roots of $-1$) that remains associative.</p>
<p>My understanding is that hypercomplex systems lose many properties in order to keep division. So I'm wondering, if we discard division, can we keep the associative property and add more square roots of $-1$?</p>
<p>In other words, <strong>Can we have a number system with $2^n$ square roots of $-1$ that is associative, for arbitrary $n$?</strong></p>
| Anixx | 2,513 | <p>Tessarines, also known as bicomplex numbers, are commutative and associative. You can build a tessarine-like system for any dimension <span class="math-container">$2^n$</span>.</p>
<p>You also can build any dimensional analogs of split-complex numbers.</p>
<p>You also can build any dimensional analogs of dual numbers.</p>
<p>You can combine them. They will remain commutative and associative.</p>
<p>You can build 3 and 6-dimensional numbers in 3 ways that also would be commutative and associative.</p>
<p>All these systems have zero divisors, that is non-zero elements by which we cannot divide. So, yes, by sacrificing universally applicable division you can build a commutative and associative algebra of higher dimensions.</p>
<p>This does not mean that you cannot divide in these systems. Generally, you can, you only have to check the denominator to be not a zero divisor.</p>
<p>For instance, here is a division formula in a 3-dimensional system isomorphic to <span class="math-container">$\mathbb{R}\times\mathbb{C}$</span>:</p>
<p><span class="math-container">$\frac{a_1+b_1 j+c_1 k}{a_2+b_2 j+c_2 k}=\frac{a_2^2 \left(a+b j_2+c j_1\right)-a_2 \left(b_2 \left(a j_2+b j_1+c\right)+c_2 \left(a j_1+b+c j_2\right)\right)+c_2^2 \left(a j_2+b j_1+c\right)-b_2 c_2 \left(a+b j_2+c j_1\right)+b_2^2 \left(a j_1+b+c j_2\right)}{a_2^3+b_2^3+c_2^3-3 a_2 b_2 c_2}$</span></p>
<p>When doing the division you have to check that <span class="math-container">$a_2^3+b_2^3+c_2^3-3 a_2 b_2 c_2$</span> is not zero, which is the criterion for the denominator <span class="math-container">$a_2+b_2 j+c_2 k$</span> not to be a zero divisor.</p>
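<p>As a concrete illustration (my own sketch, not part of the original answer), tessarines can be modeled as $z_1 + z_2 j$ with complex $z_1, z_2$ and a central $j$ satisfying $j^2=+1$; the sketch exhibits commutativity, associativity, two independent square roots of $-1$ ($i$ and $ij$), and the zero divisors discussed above:</p>

```python
class Tessarine:
    """Bicomplex number z1 + z2*j with complex z1, z2, where j**2 = +1
    and j commutes with everything. The algebra is commutative and
    associative but has zero divisors, e.g. (1 + j)(1 - j) = 0."""

    def __init__(self, z1, z2):
        self.z1, self.z2 = complex(z1), complex(z2)

    def __mul__(self, other):
        # (z1 + z2 j)(w1 + w2 j) = (z1 w1 + z2 w2) + (z1 w2 + z2 w1) j
        return Tessarine(self.z1 * other.z1 + self.z2 * other.z2,
                         self.z1 * other.z2 + self.z2 * other.z1)

    def __eq__(self, other):
        return self.z1 == other.z1 and self.z2 == other.z2

    def __repr__(self):
        return f"Tessarine({self.z1}, {self.z2})"
```

<p>Here $i$ (embedded as $z_1 = i$) and $ij$ (embedded as $z_2 = i$) both square to $-1$, products commute and associate, and $(1+j)(1-j)=0$ shows the zero divisors.</p>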
|
310,238 | <p>The <a href="https://en.wikipedia.org/wiki/Weyl_character_formula" rel="nofollow noreferrer">Weyl character formula</a> and <a href="https://en.wikipedia.org/wiki/Weyl_character_formula#Weyl_denominator_formula" rel="nofollow noreferrer">the denominator identity</a> play important roles in the representation theory of classical simple Lie algebras and Kac-Moody Lie algebras over $\mathbb{C}$</p>
<p>Can you suggest any reference for similar formulas for Lie super-algebras? I suppose there are such formulas, at least for classes of (say classical simple) Lie super-algebras.</p>
| Thorsten Heidersdorf | 129,060 | <p>Since you ask for formulas for the character, I will first assume that you are interested in finite dimensional representations.</p>
<p>If <span class="math-container">$\mathfrak{g}$</span> is a basic classical Lie superalgebra, such as <span class="math-container">$\mathfrak{sl}(m|n)$</span>, one distinguishes between typical weights and atypical weights. A weight is typical if the irreducible representation <span class="math-container">$L(\lambda)$</span> is projective, i.e. does not have any nontrivial extensions. For atypical weights one can further distinguish between the degree of atypicality <span class="math-container">$1,2, \ldots$</span>. For typical weights Kac already gave a character formula in </p>
<ul>
<li>Kac, V. G.: Characters of typical representations of classical Lie superalgebras.</li>
</ul>
<p>For atypical weights no closed formula is known in general. For weights which have atypicality 1 there is a character formula obtained by Kac and Wakimoto in </p>
<ul>
<li>Kac, Victor G.; Wakimoto, Minoru: Integrable highest weight modules over affine superalgebras and number theory.</li>
</ul>
<p>This covers in particular the case when <span class="math-container">$\mathfrak{g}$</span> is a simple exceptional Lie superalgebra or <span class="math-container">$\mathfrak{osp}(2|2n)$</span> or <span class="math-container">$\mathfrak{sl}(n|1)$</span> since in these cases dominant integral weights are either typical or have atypicality 1. This leaves basically the cases where <span class="math-container">$L(\lambda)$</span> is an irreducible representation of atypicality <span class="math-container">$\geq 2$</span> of <span class="math-container">$\mathfrak{gl}(m|n)$</span>, <span class="math-container">$\mathfrak{osp}(m|2n)$</span>, <span class="math-container">$\mathfrak{p}(n)$</span> and <span class="math-container">$\mathfrak{q}(n)$</span> for general <span class="math-container">$m,n$</span>.</p>
<p>In the <span class="math-container">$\mathfrak{gl}(m|n)$</span> case the character problem was first successfully solved by Serganova in</p>
<ul>
<li>Serganova, Vera: Kazhdan-Lusztig polynomials and character formula for the Lie superalgebra <span class="math-container">$\mathfrak{gl}(m|n)$</span></li>
</ul>
<p>However this does not give you a closed formula for the character. It is basically an algorithmic solution. Her approach is very similar to the usual one in category <span class="math-container">$\mathcal{O}$</span>: Write down an (infinite) resolution </p>
<p><span class="math-container">$$ 0 \leftarrow M^0 \leftarrow M^1 \leftarrow \ldots $$</span></p>
<p>of <span class="math-container">$L(\lambda)$</span> which has a filtration with quotients isomorphic to Kac modules (the universal highest weight modules in these representation categories). The character of <span class="math-container">$L(\lambda)$</span> is then </p>
<p><span class="math-container">$$ ch L(\lambda) = \sum_{i=0}^{\infty} (-1)^i ch M^i $$</span></p>
<p>where the characters of the <span class="math-container">$M^i$</span> can be easily calculated since the characters of Kac modules are known (by Kac himself). Then finding the character of <span class="math-container">$L(\lambda)$</span> amounts to determine the coefficients <span class="math-container">$b_{\lambda,\mu}$</span> in </p>
<p><span class="math-container">$$ ch L(\lambda) = \sum_{\mu} b_{\lambda, \mu} ch V(\mu) $$</span></p>
<p>where <span class="math-container">$V(\mu)$</span> is the Kac module. The coefficient <span class="math-container">$b_{\lambda,\mu} = K_{\lambda,\mu}(-1)$</span> is the value at <span class="math-container">$-1$</span> of a certain Kazhdan-Lusztig polynomial. A nice overview article of Serganova's work is Gruson's Bourbaki article</p>
<ul>
<li>Gruson, Caroline: Sur les representations de dimension finie de la super algebre de Lie <span class="math-container">$\mathfrak{gl}(m,n)$</span>.</li>
</ul>
<p>Later Brundan gave a different approach using categorification techniques in </p>
<ul>
<li>Kazhdan-Lusztig polynomials and character formulae for the Lie superalgebra <span class="math-container">$\mathfrak{gl}(m|n)$</span></li>
</ul>
<p>A closed formula for the character was then obtained in </p>
<ul>
<li>Su, Yucai; Zhang, R. B.: Character and dimension formulae for general linear superalgebra. </li>
</ul>
<p>They actually obtained a closed formula for the character by reworking some calculations of Brundan. The formula however is so complicated (with an immense amount of cancellations) that this is not comparable to the nice situation of the Weyl character formula for a semisimple Lie algebra. In some cases (when the weight is a so-called Kostant weight) the combinatorics collapses and one can get simple formulas. This can be understood in terms of KL theory since Kostant weights are precisely those, in which the occurring KL polynomials are monomials.</p>
<p>Algorithmic or Kazhdan-Lusztig type solutions for the character problem are also known for other Lie superalgebras such as <span class="math-container">$\mathfrak{osp}(m|2n)$</span> (due to Gruson-Serganova and later by Cheng-Lam-Wang) and <span class="math-container">$\mathfrak{q}(n)$</span> (due to Penkov-Serganova and Brundan), but no closed formulas are known for arbitrary weights. As far as I am aware, the problem of finding the character in the periplectic <span class="math-container">$\mathfrak{p}(n)$</span>-case remains open.</p>
<p>Of course the character problem has been studied for irreducible modules in category <span class="math-container">$\mathcal{O}$</span> as well. As others have mentioned, relevant names here are Brundan, Cheng, Lam, Wang and many others. For the infinite case I would recommend the overview article by Brundan</p>
<ul>
<li>Brundan, John: Representations of the general linear Lie superalgebra in the BGG category <span class="math-container">$\mathcal O$</span>.</li>
</ul>
|
1,545,258 | <p>In my abstract algebra book one of the first facts stated is the Well Ordering Principle:</p>
<p>(*) Every non-empty set of positive integers has a smallest member.</p>
<p>In real analysis on the other hand one of the first things introduced are the real numbers and their Completeness Axiom:</p>
<p>Every nonempty set of real numbers having an upper bound must have a least upper bound.</p>
<p>Which is equivalent to:</p>
<p>(**) Every nonempty set of real numbers having a lower bound must have a greatest lower bound (infimum). </p>
<p>It has never been mentioned in any book I've read and I don't know if they have anything to do with each other but (*) and (**) seem to me to be such that (**) implies (*). </p>
<blockquote>
<p>Is the Well Ordering Principle a consequence of the Completeness of
the real numbers? Or do they have nothing to do with each other? How
should I think of them in terms of how they relate to each other?</p>
</blockquote>
<p>Is it okay to see one as a consequence of the other?</p>
| Emmbee1 | 446,283 | <p>I see the completeness axiom of $\mathbb{R}$ as a generalisation of the well ordering principle of $\mathbb{N}$. As every non-empty subset $S\subset\mathbb{N}$ is bounded below, this condition needs to be imposed on $S\subset\mathbb{R}$ to make the extension possible, even though the bound doesn't have to be in $S$.</p>
|
621,544 | <p>I have multiple data-sets from a Fourier series of a function $f(t,x)$ (the data-sets were obtained by varying $x$), so $A_n=\frac{2}{T}\int_0^T{f(t,x)\sin{\frac{2\pi nt}{T}}dt}$, which seems to follow an equation of the form
$$
A=an^b
$$
I know that $b$ will be negative, since $A$ approximately equal to $0$ for large $n$. However $A$ also has noise, which means that for certain data-sets $A$ sometimes also has a few negative values.</p>
<p>This means that I will not be able to use a linear fit using
$$
\log(A)=\log(a)+b\log(n)
$$</p>
<p><a href="https://math.stackexchange.com/questions/3625/easy-to-implement-method-to-fit-a-power-function-regression">This</a> might be a related question. Its answer does avoid this issue, but the secant method or Newton's method can still take some time, and I wonder if there might be a faster method.</p>
<p>Maybe related to this question is how would I determine the error ($\sigma_A$) of $A_n$?</p>
<p><strong>Edit:</strong><br>
I was trying to find Fourier series of the <a href="http://en.wikipedia.org/wiki/True_anomaly" rel="nofollow noreferrer">true anomaly</a> of closed Kepler orbits (so $0\leq e<1$) as a function of eccentricity $e$, since there is no explicit solution for the position as a function of time. But there is one <a href="https://physics.stackexchange.com/questions/69380/what-are-common-methods-for-calculating-the-time-dependency-of-elliptical-orbit">the other way around</a>
$$
t(\theta)=\sqrt{\frac{a^3}{\mu}}\left(2\tan^{-1}\left(\frac{\sqrt{1-e^2}\tan{\frac{\theta}{2}}}{1+e}\right)-\frac{e\sqrt{1-e^2}\sin{\theta}}{1+e\cos{\theta}}\right)
$$
The function from which I would like to the get the Fourier series is defined as
$$
f(e,t)=\theta(t)-\frac{2\pi t}{T}
$$
It is possible to find the time derivative of the true anomaly, $\dot{\theta}=\frac{d\theta}{dt}$, as a function of $\theta$ itself
$$
\dot{\theta}(\theta)=\sqrt{\frac{\mu}{a^3\left(1-e^2\right)^3}}\left(1+e\cos{\theta}\right)^2
$$
This allowed me to rewrite the integral to another integral over $\theta$, since $dt=\frac{d\theta}{\dot{\theta}}$. So
$$
A_n=\frac{2}{T}\int_{-\pi}^{\pi}{\frac{\left(\theta-\frac{2\pi t(\theta)}{T}\right)\sin\left(\frac{2\pi nt(\theta)}{T}\right)}{\dot{\theta}(\theta)}d\theta}
$$
And since $T=2\pi\sqrt{\frac{a^3}{\mu}}$, this can be further simplified to only a function of $n$ and $e$ since
$$
\frac{2\pi t(\theta)}{T}=2\tan^{-1}\left(\frac{\sqrt{1-e^2}\tan{\frac{\theta}{2}}}{1+e}\right)-\frac{e\sqrt{1-e^2}\sin{\theta}}{1+e\cos{\theta}}
$$
$$
\frac{2}{T}\frac{1}{\dot{\theta}(\theta)}=\frac{\sqrt{\left(1-e^2\right)^3}}{\pi\left(1+e\cos{\theta}\right)^2}
$$
However according to MATLAB, this integral did not have an explicit solution, so I still had to solve it numerically. But fortunately MATLAB does have functionality to solve this integral numerically within small error bounds, which is better and faster than my initial Riemann integral approach. After this I decided to implement an iterative method to solve for $b$, since I still got quite a lot of negative values.</p>
<p>However my actual goal was actually finding expressions for $a$ and $b$ as a function of $e$. And after some trial and error I got quite good test functions which comply with the data:
<img src="https://i.stack.imgur.com/kSwSY.png" alt="enter image description here"></p>
<p>$$
a(e)=\frac{1.887 e^{0.9591}}{1-0.04663 e^{64.38}}
$$
$$
b(e)=\frac{e^4 - 2.61 e^3 + 0.7813 e^2 + 0.8935 e + 2.014 \times 10^{-3}}{1.152 e^3 - 1.08 e^2 - 0.139 e - 6.703 \times 10^{-5}}
$$
But when using this to approximate $\theta(t)$ it often seems quite off. Does someone know what better expressions for $a$ and $b$ would be?</p>
| Ross Millikan | 1,827 | <p>The traditional way is to take the log of each side. The equation becomes $\log A=\log a + b \log n$. Your data changes from $(A_i,n_i)$ to $(\log A_i, \log n_i)$ and you can do a linear fit in $a,b$. There are issues with the errors being transformed, which will change the best fit parameters. You are correct that if the measurement errors make some $A_i$ negative you have to figure out what to do about that.</p>
|
|
1,564,750 | <p>Imagine that your country's postal system only issues 2-cent and 5-cent stamps. Prove that it is possible to pay for postage using only these stamps for any amount of n cents, where n is at least 4.</p>
<hr>
<p>My attempt (using strong induction; I know we can use ordinary induction, but strong induction can also be applied in such cases):</p>
<p>Base case:
$$n=4: 2 \times 2 \text{ cents}$$
$$n=5: 1 \times 5 \text{ cents}$$
$$n=6: 3 \times 2 \text{ cents}$$
$$n=7: 1 \times 2 \text{ cents} + 1 \times 5 \text{ cents}$$
<p>You might ask why I have so many base cases; here is my reason: the question states that we can pay for postage using 2- and 5-cent stamps only. Hence, we have 3 general cases: </p>
<p>1: ONLY 2-cents stamps are used</p>
<p>2: ONLY 5-cents stamps are used</p>
<p>3: 2-cents and 5-cents stamps are used.</p>
<p>(Till now, are my base cases valid?)</p>
<p>Assume that for n=k, P(k) is true and that we need to show P(k+1)</p>
<p>$\textbf{Induction hypothesis:}$ $P(i)$ is true for all $i$ with $4\le i \le k$, where $k \ge 7$.</p>
<p>Since $4\le (k-3) \le k$, $P(k-3)$ is true by the induction hypothesis.</p>
<p>Now, $k-3$ cents can be formed using 2- and 5-cent stamps.</p>
<p>To get $k+1$ cents, we can just add $\textbf{two}$ 2-cent stamps, since $(k-3)+4=k+1$?</p>
<hr>
<p>Thank you! Is my proof valid or no? Any alternatives for this question using strong induction? Also, if I have used strong mathematical induction wrong or any of the steps are incorrect, please explain why.</p>
| Justpassingby | 293,332 | <p>You can avoid explicit induction by verifying a finite number of cases like 4,5,6,7,8 and then note that any higher natural number is separated from one of these by a multiple of 5.</p>
<p>Strictly logically that last step requires induction as well but the proof is much easier.</p>
|
1,564,750 | <p>Imagine that your country's postal system only issues 2-cent and 5-cent stamps. Prove that it is possible to pay for postage using only these stamps for any amount of n cents, where n is at least 4.</p>
<hr>
<p>My attempt (using strong induction; I know we can use ordinary induction, but strong induction can also be applied in such cases):</p>
<p>Base case:
$$n=4: 2 \times 2 \text{ cents}$$
$$n=5: 1 \times 5 \text{ cents}$$
$$n=6: 3 \times 2 \text{ cents}$$
$$n=7: 1 \times 2 \text{ cents} + 1 \times 5 \text{ cents}$$
<p>You might ask why I have so many base cases; here is my reason: the question states that we can pay for postage using 2- and 5-cent stamps only. Hence, we have 3 general cases: </p>
<p>1: ONLY 2-cents stamps are used</p>
<p>2: ONLY 5-cents stamps are used</p>
<p>3: 2-cents and 5-cents stamps are used.</p>
<p>(Till now, are my base cases valid?)</p>
<p>Assume that for n=k, P(k) is true and that we need to show P(k+1)</p>
<p>$\textbf{Induction hypothesis:}$ $P(i)$ is true for all $i$ with $4\le i \le k$, where $k \ge 7$.</p>
<p>Since $4\le (k-3) \le k$, $P(k-3)$ is true by the induction hypothesis.</p>
<p>Now, $k-3$ cents can be formed using 2- and 5-cent stamps.</p>
<p>To get $k+1$ cents, we can just add $\textbf{two}$ 2-cent stamps, since $(k-3)+4=k+1$?</p>
<hr>
<p>Thank you! Is my proof valid or no? Any alternatives for this question using strong induction? Also, if I have used strong mathematical induction wrong or any of the steps are incorrect, please explain why.</p>
| HEKTO | 92,112 | <p>Let's assume you know how to pay $n$ cents using only 2- and 5-cents stamps. You want to find a way to pay $n+1$ cents. There are two cases:</p>
<ul>
<li>If your sequence for the $n$ contains a 5-cents stamp, take it out and replace it by three 2-cent stamps.</li>
<li>If your sequence contains only 2-cents stamps, then it must contain at least two of them (remember condition $n\ge 4$?) - in this case take these two stamps out and replace them by one 5-cent stamp.</li>
</ul>
<p>That'll be a strong induction.</p>
|
|
2,542,259 | <p>The question says to "find geometrically the set of points $(x,y) \in \Bbb R^2$ such that $$x+2yi=|x+yi|$$".</p>
<p>I'm kinda lost on this one, I've tried solving the equation directly but I don't even know what to do with that. Here's what I have right now:</p>
<p>$$x+2yi = |x+yi|$$
$$x+2yi = \sqrt{x^2+y^2}$$
$$x^2+4yxi-4y^2=x^2+y^2$$
$$4yxi=5y^2$$
$$y=0 \lor -5y+4xi=0$$
$$y=0 \lor y=\frac{4xi}5$$</p>
<p>I don't know what to do with this... I'm reading Ahlfors' Complex Analysis but this type of exercise seems kind of weird to me.</p>
| avz2611 | 142,634 | <p>L.H.S. is complex whereas R.H.S. is real and non-negative. Thus $2y=0$, so $y$ must be zero, and then the equation reduces to $x=|x|$, which forces $x\ge 0$. So the solution set is the non-negative real axis: $y=0$, $x\ge 0$.</p>
<p>Now if $x$ and $y$ were complex, you would have to make sure the imaginary parts cancel on the L.H.S. and then solve for the rest.</p>
|
1,376,627 | <p>I would like to know how to rewrite the following equations:</p>
<p>$$
\frac{d (f(x))}{d(x+c)} =0\\
\frac{d^2 (f(x))}{d(x+c)^2} =0\\
$$</p>
<p>Here $x$ is a variable, $c$ is a constant and $f(x)$ is a function of x.
I would also like to know the reasoning behind the answer.</p>
| wltrup | 232,040 | <p>Use the chain rule. Define $u = x + c$ then use the fact that $$\frac{d\cdot}{dx} = \frac{du}{dx} \frac{d\cdot}{du}$$ where the $\cdot$ represents any function, so</p>
<p>$$\frac{df}{dx} = \frac{du}{dx} \frac{df}{du}$$</p>
<p>It also follows that </p>
<p>$$
\begin{array}{rcll}
\frac{d^2f}{dx^2} &=& \frac{d}{dx} (\frac{df}{dx}) &\quad\mbox{definition of 2nd derivative} \\
&=& \frac{d}{dx} \big(\frac{du}{dx} \frac{df}{du}\big) &\quad\mbox{using the result above} \\
&=& \frac{d}{dx} \big(\frac{du}{dx} \big)\frac{df}{du} + \frac{du}{dx} \frac{d}{dx} \big(\frac{df}{du}\big) &\quad\mbox{using the rule for the derivative of a product} \\
&=& \frac{d^2u}{dx^2} \frac{df}{du} + \frac{du}{dx} \frac{du}{dx} \frac{d}{du} \big(\frac{df}{du}\big) &\quad\mbox{using various results from above} \\
&=& \frac{d^2u}{dx^2} \frac{df}{du} + (\frac{du}{dx})^2 \frac{d^2f}{du^2} &\quad\mbox{simplifying}
\end{array}
$$</p>
<p>So, in summary,</p>
<p>$$\frac{df}{dx} = \frac{du}{dx} \frac{df}{du}$$</p>
<p>and</p>
<p>$$\frac{d^2f}{dx^2} = \frac{d^2u}{dx^2} \frac{df}{du} + (\frac{du}{dx})^2 \frac{d^2f}{du^2}$$</p>
<p>The above is correct and valid for <em>any</em> $u(x)$ but it's written in a somewhat backwards way. You already have $\frac{df}{dx}$ and $\frac{d^2f}{dx^2}$ and want to find $\frac{df}{du}$ and $\frac{d^2f}{du^2}$. Well, that's simple algebra now to get those from the above. It's even simpler with the specific example of $u = x + c$ because, then, $\frac{du}{dx} = 1$ and $\frac{d^2u}{dx^2} = 0$, so</p>
<p>$$\frac{df}{du} = \frac{df}{dx}$$</p>
<p>and</p>
<p>$$\frac{d^2f}{du^2} = \frac{d^2f}{dx^2}$$</p>
<p>for that particular example.</p>
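<p>A quick numerical sanity check of the final result $\frac{df}{du}=\frac{df}{dx}$ for $u=x+c$; the choices $f=\sin$ and $c=2$ are just illustrative assumptions:</p>

```python
import math

def central_diff(fn, t, h=1e-6):
    """Symmetric finite-difference approximation of fn'(t)."""
    return (fn(t + h) - fn(t - h)) / (2 * h)

c = 2.0
f = math.sin

def g(u):
    """The same function viewed as a function of u = x + c."""
    return f(u - c)

x0 = 0.7
df_du = central_diff(g, x0 + c)   # df/du evaluated at u0 = x0 + c
df_dx = central_diff(f, x0)       # df/dx evaluated at x0
```

<p>Since $du/dx = 1$ and $d^2u/dx^2 = 0$, the two numerical derivatives agree, matching the closing formulas above.</p>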
|
286,312 | <p>I'm starting to understand how induction works (with the whole $k \to k+1$ thing), but I'm not exactly sure how summations play a role. I'm a bit confused by this question specifically:</p>
<p>$$
\sum_{i=1}^n (3i-2) = \frac{n(3n-1)}{2}
$$</p>
<p>Any hints would be greatly appreciated.</p>
| omid saba | 52,535 | <p>I will prove it in two ways for you:</p>
<p>1- Mathematical Induction:</p>
<p>If $n=1$ then the left side is $1$ and also the right side is $1$ too.</p>
<p>Now assume that $\sum_{i=1}^{n}(3i-2)=\frac{n(3n-1)}{2}$; we should show $\sum_{i=1}^{n+1}(3i-2)=\frac{(n+1)(3(n+1)-1)}{2}$, that is, $\sum_{i=1}^{n+1}(3i-2)=\frac{(n+1)(3n+2)}{2}$. We can write:
$$\sum_{i=1}^{n+1}(3i-2)=(\sum_{i=1}^{n}(3i-2))+(3(n+1)-2)=\frac{n(3n-1)}{2}+(3n+1)=\frac{3n^{2}-n+6n+2}{2}=\frac{3n^{2}+5n+2}{2}=\frac{3n^{2}+3n+2n+2}{2}=\frac{3n(n+1)+2(n+1)}{2}=\frac{(n+1)(3n+2)}{2}$$.
This completes the induction.</p>
<p>2- Without Mathematical Induction:</p>
<p>We know $\sum_{i=1}^{k}i=\frac{k(k+1)}{2}$ and $\sum_{i=1}^{k}c=kc$ for a constant number "c".</p>
<p>Now $$\sum_{i=1}^{n}(3i-2)=3\sum_{i=1}^{n}i-\sum_{i=1}^{n}2=3\frac{n(n+1)}{2}-2n=\frac{n(3n+3-4)}{2}=\frac{n(3n-1)}{2}$$.</p>
|
2,076,984 | <p>I have already asked <a href="https://math.stackexchange.com/questions/2075949/how-to-draw-diagram-for-line-and-parabola">a similar question</a>. But the answer in that question is very difficult to understand. I am new to this concept so I am looking for an easier explanation.</p>
<blockquote>
<p>My main <strong>question</strong> is: why do we subtract things to find the area using the definite integral?</p>
</blockquote>
<p>Here are a couple of figures -</p>
<ol>
<li>Two parabolas -
<a href="https://i.stack.imgur.com/aQC8h.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aQC8h.jpg" alt="page 1"></a></li>
</ol>
<p>Area $\displaystyle = \int \left(\sqrt{x} - x^2 \right) dx$</p>
<p>Why do we subtract to find the area? Why not add?</p>
<ol start="2">
<li>Similarly in parabola and line.</li>
</ol>
<p><a href="https://i.stack.imgur.com/emac3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/emac3.png" alt="page 2"></a></p>
<p>Area $\displaystyle = \int (x + 2 - x^2)dx$</p>
| Community | -1 | <p>If you decompose the area into thin vertical rectangles, the height of the rectangles is the difference between the ordinates.</p>
<p><a href="https://i.stack.imgur.com/uGEQs.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uGEQs.gif" alt="enter image description here"></a></p>
|
2,076,984 | <p>I have already asked <a href="https://math.stackexchange.com/questions/2075949/how-to-draw-diagram-for-line-and-parabola">a similar question</a>. But the answer in that question is very difficult to understand. I am new to this concept so I am looking for an easier explanation.</p>
<blockquote>
<p>My main <strong>question</strong> is: why do we subtract things to find the area using the definite integral?</p>
</blockquote>
<p>Here are a couple of figures -</p>
<ol>
<li>Two parabolas -
<a href="https://i.stack.imgur.com/aQC8h.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aQC8h.jpg" alt="page 1"></a></li>
</ol>
<p>Area $\displaystyle = \int \left(\sqrt{x} - x^2 \right) dx$</p>
<p>Why do we subtract to find the area? Why not add?</p>
<ol start="2">
<li>Similarly in parabola and line.</li>
</ol>
<p><a href="https://i.stack.imgur.com/emac3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/emac3.png" alt="page 2"></a></p>
<p>Area $\displaystyle = \int (x + 2 - x^2)dx$</p>
| Kanwaljit Singh | 401,635 | <p>The given curves are :</p>
<p>y = f(x). .........(1)</p>
<p>y = g(x). ...........(2)</p>
<p>between a and b. So (1) and (2) intersect at x = a and x = b.</p>
<p><a href="https://i.stack.imgur.com/rgdOq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rgdOq.png" alt="page 1"></a></p>
<p>A thin vertical strip of width dx is taken between the lines x = a and x = b as shown in the figure. </p>
<p>The height of this vertical strip is given by $f(x) - g(x)$, since <strong>$f(x) \ge g(x)$</strong> as shown. </p>
<p>So, the elementary area of this strip dA = [f(x) - g(x)] dx.</p>
<p>Now, we know that the total area is made up of vary large number of such strips, starting from x = a to x = b. </p>
<p>Hence, the total enclosed area A, between the curves is given by adding the area of all such strips between a and b:</p>
<p>$A = \int_a^b [f(x) - g(x)]\, dx$</p>
<p>The enclosed area between two curves can also be calculated in the following manner,</p>
<p>A = (area bounded by the curve y = f(x), <strong>x-axis</strong> and the lines x = a and x = b) – (area bounded by the curve y = g(x), <strong>x-axis</strong> and the lines x = a and x = b)</p>
<p>Hope its clear to you now.</p>
|
|
2,652,662 | <blockquote>
<p>Let $f:[a,b] \to \mathbb{R}$ be a bounded function. Define $A_{f,x} : (0, \infty) \to \mathbb{R}$ by $$A_{f,x} (r) = \text{diam}\left( f((x-r,x+r) \cap [a,b])\right).$$ Show that for all $\epsilon >0$ the set $\{x\in [a,b] : \lim_{r\to 0^+} A_{f,x} (r)\geq \epsilon \}$ is a closed set.</p>
</blockquote>
<p>I showed that $f$ is continuous at $x$ if and only if $\lim_{r\to 0^+} A_{f,x} (r) = 0$. I'm thinking about showing the set is closed by showing the image of the set is closed (since $f$ is continuous, the pre-image must be closed). Is this the right approach? If so/not, how should I proceed/conclude? (In this case, would the set of discontinuities of $f$ be a union of countably many closed sets?)</p>
| trancelocation | 467,003 | <p>Here, the (conditional) sample spaces with equally likely outcomes are easily listed. </p>
<p>So let $\Omega$ denote the original sample space of all possible gender combinations of siblings.</p>
<p>Let $\Omega_{C}$ denote the restricted sample space induced by a given condition $C$. (Actually, $\Omega_{C} = C$).</p>
<p>1) Here $C$ is "first born is a girl". So, let $G$ denote the first born girl and letters $g$,$b$ may denote the other siblings:
$\Omega_{C} = \{Ggg, Ggb, Gbg, Gbb\}$
$$\rightarrow P(Ggg | C)= \frac{1}{4}$$</p>
<p>2) Here $C$ is the condition that "(at least) one kid is a girl". Similarly as in 1) we get:
$\Omega_{C} = \Omega \setminus \{bbb\} = \{ggg, ggb, gbg, gbb, bgg, bgb, bbg\}$
$$\rightarrow P(ggg | C)= \frac{1}{7}$$</p>
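<p>Both conditional probabilities can be confirmed by brute-force enumeration of the sample space; a minimal Python sketch:</p>

```python
from itertools import product
from fractions import Fraction

# all 8 equally likely gender triples (g = girl, b = boy)
outcomes = list(product("gb", repeat=3))

# 1) condition: the first-born is a girl
cond1 = [o for o in outcomes if o[0] == "g"]
p1 = Fraction(sum(o == ("g", "g", "g") for o in cond1), len(cond1))

# 2) condition: at least one child is a girl
cond2 = [o for o in outcomes if "g" in o]
p2 = Fraction(sum(o == ("g", "g", "g") for o in cond2), len(cond2))
```

<p>The restricted sample spaces have 4 and 7 outcomes respectively, each containing the all-girls triple exactly once, giving $\frac{1}{4}$ and $\frac{1}{7}$ as above.</p>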
|
971,997 | <p>How do we find
$$\Re\left[\int_0^{\large\frac{\pi}{2}} e^{\Large e^{i\theta}}d\theta\right]$$</p>
<p>In the shortest and easiest possible manner?</p>
<p>I cannot think of anything good.</p>
| Venus | 146,687 | <p>Alternatively, using Taylor series of exponential function and Euler's formula we have $$e^{\Large e^{i\theta}}=1+(\cos\theta+i\sin\theta)+\frac{(\cos2\theta+i\sin2\theta)}{2!}+\frac{(\cos3\theta+i\sin3\theta)}{3!}+\cdots$$
We also have
$$\int_0^{\pi/2}\cos(n\theta)\;d\theta=\frac{\sin\left(\frac{\pi n}{2}\right)}{n}=\begin{cases}\dfrac{(-1)^{(n-1)/2}}{n}&,\;\text{for $n$ odd}\\\\0&,\;\text{for $n$ even}\end{cases}$$
and series for <a href="http://mathworld.wolfram.com/SineIntegral.html">sine integral</a>, see formula $(9)$
$$\text{Si}\,(x)=\sum_{n=1}^\infty(-1)^{n-1}\frac{x^{2n-1}}{(2n-1)(2n-1)!}$$
Therefore
$$\begin{align}\int_0^{\pi/2}\Re\left( e^{\Large e^{i\theta}}\right)d\theta&=\int_0^{\pi/2}\left(1 +\cos\theta+\frac{\cos2\theta}{2!}+\frac{\cos3\theta}{3!}+\cdots\right)d\theta\\&=\frac{\pi}{2}+1-\frac{1}{3\cdot3!}+\frac{1}{5\cdot5!}-\frac{1}{7\cdot7!}+\cdots\\&=\frac{\pi}{2}+\text{Si}\,(1)\end{align}$$</p>
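<p>A numerical cross-check of the closed form $\frac{\pi}{2}+\text{Si}\,(1)$, using a midpoint rule for the integral and the truncated series above for $\text{Si}\,(1)$ (an unoptimized sketch):</p>

```python
import cmath
import math

def si(x, terms=25):
    """Sine integral Si(x) from its Maclaurin series; converges very fast for x = 1."""
    return sum((-1) ** (n - 1) * x ** (2 * n - 1)
               / ((2 * n - 1) * math.factorial(2 * n - 1))
               for n in range(1, terms + 1))

def numeric_integral(n=50_000):
    """Midpoint rule for Re(exp(exp(i*theta))) over [0, pi/2]."""
    h = (math.pi / 2) / n
    return sum(cmath.exp(cmath.exp(1j * (i + 0.5) * h)).real
               for i in range(n)) * h

closed_form = math.pi / 2 + si(1.0)
```

<p>Both evaluate to about $2.5169$, confirming the identity numerically.</p>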
|
2,610,048 | <p>Ok, so I'm trying to prove statement in the header. I have read the following discussion on it, but I can't seem to follow it all the way through:</p>
<p><a href="https://math.stackexchange.com/questions/1198735/proving-sum-k-1n-k-k-n1-1/1198743#1198743?newreg=abbcf872d9904cbfa76cd161c4fecdd0">Proving $\sum_{k=1}^n k k!=(n+1)!-1$</a></p>
<p>I like mfl's answer, but I get hung up on the last step. They say: </p>
<p>and we need to show</p>
<blockquote>
<p>$$\sum_{k=1}^{n+1} kk!=(n+2)!−1.$$</p>
</blockquote>
<p>Just write</p>
<blockquote>
<p>$$\sum_{k=1}^{n+1} kk!=\sum_{k=1}^n kk! + (n+1)(n+1)!$$</p>
</blockquote>
<p>How do they get from the first step stated above, to the following step? I'm stuck.</p>
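<p>Independently of the induction details, a quick numerical check of the identity $\sum_{k=1}^n k\,k!=(n+1)!-1$ can build confidence; a minimal Python sketch:</p>

```python
from math import factorial

def lhs(n):
    """Sum of k * k! for k = 1..n."""
    return sum(k * factorial(k) for k in range(1, n + 1))

def rhs(n):
    """The claimed closed form (n+1)! - 1."""
    return factorial(n + 1) - 1
```

<p>This also makes the inductive step visible: <code>lhs(n + 1) - lhs(n)</code> is exactly the extra term <code>(n + 1) * factorial(n + 1)</code>.</p>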
| David | 119,775 | <p>Your $x$s can have any one of five values, so for example $[1,1,x,x]$ is really $25$ options, not $1$. The total number of successful rolls of four dice is
$$6\times25+4\times5+1=171$$
and the probability is
$$\frac{171}{1296}\ .$$
The chance of failing on one roll of four dice is
$$1-\frac{171}{1296}=\frac{1125}{1296}\ ,$$
the chance of failing on three rolls is
$$\Bigl(\frac{1125}{1296}\Bigr)^3$$
and the chance of succeeding at least once in three rolls is
$$1-\Bigl(\frac{1125}{1296}\Bigr)^3=0.3459\ .$$</p>
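<p>The arithmetic above can be verified mechanically with exact fractions (a sketch; the count of $171$ successful outcomes is taken from the answer):</p>

```python
from fractions import Fraction

successes = 6 * 25 + 4 * 5 + 1            # 171 successful outcomes out of 1296
p_success = Fraction(successes, 1296)
p_fail = 1 - p_success                    # 1125/1296, failing on one roll
p_at_least_once = 1 - p_fail ** 3         # succeeding at least once in three rolls
```

<p>Converting the exact fraction to a decimal reproduces the $0.3459$ quoted above.</p>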
|