qid int64 1 4.65M | question large_stringlengths 27 36.3k | author large_stringlengths 3 36 | author_id int64 -1 1.16M | answer large_stringlengths 18 63k |
|---|---|---|---|---|
3,370,076 | <p>The total mechanical energy is conserved when a ball is dropped from a height of 4.00 <span class="math-container">$\mathit{m}$</span>, and it makes an elastic collision with the ground. Assuming no non-conservative forces are acting, find the period of the ball. Here <span class="math-container">$g = 9.81\ \mathrm{m/s^2}$</span>.</p>
<p><span class="math-container">\begin{align}
PE_g &= U_s \\
mgh &= \frac{1}2 kA^2 \\
mgh &= \frac{1}2 kh^2 \\
2mgh &= kh^2 \\
2\frac{g}{h} &= \frac{k}{m} \\
\omega &= \sqrt{\frac{k}{m}} = \sqrt{\frac{2g}{h}} \\
T &= \frac{2 \pi}{\omega}=2\pi \sqrt{\frac{h}{2g}} = \sqrt{2}\,\pi\sqrt{\frac{h}{g}}=2.837\ \mathrm{s}
\end{align}</span></p>
<p>Is my approach correct?</p>
<h2>Fixed Approach</h2>
<p><span class="math-container">\begin{align}
mgh &= \frac{1}2 m v^2_f \\
v_f &= \sqrt{2gh} \\
\frac{v_f - v_0}{g} &= t = \frac{T}{2} \\
2t &= T = 1.80\ \mathrm{s}
\end{align}</span></p>
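A quick numeric run-through of the fixed approach (a Python sketch using the values from the problem statement, h = 4.00 m and g = 9.81 m/s²):

```python
import math

h = 4.00   # drop height in metres
g = 9.81   # gravitational acceleration in m/s^2

# Impact speed from energy conservation: m*g*h = (1/2)*m*v^2
v_f = math.sqrt(2 * g * h)

# One-way fall time from rest: v_f = g*t; the elastic bounce makes the
# motion periodic with period T = 2*t (down plus up).
t = v_f / g
T = 2 * t

print(round(v_f, 3), round(T, 3))
```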
| Nicolas | 498,847 | <p>Suppose <span class="math-container">$ax = ay$</span>, then <span class="math-container">$x = a^{-1}ax = a^{-1}ay = y$</span>. Also, for any <span class="math-container">$y \in G$</span>, let <span class="math-container">$x := a^{-1}y$</span>, then <span class="math-container">$ax = aa^{-1}y = y$</span>.</p>
|
114,733 | <p>Say you have the half-plane $\{z\in\mathbb{C}:\Re(z)>0\}$. Is there a rigorous explanation why the transformation $w=\dfrac{z-1}{z+1}$ maps the half plane onto $|w|<1$?</p>
| 138 Aspen | 909,868 | <p>I wrote a simple demo with Mathematica to demonstrate it.</p>
<pre><code>f[z_] := (z - 1)/(z + 1);
Manipulate[
ComplexListPlot[Table[f[re + I*im], {im, -50, 50, 0.1}],
PlotRange -> {{-5, 5}, {-5, 5}}], {{re, 0}, -3, 3}]
</code></pre>
<p><a href="https://i.stack.imgur.com/7VICt.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7VICt.gif" alt="enter image description here" /></a></p>
|
4,286,983 | <p>Let <span class="math-container">$p,q$</span> be two odd primes. Prove that <span class="math-container">$p$</span> is a primitive root of <span class="math-container">$q$</span> if and only if <span class="math-container">$\frac{x^q-1}{x-1}$</span> is an irreducible polynomial on <span class="math-container">$\mathbb{F}_p$</span></p>
<p>This is from a middle school competition, so I would like a purely elementary, competition-style method. There is an advanced approach at this <a href="https://artofproblemsolving.com/community/c6h2701328p23464207" rel="nofollow noreferrer">link</a>, which I do not quite understand. Is there a simpler way to deal with this problem?</p>
<p>Second, does this seem to be a famous result?</p>
<p>PS: by "middle school" I mean "high school".</p>
| WhatsUp | 256,378 | <p>Here is a "fake" elementary proof, as it is essentially a translation of some results on finite fields (as will any answer probably be). A "real proof" is attached at the end.</p>
<hr />
<p><strong>The "fake" proof.</strong></p>
<p>Let <span class="math-container">$f(x) \in \Bbb F_p[x]$</span> be any irreducible factor of <span class="math-container">$\frac{x^q - 1}{x - 1}$</span>. We prove two preparatory results.</p>
<ol>
<li><span class="math-container">$\prod_{i = 0}^{q - 1}(x - t^i) \equiv x^q - 1\mod f(t)$</span> in <span class="math-container">$\Bbb F_p[x, t]$</span>.</li>
</ol>
<blockquote>
<p>Consider the polynomial <span class="math-container">$H(x, t) = x^q - 1 - \prod_{i = 0}^{q - 1}(x - t^i) \in \Bbb Z[x, t]$</span>.</p>
<p>Write <span class="math-container">$\zeta = e^{\frac{2\pi i}q} \in \Bbb C$</span>. We have <span class="math-container">$H(x, \zeta^i) = 0$</span> for all <span class="math-container">$0 \leq i < q$</span>, which implies that <span class="math-container">$t^q - 1 \mid H(x, t)$</span> (as polynomials over <span class="math-container">$\Bbb Z$</span>, by Euclidean division). After mod <span class="math-container">$p$</span>, we get the result we want.</p>
</blockquote>
<ol start="2">
<li>For <span class="math-container">$i \not\equiv j \mod q$</span>, we have <span class="math-container">$t^i \not\equiv t^j \mod f(t)$</span> in <span class="math-container">$\Bbb F_p[x, t]$</span>.</li>
</ol>
<blockquote>
<p>In 1, we take the formal derivative with respect to <span class="math-container">$x$</span>: <span class="math-container">$$f(t) \mid qx^{q - 1} - \sum_{i = 0}^{q - 1}\prod_{j \neq i}(x - t^j) =: D(x, t).$$</span>
If there existed <span class="math-container">$i \neq j$</span> such that <span class="math-container">$t^i\equiv t^j \mod f(t)$</span>, then every summand in the expression <span class="math-container">$D(t^i, t)$</span> would be divisible by <span class="math-container">$f(t)$</span>, as it would have either the factor <span class="math-container">$(t^i - t^i)$</span> or the factor <span class="math-container">$(t^i - t^j)$</span>.</p>
<p>Consequently, we would have <span class="math-container">$f(t) \mid qt^{i(q - 1)}$</span>. As <span class="math-container">$f(t)$</span> is irreducible, this leads to <span class="math-container">$f(t) = t$</span>, which is impossible.</p>
</blockquote>
<hr />
<p>First assume that <span class="math-container">$p^r \equiv 1\mod q$</span> for some <span class="math-container">$r < q - 1$</span>. We want to show that <span class="math-container">$\frac{x^q - 1}{x - 1}$</span> is reducible over <span class="math-container">$\Bbb F_p$</span>.</p>
<p>For every nonnegative integer <span class="math-container">$d$</span>, we consider the polynomial <span class="math-container">$G_d(x, t) = \prod_{i = 0}^{r - 1} (x - t^{dp^i}) \in \Bbb F_p[x, t]$</span>.</p>
<p>I claim that there exists a polynomial <span class="math-container">$g_d(x) \in \Bbb F_p[x]$</span> such that <span class="math-container">$G_d(x, t) \equiv g_d(x) \mod f(t)$</span>.</p>
<blockquote>
<p>Note that <span class="math-container">$q \mid p^r - 1$</span> implies that <span class="math-container">$t^{dp^r}\equiv t^d \mod f(t)$</span>. Consequently, <span class="math-container">$G_d(x, t^p) = \prod_{i = 1}^r(x - t^{dp^i}) \equiv G_d(x, t)\mod f(t)$</span>.</p>
<p>On the other hand, if we write <span class="math-container">$G_d(x, t) = \sum_{i = 0}^r a_i(t)x^i$</span> with polynomials <span class="math-container">$a_i(t) \in \Bbb F_p[t]$</span>, then we have <span class="math-container">$$G_d(x, t^p) = \sum_{i = 0}^r a_i(t^p) x^i = \sum_{i = 0}^r a_i(t)^p x^i$$</span> and hence <span class="math-container">$a_i(t)^p \equiv a_i(t) \mod f(t)$</span>.</p>
<p>This can be rewritten as <span class="math-container">$\prod_{s \in \Bbb F_p}(a_i(t) - s) \equiv 0 \mod f(t)$</span>. Since <span class="math-container">$f$</span> is irreducible, there exists <span class="math-container">$s_i \in \Bbb F_p$</span> such that <span class="math-container">$a_i(t) - s_i \equiv 0\mod f(t)$</span>.</p>
<p>Putting <span class="math-container">$g_d(x) = \sum_{i = 0}^rs_i x^i$</span> gives us <span class="math-container">$G_d(x, t) \equiv g_d(x) \mod f(t)$</span>.</p>
</blockquote>
<p>We note that in the definition of <span class="math-container">$G_d(x, t)$</span>, we may replace <span class="math-container">$dp^i$</span> with any integer that is congruent to <span class="math-container">$dp^i$</span> mod <span class="math-container">$q$</span>, because <span class="math-container">$t^q \equiv 1\mod f(t)$</span>.</p>
<p>We may partition all the nonzero residue classes mod <span class="math-container">$q$</span> into subsets of the form <span class="math-container">$S_d = \{d, dp, \dots, dp^{r - 1}\} \subseteq \Bbb F_q^\times$</span>, say <span class="math-container">$\Bbb F_q^\times = \bigsqcup_{j = 1}^n S_{d_j}$</span>, with <span class="math-container">$n = \frac{q - 1}r > 1$</span>. It follows that <span class="math-container">$$\prod_{j = 1}^n g_{d_j}(x) \equiv \prod_{j = 1}^nG_{d_j}(x, t)\equiv \prod_{i = 1}^{q - 1} (x - t^i) \mod f(t).$$</span></p>
<p>Since <span class="math-container">$p, q$</span> are different, <span class="math-container">$x - 1$</span> must be prime to <span class="math-container">$f$</span> and hence <span class="math-container">$\prod_{i = 1}^{q - 1}(x - t^i)\equiv \frac{x^q - 1}{x - 1} \mod f(t)$</span>.</p>
<p>It follows that <span class="math-container">$\prod_{j = 1}^n g_{d_i}(x) \equiv \frac{x^q - 1}{x - 1}\mod f(t)$</span>. As both sides don't involve <span class="math-container">$t$</span>, it is in fact an equality in <span class="math-container">$\Bbb F_p[x]$</span>, and we have shown that <span class="math-container">$\frac{x^q - 1}{x - 1}$</span> is reducible.</p>
<hr />
<p>Now assume that <span class="math-container">$p$</span> is a primitive root mod <span class="math-container">$q$</span>. We want to show that <span class="math-container">$\frac{x^q - 1}{x - 1}$</span> is irreducible.</p>
<p>Write <span class="math-container">$t_i = t^{p^i}$</span>. Since <span class="math-container">$p$</span> is a primitive root, we have <span class="math-container">$p^i \not \equiv p^j \mod q$</span> for any <span class="math-container">$0 \leq i < j < q - 1$</span>, which implies <span class="math-container">$t_i \not \equiv t_j \mod f(t)$</span> for such <span class="math-container">$i, j$</span>.</p>
<p>We prove by induction on <span class="math-container">$k$</span> that, for each <span class="math-container">$0 \leq k \leq q - 1$</span>, there exists <span class="math-container">$f_k(x, t) \in \Bbb F_p[x, t]$</span> such that <span class="math-container">$f(x) \equiv f_k(x, t)\prod_{i = 0}^{k - 1}(x - t_i) \mod f(t)$</span>.</p>
<blockquote>
<p>For <span class="math-container">$k = 0$</span>, simply take <span class="math-container">$f_0(x, t) = f(x)$</span>. Assume it's true for <span class="math-container">$k$</span> (<span class="math-container">$k < q - 1$</span>) and we want to prove it for <span class="math-container">$k + 1$</span>.</p>
<p>We have <span class="math-container">$f(x) \equiv f_k(x, t)\prod_{i = 0}^{k - 1}(x - t_i) \mod f(t)$</span>. Putting <span class="math-container">$x = t_k$</span> gives <span class="math-container">$f(t) \mid f_k(t_k, t)\prod_{i = 0}^{k - 1}(t_k - t_i)$</span>, because <span class="math-container">$f(t_k) = f(t)^{p^k}$</span>.</p>
<p>Since <span class="math-container">$f(t)$</span> is irreducible and <span class="math-container">$t_k - t_i$</span> is not divisible by <span class="math-container">$f(t)$</span>, we see that <span class="math-container">$f(t) \mid f_k(t_k, t)$</span>.</p>
<p>Therefore, choosing <span class="math-container">$f_{k + 1}(x, t) = \frac{f_k(x, t) - f_k(t_k, t)}{x - t_k}$</span> (which is a polynomial) will give <span class="math-container">$f_k(x, t) \equiv f_{k + 1}(x, t)(x - t_k) \mod f(t)$</span> and hence the desired property.</p>
</blockquote>
<p>In particular, for <span class="math-container">$k = q - 1$</span>, we get <span class="math-container">$f(x) \equiv f_{q - 1}(x, t) \prod_{i = 0}^{q - 2}(x - t_i) \mod f(t)$</span>.</p>
<p>However, when <span class="math-container">$i$</span> runs through <span class="math-container">$0$</span> to <span class="math-container">$q - 2$</span>, the residue class of <span class="math-container">$p^i$</span> runs through the whole <span class="math-container">$\Bbb F_q^\times$</span>, as <span class="math-container">$p$</span> is a primitive root.</p>
<p>Therefore <span class="math-container">$\prod_{i = 0}^{q - 2}(x - t_i) \equiv \prod_{i = 1}^{q - 1}(x - t^i) \equiv \frac{x^q - 1}{x - 1} \mod f(t)$</span>.</p>
<p>It follows that the degree of <span class="math-container">$f(x)$</span> is at least <span class="math-container">$q - 1$</span>, and hence <span class="math-container">$f = \frac{x^q - 1}{x - 1}$</span> is irreducible.</p>
<blockquote>
<p>If we write <span class="math-container">$f_{q - 1}(x, t) = \sum b_i(t)x^i$</span> and let <span class="math-container">$d$</span> denote the largest integer such that <span class="math-container">$f(t) \nmid b_d(t)$</span>, then the coefficient of <span class="math-container">$x^{d + q - 1}$</span> in the product <span class="math-container">$f_{q - 1}(x, t)\cdot \frac{x^q - 1}{x - 1}$</span> is not divisible by <span class="math-container">$f(t)$</span>. This forces <span class="math-container">$\deg f \geq q - 1$</span>.</p>
</blockquote>
<hr />
<p><strong>The real proof.</strong></p>
<p>The only non-elementary thing I use is that there are <span class="math-container">$q$</span> different <span class="math-container">$q$</span>-th roots of unity in some extension field of <span class="math-container">$\Bbb F_p$</span>.</p>
<p>Let <span class="math-container">$\zeta$</span> be one such root of unity, i.e. a root of <span class="math-container">$\frac{x^q - 1}{x - 1}$</span> in some extension of <span class="math-container">$\Bbb F_p$</span>.</p>
<p>First suppose that <span class="math-container">$p^r \equiv 1\mod q$</span> for some <span class="math-container">$r < q - 1$</span>. We form the polynomial <span class="math-container">$g = \prod_{i = 0}^{r - 1}(x - \zeta^{p^i})$</span> and notice that it is invariant under the Frobenius, hence is indeed a polynomial in <span class="math-container">$\Bbb F_p[x]$</span>. Meanwhile it's clear that it is a divisor of <span class="math-container">$\frac{x^q - 1}{x - 1}$</span>.</p>
<p>Next suppose that <span class="math-container">$p$</span> is a primitive root mod <span class="math-container">$q$</span>. Let <span class="math-container">$f$</span> be any factor of <span class="math-container">$\frac{x^q - 1}{x - 1}$</span> and suppose without loss of generality that <span class="math-container">$\zeta$</span> is a root of <span class="math-container">$f$</span>.</p>
<p>Applying Frobenius, we see that <span class="math-container">$\zeta^{p^i}$</span> is a root of <span class="math-container">$f$</span> for every <span class="math-container">$i$</span>. As <span class="math-container">$i$</span> ranges through all integers, the residue class of <span class="math-container">$p^i$</span> ranges through all <span class="math-container">$\Bbb F_q^\times$</span> and we conclude that <span class="math-container">$\zeta^i$</span> for <span class="math-container">$1 \leq i \leq q - 1$</span> are all roots of <span class="math-container">$f$</span>, hence <span class="math-container">$f = \frac{x^q - 1}{x - 1}$</span> is irreducible.</p>
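For very small primes the equivalence can also be checked by brute force: factor <span class="math-container">$1 + x + \dots + x^{q-1}$</span> over <span class="math-container">$\Bbb F_p$</span> by trial division against every monic polynomial of at most half its degree, and compare with the multiplicative order of <span class="math-container">$p$</span> mod <span class="math-container">$q$</span>. A Python sketch (only feasible for tiny <span class="math-container">$p, q$</span>):

```python
from itertools import product

# Brute-force check of the equivalence for tiny odd primes p, q:
#   p is a primitive root mod q  <=>  (x^q - 1)/(x - 1) is irreducible over F_p.
# Polynomials are lists of coefficients in F_p, lowest degree first.

def mult_order(p, q):
    """Multiplicative order of p modulo q."""
    k, t = 1, p % q
    while t != 1:
        t = (t * p) % q
        k += 1
    return k

def poly_rem(num, den, p):
    """Remainder of num divided by the monic polynomial den, over F_p."""
    num = num[:]
    d = len(den) - 1
    for i in range(len(num) - 1 - d, -1, -1):
        c = num[i + d] % p
        for j, dj in enumerate(den):
            num[i + j] = (num[i + j] - c * dj) % p
    return num[:d]

def is_irreducible(f, p):
    """Trial division by every monic polynomial of degree <= deg(f)/2."""
    n = len(f) - 1
    for d in range(1, n // 2 + 1):
        for coeffs in product(range(p), repeat=d):
            g = list(coeffs) + [1]          # monic of degree d
            if all(c == 0 for c in poly_rem(f, g, p)):
                return False
    return True

def equivalence_holds(p, q):
    phi_q = [1] * q                          # 1 + x + ... + x^(q-1)
    return is_irreducible(phi_q, p) == (mult_order(p, q) == q - 1)

results = [equivalence_holds(p, q) for p in (3, 5, 7) for q in (3, 5, 7) if p != q]
print(all(results))
```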
|
2,235,610 | <p>I need some help for the proof of the uniformization theorem (Silverman's Advanced Topics ...).</p>
<p>If we have $G_{4}(\Lambda_{1})=G_{4}(\Lambda_{2}) $ and $ G_{6}(\Lambda_{1})=G_{6}(\Lambda_{2})$ (with $\Lambda_{1},\Lambda_{2}$ two lattices and $G_{n}$: Einsenstein serie).</p>
<p>Why we have $\Lambda_{1}=\Lambda_{2}$ ?</p>
| Angina Seng | 436,618 | <p>There is a nice analytic proof of this. The Weierstrass function $\wp(z)$ associated to $\Lambda$ satisfies a differential equation with coefficients derived from $G_4(\Lambda)$ and $G_6(\Lambda)$. It is the unique even function with principal part $1/z^2$ satisfying this. It has poles at
the points of $\Lambda$. So $G_4(\Lambda)$ and $G_6(\Lambda)$ determine $\wp(z)$ which determines $\Lambda$.</p>
|
102,383 | <p>I have a specific Generalized Eigenvalue Problem (GEVP) where I am primarily interested not in solving this problem directly, but in concluding the spectrum of the GEVP from a standard EVP. </p>
<p><strong>The Problem</strong><br>
Let $A$ be an $n\times n$, possibly complex, matrix and $B$ a diagonal, real $n\times n$ matrix with rank at most $n-1$ (i.e. the matrix $B$ has at least one zero row and column).<br>
Solving </p>
<p>$(B\lambda-A)\cdot v=0$ </p>
<p>with $|v|=1$, so that we have $n+1$ equations for $n+1$ unknown is the GEVP. The GEVP can not be reformulated as EVP because $det(B)=0$ and therefore $B$ is not invertible.</p>
<p>As I said, the goal is not just solving this problem (this could be done by solving $\det(\lambda B-A)=0$ to obtain the eigenvalues) but to conclude eigenvalues for the stated GEVP from the following, already solved, EVP (the $n$ eigenvalues $\mu_1\leq\mu_2\leq\dots\leq \mu_n$ of $A$ are known): </p>
<p>$(I\mu-A)\cdot w=0$.</p>
<p><strong>What I have already learned</strong><br>
*As $A$ and $B$ in general do not commute, it is not possible to diagonalize $A$ and $B$ simultaneously. Therefore the spectra will be different.<br>
*If the EVP results in eigenvalues $\mu=0$, then there will be the same number of eigenvalues $\lambda=0$ in the GEVP. (Because in both cases $\det(A)=0$ must be fulfilled and the geometric multiplicity comes from the dimension of $\ker(A)$.)<br>
*For every zero row in $B$, the number of eigenvalues $\lambda$ is one less than in the EVP. This is because the order of the characteristic polynomial (CP) goes down by one for every zero row in $B$ compared to the order of the CP in the EVP.</p>
<p><strong>Questions</strong><br>
*Can it be said which eigenvalues (in addition to the zeros) of the EVP are also eigenvalues of the GEVP? (The eigenvectors may not be the same in both cases, but the eigenvalues may be.)<br>
*Is there a perturbation theory? Can I somehow make a Taylor series of the CP of the GEVP where the zeroth-order term is the CP of the EVP?<br>
*The number of eigenvalues in the GEVP is less than in the EVP; can it be concluded which eigenvalues vanish?</p>
<hr>
<p>In case anybody wants to know, where my question emerges from (this is not essential for my questions but possibly from general interest):</p>
<p>If one wants to determine the stability of a fixed point $x^*$ of an ODE, one needs to solve the variational ODE $\dot{\delta x}(t)=D_xf(x^*)\delta x(t)$, where $\delta x$ is a small perturbation away from the fixed point: $\delta x(t)=x(t)-x^*$. Solving this with the ansatz $\delta x(t)=\delta x_0 e^{\mu t}$ results in the EVP<br>
$\mu\delta x_0 = D_x f(x^*) \delta x_0$.<br>
Using $D_xf(x^*)=A$ and $w=\delta x_0$ results in the stated EVP.</p>
<p>If one has additional constraints in an implicit way<br>
$g(x(t))=0$<br>
the stability of a fixed point of the ODE may change (e.g. the constraint acts in an unstable direction: the eigenvalue of $A$ in this direction is still greater than zero (obviously the matrix $A$ does not change if constraints are imposed), but it is a "forbidden" direction, as the corresponding eigenvector points in a direction which is not allowed due to the constraint).<br>
Taking the time derivative of $g$ results in $D_x g(x)\cdot \dot{x}(t)=0$. Inserting the perturbation away from the fixed point results in<br>
$(D_xg(x^*)+D_x(D_xg(x^*))\cdot \delta x)\cdot \dot{\delta x}(t)+\dots=0$<br>
$D_xg(x^*)\cdot \dot{\delta x}(t)+O(\delta x^2)=0$<br>
Inserting the exponential ansatz results in<br>
$D_xg(x^*)\cdot \delta x_0\mu\approx0$<br>
This means that the small perturbations need to be orthogonal to the gradient on the invariant manifold near the fixed point (i.e. they lie inside the invariant manifold). </p>
<p>One possible way to study the change of stability of the fixed point when constraints are imposed is to solve the EVP and then check consistency with the last equation.<br>
I want to include the constraint directly in the EVP, which leads to the GEVP by simply adding the last equation to the EVP (with $D_x g(x^*)=\hat{B}$ and $w=\delta x_0$):<br>
$(\hat{B}\mu+I\mu-A)\cdot w=0$<br>
and with $\hat{B}+I=B$<br>
$(B\mu-A)\cdot w=0$<br>
The criterion of "low rank $B$" comes from the generic constraints like $g(x_1,\dots,x_n)=x_0^1-x_1\rightarrow D_xg=(-1,0,\dots,0)\rightarrow B=\mbox{diag}(0,1,\dots,1)$. </p>
| Robert Israel | 13,650 | <p>Let's write this in block matrices: $$B = \pmatrix{B_{11} & 0\cr 0 & 0\cr}, \ A = \pmatrix{A_{11} & A_{12}\cr A_{21} & A_{22}\cr}, u = \pmatrix{u_1 \cr u_2\cr}$$ where $B_{11}$ has full rank.
Then the eigenvector equations
$A u = \lambda B u$ become $A_{11} u_1 + A_{12} u_2 = \lambda B_{11} u_1$ and $A_{21} u_1 + A_{22} u_2 = 0$. Suppose $A_{22}$ is invertible. Then we have
$u_2 = - A_{22}^{-1} A_{21} u_1$, and $(A_{11} - A_{12} A_{22}^{-1} A_{21}) u_1 = \lambda B_{11} u_1$. The eigenvalues and eigenvectors of the GEVP correspond to eigenvalues and eigenvectors of the matrix $B_{11}^{-1}(A_{11} - A_{12} A_{22}^{-1} A_{21})$.</p>
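A minimal numeric illustration of this reduction in the 2×2 case, with $B = \operatorname{diag}(b_{11}, 0)$ (a Python sketch; the matrix entries are arbitrary):

```python
# B = diag(b11, 0) and A = [[a11, a12], [a21, a22]] with a22 invertible.
# The reduction above predicts the single generalized eigenvalue
#   lam = (a11 - a12*a21/a22) / b11   (a 1x1 Schur complement).
b11 = 2.0
a11, a12, a21, a22 = 3.0, 1.0, 4.0, 5.0

lam = (a11 - a12 * a21 / a22) / b11

# Direct check that det(lam*B - A) = 0 for these 2x2 matrices:
det = (lam * b11 - a11) * (lam * 0.0 - a22) - (0.0 - a12) * (0.0 - a21)
print(abs(det) < 1e-12)
```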
|
2,564,217 | <p>For a project I'm doing, I'm wrapping an led strip light around a tube. The tube is 19mm in diameter and 915mm tall. I'm going to coil the led strip around the tube from top to bottom and the strip is 8mm wide, so the coils will be 8mm apart. How long does the led strip need to be to fully cover the tube?</p>
<p>This reminds me of a popular question on Math SE about a toilet paper roll, but slightly different. I estimated this by measuring how many 8mm wide circles could fit around the tube, then multiplied by the circumference. However, I don't know how to calculate the exact length of the coil. Out of curiosity, how would you find the exact length of the coil wrapping around the tube with each coil being 8mm apart?</p>
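For comparison with the estimate described above, the idealized helix length can be computed by unrolling each turn into a right triangle whose base is the circumference and whose rise is the 8 mm pitch (a Python sketch; it treats the wrap as a perfect helix and ignores the strip's thickness):

```python
import math

diameter = 19.0   # tube diameter in mm
height = 915.0    # tube height in mm
pitch = 8.0       # vertical rise per turn (the 8 mm strip width)

turns = height / pitch
circumference = math.pi * diameter

# Unrolling one turn of the helix gives a right triangle with base equal to
# the circumference and rise equal to the pitch.
length_per_turn = math.hypot(circumference, pitch)
total_mm = turns * length_per_turn

print(round(total_mm))   # total strip length in mm
```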
| user326210 | 326,210 | <ul>
<li>Your approach seems correct. Let the rows of the table correspond to the 6 outcomes on the black die, and let the columns of the table correspond to the 6 outcomes on the red die.</li>
<li>For each entry in the table, you can fill in the value of $X$ (outcome of the red die) and the value of $Y$ (absolute difference between red and black) corresponding to that entry.</li>
<li>The joint probability distribution on $X$ and $Y$ is the probability of getting any particular $\langle x,y\rangle$ pair. </li>
<li>Because each entry in the table is equally likely (1/36), the probability of a particular $\langle x,y\rangle$ pair is equal to the number of times it occurs in this table, multiplied by 1/36.</li>
</ul>
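The table-counting procedure described above can be sketched in a few lines (Python; here $X$ is the red die and $Y$ the absolute difference):

```python
from fractions import Fraction
from collections import Counter

# Enumerate the 36 equally likely (black, red) outcomes and tally the pairs
# (x, y), where x is the red die and y = |red - black|.
counts = Counter()
for black in range(1, 7):
    for red in range(1, 7):
        counts[(red, abs(red - black))] += 1

# Each table entry has probability 1/36, so the joint probability of a pair
# is its number of occurrences times 1/36.
joint = {pair: Fraction(n, 36) for pair, n in counts.items()}

print(sum(joint.values()) == 1)
```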
|
1,610,700 | <blockquote>
<p>$$\int \frac{x-3}{\sqrt{1-x^2}} \mathrm dx$$</p>
</blockquote>
<p>I know that $\int \frac{1}{1-x^2}\mathrm dx=\arcsin(\frac{x}{1})$ but how can I continue from here? </p>
| zz20s | 213,842 | <p>Write $\int \frac{x-3}{\sqrt{1-x^2}}\mathrm dx=\int \frac{x}{\sqrt{1-x^2}}\mathrm dx-\int \frac{3}{\sqrt{1-x^2}} \mathrm dx$.</p>
<p>For the first term, let $u=1-x^2$, leading to $\mathrm du=-2x \mathrm dx$. If you're still having trouble, write $\frac{-du}{2}=x \mathrm dx$, which appears in the numerator of your first term. </p>
<p>For the second term, you're on the correct path, but it should be $\int \frac{1}{\sqrt{1-x^2}} \mathrm dx=\arcsin x+C$</p>
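Combining the two pieces gives the antiderivative $-\sqrt{1-x^2} - 3\arcsin x + C$, which can be spot-checked by numerical differentiation (a Python sketch):

```python
import math

# Antiderivative assembled from the two pieces:
#   -sqrt(1 - x^2) from the u-substitution, and -3*arcsin(x) from the second term.
def F(x):
    return -math.sqrt(1 - x * x) - 3 * math.asin(x)

def integrand(x):
    return (x - 3) / math.sqrt(1 - x * x)

# Spot-check F'(x) = integrand(x) with a symmetric difference quotient.
ok = True
for x in (-0.7, -0.2, 0.0, 0.3, 0.8):
    step = 1e-6
    deriv = (F(x + step) - F(x - step)) / (2 * step)
    ok = ok and abs(deriv - integrand(x)) < 1e-5

print(ok)
```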
|
2,956,791 | <p>There is an equation here:
<span class="math-container">$$\sqrt{x+1}-x^2+1=0$$</span>
Now we want to rewrite the equation <span class="math-container">$f(x)=0$</span> in the form <span class="math-container">$h(x)=g(x)$</span>, where <span class="math-container">$h$</span> and <span class="math-container">$g$</span> are functions whose graphs we know how to draw.
Then we draw the graphs of <span class="math-container">$h$</span> and <span class="math-container">$g$</span> and find their common points; the number of intersection points is the number of roots of <span class="math-container">$f(x)$</span>, i.e. of the equation above.
Actually, my problem is with drawing the graph for the first equation.
I want you to show how to draw its graph, like for <span class="math-container">$\sqrt{x-1}$</span>, step by step. Please help me with it!</p>
| Siong Thye Goh | 306,553 | <p>Guide:</p>
<ul>
<li><p>First draw <span class="math-container">$\sqrt{x}$</span>.</p></li>
<li><p>Now think of having drawn <span class="math-container">$h(x)$</span>, how would you draw <span class="math-container">$h(x\color{red}+1)$</span>.</p></li>
</ul>
|
2,142,042 | <p>how would you use induction to prove this:</p>
<p>$\sin(x)-\sin(3x)+\sin(5x)-\dots+(-1)^{n+1}\sin[(2n-1)x] = \frac{(-1)^{n+1}\sin(2nx)}{2\cos x} $</p>
<p>I know how you assume it's true for $n=k$ and then prove it for $n=k+1$, but I get to </p>
<p>Left Hand Side: $\frac{(-1)^{k+1}\sin(2kx)}{2\cos x}+(-1)^{k+2}\sin[(2k+1)x]$, but I'm not sure what step to take next.</p>
<p>any help would be appreciated.
Cheers</p>
| Pierpaolo Vivo | 302,446 | <p>$$
\int_0^{\infty} \frac{x^{2p-1} dx}{(ax^2+b)^{p+q}}=\frac{1}{b^{p+q}}\int_0^{\infty} \frac{x^{2p-1}dx}{((a/b)x^2+1)^{p+q}}\ ,
$$
then change variables $(a/b)x^2=t\Rightarrow 2(a/b)xdx=dt$ to obtain
$$
\frac{1}{b^{p+q}}\frac{b}{2a}(b/a)^{p-1}\int_0^{\infty} \frac{t^{p-1}dt}{(t+1)^{p+q}}\ ,
$$
and then use the identity
$$
\mathrm{B}(x,y)=\int_0^{\infty}dt\frac{t^{x-1}}{(1+t)^{x+y}}\ ,
$$
where $\mathrm{B}(x,y)$ is Euler's Beta function.</p>
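Carrying the prefactors through, the integral equals $\mathrm{B}(p,q)/(2 a^p b^q)$. A numeric spot-check for one arbitrary choice of parameters (a Python sketch):

```python
import math

# With the substitution carried through, the prefactors combine to
#   I = B(p, q) / (2 a^p b^q).
p, q, a, b = 1, 2, 2.0, 3.0

beta = math.gamma(p) * math.gamma(q) / math.gamma(p + q)
closed_form = beta / (2 * a ** p * b ** q)

# Crude midpoint rule on [0, 60]; the integrand decays like x^(-2q - 1),
# so the truncated tail is negligible at this tolerance.
n = 200_000
step = 60.0 / n
numeric = 0.0
for i in range(n):
    x = (i + 0.5) * step
    numeric += x ** (2 * p - 1) / (a * x * x + b) ** (p + q) * step

print(abs(numeric - closed_form) < 1e-6)
```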
|
<p>I have a problem that I have to solve. I need to find the center of a circle passing through the point $(x,y)=(2,3)$, given that the radius is $r=3$. Is there an equation for that? I use this equation:<br>
$$(x-h)^2+(y-k)^2=r^2$$
How can I find $h$ and $k$ for the center of the circle if I know a point on the circle and its radius? </p>
| Ross Millikan | 1,827 | <p>You have one equation in two unknowns, so should not expect a unique solution. Draw a circle around $(2,3)$ with radius $3$. Any of the points on this circle could be the center of the circle you seek.</p>
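A quick numeric illustration of this one-parameter family of solutions (a Python sketch):

```python
import math

# Every point at distance r from (2, 3) is a valid center, so the centers
# form a one-parameter family. Sample a few and confirm (2, 3) lies on each
# of the corresponding circles.
px, py, r = 2.0, 3.0, 3.0

ok = True
for k in range(8):
    theta = 2 * math.pi * k / 8
    h, kc = px + r * math.cos(theta), py + r * math.sin(theta)
    # (x - h)^2 + (y - k)^2 = r^2 must hold at the known point (px, py)
    ok = ok and abs((px - h) ** 2 + (py - kc) ** 2 - r * r) < 1e-9

print(ok)
```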
|
14,385 | <p>I have always taught my students that the <span class="math-container">$y$</span>-intercept of a line is the <span class="math-container">$y$</span>-coordinate of the point of intersection of a line with the <span class="math-container">$y$</span>-axis, that is, for the line given by the equation <span class="math-container">$y=mx+y_0$</span>, the <span class="math-container">$y$</span>-intercept is <span class="math-container">$y_0$</span>. I emphasize that that the <span class="math-container">$y$</span>-intercept is the <em>number</em> <span class="math-container">$y_0$</span> and not the <em>point</em> <span class="math-container">$(0,y_0)$</span>.</p>
<p>But I was quite surprised when I recently looked at the <a href="https://en.wikipedia.org/wiki/Intercept" rel="nofollow noreferrer">Wikipedia</a> and <a href="http://mathworld.wolfram.com/x-Intercept.html" rel="nofollow noreferrer">Wolfram</a> <a href="http://mathworld.wolfram.com/y-Intercept.html" rel="nofollow noreferrer">MathWorld</a> entries for <span class="math-container">$y$</span>-intercept because these define the intercept as a point and not as a number ("the point where a line crosses the y-axis" and "The point at which a curve or function crosses the y-axis").</p>
<p>Further investigation yielded inconsistencies: the Wikipedia entry for "<a href="https://en.wikipedia.org/wiki/Line_(geometry)#On_the_Cartesian_plane" rel="nofollow noreferrer">Line (geometry)</a>" states that in the equation <span class="math-container">$y=mx+b$</span>, "<span class="math-container">$b$</span> is the y-intercept of the line"; the Wolfram MathWorld entry for "<a href="http://mathworld.wolfram.com/Line.html" rel="nofollow noreferrer">Line</a>" states that "The line with <span class="math-container">$y$</span>-intercept <span class="math-container">$b$</span> and slope <span class="math-container">$m$</span> is given by the slope-intercept form <span class="math-container">$y=mx+b$</span>."</p>
<hr />
<p><sup>Edit made on February 21, 2021</sup></p>
<p>According to the <em>Dictionary of Analysis, Calculus, and Differential Equations</em> (edited by Douglas N. Clark, published by CRC Press in 2000),</p>
<blockquote>
<p><strong>intercept</strong> The point(s) where a curve or graph of a function in <span class="math-container">$\mathbf R^n$</span> crosses one of the
axes. For the graph of <span class="math-container">$y=f(x)$</span> in <span class="math-container">$\mathbf R^2$</span>, the <span class="math-container">$y$</span>-<em>intercept</em> is the point <span class="math-container">$(0,f(0))$</span> and the <span class="math-container">$x$</span>-<em>intercepts</em> are the points <span class="math-container">$(p,f(p))$</span> such that <span class="math-container">$f(p)=0$</span>.</p>
</blockquote>
<p>Unfortunately, the book does not consistently use that definition.</p>
<blockquote>
<p><strong>slope-intercept equation of line</strong> An equation of the form <span class="math-container">$y=mx+b$</span>, for a straight line in <span class="math-container">$\mathbf R^2$</span>. Here <span class="math-container">$m$</span> is the slope of the line and <span class="math-container">$b$</span> is the <span class="math-container">$y$</span>-intercept; that is, <span class="math-container">$y=b$</span>, when <span class="math-container">$x=0$</span>.</p>
</blockquote>
<p>Thus, even though the book defines an intercept as a point, it uses the term to denote a number.</p>
<hr />
<p>Is there a trusted source targeted at mathematics educators (from, say, a government agency, an educational institution, or an organization) that defines "intercept" and consistently uses that definition?</p>
| skoh | 10,200 | <p>$y$ is a value.</p>
<p>The difference between the two would essentially boil down to <em>dimensions</em> of $y$, with y-as-value being uni-dimensional and y-as-point being n-dimensional. Now, even with multiple $x$'s, all information about the intercept (think about difference in the intercepts of two models) is captured in y-as-value.
The other dimensions never come into play. </p>
|
428,408 | <p>Consider a norm on <span class="math-container">$\mathbb C^2$</span> as <span class="math-container">$\|(z_1,z_2)\|:=\max\{|z_1|,|z_2|,\frac{1}{\sqrt{2}}|z_1+iz_2|\}.$</span></p>
<p><em>Question.</em> Is <span class="math-container">$(\mathbb C^2,\|.\|)$</span> linearly isometric to <span class="math-container">$(\mathbb C^2,\|.\|_{\infty})$</span> where <span class="math-container">$\|(z_1,z_2)\|_\infty:=\max\{|z_1|,|z_2|\}?$</span></p>
| Christian Remling | 48,839 | <p>There is no such map <span class="math-container">$f$</span>. Let's try to map from the second space (with the funny norm, which I'll denote simply by <span class="math-container">$\|\cdot\|$</span>) back to <span class="math-container">$(\mathbb C^2, \|\cdot \|_{\infty})$</span>. Let <span class="math-container">$u=f(e_1)$</span>, <span class="math-container">$v=f(e_2)$</span>, so <span class="math-container">$\|u\|_{\infty}=\|v\|_{\infty}=1$</span>. Since <span class="math-container">$|1\pm i|^2=2$</span>, we have <span class="math-container">$\|(1,\pm 1)\|=1$</span>, and hence also <span class="math-container">$\|u\pm v\|_{\infty}=1$</span>. However, if <span class="math-container">$|z|=1$</span> and <span class="math-container">$w\not= 0$</span>, then <span class="math-container">$|z\pm w|>1$</span> for one choice of sign. So if (say) <span class="math-container">$|u_1|=1$</span>, then <span class="math-container">$v_1=0$</span>. It follows that <span class="math-container">$u=e^{i\alpha}e_1$</span>, <span class="math-container">$v=e^{i\beta}e_2$</span>, or the other way around.</p>
<p>But then <span class="math-container">$\|f((1,-i)/\sqrt{2})\|_{\infty}=1/\sqrt{2}$</span> even though <span class="math-container">$\|(1,-i)/\sqrt{2}\|=1$</span>.</p>
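The two norm computations the argument relies on can be verified numerically (a Python sketch):

```python
import math

# The "funny" norm from the question: max(|z1|, |z2|, |z1 + i*z2|/sqrt(2)).
def funny_norm(z1, z2):
    return max(abs(z1), abs(z2), abs(z1 + 1j * z2) / math.sqrt(2))

# |1 ± i|^2 = 2 gives ||(1, ±1)|| = 1 in the funny norm.
unit_pair = (abs(funny_norm(1, 1) - 1) < 1e-12
             and abs(funny_norm(1, -1) - 1) < 1e-12)

# The witness vector (1, -i)/sqrt(2) has funny norm 1 but sup-norm 1/sqrt(2).
s = 1 / math.sqrt(2)
witness = abs(funny_norm(s, -1j * s) - 1) < 1e-12
sup_norm = max(abs(s), abs(-1j * s))

print(unit_pair, witness, abs(sup_norm - s) < 1e-12)
```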
|
271,105 | <p><strong>tl;dr</strong> What are some good workflows for developing and running data processing pipelines with Mathematica?</p>
<hr />
<p>I sometimes develop data processing pipelines with Mathematica. I load some data, transform it, and derive some summary results. I tend to experiment quite a bit when doing this, checking the output after each step. The notebook environment is very convenient for this.</p>
<p>When I'm done, I sometimes need to run this pipeline on multiple datasets. The notebook is not convenient for this, unfortunately. It's better to wrap up the sequence of operations into a function and call that function with several pieces of data.</p>
<p>The problems with this approach are:</p>
<ul>
<li>Collecting all the steps into a function takes some work, and is error-prone.</li>
<li>If I need to modify the pipeline later, it is very inconvenient to work with a single large function. It does not make it easy to look at partial results.</li>
</ul>
<p>How do people deal with this situation? What is the workflow you found most convenient?</p>
| David Keith | 44,700 | <p>I have frequently encountered a need to do the same thing. An example is a complex notebook which loads an image and analyzes it as scientific data. I want to run the same analysis on a large number of images to obtain a result for each. But the notebook is long and complex. Trying to merge it into a single cell for a function definition makes it almost unreadable and extremely difficult to debug when something breaks the analysis method.</p>
<p>My solution is to leave the analysis algorithm in the original notebook and call that notebook from a controlling notebook.</p>
<p>Here is how I do that:</p>
<p>The analysis notebook leads with a definition that defines the file name of the image to be analyzed. This notebook can be evaluated as a standalone notebook in development or later in debugging.</p>
<p>When I am satisfied that the analysis notebook is working correctly, I save a version in which the image file name is not defined, by just commenting out the leading definition.</p>
<p>I then create a controlling notebook which is usually very simple. It often runs a loop (or uses Map) to analyze a large number of images and saves the result for each. To do this it opens the analysis notebook and obtains a handle to it using, for example:</p>
<pre><code>nb = NotebookOpen["AnalysisNotebook.nb"]
</code></pre>
<p>It now relies on the fact that the controlling notebook and the newly opened analysis notebook share the same kernel and therefore the same symbol definitions. It can run a loop like this:</p>
<ol>
<li><p>Define an image file name using the symbol name used in the analysis notebook.</p>
</li>
<li><p>Use <code>NotebookEvaluate[nb]</code> to run the analysis.</p>
</li>
<li><p>Save the results which have been produced as symbol definitions by the analysis notebook. Often saved in a Dataset or just appended to a list.</p>
</li>
<li><p>Define a new image filename and do it again until done.</p>
</li>
</ol>
<p>I wrote this as though it was done in a loop, but all of this can just be done by a function in the controlling notebook that is mapped onto a list of image file names.</p>
<p>I find this works really well. If some image breaks the analysis algorithm. I just open the analysis notebook, uncomment the leading definition and revise it to point to the offending image. Then execute the notebook a cell at a time in the usual way to locate the problem.</p>
<p>Kind regards,
David</p>
|
1,290,363 | <p>So I already proved Closure and Associativity, now I'm trying to find the identity element of this operation defined as:
$$
a * b = a + b - ab
$$</p>
<p>But my identity element gets cancelled...</p>
<p>(The set defined in this exercise is the real numbers.)</p>
<p><img src="https://i.stack.imgur.com/ZchjC.jpg" alt="enter image description here"></p>
| Sammy Black | 6,509 | <p>Your calculation was good up to
$$
e = ae.
$$
Remember that if $e$ is to be the identity, then you want this equation to hold <strong>for all $a$</strong>. If you like, you can rewrite the equation as
$$
0 = (a-1)e,
$$
and since this must hold for every $a$, the only solution is $e = 0$.</p>
<hr>
<p>By the way, there's a neat way to understand this operation, using the function $r$, defined by $r(x) = 1 - x$. The equation $c = a * b$ is equivalent to
$$
\begin{align}
r(c) &= r(a) \cdot r(b) \\
(1-c) &= (1-a) (1-b) \\
1 - c &= 1 - a - b + ab \\
c &= a + b - ab
\end{align}
$$</p>
<p>Luckily $r$ is its own inverse: $r(r(x)) = x$ for all $x$. (It's a reflection about the point $x = \tfrac12$.) So, all of the structure and properties of the <em>usual</em> multiplication on the reals is transferred over (and back) to the $*$ multiplication by $r$.</p>
<p>The multiplicative identity for usual multiplication is $1$, so the multiplicative identity for $*$ multiplication is
$$
r(1) = 1 - 1 = 0.
$$</p>
<p>In algebra, a map such as $r$ that acts as a dictionary, translating from one structure to an equivalent one, is called an <em><a href="http://en.wikipedia.org/wiki/Isomorphism" rel="nofollow">isomorphism</a></em>.</p>
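<p>A quick numeric sanity check of this (just an illustration; the function names are my own) that $r$ really transports ordinary multiplication to $*$, and that $0$ is the identity:</p>

```python
import random

def star(a, b):
    # the operation a * b = a + b - ab
    return a + b - a * b

def r(x):
    # r(x) = 1 - x, the dictionary between (R, *) and (R, multiplication)
    return 1 - x

random.seed(0)
for _ in range(100):
    a, b = random.uniform(-5, 5), random.uniform(-5, 5)
    # r(a * b) = r(a) . r(b): star is ordinary multiplication in disguise
    assert abs(r(star(a, b)) - r(a) * r(b)) < 1e-9
    # 0 = r(1) is the identity for star
    assert star(a, 0) == a and star(0, a) == a
```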
|
4,150,320 | <p>I need to prove <span class="math-container">$\displaystyle \lim _{x\to 2-} \left(\frac{|x-2|}{x^2-4}\right)=\frac{-1}{4}$</span></p>
<p>I know the definition: <span class="math-container">$\forall \varepsilon >0, \exists \delta >0$</span> such that if <span class="math-container">$0<2-x<\delta$</span> then <span class="math-container">$\left|\left(\dfrac{|x-2|}{x^2-4}\right)+\dfrac{1}{4}\right|<\varepsilon$</span></p>
<p>And I also know how to calculate a limit but I don't know how to prove that a limit is correct</p>
| miracle173 | 11,206 | <p>I wrote in a comment that one should do a check for the equations, and that doing so may reveal a way to deduce them. So for equation <span class="math-container">$(6)$</span>
<span class="math-container">$$Q_{1} =\frac{p_{22} - p_{12} }{p_{11}+ p_{22} -2 p_{12} } Q+ \frac{\left( p_{23} -p_{13} \right) Q_{3} +\left( p_{24} -p_{14} \right) Q_{4} }{p_{11}+ p_{22} -2 p_{12} } \tag{6}$$</span>
where I removed the <span class="math-container">$\cdots$</span> part, I did such a check. But I used a CAS (Maxima) to do these calculations and added some information as comments. I hope one can follow these calculations even if one is not familiar with Maxima.</p>
<pre>
(%i3) /* %i is the input line, %o is the output line
this is equation (6), it is stored in variable e1 */
e1:Q1 = ((p22-p12)/(p11+p22+(-2)*p12))*Q
+((p23-p13)*Q3+(p24-p14)*Q4)/(p11+p22+(-2)*p12)
(%o3) Q1 = (Q4*(p24-p14)+Q3*(p23-p13))/(p22-2*p12+p11)
+(Q*(p22-p12))/(p22-2*p12+p11)
(%i4) /* we replace Q by Q1 + Q2 in e1 and store the resulting equation in e2 */
ev(e2:e1,Q = Q1+Q2)
(%o4) Q1 = (Q4*(p24-p14)+Q3*(p23-p13))/(p22-2*p12+p11)
+((Q2+Q1)*(p22-p12))/(p22-2*p12+p11)
(%i5) /* from (2), (3) and (4) we get an equation that we store in e3: */
e3:p11*Q1+p12*Q2+p13*Q3+p14*Q4 = p21*Q1+p22*Q2+p23*Q3+p24*Q4
(%o5) Q4*p14+Q3*p13+Q2*p12+Q1*p11 = Q4*p24+Q3*p23+Q2*p22+Q1*p21
(%i6) /* from equation e3 we can calculate Q4, we store this equation in e4 */
e4:solve(e3,Q4)
(%o6) [Q4 = -(Q3*p23+Q2*p22+Q1*p21-Q3*p13-Q2*p12-Q1*p11)/(p24-p14)]
(%i7) /* now we check if the solution e2 is correct by inserting e4 and
store the resulting equation in e5 */
ev(e5:e2,e4)
(%o7) Q1 = (Q3*(p23-p13)-Q3*p23-Q2*p22-Q1*p21+Q3*p13+Q2*p12+Q1*p11)
/(p22-2*p12+p11)
+((Q2+Q1)*(p22-p12))/(p22-2*p12+p11)
(%i8) /* we expand the paranthesis */
ev(e6:e5,expand)
(%o8) Q1 = (Q1*p22)/(p22-2*p12+p11)-(Q1*p21)/(p22-2*p12+p11)
-(Q1*p12)/(p22-2*p12+p11)
+(Q1*p11)/(p22-2*p12+p11)
(%i9) /* bring it on the same denominator */
e7:rat(e6)
(%o9) Q1 = (Q1*p22-Q1*p21-Q1*p12+Q1*p11)/(p22-2*p12+p11)
(%i10) /* and use the fact that p12=p21 to see that the LHS is equal to the RHS */
ev(e8:e7,p21 = p12)
(%o10) Q1 = Q1
</pre>
<p>So following these calculations from the bottom (line %o10) to the top (line %i3) should enable you to construct a proof to deduce <span class="math-container">$(6)$</span>.</p>
|
3,243,733 | <p><strong>Use induction to show that the Fibonacci numbers satisfy F(n) <span class="math-container">$\ge$</span> <span class="math-container">$(2 ^ {(n-1) / 2})$</span> for all n <span class="math-container">$\ge$</span> 3</strong></p>
<p>My work thus far:</p>
<blockquote>
<p>Base Case: F(3) <span class="math-container">$\ge$</span> <span class="math-container">$2 ^ {(3-1) / 2}$</span> => F(3) <span class="math-container">$\ge$</span> <span class="math-container">$2 ^ {1}$</span></p>
<p>Induction Hypothesis: Assume <span class="math-container">$F(n) \ge 2^{(n-1)/2}$</span> holds for all <span class="math-container">$n$</span> with <span class="math-container">$3 \le n \le k$</span></p>
<p>Inductive step: for <span class="math-container">$k + 1$</span>, we must show <span class="math-container">$F(k + 1)$</span> <span class="math-container">$\ge$</span> <span class="math-container">$2 ^ {((k + 1) - 1) / 2}$</span> =
<span class="math-container">$2 ^ {k / 2}$</span></p>
</blockquote>
<p>I'm not sure where to go from here.</p>
| Anurag Singh | 466,382 | <p>Clearly, <span class="math-container">$F(n) \geq F(n-1)$</span> for all <span class="math-container">$n\geq 2.$</span> From the induction hypothesis we have that <span class="math-container">$F(n-1) \geq 2^{(n-2)/2}$</span>.</p>
<p>Then observe that, for all <span class="math-container">$n\geq 3$</span></p>
<p><span class="math-container">\begin{align*}F(n+1) & =F(n)+F(n-1) \\
&\geq F(n-1)+F(n-1)\\
&= 2F(n-1)\\
&\geq 2 \times 2^{(n-2)/2} ~~~ \text{(from the induction hypothesis)}\\
&=2^{n/2}
\end{align*}</span></p>
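<p>Not a substitute for the induction, but a quick numeric check of the inequality for small <span class="math-container">$n$</span> (the function name is my own):</p>

```python
def fib(n):
    # F(1) = F(2) = 1, F(n) = F(n-1) + F(n-2)
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

# F(n) >= 2^((n-1)/2) for all n >= 3
for n in range(3, 40):
    assert fib(n) >= 2 ** ((n - 1) / 2)
```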
|
177,574 | <p>Fix $k \in \mathbb{N}$, $k \geq 1$. Let $p \in [0,1]$ and $x = (x_0, \ldots, x_k)$ be a $(k+1)$-dimensional <em>real</em> vector, and define
$$S(p,x) = -x_0^2 + \sum_{i=0}^k {k \choose i} p^i (1 - p)^{k - i} \cdot (x_i - p)^2.$$
Experiments show that for small values of $k$
$$\exists x \in \mathbb{R}^{k+1} \,.\, \forall p \in [0,1] \,.\, S(p,x) = 0.$$
In other words, there are $x_i$'s such that $S(p,x)$ is identically zero as a polynomial in $p$.</p>

<p>For a given $k$ we can expand $S(p,x)$ as a polynomial in $p$ and equate the coefficients to $0$. For $k = 2$ we get
\begin{align*}
0&=0 \\
-x_0^2-2 x_0+x_1^2&=0 \\
2 x_0-2 x_1+1&=0 \\
\end{align*}
and this has two solutions:
$$x = (\frac{1}{2} (-1-\sqrt{2}),\frac{1}{2},\frac{1}{2} (3+\sqrt{2}))$$
and
$$x = (\frac{1}{2} (-1+\sqrt{2}),\frac{1}{2},\frac{1}{2} (3-\sqrt{2})).$$
For $k = 1, 2, 3, 4, 5, 6, 7$ there are $1, 2, 4, 8, 14, 28, 48$ solutions respectively, according to Mathematica. <a href="https://oeis.org/search?q=1%2C%202%2C%204%2C%208%2C%2014%2C%2028%2C%2048" rel="nofollow">According to OEIS</a> this is <a href="https://oeis.org/A068912" rel="nofollow">A068912</a>, "the number of $n$ step walks (each step $\pm 1$ starting from $0$) which are never more than $3$ or less than $-3$." This is kind of interesting because the problem arises in statistics, see <a href="http://www.win-vector.com/blog/2014/07/frequenstist-inference-only-seems-easy/" rel="nofollow">John Mount's blog post</a> for background.</p>
<p><strong>Question:</strong> Is there a solution for every $k$?</p>
<p><strong>Addendum:</strong> John says he wants solutions in $[0,1]^{k+1}$...</p>
<hr>
<p>Here is the relevant Mathematica code:</p>
<pre><code>s[k_, p_, x_] := Sum[Binomial[k, i] * p^i* (1 - p)^(k - i)* (Subscript[x, i] - p)^2, {i, 0, k}] - Subscript[x, 0]^2
xs[k_] := Table[Subscript[x, i], {i, 0, k}]
system[k_, p_, x_] := Thread[CoefficientList[s[k, p, x], p] == 0]
solutions[k_] := Solve[system[k, p, x], xs[k], Reals]
</code></pre>
<p>To see the system of equations for $k = 4$, type</p>
<pre><code>system[4, p, x] // ColumnForm
</code></pre>
<p>To see the solutions for $k = 4$, type</p>
<pre><code>solutions[4]
</code></pre>
<p>To make a table of counts of solutions up to $k = 7$, type</p>
<pre><code>Table[{k, Length@solutions[k]}, {k, 1, 7}] // ColumnForm
</code></pre>
| John Mount | 56,665 | <p>This is not a solution but some background to the question.</p>
<p>Define $$S(k,p,x) = \sum_{i=0}^k {k \choose i} p^i (1-p)^{k-i} (x_i-p)^2.$$
Define $$f(k) = \mathrm{argmin}_x \max_p S(k,p,x).$$
Then $f(k)$ is the minimax square-loss solution to trying to estimate the win rate of a random process by observing $k$ results (Wald wrote on this). The neat thing is: we <a href="http://winvector.github.io/freq/minimax.pdf" rel="nofollow">can show</a> if there is a real solution $x$ in the interior of $[0,1]^{k+1}$ to $S(k,p,x) = x_0^2$ then $x=f(k)$. Meaning we avoided two nasty quantifiers. See <a href="https://github.com/WinVector/Examples/blob/master/freq/python/freqMin.rst" rel="nofollow">this file</a> for some experimental examples. Also, a change of variables $z = p/(1-p)$ makes collecting terms easier.</p>
|
177,574 | <p>Fix $k \in \mathbb{N}$, $k \geq 1$. Let $p \in [0,1]$ and $x = (x_0, \ldots, x_k)$ be a $(k+1)$-dimensional <em>real</em> vector, and define
$$S(p,x) = -x_0^2 + \sum_{i=0}^k {k \choose i} p^i (1 - p)^{k - i} \cdot (x_i - p)^2.$$
Experiments show that for small values of $k$
$$\exists x \in \mathbb{R}^{k+1} \,.\, \forall p \in [0,1] \,.\, S(p,x) = 0.$$
In other words, there are $x_i$'s such that $S(p,x)$ is identically zero as a polynomial in $p$.</p>

<p>For a given $k$ we can expand $S(p,x)$ as a polynomial in $p$ and equate the coefficients to $0$. For $k = 2$ we get
\begin{align*}
0&=0 \\
-x_0^2-2 x_0+x_1^2&=0 \\
2 x_0-2 x_1+1&=0 \\
\end{align*}
and this has two solutions:
$$x = (\frac{1}{2} (-1-\sqrt{2}),\frac{1}{2},\frac{1}{2} (3+\sqrt{2}))$$
and
$$x = (\frac{1}{2} (-1+\sqrt{2}),\frac{1}{2},\frac{1}{2} (3-\sqrt{2})).$$
For $k = 1, 2, 3, 4, 5, 6, 7$ there are $1, 2, 4, 8, 14, 28, 48$ solutions respectively, according to Mathematica. <a href="https://oeis.org/search?q=1%2C%202%2C%204%2C%208%2C%2014%2C%2028%2C%2048" rel="nofollow">According to OEIS</a> this is <a href="https://oeis.org/A068912" rel="nofollow">A068912</a>, "the number of $n$ step walks (each step $\pm 1$ starting from $0$) which are never more than $3$ or less than $-3$." This is kind of interesting because the problem arises in statistics, see <a href="http://www.win-vector.com/blog/2014/07/frequenstist-inference-only-seems-easy/" rel="nofollow">John Mount's blog post</a> for background.</p>
<p><strong>Question:</strong> Is there a solution for every $k$?</p>
<p><strong>Addendum:</strong> John says he wants solutions in $[0,1]^{k+1}$...</p>
<hr>
<p>Here is the relevant Mathematica code:</p>
<pre><code>s[k_, p_, x_] := Sum[Binomial[k, i] * p^i* (1 - p)^(k - i)* (Subscript[x, i] - p)^2, {i, 0, k}] - Subscript[x, 0]^2
xs[k_] := Table[Subscript[x, i], {i, 0, k}]
system[k_, p_, x_] := Thread[CoefficientList[s[k, p, x], p] == 0]
solutions[k_] := Solve[system[k, p, x], xs[k], Reals]
</code></pre>
<p>To see the system of equations for $k = 4$, type</p>
<pre><code>system[4, p, x] // ColumnForm
</code></pre>
<p>To see the solutions for $k = 4$, type</p>
<pre><code>solutions[4]
</code></pre>
<p>To make a table of counts of solutions up to $k = 7$, type</p>
<pre><code>Table[{k, Length@solutions[k]}, {k, 1, 7}] // ColumnForm
</code></pre>
| Vladimir Dotsenko | 1,306 | <p>The solutions described via the link <a href="http://winvector.github.io/freq/explicitSolution.html" rel="nofollow">http://winvector.github.io/freq/explicitSolution.html</a> (posted in one of the earlier answers) can be given by the following formula:
$$
x_i=\frac{(k-2i)\sqrt{k}+(2i-1)k}{2k(k-1)}=\frac{1}{2(1+\sqrt{k})}+\frac{i}{\sqrt{k}(1+\sqrt{k})}.
$$
Note that (when $k$ is fixed):</p>
<ul>
<li><p>$x_i$ is an increasing function of $i$, and we have $$
x_0=\frac{1}{2(1+\sqrt{k})}, \quad x_k=\frac{1+2\sqrt{k}}{2+2\sqrt{k}},
$$<br>
so all these numbers are between 0 and 1.</p></li>
<li><p>Moreover, we have $x_i=a+bi$, so $S(p,x)$ can be represented as
$$
-x_0^2+\sum_{i=0}^k\binom{k}{i}p^i(1-p)^{k-i}(U+Vi+Wi^2),
$$
where $U$, $V$ and $W$ depend on $k$ and $p$ but not on $i$. It remains to use formulas
\begin{gather}
\sum_{i=0}^k\binom{k}{i}p^i(1-p)^{k-i}=1,\\
\sum_{i=0}^k i\binom{k}{i}p^i(1-p)^{k-i}=kp,\\
\sum_{i=0}^k i(i-1)\binom{k}{i}p^i(1-p)^{k-i}=k(k-1)p^2
\end{gather}
(which are obvious) to check directly that the formulas for $x_i$ as above give a solution.</p></li>
</ul>
<p>This solution also simplifies to $x_i = (\frac{1}{2}\sqrt{k} + i)/(\sqrt{k}+k)$ which is exactly the smoothed estimate of the win-rate of a coin flipped $k$ times showing $i$ wins with $\sqrt{k}$ "pseudo-observations" (half wins, half losses) added first (or Bayesian inference starting with $\beta(\sqrt{k}/2,\sqrt{k}/2)$ priors, $\beta(1/2,1/2)$ being Jeffreys priors, and $\beta(1,1)$ being standard Laplace smoothing).</p>
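<p>As a numerical sanity check of this closed form (my own sketch, not part of the argument), one can verify that it makes <span class="math-container">$S$</span> vanish identically in <span class="math-container">$p$</span>, up to floating-point error:</p>

```python
from math import comb, sqrt

def S(k, p, x):
    # S(p, x) = -x_0^2 + sum_i C(k, i) p^i (1-p)^(k-i) (x_i - p)^2
    total = -x[0] ** 2
    for i in range(k + 1):
        total += comb(k, i) * p**i * (1 - p) ** (k - i) * (x[i] - p) ** 2
    return total

def x_closed_form(k):
    # x_i = (sqrt(k)/2 + i) / (sqrt(k) + k)
    return [(sqrt(k) / 2 + i) / (sqrt(k) + k) for i in range(k + 1)]

# S is identically zero in p for the closed-form x (up to float error)
for k in range(2, 10):
    x = x_closed_form(k)
    assert all(abs(S(k, p / 10, x)) < 1e-12 for p in range(11))
```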
|
95,598 | <p>I have a wavefunction $\psi(x,t)=Ae^{i(kx-\omega t)}+ Be^{-i(kx+\omega t)}$. $A$ and $B$ are complex constants.</p>
<p>I am trying to find the probability density, so I need to find the product of $\psi$ with its complex conjugate. The problem is, I'm not sure what its complex conjugate is. I know the complex conjugate of $5+4i$ is $5-4i$, but what would be the complex conjugate of $\psi$? Is it just $-Ae^{i(kx-\omega t)}-Be^{-i(kx+\omega t)}$?</p>
| Sasha | 11,069 | <p>The <a href="http://en.wikipedia.org/wiki/Complex_conjugate" rel="nofollow">complex conjugation</a> will map $A \to \bar{A}$ and $B \to \bar{B}$. </p>
<p>If, say $A= 5 + 4 i$, then $\bar{A} = 5 - 4 i$, as you noted. So
$$
\bar{\psi} = \bar{A} \mathrm{e}^{-i (k x - \omega t)} + \bar{B} \mathrm{e}^{i (k x + \omega t)}
$$</p>
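<p>If it helps to check this numerically (purely illustrative; the function names are mine), <code>cmath</code> confirms that conjugating $\psi$ is the same as conjugating $A$ and $B$ and flipping the signs of the exponents:</p>

```python
import cmath

def psi(A, B, k, w, x, t):
    return A * cmath.exp(1j * (k * x - w * t)) + B * cmath.exp(-1j * (k * x + w * t))

def psi_bar(A, B, k, w, x, t):
    # conjugation sends i -> -i, A -> conj(A), B -> conj(B)
    return (A.conjugate() * cmath.exp(-1j * (k * x - w * t))
            + B.conjugate() * cmath.exp(1j * (k * x + w * t)))

A, B = 5 + 4j, 2 - 3j
k, w, x, t = 1.3, 0.7, 2.1, 0.4
assert abs(psi(A, B, k, w, x, t).conjugate() - psi_bar(A, B, k, w, x, t)) < 1e-12
```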
|
2,507,328 | <p>A bit of a beginner question, but I've been told that, for any polynomial of degree 2 or higher, there is a turning point between any 2 x-intercepts. That is true. But the controversy here is that apparently it has to be exactly in the middle between the 2. I'm not talking about quadratics, where the only turning point is at $x = -b/2a$. I am talking about polynomials of higher degree. For example, $y= x^4−2x^2+x$. It seems that I am wrong, even for <em>most</em> polynomials: the turning point is usually not in the middle.</p>
<p>1. Is this completely false? (That it has to be in the middle) Or is it the case for specific types of graphs?</p>
<p>2. If it is, then what is the standard rule for finding the turning point between any 2 x-intercepts for any given polynomial</p>
<hr>
<p>I am just in year 10, so please bear with me if this question is too simple.</p>
| fonini | 113,664 | <p>If you want to use just the fact that "there is a Cauchy sequence", then indeed you're going to have a hard time. It's much easier if instead you <em>construct</em> that sequence yourself. Tip: do something like $\pi = \mbox{limit of $3; 3.1; 3.14; \ldots$}$</p>
|
1,344,161 | <p>Suppose $k\geq 2$ is an integer. I want to show $$\frac{1+k+k(k-2)}{1+\frac{k-1}{k}+\frac{(-1-\sqrt{k-1} )^2}{k(k-2)}}$$ is not an integer. It is equal to $$\frac{(k-2) k (k^2-k+1)}{2 (k^2-2 k+\sqrt{k-1}+1)}.$$</p>
<p>If I can show this then I will be able to finish my proof of the <a href="https://en.wikipedia.org/wiki/Friendship_graph#Friendship_theorem" rel="nofollow">Friendship Theorem</a>. We may assume $k$ is even if that helps any.</p>
| Bill Dubuque | 242 | <p>It is a special case of the following</p>
<p><strong>Theorem</strong> <span class="math-container">$\ $</span> Suppose <span class="math-container">$\,f,g\in \Bbb Z[x]\,$</span> are polynomials, and <span class="math-container">$\,j\neq 0\,$</span> and <span class="math-container">$\,k,a\,$</span> are all integers.</p>
<p>If <span class="math-container">$\,\color{#c00}{f(a)=\pm1},\ \color{#0a0}{g(a) = 0}\ $</span> then <span class="math-container">$\,\dfrac{f(k)}{g(k)+ j\,\sqrt{k-a}} = i\in\Bbb Z$</span> <span class="math-container">$\ \Rightarrow\ $</span> <span class="math-container">$ k = 1\!+\!a\ $</span> or <span class="math-container">$\ f(k) = 0$</span></p>
<p><strong>Proof</strong> <span class="math-container">$\ $</span> Clearing denom's we get <span class="math-container">$\, f(k)-ig(k) =\, ij \sqrt{k-a}.\ $</span> If <span class="math-container">$\,f(k)\neq 0\,$</span> then <span class="math-container">$\,i\neq 0\,$</span> therefore upon dividing by <span class="math-container">$\,ij\neq 0\,$</span> we deduce that <span class="math-container">$\,\sqrt{k-a}\in\Bbb Q.\,$</span> By the <a href="https://math.stackexchange.com/a/658058/242">Rational Root Test</a> we infer <span class="math-container">$\,\sqrt{k-a} = n\in\Bbb Z,\,$</span> so <span class="math-container">$\, k = n^2+a.\ $</span> If <span class="math-container">$\,n> 1\,$</span> then substituting this into the fraction</p>
<p><span class="math-container">$$\dfrac{f(n^2+a)}{g(n^2+a)+ jn}\in\Bbb Z\qquad $$</span></p>
<p><span class="math-container">$\qquad n\,\nmid f(n^2+a)\ $</span> since <span class="math-container">$\,\ {\rm mod}\ n\!:\,\ n\equiv 0\,\Rightarrow\,f(n^2+a)\equiv \color{#c00}{f(a)\equiv \pm1 \not \equiv 0}$</span></p>
<p><span class="math-container">$\qquad n\,\mid g(n^2+a)\ $</span> since <span class="math-container">$\,\ {\rm mod}\ n\!:\,\ n\equiv 0\,\Rightarrow\,g(n^2+a)\equiv \color{#0a0}{g(a)\equiv 0}$</span></p>
<p>so <span class="math-container">$\,n>1\,$</span> divides the denominator but not the numerator, contra the fraction is an integer. Therefore <span class="math-container">$\,n = 1\,$</span> hence <span class="math-container">$\,k = n^2+a = 1+a.$</span></p>
|
3,436,219 | <p><img src="https://i.stack.imgur.com/VplT3.jpg" alt="enter image description here"></p>
<p>I could use Gaussian elimination if I make some assumptions, or does anyone have another suggestion?</p>
| Jolly Llama | 599,716 | <p>Andrew Chin's answer definitely works. Define <span class="math-container">$x\boxdot y$</span> by
<span class="math-container">$$x \boxdot y = x - y - 2,$$</span>
and it fits the data given. </p>
|
2,470,958 | <p>Let's say that I've got the following IVP:</p>
<p>$\frac{dy}{dx} = f(x,y)$</p>
<p>$y(x_0) = y_0$</p>
<p>And I want conditions that guarantee existence and uniqueness of its solution.</p>
<p>On the one hand I've got the Picard–Lindelöf theorem. It asks that there exists a rectangle $R = [a,b] \times [c,d]$, containing $(x_0, y_0)$ as an interior point, where $f$ is continuous in $x$ and Lipschitz continuous in $y$.</p>
<p>On the other hand I've got a theorem, which I've encountered in many undergraduate text books, that requires $f$ and $\frac{\partial f}{\partial y}$ to be continuous in the aforementioned rectangle. </p>
<p>Are these two different theorems? It seems to me that the hypotheses of the first one are implied by those of the second one. But in that case, why would some authors prefer this more restrictive form of the theorem? Could it be just so that students don't need to learn the concept of Lipschitz continuity? </p>
| Hans Lundmark | 1,242 | <p>Suppose $\frac{\partial f}{\partial y}$ is continuous. Then $|\frac{\partial f}{\partial y}|$ has a greatest value $K$ on the rectangle in question (by the extreme value theorem). The mean value theorem for derivatives says that
$$
f(x,y_2)-f(x,y_1) = \frac{\partial f}{\partial y}(x,\eta) \, (y_2-y_1)
$$
for some $\eta$ between $y_1$ and $y_2$, which implies
$$
|f(x,y_2)-f(x,y_1)| \le K |y_2-y_1|
,
$$
so $f$ is Lipschitz continuous with respect to $y$ on that rectangle, with Lipschitz constant $K$.</p>
<p>Hence the assumptions that $f$ and $\frac{\partial f}{\partial y}$ are continuous are just (somewhat weaker) replacements for the “proper” assumptions about Lipschitz continuity, with the advantages that they are often very easy to verify, and that you don't have to explain to your readers what Lipschitz continuity means.</p>
|
2,548,942 | <p>What would be the best approach to calculate the following limits </p>
<p>$$ \lim_{x \rightarrow 0} \left (1+\frac {1} {\arctan x} \right)^{\sin x}, \qquad \lim_{x \rightarrow 0} \frac {\tan ^7 x} {\ln (7x+1)} $$
in a basic way, using some special limits, without L'Hospital's rule? </p>
| user | 505,767 | <p>A solution for the first <strong>by Taylor series</strong>:</p>
<p>we can write the limit as follow:
$$\left (1+\frac {1} {\arctan x} \right)^{\sin x}=e^{sinx \ \log{\left (1+\frac {1} {\arctan x} \right)}}$$</p>
<p>Compute the Taylor series expansion of each term to first order:
$$\sin x = x+o(x)$$</p>
<p>$$\log{\left (1+\frac {1}{\arctan x} \right)} =\log{\left (\frac {1+ \arctan x}{\arctan x} \right)} =-\log{\left (\frac {\arctan x}{1+\arctan x} \right)}\\ =-\log{\left (\frac {x+o(x)}{1+x+o(x)} \right)} =-\log{\left [(x+o(x))\cdot(1-x+o(x)) \right]} =-\log{(x+o(x))}$$</p>
<p>Thus:
$$\sin x \ \log{\left (1+\frac {1}{\arctan x} \right)}=(x+o(x))\cdot [-\log{(x+o(x))}]=-x \log x + o(x\log x)\to 0$$</p>
<p>Finally:</p>
<p>$$\left (1+\frac {1} {\arctan x} \right)^{\sin x}\to e^0 =1$$</p>
|
117,500 | <p>How would you go about finding the conjugacy classes of the nonabelian group of order 21, $G:=\left\langle x,y | x^7=e=y^3, y^{-1}xy=x^2\right\rangle$?</p>
| Mikko Korhonen | 17,384 | <p>If $G$ is a nonabelian group of order $21$, then $G$ has trivial center. Otherwise $G/Z(G)$ would be cyclic and $G$ would be abelian.</p>
<p>Thus any element of order $3$ has its centralizer of order $3$ and thus has $7$ elements in its conjugacy class. By the same argument, an element of order $7$ has $3$ elements in its conjugacy class.</p>
<p>Let $a$ and $b$ be the number of conjugacy classes of order $3$ and $7$, respectively. By the class equation, $21 = 1 + 7a + 3b$. This implies that $a = b = 2$, because $a$ and $b$ are $\geq 1$ by Cauchy's theorem. Therefore there are five conjugacy classes: one for the identity, two containing elements of order $3$ and two containing elements of order $7$.</p>
<p>Since $y^{-1}xy = x^2$, we get $y^{-2}xy^2 = y^{-1}x^2y = x^4$. Therefore the conjugacy class of $x$ is $\{x, x^2, x^4\}$. The rest of the elements of order $7$ must be in the other conjugacy class, which is $\{x^3, x^5, x^6\}$.</p>
<p>We notice that $xyx^{-1} = yx$, $x^2yx^{-2} = yx^2$ and in general $x^jyx^{-j} = yx^{j}$. Thus in the two remaining conjugacy classes, one of them has all the elements of the form $yx^j$ and the other one all the elements of the form $y^2x^j$.</p>
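<p>For anyone who wants to double-check this by brute force (just a sketch of my own; I encode the element <span class="math-container">$x^iy^j$</span> as the pair <span class="math-container">$(i,j)$</span> and use <span class="math-container">$yx = x^4y$</span>, which is equivalent to <span class="math-container">$y^{-1}xy = x^2$</span>):</p>

```python
from itertools import product

def mul(a, b):
    # (i, j) stands for x^i y^j; from y x = x^4 y we get y^j x^k = x^(k*4^j) y^j
    (i, j), (k, l) = a, b
    return ((i + k * pow(4, j, 7)) % 7, (j + l) % 3)

elements = list(product(range(7), range(3)))

def inv(a):
    # brute-force inverse in a group of order 21
    return next(b for b in elements if mul(a, b) == (0, 0))

# collect all conjugacy classes {h g h^-1 : h in G}
classes = {frozenset(mul(mul(h, g), inv(h)) for h in elements) for g in elements}

assert sorted(len(c) for c in classes) == [1, 3, 3, 7, 7]
# the class of x = (1, 0) is {x, x^2, x^4}
assert frozenset({(1, 0), (2, 0), (4, 0)}) in classes
```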
|
514,338 | <p>Okay so my algebra knowledge is pretty guff..</p>
<p>I am taking a control systems class and pretty much all the questions I am expected to revise, are about doing this algebraic manipulation and I don't know what steps the tutor is taking to do it..</p>
<p>Okay here goes..</p>
<p>If the transfer function of a system is $G(s) = 3/(20s+1)$, then the closed loop version of that is</p>
<p>$$G(s)/(G(s) + 1)$$ </p>
<p>so that would be </p>
<p>$$\frac{\frac{3}{20s+1}}{\frac{3}{20s+1} + 1}$$</p>
<p>This is the bit I am having trouble with.. he just then cancels it all out and gives us the answer on the next line which is.. $$\frac{3}{20s+4}$$</p>
<p>He gives us loads of problems to this which are all similar but I just cannot work them out as my algebra sucks so bad.. I don't know how to cancel out stuff which has a division but with an addition in the denominator..</p>
<p>I have taken a screen shot of the pdf here
<a href="https://i.imgur.com/t82sfYr.png" rel="nofollow noreferrer">http://i.imgur.com/t82sfYr.png</a></p>
<p><a href="https://i.imgur.com/GoCBjuq.png" rel="nofollow noreferrer">And another one of another pdf which explains nearly how to do it but misses out the steps..</a></p>
| Michael Hoppe | 93,935 | <p>Multiply numerator and denominator by $20s+1$.</p>
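<p>Carried out explicitly, that single step gives</p>

<p>$$\frac{\frac{3}{20s+1}}{\frac{3}{20s+1}+1}=\frac{3}{3+(20s+1)}=\frac{3}{20s+4}.$$</p>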
|
1,295,259 | <p>How to prove that:</p>
<blockquote>
<p>$a^{|G|}=e$ if a $\in G $</p>
</blockquote>
<p>if $G$ is a finite group and $e$ is its identity.</p>
<p>I think this could be done through pigeonhole principle but I don't want to use the Lagrange theorem.</p>
<p>How should I start?</p>
| npatrat | 220,440 | <p>Let $d=ord(a)$. Then $a^d=e$ and $H=\{ e,a,a^2,...,a^{d-1} \}$ is a subgroup of $G$. By Lagrange, we have: $|G| \vdots |H|$, so $|G| \vdots d$; then there is a $k \in \mathbb{N}$ such that $kd=|G|$. From this and $a^d=e$ we get that $a^{dk}=e^k$, i.e. $a^{|G|}=e$.</p>
|
687,352 | <p>How many experiments should we conduct so that we could state that with more than $0.9$ probability the event occurs at least once? The probability that the event occurs in a single experiment is $0.7$. </p>
<p>I have tried the following:</p>
<p>Let's say the number of experiments is equal to $n$.
The opposite of 'occurs at least once' is that the event occurs in <strong>all</strong> experiments, and the probability of this should be $1-0.9=0.1.$ </p>
<p>So I need the following $(0.7)^n=0.1$</p>
<p>Solving this does not give me the right answer, which is <strong>more than $2$</strong>. </p>
<p>Anyone could help?</p>
| AlexR | 86,940 | <p>Note that
$$\left.\frac{|\sin x|}{|x|}\right|_{[n, n+1]} \ge\frac1{n+1}|\sin(x)|\tag 1$$
And that
$$\int_{\alpha}^{\alpha +1}|\sin(x)| dx \ge \int_{-\frac12}^{\frac12} |\sin x| dx = 2\int_0^{\frac12} \sin x dx = 2(1-\cos(\frac12)) =: C > 0\tag 2$$
So we have
$$\int_1^\infty \left|\frac{\sin x}x\right| dx =\sum_{n=1}^\infty \int_n^{n+1}\left|\frac{\sin x}x\right| dx \stackrel{(1)}\ge \sum_{n=1}^\infty \frac1{n+1} \int_n^{n+1} |\sin x| dx \stackrel{(2)}\ge C \sum_{n=2}^\infty \frac1n$$
The latter is the harmonic series (minus the first term) and is well-known to diverge, or can be shown by comparison to $\ln$.</p>
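<p>A numerical illustration of the bound $(2)$ (my own sketch): integrating $|\sin x|$ over any unit-length window with a simple midpoint rule never drops below $C=2(1-\cos\frac12)\approx 0.245$:</p>

```python
from math import sin, cos

C = 2 * (1 - cos(0.5))

def int_abs_sin(a, N=100000):
    # midpoint-rule approximation of the integral of |sin x| over [a, a+1]
    h = 1.0 / N
    return sum(abs(sin(a + (i + 0.5) * h)) for i in range(N)) * h

# a handful of arbitrary window starts
for k in range(20):
    assert int_abs_sin(0.37 * k) >= C - 1e-6
```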
|
3,047,241 | <blockquote>
<p>Let <span class="math-container">$X_1, X_2, \cdots, X_n$</span> be i.i.d. <span class="math-container">$\sim \text{Bernoulli}(p)$</span>. Then <span class="math-container">$\bar{x}$</span> is an unbiased estimator of <span class="math-container">$p$</span>.</p>
</blockquote>
<p>How should I approach this type of problem?
A hint would also help me.</p>
| Ankit Seth | 393,189 | <p>You know that <span class="math-container">$E(X) = p$</span> or, for any <span class="math-container">$i$</span>, <span class="math-container">$E(X_i) = p$</span>.</p>
<p>So,</p>
<p><span class="math-container">$$E(\bar X) = E\Bigl(\frac {\sum_{i=1}^n X_i}{n}\Bigl)$$</span>
<span class="math-container">$$=\frac{1}{n}\Big(E{\sum_{i=1}^n X_i}\Big)$$</span>
<span class="math-container">$$=\frac{1}{n}\Big({\sum_{i=1}^n E(X_i)}\Big)$$</span>
<span class="math-container">$$=\frac{1}{n}(np)$$</span>
<span class="math-container">$$=p$$</span></p>
<p>By the definition of Unbiased Estimator, <span class="math-container">$\bar X$</span> is an unbiased estimator of <span class="math-container">$p$</span>.</p>
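<p>For a concrete (if unnecessary) cross-check, one can compute <span class="math-container">$E(\bar X)$</span> exactly by enumerating all <span class="math-container">$2^n$</span> outcomes with rational arithmetic; the function name is my own:</p>

```python
from itertools import product
from fractions import Fraction

def expected_sample_mean(n, p):
    # exact E[x-bar] over all 2^n Bernoulli(p) outcomes
    exp = Fraction(0)
    for outcome in product((0, 1), repeat=n):
        prob = Fraction(1)
        for xi in outcome:
            prob *= p if xi else 1 - p
        exp += prob * Fraction(sum(outcome), n)
    return exp

# E(x-bar) = p exactly, for several sample sizes
p = Fraction(2, 5)
for n in (1, 2, 3, 5):
    assert expected_sample_mean(n, p) == p
```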
|
83,965 | <p>When students learn multivariable calculus they're typically barraged with a collection of examples of the type "given surface X with boundary curve Y, evaluate the line integral of a vector field along Y by evaluating the surface integral of the curl of the vector field over the surface X" or vice versa. The trouble is that the vector fields, curves and surfaces are pretty much arbitrary except for being chosen so that one or both of the integrals are computationally tractable.</p>
<p>One more interesting application of the classical Stokes theorem is that it allows one to interpret the curl of a vector field as a measure of swirling about an axis. But aside from that I'm unable to recall any other applications which are especially surprising, deep or interesting. </p>
<p>I would like to use Stokes' theorem to show my multivariable calculus students something that they will enjoy. Any suggestions?</p>
| Toby Bartels | 8,508 | <p>In the theory of electromagnetism, the classical Stokes Theorem moves between the differential and integral forms of two of Maxwell's four equations; see <a href="https://en.wikipedia.org/wiki/Stokes%27_theorem#In_electromagnetism" rel="nofollow">https://en.wikipedia.org/wiki/Stokes%27_theorem#In_electromagnetism</a> for discussion. Note that the integral forms may be directly interpreted using classical physical intuition, while the differential forms give us differential equations that we might solve, so it is important that we can switch between them.</p>
<p>ETA: I think that Wikipedia's discussion is a little vague, although possibly appropriate in that context. So here is more detail, looking at Faraday's Law. In terms of physically observable quantities, the law states that the rate of change of the magnetic flux through a stationary surface is proportional to the electromotive force around the boundary of the surface. The magnetic flux is the surface integral of the magnetic field $ \vec H $, and the EMF is the line integral of the electric field $ \vec E $, so we have $$ \oint _ { \partial S } \vec E \cdot \mathrm d \vec r = - \frac { \mathrm d } { \mathrm d t } \iint _ S \vec H \cdot \mathrm d ^ 2 \vec A $$ using standard units and sign conventions. Applying the classical Stokes Theorem on the left and using that $ S $ is stationary on the right, this becomes $$ \iint _ S ( \nabla \times \vec E ) \cdot \mathrm d ^ 2 \vec A = - \iint _ S \frac { \partial \vec H } { \partial t } \cdot \mathrm d ^ 2 \vec A \text ; $$ since this holds for arbitrarily small surfaces, we conclude that $$ \nabla \times \vec E = - \frac { \partial \vec H } { \partial t } \text , $$ a differential equation. (The argument in reverse is even easier, since you don't have to worry about arbitrarily small surfaces.)</p>
|
83,965 | <p>When students learn multivariable calculus they're typically barraged with a collection of examples of the type "given surface X with boundary curve Y, evaluate the line integral of a vector field along Y by evaluating the surface integral of the curl of the vector field over the surface X" or vice versa. The trouble is that the vector fields, curves and surfaces are pretty much arbitrary except for being chosen so that one or both of the integrals are computationally tractable.</p>
<p>One more interesting application of the classical Stokes theorem is that it allows one to interpret the curl of a vector field as a measure of swirling about an axis. But aside from that I'm unable to recall any other applications which are especially surprising, deep or interesting. </p>
<p>I would like to use Stokes' theorem to show my multivariable calculus students something that they will enjoy. Any suggestions?</p>
| Buschi Sergio | 6,262 | <p>I find it interesting that the divergence theorem (which is a corollary of the general Stokes theorem on differentiable manifolds), used in vector form (one integral for each Cartesian coordinate), gives a proof of Archimedes' principle of buoyancy in a fluid.</p>
<p>Given a body immersed in an (incompressible) fluid, let $S$ be its surface, and for each point $p\in S$ let $\overrightarrow{n}:=(n_x(p), n_y(p), n_z(p))$ be the outward unit normal to $S$ at $p$. Then the total pressure force on the body is the (vector) integral $\int_S\mu\cdot (l-z(p))\cdot (-\overrightarrow{n})\cdot dS=$</p>
<p>$-(\int_S\mu\cdot (l-z(p))\cdot n_x(p) \cdot dS, \int_S\mu\cdot (l-z(p))\cdot n_y(p) \cdot dS, \int_S\mu\cdot (l-z(p))\cdot n_z(p) \cdot dS) =$</p>
<p>$-\mu\cdot(\int_S (l-z(p))\cdot \overrightarrow{i} \cdot \overrightarrow{n}\, dS, \int_S (l-z(p))\cdot \overrightarrow{j} \cdot \overrightarrow{n}\, dS, \int_S (l-z(p))\cdot \overrightarrow{k} \cdot \overrightarrow{n}\, dS) $</p>
<p>(where $l$ is the fluid level, $\mu$ its density, and $\overrightarrow{i}, \overrightarrow{j}, \overrightarrow{k}$ the usual Cartesian unit vectors).</p>
<p>Then from the divergence theorem (applied to each of the three components) and from $\nabla\cdot((l-z)\overrightarrow{i})=\partial/\partial x\, (l-z)=0,\ \nabla\cdot((l-z)\overrightarrow{j})=\partial/\partial y\, (l-z)=0$,
$\nabla\cdot((l-z)\overrightarrow{k})=\partial/\partial z\, (l-z)=-1$</p>
<p>it follows that $\int_S\mu\cdot (l-z(p))\cdot (-\overrightarrow{n})\cdot dS= \mu\cdot \int_V dV\cdot \overrightarrow{k} =\mu |V|\cdot \overrightarrow{k} $ (where $V$ is the volume (interior region) bounded by $S$ and $|V|$ is its measure). </p>
|
83,965 | <p>When students learn multivariable calculus they're typically barraged with a collection of examples of the type "given surface X with boundary curve Y, evaluate the line integral of a vector field along Y by evaluating the surface integral of the curl of the vector field over the surface X" or vice versa. The trouble is that the vector fields, curves and surfaces are pretty much arbitrary except for being chosen so that one or both of the integrals are computationally tractable.</p>
<p>One more interesting application of the classical Stokes theorem is that it allows one to interpret the curl of a vector field as a measure of swirling about an axis. But aside from that I'm unable to recall any other applications which are especially surprising, deep or interesting. </p>
<p>I would like to use Stokes' theorem to show my multivariable calculus students something that they will enjoy. Any suggestions?</p>
| yael fregier | 13,742 | <p>You can tell your students that a clever use of Stokes theorem can give you a Fields medal. Indeed, the proof that the formality map given by M. Kontsevich is a $L_\infty$-morphism, is nothing else than Stokes theorem. A detailed account of this can be found in <code>Deformation quantization of Poisson manifolds. Lett. Math. Phys. 66 (2003), no. 3, 157–216</code> or with more details in <code>Déformation, quantification, théorie de Lie, 123–164, Panor. Synthèses, 20, Soc. Math. France, Paris, 2005</code> which is in English, contrary to its title.</p>
|
2,704,394 | <p>Here is the formal statement:</p>
<blockquote>
<p>Let $\lambda_1, \lambda_2, \lambda_3$ be distinct eigenvalues of $n\times n$ matrix $A$. Let $S=\{v_1, v_2, v_3\}$, where $Av_i = \lambda_i v_i$ for $1\leq i\leq 3$. Prove $S$ is linearly independent. </p>
</blockquote>
<p>Many resources online state the general proof or the proof for two eigenvectors. What is the proof for specifically 3? I tried to derive the 3 eigenvector proof from the 2 eigenvector proofs, but failed. </p>
| Robert Lewis | 67,071 | <p>Here's the $n$-eigenvector proof:</p>
<p>We assume</p>
<p>$A\vec v_i = \lambda_i \vec v_i, \; 1 \le i \le n, \tag 1$</p>
<p>with </p>
<p>$\lambda_i \ne \lambda_j, \; 1 \le i, j \le n; \tag 2$</p>
<p>assume there is a linear dependence between the eigenvectors:</p>
<p>$\displaystyle \sum_1^n a_i \vec v_i = 0, \; \exists [a_i \ne 0, 1 \le i \le n]; \tag 3$</p>
<p>since relations such as (3) are assumed to exist, there is (at least) one having a minimum number of non-zero coefficients $a_i$; we assume (3) is such; we note the number of non-zero $a_i$ must be $\ge 2$, otherwise (3) is of the form</p>
<p>$a_j \vec v_j = 0, \tag 4$</p>
<p>which implies $a_j = 0$, forbidden by hypothesis. Then</p>
<p>$A(\displaystyle \sum_1^n a_i \vec v_i) = 0, \tag 5$</p>
<p>or</p>
<p>$\displaystyle \sum_1^n a_i \lambda_i \vec v_i = 0; \tag 6$</p>
<p>we may assume without loss of generality that $a_1 \ne 0$; if we multiply (3) by $\lambda_1$ we have</p>
<p>$\displaystyle \sum_1^n a_i \lambda_1 \vec v_i = 0; \tag 7$</p>
<p>we subtract (7) from (6):</p>
<p>$\displaystyle \sum_2^n a_i (\lambda_i - \lambda_1) \vec v_i = 0; \tag 8$</p>
<p>since for all $i \ge 2$</p>
<p>$\lambda_i - \lambda_1 \ne 0, \tag 9$</p>
<p>(8) is a linear relation between eigenvectors with fewer non-zero coefficients than (3); this contradiction shows (3) is impossible and hence the eigenvectors are linearly independent.</p>
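<p>To make the conclusion concrete, here is a small numerical sanity check (a sketch using NumPy, not part of the proof): for an upper-triangular matrix with distinct diagonal entries, the eigenvector matrix returned by <code>numpy.linalg.eig</code> has full rank, i.e. the eigenvectors are linearly independent.</p>

```python
import numpy as np

# Upper-triangular matrix, so its eigenvalues are the diagonal entries
# 2, 3, 5 -- three distinct eigenvalues.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

eigenvalues, V = np.linalg.eig(A)  # columns of V are eigenvectors

# Distinct eigenvalues imply the eigenvector matrix has full rank,
# i.e. the three eigenvectors are linearly independent.
rank = np.linalg.matrix_rank(V)
print(sorted(np.round(eigenvalues.real, 6)), rank)
```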
|
3,754,030 | <p>Let <span class="math-container">$I^n$</span> be the <span class="math-container">$n$</span>-cube <span class="math-container">$[0,1]^n$</span>. Also define two subsets of <span class="math-container">$\partial I^n$</span>:</p>
<ul>
<li><span class="math-container">$A=\{(x_1,\ldots,x_n)\mid x_1=0\}$</span></li>
<li><span class="math-container">$B=\partial I^n\setminus \{(x_1,\ldots,x_n)\mid x_1=1\}$</span></li>
</ul>
<p>So <span class="math-container">$A$</span> is the "bottom face" and <span class="math-container">$B$</span> is every face but the "top face".</p>
<p>It's well-known in algebraic topology that there's a homeomorphism <span class="math-container">$F:I^n\rightarrow I^n$</span> with <span class="math-container">$F(A)=B$</span>. Can this <span class="math-container">$F$</span> be defined by an explicit formula?</p>
<p>I've been able to make some progress in the <span class="math-container">$n=2$</span> case, involving exponential maps defined on closed subsets of the square, but I haven't been able to glue them together coherently yet. Wondering if there's some insight I'm missing to give a nice formula that generalizes to all <span class="math-container">$n$</span>.</p>
<p>(I've been focused on <span class="math-container">$F^{-1}$</span> because it's easier for me to draw those pictures. I'd be happy with a formula for either direction.)</p>
| Greg Martin | 16,078 | <p>One way that is conceptually simple, and that could be turned into an explicit formula with enough effort, is:</p>
<ul>
<li>Define <span class="math-container">$g\colon I^n \to U$</span>, where <span class="math-container">$U$</span> is the closed ball whose boundary sphere circumscribes <span class="math-container">$I^n$</span>, by dilating along each radius of the ball so that the boundary of the cube ends up on the boundary sphere.</li>
<li>Let <span class="math-container">$\mathcal S$</span> denote the collection of great semicircles running from the south pole of <span class="math-container">$\partial U$</span> to the north pole.</li>
<li><span class="math-container">$g(\partial A)$</span> intersects every semicircle in <span class="math-container">$\mathcal S$</span> exactly once, as does <span class="math-container">$g(\partial B)$</span>, and never at the north or south poles.</li>
<li>Find a homoemorphism <span class="math-container">$h$</span> from <span class="math-container">$U$</span> to itself that, when restricted to any semicircle in <span class="math-container">$\mathcal S$</span>, pushes the points on that semicircle upwards so that the intersection with <span class="math-container">$g(\partial A)$</span> is mapped to its intersection with <span class="math-container">$g(\partial B)$</span>.</li>
<li>Then a map <span class="math-container">$F$</span> with the property you want is <span class="math-container">$g^{-1}\circ h \circ g$</span>.</li>
</ul>
|
3,203,607 | <p>"Each cell of a 100 × 100 table is painted either black or white and all
the cells adjacent to the border of the table are black. It is known that in every
2 × 2 square there are cells of both colours. Prove that in the table there is 2 × 2
square that is coloured in the chessboard manner."</p>
<p><a href="https://cms.math.ca/crux/v44/n9/OCP_44_9.pdf" rel="nofollow noreferrer">Source of problem</a></p>
<p>How to solve this problem?</p>
| Mike Earnest | 177,399 | <p><strong>Hint:</strong> Stick <span class="math-container">$99\times 99$</span> needles on this grid, each at a place where four cells meet in a corner. For each pair of needles at distance one apart, connect them with a piece of string if the the two squares touching the edge between them have different colors. </p>
<p>Each needle will have either <span class="math-container">$2$</span> or <span class="math-container">$4$</span> pieces of string tied to it (why?). If a needle has four strings, then the four squares surrounding it are colored like a checkerboard. So, assume to the contrary that every needle only has two strings. What would the resulting picture look like? Why is that impossible?</p>
<p>Further hint:</p>
<blockquote class="spoiler">
<p> If every needle only had two strings, then the needles would be partitioned into "loops," where each needle is connected to the next and previous in a circular fashion. What are the possible sizes of a loop?<br>
<br>
For example, you can have a loop of size four where the four needles are the vertices of a cell. Can you have a loop of size <span class="math-container">$5$</span>?</p>
</blockquote>
|
1,079,493 | <blockquote>
<p>Prove that <span class="math-container">$f(x) = x^3 + 3x - 1$</span> is irreducible in <span class="math-container">$\mathbb Q[X]$</span>.<br />
Let <span class="math-container">$\theta$</span> be a root of <span class="math-container">$f(x)$</span>. Compute <span class="math-container">$\frac{1}{\theta}$</span> and <span class="math-container">$(2 + \theta^2)^{-1} $</span> in <span class="math-container">$\mathbb Q[\theta ]$</span>.</p>
</blockquote>
<p><span class="math-container">\begin{array}{l}
f\left( \theta \right) = \theta ^3 + 3\theta - 1 = 0 \\
\Leftrightarrow \theta \left( {3 + \theta ^2 } \right) = 1 \\
\Leftrightarrow \frac{1}{\theta } = \left( {3 + \theta ^2 } \right);\left( {\theta \ne 0} \right) \\
\end{array}</span></p>
<p><span class="math-container">\begin{array}{l}
\frac{1}{\theta } = 3 + \theta ^2 ;\left( {\theta \ne 0} \right) \\
\Leftrightarrow \frac{1}{\theta } - 1 = 2 + \theta ^2 \\
\Leftrightarrow \left( {\frac{1}{\theta } - 1} \right)^{ - 1} = \left( {2 + \theta ^2 } \right)^{ - 1} \quad ;\left( { \pm \sqrt 2 \notin Q} \right) \\
\Leftrightarrow \left( {2 + \theta ^2 } \right)^{ - 1} = \frac{\theta }{{1 - \theta }}\quad ;\left( {\theta \ne 1} \right) \\
\end{array}</span></p>
<p>But I can't show that <span class="math-container">$f$</span> is irreducible.</p>
| Tim Raczkowski | 192,581 | <p>Another idea is that if $f(x)$ is factorable over $\Bbb Q[x]$ it must have at least one rational zero. However, by the rational zero theorem, $\pm 1$ are the only possible rational zeros, but neither one is a zero.</p>
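<p>To spell that out: by the rational root theorem, any rational zero of $f(x)=x^3+3x-1$ must be $\pm 1$ (a divisor of the constant term over a divisor of the leading coefficient), and a cubic that factors over $\mathbb{Q}$ must have a linear factor, hence a rational zero. A quick script (plain Python, just to double-check the arithmetic) confirms neither candidate is a zero:</p>

```python
from fractions import Fraction

def f(x):
    return x**3 + 3*x - 1

# Rational root theorem: a rational zero p/q (in lowest terms) needs
# p | 1 and q | 1, so the only candidates are +1 and -1.
candidates = [Fraction(1), Fraction(-1)]
values = {c: f(c) for c in candidates}
print(values)  # f(1) = 3 and f(-1) = -5, so f has no rational zero
```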
|
148,807 | <p>I'm not sure if these types of questions are accepted here or not (I'm very sorry if it's not), but it would be great if anyone could explain me this.</p>
<blockquote>
<p><strong>Question:</strong>
Using his bike, Daniel can complete a paper route in 20 minutes. Francisco, who walks the route, can complete it in 30 minutes. How long will it take the two boys to complete the route if they work together, one starting at each end of the route?</p>
</blockquote>
<p>I have the answer: 12 minutes</p>
<p>But I don't understand the solution given in the book.</p>
<p>Can any of you explain how to solve this? Your help is highly appreciated.</p>
| Unreasonable Sin | 592 | <p>Suppose the paper route is 1 mile in length. Then Daniel is traveling at 3 miles an hour and Francisco is traveling at 2 miles an hour. Imagine the paper route is a straight line running left to right. Daniel starts his route at the far left side of the line traveling towards the right, and Francisco starts his route from the far right side of the line traveling left. We want to know how long it takes for them to meet.</p>
<p>Daniel's position on the line, X, is a function of his speed and time. </p>
<p>X_Daniel = speed * time.</p>
<p>Francisco's position is also a function of time, but he's coming from the right. Since the route is 1 mile long, we subtract his position from 1.</p>
<p>X_Francisco = 1 - speed * time.</p>
<p>Since we want to know the point at which they meet, we set the two functions equal to each other and solve for time.</p>
<p>$3t = 1 - 2t$</p>
<p>$3t + 2t = 1$</p>
<p>$5t = 1$</p>
<p>$t = 1 / 5$ of an hour, or if we divide 60 minutes by 5, we get 12 minutes.</p>
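<p>The same computation in a few lines of Python (a trivial check, with speeds measured in routes per hour and exact rational arithmetic):</p>

```python
from fractions import Fraction

# Speeds in routes per hour: Daniel finishes in 20 min, Francisco in 30 min.
daniel = Fraction(60, 20)     # 3 routes per hour
francisco = Fraction(60, 30)  # 2 routes per hour

# They meet when 3t = 1 - 2t, i.e. (3 + 2) t = 1.
t_hours = 1 / (daniel + francisco)
t_minutes = 60 * t_hours
print(t_minutes)  # 12
```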
|
3,033,943 | <blockquote>
<p><span class="math-container">$\textbf{Problem}$</span> Let <span class="math-container">$\Omega$</span> be an open, bounded and connected subset of <span class="math-container">$\mathbb{R}^n$</span>. Suppose that <span class="math-container">$\partial \Omega$</span> is <span class="math-container">$C^{\infty}$</span>. Consider an eigenvalue problem
<span class="math-container">\begin{align*}
\begin{cases}
-\Delta u=\lambda u & \textrm{ in } \; \Omega \\
\frac{\partial u}{\partial \nu}=-u & \textrm{ on } \partial \Omega
\end{cases}
\end{align*}</span>
Define a bilinear operater <span class="math-container">$(\cdot,\cdot)_{H^1}$</span> by
<span class="math-container">\begin{align*}
(u,v)_{H^1}:=\int_{\Omega} \nabla u \cdot \nabla v \;dx + \int_{\partial \Omega} uv \; d\sigma
\end{align*}</span>
Show that there exists a constant <span class="math-container">$\theta>0$</span> independent of <span class="math-container">$u,v$</span> such that
<span class="math-container">\begin{align*}
(u,u)_{H^1} \geq \theta \Vert u \Vert _{H^1(\Omega)}^2
\end{align*}</span></p>
</blockquote>
<p><span class="math-container">$\textbf{Attempt}$</span> </p>
<p><span class="math-container">\begin{align*}
(u,u)_{H^1}&=\int_{\Omega} \nabla u \cdot \nabla u \;dx + \int_{\partial \Omega} u^2 \; d\sigma \\
&=\int_{\Omega} \nabla \cdot(u\nabla u)-u\Delta u \; dx +\int_{\partial \Omega} u^2 \; d\sigma \\
&=\int_{\partial \Omega} u \frac{\partial u}{\partial \nu} \; d\sigma +\int_{\Omega} \lambda u^2 dx +\int_{\partial \Omega} u^2 \; d\sigma \\
&=-\int_{\partial \Omega} u^2 \; d\sigma +\int_{\Omega} \lambda u^2 dx +\int_{\partial \Omega} u^2 \; d\sigma\\
&=\lambda \Vert u \Vert _{L^2(\Omega)}^2
\end{align*}</span>
I don't know how to get <span class="math-container">$\lambda \Vert u \Vert_{L^2(\Omega)}^2 \geq \theta \Vert u \Vert_{H^1(\Omega)}^2$</span>...</p>
<p>Any help is appreciated..</p>
<p>Thank you!</p>
| Enkidu | 455,216 | <p>If you use the sequence definition of continuity, you take an arbitrary sequence <span class="math-container">$x_n\xrightarrow{\to \infty } x$</span> and want to prove that the image also converges.
Observe that any converging sequence <span class="math-container">$x_n\xrightarrow{\to \infty } x$</span> defines a sequence converging to zero of the form <span class="math-container">$x-x_n$</span>, but now the above equality shows that the converging of <span class="math-container">$T(x_n) \to T(x)$</span> is equivalent to <span class="math-container">$T(x_n -x) \xrightarrow{\to \infty} 0$</span>. Which is guaranteed by continuity at 0.</p>
|
122,945 | <p>Let $f:S^n\to \mathbb{C}$ be a continuous function, $n\geq 1$. When $n=1$, this is a well-known theorem, called Kellogg's theorem (or sometimes the Kellogg-Warschawski theorem) which states the following</p>
<p>Theorem: Fix $k \geq 0, 0<\alpha<1$. Let $f\in C^{k,\alpha}(S^1)$. Then its harmonic extension $H(f)$, which is the solution to the Dirichlet problem on the unit disk $D$ with boundary value $f$, is in $C^{k, \alpha}(D)$.</p>
<p>My main question is: is the above true for $n\geq 2$ as well? Any references/suggestions?</p>
<p>While I don't know exactly a complete reference for the proof, but I have read the following theorem mentioned in the book "Boundary Behaviour of Conformal maps" by Christian Pommerenke which states:</p>
<p>Let $F:D\to\Omega $ be a conformal homeomorphism of $D$ onto a Jordan domain $\Omega$ whose boundary curve $\partial\Omega$ has a $C^{k,\alpha}$ -parametrization. Then $F\in C^{k, \alpha}(D)$. Note that any conformal homeomorphism $F$ of $D$ onto a Jordan domain extends to the boundary of $D$, by Caratheodory's extension theorem.</p>
| timur | 824 | <p>It follows from the Schauder theory. You can also establish Kellogg's theorem directly. One approach is given in DiBenedetto's PDE book, where he uses Kellogg's theorem in the proof of Schauder estimates.</p>
|
1,134,215 | <p>How can I determine whether $\{\frac{z}{1+z^2} : z \in \mathbb{C} \setminus \{-i, i\}\}$ is bounded? My textbook is very poor at describing boundedness for complex functions. Thanks for the help!</p>
| Eric Wofsey | 86,856 | <p>If $\alpha$ and $\beta$ are automorphisms of $G$, then the cosets $\alpha X$ and $\beta X$ are the same iff $\alpha(K)=\beta(K)$. Thus $X$ will fail to have finite index if there are infinitely many different subgroups of $G$ that are conjugate to $K$ under automorphisms of $G$. For instance, if $G$ is an infinite-dimensional vector space over a finite field and $K$ is subspace such that $G/K$ is finite-dimensional, then automorphisms of $G$ can send $K$ to any subspace $K'$ such that $G/K'$ has the same dimension, and there are infinitely many such subspaces.</p>
|
487,123 | <p>How to evaluate the following limit?
$$\lim_{n\to\infty}\dfrac{1!+2!+\cdots+n!}{n!}$$</p>
<p>For this problem I have two methods. But I'd like to know if there are better methods.</p>
<p><strong>My solution 1:</strong></p>
<p>Using Stolz-Cesaro Theorem, we have
$$\lim_{n\to\infty}\dfrac{1!+2!+\cdots+n!}{n!}=\lim_{n\to\infty}\dfrac{n!}{n!-(n-1)!}=\lim_{n\to\infty}\dfrac{n}{n-1}=1$$</p>
<p><strong>My solution 2:</strong></p>
<p>$$1=\dfrac{n!}{n!}<\dfrac{1!+2!+\cdots+n!}{n!}<\dfrac{(n-2)(n-2)!+(n-1)!+n!}{n!}=\dfrac{n-2}{n(n-1)}+\dfrac{1}{n}+1$$</p>
| robjohn | 13,854 | <p>If we let
$$
a_n=\frac1{n!}\sum_{k=1}^nk!
$$
then obviously, $a_n\ge1$. Furthermore, we get that
$$
a_{n+1}=1+\frac{a_n}{n+1}
$$
Suppose that for some $n\ge1$, $a_n\le2$, then
$$
\begin{align}
a_{n+1}
&=1+\frac{a_n}{n+1}\\
&\le1+\frac{2}{n+1}\\
&\le2
\end{align}
$$
Since $a_1=1$, we have that $a_n\le2$ for all $n\ge1$. Now finally,
$$
\begin{align}
1\le a_{n+1}=1+\frac{a_n}{n+1}\le1+\frac2{n+1}
\end{align}
$$
By the <a href="http://en.wikipedia.org/wiki/Squeeze_theorem">Squeeze Theorem</a>, we get that
$$
\lim_{n\to\infty}a_n=1
$$</p>
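<p>As a quick numerical illustration of this argument (a sketch in plain Python, using the recurrence rather than huge factorials):</p>

```python
# a_n = (1! + 2! + ... + n!)/n! satisfies a_1 = 1 and a_{n+1} = 1 + a_n/(n+1).
a = 1.0
values = [a]
for n in range(2, 101):
    a = 1 + a / n
    values.append(a)

# every term stays in [1, 2], and a_n approaches 1
print(min(values), max(values), values[-1])
```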
|
227,109 | <p>I keep mixing them up, because they are very similar.</p>
<p>Some contrapositives resemble some contradictions.</p>
| amWhy | 9,003 | <p>When one speaks of a <strong>contrapositive</strong> or proving a contrapositive, one is speaking about the contrapositive of an <em>implication</em> (an "if...then" statement), and as pointed out in the earlier answers, if one wants to prove that $$P \implies Q\tag{1}$$ one can choose, instead, to prove $$\lnot Q \implies \lnot P,\tag{2}$$ because both statements are equivalent (i.e., if one is true, so is the other...and if one is false, so is the other). Don't confuse the appearance of the $\lnot$ symbol on each side of (2) as being either a negation of (1) nor contradiction. To see what I mean, one can correctly state that (1) (which does not contain the "$\lnot$" symbol) is the contrapostive of (2) because (2) is equivalent to $$\lnot(\lnot P) \implies \lnot(\lnot Q) \equiv P \implies Q.\tag{3}$$ </p>
<p>In contrast, a <strong>contradiction</strong> is obtained when one derives or asserts that both a statement $P$ and its negation $\lnot P\;$ hold, i.e., when one asserts or derives: $$P \land \lnot P\tag{4}$$ (E.g., $x \in A \land x \notin A$ is a contradiction, and as such, is false <em>regardless of whether or not $x \in A$</em>). </p>
<p>Another way of putting it is that a contradiction is any statement which is <em>always</em> false (i.e., a statement which is "inherently" false), and a contradiction can be thought of as the "opposite" of a <em>tautology</em> which is always true: e.g. $P \lor \lnot P$ is a tautology, and as such is true without knowing whether $P$ is true or false).</p>
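<p>Since both notions are finitary, the key facts can even be verified mechanically with a four-row truth table; here is a small Python sketch doing exactly that:</p>

```python
from itertools import product

def implies(p, q):
    # truth value of "p => q"
    return (not p) or q

# (1) P => Q and its contrapositive (2) not Q => not P agree in every row
# of the truth table, while P and (not P) is false in every row.
for P, Q in product([False, True], repeat=2):
    assert implies(P, Q) == implies(not Q, not P)  # contrapositive equivalence
    assert (P and not P) == False                  # a contradiction
    assert (P or not P) == True                    # a tautology

print("all four truth-table rows check out")
```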
|
2,959,686 | <p>I'm trying to see if I can find a bijection between two infinite groups, one of which is a subset of the other. If I find that the inverse <span class="math-container">$\phi^{-1}(x)=\frac{1}{5}x$</span> doesn't work for <span class="math-container">$x \in \mathbb{Z}$</span> (because I will get values in <span class="math-container">$\mathbb{Q}$</span>), then there isn't an isomorphism, right?</p>
<p>Or have I approached the problem incorrectly?</p>
<p>Thanks for your help.</p>
| Tsemo Aristide | 280,301 | <p>The inverse is not defined on the whole set, but only on the subset so it is a good argument. You can define <span class="math-container">$\phi:5\mathbb{Z}\rightarrow\mathbb{Z}$</span> by <span class="math-container">$\phi(z)={1\over 5}z$</span>.</p>
|
892,114 | <p>I have three numbers,
1 2 3, which will always be in this order {123}. I want to find out how many cases can be made,
like {1},{2},{23},{13},{12},{123},{3},{}. But each number has two states, "a" and "b", i.e., each one becomes a different entity, like 2a, 2b, 3a, 3b, 1a,
with one exception: 1 has only one state, 1a.</p>
<p>Please tell me step by step using formulas, so that I can understand; also, any link will be helpful.
Yours sincerely</p>
| evinda | 75,843 | <p>$$(x-2)^2=x^2-4x+4$$</p>
<p>$$(x-2)^2-12=x^2-4x+4-12=x^2-4x-8$$</p>
|
187,975 | <p>Let $\mu$ be a finite nonatomic measure on a measurable space $(X,\Sigma)$, and for simplicity assume that $\mu(X) = 1$. There is a well-known "intermediate value theorem" of Sierpiński that states that for every $t \in [0,1]$, there exists a set $S \in \Sigma$ with $\mu(S) = t$.</p>
<p>I would like to use the following stronger conclusion for such a measure: </p>
<blockquote>
<p>There exists a chain of sets $\{S_t \mid t \in [0,1]\}$ in $\Sigma$,
with $S_s \subseteq S_r$ whenever $0 \leq s \leq r \leq 1$, such that
$\mu(S_t) = t$ for all $t \in [0,1]$.</p>
</blockquote>
<p>(One can view this as the existence a right inverse to the map $\mu \colon \Sigma \to [0,1]$ in the category of partially ordered sets.)</p>
<p>This statement appears (albeit hidden within a proof) on the Wikipedia page for "<a href="http://en.wikipedia.org/wiki/Atom_%28measure_theory%29#Non-atomic_measures" rel="noreferrer">Atom (measure theory)</a>," and even includes a sketch for the proof! However, I would like to see some mention of this in the literature. I've checked the Wiki references and they both seem to prove the weaker statement. I looked in Fremiln's <em>Measure Theory</em>, vol. 2, and again found the weaker version but not the stronger. </p>
<p><strong>Question:</strong> Can anyone provide me with such a reference?</p>
<hr>
<p><strong>A proof.</strong> In case anyone stumbles to this page and wants to see a proof, I'll sketch one that is more constructive than the one that I linked to above. Set $S_0 = \varnothing$ and $S_1 = X$. By Sierpiński, there exists $S_{1/2} \in \Sigma$ of measure $1/2$. For each Dyadic rational $q = m/2^n \in [0,1]$ ($1 \leq m \leq 2^n$), we may proceed by induction on $n$ to construct each $S_q$. Now given $r \in [0,1]$, set $S_r = \bigcup_{q \leq r} S_q$. (This is essentially the same method of proof as the one in the reference provided in Ramiro de la Vega's answer.)</p>
| Ramiro de la Vega | 17,836 | <p>I would say this is folklore (I proved it and used it many years ago on my undergrad thesis), but here is a concrete reference:</p>
<p>Such a family of measurable sets is called a $[0,1]$-family in <em>On the Skorokhod representation theorem</em> by Jean Carlos Cortissoz, PAMS, Vol.135, No. 12, 2007 (see Definition 4.1). A proof that such a family exists in any non-atomic space is given in Lemma 4.1. </p>
|
331,962 | <p>We have a first-order ODE:</p>
<p>Equation 1: $y' + y = x$.
We can view the left-hand side as an operator acting on $y$.</p>
<p>In that case $L=(d/dx + 1)$ </p>
<p>$L(y_1) = x$<br>
$L(y_2)=x$<br>
$L(y_1+y_2)=x$<br>
So, clearly $L(y_1+y_2) = x \neq L(y_1)+L(y_2) = 2x$ </p>
<p>So why is $y'+y=x$ a linear ODE?</p>
| Julien | 38,053 | <p>As we discussed earlier in another thread, the following
$$
L(y):=y'+y
$$
is a linear operator. Note it is not $y'+1$. And note that linear means
$$
L(\alpha y+ \beta z)=\alpha L(y)+ \beta L(z)
$$
for every scalars $\alpha,\beta$, and every differentiable functions $y,z$. The fact that $L$ is linear is merely the fact that differentiation is linear. What you wrote $L(y_1+y_2)=L(y_1)+L(y_2)$ does not suffice to claim linearity of $L$.</p>
<p>Now your ODE can be written
$$
L(y)=f
$$
with $f(x)=x$.</p>
<p>The solution set is either empty, or an affine subspace $$y_p+\ker L$$ where $y_p$ can be any particular solution. This is exactly like linear systems
$$
AX=B
$$
where $A$ is a rectangular matrix. The solution set is either empty (the system is not compatible), or an affine subspace
$$
X_p+\ker A
$$
where $X_p$ is any particular solution.</p>
<p>People say linear to stress out the fact that there is a linear part in the equation which yields the affine structure of the solution set. One could also say that the equation is affine, by writing it $L(y)-f=0$ and observing that $y\longmapsto L(y)-f$ is affine. But nobody says that, as far as I know.</p>
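<p>To see the affine structure numerically, here is a small sketch (plain Python, with the derivative approximated by a central difference; the particular solution $y_p(x)=x-1$ and the kernel element $e^{-x}$ are easy to verify by hand):</p>

```python
import math

def y(x, C):
    # particular solution x - 1 plus the kernel element C * e^{-x}
    return x - 1 + C * math.exp(-x)

def L(yfunc, x, h=1e-6):
    # L(y) = y' + y, with y' approximated by a central difference
    yprime = (yfunc(x + h) - yfunc(x - h)) / (2 * h)
    return yprime + yfunc(x)

# for every constant C we get L(y) = f with f(x) = x: solution set = y_p + ker L
checks = [abs(L(lambda t, C=C: y(t, C), x) - x)
          for C in (-2.0, 0.0, 3.5) for x in (0.0, 1.0, 2.0)]
print(max(checks))  # tiny numerical error
```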
|
1,474,867 | <p>I was trying to prove </p>
<p>$$\left|\int_{0}^{a}{\frac{1-\cos{x}}{x^2}}dx-\frac{\pi}{2}\right|\leq \frac{3}{a}$$ or $\leq \frac{2}{a}$. My work: I would like to use Fubini's theorem to prove it. </p>
<p>I notice that $\frac{1}{x^2}=\int^{\infty}_{0}{ue^{-xu}}du$. </p>
<p>Then, I got $\int_{0}^{a}{\frac{1-\cos{x}}{x^2}}dx=\int_{0}^{\infty}u\int_{0}^{a}{(1-\cos{x})e^{-xu}}dxdu$. </p>
<p>Then, I got $\int_{0}^{a}{(1-\cos{x})e^{-xu}}dx=-e^{-au}u+\frac{1}{u+u^3}+e^{-au}\frac{u^2\cos{a}-u\sin{a}}{u+u^3}$.</p>
<p>Then, $\int_{0}^{a}{\frac{1-\cos{x}}{x^2}}dx=\int_0^{\infty}u(\frac{e^{au}-1}{u}+\frac{u-e^{au}(u\cos{a}+\sin{a})}{1+u^2})du\\=\int_0^{\infty}({e^{au}+\frac{-ue^{au}(u\cos{a}+\sin{a}-2)}{1+u^2}})du+\frac{\pi}{2}.$ </p>
<p>I was trying to show $|\int_0^{\infty}({e^{au}+\frac{-ue^{au}(u\cos{a}+\sin{a}-2)}{1+u^2}})du|\leq\frac{3}{a}$ or $\frac{2}{a}$. </p>
<p>But I do not have a clue. Can someone give me hints?</p>
| Julian Rosen | 28,372 | <p>There appears to be a minor mistake in your computation. We have:
$$\begin{align*}
\int_0^a \frac{1-\cos(x)}{x^2}dx&=\int_0^a(1-\cos x)\int_0^\infty u e^{-xu}\,du\,dx\\
&=\int_0^\infty u\int_0^a (1-\cos x)e^{-xu}dx\,du\\
&=\int_0^\infty \left[(1-e^{-au})-\frac{u^2}{1+u^2}+e^{-au}\,\frac{u^2\cos a-u\sin a}{1+u^2}\right]du\\
&=\int_0^\infty e^{-au}\left(-1+\frac{u^2\cos a-u\sin a}{1+u^2}\right)du+\int_0^\infty\frac{1}{1+u^2}du\\
&=\int_0^\infty e^{-au}\left(-1+\frac{u^2\cos a-u\sin a}{1+u^2}\right)du+\frac{\pi}{2}.
\end{align*}
$$
To complete your proof, you need to show that
$$
\left|\int_0^\infty e^{-au}\left(-1+\frac{u^2\cos(a)-u\sin(a)}{1+u^2}\right)du\right|\leq \frac{2}{a}.
$$
Now $\int_0^\infty e^{-au}\,du=1/a$, so it will suffice to check that
$$
\left|\frac{u^2\cos(a)-u\sin(a)}{1+u^2}\right|\leq 1.
$$
The numerator above is the dot prodcut of $(u^2,u)$ and $(\cos(a),-\sin(a))$, so the Cauchy-Schwarz inequality implies
$$
|u^2\cos(a)-u\sin(a)|\leq u \sqrt{1+u^2}\leq 1+u^2.
$$</p>
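<p>One can also sanity-check the final bound numerically (a sketch with a plain midpoint rule; the integrand extends continuously to $x=0$ with value $1/2$, so there is no real singularity):</p>

```python
import math

def integral(a, steps=200000):
    # midpoint rule for the integral of (1 - cos x)/x^2 over [0, a];
    # the integrand extends continuously to x = 0 (limit 1/2)
    h = a / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        total += (1 - math.cos(x)) / (x * x)
    return total * h

a = 10.0
err = abs(integral(a) - math.pi / 2)
print(err, 2 / a)  # the error is about 0.1, comfortably below 2/a = 0.2
```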
|
1,108,832 | <p>Q: A team of $11$ is to be chosen out of $15$ cricketers of whom $5$ are bowlers and $2$ others are wicket keepers. In how many ways can this be done so that the team contains at least $4$ bowlers and at least $1$ wicket keeper?</p>
| idm | 167,226 | <p>$$\binom{5}{4}\binom{2}{1}\binom{8}{6}+\binom{5}{5}\binom{2}{1}\binom{8}{5}+\binom{5}{4}\binom{2}{2}\binom{8}{5}+\binom{5}{5}\binom{2}{2}\binom{8}{4}$$</p>
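<p>The case-by-case count above can be confirmed by brute force (a quick Python check over all $\binom{15}{11}=1365$ possible teams):</p>

```python
from itertools import combinations

# label the 15 players: 0-4 are bowlers, 5-6 are wicket keepers, 7-14 others
bowlers = set(range(5))
keepers = set(range(5, 7))

count = 0
for team in combinations(range(15), 11):
    team = set(team)
    if len(team & bowlers) >= 4 and len(team & keepers) >= 1:
        count += 1

print(count)  # 742 = 280 + 112 + 280 + 70, matching the binomial sum
```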
|
2,510,322 | <p>$f(x) =\min_{t<x}\left(t^2\right)$</p>
<p>How do I sketch this function for all real $x$? I don't get what the minimum means in this context. How do I sketch such a function when $t$ appears inside the minimum but $x$ is not in the squared term?</p>
| Community | -1 | <p>$t^2$ is a decreasing function of $t$ on $(-\infty, 0]$, so when $x\le 0$ its smallest value over $t\le x$ is achieved at the bound $t=x$ and equals $x^2$.</p>
<p>$t^2$ has a global minimum at $t=0$, with value $0$, so when $x\ge 0$ its smallest value over $t\le x$ is $0$. Hence</p>
<p>$$f(x)=\begin{cases}x^2, & x\le 0,\\ 0, & x\ge 0.\end{cases}$$</p>
<p><a href="https://i.stack.imgur.com/sTvQW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sTvQW.png" alt="enter image description here"></a></p>
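<p>A brute-force sketch in Python (sampling $t$ on a fine grid; whether the constraint is $t<x$ or $t\le x$ does not change the infimum) reproduces the piecewise formula:</p>

```python
def f(x, samples=100001, lo=-100.0):
    # brute-force the infimum of t^2 over a fine grid of t <= x
    step = (x - lo) / (samples - 1)
    return min((lo + k * step) ** 2 for k in range(samples))

# matches the piecewise answer: x^2 for x <= 0 and 0 for x >= 0
print(round(f(-2.0), 6), round(f(1.0), 6))  # 4.0 0.0
```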
|
2,887,440 | <p>We were asked in our Calculus class to prove that,</p>
<blockquote>
<p>$f(x+y) - f(x) = \frac {\sec^2(x) \tan(y)} {1 - \tan(x) \tan(y)}$ given that $f(x) = \tan(x)$</p>
</blockquote>
<p>I have gotten so far as:</p>
<p>$$f(x+y) - f(x)$$</p>
<p>$$\tan(x+y) - \tan(x)$$</p>
<p>$$\frac{\tan(x)+\tan(y)}{1-\tan(x)\tan(y)} - \tan(x)$$</p>
<p>$$\frac{\tan(x)+\tan(y)}{1-\tan(x)\tan(y)} + \frac{-\tan(x)+\tan^2(x)\tan(y)}{1-\tan(x)\tan(y)}$$</p>
<p>$$\frac{\tan(y) + \tan^2(x)\tan(y)}{1-\tan(x)\tan(y)}$$</p>
<p>$$\frac{\tan(y) [1+\tan^2(x)]}{1-\tan(x)\tan(y)}$$</p>
<blockquote>
<p>Substituting the pythagorean identity, $$1+\tan^2(x) = \sec^2(x)$$</p>
</blockquote>
<p>$$\frac{\tan(y) \sec^2(x)}{1-\tan(x)\tan(y)} = \boxed{\frac{\sec^2(x)\tan(y)}{1-\tan(x)\tan(y)}}$$ </p>
<p>I don't quite understand how $f(x+y)$ became $\tan(x+y)$. I've had a few search results stating that $f(x+y) = f(x)+f(y)$ but it does not quite fit the bill. </p>
<p>I got the idea for my solution above because of a textbook example I've read, where:</p>
<blockquote>
<p>Given $f(x)=x^2-4x+7$, find $\frac {f(x+h)-f(x)}{h}$</p>
<blockquote>
<p>$\frac{[(x+h)^2 - 4(x+h) + 7] - (x^2 - 4x + 7)}{h} = \frac{h(2x+h-4)}{h} = 2x+h-4$</p>
</blockquote>
</blockquote>
<p>...but the book did not describe what property was used in order to 'insert' the value of $f(x)$ into $f(x+h)$, and by extension the $f(x)$ into the $f(x+y)$ of my problem. They feel... similar.</p>
<p>Is there a name for this mathematical property? Thank you very much.</p>
| tarit goswami | 579,780 | <p>For your first query, "how did $f(x+y)$ become $\tan(x+y)$?": inside the brackets we write the argument of the function. Since $f(x)=\tan(x)$ for all $x\in \mathbb{R}$, we also have $f(z)=\tan(z)$ with $z=x+y$, because the reals are closed under addition (for any two reals $x$ and $y$, $x+y$ is also a real number). Writing $x+y$ inside $f(\,\cdot\,)$, we are simply evaluating $f$ at $z=x+y$. Since $x$ and $y$ split $z$ into two parts, we then use the addition formula to express $f(z)$ in terms of $x$ and $y$: here, $\tan(x+y)=\frac{\tan(x)+\tan(y)}{1-\tan(x)\tan(y)}$. There is no special name for this step (as far as I know); it is just substitution into the function's formula.</p>
<p>For the last part, suppose you want to find $f(z)$ for $z=x+h$. You know that $f(z)=z^2-4z+7$; now substitute $x+h$ for $z$ and you get $f(z)=(x+h)^2-4(x+h)+7$, which is nothing but $f(x+h)$! Think of $x+h$ as a single number, and it will be clear.</p>
<p>Functions that satisfy $f(x+y)=f(x)+f(y)$ are known as <em>additive functions</em> (for continuous functions on $\mathbb{R}$ these are exactly the linear maps $f(x)=a\cdot x$). <strong>Not every</strong> function satisfies this property. For example, for $f(x)=a\cdot x$ with constant $a$, compute $f(x+y)$ and $f(x)+f(y)$ separately and see that <em>this function</em> satisfies the property. On the other hand, for $g(x)=x^2$, observe that $g(x+y)=(x+y)^2=x^2+y^2+2xy$ is not equal to $g(x)+g(y)=x^2+y^2$ for all $x$ and $y$ in its domain.</p>
<p>I hope this explains it; please feel free to ask about any remaining doubt.</p>
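<p>And if any doubt remains, the identity itself can be spot-checked numerically (plain Python, using $\sec^2 x = 1/\cos^2 x$; the sample points are chosen away from the poles of $\tan$):</p>

```python
import math

def lhs(x, y):
    return math.tan(x + y) - math.tan(x)

def rhs(x, y):
    sec2 = 1 / math.cos(x) ** 2  # sec^2(x) = 1 + tan^2(x)
    return sec2 * math.tan(y) / (1 - math.tan(x) * math.tan(y))

# spot-check the identity at points away from the poles of tan
pairs = [(0.3, 0.4), (-0.7, 0.2), (1.0, -0.5)]
errors = [abs(lhs(x, y) - rhs(x, y)) for x, y in pairs]
print(max(errors))  # negligible floating-point error
```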
|
2,756,798 | <p>Consider the sequence space $l^2:=\{(x_n)_n\mid \sum^\infty_{n=0}|x_n|^2<\infty\}$ together with the norm
$$
||(x_n)_n||=(\sum^\infty_{n=0}|x_n|^2)^{1/2}
$$
How can I show that the triangle inequality holds for $||\cdot||$?</p>
| N. S. | 9,176 | <p><strong>Hint:</strong> Cauchy-Schwarz. You can either show that $l^2$ is an inner product space, or use the fact that for each $N$ you have, by Cauchy-Schwarz in $\mathbb R^N$:
$$(\sum^N_{n=0}|x_n+y_n|^2)^{1/2} \leq (\sum^N_{n=0}|x_n|^2)^{1/2}+(\sum^N_{n=0}|y_n|^2)^{1/2}\leq (\sum^\infty_{n=0}|x_n|^2)^{1/2}+(\sum^\infty_{n=0}|y_n|^2)^{1/2}
$$</p>
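<p>A quick numerical illustration of the finite-dimensional inequality above (not a proof, just a sketch checking the truncated sums for concrete random sequences):</p>

```python
import random

def norm(xs):
    # (sum |x_n|^2)^(1/2) for a finite truncation of the sequence
    return sum(x * x for x in xs) ** 0.5

random.seed(0)
N = 50
xs = [random.uniform(-1, 1) for _ in range(N)]
ys = [random.uniform(-1, 1) for _ in range(N)]

lhs = norm(x + y for x, y in zip(xs, ys))
rhs = norm(xs) + norm(ys)
assert lhs <= rhs + 1e-12   # Minkowski / triangle inequality in R^N
```

<p>Letting $N\to\infty$ in this finite inequality is exactly the step the hint suggests.</p>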
|
759,087 | <p>I'm busy writing my thesis, and I'm looking for some concise notation to denote the supremum of the matrix entries of, say $A \in M_n(\mathbb{R})$. How should I do this? </p>
<p>Looking for something like
$$\sup_{a_{i,j} \in A}|a_{i,j}|$$
but the notation $a_{i,j} \in A$ in reality doesn't make much sense in my opinion. What else can I do?</p>
<p>EDIT: Even more ideally I want to denote $\sup_{a_{i,j}\in (A-B)}|A - B|$, but I might just introduce general notation for the "norm" to simplify this.</p>
| Algebraic Pavel | 90,996 | <p>The defined quantity is not a "norm", it <strong>is</strong> a norm (not an operator norm though and not sub-multiplicative). I'm not aware of a standard notation for this quantity, but $\|\cdot\|_M$ or $\|\cdot\|_{\max}$ look suitable.</p>
|
1,754,931 | <p>If a sequence has a pattern where +2 is the pattern at the start, but 1 is added each time, like the sequence below, is there a formula to find the 125th number in this sequence? It would also need to work with patterns similar to this. For example if the pattern started as +4, and 5 was added each time.</p>
<blockquote>
<p>2, 4, 7, 11, 16, 22 ...</p>
</blockquote>
| fleablood | 280,126 | <p>Let a be the first term. c be the added term. Then you add m more each term.</p>
<p>The kth term is a + c +(c+m)+... +(c + (k-2)m).</p>
<p>That is the kth term is $a + (k-1)c + m\sum_{i=0}^{k-2}i= a + (k-1)c +m\frac {(k-1)(k-2)}{2}$</p>
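<p>The closed form can be checked against direct iteration; a small Python sketch (the example values $a=2$, $c=2$, $m=1$ come from the question's sequence):</p>

```python
def nth_term(a, c, m, k):
    # closed form: a + (k-1)c + m*(k-1)(k-2)/2
    return a + (k - 1) * c + m * (k - 1) * (k - 2) // 2

def nth_term_iterative(a, c, m, k):
    # build the sequence step by step: first difference c, growing by m each step
    term, diff = a, c
    for _ in range(k - 1):
        term += diff
        diff += m
    return term

# the question's sequence: 2, 4, 7, 11, 16, 22, ...
assert [nth_term(2, 2, 1, k) for k in range(1, 7)] == [2, 4, 7, 11, 16, 22]
assert nth_term(2, 2, 1, 125) == nth_term_iterative(2, 2, 1, 125) == 7876
```

<p>So the 125th number in the question's sequence is 7876; the same formula handles the "+4, then add 5 each time" variant by changing $c$ and $m$.</p>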
|
95,819 | <p>I think I have solved a problem in <em>Topology</em> by Munkres, but there is a small detail that is bugging me. The problem is stated in this question's title. I will write down the proof and will highlight what is troubling me.</p>
<p>We prove by contradiction: Assume $X$ is not Hausdorff. Then there exist points $x,y$ where $x$ is different from $y$ such that no neighbourhoods $U$, $V$ about $x$ and $y$ respectively have trivial intersection. Now consider the point $(x,y)$ that is in the complement of $\Delta$. Now let $U \times V$ be any basis element that contains $(x,y)$ (such an element exists by definition of the product topology being generated by the basis $\mathcal{B}$ consisting of elements of the form $W \times Z$, where $W$ is open in $X$ and $Z$ is open in $Y$). Consider $(U \times V ) \cap \Delta$, <strong>which I claim to be $(U \cap X) \times (V \cap X)$</strong>.</p>
<p>By our choice of $x$ and $y$ there is $z \in U \cap V$, implying that the intersection $(U \times V ) \cap \Delta$ is not trivial.</p>
<p>Since $U \times V$ was any basis element containing $(x,y)$, this means that $(x,y) \in \overline{\Delta}$, which means that there exists a limit point of $\Delta$ that is not in it, contradicting $\Delta$ being closed.</p>
<p>The problem comes is in the way I have decomposed $\Delta$; the way I have put it seems I am saying that $\Delta$ <em>is equal to $X \times X$</em>, which is not the case. How can I get round this?</p>
<p>Thanks.</p>
<p><strong>Edit:</strong> Martin Sleziak has pointed out some mistakes, $(U \times V ) \cap \Delta$ should be $\{ (x,x) : x \in U \cap V\}$ and not as claimed.</p>
| Martin Sleziak | 8,297 | <p>Your claim that $(U\times V)\cap \Delta=(U\cap X)\times(V\cap X)$ is incorrect. Since $U,V\subseteq X$, this is the same as claiming
$(U\times V)\cap \Delta=U\times V$.</p>
<p>If you use
$(U\times V)\cap \Delta = \{(x,x); x\in U\cap V\}$ instead, the rest of your proof should work fine, but there are still several minor details. </p>
<hr>
<p>Minor nitpicks:</p>
<ul>
<li><p>Wouldn't a direct proof (instead of using contradiction) more clear? I don't think you would have to change the proof much to do this. But perhaps this is a matter of taste.</p></li>
<li><p>If you're using proof by contradiction, you cannot choose $x$, $y$ arbitrarily, but you should choose $x,y\in X$, $x\ne y$, which witness that this space is not Hausdorff. (I.e., for any neighborhoods $U\ni x$, $V\ni y$ the intersection $U\cap V$ is non-empty.)</p></li>
</ul>
<p>One more nitpick considering formatting:</p>
<ul>
<li>It's not good to write two formulas immediately after each other. If you write "$x,y$ $x \neq y$ such that..." this is quite difficult to read. You should separate such things at least a little, e.g. "$x$, $y$; $x \neq y$ such that..."; in my opinion it is much better to separate them with text, e.g. "$x$, $y$ such that $x \neq y$ and..."</li>
</ul>
|
7,575 | <p>How could I display text that flashed red for a half second or so and then reverted to black? (Or was put in bold and reverted to normal, etc.)</p>
| kglr | 125 | <pre><code> PrintTemporary[Style["text", Red]]; Pause[2]; "text"
</code></pre>
<p><strong>EDIT:</strong> This looks too plain in comparison to all the cool effects that can be achieved with methods used in other answers. The following is an attempt to arm-twist <code>PrintTemporary</code> to perform similar tricks:</p>
<pre><code> Scan[(temp = PrintTemporary[#]; Pause[.1]; NotebookDelete[temp]) &,
Style[Rotate["text", #[[1]]], Bold, 60, FontColor -> #[[2]],
FontFamily -> "SketchFlow Print"] & /@
NestList[{Plus[#[[1]], 20 Degree], Darker[#[[2]]]} &, {0 Degree,Red}, 18]]; "text"
</code></pre>
<p>(Note: try it in the last cell of your notebook to avoid flickering cell sizes). </p>
|
4,008,152 | <p>Question itself: Throw a coin one million times. What is the expected number of sequences of six tails, if we <strong>do not allow overlap</strong>?</p>
<p>I know when overlap is allowed, the answer is (1,000,000-5)/(2^6). Not sure if we can just do (1,000,000-5)/(2^6) divided by 6 if overlap is not allowed?</p>
<p>Some clarifications:</p>
<p>For example, if part of the sequence is "one H, nine T, then one H", we would count 1 sequence of six tails. (When overlap is allowed, we can count three times because each of the first 3 T can start a sequence of six tails; However, this question does not allow overlap, so 9T can only be counted as containing <strong>one</strong> sequence of six tails)</p>
<p>If part of the sequence is "one H, thirteen T, then one H", we would count 2 sequences of six tails.</p>
| Rana | 43,899 | <p>If you are considering non-overlapping occurrences of <span class="math-container">$6$</span> consecutive Tails, then the occurrence of <span class="math-container">$6$</span> consecutive Tails is a renewal event, so the whole might of renewal theory may be applied. See Feller, Vol. I, for more details.</p>
<p>I am copying some the stuff from there. Let <span class="math-container">$ N_n $</span> be the number of occurrences up to the <span class="math-container">$n$</span>-th trial. Then, we have
<span class="math-container">\begin{equation*}
E(N_n) \sim \frac{n}{\mu}, \text{ and } \text{Var}(N_n) \sim \frac{ n \sigma^2}{ \mu^3}
\end{equation*}</span>
where we have
<span class="math-container">\begin{equation*}
\mu = \frac{ 1 - q^6 }{ p q^6 } \text{ and } \sigma^2 = \frac{ 1 }{ ( p q^6)^2 } - \frac{13}{ p q^6 } - \frac{ q }{ p^2 }
\end{equation*}</span>
and <span class="math-container">$ a_n \sim b_n $</span> if <span class="math-container">$ a_n / b_n \to 1 $</span> as <span class="math-container">$ n \to \infty $</span>.</p>
<p>Further, there is a central limit theorem (again see Feller) which provides fantastic probability estimates for rare events.</p>
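<p>For the concrete question ($n=10^6$ fair flips, runs of six Tails, $p=q=\tfrac12$), $\mu = (1-2^{-6})/(2^{-1}\cdot 2^{-6}) = 126$, so $E(N_n)\approx 10^6/126 \approx 7937$. A small exact dynamic-programming sketch over the current tail-streak length (which resets after each completed run, matching the non-overlapping count) confirms this:</p>

```python
def expected_runs(n, run_len=6, p_tail=0.5):
    # dist[s] = probability that the current (non-overlapping) tail streak is s
    dist = [0.0] * run_len
    dist[0] = 1.0
    expected = 0.0
    for _ in range(n):
        new = [0.0] * run_len
        for s, pr in enumerate(dist):
            if pr == 0.0:
                continue
            new[0] += pr * (1 - p_tail)      # heads resets the streak
            if s == run_len - 1:
                expected += pr * p_tail      # sixth tail completes a run
                new[0] += pr * p_tail        # non-overlapping: start over
            else:
                new[s + 1] += pr * p_tail
        dist = new
    return expected

mu = (1 - 0.5**6) / (0.5 * 0.5**6)           # = 126
assert abs(expected_runs(10**6) - 10**6 / mu) < 2.0
```

<p>The exact expectation agrees with the renewal approximation $n/\mu$ to within a fraction of a run.</p>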
|
18 | <p>Some teachers make memorizing formulas, definitions and others things obligatory, and forbid "aids" in any form during tests and exams. Other allow for writing down more complicated expressions, sometimes anything on paper (books, tables, solutions to previously solved problems) and in yet another setting students are expected to take questions home, study the problems in any way they want and then submit solutions a few days later.</p>
<p>Naturally, the memory-oriented problem sets are relatively easier (modulo time limit), encourage less understanding and more proficiency (in the sense that the student has to be efficient in his approach). As the mathematics is in big part thinking, I think that it is beneficial to students to let them focus on problem solving rather than recalling and calculating (i.e. designing a solution rather than modifying a known one). There is a huge difference between work in time-constrained environment (e.g. medical teams, lawyers during trials, etc.) where the cost of "external knowledge" is much higher and good memory is essential.
However, math is, in general (things like high-frequency trading are only a small part math-related professions), slow.</p>
<p>On the other hand, memory-oriented teaching is far from being a relic of the past. Why is this so? As this is a broad topic, I will make it more specific:</p>
<p><strong>What are the advantages of memory-oriented teaching?</strong></p>
<p><strong>What are the disadvantages of allowing aids during tests/exams?</strong></p>
| Willie Wong | 125 | <p>Many of the disadvantages of allowing aids can be, in principle, resolved by </p>
<ol>
<li>requiring that the only aids the students have are handwritten by themselves and </li>
<li>setting a length limit. (I've seen somewhere between one index card [for non-Americans: a piece of card around 10 x 15 cm] and 4 pages of A4 [for Americans: 4 pages of letter paper].)</li>
</ol>
<p>Ideally this will cause the student to be structured in their preparation (choose the important things only!) and the copying by hand of the relevant formulae should in theory reinforce learning (at least memorisation). </p>
<p>In practice, however, some common downsides to this include</p>
<ol>
<li>Students grow to rely on the crib sheets. For more creative problems you would end up with students who spend the entire exam trying vainly to apply random items from their crib sheet to the problem, instead of actually trying to solve the problem. </li>
<li>There's transcription error somewhere: either when the student copied the formulae from his notes or textbook to the crib sheet, or perhaps his handwriting is bad enough that he misreads it during the exam. </li>
</ol>
|
98,700 | <blockquote>
<p>Suppose you wanted to write the number 100000. If you type it in ASCII, this would take 6 characters (which is 6 bytes). However, if you represent it as unsigned binary, you can write it out using 4 bytes.</p>
</blockquote>
<p>(from <a href="http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/BitOp/asciiBin.html" rel="nofollow">http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/BitOp/asciiBin.html</a>)</p>
<p>My question: $\log_2 100,000 \approx 17$. So that means I need <code>17</code> bits to represent <code>100,000</code> in binary, which requires 3 bytes. So why does it say 4 bytes?</p>
| Per Alexandersson | 934 | <p>As Joey tells you, the reason is that numbers are usually stored in the data type "integer", which (almost) always comes in 32-bit variants.
The processor is tailor-made to add/subtract/multiply integers of exactly this size;
otherwise you would need a different circuit for every combination of operand bit-widths, which would be a huge waste of space.</p>
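<p>To make the arithmetic concrete, here is a short illustrative Python sketch comparing the minimal bit count with the fixed-width storage described above:</p>

```python
import math

n = 100_000
bits_needed = n.bit_length()             # minimal bits to represent n
bytes_needed = math.ceil(bits_needed / 8)

assert bits_needed == 17                 # ceil(log2(100000)) = 17
assert bytes_needed == 3                 # 17 bits fit in 3 bytes...
# ...but a fixed-width 32-bit unsigned integer still occupies 4 bytes,
# which is why the quoted text says 4.
```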
|
663,736 | <p>For a very large number n, how many divisibility tests are required to establish if its prime?</p>
<p>I know this has something to do with the Golden Number, but I can't figure out what. I did try searching for an answer but not much luck.</p>
<hr>
<p>!!EDIT!!
(It wont let me answer my own question for upto 8hours)</p>
<p>I found something posted by someone else on the primality test by golden ratio, although just like Fermat's probability test, it also fails at times. </p>
<blockquote>
<p>There is a primality test by Golden ratio that is used in conjunction with the Lucas N+1 primality test. It is based on the relation between Lucas numbers and Fibonacci numbers. Primality test by Golden ratio states that if </p>
<p>$g^p+(1-g)^p \equiv 1\mod p$ , where g is golden ration, is true then p is prime. In other words, if</p>
<p>$\frac{g^p+(1-g)^p-1}{p} $ divides wholly then p is prime. The expression </p>
<p>$g^p+(1-g)^p$ is a formula for the p-th Lucas number, i.e. </p>
<p>$g^p+(1-g)^p = L_p$. As a result, we can say that if p-th Lucas number minus 1 divides by p wholly then p is prime, i.e.
$ \forall p \in \mathbb{N}, \frac{L_p-1}{p}=a$ where a $\in \mathbb{N} \Rightarrow $ p is prime.</p>
<p>Aaaand it is not true. If you check a composite number 705 which is equal to 3*5*47:</p>
<p>$ \frac{L_{705}-1}{705} = \frac{g^{705} +(1-g)^{705}-1}{705} = 3.031556 \cdot 10^{144}$</p>
<p>$3.031556 *10^{144}$ is a whole number and the test fails. Fermat's primality test suffers from a similar problem. </p>
</blockquote>
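<p>The failure can be reproduced exactly, without the floating-point rounding in the computation above, by working with the recurrence $L_0=2$, $L_1=1$, $L_k=L_{k-1}+L_{k-2}$ reduced mod $n$; a short Python sketch:</p>

```python
def lucas_mod(k, n):
    # L_0 = 2, L_1 = 1, L_k = L_{k-1} + L_{k-2}, all reduced mod n
    a, b = 2, 1
    for _ in range(k):
        a, b = b, (a + b) % n
    return a

# every odd prime p satisfies L_p ≡ 1 (mod p) ...
assert all(lucas_mod(p, p) == 1 for p in [3, 5, 7, 11, 13, 101])
# ... but so does the composite 705 = 3 * 5 * 47, so the test is not conclusive
assert 705 == 3 * 5 * 47
assert lucas_mod(705, 705) == 1
```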
| CorBrand | 126,846 | <p>Why not just check whether a given number is prime by simply attempting to divide it by the preceding prime numbers? No matter how big the number is, there cannot be that many you would need to check it against.
Other than that, unless you know a formula that can tell you what any given prime number may be, there can be no way to test whether a given number is prime other than trial and error.</p>
|
3,104,890 | <p>I'm trying to solve the following problem:</p>
<blockquote>
<p>Ten people are sitting around a round table. Three of them are chosen
at random to give a presentation. What is the probability that the
three chosen people were sitting in consecutive seats?</p>
</blockquote>
<p>I got the wrong answer but cannot see the error in my reasoning. This is how I see it:</p>
<p>1) the selection of the first person is unconstrained.</p>
<p>2) the next person must be selected from the 2 spots adjacent to the first. So this choice is limited to <code>2/9</code> of the possible choices.</p>
<p>3) the third choice must be taken from the one free spot next to the first person chosen, or the one free spot next to the 2nd person chosen. So this choice is limited to <code>2/8</code> of the possible choices.</p>
<p>4) multiplying these we get:</p>
<pre><code>2/9 * 2/8 = 1/18
</code></pre>
<p>However, the official answer is:</p>
<blockquote>
<p>Let's count as our outcomes the ways to select 3 people without regard
to order. There are <span class="math-container">$\binom{10}{3} = 120$</span> ways to select any 3 people.
The number of successful outcomes is the number of ways to select 3
consecutive people. There are only 10 ways to do this -- think of
first selecting the middle person, then we take his or her two
neighbors. Therefore, the probability is <span class="math-container">$\frac{10}{120} =
> \boxed{\frac{1}{12}}$</span>.</p>
</blockquote>
| drhab | 75,923 | <p>For your mistake see the answer of Arthur.</p>
<p>A bit more concise solution:</p>
<p>Once the first person has been chosen, $2$ more people must be chosen out of the remaining $9$. </p>
<p>In <span class="math-container">$3$</span> of these cases the three chosen persons will sit consecutively so the probability on that is:<span class="math-container">$$\frac3{\binom92}=\frac1{12}$$</span></p>
|
629,275 | <p>A function $f$ is defined on an open set $D$ of $\mathbb R^{2}$ is called a differentiable at a point $x\in D$ if there is a vector $m \in \mathbb R^{2} $ such that
$$\lim_{h\to 0} \frac{f(x+h)-f(x)-m\cdot h}{|h|}=0.$$</p>
<p><strong>My questions are</strong>:
(1) What is a geometric interpretation of $f:\mathbb R^{2} \to \mathbb R$ is a differentiable at a point in $D$ ? </p>
<p>( Let $f:\mathbb R^{2} \to \mathbb R$ be such that
$f(x, y)= \frac{x^{3}y}{x^{4}+y^{2}}$ for $(x,y)\not = (0,0)$ and $f(0,0)= 0$. Notice that all the directional derivatives of $f$ exist at $(0, 0)$ and they are all equal there, but $f$ nevertheless fails to be differentiable at $(0,0)$. )</p>
<p>(2) What is a geometric interpretation of $f:D\subset \mathbb R^{n}\to \mathbb R^{m}$ is differentiable at point in $D$ ?</p>
| Matheman | 117,904 | <p>(1) A function $f: D \rightarrow \mathbb{R}$ is differentiable at an interior point $x_0$ of $D$ if $\nabla f(x_0)$ exists and $$f(x)=f(x_0)+ \nabla f(x_0)\cdot(x-x_0) + o\left( \left\| x-x_0 \right\| \right), \quad x \rightarrow x_0$$
holds (here $x,x_0$ are vectors of two or more dimensions).
Geometrically, for one-variable functions, differentiability at $x_0$ implies the existence of the tangent line to the graph of $f$ at the point $P_0=(x_0,f(x_0))$: $$t(x)=f(x_0) + f'(x_0)(x-x_0).$$
The multivariate case is more complex, and the existence of $\nabla f(x_0)$, or of the tangent plane to the graph of $f$ at $P_0$, does not guarantee the validity of the definition above.
As in your exercise, not even the existence of every directional derivative at $x_0$ guarantees differentiability.
For $n=2$ the tangent plane to the graph of $f$ at $P_0=(x_0,y_0,f(x_0,y_0))$ is defined by $$z=f(x_0,y_0)+\frac{\partial f}{\partial x}(x_0,y_0)(x-x_0)+\frac{\partial f}{\partial y}(x_0,y_0)(y-y_0).$$ This plane best approximates the graph of $f$ on a neighborhood of $P_0$. The differentiability of $f$ at $(x_0,y_0)$ means that this tangent plane approximates the graph to first order; in some cases this property fails to hold. </p>

<p>(2) This other function is a vector-valued map. We have differentiability if each component of $f$ is differentiable at $x_0 \in D$, i.e.
$$f_{i}(x)=f_i(x_0)+\nabla f_i(x_0) \cdot (x-x_0) +o\left( \left\| x-x_0 \right\| \right), \quad x \rightarrow x_0,$$ where the scalar product $\nabla f_i(x_0) \cdot (x-x_0)$ is the matrix product between the row vector $\nabla f_i(x_0)$ and the column vector $(x-x_0)$. The $m \times n$ matrix $Jf(x_0)$ whose rows are the vectors $\nabla f_i(x_0)$ is called the Jacobian matrix. Putting $\Delta x=x-x_0$, the component equations combine into
$$f(x)=f(x_0)+J f(x_0)\,\Delta x +o\left( \left\| \Delta x \right\| \right), \quad x \rightarrow x_0.$$ Up to infinitesimals of order greater than one, the formula says that the increment $\Delta f= f(x_0 + \Delta x)-f(x_0)$ is approximated by the value of the differential $J f(x_0)\,\Delta x$. In other words, the error $\Delta f - Jf(x_0)\,\Delta x$ goes to zero more quickly than $\Delta x$ as $x \rightarrow x_0$.</p>
|
1,130,142 | <p><img src="https://i.stack.imgur.com/NXr1V.png" alt="enter image description here"></p>
<p>This is how I solved this problem but I have some reservations regarding my answer.</p>
<p>1st house = x ; 2nd house = 3x ; 3rd house = [3x + x] - 2610</p>
<p>12(x) + 12(3x) + 12(4x - 2610) = 186,390</p>
<p>96x = 155,070</p>
<p>x = 1615.3125</p>
<p>__</p>
<p>4(1615.3125) - 2610 = 3,851.25</p>
<p>I answered 'none of the above'. Is my solution correct? How about my answer? Did I miss something? If there is some kind of shortcut in answering this problem, please let me know.</p>
<p>PS I am a college student having troubles with word problems.</p>
| Kapoios | 37,324 | <p>Your system of equations is:</p>
<p>$y=3x$,
$z=4x-2,610$,
$6x+12y+12z=186,390$,</p>
<p>where $x$ is the first house's monthly rent and $y$, $z$ are the monthly rents for the second and the third house respectively.</p>
<p>Substituting the first two equations into the third and doing the calculations gives
$x=2,419$.</p>
<p>Then, substituting this value of $x$ into the second equation gives $z=7,066$.</p>
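<p>The substitution can be double-checked mechanically; a small Python sketch using the system of equations stated above, with exact rational arithmetic:</p>

```python
from fractions import Fraction

# y = 3x, z = 4x - 2610, and 6x + 12y + 12z = 186390.
# Substituting: 6x + 36x + 48x - 12*2610 = 186390, so 90x = 217710.
x = Fraction(186390 + 12 * 2610, 90)
y = 3 * x
z = 4 * x - 2610

assert x == 2419 and y == 7257 and z == 7066
assert 6 * x + 12 * y + 12 * z == 186390   # the yearly total checks out
```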
|
363,391 | <p>In <a href="https://math.stackexchange.com/q/2602271/682690">this MathSE question</a>,
classification of finite simple groups with Abelian Sylow 2-subgroups,
credit is rightly given to John Walter. But in the introduction to his paper, Walter explicitly states that "It seems to be a very difficult problem to show that these are the only examples." Is there a later reference, perhaps earlier than the complete classification theorem, that states that Walter, et al. found them all?
Thanks for your help.</p>
| JCA | 159,448 | <p>It is described in Gorenstein's book on finite simple groups.</p>
|
254,253 | <blockquote>
<p>If the only contents of a container are 10 disks that are each numbered with a different positive integer from 1 through 10, inclusive. If 4 disks are to be selected one after the other, with each disk selected at random and without replacement, what is the probability that the range of the numbers on the disks selected is 7?</p>
</blockquote>
<p>So I don't understand why my solution doesn't work:</p>
<p>I figured if there are 4 draws, then to pick, say, $1$, $8$, and two numbers between $1$ and $8$, the probability would be $(1/10)*(1/9)*(6/8)*(5/7)$. You have a $1/10$ chance to pick $1$. Since there's no replacement, you have a $1/9$ chance to pick $8$. Then $6/8$ for the integers $2,3,4,5,6,$ and $7$. Then $5/7$ for another one.</p>
<p>Then just multiply by three. But apparently I don't get anything close to the solution. Could someone please explain why this is? </p>
<p><strong>Update</strong>: I finally get it. Thank you all for the responses!</p>
| André Nicolas | 6,312 | <p>There are $\dbinom{10}{4}$ equally likely ways to pick $4$ numbers. The number of ways to pick $1$, $8$, and two from the $6$ numbers from $2$ to $7$ inclusive is $\dbinom{6}{2}$. </p>
<p>Then multiply by $3$.</p>
<p><strong>Remark:</strong> In your solution, implicitly the numbers are being obtained in some specific order, so the probability obtained is much too low. One can correct for this by multiplying by the number of permutations that were double counted. That can be done, though the right factor may not be obvious. </p>
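<p>The counting argument is easy to verify exhaustively; a short Python sketch over all $\binom{10}{4}$ unordered draws:</p>

```python
from fractions import Fraction
from itertools import combinations

draws = list(combinations(range(1, 11), 4))
favourable = sum(1 for d in draws if max(d) - min(d) == 7)

# 3 choices of (min, max) in {(1,8), (2,9), (3,10)}, times C(6,2) = 15 each
assert favourable == 3 * 15 == 45
assert len(draws) == 210
assert Fraction(favourable, len(draws)) == Fraction(3, 14)
```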
|
2,174,061 | <p>in $\Delta ABC$ if the $AD\perp BC$,$D\in BC$,and such $$|BC|=2|AD|$$
show that
$$\dfrac{|AB|}{|AC|}\le\sqrt{2}+1$$
<a href="https://i.stack.imgur.com/SXDvI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SXDvI.png" alt="enter image description here"></a></p>
<p>since
$$\cot{B}+\cot{C}=\dfrac{BD}{AD}+\dfrac{CD}{AD}=2$$
so
$$\dfrac{AB}{AC}=\dfrac{\sin{C}}{\sin{B}}$$</p>
| Mick | 42,351 | <p>First of all, I must point out that the picture is wrongly drawn. The reason is that if AB is shorter than AC, the LHS of the inequality to be proved is less than 1 and hence obviously less than $\sqrt 2 + 1$, the RHS. In that case there is nothing to prove. Therefore, we may assume $AB \ge AC$ ….. (1).</p>
<p><a href="https://i.stack.imgur.com/342GE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/342GE.png" alt="enter image description here"></a></p>
<p>If we let $AD = 1$ and $DC = x$ for some $x > 0$, then $BD = 2 – x$ and $AC = \sqrt {1 + x^2}$.</p>
<p>(1) implies $\sqrt {(2 – x)^2 + 1} \ge \sqrt {1 + x^2}$.</p>
<p>After simplifying, we arrive at the first conclusion --- $0 \le x \le 1$. Then, $AC_{min} = 1$ ….. (2).</p>
<p>Write $E := AB - AC$. After some simplification, the inequality to be proved is equivalent to $\sqrt 2\, AC \ge E$.</p>
<p>It will be true if we can prove $\sqrt 2\, AC \ge \sqrt 2\, AC_{min} \ge E_{max} \ge E$.</p>
<p>Or simply, $\sqrt 2\, AC_{min} \ge E_{max}$ …. (*)</p>
<p>From (2), LHS = $\sqrt 2$ …. (3)</p>
<p>RHS $= \max_x [\sqrt {(2 – x)^2 + 1} - \sqrt {1 + x^2}] \le \max_x[\sqrt {(2 – x)^2 + 1}] - \min_x [\sqrt {1 + x^2}] = \sqrt 5 - 1$ …. (4)</p>
<p>(*) is true by comparing (3) and (4), since $\sqrt 2 > \sqrt 5 - 1 \approx 1.236$.</p>
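<p>A numerical sanity check (not a proof): placing $B=(0,0)$, $C=(2,0)$, $D=(x,0)$, $A=(x,1)$ so that $AD=1$ and $BC=2\,AD$, the ratio $AB/AC$ stays below $\sqrt2+1$ for every $x\in[0,2]$; numerically its largest value is $\sqrt5\approx 2.236$, reached as $D$ approaches $C$. A short Python sketch:</p>

```python
import math

def ratio(x):
    # B=(0,0), C=(2,0), D=(x,0), A=(x,1): AD = 1 and BC = 2 = 2*AD as required
    ab = math.hypot(x, 1)       # |AB|
    ac = math.hypot(2 - x, 1)   # |AC|
    return ab / ac

best = max(ratio(k / 10000) for k in range(20001))   # x sweeps [0, 2]
assert best < 1 + math.sqrt(2)            # the inequality to be shown
assert abs(best - math.sqrt(5)) < 1e-9    # extreme value sqrt(5), at x = 2
```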
|
3,363,944 | <p>A group consisting of <span class="math-container">$3$</span> men and <span class="math-container">$6$</span> women attends a prizegiving ceremony. If <span class="math-container">$ 5$</span> prizes are awarded at random to members of the group, find the probability that exactly <span class="math-container">$3 $</span> of the prizes are awarded to women if<br>
a) There is a restriction of at most one prize per person<br>
b) There is no restriction on the number of prizes per person</p>
<p>I did part a) and got the same result as the solution but I failed at getting the same answer for part b). When I looked at the working outs of both parts, I noticed a significant difference in the ways two parts are solved. </p>
<p>This is the working out for part a) (which is also similar to my working out)
a) <span class="math-container">$\frac{6C3\times 3C2}{9C5} = \frac{10}{21}\ $</span></p>
<p>And this is the working out of part b)
b) <span class="math-container">$\ 5C3 \times (\frac{3}{9})^{2} \times (\frac{6}{9})^{3}\ = \frac{80}{243}\ $</span></p>
<p>I'm so confused why part b) is done in such a different way than part a) and as a student, how can I know when to consider the numerator and denominator separately like part a) and when to find the probability of each component and times all of them together like part b)? Also, can we solve part b) in a similar way like part a)? Does anyone have any tips on how to distinguish these sorts of methods? </p>
<p>Thank you very much for helping.</p>
| Oliver Kayende | 704,766 | <p>Part b): Assuming the prizes are identical, there are <span class="math-container">$${9\choose 1}+4*{9\choose 2}+6*{9\choose 3}+4*{9\choose 4}+{9\choose 5}=c={13\choose 5}$$</span> total ways of distributing them; the $n$th term is the count when there are $n$ winners. Exactly <span class="math-container">$$({6\choose 3}+2*{6\choose 2}+{6\choose 1})*({3\choose 2}+{3\choose 1})=56*6=336$$</span> of these ways are desirable; the number of ways of distributing 3 prizes among the 6 women, multiplied by the number of ways of distributing 2 among the 3 men. So, the probability is <span class="math-container">$$336/c$$</span>
<span class="math-container">$c$</span> is the number of ways 5 can be written as the sum of 9 non-negative integers with respect to order ; i.e. the number of non-negative integer solutions to <span class="math-container">$$\sum_{i=1}^9 x_i =5$$</span> i.e. the coefficient of <span class="math-container">$x^5$</span> in the polynomial <span class="math-container">$$(\sum_{i=0}^5 x^i)^9$$</span></p>
<p>In general, the number of ways of distributing $n$ identical objects to $m$ people is <span class="math-container">$${{m+n-1}\choose {n}}$$</span></p>
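<p>For part b) with distinguishable prizes, which is the model behind the book's $80/243$, exhaustive enumeration over all $9^5$ assignments confirms the binomial computation; a short Python sketch:</p>

```python
from fractions import Fraction
from itertools import product

people = range(9)          # persons 0..5 are women, 6..8 are men
favourable = total = 0
for assignment in product(people, repeat=5):   # each prize to one person
    total += 1
    if sum(1 for p in assignment if p < 6) == 3:   # exactly 3 prizes to women
        favourable += 1

assert total == 9**5
assert favourable == 19440                     # C(5,3) * 6^3 * 3^2
assert Fraction(favourable, total) == Fraction(80, 243)
```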
|
2,261,927 | <p>How to get alternative form from equation 1)</p>
<p>$$ 1) -a^2 + a + b^2 -b $$</p>
<p>to equation 2)</p>
<p>$$ 2) (a-b)(a+b-1)$$</p>
| Ian Miller | 278,461 | <p>$$-a^2+a+b^2-b=-(a^2-b^2)+(a-b)$$</p>
<p>$$=-(a-b)(a+b)+(a-b)$$</p>
<p>To make it more obvious let $C=a-b$</p>
<p>$$=-C(a+b)+C$$</p>
<p>$$=C\big(-(a+b)+1\big)$$</p>
<p>$$=-C\big((a+b)-1\big)$$</p>
<p>$$=-(a-b)(a+b-1)$$</p>
|
2,261,927 | <p>How to get alternative form from equation 1)</p>
<p>$$ 1) -a^2 + a + b^2 -b $$</p>
<p>to equation 2)</p>
<p>$$ 2) (a-b)(a+b-1)$$</p>
| gue | 354,959 | <p>Another way would be polynomial division. </p>
<p>$-a^2 + a + b^2 - b : (a-b) = -a -b +1$</p>
<p>$a^2 - ab $</p>
<hr>
<p>$ -ab +a +b^2 -b$</p>
<p>$ ab -b^2 $</p>
<hr>
<p>$ a - b$</p>
|
4,131,747 | <p>I am having trouble with this problem in my Linear Algebra review:</p>
<blockquote>
<p>Find an equation for the plane parallel to <span class="math-container">$2x-y+2z=4 $</span> such that the
point <span class="math-container">$(3,2,-1) $</span> is equidistant from both planes.</p>
</blockquote>
<p>The answer is <span class="math-container">$2x-y+2=0$</span> . How would you go about finding the <span class="math-container">$0$</span> ?</p>
| David K | 139,123 | <p>In order for the point <span class="math-container">$(3,2,−1)$</span> to be equidistant from two distinct parallel planes, it must be midway between them. Furthermore, one plane is a reflection of the other plane through the point <span class="math-container">$(3,2,−1).$</span></p>
<p>The point <span class="math-container">$P_2 = (x_2,y_2,z_2)$</span> is in the second (unknown) plane if and only if there is a corresponding point <span class="math-container">$P_1 = (x_1,y_1,z_1)$</span> in the plane <span class="math-container">$2x−y+2z=4$</span> such that the point <span class="math-container">$(3,2,−1)$</span> is the midpoint of the line segment <span class="math-container">$\overline{P_1P_2}.$</span>
That is,</p>
<p><span class="math-container">\begin{align}
\frac{x_1 + x_2}{2} &= 3, \\[3pt]
\frac{y_1 + y_2}{2} &= 2, \\[3pt]
\frac{z_1 + z_2}{2} &= -1.
\end{align}</span></p>
<p>Solve these equations for the coordinates of <span class="math-container">$P_1$</span>:</p>
<p><span class="math-container">\begin{align}
x_1 &= 6 - x_2, \\
y_1 &= 4 - y_2, \\
z_1 &= -2 - z_2.
\end{align}</span></p>
<p>We know that <span class="math-container">$(x_1,y_1,z_1)$</span> lies in the given plane, so
<span class="math-container">$$2x_1 − y_1 + 2z_1 = 4.$$</span></p>
<p>Use the equations above to substitute for <span class="math-container">$x_1,$</span> <span class="math-container">$y_1,$</span> and <span class="math-container">$z_1$</span>:</p>
<p><span class="math-container">$$ 2(6 - x_2) − (4 - y_2) + 2(-2 - z_2) = 4. $$</span></p>
<p>That's an equation of a plane, and it is the desired plane.
But to put it in the form that is probably expected, we can simplify:</p>
<p><span class="math-container">\begin{align}
12 - 2x_2 − 4 + y_2 - 4 - 2z_2 &= 4, \\
- 2x_2 + y_2 - 2z_2 &= 0, && \text{collect all the constant terms}\\
2x_2 - y_2 + 2z_2 &= 0. && \text{multiply all terms by $-1$}\\
\end{align}</span></p>
<p>Done!</p>
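<p>The answer can also be verified directly: the point-plane distance formula gives equal distances from $(3,2,-1)$ to both planes. A short Python sketch:</p>

```python
import math

def distance(plane, point):
    # plane given as (a, b, c, d), meaning ax + by + cz = d
    a, b, c, d = plane
    x, y, z = point
    return abs(a * x + b * y + c * z - d) / math.sqrt(a * a + b * b + c * c)

p = (3, 2, -1)
d1 = distance((2, -1, 2, 4), p)   # given plane 2x - y + 2z = 4
d2 = distance((2, -1, 2, 0), p)   # derived plane 2x - y + 2z = 0

assert math.isclose(d1, d2) and math.isclose(d1, 2 / 3)
```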
|
4,317,945 | <p>A function <span class="math-container">$h : A → \mathbb{R}$</span> is Lipschitz continuous if <span class="math-container">$\exists K$</span> s.t.</p>
<p><span class="math-container">$$|h(x) - h(y)| \leq K \cdot |x - y|, \forall x, y \in A$$</span></p>
<p>Suppose that <span class="math-container">$I = [a, b]$</span> is a closed, bounded interval; and <span class="math-container">$g : I → \mathbb{R}$</span> is differentiable on <span class="math-container">$I$</span> and the function <span class="math-container">$G = Dg = g' : I → \mathbb{R}$</span> is continuous. Prove that <span class="math-container">$g$</span> is Lipschitz continuous on <span class="math-container">$I$</span>.</p>
| TheSilverDoe | 594,484 | <p>Let <span class="math-container">$f(x)=x^2(x+1)^n$</span>. One has <span class="math-container">$$f(x)=x^2(x+1)^n = x^{n+2} + nx^{n+1} + \frac{n(n-1)}{2}x^n + P(x)$$</span></p>
<p>where <span class="math-container">$P$</span> is a polynomial of degree <span class="math-container">$n-1$</span>. Hence
<span class="math-container">$$\boxed{f^{(n)}(x)=\frac{(n+2)!}{2}x^2 + n(n+1)!x + \frac{n(n-1)n!}{2}}$$</span></p>
|
600,404 | <p>I'm trying to study line bundle over $S^2$. <a href="https://mathoverflow.net/questions/113924/line-bundle-on-s2">In this post</a> was outlined the method based on clutching functions. But now I'm interesting in another approach. </p>
<p>For the sphere there are two charts: the upper hemisphere and the lower hemisphere, with intersection $[-\epsilon,\epsilon]\times S^1$. For the upper and lower hemispheres it is well known that bundles over these spaces are trivial (any bundle over a contractible base is trivial). So to prove that a
line bundle over $S^2$ is trivial we must extend a trivialization from the upper hemisphere (for example) to the lower hemisphere through the "border" $[-\epsilon,\epsilon]\times S^1$. </p>
<p>As I understand it, it is sufficient to extend the trivialization from the "border" to the center of the "disk". (I think it is possible to use a partition of unity here, but I'm not sure.)</p>
<p>I can't formalize this reasoning.</p>
| DonAntonio | 31,254 | <p>Recognize the function evaluated at some partition of some interval...</p>
<p>$$\lim_{n\to\infty}\;\frac1n\sum_{k=1}^n\left(\frac kn\right)^p=\int\limits_0^1x^p\,dx=1\;\ldots$$</p>
<p>The first equality above stems from the fact that we <strong>know</strong> that $\;x^p\;$ is integrable on $\;[0,1]\;$ and thus we can choose the partition and the points in each subinterval as we wish to evaluate the Riemann sums.</p>
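<p>Numerically, the Riemann sum $\frac1n\sum_{k=1}^n\left(\frac kn\right)^p$ indeed approaches $\int_0^1 x^p\,dx=\frac{1}{p+1}$; a tiny Python sketch:</p>

```python
def riemann_sum(p, n):
    # (1/n) * sum_{k=1}^{n} (k/n)^p, a right-endpoint Riemann sum of x^p on [0,1]
    return sum((k / n) ** p for k in range(1, n + 1)) / n

for p in [1, 2, 3.5]:
    exact = 1 / (p + 1)               # integral of x^p over [0, 1]
    assert abs(riemann_sum(p, 100_000) - exact) < 1e-4
```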
|
2,781,801 | <p>When asked to evaluate $g$ at the point specified above we would get $\dfrac{1}{e} \cdot \log_e(\frac{1}{\sqrt e})$ and that evaluates to some -0.18393... but the correct answer is -1/2e. How does it get simplified to that?</p>
| AHusain | 277,089 | <p>$$
\begin{eqnarray*}
\frac{1}{e} * \ln ( \frac{1}{\sqrt{e}}) &=& \frac{1}{e} * \ln ( e^{-1/2})\\
&=& \frac{1}{e} * \frac{-1}{2} \ln ( e)\\
&=& \frac{1}{e} * \frac{-1}{2} \\
&=& \frac{-1}{2e} \\
\end{eqnarray*}
$$</p>
|
2,567,332 | <p>A Greek urn contains a red, blue, yellow, and orange ball. A ball is drawn from the urn at random and then replaced. If one does this $4$ times, what is the probability that all $4$ colors were selected?</p>
<p>I approached this question by doing $(1/4)^4$ because there's always a $1/4$ chance of selecting a specific color ball if it's replaced. I also tried considering the case where the correct ball was not selected, so I did $(3/4)^4$, but that didn't work either. What am I doing wrong?</p>
| visitor | 401,140 | <p>The existing solutions provide the correct probability, but do not directly answer the question "What am I doing wrong?"</p>
<p>$(1/4)^4$ is the probability of a <em>specific</em> sequence of draws such as:</p>
<p>red, blue, yellow, orange</p>
<p>blue, yellow, orange, red</p>
<p>yellow, orange, blue, red</p>
<p>The event that "all 4 colors were selected" would occur if <em>any</em> of these sequences occurred. So we must count the number of such sequences (4! = 24) and add up their probabilities, which yields $\displaystyle\frac{4!}{4^4}$</p>
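<p>The count-and-divide argument is easy to check empirically. Below is a minimal Python simulation (my own sketch, not part of the original answer; the function name <code>all_four_colors</code> is mine) that estimates the probability and compares it with $\frac{4!}{4^4} = \frac{24}{256} = 0.09375$:</p>

```python
import random
from math import factorial

def all_four_colors(trials, seed=0):
    """Estimate P(all 4 colors appear in 4 draws with replacement)."""
    rng = random.Random(seed)
    colors = ["red", "blue", "yellow", "orange"]
    # A trial succeeds when the set of 4 drawn colors has 4 distinct members.
    hits = sum(
        len({rng.choice(colors) for _ in range(4)}) == 4
        for _ in range(trials)
    )
    return hits / trials

exact = factorial(4) / 4**4          # 24/256 = 0.09375
estimate = all_four_colors(100_000)  # should land close to 0.09375
```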
|
2,413,891 | <blockquote>
<p><strong>Question :</strong> Evaluate - $$\int_{0}^{1}2^{x^2+x}\mathrm dx$$</p>
</blockquote>
<p><strong>My Attempt :</strong> First I tried to evaluate the indefinite integral of $2^{x^2+x}$ in order to put in the limits $0$ and $1$ later on, but couldn't integrate it. Then I checked on WA and came to know that its elementary antiderivative doesn't exist. Now I moved on to using properties of definite integration such as $$\int_a^b f(x) \mathrm dx=\int_a^b f(a+b-x) \mathrm dx$$</p>
<p>But it couldn't help either. Can you please give me hint to proceed on this question?</p>
<p>P.S. - This is a high school level problem and therefore its solution shouldn't involve any special functions, such as Gaussian Integral etc.</p>
<p><strong>Edit</strong>: I asked my teacher about this question, and it was basically an approximation-based question. It was an MCQ-type question which had a "None of the above" option, and that was the correct answer, since the other options were made in such a way that they can be rejected by bounding this integral between two functions. For example we can use $$2^{x^2+x}<2^{2x} ~; ~x\in (0,1)$$ and thus be sure that this integral is less than $3/\ln(4)$.</p>
<p>Thanks all for devoting your time in my question!</p>
| Claude Leibovici | 82,404 | <p>If you cannot use special functions, then either numerical integration or approximation would be required.</p>
<p>For example, consider the Taylor expansion built around $x=\frac 12$ (mid point of the integration interval selected in order to tvoid promoting one of the bounds). You would get
$$2^{x^2+x}=2^{3/4}+2^{3/4} \left(x-\frac{1}{2}\right) \log (4)+2^{3/4}
\log (2) (1+\log
(4))\left(x-\frac{1}{2}\right)^2+O\left(\left(x-\frac{1}{2}\right)^3\right)$$ Integrate termwise to get
$$\int 2^{x^2+x}\,dx=2^{3/4} \left(x-\frac{1}{2}\right)+\frac{\left(x-\frac{1}{2}\right)^2 \log
(4)}{\sqrt[4]{2}}+\frac{1}{3} 2^{3/4} \left(x-\frac{1}{2}\right)^3 \log (2)
(1+\log (4))+O\left(\left(x-\frac{1}{2}\right)^4\right)$$ Use the bounds to get, as an approximation,
$$\int_0^1 2^{x^2+x}\,dx\approx\frac{24+\log ^2(4)+\log (4)}{12 \sqrt[4]{2}}\approx 1.91361$$ while Wolfram Alpha would give $\approx 1.93749$.</p>
<p>For sure, you could improve using more terms. For illustration purposes, suppose that we make the expansion to $O\left(\left(x-\frac{1}{2}\right)^n\right)$. We should get
$$\left(
\begin{array}{cc}
n & \text{result} \\
2 & 1.91361 \\
4 & 1.93589 \\
6 & 1.93741 \\
8 & 1.93749
\end{array}
\right)$$</p>
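<p>The table above is easy to reproduce numerically. The sketch below (my own check, not part of the original answer) compares the second-order formula with a composite Simpson's rule estimate of the integral:</p>

```python
from math import log

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k + 1) * h) for k in range(n // 2))   # odd nodes
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))      # even interior nodes
    return s * h / 3

f = lambda x: 2 ** (x * x + x)
numeric = simpson(f, 0.0, 1.0)                           # ≈ 1.93749
order2 = (24 + log(4) ** 2 + log(4)) / (12 * 2 ** 0.25)  # ≈ 1.91361
```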
|
1,566,471 | <p>Hi can someone please help?</p>
<p>I need to evaluate this indefinite integral:</p>
<p>$$\int \frac{(\ln x)^5}x dx$$</p>
<p>I know I need to use substitution, so if I let <em>u= x</em> but I can't figure out the antiderivative for the top portion.</p>
<p>Thank you!</p>
| spandan madan | 296,493 | <p>A mistake in your argument, as far as I understand:</p>
<p>If you are picking a random number from a continuous range such as [0,1], the probability of getting any exact number is zero, as the number of options is infinite, so your sample space is infinite. While this is mathematically correct, it has no practical meaning as such.</p>
<p>However, let's consider the claim that A and B are independent if and only if P(A∩B)=P(A)*P(B).</p>
<p>If P(A) is 0, then A is an impossible event that will never occur, say A = "the sun rises in the north". Let B be a possible event, like "the sun seemed orange in the evening today", so P(B) = P(Sun was orange in the evening) ≠ 0.</p>
<p>Now, A∩B means that the sun both rises in the north and is orange in the evening, which is impossible since the sun cannot rise in the north, so naturally P(A∩B)=0, even though only P(A) was zero. </p>
|
| <p>Let <span class="math-container">$A$</span> be a <span class="math-container">$k$</span>-dimensional nonsingular matrix with integer coefficients. Is it true that <span class="math-container">$\|A^{-1}\|_\infty \leq 1$</span>? How can I show that? Could you give me a counterexample? It is clear that <span class="math-container">$\|A^{-1}\|_{\infty}=\frac{1}{\min\{\|Ax\|_{\infty}:\|x\|_{\infty}=1\}}$</span>. My idea is to show that the minimum is attained at an integer point, so the denominator is bigger than $1$. Is my idea right?</p>
<p>Thank you very much!</p>
| Mauro ALLEGRANZA | 108,274 | <p>The <a href="https://iep.utm.edu/nat-ded/#H7" rel="nofollow noreferrer"><span class="math-container">$(\forall \text I)$</span> rule</a> is:</p>
<blockquote>
<p>if <span class="math-container">$\Gamma \vdash \varphi[x/a]$</span>, then <span class="math-container">$\Gamma \vdash \forall x \varphi$</span>, provided that parameter <span class="math-container">$a$</span> is “fresh” in the sense of having no other occurrences in <span class="math-container">$\Gamma , \varphi$</span></p>
</blockquote>
<p>The proviso is consistent with the intuitive meaning of the rule: if <span class="math-container">$\varphi$</span> holds of an object <span class="math-container">$a$</span> whatever, then it holds of every object.</p>
<p>The proviso is needed in order to avoid the fallacy: John is a Philosopher, therefore everything is a Philosopher.</p>
<p>In your wrong proof above, you have committed exactly this fallacy: the parameter <span class="math-container">$a$</span> [in your case: John] must not occur in <span class="math-container">$\Gamma$</span>. In your case <span class="math-container">$\Gamma = \{ P(\text {John}) \}$</span>.</p>
<p>In conclusion, the issue is: how can you prove <span class="math-container">$\vdash P(\text {John})$</span>?</p>
<p>Example: consider the first-order language of arithmetic with individual constants <span class="math-container">$0$</span> and <span class="math-container">$1$</span> and let <span class="math-container">$\mathsf {PA}$</span> the collection of <a href="https://en.wikipedia.org/wiki/Peano_axioms#First-order_theory_of_arithmetic" rel="nofollow noreferrer">first-order Peano axiom</a>.</p>
<p>We have: <span class="math-container">$\mathsf {PA} \vdash (0 \ne 1)$</span>,</p>
<p>Now, applying <span class="math-container">$(\forall \text I)$</span> to it, using <span class="math-container">$0$</span> as <span class="math-container">$\text {John}$</span>, we conclude with: <span class="math-container">$\mathsf {PA} \vdash \forall x (x \ne 1)$</span>.</p>
<p><em>Where is the mistake</em> ?</p>
|
2,110,286 | <p>Show that if $A$ and $B$ are subsets of a set $S$, then $\overline{A \cap B}=\overline{A}\cup \overline{B}$.</p>
<p>I tried to prove that $A \cap B=A \cup B$ because I didn't realize that the overline meant to prove it for the <em>closure</em> of the sets.</p>
<p>So, now I am confused about how to prove this for the closure. I cannot find it in my textbook, and some "similar" proofs online led me to conclude that $\overline{A \cap B}=\overline{A \cup B}$, but I somehow don't know if this is true, or how to prove it exactly. So, now I am not sure if I understand this principle at all.</p>
| Alex Mathers | 227,652 | <p>Like I said in my comment, I'm pretty sure that $\overline A$ is referring to the complement of $A$ in $S$. The way to prove this problem is to just blindly "chase elements":</p>
<p>Let $x\in\overline{A\cap B}$. Then $x\in S$ but $x\notin A\cap B$. Therefore $x\notin A$ or $x\notin B$. This precisely means $x\in\overline A\cup\overline B$, so $\overline{A\cap B}\subseteq\overline A\cup\overline B$.</p>
<p>I would encourage you do the other direction on your own. Just follow the same procedure I did above, and follow the definitions to show $\overline A\cup\overline B\subseteq\overline{A\cap B}$.</p>
|
373,958 | <p>Is $\sum_{n=1}^\infty(2^{\frac1{n}}-1)$ convergent or divergent?
$$\lim_{n\to\infty}(2^{\frac1{n}}-1) = 0$$
I can't think of anything to compare it against. The integral looks too hard:
$$\int_1^\infty(2^{\frac1{n}}-1)dn = ?$$
Root test seems useless as $\left(2^{\frac1{n}}\right)^{\frac1{n}}$ is probably even harder to find a limit for. Ratio test also seems useless because $2^{\frac1{n+1}}$ can't cancel out with ${2^{\frac1{n}}}$. It seems like the best bet is comparison/limit comparison, but what can it be compared against?</p>
| Justin | 72,616 | <p>See: <a href="http://forums.xkcd.com/viewtopic.php?f=17&t=48391" rel="nofollow">$\sum_{n=1}^\infty(2^{\frac1{n}}-1)$</a> </p>
<p>There are a few methods listed there, one being writing ${2^{\frac1n}}$ as a power series. The easiest to understand is probably the limit comparison test where $b_n = \frac1n$.</p>
<p>Paraphrasing <a href="http://forums.xkcd.com/viewtopic.php?f=17&t=48391#p1874150" rel="nofollow">Bjartr</a>:</p>
<blockquote>
<p>Let $m = \frac{1}{n}$, then we have $$\lim_{m\rightarrow0}\frac{2^m - 1}{m} = \frac{0}{0}$$ So we use L'hopital's Rule $$\lim_{m\rightarrow0}2^m\log(2) = \log(2) \neq 0$$ So $\sum_{n=1}^\infty(2^{\frac1{n}}-1)$ has the same behavior as $\sum_{n=1}^{\infty}\frac{1}{n}$ which diverges. Therefore: $\sum_{n=1}^\infty(2^{\frac1{n}}-1)$ is divergent</p>
</blockquote>
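<p>Numerically, the partial sums indeed grow like a constant multiple of the harmonic series. The block below (my own check, not from the linked thread) verifies that a dyadic piece of the tail, $\sum_{n=N+1}^{2N}\left(2^{1/n}-1\right)$, approaches $(\log 2)^2 \approx 0.4805$, exactly as $\sum_{n=N+1}^{2N}\frac{\log 2}{n}$ does:</p>

```python
from math import log

def dyadic_tail(N):
    """Sum of 2**(1/n) - 1 over the dyadic block n = N+1 .. 2N."""
    return sum(2.0 ** (1.0 / n) - 1.0 for n in range(N + 1, 2 * N + 1))

tail = dyadic_tail(100_000)
# For the comparison series log(2)/n this block tends to log(2)**2 ≈ 0.4805,
# so successive dyadic blocks contribute a fixed positive amount and the
# partial sums grow without bound.
```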
|
690,331 | <p>Does it make a different when you parametrize a counterclockwise full circle and a clockwise circle in the complex plane? </p>
<p>For example, I am looking at computing an integral $\int_\gamma {1\over{z+4}}dz$ where $\gamma$ is the circle of radius $1$, centered at $-4$, oriented <strong>counterclockwise.</strong></p>
<p>My parametrization look like this: $\gamma(t)=p+Re^{it}=-4+e^{it}, 0\leq t\leq 2\pi$. Would the parametrization look the same as well if the circle oriented clockwise? </p>
<p>I have the final answer for integral as $2\pi i$, which makes sense, would it be the same regardless? </p>
| anon | 11,763 | <p>$~$«$\displaystyle\sum_{i=1}^k E_i$ <em>is direct $\,\Leftrightarrow\,$ the $E_i$s intersect trivially pairwise</em> »$~$ is true for $V$ iff $k<3$ or $\dim V=1$.</p>
<p><em>Proof exercise</em>: Suffices to consider $k=3$, $\dim V=2$. Show if $V=\langle v,w\rangle$ then $\langle v\rangle,\langle w\rangle,\langle v+w\rangle$ is a counterexample to the claim. Notice this does not depend on the field of scalars for the space $V$.</p>
|
2,282,818 | <p>I'm getting $f(x)=2x+f(0)$ and $f(x)=f(0)-2x$ by setting $y=0$, but I'd like to verify. Am I right?</p>
| Just_to_Answer | 439,212 | <p>Another way to look at it is perhaps that the condition is equivalent to<br>
$$\left|\dfrac{f(x)-f(y)}{x-y}\right| = 2 \quad \quad \text{for } x \neq y$$
which says that the absolute value of the slope of the secant line through any pair of points $x$ and $y$ is always $2$. That is, the possible slopes of the secant lines are $\pm 2$.</p>
|
181,367 | <p>It is well known that compactness implies pseudocompactness; this follows from <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Heine%E2%80%93Borel_theorem">the Heine–Borel theorem</a>. I know that the converse does not hold, but what is a counterexample?</p>
<p>(A <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Pseudocompact_space"><strong>pseudocompact space</strong></a> is a topological space $S = \langle X,{\mathfrak I}\rangle$ such that every continuous function $f:S\to\Bbb R$ has bounded range.)</p>
| tomasz | 30,222 | <p>Note that sequential compactness also implies pseudocompactness, so any sequentially compact space which is not compact will work as well (the particular point topology is not sequentially compact, either, so this is a different kind of example).</p>
<p>For example, the Corson space $\Sigma([0,1]^\kappa)$ of sequences of length $\kappa$ with countable support is not compact for uncountable $\kappa$ (which is easy to see), but is sequentially compact (which is a bit harder to see). It is also completely regular Hausdorff, which makes it sort of a "stronger" example than particular point topology. $\Sigma(2^\kappa)$ should work fine for this, too.</p>
|
181,367 | <p>It is well known that compactness implies pseudocompactness; this follows from <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Heine%E2%80%93Borel_theorem">the Heine–Borel theorem</a>. I know that the converse does not hold, but what is a counterexample?</p>
<p>(A <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Pseudocompact_space"><strong>pseudocompact space</strong></a> is a topological space $S = \langle X,{\mathfrak I}\rangle$ such that every continuous function $f:S\to\Bbb R$ has bounded range.)</p>
| Austin Mohr | 11,245 | <p><a href="http://topology.jdabbs.com" rel="nofollow">$\pi$-Base</a>, a searchable version of Steen and Seebach's <a href="http://books.google.com/books/about/Counterexamples_in_Topology.html?id=DkEuGkOtSrUC" rel="nofollow"><em>Counterexamples in Topology</em></a>, gives the following examples of pseudocompact spaces that are not compact. You can view the <a href="http://topology.jdabbs.com/search?q=%7B%22and%22%3A%5B%7B%2222%22%3Atrue%7D%2C%7B%2216%22%3Afalse%7D%5D%7D" rel="nofollow">search result</a> to learn more about any of these spaces.</p>
<p>$[0,\Omega) \times I^I$</p>
<p>An Altered Long Line</p>
<p>Countable Complement Topology</p>
<p>Countable Particular Point Topology</p>
<p>Deleted Tychonoff Plank</p>
<p>Divisor Topology</p>
<p>Double Pointed Countable Complement Topology</p>
<p>Gustin’s Sequence Space</p>
<p>Hewitt's Condensed Corkscrew</p>
<p>Interlocking Interval Topology</p>
<p>Irrational Slope Topology</p>
<p>Minimal Hausdorff Topology</p>
<p>Nested Interval Topology</p>
<p>Novak Space</p>
<p>Open Uncountable Ordinal Space $[0, \Omega)$</p>
<p>Prime Integer Topology</p>
<p>Relatively Prime Integer Topology</p>
<p>Right Order Topology on $\mathbb{R}$</p>
<p>Roy's Lattice Space</p>
<p>Strong Ultrafilter Topology</p>
<p>The Long Line</p>
<p>Tychonoff Corkscrew</p>
<p>Uncountable Particular Point Topology</p>
|
181,367 | <p>It is well known that compactness implies pseudocompactness; this follows from <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Heine%E2%80%93Borel_theorem">the Heine–Borel theorem</a>. I know that the converse does not hold, but what is a counterexample?</p>
<p>(A <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Pseudocompact_space"><strong>pseudocompact space</strong></a> is a topological space $S = \langle X,{\mathfrak I}\rangle$ such that every continuous function $f:S\to\Bbb R$ has bounded range.)</p>
| Brian M. Scott | 12,042 | <p>An example that does not depend on countable compactness is Mrówka’s space $\Psi$. Subsets of $\omega$ are said to be <em>almost disjoint</em> if their intersection is finite. Let $\mathscr{A}$ be a maximal almost disjoint family of subsets of $\omega$, and let $\Psi=\omega\cup\mathscr{A}$. Points of $\omega$ are isolated. Basic open nbhds of $A\in\mathscr{A}$ are sets of the form $\{A\}\cup(A\setminus F)$, where $F$ is any finite subset of $A$. $\Psi$ is not even countably compact, since $\mathscr{A}$ is an infinite (indeed uncountable) closed, discrete set in $\Psi$. (In fact it’s not hard to ensure that $|\mathscr{A}|=2^\omega$.)</p>
<p>To see that $\Psi$ is pseudocompact, suppose that $f:\Psi\to\Bbb R$ is continuous. Since $\omega$ is dense in $\Psi$, it suffices to show that $f[\omega]$ is bounded. If not, we can choose $S=\{n_k:k\in\omega\}\subseteq\omega$ such that $f(n_{k+1})\ge f(n_k)+1$ for each $k\in\omega$. The maximality of $\mathscr{A}$ ensures that there is an $A\in\mathscr{A}$ such that $A\cap S$ is infinite. Let $A_0=\{k\in\omega:n_k\in A\cap S\}$. Then $\langle f(n_k):k\in A_0\rangle\to f(A)$, which contradicts the choice of $S$.</p>
<p>$\Psi$ clearly is $T_2$ and has a clopen base, so it’s Tikhonov. It’s not normal, however, since in $T_4$ spaces pseudocompactness is equivalent to countable compactness.</p>
<p><strong>Added:</strong> This example is somewhat akin to what Steen & Seebach call the <em>strong ultrafilter topology</em>.</p>
|
2,060,156 | <p>First thing I want to mention is that this is not a topic about why $1+2+3+... = -1/12$ but rather the connection between this summation and $\zeta$.</p>
<p>I perfectly understand that the definition using the summation $\sum_{k=1}^\infty k^{-s}$ of the zeta function is only valid for $Re(s) > 1$ and that the function is then extended through analytic continuation to the whole complex plane.</p>
<p>However, some details bother me: why can we manipulate the sum and still obtain the correct final answer?
$$
S_1 = 1-1+1-1+1-1+... = 1-(1-1+1-1+1-...)= 1-S_1 \implies S_1 = \frac{1}{2} \\
S_2 = 1-2+3-4+5-... \implies S_2 - S_1 = 0-1+2-3+4-5... = -S_2 \implies S_2 = \frac{1}{4} \\
S = 1+2+3+4+5+... \implies S-S_2 = 4(1+2+3+4+...) = 4S \implies S = -\frac{1}{12} \\
S "=" \zeta(-1)
$$
Clearly these manipulations are not legal since we're dealing with infinite non-convergent sums. But it works! Why?
Is there a real connection between the analytic continuation which yields the "true" value $\zeta(-1) = -1/12$ and these "forbidden manipulations"? Could we somehow consider these manipulations as a "continuation of non-convergent sums"? If so, is there a well-defined framework with precise rules? It is clear that we must be careful when playing with non-convergent sums if we don't want to break the mathematics! (For example, the <a href="https://en.wikipedia.org/wiki/Riemann_series_theorem" rel="nofollow noreferrer">Riemann rearrangement theorem</a>.)</p>
<p>And since it seems that these illegal operations can be used to compute some value of zeta in the extended domain $Re(s) < 1$, are there other examples of such derivations, for example $0 = \zeta(-2) "=" 1^2 + 2^2 + 3^2 + 4^2 + ...$ ?</p>
<p>Hopefully this is not an umpteenth vague question about zeta and $1+2+3+4...$ I did some research about it but couldn't find any satisfying answer. Thanks !</p>
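<p>For what it's worth, Abel summation is one framework that assigns $S_2 = 1-2+3-4+\dots$ the value $\lim_{x\to 1^-}\sum_{n\ge 1}(-1)^{n-1}n\,x^n = \lim_{x\to 1^-}\frac{x}{(1+x)^2} = \frac14$; the sketch below (my own illustration, with an arbitrary choice of $x=0.999$ and $50{,}000$ terms) checks this numerically:</p>

```python
def abel_partial(x, terms=50_000):
    """Partial sum of sum_{n>=1} (-1)**(n-1) * n * x**n (Abel regularization)."""
    return sum((-1) ** (n - 1) * n * x ** n for n in range(1, terms + 1))

x = 0.999
s = abel_partial(x)
closed_form = x / (1 + x) ** 2   # tends to 1/4 as x -> 1-
```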
| mfl | 148,513 | <p>We have that $$\frac{7k-5}{5k-3}=\frac{6l-1}{4l-3}\iff kl+8k+l=6.$$ That is, if $k\ne -1,$</p>
<p>$$l=2\frac{3-4k}{k+1}=-2\left(4-\frac{7}{k+1}\right)=-8+\frac{14}{k+1}.$$ Since $l$ has to be an integer $k+1$ must divide $14.$ So, we have that $k\in\{-15,-8,-3,-2,0,1,6,13\}.$ </p>
<p>Note that $k\ne -1$ since if $k=-1$ the equation $kl+8k+l=6$ gives $-8=6$ which doesn't hold.</p>
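<p>A brute-force check confirms the solution set (my own verification sketch; the search range of $\pm 50$ is arbitrary but comfortably contains every pair arising from the divisors of $14$):</p>

```python
# Find all integer pairs (k, l) with k*l + 8*k + l == 6 by brute force,
# over a range large enough to contain every k with (k+1) | 14.
solutions = {
    (k, l)
    for k in range(-50, 51)
    for l in range(-50, 51)
    if k * l + 8 * k + l == 6
}
ks = sorted({k for k, _ in solutions})
# ks == [-15, -8, -3, -2, 0, 1, 6, 13], matching the divisor argument.
```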
|
1,384,735 | <p>What is the ODE satisfied by $y=y(x)$ </p>
<p>given that $$\frac{dy}{dx} = \frac{-x-2y}{y-2x}$$</p>
<p>I understand that I need to get it in some form of $\int \cdots \;dy = \int \cdots \; dx$, but am not sure how to go about it.</p>
| Harish Chandra Rajpoot | 210,295 | <p>We have, $$\frac{dy}{dx} = \frac{-x-2y}{y-2x}$$
Let $y=ux\implies \frac{dy}{dx}=x\frac{du}{dx}+u$ $$u+x\frac{du}{dx}=\frac{-x-2ux}{ux-2x}$$
$$u+x\frac{du}{dx}=\frac{2u+1}{2-u}$$ $$x\frac{du}{dx}=\frac{2u+1}{2-u}-u$$ $$x\frac{du}{dx}=\frac{1+u^2}{2-u}$$ $$\frac{(2-u)du}{1+u^2}=\frac{dx}{x}$$ Integrating both the sides, we get $$\int \frac{(2-u)du}{1+u^2}=\int \frac{dx}{x}$$ $$2\tan^{-1}(u)-\frac{1}{2}\ln(1+u^2)=\ln(x)+c$$ Substituting $u=\frac{y}{x}$, we get $$2\tan^{-1}\left(\frac{y}{x}\right)-\frac{1}{2}\ln\left(\frac{x^2+y^2}{x^2}\right)=\ln(x)+c$$ $$2\tan^{-1}\left(\frac{y}{x}\right)-\frac{1}{2}\ln\left(x^2+y^2\right)+\frac{1}{2}\ln (x^2)=\ln(x)+c$$
$$2\tan^{-1}\left(\frac{y}{x}\right)-\frac{1}{2}\ln\left(x^2+y^2\right)=c$$</p>
|
503,589 | <ol>
<li><p>Let $\epsilon>0$. Prove that the set of those $x\in [0,1]$ such that there exist infinitely many fractions $p/q$, with relatively prime integers $p$ and $q$ such that
$$\bigg |x-\frac{p}{q}\bigg|\leq \frac{1}{q^{2+\epsilon}}$$
is a set of measure zero.</p></li>
<li><p>Let $(a_n)$ be a sequence of real numbers, and let $(\alpha_n)$ be a sequence of positive numbers such that $\sum_n \sqrt {\alpha_n}<\infty$. Prove that there exists a measurable set $A$ with $\lambda(A^c)=0$ (Lebesgue measure) such that
$$\forall x\in A, \sum_n \frac{\alpha_n}{|x-a_n|}<\infty.$$</p></li>
</ol>
| Robert Israel | 8,508 | <p>Hint for (a): if $X$ is uniform on $[0,1]$, consider the random variables $Y_q = 1$ if $|X - p/q| \le 1/q^{2+\epsilon}$ for some $p$ relatively prime to $q$, $0$ otherwise. </p>
|
3,115,168 | <p>I've converted <span class="math-container">$\cos^3(x)$</span> into <span class="math-container">$\cos^2(x)\cos(x)$</span> but still have not gotten the answer. </p>
<p>The answer is <span class="math-container">$\dfrac{\sin(x)(3\cos^2x + 2\sin^2x)}{3}$</span></p>
<p>My answer was the same except I did not have a <span class="math-container">$3$</span> in front of <span class="math-container">$x$</span> and my <span class="math-container">$2\sin^2x$</span> was not squared.</p>
<p>Help! </p>
| Michael Rybkin | 350,247 | <p><span class="math-container">$$
\begin{align}
\int\cos^3{x}\,dx
&=\int\cos^2{x}\cdot\cos{x}\,dx\\
&=\int\cos^2{x}(\sin{x})'\,dx\\
&=\cos^2{x}\sin{x}+2\int\sin^2{x}\cos{x}\,dx\\
&=\cos^2{x}\sin{x}+2\int(1-\cos^2{x})\cos{x}\,dx\\
&=\cos^2{x}\sin{x}+2\int\cos{x}\,dx-2\int\cos^3{x}\,dx\\
&=\cos^2{x}\sin{x}+2\sin{x}-2\int\cos^3{x}\,dx
\end{align}
$$</span></p>
<p><span class="math-container">$$
I=\int\cos^3{x}\,dx
$$</span></p>
<p><span class="math-container">$$
I=\cos^2{x}\sin{x}+2\sin{x}-2I\implies\\
3I=\cos^2{x}\sin{x}+2\sin{x}\implies\\
I=\frac{\cos^2{x}\sin{x}+2\sin{x}}{3}+C.
$$</span></p>
<p>Check:</p>
<p><span class="math-container">$$
\frac{d}{dx}\left[\frac{1}{3}(\cos^2{x}\sin{x}+2\sin{x})+C\right]=\\
\frac{1}{3}(2\cos{x}(-\sin{x})\sin{x}+\cos^2{x}\cos{x}+2\cos{x})=\\
\frac{1}{3}(-2\sin^2{x}\cos{x}+\cos^3{x}+2\cos{x})=\\
\frac{1}{3}(-2(1-\cos^2{x})\cos{x}+\cos^3{x}+2\cos{x})=\\
\frac{1}{3}((-2+2\cos^2{x})\cos{x}+\cos^3{x}+2\cos{x})=\\
\frac{1}{3}(-2\cos{x}+2\cos^3{x}+\cos^3{x}+2\cos{x})=\\
\frac{1}{3}(2\cos^3{x}+\cos^3{x})=\\
\frac{1}{3}(3\cos^3{x})=\\
\cos^3{x}.
$$</span></p>
<p>The answer you provided is equivalent to mine:</p>
<p><span class="math-container">$$
\dfrac{\sin{x}(3\cos^2x + 2\sin^2x)}{3}=
\dfrac{\sin{x}(3\cos^2x + 2(1-\cos^2{x}))}{3}=\\
\dfrac{\sin{x}(3\cos^2x + 2-2\cos^2{x})}{3}=
\dfrac{\sin{x}(\cos^2x + 2)}{3}=\\
\dfrac{\cos^2x\sin{x} + 2\sin{x}}{3}.
$$</span></p>
|
2,083,127 | <p>How to show that $\lim_{n \rightarrow \infty} \frac{[a^{n+1}]}{[a^n]}=a$, where
$[a]$ = integer part of a?<br>
Here $a>1$. But I suspect it is true for all $a \ne 0$. </p>
| Στέλιος | 403,502 | <p>For $x>1$, we have the trivial inequalities $0<x-1<[x]\leq x$. We apply them and get:</p>
<p>$\frac{a^{n+1}-1}{a^n}\leq \frac{[a^{n+1}]}{[a^n]}\leq \frac{a^{n+1}}{a^n-1}$</p>
<p>But it is easy to check that for $a>1$, we have:</p>
<p>$\frac{a^{n+1}-1}{a^n}=a-a^{-n}\rightarrow a-0=a$,</p>
<p>$\frac{a^{n+1}}{a^n-1}=\frac{a}{1-a^{-n}}\rightarrow \frac{a}{1-0}=a$</p>
<p>So finally by the squeeze theorem we conclude that $\frac{[a^{n+1}]}{[a^n]}\rightarrow a$</p>
<p><em>Note:</em> Work similarly for the other values of $a$, but be careful with the signs and don't always expect the same value.</p>
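<p>A quick numerical sanity check of the squeeze (my own sketch; the choice $a=\frac32$ is arbitrary, and exact rational arithmetic avoids floating-point trouble with the floor):</p>

```python
from fractions import Fraction

def floor_ratio(a, n):
    """Compute [a^(n+1)] / [a^n] exactly for rational a > 1."""
    num = int(a ** (n + 1))  # int() truncates toward zero == floor for positives
    den = int(a ** n)
    return num / den

a = Fraction(3, 2)
ratios = [floor_ratio(a, n) for n in (5, 20, 60)]
# The ratios approach a = 1.5 as n grows, as the squeeze bounds predict.
```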
|
2,889,835 | <p>If I have random lag times from <code>a=0.1</code> to <code>b=0.3</code> and a time to live (TTL) of <code>x=0.25</code>, what would be the packet loss in per cent?</p>
<p>OK, so basically I have packets that arrive after a <code>Random [a,b]</code> lag; if that random value is greater than <code>x</code>, the packet gets lost and doesn't arrive.</p>
<p>What is the probability that a packet arrives?</p>
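<p>Sketch of one way to compute this, assuming the lag is uniform on $[a,b]$: a packet arrives iff its lag is at most $x$, so $P(\text{arrive}) = \frac{x-a}{b-a} = \frac{0.15}{0.2} = 0.75$, i.e. a $25\%$ packet loss. A quick Monte Carlo check (my own illustration; function names are mine):</p>

```python
import random

def loss_rate(a, b, ttl, trials=100_000, seed=0):
    """Fraction of packets whose uniform lag in [a, b] exceeds ttl."""
    rng = random.Random(seed)
    lost = sum(rng.uniform(a, b) > ttl for _ in range(trials))
    return lost / trials

analytic = (0.3 - 0.25) / (0.3 - 0.1)   # 0.25, i.e. 25% packet loss
estimate = loss_rate(0.1, 0.3, 0.25)    # should land close to 0.25
```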
| Kusma | 514,933 | <p>In complete metric spaces, Cauchy sequences are the same thing as convergent sequences, so in complete spaces, the statement contradicts the definition of continuity. </p>
<p>However, in your example, the space $(0,1)$ that your function is defined on is not complete. Hence there are Cauchy sequences that do not converge to a limit in $(0,1)$, for example, $x_n=\frac1n$. (And then choosing $f(x)=\frac1x$ maps this into the non-Cauchy sequence $y_n=n$.)</p>
|
1,924,568 | <p>This is a question that a friend asked me (has the final answer too).</p>
<p>The pdf of a random variable $X$ is</p>
<p>$$ f(x) = 0.5,\quad -1 < x < 1 $$</p>
<p>The random variable Y is defined as </p>
<p>$$ Y = \begin{cases}
-2X, & -1 < X < 0 \\
X+1, & 0 < X <1
\end{cases}$$</p>
<p>I tried using the inverse transform method but I'm unsure of how to go about this since $Y$ takes values in $[1, 2)$ in both intervals provided above. I get that there should be some sort of overlap in this case, but can someone provide me with a rigorous way to solve this problem?</p>
<p>The answer was given to be</p>
<p>$$ f(y) = \begin{cases}
0.25, & 0 < y < 1 \\
0.75, & 1 < y <2
\end{cases}$$</p>
<p>Here is what I did:</p>
<p>$P(Y \leq y) = P(-2X \leq y) = \frac{1}{2} + \frac{y}{4}$.</p>
<p>Taking the derivative of the above CDF, I get $f_Y(y) = 0.25$ when $0 < y < 2$. </p>
<p>I carried out the same procedure for the other interval and obtained $f_Y(y) = 0.5$ when $1 < y < 2$.</p>
<p>Is it alright to conclude that the pdf is as provided in the solution because there is an overlap between the two intervals? Is there a more rigorous way of showing this?</p>
| Graham Kemp | 135,106 | <p>Yes. It's called <em>folding</em>; when two disjoint intervals of the support of $X$ fold into the same interval in the support of $Y$, then the <em>change of variables transformation</em> folds the combined influence. $$\begin{align}f_Y(y)~=~& \Big\lvert\dfrac{\mathrm d x_1(y)}{\mathrm d y}\Big\rvert~f_X(x_1(y))+\Big\lvert\dfrac{\mathrm d x_2(y)}{\mathrm d y}\Big\rvert~f_X(x_2(y)) \\[1ex] ~=~& \tfrac 12 {f_X(-\tfrac 12y)}~\mathbf 1_{-1<-y/2<0} + {f_X(y-1)}~\mathbf 1_{0<y-1<1} \\[2ex] ~=~&\tfrac 1 4~\mathbf 1_{0<y<2}+\tfrac 12~\mathbf 1_{1<y<2} \\[2ex] ~=~&\tfrac 1 4~\mathbf 1_{0<y\leq 1}+\tfrac 34~\mathbf 1_{1< y<2}\end{align}$$</p>
<p>Where $x_1(Y), x_2(Y)$ are the two semi-inverses (the functions that map $Y$ back into the preimage).</p>
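<p>The folded density is easy to confirm by simulation (my own check, not part of the original answer): sampling $X$ uniformly on $(-1,1)$ and mapping it through $Y$ should put mass $\tfrac14$ on $(0,1)$ and $\tfrac34$ on $(1,2)$.</p>

```python
import random

def simulate_y(trials=200_000, seed=1):
    """Sample X ~ Uniform(-1, 1), map through Y, and estimate the two masses."""
    rng = random.Random(seed)
    low = high = 0
    for _ in range(trials):
        x = rng.uniform(-1.0, 1.0)
        y = -2.0 * x if x < 0 else x + 1.0
        if 0.0 < y <= 1.0:
            low += 1
        elif 1.0 < y < 2.0:
            high += 1
    return low / trials, high / trials

p_low, p_high = simulate_y()
# Expect p_low ≈ 0.25 (density 1/4 on (0,1)) and p_high ≈ 0.75 (density 3/4 on (1,2)).
```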
|
517,282 | <p>Suppose $a,n \in \mathbb{Z}$, and $n>a>0$. How do I prove that $\nexists x \in \mathbb{Z}$ s.t. $nx = a$ ? I'm really not sure where to start on this one. I'd be happy if someone could give me a hint.</p>
<p>Edit: I've solved this by contradiction, but I will not be 'accepting' an answer from below because I did not use any one of them in a significant way to solve the problem.</p>
| Community | -1 | <p><strong>Hint</strong>: Note that $|nx| = |n| |x|$, and consider the cases $x = 0$ and $|x| \ge1$ separately. Start by noting that $a < n$, so how are $|n| |x|$ and $a$ related?</p>
|
| <p>I always use <code>InputForm</code> to check result objects such as <code>Dataset</code>, <code>Graphics</code>, or others. But if you are in the result of <code>InputForm</code>, you cannot use the front end's bracket-balancing function. Note this GIF:</p>
<p><a href="https://i.stack.imgur.com/51OYd.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/51OYd.gif" alt="enter image description here" /></a></p>
<p>When I double-click in the input line, I select just the content within the enclosing bracket. But when I'm in the result of <code>InputForm</code>, I select the whole line. Of course I can copy the output of <code>InputForm</code> into a new input cell, but that will make the notebook messier.</p>
<p>Is there any method to make the output of <code>InputForm</code> support bracket balancing?</p>
| Kuba | 5,478 | <p>Carl's tip seems to be the best quick solution. </p>
<p>Very often I find syntax/style highlighting very useful too so I use:</p>
<pre><code>CellPrint[ExpressionCell[InputForm@#, "Input"]] &
</code></pre>
<p>to get everything what Input cells offer:</p>
<pre><code>Plot[x, {x, 0, 1}, PlotPoints -> 10, MaxRecursion -> 1
] // CellPrint[ExpressionCell[InputForm@#, "Input"]] &
</code></pre>
<p><a href="https://i.stack.imgur.com/NqQZ4m.png" rel="noreferrer"><img src="https://i.stack.imgur.com/NqQZ4m.png" alt="enter image description here"></a></p>
<p>related: </p>
<p><a href="https://mathematica.stackexchange.com/q/102938/5478">How to see a code preview (in Experimental`Explore[] or related GUI)</a></p>
<p><a href="https://mathematica.stackexchange.com/q/103257/5478">How to use an ExpressionCell to display e.g. an Input cell inside a generated output?</a></p>
|
6,931 | <p>One of the key steps in <a href="http://en.wikipedia.org/wiki/Merge_sort">merge sort</a> is the merging step. Given two sorted lists</p>
<pre><code>sorted1={2,6,10,13,16,17,19};
sorted2={1,3,4,5,7,8,9,11,12,14,15,18,20};
</code></pre>
<p>of integers, we want to produce a new list as follows:</p>
<ol>
<li>Start with an empty list <code>acc</code>.</li>
<li>Compare the first elements of <code>sorted1</code> and <code>sorted2</code>. Append the smaller one to <code>acc</code>.</li>
<li>Remove the element used in step 2 from either <code>sorted1</code> or <code>sorted2</code>.</li>
<li>If neither <code>sorted1</code> nor <code>sorted2</code> is empty, go to step 2. Otherwise append the remaining list to <code>acc</code> and output the value of <code>acc</code>.</li>
</ol>
<p>Applying this process to <code>sorted1</code> and <code>sorted2</code>, we get</p>
<pre><code>acc={1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20}
</code></pre>
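<p>As a language-neutral reference point (my own sketch, not part of the original question; the name <code>merge_lists</code> is mine), the four steps above translate directly into an iterative loop:</p>

```python
def merge_lists(list1, list2, f):
    """Steps 1-4 above: repeatedly move the f-preferred head onto acc,
    then append whichever list is left over."""
    acc = []
    i = j = 0
    while i < len(list1) and j < len(list2):
        if f(list1[i], list2[j]):
            acc.append(list1[i])
            i += 1
        else:
            acc.append(list2[j])
            j += 1
    acc.extend(list1[i:])   # step 4: append the non-empty remainder
    acc.extend(list2[j:])
    return acc

sorted1 = [2, 6, 10, 13, 16, 17, 19]
sorted2 = [1, 3, 4, 5, 7, 8, 9, 11, 12, 14, 15, 18, 20]
merged = merge_lists(sorted1, sorted2, lambda a, b: a <= b)
# merged == [1, 2, 3, ..., 20]
```

The same function handles unsorted inputs, since it only ever compares the two current heads.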
<p><em>Added in response to Rojo's question: We can carry out this procedure even if the two lists are not pre-sorted. So <code>list1</code> and <code>list2</code> below are not assumed to be sorted.</em></p>
<p>If there were a built-in function <code>MergeList</code> which carries out this process, it would probably take three arguments <code>list1</code>, <code>list2</code>, and <code>f</code>. Here <code>f</code> is a Boolean function of two arguments used to decide which element to pick. In the case of merge sort, <code>f = LessEqual</code>. I feel that <code>MergeList</code> is a fundamental list operation, so</p>
<p><strong>Question 1: Is there such a built-in function or one very close to that?</strong></p>
<p>If I were to write such a function in Scheme, I would use a recursive definition equivalent to the following:</p>
<pre><code>MergeList[list1_,{},f_,acc_:{}]:=Join[acc,list1];
MergeList[{},list2_,f_,acc_:{}]:=Join[acc,list2];
MergeList[list1_,list2_,f_,acc_:{}]:=
If[
f@@First/@{list1,list2},
MergeList[Rest[list1],list2,f,Append[acc,First[list1]]],
MergeList[list1,Rest[list2],f,Append[acc,First[list2]]]
]
</code></pre>
<p><em>Sample output with unsorted lists:</em></p>
<pre><code>In[2]:= MergeList[{2,5,1},{3,6,4},LessEqual]
Out[2]= {2,3,5,1,6,4}
</code></pre>
<p>My impression is that recursive solutions tend to be inefficient in Mathematica, so</p>
<p><strong>Question 2: What would be a better way to implement <code>MergeList</code>?</strong></p>
<p>If you have tips about converting loops into their functional equivalents, feel free to mention them as well.</p>
| Rojo | 109 | <p>Not too different from Heike's, I think, though I haven't followed it line by line. Please let me know if it's too similar to be a separate answer.</p>
<pre><code>merge[l1_, l2_, f_] := Block[{mergeAux},
mergeAux[list1_, list2_] :=
mergeAux[list2,
Function[fr,
Drop[list1, Length@Sow[TakeWhile[list1, f[#, fr] &]]]][
First@list2]];
mergeAux[{}, l_] := Sow[l];
mergeAux[l_, {}] := Sow[l];
Flatten[Reap[mergeAux[l1, l2]][[2, 1]], 1]
]
</code></pre>
<p><strong>EDIT</strong>
Same idea, but with a custom <code>takeWhile</code> that allows setting the position from which to start counting:</p>
<pre><code>lengthWhile[l_, cond_, from_] :=
If[# === Null, Length[l] - from + 1, #] &[
Do[If[! cond@l[[c]], Return[c - from, Do]], {c, from, Length[l]}]]
takeWhile[l_, cond_, from_] :=
l[[from ;; from - 1 + lengthWhile[l, cond, from]]]
ClearAll[merge];
merge[l1_, l2_, f_] := Block[{mergeAux},
mergeAux[list1_, list2_, in1_, in2_] :=
mergeAux[list2, list1, in2, in1 +
Function[fr,
Length@Sow@takeWhile[list1, f[#, fr] &, in1]][list2[[in2]]]];
mergeAux[_, l_, Length[l1] + 1, i_] := Sow[l[[i ;;]] ];
mergeAux[l_, _, i_, Length[l2] + 1] := Sow[l[[i ;;]]];
Flatten[Reap[mergeAux[l1, l2, 1, 1]][[2, 1]], 1]]
</code></pre>
|
2,305 | <p>I need an algorithm to produce all strings with the following property. Here capital letters refer to strings, and small letters refer to characters. $XY$ means the concatenation of strings $X$ and $Y$.</p>
<p>Let $\Sigma = \{a_0, a_1,\ldots,a_n,a_0^{-1},a_1^{-1},\ldots,a_n^{-1}\}$ be the set of usable characters. Every string is made up of these symbols.</p>
<p>Outputting any set $S_n$ with the following properties achieves the goal ($n\geq 2$):</p>
<ol>
<li><p>If $W\in S_n$, then no cyclic shift of $W$ other than $W$ itself is in $S_n$</p></li>
<li><p>If $W\in S_n$, then $|W| = n$</p></li>
<li><p>If $W\in S_n$, then $W \neq Xa_ia_i^{-1}Y$, $W \neq Xa_i^{-1}a_iY$, $W \neq a_iXa_i^{-1}$ and $W \neq a_i^{-1}Xa_i$ for any string $X$ and $Y$.</p></li>
<li><p>If $W\not \in S_n$, $S_n \cup \{W\}$ will violate at least one of the above 3 properties. </p></li>
</ol>
<p>Clearly any algorithm one can come up with is exponential, but I'm still searching for a fast algorithm because this has some practical uses, at least for $\Sigma=\{a_0,a_1,a_0^{-1},a_1^{-1}\}$ and $n<25$.</p>
<p>The naive approach for my practical application requires $O(4^n)$ time. It generates all strings of length $n$. Whenever a new string is generated, the program creates all cyclic permutations of the string and checks, via a hash table, whether it has been generated before. If not, it is added to the list of result strings. The total number of operations is $O(n4^n)$, and that's assuming perfect hashing. $n=12$ is the limit.</p>
<p>Are there better approaches? Clearly a lot of useless strings are generated.</p>
<p>Edit: The practical usage is to find the maximum of the minimum self-intersection of a curve on a torus with a hole. Every curve can be characterized by a string as described above. Therefore I have to generate every string and feed it to a program that calculates the minimum self-intersection.</p>
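<p>For concreteness, here is a Python sketch of the naive approach just described, for $\Sigma=\{a_0,a_1,a_0^{-1},a_1^{-1}\}$ written as <code>a, b, A, B</code>: generate all words of length $n$, keep the cyclically reduced ones (property 3), and deduplicate by taking the lexicographically least cyclic rotation as the canonical representative (properties 1 and 4):</p>

```python
from itertools import product

INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def is_cyclically_reduced(w):
    # Property 3: no letter is followed (cyclically) by its inverse;
    # the wrap-around index also rules out words of the form a X a^-1.
    return all(INV[w[i]] != w[(i + 1) % len(w)] for i in range(len(w)))

def representatives(n):
    out = set()
    for letters in product("aAbB", repeat=n):
        w = "".join(letters)
        if is_cyclically_reduced(w):
            # canonical representative: least cyclic rotation
            out.add(min(w[i:] + w[:i] for i in range(n)))
    return sorted(out)

print(len(representatives(2)))  # 8 classes for n = 2
```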
| Sam Nead | 1,307 | <p>First of all, you might be interested in the work of Chas and Phillips: "Self-intersection of curves on the punctured torus". I've only skimmed their paper, but they seem to be doing something closely related to what you want.</p>
<p>Second, I want to guess, for some reason, that the average time to compute the self-intersection number is much longer than the average time to generate a word. (Is that the case? Could you tell me how you are computing minimal self-intersection numbers?)</p>
<p>If so, I guess that you want to generate as few strings as possible. I'll use $a, A, b, B$ as the generating set for $\pi_1 = \pi_1(T)$. Looking at Lyndon words is essentially the same as applying inner automorphisms (conjugation, i.e. cyclic rotation) to your words. You might also try replacing a word $w$ by its inverse $W$. If some rotation of $W$ beats $w$ [sic], then you can throw $w$ away. </p>
<p>There are also other "geometric automorphisms" (elements of the mapping class group) of $\pi_1$ which are very useful, e.g. rotation of $T$ by one-quarter:</p>
<p>$$a \mapsto b \mapsto A \mapsto B \mapsto a.$$</p>
<p>There are also two nice reflections: either fix $b, B$ and swap $a$ with $A$, or the other way around. Composing these gives the hyperelliptic which swaps $a$ with $A$ and swaps $b$ with $B$. (I use python's swapcase function for this -- very simple!) </p>
<p>If any of these operations (or any compositions of these, e.g. the reverse of a word) produces a word $w'$ that is lexicographically before $w$, then you can throw $w$ away. </p>
<p>Please let me know if this is helpful -- I'm interested in this kind of problem. </p>
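<p>Here is a Python sketch of this kind of symmetry reduction (an illustration under the conventions above: the quarter rotation $a \mapsto b \mapsto A \mapsto B \mapsto a$, inversion as reverse-plus-<code>swapcase</code>, and cyclic shifts; the canonical form is the lexicographically least word in the orbit):</p>

```python
QUARTER = str.maketrans("abAB", "bABa")  # a -> b -> A -> B -> a

def orbit(w):
    """Words reachable via quarter rotations, inversion, and cyclic shifts."""
    words = set()
    for k in range(4):                      # powers of the quarter rotation
        for v in (w, w[::-1].swapcase()):   # the word and its inverse
            u = v
            for _ in range(k):
                u = u.translate(QUARTER)
            words |= {u[i:] + u[:i] for i in range(len(u))}
    return words

def canonical(w):
    return min(orbit(w))

# All of these represent the same word up to these symmetries:
assert canonical("ab") == canonical("ba") == canonical("bA")
```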
|
1,821,800 | <p>Consider the system of ODEs in $\Bbb R^2$:</p>
<p>$\dfrac{dY}{dt}=AY$ where $Y(0)=\begin{bmatrix} 0 \\ 1\end{bmatrix}$, $t>0$,</p>
<p>where $A=\begin{bmatrix} -1 & 1 \\ 0 & -1\end{bmatrix}$</p>
<p>and $Y(t)=\begin{bmatrix} y_1(t) \\ y_2(t)\end{bmatrix}$.</p>
<p><strong>My try</strong>:
$y_1'(t)=-y_1(t)+y_2(t)$
and
$y_2'(t)=-y_2(t)$</p>
<p>On solving the second equation I got $y_2(t)=e^{-t}$</p>
<p>Putting this in the first one I got:
$y_1'(t)+y_1(t)=e^{-t}$</p>
<p>On solving for the complementary function and a particular integral I got</p>
<p>$y_1(t)=Ae^{-t}+te^{-t}$</p>
<p>Putting $t=0$ we get $A=0$ so $y_1(t)=te^{-t}$.</p>
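<p>A quick numerical sanity check in Python (central differences; a sketch, just for verification) confirms that $y_1(t)=te^{-t}$, $y_2(t)=e^{-t}$ satisfy the system and the initial condition:</p>

```python
import math

def y1(t): return t * math.exp(-t)
def y2(t): return math.exp(-t)

h = 1e-6
for t in (0.1, 0.5, 1.0, 2.0):
    dy1 = (y1(t + h) - y1(t - h)) / (2 * h)  # central-difference derivative
    dy2 = (y2(t + h) - y2(t - h)) / (2 * h)
    assert abs(dy1 - (-y1(t) + y2(t))) < 1e-6   # y1' = -y1 + y2
    assert abs(dy2 + y2(t)) < 1e-6              # y2' = -y2

assert y1(0) == 0.0 and y2(0) == 1.0            # Y(0) = (0, 1)
```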
| mathlove | 78,967 | <p>Suppose that there exists such a positive rational number.</p>
<p>We have
$$x^2-\lfloor x^2\rfloor+x-\lfloor x\rfloor =1,$$
i.e.
$$x^2+x=\lfloor x^2\rfloor +\lfloor x\rfloor +1$$
We can set $x:=p/q$ where $p,q$ are positive integers with $\gcd(p,q)=1$; then
$$x^2+x=\frac{p}{q}\left(\frac pq+1\right)=m\tag1$$
where $m\in\mathbb Z$. Then,
$$(1)\implies mq^2=p(p+q)\tag2$$
so, since $\gcd(p,q)=1$ and $p$ divides $mq^2$, there exists an integer $k$ such that $m=pk$, and so we have
$$(2)\implies q(kq-1)=p$$
which forces $q\mid p$; by $\gcd(p,q)=1$ this gives $q=1$, so $x$ is an integer and the left-hand side of the first equation is $0$, not $1$, a contradiction.</p>
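<p>A brute-force sanity check of this conclusion in Python (a sketch over a small range): no positive non-integer rational $x=p/q$ makes $x^2+x$ an integer:</p>

```python
from fractions import Fraction
from math import gcd

found = []
for q in range(2, 30):               # q > 1, i.e. x is not an integer
    for p in range(1, 120):
        if gcd(p, q) == 1 and (Fraction(p, q)**2 + Fraction(p, q)).denominator == 1:
            found.append((p, q))

assert found == []                   # no counterexample in this range
```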
|
1,821,800 | <p>Consider the system of ODEs in $\Bbb R^2$:</p>
<p>$\dfrac{dY}{dt}=AY$ where $Y(0)=\begin{bmatrix} 0 \\ 1\end{bmatrix}$, $t>0$,</p>
<p>where $A=\begin{bmatrix} -1 & 1 \\ 0 & -1\end{bmatrix}$</p>
<p>and $Y(t)=\begin{bmatrix} y_1(t) \\ y_2(t)\end{bmatrix}$.</p>
<p><strong>My try</strong>:
$y_1'(t)=-y_1(t)+y_2(t)$
and
$y_2'(t)=-y_2(t)$</p>
<p>On solving the second equation I got $y_2(t)=e^{-t}$</p>
<p>Putting this in the first one I got:
$y_1'(t)+y_1(t)=e^{-t}$</p>
<p>On solving for the complementary function and a particular integral I got</p>
<p>$y_1(t)=Ae^{-t}+te^{-t}$</p>
<p>Putting $t=0$ we get $A=0$ so $y_1(t)=te^{-t}$.</p>
| mathreadler | 213,607 | <p><strong>EDIT</strong> I accidentally tried solving the wrong problem. I did not know about the $\{\cdot\}$ notation for fractional part. This is an attempt to show that $x^2+x=1$ has no solutions for $x\in\mathbb{Q}$.</p>
<hr>
<p>Here's my attempt:
Assume there are relatively prime $p,q \in {\mathbb Z}$ with $x = p/q$.</p>
<p>$$\left(\frac{p}{q}\right)^2 + \left(\frac{p}{q}\right) = 1\Leftrightarrow\\ \frac{p^2+pq-q^2}{q^2} = 0\Leftrightarrow\\(p+q)(p-q) + pq = 0$$</p>
<p>For this to be true, $p$ or $q$ must share prime factors with $p+q$ or $p-q$. But that can only happen if $p$ and $q$ aren't relatively prime, and we can demand that they are, by writing the rational number in lowest terms.</p>
|
191,548 | <p>Say I have a list:</p>
<pre><code>{{Line[{{-Sqrt[5/8 - Sqrt[5]/8], 1/4 (-1 - Sqrt[5])}, {0, 1}}],
Line[{{Sqrt[5/8 - Sqrt[5]/8], 1/4 (-1 - Sqrt[5])}, {0,1}}]},
{Line[{{-Sqrt[5/8 + Sqrt[5]/8],1/4 (-1 + Sqrt[5])}, {Sqrt[5/8 - Sqrt[5]/8], 1/4 (-1 - Sqrt[5])}}],
Line[{{Sqrt[5/8 - Sqrt[5]/8], 1/4 (-1 - Sqrt[5])}, {0, 1}}]}}
</code></pre>
<p>Each sublist of that list consists of, in this case, two lines. All the points that describe the position of a line appear in another list; call this list <code>points</code>. Now, I want to find the positions in that <code>points</code> list of all the points appearing in the list above. I'm aware of the <code>Position</code> function, but I'm not sure how to apply it effectively to my big list above in order to get the list of positions. I'd very much appreciate some help.</p>
| amator2357 | 61,985 | <p>As bad as it may look, this seems to be working fine:</p>
<p><code>Partition[Partition[Flatten[Position[points, #] & /@ Catenate @ Cases[lines, Line[pts : {{_, _} ..}] :> pts, Infinity]],2],Length[l]]</code></p>
|
237,031 | <p>The question is: if I assert in ZF that there exists a Reinhardt cardinal, do I really get a theory of higher consistency strength than when I assert in ZFC that there exists an I0 cardinal (the strongest large cardinal not known to be inconsistent with choice, as I understand)? This is implicit in the ordering of things on <a href="http://cantorsattic.info/Upper_attic" rel="noreferrer">Cantor's Attic</a>, for example, but I've been unable to find a proof (granted, I don't necessarily have the best nose for where to look!).</p>
<p>One thing that worries me is that when there <em>is</em> a ZFC analog of a ZF statement, many equivalent formulations of the ZFC statement may become inequivalent in ZF. So we don't have much assurance that the usual definition of a Reinhardt cardinal is "correct" in the absence of choice.</p>
<p>I think it should be clear that Con(ZF + Reinhardt) implies Con(ZF + I0). But again, it's not clear that ZF+I0 is equiconsistent with ZFC+I0.</p>
<p>It's apparently not possible to formulate Reinhardt cardinals in a first-order way, so I should really talk about NBG + Reinhardt, or maybe ZF($j$) + Reinhardt, where ZF($j$) has separation and replacement for formulas involving the function symbol $j$.</p>
<p><strong>EDIT</strong></p>
<p>Since this question has attracted a bounty from Joseph Van Name, maybe it's appropriate to update it a bit. Now, I'm not actually a set theorist, but it's not even clear to me that Con(ZF + Reinhardt) implies Con(ZFC + an inaccessible). So perhaps the question should really be: what large cardinal strength, if any, can we extract from the theory ZF + Reinhardt?</p>
| Joel David Hamkins | 1,946 | <p>Regarding the edit, one can easily show some simple lower bounds for a Reinhardt cardinal that are far stronger than an inaccessible cardinal. For example, if $\kappa$ is a Reinhardt cardinal, assuming ZF only, then it is clear that $\kappa$ is inaccessible and weakly compact and much more in $L$, because it is the critical point of an elementary embedding $j:V\to V$, which therefore gives rise to an elementary embedding $j\upharpoonright L:L\to L$, and any such $\kappa$ must be inaccessible in $L$ and weakly compact in $L$ and much more. Indeed, one easily gets the consistency of a measurable cardinal, since if $\mu$ is the measure on $\kappa$ induced by the original embedding $j:V\to V$, then $L[\mu]$ will be the canonical inner model in which $\kappa$ is measurable. </p>
<p>It seems to me that one will be able to carry this argument completely through the standard inner model of large cardinals. Thus, from a Reinhardt cardinal in ZF set theory, I expect that the critical point $\kappa$ of the corresponding embedding $j:V\to V$ will be very large in the corresponding core models. </p>
<p>What is less clear to me is the extent to which one gets models of ZFC plus $\kappa$ has large cardinal properties that are not witnessed by the standard inner model theory, and this is how one should interpret the question.</p>
|
3,663,054 | <p>In my introductory abstract algebra course, the quotient group <span class="math-container">$G/H$</span> was defined as
<span class="math-container">$$G/H=\{gH:g\in G\}$$</span>
which is a <strong>set of sets</strong>. In an exercise, I should show that for the group of invertible matrices <span class="math-container">$GL_n(K)$</span> over a field <span class="math-container">$K$</span> and the normal subgroup <span class="math-container">$SL_n(K)$</span> the quotient group is abelian.</p>
<p>I'm horribly confused. What is the operation that combines two sets of matrices? What does it mean for two sets of matrices to commute with respect to this operation?</p>
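<p>One candidate operation, by analogy with addition of residue classes, is representative-wise multiplication,</p>
<p><span class="math-container">$$(g_1H)\,(g_2H) := (g_1g_2)H,$$</span></p>
<p>but it is not obvious to me that this is well defined, i.e. independent of the chosen representatives <span class="math-container">$g_1,g_2$</span>.</p>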
<p>I apologize if this is a silly question, but our lecture only ever mentioned modular arithmetic…</p>
| mag | 750,434 | <p>Regarding the distribution: it <a href="https://quant.stackexchange.com/questions/18646/distribution-of-stochastic-integral">holds</a> that <span class="math-container">$$\int_{0}^{t}f(\tau)dW_{\tau}\sim N\left(0,\int_{0}^{t}|f(\tau)|^{2}d\tau\right),$$</span> since <span class="math-container">$f(t)=e^{-t}$</span> is a square-integrable deterministic function. Since you now know the distribution of <span class="math-container">$X_t$</span>, and it converges in distribution to <span class="math-container">$X_\infty$</span>, you can calculate the limiting distribution.</p>
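<p>For the kernel mentioned above, <span class="math-container">$f(\tau)=e^{-\tau}$</span>, this gives explicitly</p>
<p><span class="math-container">$$X_t=\int_0^t e^{-\tau}\,dW_\tau\sim N\left(0,\int_0^t e^{-2\tau}\,d\tau\right)=N\left(0,\frac{1-e^{-2t}}{2}\right),$$</span></p>
<p>so <span class="math-container">$X_t\to X_\infty\sim N\left(0,\tfrac12\right)$</span> in distribution as <span class="math-container">$t\to\infty$</span>.</p>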
|
2,357,115 | <blockquote>
<p>$A$ is an invertible matrix over $\mathbb{R}$; prove that $AA^t+A^tA$ is invertible.</p>
</blockquote>
<p>It seems to be a trivial question, but it's not.
I tried using determinants, i.e. $|A| \ne 0 \to |AA^t+A^tA|\ne0$, but calculating $|AA^t+A^tA|$ is not easy.</p>
| José Carlos Santos | 446,262 | <p>The matrix $A$ is similar to an upper triangular matrix $T$ (over $\mathbb C$, although perhaps not over $\mathbb R$). The entries of the main diagonal of $T$ are all nonzero, since $A$ is invertible. The entries of both matrices $TT^t$ and $T^tT$ are the squares of the entries of the main diagonal of $T$, and therefore $TT^t+T^tT$ is an upper triangular matrix such that the entries of the main diagonal are all nonzero. Therefore, it is invertible.</p>
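<p>A quick numerical sanity check of the claim in pure Python, for the $2\times 2$ case (an illustration only, not a proof):</p>

```python
def mul(X, Y):
    (a, b), (c, d) = X
    (e, f), (g, h) = Y
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

def add(X, Y):
    return tuple(tuple(x + y for x, y in zip(rx, ry)) for rx, ry in zip(X, Y))

def tr(X):  # transpose
    return ((X[0][0], X[1][0]), (X[0][1], X[1][1]))

def det(X):
    return X[0][0]*X[1][1] - X[0][1]*X[1][0]

for A in (((1, 2), (3, 4)), ((0, 1), (-1, 0)), ((2, 5), (1, 3))):
    assert det(A) != 0                      # A is invertible
    S = add(mul(A, tr(A)), mul(tr(A), A))   # A A^t + A^t A
    assert det(S) != 0                      # and so is the sum
```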
|