| qid | question | author | author_id | answer |
|---|---|---|---|---|
162,324 | <p>Let $x_n$ be a sequence in a Hilbert space such that
$\left\Vert x_n \right\Vert=1$ and $ \langle x_n,\ x_m \rangle =0 $, for all $n \neq m$.</p>
<p>Let $ K= \{ x_n/ n : n \in \mathbb{N} \} \cup \{0\} $.</p>
<p>I need to show that $K$ is compact, $\operatorname{co}(K)$ is bounded, but not closed and finally find all the extreme points of $ \overline{\operatorname{co}(K)} $ .</p>
| hearse | 71,768 | <p>I will show you an example as if the minimization problem were formulated in a single variable: $f(x)+\lambda g(x)$.</p>
<p>To find the $\lambda$'s, first obtain a closed form for $x$ by setting the gradient w.r.t. $x$ to zero. This closed form for $x$ will contain the $\lambda$'s. </p>
<p>Call this $x^{*}=c(\lambda)$. Now substitute the closed form $x^{*}$ into the constraint, $g(x^{*})=0$, and solve for $\lambda$; this yields a $\lambda$ that enforces your constraint at the optimal value $x^*$. </p>
<p>In a statistical modeling scenario, though, the $\lambda$'s are estimated by cross-validation if $f(\cdot)$ and $g(\cdot)$ are loss functions to be optimized over random variables. But I am not sure about the domain of your work.</p>
|
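The recipe in the answer above can be carried out concretely for a toy problem (an editorial sketch, not from the original post): minimize $f(x)=x^2$ subject to $g(x)=x-c=0$.

```python
# Minimize f(x) = x^2 subject to g(x) = x - c = 0 via the Lagrangian
# L(x, lam) = f(x) + lam * g(x), following the recipe in the answer above.
def solve_toy_problem(c):
    # Step 1: set dL/dx = 2x + lam = 0 and solve for x in closed form:
    #   x*(lam) = -lam / 2
    # Step 2: substitute x* into the constraint g(x*) = 0:
    #   -lam/2 - c = 0  =>  lam = -2c
    lam = -2.0 * c
    x_star = -lam / 2.0
    return x_star, lam

x_star, lam = solve_toy_problem(1.0)
```

For $c=1$ this gives $x^*=1$ and $\lambda=-2$, and one can verify that both the stationarity condition $2x^*+\lambda=0$ and the constraint $g(x^*)=0$ hold.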
594,498 | <p>Let $U$ be a linear subspace of $V$. Show that $\dim_K(U)\leq \dim_K(V)$ and conclude, that $\dim_K(U)= \dim_K(V) \Leftrightarrow U = V$. Is the equivalence $\dim_K(U)= \dim_K(V) \Leftrightarrow U = V$ also true for $\dim_K(V)= \infty$?</p>
<p>I've proved so far that $\dim_K(U)= \dim_K(V) \Leftrightarrow U = V$, but I have no idea if this is still true for $\dim_K(V)= \infty$. I also have trouble showing that $\dim_K(U)\leq \dim_K(V)$ (doesn't this follow by definition?)</p>
| Community | -1 | <p><strong>Hint</strong></p>
<p>Let $V$ be the subspace of $\mathbb R[x]$ defined by
$$V=\mathrm{span}\left\{x^{2n},\ n\in\mathbb N\right\}$$
Prove that $V\varsubsetneq \mathbb R[x]$. What's $\dim V?$</p>
|
2,249,036 | <p>I was watching a video on PDEs and when arriving at the part of Fourier Series, the professor said:</p>
<blockquote>
<p>And one of the most fascinating reads I ever had was a paper by Riemann on the history of this [Fourier Series].</p>
</blockquote>
<p>I tried looking for it but didn't succeed, and I was wondering if anyone could provide a link or a reference to this paper.</p>
<p>Thank you very much.</p>
<p>In case you need it, the link of the video is: <a href="https://www.youtube.com/watch?v=cf8rgx60IKA" rel="nofollow noreferrer">https://www.youtube.com/watch?v=cf8rgx60IKA</a> on the minute 14:21 (The audio on that specific part is messed up, so be careful if you are using headphones).</p>
| Chappers | 221,811 | <p>The scalar product of vectors $a=(a_1,a_2)$ and $b=(b_1,b_2)$ is given by the two formulae (provably equivalent using the cosine rule, see <a href="https://math.stackexchange.com/a/2227712/221811">here</a>)
$$ a \cdot b = a_1b_1+a_2b_2 = \sqrt{a_1^2+a_2^2}\sqrt{b_1^2+b_2^2} \cos{\theta}, $$
where $\theta$ is the angle between $a$ and $b$.</p>
<p>To use this to deduce the cosine rule, choose the unit vectors
$$ a=(\cos{\alpha},\sin{\alpha}), \qquad b=(\cos{\beta},\sin{\beta}). $$
$a$ makes angle $\alpha$ with $(1,0)$, $b$ makes angle $\beta$ with the same vector, and it is clear, since these angles are both in the same direction (one goes anticlockwise from $(1,0)$ in both cases), that the angle between this $a$ and this $b$ is $\beta-\alpha$ (or $\alpha-\beta$, but cosine is even so it makes no difference). Applying the formulae for the dot product gives
$$ 1\cdot 1 \cdot \cos{(\alpha-\beta)} = a\cdot b = \cos{\alpha}\cos{\beta} + \sin{\alpha}\sin{\beta}, $$
as required.</p>
<p>The last formula can be found by replacing $\beta$ by $-\beta$ in this formula, and using that sine is odd and cosine is even.</p>
|
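The two dot-product formulae used above can be compared numerically (an editorial sanity check, not part of the answer):

```python
import math

def dot_of_unit_vectors(alpha, beta):
    # Dot product of the unit vectors a = (cos a, sin a), b = (cos b, sin b),
    # computed coordinate-wise as in the answer:
    # a . b = cos(alpha)cos(beta) + sin(alpha)sin(beta)
    return math.cos(alpha)*math.cos(beta) + math.sin(alpha)*math.sin(beta)

# The dot product also equals |a||b|cos(angle) = cos(alpha - beta),
# since both vectors have length 1.
pairs = [(0.3, 1.2), (2.0, -0.7), (5.5, 3.1)]
dot_ok = all(math.isclose(dot_of_unit_vectors(a, b), math.cos(a - b),
                          abs_tol=1e-12)
             for a, b in pairs)
```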
121,631 | <p>How can one prove most simply that if a polynomial $f$ has only real coefficients and $f(c)=0$, and $k$ is the complex conjugate of $c$, then $f(k)=0$?</p>
| WimC | 25,313 | <p>Look at $\overline{f(c)}$ and use that conjugation is a homomorphism of $\mathbb{C}$. That is, $\overline{a+b} = \overline{a}+\overline{b}$ and $\overline{a\cdot b} = \overline{a} \cdot \overline{b}$.</p>
|
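A numeric illustration of the statement (the polynomial below is a hypothetical example, chosen because its roots $2$, $i$, $-i$ are known):

```python
def f(z):
    # A polynomial with real coefficients: z^3 - 2z^2 + z - 2 = (z - 2)(z^2 + 1)
    return z**3 - 2*z**2 + z - 2

c = 1j               # a (non-real) root of f
k = c.conjugate()    # its complex conjugate
val_c = f(c)         # both values should vanish, illustrating the claim
val_k = f(k)
```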
2,696,097 | <p>$ \lim_{n\to \infty} ( \lim_{x\to0} (1+\tan^2(x)+\tan^2(2x)+ \cdots + \tan^2(nx)))^{\frac{1}{n^3x^2}} $</p>
<p>The answer should be $ {e}^\frac{1}{3} $</p>
<p>I haven't encountered problems like this before and I'm pretty confused, thank you.</p>
<p>I guess we must use the remarkable limit of $ \frac{\tan(x)}{x} $ as $ x $ approaches $ 0 $, by dividing and then multiplying by $x$.</p>
| celtschk | 34,930 | <p>“Let $K$ be the set of elements in $G$ not in $H$, also including the identity.” — that's a contradiction, as the identity is always in $H$. I'll assume you mean $K = (G\setminus H)\cup \{e\}$ where $e$ is the identity.</p>
<p>$K$ cannot be a subgroup.</p>
<p><strong>Proof:</strong> Take $h\in H$ and $k\in K$, both not the identity. Then $g:=hk\in K$ (because if it were in $H$, then $k=gh^{-1}$ would be in $H$, too, and everything not in $H$ is in $K$). Now if $K$ were a group, it would follow that $gk^{-1}\in K$. But that's a contradiction, as $gk^{-1}=h\in H$, and the only element both in $H$ and $K$ is, by construction, the identity. But we assumed that $h$ is not the identity. $\square$</p>
|
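The proof above can be watched in action in a small concrete case (an illustrative choice, not from the original answer): $G=\mathbb Z/6\mathbb Z$ with $H=\{0,3\}$.

```python
# G = Z/6 under addition mod 6, H = {0, 3} a subgroup,
# K = (G \ H) ∪ {identity} as in the answer above.
G = set(range(6))
H = {0, 3}
K = (G - H) | {0}          # {0, 1, 2, 4, 5}

closed = all((a + b) % 6 in K for a in K for b in K)
# Closure fails, e.g. 1 + 2 = 3 lands in H \ {0},
# exactly the kind of collision the proof exploits.
```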
1,267,644 | <p>Having a circle of radius <span class="math-container">$R$</span> with the center in <span class="math-container">$O(0, 0)$</span>, a starting point on the circle (e.g. <span class="math-container">$(0, R)$</span>) and an angle <span class="math-container">$\alpha$</span>, how can I move the point on the circle with <span class="math-container">$\alpha$</span> degrees? I need to get the second point where it was moved.</p>
<h2>Example</h2>
<blockquote>
<p>The red point is on <span class="math-container">$(0, R)$</span> and <span class="math-container">$\alpha$</span> is <span class="math-container">$90$</span> degrees. The violet circle is where the first point is supposed to be moved, and its coordinates are <span class="math-container">$(R, 0)$</span>. Then we consider the violet point as starting point and move it with <span class="math-container">$45$</span> degrees. The new position will be where the blue circle is.</p>
<p><img src="https://i.stack.imgur.com/Tko9F.png" alt="" /></p>
</blockquote>
| Community | -1 | <p>You can consider the following linear transformation:</p>
<p><span class="math-container">$T : \mathbb{R}^2 \rightarrow \mathbb{R}^2$</span></p>
<p>given by <span class="math-container">$T(x,y) = (x\cos\theta - y\sin\theta,\ x\sin\theta + y\cos\theta)$</span>.</p>
<p>This does what you asked: it rotates your point counter-clockwise about the origin by the angle <span class="math-container">$\theta$</span>.</p>
|
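The rotation map above is easy to test numerically (an illustrative sketch; note that the question's example $(0,R)\to(R,0)$ is a *clockwise* step, so a negative angle is used for that convention):

```python
import math

def rotate(point, theta):
    """Rotate `point` counter-clockwise about the origin by theta radians,
    using the linear map T from the answer above."""
    x, y = point
    return (math.cos(theta)*x - math.sin(theta)*y,
            math.sin(theta)*x + math.cos(theta)*y)

R = 5.0
p1 = rotate((0.0, R), math.radians(-90))   # expect approximately (R, 0)
p2 = rotate(p1, math.radians(-45))         # a further 45-degree (clockwise) step

p1_ok = math.isclose(p1[0], R, abs_tol=1e-9) and abs(p1[1]) < 1e-9
p2_ok = (math.isclose(p2[0], R/math.sqrt(2), abs_tol=1e-9)
         and math.isclose(p2[1], -R/math.sqrt(2), abs_tol=1e-9))
```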
1,610,800 | <p>When dividing $f(x)$ by $g(x)$: $f(x)=g(x)Q(x)+R(x)$.
How to find the quotient $Q(x)$ and the remainder $R(x)$?
For example: $f(x)=\ 2x^4+13x^3+18x^2+x-4 \ $
, $g(x)=\ x^2+5x+2 \ $</p>
<blockquote>
<p>At first $g(x)= (x+4.56)(x+0.44)$ then we use synthetic division : $Q(x)=
\ 2x^2+3x-1 \ $ , $R(x)= \ 0.04x-1.98 \ $</p>
</blockquote>
<p>How can I show that $f(x)$ and $g(x)$ have no common roots?</p>
<blockquote>
<p><strong>I was wrong</strong> when I used synthetic division rather than long division
so that, $Q(x)= \ 2x^2+3x-1 \ $ , $R(x)=-2 \ $</p>
</blockquote>
<p>$f(x)= (2x^2+3x-1)\,g(x) - 2$</p>
<p>Let $x=\alpha$ be a root of both $f(x)=0$ and $g(x)=0$,
and substitute in the equation above: this gives $0 = 0 - 2$, a contradiction, which proves $\alpha$ does not exist, so they have no common roots.</p>
<p>Is that true? </p>
| Community | -1 | <p><strong>Hint</strong>:</p>
<p>If two polynomials have common roots, they have a common factor (which is the product of the binomials $z-r_i$).</p>
<p>This common factor is their $\gcd$, for which the Euclidean algorithm can be used (divide $p$ by $q$; if there is a remainder, let $r$, divide $q$ by $r$, and so on).</p>
|
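The corrected long division above, and the resulting constant remainder, can be checked with a small long-division routine (an illustrative helper, not from the original post):

```python
def polydiv(f, g):
    """Long division of polynomials given as coefficient lists,
    highest degree first. Returns (quotient, remainder)."""
    f = list(f)
    q = []
    while len(f) >= len(g):
        coef = f[0] / g[0]
        q.append(coef)
        for i, gc in enumerate(g):
            f[i] -= coef * gc
        f.pop(0)          # leading coefficient is now zero; drop it
    return q, f

# f(x) = 2x^4 + 13x^3 + 18x^2 + x - 4,  g(x) = x^2 + 5x + 2
Q, R = polydiv([2, 13, 18, 1, -4], [1, 5, 2])
# Long division gives Q = 2x^2 + 3x - 1 and R = -2; since the remainder is a
# nonzero constant, gcd(f, g) is constant and f, g share no root.
```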
1,598,335 | <p>Find the transformation that takes $y=3^x$ to $y=\textit{e}^x$.
I have tried: </p>
<p>Let
$y=3^x$ to $y=e^{x'}$ </p>
<p>$$\log_{3}(y)=x\quad\text{hence}\quad\log_{3}(y)=\frac{\log_{e}(y)}{\log_{e}(3)}$$</p>
<p>$$x\log_{e}(3)=x'$$</p>
<p>Gives the transformation as dilate by $\log_e(3)$
And also this:</p>
<p>$$3^{\log_{3}e}=e$$</p>
<p>$$e^x=3^{(\log_{3}e)x}$$</p>
<p>And the transformation is dilate by $\frac{1}{\log_3e}$</p>
<p>Could I please get an explanation of which one is right?</p>
| Angelo Mark | 280,637 | <p>Clearly </p>
<p>$1) \to$ $$2x=\cos(\theta)+\sqrt{2}\sin(\theta)$$</p>
<p>$2) \to $ $$2y=-\cos(\theta)+\sqrt{2}\sin(\theta)$$</p>
<p>Thus by $1)+2)$ we get ,</p>
<p>$$2x+2y=2\sqrt{2}\sin(\theta) $$</p>
<p>So $$\frac{x+y}{\sqrt{2}}=\sin(\theta)$$</p>
<p>Thus by $1)-2)$ we get ,</p>
<p>$$2x-2y=2\cos(\theta) $$</p>
<p>So $$x-y=\cos(\theta)$$</p>
<p>Now clearly $$ \sin^2(\theta)+\cos^2(\theta)= \left(\frac{x+y}{\sqrt{2}}\right)^2 +(x-y)^2= 1$$</p>
<p>$$\Rightarrow 3x^2 − 2xy + 3y^2 = 2$$</p>
|
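A quick numerical check of the derived equation (illustrative, not part of the answer): for every $\theta$, the parametrized point should satisfy $3x^2-2xy+3y^2=2$.

```python
import math

def quadratic_form(theta):
    # Parametrization implied by equations 1) and 2) in the answer:
    x = (math.cos(theta) + math.sqrt(2)*math.sin(theta)) / 2
    y = (-math.cos(theta) + math.sqrt(2)*math.sin(theta)) / 2
    return 3*x*x - 2*x*y + 3*y*y

curve_ok = all(math.isclose(quadratic_form(t/7.0), 2.0, abs_tol=1e-12)
               for t in range(50))
```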
3,503,175 | <p>How to prove the following formulas</p>
<p><span class="math-container">$$
\sum_{n= 0}^{\infty} \frac{\cos(nx)}{n!} = e^{\cos(x)} \cos(\sin x) \\
\sum_{n= 0}^{\infty} \frac{\sin(nx)}{n!} = e^{\cos(x)} \sin(\sin x)
$$</span></p>
<p><strong>without</strong> using complex numbers ? </p>
<p>These summations can be done with complex numbers, by substituting <span class="math-container">$e^{inx}$</span> for <span class="math-container">$\cos(nx)$</span> and <span class="math-container">$\sin(nx)$</span>, and then using the Taylor expansion of <span class="math-container">$e^x$</span>. I am aware that it is possible to do the same thing with matrices, by using the matrix-to-complex-number analogy.</p>
| pathfinder | 23,431 | <p>For the second identity, you can use the Chebyshev polynomial of the second kind <span class="math-container">$U_n(x)$</span> since
<span class="math-container">$$\sin(nx)=\sin xU_{n-1}(\cos x).$$</span> Hence,
<span class="math-container">$$\sum_{n=0}^\infty\frac{\sin(n x)}{n!}=\sin x\sum_{n=0}^\infty\frac{ U_{n-1}(\cos x)}{n!}$$</span>
But a generating function for <span class="math-container">$U$</span> is
<span class="math-container">$$\sum_{n=0}^\infty\frac{U_{n-1}(x)}{n!}t^n=\frac{e^{t x} \sin \left(t \sqrt{1-x^2}\right)}{\sqrt{1-x^2}}.$$</span>
So substituting <span class="math-container">$\cos x$</span> for <span class="math-container">$x$</span> gives
<span class="math-container">$$\frac{1}{\sin x}\sum_{n=0}^\infty\frac{\sin(nx)}{n!}t^n=\frac{e^{t\cos x} \sin \left(t \sin x\right)}{\sin x}.$$</span>
Multiplying by <span class="math-container">$\sin x$</span> and letting <span class="math-container">$t=1$</span> gives
<span class="math-container">$$\sum_{n=0}^\infty\frac{\sin (nx)}{n!}=e^{\cos x}\sin(\sin x).\tag{1}$$</span>
We can ignore the <span class="math-container">$n=1$</span> term since <span class="math-container">$\sin(0)/0!=0$</span>. Now differentiating with respect to <span class="math-container">$x$</span> gives
<span class="math-container">$$\sum_{n=1}^\infty\frac{\cos(nx)}{(n-1)!}=e^{\cos x} \cos x \cos (\sin x)-\sin x \sin (\sin x) e^{\cos x}.$$</span>
The left hand side can be rewritten
<span class="math-container">$$\sum_{n=0}^\infty\frac{\cos(nx+x)}{n!}=\sum_{n=0}^\infty\frac{\cos (x) \cos (n x)-\sin (x) \sin (n x)}{n!},$$</span>
so
<span class="math-container">$$\cos x\sum_{n=0}^\infty\frac{\cos(nx)}{n!}=e^{\cos x} \cos x \cos (\sin x)-\sin x \sin (\sin x) e^{\cos x}+\sin x\sum_{n=0}^\infty\frac{\sin(nx)}{n!}.$$</span>
Using (1) and expanding gives
<span class="math-container">$$\cos x\sum_{n=0}^\infty\frac{\cos(nx)}{n!}=\cos x e^{\cos x}\cos(\sin x).$$</span>
Dividing by <span class="math-container">$\cos x$</span> then gives the first identity,
<span class="math-container">$$\sum_{n=0}^\infty\frac{\cos(nx)}{n!}=e^{\cos x}\cos(\sin x).$$</span></p>
<hr>
<p>I tried another way, using the Bell numbers, but it involved Euler's identity right at the end. Worth including anyway as it's not the usual way to prove the result. Here goes:</p>
<p>Consider,
<span class="math-container">$$\sum_{k=0}^\infty\frac{x^k}{k!}\sum_{n=0}^\infty\frac{n^k}{n!}.$$</span>
Using the Bell numbers (see <a href="https://oeis.org/search?q=1%2C%205%2C%2052%2C%20877%2C%2021147%2C%20678570&language=english&go=Search" rel="nofollow noreferrer">A099977</a> and <a href="https://en.wikipedia.org/wiki/Bell_number" rel="nofollow noreferrer">this Wikipedia page</a>; notation not to be confused with the <a href="https://en.wikipedia.org/wiki/Bernoulli_number" rel="nofollow noreferrer">Bernoulli number notation</a>) then by <a href="https://en.wikipedia.org/wiki/Bell_number#Moments_of_probability_distributions" rel="nofollow noreferrer">Dobinski's formula</a> we have,
<span class="math-container">$$e\sum_{k=0}^\infty\frac{x^k}{k!}B_k.$$</span>
Using the <a href="https://en.wikipedia.org/wiki/Bell_number#Generating_function" rel="nofollow noreferrer">generating function</a> for the Bell numbers,
<span class="math-container">$$e\sum_{k=0}^\infty\frac{B_k}{k!}x^k=e\cdot e^{e^x-1}=e^{e^x}.$$</span>
Now let <span class="math-container">$x=ix$</span> to obtain
<span class="math-container">$$\sum_{k=0}^\infty\frac{(ix)^k}{k!}\sum_{n=0}^\infty\frac{n^k}{n!}=e^{e^{ix}}=e^{\cos x+i\sin x}=e^{\cos x}(\cos(\sin x)+i\sin(\sin x))\\=e^{\cos x}\cos(\sin x)+ie^{\cos x}\sin(\sin x).$$</span>
The left hand side is equivalent to
<span class="math-container">$$\sum_{n=0}^\infty\frac{1}{n!}\sum_{k=0}^\infty\frac{(-1)^k (xn)^{2k}}{(2k)!}+i\sum_{n=0}^\infty\frac{1}{n!}\sum_{k=0}^\infty\frac{(-1)^k (xn)^{2k+1}}{(2k+1)!}.$$</span>
But the inner sums are the Taylor series for <span class="math-container">$\cos(nx)$</span> and <span class="math-container">$\sin(nx)$</span>, so equating real and imaginary parts gives the result.</p>
|
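Both closed forms can also be checked numerically against truncated partial sums (an editorial sanity check, independent of the proofs above):

```python
import math

def partial_sum(x, kind, terms=60):
    # Truncation of sum_{n>=0} trig(n x)/n! ; the tail is bounded by
    # sum_{n>=terms} 1/n!, which is negligible for terms = 60.
    f = math.cos if kind == "cos" else math.sin
    return sum(f(n*x) / math.factorial(n) for n in range(terms))

def closed_form(x, kind):
    f = math.cos if kind == "cos" else math.sin
    return math.exp(math.cos(x)) * f(math.sin(x))

series_ok = all(math.isclose(partial_sum(x, k), closed_form(x, k),
                             abs_tol=1e-10)
                for x in (0.0, 0.5, 1.3, 2.9) for k in ("cos", "sin"))
```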
3,503,175 | <p>How to prove the following formulas</p>
<p><span class="math-container">$$
\sum_{n= 0}^{\infty} \frac{\cos(nx)}{n!} = e^{\cos(x)} \cos(\sin x) \\
\sum_{n= 0}^{\infty} \frac{\sin(nx)}{n!} = e^{\cos(x)} \sin(\sin x)
$$</span></p>
<p><strong>without</strong> using complex numbers ? </p>
<p>These summations can be done with complex numbers, by substituting <span class="math-container">$e^{inx}$</span> for <span class="math-container">$\cos(nx)$</span> and <span class="math-container">$\sin(nx)$</span>, and then using the Taylor expansion of <span class="math-container">$e^x$</span>. I am aware that it is possible to do the same thing with matrices, by using the matrix-to-complex-number analogy.</p>
| QC_QAOA | 364,346 | <p>Define</p>
<p><span class="math-container">$$g(x)=\sum_{n= 0}^{\infty} \frac{\cos(nx)}{n!}$$</span></p>
<p><span class="math-container">$$f(x)=\sum_{n= 0}^{\infty} \frac{\sin(nx)}{n!}$$</span></p>
<p>We can differentiate <span class="math-container">$g(x)$</span> and <span class="math-container">$f(x)$</span> term by term (it is a separate exercise to show why this is allowed) to get</p>
<p><span class="math-container">$$g'(x)=\sum_{n= 1}^{\infty} \left(-\frac{n \sin (n x)}{n!}\right)=-\sum_{n=0}^\infty\frac{\sin((n+1)x)}{n!}$$</span></p>
<p><span class="math-container">$$=-\sum_{n=0}^\infty\frac{\sin(nx)\cos(x)+\cos(nx)\sin(x)}{n!}=-\cos(x)f(x)-\sin(x)g(x)$$</span></p>
<p>By the same logic</p>
<p><span class="math-container">$$f'(x)=\sum_{n= 1}^{\infty} \left(\frac{n \cos(n x)}{n!}\right)=\sum_{n=0}^\infty\frac{\cos((n+1)x)}{n!}$$</span></p>
<p><span class="math-container">$$=\sum_{n=0}^\infty\frac{\cos(nx)\cos(x)-\sin(nx)\sin(x)}{n!}=\cos(x)g(x)-\sin(x)f(x)$$</span></p>
<p>Also, we have initial conditions <span class="math-container">$g(0)=e$</span> and <span class="math-container">$f(0)=0$</span>. This coupled system of ODEs together with these initial conditions has a unique pair of solutions; as we just showed, <span class="math-container">$g(x)$</span> and <span class="math-container">$f(x)$</span> satisfy it. However, we also know that</p>
<p><span class="math-container">$$G(x)=e^{\cos (x)} \cos (\sin (x))$$</span></p>
<p><span class="math-container">$$F(x)=e^{\cos (x)}\sin (\sin (x)) $$</span></p>
<p>Also satisfy these initial conditions and ODEs. We conclude that <span class="math-container">$G(x)=g(x)$</span> and <span class="math-container">$F(x)=f(x)$</span>.</p>
|
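The ODE system and initial conditions claimed above can be spot-checked with finite differences (an editorial sanity check, assuming nothing beyond the formulas in the answer):

```python
import math

def G(x): return math.exp(math.cos(x)) * math.cos(math.sin(x))
def F(x): return math.exp(math.cos(x)) * math.sin(math.sin(x))

h = 1e-6
def deriv(fn, x):
    # Central-difference approximation of fn'(x), error O(h^2).
    return (fn(x + h) - fn(x - h)) / (2*h)

x0 = 0.8
# G' = -cos(x) F - sin(x) G  and  F' = cos(x) G - sin(x) F
ode1_ok = math.isclose(deriv(G, x0),
                       -math.cos(x0)*F(x0) - math.sin(x0)*G(x0), abs_tol=1e-6)
ode2_ok = math.isclose(deriv(F, x0),
                       math.cos(x0)*G(x0) - math.sin(x0)*F(x0), abs_tol=1e-6)
init_ok = (math.isclose(G(0.0), math.e)
           and math.isclose(F(0.0), 0.0, abs_tol=1e-12))
```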
1,587,007 | <p>I have the following matrix $\mathbf{U}$, which is in echelon form. What is strange to me is that I haven't met a matrix with the first column zero before. </p>
<p>$$\mathbf{U} = \begin{bmatrix}
0 & 5 & 4 & 3 \\
0 & 0 & 2 & 1 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
\end{bmatrix}$$</p>
<p>According to theory, columns with pivots define a basis. Also according to theory, a matrix can have more than one basis. So, according to these facts, we may say that the bases are:</p>
<p>$$B_1= \begin{Bmatrix}\begin{bmatrix} 5 \\ 0 \\ 0 \\0 \\ \end{bmatrix},
\begin{bmatrix} 4 \\ 2 \\ 0 \\ 0 \\ \end{bmatrix}\end{Bmatrix} ,~ B_2= \begin{Bmatrix}\begin{bmatrix} 5 \\ 0 \\ 0 \\0 \\ \end{bmatrix},
\begin{bmatrix} 3 \\ 1 \\ 0 \\ 0 \\ \end{bmatrix}\end{Bmatrix} \text{and}~B_3= \begin{Bmatrix}\begin{bmatrix} 4 \\ 2 \\ 0 \\0 \\ \end{bmatrix},
\begin{bmatrix} 3 \\ 1 \\ 0 \\ 0 \\ \end{bmatrix}\end{Bmatrix}$$</p>
<p>Taking one of these bases ($B_1$ or $B_2$ or $B_3$), we may represent the non-basis column as a combination of the basis columns. I think this is by definition of the column space basis. However, this seems to hold only when taking $B_3$ as the column space basis.</p>
<p><strong>Can you please correct possible flaws on my logic?</strong> </p>
<p>Thanks!!</p>
| Dave L | 300,365 | <p>Any of $B_1$, $B_2$ or $B_3$ is a basis for the column space of $\mathbf{U}$. It is pretty clear that the last column is a linear combination of either $B_2$ or $B_3$, with the coefficients 0 and 1. There is a little arithmetic to do to find the coefficients to express the last column in terms of $B_1$, but you can probably figure out that $1/5$ and $1/2$ will work. And you can express the first column in terms of any of these bases with the coefficients 0 and 0. </p>
|
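The coefficients claimed in the answer can be verified directly (an illustrative check, not from the original post):

```python
# Express the last column (3, 1, 0, 0) of U in the basis B1 from the question,
# using the coefficients 1/5 and 1/2 suggested in the answer.
def comb(coeffs, vectors):
    n = len(vectors[0])
    return tuple(sum(c*v[i] for c, v in zip(coeffs, vectors)) for i in range(n))

col2 = (5, 0, 0, 0)
col3 = (4, 2, 0, 0)
col4 = (3, 1, 0, 0)

b1_ok = all(abs(a - b) < 1e-12
            for a, b in zip(comb((1/5, 1/2), (col2, col3)), col4))
```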
4,005,463 | <p>In the problem I am solving, table values are given for a function, and it says to find <span class="math-container">$f''(0.5)$</span> using the second-order central difference formula. I know the formula, which is
<span class="math-container">$\frac{f(x+\triangle x)-2f(x)+f(x-\triangle x)}{{\triangle x}^2}$</span>. But the problem is, the table values are as follows (I am listing values around <span class="math-container">$0.5$</span> only).</p>
<p><span class="math-container">$f(0.48)=1.336, f(0.5)=1.405,f(0.51)=1.481$</span></p>
<p>I am confused what value should I use as <span class="math-container">$\triangle x$</span>.</p>
| AlvinL | 229,673 | <p>It's not said that <span class="math-container">$Y$</span> is a complete metric space, but we can work in the completion <span class="math-container">$\overline{Y}$</span>.</p>
<p>If I'm reading this right, we want to prove</p>
<blockquote>
<p>If <span class="math-container">$\mathscr F \subseteq C(X,Y)$</span> is equicontinuous and pointwise precompact, then <span class="math-container">$\mathscr F$</span> is precompact.</p>
</blockquote>
<p>Regarding Hint <span class="math-container">$\# 1$</span>. Fix <span class="math-container">$f_n(x_n)\in F, n\in\mathbb N$</span>. Without loss of generality, we may assume <span class="math-container">$x_n$</span> to be convergent due to compactness of <span class="math-container">$X$</span>.</p>
<p>By assumption <span class="math-container">$\{f_n(x_i) \mid n\in\mathbb N\}$</span> is precompact for every <span class="math-container">$i\in\mathbb N$</span>. Apply diagonal argument. Put <span class="math-container">$\lim\limits_{n\in N_1} f_{n}(x_1) := f(x_1)\in\overline{Y}$</span> for some subsequence <span class="math-container">$N_1\subseteq \mathbb N$</span>. Then, again by precompactness, there exists <span class="math-container">$N_2\subseteq N_1$</span> s.t <span class="math-container">$\lim\limits _{n\in N_2} f_n(x_2) := f(x_2)$</span> etc. Now define <span class="math-container">$N\subseteq \mathbb N$</span> such that the <span class="math-container">$k$</span>-th component is the <span class="math-container">$k$</span>-th
component of <span class="math-container">$N_k$</span>. Then <span class="math-container">$\lim \limits _{n\in N} f_n(x_i) = f(x_i)$</span> for every <span class="math-container">$i\in\mathbb N$</span>.</p>
<hr />
<p>Now we have <span class="math-container">$f_{\ell_n}(x_i)\to f(x_i)$</span> for every <span class="math-container">$i\in\mathbb N$</span>. Since <span class="math-container">$x_n\to x$</span>, we also have <span class="math-container">$x_{\ell_n} \to x$</span>. It suffices to show <span class="math-container">$f_{\ell_n}(x_{\ell _n})$</span> is Cauchy.</p>
<hr />
<p>To ease notation, identify <span class="math-container">$f_{\ell_n}(x_{\ell _n})$</span> with <span class="math-container">$f_n(x_n), n\in\mathbb N$</span>. By triangle inequality
<span class="math-container">$$d(f_n(x_n), f_m(x_m)) \leqslant d(f_n(x_n), f(x_n)) + d(f_m(x_m), f(x_m)) + d(f(x_m),f(x_n)). $$</span>
The first two terms can be made small due to <span class="math-container">$f_n(x_i)\to f(x_i)$</span>.</p>
<p>To make the third term small eventually, it suffices to show that <span class="math-container">$f(x_i)$</span> is Cauchy. We have <span class="math-container">$f(x_i) = \lim f_n(x_i)$</span>. Due to continuity, the metric respects limits, so
<span class="math-container">$$d(f(x_i),f(x_j)) = \lim d(f_n(x_i),f_n(x_j)) \leqslant \lim d(f_n(x_i), f_n(x)) + d(f_n(x_j),f_n(x)). $$</span>
The term <span class="math-container">$d(f_n(x_i), f_n(x))$</span> is small eventually due to equicontinuity of <span class="math-container">$\mathscr F$</span>:</p>
<p>Firstly, <span class="math-container">$d(f_n(x_i),f_n(x))$</span> can be made small around <span class="math-container">$x$</span> by definition. But <span class="math-container">$x_i\to x$</span> as well, so <span class="math-container">$x_i\in U_x$</span> eventually.</p>
|
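For the question's unevenly spaced nodes $0.48, 0.5, 0.51$, one common three-point formula for $f''$ on a nonuniform grid is the following (a sketch — the weights are the standard ones obtained from Taylor expansions; whether the course intends this formula or a simple averaging of $\triangle x$ is an assumption):

```python
def second_derivative_nonuniform(fm, f0, fp, h1, h2):
    """Three-point approximation of f''(x) from f(x-h1), f(x), f(x+h2).
    Reduces to the standard central formula when h1 == h2, and is exact
    for quadratics."""
    return 2.0 * (h2*fm - (h1 + h2)*f0 + h1*fp) / (h1 * h2 * (h1 + h2))

# Sanity check on f(x) = x^2 (f'' = 2 everywhere), with the question's
# unequal spacings h1 = 0.02, h2 = 0.01 around x = 0.5:
x, h1, h2 = 0.5, 0.02, 0.01
approx = second_derivative_nonuniform((x - h1)**2, x**2, (x + h2)**2, h1, h2)
```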
1,627,381 | <p>Suppose $G_1$ is a subset of a sigma field $G_2$, that is $G_1 \subset G_2$, and $G_1$ is itself proven to be a sigma field.
Does this necessarily imply that $G_1 = G_2$?</p>
| Artem Mavrin | 225,635 | <p>No: If $X$ is nonempty, $\mathcal{G}_1 = \{\emptyset, X\}$, and $\mathcal{G}_2 = \mathcal{P}(X)$, then $\mathcal{G}_1$ and $\mathcal{G}_2$ are $\sigma$-fields with $\mathcal{G}_1 \subseteq \mathcal{G}_2$, but $\mathcal{G}_1 \neq \mathcal{G}_2$.</p>
|
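The counterexample can be stated executably for a two-point set (an illustrative sketch; for finite $X$ it suffices to check complements and pairwise unions):

```python
from itertools import chain, combinations

X = frozenset({'a', 'b'})

def powerset(s):
    s = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))}

G1 = {frozenset(), X}    # the trivial sigma-field {∅, X}
G2 = powerset(X)         # the power set P(X)

def is_sigma_field(F):
    # Finite case: closed under complement and (pairwise, hence finite) union.
    return (frozenset() in F
            and all(X - A in F for A in F)
            and all(A | B in F for A in F for B in F))

proper = G1 < G2         # G1 is a sigma-field strictly contained in G2
```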
286,932 | <p>For convenience we work with commutative rings instead of commutative algebras.</p>
<hr>
<p>Fix a commutative ring $R$. Consider the functor $\mathsf{Mod}\longrightarrow \mathsf{CRing}$ defined by taking an $R$-module $M$ to $R\ltimes M$ (with dual number multiplication). Following the <a href="https://ncatlab.org/nlab/show/Mod#TangentsAndDeformationTheory" rel="noreferrer">nlab</a>, the module of Kähler differentials of $R$ is defined as the universal arrow from $R$ to this functor.</p>
<p>There's also the equivalence between the category of <em>split</em> zero-square extensions and the category of modules. These result in the natural isomorphisms ($\operatorname{dom}$ takes an arrow to its domain).</p>
<p>$$\begin{aligned}\mathsf{CRing}(R,\operatorname{dom}(A^{\prime}\twoheadrightarrow A)) & \cong{\substack{\text{split square-zero}\\
\text{extensions}
}
}(R\ltimes\Omega_{R}\twoheadrightarrow R,A^{\prime}\twoheadrightarrow A)\\
& \cong \mathsf{Mod}(\Omega_{R},\operatorname{Ker}(A^{\prime}\twoheadrightarrow A)).
\end{aligned}$$</p>
<p>So $\Omega _R\cong \bf 0$ iff $R$ has exactly one arrow to the domain of every
<em>split</em> square-zero extension.</p>
<hr>
<p>On the other hand, we say $R$ is unramified if given a zero-square extension $A^\prime \twoheadrightarrow A$, the induced post-composition $$\mathsf{CRing}(R,A^\prime)\longrightarrow \mathsf{CRing}(R,A)$$ is injective. The diagrams of interest are below.</p>
<p>$$\require{AMScd} \begin{CD} A^\prime /I @<<< R\\ @AAA \\ A^\prime & \end{CD}
\; \; \; \begin{CD} A^\prime /I @<<< R\\ @AAA @VVV\\ A^\prime @<<< I \end{CD}$$</p>
<hr>
<p>Now, if $\Omega _R\cong \bf 0$ then there <em>exists and is unique</em> a diagonal arrow $R\to A^\prime $, which seems to easily give "unramified w.r.t <em>split</em> square-zero extensions". However, I see no way to deduce anything about the non-split ones.</p>
<hr>
<p>The thing is, I feel a proof is very close: suppose $g,h$ are diagonal fillers $R\to A^\prime$ on the left below. Viewing them as module arrows this is equivalent to saying $g-h$ lands in $I$, i.e factors through $I\vartriangleleft A^\prime $. Finally $g=h$ iff this factorization $R\to I$ of $g-h$ must be zero. If only this could be given by the universal property...</p>
<p>Perhaps the correct way to proceed is using some universal property of the diagonal? (Not that it realizes the Kähler differentials.)</p>
| Vladimir Sotirov | 75,650 | <p>Some time ago I worked out how to define Kähler differentials and derive their cotangent and conormal exact sequences in any protomodular category with pullbacks and pushouts. Based on the same idea, I think the following argument works to show that a morphism being unramified is equivalent to the vanishing of its module of relative Kähler differentials.</p>
<hr>
<p>Given a morphism $X\xrightarrow{f} Y$, the category of <strong>Beck modules equipped with $f$-derivations</strong> has for its objects internal abelian group objects $Y\leftarrow M$ in $\mathcal C/Y$ equipped with an additional section $X\xrightarrow{d} M$ of $X\leftarrow M$ such that $d\circ f=0\circ f$ where $Y\xrightarrow{0}M$ is the zero section of the Beck module. The Beck module $Y\leftarrow\Omega_{Y/X}$ of <strong>Kähler differentials</strong> is the initial object of this category.</p>
<p>For example, a Beck module over a ring $R$ is precisely a square-zero split extension $R\leftarrow R\oplus M$ for $M$ an $R$-module, and a $k$-linear derivation amounts to an $k$-linear map $R\xrightarrow{d}M$ such that $d(ab)=d(a)b+ad(b)$ and $d(c)=0$ for $c$ in the image of $k\to R$.</p>
<p>In the case where $\mathcal C$ is a protomodular category (e.g. the category of rings), recall that the forgetful functor from Beck modules to morphisms equipped with a section has a left adjoint. For the category of rings, a pointed object over a ring $R$ is a split extension $R\leftarrow R\oplus A$ for an $R$-algebra $A$, which left adjoint sends to the Beck module $R\oplus (A/A^2)$ (i.e. the left adjoint sends $R$-algebras $A$ to $R$-modules $A/A^2$).</p>
<p>This allows us to generalize the construction of Kähler differentials to arbitrary protomodular categories as follows.</p>
<p>Define a weaker notion of a <strong>pointed object equipped with an $f$-prederivation</strong>, i.e. a morphism $Y\leftarrow P$ equipped with a "point" section $Y\xrightarrow{0}P$ and an additional section $X\xrightarrow{d} P$ of $X\leftarrow P$ such that $d\circ f=0\circ f$.</p>
<p>For example, a pointed object in the category of rings is a split extension $R\leftarrow R\oplus A$ for $A$ an $R$-algebra, and a $k$-linear pre-derivation amounts to an additive map $R\xrightarrow{d} A$ such that $d(ab)=d(a)b+ad(b)+d(a)d(B)$ and $d(c)=0$ for $c$ in the image of $k\to R$.</p>
<p>The pointed object $\Gamma_{Y/X}$ of <strong>Kähler pre-differentials</strong> is then an initial object in the category of morphisms $Y\leftarrow P$ equipped with a pair of sections that make a fork $X\xrightarrow{f}Y\overset d{\underset 0\rightrightarrows} P$. </p>
<p>It turns out that $\Gamma_{Y/X}$ is then given by the cokernel pair of $X\xrightarrow{f}Y$, and $\Omega_{Y/X}$ by its reflection in Beck modules. Hence in the category of rings, $\Gamma_{R/k}$ is $R\leftarrow R\otimes_kR$ and $\Omega_{R/k}$ is $R\leftarrow R\oplus(J/J^2)$ where $J=\ker(R\leftarrow R\otimes_k R)$. </p>
<hr>
<p>Recall that in a protomodular category, being an abelian group object is a property of pointed objects, rather than a structure. Accordingly, we define a morphism $A'\xrightarrow{f} A$ in a protomodular category $\mathcal C$ to be a <strong>square-zero morphism</strong> if its kernel pair $K[f]\rightrightarrows A'$ equipped with the diagonal morphism $A'\xrightarrow{\Delta}K[f]$ is a Beck module over $A'$.</p>
<p>For example, $Y\leftarrow\Omega_{Y/X}$ is a square-zero morphism because its kernel pair is the pullback of a Beck module so still a Beck module.</p>
<p>Thus, we can define a morphism $X\to Y$ to be <strong>unramified</strong>, respectively <strong>etale</strong>, respectively <strong>smooth</strong>, if any square
$\require{amsCD}
\begin{CD}
X @>>> A'\\
@VVV @VVV\\
Y@>>> A
\end{CD}$
can be filled so that the resulting diagram commutes with at most one, respectively at least one, respectively exactly one morphism $Y\to A'$. </p>
<p>We can now easily show that unramified is equivalent to $\Omega_{Y/X}=0$, i.e. to the fact that the zero section $Y\xrightarrow{0}\Omega_{Y/X}$ and the canonical $f$-derivation $Y\xrightarrow{d}\Omega_{Y/X}$ are equal.</p>
<p>First, unramified implies $\Omega_{Y/X}=0$ since there must be at most one fill in the square
$\require{amsCD}
\begin{CD}
X @>>> \Omega_{Y/X}\\
@VVV @VVV\\
Y@= Y
\end{CD}$
and these fills are precisely the $f$-derivations $Y\to\Omega_{Y/X}$.</p>
<p>Second, here's a sketch of a proof that $\Omega_{Y/X}=0$ implies $X\to Y$ is unramified. The pullback of the square-zero $A\leftarrow A'$ along $Y\to A$ is still square-zero, hence it suffices to show that there is at most one fill in any square
$\require{amsCD}
\begin{CD}
X @>>> A\\
@VVV @VfVV\\
Y@= Y
\end{CD}$. But a pair of fills $Y\overset a{\underset b\rightrightarrows}A$ is the data of a morphism $\Gamma_{Y/X}\to K[f]$ of pointed objects over $Y$. Since $A\xrightarrow{f}Y$ is square-zero, $K[f]$ is by assumption a Beck module, hence $\Gamma_{Y/X}\to K[f]$ factors as $\Gamma_{Y/X}\to\Omega_{Y/X}\to K[f]$. Finally, triviality of $\Omega_{Y/X}$ forces uniqueness of the morphism $\Omega_{Y/X}\to K[f]$.</p>
|
87,544 | <p>For teaching purposes, I want to create a <em>Mathematica</em> notebook that will "notice" when the user defines or redefines a variable or function of a particular name, so that it can check the value and take some appropriate action. For example, the notebook might be monitoring the symbol "foo", so that if the user executes <code>foo = 22/7</code> at any time, code that I've written and hidden (perhaps in an invisible cell, perhaps via an initialization cell that loads a package I've written) might write out some hint text, e.g. "You've guessed the right variable name, but not the right value yet." I know I could do this by having the user click an explicit "test" button or some such thing, but I'd rather have my code monitor the user invisibly and take action without the user invoking it.</p>
<p>Is this possible? If so, how might it be implemented?</p>
| Karsten 7. | 18,476 | <p>One can use <a href="http://reference.wolfram.com/language/ref/$Pre.html?q=%24Pre" rel="nofollow noreferrer"><code>$Pre</code></a> to check if an input expression defines the correct variable and is doing so using the correct value.</p>
<pre><code>SetAttributes[check, HoldAll]
check[new_Set] := (Print["You guessed it!"]; new) /; HoldForm@new == HoldForm@Set[foo, 23]
check[new_Set] := (Print[
"You've guessed the right variable name, but not the right value yet."]; new) /;
Extract[HoldForm[new], {1, 1}, Hold] == Hold[foo]
check[new_] := new
$Pre = check;
</code></pre>
<blockquote>
<p><img src="https://i.stack.imgur.com/FGxiw.png" alt="Example"></p>
</blockquote>
|
1,654,354 | <p>Why is $A=\{(x_1,x_2,...,x_n)|\exists_{i\ne j}: x_i=x_j\}$ a null set?</p>
<p>This claim was made in a solution I ran into, and I don't see how it holds. I try to follow the formal definition of a null set: the set can be covered by a countable collection of open cubes whose total volume is as small as desired. I can't, however, tell how that is achieved here. There seem to be too many options and combinations. I could use some help here.</p>
| skyking | 265,767 | <p>The trick is to find a cover of $A_{j;k} = \{x: x_j = x_k\}$; then we use that $A = \bigcup A_{j;k}$ (so covers for each of the $A_{j;k}$ together cover $A$).</p>
<p>Now, to cover it with open cubes, the trick is that we don't have to use cubes of the same size. It should also be understood that we're allowed to use infinitely many cubes, as long as the infinite sum of their volumes converges (and can be made as small as desired).</p>
<p>Now to cover it we just select cubes that get smaller and smaller the farther from the origin we get, in such a way that their volumes sum to a finite amount (as small as we desire).</p>
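<p>To make "smaller and smaller" concrete, here is one explicit choice of constants (a sketch; many other choices work). Fix a pair $j\neq k$, write $H=A_{j;k}$, and let $\varepsilon>0$. For each $N\ge 1$, the bounded piece $H\cap[-N,N]^n$ sits inside an $(n-1)$-dimensional hyperplane, so it can be tiled by roughly $(2N/\delta_N)^{n-1}$ open cubes of side $\delta_N$, with total volume
$$\left(\frac{2N}{\delta_N}\right)^{n-1}\delta_N^{\,n}=(2N)^{n-1}\,\delta_N.$$
Choosing $\delta_N=\varepsilon\, 2^{-N}(2N)^{-(n-1)}$ makes the grand total over all $N$ at most
$$\sum_{N=1}^{\infty}(2N)^{n-1}\,\delta_N=\sum_{N=1}^{\infty}\varepsilon\, 2^{-N}=\varepsilon.$$</p>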
|
84,977 | <p>In how many ways 3 flags of colors black, purple & yellow can be arranged at the corners of an equilateral triangle?</p>
| Listing | 3,123 | <p>There are only finitely many possibilities... Try to find out the distinct cases:</p>
<p><img src="https://i.stack.imgur.com/pO6u3.jpg" alt="a"> <img src="https://i.stack.imgur.com/yXjQ6.jpg" alt="b"> <img src="https://i.stack.imgur.com/Wagot.jpg" alt="c"></p>
<p><img src="https://i.stack.imgur.com/AXXBP.jpg" alt="d"> <img src="https://i.stack.imgur.com/idoWA.jpg" alt="e"> <img src="https://i.stack.imgur.com/pn4Ze.jpg" alt="f"></p>
|
4,249,789 | <p>My understanding is that the span of a set is a set of all vectors that can be obtained from the linear combination of all the vectors in the original set as shown in image #<a href="https://i.stack.imgur.com/3v6g4.png" rel="nofollow noreferrer">1</a>.</p>
<p><img src="https://i.stack.imgur.com/3v6g4.png" alt="Image #1" /></p>
<p>What I do not understand is how the span (or the dimension of span) of a set consisted of rows of a matrix is related to its rank, as shown in image # <a href="https://i.stack.imgur.com/BaGDA.png" rel="nofollow noreferrer">2</a>.</p>
<p><img src="https://i.stack.imgur.com/BaGDA.png" alt="Image #2" /></p>
<p>I always thought that rank is related to the basis of a set of vectors, not its span.</p>
<p>Thanks</p>
| Mason | 752,243 | <p>The rank of a matrix <span class="math-container">$A \in M(m \times n, \mathbb{R})$</span> is defined as the dimension of its range, i.e. <span class="math-container">$\dim(R(A))$</span>. We will show that <span class="math-container">$\dim(R(A)) = \dim(R(A^T))$</span>.</p>
<p>Using the identity <span class="math-container">$Ax \cdot y = x \cdot A^T y$</span>, it is not difficult to show that <span class="math-container">$R(A^T)^{\perp} = N(A)$</span>. By taking the orthogonal complement of both sides we get <span class="math-container">$R(A^T) = N(A)^{\perp}$</span>. We have <span class="math-container">$\mathbb{R}^n = N(A) \oplus N(A)^{\perp}$</span> (this holds for any subspace <span class="math-container">$U$</span> of <span class="math-container">$\mathbb{R}^n$</span>, not just <span class="math-container">$N(A)$</span>). Therefore <span class="math-container">$\dim(N(A)^{\perp}) = n - \dim(N(A)) = \dim(R(A))$</span>, where the last equality is by the rank-nullity theorem. Alternatively, you can prove that <span class="math-container">$A \colon N(A)^{\perp} \to R(A)$</span> is bijective, and hence <span class="math-container">$N(A)^{\perp} \simeq R(A)$</span>, so <span class="math-container">$N(A)^{\perp}$</span> and <span class="math-container">$R(A)$</span> must have the same dimension.</p>
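<p>The conclusion, that row rank equals column rank, can also be sanity-checked numerically. A small NumPy sketch (the shapes and the seed are arbitrary choices of mine, not part of the proof):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
# Build a 5x8 matrix whose rank is (almost surely) exactly 3.
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 8))

rank_A = np.linalg.matrix_rank(A)     # dim R(A):   column rank
rank_At = np.linalg.matrix_rank(A.T)  # dim R(A^T): row rank
assert rank_A == rank_At == 3
```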
|
504,997 | <p>I have a dynamic equation,
$$ \frac{\dot{k}}{k} = s k^{\alpha - 1} + \delta + n$$
Where $\dot{k}/k$ is the capital growth rate as a function of savings $s$, capital $k$, capital depreciation rate $\delta$, and population growth rate $n$.</p>
<p>I have been asked to find the change in the growth rate as $k$ increases. This is of course
$$\frac{\partial \dot{k}/k}{\partial k} = (\alpha - 1) s k^{\alpha -2}$$
But what I want to find now is the change in growth rate as $k$ increases <em>proportionately</em>. This should be
$$\frac{\partial \dot{k}/k}{\partial \ln(k)} = ?$$
How do you calculate the partial derivative with respect to the logarithm of a variable? I'm sure the answer is simple, but my analytical calculus is pretty rusty.</p>
| Community | -1 | <p>Notation for partial derivatives is inherently awkward. e.g. "growth rate as $k$ increases" doesn't <em>actually</em> make sense: there's implicit context "... while holding $s$, $\alpha$, $n$, and $\delta$ constant".</p>
<p>Differentials become cleaner when notation gets confusing. We have</p>
<p>$$ d \frac{\dot{k}}{k} = (\alpha - 1) s k^{\alpha - 2} dk + k^{\alpha - 1 } ds + s \log k k^{\alpha-1} d\alpha + d\delta + dn$$</p>
<p>Of course, I'm being a bit gratuitious here: we've already decided $ds=d\alpha=d\delta=dn=0$, so I could have just written</p>
<p>$$ d \frac{\dot{k}}{k} = (\alpha - 1) s k^{\alpha - 2} dk $$</p>
<p>Additionally, we have</p>
<p>$$ d \log k = \frac{1}{k} dk $$</p>
<p>What I'm <strong><em>guessing</em></strong> is that you're expected to find the ratio of these two differentials given that we've held all the other variables constant. Since both are multiples of $dk$, and thus should be multiples of each other, it makes sense to write</p>
<p>$$ \frac{d \frac{\dot{k}}{k}}{d \log k} = \frac{(\alpha - 1) s k^{\alpha - 2}}{\frac{1}{k}}$$</p>
<p>when everything is defined. (e.g. if we restrict to $k>0$)</p>
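<p>The same computation can be checked symbolically. A small SymPy sketch (symbol names are mine; $\log$ is the natural logarithm), using the chain-rule identity $\partial f/\partial \ln k = k\,\partial f/\partial k$:</p>

```python
import sympy as sp

k, s, alpha, delta, n = sp.symbols('k s alpha delta n', positive=True)
growth = s*k**(alpha - 1) + delta + n   # the right-hand side as written in the question

# chain rule with u = log k:  d(growth)/d(log k) = k * d(growth)/dk
deriv_wrt_logk = sp.simplify(k * sp.diff(growth, k))
assert sp.simplify(deriv_wrt_logk - (alpha - 1)*s*k**(alpha - 1)) == 0
```

Note that the ratio of differentials simplifies to $(\alpha-1)\,s\,k^{\alpha-1}$.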
<hr>
<p>Warning: sometimes people mean something different entirely by similar notation. e.g. if I saw</p>
<p>$$ y = x + \log x $$</p>
<p>and I saw someone asking for</p>
<p>$$ \frac{\partial y}{\partial \log x} $$</p>
<p>I'd put decent odds that they were looking for the answer $1$ -- more precisely, they were expecting you to substitute $\log x = z$ and write</p>
<p>$$ y = x + z $$</p>
<p>and pretend $x$ and $z$ are independent, then take the partial derivative of $y$ with respect to $z$, holding $x$ constant. (and the substitute $z = \log x$ back into the result)</p>
|
2,851,609 | <p>I need to find the solution to the inequality $(x - y)(x + y -1) > z$, where $x,y,z \geq 0$ and $x,y,z \leq 1$. As $z$ is positive, then the inequality holds whenever (i) $x - y > 0$ and $x + y - 1 > 0$ OR (ii) $x - y < 0$ and $x + y - 1 < 0$. I can solve the cases (i) and (ii) on their own, but I don't know how to relate them to the value of $z$. Probably the solution is simple, but I am not an expert in math so any help would be appreciated. </p>
| Dr. Sonnhard Graubner | 175,066 | <p>Hint: it is $$x^2-x-y^2+y-z>0,$$ which you can now solve for $x$ or $y$.</p>
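<p>Carrying out the hint, one can treat the left side as a quadratic in $x$; its discriminant $1+4(y^2-y+z)=(2y-1)^2+4z$ is nonnegative since $z\ge 0$ (the rewriting into this form is mine):
$$x^2-x-(y^2-y+z)>0 \iff x>\frac{1+\sqrt{(2y-1)^2+4z}}{2}\ \text{ or }\ x<\frac{1-\sqrt{(2y-1)^2+4z}}{2}.$$</p>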
|
1,382,507 | <p>$$A= \left\{\frac{m}{n}+\frac{4n}{m}:m,n\in\mathbb{N}\right\}$$</p>
<hr>
<p>Since $m,n\in \mathbb{N}$, infimum is zero because $m,n$ both are increasing to infinity. Then the supremum is $5$ when $m,n$ are equal to $1$. </p>
<p>But I don't think my approach is right. Can someone give a hint or suggestion to get the right answer? Thanks. </p>
| wltrup | 232,040 | <p>$$\frac{\frac{1}{x+3} - \frac{1}{3}}{x} =
\frac{3 - (x+3)}{3x\,(x+3)} =
\frac{-x}{3x\,(x+3)} =
\frac{-1}{3(x+3)}
$$</p>
<p>And you can continue from there.</p>
|
1,382,507 | <p>$$A= \left\{\frac{m}{n}+\frac{4n}{m}:m,n\in\mathbb{N}\right\}$$</p>
<hr>
<p>Since $m,n\in \mathbb{N}$, infimum is zero because $m,n$ both are increasing to infinity. Then the supremum is $5$ when $m,n$ are equal to $1$. </p>
<p>But I don't think my approach is right. Can someone give a hint or suggestion to get the right answer? Thanks. </p>
| Mark Viola | 218,419 | <p>The solution presented by @wltrup is solid and efficient. I thought that it would be instructive to present another way forward. </p>
<p>Here, straightforward application of L'Hospital's Rule reveals</p>
<p>$$\begin{align}
\lim_{x\to 0}\frac{\frac{1}{x+3}-\frac13}{x}&=\lim_{x\to 0}\left(-\frac{1}{(x+3)^2}\right)\\\\
&=-\frac19
\end{align}$$</p>
<p>as expected!</p>
|
3,002,767 | <p><span class="math-container">$l^p$</span> appears frequently in undergrad real analysis courses, I wonder if there is any strong connection between <span class="math-container">$l^p$</span> and <span class="math-container">$L^p$</span> space? (Other than they look similar)</p>
<p>I give one definition of <span class="math-container">$l^p$</span> I've seen:</p>
<p><span class="math-container">$\|s\|_p=\begin{cases}
\sup_{n\in\mathbb{N}}\left(\sum_{i=1}^{n}|s_i|^p\right)^{1/p}, &\text{if } 1\leq p<\infty\\
\sup_{n\in\mathbb{N}}|s_n|, &\text{if } p=\infty
\end{cases}$</span> </p>
<p><span class="math-container">$l^p$</span> denotes the space of real sequences s with <span class="math-container">$||s||_p< \infty$</span></p>
| Paul | 396,004 | <p><span class="math-container">$L^p$</span> and <span class="math-container">$\ell^p$</span> spaces both come from the same definition in measure theory. Given a measure space <span class="math-container">$(\Omega,\Sigma,\mu)$</span> you can define <span class="math-container">$L^p(\Omega,\Sigma,\mu)$</span> as the set of all <span class="math-container">$\mu$</span>-measurable functions defined on <span class="math-container">$\Omega$</span> such that
<span class="math-container">$$
\int_\Omega|f|^pd\mu<+\infty.
$$</span>
<span class="math-container">$L^p$</span> is simply <span class="math-container">$L^p(\mathbb{R},\mathcal{B},Leb)$</span>, where <span class="math-container">$\mathcal{B}$</span> is the Borel <span class="math-container">$\sigma$</span>-algebra and <span class="math-container">$Leb$</span> is the Lebesgue measure. On the other hand, <span class="math-container">$\ell^p=L^p(\mathbb{N},\mathcal{P}(\mathbb{N}),c)$</span> where <span class="math-container">$\mathcal{P}(\mathbb{N})$</span> is the <span class="math-container">$\sigma$</span>-algebra of all subsets of <span class="math-container">$\mathbb{N}$</span> and <span class="math-container">$c$</span> is the counting measure.</p>
|
289,367 | <p>Given a positive definite matrix $Q\in\mathbb{R}^{n \times n}$, I want to find a diagonal matrix $D$ such that $rank(Q-D) \leq k < n$.</p>
<p>I think this can be regarded as a generalization of the eigenvalue problem, which is basically the problem of finding a diagonal matrix $\lambda I$ such that $rank(Q-\lambda I) < n$.</p>
<p>Is there any theory about this problem?</p>
| Alexandre Eremenko | 25,510 | <p>Your problem can be restated as follows: to a given symmetric matrix, can
you add a diagonal matrix so that the result has eigenvalue $0$ with high
multiplicity?</p>
<p>This belongs to the theory which is called Additive Inverse Eigenvalue Problems. See, for example this paper, which seems to treat a very similar problem:</p>
<p>D. Paul Phillips, Some partial inverse eigenvalue problems: recovering diagonal entries of symmetric matrices, Linear Algebra and its Applications, Volume 380, pp. 263–270.</p>
<p>however the exact statement you ask does not follow from this result,
and I suppose that your problem is unsolved.</p>
<p>Here is a survey of such problems:</p>
<p>Moody T. Chu, Inverse eigenvalue problems, SIAM Rev.
Vol. 40, No. 1, pp. 1–39.</p>
|
2,607,668 | <p>I am trying to prove/disprove $\operatorname{Arg}(zw)=\operatorname{Arg}(z)+\operatorname{Arg}(w)$. Apparently $\operatorname{Arg}(zw)=\operatorname{Arg}(z)+\operatorname{Arg}(w)+2k\pi$ where $k=0,1,\text{ or }-1$, but I have no idea why. I keep on finding that answer online. I am very lost on how to prove this statement, any help would be great. Thank you. </p>
| Jan Eerland | 226,665 | <p>Let's say we have two complex numbers $\text{z}_1$ and $\text{z}_2$ we can write:</p>
<ul>
<li>$$\text{z}_1=\left|\text{z}_1\right|\cdot\exp\left(\left(\arg\left(\text{z}_1\right)+2\pi\cdot\text{k}_1\right)\cdot i\right)\tag1$$</li>
</ul>
<p>Where $0\le\arg\left(\text{z}_1\right)<2\pi$ and $\text{k}_1\in\mathbb{Z}$</p>
<ul>
<li>$$\text{z}_2=\left|\text{z}_2\right|\cdot\exp\left(\left(\arg\left(\text{z}_2\right)+2\pi\cdot\text{k}_2\right)\cdot i\right)\tag2$$</li>
</ul>
<p>Where $0\le\arg\left(\text{z}_2\right)<2\pi$ and $\text{k}_2\in\mathbb{Z}$</p>
<p>So, we get:</p>
<p>$$\text{z}_1\cdot\text{z}_2=\left|\text{z}_1\right|\cdot\exp\left(\left(\arg\left(\text{z}_1\right)+2\pi\cdot\text{k}_1\right)\cdot i\right)\cdot\left|\text{z}_2\right|\cdot\exp\left(\left(\arg\left(\text{z}_2\right)+2\pi\cdot\text{k}_2\right)\cdot i\right)=$$
$$\left|\text{z}_1\right|\cdot\left|\text{z}_2\right|\cdot\exp\left(\left(\arg\left(\text{z}_1\right)+2\pi\cdot\text{k}_1\right)\cdot i+\left(\arg\left(\text{z}_2\right)+2\pi\cdot\text{k}_2\right)\cdot i\right)=$$
$$\left|\text{z}_1\right|\cdot\left|\text{z}_2\right|\cdot\exp\left(\left(\arg\left(\text{z}_1\right)+\arg\left(\text{z}_2\right)+2\pi\cdot\left(\text{k}_1+\text{k}_2\right)\right)\cdot i\right)\tag3$$</p>
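<p>A quick numerical illustration of the same phenomenon (Python's <code>cmath.phase</code> uses the principal argument in $(-\pi,\pi]$, a different convention from $[0,2\pi)$ above, but the correction term is still a multiple of $2\pi$ with $k\in\{-1,0,1\}$):</p>

```python
import cmath
import math

def arg(z):
    return cmath.phase(z)          # principal argument, in (-pi, pi]

pairs = [(1 + 1j, 1 + 1j),         # no wrap-around:  k = 0
         (-1 + 0.1j, -1 + 0.1j),   # wraps past pi:   k = -1
         (-1 - 0.1j, -1 - 0.1j)]   # wraps past -pi:  k = +1

for z, w in pairs:
    k = round((arg(z * w) - arg(z) - arg(w)) / (2 * math.pi))
    assert k in (-1, 0, 1)
    # the difference is always an exact multiple of 2*pi
    assert math.isclose(arg(z * w), arg(z) + arg(w) + 2 * math.pi * k)
```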
|
2,607,668 | <p>I am trying to prove/disprove $\operatorname{Arg}(zw)=\operatorname{Arg}(z)+\operatorname{Arg}(w)$. Apparently $\operatorname{Arg}(zw)=\operatorname{Arg}(z)+\operatorname{Arg}(w)+2k\pi$ where $k=0,1,\text{ or }-1$, but I have no idea why. I keep on finding that answer online. I am very lost on how to prove this statement, any help would be great. Thank you. </p>
| user | 505,767 | <p>It can be shown in many ways. The simplest is to consider the exponential form of complex numbers $$z=\rho e^{i\theta}$$</p>
<p>with $|z|=\rho$ and $Arg(z)=\theta$</p>
|
1,157,877 | <p>We have $n$ bags of sand, with volumes $$v_1,\dots,v_n, \qquad \forall i: \space 0 < v_i < 1,$$ not necessarily sorted. We want to place all the bags into boxes of volume 1. We propose one algorithm:</p>
<blockquote>
<p>Consider the bags in their original order. Select a box and place
bags $1, 2, 3,\dots$ into it for as long as they fit. When the
$i^{th}$ bag does not fit into the current box, we choose a new box and
continue placing bags $i, i+1,\dots$ into it for as long as they fit, and so on.</p>
</blockquote>
<p>Let $X$ be the number of boxes used by this algorithm, and let $Y$ be the number of boxes used by an optimal (minimum) packing. Why is the claim that always $X > Y$ false, while $X < 2Y$ always holds?</p>
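<p>The procedure described is essentially the classical <em>next-fit</em> heuristic. A Python sketch (function names and the test instances are my own) that compares it against a brute-force optimum on a tiny input:</p>

```python
def next_fit(volumes):
    """The boxing procedure from the question: fill the current box in order;
    when a bag does not fit, move on to a fresh box."""
    boxes, space = 0, 0.0
    for v in volumes:
        if v > space:          # current box (if any) cannot take this bag
            boxes += 1
            space = 1.0
        space -= v
    return boxes

def optimal(volumes):
    """Brute-force minimum number of boxes (only feasible for tiny inputs)."""
    best = [len(volumes)]
    def rec(i, loads):
        if len(loads) >= best[0]:
            return
        if i == len(volumes):
            best[0] = len(loads)
            return
        v = volumes[i]
        for j in range(len(loads)):
            if loads[j] + v <= 1.0 + 1e-9:
                loads[j] += v
                rec(i + 1, loads)
                loads[j] -= v
        loads.append(v)        # also try opening a new box
        rec(i + 1, loads)
        loads.pop()
    rec(0, [])
    return best[0]

bags = [0.6, 0.5, 0.6, 0.5, 0.6, 0.5]
X, Y = next_fit(bags), optimal(bags)
assert X == 6 and Y == 5                          # X > Y can happen ...
assert Y <= X < 2 * Y                             # ... but X < 2Y still holds
assert next_fit([0.9]) == optimal([0.9]) == 1     # X = Y is possible, so "always X > Y" fails
```

Intuitively, the contents of any two consecutive boxes in such a packing total more than $1$, which is where the factor $2$ comes from.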
| user121049 | 121,049 | <p>Write $g(x)=\int_0^x f(\alpha)\,d\alpha$.
The equation becomes something like $\frac{d(xg)}{dx}=-\frac{d \ln(1-g)}{dx}$. This gives you an equation for $g$ which is unfortunately not solvable in closed form.</p>
|
3,957,972 | <p>As part of a longer problem, I need to find the number of elements of the multiplicative group of <span class="math-container">$\mathbb{Z}_2[x] / (x^3 + x^2 + 1)$</span>.</p>
<p>I have no idea where to start with this. I understand that these polynomials have coefficients in the finite field <span class="math-container">$\mathbb{Z_2}$</span>, so they can take on forms like <span class="math-container">$x^3 + x^2 + 1$</span> and <span class="math-container">$x^3 - x^2 - 1$</span>, but I am not sure where to go from here....</p>
<p>Any help is appreciated!!</p>
| Lubin | 17,760 | <p>I suspect that you have all the ingredients already, maybe have already put them together to get your desired degree of understanding. But let me lay it all out:</p>
<p>Any time you have a field <span class="math-container">$k$</span> and an irreducible <span class="math-container">$k$</span>-polynomial <span class="math-container">$f(X)$</span>, then the factor ring <span class="math-container">$k[X]/\bigl(f(X)\bigr)$</span> is also a field that contains <span class="math-container">$k$</span> in a natural way, via (for <span class="math-container">$a\in k$</span>) <span class="math-container">$a\mapsto$</span> the class of <span class="math-container">$a$</span>, modulo the ideal <span class="math-container">$(f)$</span>. So the factor ring can be considered a finite extension of <span class="math-container">$k$</span>, of degree equal to the degree of <span class="math-container">$f$</span>.</p>
<p>In your particular case, I’m sure that you’ve seen that <span class="math-container">$X^3+X^2+1$</span> is <span class="math-container">$\Bbb Z_2$</span>-irreducible. The elements of the factor ring can all be written <span class="math-container">$a+bX+cX^2$</span> for <span class="math-container">$a,b,c\in\Bbb Z_2$</span>, since anything of degree three or higher can be made congruent to something of degree <span class="math-container">$\le2$</span> by application of Euclidean division.</p>
<p>Thus our extension field is of order eight, and its multiplicative group is of order seven. Since <span class="math-container">$7$</span> is a prime, any element of our new field different from <span class="math-container">$0$</span> and <span class="math-container">$1$</span> will generate the full group.</p>
|
3,957,972 | <p>As part of a longer problem, I need to find the number of elements of the multiplicative group of <span class="math-container">$\mathbb{Z}_2[x] / (x^3 + x^2 + 1)$</span>.</p>
<p>I have no idea where to start with this. I understand that these polynomials have coefficients in the finite field <span class="math-container">$\mathbb{Z_2}$</span>, so they can take on forms like <span class="math-container">$x^3 + x^2 + 1$</span> and <span class="math-container">$x^3 - x^2 - 1$</span>, but I am not sure where to go from here....</p>
<p>Any help is appreciated!!</p>
| Community | -1 | <p>The polynomial <span class="math-container">$x^3+x^2+1$</span> is irreducible, because neither <span class="math-container">$0$</span> nor <span class="math-container">$1$</span> is a root <span class="math-container">$\bmod2$</span>. So we get a field, an extension of <span class="math-container">$\Bbb Z_2$</span>.</p>
<p>The dimension of the <span class="math-container">$\Bbb Z_2$</span> vector space is <span class="math-container">$3$</span>, since the possible remainders upon division by <span class="math-container">$x^3+x^2+1$</span> are polynomials of degree <span class="math-container">$2$</span> or less.</p>
<p>Thus the order of the field is <span class="math-container">$2^3=8$</span>; its multiplicative group has order <span class="math-container">$7$</span>.</p>
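<p>This can be verified by brute force. A Python sketch (the bitmask encoding $c_2c_1c_0$ for $c_2x^2+c_1x+c_0$ over $\Bbb Z_2$ is my own representation choice):</p>

```python
MOD = 0b1101  # x^3 + x^2 + 1

def gf8_mul(a, b):
    """Multiply two polynomials over Z_2 (carry-less), then reduce mod x^3 + x^2 + 1."""
    prod = 0
    while b:
        if b & 1:
            prod ^= a
        a <<= 1
        b >>= 1
    while prod.bit_length() > 3:                  # degree >= 3: subtract a shift of MOD
        prod ^= MOD << (prod.bit_length() - 4)
    return prod

units = list(range(1, 8))                         # the 7 nonzero residues
# closure: the product of two nonzero elements is again nonzero
assert all(gf8_mul(a, b) in units for a in units for b in units)

# x (bitmask 0b010) generates the whole multiplicative group of order 7
powers, p = [], 0b010
while p not in powers:
    powers.append(p)
    p = gf8_mul(p, 0b010)
assert sorted(powers) == units and len(powers) == 7
```

The loop confirms that the seven nonzero residues are closed under multiplication and that the class of $x$ already has order $7$, i.e. generates the whole multiplicative group.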
|
437,053 | <p>I'm struggling with this nonhomogeneous second order differential equation</p>
<p><span class="math-container">$$y'' - 2y = 2\tan^3x$$</span></p>
<p>I assumed that the form of the solution would be <span class="math-container">$A\tan^3x$</span> where A was some constant, but this results in a mess when solving. The back of the book reports that the solution is simply <span class="math-container">$y(x) = \tan x$</span>.</p>
<p>Can someone explain why they chose the form <span class="math-container">$A\tan x$</span> instead of <span class="math-container">$A\tan^3x$</span>?</p>
<p>Thanks in advance.</p>
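<p>One way to see why $A\tan x$ (rather than $A\tan^3 x$) is the right guess: differentiating $y=\tan x$ already produces the cube, since $y'=\sec^2x=1+\tan^2x$ gives $y''=2\tan x\,(1+\tan^2x)=2\tan x+2\tan^3x$, so $y''-2y=2\tan^3x$ exactly. A SymPy check of this computation (a verification sketch I added, not part of the book's solution):</p>

```python
import sympy as sp

x = sp.symbols('x')
y = sp.tan(x)
# residual of y'' - 2y - 2 tan^3 x; it should vanish identically
residual = sp.expand(sp.diff(y, x, 2) - 2*y - 2*sp.tan(x)**3)
assert residual == 0   # y = tan(x) solves y'' - 2y = 2 tan^3 x
```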
| Sean Boyd | 85,168 | <p>Both $a$ and $a^3$ generate the cyclic subgroup of order 5 to which they both belong. Try writing $a$ in terms of $a^3$; i.e., if $b=a^3$, express $a$ in terms of $b$. We can do this because the powers of $a$ in the problem statement are relatively prime to $5$ (and thus generate the cyclic subgroup $\langle a \rangle$).</p>
|
437,053 | <p>I'm struggling with this nonhomogeneous second order differential equation</p>
<p><span class="math-container">$$y'' - 2y = 2\tan^3x$$</span></p>
<p>I assumed that the form of the solution would be <span class="math-container">$A\tan^3x$</span> where A was some constant, but this results in a mess when solving. The back of the book reports that the solution is simply <span class="math-container">$y(x) = \tan x$</span>.</p>
<p>Can someone explain why they chose the form <span class="math-container">$A\tan x$</span> instead of <span class="math-container">$A\tan^3x$</span>?</p>
<p>Thanks in advance.</p>
| Lucas Willhelm | 761,431 | <p>I will complete the proof by proving two separate statements. The first is <span class="math-container">$C(a)\subseteq C(a^3)$</span>; naturally, the second statement is <span class="math-container">$C(a^3) \subseteq C(a)$</span>. First, suppose some <span class="math-container">$x \in C(a)$</span>. This implies <span class="math-container">$ax=xa$</span>. Acting on the right by <span class="math-container">$a^2$</span> gives <span class="math-container">$a^3x=a(ax)a=axa^2$</span>. Associating again, <span class="math-container">$(ax)a^2=xa^3$</span>. Hence, <span class="math-container">$a^3x=xa^3$</span>. As a result <span class="math-container">$x\in C(a^3)$</span>. Now, allow <span class="math-container">$x\in C(a^3)$</span>. This implies that <span class="math-container">$a^3x=xa^3.$</span> Acting on the left by <span class="math-container">$a^3$</span> gives <span class="math-container">$a^6x=a^3(xa^3)=(a^3x)a^3=xa^6$</span>. Since <span class="math-container">$a$</span> has order <span class="math-container">$5$</span>, we have <span class="math-container">$a^6=a$</span>, so <span class="math-container">$ax=xa$</span>. Therefore <span class="math-container">$x\in C(a)$</span>. <span class="math-container">$\square$</span> </p>
|
3,391,118 | <p>Let <span class="math-container">$x , y$</span> be real numbers with <span class="math-container">$-3\leq x \leq5$</span> and <span class="math-container">$-2\leq y\leq -1$</span>. I ask whether the range of <span class="math-container">$x-y$</span> is <span class="math-container">$[-1,6]$</span> or <span class="math-container">$[-1,1]$</span>, obtained by using the two cases of negative <span class="math-container">$x$</span> and positive <span class="math-container">$x$</span> and taking the sup, which is <span class="math-container">$1$</span>?</p>
| JMP | 210,189 | <p>You should end up with:</p>
<p><span class="math-container">$$-2\le x-y \le 7$$</span></p>
<p>When you negate the <span class="math-container">$y$</span> inequality, you have:</p>
<p><span class="math-container">$$2\le -y \le 1$$</span></p>
<p>which is obviously wrong, you should have:</p>
<p><span class="math-container">$$2 \ge -y \ge 1$$</span></p>
<p>and reversing gives:</p>
<p><span class="math-container">$$1 \le -y \le 2$$</span></p>
<p>Add this to the <span class="math-container">$x$</span> inequality and you get the correct answer.</p>
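<p>A quick numerical confirmation of the resulting range $[-2,7]$ (the grid resolution below is an arbitrary choice):</p>

```python
# sample x in [-3, 5] and y in [-2, -1] on a grid and track x - y
steps = 400
xs = [-3 + 8 * i / steps for i in range(steps + 1)]   # x in [-3, 5]
ys = [-2 + j / steps for j in range(steps + 1)]       # y in [-2, -1]
diffs = [x - y for x in xs for y in ys]
assert min(diffs) == -2.0   # attained at x = -3, y = -1
assert max(diffs) == 7.0    # attained at x =  5, y = -2
```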
|
3,030,332 | <p>I have started complex analysis and I am stuck on one definition, the 'extended complex plane'. The book says: 'To visualize the point at infinity, think of the complex plane passing through the equator of a unit sphere centred at 0. To each point z in the plane there corresponds exactly one point P on the surface of the sphere, obtained by intersecting the sphere with the line joining the point z to the north pole N of the sphere.' Now my questions:</p>
<ol>
<li>Does the plane passing through the equator of the unit sphere mean our normal $xy$ plane, where the $z$ coordinate is 0?</li>
<li>If so, shouldn't all the points of our complex plane inside the unit circle then get mapped to the north pole N?</li>
</ol>
<p>What is going wrong with my understanding?</p>
| user | 505,767 | <p>The points inside the unit circle are projected onto the emisphere under the plane (that is <span class="math-container">$Z<0$</span>):</p>
<p><a href="https://i.stack.imgur.com/SC6vO.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SC6vO.jpg" alt="enter image description here"></a></p>
<p>(credits <a href="https://math.stackexchange.com/q/190270/505767">Radius of the spherical image of a circle</a>)</p>
<p>Refer also to <a href="https://en.wikipedia.org/wiki/Riemann_sphere" rel="nofollow noreferrer"><strong>Riemann sphere</strong></a>.</p>
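<p>For reference, with the unit sphere $X^2+Y^2+Z^2=1$ and projection from the north pole $N=(0,0,1)$, the point corresponding to $z=x+iy$ is $\left(\frac{2x}{1+|z|^2},\ \frac{2y}{1+|z|^2},\ \frac{|z|^2-1}{1+|z|^2}\right)$. A small Python check (sample points are arbitrary choices) that points inside the unit circle land on the lower hemisphere, not at the north pole:</p>

```python
def to_sphere(z):
    """Inverse stereographic projection from the north pole (0, 0, 1)."""
    x, y = z.real, z.imag
    d = 1 + abs(z)**2
    return (2*x/d, 2*y/d, (abs(z)**2 - 1)/d)

inside = [0.3 + 0.4j, -0.5j, 0.1 + 0j]
outside = [3 + 4j, -2j, 1.5 + 0j]
for z in inside:
    X, Y, Z = to_sphere(z)
    assert abs(X*X + Y*Y + Z*Z - 1) < 1e-12 and Z < 0   # lower hemisphere
for z in outside:
    X, Y, Z = to_sphere(z)
    assert abs(X*X + Y*Y + Z*Z - 1) < 1e-12 and Z > 0   # upper hemisphere
```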
|
231,403 | <p>Let $1 \le p < \infty$. For all $\epsilon > 0$, does there exist $C = C(\epsilon, p)$ such that$$\|u\|_{L^p(0, 1)} \le \epsilon \|u'\|_{L^1(0, 1)} + C\|u\|_{L^1(0, 1)} \text{ for all }u \in W^{1, 1}(0, 1)?$$</p>
| Romain Gicquaud | 24,271 | <p>Yes, this is the Sobolev injection $W^{1, 1}(]0, 1[) \to C^0([0, 1])$ (see e.g. Brezis' book on functional analysis)
$$
\|u\|_{L^\infty} \leq C \left( \|u'\|_{L^1} + \|u\|_{L^1}\right)
$$</p>
<p>followed by the $\epsilon$-Young inequality:</p>
<p>\begin{align*}
\|u\|_{L^p} & \leq \left(\|u\|_{L^1} \|u\|_{L^\infty}^{p-1}\right)^{1/p}\\
& \leq \frac{\|u\|_{L^1}}{p\, \epsilon^p} + \frac{p-1}{p}\, \epsilon^{p/(p-1)} \|u\|_{L^\infty} \qquad (p>1;\ \text{the case } p=1 \text{ is trivial}).
\end{align*}</p>
<p>There is just a slight adjustment to make in the definition of $\epsilon$ if you want to get exactly the inequality you want.</p>
|
2,234,478 | <p>Given 3 points, A (x1, y1), B (x2, y2) and C (x3, y3), what is the best way to tell if all 3 points lie within a circle of a given radius <strong>r</strong>?</p>
<p>The best I could come up with was to find the Fermat point F (x4, y4) for the triangle ABC, and then check if the distance from F to each A, B and C is less than <strong>r</strong>. Is there a way to do this more efficiently?</p>
| Claude Leibovici | 82,404 | <p>Starting from egreg's solution, consider that you look for the zero of the function $$f(t)=2^t\log t+\log 2$$ $$f'(t)=\frac{2^t}{t}+2^t \log (2) \log (t)$$ and, starting from a "reasonable" guess $t_0$, Newton's method will update it according to $$t_{n+1}=t_n-\frac{f(t_n)}{f'(t_n)}$$ Starting with $t_0=1$, the successive iterates would then be
$$\left(
\begin{array}{cc}
n & t_n \\
0 & 1.000000000 \\
1 & 0.6534264097 \\
2 & 0.6411588810 \\
3 & 0.6411857444 \\
4 & 0.6411857445
\end{array}
\right)$$
Starting with $t_0=\frac 12$, the successive iterates would then be
$$\left(
\begin{array}{cc}
n & t_n \\
0 & 0.5000000000 \\
1 & 0.6336043641 \\
2 & 0.6411741260 \\
3 & 0.6411857445
\end{array}
\right)$$</p>
|
446,948 | <blockquote>
<p>Suppose that <span class="math-container">$X$</span> is a geometric random variable with parameter (probability of success) <span class="math-container">$p$</span>.</p>
<p>Show that <span class="math-container">$\Pr(X > a+b \mid X>a) = \Pr(X>b)$</span></p>
</blockquote>
<p>First I thought I'd start by calculating <span class="math-container">$\Pr(X>n)$</span> where <span class="math-container">$n=a+b$</span>:</p>
<p><span class="math-container">$$\Pr(X > n) = p_{n+1} + p_{n+2} + \cdots = ?\tag{1}$$</span></p>
<p>But I don't know how to determine the limit of equation (1). I know for an infinite geometric series starting at index zero:</p>
<p><span class="math-container">$$\sum\limits_{n=0}^\infty ax^n=\cfrac{a}{1-x}\text{ for }|x|<1$$</span></p>
<p>But I don't know what to do when the index starts at <span class="math-container">$n$</span>.</p>
<p>Next I thought I'd do:</p>
<p><span class="math-container">$$\Pr(X > a+b \mid X > a) = \cfrac{ \Pr[ (X > a+b) \cap (X > a)] }{\Pr(X > a)}$$</span>
<span class="math-container">$$=\cfrac{\Pr(X > a+b)}{\Pr(X > a)}$$</span></p>
<p>and substitute my result from equation (1). Any help appreciated in advance. Thank you.</p>
| Michael Hardy | 11,667 | <p>Here's how to handle an infinite geometric series when the index starts at $n$ instead of $0$:
\begin{align}
\sum_{k=n}^\infty ax^k & = ax^n + ax^{n+1} + ax^{n+2} + ax^{n+3}+\cdots \\[10pt]
& = (ax^n) + (ax^n)x +(ax^n)x^2 + (ax^n)x^3+\cdots \\[10pt]
& = b + bx + bx^2 + bx^3 + \cdots
\end{align}</p>
<p>Now it starts at index $0$. And of course $b$ is $ax^n$.</p>
<p>It looks as if you can do the rest.</p>
<p><b>Second method:</b> You mentioned that the probability of "success" is $p$. That means the probability of success on each trial is $p$.</p>
<p>If $X$ is defined as the number of trials needed to get one success, then the event $X>n$ is the same as the event of failure on all of the first $n$ trials, so that probability of that is $(1-p)^n$.</p>
<p>If $X$ is defined as the number of trials needed to get one failure, then the event $X>n$ is the same as the event of success on all of the first $n$ trials, so the probability of that is $p^n$.</p>
|
2,783,930 | <p>Here's what the authors of a textbook that I've been following argue:</p>
<blockquote>
<p>Let $x,y\in \mathbb{R}$. Assume that $x \le y$ and $y ≤ x$. We claim
that $x = y$. If false, then either, $x < y$ or $y < x$ by law of
<em>trichotomy</em>. Assume that we have $x < y$. Since $y ≤ x$, either $x = y$ or $y < x$. Neither can be true, since we assumed $x \not= y$ and
hence concluded $x < y$ from the first inequality $x ≤ y$. Hence we
conclude that the second inequality cannot be true, a contradiction.
Thus our assumption that $x \not= y$ is not tenable.</p>
</blockquote>
<p>My argument is as follows: We assume that $x \le y$ and $y ≤ x$. Suppose $x\not=y$. Then $x \le y$ and $y ≤ x$ reduces to $x < y$ and $y < x$. But both inequalities $x < y$ and $y < x$ cannot hold together at the same time due to <em>trichotomy</em>. Thus we achieved a contradiction to our hypothesis. Is this argument correct?</p>
<p>I need clarification on the proof given by the authors. They say "Assume that we have $x < y$" and on another line they say "hence concluded $x < y$ from the first inequality $x ≤ y$". Did they deduce it or assume it? </p>
| Bram28 | 256,001 | <blockquote>
<p>I need clarification on the proof given by the authors. They say "Assume that we have $x < y$" and on another line they say "hence concluded $x < y$ from the first inequality $x ≤ y$". Did they deduce it or assume it? </p>
</blockquote>
<p>They <em>deduce</em> it from $x \le y$ and the assumption $x \not = y$</p>
<p>But yes, their proof is quite confusing. I've read it over about $4$ times now, and tried to sketch their lines of reasoning, and concluded that the part </p>
<blockquote>
<p>Assume that we have $x <y$' </p>
</blockquote>
<p>is better taken out (indeed, the presence of this very sentence seems to be exactly what confused you as well!). </p>
<p>That is, it seems like they used this line to set up a proof by cases (see line 5 in outline below), where they show that both $x<y$ and $y<x$ would lead to a contradiction ... but then changed strategy in the middle of the proof, and ended up showing that since you can <em>conclude</em> (rather than <em>assume</em>) $x < y$ from the first inequality $x \le y$ together with the assumption $x \not = y$, you end up contradicting the second inequality.</p>
<p>To be rather technical and formal ... I think that in the middle of the proof they switched strategies from:</p>
<p>$1. x \le y$ (Premise)</p>
<p>$2. y \le x$ (Premise)</p>
<p>$3. x \not = y$ (Assumption)</p>
<p>$4. x < y \lor y < x$ (by 3 and Trichotomy)</p>
<p>$5. x < y$ (Assumption)</p>
<p>$6. \bot$ (between 2 and 5)</p>
<p>$7. y < x$ (Assumption)</p>
<p>$8. \bot$ (between 1 and 7)</p>
<p>$9. \bot$ (between 4, 5-6, and 7-8)</p>
<p>$10. x = y$ (3-9)</p>
<p>to something like:</p>
<p>$1. x \le y$ (Premise)</p>
<p>$2. y \le x$ (Premise)</p>
<p>$3. x \not = y$ (Assumption)</p>
<p>$4. x < y \ xor \ y < x$ (3 and trichotomy)</p>
<p>$5. y < x \lor x = y$ (from 2)</p>
<p>$6. x < y$ (from 1 and 3)</p>
<p>$7. y \not < x$ (from 4,6)</p>
<p>$8. \neg (y < x \ xor \ x = y)$ (from 3 and 7)</p>
<p>$9. \neg y \le x$ (from 8, 4, and fact that 4 follows from 2)</p>
<p>$10. \bot$ (2 and 9)</p>
<p>$11. x = y$ (3-10)</p>
<p>But even that is not very clean (note the weird thing that happens on line 9 ...) . Indeed, even if we take out the unused line, we obtain:</p>
<blockquote>
<p>Let $x,y\in \mathbb{R}$. Assume that $x \le y$ and $y ≤ x$. We claim
that $x = y$. If false, then either, $x < y$ or $y < x$ by law of
<em>trichotomy</em>. Since $y ≤ x$, either $x = y$ or $y < x$. Neither can be true, since we assumed $x \not= y$ and
hence concluded $x < y$ from the first inequality $x ≤ y$. Hence we
conclude that the second inequality cannot be true, a contradiction.
Thus our assumption that $x \not= y$ is not tenable.</p>
</blockquote>
<p>... which follows the second outline ... but is still not very readable.</p>
<p>... <em>NOT</em> a great example of how to write proofs!</p>
<p>I think your proof is far more clear! Good job, and good for you for noticing that something very funky was happening in the provided proof!</p>
|
2,783,930 | <p>Here's what the authors of a textbook that I've been following argue:</p>
<blockquote>
<p>Let $x,y\in \mathbb{R}$. Assume that $x \le y$ and $y ≤ x$. We claim
that $x = y$. If false, then either, $x < y$ or $y < x$ by law of
<em>trichotomy</em>. Assume that we have $x < y$. Since $y ≤ x$, either $x = y$ or $y < x$. Neither can be true, since we assumed $x \not= y$ and
hence concluded $x < y$ from the first inequality $x ≤ y$. Hence we
conclude that the second inequality cannot be true, a contradiction.
Thus our assumption that $x \not= y$ is not tenable.</p>
</blockquote>
<p>My argument is as follows: We assume that $x \le y$ and $y ≤ x$. Suppose $x\not=y$. Then $x \le y$ and $y ≤ x$ reduces to $x < y$ and $y < x$. But both inequalities $x < y$ and $y < x$ cannot hold together at the same time due to <em>trichotomy</em>. Thus we achieved a contradiction to our hypothesis. Is this argument correct?</p>
<p>I need a clarification on proof done by the authors. They say "Assume that we have $x < y$" and on another line they say "hence concluded $x < y$ from the first inequality $x ≤ y$". Did they deduce it or assume it? </p>
| farruhota | 425,072 | <blockquote>
<p>They say "Assume that we have $x<y$"</p>
</blockquote>
<p>You are taking it out of context. The full statement is:</p>
<blockquote>
<p>We claim that $x=y$. If false, then either, $x<y$ or $y<x$ by law of trichotomy. Assume that we have $x<y$.</p>
</blockquote>
<p>"If false, then..." implies "If $x\ne y$, then either, $x<y$ or $y<x$ by law of trichotomy. Assume that we have $x<y$." </p>
<p>This statement in the book corresponds to your statement:
"Suppose $x≠y$. Then $x≤y$ and $y≤x$ reduces to $x<y$ and $y<x$. But both inequalities $x<y$ and $y<x$ cannot hold together at the same time due to trichotomy."</p>
<p>The law of trichotomy: "Every real number is negative, $0$, or positive." So the book is using "OR" for each case: $x<y$ or $x>y$ separately, while you are using "OR" indirectly, that is by negating "AND".</p>
<blockquote>
<p>Did they deduce it or assume it?</p>
</blockquote>
<p>They deduced it from the assumption $x\ne y$. See below with my comments inside brackets:</p>
<p>We claim that $x=y$. If false ($\color{blue}{x\ne y}$), then either, $x<y$ or $y<x$ by law of trichotomy. Assume that we have $x<y$. Since $y≤x$ (original statement: $\color{green}{x\le y}$ $\color{magenta}{\text{and}}$ $\color{red}{y\le x}$), either $x=y$ or $y<x$. Neither can be true, since we assumed $\color{blue}{x≠y}$ and hence concluded $x<y$ from the first inequality $x\le y$ ($\color{green}{x≤y}$). Hence we conclude that the second inequality ($\color{red}{y\le x}$) cannot be true (because of $\color{magenta}{\text{and}}$), a contradiction. Thus our assumption that $x≠y$ is not tenable.</p>
|
2,783,930 | <p>Here's what the authors of a textbook that I've been following argue:</p>
<blockquote>
<p>Let $x,y\in \mathbb{R}$. Assume that $x \le y$ and $y ≤ x$. We claim
that $x = y$. If false, then either, $x < y$ or $y < x$ by law of
<em>trichotomy</em>. Assume that we have $x < y$. Since $y ≤ x$, either $x = y$ or $y < x$. Neither can be true, since we assumed $x \not= y$ and
hence concluded $x < y$ from the first inequality $x ≤ y$. Hence we
conclude that the second inequality cannot be true, a contradiction.
Thus our assumption that $x \not= y$ is not tenable.</p>
</blockquote>
<p>My argument is as follows: We assume that $x \le y$ and $y ≤ x$. Suppose $x\not=y$. Then $x \le y$ and $y ≤ x$ reduces to $x < y$ and $y < x$. But both inequalities $x < y$ and $y < x$ cannot hold together at the same time due to <em>trichotomy</em>. Thus we achieved a contradiction to our hypothesis. Is this argument correct?</p>
<p>I need a clarification on proof done by the authors. They say "Assume that we have $x < y$" and on another line they say "hence concluded $x < y$ from the first inequality $x ≤ y$". Did they deduce it or assume it? </p>
| fleablood | 280,126 | <p>Their argument:</p>
<p>Given $x \le y$ and $y \le x$.</p>
<p>Assume $x \ne y$. </p>
<p>$x\le y$ means $x=y$ or $x < y$. We ruled out $x=y$ so that leaves $x < y$. Stick a pin in that.</p>
<p>$y \le x$ means that $y=x$ or $y < x$. Remove the pin and compare with $x < y$. Neither $y =x$ nor $y < x$ is compatible with $x < y$.</p>
<p>So we have a contradiction.</p>
<p>So $x =y$.</p>
<p>Your argument:</p>
<p>Given $x \le y$ and $y \le x$.</p>
<p>Assume $x \ne y$. </p>
<p>$x\le y$ means $x=y$ or $x < y$. We ruled out $x=y$ so that leaves $x < y$. </p>
<p>$y \le x$ means that $y=x$ or $y < x$. We ruled out $y=x$ so that leaves $y < x$.</p>
<p>$x < y$ and $y<x$ are mutually incompatible.</p>
<p>So we have a contradiction.</p>
<p>So $x =y$</p>
<p>====</p>
<p>Both your arguments are valid.</p>
<p>They took a "$x<y$" to a Contradiction fight and you took a "$x \ne y$" to the same contradiction fight. They were both equally good weapons.</p>
<p>....</p>
<p>But I prefer your method. It's more symmetric, which aesthetically pleases me and which I find more convincing.</p>
<p>The one thing they did that is both better and worse than what you did, is that they <em>really</em> went to the basic axiomatic deductions: They <em>spelled out</em> that $x \le y$ means $x=y$ or $x < y$, and that if we assume $x\ne y$ then we deduce $x < y$, whereas you took it as obvious. </p>
<p>It's better in that it is pure axiomatic deduction. It's worse in that it is hard to read as it is easy to get lost in the tedium.</p>
<p>But I like that you used the <em>same</em> argument for $x \le y\implies x < y$ and $y \le x \implies y < x$ whereas I dislike that they used <em>different</em> $x \le y\implies x <y$ and $x<y \land y \le x$ are incompatible arguments.</p>
|
48,237 | <p>I have found by a numerical experiment that the first such primes are:
$2,5,13,17,29,37,41$. But I cannot work out the general formula for them.<br>
Please share any of your ideas on the subject.</p>
| jspecter | 11,844 | <p>As the ideal $(p)$ is maximal in $\mathbb{Z}.$ The ring $\mathbb{Z}/pZ$ is a field. It follows that the nonzero elements form a group (the group of units $(\mathbb{Z}/p\mathbb{Z})^{\times})$ and this group is cyclic. Note $-1\in(\mathbb{Z}/p\mathbb{Z})^{\times}$ and for $p>2$ the order of $-1$ in this group is 2. As cyclic groups have one and only one subgroup of order $n$ for each positive integer $n$ dividing the order of the group, we observe for $p>2,$ the element $-1$ has a square root if and only if $(\mathbb{Z}/p\mathbb{Z})^{\times}$ contains a subgroup of order $4$ which occurs if and only if $4$ divides $|(\mathbb{Z}/p\mathbb{Z})^{\times}| = p-1,$ i.e. if and only if $p\equiv 1\mod 4.$</p>
<p>Examining the case where $p = 2$ trivially reveals that $-1$ has a square root. </p>
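<p>The criterion ($p=2$ or $p\equiv 1 \bmod 4$) can be checked against the numerical experiment by brute force; a quick sketch (the helper names are mine):</p>

```python
def is_prime(n):
    # trial division; fine for the small range used here
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def minus_one_is_square(p):
    # does x^2 = -1 (mod p) have a solution?
    return any((x * x + 1) % p == 0 for x in range(p))

primes_with_root = [p for p in range(2, 45) if is_prime(p) and minus_one_is_square(p)]
print(primes_with_root)  # [2, 5, 13, 17, 29, 37, 41]
```

<p>Every prime in the list is $2$ or is $\equiv 1 \bmod 4$, matching the group-theoretic argument above.</p>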
|
2,409,312 | <p>In <a href="https://math.stackexchange.com/a/1999967/272831">this previous answer</a>, MV showed that for $n\in\Bbb N$,</p>
<p>$$\int\frac1{1+x^n}~dx=C-\frac1n\sum_{k=1}^n\left(\frac12 x_{kr}\log(x^2-2x_{kr}x+1)-x_{ki}\arctan\left(\frac{x-x_{kr}}{x_{ki}}\right)\right)$$</p>
<p>where</p>
<p>$$x_{kr}=\cos \left(\frac{(2k-1)\pi}{n}\right)$$</p>
<p>$$x_{ki}=\sin \left(\frac{(2k-1)\pi}{n}\right)$$</p>
<p>I am now interested in the case of $n=\frac ab\in\Bbb Q^+$. By substituting $x\mapsto x^b$, we get</p>
<p>$$\int\frac{bx^{b-1}}{1+x^a}~dx$$</p>
<p>Thus, the given integral in question is really</p>
<p>$$\int\frac{x^b}{1+x^a}~dx$$</p>
<p>By expanding with the geometric series and termwise integration, one can see that</p>
<p>$$\int_0^p\frac{x^b}{1+x^a}~dx=\sum_{k=0}^\infty\frac{(-1)^kp^{ak+b+1}}{ak+b+1}=\frac{p^{b+1}}a\Phi\left(-p^a,1,\frac{b+1}a\right)$$</p>
<p>where $\Phi$ is the <a href="http://mathworld.wolfram.com/LerchTranscendent.html" rel="noreferrer">Lerch transcendent</a>.</p>
<p>A few particular cases that arise may be found:</p>
<p>\begin{align}\int\frac1{1+x^{1/n}}~dx&=C+(-1)^{n+1}n\left[\ln(1+x^{1/n})+\sum_{k=1}^{n-1}\frac{(-x^{1/n})^k}k\right],&a=1\\\int\frac1{1+x^{2/n}}~dx&=C+(-1)^nn\left[\arctan(x^{1/n})+\frac1{x^{1/n}}\sum_{k=1}^{(n-1)/2}\frac{(-x^{2/n})^k}{2k-1}\right],&a=2,n\ne2b\end{align}</p>
<p>Or, more generally, with $x=t^{an+1}$,</p>
<p>$$\int\frac1{1+x^{a/(an+1)}}~dx=(-1)^{n+a}(an+1)\left[\int\frac1{1+t^a}~dt+\frac1{x^{(a-1)/(an+1)}}\sum_{k=1}^{(n-1)/a}\frac{(-x^{a/(an+1)})^k}{a(k-1)+1}\right]$$</p>
<p>which reduces down to the previously solved problem.</p>
<blockquote>
<p>But what of the cases when $n=a/b$ with $(b\bmod a)\ne0,1$?</p>
</blockquote>
<p>For example,</p>
<p>$$\int\frac1{1+x^{3/2}}~dx=C+\frac16\left[\log(1-x^{1/2}+x)-2\log(1+x^{1/2})+2\sqrt3\arctan\left(\frac{2x^{1/2}-1}{\sqrt3}\right)\right]$$</p>
| Peter | 220,102 | <p>One can use <a href="https://math.stackexchange.com/questions/1354106/what-is-the-integration-of-int-1-x2n-1dx/1354485#1354485">this answer</a> for evaluating $\dfrac1{1+x^a}$ in the following form
$$\frac1{1+x^a}=\sum_{k=1}^aa_k(x-x_k)^{-1} \tag {2}$$
where $a_k=\frac{-x_k}{a}$ and $x_k=e^{i(2k-1)\pi/a}$, $k=1, \cdots,a$</p>
<p>After that, this integral can be evaluated: </p>
<p>$$\int\frac{x^b}{1+x^a}~dx=\int\sum_{k=1}^aa_k(x-x_k)^{-1} (x-x_k+x_k)^b\,dx=
\int\sum_{k=1}^aa_k(x-x_k)^{-1} \sum_{l=0}^b C_b^l(x-x_k)^l x_k^{b-l}\,dx $$
$$
=-\frac1a\sum_{k=1}^a\Big(\sum_{l=1}^b\frac{C_b^l}{l}(x-x_k)^l x_k^{b-l+1}+x_k^{b+1}\log(x-x_k)\Big)
$$</p>
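<p>The residue decomposition (with $a_k=-x_k/a$ and $x_k=e^{i(2k-1)\pi/a}$) can be sanity-checked numerically; a sketch:</p>

```python
import cmath
import random

def partial_fraction_sum(x, a):
    # sum_{k=1}^{a} a_k / (x - x_k), with x_k = exp(i(2k-1)pi/a) and a_k = -x_k/a
    total = 0.0
    for k in range(1, a + 1):
        xk = cmath.exp(1j * (2 * k - 1) * cmath.pi / a)
        total += (-xk / a) / (x - xk)
    return total

random.seed(0)
for a in (2, 3, 5, 7):
    x = random.uniform(0.1, 2.0)
    assert abs(partial_fraction_sum(x, a) - 1 / (1 + x ** a)) < 1e-9
print("partial fraction decomposition of 1/(1+x^a) verified")
```

<p>The imaginary parts cancel in conjugate pairs, so the sum agrees with the real function $1/(1+x^a)$.</p>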
|
3,275,966 | <p>In a lottery drawing, six balls with numbers from <span class="math-container">$1$</span> to <span class="math-container">$36$</span> are drawn. The player buys a ticket and writes on it the numbers of six balls which, in his opinion, will be drawn. The player wants to buy several lottery tickets in order to be guaranteed to guess at least two numbers on at least one ticket. Will it be enough to buy <span class="math-container">$12$</span> lottery tickets?</p>
<p><strong>My work</strong>. The maximum number of pairs of numbers covered by <span class="math-container">$12$</span> tickets is <span class="math-container">$12 \binom{6}{2}=12 \cdot 15$</span>. The six drawn numbers contain <span class="math-container">$ \binom{6}{2}=15$</span> pairs. The total number of pairs is <span class="math-container">$\binom{36}{2}=18 \cdot 35$</span>. I have no idea how to solve the problem.</p>
| RobPratt | 683,666 | <p>In fact, 9 tickets are enough. See <a href="http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=8782FF45BF67A451E558E80711D4CAFC?doi=10.1.1.70.8377&rep=rep1&type=pdf" rel="nofollow noreferrer">this paper</a>.</p>
<pre><code>
1 2 3 4 5 6
7 8 9 10 11 12
13 14 15 16 17 18
19 20 21 22 23 24
19 20 21 25 26 27
22 23 24 25 26 27
28 29 30 31 32 33
28 29 30 34 35 36
31 32 33 34 35 36
</code></pre>
<p>This is a "sum of disjoint covers" obtained from three copies of C(6,6,2) and two copies of C(9,6,2), as described in Theorem 2.6.</p>
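<p>The guarantee can be verified structurally rather than by checking all $\binom{36}{6}$ draws: tickets 1&ndash;3 partition $\{1,\dots,18\}$, and tickets 4&ndash;6 (resp. 7&ndash;9) cover every pair from $\{19,\dots,27\}$ (resp. $\{28,\dots,36\}$). A draw with at most one number in each of the last two blocks has at least four numbers in $\{1,\dots,18\}$, so by pigeonhole one of the first three tickets catches two of them. A sketch of the covering checks:</p>

```python
from itertools import combinations

tickets = [
    {1, 2, 3, 4, 5, 6}, {7, 8, 9, 10, 11, 12}, {13, 14, 15, 16, 17, 18},
    {19, 20, 21, 22, 23, 24}, {19, 20, 21, 25, 26, 27}, {22, 23, 24, 25, 26, 27},
    {28, 29, 30, 31, 32, 33}, {28, 29, 30, 34, 35, 36}, {31, 32, 33, 34, 35, 36},
]

# tickets 1-3 are disjoint and together partition {1,...,18}
assert tickets[0] | tickets[1] | tickets[2] == set(range(1, 19))

# tickets 4-6 form a C(9,6,2) cover: every pair from {19,...,27} lies in some ticket
assert all(any(set(p) <= t for t in tickets[3:6]) for p in combinations(range(19, 28), 2))

# tickets 7-9 do the same for {28,...,36}
assert all(any(set(p) <= t for t in tickets[6:9]) for p in combinations(range(28, 37), 2))
print("covering properties hold")
```
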
|
1,042,285 | <p>How can I prove boundedness of the sequence
$$a_n=\frac{\sin (n)}{8+\sqrt{n}}$$ without using its convergence to $0$? I know that since it is convergent, it is bounded.</p>
| DeepSea | 101,504 | <p>$|\sin (n)| \leq 1$, and $8+\sqrt{n} > 8 \Rightarrow |a_n| < \dfrac{1}{8}$</p>
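<p>A quick numerical sanity check of the bound (using nothing beyond $|\sin n|\le 1$):</p>

```python
import math

# first few thousand terms of the sequence
a = [math.sin(n) / (8 + math.sqrt(n)) for n in range(1, 5001)]
assert max(abs(x) for x in a) < 1 / 8   # every term is strictly below 1/8
print(max(abs(x) for x in a))
```
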
|
2,045,812 | <p>(gcd; greatest common divisor) I am pulling a night shift because I have trouble understanding the following task.</p>
<p>Fibonacci is defined by this in our lectures:<br>
I) $F_0 := 1$ and $F_1 := 1$</p>
<p>II) For $n\in\mathbb{N}$, $n \gt 1$ do $F_n=F_{n-1}+F_{n-2}$</p>
<p><strong>Task</strong><br>
For $n>0$, let the numbers $F_n$ be the Fibonacci numbers defined as above.<br>
Calculate for $n\in\{3,4,5,6\}$ $\gcd(F_n, F_{n+1})$ and display it as </p>
<pre><code>aFₙ + bFₙ₊₁
</code></pre>
<p>That means: find numbers a, b such that </p>
<pre><code>gcd(Fₙ, Fₙ₊₁) = aFₙ + bFₙ₊₁
</code></pre>
<p>holds.</p>
<hr>
<p>I know how to use the Euclidean Algorithm, but I don't understand where I should get the a and b from, because the task gives me {3,4,5,6} and every gcd here gives me 1.
(gcd(3,4)=1 ; gcd(4,5)=1) I need help solving this, as I am hitting a wall.</p>
| eepperly16 | 239,046 | <p>What you need here is the so-called <em>Extended Euclidean Algorithm</em> where you back substitute to calculate numbers $a$ and $b$ such that $aF_n + bF_{n+1} = \gcd(F_n,F_{n+1})$. For example, let's use $F_3 = 3$ and $F_4 = 5$, Then</p>
<p>$$ 5 = 1\cdot3 + 2 $$
$$ 3 = 1\cdot2 + 1 $$</p>
<p>Rearranging and substituting gives</p>
<p>$$ 2 = 5 - 3 $$
$$ 1 = 3 - 2 = 3 - (5 - 3)$$</p>
<p>Which gives</p>
<p>$$ 2\cdot 3 - 5 = 1$$</p>
<p>That is</p>
<p>$$ 2F_{3} + (-1) F_4 = 1$$</p>
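<p>The back-substitution can be automated; a sketch of the extended Euclidean algorithm applied to consecutive Fibonacci numbers, using the lecture's convention $F_0=F_1=1$ (the function names are mine):</p>

```python
def ext_gcd(x, y):
    # returns (g, a, b) with a*x + b*y == g == gcd(x, y)
    if y == 0:
        return x, 1, 0
    g, a, b = ext_gcd(y, x % y)
    return g, b, a - (x // y) * b

F = [1, 1]                      # F_0 = F_1 = 1 as in the lectures
while len(F) < 10:
    F.append(F[-1] + F[-2])

for n in (3, 4, 5, 6):
    g, a, b = ext_gcd(F[n], F[n + 1])
    assert g == 1 and a * F[n] + b * F[n + 1] == 1
    print(f"gcd(F_{n}, F_{n + 1}) = 1 = {a}*{F[n]} + {b}*{F[n + 1]}")
```

<p>For $n=3$ this reproduces $2F_3 + (-1)F_4 = 1$ from the worked example above.</p>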
|
4,415,037 | <p>I would like to prove upper and lower bounds on <span class="math-container">$|\cos(x) - \cos(y)|$</span> in terms of <span class="math-container">$|x-y|$</span>. I was able to show that <span class="math-container">$|\cos(x) - \cos(y)| \leq |x - y|$</span>. I'm stuck on the lower bound. Does anyone know how to approach this?</p>
<p>Update: Over the interval <span class="math-container">$[0,\pi/2]$</span>, I was able to show that <span class="math-container">$|\cos(x) - \cos(y)| \geq \frac{2 \min(x,y)}{\pi}|x-y|$</span>. But I would like a lower bound that holds for any interval.</p>
| Community | -1 | <p>I don't have enough reputation to comment so I apologize that this had to be an answer. I know that's probably not what you are looking for maybe because it's so easy, but <span class="math-container">$-\left|x-y\right|$</span> works because:</p>
<p><span class="math-container">\begin{eqnarray}
-\left|x-y\right| \leq 0 \leq \left|\cos(x) - \cos(y)\right|
\end{eqnarray}</span></p>
|
1,833,406 | <p>Now I have a topological space $X$ that is $C_2$ and $T_4$, and $U$ is an open set in it. I want to show that $U$ can be expressed as $\cup_{i\in\Bbb Z_+} F_i$ where $F_i$ are closed sets, <strong>without the aid of any metrisation theorem</strong> (but Urysohn's theorem about normal spaces and the equivalent Tietze extension theorem are usable, if needed). </p>
<p>How can I proceed then? I totally have no clue. The only thing I know is that according to $C_2$, $U$ is a countable union of base sets $B_n$. I think the next thing to do is to find a closed subset for each $B_n$ that is "saturated" enough. By $T_4$ we can find an ascending chain of open subsets $G_1\subset G_2\subset \cdots G_k\subset \cdots\subset B_n$ such that $\bar G_k\subset G_{k+1}$, but even so, there seems to be no way to guarantee that $G_k$ can literally "approach" $B_n$. </p>
<p>Of course, an appeal to the Urysohn metrisation theorem makes this problem somewhat trivial, but that is apparently overkill here. </p>
| Behrouz Maleki | 343,616 | <p>$$f'(x)=2x\tan^{-1}x+1$$
$$f'(x)=1+2x\sum\limits_{n=1}^{\infty }{\frac{{{(-1)}^{n+1}}}{(2n-1)}}\,{{x}^{2n-1}}=1+2\sum\limits_{n=1}^{\infty }{\frac{{{(-1)}^{n+1}}}{(2n-1)}}\,{{x}^{2n}}$$ therefore
$$f(x)=c+x+2\sum\limits_{n=1}^{\infty }{\frac{{{(-1)}^{n+1}}}{(2n+1)(2n-1)}}\,{{x}^{2n+1}}$$
we have $f(0)=0$, thus $c=0$ and we have
$$f(x)=x+2\sum\limits_{n=1}^{\infty }{\frac{{{(-1)}^{n+1}}}{(2n+1)(2n-1)}}\,{{x}^{2n+1}}$$</p>
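<p>As a sanity check on the series: integrating $f'(x)=2x\tan^{-1}x+1$ with $f(0)=0$ in closed form gives $f(x)=(1+x^2)\tan^{-1}x$ (this closed form is my own computation, not part of the post), and the partial sums above converge to it on $[0,1]$; a sketch:</p>

```python
import math

def f_series(x, terms=2000):
    # x + 2 * sum_{n>=1} (-1)^(n+1) x^(2n+1) / ((2n+1)(2n-1))
    s = x
    for n in range(1, terms + 1):
        s += 2 * (-1) ** (n + 1) * x ** (2 * n + 1) / ((2 * n + 1) * (2 * n - 1))
    return s

for x in (0.0, 0.3, 0.7, 1.0):
    assert abs(f_series(x) - (1 + x * x) * math.atan(x)) < 1e-6
print("series agrees with (1+x^2)*arctan(x) on [0,1]")
```

<p>Since the series is alternating, the truncation error is bounded by the first omitted term, which after 2000 terms is far below the tolerance used.</p>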
|
651,174 | <p>I've got a complex equation with 4 roots that I am solving. In my calculations it seems like I am going through hell and back to find these roots (and I'm not even sure I am doing it right) but if I let a computer calculate it, it just seems like it finds the form and then multiplies by $i$ and negative $i$. Have a look: <a href="http://www.wolframalpha.com/input/?i=%288*sqrt%283%29%29/%28z%5E4%2b8%29=i" rel="nofollow noreferrer">http://www.wolframalpha.com/input/?i=%288*sqrt%283%29%29%2F%28z%5E4%2B8%29%3Di</a></p>
<p>Here's me going bald: <img src="https://i.stack.imgur.com/oFE1P.jpg" alt="enter image description here"></p>
| Ben Grossmann | 81,360 | <p>Here's a way of doing things:
$$
\begin{align*}
\frac{8\sqrt 3}{z^4 + 8} &= i \implies\\
z^4 + 8 &= \frac{8\sqrt 3}{i} = -i\,8\sqrt 3\\
z^4 &= -8-i8\sqrt{3}
\\&=
16\left(\cos\left(\frac{4\pi}{3}\right) +i\,\sin \left(\frac{4\pi}{3} \right) \right)
\\&=
16\left(\cos\left(\frac{4\pi}{3} + 2 \pi\right) +i\,\sin \left(\frac{4\pi}{3}+2\pi \right) \right)
\\&=
16\left(\cos\left(\frac{4\pi}{3} + 4 \pi\right) +i\,\sin \left(\frac{4\pi}{3}+4\pi \right) \right)
\\&=
16\left(\cos\left(\frac{4\pi}{3} + 6 \pi\right) +i\,\sin \left(\frac{4\pi}{3}+6\pi \right) \right)
\end{align*}
$$
Taking $\sqrt[4]{16}$ and dividing each of these angles by $4$ gives you all $4$ of the $4^{th}$ roots, which are
$$
\begin{align*}
z=
& 2\left(\cos\left(\frac{\pi}{3}\right) +i\,\sin \left(\frac{\pi}{3} \right) \right),
\\&
2\left(\cos\left(\frac{5\pi}{6}\right) +i\,\sin \left(\frac{5\pi}{6} \right) \right),
\\&
2\left(\cos\left(\frac{4\pi}{3}\right) +i\,\sin \left(\frac{4\pi}{3}\right) \right),
\\&
2\left(\cos\left(\frac{11\pi}{6} \right) +i\,\sin \left(\frac{11\pi}{6}\right) \right)
\end{align*}
$$
That is, our solutions are
$$
z \in \{1 + i \sqrt 3, -\sqrt 3 + i, -1 - i\sqrt 3, \sqrt 3 - i\}
$$</p>
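<p>The four roots can be verified numerically; a quick sketch with <code>cmath</code>:</p>

```python
import cmath

angles = [cmath.pi / 3, 5 * cmath.pi / 6, 4 * cmath.pi / 3, 11 * cmath.pi / 6]
roots = [2 * cmath.exp(1j * t) for t in angles]

# each root satisfies the original equation 8*sqrt(3)/(z^4 + 8) = i
for z in roots:
    assert abs(8 * 3 ** 0.5 / (z ** 4 + 8) - 1j) < 1e-9

# and the polar forms agree with the rectangular solutions listed above
expected = [1 + 1j * 3 ** 0.5, -(3 ** 0.5) + 1j, -1 - 1j * 3 ** 0.5, 3 ** 0.5 - 1j]
assert all(abs(z - w) < 1e-9 for z, w in zip(roots, expected))
print("all four roots check out")
```
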
|
2,988,987 | <blockquote>
<p>For example, we have <span class="math-container">$f(x)=\frac{1}{x^2-1}$</span></p>
</blockquote>
<p>Would the domain be <span class="math-container">$$\mathcal D(f)=\{x\in\mathbb{R}\mid x\neq(1,-1)\}$$</span> or rather
<span class="math-container">$$\mathcal D(f)=\{x\in\mathbb{R}\mid x\neq \{1,-1\}\}$$</span> or <span class="math-container">$$ D(f)=\{x\in\mathbb{R}\mid x \setminus \{1,-1\}\}$$</span> or are all notations correct?</p>
| Doesbaddel | 587,094 | <blockquote>
<p>For example, we have <span class="math-container">$f(x)=\frac{1}{x^2-1}$</span></p>
</blockquote>
<p>The domain <span class="math-container">$\mathcal D$</span> would be <em>(as suggested by Mauro ALLEGRANZA)</em>:</p>
<p><span class="math-container">$$\mathcal{D}(f)=\left\{x\in\mathbb{R}\mid x\neq 1 \wedge x\neq -1\right\}$$</span></p>
|
2,249,929 | <p>Is there a more general form for the answer to this <a href="https://math.stackexchange.com/q/1314460/438622">question</a> where a random number within any range can be generated from a source with any range, while preserving uniform distribution? </p>
<p><a href="https://stackoverflow.com/q/137783/866502">This</a> question for example looks familiar and is changing a range of 1-5 to 1-7</p>
| Ross Millikan | 1,827 | <p>The questions you linked to have strategies that are easily generalized. If $M \gt N$ you can just roll the die, accept any number $\le N$, and roll again if the number is $\gt N$. This is very simple, but may lead to a lot of rerolling if $M$ is rather larger than $N$. If $M \gt 2N$ you can take the largest multiple of $N$ that is less than or equal to $M$, accept any roll up to that multiple, take it $\bmod N$ to get the result, and reroll anything larger. </p>
<p>You can just roll a number of times, add up the sum, and take that $\bmod N$. This will not be exactly even, but it will be very close.</p>
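<p>A sketch of the rejection-sampling trick (source uniform on $1..M$, target uniform on $1..N$, assuming $M \ge N$; the names are mine):</p>

```python
import random

def uniform_n_from_m(roll_m, M, N):
    # keep only rolls up to the largest multiple of N that fits in 1..M,
    # then reduce mod N; rejected rolls are simply redrawn
    limit = (M // N) * N
    while True:
        r = roll_m()          # uniform on 1..M
        if r <= limit:
            return (r - 1) % N + 1

random.seed(1)
die7 = lambda: random.randint(1, 7)                 # pretend source: uniform on 1..7
samples = [uniform_n_from_m(die7, 7, 5) for _ in range(20000)]
counts = [samples.count(v) for v in range(1, 6)]
print(counts)   # each value appears roughly 4000 times
assert set(samples) == {1, 2, 3, 4, 5}
```

<p>Each accepted roll is uniform over a range whose size is an exact multiple of $N$, so the reduction mod $N$ preserves uniformity exactly.</p>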
|
1,955,509 | <p>There's this exercise in Hubbard's book:</p>
<blockquote>
<p>Let $ h:\Bbb R \to \Bbb R $ be a $C^1$ function, periodic of period $2\pi$, and define the function $ f:\Bbb R^2 \to \Bbb R $ by
$$f\begin{pmatrix}r\cos\theta\\r\sin\theta \end{pmatrix}=rh(\theta)$$</p>
<p>a. Show that $f$ is a continuous real-valued function on $\Bbb R^2$.</p>
<p>b. Show that $f$ is differentiable on $\Bbb R^2 - \{\mathbf 0\}$.</p>
<p>c. Show that all directional derivatives of $f$ exist at $\mathbf 0$ if and only if</p>
<p>$$ h(\theta) = -h(\theta + \pi) \ \text{ for all } \theta $$</p>
<p>d. Show that $f$ is differentiable at $ \mathbf 0 $ if an only if $h(\theta)=a \cos \theta + b \sin \theta$ for some number $a$ and $b$. </p>
</blockquote>
<p>I can't find how to prove $ f $ is continuous, I tried to prove
$$ \lim_{\begin{pmatrix}r\cos\theta\\r\sin\theta \end{pmatrix} \to \begin{pmatrix}s\cos\phi\\s\sin\phi \end{pmatrix}} f\begin{pmatrix}r\cos\theta\\r\sin\theta \end{pmatrix}=s\ h(\phi) $$ for all $s$ and $\phi$.
But I can't do much else.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>prove with induction that $$\sum_{n=0}^k \frac{n^2+3n+2}{4^n}=\frac{1}{27} 4^{-k} \left(-9 k^2-51 k+2^{2 k+7}-74\right)$$</p>
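<p>Before (or alongside) writing out the induction, the closed form for the partial sums can be checked exactly with rational arithmetic; a sketch:</p>

```python
from fractions import Fraction

def closed_form(k):
    # (1/27) * 4^(-k) * (-9k^2 - 51k + 2^(2k+7) - 74), kept as an exact rational
    return Fraction(-9 * k * k - 51 * k + 2 ** (2 * k + 7) - 74, 27 * 4 ** k)

for k in range(15):
    partial = sum(Fraction(n * n + 3 * n + 2, 4 ** n) for n in range(k + 1))
    assert partial == closed_form(k)
print("closed form verified; as k grows the sum tends to", Fraction(128, 27))
```
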
|
1,955,509 | <p>There's this exercise in Hubbard's book:</p>
<blockquote>
<p>Let $ h:\Bbb R \to \Bbb R $ be a $C^1$ function, periodic of period $2\pi$, and define the function $ f:\Bbb R^2 \to \Bbb R $ by
$$f\begin{pmatrix}r\cos\theta\\r\sin\theta \end{pmatrix}=rh(\theta)$$</p>
<p>a. Show that $f$ is a continuous real-valued function on $\Bbb R^2$.</p>
<p>b. Show that $f$ is differentiable on $\Bbb R^2 - \{\mathbf 0\}$.</p>
<p>c. Show that all directional derivatives of $f$ exist at $\mathbf 0$ if and only if</p>
<p>$$ h(\theta) = -h(\theta + \pi) \ \text{ for all } \theta $$</p>
<p>d. Show that $f$ is differentiable at $ \mathbf 0 $ if an only if $h(\theta)=a \cos \theta + b \sin \theta$ for some number $a$ and $b$. </p>
</blockquote>
<p>I can't find how to prove $ f $ is continuous, I tried to prove
$$ \lim_{\begin{pmatrix}r\cos\theta\\r\sin\theta \end{pmatrix} \to \begin{pmatrix}s\cos\phi\\s\sin\phi \end{pmatrix}} f\begin{pmatrix}r\cos\theta\\r\sin\theta \end{pmatrix}=s\ h(\phi) $$ for all $s$ and $\phi$.
But I can't do much else.</p>
| Hazem Orabi | 367,051 | <p>$$
\begin{aligned}
& S_{0} = \sum_{n=0}^{\infty} \frac{n^{0}}{4^{n}} = \sum_{n=0}^{\infty} \frac{1}{4^{n}} = \sum_{n=0}^{\infty} (1/4)^{n} = \frac{1}{1 - (1/4)} \Rightarrow \color{red}{S_{0} = \frac{4}{3}} \\ \\
& S_{1} = \sum_{n=0}^{\infty} \frac{n^{1}}{4^{n}} = \sum_{n=0}^{\infty} \frac{n + 1 - 1}{4^{n}} = \sum_{n=0}^{\infty} \frac{n + 1}{4^{n}} - \sum_{n=0}^{\infty} \frac{1}{4^{n}} = 4 \sum_{n=0}^{\infty} \frac{n + 1}{4^{n + 1}} - S_{0} \\
& \qquad = 4 \sum_{n=1}^{\infty} \frac{n}{4^{n}} - S_{0} = 4 \left[ - 0 + \sum_{n=0}^{\infty} \frac{n}{4^{n}} \right] - S_{0} = 4 S_{1} - S_{0} \Rightarrow \color{red}{S_{1} = \frac{1}{3} S_{0} = \frac{4}{9}} \\ \\
& S_{2} = \sum_{n=0}^{\infty} \frac{n^{2}}{4^{n}} = \sum_{n=0}^{\infty} \frac{(n + 1)^{2} - 2 n - 1}{4^{n}} = 4 \sum_{n=0}^{\infty} \frac{(n + 1)^{2}}{4^{n+1}} - 2 \sum_{n=0}^{\infty} \frac{n}{4^{n}} - \sum_{n=0}^{\infty} \frac{1}{4^{n}} \\
& \qquad = 4 S_{2} - 2 S_{1} - S_{0} = 4 S_{2} - \frac{5}{3} S_{0} \Rightarrow \color{red}{S_{2} = \frac{5}{9} S_{0} = \frac{20}{27}} \\ \\
& \sum_{n=0}^{\infty} \frac{n^{2} + 3 n + 2}{4^{n}} = S_{2} + 3 S_{1} + 2 S_{0} = \frac{20}{27} + \frac{12}{9} + \frac{8}{3} = \frac{20 + 36 + 72}{27} = \frac{128}{27} \\ \\
\end{aligned}
$$</p>
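<p>The intermediate moment values $S_0=\frac43$, $S_1=\frac49$, $S_2=\frac{20}{27}$ and the total $\frac{128}{27}$ can each be sanity-checked with long exact partial sums; a sketch:</p>

```python
from fractions import Fraction

def moment(j, terms=80):
    # partial sum of S_j = sum_{n>=0} n^j / 4^n; the tail beyond 80 terms is negligible
    return sum(Fraction(n ** j, 4 ** n) for n in range(terms))

targets = {0: Fraction(4, 3), 1: Fraction(4, 9), 2: Fraction(20, 27)}
for j, target in targets.items():
    assert abs(moment(j) - target) < Fraction(1, 10 ** 30)

total = moment(2) + 3 * moment(1) + 2 * moment(0)
assert abs(total - Fraction(128, 27)) < Fraction(1, 10 ** 29)
print("S_0, S_1, S_2 and the total 128/27 all check out")
```
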
|
4,597,679 | <p>My textbook is <em>A Mathematical Introduction to Logic, 2nd Edition</em> by Enderton.</p>
<p>The question initially came up when I was trying to prove Exercise 4 on pg. 99:</p>
<p><span class="math-container">$$\text{Show that if }x \text{ does not occur free in }\alpha,\text{ then }\alpha \vDash \forall x \alpha$$</span></p>
<p>One of the lines in the proof, which as I worked out is:</p>
<p>the assignment functions <span class="math-container">$s$</span> and <span class="math-container">$s(x|d)$</span> agree on any variable but <span class="math-container">$x$</span>, since <span class="math-container">$x$</span> does not occur free in <span class="math-container">$\alpha,$</span> hence
<span class="math-container">$$\vDash_\mathfrak{U} \alpha[s] \Leftrightarrow \vDash_{\mathfrak{U}}\alpha[s(x|d)]$$</span></p>
<p>Then, I referred back to <strong>Theorem 22A</strong>, which states:
Assume <span class="math-container">$s_1,s_2$</span> are assignment functions from <span class="math-container">$V$</span> into <span class="math-container">$\mathfrak{U}$</span> which agree at all variables (if any) that occur free in the wff <span class="math-container">$\phi$</span>. Then
<span class="math-container">$$\vDash_{\mathfrak{U}} \phi[s_1] \Leftrightarrow \vDash_{\mathfrak{U}} \phi[s_2]$$</span></p>
<p>My question is: why is agreement on the free variables enough? Why don't we need to impose agreement of <span class="math-container">$s_1,s_2$</span> on the bound variables as well?</p>
<p>The proof of this theorem uses induction, where the inductive hypothesis simply states the agreement on free variables rather than telling the intuition behind it.</p>
<p>I attempted to answer my own question, but when it comes to discerning the difference between free and bound variables, the sentence 'a bound variable is a variable that's being quantified' feels incomplete. Could you please provide richer insight into this?</p>
<p>Also, this question is closely related to the post:
<a href="https://math.stackexchange.com/questions/274925/show-that-if-x-does-not-occur-free-in-%ce%b1-then-%ce%b1-vdash-%e2%88%80-x-%ce%b1/868815?noredirect=1#comment9683956_868815">Show that if $x$ does not occur free in $α$, then $α \vDash ∀ x α$.</a></p>
| georgy_d | 251,394 | <p>I don't really like to talk about bound variables, because it is rather about bound occurrences of variables.</p>
<p>The main idea is that everything you derive is a valid formula (the theorem of correctness, i.e. soundness).</p>
<p>Valid formulas are precisely those which are true under any interpretation of the variables, functional symbols and predicate symbols. (So terms and formulas have a precise meaning (semantics).)</p>
<p>Different interpretations allow you to implement the substitution operation, so you can obtain new valid formulas by substituting terms for variables.</p>
<p>So essentially the free [occurrences of] variables are those which you can substitute into without breaking the validity of a formula. Bound [occurrences of] variables are all the rest.</p>
|
3,440,732 | <p>How can I show that <span class="math-container">$P=\{\{2k-1,2k\},k\in \mathbb {N}\}$</span> is a basis for a topology on <span class="math-container">$\mathbb N$</span>?
Obviously we have to show that the basis axioms hold.
But the problem is if I take two subsets that belong to <span class="math-container">$P$</span> , for example <span class="math-container">$P_1=\{1,2\}$</span> , <span class="math-container">$P_2=\{3,4\}$</span> , I get <span class="math-container">$P_1 \cap P_2=\emptyset$</span> , then how can I show that for every <span class="math-container">$n\in\mathbb N$</span> , there is <span class="math-container">$P_3\in P$</span> such that <span class="math-container">$n\in P_3$</span>?</p>
| ajotatxe | 132,456 | <p>Well, if <span class="math-container">$n$</span> is even, then <span class="math-container">$n\in \{n-1,n\}\in P$</span>. If it is odd then <span class="math-container">$n\in\{n,n+1\}\in P$</span>.</p>
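<p>The underlying partition property (every $n$ lies in exactly one element of $P$, and distinct elements are equal or disjoint) can be checked mechanically on an initial segment; a small sketch:</p>

```python
def block(n):
    # the element {2k-1, 2k} of P containing n, with k = ceil(n/2)
    k = (n + 1) // 2
    return frozenset({2 * k - 1, 2 * k})

for n in range(1, 101):
    assert n in block(n)                                   # P covers the integers 1..100
for m in range(1, 101):
    for n in range(1, 101):
        assert block(m) == block(n) or not (block(m) & block(n))   # equal or disjoint
print("P is a partition, so the basis axioms hold trivially")
```
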
|
263,961 | <p>Some of the more organic theories considered in model theory (other than set theory, which, from what I've seen, seems to be quite distinct from "mainstream" model theory) are those which arise from algebraic structures (theories of abstract groups, rings, fields) and real and complex analysis (theories of expansions of real and complex fields, and sometimes both).</p>
<p>While relationships with algebra seem quite apparent, I wonder what are some interesting results in real and complex analysis that have nice model-theoretical proofs (or better yet, only model-theoretical proofs are known!)? </p>
<p>Of course, there's nonstandard analysis, but I hope to see some different examples. That said, I wouldn't mind seeing a particularly interesting application of nonstandard analysis. :)</p>
<p>I hope the question is at least a little interesting. I have only the very basic knowledge of model theory of that type (and the same applies to nonstandard analysis), so it may seem a little naive, but I got curious, hence the question.</p>
| Seirios | 36,434 | <p>Ax found the following application in complex analysis:</p>
<blockquote>
<p><strong>Theorem:</strong> If $f : \mathbb{C}^n \to \mathbb{C}^n$ is an injective polynomial function, that is there exist $f_1,...,f_n \in \mathbb{C}[X_1,...,X_n]$ such that $f=(f_1,...,f_n)$, then $f$ is surjective.</p>
</blockquote>
<p>You can show that the theorem holds for $f : k^n \to k^n$ where $k$ is a locally finite field, therefore it holds for the algebraic closure $\overline{\mathbb{F}_p}= \bigcup\limits_{n \geq 1} \mathbb{F}_{p^n}$ of $\mathbb{F}_p$. Then, it holds for the (nontrivial) ultraproduct $K=\prod\limits_{p \in \mathbb{P}} \overline{\mathbb{F}_p} / \omega$ where $\omega$ is a nonprincipal ultrafilter over the set of primes $\mathbb{P}$, because the theorem can be expressed as a set of first-order sentences. But $K$ is an algebraically closed field of characteristic $0$ and the theory of algebraically closed fields of characteristic $0$ is complete, so $K$ and $\mathbb{C}$ are elementarily equivalent. Finally, Ax's theorem is proved.</p>
<p>Ax's theorem was generalized by Grothendieck.</p>
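<p>The finite-field step of the argument &mdash; an injective map of a finite set to itself is surjective &mdash; can be illustrated by brute force; a toy sketch over $\mathbb{F}_5^2$ (the particular polynomial map is my own example, not from the proof):</p>

```python
from itertools import product

p = 5
points = list(product(range(p), repeat=2))   # all of F_5 x F_5

def f(v):
    x, y = v
    # a polynomial self-map of F_p^2; injective since x is recoverable as u - y^2
    return ((x + y * y) % p, y)

images = [f(v) for v in points]
assert len(set(images)) == len(points)   # injective ...
assert set(images) == set(points)        # ... hence surjective, by pigeonhole
print("injective polynomial map on F_5^2 is surjective")
```
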
|
923,871 | <p>Let $Y=\{(a,b)∈ \Bbb R\times \Bbb R∣ a≠0\}$. Given $(a,b),(c,d)\in Y$, define $(a,b)*(c,d)=(ac,ad+b)$. Prove that $Y$ is a group with the operation $*$.</p>
<p>I have already proved that $*$ is an operation on $Y$, and I proved associativity like this:
$$\{(A,B)*(C,D)\}*(E,F)=(A,B)*\{(C,D)*(E,F)\}$$</p>
<p>$$(AC,AD+B)(E,F)=(A,B)(CE,CF+D)$$</p>
<p>$$(ACE,ACF+AD+B)=(ACE,ACF+AD+B)$$</p>
<p>but I get stuck trying to prove the identity and the inverse. I don't know how to start.</p>
| Cameron Buie | 28,900 | <p>There are four things you need to accomplish, here:</p>
<ul>
<li>You need to show that $*$ is an operation on $Y.$ In particular, you must show that if $(a,b),(c,d)\in Y,$ then $(a,b)*(c,d)\in Y.$ This should be straightforward from the definition of $Y.$</li>
<li>You need to show that $*$ is an <em>associative</em> operation on $Y.$ That is, you need to show that if $(a,b),(c,d),(e,f)\in Y,$ then $$\bigl[(a,b)*(c,d)\bigr]*(e,f)=(a,b)*\bigl[(c,d)*(e,f)\bigr].$$</li>
<li>You need to show that $Y$ has a $*$-identity. That is, you must find some $(i,j)\in Y$ such that no matter what $(a,b)\in Y$ we choose, we will always have $$(a,b)*(i,j)=(a,b)=(i,j)*(a,b).$$</li>
<li>You need to show that every element of $Y$ has a $*$-inverse. That is, you must show that for every $(a,b)\in Y,$ there is some $(c,d)\in Y$ such that $$(a,b)*(c,d)=(i,j)=(c,d)*(a,b),$$ where $(i,j)$ is the $*$-identity that you should already have found.</li>
</ul>
<p>Which of these have you accomplished so far? What have you managed to show for the others?</p>
|
4,007,450 | <p>Let <span class="math-container">$y=f(x)$</span> be the graph of a real-valued function. We define its curvature by: <span class="math-container">$$curv(f) = \frac{|f''|}{(1+(f')^2)^{3/2}}$$</span></p>
<p>I would like to know if there is any function (apart from the trivial answer <span class="math-container">$f(x)=0$</span>) whose curvature is itself. So what is the fixed point of <span class="math-container">$curv$</span>?</p>
| Shaun | 104,041 | <p>You're correct.</p>
<p>If <span class="math-container">$a^2=e$</span> for all <span class="math-container">$a\in G$</span>, then for any <span class="math-container">$g,h\in G$</span>,</p>
<p><span class="math-container">$$\begin{align}
(gh)^2&=ghgh\\
&=e\\
&=ee\\
&=g^2h^2\\
&=gghh,
\end{align}$$</span></p>
<p>from which it follows that <span class="math-container">$gh=hg$</span>. Hence <span class="math-container">$G$</span> is abelian.</p>
|
524,568 | <p>I'm working on a recursive function task which I'm a bit stuck on. I've tried to google how I can solve this task, but with no luck.</p>
<p>Here is the task:</p>
<blockquote>
<p><em>Provide a recursive function $r$ on $A$</em>* <em>which gives the number of
characters in the string</em></p>
</blockquote>
<p>I hope I can get some help here, since I've tried nearly everything.</p>
<p>Thanks a lot for your kind help!</p>
| mihirj | 100,576 | <p>Kindly see if this helps you.</p>
<p>r = f(A)</p>
<p>f(A) = 0 if A = ∅ (the empty string)</p>
<pre><code> = 1 + f(A-{c}) otherwise
</code></pre>
<p>where {c} is the character extracted from the string, so A-{c} is the string with that character removed.</p>
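<p>Tidied up as running code (treating a string over the alphabet $A$, with <code>c</code> the first character removed at each step; a sketch):</p>

```python
def r(s):
    # r(empty string) = 0;  r(c followed by w) = 1 + r(w)
    if s == "":
        return 0
    return 1 + r(s[1:])

assert r("") == 0
assert r("abc") == 3
assert r("mississippi") == len("mississippi")
print("recursive length agrees with the built-in len")
```
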
|
3,409,342 | <p>Let <span class="math-container">$X$</span> be a set containing <span class="math-container">$A$</span>.</p>
<p>Proof:
<span class="math-container">$y\in A \cup (X \setminus A) \Rightarrow y\in A$</span> or <span class="math-container">$y \in (X \setminus A)$</span></p>
<p>If <span class="math-container">$y \in A$</span>, Then <span class="math-container">$y \in X$</span> because <span class="math-container">$A \subset X$</span>. </p>
<p>If <span class="math-container">$y \in (X \setminus A)$</span>, Then <span class="math-container">$y \in X$</span> and <span class="math-container">$y \notin A$</span>. So <span class="math-container">$y \in X$</span>.</p>
<p>Therefore <span class="math-container">$y \in A \cup (X \setminus A) \Rightarrow y \in X$</span>.</p>
<p>Now I've proved that every element of <span class="math-container">$A \cup (X \setminus A)$</span> is also an element of <span class="math-container">$X$</span>.
But how do I prove the converse?</p>
| fleablood | 280,126 | <p>The inverse is really trivial.</p>
<p>Proposition: For any sets <span class="math-container">$A, B$</span>, we have <span class="math-container">$A \subset A\cup B$</span>.</p>
<p>Pf: If <span class="math-container">$x \in A$</span> then (<span class="math-container">$x\in A$</span> or <span class="math-container">$x \in B$</span>). So <span class="math-container">$x\in A\cup B$</span>.</p>
<p>So <span class="math-container">$A\subset A \cup (X\setminus A)$</span>. For the other containment, note that any <span class="math-container">$x \in X$</span> satisfies <span class="math-container">$x\in A$</span> or <span class="math-container">$x \in X\setminus A$</span>, so <span class="math-container">$x \in A \cup (X\setminus A)$</span>; hence <span class="math-container">$X \subset A \cup (X\setminus A)$</span> as well.</p>
|
877,477 | <p>I'm stuck with the following question, which looks quite innocent.</p>
<p>I'd like to show that if a covering space map $f:\tilde{X}\to X$ between cell complexes is null-homotopic, then the covering space $\tilde{X}$ must be contractible.</p>
<p>Since $f$ is null-homotopic there exists a homotopy $H_t:\tilde{X}\to X$ from $H_0=x_0$ to $H_1=f$ and I would like to use it to construct another homotopy $G:\tilde{X}\to \tilde{X}$ from $G_0=\tilde{x}_0$ to $G_1=Id_{\tilde{X}}$.</p>
<p>By the homotopy lifting property, $H_t$ lifts to a homotopy $\tilde{H}_t:\tilde{X}\rightarrow \tilde{X}$ such that $H_t(x)=f(\tilde{H}_t(x))$ and $\tilde{H}_0(x)=\tilde{x}_0$</p>
<p>So we have a homotopy $\tilde{H}_t:\tilde{X}\rightarrow \tilde{X}$ from $\tilde{H}_0(x)= \tilde{x}_0$ to $\tilde{H}_1(x)$ and besides $f(x)=H_1(x)=f(\tilde{H}_1(x))$.</p>
<p>If $f$ was injective we would be done, but in principle $\tilde{H}_1(x)$ could be any point in $f^{-1}(x_0)$ right?</p>
| Quang Hoang | 91,708 | <p>Since $f$ is nullhomotopic, $f_*:\pi_n(\tilde X)\to \pi_n(X)$ are trivial for all $n$. Consequently $\pi_n(\tilde X)$ are all trivial. Whitehead theorem implies $\tilde X$ is contractible.</p>
|
1,361,948 | <p>$\frac{df}{dx} = 2xe^{y^2-x^2}(1-x^2-y^2) = 0.$</p>
<p>$\frac{df}{dy} = 2ye^{y^2-x^2}(1+x^2+y^2) = 0.$</p>
<p>So, $2xe^{y^2-x^2}(1-x^2-y^2) = 2ye^{y^2-x^2}(1+x^2+y^2)$.</p>
<p>$x(1-x^2-y^2) = y(1+x^2+y^2)$</p>
<p>$x-x^3-xy^2 = y + x^2y + y^3$</p>
<p>Is guessing the values of the variables the only way of solving this? With $y = 0$ I can already figure out that $x = 0$, $1$ or $-1$, but it's bothering me that I had to randomly guess that. </p>
| Budenn | 250,821 | <p>$$\mathrm{Re} \left[\frac{1 + i}{\sigma \delta \left( 1 - e^{-(1 + i)t/\delta} \right) }\right]
= \mathrm{Re}\left[\frac{(1 + i)\left( 1 - e^{-(1 - i)t/\delta} \right)}{\sigma \delta \left( 1 - e^{-(1 + i)t/\delta} \right) \left( 1 - e^{-(1 - i)t/\delta} \right)} \right]
= \frac{1 - \mathrm{Re}\left[e^{-(1 - i)t/\delta} \right] - \mathrm{Re}\left[i e^{-(1 - i)t/\delta}\right]}{2 \sigma \delta e^{-t/\delta} (\cosh(t/\delta) - \cos(t/\delta))}
= \frac{1 - \mathrm{Re}\left[e^{-(1 - i)t/\delta} \right] + \mathrm{Im}\left[ e^{-(1 - i)t/\delta}\right]}{2 \sigma \delta e^{-t/\delta} (\cosh(t/\delta) - \cos(t/\delta))} = \frac{1 - e^{-t/\delta} (\cos (t/\delta) - \sin(t/\delta))}{2 \sigma \delta e^{-t/\delta} (\cosh(t/\delta) - \cos(t/\delta))} = \frac{e^{t/\delta} - \cos (t/\delta) + \sin(t/\delta)}{2 \sigma \delta (\cosh(t/\delta) - \cos(t/\delta))}.$$</p>
<p>It doesn't seem that this expression can be reduced to the supposed real part in the answer.</p>
<p>Furthermore, plotting <a href="http://www.wolframalpha.com/input/?i=plot+real+part+of+%281%2B1i%29%2F%281-Exp[-%281%2B1i%29+t]%29+for+0+to+2*pi" rel="nofollow">the real part</a> and <a href="http://www.wolframalpha.com/input/?i=plot+1%2F%281-Exp[-t]%29+for+0+to+2*pi" rel="nofollow">its supposed value</a> for $\delta = \sigma = 1$ shows they are quite different functions.</p>
|
2,841,640 | <p>What is a vector space? I can see two different formulations, and between them there is one difference: commutativity. </p>
<blockquote>
<p><strong>DEFINITION 1</strong> (See <a href="https://proofwiki.org/wiki/Definition:Vector_Space" rel="noreferrer">here</a>)</p>
<p>Let $(F, +_F, \times_F)$ be a division ring.
Let $(\mathcal{V}, +_\mathcal{V})$ be an abelian group.
Let $(\mathcal{V}, +_\mathcal{V}, \cdot)_F$ be a unitary module over $F$. Then $(\mathcal{V}, +_\mathcal{V}, \cdot)_F$ is a vector space over $F$. That is, a vector space is a unitary module over a ring, whose ring is a division ring.</p>
<p><strong>DEFINITION 2</strong></p>
<p>Let $(F, +_F, \times_F)$ be a field.
Let $(\mathcal{V}, +_\mathcal{V})$ be an abelian group.
Let $\cdot: F\times \mathcal{V} \longrightarrow \mathcal{V}$ be a function. A vector space is $(\mathcal{V}, +_\mathcal{V}, \cdot)_F$ such that $\forall a,b, \in F$ and $\forall x,y \in \mathcal{V}$:</p>
<ul>
<li>$\cdot$ right distributive: $(a +_F b) \cdot x = (a\cdot x) +_\mathcal{V} (b\cdot x)$</li>
<li>$\cdot$ left distributive: $\,\,\, a \cdot (x +_\mathcal{V} y) = (a\cdot x) +_\mathcal{V} (a\cdot y)$</li>
<li>$\cdot$ compatible with $\times_F$: $(a\times_F b) \cdot x = a \cdot (b\cdot x)$</li>
<li>$\times_F$ 's identity is $\cdot$'s identity: $1_F \cdot x = x$</li>
</ul>
</blockquote>
<p>There could also be other definitions, but for now it doesn't matter. What matters is that commutativity is not treated in the same way in both definitions! In the first definition, we have a division ring (not a commutative division ring, i.e. a field!), while in the second we have a field (i.e. a commutative division ring). </p>
<hr>
<p>Notice that the key difference on which I am struggling is that on one side we have a division ring and on the other side a commutative division ring. The first is an abelian group $(R, +_R)$ under the $+_R$ binary operation; however, $(R\setminus\{0\}, \times_R)$ is only a group (i.e. not necessarily abelian, i.e. not necessarily commutative). </p>
| AnalysisStudent0414 | 97,327 | <p>Usually, over a field.</p>
<p>On <a href="https://en.wikipedia.org/wiki/Vector_space" rel="nofollow noreferrer">Wikipedia</a> (I know, I know) I read that "Some authors use the term vector space to mean modules over a division ring" (<a href="https://en.wikipedia.org/wiki/Vector_space#cite_note-120" rel="nofollow noreferrer">cit.</a>). That seems reasonable, as they are just extending the definition. </p>
<p>Note that in the division ring definition, if $F$ is a field, the two definitions become equivalent.</p>
|
2,400,110 | <p>Let's say that we have a matrix of transfer functions:
$$G(s) = C(sI-A)^{-1}B + D$$</p>
<p>And we create the sensitivity matrix transfer function:</p>
<p>$$S(s) = (I+GK)^{-1}$$</p>
<p>Where $K$ is our controller gain matrix.</p>
<p>We also create the complementary sensitivity transfer function matrix:</p>
<p>$$T(s) = (I+GK)^{-1}GK$$</p>
<p>We also create the weighting transfer function matrices:
$$W_u(s) \\ W_T(s) \\ W_P(s)$$</p>
<p>You can see them as the tuning matrices. The picture below represents the $H_{ \infty}$ controller, where $z$ is our performance output (for analysis only), $G$ is our transfer function matrix, and $K$ is, as mentioned before, our controller gain matrix. $w$ is a vector of disturbances. Notice that $\omega \neq w$.</p>
<p><a href="https://i.stack.imgur.com/24UsU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/24UsU.png" alt="enter image description here"></a></p>
<p>This whole picture can be described as:
$$\begin{bmatrix}
z_1 = W_uu\\
z_2 = W_TGu\\
z_3 = W_Pw + W_PGu\\
v = w + Gu
\end{bmatrix}$$</p>
<p>And we can create our generalized plant P:</p>
<p>$$P = \begin{bmatrix}
0 & WuI \\
0& W_TG\\
W_PI & WpG\\
I & G
\end{bmatrix}$$</p>
<p>$$z = Pw = \begin{bmatrix}
0 & WuI \\
0& W_TG\\
W_PI & WpG\\
I & G
\end{bmatrix}
\begin{bmatrix}
w\\
u
\end{bmatrix}$$</p>
<p>We ca partioning the generalized plant P by saying that:</p>
<p>$$P = \begin{bmatrix}
P_{11} & P_{12}\\
P_{21}& P_{22}
\end{bmatrix} = \begin{bmatrix}
A & B_1 & B_2\\
C_1 & D_{11} & D_{12} \\
C_2 & D_{21} & D_{22}
\end{bmatrix}$$</p>
<p>Then we can say:</p>
<p>$$P_{11} = \begin{bmatrix}
A
\end{bmatrix} = \begin{bmatrix}
0\\
0\\
W_PI
\end{bmatrix}$$
$$P_{12} = \begin{bmatrix}
B_1 & B_2
\end{bmatrix} = \begin{bmatrix}
W_uI\\
W_TG\\
W_PG
\end{bmatrix}
$$
$$P_{21} = \begin{bmatrix}
C_1\\
C_2
\end{bmatrix} = \begin{bmatrix}
I
\end{bmatrix}
$$
$$
P_{22} = \begin{bmatrix}
D_{11} & D_{12} \\
D_{21} & D_{22}
\end{bmatrix} = \begin{bmatrix}
G
\end{bmatrix}$$</p>
<p>So, now to the question! In robust control there are notions called detectability and stabilizability, and I wonder what they are. According to a book I have, the following:</p>
<p>$$A, B_2, C_2$$</p>
<p>and</p>
<p>$$A, B_1, C_1$$</p>
<p>need to be detectable and stabilizable. The definition of stabilizable is:</p>
<blockquote>
<p>A system is stabilizable if all unstable states are controllable. </p>
</blockquote>
<p>The definition of detectable is:</p>
<blockquote>
<p>A system is detectable if all unstable states are observable. </p>
</blockquote>
<p>I know how to find out if a system is controllable and observable. That is very easy!</p>
<p>To check controllability:</p>
<p>$$C_o \equiv \begin{bmatrix}
B & AB & A^2B & \dots & A^{n-1}B
\end{bmatrix}$$</p>
<p>And then check the rank
$$rank(C_o) = n$$</p>
<p>To check observability:</p>
<p>$$O_o \equiv \begin{bmatrix}
C\\
CA\\
CA^2\\
\vdots\\
CA^{n-1}
\end{bmatrix}$$</p>
<p>And then check the rank
$$rank(O_o) = n$$</p>
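<p>(For instance, both rank tests can be scripted. Below is a minimal pure-Python sketch with exact arithmetic via <code>fractions</code> and a hypothetical two-state example; observability is checked through the standard duality: $(A, C)$ is observable iff $(A^T, C^T)$ is controllable.)</p>

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix (list of rows) via exact Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def ctrb(A, B):
    """Controllability matrix [B, AB, A^2 B, ..., A^(n-1) B]."""
    n, blocks, AB = len(A), [], B
    for _ in range(n):
        blocks.append(AB)
        AB = matmul(A, AB)
    return [sum((blk[i] for blk in blocks), []) for i in range(n)]

def transpose(M):
    return [list(r) for r in zip(*M)]

# hypothetical two-state example
A = [[0, 1], [-2, -3]]
B = [[0], [1]]
C = [[1, 0]]
assert rank(ctrb(A, B)) == 2                        # controllable
assert rank(ctrb(transpose(A), transpose(C))) == 2  # observable, via duality
```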
<p><strong>Question:</strong>
Do you know any formula or method to check whether all unstable states are controllable and observable? Because I cannot find any states in my system which are unstable. If I find an unstable state, does that mean my system matrix $A$ has some eigenvalues with positive real part? Right? </p>
| hzh | 528,159 | <p>I think you started with some wrong definitions.</p>
<p>The right definition for stabilizability and detectability should be:</p>
<blockquote>
<p>A system is stabilizable if the uncontrollable states are stable.</p>
<p>A system is detectable if the unobservable states are asymptotically stable.</p>
</blockquote>
<p>You seemed to be using the definition the other way around.</p>
<p>So with these definitions, the task should be very straightforward.</p>
<ul>
<li><p>For stabilizability, you simply decompose the system into controllable and uncontrollable subsystems, and then check if the uncontrollable subsystem is stable.</p>
</li>
<li><p>For detectability, again, decompose the system into observable and unobservable subsystems, then check if the unobservable subsystem is asymptotically stable.</p>
</li>
</ul>
|
3,789,873 | <p>Let <span class="math-container">$X, Y, X_n's$</span> be random variables for which <span class="math-container">$X_n+\tau Y\to_D X+\tau Y$</span> for every fixed positive constant <span class="math-container">$\tau$</span>. Show that <span class="math-container">$X_n \to_D X$</span>.</p>
<p>I don't think we can just let $\tau\to0$ and claim the result. Any hint on this question?</p>
| Oliver Díaz | 121,671 | <p>This is just to complement the answer by Shalop and to address a comment posted by the OP.</p>
<p>The first and third are bounded by a small term:
<span class="math-container">$$
\begin{align}
\Big|\Bbb E[e^{it X_{n}}]-\Bbb E[e^{it X_{n} + i t \epsilon Y}]\Big|&\leq \Bbb E\big[\big|e^{itX_n}\big(1-e^{it\varepsilon Y}\big)\big|\big]\leq\mathbb{E}\big[|1-e^{it\varepsilon Y}|\big]\leq \Bbb E[\min(2,\varepsilon t|Y|)]\\
\Big|\Bbb E[e^{it X}]-\Bbb E[e^{it X + i t \epsilon Y}]\Big|&\leq \Bbb E\big[\big|e^{itX}\big(1-e^{it\varepsilon Y}\big)\big|\big]\leq\mathbb{E}\big[|1-e^{it\varepsilon Y}|\big]\leq \Bbb E[\min(2,\varepsilon t|Y|)]
\end{align}
$$</span></p>
<p>Here, we use the fact that for any <span class="math-container">$n\in\mathbb{Z}_+$</span> and <span class="math-container">$x\in\mathbb{R}$</span>
<span class="math-container">$$
\Big|e^{ix}-\sum^n_{k=0}\frac{(ix)^k}{k!}\Big|\leq \min\Big(
\frac{|x|^{n+1}}{(n+1)!},\frac{2|x|^n}{n!}\Big)\\
$$</span></p>
<p>Dominated convergence implies that <span class="math-container">$\lim_{\varepsilon\rightarrow0}\Bbb E[\min(2,\varepsilon t|Y|)]=0$</span></p>
|
4,095,715 | <p>I know how to do these in a very tedious way using a binomial distribution, but is there a clever way to solve this without doing 31 binomial coefficients (with some equivalents)?</p>
| trancelocation | 467,003 | <p>Here is just a standard trick from generating functions using</p>
<p><span class="math-container">$$\frac 1{(1-y)^n} = \sum_{k=0}^{\infty}\binom{k+n-1}{n-1}y^k$$</span>.</p>
<p>To simplify the expressions set</p>
<p><span class="math-container">$$y=x^2\Rightarrow \text{ we look for }[y^6]\frac{(1-y^4)^n}{(1-y)^n}$$</span></p>
<p>Hence, for <span class="math-container">$n\geq 1$</span> you get</p>
<p><span class="math-container">\begin{eqnarray*}[y^6]\frac{(1-y^4)^n}{(1-y)^n}
& = & [y^6]\left((1-y^4)^n\sum_{k=0}^{\infty}\binom{k+n-1}{n-1}y^k\right) \\
& = & [y^6]\left((1-ny^4)\sum_{k=0}^{\infty}\binom{k+n-1}{n-1}y^k\right) \\
& = & \boxed{\binom{6+n-1}{n-1} - n\binom{2+n-1}{n-1}}
\end{eqnarray*}</span></p>
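<p>As a sanity check (not part of the derivation), the boxed formula can be compared against a brute-force truncated power-series multiplication:</p>

```python
from math import comb

def coeff_y6_series(n):
    """[y^6] of (1 - y^4)^n / (1 - y)^n by truncated series multiplication."""
    # numerator coefficients of (1 - y^4)^n up to degree 6 (only j = 0, 1 matter)
    num = [0] * 7
    for j in range(n + 1):
        if 4 * j <= 6:
            num[4 * j] = (-1) ** j * comb(n, j)
    # 1/(1-y)^n = sum_k C(k+n-1, n-1) y^k
    den = [comb(k + n - 1, n - 1) for k in range(7)]
    return sum(num[k] * den[6 - k] for k in range(7))

for n in range(1, 20):
    assert coeff_y6_series(n) == comb(n + 5, n - 1) - n * comb(n + 1, n - 1)
```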
|
2,811,991 | <p>$$\int_{-3}^5 f(x)\,dx$$<br>
for
$$ f(x) =\begin{cases}
1/x^2, & \text{if }x \neq 0 \\
-10^{17}, & \text{if }x=0
\end{cases}
$$</p>
<p>I tried with Newton Leibniz formula, is this correct ?</p>
<p>$\int_{-3}^0 f(x)dx$ + $\int_{0}^5 f(x)dx$ =</p>
<p>$1/x^2 |_{-3}^{0} $ $ + $ $1/x^2 |_0^5$=</p>
<p>$3/(-3)^2+10^{17}+(-10^{17})-3/5^2)$= $16/75$</p>
<p>I know I made a mistake, but I don't know where; could someone please correct me and help me?</p>
| Greg | 495,411 | <p>$\frac1{x^2}$ is not an antiderivative of $\frac1{x^2}$; an antiderivative is $-\frac1x$. With that, each of $\int_{-3}^0$ and $\int_0^5$ diverges near $0$, so the integral does not exist.</p>
|
851,459 | <p>Hello, I am trying to find an explanation of how to derive the Cahn–Hilliard equation:</p>
<p>$$ u_t =\Delta (w'(u)-\epsilon ^2 \Delta u)$$ </p>
<p>as a gradient flow of the energy functional $$ E[u]=\int \left( w(u)+\epsilon ^2 |\nabla u|^2 \right) $$
I tried to follow the definition of gradient flow from
<a href="http://anhngq.wordpress.com/2010/11/05/what-is-a-gradient-flow/" rel="nofollow">http://anhngq.wordpress.com/2010/11/05/what-is-a-gradient-flow/</a>
but I got stuck, and then </p>
<p>I read that it is the $ H^{-1} $ gradient flow of the functional. Can anyone tell me what an $ H^{-1} $ gradient flow is?
Thanks. </p>
| Student | 124,626 | <p>To get the Cahn-Hilliard system, we first write mass conservation, i.e. $\dfrac{\partial u}{\partial t}=-h_x$, where $h$ is the mass flux, which is related to the chemical potential $\mu$ by a constitutive relation $h=-\mu_x$. Since the chemical potential $\mu$ is the variational derivative of the energy $E$ with respect to $u$, we end up with the fourth-order Cahn-Hilliard system:</p>
<p>\begin{eqnarray}
\dfrac{\partial u}{\partial t}=\Delta \mu
\end{eqnarray}
\begin{eqnarray}
\mu=\dfrac{\delta E}{\delta u}
\end{eqnarray}</p>
<p>now we will find the variational derivative of $E$ w.r.t $u$</p>
<p>\begin{eqnarray*}
\delta E&=& \int \left[w'(u) \delta u+2\epsilon^2 \nabla u \cdot \delta(\nabla u)\right]\\
&=&\int \left[w'(u) \delta u+2\epsilon^2 \nabla u\nabla(\delta u)\right]\\
&=& \int \left[w'(u) \delta u-2\epsilon^2 \Delta u \delta u\right]\\
&=&\int \left[w'(u)-2\epsilon^2 \Delta u\right]\delta u
\end{eqnarray*}</p>
<p>so we get
$$\mu=w'(u)-2\epsilon^2 \Delta u$$</p>
<p>i.e.
$$u_t=\Delta(w'(u)-2\epsilon^2 \Delta u)$$</p>
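<p>The variational derivative $\mu = w'(u)-2\epsilon^2\Delta u$ can also be checked numerically on a 1-D periodic grid by differencing a discretized energy. The double-well $w(u)=\frac14(u^2-1)^2$ and the grid parameters below are illustrative choices, not part of the derivation:</p>

```python
import math

def energy(u, h, eps):
    """Discrete E[u] = sum_i ( w(u_i) + eps^2 ((u_{i+1}-u_i)/h)^2 ) * h, periodic."""
    n = len(u)
    w = lambda s: 0.25 * (s * s - 1) ** 2  # illustrative double-well potential
    return sum((w(u[i]) + eps ** 2 * ((u[(i + 1) % n] - u[i]) / h) ** 2) * h
               for i in range(n))

n, h, eps = 16, 0.1, 0.3
u = [math.sin(2 * math.pi * i / n) for i in range(n)]

for i in range(n):
    # numerical dE/du_i by central differencing
    d = 1e-6
    up, um = u.copy(), u.copy()
    up[i] += d
    um[i] -= d
    numeric = (energy(up, h, eps) - energy(um, h, eps)) / (2 * d)
    # formula: h * ( w'(u_i) - 2 eps^2 (discrete Laplacian of u)_i )
    wprime = u[i] ** 3 - u[i]
    lap = (u[(i + 1) % n] - 2 * u[i] + u[(i - 1) % n]) / h ** 2
    assert abs(numeric - h * (wprime - 2 * eps ** 2 * lap)) < 1e-5
```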
|
730,253 | <p>Let $x,y$ be such that
$$2^x+5^y=2^y+5^x=\dfrac{7}{10}$$</p>
<p>prove or disprove $x=y=-1$ is the only solution for the system.</p>
<p>My try: since
$$2^x-2^y=5^x-5^y$$</p>
<p>But how can I prove or disprove that $x=y$?</p>
| Greg Martin | 16,078 | <p>If $(x,y)$ is a solution, then we have both $2^y=\frac7{10}-5^x$ and
$$
2^y = (5^y)^{\log2/\log5} = \big(\tfrac7{10}-2^x\big)^{\log2/\log5}.
$$
So define $f(x)=\frac7{10}-5^x$ and $g(x)=\big(\tfrac7{10}-2^x\big)^{\log2/\log5}$. We want to show that the only place $f$ and $g$ are equal is at $x=-1$; it suffices to show that $f'(x)>g'(x)$ everywhere to the left of $x=\log_2(0.7)$ (where $g$ is defined), or equivalently, since both derivatives are negative, that $f'(x)/g'(x)<1$.
<p>A calculation shows that $f'(x)/g'(x) = I(x)D(x)$, where
$$
I(x) = \bigg(\frac{\log5}{\log2}\bigg)^2\left(\frac{5}{2}\right)^x \quad\text{and}\quad D(x) = \left(\frac{7}{10}-2^x\right)^{1-\log2/\log5}
$$
are increasing and decreasing functions, respectively. One can thus show that $I(x)D(x) < 1$ on an interval $[a,b]$ by showing that $I(b)D(a)<1$. In this way, one can show separately on each of the intervals
$$
(-\infty,-1.65],\, [-1.65,-1.25],\, [-1.25,-1.05],\, [-1.05,-0.9],\, [-0.9,-0.75],\, [-0.75,\log_2(0.7)]
$$
that $I(x)D(x) < 1$. (On the leftmost interval, use $I(x)D(x) < I(-1.65)\lim_{x\to-\infty} D(x)$.)</p>
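<p>Since $I$ is increasing and $D$ is decreasing, on each interval $[a,b]$ we have $I(x)D(x) \le I(b)D(a)$, and that bound is easy to evaluate numerically (a check, not a proof):</p>

```python
import math

log2, log5 = math.log(2), math.log(5)
I = lambda x: (log5 / log2) ** 2 * 2.5 ** x        # increasing factor
D = lambda x: (0.7 - 2 ** x) ** (1 - log2 / log5)  # decreasing factor

right = math.log(0.7, 2)  # right end of the domain, log_2(0.7)
intervals = [(-1.65, -1.25), (-1.25, -1.05), (-1.05, -0.9),
             (-0.9, -0.75), (-0.75, right)]
# I increasing, D decreasing  =>  on [a,b]: I(x)D(x) <= I(b)D(a)
for a, b in intervals:
    assert I(b) * D(a) < 1, (a, b)
# leftmost interval (-inf, -1.65]: sup D = lim_{x -> -inf} D(x) = 0.7**(1 - log2/log5)
assert I(-1.65) * 0.7 ** (1 - log2 / log5) < 1
```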
|
1,923,298 | <p>I am really struggling with this question and it isn't quite making sense. Please help and if you don't mind answering quickly.</p>
<p>Reflection across $x = −1$</p>
<p>$H(−3, −1), F(2, 1), E(−1, −3)$.</p>
| Alberto Takase | 146,817 | <p>You could argue by induction on the size of the finite set $B$. If $|B|=0$, then $B=\varnothing$. Therefore $A\cup B$ is finite (completing the base case). Now assume that $A\cup B$ is finite for $|B|\le k$. Then you have to show that $A\cup B$ is finite for $|B|=k+1$. [Hint: $B=\{x\}\cup(B\setminus\{x\})$ for some $x\in B$]</p>
|
3,561,860 | <p>Given <span class="math-container">$x_i, x_j \in [0,1]$</span>, and a payoff function <span class="math-container">$u_i(x_i, x_j) = (\theta_i + 3x_j - 4x_i)x_i$</span> if <span class="math-container">$x_j < 2/3$</span>, and <span class="math-container">$= (3x_j-2)x_i$</span> if <span class="math-container">$x_j \geq 2/3$</span>. My question is I don't understand why we have the best-response function <span class="math-container">$b_i(x_j) = {1}$</span> when <span class="math-container">$x_j > 2/3$</span>? </p>
<p><strong>My argument.</strong> For <span class="math-container">$x_j < 2/3$</span>, <span class="math-container">$\partial(u_i)/\partial(x_i) = \theta_i + 3x_j - 8x_i = 0 \iff x_i = (3x_j + \theta_i)/8$</span>. For <span class="math-container">$x_j \geq 2/3$</span>, <span class="math-container">$\partial(u_i)/\partial(x_i) = 3x_j - 2 = 0 \iff x_j = 2/3$</span> for every <span class="math-container">$x_i \in [0,1]$</span>. </p>
<p>Shouldn't this imply player i's pure best response function is: <span class="math-container">$b_i(x_j) = [0, 1]$</span> if <span class="math-container">$ x_j \geq 2/3$</span>, <span class="math-container">$=(\theta_i + 3x_j)/8$</span> if <span class="math-container">$x_j < 2/3$</span>?</p>
| Claude Leibovici | 82,404 | <p>It could be easier to use the fundamental theorem of calculus
<span class="math-container">$$F(x) = \int_{0}^{x} (1+t^2)\cos(t^2)\,dt \implies F'(x)=(1+x^2)\cos(x^2)$$</span> Now, let <span class="math-container">$y=x^2$</span> and you should arrive at
<span class="math-container">$$F'(x)=1+\sum_{n=1}^\infty\frac{n \sin \left(\frac{\pi n}{2}\right)+\cos \left(\frac{\pi n}{2}\right)}{n!} x^{2n}$$</span> and integrating termwise
<span class="math-container">$$F(x)=x+\sum_{n=1}^\infty\frac{n \sin \left(\frac{\pi n}{2}\right)+\cos \left(\frac{\pi n}{2}\right)}{(2n+1)n!} x^{2n+1}$$</span> This is an alernating series. So, if you write
<span class="math-container">$$F(x)=x+\sum_{n=1}^{p-1}\frac{n \sin \left(\frac{\pi n}{2}\right)+\cos \left(\frac{\pi n}{2}\right)}{(2n+1)n!} x^{2n+1}+\sum_{n=p}^\infty\frac{n \sin \left(\frac{\pi n}{2}\right)+\cos \left(\frac{\pi n}{2}\right)}{(2n+1)n!} x^{2n+1}$$</span> The first neglected term is
<span class="math-container">$$R_p=\frac{p \sin \left(\frac{\pi p}{2}\right)+\cos \left(\frac{\pi p}{2}\right)}{(2p+1)p!} x^{2p+1} $$</span> which makes
<span class="math-container">$$R_{2p}=\frac{x^{4 p+1}}{(4 p+1) (2 p)!}\sim \frac{x^{4 p+1}}{2 (2 p+1)!} \qquad\text{and}\qquad R_{2p+1}=\frac{x^{4 p+3}}{(4 p+3) (2p)!}\sim \frac{x^{4 p+3}}{2 (2 p+1)!}$$</span></p>
<p>So, depending on the value of <span class="math-container">$x$</span> we need to solve either
<span class="math-container">$$(2p+1)!=\frac 1{2x} (x^2)^{(2p+1)} 10^k\qquad\text{or}\qquad (2p+1)!=\frac x{2} (x^2)^{(2p+1)} 10^k $$</span> in order to have <span class="math-container">$R \leq 10^{-k}$</span>.</p>
<p>Have a look at <a href="https://math.stackexchange.com/questions/1333449/could-this-approximation-be-made-simpler-solve-n-an-10k">this question</a> of mine; you will find a magnificent approximation provided by @robjohn, an eminent user on this site. Adapted to your problem, this would give
<span class="math-container">$$2p+1 \sim e x^2 \exp\Big[{W\left(2 \log \left(\frac{10^k}{8 \pi x^3}\right)\right) }\Big]-\frac 12$$</span>
<span class="math-container">$$2p+1 \sim e x^2 \exp\Big[{W\left(2 \log \left(\frac{10^k}{8 \pi x}\right)\right) }\Big]-\frac 12$$</span>
where <span class="math-container">$W(.)$</span> is Lambert function.</p>
<p>Applied to <span class="math-container">$x=1$</span> and <span class="math-container">$k=3$</span>, both formulae will give <span class="math-container">$p=5.68784$</span>, that is to say <span class="math-container">$p=6$</span>. </p>
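<p>As a quick numerical cross-check, the series for $F'(x)$ can be summed and compared with $(1+x^2)\cos(x^2)$ directly:</p>

```python
import math

def Fprime_series(x, terms=30):
    """Partial sum of F'(x) = 1 + sum_n [n sin(pi n/2) + cos(pi n/2)]/n! x^(2n)."""
    s = 1.0
    for n in range(1, terms + 1):
        c = (n * math.sin(math.pi * n / 2) + math.cos(math.pi * n / 2)) / math.factorial(n)
        s += c * x ** (2 * n)
    return s

for x in (0.0, 0.5, 1.0, 1.5):
    assert abs(Fprime_series(x) - (1 + x * x) * math.cos(x * x)) < 1e-9
```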
|
3,565,727 | <blockquote>
<p>Show that the antiderivatives of <span class="math-container">$x \mapsto e^{-x^2}$</span> are uniformly continuous in <span class="math-container">$\mathbb{R}$</span>.</p>
</blockquote>
<p>So we know that for a function to be uniformly continuous, for every $\varepsilon>0$ there has to exist a $\delta>0$ such that $|x-y| < \delta$ implies $|f(x)-f(y)| < \varepsilon$.</p>
<p>But how do we go about this since we cannot really integrate <span class="math-container">$e^{-x^2}$</span>and we need the antiderivative?</p>
| Peter Szilas | 408,605 | <p>MVT for integrals:</p>
<p><span class="math-container">$f(x)=\displaystyle{\int_{a}^{x}}e^{-t^2}dt$</span></p>
<p><span class="math-container">$|f(x)-f(y)|=|\displaystyle{\int_{y}^{x}}e^{-t^2}dt|=$</span></p>
<p><span class="math-container">$e^{-s^2}|x-y| \le |x-y|,$</span></p>
<p>where <span class="math-container">$s \in (\min(x,y),\max(x,y))$</span>.</p>
|
975,210 | <p>\begin{align}
\left| f(b)-f(a)\right|&=\left| \int_a^b \frac{df}{dx} dx\right|\\ \ \\
&\leq\left| \int_a^b \left|\frac{df}{dx}\right|\ dx\right|.
\end{align}</p>
<p>I do not understand why the second line is greater or equal than the top equation. Can anyone explain please?</p>
| DeepSea | 101,504 | <p>$\left|\displaystyle \int_a^b \dfrac{df}{dx} dx\right| \leq \displaystyle \int_a^b \left|\dfrac{df}{dx}\right|dx \leq \left|\displaystyle \int_a^b \left|\dfrac{df}{dx}\right|dx\right|$</p>
|
3,770,391 | <p>According to <span class="math-container">${\tt Mathematica}$</span>, the following integral converges if
<span class="math-container">$\beta < 1$</span>.</p>
<p><span class="math-container">$$
\int_{0}^{1 - \beta}\mathrm{d}x_{1}
\int_{1 -x_{\large 1}}^{1 - \beta}\mathrm{d}x_{2}\, \frac{x_{1}^{2} + x_{2}^{2}}{\left(1 - x_{1}\right)\left(1 - x_{2}\right)}
$$</span></p>
<p>How is this possible? For <span class="math-container">$x_{1} = 0$</span> the integration over
<span class="math-container">$x_{2}$</span> hits <span class="math-container">$1$</span> on the boundary, so the denominator vanishes and hence the whole expression should diverge.</p>
<p>How can this integral be convergent?</p>
| RRL | 148,510 | <p>We can prove this integral converges for <span class="math-container">$0 < \beta < 1$</span> without evaluation.</p>
<p>Write this as</p>
<p><span class="math-container">$$\int_0^{1-\beta}\int_{1-x_1}^{1-\beta} \frac{x_1^2 + x_2^2}{(1-x_1)(1-x_2)}\, dx_2 \, dx_1\\ = \underbrace{\int_0^{\beta}\int_{1-x_1}^{1-\beta} \frac{x_1^2 + x_2^2}{(1-x_1)(1-x_2)}\, dx_2 \, dx_1}_{I_1}+ \underbrace{\int_\beta^{1-\beta}\int_{1-x_1}^{1-\beta} \frac{x_1^2 + x_2^2}{(1-x_1)(1-x_2)}\, dx_2 \, dx_1}_{I_2}$$</span></p>
<p>The integrand is continuous over the region of integration for <span class="math-container">$I_2$</span>. If there is a problem with convergence it will arise with the integral <span class="math-container">$I_1$</span>.</p>
<p>When <span class="math-container">$0 \leqslant x_1 \leqslant \beta $</span>, we have <span class="math-container">$1- \beta \leqslant 1- x_1 \leqslant 1$</span> and, making the variable change <span class="math-container">$u = 1- x_2$</span>, we get
<span class="math-container">$$I_1 = -\int_0^{\beta}\int_{1-\beta}^{1-x_1} \frac{x_1^2 + x_2^2}{(1-x_1)(1-x_2)}\, dx_2 \, dx_1 = \int_0^{\beta}\int_{x_1}^{\beta} \frac{x_1^2 + (1-u)^2}{(1-x_1)u}\, du \, dx_1$$</span></p>
<p>Introducing polar coordinates <span class="math-container">$(r,\theta)$</span> where <span class="math-container">$u = r \cos \theta$</span> and <span class="math-container">$x_1 = r \sin \theta$</span>, the integral becomes</p>
<p><span class="math-container">$$I_1 = \int_0^{\pi/4}\int_0^{\beta/\cos \theta} \frac{r^2\sin^2 \theta + (1 - r\cos \theta)^2}{(1- r\sin \theta)r \cos \theta}\, r \, dr\, d\theta \\ = \int_0^{\pi/4}\int_0^{\beta/\cos \theta}\frac{r^2\sin^2 \theta + (1 - r\cos \theta)^2}{(1- r\sin \theta)\cos \theta} \, dr\, d\theta $$</span></p>
<p>With <span class="math-container">$0 \leqslant r \leqslant \beta/\cos \theta$</span> and <span class="math-container">$0 \leqslant \theta \leqslant \pi/4$</span> the denominator satisfies (when <span class="math-container">$\beta < 1$</span>)</p>
<p><span class="math-container">$$(1- r\sin\theta)\cos \theta \geqslant \left(1 - \frac{\beta}{\cos \theta} \sin \theta\right) \cos \theta \geqslant \frac{1 - \beta \tan \theta}{\sqrt{2}} \geqslant \frac{1- \beta}{\sqrt{2}} > 0,$$</span></p>
<p>and the integral <span class="math-container">$I_1$</span> is finite.</p>
|
3,837,745 | <p>The two equations are</p>
<p><span class="math-container">$$\begin{aligned}3x^2-12y=& \ 0\\
3y^2-12x=& \ 0
\end{aligned}$$</span></p>
<p>Using the system of equations, how do I find all of the solutions?</p>
<p>I found the first one to be <span class="math-container">$(0,0).$</span> The answer key says <span class="math-container">$(4,4)$</span> is also a solution. How is the point <span class="math-container">$(4,4)$</span> found?</p>
| Gteal | 725,635 | <p>We have <span class="math-container">$3x^2-12y=0$</span> and <span class="math-container">$3y^2-12x=0$</span>.</p>
<p>Solving for <span class="math-container">$x$</span> in the second equation gives us <span class="math-container">$x=\frac{1}{4}y^2$</span>.</p>
<p>Substituting this into the first equation we have</p>
<p><span class="math-container">\begin{align}
3\left(\frac{1}{4}y^2\right)^2-12y&=0\\
3\left(\frac{1}{16}y^4\right)-12y&=0\\
\frac{3}{16}y^4-12y&=0\\
3y(\frac{1}{16}y^3-4)&=0\\
\frac{1}{16}y^3-4&=0\\
y^3&=64\\
y&=4\\
\end{align}</span>
(The factor <span class="math-container">$3y$</span> that was divided out gives <span class="math-container">$y=0$</span>, recovering the solution <span class="math-container">$(0,0)$</span>.) Can you proceed from here?</p>
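<p>Both candidate points are easy to verify against the original system:</p>

```python
candidates = [(0, 0), (4, 4)]
for x, y in candidates:
    assert 3 * x ** 2 - 12 * y == 0  # first equation
    assert 3 * y ** 2 - 12 * x == 0  # second equation
```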
|
3,273,830 | <p>Okay so I've started to study derivatives and there is this idea of continuity. The book says <em>"a real valued function is considered continuous at a point iff the graph of a function has no break at the point of consideration, which is so iff the values of the function at the neighbouring points are close enough to the value of the the function at the given point"</em></p>
<p>What I don't understand is why the values of the function at the neighbouring points should be close to the value of the function at the given point. Isn't it enough if they are defined; why do they have to be <em>close enough</em> to the value of the function at the given point?</p>
| mlchristians | 681,917 | <p>I hope your textbook also provides a more formal definition of continuity at a point:</p>
<p><span class="math-container">$f(x)$</span> is continuous at a point <span class="math-container">$a$</span> if and only if all three of the following hold---</p>
<p>(1) <span class="math-container">$f(a)$</span> is defined.</p>
<p>(2) <span class="math-container">$\lim_{x \to a} = L$</span> exists. (Keep in mind this is a 2-sided limit.)</p>
<p>(3) <span class="math-container">$f(a) = L$</span></p>
<p>If any one of the above three fail to hold, then <span class="math-container">$f(x)$</span> is not continuous at <span class="math-container">$a$</span>.</p>
<p>"No break" in the graph of the function at <span class="math-container">$a$</span> is but a consequence of the above definition.</p>
|
3,273,830 | <p>Okay so I've started to study derivatives and there is this idea of continuity. The book says <em>"a real valued function is considered continuous at a point iff the graph of a function has no break at the point of consideration, which is so iff the values of the function at the neighbouring points are close enough to the value of the the function at the given point"</em></p>
<p>What I don't understand is why the values of the function at the neighbouring points should be close to the value of the function at the given point. Isn't it enough if they are defined; why do they have to be <em>close enough</em> to the value of the function at the given point?</p>
| Adam Latosiński | 653,715 | <p>If the values of the function in points <span class="math-container">$x$</span> close enough to a specific point <span class="math-container">$x_0$</span> are not arbitrarily close to <span class="math-container">$f(x_0)$</span> the graph will be (usually) broken.</p>
<p>This definition is not completely rigorous, as it assumes we have intuitive understanding what a "graph with no break is", and it can create some misunderstandings. For example, it can be proven that the graph of function <span class="math-container">$$ f(x) = \left\{\begin{array}{l} \sin\frac{1}{x} & \text{for }x\neq 0 \\ 0 &\text{for }x=0 \end{array}\right. $$</span>
is connected, but the function is still non-continuous.</p>
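<p>The failure of continuity at <span class="math-container">$0$</span> can be made concrete: there are points arbitrarily close to <span class="math-container">$0$</span> at which the function's value stays equal to <span class="math-container">$1$</span>, far from <span class="math-container">$f(0)=0$</span>. A small numeric illustration:</p>

```python
import math

f = lambda x: math.sin(1 / x) if x != 0 else 0.0
# the points x_k = 1/(pi/2 + 2*pi*k) approach 0, yet f(x_k) = 1 while f(0) = 0
xs = [1 / (math.pi / 2 + 2 * math.pi * k) for k in range(6)]
assert xs[-1] < 0.04                            # the points really do approach 0
assert all(abs(f(x) - 1.0) < 1e-9 for x in xs)  # values stay at 1, not near f(0) = 0
```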
<p>It's better to consider "iff the values of the function at the neighbouring points are close enough to the value of the the function at the given point" as the definition, or, rigorously speaking </p>
<blockquote>
<p>Function <span class="math-container">$F : X \rightarrow Y$</span> is called <em>continuous</em> at point <span class="math-container">$x_0\in X$</span>, iff <span class="math-container">$$ \forall \epsilon>0 \exists \delta>0 \forall x\in X: \Big((|x-x_0|<\delta) \Rightarrow \big(|f(x)-f(x_0)|< \epsilon\big)\Big)$$</span></p>
</blockquote>
<p>which means: "For arbitrary distance <span class="math-container">$\epsilon$</span> I can choose a neighbourhood of <span class="math-container">$x_0$</span> small enough for the values of function to be within distance of <span class="math-container">$\epsilon$</span> from the value of function at point <span class="math-container">$x_0$</span>".</p>
|
523,376 | <p>Suppose $a_i$ is a sequence of positive integers. Define $a_1 = 1$, $a_2 = 2$ and $a_{n+1} = 2a_n + a_{n-1}$. Does it follow that </p>
<p>$$ \gcd(a_{2n+1} , 4 ) = 1 $$ ???</p>
<p>I'm trying to see this by induction: assuming the above holds, we need to see that $\gcd(a_{2n+3} , 4 ) = 1$.</p>
<p>But, $\gcd(a_{2n+3} , 4 ) = n_0(2a_{2n+1} + a_{2n-1}) + 4n_1$ for integers $n_0, n_1$. But this quantity does not seem to give me $1$. Can someone help me with this problem? thanks</p>
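<p>This is not a proof, but the claim is easy to test empirically. (Note that modulo $2$ the recurrence gives $a_{n+1} \equiv a_{n-1}$, so the odd-indexed terms stay odd, hence coprime to $4$.)</p>

```python
from math import gcd

a = [1, 2]                       # a_1 = 1, a_2 = 2 (stored 0-indexed)
for _ in range(200):
    a.append(2 * a[-1] + a[-2])  # a_{n+1} = 2 a_n + a_{n-1}

# the terms a_{2n+1} are the 1-based odd-indexed entries a_1, a_3, a_5, ...
odd_terms = a[0::2]
assert all(gcd(t, 4) == 1 for t in odd_terms)
assert all(t % 2 == 1 for t in odd_terms)  # in fact they are all odd
```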
| ulilaka | 42,323 | <p>In order to get a parametrization defined on an open subset of $\mathbf{R}^{2}$ you need to restrict your angular parameter $u$ to the open interval $(0, 2\pi)$. But the image of $x(u,v)$ in this case is not precisely the Möbius band, but the band minus one vertical line where it should "close". If you take $u$ belonging to the interval $[0, 2 \pi]$, then you cover the band, but you also obtain a normal field that is not globally continuous. Indeed, doing the calculations for the central line of your band (that is, to the curve $v = 0$), you may check that $N(0,0) = - N(2\pi, 0)$, where $N$ is a normal field obtained by taking the cross product of the coordinate derivatives $x_{u}$ and $x_{v}$.</p>
|
310,669 | <p>This is related to a course I'm taking in computer science theory. </p>
<p>Let $\sum$ be an alphabet. Then the set of all strings over $\sum$, denoted as $\sum^*$ has the operation of concatenation (adjoining two strings end to end). Clearly, concatenation is associative, $\sum^*$ is closed under concatenation, and the identity element is the empty string. I'm also taking a course in modern algebra, so I naturally ask can $\sum^*$ be formed into a group? Three of the four group axioms are satisfied.</p>
| Ittay Weiss | 30,953 | <p>No, $\Sigma^*$ is not a group (unless $\Sigma = \emptyset $, in which case $\Sigma ^*$ is a trivial group with one element). The reason is that the only element having an inverse is the empty word. So if $a\in \Sigma$, then $a$ as an element of $\Sigma ^*$ does not have an inverse (rigorously, note that the length of words never decreases upon concatenation). </p>
<p>What $\Sigma ^*$ is, is an example of a monoid. A monoid is an algebraic structure satisfying all the axioms of a group except the requirement of inverses. In fact, $\Sigma ^*$ is the free monoid on the set $\Sigma$. There is also the obvious notion of a commutative monoid, and a certain quotient of $\Sigma^*$, obtained by allowing elements to commute, gives rise to the free commutative monoid $\Sigma ^+$. </p>
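<p>The length argument can be illustrated directly with Python strings, which model the free monoid on their character set:</p>

```python
# Sigma* for Sigma = {'a', 'b'}: concatenation is string +, identity is ""
assert "ab" + "" == "ab" == "" + "ab"          # the empty word is the identity
assert ("a" + "b") + "a" == "a" + ("b" + "a")  # concatenation is associative
# no inverses: concatenation never shortens a word, so "a" + w == "" is impossible
assert all(len("a" + w) > 0 for w in ["", "a", "ba"])
```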
|
522,714 |
<p>$$
dz_t \sim O\left(\sqrt{dt}\,\right)
$$</p>
<p>$z$ is a Brownian motion random variable, for reference. I just don't understand what the $\sim O$ part means. I've looked up the page for Big O notation on wikipedia because I thought it might be related, but I can't see the link.</p>
| Mark Bennet | 2,906 | <p>Note that $\cfrac 1x=\cfrac 1{\sqrt 3+\sqrt 2}=\cfrac {\sqrt 3-\sqrt 2}{\sqrt 3-\sqrt 2}\cdot\cfrac 1{\sqrt 3+\sqrt 2}=\sqrt 3-\sqrt 2$</p>
<p>So you need to find $(\sqrt 3+\sqrt 2)^4-(\sqrt 3-\sqrt 2)^4$</p>
<p>Now note that the terms which have even powers of $\sqrt 2$ will cancel, and the odd powers will double - so we are left with the two terms in the binomial expansion with coefficient $\binom 41=\binom 43=4$ so we get $$2\cdot\binom 41\left((\sqrt3)^3\sqrt2+\sqrt 3(\sqrt 2)^3\right)=2\cdot4\cdot\sqrt 6\cdot(3+2)=40\sqrt 6$$</p>
<p>I've done this longhand, to show the detail.</p>
|
3,540,593 | <p>Are <span class="math-container">$n$</span> vectors orthogonal if performing the inner product of all <span class="math-container">$n$</span> vectors at once yields zero?</p>
<p>In other words, could I say that <span class="math-container">$\hat{i} \perp \hat{j} \perp \hat{k}$</span>?</p>
<p>For example, suppose I have the three vectors <span class="math-container">$\hat{i} = <2, 0, 4>; \hat{j}= <0, 1, 0>; \hat{k} = <2, 0, 1>$</span>. Their inner product would be <span class="math-container">$2(0)(2) + 0(1)(0) + 4(0)(1) = 0$</span>. Can I say that the three vectors <span class="math-container">$\hat{i}, \hat{j}, $</span> and <span class="math-container">$ \hat{k}$</span> are orthogonal, or do I have to multiply out each pair?</p>
| Eduline | 743,749 | <p>Given that <span class="math-container">$p$</span> is an odd prime, we need to find the number of positive integers <span class="math-container">$k$</span> with <span class="math-container">$1<k<p$</span> such that <span class="math-container">$k^2 \equiv 1 \pmod {p}$</span>. Now <span class="math-container">$k^2 \equiv 1 \pmod {p}\implies p|k^2-1\implies p|(k-1)(k+1)$</span>, and since <span class="math-container">$p$</span> is prime, <span class="math-container">$p|k-1$</span> or <span class="math-container">$p|k+1$</span>. </p>
<p>Case 1 <span class="math-container">$\left(p|k+1\right)$</span>: Given that <span class="math-container">$1<k<p$</span>, we have <span class="math-container">$2<k+1<p+1$</span>, i.e. <span class="math-container">$3\le k+1\le p$</span>. Since <span class="math-container">$p$</span> is prime, <span class="math-container">$p\nmid j$</span> for all <span class="math-container">$3\le j<p$</span>. But <span class="math-container">$p|p\implies k+1=p\implies k=p-1.$</span> Therefore, this case yields only one solution, that is <span class="math-container">$k=p-1$</span>. </p>
<p>Case 2 <span class="math-container">$\left(p|k-1\right)$</span>: Again we have <span class="math-container">$1<k<p \implies 0<k-1<p-1 \implies 1\le k-1 \le p-2.$</span> Again it is obvious that <span class="math-container">$p\nmid j, \forall$</span> <span class="math-container">$1\le j\le p-2$</span>. Therefore, this case yields no solution. </p>
<p>After analyzing all the cases, we can conclude that there is only one positive integer <span class="math-container">$k$</span>, such that <span class="math-container">$1<k<p$</span> and <span class="math-container">$k^2\equiv 1 \pmod{p}$</span> and that positive integer is <span class="math-container">$k=p-1$</span>. </p>
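<p>For what it's worth, this conclusion is easy to spot-check by brute force (a quick sketch; the helper name is my own):</p>

```python
# Brute-force check: for several odd primes p, the only k with 1 < k < p
# and k^2 ≡ 1 (mod p) should be k = p - 1.
def square_roots_of_one(p):
    """Return all k with 1 < k < p and k*k congruent to 1 mod p."""
    return [k for k in range(2, p) if (k * k) % p == 1]

for p in [3, 5, 7, 11, 13, 101]:
    assert square_roots_of_one(p) == [p - 1]
print("only k = p - 1 works for every odd prime tested")
```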
|
162,863 | <p>if I have a quaternion which describes an arbitrary rotation, how can I get for example only the half rotation or something like 30% of this rotation?</p>
<p>Thanks in advance!</p>
| rschwieb | 29,335 | <p>I can't be sure what formula for a general rotation you have, but it <em>should</em> depend upon an angle through which you are rotating. Doesn't your formula look something like $R(\Theta, u)$ where $\Theta$ is the angle of rotation, and $u$ is a unit vector which tells you the axis of rotation?</p>
<p>If so, you just replace $\Theta$ with $p\Theta$ where $p$ is just some decimal form of a percentage.</p>
|
162,863 | <p>if I have a quaternion which describes an arbitrary rotation, how can I get for example only the half rotation or something like 30% of this rotation?</p>
<p>Thanks in advance!</p>
| Kallus | 32,086 | <p>I believe what you're looking for are exponent and logarithm formulas for quaternions, which can be found on the Wikipedia page on <a href="https://en.wikipedia.org/wiki/Quaternion#Exponential.2C_logarithm.2C_and_power" rel="nofollow">quaternions</a>. The Wikipedia page even gives a formula for raising a quaternion to an arbitrary power, which is exactly what you want. If your original rotation is given by $q$, and you want to take 30% of this rotation, you simply take $q^{0.3}$.</p>
|
4,496,772 | <p>Let <span class="math-container">$V$</span> be a vector space of dimension <span class="math-container">$n$</span> over a finite field <span class="math-container">$F$</span> with <span class="math-container">$q$</span> elements, and let <span class="math-container">$U\subseteq V$</span> be a subspace of dimension <span class="math-container">$k$</span>. How many subspaces <span class="math-container">$W\subseteq V$</span> of dimension <span class="math-container">$m$</span> such that <span class="math-container">$U\subseteq W $</span> do we have <span class="math-container">$(k\leq m\leq n)$</span>?</p>
<p><strong>Hint</strong>: look at the set:
<span class="math-container">$$
\{ (U,W) \mid U \subseteq W \subseteq V;
\, \dim(U)=k, \, \dim(W)=m \}
$$</span></p>
<p>I have no idea how to use the hint. I know that the number of subspaces of dimension <span class="math-container">$k$</span> is:
<span class="math-container">$$
\frac{\prod_{i=0}^{k-1} (q^{n-i}-1)}{\prod_{i=0}^{k-1} (q^{k-i}-1)}
$$</span>
and I don't know how to proceed from here.</p>
| cigar | 1,070,376 | <p>Given a basis <span class="math-container">$\{v_1,v_2,\dots, v_k\}$</span> for <span class="math-container">$U$</span>, how many ways can you extend the basis to a larger linearly independent set?</p>
<p>So, the next vector can't be a linear combination of the <span class="math-container">$v_1,\dots, v_k$</span>. How many linear combinations are there? There are <span class="math-container">$q^k$</span>. So since the vector space has <span class="math-container">$q^n$</span> elements, we get <span class="math-container">$q^n-q^k$</span> ways of adding one more linearly independent vector.</p>
<p>Similarly, for the next one there are <span class="math-container">$q^n-q^{k+1}$</span>.</p>
<p>So, extending the fixed basis of <span class="math-container">$U$</span> up to dimension <span class="math-container">$m$</span>, we get <span class="math-container">$$\prod_{j=0}^{m-k-1}(q^n-q^{k+j})$$</span> ordered extensions.</p>
<p>Now we need to remember that each <span class="math-container">$m$</span>-dimensional subspace <span class="math-container">$W$</span> containing <span class="math-container">$U$</span> arises from <span class="math-container">$\prod_{j=0}^{m-k-1}(q^m-q^{k+j})$</span> of these extensions (by similar reasoning: count the extensions whose span is <span class="math-container">$W$</span>). So we divide.</p>
<p>Get <span class="math-container">$$\prod_{j=0}^{m-k-1}\frac{q^n-q^{k+j}}{q^m-q^{k+j}}$$</span></p>
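<p>A small brute-force sanity check over <span class="math-container">$\mathbb F_2$</span> (my own example, with vectors of <span class="math-container">$\mathbb F_2^4$</span> encoded as bit masks): count the <span class="math-container">$2$</span>-dimensional subspaces containing a fixed line and compare with the product formula.</p>

```python
# Count the 2-dimensional subspaces of F_2^4 containing a fixed line U,
# and compare with prod_{j=0}^{m-k-1} (q^n - q^{k+j}) / (q^m - q^{k+j}).
q, n, k, m = 2, 4, 1, 2
u = 0b0001                                    # basis vector of the fixed line U

planes = set()
for v1 in range(1, 1 << n):
    for v2 in range(1, 1 << n):
        if v2 in (0, v1):
            continue
        span = frozenset({0, v1, v2, v1 ^ v2})  # span of two independent vectors
        if u in span:
            planes.add(span)

formula = 1
for j in range(m - k):
    formula *= (q**n - q**(k + j)) // (q**m - q**(k + j))

print(len(planes), formula)   # both should be 7
```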
|
1,422,859 | <p>$$\sqrt{1000}-30.0047 \approx \varphi $$
$$[(\sqrt{1000}-30.0047)^2-(\sqrt{1000}-30.0047)]^{5050.3535}\approx \varphi $$
Simplifying the above expression we get<br>
$$1.0000952872327798^{5050.3535}=1.618033..... $$
Is this really true that
$$[\varphi^2-\varphi]^{5050.3535}=\varphi $$</p>
| Claude Leibovici | 82,404 | <p><em>This is not an answer but it is too long for a comment.</em></p>
<p>Working with unlimited precision, let $$a=\sqrt{1000}-\frac{300047}{10000}\approx 1.6180766016837933200$$ $$(a^2-a)^{\frac {50503535}{10000}}\approx 1.6180331121536741389$$ $$\phi\approx 1.6180339887498948482$$ Using as exponent $5050.3592$ (same number of digits as in your post) instead (and doing the same), you would get $$(a^2-a)^{\frac {50503592}{10000}}\approx 1.6180339909260630347$$ which is closer but still not exact (Vincenzo Oliva clearly explained the problem).</p>
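<p>The underlying point — that $\varphi^2-\varphi=1$ exactly, so $(\varphi^2-\varphi)^x=1\ne\varphi$ for every exponent — can also be seen in ordinary floating point (a quick sketch):</p>

```python
import math

phi = (1 + math.sqrt(5)) / 2          # the golden ratio
a = math.sqrt(1000) - 30.0047         # the approximation from the question

# phi satisfies phi^2 - phi - 1 = 0 exactly, so (phi^2 - phi)^x = 1 for any x.
assert abs(phi**2 - phi - 1) < 1e-12

# The question's near-identity only works because a is NOT exactly phi:
base = a * a - a                      # slightly above 1
print(base, base ** 5050.3535)        # the tiny excess over 1, amplified
```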
|
769,272 | <p>So, I'm trying to solve the wave equation with the Fourier transform, and I'm struggling to figure out how to apply the BC's. Here's the problem I considered:</p>
<p>$$\frac{d^2u}{dt^2}=c^2\frac{d^2u}{dx^2}$$
$$u(x,0)=g(x)$$
$$\frac{du}{dt}=0$$ at t = 0</p>
<p>Running through computations, I find that the Fourier transform solution is as follows:</p>
<p>$$F(u(\lambda,t))=A(\lambda)e^{ic\lambda t}+B(\lambda)e^{-ic \lambda t}$$</p>
<p>How would I apply boundary conditions to this and then transform back to u? Any detailed explanation on this would be appreciated (I'm still learning this stuff). I'm thinking it may be easier to work with cosines and sines.</p>
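<p>For what it's worth: imposing $u_t(x,0)=0$ forces $A(\lambda)=B(\lambda)$, and $u(x,0)=g$ then gives $A=B=\hat g/2$; inverting the transform yields d'Alembert's formula $u(x,t)=\tfrac12\big(g(x-ct)+g(x+ct)\big)$. A quick numerical check of the initial conditions, with an assumed profile $g(x)=e^{-x^2}$ (a sketch, not part of the original question):</p>

```python
import math

c = 2.0
g = lambda x: math.exp(-x * x)                 # assumed initial profile

def u(x, t):
    """d'Alembert solution for u(x,0) = g(x), u_t(x,0) = 0."""
    return 0.5 * (g(x - c * t) + g(x + c * t))

# u(x,0) = g(x) exactly, and u_t(x,0) = 0 (checked with a central difference,
# which vanishes by the symmetry t -> -t):
for x in [-1.0, 0.0, 0.3, 2.0]:
    assert u(x, 0.0) == g(x)
    dt = 1e-6
    assert abs(u(x, dt) - u(x, -dt)) / (2 * dt) < 1e-6
```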
| doppz | 48,746 | <p>There's a remarkable theorem by Gauss (that's actually what it's called: the <a href="http://en.wikipedia.org/wiki/Theorema_Egregium" rel="nofollow">Theorema Egregium</a>) that says Gaussian curvature $K$ can be found using only the first fundamental form. There are many different ways to then compute it, one of which (perhaps the most straightforward) is via the <a href="http://mathworld.wolfram.com/BrioschiFormula.html" rel="nofollow">Brioschi formula</a>.</p>
|
2,399,216 | <p>How to find the coefficient of $x^4$ in the expansion of $(1+x+x^2)^{20}$?</p>
| Community | -1 | <p><strong>HINT</strong></p>
<p>Let $f(x)=(1+x+x^2)^{20}$. Use the fourth derivative at $x=0$, that is $f^{''''}(0)$.</p>
<p>If $c$ is the wanted coefficient, then $f^{''''}(x)=4! \cdot c + x \cdot g(x)$. Now making $x=0$ one gets $c = \frac {f^{''''}(0)}{4!}$ </p>
|
2,399,216 | <p>How to find the coefficient of $x^4$ in the expansion of $(1+x+x^2)^{20}$?</p>
| robjohn | 13,854 | <p>$$
\begin{align}
\left[x^n\right]\left(1+x+x^2\right)^{20}
&=\left[x^n\right]\left(\frac{1-x^3}{1-x}\right)^{20}\\
&=\left[x^n\right]\overbrace{\sum_{j=0}^{20}\binom{20}{j}\left(-x^3\right)^j}^{\left(1-x^3\right)^{20}}\overbrace{\sum_{k=0}^\infty\binom{-20}{k}(-x)^k\vphantom{\sum_{j=0}^{20}}}^{(1-x)^{-20}}\\
&=(-1)^n\sum_{j=0}^{\lfloor n/3\rfloor}\binom{20}{j}\binom{-20}{n-3j}\tag{$k=n-3j$}\\
&=\sum_{j=0}^{\lfloor n/3\rfloor}(-1)^j\binom{20}{j}\binom{n-3j+19}{19}\\
\end{align}
$$
For $n=4$, there are just two terms:
$$
\binom{20}{0}\binom{23}{19}-\binom{20}{1}\binom{20}{19}=8455
$$</p>
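<p>The value is easy to double-check by multiplying out coefficient lists (a quick sketch; the helper is my own):</p>

```python
# Independent check: multiply out (1 + x + x^2)^20 as a coefficient list
# and read off the coefficient of x^4.
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (index = power)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

poly = [1]
for _ in range(20):
    poly = poly_mul(poly, [1, 1, 1])   # one more factor of 1 + x + x^2

print(poly[4])   # -> 8455
```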
|
3,791,350 | <p>An example question is:</p>
<p>In radian measure, what is <span class="math-container">$\arcsin \left(\frac{1}{2}\right)$</span>?</p>
<p>Select one:</p>
<p>a. <span class="math-container">$0$</span></p>
<p>b. <span class="math-container">$\frac{\pi}{6}$</span></p>
<p>c. <span class="math-container">$\frac{\pi}{4}$</span></p>
<p>d. <span class="math-container">$\frac{\pi}{3}$</span></p>
<p>e. <span class="math-container">$\frac{\pi}{2}$</span></p>
<hr />
<p>So, in the exam, I will be given only four function calculator. And is it possible to calculate this kind of trigo function? Or, do I have to memorise common values of trigo functions? Is there any tricks and tips for this problem?</p>
| B. Goddard | 362,009 | <p>There's a sort of silly way to keep the sines of common angles in your head. The common angles are:</p>
<p><span class="math-container">$$0, \frac{\pi}{6}, \frac{\pi}{4}, \frac{\pi}{3}, \frac{\pi}{2}.$$</span></p>
<p>The sine of each of these, in order is:</p>
<p><span class="math-container">$$\frac{\sqrt{0}}{2}, \frac{\sqrt{1}}{2}, \frac{\sqrt{2}}{2}, \frac{\sqrt{3}}{2}, \frac{\sqrt{4}}{2}.$$</span></p>
<p>The cosines are the reverse order, and then you have all the trig functions for these angles.</p>
<p>(But yes, I think it makes more sense to just know the two special triangles involved.)</p>
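<p>The <span class="math-container">$\sqrt{n}/2$</span> pattern is easy to confirm numerically (a quick sketch):</p>

```python
import math

angles = [0, math.pi / 6, math.pi / 4, math.pi / 3, math.pi / 2]
for i, theta in enumerate(angles):
    # sin of the i-th common angle is sqrt(i)/2; cos runs through the
    # same values in reverse order.
    assert math.isclose(math.sin(theta), math.sqrt(i) / 2, abs_tol=1e-12)
    assert math.isclose(math.cos(theta), math.sqrt(4 - i) / 2, abs_tol=1e-12)
print("sin pattern sqrt(0..4)/2 confirmed")
```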
|
618,823 | <p>Reading a book I encountered the following claim, which I don't understand. Let $X$ be a smooth projective curve over $\mathbb{C}$, and $q\in X$ a rational point.
Denote by $\pi_i: X^n\to X$ the $i$-th projection of the cartesian $n$-product of the curve onto $X$ itself. The claim is that</p>
<blockquote>
<p>The line bundle $\bigotimes_{i=1}^n \pi_i^* \mathcal{O}_X(q) $ is clearly ample.</p>
</blockquote>
<p>Could you point me in the right direction please? Is there a specific criterion for ampleness I should immediately see it's satisfied?</p>
<p>I know that, being each $\pi_i$ finite and surjective, every $\pi_i^*\mathcal{O}_X(q)$ is ample because each $\mathcal{O}_X(q)$ is. But how can one conclude from here that the tensor product of them is?</p>
<p>PS: Is there any way to see this geometrically?</p>
| Andrew D. Hwang | 86,418 | <p>Geometrically the tensor product $L$ is ample because a sufficiently high power $L^N$ is the tensor product of pullbacks of very ample bundles on $X$.</p>
<p>In more detail, say the $N$-fold tensor power $\mathcal{O}_X(q)^N$ is very ample, and that the sections $s_1, \dots, s_m$ projectively embed $X$. For each $i = 1, \dots, n$ and $j = 1, \dots, m$, let $s_{i,j} = \pi_i^* s_j$ denote the pullback of $s_j$ by projection to the $i$th factor. The $m^n$ sections $s_{1,f(1)} \otimes \dots \otimes s_{n,f(n)}$ of $L^N$ (as $f$ ranges over all mappings from $\{1, \dots, n\}$ to $\{1, \dots, m\}$) embed the $n$-fold product $X \times \dots \times X$.</p>
<p>FWIW, sections of the pullback $\pi_i^* \mathcal{O}_X(q)$ are "non-constant only in the $i$th factor", so it appears they don't separate points except (possibly) in the $i$th factor. :)</p>
|
3,414,009 | <p>Given functions
<span class="math-container">$$
f(x)=\sum_{i=0}^\infty a_ix^i\,\,\,\text{ and }\,\,\,g(x)=\sum_{j=0}^\infty b_jx^j
$$</span>
the following simplification
<span class="math-container">$$
f(x)g(x)=\left(\sum_{i=0}^\infty a_ix^i\right)\left(\sum_{j=0}^\infty b_jx^j \right)=\sum_{i=0}^\infty\sum_{j=0}^\infty a_ib_jx^{i+j}
$$</span>
is relatively intuitive. I guess a rather simple and informal proof would consist of expanding a few terms and recognizing the emerging pattern. For example,
<span class="math-container">$$
(a_0+a_1x+a_2x^2+\cdots)(b_0+b_1x+b_2x^2+\cdots)\\
=a_0b_0+a_0b_1x+a_0b_2x^2+\cdots+a_1b_0x+a_1b_1x^2+a_1b_2x^3+\cdots
$$</span>
and recognize this as a double summation. However, I fear some of my students might be confused by it. Is there a more straighforward way to argue in order to justify the intuition behind such simplification?</p>
| José Carlos Santos | 446,262 | <p>If <span class="math-container">$f(x)=\sum_{n=0}^\infty a_nx^n$</span>, then <span class="math-container">$a_n=\frac{f^{(n)}(0)}{n!}$</span> and the same thing applies to <span class="math-container">$g$</span>. But then, if <span class="math-container">$f(x)g(x)=\sum_{n=0}^\infty c_nx^n$</span>:</p>
<p><span class="math-container">\begin{align}c_0&=(f\times g)(0)=a_0b_0;\\c_1&=(f\times g)'(0)=f(0)g'(0)+f'(0)g(0)\\&=a_0b_1+a_1b_0;\\c_2&=\frac{(f\times g)''(0)}2\\&=\frac{f(0)g''(0)}2+f'(0)g'(0)+\frac{f''(0)g(0)}2\\&=f(0)\frac{g''(0)}2+f'(0)g'(0)+\frac{f''(0)}2g(0)\\&=a_0b_2+a_1b_1+a_2b_0\end{align}</span></p>
<p>and so on…</p>
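<p>The resulting coefficient pattern <span class="math-container">$c_n=\sum_{k=0}^n a_kb_{n-k}$</span> can be checked against direct multiplication of truncated series (a sketch; the two series used are my own example):</p>

```python
# Compare the Cauchy-product coefficients c_n = sum_k a_k * b_{n-k}
# with direct term-by-term multiplication of two truncated power series.
from fractions import Fraction
from math import factorial

N = 8
a = [Fraction(1, factorial(n)) for n in range(N)]   # e^x
b = [Fraction(1) for n in range(N)]                 # 1/(1-x)

cauchy = [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(N)]

direct = [Fraction(0)] * N
for i in range(N):
    for j in range(N - i):                          # keep only degrees < N
        direct[i + j] += a[i] * b[j]

assert cauchy == direct
print(cauchy[:4])   # partial sums of 1/k!, the coefficients of e^x/(1-x)
```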
|
1,577,838 | <p>Let $p$ and $q$ be primes such that $p=4q+1$. Then $2$ is a primitive root modulo $p$.</p>
<p>Proof. </p>
<p>Note that $q\not=2$ since $4\cdot2+1=9$ is not prime. $\mathrm{ord}_p(2)\vert p-1=4q$, so $\mathrm{ord}_p(2)=1,\;2,\;4,\;q,\;2q,\;\mathrm{or}\;4q$.</p>
<p>Clearly $\mathrm{ord}_p(2) \not= 1$, and $\mathrm{ord}_p(2)\not=2$ since $4\equiv1(\text{mod }p) \implies p=3$ but $3\not=4q+1$ for any positive integer $q$. Also $\mathrm{ord}_p(2)\not=4$ because $2^4=16\equiv1(\text{mod }p)\implies p=3 \text{ or } 5$. It has been shown that $p\not=3$ and $p=5\implies q=1$ which is not prime.</p>
<p>Suppose $\mathrm{ord}_p(2)=q$. Then $2^q\equiv1(\text{mod }p)$. Let $g$ be a primitive root modulo $p$, so that $2\equiv g^i(\text{mod }p)$ for some $i\in\mathbb{Z}$. Then $$g^{iq}\equiv1(\text{mod }p)\implies p-1\vert iq\implies iq=k(p-1)=4kq\implies i=4k$$ for some $k\in\mathbb{Z}$. So $2\equiv g^{4k}(\text{mod }p)$ and $2$ is a square modulo $p$, which means $p\equiv\pm1(\text{mod }8)$. Hence, either $8\vert p-1$ or $8\vert p+1$. If $8\vert p-1$, then $p-1=4q=8l\implies q=2l$ for some $l\in\mathbb{Z}$, so $q$ is even, which is impossible since $q\not=2$. If instead $8\vert p+1$, then $p+1=4q+2=8l\implies2q+1=2l$ for some $l\in\mathbb{Z}$, which is impossible. Thus $\mathrm{ord}_p(2)\not=q$.</p>
<p>Suppose $\mathrm{ord}_p(2)=2q$. Then $2^{2q}\equiv1(\text{mod }p)$. Let $g$ be a primitive root modulo $p$, so that $2\equiv g^i(\text{mod }p)$ for some $i\in\mathbb{Z}$. Thus $$g^{2iq}\equiv1(\text{mod }p)\implies p-1\vert 2iq\implies2iq=4kq\implies i=2k$$ for some $k\in\mathbb{Z}$, so $2$ is a square modulo $p$, which has been shown to be false. Therefore $\mathrm{ord}_p(2)\not=2q$.</p>
<p>Hence $\mathrm{ord}_p(2)=4q=p-1$ and 2 is a primitive root modulo $p$. $\square$</p>
<p>I feel confident that my proof is correct, mostly because I cannot find any obvious errors. Are there any major errors in the proof? If not, are there any details I should have included? For example, should I have specified that $1\leq i\leq p-1$, or was I okay to be lazy there? Is there anything that was unnecessary to include in the proof? Does it need to be 'cleaned up?' Thank you for your time.</p>
| Jyrki Lahtonen | 11,619 | <p>Looks good to me. As you observed the key for ruling out the possible orders $q$ and $2q$ is that in either case $2$ would end up being a quadratic residue modulo $p$ - in violation of (an extension of) the law of quadratic reciprocity.</p>
<p>You can actually combine those two cases. Observe that irrespective of whether the order would be $q$ or $2q$ you get the congruence $2^{2q}\equiv1\pmod p$. After all, if $2^q\equiv1$ then also $2^{2q}\equiv1$. So it suffices to show that the congruence $2^{2q}\equiv1$ leads to a contradiction.</p>
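<p>For what it's worth, the statement is easy to spot-check numerically (a sketch; <code>order</code> is my own helper):</p>

```python
# Spot-check: for primes p = 4q + 1 with q an odd prime, the multiplicative
# order of 2 mod p should be exactly p - 1.
def order(a, p):
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

for q in [3, 7, 13, 37, 43]:          # q prime with p = 4q + 1 also prime
    p = 4 * q + 1
    assert order(2, p) == p - 1
print("2 is a primitive root mod p in every case tested")
```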
|
2,541,131 | <p>$$ S = \sum_{n=1}^{99} \frac{(5)^{100}}{(25)^{n} + (5)^{100}}$$</p>
<p>I tried writing out the first and last terms to obtain a similar form in the denominator, but in vain. The denominators only match in alternate terms.
I also tried adding and subtracting 1 to look for a $v(n)$ and $v(n-1)$ pair of terms, but that isn't anywhere near.</p>
| lab bhattacharjee | 33,337 | <p>Hint:</p>
<p>$$f(n)=\dfrac{a^m}{a^n+a^m}$$</p>
<p>$$f(2m-n)=\dfrac{a^m}{a^{2m-n}+a^m}=\dfrac1{a^{m-n}+1}=\dfrac{a^n}{a^m+a^n}=1-f(n)$$</p>
<p>Here $2m=100,a=25$</p>
<p>Pair the terms $n$ and $2m-n$ (i.e. $n$ and $100-n$) for $n=1,\dots,49$ and add; the middle term is $f(50)=\frac12$, so the sum is $49+\frac12=\frac{99}{2}$</p>
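<p>The pairing can be verified exactly with rational arithmetic (a quick sketch):</p>

```python
from fractions import Fraction

# Exact check: f(n) + f(100 - n) = 1, and the middle term is 1/2,
# so the whole sum should be 49 + 1/2 = 99/2.
S = sum(Fraction(5**100, 25**n + 5**100) for n in range(1, 100))
print(S)   # -> 99/2
```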
|
2,541,131 | <p>$$ S = \sum_{n=1}^{99} \frac{(5)^{100}}{(25)^{n} + (5)^{100}}$$</p>
<p>I tried writing first and end terms to make a similar face in the denominator, but in vain. The denominators are getting same in alternate terms.
I tried adding and subtracting by 1 to look after a v(n) and v(n-1) pair of terms also, but that just isn't anywhere near.</p>
| Gribouillis | 398,505 | <p>Hint:</p>
<p>$$\frac{1}{q + 1} + \frac{1}{\frac{1}{q}+1} =\ ?$$</p>
|
992,487 | <p>Consider the following problem.</p>
<p>A collection of $n$ countries $C_1, \dots, C_n$ sit on an EU commission. Each country $C_i$ is assigned a voting weight $c_i$. A resolution passes if it has the support of a proportion of the panel of at least $A$, taking into account voting weights. Each country $C_i$ has a probability $p_i$ of voting for the resolution, and each country acts independently of the others.</p>
<p>The problem is to assign the voting weights so as to maximize the probability that any given resolution will pass. I am interested in answering the question asymptotically under something like the following assumptions.</p>
<ol>
<li><p>The number of countries $n$ is very large. (Perhaps the EU's $n = 28$ is already not so far from this!)</p></li>
<li><p>The proportion of votes held by any one country is bounded above by $M/n$, for some fixed reasonable number $M$.</p></li>
<li><p>$p_i > A$ for all $i$.</p></li>
<li><p>The probabilities $p_i$ are bounded away from $1$.</p></li>
</ol>
<p>Perhaps some of these conditions can be relaxed, or perhaps additional assumptions are needed, but these are the ones that seem to be needed for my arguments below.</p>
<p>I have tried to answer the question in an approximate and non-rigorous way as follows. </p>
<p>Let $X_i$ be the random variable equal to $1$ when country $C_i$ votes for the resolution, and $0$ otherwise. Now let $V = \sum c_i X_i$. By a suitably general version of the central limit theorem (the Berry-Esseen inequality?), $V$ follows approximately a normal distribution with mean $\sum c_i p_i$ and variance $\sum c_i^2 p_i(1-p_i)$. The probability that we would like to maximize is
$$P\left( V \geq A\sum c_i \right).$$ </p>
<p>If we let $F(z)$ be the cumulative distribution function for the standard normal distribution, this probability can be approximated by $F(z)$ where
$$z = \frac{\sum c_i(p_i - A)}{\left[\sum c_i^2 p_i (1-p_i) \right]^{1/2}}. $$</p>
<p>Considering the gradient of the function $z = z(c_1,\dots,c_n)$ shows that $z$ is maximal when the weights $c_i$ are proportional to the numbers
$$\gamma_i = \frac{p_i - A}{p_i(1-p_i)}.$$</p>
<p>I conclude that it is plausible that the weights $c_i = \gamma_i$ are close to being optimal.</p>
<p>My questions, in descending order of importance, are:</p>
<ol>
<li><p>Has anything significant been written on this problem, or an equivalent one?</p></li>
<li><p>Is my "theorem" correct?</p></li>
<li><p>What would a rigorous formulation of the "theorem" look like?</p></li>
</ol>
<p>EDIT: I've simulated the problem for $A = 0.5$ with 1867 countries with a 50.17% chance of voting in favour and 637 countries with a probability of 50.5%. I gave weight $1$ to each of the first group of countries and weight $c$ to the second. In the graph below, the horizontal axis is for $c$, and the vertical axis for the probability of passing the resolution. The blue curve represents the theoretical probability we would have if the normal approximation worked perfectly, and the red curve experimental data based on 5 million repetitions of the experiment. The maximum for the red graph is not too far from the conjectured optimal value of $\gamma = 2.94$. </p>
<p><img src="https://i.stack.imgur.com/LxVwS.jpg" alt="Experimental data for the conjecture"></p>
<p>EDIT: In response to a comment, here are some additional details on the maximization of $z$ above. By homogeneity, it makes no difference whether or not we constrain the $c_i$'s to have sum $1$. But if we do, then a compactness argument shows that $z$ must attain a maximum at some point. </p>
<p>Now return to unconstrained $c_i$'s. $\partial z/\partial c_i$ has the same sign as
$$\frac{\sum c_j^2 p_j (1 - p_j)}{\sum c_j (p_j - A)} \gamma_i - c_i.$$
This shows that where the maximum occurs, all the $c_i$'s must be proportional to $\gamma_i$.</p>
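<p>The optimality of $c_i\propto\gamma_i$ for the normal-approximation objective $z$ is, in fact, just Cauchy–Schwarz (write $d_i=c_i\sqrt{p_i(1-p_i)}$), and it is easy to check numerically (a sketch with made-up probabilities $p_i$):</p>

```python
import math

# Normal-approximation objective z(c) from above, with A = 0.5 and
# some made-up probabilities p_i > A.
A = 0.5
p = [0.5017] * 10 + [0.505] * 5 + [0.52] * 3

def z(c):
    num = sum(ci * (pi - A) for ci, pi in zip(c, p))
    var = sum(ci**2 * pi * (1 - pi) for ci, pi in zip(c, p))
    return num / math.sqrt(var)

gamma = [(pi - A) / (pi * (1 - pi)) for pi in p]   # conjectured optimal weights
uniform = [1.0] * len(p)

# By Cauchy-Schwarz, gamma maximizes z, so it should beat any other weighting;
# note z is invariant under rescaling all the weights.
assert z(gamma) >= z(uniform)
assert z(gamma) >= z(list(p))                      # weights proportional to p_i
```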
| Marco | 234,614 | <p>Forgive me if this solution may appear to be too "simplistic", maybe I misunderstood something in your original question, but what I gathered is that you have to set the various votes of countries in a manner that you maximize the probability of a resolution passing.
Strangely enough, of all the assumptions you mentioned, three did not matter in my solution and only number 2 played a role.</p>
<p>I did add one additional assumption though. All countries are "sorted" by decreasing probability of passing the resolution, that means:</p>
<p>$$ p_n \le p_{n-1} \le ... \le p_2 \le p_1$$</p>
<p>It should not affect generality too much, since the probability is data (from what I understand).</p>
<p>Let w be the relative weight of votes:</p>
<p>$$ w_i = {c_i \over \sum c_i}$$</p>
<p>It follows that:</p>
<p>$$ \sum w_i = 1 $$</p>
<p>Also let m be the bound for the proportion of votes:</p>
<p>$$ m = {M \over n} $$</p>
<p>Consider that m is less than 1 so we can write the following:</p>
<p>$$ {1 \over m} = k + \eta$$</p>
<p>Where <strong>k</strong> is an integer and <strong>eta</strong> is a real number greater than or equal to 0 and less than 1.</p>
<p>This is useful for telling how many countries can have an assigned weight of m, and, in fact we're gonna assign m to exactly the K countries with the highest probability. (the first K)</p>
<p>$$ w_1 = ... = w_k = m $$</p>
<p>Country K + 1 will get the "leftover" (note that if eta is zero, this country gets zero votes):</p>
<p>$$ w_{k+1} = 1 - m \cdot k $$</p>
<p>Everyone else gets zero votes:</p>
<p>$$ w_{k+2} = ... = w_n = 0 $$</p>
<p>Note that these values are weights; to go from weights to actual votes you simply multiply all weights by the same arbitrary constant.</p>
<p>Again, since this appears to be too "simplistic", it's very likely that I missed something in your original question or misunderstood it entirely. If that's the case, please tell me and I will delete this answer.</p>
|
2,662,717 | <p>Let $(f_k)_{k=m}^\infty$ be a sequence of differentiable functions $f_k:[a,b]\rightarrow R$ whose derivatives are continuous. Suppose there exists a sequence $(M_k)_{k=m}^\infty$ in $R$ with $|f_k'(x)|\le M_k$ for all $x\in [a,b], k\geq m,$ and such that $\sum_{k=m}^\infty M_k$ converges. Assume also that there is some $x_0\in [a,b]$ such that $\sum_{k=m}^\infty f_k(x_0)$ converges. </p>
<p>Show that $\sum_{k=m}^\infty f_k$ converges uniformly to a differentiable function $f:[a,b]\rightarrow R$ and that $f'(x)=\sum_{k=m}^\infty f_k'(x)$ for all $x\in [a,b]$.</p>
<p>$Remark:$ So you are showing that under these assumptions, </p>
<p>$\frac{d}{dx}\sum_{k=m}^\infty f_k= \sum_{k=m}^\infty \frac{d}{dx} f_k$</p>
<p>$Hint:$ Combine the Weierstrass M-test with Theorem $3.7.1$: Let $[a, b]$ be an interval, and for every integer $n ≥ 1$, let $f_n : [a, b] → R$ be a differentiable function whose derivative $f_n' : [a, b] → R$ is continuous. Suppose that the derivatives $f_n'$ converge uniformly to a function $g : [a, b] → R$. Suppose also that there exists a point $x_0$ such that the limit lim$_{n→∞} f_n (x_0)$ exists. Then the functions $f_n$ converge uniformly to a differentiable function $f$, and the derivative of $f$ equals $g$. </p>
| JonathanZ supports MonicaC | 275,313 | <p>Yes, you can do such a "double induction". There are multiple different ways you can go about it, and yours looks to be valid.</p>
<p>However, often you don't need to do two inductions but it's merely enough to say "Fix an arbitrary $m$" and then do an induction on $n$. Essentially you are doing an induction (on $n$) that has a parameter in it ($m$), but your proof is valid regardless of the value of $m$.</p>
<p>For your problem, you don't need to do the second bulleted step -- if you plug in $n = 0$ you get a true statement regardless of the value of $m$. That observation serves a your base case (for all possible values of $m \gt 0$). Now just do your third step, which is an induction on $n \rightarrow n+1$, and you're done. (BTW, I don't think you have to resort to the closed form, I think you can get away with just using the standard Fibonacci recursion formula. But I haven't actually worked it out so I'm not sure.)</p>
|
2,662,717 | <p>Let $(f_k)_{k=m}^\infty$ be a sequence of differentiable functions $f_k:[a,b]\rightarrow R$ whose derivatives are continuous. Suppose there exists a sequence $(M_k)_{k=m}^\infty$ in $R$ with $|f_k'(x)|\le M_k$ for all $x\in [a,b], k\geq m,$ and such that $\sum_{k=m}^\infty M_k$ converges. Assume also that there is some $x_0\in [a,b]$ such that $\sum_{k=m}^\infty f_k(x_0)$ converges. </p>
<p>Show that $\sum_{k=m}^\infty f_k$ converges uniformly to a differentiable function $f:[a,b]\rightarrow R$ and that $f'(x)=\sum_{k=m}^\infty f_k'(x)$ for all $x\in [a,b]$.</p>
<p>$Remark:$ So you are showing that under these assumptions, </p>
<p>$\frac{d}{dx}\sum_{k=m}^\infty f_k= \sum_{k=m}^\infty \frac{d}{dx} f_k$</p>
<p>$Hint:$ Combine the Weierstrass M-test with Theorem $3.7.1$: Let $[a, b]$ be an interval, and for every integer $n ≥ 1$, let $f_n : [a, b] → R$ be a differentiable function whose derivative $f_n' : [a, b] → R$ is continuous. Suppose that the derivatives $f_n'$ converge uniformly to a function $g : [a, b] → R$. Suppose also that there exists a point $x_0$ such that the limit lim$_{n→∞} f_n (x_0)$ exists. Then the functions $f_n$ converge uniformly to a differentiable function $f$, and the derivative of $f$ equals $g$. </p>
| Bram28 | 256,001 | <p>Your double induction is not sufficient the way you have set it up. You end up proving only the cases with $n=0$ and arbitrary $m$, and with $m=1$ and arbitrary $n$. </p>
<p>Also, while I'm not sure you really are trying to do this, but it almost sounds like you want to go from the $(m,n)$ case directly to the $(m+1,n+1)$ case, but that wouldn't be right, since now you only hit the $(1,0)$, $(2,1)$, $(3,2)$, ... cases. Clearly you want to get to all the cases from the whole $m \times n$ 'grid', rather than just its diagonal.</p>
<p>Instead, what you could do is to go from the $(m,n)$ case to both the $(m,n+1)$ case and the $(m+1,n)$ case.</p>
<p>Another thing to think about, though, is that you are dealing with the Fibonacci numbers, and where a single variable induction involving the Fibonacci numbers involves having to have two base cases, with this kind of double induction you'll actually need all those cases with $n=0$ and arbitrary $m$, and all the cases with $m=1$ and arbitrary $n$, as your base cases, so that you can go from the $(m-1,n)$ and the $(m,n)$ case to the $(m+1,n)$ case, and from the $(m,n-1)$ and the $(m,n)$ case to the $(m,n+1)$ case. </p>
<p>That is:</p>
<p>Suppose you have (by induction) proven the claim to be true for all the cases with $n=0$ and arbitrary $m$, and with $m=1$ and arbitrary $n$.</p>
<p>So now take some arbitrary $m > 1, n > 0$</p>
<p>Assume Inductive hypothesis:</p>
<p>$$F_{m+n} = F_{m} F_{n+1} + F_{m-1} F_{n}$$</p>
<p>and:</p>
<p>$$F_{m+n-1} = F_{m} F_{n} + F_{m-1} F_{n-1}$$</p>
<p>and:</p>
<p>$$F_{m-1+n} = F_{m-1} F_{n+1} + F_{m-2} F_{n}$$</p>
<p>First do $m,n+1$:</p>
<p>$$F_{m+(n+1)} = F_{m+n-1}+ F_{m+n} \overset{Inductive Hypothesis}{=}$$</p>
<p>$$ F_{m} F_{n} + F_{m-1} F_{n-1} + F_{m} F_{n+1} + F_{m-1} F_{n}=$$</p>
<p>$$ F_{m} (F_{n} + F_{n+1}) + F_{m-1} (F_{n-1} + F_{n})=$$</p>
<p>$$ F_{m} F_{n+2} + F_{m-1} F_{n+1}$$</p>
<p>as desired.</p>
<p>Now do $m+1,n$:</p>
<p>$$F_{m+1+n} = F_{m-1+n}+ F_{m+n} = \overset{Inductive Hypothesis}{=}$$</p>
<p>$$F_{m-1} F_{n+1} + F_{m-2} F_{n} + F_{m} F_{n+1} + F_{m-1} F_{n}=$$</p>
<p>$$(F_{m-1} + F_{m}) F_{n+1} + (F_{m-2} + F_{m-1})F_{n} =$$</p>
<p>$$F_{m+1} F_{n+1} + F_{m}F_{n}$$</p>
<p>Again, as desired.</p>
<p>... well, ok, so that works, but it did require infinitely many base cases, which required their very own induction. Isn't there something we can do to avoid this? Yes!</p>
<p>Here's something you can do. Do induction on $k$, where $k=m+n$. That is, show that the claim holds for $(1,0)$, $(1,1)$, and $(2,0)$ as your base cases, and then show that for any $k>2$: if we assume that the claim holds for all $(m,n)$ with $m+n=k-2$ and the claim also holds for all $(m,n)$ with $m+n=k-1$, then the claim also holds for all $(m,n)$ with $m+n=k$. Now, in proving this step, you will have to account of the edge cases where $m=1$ or where $n=0$ though, but other than that you can follow the above derivations, so that would seem to be a little less work.</p>
<p>Finally, here is a method to prove the claim without any difficult induction at all. Suppose that you want to go up some stairs and at every step you can take either one or two stairs: in how many ways can you get up the stairs? Well, if we say that there are $n-1$ stairs, then it turns out there are $F_n$ ways to do it (where $F_0$ is defined to $0$ and $F_1$ to $1$). Here is why: for your first step you can either go one up or two up. Now, if by inductive hypothesis there are $F_{n-1}$ ways to finish climbing the $n-2$ stairs after having taken a step of $1$ stairs, and there are $F_{n-2}$ ways to finish climbing the $n-3$ stairs after having taken a step of $2$ stairs, then it follows that there are $F_{n-1}+F_{n-2}=F_n$ ways to climb the original $n-1$ stairs, completing the inductive proof that indeed there are $F_n$ ways to climb $n-1$ stairs if every step you take either $1$ or $2$ stairs.</p>
<p>OK, but that means that we can climb $m+n-1$ stairs in $F_{m+n}$ ways. But note: we can either climb the stairs and at some point having climbed exactly $m-1$ stairs, or we can climb the stairs and at some point having climbed exactly $m-2$ stairs, after which we take a step of two stairs, and continue our way: each way of climbing the stairs will be done in one of those two different ways. OK, but the first way can be done in $F_m F_{n+1}$ ways, since you can go up $m-1$ steps in $F_m$ ways and then finish the other $n$ stairs in $F_{n+1}$ ways. For the second method there are $F_{m-1}$ ways to first climb $m-2$ stairs, and then $F_n$ ways to climb the remaining $n-1$ stairs after taking a step of two stairs after that, for a total of $F_{m-1}F_n$ ways. Thus, it must be true that:</p>
<p>$$F_{m+n} = F_{m} F_{n+1} + F_{m-1} F_{n}$$</p>
|
3,896,327 | <p>For the first part of this question, I was asked to find the either/or version and the contrapositive of this statement, which I found as follows:</p>
<p>i) either <span class="math-container">$n \leq 7$</span>, or <span class="math-container">$n^2-8n+12$</span> is composite</p>
<p>ii) if <span class="math-container">$n^2-8n+12$</span> is not composite, then <span class="math-container">$n \leq 7$</span></p>
<p>Then we are asked to prove the statement.</p>
<p>I've examined all of our class notes and found mention of proof via factorising "to be covered in more detail later" (but no detail later included - perhaps it is considered too obvious?).</p>
<p>As this is such a simple question, judging by the two marks available for it, I'm concerned that asking another student will cause them to inadvertently tell me the answer.</p>
<p>I am finding this question hard to google around because findable simple proof examples do not seem to include quadratic expressions.</p>
<p>What would the right steps be to approach a proof like this? And/or is there an online resource that shows some similar examples with working/notes? I don't mind handing in an incorrect answer if I'm able to make an attempt, but I'm just so uncertain of the right starting point that I cannot make a start.</p>
| Parcly Taxel | 357,390 | <p><span class="math-container">$$n^2-8n+12=(n-2)(n-6)$$</span>
Now if <span class="math-container">$n>7$</span> then both factors are at least <span class="math-container">$2$</span>, which means <span class="math-container">$n^2-8n+12$</span> is composite. This settles the matter.</p>
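Not needed for the proof, but the factorisation is easy to spot-check numerically (a small script; the range $8 \le n < 200$ is chosen arbitrarily):

```python
def is_composite(n):
    # True iff n has a nontrivial divisor.
    return n > 1 and any(n % d == 0 for d in range(2, int(n ** 0.5) + 1))

for n in range(8, 200):
    v = n * n - 8 * n + 12
    assert v == (n - 2) * (n - 6)   # the factorisation used above
    assert is_composite(v)          # both factors are at least 2 when n > 7
print("n^2 - 8n + 12 is composite for all 8 <= n < 200")
```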
|
69,472 | <blockquote>
<p><strong>Theorem 1</strong><br>
If $g \in C[a,b]$ and $g(x) \in [a,b] \forall x \in [a,b]$, then $g$ has a fixed point in $[a,b].$<br>
If in addition, $g'(x)$ exists on $(a,b)$ and a positive constant $k < 1$ exists with
$$|g'(x)| \leq k, \text{ for all } x \in (a, b)$$
then the fixed point in $[a,b]$ is unique.</p>
<p><strong>Fixed-point Theorem</strong>
Let $g \in C[a,b]$ be such that $g(x) \in [a,b]$, for all $x$ in $[a,b]$. Suppose, in addition, that $g'$ exists on $(a,b)$ and that a constant $0 < k < 1$ exists with
$$|g'(x)| \leq k, \text{ for all } x \in (a, b)$$
Then, for any number $p_0$ in $[a,b]$, the sequence defined by
$$p_n = g(p_{n-1}), n \geq 1$$
converges to the unique fixed-point in $[a,b]$</p>
</blockquote>
<p>These are two theorems that I have learned, and I'm having a hard time with this problem:</p>
<blockquote>
<p>Given a function $f(x)$, how can we find the interval $[a,b]$ on which fixed-point iteration will converge?</p>
</blockquote>
<p>Besides guess and check, I couldn't find any other way to solve this problem. I tried to link the above theorems, but it involves two variables, so I have a feeling it can't be solved algebraically. I wonder is there a general way to find the interval of convergence rather trial and error? Thank you.</p>
| Gerry Myerson | 8,269 | <p>There are many general approaches. One of the simplest and best is Newton's Method, which you will find in any calculus text, or by searching the web. </p>
<p>By the way, I think your first formula should be $f(x)=(1/2)(x+(3/x))$, no?</p>
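To illustrate (an added sketch, not part of the original answer): taking the Babylonian map $g(x)=\tfrac12\left(x+\tfrac3x\right)$ mentioned above, one can check that $g$ maps $[1.5, 2]$ into itself and that $|g'(x)| = \left|\tfrac12 - \tfrac{3}{2x^2}\right| < 1$ on that interval, so the Fixed-point Theorem applies and the iteration converges to $\sqrt3$ from any starting point in $[1.5, 2]$:

```python
import math

def g(x):
    # Babylonian map; its fixed point p = g(p) satisfies p^2 = 3.
    return 0.5 * (x + 3.0 / x)

p = 2.0                      # any starting point p0 in [1.5, 2]
for _ in range(20):
    p = g(p)

print(p, math.sqrt(3))       # the iterates converge to sqrt(3)
assert abs(p - math.sqrt(3)) < 1e-12
```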
|
69,472 | <blockquote>
<p><strong>Theorem 1</strong><br>
If $g \in C[a,b]$ and $g(x) \in [a,b] \forall x \in [a,b]$, then $g$ has a fixed point in $[a,b].$<br>
If in addition, $g'(x)$ exists on $(a,b)$ and a positive constant $k < 1$ exists with
$$|g'(x)| \leq k, \text{ for all } x \in (a, b)$$
then the fixed point in $[a,b]$ is unique.</p>
<p><strong>Fixed-point Theorem</strong>
Let $g \in C[a,b]$ be such that $g(x) \in [a,b]$, for all $x$ in $[a,b]$. Suppose, in addition, that $g'$ exists on $(a,b)$ and that a constant $0 < k < 1$ exists with
$$|g'(x)| \leq k, \text{ for all } x \in (a, b)$$
Then, for any number $p_0$ in $[a,b]$, the sequence defined by
$$p_n = g(p_{n-1}), n \geq 1$$
converges to the unique fixed-point in $[a,b]$</p>
</blockquote>
<p>These are two theorems that I have learned, and I'm having a hard time with this problem:</p>
<blockquote>
<p>Given a function $f(x)$, how can we find the interval $[a,b]$ on which fixed-point iteration will converge?</p>
</blockquote>
<p>Besides guess and check, I couldn't find any other way to solve this problem. I tried to link the above theorems, but it involves two variables, so I have a feeling it can't be solved algebraically. I wonder is there a general way to find the interval of convergence rather trial and error? Thank you.</p>
| Fixee | 7,162 | <p>Somewhat off-topic, but...</p>
<p>When I was a kid, we didn't have calculators yet, so we learned to compute sqrts on paper using the <a href="http://en.wikipedia.org/wiki/Methods_of_computing_square_roots#Digit-by-digit_calculation" rel="nofollow">digit-at-a-time</a> method. This cannot compete with Newton-Raphson, of course, but is simple enough to explain to a child. Here's a Python program that uses this method and gives $\sqrt{3}$ to 10,000 places:</p>
<pre><code># Digit-at-a-time square root of 3.
# Invariants: j = digits of the root found so far (as an integer),
# i = current remainder, k = trial divisor, q = digit just found,
# d = number of digits printed.
import sys
i, j, k, q, d = 3, 1, 1, 1, 0
while True:
    sys.stdout.write(str(q))                # emit the digit just found
    d, i, k, q = d+1, (i-k*q)*100, 20*j, 0  # subtract, bring down two zeros
    for _ in range(10):                     # largest q with (k+q)*q not exceeding i
        q += 1
        if (k+q)*q > i:
            q -= 1
            break
    j, k = j*10+q, k+q                      # append digit; k becomes 20*j_old + q
    if d >= 10000:                          # stop after 10,000 digits
        break
</code></pre>
|
132,618 | <p>Ogg characterized the finitely many N such that $X_0(N)_{\mathbb{Q}}$ is hyperelliptic, and Poonen proved in "Gonality of modular curves in characteristic p" that for large enough N, $X_0(N)_{\mathbb{F}_p}$ is not hyperelliptic.</p>
<p><strong>Question</strong>: Are there any N such that $X_0(N)_{\mathbb{Q}}$ is not hyperelliptic but for some p not dividing N $X_0(N)_{\mathbb{F}_p}$ is hyperelliptic? </p>
<p>I'm also interested in this question for other modular curves of the form $X_H$, where H is a congruence subgroup.</p>
| JSE | 431 | <p>This is sort of a cheap answer, but I would look at values of N where X_0(N) has genus 3 and is not hyperelliptic (over Q). In genus 3, the locus of hyperelliptic curves has codimension 1 in the whole moduli space of curves, so one would expect a non-hyperelliptic curve to reduce to a hyperelliptic curve mod p for some finite (but typically non-empty) set of primes p.</p>
<p>Presumably this could be made explicit: a non-hyperelliptic genus 3 curve is a plane quartic, and so you've got to figure that there is an invariant F of ternary quartic forms with the property that F(P) is a multiple of precisely those primes where the curve cut out by P reduces to a hyperelliptic curve. Or so I would guess.</p>
|
132,618 | <p>Ogg characterized the finitely many N such that $X_0(N)_{\mathbb{Q}}$ is hyperelliptic, and Poonen proved in "Gonality of modular curves in characteristic p" that for large enough N, $X_0(N)_{\mathbb{F}_p}$ is not hyperelliptic.</p>
<p><strong>Question</strong>: Are there any N such that $X_0(N)_{\mathbb{Q}}$ is not hyperelliptic but for some p not dividing N $X_0(N)_{\mathbb{F}_p}$ is hyperelliptic? </p>
<p>I'm also interested in this question for other modular curves of the form $X_H$, where H is a congruence subgroup.</p>
| Maarten Derickx | 23,501 | <p>A curve $C$ is hyperelliptic if and only if the canonical map $C \to \mathbb P^*(\Omega^1(C))$, which sends a point $p$ to the codimension 1 subspace $V_p \subset\Omega^1(C)$ of all one forms vanishing at $p$, is a two to one map. In the hyperelliptic case its image will be a $\mathbb P^1$, and the degree two map will be the quotient by the hyperelliptic involution. The map $\mathbb P^1 \to \mathbb P^*(\Omega^1(C))$ will basically be a Veronese embedding and hence its image will lie on a lot of quadrics. One can use this to search for possible candidates. I found none for $\Gamma_0(N)$ with $N \leq 151$. I think my search is provably complete, showing that for $N \leq 151$ and all $p$ the only hyperelliptic $X_0(N)_{\mathbb F_p}$ are the ones who are already hyperelliptic over $\mathbb Q$. Together with the general bound $${\rm gon_{\mathbb F_p^2}} X_0(N) \geq [PSL_2(\mathbb Z) : \Gamma_0(N)]\frac {p-1} {12(p^2+1)}$$ which is mentioned in Poonen's paper this implies that $X_0(N)_{\mathbb F_p}$ will never be hyperelliptic modulo $2,3$ or $5$ unless of course $X_0(N)_{\mathbb Q}$ is.
Now for $N > 151$ the genus of $X_0(N)$ already starts to become reasonably big. So I would not expect there to be any examples with $p>5$ and $N>151$ either. But I don't see how to prove it yet since Poonen's bound depends heavily on $p$.</p>
<p>Using this idea for $X_H$ with other congruence subgroups $H$ I found an example though. There might be more but this is just the first one I found. Anyway here is the explicit example:
Consider the modular curve $X_1(37)/\langle 4\rangle$, it is a double cover of $X_0(37)$ and its genus is $4$. Its global one forms are generated by the modular forms with q-expansion:
\begin{gather}
x_0 := q - 2q^{5} + 2q^{6} - 3q^{7} + 2q^{9} + O(q^{10})\\
x_1 := q^{2} - q^{5} - q^{6} + O(q^{10})\\
x_2 := q^{3} - 2q^{7} + O(q^{10})\\
x_3 := q^{4} - q^{5} + q^{6} - 2q^{7} + 2q^{9} + O(q^{10}).\\
\end{gather}
These satisfy the relations:
\begin{gather}
- x_{1}^{2} + x_{0} x_{2} - 2 x_{2} x_{3} = 0 \\
x_{1}^{2} x_{2} - 2 x_{1} x_{2}^{2} + 2 x_{2}^{3} - x_{0} x_{1} x_{3} +
x_{1}^{2} x_{3} + x_{1} x_{2} x_{3} - 4 x_{2}^{2} x_{3} - x_{0}
x_{3}^{2} + x_{1} x_{3}^{2} + 4 x_{2} x_{3}^{2} + 2 x_{3}^{3} = 0, \\
\end{gather}
which describe the canonical model $X_1(37)/\langle 4\rangle $ as the intersection of a smooth quadric and cubic in $\mathbb P^3$.
Modulo $2$ however $x_0,x_1,x_2,x_3$ satisfy two additional relations:
$$
x_{1} x_{2} + x_{2}^{2} + x_{0} x_{3} + x_{2} x_{3} =0 \\
x_{2}^{2} + x_{1} x_{3} + x_{2} x_{3} = 0,\\
$$
which shows that in this case the image of the canonical embedding is a $\mathbb P^1$. 2 is the only prime for which it happens for $X_1(37)/\langle 4 \rangle$.</p>
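As a small independent sanity check (an addition, not part of the original answer), the first quadric relation can be verified numerically from the q-expansions quoted above, which are only given to $O(q^{10})$:

```python
N = 10  # the expansions above are known modulo q^N

def mul(a, b):
    """Multiply two truncated q-expansions (coefficient lists of length N)."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def coeffs(d):
    """Build a length-N coefficient list from {exponent: coefficient}."""
    c = [0] * N
    for e, v in d.items():
        c[e] = v
    return c

# q-expansions of x_0, ..., x_3 as quoted above
x0 = coeffs({1: 1, 5: -2, 6: 2, 7: -3, 9: 2})
x1 = coeffs({2: 1, 5: -1, 6: -1})
x2 = coeffs({3: 1, 7: -2})
x3 = coeffs({4: 1, 5: -1, 6: 1, 7: -2, 9: 2})

# -x1^2 + x0*x2 - 2*x2*x3 should vanish to the available precision
rel = [-a + b - 2 * c for a, b, c in zip(mul(x1, x1), mul(x0, x2), mul(x2, x3))]
print(rel)  # all zeros: the quadric relation holds to O(q^10)
```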
|