| qid | question | author | author_id | answer |
|---|---|---|---|---|
1,297,319 | <p>Evaluate the integral</p>
<p>$$\int_{0}^{1/8} \frac{4}{\sqrt{(1-4x^2)}} \,dx$$</p>
<p>My work so far:</p>
<p>$t= \sqrt{(1-4x^2)} $</p>
<p>$dt = -4x/\sqrt{(1-4x^2)} dx $</p>
<p>I am stuck here.</p>
| Joseph Martin | 203,227 | <p>You can use the substitution $ 2x=\sin(u) $. Then $ \frac{dx}{du}=\frac{1}{2}\cos(u) $.</p>
<p>We can rearrange our substitution equation $ 2x=\sin(u) $ into $ u = \arcsin(2x) $, so we can find our limits with respect to $u$. When $ x = \frac{1}{8} $, $ u = \arcsin\left(2\cdot\frac{1}{8}\right) = \arcsin\left(\frac{1}{4}\right) $, and when $ x = 0 $, $ u = \arcsin(0) = 0 $.
So</p>
<p>$ \int_{0}^{1/8} \frac{4}{\sqrt{(1-4x^2)}} \,dx = \int_{0}^{\arcsin\left(\frac{1}{4}\right)} 2\,du = 2\arcsin\left(\frac{1}{4}\right) = 0.50536051 $</p>
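<p>As a quick numerical sanity check (an editorial addition, not part of the original answer), one can compare a simple midpoint Riemann sum against the closed form:</p>

```python
import math

def midpoint_integral(f, a, b, n=100_000):
    # midpoint Riemann sum; avoids evaluating f at the endpoints
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

integrand = lambda x: 4.0 / math.sqrt(1.0 - 4.0 * x * x)
numeric = midpoint_integral(integrand, 0.0, 0.125)
exact = 2.0 * math.asin(0.25)  # the value obtained by the substitution
print(numeric, exact)          # both ~0.50536051
```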
|
3,789,060 | <p>I was asked the following question:</p>
<blockquote>
<p>Determine if the following set is a vector space:<br />
<span class="math-container">$$W=\left\{\left[\begin{matrix}p\\q\\r\\s\\\end{matrix}\right]:\begin{matrix}-3p+2q=-s\\p=-s+3r\\\end{matrix}\right\}$$</span></p>
</blockquote>
<p>I know the answer is yes and you can show it by showing that W is a subspace of <span class="math-container">$\mathbb{R}^4$</span>. But, I have no idea how to show that, or in general how to determine if a set is a vector space. I am interested in understanding so that I can apply it to future questions, not just so that I can answer this question.</p>
| Brian Moehring | 694,754 | <p>Write <span class="math-container">$\{x\} = x - \lfloor x\rfloor$</span>. Since <span class="math-container">$$\arctan(\cot(\pi x)) = \arctan(\cot(\pi\{x\})) = \arctan(\tan(\frac{\pi}{2}-\pi\{x\})) \\ -\frac{\pi}{2} < \frac{\pi}{2} - \pi\{x\} \leq \frac{\pi}{2}$$</span> it follows that <span class="math-container">$$\arctan(\cot(\pi x)) = \frac{\pi}{2} - \pi\{x\}, \qquad x \in \mathbb{R}\setminus\mathbb{Z}.$$</span></p>
<hr />
<p>Arguably <span class="math-container">$\arctan$</span> and <span class="math-container">$\cot$</span> are both "elementary" functions, but this gives an exact way to write <span class="math-container">$\arctan(\cot(\pi x))$</span> without appealing to anything more than linear functions and the operation of "rounding down to the nearest integer".</p>
<p>It has as an antiderivative the function <span class="math-container">$$\int_0^x \left(\frac{\pi}{2} - \pi\{t\}\right)dt = \frac{\pi}{2}\{x\}(1-\{x\})$$</span></p>
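<p>A quick numerical spot-check of the identity (an editorial addition, not part of the original answer):</p>

```python
import math

def frac(x):
    # fractional part {x} = x - floor(x)
    return x - math.floor(x)

# arctan(cot(pi x)) = pi/2 - pi*{x} for non-integer x
for x in [-2.3, -0.75, 0.1, 0.5, 1.9, 3.25]:
    lhs = math.atan(1.0 / math.tan(math.pi * x))
    rhs = math.pi / 2 - math.pi * frac(x)
    print(f"{x:6.2f}  {lhs:+.12f}  {rhs:+.12f}")
```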
|
1,879,395 | <p>I am trying to learn generating functions so I am trying this recurrence:</p>
<p>$$F(n) = 1 + \frac{n-1}{n}F(n-1)$$</p>
<p>But I am struggling with it. Luckily the base case can be anything since $F(1)$ will multiply it by $0$ anyway, so let's say $F(0) = 0$. Then I tried this:</p>
<p>$$G(x) = \sum_{n=0}^{\infty} F(n)x^n$$</p>
<p>Remove base case $n=0$, split $F(n)$ into its parts:</p>
<p>$$G(x) = 0 + \sum_{n=1}^{\infty} x^n + \sum_{n=1}^{\infty} \frac{n-1}{n} F(n-1) x^{n}$$ </p>
<p>Simplify the first sum (accounting for $n=0$), pull $x$ out of the right sum and shift index:</p>
<p>$$G(x) = -1 + \frac{1}{1-x} + x\sum_{n=0}^{\infty} \frac{n}{n+1} F(n) x^{n}$$ </p>
<p>At this point I don't know how to simplify the right sum any further because I cannot simply pull out $\frac{n}{n+1}$ and replace the sum with $G(x)$ like I normally can with constant coefficients.</p>
<p>Just looking for hints because I want to solve this myself (as much as I can, anyway), please. What are the typical methods people use at this point?</p>
| André Nicolas | 6,312 | <p>Hint: Life will be more pleasant if we let $kF(k)=W(k)$. Then we are looking at $W(n)=n+W(n-1)$. The generating function is straightforward, and then we can obtain the generating function of $F$.</p>
|
1,879,395 | <p>I am trying to learn generating functions so I am trying this recurrence:</p>
<p>$$F(n) = 1 + \frac{n-1}{n}F(n-1)$$</p>
<p>But I am struggling with it. Luckily the base case can be anything since $F(1)$ will multiply it by $0$ anyway, so let's say $F(0) = 0$. Then I tried this:</p>
<p>$$G(x) = \sum_{n=0}^{\infty} F(n)x^n$$</p>
<p>Remove base case $n=0$, split $F(n)$ into its parts:</p>
<p>$$G(x) = 0 + \sum_{n=1}^{\infty} x^n + \sum_{n=1}^{\infty} \frac{n-1}{n} F(n-1) x^{n}$$ </p>
<p>Simplify the first sum (accounting for $n=0$), pull $x$ out of the right sum and shift index:</p>
<p>$$G(x) = -1 + \frac{1}{1-x} + x\sum_{n=0}^{\infty} \frac{n}{n+1} F(n) x^{n}$$ </p>
<p>At this point I don't know how to simplify the right sum any further because I cannot simply pull out $\frac{n}{n+1}$ and replace the sum with $G(x)$ like I normally can with constant coefficients.</p>
<p>Just looking for hints because I want to solve this myself (as much as I can, anyway), please. What are the typical methods people use at this point?</p>
| Jack D'Aurizio | 44,121 | <p>The usual GF-approach may go through the following lines. We have $F(0)=0$ and $n F(n) = n + (n-1) F(n-1)$. Assuming that
$$ G(x) = \sum_{n\geq 0}F(n) x^n = \sum_{n\geq 1}F(n) x^n,\tag{1} $$
we have:
$$ x\cdot G'(x) = \sum_{n\geq 1} n F(n)\,x^{n}=\sum_{n\geq 0} n F(n)\,x^{n}, \tag{2}$$
$$ x^2\cdot G'(x) = \sum_{n\geq 0} (n-1) F(n-1)\, x^n,\tag{3} $$
$$ \sum_{n\geq 0}n\,x^n = \frac{x}{(1-x)^2}\tag{4}$$
hence the recurrence relation turns into the pseudo-DE
$$ x G'(x) = \frac{x}{(1-x)^2}+x^2 G'(x)\tag{5} $$
leading to $G'(x)=\frac{1}{(1-x)^3}$ and $G(x)=K+\frac{1}{2(1-x)^2}$. Since $G(0)=0$ we have $K=-\frac{1}{2}$, hence:</p>
<p>$$ G(x) = \frac{1}{2}\sum_{n\geq 1}\binom{n+1}{1}x^n \tag{6} $$
by <a href="https://en.wikipedia.org/wiki/Stars_and_bars_(combinatorics)" rel="nofollow">stars and bars</a>, and $F(n)=\frac{n+1}{2}$ for any $n\geq 1$ readily follows.</p>
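<p>A quick check of the closed form against the recurrence (an editorial addition, not part of the original answer):</p>

```python
# F(1) = 1 + 0*F(0) = 1, independent of F(0); then iterate
# F(n) = 1 + (n-1)/n * F(n-1) and compare with F(n) = (n+1)/2.
F = 1.0
for n in range(2, 50):
    F = 1.0 + (n - 1) / n * F
    assert abs(F - (n + 1) / 2) < 1e-10
print("F(n) = (n+1)/2 verified for n = 1..49")
```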
|
733,553 | <p>It's been a long time since high school, and I guess I forgot my rules of exponents. I did a web search for this rule but I could not find a rule that helps me explain this case:</p>
<p>$ 2^n + 2^n = 2^{n+1} $</p>
<p>Which rule of exponents is this?</p>
| Kaj Hansen | 138,538 | <p>$2^{N} + 2^{N} = 2^{N}(1+1) = 2^{N}(2) = 2^{N+1}.$</p>
<p>Just a little trickery with the distributive law.</p>
|
121,362 | <p>I have a set of sample time-series data below of monthly prices for two companies. </p>
<p>Q1. I want to calculate monthly and quarterly log returns. What is the most expedient way to do this? <code>TimeSeriesAggregate[]</code> only has the standard <code>Mean</code>, etc. </p>
<p>Q2. With the returns from Q1, what is the most expedient method to calculate the correlation of the monthly returns between the two companies?</p>
<p>Q3. How would it be possible to calculate six-monthly log returns and then create a series of overlapping $6m \log$ returns so I can derive $7\times 6M$ outcomes from the limited dataset below; i.e. <code>[1m-6m, 2m-7m, 3m-8m, ...]</code> (and then calculate a correlation between these)?</p>
<pre><code>(data1 = {{Date, CompanyA, CompanyB}, {"16/01/2007", 3655,
1000}, {"16/02/2007", 3655, 1000}, {"16/03/2007", 3655,
1000}, {"16/04/2007", 3655, 1000}, {"16/05/2007", 3655,
1000}, {"16/06/2007", 3435, 1011}, {"16/07/2007", 3528,
1012}, {"16/08/2007", 3348, 1013}, {"16/09/2007", 3648,
1022}, {"16/10/2007", 3648, 1022}, {"16/11/2007", 3648,
1022}, {"16/12/2007", 3648, 1022}});
(data2 = MapAt[DateList[{#, {"Day", "Month", "Year"}}] &,
data1, {2 ;;, 1}]) // Grid
</code></pre>
<p>Thanks</p>
| wuyingddg | 18,981 | <p>Another way, using <code>NestList</code>:</p>
<pre><code>randomTriPlot[n_] := Module[{next},
next[polys_] :=
Join[Map[# + {-1, -Sqrt[3]} &,
polys, {2}], {MapAt[# - 2 Sqrt[3] &,
polys[[-1]], {1, 2}], # + {1, -Sqrt[3]} & /@ polys[[-1]]}];
(*get coordinate of the next layer by translate this layer*)
Flatten@
Map[Polygon,
NestList[next, N@{{{0, 0}, {-1, -Sqrt[3]}, {1, -Sqrt[3]}}},
n - 1], {2}] // Graphics[Thread[{RandomColor[Length@#], #}]] &
]
randomTriPlot[7]
</code></pre>
|
26,893 | <p>Does there exist a function $f \in C^2[0,\infty]$ (that is, $f$ is $C^2$ and has finite limits at $0$ and $\infty$) with $f''(0) = 1$, such that for any $g \in L^p(0,T)$ (where $T > 0$ and $1 \leq p < \infty$ may be chosen freely) we get
$$
\int_0^T \int_0^\infty \frac{u^2-s}{s^{5/2}} \exp{\left( -\frac{u^2}{2s} \right)} f(u) g(s) du ds = 0?
$$</p>
| Zarrax | 3,035 | <p>As Joriki pointed out in his comment, this is equivalent to finding an $f(u)$ such that for all $0 \leq s \leq T$ one has
$$\int_0^{\infty}(u^2 - s)\exp{(-{u^2 \over 2s})}f(u) \,du = 0$$
Write $\int_0^{\infty}u^2\exp{-({u^2 \over 2s})}f(u)\,du$ as
$-\int_0^{\infty}-{u \over s}\exp{(-{u^2 \over 2s})}suf(u)\,du$ and integrate by parts, integrating $-{u \over s}\exp{(-{u^2 \over 2s})}$ to $\exp{(-{u^2 \over 2s})}$, and differentiating the rest. The result is the expression
$$\int_0^{\infty}\bigg(s\exp{(-{u^2 \over 2s})}f(u) + suf'(u)\exp{(-{u^2 \over 2s})}\bigg)\,du$$
The first term of this cancels out the second term of your original expression, so what you need is a function $f(u)$ such that for all $0 \leq s \leq T$ one has
$$s\int_0^{\infty}uf'(u)\exp{(-{u^2 \over 2s})}\,du = 0$$
You can cancel out the $s$ factor in front, then change variables from $u$ to $\sqrt{u}$ to get
$$\int_0^{\infty}f'(\sqrt{u})\exp{(-{u \over 2s})}\,du = 0$$
Lastly, one can replace $s$ by ${1 \over 2s}$ to get that for all $s \geq {1 \over 2T}$ you need
$$\int_0^{\infty}f'(\sqrt{u})\exp{(-{su})}\,du = 0$$
This can't happen: the above defines an analytic function of $s$, which can't vanish on a segment without being identically zero. So the Laplace transform of $f'(\sqrt{u})$ is identically zero, which for reasonable $f'$ cannot happen unless $f'$ is identically zero.</p>
|
4,146,081 | <p>How can I demonstrate the Jacobi identity:</p>
<p><span class="math-container">\begin{equation}
[S_{i}, [S_{j},S_{k}]] + [S_{j}, [S_{k},S_{i}]] + [S_{k}, [S_{i},S_{j}]] = 0 ~,
\end{equation}</span></p>
<p>using the infinitesimal generators <span class="math-container">$S_{\kappa}$</span> for a continuous group, where the generators satisfies the Lie algebra, such that:</p>
<p><span class="math-container">$$[S_{\alpha},S_{\beta}] = \sum_{\gamma} f_{\alpha \beta \gamma}S_{\gamma}$$</span></p>
<p>where <span class="math-container">$f_{\alpha \beta \gamma}$</span> are the structure constants ?</p>
<p>I was doing the following:</p>
<p><span class="math-container">$$\begin{align*}
[S_{i}, [S_{j},S_{k}]] &+ [S_{j}, [S_{k},S_{i}]] + [S_{k}, [S_{i},S_{j}]] = \\
&=
[S_{i}, \sum_{l} f_{jkl}S_{l}] + [S_{j}, \sum_{l} f_{kil}S_{l}] + [S_{k}, \sum_{l} f_{ijl}S_{l}]\\
&= \sum_{l} f_{jkl} [S_{i}, S_{l}] + \sum_{l} f_{kil}[S_{j}, S_{l}] + \sum_{l} f_{ijl}[S_{k}, S_{l}]\\
&= \sum_{l} f_{jkl} \sum_{m} f_{ilm}S_{m} + \sum_{l} f_{kil}\sum_{n} f_{jln}S_{n}\\
&\qquad + \sum_{l} f_{ijl}\sum_{p} f_{klp}S_{p}\\
&= \sum_{l,~m} f_{jkl} f_{ilm}S_{m} + \sum_{l,~n} f_{kil}f_{jln}S_{n} + \sum_{l,~p} f_{ijl}f_{klp}S_{p}\\
&= f_{jk}^{l} f_{il}^{m}~S_{m} + f_{ki}^{l}f_{jl}^{n}~S_{n} + f_{ij}^{l}f_{kl}^{p}~S_{p}
\end{align*}$$</span>
I know that these structure constants are antisymmetric:</p>
<p><span class="math-container">\begin{equation}
f_{\alpha \beta}^{\gamma} = - f_{\beta \alpha}^{\gamma} ~~.
\end{equation}</span></p>
<p>Is there a way to go further and show that the expression equals zero?</p>
| paul garrett | 12,291 | <p>Recapitulating @DietrichBurde's point, in different terms:</p>
<p>Again, emphatically, if we have a real or complex vector space <span class="math-container">$V$</span> with an anti-commutative binary (bilinear) operation <span class="math-container">$[,]$</span>, this does not imply that <span class="math-container">$[,]$</span> satisfies the Jacobi identity. At this level of abstraction, the Jacobi identity must be <em>required</em>.</p>
<p>To see some sense in what the Jacobi identity asserts, rather than it just being "a formula", it asserts that the map <span class="math-container">$x\to \mathrm{ad}(x)$</span> is a Lie algebra homomorphism, where <span class="math-container">$\mathrm{ad}(x)(y)=[x,y]$</span>. That is, it asserts/requires that <span class="math-container">$$
\mathrm{ad}(x)\circ\mathrm{ad}(y)-\mathrm{ad}(y)\circ\mathrm{ad}(x)
\;=\; \mathrm{ad}([x,y])
$$</span>
The left-hand side is the natural Lie bracket <span class="math-container">$[A,B]=A\circ B-B\circ A$</span> inside the linear endomorphism algebra of <span class="math-container">$V$</span>.</p>
<p>This also broaches the issue of possibly hoping to "open up" <span class="math-container">$[A,B]$</span> into <span class="math-container">$AB-BA$</span>. This makes best sense if/when <span class="math-container">$A,B$</span> are elements of some associative algebra, such as square matrices. And, yes, if/when one has a vector subspace <span class="math-container">$V$</span> of square matrices of some size, closed under <span class="math-container">$[A,B]=AB-BA$</span>, then it does make sense to "open up" the brackets, and, yes, one can prove by direct computation that the Jacobi identity holds.</p>
<p>The Jacobi identity also can be <em>proven</em> to hold when the vector space is the tangent space at the identity of a (real or complex) Lie group.</p>
<p>The two cases overlap because the space of square matrices is identifiable as the Lie algebra of the multiplicatively invertible matrices of that size.</p>
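<p>That direct computation for matrices is easy to confirm numerically (an editorial addition, not part of the original answer; plain Python lists are used to keep it self-contained):</p>

```python
import random

N = 3
rng = random.Random(0)

def rand_mat():
    return [[rng.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(N)) for j in range(N)] for i in range(N)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(N)] for i in range(N)]

def comm(X, Y):
    # the bracket [X, Y] = XY - YX in the associative matrix algebra
    return [[mul(X, Y)[i][j] - mul(Y, X)[i][j] for j in range(N)] for i in range(N)]

A, B, C = rand_mat(), rand_mat(), rand_mat()
J = add(add(comm(A, comm(B, C)), comm(B, comm(C, A))), comm(C, comm(A, B)))
residual = max(abs(J[i][j]) for i in range(N) for j in range(N))
print(residual)  # zero up to floating-point rounding
```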
<p>EDIT: after a few minutes' fooling around, it's not so hard to make a three-dimensional algebra with a skew-symmetric operator <span class="math-container">$[,]$</span> which fails to satisfy the Jacobi identity. E.g., take basis <span class="math-container">$x,y,z$</span> and <span class="math-container">$[x,y]=x$</span>, <span class="math-container">$[x,z]=y$</span>, <span class="math-container">$[y,z]=0$</span>. Then
<span class="math-container">$$
\Big(\mathrm{ad}(x)\circ\mathrm{ad}(y)-\mathrm{ad}(y)\circ\mathrm{ad}(x)\Big)(z)
\;=\; [x,[y,z]]-[y,[x,z]] \;=\; [x,0] - [y,y] \;=\; 0
$$</span>
while
<span class="math-container">$$
[[x,y],z] \;=\; [x,z] \;=\; y \;\not=\; 0
$$</span></p>
|
370,007 | <p>A river boat can travel 20 km per hour in still water. The boat travels 30 km upstream against the current, then turns around and travels the same distance back with the current. If the total trip took 7.5 hours, what is the speed of the current? Solve this question algebraically as well as graphically.</p>
<p>I started the algebraic solution with:
$x=(v_{\text{still}}-v_{\text{current}})\,t_1$ (when it goes upstream),
$x=(v_{\text{still}}+v_{\text{current}})\,t_2$ (when it goes back downstream)...</p>
<p>I have the same question on a quiz in 1 hour and I need to know how to do this, please show a solution :D thanks</p>
| Diego | 301,198 | <p>Suppose $(ab)^t=a^t b^t=1$. Then $t \mid mn$. Note that $t$ cannot be a multiple of $m$ and not of $n$, since then $a^tb^t=b^t \neq 1$. Now suppose $t$ is neither a multiple of $n$ nor a multiple of $m$. Then let $n=p_{1}^{\alpha_1}\cdots p_k^{\alpha_k}$ and $m=q_1^{\beta_1}\cdots q_n^{\beta_n}$. Then $t$ takes a proper piece $r$ of $n$ and a proper piece $s$ of $m$. Since $a^t=b^{-t}$, $\operatorname{ord}(a^t)=\operatorname{ord}(b^{-t})=\operatorname{ord}(b^t)$, but $\operatorname{ord}(a^t)$ is what $r$ lacks from $n$ (which is a product of $p_k$'s), while $\operatorname{ord}(b^t)$ is a product of $q_i$'s, which is a contradiction because $(n,m)=1$.</p>
|
936,138 | <p>I need help approaching a proof which deals with inequalities:</p>
<p>If p and r are the precision and recall of a test, then the F1 measure of the test is
defined to be
$$F(p, r) = \frac{2pr}{p+r}$$</p>
<p>Prove that, for all positive reals p, r, and t, if t ≥ r then F(p, t) ≥ F(p, r)</p>
<p>What's the first step to approaching this problem? Do I need to look at this with different cases? </p>
| Khosrotash | 104,171 | <p>Note that $$\frac{1}{p}+\frac{1}{r}=\frac{p+r}{pr},$$ so $$F(p,r)=\frac{2}{\frac{1}{p}+\frac{1}{r}},$$ i.e. $F$ is the harmonic mean of $p$ and $r$. For fixed $p>0$, if $t\geq r>0$ then $\frac{1}{p}+\frac{1}{t}\leq\frac{1}{p}+\frac{1}{r}$, hence $F(p,t)\geq F(p,r)$.</p>
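<p>A brute-force check of the claimed monotonicity (an editorial addition, not part of the original answer):</p>

```python
import random

def F(p, r):
    # the F1 measure: harmonic mean of precision and recall
    return 2 * p * r / (p + r)

rng = random.Random(1)
for _ in range(1000):
    p = rng.uniform(0.01, 10.0)
    r = rng.uniform(0.01, 10.0)
    t = r + rng.uniform(0.0, 10.0)          # t >= r
    assert F(p, t) >= F(p, r) - 1e-12       # monotone in the second argument
print("F(p, t) >= F(p, r) held on 1000 random samples")
```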
|
3,416,895 | <p>here's the relevant question: <a href="https://math.stackexchange.com/q/193157/716946">If $\sigma_n=\frac{s_1+s_2+\cdots+s_n}{n}$ then $\operatorname{{lim sup}}\sigma_n \leq \operatorname{lim sup} s_n$</a></p>
<p>In the accepted answer, <strong>doesn't the last inequality only work if <span class="math-container">$\sup_{l\geq k}s_l$</span> is nonnegative?</strong>
The "last inequality" I'm referring to is this:
<span class="math-container">$$\frac 1n\sum_{j=1}^ks_j+\frac{n-k}n\sup_{l\geqslant k}s_l\leqslant \frac 1n\sum_{j=1}^ks_j+\sup_{l\geqslant k}s_l.$$</span></p>
<p>I ran into this issue when trying to prove the analogous statement for liminf, because in the case of liminf I could only get a similar inequality if <span class="math-container">$\inf_{l\geq k}s_l \leq 0$</span>, as follows:</p>
<p><span class="math-container">$$\sigma_n=
\frac 1n\sum_{j=1}^ks_j+\frac 1n\sum_{j=k+1}^ns_j
\geqslant \frac 1n\sum_{j=1}^ks_j+\frac{n-k}n\inf_{l\geqslant k}s_l
$$</span>
From here, if <span class="math-container">$\inf_{l\geq k}s_l \leq 0$</span> then I could continue and write
<span class="math-container">$\geq\frac 1n\sum_{j=1}^ks_j+\inf_{l\geqslant k}s_l$</span>.</p>
<p>Could someone clarify please?</p>
| user284331 | 284,331 | <p><span class="math-container">\begin{align*}
&\limsup_{n}\left(\dfrac{1}{n}\sum_{j=1}^{k}s_{j}+\dfrac{n-k}{n}\sup_{l\geq k}s_{l}\right)\\
&\leq\limsup_{n}\dfrac{1}{n}\sum_{j=1}^{k}s_{j}+\limsup_{n}\dfrac{n-k}{n}\sup_{l\geq k}s_{l}\\
&=\lim_{n}\dfrac{1}{n}\sum_{j=1}^{k}s_{j}+\lim_{n}\dfrac{n-k}{n}\sup_{l\geq k}s_{l}\\
&=\sup_{l\geq k}s_{l},
\end{align*}</span>
so you still obtain the inequality, with no sign assumption on $\sup_{l\geq k}s_l$ needed.</p>
|
1,970,235 | <p>If I remember right, $f(x)$ is continuous at $x=a$ if</p>
<ol>
<li><p>$\lim_{x \to a} f(x)$ exists</p></li>
<li><p>$f(a)$ exists</p></li>
<li><p>$f(a) = \lim_{x \to a} f(x)$</p></li>
</ol>
<p>So $\lim_{x \to 0^{-}} \sqrt{x}$ exists? Thus $\lim_{x \to 0^{-}} \sin(\sqrt{x})$ <a href="https://math.stackexchange.com/questions/1929450/prove-lim-x-to-0-sin-sqrtx-does-not-exist">exists</a>?</p>
| avs | 353,141 | <p>If we define $f(x) = \sqrt{x}$ only for $x \geq 0$, then the function is <em>continuous on the right</em>; i.e., there exists the right-sided limit
$$
\lim_{x \rightarrow 0+} f(x).
$$
As for the left-sided limit, it all depends on how $f(x)$ is defined for $x<0$.</p>
|
2,189,832 | <p>Take the matrix
$$
\begin{matrix}
1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 \\
\end{matrix}
$$</p>
<p>I tried to calculate the eigenvalues of this matrix and got to a point where I found that the eigenvalues are $1,0,2$, but Wolfram says it has only $4$ and $0$. I have no idea why. Wolfram:<a href="https://i.stack.imgur.com/5Ukn1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5Ukn1.png" alt="enter image description here"></a></p>
<p>In addition, can this matrix be diagonalized?</p>
<p>Thanks!</p>
<p>my calculations:</p>
<p>I calculated its characteristic polynomial which is the determinant of the matrix A.
\begin{matrix}
1-t & 1 & 1 & 1 \\
1 & 1-t & 1 & 1 \\
1 & 1 & 1-t & 1 \\
1 & 1 & 1 & 1-t \\
\end{matrix}
I got to $\det(t \cdot I - A) = t(1-t)^2(t-2)$, which means that the eigenvalues are $1,0,2$. Where did I go wrong?</p>
| Ofek Gillon | 230,501 | <p>How did you calculate the eigenvalues? </p>
<p>It is easy to see that Wolfram is correct by multiplying $Av$ for each vector.</p>
<p>More over, you can see that the rank of the matrix is $1$, meaning there are $3$ linearly independent vectors that satisfy</p>
<p>$$Av = 0 = 0v$$</p>
<p>Meaning there must be $3$ independent eigenvectors with eigenvalue $0$, as Wolfram suggests. (And in your answer this isn't possible, because an $n \times n$ matrix has at most $n$ linearly independent eigenvectors; the $3$ eigenvectors with $\lambda=0$ forced by the rank, together with at least one for $\lambda = 1$ and one for $\lambda = 2$, would give at least $5$ independent eigenvectors in a $4$-dimensional space.)</p>
<p>Now, about your second question: yes, the matrix can be diagonalized. A matrix is diagonalizable if and only if it has $n$ independent eigenvectors, and here we have $4$, just the right amount.</p>
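<p>A quick exact check of the eigenvalues (an editorial addition, not part of the original answer): the all-ones $4\times4$ matrix sends $(1,1,1,1)$ to $4\cdot(1,1,1,1)$ and kills a $3$-dimensional space of difference vectors, so the eigenvalues are $4$ once and $0$ three times.</p>

```python
A = [[1] * 4 for _ in range(4)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

# eigenvector for eigenvalue 4
assert matvec(A, [1, 1, 1, 1]) == [4, 4, 4, 4]
# three independent eigenvectors for eigenvalue 0 (the kernel has dimension 3)
for w in ([1, -1, 0, 0], [0, 1, -1, 0], [0, 0, 1, -1]):
    assert matvec(A, w) == [0, 0, 0, 0]
print("eigenvalues are 4 (once) and 0 (three times)")
```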
|
1,925,867 | <p>I can't find any. For saying $H$ is a subgroup of $G$ we have notation but it seems none exists for subrings.</p>
| quid | 85,306 | <p>This is correct. There is a notation for ideal, yet no notation for subring, as far as I know.
One just writes: "Let $S \subset R$ be a subring." </p>
<p>Anyway, even the notation for subgroup, ideal and the like is not used all that much. "Let $H\subset G$ be a subgroup." is what I see all the time. </p>
|
1,925,867 | <p>I can't find any. For saying $H$ is a subgroup of $G$ we have notation but it seems none exists for subrings.</p>
| Keith Kearnes | 310,334 | <p>I have used and seen $S\leq R$ to say that $S$ is a substructure of $R$.</p>
|
1,119,027 | <p>I'm trying to learn Bayes's formula, and am coming up with some poker problems to learn this.</p>
<p>My problem is as following: given a $H4,H5$ ($4$ of hearts, $5$ of hearts) hand, what are the odds that I'll hit a straight flush?</p>
<p>My reasoning is like this:</p>
<p>$$\Pr(\text{straight flush}|H4H5) = (\Pr(H4H5|\text{straight flush}) \cdot \Pr(\text{straight flush})) / \Pr(H4H5)$$</p>
<p>Now, off <a href="http://en.wikipedia.org/wiki/Poker_probability" rel="nofollow">of wikipedia</a>, I learnt that:</p>
<p>$$P(\text{straight flush}) = 0.00139$$</p>
<p>Given that there are 36 ways to achieve a straight flush, and only 4 ways to have a straight flush with $H4,H5$ (namely $HA-H5, H2-6, H3-7, H4-8$), I calculated that:</p>
<p>$$\Pr(H4H5|\text{straight flush}) = 4/36 = 1/9$$</p>
<p>Now, how do we find $\Pr(H4H5)$? My reasoning was: There's a $2/52$ chance that we get dealt $H4$ or $H5$ as the first card, and then a $1/51$ chance that we get dealt $H4$ or $H5$ as the second card.</p>
<p>However, filling out those numbers says there is a 15% chance that this will happen. That number seems way too high to me. Surely, somewhere in my reasoning I'm making a mistake. Who can help?</p>
| Ross Millikan | 1,827 | <p>Hint: You still have three cards to draw out of $50$. How many combinations of three cards result in a straight flush? How many total draws are there?</p>
|
3,223,705 | <p>I have a task for school and we need to plot a polar function with MATLAB. The function is <span class="math-container">$r = 1-2\cos(6\theta)$</span>.</p>
<p>I did this and I'm getting exactly the same as on Wolfram Alpha: <a href="https://www.wolframalpha.com/input/?i=polar+plot+r%3D1-2" rel="nofollow noreferrer">https://www.wolframalpha.com/input/?i=polar+plot+r%3D1-2</a>\cos(6theta)</p>
<p>I've used for <span class="math-container">$\theta$</span> values between <span class="math-container">$0$</span> and <span class="math-container">$2\pi$</span>. Now the question explicitly says we need to take into account the period and the domain of the function, and we can use the logic function <span class="math-container">$r>0$</span> for this. But I didn't do this because I just took values between <span class="math-container">$0$</span> and <span class="math-container">$2\pi$</span> for <span class="math-container">$\theta$</span>. Am I doing something wrong here?</p>
<p>Thanks.</p>
| Bernard | 202,857 | <p>As the function <span class="math-container">$r$</span> has period <span class="math-container">$\frac\pi3$</span>, the curve is invariant by rotations of angle <span class="math-container">$\frac\pi 3$</span>. Hence you have to draw a petal of the curve for <span class="math-container">$0\le\theta\le\frac\pi 3$</span>, and deduce the five other petals by successive rotations.</p>
|
1,186,825 | <p>Prove $$\lim_{n\to\infty}\int_0^1 \left(\cos{\frac{1}{x}} \right)^n\mathrm dx=0$$</p>
<p>I tried, but failed. Any help will be appreciated.</p>
<p>At most points $(\cos 1/x)^n\to 0$, but how can I prove that the integral tends to zero clearly and convincingly?</p>
| xpaul | 66,420 | <p>Note that if $x=\frac{1}{k\pi}$ ($k\in\mathbb{N}$), $\cos\frac{1}{x}=(-1)^k$. Fix $\varepsilon\in(0,1)$ such that $\frac1{\varepsilon \pi}$ is not an integer. Let $M=[\frac1{\varepsilon \pi}]$. Clearly if $k\le M$, then $\frac{1}{k\pi}\in(\varepsilon,1]$.
let
$$ I_k=(\frac{1}{k\pi}-\frac{\varepsilon}{2^k}, \frac{1}{k\pi}+\frac{\varepsilon}{2^k}). $$
Write $[0,1]$ as
$$ [0,1]=[0,\varepsilon]\cup([\varepsilon,1]\setminus\cup_{k=1}^MI_k)\cup\cup_{k=1}^M I_k. $$
Note
$$ \bigg|\int_{I_k}\left(\cos\frac{1}{x}\right)^ndx\bigg|\le|I_k|=\frac{\varepsilon}{2^{k-1}}. $$
If $x\in [\varepsilon,1]\setminus\cup_{k=1}^M I_k$, $|\cos\frac{1}{x}|<1$ and hence by the bounded convergence theorem,
$$ \lim_{n\to\infty}\int_{[\varepsilon,1]\setminus\cup_{k=1}^M I_k} \left(\cos\frac{1}{x}\right)^ndx=0 $$
and hence for the above $\varepsilon$, there is $N>0$ such that when $n>N$,
$$ \bigg|\int_{[\varepsilon,1]\setminus\cup_{k=1}^M I_k} \left(\cos\frac{1}{x}\right)^ndx\bigg|<\varepsilon. $$
Thus when $n>N$,
\begin{eqnarray}
\bigg|\int_{[0,1]} \left(\cos\frac{1}{x}\right)^ndx\bigg|&\le&\bigg|\int_{[0,\varepsilon]} \left(\cos\frac{1}{x}\right)^ndx\bigg|+\bigg|\int_{[\varepsilon,1]\setminus\cup_{k=1}^M I_k} \left(\cos\frac{1}{x}\right)^ndx\bigg|\\
&&+\sum_{k=1}^M\bigg|\int_{I_k}\left(\cos\frac{1}{x}\right)^ndx\bigg|\\
&\le&2\varepsilon+\sum_{k=1}^\infty\frac{\varepsilon}{2^{k-1}}=4\varepsilon
\end{eqnarray}
Therefore
$$ \lim_{n\to\infty}\int_{[0,1]} \left(\cos\frac{1}{x}\right)^ndx=0 $$</p>
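<p>A Monte Carlo illustration of the convergence (an editorial addition, not part of the original answer; even exponents are used so the integrand is nonnegative and the decay is easy to see):</p>

```python
import math, random

rng = random.Random(0)
xs = [1.0 - rng.random() for _ in range(200_000)]  # uniform nodes on (0, 1]

def estimate(n):
    # E[cos(1/X)^n] estimates the integral; the same nodes are reused
    # for every n, so the estimates are pointwise monotone in n
    return sum(math.cos(1.0 / x) ** n for x in xs) / len(xs)

for n in (2, 10, 100, 1000):
    print(n, estimate(n))
```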
|
2,789,002 | <p>How can I calculate the height of the tree? I am with geometric proportionality.</p>
<p><img src="https://i.stack.imgur.com/m4zMD.png"></p>
| fleablood | 280,126 | <p><a href="https://i.stack.imgur.com/R4bUb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R4bUb.jpg" alt="enter image description here"></a></p>
<p>$FG \approx DE$ as $AG \approx AE$.</p>
<p>And $FG \approx AG$ as $DE \approx AE$ and as $BC \approx AC$.</p>
<p>Or perhaps most sophisticatedly: If $\frac {BC = 1.60}{CE=16}*k = DE=17.2$ then $\frac {DE=17.2}{EG=10} *k = FG$ and $\frac{BC=1.6}{CG = 16 + 10}*k = FG$.</p>
|
3,029,208 | <p>Hi, I have been trying to find a combinatorial proof of <span class="math-container">${kn \choose 2}= k{n \choose 2}+n^2{k \choose 2}$</span>. </p>
| user | 505,767 | <p>Consider a set of <span class="math-container">$k\cdot n$</span> elements allocated in a grid with <span class="math-container">$n$</span> rows and <span class="math-container">$k$</span> columns then </p>
<ul>
<li>on the LHS we have the ways to choose <span class="math-container">$2$</span> elements among all of them</li>
<li>on the RHS we have the cases with $2$ elements chosen from the same column, <span class="math-container">$k{n \choose 2}$</span>; the cases with $2$ elements chosen from the same row, <span class="math-container">$n{k \choose 2}$</span>; and the remaining cases with $2$ elements chosen from different rows and different columns, <span class="math-container">$n(n-1){k \choose 2}$</span>; indeed</li>
</ul>
<p><span class="math-container">$$k{n \choose 2}+n{k \choose 2}+n(n-1){k \choose 2}=k{n \choose 2}+n^2{k \choose 2}$$</span></p>
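<p>A brute-force check of the identity (an editorial addition, not part of the original answer):</p>

```python
from math import comb

for n in range(1, 10):
    for k in range(1, 10):
        lhs = comb(k * n, 2)
        # same column + same row + different row and column
        rhs = k * comb(n, 2) + n * comb(k, 2) + n * (n - 1) * comb(k, 2)
        assert lhs == rhs == k * comb(n, 2) + n * n * comb(k, 2)
print("identity verified for 1 <= n, k <= 9")
```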
|
1,704,410 | <p>If we have two groups <span class="math-container">$G,H$</span> the construction of the direct product is quite natural. If we think about the most natural way to make the Cartesian product <span class="math-container">$G\times H$</span> into a group it is certainly by defining the multiplication</p>
<p><span class="math-container">$$(g_1,h_1)(g_2,h_2)=(g_1g_2,h_1h_2),$$</span></p>
<p>with identity <span class="math-container">$(1,1)$</span> and inverse <span class="math-container">$(g,h)^{-1}=(g^{-1},h^{-1})$</span>.</p>
<p>On the other hand we have the construction of the semidirect product which is as follows: consider <span class="math-container">$G$</span>,<span class="math-container">$H$</span> groups and <span class="math-container">$\varphi : G\to \operatorname{Aut}(H)$</span> a homomorphism, we define the semidirect product group as the Cartesian product <span class="math-container">$G\times H$</span> together with the operation</p>
<p><span class="math-container">$$(g_1,h_1)(g_2,h_2)=(g_1g_2,h_1\varphi(g_1)(h_2)),$$</span></p>
<p>and we denote the resulting group as <span class="math-container">$G\ltimes H$</span>.</p>
<p>We then show that this is a group and show many properties of it. My point here is the intuition.</p>
<p>This construction doesn't seem quite natural to make. There are many operations to turn the Cartesian product into a group. The one used when defining the direct product is the most natural. Now, why do we give special importance to this one?</p>
<p>What is the intuition behind this construction? What are we achieving here and why this particular way of making the Cartesian product into a group is important?</p>
| Alex Provost | 59,556 | <p>Forget about the actual construction of the semidirect product for now. I argue that the semidirect product is important because it arises naturally and beautifully in many areas of mathematics. I will list below many examples, and I urge you to find a few that interest you and look at them in detail.</p>
<p>Before doing that let me just give the following extra motivation: say you have a group <span class="math-container">$G$</span>, and you find two subgroups <span class="math-container">$H,K$</span> such that every element of <span class="math-container">$G$</span> can be uniquely written as a product <span class="math-container">$hk$</span> for <span class="math-container">$h \in H$</span>, <span class="math-container">$k \in K$</span>. In other words, you have a set-theoretic bijection between <span class="math-container">$G$</span> and <span class="math-container">$H \times K$</span>. Then certainly you'd want to understand <span class="math-container">$G$</span> by studying its smaller components <span class="math-container">$H$</span> and <span class="math-container">$K$</span>. One way to achieve this would be to find a suitable group structure on <span class="math-container">$H \times K$</span> intertwining the structures of <span class="math-container">$H$</span> and <span class="math-container">$K$</span> such that the above bijection becomes a group isomorphism. This <a href="https://en.wikipedia.org/wiki/Zappa%E2%80%93Sz%C3%A9p_product" rel="noreferrer">can be done</a>; however doing things in this generality becomes rapidly tedious. If instead we restrict our attention to such decompositions with <span class="math-container">$H$</span> normal in <span class="math-container">$G$</span>, the problem becomes much more manageable. In this case we have what we call a split exact sequence <span class="math-container">$$1 \to H \to G \to K \to 1,$$</span>
and <span class="math-container">$G$</span> is called a semidirect product of <span class="math-container">$H$</span> and <span class="math-container">$K$</span>. An existence and uniqueness theorem gives us <em>all</em> the possible semidirect products one can obtain from <span class="math-container">$H$</span> and <span class="math-container">$K$</span> through the group of homomorphisms from <span class="math-container">$K$</span> to <span class="math-container">$\operatorname{Aut}H$</span>. Note that <span class="math-container">$K = \mathbb{Z}/2$</span> appears often in practice, because this guarantees normality of <span class="math-container">$H$</span>. Now here are some examples:</p>
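<p>Before the examples, a concrete sanity check (an editorial addition, not part of the original answer): one can verify directly that the twisted multiplication really is a group law, and that it differs from the direct product. Here <span class="math-container">$\mathbb{Z}/5 \rtimes \mathbb{Z}/2$</span> with <span class="math-container">$\varphi(1) = (x \mapsto -x)$</span>, i.e. the dihedral group <span class="math-container">$D_5$</span>:</p>

```python
from itertools import product

n = 5  # Z/5 x| Z/2 with the inversion action: the dihedral group D_5

def mul(x, y):
    (a, s), (b, t) = x, y
    # (a, s)(b, t) = (a + phi(s)(b), s + t), with phi(s)(b) = (-1)^s b
    return ((a + (b if s == 0 else -b)) % n, (s + t) % 2)

G = list(product(range(n), range(2)))
e = (0, 0)

assert all(mul(mul(x, y), z) == mul(x, mul(y, z)) for x in G for y in G for z in G)
assert all(mul(e, x) == x == mul(x, e) for x in G)
assert all(any(mul(x, y) == e for y in G) for x in G)    # inverses exist
assert any(mul(x, y) != mul(y, x) for x in G for y in G)  # non-abelian
print("Z/5 x| Z/2 is a non-abelian group of order", len(G))
```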
<ul>
<li><p>The symmetric group <span class="math-container">$S_n = A_n \rtimes \mathbb{Z}/2$</span>. The exact sequence is <span class="math-container">$$1 \to A_n \to S_n \xrightarrow{\mathit{sign}} \mathbb{Z}/2 \to 1.$$</span></p></li>
<li><p>The dihedral group <span class="math-container">$D_n = \mathbb{Z}/n \rtimes \mathbb{Z}/2$</span>. The exact sequence is <span class="math-container">$$1 \to \mathbb{Z}/n \to D_n \xrightarrow{\mathit{det}} \mathbb{Z}/2 \to 1.$$</span></p></li>
<li><p>The infinite dihedral group <span class="math-container">$D_\infty = \mathbb{Z} \rtimes \mathbb{Z}/2$</span>. The exact sequence depends on your explicit construction. You may take <span class="math-container">$$1 \to \mathbb{Z} \to \mathbb{Z}/2 * \mathbb{Z}/2 \to \mathbb{Z}/2 \to 1$$</span> or <span class="math-container">$$1 \to \mathbb{Z} \to A(1,\mathbb{Z}) \to \mathbb{Z}/2 \to 1,$$</span> where <span class="math-container">$A(1,\mathbb{Z})$</span> is the group of affine transformations of the form <span class="math-container">$x \mapsto ax + b$</span>, where <span class="math-container">$a \in \{ \pm 1 \} \cong \mathbb{Z}/2$</span> and <span class="math-container">$b \in \mathbb{Z}$</span>.</p></li>
<li><p>Many matrix groups, thanks to the determinant map. For example, <span class="math-container">$G = \operatorname{GL}(n,\mathbb{F})$</span>, <span class="math-container">$O(n,\mathbb{F})$</span> and <span class="math-container">$U(n)$</span> have respective subgroups <span class="math-container">$H = \operatorname{SL}(n,\mathbb{F}),\operatorname{SO}(n,\mathbb{F}),\operatorname{SU}(n)$</span> and <span class="math-container">$K = \mathbb{F}^\times,\mathbb{Z}/2,U(1)$</span>.</p></li>
<li><p>The fundamental group of the Klein bottle is <span class="math-container">$G = \langle x,y \mid xyx = y \rangle$</span>. This is just the nontrivial semidirect product of <span class="math-container">$\mathbb{Z}$</span> with itself. Interestingly, the other (trivial) semidirect product <span class="math-container">$\mathbb{Z}^2$</span> is the fundamental group of the other closed surface of Euler characteristic <span class="math-container">$0$</span>, namely the torus.</p></li>
<li><p>The affine group <span class="math-container">$A(n,\mathbb{F}) = \mathbb{F}^n \rtimes \operatorname{GL}(n,\mathbb{F})$</span>. Its elements are transformations <span class="math-container">$\mathbb{F}^n \to \mathbb{F}^n$</span> of the form <span class="math-container">$x \mapsto Ax + b$</span>, with <span class="math-container">$A$</span> an invertible matrix and <span class="math-container">$b$</span> a translation vector. The exact sequence is <span class="math-container">$$1 \to \mathbb{F}^n \to A(n,\mathbb{F}) \xrightarrow{f} \operatorname{GL}(n,\mathbb{F}) \to 1$$</span>
where <span class="math-container">$f$</span> forgets the affine structure (the translation part).</p></li>
<li><p>The hyperoctahedral group <span class="math-container">$O(n,\mathbb{Z})$</span> is the group of signed permutation matrices. We have two decompositions <span class="math-container">$O(n,\mathbb{Z}) \cong \operatorname{SO}(n,\mathbb{Z}) \rtimes \mathbb{Z}/2$</span> and <span class="math-container">$O(n,\mathbb{Z}) \cong (\mathbb{Z}/2)^n \rtimes S_n$</span>. In the corresponding exact sequences the surjective map is respectively the determinant homomorphism and the "forget all the signs" homomorphism.</p></li>
</ul>
|
1,704,410 | <p>If we have two groups <span class="math-container">$G,H$</span> the construction of the direct product is quite natural. If we think about the most natural way to make the Cartesian product <span class="math-container">$G\times H$</span> into a group it is certainly by defining the multiplication</p>
<p><span class="math-container">$$(g_1,h_1)(g_2,h_2)=(g_1g_2,h_1h_2),$$</span></p>
<p>with identity <span class="math-container">$(1,1)$</span> and inverse <span class="math-container">$(g,h)^{-1}=(g^{-1},h^{-1})$</span>.</p>
<p>On the other hand we have the construction of the semidirect product which is as follows: consider <span class="math-container">$G$</span>,<span class="math-container">$H$</span> groups and <span class="math-container">$\varphi : G\to \operatorname{Aut}(H)$</span> a homomorphism, we define the semidirect product group as the Cartesian product <span class="math-container">$G\times H$</span> together with the operation</p>
<p><span class="math-container">$$(g_1,h_1)(g_2,h_2)=(g_1g_2,h_1\varphi(g_1)(h_2)),$$</span></p>
<p>and we denote the resulting group as <span class="math-container">$G\ltimes H$</span>.</p>
<p>We then show that this is a group and show many properties of it. My point here is the intuition.</p>
<p>This construction doesn't seem quite natural to make. There are many operations to turn the Cartesian product into a group. The one used when defining the direct product is the most natural. Now, why do we give special importance to this one?</p>
<p>What is the intuition behind this construction? What are we achieving here and why this particular way of making the Cartesian product into a group is important?</p>
| Ari Royce Hidayat | 435,467 | <p>Many have given good answers here, so I just want to answer specifically for the intuition behind it.</p>
<p>The semidirect product came to light when it was observed that if a group <span class="math-container">$H$</span> is a normal subgroup of a bigger group, another group <span class="math-container">$K$</span> is also a subgroup of it (not necessarily normal), and <span class="math-container">$H \cap K = 1$</span>, then the product of those two subgroups is another group <span class="math-container">$HK$</span> with order <span class="math-container">$\frac{|H||K|}{|H \cap K|} = |H||K|$</span> (as <span class="math-container">$H \cap K = 1$</span>).</p>
<p>So we know that every element of the new group <span class="math-container">$HK$</span> can be written uniquely in the form <span class="math-container">$hk$</span> where <span class="math-container">$h \in H$</span> and <span class="math-container">$k \in K$</span>. Because of that, any product of elements of <span class="math-container">$HK$</span> can always be rewritten in the form <span class="math-container">$hk$</span>, e.g.:</p>
<p><span class="math-container">$(h_1 k_1)(h_2 k_2) = h_1 k_1 h_2 (k_1^{-1} k_1) k_2 = h_1 (k_1 h_2 k_1^{-1}) k_1 k_2$</span></p>
<p>As <span class="math-container">$H$</span> is a normal subgroup, <span class="math-container">$k_1 h_2 k_1^{-1} \in H$</span>, so it can be re-written as:</p>
<p><span class="math-container">$(h_1 k_1)(h_2 k_2) = (h_1 (k_1 h_2 k_1^{-1})) (k_1 k_2) = h_3 k_3$</span></p>
<p>where <span class="math-container">$h_3 = h_1 (k_1 h_2 k_1^{-1}) \in H$</span> and <span class="math-container">$k_3 = k_1 k_2 \in K$</span>.</p>
<p>But we notice that this left conjugation by <span class="math-container">$k_1$</span> which is <span class="math-container">$k_1 h_2 k_1^{-1}$</span> is an automorphism of <span class="math-container">$H$</span>, so of course any automorphism of <span class="math-container">$H$</span> would do the job. If we define a homomorphism:</p>
<p><span class="math-container">$\varphi: K \rightarrow Aut(H)$</span>,</p>
<p>that homomorphism can be used in the place of left conjugation by <span class="math-container">$k_1$</span> and again achieve the same form of <span class="math-container">$hk$</span>. Rewriting the above derivation with direct product notation and the defined homomorphism, we would get:</p>
<p><span class="math-container">$(h_1, k_1)(h_2, k_2) = (h_1 \: \varphi(k_1)(h_2), k_1 k_2) = (h_3, k_3)$</span></p>
<p>where <span class="math-container">$h_3 = h_1 \: \varphi(k_1)(h_2) \in H$</span> and <span class="math-container">$k_3 = k_1 k_2 \in K$</span>.</p>
<p>which is exactly the semidirect product multiplication you asked about.</p>
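<p>The multiplication rule derived above can be verified mechanically. Below is a short Python sketch (my own illustration, not part of the original post; the names <code>phi</code> and <code>mul</code> are made up) that builds $\mathbb{Z}/5 \rtimes \mathbb{Z}/2$ with $\varphi(1)$ acting by inversion (i.e. the dihedral group $D_5$), and checks the group axioms by brute force.</p>

```python
from itertools import product

N = 5  # order of the normal subgroup H = Z/N (written additively)

def phi(k, h):
    """phi: K -> Aut(H); phi(0) is the identity, phi(1) acts by inversion."""
    return h % N if k == 0 else (-h) % N

def mul(a, b):
    """(h1, k1)(h2, k2) = (h1 * phi(k1)(h2), k1 k2), additively in H."""
    (h1, k1), (h2, k2) = a, b
    return ((h1 + phi(k1, h2)) % N, (k1 + k2) % 2)

G = list(product(range(N), range(2)))  # the Cartesian product H x K as a set
assert len(G) == 2 * N                 # |H||K| elements

# associativity (this is exactly where phi being a homomorphism is used)
assert all(mul(mul(a, b), c) == mul(a, mul(b, c))
           for a in G for b in G for c in G)

# identity and inverses
e = (0, 0)
assert all(mul(e, a) == a == mul(a, e) for a in G)
assert all(any(mul(a, b) == e for b in G) for a in G)

# the group is non-abelian, so this is not the direct product
assert any(mul(a, b) != mul(b, a) for a in G for b in G)
```

Swapping in the trivial homomorphism ($\varphi(k)=\mathrm{id}$ for all $k$) recovers the direct product, which is one way to see that the direct product is the special case of this construction.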
|
2,103,436 | <p>Suppose we have the vector space $V$ and the non-empty subspace $W$. I know there is a theorem that states that if $\bar{v}_1$ and $\bar{v}_2$ are vectors in a subspace $W$ then the vector $(\bar{v}_1 + \bar{v}_2)$ will also be in the subspace $W$. However is the converse true? Would having the vector $(\bar{v}_1 + \bar{v}_2)$ in $W$ imply that $\bar{v}_1$ and $\bar{v}_2$ are also in $W$?</p>
| Siong Thye Goh | 306,553 | <p>This is not true. Consider the trivial subspace that consist of only the zero vector.</p>
<p>Pick any non-zero vector $v$; it is not inside $W$, but $v+(-v)=0\in W$.</p>
|
2,316,042 | <p><strong>Problem:</strong> Consider the set of all those vectors in $\mathbb{C}^3$ each of whose coordinates is either $0$ or $1$; how many different bases does this set contain?</p>
<p>In general, if $B$ is the set of all basis vectors then,
$$B=\{(x_1,x_2,x_3),(y_1,y_2,y_3),(z_1,z_2,z_3)\}.$$</p>
<p>There are $8(6\cdot8+7)=440$ possible $B$s that contain unique elements with coordinates $0$ and $1.$ Now there are $6\cdot 8+7$ sets that contain the element $(0,0,0)$, which makes the set $B$ linearly dependent, and thus we are left with $385$ sets. Beyond this, I am finding it difficult to compute the final answer. Any hint/suggestion will be much appreciated. </p>
| deinst | 943 | <p>Knowing that none of x,y,z can be (0,0,0) there are only 7 choices for each. Since they must be different you only have $\binom{7}{3}=35$ choices to make.</p>
<p>This is small enough to sort through by hand.</p>
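<p>For completeness, the sorting can also be done by machine. A small Python sketch (my own addition, not from the original answer): enumerate all $\binom{7}{3}=35$ triples of distinct nonzero $0/1$-vectors and count those with nonzero determinant, i.e. those linearly independent over $\mathbb{C}$. The count comes out below $35$ because a few triples (e.g. $e_1,e_2,e_1+e_2$) are dependent even over $\mathbb{C}$.</p>

```python
from itertools import combinations, product

# the 7 nonzero vectors in C^3 with coordinates 0 or 1
vectors = [v for v in product((0, 1), repeat=3) if any(v)]

def det3(a, b, c):
    """Integer 3x3 determinant with rows a, b, c."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

triples = list(combinations(vectors, 3))
bases = [t for t in triples if det3(*t) != 0]

print(len(triples), len(bases))  # 35 candidate triples; those with det != 0 are bases
```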
|
4,176,646 | <p>I need to find the directional derivatives for all vectors <span class="math-container">$u=[u_1\ \ u_2]\in \mathbb R^2$</span> with <span class="math-container">$\|u\|=1$</span> at <span class="math-container">$P_0=(0,0)$</span>, and determine whether <span class="math-container">$f$</span> is differentiable at <span class="math-container">$P_0$</span>.</p>
<p><span class="math-container">$$f(x,y)=\begin{cases}
1 & y=x^2,x\neq 0\\
0 & \text{else}
\end{cases}$$</span></p>
<p>First of all, if <span class="math-container">$f$</span> is not continuous then can I always say it isn't differentiable?</p>
<p>And my attempt was this:</p>
<p><span class="math-container">$$\lim_{t\rightarrow 0} \frac {f(P_0+tu)-f(P_0)} t = \lim_{t\rightarrow 0}
\begin{cases}
\frac{1}{t} & \text{else}\\
0 & u_1=0 \text{ or } u_1^2\neq u_2\\
\end{cases}$$</span>
Does the fact that <span class="math-container">$\lim_{t\rightarrow 0}\frac {1}{t}$</span> does not exist say anything about f being differentiable? Because <span class="math-container">$D_if(P_0)$</span> both exist for <span class="math-container">$i=1,2$</span>.</p>
<p>So I'd like to know if my calculation is correct, and if the continuous statement is true.</p>
<p>Thanks!</p>
| Asher2211 | 742,113 | <p>The number of trailing zeros in binary notation is the highest power of <span class="math-container">$2$</span> that divides the number in decimal notation.</p>
<p><span class="math-container">$t(n)=\displaystyle \sum_{k=1}^\infty \left( 1-\left\lceil\left\{\frac{n}{2^k}\right\}\right\rceil\right)$</span> satisfies the above condition (where {} denotes the fractional part).</p>
<p>If <span class="math-container">$k$</span> is the highest power of <span class="math-container">$2$</span> that divides <span class="math-container">$n$</span>, then the first <span class="math-container">$k$</span> terms of <span class="math-container">$t(n)$</span> are <span class="math-container">$1$</span> while the rest of the terms are <span class="math-container">$0$</span>, so the formula for <span class="math-container">$t(n)$</span> gives the number of trailing zeros in binary notation.</p>
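<p>The formula can be checked directly. Here is a small Python sketch (my own, not from the answer; note the infinite sum may be truncated once $2^k > n$, since all later terms vanish, and the float fractional part is exact for moderate $n$):</p>

```python
def t(n):
    """Trailing binary zeros of n via the 1 - ceil({n / 2^k}) formula."""
    total = 0
    k = 1
    while 2 ** k <= n:                 # terms with 2^k > n are all 0
        frac = (n / 2 ** k) % 1        # fractional part {n / 2^k} in [0, 1)
        total += 1 - (0 if frac == 0 else 1)  # 1 - ceil(frac)
        k += 1
    return total

def trailing_zeros(n):
    """Reference: count trailing zero bits directly from the binary expansion."""
    return len(bin(n)) - len(bin(n).rstrip("0"))

assert all(t(n) == trailing_zeros(n) for n in range(1, 2000))
```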
|
244,433 | <p>I have a list:</p>
<pre><code>data = {{2.*10^-9, 0.0025}, {4.*10^-9, 0.0025}, {6.*10^-9, 0.0025}, {8.*10^-9, 0.0025}, {1.*10^-8, 0.0025}, {7.*10^-9, 0.0023}, {3.*10^-9, 0.0025},...}
</code></pre>
<p>And I wanted to remove every third pair and get</p>
<pre><code> newdata = {{2.*10^-9, 0.0025}, {4.*10^-9, 0.0025}, {8.*10^-9, 0.0025}, {1.*10^-8, 0.0025}, {3.*10^-9, 0.0025},...}
</code></pre>
| jmm | 57,731 | <pre><code>Table[
If[Mod[n, 3] != 0, data[[n]], Nothing], {n, 1, Length[data]}]
</code></pre>
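<p>The <code>Table</code>/<code>If</code>/<code>Nothing</code> idiom translates directly to other languages. For illustration only (my addition, not part of the answer), here is a Python analogue using the same 1-based positions:</p>

```python
data = [(2e-9, 0.0025), (4e-9, 0.0025), (6e-9, 0.0025), (8e-9, 0.0025),
        (1e-8, 0.0025), (7e-9, 0.0023), (3e-9, 0.0025)]

# keep entries whose 1-based position is not a multiple of 3
newdata = [x for n, x in enumerate(data, start=1) if n % 3 != 0]

assert newdata == [(2e-9, 0.0025), (4e-9, 0.0025), (8e-9, 0.0025),
                   (1e-8, 0.0025), (3e-9, 0.0025)]
```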
|
1,617,269 | <p>Let X and Y be independent random variables with probability density functions
$$f_X(x) = e^{-x} , x>0$$
$$f_Y(y) = 2e^{-2y} , y>0$$</p>
<p>Derive the PDF of $Z_1 = X + Y$</p>
<p>Other cases: $Z =\min(X,Y)$, $Z =1/Y^2$, $Z =e^{-2Y}$.</p>
<p>Just considering the 1st part, I understand to go from the fact that
$P(X + Y\le z)$ then $$\int_{0}^{z} f_{X,Y}(z-y,y) \,dy $$ since they are independent I integrate $f_X(z-y)\, f_Y(y)$ wrt $y$. I come to $-e^{-z}$</p>
<p>This is as much as I have managed to pick up, but I am still very unsure.
What would I need to look out for in the other cases? For $\min(X,Y)$, I'd have no idea how to start.</p>
| Graham Kemp | 135,106 | <p>You have obtained:$$\begin{align}
f_{X+Y}(z) & = \frac{\operatorname d \mathsf P(X+Y\leq z)}{\operatorname d z}
\\[1ex] & = \frac{\operatorname d }{\operatorname d z} \int_0^z \int_0^{z-x} f_X(x)\,f_Y(y)\operatorname d y\operatorname d x
\\[1ex] & = \int_0^zf_X(x)\,f_Y(z-x)\operatorname d x
\\[1ex] & =\ldots
\end{align}$$
Similarly
$$\begin{align}
f_{\min(X,Y)}(z) & =\frac{\operatorname d 1-\mathsf P(X> z, Y> z)}{\operatorname d z}
\\[1ex]
& = \frac{\operatorname d \mathsf P(X\leq z)}{\operatorname d z}\cdot\mathsf P(Y> z)+ \mathsf P(X> z)\cdot\frac{\operatorname d \mathsf P(Y\leq z)}{\operatorname d z}
\\[1ex]
& = f_X(z)\int_z^\infty f_Y(y)\operatorname d y+ f_Y(z)\int_z^\infty f_X(x)\operatorname d x
\\[1ex] & =\ldots
\\[3ex]
f_{1/Y^2}(z) & = \frac{\operatorname d \mathsf P(Y\leq \sqrt{1/z\;})}{\operatorname d z}
\\[1ex] & = f_Y\left(\sqrt {1/z\,}\right)\;\left\lvert\frac{\operatorname d \sqrt{1/z\;}}{\operatorname d z}\right\rvert
\\[1ex] & =\ldots
\\[3ex]
f_{e^{-2Y}}(z) & = \frac{\operatorname d \mathsf P(Y\geq -\tfrac 1 2\ln z)}{\operatorname d z}
\\[1ex] & =\ldots
\end{align}$$</p>
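<p>Carrying the first two computations to the end gives closed forms that can be sanity-checked numerically: for $X\sim\mathrm{Exp}(1)$, $Y\sim\mathrm{Exp}(2)$ the convolution yields $f_{X+Y}(z)=2e^{-z}(1-e^{-z})$, and the minimum comes out as $\mathrm{Exp}(3)$, i.e. $f_{\min}(z)=3e^{-3z}$. A small Python sketch (my own, using plain Riemann sums rather than any library):</p>

```python
import math

fX = lambda x: math.exp(-x)          # Exp(1) density
fY = lambda y: 2 * math.exp(-2 * y)  # Exp(2) density

def conv(z, steps=20000):
    """Riemann-sum approximation of the convolution integral on [0, z]."""
    dx = z / steps
    return sum(fX(x) * fY(z - x) for x in
               (i * dx for i in range(steps))) * dx

for z in (0.5, 1.0, 2.0):
    closed_form = 2 * math.exp(-z) * (1 - math.exp(-z))
    assert abs(conv(z) - closed_form) < 1e-3

# min(X, Y): f_X(z) P(Y > z) + f_Y(z) P(X > z) = 3 e^{-3z}
fmin = lambda z: fX(z) * math.exp(-2 * z) + fY(z) * math.exp(-z)
assert abs(fmin(1.0) - 3 * math.exp(-3.0)) < 1e-12
```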
|
39,684 | <p>In order to have a good view of the whole mathematical landscape one might want to know a deep theorem from the main subjects (I think my view is too narrow so I want to extend it).</p>
<p>For example in <em>natural number theory</em> it is good to know quadratic reciprocity and in <em>linear algebra</em> it's good to know the Cayley-Hamilton theorem (to give two examples).</p>
<p>So, what is one (<strong>per post</strong>) deep and representative theorem of each subject that one can spend a couple of months or so to learn about? (In Combinatorics, Graph theory, Real Analysis, Logic, Differential Geometry, etc.)</p>
| Gadi A | 1,818 | <p>A nice example in Algebraic number theory is the solution of the $p=x^2+ny^2$ problem, described in details in <a href="http://rads.stackoverflow.com/amzn/click/0471190799" rel="nofollow">Cox's book</a>. Very much like Fermat's last theorem it has its roots in the 17th century (actually, it originated from Fermat...) and is completely solved only using some heavy machinery from number theory (although not as heavy as FLT). I'd say it is a good representative for the power and depth of the field that is still very much accessible.</p>
|
39,684 | <p>In order to have a good view of the whole mathematical landscape one might want to know a deep theorem from the main subjects (I think my view is too narrow so I want to extend it).</p>
<p>For example in <em>natural number theory</em> it is good to know quadratic reciprocity and in <em>linear algebra</em> it's good to know the Cayley-Hamilton theorem (to give two examples).</p>
<p>So, what is one (<strong>per post</strong>) deep and representative theorem of each subject that one can spend a couple of months or so to learn about? (In Combinatorics, Graph theory, Real Analysis, Logic, Differential Geometry, etc.)</p>
| Community | -1 | <ul>
<li>Spectral theorem in Functional Analysis.</li>
</ul>
|
2,530,458 | <p>Find Range of $$ y =\frac{x}{(x-2)(x+1)} $$</p>
<p>Why is the range all real numbers?</p>
<p>The denominator cannot be $0$. Hence isn't the range supposed to be $y \neq 0$?</p>
| 5xum | 112,884 | <p>The range is $\mathbb R$ because for every $y\in\mathbb R$, there exists some $x$ such that $$\frac{x}{(x-2)(x+1)}=y.$$</p>
<p>For example, for $y=0$, you have $$\frac{0}{(0-2)(0+1)}=\frac{0}{(-2)\cdot 1} = -\frac{0}{2}=0.$$</p>
<hr>
<p>For a general $y$, you have to show that the equation above has a solution, which you can try to do by multiplying it by $(x-2)(x+1)$ to get</p>
<p>$$x=yx^2 - yx - 2y$$</p>
<p>This can be further simplified to a quadratic equation, and it's fairly easy to see if a quadratic equation has a solution </p>
<p>(<em>Hint:</em> it has something to do with the discriminant).</p>
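<p>Following the hint: rearranging gives $yx^2-(y+1)x-2y=0$, whose discriminant $(y+1)^2+8y^2$ is positive for every $y$ (and the degenerate case $y=0$ is attained at $x=0$). A quick numeric sketch (my own illustration; <code>preimage</code> is a made-up helper name):</p>

```python
import math

f = lambda x: x / ((x - 2) * (x + 1))

def preimage(y):
    """Solve y*x^2 - (y+1)*x - 2*y = 0 for x; the case y = 0 gives x = 0."""
    if y == 0:
        return 0.0
    disc = (y + 1) ** 2 + 8 * y * y     # sum of squares, always positive here
    return ((y + 1) + math.sqrt(disc)) / (2 * y)

# every sampled y is hit, so no real number is excluded from the range
for y in (-5.0, -1.0, -0.3, 0.0, 0.7, 3.0, 100.0):
    x = preimage(y)
    assert abs(f(x) - y) < 1e-6 * max(1.0, abs(y)), (y, x)
```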
|
2,799,123 | <p>Prove the following equation by counting the non-empty subsets of $\{1,2,\ldots,n\}$ in $2$ different ways:</p>
<p>$1+2+2^2+2^3\ldots+2^{n-1}=2^n-1$.</p>
<p>Let $A=\{1,2\ldots,n\}$. I know from theory that it has $2^n-1$ non-empty subsets, which is the right-hand side of the equation but, how do count the left one?</p>
<p>I've proven it using induction, but how can I get to the first part of the equation by counting subsets differently?</p>
| N. F. Taussig | 173,070 | <p>The right-hand side counts non-empty subsets of the set $\{1, 2, 3, \ldots, n\}$. </p>
<p>The left-hand side counts non-empty subsets of the set $\{1, 2, 3, \ldots, n\}$ whose largest element is $k$, $1 \leq k \leq n$. The number of such subsets is $2^{k - 1}$ since such a subset is determined by choosing which of the $k - 1$ elements smaller than $k$ are in the subset.</p>
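<p>Both counts can be verified exhaustively for small $n$. A Python sketch (my own addition, not part of the answer):</p>

```python
from itertools import chain, combinations

def nonempty_subsets(n):
    """All non-empty subsets of {1, ..., n} as tuples."""
    items = range(1, n + 1)
    return list(chain.from_iterable(combinations(items, r)
                                    for r in range(1, n + 1)))

for n in range(1, 10):
    subsets = nonempty_subsets(n)
    # right-hand side: all non-empty subsets
    assert len(subsets) == 2 ** n - 1
    # left-hand side: group subsets by their largest element k
    for k in range(1, n + 1):
        assert sum(1 for s in subsets if max(s) == k) == 2 ** (k - 1)
```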
|
3,009,387 | <p>I'm asking the following: it is true that if <span class="math-container">$K$</span> is a normal subgroup of <span class="math-container">$G$</span> and <span class="math-container">$K\leq H\leq G$</span> then <span class="math-container">$K$</span> is normal in <span class="math-container">$H$</span>? I tried to prove it but I failed to do so, so I'm starting to suspect that it is not true. Can you provide me a proof or a counterexample of this statement or hint about its proof? </p>
| Mefitico | 534,516 | <p><strong>Hint:</strong> Try factorization!</p>
<p><span class="math-container">$$
\frac{2x^2-50}{2x^2+3x-35}=\frac{2(x^2-25)}{(1/2)(4x^2+6x-70)}=\frac{4(x-5)(x+5)}{(2x+10)(2x-7)}
$$</span></p>
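<p>Cancelling the common factor $(x+5)$ in the last expression reduces the fraction to $\frac{2(x-5)}{2x-7}$ wherever it is defined. A quick check with exact rationals (my own addition):</p>

```python
from fractions import Fraction

def original(x):
    return Fraction(2 * x * x - 50, 2 * x * x + 3 * x - 35)

def reduced(x):
    return Fraction(2 * (x - 5), 2 * x - 7)

# agree at every integer point where both denominators are nonzero
for x in range(-20, 21):
    if 2 * x * x + 3 * x - 35 != 0 and 2 * x - 7 != 0:
        assert original(x) == reduced(x)
```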
|
2,853,401 | <p>Assume $E\neq \emptyset $, $E \neq \mathbb{R}^n $. Then prove $E$ has at least one boundary point. (i.e $\partial E \neq \emptyset $).</p>
<p>================= </p>
<p>Here is what I tried.<br>
Consider $P_0=(x_1,x_2,\dots,x_n)\in E,P_1=(y_1,y_2,\dots,y_n)\notin E $.<br>
Denote $P_t=(ty_1+(1-t)x_1,ty_2+(1-t)x_2,\dots,ty_n+(1-t)x_n) $, $0\le t\le 1$.<br>
$ t_0=\sup\{t |P_t \in E\} $. And then I wanted to prove that $P_{t_0}\in \partial E$. </p>
<p>A. If $P_{t_0}\in E$, then $t_0\neq 1$, otherwise $P_{t_0}=P_1\notin E$. By the definition of $t_0$, $P_t \notin E$ for $t_0 \lt t \leq 1$. Choose $t_n$ such that $1\gt t_n\gt t_0$ and $t_n \to t_0$; then $P_{t_n} \notin E$ but $P_{t_n} \to P_{t_0}$, so $P_{t_0} \in \partial E$. </p>
<p>B. If $P_{t_0}\notin E$, then $t_0\neq 0$, otherwise $P_{t_0}=P_0\in E$. By the definition of the supremum we can choose $t_n$ with $t_n \to t_0$ and $P_{t_n} \in E$; then $P_{t_n} \to P_{t_0}$, hence $P_{t_0} \in \partial E$.</p>
<p>Thus we have $\partial E \neq \emptyset$. </p>
<p>Am I correct? The construction of $P_t$ is a hint from my senior. What I am wondering about is this step in A (and also in B):
$$P_{t_n} \to P_{t_0} \Rightarrow P_{t_0} \in \partial E$$
I can somewhat picture this, but how do I make this step rigorous?</p>
| Joe | 524,659 | <p>Here's a proof that doesn't use connectedness at all. Suppose that $\emptyset \neq E \subsetneq \mathbb{R}^n$. Now take $x \in \mathbb{R}^n \setminus E$. If $x$ is a boundary point of $E$, we're done! Otherwise, take $\delta = \sup \{\epsilon : \epsilon > 0, B_{\epsilon}(x) \cap E = \emptyset \}$, where $B_\epsilon(x)$ is the open ball surrounding $x$ with radius $\epsilon$. Now since $x$ is not a boundary point and $E$ is non-empty, we know that this $\delta$ exists and is well defined.</p>
<p>Take $S = \{s \in \mathbb{R}^n : |s - x| = \delta \}$. That is, $S$ is the boundary of the open ball centered at $x$ with radius $\delta$. I claim that there is a point in $S$ which is on the boundary of $E$.</p>
<p>Define the following function: $f:S \rightarrow \mathbb{R}$ such that $f(s) = \inf_{e \in E} |s - e|$. There exists some $\hat{s} \in S$ such that $f(\hat{s}) = 0$. If there is no such $\hat{s}$, our choice of $\delta$ was not maximal, which would contradict the definition of $\delta$.</p>
<hr>
<p>EDIT: It was pointed out that this part of the argument requires$f$ to be continuous. If you already have the fact that the distance function between a point and a set is continuous, you can ignore this part of the post. To prove that fact, let $(s_n) \rightarrow s$ be a convergent sequence in $S$.</p>
<p>Observe by the triangle inequality that for any $e \in E$, we have:
$$\begin{align*}
f(s_n) & \leq |s_n - e| \\
& \leq |s_n - s| + |s - e|
\end{align*}
$$</p>
<p>Taking sufficiently large $n$ we can get $|s_n - s| < \epsilon$. Additionally, taking an infimum over our choice of $e$ yeilds:
$$f(s_n) \leq f(s) + \epsilon$$</p>
<p>On the flip side, we get for any $e \in E$:
$$\begin{align*}
f(s) & \leq |s - e| \\
& \leq |s_n - s| + |s_n - e| \\
& \leq \epsilon + |s_n - e|
\end{align*}
$$</p>
<p>taking an infimum over $e \in E$ yields:
$$f(s_n) \geq f(s) - \epsilon$$
Combining our inequalities:
$$|f(s) - f(s_n)| \leq \epsilon$$
This proves the continuity of $f$</p>
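<p>The two inequalities above in fact show that $f$ is 1-Lipschitz: $|f(s)-f(s')|\le|s-s'|$. A small numeric illustration (my own addition, using a random finite set as a stand-in for $E$):</p>

```python
import math
import random

random.seed(0)
E = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(200)]

def f(s):
    """Distance from the point s to the finite set E."""
    return min(math.dist(s, e) for e in E)

for _ in range(500):
    s = (random.uniform(-5, 5), random.uniform(-5, 5))
    t = (random.uniform(-5, 5), random.uniform(-5, 5))
    # the two triangle inequalities in the proof combine to 1-Lipschitz continuity
    assert abs(f(s) - f(t)) <= math.dist(s, t) + 1e-12
```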
<hr>
<p>Now take $\epsilon > 0$, and the open ball $B_\epsilon(\hat{s})$. This open ball contains a point of $E$ and a point of $E^c$. To see why, notice that since $f(\hat{s}) = 0$, there is some point $e \in E$ such that $|\hat{s} - e| < \epsilon$. Further, since $\hat{s}$ is in the set $S$, we have that $B_\epsilon(\hat{s}) \cap B_{\delta} (x) \neq \emptyset$. and since $B_{\delta} (x) \subset E^c$, we are done.</p>
<p>Therefore, $\hat{s}$ satisfies the definition of a boundary point of $E$.</p>
<p>Note that there is a better way to prove this statement using the connectedness of $\mathbb{R}^n$, but the comments indicate that the OP does not want to use this connectedness. </p>
|
1,156,874 | <p>How to show that $\mathbb{Z}[i]/I$ is a finite field whenever $I$ is a prime ideal? Is it possible to find the cardinality of $\mathbb{Z}[i]/I$ as well?</p>
<p>I know how to show that it is an integral domain, because that follows very quickly.</p>
| Jyrki Lahtonen | 11,619 | <p>A small variation of Arthur's argument. I wanted to do this without using the fact that the complex norm works as a Euclidean domain norm as well.</p>
<hr>
<p>The claim is not true, if $I=\{0\}$, so let's assume that is not the case. So then there exists an element $z=a+bi\in I$ with either $a$ or $b$ non-zero. Because $I$ is an ideal, the number
$$
z(a-bi)=(a+bi)(a-bi)=a^2+b^2\in I.
$$</p>
<p>This implies that the intersection $I\cap\Bbb{Z}$ is non-trivial. It is clearly a prime ideal of $\Bbb{Z}$, so the known list of prime ideals of $\Bbb{Z}$ says that there exists a prime number $p$ such that $I\cap\Bbb{Z}=p\Bbb{Z}$.</p>
<p>From this we can deduce two things. Namely we know that $p$ is the characteristic of the quotient ring $\Bbb{Z}[i]/I$. This implies that $\Bbb{Z}[i]/I$ is a vector space over the field $\Bbb{F}_p$. Secondly, from $p\in I\implies pi\in I$. Therefore
$$
J=p\Bbb{Z}[i]=p\Bbb{Z}\oplus pi\Bbb{Z}\subseteq I.
$$
As a subgroup of the additive group of $\Bbb{Z}[i]$ $J$ is generated by $p$ and $pi$, so it is of index $p^2$. In other words the dimension of $\Bbb{F}_p$as a vector space over
$\Bbb{F}_p$ is at most two. Therefore $\Bbb{Z}[i]/I$ has either $p$ or $p^2$ elements.
Anyway a finite integral domain $\Bbb{Z}[i]/I$ is the a (finite) field.</p>
<p>Both possibilities, $p$ and $p^2$ occur. The smallest examples are $\Bbb{Z}[i]/\langle 1+i\rangle$ that has only two elements, and $\Bbb{Z}[i]/\langle 3\rangle$ that has nine elements (see below for the reason why in the latter case we cannot get a field of three elements).</p>
<hr>
<p>It is not too difficult see which case it is. Assume that $p>2$. Then we see that the cosets $i^k+I$ are distinct for $k=0,1,2,3$. In particular the multiplicative order of the coset $i+I$ is exactly four. By Lagrange this means that the order of the multiplicative group of $\Bbb{Z}[i]/I$ must be a multiple of four. So if $\Bbb{Z}[i]/I$ has only $p$ elements, then we must have $p\equiv1\pmod4$. A consequence is that if $p\equiv3\pmod4$, then the field $\Bbb{Z}[i]/I$ must have $p^2$ elements. OTOH it is known that the multiplicative group of $\Bbb{F}_p$ is cyclic of order $p-1$ (a primitive root exists!). This means that there is an integer $a, 0<a<p$ such that
$a^4\equiv1\pmod p$, $a^2\not\equiv1\pmod p$. Because in a field $\Bbb{Z}[i]/I$ the equation $x^4=1$ can have at most four solutions, the coset $a+I$ has to be equal to one of the cosets $\pm i+I$. Therefore in this case the cardinality of $\Bbb{Z}[i]/I$ is strictly less than $p^2$, hence exactly $p$.</p>
|
1,156,874 | <p>How to show that $\mathbb{Z}[i]/I$ is a finite field whenever $I$ is a prime ideal? Is it possible to find the cardinality of $\mathbb{Z}[i]/I$ as well?</p>
<p>I know how to show that it is an integral domain, because that follows very quickly.</p>
| S. Venkataraman | 457,895 | <p>Let us count the number of elements in <span class="math-container">$\mathbb{Z}[i]/n\mathbb{Z}[i]$</span>. When do two elements <span class="math-container">$a+ib$</span>, <span class="math-container">$a_1+ib_1$</span> in <span class="math-container">$\mathbb{Z}[i]$</span> belong to the same coset of <span class="math-container">$n\mathbb{Z}[i]$</span>? We must have <span class="math-container">$(a-a_1)+i(b-b_1)\in n\mathbb{Z}[i]$</span>. So, <span class="math-container">$(a-a_1)+i(b-b_1)=n(c+di)$</span> with <span class="math-container">$c$</span>, <span class="math-container">$d\in \mathbb{Z}$</span>. This means that <span class="math-container">$a\equiv a_1\bmod{n}$</span> and <span class="math-container">$b\equiv b_1\bmod{n}$</span>. Conversely, if <span class="math-container">$a\equiv a_1\bmod{n}$</span> and <span class="math-container">$b\equiv b_1\bmod{n}$</span>, then <span class="math-container">$(a+ib)-(a_1+ib_1)\in n\mathbb{Z}[i]$</span>. If we let <span class="math-container">$S=\{a+bi\mid 0\leq a<n, 0\leq b <n\}$</span>, then for any <span class="math-container">$u+iv$</span> in <span class="math-container">$\mathbb{Z}[i]$</span> there is a unique <span class="math-container">$a_0+ib_0\in S$</span> such that <span class="math-container">$(u+iv)-(a_0+ib_0)\in n\mathbb{Z}[i]$</span>. It follows that <span class="math-container">$\vert \mathbb{Z}[i]/n\mathbb{Z}[i]\vert=\vert S\vert$</span>. Therefore, <span class="math-container">$\vert\mathbb{Z}[i]/n\mathbb{Z}[i]\vert=n^2$</span> since <span class="math-container">$\vert S\vert=n^2$</span>.</p>
<p>Now, if <span class="math-container">$I$</span> is any nonzero ideal, let <span class="math-container">$(a+ib)\in I$</span>, <span class="math-container">$a$</span>, <span class="math-container">$b\in \mathbb{Z}$</span>, <span class="math-container">$a+ib\neq 0$</span>. Then, since <span class="math-container">$I$</span> is an ideal, <span class="math-container">$(a-ib)(a+ib)\in I$</span>, so <span class="math-container">$I$</span> contains some nonzero integer <span class="math-container">$n$</span>. So, <span class="math-container">$(n)\subset I$</span> and we have a surjective ring homomorphism <span class="math-container">$\mathbb{Z}[i]/n\mathbb{Z}[i]\to \mathbb{Z}[i]/I$</span>, therefore <span class="math-container">$\mathbb{Z}[i]/I$</span> is a finite ring. If <span class="math-container">$I$</span> is a prime ideal, <span class="math-container">$\mathbb{Z}[i]/I$</span> is a finite integral domain, and therefore a field.</p>
<p>Finding the cardinality of <span class="math-container">$\mathbb{Z}/I$</span> is a little more tricky.</p>
<p>Let <span class="math-container">$I$</span> be a prime ideal. The argument in the previous paragraph shows that <span class="math-container">$I$</span> contains a natural number and therefore a smallest natural number <span class="math-container">$n$</span>. This smallest natural number <span class="math-container">$n$</span> has to be a prime. If <span class="math-container">$n=m_1m_2$</span> with <span class="math-container">$m_1\neq 1$</span> and <span class="math-container">$m_2\neq 1$</span>, since <span class="math-container">$I$</span> is a prime ideal, <span class="math-container">$m_1$</span> or <span class="math-container">$m_2\in I$</span>, contradicting the minimality of <span class="math-container">$n$</span>. So, <span class="math-container">$I$</span> contains a prime number <span class="math-container">$p$</span> and <span class="math-container">$(p)\subset I$</span>. Then, from the surjective map <span class="math-container">$\mathbb{Z}[i]/(p)\to \mathbb{Z}[i]/I$</span>, we see that <span class="math-container">$\vert \mathbb{Z}/I\vert$</span> divides <span class="math-container">$\vert\mathbb{Z}[i]/(p)\vert=p^2$</span>. We have <span class="math-container">$I\neq \mathbb{Z}[i]$</span>, so, <span class="math-container">$\vert \mathbb{Z}[i]/I\vert \neq 1$</span>. Therefore, either <span class="math-container">$\vert \mathbb{Z}[i]/I\vert = p$</span> and
<span class="math-container">$(p)\subsetneq I$</span> in this case. If <span class="math-container">$\vert \mathbb{Z}[i]/I\vert=p^2,$</span> <span class="math-container">$I=(p)$</span> in this case. In the second case <span class="math-container">$p$</span> is a prime in <span class="math-container">$\mathbb{Z}[i]$</span>.</p>
<p>Suppose <span class="math-container">$(2)\subset I$</span>. Since <span class="math-container">$2=(1+i)(1-i)\in I$</span>, <span class="math-container">$1+i$</span> or <span class="math-container">$1-i$</span> is in <span class="math-container">$I$</span>. Since <span class="math-container">$1+i$</span> and <span class="math-container">$1-i$</span> are associates, we may assume that <span class="math-container">$(1+i)\subset I$</span>. Then, <span class="math-container">$\mathbb{Z}[i]/(1+i)$</span> has two elements, <span class="math-container">$0+(1+i)$</span> and <span class="math-container">$1+(1+i)$</span> since <span class="math-container">$\pm 1\equiv \pm i\pmod{1+i}$</span>; this is because <span class="math-container">$1+i$</span>, <span class="math-container">$-1+i$</span>, <span class="math-container">$1-i$</span>, <span class="math-container">$-1-i$</span> are associates of each other. Therefore, <span class="math-container">$\vert \mathbb{Z}[i]/(1+i)\vert=2$</span>. Since <span class="math-container">$(1+i)\subset I\subset \mathbb{Z}[i]$</span> and <span class="math-container">$I\neq \mathbb{Z}[i]$</span>, we have <span class="math-container">$I=(1+i)$</span> and <span class="math-container">$\vert \mathbb{Z}[i]/I\vert=2$</span>.</p>
<p>Let us now assume that <span class="math-container">$p>2$</span>. With some number theory, one can show that <span class="math-container">$\vert \mathbb{Z}[i]/I\vert=p$</span> if <span class="math-container">$p\equiv 1\bmod{4}$</span>. In this case <span class="math-container">$p=a^2+b^2=(a+ib)(a-ib)\in I$</span>, <span class="math-container">$a$</span>, <span class="math-container">$b\in\mathbb{Z}[i]$</span>. (See <a href="https://people.mpim-bonn.mpg.de/zagier/files/doi/10.2307/2323918/fulltext.pdf" rel="nofollow noreferrer">here</a> for a beautiful proof of the fact that <span class="math-container">$p$</span> is the sum of two squares if <span class="math-container">$p\equiv 1\bmod{4}$</span>.) So, <span class="math-container">$a+ib\in I$</span> or <span class="math-container">$a-ib\in I$</span>, say <span class="math-container">$a+ib\in I$</span>. We have <span class="math-container">$(p)\subsetneq (a+ib)$</span>; if <span class="math-container">$(p)=(a+ib)$</span>, <span class="math-container">$p$</span> and <span class="math-container">$a+ib$</span> will be associates. It is easy to check this is not the case using the fact that the units in <span class="math-container">$\mathbb{Z}[i]$</span> are <span class="math-container">$\{\pm 1,\pm i\}$</span>. Also, <span class="math-container">$(a+ib)\neq \mathbb{Z}[i]$</span> since <span class="math-container">$(a+ib)\subset I$</span>. So <span class="math-container">$\vert\mathbb{Z}[i]/(a+ib)\vert$</span> is a proper divisor of <span class="math-container">$\vert \mathbb{Z}[i]/(p)\vert=p^2$</span>. Therefore, <span class="math-container">$\vert \mathbb{Z}[i]/(a+ib)\vert=p$</span>. Since <span class="math-container">$(a+ib)\subset I$</span>, <span class="math-container">$\vert \mathbb{Z}[i]/I\vert$</span> divides <span class="math-container">$\vert \mathbb{Z}[i]/(a+ib)\vert=p$</span>. 
Since <span class="math-container">$I\neq \mathbb{Z}[i]$</span>, <span class="math-container">$\vert \mathbb{Z}[i]/I\vert=p$</span> and <span class="math-container">$I=(a+ib)$</span>.</p>
<p>If <span class="math-container">$p\equiv 3\bmod{4}$</span>, then <span class="math-container">$p$</span> is a prime in <span class="math-container">$\mathbb{Z}[i]$</span>. If not, let <span class="math-container">$p=\alpha\beta$</span>. Then <span class="math-container">$p=\overline{p}=\overline{\alpha}\overline{\beta}$</span>. A somewhat messy argument gives <span class="math-container">$p=(a+ib)(a-ib)=a^2+b^2$</span>. This is not possible: since the square of every integer is <span class="math-container">$0$</span> or <span class="math-container">$1\bmod{4}$</span>, <span class="math-container">$p$</span> cannot be a sum of two squares.</p>
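The dichotomy used above can also be checked computationally; the following is a small illustration I am adding (not part of the original argument): primes <span class="math-container">$p\equiv 1\bmod 4$</span> have a representation as a sum of two squares, while primes <span class="math-container">$p\equiv 3\bmod 4$</span> do not.

```python
def two_square(p):
    # search for a representation p = a^2 + b^2 with a, b >= 1
    for a in range(1, int(p**0.5) + 1):
        b2 = p - a * a
        b = int(round(b2**0.5))
        if b >= 1 and b * b == b2:
            return (a, b)
    return None

# p = 1 (mod 4): representable; p = 3 (mod 4): not representable
assert all(two_square(p) is not None for p in [5, 13, 17, 29, 37, 41])
assert all(two_square(p) is None for p in [3, 7, 11, 19, 23, 31])
print(two_square(13))  # -> (2, 3)
```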
|
1,123,777 | <p><strong><span class="math-container">$U$</span> here represents the upper Riemann Integral.</strong></p>
<p><img src="https://i.stack.imgur.com/GbNm2.jpg" alt="enter image description here" /></p>
<p><img src="https://i.stack.imgur.com/KtRI4.jpg" alt="enter image description here" /></p>
<p><img src="https://i.stack.imgur.com/wlmAk.jpg" alt="enter image description here" /></p>
<p><img src="https://i.stack.imgur.com/pvKC3.jpg" alt="enter image description here" /></p>
<p><strong>I understand the vast majority of this proof</strong>; however, the part underlined in orange states <span class="math-container">$\forall \varepsilon>0 $</span>. Should it not be <span class="math-container">$\forall \varepsilon\geq0 $</span>, so that we have</p>
<p><span class="math-container">$U(f)\leq S(f,\Delta_\varepsilon ^1) \leq U(f)+\frac{\varepsilon}{2}$</span>?</p>
<p>For the green part, if the statement works <span class="math-container">$\forall\varepsilon>0$</span>, surely it could work in the case <span class="math-container">$U(f+g)=50$</span>, <span class="math-container">$U(f)+U(g)=49$</span>, <span class="math-container">$\varepsilon=2$</span>.</p>
| Brian M. Scott | 12,042 | <p>HINT: As an alternative to an inductive approach: for each possible choice of $k$ (the number of terms), there is exactly one choice of $a_1$ that allows the inequalities to be satisfied and the sum of the $a_i$’s to be $n$.</p>
|
1,123,777 | <p><strong><span class="math-container">$U$</span> here represents the upper Riemann Integral.</strong></p>
<p><img src="https://i.stack.imgur.com/GbNm2.jpg" alt="enter image description here" /></p>
<p><img src="https://i.stack.imgur.com/KtRI4.jpg" alt="enter image description here" /></p>
<p><img src="https://i.stack.imgur.com/wlmAk.jpg" alt="enter image description here" /></p>
<p><img src="https://i.stack.imgur.com/pvKC3.jpg" alt="enter image description here" /></p>
<p><strong>I understand the vast majority of this proof</strong>; however, the part underlined in orange states <span class="math-container">$\forall \varepsilon>0 $</span>. Should it not be <span class="math-container">$\forall \varepsilon\geq0 $</span>, so that we have</p>
<p><span class="math-container">$U(f)\leq S(f,\Delta_\varepsilon ^1) \leq U(f)+\frac{\varepsilon}{2}$</span>?</p>
<p>For the green part, if the statement works <span class="math-container">$\forall\varepsilon>0$</span>, surely it could work in the case <span class="math-container">$U(f+g)=50$</span>, <span class="math-container">$U(f)+U(g)=49$</span>, <span class="math-container">$\varepsilon=2$</span>.</p>
| anomaly | 156,999 | <p>The given partitions $(a_1, \dots, a_k)$ are all given by
\begin{align*}
a_i &= \begin{cases}
a_1 & \text{if $i \leq p$}; \\
a_1 + 1 & \text{if $i > p$}
\end{cases}
\end{align*}<br>
for $p = 1, \dots, k$. Count the number of pairs $(a_1, p)$ for which the partition sums to $n$.</p>
|
130,564 | <p>Hi, everyone.</p>
<p>I am looking for some references on the period matrix of an abelian variety over an arbitrary field; if you know of any, could you please tell me?</p>
<p>By the period matrix of an abelian variety, I mean the following: if $A$ is an abelian variety over the complex numbers, then $A \cong V/\Gamma$. If we choose a basis of $V$ and a basis of the lattice $\Gamma$, and write the basis of the lattice in terms of the basis of $V$, the resulting matrix is called the period matrix.</p>
<p>I want to know how to define the period matrix of an abelian variety over an arbitrary field, and some basic properties of this matrix.</p>
<p>Thank you very much!</p>
| S. Carnahan | 121 | <p>You won't get $A$ as a quotient of a vector space in general without some kind of strange transcendentality. For example, if $V$ is defined over $\mathbb{F}_p$, then any $S$-valued point of $V$ is $p$-torsion for any test object $S$. In particular, you can't possibly obtain any of the prime-to-$p$ torsion in $A$ without resorting to extraordinary means (whose existence I doubt).</p>
<p>There are some interesting treatments for certain complete topological fields other than $\mathbb{C}$, but "most" fields do not fit this description.</p>
|
749,926 | <p>I have a group of 10 players and I want to form two groups with them. Each group must have at least one member. In how many ways can I do it?</p>
| vonbrand | 43,946 | <p>The "proper" way to solve this is that you want to know the ways to partition 10 elements into 2 parts; the ways to partition $n$ elements into $k$ groups is given by the <a href="http://en.wikipedia.org/wiki/Stirling_number" rel="nofollow">Stirling number</a> of the second kind $\genfrac{\{}{\}}{0pt}{}{n}{k}$, in your case $\genfrac{\{}{\}}{0pt}{}{10}{2} = 511$ (value courtesy of <a href="http://keisan.casio.com/exec/system/1292214829" rel="nofollow">Casio</a>)</p>
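As a quick computational cross-check (my addition, not part of the original answer), the standard recurrence $\genfrac{\{}{\}}{0pt}{}{n}{k} = k\genfrac{\{}{\}}{0pt}{}{n-1}{k} + \genfrac{\{}{\}}{0pt}{}{n-1}{k-1}$ reproduces the value $511$:

```python
def stirling2(n, k):
    # S(n, k): number of ways to partition an n-element set into k nonempty blocks
    if k == 0:
        return 1 if n == 0 else 0
    if n == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

print(stirling2(10, 2))  # -> 511
```

For $k=2$ there is also the closed form $\genfrac{\{}{\}}{0pt}{}{n}{2} = 2^{n-1}-1$, and indeed $2^9 - 1 = 511$.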
|
1,840,352 | <blockquote>
<p>For every $A \subset \mathbb R^3$ we define $\mathcal{P}(A)\subset \mathbb R^2$ by
$$ \mathcal{P}(A) := \{ (x,y) \mid \exists_{z \in \mathbb R}:(x,y,z) \in A \} \,. $$
Prove or disprove that $A$ closed $\implies$ $\mathcal{P}(A)$ closed and $A$ compact $\implies$ $\mathcal{P}(A)$ compact.</p>
</blockquote>
<p>This $\mathcal{P}(A)$ seems to me the projection on the $xy$-plane, so intuitively both make sense to me.</p>
<p><strong>My try</strong></p>
<p>For the first one, to prove something is closed seems easiest to do with sequences, so for all sequences $(x^{(n)})$ in $A$ with $x^{(n)} \to x$ we have that $x \in A$. But I do not know how to progress further in a useful direction. For the second one I don't even know where to start.</p>
<p><strong>Update</strong></p>
<p>For the first one I tried to argue that for any sequence going to $x \in A$, a projection of that sequence on the $xy$-plane will also go to the projection of $x$, so $x \in \mathcal{P}(A)$, so hence $\mathcal{P}(A)$ must be closed. </p>
| John Gowers | 26,267 | <p><strong>Hints:</strong></p>
<ol>
<li><p>The fact that you are unable to prove this might suggest that the statement is actually untrue. Try and work out where your proof is falling apart and use that to construct a closed set $A\subset \mathbb R^3$ such that $P(A)$ is not closed.</p></li>
<li><p>This is not as difficult as you're making it out to be. What definition of compactness are you using?</p></li>
</ol>
|
362,926 | <p>I have a problem that looks like this:</p>
<p>$$\frac{20x^5y^3}{5x^2y^{-4}}$$</p>
<p>Now they said that the "rule" is that when dividing exponents, you bring them on top as a negative like this:</p>
<p>$$4x^{5-2}*y^{3-(-4)}$$</p>
<p>That doesn't make too much sense though. A term like $y^{-4}$ is essentially saying $\large \frac 1{y^4}$ in the denominator because a negative exponent is the opposite of a positive exponent and you use division. And so here you are dividing by $y$ four times. So if that's the case, you cross multiply: $\large \frac{1}{y^4} \frac{y^4}{1}$ on bottom and then of course to keep balance, you multiply $\large \frac{y^4}{1}$ on top to get this:</p>
<p>$$4x^{5 - 2}y^{3 + 4}$$</p>
<p>Now look at my solution and look at the other one. They get the same answer but through different means. I dont see how they get $y^{3-(-4)}$.</p>
| Javier | 2,757 | <p>What the other solution did is a straightforward application of the rule: $\frac1{y^n} = y^{-n}$. For $n = -4$, you get $\frac1{y^{-4}} = y^{-(-4)} = y^4$. Does that make it clear?</p>
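A numeric spot-check (added by me, not from the answer) confirms that both routes simplify the original expression to $4x^3y^7$:

```python
# evaluate the original quotient and the simplified form at arbitrary
# nonzero test values; they should agree up to rounding
x, y = 1.7, 0.9
original = (20 * x**5 * y**3) / (5 * x**2 * y**-4)
simplified = 4 * x**3 * y**7
print(original, simplified)
```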
|
1,339,649 | <p>Summation convention holds. If $\frac{\partial}{\partial t}g_{ij}=\frac{2}{n}rg_{ij}-2R_{ij}$, then ,I compute:
$$
\frac{1}{2}g^{ij}\frac{\partial}{\partial t}g_{ij}=\frac{1}{2}g^{ij}(\frac{2}{n}rg_{ij}-2R_{ij})=\frac{1}{n}r(\sum\limits_i\sum\limits_jg^{ij}g_{ij})-g^{ij}R_{ij}=nr-R
$$</p>
<p>But on the Hamilton's THREE-MANIFOLDS WITH POSITIVE RICCI CURVATURE,the result is :
$$
\frac{1}{2}g^{ij}\frac{\partial}{\partial t}g_{ij}=r-R
$$</p>
<p>I don't know where I make my mistake,and who can tell me ? Very thanks.</p>
| Chappers | 221,811 | <p>$$ \sum_j g^{ij} g_{kj} = \delta^i_k $$
Now, $\delta^i_i = 1$, with no summation, so
$$ \sum_i \sum_j g^{ij} g_{ij} = \sum_i \delta_i^i = \sum_i 1 = n. $$
You may be confusing whether or not you are using summation convention.</p>
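A toy numeric illustration of $\sum_i\sum_j g^{ij}g_{ij}=n$ (my addition, not part of the answer; a diagonal metric is used so the inverse metric is just the entrywise reciprocal):

```python
n = 3
diag = [2.0, 5.0, 0.5]  # diagonal entries g_11, g_22, g_33
g = [[diag[i] if i == j else 0.0 for j in range(n)] for i in range(n)]
g_inv = [[1.0 / diag[i] if i == j else 0.0 for j in range(n)] for i in range(n)]

# contract the inverse metric with the metric over both indices
total = sum(g_inv[i][j] * g[i][j] for i in range(n) for j in range(n))
print(total)  # equals n = 3, not n^2
```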
|
3,219,428 | <p>Sorry for the strange title, as I don't really know the proper terminology.</p>
<p>I need a formula that returns 1 if the supplied value is anything from 10 to 99, returns 10 if the value is anything from 100 to 999, returns 100 if the value is anything from 1000 to 9999, and so on.</p>
<p>I will be translating this to code and will ensure the value is never less than 1, in case that changes anything.</p>
<p>It's probably something really simple but I can't wrap my head around a nice way to do this so... thanks!</p>
| P Vanchinathan | 28,915 | <p>Is a mathematical formula really needed? Or is a function (in the sense of a programming language) that does this job OK? Since you say you are going to translate this into code, it is simpler to translate your description directly into code, without having to find a mathematical formula.</p>
<p>Here it is in Python (you may have to handle numbers less than 10 differently):</p>
<pre><code>def mylog(x):
    # assumes x is a positive integer with 10 <= x < 10**25
    tenpowers = [10**k for k in range(26)]
    for k in range(26):
        if x < tenpowers[k]:
            return tenpowers[k - 2]
</code></pre>
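For reference, the same mapping can also be computed without a loop from the number of decimal digits (a one-liner I am adding, not part of the original answer; it assumes <code>x >= 10</code>):

```python
def mylog_closed(x):
    # 10..99 -> 1, 100..999 -> 10, 1000..9999 -> 100, ...
    # len(str(x)) is the number of decimal digits of x, so the answer
    # is 10 raised to (digits - 2)
    return 10 ** (len(str(x)) - 2)

print(mylog_closed(10), mylog_closed(999), mylog_closed(9999))  # -> 1 10 100
```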
|
679,544 | <p>How to prove this for positive real numbers?
$$a+b+c\leq\frac{a^3}{bc}+\frac{b^3}{ca}+\frac{c^3}{ab}$$</p>
<p>I tried AM-GM, CS inequality but all failed.</p>
| Community | -1 | <p>The other two answers here used the Cauchy-Schwarz inequality. I am giving a simple $AM\ge GM$ proof.</p>
<p>You asked, $$\frac{a^3}{bc}+\frac{b^3}{ca}+\frac{c^3}{ab}\ge a+b+c\\\implies a^4+b^4+c^4\ge a^2bc+b^2ca+c^2ab$$</p>
<p>Now, from, $AM\ge GM$, we have $$\frac {a^4+
a^4+b^4+c^4}4\ge \left(a^4\cdot a^4\cdot b^4\cdot c^4\right)^{1/4}=a^2bc\tag 1$$</p>
<p>Similarly, $$\frac {a^4+
b^4+b^4+c^4}4\ge \left(a^4\cdot b^4\cdot b^4\cdot c^4\right)^{1/4}=ab^2c\tag 2$$ and also, $$\frac {a^4+
b^4+c^4+c^4}4\ge \left(a^4\cdot b^4\cdot c^4\cdot c^4\right)^{1/4}=abc^2\tag 3$$</p>
<p>Now, summing up $(1),(2),(3)$, we have $a^4+b^4+c^4\ge a^2bc+b^2ca+c^2ab$, that is, $$\frac{a^3}{bc}+\frac{b^3}{ca}+\frac{c^3}{ab}\ge a+b+c$$</p>
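A random spot-check (my addition, a sanity check and not a proof) of the key inequality $a^4+b^4+c^4\ge a^2bc+ab^2c+abc^2$:

```python
import random

random.seed(0)
for _ in range(1000):
    a, b, c = (random.uniform(0.01, 10.0) for _ in range(3))
    lhs = a**4 + b**4 + c**4
    rhs = a**2 * b * c + a * b**2 * c + a * b * c**2
    assert lhs >= rhs - 1e-9 * rhs  # allow tiny floating-point slack
print("no counterexample found in 1000 random positive triples")
```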
|
679,544 | <p>How to prove this for positive real numbers?
$$a+b+c\leq\frac{a^3}{bc}+\frac{b^3}{ca}+\frac{c^3}{ab}$$</p>
<p>I tried AM-GM, CS inequality but all failed.</p>
| Michael Rozenberg | 190,319 | <p>By Hölder,
$$\sum_{cyc}\frac{a^3}{bc}\geq\frac{(a+b+c)^3}{3(ab+ac+bc)}=\frac{(a+b+c)\cdot(a+b+c)^2}{3(ab+ac+bc)}\geq a+b+c$$</p>
|
5,896 | <p>$\sum_{n=1}^{\infty} \frac{\varphi(n)}{n}$ where $\varphi(n)$ is $1$ if the variable $\text n$ has the digit $\text 7$ in its typical base-$\text10$ representation, and $\text0$ otherwise.</p>
<p>I am supposed to find out if this series converges or diverges. I think it diverges, and here is why.</p>
<p>We can see that there is a series whose partial sums are always below our series, but which diverges. Compare some of the terms of each sequence</p>
<p>$\frac{1}{7} > \frac{1}{8}$<br/>
$\frac{1}{70} > \frac{1}{80}$<br/>
$\frac{1}{71} > \frac{1}{80}$<br/>
$\frac{1}{72} > \frac{1}{80}$<br/>
$\text ... $<br/>
$\frac{1}{79} > \frac{1}{80}$<br/>
$\text ... $<br/>
$\frac{1}{700} > \frac{1}{800}$<br/>
$\text ... $<br/></p>
<p>And continue in this way.</p>
<p>Obviously some terms are left out of the sequence on the left, which is fine since our sequence of terms on the left is already greater than the right side. Notice the right side can be grouped into</p>
<p>$\frac{1}{8} + \frac{1}{8} + ... $ because we will have $10$ $\frac{1}{80}$s, $100$ $\frac{1}{800}$s, etc etc. Thus we are adding up infinitely many 1/8s. This is similar to the idea of the divergence of the harmonic series. So, my conclusion is that it diverges. A bunch of other students in my real analysis class have come to the conclusion that is, in fact, convergent, and launched into a detailed verbal explanation about comparison with a geometric series that I couldn't follow without seeing their work. Is my reasoning, like they suspect, flawed? I can't see how.</p>
<p>Sorry about the poor format, I'm new to TeX and couldn't figure out how to format a piecewise function (it was telling me a my \left delimiter wasn't recognized).</p>
| Douglas S. Stones | 139 | <p>Your argument seems fine to me... so I'll give the argument that popped into my head when I read the question.</p>
<p>Sum phi(n)/n for n congruent to 7 (mod 10) and multiply by 10 (which does not affect divergence/convergence). Note that</p>
<pre><code>10/7 > 1/7 +1/8 +...+1/16
10/17 > 1/17+1/18+...+1/26
</code></pre>
<p>and so on. The result is "larger" than the harmonic series minus the first six terms (which diverges).</p>
<p>Although, I actually prefer the OP's proof since it's self-contained (i.e. doesn't require the harmonic series).</p>
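A numerical illustration (my addition, not part of either proof): the partial sums of $\sum \varphi(n)/n$ keep growing as the cutoff increases by factors of $10$, with no sign of leveling off, as divergence predicts.

```python
def partial_sum(N):
    # sum of 1/n over n <= N whose decimal representation contains a 7
    return sum(1.0 / n for n in range(1, N + 1) if '7' in str(n))

sums = [partial_sum(10**k) for k in (2, 3, 4)]
print(sums)  # strictly increasing
```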
|
5,896 | <p>$\sum_{n=1}^{\infty} \frac{\varphi(n)}{n}$ where $\varphi(n)$ is $1$ if the variable $\text n$ has the digit $\text 7$ in its typical base-$\text10$ representation, and $\text0$ otherwise.</p>
<p>I am supposed to find out if this series converges or diverges. I think it diverges, and here is why.</p>
<p>We can see that there is a series whose partial sums are always below our series, but which diverges. Compare some of the terms of each sequence</p>
<p>$\frac{1}{7} > \frac{1}{8}$<br/>
$\frac{1}{70} > \frac{1}{80}$<br/>
$\frac{1}{71} > \frac{1}{80}$<br/>
$\frac{1}{72} > \frac{1}{80}$<br/>
$\text ... $<br/>
$\frac{1}{79} > \frac{1}{80}$<br/>
$\text ... $<br/>
$\frac{1}{700} > \frac{1}{800}$<br/>
$\text ... $<br/></p>
<p>And continue in this way.</p>
<p>Obviously some terms are left out of the sequence on the left, which is fine since our sequence of terms on the left is already greater than the right side. Notice the right side can be grouped into</p>
<p>$\frac{1}{8} + \frac{1}{8} + ... $ because we will have $10$ $\frac{1}{80}$s, $100$ $\frac{1}{800}$s, etc etc. Thus we are adding up infinitely many 1/8s. This is similar to the idea of the divergence of the harmonic series. So, my conclusion is that it diverges. A bunch of other students in my real analysis class have come to the conclusion that is, in fact, convergent, and launched into a detailed verbal explanation about comparison with a geometric series that I couldn't follow without seeing their work. Is my reasoning, like they suspect, flawed? I can't see how.</p>
<p>Sorry about the poor format, I'm new to TeX and couldn't figure out how to format a piecewise function (it was telling me a my \left delimiter wasn't recognized).</p>
| Aryabhata | 1,102 | <p>Yes your argument seems fine.</p>
<p>Another argument is:</p>
<p>Divide the integers into blocks of 10</p>
<p>$$[1 \dots 10] [11 \dots 20] [21 \dots 30] \dots$$</p>
<p>Each block will have a number with the digit $7$ in it, and so the series has sum at least</p>
<p>$$\frac{1}{10} + \frac{1}{20} + \dots + \frac{1}{10n} + \dots $$</p>
<p>$$ = \frac{1}{10}(1 + \frac{1}{2} + \frac{1}{3} + \dots + \frac{1}{n} + \dots)$$</p>
<p>which is divergent.</p>
<p>Yet another argument which uses the following useful result (the main reason for posting my answer):</p>
<blockquote>
<p>If $A = \{a_i\}$ is a sequence of natural numbers such that $\sum\limits_{i=1}^{\infty} \frac{1}{a_i}$ converges then the <a href="http://en.wikipedia.org/wiki/Natural_density" rel="nofollow noreferrer">natural density</a> of $A$ is zero. (This is an exercise in Ivan Niven's book on Number theory and for a proof see here: <a href="https://math.stackexchange.com/questions/5932/theorem-on-natural-density/5967#5967">If $A\subseteq\mathbb N$ and $\sum\limits_{a\in A}\frac1a$ converges then $A$ has natural density $0$</a>).</p>
</blockquote>
<p>In our case the density of the numbers under consideration is at least $\frac{1}{10}$ and thus the sum cannot be convergent.</p>
|
4,183,263 | <p>If Tychonoff's theorem is true, why is the closed ball in <span class="math-container">$\mathbb{R}^n$</span> not compact?</p>
<p>The theorem says that if <span class="math-container">$X_i$</span> is compact for every <span class="math-container">$i\in I$</span>, then <span class="math-container">$\prod_{i\in I}X_i$</span> is compact. Then take the index set <span class="math-container">$\mathbb{N}$</span>, and we have that <span class="math-container">$\prod_{n\in\mathbb{N}}[-1,1]_n$</span> in <span class="math-container">$\mathbb{R}^\infty$</span> is not compact. But, what??</p>
| Henno Brandsma | 4,280 | <p>An infinite product of copies of <span class="math-container">$[-1,1]$</span>, say, is indeed compact in the <em>product</em> topology, but a Banach space topology is <em>not</em> like a product topology; the "closest" we can get to that kind of a topology in an infinite-dimensional Banach space is the so-called weak-* topology (which is not metrisable most of the time, so quite unlike the norm topology, which of course is), and there we can prove the <a href="https://en.wikipedia.org/wiki/Banach%E2%80%93Alaoglu_theorem" rel="nofollow noreferrer">Banach-Alaoglu theorem</a> which shows that the ball in that topology <em>is</em> compact and in which the Tychonoff theorem is a key ingredient of the proof...</p>
|
1,634,325 | <blockquote>
<p><strong>Problem</strong>:
Is there a sequence whose set of subsequential limits is $\mathbb{N}$? If such a sequence exists, prove it.</p>
</blockquote>
<p>I tried to approach this problem by guessing what type of sequence it would need to be. <br>For example:
$a_n=(-1)^n$ has two subsequential limits, $\{1,-1\}$.
<br>
$a_n=n\times\sin(\frac{\pi}{2})$ has no subsequential limits because $\lim _{n\to \infty}{n\times\sin(\frac{\pi}{2})}=\infty$. <br>
But I don't know how to solve this, so please give me a hint.</p>
| Jimmy R. | 128,037 | <p>Such a sequence exists, and this is possible due to the (perhaps not so intuitive) fact that $\mathbb N\times \mathbb N$ and $\mathbb N$ have the same cardinality (surprise!). Since they have the same cardinality, there is a bijection (actually many), $b:\mathbb N \times \mathbb N\to \mathbb N$. Now, the doubly-indexed family defined by $$a_{m,n}:=m$$ for each $m,n\in \mathbb N$ has every $m \in \mathbb N$ as a limit point and so has the desired property, but it is indexed by $\mathbb N \times \mathbb N$. So, use this bijection and define the sequence $x:\mathbb N \to \mathbb N$ by $$x_{b(m,n)}=m,$$ which is the one you are looking for.</p>
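A concrete instance in the same spirit (my own construction, not the bijection used in the answer): concatenate the blocks $1 \mid 1,2 \mid 1,2,3 \mid \dots$; every $m\in\mathbb N$ appears infinitely often, hence every $m$ is a subsequential limit.

```python
def seq(n):
    # 0-based index into the sequence 1, 1,2, 1,2,3, 1,2,3,4, ...
    k = 1
    while n >= k:
        n -= k
        k += 1
    return n + 1

print([seq(i) for i in range(10)])  # -> [1, 1, 2, 1, 2, 3, 1, 2, 3, 4]
```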
|
1,556,298 | <p>If we have $p\implies q$, then the only case the logical value of this implication is false is when $p$ is true, but $q$ false.</p>
<p>So suppose I have a broken soda machine - it will never give me any can of coke, no matter if I insert some coins in it or not.</p>
<p>Let $p$ be 'I insert a coin', and $q$ - 'I get a can of coke'.</p>
<p>So even though the logical value of $p \implies q$ is true (when $p$ and $q$ are false), it doesn't mean the implication itself is true, right? As I said, $p \implies q$ has a logical value $1$, but implication is true when it matches the truth table of implication. And in this case, it won't, because $p \implies q$ is <strong>false</strong> for true $p$ (the machine doesn't work).</p>
<p>That's why I think it's not right to say the implication is true based on only one row of the truth table. Does it make sense?</p>
| Ove Ahlman | 222,450 | <p>Well, yes, you cannot say that a formula $\varphi$ (or, more explicitly, $p\rightarrow q$) is always true just based on one row of a truth table. You have to check all the rows. However, if $p$ is evaluated to $True$ and $q$ is evaluated to $True$, then $p\to q$ is evaluated to $True$; thus $p\to q$ is true in this <strong>instance</strong> (also called a <strong>valuation</strong>).</p>
<p>You have to distinguish between a formula being true for a valuation and being always true, i.e. a tautology (or a logical truth if you want to go to predicate logic).</p>
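The point can be made concrete with a tiny truth table (an illustration I am adding, not part of the answer): $p\to q$ is false only in the row where $p$ is true and $q$ is false, and its truth in one row says nothing about the other rows.

```python
implies = lambda p, q: (not p) or q

# enumerate all four valuations of (p, q)
table = {(p, q): implies(p, q) for p in (False, True) for q in (False, True)}
for row, value in sorted(table.items()):
    print(row, value)
```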
|
2,364,742 | <p>$$\log_5\tan(36^\circ)+\log_5\tan(54^\circ)=\log_5(\tan(36^\circ)\tan(54^\circ)).$$ I cannot solve those 2 tangent functions above. Here calculator comes in handy to calculate it. Is there a method of evaluating this problem without a calculator?</p>
| user362325 | 362,325 | <p>Note that $\tan(36^\circ)=\tan(90^\circ-54^\circ)=\frac{1}{\tan54^\circ}$</p>
<p>$$\log_5(\tan(36^\circ)\tan(54^\circ))$$</p>
<p>$$=\log_5(\frac{1}{\tan54^\circ}\tan(54^\circ))$$</p>
<p>$$=\log_5(1)=0$$</p>
|
2,364,742 | <p>$$\log_5\tan(36^\circ)+\log_5\tan(54^\circ)=\log_5(\tan(36^\circ)\tan(54^\circ)).$$ I cannot solve those 2 tangent functions above. Here calculator comes in handy to calculate it. Is there a method of evaluating this problem without a calculator?</p>
| Khosrotash | 104,171 | <p>Hint: $36^\circ+54^\circ =90^\circ$, so
$$\tan(54^\circ)=\cot(36^\circ)\\$$
$$\log_5\tan(36^\circ)+\log_5\tan(54^\circ)=\log_5(\tan(36^\circ)\tan(54^\circ))=\\\log_5(\tan(36^\circ)\cot(36^\circ))=\log_5(1)=0$$</p>
|
1,413,145 | <p>I would like to find a way to show that the sequence $a_n=\big(1+\frac{1}{n}\big)^n+\frac{1}{n}$ is eventually increasing.</p>
<p>$\hspace{.3 in}$(Numerical evidence suggests that $a_n<a_{n+1}$ for $n\ge6$.)</p>
<p>I was led to this problem by trying to prove by induction that $\big(1+\frac{1}{n}\big)^n\le3-\frac{1}{n}$, as in</p>
<p>$\hspace{.4 in}$ <a href="https://math.stackexchange.com/questions/1087545/a-simple-proof-that-bigl1-frac1n-bigrn-leq3-frac1n">A simple proof that $\bigl(1+\frac1n\bigr)^n\leq3-\frac1n$?</a></p>
| Jack D'Aurizio | 44,121 | <p>As suggested by Clement C., let:
$$ f(x)=\left(1+\frac{1}{x}\right)^{x}+\frac{1}{x}.\tag{1}$$
Then:
$$ f'(x) = \left(1+\frac{1}{x}\right)^{x}\left(\log\left(1+\frac{1}{x}\right)-\frac{1}{x+1}\right)-\frac{1}{x^2}\tag{2} $$
but, due to convexity:
$$\log\left(1+\frac{1}{x}\right)-\frac{1}{x+1}=-\frac{1}{x+1}+\int_{x}^{x+1}\frac{dt}{t}\geq \frac{1}{2(x+1)^2}\tag{3}$$
hence for any $x\geq 8$:
$$ f'(x)\geq \frac{\left(1+\frac{1}{8}\right)^8}{2(x+1)^2}-\frac{1}{x^2}>0.\tag{4} $$</p>
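A direct numerical check of the claim (my addition, complementing the derivative argument above): $a_n$ is strictly increasing from $n=6$ onward over a long range.

```python
def a(n):
    return (1 + 1/n)**n + 1/n

vals = [a(n) for n in range(6, 2001)]
assert all(x < y for x, y in zip(vals, vals[1:]))  # strictly increasing
print(vals[0], vals[-1])  # climbing toward e = 2.71828...
```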
|
1,413,145 | <p>I would like to find a way to show that the sequence $a_n=\big(1+\frac{1}{n}\big)^n+\frac{1}{n}$ is eventually increasing.</p>
<p>$\hspace{.3 in}$(Numerical evidence suggests that $a_n<a_{n+1}$ for $n\ge6$.)</p>
<p>I was led to this problem by trying to prove by induction that $\big(1+\frac{1}{n}\big)^n\le3-\frac{1}{n}$, as in</p>
<p>$\hspace{.4 in}$ <a href="https://math.stackexchange.com/questions/1087545/a-simple-proof-that-bigl1-frac1n-bigrn-leq3-frac1n">A simple proof that $\bigl(1+\frac1n\bigr)^n\leq3-\frac1n$?</a></p>
| Arin Chaudhuri | 404 | <p>Here is another way to approach this problem.
The function $$f(z) = 1 - z/2 + z^2/3 + \ldots + (-1)^{k+1} z^k/(k+1) + \ldots $$ is analytic on the unit disc $ \{ z : |z| < 1\}$, which implies $ g(z) = \exp f(z)$ is also analytic on $ \{ z : |z| < 1\}$ and hence can be expanded as a power series $$g(z) = a_0 + a_1 z + a_2 z^2 + \dots $$ in $ \{ z : |z| < 1\}$. We can easily compute the first few coefficients as $a_0 = \exp f(0) = e$, $a_1 = \exp f(0) f^{'}(0) = -e/2$, and similarly $a_2 = 11e/24.$</p>
<p>However, $f(x) = \log(1+x)/x$ for all real $x$ with $ 0 < x < 1$, so $g(x) = (1+x)^{1/x}$ for $ 0 < x < 1$, and the above series for real $x$ is an analytic extension of $(1+x)^{1/x}$ to $-1 < x < 1$.</p>
<p>Writing $$(1+x)^{1/x} = e - ex/2 + 11ex^2/24 + \dots $$ we get
$(1+x)^{1/x} + cx = e + (c - e/2) x + 11ex^2/24 + \dots $.</p>
<p>The derivative of the above function at 0 is $c - e/2$, which is negative if $c < e/2$; by the continuity of the derivative, there is then an interval $[0,\epsilon]$ on which the derivative of the function above is strictly negative, and hence it decreases there. Since $1/n$ decreases and lies in $[0,\epsilon]$ for all large $n$, this means $(1+1/n)^{n} + c/n$ increases eventually for any $ c < e/2$. This holds for any $x_n$ that strictly decreases to 0, not only $1/n$: $(1+x_n)^{1/x_n} + c x_n$ eventually increases if $ c < e/2$. We can similarly argue that $(1+x_n)^{1/x_n} + c x_n$ eventually decreases if $ c > e/2$ and $x_n$ strictly decreases to 0. For $c = e/2$, the positivity of the coefficient of $x^2$ implies $(1+x_n)^{1/x_n} + e x_n / 2$ eventually starts decreasing.</p>
|
622,552 | <p>In the context of (most of the time convex) optimization problems -</p>
<p>I understand that I can build a Lagrange dual problem and, assuming I know there is strong duality (no gap), I can find the optimum of the primal problem from that of the dual. Now I want to find the primal optimum point (i.e. the point at which the optimum is attained). Somehow, it is assumed that both the primal and dual optima are achieved at the same point (which is then a saddle point). Why/when is that true? Can strong duality happen only at a saddle point? Does that require the primal problem to be convex?</p>
<p>Now, from this, it is assumed that it's enough to find the minimizer of the Lagrangian at the optimal dual parameters (the solution of the dual problem) in order to get the primal optimal point. Why is that true?</p>
<p>Thank you,
Dany</p>
| user3589786 | 168,570 | <p>If you solved the dual and have optimal values for the dual variables, then plug those optimal dual values back into the Lagrange equation. Now you have an equation with only x as unknown. Minimize, and you've recovered the value for x.</p>
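A minimal worked example of this recipe (my own toy problem, not from the answer): minimize $x^2$ subject to $x\ge 1$. The dual is $g(\lambda)=\lambda-\lambda^2/4$ for $\lambda\ge 0$, maximized at $\lambda^\star=2$; plugging $\lambda^\star$ back into the Lagrangian $L(x,\lambda)=x^2+\lambda(1-x)$ and minimizing over $x$ recovers the primal optimum.

```python
lam_star = 2.0                  # maximizer of g(lam) = lam - lam**2 / 4
# argmin_x of L(x, lam) = x**2 + lam*(1 - x) is x = lam / 2
x_star = lam_star / 2.0
primal_value = x_star**2
dual_value = lam_star - lam_star**2 / 4.0
print(x_star, primal_value, dual_value)  # -> 1.0 1.0 1.0  (no duality gap)
```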
|
622,552 | <p>In the context of (most of the time convex) optimization problems -</p>
<p>I understand that I can build a Lagrange dual problem and, assuming I know there is strong duality (no gap), I can find the optimum of the primal problem from that of the dual. Now I want to find the primal optimum point (i.e. the point at which the optimum is attained). Somehow, it is assumed that both the primal and dual optima are achieved at the same point (which is then a saddle point). Why/when is that true? Can strong duality happen only at a saddle point? Does that require the primal problem to be convex?</p>
<p>Now, from this, it is assumed that it's enough to find the minimizer of the Lagrangian at the optimal dual parameters (the solution of the dual problem) in order to get the primal optimal point. Why is that true?</p>
<p>Thank you,
Dany</p>
| OtZman | 431,258 | <p>I know this question was asked some time ago, but I just stumbled upon it myself wondering the same thing, so I will post what I have found out in case it is helpful to someone else.</p>
<p>I think the answer you seek is in Boyd and Vandenberghe's Convex Optimization (freely available on Boyd's website: <a href="http://stanford.edu/~boyd/cvxbook/" rel="nofollow noreferrer">http://stanford.edu/~boyd/cvxbook/</a>) in sections 5.1-5.5 (especially, see subsection 5.5.5). Below are my key takeaways from that text.</p>
<p>Let
\begin{align}
\min \;\;\;\; &f_0(x)\\
\text{s.t.} \;\;\;\; &f_i(x) \leq 0 \;\;\;\; i = 1,\ldots,m \tag{1} \label{eq:1}\\
&h_i(x) = 0 \;\;\;\; i = 1,\ldots,p,
\end{align}
where $x \in \mathbb{R}^n$ be the optimization problem (not necessarily convex) we're considering. The Lagrangian of this problem is
$$
L(x, \lambda, \nu) = f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \sum_{i=1}^p \nu_i h_i(x).
$$
The Lagrangian dual is
$$
g(\lambda,\nu) = \inf_{x \in \mathcal{D}} \left( f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \sum_{i=1}^p \nu_i h_i(x) \right),
$$
where $\mathcal{D} := \left( \bigcap_{i=0}^m \operatorname{dom} f_i \right) \cap \left( \bigcap_{i=1}^p \operatorname{dom} h_i \right)$. The Lagrangian dual problem, in turn, is
\begin{align}
\max \;\;\;\; &g(\lambda,\nu)\\
\text{s.t.} \;\;\;\; &\lambda \succcurlyeq 0.
\end{align}
Note that
\begin{align}
\sup_{\lambda \succcurlyeq 0, \; \nu} L(x,\lambda,\nu)
&= \sup_{\lambda \succcurlyeq 0, \; \nu} \left( f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \sum_{i=1}^p \nu_i h_i(x) \right) \\
&=
\begin{cases}
f_0(x) & \text{if } f_i(x) \leq 0, \; i = 1,\ldots,m, \text{ and } h_i(x) = 0, \; i=1,\ldots,p \\
\infty & \text{otherwise}.
\end{cases}
\end{align}
This means that
$$
p^\star := \inf_x \sup_{\lambda \succcurlyeq 0, \; \nu} L(x, \lambda, \nu)
$$
is the optimal value of the primal problem \eqref{eq:1}. Note that there is no restriction on the $x$ values over which we take the infimum, since the conditions of \eqref{eq:1} are already included in the $\sup L$. Moreover, by definition,
$$
d^\star := \sup_{\lambda \succcurlyeq 0, \; \nu} \inf_x L(x,\lambda,\nu)
$$
is the optimal value of the dual problem.</p>
<p>Let us now assume that strong duality holds. Then $d^\star = p^\star$. Furthermore, also assume that both the primal and dual problem optimal values are attained, i.e., that there exist feasible $x^\star$ and $(\lambda^\star,\nu^\star)$ such that $f_0(x^\star) = p^\star$ and $g(\lambda^\star,\nu^\star) = d^\star$. Then
\begin{align}
f_0(x^\star)
&= g(\lambda^\star,\nu^\star) \\
&= \inf_x \left( f_0(x) + \sum_{i=1}^m \lambda_i^\star f_i(x) + \sum_{i=1}^p \nu_i^\star h_i(x) \right) \\
&\leq f_0(x^\star) + \sum_{i=1}^m \lambda_i^\star f_i(x^\star) + \sum_{i=1}^p \nu_i^\star h_i(x^\star) \\
&\leq f_0(x^\star)
\end{align}
(for motivations of each equality/inequality, see page 242 in Boyd and Vandenberghe's book). This implies that the inequalities are in fact equalities. From this, we can conclude that $x^\star$ solves the problem $\inf_x L(x,\lambda^\star,\nu^\star)$. Finally, assume that $\inf_x L(x,\lambda^\star,\nu^\star)$ has a unique solution (which happens e.g. if $L(x,\lambda^\star,\nu^\star)$ is a strictly convex function of $x$). Then $x^\star$ is that solution, and therefore
$$
x^\star = \arg\min_x L(x,\lambda^\star,\nu^\star).
$$ </p>
|
96,080 | <p>The empty clause is a clause containing no literals and by definition is false.</p>
<p>c = {} = F</p>
<p>What then is the empty clause set, and why does it evaluate to true?</p>
<p>Thanks!</p>
| Ted | 15,012 | <p>Remember, we take the disjunction over the elements of a <em>clause</em>, then the conjunction over the entire <em>clause set</em>. So if the <em>clause set</em> is empty, then we have an empty <em>conjunction</em>. If the <em>clause itself</em> is empty, then we have an empty <em>disjunction</em>.</p>
<p>What does it mean to take an empty conjunction or empty disjunction? Let's consider a similar situation. Over the real numbers, what is an empty sum, or an empty product? I claim that an empty sum should be 0; an empty product should be 1. Why is this? Clearly, we have:</p>
<p>sum(2,3,4)+sum(5,6,7) = sum(2,3,4,5,6,7)</p>
<p>sum(2,3,4)+sum(5,6) = sum(2,3,4,5,6)</p>
<p>sum(2,3,4)+sum(5) = sum(2,3,4,5)</p>
<p>Now make the second sum empty:</p>
<p>sum(2,3,4)+sum() = sum(2,3,4)</p>
<p>So sum() should be 0. In the same way, product() must be 1. (Replace "sum" by "product" and "+" by "*" in the lines above.)</p>
<p>In general, a commutative, associative binary operation applied on an empty set should be the identity element for that operation. </p>
<p>Now back to your original example. Since the identity for conjunction is "true", and the identity for disjunction is "false", that is why an empty <em>clause set</em> is true, but empty <em>clause</em> is false.</p>
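The same convention is baked into many languages; e.g. in Python (an aside I am adding, not part of the answer):

```python
import math

print(all([]))        # True  -- empty conjunction
print(any([]))        # False -- empty disjunction
print(sum([]))        # 0     -- empty sum
print(math.prod([]))  # 1     -- empty product (Python 3.8+)
```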
|
8,695 | <p>I have a parametric plot showing a path of an object in x and y (position), where each is a function of t (time), on which I would like to put a time tick, every second let's say. This would be to indicate where the object is moving fast (widely spaced ticks) or slow (closely spaced ticks). Each tick would just be short line that crosses the plot at that point in time, where that short line is normal to the plotted curve at that location.</p>
<p>I'm sure I can figure out a way to do it using lots of calculations and graphics primitives, but I'm wondering if there is something built-in that I have missed in the documentation that would make this easier.</p>
<p>(Note: this is about ticks on the plotted curve itself -- this doesn't have anything to do with the ticks on the axes or frame.)</p>
| rm -rf | 5 | <p>To create the ticks perpendicular to the curve, I calculate the direction of the normal to the curve where the ticks are to be placed and then orient a line segment along that direction. The following code does that.</p>
<pre><code>ClearAll[ParametricTimePlot]
SetAttributes[ParametricTimePlot, HoldAll]
ParametricTimePlot[fun_, {var_, min_, max_, steps_}, len_,
opts : OptionsPattern[]] :=
Show[
ParametricPlot[fun, {var, min, max}, opts],
Graphics[
Rotate[Translate[Line[{{-len/2, 0}, {len/2, 0}}], #1], ArcTan @@ #2]
] & @@@ (N@Table[{fun, RotationMatrix[π/2].D[fun, var]} /. var -> i,
{i, min + steps, max, steps}])
]
</code></pre>
<p>Use the above function as:</p>
<pre><code>ParametricTimePlot[{ Sqrt[t] Cos[t], Sqrt[t] Sin[t]}, {t, 0, 2 π,
π/10}, 0.1, PlotStyle -> Blue]
</code></pre>
<p><img src="https://i.stack.imgur.com/qVMs1.png" alt="enter image description here"></p>
<hr>
<p>If you want to keep it simple and just show the dots, then this is very straightforward:</p>
<pre><code>ClearAll[ParametricTimePlot]
SetAttributes[ParametricTimePlot, HoldAll]
ParametricTimePlot[fun_, {var_, min_, max_, steps_}, opts : OptionsPattern[]] :=
ParametricPlot[fun, {var, min, max}, opts]
~Show~
ListPlot[Table[fun /. var -> i, {i, min, max, steps}],
PlotStyle -> Red, PlotMarkers -> {Automatic, 8}]
ParametricTimePlot[{ Sqrt[t] Cos[t], Sqrt[t] Sin[t]}, {t, 0, 2 Pi, Pi/10},
PlotStyle -> Blue]
</code></pre>
<p><img src="https://i.stack.imgur.com/4ECpb.png" alt="enter image description here"></p>
|
974,656 | <p><img src="https://i.stack.imgur.com/LyqzL.jpg" alt="enter image description here"></p>
<p>One way to solve this, and the way my book does it, is the following: </p>
<p><img src="https://i.stack.imgur.com/2wYSn.jpg" alt="enter image description here"></p>
<hr>
<p>This is a well-known approach, but I have a different method that seems logical to me. Yes, it gives the wrong answer, but I don't understand what the mistake in the following method is:</p>
<p>For 1 component the mean is $E(X)=2.5$ so for 5 components it's : </p>
<p>$$E(5X)=5E(X)=5(2.5)=12.5$$</p>
<p>So for 5 items we can say : </p>
<p>$$\lambda= 1/E(5X)= 1/12.5$$</p>
<p>$$X \sim \operatorname{Expo}(1/12.5)$$</p>
<p>$$P(T \geq 3)=1-e^{-3/12.5}$$
$$P(T \geq 3)=0.21$$</p>
<p>Which is not the same as in the book. Please help me, what is wrong with my method.</p>
| A. Breust | 184,254 | <p><strong>Hints</strong></p>
<p>The polynomial is of even degree.
What does this mean about the limits as $x\rightarrow-\infty$ and $x\rightarrow+\infty$?</p>
<p>Now what is $P(0)$? Using the fact that $a_na_0<0$ means that $a_n$ and $a_0$ are of opposite signs, you should be able to finish.</p>
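<p>A quick numerical sanity check of the hint (a Python sketch with an arbitrarily chosen example polynomial, not part of the original hint): for an even-degree polynomial with $a_na_0<0$, $P(0)=a_0$ has the opposite sign to the limits at $\pm\infty$, so the intermediate value theorem gives a real root on each side of $0$.</p>

```python
def P(x):
    # Example even-degree polynomial: leading coefficient 1, constant term -3,
    # so a_n * a_0 = -3 < 0.  (Arbitrary illustration, not from the hint.)
    return x**4 - 2*x**2 + x - 3

def bisect(f, lo, hi, steps=100):
    """Simple bisection, assuming f changes sign between lo and hi."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if (f(lo) < 0) == (f(mid) < 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# P(0) = a_0 < 0 while P(-10) > 0 and P(10) > 0 (even-degree limits are +inf),
# so there is a real root in each of (-10, 0) and (0, 10).
assert P(0) < 0 and P(-10) > 0 and P(10) > 0
r_neg = bisect(P, -10, 0)
r_pos = bisect(P, 0, 10)
assert r_neg < 0 < r_pos
print(r_neg, r_pos)
```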
|
2,738,957 | <p>I did the following to derive the value of $\pi$, you might want to grab a pencil and a piece of paper:</p>
<p>Imagine a unit circle with center point $b$ and two points $a$ and $c$ on the circumference of the circle such that triangle $abc$ is an obtuse triangle. You can see that if $\theta$ denotes the angle $\angle acb$ then $0<\theta<90$ and that the angle of the sector $abc$ is $180 -2\theta$, so the area of sector $abc$ is $\frac{180-2\theta}{360}\pi = \frac{90-\theta}{180}\pi$. If we extend the radius $bc$ to form a diameter $D$, then the angle between line $ab$ and $D$ is $180-(180-2\theta) = 2\theta$; so if we define the distance between the point $a$ and the line $D$ as $h$, we get $h = \sin(2\theta)$. This allows us to derive the area of triangle $abc$ as $\frac{1}{2}\sin(2\theta)$. The area of segment $ac$ is the area of sector $abc$ minus the area of triangle $abc$:
$$
\frac{90-\theta}{180}\pi - \frac{1}{2}\sin(2\theta)
$$
We can see that as $\theta$ approaches $0$, the area of segment $ac$ approaches half the area of the circle, which is $\frac{\pi}{2}$:
$$
\lim_{\theta \to 0} \frac{90-\theta}{180}\pi - \frac{1}{2}\sin(2\theta) = \frac{\pi}{2}
$$
$$
\lim_{\theta \to 0} \frac{90-\theta}{90}\pi - \sin(2\theta) = \pi
$$
$$
\lim_{\theta \to 0} \pi\Big[\frac{90-\theta}{90} - 1\Big] = \lim_{\theta \to 0} \sin(2\theta)
$$
$$
\lim_{\theta \to 0} -\frac{\theta\pi}{90} = \lim_{\theta \to 0} \sin(2\theta)
$$
$$
\pi = -\lim_{\theta \to 0} \frac{90\sin(2\theta)}{\theta}
$$
However, this limit approaches $-3.1415\ldots$</p>
| dxiv | 291,201 | <blockquote>
<p>$$\lim_{\theta \to 0} -\frac{\theta\pi}{90} = \lim_{\theta \to 0} \sin(2\theta) $$</p>
</blockquote>
<p>The above is correct, but it does not imply the following line:</p>
<blockquote>
<p>$$\pi = -\lim_{\theta \to 0} \frac{90\sin(2\theta)}{\theta}$$</p>
</blockquote>
<p>The fallacy here is expecting (or handwaving) that $\displaystyle \,\lim_{\theta \to 0} f(\theta) = \lim_{\theta \to 0} g(\theta) \implies \lim_{\theta \to 0} \frac{f(\theta)}{g(\theta)}=1\,$, but this latter implication does not necessarily hold true when both limits are $\,0\,$.</p>
<p>Compare for example to $\displaystyle\,\lim_{\theta \to 0} \theta + \lim_{\theta \to 0} \theta = 0 \implies \lim_{\theta \to 0} \theta = -\lim_{\theta \to 0} \theta \implies 1 = -\lim_{\theta \to 0} \dfrac{\theta}{\theta}=-1\,$.</p>
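<p>A numerical illustration of the fallacy (a Python sketch, not from the original answer): in the question's degree-based setup, both sides of the equated limits do go to $0$, but their ratio tends to $-1$, not $+1$ — which is exactly why the final step fails and a spurious minus sign appears.</p>

```python
import math

def lhs(theta_deg):
    # -theta * pi / 90, with theta measured in degrees as in the question
    return -theta_deg * math.pi / 90

def rhs(theta_deg):
    # sin(2*theta), theta in degrees
    return math.sin(math.radians(2 * theta_deg))

# Both quantities tend to 0 as theta -> 0 ...
for theta in [1.0, 0.1, 0.01, 0.001]:
    print(theta, lhs(theta), rhs(theta), lhs(theta) / rhs(theta))

# ... but the ratio tends to -1, so equal limits do not force the ratio to be 1.
assert abs(lhs(1e-6) / rhs(1e-6) + 1) < 1e-9
```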
|
28,532 | <p><code>MapIndexed</code> is a very handy built-in function. Suppose that I have the following list, called <code>list</code>:</p>
<pre><code>list = {10, 20, 30, 40};
</code></pre>
<p>I can use <code>MapIndexed</code> to map an arbitrary function <code>f</code> across <code>list</code>; evaluating <code>MapIndexed[f, list]</code> gives:</p>
<pre><code>{f[10, {1}], f[20, {2}], f[30, {3}], f[40, {4}]}
</code></pre>
<p>where the second argument to <code>f</code> is the part specification of each element of the list.</p>
<p>But, now, what if I would like to use <code>MapIndexed</code> only at certain elements? Suppose, for example, that I want to apply <code>MapIndexed</code> to only the second and third elements of <code>list</code>, obtaining the following:</p>
<pre><code>{10, f[20, {2}], f[30, {3}], 40}
</code></pre>
<p>Unfortunately, there is no built-in "<code>MapAtIndexed</code>", as far as I can tell. What is a simple way to accomplish this? Thanks for your time.</p>
| tenure track job seeker | 6,251 | <p>You can also do this:</p>
<pre><code>list = {10, 20, 30, 40};
newlist =list/.{a_,b_,c_,d_}:>{a,f[b,2],f[c,3],d}
</code></pre>
<p>In this way, you give a label to each element of your list, and using <code>:></code>, you can apply some other function to only some of those elements. In the above code, for example, <code>a</code> and <code>d</code> are not changed, while the second and third elements are.</p>
|
3,425,415 | <p>I need to define a bijection <span class="math-container">$f:\mathbb Q\to\mathbb Q$</span> such that <span class="math-container">$f(0) = 0$</span> and <span class="math-container">$f(1) = 1$</span> while also preserving order (i.e. if <span class="math-container">$a < b$</span>, then <span class="math-container">$f(a) < f(b)$</span>). Also, <span class="math-container">$f$</span> cannot be the identity function. </p>
<p>Whenever I try to come up with a function, it either becomes not injective, not surjective, or it does not preserve order. Any help would be appreciated. </p>
| Ross Millikan | 1,827 | <p>One approach is just to use two straight lines.
<span class="math-container">$$f(x)= \begin {cases} \frac x2 & x \le \frac 12\\
\frac 14+\frac 32(x-\frac 12)& \frac 12 \lt x\end{cases}$$</span>
<img src="https://i.stack.imgur.com/zkUBn.png" alt="enter image description here"></p>
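<p>A quick check of this map with exact rational arithmetic (a Python sketch, not part of the original answer): $f$ fixes $0$ and $1$, sends rationals to rationals, is strictly increasing on a sample of points, and is not the identity.</p>

```python
from fractions import Fraction

def f(x: Fraction) -> Fraction:
    # Two straight lines meeting at x = 1/2, as in the answer above.
    if x <= Fraction(1, 2):
        return x / 2
    return Fraction(1, 4) + Fraction(3, 2) * (x - Fraction(1, 2))

assert f(Fraction(0)) == 0
assert f(Fraction(1)) == 1
assert f(Fraction(1, 3)) != Fraction(1, 3)  # not the identity

# Strictly order-preserving on a sample of rationals:
pts = [Fraction(n, 7) for n in range(-21, 22)]
vals = [f(p) for p in pts]
assert all(a < b for a, b in zip(vals, vals[1:]))
print("f fixes 0 and 1, maps rationals to rationals, and preserves order")
```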
|
4,383,557 | <p>This question came up in an oral exam. During the course we studied a bit of the theory of lie algebras and some representation theory.</p>
<p>The question: show that the lie algebra <span class="math-container">$\mathfrak{g_2}$</span> has a dimension <span class="math-container">$14$</span> representation, where dimension <span class="math-container">$14$</span> means the vector space <span class="math-container">$V$</span> where the representation is defined has dimension <span class="math-container">$14$</span> over <span class="math-container">$k$</span>.</p>
<p>Why is this true? I think it has to do with <span class="math-container">$\mathfrak{g_2}$</span> having <span class="math-container">$12$</span> roots (and maybe the maximal toral subalgebra has dimension <span class="math-container">$2$</span>? But why?).</p>
<p>I'll be glad if someone can enlighten me.</p>
| Dietrich Burde | 83,966 | <p>Let <span class="math-container">$\mathfrak{g}$</span> be the simple Lie algebra of type <span class="math-container">$G_2$</span>, with positive roots <span class="math-container">$R^+=\{\alpha,\beta,\alpha+\beta,2\alpha+\beta,3\alpha+\beta,3\alpha+2\beta \}$</span>. With
<span class="math-container">$\lambda=m_1\varpi_1+m_2\varpi_2$</span> the Weyl dimension formula gives
<span class="math-container">\begin{align*}
\dim(L(\lambda)) & = \frac{1}{120}(m_1+1)(m_2+1)(m_1+2m_2+3)(m_1+m_2+2) \\
& \hspace{1.52cm} (m_1+3m_2+4)(2m_1 + 3m_2+5)
\end{align*}</span>
for the irreducible highest weight module. We easily see that the possible dimensions are
<span class="math-container">$$1, 7, 14, 27, 64, 77, 182, 189, 273, 286, 378, 448, 714, 729, 748, 896, 924, \ldots
$$</span></p>
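<p>The formula is easy to tabulate (a Python sketch, not part of the original answer); with the convention above, the smallest dimensions $1$, $7$, $14$ appear immediately, with $14$ coming from $\lambda=\varpi_2$, the adjoint representation.</p>

```python
def g2_dim(m1: int, m2: int) -> int:
    # Weyl dimension formula for G2, as stated above.
    num = ((m1 + 1) * (m2 + 1) * (m1 + 2 * m2 + 3) * (m1 + m2 + 2)
           * (m1 + 3 * m2 + 4) * (2 * m1 + 3 * m2 + 5))
    assert num % 120 == 0        # the Weyl formula always yields an integer
    return num // 120

dims = sorted(g2_dim(m1, m2) for m1 in range(6) for m2 in range(6))
print(dims[:8])
assert g2_dim(0, 0) == 1    # trivial representation
assert g2_dim(1, 0) == 7    # the 7-dimensional representation
assert g2_dim(0, 1) == 14   # the adjoint representation, dim(g2) = 14
```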
|
464,426 | <p>Find the limit of $$\lim_{x\to 1}\frac{x^{1/5}-1}{x^{1/3}-1}$$</p>
<p>How should I approach it? I tried to use L'Hopital's Rule, but it just keeps giving me 0/0.</p>
| Zarrax | 3,035 | <p>Substituting $y = x^{1 \over 3}$ you have
$$\lim_{x\to 1}\frac{x^{1/5}-1}{x^{1/3}-1} = \lim_{y\to 1}\frac{y^{3/5}-1}{y-1}$$
Note the right hand side is the definition of ${d \over dy} y^{3/5}|_{y = 1}$, which gives you
${3 \over 5}(1)^{-{2 \over 5}} = {3 \over 5}$.</p>
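<p>A quick numerical confirmation of the value $\frac35$ (a Python sketch, not part of the original answer):</p>

```python
def q(x: float) -> float:
    # The quotient whose limit at x -> 1 is sought.
    return (x ** 0.2 - 1) / (x ** (1 / 3) - 1)

for x in [1.1, 1.01, 1.0001, 1.000001]:
    print(x, q(x))

# Near x = 1 the quotient settles on 3/5 = 0.6.
assert abs(q(1 + 1e-8) - 0.6) < 1e-6
```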
|
915,054 | <p>I'm trying to find a closed form of this sum:
$$S=\sum_{n=1}^\infty\frac{\Gamma\left(n+\frac{1}{2}\right)}{(2n+1)^4\,4^n\,n!}.\tag{1}$$
<a href="http://www.wolframalpha.com/input/?i=Sum%5BGamma%28n%2B1%2F2%29%2F%28%282n%2B1%29%5E4+4%5En+n%21%29%2C+%7Bn%2C1%2CInfinity%7D%5D"><em>WolframAlpha</em></a> gives a large expressions containing multiple generalized hypergeometric functions, that is quite difficult to handle. After some simplification it looks as follows:
$$S=\frac{\pi^{3/2}}{3}-\sqrt{\pi}-\frac{\sqrt{\pi}}{324}\left[9\,_3F_2\left(\begin{array}{c}\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2}\\\tfrac{5}{2},\tfrac{5}{2}\end{array}\middle|\tfrac{1}{4}\right)\\+3\,_4F_3\left(\begin{array}{c}\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2}\\\tfrac{5}{2},\tfrac{5}{2},\tfrac{5}{2}\end{array}\middle|\tfrac{1}{4}\right)+\,_5F_4\left(\begin{array}{c}\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2}\\\tfrac{5}{2},\tfrac{5}{2},\tfrac{5}{2},\tfrac{5}{2}\end{array}\middle|\tfrac{1}{4}\right)\right].\tag{2}$$ I wonder if there is a simpler form. Elementary functions and simpler special funtions (like Bessel, gamma, zeta, polylogarithm, polygamma, error function etc) are okay, but not hypergeometric functions.</p>
<p>Could you help me with it? Thanks!</p>
| user153012 | 153,012 | <p>By now, I've found a closed form by doing some integral evaluation and a lot of hypergeometric, polylogarithm, and polygamma manipulation.
$$
S = \sqrt{\pi}\left(\frac{\pi}{12}\zeta(3)+\frac{1}{192\sqrt3}\psi^{(3)}\left(\tfrac13\right)-\frac{\pi^4}{72\sqrt3}-1\right).
$$</p>
|
915,054 | <p>I'm trying to find a closed form of this sum:
$$S=\sum_{n=1}^\infty\frac{\Gamma\left(n+\frac{1}{2}\right)}{(2n+1)^4\,4^n\,n!}.\tag{1}$$
<a href="http://www.wolframalpha.com/input/?i=Sum%5BGamma%28n%2B1%2F2%29%2F%28%282n%2B1%29%5E4+4%5En+n%21%29%2C+%7Bn%2C1%2CInfinity%7D%5D"><em>WolframAlpha</em></a> gives a large expressions containing multiple generalized hypergeometric functions, that is quite difficult to handle. After some simplification it looks as follows:
$$S=\frac{\pi^{3/2}}{3}-\sqrt{\pi}-\frac{\sqrt{\pi}}{324}\left[9\,_3F_2\left(\begin{array}{c}\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2}\\\tfrac{5}{2},\tfrac{5}{2}\end{array}\middle|\tfrac{1}{4}\right)\\+3\,_4F_3\left(\begin{array}{c}\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2}\\\tfrac{5}{2},\tfrac{5}{2},\tfrac{5}{2}\end{array}\middle|\tfrac{1}{4}\right)+\,_5F_4\left(\begin{array}{c}\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2}\\\tfrac{5}{2},\tfrac{5}{2},\tfrac{5}{2},\tfrac{5}{2}\end{array}\middle|\tfrac{1}{4}\right)\right].\tag{2}$$ I wonder if there is a simpler form. Elementary functions and simpler special funtions (like Bessel, gamma, zeta, polylogarithm, polygamma, error function etc) are okay, but not hypergeometric functions.</p>
<p>Could you help me with it? Thanks!</p>
| Tito Piezas III | 4,781 | <p>The OP gives the evaluation</p>
<p><span class="math-container">$$S=\frac{\pi^{3/2}}{3}-\sqrt{\pi}-\frac{\sqrt{\pi}}{324}\left[9\,_3F_2\left(\begin{array}{c}\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2}\\\tfrac{5}{2},\tfrac{5}{2}\end{array}\middle|\tfrac{1}{4}\right)\\+3\,_4F_3\left(\begin{array}{c}\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2}\\\tfrac{5}{2},\tfrac{5}{2},\tfrac{5}{2}\end{array}\middle|\tfrac{1}{4}\right)+\,_5F_4\left(\begin{array}{c}\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2}\\\tfrac{5}{2},\tfrac{5}{2},\tfrac{5}{2},\tfrac{5}{2}\end{array}\middle|\tfrac{1}{4}\right)\right]$$</span></p>
<p>We can simplify this further. Since</p>
<p><span class="math-container">$$\frac1{36}\,_3F_2\left(\begin{array}{c}\tfrac32,\tfrac32,\tfrac32\\ \tfrac52,\tfrac52\end{array}\middle|\tfrac14\right) = -\,_3F_2\left(\begin{array}{c}\tfrac12,\tfrac12,\tfrac12\\ \tfrac32,\tfrac32\end{array}\middle|\tfrac14\right) + \,_2F_1\left(\begin{array}{c}\tfrac12,\tfrac12\\ \tfrac32\end{array}\middle|\tfrac14\right) $$</span></p>
<p><span class="math-container">$$\frac1{108}\,_4F_3\left(\begin{array}{c}\tfrac32,\tfrac32,\tfrac32,\tfrac32\\ \tfrac52,\tfrac52,\tfrac52\end{array}\middle|\tfrac14\right) = -\,_4F_3\left(\begin{array}{c}\tfrac12,\tfrac12,\tfrac12,\tfrac12\\ \tfrac32,\tfrac32,\tfrac32\end{array}\middle|\tfrac14\right) + \,_3F_2\left(\begin{array}{c}\tfrac12,\tfrac12,\tfrac12\\ \tfrac32,\tfrac32\end{array}\middle|\tfrac14\right) $$</span></p>
<p><span class="math-container">$$\frac1{324}\,_5F_4\left(\begin{array}{c}\tfrac32,\tfrac32,\tfrac32,\tfrac32,\tfrac32\\ \tfrac52,\tfrac52,\tfrac52,\tfrac52\end{array}\middle|\tfrac14\right) = -\,_5F_4\left(\begin{array}{c}\tfrac12,\tfrac12,\tfrac12,\tfrac12,\tfrac12\\ \tfrac32,\tfrac32,\tfrac32,\tfrac32\end{array}\middle|\tfrac14\right) + \,_4F_3\left(\begin{array}{c}\tfrac12,\tfrac12,\tfrac12,\tfrac12\\ \tfrac32,\tfrac32,\tfrac32\end{array}\middle|\tfrac14\right) $$</span></p>
<p>and <span class="math-container">$\small{\,_2F_1\left(\begin{array}{c}\tfrac12,\tfrac12\\ \tfrac32\end{array}\middle|\tfrac14\right)} = \frac{\pi}3$</span>, then,</p>
<blockquote>
<p><span class="math-container">$$S=\sum_{n=1}^\infty\frac{\Gamma\left(n+\frac{1}{2}\right)}{(2n+1)^4\,4^n\,n!} = -\sqrt{\pi}+\sqrt{\pi}\,_5F_4\left(\begin{array}{c}\tfrac12,\tfrac12,\tfrac12,\tfrac12,\tfrac12\\ \tfrac32,\tfrac32,\tfrac32,\tfrac32\end{array}\middle|\tfrac14\right) \approx 0.0028056$$</span></p>
</blockquote>
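<p>The numerical value is easy to confirm directly from the series (a Python sketch, not part of either answer): the terms decay roughly like $4^{-n}$, so a few dozen terms suffice. The gamma-factorial ratio is built up iteratively to avoid overflow.</p>

```python
import math

# t_n = Gamma(n + 1/2) / (4^n * n!), built up via the ratio
# t_{n+1} / t_n = (n + 1/2) / (4 * (n + 1)).
t = math.gamma(1.5) / 4          # t_1 = Gamma(3/2) / (4^1 * 1!)
S = 0.0
for n in range(1, 60):
    S += t / (2 * n + 1) ** 4
    t *= (n + 0.5) / (4 * (n + 1))

print(S)  # matches the ~0.0028056 quoted above
assert abs(S - 0.0028056) < 1e-6
```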
|
1,681,205 | <p>I would like a <strong>hint</strong> for the following problem; more specifically, what strategy or approach should I take to prove it?</p>
<p><em>Problem</em>: Let $P \geq 2$ be an integer. Define the recurrence
$$p_n = p_{n-1} + \left\lfloor \frac{p_{n-4}}{2} \right\rfloor$$
with initial conditions:
$$p_0 = P + \left\lfloor \frac{P}{2} \right\rfloor$$
$$p_1 = P + 2\left\lfloor \frac{P}{2} \right\rfloor$$
$$p_2 = P + 3\left\lfloor \frac{P}{2} \right\rfloor$$
$$p_3 = P + 4\left\lfloor \frac{P}{2} \right\rfloor$$</p>
<p>Prove that the following limit converges:
$$\lim_{n\rightarrow \infty} \frac{p_n}{z^n}$$
where $z$ is the positive real solution to the equation $x^4 - x^3 - \frac{1}{2} = 0$.</p>
<p><em>Note</em>: I've already proven the following:
$$\lim_{n\rightarrow \infty} \frac{p_n}{p_{n-1}} = z$$
Any ideas? I'm not sure if this result helps. Also, the sequence $p_n/z^n$ is bounded above and below. I've attempted to show that the sequence $\frac{p_n}{z^n}$ is Cauchy, but had no luck with that. I don't know what the limit converges to either.</p>
<p><em>Edit</em>: I believe the limit should converge as $p_n$ achieves an end behaviour of the form $cz^n$ for $c \in \mathbb{R}$ (this comes from the fact that the limit of the ratios of $p_n$ converge to $z$), however I do not know how to make this rigorous.</p>
<p><em>Edit 2</em>: Proving the limit exists is equivalent to showing
$$p_0 \cdot \prod_{n=1}^{\infty} \left( \frac{p_n/p_{n-1}}{z} \right)$$
converges.</p>
<p><strong>UPDATED:</strong></p>
<p>If someone could prove that $|p_n-z \cdot p_{n-1}|$ is bounded above (or converges, or diverges), then the proof is complete.</p>
| marty cohen | 13,079 | <p>I don't know if you can show that $\lim \frac{p_n}{z^n} = 1$. If the sequence $\frac{p_n}{p_{n-1}}$ approaches $z$ from the same side, each term in the product exceeds $z$, so the product will always exceed $z^n$.</p>
<p>What you <em>can</em> show is that $\lim \frac{p_n^{1/n}}{z} = 1$. I will now give the standard, not original, proof.</p>
<p>Once you have shown that $\lim_{n\rightarrow \infty} \frac{p_n}{p_{n-1}} = z$, the hard part is done. The rest is a standard good-part/bad-part splitting on $p_n$.</p>
<p>From that limit, for any $c > 0$ there is an $N = N(c)$ such that $z-c < \frac{p_n}{p_{n-1}} < z+c$ for $n > N(c)$.</p>
<p>Then (this is how these proofs usually go)</p>
<p>$\begin{array}\\
\frac{p_n}{p_0}
&=\prod_{k=1}^{n} \frac{p_k}{p_{k-1}}\\
&=\prod_{k=1}^{N(c)} \frac{p_k}{p_{k-1}}\prod_{k=N(c)+1}^{n} \frac{p_k}{p_{k-1}}\\
&=P(c)\prod_{k=N(c)+1}^{n} \frac{p_k}{p_{k-1}}\\
&< P(c)(z+c)^{n-N(c)}
\qquad\text{(this is for an upper bound - the lower bound proof is similar)}\\
\text{so}\\
\frac{p_n}{z^n}
&< \frac{P(c)(z+c)^{n-N(c)}}{z^n}\\
&= \frac{P(c)(1+c/z)^{n-N(c)}}{z^{N(c)}}\\
&= (1+c/z)^n\frac{P(c)}{z^{N(c)}(1+c/z)^{N(c)}}\\
&= (1+c/z)^n\frac{P(c)}{(z+c)^{N(c)}}\\
\text{so that}\\
\frac{p_n^{1/n}}{z}
&< (1+c/z)\left(\frac{P(c)}{(z+c)^{N(c)}}\right)^{1/n}\\
&= (1+c/z)R(c)^{1/n}
\qquad\text{where }R(c) = \frac{P(c)}{(z+c)^{N(c)}}\\
\end{array}
$</p>
<p>Therefore, by taking $c$ small and letting $n$ get large, we have $\limsup \frac{p_n^{1/n}}{z} \le 1$.</p>
<p>An almost identical, cut-and-pasteable proof will show that $\liminf \frac{p_n^{1/n}}{z} \ge 1$, so that $\lim \frac{p_n^{1/n}}{z} = 1$.</p>
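<p>The setup is easy to probe numerically (a Python sketch, not part of the original answer): with $P=2$, the ratios $p_n/p_{n-1}$ settle on the positive root $z$ of $x^4-x^3-\tfrac12=0$, consistent with the limit the question starts from.</p>

```python
def z_root() -> float:
    """Positive real root of x^4 - x^3 - 1/2 = 0, by bisection."""
    f = lambda x: x**4 - x**3 - 0.5
    lo, hi = 1.0, 2.0                 # f(1) = -0.5 < 0 < f(2) = 7.5
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

P = 2
half = P // 2
p = [P + half, P + 2 * half, P + 3 * half, P + 4 * half]   # p_0 .. p_3
for n in range(4, 200):
    p.append(p[-1] + p[n - 4] // 2)    # p_n = p_{n-1} + floor(p_{n-4}/2)

z = z_root()
print(z, p[199] / p[198])
assert abs(p[199] / p[198] - z) < 1e-6
```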
|
335,116 | <p>As a PhD student, if I want to do something algebraic / linear-algebraic such as representation theory as well as do PDEs, in both the theoretical and numerical aspects of PDEs, would this combination be compatible and / or useful? Is it feasible?</p>
<p>I'd be grateful for an online resource to look into.</p>
<p>Thanks,</p>
| Mare | 61,949 | <p>The book "D-Modules, Perverse Sheaves, and Representation Theory" by Ryoshi Hotta, Kiyoshi Takeuchi and Toshiyuki Tanisaki is the perfect source for this topic. The introduction gives a very nice (and elementary) explanation of how representation theory of D-modules and systems of partial differential equations are related. I just give a very nice excerpt from the introduction of the book.</p>
<p>Let <span class="math-container">$X$</span> be an open subset of <span class="math-container">$\mathbb{C^n}$</span> and <span class="math-container">$\mathcal{O}$</span> the commutative ring of complex analytic functions defined on <span class="math-container">$X$</span>.
Let <span class="math-container">$D$</span> be the set of partial differential operators with coefficients in <span class="math-container">$\mathcal{O}$</span>, whose elements are thus finite sums of the form <span class="math-container">$\sum\limits_{i_1,\dots,i_n} f_{i_1,\dots,i_n} \left(\frac{\partial}{\partial x_1}\right)^{i_1} \cdots \left(\frac{\partial}{\partial x_n}\right)^{i_n}$</span>.</p>
<p>Let <span class="math-container">$P \in D$</span>, consider the partial differential equation <span class="math-container">$Pu=0$</span>, and let <span class="math-container">$M$</span> be the D-module <span class="math-container">$M=D/DP$</span>.
We then have <span class="math-container">$Hom_D(M,\mathcal{O}) \cong \{f \in \mathcal{O} | Pf=0 \}$</span>.
This shows that the set of analytic solutions of <span class="math-container">$Pu=0$</span> is isomorphic to a <span class="math-container">$Hom$</span>-space, and <span class="math-container">$Hom$</span>-spaces are the natural objects of study in representation theory (which can be summarized, more or less, as the study of representations of rings and their <span class="math-container">$Hom$</span>-spaces).</p>
|
745,674 | <p>Let $E$ be a complex vector space of dimension 3. Let $f$ be a non zero endomorphism such that $f^2=0$. I want to show that there is a basis $B=\{b_1,b_2,b_3\}$ of $E$ such that
$$f(b_1)=0, f(b_2)=b_1,f(b_3)=0$$</p>
<p><strong>Edit:</strong> Here is how I see the answer now:</p>
<p>$f$ being non zero there exists $x_0\in E$ such that $f(x_0)\not =0$. </p>
<p>Let $M=span\{f(x_0),x_0\}$. Since $f^2=0$ we show easily that $f(x_0)$ and $x_0$ are linearly independent hence they form a basis for $M$. </p>
<p>We take $b_1=f(x_0)$, $b_2=x_0$. </p>
<p>Take any $z\not \in M$. </p>
<p>If $z\in \ker f$ then take $b_3=z$. </p>
<p>If $z\not \in \ker f$ then there exists $\beta \not = 0$ such that $f(z)=\beta f(x_0)$ (because $\dim(Im(f))=1$ hence it is spanned by any non zero vector, we take $f(x_0)$ as a spanning vector). Take $z'=\dfrac{1}{\beta}z-f(x_0)$ hence $z'\in \ker f$ and we take $b_3=z'$.</p>
| user2566092 | 87,313 | <p>You can use Jordan-Normal form to solve this easily. In fact the proof would go along the lines of proving the general Jordan-Normal form theorem directly, so you may want to look at the proof for that if you don't want to use the theorem directly.</p>
|
1,246,705 | <p>I was doing some linear algebra exercises and came across the following tough problem :</p>
<blockquote>
<p>Let $M_{n\times n}(\mathbf{R})$ denote the set of all the matrices whose entries are real numbers. Suppose $\phi:M_{n\times n}(\mathbf{R})\to M_{n\times n}(\mathbf{R})$ is a nonzero linear transform (i.e. there is a matrix $A$ such that $\phi(A)\neq 0$) such that for all $A,B\in M_{n\times n}(\mathbf{R})$
$$\phi(AB)=\phi(A)\phi(B).$$
Prove that there exists a invertible matrix $T\in M_{n\times n}(\mathbf{R})$ such that
$$\phi(A)=TAT^{-1}$$
for all $A\in M_{n\times n}(\mathbf{R})$.</p>
</blockquote>
<p>This is an exercise from my textbook, and I was all thumbs when I attempted to solve it.</p>
<p>Can someone tell me how I should, at least, start the problem?</p>
| user1551 | 1,551 | <p>This kind of problems are known as <em>linear preserver problems</em> in the literature. The following is a sketch of proof that immediately comes to my mind. Certainly there are simpler ways to solve the problem (especially if one makes use of existing results on linear preserver problems), but anyway, let $\{e_1,\ldots,e_n\}$ be the standard basis of $\mathbb R^n$ and $E_{ij}=e_ie_j^T$.</p>
<ol>
<li><p>Prove that $\phi$ is injective. <em>Hint.</em> Suppose to the contrary that $\phi(X)=0$ for some matrix $X$ whose $(r,s)$-th entry is nonzero. Now consider $\phi(E_{ir}XE_{sj})$ for every $(i,j)$.</p></li>
<li><p>Prove that</p>
<ul>
<li>$\phi$ preserves non-invertibility (<em>hint:</em> if $X$ is singular, then $XY=0$ for some nonzero $Y$),</li>
<li>$\phi$ preserves invertibility (<em>hint:</em> if $\phi(P)$ is singular for some invertible $P$, then $\phi(P)Y=0$ for some nonzero matrix $Y$; since $\phi$ is an injective linear operator over a finite dimensional vector space, $Y=\phi(B)$ for some nonzero $B$, but then ...),</li>
<li>$\phi(I)=I$.</li>
</ul></li>
<li><p>This is the only interesting step in the whole proof: show that every $\phi(E_{ii})$ is a <strong><em>rank-1</em></strong> idempotent matrix. <em>Hint:</em> the rank of an idempotent matrix is equal to its trace.</p></li>
<li><p>Argue that without loss of generality, we may assume that $\phi(E_{11})=E_{11}$.</p></li>
<li><p>Show that whenever $i,j\ne1$, the first column and the first row of $\phi(E_{ij})$ are zero (<em>hint:</em> $E_{ij}E_{11}=0=E_{11}E_{ij}$). By mathematical induction/recursion, show that we may further assume that $\phi(E_{ii})=E_{ii}$ for every $i$.</p></li>
<li><p>For any off-diagonal coordinate pair $(i,j)$, show that $\phi(E_{ij})$ is a scalar multiple of $E_{ij}$ (<em>hint:</em> we have $E_{kk}E_{ij}=0$ for every $k\ne i$ and $E_{ij}E_{kk}=0$ for every $k\ne j$).</p></li>
<li><p>Hence prove that <em>in addition to</em> all the previous assumptions (i.e. $\phi(E_{ii})=E_{ii}$ and $\phi(E_{ij})$ is a scalar multiple of $E_{ij}$ for every $i,j$), we may further assume that $\phi(E_{\color{red}{1}j})=E_{\color{red}{1}j}$ for every $j$.</p></li>
<li><p>Since $\phi$ preserves invertibility and non-invertibility, prove that $\phi(E_{ij})=E_{ij}$ for every $(i,j)$.</p></li>
</ol>
|
4,598,275 | <p>Let <span class="math-container">$(X, \mathcal{F})$</span> be a measurable space, and <span class="math-container">$\mu_{n}, \mu$</span> probability measures on it. <span class="math-container">$\mu_{n}$</span> is said to converge weakly to <span class="math-container">$\mu$</span> if for any bounded continuous functions <span class="math-container">$f$</span> on <span class="math-container">$X$</span>, <span class="math-container">$\int f d\mu_{n} \xrightarrow{} \int f d\mu$</span>.</p>
<p>The professor mentioned that if <span class="math-container">$X = \mathbb{R}^{d}$</span> and <span class="math-container">$\mathcal{F}$</span> is the Borel sigma algebra, then it is enough to check the convergence of the integrals for compactly supported continuous functions <span class="math-container">$f$</span>. But why is this true?</p>
| Damian Pavlyshyn | 154,826 | <p>This is because all probability measures on <span class="math-container">$\mathbf{R}^d$</span> are inner-regular.</p>
<p>The relevant consequence of this fact is that for all <span class="math-container">$\epsilon > 0$</span>, there exists a compact <span class="math-container">$K \subseteq \mathbf{R}^d$</span> such that <span class="math-container">$\mu(K^c) < \epsilon$</span>.</p>
<p>Now, since <span class="math-container">$\mu_n(K) \rightarrow \mu(K)$</span> (there's a small argument here, since <span class="math-container">$\mathbf{1}_K$</span> is not continuous, but it's not difficult), we have that
<span class="math-container">$$
\limsup_{n\rightarrow \infty}\mu_n(K^c)
= 1 - \liminf_{n\rightarrow\infty}\mu_n(K)
= 1 - \mu(K)
= \mu(K^c)
< \epsilon.
$$</span></p>
<p>Therefore, for any <span class="math-container">$f \in C_b(\mathbf{R}^d)$</span>, we have that
<span class="math-container">\begin{align*}
\limsup_{n\rightarrow\infty}|\mu(f) - \mu_n(f)|
&\leq \limsup_{n\rightarrow\infty}|\mu(f\mathbf{1}_K) - \mu_n(f\mathbf{1}_K)| + \limsup_{n\rightarrow\infty}\lVert{f} \rVert_\infty (\mu(K^c) + \mu_n(K^c)) \\
&\leq 2\epsilon \lVert{f} \rVert_\infty.
\end{align*}</span></p>
<p>(Again, there's a detail here where <span class="math-container">$f \mathbf{1}_K$</span> is not continuous and so not in <span class="math-container">$C_c$</span>, but this is not hard to fix - the important thing is that it is compactly supported)</p>
<p>Since this holds for all <span class="math-container">$\epsilon > 0$</span>, we have that <span class="math-container">$\mu_n(f) \rightarrow \mu(f)$</span> and so <span class="math-container">$\mu_n \stackrel{\mathrm{w}}{\rightarrow} \mu$</span>.</p>
|
290,903 | <p>I am unable to understand the fundamental difference between a gradient vector and a tangent vector.
I need to understand the geometrical difference between the two.</p>
<p>By Gradient I mean a vector $\nabla F(X)$ , where $ X \in [X_1 X_2\cdots X_n]^T $</p>
<p>Note: I saw similar questions on "Difference between a Slope and Gradient" but the answers didn't help me much.</p>
<p>Appreciate any effort.</p>
| Paul Orland | 42,566 | <p>Say you are standing on the side of a hill. Imagine somewhere beneath the hill, there is a flat $x,y$ plane that you can use to determine your position. Let's say $+x$ is east and $+y$ is north.</p>
<p>If the hill is smooth, then the height of the hill above this plane is some continuous function $f(x,y)$.</p>
<p>The gradient of $f$ at any point tells you which direction is the steepest from that point and how steep it is. To find the direction of the gradient of $f$ where you are standing, decide which direction is the steepest. The answer could be "north" or "30 degrees west of south". There is no vertical component to the gradient, it is telling you a direction with respect to the $x,y$ plane which is your reference. The magnitude of the gradient will be the slope of the hill in that direction. </p>
<p>The tangent plane is the plane that best approximates the shape of the hill where you are standing. The hill may be curved if you look at it from a distance, but maybe directly beneath your feet it is flat enough to set a pizza box down and have it be flush with the ground. The plane that the bottom of the pizza box defines would, roughly, be the "tangent" plane.</p>
|
136,086 | <p>I've been given the following problem as homework:</p>
<blockquote>
<p>Q: <strong>Compute the number of subgraphs of <span class="math-container">$K_{15}$</span> isomorphic to <span class="math-container">$C_{15}$</span></strong>.</p>
<p><span class="math-container">$K_{15}$</span> means complete graph with 15 vertices. <span class="math-container">$C_{15}$</span> means cyclic graph, where the whole graph is a cycle, with 15 vertices. For example, <span class="math-container">$C_3$</span> is a triangle, <span class="math-container">$C_4$</span> is a square, and <span class="math-container">$C_5$</span> is a pentagon.</p>
</blockquote>
<p>My efforts: In order to try to figure out a general formula for <span class="math-container">$C_n$</span>, I tried doing this problem with <span class="math-container">$C_5$</span>. After a huge amount of trial and error, it looks like the formula is something like <span class="math-container">$$\binom{15}{n}\binom{n-1}{2}(n-3)!$$</span><br />
However, I can't seem to come up with a good reason for this formula or verify whether it's correct. I'm doubting it is correct.</p>
<p>This is homework, so I'm NOT looking for solutions. Could you give some tips for figuring this out?</p>
| Arturo Magidin | 742 | <p>The binomial coefficient $\binom{15}{n}$ selects the $n$ vertices that will be in the cycle.</p>
<p>I think you would be less confused if you rewrote the other factor: assuming $n\geq 3$,
$$\binom{n-1}{2}(n-3)! = \frac{(n-1)(n-2)}{2}(n-3)! = \frac{(n-1)!}{2}.$$
It's even more suggestive if we write it as
$$\frac{n!}{2\times n}.$$</p>
<p>Think of it as follows: $\binom{15}{n}$ selects the vertices. Now you need to select an order for those vertices so that you get a cycle (this will select the edges). How many ways can you order $n$ vertices? And when will two orderings give you the same cycle? The factor of $n$ and the factor of $2$ represent two different kinds of overcounting.</p>
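<p>The corrected count $\binom{15}{n}\frac{(n-1)!}{2}$ can be verified by brute force for small cases (a Python sketch, not part of the original answer): enumerate vertex orderings and canonicalize each cycle up to rotation and reflection — the two kinds of overcounting described above.</p>

```python
from itertools import combinations, permutations
from math import comb, factorial

def count_cycles(m: int, n: int) -> int:
    """Number of subgraphs of K_m isomorphic to C_n, by brute force."""
    seen = set()
    for verts in combinations(range(m), n):
        for order in permutations(verts):
            # Canonical form of a cycle: minimum over all rotations of the
            # ordering and of its reversal.
            doubled = order + order
            rots = [doubled[i:i + n] for i in range(n)]
            rev = order[::-1] + order[::-1]
            rots += [rev[i:i + n] for i in range(n)]
            seen.add(min(rots))
    return len(seen)

for m, n in [(5, 3), (5, 4), (5, 5), (6, 4)]:
    assert count_cycles(m, n) == comb(m, n) * factorial(n - 1) // 2

print(comb(15, 15) * factorial(14) // 2)  # the K_15 answer: 14!/2
```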
|
1,407,683 | <p>I am new to differential geometry. It is surprising to find that the linear connection is not a tensor, namely, not coordinate-independent. </p>
<p>Can we bypass this ugly object? Only intrinsic quantities should appear in a textbook. </p>
| Anthony Carapetis | 28,513 | <p>Connections are not tensors, but that does not mean they are not coordinate-independent objects! A linear connection is a map sending two vector fields $X,Y$ to another vector field $\nabla_X Y$ which satisfies the rules $\nabla_{fX+Z} Y = f \nabla_X Y + \nabla_Z Y$ and $\nabla_X (fY + Z) = f\nabla_X Y + (\nabla_X f)Y + \nabla_X Z$. This is an abstract definition that makes no reference whatsoever to coordinates.</p>
<p>The <em>connection coefficients</em> or Christoffel symbols $\Gamma^i_{jk} = (\nabla_j\partial_k)^i$ describe how the connection acts on a given coordinate basis. This object $\Gamma$ is not a tensor precisely due to the second rule above: it satisfies a product rule rather than full bilinearity. (Indeed this is necessary if we want something that behaves like a derivative - tensors act pointwise and thus cannot distinguish constants from non-constants!) This does not mean it's not intrinsic - it just means that its components in different coordinate systems are not related in the same way that those of tensors are. </p>
|
1,501,876 | <blockquote>
<p>I want to prove $A_n$ has no subgroups of index 2. </p>
</blockquote>
<p>I know that if there exists such a subgroup $H$ then $\vert H \vert = \frac{n!}{4}$ and that $\vert \frac{A_n}{H} \vert = 2$ but am stuck there. I have tried using the proof that $A_4$ has no subgroup of order 6 to get some ideas but am still stuck. Sorry I don't have much else to add at this point. Thanks a bunch.</p>
| Espen Nielsen | 45,874 | <p>Hint: What do we know about a subgroup of $G$ whose index is the smallest prime dividing $|G|$?</p>
|
2,869,898 | <p>I want to prove that <span class="math-container">$$
f(x,y)=
\begin{cases} \frac{xy^2}{x^2+y^2} &\text{ if }(x,y)\neq (0,0)\\
0 &\text{ if }(x,y)=(0,0)
\end{cases}
$$</span>
is not differentiable at <span class="math-container">$(0,0)$</span>.</p>
<p>I thought that I can prove that it is not continuous around <span class="math-container">$(0,0)$</span> but it certainly is!</p>
<p>So how can I prove that it is not differentiable?</p>
| Kavi Rama Murthy | 142,385 | <p>At the origin the directional derivative in the direction of $(1,1)$ is $\frac 1 2$ whereas the derivative in the direction of $x-$ axis and $y-$axis are $0$. This implies that the derivative does not exist. </p>
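Numerically (an illustration of my own, not part of the argument above), the difference quotients at the origin show the mismatch:

```python
def f(x, y):
    return x * y ** 2 / (x ** 2 + y ** 2) if (x, y) != (0.0, 0.0) else 0.0

t = 1e-6
along_diag = f(t, t) / t     # direction (1,1): f(t,t)/t = 1/2
along_x = f(t, 0.0) / t      # x-axis: 0
along_y = f(0.0, t) / t      # y-axis: 0

assert abs(along_diag - 0.5) < 1e-12
assert along_x == 0.0 and along_y == 0.0
```

If $f$ were differentiable at the origin, the derivative along $(1,1)$ would have to be the sum of the two axis derivatives, $0+0=0$, not $\frac12$.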
|
2,869,898 | <p>I want to prove that <span class="math-container">$$
f(x,y)=
\begin{cases} \frac{xy^2}{x^2+y^2} &\text{ if }(x,y)\neq (0,0)\\
0 &\text{ if }(x,y)=(0,0)
\end{cases}
$$</span>
is not differentiable at <span class="math-container">$(0,0)$</span>.</p>
<p>I thought that I can prove that it is not continuous around <span class="math-container">$(0,0)$</span> but it certainly is!</p>
<p>So how can I prove that it is not differentiable?</p>
| José Carlos Santos | 446,262 | <p>Since $f_x(0,0)=f_y(0,0)=0$, if $f$ was differentiable at $(0,0)$, $f'(0,0)$ would be the null function. Therefore$$\lim_{(x,y)\to(0,0)}\frac{\bigl|f(x,y)-f(0,0)\bigr|}{\sqrt{x^2+y^2}}=0,$$which means that$$\lim_{(x,y)\to(0,0)}\frac{|xy^2|}{(x^2+y^2)^{\frac32}}=0.$$However, this is false. See what happens when $x=y$.</p>
|
2,240,405 | <p>The question asks me to find the Laurent series of $$f(z) = {5z+2e^{3z}\over(z-i)^6}\quad\text{at } z=i$$I know the following $$e^z=\sum_{n=0}^\infty {z^n\over n!}$$ What I want to know, is if I can do this: $$={1\over (z-i)^6}\sum_{n=0}^\infty ({2(3z)^n\over n!}+5z)$$ $$=\sum_{n=0}^{n=6}(z-i)^{n-6}({2(3z)^n\over n!}+5z)+\sum_{n=7}^{\infty}(z-i)^{n-6}({2(3z)^n\over n!}+5z)$$
This is all I could think of possibly doing and I'm not even sure if it is even something that I can even do, and I don't really know how to continue. I also need to find the radius of convergence but I feel once I have the series it will be easy to apply whatever test necessary to acquire it.</p>
| xpaul | 66,420 | <p>Noting
$$ 5z+2e^{3z}=5(z-i)+5i+2e^{3i}e^{3(z-i)}=5(z-i)+5i+2e^{3i}\sum_{k=0}^\infty\frac{1}{k!}(z-i)^k $$
one has
$$ f(z) = {5z+2e^{3z}\over(z-i)^6}={5i\over(z-i)^6}+{5\over(z-i)^5}+2e^{3i}\sum_{k=0}^\infty\frac{1}{k!}(z-i)^{k-6}.$$</p>
|
14,238 | <p>In question #7656, Peter Arndt asked <a href="https://mathoverflow.net/questions/7656/why-does-the-gamma-function-complete-the-riemann-zeta-function">why the Gamma function completes the Riemann zeta function</a> in the sense that it makes the functional equation easy to write down. Several of the answers were from the perspective of Tate's thesis, which I don't really have the background to appreciate yet, so I'm asking for another perspective.</p>
<p>The perspective I want goes something like this: the Riemann zeta function is a product of the local zeta functions of a point over every finite prime $p$, and the Gamma function should therefore be the "local zeta function of a point at the infinite prime."</p>
<p><strong>Question 1:</strong> Can this intuition be made precise without the machinery of Tate's thesis? (It's okay if you think the answer is "no" as long as you convince me why I should try to understand Tate's thesis!)</p>
<p>Multiplying the local zeta functions for the finite and infinite primes together, we get the Xi function, which has the nice functional equation. Now, as I learned from Andreas Holmstrom's <a href="https://mathoverflow.net/questions/2040/why-are-functional-equations-important">excellent answer to my question about functional equations</a>, for the local zeta functions at finite primes the functional equation</p>
<p>$$\zeta(X,n-s) = \pm q^{ \frac{nE}{2} - Es} \zeta(X, s)$$</p>
<p>(notation explained at <a href="http://en.wikipedia.org/wiki/Weil_conjectures#Statement_of_the_Weil_conjectures" rel="nofollow noreferrer">the Wikipedia article</a>), which for a point is just the statement $\frac{1}{1 - p^s} = -p^{-s} \frac{1}{1 - p^{-s}}$, reflects Poincare duality in etale cohomology, and the hope is that the functional equation for the Xi function reflects Poincare duality in some conjectural "arithmetic cohomology theory" for schemes over $\mathbb{Z}$ (or do I mean $\mathbb{F}_1$?).</p>
<p><strong>Question 2:</strong> Can the reflection formula for the Gamma function be interpreted as "Poincare duality" for some cohomology theory of a point "at the infinite prime"? (Is this question as difficult to answer as the more general one about arithmetic cohomology?)</p>
| JBorger | 1,114 | <p>Your questions are a part of what Deninger has been writing about for 20 years. He's proposed a point of view that sort of explains a lot of things about zeta functions. It's important to say that this explanation is more in a theoretical physics way than in a mathematical way, in that, as I understand it, he's predicted lots of new things which he and other people have then gone on to prove using actual mathematics. I guess it's kind of like the yoga surrounding the Weil conjectures before Dwork and Grothendieck made actual cohomology theories that had a chance to do the job (and eventually did). It's pretty clear to me that he's put his finger on something, but we just don't know what yet. </p>
<p>Let me try to say a few things. But I should also say that I never worried too much about the details, because the details he has are about a made up picture, not the real thing. (If he had the real thing, everyone would be out of a job.) So my understanding of the actual mathematics in his papers is pretty limited.</p>
<p>Question 1: He gives some evidence that Euler factors at both finite and infinite places should be seen as zeta-regularized characteristic polynomials. For the usual Gamma function, see (2.1) in [1]. For the Gamma factors of general motives, see (4.1) in [1]. For the Euler factors at the finite places, see (2.3)-(2.7) in [2]. He gives a description that works simultaneously at the finite and infinite places in (0.1) of [2]. Beware that some of this is based on an artificial cohomology theory that is designed to make things uniform over the finite and infinite places. (Indeed, at the risk of speaking for him, probably the whole point was to see what such a uniform cohomology theory would look like, so maybe one day we'll be able to find the real thing.)</p>
<p>Question 2: He expects his cohomology theory to have a Poincare duality which is "compatible with respect to the functional equation". See the remarks and references in [3] between propositions 3.1 and 3.2.</p>
<p>I'd recommend having a look at [3]. It's mainly expository. Also, I remember [4] being a good exposition, but I don't have it in front of me now, so I can't say much. He also reviews things in section 2 of his recent Archive paper [5].</p>
<p>[1] "On the Gamma-factors attached to motives", Invent. Math. 104, pp 245-261</p>
<p>[2] "Local L-factors of motives and regularized determinants", Invent. Math. 107, pp 135-150</p>
<p>[3] "Some analogies between number theory and dynamical systems on foliated spaces", Proceedings of the ICM, Vol. I (Berlin, 1998), pp 163-186</p>
<p>[4] "Evidence for a cohomological approach to analytic number theory", First ECM, Vol. I (Paris, 1992), pp 491-510</p>
<p>[5] "The Hilbert-Polya strategy and height pairings", arxiv.org</p>
|
1,369,990 | <p>I came across a question -
<a href="https://www.hackerrank.com/contests/ode-to-code-finals/challenges/pingu-and-pinglings" rel="nofollow">https://www.hackerrank.com/contests/ode-to-code-finals/challenges/pingu-and-pinglings</a></p>
<p>The question basically asks to generate all combinations of size k and sum up the products of the numbers in all combinations. Is there a general formula to calculate this, as it is quite tough to generate all the possible combinations and operate on them?
For example, for n=3 (number of elements) and k=2,
with the given 3 numbers 4 2 1, the answer will be 14, as:
for k=2 the combinations are {4,2},{4,1},{2,1}, so the answer is (4×2)+(4×1)+(2×1)=8+4+2=14.
I hope I am clear in asking my question.</p>
| Lucian | 93,448 | <p>Are you asking about a more efficient algorithm than merely summing $\displaystyle{n\choose2}=\dfrac{n(n-1)}2$ products? If so, then you can sum only <em>n</em> products, namely $S_2=\dfrac12\cdot\displaystyle\sum_1^na_k(S_1-a_k),$ where $S_1=\displaystyle\sum_1^na_k.$</p>
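A quick implementation (my own sketch) of both the brute force and the linear-time formula, plus — since the linked problem asks about general $k$ — the standard elementary-symmetric-polynomial recurrence, which computes the answer for any $k$ in $O(nk)$ time:

```python
import random
from itertools import combinations

def pair_sum_bruteforce(a):
    return sum(x * y for x, y in combinations(a, 2))

def pair_sum_linear(a):
    s1 = sum(a)
    return sum(x * (s1 - x) for x in a) // 2  # each pair is counted twice

def elementary_symmetric(a, k):
    # e[j] accumulates the degree-j coefficient of prod_i (1 + a_i * t)
    e = [1] + [0] * k
    for x in a:
        for j in range(k, 0, -1):
            e[j] += x * e[j - 1]
    return e[k]

assert pair_sum_bruteforce([4, 2, 1]) == 14
assert pair_sum_linear([4, 2, 1]) == 14
assert elementary_symmetric([4, 2, 1], 2) == 14

for _ in range(200):
    a = [random.randint(-9, 9) for _ in range(random.randint(2, 10))]
    assert pair_sum_bruteforce(a) == pair_sum_linear(a) == elementary_symmetric(a, 2)
```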
|
1,353,498 | <p>The problem is to prove or disprove that there is a noncyclic abelian group of order $51$. </p>
<p>I don't think such a group exists. Here is a brief outline of my proof:</p>
<p>Assume for a contradiction that there exists a noncyclic abelian group of order $51$.</p>
<p>We know that every element (except the identity) has order $3$ or $17$. Assume that $|a|=3$ and $|b|=17$. Then I managed to prove that the subgroups generated by $a$ and $b$ only intersect at the identity element, from which we can show that $ab$ is a generator of the whole group, so it is cyclic. Contradiction.</p>
<p>So every element (except the identity) has the same order $p$, where $p$ is either $3$ or $17$. </p>
<p>If $p=17$, take $a$ not equal to the identity, and take $b$ not in the subgroup generated by $a$. Then we can prove that $a^kb^l$ where $k,l$ are integers between $0$ and $16$ inclusive are distinct, hence the group has more than $51$ elements, contradiction.</p>
<p>If $p=3$, take $a$ not equal to the identity and take $b$ not in the subgroup generated by $a$. Then we can prove that $a^kb^l$ where $k,l$ are integers betwen $0$ and $2$ inclusive are distinct. This subgroup has $9$ elements so we can find $c$ that's not of the form $a^kb^l$. Then we can prove that $a^kb^lc^m$ where $k,l,m$ are integers betwen $0$ and $2$ inclusive are distinct. Then this subgroup has $27$ elements so we can find $d$ that's not of the form $a^kb^lc^m$. Then we prove that $a^kb^lc^md^n$ where $k,l,m,n$ are integers between $0$ and $2$ inclusive are distinct, this being $81$ elements. Contradiction.</p>
| mathcounterexamples.net | 187,663 | <p>You're right.</p>
<p>If you know the theorem that classifies finite abelian groups, then the only possible abelian groups of order $51$ are $\mathbb Z/51 \mathbb Z$ which is cyclic and $\mathbb Z/3 \mathbb Z \times \mathbb Z/17 \mathbb Z$ which is also cyclic because $3$ and $17$ are prime and $\gcd(3,17)=1$ so $\mathbb Z/3 \mathbb Z \times \mathbb Z/17 \mathbb Z \simeq \mathbb Z/51 \mathbb Z$.</p>
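The coprimality step is the Chinese Remainder Theorem in miniature; a short check of my own:

```python
from math import gcd, lcm

assert gcd(3, 17) == 1 and lcm(3, 17) == 51

# k -> (k mod 3, k mod 17) hits all 51 pairs, so Z/3 x Z/17 is cyclic,
# generated by the image of 1
pairs = {(k % 3, k % 17) for k in range(51)}
assert len(pairs) == 51
```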
|
11,090 | <p>In <em>MMA</em> (8.0.0/Linux), I tried to create an animation using the command</p>
<pre><code>Export["s4s5mov.mov", listOfFigures]
</code></pre>
<p>and got the output</p>
<p><img src="https://i.stack.imgur.com/bFWPP.png" alt="enter image description here"></p>
<p>Doing a little research, one can <a href="http://reference.wolfram.com/mathematica/ref/format/QuickTime.html" rel="nofollow noreferrer">read</a> in the Documentation Center that</p>
<p><img src="https://i.stack.imgur.com/woicu.png" alt="enter image description here"></p>
<p>And I was wondering <strong>if there is some way to overcome this limitation <em>within MMA</em></strong>.</p>
<p><strong>EDIT</strong></p>
<p>Here is a sample code of the inverted animation problem:</p>
<pre><code>movingP = Table[Show[
ParametricPlot[{Sin[x], Cos[x]}, {x, 0, 2 \[Pi]}, AxesLabel -> {x, y}],
Graphics[{Red, PointSize[Large],
Point[{Sin[(n \[Pi])/8], Cos[(n \[Pi])/8]}]}]
], {n, 0, 15}]
Export["~/Desktop/point.avi", movingP]
</code></pre>
<p>will produce an avi like this:</p>
<p><img src="https://i.stack.imgur.com/JPkLa.gif" alt="enter image description here"></p>
<p>(The gif has been tampered to look like the avi)</p>
| halirutan | 187 | <p>The function converting strings to integer is <code>FromDigits</code>. It is the counterpart of <code>IntegerString</code> and both functions can be used with whatever basis you like. Therefore, if you want to convert from base 16 you do</p>
<pre><code>FromDigits["6b", 16]
</code></pre>
|
1,912,570 | <p>There is a proof from stein for this assertion,</p>
<p><a href="https://i.stack.imgur.com/fqRKU.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fqRKU.jpg" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/fhZmE.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fhZmE.jpg" alt="enter image description here"></a></p>
<p>My question is why $\sum_{j \in J_1}|Q_j|+\sum_{j \in J_2}|Q_j| \leqslant \sum_{j =1}^\infty|Q_j|$?</p>
<p>I have a feeling they are not equal since the order of addition has been changed, but it is not changed in the way that we can apply the proposition that absolute convergence implies that the order of summation does not matter.</p>
| DanielWainfleet | 254,665 | <p>The proof looks unnecessarily complicated. Let $\mu^o$ denote outer measure. </p>
<p>First. Show that $\mu^o(A)+\mu^o(B)=\mu^o(A\cup B)$ when $A, B$ are non-empty open sets and $d(A,B)>0.$</p>
<p>Second. We have $\mu^o(E_1\cup E_2)\leq \mu^o(E_1)+\mu^o(E_2)$ for any $E_1,E_2.$ And if $\mu^o(E_1)=\infty$ or $\mu^o(E_2)=\infty$ then $\infty=\mu^o(E_1\cup E_2)=\mu^o(E_1)+\mu^o(E_2).$</p>
<p>So it suffices to show $\mu^o(E_1)+\mu^o(E_2)\leq \mu^o(E_1\cup E_2)$ when $\mu^o(E_1)$ and $\mu^o(E_2)$ are finite, $E_1$ and $E_2$ are non-empty, and $d(E_1,E_2)=r>0.$ </p>
<p>For $i\in \{1,2\}$ let $D_i=\cup_{p\in E_i}B(p,r/4)$ where $B(p,r/4)$ is the open ball centered at $p$ with radius $r/4.$ Then $E_1\subset D_1$ and $E_2\subset D_2$ and $d(D_1, D_2)\geq r/4.$</p>
<p>Now for any $e>0$ let $U_e$ be an open set with $U_e\supset (E_1\cup E_2)$ and $\mu^o(U_e)<\mu^o(E_1\cup E_2)+e.$ (Note the strict inequality is possible because $\mu^o(E_1\cup E_2)\leq \mu^o(E_1)+\mu^o(E_2)<\infty.$) Then $E_i\subset U_e\cap D_i$ for $i\in \{1,2\},$ while $U_e\cap D_1, U_e\cap D_2$ are open, with $d(U_e\cap D_1,U_e\cap D_2)\geq r/4>0.$ We have therefore $$\mu^o(E_1)+\mu^o(E_2)\leq \mu^o(U_e\cap D_1)+\mu^o(U_e\cap D_2)=$$ $$=\mu^o((U_e\cap D_1)\cup (U_e\cap D_2))=\mu^o(U_e\cap (D_1\cup D_2))\leq$$ $$\leq \mu^o(U_e)<\mu^o(E_1\cup E_2)+e.$$ Since $e$ can be arbitrarily small we have $\mu^o(E_1)+\mu^o(E_2)\leq \mu^o(E_1\cup E_2).$</p>
|
3,118,462 | <p>Cars arrive according to a Poisson process with rate=2 per hour and trucks arrive according to a Poisson process with rate=1 per hour. They are independent. </p>
<p>What is the probability that <strong>at least</strong> 3 cars arrive before a truck arrives? </p>
<p>My thoughts:
Interarrival of cars A ~ Exp(2 per hour), Interarrival of trucks B ~ Exp(1 per hour). </p>
<p>Probability that <strong>at least</strong> 3 cars arrive before a truck arrives</p>
<p><span class="math-container">$= 1- Pr(B<A) - Pr(A<B)Pr(B<A) - Pr(A<B)Pr(A<B)Pr(B<A)
\\= 1 - (\frac{1}{3})-(\frac{2}{3}\cdot\frac{1}{3})-(\frac{2}{3}\cdot\frac{2}{3}\cdot\frac{1}{3})\\=\frac{8}{27}.$</span> </p>
<p>Is this correct?</p>
| Community | -1 | <p>A few quick notes:</p>
<ul>
<li>1 variable polynomials, can generalize, base 10 multiplication.</li>
<li>as each digit in base ten is between 0 and 9, that's the part of the table we need to know (arguably, with tricks, a lot less).</li>
<li>We group like terms, by the power of the variable (generalizing grouping by powers of 10)</li>
</ul>
<p>with these three, we have enough to write 68 as 6y+8, and 21 as 2y+1 we can then check that (y+7)+4 = y+11. Since y=10 in base 10, we change 11 into y+1, grouping like terms gives us 2y+1 so they do add correctly. Checking the product, we have 4y+28, again y=10 so we have 4y+2y+8, grouping like terms gives, 6y+8 so the product also works. All this took, is keeping like terms together, and multiplication of numbers less than 10.</p>
|
531,342 | <p>Prove or disprove that the greedy algorithm for making change always uses the fewest coins possible when the denominations available are pennies (1-cent coins), nickels (5-cent coins), and quarters (25-cent coins).</p>
<p>Does anyone know how to solve this?</p>
| Emily | 31,475 | <p>Since each coin divides the face value of every larger coin, a single larger coin will always represent an integer multiple of smaller coins.</p>
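To see the claim concretely (my own sketch, comparing greedy against an exhaustive dynamic program):

```python
def greedy(amount, coins=(25, 5, 1)):
    # coins must be listed in decreasing order
    count = 0
    for c in coins:
        count += amount // c
        amount %= c
    return count

def min_coins(amount, coins=(25, 5, 1)):
    # fewest coins for every value 0..amount, by dynamic programming
    best = [0] + [float("inf")] * amount
    for v in range(1, amount + 1):
        best[v] = 1 + min(best[v - c] for c in coins if c <= v)
    return best[amount]

# with pennies, nickels and quarters the greedy count is always optimal
assert all(greedy(a) == min_coins(a) for a in range(1, 200))

# the divisibility structure matters: with denominations {1, 3, 4}
# greedy fails at amount 6 (4+1+1 vs 3+3)
assert greedy(6, (4, 3, 1)) == 3
assert min_coins(6, (4, 3, 1)) == 2
```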
|
3,345,329 | <p>In Bourbaki Lie Groups and Lie Algebras chapter 4-6 the term displacement is used a lot. For example groups generated by displacements. But I can not find a definition of the term displacement given anywhere. I also looked at Humphreys Reflection Groups and Coxeter groups book but I could not find it. Can someone provide a definiton of displacements in the context of reflection groups and root systems? </p>
| Olivier | 381,016 | <p>For definiteness, consider odd <span class="math-container">$n$</span>, say <span class="math-container">$2m+1$</span>.</p>
<p>This is an additional comment more than an answer (the second comment by @LinAlg refers to a Theorem that gives the complete answer anyway), to grasp intuition about the $1/n$ factor.</p>
<p>Start with the simplest case, that of iid uniform random variables.</p>
<p>Then the order statistics of such random variables are well known to be beta random variables, and the median itself (the <span class="math-container">$(m+1)$</span>-th order statistic of <span class="math-container">$2m+1$</span> samples) will be Beta(<span class="math-container">$m+1$</span>, <span class="math-container">$m+1$</span>), the variance of which (check <a href="https://en.wikipedia.org/wiki/Beta_distribution" rel="nofollow noreferrer">wikipedia</a>) is</p>
<p><span class="math-container">$$\frac{(m+1)^2}{(2m+2)^2 (2m+3)}= \frac{1}{4(2m+3)} \sim \frac{1}{8m}$$</span></p>
<p>Now you can map your iid uniform to iid Gaussian using the inverse distribution function (in fact, since the Beta concentrates around 1/2, we will only need the derivative of the function at this point). This way, you can even work out the constant in front of <span class="math-container">$1/m$</span>, but doing this rigorously is perhaps not that easy, I assume.</p>
|
2,961,971 | <blockquote>
<p><span class="math-container">$$\sum_{n=1}^\infty\frac{(2n)!}{2^{2n}(n!)^2}$$</span></p>
</blockquote>
<p>Can I have a hint for whether this series converges or diverges using the comparison tests (direct and limit) or the integral test or the ratio test?</p>
<p>I tried using the ratio test but it failed because I got 1 as the ratio. The integral test seems impossible to use here.</p>
| davidlowryduda | 9,754 | <p>For a more direct approach, you might directly expand the terms as follows:
<span class="math-container">$$\begin{align}
\frac{(2n)!}{4^n (n!)^2} &= \frac{1}{4^n}\frac{2n(2n-1)}{n^2}\frac{(2n-2)(2n-3)}{(n-1)^2}\cdots\frac{(4)(3)}{2^2} \frac{(2)(1)}{1^2} \\
&= \frac{2^n}{4^n}\frac{2n-1}{n} \frac{2n-3}{n-1}\cdots\frac{3}{2} \frac{1}{1} \\
&= \frac{4^n}{4^n} \frac{n - 1/2}{n} \frac{n - 3/2}{n-1} \cdots \frac{3/2}{2} \frac{1/2}{1}.
\end{align}$$</span>
This is almost a telescoping product. By subtracting <span class="math-container">$1/2$</span> from each numerator (except the last), we get a smaller term that does telescope. Thus
<span class="math-container">$$
\frac{(2n)!}{4^n (n!)^2}
\geq
\frac{n-1}{n}\frac{n-2}{n-1} \cdots \frac{1}{2} \cdot (1/2) = \frac{1}{2n}.
$$</span>
Thus the <span class="math-container">$n$</span>th term of your series is bigger than <span class="math-container">$1/2n$</span>, and diverges by comparison with the harmonic series
<span class="math-container">$$ \sum_{n \geq 1} \frac{1}{n}.$$</span></p>
|
16,795 | <p>Consider a finite simple graph $G$ with $n$ vertices, presented in two different but equivalent ways:</p>
<ol>
<li>as a logical formula $\Phi= \bigwedge_{i,j\in[n]} \neg_{ij}\ Rx_ix_j$ with $\neg_{ij} = \neg$ or $ \neg\neg$ </li>
<li>as an (unordered) set $\Gamma = \lbrace [n],R \subseteq [n]^2\rbrace$ </li>
</ol>
<p>In each case the complement $G'$ of $G$ is easily presented and is of course <em>not</em> isomorphic to $G$ (in the usual sense) generally:</p>
<ol>
<li>$ \Phi' = \bigwedge_ {i,j} \neg \neg_{ij}\ R x_i x_j $ </li>
<li>$\Gamma' = \lbrace [n],[n]^2 \setminus R\rbrace$</li>
</ol>
<p>Let's state for the moment that the presentation as a logical formula is the more "flexible" one: we can easily omit single literals, leaving it open whether $Rx_ix_j$ or not. But this can be mimicked for set presentation by making it from a pair to a triple $\lbrace[n],R,\neg R \subseteq [n]^2 \setminus R\rbrace$. </p>
<p>Let's call a presentation <em>complete</em>, if it leaves nothing open, i.e. no omitted literal and $\neg R = [n]^2 \setminus R$, resp.</p>
<p>Now, let a graph be given in complete set presentation $\lbrace[n],R,\neg R = [n]^2 \setminus R\rbrace$. Since order in this set should not matter, any sensible definition of "graph isomorphism" should make any graph isomorphic to its complement.</p>
<blockquote>
<p>Where and how do I run into trouble when I
assume - following this line of
reasoning, contrary to the usual line of thinking - that every (finite) graph is
isomorphic to its complement?</p>
</blockquote>
| Mariano Suárez-Álvarez | 1,409 | <p>The usual definition of graph isomorphism implies that in general a graph is not isomorphic to its complement, and it is generally agreed that that definition is sensible. So the claim in your second to last paragraph is false.</p>
|
3,383,687 | <p>I'm interested in ideas for improving and fixing the proof I wrote for the following theorem:</p>
<blockquote>
<p>Let <span class="math-container">$f \colon \mathbb{R}^n \to \mathbb{R} $</span> be differentiable, and <span class="math-container">$ \lim_{\| x \| \to \infty} f(x) = 0 $</span>. Then <span class="math-container">$\nabla f(x) = 0 $</span> for some <span class="math-container">$x \in \mathbb{R}^n$</span>.</p>
</blockquote>
<p>Here's the idea of the proof. First, since <span class="math-container">$f$</span> is differentiable, it is continuous.</p>
<p>As <span class="math-container">$ \lim_{\| x \| \to \infty} f(x) = 0 $</span>, <span class="math-container">$\forall \varepsilon > 0, \exists r \in \mathbb{R} : |f(x) - 0| < \varepsilon$</span> whenever <span class="math-container">$\| x \| > r$</span>.</p>
<p>If we choose <span class="math-container">$D = \{x \in \mathbb{R}^n : \| x \| \leq r \}$</span>, we can use the theorem that states that all continuous functions are bounded inside closed sets. In other words, there's a supremum of <span class="math-container">$|f(x)|$</span> in <span class="math-container">$D$</span>.</p>
<p>Then we just look at the cases: if <span class="math-container">$f(x) = 0$</span>, so its gradient is always 0 and we're done.</p>
<p>If <span class="math-container">$f$</span> varies in the set <span class="math-container">$D$</span>, there exist <span class="math-container">$a,b \in D$</span> such that <span class="math-container">$f(a) \neq f(b)$</span>, ie. <span class="math-container">$\exists \varepsilon_2 > 0$</span> so <span class="math-container">$| f(a) - f(b) | > \varepsilon_2 $</span>.</p>
<p>If we choose <span class="math-container">$\varepsilon_2 > \varepsilon$</span>, <span class="math-container">$|f(x)|$</span> attains greater values in <span class="math-container">$D$</span> than outside it, and if we choose <span class="math-container">$c$</span> to be a point such that <span class="math-container">$$f(c) = \sup{\{f(x) : x \in D\}}$$</span> Then <span class="math-container">$|f(x)| \leq |f(c)|\quad \forall x \in D$</span> and as <span class="math-container">$f$</span> is differentiable, <span class="math-container">$\nabla f(c) = 0$</span>.</p>
<p>There are more than a few issues I have with the formulation of the proof. First, "<span class="math-container">$f$</span> attains greater values in <span class="math-container">$D$</span> than outside it" seems a little ambiguous. Then the choosing of <span class="math-container">$c$</span> in a convenient way after having talked about it at such length... Additionally, I'd like to use the definition of differentiability that states that if <span class="math-container">$f$</span> is differentiable, it can be represented as </p>
<p><span class="math-container">$$f(x_0+h) = f(x_0) + Df(x_0)h + \varepsilon(h)\| h \|,\quad h \in \mathbb{R}^n $$</span> </p>
<p>where <span class="math-container">$\varepsilon(h)\| h \| \to 0$</span> as <span class="math-container">$\| h \| \to 0$</span>, and where <span class="math-container">$Df(x)$</span> is the gradient in this case, or the Jacobian in a more general case. I'm almost certain you could bound the gradient <span class="math-container">$Df(c)$</span> to <span class="math-container">$0$</span> somehow using that definition, because it gives you a semi-explicit expression, instead of the verbal hand-waving I'm facing.</p>
<p>There might've also been a method much simpler than this, but I couldn't exactly employ the mean value theorem easily here with the whole open domain. Maybe using the <span class="math-container">$D$</span> I defined there would've worked.</p>
| Lázaro Albuquerque | 85,896 | <p>I'll assume <span class="math-container">$f$</span> is bounded since you seem to get that part.</p>
<p>Let <span class="math-container">$\alpha = \inf_{x \in \mathbb{R}^n} f(x)$</span> and <span class="math-container">$\beta = \sup_{x \in \mathbb{R}^n} f(x)$</span>. </p>
<p>Then there are sequences <span class="math-container">$\{x_k\}$</span> and <span class="math-container">$\{y_k\}$</span> in <span class="math-container">$\mathbb{R}^n$</span> such that <span class="math-container">$\alpha = \lim_{k \rightarrow \infty} f(x_k)$</span> and <span class="math-container">$\beta = \lim_{k \rightarrow \infty} f(y_k)$</span>.</p>
<p>If both <span class="math-container">$x_k$</span> and <span class="math-container">$y_k$</span> are unbounded, then <span class="math-container">$\alpha = \beta = 0$</span>, so that <span class="math-container">$f=0$</span>.</p>
<p>Suppose <span class="math-container">$x_k$</span> bounded. Then it has a convergent subsequence <span class="math-container">$x_{k_p} \rightarrow x_{\alpha}$</span>. Hence, by continuity, <span class="math-container">$\alpha = f(x_{\alpha})$</span>.</p>
<p>If <span class="math-container">$y_k$</span> is bounded we conclude, analogously, that <span class="math-container">$\beta = f(x_{\beta})$</span>.</p>
<p>Therefore, <span class="math-container">$f$</span> always has a global minimum/maximum and at this point, <span class="math-container">$\nabla f$</span> is zero.</p>
|
382,526 | <p>I can't calculate the Integral:</p>
<p>$$
\int_{0}^{1}\frac{\sqrt{x}}{\sqrt{1-x^{6}}}dx
$$</p>
<p>any help would be great!</p>
<p>p.s I know it converges, I want to calculate it.</p>
| xpaul | 66,420 | <p>Use the substitution $u=x^6$ and the Beta function:
$$\int_0^1\frac{\sqrt{x}}{\sqrt{1-x^6}}dx=\frac{1}{6}\int_0^1u^{-\frac{3}{4}}(1-u)^{-\frac{1}{2}}du=\frac{1}{6}\int_0^1u^{\frac{1}{4}-1}(1-u)^{\frac{1}{2}-1}du=\frac{1}{6}B\left(\tfrac{1}{4},\tfrac{1}{2}\right)=\frac{1}{6}\frac{\Gamma(\frac{1}{4})\Gamma(\frac{1}{2})}{\Gamma(\frac{3}{4})}$$</p>
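Numerically (my own check with a crude midpoint rule; the integrand has an integrable square-root singularity at $x=1$, so the tolerance is loose):

```python
from math import gamma, sqrt

closed_form = gamma(0.25) * gamma(0.5) / (6 * gamma(0.75))  # ~0.8740

def integrand(x):
    return sqrt(x) / sqrt(1 - x ** 6)

N = 200_000
h = 1.0 / N
numeric = h * sum(integrand((k + 0.5) * h) for k in range(N))

assert abs(numeric - closed_form) < 1e-2
```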
|
435,936 | <p>Does anyone know when
$x^2-dy^2=k$ is solvable in $\mathbb{Z}_n$ with $(n,k)=1$ and $(n,d)=1$?
I'm interested in the case $n=p^t$.</p>
| hot_queen | 72,316 | <p>Yes. For example, divide $\mathbb{N}$ into intervals $[n_k, n_{k+1})$ where $n_k$ is sufficiently fast growing and take the union of every other interval.</p>
|
2,032,241 | <p>In Euler's (number theory) theorem one line reads: since $d|ai$ and $d|n$ and $gcd(a,n)=1$ then $d|i$. I've been staring at this for over an hour and I am not convinced why this is true could anyone explain why? I have tried all sorts of lemma's I've seen before but I honestly just can't see it and I feel I'm going round in circles. Could someone just explain to me why it is making me feel stupid. Here is the full proof for context and I highlighted the lines I don't get. Thanks! (Also hcf=gcd as I know that confuses some people.)</p>
<p><a href="https://i.stack.imgur.com/JEcNA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JEcNA.png" alt="enter image description here"></a></p>
| Joffan | 206,402 | <p>Since $d \mid s_i$ and $d \mid n$, clearly $d \mid (s_i+An)$. And $s_i+An = ai$, so $d \mid ai$. (maybe you understood this, but it was highlighted).</p>
<p>So $d\mid ai$ and $d\mid n$. Now since $\gcd(a,n)=1$, $a$ and $n$ have no common factors; and because $d\mid n$, any common factor of $a$ and $d$ would also be a common factor of $a$ and $n$. That means $\gcd(d,a)=1$. That lets us step from $d\mid ai$ to $d\mid i$ (Euclid's lemma).</p>
|
972,281 | <p>I have to find the inverse laplace transform of:</p>
<p>$\mathcal{L}^{-1}(\frac{s}{-8+2s+s^2})$</p>
<p>I found it was </p>
<p>$\frac{2}{3}e^{-4t}+\frac{1}{3}e^{2t}$</p>
<p>But the question I'm asked is, determine $A,B,C,D$ such that $e^{At}(Bcosh(Ct)+Dsinh(Ct))$ is a solution of the inverse laplace transform.</p>
<p>I have no idea how to proceed. I've tried multiple things to no avail.</p>
<p>Any help will be greatly appreciated. Thanks.</p>
<p>Edit: I know hyperbolic trigs can be rewritten in terms of exponentials, but I can't figure out how to use this to my advantage.</p>
| MPW | 113,214 | <p>Use the fact that
$$e^{At}(B\cosh Ct + D\sinh Ct) = \tfrac{B+D}{2}e^{(A+C)t} + \tfrac{B-D}{2}e^{(A-C)t}
$$
Compare this to your expression. The coefficients give you
$$\left\{
\begin{array}{cc}\tfrac{B+D}{2}=\tfrac23 \\
\tfrac{B-D}{2}=\tfrac13
\end{array}\right.
$$
The exponents give you
$$\left\{
\begin{array}{cc}A+C=-4 \\
A-C=2
\end{array}\right.
$$
From these systems, you can easily see that
$$A=-1,B=1, C=-3, D=\tfrac13$$
so that you can write your solution as
$$e^{-t}(\cosh 3t - \tfrac13\sinh 3t)
$$</p>
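The two forms agree identically in $t$, which is easy to spot-check (a small verification of my own):

```python
from math import exp, cosh, sinh, isclose

def via_exponentials(t):
    return (2 / 3) * exp(-4 * t) + (1 / 3) * exp(2 * t)

def via_hyperbolics(t):
    A, B, C, D = -1, 1, -3, 1 / 3
    return exp(A * t) * (B * cosh(C * t) + D * sinh(C * t))

for t in (0.0, 0.25, 1.0, 2.5, -1.0):
    assert isclose(via_exponentials(t), via_hyperbolics(t), rel_tol=1e-12)
```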
|
24,593 | <p>Traditionally, I have always taught evaluating expressions before teaching linear equations. But, I was recently given a remedial class of students that have to cover the bare minimums (and we have until mid-December to finish). Luckily, I have great flexibility with what I can do to the syllabus, so for the first time ever, I have completely cut out evaluating expressions since they won't even be tested on this on the final exam.</p>
<p>My question is more if anyone else has done this, or thinks this is not a good way to go. Most of my students in that particular class have ZERO to little formal math background, a lot of them did not even finish high school, and they barely get by with mean, median, mode, rounding, etc. I started equations with them today, and they seemed "fine" for the most part. Of course, I also have spent the past week emphasizing positive and negative integer operations, so they are pretty OK with that so far. The textbook itself does not cover linear equations until after the section on evaluating expressions.</p>
<p>EDIT: Upon request for "non-native" English speakers:</p>
<p>Evaluating expressions simply means in the US to plug in numbers for the given variable values of the algebraic expression. Thus, for example, an exercise would be:</p>
<p>"Evaluate a + b + c" if a = 1, b = 2, c = 3."</p>
<p>The expressions can be as simple as that, or very much more complicated/interesting/beautiful. But, you get the picture.</p>
<p>Linear equations simply mean basic equations where you solve for an unknown variable.</p>
<p>Example: x + 5 = 10. What is x? Or 2x + 20 = 40, what is x? Etc.</p>
| Steven Gubkin | 117 | <p>I hope this does not come across as overly harsh: I do not think that thinking of teaching as "covering material" in a particular order is a useful framework.</p>
<p>If your students are solving equations like <span class="math-container">$3x+4 = 19$</span>, but are unable to evaluate <span class="math-container">$3x+4$</span> at <span class="math-container">$x = 5$</span> to check to see if they are correct (or, if they get an incorrect answer, to see for themselves that their answer is incorrect), then they are not doing anything of intellectual value. It is impossible to "solve" the equation without understanding how to evaluate the expression on the lefthand side.</p>
<p>One can manipulate the equation according to some rules, obtain <span class="math-container">$x=5$</span>, and circle it, but this would just be mimicry of mathematics, not the genuine article.</p>
<p>So my answer is: you do not need to specifically devote a day to "evaluating expressions", but you had better be sure that the students do achieve this outcome in tandem with solving equations. If you do not, then all of the work you have done will be meaningless.</p>
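<p>As a concrete illustration (a Python sketch; the helper name is my own), solving <span class="math-container">$3x+4=19$</span> and then checking the answer is itself an evaluation of the expression:</p>

```python
# Solving 3x + 4 = 19 by undoing the operations, then *evaluating*
# the left-hand side at the answer to verify it.

def solve_linear(a, b, c):
    """Solve a*x + b = c for x."""
    return (c - b) / a

x = solve_linear(3, 4, 19)   # 5.0

# The check is itself an evaluation of the expression 3x + 4 at x = 5:
assert 3 * x + 4 == 19
print(x)
```

<p>A student who cannot carry out the last step has no way of knowing whether the circled answer is right.</p>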
|
3,933,069 | <p>Given
<span class="math-container">$f(x+1)+f(x-1)=x^2$</span></p>
<p>I have substituted <span class="math-container">$(a=x+1)$</span> and <span class="math-container">$(a=x-1)$</span> and got
<span class="math-container">$$f(x)+f(x-2)=(x-1)^2 \text{ and } f(x+2)+f(x)=(x+1)^2$$</span>
Combining those equations, I got</p>
<p><span class="math-container">$$f(x+2)-f(x-2)=4x$$</span></p>
<p>I could not even find <span class="math-container">$f(x)$</span></p>
<p>Please help me</p>
| Hagen von Eitzen | 39,174 | <p>Note that <span class="math-container">$f(x)=1$</span> leads to <span class="math-container">$f(x+1)+f(x-1)=2$</span>,
<span class="math-container">$f(x)=x$</span> leads to <span class="math-container">$f(x+1)+f(x-1)=2x$</span>,
<span class="math-container">$f(x)=x^2$</span> leads to <span class="math-container">$f(x+1)+f(x-1)=2x^2+2$</span>. In order to obtain <span class="math-container">$x^2$</span> on the right hand side, we might therefore combine
<span class="math-container">$$ f(x)=\frac12x^2-\frac12$$</span>
and verify that this indeed makes
<span class="math-container">$$ f(x+1)+f(x-1)=x^2.$$</span>
However, <span class="math-container">$f$</span> is not unique. For example,
<span class="math-container">$$ f(x)=\frac12x^2-\frac12+\sin\frac{\pi x}2$$</span>
is another solution.</p>
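<p>For anyone who wants to see this concretely, a quick numeric spot-check in Python (the sample points are arbitrary):</p>

```python
import math

# Spot-check that f(x) = x^2/2 - 1/2 + sin(pi*x/2)
# satisfies f(x+1) + f(x-1) = x^2 at a few sample points.

def f(x):
    return 0.5 * x**2 - 0.5 + math.sin(math.pi * x / 2)

for x in (-3.7, -1.0, 0.0, 0.25, 2.0, 10.5):
    assert abs(f(x + 1) + f(x - 1) - x**2) < 1e-9
print("f(x+1) + f(x-1) = x^2 holds at all sample points")
```

<p>The sine terms cancel because <span class="math-container">$\sin\frac{\pi(x+1)}2+\sin\frac{\pi(x-1)}2=2\sin\frac{\pi x}2\cos\frac\pi2=0$</span>.</p>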
|
3,933,069 | <p>Given
<span class="math-container">$f(x+1)+f(x-1)=x^2$</span></p>
<p>I have substituted <span class="math-container">$(a=x+1)$</span> and <span class="math-container">$(a=x-1)$</span> and got
<span class="math-container">$$f(x)+f(x-2)=(x-1)^2 \text{ and } f(x+2)+f(x)=(x+1)^2$$</span>
Combining those equations, I got</p>
<p><span class="math-container">$$f(x+2)-f(x-2)=4x$$</span></p>
<p>I could not even find <span class="math-container">$f(x)$</span></p>
<p>Please help me</p>
| lab bhattacharjee | 33,337 | <p>HINT:</p>
<p>WLOG let <span class="math-container">$$f(x)=g(x)+\sum_{r=0}^n a_r x^r$$</span></p>
<p><span class="math-container">$$x^2=g(x-1)+g(x+1)+2a_0+2a_1x+a_2((x+1)^2+(x-1)^2)+\cdots$$</span></p>
<p>Set <span class="math-container">$a_r=0$</span> for <span class="math-container">$r\ge3$</span></p>
<p><span class="math-container">$$x^2=g(x-1)+g(x+1)+2a_0+2a_1x+2a_2(x^2+1)$$</span></p>
<p>Compare the coefficients of <span class="math-container">$x$</span> to find <span class="math-container">$a_1=0$</span></p>
<p>Comparing those of <span class="math-container">$x^2$</span>: <span class="math-container">$1=2a_2$</span></p>
<p>those of constants, <span class="math-container">$0=a_2+a_0\iff a_2=-a_0$</span> so that</p>
<p><span class="math-container">$$f(x)=g(x)+\dfrac{x^2-1}2\text{ and } g(x-1)+g(x+1)=0\iff g(x+1)=-g(x-1),\text{ so } g(x+3)=-g(x+1)=g(x-1)$$</span></p>
<p>So, <span class="math-container">$g(x)$</span> could be any periodic function with period <span class="math-container">$=4$</span></p>
|
3,040,110 | <p>What is the Range of <span class="math-container">$5|\sin x|+12|\cos x|$</span> ?</p>
<p>I entered the value in desmos.com and getting the range as <span class="math-container">$[5,13]$</span>.</p>
<p>Using <span class="math-container">$\sqrt{5^2+12^2} =13$</span>, I am able to get the maximum value but not able to find the minimum.</p>
| Ross Millikan | 1,827 | <p>The four quadrants give the four combinations of signs of <span class="math-container">$\sin$</span> and <span class="math-container">$\cos$</span>. Let us work initially in the first quadrant, where both functions are positive. We can then remove the absolute value signs, take a derivative, and set to zero.
<span class="math-container">$$\frac d{dx}(5 \sin x + 12 \cos x)=5\cos x - 12 \sin x$$</span>
This is zero when <span class="math-container">$\tan x=\frac 5{12}$</span>, giving the maximum you found. The minimum must then come at one end of the interval, and if you check <span class="math-container">$x=0, \frac \pi 2$</span> you find the minimum at <span class="math-container">$\frac \pi 2$</span>, which is <span class="math-container">$5$</span>. You can do the same in the other three quadrants, flipping the signs of <span class="math-container">$\sin x$</span> and <span class="math-container">$\cos x$</span> as required, and find that the minimum is <span class="math-container">$5$</span> again.</p>
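<p>A quick numerical sanity check of the claimed range (a Python sketch; the grid density is arbitrary):</p>

```python
import math

# Sample g(x) = 5|sin x| + 12|cos x| densely over one period and
# confirm the extreme values are close to 5 and 13.

def g(x):
    return 5 * abs(math.sin(x)) + 12 * abs(math.cos(x))

samples = [g(2 * math.pi * k / 100_000) for k in range(100_000)]
print(min(samples), max(samples))   # approximately 5 and 13
```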
|
646,183 | <p>I am not familiar with the theory of Lie groups, so I am having a hard time finding all the connected closed real Lie subgroups of $\mathrm{SL}(2, \mathbb{C})$ up to conjugation.</p>
<p>One can find the real and complex parabolic, elliptic, and hyperbolic subgroups, $\mathrm{SU}(2)$, $\mathrm{SU}(1,1)$ and $\mathrm{SL}(2,\mathbb{R})$ (the last two are isomorphic, though), the subgroup of real upper triangular matrices, the subgroup of upper triangular matrices with unitary diagonal coefficients, and the subgroup of complex triangular matrices. </p>
<p>Are there any other ones?</p>
| Peter Crooks | 101,240 | <p>Let $G$ be a Lie group with Lie algebra $\mathfrak{g}$. There is a bijective correspondence between the connected closed subgroups $H$ of $G$ and the Lie subalgebras $\mathfrak{h}$ of $\mathfrak{g}$. The correspondence associates to $H$ its Lie algebra, and to $\mathfrak{h}$ the closure of the image of $\mathfrak{h}$ under the exponential map. So, the exercise is to classify all (real) Lie subalgebras of $\mathfrak{sl}_2(\mathbb{C})$. I think you might find this problem to be more tractable.</p>
|
1,393,154 | <p><span class="math-container">$4n$</span> to the power of <span class="math-container">$3$</span> over <span class="math-container">$2 = 8$</span> to the power of negative <span class="math-container">$1$</span> over <span class="math-container">$3$</span></p>
<p>Written Differently for Clarity:</p>
<p><span class="math-container">$$(4n)^\frac{3}{2} = (8)^{-\frac{1}{3}}$$</span></p>
<hr />
<blockquote>
<p><strong>EDIT</strong></p>
<p>Actually, the problem should be solving <span class="math-container">$4n^{\frac{3}{2}} = 8^{-\frac{1}{3}}$</span>. Another user edited this question for clarity, but they edited it incorrectly to add parentheses around the right hand side, as can be seen above.</p>
</blockquote>
| Wojciech Karwacki | 242,866 | <p>$4n^{\frac{3}{2}}=8^{-\frac{1}{3}} \iff 4n^{\frac{3}{2}}=\frac{1}{2} \iff n^{\frac{3}{2}}=\frac{1}{8} \iff n^3=\frac{1}{64} \iff n= \frac{1}{4}$</p>
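<p>A quick numeric check in Python that both sides really agree at <span class="math-container">$n=\frac14$</span>:</p>

```python
# With n = 1/4, both sides of 4*n^(3/2) = 8^(-1/3) should come out to 1/2.
n = 1 / 4
lhs = 4 * n ** 1.5
rhs = 8 ** (-1 / 3)
print(lhs, rhs)   # both approximately 0.5
assert abs(lhs - rhs) < 1e-12
```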
|
1,393,154 | <p><span class="math-container">$4n$</span> to the power of <span class="math-container">$3$</span> over <span class="math-container">$2 = 8$</span> to the power of negative <span class="math-container">$1$</span> over <span class="math-container">$3$</span></p>
<p>Written Differently for Clarity:</p>
<p><span class="math-container">$$(4n)^\frac{3}{2} = (8)^{-\frac{1}{3}}$$</span></p>
<hr />
<blockquote>
<p><strong>EDIT</strong></p>
<p>Actually, the problem should be solving <span class="math-container">$4n^{\frac{3}{2}} = 8^{-\frac{1}{3}}$</span>. Another user edited this question for clarity, but they edited it incorrectly to add parentheses around the right hand side, as can be seen above.</p>
</blockquote>
| Taylor Ted | 225,132 | <p>We have $$(4n)^\frac{3}{2} = (8)^{-\frac{1}{3}}$$</p>
<p>So we write this as $4^{3/2} n^{3/2} = (2^{3})^{-1/3}$.</p>
<p>Rewriting it as</p>
<p>$(2^{2})^{3/2} n^{3/2}= 2^{-1},$</p>
<p>we get</p>
<p>$8n^{3/2}=1/2,$</p>
<p>so $n^{3/2}=1/16$. Now, squaring both sides, we get</p>
<p>$n^{3}= (1/16)(1/16) = 1/256,$ so $n = (1/256)^{1/3} = 2^{-8/3}$.</p>
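<p>A numeric check (Python) that the resulting <span class="math-container">$n$</span> satisfies the equation under the <span class="math-container">$(4n)^{3/2}$</span> reading used here:</p>

```python
# Under the (4n)^(3/2) reading, n^3 = 1/256, i.e. n = 256^(-1/3).
# Check that this n satisfies the original equation numerically.
n = 256 ** (-1 / 3)
lhs = (4 * n) ** 1.5
rhs = 8 ** (-1 / 3)
print(lhs, rhs)   # both approximately 0.5
assert abs(lhs - rhs) < 1e-9
```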
|