2,538,521 | <blockquote>
<p>Let $x$ be a function in $C^1(I,\mathbb{R})$, where $I\subset \mathbb{R}$, such that $$x'(t)\leq a(t) x(t)+b(t),$$ where $a$ and $b$ are continuous functions from $I$ to $\mathbb{R}$. Then
$$ x(t)\leq x(t_0) \exp\left(\int_{t_0}^{t}a(s)ds\right)+\int_{t_0}^{t}\exp\left(\int_{s}^t a(\sigma)d\sigma\right)b(s)ds$$</p>
</blockquote>
<p>How can one prove this proposition?</p>
<p>Thank you</p>
| Diesirae92 | 289,721 | <p>You can solve the differential equation </p>
<p>$x′(t)+d(t)= a(t)x(t)+b(t)$</p>
<p>for some $d\geq 0$. Once you have the explicit solution (e.g. by the integrating-factor method), note that the contribution of $d$ to the solution is nonpositive, so dropping it yields the stated inequality.</p>
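For completeness, here is the integrating-factor computation this hint points at (my sketch, not part of the original answer). Write $x'(t) = a(t)x(t) + b(t) - d(t)$ with $d \ge 0$:

```latex
\begin{align*}
\frac{d}{dt}\Bigl(x(t)\,e^{-\int_{t_0}^{t}a(s)\,ds}\Bigr)
  &= \bigl(x'(t)-a(t)x(t)\bigr)e^{-\int_{t_0}^{t}a(s)\,ds}
   = \bigl(b(t)-d(t)\bigr)e^{-\int_{t_0}^{t}a(s)\,ds},\\
x(t) &= x(t_0)\,e^{\int_{t_0}^{t}a(s)\,ds}
   + \int_{t_0}^{t}e^{\int_{s}^{t}a(\sigma)\,d\sigma}\bigl(b(s)-d(s)\bigr)\,ds.
\end{align*}
```

Since $d\ge 0$ and the exponential factors are positive, dropping the $d$-terms can only increase the right-hand side, which is exactly the stated inequality.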
|
24,876 | <p>As it is possible to see the last time when you or others visited M.SE, I wonder if one can see statistics of visits of your own or of a specific user for a period of time (last year, let's say).</p>
| quid | 85,306 | <p>A partial answer:</p>
<p>For your own account this data is available in detail and nicely presented: <a href="https://math.stackexchange.com/users/current?tab=profile">go to your user profile</a>, then clicking on "visited {number} days, {othernumber} consecutive" will give you a calendar marking each and every day you visited. </p>
|
450,785 | <p>I want to obtain the formula for binomial coefficients in the following way: elementary ring theory shows that $(X+1)^n\in\mathbb Z[X]$ is a degree $n$ polynomial, for all $n\geq0$, so we can write</p>
<p>$$(X+1)^n=\sum_{k=0}^na_{n,k}X^k\,,\ \style{font-family:inherit;}{\text{with}}\ \ a_{n,k}\in\mathbb Z\,.$$</p>
<p>Using the fact that $(X+1)^n=(X+1)^{n-1}(X+1)$ for $n\geq1$ and the definition of product of polynomials, we obtain the following recurrence relation for all $n\geq1$:</p>
<p>$$a_{n,0}=a_{n,n}=1;\ a_{n,k}=a_{n-1,k}+a_{n-1,k-1}\,,\ \style{font-family:inherit;}{\text{for}}\ k=1,\dots,n-1\,.$$</p>
<p>I want to know if there is a way to manipulate this recurrence in order to obtain directly the values of the coefficients $a_{n,k}$, namely $a_{n,k}=\binom nk=\frac{n!}{k!(n-k)!}$. </p>
<p>Note that the usual approach via generating functions definitely will not work, at least <strong>in the spirit of my question</strong>, because this method only works when we do know in advance the coefficients of the generating function (either by the "number of $k$-subsets" argument, or Maclaurin series for $(X+1)^n$, or anything else) and this is <em>precisely</em> what I intend to avoid.</p>
<p>This question is closely related to a recent <a href="https://math.stackexchange.com/questions/449834/explicit-formula-for-bernoulli-numbers-by-using-only-the-recurrence-relation">question of mine</a>. Actually the same question, with Bernoulli numbers instead of binomial coefficients.</p>
<p><strong>EDIT</strong></p>
<p>I do not consider as a valid manipulation the following "magical" argument: 'the sequence $(b_{n,k})$ given by $b_{n,k}=\frac{n!}{k!(n-k)!}$ obeys the same recurrence and initial conditions as $(a_{n,k})$, so $a_{n,k}=b_{n,k}$ for all $n,k$. How did you obtain the formula for the $b_{n,k}$ at the first place? Okay, you can go through the "counting subsets" argument, but this is precisely what I don't want to do. The same applies to my related question about Bernoulli numbers.</p>
| user26872 | 26,872 | <p>A simple-minded approach is to solve the two variable recurrence relation iteratively, that is, knowing $a_{n,0}$ find $a_{n,1}$, then $a_{n,2}$, etc.</p>
<p>We must have<br>
$$\begin{eqnarray*}
a_{n,1} &=& a_{n-1,1}+a_{n-1,0} \\
&=& a_{n-1,1}+1,
\qquad a_{1,1}=1.
\end{eqnarray*}$$
This is a one variable recurrence relation of the form
$$b_n = b_{n-1}+1, \qquad b_1 = 1.$$
This can be solved by the usual methods.
We find
$a_{n,1} = n.$</p>
<p>Next we have
$$\begin{eqnarray*}
a_{n,2} &=& a_{n-1,2}+a_{n-1,1} \\
&=& a_{n-1,2}+n-1,
\qquad a_{2,2}=1.
\end{eqnarray*}$$
This is another simple recurrence relation.
We find
$a_{n,2} = n(n-1)/2.$</p>
<p>At the next step,
$$\begin{eqnarray*}
a_{n,3} &=& a_{n-1,3}+a_{n-1,2} \\
&=& a_{n-1,3} + \frac{1}{2}(n-1)(n-2),
\qquad a_{3,3}=1.
\end{eqnarray*}$$
This implies
$a_{n,3} = n(n-1)(n-2)/6.$</p>
<p>This process can be repeated to build up $a_{n,k}$ for any $k$.
At some point a pattern will be noticed and the principle of induction can be applied.</p>
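The iterative process described above is easy to mechanize; here is a small Python sketch (my addition, not part of the original answer) that builds the $a_{n,k}$ from the recurrence alone and compares them with $\frac{n!}{k!(n-k)!}$:

```python
from math import factorial

# Build a[n][k] purely from the recurrence a[n][0] = a[n][n] = 1,
# a[n][k] = a[n-1][k] + a[n-1][k-1], then compare with n!/(k!(n-k)!).
N = 10
a = [[1]]
for n in range(1, N + 1):
    row = [1] + [a[n - 1][k] + a[n - 1][k - 1] for k in range(1, n)] + [1]
    a.append(row)

for n in range(N + 1):
    for k in range(n + 1):
        assert a[n][k] == factorial(n) // (factorial(k) * factorial(n - k))
print(a[4])  # [1, 4, 6, 4, 1]
```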
|
2,634,791 | <blockquote>
<p>How can I show that the map $f: GL_n(\mathbb R)\to GL_n(\mathbb R)$ defined by $f(A)=A^{-1}$ is continuous?</p>
</blockquote>
<p>The space $GL_n(\mathbb R)$ is given the operator norm and so I want to show for all $\epsilon$ there exists $\delta$ such that $\|A-B\|<\delta \implies \|A^{-1}-B^{-1}\|<\epsilon$.</p>
| Martín-Blas Pérez Pinilla | 98,199 | <p>In fact, it is true in any unital Banach algebra. See Lemma 5.1 of <a href="https://www.math.ksu.edu/~nagy/real-an/2-05-b-alg.pdf" rel="nofollow noreferrer">https://www.math.ksu.edu/~nagy/real-an/2-05-b-alg.pdf</a> or <a href="https://math.stackexchange.com/questions/924341/banach-algebras-continuity-of-inversion">Banach Algebras: Continuity of Inversion?</a>.</p>
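A quick numerical illustration (my addition, not a proof): shrinking a perturbation of an invertible matrix shrinks the operator-norm distance between the inverses.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))   # almost surely invertible
Ainv = np.linalg.inv(A)

# shrink the perturbation and watch ||A^{-1} - B^{-1}|| shrink with it
E = rng.standard_normal((4, 4))
for eps in (1e-2, 1e-4, 1e-6):
    B = A + eps * E
    print(eps, np.linalg.norm(Ainv - np.linalg.inv(B), 2))
```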
|
1,527,891 | <p>Determine (up to a constant multiplier) the polynomial with a maximum at $(-1,1)$, a minimum $(1,-1)$ and no other critical points.</p>
<p>The only thing I can think of is coming up with an equation with roots $1$ and $-1$ and then integrating it but I don't think that will work.</p>
| molarmass | 119,376 | <p>A polynomial of degree $n=3$ has at most $2$ critical points (and a generic one has exactly $2$), so let's use a cubic polynomial $f(x) = ax^3 + bx^2 + cx + d$ with $a \ne 0$. The derivative of this function is $f'(x) = 3ax^2 + 2bx + c$.</p>
<p>We now need to find values of $(a,b,c,d)$ such that \begin{align}f(-1) &= 1,&f(1) &= -1, & f'(-1) &= 0, & f'(1) &= 0.\end{align}
This is a linear system of $4$ equations in the $4$ unknowns $(a,b,c,d)$, and solving it gives a unique solution.</p>
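The four conditions can be solved symbolically; a quick SymPy check (my addition, not part of the original answer):

```python
import sympy as sp

x, a, b, c, d = sp.symbols('x a b c d')
f = a*x**3 + b*x**2 + c*x + d

# impose f(-1) = 1, f(1) = -1, f'(-1) = 0, f'(1) = 0
sol = sp.solve([f.subs(x, -1) - 1, f.subs(x, 1) + 1,
                sp.diff(f, x).subs(x, -1), sp.diff(f, x).subs(x, 1)],
               [a, b, c, d], dict=True)[0]
print(f.subs(sol))  # x**3/2 - 3*x/2, i.e. (x^3 - 3x)/2
```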
|
3,695,868 | <p>In right triangle <span class="math-container">$ABC,$</span> <span class="math-container">$\angle C = 90^\circ.$</span> Let <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> be points on <span class="math-container">$\overline{AC}$</span> so that <span class="math-container">$AP = PQ = QC.$</span> If <span class="math-container">$QB = 67$</span> and <span class="math-container">$PB = 76,$</span> find <span class="math-container">$AB.$</span></p>
<p><a href="https://i.stack.imgur.com/BIPQ0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BIPQ0.png" alt="enter image description here"></a></p>
<p>How do I use ratios and given side lengths to create a proportion to solve for <span class="math-container">$AB$</span>? Is there any other way to solve this?</p>
<p>I would think the best way to approach this is to relate <span class="math-container">$QB/CB = AB/CB$</span>, though that would make <span class="math-container">$CB$</span> for both the same. I guess the relation of <span class="math-container">$AB/AC = QB/QC$</span> can also be used.</p>
| Narasimham | 95,860 | <p>Let <span class="math-container">$BA= y, BC=x. $</span> We can solve numerically using Pythagoras thm twice.</p>
<p><span class="math-container">$$ 2 (67^2 - x^2)^{0.5} = (76^2 - x^2)^{0.5}$$</span> </p>
<p><span class="math-container">$$ 3 (67^2 - x^2)^{0.5} = (y^2 - x^2)^{0.5}$$</span></p>
<p>Since there are two equations in two unknowns, we can solve by squaring and eliminating <span class="math-container">$x$</span>.</p>
<p>Begin by squaring both equations. Solve for <span class="math-container">$x$</span> and substitute in the second set to find <span class="math-container">$y$</span>:</p>
<p><span class="math-container">$$ y=89, x=63.7181$$</span></p>
|
3,695,868 | <p>In right triangle <span class="math-container">$ABC,$</span> <span class="math-container">$\angle C = 90^\circ.$</span> Let <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> be points on <span class="math-container">$\overline{AC}$</span> so that <span class="math-container">$AP = PQ = QC.$</span> If <span class="math-container">$QB = 67$</span> and <span class="math-container">$PB = 76,$</span> find <span class="math-container">$AB.$</span></p>
<p><a href="https://i.stack.imgur.com/BIPQ0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BIPQ0.png" alt="enter image description here"></a></p>
<p>How do I use ratios and given side lengths to create a proportion to solve for <span class="math-container">$AB$</span>? Is there any other way to solve this?</p>
<p>I would think the best way to approach this is to relate <span class="math-container">$QB/CB = AB/CB$</span>, though that would make <span class="math-container">$CB$</span> for both the same. I guess the relation of <span class="math-container">$AB/AC = QB/QC$</span> can also be used.</p>
| Harish Chandra Rajpoot | 210,295 | <p>For easy understanding, assume <span class="math-container">$AP=PQ=QC=x$</span> & <span class="math-container">$BC=y$</span> then using Pythagoras theorem in respective right triangles, we get
<span class="math-container">$$QB^2=QC^2+BC^2\iff 67^2=x^2+y^2\tag 1$$</span>
<span class="math-container">$$PB^2=PC^2+BC^2\iff 76^2=4x^2+y^2\tag 2$$</span>
<span class="math-container">$$AB^2=AC^2+BC^2\iff AB^2=9x^2+y^2\tag 3$$</span>
Subtracting (1) from (2)
<span class="math-container">$$3x^2=76^2-67^2\iff x^2=429$$</span>
Subtracting (1) from (3),
<span class="math-container">$$AB^2-67^2=8x^2\iff AB^2=8x^2+67^2=8(429)+67^2=7921$$</span><span class="math-container">$$ \color{blue}{AB=}\sqrt{7921}=\color{blue}{89}$$</span></p>
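A quick symbolic check of this computation (my addition, not part of the original answer):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
# QB^2 = x^2 + y^2 = 67^2 and PB^2 = (2x)^2 + y^2 = 76^2
sol = sp.solve([x**2 + y**2 - 67**2, 4*x**2 + y**2 - 76**2], [x, y], dict=True)[0]
AB = sp.sqrt(9*sol[x]**2 + sol[y]**2)  # AB^2 = (3x)^2 + y^2
print(sol[x]**2, AB)  # 429 89
```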
|
277,217 | <p>I am stuck on the following problem, which I do not believe to be so difficult.</p>
<p>Let $X$ and $Y$ be Banach spaces. Let $f:X\times X\rightarrow Y$ be a function such that for any fixed $x_0$, $f(x,x_0)$ and $f(x_0,x)$ are continuous in $x$. Then is $f(x,x)$ continuous in $x$?</p>
<p>I tried taking an arbitrary convergent sequence $\{x_n\}$ which converges to some $x$ and trying to argue that $f(x_n,x_n)$ converges to $f(x,x)$ using continuity in both terms, but I cannot seem to make this work for some reason.</p>
<p>Any help is greatly appreciated.</p>
| Davide Giraudo | 9,849 | <p>Let $X=Y:=\Bbb R$ and
$$f(x,y):=\begin{cases}\frac{xy}{x^2+y^2},&\mbox{if }(x,y)\neq (0,0);\\
0&\mbox{ if }(x,y)=(0,0).
\end{cases}$$
This function is continuous once a variable is fixed, but is not globally continuous. </p>
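This can be seen concretely (my addition): along either axis $f$ is identically $0$ near the origin, yet along the diagonal it is constantly $\frac12$, so $f(x,x)$ does not tend to $f(0,0)=0$.

```python
def f(x, y):
    # the classic separately continuous but not jointly continuous function
    return x*y / (x**2 + y**2) if (x, y) != (0, 0) else 0.0

print([f(t, 0.0) for t in (1.0, 1e-3, 1e-9)])  # [0.0, 0.0, 0.0]
print([f(t, t) for t in (1.0, 1e-3, 1e-9)])    # [0.5, 0.5, 0.5]
```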
|
656,701 | <p>Suppose we have:</p>
<p>$A = \{(x,v,w):x+v=w\}$</p>
<p>$B = \{(x,v):x=v\}$</p>
<p>$C = \{(w,u):\exists x 2x=w\}$</p>
<p>Can we say that $C = A \cup B$?</p>
| Unwisdom | 124,220 | <p>Oh, I see what you're trying to do:
\begin{eqnarray}
A&=&\{\langle x,v,w\rangle :x+v=w\}\\
B&=&\{\langle x,v,w\rangle :x=v\}\\
C&=&\{\langle x,v,w\rangle :2x=w\}
\end{eqnarray}
These are three planes in $\mathbb{R}^{3}$. Planes $A$ and $B$ intersect in a line. Since every solution to the conditions $x+v=w$ and $x=v$ is also a solution to $2x=w$, it follows that the plane $C$ contains the intersection of $A$ and $B$. Symbolically:
$$
A\cap B \subseteq C.$$
However, $C$ is not equal to this intersection (for a start it's a plane, and not a line). Consider $\langle 1,0,2 \rangle$, for instance: it lies in $C$ but not in $A\cap B$. </p>
|
1,993,217 | <p>Let $\left\{f_{n}\right\}$ be a sequence of equicontinuous functions where $f_n: [0,1] \rightarrow \mathbf{R}$. If $\{f_n(0)\}$ is bounded, why is $\left\{f_{n}\right\}$ uniformly bounded?</p>
| Manoel | 20,988 | <p>Hint: </p>
<p><span class="math-container">$|f_{n}(x)|=|f_{n}(x) +f_{n}(0) -f_{n}(0)|\leq|f_{n}(x) -f_{n}(0)|+|f_{n}(0)|$</span> </p>
<p>Now use equicontinuity and the fact that <span class="math-container">$\{f_{n}(0)\}$</span> is bounded.</p>
|
1,993,217 | <p>Let $\left\{f_{n}\right\}$ be a sequence of equicontinuous functions where $f_n: [0,1] \rightarrow \mathbf{R}$. If $\{f_n(0)\}$ is bounded, why is $\left\{f_{n}\right\}$ uniformly bounded?</p>
| Martin Sleziak | 8,297 | <p>Let me try to prove this using real induction. You can find some basic description of this proof technique together with some references <a href="https://math.stackexchange.com/questions/4202/induction-on-real-numbers/4204#4204">in this answer</a>. I have tried to give some informal description of real induction <a href="https://math.stackexchange.com/questions/2244723/generalization-of-real-induction-for-topological-spaces">here</a>.</p>
<p>Real induction basically says that if we have some subset $S\subseteq[0,1]$, to show that $S=[0,1]$ it suffices to verify three conditions:<br>
(RI1) $0\in S$.<br>
(RI2) If $0\le x<1$, then $x\in S$ $\implies$ $[x,y]\subseteq S$ for some $y > x$.<br>
(RI3) If $0 < x \le 1$ and $[0,x)\subset S$, then $x \in S$.<br></p>
<hr>
<p>We will show that (RI1), (RI2), (RI3) is true for the set $S=\{x\in[0,1]; \text{ the sequence }f_n\text{ is uniformly bounded on }[0,x]\}$, i.e.,
$$S=\{x\in[0,1]; (\exists M\in\mathbb R)(\forall t\in[0,x])(\forall n) |f_n(t)|\le M\}.$$</p>
<p>(RI1) is exactly the assumption that the given sequence is bounded in $0$.</p>
<p>(RI2) Let us assume that $x\in S$, which means that there exists $M$ such that
$$ (\forall t\in[0,x])(\forall n) |f_n(t)|\le M. $$
In particular, we have also $|f_n(x)|\le M$.</p>
<p>Let us choose some $\varepsilon>0$. Now from equicontinuity we get that there exists $\delta>0$ such that
$$|t-x|<\delta \implies |f_n(t)-f_n(x)|<\varepsilon$$
for every $n$. So for every $t\in [x,x+\delta/2]$ and any $n$ we have
$$|f_n(t)| \le |f_n(t)-f_n(x)| + |f_n(x)| < M+\varepsilon.$$</p>
<p>Let $y=x+\delta/2$. We see that all $f_n$'s are bounded by $M+\varepsilon$ on both intervals $[0,x]$ and $[x,y]$, so it is uniformly bounded on the whole interval $[0,y]$. This shows that $[x,y]\subset S$.</p>
<p>(RI3) Let $x$ be such that $[0,x)\subset S$. Let $\varepsilon>0$. We will again use that we have $\delta>0$ such that
$$|t-x|<\delta \implies |f_n(t)-f_n(x)|<\varepsilon$$
for every $n$. Now we choose any $y<x$ such that $y>0$ and $y>x-\delta$. </p>
<p>Since $y\in S$, there exists $M$ such that
$$ (\forall t\in[0,y]) |f_n(t)|\le M $$
for every $n$. I.e., on the interval $[0,y]$ the sequence is uniformly bounded by $M$; it remains to show what happens on $[y,x]$.</p>
<p>However, for $t\in [y,x]$ we have $|t-x|\le|y-x|<\delta$, hence
$$|f_n(t)| \le |f_n(t)-f_n(x)| + |f_n(x)-f_n(y)| + |f_n(y)| < M+2\varepsilon.$$
Again, we get that $(f_n)$ is uniformly bounded (by $M+2\varepsilon$) on the whole interval $[0,x]$ and that $x\in S$.</p>
<hr>
<p>You may also notice that we have never used the fact that we work with a countable family of functions. So the same argument works for arbitrary family of equicontinuous functions. </p>
|
907,893 | <p>I wanted to know about this convention :</p>
<p>By rate of growth of R, we normally mean : (change in R) / (change in Time)</p>
<p>But the rate of growth of a geometric sequence "a(1+r)^n" is r, which I find strange.</p>
<p>I am kind of confused; can anyone clear this up? </p>
| Calculon | 163,648 | <p>That is not strange at all. The geometric sequence in your question is given by $a_{n+1} = (1+r)a_n$ with $a_0 = a$. In every single "time step" going from $n$ to $n+1$ your $a_n$ becomes $(1+r)a_n$. So your growth rate per time step is $r$. You cannot break up this time step into smaller units of time since $n$ in the geometric progression has to be an integer.</p>
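A quick numerical illustration of this answer (my addition): the per-step relative change of $a(1+r)^n$ is exactly $r$.

```python
a, r = 100.0, 0.05
seq = [a * (1 + r)**n for n in range(6)]

# (change in a_n) / a_n over one time step is r for every n
growth = [(seq[n + 1] - seq[n]) / seq[n] for n in range(5)]
print(growth)  # each entry is 0.05 (up to floating-point rounding)
```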
|
290,050 | <p>Are there good lower/upper bounds for
$ \sum\limits_{i = 0}^k {\left( \begin{array}{l} n \\ i \\ \end{array} \right)x^i } $ where $0<x<1$, $k \ll n$?</p>
| zeraoulia rafik | 51,189 | <p><strong>Hint</strong>: for a lower bound, note that for $k>1$ and $x=1/k$ (so that $0<x<1$ and $1/k \ll k$) we have $(1+\frac{1}{k})^k \leq (e^{1/k})^k = e$.</p>
|
290,050 | <p>Are there good lower/upper bounds for
$ \sum\limits_{i = 0}^k {\left( \begin{array}{l} n \\ i \\ \end{array} \right)x^i } $ where $0<x<1$, $k \ll n$?</p>
| Max Alekseyev | 7,076 | <p>Let $p=\frac{x}{1+x}$ and $q=\frac{1}{1+x}$, and thus
$$\sum_{i=0}^k \binom{n}{i} x^i=(1+x)^n\sum_{i=n-k}^n \binom{n}{i} p^{n-i} q^i.$$
Then for $k<np$ <a href="https://en.wikipedia.org/wiki/Chernoff_bound" rel="nofollow noreferrer">Chernoff bound</a> gives
$$\sum_{i=n-k}^n \binom{n}{i} p^{n-i} q^i \le \left( \frac{nq}{n-k}\right)^{n-k} e^{np-k}.$$
That is,
$$\sum_{i=0}^k \binom{n}{i} x^i \le (1+x)^k \left( \frac{n}{n-k}\right)^{n-k} e^{\frac{(n-k)x-k}{1+x}}.$$</p>
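A numerical sanity check of the final bound (my addition; the specific $(n,k,x)$ triples are arbitrary examples satisfying $k<np$):

```python
from math import comb, e

def lhs(n, k, x):
    return sum(comb(n, i) * x**i for i in range(k + 1))

def rhs(n, k, x):
    # (1+x)^k * (n/(n-k))^(n-k) * exp(((n-k)x - k)/(1+x))
    return (1 + x)**k * (n / (n - k))**(n - k) * e**(((n - k)*x - k) / (1 + x))

for n, k, x in [(100, 5, 0.3), (1000, 20, 0.1)]:
    print(lhs(n, k, x) <= rhs(n, k, x))  # True in both cases
```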
|
1,290,516 | <p>Find the values of $m$ if the line $y=mx+2$ is a tangent to the curve $x^2-2y^2=1$.</p>
<p>My working:</p>
<p>First we solve $x^2-2y^2=1$ for $y$ so that we can differentiate with respect to $x$ and get the gradient. We get $y^2=\frac{1}{2}x^2-\frac{1}{2}\implies y=\pm\sqrt{\frac{1}{2}x^2-\frac{1}{2}}$.</p>
<p>We take the positive one for demonstration<br>
$\frac{dy}{dx}=\frac{1}{2}x(\frac{1}{2}x^2-\frac{1}{2})^{-\frac{1}{2}}=\frac{x}{2\sqrt{\frac{1}{2}x^2-\frac{1}{2}}}$</p>
<p>Setting $\frac{dy}{dx}=m$ and squaring gives $4m^2\left(\tfrac{1}{2}x^2-\tfrac{1}{2}\right)=x^2$, i.e. $(1-2m^2)x^2=-2m^2$.</p>
<p>Since the tangent touches the curve, we can substitute $y=mx+2$ to get $x^2-2(mx+2)^2=1$; expanding, $(1-2m^2)x^2=9+8mx$.</p>
<p>$\implies(1-2m^2)x^2=-2m^2$ and $(1-2m^2)x^2=9+8mx$ are two equations with two unknowns, then we should be able to find the values of $m$, but I couldn't find any easy way to solve those 2 simultaneous equations. Is there any easier method?</p>
<p>I tried solving $9+8mx=-2m^2$ but we still have two unknowns in one equation?</p>
<p>Also, if we don't use those two simultaneous equations, can we solve this question with a different method?</p>
<p>I am trying to solve WITHOUT implicit differentiation.</p>
<p>Many thanks for the help!</p>
| Empty | 174,970 | <p><strong>One more simplest way:</strong></p>
<p>Put $y=mx+2$ in the equation $x^2-2y^2=1$. Then it comes to a quadratic equation of $x$. From which we get two values of $x$. Since the line is tangent to the given hyperbola so, it can not intersect at two different points. So, the quadratic equation must give two identical values of $x$.</p>
<p>For this, set the discriminant equal to $0$. </p>
<p>The quadratic equation becomes $x^2-2(mx+2)^2=1$. Setting the discriminant equal to $0$ we get $$64m^2+36(1-2m^2)=0\implies m=\pm \frac{3}{\sqrt 2}$$</p>
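The discriminant computation can be checked with SymPy (my addition, not part of the original answer):

```python
import sympy as sp

x, m = sp.symbols('x m')
# substitute y = mx + 2 into x^2 - 2y^2 = 1
quad = sp.expand(x**2 - 2*(m*x + 2)**2 - 1)   # (1 - 2m^2)x^2 - 8mx - 9
disc = sp.discriminant(quad, x)               # 36 - 8m^2 after simplification
print(sp.solve(sp.Eq(disc, 0), m))            # m = ±3/√2, i.e. ±3*sqrt(2)/2
```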
|
439,620 | <p>As we know, the QR-factorization <span class="math-container">$Q\cdot R=A$</span> of any real symmetric <span class="math-container">$n \times n$</span> matrix <span class="math-container">$A$</span> with full rank is <em><strong>unconditionally</strong></em> <em>numerically stable</em>. Further, when A is rank-1-updated, the factorization can be updated in <span class="math-container">$\mathcal{O}(n^2)$</span>, <em><strong>and</strong></em> <em>the factorization remains stable</em> after the update as well!</p>
<p>Now, when <span class="math-container">$A$</span> is symmetric, I seek a decomposition with all the same properties, plus that the factorization itself is symmetric. So to summarize, these are the properties of a factorization that I seek:</p>
<ol>
<li>The factorization is unconditionally numerically stable (i.e., no conditions on inertia, spectrum, norm, M-property, reodering, growth-factor, etc, are permitted to be imposed on <span class="math-container">$A$</span> whatsoever).</li>
<li>The factorization is inherently symmetric (e.g., <span class="math-container">$Q^T \cdot R^T \cdot D \cdot R \cdot Q$</span>), i.e., exact multiplication of the factors yields a symmetric matrix.</li>
<li>The factorization can be updated in <span class="math-container">$\mathcal{O}(n^2)$</span> and remains stable afterwards.</li>
</ol>
<p>Remark 1: For general symmetric <span class="math-container">$A$</span>, no assertions can be given on the stability of LDL-factorizations. (In some cases, reorderings do exist, but upon rank-1-update, the matrix would have to be reordered from the start; thus assertions on the stability of the factorization after the rank-1-update do not exist.)</p>
<p>Remark 2: I am likewise interested in a result that such sought decomposition cannot exist.</p>
| Daniel Shapero | 49,417 | <p>It may be difficult to meet all your criteria but here's an attempt.
This is a bit lower-level than Federico Poloni's suggestion to use the eigenvalue factorization.</p>
<p>The <a href="https://en.wikipedia.org/wiki/Lanczos_algorithm" rel="nofollow noreferrer">Lanczos algorithm</a> computes a unitary matrix <span class="math-container">$Q$</span> and a symmetric, tridiagonal matrix <span class="math-container">$T$</span> such that</p>
<p><span class="math-container">$$A = QTQ^*$$</span></p>
<p>using <span class="math-container">$n$</span> matrix-vector multiplies.
I'll assume that you have a numerically stable implementation of the Lanczos algorithm but see below.
In any case, if <span class="math-container">$A' = A + \alpha uu^*$</span>, then if we let <span class="math-container">$z = Q^*u$</span>,</p>
<p><span class="math-container">$$A' = QTQ^* + \alpha uu^* = Q(T + \alpha zz^*)Q^*.$$</span></p>
<p>Now of course the matrix <span class="math-container">$T + \alpha zz^*$</span> is no longer tridiagonal.
<em>But</em> multiplying it by a vector only requires <span class="math-container">$\mathscr{O}(n)$</span> operations.
So we can compute the tridiagonal factorization</p>
<p><span class="math-container">$$T + \alpha zz^* = PSP^*$$</span></p>
<p>with <span class="math-container">$P$</span> unitary and <span class="math-container">$S$</span> tridiagonal in only <span class="math-container">$\mathscr{O}(n^2)$</span>.
More or less the same works if you did a rank-<span class="math-container">$k$</span> update so long as <span class="math-container">$k \ll n$</span>.
Putting it all together now, the desired rank-1 update is</p>
<p><span class="math-container">$$A' = QPS(QP)^*.$$</span></p>
<p>Now as I alluded to above the Lanczos algorithm has quite subtle stability properties.
The columns of <span class="math-container">$Q$</span> are guaranteed to be orthogonal in real arithmetic, but in floating point arithmetic they will fail to remain exactly orthogonal and can even become linearly dependent.
The worst part about it is that the loss of orthogonality is greatest whenever one of the eigenvectors of the partial factorization grows close to one of the eigenvectors of <span class="math-container">$A$</span>.
So in a sense the algorithm sabotages itself.
The remedy for this is to re-orthogonalize the Lanczos vectors, but if you reorthogonalize all of them, we're back at <span class="math-container">$\mathscr{O}(n^3)$</span> again.
There are partial reorthogonalization strategies that bring this back down.
I've implemented them and found them to be quite finicky but your mileage may vary.
My point here is that the Lanczos algorithm might fulfill all your requirements or only some of them.
In any case, almost all methods for computing the full eigendecomposition require first reducing the matrix to tridiagonal form.
If you want to read more, you can consult <a href="https://epubs.siam.org/doi/book/10.1137/1.9781611971163" rel="nofollow noreferrer">The Symmetric Eigenvalue Problem</a> or <a href="https://apps.dtic.mil/sti/pdfs/ADA289614.pdf" rel="nofollow noreferrer">Do We Understand the Symmetric Lanczos Algorithm Yet?</a>, both by Parlett, or <a href="https://epubs.siam.org/doi/book/10.1137/1.9781611970739" rel="nofollow noreferrer">Numerical Methods for Large Eigenvalue Problems</a> by Saad.</p>
<p>Finally, and this is a little pedantic, but I'd say that <em>there are numerically stable algorithms</em> for computing the QR factorization, not that the factorization itself is stable.
There are numerically unstable algorithms for computing the QR factorization as well!
For example, the Householder approach (orthogonal triangularization) is stable while Gram-Schmidt (triangular orthogonalization) is not.</p>
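For concreteness, here is a minimal NumPy sketch of Lanczos tridiagonalization (my addition; it uses full reorthogonalization for stability, which costs <span class="math-container">$\mathscr{O}(n^3)$</span> overall rather than the cheaper partial strategies discussed above):

```python
import numpy as np

def lanczos(A, v0):
    """Factor symmetric A as Q T Q^T with T tridiagonal (full reorthogonalization)."""
    n = A.shape[0]
    Q = np.zeros((n, n))
    alpha, beta = np.zeros(n), np.zeros(n - 1)
    Q[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(n):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        # re-orthogonalize against ALL previous Lanczos vectors -- the expensive
        # safeguard against the loss of orthogonality discussed above
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        if j < n - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return Q, T

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
A = A + A.T                         # make it symmetric
Q, T = lanczos(A, rng.standard_normal(8))
print(np.allclose(Q @ T @ Q.T, A))  # True
```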
|
1,131,622 | <p>The question itself is a very easy one:<br/></p>
<blockquote>
<p>Somebody has got two kids, one of whom is a girl. Then what's the probability that he's got <strong>at least</strong> one boy?</p>
</blockquote>
<p>My answer is that, since he's already got a girl, then "he's got at least one boy" amounts to "the other kid is a boy", whose probability is apparently $\frac{1}{2}$.<br/>
But my friends argue that the probability should be $\frac23$: they say this is a binomial distribution, all the possible cases are (girl,girl),(girl,boy),(boy,girl) which yields that the probability is two cases out of three and is thus $\frac23$.<br/>
But I think this is totally unacceptable. I don't think it is a binomial distribution at all, at least not what my friends explained to me. However, I just can't dissuade them from their opinion, nor can I prove that I am wrong.<br/>
So what on earth is the probability? and why? Any help is appreciated. Thanks in advance.<hr/>
Esp. Can anybody show why <strong>my</strong> explanation is wrong? Isn't it that whether the other kid is a boy or a girl a 50/50 event?
<hr/>
EDIT:<br/>
Thanks for all the help you provided for me, and special thanks will go to @HammyTheGreek and @KSmarts, who have made it clear to me that there is in fact some ambiguity in my statement in this problem.<br/>
As is pointed in <a href="http://en.wikipedia.org/wiki/Boy_or_Girl_paradox" rel="nofollow">this link</a> ,two distinct interpretation of the statement "one of whom is a girl" that gives rise to ambiguity:<br/></p>
<blockquote>
<p>From all families with two children, at least one of whom is a boy, a family is chosen at random. This would yield the answer of 1/3.<br/>
From all families with two children, one child is selected at random, and the sex of that child is specified to be a boy. This would yield an answer of 1/2. </p>
</blockquote>
| Timbuc | 118,527 | <p>I agree with your friends, and the reason follows using conditional probability. Define $B$ = the event of having a boy, $G$ = the event of having a girl, and we're in the space defined by "having two kids". Then we want the conditional probability $P(B\mid G)$, the probability of having a boy <em>knowing</em> that there's already a girl:</p>
<p>$$P(B\mid G)=\frac{P(B\cap G)}{P(G)}=\frac{\frac12}{\frac34}=\frac23$$</p>
|
1,131,622 | <p>The question itself is a very easy one:<br/></p>
<blockquote>
<p>Somebody has got two kids, one of whom is a girl. Then what's the probability that he's got <strong>at least</strong> one boy?</p>
</blockquote>
<p>My answer is that, since he's already got a girl, then "he's got at least one boy" amounts to "the other kid is a boy", whose probability is apparently $\frac{1}{2}$.<br/>
But my friends argue that the probability should be $\frac23$: they say this is a binomial distribution, all the possible cases are (girl,girl),(girl,boy),(boy,girl) which yields that the probability is two cases out of three and is thus $\frac23$.<br/>
But I think this is totally unacceptable. I don't think it is a binomial distribution at all, at least not what my friends explained to me. However, I just can't dissuade them from their opinion, nor can I prove that I am wrong.<br/>
So what on earth is the probability? and why? Any help is appreciated. Thanks in advance.<hr/>
Esp. Can anybody show why <strong>my</strong> explanation is wrong? Isn't it that whether the other kid is a boy or a girl a 50/50 event?
<hr/>
EDIT:<br/>
Thanks for all the help you provided for me, and special thanks will go to @HammyTheGreek and @KSmarts, who have made it clear to me that there is in fact some ambiguity in my statement in this problem.<br/>
As is pointed in <a href="http://en.wikipedia.org/wiki/Boy_or_Girl_paradox" rel="nofollow">this link</a> ,two distinct interpretation of the statement "one of whom is a girl" that gives rise to ambiguity:<br/></p>
<blockquote>
<p>From all families with two children, at least one of whom is a boy, a family is chosen at random. This would yield the answer of 1/3.<br/>
From all families with two children, one child is selected at random, and the sex of that child is specified to be a boy. This would yield an answer of 1/2. </p>
</blockquote>
| Christoph | 86,801 | <p>I'll try to explain without mentioning conditional probabilities or binomial distributions explicitly:</p>
<p>Having two kids, you can have two boys, two girls, or one girl and one boy. However, those 3 possibilities don't have equal probabilities. To get two boys, both your first and second child have to be boys, chance $\frac 1 4$. To get two girls, both your first and second child have to be girls, chance $\frac 1 4$. Now to get a boy and a girl, you can either first get a boy, then a girl, or first get a girl, then a boy, chance $\frac 1 2$.</p>
<p>Now assume we know that somebody has two kids, and we know one of them is a girl. We are left with the possibilities of two girls or a boy and a girl, the second still having twice the probability. Hence the probability that this somebody also has a boy is $\frac 2 3$.</p>
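A small Monte Carlo check of the $\frac23$ answer, under the "family chosen from all two-child families with at least one girl" reading mentioned in the question's edit (my addition):

```python
import random

random.seed(0)
N = 200_000
# each child is a girl with probability 1/2, independently (True = girl)
families = [(random.random() < 0.5, random.random() < 0.5) for _ in range(N)]

with_girl = [f for f in families if f[0] or f[1]]              # at least one girl
with_girl_and_boy = [f for f in with_girl if not (f[0] and f[1])]
print(len(with_girl_and_boy) / len(with_girl))  # close to 2/3
```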
|
130,028 | <p>I often want to have the same code at the beginning of every new notebook. Is it possible to configure Mathematica, such that whenever you create a new notebook some user-defined code will always be created with the new document.</p>
<p>E.g. commonly used plot configurations, packages, directory setting etc.</p>
<pre><code>Needs["PolygonPlotMarkers"]
Needs["TwoAxisListPlot"]
fm[name_, size_: 7] :=
Graphics[{EdgeForm[], PolygonMarker[name, Offset[size]]}]
PlotStyles = {Frame -> True, FrameStyle -> Directive[Black, Thin],
Axes -> False, ImageSize -> 350, AspectRatio -> 1.0};
</code></pre>
<p>At the beginning of every new notebook. </p>
| Szabolcs | 12 | <p>I recommend creating a palette with a button that can insert the code for you. Then save the palette and make it easy to access through the palettes menu.</p>
<h3>Create palette</h3>
<p>Suppose your code is (for sake of simplicity),</p>
<pre><code>code1 = HoldComplete[1+1];
</code></pre>
<p>The create the palette:</p>
<pre><code>CreatePalette[
Column[{
PasteButton["Template1", Defer @@ code1]
}],
WindowTitle -> "Templates"
]
</code></pre>
<p>I put in a column in case you want multiple buttons that insert different pieces of code.</p>
<p><a href="https://i.stack.imgur.com/6k7Oe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6k7Oe.png" alt="enter image description here"></a></p>
<h3>Install palette for permanent use</h3>
<p>Now go to File → Install..., and in the dialog that comes up select Palettes, then the palette you just created.</p>
<p><a href="https://i.stack.imgur.com/eMlUB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eMlUB.png" alt="enter image description here"></a></p>
<p>For Install Name, type a filename, e.g. <code>Templates.nb</code>, then press OK. Now close the palette.</p>
<p>From now on the palette will be permanently present in the Palettes menu. If you want to remove it, the file is located at</p>
<pre><code>SystemOpen@FileNameJoin[{$UserBaseDirectory, "SystemFiles", "FrontEnd", "Palettes"}]
</code></pre>
<p>Simply remove it.</p>
<hr>
<p>It is my personal opinion that inserting code in every single notebook you open will eventually be both annoying and counterproductive. I recommend the palette solution instead, which just takes a single mouse click, so it's simple and quick. It lets you keep several code snippets and insert whichever you want.</p>
<p>If you're feeling up to it, you can even create a whole snippet system where the snippets are stored in a file (perhaps notebook) and can be selected and inserted with a simple GUI (e.g. dropdown boxes). Many text editors have such a thing.</p>
|
394,517 | <p>How can I evaluate $\lim_{x \to \infty}\left(\sqrt{x+\sqrt{x}}-\sqrt{x-\sqrt{x}}\right)$?</p>
| Zarrax | 3,035 | <p>I guess someone should mention the Taylor approximation approach:
$$\left(\sqrt{x+\sqrt{x}}-\sqrt{x-\sqrt{x}}\right) = \sqrt{x}\left(\sqrt{1+ {1 \over \sqrt{x}}}-\sqrt{1- {1 \over \sqrt{x}}}\right) $$
$$= \sqrt{x}\bigg(\big(1 + {1 \over 2\sqrt{x}} + O({1 \over x})\big) - \big(1 - {1 \over 2\sqrt{x}} + O({1 \over x})\big)\bigg)$$
$$=\sqrt{x}\bigg({1 \over \sqrt{x}} + O({1 \over x})\bigg)$$
$$= 1 + O({1 \over \sqrt{x}})$$
So the limit is $1$.</p>
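The limit is easy to confirm symbolically (my addition, not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
expr = sp.sqrt(x + sp.sqrt(x)) - sp.sqrt(x - sp.sqrt(x))
print(sp.limit(expr, x, sp.oo))  # 1
```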
|
4,196,109 | <p>While studying inequalities, I came across the following definition (<span class="math-container">$\forall a > 0$</span>):</p>
<p><span class="math-container">$$
\begin{alignat}{1}
& |x| > a \iff \{ x \mid x < -a \text{ or } x > a \} \\
& |x| < a \iff \{ x \mid -a < x < a \}
\end{alignat}
$$</span></p>
<p>Naturally, as <span class="math-container">$\{ -a < x < a \}$</span> could be rewritten as <span class="math-container">$\{ -a < x \text{ and } x < a \}$</span>, I wonder if it is valid to rewrite <span class="math-container">$\{ x < -a \text{ or } x > a \}$</span> as <span class="math-container">$\{ -a > x > a \}$</span>.</p>
<p>I don't know if that would be valid because, while <span class="math-container">$\{ -a < x < a \}$</span> represents only one interval, <span class="math-container">$\{ -a > x > a \}$</span> would represent two in a single expression. Is that notation valid?</p>
| user97357329 | 630,243 | <p>Another framework proposed by Cornel (<strong>answer to the second integral</strong>, <span class="math-container">$\displaystyle \int_0^1\frac{\ln^2(1+x^2)\ln x}{1+x^2}\textrm{d}x$</span>)</p>
<p>Observe that <span class="math-container">$$\int_0^1 \frac{1}{1+x^2}\log^3\left(\frac{2x}{1+x^2}\right)\textrm{d}x$$</span>
<span class="math-container">$$=\log^3(2)\int_0^1\frac{1}{1+x^2}\textrm{d}x+3\log^2(2)\int_0^1\frac{\log(x)}{1+x^2}\textrm{d}x+3\log(2)\int_0^1\frac{\log^2(x)}{1+x^2}\textrm{d}x$$</span>
<span class="math-container">$$+\int_0^1\frac{\log^3(x)}{1+x^2}\textrm{d}x-3\log^2(2)\int_0^1\frac{\log(1+x^2)}{1+x^2}\textrm{d}x-6\log(2)\int_0^1\frac{\log(x)\log(1+x^2)}{1+x^2}\textrm{d}x$$</span>
<span class="math-container">$$-3\int_0^1\frac{\log^2(x)\log(1+x^2)}{1+x^2}\textrm{d}x+3\log(2)\int_0^1\frac{\log^2(1+x^2)}{1+x^2}\textrm{d}x$$</span><span class="math-container">$$-\int_0^1\frac{\log^3(1+x^2)}{1+x^2}\textrm{d}x+3\color{blue}{\int_0^1\frac{\log(x)\log^2(1+x^2)}{1+x^2}\textrm{d}x}.$$</span></p>
<ul>
<li><p>Note that the integral in the left-hand side may be beautifully reduced by the variable change <span class="math-container">$\displaystyle x\mapsto \frac{2x}{1+x^2}$</span> to <span class="math-container">$\displaystyle \int_0^1 \frac{1}{1+x^2}\log^3\left(\frac{2x}{1+x^2}\right)\textrm{d}x=\frac{1}{2}\int_0^1 \frac{\log^3(x)}{\sqrt{1-x^2}}\textrm{d}x$</span>, where the last integral is a form involving the derivative of the <strong>Beta function</strong>.</p>
</li>
<li><p>Note that all the other resulting integrals in the right-hand side are already known.</p>
</li>
<li><p>To easily make the connection with the known integrals, for the integrals <span class="math-container">$\displaystyle \int_0^1\frac{\log^2(1+x^2)}{1+x^2}\textrm{d}x$</span> and <span class="math-container">$\displaystyle \int_0^1\frac{\log^3(1+x^2)}{1+x^2}\textrm{d}x$</span> make the variable change <span class="math-container">$x \mapsto \tan(x)$</span> to have a view in terms of trigonometric functions. A relevant link: <a href="https://math.stackexchange.com/questions/2798619/about-the-integral-int-0-pi-4-log4-cos-theta-d-theta">About the integral $\int_{0}^{\pi/4}\log^4(\cos\theta)\,d\theta$</a></p>
</li>
<li><p>Also, you might like to know the following generalized integral, <span class="math-container">$\displaystyle \int_0^1\frac{\log^{2n}(x)\log(1+x^2)}{1+x^2}\textrm{d}x$</span>, is nicely presented and calculated by Ali Shadhar in his book, <strong>An Introduction To The Harmonic Series And Logarithmic Integrals: For High School Students Up To Researchers</strong> (see page <span class="math-container">$149$</span>). The integral easily and naturally reduces to forms involving derivatives of the <strong>Beta function</strong>.</p>
</li>
</ul>
<p><strong>End of story</strong></p>
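<p>The reduction in the first bullet can be sanity-checked numerically. The sketch below (a plain midpoint rule; function names and the tolerance are my own choices) compares $\int_0^1 \frac{1}{1+x^2}\log^3\left(\frac{2x}{1+x^2}\right)\textrm{d}x$ with $\frac{1}{2}\int_0^1 \frac{\log^3 x}{\sqrt{1-x^2}}\textrm{d}x$, computing the latter as $\frac{1}{2}\int_0^{\pi/2}\log^3(\sin t)\,\textrm{d}t$ via the standard substitution $x=\sin t$:</p>

```python
import math

def midpoint(f, a, b, n=400000):
    # composite midpoint rule; tolerates the integrable endpoint singularities here
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

lhs = midpoint(lambda x: math.log(2 * x / (1 + x * x)) ** 3 / (1 + x * x), 0.0, 1.0)
rhs = 0.5 * midpoint(lambda t: math.log(math.sin(t)) ** 3, 0.0, math.pi / 2)
assert lhs < 0 and rhs < 0    # both integrands are nonpositive on the range
assert abs(lhs - rhs) < 1e-2  # the two sides agree numerically
```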
|
4,336,706 | <p>Let <span class="math-container">$\mathbb {K}$</span> be a field. Let <span class="math-container">$f: \mathbb {K}^2 \rightarrow \mathbb {K}^2; x \mapsto Ax+b$</span> be an affine transformation. Suppose <span class="math-container">$f$</span> has a fixed point line (i.e. a line such that every point on that line is a fixed point of <span class="math-container">$f$</span>). When does the linear map <span class="math-container">$x \mapsto Ax$</span> have a fixed point line?</p>
<ul>
<li><em>What I tried:</em><br />
I tried to construct a fixed point line of the linear map from the one of <span class="math-container">$f$</span>, but to no avail. I know that <span class="math-container">$(0,0)$</span> is a fixed point of the linear map. If I could obtain one other fixed point I would be done, since by linearity the line through the origin and that point would consist only of fixed points. So it boils down to finding a fixed point of the linear map other than the origin. Another thought: By our assumption the coefficient matrix of the inhomogeneous system of linear equations <span class="math-container">$(A-I_2)x=-b$</span> has rank one.
Now we are interested in the homogeneous system. Any hints?</li>
</ul>
| Gribouillis | 398,505 | <p>We can assume without loss of generality that <span class="math-container">$f(1)=0$</span>. Now let <span class="math-container">$g(x) = x f(x)$</span>. We have <span class="math-container">$g(0)=g(1)=0$</span>. Hence there is a point <span class="math-container">$c$</span> in <span class="math-container">$(0,1)$</span> where <span class="math-container">$g'(c) = 0$</span>. It follows that <span class="math-container">$c f'(c) + f(c) = 0$</span>, QED.</p>
<p>More generally, if <span class="math-container">$\alpha>0$</span> and we choose <span class="math-container">$g(x)=x^\alpha (f(x)-f(1))$</span>, the same argument applies: we have <span class="math-container">$g(0)=g(1)=0$</span> and there is a point <span class="math-container">$c\in (0, 1)$</span> such that <span class="math-container">$\alpha c^{\alpha-1}(f(c)-f(1))+ c^\alpha f'(c)=0$</span>, i.e.
<span class="math-container">\begin{equation}
f'(c) = \alpha \frac{f(1)-f(c)}{c}
\end{equation}</span></p>
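<p>One can watch this conclusion materialize for a concrete choice (my example: <span class="math-container">$f(x)=x^2$</span>, <span class="math-container">$\alpha=2$</span>, where the predicted point is <span class="math-container">$c=1/\sqrt{2}$</span>) by searching for a root of <span class="math-container">$h(c)=f'(c)-\alpha\frac{f(1)-f(c)}{c}$</span>:</p>

```python
import math

alpha = 2.0
f = lambda x: x * x
fp = lambda x: 2.0 * x

def h(c):
    # the mean-value condition rearranged: f'(c) - alpha * (f(1) - f(c)) / c
    return fp(c) - alpha * (f(1.0) - f(c)) / c

# bisect h on (0, 1); h changes sign between 0.1 and 0.9
lo, hi = 0.1, 0.9
assert h(lo) < 0 < h(hi)
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if h(mid) < 0 else (lo, mid)
c = (lo + hi) / 2
assert abs(c - 1 / math.sqrt(2)) < 1e-9   # the mean-value point predicted above
```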
|
4,336,706 | <p>Let <span class="math-container">$\mathbb {K}$</span> be a field. Let <span class="math-container">$f: \mathbb {K}^2 \rightarrow \mathbb {K}^2; x \mapsto Ax+b$</span> be an affine transformation. Suppose <span class="math-container">$f$</span> has a fixed point line (i.e. a line such that every point on that line is a fixed point of <span class="math-container">$f$</span>). When does the linear map <span class="math-container">$x \mapsto Ax$</span> have a fixed point line?</p>
<ul>
<li><em>What I tried:</em><br />
I tried to construct a fixed point line of the linear map from the one of <span class="math-container">$f$</span>, but to no avail. I know that <span class="math-container">$(0,0)$</span> is a fixed point of the linear map. If I could obtain one other fixed point I would be done, since by linearity the line through the origin and that point would consist only of fixed points. So it boils down to finding a fixed point of the linear map other than the origin. Another thought: By our assumption the coefficient matrix of the inhomogeneous system of linear equations <span class="math-container">$(A-I_2)x=-b$</span> has rank one.
Now we are interested in the homogeneous system. Any hints?</li>
</ul>
| Mr.Gandalf Sauron | 683,801 | <p>To extend on @Gribouillis's solution.</p>
<p>Take <span class="math-container">$g(x)=xf(x)-xf(1)$</span>. Then <span class="math-container">$g(0)=g(1)=0$</span>.</p>
<p>Then there exists <span class="math-container">$c\in(0,1)$</span> such that <span class="math-container">$g'(c)=0$</span>.</p>
<p><span class="math-container">$g'(c)=f(c)+cf'(c)-f(1)=0\implies \frac{f(1)-f(c)}{c}=f'(c)$</span>. Thus you have your answer. And yes, regarding the intuition for this, the solution of the ODE was a good observation.</p>
|
1,320,874 | <p>I am trying to answer the following: Does the congruence $x^2 \equiv -1$ (mod $p$) have any solutions if $p \equiv 3$ (mod $4$)? If so, how many incongruent solutions does it have? If not, why not?</p>
<p>I know from the previous part of the question that if $p$ is a prime and $p \equiv 1$ (mod $4$), then the congruence $x^2 \equiv -1$ (mod $p$) has two incongruent solutions, namely $x \equiv \pm (\dfrac{p-1}{2})!$ (mod $p$).</p>
<p>I am completely unsure how to even approach solving this problem. Any hints would be appreciated.</p>
| André Nicolas | 6,312 | <p>The following is a proof that uses Wilson's Theorem. There are "easier" (and group-theoretically more natural) proofs that do not use Wilson's Theorem. The idea is due to Dirichlet. </p>
<p>Let $p$ be a prime of the form $4k+3$. We will assume that the congruence $x^2\equiv -1\pmod{p}$ has a solution, and use Wilson's Theorem to derive a contradiction.</p>
<p>If $x^2\equiv -1\pmod{p}$ had a solution $c$, then it would have exactly $2$ solutions, namely $c$ and $p-c$.</p>
<p>If $1\le a\le p-1$ and $1\le b\le p-1$, with $a\ne b$, call $a$ and $b$ <strong>buddies</strong> if $ab\equiv -1\pmod{p}$. Apart from $c$ and $p-c$, all numbers $a$ in the interval $1\le a\le p-1$ has a unique buddy $b\ne a$ in the interval $1\le b\le p-1$. So the numbers in the interval are divided into $\frac{p-3}{2}$ pairs of buddies, plus the numbers $c$ and $p-c$.</p>
<p>It follows that
$$(p-1)!\equiv (-1)^{(p-3)/2}(c)(p-c)\equiv 1\pmod{p}.$$
This contradicts Wilson's Theorem. It follows that there cannot be a $c$ such that $c^2\equiv -1\pmod{p}$.</p>
<p><strong>Remark:</strong> The OP alluded to a Wilson's Theorem proof of the fact that if $p$ is of the form $4k+1$, then the congruence $x^2\equiv -1\pmod{p}$ has a solution. We give a <em>different</em> Wilson's Theorem based proof, that uses the ideas in the answer above.</p>
<p>Suppose to the contrary that $p=4k+1$ and the congruence $x^2\equiv -1\pmod{p}$ has no solution. Then all the numbers from $1$ to $p-1$ are divided into buddy pairs. It follows that $(p-1)!\equiv (-1)^{2k}\equiv 1\pmod{p}$, contradicting Wilson's Theorem.</p>
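<p>The dichotomy proved here and in the remark is easy to confirm by brute force for small primes; the helper functions below are my own names for the check, not part of the argument:</p>

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def has_sqrt_of_minus_one(p):
    # is x^2 = -1 (mod p) solvable?
    return any((x * x + 1) % p == 0 for x in range(1, p))

for p in (q for q in range(3, 500) if is_prime(q)):
    # solvable exactly when p = 1 (mod 4)
    assert has_sqrt_of_minus_one(p) == (p % 4 == 1)
```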
|
1,320,874 | <p>I am trying to answer the following: Does the congruence $x^2 \equiv -1$ (mod $p$) have any solutions if $p \equiv 3$ (mod $4$)? If so, how many incongruent solutions does it have? If not, why not?</p>
<p>I know from the previous part of the question that if $p$ is a prime and $p \equiv 1$ (mod $4$), then the congruence $x^2 \equiv -1$ (mod $p$) has two incongruent solutions, namely $x \equiv \pm (\dfrac{p-1}{2})!$ (mod $p$).</p>
<p>I am completely unsure how to even approach solving this problem. Any hints would be appreciated.</p>
| user26486 | 107,671 | <p>We assume $p$ is an odd prime. You know that $$\,p\equiv 1\pmod{\! 4}\,\Rightarrow\, (x^2\equiv -1\pmod{\! p}\,\text{ is solvable})$$ by your constructive proof, namely $x\equiv\pm\left(\frac{p-1}{2}\right)!\pmod{\! p}$ works as a solution. Proofs of this have been discussed <a href="https://math.stackexchange.com/questions/1275461/elementary-proof-that-1-is-a-square-in-mathbbf-p-for-p-1-mod4">here</a>. </p>
<p>Now you want to prove it in the other direction, namely $$(x^2\equiv -1\!\pmod{\! p} \,\text{ is solvable})\,\Rightarrow\,p\equiv 1\pmod{\! 4}$$ </p>
<p>For two proofs, see <a href="https://math.stackexchange.com/a/1265465/107671">this answer</a> with $(a,b)=(x,1)$, where it's proved that more generally: </p>
<p>$p\nmid a,b$ and $p\mid a^2+b^2$ implies $p\equiv 1\pmod{\! 4}$. </p>
<p>You can show $\iff$ at once by proving Euler's criterion (I try to prove it simply <a href="https://math.stackexchange.com/a/1320903/107671">in this answer</a>, or you can see <a href="https://math.stackexchange.com/questions/799554/a-combinatorial-proof-of-eulers-criterion-tfracap-equiv-a-fracp-1">this question</a> for combinatorial proofs, in which André Nicolas extends his combinatorial argument in your question to prove a more general statement).</p>
|
73,238 | <p>How can I calculate the solid angle that a sphere of radius R subtends at a point P? I would expect the result to be a function of the radius and the distance (which I'll call d) between the center of the sphere and P. I would also expect this angle to be 4π when d < R, and 2π when d = R, and less than 2π when d > R.</p>
<p>I think what I really need is some pointers on how to solve the integral (taken from <a href="http://en.wikipedia.org/wiki/Solid_angle" rel="nofollow">wikipedia</a>) $\Omega = \iint_S \frac { \vec{r} \cdot \hat{n} \,dS }{r^3}$ given a parameterization of a sphere. I don't know how to start to set this up so any and all help is appreciated!</p>
<p>Ideally I would like to derive the answer from this surface integral, not geometrically, because there are other parametric surfaces I would like to know the solid angle for, which might be difficult if not impossible to solve without integration.</p>
<p>*I reposted this from mathoverflow because this isn't a research-level question.</p>
| Ross Millikan | 1,827 | <p>Is your "slope of the axis in the x,y and z axes" the <a href="http://mathworld.wolfram.com/DirectionCosine.html" rel="nofollow">direction cosines</a>? You need a center $c$ as well, presumably a point on the axis. Given the unit vector $\vec{v}$ along the axis one way is to find two perpendicular unit vectors. As long as $\vec{v}$ is not along the $x$ axis, you can normalize $\vec{v} \times (1,0,0)$ for one and call it $\vec{a}$, then let $\vec{b}=\vec{v} \times \vec{a}$. Then your satellite location is $c+R\cos\omega t\vec{a}+R\sin \omega t\vec{b}$ </p>
|
2,402,429 | <p>Let $P(x) = x^3 + 2x^2+3x+4$ and $a$ be the root of equation $x^4+x^3+x^2+x+1=0$.</p>
<p>Find the value of $P(a)P(a^2)P(a^3)P(a^4)$</p>
<p>Is my answer correct ?</p>
<p>Since the roots of the equation $x^4+x^3+x^2+x+1=0$ are the primitive $5^{th}$ roots of unity,</p>
<p>so $a, a^2, a^3, a^4$ are all roots of $x^4+x^3+x^2+x+1=0$.</p>
<p>$P(a)P(a^4)=(a^3+2a^2+3a+4)(\frac{1}{a^3}+\frac{2}{a^2}+\frac{3}{a}+4)= 15+5a^4+5a$</p>
<p>Similarly, $P(a^2)P(a^3)=15+5a^3+5a^2$</p>
<p>$P(a)P(a^2)P(a^3)P(a^4)=(15+5a^4+5a)(15+5a^3+5a^2)=125$</p>
| Batominovski | 72,152 | <p>Suppose $P(x)=x^3+2x^2+3x+4=(x-p)(x-q)(x-r)$ for some $p,q,r\in\mathbb{C}$. Then, $$\prod_{j=1}^4\,P\left(a^j\right)=Q(p)\,Q(q)\,Q(r)\,,$$
where $Q(x):=x^4+x^3+x^2+x+1$. Now, $$Q(x)=(x-1)\,P(x)+5\,.$$
Thus, $Q(p)=Q(q)=Q(r)=5$. </p>
<hr>
<p>This is actually quite a nice technique. Let $P(x)$ and $Q(x)$ be two nonconstant polynomials in $x$ over a field $K$. Write $\bar{P}$ and $\bar{Q}$ for the leading coefficients of $P(x)$ and $Q(x)$, respectively. Suppose that $t_1,t_2,\ldots,t_n$ are the roots of $Q(x)$ in the algebraic closure of $K$ (with multiplicities). Then, $$\prod_{j=1}^n\,P\left(t_j\right)=(-1)^{mn}\,\frac{\bar{P}^n}{\bar{Q}^m}\,\prod_{i=1}^m\,Q\left(s_i\right)\,,\tag{1}$$
where $s_1,s_2,\ldots,s_m$ are the roots of $P(x)$ in the algebraic closure of $K$ (with multiplicities). </p>
<p>If $m>n$, then write
$$P(x)=Q(x)\,A(x)+B(x)$$
for some $A(x),B(x)\in K[x]$ with $B(x)$ having degree less than $n$. Clearly, we have $$\prod_{j=1}^n\,P\left(t_j\right)=\prod_{j=1}^n\,B\left(t_j\right)\,,\tag{2}$$
Then, we can use the paragraph below using $B(x)$ in place of $P(x)$ to simplify things even more, provided that $B(x)$ is nonconstant (or nonlinear).</p>
<p>If $m \leq n$, then we write $Q(x)=P(x)\,U(x)+V(x)$ for some polynomials $U(x),V(x)\in K[x]$, with $V(x)$ having degree less than $m$. Then, from $(1)$, $$\prod_{j=1}^n\,P\left(t_j\right)=(-1)^{mn}\,\frac{\bar{P}^n}{\bar{Q}^m}\,\prod_{i=1}^m\,V\left(s_i\right)\,.\tag{3}$$
In fact, we can play this game again between $V(x)$ and $P(x)$ to make further degree reductions, provided that $V(x)$ is nonconstant (or nonlinear).</p>
<hr>
<p>For example, let $P(x)=x^5+x^2+2x+1$ and $Q(x)=x^3-3x-2$. Then, we see that $P(x)=A(x)\,Q(x)+B(x)$ with $A(x)=x^2+3$ and $B(x)=3x^2+11x+7$. If $t_1$, $t_2$, and $t_3$ are the roots of $Q(x)$, then
$$\prod_{j=1}^3\,P\left(t_j\right)=\prod_{j=1}^3\,B\left(t_j\right)\,.$$
Note that $Q(x)=B(x)\,U(x)+V(x)$, where $U(x)=\frac{x}{3}-\frac{11}{9}$ and $V(x)=\frac{73}{9}x+\frac{59}{9}$. Using $(3)$, we obtain
$$\prod_{j=1}^3\,B\left(t_j\right)=(-1)^{2\cdot 3}\,\frac{3^3}{1^2}\,\prod_{i=1}^2\,V\left(r_i\right)\,,$$
where $r_1$ and $r_2$ are the roots of $B(x)$. That is,
$$\prod_{j=1}^3\,P\left(t_j\right)=3^3\,\prod_{i=1}^2\,V\left(r_i\right)=3^3\left(\frac{73}{9}\right)^2\left(-\frac{59}{73}-r_1\right)\left(-\frac{59}{73}-r_2\right)=3^2\left(\frac{73}{9}\right)^2\,B\left(-\frac{59}{73}\right)\,.$$
That is, $\prod_{j=1}^3\,P\left(t_j\right)=41$. This can be easily verified as $t_1=-1$, $t_2=-1$, and $t_3=2$, so that $P\left(t_1\right)=-1$, $P\left(t_2\right)=-1$, and $P\left(t_3\right)=41$. (I picked an easy-to-check example, of course, so this reduction method is actually more complicated than simply computing the values of $P\left(t_j\right)$'s.)</p>
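<p>Both products can be confirmed numerically from the roots themselves (a quick script of mine, using the factorizations already given in the text):</p>

```python
import cmath

def P1(x):
    return x**3 + 2*x**2 + 3*x + 4

# the roots of Q(x) = x^4 + x^3 + x^2 + x + 1 are the primitive 5th roots of unity
prod1 = 1
for k in range(1, 5):
    prod1 *= P1(cmath.exp(2j * cmath.pi * k / 5))
assert abs(prod1 - 125) < 1e-9

def P2(x):
    return x**5 + x**2 + 2*x + 1

# Q(x) = x^3 - 3x - 2 = (x + 1)^2 (x - 2) has roots -1, -1, 2
prod2 = P2(-1) * P2(-1) * P2(2)
assert prod2 == 41
```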
|
19,842 | <p>Seeing <a href="https://stackoverflow.com/help/self-answer">this</a>, I thought this sort of thing was encouraged, and to avoid the question becoming boring, I didn't answer it immediately but waited, and I did mention that I knew the answer; maybe it just looked as though I didn't know the answer. Anyway, what's the policy regarding such things?</p>
<p>See <a href="https://math.stackexchange.com/questions/1176644/unspecific-function-integration">this</a> and <a href="https://math.stackexchange.com/questions/1176622/differential-equation-sin2x-left-frac-rm-dy-rm-dx-sqrt-tan-x-ri">this</a>. I edited one to include the answer in the question; I just didn't edit the other.</p>
| GEdgar | 442 | <p><a href="http://meta.math.stackexchange.com/a/4233/442">Puzzle Questions</a> are allowed in math.se ... But include the information in the original post that you know the answer!</p>
<p><a href="https://math.stackexchange.com/questions/351333/evaluation-of-a-continued-fraction">HERE</a> is an example of mine.</p>
|
408,590 | <p>I'm looking for references (books/lecture notes) for :</p>
<ul>
<li>Cardinality without choice, Scott's trick;</li>
<li>Cardinal arithmetic without choice.</li>
</ul>
<p>Any suggestions? Thanks in advance.</p>
| Asaf Karagila | 622 | <ol>
<li>Jech, <strong>The Axiom of Choice</strong>.</li>
<li>Herrlich, <strong>The Axiom of Choice</strong>.</li>
<li>Halbeisen, <strong>Combinatorial Set Theory</strong>.</li>
<li>Jech, <strong>Set Theory, 3rd Millennium Edition</strong>.</li>
</ol>
<p>Jech's (first) book is somewhat old, and some progress has been made since then, but I don't think much about cardinal arithmetic itself has been discovered since that book was published (on the ordering of cardinals, other structural properties, and complexities, sure, there has been progress).</p>
<p>Herrlich's book is not a set theoretical book per se, but it has a reasonable chapter about basic failures of cardinal arithmetics. In particular with the existence of infinite Dedekind-finite sets, which give us a great source of interest for counterexamples.</p>
<p>For the most part, let me tell you what we know about cardinal arithmetic without the axiom of choice:</p>
<ul>
<li>The basic addition, multiplication and exponentiation are well-defined as finitary operations. These are easily found in <em>any</em> set-theoretical textbook.</li>
<li>Everything else can fail miserably.</li>
</ul>
<p>Some interesting papers:</p>
<ol>
<li>Rubin, Jean E. <strong><a href="http://dx.doi.org/10.1007/BF02771738" rel="nofollow">Non-constructive properties of cardinal numbers.</a></strong> <em>Israel J. Math.</em> <strong>10</strong> (1971), 504–525.</li>
<li>Halbeisen, Lorenz; Shelah, Saharon <strong><a href="http://www.jstor.org/stable/2275247" rel="nofollow">Consequences of arithmetic for set theory.</a></strong> <em>J. Symbolic Logic</em> <strong>59</strong> (1994), no. 1, 30–40. </li>
<li>Halbeisen, Lorenz; Shelah, Saharon <strong><a href="http://www.jstor.org/stable/2687776" rel="nofollow">Relations between some cardinals in the absence of the axiom of choice.</a></strong> <em>Bull. Symbolic Logic</em> <strong>7</strong> (2001), no. 2, 237–261. </li>
</ol>
|
3,066,020 | <p>I’m reading Hans Kurzweil ‘s “The Theory of Finite Groups”, where it says</p>
<blockquote>
<p>1.6.4 Let <span class="math-container">$N_1, . . . , N_n$</span> be normal subgroups of <span class="math-container">$G$</span>. Then the mapping <span class="math-container">$$α: G→G/N_1\times ··· \times G/N_n$$</span> given by <span class="math-container">$$g \mapsto
(gN_1,...,gN_n)$$</span> is a homomorphism with <span class="math-container">$\operatorname{Ker}α = \cap_i N_i$</span>. In
particular, <span class="math-container">$G/\cap_i N_i$</span> is isomorphic to a subgroup of <span class="math-container">$G/N_1
\times ··· \times G/N_n$</span>.</p>
</blockquote>
<p>I’m confused here: can we write <span class="math-container">$$G/N_1\times \cdots \times G/N_n$$</span>
? To write a product of groups like this, it’s required that each <span class="math-container">$G/N_i$</span> have only <span class="math-container">$e$</span> as a common element.</p>
<p>What if <span class="math-container">$$G=C_2 \times C_3 \times C_5 \times C_7$$</span></p>
<p><span class="math-container">$$N_1=C_2 \times C_3 $$</span></p>
<p><span class="math-container">$$N_2=C_2 \times C_5 $$</span></p>
<p><span class="math-container">$$N_3=C_2 \times C_7 $$</span></p>
<p>, shouldn’t <span class="math-container">$$G/N_1 \cong C_5 \times C_7$$</span></p>
<p><span class="math-container">$$G/N_2 \cong C_3 \times C_7$$</span></p>
<p><span class="math-container">$$G/N_3 \cong C_3 \times C_5$$</span></p>
<p>, and they have common elements besides <span class="math-container">$e$</span>?</p>
| cqfd | 588,038 | <blockquote>
<p>I’m confused here: can we write <span class="math-container">$$G/N_1\times ··· \times G/N_n$$</span> ? To
write a product of groups as this, it’s required that each <span class="math-container">$G/N_i$</span> has
only <span class="math-container">$e$</span> as common element.</p>
</blockquote>
<p>Note that <span class="math-container">$G/N_i$</span> is not a subgroup of <span class="math-container">$G$</span>. So here we are not considering the <em>internal direct product</em>, which requires the condition you mentioned above in order to be a group. Here <span class="math-container">$G/N_1\times ··· \times G/N_n$</span> represents the <em>external direct product</em>, which is a group under the componentwise operation.</p>
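<p>For a concrete instance of 1.6.4 with the external product, one can take <span class="math-container">$G=\mathbb{Z}/30$</span> with <span class="math-container">$N_1=2\mathbb{Z}/30$</span>, <span class="math-container">$N_2=3\mathbb{Z}/30$</span>, <span class="math-container">$N_3=5\mathbb{Z}/30$</span>; the map <span class="math-container">$g\mapsto (g\bmod 2, g\bmod 3, g\bmod 5)$</span> plays the role of <span class="math-container">$\alpha$</span>. The toy check below (names mine; this is just the Chinese remainder theorem) verifies it is a homomorphism with kernel <span class="math-container">$\cap_i N_i=\{0\}$</span>:</p>

```python
from itertools import product

G = range(30)

def alpha(g):
    return (g % 2, g % 3, g % 5)

# alpha is a homomorphism into the external product Z/2 x Z/3 x Z/5
for g, h in product(G, repeat=2):
    s = alpha((g + h) % 30)
    assert s == tuple((a + b) % m for a, b, m in zip(alpha(g), alpha(h), (2, 3, 5)))

# its kernel is the intersection N1 ∩ N2 ∩ N3 = {0}
kernel = {g for g in G if alpha(g) == (0, 0, 0)}
N1 = {g for g in G if g % 2 == 0}
N2 = {g for g in G if g % 3 == 0}
N3 = {g for g in G if g % 5 == 0}
assert kernel == (N1 & N2 & N3) == {0}
```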
|
1,239,211 | <p>I have been allowed to attend some preparatory lectures for a seminar on the Goodwillie Calculus of Functors. I found in my notes from one of the lectures two statements which I would like to ask about.</p>
<p>The first one is probably straightforward and I'm guessing is related to Whitehead-type theorems. Still, I would still like a detailed explanation of what it means.</p>
<blockquote>
<ol>
<li>Every homotopy type is a filtered colimit of finite CW complexes.</li>
</ol>
</blockquote>
<p>The second statement is a lot more problematic because I don't understand any of the context. Here it is:</p>
<blockquote>
<ol start="2">
<li>We want to look at (extraordinary) homology theories $h_\ast :\mathsf{Top}\rightarrow \mathsf{grAb}$ which commute with filtered colimits.</li>
</ol>
</blockquote>
<p>My question is: why do we want to study homology theories which commute with filtered colimits? So that we may reduce to (finite) CW complexes? Is there anything else?</p>
<p>This statement is preceded in my notes by the following theorem of Whitehead: </p>
<p><strong><em>Theorem.</strong> For any extraordinary homology theory which is finitary ($\overset?=$ determined by values on finite CW complexes) there exists a spectrum $E\in \mathsf{Sp}$ such that $h_\ast (X)=\pi_\ast (E\wedge X)$ where $\pi _\ast$ are stable homotopy groups and $\wedge $ is the smash product.</em></p>
<p>Now I don't yet know anything about either spectra no stable homotopy, so I can't make out much of this theorem myself.</p>
| Kevin Arlin | 31,228 | <p>For your first claim: every weak homotopy type can be represented by some CW complex $X$. This is one of Whitehead's most famous theorems. But $X$ is given as the union of its finite-dimensional skeleta $X^n$, and such a nested union is a particular example of a filtered colimit.</p>
<p>The reason to restrict to finitary homology theories, equivalently, those which commute with filtered colimits, is to get the best possible representability theorem. In cohomology theory Brown's original representability theorem says that <em>every</em> extraordinary cohomology theory is representable by a spectrum. </p>
<p>From a cohomology theory represented by a spectrum $X$ we can get a homology theory defined on finite-dimensional CW complexes by Spanier-Whitehead duality, and so a finitary homology theory by the claim from the first paragraph, and this process is reversible (this is morally the reason for the theorem of Whitehead you cite.) But there is no Spanier-Whitehead duality for arbitrary spaces, so there's no way to use Brown representability to get a spectrum representing a non-finitary homology theory. And indeed not all extraordinary homology theories are representable!</p>
|
1,627,357 | <p>Is there a simple way to prove $$\frac{1}{\sqrt{1-x}} \le e^x$$ on $x \in [0,1/2]$?</p>
<p>Some of my observations from plots, etc.:</p>
<ul>
<li>Equality is attained at $x=0$ and near $x=0.8$.</li>
<li>The derivative is positive at $x=0$, and zero just after $x=0.5$. [I don't know how to find this zero analytically.]</li>
<li>I tried to work with Taylor series. I verified with plots that the following is true on $[0,1/2]$:
$$\frac{1}{\sqrt{1-x}} = 1 + \frac{x}{2} + \frac{3x^2}{8} + \frac{3/4}{(1-\xi)^{5/2}} x^3 \le 1 + \frac{x}{2} + \frac{3}{8} x^2 + \frac{5 \sqrt{2} x^3}{6} \le 1 + x + \frac{x^2}{2} + \frac{x^3}{6} \le e^x,$$
but proving the last inequality is a bit messy.</li>
</ul>
| André Nicolas | 6,312 | <p>For our interval, the inequality is equivalent to $1-x\ge e^{-2x}$. (We squared and flipped.)</p>
<p>This inequality can be proved using differential calculus. Let $f(x)=1-x-e^{-2x}$. Then $f'(x)=2e^{-2x}-1$. So $f(x)$ is increasing until $x=\frac{\ln 2}{2}\approx 0.34$ and then decreasing. Thus all we need to do is check its value at $x=1/2$: we have $f(1/2)=\frac{1}{2}-e^{-1}>0$, and since $f(0)=0$, it follows that $f\ge 0$ on the whole interval.</p>
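<p>A quick grid check of the reformulated inequality (my own script, merely corroborating the calculus argument) confirms that $f(x)=1-x-e^{-2x}$ stays nonnegative on $[0,1/2]$, with equality at $x=0$ and an interior maximum near $\frac{\ln 2}{2}$:</p>

```python
import math

def f(x):
    return 1 - x - math.exp(-2 * x)

xs = [0.5 * k / 1000 for k in range(1001)]   # grid on [0, 1/2]
assert all(f(x) >= -1e-15 for x in xs)       # f >= 0, up to rounding
assert abs(f(0.0)) < 1e-15                   # equality at x = 0
xmax = max(xs, key=f)
assert abs(xmax - math.log(2) / 2) < 1e-3    # maximum near ln(2)/2, as computed above
```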
|
1,627,357 | <p>Is there a simple way to prove $$\frac{1}{\sqrt{1-x}} \le e^x$$ on $x \in [0,1/2]$?</p>
<p>Some of my observations from plots, etc.:</p>
<ul>
<li>Equality is attained at $x=0$ and near $x=0.8$.</li>
<li>The derivative is positive at $x=0$, and zero just after $x=0.5$. [I don't know how to find this zero analytically.]</li>
<li>I tried to work with Taylor series. I verified with plots that the following is true on $[0,1/2]$:
$$\frac{1}{\sqrt{1-x}} = 1 + \frac{x}{2} + \frac{3x^2}{8} + \frac{3/4}{(1-\xi)^{5/2}} x^3 \le 1 + \frac{x}{2} + \frac{3}{8} x^2 + \frac{5 \sqrt{2} x^3}{6} \le 1 + x + \frac{x^2}{2} + \frac{x^3}{6} \le e^x,$$
but proving the last inequality is a bit messy.</li>
</ul>
| πr8 | 302,863 | <p>If $f(x)=(1-x)e^{2x}$, then $f'(x)=(1-2x)e^{2x}=0$ when $x=\frac{1}{2}$. Drawing a graph/checking the second derivative shows it to be a maximum, whence $1=f(0)\le f(x)\le f(1/2)=\frac{e}{2}$ on $[0,\frac{1}{2}]$. We thus have:</p>
<p>$$1\le(1-x)e^{2x}\le\frac{e}{2}$$</p>
<p>$$\implies \frac{1}{1-x}\le e^{2x}\le\frac{e}{2(1-x)}$$</p>
<p>$$\implies \frac{1}{\sqrt{1-x}}\le e^x \le \sqrt{\frac{e}{2}}\frac{1}{\sqrt{1-x}}$$</p>
<p>on the given interval.</p>
<p>The reason I chose this approach is that much in the same way as young children make most of their arithmetic mistakes when dealing with fractions and negative numbers, I find myself far more at ease when fractions, square roots, inverse functions and such are all cleared out (I'm still very averse to the quotient rule for differentiation). So dealing with $(1-x)e^{2x}$ is greatly preferable for me, and is set up in such a way that the required bounds should pop out quite naturally.</p>
|
917,302 | <p>If $p(x)$ is a polynomial of degree 4 such that $p(2)=p(-2)=p(-3)=-1$ and $p(1)=p(-1)=1$, then find $p(0)$.</p>
| user84413 | 84,413 | <p>Using a difference table, with $p(0)=c$, gives</p>
<p>$-1\hspace{.5 in}-1\hspace{.5 in}1\hspace{.5 in}c\hspace{.6 in}1\hspace{.5 in}-1$</p>
<p>$\hspace{.4 in}0\hspace{.64 in}2\hspace{.43 in}c-1\hspace{.35 in}1-c\hspace{.4 in}-2$</p>
<p>$\hspace{.7 in}2\hspace{.47 in}c-3\hspace{.2 in}-2c+2\hspace{.2 in}-3+c$</p>
<p>$\hspace{.9 in}c-5\hspace{.3 in}-3c+5\hspace{.3 in}3c-5$</p>
<p>$\hspace{1.1 in}-4c+10\hspace{.3 in}6c-10$</p>
<p>$\hspace{1.5 in}10c-20$</p>
<p>Then $10c-20=0\implies c=2$.</p>
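<p>The same computation can be phrased as: the fifth forward difference of a degree-$4$ polynomial vanishes, which yields one linear equation for the unknown value $c=p(0)$. The short script below (my own formulation) reproduces the $10c-20=0$ of the table:</p>

```python
from math import comb
from fractions import Fraction

# values of p at x = -3, -2, -1, 0, 1, 2; each entry is (constant, coefficient of c)
vals = [(-1, 0), (-1, 0), (1, 0), (0, 1), (1, 0), (-1, 0)]

# fifth forward difference: sum_k (-1)^(5-k) * C(5,k) * p(-3+k) = 0
const = sum((-1) ** (5 - k) * comb(5, k) * vals[k][0] for k in range(6))
coeff = sum((-1) ** (5 - k) * comb(5, k) * vals[k][1] for k in range(6))
assert (coeff, const) == (10, -20)   # i.e. 10c - 20 = 0, exactly as in the table
c = Fraction(-const, coeff)
assert c == 2
```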
|
3,884,581 | <p>Please don't just throw an answer at me; please explain how you arrived at it, because I've been fiddling with this for the past 30 minutes...</p>
| ym94 | 630,901 | <p>Using the binomial theorem, we see that</p>
<p><span class="math-container">$u:=(a+b)^2=a^2+2ab+b^2=(a^2+b^2)+2(ab)=6+8=14$</span>. Therefore, <span class="math-container">$a+b=\pm \sqrt{14}$</span>.</p>
<p>Analogously,</p>
<p><span class="math-container">$v:=(a-b)^2=a^2-2ab+b^2=(a^2+b^2)-2(ab)=6-8=-2$</span>. Therefore, <span class="math-container">$a-b=\pm i\sqrt{2}$</span>. Finally, note that</p>
<p><span class="math-container">$a=\frac{(a+b)+(a-b)}{2}$</span> and <span class="math-container">$b=\frac{(a+b)-(a-b)}{2}$</span>, which gives four pairs of solutions <span class="math-container">$(a,b)$</span>.</p>
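<p>The four resulting pairs can be verified with complex arithmetic (sign conventions and tolerance below are mine):</p>

```python
import cmath

s = cmath.sqrt(14)        # a + b = +/- sqrt(14)
d = 1j * cmath.sqrt(2)    # a - b = +/- i sqrt(2)

pairs = [((e * s + f * d) / 2, (e * s - f * d) / 2) for e in (1, -1) for f in (1, -1)]
assert len(pairs) == 4
for a, b in pairs:
    assert abs(a * a + b * b - 6) < 1e-12   # a^2 + b^2 = 6
    assert abs(a * b - 4) < 1e-12           # ab = 4
```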
|
622,090 | <p>We are asked to solve the following linear system</p>
<p>$$x_1-3x_2+x_3=1$$
$$2x_1-x_2-2x_3=2$$
$$x_1+2x_2-3x_3=-1$$</p>
<p>by using the Gauss-Jordan elimination method. The augmented matrix of the linear system is $$\left(\begin{array}{ccc|c}1 & -3 & 1 & 1 \\2 & -1 & -2 & 2 \\1 & 2 & -3 & -1\end{array}\right).$$ By a series of elementary row operations, we obtain $$\left(\begin{array}{ccc|c}1 & -3 & 1 & 1 \\0 & 5 & -4 & 0 \\0 & 0 & 0 & -2\end{array}\right).$$ My question is: although we are asked to solve the linear system using the Gauss-Jordan elimination method, can we stop immediately and conclude that the linear system is inconsistent, without applying any further elementary row operations to transform this matrix into reduced row-echelon form?</p>
| Matheman | 117,904 | <p>$$ \operatorname{rank} \left(\begin{array}{ccc|c}1 & -3 & 1 & 1 \\2 & -1 & -2 & 2 \\1 & 2 & -3 & -1\end{array}\right)=3$$ because $$\det\left(\begin{array}{ccc}-3 & 1 & 1 \\-1 & -2 & 2 \\2 & -3 & -1\end{array}\right)\neq 0$$ and the rank of the coefficient matrix $$\left(\begin{array}{ccc}1 & -3 & 1 \\2 & -1 & -2 \\1 & 2 & -3 \end{array}\right)$$
is $2$ because its determinant is $0$ and $$\det\left(\begin{array}{cc}1 & -3 \\2 & -1 \end{array}\right)\neq 0$$
|
1,989,182 | <p>Why does only one particular solution allow enough degrees of freedom for the general solution?</p>
| lisyarus | 135,314 | <p>This only works if the differential equation is linear, so it can be expressed as $Lx=y$, where $L$ is a <em>linear</em> differential operator. Then it is a basic theorem of linear algebra that if $x$ is some solution, then any solution is of the form $x+a$, where $La=0$.</p>
<ul>
<li><p>First, applying $L$ to $x+a$, we find $L(x+a)=Lx+La=Lx+0=Lx=y$, where we used the linearity of $L$ and that $x$ is a solution. Thus, we see that $x+a$ is indeed a solution.</p></li>
<li><p>Second, if $x'$ is another solution, define $a=x'-x$, so that $x'=x+a$. Then $La=L(x'-x)=Lx'-Lx=y-y=0$, thus $La=0$, as desired.</p></li>
</ul>
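<p>The same structure is easy to see in finite dimensions: for a singular matrix $L$ (the concrete numbers below are my own toy example), every solution of $Lx=y$ is one particular solution plus an element of the kernel:</p>

```python
L = [[1, 2], [2, 4]]   # singular: the rows are proportional
y = [3, 6]

def apply(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

x_particular = [3, 0]   # one solution of Lx = y
kernel_dir = [2, -1]    # spans ker L
assert apply(L, x_particular) == y
assert apply(L, kernel_dir) == [0, 0]

# x_particular + t * kernel_dir solves Lx = y for every t
for t in (-2, 0, 1, 7):
    x = [p + t * k for p, k in zip(x_particular, kernel_dir)]
    assert apply(L, x) == y
```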
|
4,564,882 | <p>Suppose there are two types of weather: Sunny and Rainy. <br />
The probability that a sunny day is followed by a sunny day is 70% and followed by a rainy day is 30%. <br />
The probability that a rainy day is followed by a rainy day is 60% and followed by a sunny day is 40%. <br />
In a year (365 days), how many days do we expect to be sunny?</p>
<p>Based on the question above, I only get the transition matrix
<span class="math-container">$
\begin{bmatrix}
0.7 & 0.3\\
0.4 & 0.6
\end{bmatrix}
$</span>
May I ask how do I calculate the expected number of sunny days in a year?
Thanks in advance.</p>
| Vercingetorix | 848,746 | <p>Just expanding on what the comments said. Let <span class="math-container">$P$</span> be the probability transition matrix. What you want to find is <span class="math-container">$(P^T)^{365}\pmatrix{1\\0}$</span>.</p>
<p>The idea is that over long enough time spans (e.g. 365 days) we reach a stationary probability distribution <span class="math-container">$v$</span>, since:</p>
<p><span class="math-container">$(P^T)^{364} \pmatrix{1\\0} \approxeq (P^T)^{365}\pmatrix{1\\0}$</span>. So let <span class="math-container">$v$</span> be the long-run distribution <span class="math-container">$(P^T)^{364}\pmatrix{1\\0}$</span> to get that <span class="math-container">$P^Tv = v$</span>, and write <span class="math-container">$v = (v_1, v_2)$</span>.</p>
<p>This gives us three equations:</p>
<p><span class="math-container">$v_1 + v_2 = 1$</span></p>
<p><span class="math-container">$0.7v_1 + 0.4v_2 = v_1$</span></p>
<p><span class="math-container">$0.3v_1 + 0.6v_2 = v_2$</span></p>
<p>which gives us that <span class="math-container">$v_1 = 4/7$</span> and <span class="math-container">$v_2 = 3/7$</span></p>
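A short numerical sketch (illustrative code, with the hypothetical assumption that day 1 is sunny) confirms the stationary distribution and gives the expected count asked for, about $365\cdot\frac47\approx 208.6$ sunny days:

```python
import numpy as np

P = np.array([[0.7, 0.3],   # sunny -> (sunny, rainy)
              [0.4, 0.6]])  # rainy -> (sunny, rainy)

v = np.array([1.0, 0.0])    # hypothetical start: day 1 is sunny
expected_sunny = 0.0
for _ in range(365):
    expected_sunny += v[0]  # add P(sunny) on the current day
    v = v @ P               # distribution for the next day

stationary = np.array([4 / 7, 3 / 7])
```

The chain converges to the stationary distribution within a few days, so the starting assumption changes the answer by less than one day.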
|
3,984,230 | <blockquote>
<p><span class="math-container">$2^x=4x$</span></p>
</blockquote>
<p>I can't seem to solve this equation. The furthest I have been able to get is
<span class="math-container">$x-\log_2(x)=2$</span>, but I can't figure how to solve. When I graph <span class="math-container">$2^x$</span> and <span class="math-container">$4x$</span> they intersect at <span class="math-container">$x=4$</span> and <span class="math-container">$x=0.31$</span>, so I know it is possible to solve.</p>
| David G. Stork | 210,401 | <p>Classic problem:</p>
<p><span class="math-container">$$ x = -\frac{W\left(-\frac{\log (2)}{4}\right)}{\log (2)}, {\rm or}\ -\frac{W_{-1}\left(-\frac{\log
(2)}{4}\right)}{\log (2)}$$</span></p>
<p>where <span class="math-container">$W$</span> is the <a href="http://wiki.analytica.com/ProductLog#:%7E:text=ProductLog(z),-Returns%20the%20value&text=where%20Exp%20is%20the%20exponential,exponentials%20in%20the%20same%20equation." rel="nofollow noreferrer">ProductLog</a> function.</p>
<p>Numerical evaluation of these analytic solutions: <span class="math-container">$.309907$</span> and <span class="math-container">$4$</span>.</p>
<p><a href="https://i.stack.imgur.com/sOh9M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sOh9M.png" alt="enter image description here" /></a></p>
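The two roots can be cross-checked without the ProductLog function, e.g. by plain bisection (an illustrative sketch):

```python
def f(x):
    # f vanishes exactly where 2^x = 4x
    return 2.0 ** x - 4.0 * x

def bisect(lo, hi, tol=1e-12):
    # standard bisection; assumes f changes sign on [lo, hi]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

root1 = bisect(0.0, 1.0)   # the small root, near 0.309907
root2 = bisect(3.5, 4.5)   # the root at x = 4
```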
|
2,221,897 | <p>Show that </p>
<p>$$\lim_{n \to \infty} \sum_{k=3}^n \frac{2k}{k^2+n^2+1} = \ln(2)$$</p>
<p>How many ways are there to prove it?</p>
<p>Is there a standard way?</p>
<p>I was thinking about making it a Riemann sum.
Or telescoping.</p>
<p>What is the easiest way?
What is the shortest way?</p>
| Jacky Chong | 369,395 | <p>Observe
\begin{align}
\sum^n_{k=3}\frac{2k}{n^2+k^2+1} = \frac{1}{n}\sum^n_{k=3} \frac{2(k/n)}{1+n^{-2}+(k/n)^2}
\end{align}
then we have
\begin{align}
\frac{1}{1+n}\sum^n_{k=3} \frac{2k/(1+n)}{1+k^2(1+n)^{-2}} \leq \frac{1}{n}\sum^n_{k=3} \frac{2(k/n)}{1+n^{-2}+k^2n^{-2}} \leq \frac{1}{n}\sum^n_{k=3} \frac{2(k/n)}{1+k^2n^{-2}}.
\end{align}
Hence it follows
\begin{align}
\int^1_0 \frac{2x}{1+x^2} \ dx=\lim_{n\rightarrow \infty}\frac{1}{1+n}\sum^n_{k=3} \frac{2k/(1+n)}{1+k^2(1+n)^{-2}} \leq \lim_{n\rightarrow \infty}\frac{1}{n}\sum^n_{k=3} \frac{2(k/n)}{1+n^{-2}+k^2n^{-2}} \leq \lim_{n\rightarrow \infty}\frac{1}{n}\sum^n_{k=3} \frac{2(k/n)}{1+k^2n^{-2}} = \int^1_0 \frac{2x}{1+x^2}\ dx
\end{align}
which means
\begin{align}
\lim_{n\rightarrow \infty}\sum^n_{k=3} \frac{2k}{1+k^2+n^2} = \int^1_0 \frac{2x}{1+x^2}\ dx = \log 2.
\end{align}</p>
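As a numerical sanity check of the limit (illustrative, not part of the proof), the partial sums approach $\ln 2$:

```python
import math

def partial_sum(n):
    # S_n = sum_{k=3}^{n} 2k / (k^2 + n^2 + 1)
    return sum(2.0 * k / (k * k + n * n + 1) for k in range(3, n + 1))

s = partial_sum(100_000)
target = math.log(2)   # about 0.693147
```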
|
1,265,531 | <p>I understand the question but I am not sure how to solve it. For example, if we flip HHHTTTTT then the next three must be heads because of the question. This however seems counterintuitive. I believe that there are $2^{10}$ possible strings, but I am unsure of how to count all possible strings that begin with HHH.</p>
| André Nicolas | 6,312 | <p>We do a formal conditional probability calculation.</p>
<p>Let $A$ be the event the first $3$ tosses are heads, and let $B$ be the event we have an equal number of heads and tails in the $10$ tosses. We want $\Pr(A|B)$. By the definition of conditional probability, we have
$$\Pr(A|B)=\frac{\Pr(A\cap B)}{\Pr(B)}.$$
We calculate the two probabilities on the right. </p>
<p>First we calculate $\Pr(B)$. The probability of $5$ heads and $5$ tails in $10$ tosses is $\frac{\binom{10}{5}}{2^{10}}$.</p>
<p>Next we calculate $\Pr(A\cap B)$. The probability the first $3$ tosses are heads is $\frac{1}{2^3}$. Given that the first $3$ tosses were heads, the probability of $5$ heads and $5$ tails is the probability of $2$ heads in the last $7$ tosses. This is $\frac{\binom{7}{2}}{2^7}$. It follows that $\Pr(A\cap B)=\frac{\binom{7}{2}}{2^{10}}$. </p>
<p>Finally, divide. </p>
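The final division can be verified by brute-force enumeration of all $2^{10}$ sequences (a small illustrative script, giving $\binom{7}{2}/\binom{10}{5}=21/252=1/12$):

```python
from itertools import product
from fractions import Fraction

favorable = 0   # first three tosses heads AND five heads in total
balanced = 0    # five heads in total (the conditioning event B)
for seq in product("HT", repeat=10):
    if seq.count("H") == 5:
        balanced += 1
        if seq[:3] == ("H", "H", "H"):
            favorable += 1

prob = Fraction(favorable, balanced)   # P(A | B)
```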
|
1,265,531 | <p>I understand the question but I am not sure how to solve it. For example, if we flip HHHTTTTT then the next three must be heads because of the question. This however seems counterintuitive. I believe that there are $2^{10}$ possible strings, but I am unsure of how to count all possible strings that begin with HHH.</p>
| Graham Kemp | 135,106 | <blockquote>
<p>I understand the question but I am not sure how to solve it. For example, if we flip HHHTTTTT then the next three must be heads because of the question. This however seems counterintuitive. I believe that there are $2^{10}$ possible strings, but I am unsure of how to count all possible strings that begin with HHH.</p>
</blockquote>
<p>You don't understand the question.</p>
<p>It is: When <em>given the counts</em> of heads and tails resulting from the flips, what is the probability that the <em>order of the results</em> has heads in the first three places?</p>
<p><em>Notice:</em> We do not have to worry about the probability of any of the flips resulting in heads or tails. The coin does not even need to be fair; as long as the same one used each time (the flips have identical and independent distributions), bias has no impact on <em>this</em> question.</p>
<blockquote>
<p>A coin is flipped ten times. What is the probability that the first three are heads if an equal number of heads and tails are flipped?</p>
</blockquote>
<p><em>An equivalent problem is:</em> When 5 red and 5 black cards are fairly shuffled, what is the probability that the first three will be red?</p>
<p>There are $\binom{5}{3}$ (equiprobable) ways to select three of the five red cards out of $\binom{10}{3}$ ways to select any three of all ten cards.</p>
<p>$$\frac{\dbinom{5}{3}}{\dbinom{10}{3}}=\cfrac{\;\cfrac{5!}{3!2! }\;}{\;\cfrac{10!}{3!7!}\;}=\dfrac{5! \; 7!}{2! \; 10!} =\frac{1}{12}$$</p>
<hr>
<p><strong>Alternatively:</strong> there are $\binom{7}{2}$ ways to order the cards/coins such that the first three are red/head, out of $\binom{10}{5}$ ways to order them in total. Divide and calculate to obtain the same result.</p>
|
959,525 | <p>Could someone tell me what I've done wrong?</p>
<p>I tried to find the derivative of $3^{2x}-2x+1$ but I got it wrong.
What I did was differentiate $3^a-2x+1$ where $a = 2x$, then multiply those two.</p>
<p>$(\ln 3\cdot 3^a - 2)\cdot 2 = 2\ln 3\cdot 3^{2x}-4$</p>
<p>P.S. $x = 2$, so the answer is supposed to be $176$.</p>
| mathlove | 78,967 | <p>It's not true. In general, we have
$$m\lt\frac{a+b}{2}.$$</p>
<p><strong>Proof</strong> : By <a href="http://en.wikipedia.org/wiki/Parallelogram_law" rel="nofollow">parallelogram law</a>, we have
$$a^2+b^2=2\left(m^2+\left(\frac c2\right)^2\right)\Rightarrow 2m=\sqrt{2a^2+2b^2-c^2}.$$
Hence, we have
$$\begin{align}2m-(a+b)&=\sqrt{2a^2+2b^2-c^2}-(a+b)\\&=\frac{(\sqrt{2a^2+2b^2-c^2}-(a+b))(\sqrt{2a^2+2b^2-c^2}+(a+b))}{\sqrt{2a^2+2b^2-c^2}+(a+b)}\\&=\frac{(2a^2+2b^2-c^2)-(a+b)^2}{\sqrt{2a^2+2b^2-c^2}+(a+b)}\\&=\frac{(a-b-c)(a-b+c)}{\sqrt{2a^2+2b^2-c^2}+(a+b)}\lt 0\end{align}$$
because we have
$$b+c\gt a\ \ \ \text{and}\ \ \ a+c\gt b.$$
Hence, we have
$$2m-(a+b)\lt 0\iff m\lt\frac{a+b}{2}.$$</p>
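The strict inequality can be spot-checked numerically using $2m=\sqrt{2a^2+2b^2-c^2}$ from the proof (an illustrative sketch over randomly sampled valid triangles):

```python
import math
import random

def median_to_c(a, b, c):
    # length of the median to side c, via the parallelogram law
    return 0.5 * math.sqrt(2 * a * a + 2 * b * b - c * c)

random.seed(0)
ok = True
for _ in range(1000):
    a = random.uniform(0.1, 10.0)
    b = random.uniform(0.1, 10.0)
    # pick c satisfying the strict triangle inequality |a-b| < c < a+b
    c = random.uniform(abs(a - b) + 1e-6, a + b - 1e-6)
    ok = ok and median_to_c(a, b, c) < (a + b) / 2
```

For example, in the 3-4-5 right triangle the median to the hypotenuse has length $2.5 < 3.5$.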
|
269,242 | <p>The number of primes in each of the $\phi(n)$ residue classes relatively prime to $n$ are known to occur with asymptotically equal frequency (following from the proof of the Prime Number Theorem). Does the same result hold on pairs of consecutive primes on the $\phi(n)^2$ pairs of congruence classes?</p>
<p>To wit: Consider $\{(2, 3), (3, 5), (5, 7), (7, 11), \ldots\}\pmod n$. Does $(a,b)$ occur with natural density
$$\begin{cases}
1/\phi(n)^2,&\gcd(ab,n)=1\\
0,&\text{otherwise}
\end{cases}
$$
?</p>
| Charles | 1,778 | <p>I suspect that the statement is false. The effect of prime gaps making some sizes more likely than others disappears after about $e^{e^n},$ but the effect of (small) primes dividing the modulus seems to be a problem except when $n$ is a prime power.</p>
<p>So the conjecture could either be weakened to the case where $n=p^k$ or the density corrected with an appropriate product over primes.</p>
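The suspected bias can be probed empirically (an exploratory sketch, not a proof either way); e.g. for $n=3$, consecutive primes below $10^5$ visibly avoid repeating their residue class:

```python
def primes_up_to(limit):
    # simple sieve of Eratosthenes
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, ok in enumerate(sieve) if ok]

def pair_counts(n, limit):
    # count residue pairs (p mod n, q mod n) over consecutive primes p, q
    ps = primes_up_to(limit)
    counts = {}
    for p, q in zip(ps, ps[1:]):
        key = (p % n, q % n)
        counts[key] = counts.get(key, 0) + 1
    return counts

counts = pair_counts(3, 100_000)
```

At this small scale the "different class" pairs such as $(1,2)$ occur noticeably more often than the "same class" pairs such as $(1,1)$.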
|
31,502 | <p>This is probably a trivial question, but I don't see the answer, and I haven't found it on <a href="http://en.wikipedia.org/wiki/Cartesian_closed_category" rel="nofollow noreferrer">Wikipedia</a>, <a href="http://ncatlab.org/nlab/show/cartesian+closed+category" rel="nofollow noreferrer">nLab</a>, nor <a href="https://mathoverflow.net/questions/19004/is-the-category-commutative-monoids-cartesian-closed">MathOverflow</a>.</p>
<p>Let $\text{ComAlg}$ denote the category whose objects are commutative algebras over a fixed field $\mathbb K$ and whose morphisms are homomorphisms of algebras, and let $\text{ComAlg}^{\rm op}$ denote its opposite category. Given commutative algebras $A,B$, let $\operatorname{hom}(A,B)$ denote the set of algebra homomorphisms $A\to B$, so that $\operatorname{hom}$ is the usual functor $\text{ComAlg}^{\rm op} \times \text{ComAlg} \to \text{Set}$. The short version of my question:</p>
<blockquote>
<p>Is $\text{ComAlg}^{\rm op}$ Cartesian closed?</p>
</blockquote>
<p>The long version of my question (if I've gotten all the signs right):</p>
<blockquote>
<p>Is there a functor $[,] : \text{ComAlg} \times \text{ComAlg}^{\rm op} \to \text{ComAlg}$ such that there is an adjunction (natural in $A,B,C$, i.e. an isomorphism of functors $\text{ComAlg}^{\rm op} \times \text{ComAlg} \times \text{ComAlg} \to \text{Set}$) of the form:
$$ \operatorname{hom}([A,B],C) \cong \operatorname{hom}(A,B\otimes C) ?$$</p>
</blockquote>
<p>Recall: $\otimes$ is the coproduct in $\text{ComAlg}$, hence the product in $\text{ComAlg}^{\rm op}$.</p>
<p>Motivation: $\text{ComAlg}^{\rm op}$ is complete and cocomplete, and so many constructions that make sense in $\text{Set}$ and $\text{Top}$ transfer verbatim to the algebraic setting. I would like to know how many.</p>
| thel | 373 | <p>The existence of such an adjunction implies that $B \otimes -$ preserves limits, which doesn't seem very likely.</p>
<p>Here is a counterexample, though probably not the simplest one. Set $B = k[y]$ and consider the inverse limit of $k[x]/(x^{n+1})$. If we take the tensor products first, then we get $k[y][[x]]$ while if we take the limit first we obtain $k[[x]][y]$. These are distinct, since the first contains for example $(1-yx)^{-1} = \sum_{k \geq 0} y^k x^k$ and the second does not.</p>
|
2,706,776 | <p>In solving the wave equation
$$u_{tt} - c^2 u_{xx} = 0$$
it is commonly 'factored'</p>
<p>$$u_{tt} - c^2 u_{xx} =
\bigg( \frac{\partial }{\partial t} - c \frac{\partial }{\partial x} \bigg)
\bigg( \frac{\partial }{\partial t} + c \frac{\partial }{\partial x} \bigg)
u = 0$$</p>
<p>to get
$$u(x,t) = f(x+ct) + g(x-ct).$$</p>
<p><strong>My question is: is this legitimate?</strong></p>
<p>The partial differentiation operators are not variables, but here in 'factoring' they are treated as such.</p>
<p>Also it does not seem that both factors can individually be set to zero to obtain the solution--either one or the other, or both might be zero.</p>
| akhmeteli | 162,569 | <p>Yes, this is appropriate. You can apply the operators in the brackets one after another and get the same result as with the second-order derivatives. This "factoring" would be wrong, however, if, for example, $c$ were a function of $x$ or $t$.</p>
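That $u=f(x+ct)+g(x-ct)$ indeed solves the equation can be checked numerically with central differences (an illustrative sketch with hypothetical sample profiles $f,g$ and wave speed $c=2$):

```python
import math

c = 2.0    # hypothetical sample wave speed
h = 1e-3   # step for central second differences

def u(x, t):
    # d'Alembert form u = f(x+ct) + g(x-ct) with sample profiles
    return math.sin(x + c * t) + math.exp(-((x - c * t) ** 2))

def residual(x, t):
    # u_tt - c^2 u_xx, approximated to O(h^2)
    utt = (u(x, t + h) - 2.0 * u(x, t) + u(x, t - h)) / h**2
    uxx = (u(x + h, t) - 2.0 * u(x, t) + u(x - h, t)) / h**2
    return utt - c * c * uxx

worst = max(abs(residual(i / 10.0, j / 10.0))
            for i in range(-20, 21) for j in range(-20, 21))
```

The residual stays at the level of the discretization error over the whole grid.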
|
3,613,854 | <p>Let <span class="math-container">$$A=\begin{bmatrix}
3 & 2 \\
2 & 3
\end{bmatrix}.$$</span>
Find the spectral decomposition of <span class="math-container">$A$</span>. This is <span class="math-container">$$A=VDV^{-1}=\begin{bmatrix}
-1 & 1 \\
1 & 1
\end{bmatrix}\begin{bmatrix}
1 & 0 \\
0 & 5
\end{bmatrix}\begin{bmatrix}
-1/2 & 1/2 \\
1/2 & 1/2
\end{bmatrix}.$$</span>
My question how do I find <span class="math-container">$2^A$</span>? Thanks for your help.</p>
| Mostafa Ayaz | 518,023 | <p><strong>Hint</strong></p>
<p>Note that using that decomposition<span class="math-container">$$A^k=VD^kV^{-1}$$</span>and <span class="math-container">$$2^A=\sum {A^n(\ln 2)^n\over n!}$$</span></p>
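Concretely, for the given matrix both routes agree (an illustrative NumPy sketch; $2^A=V\,\mathrm{diag}(2^1,2^5)\,V^{-1}$):

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [2.0, 3.0]])
V = np.array([[-1.0, 1.0],
              [ 1.0, 1.0]])

# spectral route: 2^A = V diag(2^1, 2^5) V^{-1}
two_A = V @ np.diag([2.0, 32.0]) @ np.linalg.inv(V)

# series route: 2^A = sum_n A^n (ln 2)^n / n!
series = np.zeros((2, 2))
term = np.eye(2)
for n in range(1, 60):
    series = series + term
    term = term @ A * np.log(2.0) / n
```

Both give $2^A=\begin{bmatrix}17&15\\15&17\end{bmatrix}$.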
|
163,640 | <p>Early in a course in Algebra the result that every group can be embedded as a subgroup<br>
of a symmetric group is introduced. One can further work on it to embed it as a subgroup of a suitable (higher degree) alternating group.</p>
<p>Inverting the view point we can say that the family of simple groups $A_n, n\geq 5$, contains all finite groups as their subgroups.</p>
<p>My question now is, is the same true for each of the other infinite families listed in the Classification of Finite Simple Groups?</p>
<p>In case the answer to this question is negative it might lead to some categorization.
Cayley's embedding theorem is often considered a 'useless theorem',
as no result about that group can be proved using that embedding. (Is that correct?)
Other simple groups being somewhat more special (structure preserving maps of some non-trivial structure), we can categorize groups according to which infinite family(ies) they fall into.
And groups embeddable in a particular family, but not embeddable in another may exhibit some special property.</p>
<p>Hope this provides a motivation for the question.</p>
| DavidLHarden | 12,610 | <p>Another use of regarding a group (there called $H$) as a subgroup of the symmetric group $S_{|H|}$ is given by Marty Isaacs in <a href="https://mathoverflow.net/questions/173148/subgroup-property-stronger-than-being-characteristic">Subgroup property stronger than being characteristic</a></p>
|
3,910,739 | <p>I am trying to find a pdf for a random variable <span class="math-container">$X$</span> where <span class="math-container">$X=-2Y+1$</span> and <span class="math-container">$Y$</span> is given by <span class="math-container">$N(4,9)$</span></p>
<p>Here is my attempt:</p>
<p>we know <span class="math-container">$\mu=4$</span> and <span class="math-container">$\sigma=3$</span>. so that the normal distribution of <span class="math-container">$Y$</span> is given by <span class="math-container">$\frac{1}{3\sqrt{2\pi}}e^\frac{-(y-4)^2}{18}$</span><br />
We can differentiate the cumulative function of <span class="math-container">$X$</span> to get the pdf for <span class="math-container">$X$</span>.<br />
cdf of <span class="math-container">$X = P(X<x)$</span> = <span class="math-container">$P(-2Y+1<x)=P(Y<\frac{-(x-1)}{2})=\int_{-\infty}^{\frac{-(x-1)}{2}}\frac{1}{3\sqrt{2\pi}}e^\frac{-(y-4)^2}{18}dy$</span><br />
so <span class="math-container">$\frac{d}{dx}(\int_{-\infty}^{\frac{-(x-1)}{2}}\frac{1}{3\sqrt{2\pi}}e^\frac{-(y-4)^2}{18}dy)=f(x)$</span>, which is the density function for <span class="math-container">$X$</span><br />
<span class="math-container">$f(x)=-\frac{1}{6\sqrt{2\pi}}e^\frac{-(\frac{-(x-1)}{2}-4)^2}{18}$</span></p>
<p>Is this a correct way to approach the problem? I feel like my answer is very funky.</p>
| Kolmogorov | 551,240 | <p>Let us denote distribution functions by <span class="math-container">$F$</span>, and density functions by <span class="math-container">$f$</span>. Then,
<span class="math-container">\begin{align*}
F_X(x) &= P(X \leqslant x)\\
&= P\left(Y \geqslant \frac{1-x}{2}\right)\\
&= 1 - F_Y\left(\frac{1-x}{2}\right)\\
&= 1 - \int_{-\infty}^{\frac{1-x}{2}} f_Y(t) dt\\
&= 1 - \int_{0}^{\frac{1-x}{2}} f_Y(t) dt - \int_{-\infty}^{0} f_Y(t) dt\\
&= \frac{1}{2} - \int_{0}^{\frac{1-x}{2}} f_Y(t) dt
\end{align*}</span>
Therefore, by Fundamental Theorem of Calculus, we have : <span class="math-container">$$ f_X(x) = \frac{1}{2} \cdot f_Y\left(\frac{1-x}{2}\right) = \frac{1}{6\sqrt{2\pi}} \exp\left[-\frac{(x+7)^2}{72}\right]$$</span>
So, <span class="math-container">$~X \sim N(-7,36)$</span> . Hope it helps.</p>
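A Monte Carlo sanity check of the conclusion $X\sim N(-7,36)$ (illustrative, with a fixed seed):

```python
import random
import statistics

random.seed(42)
ys = [random.gauss(4.0, 3.0) for _ in range(200_000)]  # Y ~ N(4, 9), i.e. sd = 3
xs = [-2.0 * y + 1.0 for y in ys]                      # X = -2Y + 1

mean_x = statistics.fmean(xs)     # should be close to -7
var_x = statistics.pvariance(xs)  # should be close to 36
```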
|
4,032,983 | <p>I would like to know math websites that are useful for students, PhD students and researchers (useful in the sense most of the students or researchers—of a particular area—are using it). Maybe you can share which math websites you sometime use and why you use it.</p>
<p>Let me give my websites and why I use them:</p>
<p>MathOverflow and Stack Exchange: Very good to look up questions about how to do mathematics, what mathematics is, how it evolves, and of course to have an exchange with smart people, who can help you if you are stuck at something.</p>
<p>arXiv: I don't use it now, but I think that's the main website to read papers and to publish (research level)</p>
<p>The Stacks Project: It is very useful for me to look algebraic things ups with more explanation than in some lectures...</p>
<p>MathSciNet: To search for papers that may help you.</p>
<p>Number Theory Web: Since I want to become a number theorist, this site helps me to see which kind of things exist.</p>
<p>Do you have more websites that help you to understand maybe mathematics, or just to connect to other mathematicians?</p>
<p>Everything is welcome. Just post it as an answer and not a comment.</p>
| Qi Zhu | 470,938 | <p>Great question. It would probably also be interesting to think about what could be useful but is not yet out there. Here are a few picks that I can think of:</p>
<ul>
<li>A useful website for ring theorists is the <a href="https://ringtheory.herokuapp.com/" rel="noreferrer">Database of Ring Theory</a>. It is actually managed by rschwieb, a frequent user of MSE. You can look for examples or counterexamples of your favourite properties of rings and modules in the database.</li>
<li>In a similar vein, here is a <a href="http://galoisdb.math.upb.de/" rel="noreferrer">Database for Number Fields.</a></li>
<li>Super useful for number theorists is also <a href="http://www.lmfdb.org/knowledge/" rel="noreferrer">LMFDB</a>. It's another database with which you can look for number-theoretic objects with certain desired properties.</li>
<li>For category lovers, the equivalent of the Stacks Project is <a href="https://kerodon.net/" rel="noreferrer">Kerodon</a> on Higher Category Theory. I think this one is still growing.</li>
<li>And of course, obvious ones like <a href="https://www.wolframalpha.com/" rel="noreferrer">Wolfram Alpha</a> and <a href="https://www.wikipedia.org/" rel="noreferrer">Wikipedia</a>!</li>
</ul>
<p>Maybe I should also mention certain social media meme groups in which you can apply your knowledge to understand the most recent homological algebra memes. But maybe this is not quite the right place for that... ;-)</p>
|
4,032,983 | <p>I would like to know math websites that are useful for students, PhD students and researchers (useful in the sense most of the students or researchers—of a particular area—are using it). Maybe you can share which math websites you sometime use and why you use it.</p>
<p>Let me give my websites and why I use them:</p>
<p>MathOverflow and Stack Exchange: Very good to look up questions about how to do mathematics, what mathematics is, how it evolves, and of course to have an exchange with smart people, who can help you if you are stuck at something.</p>
<p>arXiv: I don't use it now, but I think that's the main website to read papers and to publish (research level)</p>
<p>The Stacks Project: It is very useful for me to look algebraic things ups with more explanation than in some lectures...</p>
<p>MathSciNet: To search for papers that may help you.</p>
<p>Number Theory Web: Since I want to become a number theorist, this site helps me to see which kind of things exist.</p>
<p>Do you have more websites that help you to understand maybe mathematics, or just to connect to other mathematicians?</p>
<p>Everything is welcome. Just post it as an answer and not a comment.</p>
| storluffarn | 891,289 | <p>I really enjoy the following schematic overview of various statistical, distributions, their relationships and properties. It's quite handy for giving students (and others!) a quick way of relating new distributions to distributions that they already know about.</p>
<p><a href="http://www.math.wm.edu/%7Eleemis/chart/UDR/UDR.html?fbclid=IwAR3mycDttQQtpyRHhitocYGT-H4kQQ9tC2gQ07OgkcgNXg49c3hw2TTaJJg" rel="nofollow noreferrer">Univariate Distribution Relationships</a></p>
<p>edit: typos</p>
|
870,240 | <p>Which number is larger? $\underbrace{888\cdots8}_\text{19 digits}\times\underbrace{333\cdots3}_\text{68 digits}$ or $\underbrace{444\cdots4}_\text{19 digits}\times\underbrace{666\cdots67}_\text{68 digits}$? Why? How much is it larger?</p>
| please delete me | 164,934 | <p>Let those four numbers be $a,b,c,d$ respectively. Then $a=2c$ and $d=2b+1$. So $cd-ab=c$.</p>
|
1,665,443 | <p>How do we show that the map</p>
<p>$\phi :\mathbb F_p(\alpha) \rightarrow\mathbb F_p(\alpha)$, defined by $\phi(\alpha)=\alpha +1$, is a ring homomorphism?</p>
<p>This is a very basic fact, but I am unable to prove it from the definition of a ring homomorphism. The same issue arises for the ring homomorphism $\phi :K[X] \rightarrow K[X]$ defined by $\phi (X)=X+1$. I have studied this earlier and assumed it was trivial, but never tried to see the proof. </p>
<p>Thanks in advance.</p>
| Andreas Caranti | 58,401 | <p>Let us start with your second question.</p>
<p>Let $K$ be a field, and $K[x]$ be the polynomial rings. Let $B$ be a commutative ring with unity containing $K$ as a subring, and let $\beta \in B$. Then there is a unique ring homomorphism
$$
v_{\beta} : K[x] \to B
$$
which satisfies
$$\begin{cases}
v_{\beta}: &a \mapsto a &\text{for $a \in K$},\\
& x \mapsto \beta.\\
\end{cases}$$
This is just <em>evaluation of a polynomial for $x = \beta$</em>. The statement, which allows several generalizations, can be described as the universal property of polynomial rings.</p>
<hr>
<p>As to your first question (which originally was missing the part on the polynomial of which $\alpha$ is a root), let $f = x^{p} - x - b$, for some $b \ne 0$. Note that $b$ has additive period $p$. If $\alpha$ is a root of $f$, apply the Frobenius morphism $z \mapsto z^{p}$, then $\alpha^{p}$ is also a root, as
$$
0 = f(\alpha)^{p} = (\alpha^{p} - \alpha - b)^{p}
= (\alpha^{p})^{p} - (\alpha^{p}) - b = f(\alpha^{p}).
$$
And clearly $\alpha^{p} = \alpha + b$. Since $\mathbb F_p(\alpha) = \mathbb F_p(\alpha+b)$, you have found that the Frobenius map induces a ring isomorphism $\sigma$ on $\mathbb F_p(\alpha)$ which fixes $\mathbb F_p$ elementwise and maps $\alpha$ to $\alpha + b$.</p>
<p>Now note that $\sigma^{i}(\alpha) = \alpha + i b$. Since $b \ne 0$ has additive period $p$, you can choose $i_{0}$ so that $i_{0} b = 1$, and thus
$$
\sigma^{i_{0}} (\alpha) = \alpha + 1.
$$</p>
|
1,448,416 | <p>It states that the $n$th difference of a polynomial of degree $n$ is constant, and thus the $(n+1)$th difference will be zero.</p>
<ul>
<li>How can I show that the $n$th difference is constant? </li>
<li>The forward difference of a constant is zero, but how can I prove it?</li>
</ul>
| lisyarus | 135,314 | <p>Let $p(x) = c$, where $c$ is a constant. Then $p(x+h)-p(x) = c-c=0$, thus forward difference of a constant is zero.</p>
<p>Let $p(x)=\sum\limits_{k=0}^{n}a_k x^k$. Then $p(x+h)-p(x) = \sum\limits_{k=0}^{n}a_k ((x+h)^k - x^k)$.</p>
<p>$(x+h)^k-x^k = {k \choose 0} h^0 x^k + {k \choose 1} h^1 x^{k-1} + \dots + {k \choose k} h^k x^0 - x^k = {k \choose 1} h^1 x^{k-1} + \dots + {k \choose k} h^k x^0$. Thus, the forward difference of a degree $k$ polynomial (for $k\ge 1$) is a degree $k-1$ polynomial. Use induction to prove that the $n$-th forward difference of a degree $n$ polynomial is a degree $0$ polynomial, which is constant.</p>
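The induction can be watched numerically: applying the forward difference $n$ times to a degree-$n$ polynomial yields a constant, and applying it once more yields zero (an illustrative sketch with a hypothetical cubic and $h=1$):

```python
def forward_diff(p, h=1):
    # Delta p: x -> p(x + h) - p(x)
    return lambda x: p(x + h) - p(x)

p = lambda x: 2 * x**3 - 5 * x + 7   # a hypothetical degree-3 polynomial

d3 = forward_diff(forward_diff(forward_diff(p)))  # third difference
d4 = forward_diff(d3)                             # fourth difference

values3 = [d3(x) for x in range(10)]  # all equal 2 * 3! = 12
values4 = [d4(x) for x in range(10)]  # all zero
```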
|
1,300,273 | <p>I have a question about evaluating the limit:</p>
<p>$$\lim_{x \to\infty }\left(x^{f(x)}-x \right)$$</p>
<p>where:</p>
<p>$f(x)$ is a continuous map from the positive reals to the positive reals , and</p>
<p>$\lim_{x\rightarrow \infty }f(x)= 1$.</p>
<p>I attempted to apply L'Hôpital's rule by writing:</p>
<p>$x^{f(x)}-x$ = $\log(\exp(x^{f(x)})/\exp(x))$ </p>
<p>then applying the rule to $\exp(x^{f(x)})/\exp(x)$, however this quotient appeared in the resulting expression and successive applications of the rule would not remove it. </p>
<p>The Wikipedia article on L'Hôpital's rule (link below) mentions the way the original expression can occur in the result of applying the rule. The article gives some examples where this problem is solved by using transformations but I could not get that method to work in this case. </p>
<p>I would appreciate any help in evaluating this limit and/or in referring me to a source where it or similar limits are evaluated.</p>
<p><a href="http://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule" rel="nofollow">http://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule</a></p>
<p>EDIT </p>
<p>Thanks for the comment and the answer (now deleted). They show me I left some information out of my question. Apologies for the omission. I should have included the following:</p>
<ol>
<li><p>The function $f(x)$ is assumed to be $C^\infty$ on the positive reals.</p></li>
<li><p>The limit will depend on $f(x)$ so I was looking for an evaluation of the limit that related the limit to the properties of $f(x)$. For example, I was looking for those properties of $f(x)$ that imply the limit is $\infty$ and those that imply it is finite.</p></li>
</ol>
| gjh | 37,021 | <p>I now think the problem is simpler than it originally seemed to me when I posted the question.</p>
<p>Is the following how the form of $f(x)$ determines $\lim_{x \to\infty }\left(x^{f(x)}-x \right)$?</p>
<p><strong>CASE ONE</strong> : $\lim_{x \to\infty }\left(x^{f(x)}-x \right) = L$ (finite)</p>
<p>This limit implies $x^{f(x)}-x = L + g(x)$ </p>
<p>where $\lim_{x\rightarrow \infty }g(x)= 0$ and $L$ is a constant.</p>
<p>Rearranging $x^{f(x)}-x = L + g(x)$ gives:</p>
<p>$f(x) = 1+\log[1+(L+g(x))/x]/\log(x)$</p>
<p>The implications are reversible so:</p>
<p>$\lim_{x \to\infty }\left(x^{f(x)}-x \right) = L$ (finite) iff $f(x) = 1+\log[1+(L+g(x))/x]/\log(x)$</p>
<p>provided $\lim_{x\rightarrow \infty }g(x)= 0$</p>
<p><strong>CASE TWO</strong> : $\lim_{x \to\infty }\left(x^{f(x)}-x \right) = \infty$</p>
<p>This limit implies:</p>
<p>$x^{f(x)}-x = h(x)$</p>
<p>where $\lim_{x\rightarrow \infty }h(x)= \infty$</p>
<p>Rearranging $x^{f(x)}-x = h(x)$ gives:</p>
<p>$f(x) = 1+\log[1+h(x)/x]/\log(x)$</p>
<p>The implications are reversible so:</p>
<p>$\lim_{x \to\infty }\left(x^{f(x)}-x \right) = \infty$ iff $f(x) = 1+\log[1+h(x)/x]/\log(x)$</p>
<p>provided $\lim_{x\rightarrow \infty }h(x)= \infty$</p>
<p>End</p>
|
1,474,123 | <p>I have tried to use u-substitution but for some reason am not doing it right and thus not getting the correct answer. I want to know the most obvious/ intuitive way to solve this integral.</p>
| Vamsi Spidy | 279,085 | <p>$x=z\tan(k)$</p>
<p>$\mathrm{d}x=z\sec^{2}(k)\mathrm{d}k$</p>
<p>$(a^2 + x^2)^{\frac32}\,\mathrm{d}x = z^4 \sec^{3}(k)\,\sec^{2}(k)\,\mathrm{d}k
= z^4 \left(1+\tan^{2}(k)\right)^{\frac32}\sec^{2}(k)\,\mathrm{d}k$ (writing $z$ for $a$). </p>
<p>Now put $\tan(k)=t$, $\mathrm{d}t=\sec^{2}(k)\,\mathrm{d}k$,
and then integrate easily.</p>
|
3,531,693 | <p>Let <span class="math-container">$A \subset \mathbb{R}^n$</span> be a compact set with positive Lebesgue measure on <span class="math-container">$\mathbb{R}^n$</span>. Can we find an open set <span class="math-container">$B \subset \mathbb{R}^n$</span> such that <span class="math-container">$B \subset A$</span>?</p>
<p>PS: I know that if the compactness removed, the answer is no, since <span class="math-container">$A$</span> can be any compact set removing all the rational points.</p>
| Milo Brandt | 174,927 | <p>Although the other answer is really a good answer, it's worth noting that you can cook up a lot of examples in a similar manner: choose your favorite compact set <span class="math-container">$C$</span> (in <span class="math-container">$\mathbb R^n$</span> or some other nicely behaved space) with positive measure. Now, make a list of countably many open sets such that every non-empty open set contains one on your list (e.g. the set of balls of rational radius centered at rational coordinates).</p>
<p>Let's decide that we're okay with losing some <span class="math-container">$\varepsilon$</span> of area from <span class="math-container">$C$</span> during this construction. We are now going to nibble away at <span class="math-container">$C$</span> by removing open sets (preserving compactness, even in the limiting case). Choose a point in your first open set and remove from <span class="math-container">$C$</span> a ball with area <span class="math-container">$\varepsilon 2^{-1}$</span> around that point. Choose a point in your second set and remove a ball around it with area <span class="math-container">$\varepsilon 2^{-2}$</span>. Choose a point in your third set and remove a ball around it with area <span class="math-container">$\varepsilon 2^{-3}$</span>. Do this for your entire list. You've at most removed an area of <span class="math-container">$\varepsilon$</span>. However, no non-empty open set is a subset of the remaining set, since no set on your list is a subset of the remaining set.</p>
<p>Basically, the fact that we're in a second-countable space means that there really aren't <em>so</em> many open sets, so we can just make a list of enough open sets and deal with them individually. This is a nice thing to keep in mind if you want to make pathological examples from scratch.</p>
|
<p>I was wondering if there is any stationary distribution for a bipartite graph. Can we apply random walks on a bipartite graph? We know the stationary distribution can be found from a Markov chain, but a bipartite graph has two different "islands", and connections occur only between nodes from different groups. </p>
| R W | 8,588 | <p>There is no problem with dealing with random walks and stationary distributions on bipartite graphs. Actually, integer lattices $\mathbb Z^d$ or finite cyclic groups of even order $Z_{2p}$ all give rise to bipartite graphs with respect to natural generating sets. The simple random walk on a finite connected graph always has a unique stationary distribution given by the usual formula (without any exceptions for bipartite graphs). On the other hand, the property of being bipartite can indeed be expressed in terms of random walks: a graph is bipartite if and only if the simple random walk on it has precisely two periodic classes.</p>
|
240,699 | <p>I have the following equation which I want to solve:</p>
<p><span class="math-container">$$
I_D = [Li_2(-e^{V_D-I_D})-Li_{2}(e^{I_D})]
$$</span></p>
<p>Here <span class="math-container">$Li_2(x)$</span> is the PolyLog function of order <span class="math-container">$2$</span>. Is there a way to solve this equation iteratively in Mathematica to get <span class="math-container">$I_D$</span> as a function of <span class="math-container">$V_D$</span>.</p>
<p>Edit: I want to solve this equation numerically for real values of <span class="math-container">$I_D$</span> and <span class="math-container">$V_D$</span>.</p>
| bbgodfrey | 1,063 | <p>A typical solution of the equation</p>
<pre><code>id - PolyLog[2, -Exp[vd - id]] - PolyLog[2, Exp[id]] == 0
</code></pre>
<p>can be obtained by plotting this expression.</p>
<pre><code>ReImPlot[(id - PolyLog[2, -Exp[vd - id]] - PolyLog[2, Exp[id]]) /. vd -> .5, id, -1, 1},
ImageSize -> Large, AxesLabel -> {id, None}, LabelStyle -> {15, Bold, Black}]
</code></pre>
<p><a href="https://i.stack.imgur.com/Nn2Tn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Nn2Tn.png" alt="enter image description here" /></a></p>
<p>Visibly, there is a branch point at <code>id = 0</code>, consistent with the documentation of <code>PolyLog</code>. A small amount of experimentation shows that the zero of the curve shown moves toward the branch point as <code>vd</code> increases. Consequently, there is no solution for <code>vd</code> greater than</p>
<pre><code>FindRoot[(id - PolyLog[2, -Exp[vd - id]] - PolyLog[2, Exp[id]]) /. id -> 0, {vd, -.87}]
(* {vd -> 0.872676} *)
</code></pre>
<p>at least for the principal value of <code>PolyLog</code>. With this information, a plot of <code>id</code> as a function of <code>vd</code> is obtained by</p>
<pre><code>Plot[id /. FindRoot[(id - PolyLog[2, -Exp[vd - id]] - PolyLog[2, Exp[id]]), {id, 01}],
{vd, -1, .872}, ImageSize -> Large, AxesLabel -> {vd, id}, LabelStyle -> {15, Bold, Black}]
</code></pre>
<p><a href="https://i.stack.imgur.com/A0kvP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A0kvP.png" alt="enter image description here" /></a></p>
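<p>The Mathematica treatment above is the authoritative one; purely as a cross-check, here is a Python sketch (entirely my own: the <code>li2</code> helper, the bracket $[-5,0]$, and the bisection routine are assumptions, not anything from the answer) that builds a real dilogarithm from its power series plus the standard reflection and inversion identities, then bisects the same equation for <code>vd = 0.5</code>:</p>

```python
import math

def li2(x):
    """Real dilogarithm Li2(x) = sum_{k>=1} x^k / k^2 for x <= 1,
    using standard identities to keep the series argument in [-1, 0.5]."""
    if x > 1:
        raise ValueError("real branch requires x <= 1")
    if x < -1:
        # inversion: Li2(x) = -pi^2/6 - ln(-x)^2 / 2 - Li2(1/x)  for x < -1
        return -math.pi ** 2 / 6 - 0.5 * math.log(-x) ** 2 - li2(1.0 / x)
    if x == 1.0:
        return math.pi ** 2 / 6
    if x > 0.5:
        # reflection: Li2(x) = pi^2/6 - ln(x) ln(1-x) - Li2(1-x)
        return math.pi ** 2 / 6 - math.log(x) * math.log(1 - x) - li2(1 - x)
    total, p = 0.0, x          # p tracks x^k
    for k in range(1, 4000):
        total += p / (k * k)
        p *= x
        if abs(p) < 1e-17:
            break
    return total

def f(i, vd):
    # residual of  id - Li2(-e^(vd - id)) - Li2(e^(id)) = 0  (real for id <= 0)
    return i - li2(-math.exp(vd - i)) - li2(math.exp(i))

def solve_id(vd, lo=-5.0, hi=0.0, iters=80):
    """Plain bisection; assumes f changes sign on [lo, hi]."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo, vd) * f(mid, vd) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

root = solve_id(0.5)
```

For <code>vd = 0.5</code> the root lands in $(-1,0)$, consistent with the zero visible in the <code>ReImPlot</code> above; past <code>vd</code> $\approx 0.8727$ the bracket no longer contains a sign change, matching the bound found with <code>FindRoot</code>.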
|
672,707 | <p><img src="https://i.stack.imgur.com/Tr5Jy.gif" alt="enter image description here" /></p>
<p>How do I solve this equation involving a logarithm?</p>
| voligno | 67,047 | <p>Use the properties of the log function: </p>
<p>$\log_a {\frac{b}{c}} = \log_a b -\log_a c$ </p>
<p>and $\log_a x = \log_a b \times \log_b x$. </p>
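<p>The equation itself is only visible in the image, so as a purely hypothetical illustration of how the two identities combine, consider $\log_9 x = \log_3 5$. The change-of-base identity with $a=3$, $b=9$ gives $\log_3 x = \log_3 9 \cdot \log_9 x = 2\log_9 x$, so:</p>

```latex
\log_9 x = \log_3 5
\;\Longrightarrow\; \tfrac{1}{2}\log_3 x = \log_3 5
\;\Longrightarrow\; \log_3 x = \log_3 5^2 = \log_3 25
\;\Longrightarrow\; x = 25.
```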
|
481,313 | <p>Show that in an abelian group the product of two elements of finite order is itself an element of finite order.</p>
<p>I need some hint to start with, I am familiar with the basic</p>
| Prahlad Vaidyanathan | 89,789 | <p>$(ab)^n = a^nb^n$, so take $n = \operatorname{lcm}(|a|,|b|)$; then $a^n = b^n = e$, and hence $(ab)^n = e$.</p>
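<p>A quick numerical illustration (my own Python sketch; the unit group mod $35$ is just a convenient abelian example, not anything from the answer) that the exponent $n=\operatorname{lcm}(|a|,|b|)$ works:</p>

```python
from math import gcd

def mult_order(a, m):
    """Order of a in the unit group mod m (assumes gcd(a, m) == 1)."""
    k, x = 1, a % m
    while x != 1:
        x = x * a % m
        k += 1
    return k

def lcm(x, y):
    return x * y // gcd(x, y)

m = 35  # (Z/35Z)* is abelian, so (ab)^n = a^n b^n holds there
units = [a for a in range(1, m) if gcd(a, m) == 1]
for a in units:
    for b in units:
        n = lcm(mult_order(a, m), mult_order(b, m))
        # a^n = b^n = 1, hence (ab)^n = a^n b^n = 1: ab has finite order
        assert pow(a * b, n, m) == 1
```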
|
285,227 | <p>I am trying to prove $\exp(x+y) = \exp(x) \exp(y)$.</p>
<p>I may use that $$\exp(x) = \sum_{n=0}^\infty \frac {x^n}{n!}$$
I further know how to multiply two power series about one point, i.e. if $f(x) = \sum_{n=0}^\infty c_n(x-a)^n$ and $g(x) = \sum_{n=0}^\infty d_n(x-a)^n$ then
$$
f(x)g(x) = \sum_{n=0}^\infty e_n(x-a)^n
$$
with
$$
e_n = \sum_{m=0}^n c_md_{n-m}
$$</p>
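<p>For reference, the same Cauchy-product mechanism finishes the proof once the two series are multiplied and the terms are regrouped by total degree (legitimate because both series converge absolutely); with $c_m = \frac{1}{m!}$ and $d_{n-m} = \frac{1}{(n-m)!}$, the binomial theorem gives:</p>

```latex
\exp(x)\exp(y)
  = \sum_{n=0}^{\infty} \sum_{m=0}^{n} \frac{x^{m}}{m!}\cdot\frac{y^{\,n-m}}{(n-m)!}
  = \sum_{n=0}^{\infty} \frac{1}{n!} \sum_{m=0}^{n} \binom{n}{m} x^{m} y^{\,n-m}
  = \sum_{n=0}^{\infty} \frac{(x+y)^{n}}{n!}
  = \exp(x+y).
```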
| Bumblebee | 156,886 | <p>Euler's formula (the exponential form of a complex number) says that $$e^{i\theta}=\cos\theta+i\sin\theta.$$ Therefore $$e^{x+y}=\cos(-i(x+y))+i\sin(-i(x+y))\\=\cos(xi+yi)-i\sin(xi+yi)\\=(\cos ix\cos iy-\sin ix\sin iy)-i(\sin ix\cos iy+\cos ix\sin iy)\\=(\cos ix-i\sin ix)(\cos iy-i\sin iy)\\=e^xe^y.$$</p>
|
1,850,069 | <p>Let the incircle (with center $I$) of $\triangle{ABC}$ touch the side $BC$ at $X$, and let $A'$ be the midpoint of this side. Then prove that line $A'I$ (extended) bisects $AX$.<a href="https://i.stack.imgur.com/pd7Di.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pd7Di.png" alt="enter image description here"></a></p>
| Stefan4024 | 67,746 | <p>First denote the intersection of $A'I$ and $AX$ by $M$. Now let $IX$ intersect the incircle for a second time at $Y$. Then let $AY$ intersect $BC$ at $W$. It's well-known that $W$ is the tangent point of the excircle and $BC$ (you can check the proof of this lemma <a href="http://yufeizhao.com/olympiad/geolemmas.pdf" rel="nofollow">here</a>, Chapter $2$). It's also well-known that $A'X = A'W$, so $A'$ is the midpoint of $XW$; likewise $I$ is the midpoint of the diameter $XY$. Hence $A'I$ is a midline of $\triangle XYW$, so $A'I \parallel YW$, that is, $A'I \parallel AW$. Now using this we have by the Intercept Theorem:</p>
<p>$$\frac{XM}{XA} = \frac{XA'}{XW} = \frac 12$$</p>
<p>Therefore $M$ is a midpoint of $AX$.</p>
|
56,162 | <p>I'm trying to understand the Cartan decomposition of a semisimple Lie algebra, $\mathfrak g=\mathfrak k \oplus \mathfrak p$, where $[\mathfrak k,\mathfrak p] \subseteq \mathfrak p$, cf. the wikipedia article on <a href="http://en.wikipedia.org/wiki/Cartan_decomposition" rel="noreferrer">Cartan decomposition</a>.</p>
<p>I posted the following question on math.stackexchange.com, where Darij suggested to repost the question here as an answer is not completely obvious, I suppose.</p>
<p>Let $\mathfrak {so}_{n}$ denote the skew-symmetric complex $n \times n$-matrices and let $M$ denote the symmetric $n \times n$-matrices of trace 0.</p>
<p>Then $M$ is a module over the Lie algebra $\mathfrak {so}_n$ (this comes from the Cartan decomposition of $\mathfrak {sl}_n$).</p>
<blockquote>
<p>What is the decomposition of $M$ into irreducible $\mathfrak {so}_n$-modules? </p>
</blockquote>
<p>The standard representation of $\mathfrak {so}_n$ has dimension $n$, the adjoint representation has dimension $\frac 1 2 n \cdot (n-1)$ and there are two spin representations of small dimension. But I don't see a way how these, together with trivial representations, should add up to the dimension of $M$, which is $\frac 1 2 n \cdot (n+1)-1$. </p>
| Kelly Davis | 1,400 | <p>First consider the case $M = S^3$. Generalizing, consider the connected sum of a generic M with a sphere $M = M\# S^3$</p>
<p><strong>Edit</strong> Here's what I was thinking (Still not sure if it's all correct, but it seems closer to the spirit of Witten's paper than the obstruction arguments.) </p>
<p>Consider a gauge transform $f': M \rightarrow G$. Also, consider a gauge transformation $g' : S^3 \rightarrow G$ not homotopic to the identity. Continuity allows us to change $f'$ to a map $f$ homotopic to $f'$ such that in a neighborhood $U$ of $p \in M$ the map $f$ maps to the identity of $G$. We can define a map $g$ to have similar properties in a neighborhood $V$ of $q \in S^3$. </p>
<p>Do the connected sum around $p$ and $q$ and obtain $M\# S^3 = M$ as well as a gauge transform $h$ on $M\# S^3 = M$ obtained by joining $f$ and $g$. Now, <em>assume</em> $h$ is homotopic to the identity. </p>
<p>The homotopy taking $h$ to the identity can be used to construct a homotopy of $g$ to the identity. (Here we use the fact that $\pi_2(G)$ is trivial to continue the homotopy over the ball removed from $S^3$.) </p>
<p>But, no such homotopy of $g$ to the identity exists. Thus, $h$ is not homotopic to the identity. Hence, $\pi_3(G) = \mathbf{Z}$ implies there exist continuous maps $M \rightarrow G$ not homotopic to the identity.</p>
|
56,162 | <p>I'm trying to understand the Cartan decomposition of a semisimple Lie algebra, $\mathfrak g=\mathfrak k \oplus \mathfrak p$, where $[\mathfrak k,\mathfrak p] \subseteq \mathfrak p$, cf. the wikipedia article on <a href="http://en.wikipedia.org/wiki/Cartan_decomposition" rel="noreferrer">Cartan decomposition</a>.</p>
<p>I posted the following question on math.stackexchange.com, where Darij suggested to repost the question here as an answer is not completely obvious, I suppose.</p>
<p>Let $\mathfrak {so}_{n}$ denote the skew-symmetric complex $n \times n$-matrices and let $M$ denote the symmetric $n \times n$-matrices of trace 0.</p>
<p>Then $M$ is a module over the Lie algebra $\mathfrak {so}_n$ (this comes from the Cartan decomposition of $\mathfrak {sl}_n$).</p>
<blockquote>
<p>What is the decomposition of $M$ into irreducible $\mathfrak {so}_n$-modules? </p>
</blockquote>
<p>The standard representation of $\mathfrak {so}_n$ has dimension $n$, the adjoint representation has dimension $\frac 1 2 n \cdot (n-1)$ and there are two spin representations of small dimension. But I don't see a way how these, together with trivial representations, should add up to the dimension of $M$, which is $\frac 1 2 n \cdot (n+1)-1$. </p>
| Paul | 3,874 | <p>@kwl1026. Gauge transformations are sections of the Ad bundle $P\times_{Ad} g$ where $P\to M$ is the principal $G$ bundle; $g$ the lie algebra. When $G$ is abelian the adjoint action is trivial so, e.g. the $U(1)$ gauge group is always $Map(M,U(1))$ whether or not $P$ is trivial. Its homotopy classes are then $[M, U(1)]= H^1(M;Z)$, which is zero (for $M$ a closed 3-manifold) if and only if $M$ is a rational homology sphere.</p>
<p>An elementary answer to your original question for $SU(2)=S^3$ is that obstruction theory shows that the primary obstruction gives an isomorphism $[M,S^3]\to H^3(M;Z)$. An induction using the fibration $SU(n)\to SU(n+1)\to S^{2n+1}$ and cellular approximation shows that $[M,SU(n)]=[M,SU(2)]$. Other tricks can get you there for other $G$. It is true that the differnence in Chern-Simons invariants (suitably normalized) coincides with this isomorphism (composed with $H^3(M;Z)\to Z$), as indcated by Konrad. For $SU(2)$ it also agrees with the degree, as mentioned by Peter.</p>
<p>If $P$ is non-trivial you have to work a little harder, since you are asking what is the set of homotopy classes of sections of the fiber bundle $P\times_{Ad} g$. A useful reference is Donaldson's book on Floer homology.</p>
|
2,127,679 | <p>I need to find $\frac{a}{b} \mod c$.<br>
This is equal to $(a\cdot b^{\phi(c)-1}\mod c)$, when $b,c$ are co-prime. But what if that's not the case?<br>
To be more clear, I need $$\frac{10^{a\cdot b}-1}{10^b-1}\mod P$$ </p>
| Ben Grossmann | 81,360 | <p>What you're looking for is a solution to
$$
bx = a \pmod c
$$
If $b$ and $c$ are not coprime, write $d = \gcd(b,c)$, and write $c = (md)n$ in such a way that $md$ and $n$ are relatively prime. With the Chinese remainder theorem, it suffices to solve the system of equations
$$
bx = a \pmod {md}\\
bx = a \pmod n
$$
The second equation is necessarily solvable, but the first might not be.</p>
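<p>For the specific quotient in the question there is also a way around modular division altogether: $\frac{10^{ab}-1}{10^b-1}=\sum_{k=0}^{a-1}10^{bk}$ is a geometric sum of integers, so it can be reduced mod $P$ term by term even when $10^b-1$ and $P$ are not coprime. A Python sketch (my own, with a hypothetical helper name):</p>

```python
def repunit_quotient_mod(a, b, P):
    """(10^(a*b) - 1) // (10^b - 1) mod P, without modular division:
    the quotient equals sum_{k=0}^{a-1} 10^(b*k)."""
    r = pow(10, b, P)      # 10^b mod P
    total, term = 0, 1     # term runs through 10^(b*k) mod P
    for _ in range(a):
        total = (total + term) % P
        term = term * r % P
    return total
```

For very large $a$ one would replace the loop by a divide-and-conquer geometric-sum recursion, but the idea is the same.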
|
235,430 | <p>Suppose that a bounded sequence of real numbers $s_i$ ($i\in\omega$) has a limit $\alpha$ along some ultrafilter $\mu_1\in \beta{\Bbb N}\setminus{\Bbb N}$. Then given another ultrafilter $\mu_2\in \beta{\Bbb N}\setminus{\Bbb N}$, surely there exists some rearrangement $s_{r(i)}$ of $s_i$ that has the same limit $\alpha$.</p>
<p>One can easily extend this simple observation to a countable family of sequences. </p>
<p>Now given $s_{i;j}$ ( $i,j\in \omega$; values bounded for each fixed $j$) with limits $\alpha_j$ along a fixed $\mu_1\in \beta{\Bbb N}\setminus{\Bbb N}$, and given another ultrafilter $\mu_2\in \beta{\Bbb N}\setminus{\Bbb N}$, there exists a <em>simultaneous</em> rearrangement $s_{r(i);j}$ having the same limits $\alpha_j$ along $\mu_2$. </p>
<p>All this fails if we pass to size $c=2^\omega$ families of sequences. Indeed $s_{i;j}$ could then enumerate all bounded sequences. But all the limits $\alpha_j$ together would determine $\mu_1$. Taking limits of a simultaneous rearrangement of all the sequences amounts, equivalently, to taking limits of the original sequences along an ultrafilter $\mu_2'$ in the orbit of $\mu_2$ under the action of the symmetric group of $\Bbb N$ extended to $\beta\Bbb N$. Equality of all those limits thus forces $\mu_1=\mu_2'$, and that places $\mu_1$ and $\mu_2$ in the same orbit of the symmetric group action, a severe restriction on $\mu_2$.</p>
<p><strong>Question</strong>: If CH fails, what happens for a size $\omega_1$ family of sequences?</p>
| Paata Ivanishvili | 50,901 | <p>If diagonal entries of $Y$ are zero then there is an open question of Pełczyński which asks whether we have the lower bound<br>
$$
\mathbb{E} |x^{T}Yx| \geq \frac{1}{2} \sqrt{\mathbb{E} |x^{T}Yx|^{2}} ?
$$</p>
<p>K. Oleszkiewicz writes in his slides (see slide 170)
<a href="https://simons.berkeley.edu/sites/default/files/docs/481/oleszkiewiczslides.pdf" rel="nofollow noreferrer">https://simons.berkeley.edu/sites/default/files/docs/481/oleszkiewiczslides.pdf</a></p>
<blockquote>
<p>Known to be true for n ≤ 6. In general, unknown ... </p>
</blockquote>
<p><strong>Remark:</strong> One can weaken the assumption "diagonal entries of $Y$ are zero" to "$\mathrm{Tr}(Y)=0$" but then the conclusion would be
$$
\mathbb{E} |x^{T}Yx| \geq \frac{1}{2} \sqrt{\mathbb{E} |x^{T}Y^{0}x|^{2}}?
$$
where $Y^{0}$ is obtained from $Y$ by removing diagonal entries. Indeed, this follows from the identity $x^{T}Yx=x^{T}Y^{0}x$. </p>
|
235,430 | <p>Suppose that a bounded sequence of real numbers $s_i$ ($i\in\omega$) has a limit $\alpha$ along some ultrafilter $\mu_1\in \beta{\Bbb N}\setminus{\Bbb N}$. Then given another ultrafilter $\mu_2\in \beta{\Bbb N}\setminus{\Bbb N}$, surely there exists some rearrangement $s_{r(i)}$ of $s_i$ that has the same limit $\alpha$.</p>
<p>One can easily extend this simple observation to a countable family of sequences. </p>
<p>Now given $s_{i;j}$ ( $i,j\in \omega$; values bounded for each fixed $j$) with limits $\alpha_j$ along a fixed $\mu_1\in \beta{\Bbb N}\setminus{\Bbb N}$, and given another ultrafilter $\mu_2\in \beta{\Bbb N}\setminus{\Bbb N}$, there exists a <em>simultaneous</em> rearrangement $s_{r(i);j}$ having the same limits $\alpha_j$ along $\mu_2$. </p>
<p>All this fails if we pass to size $c=2^\omega$ families of sequences. Indeed $s_{i;j}$ could then enumerate all bounded sequences. But all the limits $\alpha_j$ together would determine $\mu_1$. Taking limits of a simultaneous rearrangement of all the sequences amounts, equivalently, to taking limits of the original sequences along an ultrafilter $\mu_2'$ in the orbit of $\mu_2$ under the action of the symmetric group of $\Bbb N$ extended to $\beta\Bbb N$. Equality of all those limits thus forces $\mu_1=\mu_2'$, and that places $\mu_1$ and $\mu_2$ in the same orbit of the symmetric group action, a severe restriction on $\mu_2$.</p>
<p><strong>Question</strong>: If CH fails, what happens for a size $\omega_1$ family of sequences?</p>
| Henry.L | 25,437 | <p>The expectation can be computed in closed form, and I think that without further assumptions on entries of the matrix $Y$, the Jensen bound is sharp, according to the following calculation:
$\begin{align}\mathbb{E}\left[\boldsymbol{X^{t}YX}\right] & =\mathbb{E}\left[\sum_{\substack{1\le i,j\le n}
}X_{i}X_{j}Y_{ij}\right]\\
& =\sum_{m=-n}^{n}\mathbb{E}\left[\sum_{1\le i,j\le n}X_{i}X_{j}Y_{ij}\left|\sum_{i=1}^{n}X_{i}=m\right.\right]\mathbb{P}\left(\sum_{i=1}^{n}X_{i}=m\right)\\
& =\sum_{m=-n}^{n}\sum_{1\le i,j\le n}Y_{ij}\mathbb{E}\left[X_{i}X_{j}\left|\sum_{i=1}^{n}X_{i}=m\right.\right]\mathbb{P}\left(\sum_{i=1}^{n}X_{i}=m\right)\\
& =\sum_{m=-n}^{n}\sum_{1\le i,j\le n}Y_{ij}\left\{ x_{i}x_{j}\mathbb{P}\left(X_{i}=x_{i},X_{j}=x_{j}\left|\sum_{i=1}^{n}X_{i}=m\right.\right)\right\} \mathbb{P}\left(\sum_{i=1}^{n}X_{i}=m\right)\\
& =\sum_{m=-n}^{n}\sum_{1\le i,j\le n}Y_{ij}\left\{ \sum_{x_{i},x_{j}\in\{\pm1\}}x_{i}x_{j}\mathbb{P}\left(\left.\sum_{i=1}^{n}X_{i}=m\right|X_{i}=x_{i},X_{j}=x_{j}\right)\mathbb{P}\left(X_{i}=x_{i},X_{j}=x_{j}\right)\right\} \\
& \text{The probability }\mathbb{P}\left(X_{i}=x_{i},X_{j}=x_{j}\right)=\left(\frac{1}{2}\right)^{x_{i}+x_{j}}\left(\frac{1}{2}\right)^{2-x_{i}-x_{j}}=\frac{1}{4},\forall x_{i},x_{j}\in\{\pm1\}\\
& =\sum_{m=-n}^{n}\sum_{1\le i,j\le n}Y_{ij}\frac{1}{4}\left\{ \sum_{x_{i},x_{j}\in\{\pm1\}}x_{i}x_{j}\mathbb{P}\left(\left.\sum_{i=1}^{n}X_{i}=m\right|X_{i}=x_{i},X_{j}=x_{j}\right)\right\} \\
& \text{The probability }\mathbb{P}\left(\left.\sum_{i=1}^{n}X_{i}=m\right|X_{i}=x_{i},X_{j}=x_{j}\right)=\begin{cases}
m\neq\pm n,\pm(n-2) & \begin{cases}
\frac{\left(\begin{array}{c}
n-2\\
\frac{n-2-m}{2}
\end{array}\right)\left(\frac{1}{2}\right)^{\frac{n-2-m}{2}+m}\left(\frac{1}{2}\right)^{\frac{n-2-m}{2}}}{1/4} & n-m\,even\\
0 & n-m\,odd
\end{cases}\\
m=\pm n & \left(\frac{1}{2}\right)^{n}I_{(x_{i}=x_{j}=1)}\text{ or }\left(\frac{1}{2}\right)^{n}I_{(x_{i}=x_{j}=-1)}\\
m=\pm(n-2) & \left(\frac{1}{2}\right)^{n}I_{(x_{i}=1,x_{j}=-1)}\text{ or }\left(\frac{1}{2}\right)^{n}I_{(x_{i}=-1,x_{j}=1)}
\end{cases}\,\text{treated as a 1-d random walk.}\\
& \text{Following simplification assumes }m\neq\pm n,\pm(n-2)\\
& =\sum_{m=-n}^{n}\sum_{1\le i,j\le n}Y_{ij}\frac{1}{4}\left\{ \sum_{x_{i},x_{j}\in\{\pm1\}}x_{i}x_{j}\left(\frac{1+(-1)^{n-m}}{2}\right)\cdot\frac{\left(\begin{array}{c}
n-2\\
\frac{n-2-m}{2}
\end{array}\right)\left(\frac{1}{2}\right)^{\frac{n-2-m}{2}+m}\left(\frac{1}{2}\right)^{\frac{n-2-m}{2}}}{1/4}\right\} \\
& =\sum_{m=-n}^{n}\sum_{1\le i,j\le n}Y_{ij}\left\{ \sum_{x_{i},x_{j}\in\{\pm1\}}x_{i}x_{j}\left(\frac{1+(-1)^{n-m}}{2}\right)\cdot\left(\begin{array}{c}
n-2\\
\frac{n-2-m}{2}
\end{array}\right)\left(\frac{1}{2}\right)^{\frac{n-2-m}{2}+m}\left(\frac{1}{2}\right)^{\frac{n-2-m}{2}}\right\}
\end{align}
$
If $Y=I_n$ then the lower bound is literally reached. If you want a Hanson-Wright type concentration bound, then it can be improved since Bernoulli random vectors are sub-gaussian:</p>
<blockquote>
<p><a href="https://arxiv.org/pdf/1306.2872.pdf" rel="nofollow noreferrer">Hanson-Wright inequality</a>.</p>
</blockquote>
<p>Let $X=(X_{1},\cdots X_{n})\in\mathbb{R}^{n}$ be a random vector with
independent components $X_{i}$ such that </p>
<p>(i)$EX_{i}=0$</p>
<p>(ii) $\left\Vert X_{i}\right\Vert _{\psi_{2}}=\sup_{p\geq1}p^{-\frac{1}{2}}\left[E\left|X_{i}\right|^{p}\right]^{\frac{1}{p}}\leq K$, i.e. its components have uniformly bounded sub-gaussian norm.</p>
<p>Then for an arbitrary $n\times n$ constant matrix $A$ and every $t\geq 0$ we can assert that</p>
<p>$\Pr\left\{ \left|X^{t}AX-EX^{t}AX\right|>t\right\} \leq2\exp\left(-c\cdot \min\left(\frac{t^{2}}{K^{4}\left\Vert A\right\Vert _{HS}^{2}},\frac{t}{K^{2}\left\Vert A\right\Vert }\right)\right)$ for some constant $c>0$.</p>
<p>where $\left\Vert A\right\Vert =\max_{x\neq0}\frac{\left\Vert Ax\right\Vert _{L^{2}}}{\left\Vert x\right\Vert _{L^{2}}}$ and $\left\Vert A\right\Vert _{HS}=\sqrt{\sum_{i,j}\left|a_{ij}\right|^{2}}$.</p>
<p>It is readily verified that a Bernoulli random vector satisfies (i) and (ii) with $K=2$. Therefore we can assert that </p>
<p>$$\Pr\left\{ \left|X^{t}YX-EX^{t}YX\right|>t\right\} \leq2\exp\left(-c\cdot \min\left(\frac{t^{2}}{K^{4}\left\Vert Y\right\Vert _{HS}^{2}},\frac{t}{K^{2}\left\Vert Y\right\Vert }\right)\right)$$</p>
<p>where $Y=yy^{t}-zz^{t}$ is symmetric as stated in the OP.</p>
|
3,033,344 | <p>Question: Tom only has 2 types of coins: 4 cents and 5 cents. Write a proof by induction that every amount n ≥ a can indeed be paid with Tom's coins.</p>
<p>1) Base Case: Tom can pay 12, 13, 14, 15, 16 and 17 cents.</p>
<p>2) Inductive step: Let n >= 17 and suppose that Tom can pay every amount k with 12 <= k < n </p>
<p>3) Proof of claim: I am confused now...</p>
<p>edit: it's a normal induction, not strong induction</p>
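<p>Since the claim is easy to check by machine, here is a small Python sanity check (my own, not part of the exercise) of the base cases and of the fact that every amount $n \ge 12$ is a nonnegative combination $4a+5b$:</p>

```python
def payable(n):
    """True iff n = 4a + 5b for some nonnegative integers a, b."""
    return any((n - 5 * b) % 4 == 0 for b in range(n // 5 + 1))

# the consecutive base cases that make the step n -> n + 4 possible
assert all(payable(n) for n in (12, 13, 14, 15))
# the induction conclusion, checked up to 500
assert all(payable(n) for n in range(12, 500))
# and 11 is the largest non-payable amount (the Frobenius number of 4 and 5)
assert not payable(11)
```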
| Melody | 598,521 | <p>You can show that <span class="math-container">$\phi:\mathbb{N}\to\mathbb{N}$</span> defined by <span class="math-container">$$\phi(n)=\#\{m\in\mathbb{N}:\text{gcd}(m,n)=1,1\leq m\leq n\}$$</span>
is multiplicative. That is, if <span class="math-container">$m,n\in\mathbb{N}$</span> are relatively prime, then <span class="math-container">$\phi(mn)=\phi(m)\phi(n).$</span> Using this you only have to solve the problem for prime powers, and everything else comes by multiplication. It's not hard to show <span class="math-container">$\phi(p^n)=p^n-p^{n-1},$</span> which allows us to compute <span class="math-container">$$\phi(360)=\phi(2^3\cdot5\cdot3^2)=(8-4)(5-1)(9-3)=96.$$</span></p>
<p>The function <span class="math-container">$\phi$</span> is actually very famous, and is known as the Totient function, or Euler's Totient function.</p>
|
3,752,770 | <p>I tested this in python using:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 10*2*np.pi, 10000)
y = np.sin(x)
plt.plot(y/y)
plt.plot(y)
</code></pre>
<p>Which produces:</p>
<p><a href="https://i.stack.imgur.com/pCwoV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pCwoV.png" alt="" /></a></p>
<p>The blue line representing <code>sin(x)/sin(x)</code> appears to be <code>y=1</code></p>
<p>However, I don't know if the values at the point where <code>sin(x)</code> crosses the x-axis really equals 1, 0, infinity or just undefined.</p>
| Knight wants Loong back | 569,595 | <p>There are few misconceptions regarding the rational functions. For example, if
<span class="math-container">$$
f(x) = \frac{x^2 -1}{x-1}
$$</span>
Then, we usually find that people write it out as
<span class="math-container">$$
f(x) = x+1
$$</span>
But they are not equivalent: they differ from each other at <span class="math-container">$x=1$</span>, where the former is undefined while the latter has the value <span class="math-container">$2$</span>. Let's start with <span class="math-container">$f(x) = x+1$</span> and try to get <span class="math-container">$\frac{x^2 -1}{x-1}$</span>.</p>
<p><span class="math-container">$$
f(x) = x+1 \\
f(x) = (x+1) \times \frac{x-1}{x-1} \\
f(x)= \frac{x^2-1}{x-1}
$$</span>
We can justify the second step by saying "well, <span class="math-container">$\frac{x-1}{x-1}$</span> is basically 1, we got a division by itself" but we forget two things, first <span class="math-container">$x-1$</span> is not a constant like real numbers it's a changing quantity, second the at <span class="math-container">$x=1$</span> we will get <span class="math-container">$x-1$</span> as 0. Now, will the argument "we got a division by itself" work? No, because 0 is something which doesn't have a property "itself". You can read division by zero <a href="https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwjt2dK1h8XqAhWBA3IKHV3-AK0QFjAXegQIARAB&url=https%3A%2F%2Fwww.math.utah.edu%2F%7Epa%2Fmath%2F0by0.html&usg=AOvVaw1ZyiBVER9C6DQWhlN0ohUB" rel="nofollow noreferrer">here</a></p>
<p>So, in the case <span class="math-container">$\frac{\sin x}{\sin x}$</span> we get <span class="math-container">$\sin x$</span> as 0 for <span class="math-container">$x= \pi n $</span>, <span class="math-container">$n= 0, 1 \cdots $</span>. Therefore, the functions
<span class="math-container">$$
f(x) = \frac{\sin x}{\sin x} \\
g(x) =1\\
$$</span>
are equivalent, except at the points <span class="math-container">$x=0, \pi, 2\pi, 3\pi, \cdots$</span>.</p>
<p>Hope it helps!</p>
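<p>As an aside on why the plot in the question looks like a constant $1$ everywhere: a floating-point grid almost never lands on an exact zero of $\sin$, since for example <code>sin(pi)</code> evaluates to roughly <code>1.2e-16</code> rather than <code>0</code>, and then the ratio is exactly <code>1.0</code>. Only at <code>x = 0.0</code> (which the asker's <code>linspace</code> does hit exactly) is the quotient genuinely $0/0$; numpy renders that entry as <code>nan</code> (with a RuntimeWarning) and matplotlib simply leaves a gap there. A standard-library sketch (my own illustration):</p>

```python
import math

# Away from the exact zeros, sin(x)/sin(x) is exactly 1.0 ...
s = math.sin(math.pi)      # ~1.2246e-16: tiny, but NOT zero
assert s != 0.0
assert s / s == 1.0

# ... but at an exact zero the expression is 0/0: undefined.
# (Pure Python raises; numpy would instead warn and produce nan.)
try:
    ratio = math.sin(0.0) / math.sin(0.0)
except ZeroDivisionError:
    ratio = float("nan")
assert ratio != ratio      # nan is the only value unequal to itself
```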
|
1,765,222 | <p>I have proven this by the induction method but would like to know if it can be proven using an alternative method.</p>
| Roman83 | 309,360 | <p>$$\frac{n(n^4-1)}{5}=\frac{n(n^2-1)(n^2+1)}{5}=\frac{(n-1)n(n+1)(n^2+1)}{5}$$
If $n=5k$, then $5|n$</p>
<p>If $n=5k+1$, then $5|n-1$</p>
<p>If $n=5k-1$, then $5|n+1$</p>
<p>If $n=5k\pm2$, then $n^2+1=(5k\pm2)^2+1=25k^2\pm10k+4+1=25k^2\pm10k+5=5(5k^2\pm2k+1)$, then $5|(n^2+1)$</p>
|
1,765,222 | <p>I have proven this by the induction method but would like to know if it can be proven using an alternative method.</p>
| S.C.B. | 310,930 | <p>By <a href="https://en.wikipedia.org/wiki/Fermat%27s_little_theorem" rel="nofollow">Fermat's Little Theorem</a>, we have that $$n^5 \equiv n \pmod 5 \Leftrightarrow \frac{n^5-n}{5} \in \mathbb{Z}$$</p>
|
769,504 | <p>It is mentioned in <a href="http://ac.els-cdn.com/0166864182900657/1-s2.0-0166864182900657-main.pdf?_tid=2ecebd88-ccce-11e3-ae74-00000aab0f6c&acdnat=1398467347_2b1c578dc3ae8c1a9107e7444203edb6" rel="nofollow noreferrer">this</a> article that the one-point compactification of an uncountable discrete space is a non-first-countable topological space in which ONE has a winning strategy in <a href="https://math.stackexchange.com/questions/765723/can-we-construct-from-0-omega-1-a-space-which-is-strictly-frechet-with-no-w">$G_{np}(q,E)$</a>.
I have tried to use the Alexandroff one-point compactification of the <a href="https://dantopology.wordpress.com/tag/mrowka-space/" rel="nofollow noreferrer">Mrowka space</a>: </p>
<blockquote>
<p><strong>Mrówka’s space $\Psi$:</strong> Subsets of $\omega$ are said to be <em>almost disjoint</em> if their intersection is finite. Let $\mathscr{A}$ be a maximal almost disjoint family of subsets of $\omega$, and let $\Psi=\omega\cup\mathscr{A}$. Points of $\omega$ are isolated. Basic open nbhds of $A\in\mathscr{A}$ are sets of the form $\{A\}\cup(A\setminus F)$, where $F$ is any finite subset of $A$. $\Psi$ is not even countably compact, since $\mathscr{A}$ is an infinite (indeed uncountable) closed, discrete set in $\Psi$.</p>
</blockquote>
<p><strong>Claim:</strong> $X=\Psi \cup \{ \infty \}$ is a non-first-countable topological space in which ONE has a winning strategy in $G_{np}(q,E)$ .</p>
<p><strong>Proof</strong>: It is obvious that $X$ is not first countable, since every point in $\mathscr{A}$ has an uncountable local base. We will show now that it satisfies $G_{np}(q,E)$.</p>
<p>Let $q \in \overline A$. If $q \in \omega$, then it is a discrete point. So, suppose $q$ is not in $\omega$. If $q \neq \infty$, then every open neighbourhood of $q$ is a cofinite subset of $q$. Since $q \in \mathscr{A}$ contains only points from $\omega$, this means that every infinite sequence of points $\{q_n\} \subset q$ from $\omega$ converges to $q$.</p>
<p>If $q = \infty$, then every open neighbourhood of $q$ is the complement of a compact set in $X$. Again, since an infinite set of points from $\omega$ or from $\mathscr{A}$ is not compact, every sequence of points that will be picked by TWO will converge to $q$. </p>
<p>What do you think? Is my proof ok?</p>
<p>Thank you!</p>
| user642796 | 8,348 | <p>Suppose that $\mathscr{B} \subseteq \mathscr{A}$, and $B \subseteq \omega$. It is not too difficult to show that $\mathscr{B} \cup B$ is a compact subset of $\Psi$ iff $\mathscr{B}$ is finite, and $B \setminus \bigcup \mathscr{B}$ is finite. From this it follows that if we make the very mild assumption that $\omega = \bigcup \mathscr{A}$, then the sets of the form $$\{ \infty \} \cup ( \Psi \setminus {\textstyle \bigcup_{i \leq n}} ( \{ A_i \} \cup A_i )) = X \setminus ( {\textstyle \bigcup_{i \leq n}} ( \{ A_i \} \cup A_i ) ), \tag{$\star$}$$ where the $A_i$ belong to $\mathscr{A}$, form a neighbourhood basis for $\infty$ in the one-point compactification $X = \Psi \cup \{ \infty \}$. In the sequel I will denote by $U(A_0 , \ldots , A_n)$ the set described in ($\star$).</p>
<p>ONE may not have a winning strategy for $G_{\text{np}} ( \infty , X )$. In fact, I'll describe a mad family $\mathscr{A}$ in which TWO has a winning strategy in this game. We will assume that ONE always plays a basic open neighbourhood of $\infty$ of the kind described in ($\star$).</p>
<p>Consider the full binary tree of height $\omega$: $C = 2^{<\omega}$, and let $\mathscr{A}$ be a mad family of subsets of $C$ which includes every branch through $C$. It follows that every set in $\mathscr{A}$ is either branch through $C$, or includes at most finitely many nodes from any branch.</p>
<p>Before play begins, TWO takes $x_{-1}$ to be the root of $C$. TWO's basic strategy is to play $x_{n+1}$ which properly extends $x_n$. (In this way, TWO's moves will constitute an increasing chain in $C$, which will have the unique branch containing each $x_i$ as its limit in $X$.)</p>
<p>Suppose that ONE's $n$th move is $U(B_1, \ldots , B_k, A_1 , \ldots , A_\ell)$ where each each $B_i$ is a branch through $C$, and each $A_i$ is a non-branch set in $\mathscr{A}$. Since there are uncountably many branches in $C$ through $x_n$, TWO picks some branch $B \in \mathscr{A}$ through $x_n$ which is not among the $B_i$. Then $B \cap A_i$ is finite for each $i \leq \ell$, and so TWO can pick some $x_{n+1}$ in $B$ extending $x_n$ which is not in any $A_i$.</p>
<hr>
<p><strong><em>Addendum</em></strong></p>
<p>In fact, given any mad family $\mathscr{A}$ on $\omega$ TWO has a winning strategy for the game $G_{\text{np}} ( \infty , X )$. This is because no sequence of points in $\omega$ can converge to $\infty$ in $X$. </p>
<blockquote>
<p>Suppose that $\langle n_i \rangle_{i \in \omega}$ is a one-to-one sequence in $\omega$. This means that $B = \{ n_i : i \in \omega \}$ is infinite, and so there is an $A \in \mathscr{A}$ which has infinite intersection with $B$. But then the open neighbourhood $U(A)$ of $\infty$ cannot include a tail of the sequence, and so the sequence cannot converge to $\infty$.</p>
</blockquote>
<p>Since open neighbourhoods of $\infty$ must include infinitely many natural numbers, it follows that as long as TWO plays natural numbers, the sequence constructed cannot converge to $\infty$.</p>
|
62,177 | <p>One of the most mind boggling results in my opinion is, with the axiom of choice/well-ordering principle, there exist such things as uncountable well-ordered sets $(A,\leq)$. </p>
<p>With this is mind, does there exist some well ordered set $(B,\leq)$ with some special element $b$ such that the set of all elements smaller than $b$ is uncountable, but for any element besides $b$, the set of all elements smaller is countable (by countable I include finite too). </p>
<p>More formally stated, how can one show the existence of a well ordered set $(B,\leq)$ such that there exists a $b\in B$ such that $\{a\in X\mid a\lt b\}$ is uncountable, but $\{a\in X\mid a\lt c\}$ is countable for all $c\neq b$?</p>
<p>It seems like this $b$ would have to "be at the very end of the order." </p>
| Craig | 15,279 | <p>The best explanation I've seen of this for the layman has to be "The Pancake at the Bottom", by Scott Aaronson: <a href="http://www.scottaaronson.com/writings/pancake.html" rel="nofollow">http://www.scottaaronson.com/writings/pancake.html</a></p>
|
2,762,230 | <blockquote>
<p>Let $I:=[a,b]$ a perfect interval and $\gamma\in C(I,\Bbb R^n)$ an injective path such that $\Gamma:=\gamma(I)$ is rectifiable. Show that $\dim_H(\Gamma)=1$.</p>
</blockquote>
<p>Here $\dim_H$ is the Hausdorff dimension. My work so far: </p>
<p>Note that the canonical projections $\pi_k$ are Lipschitz, and because $\gamma$ is continuous and its domain is compact and connected, $\Gamma$ is also compact and connected; thus $\pi_k(\Gamma)\subset\Bbb R$ is compact and connected. </p>
<p>Because $I$ is perfect and $\gamma$ injective then $\Gamma$ is not a singleton, so there is some $k\in\{1,\ldots,n\}$ such that $\pi_k(\Gamma)$ is a perfect closed interval, thus setting
$$
f:\Gamma\to\Bbb R^n,\, x\mapsto (\pi_k(x),0,\ldots,0)\tag1
$$
we can see that $f$ is also Lipschitz and we find that $\dim_H(\pi_k(\Gamma))=\dim_H(f(\Gamma))=1\le\dim_H(\Gamma)$ by some elementary identities of the Hausdorff outer measures.</p>
<p>However I'm unable to find a way to show that $\dim_H(\Gamma)\le 1$. I don't have a clue about how to do it. </p>
<p>Some random ideas that I had: I tried to use the fact that $\gamma$ has a continuous inverse on $\Gamma$, or some uniform polynomial approximation to $\Gamma$, or to combine the fact that $\Gamma$ is rectifiable and compact with the definition of the Hausdorff outer measure, but I didn't find anything.</p>
<p>Some help will be appreciated, thank you.</p>
| Masacroso | 173,262 | <p>After some mistakes I think I found a valid answer.</p>
<hr>
<p>First note that the function $\tilde\gamma: I\to\Gamma,\, t\mapsto\gamma(t)$ is bijective and note that for every closed set $C\subset I$ then $\gamma(C)$ is compact, and by the injectivity of $\gamma$ we find that
$$
\gamma(I\setminus C)=\gamma(I\cap C^\complement)=\gamma(I)\cap\gamma(C^\complement)=\Gamma\cap[\gamma(C)]^\complement=\Gamma\setminus\gamma(C)\tag1
$$
Hence $\gamma$ is open, so $\tilde\gamma^{-1}$ is continuous and consequently $\tilde\gamma$ is an homeomorphism.</p>
<p>Let $I=[a,b]$. I will show that for some rectifiable path $\Gamma$ parameterized by $\gamma\in C(I,\Bbb R^n)$, there is a partition $\mathfrak Z_\delta:=\{a_0,a_1,\ldots,a_m\}$ of $I$ with arbitrarily small mesh $\Delta_{\frak Z}=\delta>0$ such that
$$
2\,|\gamma(a_k)-\gamma(a_{k+1})|\ge\operatorname{diam}\big(\gamma([a_k,a_{k+1}])\big),\quad \forall k\in\{0,\ldots,m-2\}\tag2
$$
Now choose some arbitrarily small $\epsilon>0$, then we can define recursively
$$
B_k:=\sup\left\{A\subset\Gamma\cap\overline{\Bbb B}(\gamma(a_k),\epsilon): \gamma(a_k)\in A\text{ and }A\text{ is connected }\right\}\\
a_0:=a,\quad a_k:=\sup\gamma^{-1}(B_{k-1})\text{ for }k\ge 1\tag3
$$
where we used the fact that $\tilde\gamma$ is an homeomorphism, so for each $B_k$ the set $\gamma^{-1}(B_k)$ is a closed interval contained in $I$ and the set of points $\mathfrak Z:=\{a_0,a_1,\ldots,a_k,\ldots\}$ is well defined. </p>
<p>Moreover: because $\Gamma$ is rectifiable we have $|\mathfrak Z|<\infty$, and because $\tilde\gamma^{-1}$ is uniformly continuous, for any chosen $\delta>0$ there is an $\epsilon>0$ such that $|\gamma(x)-\gamma(y)|<\epsilon\implies |x-y|<\delta$; so for any chosen mesh $\Delta_{\frak Z}=\delta$ we can choose an arbitrarily small $\epsilon>0$ in $(3)$ and rename the set of points as $\frak Z_\delta$, which is a partition of $I$ of mesh $\delta$.</p>
<p>Now note that for $\mathfrak Z_\delta=\{a_0,a_1,\ldots,a_m\}$ by construction $|\gamma(a_k)-\gamma(a_{k+1})|=\epsilon$ for $k\neq m-1$ and $\operatorname{diam}(\gamma([a_k,a_{k+1}]))\le 2\epsilon$ because $\gamma([a_k,a_{k+1}])\subset\overline{\Bbb B}(\gamma(a_k),\epsilon)$, so $(2)$ holds.</p>
<p>Now from the definitions
$$
\begin{align*}\mathcal H_\epsilon^s(A):=&\inf\left\{\sum_{k=0}^\infty[\operatorname{diam}(A_k)]^s:A\subset\bigcup_{k=0}^\infty A_k\text{ and }\operatorname{diam}(A_k)<\epsilon,\,\forall k\in\Bbb N\right\}\\
L(\Gamma):=&\sup\left\{\sum_{k=0}^m|\gamma(a_k)-\gamma(a_{k+1})|:\{a_0,\ldots,a_{m+1}\}\text{ is a partition of }I\right\}\end{align*}\tag4
$$
we find that
$$
\mathcal H_\epsilon^1(\Gamma)\le\sum_{k=0}^{m-1}\operatorname{diam}\big(\gamma([a_k,a_{k+1}])\big)\le2\epsilon+2\sum_{k=0}^{m-2}|\gamma(a_k)-\gamma(a_{k+1})|\le 2L(\Gamma)+2\epsilon<\infty\tag5
$$
for arbitrarily small $\epsilon,\delta>0$ where $a_k\in\mathfrak Z_\delta$ (note that $\epsilon$ depends on the chosen $\Delta_{\frak Z}=\delta$; however, in any case it is bounded below by zero). And because $\mathcal H_*^1(\Gamma)=\lim_{\epsilon\to 0^+}\mathcal H_\epsilon^1(\Gamma)$ we find that $\mathcal H_*^1(\Gamma)\le2 L(\Gamma)<\infty$, and consequently $\dim_H(\Gamma)\le 1$, as desired.</p>
<hr>
<p>A simpler construction of a partition of $I$ by recursion is as follows
$$
a_{k+1}:=\inf\{x\in[a_k,b]:|\gamma(a_k)-\gamma(x)|=\epsilon\}\cup\{b\}\tag6
$$
where we set $a_0:=a$. Then we have a partition of $I$ defined by $\mathfrak Z:=\{a_0,a_1,\ldots,a_m\}$ with the property that
$$
\begin{align*}|\gamma(a_k)-\gamma(a_{k+1})|&=\operatorname{diam}\big(\gamma([a_k,a_{k+1}])\big),\quad\forall k\in\{0,\ldots,m-2\}\\
|\gamma(a_{m-1})-\gamma(a_m)|&\le\operatorname{diam}\big(\gamma([a_{m-1},a_m])\big)\le\epsilon\end{align*}\tag7
$$
(note that by construction $a_m=b$). Then we find that
$$
\mathcal H_\epsilon^1(\Gamma)\le\sum_{k=0}^{m-2}\operatorname{diam}\big(\gamma([a_k,a_{k+1}])\big)+\operatorname{diam}\big(\gamma([a_{m-1},a_m])\big)\\
\le\sum_{k=0}^{m-2}|\gamma(a_k)-\gamma(a_{k+1})|+\epsilon\le L(\Gamma)+\epsilon\tag8
$$
Then taking limits above as $\epsilon\to 0^+$ we find that $\mathcal H_*^1(\Gamma)\le L(\Gamma)$, and consequently that $\dim_H(\Gamma)\le 1$.</p>
|
1,291,511 | <p>This may seem like a silly question, but I just wanted to check. I know there are proofs that if $f(x)=f'(x)$ then $f(x)=Ae^x$. But can we 'invent' another function that obeys $f(x)=f'(x)$ which is <strong>non-trivial</strong>?</p>
| Paramanand Singh | 72,031 | <p>I think other answers given here assume the existence of a nice function $e^{x}$ and this makes the proof considerably simpler. However I believe that it is better to approach the problem of solving $f'(x) = f(x)$ without knowing anything about $e^{x}$.</p>
<p>When we go down this path our final result is the following:</p>
<blockquote>
<p><strong>Theorem</strong>: <em>There exists a unique function $f:\mathbb{R}\to \mathbb{R}$ which is differentiable for all $x \in \mathbb{R}$ and satisfies $f'(x) = f(x)$ and $f(0) = 1$. Further any function $g(x)$ which is differentiable for all $x$ and satisfies $g'(x) = g(x)$ is explicitly given by $g(x) = g(0)f(x)$ where $f(x)$ is the unique function mentioned previously.</em></p>
</blockquote>
<p>We give a simple proof of the above theorem without using any properties/knowledge of $e^{x}$. Let's show that if such a function $f$ exists then it must be unique. Suppose there is another function $h(x)$ such that $h'(x) = h(x)$ and $h(0) = 1$. Then the difference $F(x) = f(x) - h(x)$ satisfies $F'(x) = F(x)$ and $F(0) = 0$. We will show that $F(x) = 0$ for all $x$. Suppose that it is not the case and that there is a point $a$ such that $F(a) \neq 0$ and consider $G(x) = F(a + x)F(a - x)$. Clearly we have
\begin{align}
G'(x) &= F(a - x)F'(a + x) - F(a + x)F'(a - x)\notag\\
&= F(a - x)F(a + x) - F(a + x)F(a - x)\notag\\
&= 0
\end{align}
so that $G(x)$ is constant for all $x$. Therefore $G(x) = G(0) = F(a) \cdot F(a) > 0$. We thus have $F(a + x)F(a - x) > 0$ and hence putting $x = a$ we get $F(2a)F(0) > 0$. This contradicts $F(0) = 0$.</p>
<p>It follows that $F(x) = 0$ for all $x$ and hence the function $f$ must be unique. Now we need to show the existence. To that end we first establish that $f(x) > 0$ for all $x$. If there is a number $b$ such that $f(b) = 0$ then we can consider the function $\phi(x) = f(x + b)$ and it will have the property that $\phi'(x) = \phi(x)$ and $\phi(0) = 0$. By the argument in the preceding paragraph $\phi(x)$ is identically $0$ and hence $f(x) = \phi(x - b)$ is also identically $0$, contradicting $f(0) = 1$. Hence it follows that $f(x)$ is non-zero for all $x$. Since $f(x)$ is continuous and $f(0) = 1 > 0$, it follows by the intermediate value theorem that $f(x) > 0$ for all $x$.</p>
<p>Since $f'(x) = f(x) > 0$ for all $x$, it follows that $f(x)$ is strictly increasing and differentiable with a non-vanishing derivative. By the inverse function theorem the inverse function $f^{-1}$ exists (if $f$ exists) and is also increasing with non-vanishing derivative. Also, using techniques of differentiation, it follows that $f'(x) = f(x)$ implies that $\{f^{-1}(x)\}' = 1/x$ for all $x > 0$ and $f^{-1}(1) = 0$. Since $1/x$ is continuous, the definite integral $$\psi(x) = \int_{1}^{x}\frac{dt}{t}$$ exists for all $x > 0$ and has the properties of $f^{-1}$, and it is easy to show that $f^{-1}(x) = \psi(x)$. Clearly the function $(f^{-1}(x) - \psi(x))$ is constant as its derivative is $0$ and hence $$f^{-1}(x) - \psi(x) = f^{-1}(1) - \psi(1) = 0$$ so that $$f^{-1}(x) = \psi(x) = \int_{1}^{x}\frac{dt}{t}$$ Next, using the inverse function theorem, $f(x)$ exists. Thus the question of existence of $f(x)$ is settled.</p>
<p>Now consider $g(x)$ with $g'(x) = g(x)$. If $g(0) = 0$ then we know from the argument given earlier that $g(x) = 0$ for all $x$. If $g(0) \neq 0$ then we study the function $u(x) = g(x)/g(0)$. Clearly $u'(x) = u(x)$ and $u(0) = 1$, and hence it is the same as the unique function $f(x)$. Thus $g(x)/g(0) = u(x) = f(x)$ for all $x$. Hence $g(x) = g(0)f(x)$.</p>
<p>The unique function $f(x)$ in the theorem proved above is denoted by $\exp(x)$ or $e^{x}$.</p>
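As a purely numerical illustration of the theorem (my own addition, not part of the proof), one can integrate $f' = f$, $f(0)=1$ with the forward Euler method and watch the value at $x=1$ approach $e$:

```python
import math

# Forward Euler for f'(x) = f(x) with f(0) = 1, integrated up to x = 1.
# Each step multiplies by (1 + h), so the result is (1 + 1/n)^n -> e.
n = 100_000
h = 1.0 / n
f = 1.0
for _ in range(n):
    f += h * f
print(f)  # approaches e = 2.71828... from below as n grows
```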
|
1,822,008 | <p>Here are two functions:
$f\left(u,v\right)=u^{2}+3v^{2}$</p>
<p>$g\left(x,y\right)=\begin{pmatrix} e^{x}\cos y \\ e^{x}\sin y \end{pmatrix} $</p>
<p>I need to find the Jacobian matrix of $f\circ g$. I found the derivative of their composition:</p>
<p>$\frac{d\left(f\circ g\right) }{d\left(x,y\right) }=2e^{2x}\cos^{2}{y}+4e^{2x}\sin{y}\cos{y}+6e^{2x}sin^{2}{y} $</p>
<p>How do I put that in Jacobian matrix?</p>
| Community | -1 | <p>$$(f\circ g)(x,y) = h(x,y) = e^{2x}\cos^2(y)+3e^{2x}\sin^2(y)$$ Now just build the Jacobian matrix (AKA gradient because $h$ is a scalar-valued function) like normal: $$\pmatrix{\frac{\partial h}{\partial x} & \frac{\partial h}{\partial y}}$$</p>
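If it helps to double-check, the analytic entries can be compared against finite differences (a small sketch of my own; the test point $(0.3, 0.7)$ is arbitrary):

```python
import math

def h(x, y):
    # h(x, y) = f(g(x, y)) = e^{2x} cos^2(y) + 3 e^{2x} sin^2(y)
    return math.exp(2 * x) * (math.cos(y) ** 2 + 3 * math.sin(y) ** 2)

def grad_h(x, y):
    # The 1x2 Jacobian (gradient) of the scalar field h
    dh_dx = 2 * math.exp(2 * x) * (math.cos(y) ** 2 + 3 * math.sin(y) ** 2)
    dh_dy = 2 * math.exp(2 * x) * math.sin(2 * y)  # = 4 e^{2x} sin(y) cos(y)
    return dh_dx, dh_dy

x0, y0, eps = 0.3, 0.7, 1e-6
fd_dx = (h(x0 + eps, y0) - h(x0 - eps, y0)) / (2 * eps)
fd_dy = (h(x0, y0 + eps) - h(x0, y0 - eps)) / (2 * eps)
dx, dy = grad_h(x0, y0)
print(abs(fd_dx - dx), abs(fd_dy - dy))  # both differences are tiny
```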
|
3,374,248 | <p>I haven't worked out all the details yet, but it seems to be true for the following functions:</p>
<ul>
<li><span class="math-container">$f(k) = 1$</span></li>
<li><span class="math-container">$f(k) = 1/k!$</span></li>
<li><span class="math-container">$f(k) = a^k$</span></li>
<li><span class="math-container">$f(k) = 1/\log(k+1)$</span></li>
</ul>
<p>What are the conditions on <span class="math-container">$f$</span> for this to be true? It sounds like a fairly general result that should be easy to prove. Sums like these are related to the discrete self-convolution operator, so I'm pretty sure the result mentioned here must be well known. </p>
<p><strong>Update</strong>: A weaker result that applies to a broader class of functions is the following:
<span class="math-container">$$\sum_{k=1}^n f(k)f(n-k) = O\Big(n f^2(\frac{n}{2})\Big).$$</span>
Is it possible to find a counter-example, with a function <span class="math-container">$f$</span> that is smooth enough and in wide use?</p>
| Vincent Granville | 574,948 | <p>This is not an answer, but rather an upper bound. Using the Cauchy-Schwarz inequality, it is easy to obtain
<span class="math-container">$$\sum_{k=1}^n f(k)f(n-k)\leq \sum_{k=0}^n f^2(k)\sim \int_0^nf^2(x) dx.$$</span></p>
<p>For a full solution, consider two independently and identically distributed random variables <span class="math-container">$X_n, Y_n$</span> with <span class="math-container">$P(X_n =k) = P(Y_n=k) \propto f(k)$</span> for <span class="math-container">$k=0, 1, \cdots, n$</span>. This assumes that <span class="math-container">$f\geq 0$</span>. Then
<span class="math-container">$$\sum_{k=1}^n f(k)f(n-k)\approx P(X_n+Y_n = n) .$$</span></p>
<p>The distribution of <span class="math-container">$X_n + Y_n$</span> can be obtained using the convolution theorem.</p>
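For what it's worth, the first inequality is easy to sanity-check numerically (my own snippet; the choice $f(k)=1/(k+1)$ is an arbitrary positive test function, not one from the question):

```python
def f(k):
    return 1.0 / (k + 1)  # arbitrary positive test function

n = 50
# Left side: the convolution-type sum; right side: the Cauchy-Schwarz bound
lhs = sum(f(k) * f(n - k) for k in range(1, n + 1))
rhs = sum(f(k) ** 2 for k in range(0, n + 1))
print(lhs, rhs, lhs <= rhs)
```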
|
154,757 | <p>I have this data:</p>
<ul>
<li><p>$a=6$</p></li>
<li><p>$b=3\sqrt2 -\sqrt6$ </p></li>
<li><p>$\alpha = 120°$</p></li>
</ul>
<p><strong>How to calculate the area of this triangle?</strong></p>
<p>Here is the picture:</p>
<p><img src="https://i.stack.imgur.com/hr2Cp.jpg" alt=""></p>
| Peter | 152,834 | <p>Area: $S = 9-3\sqrt{3} \approx 3.80384758$ (the calculator below reports 3.80384750844 because it rounds the side $3\sqrt2-\sqrt6$ to 1.7931509).</p>
<p>Triangle calculation with its picture:</p>
<p><a href="http://www.triangle-calculator.com/?what=ssa&a=1.7931509&b=6&b1=120&submit=Solve" rel="nofollow">http://www.triangle-calculator.com/?what=ssa&a=1.7931509&b=6&b1=120&submit=Solve</a></p>
<p>Only one triangle with these sides and this angle exists.</p>
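The computation behind the calculator's result can be reproduced with the law of sines (my own sketch; I assume, as the calculator page does, that the $120°$ angle lies opposite the side of length $6$):

```python
import math

short = 3 * math.sqrt(2) - math.sqrt(6)  # about 1.7931509
long_side = 6.0                          # side opposite the 120-degree angle
beta = math.radians(120)

# Law of sines: sin(alpha) / short = sin(beta) / long_side
alpha = math.asin(short * math.sin(beta) / long_side)  # 15 degrees; the obtuse
# alternative (165 degrees) would make the angle sum exceed 180 degrees,
# which is why only one triangle exists.
gamma = math.pi - alpha - beta  # remaining angle, 45 degrees

# Area from two sides and the included angle
area = 0.5 * short * long_side * math.sin(gamma)
print(area)  # equals 9 - 3*sqrt(3), about 3.8038
```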
|
3,895,314 | <p>How do I prove <span class="math-container">$x ^ {1 - x}(1 - x) ^ {x} \le \frac{1}{2}$</span>, for every <span class="math-container">$x \in (0, 1)$</span>.</p>
<hr />
<p>For <span class="math-container">$x = \frac {1}{2}$</span> the LHS is equal to one half. I tried studying what happens when <span class="math-container">$x \lt \frac {1}{2}$</span> and its correspondent, but to no result.</p>
| Albus Dumbledore | 769,226 | <p>Let <span class="math-container">$a=x$, $b=1-x$</span>, so that</p>
<p><span class="math-container">$a+b=1$.</span></p>
<p>By the (weighted) AM-GM inequality, <span class="math-container">$$\frac{1}{2}=\frac{{(a+b)}^2}{2}\ge 2ab=\frac{ab+ba}{a+b}\ge \sqrt[a+b]{a^b b^a}=a^bb^a=x^{1-x}{(1-x)}^{x}$$</span></p>
|
3,895,314 | <p>How do I prove <span class="math-container">$x ^ {1 - x}(1 - x) ^ {x} \le \frac{1}{2}$</span>, for every <span class="math-container">$x \in (0, 1)$</span>.</p>
<hr />
<p>For <span class="math-container">$x = \frac {1}{2}$</span> the LHS is equal to one half. I tried studying what happens when <span class="math-container">$x \lt \frac {1}{2}$</span> and its correspondent, but to no result.</p>
| xpaul | 66,420 | <p>Let
<span class="math-container">$$ f(x)=\ln[x ^ {1 - x}(1 - x) ^ {x}]=(1-x)\ln x+x\ln(1-x) $$</span>
and then
<span class="math-container">$$ f'(x)=-\ln x+\frac{1-x}{x}+\ln(1-x)-\frac{x}{1-x}, f''(x)=-\frac{1-x+x^2}{x^2(1-x)^2} .$$</span>
Clearly <span class="math-container">$x=\frac12$</span> is the only point in <span class="math-container">$(0,1)$</span> such that <span class="math-container">$f'(x)=0$</span>, since <span class="math-container">$f''(x)<0$</span> in <span class="math-container">$(0,1)$</span> makes <span class="math-container">$f'$</span> strictly decreasing. Thus <span class="math-container">$f(x)$</span> attains its maximum at <span class="math-container">$x=\frac12$</span>, namely
<span class="math-container">$$ f(x)\le \ln(\frac12). $$</span>
So
<span class="math-container">$$ x ^ {1 - x}(1 - x) ^ {x}\le \frac12. $$</span></p>
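A quick grid check of the final inequality (my own addition):

```python
def g(x):
    return x ** (1 - x) * (1 - x) ** x

# Evaluate on a fine grid in (0, 1); the maximum sits at x = 0.5 with value
# 1/2 (up to floating-point rounding).
values = [g(k / 1000) for k in range(1, 1000)]
print(max(values))
```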
|
90,876 | <p>$$2x-\dfrac{x+1}{2} + \dfrac{1}{3}(x+3)= \dfrac{7}{3}$$</p>
<p>When I solve this I always end up with 11x = 5, which is wrong, no matter which way I solve it. Does anyone know how to solve it? Steps? (Because I know the answer should be x=1)</p>
| Jesko Hüttenhain | 11,653 | <p>$$\begin{align*}
&& 2x-\frac{x+1}{2}+\frac{x+3}{3} &= \frac{7}{3} & \cdot 6 \\
&\Leftrightarrow& 12x - 3x - 3 + 2x +6 &= 14 & \text{rearrange} \\
&\Leftrightarrow& 11x&=11
\end{align*}$$</p>
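And a one-line numerical confirmation that $x=1$ satisfies the original equation (my addition):

```python
x = 1
lhs = 2 * x - (x + 1) / 2 + (x + 3) / 3
print(lhs, 7 / 3)  # both evaluate to 2.333..., i.e. 7/3
```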
|
3,231,271 | <blockquote>
<p>Suppose <span class="math-container">$X$</span> is Banach and <span class="math-container">$T\in B(X)$</span> (i.e. <span class="math-container">$T$</span> is a linear and continuous map and <span class="math-container">$T:X \to X$</span>). Also, suppose <span class="math-container">$\exists c > 0$</span>, s.t. <span class="math-container">$\|Tx\| \ge c\|x\|, \forall x\in X$</span>. Prove <span class="math-container">$T$</span> is a compact operator if and only if <span class="math-container">$X$</span> is finite dimensional.</p>
</blockquote>
<p>"<span class="math-container">$X$</span> is finite dimensional <span class="math-container">$\implies$</span> <span class="math-container">$T$</span> is compact" is easy to show. To prove the other side, at first, I made a mistake, thinking <span class="math-container">$X$</span> is reflexive. Then this work can be easily done by the fact that any sequence of a reflexive linear space has a weakly convergent subsequence and <span class="math-container">$T$</span> is completely continuous (since <span class="math-container">$T$</span> is compact). But this is not the situation. </p>
<blockquote>
<p>So how to prove "<span class="math-container">$T$</span> is compact <span class="math-container">$\implies X$</span> is finite dimensional"?</p>
</blockquote>
| Robert Israel | 8,508 | <p>Hint: if not, the image of the unit ball of <span class="math-container">$X$</span> contains a ball in an infinite-dimensional space.</p>
|
2,966,871 | <blockquote>
<p>Define the unit sphere as <span class="math-container">$S^1=\{x\in \mathbb{R}^2: \|x\|=1\}$</span></p>
<p>Also define the real projective line as <span class="math-container">$\mathbb{R}P^1=S^1/(x\sim-x)$</span></p>
</blockquote>
<p>We can consider the mapping <span class="math-container">$f:S^1\rightarrow S^1$</span>, <span class="math-container">$f(x)=x^2$</span></p>
<p>If I can show that <span class="math-container">$f$</span> is a continuous quotient map, i.e. <span class="math-container">$f$</span> is a continuous surjective mapping such that <span class="math-container">$f(-x)=f(x)$</span> for all <span class="math-container">$x\in S^1$</span>, then I can apply the universal property of a quotient topology and conclude that there exists an induced homeomorphism between <span class="math-container">$\mathbb{R}P^1$</span> and <span class="math-container">$S^1$</span>.</p>
<p>I am unsure how to prove, however, that <span class="math-container">$f$</span> is both surjective and continuous. It's obvious that <span class="math-container">$f(-x)=f(x)$</span> for all <span class="math-container">$x\in S^1$</span>, but how should I go about the other two claims? I think I am overthinking this. Any help would be much appreciated.</p>
| Henno Brandsma | 4,280 | <p>The reason Ashvin gave for surjectivity, using the polar coordinates representation is perfectly fine:</p>
<blockquote>
<p>The map <span class="math-container">$f$</span> is surjective because in polar coordinates, it is given by <span class="math-container">$e^{i\theta} \mapsto e^{2i\theta}$</span>, and every angle <span class="math-container">$\psi \in [0, 2\pi)$</span> can be uniquely represented as <span class="math-container">$2\theta$</span> for some <span class="math-container">$\theta \in [0, \pi)$</span>.</p>
</blockquote>
<p>(Or simply note that every nonconstant polynomial is surjective on <span class="math-container">$\mathbb{C}$</span> by the fundamental theorem of algebra, and <span class="math-container">$|z^2| = |z|^2$</span>, so the norm of a preimage of a point of norm <span class="math-container">$1$</span> is still <span class="math-container">$1$</span>.)</p>
<p>Continuity is in fact easy: it follows from the universal property of quotient maps : if <span class="math-container">$q: X \to Y$</span> is a quotient map (So <span class="math-container">$Y$</span> has the quotient topology wrt <span class="math-container">$q$</span> and <span class="math-container">$X$</span>) then <span class="math-container">$g: Y \to Z$</span> is continuous iff <span class="math-container">$g \circ q: X \to Z$</span> is continuous. So continuity from a quotient space is determined by the composition with the quotient map.</p>
<p>In our case <span class="math-container">$X = S^1$</span> and <span class="math-container">$q: S^1 \to \mathbb{R}P^1$</span> is given by <span class="math-container">$q(x) = \{x,-x\}= [x]$</span>, the class of <span class="math-container">$x$</span> under <span class="math-container">$\sim$</span>. </p>
<p>The map <span class="math-container">$f(z) = z^2$</span> has the property that <span class="math-container">$f(z) = f(x)$</span> iff <span class="math-container">$z^2 = x^2$</span> (squares taken as members of <span class="math-container">$\mathbb{C}$</span>) iff <span class="math-container">$x^2 - z^2 = 0$</span> iff <span class="math-container">$(x-z)(x+z) = 0$</span> iff <span class="math-container">$x=z$</span> or <span class="math-container">$x=-z$</span> iff <span class="math-container">$x \sim z$</span>.</p>
<p>The fact that <span class="math-container">$x \sim x'$</span> implies <span class="math-container">$f(x) = f(x')$</span> implies that the map <span class="math-container">$\overline{f}: \mathbb{R}P^1 \to S^1$</span> defined by <span class="math-container">$\overline{f}([x]) = f(x)$</span> is <em>well-defined</em>: for all representatives of the class of <span class="math-container">$x$</span>, (<span class="math-container">$x$</span> and <span class="math-container">$-x$</span>), we get the same <span class="math-container">$f$</span>-value. But then this induced map <span class="math-container">$\overline{f}$</span> then obeys by definition</p>
<p><span class="math-container">$$\overline{f} \circ q = f$$</span> and <span class="math-container">$f$</span> is continuous, so <span class="math-container">$\overline{f}$</span> is continuous by the aforementioned universal property directly.</p>
<p>The property that <span class="math-container">$f(x) = f(x')$</span> implies <span class="math-container">$x \sim x'$</span> implies that <span class="math-container">$\overline{f}$</span> is 1-1: <span class="math-container">$\overline{f}[x] = \overline{f}[x']$</span> iff <span class="math-container">$f(x) = f(x')$</span> iff <span class="math-container">$[x] = [x']$</span>.</p>
<p>So <span class="math-container">$\overline{f}$</span> is a 1-1 continuous and onto (if <span class="math-container">$z \in S^1$</span> find <span class="math-container">$x\in S^1$</span> with <span class="math-container">$f(x) = z$</span> and then <span class="math-container">$\overline{f}([x]) = z$</span> too) map from the compact space <span class="math-container">$\mathbb{R}P^1$</span> (continuous image of <span class="math-container">$S^1$</span> under <span class="math-container">$q$</span>) to <span class="math-container">$S^1$</span>, hence a homeomorphism as <span class="math-container">$S^1$</span> is Hausdorff.</p>
|
81,982 | <p>I am beginning to do some work with cubical sets and thought that I should have an understanding of various extra structures that one may put on cubical sets (for the purposes of this question, connections). I know that cubical sets behave more nicely when one has an extra set of degeneracies called connections. The question is: Why these particular relations? Why do they show up? Precise references would be greatly appreciated.</p>
| Tim Porter | 3,502 | <p>A list of precise references for connections on cubical sets has to start with :</p>
<p>R. Brown, P. J. Higgins and R. Sivera, 2011, Nonabelian Algebraic Topology: Filtered spaces, crossed complexes, cubical homotopy groupoids , volume 15 of EMS Monographs in Mathematics , European Mathematical Society.</p>
<p>as there Brown, Higgins and Sivera have written out and explored the theory in detail. There are several introductory sections on connections, both in double categories and in cubical sets. The intuitions go back to the structure of the singular cubical complex of a space, in which there are cubes that are degenerate in an intuitive sense but are not of the 'constant in direction $i$' type. The typical example is a square with two adjacent sides constant and the other two copies of the same path. (I cannot draw it here!)</p>
<p>Ronnie Brown has numerous introductory articles on his website, and I will give you a link to the handout for a talk on higher dimensional group theory in which there is some discussion of the connections from a group-theoretic viewpoint (<a href="http://groupoids.org.uk/pdffiles/liverpool-beamer-handout.pdf" rel="nofollow">http://groupoids.org.uk/pdffiles/liverpool-beamer-handout.pdf</a>). The discussion is fairly near the end, so have a look for diagrams with cubes and hieroglyphic pictures!</p>
<p>The point made there is that if you want to say that the top face of a cube is the composite of its other faces, then on expanding the cube as a cross-shaped collection of five squares, there will be holes to fill in the corners, but connection squares are just the right form to fill them. (It is worth roaming around on Ronnie Brown's site, including <a href="http://groupoids.org.uk/brownpr.html" rel="nofollow">http://groupoids.org.uk/brownpr.html</a>, as there are several other chatty papers and Beamer presentations that may help.)</p>
<p>You can go back to the original Brown-Higgins papers, but as they have been used as the base for the new book, they may not give you anything extra.</p>
|
1,190,345 | <p>If $f$ is Riemann integrable on $[a,b]$ , is $|f|$ Riemann integrable on $[a,b]$ ?
(The metric is $\mathbb R$ usual)</p>
<p>The other question is: if $f$ is Riemann integrable on $[a,b]$, can I claim $f$ is bounded on $[a,b]$? (I think the answer can be either yes or no, depending on whether one considers generalized functions or not)</p>
<p>Update: I think I made a mistake in the second question: both the definition of the Riemann integral and that of the generalized Riemann integral on [a,b] require $f$ to be bounded on [a,b]. Sorry for my mistake.</p>
| Learnmore | 294,365 | <p>Let $\epsilon >0$ be arbitrary.</p>
<p>Since $f$ is Riemann integrable on $[a,b]$, by the Riemann criterion there is a partition $P=\{a=x_0<x_1<x_2<....<x_n=b\}$ of $[a,b]$ such that $\sum_{i=1}^n(M_i-m_i)\Delta x_i<\epsilon$, where</p>
<p>$M_i=\sup _{({x_{i-1},x_i})}f$ and $m_i=\inf _{({x_{i-1},x_i})}f$,</p>
<p>$M_i^{'}=\sup _{({x_{i-1},x_i})}|f|$ and $m_i^{'}=\inf _{({x_{i-1},x_i})}|f|$.</p>
<p>For all $x,y$ in the same subinterval, $\big||f(x)|-|f(y)|\big|\le|f(x)-f(y)|$, so $M_i^{'}-m_i^{'}\le M_i-m_i$ for each $i$. Then $\sum_{i=1}^n(M_i^{'}-m_i^{'})\Delta x_i\leq \sum_{i=1}^n(M_i-m_i)\Delta x_i<\epsilon$, and $|f|$ is Riemann integrable by the same criterion.</p>
|
1,190,345 | <p>If $f$ is Riemann integrable on $[a,b]$ , is $|f|$ Riemann integrable on $[a,b]$ ?
(The metric is $\mathbb R$ usual)</p>
<p>The other question is: if $f$ is Riemann integrable on $[a,b]$, can I claim $f$ is bounded on $[a,b]$? (I think the answer can be either yes or no, depending on whether one considers generalized functions or not)</p>
<p>Update: I think I made a mistake in the second question: both the definition of the Riemann integral and that of the generalized Riemann integral on [a,b] require $f$ to be bounded on [a,b]. Sorry for my mistake.</p>
| TomGrubb | 223,701 | <p>I will use the fact that $f(x)$ is Riemann integrable on $[a,b]$ if and only if it is bounded and continuous almost everywhere on $[a,b]$.</p>
<p>Let $f(x)$ be Riemann integrable on [a,b]. Then $f$ is bounded and continuous almost everywhere. Define $f_+(x)$ by $f_+(x)=f(x)$ if $f(x)>0$, and $f_+(x)=0$ otherwise. Then $f_+(x)$ will also be bounded and continuous almost everywhere. Thus $f_+(x)$ is Riemann integrable. Similarly, define $f_-(x)$ by $f_-(x)=f(x)$ if $f(x)\leq 0$ and $f_-(x)=0$ otherwise. Then $f_-(x)$ is also Riemann integrable by the same reasoning as above. Riemann integrability of $|f(x)|$ follows since $|f(x)|=f_+(x)-f_-(x)$.</p>
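The decomposition $|f| = f_+ - f_-$ used at the end is easy to verify pointwise on samples (a small check of my own, with an arbitrary sign-changing $f$):

```python
def f(x):
    return (x - 1) * (x + 2)  # arbitrary test function that changes sign

def f_plus(x):
    # f_+(x) = f(x) where f(x) > 0, else 0
    return f(x) if f(x) > 0 else 0

def f_minus(x):
    # f_-(x) = f(x) where f(x) <= 0, else 0
    return f(x) if f(x) <= 0 else 0

samples = [i / 10 for i in range(-30, 31)]
ok = all(abs(f(x)) == f_plus(x) - f_minus(x) for x in samples)
print(ok)  # True: |f| = f_+ - f_- holds at every sample
```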
|
2,403,201 | <p>How do I solve for $x$:</p>
<p>$$\log\left(\frac{1.07^x}{1050-2.5x}\right)=\log\left(\frac{1.2}{828}\right)$$</p>
<p>If I take $10$ to the power of each side, I get:
$\dfrac{1.07^x}{1050-2.5x}=\frac{1}{690}$</p>
<p>Then I'm stuck. How do I solve this ?</p>
<p>As suggested by @Kevin, I have decided to add my take here:</p>
<p>One way I could solve this is using Linear Interpolation Approximation.</p>
<p>We have,</p>
<p>$\frac{1.07^x}{1050-2.5x}=\frac{1}{690}$</p>
<p>$1-690\frac{1.07^x}{1050-2.5x}=0$</p>
<p>We need to get the LHS as close to $0$ as possible.</p>
<p>At $x=5(A)$,</p>
<p>LHS $\simeq$ 0.067219 (a)</p>
<p>Since LHS at $x=5$ is greater than $0$, we try at $x=7(B)$</p>
<p>LHS $\simeq$ -0.07311 (b)</p>
<p>Since LHS at $x=7$ is less than $0$, </p>
<p>$5<x<7$</p>
<p>Thus by interpolation,</p>
<p>$x=[A+\frac{a}{a-b}(B-A)]=[5+\frac{0.067219}{0.067219-(-0.07311)}(7-5)]\simeq5.958$</p>
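The same interpolation can be scripted, so the hand computation above is easy to re-run (my own sketch):

```python
def g(x):
    # the LHS: 1 - 690 * 1.07**x / (1050 - 2.5 x)
    return 1 - 690 * 1.07 ** x / (1050 - 2.5 * x)

A, B = 5, 7
a, b = g(A), g(B)              # a > 0 > b, so a root lies between 5 and 7
x = A + a / (a - b) * (B - A)  # linear interpolation (a regula falsi step)
print(x)  # about 5.958
```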
| Claude Leibovici | 82,404 | <p>As said in comments, the solution is given in terms of the Lambert function.</p>
<p>If you plot the function $$f(x)=\frac{1.07^x}{1050-2.5x}-\frac{1}{690}$$ you should notice that the solution is very close to $x=6$; this means that you could start Newton's method there and it converges quite fast, as shown below
$$\left(
\begin{array}{cc}
n & x_n \\
0 & 6\\
1 & 5.993055006 \\
2 & 5.993053313 \\
3 & 5.993053313
\end{array}
\right)$$ Sooner or later, you will learn that any equation which can be written or rewritten as $$A+B x+C \log(D+Ex)=0$$ has solution(s) in terms of the Lambert function.</p>
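The iteration in the table can be reproduced in a few lines (my own sketch; I use a central-difference approximation of $f'$ rather than the exact derivative):

```python
def f(x):
    return 1.07 ** x / (1050 - 2.5 * x) - 1 / 690

x, eps = 6.0, 1e-6
for _ in range(5):
    slope = (f(x + eps) - f(x - eps)) / (2 * eps)  # numerical derivative
    x -= f(x) / slope                              # Newton step
print(x)  # about 5.9930533, matching the table
```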
|
1,983,614 | <p>Consider a measurable space $(\Omega, \mathcal{F})$ and let $I$ be an arbitrary index set. </p>
<p>Is the following true?</p>
<blockquote>
<p>If $\left( A_i \right)_{i \in I}$ is a chain in $\mathcal{F}$ – that is, $\forall i \in I$, $A_i \in \mathcal{F}$ and for all $i, j \in I$, we have $A_i \subseteq A_j$ or $A_j \subseteq A_i$ – then $$\displaystyle \bigcup_{i \in I} A_i \in \mathcal{F}.$$</p>
</blockquote>
| bof | 111,012 | <p>No. Consider Lebesgue measure on the real line. Let $\kappa$ be the minimum cardinality of a non-measurable set, and let $A$ be a non-measurable set of cardinality $\kappa.$ Then $A$ is the union of a chain of sets of cardinality less than $\kappa,$ which are of course measurable sets.</p>
|
3,917,912 | <p>I am reading an article where the author seems to use a known relationship between the sum of a finite sequence of real positive numbers <span class="math-container">$a_1 +a_2 +... +a_n = m$</span> and the sum of their reciprocals. In particular, I suspect that
<span class="math-container">\begin{equation}
\sum_{i=1}^n \frac{1}{a_i} \geq \frac{n^2}{m}
\end{equation}</span><br />
with equality when <span class="math-container">$a_i = \frac{m}{n} \forall i$</span>. Are there any references or known theorems where this inequality is proven?</p>
<p><a href="https://math.stackexchange.com/a/1857918/852233">This</a> interesting answer provides a different lower bound. However, I am doing some experimental evaluations where the bound is working perfectly (varying <span class="math-container">$n$</span> and using <span class="math-container">$10^7$</span> uniformly distributed random numbers).</p>
| Bart Michels | 43,288 | <p>The author is using the Arithmetic Mean - Harmonic Mean ("AM-HM") inequality:
<a href="https://en.wikipedia.org/wiki/Harmonic_mean#Relationship_with_other_means" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Harmonic_mean#Relationship_with_other_means</a></p>
<p>This is a popular inequality in the math olympiad world; you can find a proof here:
<a href="https://artofproblemsolving.com/wiki/index.php/Root-Mean_Square-Arithmetic_Mean-Geometric_Mean-Harmonic_mean_Inequality" rel="nofollow noreferrer">https://artofproblemsolving.com/wiki/index.php/Root-Mean_Square-Arithmetic_Mean-Geometric_Mean-Harmonic_mean_Inequality</a></p>
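The experiment mentioned in the question is a one-liner to reproduce (my own sketch; the seed, range, and sample size are arbitrary):

```python
import random

random.seed(0)
n = 10_000
a = [random.uniform(0.1, 10.0) for _ in range(n)]  # positive numbers
m = sum(a)

lhs = sum(1 / x for x in a)  # sum of reciprocals
rhs = n * n / m              # the claimed lower bound n^2 / m
print(lhs >= rhs)  # AM-HM guarantees True; equality needs all a_i equal
```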
|
668,291 | <p>If $h$ and $k$ are any two distinct integers, then $h^n-k^n$ is divisible by $h-k$.</p>
<p>Let's start with the basis. Let $n=1$, then
$h^1-k^1 = h-k$</p>
<p>Now for the induction, I can't use $k$ because I don't want to be confused. So let $P(r)$ for $h^n-k^n$ and that's $h^r-k^r$</p>
<p>$h^r-k^r = h-k$</p>
<p>$h^r = h-k +k^r$</p>
<p>So, for $P(r+1)$</p>
<p>$h^{r+1}-k^{r+1}$</p>
<p>$h^r * h^1 - k^r * k^1$</p>
<p>$ (h-k +k^r) * h -k^r *k $</p>
<p>This is the point where I'm not certain if I should distribute the $h $ all over the place...so here it is</p>
<p>$ (h*h-k*h +k^r*h) -k^r *k $</p>
<p>$ (h*h)+(-k*h) +(k^r*h) -k^r *k $</p>
<p>$ (h)*(h-k) + (k^r)*(h-k)$</p>
<p>$(h-k) * (h+k^r)$</p>
| robjohn | 13,854 | <p>For $n=0$: $h-k\mid h^0-k^0$.</p>
<p>Suppose $h-k\mid h^n-k^n$, then
$$
\begin{align}
h^{n+1}-k^{n+1}
&=h\cdot h^n-k\cdot k^n\\
&=(\color{#C00000}{h-k})h^n+k(\color{#C00000}{h^n-k^n})
\end{align}
$$
Since $h-k$ and $h^n-k^n$ are divisible by $h-k$, so is $h^{n+1}-k^{n+1}$.</p>
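The statement is also easy to spot-check by brute force (a snippet of my own, independent of the induction):

```python
for h in range(-10, 11):
    for k in range(-10, 11):
        if h == k:
            continue  # h - k must be a nonzero divisor
        for n in range(0, 8):
            # Exact integer divisibility: remainder must be zero
            assert (h ** n - k ** n) % (h - k) == 0
print("h - k divides h^n - k^n in every tested case")
```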
|
2,960,734 | <p>So basically, I am given the following to prove:</p>
<blockquote>
<p>Let <span class="math-container">$+\gamma$</span> be a positively oriented smooth Jordan arc, and let <span class="math-container">$\omega$</span> denote the interior of <span class="math-container">$+\gamma$</span>. Recall that if <span class="math-container">$F = (F_1, F_2):D \to \mathbb{R}^2$</span> is a continuously differentiable vector field in an open set <span class="math-container">$D$</span> containing <span class="math-container">$\omega \cup(+\gamma)$</span>, then</p>
<p><span class="math-container">$$\iint_\omega \left(\frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y}\right) dxdy = \oint_{+\gamma}F \cdot \overrightarrow{ds} $$</span>
where the right hand-side is the line-integral of <span class="math-container">$F$</span> along the path <span class="math-container">$+\gamma$</span>.</p>
<p>By suitably choosing <span class="math-container">$F$</span>, prove that <span class="math-container">$$ \DeclareMathOperator{\Area}{Area} \DeclareMathOperator{\diameter}{diameter} \DeclareMathOperator{\length}{length} 2\Area(\omega) \leq \diameter(+\gamma) \length(+\gamma)$$</span>
where</p>
<p><span class="math-container">$\diameter(+\gamma) = \sup\{|z(s)-z(t)| : s,t \in [a,b]\}$</span> and <span class="math-container">$\length(+\gamma) = \int_{a}^{b} |z'(t)| dt$</span>.</p>
</blockquote>
<p>The only thing I know so far is that I need to find an <span class="math-container">$F$</span> such that <span class="math-container">$\frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y} =1$</span> because then <span class="math-container">$$\iint_{\omega} dxdy = \Area(\omega).$$</span> However, I do not know how to proceed from there!</p>
| Community | -1 | <p>I assume that <span class="math-container">$z$</span> is <span class="math-container">$\gamma$</span> in the definition of <span class="math-container">$\operatorname{diameter}(+\gamma)$</span>. Without loss of generality, we can assume that <span class="math-container">$\gamma(a)=(0,0)$</span> (otherwise, we can translate <span class="math-container">$\gamma$</span> until the point <span class="math-container">$\gamma(a)$</span> hits the origin).</p>
<p>I would take <span class="math-container">$\vec{F}=(F_1,F_2)$</span> with <span class="math-container">$F_1=-y$</span> and <span class="math-container">$F_2=x$</span>,
so that
<span class="math-container">$$\frac{\partial F_2}{\partial x}-\frac{\partial F_1}{\partial y}=2.$$</span>
By Green's theorem, we have
<span class="math-container">$$2\operatorname{Area}(\omega)=\iint_\omega \left(\frac{\partial F_2}{\partial x}-\frac{\partial F_1}{\partial y}\right)\ dx\ dy=\oint_{+\gamma}\vec{F}\cdot \vec{dr}.$$</span>
Hence,
<span class="math-container">$$2\operatorname{Area}(\omega)\leq \oint_{+\gamma}\left\vert\vec{F}\cdot \vec{dr}\right\vert\leq \oint_{+\gamma}\left\Vert\vec{F}\right\Vert\ dr.$$</span>
Because <span class="math-container">$\left\Vert\vec{F}\right\Vert\leq \operatorname{diameter}(+\gamma)$</span>, we conclude that
<span class="math-container">$$2\operatorname{Area}(\omega)\leq \oint_{+\gamma}\operatorname{diameter}(+\gamma)\ dr=\operatorname{diameter}(+\gamma)\oint_{+\gamma}dr=\operatorname{diameter}(+\gamma)\operatorname{length}(+\gamma).$$</span></p>
<hr>
<p>This inequality is quite weak. If we define <span class="math-container">$\operatorname{radius}(+\gamma)$</span> to be
<span class="math-container">$$\inf\Big\{r>0:\exists p\in\mathbb{R}^2,\ \forall u\in[a,b],\ \big\vert\gamma(u)-p\big\vert< r\Big\}\,,$$</span>
then we have
<span class="math-container">$$2\operatorname{Area}(\omega)\leq \operatorname{radius}(+\gamma)\operatorname{length}(+\gamma).$$</span>
The equality holds iff <span class="math-container">$\gamma$</span> traces a circle (once). You can use <a href="https://en.wikipedia.org/wiki/Jung%27s_theorem" rel="nofollow noreferrer">Jung's theorem</a> to show that
<span class="math-container">$$2\operatorname{Area}(\omega)\leq \frac{1}{\sqrt{3}}\operatorname{diameter}(+\gamma)\operatorname{length}(+\gamma),$$</span> which is stronger than the required result, but is still weak. So, it is an interesting question to find the infimum <span class="math-container">$\lambda_{\min}$</span> of all <span class="math-container">$\lambda>0$</span> such that
<span class="math-container">$$\operatorname{Area}(\omega)\leq \lambda\operatorname{diameter}(+\gamma)\operatorname{length}(+\gamma).$$</span>
We already know that <span class="math-container">$\lambda_{\min} \leq \frac1{2\sqrt{3}}$</span>.</p>
|
19,305 | <p>If I compute the eigenvalues and eigenvectors using <code>numpy.linalg.eig</code> (from Python), the eigenvalues returned seem to be all over the place. Using, for example, <a href="http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data" rel="nofollow">the Iris dataset</a>, the normalized eigenvalues are <code>[2.9108 0.9212 0.1474 0.0206]</code>, but the ones I currently have are <code>[9206.53059607 314.10307292 12.03601935 3.53031167]</code>.</p>
<p>The problem I'm facing is that I want to find out what percentage of the variance each component explains, but with the current eigenvalues I don't have the right values.</p>
<p>So, how can I transform my eigenvalues so that they can give me the correct proportion of variance?</p>
<p>Edit: Just in case it wasn't clear, I'm computing the <code>eig</code> of the covariance matrix (The process is called Principal Component Analysis).</p>
| leonbloy | 312 | <p>The sum of the eigenvalues equals the trace of the matrix. For an $N \times N$ covariance matrix, this amounts to $N \cdot \mathrm{VAR}$, where VAR is the variance of each variable (assuming the variances are equal; otherwise it would be the mean variance).
Put another way, the mean value of the eigenvalues equals the mean value of the variances.</p>
<p>And that's pretty much all that can be said. Perhaps you are computing the covariance matrix by just multiplying the data matrices? If so, you should divide by $N$.</p>
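<p>A NumPy sketch of this point (synthetic data; all names are mine): the eigenvalues of the unnormalized matrix differ from those of the properly normalized covariance matrix only by a constant factor, so the proportion of variance per component, each eigenvalue divided by their sum, comes out the same either way.</p>

```python
import numpy as np

# Sketch with synthetic data (rows = samples, columns = variables).
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 4)) * np.array([3.0, 2.0, 1.0, 0.5])

Xc = X - X.mean(axis=0)
scatter = Xc.T @ Xc              # covariance matrix *not* divided by N
cov = scatter / (len(X) - 1)     # properly normalized covariance

# Eigenvalues differ only by the constant factor (N - 1) ...
ev_scatter = np.linalg.eigvalsh(scatter)[::-1]
ev_cov = np.linalg.eigvalsh(cov)[::-1]

# ... so the proportion of variance per component is identical:
prop = ev_cov / ev_cov.sum()
assert np.allclose(prop, ev_scatter / ev_scatter.sum())
```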
|
3,763,744 | <p>The helix is a curve <span class="math-container">$x(t) \in \mathbb{R}^3$</span> defined by:</p>
<p><span class="math-container">$$
x(t) = \begin{bmatrix}
\sin(t) \\
\cos(t) \\
t
\end{bmatrix}
$$</span></p>
<p>and it takes the classic shape:</p>
<p><a href="https://en.wikipedia.org/wiki/File:Rising_circular.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4W73Vm.png" alt="simple helix" /></a></p>
<p>Does this have a natural extension from <span class="math-container">$\mathbb{R}^3$</span> to <span class="math-container">$\mathbb{R}^4$</span>? (Or even <span class="math-container">$\mathbb{R}^n$</span>?)</p>
<hr />
<hr />
<h3>What I've tried so far:</h3>
<p>The classic <span class="math-container">$\mathbb{R}^3$</span> helix curve above has two nice properties:</p>
<ul>
<li><span class="math-container">$x(t)$</span> has constant distance from the axis of propagation <span class="math-container">$\hat{e}_3$</span>, where <span class="math-container">$\hat{e}_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$</span></li>
<li><span class="math-container">$x(t)$</span> has constant angular velocity when projected onto the plane normal to <span class="math-container">$\hat{e}_3$</span>. i.e. the vector <span class="math-container">$(x_1(t), x_2(t))$</span> has polar coordinates <span class="math-container">$(r, \theta) = (1, t)$</span>, so <span class="math-container">$\dot{\theta} \equiv 1$</span>.</li>
</ul>
<p>The classic helix can be viewed as a parametric walk of a circle in <span class="math-container">$\mathbb{R}^2$</span>, with the parameter <span class="math-container">$t$</span> added as the third dimension. A natural extension to a helix in <span class="math-container">$\mathbb{R}^n$</span> would be a parametric walk of a curve on a hypersphere in <span class="math-container">$\mathbb{R}^{n-1}$</span>, with parameter <span class="math-container">$t$</span> added as the nth dimension. So for <span class="math-container">$\mathbb{R}^4$</span>, one could choose a <a href="https://en.wikipedia.org/wiki/Spiral#Spherical_spirals" rel="nofollow noreferrer">spherical spiral</a> to walk the sphere in <span class="math-container">$\mathbb{R}^3$</span>, and use parameter t as the 4th dimension:</p>
<p><span class="math-container">$$
x(t) = \begin{bmatrix}
\sin(t) \cos(ct) \\
\sin(t) \sin(ct) \\
\cos(t) \\
t
\end{bmatrix}
$$</span></p>
<p>The first three components are rendered on wikipedia as:</p>
<p><a href="https://en.wikipedia.org/wiki/File:Kugel-spirale-1-2.svg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uAlBAm.png" alt="spherical spiral" /></a></p>
<p>This construction matches the two properties I listed:</p>
<ul>
<li><span class="math-container">$x(t)$</span> has constant distance from the axis of propagation <span class="math-container">$\hat{e}_4$</span>, where <span class="math-container">$\hat{e}_4 = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}$</span></li>
<li>When <span class="math-container">$c=1$</span>, <span class="math-container">$x(t)$</span> has constant angular velocity when projected onto the 3-plane normal to <span class="math-container">$\hat{e}_4$</span>. i.e. the vector <span class="math-container">$(x_1(t), x_2(t), x_3(t))$</span> has spherical coordinates <span class="math-container">$(r, \theta, \phi) = (1, t, t)$</span>, so <span class="math-container">$\dot{\theta} = \dot{\phi} \equiv 1$</span>.</li>
</ul>
<p>It's technically a direct extension of the <span class="math-container">$\mathbb{R}^3$</span> helix, since <span class="math-container">$c=0$</span> induces an identical curve (up to a projection). But it still feels a little arbitrary, and the closed form will be quite ugly in higher dimensions.</p>
<p>Is there a generally accepted extension of the classical circular helix in <span class="math-container">$\mathbb{R}^3$</span> to <span class="math-container">$\mathbb{R}^4$</span>? (Or even <span class="math-container">$\mathbb{R}^n$</span>?) And do its properties or construction at all resemble the above?</p>
<hr />
<p>After some research, I've learned that there are interesting generalizations of helices in <span class="math-container">$\mathbb{R}^n$</span>, defined in terms of derivative constraints, Frenet frames, etc. such that even polynomial curves can behave as helices. [<a href="https://link.springer.com/article/10.1007/s00006-018-0835-1" rel="nofollow noreferrer">Altunkaya and Kula 2018</a>]. However, that's much more general than I'm seeking, since those are aperiodic, and may have unbounded distance from the axis of propagation. But the existence of such work is promising - I just don't know how to search this space well.</p>
| kdbanman | 426,612 | <p>After a few hours of digging around and thinking, I've found a way to more naturally express the spherical spiral idea in my question.</p>
<p><strong>I'm still not sure if my construction or properties make sense though, so I won't mark my own answer as correct here.</strong> Someone else with broader geometry knowledge should weigh in instead of me.</p>
<hr />
<p>One can write the classic <span class="math-container">$\mathbb{R}^3$</span> helix in <a href="https://en.wikipedia.org/wiki/Cylindrical_coordinate_system" rel="nofollow noreferrer">cylindrical coordinates</a> <span class="math-container">$(\rho, \phi, z)$</span>:</p>
<p><span class="math-container">$$
\begin{bmatrix}
x_1(t) \\
x_2(t) \\
z(t)
\end{bmatrix}
=
\begin{bmatrix}
\sin t \\
\cos t \\
t
\end{bmatrix}
\implies
\begin{bmatrix}
\rho(t) \\
\phi(t) \\
z(t)
\end{bmatrix}
=
\begin{bmatrix}
1 \\
t \\
t
\end{bmatrix}
$$</span></p>
<p>Cylindrical coordinates are a hybrid of <span class="math-container">$\mathbb{R}^2$</span> polar coordinates <span class="math-container">$(r, \theta)$</span>, plus an additional cartesian coordinate <span class="math-container">$(z)$</span>. In the diagram below, the helix would propagate vertically, winding around the <span class="math-container">$L$</span> axis.</p>
<p><a href="https://en.wikipedia.org/wiki/File:Coord_system_CY_1.svg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HnRmem.png" alt="cylindrical coordinates" /></a></p>
<p>So we can apply the same kind of hybrid using <span class="math-container">$\mathbb{R}^3$</span> spherical coordinates <span class="math-container">$(r, \theta, \phi)$</span> with <span class="math-container">$(z)$</span> to get the "hypercylindrical" coordinates <span class="math-container">$(\rho, \phi_1, \phi_2, z)$</span> and write the <span class="math-container">$\mathbb{R}^4$</span> helix from the question just as easily.</p>
<p><span class="math-container">$$
\begin{bmatrix}
x_1(t) \\
x_2(t) \\
x_3(t) \\
z(t)
\end{bmatrix}
=
\begin{bmatrix}
\sin t \cos t \\
\sin t \sin t \\
\cos t \\
t
\end{bmatrix}
\implies
\begin{bmatrix}
\rho(t) \\
\phi_1(t) \\
\phi_2(t) \\
z(t)
\end{bmatrix}
=
\begin{bmatrix}
1 \\
t \\
t \\
t
\end{bmatrix}
$$</span></p>
<p>and the pattern naturally extends for the general <span class="math-container">$\mathbb{R}^n$</span> helix. We use <span class="math-container">$\mathbb{R}^{n-1}$</span> <a href="https://en.wikipedia.org/wiki/N-sphere#Spherical_coordinates" rel="nofollow noreferrer">hyperspherical coordinates</a> to write the helix in <span class="math-container">$\mathbb{R}^n$</span> hypercylindrical coordinates</p>
<p><span class="math-container">$$
\begin{bmatrix}
\rho \\
\phi_1 \\
\phi_2 \\
... \\
\phi_{n-3} \\
\phi_{n-2} \\
z
\end{bmatrix}
=
\begin{bmatrix}
1 \\
t \\
t \\
... \\
t \\
t \\
t
\end{bmatrix}
$$</span></p>
<p>This trivially meets my listed properties, because</p>
<ul>
<li><span class="math-container">$\rho=1$</span> means constant (unit) distance from the axis of propagation <span class="math-container">$\hat{e}_n$</span>.</li>
<li><span class="math-container">$\phi_k = t \implies \dot{\phi_k} = 1$</span>, so angular velocity is also constant in all angular coordinate dimensions.</li>
</ul>
<p>Like I've said, though, I'm not sure those properties actually make sense for <span class="math-container">$\mathbb{R}^n$</span> helices.</p>
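<p>The two listed properties can at least be checked numerically for the <span class="math-container">$\mathbb{R}^4$</span> curve above (a sketch of my own):</p>

```python
import numpy as np

# The R^4 helix in hypercylindrical coordinates (rho, phi1, phi2, z) = (1, t, t, t),
# converted to Cartesian coordinates via the R^3 spherical-coordinate map.
t = np.linspace(0.0, 4.0 * np.pi, 2000)
x1 = np.sin(t) * np.cos(t)
x2 = np.sin(t) * np.sin(t)
x3 = np.cos(t)
z = t

# Property 1: constant unit distance from the propagation axis e_4
r = np.sqrt(x1**2 + x2**2 + x3**2)
assert np.allclose(r, 1.0)

# Property 2 holds by construction: phi1 = phi2 = t, so both angular
# velocities are identically 1 while z advances linearly.
assert np.allclose(np.diff(z), t[1] - t[0])
```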
|
3,441,346 | <p>I was asked to prove that a set <span class="math-container">$X$</span> is closed if and only if it contains all its limit points. I proceeded like so:</p>
<p>Let <span class="math-container">$X^\dagger=\partial X \cap X'$</span> and <span class="math-container">$X^\ast=\partial X \setminus X'$</span> with <span class="math-container">$X'$</span> being the derived set of <span class="math-container">$X$</span>. If <span class="math-container">$X$</span> is closed then:
<span class="math-container">$$X=\operatorname{int}(X) \,\cup \,\partial X =\operatorname{int}(X) \, \cup \,(X^\dagger \,\cup\,X^\ast)= $$</span>
<span class="math-container">$$=(\operatorname{int}(X)\,\cup\, X^\dagger)\, \cup \, X^\ast= $$</span>
<span class="math-container">$$=X' \, \cup \, X^\ast$$</span>
Therefore if <span class="math-container">$X$</span> is closed then it contains all its limit points. I.e., <span class="math-container">$X$</span> is closed if and only if <span class="math-container">$X'\subseteq X$</span>.</p>
<p>Is this correct, and if so, what are some better ways to prove this?</p>
| Will Cai | 470,180 | <p>Assume <span class="math-container">$X$</span> is closed, <span class="math-container">$a_n \in X$</span>, and <span class="math-container">$a_n \to a$</span>. If <span class="math-container">$a \notin X$</span>, then <span class="math-container">$a$</span> lies in the complement <span class="math-container">$X^c$</span>, which is open, so <span class="math-container">$\exists \epsilon>0$</span> with <span class="math-container">$B(a,\epsilon)\subseteq X^c$</span>. Then <span class="math-container">$\exists N$</span> such that <span class="math-container">$a_n \in B(a,\epsilon)\subseteq X^c$</span> for all <span class="math-container">$n>N$</span>. Contradiction, so every limit point of <span class="math-container">$X$</span> belongs to <span class="math-container">$X$</span>.</p>
<p>Conversely, suppose <span class="math-container">$X$</span> contains all its limit points, and consider any <span class="math-container">$a \in X^c$</span>. If for every <span class="math-container">$\epsilon>0$</span> we had <span class="math-container">$B(a,\epsilon) \not\subseteq X^c$</span>, we could pick <span class="math-container">$a_n \in B(a,\epsilon_n)\cap X$</span> with <span class="math-container">$\epsilon_n \to 0$</span>, so that <span class="math-container">$a_n\to a$</span> and hence <span class="math-container">$a \in X$</span>, a contradiction. Therefore <span class="math-container">$\exists \epsilon>0$</span> with <span class="math-container">$B(a,\epsilon) \subseteq X^c$</span>, so <span class="math-container">$X^c$</span> is open and <span class="math-container">$X$</span> is closed.</p>
|
4,496,736 | <p>Question: Use the variation of parameter method to find the general solution of the following differential equation
<span class="math-container">$$(\cos x) y''+(2\sin x) y'-(\cos x) y =0\;\;\;\;,\;\;\;\;0<x<1$$</span></p>
<p><strong>My Try:</strong></p>
<p>I think the question is wrong, since the right-hand side is 0, so the particular integral will also be zero. Thus, the general solution will equal the homogeneous solution. So I think there is no point in using the variation of parameters formulas, since <span class="math-container">$y_p(x)=0$</span> always.
I reduced the equation (as in the title) and then used the substitution <span class="math-container">$$y=(\cos x )z$$</span> to eliminate the <span class="math-container">$y'$</span> term, but I got another difficult DE: <span class="math-container">$z''-2\sec^2 x\, z=0$</span></p>
<blockquote>
<p>Please help with any suggestions or do you think question is correct. Is there a way to solve it ?</p>
</blockquote>
| Z Ahmed | 671,540 | <p>Note that <span class="math-container">$y_1(x)=\sin x$</span> is one solution of the second order linear ODE
<span class="math-container">$$y''+2\tan x \, y'-y=0$$</span>
If <span class="math-container">$y_1$</span> is one solution of ODE
<span class="math-container">$$y''+P(x)y'+Q(x)y=0.$$</span>
Then the other solution <span class="math-container">$y_2(x)$</span> is given as
<span class="math-container">$$y_2=C_2\,y_1\int \frac{\exp\left[-\int P(x)\, dx\right]}{y_1^2}\, dx$$</span>
<p>See <a href="https://math.stackexchange.com/questions/3913808/finding-one-solution-of-second-order-de-using-another-solution-using-wronskian">Finding one solution of second order DE using another solution using Wronskian</a>
So here in this case
<span class="math-container">$$y_2=C_2 \sin x \int \frac{\exp[-2\int \tan x dx]}{\sin^2 x} dx=C_2 \sin x \int \cot^2 x ~dx=C_2 \sin x (-x-\cot x)$$</span> <span class="math-container">$$=-C_2(x\sin x+ \cos x)$$</span></p>
<p>So the total solution of the original ODE is
<span class="math-container">$$y(x)=C_1\sin x+C_3(x \sin x+\cos x)$$</span></p>
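<p>As a verification sketch (my own, assuming SymPy is available), one can substitute this general solution back into the original equation and check that the residual simplifies to zero:</p>

```python
import sympy as sp

x, C1, C3 = sp.symbols('x C1 C3')
y = C1 * sp.sin(x) + C3 * (x * sp.sin(x) + sp.cos(x))

# Plug into the original ODE: (cos x) y'' + (2 sin x) y' - (cos x) y
residual = (sp.cos(x) * sp.diff(y, x, 2)
            + 2 * sp.sin(x) * sp.diff(y, x)
            - sp.cos(x) * y)
assert sp.simplify(residual) == 0
```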
|
2,853,668 | <blockquote>
<p>Show that $$\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{x^n}$$ converges for every $x>1$.</p>
</blockquote>
<p>Let $a(x)$ be the sum of the series. Is $a$ continuous at $x=2$? Is it differentiable?</p>
<p>I guess the first part uses the Leibniz criterion, but I am not sure about it.</p>
| mechanodroid | 144,766 | <p><strong>Hint:</strong></p>
<p>Use the geometric series:</p>
<p>$$a(x) = \frac1{x}\sum_{n=0}^\infty \frac{(-1)^n}{x^n} = \frac{1}{x\left(1+\frac1x\right)} = \frac{1}{x+1}$$</p>
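<p>A quick numerical check of the closed form (my own snippet): at $x=2$ the hint gives $a(2)=1/3$, and the partial sums agree.</p>

```python
# Partial sum of sum_{n>=1} (-1)^(n-1) / x^n versus the closed form 1/(x+1)
x = 2.0
partial = sum((-1) ** (n - 1) / x ** n for n in range(1, 60))
assert abs(partial - 1.0 / (x + 1.0)) < 1e-12
```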
|
2,969,004 | <p>I have seen several references to "order" of an element in the Symmetric Group. Specifically, that the order of a cycle is the least common multiple of the lengths of the cycles in its decomposition.</p>
<p>But the Symmetric Group is not cyclic, and I'm only familiar with the concept of "order" for cyclic groups. So what does it mean in this context?</p>
| Shweta Aggrawal | 581,242 | <p>Since you know about cyclic groups, think of the order of an element in the symmetric group <span class="math-container">$S_n$</span> as the size of the cyclic group generated by it.</p>
<p>Example 1: Let us take <span class="math-container">$S_3$</span>. Take <span class="math-container">$g=(123)$</span>. Consider the cyclic group generated by <span class="math-container">$g$</span>, that is <span class="math-container">$\langle g\rangle $</span>. <span class="math-container">$\langle g\rangle =\{(123),(132),(1)\}$</span>. So order of <span class="math-container">$g$</span> is 3 since cardinality of <span class="math-container">$\langle g\rangle $</span> is 3.</p>
<p>Example 2: Consider <span class="math-container">$S_4$</span> and take <span class="math-container">$g=(12)(34)$</span> Then <span class="math-container">$\langle g\rangle =\{(12)(34),(1)\}$</span>. Since cardinality of <span class="math-container">$\langle g\rangle $</span> is 2, order of <span class="math-container">$g$</span> is <span class="math-container">$2$</span>. </p>
<p>Example 3 (take it as an exercise): Consider <span class="math-container">$S_5$</span> and take <span class="math-container">$g=(123)(45)$</span>. Find <span class="math-container">$\langle g\rangle $</span> by explicitly writing each element and then calculate the order of <span class="math-container">$g$</span> from there.</p>
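<p>These examples can be checked mechanically. The sketch below is my own (using 0-based one-line notation rather than cycle notation): it finds the size of the cyclic group generated by g by composing g with itself until the identity returns, and compares with the lcm of the cycle lengths.</p>

```python
from math import lcm

# g = (1 2 3)(4 5) in S_5, written in 0-based one-line form:
# 0 -> 1 -> 2 -> 0 and 3 <-> 4
g = (1, 2, 0, 4, 3)

def compose(p, q):
    """(p o q)(i) = p[q[i]] for permutations in one-line form."""
    return tuple(p[q[i]] for i in range(len(q)))

identity = tuple(range(len(g)))
power, order = g, 1
while power != identity:        # walk through <g> until we return to id
    power = compose(g, power)
    order += 1

assert order == lcm(3, 2) == 6  # |<g>| equals the lcm of the cycle lengths
```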
|
3,433,249 | <p>I need to find the number of <span class="math-container">$7$</span>s if we write all the numbers from <span class="math-container">$1$</span> to <span class="math-container">$1000000$</span>(so <span class="math-container">$77$</span>, for example, counts as two <span class="math-container">$7$</span>s and not one).</p>
<p>Here's what I did:</p>
<p>I split the problem into <span class="math-container">$7$</span> sections:</p>
<p>The number of <span class="math-container">$7$</span>s in numbers with one seven: <span class="math-container">$\displaystyle \binom{7}{1} \cdot 9^6$</span> (the number of ways to place one <span class="math-container">$7$</span> times the number of possible numbers we could make with each placement. Note that leading zeros wouldn't be a problem since they would result in numbers with fewer than <span class="math-container">$7$</span> digits, which we need)</p>
<p>The number of <span class="math-container">$7$</span>s in numbers with two sevens:
<span class="math-container">$\displaystyle\binom{7}{2} * 9^5 * 2$</span> (same logic, but we multiplied it by <span class="math-container">$2$</span> since there are two sevens)</p>
<p>...</p>
<p>So my answer would be <span class="math-container">$\sum_{i=1}^7 \displaystyle \binom{7}{i}9^ii$</span> but my textbook says the right answer is <span class="math-container">$600000$</span>. I do understand its solution but I don't know why mine is wrong.</p>
<p>Thanks in advance!</p>
| fleablood | 280,126 | <p>One: the numbers <span class="math-container">$0$</span> to <span class="math-container">$999999$</span> (as <span class="math-container">$1000000$</span> doesn't have any <span class="math-container">$7$</span>) have <span class="math-container">$6$</span> digits, not seven.</p>
<p>Two: somehow you went from the power being <span class="math-container">$6-i$</span> to <span class="math-container">$i$</span>.</p>
<p><span class="math-container">$\sum_{k=1}^6 {6\choose k}9^{6-k}*k = 6*59049*1 + 15*6561*2 + 20*729*3 + 15*81*4 + 6*9*5 + 1*1*6=600000$</span></p>
<p>But why break it into seven cases?</p>
<p>If we count the number of times <span class="math-container">$7$</span> appears in the <span class="math-container">$n$</span>th position, there are <span class="math-container">$10^5$</span> ways the other digits can be. So a <span class="math-container">$7$</span> will appear in the <span class="math-container">$n$</span>th position <span class="math-container">$10^5$</span> times. So the total number of <span class="math-container">$7$</span>s that appear in any and all of the six positions will be <span class="math-container">$6*10^5$</span>.</p>
<p>Interesting result though <span class="math-container">$\sum_{k=1}^n {n \choose k}(b-1)^{n-k}*k = n*b^{n-1}$</span> apparently.</p>
<p>Think you can prove that or give an argument why it'd be so?</p>
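<p>Both the direct count and the closing identity are easy to confirm by brute force; this snippet is my own check, not part of the original answer:</p>

```python
from math import comb

# Count every digit 7 written in 1..1000000 directly ...
total = sum(str(n).count('7') for n in range(1, 1000001))
assert total == 600000

# ... and check the identity sum_k C(n,k) (b-1)^(n-k) k = n b^(n-1)
# for n = 6, b = 10:
assert sum(comb(6, k) * 9 ** (6 - k) * k for k in range(1, 7)) == 6 * 10 ** 5
```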
|
3,134,991 | <p>If nine coins are tossed, what is the probability that the number of heads is even?</p>
<p>So there can either be 0 heads, 2 heads, 4 heads, 6 heads, or 8 heads.</p>
<p>We have <span class="math-container">$n = 9$</span> trials, find the probability of each <span class="math-container">$k$</span> for <span class="math-container">$k = 0, 2, 4, 6, 8$</span></p>
<p><span class="math-container">$n = 9, k = 0$</span></p>
<p><span class="math-container">$$\binom{9}{0}\bigg(\frac{1}{2}\bigg)^0\bigg(\frac{1}{2}\bigg)^{9}$$</span> </p>
<p><span class="math-container">$n = 9, k = 2$</span></p>
<p><span class="math-container">$$\binom{9}{2}\bigg(\frac{1}{2}\bigg)^2\bigg(\frac{1}{2}\bigg)^{7}$$</span> </p>
<p><span class="math-container">$n = 9, k = 4$</span>
<span class="math-container">$$\binom{9}{4}\bigg(\frac{1}{2}\bigg)^4\bigg(\frac{1}{2}\bigg)^{5}$$</span></p>
<p><span class="math-container">$n = 9, k = 6$</span></p>
<p><span class="math-container">$$\binom{9}{6}\bigg(\frac{1}{2}\bigg)^6\bigg(\frac{1}{2}\bigg)^{3}$$</span></p>
<p><span class="math-container">$n = 9, k = 8$</span></p>
<p><span class="math-container">$$\binom{9}{8}\bigg(\frac{1}{2}\bigg)^8\bigg(\frac{1}{2}\bigg)^{1}$$</span></p>
<p>Add all of these up: </p>
<p><span class="math-container">$$=.64$$</span> so there's a 64% chance of probability?</p>
| Ethan Bolker | 72,858 | <p>If there are an even number of heads then there must be an odd number of tails. But heads and tails are symmetrical, so the probability must be <span class="math-container">$1/2$</span>.</p>
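<p>The symmetry argument can be cross-checked against the direct binomial computation the question attempts (a quick sketch of my own):</p>

```python
from math import comb

# P(even number of heads in 9 fair tosses): sum C(9,k)/2^9 over even k
p_even = sum(comb(9, k) for k in range(0, 10, 2)) / 2 ** 9
assert p_even == 0.5
```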
|