qid | question | author | author_id | answer |
|---|---|---|---|---|
184,361 | <p>I'm doing some exercises on Apostol's calculus, on the floor function. Now, he doesn't give an explicit definition of $[x]$, so I'm going with this one:</p>
<blockquote>
<p><strong>DEFINITION</strong> Given $x\in \Bbb R$, the integer part of $x$ is the unique $z\in \Bbb Z$ such that $$z\leq x < z+1$$ and we denote it by $[x]$.</p>
</blockquote>
<p>Now he asks to prove some basic things about it, such as: if $n\in \Bbb Z$, then $[x+n]=[x]+n$</p>
<p>So I proved it like this: Let $z=[x+n]$ and $z'=[x]$. Then we have that</p>
<p>$$z\leq x+n<z+1$$</p>
<p>$$z'\leq x<z'+1$$</p>
<p>Then $$z'+n\leq x+n<z'+n+1$$</p>
<p>But since $z'$ is an integer, so is $z'+n$. Since $z$ is unique, it must be that $z'+n=z$.</p>
<p>However, this doesn't seem to get me anywhere to prove that
$$\left[ {2x} \right] = \left[ x \right] + \left[ {x + \frac{1}{2}} \right]$$</p>
<p>and in general that</p>
<p>$$\left[ {nx} \right] = \sum\limits_{k = 0}^{n - 1} {\left[ {x + \frac{k}{n}} \right]} $$</p>
<p>Obviously one could do an informal proof thinking about "the carries", but that's not the idea, to say nothing of how tedious it would be. Maybe there is some easier or clearer characterization of $[x]$ in terms of $x$ to work this out.</p>
<p>Another property is
$$[-x]=\begin{cases}-[x]\text{ ; if }x\in \Bbb Z \cr-[x]-1 \text{ ; otherwise}\end{cases}$$</p>
<p>I argue: if $x\in\Bbb Z$, it is clear $[x]=x$. Then $-[x]=-x$, and $-[x]\in \Bbb Z$ so $[-[x]]=-[x]=[-x]$. For the other, I guess one could say:</p>
<p>$$n \leqslant x < n + 1 \Rightarrow - n - 1 < -x \leqslant -n$$</p>
<p>and since $x$ is not an integer, this should be the same as $$ - n - 1 \leqslant -x < -n$$</p>
<p>$$ - n - 1 \leqslant -x < (-n-1)+1$$</p>
<p>So $[-x]=-[x]-1$</p>
| N. S. | 9,176 | <p>This is one of my favourite exercises, because of the following neat solution:</p>
<p>Fix $n$. Let </p>
<p>$$ f(x) := \sum\limits_{k=0}^{n-1} \Biggl[x + \frac{k}{n}\Biggr] - [nx] \,.$$</p>
<p>Then $f(x)=0$ for all $x \in [0,\frac{1}{n})$, since all terms are zero, and it is easy to prove that $f(x+\frac{1}{n})=f(x)$.</p>
<p>It follows immediately that $f$ is identically $0$.</p>
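As a quick numerical sanity check of the identity (my Python sketch, not part of the answer; `f` transcribes the definition above, with sample points chosen away from integer boundaries to avoid floating-point edge cases):

```python
import math

def f(x, n):
    """f(x) = sum_{k=0}^{n-1} floor(x + k/n) - floor(n*x)."""
    return sum(math.floor(x + k / n) for k in range(n)) - math.floor(n * x)

# f vanishes on [0, 1/n) and has period 1/n, so it vanishes everywhere;
# spot-check both the vanishing and the periodicity on a few points.
for n in (2, 3, 7):
    for x in (-2.75, -0.4, 0.0, 0.3, 1.9, 5.125):
        assert f(x, n) == 0
        assert f(x + 1 / n, n) == f(x, n)  # periodicity
```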
|
184,361 | <p>I'm doing some exercises on Apostol's calculus, on the floor function. Now, he doesn't give an explicit definition of $[x]$, so I'm going with this one:</p>
<blockquote>
<p><strong>DEFINITION</strong> Given $x\in \Bbb R$, the integer part of $x$ is the unique $z\in \Bbb Z$ such that $$z\leq x < z+1$$ and we denote it by $[x]$.</p>
</blockquote>
<p>Now he asks to prove some basic things about it, such as: if $n\in \Bbb Z$, then $[x+n]=[x]+n$</p>
<p>So I proved it like this: Let $z=[x+n]$ and $z'=[x]$. Then we have that</p>
<p>$$z\leq x+n<z+1$$</p>
<p>$$z'\leq x<z'+1$$</p>
<p>Then $$z'+n\leq x+n<z'+n+1$$</p>
<p>But since $z'$ is an integer, so is $z'+n$. Since $z$ is unique, it must be that $z'+n=z$.</p>
<p>However, this doesn't seem to get me anywhere to prove that
$$\left[ {2x} \right] = \left[ x \right] + \left[ {x + \frac{1}{2}} \right]$$</p>
<p>and in general that</p>
<p>$$\left[ {nx} \right] = \sum\limits_{k = 0}^{n - 1} {\left[ {x + \frac{k}{n}} \right]} $$</p>
<p>Obviously one could do an informal proof thinking about "the carries", but that's not the idea, to say nothing of how tedious it would be. Maybe there is some easier or clearer characterization of $[x]$ in terms of $x$ to work this out.</p>
<p>Another property is
$$[-x]=\begin{cases}-[x]\text{ ; if }x\in \Bbb Z \cr-[x]-1 \text{ ; otherwise}\end{cases}$$</p>
<p>I argue: if $x\in\Bbb Z$, it is clear $[x]=x$. Then $-[x]=-x$, and $-[x]\in \Bbb Z$ so $[-[x]]=-[x]=[-x]$. For the other, I guess one could say:</p>
<p>$$n \leqslant x < n + 1 \Rightarrow - n - 1 < -x \leqslant -n$$</p>
<p>and since $x$ is not an integer, this should be the same as $$ - n - 1 \leqslant -x < -n$$</p>
<p>$$ - n - 1 \leqslant -x < (-n-1)+1$$</p>
<p>So $[-x]=-[x]-1$</p>
| Community | -1 | <p>The trick for the inductive proof is that you want to do induction on $x$, not $n$, and you want to take steps of size $\frac{1}{n}$, not steps of size $1$.</p>
<p>For the base case, observe that it's true for $x \in [0, \frac{1}{n})$, since all of the terms are zero.</p>
<p>Then for the inductive step, you can see that substituting $x \to x + \frac{1}{n}$ will increase exactly one term on the left hand side by $1$, and has the same effect on the right hand side. Alternatively, you can calculate</p>
<p>$$\begin{align} \sum_{k=0}^{n-1} \left\lfloor \left(x + \frac{1}{n}\right) + \frac{k}{n} \right\rfloor
&= \sum_{k=0}^{n-1} \left\lfloor x + \frac{k+1}{n} \right\rfloor
\\&= \left( \sum_{k=0}^{n-1} \left\lfloor x + \frac{k}{n} \right\rfloor
\right) + \left\lfloor x + \frac{n}{n} \right\rfloor
- \left\lfloor x + \frac{0}{n} \right\rfloor
\\&= \lfloor nx \rfloor + 1
\\&= \left\lfloor n \left( x + \frac{1}{n} \right) \right\rfloor
\end{align}$$</p>
<p>Consequently, if the theorem holds for $x = t$, then it also holds for $x = t + \frac{1}{n}$. Similarly, it holds for $x = t - \frac{1}{n}$. We thus conclude it holds for all real $x$.</p>
<p>To phrase this as an "ordinary" induction problem, the statement to be proved is:</p>
<blockquote>
<p>For every integer $k$, the equation holds for $x \in [\frac{k}{n}, \frac{k+1}{n})$.</p>
</blockquote>
<p>The base case is $k=0$. One inductive argument then proves the theorem for positive $k$, and another proves it for negative $k$.</p>
|
70,143 | <p>Is there a good way to find the fan and polytope of the blow-up of $\mathbb{P}^3$ along the union of two invariant intersecting lines?</p>
<p>Everything I find in the literature is for blow-ups along smooth invariant centers.</p>
<p>Thanks!</p>
| David E Speyer | 297 | <p>The question is local near the intersection of the two lines, so the more basic question is to work this out for $\mathbb{A}^3$. </p>
<p>So we want to blow up $k[x,y,z]$ at $\langle z, xy \rangle$. There are two maximal charts:</p>
<p>$$\mathrm{Spec} \ k[x,y,z, (xy)/z] \ \mbox{and} \ \mathrm{Spec} \ k[x,y,z,z/(xy)].$$</p>
<p>Note that the first chart is isomorphic to $k[a,b,c,d]/(ad-bc)$, so we expect it to correspond to a cone with four rays in the fan, while the second chart is isomorphic to $k[u,v,w]$, so we expect it to give an ordinary simplicial cone.</p>
<p>Let's work out exactly how these fit together. Our first semigroup is the integer points in the cone $\mathrm{Span}_{\mathbb{R}_{+}} ((1,0,0), (0,1,0), (0,0,1), (1,1,-1))$. The dual cone is $C_1 := \{ (u,v,w) : u,v,w \geq 0, \ u+v \geq w \}$. So we expect that cone to appear in the fan. Similarly, the second chart corresponds to the cone $C_2 := \{ (u,v,w) : u,v \geq 0, \ u+v \leq w \}$. We have omitted the inequality $w \geq 0$ because it is redundant.</p>
<p>So $\mathbb{A}^3$ blown up along two crossing lines is the toric variety for the fan which consists of the two cones $C_1$ and $C_2$ above (and their faces).</p>
<p>In order to do $\mathbb{P}^3$, you'll want to insert the picture I just described in place of the cone which corresponds to the torus invariant $\mathbb{A}^3$ around the intersection. You'll also have to modify the two other torus charts which contain one of the lines (each). Since that's a blow up at a smooth center, I'll leave it to you.</p>
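The duality computation for $C_1$ is easy to spot-check on lattice points (my sketch, not part of the answer): a point lies in the dual of the cone spanned by the four rays iff it pairs nonnegatively with each ray, and that should match the inequality description given above.

```python
# Duality spot-check: (u, v, w) pairs nonnegatively with all four rays
# iff u, v, w >= 0 and u + v >= w, i.e. iff it lies in C_1 as above.
rays = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, -1)]

def in_dual(u, v, w):
    return all(u * a + v * b + w * c >= 0 for (a, b, c) in rays)

def in_C1(u, v, w):
    return u >= 0 and v >= 0 and w >= 0 and u + v >= w

R = range(-3, 4)  # a small box of lattice points around the origin
assert all(in_dual(u, v, w) == in_C1(u, v, w)
           for u in R for v in R for w in R)
```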
|
1,073,947 | <p>If I'm given a value $n$, and I know it's of the form $p_{1}^x + p_{2}^y$, can I be sure that there is a unique solution for $x$ and $y$? And can I determine the values of $x$ and $y$ if I know the primes $p_{1}$ and $p_{2}$ (with $p_{1} \ne p_{2}$)?</p>
<p>If yes, how to prove it?</p>
| ajotatxe | 132,456 | <p>From
$$p^x+q^y=p^s+q^t$$
if $(x,y)\neq(s,t)$, we have
$$p^s(p^{x-s}-1)=q^y(q^{t-y}-1)$$</p>
<p>so $p\equiv 1\pmod q$ and $q\equiv 1\pmod p$. This is not possible.</p>
<p>THIS IS WRONG: PLEASE UNACCEPT (sorry for caps, only for visibility)</p>
<p>To find the exponents, if $n$ is not <em>very</em> big, the fastest and easiest way is perhaps bruteforcing.</p>
<p><em>Remark</em>: I have assumed that the exponents are positive. If they may be $0$ we have for example
$$2^3+3^2=2^4+3^0$$</p>
|
1,073,947 | <p>If I'm given a value $n$, and I know it's of the form $p_{1}^x + p_{2}^y$, can I be sure that there is a unique solution for $x$ and $y$? And can I determine the values of $x$ and $y$ if I know the primes $p_{1}$ and $p_{2}$ (with $p_{1} \ne p_{2}$)?</p>
<p>If yes, how to prove it?</p>
| Mike Bennett | 59,326 | <p>The short answer to the question is "no". For each of
$$
n \in \{ 11, 35, 133, 259, 2200 \},
$$
we can find distinct primes $p$ and $q$ for which we have
$$
p^a+q^b=p^c+q^d = n
$$
with $(a,b) \neq (c,d)$ and all of $a, b, c$ and $d$ positive integers. Probably this property is not true for other values of $n$, but that does not appear at all easy to prove.</p>
<p>Given $p$ and $q$ and the knowledge that $p^a+q^b=n$ for some integers $a$ and $b$, finding these exponents is certainly polynomial time in $n$.</p>
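A brute-force search (my illustration, not part of the answer; restricting to small primes is an assumption that happens to suffice for these $n$) confirms that each listed value has two distinct representations:

```python
# For each n, find all (a, b) with p^a + q^b == n and a, b >= 1,
# then check some prime pair (p, q) admits at least two of them.
def representations(n, p, q):
    """All (a, b) with p^a + q^b == n and a, b >= 1."""
    reps = []
    a = 1
    while p ** a < n:
        rest, b, t = n - p ** a, 0, 1
        while t < rest:
            t, b = t * q, b + 1
        if t == rest and b >= 1:
            reps.append((a, b))
        a += 1
    return reps

small_primes = (2, 3, 5, 7, 11, 13)
for n in (11, 35, 133, 259, 2200):
    assert any(len(representations(n, p, q)) >= 2
               for p in small_primes for q in small_primes if p != q)
```

For instance, $35 = 2^3 + 3^3 = 2^5 + 3^1$ and $11 = 2^3 + 3^1 = 2^1 + 3^2$.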
|
756,735 | <blockquote>
<p>Let $n>0$ be a positive integer. For all $x\not=0$, prove that $f(x) = 1/x^n$ is differentiable at $x$ with $f^\prime(x) = -n/x^{n+1}$ by showing that the limit of the difference quotient exists.</p>
</blockquote>
<p>I am having trouble seeing how I can manipulate the difference quotient in order to get a limit that exists</p>
<p>so far I have</p>
<p>$$f^\prime(x)= \lim_{h\rightarrow 0} \frac{f(x+h) - f(x)}{h} =\lim_{h\rightarrow 0} \frac{1/(x+h)^n - 1/x^n}{h}$$</p>
<p>All help is much appreciated.</p>
| J.R. | 44,389 | <p>Let $x\not=0$. Use the binomial theorem:</p>
<p>\begin{align}
\frac{\frac{1}{(x+h)^n}-\frac{1}{x^n}}{h} &=-\frac{(x+h)^n - x^n}{(x+h)^n x^n h}\\
&=-\frac{1}{h (x+h)^n x^n} \left(\sum_{k=0}^n {n\choose k} x^k h^{n-k} - x^n\right)\\
&=-\frac{1}{h (x+h)^n x^n} \sum_{k=0}^{n-1} {n\choose k} x^k h^{n-k}\\
&=-\frac{1}{(x+h)^n x^n} \sum_{k=0}^{n-1} {n\choose k} x^k h^{n-k-1}
\end{align}</p>
<p>Now we can safely let $h\rightarrow 0$. This makes the sum collapse to the summand with $k=n-1$ and we get</p>
<p>$$f^\prime(x)=-\frac{1}{x^{2n}} {n\choose n-1} x^{n-1}=-\frac{n}{x^{n+1}}$$</p>
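One can sanity-check the result numerically (a sketch, not part of the derivation): the difference quotient at small $h$ should be close to $-n/x^{n+1}$.

```python
# Compare the difference quotient of f(x) = 1/x^n with -n/x^(n+1).
def diff_quotient(x, n, h):
    return (1 / (x + h) ** n - 1 / x ** n) / h

for n in (1, 2, 5):
    for x in (0.5, -1.5, 3.0):
        exact = -n / x ** (n + 1)
        approx = diff_quotient(x, n, 1e-7)
        # truncation error is O(h), far below this tolerance
        assert abs(approx - exact) < 1e-4 * max(1.0, abs(exact))
```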
|
3,035,228 | <p>Show that two cardioids <span class="math-container">$r=a(1+\cos\theta)$</span> and <span class="math-container">$r=a(1-\cos\theta)$</span> are at right angles.</p>
<hr>
<p><span class="math-container">$\frac{dr}{d\theta}=-a\sin\theta$</span> for the first curve and <span class="math-container">$\frac{dr}{d\theta}=a\sin\theta$</span> for the second curve, but I don't know how to prove that they are perpendicular.</p>
| Nosrati | 108,128 | <p>The angle between tangent line and a ray from pole of a polar curve is
<span class="math-container">$$\tan\psi=\dfrac{r}{r'}$$</span>
then for these curves in every <span class="math-container">$\theta$</span> on curves
<span class="math-container">$$\tan\psi_1=\dfrac{r_1}{r'_1}=\dfrac{a(1-\cos\theta)}{a\sin\theta}=\tan\dfrac{\theta}{2}$$</span>
<span class="math-container">$$\tan\psi_2=\dfrac{r_2}{r'_2}=\dfrac{a(1+\cos\theta)}{-a\sin\theta}=-\cot\dfrac{\theta}{2}$$</span>
therefore
<span class="math-container">$$\tan\psi_1\tan\psi_2=-1$$</span></p>
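Numerically (an illustrative sketch, not part of the answer), the product of the two $\tan\psi$ values is indeed $-1$ at every sampled $\theta$, in particular at the intersection $\theta=\pi/2$ where $r_1=r_2=a$:

```python
import math

# Check tan(psi1) * tan(psi2) = -1 with tan(psi) = r / (dr/dtheta).
a = 1.0
r1 = lambda t: a * (1 - math.cos(t))   # first cardioid
dr1 = lambda t: a * math.sin(t)
r2 = lambda t: a * (1 + math.cos(t))   # second cardioid
dr2 = lambda t: -a * math.sin(t)

for t in (0.3, 1.0, math.pi / 2, 2.5):  # pi/2 is an intersection point
    product = (r1(t) / dr1(t)) * (r2(t) / dr2(t))
    assert abs(product + 1) < 1e-12
```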
|
1,333,994 | <p>We have a function $f: \mathbb{R} \to \mathbb{R}$ defined as</p>
<p>$$f(x)=\begin{cases} x, & x \notin \mathbb{Q} \\ \frac{m}{2n+1}, & x=\frac{m}{n},\ m\in \mathbb{Z},\ n \in \mathbb{N},\ \text{$m$ and $n$ coprime} \end{cases}.$$</p>
<p>Find where $f$ is continuous</p>
| Landon Carter | 136,523 | <p>If you know the AM-GM inequality, this is easy.</p>
<p>$a=\log_{\frac{1}{2}}(3)=-\log_23$ and $b=\log_3(\dfrac{1}{2})=-\log_32$.</p>
<p>Now $-a>0,-b>0$ so $-a-b>2\sqrt{|ab|}=2$ implying $a+b<-2$.</p>
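A quick numeric confirmation (illustration only, not part of the answer):

```python
import math

# a = log_{1/2}(3) and b = log_3(1/2); AM-GM on -a, -b gives a + b < -2.
a = math.log(3, 0.5)   # = -log_2(3), about -1.585
b = math.log(0.5, 3)   # = -log_3(2), about -0.631
assert math.isclose(a * b, 1.0)        # (-a)(-b) = log_2(3) * log_3(2) = 1
assert -a - b > 2 * math.sqrt(a * b)   # strict AM-GM since -a != -b
assert a + b < -2
```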
|
4,354,133 | <p>In general, if <span class="math-container">$K$</span> is a field, it could be that exists <span class="math-container">$f(x)\in K[x]$</span> such that <span class="math-container">$f(a)=0$</span> for all <span class="math-container">$a\in K$</span>; for example, set <span class="math-container">$K:=\mathbb Z/(2)$</span> and <span class="math-container">$f(x):=x^2+x$</span>.</p>
<p>Now, if <span class="math-container">$f(x)$</span> vanishes on all <span class="math-container">$\operatorname{Spec} K[x]$</span>, it means that <span class="math-container">$f(x)\in \mathfrak p$</span> for all <span class="math-container">$\mathfrak p\subset K[x]$</span>, so that <span class="math-container">$f(x)$</span> is nilpotent. However <span class="math-container">$1$</span> is clearly not nilpotent. Is this because the sets <span class="math-container">$\operatorname{Spec} K[x]\neq K$</span> in general, and the fact that <span class="math-container">$f$</span> vanishes on all <span class="math-container">$K$</span> only means that <span class="math-container">$f$</span> belongs to the (always maximal) ideals of the form <span class="math-container">$(x+a)$</span> for <span class="math-container">$a\in K$</span>? That would make sense to me, in fact in the example above <span class="math-container">$x^2+x$</span> belongs to both <span class="math-container">$(x)$</span> and <span class="math-container">$(x+1)$</span>.</p>
<p>So is there any characterization of the fields <span class="math-container">$K$</span> for which, if <span class="math-container">$f\in K[x_1,\dots ,x_n]$</span> vanishes on all <span class="math-container">$K^n$</span>, it vanishes on <span class="math-container">$\operatorname {Max Spec}K[x_1,\dots ,x_n]$</span>? For example if <span class="math-container">$K$</span> is algebraically closed and <span class="math-container">$n=1$</span>, then as sets <span class="math-container">$\operatorname{Max Spec}K[x_1]=K$</span>. However this already isn't true anymore for <span class="math-container">$n\gt 1$</span> or if <span class="math-container">$K$</span> is not algebraically closed. For example, are there any polynomials in <span class="math-container">$\mathbb R[x]$</span> belonging to all the ideals of the form <span class="math-container">$(x-a)$</span> but not in <span class="math-container">$x^2+1$</span>? What is the geometric interpretation of this fact, if there is any?</p>
| Jonas Linssen | 598,157 | <p>For <span class="math-container">$K$</span> a field <span class="math-container">$K[X_1,…,X_n]$</span> is an integral domain, in particular there are no nilpotent elements besides <span class="math-container">$0$</span>. Your argument shows that if <span class="math-container">$f(x_1,…,x_n)=0\in K$</span> for all choices of <span class="math-container">$x_1,…,x_n \in K$</span>, then we have
<span class="math-container">$$f \in \bigcap \limits_{{x}\in K^n} (X_1-x_1,…,X_n-x_n)$$</span>
If <span class="math-container">$K$</span> is algebraically closed this is by Hilbert’s Nullstellensatz equivalent to saying that
<span class="math-container">$$f \in \bigcap \limits_{\mathfrak{m}\in \operatorname{mSpec}(K[X])} \mathfrak{m} = \operatorname{jrad}(K[X])$$</span>
which is the Jacobson radical.</p>
<p>I don’t see why you should be able to deduce <span class="math-container">$f$</span> being in all prime ideals. I think in the case of <span class="math-container">$K[X]$</span> with <span class="math-container">$K$</span> algebraically closed this only works since all prime ideals are already maximal.</p>
|
4,354,133 | <p>In general, if <span class="math-container">$K$</span> is a field, it could be that exists <span class="math-container">$f(x)\in K[x]$</span> such that <span class="math-container">$f(a)=0$</span> for all <span class="math-container">$a\in K$</span>; for example, set <span class="math-container">$K:=\mathbb Z/(2)$</span> and <span class="math-container">$f(x):=x^2+x$</span>.</p>
<p>Now, if <span class="math-container">$f(x)$</span> vanishes on all <span class="math-container">$\operatorname{Spec} K[x]$</span>, it means that <span class="math-container">$f(x)\in \mathfrak p$</span> for all <span class="math-container">$\mathfrak p\subset K[x]$</span>, so that <span class="math-container">$f(x)$</span> is nilpotent. However <span class="math-container">$1$</span> is clearly not nilpotent. Is this because the sets <span class="math-container">$\operatorname{Spec} K[x]\neq K$</span> in general, and the fact that <span class="math-container">$f$</span> vanishes on all <span class="math-container">$K$</span> only means that <span class="math-container">$f$</span> belongs to the (always maximal) ideals of the form <span class="math-container">$(x+a)$</span> for <span class="math-container">$a\in K$</span>? That would make sense to me, in fact in the example above <span class="math-container">$x^2+x$</span> belongs to both <span class="math-container">$(x)$</span> and <span class="math-container">$(x+1)$</span>.</p>
<p>So is there any characterization of the fields <span class="math-container">$K$</span> for which, if <span class="math-container">$f\in K[x_1,\dots ,x_n]$</span> vanishes on all <span class="math-container">$K^n$</span>, it vanishes on <span class="math-container">$\operatorname {Max Spec}K[x_1,\dots ,x_n]$</span>? For example if <span class="math-container">$K$</span> is algebraically closed and <span class="math-container">$n=1$</span>, then as sets <span class="math-container">$\operatorname{Max Spec}K[x_1]=K$</span>. However this already isn't true anymore for <span class="math-container">$n\gt 1$</span> or if <span class="math-container">$K$</span> is not algebraically closed. For example, are there any polynomials in <span class="math-container">$\mathbb R[x]$</span> belonging to all the ideals of the form <span class="math-container">$(x-a)$</span> but not in <span class="math-container">$x^2+1$</span>? What is the geometric interpretation of this fact, if there is any?</p>
| Hank Scorpio | 843,647 | <p>What you ask happens exactly when <span class="math-container">$K$</span> is infinite. If <span class="math-container">$K=\Bbb F_q$</span> is finite, just take <span class="math-container">$\prod_{a\in K} (x_1-a)$</span>, which vanishes on every element of <span class="math-container">$\Bbb F_q^n$</span> but not on <span class="math-container">$(b,0,\cdots,0)$</span> where <span class="math-container">$b\in\Bbb F_{q^2}\setminus\Bbb F_q$</span>. If <span class="math-container">$K$</span> is infinite, you may prove by induction that the only polynomial which is zero on all of <span class="math-container">$K^n$</span> is the zero polynomial: write <span class="math-container">$p(x_1,\cdots,x_n)$</span> as a polynomial in <span class="math-container">$x_n$</span> with coefficients in <span class="math-container">$K[x_1,\cdots,x_{n-1}]$</span>, then see that if there's any point <span class="math-container">$(a_1,\cdots,a_{n-1})$</span> at which the coefficients don't all vanish, then <span class="math-container">$p$</span> has only finitely many roots of the form <span class="math-container">$(a_1,\cdots,a_{n-1},b)$</span> as <span class="math-container">$b$</span> varies. So all the coefficients must vanish, and eventually you're reduced to the case of polynomials in <span class="math-container">$K[x]$</span> vanishing at all elements of <span class="math-container">$K$</span>: but every polynomial in one variable has finitely many roots over a field.</p>
<p>The geometric interpretation is that the points <span class="math-container">$K^n\subset \operatorname{Spec} K[x_1,\cdots,x_n]$</span> are dense iff <span class="math-container">$K$</span> is infinite.</p>
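The finite-field obstruction is easy to demonstrate concretely (my sketch, not part of the answer; it relies on Fermat's little theorem, so the moduli below must be prime): $\prod_{a\in\mathbb F_q}(x-a) = x^q - x$ vanishes at every element of $\mathbb F_q$ without being the zero polynomial.

```python
# Evaluate x^q - x at each element of F_q = {0, ..., q-1}, q prime.
def vanishes_everywhere(q):
    return all((pow(x, q, q) - x) % q == 0 for x in range(q))

assert vanishes_everywhere(2)   # x^2 - x = x^2 + x over F_2 (since -1 = 1)
assert vanishes_everywhere(5)
assert vanishes_everywhere(7)
```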
|
345,766 | <p>I'm trying to calculate this limit expression:</p>
<p>$$ \lim_{s \to \infty} \frac{ab + (ab)^2 + ... (ab)^s}{1 +ab + (ab)^2 + ... (ab)^s} $$</p>
<p>Both the numerator and denominator should converge, since $0 \leq a, b \leq 1$, but I don't know if that helps. My guess would be to use L'Hopital's rule and take the derivative with respect to $s$, which gives me:</p>
<p>$$ \lim_{s \to \infty} \frac{s (ab)^{s-1}}{s (ab)^{s-1}} $$</p>
<p>but this still gives me the non-expression $\frac{\infty}{\infty}$ as the solution, and applying L'Hopital's rule repeatedly doesn't change that. My second guess would be to divide by some multiple of $ab$ and therefore simplify the expression, but I'm not sure how that would help, if at all. </p>
<p>Furthermore, the solution in the tutorial I'm working through is listed as $ab$, but if I evaluate the expression that results from L'Hopital's rule, I get $1$ (obviously). </p>
| Cameron Buie | 28,900 | <p><strong>Edit</strong>: Since you've added in the assumption that $a,b\in[0,1]$, then $0\le ab\le 1$, and so the numerator and denominator only diverge in the case that $a=b=1$. In that case, L'Hopital's rule does indeed yield a limit of $1$...which is precisely $ab$.</p>
<p>Otherwise, we have $$1+ab+\cdots+(ab)^s=\frac{1-(ab)^{s+1}}{1-ab},$$ whence $$\frac{ab+\cdots+(ab)^s}{1+ab+\cdots+(ab)^s}=1-\frac1{1+ab+\cdots+(ab)^s}=1-\frac{1-ab}{1-(ab)^{s+1}},$$ so since $0\le ab<1$, then $(ab)^{s+1}\to 0$ as $s\to\infty$, so again we have $ab$ as the limit.</p>
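The closed form can be spot-checked numerically (a sketch, assuming $0\le ab<1$ so that the partial sums behave):

```python
# Partial-sum ratio (ab + ... + (ab)^s) / (1 + ab + ... + (ab)^s) -> ab.
def ratio(ab, s):
    num = sum(ab ** k for k in range(1, s + 1))   # ab + ... + (ab)^s
    return num / (1 + num)

for ab in (0.0, 0.1, 0.5, 0.9):
    assert abs(ratio(ab, 200) - ab) < 1e-9
```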
|
345,766 | <p>I'm trying to calculate this limit expression:</p>
<p>$$ \lim_{s \to \infty} \frac{ab + (ab)^2 + ... (ab)^s}{1 +ab + (ab)^2 + ... (ab)^s} $$</p>
<p>Both the numerator and denominator should converge, since $0 \leq a, b \leq 1$, but I don't know if that helps. My guess would be to use L'Hopital's rule and take the derivative with respect to $s$, which gives me:</p>
<p>$$ \lim_{s \to \infty} \frac{s (ab)^{s-1}}{s (ab)^{s-1}} $$</p>
<p>but this still gives me the non-expression $\frac{\infty}{\infty}$ as the solution, and applying L'Hopital's rule repeatedly doesn't change that. My second guess would be to divide by some multiple of $ab$ and therefore simplify the expression, but I'm not sure how that would help, if at all. </p>
<p>Furthermore, the solution in the tutorial I'm working through is listed as $ab$, but if I evaluate the expression that results from L'Hopital's rule, I get $1$ (obviously). </p>
| André Nicolas | 6,312 | <p><strong>Hint:</strong> Use the closed form expression
$$1+r+r^2+\cdots +r^{n}=\frac{1-r^{n+1}}{1-r}.$$
Note that this only applies for $r\ne 1$. </p>
|
2,666,425 | <p>I'm working through Vakil's excellent The Rising Sea notes, and in an exercise, the following question is posed:</p>
<p>If $X$ is a topological space, show that the fibered product always exists in the category of open sets of $X$, by describing what a fibered product is. </p>
<p>Now I know intuitively the fibered product of 2 open sets is nothing but the intersection as the maps between them are that of inclusion. But I'm having trouble proving that explicitly using the universal property. Can anybody help me out? </p>
| lush | 499,195 | <p>So lets take two open sets $U,V \subseteq Y \subseteq X$, where $Y$ is open as well.</p>
<p>That means we have two morphisms $U \to Y$, $V \to Y$.
Now the fibered product of those morphisms (lets call it $W$) is an object of the same category - hence it is an open subset of $X$.
Furthermore we need some maps $W \to U$, $W \to V$.
By the definition of our category this translates into: $W \subseteq U$, $W\subseteq V$. Hence we have $W \subseteq U\cap V$ for sure.</p>
<p>Now we want the following: Given any open subset $W' \subseteq X$ together with morphisms $W' \to U$, $W'\to V$ that make the appropriate diagram commute (which is trivial in our case, do you see why?), we want a morphism $W' \to W$ that makes all "new" diagrams commute (again, it is trivial that all diagrams commute once we have a morphism - that is all we need to check).</p>
<p>$W' \to W$ translates to $W' \subseteq W$. Hence intuitively we'd expect $W$ to be the biggest open set contained in both $U, V$. As you've already said, that is $U \cap V$. Now why does $U \cap V$ satisfy the universal property of the fiber product? Because if $W' \subseteq U$, $W' \subseteq V$ it follows that $W' \subseteq U \cap V$, which exactly means that there is a morphism $W' \to U\cap V$. Again, by the very definition of our category it is clear, that this morphism is unique - there is at most one morphism between any two objects.</p>
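A toy computational model may help (my sketch, not part of the answer): take the open sets of a finite discrete topology, with at most one morphism $U \to V$, existing iff $U$ is a subset of $V$, and verify that $U \cap V$ satisfies the universal property.

```python
from itertools import chain, combinations

# Objects: all open sets of the discrete topology on a 4-point space.
X = frozenset(range(4))
opens = [frozenset(s) for s in
         chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))]

U, V = frozenset({0, 1, 2}), frozenset({1, 2, 3})
W = U & V                    # candidate fibered product
assert W <= U and W <= V     # the projections W -> U and W -> V exist
for Wp in opens:
    if Wp <= U and Wp <= V:  # any W' with morphisms to U and V...
        assert Wp <= W       # ...factors (uniquely) through W
```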
|
<p>Of all the possible combinations of positive numbers that sum to $10$, which has the largest product?</p>
<p>I had also got a clue: it's related to <a href="http://en.wikipedia.org/wiki/E_%28mathematical_constant%29" rel="nofollow"><code>e</code></a>.</p>
<p>Please help! (I need explanation aswell)</p>
<p><strong>Notice</strong>: i said positive not natural so you can use fractions.</p>
| Seyhmus Güngören | 29,940 | <p>It is a simple sort of constrained optimization problem. Assume we have two positive numbers adding up to $10$, $x+y=10$, and we want to maximize $xy$. Rewriting this as $\max_{x} x(10-x)$ and setting $\frac{d(10x-x^2)}{dx}=0$ gives $x=5$. With $3$ or $4$ (or more) numbers one can show similarly that the product is maximized when $x=y=z=t=\cdots$. Therefore one only needs to find the number of parts that maximizes the product. For $2$ parts, namely $x+y=10$, we get $25$; for $3$ parts, namely $x+y+z=10$, we have $3x=10$, so $x=10/3$ and the product is $10^3/3^3\approx 37.04$. With $4$ parts we get $10^4/4^4\approx 39.06$, and with $5$ parts $10^5/5^5=32$. Since the function $10^k/k^k$ goes to $0$ as $k\rightarrow\infty$, the optimum is $k=4$, which gives $39.06$.</p>
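These numbers are quick to verify (illustrative sketch):

```python
# (10/k)^k is the product of k equal parts summing to 10; the maximum over
# integer k is k = 4, consistent with 10/e being about 3.68.
products = {k: (10 / k) ** k for k in range(1, 11)}
best_k = max(products, key=products.get)
assert best_k == 4
assert abs(products[3] - 37.04) < 0.01
assert products[4] == 39.0625   # 2.5^4, exact in binary floating point
assert products[5] == 32.0
```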
|
<p>Of all the possible combinations of positive numbers that sum to $10$, which has the largest product?</p>
<p>I had also got a clue: it's related to <a href="http://en.wikipedia.org/wiki/E_%28mathematical_constant%29" rel="nofollow"><code>e</code></a>.</p>
<p>Please help! (I need explanation aswell)</p>
<p><strong>Notice</strong>: i said positive not natural so you can use fractions.</p>
| Community | -1 | <p>The answers above give the usual methods. Here's a method I've come up with taking up your clue of a link with $e$.</p>
<p>You know that $$x+y+z+t = 10 \:\:\;\:\;\;\;\;\; x,y,z,t > 0$$ and wish to maximise $P=xyzt$. Take the natural logarithm to get $$\log P = \log x + \log y + \log z + \log t.$$ Now $P$ is maximised exactly when $\log P$ is maximised. So we may rephrase the problem as follows: </p>
<p><em>Find when the maximum of $f(x,y,z,t) = x + y + z + t$ occurs given the constraint $e^x + e^y + e^z + e^t = 10$.</em> </p>
<p>Following the technique of Lagrange multipliers, we define the function $$g(x,y,z,t,\lambda) = x + y + z + t + \lambda(e^x + e^y + e^z + e^t - 10).$$ Then $$\partial_x g = 1 + \lambda e^x$$ $$\partial_y g = 1 + \lambda e^y$$ $$\partial_z g = 1 + \lambda e^z$$ $$\partial_t g = 1 + \lambda e^t$$ and $$\partial_{\lambda} g = e^x + e^y + e^z + e^t - 10.$$ </p>
<p>To find when the maximum occurs we set the partial derivatives to be zero. From the first four partial derivatives we have $e^x=e^y=e^z=e^t = \frac{-1}{\lambda}$. Substituting this into the fifth partial derivative gives $-\frac{4}{\lambda}-10=0$, which gives $\lambda = -\frac{2}{5}$. This then gives $e^x=e^y=e^z=e^t = \frac{5}{2} = 2.5$, returning the result that the product is maximised when all four numbers are $2.5$.</p>
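As a small numeric cross-check of the equal-values conclusion (illustration only, not part of the answer): moving two of the four numbers apart while keeping the sum fixed strictly lowers the product.

```python
# With x + y + z + t = 10 fixed, compare the product at (2.5, 2.5, 2.5, 2.5)
# against symmetric perturbations (2.5 + eps, 2.5 - eps, 2.5, 2.5).
best = 2.5 ** 4  # = 39.0625
for eps in (0.01, 0.1, 0.5, 1.0):
    perturbed = (2.5 + eps) * (2.5 - eps) * 2.5 * 2.5
    assert perturbed < best  # (2.5+eps)(2.5-eps) = 6.25 - eps^2 < 6.25
```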
|
3,953,674 | <p>Here is a common argument used to prove that the sum of an infinite geometric series is <span class="math-container">$\frac{a}{1-r}$</span> (where <span class="math-container">$a$</span> is the first term and <span class="math-container">$r$</span> is the common ratio):
<span class="math-container">\begin{align}
S &= a+ar+ar^2+ar^3+\cdots \\
rS &= ar+ar^2+ar^3+ar^4+\cdots \\
S-rS &= a \\
S(1-r) &= a \\
S &= \frac{a}{1-r} \, .
\end{align}</span>
I am sceptical about the validity of this argument. It feels like there is something amiss about a proof involving infinite series that makes no mention of the fact that they are typically defined as the limit of their partial sums. The third line involves 'cancelling' all of the terms other than <span class="math-container">$a$</span>. This makes it seem like an infinite series is actually an infinite string of symbols, rather than a limiting expression. If <span class="math-container">$a+ar+ar^2+ar^3+\cdots$</span> is simply a shorthand for
<span class="math-container">$$
\lim_{n \to \infty}\sum_{k=0}^{n}ar^k \, ,
$$</span>
and <span class="math-container">$ar+ar^2+ar^3+ar^4+\cdots$</span> a shorthand for
<span class="math-container">$$
\lim_{n \to \infty}\sum_{k=1}^{n}ar^k \, ,
$$</span>
then I am struggling to see how the terms actually cancel. Perhaps the third line can be written more formally as
<span class="math-container">\begin{align}
S-rS &= (a+ar+ar^2+ar^3+\cdots)-(ar+ar^2+ar^3+ar^4+\cdots) \\
&= \lim_{n \to \infty}\sum_{k=0}^{n}ar^k - \lim_{n \to \infty}\sum_{k=1}^{n}ar^k \\
&= \lim_{n \to \infty}\left(\sum_{k=0}^{n}ar^k - \sum_{k=1}^{n}ar^k\right) \\
&= \lim_{n \to \infty}\left(a +\sum_{k=1}^nar^k - \sum_{k=1}^{n}ar^k\right) \\
&= \lim_{n \to \infty}a \\
&= a \, .
\end{align}</span>
There is another concern I have. Geometric series only converge when <span class="math-container">$|r|<1$</span>. However, the argument used above seems to apply regardless of whether <span class="math-container">$|r|<1$</span>, which would yield nonsensical results such as
<span class="math-container">$$
1+2+4+8+\cdots = -1 \, .
$$</span>
So is the argument rigorous, and if so, why are my fears misplaced?</p>
| A.J. | 654,406 | <p>I think the proofs generally start (or at least they should) with something like "Suppose an infinite geometric series converges to the sum <span class="math-container">$S \,$</span>", after which all the steps would be justified.</p>
<p>The 'cancelling' you mentioned can be avoided by including an extra step:</p>
<p><span class="math-container">\begin{align}
S &= a+ar+ar^2+ar^3+\cdots \\
rS &= ar+ar^2+ar^3+ar^4+\cdots \\
rS + a &= a+ar+ar^2+ar^3+\cdots \\
&= S\\
S-rS &= a \\
S(1-r) &= a \\
S &= \frac{a}{1-r} \, .
\end{align}</span></p>
<p>Also, since the proof assumes the existence of <span class="math-container">$S$</span>, it clearly will not apply if <span class="math-container">$|r| \ge 1 \,$</span>.</p>
|
13,705 | <p>Let $m$ be a positive integer. Define $N_m:=\{x\in \mathbb{Z}: x>m\}$. I was wondering when does $N_m$ have a "basis" of two elements. I shall clarify what I mean by a basis of two elements: We shall say the positive integers $a,b$ generate $N_m$ and denote $N_m=<a,b>$ if every element $x\in N_m$ can be written as $x=\alpha a+\beta b$ where $\alpha,\beta$ are nonnegative integers (not both zero) <STRIKE>and no nonnegative linear combination (with at least one coefficient nonzero) of $a,b$ gives an element not in $N_m$.</STRIKE></p>
<p>More specifically, I want to understand the set $\{(a,b,m):N_m=<a,b>\}$.</p>
<p>An example would be the triple $(2,3,1)$. Every positive integer greater than 1 can be written as $2\alpha +3\beta$ for nonnegative integers $\alpha,\beta$: if it is even we can write it as $2\alpha$ for some positive integer $\alpha$, if it is odd and greater than $3$ we can write it as $3+2\alpha$ for some positive integer $\alpha$, and of course $3=1\cdot 3$. Finally, the smallest integer a positive linear combination of $2,3$ can generate is $2$.</p>
<p>PS: Not quite sure what to tag this. Feel free to retag.</p>
<p>EDIT: After Jason's answer, it seems the first version of this question is more interesting, where the last condition in paragraph 1 is relaxed.</p>
| Jason DeVito | 331 | <p>This can only happen for the triple you found.</p>
<p>For, if $m=1$, then we must be able to generate 2, so we must take 2 in the basis. Likewise, we must be able to generate 3, so we must allow 3 in the basis.</p>
<p>If $m>1$, then we see by the same reasoning that $N_m$ must be generated by $m+1$ and $m+2$.</p>
<p>But then we can't express $m+3$: since $m>1$, we see that if $\alpha >1$, then $\alpha(m+1)\geq 2m + 2 > m+3$. So, we must take $\alpha = 0$ or $1$. The same reasoning applies to $\beta$.</p>
<p>If we take either of $\alpha$ and $\beta$ equal to $0$, then we just generate $m+1$ or $m+2$. Hence, we must take them both equal to $1$.</p>
<p>Then $m+1 + m+2 = 2m+3 > m+3$.</p>
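<p>A quick brute-force check (a Python sketch; the helper names are mine) supports this: searching small $m$ and generator pairs with $\min(a,b)>m$, only $(2,3,1)$ survives. For coprime $a,b$ it is enough to test representability up to $ab$, since the Frobenius number of $a,b$ is $ab-a-b$; non-coprime pairs fail the test early anyway.</p>

```python
def representable(x, a, b):
    # Is x = alpha*a + beta*b for some nonnegative integers alpha, beta?
    for alpha in range(x // a + 1):
        if (x - alpha * a) % b == 0:
            return True
    return False

def generates(a, b, m):
    # Both generators must lie in N_m themselves (no combination may fall
    # at or below m), and every element of N_m must be representable.
    # Checking up to a*b suffices for coprime a, b (Frobenius bound);
    # non-coprime pairs fail long before that.
    if min(a, b) <= m:
        return False
    return all(representable(x, a, b) for x in range(m + 1, a * b + 1))

triples = [(a, b, m)
           for m in range(1, 8)
           for a in range(1, 30)
           for b in range(a + 1, 30)
           if generates(a, b, m)]
print(triples)  # only (2, 3, 1) survives
```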
|
1,530,406 | <p>How to multiply Roman numerals? I need an algorithm of multiplication of numbers written in Roman numbers. Help me please. </p>
| Sandith | 689,495 | <p>It is a conversion to Hindu numerals, not Arabic. Just because Europeans learnt the system from the Arabs does not mean its originators change!</p>
<p><a href="https://rbutterworth.nfshost.com/Tables/romanmult" rel="nofollow noreferrer">https://rbutterworth.nfshost.com/Tables/romanmult</a> is one place where multiplication with Roman numerals is explained.</p>
|
514,517 | <p>So this is what my book states:</p>
<p>Random variables $X$, $Y$, and $Z$ are said to form a Markov chain in that order, denoted $X\rightarrow Y \rightarrow Z$, if and only if:</p>
<p>$p(x,y,z)=p(x)p(y|x)p(z|y) $</p>
<p>That's great and all but that doesn't give any intuition as to what a Markov chain is or what it implies.</p>
<p>Can someone please give me more intuition as to how I should think about Markov chains?</p>
<p>Thanks a lot!!</p>
| Prahlad Vaidyanathan | 89,789 | <p>Hint: Use intermediate value theorem on $g(x) = f(x) - x$</p>
|
1,480,511 | <p>I have included a screenshot of the problem I am working on for context. T is a transformation. What is meant by Tf? Is it equivalent to T(f) or does it mean T times f?</p>
<p><a href="https://i.stack.imgur.com/LtRS1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LtRS1.png" alt="enter image description here"></a></p>
| YoTengoUnLCD | 193,752 | <p>Using: <a href="https://math.stackexchange.com/q/4468">Square root of integer is integer if rational</a>:</p>
<p>Suppose <span class="math-container">$\sqrt {15}$</span> is rational. Then, by the linked result, we'd have a positive integer <span class="math-container">$x$</span> such that <span class="math-container">$x^2=15$</span>, and by the lemma below we know that <span class="math-container">$1 \leq x=\sqrt{15} \leq 15$</span>.</p>
<p>Therefore we have a systematic way to check if <span class="math-container">$\sqrt {15}$</span> is rational: is any of the numbers <span class="math-container">$1^2,2^2,...,15^2$</span> equal to <span class="math-container">$15$</span>? If the answer is negative, then our number is irrational.</p>
<p><span class="math-container">$\bf Lemma$</span>: If <span class="math-container">$z\geq 1$</span>, then <span class="math-container">$z\geq \sqrt z$</span>.</p>
<p>Proof: Let <span class="math-container">$z\geq 1$</span>; then <span class="math-container">$z^2\geq z$</span>. Taking square roots gives <span class="math-container">$|z|=z\geq \sqrt z. _\square$</span></p>
|
720,823 | <p>I was solving some linear algebra problems and I have a quick question about one problem. I'm given the matrix $A = \{a_1,a_2\}$ where $a_1=[1,1]$ and $a_2=[-1,1]$. I need to solve for the eigenvalues of the matrix $A$ over the complex numbers $\mathbb{C}$. I solved and got the eigenvalue $1-i$ and $1+i$. Now this is where I get confused. I was taught that after finding the eigenvalue you plug it into the equation $[A - \lambda I]$. However, the book is putting it into the this equation. $[\lambda I - A]$. I understand the equations are exactly the same because they are both set to zero. You just have to multiply one by negative one. However, why for this problem are they choosing this other equation. It is the first time I've seen it. Also, once I plug it in I obtain $B=\{a_1,a_2\}$ where $a_1=[i,1]$ and $a_2=[-1,i]$. How do I reduce this matrix so that I have one row completely zero?</p>
| Amzoti | 38,839 | <p>The $[A- \lambda I]v$ versus $[\lambda I - A]v$ is just a matter of choice. When we have $Av = \lambda v$, we can choose to subtract from either side, so just a convention. Some people hate negating each term of the matrix as this leaves more room for error. </p>
<p>We are given:</p>
<p>$$A = \begin{bmatrix}1 & 1\\-1 & 1\end{bmatrix}$$</p>
<p>We find the eigenvalues as:</p>
<p>$$\lambda_{1,2} = 1 ~ \pm ~i$$</p>
<p>To find the eigenvectors, we set up and solve:</p>
<p>$$[A - \lambda I]v_i = 0$$</p>
<p>We have independent and complex conjugate eigenvalues, so finding eigenvectors, we can also use conjugates.</p>
<p>We have to find the RREF using $\lambda_1 = 1 + i$, yielding:</p>
<p>$$ [A -(1 + i)I]v_1 = \begin{bmatrix}-i & 1\\-1 & -i\end{bmatrix}v_1 = 0$$</p>
<p>Adding $i \times R_1$ to $R_2$, and dividing $R_1$ by $-i$ yields:</p>
<p>$$\begin{bmatrix}1 & i\\0 & 0\end{bmatrix}v_1 = 0$$</p>
<p>This gives us $v_1 = (-i, 1)$ and immediately using the complex conjugate $v_2 = (i, 1)$.</p>
<p><strong>Update</strong></p>
<p>If we had chosen the second eigenvalue, $\lambda_2 = 1 - i$, to find the first eigenvector, we have:</p>
<p>$$ [A -(1 - i)I]v_2 = \begin{bmatrix}i & 1\\-1 & i\end{bmatrix}v_2 = 0$$</p>
<p>Adding $-i \times R_1$ to $R_2$, and dividing $R_1$ by $i$ yields:</p>
<p>$$\begin{bmatrix}1 & -i\\0 & 0\end{bmatrix}v_2 = 0$$</p>
<p>This gives us $v_2 = (i, 1)$ and immediately using the complex conjugate $v_1 = (-i, 1)$.</p>
<p>Of course this matches with the result above.</p>
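<p>The eigenpairs above are easy to confirm directly with plain complex arithmetic (a small Python check; no claim that this is how one should compute eigenvectors in practice):</p>

```python
# Direct check that A v = lambda v for the eigenpairs derived above,
# with A = [[1, 1], [-1, 1]], using plain Python complex numbers.
A = [[1, 1], [-1, 1]]

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

pairs = [(1 + 1j, [-1j, 1]),   # lambda_1 = 1 + i, v_1 = (-i, 1)
         (1 - 1j, [ 1j, 1])]   # lambda_2 = 1 - i, v_2 = ( i, 1)

for lam, v in pairs:
    Av = matvec(A, v)
    assert all(abs(Av[k] - lam * v[k]) < 1e-12 for k in range(2))
print("both eigenpairs check out")
```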
|
1,904,553 | <blockquote>
<p>$$\displaystyle\lim_{x\to0}\left(\frac{1}{x^5}\int_0^xe^{-t^2}\,dt-\frac{1}{x^4}+\frac{1}{3x^2}\right)$$</p>
</blockquote>
<p>I have to calculate this limit. Since the first term takes the form $\frac 00$, I applied L'Hospital's rule, but after that all the terms take the form $\frac 10$, so it seems to me the limit is $\infty$. But my book gives $1/10$. How should I solve it?</p>
| Olivier Oloa | 118,798 | <p>One may recall that, as $t \to 0$, by the Taylor series expansion
$$
e^{-t^2}=1-t^2+\frac{t^4}2+O(t^6)
$$ giving, as $x \to 0$,
$$
\int_0^xe^{-t^2}dt=x-\frac{x^3}3+\frac{x^5}{10}+O(x^7)
$$ and, as $x \to 0$,</p>
<blockquote>
<p>$$
\frac1{x^5}\int_0^xe^{-t^2}dt-\frac1{x^4}+\frac1{3x^2}=\frac1{10}+O(x^2)
$$</p>
</blockquote>
<p>from which one deduces the desired limit.</p>
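<p>The cancellation described above can also be checked numerically: after the $k=0$ and $k=1$ terms of the series for $\frac1{x^5}\int_0^x e^{-t^2}dt$ cancel against $-\frac1{x^4}+\frac1{3x^2}$, only the tail from $k=2$ on remains. A short Python sketch (the truncation point is chosen arbitrarily):</p>

```python
from math import factorial

def F_tail(x, terms=25):
    # The k = 0 and k = 1 terms of (1/x^5) * integral cancel exactly against
    # -1/x^4 + 1/(3x^2), so the whole expression equals the k >= 2 tail
    # of sum_k (-1)^k x^(2k-4) / (k! (2k+1)).
    return sum((-1)**k * x**(2*k - 4) / (factorial(k) * (2*k + 1))
               for k in range(2, terms))

for x in (0.5, 0.1, 0.01):
    print(x, F_tail(x))
# values approach 1/10 as x -> 0
```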
|
625,608 | <p>The question is to show that the function $\phi$ given by $\phi(\lambda)=\frac{\lambda}{1+|\lambda|}$ is 1-1 on the complex plane. I would be grateful for a hint on how to start.</p>
| user118829 | 118,829 | <p>Assume that $\phi(\lambda) = \phi(\mu)$. First show that $\lambda$ and $\mu$ must have the same argument. Then show that they have the same modulus.</p>
|
250,496 | <p>Please help me out. Is there an appropriate method to draw a Hasse diagram?</p>
<blockquote>
<p>My question is $L=\{1,2,3,4,5,6,10,12,15,30,60\}$</p>
</blockquote>
<p>Please explain with a step-by-step solution. Thanks for the help.</p>
| user2205316 | 221,340 | <p>You have <span class="math-container">$L=\{1,2,3,4,5,6,10,12,15,20,30,60\}$</span>. <span class="math-container">$|L|$</span> = 12. This tells you the first important bit of information: there will be <span class="math-container">$12$</span> points in your Hasse diagram. (Please note that 20 was missing from the set in your question).</p>
<p>The next step is to find all the different ratios between the divisors as these will be the lines in your Hasse diagram.</p>
<p>I start with the highest divisor and work my way down.</p>
<pre><code> 1) 1:60 (Nothing is parallel to 1->60, so we will not draw this line in our diagram).
 2) 1:30, 2:60 (This tells us that the lines 1->30 and 2->60 must be parallel).
 3) 1:20, 3:60 (This tells us that the lines 1->20 and 3->60 must be parallel).
 4) 1:15, 2:30, 4:60 (This tells us that the lines 1->15, 2->30, and 4->60 must be parallel).
5) 1:12, 5:60
6) 1:10, 2:20, 3:30, 6:60
7) 1:6, 2:12, 5:30, 10:60
8) 1:5, 2:10, 4:20, 6:30, 12:60
9) 1:4, 3:12, 5:20, 15:60
10) 1:3, 2:6, 4:12, 5:15, 10:30, 20:60
11) 1:2, 2:4, 3:6, 5:10, 6:12, 10:20, 15:30, 30:60
</code></pre>
<p>Now that we've calculated all of that, a fun trick about Hasse diagrams for divisors is that you can generally guess the resulting structure by simply looking at the set of ratios for the smallest divisor (in this case, the 1:2 ratios).</p>
<p>This tells us that we should have 4 parallel lines of 3 points in our diagram:</p>
<pre><code> 1 ---- 2 --- 4
3 ---- 6 --- 12
5 --- 10 --- 20
15 -- 30 --- 60
</code></pre>
<p>How can we arrange these lines such that they make sense with the other ratios? Let's look at the 1:3 ratios. We know that 1 must be connected to 3, 2 to 6, 4 to 12, etc. I'll try to display this visually.</p>
<pre><code> 1 --- 2 --- 4
| | |
3 --- 6 --- 12
5 -- 10 -- 20
| | |
15 -- 30 -- 60
</code></pre>
<p>Notice that we get the 1:4 ratios for free. 1->4 is already parallel to 3->12, 5->20, etc.</p>
<p>Let's apply the 1:5 ratios and see what happens.</p>
<pre><code> 3 --- 6 --- 12
| | |
1 --- 2 --- 4
| | |
5 -- 10 -- 20
| | |
15 -- 30 -- 60
| | | (connects to 3->6->12)
</code></pre>
<p>Okay, the resulting diagram is starting to look like two stacked cubes. From here, I would draw the cubes and label the vertices with the numbers we deduced above. Then use the list of divisor ratios and make sure that all lines are present and parallel to lines of the same ratio.</p>
<p>For <span class="math-container">$L=\{1,2,3,4,5,6,10,12,15,20,30,60\}$</span>, you can check your answer <a href="https://demonstrations.wolfram.com/HasseDiagramsOfIntegerDivisors/" rel="nofollow noreferrer">on Wolfram</a>.</p>
<p>EDIT: I discovered that the Wolfram solution is missing all of the diagonals. Namely, 1->20, 3->60, 1->10, 2->20, 3->30, 6->60, 1->6, 2->12, 5->30, 10->60. The vertices, however, are all in the correct position.</p>
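<p>In the standard convention a Hasse diagram draws only the covering relations (ratios by a single prime), so those can be generated mechanically as a cross-check. A small Python sketch for the divisors of 60, where $b$ covers $a$ exactly when no divisor sits strictly between them:</p>

```python
# Compute the covering relations (the edges of the Hasse diagram) for the
# divisors of 60 ordered by divisibility.
n = 60
divs = [d for d in range(1, n + 1) if n % d == 0]

def covers(a, b):
    # b covers a iff a | b, a != b, and no divisor sits strictly between them.
    if b % a != 0 or a == b:
        return False
    return not any(c % a == 0 and b % c == 0 for c in divs if a < c < b)

edges = [(a, b) for a in divs for b in divs if covers(a, b)]
print(len(divs), len(edges))   # 12 vertices, 20 edges
```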
|
3,875,643 | <p>I am studying the nonlinear ordinary differential equation</p>
<p><span class="math-container">$$\frac{d^2y}{dx^2}=\frac{1}{y}-\frac{x}{y^2}\frac{dy}{dx}$$</span></p>
<p>I have entered this equation into two different math software packages, and they produce different answers.</p>
<p>software 1:</p>
<p><span class="math-container">$$0=c_2-\ln(x)-\frac{1}{2}\ln\left(-\frac{c_1y}{x}-\frac{y^2}{x^2}+1\right)-\frac{c_1}{\sqrt{-c_1^2+4}}\tan^{-1}\left(\frac{c_1+\frac{2y}{x}}{\sqrt{-c_1^2+4}}\right)$$</span></p>
<p>software 2:</p>
<p><span class="math-container">$$0=-c_2-\ln(x)-\frac{c_1}{\sqrt{c_1^2+4}}\tanh^{-1}\left(\frac{c_1x+2y}{x\sqrt{c_1^2+4}}\right)-\frac{1}{2}\ln\left(\frac{c_1xy-x^2+y^2}{x^2}\right)$$</span></p>
<p>I have not attempted to verify the solution from software 1 yet, but have done some work on software 2.</p>
<p>I first used software 2 to try to solve for y, to substitute the expression for y directly into the ordinary differential equation. The result was the following:</p>
<p><a href="https://i.stack.imgur.com/EAc5w.png" rel="nofollow noreferrer">I believe that this output is ambiguous, since there are essentially two equations that are supposed to be equated to zero</a></p>
<p>I am not sure if it is possible to solve for y, and hence to check the validity of this solution using this method.</p>
<p>I then did some reading on the internet, and it was suggested to, in this case, take the second implicit derivative with respect to x, then simplify.</p>
<p>I tried to do this with math software 2, and the result was, after simplifying:</p>
<p><span class="math-container">$$\frac{d^2y}{dx^2}=\frac{c_1xy-x^2+y^2}{y^3}$$</span></p>
<p>I did some hand calculations, and it seems that software 2 simplifies the result before calculating the next derivative, even without using the simplify command.</p>
<p>Considering this, I used the software to take the first derivative implicitly, then wrote out the equation in full, put that equation into a different form than the software output, and calculated the second derivative implicitly by hand, treating derivatives as functions of x for operations such as the product rule.</p>
<p>The equation I calculated did not match the original differential equation.</p>
<p>Software 2 has a function called odetest, which is supposed to verify that a function is a solution to an ordinary differential equation. If you use odetest on this solution, the returned result is zero, implying that the function is a solution.</p>
<p>The problem is that odetest does not show steps. I contacted the company and asked to see the steps for this calculation, but they would not provide the steps.</p>
<p>Are there any other ways to verify implicit solutions to an ordinary differential equation?</p>
| Lutz Lehmann | 115,115 | <p>As a symbolic solution exists, it should be possible to transform the equation and integrate it with relatively elementary means. And indeed, by careful examination one finds that the right side is the derivative of <span class="math-container">$\frac xy$</span>, so that a direct integration to
<span class="math-container">$$
y'(x)=\frac{x}{y(x)}+c
$$</span>
is possible. You could try to find this form of the equation from your solutions, avoiding the second derivative.</p>
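<p>A numerical sanity check of this first integral (a Python RK4 sketch; the step size and initial data are chosen arbitrarily): along any solution of $y''=\frac1y-\frac{x}{y^2}y'$, the quantity $c=y'-\frac xy$ should stay constant.</p>

```python
# RK4 integration of y'' = 1/y - (x/y^2) y', tracking the conserved
# quantity c(x) = y'(x) - x/y(x) found by the direct integration above.
def rhs(x, y, yp):
    return 1.0 / y - x * yp / y**2

def step(x, y, yp, h):
    # classical fourth-order Runge-Kutta for the system (y, y')
    k1y, k1p = yp, rhs(x, y, yp)
    k2y, k2p = yp + h/2*k1p, rhs(x + h/2, y + h/2*k1y, yp + h/2*k1p)
    k3y, k3p = yp + h/2*k2p, rhs(x + h/2, y + h/2*k2y, yp + h/2*k2p)
    k4y, k4p = yp + h*k3p, rhs(x + h, y + h*k3y, yp + h*k3p)
    return (y + h/6*(k1y + 2*k2y + 2*k3y + k4y),
            yp + h/6*(k1p + 2*k2p + 2*k3p + k4p))

x, y, yp, h = 0.0, 1.0, 0.5, 1e-3
c0 = yp - x / y
for _ in range(1000):          # integrate out to x = 1
    y, yp = step(x, y, yp, h)
    x += h
print(abs((yp - x / y) - c0))  # drift is tiny
```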
|
3,168,130 | <p><a href="https://math.stackexchange.com/questions/2690416/mathematical-proof-of-uniform-circular-motion">Here</a> is a mathematical proof that any force <span class="math-container">$F(t)$</span>, which affects a body, so that <span class="math-container">$\vec{F(t)} \cdot \vec{v(t)} = 0$</span>, where <span class="math-container">$v(t)$</span> is its velocity cannot change the amount of this velocity.</p>
<p>Further, it is stated that <span class="math-container">$\vec{v(t)}$</span> itself cannot change, which I think is nonsense, since:</p>
<p><span class="math-container">$$\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \times t = \begin{pmatrix} 1 \\ t \\ 0 \end{pmatrix} $$</span></p>
<p>But maybe I am just wrong.
Now, I am further wondering to what extent <span class="math-container">$\vec{F(t)} \cdot \vec{v(t)} = 0$</span> leads to circular motion, and how to prove this using Newton's laws and calculus? Can you explain why a circle is created as time ticks on?</p>
| G.Carugno | 572,366 | <p>Try to think like this: the component of the force parallel to the velocity changes the modulus of the velocity, while the component perpendicular to the velocity changes its direction. Since you have only a perpendicular component, the velocity will stay constant in modulus while changing its direction. Now, if your force is rotationally symmetric and parallel to <span class="math-container">$\vec{s}$</span>, you also have that <span class="math-container">$\vec{s}$</span> and <span class="math-container">$\vec{v}$</span> are perpendicular. Therefore, by the same heuristic argument as before, we can say that <span class="math-container">$\vec{s}$</span> will only change its direction and not its modulus. This is exactly like saying that the trajectory is a circle.</p>
|
3,168,130 | <p><a href="https://math.stackexchange.com/questions/2690416/mathematical-proof-of-uniform-circular-motion">Here</a> is a mathematical proof that any force <span class="math-container">$F(t)$</span>, which affects a body, so that <span class="math-container">$\vec{F(t)} \cdot \vec{v(t)} = 0$</span>, where <span class="math-container">$v(t)$</span> is its velocity cannot change the amount of this velocity.</p>
<p>Further, it is stated that <span class="math-container">$\vec{v(t)}$</span> itself cannot change, which I think is nonsense, since:</p>
<p><span class="math-container">$$\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \times t = \begin{pmatrix} 1 \\ t \\ 0 \end{pmatrix} $$</span></p>
<p>But maybe I am just wrong.
Now, I am further wondering to what extent <span class="math-container">$\vec{F(t)} \cdot \vec{v(t)} = 0$</span> leads to circular motion, and how to prove this using Newton's laws and calculus? Can you explain why a circle is created as time ticks on?</p>
| Siddharth Bhat | 261,373 | <p>Let the velocity vector be defined as <span class="math-container">$v(t) = (s \cos( \theta(t)), s \sin (\theta(t))$</span>, where <span class="math-container">$\theta(t)$</span> is a time varying quantity. Note that <span class="math-container">$s$</span> is a constant, since we have already proved that the <em>speed</em> of the velocity vector does not change. </p>
<p>Let the force vector be defined as <span class="math-container">$F(t) = (r(t) \cos(\phi(t)), r(t) \sin(\phi(t))$</span> where <span class="math-container">$r(t)$</span> and <span class="math-container">$\phi(t)$</span> are time varying quantities. </p>
<p>Now, we know that <span class="math-container">$F . v = 0$</span>. Hence: </p>
<p><span class="math-container">\begin{align*}
&r(t) s \left[\cos(\theta(t))\cos(\phi(t)) + \sin(\theta(t))\sin(\phi(t)) \right] = 0 \\
&r(t)s[\cos(\theta(t) - \phi(t))] = 0 \quad \text{(since $\cos(a - b) = \cos a \cos b + \sin a \sin b)$}
\end{align*}</span></p>
<p>Let us assume that <span class="math-container">$r(t), s \neq 0$</span> for the moment. If either is zero, the problem reduces to trivial cases. So this now means that <span class="math-container">$\cos(\theta(t) - \phi(t)) = 0$</span>, i.e. <span class="math-container">$\theta(t) - \phi(t)$</span> is an odd multiple of <span class="math-container">$\pi/2$</span>; take <span class="math-container">$\theta(t) - \phi(t) = \pi /2$</span>. Hence, <span class="math-container">$ \theta(t) = \pi/2 + \phi(t) $</span>.</p>
<p>Next, let us assume the mass of the body is 1 (otherwise, we will need to carry a factor of <span class="math-container">$m$</span> everywhere which is annoying and adds no real insight), and hence <span class="math-container">$F = \frac{dv}{dt}$</span>. </p>
<p><span class="math-container">\begin{align*}
&F = \frac{dv}{dt} \\
&(r(t) \cos(\phi(t)), r(t) \sin(\phi(t)) =
\frac{d \left(s \cos (\theta(t)), s \sin (\theta(t)) \right)}{dt} \\
%
&(r(t) \cos(\phi(t)), r(t) \sin(\phi(t)) =
\left(-s \sin (\theta(t)) \theta'(t), s \cos (\theta(t)) \theta'(t) \right)
\quad \text{(Differentiating with respect to $t$)}
\\
%
&(r(t) \cos(\phi(t)), r(t) \sin(\phi(t)) =
\left(-s \sin (\pi/2 + \phi(t)) \theta'(t), s \cos (\pi/2 + \phi(t)) \theta'(t) \right)\quad \text{($\theta(t) = \pi/2 + \phi(t)$)} \\
%
&(r(t) \cos(\phi(t)), r(t) \sin(\phi(t)) =
\left(-s\theta'(t) \cos (\phi(t)) , -s\theta'(t) \sin (\phi(t)) \right)\quad \\
%
\end{align*}</span>
Comparing the LHS and the RHS, we conclude that <span class="math-container">$r(t) = -s \theta'(t)$</span>.</p>
<p>If we are interested in <em>uniform circular motion</em>, then we would set <span class="math-container">$\theta'(t)$</span> to a constant and proceed to solve the system <a href="https://en.wikipedia.org/wiki/Circular_motion#Uniform_circular_motion" rel="nofollow noreferrer">as described on wikipedia</a></p>
<p>For non-uniform circular motion, I don't know off-hand how to solve the system of equations, but I presume it is possible. I'll update the answer once I go look it up.</p>
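<p>A numeric illustration of the uniform case (a Python sketch, not from the derivation above: mass $1$ and the always-perpendicular force $F=\omega(-v_y,v_x)$): the speed stays constant and the trajectory closes into a circle after one period $T=2\pi/\omega$.</p>

```python
from math import pi, hypot

# Force always perpendicular to velocity: F = omega * (-vy, vx), mass = 1.
omega = 1.0

def deriv(s):
    x, y, vx, vy = s
    return (vx, vy, -omega * vy, omega * vx)

def rk4(s, h):
    # classical fourth-order Runge-Kutta on the state (x, y, vx, vy)
    k1 = deriv(s)
    k2 = deriv(tuple(s[i] + h/2*k1[i] for i in range(4)))
    k3 = deriv(tuple(s[i] + h/2*k2[i] for i in range(4)))
    k4 = deriv(tuple(s[i] + h*k3[i] for i in range(4)))
    return tuple(s[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(4))

s = (0.0, 0.0, 1.0, 0.0)          # start at origin, unit speed along x
h = 2*pi/omega / 10000            # one full period in 10000 steps
for _ in range(10000):
    s = rk4(s, h)

speed = hypot(s[2], s[3])
print(speed, hypot(s[0], s[1]))   # speed stays 1; position returns near the start
```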
|
1,320,112 | <p>Using the following identity $$\int_{0}^{\infty}\frac{f\left ( t \right )}{t}dt= \int_{0}^{\infty}\mathcal{L}\left \{ f\left ( t \right ) \right \}\left ( u \right )du$$
I rewrote $$\int_{0}^{\infty}\frac{\sin^{2}\left ( t \right )}{t^{2}}dt$$
as $$\int_{0}^{\infty}\frac{\sin^{2}\left ( t \right )}{t\cdot t}dt$$
And thus the initial integral should be easily evaluated as $$\int_{0}^{\infty}\mathcal{L}\left \{ \frac{\sin^{2}t}{t} \right \}\left ( u \right )du$$
According to my calculations, this is equal to $$\int_{0}^{\infty}\left ( \frac{1}{4}\ln \left ( u^{2}+4 \right )-\frac{u^{2}}{4} \right )du$$
Which evaluates to $\frac{\pi}{16}$. Since this is actually a well-known integral whose value is $\frac{\pi}{2}$, I think that I made a crucial mistake somewhere. Any ideas?</p>
| Mark Viola | 218,419 | <p>We have </p>
<p>$$\begin{align}
\int_0^{\infty}\frac{\sin^2t}{t}e^{-ut}\,dt&=\frac{1}{2}\int_0^{\infty}\frac{1-\cos t}{t}e^{-(u/2)t}dt\\\\
&=\frac12\cdot\frac12\log\left(\frac{(u/2)^2+1}{(u/2)^2}\right)\\\\
&=\frac14 \log\left(\frac{u^2+4}{u^2}\right)
\end{align}$$</p>
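<p>A numeric check of the last step (a Python sketch): integrating the transform $\frac14\log\frac{u^2+4}{u^2}$ of $\frac{\sin^2 t}{t}$ over $u\in(0,\infty)$ recovers the known value $\int_0^\infty \frac{\sin^2 t}{t^2}\,dt=\frac\pi2$.</p>

```python
from math import log, pi

def g(u):
    # Laplace transform of sin(t)^2 / t evaluated at u
    return 0.25 * log((u*u + 4.0) / (u*u))

# Midpoint rule on (0, 100] plus an analytic tail estimate:
# int_100^inf g(u) du ~ int_100^inf 1/u^2 du = 1/100.
h = 1e-3
total = sum(g(h * (k + 0.5)) for k in range(100000)) * h + 0.01
print(total, pi / 2)   # the two agree to a few decimal places
```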
|
78,243 | <p>A positive integer $n$ is said to be <em>happy</em> if the sequence
$$n, s(n), s(s(n)), s(s(s(n))), \ldots$$
eventually reaches 1, where $s(n)$ denotes the sum of the squared digits of $n$.</p>
<p>For example, 7 is happy because the orbit of 7 under this mapping reaches 1.
$$7 \to 49 \to 97 \to 130 \to 10 \to 1$$
But 4 is not happy, because the orbit of 4 is an infinite loop that does not contain 1.
$$4 \to 16 \to 37 \to 58 \to 89 \to 145 \to 42 \to 20 \to 4 \to \ldots$$</p>
<p>I have tabulated the happy numbers up to $10^{10000}$, and it appears that they have a limiting density, although the rate of convergence is slow. Is it known if the happy numbers do in fact have a limiting density? In other words, does $\lim_{n\to\infty} h(n)/n$ exist, where $h(n)$ denotes the number of happy numbers less than $n$?</p>
<p><img src="https://dl.dropbox.com/u/39561574/happiness.jpg" alt="Relative frequency of happy numbers up to 1e10000"></p>
| Joe Silverman | 11,926 | <p>Helen Grundman has a written a number of articles about happy numbers. (I first heard of them at a talk of hers at a JMM.) References for her articles are listed below. I don't know if they discuss densities. One can also look at happy numbers to other bases, of course. According to Wikipedia: "The origin of happy numbers is not clear. Happy numbers were brought to the attention of Reg Allenby (a British author and Senior Lecturer in pure mathematics at Leeds University) by his daughter, who had learned of them at school. However, they 'may have originated in Russia'." </p>
<p>To answer Ricky Demer's question, yes, it's easily decidable, since if $N$ is large enough, then it's easy to see that $s(N)$ is a lot smaller than $N$. More precisely, $s(N)\le 81*\lceil\log_{10}(N)\rceil$. So one rapidly gets into and remains within a bounded range, after which one can check if the sequence cycles at 1, or in some other way.</p>
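<p>The bound $s(N)\le 81\lceil\log_{10}N\rceil$ makes the naive decision procedure terminate: every orbit falls into a bounded range, so a seen-set detects the cycle. A minimal Python sketch:</p>

```python
def s(n):
    # sum of squared decimal digits
    return sum(int(d) ** 2 for d in str(n))

def is_happy(n):
    seen = set()
    while n != 1 and n not in seen:   # s(n) < n for n >= 100, so orbits stay bounded
        seen.add(n)
        n = s(n)
    return n == 1

print([k for k in range(1, 50) if is_happy(k)])
# 1, 7, 10, 13, 19, 23, 28, 31, 32, 44, 49
print(is_happy(7), is_happy(4))      # True False
```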
<ul>
<li>MR2382633 (2008m:11020) Grundman, H. G. ; Teeple, E. A. Sequences of consecutive happy numbers. <em>Rocky Mountain J. Math.</em> <strong>37</strong> (2007), no. 6, 1905--1916.</li>
<li>MR2285991 (2007i:11016) Grundman, H. G. ; Teeple, E. A. Sequences of generalized happy numbers with small bases. <em>J. Integer Seq.</em> <strong>10</strong> (2007), no. 1, Article 07.1.8, 6 pp. (electronic).</li>
<li>MR2022409 Grundman, H. G. ; Teeple, E. A. Heights of happy numbers and cubic happy numbers. <em>Fibonacci Quart.</em> <strong>41</strong> (2003), no. 4, 301--306.</li>
<li>MR1866364 (2002h:11010) Grundman, H. G. ; Teeple, E. A. Generalized happy numbers. <em>Fibonacci Quart.</em> <strong>39</strong> (2001), no. 5, 462--466.</li>
</ul>
<p><strong>Added 18 October 2011</strong>: A relevant ArXiv post just appeared: On the Density of Happy Numbers, Justin Gilmer, <a href="http://arxiv.org/abs/1110.3836" rel="nofollow">http://arxiv.org/abs/1110.3836</a>. The author proves that "happy numbers have upper density $\geq .18$ and lower density $\leq .12$."</p>
|
78,243 | <p>A positive integer $n$ is said to be <em>happy</em> if the sequence
$$n, s(n), s(s(n)), s(s(s(n))), \ldots$$
eventually reaches 1, where $s(n)$ denotes the sum of the squared digits of $n$.</p>
<p>For example, 7 is happy because the orbit of 7 under this mapping reaches 1.
$$7 \to 49 \to 97 \to 130 \to 10 \to 1$$
But 4 is not happy, because the orbit of 4 is an infinite loop that does not contain 1.
$$4 \to 16 \to 37 \to 58 \to 89 \to 145 \to 42 \to 20 \to 4 \to \ldots$$</p>
<p>I have tabulated the happy numbers up to $10^{10000}$, and it appears that they have a limiting density, although the rate of convergence is slow. Is it known if the happy numbers do in fact have a limiting density? In other words, does $\lim_{n\to\infty} h(n)/n$ exist, where $h(n)$ denotes the number of happy numbers less than $n$?</p>
<p><img src="https://dl.dropbox.com/u/39561574/happiness.jpg" alt="Relative frequency of happy numbers up to 1e10000"></p>
| Justin Gilmer | 18,635 | <p>The answer is almost certainly that the limiting density does not exist. Without going into the details of the proof allow me to give a heuristic argument which is based on how the OP likely generated his graph of the relative frequency of happy numbers. </p>
<p>Let $Y_n$ be the r.v. uniformly distributed amongst integers in the interval $[0,10^n -1]$ (that is $Y_n$ picks a random $n$-digit integer). If $X_i$ denotes the r.v. for the digit of $10^i$ in $Y_n$, then $s(Y_n) = \sum\limits_{i=0}^{n-1} s(X_i)$.</p>
<p>I'm guessing the way you generated your graph was you first computed the distribution of $s(Y_n)$ (this can be done recursively) then computed $\mathbb{P}\big(s(Y_n) \text{ is happy}\big)$. This would give the relative density of happy numbers amongst all $n$ digit integers. </p>
<p>Studying the distribution of $s(Y_n)$ can tell us a lot. It's equivalent to rolling a 10-sided die with faces $0,1,4,\dots, 81$ $n$ times and finding the sum. Its distribution becomes Gaussian as $n$ gets large, by the central limit theorem. More importantly, most of the distribution is concentrated near the mean, which is $28.5n$. This implies that the density of happy numbers amongst all $n$-digit integers depends almost entirely on the distribution of happy numbers near $28.5n$.</p>
<p>For example, there is a peak in your graph at around $n = 400$ of about $.185$ density. Calculating the density of happy numbers within one standard deviation from the mean of $s(Y_{400})$ we get a density of .1911 (the interval I looked at was $[10916,11884]$). If you assume $s(Y_{400})$ is "exactly" normally distributed and estimated the density in this manner you would get a much better approximation.</p>
<p>This means picking $n$ s.t. the mean of $s(Y_n)$ lands in the interval $[10^{400},10^{401}-1]$ then the density of happy numbers amongst $n$-digit integers should be around $.185$. Likely some choices of $n$ will give densities strictly larger than $.185$ and some strictly smaller. This has led me to suspect that by iterating this process, the upper density of happy numbers may be $1$, and lower density $0$.</p>
<p>The article Joe Silverman mentioned is my own. In it I attempt to give the above heuristic a rigorous foundation. It is still a rough draft and has only been reviewed by one of my fellow graduate students, so I won't to say it is definitely correct, although I am very confident it is. I have been working on it for the past few weeks, seeing your question on MO I decided to go ahead and upload a rough draft. In it I use an averaging argument to say that if you find experimentally a large interval of $n$-digit integers ($n$ sufficiently large) which contain happy numbers with density $d$, then the upper density of happy numbers is at least $d(1 - o(1))$. That is where the upper density $\geq .18$ and lower density $ \leq .12$ comes from.</p>
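<p>The quoted figure is easy to reproduce (a Python sketch; the happiness test is repeated so the snippet is self-contained): counting happy numbers on the interval $[10916, 11884]$ used above gives a density close to $.1911$.</p>

```python
def s(n):
    return sum(int(d) ** 2 for d in str(n))

def is_happy(n):
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = s(n)
    return n == 1

lo, hi = 10916, 11884               # interval used in the answer above
happy = sum(is_happy(k) for k in range(lo, hi + 1))
density = happy / (hi - lo + 1)
print(density)                      # close to the quoted .1911
```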
|
320,355 | <p>Show that $$\nabla\cdot (\nabla f\times \nabla h)=0,$$
where $f = f(x,y,z)$ and $h = h(x,y,z)$.</p>
<p>I have tried but I just keep getting a mess that I cannot simplify. I also need to show that </p>
<p>$$\nabla \cdot (\nabla f \times r) = 0$$</p>
<p>using the first result.</p>
<p>Thanks in advance for any help</p>
| Muphrid | 45,296 | <p>While we're here having fun, you can prove this easily enough with geometric calculus. Instead of using the cross product, we use the wedge product instead and Hodge duality.</p>
<p>$$\nabla \cdot (\nabla f \times \nabla h) = \nabla \cdot [-i (\nabla f \wedge \nabla h)] = -i [\nabla \wedge (\nabla f \wedge \nabla h)]$$</p>
<p>Using the product rule, we get</p>
<p>$$\nabla \cdot (\nabla f \times \nabla h) = -i [(\nabla \wedge \nabla f) \wedge \nabla h - \nabla f \wedge (\nabla \wedge \nabla h)]$$</p>
<p>But $\nabla \wedge \nabla g = 0$ for any scalar field $g$, so the result is zero.</p>
<p>You might be thinking this is exotic and strange. It's not. To show you this, let's move the $i$ back into our products to get something straight out of vector calculus.</p>
<p>$$-i [(\nabla \wedge \nabla f) \wedge \nabla h - \nabla f \wedge (\nabla \wedge \nabla h)] = (\nabla \times \nabla f) \cdot \nabla h - \nabla f \cdot (\nabla \times \nabla h)$$</p>
<p>You might be accustomed to seeing $\nabla \times \nabla g = 0$, which yields the same conclusion as above.</p>
<p>Now then, note that the vector $r = \nabla \frac{1}{2} |r|^2$.</p>
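<p>A finite-difference spot check (a Python sketch with $f$ and $h$ chosen arbitrarily as $f=x^2y+z^3$ and $h=x+y^2z$): forming $\nabla f\times\nabla h$ in closed form, its divergence vanishes up to truncation error.</p>

```python
# Spot check with f = x^2 y + z^3 and h = x + y^2 z:
# grad f = (2xy, x^2, 3z^2), grad h = (1, 2yz, y^2); we verify that
# div(grad f x grad h) vanishes, using central differences for the divergence.
def V(x, y, z):
    fx, fy, fz = 2*x*y, x*x, 3*z*z
    hx, hy, hz = 1.0, 2*y*z, y*y
    return (fy*hz - fz*hy, fz*hx - fx*hz, fx*hy - fy*hx)

def divergence(x, y, z, d=1e-4):
    return ((V(x+d, y, z)[0] - V(x-d, y, z)[0]) +
            (V(x, y+d, z)[1] - V(x, y-d, z)[1]) +
            (V(x, y, z+d)[2] - V(x, y, z-d)[2])) / (2*d)

print(divergence(1.3, -0.7, 2.1))   # ~ 0 up to O(d^2) truncation error
```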
|
2,072,666 | <p>I have the set <b>{∀x ∃y P(x, y), ∀x ¬P(x, x)}</b>. To satisfy this set, I know there should exist an interpretation <b>I</b> that satisfies all the elements of the set. For instance, my interpretation for x is 3 and for y is 4. Should I apply the same numbers (3, 4) to ∀x ¬P(x, x) as well? Moreover, there are two x's in the predicate's arguments, so should I apply 3 and 3, the value I assigned to x? Thanks.</p>
| Olivier Oloa | 118,798 | <p><strong>Hint</strong>. We have
$$
x \to 0 \Longleftrightarrow -\frac{2x}{e} \to 0.
$$</p>
|
332,993 | <p>How do I approach the problem?</p>
<blockquote>
<p>Q: Let $ \displaystyle z_{n+1} = \frac{1}{2} \left( z_n + \frac{1}{z_n} \right)$ where $ n = 0, 1, 2, \ldots $ and $\frac{-\pi}{2} < \arg (z_0) < \frac{\pi}{2} $. Prove that $\lim_{n\to \infty} z_n = 1$.</p>
</blockquote>
| NECing | 60,869 | <p>If the limit exists, then
$$\lim_{n\to\infty}z_{n+1}=\lim_{n\to\infty}z_n$$
Substitute it in, you can get
$$\lim_{n\to\infty}z_n=\pm1$$
Since there is one more constraint on $\arg(z)$, you can conclude that the limit must be $1$.</p>
<p>It is not hard to prove the existence of the limit.</p>
<p>Assume $|z_n|>1$, then $|\frac{1}{z_n}|<1$. By the triangle inequality, $|z_n+\frac{1}{z_n}|\leq|z_n|+|\frac{1}{z_n}|<2|z_n|$.
Therefore, $|z_{n+1}|<|z_n|$.</p>
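<p>Numerically the recursion behaves exactly as claimed; it is Newton's method for $z^2=1$, and starting points with $\arg z_0\in(-\frac\pi2,\frac\pi2)$ are driven to $1$. A quick Python run (the starting point is chosen arbitrarily):</p>

```python
# Iterate z_{n+1} = (z_n + 1/z_n)/2 from a point with arg(z0) in (-pi/2, pi/2).
z = 2.0 + 1.0j
trace = [z]
for _ in range(40):
    z = 0.5 * (z + 1.0 / z)
    trace.append(z)
print(trace[:3])    # the first iterates already head toward 1
print(abs(z - 1))   # essentially 0
```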
|
2,699,753 | <p>$a,b$ and $c$ are all natural numbers, and function $f(x)$ always returns a natural number. If$$
\sum_{n=b}^{a} f(n) = c,$$
in terms of $b,c$ and $f$, how would you solve for $a$? Do I require more information to solve for $a$?</p>
<p>EDIT: If $x$ increases $f(x)$ increases</p>
| Alexander Burstein | 499,816 | <p>Your solution to this problem continues as follows. Divide the recurrence relation through by $9^n$ to obtain $g(n)=g(n-1)-\frac{14}{9^n}$ for $n\ge 1$, and $g(0)=3$. Then
$$
g(n)=3-\sum_{k=1}^{n}{\frac{14}{9^k}}=3-14\frac{\frac{1}{9}-\frac{1}{9^{n+1}}}{1-\frac{1}{9}}=3-\frac{14}{8}\left(1-\frac{1}{9^n}\right)=\frac{5}{4}+\frac{7}{4}\cdot\frac{1}{9^n},
$$
so
$$
f(n)=g(n)\cdot 9^n=\frac{5}{4}\cdot9^n+\frac{7}{4}=\frac{5\cdot 9^n+7}{4}
$$</p>
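<p>Undoing the substitution $g(n)=f(n)/9^n$ shows this answer solves the recurrence $f(n)=9f(n-1)-14$ with $f(0)=3$ (the recurrence itself is not quoted in the question, so this is a reconstruction). A Python check with exact integers:</p>

```python
# The substitution g(n) = f(n)/9^n turns g(n) = g(n-1) - 14/9^n, g(0) = 3
# back into the integer recurrence f(n) = 9 f(n-1) - 14, f(0) = 3,
# which we compare against the closed form (5*9^n + 7)/4.
f = 3
for n in range(1, 20):
    f = 9 * f - 14
    assert (5 * 9**n + 7) % 4 == 0          # closed form is an integer
    assert f == (5 * 9**n + 7) // 4         # and matches the recurrence
print(f, (5 * 9**19 + 7) // 4)              # both equal at n = 19
```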
|
1,690,854 | <p>Solve the equation
$$-x^3 + x + 2 =\sqrt{3x^2 + 4x + 5.}$$
I tried. The equation is equivalent to
$$\sqrt{3x^2 + 4x + 5} - 2 + x^3 - x=0.$$
$$\dfrac{3x^2+4x+1}{\sqrt{3x^2 + 4x + 5} + 2}+x^3 - x=0.$$
$$\dfrac{(x+1)(3x+1)}{\sqrt{3x^2 + 4x + 5} + 2}+ (x+1) x (x-1)=0.$$
$$(x+1)\left [\dfrac{3x+1}{\sqrt{3x^2 + 4x + 5} + 2}+ x (x-1)=0\right ]=0.$$
How can I prove the equation
$$\dfrac{3x+1}{\sqrt{3x^2 + 4x + 5} + 2}+ x (x-1)=0$$
has no solution?</p>
| Claude Leibovici | 82,404 | <p><em>This is not an answer since based on approximations.</em></p>
<p>Consider the function $$f(x)=\frac{-x^3 + x + 2 -\sqrt{3x^2 + 4x + 5}}{x+1}$$ Around $x=0$ it looks as a parabola; so the function can be approximated by a Taylor series to third order. So, around $x=0$, we have $$f(x)\approx \left(2-\sqrt{5}\right)+\left(\frac{3}{\sqrt{5}}-1\right) x+\left(1-\frac{41}{10
\sqrt{5}}\right) x^2+O\left(x^3\right)$$ So, if the approximating function is $$g(x)= \left(2-\sqrt{5}\right)+\left(\frac{3}{\sqrt{5}}-1\right) x+\left(1-\frac{41}{10
\sqrt{5}}\right) x^2$$The quadratic does not show any real root and then $g(x)$ shows a maximum for $x_*=-\frac{5 \left(11 \sqrt{5}-73\right)}{1181}\approx 0.204925$ leading to $$g(x_*)=-\frac{9 \left(116 \sqrt{5}-233\right)}{1181}\approx -0.201063$$ which corresponds to the maximum. Using the full definition of $f(x)$, we should find $$f(x_*)\approx -0.200890$$</p>
<p>Numerically, the maximum value of $f(x)$ is found as $\approx -0.200889$ corresponding to $x=0.206212$.</p>
<p>Since the maximum value of $f(x)$ is negative, there are no other roots besides $x=-1$.</p>
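<p>A numeric supplement (a Python grid scan; the grid and window are mine): the bracketed factor $q(x)=\frac{3x+1}{\sqrt{3x^2+4x+5}+2}+x(x-1)$ stays positive, with minimum about $0.2009$ near $x\approx0.206$, matching the maximum $f(x_*)\approx-0.2009$ above since $f=-q$ away from $x=-1$. For large $|x|$ the quadratic term dominates while the first term stays bounded, so a finite window suffices.</p>

```python
from math import sqrt

def q(x):
    # the bracketed factor from the question
    return (3*x + 1) / (sqrt(3*x*x + 4*x + 5) + 2) + x * (x - 1)

xs = [i / 1000 for i in range(-5000, 5001)]   # grid on [-5, 5]
qmin, xmin = min((q(x), x) for x in xs)
print(qmin, xmin)   # minimum ~ 0.2009 near x ~ 0.206
```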
|
228,437 | <p>The ODE in question: <code>y'' + 3y' + 2y = 8t + 8</code></p>
<p>But I get something like this for my solution:</p>
<p><a href="https://i.stack.imgur.com/6QFoV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6QFoV.png" alt="enter image description here" /></a></p>
<p>I also tried getting the solution of <code>y'=y^2-y^3</code> but again the solution did not make sense to me.</p>
| Ulrich Neumann | 53,677 | <p>You should use <code>==</code> instead of <code>=</code> to define the equations:</p>
<pre><code>DSolve[y''[t] + 3 y'[t] + 2 y[t] == 8 t + 8, y, t]
(*{{y -> Function[{t}, 2 (-1 + 2 t) + E^(-2 t) C[1] + E^-t C[2]]}}*)
</code></pre>
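The family returned by `DSolve` can be cross-checked symbolically; here is a SymPy sketch (a sanity check, not part of the original answer) verifying that it satisfies the ODE:

```python
import sympy as sp

t = sp.symbols('t')
C1, C2 = sp.symbols('C1 C2')

# The solution family returned by DSolve above, rewritten in SymPy:
y = 2*(-1 + 2*t) + C1*sp.exp(-2*t) + C2*sp.exp(-t)

# Substitute into y'' + 3y' + 2y - (8t + 8); the residual should vanish.
residual = sp.simplify(sp.diff(y, t, 2) + 3*sp.diff(y, t) + 2*y - (8*t + 8))
print(residual)  # 0
```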
|
315,235 | <p>I am learning about vector-valued differential forms, including forms taking values in a Lie algebra. <a href="http://en.wikipedia.org/wiki/Vector-valued_differential_form#Lie_algebra-valued_forms">On Wikipedia</a> there is some explanation about these Lie algebra-valued forms, including the definition of the operation $[-\wedge -]$ and the claim that "with this operation the set of all Lie algebra-valued forms on a manifold M becomes a graded Lie superalgebra".
The explanation on Wikipedia is a little short, so I'm looking for more information about Lie algebra-valued forms. Unfortunately, the Wikipedia page does not cite any sources, and a Google search does not give very helpful results.</p>
<blockquote>
<p>Where can I learn about Lie algebra valued differential forms?</p>
</blockquote>
<p>In particular, I'm looking for a proof that $[-\wedge -]$ turns the set of Lie algebra-valued forms into a graded Lie superalgebra. I would also appreciate some information about how the exterior derivative $d$ and the operation $[-\wedge -]$ interact.</p>
| Adittya Chaudhuri | 311,277 | <p>I studied it from <em>Differential Geometry Connections,Curvature and Characteristic Classes</em> by <strong>Loring W.Tu</strong> in the Chapter 4, Section 21(specially subsection 21.5). The whole section 21 (Vector-valued forms) is very well treated. As a Special case Lie-Algebra valued differential forms are discussed. Also how a vector valued differential form differ from a real valued form is discussed in 21.10. Say for example on the Lie Algebra of <span class="math-container">$Gl(n,\mathbb{R})$</span>,
<span class="math-container">$\alpha\wedge\alpha$</span>, which is given by matrix multiplication, is not always zero when the degree of <span class="math-container">$\alpha $</span> is odd, which is contrary to the usual behavior of real-valued differential forms.</p>
|
1,930,558 | <p>I want a good textbook covering elements of discrete mathematics at an average level. I'm a mathematics undergraduate, so I don't want it to lean toward computer science too much. I'm interested in combinatorics and graph theory, but I also want coverage of enumeration and other topics. One book I found is <a href="http://rads.stackoverflow.com/amzn/click/0071005447" rel="nofollow">https://www.amazon.com/Elements-Discrete-Mathematics-C-Liu/dp/0071005447</a> .</p>
| TheProofIsTrivial | 384,536 | <p>The Book of Proof is a free PDF that is fairly good and easy to read, lots of examples. 'Discrete Mathematics with Graph Theory' By Goodaire and Parmenter may have what you're looking for and is decent, however, I feel it's too much of a 'light read' and the authors' humour is questionable. But there are lots of examples and practice problems with worked solutions.</p>
|
2,089,619 | <p>I have proven that if $X $ is a finite set and $Y$ is a proper subset of $X$, then there does not exist $f:X \rightarrow Y $ such that $f$ is a bijection.</p>
<p>I intend to show that the set of natural numbers is infinite. Let $P=\{2,4,6...\} $. So, by contraposition, I just have to show that $f: \mathbb{N} \rightarrow P$, given by $f(n)=2n$, is actually a bijection.</p>
<p>Logically speaking (following my book), I have so far constructed only the natural numbers, their addition and multiplication.</p>
<p>So far, I have shown that $f$ is injective. But in order to prove that $f$ is a surjection, how can I do that without using the usual: "Take any $y\in P$. Given $x=\frac{y}{2}$ ..."? Because $x$ is actually a rational number given in a "strange form", which I haven't constructed yet.</p>
| Arthur | 15,500 | <p>The multiplicative inverse of $[a]\in \Bbb Z_n$ only exists if $\gcd(a,n)=1$. That's not the case for your $[2]\in \Bbb Z_4$.</p>
<p>As for how to calculate something like, say, $[2]^{-1}\cdot [3]$ in $\Bbb Z_5$, you first have to <em>find</em> $[2]^{-1}$, which happens to be $[3]$. This makes $[2]^{-1}\cdot[3]=[3]\cdot[3]=[4]$.</p>
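These computations can be illustrated directly (assuming Python 3.8+, where three-argument `pow` with exponent $-1$ computes a modular inverse and raises `ValueError` when none exists):

```python
# Python 3.8+: pow(a, -1, n) computes the inverse of a modulo n,
# raising ValueError when gcd(a, n) != 1.
inv2_mod5 = pow(2, -1, 5)
print(inv2_mod5)            # 3
print((inv2_mod5 * 3) % 5)  # 4, i.e. [2]^(-1) * [3] = [4] in Z_5

try:
    pow(2, -1, 4)
    has_inverse = True
except ValueError:
    has_inverse = False     # [2] is not invertible in Z_4 since gcd(2, 4) = 2
print(has_inverse)          # False
```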
|
2,089,619 | <p>I have proven that if $X $ is a finite set and $Y$ is a proper subset of $X$, then there does not exist $f:X \rightarrow Y $ such that $f$ is a bijection.</p>
<p>I intend to show that the set of natural numbers is infinite. Let $P=\{2,4,6...\} $. So, by contraposition, I just have to show that $f: \mathbb{N} \rightarrow P$, given by $f(n)=2n$, is actually a bijection.</p>
<p>Logically speaking (following my book), I have so far constructed only the natural numbers, their addition and multiplication.</p>
<p>So far, I have shown that $f$ is injective. But in order to prove that $f$ is a surjection, how can I do that without using the usual: "Take any $y\in P$. Given $x=\frac{y}{2}$ ..."? Because $x$ is actually a rational number given in a "strange form", which I haven't constructed yet.</p>
| Joshua Mundinger | 106,317 | <p>Not every element has a multiplicative inverse modulo $n$.
Indeed, since $[2] \cdot [2] = 0$, if $[2]$ had an inverse then $[0] = [2] \cdot [2] \cdot [2]^{-1} = [2]$, a contradiction.</p>
<p>In general, in $\mathbb Z/ n\mathbb Z$ a residue class $[m]$ has a multiplicative inverse if and only if $m$ and $n$ are relatively prime:
$[m]$ has a multiplicative inverse if and only if there is a $[k]$ such that
$$km \equiv 1 \mod n,$$
that is, a $k$ such that there exists $t$ such that $km -nt = 1$.
This is equivalent to $n$ and $m$ being relatively prime by Bézout's lemma.</p>
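The criterion can be sanity-checked by brute force; the sketch below (the helper `egcd` is ours, illustrating Bézout's lemma) compares the invertible residues mod 12 with those coprime to 12:

```python
from math import gcd

def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

n = 12
# Residues invertible mod n, found by brute force...
by_search = {m for m in range(1, n) if any((m*k) % n == 1 for k in range(n))}
# ...match exactly the residues coprime to n, as Bezout's lemma predicts.
by_gcd = {m for m in range(1, n) if gcd(m, n) == 1}
print(by_search == by_gcd, sorted(by_gcd))  # True [1, 5, 7, 11]
```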
|
228,437 | <p>I'm creating AI for a card game, and I run into a problem calculating the probability of passing/failing the hand when the AI needs to start the hand. Cards are A, K, Q, J, 10, 9, 8, 7 (with A being the strongest) and the AI needs to play so as not to take the hand.</p>
<p>Assuming there are 4 cards of the suit left in the game and one is in AI's hand, I need to calculate probability that one of the other players would take the hand. Here's an example:</p>
<p>AI player has: J
Other 2 players have: A, K, 7</p>
<p>If a single opponent has AK7 then AI would lose. However, if one of the players has A or K without 7, AI would survive. Now, looking at possible distribution, I have:</p>
<pre><code>P1 P2 AI
--- --- ---
AK7 loses
AK 7 survives
A7 K survives
K7 A survives
A 7K survives
K 7A survives
7 AK survives
AK7 loses
</code></pre>
<p>Looking at this, it seems that there is 75% chance of survival.</p>
<p>However, I skipped the permutations that mirror the ones from above. It should be the same, but somehow when I write them all down, it seems that chance is only 50%:</p>
<pre><code>P1 P2 AI
--- --- ---
AK7 loses
A7K loses
K7A loses
KA7 loses
7AK loses
7KA loses
AK 7 survives
A7 K survives
K7 A survives
KA 7 survives
7A K survives
7K A survives
A K7 survives
A 7K survives
K 7A survives
K A7 survives
7 AK survives
7 KA survives
AK7 loses
A7K loses
K7A loses
KA7 loses
7AK loses
7KA loses
</code></pre>
<p>12 losses, 12 survivals = 50% chance. Obviously, it should be the same (shouldn't it?) and I'm missing something in one of the ways to calculate. </p>
<p>Which one is correct?</p>
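A short enumeration reproduces both counts and shows why they differ: the second table also counts the orderings *inside* each hand, which weights an all-in-one-hand outcome by $3!=6$ orderings but a 2–1 split by only $2!\cdot 1!=2$, so its rows are not equally likely outcomes of the same experiment. A Python sketch of the two tallies:

```python
from itertools import product
from math import factorial

cards = ["A", "K", "7"]
# holders[i] is the player (0 or 1) receiving card i
outcomes = list(product([0, 1], repeat=len(cards)))

# Count 1: each assignment of cards to players counted once.
lose_plain = sum(1 for holders in outcomes if len(set(holders)) == 1)
print(lose_plain, len(outcomes))   # 2 of 8 -> 25% lose, 75% survive

# Count 2: additionally count the orderings *within* each hand,
# which is what the second table does.
def weight(holders):
    return factorial(holders.count(0)) * factorial(holders.count(1))

total_w = sum(weight(h) for h in outcomes)
lose_w = sum(weight(h) for h in outcomes if len(set(h)) == 1)
print(lose_w, total_w)             # 12 of 24 -> 50%
```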
| Dan Shved | 47,560 | <p>Denote $A=\{0,1\}$. You can use the fact that $\mathbb{R} \simeq A^{\mathbb{N}}$, where $\simeq$ means "have the same cardinality". So we have
$$
\mathbb{R}^{\mathbb{R}} \simeq \left(A^{\mathbb{N}}\right)^{\mathbb{R}} \simeq A^{\mathbb{N} \times \mathbb{R}}.
$$
Now we use the fact that $\mathbb{N}\times\mathbb{R} \simeq \mathbb{R}$, so
$$
\mathbb{R}^{\mathbb{R}} \simeq A^{\mathbb{R}} \simeq P(\mathbb{R}),
$$
QED.</p>
|
3,895,275 | <p>I have a matrix <span class="math-container">$A= \begin{pmatrix} 5 & 3 \\ 2 & 1 \end{pmatrix} $</span> and I need to find <span class="math-container">$m$</span>, <span class="math-container">$n$</span>, <span class="math-container">$r$</span> such that <span class="math-container">$A^2+nA+rI=0$</span> (where <span class="math-container">$I$</span> is the identity matrix), and after that find <span class="math-container">$A^{-1}$</span>
using that relation.</p>
<p>I really tried to find those variables, but I do not know how to solve this with just one equation.
Could you help me find <span class="math-container">$m$</span>, <span class="math-container">$n$</span>, <span class="math-container">$r$</span> in whatever way you think works?</p>
| Bernard | 202,857 | <p><strong>Hint</strong>: Compute the characteristic polynomial and apply <em>Hamilton-Cayley</em>.</p>
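Following the hint, here is a SymPy sketch (not part of the original answer) computing the characteristic polynomial and checking the Cayley–Hamilton relation:

```python
import sympy as sp

A = sp.Matrix([[5, 3], [2, 1]])
lam = sp.symbols('lambda')

p = A.charpoly(lam).as_expr()
print(p)  # lambda**2 - 6*lambda - 1

# Cayley-Hamilton: A satisfies its own characteristic polynomial,
# so A^2 - 6A - I = 0, i.e. n = -6 and r = -1 in A^2 + nA + rI = 0.
print(A**2 - 6*A - sp.eye(2) == sp.zeros(2, 2))  # True

# Rearranging: A(A - 6I) = I, hence A^{-1} = A - 6I.
print(A.inv() == A - 6*sp.eye(2))  # True
```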
|
3,651,287 | <p>I would be very grateful if you could help me. I have a question about Cauchy sequences; I have been given the following definition of a Cauchy sequence:</p>
<p>A sequence <span class="math-container">$(r_{n})_{n\in \mathbb{N}}$</span> is Cauchy if:</p>
<p><span class="math-container">$\forall \epsilon>0,$</span> <span class="math-container">$\exists N \in \mathbb{N}$</span> such that if <span class="math-container">$n,m>N$</span>, then <span class="math-container">$|r_{n}-r_{m}|<\epsilon$</span></p>
<p>My question is: I found a different definition in another book and I would like to know why the two are equivalent, since the second definition seems only to bound the difference against a single fixed term. I suspect I am missing something, and I would like to see why the definitions are equivalent.</p>
<p>The other definition I found is:</p>
<p>A sequence <span class="math-container">$(r_{n})_{n\in \mathbb{N}}$</span> is Cauchy if given <span class="math-container">$\epsilon>0$</span>, there exists a positive integer <span class="math-container">$m$</span> such that:</p>
<p><span class="math-container">$|r_{n}-r_{m}|<\epsilon$</span> for all <span class="math-container">$n>m$</span>.</p>
<p>Thanks in advance!</p>
| Guangyi Zou | 780,777 | <p>To prove the second definition from the first, you just need to fix <span class="math-container">$m=N+1,$</span>and then <span class="math-container">$|a_n-a_m|<\epsilon$</span> for all <span class="math-container">$n>m$</span>.So the second holds if the first holds.</p>
<p>To prove the first from the second, fix <span class="math-container">$\epsilon>0,$</span> and choose <span class="math-container">$N$</span> such that <span class="math-container">$|a_n-a_N|\le \epsilon/2$</span> for all <span class="math-container">$n>N$</span>. Then for all <span class="math-container">$n,m>N$</span>, we have by the triangle inequality <span class="math-container">$$|a_n-a_m|=|a_n-a_N+a_N-a_m|\le |a_n-a_N|+|a_m-a_N|\le 2\cdot\epsilon/2=\epsilon.$$</span> So the first holds if the second holds.</p>
<p>So the two definitions are equivalent.</p>
|
2,843,822 | <p>How do I rewrite $(1\,2)(1\,3)(1\,4)(1\,5)$ as a single cycle? I have tried questions in the form: $(1\,4\,3\,5\,2)(4\,5\,3\,2\,1)$.</p>
| Arnaud Mortier | 480,423 | <p>Keep in mind that as a composition of functions, $(1\,2)(1\,3)(1\,4)(1\,5)$ is to be read from right to left. </p>
<p>Now what happens to $1$? It's mapped to $5$ by the first transposition, and then all other transpositions fix $5$. So overall $$1\longrightarrow 5$$</p>
<p>What happens to $5$ now? The first transposition maps $5$ to $1$, then the second maps $1$ to $4$, and $4$ is fixed by the remaining transpositions. So we get:
$$1\longrightarrow 5\longrightarrow 4$$
Similarly you get $ 4\longrightarrow 3$ and $3\longrightarrow 2$, so that in the end $$1\longrightarrow 5\longrightarrow 4\longrightarrow 3\longrightarrow 2$$
Which is usually written as $(1\,5\,4\,3\,2)$.</p>
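The right-to-left tracing above can be mechanized; here is a small Python sketch (the helper `apply_transpositions` is ours):

```python
def apply_transpositions(x, transpositions):
    # Compose right to left: the rightmost transposition acts first.
    for a, b in reversed(transpositions):
        if x == a:
            x = b
        elif x == b:
            x = a
    return x

# Written left to right as in (1 2)(1 3)(1 4)(1 5):
prod = [(1, 2), (1, 3), (1, 4), (1, 5)]
mapping = {x: apply_transpositions(x, prod) for x in range(1, 6)}
print(mapping)  # {1: 5, 2: 1, 3: 2, 4: 3, 5: 4}, i.e. the cycle (1 5 4 3 2)
```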
|
1,564,962 | <p>I need to calculate the following:</p>
<p>$$x=8 \pmod{9}$$
$$x=9 \pmod{10}$$
$$x=0 \pmod{11}$$</p>
<p>I am using the chinese remainder theorem as follows:</p>
<p>Step 1:</p>
<p>$$m=9\cdot10\cdot11 = 990$$</p>
<p>Step 2:</p>
<p>$$M_1 = \frac{m}{9} = 110$$</p>
<p>$$M_2 = \frac{m}{10} = 99$$</p>
<p>$$M_3 = \frac{m}{11} = 90$$</p>
<p>Step 3:</p>
<p>$$x=8\cdot110\cdot2 + 9\cdot99\cdot9 + 0\cdot90\cdot2 = 9779 = 869\mod 990$$</p>
<p>I have used online calculators to check this result and I know it is wrong (it should be 539 I think) but cannot find out what am I doing wrong. Can you help me?</p>
<p>Thanks</p>
| Will Jagy | 10,400 | <p>$x \equiv -1 \pmod 9$ and $x \equiv -1 \pmod {10}.$ So $x \equiv -1 \pmod {90}$ and $x = 90 n - 1.$ But $90 = 88 + 2,$ so $90 \equiv 2 \pmod {11}.$</p>
<p>$$ x = 90 n - 1 \equiv 2n - 1 \pmod {11}. $$
$$ 2n \equiv 1 \pmod {11}, $$
$$ n \equiv 6 \pmod {11}. $$
Start with $n=6,$ so $x = 540 - 1 = 539.$</p>
<p>$$ \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc $$</p>
<p>A more official way to combine the $90$ and $11$ parts is this: the continued fraction for $90/11$ has penultimate convergent $41/5,$ and
$$ 41 \cdot 11 - 5 \cdot 90 = 1. $$
So
$$ 451 \equiv 1 \pmod {90}, \; \; 451 \equiv 0 \pmod {11}, $$
$$ -450 \equiv 0 \pmod {90}, \; \; -450 \equiv 1 \pmod {11}. $$
We want something $-1 \pmod {90}$ and $0 \pmod {11},$ so we can ignore the second pair and use
$$ -451 \equiv -1 \pmod {90}, \; \; -451 \equiv 0 \pmod {11}. $$
Also note
$$ 990 - 451 = 539. $$</p>
<p>Let's see: the virtue of the continued fraction thing is that, when i want something $A \pmod {90}$ but $B \pmod {11},$ I just take $451 A -450 B.$</p>
|
1,497,232 | <p>Prove or disprove. If $f(A) \subseteq f(B)$ then $A \subseteq B$</p>
<p>Let y be arbitrary. </p>
<p>$f(A)$ means $\exists a \in A (f(a)=y)$</p>
<p>$f(B)$ means $\exists b \in B (f(b)=y)$ </p>
<p>but $\forall a \in A \exists ! y \in f(a)(f(a)=y)$ </p>
<p>and $\forall b \in B \exists ! y \in f(b)(f(b)=y)$</p>
<p>Therefore if $f(a)=y=f(b)$, then $a = b$. This along with the given that $f(A) \subseteq f(B)$ shows that $A \subseteq B$</p>
| Alan | 175,602 | <p>HINT: Consider a constant function, for instance, $f(x)=0$, with domain $\mathbb R$.</p>
|
1,497,232 | <p>Prove or disprove. If $f(A) \subseteq f(B)$ then $A \subseteq B$</p>
<p>Let y be arbitrary. </p>
<p>$f(A)$ means $\exists a \in A (f(a)=y)$</p>
<p>$f(B)$ means $\exists b \in B (f(b)=y)$ </p>
<p>but $\forall a \in A \exists ! y \in f(a)(f(a)=y)$ </p>
<p>and $\forall b \in B \exists ! y \in f(b)(f(b)=y)$</p>
<p>Therefore if $f(a)=y=f(b)$, then $a = b$. This along with the given that $f(A) \subseteq f(B)$ shows that $A \subseteq B$</p>
| Balloon | 280,308 | <p>As @Alan said, this is false. Take $E=\{0,1\}$ and $F=\{0\}$, and define $f:E\to F$ by $f(0)=f(1)=0$. Then you have $f(\{0\})=\{0\}\subset f(\{1\})=\{0\}$ whereas $\{0\}\not\subset\{1\}$.</p>
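The counterexample can be checked directly in a few lines of Python (the dictionary `f` encodes the constant map):

```python
E = {0, 1}
f = {0: 0, 1: 0}  # the constant map E -> {0}

def image(S):
    # Image of a subset S under f
    return {f[x] for x in S}

A, B = {0}, {1}
print(image(A) <= image(B))  # True: f(A) = {0} is a subset of f(B) = {0}
print(A <= B)                # False: {0} is not a subset of {1}
```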
|
2,665,723 | <p>Find value of $k$ for which the equation $\sqrt{3z+3}-\sqrt{3z-9}=\sqrt{2z+k}$ has no solution.</p>
<p>My attempt: $\sqrt{3z+3}-\sqrt{3z-9}=\sqrt{2z+k}......(1)$</p>
<p>$\displaystyle \sqrt{3z+3}+\sqrt{3z-9}=\frac{12}{\sqrt{2z+k}}........(2)$</p>
<p>$\displaystyle 2\sqrt{3z+3}=\frac{12}{\sqrt{2z+k}}+\sqrt{2z+k}=\frac{12+2z+k}{\sqrt{2z+k}}$</p>
<p>$\displaystyle 2\sqrt{6z^2+6z+3kz+3k}=12+2z+k$</p>
<p>$\displaystyle 4(6z^2+6z+3kz+3k)=144+4z^2+k^2+48z+4kz+24k$</p>
<p>Please help me with how to solve it from this point onward.</p>
| Felix Marin | 85,343 | <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
\sum_{k = 1}^{50}\pars{2k - 1}2k & =
\left.\partiald[2]{}{x}\sum_{k = 1}^{50}x^{2k}\right\vert_{\ x\ =\ 1} =
\partiald[2]{}{x}\bracks{x^{2}\,{x^{100} - 1 \over x^{2} - 1}}_{\ x\ =\ 1} =
\bbx{169150}
\end{align}</p>
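The boxed value can be sanity-checked numerically, both directly and via the closed forms $\sum k^2$ and $\sum k$:

```python
n = 50
# Direct summation of (2k - 1)(2k) for k = 1..50
total = sum((2*k - 1) * 2*k for k in range(1, n + 1))
# Closed form: sum of 4k^2 - 2k = 4 * n(n+1)(2n+1)/6 - n(n+1)
closed = 4 * n*(n + 1)*(2*n + 1)//6 - n*(n + 1)
print(total, closed)  # 169150 169150
```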
|
125,503 | <p>Completeness Properties of $\mathbb{R}$: Least Upper Bound Property, Monotone Convergence Theorem, Nested Intervals Theorem, Bolzano Weierstrass Theorem, Cauchy Criterion.</p>
<p>Archimedean Property: $\forall x\in \mathbb{R}\forall \epsilon >0\exists n\in \mathbb{N}:n\epsilon >x$</p>
<p>I can show that LUB implies the Archimedean Property but what about the rest of these properties? Please provide proofs (even hints) or counterexamples. </p>
<p>EDIT: It was shown by Isaac Solomon that the Bolzano-Weierstrass implies the Archimedean Property.</p>
| George Chailos | 320,942 | <p>Cauchy Completeness does not imply Archimedean Property</p>
<p>Counterexample: the field of formal Laurent series over $\mathbb{R}$.
Every Cauchy sequence converges, but the field is non-Archimedean. </p>
<p>For a proof see: Counterexamples in Analysis by Bernard R. Gelbaum and John M. H. Olmsted, Chapter 1, Example</p>
|
984,232 | <p>We consider everything in the category of groups. It is known that monomorphisms are stable under pullback; that is, if
$$\begin{array}{ccc}
A_1 & \stackrel{f_1}{\longrightarrow} & A_2 \\
\downarrow{h} & & \downarrow{h'} \\
B_1 & \stackrel{g_1}{\longrightarrow} & B_2
\end{array}
$$ is a pullback, then $g_1$ being one-to-one implies that $f_1$ is also one-to-one. Now if we weaken the condition, suppose that the kernel of $g_1$ is known, what can we say about the kernel of $f_1$? More precisely, if there is a commutative diagram </p>
<p>$$\begin{array}{ccccccccc}
& & B_0 & & A_1 &\stackrel{f_1}{\longrightarrow} & A_2\\
& & \parallel & &\downarrow{h}& &\downarrow{h'}\\
0 & \stackrel{}{\longrightarrow} &B_0 & \stackrel{g_0}{\longrightarrow} &B_1 & \stackrel{g_1}{\longrightarrow} & B_2 & \stackrel{}{\longrightarrow} & 0
\end{array}$$</p>
<p>where the last row is an exact sequence and $A_1$ is the pullback, can we complete an exact sequence in the first row? Or at least is there a natural map from $B_0$ to $A_1$ making the diagram commutative?</p>
| Martin Brandenburg | 1,650 | <p>There is an exact sequence
$0 \to \ker(f_1) \cap \ker(h) \to \ker(f_1) \to \ker(g_1) \to 0$. I doubt that more can be said.</p>
|
104,170 | <p>I am trying to solve a fundamental problem in analytical convective heat transfer: laminar free convection flow and heat transfer from a flat plate parallel to the direction of the generating body force.</p>
<p><strong>Brief History of the problem</strong></p>
<p>Effectively: a flat plate is vertical and parallel to the direction of gravity vector. The plate is hot and the ambient is not. Heat transfer occurs from the plate to the ambient through natural convection due to density stratification. </p>
<p><a href="https://en.wikipedia.org/wiki/Simon_Ostrach" rel="nofollow noreferrer">Simon Ostrach</a>, a distinguished scientist in the field of microgravity science <a href="https://dl.dropboxusercontent.com/u/13223318/Pohlhausen1952a.pdf" rel="nofollow noreferrer">solved this problem through a coupled set of equations</a>. <strong>In Ostrach's work, these equations were solved by an <a href="https://www-03.ibm.com/ibm/history/exhibits/vintage/vintage_4506VV2198.html" rel="nofollow noreferrer">IBM Card Programmed Electronic Calculator</a></strong></p>
<p>$$ F''' + 3 FF'' - 2 (F')^2 + H = 0 $$
$$ H'' + 2 \text{Pr} F H' = 0 $$</p>
<p>The Boundary conditions are:
$$ F'(0) = F(0) = 0 $$
$$ H(0) = 1 $$
$$ F'(\infty) = H(\infty) = 0 $$</p>
<p>Here, $F$ provides the hydrodynamic solution while $H$ provides the thermal solution with Pr being the Prandtl number which is a property of the fluid that the plate is "immersed" in.</p>
<p><strong>My Mathematica code ... it runs selectively</strong></p>
<pre><code>Clear[max, Pr, T, f, η, p];
max = 50;
Pr = 0.72;
pohl = NDSolve[{f'''[η] + 3 f[η] f''[η] -
2 (f'[η])^2 + T[η] == 0,
T''[η] + 2 Pr f[η] T'[η] == 0, f[0] == f'[0] == 0,
f'[max] == 0, T[0] == 1, T[max] == 0}, {f, T}, {η, max}]
p4 = Plot[{Evaluate[f'[η] /. pohl]}, {η, 0, max},
PlotRange -> All,
PlotLabel ->
Style[Framed["Hydrodynamic development is depicted on this plot"],
10, Blue, Background -> Lighter[Yellow]], ImageSize -> Large,
BaseStyle -> {FontWeight -> "Bold", FontSize -> 18},
AxesLabel -> {"η", "f'[η]"}, PlotLegends -> "Expressions"]
</code></pre>
<p>For a Prandtl number of 0.72 (air) I get a velocity profile ($F'$) as given by Ostrach in his pivotal report. However, for many Prandtl numbers, the following warning message is sometimes flashed and I get incorrect velocity profiles (negative velocities) relative to the publication. For instance, try Pr=6.</p>
<blockquote>
<p>FindRoot::cvmit: Failed to converge to the requested accuracy or
precision within 100 iterations. >></p>
<p>NDSolve::berr: The scaled boundary value residual error of
2.9035865095898766`*^7 indicates that the boundary values are not satisfied to specified tolerances. Returning the best solution found.</p>
</blockquote>
<p>I have experimented with the <code>LSODA</code> <a href="https://mathematica.stackexchange.com/q/11630/204">method because this system of diff eqs is stiff</a> and LSODA has proven to be a 'magic wand' in the past. What gives? How do I select a method for this problem? I wonder if this is a problem with the method of choice (or default method with no options) or my definition of the "free stream limit" $\infty$.</p>
<p><strong>Pr=0.01</strong></p>
<p><a href="https://i.stack.imgur.com/FfJZM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FfJZM.png" alt="Pr=0.01"></a></p>
<p><strong>Pr=0.72</strong></p>
<p><a href="https://i.stack.imgur.com/ShFmq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ShFmq.png" alt="Pr=0.72"></a></p>
<p><strong>Pr=0.6 (what went wrong? Warning message was displayed too...)</strong>
<a href="https://i.stack.imgur.com/2MxSH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2MxSH.png" alt="Pr=0.6"></a></p>
| george2079 | 2,079 | <p>The problem with your system of equations is that the "zero" far-field solution is unstable, hence extremely sensitive to the initial conditions. Posing the problem as an initial value problem, with the "known" conditions from the successful solution:</p>
<pre><code>max = 50;
Pr = .72;
a = 0.7172594734816521`
b = -0.4344414944896132`
pohl = NDSolve[{f'''[\[Eta]] + 3 f[\[Eta]] f''[\[Eta]] -
2 (f'[\[Eta]])^2 + T[\[Eta]] == 0,
T''[\[Eta]] + 2 Pr f[\[Eta]] T'[\[Eta]] == 0, f[0] == f'[0] == 0,
f''[0] == a, T[0] == 1, T'[0] == b}, {f, T}, {\[Eta], max}]
</code></pre>
<p><a href="https://i.stack.imgur.com/MFfi4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MFfi4.png" alt="enter image description here"></a></p>
<p>now change the <code>T'</code> initial value by 1/2%:</p>
<pre><code>b=.995b
</code></pre>
<p>and you see you get the essential solution but it eventually blows up:</p>
<p><a href="https://i.stack.imgur.com/IAtkK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IAtkK.png" alt="enter image description here"></a></p>
<p>so the shooting method needs an exceptionally good initial guess to work.</p>
|
131,741 | <p>Take the following example <code>Dataset</code>:</p>
<pre><code>data = Table[Association["a" -> i, "b" -> i^2, "c" -> i^3], {i, 4}] // Dataset
</code></pre>
<p><img src="https://i.stack.imgur.com/PZSgO.png" alt="Mathematica graphics"></p>
<p>Picking out two of the three columns is done this way:</p>
<pre><code>data[All, {"a", "b"}]
</code></pre>
<p><img src="https://i.stack.imgur.com/LG3jU.png" alt="Mathematica graphics"></p>
<p>Now instead of just returning the "a" and "b" columns I want to map the functions f and h to their elements, respectively, and still drop "c". Based on the previous result and the documentation of <code>Dataset</code> I hoped the following would do that:</p>
<pre><code>data[All, {"a" -> f, "b" -> h}]
</code></pre>
<p><img src="https://i.stack.imgur.com/Wvwml.png" alt="Mathematica graphics"></p>
<p>As you can see, the behavior is not like the one before. Although the functions are mapped as desired, the unmentioned column "c" still remains in the data.</p>
<p>Do I really need one of the following (clumsy looking) alternatives</p>
<pre><code>data[All, {"a" -> f, "b" -> h}][All, {"a", "b"}]
data[Query[All, {"a", "b"}], {"a" -> f, "b" -> h}]
Query[All, {"a", "b"}]@data[All, {"a" -> f, "b" -> h}]
</code></pre>
<p>to get: </p>
<p><img src="https://i.stack.imgur.com/q62un.png" alt="Mathematica graphics"></p>
<p>or is there a more elegant solution?</p>
| WReach | 142 | <p>The following expression might not qualify as <em>elegant</em>, but perhaps it can be scored as <em>less clumsy</em>?</p>
<pre><code>data[All, <| "a" -> "a" /* f, "b" -> "b" /* h |>]
</code></pre>
<p><a href="https://i.stack.imgur.com/q62un.png" rel="noreferrer"><img src="https://i.stack.imgur.com/q62un.png" alt="dataset screenshot"></a></p>
|
4,282,006 | <p><strong>Evaluate the limit</strong></p>
<p><span class="math-container">$\lim_{x\rightarrow \infty}(\sqrt[3]{x^3+x^2}-x)$</span></p>
<p>I know that the limit is <span class="math-container">$1/3$</span> by looking at the graph of this function, but I struggle to show it algebraically.</p>
<p>Is there anyone who can help me out and maybe even provide a solution to this problem?</p>
| David C. Ullrich | 248,223 | <p>Hint: If <span class="math-container">$t=1/x$</span> then <span class="math-container">$$\sqrt[3]{x^3+x^2}-x=\frac{\sqrt[3]{1+t}-1}{t}$$</span></p>
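Completing the hint: the right-hand side is the difference quotient of $t\mapsto\sqrt[3]{1+t}$ at $t=0$, so the limit is the derivative $\frac{1}{3}$. A SymPy sketch confirming both forms:

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)

# The original limit as x -> infinity
direct = sp.limit((x**3 + x**2)**sp.Rational(1, 3) - x, x, sp.oo)
print(direct)  # 1/3

# The hinted substitution t = 1/x turns it into a derivative at 0:
substituted = sp.limit(((1 + t)**sp.Rational(1, 3) - 1) / t, t, 0)
print(substituted)  # 1/3
```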
|
3,118 | <p>Can anyone help me out here? Can't seem to find the right rules of divisibility to show this:</p>
<p>If $a \mid m$ and $(a + 1) \mid m$, then $a(a + 1) \mid m$.</p>
| Robin Chapman | 226 | <p>The other answers put this in a general context, but in this example one
can be absolutely explicit. If $a\mid m$ and $(a+1)\mid m$ then there are
integers $r$ and $s$ such that
$$m=ar=(a+1)s.$$
Then
$$a(a+1)(r-s)=(a+1)[ar]-a[(a+1)s]=(a+1)m-am=m.$$
As $r-s$ is an integer, then $a(a+1)\mid m$.</p>
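The explicit identity can be sanity-checked exhaustively on a small range (a Python sketch; `implication_holds` is our helper name):

```python
def implication_holds(a, m):
    # If a | m and (a+1) | m, check that a(a+1) | m.
    if m % a == 0 and m % (a + 1) == 0:
        return m % (a * (a + 1)) == 0
    return True  # hypothesis not met, nothing to check

all_hold = all(implication_holds(a, m)
               for a in range(1, 20) for m in range(1, 5000))
print(all_hold)  # True
```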
|
4,609,236 | <p>When finding the derivative of <span class="math-container">$f(x) = \sqrt x$</span> via the limit definition, one gets</p>
<p><span class="math-container">$$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} = \lim_{h \to 0} \frac{\sqrt{x+h} - \sqrt x}{h}$$</span></p>
<p>For this, I could get the answer from applying L'Hopital's rule... but to me, this line of reasoning is a bit circular. I'm computing a derivative from first principles while needing to use derivative rules (from L'Hopital's) to compute this derivative.</p>
<p>Can I use L'Hopital's rule when finding a derivative using the limit definition?</p>
| gnasher729 | 137,175 | <p>L’Hopital’s rule has been proven, many, many times in the past. It’s true, because you believe hundreds of mathematicians who told you so. You don’t need to know details of the proof of the rule.</p>
<p>So you can use the rule to prove something. And I use what you proved to prove L’Hopital. Yes, there is a circle. Nevertheless, L’Hopital’s rule is true. As is everything proved by using it.</p>
|
2,043,429 | <p>In my textbook, it states that the general formula for the partial sum </p>
<p>$$\sum_{i=1}^n i^2 = \frac{n(n+1)(2n+1)}{6}$$</p>
<p>My question is, if I have the following sum instead:</p>
<p>$$\sum_{i=1}^n \frac{1}{i^2}$$ </p>
<p>Can I just flip the general formula to get this:</p>
<p>$$\sum_{i=1}^n \frac{1}{i^2} = \frac{6}{n(n+1)(2n+1)} $$</p>
<p>Or does it not work like that? Thank you!</p>
| Olivier Oloa | 118,798 | <p>The answer is no.</p>
<p>Even with <em>just</em> two terms:
$$
\frac1a+\frac1b \neq \frac1{a+b},
$$ for example
$$
\frac11+\frac11=2 \neq \frac1{2}=\frac1{1+1}.
$$</p>
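Already at $n=2$ the flipped formula fails, as a check with exact fractions (Python's `fractions` module) shows:

```python
from fractions import Fraction

n = 2
sum_sq = sum(Fraction(i*i) for i in range(1, n + 1))          # 1 + 4 = 5
formula = Fraction(n*(n + 1)*(2*n + 1), 6)                    # 5: the formula works
sum_recip = sum(Fraction(1, i*i) for i in range(1, n + 1))    # 1 + 1/4 = 5/4
flipped = Fraction(6, n*(n + 1)*(2*n + 1))                    # 1/5: not 5/4
print(sum_sq == formula, sum_recip == flipped)  # True False
```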
|
4,573,004 | <p>I want to solve <span class="math-container">$y'' +y^3 = 0$</span> with the boundary conditions <span class="math-container">$y(0) = a$</span> and <span class="math-container">$y(k) = b$</span>. My goal is to reduce this problem to <span class="math-container">$y' +y^2 = 0$</span> while solving but I'm not sure it can be done.</p>
<p>I tried reduction of order substitutions (ie. taking <span class="math-container">$y' = w$</span> and <span class="math-container">$y'' = \frac{dw}{dy}y'$</span>) but that did not work. Then I tried to solve in the following way</p>
<p><span class="math-container">$y'' y' = -y^3 y'$</span></p>
<p><span class="math-container">$\frac{1}{2}[(y')^2]' = -[\frac{1}{4} y^4]'$</span></p>
<p><span class="math-container">$\frac{1}{2}(y')^2 = -\frac{1}{4} y^4+C$</span></p>
<p><span class="math-container">$(y')^2 = -\frac{1}{2} y^4+C$</span></p>
<p><span class="math-container">$y' = \pm \sqrt{-\frac{1}{2} y^4+C}$</span></p>
<p>It seems to me if I take my original problem to be <span class="math-container">$y'' - 2y^3 = 0$</span> instead, I get
<span class="math-container">$y' = \pm \sqrt{y^4+C}$</span>. If <span class="math-container">$C=0$</span>, this would reduce to <span class="math-container">$y' - y^2 = 0$</span>, which is close enough to what I want for my purposes. But I'm not sure how to get <span class="math-container">$C=0$</span> without a condition on the derivative, so maybe this was the wrong way to go.</p>
<ol>
<li>Can I reduce my original problem, <span class="math-container">$y'' +y^3 = 0$</span>, to <span class="math-container">$y' +y^2 = 0$</span>?</li>
<li>Where do my boundary conditions come into play?</li>
</ol>
| Zuhair | 489,784 | <p>For any set <span class="math-container">$S$</span> and any set <span class="math-container">$C$</span> it is clear that <span class="math-container">$S \cup (S - C)$</span> is <span class="math-container">$S$</span> itself because by the union you are returning back the elements of <span class="math-container">$S$</span> that you took out by <span class="math-container">$S - C$</span>, so here in this case we have <span class="math-container">$S = B^c$</span> and <span class="math-container">$C=A$</span> and so we'll have <span class="math-container">$(B^c)^c$</span> which is <span class="math-container">$B$</span> itself.</p>
|
2,158,369 | <p>Prove that for $a,b>0$ and $0<x<\pi/2$:
$$a\sqrt{\sin x}+b\sqrt{\cos x}\leq(a^{4/3}+b^{4/3})^{3/4}$$
My try:</p>
<p>$$a\sqrt{\sin x}+b\sqrt{\cos x}=a(\sqrt{\sin x}+\frac{a}{b}\sqrt{\cos x})$$</p>
<p>$$\frac{a}{b}=\tan y$$</p>
<p>$$a\sqrt{\sin x}+b\sqrt{\cos x}=a(\sqrt{\sin x}+\tan y\sqrt{\cos x})$$</p>
<p>$$\frac{\sin y}{\cos y}=\tan y$$</p>
<p>$$a\sqrt{\sin x}+b\sqrt{\cos x}=a(\sqrt{\sin x}+\frac{\sin y}{\cos y}\sqrt{\cos x})$$</p>
<p>now?</p>
| Patrick Stevens | 259,262 | <p>Consider a vertex of minimal degree. If the degree was $1$, then we're done by induction if we remove that vertex. Otherwise, the minimal degree in the graph is at least $2$. Take a longest path $v_0 v_1 \dots v_k$; then $v_0$ has degree $2$ or higher, so the only way this can be a <em>longest</em> path is if $v_0$ is connected to some other $v_i$ than $v_1$. (If not, if $v_0$ is connected to $x$ which is not a $v_i$, then we could extend the path by adding $x$.) So we have found a cycle.</p>
<hr>
<p>Why is there a longest path? There is certainly a path (any single vertex gives one). There are only finitely many paths, since paths do not repeat vertices and there are only $|V|$ of them. So we can just list the paths and their lengths; at least one of them will have greatest length (because every finite nonempty set has a maximum).</p>
<hr>
<p>Question for you: where have we used that the graph was connected?</p>
|
2,413,368 | <p>I am to show that if $ w = z + \frac{c}{z} $ and $ |z| = 1 $, then $w$ is an ellipse, and I must find its equation.</p>
<p>Previously, I have solved transformation questions by finding the modulus of the transformation in either the form $ w = f(z) $ or $ z = f(w) $. However, I think the part stumping me here is that the transformation has both $z$ and $z^{-1}$ in it.</p>
<p>Attempt with $w = f(z)$:
$$ w = x+iy + \frac{c}{x+iy} $$
$$= x+iy + \frac{c(x-iy)}{x^2 + y^2} $$
$$= x+iy+c(x-iy)$$
[as $|z| = 1 \implies x^2 + y^2 = 1^2$]</p>
<p>However, from here I am unable to work out how to proceed.</p>
<p>I also tried to find $z=f(w)$:
$$ z^2 - zw + c = 0 $$
$$ (z - \frac{w}{2})^2 + c - \frac{w^2}{4} = 0 $$
But I cannot see how this is of any use either.</p>
| dxiv | 291,201 | <p>Let $\,z=u^2\,$ with $\,|u| = \sqrt{|z|}=1\,$, and let $\,c=b^2\,$. Then $\displaystyle\,w=u^2+\frac{b^2}{u^2}\,$, and so:</p>
<p>$$
w+2b=u^2+\frac{b^2}{u^2}+ 2b=\left(u+\frac{b}{u}\right)^2 \\ \implies |w+2b|=\left|u+\frac{b}{u}\right|^2 = \left(u+\frac{b}{u}\right)\left(\bar u+\frac{\bar b}{\bar u}\right) = |u|^2+\frac{|b|^2}{|u|^2}+\frac{b \bar u}{u}+\frac{\bar b u}{\bar u} \\[5px]
w-2b=u^2+\frac{b^2}{u^2}- 2b=\left(u-\frac{b}{u}\right)^2 \\ \implies |w-2b|=\left|u-\frac{b}{u}\right|^2 = \left(u-\frac{b}{u}\right)\left(\bar u-\frac{\bar b}{\bar u}\right) = |u|^2+\frac{|b|^2}{|u|^2}-\frac{b \bar u}{u}-\frac{\bar b u}{\bar u}
$$</p>
<p>Adding the above gives:</p>
<p>$$
|w+2b| + |w-2b| = 2|u|^2+2\,\frac{|b|^2}{|u|^2} = 2(1+|b|^2) \tag{1}
$$</p>
<p>Therefore the locus of $\,w\,$ is an ellipse with foci $\,\pm 2b\,$ and semi-major axis $1+|b|^2 \ge 2|b|\;$ (which degenerates to the segment $\,[-2b,2b]\,$ iff $\,|b|=1 \iff |c|=1\,$).</p>
<p><hr>
[ <em>EDIT</em> ] To elaborate on $(1)$, the equation says that the sum of distances from $w$ to the fixed points $-2b$ and $2b$ is constant, which is the geometric definition of an <a href="https://en.wikipedia.org/wiki/Ellipse" rel="nofollow noreferrer">ellipse</a>. In general, the equation $|w-a|+|w-b|=c \in \mathbb{R}^+$ describes an ellipse with foci $a,b \in \mathbb{C}$ and semi-major axis $\frac{1}{2}c$ when $c \gt |a-b|$. The locus degenerates to the segment $[a,b]$ if $c=|a-b|$ and is empty if $c \lt |a-b|$.</p>
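<p>Identity $(1)$ is easy to check numerically by sampling the unit circle; a short Python sketch (the particular value of $b$ is an arbitrary choice of mine):</p>

```python
import cmath

b = 0.3 + 0.4j                # arbitrary square root of c; here c = b^2
c = b * b
for k in range(360):
    z = cmath.exp(2j * cmath.pi * k / 360)      # |z| = 1
    w = z + c / z
    s = abs(w + 2 * b) + abs(w - 2 * b)
    # identity (1): the sum of distances to the foci +-2b is constant
    assert abs(s - 2 * (1 + abs(b) ** 2)) < 1e-9
```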
|
16,754 | <p>Let $c$ be an integer, not necessarily positive and not a square. Let $R=\mathbb{Z}[\sqrt{c}]$
denote the set of numbers of the form $$a+b\sqrt{c}, a,b \in \mathbb{Z}.$$
Then $R$ is a subring of $\mathbb{C}$ under the usual addition and multiplication.</p>
<p>My question is: if $R$ is a UFD (unique factorization domain), does it follow that
it is also a PID (principal ideal domain)?</p>
| Bill Dubuque | 242 | <p>Yes, because (quadratic) number rings are easily shown to have dimension at most one (i.e. every nonzero prime ideal is maximal). But <span class="math-container">$\rm PID$</span>s are <em>precisely</em> the <span class="math-container">$\rm UFD$</span>s which have dimension <span class="math-container">$\le 1.\, $</span> Below is a sketch of a proof of this and closely related results.</p>
<p><strong>Theorem</strong> <span class="math-container">$\rm\ \ TFAE\ $</span> for a <span class="math-container">$\rm UFD\ D$</span></p>
<p><span class="math-container">$(1)\ \ $</span> prime ideals are maximal if nonzero, <span class="math-container">$ $</span> i.e. <span class="math-container">$\rm\ dim\,\ D \le 1$</span><br />
<span class="math-container">$(2)\ \ $</span> prime ideals are principal<br />
<span class="math-container">$(3)\ \ $</span> maximal ideals are principal<br />
<span class="math-container">$(4)\ \ \rm\ gcd(a,b) = 1\, \Rightarrow\, (a,b) = 1,\, $</span> i.e. <span class="math-container">$ $</span> coprime <span class="math-container">$\Rightarrow$</span> comaximal <br/>
<span class="math-container">$(5)\ \ $</span> <span class="math-container">$\rm D$</span> is Bezout, i.e. all ideals <span class="math-container">$\,\rm (a,b)\,$</span> are principal.<br />
<span class="math-container">$(6)\ \ $</span> <span class="math-container">$\rm D$</span> is a <span class="math-container">$\rm PID$</span></p>
<p><strong>Proof</strong> <span class="math-container">$\ $</span> (sketch of <span class="math-container">$\,1 \Rightarrow 2 \Rightarrow 3 \Rightarrow 4 \Rightarrow 5 \Rightarrow 6 \Rightarrow 1)\ $</span> where <span class="math-container">$\rm\,p_i,\,P\,$</span> denote primes <span class="math-container">$\neq 0$</span></p>
<p><span class="math-container">$(1\Rightarrow 2)$</span> <span class="math-container">$\rm\ \ p_1^{e_1}\cdots p_n^{e_n}\in P\,\Rightarrow\,$</span> some <span class="math-container">$\rm\,p_j\in P\,$</span> so <span class="math-container">$\rm\,P\supseteq (p_j)\, \Rightarrow\, P = (p_j)\:$</span> by dim <span class="math-container">$\le1$</span> <br/>
<span class="math-container">$(2\Rightarrow 3)$</span> <span class="math-container">$ \ $</span> max ideals are prime, so principal by <span class="math-container">$(2)$</span><br />
<span class="math-container">$(3\Rightarrow 4)$</span> <span class="math-container">$\ \rm \gcd(a,b)=1\,\Rightarrow\,(a,b) \subsetneq (p) $</span> for all max <span class="math-container">$\rm\,(p),\,$</span> so <span class="math-container">$\rm\ (a,b) = 1$</span> <br/>
<span class="math-container">$(4\Rightarrow 5)$</span> <span class="math-container">$\ \ \rm c = \gcd(a,b)\, \Rightarrow\, (a,b) = c\ (a/c,b/c) = (c)$</span> <br/>
<span class="math-container">$(5\Rightarrow 6)$</span> <span class="math-container">$\ $</span> Ideals <span class="math-container">$\neq 0\,$</span> in Bezout UFDs <a href="https://math.stackexchange.com/a/218516/242">are generated by an elt with least #prime factors</a><br />
<span class="math-container">$(6\Rightarrow 1)$</span> <span class="math-container">$\ \ \rm (d) \supsetneq (p)$</span> properly <span class="math-container">$\rm\Rightarrow\,d\mid p\,$</span> properly <span class="math-container">$\rm\,\Rightarrow\,d\,$</span> unit <span class="math-container">$\,\rm\Rightarrow\,(d)=(1),\,$</span> so <span class="math-container">$\rm\,(p)\,$</span> is max</p>
<p><strong>Remark</strong> <span class="math-container">$ $</span> Examples of non-PID UFDs are easy in polynomial rings: if <span class="math-container">$D$</span> is a non-field domain then it has a nonzero nonunit <span class="math-container">$d$</span> so by <a href="https://math.stackexchange.com/a/1872824/242">here</a> the ideal <span class="math-container">$(d,x)$</span> is not principal.</p>
|
337,252 | <p>I'm guessing that the free group on an empty set is either the trivial group or isn't defined.
Some clarification would be appreciated.</p>
| archipelago | 67,907 | <p>It is defined. Free groups are defined for any sets just by the following universal property:</p>
<p>Let $S$ be a set. A group $F(S)$ with a map $\iota\colon S\rightarrow F(S)$ is called free over $S$, if for any group $H$ and any map $g\colon S\rightarrow H$ there is exactly one group homomorphism $h\colon F(S)\rightarrow H$, such that $g=h\circ\iota$.</p>
<p>(With this universal property, the free group over a fixed set is defined up to isomorphism, and one possible choice in this isomorphism class is just the construction with "words" in $S$, which you probably know.)</p>
<p>Let's try this for the empty set $\emptyset$. The guess was that it could be the trivial group $\{0\}$ with the only possible map $\iota\colon \emptyset\rightarrow \{0\}$, namely the empty map. So let $H$ be an arbitrary group and $g\colon \emptyset\rightarrow H$ a map. Since there is no such map except the empty map, $g$ must be the empty map. Now we are searching for a group homomorphism $h\colon \{0\}\rightarrow H$ such that $g=h\circ\iota$. Since there is exactly one group homomorphism from $\{0\}$ to any other group, namely the one which sends $0$ to the neutral element of $H$, and $g=h\circ\iota$ holds for this map (both sides are just the empty map), we get the result: the free group over the empty set is just the trivial group.</p>
<p>If you know some basic category theory, you could do this way more elegant and things become more natural.</p>
|
<p>Let <span class="math-container">$P=(x_1,y_1)$</span> be a non-torsion point on an elliptic curve <span class="math-container">$y^2=x^3+Ax+B$</span>.
Let <span class="math-container">$(x_n,y_n)=P^{2^n}$</span>. The <span class="math-container">$x_n,y_n$</span> are rationals with heights growing rapidly. Can <span class="math-container">${x_n} {y_n}$</span> stay bounded?</p>
| Ben Webster | 66 | <p>From the discussion above it looks like the answer is yes (<strong>EDIT</strong>: if you allow real numbers; the OP was unclear, perhaps they wanted a rational point, in which case I'm uncertain. Does anybody know anything about the binary expansion of complex numbers with rational Weierstrass p-values?). Let the origin of your group be the point at infinity in the curve in $\mathbb{RP}^2$, and pick a topological group isomorphism of $S^1$ to the component of the identity to $S^1\cong \mathbb{R}/\mathbb{Z}$. The doublings of a point are given by truncating off the first $m$ digits of the base 2 expansion of your point. Thus the doublings of a point stay bounded if and only if the length of a consecutive string of 0's and 1's in this expansion is bounded above (there are plenty of irrational numbers with this property).</p>
<p>There's a similar answer for putting the origin somewhere else: you can never allow too much of the beginning of the expansion of the point at infinity to show up in the expansion of your point. </p>
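<p>The binary-expansion criterion can be illustrated directly on the circle $\mathbb{R}/\mathbb{Z}$, where doubling a point shifts its binary expansion one digit to the left. A Python sketch (the test numbers are my own illustrative choices, using exact rational arithmetic):</p>

```python
from fractions import Fraction

def orbit_dist_to_zero(x, n):
    """Smallest circle-distance from 0 reached by x, 2x, 4x, ... (mod 1)."""
    d = Fraction(1)
    for _ in range(n):
        x = (2 * x) % 1
        d = min(d, x, 1 - x)
    return d

# binary digits with longer and longer runs of 0s: doublings come
# arbitrarily close to 0 (the identity / point at infinity)
x_bad = sum(Fraction(1, 2 ** (2 ** k)) for k in range(1, 6))
# 1/3 = 0.010101..._2 has runs of bounded length: its orbit stays away from 0
x_good = Fraction(1, 3)
```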
|
1,176,938 | <p>How do you show that $-1+(x-4)(x-3)(x-2)(x-1)$ is irreducible in $\mathbb{Q}$?</p>
<p>I don't think you can use the Eisenstein criterion here.</p>
| Bill Kleinhans | 73,675 | <p>Actually, the obvious generalization is also true. Let $P$ and $Q$ be polynomial factors, so that the given expression equals $PQ$. Then $PQ=-1$ at each of the integer values, $1,2,3,4$ for this case. So $P,Q=\pm1$ at each of these integers, and whenever $P$ equals one, $Q$ equals minus one, and vice versa; hence $P+Q$ vanishes at all four integers. But in a nontrivial factorization each factor has degree at most three, so $P+Q$ is a polynomial of degree at most three with four roots, which forces $P+Q=0$, i.e. $P=-Q$. But this is impossible because then the leading coefficient of the product would be negative, whereas it must be plus one.</p>
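<p>The conclusion can also be cross-checked by brute force over $\mathbb{Z}$ (by Gauss's lemma, a factorization over $\mathbb{Q}$ would give one over $\mathbb{Z}$, with monic factors up to sign); a small Python search:</p>

```python
# p(x) = (x-1)(x-2)(x-3)(x-4) - 1 = x^4 - 10x^3 + 35x^2 - 50x + 23
def p(x):
    return (x - 1) * (x - 2) * (x - 3) * (x - 4) - 1

# the key fact used in the argument: p = -1 at x = 1, 2, 3, 4
assert all(p(k) == -1 for k in (1, 2, 3, 4))

# no rational root (an integer root would divide 23), so no linear factor
assert all(p(r) != 0 for r in (1, -1, 23, -23))

# search monic quadratic factorizations (x^2+ax+b)(x^2+cx+d) over Z:
# matching coefficients forces a+c = -10, b+d+ac = 35, ad+bc = -50, bd = 23
quadratic_splits = [
    (a, b, -10 - a, d)
    for b, d in [(1, 23), (23, 1), (-1, -23), (-23, -1)]
    for a in range(-100, 101)
    if b + d + a * (-10 - a) == 35 and a * d + b * (-10 - a) == -50
]
```

The empty search result confirms irreducibility over $\mathbb{Q}$.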
|
3,483,260 | <p>Given the set <span class="math-container">$\{1,2,3,4,5,6,7\}$</span>.</p>
<p>We would like to create a string of size 8 so that each of the elements of the set appears at least once in the result. How many ways are there to create such a string?</p>
<p>I think that the answer should be: order the 7 elements ($7!$ ways), choose 1 number out of 7 to reappear, and choose a position out of the 8 available.</p>
<p>Am I correct? Or am I missing something?</p>
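<p>A quick cross-check in Python (inclusion–exclusion against a direct multiset count; this is only a sanity check of the numbers, not part of the original question): the two counts below agree, while $7!\cdot 7\cdot 8$ is exactly twice as large. I believe the factor of $2$ comes from the two interchangeable copies of the repeated letter.</p>

```python
from math import comb, factorial

# inclusion-exclusion over the set of letters that fail to appear:
# number of length-8 strings over a 7-letter alphabet using every letter
surjections = sum((-1) ** k * comb(7, k) * (7 - k) ** 8 for k in range(8))

# direct count: choose the letter that appears twice (7 ways), then
# arrange the multiset of 8 symbols: 8!/2! orderings
direct = 7 * factorial(8) // 2
```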
| Robert Lewis | 67,071 | <p>With</p>
<p><span class="math-container">$0 \le f''(x) \le f(x), \; \forall x \in \Bbb R, \tag 1$</span></p>
<p>and</p>
<p><span class="math-container">$f'(x) \ge 0, \; \forall x \in \Bbb R, \tag 2$</span></p>
<p>we have</p>
<p><span class="math-container">$0 \le f'(x) f''(x) \le f(x) f'(x), \; \forall x \in \Bbb R, \tag 3$</span></p>
<p>or</p>
<p><span class="math-container">$0 \le \dfrac{1}{2} ((f'(x))^2)' \le \dfrac{1}{2} (f^2(x))', \tag 4$</span></p>
<p>or</p>
<p><span class="math-container">$0 \le ((f'(x))^2)' \le (f^2(x))', \tag 5$</span></p>
<p>we integrate this 'twixt some arbitrary <span class="math-container">$L \in \Bbb R$</span> and <span class="math-container">$x \in \Bbb R$</span> to obtain</p>
<p><span class="math-container">$(f'(x))^2 - (f'(L))^2 = \displaystyle \int_L^x ((f'(s))^2)' \; ds \le \int_L^x (f^2(s))' \; ds = f^2(x) - f^2(L). \tag 6$</span></p>
<p>Now in light of (1), </p>
<p><span class="math-container">$f(x) \ge 0, \forall x \in \Bbb R, \tag 7$</span></p>
<p>so if we define</p>
<p><span class="math-container">$\alpha = \inf \{ f(x), \; x \in \Bbb R \}, \tag 8$</span></p>
<p>then</p>
<p><span class="math-container">$\alpha \ge 0 \tag 9$</span></p>
<p>and, by virtue of (2), <span class="math-container">$f(x)$</span> is monotonically <em>decreasing</em> with decreasing <span class="math-container">$x$</span>; together these imply that</p>
<p><span class="math-container">$\displaystyle \lim_{x \to -\infty} f(x) = \alpha. \tag{10}$</span></p>
<p>We next consider <span class="math-container">$f'(x)$</span> as <span class="math-container">$x \to -\infty$</span>; again in accord with (1) we see that <span class="math-container">$f'(x)$</span> is monotonically decreasing with decreasing <span class="math-container">$x$</span>, and by (2) it too is bounded below by <span class="math-container">$0$</span>. I claim that in fact</p>
<p><span class="math-container">$\displaystyle \lim_{x \to -\infty} f'(x) = 0; \tag{11}$</span></p>
<p>for if not, setting</p>
<p><span class="math-container">$\beta = \inf \{f'(x), x \in \Bbb R\} > 0, \tag{12}$</span></p>
<p>then we may assert that</p>
<p><span class="math-container">$f'(x) \ge \beta, \; \forall x \in \Bbb R; \tag{13}$</span></p>
<p>then picking </p>
<p><span class="math-container">$x_0, x_1 \in \Bbb R, \; x_0 < x_1, \tag{14}$</span></p>
<p>it follows that</p>
<p><span class="math-container">$f(x_1) - f(x_0) = \displaystyle \int_{x_0}^{x_1} f'(s) \; ds \ge \int_{x_0}^{x_1} \beta \; ds = \beta(x_1 - x_0), \tag{15}$</span></p>
<p>whence</p>
<p><span class="math-container">$f(x_1) - f(x_0) \ge \beta(x_1 - x_0), \tag{16}$</span></p>
<p>or</p>
<p><span class="math-container">$f(x_0) - f(x_1) \le -\beta(x_1 - x_0), \tag{16'}$</span></p>
<p>that is,</p>
<p><span class="math-container">$f(x_0) \le f(x_1) - \beta(x_1 - x_0); \tag{17}$</span></p>
<p>but it is easily seen that this implies that </p>
<p><span class="math-container">$f(x_0) \to -\infty \; \text{as} \; x_0 \to -\infty, \tag{18}$</span></p>
<p>which contradicts (1). Thus</p>
<p><span class="math-container">$\beta = 0 \tag{19}$</span></p>
<p>and</p>
<p><span class="math-container">$f'(x) \to 0 \; \text{as} \; x \to -\infty. \tag{20}$</span></p>
<p>Returning now to (6), we have</p>
<p><span class="math-container">$(f'(x))^2 - (f'(L))^2 \le f^2(x) - f^2(L), \tag {21}$</span></p>
<p>and letting</p>
<p><span class="math-container">$L \to -\infty \tag{22}$</span></p>
<p>we reach</p>
<p><span class="math-container">$(f'(x))^2 \le f^2(x) - \alpha^2 \le f^2(x), \tag {23}$</span></p>
<p>and since both</p>
<p><span class="math-container">$f(x), f'(x) \ge 0, \forall x \in \Bbb R, \tag{24}$</span></p>
<p>we may at last conclude that</p>
<p><span class="math-container">$f'(x) \le f(x), \; \forall x \in \Bbb R, \tag{25}$</span></p>
<p><span class="math-container">$OE\Delta$</span>.</p>
<p>Finally, note that in light of (10) and (11) we have</p>
<p><span class="math-container">$\displaystyle \lim_{x \to -\infty} (f(x) - f'(x))$</span> <span class="math-container">$= \lim_{x \to -\infty} f(x) - \lim_{x \to -\infty}f'(x) = \alpha - 0 \ge 0, \tag{26}$</span></p>
<p>as our OP requested be proved in a comment to Clement Yung's answer.</p>
|
3,295,318 | <p><span class="math-container">$$\int_c \frac{\cos(iz)}{z^2(z^2+2i)}\, dz$$</span></p>
<p>I used residue calculus for solving the problem, but I am not quite sure whether the approach is right.</p>
<p>The attempt has been annexed in the pictures.</p>
<p><a href="https://i.stack.imgur.com/ZSv27.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZSv27.jpg" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/v533k.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/v533k.jpg" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/ldJpM.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ldJpM.jpg" alt="enter image description here" /></a></p>
| fleablood | 280,126 | <p>Notice if you add an even number of odd numbers you get an even sum, while if you add an odd number of odd numbers you get an odd sum. So to get an odd sum you must have an odd number of terms.</p>
<p>Suppose you have <span class="math-container">$2k + 1$</span> terms and the middle term is <span class="math-container">$m$</span>; then your numbers are <span class="math-container">$(m-2k), (m-2k+2), (m-2k + 4),...... (m-2), m , (m+2),...... (m+2k-4), (m+2k-2), (m+2k)$</span>. </p>
<p>If you add up the first term with the last term you get <span class="math-container">$(m-2k) + (m+2k) =2m$</span>. If you add up the second and second-to-last terms you get <span class="math-container">$(m - 2k +2)+(m+2k -2) = 2m$</span>. And so on.</p>
<p>So when you add them all up you get <span class="math-container">$2m*k + m = m(2k + 1)$</span></p>
<p>So if you factor your number into two factors and set one to <span class="math-container">$m$</span> and the other to <span class="math-container">$2k + 1$</span> (as your number is odd both factors will be odd) you can get your sum.</p>
<p>Example: <span class="math-container">$1649 = 1 * 1649$</span>, so if <span class="math-container">$m = 1649$</span> and <span class="math-container">$2k+1 = 1$</span> then we can write <span class="math-container">$1649$</span> as a sum of <span class="math-container">$1$</span> term with the middle term <span class="math-container">$1649$</span>. I.e. <span class="math-container">$1649 = \sum_{i=1}^1 1649$</span>.</p>
<p>..... Okay, that's cheating but we can write <span class="math-container">$m =1$</span> and <span class="math-container">$2k+1 = 1649$</span> so that it can be the sum of <span class="math-container">$1649$</span> terms with the middle term of <span class="math-container">$1$</span>. so <span class="math-container">$-1647+(-1645) + (-1643) + ...... + 1643 + 1645 + 1647 + 1649 = 1649$</span>.</p>
<p>.... Okay, that was me cheating <em>again</em>. But if the number is not prime we can do it.</p>
<p><span class="math-container">$1649 = 17*97$</span>. Let <span class="math-container">$m =97$</span> and <span class="math-container">$2k+1 =17$</span> so <span class="math-container">$k = 8$</span> then we can have a sum of <span class="math-container">$17$</span> terms centered at <span class="math-container">$97$</span>.</p>
<p>So <span class="math-container">$81 + 83 + 85 + 87 + ..... + 109 + 111 + 113 = $</span></p>
<p><span class="math-container">$(97-16) + (97-14) + .... + (97-2) + 97 + (97+2) + .... + (97+14)+(97+16) =$</span></p>
<p><span class="math-container">$97 + 97 + ..... + 97 + 97 +97 +.... + 97 + 97 = 17*97 = 1649$</span>.</p>
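<p>The recipe above (factor $n=m(2k+1)$ and centre the run of odd numbers at $m$) is easy to mechanize; a Python sketch (it uses the smallest odd factor greater than $1$, so it assumes $n$ is an odd composite):</p>

```python
def consecutive_odds(n):
    """Write an odd composite n as a sum of more than one consecutive odd number."""
    t = next(t for t in range(3, n, 2) if n % t == 0)  # number of terms, 2k+1
    m = n // t                                         # middle term
    k = (t - 1) // 2
    return [m - 2 * k + 2 * i for i in range(t)]

terms = consecutive_odds(1649)   # 17 terms centred at 97
```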
|
3,249,809 | <p>What’s the explicit rule for this number sequence?</p>
<p><span class="math-container">$$\displaystyle{1 \over 100},\ -{3 \over 95},\ {5 \over 90},\
-{7 \over 85},\ {9 \over 80}$$</span></p>
<p>The numerator increases by $2$ each term and the sign alternates every other term, while the denominator decreases by <span class="math-container">$5$</span> every term. </p>
| MarianD | 393,259 | <p>A general formula for an (infinite) sequence of (e.g. real) numbers from a finite number <span class="math-container">$n$</span> of its (first) members is <strong>in principle impossible</strong> to determine, as the next (not listed) <span class="math-container">$(n+1)^\mathrm{th}$</span> member may be an <em>arbitrary</em> number, and <em>there is still a formula expressing</em> <span class="math-container">$a_1, \dots, a_n, a_{n+1}$</span>.</p>
<p>See <a href="https://math.stackexchange.com/a/3196103/393259">this answer</a>, in which, for the question</p>
<blockquote>
<p>What is the next number of the sequence <span class="math-container">$1, 2, 3, 4, 5?$</span></p>
</blockquote>
<p>it is proved by an appropriate formula that the <strong>next number is <span class="math-container">$\color{red}{1762}.$</span></strong></p>
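<p>The point is easy to reproduce: Lagrange interpolation gives a degree-$5$ polynomial through $(1,1),\dots,(5,5)$ and $(6,1762)$, so the data $1,2,3,4,5$ alone never determine the next term. A Python sketch with exact arithmetic (the linked answer's actual formula may differ; this is just one interpolant):</p>

```python
from fractions import Fraction

pts = [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 1762)]

def interpolate(pts, x):
    """Evaluate the unique polynomial of degree <= len(pts)-1 through pts at x."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(pts):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(pts):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total
```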
|
196,092 | <p>Are there any examples of a semigroup (which is <strong>not a group</strong>) with <strong>exactly one</strong> left (right) identity (which is <strong>not the two-sided identity</strong>)? Are there any “real-world” examples of these (semigroups of some more or less well-known mathematical objects), or can they only be “manually constructed” from abstract symbols (a, b, c…) subject to an operation given by a Cayley table?</p>
| Nobody | 17,111 | <p>Take a finite semigroup $S$. Then $S$ has an idempotent element $e$ since $S$ is finite.</p>
<p>Let $T = \{se : s \in S\}$. Then $T$ is a subsemigroup of $S$. We have $e \in T$ because $e = ee$. And $e$ is a right identity of $T$ since $(se)e = s(ee) = se$ for all $s \in S$.</p>
<p>My problem with this example is that I don't think $e$ is the only right identity of $T$ for every such $T$ and this is probably still not real world enough.</p>
<p>I believe if we give more conditions when we construct $T$, we might be able to get a semigroup with only one right identity. But, it may not be a real example for OP.</p>
|
196,092 | <p>Are there any examples of a semigroup (which is <strong>not a group</strong>) with <strong>exactly one</strong> left (right) identity (which is <strong>not the two-sided identity</strong>)? Are there any “real-world” examples of these (semigroups of some more or less well-known mathematical objects), or can they only be “manually constructed” from abstract symbols (a, b, c…) subject to an operation given by a Cayley table?</p>
| Tara B | 26,052 | <p>Consider for example the semigroup consisting of all constant functions on a set $X$ [acting on the right], together with one non-constant idempotent function $f$ (for example, let $f$ fix some point $x\in X$ and send every other point to some $y\neq x$). Then $f$ is a unique left identity, and $f$ is not a right identity.</p>
<p>In general I think it's probably helpful to think about this question in terms of transformation semigroups.</p>
<p>EDIT: Since this question has been sitting around with no accepted answer for a while, I'll state my last sentence a bit more strongly: You can determine exactly which transformation semigroups have a single left (or right) identity, and since every semigroup is isomorphic to a transformation semigroup, doing this will give you all examples.
[Although I just noticed that the OP hasn't been on this site for about a month, so I guess the question might remain 'unanswered'.]</p>
|
196,092 | <p>Are there any examples of a semigroup (which is <strong>not a group</strong>) with <strong>exactly one</strong> left (right) identity (which is <strong>not the two-sided identity</strong>)? Are there any “real-world” examples of these (semigroups of some more or less well-known mathematical objects), or can they only be “manually constructed” from abstract symbols (a, b, c…) subject to an operation given by a Cayley table?</p>
| J.-E. Pin | 89,374 | <p>Take the semigroup $S = \{a, b, 0\}$ with $a^2 = a$, $ab = b$ and every other product equal to $0$. Then $a$ is a left identity (since $ax = x$ for all $x \in S$) but it is not an identity (since $ba = 0$). You can define this semigroup in three other equivalent ways:</p>
<p>(1) As a transformation semigroup on $\{1, 2, 0\}$. Just take the semigroup generated by $a = [1, 0, 0]$ and $b = [2, 0, 0]$.</p>
<p>(2) As a semigroup of matrices. Just take $a = \pmatrix{1 & 0\\ 0 & 0}$, $b = \pmatrix{0 & 1\\ 0 & 0}$ and $0 = \pmatrix{0 & 0\\ 0 & 0}$.</p>
<p>(3) As the syntactic semigroup of the regular language $a^*b$ [or as the transformation semigroup of its minimal automaton, which is just (1)].</p>
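<p>The matrix description in (2) makes the claims in the first paragraph mechanical to check; a Python sketch (2×2 integer matrices as tuples, with multiplication written out by hand):</p>

```python
def mul(X, Y):
    """Multiply two 2x2 matrices given as nested tuples."""
    return tuple(
        tuple(sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

a = ((1, 0), (0, 0))
b = ((0, 1), (0, 0))
zero = ((0, 0), (0, 0))
S = [a, b, zero]

# closure, and the defining relations a^2 = a, ab = b, everything else 0
assert all(mul(x, y) in S for x in S for y in S)
assert mul(a, a) == a and mul(a, b) == b
```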
|
2,291,310 | <p>I'm seeking an alternative proof of this result:</p>
<blockquote>
<p>Given $\triangle ABC$ with right angle at $A$. Point $I$ is the intersection of the three angle bisectors. (That is, $I$ is the incenter of $\triangle ABC$.) Prove that
$$|CI|^2=\frac12\left(\left(\;|BC|-|AB|\;\right)^2+|AC|^2\right)$$</p>
</blockquote>
<p><strong>My Proof.</strong> Draw $ID \perp AB$, $IE\perp BC$, and $IF\perp AC$.</p>
<p>We have $|ID|=|IE|=|IF|=x$. Since $\triangle ADI$ is a right isosceles triangle, we also have that $|AD|=|ID|=x$. Altogether, we have: $$|ID|=|IF|=|IE|=|AD|=|AF|=x$$ </p>
<p>$\triangle BDI\cong\triangle BEI \Rightarrow |BD|=|BE|=y$. And $|CE|=|CF|=z$.</p>
<p>We have:<br>
$$|CI|^2=|CE|^2+|IE|^2=x^2+z^2 \tag{1}$$</p>
<p>And
$$\begin{align}\frac12\left(\left(|BC|-|AB|\right)^2+|AC|^2\right) &=\frac12\left(\left(\;\left(y+z\right)-\left(x+y\right)\;\right)^2+\left(x+z\right)^2\right) \\[4pt]
&=\frac12\left(\left(x-z\right)^2+\left(x+z\right)^2\right) \\[4pt]
&=\frac22\left(x^2+z^2\right) \\[4pt]
&=x^2+z^2
\tag{2}\end{align}$$</p>
<p>From $(1);(2)$ we are done. $\square$</p>
| Lazy Lee | 430,040 | <p>Let $AB = c, AC = b, BC = a$, and let $IT\perp AC $ at $T$. Then, $CI$ can be written as $$\begin{split}CI^2 &= CT^2 + IT^2 \\&= \left(\frac{a+b-c}{2}\right)^2+\left(\frac{b+c-a}{2}\right)^2\\&= \frac{2a^2+2b^2+2c^2-4ac}{4}\\&=\frac{(a-c)^2+b^2}{2} \\&=\frac{(BC-AB)^2+AC^2}{2}\end{split}$$</p>
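<p>A coordinate spot-check of the same formula (my own setup, not from the answer: right angle $A$ at the origin with the legs along the axes, so the incenter is $I=(r,r)$ with inradius $r=(b+c-a)/2$):</p>

```python
from math import hypot

def both_sides(c, b):
    """CI^2 from coordinates vs. the closed formula, for legs AB = c, AC = b."""
    a = hypot(b, c)                  # hypotenuse BC
    r = (b + c - a) / 2              # inradius of a right triangle; I = (r, r)
    ci_sq = r ** 2 + (b - r) ** 2    # C = (0, b)
    formula = ((a - c) ** 2 + b ** 2) / 2
    return ci_sq, formula

for legs in [(3, 4), (5, 12), (8, 15), (1, 1), (2.5, 7.1)]:
    lhs, rhs = both_sides(*legs)
    assert abs(lhs - rhs) < 1e-9
```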
|
19,285 | <p>Is anyone aware of a Mathematica use/implementation of the <a href="http://en.wikipedia.org/wiki/Random_forest">Random Forest</a> algorithm?</p>
| image_doctor | 776 | <p>As of version 10.0 there is a built-in implementation of Random Forests which is accessible through the <em>Classify</em> function.</p>
<pre><code>trainingset = {1 -> "A", 2 -> "A", 3.5 -> "B", 4 -> "B"};
classifier = Classify[trainingset, Method->"RandomForest"];
</code></pre>
|
3,053,386 | <p>This might be a very basic question for some of you. Indeed in <span class="math-container">$\textbf Z$</span>, it's very easy. For example, <span class="math-container">$\textbf Z / \langle 2 \rangle$</span> consists of <span class="math-container">$\langle 2 \rangle$</span> and <span class="math-container">$\langle 2 \rangle + 1$</span>. Obviously just two elements. In general, if <span class="math-container">$p$</span> is a positive prime in <span class="math-container">$\textbf Z$</span>, then <span class="math-container">$\textbf Z / \langle p \rangle$</span> consists of one principal ideal and <span class="math-container">$p - 1$</span> cosets.</p>
<p>I guess it's also easy in imaginary quadratic integer rings, since we can visualize them in the complex plane, e.g., <span class="math-container">$\textbf Z[i] / \langle 1 + i \rangle$</span> consists of <span class="math-container">$\langle 1 + i \rangle$</span>, <span class="math-container">$\langle 1 + i \rangle + 1$</span> and <span class="math-container">$\langle 1 + i \rangle + i$</span>... wait a minute, three elements? I'm not sure that's quite right.</p>
<p>And I really have no idea how to go about, say, <span class="math-container">$\textbf Z[\sqrt{14}] / \langle 4 + \sqrt{14} \rangle$</span>. To say nothing of something like <span class="math-container">$\textbf Z[\sqrt{10}] / \langle 2, \sqrt{10} \rangle$</span>.</p>
<p>Given a ring <span class="math-container">$R$</span> of algebraic integers of degree <span class="math-container">$2$</span>, and a prime ideal <span class="math-container">$\mathfrak P$</span>, how do you determine how many elements there are in <span class="math-container">$R / \mathfrak P$</span>?</p>
| Kenny Lau | 328,173 | <p><span class="math-container">$$\Bbb Z[\sqrt{14}]/(4+\sqrt{14}) = \Bbb Z[X]/(X^2-14,X+4) = \Bbb Z[X]/(2,X+4) = \Bbb Z[X]/(2,X) = \Bbb F_2$$</span></p>
<p><span class="math-container">$$\Bbb Z[\sqrt{10}]/(2,\sqrt{10}) = \Bbb Z[X]/(X^2-10, 2, X) = \Bbb Z[X]/(2, X) = \Bbb F_2$$</span></p>
<p>The trick is to move the situation to computing ideals in <span class="math-container">$\Bbb Z[X]$</span>.</p>
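<p>One can also see the first answer "by hand": in the quotient, $\sqrt{14}\equiv -4$ and $(4+\sqrt{14})(4-\sqrt{14})=2\equiv 0$, so $a+b\sqrt{14}\mapsto (a-4b)\bmod 2$ implements the quotient map. A Python sketch of this reduction:</p>

```python
def residue(a, b):
    """Class of a + b*sqrt(14) in Z[sqrt(14)]/(4+sqrt(14)): sqrt(14) = -4, 2 = 0."""
    return (a - 4 * b) % 2

# only two classes appear, matching the answer F_2
assert {residue(a, b) for a in range(-8, 9) for b in range(-8, 9)} == {0, 1}
```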
|
540,227 | <p>I was trying to understand the following theorem:</p>
<p><strong>Let $X,Y$ be two path connected spaces which are of the same homotopy type.Then their fundamental groups are isomorphic.</strong></p>
<p><strong>Proof:</strong> The fundamental groups of both the spaces $X$ and $Y$ are independent on the base points since they are path connected. Since $X$ and $Y$ are of the same homotopy type, there exist continuous maps $f:X\to Y $ and $g:Y\to X$ such that $g\circ f\sim I_X$ by a homotopy, say, $F$ and $f\circ g \sim I_Y$ by some homotopy, say $G$. Let $x_0\in X$ be a base point. Let<br>
$$f_\#:\pi_1(X,x_0)\to \pi_1(Y,f(x_0))$$ and $$g_\#:\pi_1(Y,f(x_0))\to \pi_1(X,g(f(x_0)))$$ be the induced homomorphisms. Let $\sigma$ be the path joining $x_0$ to $gf(x_0)$ defined by the homotopy $F$.</p>
<p>After that the author says that $\sigma_\#$ is an isomorphism. Obviously $\sigma_\#$ is a homomorphism, but I could not understand how it becomes an isomorphism.</p>
<p>Can someone explain, please? Thanks for your kind help and time.</p>
| Stefan Hamcke | 41,672 | <p>I suppose $σ$ is a path from $x_0$ to a point $x_1$ and the map $\sigma_\#$ is defined as:
$$σ_\#:\pi_1(X,x_0)\to \pi_1(X,x_1)\\σ_\#([p])=[σ\cdot p\cdot\barσ]$$ where $σ⋅p$ is the path which first traverses the loop $p$ and then $σ$, and $\bar σ$ is the reversed path.</p>
<p>This is a bijection because it has the inverse $\barσ_♯$ which we see by calculating $$\barσ_\#σ_\#([p])=[\barσ⋅σ⋅p⋅\barσ⋅σ]=[\barσ⋅σ]⋅[p]⋅[\barσ⋅σ]=[p]$$ as the homotopy class of $[\barσ⋅σ]$ is trivial. Similarly, we get $σ_\#\barσ_\#([p])=[p]$</p>
|
1,494,409 | <blockquote>
<p>Let <span class="math-container">$\Bbb R^+$</span> denote the positive real numbers. Suppose <span class="math-container">$\phi:\Bbb R^+\to\Bbb R^+$</span> is an automorphism of the group <span class="math-container">$\Bbb R^+$</span> under multiplication with <span class="math-container">$\phi(4)=7$</span>.</p>
<p>Find <span class="math-container">$\phi(2)$</span>, <span class="math-container">$\phi(8)$</span>, and <span class="math-container">$\phi(1/4).$</span></p>
</blockquote>
<p>I've done this problem in groups of integers modulo <span class="math-container">$n$</span> but I am not sure how to start here. Please explain what <span class="math-container">$\phi(4)=7$</span> means.</p>
<p>Thank you!</p>
| Travis Willse | 155,629 | <p><strong>Hint</strong> If $a \in \Bbb R^+$ satisfies $a^2 = 4$, then because $\phi$ is a homomorphism, we have
$$\phi(a)^2 = \phi(a^2) = \phi(4) .$$</p>
<p>It's interesting to note that we cannot, however, conclude the value of, e.g., $\phi(3)$ from the given information.</p>
<blockquote class="spoiler">
<p><strong>Additional hint</strong> To compute $\phi\left(\frac{1}{4}\right)$ it will be useful to recall that any homomorphism maps the identity to the identity.</p>
</blockquote>
|
37,804 | <p>I'm trying to gain some intuition for the usefullness of the spectral theory for bounded self adjoint operators. I work in PDE and any interesting applications/examples I've ever encountered are concerning <em>compact operators</em> and <em>unbounded operators</em>. Here I have the examples of $-\Delta$, the laplacian and $(-\Delta)^{-1}$, the latter being compact.</p>
<p>The most common example I see of a bounded non-compact operator is the shift map on $l_2$ given by $T(u_1,u_2,\cdots) = (u_2,u_3,\cdots)$. While this nicely illustrates the different kind of spectra, I don't see why this is useful or where this may come up in practice.<em>Why does knowing things about the spectrum of the shift operator help you in any practical way?</em></p>
<p>Secondly, concerning the spectral theorem for bounded, <em>self adjoint</em> operators. All useful applications I have encountered concern <em>compact or unbounded operators</em>. Is there an example arising in PDE (preferably) or some other applied field where knowing the spectral representation for a bounded, non-compact operator is useful? I have yet to encounter one that didn't just reduce to the compact case. Any insight/suggestions are appreciated.</p>
<p>Best,
dorian</p>
| Newbie | 9,032 | <p>Maybe the corresponding functional calculus (FC) addresses your question about why it is interesting to know the spectrum of an operator? Many applications use it, see for example Pedersen's book on Functional Analysis for the various types (Holomorphic, Continuous, Measurable FC). </p>
<p>For an intuitive approach -and keeping in mind that you are familiar with compact operators- you might like to learn some more about generalised eigenvectors.</p>
|
2,201,085 | <p>Let
$$x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}\ge 0$$ such that
$$x_{1}+x_{2}+x_{3}+x_{4}+x_{5}+x_{6}=1$$
Find the maximum of the value of
$$\sum_{i=1}^{6}x_{i}\;x_{i+1}\;x_{i+2}\;x_{i+3}$$
where
$$x_{7}=x_{1},\quad x_{8}=x_{2},\quad x_{9}=x_{3}\,.$$</p>
| Yuri Negometyanov | 297,350 | <p>Let $x_1=a,\ x_2=b,\ x_3=c,\ x_4=d,\ x_5=e,\ x_6=f.$ </p>
<p>The objective function is
$$Z(a,b,c,d,e,f) = abcd + bcde + cdef + defa + efab + fabc.$$</p>
<p>Let us maximize $Z(a,b,c,d,e,f)$ using the Lagrange multipliers method for the function
$$F(a,b,c,d,e,f,\lambda) = abcd + bcde + cdef + defa + efab + fabc + \lambda(1-a-b-c-d-e-f).$$
Setting the partial derivatives $F'_a,\ F'_b,\ F'_c,\ F'_d,\ F'_e,\ F'_f\ $ to zero, one obtains the system:
$$\begin{cases}
bcd+bcf+bef+def = \lambda\\
acd+acf+aef+cde = \lambda\\
abd+abf+bde+def = \lambda\\
abc+aef+bce+cef = \lambda\\
abf+adf+bcd+cdf = \lambda\\
abc+abe+ade+cde = \lambda\\
a+b+c+d+e+f = 1.
\end{cases}$$</p>
<p>Note that:</p>
<ol>
<li>The first and third equations contain the common term $def.$</li>
<li>Second and fourth - $aef.$</li>
<li>Third and fifth - $abf.$</li>
<li>Fourth and six - $abc.$</li>
<li>Fifth and first - $bcd.$</li>
<li>Sixth and second - $cde.$</li>
</ol>
<p>So
$$\begin{cases}
bcd+bcf+bef = abd+abf+bde\qquad(1)\\
acd+acf+cde = abc+bce+cef\qquad(2)\\
abd+bde+def = adf+bcd+cdf\qquad(3)\\
aef+bce+cef = abe+ade+cde\qquad(4)\\
abf+adf+cdf = bcf+bef+def\qquad(5)\\
abc+abe+ade = acd+acf+aef\qquad(6)\\
a+b+c+d+e+f = 1.\qquad\qquad\qquad(7)
\end{cases}$$</p>
<p>It is easy to see that:</p>
<ol>
<li>Both $Z(a,b,c,d,e,f)$ and, accordingly, the system $(1)$–$(7)$ are invariant under cyclic permutations of $(a,b,c,d,e,f)$, so WLOG we may consider each case up to such a permutation.</li>
<li>Setting any pair of unknowns to zero makes the objective function vanish unless the two unknowns are neighbours in the cyclic list of unknowns.</li>
</ol>
<p>Let us consider zero cases in detail.</p>
<p><strong>"Neighbouring zeros" case $b=c=0$ (and cyclic permutations).</strong></p>
<p>Equations $(1)$ and $(2)$ reduce to $0=0;$ the remaining equations simplify to
$$a=e,\quad d=f,\quad a+d = \frac12,$$
with the objective function
$$Z(a, 0, 0, d, a, d) = a^2\left(\frac12-a\right)^2\quad\text{for } a\in\left(0,\dfrac12\right).$$
That gives
$$Z_{max} = \frac1{256}\text{ for } a=\frac14.$$</p>
<p><strong>"Single zero" case $b=0, a>0, c>0.$ (and cyclic permutations).</strong></p>
<p>The condition $Z>0$ then requires $d>0,\ e>0,\ f>0.$</p>
<p>Equations $(3)$, $(4)$ and $(6)$ form the system
$$\begin{cases}
def = adf+cdf\\
aef+cef = ade+cde\\
ade = acd+acf+aef
\end{cases}\rightarrow
\begin{cases}
e = a+c\\
(a+c)f = (a+c)d\\
de = cd+cf+ef,
\end{cases}$$</p>
<p>The second equation gives $f=d$ (since $a+c>0$), hence $ef=de$, and the third equation then yields $c(d+f)=0.$</p>
<p>This contradicts the conditions $c>0,\ d>0,\ f>0.$</p>
<p><strong>Case $abcdef>0.$</strong></p>
<p>This case allows reduction of $(1-6)$:</p>
<p>$$\begin{cases}
cd+cf+ef = ad+af+de\qquad(1a)\\
ad+af+de = ab+be+ef\qquad(2a)\\
ab+be+ef = af+bc+cf\qquad(3a)\\
af+bc+cf = ab+ad+cd\qquad(4a)\\
ab+ad+cd = bc+be+de\qquad(5a)\\
bc+be+de = cd+cf+ef\qquad(6a)\\
a+b+c+d+e+f = 1.\qquad\quad(7)
\end{cases}$$</p>
<p>Adding the pairs $(1a)+(2a),\ (2a)+(3a),\ (3a)+(4a),\ (4a)+(5a),\ (5a)+(6a)$ leads to the system</p>
<p>$$\begin{cases}
cd+cf = ab+be\qquad\qquad(1b)\\
ad+de = bc+cf\qquad\qquad(2b)\\
be+ef = ad+cd\qquad\qquad(3b)\\
af+cf = be+de\qquad\qquad(4b)\\
ab+ad = cf+ef\qquad\qquad(5b)\\
a+b+c+d+e+f = 1.\quad(7)
\end{cases}$$</p>
<p>Now the weighted sums $(1b)\cdot d+(2b)\cdot b,$ $(2b)\cdot e+(3b)\cdot c,$ $(3b)\cdot f+(4b)\cdot d,$ $(4b)\cdot a+(5b)\cdot e$ form the system</p>
<p>$$\begin{cases}
cd^2+cdf = b^2c+bcf\\
ade+de^2 = acd+c^2d\\
bef+ef^2 = bde+d^2e\\
a^2f+acf = cef+e^2f\\
a+b+c+d+e+f = 1
\end{cases}\rightarrow
\begin{cases}
(d-b)\cdot(b+d+f) = 0\\
(e-c)\cdot(c+e+a) = 0\\
(f-d)\cdot(f+d+b) = 0\\
(a-e)\cdot(a+e+c) = 0\\
a+b+c+d+e+f = 1
\end{cases}$$</p>
<p>$$f=d=b,\quad e=c=a,\quad a+b = \dfrac13.$$
The objective function is (each of the six terms being $a^2b^2$ with $b=\dfrac13-a$)
$$Z(a, b, a, b, a, b) = 6a^2\left(\dfrac13 - a\right)^2$$
for $a\in\left(0,\dfrac13\right)$.</p>
<p>That gives
$$Z_{max}(a) = \frac1{216}\text{ for } a=\frac16,$$
and finally
$$\boxed{\max Z(\vec x) = \dfrac 1{216}\quad\text{ for all }x_i=\dfrac16.}$$</p>
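<p>As a quick numerical sanity check of the result above, here is a small Python sketch (the helper names <code>Z</code> and <code>compositions</code> are mine): it evaluates the objective at the two candidate points and scans a coarse grid over the simplex.</p>

```python
def Z(x):
    # objective: sum over the six products of four cyclically consecutive variables
    return sum(x[i % 6] * x[(i + 1) % 6] * x[(i + 2) % 6] * x[(i + 3) % 6]
               for i in range(6))

def compositions(total, parts):
    # all ordered tuples of `parts` nonnegative ints summing to `total`
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

# claimed maximiser: all x_i = 1/6 gives 6*(1/6)^4 = 1/216
assert abs(Z([1/6] * 6) - 1/216) < 1e-15
# the "neighbouring zeros" candidate gives the smaller value 1/256
assert abs(Z([1/4, 0, 0, 1/4, 1/4, 1/4]) - 1/256) < 1e-15
# a grid over the simplex (steps of 1/12) never beats 1/216
best = max(Z([c / 12 for c in comp]) for comp in compositions(12, 6))
assert abs(best - 1/216) < 1e-12
```

<p>The grid search is of course not a proof, but it agrees with the boxed answer: nothing on the grid exceeds $1/216$.</p>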
|
203,114 | <p>If we have a pair of coordinates <span class="math-container">$(x,y)$</span>, let's say</p>
<pre><code>pt = {1,2}
</code></pre>
<p>then we can easily rotate the coordinates, by an angle <span class="math-container">$\theta$</span>, by using the rotation matrix</p>
<pre><code>R = {{Cos[\[Theta]], -Sin[\[Theta]]}, {Sin[\[Theta]], Cos[\[Theta]]}};
</code></pre>
<p>as</p>
<pre><code>pt2 = pt.R;
</code></pre>
<p>Now let's assume that we have a collection of points in the form</p>
<pre><code>data = {{1}, {-0.3, 1}, {2, -0.2}, {2}, {-2, 1}, {4,-2}, {3}, {1, 1}, {-0.2, -0.3}}
</code></pre>
<p>where the integers 1, 2 and 3 count the subsets of the list <code>data</code>. </p>
<p>My question: how can we rotate the <span class="math-container">$(x,y)$</span> coordinates of the list <code>data</code> by an angle, let's say <span class="math-container">$2\pi/3$</span>, and create a new list, <code>data2</code>, of the form</p>
<pre><code>data2 = {{1}, {rotated x, rotated y}, {rotated x, rotated y}, {2}, {rotated x, rotated y}, {rotated x, rotated y}, ...}
</code></pre>
<p>Any suggestions?</p>
| Alx | 35,574 | <p>This should give what you want:</p>
<pre><code>data2=data/.{x_?NumericQ,y_?NumericQ}:>RotationMatrix[\[Theta]].{x,y}
</code></pre>
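<p>For readers outside Mathematica, the same idea (rotate only the two-element numeric pairs and pass the one-element group markers through unchanged) can be sketched in plain Python; the function name <code>rotate_pairs</code> is mine, not part of any library:</p>

```python
import math

def rotate_pairs(data, theta):
    """Rotate every two-element [x, y] pair in a mixed list by angle theta
    (counterclockwise); leave one-element group markers like [1] unchanged."""
    c, s = math.cos(theta), math.sin(theta)
    out = []
    for item in data:
        if len(item) == 2:                       # coordinate pair -> rotate
            x, y = item
            out.append([c * x - s * y, s * x + c * y])
        else:                                    # group marker -> keep as-is
            out.append(list(item))
    return out

rotated = rotate_pairs([[1], [1.0, 0.0], [2], [0.0, 1.0]], math.pi / 2)
assert rotated[0] == [1] and rotated[2] == [2]
assert abs(rotated[1][0]) < 1e-12 and abs(rotated[1][1] - 1.0) < 1e-12
assert abs(rotated[3][0] + 1.0) < 1e-12 and abs(rotated[3][1]) < 1e-12
```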
|
198,555 | <p>I am having difficulties understanding how to perform set operations like union or intersection on relations. </p>
<p>In a question, I am asked to prove/disprove: </p>
<ul>
<li>If R & S are symmetric, is $R \cap S$ symmetric? </li>
<li>If R & S are transitive, is $R \cup S$ transitive?</li>
</ul>
<p>How do I do that? First, what does $R \cap S$ or $R \cup S$ look like? How can I write a formal proof/disproof for it? I am very bad at these proofs ... </p>
| Austin Mohr | 11,245 | <p>Other answers have already discussed symmetry.</p>
<p>An example of transitive $R$ and $S$ with non-transitive $R \cup S$ can be obtained by taking
$$
R = \{(1,2)\}
$$
and
$$
S = \{(2,3)\}.
$$
(Each of these relations is vacuously transitive, since neither contains a chain of the form $(a,b), (b,c)$.)
In $R \cup S$, we have $(1,2)$ and $(2,3)$, but not $(1,3)$ (that is, $1$ is related to $2$ and $2$ is related to $3$, but $1$ is not related to $3$).</p>
<p>The trick is to set up $R$ and $S$ in such a way that $R \cup S$ requires a new ordered pair (i.e. relationship) that wasn't present in either $R$ or $S$. In this example, that is the ordered pair $(1,3)$.</p>
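<p>As a quick check, transitivity can be tested programmatically. Here is a small Python sketch (the helper <code>is_transitive</code> is mine), using the vacuously transitive relations $R=\{(1,2)\}$ and $S=\{(2,3)\}$:</p>

```python
def is_transitive(rel):
    # rel is a set of ordered pairs; transitivity: (a,b),(b,c) in rel => (a,c) in rel
    return all((a, d) in rel
               for (a, b) in rel
               for (c, d) in rel
               if b == c)

R = {(1, 2)}          # vacuously transitive: no chain (a,b),(b,c) exists
S = {(2, 3)}          # likewise vacuously transitive
assert is_transitive(R) and is_transitive(S)
assert not is_transitive(R | S)   # (1,2),(2,3) present but (1,3) missing
```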
|
332,380 | <p>The following is an excerpt from Sharpe's <em>Differential Geometry - Cartan's Generalization of Klein's Erlangen Program</em>.</p>
<blockquote>
<p>Now we come to the question of higher derivatives. As usual in modern
differential geometry, we shall be concerned only with the skew-symmetric
part of the higher derivatives. In essence, what we shall be doing is taking
the partial derivatives with respect to the base (i.e., manifold) variables
and skew-symmetrizing the result, thus forgetting about the part of the
higher derivatives that vanish under this procedure. However, this will not
be made explicit in our treatment. The part of the higher derivative that
disappears has not been studied much in differential geometry since Elie
Cartan showed how useful it is to consider only the skew-symmetric part,
that is, the exterior derivative. The old masters did use the symmetric part...</p>
</blockquote>
<p>"Partial derivatives with respect to the base" must be the covariant derivative of the connection. If I correctly understand what's written in <a href="https://math.stackexchange.com/a/1980156/223002">this answer</a>, then we have for any <strong>torsion free</strong> connection on a manifold the equality <span class="math-container">$ \mathrm dw=\operatorname{Alt}(\nabla w)$</span>.</p>
<p>This already seems rather remarkable since the exterior derivative is intrinsic.</p>
<p><strong>Question 1.</strong> What is the geometric meaning of the above fact for torsion-free connections? How does taking anti-symmetrization "forget" the structure of a horizontal bundle (connection)?</p>
<p><strong>Question 2.</strong> Still for a torsion-free connection, what is the geometric meaning of "the part of the higher derivative that vanish under this procedure" of anti-symmetrization (i.e the symmetric part)?</p>
<p><strong>Question 3.</strong> What's the conceptual picture for an arbitrary connection on a manifold (possibly with torsion)? Is the anti-symmetrization still the exterior derivative? What is the symmetric part?</p>
<p><em>Remark on Q3.</em> I vaguely recall being told that torsion measures the failure of the fundamental theorem of calculus, and also that in torsion-free connections the parallel transport commutator is given by the Lie bracket (again, the latter is intrinsic).</p>
| Jesper Göransson | 112,784 | <p>Here are some calculations that may help. (I'll leave it to you to decide if the arguments are valid or not.) We'll work in a two-dimensional manifold with local coordinates <span class="math-container">$x$</span> and <span class="math-container">$y$</span>. Let <span class="math-container">$d$</span> be the covariant derivative at a point in some direction and let <span class="math-container">$dx$</span> and <span class="math-container">$dy$</span> be the components of the directional vector.</p>
<p>We apply <span class="math-container">$d$</span> to a scalar function <span class="math-container">$f(x,y)$</span> and get
<span class="math-container">$$
df = f_xdx+f_ydy
$$</span>
which is the usual differential of <span class="math-container">$f$</span>. Apply <span class="math-container">$d$</span> a second time and we get
<span class="math-container">$$
d^2f = f_{xx}dx^2+2f_{xy}dxdy+f_{yy}dy^2+f_xd^2x+f_yd^2y.
$$</span>
This is the second differential of <span class="math-container">$f$</span> and is a form on the second order tangent bundle, but since <span class="math-container">$d$</span> is to be interpreted as the covariant derivative, the connection gives that the second order differentials are expressed as quadratic forms on the tangent bundle:</p>
<p><span class="math-container">\begin{align*}
d^2x &= -(\Gamma^1_{11}dx^2+(\Gamma^1_{12}+\Gamma^1_{21})dxdy+\Gamma^1_{22}dy^2)\\
d^2y &= -(\Gamma^2_{11}dx^2+(\Gamma^2_{12}+\Gamma^2_{21})dxdy+\Gamma^2_{22}dy^2)
\end{align*}</span>
and we have that
<span class="math-container">\begin{align*}
d^2f &= (f_{xx}-\Gamma^1_{11}f_x-\Gamma^2_{11}f_y)dx^2\\
&+(2f_{xy}-(\Gamma^1_{12}+\Gamma^1_{21})f_x-(\Gamma^2_{12}+\Gamma^2_{21})f_y)dxdy\\
&+(f_{yy}-\Gamma^1_{22}f_x-\Gamma^2_{22}f_ydy^2.
\end{align*}</span></p>
<p>So the symmetric part of the connection is what comes in to play when we calculate higher order forms, the anti-symmetric part does not affect these calculations.</p>
<p>To calculate an anti-symmetric derivative we let <span class="math-container">$\partial$</span> denote the covariant derivative in some other direction and let <span class="math-container">$\partial x$</span> and <span class="math-container">$\partial y$</span> be the components of this other direction vector. This time we'll investigate the derivative of a one-form so let <span class="math-container">$\omega=a(x,y)dx+b(x,y)dy$</span> be some one-form. This one-form can be evaluated on either of the two direction vectors so we let
<span class="math-container">\begin{align*}
\omega(d) &= adx+bdy\\
\omega(\partial) &= a\partial x+b\partial y
\end{align*}</span>
mean that the one-form is evaluated on <span class="math-container">$(dx,dy)$</span> and <span class="math-container">$(\partial x, \partial y)$</span> respectively.</p>
<p>We now calculate the anti-symmetric derivative of the one-form
<span class="math-container">\begin{align*}
\partial\omega(d)-d\omega(\partial) &= a_x\partial xdx + a_y\partial ydx +b_x\partial xdy + b_y\partial ydy + a\partial dx + b\partial dy\\
&- (a_xdx\partial x + a_ydy\partial x +b_xdx\partial y + b_ydy\partial y + ad\partial x + bd\partial y)\\
&= a_y(dx\partial y-\partial xdy) + b_x(\partial xdy-dx\partial y) \\
&+ a(\partial d-d\partial)x + b(\partial d-d\partial)y
\end{align*}</span></p>
<p>Our connection defines that
<span class="math-container">\begin{align*}
\partial dx &= -(\Gamma^1_{11}\partial xdx+\Gamma^1_{12}\partial xdy+\Gamma^1_{21}dx\partial y+\Gamma^1_{22}\partial ydy)\\
\partial dy &= -(\Gamma^2_{11}\partial xdx+\Gamma^2_{12}\partial xdy+\Gamma^2_{21}dx\partial y+\Gamma^2_{22}\partial ydy)
\end{align*}</span>
and we have that
<span class="math-container">\begin{align*}
(\partial d-d\partial)x &= -(\Gamma^1_{12}-\Gamma^1_{21})(\partial xdy-dx\partial y)\\
(\partial d-d\partial)y &= -(\Gamma^2_{12}-\Gamma^2_{21})(\partial xdy-dx\partial y)
\end{align*}</span>
We now replace <span class="math-container">$\partial xdy-\partial ydx$</span> with standard notation <span class="math-container">$dx\wedge dy$</span> and the anti-symmetric derivative of the one-form is then expressed as
<span class="math-container">\begin{align*}
\partial\omega(d)-d\omega(\partial) &= (b_x-a_y-a(\Gamma^1_{12}-\Gamma^1_{21})-b(\Gamma^2_{12}-\Gamma^2_{21}))\,dx\wedge dy
\end{align*}</span>
The anti-symmetric derivative is also called the covariant exterior derivative and for symmetric connections it coincides with the usual exterior derivative on linear forms.</p>
|
332,380 | <p>The following is an excerpt from Sharpe's <em>Differential Geometry - Cartan's Generalization of Klein's Erlangen Program</em>.</p>
<blockquote>
<p>Now we come to the question of higher derivatives. As usual in modern
differential geometry, we shall be concerned only with the skew-symmetric
part of the higher derivatives. In essence, what we shall be doing is taking
the partial derivatives with respect to the base (i.e., manifold) variables
and skew-symmetrizing the result, thus forgetting about the part of the
higher derivatives that vanish under this procedure. However, this will not
be made explicit in our treatment. The part of the higher derivative that
disappears has not been studied much in differential geometry since Elie
Cartan showed how useful it is to consider only the skew-symmetric part,
that is, the exterior derivative. The old masters did use the symmetric part...</p>
</blockquote>
<p>"Partial derivatives with respect to the base" must be the covariant derivative of the connection. If I correctly understand what's written in <a href="https://math.stackexchange.com/a/1980156/223002">this answer</a>, then we have for any <strong>torsion free</strong> connection on a manifold the equality <span class="math-container">$ \mathrm dw=\operatorname{Alt}(\nabla w)$</span>.</p>
<p>This already seems rather remarkable since the exterior derivative is intrinsic.</p>
<p><strong>Question 1.</strong> What is the geometric meaning of the above fact for torsion-free connections? How does taking anti-symmetrization "forget" the structure of a horizontal bundle (connection)?</p>
<p><strong>Question 2.</strong> Still for a torsion-free connection, what is the geometric meaning of "the part of the higher derivative that vanish under this procedure" of anti-symmetrization (i.e the symmetric part)?</p>
<p><strong>Question 3.</strong> What's the conceptual picture for an arbitrary connection on a manifold (possibly with torsion)? Is the anti-symmetrization still the exterior derivative? What is the symmetric part?</p>
<p><em>Remark on Q3.</em> I vaguely recall being told that torsion measures the failure of the fundamental theorem of calculus, and also that in torsion-free connections the parallel transport commutator is given by the Lie bracket (again, the latter is intrinsic).</p>
| Mozibur Ullah | 35,706 | <p>Although higher derivatives are best thought through with jet bundles where we actually differentiate a bundle and not just a manifold, there is a useful description of second derivatives using secondary tangent bundles, this is just iterating the tangent bundle, ie <span class="math-container">$TTM$</span>. This is described in one of the early chapters in Kolár, Michor & Slováks <a href="https://www.mat.univie.ac.at/%7Emichor/kmsbookh.pdf" rel="nofollow noreferrer">book</a> on natural bundles. (They also describe a generalisation of this in one of the later chapters which they call section forms - but they don't do anything with it).</p>
<blockquote>
<p>Question 3. What's the conceptual picture for an arbitrary connection on a manifold (possibly with torsion)? Is the anti-symmetrization still the exterior derivative? What is the symmetric part?</p>
</blockquote>
<p>On the last part of this question you may be interested in this paper, <em>A Description of the Derivations of the Algebra of Symmetric Tensors</em>, by A. Heydari, N. Boroojerdian, E. Peyghan and published in <em>Archivum Mathematicum</em>. They introduce symmetric forms, bracket, Lie derivative and differential. Whilst the symmetric differential is not nilpotent, ie <span class="math-container">$(d^s)^2$</span> does not vanish, they show</p>
<blockquote>
<p>proposition 5:</p>
<p>Let <span class="math-container">$(M, g)$</span> be a Riemmanian manifold with Levi-Civita connection
<span class="math-container">$∇$</span>. The 1-form <span class="math-container">$ω$</span> is Killing if and only if <span class="math-container">$d^sω = 0$</span>.</p>
</blockquote>
<p>and</p>
<blockquote>
<p>proposition 7:</p>
<p>Every derivation of degree 1 on symmetric forms, whose value for a function is the differential of the function, is of the form of a symmetric differential of a connection which is also unique.</p>
</blockquote>
|
132,591 | <p>Let $f(x)$ be a positive function on $[0,\infty)$ such that $f(x) \leq 100 x^2$. I want to bound $f(x) - f(x-1)$ from above. Of course, we have $$f(x) - f(x-1) \leq f(x) \leq 100 x^2.$$ This is not good for me though. I need a bound which is linear (or at worst linear-times-root) in $x$.</p>
<p>Is there an inequality of the form $f(x) - f(x-1) \leq f^\prime (x)=200 x$?</p>
| Michael Hardy | 11,667 | <p>Suppose $f(x) = 100x^2\sin^2(\pi x/2)$. Then $f(x) = 0$ when $x$ is an even integer and $f(x) = 100x^2$ when $x$ is an odd integer. So for odd $x$ we get $f(x)-f(x-1) = 100x^2$, which rules out any upper bound that is linear (or linear-times-root) in $x$.</p>
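<p>A short Python sketch (mine) confirms the behaviour of this counterexample numerically:</p>

```python
import math

def f(x):
    return 100 * x**2 * math.sin(math.pi * x / 2)**2

# f satisfies the hypothesis f(x) <= 100 x^2 ...
for x in (0.3, 1.0, 2.5, 7.0):
    assert f(x) <= 100 * x**2 + 1e-9

# ... yet at odd integers the one-step increment is quadratic in x, not linear:
for x in (3, 5, 101):
    assert abs(f(x) - f(x - 1) - 100 * x**2) < 1e-6
```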
|
548,470 | <p>Prove $$(x+1)e^x = \sum_{k=0}^{\infty}\frac{(k+1)x^k}{k!}$$ using Taylor Series.</p>
<p>I can see how the $$\sum_{k=0}^{\infty}\frac{x^k}{k!}$$ plops out, but I don't understand how $(x+1)$ can become $(k+1)$.</p>
| André Nicolas | 6,312 | <p>Note that
$$xe^x=\sum_{k=0}^\infty \frac{x^{k+1}}{k!}.$$
Differentiate both sides with respect to $x$. </p>
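<p>A quick numerical check of the resulting identity $(x+1)e^x=\sum_{k\ge 0}(k+1)x^k/k!$, as a Python sketch (the helper name <code>partial_sum</code> is mine):</p>

```python
import math

def partial_sum(x, terms=40):
    # partial sum of sum_{k>=0} (k+1) x^k / k!
    return sum((k + 1) * x**k / math.factorial(k) for k in range(terms))

for x in (-2.0, 0.5, 3.0):
    assert abs(partial_sum(x) - (x + 1) * math.exp(x)) < 1e-9
```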
|
96,377 | <p>I have a polynomial with the coefficients of {a1, b1, b2}</p>
<pre><code>x = 1/8 (a1^4 E^(4 I τ ω) - 2 a1^2 E^( 2 I τ ω) (b2 Sqrt[1 - t] + b1 Sqrt[t])^2 + (b2 Sqrt[ 1 - t] + b1 Sqrt[t])^4);
</code></pre>
<p><a href="https://i.stack.imgur.com/6W2Ob.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6W2Ob.png" alt="enter image description here"></a></p>
<p>After expanding, it is :</p>
<pre><code>Expand[x]
b2^4/8 - 1/4 a1^2 b2^2 E^(2 I τ ω) + 1/8 a1^4 E^(4 I τ ω) + 1/2 b1 b2^3 Sqrt[1 - t] Sqrt[t] - 1/2 a1^2 b1 b2 E^(2 I τ ω) Sqrt[1 - t] Sqrt[t] + 3/4 b1^2 b2^2 t - (b2^4 t)/4 - 1/4 a1^2 b1^2 E^(2 I τ ω) t + 1/4 a1^2 b2^2 E^(2 I τ ω) t + 1/2 b1^3 b2 Sqrt[1 - t] t^(3/2) - 1/2 b1 b2^3 Sqrt[1 - t] t^(3/2) + ( b1^4 t^2)/8 - 3/4 b1^2 b2^2 t^2 + (b2^4 t^2)/8
</code></pre>
<p>There are two problems :</p>
<ol>
<li><p>The terms <code>1/2 b1 b2^3 Sqrt[1 - t] Sqrt[t]</code> and <code>-(1/2) b1 b2^3 Sqrt[1 - t] t^(3/2)</code> have the same coefficient <code>b1 b2^3</code>, but they do not combine automatically. I need all the terms with the same coefficient in {a1, b1, b2} to be combined, e.g.,
<pre><code>1/2 b1 b2^3 Sqrt[1 - t] Sqrt[t] - 1/2 b1 b2^3 Sqrt[1 - t] t^(3/2) = 1/2 b1 b2^3 (1 - t)^(3/2) Sqrt[t]
</code></pre></li>
<li><p>I need the polynomial to be reordered in descending order of the variables {a1, b1, b2}, as shown below; currently I do this by hand. I would like to know how to do it automatically in Mathematica.</p>
<pre><code>1/8 a1^4 E^(4 I τ ω) - 1/4 a1^2 b1^2 E^(2 I τ ω) t - 1/2 a1^2 b1 b2 E^(2 I τ ω) Sqrt[1 - t] Sqrt[t] - 1/4 a1^2 b2^2 E^(2 I τ ω) + 1/4 a1^2 b2^2 E^(2 I τ ω) t + (b1^4 t^2)/8 + 1/2 b1^3 b2 Sqrt[1 - t] t^(3/2) + 3/4 b1^2 b2^2 t - 3/4 b1^2 b2^2 t^2 + 1/2 b1 b2^3 Sqrt[1 - t] Sqrt[t] - 1/2 b1 b2^3 Sqrt[1 - t] t^(3/2) + b2^4/8 - (b2^4 t)/4 + (b2^4 t^2)/8
</code></pre></li>
</ol>
<p>After reordering in descending order and combining all the terms with the same coefficient, I obtain the final result of</p>
<pre><code>1/8 a1^4 E^(4 I τ ω) - 1/4 a1^2 b1^2 E^(2 I τ ω) t - 1/2 a1^2 b1 b2 E^(2 I τ ω) Sqrt[1 - t] Sqrt[t] + 1/4 a1^2 b2^2 E^(2 I τ ω) (-1 + t) + (b1^4 t^2)/8 + 1/2 b1^3 b2 Sqrt[1 - t] t^(3/2) - (3/4) b1^2 b2^2 (-1 + t) t + 1/2 b1 b2^3 (1 - t)^(3/2) Sqrt[t] + 1/8 b2^4 (-1 + t)^2
</code></pre>
<p>Or in figure format:</p>
<p><a href="https://i.stack.imgur.com/PIEnb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PIEnb.png" alt="enter image description here"></a></p>
<p>I can do this work manually in this example with only 14 terms, but in my next step I need to process a polynomial with more than 30 terms. Doing this reordering and combining work automatically is very necessary.</p>
<p>I have checked the previous questions and answers on stackexchange, but my problems can not be solved. Thank you very much if you can help me!</p>
| Alexei Boulbitch | 788 | <p>Try this:</p>
<pre><code>Collect[Expand[x], {a1^2 b2^2, b2^4, b1 b2^3, a1^2 b1 b2} ]
</code></pre>
<p>I do not give the result here, since it is much too long, but it looks like what you want, up to reordering. Concerning the reordering, it is more complex in Mma. There are a few approaches, but I do not recommend going for that without real need. Anyway, they are (most often) only decorative, since Mma usually reorders the expression according to internal simplicity. To override it one often needs to transform the expression into an inactive (held) form, preventing any operation with it. So, it is mainly useful for visualizing the result. </p>
<p>Have fun! </p>
|
4,250,292 | <p>I am stuck trying to solve the following integral:</p>
<p><span class="math-container">$$\int_{0}^{3}\frac{1}{3}e^{3t-2}dt$$</span></p>
<p>I understand that I can take out <span class="math-container">$\frac{1}{3}$</span> of the integral, and that the integral of <span class="math-container">$e^{3t-2}$</span> is <span class="math-container">$\frac{1}{3t-2}e^{3t-2}$</span></p>
<p>However, when I insert the boundaries, I always receive <span class="math-container">$\frac{1}{7}e^7+\frac{1}{2}e^{-2}$</span> which yields <span class="math-container">$\frac{1}{21}e^7+\frac{1}{6}e^{-2}$</span> when multiplied with <span class="math-container">$\frac{1}{3}$</span>.</p>
<p>In the solution book, however, it is said to be <span class="math-container">$\frac{1}{9}(e^7-e^{-2})$</span>.</p>
<p>Does somebody see where my mistake is and know how to get to the correct solution?</p>
| Parthib Ghosh | 921,782 | <p>We have,
<span class="math-container">$\displaystyle\mathsf{\int^{3}_{0}\,\dfrac{1}{3}\,e^{3t-2}\,dt}$</span>
<span class="math-container">$\mathtt{Put\,\,\,3t-2=u}$</span>
<span class="math-container">$\mathtt{\implies\,3\,dt=du}$</span>
<span class="math-container">$\mathtt{\implies\,dt=\dfrac{du}{3}}$</span>
So, changing limits:
<span class="math-container">$t=0\,\implies\,u=-2$</span>
<span class="math-container">$t=3\,\implies\,u=7$</span>
<span class="math-container">$\displaystyle\mathsf{=\int^{7}_{-2}\,\dfrac{1}{3}\,e^{u}\,\dfrac{du}{3}}$</span>
<span class="math-container">$\displaystyle\mathsf{=\int^{7}_{-2}\,\dfrac{1}{9}\,e^{u}\,du}$</span>
<span class="math-container">$\displaystyle\mathsf{=\dfrac{1}{9}\,\int^{7}_{-2}\,e^{u}\,du}$</span>
<span class="math-container">$\displaystyle\mathsf{=\dfrac{1}{9}\,\left[e^{u}\right]^{7}_{-2}}$</span>
<span class="math-container">$\displaystyle\mathsf{=\dfrac{1}{9}\,\left[e^{7}-e^{-2}\right]}$</span>
As desired!</p>
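<p>The value can also be checked numerically, e.g. with composite Simpson's rule; this Python sketch (helper name mine) is only a sanity check, not part of the derivation:</p>

```python
import math

def simpson(f, a, b, n=10000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

numeric = simpson(lambda t: math.exp(3 * t - 2) / 3, 0.0, 3.0)
exact = (math.exp(7) - math.exp(-2)) / 9
assert abs(numeric - exact) < 1e-6
```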
|
1,717,149 | <p>Is it true or false that if $V$ is a vector space and $T:V \to W$ is a linear transformation such that $T^2 = 0$, then $Im(T) \subseteq Ker(T)$ ?<br>
I don't understand it that much. It doesn't seem related... I can have a vector $v$ from $V$ such that $T^2(v)$ equals zero but $T(v) \neq 0_{V}$ </p>
| Deusovi | 256,930 | <p>I recommend Carter's <em>Visual Group Theory</em>. It makes heavy use of pictures and diagrams (hence the name) and I found it very clear.</p>
|
1,650,881 | <p>I found this problem in a book on undergraduate maths in the Soviet Union (<a href="http://www.ftpi.umn.edu/shifman/ComradeEinstein.pdf" rel="nofollow">http://www.ftpi.umn.edu/shifman/ComradeEinstein.pdf</a>):</p>
<blockquote>
<p>A circle is inscribed in a face of a cube of side a. Another circle is circumscribed about a neighboring face of the cube. Find the least distance between points of the circles.</p>
</blockquote>
<p>The solution to the problem is in the book (page 61), but I am wondering how to find the maximum distance between points of the circles and I cannot see how the method used there can be used to find this.</p>
| John Alexiou | 3,301 | <p>I placed the inscribed circle on the top face (+y direction) and the circumscribed circle on the front face (+z direction). Their locus of points is</p>
<p>$$ \begin{align}
\vec{r}_1 & = \begin{bmatrix} r_1 \cos \theta_1 & \frac{a}{2} & r_1 \sin \theta_1 \end{bmatrix} \\
\vec{r}_2 & = \begin{bmatrix} r_2 \cos \theta_2 & r_2 \sin \theta_2 & \frac{a}{2} \end{bmatrix}
\end{align} $$</p>
<p>where $r_1 = \frac{a}{2}$, $r_2 = \frac{a}{\sqrt{2}}$ and $a$ is the side of the cube.</p>
<p>The distance (squared) is $$d^2 = \| \vec{r}_1 -\vec{r}_2 \|^2 $$ which is a function of $\theta_1$ and $\theta_2$.</p>
<p>It comes out as</p>
<p>$$ \frac{d^2}{a^2} = \frac{5}{4} - \frac{\sqrt{2} \cos\theta_1 \cos \theta_2 + \sin \theta_1 + \sqrt{2} \sin \theta_2}{2} $$</p>
<p>To minimize this you have to set $$\frac{\partial}{\partial \theta_1} \frac{d^2}{a^2} = 0$$ and at the same time $$\frac{\partial}{\partial \theta_2} \frac{d^2}{a^2} = 0$$</p>
<p>The system to solve is $$\begin{align} \frac{\sin \theta_1 \cos \theta_2}{\sqrt{2}} - \frac{\cos\theta_1}{2} & = 0 \\ \frac{\cos \theta_1 \sin \theta_2}{\sqrt{2}} - \frac{\cos\theta_2}{\sqrt{2}} & = 0 \end{align} $$</p>
<p>From the first equation $\cos \theta_2 = \frac{\cot \theta_1}{\sqrt{2}}$. When used in the second equation I get $\theta_1 = \arctan \sqrt{2}$.</p>
<p>The result I get is $$\frac{d^2}{a^2} = \frac{5}{4} - \frac{\sqrt{6}}{2} $$</p>
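<p>A numerical sketch in Python (function name mine) confirms both the value at the critical point and that a coarse scan over both angles finds nothing smaller:</p>

```python
import math

def d2_over_a2(t1, t2):
    # squared distance between the two parametrised circle points, divided by a^2
    return 1.25 - (math.sqrt(2) * math.cos(t1) * math.cos(t2)
                   + math.sin(t1) + math.sqrt(2) * math.sin(t2)) / 2

# critical point found above: theta_1 = arctan(sqrt 2), cos(theta_2) = 1/2
t1, t2 = math.atan(math.sqrt(2)), math.pi / 3
target = 1.25 - math.sqrt(6) / 2
assert abs(d2_over_a2(t1, t2) - target) < 1e-12

# a coarse scan over both angles finds nothing smaller
N = 360
grid_min = min(d2_over_a2(2 * math.pi * i / N, 2 * math.pi * j / N)
               for i in range(N) for j in range(N))
assert grid_min >= target - 1e-12
assert grid_min - target < 1e-2
```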
|
201,807 | <p>I heard this problem, so I might be missing pieces. Imagine there are two cities separated by a very long road. The road has only one lane, so cars cannot overtake each other. $N$ cars are released from one of the cities, the cars travel at constant speeds $V$ chosen at random and independently from a probability distribution $P(V)$. What is the expected number of groups of cars arriving simultaneously at the other city? </p>
<p>P.S.: Supposedly, this was a Princeton physics qualifier problem, if that makes a difference. </p>
| Berci | 41,488 | <p>One lane? No overtaking? Simultaneous arrival? Then they must arrive one by one, so $N$ is the number of group of cars. What have I misunderstood?</p>
|
201,807 | <p>I heard this problem, so I might be missing pieces. Imagine there are two cities separated by a very long road. The road has only one lane, so cars cannot overtake each other. $N$ cars are released from one of the cities, the cars travel at constant speeds $V$ chosen at random and independently from a probability distribution $P(V)$. What is the expected number of groups of cars arriving simultaneously at the other city? </p>
<p>P.S.: Supposedly, this was a Princeton physics qualifier problem, if that makes a difference. </p>
| Aditya Bansal | 304,867 | <p>This is not drastically different from the answers above but a slightly different way to arrive at the recurrence relation.</p>
<p>Let $E(n)$ be the expected number of clusters in the case where we begin with $n$ cars in the beginning. Now, if we add one more car from the behind (i.e behind the last car of the slowest cluster), two cases arise :</p>
<p>1) It catches up with the slowest cluster and joins it; or
2) It is too slow to catch up and forms a singleton cluster of its own</p>
<p>The probability of event 2 is $\dfrac 1 {n+1}$. We have $n+1$ in the denominator because the new speed could lie in any of the $n+1$ intervals determined on the number line by the $n$ initial speeds of the cars we had at the start, and by symmetry of ranks each of these positions is equally likely; event 2 corresponds to the leftmost one.</p>
<p>Therefore, the probability of event 1 is $\dfrac n {n+1}$.</p>
<p>Thus $E(n+1) = \big( E(n)+1 \big) \dfrac 1 {n+1} + E(n) \dfrac n {n+1}$, so $E(n+1) = E(n) + \dfrac 1 {n+1}$ i.e. $E(n)$ is the $n$th harmonic number.</p>
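<p>Since the clusters are exactly the cars that are slower than every car ahead of them, the claim $E(n)=H_n$ can be verified exactly by brute force over all speed orderings; a Python sketch (helper names mine):</p>

```python
import math
from fractions import Fraction
from itertools import permutations

def clusters(speeds):
    # in release order: a car starts a new cluster iff it is slower than every car ahead
    count, slowest = 0, math.inf
    for v in speeds:
        if v < slowest:
            count += 1
            slowest = v
    return count

n = 6
total = sum(clusters(p) for p in permutations(range(n)))
expected = Fraction(total, math.factorial(n))
harmonic = sum(Fraction(1, k) for k in range(1, n + 1))
assert expected == harmonic        # H_6 = 49/20
```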
|
1,851,209 | <p>Let $L:X\to Y$ be a linear operator. I saw that $L$ is bounded if $$\|Lu\|_Y\leq C\|u\|_X$$
for a suitable $C>0$. This definition looks really weird to me, since such a map is in fact not necessarily bounded as a function: take $f:\mathbb R\to \mathbb R$ defined by $f(x)=x$. So, is there an error in the <a href="https://en.wikipedia.org/wiki/Bounded_operator" rel="nofollow">Wikipedia definition</a>? And if it is correct (which I suppose, since it's Wikipedia), what is the point of defining boundedness as a Lipschitz-type condition? (It makes no sense to me...) </p>
| Mike | 268,604 | <p>This definition is not really trying to tell you that the values this operator spits out for each argument are always bounded by a given constant. </p>
<p>This definition says that the size (=norm) of the argument you plugged in in the domain, $\mid\mid u \mid\mid_X$ can only be "enlarged" or "diminished" by this much (this much being C) in the range, where its size is described as $\mid\mid Lu \mid\mid_Y$</p>
|
2,236,862 | <p>$$\frac{1^2}{1!}+ \frac{2^2}{2!}+ \frac{3^2}{3!} + \frac{4^2}{4!} + \dotsb$$</p>
<p>I wrote it as:
$$\lim_{n\to \infty}\sum_{r=1}^n \frac{(r^2)}{r!}.$$</p>
<p>Then I thought of the sandwich theorem, but it didn't work. Now I am trying to convert it into a difference of two consecutive terms but can't. Need hints. </p>
| N. S. | 9,176 | <p><strong>Hint 1:</strong>
$$\sum_{r=1}^n \frac{(r^2)}{r!}=\sum_{r=1}^n \frac{r}{(r-1)!}=\sum_{r=0}^{n-1} \frac{r+1}{r!}$$</p>
<p><strong>Hint 2:</strong> Differentiate
$$xe^x=\sum_{r=0}^\infty \frac{x^{r+1}}{r!}$$</p>
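<p>These hints lead to the value $2e$ for the series; a quick numerical check in Python (mine):</p>

```python
import math

# partial sum of sum_{r>=1} r^2 / r!, which the hints show equals 2e
s = sum(r**2 / math.factorial(r) for r in range(1, 40))
assert abs(s - 2 * math.e) < 1e-12
```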
|
4,617,031 | <p>How would I order <span class="math-container">$x = \sqrt{3}-1, y = \sqrt{5}-\sqrt{2}, z = 1+\sqrt{2} \ $</span> without approximating the irrational numbers? In fact, I would be interested in knowing a general way to solve such questions if there is one.</p>
<p>What I have tried so far, because they are all positive numbers, is to square <span class="math-container">$x,y,z$</span>; but, obviously, the rational parts will not be equal, so I cannot compare the radicals. Proving that <span class="math-container">$x<z$</span> is easy and so is <span class="math-container">$y<z$</span>, but I'm stuck at <span class="math-container">$x < y \text{ or } x>y$</span>.</p>
| S.H.W | 325,808 | <p>One way is as follows. It can be proved that <span class="math-container">$f(x) = \sqrt{x}$</span> is an increasing function. So we have:
<span class="math-container">$$x_1\lt x_2 \implies \sqrt{x_1}\lt \sqrt{x_2} \implies \sqrt{x_2} - \sqrt{x_1} \gt 0$$</span>This implies that <span class="math-container">$x,y\gt 0$</span>. Also <span class="math-container">$g(x) = x^2$</span> is an increasing function for <span class="math-container">$x\ge 0$</span>, then:
<span class="math-container">$$x_1\lt x_2 \implies x_1^2\lt x_2^2 \ \ \ \ x_1,x_2\in [0,\infty)$$</span>
Now suppose <span class="math-container">$x\gt y$</span>:
<span class="math-container">$$x>y \implies x^2\gt y^2 \implies (\sqrt{3} -1)^2>(\sqrt{5} - \sqrt{2})^2 \implies 3+1-2\sqrt{3}\gt 5+2-2\sqrt{10} \implies -3-2(\sqrt{3}-\sqrt{10})\gt 0 \implies -3\gt 2(\sqrt{3}-\sqrt{10}) \implies 3<2(\sqrt{10}-\sqrt{3})\implies \frac{3}{2}\lt \sqrt{10}-\sqrt{3} \implies \frac{9}{4} \lt 10 +3-2\sqrt{30} \implies 2\sqrt{30}\lt 13 - \frac{9}{4} \implies 4\times30\lt \frac{43^2}{4^2} \implies 4^3\times30 \lt 43^2 \implies 1920\lt 1849$$</span>
This is a contradiction and implies <span class="math-container">$x\lt y$</span>. Other cases can be checked similarly.</p>
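<p>A quick numerical sanity check (Python, mine) of both the ordering and the integer comparison that closes the contradiction:</p>

```python
import math

x = math.sqrt(3) - 1             # about 0.732
y = math.sqrt(5) - math.sqrt(2)  # about 0.822
z = 1 + math.sqrt(2)             # about 2.414
assert x < y < z

# the integer inequality behind the contradiction: 4^3 * 30 = 1920 > 1849 = 43^2,
# so the chain derived from the assumption x > y cannot hold
assert 4**3 * 30 > 43**2
```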
|
3,406,056 | <p>By the definition of matrix exponentiation,</p>
<p><span class="math-container">$$A^k = \begin{cases}
I_n, & \text{if } k=0 \\[1ex]
A^{k-1}A, & \text{if } k\in \mathbb {N},\ k \ge 1 \\
\end{cases}$$</span></p>
<p>In my book, there's an exercise to do <span class="math-container">$D^k$</span>, where <span class="math-container">$D$</span> is a diagonal matrix.
In the solutions, though, they wrote</p>
<p><span class="math-container">$$D^k=\operatorname{diag}(d_1^k, \dots , d_n^k)$$</span></p>
<p>Which one is correct?</p>
| Z Ahmed | 671,540 | <p>Let <span class="math-container">$$D=\begin{bmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{bmatrix}.$$</span> Then, by the property of diagonal matrices,
<span class="math-container">$$f(D)=\begin{bmatrix} f(a) & 0 & 0 \\ 0 & f(b) & 0 \\ 0 & 0 & f(c) \end{bmatrix}$$</span> So <span class="math-container">$$D^k= \begin{bmatrix} a^k & 0 & 0 \\ 0 & b^k & 0 \\ 0 & 0 & c^k \end{bmatrix}$$</span> and <span class="math-container">$$D^0 =\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$</span></p>
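<p>The two readings agree: iterating the recursive definition on a diagonal matrix only ever multiplies diagonal entries. A minimal NumPy check (my own illustration, not from the answer):</p>

```python
import numpy as np

D = np.diag([2.0, 3.0, 5.0])
k = 4

by_definition = np.linalg.matrix_power(D, k)  # D*D*D*D via the recursion
by_formula = np.diag(np.diag(D) ** k)         # diag(a^k, b^k, c^k)

assert np.allclose(by_definition, by_formula)

# The k = 0 case gives the identity matrix, as in the piecewise definition.
assert np.allclose(np.linalg.matrix_power(D, 0), np.eye(3))

print(by_formula)
```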
|
4,551,497 | <p>How do I find the Taylor series of <span class="math-container">$\cos^{20}{(x)}$</span> for <span class="math-container">$x_0 = 0$</span>, knowing the Taylor series of <span class="math-container">$\cos{(x)}$</span>?</p>
| boojum | 882,145 | <p>We can use the even symmetry of the cosine function to determine that <span class="math-container">$ \ (\cos x)^{20} \ $</span> is likewise an even function, thus the Maclaurin series for this function must contain only terms with even powers of <span class="math-container">$ \ x \ \ . \ $</span> This means that we will only be interested in higher derivatives of the function of even order.</p>
<p>We can also exploit properties of the derivatives of the cosine function. If we take <span class="math-container">$ \ u \ = \ \cos x \ \ , \ $</span> then
<span class="math-container">$$ u^{(2n)} \ \ = \ \ (-1)^n·u \ \ \Rightarrow \ \ u^{(2n)}(0) \ = \ (-1)^n \ \ \ , \ \ \ u^{(2n+1)} \ \ = \ \ (-1)^n·(-\sin x) \ \ \Rightarrow \ \ u^{(2n+1)}(0) \ = \ 0 \ \ . \ $$</span>
It will also be helpful to apply <span class="math-container">$ \ (u')^2 \ = \ (-\sin x)^2 \ = \ 1 - u^2 \ \ . $</span></p>
<p>Starting with the <em>second</em> derivative, we obtain</p>
<p><span class="math-container">$$ (u^{20})'' \ \ = \ \ ( \ 20·u^{19}·u' \ )' \ \ = \ \ (20·19·u^{18}·u')·u' \ + \ 20·u^{19}·u'' $$</span>
<span class="math-container">$$ = \ \ ( 20·19·u^{18})·(u')^2 \ + \ 20·u^{19}·(-u) \ \ = \ \ ( 20·19·u^{18})·(u')^2 \ - \ 20·u^{20} $$</span>
<span class="math-container">$$ \Rightarrow \ \ (u^{20})''(0) \ \ = \ \ 20·19·1^{18}·0^2 \ - \ 20·1^{20} \ \ = \ \ -20 \ \ . $$</span>
[Of course, this term could be extracted easily enough from <span class="math-container">$ \ (\cos x)^{20} \ \approx \ \left(1 - \frac12x^2 \right)^{20} \ \approx \ 1 - 10x^2 \ \ . \ ] $</span></p>
<p>In proceeding to the fourth derivative, we observe that we have already done some of the work:
<span class="math-container">$$ (u^{20})^{(4)} \ \ = \ \ [ \ ( 20·19·u^{18})·(u')^2 \ - \ 20·u^{20} \ ]'' \ \ = \ \ [ \ ( 20·19·u^{18})·(1 - u^2) \ - \ 20·u^{20} \ ]'' $$</span>
<span class="math-container">$$ = \ \ [ \ 20·19·u^{18} \ - \ (20 \ + \ 20·19) ·u^{20} \ ]'' $$</span>
<span class="math-container">$$ = \ \ ( \ 20·19·18·u^{17}·u' \ )' \ - \ (20 \ + \ 20·19)·[ \ 20·19·u^{18} \ - \ (20 \ + \ 20·19) ·u^{20} \ ] $$</span>
<span class="math-container">$$ = \ \ ( \ 20·19·18·17·u^{16}·u'·u' \ + \ 20·19·18·u^{17}·u'' \ ) $$</span> <span class="math-container">$$ - \ (20 \ + \ 20·19)·[ \ 20·19·u^{18} \ - \ (20 \ + \ 20·19) ·u^{20} \ ] $$</span></p>
<p><span class="math-container">$$ \Rightarrow \ \ (u^{20})^{(4)}(0) \ \ = \ \ 20·19·18·17·1^{16}·0^2 \ + \ 20·19·18·1^{17}·(-1) $$</span> <span class="math-container">$$ - \ (20^2·19 \ + \ 20^2·19^2)·1^{18} \ + \ (20 \ + \ 20·19)^2 ·1^{20} $$</span> <span class="math-container">$$ = \ \ -6840 \ - \ 7600 \ - \ 144,400 \ + \ 160,000 \ \ = \ \ 1160 \ \ . $$</span></p>
<p>Although it may seem daunting, we will work to obtain the sixth derivative, which we may hope will be far enough to see a pattern. (We note along the way that we have found that the first and third derivatives of <span class="math-container">$ \ u^{20} \ $</span> only have terms with factors that are odd derivatives of <span class="math-container">$ \ u \ \ , \ $</span> making odd-power terms of the Taylor series equal to zero, as expected.)</p>
<p><span class="math-container">$$ (u^{20})^{(6)} \ \ = \ \ [ \ 20·19·18·17·u^{16}·(1 - u^2) \ + \ 20·19·18·u^{17}·(-u) $$</span> <span class="math-container">$$ - \ (20^2·19 \ + \ 20^2·19^2)·u^{18} \ + \ (20 \ + \ 20·19)^2 ·u^{20} \ ]'' $$</span></p>
<p><span class="math-container">$$ = \ \ [ \ 20·19·18·17·u^{16} \ - \ (\overbrace{20·19·18·17 \ + \ 20·19·18 \ + \ 20^2·19 \ + \ 20^2·19^2}^{\mathcal{C}})·u^{18} $$</span> <span class="math-container">$$ + \ (20 \ + \ 20·19)^2 ·u^{20} \ ]'' $$</span></p>
<p><span class="math-container">$$ = \ \ 20·\ldots·15·u^{14}·(u')^2 \ + \ 20·\ldots·16·u^{15}·(u'') \ - \ \mathcal{C}·18·17·u^{16}·(u')^2 \ - \ \mathcal{C}·18·u^{17}·(u'') $$</span>
<span class="math-container">$$ + \ (20 \ + \ 20·19)^2 · [ \ 20·19·u^{18} \ - \ (20 \ + \ 20·19) ·u^{20} \ ] $$</span></p>
<p><span class="math-container">$$ \Rightarrow \ \ (u^{20})^{(6)}(0) \ \ = \ \ 20·19·18·17·16·1^{15}·(-1) $$</span> <span class="math-container">$$ - \ (20·19·18·17 \ + \ 20·19·18 \ + \ 20^2·19 \ + \ 20^2·19^2)·18·1^{17}·(-1) $$</span> <span class="math-container">$$ + \ (20 \ + \ 20·19)^2 · 20·19·1^{18} \ - \ (20 \ + \ 20·19)^3 ·1^{20} $$</span>
<span class="math-container">$$ = \ \ -1,860,480 \ + \ 4,952,160 \ + \ 60,800,000 \ - \ 64,000,000 \ \ = \ \ -108,320 \ \ . $$</span></p>
<p>Thus we have produced by direct computation the first four terms of the Taylor series
<span class="math-container">$$ (\cos x)^{20} \ \ = \ \ 1 \ + \ \frac{-20}{2!}·x^2 \ + \ \frac{1160}{4!}·x^4 \ + \ \frac{-108320}{6!}·x^6 \ + \ \ldots $$</span>
<span class="math-container">$$ = \ \ 1 \ - \ 10·x^2 \ + \ \frac{145}{3}·x^4 \ - \ \frac{1354}{9}·x^6 \ + \ \ldots \ \ , $$</span>
in agreement with the result from WolframAlpha (which, interestingly, declines to generate any further terms, at least at "Pro" level).</p>
<p>I have not found a concise way to express the general term of this series (in the time I've had available), but the calculation of higher derivative values at <span class="math-container">$ \ x \ = \ 0 \ $</span> is not difficult, though a little tedious.</p>
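<p>The four coefficients obtained above can be cross-checked (my own verification, not part of the answer) by raising the truncated Maclaurin series of <span class="math-container">$\cos x$</span> to the 20th power with exact rational arithmetic:</p>

```python
from fractions import Fraction
from math import factorial

N = 7  # keep terms up to degree 6

# Maclaurin coefficients of cos x: 1, 0, -1/2!, 0, 1/4!, 0, -1/6!
cos_c = [Fraction(0)] * N
for m in range(0, N, 2):
    cos_c[m] = Fraction((-1) ** (m // 2), factorial(m))

def mul_trunc(a, b):
    """Multiply two coefficient lists, discarding degrees >= N."""
    out = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                out[i + j] += ai * bj
    return out

p = [Fraction(1)] + [Fraction(0)] * (N - 1)
for _ in range(20):
    p = mul_trunc(p, cos_c)  # truncated series of (cos x)^20

assert p[0] == 1
assert p[2] == -10
assert p[4] == Fraction(145, 3)
assert p[6] == Fraction(-1354, 9)
print([p[i] for i in range(0, N, 2)])
```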
|
2,572,884 | <p>Let $X$ be a random variable with values $0$ and $1$.</p>
<p>Let $Y$ be a random variable with values in $\mathbb{N_0}$.</p>
<p>Let $ p \in (0,1)$ and $ P(X=0, Y=n) = p \cdot \frac{e^{-1}}{n!} $ and $ P(X=1, Y=n) = (1-p) \cdot \frac{2^{n}e^{-2}}{n!} $.</p>
<p>Calculate $E(Y)$ and $Var(Y)$. (expected value and variance)</p>
<p>So we want to use the law of total expectation. $$ E(Y) = E(E(Y|X))= \sum_xE(Y|X=x) \cdot P(X=x)= E(Y|X=0) \cdot p + E(Y|X=1) \cdot (1-p)= \sum_{n=1} n\cdot P(Y=n|X=0) \cdot p+ \sum_{n=1} n\cdot P(Y=n|X=1)\cdot (1-p) = \sum_{n=1} n\cdot \frac{e^{-1}}{n!} \cdot p +\sum_{n=1} n\cdot \frac{e^{-2}\cdot 2^n}{n!} \cdot(1-p)= e^{-1} \cdot e \cdot p + 2\cdot(1-p)\cdot e^{-2} \cdot \sum_{n=1} \frac{2^{n-1}}{(n-1)!} = p + 2\cdot (1-p) e^{-2} \cdot e^2= p+2 \cdot (1-p)= 2-p $$
So far so good. But I never used the law of total variance. Can somebody give me only "the beginning"? What do I mean by "beginning"? I mean that you only write $ E(V(Y|X))$ and $ Var(E(Y|X))$ as sums, like I did with $E(E(Y|X))$. Then I will try the rest on my own.</p>
<p>Thank you for your time.</p>
| Christian Blatter | 1,303 | <p>Given a finite probability distribution $p:=(p_i)_{1\leq i\leq n}$ its <em>entropy</em> is defined by
$$H(p):=-\sum_{i=1}^n p_i \log_2(p_i)\ .$$
If $p$ models the frequencies of the letters of an alphabet then $H(p)$ turns out to be the average number of bits per letter. This is the essential content of Shannon theory, and cannot be explained in a few lines. In the case $p=\bigl({1\over2},{1\over3},{1\over6}\bigr)$ one obtains $H(p)=1.45915$. This is what you can reach "in the limit" through a clever encoding. But $1.41421$ is definitely not attainable under the given circumstances.</p>
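<p>The quoted value is easy to reproduce; here is a small script (my addition) computing the entropy of <span class="math-container">$p=\bigl({1\over2},{1\over3},{1\over6}\bigr)$</span>:</p>

```python
import math

def entropy(p):
    """Shannon entropy in bits: H(p) = -sum p_i log2(p_i)."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

H = entropy([1/2, 1/3, 1/6])
assert abs(H - 1.45915) < 1e-4  # matches the value in the answer

print(round(H, 5))
```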
|
2,572,884 | <p>Let $X$ be a random variable with values $0$ and $1$.</p>
<p>Let $Y$ be a random variable with values in $\mathbb{N_0}$.</p>
<p>Let $ p \in (0,1)$ and $ P(X=0, Y=n) = p \cdot \frac{e^{-1}}{n!} $ and $ P(X=1, Y=n) = (1-p) \cdot \frac{2^{n}e^{-2}}{n!} $.</p>
<p>Calculate $E(Y)$ and $Var(Y)$. (expected value and variance)</p>
<p>So we want to use the law of total expectation. $$ E(Y) = E(E(Y|X))= \sum_xE(Y|X=x) \cdot P(X=x)= E(Y|X=0) \cdot p + E(Y|X=1) \cdot (1-p)= \sum_{n=1} n\cdot P(Y=n|X=0) \cdot p+ \sum_{n=1} n\cdot P(Y=n|X=1)\cdot (1-p) = \sum_{n=1} n\cdot \frac{e^{-1}}{n!} \cdot p +\sum_{n=1} n\cdot \frac{e^{-2}\cdot 2^n}{n!} \cdot(1-p)= e^{-1} \cdot e \cdot p + 2\cdot(1-p)\cdot e^{-2} \cdot \sum_{n=1} \frac{2^{n-1}}{(n-1)!} = p + 2\cdot (1-p) e^{-2} \cdot e^2= p+2 \cdot (1-p)= 2-p $$
So far so good. But I never used the law of total variance. Can somebody give me only "the beginning"? What do I mean by "beginning"? I mean that you only write $ E(V(Y|X))$ and $ Var(E(Y|X))$ as sums, like I did with $E(E(Y|X))$. Then I will try the rest on my own.</p>
<p>Thank you for your time.</p>
| celtschk | 34,930 | <p>The Huffman code is the best you can achieve for encoding single symbols from a given set. To achieve a better encoding, you must encode combinations of several symbols at once.</p>
<p>For example, for two-symbol combinations, you get the probabilities:
$$\begin{aligned}
p(AA) &= \frac14 &
p(AB) &= \frac16 &
p(AC) &= \frac1{12}\\
p(BA) &= \frac16 &
p(BB) &= \frac19 &
p(BC) &= \frac1{18}\\
p(CA) &= \frac1{12} &
p(CB) &= \frac1{18} &
p(CC) &= \frac1{36}
\end{aligned}$$</p>
<p>Applying the Huffman code to this, you can get (e.g. using <a href="http://huffman.ooz.ie/?text=111111111222222333444444555566777889" rel="nofollow noreferrer">this tool</a>):
$$\begin{aligned}
AA &\to 10 & AB &\to 111 & AC &\to 1100\\
BA &\to 00 & BB &\to 010 & BC &\to 01111\\
CA &\to 1101 & CB &\to 0110 & CC &\to 01110
\end{aligned}$$
The average length with this encoding is
$$\frac12\left(\frac{2}{4} + \frac{3}{6} + \frac{4}{12} + \frac{2}{6} + \frac{3}{9} + \frac{5}{18} + \frac{4}{12} + \frac{4}{18} + \frac{5}{36}\right) \approx 1.486$$
which is already less than $1.5$.</p>
<p>Encoding even more characters at one time, you get even more close to the theoretical optimum, $-\sum_k p_k \log_2 p_k \approx 1.46$.</p>
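<p>The quoted average length and the entropy bound can be checked directly (my own sketch, assuming the codeword lengths listed above):</p>

```python
import math

p = {"A": 1/2, "B": 1/3, "C": 1/6}

# Codeword lengths of the two-symbol Huffman code from the answer.
length = {"AA": 2, "AB": 3, "AC": 4,
          "BA": 2, "BB": 3, "BC": 5,
          "CA": 4, "CB": 4, "CC": 5}

# Average bits per single symbol = half the expected pair length.
avg = 0.5 * sum(p[a] * p[b] * length[a + b] for a in p for b in p)
assert abs(avg - 1.4861) < 1e-3

# Entropy lower bound, ~1.459 bits per symbol:
H = -sum(q * math.log2(q) for q in p.values())
assert H < avg < 1.5

print(avg, H)
```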
|
19,373 | <p>I posted this question earlier today on the Mathematics site (<a href="https://math.stackexchange.com/q/3988907/96384">https://math.stackexchange.com/q/3988907/96384</a>), but was advised it would be better here.</p>
<p>I had a heated argument with someone online who claimed to be a school mathematics teacher of many years standing. The question which spurred this discussion was something along the lines of:</p>
<p>"A horseman was travelling from (location A) along a path through a forest to (location B) during the American War of Independence. The journey was of 22 miles. How far was it in kilometres?"</p>
<p>To my mind, the answer is trivially obtained by multiplying 22 by 1.6 to get 35.2 km, which can be rounded appropriately to 35 km.</p>
<p>I was roundly scolded by this ancient mathematics teacher for a) not using the official conversion factor of 1.60934 km per mile and b) not reporting the correct value as 35.405598 km.</p>
<p>Now I have serious difficulties with this analysis. My argument is: this is a man riding on horseback through a forest in a pre-industrial age. It would be impractical and impossible to measure such a distance to any greater precision than (at best) to the nearest 20 metres or so, even in this day and age. Yet the answer demanded was accurate to the nearest millimetre.</p>
<p>But when I argued this, I was told that it was not my business to round the numbers. I was to perform the conversion task given the numbers I was quoted, and report the result for the person asking the question to decide how accurately the numbers are to be interpreted.</p>
<p>Is that the way of things in school? As a trained engineer, my attitude is that it is part of the purview of anybody studying mathematics to be able to estimate and report appropriate limits of accuracy, otherwise you get laughably ridiculous results like this one.</p>
<p>I confess I have never had a good relationship with teachers, apart from my A-level physics teacher whom I adored, so I expect I will be given a hard time over my inability to understand the basics of what I have failed to learn during the course of the above.</p>
| Syntax Junkie | 5,465 | <p>You're both right, depending on the domain of discourse and the rules of engagement.</p>
<p>In pure math, the traditional expectation is that the numbers given are exact unless stated otherwise, and answers are also to be exact unless stated otherwise. So when the mathematician read "22 miles," he's using a tradition that means "exactly 22 miles."</p>
<p>But in the physical sciences, all measurements are understood to be inexact and approximations and rounding are either "allowed" or "expected" (depending on the logical rigor applied).</p>
<p>In this case, the correct answer boils down to a question of semantics and assumptions. What about this question:</p>
<p><em>If a man traveled 22 miles, how far did he travel in kilometers?</em></p>
<p>How would you answer that? The "If" complicates things. Some would say that it turns the question into a hypothetical that ignores the physical difficulties in measuring exactly 22 miles and turns it into a "given." It's not a stretch to read the original question as a hypothetical, even without an explicit "If" at the beginning.</p>
<p>Some traditions say that integers are always expected to be exact and that the question should have used "approximately 22 miles," "22.0," or a bar on the last significant digit to show it's a real number instead of an integer.</p>
<p>Even in the physical sciences, scenarios used for pedagogical purposes are sometimes idealized in order to remove confounding factors that might distract from the main point being taught. I don't think we know enough about the source of this question to know about what assumptions or simplifications are being made.</p>
<p>You may argue that the use of "a man riding on horseback through a forest in a pre-industrial age" implies a real situation and an actual, inexact measurement. A counterargument is that the use of abstract identifiers "A" and "B" to to designate the starting and stopping point suggest an idealized situation.</p>
<p>I would agree this is a good question for Mathematics Educators Stack Exchange. It emphasizes that in the classroom (as well as life) it's important to be explicit about assumptions and expectations and to lay out the ground rules.</p>
<p><strong>Adding a summary, based on comments, that tries to be more direct:</strong></p>
<ol>
<li>Use of significant digits only applies to inexact numbers such as measurements.</li>
<li>In a problem like "Convert 22 miles to kilometers," there is no reason to think 22 miles is a measurement. Rather, it is a "given": Something that is to be assumed or taken for granted for the sake of the problem.</li>
<li>I think this question boils down to this: In the original question, is "22 miles" to be taken as a given or a measurement?</li>
</ol>
<p>I don't think we can tell. (At least not without more context about where the question came from and why it was asked.) The original question could merely be "Convert 22 miles to kilometers," dressed up in a story to make it engaging or interesting.</p>
<p>My reading of some of the comments suggests a point of view of "If the problem resembles a real-world situation, then it must be interpreted as a real-world situation." Or more succinctly: If 22 miles <em>could be</em> interpreted as a measurement, then it <em>must be</em> interpreted as a measurement. Or that by phrasing the question in a historical, real-world context, that somehow forces the measurement interpretation. I don't follow that. It ignores the way real-world people write, talk, and teach.</p>
|
19,373 | <p>I posted this question earlier today on the Mathematics site (<a href="https://math.stackexchange.com/q/3988907/96384">https://math.stackexchange.com/q/3988907/96384</a>), but was advised it would be better here.</p>
<p>I had a heated argument with someone online who claimed to be a school mathematics teacher of many years standing. The question which spurred this discussion was something along the lines of:</p>
<p>"A horseman was travelling from (location A) along a path through a forest to (location B) during the American War of Independence. The journey was of 22 miles. How far was it in kilometres?"</p>
<p>To my mind, the answer is trivially obtained by multiplying 22 by 1.6 to get 35.2 km, which can be rounded appropriately to 35 km.</p>
<p>I was roundly scolded by this ancient mathematics teacher for a) not using the official conversion factor of 1.60934 km per mile and b) not reporting the correct value as 35.405598 km.</p>
<p>Now I have serious difficulties with this analysis. My argument is: this is a man riding on horseback through a forest in a pre-industrial age. It would be impractical and impossible to measure such a distance to any greater precision than (at best) to the nearest 20 metres or so, even in this day and age. Yet the answer demanded was accurate to the nearest millimetre.</p>
<p>But when I argued this, I was told that it was not my business to round the numbers. I was to perform the conversion task given the numbers I was quoted, and report the result for the person asking the question to decide how accurately the numbers are to be interpreted.</p>
<p>Is that the way of things in school? As a trained engineer, my attitude is that it is part of the purview of anybody studying mathematics to be able to estimate and report appropriate limits of accuracy, otherwise you get laughably ridiculous results like this one.</p>
<p>I confess I have never had a good relationship with teachers, apart from my A-level physics teacher whom I adored, so I expect I will be given a hard time over my inability to understand the basics of what I have failed to learn during the course of the above.</p>
| Eric Duminil | 9,380 | <p>Here's a joke I like to tell when people could use a reminder about precision vs accuracy:</p>
<blockquote>
<p>A tour guide at Giza was explaining how the Pyramids were 4507 years
old. Someone in the crowd asked: "That's oddly specific. How do we know this?"</p>
<p>"Well. I was told they were 4500 years old when I
started working here 7 years ago."</p>
</blockquote>
<p>I'm not sure the grumpy teacher you mentioned would be amused, though.</p>
|
19,373 | <p>I posted this question earlier today on the Mathematics site (<a href="https://math.stackexchange.com/q/3988907/96384">https://math.stackexchange.com/q/3988907/96384</a>), but was advised it would be better here.</p>
<p>I had a heated argument with someone online who claimed to be a school mathematics teacher of many years standing. The question which spurred this discussion was something along the lines of:</p>
<p>"A horseman was travelling from (location A) along a path through a forest to (location B) during the American War of Independence. The journey was of 22 miles. How far was it in kilometres?"</p>
<p>To my mind, the answer is trivially obtained by multiplying 22 by 1.6 to get 35.2 km, which can be rounded appropriately to 35 km.</p>
<p>I was roundly scolded by this ancient mathematics teacher for a) not using the official conversion factor of 1.60934 km per mile and b) not reporting the correct value as 35.405598 km.</p>
<p>Now I have serious difficulties with this analysis. My argument is: this is a man riding on horseback through a forest in a pre-industrial age. It would be impractical and impossible to measure such a distance to any greater precision than (at best) to the nearest 20 metres or so, even in this day and age. Yet the answer demanded was accurate to the nearest millimetre.</p>
<p>But when I argued this, I was told that it was not my business to round the numbers. I was to perform the conversion task given the numbers I was quoted, and report the result for the person asking the question to decide how accurately the numbers are to be interpreted.</p>
<p>Is that the way of things in school? As a trained engineer, my attitude is that it is part of the purview of anybody studying mathematics to be able to estimate and report appropriate limits of accuracy, otherwise you get laughably ridiculous results like this one.</p>
<p>I confess I have never had a good relationship with teachers, apart from my A-level physics teacher whom I adored, so I expect I will be given a hard time over my inability to understand the basics of what I have failed to learn during the course of the above.</p>
| user3067860 | 1,908 | <p>It depends on the level of the class.</p>
<p>I would expect someone who has a recent undergraduate degree in mathematics to have experienced significant figures at at least some point in their life, either in high school or in college. I would also expect common sense to kick in and say that the level of accuracy proposed is unreasonable.</p>
<p>But it is reasonable to dodge the topic when teaching arithmetic, algebra, etc., because the students usually have a hard enough time as it is. Sometimes you can arrange for numbers that come out evenly anyway, but if you're stuck trying to teach an awkward conversion (miles to km) or if the task is to teach something about decimals or fractions then you may be unable to avoid it.</p>
<p>For example, "Alex had five cookies and split them evenly with Blake. How many did each of them get?" Two and a half, and we aren't going to quibble over how precisely half of a cookie was achieved.</p>
<p>If your students are advanced enough to be working with more precise numbers (and, presumably, starting to question what level of accuracy is acceptable) then the best way to dodge it is to simply specify what rounding you want in the question: "Answer to x decimal places." That way you can specify the correct precision without the students having to understand how to calculate what the correct precision should be.</p>
<p>That's much simpler for the student to understand than the official way, which is <a href="https://www.nist.gov/pml/special-publication-811/nist-guide-si-appendix-b-conversion-factors" rel="nofollow noreferrer">according to NIST</a>:</p>
<p>The precision of your conversion should be based on <em>relative error</em>. If error isn't specified, then you can infer it from the number of digits in the values given. Use a conversion factor with equal or greater precision to perform the calculation. Then you round the result to produce a relative error that is of the order of the original.</p>
<p><span class="math-container">$22$</span> miles implies an error range of plus or minus <span class="math-container">$\frac{1}{2}$</span> mile which is <span class="math-container">$2.\overline{27}\%$</span>.</p>
<p>Using a conversion factor of <span class="math-container">$1.61$</span> kilometers per mile (which has better than <span class="math-container">$2.\overline{27}\%$</span> accuracy, note that <span class="math-container">$1.6$</span> is not accurate enough)... <span class="math-container">$22*1.61=35.42$</span> km. We could also use <span class="math-container">$1.609$</span> or any more precise conversion, it <em>does not matter</em> because we will be rounding. (For example, in this case, <span class="math-container">$22*1.609 = 35.398$</span> km.)</p>
<p>Now we round... <span class="math-container">$40$</span> km would have a relative error <span class="math-container">$5\div40\approx12.5\%$</span> which is too much, <span class="math-container">$35.4$</span> would have a relative error of <span class="math-container">$0.05\div35.4\approx0.141\%$</span> which is too little. <span class="math-container">$35$</span> km has a relative error of <span class="math-container">$0.5\div35\approx 1.43\%$</span> which is just right. Note that we get the same (rounded) answer regardless of how much precision we used in the conversion factor, as long as the conversion factor met a minimum level of precision.</p>
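<p>The rounding rule above is mechanical enough to script. Here is a rough sketch (my own code, using the three candidate roundings from this answer) that picks the rounding whose relative error is closest in order of magnitude to the original <span class="math-container">$\pm 0.5$</span> mile:</p>

```python
import math

miles = 22
original_rel_err = 0.5 / miles  # ~2.27%

# Candidate roundings of 22 mi (~35.4 km) and their relative errors.
candidates = {40: 5 / 40, 35.4: 0.05 / 35.4, 35: 0.5 / 35}

# Choose the candidate whose relative error is nearest (on a log scale)
# to the relative error of the original measurement.
best = min(candidates,
           key=lambda c: abs(math.log10(candidates[c] / original_rel_err)))

assert best == 35
print(best, candidates[best])
```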
<p>Question: Why do we assume plus or minus one-half mile? Wouldn't a distance of 22 miles be measured more accurately than that?</p>
<p>Answer: No. If anything it is likely to be much worse. (Disclaimer, I'm not doing sigfigs in this section, I just can't be bothered.)</p>
<p>In American revolutionary war era from the <a href="https://www.nypl.org/blog/2015/08/12/traveling-with-jefferson" rel="nofollow noreferrer">New York Public Library</a>, Thomas Jefferson measuring exactly 22 miles would have actually gone over 22.3 miles (and he was a bit obsessive about measurements):</p>
<blockquote>
<p>Before he left on the trip, Jefferson bought from a Philadelphia watchmaker an odometer that counted the revolution’s of his carriage’s wheel. He had measured distance based “on the belief that the wheel of the Phaeton [his carriage] made exactly 360. revoln’s in a mile.” After the trip, though, he re-measured circumference of the wheel and found that it made only 354.95 revolutions in a mile. So for every seventy-one miles Jefferson thought he traveled, he had actually traveled seventy-two.</p>
</blockquote>
<p>But I use my car odometer, not a carriage! It's much more reliable! ...nope. From <a href="https://www.motus.com/variations-in-odometer-readings/" rel="nofollow noreferrer">motus.com</a>, if your car odometer says 22 miles then it could be anywhere between 21.12 to 22.88 miles:</p>
<blockquote>
<p>Surprisingly, there is no federal law that regulates odometer accuracy. The Society of Automotive Engineers set guidelines that allow for a margin of error of plus or minus four percent.</p>
</blockquote>
<p>Actually I use GPS, that's very accurate! ....nope again. GPS has a margin of error on every position measurement made, plus error from the distance between measurements. Essentially your path is like a coastline and the GPS can suffer from the <a href="https://en.wikipedia.org/wiki/Coastline_paradox" rel="nofollow noreferrer">coastline paradox</a>. From <a href="https://www.singletracks.com/mtb-gear/do-gps-units-always-overestimate-trail-distances-no/" rel="nofollow noreferrer">singletracks.com</a> (with pictures and a good explanation): in a very, very bad case (steep trail, lots of switchbacks) your GPS may report 22 miles when you've actually gone 44 miles! Holy guacamole.</p>
<blockquote>
<p>[...]GPS reports the full loop is right at 1 mile long. In fact, everyone else who rides this trail gets roughly the same measurement. But the trail always “feels” much longer than that.</p>
<p>Recently I started riding with a wheel-based cyclocomputer, which I calibrated and verified during one of our track tests. Measuring this particular trail with the cyclocomputer reveals the trail is actually closers to 2 miles long, meaning our GPS units are underestimating distance by half!</p>
</blockquote>
<p>I'm not going to find sources for inaccuracy of distance calculated by counting steps, the time it took to travel, etc. It's pretty obvious that no one (and no horse) actually moves at that even of a pace for 22 miles.</p>
<p>So accepting 22 miles on a trail as being between 21.5 and 22.5 is actually pretty generous. Better to just call it a day and say it was "some distance".</p>
|
794,389 | <p>Q1. I was fiddling around with squaring-the-square type algebraic maths, and came up with a family of arrangements of $n^2$ squares, with sides $1, 2, 3\ldots n^2$ ($n$ odd). Which seems like it would work with ANY odd $n$. It's so simple, surely it's well-known, but I haven't seen it in my (brief) web travels. I have a page with pics <a href="http://www.adamponting.com/squaring-the-plane/" rel="nofollow noreferrer">here</a> of the $n=7,9,11$ versions, and description of how to construct them; $n=11$ is:</p>
<p><img src="https://i.stack.imgur.com/jsBgg.png" alt="enter image description here"></p>
<p>It seems the same method could square the plane, well, fill greater than any specified area, no matter how huge, at least. Which is what 'infinite' means, practically, isn't it? Anyway, it seems, if somehow not known (which it must be, surely - if anyone has links etc to where it's discussed I would be very grateful) then it's another way of squaring the plane.</p>
<p>Q2. Each of these arrangements can be extended, but the ones I've tried (5 or 6) have a little gap to the south-east, (i.e. where the squares don't fit neatly together) but otherwise can be extended forever. Is there an $n$ for which there is no gap? There are more than 1 of each sized square in this arrangement, but still, it would be a nice tessellation with integer squares. <a href="http://www.adamponting.com/squaring-the-plane-ii/" rel="nofollow noreferrer">Here's</a> a picture of the 7x7 version extended, and detail of the centre.</p>
<p>Thanks for any answers or help.</p>
<hr>
<p>P.s. This was too long for the comments section:</p>
<p>I wrote to Jim Henle the other day asking about this method, he hadn't seen it, thought it was nice, but not plane-filling in the way his method is. He wrote in part:</p>
<blockquote>
<p>You are sort of "squaring the plane," but not in the sense that we did it. You are squaring larger and larger areas of the plane, but you don't square the whole thing. ... There are many meanings to "infinite". Aristotle distinguished between the "potential infinite" (more and more, without bound) and the "actual infinite" (all the numbers, all at once). Your procedure is the first sort, and ours is the second sort.</p>
</blockquote>
<p>Which is what I had thought. </p>
<p>But the more I think about it.. the less clear the difference seems. Well, e.g. 'there are an infinite number of primes' means: there is no highest one; any number you can say, there's a higher prime. That's how that is defined, spoken about, to my (non-mathematician's) understanding. Similarly, there are an infinite number of these $n\times n$ square groups, there's no biggest one, any area you name, there's a bigger one. The Henle's method consists in adding more squares, ideally forever, but practically, you stop at some point and say 'and so on forever'. The procedure requires an infinite number of steps. I can't quite see how this series of $n\times n$ squares is so different. You have to start drawing again with each new $n$, sure, but I can't see that matters so much - there are an infinite number of arrangements, and there's the same 'and so on forever'.. i.e. "there is, strictly speaking, no such thing as an infinite summation or process."</p>
<p>Nov 2015. [I can't add comments to questions below, not sure why.]
Ross M, sorry about the delay! It seems you might be confused between the 2 parts, my fault for combining them in one question. (I still haven't heard anything much about either 2 from anyone.)</p>
<p>The first part is the basic $n^2$ arrangements of squares. (The second part takes just one of these and tries to extend it outwards, wonders about the possibility and mathematics of there being no gaps, and has many more than 1 of each sized square)
I still don't quite see why 'having to rearrange each step' makes a huge difference to anything. Imagine I had a method of going from $n^2$ to $(n+1)^2$ squares by adding more around the edges. Then, according to what people seem to be saying, I 'could tile the whole thing'? The way I've done it has exactly the same area as that would be, just it has to be redrawn. I don't see how that affects whether it 'tiles the plane' or not, or anything else. If someone could explain that to me, I would be very grateful. </p>
| Adam Ponting | 149,936 | <p>(OP here)</p>
<p>Update: Well, I answered Q2 "Yes" after a couple of years, and found the rules governing when an arrangement "has no gaps" or not. It seems every arrangement actually tiles the plane in 2 layers, but only for some the layers of squares coincide. I wrote <a href="http://www.adamponting.com/wp-content/uploads/2017/10/sq-tilings-paper.pdf" rel="nofollow noreferrer">a paper</a> a while back about exactly when that is, and lots more about them. <a href="http://www.adamponting.com/update-ponting-packing/" rel="nofollow noreferrer">This page</a> has some more pics of the plane tilings, plus a program to draw the $1\dots n^2$-type square arrangements of Q1, which were christened <em>Ponting square packings</em> by Ed Pegg, who wrote a <a href="http://demonstrations.wolfram.com/PontingSquarePacking/" rel="nofollow noreferrer">Mathworld demonstration</a> about them. It seems they were new.</p>
|
102,721 | <p>This is probably a very simple question, but I couldn't find a duplicate.</p>
<p>As everybody knows, <code>{x, y} + v</code> gives <code>{x + v, y + v}</code>. But if I intend <code>v</code> to represent a vector, for example if I am going to substitute <code>v -> {vx, vy}</code> in the future, then the result <code>{x + v, y + v}</code> is meaningless.</p>
<p>How I can indicate to Mathematica that <code>v</code> is not a scalar and functions like <code>Plus</code> should not treat it as one? I tried setting <code>$Assumptions = v ∈ Vectors[2]</code> but that didn't help.</p>
| JungHwan Min | 35,945 | <p>One workaround would be <code>ClearAttributes[Plus, Listable]</code> (the <code>Plus</code> is threading because of its <code>Listable</code> attribute; note that <code>Plus</code> is <code>Protected</code>, so you may need <code>Unprotect[Plus]</code> first). If you need the <code>Listable</code> attribute in another part of your code, you would need to run <code>SetAttributes[Plus, Listable]</code> to put the attribute back in.</p>
<pre><code>ClearAttributes[Plus,Listable];
{x, y} + v
(* v + {x, y} *)
</code></pre>
<p>Or, you could simply use <code>Defer</code> (a bit tricky to use):</p>
<pre><code>Defer[{x, y} + v]
(* {x, y} + v *)
</code></pre>
<p>Alternatively, you could use <code>Hold</code> (or <code>HoldForm</code>) and <code>ReleaseHold</code>:</p>
<pre><code>ReleaseHold[Hold[{x, y} + v] /. v -> {vx, vy}]
(* {vx + x, vy + y} *)
</code></pre>
|