| qid | question | author | author_id | answer |
|---|---|---|---|---|
1,822,336 | <p>My friend asked me what the roots of $y=x^3+x^2-2x-1$ were.</p>
<p>I didn't really know and when I graphed it, it had no integer solutions. So I asked him what the answer was, and he said that the $3$ roots were $2\cos\left(\frac {2\pi}{7}\right), 2\cos\left(\frac {4\pi}{7}\right)$ and $2\cos\left(\frac {8\pi}{7}\right)$.</p>
<blockquote>
<p><strong>Question:</strong> How would you get the roots without using a computer algebra system such as Mathematica? Can other equations have roots in trigonometric form? </p>
</blockquote>
<p>Anything helps!</p>
| egreg | 62,967 | <p>Set $x=t+t^{-1}$. Then the equation becomes
$$
t^3+3t+3t^{-1}+t^{-3}+t^2+2+t^{-2}-2t-2t^{-1}-1=0
$$
and, multiplying by $t^3$,
$$
t^6+t^5+t^4+t^3+t^2+t+1=0
$$
and it should now be clear what the solutions are. For each root there's another one giving the same solution in $x$.</p>
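A quick numerical sanity check of the claimed roots (my own sketch, not part of the original answer): the roots of $t^6+t^5+\cdots+1=0$ are the primitive seventh roots of unity, so $x = t + t^{-1} = 2\cos(2\pi k/7)$.

```python
import math

# The substitution x = t + 1/t turns x^3 + x^2 - 2x - 1 = 0 into
# t^6 + t^5 + ... + t + 1 = 0, whose roots are the primitive 7th roots
# of unity t = e^{2*pi*i*k/7}.  Then x = t + 1/t = 2*cos(2*pi*k/7).
def p(x):
    return x**3 + x**2 - 2*x - 1

roots = [2 * math.cos(2 * math.pi * k / 7) for k in (1, 2, 4)]
for r in roots:
    print(f"p({r:+.6f}) = {p(r):.2e}")   # all ~ 0
```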
|
3,001,335 | <p>I know little formal math terminology and don't understand much of anything about complex analysis. Also, if this isn't a good starting point for complex integration feel free to say (I'm learning about it partly for Cauchy's residue theorem). </p>
<p>My first and intuitive idea of residue has to do with remainder, subtraction, division, etc., but more generally something that's left over, extra, or unused by an operation. Do these ideas tie together? </p>
| José Carlos Santos | 446,262 | <p>If <span class="math-container">$a\in\mathbb C$</span>, <span class="math-container">$r\in(0,\infty)$</span>, <span class="math-container">$f\colon B_r(a)\setminus\{a\}\longrightarrow\mathbb C$</span> is an analytic function, and <span class="math-container">$\gamma\colon[a,b]\longrightarrow B_r(a)\setminus\{a\}$</span> is a simple loop around <span class="math-container">$a$</span>, then<span class="math-container">$$\frac1{2\pi i}\int_\gamma f(z)\,\mathrm dz=\operatorname{res}_{z=a}\bigl(f(z)\bigr).$$</span>So, the residue is what's leftover after integrating along such a loop.</p>
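The "leftover picked up by the loop" can be seen concretely with a discretized loop integral (a sketch I'm adding; here $f(z)=3/z$ has residue $3$ at $0$, and the loop is the unit circle):

```python
import cmath

# Numerically evaluate (1/(2*pi*i)) * integral of f over the unit circle
# via a Riemann sum over gamma(theta) = e^{i theta}, gamma' = i e^{i theta}.
def loop_integral(f, n=100_000):
    total = 0j
    for k in range(n):
        z = cmath.exp(2j * cmath.pi * k / n)
        total += f(z) * 1j * z * (2 * cmath.pi / n)
    return total / (2j * cmath.pi)

res = loop_integral(lambda z: 3 / z)
print(res)  # ~ (3+0j): the residue of 3/z at 0
```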
|
2,734,338 | <p>I would like to show that
$$\forall n\in\mathbb{N}^*, \quad \sqrt{\frac{n}{n+1}}\notin \mathbb{Q}$$</p>
<p>I'm interested in more ways of proving this.</p>
<p>My method :</p>
<p>suppose that $\sqrt{\frac{n}{n+1}}\in \mathbb{Q}$ then there exist $(p,q)\in\mathbb{Z}\times \mathbb{N}^*$ such that $\sqrt{\frac{n}{n+1}}=\frac{p}{q}$ thus </p>
<p>$$\dfrac{n}{n+1}=\dfrac{p^2}{q^2} \implies nq^2=(n+1)p^2 \implies n(q^2-p^2)=p^2$$
since $p\neq q\implies p^2\neq q^2$ then</p>
<p>$$n=\dfrac{p^2}{(q^2-p^2)}$$</p>
<p>since $n\in \mathbb{N}^*$ then $n\in \mathbb{Q}$</p>
<ul>
<li>I'm stuck here and I would like to see different ways to prove $\sqrt{\frac{n}{n+1}}\notin \mathbb{Q}$</li>
</ul>
| rtybase | 22,583 | <p>From <a href="https://en.wikipedia.org/wiki/Rational_root_theorem" rel="nofollow noreferrer">RRT</a> for $P(x)=(n+1)x^2-n=0$, if $x=\frac{p}{q}, \gcd(p,q)=1$ is a solution for P(x), then $\color{green}{p\mid n}$ and $\color{red}{q\mid n+1}$. In this particular case it's even $\color{green}{p^2\mid n}$ and $\color{red}{q^2\mid n+1}$ from
$$(n+1)p^2-nq^2=0 \tag{1}$$</p>
<p>Also, from $\gcd(n,n+1)=1$ and <a href="https://en.wikipedia.org/wiki/B%C3%A9zout%27s_identity" rel="nofollow noreferrer">Bezout</a> $\exists a,b\in \mathbb{Z}$ s.t.
$$an+b(n+1)=1 \tag{2}$$
Combining $(1)$ and $(2)$
$$aq^2n+bq^2(n+1)=q^2 \Rightarrow a(n+1)p^2+bq^2(n+1)=q^2\Rightarrow\\
\color{red}{n+1\mid q^2} \Rightarrow q^2=t(n+1)$$
and back to $(1)$
$$p^2-tn=0 \Rightarrow \color{green}{n \mid p^2}$$
as a result $n+1=q^2$ and $n=p^2$ or we have 2 consecutive integers which are also perfect squares, which is not possible, since the next perfect square is $(p+1)^2=p^2+2p+1 > n+1$. </p>
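A brute-force sanity check of the statement (my own sketch, not a proof): $\sqrt{n/(n+1)}$ is rational iff $n(n+1)$ is a perfect square, since $\sqrt{n/(n+1)} = \sqrt{n(n+1)}/(n+1)$.

```python
import math

# sqrt(n/(n+1)) is rational iff n*(n+1) is a perfect square; as in the
# proof above, that would force two consecutive perfect squares.
def is_square(m):
    r = math.isqrt(m)
    return r * r == m

assert not any(is_square(n * (n + 1)) for n in range(1, 100_000))
print("n*(n+1) is never a perfect square for 1 <= n < 100000")
```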
|
4,508,796 | <p>How to find the integral
<span class="math-container">$$\int_0^1 x\sqrt{\frac{1-x}{1+x}}dx$$</span></p>
<p>I tried by substituting <span class="math-container">$x=\cos a$</span>. But it's leading to a form <span class="math-container">$\sin2a\cdot\tan a/2$</span> which I can't integrate further.</p>
| MathDona | 1,084,986 | <p><span class="math-container">$$I=\int_{0}^{1} x \sqrt{\frac{1-x}{1+x}}dx$$</span>
Let <span class="math-container">$x=\cos 2t$</span>, then
<span class="math-container">$$I=-2\int_{\pi/4}^{0} \cos 2t \sin 2t \tan t dt=2\int_{0}^{\pi/4} \left[ \cos 2t-\cos^2 2t\right] dt$$</span> <span class="math-container">$$=2\int_{0}^{\pi/4} \cos 2t dt -\int_{0}^{\pi/4} (1+\cos 4t)~ dt$$</span>
<span class="math-container">$$\implies I=2\frac{\sin 2t}{2}\bigg|_{0}^{\pi/4} - \frac{\pi} {4}-\frac{\sin 4t}{4}\bigg|_{0}^{\pi/4}=1-\frac{\pi}{4}$$</span></p>
|
4,508,796 | <p>How to find the integral
<span class="math-container">$$\int_0^1 x\sqrt{\frac{1-x}{1+x}}dx$$</span></p>
<p>I tried by substituting <span class="math-container">$x=\cos a$</span>. But it's leading to a form <span class="math-container">$\sin2a\cdot\tan a/2$</span> which I can't integrate further.</p>
| dan_fulea | 550,003 | <p>Observe that the given integral is the integral of <span class="math-container">$\sqrt{1-x^2}$</span> times a rational function of <span class="math-container">$x$</span>, so any of the three <a href="https://en.wikipedia.org/wiki/Euler_substitution" rel="nofollow noreferrer">Euler substitutions</a>
leads to an integral of a rational function, which can be further done by partial fraction decomposition. In fact, the third substitution is (up to sign)
<span class="math-container">$$
t = \sqrt{\frac{1-x}{1+x}}\ ,
$$</span>
which is already a piece of the puzzle, so we try it first.
Then <em>formally</em>
<span class="math-container">$\displaystyle
t^2 =\frac{1-x}{1+x}
=\frac2{1+x} -1
$</span>, so
<span class="math-container">$\displaystyle
1+t^2 =\frac{1-x}{1+x}
=\frac2{1+x}
$</span>, so
<span class="math-container">$\displaystyle
1+x =\frac{1-x}{1+x}
=\frac2{1+t^2}
$</span>, so we have <span class="math-container">$x$</span> and <span class="math-container">$dx$</span> in terms of <span class="math-container">$t$</span>,</p>
<p><span class="math-container">$\displaystyle
x =\frac{1-t^2}{1+t^2}
$</span>, and
<span class="math-container">$\displaystyle
dx = d(1+x)=d\left(\frac2{1+t^2}\right)
=-\frac2{(1+t^2)^2}\cdot 2t\; dt
$</span>.
This reduces the given integral to a rational one, <em>formally</em>:
<span class="math-container">$$
\begin{aligned}
\int_0^1x\cdot\sqrt{\frac{1-x}{1+x}}\; dx
&=
\int_1^0 \frac{1-t^2}{1+t^2} \cdot t\cdot \frac{-2}{(1+t^2)^2}\cdot 2t\; dt
=
\int_0^1 \frac{4t^2(1-t^2)}{(1+t^2)^3}\; dt
\\
&=
\left[
\frac{3t^3 + t}{(1+t^2)^2} - \arctan t
\right]_0^1
=1-\frac \pi4\ .
\end{aligned}
$$</span>
<span class="math-container">$\square$</span></p>
<hr />
<p>Similarly, for an alternative proof we can try another Euler substitution:
we use <span class="math-container">$t$</span> such that
<span class="math-container">$$
\sqrt{1-x^2} = 1-tx\ .
$$</span>
After squaring, cancelling the one, simplifying by <span class="math-container">$x$</span>, isolating <span class="math-container">$x$</span> on one side, we get <span class="math-container">$x$</span> as an explicit function of <span class="math-container">$t$</span>,
<span class="math-container">$\displaystyle x=\frac {2t}{1+t^2}$</span>.</p>
<p>The formal derivation gives
<span class="math-container">$\displaystyle dx=d\left(\frac {2t}{1+t^2}\right)=\frac{2(1-t^2)}{(1+t^2)^2}\; dt$</span>.</p>
<p><span class="math-container">$$
\begin{aligned}
\int_0^1x\cdot\sqrt{\frac{1-x}{1+x}}\; dx
&=
\int_0^1x\cdot\frac 1{1+x}\sqrt{1-x^2}\; dx
\\
&=
\int_0^1\frac {2t}{1+t^2}\cdot\frac 1{\frac {(1+t)^2}{1+t^2}}\cdot
\frac{1-t^2}{1+t^2}\; \frac{2(1-t^2)}{(1+t^2)^2}\; dt
\\
&= \int_0^1\frac {4t(1-t)^2}{(1+t^2)^3}\; dt
\\
&=
\left[
\frac{t-t^3}{(1+t^2)^2}
- \frac 2{1+t^2}
- \arctan t
\right]_0^1
=1-\frac \pi4\ .
\end{aligned}
$$</span>
<span class="math-container">$\square$</span></p>
|
227,873 | <p>I am looking for robust generalizations of matrix rank.</p>
<p>Think of the following problem: A big matrix of low rank is perturbed by random noise, such that it becomes a full-rank matrix. Is there a generalization of matrix rank that still 'sees' that the perturbed matrix is close to a low-rank matrix?</p>
| Vahan | 28,157 | <p>You can perform Principal Component Analysis and consider the rank of the resulting matrix. This is similar to what Sebastian Goette suggested.</p>
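One common way to make this precise is "numerical rank" via SVD thresholding (a sketch I'm adding, assuming NumPy; this is one interpretation of the PCA suggestion, not necessarily the only one):

```python
import numpy as np

# A rank-2 matrix plus tiny noise is technically full rank, but its
# singular-value spectrum still "sees" the rank-2 structure: count the
# singular values above a noise-dependent threshold.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 100))  # rank 2
B = A + 1e-8 * rng.standard_normal((100, 100))                     # full rank

s = np.linalg.svd(B, compute_uv=False)
numerical_rank = int(np.sum(s > 1e-4 * s[0]))
print(numerical_rank)  # 2
```

NumPy's `np.linalg.matrix_rank` implements essentially this idea, with a default tolerance scaled by the largest singular value.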
|
3,991,072 | <p>Consider, two planar vectors:</p>
<p><span class="math-container">$$V= a \hat{x} + b \hat{y}$$</span></p>
<p>And
<span class="math-container">$$ U = a' \hat{x} + b' \hat{y}$$</span></p>
<p>These are analogous to the complex numbers:</p>
<p><span class="math-container">$$ v = a + bi$$</span></p>
<p>and,</p>
<p><span class="math-container">$$u= a' + b' i$$</span></p>
<p>Now, there are clear rules for multiplying <span class="math-container">$ v \cdot u$</span> and also for the expression <span class="math-container">$ \frac{v}{u}$</span>, and there exists a geometric interpretation (if you view them in polar form). Then why is it that we don't bring up these products when speaking of vectors, and only discuss the dot and cross product?</p>
| Mustafa Said | 90,927 | <p>For planar vectors we can indeed multiply and divide.</p>
<p>First identify <span class="math-container">$V = a \hat{x} + b \hat{y}$</span> with <span class="math-container">$(a, b)$</span> and</p>
<p><span class="math-container">$U = c \hat{x} + d \hat{y} $</span> with <span class="math-container">$(c, d)$</span></p>
<p>Now define <span class="math-container">$(a, b) \times (c, d) := (ac - bd, ad + bc)$</span>. You can check that this definition of multiplication is well-defined.</p>
<p>Also, <span class="math-container">$\frac{(a,b)}{(c,d)} = \frac{1}{c^2 + d^2}(ac + bd, ad - bc)$</span> is well defined.</p>
<p>These formulas may look foreign at first site but a closer inspection reveals that they are just multiplication and division of complex numbers.</p>
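These formulas can be cross-checked against Python's built-in complex numbers (a sketch I'm adding; note that the quotient's second component is $(bc-ad)/(c^2+d^2)$, matching $(a+bi)/(c+di)$):

```python
# Planar-vector product and quotient, mirroring complex arithmetic.
def mul(v, u):
    a, b = v
    c, d = u
    return (a * c - b * d, a * d + b * c)

def div(v, u):
    a, b = v
    c, d = u
    r2 = c * c + d * d
    return ((a * c + b * d) / r2, (b * c - a * d) / r2)

z = complex(3, 4) * complex(1, -2)
assert mul((3, 4), (1, -2)) == (z.real, z.imag)      # (11, -2)
w = complex(3, 4) / complex(1, -2)
print(div((3, 4), (1, -2)), (w.real, w.imag))        # both (-1.0, 2.0)
```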
|
3,445,768 | <p>I am trying to solve a nonlinear differential equation of the first order that comes from a geometric problem ; <span class="math-container">$$x(2x-1)y'^2-(2x-1)(2y-1)y'+y(2y-1)=0.$$</span></p>
<p>edit1 <strong><em>I am looking for human methods to solve the equation</em></strong> </p>
<p>edit2 the geometric problem was discussed on this french forum <a href="http://www.les-mathematiques.net/phorum/read.php?8,1779080,1779080" rel="nofollow noreferrer">http://www.les-mathematiques.net/phorum/read.php?8,1779080,1779080</a> </p>
<p>We can see the differential equation here <a href="http://www.les-mathematiques.net/phorum/read.php?8,1779080,1780782#msg-1780782" rel="nofollow noreferrer">http://www.les-mathematiques.net/phorum/read.php?8,1779080,1780782#msg-1780782</a></p>
<p>edit 3 I do not trust formal computer programs: look at Wolfram's answer when asked to calculate the cubic root of -1 <a href="https://www.wolframalpha.com/input/?i=%7B%5Csqrt%5B3%5D%7B-1%7D%7D%29+" rel="nofollow noreferrer">https://www.wolframalpha.com/input/?i=%7B%5Csqrt%5B3%5D%7B-1%7D%7D%29+</a>.</p>
| Robert Israel | 8,508 | <p>Maple finds the solutions</p>
<p><span class="math-container">$$y(x) = \frac12,\ y(x) =\frac12-x,\ y(x) = \sqrt {(c-c^2 )(2\,x-1)}-cx+c $$</span></p>
|
3,445,768 | <p>I am trying to solve a nonlinear differential equation of the first order that comes from a geometric problem ; <span class="math-container">$$x(2x-1)y'^2-(2x-1)(2y-1)y'+y(2y-1)=0.$$</span></p>
<p>edit1 <strong><em>I am looking for human methods to solve the equation</em></strong> </p>
<p>edit2 the geometric problem was discussed on this french forum <a href="http://www.les-mathematiques.net/phorum/read.php?8,1779080,1779080" rel="nofollow noreferrer">http://www.les-mathematiques.net/phorum/read.php?8,1779080,1779080</a> </p>
<p>We can see the differential equation here <a href="http://www.les-mathematiques.net/phorum/read.php?8,1779080,1780782#msg-1780782" rel="nofollow noreferrer">http://www.les-mathematiques.net/phorum/read.php?8,1779080,1780782#msg-1780782</a></p>
<p>edit 3 I do not trust formal computer programs: look at Wolfram's answer when asked to calculate the cubic root of -1 <a href="https://www.wolframalpha.com/input/?i=%7B%5Csqrt%5B3%5D%7B-1%7D%7D%29+" rel="nofollow noreferrer">https://www.wolframalpha.com/input/?i=%7B%5Csqrt%5B3%5D%7B-1%7D%7D%29+</a>.</p>
| JJacquelin | 108,514 | <p>This is not an answer to the question but a complement to the Robert Israel's answer. It was not possible to edit it in the comments section. </p>
<p><span class="math-container">$$y(x) = \sqrt {(c-c^2 )(2\,x-1)}-cx+c \tag 1$$</span>
is not the complete set of solutions. One must not forget
<span class="math-container">$$y(x) = -\sqrt {(c-c^2 )(2\,x-1)}-cx+c \tag 2$$</span>
Among them two are trivial : </p>
<p><span class="math-container">$y(x)=0\quad$</span> corresponding to <span class="math-container">$c=0$</span> ,</p>
<p><span class="math-container">$y(x)=1-x\quad$</span> corresponding to <span class="math-container">$c=1$</span> .</p>
<p>The map drawn below shows the curves corresponding to Eqs.<span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span>. The small numbers written on the curves are the values of <span class="math-container">$c$</span>.</p>
<p>The envelopes of the set of curves are also solutions. There are four of them:</p>
<p>Two already given by WA : </p>
<p><span class="math-container">$$y(x) = \frac12$$</span>
<span class="math-container">$$y(x) =\frac12-x$$</span>
The third and fourth are debatable (not given by WA):
<span class="math-container">$$x(y)=\frac12$$</span>
<span class="math-container">$$x(y)=0$$</span>
In fact these solutions result from the transformation of the ODE:
<span class="math-container">$$x(2x-1)\left(\frac{dy}{dx}\right)^2-(2x-1)(2y-1)\frac{dy}{dx}+y(2y-1)=0.$$</span>
into :
<span class="math-container">$$x(2x-1)-(2x-1)(2y-1)\frac{dx}{dy}+y(2y-1)\left(\frac{dx}{dy}\right)^2=0.$$</span>
which avoids overlooking the solutions for which <span class="math-container">$\frac{dy}{dx}$</span> is infinite, i.e. the vertical lines <span class="math-container">$\frac{dx}{dy}=0$</span> at <span class="math-container">$x=\frac12$</span> and <span class="math-container">$x=0$</span>.</p>
<p><a href="https://i.stack.imgur.com/8Qj80.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8Qj80.gif" alt="enter image description here"></a></p>
|
972,530 | <blockquote>
<p>If $S_1 = \sqrt{2}$, and</p>
<p>$S_{n+1} = \sqrt{2 + \sqrt{S_n}}$ (n = 1,2,3....),</p>
<p>prove that $\{S_n\}$ converges, and that $S_n < 2$ for all $n \in \Bbb{N}$</p>
<p>This is one the questions from Principles of Mathematical Analysis by Rudin. I am not sure how to proceed with the solution.</p>
</blockquote>
| Paul | 17,980 | <p>Hints:</p>
<p><strong>1)</strong> $S_n < 2.$ Clearly, $S_1<2$. Suppose that $S_k <2$. Then $S_{k+1}=\sqrt{2 + \sqrt{S_k}} <\sqrt{2 + 2}=2.$</p>
<p><strong>2)</strong> $S_n \le S_{n+1}$. Clearly, $S_1<S_2$. Suppose that $S_{k-1} <S_k$. Then $$S_{k}=\sqrt{2 + \sqrt{S_{k-1}}} <\sqrt{2 +\sqrt{S_{k}}}=S_{k+1}.$$</p>
<p><strong>3)</strong> So its limit exists, say $S$. So, $S=\sqrt{2+\sqrt{S}}.$ </p>
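The hints can be checked numerically (a sketch I'm adding; a handful of iterations suffice before floating-point resolution is reached):

```python
import math

# Iterate S_{n+1} = sqrt(2 + sqrt(S_n)) from S_1 = sqrt(2): the terms
# increase, stay below 2, and approach the fixed point S = sqrt(2+sqrt(S)).
s = math.sqrt(2)
for _ in range(10):
    nxt = math.sqrt(2 + math.sqrt(s))
    assert s < nxt < 2          # monotone and bounded, as in hints 1) and 2)
    s = nxt
print(s)  # ≈ 1.8312, the fixed point
```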
|
381,067 | <p>I am asked this question:</p>
<blockquote>
<p>Prove that $x^5 - 5x + 1$ has no double roots in $\mathbb C[x]$. </p>
</blockquote>
<p>Now here's what I said:</p>
<blockquote>
<p>$p(x) \in \mathbb C[x]$ has no double roots if and only if $gcd(p,p') = 1$. Now we need to prove that. So:
$p(x) = x^5 - 5x + 1$ and $p'(x) = 5x^4 - 5$ and I start doing the Euclidean algorithm and I got to a point where I have to do $\frac{-4x+1}{ \frac{5}{4}x^3-5 }$, which we obviously can not do. </p>
</blockquote>
<p>What do we do in such a situation? What does that mean that while doing the Euclidean algorithm we can not continue dividing?</p>
| Community | -1 | <p>In the Euclidean algorithm, you divide the larger by the smaller. A cubic polynomial is larger than a linear polynomial.</p>
<p>Computing gcds by finding the prime factorization is often a feasible alternative. In this setting, prime factorization means "factor", and is almost synonymous with finding its roots.</p>
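In this case the gcd computation can even be bypassed (a sketch I'm adding): $p'(x)=5x^4-5$ vanishes exactly at the fourth roots of unity, so a double root of $p$ would have to be one of those four points.

```python
# A double root of p is a common root of p and p'.  Here p'(x) = 5x^4 - 5
# vanishes exactly at x = 1, -1, i, -i, so just evaluate p there.
def p(x):
    return x**5 - 5 * x + 1

candidates = [1, -1, 1j, -1j]
values = [p(x) for x in candidates]
print(values)  # [-3, 5, (1-4j), (1+4j)] -- none zero, so no double roots
```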
|
2,003 | <p>I use some custom shortcut keys in <code>KeyEventTranslations.tr</code>. One is for the <code>Delete All Output</code> function: </p>
<pre><code>Item[KeyEvent["w", Modifiers -> {Control}],
FrontEnd`FrontEndExecute[FrontEnd`FrontEndToken["DeleteGeneratedCells"]]]
</code></pre>
<p>or simply:</p>
<pre><code>Item[KeyEvent["w", Modifiers -> {Control}], "DeleteGeneratedCells"]
</code></pre>
<p>This works as expected, putting up the dialog: "Do you really want to delete all the output cells in the notebook?". Is there any way to set up <code>KeyEventTranslations.tr</code> that when I hit <kbd>Ctrl</kbd>+<kbd>w</kbd> the dialog is automatically acknowledged and I don't have to hit <kbd>Enter</kbd>? The same goes for the <code>Quit kernel</code> function, that also puts up a dialog.</p>
| István Zachar | 89 | <p>Under version 10, at least for the <strong>Delete All Output</strong> menu option, one doesn't have to hit <kbd>Enter</kbd> any more to make it effective. This is not a full answer but it certainly makes my life one keystroke easier.</p>
<p>This now works without putting up a confirmation dialog:</p>
<pre><code>FrontEndExecute@FrontEndToken@"DeleteGeneratedCells"
</code></pre>
<p>(Tested only under Win7 64-bit.)</p>
|
629,989 | <p>Given function sequence $\{f_n(x)\}^\infty$ defined as $f_n(x) = \frac{nx}{2 + n + x}. (0 \le x \le 1)$</p>
<p>I need to find the limit function and whether it converges uniformly or not uniformly.</p>
<p>I found that the limit is:</p>
<p>$$\lim_{n \to \infty} \frac{nx}{2 + n + x} = \lim_{n \to \infty} \frac{x}{\frac{2}{n} + 1 + \frac{x}{n}} = \frac{x}{0+1+0} = x.$$ </p>
<p>The book confirms that,</p>
<p>but it says that it converges uniformly to $x$.</p>
<p>Why? The limit depends on $x$, so how can the convergence be uniform?</p>
<p>Doesn't the definition say that a sequence of functions converges uniformly to $f(x)$ if the choice of $N$ depends only on $\varepsilon > 0$ (and not on $x$)?</p>
<p>Thanks in advance.</p>
| Haha | 94,689 | <p>The $f_n$ are differentiable. Use the fact that $f_n(x)\to x$ uniformly iff $\|f_n(x)-x\|_{\infty}\to 0$. You can find the maximum of $|f_n(x)-x|$ in terms of $n$ and show that this maximum goes to $0$ as $n\to \infty$.</p>
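Following this hint numerically (a sketch I'm adding): $f_n(x)-x = -x(2+x)/(2+n+x)$, whose absolute value is increasing on $[0,1]$, so the supremum is $3/(3+n)\to 0$.

```python
# sup-norm check: the maximum of |f_n(x) - x| on [0,1] is at x = 1,
# where it equals 3/(3+n), which tends to 0 independently of x.
def f(n, x):
    return n * x / (2 + n + x)

for n in (1, 10, 100, 1000):
    sup = max(abs(f(n, k / 1000) - k / 1000) for k in range(1001))
    print(n, sup, 3 / (3 + n))  # the grid maximum equals 3/(3+n)
```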
|
439,745 | <blockquote>
<p>Prove:$|x-1|+|x-2|+|x-3|+\cdots+|x-n|\geq n-1$</p>
</blockquote>
<p>example1: $|x-1|+|x-2|\geq 1$</p>
<p>my solution:(substitution)</p>
<p>$x-1=t,x-2=t-1,|t|+|t-1|\geq 1,|t-1|\geq 1-|t|,$</p>
<p>square,</p>
<p>$t^2-2t+1\geq 1-2|t|+t^2,\text{Since} -t\leq -|t|,$</p>
<p>so proved.</p>
<p><em>question1</em> : Is my proof right? Alternatives?</p>
<p>one reference answer: </p>
<p>$1-|x-1|\leq |1-(x-1)|=|1-x+1|=|x-2|$</p>
<p><em>question2</em> : prove:</p>
<p>$|x-1|+|x-2|+|x-3|\geq 2$</p>
<p>So I guess:( I think there is a name about this, what's that? wiki item?)</p>
<p>$|x-1|+|x-2|+|x-3|+\cdots+|x-n|\geq n-1$</p>
<p>How to prove this? This is <em>question3.</em> I doubt whether the two methods I used above may suit for this general case.</p>
<p>Of course, welcome any interesting answers and good comments.</p>
| Elias Costa | 19,266 | <p>After the answer of Thomas Andrews my answer appears to be dispensable, but I'll leave it here as a poor alternative. Here is a proof valid for any $x$ when $n$ is even. For the case of odd $n$, simply discard the term $|x-n|$.
\begin{align}
|x-1|+|x-2|+|x-3|+\ldots+|x-n|
=
&
|x-1|+|2-x|+|x-3|+
\\
+
&
\ldots+|(-1)^{n+1}x+(-1)^{n}n|
\\
\geq
&
\bigg[(1-x)+(x-2)+(3-x)+
\\
+
&
\ldots+(-1)^{n}x+(-1)^{n+1}n\bigg]
\\
=
&
(-1)^1+(-1)^{2}2+(-1)^{3}3+(-1)^{4}4+\ldots
\\
+
&
(-1)^{(n-1)}(n-1)+n(-1)^{n}
\\
=
&
-\frac{n}{2}+n(-1)^{n}= -\frac{n}{2}+n = \frac{n}{2}
\end{align}</p>
|
1,311,466 | <p>My concept of real numbers is not very clear. Please also explain the logic behind the question.
The expression is true for $19$; is it true for all the multiples? </p>
| DeepSea | 101,504 | <p>Observe that among the factors of $(3n)! = 1\cdot 2\cdot 3\cdots (3n)$ there are $n$ numbers divisible by $2$, namely $2,4,6,...,2n$, and $n$ numbers divisible by $3$, namely $3,6,9,...,3n$. Thus the product is divisible by both $3^n$ and $2^n$, and since $(2^n,3^n) = 1$, we have $6^n \mid (3n)!$</p>
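A brute-force check of the divisibility claim for small $n$ (my own sketch, using exact integer arithmetic):

```python
import math

# Verify 6^n | (3n)! directly for small n.
for n in range(1, 16):
    assert math.factorial(3 * n) % 6**n == 0
print("6^n divides (3n)! for n = 1..15")
```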
|
1,569,728 | <p>Find one $z\in \mathbb{C}$ in the inequality $|z-25i|\le 15$ that has the largest argument ($\arg (z)$)</p>
<p>The inequality is equivalent to $x^2+(y-25)^2\le 15^2$ that represents the set of points in the circle of radius $15$ and center coordinate $C(0,25)$.</p>
<p>In this set, how to find one complex number which has the largest argument?</p>
| Piquito | 219,998 | <p>It is clear that the largest argument corresponds to the red tangent below. The triangle involved is $5$ times the Pythagorean one of sides $(3,4,5)$, i.e. it has sides $(15,20,25)$. Thus $$z=-12+16i$$</p>
<p><a href="https://i.stack.imgur.com/lSzkg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lSzkg.png" alt="enter image description here"></a></p>
|
4,090,259 | <p>I began watching Gilbert Strang's lectures on Linear Algebra and soon realized that I lacked an intuitive understanding of matrices, especially as to why certain operations (e.g. matrix multiplication) are defined the way they are. Someone suggested to me 3Blue1Brown's video series (<a href="https://youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab" rel="nofollow noreferrer">https://youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab</a>) and it has helped immensely. However, it seems to me that they present matrices in completely different ways: 3Blue1Brown explains that they represent linear transformations, while Strang depicts matrices as systems of linear equations. What's the connection between these two different ideas?</p>
<p>Furthermore, I understand why operations on matrices are defined the way they are when we think of them as linear maps, but this intuition breaks when matrices are thought of in different ways. Since matrices are used to represent all sorts of things (linear transformations, systems of equations, data, etc.), how come operations that are seemingly defined for use with linear maps stay the same across all these different contexts?</p>
| CoveredInChocolate | 901,866 | <p>Not sure if I am addressing what you are really after, but I always like considering simple examples when there is something I don't understand, so let us use the same matrix in two different contexts. Here is an example of a linear transformation.
<span class="math-container">$$
A =
\begin{bmatrix}
2 & 0 \\
0 & 1
\end{bmatrix}
$$</span>
If you input some vector, say <span class="math-container">$[x_1, x_2] = [1, 1]$</span>, it transforms it by stretching it out in the <span class="math-container">$x_1$</span> direction into <span class="math-container">$[2, 1]$</span>.</p>
<p>The same matrix is used in a related linear equation. <span class="math-container">$Ax = [2,1]$</span>. What <span class="math-container">$x$</span> values gave the stretched vector <span class="math-container">$[2,1]$</span>?
<span class="math-container">$$
Ax = [2,1]
$$</span>
In matrix form.
<span class="math-container">$$
\begin{bmatrix}
2 & 0 \\
0 & 1
\end{bmatrix}
\begin{bmatrix}
x_1 \\
x_2
\end{bmatrix}
=
\begin{bmatrix}
2 \\
1
\end{bmatrix}
$$</span>
As a system of linear equations.
<span class="math-container">$$
2x_1 + 0 x_2 = 2 \\
0x_1 + x_2 = 1
$$</span></p>
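The two viewpoints can be played out in code (a sketch I'm adding; `apply` and `solve2x2` are helper names I'm introducing, not from the answer):

```python
# The same matrix A = [[2, 0], [0, 1]] in both roles.
A = [[2, 0], [0, 1]]

# 1) As a linear transformation: A stretches [1, 1] into [2, 1].
def apply(A, v):
    return [A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1]]

assert apply(A, [1, 1]) == [2, 1]

# 2) As a system of equations: solve A x = [2, 1] by Cramer's rule,
#    i.e. invert the transformation to recover the input vector.
def solve2x2(A, b):
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [(b[0]*A[1][1] - b[1]*A[0][1]) / det,
            (A[0][0]*b[1] - A[1][0]*b[0]) / det]

print(solve2x2(A, [2, 1]))  # [1.0, 1.0] -- the preimage of [2, 1]
```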
|
1,014,476 | <p>I pick 6 cards from a set of 13 (ace-king). If ace = 1 and jack,queen,king = 10 what is the probability of the sum of the cards being a multiple of 6? </p>
<p><strong>Tried so far:</strong>
I split the numbers into sets with values:
6n, 6n+1, 6n+2, 6n+3, 6n+4, 6n+5
like so:</p>
<p>{6}{1,7}{2,8}{3,9}{4,10,j,q,k}{5}</p>
<p>and then grouped the combinations that added to a multiple of 6:</p>
<p>(5c4)(1c1)(2c1) + (2c2)(2c2)(2c2) + (5c4)(2c2) + (5c2)(1c1)(1c1)(2c1)(2c1)
/ (13c6)</p>
<p>= 10/1716</p>
<p>I am almost certain I am missing combinations but am having trouble finding out which.</p>
| Ross Millikan | 1,827 | <p>Your combinations are $4\cdot 4+1\cdot 0+1\cdot 2, 2\cdot 1+2\cdot2+2\cdot3, 4 \cdot 4+2\cdot 1$, plus something that doesn't make sense because there are three places where you choose from $1$ and only $0,5$ qualify. The left number is the number of cards of that value $\pmod 6$. You could also have $3\cdot 4+1 \cdot 5+1 \cdot 1+1 \cdot 0, 3 \cdot 4+ 2 \cdot 2 + 1 \cdot 0$ and others.</p>
|
1,014,476 | <p>I pick 6 cards from a set of 13 (ace-king). If ace = 1 and jack,queen,king = 10 what is the probability of the sum of the cards being a multiple of 6? </p>
<p><strong>Tried so far:</strong>
I split the numbers into sets with values:
6n, 6n+1, 6n+2, 6n+3, 6n+4, 6n+5
like so:</p>
<p>{6}{1,7}{2,8}{3,9}{4,10,j,q,k}{5}</p>
<p>and then grouped the combinations that added to a multiple of 6:</p>
<p>(5c4)(1c1)(2c1) + (2c2)(2c2)(2c2) + (5c4)(2c2) + (5c2)(1c1)(1c1)(2c1)(2c1)
/ (13c6)</p>
<p>= 10/1716</p>
<p>I am almost certain I am missing combinations but am having trouble finding out which.</p>
| Satish Ramanathan | 99,745 | <p>Answer:<img src="https://i.stack.imgur.com/lcFvj.png" alt="enter image description here"></p>
<p>Repetitions of 10 are not the same, as they represent different cards. I have done it through brute force. Hopefully I have captured everything.</p>
<p>Good luck</p>
<p>Satish</p>
|
2,417,356 | <p>I found the following very nice post yesterday which presented the conditional expectation in a way which I found intuitive;</p>
<p><a href="https://math.stackexchange.com/questions/1492306/conditional-expectation-with-respect-to-a-sigma-algebra?noredirect=1&lq=1">Conditional expectation with respect to a $\sigma$-algebra</a>.</p>
<p>I wonder if there is a way to see that $E(X\mid \mathcal{F}_n)(\omega)=\frac 1 {P(E_i)} \int_{E_i}X \, dP$ if $\omega \in E_i$, could be regarded as a Radon–Nikodym derivative. I can't formally connect the dots with respect to, for example, the Wikipedia discussion,</p>
<p><a href="https://en.wikipedia.org/wiki/Conditional_expectation" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Conditional_expectation</a>.</p>
<p>I am missing the part where the measure gets "weighted" i.e somthing analogous to $\frac 1 {P(E_i)}$, in the Wikipedia article.</p>
<p>Update</p>
<p>It just hit me that if one divides the defining relation by the measure of the set, then one has</p>
<p>$\frac{1}{P(E_{i})} \int_{E_i}X \, dP=\frac{1}{P(E_{i})} \int_{E_i}E(X\mid \mathcal{F}_n) \, dP$ for all $E_{i}\in \mathcal{F}_{n}$. This looks like we have a function which agrees with the averages of $X$ a.e. on every set of the algebra $\mathcal{F}_{n}$. This is not quite what @martini writes, but maybe it is a reasonable way to look at it as well? It also looks like something which would fit into the Wikipedia discussion better, though it doesn't sit right on some other occasions.</p>
<p>So the question remains,</p>
<p>How do I think about this in the right way? If the second way is wrong, then how is the first consistent with the Wikipedia article?</p>
<p>My comment on Sangchul's answer will also help in understanding my troubles!</p>
| Sangchul Lee | 9,340 | <p>Assume that $X \geq 0$. Then $\nu(E) = \int_{E} X \, d\mathbb{P}$ defines a measure on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$. Of course, its Radon-Nikodym derivative is simply $d\nu/d\mathbb{P} = X$.</p>
<p>Now we restrict $\nu$ to the measure space $(\Omega, \mathcal{G}, \mathbb{P}|_{\mathcal{G}})$ where $\mathcal{G} \subset \mathcal{F}$ is a $\sigma$-subalgebra. Then the Radon-Nikodym derivative is</p>
<p>$$ \frac{d\nu|_{\mathcal{G}}}{d\mathbb{P}|_{\mathcal{G}}} = \mathbb{E}[X \mid \mathcal{G}]. $$</p>
<p>This is just a rephrasing of the definition of conditional expectation $\mathbb{E}[X\mid \mathcal{G}]$, which is defined as the $\mathcal{G}$-measurable random variable $\tilde{X}$ satisfying</p>
<p>$$ \forall E \in \mathcal{G}: \quad \int_{E} X \,d \mathbb{P} = \int_{E} \tilde{X} \,d \mathbb{P}. $$</p>
|
1,068,719 | <p>The problem is to prove that the quintic
$$x^5+10x^4+15x^3+15x^2-10x+1$$
is irreducible in the rationals. </p>
<p>I don't have much knowledge in group theory, and certainly not in Galois theory, and I'm pretty sure this problem can be solved without those tools. </p>
<p>I know about Eisenstein's criterion, but it cannot be applied to this particular quintic because $5$ does not divide the constant term. If we somehow manipulate the polynomial so that $5$ divides the constant term, we still have to make sure that $25$ doesn't. </p>
<p>So is there any other easy ("elementary") way to solve this?</p>
| Kaj Hansen | 138,538 | <p><strong>Hint</strong>:</p>
<p>First, prove that $f(x)$ is irreducible over a field $F$ $\iff$ $f(x+c)$ is also irreducible over $F$ for any $c \in F$.</p>
<p>Given this result, note that $f(x-1) = x^5 + 5x^4 - 15x^3 + 20x^2 - 30x + 20$.</p>
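The shift and the Eisenstein check can be verified mechanically (a sketch I'm adding; `shift` is a small helper I'm introducing, not a standard-library function):

```python
# Expand f(x - 1) for f(x) = x^5 + 10x^4 + 15x^3 + 15x^2 - 10x + 1 and
# run the Eisenstein check at p = 5 on the result.
def shift(coeffs, c):
    """Coefficients of f(x + c), lowest degree first, Horner-style."""
    out = [0] * len(coeffs)
    for a in reversed(coeffs):          # out <- out * (x + c) + a
        prev = out[:]
        for i in range(len(out)):
            out[i] = c * prev[i] + (prev[i - 1] if i > 0 else 0)
        out[0] += a
    return out

f = [1, -10, 15, 15, 10, 1]             # constant term first
g = shift(f, -1)                         # coefficients of f(x - 1)
print(g)  # [20, -30, 20, -15, 5, 1]

# Eisenstein at p = 5: 5 divides every non-leading coefficient, 5 does
# not divide the leading one, and 25 does not divide the constant term.
assert all(a % 5 == 0 for a in g[:-1])
assert g[-1] % 5 != 0 and g[0] % 25 != 0
```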
|
2,419,753 | <p>How would you approach to solve questions like:</p>
<p>$\sin(z)=\sin(2)$, $z$ is an arbitrary complex number.</p>
| Robert Israel | 8,508 | <p>By the laws of exponents
$$x^{10n} x^5 \left(x^{-5}\right)^n = x^{10 n + 5 - 5 n} = x^{5 + 5 n}$$
so you want
$$ x^{5+5n} = x^{-10}$$
Assuming $x$ is not $0$ or $\pm 1$, $x^t$ is a one-to-one function of $t$, so this will
imply $$5 + 5 n = -10$$</p>
|
4,638,393 | <p><a href="https://i.stack.imgur.com/SgVlq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SgVlq.png" alt="enter image description here" /></a></p>
<p>How does the author obtain formula (4)? From formula (2), I only get that <span class="math-container">$u\left(\frac{k}{n}+\frac{1}{n},\cdot\right)=\left(1-\frac{c}{n}\partial_x\right)u\left(\frac{k}{n},\cdot\right)$</span> but I don't see how the exponent <span class="math-container">$k$</span> appears as in (4).</p>
| Abolfazl Chaman motlagh | 938,462 | <p>For a single step it is what you said:
<span class="math-container">$$
u(t + \frac{1}{n},.) \approx (1-\frac{c}{n} \partial_x)u(t,.)
$$</span>
so if you start from <span class="math-container">$t=0$</span> and use this approximation <span class="math-container">$k$</span> times, each time with step size <span class="math-container">$\Delta t = 1/n$</span>, you get:
<span class="math-container">$$
u(0+\frac{k}{n},.) \approx (1-\frac{c}{n} \partial_x)u(\frac{k-1}{n},.) \\
\approx (1-\frac{c}{n} \partial_x)(1-\frac{c}{n} \partial_x)u(\frac{k-2}{n},.)
\approx \dots \approx (1-\frac{c}{n} \partial_x)^k u(0,.) = (1-\frac{c}{n} \partial_x)^k g(.)
$$</span></p>
|
36,735 | <p>In Peter J. Cameron's book "Permutation Groups" I found the following quote</p>
<blockquote>
<p>It is a slogan of modern enumeration theory that the ability to count a set is closely related to the ability to pick a random element from that set (with all elements equally likely).</p>
</blockquote>
<p>Indeed, one can count and sample uniformly from labeled trees, spanning trees, spanning forests, dimer models, Young tableaux, plane partitions, etc. However, one can't do either of these very efficiently with groups, for example. My question is whether one can make this into a rigorous statement, perhaps through complexity theory. That is, if I have an algorithm to produce a uniform sample from a set of objects, can I somehow come up with an efficient way to count them, or vice versa? </p>
<p>Does this slogan have a standard name? Are there any references?</p>
| Robin Kothari | 8,075 | <p>Yes, there is formal way of saying this using complexity theory. I think the statement is something like: For all self-reducible relations, the problems of approximate sampling and approximate counting are equivalent (with polynomial time reductions). More specifically, for such problems, the existence of an FPRAS (fully polynomial-time randomized approximation scheme) implies the existence of an FPAUS (fully polynomial-time almost uniform sampler) and vice versa.</p>
<p>References:</p>
<ul>
<li>Jerrum, Valiant, and Vazirani, "Random generation of combinatorial structures from a uniform distribution"</li>
<li>Sinclair and Jerrum, "Approximate counting, uniform generation and rapidly mixing Markov chains"</li>
</ul>
<p>Alternately, just search for "approximate counting", "approximate sampling" and "self-reducible" on google, and you'll find lots of lecture notes and presentations explaining the ideas.</p>
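<p>A toy illustration of the counting-to-sampling direction (my own sketch, not taken from the cited papers; the choice of structure — binary strings with no two adjacent 1s — is arbitrary): the problem is self-reducible, so the counting DP immediately yields an exact uniform sampler, with each bit drawn with probability proportional to the number of valid completions.</p>

```python
import random
from functools import lru_cache

@lru_cache(maxsize=None)
def completions(r, prev):
    """Number of ways to extend by r more bits with no '11', given the previous bit."""
    if r == 0:
        return 1
    total = completions(r - 1, 0)            # append a 0
    if prev == 0:
        total += completions(r - 1, 1)       # a 1 is allowed only after a 0
    return total

def sample(n, rng):
    """Exact uniform sample via the counting-to-sampling reduction."""
    bits, prev = [], 0
    for r in range(n - 1, -1, -1):           # r = positions remaining after this bit
        w0 = completions(r, 0)
        w1 = completions(r, 1) if prev == 0 else 0
        prev = 0 if rng.randrange(w0 + w1) < w0 else 1
        bits.append(prev)
    return ''.join(map(str, bits))

rng = random.Random(0)
assert completions(10, 0) == 144             # Fibonacci count F(12)
draws = [sample(10, rng) for _ in range(3000)]
assert all('11' not in s for s in draws)     # every draw is a valid string
assert len(set(draws)) >= 140                # 3000 uniform draws cover nearly all 144
```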
|
3,192,068 | <p><span class="math-container">\begin{matrix}
1 & 2 & 0 & 1 \\
2 & 4 & 1 & 4 \\
3 & 6 & 3 & 9 \\
\end{matrix}</span>
I have tried to transpose it and then reduce it by row echelon form and I get zeros on the last two rows. But I can't grasp whether I should be doing that or doing it another way.</p>
| Deepak | 151,732 | <p>Just to explain both my comment and Kavi Rama Murthy's,</p>
<p>The distance is equal to the arc length of the curve. I hope you can see this. Just sketch out any arbitrary curve and pretend that it's a particle (or a bumblebee!) moving around on the Cartesian plane. The arc length of the curve is the distance travelled by the particle.</p>
<p>The arc length is given by <span class="math-container">$\displaystyle \int ds = \int \sqrt{dx^2 + dy^2}$</span></p>
<p>Here <span class="math-container">$dx = 3\cos 3t dt$</span> and <span class="math-container">$dy = 2dt$</span>, so the arc length is <span class="math-container">$\displaystyle \int_{-\pi}^{\pi}\sqrt{9\cos^2 3t + 4}dt$</span>, where the appropriate bounds have been imposed in the last step.</p>
<p>The problem now is evaluating this integral. The indefinite integral has no elementary form and involves elliptic integrals. Sometimes, a definite integral can be calculated "neatly" and "cleverly" with symmetry and appropriate substitutions if the bounds are just right, but I can't see a way to do so here (and neither can Wolfram Alpha). In any case, the answer is here:</p>
<p><a href="https://www.wolframalpha.com/input/?i=integrate+sqrt(9cos%5E2(3x)+%2B+4)+from+-pi+to+pi" rel="nofollow noreferrer">https://www.wolframalpha.com/input/?i=integrate+sqrt(9cos%5E2(3x)+%2B+4)+from+-pi+to+pi</a></p>
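<p>For a concrete number without Wolfram Alpha, here is a composite-Simpson estimate (a sketch; the parametrization $x=\sin 3t$, $y=2t$ is my reading of the stated differentials $dx=3\cos 3t\,dt$, $dy=2\,dt$, and the bounds $2\le\sqrt{9\cos^2 3t+4}\le\sqrt{13}$ give a built-in sanity check):</p>

```python
import math

def speed(t):                            # |r'(t)| for x = sin(3t), y = 2t
    return math.sqrt(9 * math.cos(3 * t) ** 2 + 4)

a, b, N = -math.pi, math.pi, 2000        # N even -> composite Simpson's rule
h = (b - a) / N
s = speed(a) + speed(b)
for i in range(1, N):
    s += (4 if i % 2 else 2) * speed(a + i * h)
arc_length = s * h / 3

# rigorous bracket from the pointwise bounds on the integrand
assert 2 * (2 * math.pi) <= arc_length <= math.sqrt(13) * (2 * math.pi)
assert abs(arc_length - 17.97) < 0.1     # numeric value is about 17.97
```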
|
1,809,022 | <p>I've been working on some very basic differential equations, but I came to a
problem where I need to figure out the behavior of $y(t)$ as $t \rightarrow
\infty$ Given that
$$\frac{dy}{dt} = \frac{3t}{1+2e^{y}}.$$
In this case, it was very apparent to me that I would not be able to solve for
a simple solution for $y(t)$ due to the term $1+2e^y$ in the denominator.
However, solving for this was rather straightforward:
$$\frac{dy}{dt}(1+2e^y) = 3t$$
$$\implies \frac{dy}{dt}+2e^y\frac{dy}{dt} = 3t$$
$$\implies \int \frac{dy}{dt}+2e^y\frac{dy}{dt} dt= \int 3t dt$$
$$\implies y(t) + 2e^{y(t)} = \frac{3}{2}t^2 + C.$$
However, it now has become quite difficult for me to figure out how to figure
out the behavior of $y(t)$ as $t \rightarrow \infty$. Someone suggested that
I should look for a particular inequality, but I am not sure how I could
manipulate the right-hand side to provide me with the desired information.
Any suggestions on this?</p>
| Doug M | 317,162 | <p>Since $y'>0$ when $t>0$, $y(t)$ is strictly increasing for all $t>0$.</p>
<p>If $y(t)$ were bounded above, say $y(t)\le M$, then $y'(t)=\frac{3t}{1+2e^{y}}\ge\frac{3t}{1+2e^{M}}$, and integrating this lower bound would force $y(t)\to\infty$, contradicting the bound. Hence $y(t)\to\infty$ as $t\to\infty$.</p>
|
1,809,022 | <p>I've been working on some very basic differential equations, but I came to a
problem where I need to figure out the behavior of $y(t)$ as $t \rightarrow
\infty$ Given that
$$\frac{dy}{dt} = \frac{3t}{1+2e^{y}}.$$
In this case, it was very apparent to me that I would not be able to solve for
a simple solution for $y(t)$ due to the term $1+2e^y$ in the denominator.
However, solving for this was rather straightforward:
$$\frac{dy}{dt}(1+2e^y) = 3t$$
$$\implies \frac{dy}{dt}+2e^y\frac{dy}{dt} = 3t$$
$$\implies \int \frac{dy}{dt}+2e^y\frac{dy}{dt} dt= \int 3t dt$$
$$\implies y(t) + 2e^{y(t)} = \frac{3}{2}t^2 + C.$$
However, it now has become quite difficult for me to figure out how to figure
out the behavior of $y(t)$ as $t \rightarrow \infty$. Someone suggested that
I should look for a particular inequality, but I am not sure how I could
manipulate the right-hand side to provide me with the desired information.
Any suggestions on this?</p>
| Pipicito | 93,689 | <p>I will give you a precise proof describing the behaviour of $y(t)$ when $t$ goes to $+\infty$. Observe that no matter what your initial condition at $t=0$ is, you will have $y'(t)>0$ for $t>0$. You get this information by checking the sign of the right hand side of $$\frac{dy}{dt} = \frac{3t}{1+2e^{y}}$$</p>
<p>The limit $\lim_{t\to +\infty}{y(t)}$ exists because $y$ is a monotone function and, because $y$ is increasing, it can be finite or $+\infty$. Let's prove that the finite case can't happen. Assume on the contrary that $\lim_{t\to +\infty}{y(t)}$ is finite. This implies that $1+2e^{y(t)}<M$ for every positive $t$ and certain positive constant $M$. So $$\frac{dy}{dt} = \frac{3t}{1+2e^{y}} > \frac{3t}{M}$$
for every positive $t$. Integrating both sides from $0$ to $t$ the inequality is preserved and you get $$y(0) + \frac{1}{2M} t^2 \leq y(t)$$ and that means that $\lim_{t\to +\infty}{y(t)} = +\infty$, a contradiction. So the limit $\lim_{t\to +\infty}{y(t)}$ is not finite and thus $\lim_{t\to +\infty}{y(t)} = +\infty$.</p>
<p><strong>====== EDIT ======</strong></p>
<p>Once you have proven $\lim_{t\to +\infty}{y(t)} = +\infty$, you might want to quantify that claim. Now is when your prior work comes in handy. Consider $$ y(t) + 2e^{y(t)} = \frac{3}{2}t^2 + C$$ where $C$ depends on the initial condition. Because $\lim_{t\to +\infty}{y(t)} = +\infty$, you know $y(t)$ is positive for big enough $t$ so $$ 2e^{y(t)} < y(t) + 2e^{y(t)} = \frac{3}{2}t^2 + C$$ and then $$ e^{y(t)} < \frac{3}{4}t^2 + \frac{C}{2}.$$ From this results $$ y(t) < \log{\big( \frac{3}{4}t^2 + \frac{C}{2} \big)}$$ for big enough $t$.</p>
<p>Similarly you have $$ \frac{3}{2}t^2 + C = y(t) + 2e^{y(t)} < 3e^{y(t)}$$ for big enough $t$. You can conclude $$ \log{\big( \frac{1}{2}t^2 + \frac{C}{3} \big)} < y(t)$$ for big enough $t$.</p>
<p>Combine those bounds to get</p>
<p>$$\log(\frac{1}{2}) + 2\log{\big(t \big)} + \log{\big( 1 + \frac{2C}{3t^2} \big)} < y(t) < \log(\frac{3}{4}) + 2\log{\big(t \big)} + \log{\big( 1 + \frac{2C}{3t^2} \big)}$$ for big enough $t$. This is a much more precise claim, giving you the order of $y$ at $+\infty$. We can say that $$\lim_{t\to +\infty}{\frac{y(t)}{2\log{ \big( t \big) }}} = 1$$ to simplify.</p>
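<p>These bounds can be checked numerically (a sketch; the initial condition $y(0)=0$, which gives $C=2$, and the evaluation point $t=10^6$ are arbitrary choices): solve the implicit relation $y+2e^{y}=\tfrac32 t^2+C$ by bisection at a large $t$ and compare with $2\log t$.</p>

```python
import math

def y_of(t, C=2.0):
    """Solve y + 2*e^y = 1.5*t**2 + C by bisection (the left side is increasing)."""
    target = 1.5 * t * t + C
    lo, hi = 0.0, 200.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid + 2 * math.exp(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t = 1e6
y = y_of(t)
# the sandwich bounds from the answer, with C = 2
assert math.log(0.5 * t * t + 2 / 3) < y < math.log(0.75 * t * t + 1.0)
ratio = y / (2 * math.log(t))
assert 0.95 < ratio < 1.0            # ratio tends to 1 as t grows
```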
|
3,547,384 | <p>I saw this equation<span class="math-container">$$S(q)=\int_a^bL(t,q(t),\dot q(t))dt$$</span>
in <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Lagrange_equation" rel="nofollow noreferrer">wikipedia</a>.</p>
<p>So I would think that <span class="math-container">$f(x,y)$</span> must be equal to <span class="math-container">$f(x,y,t)$</span> if <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are the function of <span class="math-container">$t$</span>. So let's take an experiment.</p>
<p>Firstly, just let <span class="math-container">$f = f(x,y)$</span>, where <span class="math-container">$x = x(t)$</span>, <span class="math-container">$y=y(t)$</span>, so <span class="math-container">$f=f\left(x(t),y(t)\right)$</span></p>
<p><span class="math-container">$$\cfrac{df}{dt}=\cfrac{\partial f}{\partial x}\cfrac{dx}{dt}+\cfrac{\partial f}{\partial y}\cfrac{dy}{dt}\qquad (1)$$</span></p>
<p>Secondly, since you all see that <span class="math-container">$f$</span> is actually also a function of <span class="math-container">$t$</span>, so we have</p>
<p><span class="math-container">$$f = f(x,y)=f(x,y,t)\qquad (2)$$</span></p>
<p>Now, </p>
<p><span class="math-container">$$\cfrac{df}{dt}=\cfrac{\partial f}{\partial x}\cfrac{dx}{dt}+\cfrac{\partial f}{\partial y}\cfrac{dy}{dt}+\cfrac{\partial f}{\partial t}\cfrac{dt}{dt}\qquad (3)$$</span></p>
<p><span class="math-container">$$\cfrac{df}{dt}=\cfrac{\partial f}{\partial x}\cfrac{dx}{dt}+\cfrac{\partial f}{\partial y}\cfrac{dy}{dt}+\cfrac{\partial f}{\partial t}\qquad (4)$$</span></p>
<p>Because it is always correct that <span class="math-container">$\cfrac{df}{dt}=\cfrac{\partial f}{\partial t}$</span>,</p>
<p><span class="math-container">$$\cfrac{\partial f}{\partial x}\cfrac{dx}{dt}+\cfrac{\partial f}{\partial y}\cfrac{dy}{dt}=0\qquad (5)$$</span></p>
<p>Substitute (5) into (1),</p>
<p><span class="math-container">$$\cfrac{df}{dt}=0\qquad (6)$$</span></p>
<p>This is not a correct outcome, since (6) are not always equal to zero for all cases.</p>
<p>So what's wrong???</p>
| Arthur | 15,500 | <blockquote>
<p>Because it is always correct that <span class="math-container">$\cfrac{df}{dt}=\cfrac{\partial f}{\partial t}$</span>,</p>
</blockquote>
<p>No, that's not correct, at all. At least not in this context.</p>
<p><span class="math-container">$\dfrac{\partial f}{\partial t}$</span> means "the partial derivative of <span class="math-container">$f$</span>, with respect to its third variable, which we happen to call <span class="math-container">$t$</span>". If you have taken a function of <span class="math-container">$x$</span> and <span class="math-container">$y$</span>, then let <span class="math-container">$x$</span> and <span class="math-container">$y$</span> be functions of <span class="math-container">$t$</span>, and now see <span class="math-container">$f$</span> as a function of <span class="math-container">$x,y,t$</span>, then the third variable doesn't really appear. So <span class="math-container">$\dfrac{\partial f}{\partial t}=0$</span>.</p>
<p><span class="math-container">$\frac{df}{dt}$</span> on the other hand, means "rewrite (or at least reconceptualize) <span class="math-container">$f$</span> as a function solely of the variable <span class="math-container">$t$</span>, using that <span class="math-container">$x,y$</span> are functions of <span class="math-container">$t$</span>, then differentiate with respect to this <span class="math-container">$t$</span>."</p>
<p>These are completely different things.</p>
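<p>The distinction can be checked numerically (a small sketch; the test function $f(x,y)=xy^2+\sin x$ with $x(t)=\cos t$, $y(t)=t^2$ and the point $t_0=0.7$ are arbitrary choices): the total derivative $df/dt$ matches the sum of the two chain-rule terms and is plainly nonzero, while $\partial f/\partial t$ is $0$ because $f$ has no explicit third variable.</p>

```python
import math

def f(x, y):                 # f has no explicit t-dependence, so ∂f/∂t = 0
    return x * y ** 2 + math.sin(x)

def x_of(t): return math.cos(t)
def y_of(t): return t ** 2

t0, h = 0.7, 1e-5
g = lambda t: f(x_of(t), y_of(t))
total = (g(t0 + h) - g(t0 - h)) / (2 * h)          # df/dt, by central difference

x, y = x_of(t0), y_of(t0)
fx = y ** 2 + math.cos(x)                          # ∂f/∂x
fy = 2 * x * y                                     # ∂f/∂y
chain = fx * (-math.sin(t0)) + fy * (2 * t0)       # ∂f/∂x·dx/dt + ∂f/∂y·dy/dt

assert abs(total - chain) < 1e-6                   # df/dt equals the chain rule sum...
assert abs(total) > 0.1                            # ...and is clearly not 0 (= ∂f/∂t)
```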
|
2,646,251 | <p>Given the linear system in $\mathbb{Z_3}$:</p>
<p>$$
\left\{
\begin{array}{c}
a+b+c+d=1 \\
b+c+e=2 \\
a+2e=0
\end{array}
\right.
$$</p>
<p>I used the row reduction with matrices and I got:</p>
<p>$$
\left\{
\begin{array}{c}
a+b+c+d=1 \\
b+c+e=2 \\
d=2
\end{array}
\right.
$$</p>
<p>But now I don't know how to find the solutions.</p>
| Bernard | 202,857 | <p>I'd use the symbols $0,1,-1$ for the elements of $\mathbf Z/3\mathbf Z$. Here's how to put the augmented matrix in reduced row echelon form:
\begin{align}
&\begin{bmatrix}
1&1&1&1&0&\hspace{-0.6em}|\phantom{-}~1\\
0&1&1&0&1&\hspace{-0.6em}|\:{-}1 \\1&0&0&0&-1&\hspace{-0.6em}|\phantom{-}~0
\end{bmatrix}\xrightarrow{R_1\leftarrow R_1-R_2}
\begin{bmatrix}
1&0&0&1&-1&\hspace{-0.6em}|\:{-}1\\
0&1&1&0&1&\hspace{-0.6em}|\,{-}1 \\1&0&0&0&-1&\hspace{-0.6em}|\phantom{-}~0
\end{bmatrix}\\[1ex]
\xrightarrow[R_3\leftarrow R_1-R_3]{}
&\begin{bmatrix}
1&0&0&1&-1&\hspace{-0.6em}|\:{-}1\\
0&1&1&0&1&\hspace{-0.6em}|\,{-}1 \\0&0&0&1&0&\hspace{-0.6em}|\:{-}1
\end{bmatrix}
\xrightarrow{R_1\leftarrow R_1-R_3}
\begin{bmatrix}
\color{red}1&\color{red}0&0&\color{red}0&-1&\hspace{-0.6em}|\:\phantom{-}0\\
\color{red}0&\color{red}1&1&\color{red}0&1&\hspace{-0.6em}|\,{-}1 \\\color{red}0&\color{red}0&0&\color{red}1&0&\hspace{-0.6em}|\:{-}1
\end{bmatrix}
\end{align}
Thus the solution are:
$$\begin{cases}
a=e\\b=-1-c-e\\d=-1
\end{cases}\enspace\text{or, in vector form:}\quad
\begin{bmatrix}a\\b\\c\\d\\e\end{bmatrix}=\begin{bmatrix}\phantom{-}0\\-1\\\phantom{-}0\\-1\\\phantom{-}0
\end{bmatrix}-c\begin{bmatrix}\phantom{-}0\\\phantom{-}1\\-1\\\phantom{-}0\\\phantom{-}0\end{bmatrix} +e\begin{bmatrix}\phantom{-}1\\-1\\\phantom{-}0\\\phantom{-}0\\\phantom{-}1
\end{bmatrix}.$$</p>
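<p>The solution set can be confirmed by brute force over $\mathbb Z_3$ (a quick sketch, not part of the answer; note $-1\equiv 2\pmod 3$, so $d=-1$ appears as $d=2$):</p>

```python
from itertools import product

def satisfies(a, b, c, d, e):
    """The three original equations, read modulo 3."""
    return ((a + b + c + d) % 3 == 1 and
            (b + c + e) % 3 == 2 and
            (a + 2 * e) % 3 == 0)

brute = {v for v in product(range(3), repeat=5) if satisfies(*v)}

# Bernard's parametrization: a = e, b = -1 - c - e, d = -1 (= 2), with c, e free
param = {(e, (-1 - c - e) % 3, c, 2, e) for c in range(3) for e in range(3)}

assert brute == param and len(brute) == 9   # 3 choices for c times 3 for e
```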
|
1,662,090 | <p>First of all, hi. I am new here.</p>
<p>Let$$X_1,\dots, X_n$$
be i.i.d. exponential random variables.</p>
<p>$$
Pr({\max X_n}>{(\sum X_n-\max X_n ) }) = ?
$$</p>
<p>I think we should take integrals on exponential distribution functions over corresponding intervals but I could not make it work.</p>
<p>Thanks.</p>
<p>edit: my initial work was similar to (and a more generalized version of) problem 6.45 in <a href="http://www.stat.washington.edu/~hoytak/teaching/current/stat395/" rel="nofollow">http://www.stat.washington.edu/~hoytak/teaching/current/stat395/</a></p>
<p>Instead of the 1s, as in the uniform distribution, I tried to put in the exponential pdf, but nothing good seemed to come out of it.</p>
<p>2nd edit: Thanks a lot to all of you, especially sinbadh and BGM; you spent much time on this. I could not write out my initial work thoroughly because I am just getting used to the LaTeX format. Hopefully, after earning some reputation, I can start giving upvotes.</p>
| stity | 285,341 | <p>$75583$ in base $9$ is $50052=2^2*3*43*97$ in base 10</p>
<p>The unknown base $b$ is at least $5$ since $402$ start with a $4$</p>
<p>If $b>10$, then $402_b*302_b \geq (11*11*4+2)*(11*11*3+2)=177390 > 50052$, so $b\leq 10$.</p>
<p>$402_b =2 \mod(b)$ and $302_b=2 \mod(b)$ so $75583_9=50052_{10}=4\mod(b)$</p>
<p>$$50052=2\mod(5)$$
$$50052=0\mod(6)$$
$$50052=2\mod(7)$$
$$50052=4\mod(8)$$
$$50052=3\mod(9)$$
$$50052=2\mod(10)$$</p>
<p>So $b=8$</p>
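<p>This can be confirmed by brute force over the candidate bases (a sketch; Python's <code>int(s, base)</code> performs the base conversion):</p>

```python
target = int('75583', 9)
assert target == 50052                    # the base-10 value computed above

# only bases 5..10 are possible, as argued above
matches = [b for b in range(5, 11)
           if int('402', b) * int('302', b) == target]
assert matches == [8]

assert int('402', 8) == 258 and int('302', 8) == 194   # and 258 * 194 == 50052
```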
|
450,785 | <p>I want to obtain the formula for binomial coefficients in the following way: elementary ring theory shows that $(X+1)^n\in\mathbb Z[X]$ is a degree $n$ polynomial, for all $n\geq0$, so we can write</p>
<p>$$(X+1)^n=\sum_{k=0}^na_{n,k}X^k\,,\ \style{font-family:inherit;}{\text{with}}\ \ a_{n,k}\in\mathbb Z\,.$$</p>
<p>Using the fact that $(X+1)^n=(X+1)^{n-1}(X+1)$ for $n\geq1$ and the definition of product of polynomials, we obtain the following recurrence relation for all $n\geq1$:</p>
<p>$$a_{n,0}=a_{n,n}=1;\ a_{n,k}=a_{n-1,k}+a_{n-1,k-1}\,,\ \style{font-family:inherit;}{\text{for}}\ k=1,\dots,n-1\,.$$</p>
<p>I want to know if there is a way to manipulate this recurrence in order to obtain directly the values of the coefficients $a_{n,k}$, namely $a_{n,k}=\binom nk=\frac{n!}{k!(n-k)!}$. </p>
<p>Note that the usual approach via generating functions definitely will not work, at least <strong>in the spirit of my question</strong>, because this method only works when we do know in advance the coefficients of the generating function (either by the "number of $k$-subsets" argument, or Maclaurin series for $(X+1)^n$, or anything else) and this is <em>precisely</em> what I intend to avoid.</p>
<p>This question is closely related to a recent <a href="https://math.stackexchange.com/questions/449834/explicit-formula-for-bernoulli-numbers-by-using-only-the-recurrence-relation">question of mine</a>. Actually the same question, with Bernoulli numbers instead of binomial coefficients.</p>
<p><strong>EDIT</strong></p>
<p>I do not consider as a valid manipulation the following "magical" argument: 'the sequence $(b_{n,k})$ given by $b_{n,k}=\frac{n!}{k!(n-k)!}$ obeys the same recurrence and initial conditions as $(a_{n,k})$, so $a_{n,k}=b_{n,k}$ for all $n,k$.' How did you obtain the formula for the $b_{n,k}$ in the first place? Okay, you can go through the "counting subsets" argument, but this is precisely what I don't want to do. The same applies to my related question about Bernoulli numbers.</p>
| Marc van Leeuwen | 18,880 | <p>If you define $a_{n,k}=\binom nk$ to be the coefficient of $X^k$ in $(1+X)^n$ (with no combinatorial meaning attached), then you easily find $\binom nk=\binom {n-1}{k-1}+\binom{n-1}k$ for all $n,k\geq1$ (as you indicated in the question). Also $\binom n0=1$ for all $n\geq0$, and $\binom0k=0$ for all $k>0$. Expanding the second term of the recurrence relation recursively, with $\binom0k=0$ as terminating clause, gives
$$
\binom n k=\sum_{0\leq i<n}\binom i{k-1}
\qquad\text{for $k\geq1$ and all $n\geq0$.}
$$
Now from the fact that $n\mapsto\binom n0$ is a polynomial function (of $n\in\mathbf N$) of degree$~0$ (namely the constant function$~1$) it follows easily by induction that $n\mapsto\binom nk$ is a polynomial function of degree$~k$ (summing a polynomial function of$~n$ increases the degree by$~1$). Moreover the first $k$ values $\binom ik$ for $0\leq i<k$ of the polynomial function are all$~0$, so the polynomial has roots $0,1,2,\ldots,k-1$. It therefore necessarily takes the form $\binom nk=c_k(n-0)(n-1)\ldots(n-(k-1))$ for some constant $c_k$. Using the recursion (at some point one does need to compute <em>something</em>) the first nonzero term $\binom kk$ of the sequence $n\mapsto\binom nk$ can be shown to be$~1$ for every$~k$, and it follows that $c_k=1/(k(k-1)(k-2)\ldots(k-(k-1)))=\frac1{k!}$. Then
$$
\binom nk = \frac{n(n-1)\ldots(n-(k-1))}{k!},
$$
which you may, if you feel so inclined, write as $\binom nk = \frac{n!}{k!(n-k)!}$ as long as $k\leq n$.</p>
<p>Look mom, no counting!</p>
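<p>The derivation can be sanity-checked mechanically (a sketch, not part of the answer): build $a_{n,k}$ from nothing but the recurrence $a_{n,k}=a_{n-1,k}+a_{n-1,k-1}$ with $a_{n,0}=1$, and compare against $n!/(k!\,(n-k)!)$.</p>

```python
from math import factorial

N = 12
a = [[0] * (N + 2) for _ in range(N + 1)]   # a[n][k] = 0 outside 0 <= k <= n
for n in range(N + 1):
    a[n][0] = 1                             # boundary: a_{n,0} = 1
    for k in range(1, n + 1):
        a[n][k] = a[n - 1][k] + a[n - 1][k - 1]   # the Pascal recurrence

for n in range(N + 1):
    assert a[n][n] == 1                     # a_{n,n} = 1 comes out of the recurrence
    for k in range(n + 1):
        assert a[n][k] == factorial(n) // (factorial(k) * factorial(n - k))
```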
|
1,135,045 | <p>I need to compute
\begin{align}
S = \sum_{k=-\infty}^j \sum_{m=-1}^2 w_{k,m} f_{k+m-1}
\end{align}
but I only want to access the elements of $f$ once, so I would prefer something like
\begin{align}
\sum_k f_k \sum_m ...
\end{align}
Here is what I did: substitute $l=m-1+k$ to get
\begin{align}
S &= \sum_{k=-\infty}^j \sum_{m=-1}^2 w_{k,m} f_{k+m-1}
\\
&=\sum_{k=-\infty}^j \sum_{l+1-k=-1}^2 w_{k,l+1-k} f_{l}
\\
&=\sum_{k=-\infty}^j \sum_{l=k-2}^{k+1} w_{k,l+1-k} f_{l}
\end{align}
But when I try to get $f_l$ out of the inner sum I'm messing something up.
Can anyone produce the correct sum for looping over $f$ only once? Thanks in advance.</p>
<p>Since the highest index of $f$ that is accessed is $j+1$, I assume that the outer sum should be
\begin{align}
\sum_{k=-\infty}^{j+1} f_k
\end{align}</p>
| PdotWang | 212,686 | <p>Let us assume that:
$$S=\sum_{l=-\infty}^{j+1}N_{l} \cdot f_{l}$$
Then we need to figure out what $N_l$ is: it collects the weights of all pairs $(k,m)$ that contribute to $f_l$,
$$N_l=\sum_{\substack{-\infty<k\leq j,\ -1\leq m\leq 2\\ k+m-1=l}} \omega_{k,m}$$
Solving the constraint
$$l=k+m-1$$
for $k$ gives
$$k=l-m+1$$
Therefore,
$$N_l=\sum_{m=-1}^{2}\omega_{l-m+1,m}$$
where, near the upper boundary, only the terms with $l-m+1\leq j$ (that is, $m\geq l-j+1$) are kept.</p>
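<p>The reindexing can be verified numerically (my own sketch; the weights are truncated to a finite range of $k$ so the infinite sum is computable, and the boundary constraint $k=l-m+1\le j$ is enforced — the cutoff $k\ge-20$ and $j=5$ are arbitrary choices):</p>

```python
import random

rng = random.Random(1)
j = 5
ks = range(-20, j + 1)                 # truncation: w is taken to vanish for k < -20
w = {(k, m): rng.random() for k in ks for m in range(-1, 3)}
f = {l: rng.random() for l in range(-25, j + 2)}

# the original double sum, accessing f[k + m - 1] repeatedly
direct = sum(w[k, m] * f[k + m - 1] for k in ks for m in range(-1, 3))

# reindexed: l = k + m - 1 ranges over -22..j+1, and k = l - m + 1 must stay in range
reindexed = 0.0
for l in range(-22, j + 2):
    N_l = sum(w[l - m + 1, m] for m in range(-1, 3) if -20 <= l - m + 1 <= j)
    reindexed += N_l * f[l]            # each f[l] is now accessed exactly once

assert abs(direct - reindexed) < 1e-9
```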
|
2,634,791 | <blockquote>
<p>How can I show that the map $f: GL_n(\mathbb R)\to GL_n(\mathbb R)$ defined by $f(A)=A^{-1}$ is continuous?</p>
</blockquote>
<p>The space $GL_n(\mathbb R)$ is given the operator norm and so I want to show for all $\epsilon$ there exists $\delta$ such that $\|A-B\|<\delta \implies \|A^{-1}-B^{-1}\|<\epsilon$.</p>
| caffeinemachine | 88,582 | <p>One can use the Cayley-Hamilton Theorem, which states that if $$p(x)=\det(xI-M)= x^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0$$ is the characteristic polynomial of $M$, then </p>
<p>$$M^n+a_{n-1}M^{n-1}+\cdots+a_1M+a_0I=0$$.</p>
<p>If $M\in GL_n(\mathbf R)$, then note that $a_0$ above cannot be $0$, for otherwise the determinant of the matrix $M$ would be $0$. Also note that each $a_i$ is a polynomial in the entries of $M$. So we can think of each $a_i$ as a map $GL_n(\mathbf R)\to \mathbf R$, which takes a matrix $M$ to $a_{i, M}$, and we know that this map is continuous.</p>
<p>Now note that for any matrix $M\in GL_n(\mathbf R)$, by the Cayley-Hamilton Theorem stated above,</p>
<p>$$\frac{-(M^{n-1}+a_{n-1, M}M^{n-2}+ \cdots+ a_{2, M}M+ a_{1, M}I)}{a_{0, M}}$$</p>
<p>is the inverse of $M$.</p>
<p>So if we define the map $f:GL_n(\mathbf R)\to GL_n(\mathbf R)$ as</p>
<p>$$f(M)= \frac{-(M^{n-1}+a_{n-1, M}M^{n-2}+ \cdots+ a_{2, M}M+ a_{1, M}I)}{a_{0, M}}$$</p>
<p>then by the remarks above it is clear that $f$ is continuous and it takes $M$ to $M^{-1}$.</p>
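<p>This construction can be made executable (a sketch; the Faddeev–LeVerrier recursion used here to obtain the characteristic-polynomial coefficients is my own addition, not part of the answer, and the test matrix is an arbitrary invertible example). Exact rational arithmetic makes the final check an equality rather than an approximation.</p>

```python
from fractions import Fraction

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def identity(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

def char_poly(A):
    """Coefficients a[0..n] of det(xI - A) = x^n + a[n-1] x^(n-1) + ... + a[0],
    computed with the Faddeev-LeVerrier recursion."""
    n = len(A)
    a = [Fraction(0)] * (n + 1)
    a[n] = Fraction(1)
    M = [[Fraction(0)] * n for _ in range(n)]
    for k in range(1, n + 1):
        M = matmul(A, M)                       # M_k = A M_{k-1} + a[n-k+1] I
        for i in range(n):
            M[i][i] += a[n - k + 1]
        AM = matmul(A, M)
        a[n - k] = -sum(AM[i][i] for i in range(n)) / k
    return a

def inverse_via_cayley_hamilton(A):
    n = len(A)
    a = char_poly(A)
    assert a[0] != 0                           # a_0 = 0 would mean det A = 0
    B = identity(n)                            # Horner: M^{n-1} + a[n-1] M^{n-2} + ... + a[1] I
    for i in range(n - 1, 0, -1):
        B = matmul(B, A)
        for r in range(n):
            B[r][r] += a[i]
    return [[-B[i][j] / a[0] for j in range(n)] for i in range(n)]

A = [[Fraction(v) for v in row] for row in [[2, 1, 0], [0, 1, 1], [1, 0, 3]]]
inv = inverse_via_cayley_hamilton(A)
assert matmul(A, inv) == identity(3)           # exact equality in Fractions
```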
|
4,517,429 | <p>Let <span class="math-container">$ {\textstyle \{X_{1},\ldots ,X_{n},\ldots \}}$</span> be a sequence of independent random variables, each of those random variable follow a Gamma distribution.</p>
<p>For the summation of those random variable:</p>
<p><span class="math-container">$ {\displaystyle {\bar {X}}_{n}\equiv {\frac {X_{1}+\cdots +X_{n}}{n}}}$</span></p>
<p>Question: Is the summation of independent Gamma distributions a Gamma distribution or a normal distribution ?</p>
<p>Here is the confusion: the summation of Gamma distribution should still be a Gamma distribution. On the other hand, central limit theorem says that such summation should approach a normal distribution. which of those viewpoint is correct ?</p>
| Harry | 676,771 | <p>Well, the CLT doesn't say the sum is normal. It says the normalized average $\sqrt{n}\,(\bar X_n-\mu)/\sigma$ has an asymptotic distribution that is normal. The sum itself may be Gamma or not depending on the parameters: it is Gamma when the scale (rate) parameters coincide.</p>
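<p>A quick simulation illustrates both viewpoints (a sketch, using the standard fact that independent Gammas with a common scale add to a Gamma; the shapes $2$ and $3$, scale $1$, seed, and sample size are arbitrary choices): a sum of finitely many terms keeps the Gamma moments — in particular skewness $2/\sqrt{k}$, visibly different from the $0$ of a normal — while the CLT only speaks about the limit under normalization.</p>

```python
import math
import random

rng = random.Random(42)
N = 200_000
k1, k2, theta = 2.0, 3.0, 1.0          # two Gammas with a COMMON scale theta
xs = [rng.gammavariate(k1, theta) + rng.gammavariate(k2, theta) for _ in range(N)]

mean = sum(xs) / N
var = sum((x - mean) ** 2 for x in xs) / N
skew = (sum((x - mean) ** 3 for x in xs) / N) / var ** 1.5

k = k1 + k2                             # the sum should be Gamma(k1 + k2, theta)
assert abs(mean - k * theta) < 0.1
assert abs(var - k * theta ** 2) < 0.3
assert abs(skew - 2 / math.sqrt(k)) < 0.15   # ~0.894: Gamma-shaped
assert skew > 0.5                            # a normal distribution would give ~0
```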
|
2,232,095 | <blockquote>
<p>Let $a, b, c, p, q$ be real numbers. Suppose $\{α, β\}$ are the roots
of the equation $x^2 + 2px+ q = 0$ and $\{α,\frac{1}{β}\}$ are the
roots of the equation $ax^2 + 2bx+ c = 0$, where $β \notin \{−1, 0, 1\}$.</p>
</blockquote>
<p>STATEMENT-1 : $(p^2 − q)(b^2 − ac) ≥ 0$</p>
<p>STATEMENT 2: $b \neq pa$ or $c \neq qa$</p>
<blockquote>
<p>My Attempt:</p>
</blockquote>
<p>The second statement means that $α+\dfrac{1}{β} \neq α+β$ or $α\dfrac{1}{β} \neq αβ$, which is true as long as $α\neq0$, since $β \notin \{−1, 0, 1\}$. </p>
<p><strong>But I have two questions:</strong></p>
<ul>
<li><p>Is assuming $\alpha \neq 0$ wrong?</p></li>
<li><p>Does statement $2$ imply statement $1$ ?</p></li>
</ul>
| Miguel | 259,671 | <p><strong>Statement 1</strong></p>
<p>Note that you can write a second order equation with its roots as:
$$x^2+2px+q=(x-\alpha)(x-\beta)$$
so that $2p=-\alpha-\beta$ and $q=\alpha\beta$. Doing the same with the second equation:
$$a x^2+2bx+c=a(x-\alpha)(x-\frac{1}{\beta})$$
yields $c=a\frac{\alpha}{\beta}$ and $2b=a(-\alpha-\frac{1}{\beta})$. Direct substitution (can you fill in the details?) results in:
$$p^2-q=\frac{1}{4}(\alpha-\beta)^2\ge0$$
and
$$b^2-ac=\frac{a^2}{4}(\alpha-\frac{1}{\beta})^2\ge 0$$</p>
<p><strong>Statement 2</strong></p>
<p>Assume for the sake of contradiction that $b=pa$. Let us first see that also $c=qa$ holds. The second equation becomes:
$$ax^2+2bx+c=ax^2+2pax+c=0$$
And the first equation, multiplied by $a$ is:
$$ax^2+2pax+aq=0$$
so subtracting both equations gives $c-aq=0$. </p>
<p>Now, substituting further $c=aq$ in the second equation we obtain:
$$ax^2+2bx+c=ax^2+2pax+aq=a(x^2+2px+q)=0$$
which is a multiple of the first equation, so both equations have the same roots, i.e. $\beta=\frac{1}{\beta}$ or $\beta=\pm 1$. This is a contradiction, so $b\neq pa$.</p>
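<p>Both statements can be checked with exact rational arithmetic for sample values (a sketch; the roots $\alpha=2$, $\beta=3$ and leading coefficient $a=4$ are arbitrary choices with $\beta\notin\{-1,0,1\}$ and $a\neq 0$):</p>

```python
from fractions import Fraction as F

alpha, beta = F(2), F(3)
p = -(alpha + beta) / 2            # x^2 + 2px + q = (x - alpha)(x - beta)
q = alpha * beta
a = F(4)
b = -a * (alpha + 1 / beta) / 2    # ax^2 + 2bx + c = a(x - alpha)(x - 1/beta)
c = a * alpha / beta

assert p * p - q == (alpha - beta) ** 2 / 4
assert b * b - a * c == a * a * (alpha - 1 / beta) ** 2 / 4
assert (p * p - q) * (b * b - a * c) >= 0      # Statement 1
assert b != p * a and c != q * a               # Statement 2
```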
|
13,843 | <p>We have a natural number $n>1$. We want to determine whether there exist
natural numbers $a, k>1$ such that $n = a^k$. </p>
<p>Please suggest a polynomial-time algorithm.</p>
| Community | -1 | <p>In order to test whether or not a natural number <span class="math-container">$n$</span> is a perfect power, we can conduct a binary search of the integers <span class="math-container">$\{1,2,\ldots,n\}$</span> for a number <span class="math-container">$m$</span> such that <span class="math-container">$n = m^b$</span> for some <span class="math-container">$b>1$</span>. Fix an exponent <span class="math-container">$b>1$</span>. If a solution <span class="math-container">$m$</span> to <span class="math-container">$m^b =n$</span> exists, then it must lie in some interval <span class="math-container">$[c_i,d_i]$</span>. When <span class="math-container">$i = 0$</span> we may take <span class="math-container">$[c_0,d_0] = [1,n]$</span>. To define <span class="math-container">$[c_{i+1},d_{i+1}]$</span>, consider <span class="math-container">$\alpha:= \left\lfloor \frac{c_i+d_i}{2}\right\rfloor$</span>. If <span class="math-container">$\alpha^b = n$</span> then we’re done. If <span class="math-container">$\alpha^b > n$</span>, let <span class="math-container">$[c_{i+1}, d_{i+1}] = [c_i, \alpha]$</span>; otherwise <span class="math-container">$\alpha^b < n$</span> and we let <span class="math-container">$[c_{i+1}, d_{i+1}] = [\alpha, d_i]$</span>. We continue in this manner until <span class="math-container">$|c_i - d_i| \leq 1$</span>. We then increase the value stored in variable <span class="math-container">$b$</span> and start the loop again. Performing this loop for all <span class="math-container">$b \leq \log_2(n)$</span> completes the algorithm. </p>
<p>A pseudocode implementation of this algorithm can be found on page 21 of Dietzfelbinger's <a href="http://books.google.com/books?id=DWdjlaxl_7YC&dq=primality+testing+in+polynomial+time&printsec=frontcover&source=bn&hl=en&ei=HnFoS4XxEMuXtgf21KXUBg&sa=X&oi=book_result&ct=result&resnum=4&ved=0CB4Q6AEwAw#v=onepage&q=&f=false" rel="nofollow noreferrer">Primality Testing in Polynomial Time</a>. Its complexity is approximately <span class="math-container">$O(\log^3(n))$</span>.</p>
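<p>A direct (unoptimized) rendering of the binary-search test described above (my own sketch, not the book's pseudocode):</p>

```python
def is_perfect_power(n):
    """Return (m, b) with m**b == n and m, b > 1, or None if no such pair exists."""
    if n < 4:
        return None
    b = 2
    while (1 << b) <= n:              # only exponents b <= log2(n) are possible
        lo, hi = 1, n
        while lo <= hi:               # binary search for an integer b-th root
            mid = (lo + hi) // 2
            v = mid ** b
            if v == n:
                return (mid, b)
            if v < n:
                lo = mid + 1
            else:
                hi = mid - 1
        b += 1
    return None

assert is_perfect_power(3 ** 7) == (3, 7)
assert is_perfect_power(2 ** 10) == (32, 2)   # the smallest exponent, b = 2, is found first
assert is_perfect_power(97) is None
```

For each exponent it performs about <span class="math-container">$\log_2 n$</span> probes, each a big-integer power, so the total work is polynomial in <span class="math-container">$\log n$</span>.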
|
13,843 | <p>We have a natural number $n>1$. We want to determine whether there exist
natural numbers $a, k>1$ such that $n = a^k$. </p>
<p>Please suggest a polynomial-time algorithm.</p>
| Stefan Kohl | 28,104 | <p>The computer algebra system <a href="http://www.gap-system.org" rel="nofollow">GAP</a> performs this test and determines a
smallest root $a$ of a given integer $n$ quite efficiently.
The following is copied directly from its source code (file gap4r6/lib/integer.gi),
and should be self-explaining:</p>
<pre><code>#############################################################################
##
#F SmallestRootInt( <n> ) . . . . . . . . . . . smallest root of an integer
##
InstallGlobalFunction(SmallestRootInt,
function ( n )
local k, r, s, p, l, q;
# check the argument
if n > 0 then k := 2; s := 1;
elif n < 0 then k := 3; s := -1; n := -n;
else return 0;
fi;
# exclude small divisors, and thereby large exponents
if n mod 2 = 0 then
p := 2;
else
p := 3; while p < 100 and n mod p <> 0 do p := p+2; od;
fi;
l := LogInt( n, p );
# loop over the possible prime divisors of exponents
# use Euler's criterion to cast out impossible ones
while k <= l do
q := 2*k+1; while not IsPrimeInt(q) do q := q+2*k; od;
if PowerModInt( n, (q-1)/k, q ) <= 1 then
r := RootInt( n, k );
if r ^ k = n then
n := r;
l := QuoInt( l, k );
else
k := NextPrimeInt( k );
fi;
else
k := NextPrimeInt( k );
fi;
od;
return s * n;
end);
</code></pre>
|
3,055,365 | <p>While doing my research, I came across this integral and don't know how to solve for this:
<span class="math-container">$$\int_{0}^{\infty}x^2\exp\{ax-be^{ax}\}dx,\text{where $a,b>0$}.$$</span>
My attempt:
<span class="math-container">\begin{align}
\int_{0}^{\infty}x^2\exp\{ax-be^{ax}\}dx &\overset{x = \ln u}{=} \int_1^{\infty}\frac{1}{u}\left(\ln u\right)^{2}\exp\left\{a\ln u-be^{a \ln u}\right\}du\\
&=\int_{1}^{\infty}(\ln u)^2u^{a-1}e^{-bu^a}du\\
&\overset{t=u^a}{\propto} \int_{1}^{\infty}(\ln t)^2e^{-bt}dt = A
\end{align}</span>
Then, integrating by parts with <span class="math-container">$k=(\ln t)^2$</span>, we have:
<span class="math-container">$$A\propto \int_{1}^{\infty}\frac{1}{t}(\ln t )\exp\{-bt\}dt=B$$</span>
Here, my first approach is to use Taylor Series expansion and obtain:
<span class="math-container">$$B=\int_{1}^{\infty}\frac{1}{t}(\ln t)\sum_{n=0}^\infty\frac{(-bt)^n}{n!}dt=\int_{1}^{\infty}\sum_{n=0}^{\infty}(-1)^n\frac{b^nt^{n-1}\ln t}{n!}dt$$</span>
However, here <span class="math-container">$f_{n}(t) = (-1)^n\frac{b^nt^{n-1}\ln t}{n!}$</span> is not greater than <span class="math-container">$0$</span> for all values of <span class="math-container">$t$</span> since <span class="math-container">$b > 0$</span>, so I can't interchange integration and summation. (by Fubini's theorem)</p>
<p>Then, for my second approach, I integrate by parts one more time:
<span class="math-container">$$B=\int_{1}^{\infty}\frac{1}{t}(\ln t )\exp\{-bt\}dt\overset{k=\ln x}{=}\int_{0}^{\infty}k\exp\{-be^k\}dk$$</span>
Let <span class="math-container">$y=k$</span> and <span class="math-container">$dz=\exp\{-be^k\}dk \rightarrow z = -E_1\left(be^k\right)$</span>
So,
<span class="math-container">$$B=-kE_1\left(be^k\right)\big|_{0}^{\infty}+\int_{0}^{\infty}E_1\left(be^k\right)dk$$</span>
where <span class="math-container">$E_1\left(be^k\right)$</span> is the exponential integral of <span class="math-container">$be^k$</span>. </p>
<p>But now, I don't know where to go from here.
Any suggestion? </p>
| Ininterrompue | 622,553 | <p>The integral you call <span class="math-container">$A$</span> can be split into two integrals.</p>
<p><span class="math-container">$$ A = \frac{1}{a^{3}}\int_{1}^{\infty}\ln^{2}t\,e^{-bt}\,\mathrm{d}t = \frac{1}{a^{3}}\left[\int_{0}^{\infty}\ln^{2}t\,e^{-bt}\,\mathrm{d}t - \int_{0}^{1}\ln^{2}t\, e^{-bt}\,\mathrm{d}t\right] = \frac{1}{a^{3}}(A_{1} - A_{2}) $$</span></p>
<p>The first one can be done by considering the series expansion</p>
<p><span class="math-container">$$\begin{aligned} \int_{0}^{\infty}t^{\epsilon}e^{-bt}\,\mathrm{d}t &= \sum_{n=0}^{\infty}\frac{\epsilon^{n}}{n!}\int_{0}^{\infty}\ln^{n}t\,e^{-bt}\,\mathrm{d}t \\
&= \frac{1}{b^{1+\epsilon}}\int_{0}^{\infty}t^{\epsilon}e^{-t}\,\mathrm{d}t = \frac{\Gamma(1+\epsilon)}{b^{1+\epsilon}}\end{aligned}$$</span></p>
<p>and expanding the latter term to second order in <span class="math-container">$\epsilon$</span>. The integral will then be <span class="math-container">$2! = 2$</span> times the coefficient of the <span class="math-container">$\epsilon^{2}$</span> term to compensate for the factorial. Using the Taylor expansion</p>
<p><span class="math-container">$$ \ln\Gamma(1+\epsilon) = -\gamma\epsilon + \sum_{k=2}^{\infty}\frac{(-1)^{k}\zeta(k)}{k}\epsilon^{k},$$</span></p>
<p>we have</p>
<p><span class="math-container">$$\begin{aligned} \frac{\Gamma(1+\epsilon)}{b^{1+\epsilon}} &\approx \frac{e^{-\gamma\epsilon + \frac{\zeta(2)}{2}\epsilon^{2}}}{be^{\epsilon\ln b}}
\approx \frac{1}{b}\left(1 - \epsilon\ln b + \frac{\ln^{2}b}{2}\epsilon^{2}\right)\left(1 - \gamma\epsilon + \frac{\zeta(2)}{2}\epsilon^{2} + \frac{\gamma^{2}}{2}\epsilon^{2}\right) \\
&\approx \frac{1}{b}\left(1 + \left(-\gamma - \ln b\right)\epsilon +\left(\frac{\zeta(2)}{2} + \frac{\gamma^{2}}{2} + \gamma\ln b + \frac{\ln^{2}b}{2}\right)\epsilon^{2}\right)\end{aligned}$$</span></p>
<p>so</p>
<p><span class="math-container">$$ A_{1} = \frac{1}{b}\left(\zeta(2) + \gamma^{2} + 2\gamma\ln b + \ln^{2}b\right). $$</span></p>
<p>The second integral can be done using series again, but I am not sure if it has a nice expression. I write it in terms of hypergeometric functions. The Taylor series of the exponential is used. (Note I can switch sum and integral because <span class="math-container">$\int_{0}^{1}t^{k}\ln^{2}t\,\mathrm{d}t$</span> certainly converges for any whole number <span class="math-container">$k$</span>, as the divergence of <span class="math-container">$\ln^{2}t$</span> is weak) </p>
<p><span class="math-container">$$ A_{2} = \int_{0}^{1}\ln^{2}t\,\sum_{k=0}^{\infty}\frac{(-1)^{k}b^{k}t^{k}}{k!}\,\mathrm{d}t = \sum_{k=0}^{\infty}\frac{(-1)^{k}b^{k}}{k!}\int_{0}^{1}t^{k}\ln^{2}t\,\mathrm{d}t $$</span></p>
<p>Consider</p>
<p><span class="math-container">$$ \int_{0}^{1}t^{k+\epsilon}\,\mathrm{d}t = \sum_{n=0}^{\infty}\frac{\epsilon^{n}}{n!}\int_{0}^{1}t^{k}\ln^{n}t\,\mathrm{d}t = \frac{1}{k+\epsilon+1}$$</span></p>
<p>where we are again finding the <span class="math-container">$\epsilon^{2}$</span> coefficient. It follows that</p>
<p><span class="math-container">$$ \frac{1}{1+k+\epsilon} = \frac{1}{1+k}\frac{1}{1+\frac{\epsilon}{1+k}} = \frac{1}{1+k}\sum_{j=0}^{\infty}(-1)^{j}\left(\frac{\epsilon}{1+k}\right)^{j}$$</span></p>
<p>so </p>
<p><span class="math-container">$$ \int_{0}^{1}t^{k}\ln^{2}t\,\mathrm{d}t = \frac{2}{(1+k)^{3}} \quad\to\quad A_{2} = \sum_{k=0}^{\infty}\frac{(-1)^{k}b^{k}}{k!}\frac{2}{(1+k)^{3}} = 2\,{}_{3}F_{3}(1,1,1;2,2,2;-b).$$</span></p>
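<p>The closed form for $A_1$ can be verified numerically (my own sketch): substituting $t=e^u$ turns $\int_0^\infty \ln^2 t\,e^{-bt}\,dt$ into $\int_{-\infty}^{\infty}u^2 e^{u-be^u}\,du$, which composite Simpson handles easily; the value $b=2$ and the truncation window are arbitrary choices.</p>

```python
import math

b = 2.0
gamma = 0.5772156649015329            # Euler-Mascheroni constant

def integrand(u):                      # t = e^u: ln^2(t) e^{-bt} dt -> u^2 e^{u - b e^u} du
    return u * u * math.exp(u - b * math.exp(u))

lo, hi, N = -60.0, 10.0, 20000         # double-exponential decay makes truncation safe
h = (hi - lo) / N                      # N even -> composite Simpson's rule
s = integrand(lo) + integrand(hi)
for i in range(1, N):
    s += (4 if i % 2 else 2) * integrand(lo + i * h)
numeric = s * h / 3

closed = (math.pi ** 2 / 6 + gamma ** 2 + 2 * gamma * math.log(b) + math.log(b) ** 2) / b
assert abs(numeric - closed) < 1e-6
```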
|
3,055,365 | <p>While doing my research, I came across this integral and don't know how to solve for this:
<span class="math-container">$$\int_{0}^{\infty}x^2\exp\{ax-be^{ax}\}dx,\text{where $a,b>0$}.$$</span>
My attempt:
<span class="math-container">\begin{align}
\int_{0}^{\infty}x^2\exp\{ax-be^{ax}\}dx &\overset{x = \ln u}{=} \int_1^{\infty}\frac{1}{u}\left(\ln u\right)^{2}\exp\left\{a\ln u-be^{a \ln u}\right\}du\\
&=\int_{1}^{\infty}(\ln u)^2u^{a-1}e^{-bu^a}du\\
&\overset{t=u^a}{\propto} \int_{1}^{\infty}(\ln t)^2e^{-bt}dt = A
\end{align}</span>
Then, integration by part where <span class="math-container">$k=(\ln t)^2$</span>, we have:
<span class="math-container">$$A\propto \int_{1}^{\infty}\frac{1}{t}(\ln t )\exp\{-bt\}dt=B$$</span>
Here, my first approach is to use Taylor Series expansion and obtain:
<span class="math-container">$$B=\int_{1}^{\infty}\frac{1}{t}(\ln t)\sum_{n=0}^\infty\frac{(-bt)^n}{n!}dt=\int_{1}^{\infty}\sum_{n=0}^{\infty}(-1)^n\frac{b^nt^{n-1}\ln t}{n!}dt$$</span>
However, here <span class="math-container">$f_{n}(t) = (-1)^n\frac{b^nt^{n-1}\ln t}{n!}$</span> is not greater than <span class="math-container">$0$</span> for all values of <span class="math-container">$t$</span> since <span class="math-container">$b > 0$</span>, so I can't interchange integration and summation. (by Fubini's theorem)</p>
<p>Then, in my second approach, I substitute and integrate by parts one more time:
<span class="math-container">$$B=\int_{1}^{\infty}\frac{1}{t}(\ln t )\exp\{-bt\}dt\overset{k=\ln t}{=}\int_{0}^{\infty}k\exp\{-be^k\}dk$$</span>
Let <span class="math-container">$y=k$</span> and <span class="math-container">$dz=\exp\{-be^k\}dk \rightarrow z = -E_1\left(be^k\right)$</span>
So,
<span class="math-container">$$B=-kE_1\left(be^k\right)\big|_{0}^{\infty}+\int_{0}^{\infty}E_1\left(be^k\right)dk$$</span>
where <span class="math-container">$E_1\left(be^k\right)$</span> is the exponential integral of <span class="math-container">$be^k$</span>. </p>
<p>But now, I don't know where to go from here.
Any suggestion? </p>
| Zachary | 433,146 | <p>Starting with <span class="math-container">$A$</span>, and knowing that the constant of proportionality is <span class="math-container">$a^{-3}$</span>, we can see that
<span class="math-container">\begin{align}
A&=a^{-3}\int_1^\infty \log^2 x\, e^{-bx}\,dx \\
&= a^{-3}\frac{\partial^2}{\partial \mu^2} \int_1^\infty x^{\mu-1} e^{-bx}\,dx \Big|_{\mu=1} \\
&= a^{-3}\frac{\partial^2}{\partial \mu^2} b^{-\mu} \int_b^\infty z^{\mu-1} e^{-z}\,dz \Big|_{\mu=1} \\
&= a^{-3}\frac{\partial^2}{\partial \mu^2} b^{-\mu} \,\Gamma(\mu, b) \Big|_{\mu=1}\\
\end{align}</span>
Where <span class="math-container">$\Gamma(\mu, b)$</span> is the <a href="https://en.wikipedia.org/wiki/Incomplete_gamma_function" rel="nofollow noreferrer">upper incomplete gamma function</a>. Now, if you want to go further, we can use the lower incomplete gamma function and its series representation:</p>
<p><span class="math-container">\begin{align}
A&= a^{-3}\frac{\partial^2}{\partial \mu^2} b^{-\mu} \Big\{ \left( \Gamma(\mu)-\gamma(\mu, b)\right)\Big\} \Big|_{\mu=1}\\
&= a^{-3}\frac{\partial^2}{\partial \mu^2} \Big\{ b^{-\mu} \Gamma(\mu)-b^{-\mu} \gamma(\mu, b) \Big\} \Big|_{\mu=1}\\
&= a^{-3}\frac{\partial^2}{\partial \mu^2}\Big\{ b^{-\mu} \Gamma(\mu)-b^{-\mu} \sum_{n\ge 0} \frac{(-1)^n}{n!} \frac{b^{\mu+n}}{\mu+n} \Big\}\Big|_{\mu=1}\\
&= a^{-3}\frac{\partial^2}{\partial \mu^2} \Big\{b^{-\mu} \Gamma(\mu)- \sum_{n\ge 0} \frac{(-1)^n}{n!} \frac{b^n}{\mu+n} \Big\} \Big|_{\mu=1}\\
&= a^{-3}\frac{\log^2 b +2\gamma\log b +\pi^2/6 + \gamma^2}{b}- 2a^{-3}\sum_{n\ge 0} \frac{(-1)^n}{n!} \frac{b^n}{(n+1)^3}\\
\end{align}</span></p>
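<p>The final expression can be sanity-checked numerically; the sketch below (mine, not part of the answer) takes $a=1$ so the $a^{-3}$ prefactor drops out, and hardcodes the Euler-Mascheroni constant:</p>

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant, hardcoded

def a_direct(b, n=200_000, cutoff=60.0):
    # \int_1^{1+cutoff} (ln x)^2 e^{-bx} dx by the trapezoid rule (the tail is negligible)
    h = cutoff / n
    s = 0.0
    for i in range(n + 1):
        x = 1.0 + i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * math.log(x) ** 2 * math.exp(-b * x)
    return s * h

def a_closed(b, terms=100):
    head = (math.log(b) ** 2 + 2 * EULER_GAMMA * math.log(b)
            + math.pi ** 2 / 6 + EULER_GAMMA ** 2) / b
    tail, term = 0.0, 1.0  # term = (-b)^n / n!
    for k in range(terms):
        tail += term / (k + 1) ** 3
        term *= -b / (k + 1)
    return head - 2 * tail

print(a_direct(1.3), a_closed(1.3))  # the two values agree
```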
|
1,212,336 | <p>Let R be the set of all real numbers. Is $\{\mathbb R^+,\mathbb R^−,\{0\}\}$ a partition of $\mathbb R$? Explain your answer.</p>
<p>My answer is no because of $\{0\}$. I am confused by $\{0\}$. Please help.</p>
| robjohn | 13,854 | <p>Consider that $\log(x)$ is an increasing function and that $\log(1)=0$, then look at the following plot:</p>
<p><img src="https://i.stack.imgur.com/rozpp.png" alt="enter image description here"></p>
<p>Normally, if $f(x)$ is an increasing function, then, for $a,b\in\mathbb{Z}$,
$$
\sum_{k=a+1}^bf(k)\ge\int_a^bf(x)\,\mathrm{d}x
$$
but since $f(a)=\log(1)=0$, we can include $1$ in the summation.</p>
|
656,701 | <p>Suppose we have:</p>
<p>$A = \{(x,v,w):x+v=w\}$</p>
<p>$B = \{(x,v):x=v\}$</p>
<p>$C = \{(w,u):\exists x 2x=w\}$</p>
<p>Can we say that $C = A \cup B$?</p>
| André Nicolas | 6,312 | <p>The question has been modified a bit from the previous one. The replacement of $-$ by $=$ makes sense. However, in the definition of $C$, you had
$C=\{w: \exists x(2x=w)\}$, and that should have been kept. </p>
<p>Presumably the set that our variables range over is the set of natural numbers, or the set of integers, or the set of reals. Which one it is does not matter much.</p>
<p>Whatever we choose, it is <strong>not</strong> true that $A\cap B=C$. The reason is simple. The set $A$ is a set of <em>ordered triples</em>, the set $B$ is a set of ordered pairs, and $C$ is a set of numbers. </p>
<p>The intersection of a set of ordered triples of numbers and a set of ordered pairs of numbers is <strong>empty</strong>. A set of ordered triples of numbers is a different kind of creature than a set of ordered pairs of numbers. </p>
<p><strong>Added:</strong> Define $A$ as the set of triples $(x,v,w)$ such that $x+v=w$. Define $B$ as the set of $(x,v,w)$ such that $x=v$. Finally, let $C$ be the set of all $(x,v,w)$ such that $\exists t(w=2t)$. </p>
<p>These definitions are related to yours, but definitely different. </p>
<p>Then any element of $A\cap B$ is an element of $C$. But definitely not necessarily vice-versa, since $x$ and $v$ have no conditions on them. </p>
|
74,347 | <blockquote>
<p>Construct a function which is continuous in $[1,5]$ but not differentiable at $2, 3, 4$.</p>
</blockquote>
<p>This question is just after the definition of differentiation and the theorem that if $f$ is finitely derivable at $c$, then $f$ is also continuous at $c$. Please help, my textbook does not have the answer. </p>
| user7530 | 7,530 | <p>$|x|$ is continuous, and differentiable everywhere except at 0. Can you see why?</p>
<p>From this we can build up the functions you need: $|x-2| + |x-3| + |x-4|$ is continuous (why?) and differentiable everywhere except at 2, 3, and 4.</p>
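<p>The corners can also be seen numerically via one-sided difference quotients (a small added sketch, not part of the original argument):</p>

```python
def f(x):
    return abs(x - 2) + abs(x - 3) + abs(x - 4)

def one_sided_slopes(c, h=1e-6):
    # left and right difference quotients at c
    left = (f(c) - f(c - h)) / h
    right = (f(c + h) - f(c)) / h
    return left, right

for c in (2, 3, 4):
    print(c, one_sided_slopes(c))  # the two slopes disagree at each corner
```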
|
290,050 | <p>Are there good lower/upper bounds for
$\sum\limits_{i = 0}^k \binom{n}{i} x^i$ where $0<x<1$, $k \ll n$?</p>
| Brendan McKay | 9,025 | <p>Write $x=p/(1-p)$ and then
$$ \sum_{i=0}^k \binom ni x^i = (1-p)^{-n}\sum_{i=0}^k \binom ni p^i(1-p)^{n-i}.$$
The last sum is the cumulative binomial distribution, which has no exact formula (except as a special function) but a large literature on bounds. It is quite a common topic on Mathoverflow, see these for example:
<a href="https://mathoverflow.net/questions/220030/normal-approximation-of-tail-probability-in-binomial-distribution">ref1</a> <a href="https://mathoverflow.net/questions/149440/asymptotic-behaviour-of-binomial-sum">ref2</a> <a href="https://mathoverflow.net/questions/201530/partial-sum-of-the-binomial-theorem?rq=1">ref3</a> <a href="https://mathoverflow.net/questions/93744/estimating-a-partial-sum-of-weighted-binomial-coefficients">ref4</a> <a href="https://mathoverflow.net/questions/55585/lower-bound-for-sum-of-binomial-coefficients">ref5</a></p>
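<p>The substitution $x = p/(1-p)$ (equivalently $p = x/(1+x)$) is easy to check numerically; a quick sketch with arbitrary values:</p>

```python
from math import comb

def partial_sum(n, k, x):
    # partial binomial sum  sum_{i<=k} C(n,i) x^i
    return sum(comb(n, i) * x ** i for i in range(k + 1))

def scaled_cdf(n, k, p):
    # (1-p)^{-n} times the cumulative binomial probability up to k
    return (1 - p) ** (-n) * sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
                                 for i in range(k + 1))

n, k, x = 20, 7, 0.35
p = x / (1 + x)  # inverts x = p / (1 - p)
print(partial_sum(n, k, x), scaled_cdf(n, k, p))  # equal
```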
|
1,290,516 | <p>Find the values of $m$ if the line $y=mx+2$ is a tangent to the curve $x^2-2y^2=1$.</p>
<p>My working:</p>
<p>First we solve $x^2-2y^2=1$ for $y$ so that we can differentiate with respect to $x$ to get the gradient. We get $y^2=\frac{1}{2}x^2-\frac{1}{2}\implies y=\pm\sqrt{\frac{1}{2}x^2-\frac{1}{2}}$.</p>
<p>We take the positive one for demonstration<br>
$\frac{dy}{dx}=\frac{1}{2}x(\frac{1}{2}x^2-\frac{1}{2})^{-\frac{1}{2}}=\frac{x}{2\sqrt{\frac{1}{2}x^2-\frac{1}{2}}}$</p>
<p>Setting this equal to $m$ and squaring gives $(1-2m^2)x^2=-2m^2$.</p>
<p>Since the tangent touches the curve, we can make $x^2-2(mx+2)^2=1$, we then get $(1-2m^2)x^2=9+8mx$</p>
<p>$\implies(1-2m^2)x^2=-2m^2$ and $(1-2m^2)x^2=9+8mx$ are two equations with two unknowns, then we should be able to find the values of $m$, but I couldn't find any easy way to solve those 2 simultaneous equations. Is there any easier method?</p>
<p>I tried solving $9+8mx=-2m^2$ but we still have two unknowns in one equation?</p>
<p>Also, if we don't use those two simultaneous equations, can we solve this question with a different method?</p>
<p>I am trying to solve WITHOUT implicit differentiation.</p>
<p>Many thanks for the help!</p>
| Community | -1 | <p>Let the given line be tangent to the curve at the point $(x_0, y_0).$ Then we have $$y_0 = m x_0 + 2, \\ x_0^2 = 1 + 2y_0^2.$$ Using the first equation in the second, we have $$x_0^2 = 1 + 2 (m x_0 + 2)^2 = 1 + 2(m^2 x_0^2 + 4 m x_0 + 4),$$ that is, $$(1 - 2m^2)x_0^2 - 8m x_0 - 9 = 0.$$ Since the line is tangent, this quadratic in $x_0$ must have a double root, so its discriminant vanishes: $$64m^2 + 36(1-2m^2) = 36 - 8m^2 = 0,$$ giving $m = \pm\frac{3}{\sqrt{2}} \approx \pm 2.12132$, with the point of tangency at $x_0 = \frac{4m}{1-2m^2} = -\frac{m}{2} = \mp\frac{3}{2\sqrt{2}}$. Note that this requires no differentiation at all.</p>
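<p>As a numerical cross-check (my added sketch): substituting $y = mx+2$ into $x^2-2y^2=1$ gives a quadratic in $x$, and tangency corresponds to a vanishing discriminant:</p>

```python
import math

def discriminant(m, c=2.0):
    # substituting y = m x + c into x^2 - 2 y^2 = 1 gives
    # (1 - 2 m^2) x^2 - 4 m c x - (2 c^2 + 1) = 0; tangency <=> zero discriminant
    a = 1.0 - 2.0 * m * m
    b = -4.0 * m * c
    d = -(2.0 * c * c + 1.0)
    return b * b - 4.0 * a * d

for m in (3.0 / math.sqrt(2.0), -3.0 / math.sqrt(2.0)):
    print(m, discriminant(m))  # vanishes (up to rounding) at m = ±3/√2
```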
|
1,131,622 | <p>The question itself is a very easy one:<br/></p>
<blockquote>
<p>Somebody has got two kids, one of whom is a girl. Then what's the probability that he's got <strong>at least</strong> one boy?</p>
</blockquote>
<p>My answer is that, since he's already got a girl, then "he's got at least one boy" amounts to "the other kid is a boy", whose probability is apparently $\frac{1}{2}$.<br/>
But my friends argue that the probability should be $\frac23$: they say this is a binomial distribution, all the possible cases are (girl,girl),(girl,boy),(boy,girl) which yields that the probability is two cases out of three and is thus $\frac23$.<br/>
But I think this is totally unacceptable. I don't think it is a binomial distribution at all, at least not what my friends explained to me. However, I just can't dissuade them from their opinion, nor can I prove that I am wrong.<br/>
So what on earth is the probability? and why? Any help is appreciated. Thanks in advance.<hr/>
In particular, can anybody show why <strong>my</strong> explanation is wrong? Isn't whether the other kid is a boy or a girl a 50/50 event?
<hr/>
EDIT:<br/>
Thanks for all the help you provided for me, and special thanks will go to @HammyTheGreek and @KSmarts, who have made it clear to me that there is in fact some ambiguity in my statement in this problem.<br/>
As is pointed out in <a href="http://en.wikipedia.org/wiki/Boy_or_Girl_paradox" rel="nofollow">this link</a>, there are two distinct interpretations of the statement "one of whom is a girl" that give rise to ambiguity:<br/></p>
<blockquote>
<p>From all families with two children, at least one of whom is a boy, a family is chosen at random. This would yield the answer of 1/3.<br/>
From all families with two children, one child is selected at random, and the sex of that child is specified to be a boy. This would yield an answer of 1/2. </p>
</blockquote>
| KSmarts | 192,747 | <p>All of the answers so far explain why you are wrong. However, strictly speaking, you are <strong>not</strong> wrong. You are simply making different assumptions and interpreting the question differently than other people.</p>
<p>The question tells us that someone has (exactly) two children, (at least) one of whom is a girl. There is unstated information here that is necessary to answer the question, and thus the question is ambiguous. How did we select this "someone" and how did we find out that one child is a girl?</p>
<p>The existing answers treat it like this: we gather all people who have two children and at least one girl, and select one of them. In this situation, we consider the three situations of older boy, younger girl; older girl, younger boy; and two girls. This gives us a probability of $2/3$ that the other child is a boy.</p>
<p>Consider this alternate presentation, though. We select one person with two children (sex unknown). Then we determine that one of the children, randomly selected, is a girl. We still have the same three possible cases as before. However, in each of the two cases where there is one boy and one girl, there is only a $1/2$ chance that the child selected would be a girl, whereas if there are two girls, we will always select a girl. So, in this situation, there is a $1/2$ chance that the other child is a girl.</p>
<p>This question (or some form of it) is known as the Two-Child Problem, or the <a href="http://en.wikipedia.org/wiki/Boy_or_Girl_paradox" rel="nofollow">Boy or Girl Paradox</a>. If you think this is frustrating or confusing, you should probably avoid the similar <a href="http://en.wikipedia.org/wiki/Sleeping_Beauty_problem" rel="nofollow">Sleeping Beauty problem</a>.</p>
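<p>Both interpretations are easy to simulate; here is a hedged Monte Carlo sketch (sample size and seed are arbitrary):</p>

```python
import random

random.seed(0)
N = 200_000
families = [(random.choice("BG"), random.choice("BG")) for _ in range(N)]

# Interpretation 1: condition on the family having at least one girl
with_girl = [fam for fam in families if "G" in fam]
p1 = sum(1 for fam in with_girl if "B" in fam) / len(with_girl)

# Interpretation 2: one randomly chosen child turns out to be a girl
hits = boys = 0
for fam in families:
    i = random.randrange(2)
    if fam[i] == "G":
        hits += 1
        boys += fam[1 - i] == "B"
p2 = boys / hits

print(p1, p2)  # close to 2/3 and 1/2 respectively
```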
|
<p>I just read a proof and, after struggling some time with a mental leap, I think that it tacitly uses the following:</p>
<p>Let $\kappa$ be a regular cardinal and $\theta > \kappa$ a regular cardinal too; then:
$S \subset \kappa$ is stationary if and only if
for every $\mathcal{A} = (H(\theta), \in, <,\dots)$ there exists $M \prec \mathcal{A}$ with $|M| < \kappa$ such that $\sup(M \cap \kappa) \in S$.</p>
<p>Now my questions are:</p>
<ol>
<li><p>Is this statement above even true? (I think so as I have a proof, but this doesn't have to mean anything)</p></li>
<li><p>It appears to me that the latter part of this characterization is a quite strong assumption as $\mathcal{A}$ might contain a lot of additional information, so is there a possibility to weaken it? Or could you mention any similar statements to the one above?</p></li>
</ol>
<p>Thank you</p>
<p>EDIT: I accepted the answer of Philip, simply because he has fewer points. Francois' answer would have deserved it too.</p>
| Philip Welch | 6,942 | <p>(I first wanted to give an answer, but I was not quick enough. I then wanted to add a small comment and found out after 20 minutes that I had insufficient reputation.)</p>
<p>The comment was regarding 2) of oktan's original query: having $H(\theta)$
in the structure is overkill: it suffices to have $( \kappa, <, \in, C)$. (One does not need the structure to be able to express "$C$ is closed".)</p>
|
1,392,576 | <p>How can the followin question be solved algebraically?</p>
<p>A certain dealership has a total of 100 vehicles consisting of cars and trucks. 1/2 of the cars are used and 1/3 of the trucks are used. If there are 42 used vehicles altogether, how many trucks are there?</p>
| Anurag A | 68,092 | <p>$$\frac{x^2+2\sqrt{x^2}}{x}=x+\frac{2|x|}{x}$$
Now let $x\to 0$. The first term goes to $0$, but the second term goes to $\pm 2$, depending on the side from which you approach $0$.</p>
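<p>Numerically (a small added sketch), the one-sided behaviour is immediate:</p>

```python
def g(x):
    # the original expression (x^2 + 2*sqrt(x^2)) / x
    return (x * x + 2.0 * (x * x) ** 0.5) / x

for x in (1e-3, 1e-6, -1e-3, -1e-6):
    print(x, g(x))  # tends to +2 from the right and to -2 from the left
```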
|
1,265,801 | <p>Let $p$ be an odd prime and $a, b \in \Bbb Z$ with $p \nmid a$ and $p \nmid b$. Prove that among the congruences $x^2 \equiv a \pmod p$, $x^2 \equiv b \pmod p$, and $x^2 \equiv ab \pmod p$, either all three are solvable or exactly one is.</p>
<p>Please help I'm trying to study for final in number theory and I can't figure out this proof.</p>
| Rolf Hoyer | 228,612 | <p>What you should prove directly is the following:</p>
<ul>
<li>The product of two quadratic residues is a quadratic residue.</li>
<li>The product of a quadratic residue and a quadratic non-residue is a quadratic non-residue.</li>
</ul>
<p>A counting argument then yields additionally:</p>
<ul>
<li>The product of two quadratic non-residues is a quadratic residue.</li>
</ul>
<p>These facts combined yield the desired answer, by breaking into cases as to whether or not $a,b$ are quadratic residues.</p>
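<p>The case analysis can be verified by brute force for a small odd prime (a sketch; $p = 23$ is an arbitrary choice):</p>

```python
def is_qr(a, p):
    # a is a quadratic residue mod p iff x^2 ≡ a (mod p) has a solution
    return any((x * x - a) % p == 0 for x in range(p))

p = 23
for a in range(1, p):
    for b in range(1, p):
        solvable = sum((is_qr(a, p), is_qr(b, p), is_qr(a * b % p, p)))
        assert solvable in (1, 3)
print("claim verified for p =", p)
```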
|
73,238 | <p>How can I calculate the solid angle that a sphere of radius R subtends at a point P? I would expect the result to be a function of the radius and the distance (which I'll call d) between the center of the sphere and P. I would also expect this angle to be 4π when d < R, and 2π when d = R, and less than 2π when d > R.</p>
<p>I think what I really need is some pointers on how to solve the integral (taken from <a href="http://en.wikipedia.org/wiki/Solid_angle" rel="nofollow">wikipedia</a>) $\Omega = \iint_S \frac { \vec{r} \cdot \hat{n} \,dS }{r^3}$ given a parameterization of a sphere. I don't know how to start to set this up so any and all help is appreciated!</p>
<p>Ideally I would like to derive the answer from this surface integral, not geometrically, because there are other parametric surfaces I would like to know the solid angle for, which might be difficult if not impossible to solve without integration.</p>
<p>*I reposted this from mathoverflow because this isn't a research-level question.</p>
| Neil G | 774 | <p>I think an easy way to visualize this is to see it as a bunch of transformation matrices.</p>
<p>A circle is just $SRx$ where</p>
<p>$x = \left[\matrix{1\\0\\0}\right],$</p>
<p>$S = rI$ is a scale matrix, and</p>
<p>$R = \left[\matrix{\cos\theta & \sin\theta & 0 \\ \sin\theta & -\cos\theta & 0 \\ 0 & 0 & 1}\right]$</p>
<p>is a a rotation matrix:</p>
<p>Now, you want rotate that whole circle to some arbitrary direction in three dimensions, so we need a three-dimensional rotation matrix $T$, and we need an translation vector $k$. A circle is then:</p>
<p>$TSRx + k.$</p>
|
2,059,604 | <p>Let's say I have two periodic functions $f(x)$ and $g(x)$, each with the same period $p$. Is it always the case that the sum of these two functions will also have the period $p$? Is there any counter example?</p>
| Community | -1 | <p>For sums and products: Let $p$ be a period of $f(x)$ and let $q$ be a period of $g(x)$. Suppose that there are positive integers $a$ and $b$ such that $ap=bq=r$. Then $r$ is a period of $f(x)+g(x)$, and also of $f(x)g(x)$. </p>
<p>So, if $f(x)$ has $5\pi$ as a period, and $g(x)$ has $3\pi$ as a period, then $f(x)+g(x)$ and $f(x)g(x)$ each have $15\pi$ as a period. However, even if $5\pi$ is the shortest period of $f(x)$ and $3\pi$ is the shortest period of $g(x)$, the number $15\pi$ need not be the shortest period of $f(x)+g(x)$ or $f(x)g(x)$. </p>
<p>And as @Antoine says, let $f(x)=\sin x$, and $g(x)=-\sin x$. Each function has smallest period $2\pi$. But their sum is the $0$-function, which has every positive real number as a period!</p>
|
1,601,970 | <p>I want to use proof by contradiction.</p>
<p>Suppose that the set of real numbers is bounded above. Then, according to the axiom of continuity, there exists a least upper bound $b$.</p>

<p>But if $x\in \Bbb R$, then $x+1\in \Bbb R$, because $\Bbb R$ is closed under addition.</p>
<p>But $x+1\in \Bbb R\Longrightarrow x+1\leq b\Longrightarrow x\leq b-1$, hence $b-1$ is an upper bound for $\Bbb R$.</p>
<p>However since $b$ is a least upper bound we must have: $b\leq b-1\Longrightarrow 1\leq 0$, a contradiction, since $1>0$</p>
<p>Thus $\Bbb R$ is not bounded.</p>
<p>Is that proof correct?</p>
| frosh | 211,697 | <p>From Calculus by Apostol:</p>
<blockquote>
<p><strong>Theorem #1:</strong> The set <strong>P</strong> of positive integers <span class="math-container">$1, 2, 3,...$</span> is unbounded above.</p>
<p><strong>Proof #1:</strong> Assume <strong>P</strong> is bounded above. We shall show that this leads to a contradiction. Since <strong>P</strong> is nonempty, <strong>P</strong> has a least upper bound, say <span class="math-container">$b$</span>. The number <span class="math-container">$b-1$</span>, being less than <span class="math-container">$b$</span>, cannot be an upper bound for P. Hence, there is at least one positive integer <span class="math-container">$n$</span> such that <span class="math-container">$n>b-1$</span>. For this <span class="math-container">$n$</span> we have <span class="math-container">$n+1>b$</span>. Since <span class="math-container">$n+1$</span> is in P, this contradicts the fact that <span class="math-container">$b$</span> is an upper bound for P.</p>
<p><strong>Theorem #2:</strong> For every real <span class="math-container">$x$</span> there exists a positive integer <span class="math-container">$n$</span> such that <span class="math-container">$n>x$</span>.</p>
<p><strong>Proof #2</strong>: If this were not so, some <span class="math-container">$x$</span> would be an upper bound for <strong>P</strong>, contradicting <strong>Theorem #1</strong>.</p>
</blockquote>
<p><span class="math-container">$\therefore$</span> The set of real numbers has no upper bound.</p>
|
1,119,563 | <p>Why is $\sec^{-1}(2/\sqrt{2}) = \sec^{-1}(\sqrt{2})$ true?</p>
| Daniel W. Farlow | 191,378 | <p>As Workaholic pointed out (here with more explanation):
\begin{align}
\frac{2}{\sqrt{2}} &= \frac{2}{\sqrt{2}}\cdot \frac{\sqrt{2}}{\sqrt{2}}\tag{$\frac{\sqrt{2}}{\sqrt{2}}=1$}\\[1em]
&= \frac{2\sqrt{2}}{2}\\[1em]
&= \sqrt{2}.
\end{align}
Since $\dfrac{2}{\sqrt{2}}=\sqrt{2}$, it must then be true that
$$
\sec^{-1}\left(\frac{2}{\sqrt{2}}\right)=\sec^{-1}(\sqrt{2}),
$$
as desired.</p>
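<p>A quick numerical confirmation (using $\sec^{-1}(y)=\arccos(1/y)$ on the principal branch):</p>

```python
import math

print(2 / math.sqrt(2), math.sqrt(2))           # the two arguments coincide
theta = math.acos(1 / (2 / math.sqrt(2)))       # sec^{-1}(2/sqrt(2))
print(theta, math.acos(1 / math.sqrt(2)))       # same angle, namely pi/4
```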
|
490,641 | <p>In Niels Lauritzen, <em>Concrete Abstract Algebra</em>, I'm having trouble showing the following:</p>
<p>The problem starts out like this:</p>
<p>$f(X)=a_nX^n+\cdots+a_1X+a_0, a_i \in \mathbb Z, n \in \mathbb N$ </p>
<p>Part (i) which I think I've done right:</p>
<p>i) Show $X-a \mid X^n-a^n$:
$X^n - a^n = (X-a)(X^{n-1} + X^{n-2}a + \cdots + Xa^{n-2} + a^{n-1}), n\ge2 \Rightarrow X-a \mid X^n-a^n$.</p>
<p>Part (ii) which I am having trouble solving:</p>
<blockquote>
<p>ii) Show that if: $a, N \in \mathbb Z$, $f$ has degree $n$ modulo $N$, $f(a) \equiv 0$ (mod $N$) then $f(X) \equiv (X-a)g(X)$ (mod $N$) and $g$ has degree $n-1$ modulo $N$.</p>
</blockquote>
<p>Hint: use (i) and $f(X) \equiv f(X) - f(a)$ (mod $N$).</p>
<p>$f$ has degree $n$ modulo $N$ means $N$ does not divide $a_n$.</p>
<p>Thanks for your time.</p>
| PITTALUGA | 94,471 | <p>Of course $(x-a)g(x)$ vanishes at $x=a$, so $a$ is a root of this polynomial as well. If the degree of $f$ is $n$ modulo $N$, then $N$ does not divide $a_n$; since the leading coefficient of $(x-a)g(x)$ equals the leading coefficient of $g(x)$, the leading coefficient of $g(x)$ must be congruent to $a_n$ modulo $N$.
As a consequence, the degree of $g(x)$ modulo $N$ must be exactly $n-1$, otherwise $(x-a)g(x) \not\equiv f(x) \pmod N$.</p>
|
2,466,949 | <p>Room coordinates follow my walls; to use the guidance system, I build the position from various other sensors and construct a GPS position from it.</p>

<p>As I also need a "fake" compass, I'm trying to interface a moving robot with a sensor I made.</p>

<p>The robot expects the compass to send it the values of a 3-axis magnetometer.
As my sensor gives me the orientation, pitch & roll, I have this formula:</p>
<p>$\text{Orientation}=\text{atan2}( (-\text{ymag}*\cos(\text{Roll}) + \text{zmag}*\sin(\text{Roll}) ) , (\text{xmag}*\cos(\text{Pitch}) + \text{ymag}*\sin(\text{Pitch})*\sin(\text{Roll})+ \text{zmag}*\sin(\text{Pitch})*\cos(\text{Roll})))$</p>
<p>As I've 3 unknown variables & one equation, I need more equations.
But I'm stuck: there should be a way, based on the Orientation values, to get constraints (i.e. in $\text{atan2}(y,x) = \arctan(y/x)$ if $x > 0$, etc.), but I can't translate those relations into equations.</p>
<p>Am I missing something or is it impossible?</p>
<p>What Im trying to do:</p>
<p>-get Xmag,Ymag and Zmag, those are the expected output of the fake compass. </p>
<p>-Known variables are: Orientation (Yaw), Pitch & Roll; in the robot system (X: right of robot, Y: front of robot, Z: going up), Yaw is the rotation about Z in reference to an arbitrarily selected "North", Pitch the rotation about X, and Roll the rotation about Y.</p>
| N. F. Taussig | 173,070 | <p>Your instructor is counting distinguishable arrangements of the word <em>anagram</em>.</p>
<p>The word <em>anagram</em> has seven letters, so we have seven positions to fill with $3$ <em>a</em>s, $1$ <em>g</em>, $1$ <em>m</em>, and $1$ <em>r</em>. We can fill three of these seven positions with <em>a</em>s in $\binom{7}{3}$ ways. The remaining four letters are distinct, so they can be arranged in the remaining four positions in $4!$ ways. Hence, the number of distinguishable arrangements of the word <em>anagram</em> is
$$\binom{7}{3}4! = \frac{7!}{3!4!} \cdot 4! = \frac{7!}{3!}$$
The factor of $3!$ in the denominator represents the number of ways the three <em>a</em>s can be permuted among themselves within a given arrangement of the letters of the word <em>anagram</em> without producing an arrangement distinguishable from the given arrangement.</p>
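<p>The count can be confirmed by brute force (a quick added sketch):</p>

```python
from itertools import permutations
from math import factorial

# permutations() treats repeated letters as distinct, so deduplicate with a set
distinct = len(set(permutations("anagram")))
print(distinct, factorial(7) // factorial(3))  # both are 840
```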
|
10,974 | <p>Is the following true: If two chain complexes of free abelian groups have isomorphic homology modules then they are chain homotopy equivalent.</p>
| Leonid Positselski | 2,106 | <p>Yes, this is true, and it does not matter whether the complexes are bounded from any side (nor of course does it matter whether the homology is finitely generated). This is so because:</p>
<ol>
<li>The homotopy category of free abelian groups is equivalent to the derived category of abelian groups. This holds even for unbounded complexes, since the category of abelian groups has a finite homological dimension.</li>
<li>Any complex of abelian groups is quasi-isomorphic to its homology, since the category of abelian groups has homological dimension 1.</li>
</ol>
|
2,231,092 | <p>I am reading <a href="http://people.ucalgary.ca/~rzach/static/open-logic/open-logic-complete.pdf" rel="nofollow noreferrer">Open Logic TextBook</a>. In which there is a proposition about Extensionality of first order sentences (6.12) It goes like this, </p>
<p>Let $\phi$ be a sentence, and $M$ and $M'$ be structures. If $c_M = c_{M'}$, $R_M = R_{M'}$, and $f_M = f_{M'}$ for every constant symbol $c$, relation symbol $R$, and function symbol $f$ occurring in $\phi$, then $M \models \phi$ iff $M' \models \phi$.</p>
<p>Does this statement implicitly imply that the Domain is exactly the same set, since $f_M = f_{M'}$ I am confused at this statement, does it mean, $f_M = f_{M'}$ only on the domain of constant values (or other covered terms?)</p>
| Dr. Sonnhard Graubner | 175,066 | <p>We have the equation in Cartesian coordinates as $$y=\frac{1}{7}x-\frac{19}{7}.$$
Now we have $$\cos(\theta)=\frac{x}{r}\quad\text{and}\quad\sin(\theta)=\frac{y}{r}.$$
Can you finish?</p>
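<p>For concreteness, one way to finish (my sketch, under the stated substitutions): putting $x=r\cos\theta$, $y=r\sin\theta$ into the line gives $r(\cos\theta-7\sin\theta)=19$, which we can check numerically:</p>

```python
import math

def r(theta):
    # from y = x/7 - 19/7, i.e. x - 7y = 19, with x = r cos(theta), y = r sin(theta)
    return 19.0 / (math.cos(theta) - 7.0 * math.sin(theta))

for t in (0.1, 0.5, 2.0):
    x, y = r(t) * math.cos(t), r(t) * math.sin(t)
    assert abs(y - (x / 7.0 - 19.0 / 7.0)) < 1e-9
print("the polar form traces the original line")
```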
|
241,998 | <p>Consider a list of even length, for example <code>list={1,2,3,4,5,6,7,8}</code></p>
<p>what is the fastest way to accomplish both of these operations?</p>
<p><strong>Operation 1</strong>: two by two element inversion, the output is:</p>
<pre><code>{2,1,4,3,6,5,8,7}
</code></pre>
<p>A code that works is:</p>
<pre><code>Flatten@(Reverse@Partition[Reverse[list], 2])
</code></pre>
<p><strong>Operation 2</strong>: Two by two Reverse, the output is:</p>
<pre><code>{7,8,5,6,3,4,1,2}
</code></pre>
<p>A code that works is:</p>
<pre><code>Flatten@(Reverse@Partition[list, 2])
</code></pre>
<p><strong>The real lists have length 16, no need for anything adapted to long lists</strong></p>
| Henrik Schumacher | 38,178 | <p>If you have to do that very often, store your lists in a packed matrix like <code>a</code> below (that's a good idea anyways!), use your current code to generate a permutation, and then use <code>Part</code> to apply the permutation to the columns of <code>a</code>.</p>
<pre><code>n = 16;
a = RandomInteger[{1, 100}, {1000000, n}];
p1 = Flatten@(Reverse@Partition[Range[n], 2]);
b1 = Map[Flatten@(Reverse@Partition[#, 2]) &, a]; // RepeatedTiming // First
c1 = a[[All, p1]]; // RepeatedTiming // First
b1 == c1
p2 = Flatten@(Reverse@Partition[Range[n, 1, -1], 2]);
b2 = Map[Flatten@(Reverse@Partition[Reverse[#], 2]) &, a]; // RepeatedTiming // First
c2 = a[[All, p2]]; // RepeatedTiming // First
b2 == c2
</code></pre>
<blockquote>
<p>0.338</p>
<p>0.041</p>
<p>True</p>
<p>0.463</p>
<p>0.040</p>
<p>True</p>
</blockquote>
<p>If you are still free to choose you data layout, then you may consider to store the lists in "structure of arrays" format (i.e., by using the transpose of <code>a</code>) in order to obtain a small percentage of extra performance:</p>
<pre><code>aT = Transpose[a];
b1T = aT[[p1]]; // RepeatedTiming // First
b2T = aT[[p2]]; // RepeatedTiming // First
Transpose[b1] == b1T
Transpose[b2] == b2T
</code></pre>
<blockquote>
<p>0.035</p>
<p>0.035</p>
<p>True</p>
<p>True</p>
</blockquote>
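<p>The precompute-the-permutation idea is language-independent; here is a plain-Python sketch (my 0-based analogues of <code>p1</code> and <code>p2</code>, not Mathematica code):</p>

```python
n = 16
lst = list(range(1, n + 1))

# 0-based index permutations, computed once
swap = [i + 1 if i % 2 == 0 else i - 1 for i in range(n)]  # operation 1: swap within pairs
pair_reverse = [n - 1 - j for j in swap]                   # operation 2: mirror the swapped pairs

op1 = [lst[i] for i in swap]
op2 = [lst[i] for i in pair_reverse]
print(op1)  # [2, 1, 4, 3, ..., 16, 15]
print(op2)  # [15, 16, 13, 14, ..., 1, 2]
```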
|
3,772,399 | <p>I need help with the following question:</p>
<p>Let <span class="math-container">$X_i$</span> be independent, non-negative random variables, <span class="math-container">$i \in \{1,...,n\}$</span>. I want to show that for all <span class="math-container">$t > 0$</span>, <span class="math-container">$$P(S_n > 3t) \leq P(\max_{1 \leq i \leq n} X_i > t) + P(S_n >t)^2$$</span>
where we define <span class="math-container">$S_n \equiv \sum_{i = 1}^n X_i$</span></p>
<hr />
<p><strong>My "attempt":</strong> I'm not really sure how to approach, but obviously we can say that <span class="math-container">$$P(S_n > 3t) = P(S_n > 3t, \max_{1 \leq i \leq n} X_i > t) + P(S_n > 3t, \max_{1 \leq i \leq n} X_i \leq t) \\ \leq
P(\max_{1 \leq i \leq n} X_i > t) + \sum_{i=1}^n P(S_i > 3t, S_j \leq 3t \quad \forall j < i, \max_{i \leq n} X_i \leq t)$$</span> since we have that <span class="math-container">$\{S_n > 3t\} = \bigcup_{i=1}^n \{S_i > 3t, S_j \leq 3t \quad \forall j < i\}$</span> and this is a disjoint union, but I don't know where to go from here. Any help would be appreciated!</p>
| Adina Goldberg | 250,127 | <p>This post about <a href="https://math.stackexchange.com/questions/3180034/commuting-matrices-up-to-a-scalar">which matrices commute up to a scalar</a> may be helpful, as you are looking for <span class="math-container">$M$</span> and <span class="math-container">$\lambda$</span> such that <span class="math-container">$AMA^{-1} = \lambda M$</span>, essentially requiring <span class="math-container">$M$</span> and <span class="math-container">$A$</span> to change places and leave behind a scalar <span class="math-container">$\lambda$</span>.</p>
|
959,525 | <p>Could someone tell me what i've done wrong?</p>
<p>I tried to find out the derivative of $3^{2x}-2x+1$ but I got it wrong.
What I did was differentiate $3^a-2x+1$ where $a = 2x$, then multiply the two.</p>

<p>$(\ln 3\cdot 3^a - 2)\cdot 2 = 2\ln 3\cdot 3^{2x}-4$</p>

<p>P.S. $x = 2$, so the answer is supposed to be 176.</p>
| copper.hat | 27,978 | <p>Here is a more complex answer:</p>
<p>Suppose $z \in \mathbb{C}$; then by the triangle inequality $|1+z|+|1-z| \ge |(1+z)+(1-z)| = 2$. Furthermore, we have equality <strong>iff</strong> $z$ is real with $|z|\le 1$.</p>
<p>Now suppose $z \neq 0$ and let $w \in \mathbb{C}$, then $|z+w|+|z-w| = |z| (|1+{ w \over z}| + |1-{w \over z}|) \ge 2 |z|$.</p>
<p><img src="https://i.stack.imgur.com/FZpMu.png" alt="enter image description here"></p>
<p>We see that $a = |z-w|, b = |z+w|$ and $m= |z|$, hence $a+b \ge 2m$: in a triangle, the sum of two sides is at least twice the median to the third side, with equality only in the degenerate, collinear case. For the genuine triangle pictured, $a+b > 2m$.</p>
|
1,874,914 | <p>In order to find $e^{At}$, we can't just take the entrywise exponential of $A$ as we would do in its diagonalized form. We need to diagonalize $A=S^{-1}DS$ in order to find $e^{At}$. Why is this the case? I know we can't take the exponential of the matrix right away; do we need to take the exponential of the diagonal and multiply by $S$ and $S^{-1}$ in order to reach the answer all the time?</p>
<p>If yes, why?</p>
| Andrew D. Hwang | 86,418 | <p>It's not that you can't write down the series
$$
\exp(tA) = \sum_{k=0}^{\infty} \frac{t^{k} A^{k}}{k!},
\tag{1}
$$
it's that the entries of $A^{k}$ usually aren't easy to express in terms of the entries of $A$ (try it yourself for a $2 \times 2$ matrix!), so (1) isn't an explicit description of the entries of $\exp(tA)$.</p>
<p>By contrast, diagonalizing $A = S^{-1}DS$ (or generally, putting $A$ into <a href="https://en.wikipedia.org/wiki/Jordan_normal_form" rel="nofollow">Jordan canonical form</a>) permits the entries of $A^{k} = S^{-1}D^{k}S$ to be calculated, so that
$$
\exp(tA) = S^{-1} \exp(tD) S
\tag{2}
$$
is a useful, relatively explicit description.</p>
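<p>As a concrete illustration of (2) (a toy example of mine, not from the question): for $A = \left[\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right]$, with eigenvalues $\pm 1$, diagonalizing gives $\exp(tA)=\left[\begin{smallmatrix}\cosh t&\sinh t\\\sinh t&\cosh t\end{smallmatrix}\right]$, and this matches the power series (1):</p>

```python
import math

def expm_series(t, terms=40):
    # exp(tA) from the defining power series; for A = [[0,1],[1,0]], A^k alternates I, A
    e = [[0.0, 0.0], [0.0, 0.0]]
    ak = [[1.0, 0.0], [0.0, 1.0]]  # A^0 = I
    fact = 1.0                      # running k!
    for k in range(terms):
        c = t ** k / fact
        for i in range(2):
            for j in range(2):
                e[i][j] += c * ak[i][j]
        ak = [[ak[0][1], ak[0][0]], [ak[1][1], ak[1][0]]]  # right-multiply by A (swap columns)
        fact *= k + 1
    return e

def expm_diag(t):
    # closed form from the eigendecomposition
    return [[math.cosh(t), math.sinh(t)], [math.sinh(t), math.cosh(t)]]

t = 0.8
print(expm_series(t))
print(expm_diag(t))  # agrees to machine precision
```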
|
943,048 | <p><strong>Question:</strong></p>
<blockquote>
<p>let $x_{i}=1$ or $-1$,$i=1,2,\cdots,1990$, show that
$$x_{1}+2x_{2}+\cdots+1990x_{1990}\neq 0$$</p>
</blockquote>
<p>This problem seems easy, but I think it is not easy.</p>
<p>I think we should note that
$$1+2+3+\cdots+1990 = \frac{1990\cdot 1991}{2} = 995\cdot 1991,$$ but how can this be used?</p>
| Kim Jong Un | 136,641 | <p>You are summing up $\frac{1990\times 1991}{2}=1981045$ numbers, each of which is odd: write each term $kx_{k}$ as $x_{k}+\cdots+x_{k}$ ($k$ copies of $\pm 1$). A sum of an odd number of odd numbers is odd, and in particular cannot be $0$.</p>
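<p>A quick randomized check of the parity argument (an added sketch; the seed and sample count are arbitrary):</p>

```python
import random

random.seed(1)
n = 1990
for _ in range(1000):
    xs = [random.choice((1, -1)) for _ in range(n)]
    s = sum(k * x for k, x in zip(range(1, n + 1), xs))
    assert s % 2 == 1  # every such sum is odd, hence never 0
print("1000 random sign choices: every sum is odd")
```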
|
33,153 | <p>Here is one definition of a differential equation:</p>
<blockquote>
<p>"An equation containing the derivatives of one or more dependent variables, with respect to one of more independent variables, is said to be a differential equation (DE)" <em>(Zill - A First Course in Differential Equations)</em></p>
</blockquote>
<p>Here is another:</p>
<blockquote>
<p>"A differential equation is a relationship between a function of time & it's derivatives" <em>(Braun - Differential equations and their applications)</em></p>
</blockquote>
<p>Here is another:</p>
<blockquote>
<p>"Equations in which the unknown function or the vector function appears under the sign of the derivative or the differential are called differential equations" <em>(L. Elsgolts - Differential Equations & the Calculus of Variations)</em></p>
</blockquote>
<p>Here is another:</p>
<blockquote>
<p>"Let <span class="math-container">$f(x)$</span> define a function of <span class="math-container">$x$</span> on an interval <span class="math-container">$I: a < x < b$</span>. By an ordinary differential equation we mean an equation involving <span class="math-container">$x$</span>, the function <span class="math-container">$f(x)$</span> and one or more of its derivatives" <em>(Tenenbaum/Pollard - Ordinary Differential Equations)</em></p>
</blockquote>
<p>Here is another:</p>
<blockquote>
<p>"A differential equation is an equation that relates in a nontrivial way an unknown function & one or more of the derivatives or differentials of an unknown function with respect to one or more independent variables." <em>(Ross - Differential Equations)</em></p>
</blockquote>
<p>Here is another:</p>
<blockquote>
<p>"A differential equation is an equation relating some function <span class="math-container">$f$</span> to one or more of its derivatives." <em>(Krantz - Differential equations demystified)</em></p>
</blockquote>
<p>Now, you can see that while there is some minor variation between them (calling <span class="math-container">$f(x)$</span> the function instead of <span class="math-container">$f$</span>, or calling it a function instead of an equation), they all generally hint at the same thing.</p>
<p>However:</p>
<blockquote>
<p>"Let <span class="math-container">$U$</span> be an open domain of n-dimensional euclidean space, & let <span class="math-container">$v$</span> be a vector field in <span class="math-container">$U$</span>. Then by the differential equation determined by the vector field <span class="math-container">$v$</span> is meant the equation <span class="math-container">$x' = v(x), x \in U$</span>.</p>
<p>Differential equations are sometimes said to be equations containing unknown functions and their derivatives. This is false. For example, the equation <span class="math-container">$\frac{dx}{dt} = x(x(t))$</span> is not a differential equation." <em>(Arnold - Ordinary Differential Equations)</em></p>
</blockquote>
<p>This is quite different and the last comment basically says that all of the
above definitions, in all of the standard textbooks, are in fact incorrect.</p>
<p>Would anyone care to expand upon this point if it is of interest as some of you
might know about Arnold's book & perhaps be able to give some clearer examples than
<span class="math-container">$\frac{dx}{dt} = x(x(t))$</span>, I honestly can't even see how to make sense of <span class="math-container">$\frac{dx}{dt} = x(x(t))$</span>.
The more explicit (and with more detail) the better!</p>
<p>A second question I would really appreciate an answer to would be -
is there any other book that takes the view of differential equations
that Arnold does? I can't find any elementary book that starts by
defining differential equations in the way Arnold does and then goes on
to work in phase spaces etc. Multiple references welcomed.</p>
| Pacciu | 8,553 | <p>When I was a student I was taught the following definition:</p>
<blockquote>
<p>Let <span class="math-container">$N\in \mathbb{N}$</span>, <span class="math-container">$U\subseteq \mathbb{R}^{N+2}$</span> and <span class="math-container">$F:U\to \mathbb{R}$</span>.</p>
<p>Then the <em><span class="math-container">$N^{th}$</span> order ordinary differential equation</em> (<em>in implicit form</em>) corresponding to <span class="math-container">$F$</span> is the <strong>problem</strong> of finding all the non-degenerate intervals <span class="math-container">$I\subseteq \mathbb{R}$</span> and all the functions <span class="math-container">$y:I\to \mathbb{R}$</span> such that the following hold:</p>
<ol>
<li><p>Each <span class="math-container">$I\subseteq \text{proj}_1 U$</span> (i.e. <span class="math-container">$I$</span> is a subset of the projection of <span class="math-container">$U$</span> onto the first coordinate direction);</p></li>
<li><p><span class="math-container">$\text{proj}_N u\neq 0$</span> for some <span class="math-container">$u\in U$</span> (so that the ODE is <em>actually</em> <span class="math-container">$N^{th}$</span> order); and,</p></li>
<li><p><span class="math-container">$(x,y(x),y^\prime (x), \ldots , y^{(N)}(x))\in U$</span> and <span class="math-container">$F(x,y(x),y^\prime (x),\ldots ,y^{(N)}(x))=0$</span> for each <span class="math-container">$x\in \text{int }I$</span>.</p></li>
</ol>
<p>This problem can be denoted for short as:
<span class="math-container">$$F(x,y,y^\prime, \ldots ,y^{(N)})=0\; .$$</span></p>
<p>If the function <span class="math-container">$F$</span> is of the type:
<span class="math-container">$$F(x,y_0,y_1,\ldots ,y_N)=f(x,y_0,y_1,\ldots ,y_{N-1})-y_N$$</span>
then the differential equation is said to be in <em>normal</em> (or <em>explicit</em>) <em>form</em> and it can be denoted for short as:
<span class="math-container">$$y^{(N)}=f(x,y,y^\prime ,\ldots,y^{(N-1)})\; .$$</span></p>
</blockquote>
<p>What do you think about it?</p>
|
378,966 | <p>$$A_t-A_{xx} = \sin(\pi x)$$
$$A(0,t)=A(1,t)=0$$
$$A(x,t=0)=0$$
Find $A$.</p>
<p>I know I need to find the homogeneous and particular solutions. I'm just not sure how to proceed with this PDE.</p>
| Daryl | 36,034 | <p>Since the non-homogeneity depends only on $x$, we can assume a solution of the form $A(x,t)=u(x,t)+\phi(x)$.</p>
<p>Substituting this into the PDE gives
$$u_t-u_{xx}-\phi_{xx}=\sin(\pi x).$$
Choosing $\phi(x)$ such that $-\phi_{xx}=\sin(\pi x)$, means that $u$ only needs to satisfy a homogeneous PDE. </p>
<p>Note that the boundary conditions on $u$ will change with this assumed solution.</p>
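<p>For concreteness, the sketch can be completed as follows (a routine check; the particular solution is found by inspection as a multiple of $\sin(\pi x)$). Taking
$$\phi(x)=\frac{\sin(\pi x)}{\pi^2}$$
gives $-\phi_{xx}=\sin(\pi x)$ with $\phi(0)=\phi(1)=0$. Then $u$ satisfies the homogeneous heat equation $u_t=u_{xx}$ with $u(0,t)=u(1,t)=0$ and $u(x,0)=-\phi(x)$, so $u(x,t)=-\frac{e^{-\pi^2 t}}{\pi^2}\sin(\pi x)$, and
$$A(x,t)=\frac{1-e^{-\pi^2 t}}{\pi^2}\,\sin(\pi x),$$
which one can verify satisfies the PDE, both boundary conditions, and the initial condition.</p>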
|
3,613,854 | <p>Let <span class="math-container">$$A=\begin{bmatrix}
3 & 2 \\
2 & 3
\end{bmatrix}.$$</span>
Find the spectral decomposition of <span class="math-container">$A$</span>. This is <span class="math-container">$$A=VDV^{-1}=\begin{bmatrix}
-1 & 1 \\
1 & 1
\end{bmatrix}\begin{bmatrix}
1 & 0 \\
0 & 5
\end{bmatrix}\begin{bmatrix}
-1/2 & 1/2 \\
1/2 & 1/2
\end{bmatrix}.$$</span>
My question how do I find <span class="math-container">$2^A$</span>? Thanks for your help.</p>
| SMA.D | 255,648 | <p>For a diagonalizable matrix <span class="math-container">$A=VDV^{-1}$</span>:
<span class="math-container">$$f(A) = V\begin{bmatrix}f(d_1)&0 \\ 0&f(d_2)\end{bmatrix}V^{-1}$$</span>
Hence
<span class="math-container">$$ 2^A = V\begin{bmatrix}2^1&0 \\ 0&2^5\end{bmatrix}V^{-1}$$</span></p>
<p>See <a href="https://en.wikipedia.org/wiki/Matrix_function" rel="nofollow noreferrer">here</a> for more information.</p>
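<p>A quick numeric sketch of this (NumPy, with the decomposition given in the question):</p>

```python
import numpy as np

V = np.array([[-1.0, 1.0],
              [1.0, 1.0]])
D = np.diag([1.0, 5.0])
V_inv = np.array([[-0.5, 0.5],
                  [0.5, 0.5]])

# f(A) = V f(D) V^{-1}, with f(d) = 2^d applied to the diagonal entries
two_pow_A = V @ np.diag(2.0 ** np.diag(D)) @ V_inv

# consistency check: the decomposition reconstructs the original matrix
A = V @ D @ V_inv
assert np.allclose(A, [[3, 2], [2, 3]])
assert np.allclose(two_pow_A, [[17, 15], [15, 17]])
```

<p>So $2^A=\begin{bmatrix}17&15\\15&17\end{bmatrix}$.</p>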
|
1,355,684 | <p>How do I find the lower and upper foci of a hyperbola? </p>
<p>I started with this
$$ 9x^2 + 54x - y^2 + 10y + 81 = 0 $$</p>
<p>and broke it down to</p>
<p>$$ \frac{9(x+3)^2}{25} - \frac{(y-5)^2}{25} = -1 $$</p>
<p>center = (-3,5)
Lower Vertex = (-3,0)
Upper Vertex = (-3,10)</p>
<p>How to get the foci? </p>
<p>foci / focus = (h, k +- c)</p>
<p>b = 5 but what is a?</p>
<p>$$ c^2 = a^2 + b^2 $$</p>
<p>Thank you.</p>
| Bassball Batman | 169,579 | <p>The cool thing about 37 and 111 is this: the three identical digits divided by their sum equals 37. Another fun fact is that 3 × 7 × 37 = 777, which times 13 equals 10,101 (and thus, 151,515 and 474,747)! The product of 7 × 13 (that is, 91) always goes evenly into multiples of 10,101 because...</p>
<ul>
<li>91 × 11 = 1,001</li>
<li>91 × 111 = 10,101</li>
</ul>
<p>The logic, 111 as a factor of 10,101 (in any base) is not obvious at first glance, but there are two easy ways to prove it is: times 11 = three sets of 11 side-by-side -- 10,101 × 11 = 111,111 -- easily revealing that factor 111! The alternative: add or subtract a multiple, namely 999, from any three-digit set within, such as switching the middle digit between commas:</p>
<ul>
<li>1<strong>1</strong>1,0<strong>0</strong>0 ➡ 1<strong>0</strong>1,0<strong>1</strong>0</li>
<li>1<strong>1</strong>1,5<strong>5</strong>5 ➡ 1<strong>5</strong>1,5<strong>1</strong>5</li>
<li>4<strong>4</strong>4,7<strong>7</strong>7 ➡ 4<strong>7</strong>4,7<strong>4</strong>7</li>
</ul>
<p>I just proved (works in every base) 111 is a factor of 10,101! Multiples of 37 (and of 111) are just the coolest, ain't they?</p>
|
163,640 | <p>Early in a course in Algebra the result that every group can be embedded as a subgroup<br>
of a symmetric group is introduced. One can further work on it to embed it as a subgroup of a suitable (higher degree) alternating group.</p>
<p>Inverting the viewpoint, we can say that the family of simple groups $A_n, n\geq 5$, contains all finite groups as their subgroups.</p>
<p>My question now is, is the same true for each of the other infinite families listed in the Classification of Finite Simple Groups?</p>
<p>In case the answer to this question is negative it might lead to some categorization.
Cayley's embedding theorem is often considered a 'useless theorem',
as no result about that group can be proved using that embedding. (Is that correct?)
Other simple groups being somewhat more special (structure preserving maps of some non-trivial structure), we can categorize groups according to which infinite family(ies) they fall into.
And groups embeddable in a particular family, but not embeddable in another may exhibit some special property.</p>
<p>Hope this provides a motivation for the question.</p>
| YCor | 14,094 | <p>There are plenty of ways of characterizing bounded rank for finite simple groups. Let $G$ be a finite group. Define </p>
<ul>
<li>$r_n(G)$ as the largest $k$ such that $(\mathbf{Z}/n\mathbf{Z})^k$ embeds into $G$</li>
<li>$\mathrm{nc}(G)$ the largest $k$ such that there exist non-abelian subgroups $H_1\dots,H_k$ such that $[H_i,H_j]=1$ for all $i\neq j$, and $\mathrm{ns}(G)$ the same with non-solvable subgroups.</li>
<li>$\rho(G)$ the smallest dimension of a faithful representation for $G$ (over any field of any characteristic, finite fields being enough).</li>
<li>$\mathrm{lni}(G)$ is the largest nilpotency length of a nilpotent subgroup of $G$, and $\mathrm{lso}(G)$ is the largest solvability length of a solvable subgroup of $G$.</li>
<li>$a(G)$ the largest $n$ such that $\mathrm{Alt}_n$ embeds into $G$</li>
</ul>
<p>Then if $(S_n)$ is a sequence of finite simple groups, the conditions $r_6(S_n)\to\infty$, $\mathrm{nc}(S_n)\to\infty$, $\mathrm{ns}(S_n)\to\infty$, $\rho(S_n)\to\infty$, $r_m(S_n)\to\infty$ where $m$ is any number with at least 2 distinct prime divisors, $\mathrm{lni}(S_n)\to\infty$, $\mathrm{lso}(S_n)\to\infty$, $a(S_n)\to\infty$, are all equivalent. The same holds with "$\to\infty$" replaced by "is bounded". This is just a sample: many variants are possible, and follow from the classification with not much effort. Of course this dramatically fails for arbitrary finite groups, for which "being of bounded rank" is not univocally defined.</p>
|
3,910,739 | <p>I am trying to find a pdf for a random variable <span class="math-container">$X$</span> where <span class="math-container">$X=-2Y+1$</span> and <span class="math-container">$Y$</span> is given by <span class="math-container">$N(4,9)$</span></p>
<p>Here is my attempt:</p>
<p>We know <span class="math-container">$\mu=4$</span> and <span class="math-container">$\sigma=3$</span>, so the density of <span class="math-container">$Y$</span> is given by <span class="math-container">$\frac{1}{3\sqrt{2\pi}}e^\frac{-(y-4)^2}{18}$</span>.<br />
We can differentiate the cumulative function of <span class="math-container">$X$</span> to get the pdf for <span class="math-container">$X$</span>.<br />
cdf of <span class="math-container">$X = P(X<x)$</span> = <span class="math-container">$P(-2Y+1<x)=P(Y<\frac{-(x-1)}{2})=\int_{-\infty}^{\frac{-(x-1)}{2}}\frac{1}{3\sqrt{2\pi}}e^\frac{-(y-4)^2}{18}dy$</span><br />
so <span class="math-container">$\frac{d}{dx}(\int_{-\infty}^{\frac{-(x-1)}{2}}\frac{1}{3\sqrt{2\pi}}e^\frac{-(y-4)^2}{18}dy)=f(x)$</span>, which is the density function for <span class="math-container">$X$</span><br />
<span class="math-container">$f(x)=-\frac{1}{6\sqrt{2\pi}}e^\frac{-(\frac{-(x-1)}{2}-4)^2}{18}$</span></p>
<p>Is this a correct way to approach the problem? I feel like my answer is very funky.</p>
| Kavi Rama Murthy | 142,385 | <p><span class="math-container">$-2Y+1 <x$</span> is not equivalent to <span class="math-container">$Y <-\frac {x-1} 2$</span>.</p>
<p><span class="math-container">$P(X\leq x)=P(-2Y+1 \leq x)=P(Y \geq \frac {1-x} 2)=\int_{\frac {1 -x} 2}^{\infty} f_Y (t)\,dt$</span> where <span class="math-container">$f_Y$</span> is the density of <span class="math-container">$Y$</span>, i.e. the <span class="math-container">$N(4,9)$</span> density. Hence the density of <span class="math-container">$X$</span> is <span class="math-container">$\frac 1 2 f_Y (\frac {1-x} 2)$</span>, so <span class="math-container">$X \sim N(-7,36)$</span>.</p>
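<p>Since $X=-2Y+1$ is an affine transform of a normal variable, $X \sim N(-7, 36)$, and the change-of-variables formula can be cross-checked numerically (a sketch with SciPy):</p>

```python
import numpy as np
from scipy.stats import norm

f_Y = norm(loc=4, scale=3)   # Y ~ N(4, 9), i.e. standard deviation 3
f_X = norm(loc=-7, scale=6)  # X = -2Y + 1 ~ N(-7, 36), i.e. standard deviation 6

xs = np.linspace(-30, 15, 200)
# density of X from the change of variables: (1/2) f_Y((1 - x)/2)
derived = 0.5 * f_Y.pdf((1 - xs) / 2)
assert np.allclose(derived, f_X.pdf(xs))
```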
|
4,032,983 | <p>I would like to know math websites that are useful for students, PhD students and researchers (useful in the sense most of the students or researchers—of a particular area—are using it). Maybe you can share which math websites you sometime use and why you use it.</p>
<p>Let me give my websites and why I use them:</p>
<p>MathOverflow and Stack Exchange: Very good to look up questions about how to do mathematics, what mathematics is, how it evolves, and of course to have an exchange with smart people, who can help you if you are stuck at something.</p>
<p>arXiv: I don't use it now, but I think that's the main website to read papers and to publish (research level)</p>
<p>The Stacks Project: It is very useful for me to look algebraic things ups with more explanation than in some lectures...</p>
<p>MathSciNet: To search for papers that may help you.</p>
<p>Number Theory Web: Since I want to become a number theorist, this site helps me to see which kind of things exist.</p>
<p>Do you have more websites that help you to understand maybe mathematics, or just to connect to other mathematicians?</p>
<p>Everything is welcome. Just post it as an answer and not a comment.</p>
| RavenclawPrefect | 214,490 | <p>On /r/math, there is <a href="https://www.reddit.com/r/math/comments/8ewuzv/a_compilation_of_useful_free_online_math_resources/" rel="nofollow noreferrer">A Compilation of Useful, Free, Online Math Resources</a>. It is somewhat geared towards students in scope, but references many tools used by research mathematicians as well (see especially the comment on subject-specific resources). It contains most of the links already in this thread, and several not yet listed here - a selection of the latter group of links:</p>
<ul>
<li><a href="http://www.openproblemgarden.org/" rel="nofollow noreferrer">Open Problem Garden</a></li>
<li><a href="http://www.proofwiki.org/wiki/Main_Page" rel="nofollow noreferrer">Proof Wiki</a></li>
<li><a href="https://fermatslibrary.com/librarian" rel="nofollow noreferrer">Librarian</a> is a Chrome extension that provides easy access to references, BibTeX, and comments for all papers on the arXiv.</li>
<li><a href="https://isc.carma.newcastle.edu.au/" rel="nofollow noreferrer">The Inverse Symbolic Calculator</a> takes in a constant and tries to find a closed form for it.</li>
<li><a href="https://approach0.xyz/" rel="nofollow noreferrer">Approach0</a> is a mathematical search engine, particularly useful for cases where you would like to search for a specific formula that e.g. may use different variable names.</li>
<li><a href="https://groupprops.subwiki.org/wiki/Main_Page" rel="nofollow noreferrer">Groupprops, the group properties wiki</a></li>
<li><a href="https://math.la.asu.edu/%7Ejj/localfields/" rel="nofollow noreferrer">Database of Local Fields</a></li>
<li><a href="http://katlas.org/wiki/Main_Page" rel="nofollow noreferrer">The Knot Atlas</a></li>
</ul>
|
3,234,217 | <p>Let <span class="math-container">$a,b,c \in \mathbb{R},$</span> <span class="math-container">$\vec{v_1}=\begin{pmatrix}1\\4\\1\\-2 \end{pmatrix},$</span> <span class="math-container">$\vec{v_2}=\begin{pmatrix}-1\\a\\b\\2 \end{pmatrix},$</span> and <span class="math-container">$\vec{v_3}=\begin{pmatrix}1\\1\\1\\c \end{pmatrix}.$</span> What are the conditions on the numbers <span class="math-container">$a,b,c$</span> so that the three vectors are linearly dependent in <span class="math-container">$\mathbb{R}^4$</span>? I know that the usual method of solving this is to show that there exist scalars <span class="math-container">$x_1,x_2,x_3$</span> not all zero such that
<span class="math-container">\begin{align}
x_1\vec{v_1}+x_2\vec{v_2}+x_3\vec{v_3}=\vec{0}.
\end{align}</span></p>
<p>Doing this would naturally lead us to the augmented matrix
<span class="math-container">\begin{pmatrix}
1 & -1 & 1 &0\\
4 & a & 1 &0\\
1& b & 1 &0\\
-2 & 2 & c &0\\
\end{pmatrix}</span></p>
<p>Doing some row reduction would lead us to the matrix</p>
<p><span class="math-container">\begin{pmatrix}
1 & -1 & 1 &0\\
4 & a & 1 &0\\
0& b+1 & 0 &0\\
0 & 0 & c+2 &0\\
\end{pmatrix}</span>
I'm not quite sure how to proceed after this. Do I split into cases according to whether <span class="math-container">$b+1$</span> or <span class="math-container">$c+2$</span> is zero or nonzero? </p>
| PierreCarre | 639,238 | <p>Just build a matrix with these vectors as rows and perform row reduction. The vectors will be linearly dependent if, after reduction, at least one row consists of zeros. The idea is that the rank of a matrix is the maximum number of linearly independent rows (or columns), hence, the rows will be linearly dependent if and only if <span class="math-container">$r(A) < 3$</span>.</p>
<p><span class="math-container">$$
\begin{pmatrix} 1 & 4 & 1& -2 \\ -1 & a & b & 2\\ 1 & 1 & 1 & c\end{pmatrix}\to
\begin{pmatrix} 1 & 4 & 1& -2 \\ 0 & a+4 & b+1 & 0\\ 0 & -3 & 0 & c+2\end{pmatrix}\to
\begin{pmatrix} 1 & 4 & 1& -2 \\ 0 & -3 & 0 & c+2\\ 0 & a+4 & b+1 & 0\end{pmatrix} \to
$$</span></p>
<p><span class="math-container">$$
\begin{pmatrix} 1 & 4 & 1& -2 \\ 0 & -3 & 0 & c+2\\ 0 & 0 & b+1 & \frac{(c+2)(a+4)}{3}\end{pmatrix}
$$</span></p>
<p>So the vectors are linearly dependent if and only if the last row is filled with zeros, i.e. <span class="math-container">$b = -1 \wedge (a=-4 \vee c=-2)$</span>.</p>
|
3,234,217 | <p>Let <span class="math-container">$a,b,c \in \mathbb{R},$</span> <span class="math-container">$\vec{v_1}=\begin{pmatrix}1\\4\\1\\-2 \end{pmatrix},$</span> <span class="math-container">$\vec{v_2}=\begin{pmatrix}-1\\a\\b\\2 \end{pmatrix},$</span> and <span class="math-container">$\vec{v_3}=\begin{pmatrix}1\\1\\1\\c \end{pmatrix}.$</span> What are the conditions on the numbers <span class="math-container">$a,b,c$</span> so that the three vectors are linearly dependent in <span class="math-container">$\mathbb{R}^4$</span>? I know that the usual method of solving this is to show that there exist scalars <span class="math-container">$x_1,x_2,x_3$</span> not all zero such that
<span class="math-container">\begin{align}
x_1\vec{v_1}+x_2\vec{v_2}+x_3\vec{v_3}=\vec{0}.
\end{align}</span></p>
<p>Doing this would naturally lead us to the augmented matrix
<span class="math-container">\begin{pmatrix}
1 & -1 & 1 &0\\
4 & a & 1 &0\\
1& b & 1 &0\\
-2 & 2 & c &0\\
\end{pmatrix}</span></p>
<p>Doing some row reduction would lead us to the matrix</p>
<p><span class="math-container">\begin{pmatrix}
1 & -1 & 1 &0\\
4 & a & 1 &0\\
0& b+1 & 0 &0\\
0 & 0 & c+2 &0\\
\end{pmatrix}</span>
I'm not quite sure how to proceed after this. Do I split into cases according to whether <span class="math-container">$b+1$</span> or <span class="math-container">$c+2$</span> is zero or nonzero? </p>
| amd | 265,466 | <p>Fleshing out a comment by Bernard, the rank of a matrix is equal to the order of its largest nonzero minor. Writing the three vectors as rows to save vertical space, for the three vectors to be linearly dependent we want the matrix <span class="math-container">$$A=\begin{bmatrix}1&4&1&-2 \\ -1&a&b&2 \\ 1&1&1&c \end{bmatrix}$$</span> to be rank-deficient. That occurs when all of the determinants of the matrices obtained by deleting one column of <span class="math-container">$A$</span> vanish. This generates the following system of equations: <span class="math-container">$$3b+3 = 0 \\ ac+2a+4c+8 = 0 \\ bc+2b+c+2=0 \\ ac-4bc+2a-2b+6 = 0.$$</span> This isn’t the most efficient way to solve this particular problem, but this relation between the rank of a matrix and its minors is worth knowing. It can come in handy in other contexts.</p>
|
1,474,123 | <p>I have tried to use u-substitution but for some reason am not doing it right and thus not getting the correct answer. I want to know the most obvious/ intuitive way to solve this integral.</p>
| David Vallis | 279,109 | <p>$x=z$, $\mathrm{d}x=z\sec^{2}(k) \mathrm{d}k$, $(a^2+x^2)^{\frac32}=z^4$</p>
|
3,098,587 | <p>I have no answers to refer to, so it would be great if someone could check whether my procedure for solving the following problem is correct. Also, I am struggling to solve for event B from part b) - any tips would be much appreciated! I'm preparing for an exam, hence it is of vital importance. Thank you! </p>
<p>Consider the following function <span class="math-container">$f$</span> on <span class="math-container">$I=[2,∞): f(x)=ax^{-3}$</span> where <span class="math-container">$a$</span> is a certain constant.</p>
<p>a) Find <span class="math-container">$a$</span> such that <span class="math-container">$f$</span> is a probability density function.</p>
<blockquote>
<p>By rule it must hold that: <span class="math-container">$P(I)=\int_If(x)dx=1$</span></p>
</blockquote>
<p>From here follows:</p>
<blockquote>
<p><span class="math-container">$P(I)=\int_If(x)\,dx=\int^∞_2 ax^{-3}\,dx=\left[-\cfrac{a}{2}x^{-2}\right]^{x\to∞}_{x=2}=
\lim_{x\to∞}\left(-\cfrac{a}{2}x^{-2}\right) - \left(-\cfrac{a\cdot 2^{-2}}{2}\right)$</span>. The first part goes to
$0$ as $x\to∞$, so we are left with the second part:
<span class="math-container">$\cfrac{a\cdot 2^{-2}}{2}=1 \implies a=8$</span></p>
</blockquote>
<p>b) Determine the probabilities of the events <span class="math-container">$A=(4,∞)$</span> and <span class="math-container">$B=\{3\}$</span></p>
<blockquote>
<p><span class="math-container">$P\big((4,∞)\big)=\int^∞_4 8x^{-3}\,dx=\left[-4x^{-2}\right]^{x\to∞}_{x=4}=
\lim_{x\to∞}\left(-4x^{-2}\right) - \left(-4\cdot 4^{-2}\right)$</span>. Once again, as $x\to∞$ the first part goes to
$0$, so we are left with the second part only: <span class="math-container">$4\cdot 4^{-2}=1/4$</span></p>
</blockquote>
<p>For event B, though, I have absolutely no idea what to do. What limits do we put on the integral, both upper and lower limit 3? But that would lead us to an equation that equals 0 so I'm confused.</p>
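<p>For what it's worth, the integrals in the attempt can be sanity-checked numerically (a sketch with SciPy; the last line confirms the suspicion that integrating over the single point $3$ gives $0$):</p>

```python
from scipy.integrate import quad

f = lambda x: 8 * x**-3  # the candidate density with a = 8

total, _ = quad(f, 2, float('inf'))
assert abs(total - 1.0) < 1e-8   # f integrates to 1, so a = 8 works

p_A, _ = quad(f, 4, float('inf'))
assert abs(p_A - 0.25) < 1e-8    # P((4, inf)) = 1/4

p_B, _ = quad(f, 3, 3)
assert abs(p_B) < 1e-12          # a single point carries zero probability
```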
| Asatur Khurshudyan | 261,161 | <p>It seems that rotation indeed involves Euclids fifth postulate. Section 8 of <a href="http://www.michaelbeeson.com/research/papers/ConstructiveGeometryAndTheParallelPostulate.pdf" rel="nofollow noreferrer">this paper</a> is all about that.</p>
|
3,072,720 | <p>Let <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> be independent random variables, uniformly distributed between <span class="math-container">$0$</span> and <span class="math-container">$1$</span>; that is, with joint density <span class="math-container">$f_{xy}(x, y) = 1$</span> if <span class="math-container">$x \in [0,1]$</span> and <span class="math-container">$y \in [0,1]$</span>, and <span class="math-container">$f_{xy} (x, y) = 0$</span> otherwise. Let <span class="math-container">$W = (X + Y) / 2$</span>.</p>
<p>What is the distribution of <span class="math-container">$X|W=w$</span>?</p>
<p>I started considering something like this: <span class="math-container">$X|W = (x +y)/2$</span></p>
<p>But I stucked. I dont know how to begin.</p>
<p>Any idea? Hint?</p>
| Maxim | 491,644 | <p>The conditional pdf is given by
<span class="math-container">$$f_{X | W = w}(x) =
\frac {f_{X, W}(x, w)} {f_W(w)}.$$</span>
<span class="math-container">$f_W$</span> is the pdf of a sum of two independent uniformly distributed r.v.:
<span class="math-container">$$f_W(w) = 4 w \left[0 < w \leq \frac 1 2 \right] +
4 (1 - w) \left[\frac 1 2 < w < 1 \right].$$</span>
The transformation <span class="math-container">$(x,w) = (x, (x + y)/2)$</span> maps the square <span class="math-container">$0 < x < 1 \land 0 < y < 1$</span> to the parallelogram <span class="math-container">$0 < x < 1 \land 0 < 2 w - x < 1$</span> and has the Jacobian <span class="math-container">$\partial(x, w)/\partial(x, y) = 1/2$</span>. The <a href="https://en.wikipedia.org/wiki/Probability_density_function#Multiple_variables" rel="nofollow noreferrer">transformed pdf</a> is
<span class="math-container">$$f_{X, W}(x, w) = 2 \,[0 < x < 1 \land 0 < 2 w - x < 1] = \\
2 \left[ 0 < w \leq \frac 1 2 \land 0 < x < 2 w \right] +
2 \left[ \frac 1 2 < w < 1 \land 2 w - 1 < x < 1 \right].$$</span>
Consider what <span class="math-container">$f_{X | W = w}$</span> simplifies to when <span class="math-container">$w < 1/2$</span> and when <span class="math-container">$w > 1/2$</span>.</p>
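<p>Numerically, the simplification comes out as follows (a quick sketch in plain Python): for $w<1/2$ the ratio $f_{X,W}/f_W$ is the constant $1/(2w)$ on $0<x<2w$, i.e. $X \mid W = w$ is uniform on $(0, 2w)$ (and, symmetrically, uniform on $(2w-1,\,1)$ for $w>1/2$).</p>

```python
def f_W(w):
    # triangular density of W = (X + Y)/2
    return 4 * w if w <= 0.5 else 4 * (1 - w)

def f_XW(x, w):
    # joint density from the change of variables (value 2 on its support)
    return 2.0 if (0 < x < 1 and 0 < 2 * w - x < 1) else 0.0

w = 0.3  # example with w < 1/2; support of x is (0, 0.6)
vals = [f_XW(0.05 * k, w) / f_W(w) for k in range(1, 12)]
assert all(abs(v - 1 / (2 * w)) < 1e-12 for v in vals)
```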
|
4,586,527 | <p>Consider the following problem.</p>
<p><a href="https://i.stack.imgur.com/uzgHX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uzgHX.png" alt="enter image description here" /></a></p>
<p>I'm fairly new to probability theory, but have some experience with combinatorics. For that reason, after failing with a probabilistic approach, I approached the problem from a combinatoric perspective.</p>
<p>Firstly, notice there are <span class="math-container">$4!$</span> ways to arrange the four components. We do not care for permutations in the functioning components and therefore we will only consider <span class="math-container">$\frac{4!}{2!}=12$</span> orderings. Out of those <span class="math-container">$12$</span> orderings only two start with defective components. Then <span class="math-container">$P(Y=2)=\frac{2}{12}=\frac{1}{6}$</span>.</p>
<p>Similarly, one arrives at the conclusion that <span class="math-container">$P(Y=3)=\frac{1}{3}, P(Y=4)=\frac{1}{2}$</span>, and the problem seems to be solved.</p>
<p>However, I was still curious as to how one would formulate the problem in terms closer to probability theory. I attempted the following formulation, but I'm still unsure of whether it is formally correct.</p>
<hr />
<p>Let <span class="math-container">$S$</span> be the sample space where each <span class="math-container">$E\in S$</span> is a specific ordering of the components. Then there are <span class="math-container">$|S|=4!=24$</span> possible sample points. Let <span class="math-container">$A_i, B_j$</span> be the events that <span class="math-container">$A$</span> is the <span class="math-container">$i$</span>th component, <span class="math-container">$B$</span> the <span class="math-container">$j$</span>th component, respectively. Then it is clear that</p>
<p><span class="math-container">$$\begin{align} P(Y=2)&=P\Big((A_1 \cap B_2 ) \cup (A_2 \cap B_1)\Big) \\ & =P(A_1 \cap B_2) +P(A_2 \cap B_1) \\ &=\frac{1}{4}\frac{1}{3}+\frac{1}{4}\frac{1}{3} \\ &=\frac{1}{6} \end{align}$$</span></p>
<p>For <span class="math-container">$P(Y=3) $</span> the formulation is equivalent:</p>
<p><span class="math-container">$$\begin{align} P(Y=3)&=P\Big((A_1 \cap B_3) \cup (A_3 \cap B_1) \cup (A_2 \cap B_3) \cup (A_3 \cap B_2)\Big)\end{align}$$</span></p>
<p>which under the exact same logic gives <span class="math-container">$P(Y=3)=\frac{1}{3}$</span>. In the same manner, <span class="math-container">$P(Y=4)=\frac{1}{2}$</span>.</p>
<p>It is clear the results match. What I want to be sure about is whether my formulation in terms of probability theory is formally sound and correct, since I'm barely buildnig a basic understanding on the subject.</p>
| user469053 | 1,027,291 | <p>For this problem, your probabilistic approach is correct. But I would use the following alternative: The sample space consists of <span class="math-container">$\binom{4}{2}=6$</span> outcomes. You have four components (in order) and there are <span class="math-container">$\binom{4}{2}=6$</span> ways to choose a set <span class="math-container">$S\subset \{1,2,3,4\}$</span> of two defectives. Each of these sets <span class="math-container">$S$</span> is equally likely. For each set <span class="math-container">$S\subset \{1,2,3,4\}$</span> of two defectives, <span class="math-container">$\max S$</span> is the time the second defective is detected. There is one such <span class="math-container">$S$</span> with <span class="math-container">$\max S=2$</span>, two with <span class="math-container">$\max S=3$</span>, and three with <span class="math-container">$\max S=4$</span>. Dividing each of these numbers by <span class="math-container">$6$</span> gives your probabilities <span class="math-container">$1/6$</span>, <span class="math-container">$1/3$</span>, and <span class="math-container">$1/2$</span>.</p>
<p>This approach may be simpler/more generalizable. For fixed <span class="math-container">$n,k,r\geqslant 1$</span> (<span class="math-container">$ k\leqslant r\leqslant n$</span>), there are <span class="math-container">$\binom{r-1}{k-1}$</span> ways to choose <span class="math-container">$S\subset \{1, \ldots, n\}$</span> with <span class="math-container">$|S|=k$</span> and <span class="math-container">$\max S=r$</span>. This is because the largest member of <span class="math-container">$S$</span> is <span class="math-container">$r$</span>, and there are <span class="math-container">$\binom{r-1}{k-1}$</span> ways to choose the <span class="math-container">$k-1$</span> smaller elements of <span class="math-container">$S$</span>. Applying this with <span class="math-container">$n=4$</span>, <span class="math-container">$k=2$</span> and <span class="math-container">$r=2,3,4$</span> yields the counts of <span class="math-container">$1$</span>, <span class="math-container">$2$</span>, and <span class="math-container">$3$</span> I mentioned above.</p>
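<p>The enumeration is small enough to reproduce by brute force; a sketch with <code>itertools</code>:</p>

```python
from itertools import combinations
from collections import Counter
from fractions import Fraction

# each 2-subset of {1,2,3,4} marks the positions of the two defectives;
# max(S) is the test on which the second defective is found
counts = Counter(max(S) for S in combinations(range(1, 5), 2))
total = sum(counts.values())  # 6 equally likely outcomes

probs = {r: Fraction(counts[r], total) for r in counts}
assert probs == {2: Fraction(1, 6), 3: Fraction(1, 3), 4: Fraction(1, 2)}
```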
|
986,754 | <p>So I'm kind of stuck on this question and I don't exactly know how to describe this on the title header and I apologize... </p>
<blockquote>
<p>For some values of $x$, the assignment statement $y := 1-\cos(x)$ involves a difficulty. What is the difficulty? What values of $x$ are involved? What remedy do you propose to resolve this difficulty?</p>
</blockquote>
<p>I know that this question does seem bleak and looks confusing, but any help I get would be appreciated! Thank you!</p>
| Spencer | 71,045 | <p>Another remedy, </p>
<blockquote class="spoiler">
<p> $$1-\cos(x) = 2\sin^2(x/2)$$</p>
</blockquote>
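<p>The difficulty (catastrophic cancellation near $x=0$) and the remedy are easy to exhibit in double precision; a quick sketch:</p>

```python
import math

x = 1e-9
naive = 1 - math.cos(x)            # cos(x) rounds to exactly 1.0: total loss
stable = 2 * math.sin(x / 2) ** 2  # ~ x**2 / 2 = 5e-19, computed accurately

assert naive == 0.0
assert abs(stable - 5e-19) < 1e-24
```

<p>The naive form loses every significant digit for small $x$, while $2\sin^2(x/2)$ keeps full precision.</p>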
|
4,246,719 | <p>Consider two random variables <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>, both distributed as a <a href="https://en.wikipedia.org/wiki/Gumbel_distribution" rel="nofollow noreferrer">Gumbel</a> with location 0 and scale 1.</p>
<p>Let <span class="math-container">$Z\equiv X-Y$</span>.</p>
<p>We know that if the two variables are <strong>independent</strong>, then <span class="math-container">$Z$</span> is Logistic with location 0 and scale 1. Hence,</p>
<p><span class="math-container">$$
\Pr(Z\leq z)=\frac{1}{1+\exp(-z)}
$$</span></p>
<p>Suppose now that <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are <strong>correlated</strong> with correlation parameter <span class="math-container">$\rho$</span>. Can we still write down a closed form expression for <span class="math-container">$
\Pr(Z\leq z)$</span>?</p>
| greg | 357,854 | <p><span class="math-container">$
\def\a{\alpha}\def\b{\beta}
\def\o{{\tt1}}\def\p{\partial}
\def\E{{\cal E}}\def\F{{\cal F}}\def\G{{\cal G}}
\def\L{\left}\def\R{\right}\def\LR#1{\L(#1\R)}
\def\vec#1{\operatorname{vec}\LR{#1}}
\def\trace#1{\operatorname{Tr}\LR{#1}}
\def\grad#1#2{\frac{\p #1}{\p #2}}
\def\c#1{\color{red}{#1}}
$</span>If you absolutely need the tensor-valued gradient, then you have several options.</p>
<p>Perhaps the simplest approach is to take Ben Grossmann's matrix-valued gradient
<span class="math-container">$$\eqalign{
G_{\a\b} = \grad{x_\a}{y_\b} \quad\iff\quad
G = \grad{x}{y} = \grad{\vec{X}}{\vec{Y}} \\
}$$</span>
and reverse the Kronecker-vec indexing
<span class="math-container">$$\eqalign{
x &\in {\mathbb R}^{n^2\times\o} \implies
X \in {\mathbb R}^{n\times n} \\
x_{\a} &= X_{ij} \\
\a &= i+(j-1)\,n \\
i &= \o+(\a-1)\,{\rm mod}\,n \\
j &= \o+(\a-1)\,{\rm div}\,n \\
\\
y &\in {\mathbb R}^{mn\times\o} \implies
Y \in {\mathbb R}^{m\times n} \\
y_{\b} &= Y_{k\ell} \\
\b &= k+(\ell-1)\,m \\
k &= \o+(\b-1)\,{\rm mod}\,m \\
\ell &= \o+(\b-1)\,{\rm div}\,m \\
}$$</span>
to recover the tensor-valued gradient
<span class="math-container">$$\eqalign{
G &\in {\mathbb R}^{n^2\times mn}
\implies \Gamma\in {\mathbb R}^{n\times n\times m\times n} \\
G_{\a\b} &= \Gamma_{ijk\ell} = \grad{X_{ij}}{Y_{k\ell}} \\
}$$</span></p>
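<p>The index bookkeeping is easy to get wrong, so here is a small round-trip check of the maps above (hypothetical helper names; 1-based, column-major vec ordering):</p>

```python
def to_pair(flat, rows):
    """Recover 1-based (i, j) from the vec index flat = i + (j-1)*rows."""
    i = 1 + (flat - 1) % rows
    j = 1 + (flat - 1) // rows
    return i, j

def to_flat(i, j, rows):
    return i + (j - 1) * rows

n, m = 3, 4
# alpha runs over 1..n^2 for X (n x n); beta over 1..m*n for Y (m x n)
assert all(to_flat(*to_pair(a, n), n) == a for a in range(1, n * n + 1))
assert all(to_flat(*to_pair(b, m), m) == b for b in range(1, m * n + 1))
print("vec index round-trip OK")
```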
|
2,421,771 | <p>I’m attempting to explain curvature in layman’s terms to my class before explaining the formula. I like to do this first to give my students an idea of what we are finding. </p>
<p>Some people explain curvature as a “measure of how fast a curve is changing direction at a given point.” But this seems misleading to me. It seems to me that when someone says “how fast,” most people interpret that as a change in direction per unit time. But time has nothing to do with it, correct? </p>
<p>For example the curvature is the same for any particular curve regardless of the speed of its parametrization. So time has nothing to do with it. The way I’ve been explaining curvature in laymen’s terms is that it is a “measure of how ‘hard’ a curve is changing direction at a given point”. </p>
<p>Do you think this is sufficient for a laymen’s term explanation without leading the students astray with a time component. Perhaps you have another way to explain it? Or perhaps other people’s laymen definition is perfectly fine and I’m over analyzing. </p>
<p>Advice? </p>
| operatorerror | 210,391 | <p>Time can be an ok way to talk about it. Take a particle travelling along a curve, maybe defined by $x(t)$ or maybe it happens to be the graph of some function of time. Then, curvature is measuring how the velocity is changing, encoding the sign (think concavity/convexity) of this change as well.</p>
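<p>The speed-independence can also be shown numerically (an illustrative sketch): the plane-curve curvature $\kappa = |x'y''-y'x''|/(x'^2+y'^2)^{3/2}$ of a circle of radius $2$ comes out as $1/2$ under both a unit-speed and a double-speed parametrization.</p>

```python
import math

def curvature(x, y, t, h=1e-5):
    # Central finite differences for a plane curve (x(t), y(t))
    x1 = (x(t + h) - x(t - h)) / (2 * h)
    y1 = (y(t + h) - y(t - h)) / (2 * h)
    x2 = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
    y2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    return abs(x1 * y2 - y1 * x2) / (x1**2 + y1**2) ** 1.5

R = 2.0
slow = curvature(lambda t: R * math.cos(t),     lambda t: R * math.sin(t),     0.7)
fast = curvature(lambda t: R * math.cos(2 * t), lambda t: R * math.sin(2 * t), 0.7)
print(slow, fast)  # both ≈ 1/R = 0.5, despite the different speeds
```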
|
15,237 | <p><a href="https://matheducators.stackexchange.com/questions/176/knowing-mathematics-does-not-translate-to-knowing-to-teach-mathematics-why">A question</a> has been asked about why great mathematicians are not necessarily great teachers. On the other hand, I am wondering if knowing more mathematics actually helps with one's teaching of lower level courses in mathematics. For example, I believe that a good student with bachelor's degree in mathematics should have sufficient knowledge to teach calculus. However, how does having a master's or doctorate degree help one's teaching in calculus, if any at all?</p>
<p>I am teaching calculus now and I do not understand commutative algebra; I took a course on commutative algebra a long time ago; I did poorly in the course and now I can hardly recall anything from it. If I invest a substantial amount of time studying this subject well now, will it help me in my calculus course in any sense?</p>
| KCd | 893 | <p>The title of the question asks how knowing "more about math" can help in teaching calculus, while the question itself asks specifically about the math learned towards a masters degree or a doctorate. I am going to focus on the "more about math" part, regardless of the stage at which it was acquired, because the math that one person learned in grad school could be learned as an undergraduate by someone else.</p>
<p>A solid understanding of analysis of all kinds (such as real analysis, complex analysis, and Fourier analysis) gives an instructor in a calculus course a thorough mastery of the intricacies behind the often confusing topic of infinite series. It means the instructor understands what makes all those convergence tests work and really understands the math behind boundary behavior of power series ("at the endpoints") or forming power series expansions of the same function around different points. This can help in the construction of examples for the class that emphasize a certain issue and it means the instructor should be up to the challenge of answering almost any non-routine question calculus students may ask about infinite series. </p>
<p>Of course a calculus instructor who has not taken a lot of analysis courses could teach himself/herself why all the convergence tests for series work, but I think the experience of using series in more advanced coursework gives you a perspective on the value of this topic and how it gets used later that someone whose analysis knowledge ends at calculus doesn't have. It's kind of like taking high school French from someone who just knows high school French very well compared to taking high school French from someone who is actually fluent in French (assuming that both instructors actually want to teach French!). Maybe on most days the students won't notice a difference, but as soon as someone asks about something going even slightly beyond the regular curriculum, the instructor with more limited knowledge will be at a loss to answer the questions or point out connections that more mathematically experienced instructors know well.</p>
<p>Since you bring up commutative algebra in your question, let me give an example of how one topic from that could help an instructor understand calculus better: completions of commutative rings include as a special case the rings of <em>formal power series</em> <span class="math-container">$\mathbf R[[x]]$</span> and <span class="math-container">$\mathbf R[[x,y]]$</span>. Some calculations with power series really work if you use just a formal manipulation of the terms, while others are genuinely numerical (convergence issues are essential for the result to even mean anything). When I was first learning calculus, I found the idea of division of power series (e.g., the ratio <span class="math-container">$\sin x/\cos x = (x - x^3/3! + \ldots)/(1 - x^2/2! + \ldots)$</span> near <span class="math-container">$x = 0$</span>) a bit puzzling. Knowing about both analytic functions in complex analysis and formal power series made me much more comfortable with division of power series. (For the record, my understanding of formal power series actually came from my reading about <span class="math-container">$p$</span>-adic analysis rather than commutative algebra.)</p>
<p>Knowing some number theory and complex analysis gave me a much clearer idea of what is going on with integration of rational functions <em>in general</em>, which if you look in a calculus textbook appears to be a large number of disconnected different cases, of which we may teach only 1 or 2 to avoid excessive hand calculation. If a student were to ask me for some <em>intuition</em> behind decompositions like <span class="math-container">$1/(x^3+x) = 1/x - x/(x^2+1)$</span>, I can point out a numerical analogy: <span class="math-container">$7/(3 \cdot 5)$</span> can be broken up into some number of thirds and some number of fifths: <span class="math-container">$7/15 = 2/3 - 1/5$</span>. The principle is that when you have a denominator that is a product of "relatively prime parts", you should expect to be able to break it up into a sum with those separate parts as their own denominator. It works for denominators of fractions just as much as it works for denominators of rational functions. I don't know what other people would say if a student asked for intuition behind the partial fraction decomposition process: it just works?</p>
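<p>The numerical analogy can be checked with exact rational arithmetic (a trivial illustration, not part of the original argument):</p>

```python
from fractions import Fraction

# 7/15 splits into thirds and fifths because gcd(3, 5) = 1
assert Fraction(2, 3) - Fraction(1, 5) == Fraction(7, 15)

# The decomposition comes from solving 7 = 5a + 3b: a = 2, b = -1
a, b = 2, -1
assert Fraction(a, 3) + Fraction(b, 5) == Fraction(7, 15)
print("7/15 = 2/3 - 1/5 verified")
```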
<p>Knowing some algebraic geometry lets me tell students why implicit differentiation has a surprising practical application that they use without even realizing it: the security in ATM cards and other smart cards is based on elliptic curve cryptography (ECC), and the math behind ECC involves being able to compute tangent lines to points on curves like <span class="math-container">$y^2 = x^3 - x$</span>. (Technically, ECC uses elliptic curves over finite fields rather than over the real numbers, and the equations look more like <span class="math-container">$y^2 + xy = x^3 + x + 1$</span> because of issues involving fields of characteristic <span class="math-container">$2$</span>, but the basic ideas for finding tangent lines are still largely the same as over the real numbers.) Algebraic geometry also lets me know the motivation behind the otherwise peculiar <span class="math-container">$\tan(t/2)$</span>-substitution in calculus, which turns integrals of rational functions of <span class="math-container">$\sin x$</span> and <span class="math-container">$\cos x$</span> into integrals of rational functions.</p>
|
1,860,134 | <p>In <em>The logic of provability</em>, by G. Boolos, there is a remark in chapter 7 saying that $\diamond^{m} \top\implies \diamond^{n} \top$ is false if $m<n$ <strong>(unless $PA$ is 1-inconsistent)</strong>.</p>
<p>Now, it seems to me that the parenthetical expression is not necessary, since earlier in the chapter we saw a demonstration of the arithmetical completeness theorem of $GL$ for constant sentences (sentences without sentence letters), which did require simple consistency of $PA$ to work, instead of 1-consistency.</p>
<p>Since the expression in question is letterless, and it is easy to see that it is not a theorem of $GL$, attending to its trace, then we should be able to conclude that $PA$ doesn't prove that either appealing to mere consistency.</p>
<p>Am I right? Is there something I am misunderstanding?</p>
| user21820 | 21,820 | <p>Hmm your reasoning seems weird. Take any theory $T$ that satisfies the <a href="http://plato.stanford.edu/entries/logic-provability/" rel="nofollow">provability conditions</a>. Take any $m,n \in \mathbb{N}$ such that $m<n$. Then by Lob's theorem, if $T \vdash \square^n \bot \to \square^{n-1} \bot$ then $T \vdash \square^{n-1} \bot$. Note that $T \vdash \square^m \bot \to \square^{n-1} \bot$ by using (D3) for $m>0$. Thus if $T \vdash \square^n \bot \to \square^m \bot$ we would have $T \vdash \square^{n-1} \bot$. Now if $n = 1$ then $T$ is actually inconsistent. But if $n > 1$ then $T$ must be $Σ_1$-unsound since otherwise $T \vdash \bot$ by iteratively "removing the $\square$".</p>
<p>Since there are these two cases, I'm not sure how you could conclude that one always needs to assume $Σ_1$-soundness ($1$-consistency).</p>
|
141,346 | <p>I am not sure where one looks up this type of fact. Google was not very helpful.</p>
| sheesh | 39,611 | <p>Appealing to Grothendieck's theorem is pretty much over the top. It's completely elementary. Suppose $\cal A$ is an abelian (or exact) category with enough injectives.</p>
<blockquote>
<p>A complex is injective if and only if it is (a retract of a) split exact complex with injective components.</p>
</blockquote>
<p>First note that the functor $A^\bullet \mapsto A^n$ that sends a complex to its $n$-th component is exact and has a right adjoint $D^n$. The right adjoint is given by the complex $D^n(A) = ( \cdots \to 0 \to A \stackrel{1}{\to} A \to 0 \to \cdots)$ where $A$ sits in degrees $n$ and $n+1$, so $D^n(I)$ is injective whenever $I$ is injective.</p>
<p>For a complex $A^\bullet$ let $IA^\bullet$ be the complex with components $(IA)^{n} =A^n \oplus A^{n+1}$ and differential $\begin{pmatrix} 0 & 1_{A^{n+1}} \\ 0 & 0\end{pmatrix}$, so $IA = \prod_{n \in \mathbb{Z}} D^n(A^n)$. Note that $IA$ is a split exact complex and the chain map $(1_{A^n} \, d^{n})^t \colon A^n \to A^{n} \oplus A^{n+1}$ is degreewise split monic. Now choose for each $A^n$ a monic $i^n \colon A^n \rightarrowtail I^n$ into an injective and put $J^n = I^n \oplus I^{n+1}$ to obtain the complex $J^\bullet$ with differential $\begin{pmatrix} 0 & 1_{I^{n+1}} \\ 0 & 0\end{pmatrix}$, so $J^\bullet = \prod_{n \in \mathbb{Z}} D^{n}(I^n)$. The complex $J^\bullet$ is injective as a product of injective complexes and the $i^n$ yield a chain map $i \colon IA \rightarrowtail J$ in an obvious way. The composite $A^\bullet \rightarrowtail IA^\bullet \rightarrowtail J^\bullet$ embeds the complex $A^\bullet$ into an injective complex.</p>
<p>To prove the emphasized statement, note that if we start with an injective complex $A^\bullet$, it is a retract of $J^\bullet$. Conversely, every retract of a split exact complex of injectives is a retract of a complex of the form $J^\bullet$, hence injective.</p>
|
3,395,910 | <p>I understand how to apply the trapezoidal rule to approximate the area under a curve.</p>
<p>But I'm not sure how to apply it when approximating areas between two functions. </p>
<ul>
<li>Do you use the formula like how you normally would, except apply it to the <strong>first function - the second function</strong>?</li>
</ul>
<p>Or is there a completely different approach?</p>
<p>Thanks!</p>
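<p>For what it's worth, a quick numerical sketch (with example functions chosen only for illustration) suggests the first approach: since the trapezoidal rule is linear, applying it to $f-g$ gives exactly the difference of the two separate approximations.</p>

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule for f on [a, b] with n subintervals."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

f = lambda x: x * (2 - x)  # example upper curve
g = lambda x: x ** 2       # example lower curve

area_diff  = trapezoid(lambda x: f(x) - g(x), 0, 1, 100)
area_split = trapezoid(f, 0, 1, 100) - trapezoid(g, 0, 1, 100)
print(area_diff, area_split)  # equal up to rounding; the exact area is 1/3
```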
| mathcounterexamples.net | 187,663 | <p><span class="math-container">$T= \{x \in \mathbb R \mid (f-g)(x)=0\} = (f-g)^{-1}(\{0\})$</span>, i.e. <span class="math-container">$T$</span> is the inverse image under <span class="math-container">$f-g$</span> of the singleton <span class="math-container">$\{0\}$</span>. As <span class="math-container">$\{0\}$</span> is closed (a singleton set is closed) and <span class="math-container">$f-g$</span> continuous, <span class="math-container">$T$</span> is closed.</p>
|
56,162 | <p>I'm trying to understand the Cartan decomposition of a semisimple Lie algebra, $\mathfrak g=\mathfrak k \oplus \mathfrak p$, where $[\mathfrak k,\mathfrak p] \subseteq \mathfrak p$, cf. the wikipedia article on <a href="http://en.wikipedia.org/wiki/Cartan_decomposition" rel="noreferrer">Cartan decomposition</a>.</p>
<p>I posted the following question on math.stackexchange.com, where Darij suggested to repost the question here as an answer is not completely obvious, I suppose.</p>
<p>Let $\mathfrak {so}_{n}$ denote the skew-symmetric complex $n \times n$-matrices and let $M$ denote the symmetric $n \times n$-matrices of trace 0.</p>
<p>Then $M$ is a module over the Lie algebra $\mathfrak {so}_n$ (this comes from the Cartan decomposition of $\mathfrak {sl}_n$).</p>
<blockquote>
<p>What is the decomposition of $M$ into irreducible $\mathfrak {so}_n$-modules? </p>
</blockquote>
<p>The standard representation of $\mathfrak {so}_n$ has dimension $n$, the adjoint representation has dimension $\frac 1 2 n \cdot (n-1)$ and there are two spin representations of small dimension. But I don't see a way how these, together with trivial representations, should add up to the dimension of $M$, which is $\frac 1 2 n \cdot (n+1)-1$. </p>
| Peter Woit | 11,670 | <p>As Konrad Waldorf noted, in this case G-bundles are trivializable (since $\pi_2(G)$ is trivial). So gauge transformations are just maps
$$\phi:M\rightarrow G$$</p>
<p>and these have a homotopy invariant that can be non-trivial, the degree of the map. One way to compute this is as
$$\int_M \phi^*\omega_3$$</p>
<p>where $\omega_3$ is a generator of $H^3(G)$. Or, as usual for a degree, just pick an element of G, and count points (with sign) in the inverse image.</p>
|
2,740,349 | <blockquote>
<p>A triangle has the side lengths of $3$, $5$, and $7$. Express $\cos(y)+\sin(y)$, where $y$ is the largest angle in the triangle.</p>
</blockquote>
<p>I have tried to apply pythagoras theorm, trying to express the other two angles in some way, split the triangle into smaller triangles, but all without success. </p>
<p>Thanks in advance.</p>
| Gibbs | 498,844 | <p>If $n = 2$, you can see that $\det(X,Y) = \sigma_1 \wedge \sigma_2 (X,Y)$. Generalize to any $n$.</p>
|
1,521,518 | <p>Determine if the given vectors span $\mathbb{R}^4$</p>
<p>${(1, 1, 1, 1), (0, 1, 1, 1), (0, 0, 1, 1), (0, 0, 0, 1)}$.</p>
<p>I'm completely confused on this question. My textbook gives a different problem but in $\mathbb{R}^3$. How would i go about this?</p>
| z100 | 259,327 | <p>No sophisticated knowledge is needed, just a simple observation: the last vector contributes one dimension, the second-to-last a second dimension, the second a third dimension, and the first a fourth dimension.
If you do not believe it, subtract consecutive vectors: you get exactly the standard basis.</p>
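<p>A minimal script makes the subtraction trick explicit (only an illustration):</p>

```python
v = [
    (1, 1, 1, 1),
    (0, 1, 1, 1),
    (0, 0, 1, 1),
    (0, 0, 0, 1),
]

# Differences of consecutive vectors, plus the last vector, give the
# standard basis, so the four vectors span R^4.
basis = [tuple(a - b for a, b in zip(v[i], v[i + 1])) for i in range(3)] + [v[3]]
print(basis)  # [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
```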
|
1,666,977 | <p><strong>Background</strong></p>
<p>This is purely a "sate my curiosity" type question.</p>
<p>I was thinking of building a piece of software for calculating missing properties of 2D geometric shapes given certain other properties, and I got to thinking of how to failsafe it in case a user wants to calculate the area of a $2$-gon, $1$-gon, $0$-gon, 'aslkfn'-gon or maybe even $-4$-gon.</p>
<p><strong>Question</strong></p>
<p>Are there any definitions for $n$-gons where $n < 0$?</p>
<p><strong>Valid assumptions</strong></p>
<p>Let's, for the sake of simplicity (if possible) say that $n \in \mathbb Z$, although I might come back later and ask what a $\pi$-gon is.</p>
| Ng Chung Tak | 299,599 | <p>You may define a digon on $S^{2}$. By the way, there are $\{ \frac{n}{k} \}$ star polygons. In particular, $\{ \frac{n}{1} \}$ (equivalently $\{ \frac{n}{n-1} \}$) is the ordinary polygon.</p>
|
3,995,272 | <p>I am a master's student in mathematical physics. I study soliton and traveling wave solutions of differential equations.</p>
<p>Let's consider the following ODE:
<span class="math-container">$$Q^{\prime}(\xi)=ln(A)(\alpha+\beta Q(\xi)+\sigma Q^2(\xi))$$</span>
where
<span class="math-container">$A \neq 0,1.$</span></p>
<p>In a book, the solutions of the ODE are given as follows: <strong>(But, I don't understand how to derive it.)</strong></p>
<p>There are twelve solution cases with respect to the coefficients of the ODE.
Any help would be appreciated.</p>
<p><strong>CASE I)</strong> When <span class="math-container">$\beta^{2}-4 \alpha \sigma<0$</span> and <span class="math-container">$\sigma \neq 0$</span>, then
<span class="math-container">$$
\begin{array}{l}
Q_{1}(\xi)=-\frac{\beta}{2 \sigma}+\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{2 \sigma} \tan _{A}\left(\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{2} \xi\right) \\
Q_{2}(\xi)=-\frac{\beta}{2 \sigma}-\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{2 \sigma} \cot _{A}\left(\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{2} \xi\right) \\
Q_{3}(\xi)=-\frac{\beta}{2 \sigma}+\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{2 \sigma}\left(\tan _{A}\left(\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}\, \xi\right) \pm \sqrt{p q}\, \sec _{A}\left(\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}\, \xi\right)\right) \\
Q_{4}(\xi)=-\frac{\beta}{2 \sigma}-\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{2 \sigma}\left(\cot _{A}\left(\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)} \xi\right) \pm \sqrt{p q} \csc _{A}\left(\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)} \xi\right)\right) \\
Q_{5}(\xi)=-\frac{\beta}{2 \sigma}+\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{4 \sigma}\left(\tan _{A}\left(\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{4} \xi\right)-\cot _{A}\left(\frac{\sqrt{-\left(\beta^{2}-4 \alpha \sigma\right)}}{4} \xi\right)\right)
\end{array}
$$</span></p>
<p><strong>CASE II:</strong></p>
<p><span class="math-container">$\vdots$</span></p>
<p><strong>CASE XII:</strong>
When <span class="math-container">$\beta=\lambda, \sigma=m \lambda(m \neq 0)$</span> and <span class="math-container">$\alpha=0,$</span> then
<span class="math-container">$$
Q_{37}(\xi)=\frac{p A^{\lambda \xi}}{q-m p A^{\lambda \xi}}
$$</span>
where triangular functions are defined as
<span class="math-container">\begin{array}{l}
\sin _{A}(\xi)=\frac{p A^{i \xi}-q A^{-i \xi}}{2 i}, \quad \cos _{A}(\xi)=\frac{p A^{i \xi}+q A^{-i \xi}}{2} \\
\tan _{A}(\xi)=-i \frac{p A^{i \xi}-q A^{-i \xi}}{p A^{i \xi}+q A^{-i \xi}}, \quad \cot _{A}(\xi)=i \frac{p A^{i \xi}+q A^{-i \xi}}{p A^{i \xi}-q A^{-i \xi}} \\
\sec _{A}(\xi)=\frac{2}{p A^{i \xi}+q A^{-i \xi}}, \quad \csc _{A}(\xi)=\frac{2 i}{p A^{i \xi}-q A^{-i \xi}}
\end{array}</span>
where <span class="math-container">$\xi$</span> is an independent variable, <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are constants greater than zero and called deformation parameters.</p>
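<p>Even without knowing the derivation, each listed solution can at least be verified by substitution. For Case XII, a finite-difference check (with arbitrary illustrative parameter values) confirms that $Q_{37}$ satisfies $Q'=\ln(A)(\alpha+\beta Q+\sigma Q^2)$ with $\alpha=0$, $\beta=\lambda$, $\sigma=m\lambda$:</p>

```python
import math

# Illustrative parameters (any A != 0, 1 will do); Case XII data
A, lam, m, p, q = 2.0, 0.5, 1.5, 0.7, 1.3
alpha, beta, sigma = 0.0, lam, m * lam

def Q(xi):
    # Q37(xi) = p A^{lam xi} / (q - m p A^{lam xi})
    u = A ** (lam * xi)
    return p * u / (q - m * p * u)

xi, h = -0.5, 1e-6
lhs = (Q(xi + h) - Q(xi - h)) / (2 * h)  # central-difference Q'(xi)
rhs = math.log(A) * (alpha + beta * Q(xi) + sigma * Q(xi) ** 2)
print(lhs, rhs)  # the two sides agree to many digits
```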
| Kavi Rama Murthy | 142,385 | <p>Hint: Since <span class="math-container">$\ln (1+s) \sim s$</span> as <span class="math-container">$s \to 0$</span> we get <span class="math-container">$\lim_{s\to 0} \frac {\ln (1+s)} {1-(1+s)^{2}}=-\frac 1 2$</span>. Now put <span class="math-container">$s=\sin x-1$</span> in this.</p>
|
62,177 | <p>One of the most mind boggling results in my opinion is, with the axiom of choice/well-ordering principle, there exist such things as uncountable well-ordered sets $(A,\leq)$. </p>
<p>With this is mind, does there exist some well ordered set $(B,\leq)$ with some special element $b$ such that the set of all elements smaller than $b$ is uncountable, but for any element besides $b$, the set of all elements smaller is countable (by countable I include finite too). </p>
<p>More formally stated, how can one show the existence of a well ordered set $(B,\leq)$ such that there exists a $b\in B$ such that $\{a\in X\mid a\lt b\}$ is uncountable, but $\{a\in X\mid a\lt c\}$ is countable for all $c\neq b$?</p>
<p>It seems like this $b$ would have to "be at the very end of the order." </p>
| Ted | 15,012 | <p>Let $b$ be the first <a href="http://en.wikipedia.org/wiki/First_uncountable_ordinal" rel="nofollow">uncountable ordinal</a>, and $B$ be the set of all ordinals up to and including $b$.</p>
|
1,291,511 | <p>This may seem like a silly question, but I just wanted to check. I know there are proofs that if $f(x)=f'(x)$ then $f(x)=Ae^x$. But can we 'invent' another function that obeys $f(x)=f'(x)$ which is <strong>non-trivial</strong>?</p>
| k1.M | 132,351 | <p>Observe that if $ f(x)=f'(x) $ then
$$
\left(\frac{f(x)}{e^x}\right)'=\frac{f'(x)-f(x)}{e^x}=0
$$
Hence $\dfrac{f(x)}{e^x}$ is constant...</p>
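<p>A numeric illustration of the uniqueness this gives (arbitrary initial value and step size, purely for demonstration): integrating $f'=f$ from $f(0)=3$ by Runge-Kutta reproduces $3e^x$.</p>

```python
import math

def rk4_step(y, h):
    # One classical Runge-Kutta step for the ODE y' = y
    k1 = y
    k2 = y + h / 2 * k1
    k3 = y + h / 2 * k2
    k4 = y + h * k3
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

A, h, steps = 3.0, 1e-3, 1000  # integrate from x = 0 to x = 1
y = A
for _ in range(steps):
    y = rk4_step(y, h)

print(y, A * math.e)  # agree to about 12 digits, as f(x) = A e^x predicts
```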
|
3,374,248 | <p>I haven't worked out all the details yet, but it seems to be true for the following functions:</p>
<ul>
<li><span class="math-container">$f(k) = 1$</span></li>
<li><span class="math-container">$f(k) = 1/k!$</span></li>
<li><span class="math-container">$f(k) = a^k$</span></li>
<li><span class="math-container">$f(k) = 1/\log(k+1)$</span></li>
</ul>
<p>What are the conditions on <span class="math-container">$f$</span> for this to be true? It sounds like a fairly general result that should be easy to prove. Sums like these are related to the discrete self-convolution operator, so I'm pretty sure the result mentioned here must be well known. </p>
<p><strong>Update</strong>: A weaker result that applies to a broader class of functions is the following:
<span class="math-container">$$\sum_{k=1}^n f(k)f(n-k) = O\Big(n f^2(\frac{n}{2})\Big).$$</span>
Is it possible to find a counter-example, with a function <span class="math-container">$f$</span> that is smooth enough and in wide use?</p>
| reuns | 276,986 | <p><span class="math-container">$\sum_{k=0}^n f(k)f(n-k) = O\Big(n f^2(\frac{n}{2})\Big)$</span> doesn't hold when <span class="math-container">$f(0) \ne 0$</span> and <span class="math-container">$f(n)$</span> grows much faster than <span class="math-container">$f(n/2)^2$</span>, for example with <span class="math-container">$f(n)= e^{n!}$</span>.</p>
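<p>The failure is already visible in logarithms (a small illustration of this counterexample): the single term $f(0)f(n)=e^{1+n!}$ alone exceeds the proposed bound $nf(n/2)^2$.</p>

```python
import math

def log_f(n):
    # log of f(n) = e^{n!} is just n!
    return math.factorial(n)

for n in (4, 6, 8):
    log_term = log_f(0) + log_f(n)               # log of the k = 0 term f(0) f(n)
    log_bound = math.log(n) + 2 * log_f(n // 2)  # log of n f(n/2)^2
    print(n, log_term, log_bound)  # log_term dwarfs log_bound
```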
|
2,860,321 | <p>Suppose $L/K$ is a Galois extension of local fields with Galois group $G = \operatorname{Gal}(L/K)$. Let $K'$ be the maximal unramified Extension of $K$ in $L$.</p>
<p>The Definition of the <strong>inertia group</strong> of $L/K$ is given by $I = I_{L/K} = \operatorname{Gal}(L/K')$ which I understand.</p>
<p>In some notes I found that this is equivalent to</p>
<p>$$ I = \{ \sigma \in G : \sigma \text{ maps to } \iota \text{ in } \operatorname{Gal}(k_L/k_K) \}$$</p>
<p>or $$I = \{ \sigma \in G : \sigma x \equiv x \text{ (mod }M) \: \forall x \in \mathcal{O}_L \}.
$$</p>
<p>I know that $k_L$ and $k_K$ are the residue fields of $L$ and $K$.
Could you please explain me what $\iota$ and $M$ are supposed to be? If possible, could you also explain why all the conditions above are indeed equivalent then?</p>
<p>Thank you.</p>
| Tomo | 62,940 | <p>The inertia group is usually defined as the kernel of the homomorphism
<span class="math-container">$$\varepsilon:D\twoheadrightarrow G(\overline L/\overline K),$$</span>
where here <span class="math-container">$D$</span> denotes the decomposition group (in the case of a Galois extension of local fields as in your question, <span class="math-container">$D=G(L/K)$</span>), and <span class="math-container">$\overline L=k_L$</span> in your notation, etc.</p>
<p>Then it is a fact that the extension of local fields <span class="math-container">$L^I/K$</span> is unramified, and moreover <span class="math-container">$\overline {L^I}$</span> coincides with the separable closure of <span class="math-container">$\overline K$</span> in <span class="math-container">$\overline L$</span> (Proposition 21 in I<span class="math-container">$\S$</span>7 of Serre, <em>Local Fields</em>). Therefore, the fixed field of the inertia subgroup corresponds to the maximal unramified extension of the local field <span class="math-container">$K$</span> in <span class="math-container">$L$</span>.</p>
|
129,439 | <p>This code in Mathematica 9 returns two graphs that are NOT complementary.</p>
<pre><code>{GraphData[{7, 172}], GraphData[{7, 172}, "ComplementGraph"]}
</code></pre>
| Feyre | 7,312 | <p>It's an unfortunate rendering, I think, because the connection between what, according to the <code>AdjacencyMatrix[]</code>, are vertices <code>6</code> and <code>7</code> isn't obvious.</p>
<p>But please consider the following mapping:</p>
<p><a href="https://i.stack.imgur.com/1ubsN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1ubsN.png" alt="enter image description here"></a></p>
|
1,579,781 | <blockquote>
<p>If $x+y+z=6$ and $xyz=2$, then find the value of $$\cfrac{1}{xy}
+\cfrac{1}{yz}+\cfrac{1}{zx}$$</p>
</blockquote>
<p>I've started by simply looking for a form which involves the given known quantities ,so:</p>
<p>$$\cfrac{1}{xy} +\cfrac{1}{yz} +\cfrac{1}{zx}=\cfrac{yz\cdot zx +xy \cdot zx +xy \cdot yz}{(xyz)^2}$$</p>
<p>Now this might look nice since I know the value of the denominator but if I continue to work on the numerator I get looped :</p>
<p>$$\cfrac{yz\cdot zx +xy \cdot zx +xy \cdot yz}{(xyz)^2}=\cfrac{4\left(\cfrac{1}{xy}+\cfrac{1}{yz}+\cfrac{1}{zx}\right)}{(xyz)^2}=\cfrac{4\left(\cfrac{(\cdots)}{(xyz)^2}\right)}{(xyz)^2}$$</p>
<p>How do I deal with such continuous fraction ?</p>
| Harish Chandra Rajpoot | 210,295 | <p>Notice, the method is straightforward: simply take the L.C.M. and substitute the known values as follows $$\frac{1}{xy}+\frac{1}{yz}+\frac{1}{zx}=\frac{x+y+z}{xyz}=\frac{6}{2}=\color{red}{3}$$</p>
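<p>The identity and the value $3$ can be double-checked numerically with any triple satisfying the constraints (the triple below, fixing $x=1/2$ and solving a quadratic for $y,z$, is only illustrative):</p>

```python
import math

# Fix x = 0.5; then y + z = 5.5 and y z = 4, so y, z solve t^2 - 5.5 t + 4 = 0
x = 0.5
disc = math.sqrt(5.5 ** 2 - 4 * 4)
y, z = (5.5 + disc) / 2, (5.5 - disc) / 2

assert abs(x + y + z - 6) < 1e-12 and abs(x * y * z - 2) < 1e-12
s = 1 / (x * y) + 1 / (y * z) + 1 / (z * x)
print(s)  # ≈ 3, matching (x + y + z)/(x y z) = 6/2
```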
|
81,982 | <p>I am beggining to do some work with cubical sets and thought that I should have an understanding of various extra structures that one may put on cubical sets (for purposes of this question, connections). I know that cubical sets behave more nicely when one has an extra set of degeneracies called connections. The question is: Why these particular relations? Why do they show up? Precise references would be greatly appreciated.</p>
| Peter Arndt | 733 | <p>A very concrete instance where you can see the meaning and usefulness of connections is <a href="http://www.tac.mta.ca/tac/volumes/1999/n7/5-07abs.html" rel="nofollow">this</a> article by Brown and Mosa: They show that double categories (which do have an underlying (truncated) cubical set) with connections are the same as (globular) 2-categories. </p>
<p>The reason is that the connection allows one to fold the four different edges of a 2-cell in the cubical double category structure into just two edges, leaving degenerate edges at the other sides, and this can just as well be captured in the data of a globular 2-category, where 2-cells have just one source and one target 1-cell -
see the definition of the folding map right before Proposition 5.1 in the above article.</p>
|
258,226 | <p>As an algebraist, I have some strong intuitions about what it means for an algebraic result to be true. In particular, my intuition would lead me to believe that if I cannot construct a counter-example to a claim, then the claim must be true. This is what motivated <a href="https://mathoverflow.net/questions/258132/platonic-truth-and-1st-order-predicate-logic">my previous question</a>, where it became clear that my intuition needed some tweaking. (In other words, in some contexts it just doesn't hold true!) The answers given there are excellent, and I recommend people read them before continuing.</p>
<p>So here is a follow-up question to see just how far we can take the intuition.</p>
<p>Let $T_0$ be the theory ${\rm PA}$ over a countable language. Let $T_1$ be the extended theory obtained by adding as new axioms all $\Pi_1^0$ statement which are independent of ${\rm PA}$. This new theory is consistent and sound, assuming that the standard model of the natural numbers exists.</p>
<p>I would guess that this new theory $T_1$ is already not effectively computable. At any rate, suppose now that we let $T_2$ be the extended theory of $T_1$ obtained by adding as new axioms all $\Pi_2^0$ statement which are independent of $T_1$.</p>
<p>Is $T_2$ consistent? If so, is the theory $T_n$ (defined in the obvious recursive way) consistent? If so, is $\bigcup_{n\in \mathbb{N}}T_n$ the true theory of the standard model?</p>
<p>If $T_2$ is not consistent, is there some natural way to fix the problem?</p>
<p>Finally, what happens if we repeat these ideas for ${\rm ZFC}$ instead?</p>
| Panu Raatikainen | 102,468 | <p>No. It works with $T_0$ and $\Pi_1^0$ sentences, as PA refutes any false $\Pi_1^0$ sentence. But not so with $T_1$ and $\Pi_2^0$ sentences: there are false $\Pi_2^0$ sentences which are independent of $T_1$. </p>
|
2,237,441 | <p>Let $n$ be a natural number.</p>
<p>I need to prove that $9 \mid 4^n-3n-1$</p>
<p>Could anyone give me some hints how to prove it without using induction.</p>
| Ayoub Falah | 337,716 | <p>Let $u_n=4^n-3n-1$</p>
<p>so $$u_{n+1}=4^{n+1}-3(n+1)-1$$</p>
<p>$$\Leftrightarrow u_{n+1}=4(4^n-1)-3n$$</p>
<p>$$\Leftrightarrow u_{n+1}=4(4^n-1)- 4\cdot3n + 9n$$</p>
<p>$$\Leftrightarrow u_{n+1}=4(4^n - 3n - 1) + 9n$$</p>
<p>$$\Leftrightarrow u_{n+1}=4u_n + 9n$$</p>
<p>$$\Leftrightarrow u_{n+1}=4(4u_{n-1} + 9(n-1)) + 9n$$</p>
<p>$$\Leftrightarrow u_{n+1}=4^2u_{n-1} + 9(4(n-1) + n)$$</p>
<p>$$\Leftrightarrow u_{n+1}=4^3u_{n-2} + 9(4^2(n-2) + 4(n-1) + n)$$</p>
<p>$$\vdots$$</p>
<p>$$\Leftrightarrow u_{n+1}=4^nu_1 + 9\sum_{k=0}^{n-1}4^k(n-k) $$</p>
<p>$$\Leftrightarrow u_{n+1}=4^n\cdot0 + 9\sum_{k=0}^{n-1}4^k(n-k) $$</p>
<p>$$\Leftrightarrow u_{n+1}=9\sum_{k=0}^{n-1}4^k(n-k) $$</p>
<p>$$\Leftrightarrow u_{n}=9\sum_{k=0}^{n-2}4^k(n-(k+1)) $$</p>
<p>$$\Rightarrow 9 \mid u_n \Rightarrow 9 \mid 4^n-3n-1$$</p>
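<p>Both the divisibility claim and the final closed form are easy to test for small $n$ (a routine sanity check):</p>

```python
def u(n):
    return 4 ** n - 3 * n - 1

for n in range(1, 50):
    assert u(n) % 9 == 0
    # closed form from the derivation: u_n = 9 * sum_{k=0}^{n-2} 4^k (n - k - 1)
    assert u(n) == 9 * sum(4 ** k * (n - k - 1) for k in range(n - 1))
print("verified for n = 1, ..., 49")
```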
|
658,758 | <p>How do you plot $$f(x,y) = \frac{x}{1-y} \text{with}~ x^2+y^2<1$$ in Mathematica or Maple?</p>
| Mikasa | 8,581 | <p>In Maple, you can use the following codes:</p>
<pre><code>[> with(plots):
plot3d(x/(1-y), x = -1 .. 1, y = -sqrt(1-x^2) .. sqrt(1-x^2), axes = boxed, filled = true,numpoints = 1000,color=green);
</code></pre>
<p><img src="https://i.stack.imgur.com/ysKzx.png" alt="enter image description here"></p>
|
1,397,190 | <p>Find the sum of following series:</p>
<p>$$1 + \cos \theta + \frac{1}{2!}\cos 2\theta + \cdots$$</p>
<p>where $\theta \in \mathbb R$.</p>
<p>My attempt: I need hint to start.</p>
| Chinny84 | 92,628 | <p>Hint to start:
$$
1+\cos \theta + \frac{1}{2!}\cos(2\theta)+\cdots = \operatorname{Re}\sum_{k=0}^n\frac{\mathrm{e}^{ik\theta}}{k!} = \operatorname{Re}\sum_{k=0}^n\frac{\left(\mathrm{e}^{i\theta}\right)^k}{k!}
$$
Also remember that in the $\lim_{n\to\infty}$ we have
$$
\lim_{n\to\infty}\sum_{k=0}^n\frac{x^k}{k!}=\mathrm{e}^x
$$</p>
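<p>Carrying the hint through, $\operatorname{Re}\, e^{e^{i\theta}} = e^{\cos\theta}\cos(\sin\theta)$, which a quick numerical check confirms (the truncation point $n=50$ is an arbitrary choice):</p>

```python
import math

def partial_sum(theta, n=50):
    # sum_{k=0}^{n} cos(k*theta) / k!
    return sum(math.cos(k * theta) / math.factorial(k) for k in range(n + 1))

def closed_form(theta):
    # Re e^{e^{i theta}} = e^{cos theta} * cos(sin theta)
    return math.exp(math.cos(theta)) * math.cos(math.sin(theta))

for theta in (0.0, 0.5, 1.0, math.pi / 3, 2.7):
    assert abs(partial_sum(theta) - closed_form(theta)) < 1e-12
```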
|
1,397,190 | <p>Find the sum of following series:</p>
<p>$$1 + \cos \theta + \frac{1}{2!}\cos 2\theta + \cdots$$</p>
<p>where $\theta \in \mathbb R$.</p>
<p>My attempt: I need hint to start.</p>
| GeorgSaliba | 142,772 | <p><strong>HINTS</strong>: You have $$\sum_{k=0}^\infty\frac{\cos(k\theta)}{k!}$$</p>
<p>Remember that:
$$e^x=\sum_{k=0}^\infty\frac{x^k}{k!}$$
And:
$$\cos(\theta)=\Re{e^{i\theta}}$$</p>
|
1,983,614 | <p>Consider a measurable space $(\Omega, \mathcal{F})$ and let $I$ be an arbitrary index set. </p>
<p>Is the following true?</p>
<blockquote>
<p>If $\left( A_i \right)_{i \in I}$ is a chain in $\mathcal{F}$ – that is, $\forall i \in I$, $A_i \in \mathcal{F}$ and for all $i, j \in I$, we have $A_i \subseteq A_j$ or $A_j \subseteq A_i$ – then $$\displaystyle \bigcup_{i \in I} A_i \in \mathcal{F}.$$</p>
</blockquote>
| Mitchell Spector | 350,214 | <p>This isn't true in general (assuming the axiom of choice).</p>
<p>Let $\kappa$ be the least cardinal of a non-measurable set of reals; let $f$ map $\kappa$ 1-1 onto such a non-measurable set.</p>
<p>Then $\{f\!"\!\alpha \mid \alpha\lt\kappa\}$ (where $f\!"\!\alpha$ denotes the range of $f$ on $\alpha$) is a chain of measurable sets whose union is not measurable.</p>
<p>To see that the axiom of choice needs to be used in some way, at least for the case of Lebesgue measure on the reals, note that it's consistent with ZF that every set of reals is Lebesgue measurable.</p>
|
213,102 | <p>Is it true that an aperiodic, ergodic, minimal and equicontinuous dynamical system on a compact metric space is totally ergodic?</p>
<p>According to some results I found in some books, a rotation on a compact metric group is equicontinuous, and it is minimal <del>and totally ergodic</del> whenever it is ergodic.</p>
| Ian Morris | 1,840 | <p>Just knowing that a transformation $T$ is minimal is no guarantee that $T^n$ is also minimal. For example, let $T_1$ be the non-identity homeomorphism of a two-point metric space and let $T_2$ be an aperiodic, minimal, equicontinuous, uniquely ergodic transformation of a compact metric space (for example, an irrational circle rotation). Then $T_1 \times T_2$ is minimal and uniquely ergodic, but $T_1^2 \times T_2^2$ has two nonempty, disjoint, closed invariant sets, and the unique invariant measure for $T_1 \times T_2$ is not ergodic for $T_1^2 \times T_2^2$, since it gives both "halves" of the phase space positive measure, but those "halves" are invariant sets for $T_1^2 \times T_2^2$.</p>
|
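<p>The obstruction is easy to see concretely: with $T_1$ the flip on a two-point space and $T_2$ a circle rotation, $T_1^2$ is the identity, so the squared product map never moves the first coordinate and each "half" of the phase space is invariant. A small Python illustration (a sketch — the rotation angle $\sqrt{2}$ and the iteration count are arbitrary choices):</p>

```python
import math

ALPHA = math.sqrt(2)  # an arbitrary irrational rotation angle (mod 1)

def T(state):
    # T = T1 x T2: flip the two-point factor, rotate the circle factor
    s, x = state
    return (1 - s, (x + ALPHA) % 1.0)

def T_squared(state):
    return T(T(state))

# Under T^2 the first coordinate never changes: {0} x circle and
# {1} x circle are disjoint closed invariant sets, so T^2 is not
# minimal, and the invariant measure for T is not ergodic for T^2.
state = (0, 0.123)
orbit_halves = set()
for _ in range(1000):
    state = T_squared(state)
    orbit_halves.add(state[0])
```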