| qid | question | author | author_id | answer |
|---|---|---|---|---|
1,913,320 | <blockquote>
<p>Let <span class="math-container">$A=(a_{ij})_{n\times n}$</span> and <span class="math-container">$B=(b_{ij})_{n\times n}$</span> be two upper triangular matrices, i.e. <span class="math-container">$a_{ij}=b_{ij}=0$</span> whenever <span class="math-container">$i>j$</span>.</p>
<p><span class="math-container">$(a)$</span> Show that the <span class="math-container">$(i,j)$</span>-entry of <span class="math-container">$AB$</span> is <span class="math-container">$0$</span> if <span class="math-container">$i>j$</span>, i.e. <span class="math-container">$AB$</span> is an upper triangular matrix.</p>
<p><span class="math-container">$(b)$</span> Find the <span class="math-container">$(i,i)$</span>-entry of <span class="math-container">$AB$</span>.</p>
</blockquote>
<p>I have already proven part (a). How do I go about finding part (b)? Any help would be greatly appreciated! Thank you so much!</p>
| Sri-Amirthan Theivendran | 302,692 | <p>Typically one proves by strong induction that every natural number $n>1$ is divisible by a prime. The idea of the proof is to construct a number divisible by none of the primes $p_1, \dotsc, p_n$, supposing these were the only primes. This number is $p_1p_2\cdots p_n+1$, which leads to a contradiction. If there are infinitely many primes, there can of course be primes other than $p_1, \dotsc, p_n$ which divide $p_1p_2\cdots p_n+1$. </p>
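<p>A small illustration of the key step (my own addition, not part of the answer): the product of the listed primes plus one leaves remainder $1$ on division by each of them, so none of them divides it:</p>

```python
# Illustration (my addition): p1*p2*...*pn + 1 leaves remainder 1 on
# division by each listed prime, so none of p1,...,pn divides it.
primes = [2, 3, 5, 7, 11, 13]
m = 1
for p in primes:
    m *= p
m += 1                      # 30031 = 59 * 509
print(m, [m % p for p in primes])
```

<p>Note that $30031$ is not itself prime; the argument only shows its prime divisors lie outside the assumed finite list.</p>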
|
1,005,154 | <p>I don't know how to advance in the following <em><strong>problem</strong></em>:</p>
<p>Let $X$, $Y$ and $Z$ be independent random variables, identically distributed with uniform distribution over $[0,1]$.</p>
<ul>
<li>Find the joint pdf of $W:=XY$ and $V:=Z^2$.</li>
</ul>
<hr>
<p><em><strong>I tried to</strong></em> answer this problem by declaring a new random variable $U:= Y$ (in my opinion necessary to make the transformation invertible).</p>
<p>Then:<br>
$w=xy,$<br>
$v=z^2,$<br>
$u=y.$</p>
<p>We can see that dividing the first equation by the third one:<br>
$x=\dfrac{w}{u},$<br>
$y=u,$<br>
$z=\sqrt{v}.$</p>
<p>Consider the transformation $h(w,v,u)=\left(\dfrac{w}{u},u,\sqrt{v}\right)$; this gives us</p>
<p>$$f_{WVU}(w,v,u)=|\boldsymbol{J(h)}|\,f_{XYZ}(h(w,v,u))=\frac{1}{2u\sqrt{v}}.$$</p>
<p>To find the joint pdf of $W,V$:
$$f_{WV}(w,v)=\int_u\frac{1}{2u\sqrt{v}}\,du.$$
However, I don't know what limits to use. Any help is appreciated.</p>
| Did | 6,179 | <blockquote>
<p>I don't know what limits to use.</p>
</blockquote>
<p>Note that $x=w/u$, $y=u$, $z=\sqrt{v}$ with $0\leqslant x,y,z\leqslant1$ hence the domain of integration is $$0\leqslant w/u,u,\sqrt{v}\leqslant1,$$ or, equivalently, $$0\leqslant w\leqslant u\leqslant1,\qquad0\leqslant v\leqslant1.$$</p>
<blockquote>
<p>Find the joint pdf of $W:=XY$ and $V:=Z^2$.</p>
</blockquote>
<p>This can be simplified by noting that $W$ and $V$ are independent hence their marginal densities suffice to solve the question.</p>
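<p>As a sanity check on the independence claim (my own sketch, not part of the answer), one can compare empirical probabilities against the closed-form CDFs $F_W(w)=w-w\ln w$ and $F_V(v)=\sqrt{v}$, and check that the joint probability factors:</p>

```python
import math
import random

# Monte Carlo sketch (not from the original answer): W = X*Y and
# V = Z^2 should behave as independent variables with CDFs
#   F_W(w) = w - w*ln(w)   and   F_V(v) = sqrt(v).
random.seed(0)
N = 200_000
w0, v0 = 0.3, 0.4
hits_w = hits_v = hits_both = 0
for _ in range(N):
    w = random.random() * random.random()   # a sample of W = X*Y
    v = random.random() ** 2                # a sample of V = Z^2
    hits_w += w <= w0
    hits_v += v <= v0
    hits_both += (w <= w0) and (v <= v0)

F_w = w0 - w0 * math.log(w0)     # theoretical P(W <= w0)
F_v = math.sqrt(v0)              # theoretical P(V <= v0)
print(hits_w / N, F_w)
print(hits_v / N, F_v)
print(hits_both / N, F_w * F_v)  # independence: joint = product
```

<p>The joint frequency matching the product of the marginals (up to Monte Carlo noise) is exactly the factorization of $f_{WV}$ asserted above.</p>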
|
2,445,655 | <p>Challenge: A Good Deal</p>
<p>You are currently learning some important aspects of collusion and cartels. This challenge puts you in the position of a bad guy, namely a price-fixing sales manager. Suppose that you find yourself in a so-called “smoke-filled room” to fix prices for the upcoming year with the sales manager of a competing firm, Snitch Inc. Suddenly the door opens and one of the employees of Snitch Inc. enters the room. The employee knows that price-fixing is illegal and immediately grabs his cellphone to inform the competition authority. You are fully aware that you are now facing a serious risk of getting a fine or even a jail sentence. Whereas your fellow sales manager is simply in shock, not knowing what to do, you as a University School of Business and Economics alumnus are quick to
react and you try to save the situation. Your idea is to bribe the employee. You expect that the employee requires a bribe of at least €100 to remain silent. In other words, the reservation price of the employee is €100. For simplicity, assume in the following that this expectation is correct. At the same time, suppose that you are not willing to offer more than €200 for otherwise you would rather save your money and spend it on a good lawyer instead. In other words, your reservation price is €200. You consider your chances and are thinking about making an offer. For simplicity, assume in the following that all parties aim to maximize the gains from trade and that offers can be any positive real number (all values weakly above zero, that is).</p>
<p>Try to provide a clear and concise answer to the following four questions.
For the first two questions suppose that the employee is still slightly in shock and therefore can only respond by either accepting or rejecting your offer.</p>
<ol>
<li>How many Nash equilibria does this game have, if any? Explain your answer.</li>
<li>How many subgame perfect Nash equilibria does this game have, if any? Explain your answer.</li>
</ol>
<p>Now suppose that you are dealing with a somewhat cocky employee who is brave
enough to start negotiating with you. That is, in questions 3 and 4 below, the
employee, instead of simply accepting or rejecting the offer, may now make a
counteroffer instead. For simplicity, assume that both parties get a zero payoff in case of “no deal”.</p>
<ol start="3">
<li>Suppose that the employee indeed does make a counteroffer and that you will either accept or reject (in other words, the game ends after your response to the counteroffer). What is the subgame perfect Nash equilibrium in this case? </li>
</ol>
<p>You fear that these negotiations may take quite some time and as a businessman you know that time is money. Suppose that your impatience as well as that of the
employee is given by a common discount factor 0 &lt; δ &lt; 1 (which is known to you and the employee). The interpretation is that the utility of a money amount K “tomorrow” is equal to the utility of an amount of δK “today”. Your goal is to make an acceptable offer in the first round while at the same time saving as much money as possible.</p>
<ol start="4">
<li>What is the subgame perfect Nash equilibrium outcome in this case? Do you (i.e., the sales manager) or the employee benefit from your impatience? Explain your answer.</li>
</ol>
| José Carlos Santos | 446,262 | <p>\begin{align}\lim_{x\to-\infty}2x+\sqrt{4x^2+x}&=\lim_{x\to-\infty}\frac{\left(2x+\sqrt{4x^2+x}\right)\left(2x-\sqrt{4x^2+x}\right)}{2x-\sqrt{4x^2+x}}\\&=\lim_{x\to-\infty}\frac{-x}{2x-\sqrt{4x^2+x}}\\&=-\lim_{x\to-\infty}\frac1{2+\sqrt{4+\frac1x}}\text{ (because $x<0$)}\\&=-\frac14.\end{align}</p>
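<p>A quick numerical illustration (my addition, not part of the answer) that the expression indeed approaches $-\frac14$:</p>

```python
import math

# Numerical illustration (my addition): 2x + sqrt(4x^2 + x)
# approaches -1/4 as x -> -infinity.
for x in (-1e3, -1e5, -1e7):
    print(x, 2 * x + math.sqrt(4 * x * x + x))
```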
|
2,445,655 | <p>Challenge: A Good Deal</p>
<p>You are currently learning some important aspects of collusion and cartels. This challenge puts you in the position of a bad guy, namely a price-fixing sales manager. Suppose that you find yourself in a so-called “smoke-filled room” to fix prices for the upcoming year with the sales manager of a competing firm, Snitch Inc. Suddenly the door opens and one of the employees of Snitch Inc. enters the room. The employee knows that price-fixing is illegal and immediately grabs his cellphone to inform the competition authority. You are fully aware that you are now facing a serious risk of getting a fine or even a jail sentence. Whereas your fellow sales manager is simply in shock, not knowing what to do, you as a University School of Business and Economics alumnus are quick to
react and you try to save the situation. Your idea is to bribe the employee. You expect that the employee requires a bribe of at least €100 to remain silent. In other words, the reservation price of the employee is €100. For simplicity, assume in the following that this expectation is correct. At the same time, suppose that you are not willing to offer more than €200 for otherwise you would rather save your money and spend it on a good lawyer instead. In other words, your reservation price is €200. You consider your chances and are thinking about making an offer. For simplicity, assume in the following that all parties aim to maximize the gains from trade and that offers can be any positive real number (all values weakly above zero, that is).</p>
<p>Try to provide a clear and concise answer to the following four questions.
For the first two questions suppose that the employee is still slightly in shock and therefore can only respond by either accepting or rejecting your offer.</p>
<ol>
<li>How many Nash equilibria does this game have, if any? Explain your answer.</li>
<li>How many subgame perfect Nash equilibria does this game have, if any? Explain your answer.</li>
</ol>
<p>Now suppose that you are dealing with a somewhat cocky employee who is brave
enough to start negotiating with you. That is, in questions 3 and 4 below, the
employee, instead of simply accepting or rejecting the offer, may now make a
counteroffer instead. For simplicity, assume that both parties get a zero payoff in case of “no deal”.</p>
<ol start="3">
<li>Suppose that the employee indeed does make a counteroffer and that you will either accept or reject (in other words, the game ends after your response to the counteroffer). What is the subgame perfect Nash equilibrium in this case? </li>
</ol>
<p>You fear that these negotiations may take quite some time and as a businessman you know that time is money. Suppose that your impatience as well as that of the
employee is given by a common discount factor 0 &lt; δ &lt; 1 (which is known to you and the employee). The interpretation is that the utility of a money amount K “tomorrow” is equal to the utility of an amount of δK “today”. Your goal is to make an acceptable offer in the first round while at the same time saving as much money as possible.</p>
<ol start="4">
<li>What is the subgame perfect Nash equilibrium outcome in this case? Do you (i.e., the sales manager) or the employee benefit from your impatience? Explain your answer.</li>
</ol>
| cip999 | 483,763 | <p>$$\begin{align*} \lim_{x \rightarrow -\infty} 2x + \sqrt{4x^2 + x} & = \lim_{x \rightarrow \infty} \sqrt{4x^2 - x} - 2x = \\ & = \lim_{x \rightarrow \infty} \frac{(\sqrt{4x^2 - x} - 2x)(\sqrt{4x^2 - x} + 2x)}{\sqrt{4x^2 - x} + 2x} = \\ & = \lim_{x \rightarrow \infty} -\frac{x}{\sqrt{4x^2 - x} + 2x} = \boxed{-\frac{1}{4}} \end{align*}$$</p>
|
<p>I need to check the irreducibility of $p(x) \in F[x]$, where $F$ is a finite field.
I have read and checked several exercises on the internet. Their solutions go as follows:</p>
<p>For instance, let $p(x)$ be an arbitrary polynomial in $\mathbb{Z}_5[x]$. </p>
<p>If $p(x)$ has no zeros in $\mathbb{Z}_5$, then they say that $p(x)$ is an irreducible polynomial in $\mathbb{Z}_5[x]$.</p>
<p><em>I am confused at this point:</em> The polynomial $p(x)=(x^2+2)(x^2+3)$ has no zeros in $\mathbb{Z}_5$, but it is reducible. Where is my mistake?</p>
| Yunus Syed | 422,304 | <p>In $\mathbb{Z}_5[x]$ consider the polynomial $p(x) = x^4 + 1$, which is exactly your $(x^2+2)(x^2+3)$. It can be checked to have no zeros in your field.</p>
<p>It seems the criterion you are using is incorrect: having no zeros only rules out factors of degree $1$. Reducible means factorable into polynomials of lesser degree, so the no-root test settles irreducibility only for polynomials of degree $2$ or $3$.</p>
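<p>A brute-force sketch (added for illustration, not part of the original answer) confirming both facts at once: $x^4+1$ has no roots in $\mathbb{Z}_5$, yet $(x^2+2)(x^2+3)=x^4+1 \bmod 5$:</p>

```python
# Brute-force sketch (my own illustration, not from the answer):
# x^4 + 1 has no roots in Z_5, yet (x^2+2)(x^2+3) = x^4+1 mod 5,
# so "no roots" does not imply irreducible once deg >= 4.
p = 5
roots = [x for x in range(p) if (x**4 + 1) % p == 0]
print(roots)  # []

# multiply (x^2+2)(x^2+3) mod 5, coefficients lowest degree first
a = [2, 0, 1]          # x^2 + 2
b = [3, 0, 1]          # x^2 + 3
prod = [0] * (len(a) + len(b) - 1)
for i, ai in enumerate(a):
    for j, bj in enumerate(b):
        prod[i + j] = (prod[i + j] + ai * bj) % p
print(prod)  # [1, 0, 0, 0, 1]  ->  x^4 + 1
```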
|
2,903,163 | <p>I don't really know whether to put this in Physics forums since it is relating to Mechanics, or Math since the question is actually about the math being done. Don't criticize me over it.</p>
<p>So for the question: I was doing some review problems on Lagrange's equations, KE+PE, and I found <a href="http://wwwf.imperial.ac.uk/%7Epavl/ASHEET2.PDF" rel="nofollow noreferrer">this document</a>.</p>
<p>In the first question's solution, the writer differentiates without explaining the step. They have these:</p>
<p><span class="math-container">$$\begin{cases}
x = r \sin(\theta) \cos(\phi)\\[5 pt]
y = r \sin(\theta) \sin(\phi)\\[5 pt]
z = r \cos(\theta)
\end{cases}
$$</span></p>
<p>and this:</p>
<p><span class="math-container">$$T = {m\over 2}(\dot x^2 +\dot y^2 +\dot z^2)$$</span></p>
<p>I never really studied the spherical coordinate system much, and obviously never thought about the derivatives of the conversion into Cartesian. Can someone find or explain the process of taking the derivatives of the first three equations, plugging into the equation for Kinetic Energy, and simplifying? There is probably a different calculus method for the coordinate system, which I don't know. Thanks!</p>
<p>EDIT: While taking the derivatives, was the method used actually a separate form of calculus beyond I and II, or was it normal first-order differentiation? If so, how? Here is the part I am speaking of:</p>
<blockquote>
<p><strong>Solution:</strong> The kinetic energy is <span class="math-container">$T=\frac m2(\dot x^2+\dot y^2+\dot z^2)$</span>. We substitute
<span class="math-container">$$\begin{cases}
x = r \sin(\theta) \cos(\phi)\\[5 pt]
y = r \sin(\theta) \sin(\phi)\\[5 pt]
z = r \cos(\theta)
\end{cases}
$$</span>
Differentiating these, substituting into <span class="math-container">$T$</span>, and simplifying, we find
<span class="math-container">$$T=\frac m2 (\dot r^2 +r^2\dot\theta^2+r^2\sin^2\theta\dot\phi^2).$$</span></p>
</blockquote>
| Quiver | 564,698 | <p>I think this question belongs to PSE! But whatever, here's your answer: you have to remember that $\dot x$ is the total derivative of $x$ with respect to time. Passing to a new representation of $x$, as in your case $x(r,\theta,\phi)$ for spherical coordinates, where (and this is important) all the <em>coordinates are functions of time</em> $$r\equiv r(t)\\ \theta \equiv \theta(t) \\\phi\equiv\phi(t)$$ turns the total time derivative into </p>
<p>$$
\dot x = \frac{\partial x}{\partial r}\frac{\mathrm d r}{\mathrm d t}+\frac{\partial x}{\partial \theta}\frac{\mathrm d \theta}{\mathrm d t}+\frac{\partial x}{\partial \phi}\frac{\mathrm d \phi}{\mathrm d t} \\
\dot x = \frac{\partial x}{\partial r}\dot r+\frac{\partial x}{\partial \theta}\dot\theta+\frac{\partial x}{\partial \phi}\dot\phi \\
\dot x = (\sin\theta\cos\phi)\dot r + (r\cos\theta\cos\phi)\dot\theta - (r\sin\theta\sin\phi)\dot\phi
$$ </p>
<p>where the last equation was evaluated from the definition of $x=r\sin\theta\cos\phi$. The same goes for the other variables, which gets you </p>
<p>$$
\dot y = (\sin\theta\sin\phi)\dot r+(r\cos\theta\sin\phi)\dot\theta+(r\sin\theta\cos\phi)\dot\phi \\
\dot z = (\cos\theta)\dot r-(r\sin\theta)\dot\theta
$$</p>
<p>From these three equations, it's just a matter of squaring them all, summing them, and seeing what you get! Tedious work, but it has to be done sometimes: </p>
<p>$$
\dot x^2 =\sin^2\theta\cos^2\phi\dot r^2+r^2\cos^2\theta\cos^2\phi\dot\theta^2 +r^2\sin^2\theta\sin^2\phi\dot\phi^2+\\ +2r\sin\theta\cos\theta\cos^2\phi\dot r\dot\theta\color{blue}{-2r\sin^2\theta\cos\phi\sin\phi\dot r\dot\phi}\color{red}{-2r^2\cos\theta\sin\theta\cos\phi\sin\phi\dot\theta\dot\phi}\\[10 pt]
\dot y^2 = \sin^2\theta\sin^2\phi\dot r^2+r^2\cos^2\theta\sin^2\phi\dot\theta^2+r^2\sin^2\theta\cos^2\phi\dot\phi^2+\\+2r\cos\theta\sin\theta\sin^2\phi\dot r\dot\theta \color{blue}{+ 2r\sin^2\theta\cos\phi\sin\phi\dot r\dot\phi }\color{red}{+2r^2\cos\theta\sin\theta\cos\phi\sin\phi\dot\theta\dot\phi}\\[10 pt]
\dot z^2 = \cos^2\theta\dot r^2+r^2\sin^2\theta\dot\theta^2-2r\cos\theta\sin\theta\dot r\dot\theta
$$</p>
<p>Let's evaluate the sum, keeping in mind that the coloured parts clearly cancel one another (we'll see that other parts add up to zero too, though not so easily):</p>
<p>$$\begin{align}
(\dot x^2+\dot y^2+\dot z^2) &= \dot r^2 (\sin^2\theta\cos^2\phi+\sin^2\theta\sin^2\phi+\cos^2\theta)\tag1\\
&+{}r^2\dot\theta^2(\cos^2\theta\cos^2\phi+\cos^2\theta\sin^2\phi+\sin^2\theta)\tag2\\
&+{}r^2\dot\phi^2(\sin^2\theta\sin^2\phi+\sin^2\theta\cos^2\phi)\tag3\\
&+{}2r\dot r\dot\theta(\sin\theta\cos\theta\cos^2\phi+\cos\theta\sin\theta\sin^2\phi-\cos\theta\sin\theta)\tag4
\end{align}
$$</p>
<p>Now it probably seems all wrong! But, keeping in mind the identity $$\cos^2\theta+\sin^2\theta=1$$ we can simplify a great deal: </p>
<p>Formula $(1)$
$$
\color{red}{\sin^2\theta}\cos^2\phi+\color{red}{\sin^2\theta}\sin^2\phi+\cos^2\theta = \color{red}{\sin^2\theta}\underbrace{(\cos^2\phi+\sin^2\phi)}_{\text{is one}}+\color{red}{\cos^2\theta} \\[5 pt] = \sin^2\theta+\cos^2\theta = 1
$$</p>
<p>Formula $(2)$
$$
\color{red}{\cos^2\theta}\cos^2\phi+\color{red}{\cos^2\theta}\sin^2\phi+\sin^2\theta = \cos^2\theta(\cos^2\phi+\sin^2\phi)+\sin^2\theta = \\ =\cos^2\theta+\sin^2\theta = 1
$$</p>
<p>Formula $(3)$
$$
\color{red}{\sin^2\theta}\sin^2\phi+\color{red}{\sin^2\theta}\cos^2\phi= \sin^2\theta(\sin^2\phi+\cos^2\phi)=\sin^2\theta
$$</p>
<p>Formula $(4)$
$$
\color{red}{\sin\theta\cos\theta}\cos^2\phi+\color{red}{\cos\theta\sin\theta}\sin^2\phi-\cos\theta\sin\theta = \sin\theta\cos\theta(\cos^2\phi+\sin^2\phi)-\cos\theta\sin\theta = \\ = \sin\theta\cos\theta-\sin\theta\cos\theta=0
$$</p>
<p><strong>Finally,</strong> plugging it all back into the sum of the derivatives squared what we get is </p>
<p>$$
(\dot x^2+\dot y^2+\dot z^2) =\dot r^2+r^2\dot\theta^2+r^2\sin^2\theta\dot\phi^2
$$</p>
<p>which is exactly your formula!</p>
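<p>For readers who want to double-check the algebra above without redoing it by hand, here is a small numerical sketch (my addition, not part of the answer): it differentiates arbitrary smooth $r(t),\theta(t),\phi(t)$ by central differences and compares both sides of the identity at one instant:</p>

```python
import math

# Numerical sketch (my addition): check
#   dx^2 + dy^2 + dz^2 = dr^2 + r^2 dth^2 + r^2 sin^2(th) dph^2
# at one instant, using central differences on arbitrary smooth
# r(t), th(t), ph(t).
r  = lambda t: 1.0 + 0.5 * math.sin(t)
th = lambda t: 0.7 + 0.3 * math.cos(2 * t)
ph = lambda t: 0.2 * t + 0.1 * math.sin(3 * t)

def xyz(t):
    return (r(t) * math.sin(th(t)) * math.cos(ph(t)),
            r(t) * math.sin(th(t)) * math.sin(ph(t)),
            r(t) * math.cos(th(t)))

def d(f, t, h=1e-6):                 # central-difference derivative
    return (f(t + h) - f(t - h)) / (2 * h)

t0 = 0.9
dx, dy, dz = (d(lambda t, i=i: xyz(t)[i], t0) for i in range(3))
lhs = dx ** 2 + dy ** 2 + dz ** 2
rhs = (d(r, t0) ** 2 + r(t0) ** 2 * d(th, t0) ** 2
       + r(t0) ** 2 * math.sin(th(t0)) ** 2 * d(ph, t0) ** 2)
print(lhs, rhs)  # agree to within finite-difference error
```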
|
2,903,163 | <p>I don't really know whether to put this in Physics forums since it is relating to Mechanics, or Math since the question is actually about the math being done. Don't criticize me over it.</p>
<p>So for the question: I was doing some review problems on Lagrange's equations, KE+PE, and I found <a href="http://wwwf.imperial.ac.uk/%7Epavl/ASHEET2.PDF" rel="nofollow noreferrer">this document</a>.</p>
<p>In the first question's solution, the writer differentiates without explaining the step. They have these:</p>
<p><span class="math-container">$$\begin{cases}
x = r \sin(\theta) \cos(\phi)\\[5 pt]
y = r \sin(\theta) \sin(\phi)\\[5 pt]
z = r \cos(\theta)
\end{cases}
$$</span></p>
<p>and this:</p>
<p><span class="math-container">$$T = {m\over 2}(\dot x^2 +\dot y^2 +\dot z^2)$$</span></p>
<p>I never really studied the spherical coordinate system much, and obviously never thought about the derivatives of the conversion into Cartesian. Can someone find or explain the process of taking the derivatives of the first three equations, plugging into the equation for Kinetic Energy, and simplifying? There is probably a different calculus method for the coordinate system, which I don't know. Thanks!</p>
<p>EDIT: While taking the derivatives, was the method used actually a separate form of calculus beyond I and II, or was it normal first-order differentiation? If so, how? Here is the part I am speaking of:</p>
<blockquote>
<p><strong>Solution:</strong> The kinetic energy is <span class="math-container">$T=\frac m2(\dot x^2+\dot y^2+\dot z^2)$</span>. We substitute
<span class="math-container">$$\begin{cases}
x = r \sin(\theta) \cos(\phi)\\[5 pt]
y = r \sin(\theta) \sin(\phi)\\[5 pt]
z = r \cos(\theta)
\end{cases}
$$</span>
Differentiating these, substituting into <span class="math-container">$T$</span>, and simplifying, we find
<span class="math-container">$$T=\frac m2 (\dot r^2 +r^2\dot\theta^2+r^2\sin^2\theta\dot\phi^2).$$</span></p>
</blockquote>
| Ahmed S. Attaalla | 229,023 | <p>This is a (relatively tedious) application of the chain and product rules.</p>
<hr>
<p>$$z=r \cos \theta$$</p>
<p>$$\frac{dz}{dt}=\frac{d}{dt} \left( r \cos \theta \right)$$</p>
<p>Applying the product rule,</p>
<p>$$=\frac{dr}{dt} \cos \theta+ \frac{d \cos \theta}{dt} r$$</p>
<p>Applying the chain rule,</p>
<p>$$=\dot r \cos \theta+\frac{d \cos \theta}{d \theta} \frac{d \theta}{dt} r$$</p>
<p>$$=\dot r \cos \theta-r \dot \theta \sin \theta $$</p>
<p>It is a similar exercise to differentiate $r\sin \theta$ with respect to time.</p>
<hr>
<p>$$y=r \sin \theta \sin \phi$$</p>
<p>$$\dot y=\sin \phi \frac{d}{dt} (r \sin \theta)+r \sin \theta\frac{d}{dt} \sin \phi$$</p>
<p>$$= \sin \phi \frac{d}{dt} (r \sin \theta)+r \sin \theta\frac{d \sin \phi}{d\phi} \frac{d\phi}{dt}$$</p>
<p>$$=\sin \phi \left( \dot r \sin \theta+\dot \theta r \cos \theta \right)+r \dot \phi \sin \theta \cos \phi$$</p>
<hr>
<p>$$x=r \sin \theta \cos \phi$$</p>
<p>$$\dot x=\cos \phi \frac{d}{dt}\left(r \sin \theta \right)+r \sin \theta \frac{d}{dt} \cos \phi$$</p>
<p>$$=\cos \phi \left(\dot r \sin \theta+\dot \theta r \cos \theta \right)-r \dot \phi \sin \theta \sin \phi $$</p>
<hr>
<p>In order to calculate $\dot x^2+\dot y^2$ without too much trouble, make the substitution $u= \dot r \sin \theta+\dot \theta r \cos \theta$ and $v=r \dot \phi \sin \theta$. Then we wish to calculate,</p>
<p>$$(u \sin \phi +v \cos \phi)^2+(u \cos \phi-v \sin \phi)^2$$</p>
<p>$$=u^2+v^2$$</p>
<p>$$=(\dot r \sin \theta+\dot \theta r \cos \theta )^2+r^2 \dot \phi^2 \sin^2 \theta$$</p>
<p>Next, to calculate, $\dot x^2+\dot y^2+\dot z^2$ note:</p>
<p>$$(\dot r \sin \theta+\dot \theta r \cos \theta )^2+\left(\dot r \cos \theta-r \dot \theta \sin \theta \right)^2$$</p>
<p>$$=\dot r^2+r^2 \dot \theta^2$$</p>
<p>So,</p>
<p>$$\dot x^2+\dot y^2+\dot z^2= \dot r^2+r^2 \dot \theta^2+ r^2 \dot \phi^2 \sin^2 \theta$$</p>
<p>As expected.</p>
|
1,501,595 | <blockquote>
<p>Let $A$ be an integral domain. Show that $\dim(A)=0 \iff A$ is a field.</p>
</blockquote>
<p>The backward implication is trivial.</p>
<p>For the forward implication, it suffices to show that $1 \in \langle a\rangle$ for every $a(\neq 0) \in A$. However, I don't know how to show it. </p>
<p>Any suggestions are appreciated.</p>
| rschwieb | 29,335 | <p>Since maximal ideals are prime, "zero-dimensional" becomes the same thing as "prime ideals are maximal ideals."</p>
<p>In a domain $\{0\}$ is prime, and in a zero-dimensional domain it is also maximal.</p>
<p>So, there are no other ideals besides $\{0\}$ and $A$. Do you know what this implies about $A$? (There is already more than one question on the site answering this, if you formulate the correct question...)</p>
|
184,719 | <p>Let $\Sigma_g$ be a Riemann surface of genus $g\geq 2$ and $G=\pi_1(\Sigma_g)$.
Let $\pi\colon \mathbb{H}\to \Sigma_g$ be the universal covering map. What kind of surface is $\mathbb{H}/[G,G]$? </p>
<p>Moreover, what is $[G,G]$; e.g. if $g=2$?</p>
| Misha | 21,684 | <p>Let me start by interpreting the question "What kind of surface is $S$?" in the case of a general connected oriented topological surface (without boundary). (I am considering only oriented surfaces just for simplicity of discussion.)
If $S$ had finite complexity, i.e., were homeomorphic to the interior of a compact oriented surface, you would probably be satisfied by an answer of the type "$S$ has $n$ ends and genus $g$", since this provides a complete set of topological invariants. Surfaces of infinite complexity are also classified by a certain set of invariants:</p>
<ol>
<li><p>Its set of ends (regarded as a topological space).</p></li>
<li><p>Its genus. </p></li>
<li><p>Its set of ends with positive genus. </p></li>
</ol>
<p>You can find more details and references in <a href="https://mathoverflow.net/questions/4155/classification-problem-for-non-compact-manifolds">this MO post</a>. </p>
<p>If you look closely at the surface you are interested in, $H^2/[G,G]$, you realize that its invariants are:</p>
<ol>
<li><p>The surface is 1-ended (simply because the abelian group $G/[G,G]$ is 1-ended). </p></li>
<li><p>It has infinite genus (this is easy to see and is explained in Sam's answer). </p></li>
<li><p>In particular, its only end has positive genus. </p></li>
</ol>
<p>To summarize: Your surface is the unique connected oriented topological surface of infinite genus and one end. If you are looking for a different answer, you should clarify what your question really means. </p>
|
<p>What is the integral $$\int a^{x-1}\,dx?$$</p>
<p>is it $$\frac{a^{x-1}}{\log(a)} + c?$$</p>
<p>How can we derive the proper antiderivative? Also, can you please tell me the definite integral with limits, say $b$ to $c$?</p>
| Dr. Sonnhard Graubner | 175,066 | <p>Yes: $$\int a^{x-1}\,dx=\frac{1}{a}\int a^x\,dx=\frac{1}{a}\cdot\frac{a^x}{\ln(a)}+C=\frac{a^{x-1}}{\ln(a)}+C$$ (for $a>0$, $a\neq 1$). For the definite integral, $$\int_b^c a^{x-1}\,dx=\frac{a^{c-1}-a^{b-1}}{\ln(a)}.$$</p>
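<p>A quick numerical sanity check (my own sketch, not part of the original answer): for the sample value $a=2$, the closed form $\frac{a^{c-1}-a^{b-1}}{\ln a}$ for $\int_b^c a^{x-1}\,dx$ agrees with midpoint-rule quadrature:</p>

```python
import math

# Numeric sanity check (my addition): for a = 2, compare
# the definite integral of a^(x-1) over [b, c] computed two ways:
#   closed form  (a^(c-1) - a^(b-1)) / ln(a)
#   midpoint-rule quadrature with many subintervals
a, b, c = 2.0, 0.5, 3.0
closed = (a ** (c - 1) - a ** (b - 1)) / math.log(a)

n = 100_000
h = (c - b) / n
quad = sum(a ** (b + (k + 0.5) * h - 1) for k in range(n)) * h
print(closed, quad)
```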
|
<p>Every vector space $V$ can be embedded into $V^{\ast}$ (see <a href="https://en.wikipedia.org/wiki/Dual_basis" rel="noreferrer">here</a>) after choosing a basis; for a given vector $v \in V$, denote this embedding by $v^{\ast}\in V^{\ast}$. Now for given vector spaces $V_1, \ldots, V_k$ over some field $F$, let $V = \{ \varphi : V_1 \times \ldots \times V_k \to F \mbox{ multilinear } \}$. Why not define the tensor product
of $V_1, \ldots, V_k$ simply as $T = \{ \varphi^{\ast} \mid \varphi\in V\}$. Then the universal property is obviously fulfilled, for if we
define $\pi : V_1 \times \ldots \times V_k \to T$ by $\pi(v_1, \ldots, v_k) = \Phi \in V^{\ast}$ with
$$
\Phi(\varphi) = \varphi(v_1, \ldots, v_k).
$$
Then if we have some multilinear $\varphi : V_1 \times \ldots \times V_k \to F$ define the linear map $h_{\varphi} : T \to F$ by
$$
h_{\varphi}(\Phi) = \Phi(\varphi)
$$
and we have
$$
h_{\varphi}(\pi(v_1, \ldots, v_k)) = \varphi(v_1, \ldots, v_k)
$$
i.e. it factors through $T$ by $\pi$ and $h_{\varphi}$. Then everything works out quite easily, no nasty "quotient constructions", it even appears too simple for me...</p>
<p>I have nowhere seen this definition. So why not define it that way? Have I overlooked something? Note that we do not rely on reflexivity here, as $T$ does not have to be all of $V^{\ast}$, but just those elements that arise from elements of $V$ (the image of the embedding). Maybe the universal property breaks down because the linear map is not unique, but I do not see other choices for it.</p>
| Pedro | 23,350 | <p>I haven't read all the details of your construction, but if you look at Halmos' classic text on linear algebra, <em>Finite dimensional vector spaces</em>, you will see he defines $V\otimes W$ as the linear dual of bilinear forms $V\times W\to k$, where $k$ is the base field, which seems to be what you suggest. This is Chapter 1, $\S$25.</p>
<p><a href="https://i.stack.imgur.com/TWcyx.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/TWcyx.jpg" alt="enter image description here"></a></p>
<p>The problem with this is extending it to infinite dimensional vector spaces and to arbitrary modules over arbitrary rings, of course, as Halmos explains:</p>
<p><a href="https://i.stack.imgur.com/PypQR.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/PypQR.jpg" alt="enter image description here"></a></p>
|
74,271 | <p>Hello, all!</p>
<p>I have a large sum $\sum_{i=1}^N X_i$ of log-normal random variables $X_i$ (with location parameter $\mu$ and scale parameter $\sigma$), where $N \gg 1$.
How can I estimate the rate of convergence to a Gaussian distribution in terms of $\mu$ and $\sigma$?</p>
<p>Thank you.</p>
| Brendan McKay | 9,025 | <p>If you search at Google Scholar for "sum of lognormal" or "sum of log-normal" (using the quotation marks), you will find several papers devoted to this question.</p>
|
393,378 | <p>Let <span class="math-container">$K=\mathbb{Q}(x)$</span> be the rational functions in one variable <span class="math-container">$x$</span> and let the automorphisms <span class="math-container">$\phi,\psi$</span> of <span class="math-container">$K$</span> be defined as <span class="math-container">$\phi(x)=-\frac{1}{x+1}$</span> and <span class="math-container">$\psi(x)=\frac{1}{x}$</span>.</p>
<p>Let <span class="math-container">$G$</span> be the group generated by <span class="math-container">$\phi,\psi$</span>, then <span class="math-container">$G=\langle \phi,\psi|\phi^3=\psi^2=1,\phi\psi=\psi\phi^2\rangle $</span>.</p>
<p>To be specific, <span class="math-container">$ G=\{1,\phi,\phi^2,\psi,\phi\psi,\psi\phi\} $</span> and
<span class="math-container">$$ \phi(x)=-\frac{1}{x+1}\\ \phi^2(x)=-\frac{x+1}{x}\\ \psi(x)=\frac{1}{x}\\ \phi\psi(x)=-x-1\\ \psi\phi(x)=-\frac{x}{x+1} $$</span>
let <span class="math-container">$K_0$</span> be the invariant subfield of <span class="math-container">$K$</span> under the <span class="math-container">$G$</span>-action.</p>
<p><strong>The point is to show that <span class="math-container">$K_0$</span> is a simple extension of <span class="math-container">$\mathbb{Q}$</span>.</strong></p>
<p>The threads I hold: Regarding <span class="math-container">$K$</span> as a <span class="math-container">$G$</span>-module, <span class="math-container">$K_0$</span> is nothing but the 0-th cohomology of <span class="math-container">$G$</span> with coefficients in <span class="math-container">$K$</span>, and <span class="math-container">$\text{Gal}(K/K_0)=G$</span>. On the other hand, setting <span class="math-container">$N=\sum\limits_{g\in G}g$</span> to be the norm element of <span class="math-container">$G$</span>, there is a canonical map <span class="math-container">$\alpha$</span> from <span class="math-container">$K$</span> to <span class="math-container">$K_0$</span> sending every rational function <span class="math-container">$f\in K$</span> to <span class="math-container">$Nf$</span>. I guess that <span class="math-container">$\alpha$</span> is a surjection, though I'm not sure. Notice that <span class="math-container">$Nx=-3$</span> and <span class="math-container">$N(x^2)=(x+1)^2+(\frac{1}{x+1})^2+x^2+\frac{1}{x^2}+(\frac{x+1}{x})^2+(\frac{x}{x+1})^2$</span>, and I have a vague sense that <span class="math-container">$N(x^i)$</span> can be expressed in terms of <span class="math-container">$N(x^2)$</span> for any <span class="math-container">$i\in\mathbb{Z}$</span>. So I guess <span class="math-container">$K_0=\mathbb{Q}(N(x^2))$</span>. Again, I'm not sure about this.</p>
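<p>A quick exact-arithmetic sanity check (my own addition, not part of the post) that the six images of $x$ listed above really sum to $-3$, i.e. $Nx=-3$, at a few sample rational points:</p>

```python
from fractions import Fraction

# Sketch (my addition): verify N x = -3, where N sums the six images
# of x under G = {1, phi, phi^2, psi, phi*psi, psi*phi} listed above.
def images(x):
    return [x,
            -1 / (x + 1),          # phi
            -(x + 1) / x,          # phi^2
            1 / x,                 # psi
            -x - 1,                # phi*psi
            -x / (x + 1)]          # psi*phi

for x in (Fraction(2), Fraction(3, 7), Fraction(-5, 3)):
    print(sum(images(x)))          # -3 each time
```

<p>Using <code>Fraction</code> keeps the arithmetic exact, so the equality is checked without floating-point error (away from the poles $x=0,-1$).</p>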
| R.P. | 17,907 | <p>The answer as to the surjectivity of <span class="math-container">$\alpha$</span> is <strong>no</strong>. As in algebraic number theory, the simplest way to prove that an element is not a norm is by local considerations. Let us consider
<span class="math-container">$$
y=\frac{(x^3-3x-1)(x^3+3x^2-1)}{x^2(x+1)^2},
$$</span>
and ask whether <span class="math-container">$y$</span> is a norm of <span class="math-container">$\mathbb{Q}(x)$</span>. Let us suppose that it is, and derive a contradiction. If <span class="math-container">$y$</span> is a norm, we would have
<span class="math-container">$$
y = \prod_{\sigma \in G} f(\sigma(x)) = f(x)f(\phi(x))f(\phi^2(x))f(\psi(x))f(\psi(\phi(x)))f(\psi(\phi^2(x))),
$$</span>
for some rational function <span class="math-container">$f \in \mathbb{Q}(x)$</span>, where <span class="math-container">$G \subset \operatorname{Aut}(\mathbb{Q}(x))$</span> and <span class="math-container">$\phi,\psi \in \operatorname{Aut}(\mathbb{Q}(x))$</span> are as you defined them.</p>
<p>Now, since <span class="math-container">$g=x^3-3x-1$</span> is an irreducible factor of the numerator of <span class="math-container">$y$</span>, it must appear as a numerator in at least one of the six factors in the above product representation of <span class="math-container">$y$</span> as well (when written "in lowest terms", i.e. after cancelling any common factors of numerator and denominator). As can be easily checked, <span class="math-container">$\phi$</span> permutes the roots of <span class="math-container">$g$</span>, which means <span class="math-container">$g$</span> must appear in either three or all six of the factors. Letting <span class="math-container">$\xi$</span> be one of the zeros of <span class="math-container">$g$</span>, this would imply that the order of the zero of the function <span class="math-container">$y$</span> (say considered as a meromorphic function in the variable <span class="math-container">$x$</span>) at <span class="math-container">$x=\xi$</span> is a multiple of three. However from the formula for <span class="math-container">$y$</span> it is clear that the order of the zero at <span class="math-container">$x=\xi$</span> equals <span class="math-container">$1$</span>, contradiction.</p>
|
<p>Show that $\binom{n}{0} - \binom{n}{1} + \binom{n}{2} - \dotsb + (-1)^k \binom{n}{k} = (-1)^k \binom{n-1}{k}$.</p>
<p>I know this has to do with permutations and combination problems, but I'm not sure how would I start with this problem. </p>
| Igor | 66,242 | <p>I assume that $n \geq 1$ and $K \geq 0$. We need to prove that
$\sum_{k=0}^{K}(-1)^k\binom{n}{k}=(-1)^K\cdot\binom{n-1}{K}$</p>
<p>I like proofs by induction. So we fix some $n \geq 1$ and our base case is $K = 0$. In this case we have (recall that for any $~n~$ $\binom{n}{0} = 1$):</p>
<p>$LHS: ~~ \sum_{k=0}^{0}(-1)^k\binom{n}{k} = (-1)^0\binom{n}{0} = 1\cdot 1=1$</p>
<p>$RHS: ~~ (-1)^0\cdot \binom{n-1}{0} = 1\cdot1=1$</p>
<p>Now we assume that the formula is correct for some $K \geq 0$ and show that then it's correct for $K+1$ (notice that for any $~x~$ and any $~k~$ $~(-1)^{k+1}x=-(-1)^kx~$ thus$~(-1)^kx + (-1)^{k+1}x = 0$):</p>
<p>$\sum_{k=0}^{K+1}(-1)^k\binom{n}{k} = \sum_{k=0}^{K}(-1)^k\binom{n}{k} + (-1)^{K+1}\binom{n}{K+1}=(-1)^K\cdot\binom{n-1}{K}+(-1)^{K+1}\binom{n}{K+1}$. By Pascal's rule, $\binom{n}{K+1}=\binom{n-1}{K}+\binom{n-1}{K+1}$, so this equals $(-1)^K\cdot\binom{n-1}{K} + (-1)^{K+1}\binom{n-1}{K} + (-1)^{K+1}\binom{n-1}{K+1} = (-1)^{K+1}\binom{n-1}{K+1}$</p>
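<p>The identity is also easy to confirm numerically for small cases — a quick sanity check, not part of the proof:</p>

```python
from math import comb

# check sum_{k=0}^{K} (-1)^k C(n,k) == (-1)^K C(n-1,K)
# (for K >= n both sides are 0, since comb(n-1, K) = 0 there)
for n in range(1, 10):
    for K in range(0, n + 3):
        lhs = sum((-1) ** k * comb(n, k) for k in range(K + 1))
        rhs = (-1) ** K * comb(n - 1, K)
        assert lhs == rhs
```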
|
670,522 | <p>In my very young mathematical career, I have worked a lot with modular forms. Recently, I worked as a teaching assistant in a course about geometry. At the end of the course, we dealt with hyperbolic geometry. It seems as if there is some relation between hyperbolic geometry and modular forms, for example, why is it precisely the set $\mathbb{H}$ (from which modular forms map into $\mathbb{C}$) that is also a model for a "weird" geometry in which the sum over the angles in a triangle is not $\pi$ or in which some axiom about parallel lines does not hold? It seems at first sight, as if these two mathematical areas are quite distant from each other.</p>
<p>If there is such a relation, can someone solve the following equation:</p>
<p>$$ \frac{\text{modular forms}}{\text{hyperbolic geometry}} = \frac{???}{\text{euclidean geometry}}$$</p>
<p>Of course, one can reinterpret modular forms as certain sections of line bundles over ... blah blah blah, but this is not the way you would ever describe what a modular form is to someone who has never heard about them.</p>
<p>cheers,</p>
<p>FW</p>
| dmk | 88,878 | <p>The OP, I assume, knows this, but in the interest of making this question more searchable, this is Exercise 3.3 in Axler's <em>Linear Algebra Done Right</em> (as well as the reason I was looking up questions like this).</p>
<p>I can only imagine what grading twenty or thirty proofs for this (and for a few other results) does to one's mind, but apparently my professor didn't find anything wrong with mine (but for a few typographical errors, now corrected). I'm posting it because, from what I can tell, it differs somewhat from the solutions so far; specifically, it seems a little more general in that it doesn't assign a specific value for certain preimages. I'm still far from being comfortable with any of this stuff, though, so if I'm wrong on that point, please correct me. Anyway —</p>
<p>Because $U$ is a finite-dimensional subspace, it has a basis — call it $B=\left\{u_{1},u_{2},\ldots u_{r}\right\}$. Moreover, since $B$ is linearly independent in $V$, it can be expanded to a basis for $V$ — say, $B^{\prime}=\left\{ u_{1},\ldots u_{r},v_{r+1},\ldots v_{s}\right\}$. Since $B^{\prime}$ is also linearly independent, we can form a basis for another subspace, $V^{\prime}$, with the vectors in $B^{\prime}\backslash B=\left\{ v_{r+1},\ldots v_{s}\right\}$. Let $T^{\prime}\in\mathcal{L}\left(V^{\prime},W\right)$. Now each $v\in V$ can be written uniquely as $v=u+v^{\prime}$ with $u\in U$ and $v^{\prime}\in V^{\prime}$, so we may define $T:V\rightarrow W$ by $T\left(v\right)=S\left(u\right)+T^{\prime}\left(v^{\prime}\right)$. The maps $v\mapsto S\left(u\right)$ and $v\mapsto T^{\prime}\left(v^{\prime}\right)$ both belong to $\mathcal{L}\left(V,W\right)$; and since $\mathcal{L}\left(V,W\right)$ forms a vector space, their sum $T$ is linear as well. Since each $u\in U$ can be written in $V$ as $a_{1}u_{1}+\ldots+a_{r}u_{r}+b_{r+1}v_{r+1}+\ldots+b_{s}v_{s}$, where $a_{i}\in\mathbb{F}$ and $b_{j}=0$, $T\left(u\right)=S\left(u\right)+T^{\prime}\left(0\right)=S\left(u\right)$.</p>
|
2,210,893 | <p>A lot of times when proving for example inequalities like $$x \leq y$$
for real numbers $x,y$ the argument looks like
$$x \leq y + \varepsilon$$
for all $\varepsilon > 0$, hence $x \leq y$. </p>
<p>Now this is obviously very intuitive, but is there a "proof" that this conclusion is correct? And is it always sufficient in order to proof $x \leq y$ to show $x \leq y + \epsilon$ for all $\varepsilon > 0$? </p>
<p>I'd appreciate any explanations!</p>
<p><strong>NOTE:</strong> I know that these kinds of arguments are correct when dealing with sequences. But here we have no sequences so I wanted to understand this too. </p>
| Olivier | 111,247 | <p>Suppose $x \leq y + \varepsilon$ for all $\varepsilon >0$, but that $x>y$. Then, taking $\varepsilon = \frac{x-y}{2} >0$, you obtain $x \leq \frac{x+y}{2} < x$, a contradiction.</p>
|
3,853,723 | <p>While brushing up on my old discrete mathematics skills I stumbled upon this problem that I can't solve.</p>
<p>In <span class="math-container">$\mathbb{R^2}$</span> the middle point of two coordinates is <span class="math-container">$(\frac{x_1+x_2}{2}, \frac{y_1+y_2}{2})$</span>. Show that given five points in <span class="math-container">$\mathbb{Z^2}$</span> (points with integer coordinates) there is at least one pair of them whose middle point also lies in <span class="math-container">$\mathbb{Z^2}$</span>. Let us consider the corresponding question in <span class="math-container">$\mathbb{R^3}$</span>. How many points in <span class="math-container">$\mathbb{Z^3}$</span> would you need to be sure that at least one pair of them has a middle point in <span class="math-container">$\mathbb{Z^3}$</span>?</p>
<p>I am thinking that using the <strong>pigeonhole principle</strong> is appropriate, how I would use it is unclear to me though.</p>
<p><a href="https://i.stack.imgur.com/g4WxU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/g4WxU.png" alt="points in <span class="math-container">$\mathbb{Z^2}$</span>" /></a></p>
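<p>One way to make the pigeonhole idea concrete (a sketch of the approach I have in mind, not necessarily the intended solution): classify each point by the parities of its coordinates. In <span class="math-container">$\mathbb{Z^2}$</span> there are only <span class="math-container">$2^2=4$</span> parity classes, so among five points two must share a class, and then both coordinate sums are even:</p>

```python
import random

random.seed(1)
# 5 random lattice points in Z^2; only 4 parity classes (x mod 2, y mod 2)
pts = [(random.randint(-10, 10), random.randint(-10, 10)) for _ in range(5)]

seen = {}
pair = None
for p in pts:
    key = (p[0] % 2, p[1] % 2)
    if key in seen:
        pair = (seen[key], p)
        break
    seen[key] = p

assert pair is not None          # pigeonhole: 5 points, only 4 classes
(x1, y1), (x2, y2) = pair
# both coordinate sums are even, so the midpoint lies in Z^2
assert (x1 + x2) % 2 == 0 and (y1 + y2) % 2 == 0
```

<p>The same count in <span class="math-container">$\mathbb{Z^3}$</span> gives <span class="math-container">$2^3=8$</span> parity classes, which suggests the answer there.</p>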
| David Lui | 445,002 | <p>As <span class="math-container">$k$</span>-modules, <span class="math-container">$M_n(k)$</span> is isomorphic to <span class="math-container">$k^{(n^2)}$</span>, and similarly, <span class="math-container">$M_n(B) \sim B^{(n^2)}$</span>. Therefore, since tensor product distributes over direct sum, <span class="math-container">$M_n(k) \otimes_k B \sim M_n(B)$</span> as <span class="math-container">$k$</span>-modules. The isomorphism is given via <span class="math-container">$(x_1, ... x_{n^2}) \otimes b \rightarrow (b x_1 , ... b x_{n^2})$</span>. Call this function <span class="math-container">$\phi$</span>.</p>
<p>The only thing we need to do now is to show that this preserves multiplication. By the distributive property, it suffices to show that it preserves multiplication on pure tensors.</p>
<p>Let <span class="math-container">$A, C \in M_n(k)$</span> and <span class="math-container">$x, y \in B$</span> (writing <span class="math-container">$C$</span> for the second matrix to avoid a clash with the algebra <span class="math-container">$B$</span>). Then, <span class="math-container">$\phi(A \otimes x * C \otimes y) = \phi(AC \otimes xy) = xy AC$</span>, and <span class="math-container">$\phi(A \otimes x) * \phi(C \otimes y) = xA * yC = xy AC$</span>.</p>
<p>Therefore, it preserves multiplication and is an isomorphism.</p>
|
291,957 | <p>Does there exist a simple expression for integrals of the form,</p>
<p>$I = \int_{-\infty}^0 H_n(u) H_m(u)\, \mathrm{e}^{-u^2}\,du$,</p>
<p>where $m$ and $n$ are nonnegative integers and $H_n$ is the $n$'th (physicists') Hermite polynomial?</p>
<p>When $n+m$ is even, the symmetry of the integrand and the orthogonality of $H_n$ imply,</p>
<p>$I = \sqrt{\pi} \,2^{n-1} n! \,\delta_{n,\,m}$ (for $n+m$ even).</p>
<p>For $n+m$ odd, $I$ is nonzero and increases in magnitude with $n+m$, but I have been unable to find a general formula.</p>
| Tom Davis | 864,833 | <p>Although this is a very old question, I think it has another answer that may be useful to people.</p>
<p>I prefer the probabilists' definition
<span class="math-container">$$
He_{\alpha}(x) = 2^{-\frac{\alpha}{2}}H_\alpha\left(\frac{x}{\sqrt{2}}\right)
$$</span>
so the integral becomes
<span class="math-container">$$
I = 2^{\frac{n+m-1}{2}}\int_{-\infty}^0He_n(x)He_m(x)\omega(x)dx
$$</span>
where <span class="math-container">$\omega(x) = e^{-\frac{x^2}{2}}$</span>.</p>
<p>Using the linearization of Hermite polynomials
<span class="math-container">$$
He_\alpha(x)He_\beta(x)=\sum_{k=0}^{\min(\alpha,\beta)}{\alpha \choose k}{\beta \choose k}k!He_{\alpha+\beta-2k}(x)
$$</span>
we can rewrite the integral as a single Hermite polynomial, since we know this indefinite integral
<span class="math-container">$$
I = 2^{\frac{n+m-1}{2}}\sum_{k=0}^{\min(n,m)}{n \choose k}{m \choose k}k!\int_{-\infty}^0 He_{n+m-2k}(x)\omega(x)dx.
$$</span></p>
<p>The indefinite integral can be calculated by noting (via the Rodrigues formula <span class="math-container">$He_n(x)\omega(x)=(-1)^n\frac{d^{n}}{dx^{n}}\omega(x)$</span>)
<span class="math-container">$$
\frac{d}{dx}\left[He_n(x)\omega(x)\right]=(-1)^n\frac{d^{n+1}}{dx^{n+1}}\omega(x)\\
= -He_{n+1}(x)\omega(x)
$$</span>
and therefore
<span class="math-container">$$
\int He_{n+1}(x)\omega(x)dx = -He_n(x)\omega(x)
$$</span>
and in particular
<span class="math-container">$$
\int_{-\infty}^0 He_{n+m-2k}(x)\omega(x)dx = -He_{n+m-2k-1}(0).
$$</span>
These are known as the Hermite numbers and are zero for odd indices, so nonzero terms require <span class="math-container">$n+m-2k-1$</span> to be even. Since <span class="math-container">$2k$</span> is even, the parity of <span class="math-container">$n$</span> and <span class="math-container">$m$</span> must be opposite, i.e. <span class="math-container">$\operatorname{mod}(n,2) + \operatorname{mod}(m,2) = 1$</span>.</p>
<p>Assume <span class="math-container">$n>m$</span> and <span class="math-container">$n$</span> is odd, so it can be written <span class="math-container">$n=2l+1$</span> and <span class="math-container">$m$</span> is even, so it can be written <span class="math-container">$m=2s$</span>. The integral becomes
<span class="math-container">$$
I = -2^{l+s}\sum_{k=0}^{2s}{2l+1 \choose k}{2s \choose k}k! He_{2s+2l-2k}(0).
$$</span>
We can re-index the sum by <span class="math-container">$\alpha=s+l-k$</span>
<span class="math-container">$$
I = -2^{l+s}\sum_{\alpha=l-s}^{l+s}{2l+1 \choose l+s-\alpha}{2s \choose l+s-\alpha}(l+s-\alpha)! He_{2\alpha}(0).
$$</span>
The Hermite numbers are <span class="math-container">$He_{2\alpha}(0) = \frac{(-1)^{\alpha}(2\alpha)!}{2^\alpha\alpha!}$</span> leading to
<span class="math-container">$$
I = 2^{l+s}\sum_{\alpha=l-s}^{l+s}{2l+1 \choose l+s-\alpha}{2s \choose l+s-\alpha}(l+s-\alpha)! \frac{(-1)^{\alpha+1}(2\alpha)!}{2^\alpha \alpha!}.
$$</span>
Although complicated, it has your desired properties: it is nonzero precisely in the case <span class="math-container">$n+m$</span> odd treated here (the integral vanishes for <span class="math-container">$n+m$</span> even with <span class="math-container">$n\neq m$</span>), and it increases in magnitude as <span class="math-container">$n,m$</span> increase.</p>
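<p>As a numerical cross-check (a sketch, not part of the derivation): for <span class="math-container">$n+m$</span> odd the integrand <span class="math-container">$H_nH_m e^{-u^2}$</span> is an odd polynomial times a Gaussian, so the half-line integral can be evaluated exactly, term by term, from <span class="math-container">$\int_{-\infty}^0 u^{2r+1}e^{-u^2}\,du=-\frac{r!}{2}$</span> and compared against the final sum:</p>

```python
from fractions import Fraction
from math import comb, factorial

def hermite_coeffs(n):
    # physicists' Hermite H_n as a coefficient list (index = power of u),
    # via the recurrence H_{k+1} = 2x H_k - 2k H_{k-1}
    h = [[1], [0, 2]]
    for k in range(1, n):
        nxt = [0] + [2 * c for c in h[k]]
        for i, c in enumerate(h[k - 1]):
            nxt[i] -= 2 * k * c
        h.append(nxt)
    return h[n]

def half_line_integral(n, m):
    # exact int_{-inf}^0 H_n H_m e^{-u^2} du for n+m odd, using
    # int_{-inf}^0 u^{2r+1} e^{-u^2} du = -r!/2 (even powers cancel)
    a, b = hermite_coeffs(n), hermite_coeffs(m)
    prod = [0] * (n + m + 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            prod[i + j] += ca * cb
    total = Fraction(0)
    for p, c in enumerate(prod):
        if p % 2 == 1:
            total += Fraction(-c * factorial((p - 1) // 2), 2)
    return total

def closed_form(n, m):
    # the final sum above, assuming n = 2l+1 > m = 2s
    l, s = (n - 1) // 2, m // 2
    total = Fraction(0)
    for alpha in range(l - s, l + s + 1):
        k = l + s - alpha
        total += Fraction(comb(2 * l + 1, k) * comb(2 * s, k) * factorial(k)
                          * (-1) ** (alpha + 1) * factorial(2 * alpha),
                          2 ** alpha * factorial(alpha))
    return 2 ** (l + s) * total

for n in range(1, 10, 2):          # n odd
    for m in range(0, n, 2):       # m even, m < n
        assert closed_form(n, m) == half_line_integral(n, m)
```

<p>For instance both routes give <span class="math-container">$I=-1$</span> for <span class="math-container">$(n,m)=(1,0)$</span> and <span class="math-container">$I=-12$</span> for <span class="math-container">$(n,m)=(3,2)$</span>.</p>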
|
<p>Given the vector space $ C(-\infty,\infty)$, the set of all functions continuous on $(-\infty,\infty)$, is the set of all exponential functions, $U=\{a^x\mid a \ge 1 \}$, a subspace of the given vector space?</p>
<p>As far as I'm aware, proving a subspace of a given vector space only requires you to prove closure under addition and scalar multiplication, but I'm kind of at a loss as to how to do this with exponential functions (I'm sure it's way simpler than I'm making it). </p>
<p>My argument so far is that the set $U$ is a subset of the set of all differentiable functions, which itself is a subset of $C(-\infty,\infty)$, but I doubt that argument would hold up on my test, given how we've tested for subspaces in class (with closure).</p>
| Robert Z | 299,698 | <p>Hint. Try evaluating the limit along the parabola $y=mx^2$ with $m\in\mathbb{R}$. What do you obtain? Does the limit depend on $m$?</p>
|
42,301 | <p>everyone! I am sorry, but I am an absolute novice of Mathematica (to be more precise this is my first day of using it) and even after surfing the web and all documents I am not able to solve the following system: </p>
<pre><code>Solve[{y*(((y*x)/(beta*b))^(1/(beta - 1)) - v) - c*alpha ==
0, ((x/alpha))*(((y*x)/(alpha*beta*b))^(1/(beta - 1)) -
v) + (((y*x)/alpha) -
2*alpha*((y*x)/(2*beta*b))^(1/(beta - 1)))*(1/(beta -
1))*(x/(alpha*beta*
b))*((y*x)/(alpha*beta*b))^((2 - beta)/(1 - beta)) == 0}, {x,
y}]
</code></pre>
<p>What I need is to solve following systems, getting x and y expressed through all these symbols. Is it even possible? Thank you in advance. </p>
| user1066 | 106 | <p>Not very efficient, I suspect, but two other (related) possibilities:</p>
<pre><code>#[[Position[Ordering@Ordering@#[[All, 3]], 1, 1, 1][[1, 1]]]] &@mya
</code></pre>
<p>=> </p>
<blockquote>
<p>{0, 2, 5}</p>
</blockquote>
<pre><code>Pick[#, Ordering@Ordering@#[[All, 3]], 1] &@mya
</code></pre>
<p>=> </p>
<blockquote>
<p>{{0, 2, 5}}</p>
</blockquote>
|
18,686 | <p>Suppose you have an arbitrary triangle with vertices $A$, $B$, and $C$. <a href="http://www.cs.princeton.edu/~funk/tog02.pdf">This paper (section 4.2)</a> says that you can generate a random point, $P$, uniformly from within triangle $ABC$ by the following convex combination of the vertices:</p>
<p>$P = (1 - \sqrt{r_1}) A + (\sqrt{r_1} (1 - r_2)) B + (r_2 \sqrt{r_1}) C$</p>
<p>where $r_1, r_2 \sim U[0, 1]$.</p>
<p>How do you prove that the sampled points are uniformly distributed within triangle $ABC$?</p>
| Ross Millikan | 1,827 | <p>I would argue that if it is true for any triangle, it is true for all of them, as we can find an affine transformation between them. So I would pick my favorite triangle, which is $A=(0,0), B=(1,0), C=(0,1)$. Then the point is $(\sqrt{r_1}(1-r_2),r_2\sqrt{r_1})$ and we need to prove it is always within the triangle and evenly distributed. To be in the triangle we need $x,y\ge 0, x+y\le 1$, which is clear. Then show that the probability to be within an area $(0,x) \times (0,y)$ is $2xy$ by integration.</p>
|
18,686 | <p>Suppose you have an arbitrary triangle with vertices $A$, $B$, and $C$. <a href="http://www.cs.princeton.edu/~funk/tog02.pdf">This paper (section 4.2)</a> says that you can generate a random point, $P$, uniformly from within triangle $ABC$ by the following convex combination of the vertices:</p>
<p>$P = (1 - \sqrt{r_1}) A + (\sqrt{r_1} (1 - r_2)) B + (r_2 \sqrt{r_1}) C$</p>
<p>where $r_1, r_2 \sim U[0, 1]$.</p>
<p>How do you prove that the sampled points are uniformly distributed within triangle $ABC$?</p>
| Samrat Mukhopadhyay | 83,973 | <p>I'm starting with the argument provided by @Ross Millikan. Let $A=(0,0),\ B=(1,0),\ C=(0,1)$. Then the point chosen according to the given equation is $P=(X,Y)=(\sqrt{r_1}(1-r_2),r_2\sqrt{r_1})$. Now clearly, $0\leq X,Y \leq 1$ and $X+Y\leq \sqrt{r_1}\leq 1$. Now the problem is to show that $\mathbb{P}(X\leq x, Y\leq y)=2xy,\ \forall 0\leq x,y\leq 1$ with $x+y\leq 1$. Now, \begin{equation*}
\begin{split}
\mathbb{P}(X\leq x, Y\leq y)=& \mathbb{P}(\sqrt{r_1}(1-r_2)\leq x, r_2\sqrt{r_1}\leq y)\\
\ =&\int_{0}^1 \mathbb{P}(\sqrt{r}(1-r_2)\leq x, r_2\sqrt{r}\leq y|r_1=r)f_{r_1}(r)dr\\
\ =&\int_{0}^1 \mathbb{P}(1-\frac{x}{\sqrt{r}}\leq r_2\leq \frac{y}{\sqrt{r}})I_{[0,1]}(r)dr\ \mbox{(since $r_1, r_2$ are i.i.d. $\mathcal{U}[0,1]$)}\\
\end{split}
\end{equation*}
Now to find the region of integration we note that $$1-\frac{x}{\sqrt{r}}\leq r_2\leq \frac{y}{\sqrt{r}}\ \Rightarrow\ 0\leq r\leq(x+y)^2$$ Also, if $x\leq y$ then
$$ r\in (0,x^2)\ \Rightarrow\ 1-\frac{x}{\sqrt{r}}\leq 0,\ \frac{y}{\sqrt{r}}\geq 1 \\
r\in (x^2,y^2)\ \Rightarrow\ 1-\frac{x}{\sqrt{r}}\geq 0,\ \frac{y}{\sqrt{r}}\geq 1 \\
r\in (y^2,(x+y)^2)\ \Rightarrow\ 1-\frac{x}{\sqrt{r}}\geq 0,\ \frac{y}{\sqrt{r}}\leq 1 \\$$
and if $y\leq x$ then
$$ r\in (0,y^2)\ \Rightarrow\ 1-\frac{x}{\sqrt{r}}\leq 0,\ \frac{y}{\sqrt{r}}\geq 1 \\
r\in (y^2,x^2)\ \Rightarrow\ 1-\frac{x}{\sqrt{r}}\leq 0,\ \frac{y}{\sqrt{r}}\leq 1 \\
r\in (x^2,(x+y)^2)\ \Rightarrow\ 1-\frac{x}{\sqrt{r}}\geq 0,\ \frac{y}{\sqrt{r}}\leq 1 \\$$</p>
<p>Then if $x\leq y$ the integral becomes $$\int_{0}^{x^2}1 dr+\int_{x^2}^{y^2}\frac{x}{\sqrt{r}} dr+ \int_{y^2}^{(x+y)^2}\left(\frac{x+y}{\sqrt{r}}-1\right) dr=2xy$$
Similarly, if $y\leq x$ the integral becomes $$\int_{0}^{y^2}1 dr+\int_{y^2}^{x^2}\frac{y}{\sqrt{r}} dr+ \int_{x^2}^{(x+y)^2}\left(\frac{x+y}{\sqrt{r}}-1\right) dr=2xy$$ Hence the point $P$ is uniformly distributed on the surface of the triangle $ABC$. $\hspace{3cm}\ \Box$</p>
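<p>The value $\mathbb{P}(X\leq x, Y\leq y)=2xy$ can also be checked by simulation (a sketch; the tolerance is generous relative to the Monte Carlo error):</p>

```python
import random

random.seed(2)
N = 400_000
x0, y0 = 0.3, 0.4                  # x0 + y0 <= 1, so P(X<=x0, Y<=y0) = 2*x0*y0
hits = 0
for _ in range(N):
    r1, r2 = random.random(), random.random()
    X, Y = (r1 ** 0.5) * (1 - r2), r2 * (r1 ** 0.5)
    if X <= x0 and Y <= y0:
        hits += 1

p_hat = hits / N
assert abs(p_hat - 2 * x0 * y0) < 0.005    # 2*x0*y0 = 0.24; ~7 standard errors
```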
|
1,988,419 | <p>Any hint for proving this? If $Y$ is a subspace of $X$, what I am able to find is a closed subset $V$ in $Y$, hence $\mbox{cl}_Y(V)$ is compact, whose closure is contained in a neighborhood of a point $x$, by regularity of $Y$. Restricted to $X$, this $V_x=V \cap X$ is closed. I don't see any way to prove that $V_x$ is compact in $X$ but this is the obvious path. Thanks a lot for any suggestion.</p>
| Taumatawhakatangihangakoauauot | 144,375 | <p>The result you wish to prove is not true. To give a counterexample I will use a characterization of locally compact subspaces of locally compact Hausdorff spaces found in <a href="https://math.stackexchange.com/a/644089">this answer</a>:</p>
<blockquote>
<p>If a subspace $Y$ of a locally compact Hausdorff space $X$ is locally compact, then it is of the form $Y = F \cap U$ where $F \subseteq X$ is closed and $U \subseteq X$ is open.</p>
</blockquote>
<p>Consider the compact regular space $X = [0,1]$, and the subspace $Y = \mathbb Q \cap X$. If $Y$ is locally compact, then there is a closed $F \subseteq X$ and an open $U \subseteq X$ such that $Y = F \cap U$. Since $Y \subseteq F$ and $Y$ is dense in $X$ and $F$ is closed, it follows that $F = X$, meaning that $Y = F \cap U = X \cap U = U$, which is impossible since $Y$ is not an open subset of $X$! Therefore $Y$ is not a locally compact subspace of $X$.</p>
|
4,174 | <p>I'm developing a course that focuses on the transistion from arithmetic to algebraic thinking, particularly in grades 5-8. We will do this through focus on the common core. I'm also putting together a collection of suggested readings from the math education literature. I would be interested to hear your suggestions for suggested readings.</p>
| Joseph Malkevitch | 1,865 | <p>Although these two NCTM books cover K-12 (there are items specifically directed at middle school level) they have ideas related to how to develop algebraic thinking: The Ideas of Algebra, K-12, 1988 Yearbook, A. Coxford and A. Shulte, editors, and Developing Mathematical Reasoning in Grades K-12, 1999 Yearbook, Lee Stiff and Francis Curcio, editors. </p>
|
2,764,818 | <blockquote>
<p>Let $f(x)=ax^3+bx^2+cx+d$ be a polynomial function; find the relation between $a,b,c,d$ such that its roots are in an arithmetic/geometric progression. (separate relations)</p>
</blockquote>
<p>So for the arithmetic progression I took let $\alpha = x_2$ and $r$ be the ratio of the arithmetic progression.</p>
<p>We have:</p>
<p>$$x_1=\alpha-2r, \quad x_2=\alpha, \quad x_3=\alpha +2r$$</p>
<p>Therefore:</p>
<p>$$x_1+x_2+x_3=-\frac ba=3\alpha$$
$$x_1^2+x_2^2+x_3^2 = 9\alpha^2-2\frac ca \to 4r^2=\frac {b^2-3ac}{3a^2}$$
$$x_1x_2x_3=\alpha(\alpha^2-4r^2)=-\frac da$$</p>
<p>and we get the final result $2b^3+27a^2d-9abc=0$.</p>
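<p>This AP relation can be sanity-checked numerically by expanding cubics whose roots are in an arbitrary arithmetic progression (the spacing convention does not matter) — a quick sketch:</p>

```python
import random

random.seed(0)
for _ in range(100):
    a = random.randint(1, 5)
    alpha = random.randint(-5, 5)
    r = random.randint(-5, 5)
    roots = [alpha - r, alpha, alpha + r]      # roots in arithmetic progression
    # coefficients of a(x - x1)(x - x2)(x - x3) via elementary symmetric functions
    b = -a * sum(roots)
    c = a * (roots[0]*roots[1] + roots[0]*roots[2] + roots[1]*roots[2])
    d = -a * roots[0]*roots[1]*roots[2]
    assert 2*b**3 + 27*a**2*d - 9*a*b*c == 0
```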
<p>How should I take the ratio at the geometric progression for roots?</p>
<p>I tried something like </p>
<p>$$x_1=\frac {\alpha}q, \quad x_2=\alpha, \quad x_3=\alpha q$$</p>
<p>To get $x_1x_2x_3=\alpha^3$ but it doesn't really work out..</p>
<p>Note:</p>
<p>I have to choose from this set of answers:</p>
<p>$$\text{(a)} \ a^2b=c^2d \quad\text{(b)}\ a^2b^2=c^2d \quad\text{(c)}\ ab^3=c^3d$$</p>
<p>$$\text{(d)}\ ac^3=b^3d \quad\text{(e)}\ ac=bd \quad\text{(f)}\ a^3c=b^3d$$</p>
| Somos | 438,089 | <p>The condition you want is <span class="math-container">$\;D:=(x_1^2-x_2x_3)(x_2^2-x_1x_3)(x_3^2-x_1x_2)=0.$</span> This can be written as <span class="math-container">$\;D=e_1^3e_3-e_2^3\;$</span> where <span class="math-container">$\;e_1,e_2,e_3\;$</span> are the elementary symmetric functions of the roots. This simplifies in terms of the polynomial coefficients to <span class="math-container">$\;ac^3-b^3d\;$</span> since <span class="math-container">$e_1=-\frac{b}{a},e_2=\frac{c}{a},e_3=-\frac{d}{a}.$</span> Thus the answer choice is (d).</p>
<p>By the way, the <a href="http://grail.eecs.csuohio.edu/~somos/ident04.gp" rel="nofollow noreferrer">Special Algebraic Identity</a> <strong>id3_3_2_6a</strong>
<span class="math-container">$$\;(x_1^2-x_2x_3)(x_2^2-x_1x_3)(x_3^2-x_1x_2)=(x1+x_2+x_3)^3x_1x_2x_3-(x_1x_2+x_1x_3+x_2x_3)^3\;$$</span>
is given in H. S. Hall and S. R. Knight, <em>Higher Algebra,</em> 1957 (p. 439, Examples, 23).</p>
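<p>The displayed identity is easy to verify numerically over random integer triples (a brute-force sketch, not a substitute for the algebra):</p>

```python
import random

random.seed(1)
for _ in range(200):
    x1, x2, x3 = (random.randint(-9, 9) for _ in range(3))
    lhs = (x1*x1 - x2*x3) * (x2*x2 - x1*x3) * (x3*x3 - x1*x2)
    e1 = x1 + x2 + x3                  # elementary symmetric functions
    e2 = x1*x2 + x1*x3 + x2*x3
    e3 = x1*x2*x3
    assert lhs == e1**3 * e3 - e2**3
```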
|
587,217 | <p>I know that the units of 2 by 2 matrices with integer entries must have a determinant of 1 or -1, and I have proved that if the determinant is zero then the matrix is not a unit. However, I am wondering how you would go about proving that matrices with determinants other than 1 and -1 are not units?</p>
| user1337 | 62,839 | <p>Given a matrix of integers $$\begin{pmatrix}a & b \\ c & d \end{pmatrix} $$ its inverse is $$ \frac{1}{ad-bc} \begin{pmatrix}d & -b \\ -c & a \end{pmatrix} .$$ This will be a matrix of integers if and only if $ad-bc=\pm 1$: indeed, if $A$ and $A^{-1}$ both have integer entries, then $\det A$ and $\det A^{-1}$ are integers whose product is $\det I = 1$, forcing $\det A = ad-bc = \pm 1$.</p>
|
2,643,099 | <p>Can someone help me explain why it is true that</p>
<p>$$\sin(\pi/2-\theta)=\sqrt{1-\sin^2\theta}$$</p>
<p>When answering please explain the different relation which is used</p>
<p>Thanks</p>
| user | 505,767 | <p>Simply note that</p>
<ul>
<li>$1-\sin^2\theta=\cos^2 \theta$</li>
<li>$\sin \left(\frac{\pi}2-\theta\right)=\cos \theta$</li>
</ul>
<p>thus, since the RHS is non-negative </p>
<p>$$\sin \left(\frac{\pi}2-\theta\right)=\sqrt{1-\sin^2\theta}$$</p>
<p>is true if and only if $\sin \left(\frac{\pi}2-\theta\right)=\cos \theta\ge0$ that is $\theta \in\left[-\frac{\pi}2+2k\pi,\frac{\pi}2+2k\pi\right]$.</p>
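<p>A quick numerical check of this case analysis (a sketch): the two sides agree exactly when $\cos\theta\ge 0$ and differ by a sign otherwise.</p>

```python
import math

for k in range(-200, 200):
    t = 0.03 * k
    lhs = math.sin(math.pi / 2 - t)
    rhs = math.sqrt(1 - math.sin(t) ** 2)      # equals |cos(t)|
    if math.cos(t) >= 0:
        assert abs(lhs - rhs) < 1e-9           # identity holds
    else:
        assert abs(lhs + rhs) < 1e-9           # sides differ by a sign
```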
|
2,643,099 | <p>Can someone help me explain why it is true that</p>
<p>$$\sin(\pi/2-\theta)=\sqrt{1-\sin^2\theta}$$</p>
<p>When answering please explain the different relation which is used</p>
<p>Thanks</p>
| Guy Fsone | 385,707 | <p>$$ \sin \left(\frac{\pi}2-\theta\right) =\cos\theta, \qquad \sqrt{1-\sin^2\theta}=\sqrt{\cos^2\theta}=|\cos\theta|,$$ and in general $\cos\theta \neq |\cos\theta|$: the two sides agree exactly when $\cos\theta\ge 0$.</p>
|
3,161,371 | <p>For <span class="math-container">$p, q$</span> prime, if <span class="math-container">$q$</span> divides an integer <span class="math-container">$n$</span> but <span class="math-container">$p$</span> does not, show that <span class="math-container">$\text{gcd}(n, pq) = q$</span></p>
<p>This statement sort of reminds me of Euclid's Lemma, but I haven't been able to progress much. </p>
<p>I tried writing <span class="math-container">$n = kq$</span> for some integer <span class="math-container">$k$</span>. Then we have <span class="math-container">$\text{gcd}(kq, pq)$</span>, where <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are prime. I don't really know how to progress from here.</p>
| Bill Dubuque | 242 | <p>More generally for any <span class="math-container">$\,p,q\in\Bbb Z\!:\,$</span> <span class="math-container">$\, \color{#c00}{(p,n)}=1\,\Rightarrow\, (pq,n) = (q,n),\,$</span> because</p>
<p><span class="math-container">$$ (pq,n) = (pq,nq,n)=(\color{#c00}{(p,n)}q,n) = (q,n)$$</span></p>
<p>This is indeed one form of Euclid's Lemma. The above proof works in any domain where gcd exists (where proofs using unique factorization [e.g. Robert's answer] may fail). </p>
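<p>Both the general gcd law and the question's special case are easy to test exhaustively over a small range (a sketch):</p>

```python
from math import gcd

# general law: gcd(p, n) = 1  =>  gcd(p*q, n) = gcd(q, n)
for n in range(1, 60):
    for p in range(1, 25):
        if gcd(p, n) != 1:
            continue
        for q in range(1, 25):
            assert gcd(p * q, n) == gcd(q, n)

# the question's special case: p, q prime, q | n, p does not divide n
assert gcd(14, 3 * 7) == 7      # n = 14, q = 7 divides n, p = 3 does not
```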
|
173,466 | <p>For a given matrix <code>M[n]</code> of size $ n\times n $ I want to define the following list of matrix-expressions:</p>
<pre><code>n=1
{Tr[M[1]]}
n=2
{Tr[M[2]]^2,Tr[M[2].M[2]]}
n=3
{Tr[M[3]]^3,Tr[M[3]]Tr[M[3].M[3]],Tr[M[3].M[3].M[3]]}
</code></pre>
<p>How could I generalize this relation for arbitrary $ n $? I tried <code>Nest</code> and <code>NestList</code>.
Thanks!</p>
| Αλέξανδρος Ζεγγ | 12,924 | <p>How about this:</p>
<pre><code>listVonMat[M_] := Module[{n = Length[M], inter1, inter2},
inter1 = TakeDrop[Table[M, n], #] & /@ Range[n - 1, 0, -1];
inter2 = {Times @@ Tr /@ #1, Tr @ (Dot @@ #2)} & @@@ inter1;
Times @@@ inter2
]
</code></pre>
<p>Maybe numerical results are easier to be checked with:</p>
<pre><code>listVonMat[Partition[Range[#^2], #]] & /@ Range[4]
</code></pre>
<p>returns</p>
<pre><code>{
{1},
{25, 29},
{3375, 3915, 4185},
{1336336, 1521296, 1613776, 1719056}
}
</code></pre>
<hr>
<p><strong>Update</strong></p>
<p>To show that the scheme is indeed implemented, run, e.g., codes below</p>
<pre><code>TakeDrop[{a, a, a, a}, #] & /@ Range[3, 0, -1]
Times @@@ ({Times @@ Tr /@ #1, Tr @ (Dot @@ #2)} & @@@ %)
</code></pre>
<p>and one gets</p>
<pre><code>{{{a, a, a}, {a}}, {{a, a}, {a, a}}, {{a}, {a, a, a}}, {{}, {a, a, a, a}}}
{Tr[a]^4, Tr[a]^2 Tr[a.a], Tr[a] Tr[a.a.a], Tr[a.a.a.a]}
</code></pre>
|
230,204 | <p>Let $X$ be a compact, oriented Riemann manifold. Let $\pi_{P}: P \rightarrow X$ be a principal $G$-bundle over $X$, for a compact Lie group $G$. Let $(M, \omega)$ be a symplectic manifold endowed with a symplectic action of $G$. Denote by $\mathcal{N}:=C^{\infty}(P,M)^{G}$ the space of smooth $G$-equivariant maps $u:P \rightarrow M$. Then $C^{\infty}(P,M)^{G}$ is a smooth Frechet manifold. The total space of the tangent bundle $T\mathcal{N} = C^{\infty}(P,TM)^{G}$. At a point $u \in \mathcal{N}$, $T_{u}\mathcal{N} = \Gamma(P, u^{\ast}TM)^{G}$.</p>
<p>For $\xi_{1}, \xi_{2} \in T_{u}\mathcal{N}$, define $\Omega(\xi_{1}, \xi_{2}) = \displaystyle \int_{X} \omega_{u}(\xi_{1}, \xi_{2}) ~ dvol_{\scriptscriptstyle X}$, where $\omega_{u}(\cdot, \cdot)$ denotes the restriction of $\omega$ along $u$. Will $\Omega(\cdot, \cdot)$ be a symplectic form on $\mathcal{N}$? More precisely, is $\Omega(\cdot, \cdot)$ closed?</p>
| Peter Michor | 26,935 | <p>Yes, I think $\Omega$ is closed. I will add a proof later.
Yes, as a mapping $T\mathcal N \to T^*\mathcal N$ it is injective, but it can never be surjective, since $T_u\mathcal N$ is a Frechet space, whereas its dual $T^*_u\mathcal N$ is a DF-space (generalized functions or distributions) which can never be isomorphic to a Frechet space. So $\Omega$ is a weak symplectic structure. See section 48 (called: Weak Symplectic Manifolds) of <a href="http://www.mat.univie.ac.at/~michor/apbookh-ams.pdf" rel="nofollow">here</a>, where 48.2 and 48.8 have to be corrected as described in the <a href="http://www.mat.univie.ac.at/~michor/apbook.mpr.html" rel="nofollow">Errata</a>. </p>
<h1>Edit:</h1>
<p>Answering your comment: It is not necessary to work with Sobolev completions.
In your case you can do it. If the structure group is a diffeomorphism group, you lose smoothness of the action. </p>
<p>You can work with the image under $\Omega$ of $T\mathcal N$ as "symplectic dual". See 2.5 of <a href="http://www.mat.univie.ac.at/~michor/curves-hamiltonian.pdf" rel="nofollow">this paper</a> for an example of symplectic reduction, which in this case is equivalent to constructing a Riemannian submersion.
Also <a href="http://www.mat.univie.ac.at/~michor/landmarks.pdf" rel="nofollow">this paper</a> might be of interest. </p>
|
4,133,782 | <p>I am having trouble finding a formula that connects the two and can produce an answer. Anyone know how this is done? I tried y=mx+b, m=3, and b=5-a. But I don't know what to do next or did I even start right.</p>
| Community | -1 | <p>If <span class="math-container">$A$</span> is <span class="math-container">$m \times n$</span>, then the following are equivalent:</p>
<ol>
<li><span class="math-container">$A$</span> has full column rank <span class="math-container">$n$</span></li>
<li>The columns of <span class="math-container">$A$</span> are linearly independent</li>
<li>The null space of <span class="math-container">$A$</span> is trivial</li>
<li>The map induced by <span class="math-container">$A$</span> is injective</li>
<li><span class="math-container">$A$</span> has a left inverse</li>
</ol>
<p><strong>Proof that 1 <span class="math-container">$\iff$</span> 2:</strong></p>
<p>Immediate from the definition of column rank.</p>
<p><strong>Proof that 2 <span class="math-container">$\iff$</span> 3:</strong></p>
<p>Observe that the vector <span class="math-container">$Ax$</span> is equal to the linear combination <span class="math-container">$\sum_{i=1}^{n}a_i x_i$</span>, where <span class="math-container">$a_i$</span> is the <span class="math-container">$i$</span>'th column of <span class="math-container">$A$</span>, and <span class="math-container">$x_i$</span> is the <span class="math-container">$i$</span>'th component of <span class="math-container">$x$</span>.</p>
<p>In particular, <span class="math-container">$Ax = 0$</span> if and only if <span class="math-container">$\sum_{i=1}^{n}a_i x_i = 0$</span>.</p>
<p>The null space of <span class="math-container">$A$</span> is trivial if and only if <span class="math-container">$x=0$</span> is the only solution to <span class="math-container">$Ax = 0$</span> which, by what we said above, is true if and only if <span class="math-container">$\sum_{i=1}^{n}a_i x_i = 0$</span> implies <span class="math-container">$x_i = 0$</span> for all <span class="math-container">$i$</span>, which is true if and only if <span class="math-container">$a_1, a_2, \ldots, a_n$</span> are linearly independent.</p>
<p><strong>Proof that 3 <span class="math-container">$\iff$</span> 4:</strong></p>
<p>Suppose that <span class="math-container">$Ax = Ay$</span>. Since <span class="math-container">$A$</span> is linear, this is equivalent to <span class="math-container">$Ax - Ay = A(x-y) = 0$</span>. Therefore <span class="math-container">$x-y$</span> is in the null space of <span class="math-container">$A$</span>. But the null space of <span class="math-container">$A$</span> is trivial, hence <span class="math-container">$x-y = 0$</span>, so <span class="math-container">$x=y$</span>. This shows that (the map induced by) <span class="math-container">$A$</span> is injective (one-to-one).</p>
<p>Conversely, suppose that <span class="math-container">$A$</span> is injective. Then <span class="math-container">$x=0$</span> is the unique vector such that <span class="math-container">$Ax = 0$</span>. Therefore the null space of <span class="math-container">$A$</span> is trivial.</p>
<p><strong>Proof that 2 (and equivalently 4) <span class="math-container">$\implies$</span> 5:</strong></p>
<p>Let <span class="math-container">$e_1, e_2, \ldots, e_n$</span> be the canonical basis for <span class="math-container">$\mathbb R^n$</span>, meaning that <span class="math-container">$e_i$</span> has a <span class="math-container">$1$</span> in the <span class="math-container">$i$</span>'th component, and zeros everywhere else. Note that for each <span class="math-container">$i$</span> we have <span class="math-container">$a_i = Ae_i$</span>, where again <span class="math-container">$a_i$</span> is the <span class="math-container">$i$</span>th column of <span class="math-container">$A$</span>. Moreover, since <span class="math-container">$A$</span> is injective, <span class="math-container">$e_i$</span> is the <em>unique</em> vector that is mapped by <span class="math-container">$A$</span> to <span class="math-container">$a_i$</span>.</p>
<p>Now, since <span class="math-container">$a_1, a_2, \ldots, a_n$</span> are linearly independent, they are a basis for the column space of <span class="math-container">$A$</span>, which can be extended to a basis <span class="math-container">$a_1,a_2,\ldots, a_n, b_1,b_2,\ldots,b_{m-n}$</span> for <span class="math-container">$\mathbb R^m$</span>. Hence an arbitrary <span class="math-container">$y \in \mathbb R^m$</span> has a unique representation of the form <span class="math-container">$y = \sum_{i=1}^{n} c_i a_i + \sum_{j=1}^{m-n} d_j b_j$</span> where <span class="math-container">$c_i$</span> and <span class="math-container">$d_j$</span> are scalars.</p>
<p>Therefore we can define a linear map <span class="math-container">$g : \mathbb R^m \to \mathbb R^n$</span> by first setting <span class="math-container">$g(a_i) = e_i$</span> for each <span class="math-container">$i=1,2,\ldots,n$</span> and <span class="math-container">$g(b_j) = 0$</span> for each <span class="math-container">$j=1,2,\ldots,m-n$</span>, and then extending <span class="math-container">$g$</span> linearly to all of <span class="math-container">$\mathbb R^m$</span>:</p>
<p><span class="math-container">$$g(y) = g\left(\sum_{i=1}^{n} c_i a_i + \sum_{j=1}^{m-n} d_j b_j \right) = \sum_{i=1}^{n} c_i g(a_i) + \sum_{j=1}^{m-n} d_j g(b_j) = \sum_{i=1}^{n} c_i g(a_i) = \sum_{i=1}^{n} c_i e_i$$</span></p>
<p>Then <span class="math-container">$g$</span> is a left inverse of <span class="math-container">$A$</span>:</p>
<p><span class="math-container">$$g(Ax) = g\left(\sum_{i=1}^{n}a_i x_i\right) = \sum_{i=1}^{n} x_i g(a_i) = \sum_{i=1}^{n} x_i e_i = x$$</span></p>
<p><strong>Proof that 5 <span class="math-container">$\implies$</span> 3:</strong></p>
<p>Suppose that <span class="math-container">$Ax = 0$</span>. Let <span class="math-container">$g$</span> be a left inverse of <span class="math-container">$A$</span>. Then <span class="math-container">$x = g(Ax) = 0$</span>. This shows that the null space of <span class="math-container">$A$</span> is trivial.</p>
<hr>
<p>As a side note, it turns out that 4 and 5 are equivalent for general functions, not just linear maps. If <span class="math-container">$f$</span> is any injective function, then it has a left inverse, and conversely if <span class="math-container">$f$</span> is any function that has a left inverse, then it is injective. There is a proof <a href="https://math.stackexchange.com/questions/1075924/finishing-a-proof-f-is-injective-if-and-only-if-it-has-a-left-inverse">here</a>, for example. Since you indicated in the comments that this is an unfamiliar fact, I did not use it in the proof above but instead constructed a left inverse explicitly.</p>
<hr>
<p>Note that my proof shows why a left inverse of <span class="math-container">$A$</span> must exist if <span class="math-container">$A$</span> has full column rank, but it doesn't explicitly show how to compute the left inverse.</p>
<p>As Strang notes, one formula for a left inverse is <span class="math-container">$B = (A^T A)^{-1} A^T$</span>. That this is a left inverse is clear by computing:</p>
<p><span class="math-container">$$BA = ((A^T A)^{-1} A^T) A = (A^T A)^{-1} (A^T A) = I_n$$</span></p>
<p>But as you will have noted, Strang punts to a later chapter the proof that <span class="math-container">$A^T A$</span> is invertible when <span class="math-container">$A$</span> has full column rank. So that's not very satisfactory!</p>
<p>Also, computing Strang's left inverse is very inefficient because it involves inverting <span class="math-container">$A^T A$</span>. This requires a lot of calculation, proportional to <span class="math-container">$n^3$</span> operations for an <span class="math-container">$n \times n$</span> matrix.</p>
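<p>As a sanity check of Strang's formula, here is a small Python sketch using exact rational arithmetic (the particular <span class="math-container">$4 \times 2$</span> matrix is just an arbitrary full-column-rank example; the helpers are illustrative, not library code):</p>

```python
from fractions import Fraction

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

# A 4x2 matrix with linearly independent columns (full column rank).
A = [[Fraction(1), Fraction(0)],
     [Fraction(0), Fraction(1)],
     [Fraction(1), Fraction(1)],
     [Fraction(2), Fraction(3)]]

At = transpose(A)
G = matmul(At, A)                     # the 2x2 Gram matrix A^T A

# Invert the 2x2 matrix G by the adjugate formula.
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
Ginv = [[ G[1][1] / det, -G[0][1] / det],
        [-G[1][0] / det,  G[0][0] / det]]

B = matmul(Ginv, At)                  # Strang's left inverse (A^T A)^{-1} A^T

print(matmul(B, A) == [[1, 0], [0, 1]])   # True: B A = I_2 exactly
```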
<p>In practice, probably the best way to compute a left inverse is to perform row reduction on <span class="math-container">$A$</span> to bring it to the form</p>
<p><span class="math-container">$$\begin{bmatrix} I_n \\ 0_{m-n \times n} \end{bmatrix}$$</span></p>
<p>where <span class="math-container">$I_n$</span> is the <span class="math-container">$n \times n$</span> identity matrix, and <span class="math-container">$0_{m-n \times n}$</span> is the <span class="math-container">$m - n \times n$</span> matrix consisting of all zeros. Row reduction to this form is possible if and only if the columns of <span class="math-container">$A$</span> are linearly independent.</p>
<p>Assuming you're familiar with row reduction, you probably know that each row operation can be expressed as an <span class="math-container">$m \times m$</span> elementary matrix of one of three forms, corresponding to the three row reduction operations (multiplying a row by a scalar, interchanging two rows, and adding a scalar multiple of one row to another). The row reduction procedure can then be expressed by left-multiplying <span class="math-container">$A$</span> by the corresponding elementary matrices. Assuming there are <span class="math-container">$k$</span> of these, we have:</p>
<p><span class="math-container">$$E_k E_{k-1} \cdots E_2 E_1 A = \begin{bmatrix} I_n \\ 0_{m-n \times n} \end{bmatrix}$$</span></p>
<p>The product <span class="math-container">$E_k E_{k-1} \cdots E_2 E_1$</span> is easy to understand conceptually: it corresponds to the <span class="math-container">$k$</span> row operations used to bring <span class="math-container">$A$</span> into the reduced form. Fortunately, it's not necessary to <em>compute</em> <span class="math-container">$E_k E_{k-1} \cdots E_2 E_1$</span> as a product of <span class="math-container">$k$</span> matrices! Instead you compute it by starting with <span class="math-container">$I_{m}$</span> and performing the same row operations on it as you perform on <span class="math-container">$A$</span>.</p>
<p>In any case, denoting <span class="math-container">$E_k E_{k-1} \cdots E_2 E_1$</span> by <span class="math-container">$B$</span>, the above becomes</p>
<p><span class="math-container">$$BA = \begin{bmatrix} I_n \\ 0_{m-n \times n} \end{bmatrix}$$</span></p>
<p>Note that <span class="math-container">$B$</span> is an <span class="math-container">$m \times m$</span> matrix. It is <em>almost</em> the left inverse we seek, except we want just <span class="math-container">$I_n$</span> on the right hand side and a left inverse should be <span class="math-container">$n \times m$</span>, not <span class="math-container">$m \times m$</span>. If <span class="math-container">$m > n$</span> then the right hand side has <span class="math-container">$m-n$</span> spare rows of zeros at the bottom. To get rid of these, we can simply remove the bottom <span class="math-container">$m-n$</span> rows of <span class="math-container">$B$</span> to get an <span class="math-container">$n \times m$</span> matrix <span class="math-container">$B'$</span> which satisfies <span class="math-container">$B'A = I_n$</span> and is therefore a left inverse of <span class="math-container">$A$</span>, as desired!</p>
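<p>The row-reduction recipe above can be sketched in a few lines of Python with exact rational arithmetic (an illustration of the method, not production code; the helper name is mine):</p>

```python
from fractions import Fraction

def left_inverse(A):
    """Left inverse of an m x n matrix A (m >= n, full column rank) via
    row reduction: reduce [A | I_m] so the A-block becomes [I_n; 0]; the
    first n rows of the transformed I_m form an n x m left inverse of A."""
    m, n = len(A), len(A[0])
    # Augment A with the m x m identity.
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(1 if i == j else 0) for j in range(m)] for i in range(m)]
    for col in range(n):
        # Find a pivot row and swap it into place.
        piv = next(r for r in range(col, m) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        # Scale the pivot row, then eliminate the column elsewhere.
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(m):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M[:n]]

A = [[1, 2], [3, 4], [5, 6]]          # 3x2, columns linearly independent
B = left_inverse(A)
BA = [[sum(B[i][k] * A[k][j] for k in range(3)) for j in range(2)]
      for i in range(2)]
print(BA == [[1, 0], [0, 1]])         # True: B A = I_2
```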
|
4,401,460 | <p>I'm wondering what other tools there are aside from radicals can be used to extend fields in the context of solving polynomials. Since <span class="math-container">$S_5$</span> isn't solvable, constructing a field with a Galois group of <span class="math-container">$S_5$</span> with respect to <span class="math-container">$\mathbb{Q}$</span> can't be a radical extension, but is there some other function or operation that could be used? In other words, a quintic formula with radicals doesn't exist, but is there some function that isn't a purpose-built "this function yields solutions to a polynomial" function that could be used to solve quintics or higher polynomials?</p>
| Young Toom | 1,035,157 | <p>My idea is Markov Chain.
Three states, representing <span class="math-container">$0,1,2$</span>. Start at <span class="math-container">$0$</span>, every turn uniformly go to one of the three states.We want to calculate <span class="math-container">$E_0(\sigma_0)$</span>.</p>
<p>Denote by <span class="math-container">$t_i$</span> the expected time to first reach state <span class="math-container">$0$</span> starting from state <span class="math-container">$i$</span>, for <span class="math-container">$i=0,1,2$</span>; in particular <span class="math-container">$t_0=E_0(\sigma_0)$</span> is the expected first return time.</p>
<p>We have
<span class="math-container">$$t_0 = 1+(t_1+t_2)/3$$</span>
<span class="math-container">$$t_1 = 1+(t_1+t_2)/3$$</span>
<span class="math-container">$$t_2 = 1+(t_1+t_2)/3$$</span>
Solving: by symmetry <span class="math-container">$t_1=t_2$</span>, so <span class="math-container">$t_1=1+2t_1/3$</span> gives <span class="math-container">$t_1=t_2=3$</span>, and hence <span class="math-container">$t_0=3$</span>.</p>
<p>Update-----</p>
<p>I was a fool. Think of it this way: as long as you haven't finished rolling, each turn there is a <span class="math-container">$1/3$</span> chance of being terminated. So the return time follows a geometric distribution with <span class="math-container">$p=1/3$</span>, whose mean is <span class="math-container">$3$</span>.</p>
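<p>This geometric-distribution view is easy to confirm numerically; a small Python simulation (my own sketch, not part of the original answer):</p>

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def return_time():
    """Number of turns until the uniform walk on {0, 1, 2} lands on state 0."""
    steps = 0
    while True:
        steps += 1
        if random.randrange(3) == 0:   # each turn hits state 0 with prob 1/3
            return steps

trials = 200_000
avg = sum(return_time() for _ in range(trials)) / trials
print(abs(avg - 3.0) < 0.05)   # True: the sample mean is very close to 3
```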
|
918,689 | <p>of 5 be selected that contain /at least/ 1 of the broken bulbs?</p>
<p>So far, I have tried only 1 method, as it's the only one I've been taught, but I don't know if I am doing it right.
I tried doing C(100,1)/C(100,5) but it just doesn't seem right. Is it? If it isn't, what am I doing wrong?</p>
| thanasissdr | 124,031 | <p>Another way is the following one:</p>
<p>$\bullet$ Let's suppose that you want to have <strong>exactly 1</strong> non - working bulb.</p>
<p>Then, you need 4 working bulbs out of the 98 and 1 non-working bulb out of 2. </p>
<p>Then, the ways you can choose the 4 working bulbs are $\binom{98}{4}$ and the ways you can choose the non - working bulb are $\binom{2}{1}$.</p>
<p>So, the ways you can choose <strong>exactly 1</strong> non - working bulb and 4 working bulbs are: $\binom{98}{4} \cdot \binom 2 1$.</p>
<p>$\bullet$ Let's suppose that you want to have <strong>exactly 2</strong> non - working bulbs.</p>
<p>Then, the ways you can choose the 3 working bulbs are $\binom{98}{3}$ and the ways you can choose the 2 non - working bulbs are $\binom{2}{2}$.</p>
<p>So, the ways you can choose <strong>exactly 2</strong> non - working bulbs and 3 working bulbs are $\binom{98}{3} \cdot \binom 2 2$.</p>
<p>So, the ways you can choose <strong>at least one</strong> non - working bulb are:</p>
<p>$\binom{98}{4} \cdot \binom 2 1+\binom{98}{3} \cdot \binom 2 2$</p>
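<p>This count is easy to verify with Python's <code>math.comb</code> (a quick sketch; the complement count $\binom{100}{5}-\binom{98}{5}$ gives an independent cross-check):</p>

```python
from math import comb

# Exactly 1 broken bulb (choose 1 of 2, plus 4 of the 98 working),
# or exactly 2 broken bulbs (choose 2 of 2, plus 3 of the 98 working).
at_least_one = comb(98, 4) * comb(2, 1) + comb(98, 3) * comb(2, 2)

# Cross-check via the complement: all 5-bulb selections minus the all-working ones.
print(at_least_one == comb(100, 5) - comb(98, 5))   # True
```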
|
1,386,343 | <p>Let $P$ be an idempotent $n \times n$ matrix ($P^2 = P$). What is $(I + P)^{-1}$? I've been thinking about this problem for a while, but can't find an answer. I tried a few examples, but I'm not sure what the general pattern is.</p>
| sebigu | 32,185 | <p>If $0 \neq 2$ in the field and $P^2=P$, then the minimal Polynomial of $P$ divides $f := x^2-x$, which means it is $f$, $x$, or $x-1$. If it is $x$, $P=0$, and if it is $x-1$, $P=1$. Those cases are clear.</p>
<p>So suppose it is $x^2-x$. Then $I+P$ has minimal polynomial $(x-1)(x-2)=x^2-3x+2$. This means that $I$ is $((I+P)^2-3(I+P))/(-2)$ and so
$(I+P)^{-1}$ is $(I+P-3I)/(-2)=(P-2I)/(-2)$</p>
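<p>A quick numerical sanity check of the resulting formula $(I+P)^{-1}=(P-2I)/(-2)$, using one concrete idempotent matrix (this check is my addition):</p>

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# A non-trivial idempotent 2x2 matrix: P^2 = P.
P = [[F(1), F(1)], [F(0), F(0)]]
assert matmul(P, P) == P

I = [[F(1), F(0)], [F(0), F(1)]]
IP = [[I[i][j] + P[i][j] for j in range(2)] for i in range(2)]    # I + P
cand = [[(P[i][j] - 2 * I[i][j]) / -2 for j in range(2)]          # (P - 2I)/(-2)
        for i in range(2)]

print(matmul(IP, cand) == I)   # True: (I+P) * (P-2I)/(-2) = I
```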
|
267,236 | <blockquote>
<p><strong>Possible Duplicate:</strong><br />
<a href="https://math.stackexchange.com/questions/30156/why-is-this-entangled-circle-not-a-retract-of-the-solid-torus">Why is this entangled circle not a retract of the solid torus?</a></p>
</blockquote>
<p>I am stuck with exercise 16 (c), pag.39 of Hatcher's <em>Algebraic Topology</em>: prove that there is no retraction from <span class="math-container">$S^1\times D^2$</span> onto the set <span class="math-container">$A$</span>, which is described by an image in the book, and which you can see here below.</p>
<p><img src="https://i.stack.imgur.com/MhCFv.jpg" alt="enter image description here" /></p>
| Chris Gerig | 22,295 | <p><em>Hint:</em> View the subspace "$A$" as a path in the space $A$, and then as a path in the space $S^1\times D^2$. Then see what that means about the desired retraction and inclusion maps.</p>
|
2,349,124 | <p>I keep on hitting a road block in trying to solve this, especially when trying to prove it going from the right hand side to the left hand side. </p>
| Patrick Stevens | 259,262 | <p>Suppose $X=\emptyset$; then the RHS is $\emptyset \cup Y$.</p>
<p>Suppose $X \not = \emptyset$; then say $x \in X$. Then two mutually exclusive cases:</p>
<ul>
<li>$x \in Y^c$. Then we have $x \in (X \cap Y^c) \cup (X^c \cap Y)$, but $x \in Y^c$ also implies $x \not \in Y$ so $Y$ can't be equal to $(X \cap Y^c) \cup (X^c \cap Y)$.</li>
<li>$x \in Y$. Then $x \not \in Y^c$ so $x \not \in X \cap Y^c$; and $x \not \in X^c$ so $x \not \in X^c \cap Y$; so $x \not \in (X \cap Y^c) \cup (X^c \cap Y)$. But $x \in Y$, so $Y \not = (X \cap Y^c) \cup (X^c \cap Y)$.</li>
</ul>
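<p>Judging from the cases above, the statement being proved appears to be that $Y = (X \cap Y^c) \cup (X^c \cap Y)$ holds exactly when $X = \emptyset$. A brute-force check over all subsets of a small universe (an illustrative sketch, not a proof) agrees:</p>

```python
from itertools import chain, combinations

U = {1, 2, 3}

def subsets(s):
    """All subsets of s, as a list of sets."""
    s = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

ok = True
for X in subsets(U):
    Xc = U - X
    for Y in subsets(U):
        Yc = U - Y
        rhs = (X & Yc) | (Xc & Y)      # the symmetric difference of X and Y
        # rhs equals Y exactly when X is empty
        ok = ok and ((rhs == Y) == (X == set()))
print(ok)   # True
```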
|
112,503 | <p>I am working the a subject guide on involving $L$-Systems and have the alphabet $A = \{a, b, c\}$. The initiator is the string $a$ and the rules of substitution $a \to ba$, $b \to ccb$, $c \to a$. </p>
<p>The study guide gives the first five generations as:</p>
<p>$$[a] \to [ba] \to [ccba] \to [acba] \to [aaba] \to [aaccba]$$</p>
<p>I can't for the life of me figure out how this works. No rules regarding the order of substitution are provided, and my lecturer says that it is possible to get to this.</p>
<p>Does anybody have any ideas?</p>
| William Vickery | 334,699 | <p>Lindenmayer systems make all substitutions at once; this is one of their defining features. This is mentioned in the book The Algorithmic Beauty of Plants on page 3 of chapter 1, "In Chomsky grammars productions are applied sequentially, whereas in L-systems they are applied in parallel and simultaneously replace all letters in a given word." This book is one of the go to references for L-Systems (other than Lindenmayer's original papers in computational biology). </p>
<p>There is no order of precedence in the substitution. The sequence should be</p>
<p>a -> ba -> ccbba -> aaccbccbba -> babaaaccbaaccbccbba -> ....</p>
<p>if the rules</p>
<p>a -> ba<br />
b -> ccb<br />
c -> a</p>
<p>are applied all at once at each step, as they should be for all L-Systems. </p>
<p>I'm gonna write the evolution again blocked out so that each substitution is in brackets.</p>
<p>a -> [a] -> [ba] -> ba -> [b][a] -> [ccb][ba] -> ccbba -> [c][c][b][b][a] </p>
<p>-> [a][a][ccb][ccb][ba] -> aaccbccbba -> etc.</p>
<p>are applied. Please inform your instructor or T.A. that they have made a serious error in their study guide. Such an error could dissuade students from learning the material - especially introverted students or underrepresented students such as women and minorities. The student might think that the error is in their understanding or even capacity to understand rather than with the educator's carelessness. Out of curiosity how did this topic come up/what type of class? </p>
<p>(there is a small chance that your professor may have been referring to second order or even contextual L-systems which are more complex, but if s/he were referring to that they would have listed more axioms).</p>
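<p>The parallel rewriting is only a few lines of code; here is an illustrative Python sketch using the rules from the question:</p>

```python
rules = {"a": "ba", "b": "ccb", "c": "a"}

def step(word):
    """Apply all substitutions in parallel (the defining L-system behaviour)."""
    return "".join(rules.get(ch, ch) for ch in word)

word = "a"
gens = [word]
for _ in range(4):
    word = step(word)
    gens.append(word)

print(gens)   # ['a', 'ba', 'ccbba', 'aaccbccbba', 'babaaaccbaaccbccbba']
```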
|
<p>What is the meaning of the phrase <strong>"Compactness and Connectedness are intrinsic properties of a topological space"</strong>?</p>
| Alex Mathers | 227,652 | <p>This means that these properties are preserved by homeomorphism (the natural notion of equivalence for topological spaces).</p>
<p>To be more precise, you could make the following definition:</p>
<blockquote>
<p><strong>Definition</strong>: Let $P$ be a property of a topological space $X$. We say that $P$ is a <em>topological property</em> if given any topological space $Y$ and a homeomorphism $f:X\to Y$, then $Y$ has property $P$ as well.</p>
</blockquote>
<p>Then, the phrase you've written can be rewritten as follows:</p>
<blockquote>
<p><strong>Proposition</strong>: Compactness and connectedness are both topological properties.</p>
</blockquote>
|
<p>The question is pretty self-explanatory, but I’ve encountered situations where the length (or magnitude, whichever you prefer) of some vector $\vec{a}$ is denoted as either $\| \vec{a}\|= \sqrt{a_1^2+a_2^2+\ldots+a_n^2}$ or $|\vec{a}|= \sqrt{a_1^2+a_2^2+\ldots+a_n^2}$, and I was wondering which notation is more widely accepted, per se. I’ve tried researching this, and different websites actually use different notation. </p>
<p>Any help is appreciated, thank you.</p>
| mlk | 155,406 | <p>I would say, it really depends on the context. Both are widely accepted and understood. However, the nice thing is, that the two notations allow you to distinguish between two notions of "length". So in one extreme, in an introductory course on vectors I would write
$$\|a\| = \sqrt{|a_1|^2+ |a_2|^2+...+|a_n|^2}$$
to explicitly distinguish the notion of absolute value in the real numbers from the length of a vector.</p>
<p>In the other extreme, if I am later interested in some basic functional analysis, I would instead use notation
$$\|f\| = \sqrt{\int_{\mathbb{R}^n} |f(x)|^2 dx}$$
for some function $f:\mathbb{R}^n \to \mathbb{R}^n$, where then
$$|f(x)| = \sqrt{f_1(x)^2+...+f_n(x)^2},$$
this time to explicitely distinguish between the $L^2$ norm of a function and the length of its value at $x$.</p>
<p><strong>edit</strong>: I did not put vector arrows here, because I personally do not use them, however this answer of course also works with $\vec{a}$ and $\vec{f}$.</p>
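<p>Whichever symbol one writes, for a finite-dimensional vector both notations denote the same computation; a trivial numeric sketch:</p>

```python
import math

a = [3.0, -4.0, 12.0]

# Euclidean length: square root of the sum of squared (absolute) components.
norm_a = math.sqrt(sum(abs(x) ** 2 for x in a))
print(norm_a)   # 13.0
```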
|
2,835,172 | <p>What is the probability of a number (picked at random) from set $A= \{1,2...,6\},$ being larger than a number (picked at random) from set $B= \{1,2...,10\}.$ What would the probability be if sets $A$ and $B$ were generalized: Set $A=\{n,n+1,...,n+x\},$ and set $B = \{m,m+1,...,m+x\}?$ This is well beyond my realm of statistics knowledge so the more clear each step is would be appreciated. </p>
<p>BONUS: I am trying to find a general equation. If it could apply to multi-sided dice, sets that skipped numbers (e.g. Set $A=\{1,2,3,5,8,13\},$ or set $B=\{2,4,6,8\}),$ more than two sets with different numbers (e.g. Set $A=\{1,2,3\}, B=\{2,3,4\}, C=\{1,3,5\}),$ etc. that would be ideal. I am not sure if this is even possible in general form so please answer the first part, even if you cannot answer this portion.</p>
<p><a href="https://math.stackexchange.com/questions/2465976/probability-with-custom-dice"><em>This link</em></a> has a similar question, but the answer is in matrix form. Is there a clean equation to solve this?</p>
| BruceET | 221,800 | <p><strong>Extended Comment:</strong> As indicated in the Comment by @HagenvonEitzen, one way to work the initial problem (on the probability D6 shows a larger value than D10) is to enumerate
cases. In particular, you might make a $10 \times 6$ array of possible pairs
of outcomes and highlight the pairs that satisfy your condition.
When I did that, it was pretty clear that there are 15 favorable outcomes out of 60, so the probability $P(\text{D6 > D10}) = 1/4.$ </p>
<pre><code>D10\D6  1   2   3   4   5   6
   1   11  12* 13* 14* 15* 16*
   2   21  22  23* 24* 25* 26*
   3   31  32  33  34* 35* 36*
   4   41  42  43  44  45* 46*
   5   51  52  53  54  55  56*
   6   61  62  63  64  65  66
  ...
</code></pre>
<p>A brief simulation can sometimes help to provide insurance against miscounting outcomes.
In a simulation of a million pairs of dice rolls (D6 and D10), results ought
to give two place accuracy, and that was the result. (The code is for R
statistical software: <code>event</code> is a logical vector with a million <code>TRUE</code>s and
<code>FALSE</code>s, and its <code>mean</code> is its proportion of <code>TRUE</code>s.)</p>
<pre><code>m = 10^6
event = replicate(m, sample(1:6, 1) > sample(1:10, 1))
mean(event)
[1] 0.250383 # approx 15/60 = 1/4
</code></pre>
<p>Using the array method might
suggest a way to generalize to cases in which the two dice have <em>consecutively</em> numbered faces, such as in the 'generalization' of your first paragraph. I will
leave it to you to figure that out. Or maybe you can see how to generalize
Hagen von Eitzen's computation.</p>
<p>My guess is that it will be considerably messier to solve the 'bonus' problem.</p>
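<p>The count can also be confirmed exactly, rather than by simulation; a tiny Python sketch:</p>

```python
from fractions import Fraction
from itertools import product

# Enumerate all 60 equally likely (d6, d10) pairs and count the favorable ones.
favorable = sum(1 for d6, d10 in product(range(1, 7), range(1, 11)) if d6 > d10)
total = 6 * 10

print(favorable, Fraction(favorable, total))   # 15 1/4
```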
|
2,835,172 | <p>What is the probability of a number (picked at random) from set $A= \{1,2...,6\},$ being larger than a number (picked at random) from set $B= \{1,2...,10\}.$ What would the probability be if sets $A$ and $B$ were generalized: Set $A=\{n,n+1,...,n+x\},$ and set $B = \{m,m+1,...,m+x\}?$ This is well beyond my realm of statistics knowledge so the more clear each step is would be appreciated. </p>
<p>BONUS: I am trying to find a general equation. If it could apply to multi-sided dice, sets that skipped numbers (e.g. Set $A=\{1,2,3,5,8,13\},$ or set $B=\{2,4,6,8\}),$ more than two sets with different numbers (e.g. Set $A=\{1,2,3\}, B=\{2,3,4\}, C=\{1,3,5\}),$ etc. that would be ideal. I am not sure if this is even possible in general form so please answer the first part, even if you cannot answer this portion.</p>
<p><a href="https://math.stackexchange.com/questions/2465976/probability-with-custom-dice"><em>This link</em></a> has a similar question, but the answer is in matrix form. Is there a clean equation to solve this?</p>
| Graham Kemp | 135,106 | <p>Use the Law of Total Probability: for example $d6, d10$ the results of independen six and ten sided dice.</p>
<p>$$\begin{align}\mathsf P(d6>d10) &= \mathsf P(d10>6)\mathsf P(d6>d10\mid d10>6)+\mathsf P(d10\leq 6)\mathsf P(d6>d10\mid d10\leq 6) \\ &=\tfrac 4{10}\cdot 1+\tfrac 6{10}\cdot\mathsf P(d6>d10\mid d10\leq 6)\\ &=\tfrac 2{5}+\tfrac 3 5\cdot\mathsf P(d6>d10\mid d10\leq 6)\end{align}$$</p>
<p>All that is left is to evaluate that last term</p>
<p>Hints: $1=\mathsf P(d6>d10\mid d10\leq 6)+\mathsf P(d6=d10\mid d10\leq 6)+\mathsf P(d6<d10\mid d10\leq 6)\\\mathsf P(d6>d10\mid d10\leq 6)=\mathsf P(d6<d10\mid d10\leq 6)$</p>
<p>Also, when given the condition that it is at most 6, the distribution of a 10 sided die is identical to the distribution of a six sided die .</p>
<p>Extend this principle to account for selections from any two independent uniform discrete distributions from non-identical supports.</p>
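<p>For arbitrary finite sets (the "bonus" part of the question), direct enumeration gives the probability exactly even when no tidy closed form exists; a small sketch (the function name is mine):</p>

```python
from fractions import Fraction
from itertools import product

def prob_greater(A, B):
    """P(a > b) for a, b drawn independently and uniformly from finite sets A, B."""
    return Fraction(sum(1 for a, b in product(A, B) if a > b), len(A) * len(B))

print(prob_greater(range(1, 7), range(1, 11)))          # 1/4
print(prob_greater([1, 2, 3, 5, 8, 13], [2, 4, 6, 8]))  # 5/12
```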
|
2,380,456 | <p>Given this graph:</p>
<p><a href="https://i.stack.imgur.com/5hDje.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5hDje.png" alt="enter image description here"></a></p>
<p>We can assume there is a linearity in the semi-curves shown above (<em>note that <code>air</code> line is not my concern here</em>). Obviously, there is a <strong>logarithmic</strong> relationship for every gas, so for any gas of them: </p>
<p><strong>log(y) = m*log(x) + b</strong> --> <strong>x = 10<sup>(log(y)-b)/m</sup></strong></p>
<p>That's how, as far as I understand it, we find <strong>x</strong>.</p>
<p>However, the thing is, the results I'm getting are not correct. After searching for a very long time, I <em>finally</em> found someone who posted the solution, but <em>without</em> any explanation; he used this equation:</p>
<p><strong>x = 10<sup>(ln(y)-b)/m</sup></strong></p>
<p>Even though he started finding the slope by using the common logarithm, but to find <strong>x</strong> he used the <strong>natural</strong> logarithm as shown above!</p>
<p>The values I'm getting with the absence of any gas is something like: <strong>5.123456 ppm</strong>.</p>
<p>However, and logically speaking, the expected value should be something like <strong>0.00123 ppm</strong>, which is what his little change (from common to natural logarithm) in the final step produces. </p>
<p>Any explanation will be very much appreciated.</p>
<hr>
<p>P.S:</p>
<p>Here is some physic facts about the graph:</p>
<ul>
<li>R<sub>s</sub> directly related to the gas concentration.</li>
<li>R<sub>0</sub> is constant for every gas (<em>the concentration in fresh air</em>).</li>
<li>The Gas Sensor internally simulates a <em>Voltage Divider</em>.</li>
</ul>
<hr>
<p>P.S2:</p>
<p>In other words:</p>
<p>Can we start finding the <strong>slope</strong> and <strong>y-intercept</strong> by using the <em>common logarithm</em> because we assumed initially that there is a <em>common logarithmic relationship</em> between <strong>x</strong> and <strong>y</strong>, then when we want to find <strong>x</strong>, we use the base <strong>10</strong> for <strong>x</strong> but the base <strong>e</strong> for <strong>y</strong>? What is the logic behind this? That's basically my question.</p>
<p><a href="http://sandboxelectronics.com/files/SEN-000004/MQ-2.pdf" rel="nofollow noreferrer">MQ2 Datasheet</a></p>
<p><a href="http://sandboxelectronics.com/?p=165" rel="nofollow noreferrer">Application Code</a></p>
| k.stm | 42,242 | <p>Not sure if there’s a neat direct way. But I learnt that the trick is to add another more conceptual equivalent statement.</p>
<blockquote>
<p>Let $K / F$ be a finite extension of fields and let $L$ be an algebraically closed field containing $K$. So we assume $F ⊆ K ⊆ L$. The following are equivalent:</p>
<ol>
<li>$K$ is a splitting field for some polynomial $f ∈ F[X]$.</li>
<li>Every field embedding $σ \colon K → L$ fixing $F$ restricts to $K → K$.</li>
<li>Every irreducible polynomial $p ∈ F[X]$ with some root $α ∈ K$ splits completely in $K$.</li>
</ol>
</blockquote>
<p><em>Proof</em>. (1) ⇒ (2): Let <span class="math-container">$f ∈ F[X]$</span> be a polynomial for which <span class="math-container">$K$</span> is a splitting field, say of degree <span class="math-container">$n$</span> and monic (without loss of generality). Then <span class="math-container">$f$</span> has <span class="math-container">$n$</span> roots <span class="math-container">$α_1, …, α_n ∈ K$</span> (possibly counted with multiplicities) and <span class="math-container">$f = (X - α_1)·…·(X - α_n)$</span> in <span class="math-container">$K$</span>. Then <span class="math-container">$K = F(α_1, …, α_n)$</span>, as the latter field is a subextension in which <span class="math-container">$f$</span> splits and <span class="math-container">$K$</span> is by definition the smallest such extension.</p>
<p>Let $σ \colon K → L$ be any field embedding fixing $F$. Then $σ$ map zeroes of $f$ to zeros of $f^σ = f$, that is: $f(\{α_1,…,α_n\}) = \{α_1,…,α_n\}$. As $σ$ fixes $F$ and $K = F(α_1,…,α_n)$, this implies $σ(K) ⊆ K$.</p>
<p>(2) ⇒ (3): Let <span class="math-container">$p ∈ F[X]$</span> be any irreducible polynomial with some root <span class="math-container">$α ∈ K$</span>. We assume it’s nonzero and monic. Then <span class="math-container">$p$</span> splits in <span class="math-container">$L$</span> because <span class="math-container">$L$</span> is algebraically closed, say <span class="math-container">$p = (X - α_1)·…·(X - α_n)$</span> for some <span class="math-container">$α_1, …, α_n ∈ L$</span>. Have a look at <span class="math-container">$E = F(α)$</span>. Then we have <span class="math-container">$n$</span> maps <span class="math-container">$σ_1, …, σ_n$</span>, all <span class="math-container">$E → L$</span>, with <span class="math-container">$σ_i(α) = α_i$</span> for <span class="math-container">$i = 1, …, n$</span>. (For this, we need <span class="math-container">$p$</span> to be irreducible!) We can extend those to maps <span class="math-container">$K → L$</span>, which then have to restrict to maps <span class="math-container">$K → K$</span> by assumption. As <span class="math-container">$α_1, …, α_n$</span> are in the image of <span class="math-container">$σ_1, …, σ_n$</span>, we therefore have <span class="math-container">$α_1, …, α_n ∈ K$</span> and so <span class="math-container">$p = (X - α_1)·…·(X - α_n)$</span> in <span class="math-container">$K[X]$</span>.</p>
<p>(3) ⇒ (1): As a finite extension, there are some <span class="math-container">$β_1, …, β_n ∈ K$</span> with <span class="math-container">$K = F(β_1, …, β_n)$</span> – for example, take an <span class="math-container">$F$</span>-basis of <span class="math-container">$K$</span>. Let <span class="math-container">$p_1, …, p_n ∈ F[X]$</span> be the minimal polynomials of <span class="math-container">$β_1, …, β_n$</span>. Being irreducible, all of them split in <span class="math-container">$K[X]$</span>, so <span class="math-container">$f = p_1·…·p_n ∈ F[X]$</span> does as well, and <span class="math-container">$K$</span> then is the splitting field of <span class="math-container">$f$</span>.</p>
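<p>As a concrete illustration of condition 3 failing (my addition, not part of the original argument): over <span class="math-container">$F = \mathbb Q$</span>, the real field <span class="math-container">$K = \mathbb Q(\sqrt[3]{2})$</span> is <em>not</em> a splitting field of any polynomial, because</p>

```latex
% X^3 - 2 is irreducible over Q (Eisenstein at the prime 2) and factors over C as
\[
  X^3 - 2
  \;=\; \bigl(X - \sqrt[3]{2}\bigr)
        \bigl(X - \sqrt[3]{2}\,\zeta\bigr)
        \bigl(X - \sqrt[3]{2}\,\zeta^{2}\bigr),
  \qquad \zeta = e^{2\pi i/3}.
\]
% Only the first root is real, so X^3 - 2 has a root in K but does not split
% in K; by the equivalence (1) <=> (3), K/Q is not a splitting field.
```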
|
2,380,456 | <p>Given this graph:</p>
<p><a href="https://i.stack.imgur.com/5hDje.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5hDje.png" alt="enter image description here"></a></p>
<p>We can assume there is a linearity in the semi-curves shown above (<em>note that <code>air</code> line is not my concern here</em>). Obviously, there is a <strong>logarithmic</strong> relationship for every gas, so for any gas of them: </p>
<p><strong>log(y) = m*log(x) + b</strong> --> <strong>x = 10<sup>(log(y)-b)/m</sup></strong></p>
<p>That's how, as far as I understand it, we find <strong>x</strong>.</p>
<p>However, the thing is, the results I'm getting are not correct. After searching for a very long time, I <em>finally</em> found someone who posted the solution, but <em>without</em> any explanation; he used this equation:</p>
<p><strong>x = 10<sup>(ln(y)-b)/m</sup></strong></p>
<p>Even though he started finding the slope by using the common logarithm, but to find <strong>x</strong> he used the <strong>natural</strong> logarithm as shown above!</p>
<p>The values I'm getting with the absence of any gas is something like: <strong>5.123456 ppm</strong>.</p>
<p>However, and logically speaking, the expected value should be something like <strong>0.00123 ppm</strong>, which is what his little change (from common to natural logarithm) in the final step produces. </p>
<p>Any explanation will be very much appreciated.</p>
<hr>
<p>P.S:</p>
<p>Here is some physic facts about the graph:</p>
<ul>
<li>R<sub>s</sub> directly related to the gas concentration.</li>
<li>R<sub>0</sub> is constant for every gas (<em>the concentration in fresh air</em>).</li>
<li>The Gas Sensor internally simulates a <em>Voltage Divider</em>.</li>
</ul>
<hr>
<p>P.S2:</p>
<p>In other words:</p>
<p>Can we start finding the <strong>slope</strong> and <strong>y-intercept</strong> by using the <em>common logarithm</em> because we assumed initially that there is a <em>common logarithmic relationship</em> between <strong>x</strong> and <strong>y</strong>, then when we want to find <strong>x</strong>, we use the base <strong>10</strong> for <strong>x</strong> but the base <strong>e</strong> for <strong>y</strong>? What is the logic behind this? That's basically my question.</p>
<p><a href="http://sandboxelectronics.com/files/SEN-000004/MQ-2.pdf" rel="nofollow noreferrer">MQ2 Datasheet</a></p>
<p><a href="http://sandboxelectronics.com/?p=165" rel="nofollow noreferrer">Application Code</a></p>
| Bach | 497,335 | <p>I have found a direct way proving this:</p>
<p>Suppose <span class="math-container">$K$</span> is the splitting field of <span class="math-container">$g(x)\in F[x]$</span> and <span class="math-container">$p(x)\in F[x]$</span> is irreducible over <span class="math-container">$F$</span>. Moreover, <span class="math-container">$p(x)$</span> has a root <span class="math-container">$\alpha$</span> in <span class="math-container">$K$</span>, we want to show that <span class="math-container">$p(x)$</span> splits in <span class="math-container">$K[x]$</span>.</p>
<p>If <span class="math-container">$p(x)$</span> is linear, then we are done.</p>
<p>Otherwise, suppose <span class="math-container">$p(x)=(x-\alpha)(x-\beta)\tilde p(x)$</span> over the splitting field <span class="math-container">$\mathcal F$</span> of <span class="math-container">$p(x)$</span>, where <span class="math-container">$\alpha\in K$</span>, <span class="math-container">$\tilde p(x)\in\mathcal F[x]$</span> and <span class="math-container">$\beta\in\mathcal F\setminus K$</span> is taken to be any root of <span class="math-container">$p(x)$</span> that is not in <span class="math-container">$K$</span> and we want to get a contradiction.</p>
<p>Note that there is a natural isomorphism between <span class="math-container">$F(\alpha)$</span> and <span class="math-container">$F(\beta)$</span>, since <span class="math-container">$p(x)$</span> is irreducible. Therefore, we can extend this isomorphism naturally to an isomorphism between the splitting field of <span class="math-container">$g(x)$</span> over <span class="math-container">$F(\alpha)$</span> and the splitting field of <span class="math-container">$g(x)$</span> over <span class="math-container">$F(\beta)$</span>. Since <span class="math-container">$\alpha\in K$</span>, we conclude that the splitting field of <span class="math-container">$g(x)$</span> over <span class="math-container">$F(\alpha)$</span> is <span class="math-container">$K$</span>. If we let <span class="math-container">$K'$</span> denote the splitting field of <span class="math-container">$g(x)$</span> over <span class="math-container">$F(\beta)$</span>, then we have <span class="math-container">$K\cong K'$</span> and of course <span class="math-container">$[K':F]=[K:F]$</span> for their extension degrees.</p>
<p>Note that <span class="math-container">$K'$</span> can be viewed as adjoining <span class="math-container">$\beta$</span> to <span class="math-container">$K$</span> and <span class="math-container">$\beta\in\mathcal F\setminus K$</span> implies that <span class="math-container">$$[K': F]=[K': K][K: F]>[K:F]$$</span> which is a contradiction.</p>
<p>Thus, no root of <span class="math-container">$p(x)$</span> can be taken from <span class="math-container">$\mathcal F\setminus K$</span> which implies <span class="math-container">$p(x)$</span> splits completely over <span class="math-container">$K[x]$</span>.</p>
<hr />
<p>The other direction is simple, please refer to k.stm's answer for "(3)⇒ (1)".</p>
|
2,127,494 | <p>Given two $3$D vectors $\mathbf{u}$ and $\mathbf{v}$ their cross-product $\mathbf{u} \times \mathbf{v}$ can be defined by the property that, for any vector $\mathbf{x}$ one has $\langle \mathbf{x} ; \mathbf{u} \times \mathbf{v} \rangle = {\rm det}(\mathbf{x}, \mathbf{u},\mathbf{v})$.
From this a number of properties of the cross product can be obtained quite easily. It is less obvious that, for instance $|\mathbf{u} \times \mathbf{v}|^2 = |\mathbf{u}|^2 |\mathbf{v}|^2 - \langle \mathbf{u} ; \mathbf{v} \rangle ^2$, from which the norm of the cross-product can be deduced.</p>
<p>Is it possible to obtain these properties nicely (i.e. without dealing with coordinates), but with elementary linear algebra only (i.e. without the exterior algebra stuff, only properties of determinants and matrix / vector multiplication).</p>
<p>Thanks in advance! </p>
| Jaroslaw Matlak | 389,592 | <p><strong>Hint</strong></p>
<p>$$|\mathbf{u}\times \mathbf{v}| = |\mathbf{u}|\cdot|\mathbf{v}|\cdot\sin \alpha$$
$$\langle \mathbf{u}, \mathbf{v}\rangle = |\mathbf{u}|\cdot|\mathbf{v}|\cdot\cos \alpha$$
where $\alpha \in [0,\pi]$ is the angle between the vectors $\mathbf{u}$ and $\mathbf{v}$ (so $\sin \alpha \geq 0$, while $\cos \alpha$ may be negative)</p>
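<p>The identity from the question, $|\mathbf{u} \times \mathbf{v}|^2 = |\mathbf{u}|^2 |\mathbf{v}|^2 - \langle \mathbf{u} ; \mathbf{v} \rangle ^2$, is exactly $\sin^2\alpha + \cos^2\alpha = 1$ after substituting these two formulas; it can also be checked in exact integer arithmetic (a sketch of my own, using the standard component formula for the cross product):</p>

```python
def cross(u, v):
    """Standard component formula for the 3D cross product."""
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, v = [1, 2, 3], [4, -5, 6]
lhs = dot(cross(u, v), cross(u, v))            # |u x v|^2
rhs = dot(u, u) * dot(v, v) - dot(u, v) ** 2   # |u|^2 |v|^2 - <u,v>^2
print(lhs == rhs)   # True (both equal 934 for this pair)
```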
|
2,127,494 | <p>Given two $3$D vectors $\mathbf{u}$ and $\mathbf{v}$ their cross-product $\mathbf{u} \times \mathbf{v}$ can be defined by the property that, for any vector $\mathbf{x}$ one has $\langle \mathbf{x} ; \mathbf{u} \times \mathbf{v} \rangle = {\rm det}(\mathbf{x}, \mathbf{u},\mathbf{v})$.
From this a number of properties of the cross product can be obtained quite easily. It is less obvious that, for instance $|\mathbf{u} \times \mathbf{v}|^2 = |\mathbf{u}|^2 |\mathbf{v}|^2 - \langle \mathbf{u} ; \mathbf{v} \rangle ^2$, from which the norm of the cross-product can be deduced.</p>
<p>Is it possible to obtain these properties nicely (i.e. without dealing with coordinates), but with elementary linear algebra only (i.e. without the exterior algebra stuff, only properties of determinants and matrix / vector multiplication).</p>
<p>Thanks in advance! </p>
| DC75 | 376,199 | <p>I went through some books and found something I am happier with. It comes from <em>Euclidean and Non-Euclidean Geometry: An Analytic Approach</em>, by Ryan (p.85, or something like this)</p>
<p>Essentially it goes as follows: $\mathbf{n} = \mathbf{u} \times \mathbf{v}$ is defined by the property that for every $\mathbf{x}$ one has $\langle \mathbf{x} ; \mathbf{n}\rangle = \det( \mathbf{x},\mathbf{u},\mathbf{v})$. </p>
<ul>
<li><p>Antisymmetry and linearity follow directly from the corresponding properties of the determinant.</p></li>
<li><p>It is also easy to get $\langle \mathbf{u} ; \mathbf{v} \times \mathbf{w} \rangle = \langle \mathbf{u}\times \mathbf{v} ; \mathbf{w}\rangle$;</p></li>
<li><p>Using linearity, and restricting in a clever way to the basis vectors, one shows that $\mathbf{u}\times (\mathbf{v}\times \mathbf{w}) = \langle \mathbf{u};\mathbf{w} \rangle \mathbf{v} - \langle \mathbf{u};\mathbf{v} \rangle \mathbf{w}$. This is the only part in which some "dirty" work is needed, but it is not too bad: using symmetry arguments and linearity, one really needs very few computations. </p></li>
<li><p>Using the last, one gets $\langle \mathbf{u}\times \mathbf{v}; \mathbf{w}\times \mathbf{z}\rangle = \langle\mathbf{u};\mathbf{w} \rangle \langle\mathbf{v};\mathbf{z} \rangle - \langle\mathbf{v};\mathbf{w} \rangle \langle \mathbf{u};\mathbf{z} \rangle $</p></li>
<li><p>From this one gets the Lagrange identity, which, by the way, allows one to get another proof of Cauchy-Schwarz</p></li>
</ul>
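<p>A randomized numerical check of the two product identities above, in plain Python (my own sketch, not part of the original answer):</p>

```python
import math
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

random.seed(0)  # reproducible test vectors

def rand_vec():
    return tuple(random.uniform(-1.0, 1.0) for _ in range(3))

for _ in range(100):
    u, v, w, z = rand_vec(), rand_vec(), rand_vec(), rand_vec()

    # u x (v x w) = <u, w> v - <u, v> w
    lhs = cross(u, cross(v, w))
    rhs = tuple(dot(u, w) * vi - dot(u, v) * wi for vi, wi in zip(v, w))
    assert all(math.isclose(a, b, abs_tol=1e-12) for a, b in zip(lhs, rhs))

    # <u x v, w x z> = <u, w><v, z> - <v, w><u, z>
    assert math.isclose(dot(cross(u, v), cross(w, z)),
                        dot(u, w) * dot(v, z) - dot(v, w) * dot(u, z),
                        abs_tol=1e-12)
```

<p>Taking $\mathbf{w}=\mathbf{u}$ and $\mathbf{z}=\mathbf{v}$ in the last assertion recovers the Lagrange identity.</p>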
|
4,118 | <p>I've recently dipped my toes into the world of number theory; and I've bought a book that to me is quite unconventional: R. P. Burn, <em>A Pathway into Number Theory</em>. I've yet to put the book through its paces, but it seems agreeable enough to me. The book is unique in that it poses a sequence of questions to you in the hope that you'll be able to answer them and by thus doing so, begin to discover the fundamentals of number theory.</p>
<p>This is a style of learning that I find agreeable as the knowledge I gain this way is assimilated and retained better. I like being able to discover for myself however most times I don't have the necessary direction (I am self-studying) but that's where this textbook comes in. I feel that this book in the process of nudging you in the right direction also helps you think more like a mathematician (from my very limited experience with it). </p>
<p>I enjoy books that give you a "pathway", although I guess this is the aim of all textbooks. Is it possible for anyone to recommend texts that take a similar aided discovery/inquiry based approach? </p>
| bzm3r | 2,013 | <p>I am going to copy and paste my answer from another question on this site, because I think one would be hard pressed to beat it in terms of the number of suggestions it covers, and the general quality with which it presents these suggestions:</p>
<blockquote>
<p>You might be interested in the expansive answers that were generated on math.stackexchange by the questions <a href="https://math.stackexchange.com/questions/828458/book-ref-request-starting-from-a-mathematically-amorphous-problem-and-comb/889732#889732">Book ref. request: “…starting from a mathematically amorphous problem and combining ideas from sources to produce new mathematics…”</a> and <a href="https://math.stackexchange.com/questions/828648/book-series-like-ams-student-mathematical-library">Book series like AMS' Student Mathematical Library?</a>.</p>
</blockquote>
<p>In order to make this answer complete in its own right, and in order to reduce the number of times one has to depress a mouse button, I will summarize the suggestions from those threads here, giving credit to those who originally provided them. </p>
<h1>“…starting from a mathematically amorphous problem and combining ideas from sources to produce new mathematics…”</h1>
<p><a href="https://math.stackexchange.com/questions/828458/book-ref-request-starting-from-a-mathematically-amorphous-problem-and-comb/889732#889732">Book ref. request: “…starting from a mathematically amorphous problem and combining ideas from sources to produce new mathematics…”</a> was asked due to inspiration from from Charles Radin's <a href="http://www.ams.org/bookstore-getitem/item=STML-1" rel="nofollow noreferrer">Miles of Tiles</a>, which has the following description:</p>
<blockquote>
<p><strong>Theme:</strong> "In this book, we try to display the value (and joy!) of starting from a mathematically amorphous problem and combining ideas from diverse sources to produce new and significant mathematics--mathematics unforeseen from the motivating problem ... "</p>
<p><strong>Style:</strong> The common thread throughout this book is <code><insert topic here></code>...the presentation uses many different areas of mathematics and physics to analyze features of <code><insert topic here></code>...[as] understanding <code><insert topic here></code> requires an unusual variety of specialties...this interdisciplinary approach also leads to new mathematics seemingly unrelated to <code><insert topic here></code>...</p>
<p><strong>Readership:</strong> Advanced undergraduates, graduate students, and research mathematicians.</p>
</blockquote>
<p>mweiss further suggested Soifer's <a href="http://www.ams.org/bookstore/mawrldseries" rel="nofollow noreferrer">How does One Cut a Triangle?</a>:</p>
<blockquote>
<p>You may enjoy Alexander Soifer's book <a href="http://www.ams.org/bookstore/mawrldseries" rel="nofollow noreferrer">How Does One Cut a Triangle?</a>. From my review of this on Math Reviews (MR#2548775):</p>
<blockquote>
<p>Indeed the entire work is a sequence of problems posed and solved, with each new solution yielding, through generalization and specialization, new questions. One of the most noteworthy features of the text is its “just-in-time” approach to introducing new ideas: tools from linear algebra (linear independence and eigenvalues), Diophantine and algebraic equations, calculus (the intermediate value theorem), combinatorics (the pigeonhole principle), and affine geometry are brought in with a minimum of fuss precisely when they are most useful.</p>
</blockquote>
</blockquote>
<p><a href="https://math.stackexchange.com/a/829889/115703">Conifold</a> further provided an absolute wealth of suggestions:</p>
<p><strong>1) Books with light prerequisites</strong></p>
<p><a href="http://www.ams.org/bookstore/mawrldseries" rel="nofollow noreferrer">Stories of Maxima and Minima</a> by Tikhomirov, a guided tour of extremal problems starting with Dido and the founding of Carthage all the way to convex programming with geometry, optics and mechanics visited along the way. While the author aims the book at "high school students" he means Russian ones perhaps. </p>
<p><a href="http://www.maa.org/book-series/classroom-resource-materials?page=1" rel="nofollow noreferrer">Indra's Pearls: The Vision of Felix Klein</a> has Mumford (that one) for one of the authors, and a <a href="http://en.wikipedia.org/wiki/Indra%27s_Pearls_%28book%29" rel="nofollow noreferrer">wikipedia article</a> devoted to it, saves me the effort.</p>
<p><a href="http://books.google.com/books/about/G%C3%B6del_Escher_Bach_Anniversary_Edition.html?id=aFcsnUEewLkC" rel="nofollow noreferrer">Gödel, Escher, Bach</a> by Hofstadter is a book with almost cult following, also has a <a href="http://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach" rel="nofollow noreferrer">wikipedia article</a>. Very roughly, looks into how recursion and self-reference lead to expressing meaning in formal systems, music and art. Goes in depth into Gödel's incompleteness and mathematical themes of Escher and Bach, while staying a literary marvel that won a Pulitzer prize. According to Martin Gardner, "a book of such depth, clarity, range, wit, beauty and originality that it is recognized at once as a major literary event".</p>
<p><a href="http://books.google.com/books?id=uVE_LiXbSpoC&source=gbs_navlinks_s" rel="nofollow noreferrer">Fibonacci Numbers</a> by Vorobiev studies the title subject by introducing modular arithmetic, recurrence relations and continued fractions, then discusses their role in approximating irrationals by fractions, Fibonacci enumeration system for integers and its application to winning a Chinese game, their appearence in geometry alongside the golden ratio, and in the theory of search.</p>
<p><a href="http://rads.stackoverflow.com/amzn/click/0821838598" rel="nofollow noreferrer">Mathematical Gift I-III</a> by Ueno, Shiga and Morita is a well designed intuitive transition into graduate notions of geometry and topology, with highlights including Poincare-Hopf and Gauss-Bonet theorems, theories of dimension and volume (with Banach-Tarsky paradox explained), Poncelet closure theorem in projective geometry, Whitney embedding theorem, and Dehn's solution to the third Hilbert problem.</p>
<p><a href="http://rads.stackoverflow.com/amzn/click/0817633162" rel="nofollow noreferrer">Felix Klein and Sophus Lie</a> by Yaglom is an inspired story of how a mathematical theory is born, the theory of symmetry. The content is much broader than the title, related ideas of Galois, Poncelet, Hamilton, Grassmann, Cayley, Peirce, Clifford are thoroughly explored as well. Most insightful historical account of 19th century geometry and algebra.</p>
<p><a href="http://www.cambridge.org/ar/academic/subjects/mathematics/recreational-mathematics/series/outlooks?layout=grid" rel="nofollow noreferrer">Knot Book</a> by Colin Adams is a gem that takes one from knotting and braiding rope to topological invariants, Seifert surfaces, 3-manifolds by surgery and applications in biology, chemistry and physics.</p>
<p><a href="https://cms.math.ca/Publications/Books/treatises." rel="nofollow noreferrer">Excursions into Mathematics</a> by Beck, Bleicher and Crowe is a collection of 6 mini-books under one cover. My favorite ones are on perfect numbers, the ancient topic that launched much of modern number theory (which still can't answer some basic questions about them), and on exotic geometries. You may like that one because it comes close to "laying down the axioms and playing with them" from your other question, albeit in geometry rather than algebra. From Euclid's postulates to Hilbert's axioms, what happens if some of them are modified, on to Latin squares, arithmetic of finite fields, lines and circles in finite projective spaces, and geometries they create. </p>
<p><a href="http://books.google.com/books?id=7US0cSy70hoC&dq=ash+gross+fearless+symmetry&source=gbs_navlinks_s" rel="nofollow noreferrer">Fearless symmetry</a> by Ash and Gross is not a text for liberal arts majors despite the title. It sets out to outline a proof of the Last Fermat Theorem to non-experts with all the jazz of quadratic reciprocity, modular forms, algebraic integers, Galois group of $\mathbb{Q}$ and its representations on elliptic curves, traces of Frobenius elements, etc.</p>
<p><a href="http://books.google.com/books?id=3RLGKcEjVIoC&dq=moore+zermelo%27s+axiom&source=gbs_navlinks_s" rel="nofollow noreferrer">Zermelo's Axiom of Choice</a> by Moore. AC with its controversies and history up to Gödel and Cohen, and equivalents and consequences in algebra, topology and analysis.</p>
<p><a href="http://rads.stackoverflow.com/amzn/click/0521666465" rel="nofollow noreferrer">Proofs and Confirmations</a> by Bressoud follows your wishes very closely. It is a thrilling story of proving a conjecture about the total number of alternating sign matrices that draws on insights about partitions, symmetric functions, hypergeometric series, lattice paths and statistical mechanics.</p>
<p><strong>2) Advanced Books</strong></p>
<p><a href="http://books.google.com/books?id=ECnHLtiCiNsC&dq=hardy+ramanujan&hl=en&sa=X&ei=zxyeU6XSK86uyASu3IHgBA&ved=0CB4Q6AEwAA" rel="nofollow noreferrer">Ramanujan</a> by Hardy is not a biography but a look at Ramanujan's enigmatic mathematical legacy by the man who knew him best. Hardy explains and ties together Ramanujan's 'magic' insights into primes, partitions, hypergeometric series, zeta function, elliptic and modular forms. Understanding the genesis of analytic number theory is a side bonus.</p>
<p><a href="https://cms.math.ca/Publications/Books/treatises." rel="nofollow noreferrer">Exploring the Number Jungle</a> by Burger. The theme is approximating irrationals by fractions with relatively small denominators, a.k.a. Diophantine approximation. But that doesn't stop Riemann surfaces, elliptic curves, Pythagorean triples, quadratic forms and $p$-adic numbers from showing up. It is unusually written: there are descriptions, questions, theorems, exercises, hints, but no proofs. On principle.</p>
<p><a href="http://www.librarything.com/series/New+Mathematical+Library" rel="nofollow noreferrer">Radical Approach to Real Analysis</a> also by Bressoud is a very unconventional exposition of the subject that starts with the crisis in mathematics posed by the discovery of Fourier series and develops ideas in a very versatile manner, highlighting perspectives lost in modern texts. </p>
<p><a href="http://books.google.com/books?id=3RLGKcEjVIoC&dq=moore+zermelo%27s+axiom&source=gbs_navlinks_s" rel="nofollow noreferrer">Mathematical Coloring Book</a> by Soifer, who also wrote How Does One Cut a Triangle. Coloring everything here: polygons, graphs, plane, space, integers, arithmetic progressions, but it all ties to the chromatic number of the plane. Which depends on the axiom of choice and existence of inaccessible cardinals (not kidding!).</p>
<p><a href="http://books.google.com/books?id=6kCIAwAAQBAJ&source=gbs_navlinks_s" rel="nofollow noreferrer">Glimpses of Soliton Theory</a> by Kasman is a rare book on the subject that doesn't just throw cumbersome computations and transformations at the reader. Intuition for non-linear PDE-s is built up through examples and history, and then supplemented with ideas about elliptic curves, isospectrality, wedge products, pseudo-differential operators and the Grassmann cone. </p>
<p><a href="http://rads.stackoverflow.com/amzn/click/0883850362" rel="nofollow noreferrer">Tour Through Mathematical Logic</a> by Wolf is a historically driven exposition of advanced modern logic including Gödel's incompleteness and constructible hierarchy, model theory, Cohen's forcing, Robinson's non-standard and Bishop's constructive analyses, large cardinals, determinacy and the Woodin program. </p>
<p><a href="http://www.librarything.com/series/Dolciani+Mathematical+Expositions" rel="nofollow noreferrer">Mathematical Methods of Classical Mechanics</a>, Arnold's classic, is about mechanics obviously. And about differential forms, Poisson structures, symplectic manifolds, geodesic flows, Legendre transforms and singularities, to name a few. According to a MathSciNet reviewer a unique element in the intersection of "the most influential books of the second half of this century, the most frequently quoted books, books that have the highest probability of surviving into the 21st century, books that are very useful in teaching, books characterized by a very strong personal style, books that provide a delightful reading experience."</p>
<p>Free electronic versions may be available <a href="https://www.librarything.com/series/London+Mathematical+Society+Student+Texts" rel="nofollow noreferrer">here</a> or <a href="http://bib.tiera.ru/" rel="nofollow noreferrer">here</a>. </p>
<p>Some more suggestions were provided by <a href="https://math.stackexchange.com/a/889732/115703">user89</a>:</p>
<p><a href="http://www.cambridge.org/ca/academic/subjects/mathematics/logic-categories-and-sets/counting-frameworks-mathematics-aid-design-rigid-structures" rel="nofollow noreferrer">Counting on Frameworks</a> ...fits the style of my original request. The author builds up a mathematically amorphous problem ("what is a rigid structure?"), progressively developing a theory (along with the participation of the reader through exercises) to describe "rigidity".</p>
<p><a href="https://www.math.dartmouth.edu/news-resources/electronic/kpbogart/" rel="nofollow noreferrer">Combinatorics through guided discovery, by Kenneth Bogart</a>. It seems to be quite fun! You learn combinatorics by going through exercises, rather than by being <em>told</em> what it is -- in other words, you make up the subject yourself, in a guided way. The book is completely free, and available online.</p>
<p><a href="http://www.cs.utoronto.ca/~hehner/FMSD/" rel="nofollow noreferrer">Formal Methods of Software Design</a>, available online for free..., is an excellent introduction to boolean logic, and general theory building -- it has been a fantastic base for learning other math. As a cool side effect (or was that the original goal?), you'll learn how to derive computer programs (yes, computer programs), from boolean specifications: very much like how you would derive any other proof! Fantastic.</p>
<p><a href="https://www.coursera.org/course/learning" rel="nofollow noreferrer">Learning how to Learn</a> ...(offered by the UC San Diego, on Coursera), which is actually, hands down, the most honest and effective course (good MOOCs are rare these days) on the subject that I have ever come across. Generally useful ideas to keep in mind while learning new mathematics!</p>
<h1>Book series like AMS' Student Mathematical Library</h1>
<p><a href="https://math.stackexchange.com/q/828648/115703">This</a> reference request was created because:</p>
<blockquote>
<p>I had the joy of discovering AMS' <a href="http://www.ams.org/bookstore/stmlseries" rel="nofollow noreferrer">Student Mathematical Library</a> book series today, and I have been pleasantly surprised by how enticing some of the titles seem: exciting and expositionary, a perfect stepping stone for learning!</p>
<p>I am familar with some Springer book series (Undergraduate/Graduate Texts in Mathematics), but I think those have a much more of a textbook nature in general.</p>
<p>What are some great book series that fit the style of Student Mathematical Library?</p>
<p><a href="https://math.stackexchange.com/questions/828458/book-ref-request-starting-from-a-mathematically-amorphous-problem-and-comb">See this question for inspiration as to what the answers should look like.</a></p>
</blockquote>
<p><a href="https://math.stackexchange.com/a/863799/115703">Conifold</a> once again delivers an excellent answer: </p>
<p>Generally what I think makes such series so good is that the format forces the authors to explain non-trivial and often non-elementary mathematics in accessible and inspiring way. The concentration of mathematically "clever" and "cool" both fascinates and challenges. They also often expose parts and perspectives of mathematics that are largely missing in standard texts and approaches. Of course, not every book in a series is equally good, so I will list some that I find particularly outstanding. But I haven't read all of them, so it doesn't mean that the rest are sub par, and my assessment of level only applies on average.</p>
<p>Not a series per se but similar in spirit and close to the upper undergraduate level of AMS Student Mathematical Library (and cheap) are some (most are just texts) of the <a href="http://store.doverpublications.com/by-subject-mathematics.html" rel="nofollow noreferrer">Dover Books on Mathematics</a>: Riemann's Zeta Function; Three Pearls of Number Theory; Geometry and Light; Counterexamples in Topology; Regular polytopes; Beauty of Geometry; Asymptotic methods in Analysis; Satan, Cantor and Infinity; Hyperbolic Functions.</p>
<p>Some of Cambridge University Press' <a href="https://www.librarything.com/series/London+Mathematical+Society+Student+Texts" rel="nofollow noreferrer">London Mathematical Society Student Texts</a> are more than typical texts, and they are at the right level too: Prime Number Theorem, Undergraduate Algebraic Geometry, Elliptic Functions, Young Tableaux. Also good but very short are their <a href="http://www.cambridge.org/ar/academic/subjects/mathematics/recreational-mathematics/series/outlooks?layout=grid" rel="nofollow noreferrer">Outlooks</a>, and Canadian Mathematical Society's <a href="https://cms.math.ca/Publications/Books/treatises." rel="nofollow noreferrer">Treatises in Mathematics</a> series.</p>
<p>MAA and Cambridge University Press support <a href="http://www.librarything.com/series/Dolciani+Mathematical+Expositions" rel="nofollow noreferrer">Dolciani Mathematical Expositions</a>, which is freshman/sophomore level: Charming Proofs; Diophantus and Diophantine equations; Logic as Algebra. MAA's <a href="http://www.maa.org/book-series/classroom-resource-materials?page=1" rel="nofollow noreferrer">Classroom Resource Materials</a> also has some entries at this level: Paradoxes and Sophisms in Calculus; Counterexamples in Calculus; Explorations in Complex Analysis; Which Numbers are Real?; Real Infinite Series.</p>
<p>The ones below, especially Mir's, are generally less advanced, high school/freshman level. Still, I grew up reading such booklets, and learned from them more than from most formal studying, they also guided my interests later and helped select topics which I wanted to pursue in depth. </p>
<p>AMS's <a href="http://www.ams.org/bookstore/mawrldseries" rel="nofollow noreferrer">Mathematical World</a>: A Mathematical Gift, I, II, III; Mathematical Ciphers: From Caesar to RSA; Kvant Selecta (collections of best articles from Russian math journal for advanced high school kids); Stories about Maxima and Minima.</p>
<p>MAA's <a href="http://www.librarything.com/series/New+Mathematical+Library" rel="nofollow noreferrer">New Mathematical Library</a>: Game Theory and Strategy; Geometry of Numbers; Numbers: rational and irrational; Ingenuity in Mathematics; Geometric transformations; Uses of Infinity.</p>
<p>Mir's <a href="http://mirtitles.org/category/little-mathematics-library/" rel="nofollow noreferrer">Little Mathematics Library</a>: Proof in Geometry; Solving Equations in Integers; Inequalities; Areas and Logarithms; Remarkable Curves.</p>
|
398,388 | <p>The classification of finite simple groups has been called one of the great intellectual achievements of humanity, but I don't even know one single application of it. Even worse, I know a lot of applications of simple <em>modules</em> over some ring/algebra <span class="math-container">$A$</span>, but I can barely know an application of them for finite simple groups. When studying modules, one has, for example,</p>
<ol>
<li>If <span class="math-container">$S$</span> and <span class="math-container">$T$</span> are distinct simple modules, then <span class="math-container">$\operatorname{Hom}(S,T) = 0$</span>, and one can enhance this using Jordan-Holder to prove that, if <span class="math-container">$M$</span> and <span class="math-container">$N$</span> are modules whose Jordan-Holder decomposition don't have common factors, then <span class="math-container">$\operatorname{Hom}(M,N)=0$</span>. We may use this, for example, to try to compute some cohomology, also;</li>
<li>The simple modules form a basis of the <span class="math-container">$K_0$</span> group, and therefore if we're interested in, for example, the multiplicative structure of <span class="math-container">$K_0$</span> it's enough to compute the (tensor) product of simple modules;</li>
<li>If the algebra <span class="math-container">$A$</span> is basic (i.e. every simple representation is <span class="math-container">$1$</span>-dimensional), which happens for path algebras, then simple modules have a group structure with respect to the tensor product (so they are an analogue for the Picard group).</li>
</ol>
<p>For finite simple groups, the only application I know is for the (non)-solubility of polynomials, and it's a quite particular example which uses only <span class="math-container">$S_n$</span> and <span class="math-container">$A_n$</span>. So I have two questions:</p>
<ol>
<li>What are some (concrete) applications of (finite simple groups + Jordan-Holder) for general finite groups?</li>
<li>What are some (concrete) applications of the classifications of finite simple groups?</li>
</ol>
| JoshuaZ | 127,690 | <p>The best known bound for <a href="https://en.wikipedia.org/wiki/Jordan%E2%80%93Schur_theorem" rel="nofollow noreferrer">the Jordan-Schur theorem</a> relies heavily on the classification, and that theorem shows up in a lot of different contexts.</p>
|
3,439,626 | <p>I need to proof the following statement:</p>
<p>Let <span class="math-container">$a, b, n \in \Bbb{Z}$</span> with <span class="math-container">$n \geq 2,\ \gcd(a,n)=1$</span>. Prove that if <span class="math-container">$s_{1},s_{2}$</span> are solutions to <span class="math-container">$ax\equiv b \pmod{n}$</span>, then <span class="math-container">$s_{1}\equiv s_{2} \pmod{n}$</span>.</p>
<p>I don't know where to start my proof. I do know that if you take any solution, then adding multiples of the modulus gives equivalent solutions, so there are n possible candidates. But I don't think my argument is correct. </p>
| Rushabh Mehta | 537,349 | <p>Note that if <span class="math-container">$s_1,s_2$</span> are solutions, <span class="math-container">$as_1\equiv as_2\equiv b\pmod n$</span>, so <span class="math-container">$as_1\equiv as_2\pmod n$</span>. </p>
<p>Hence, <span class="math-container">$n\mid a(s_2-s_1)$</span>. But, since <span class="math-container">$\gcd(a,n)=1$</span>, </p>
<p><span class="math-container">$$n\mid s_2-s_1$$</span> so <span class="math-container">$s_1\equiv s_2\pmod n$</span>.</p>
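<p>A brute-force illustration of this uniqueness in Python (my own addition, not part of the answer): whenever $\gcd(a,n)=1$, the congruence $ax\equiv b \pmod n$ has exactly one solution among the residues $0,\dots,n-1$.</p>

```python
from math import gcd

def solutions(a, b, n):
    """All x in {0, ..., n-1} with a*x congruent to b mod n."""
    return [x for x in range(n) if (a * x - b) % n == 0]

for n in range(2, 30):
    for a in range(1, n):
        if gcd(a, n) != 1:
            continue  # only the coprime case is covered by the claim
        for b in range(n):
            # gcd(a, n) = 1 forces exactly one residue class of solutions
            assert len(solutions(a, b, n)) == 1
```

<p>When $\gcd(a,n)>1$ the conclusion can fail: for example $2x\equiv 2 \pmod 4$ has the two solutions $x=1$ and $x=3$.</p>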
|
513,779 | <p>If $a,b\in\mathbb{N}$ are odd</p>
<p>then demonstrate:
$$ {\sqrt{a^2 + b^2}} \not\in \mathbb{Q}$$ </p>
<p>I start by assuming, for contradiction, that $$ {\sqrt{a^2 + b^2}} \in\mathbb{Q}.$$ Then I write $$ {\sqrt{a^2 + b^2}= m/n}$$ with the fraction in lowest terms. After that, from $$ {n\sqrt{a^2 + b^2}= m}$$ I squared both sides and got $$ n^2(a^2+ b^2)=m^2, $$ and since $$ (a^2 +b^2)$$ is even, $m^2$ is even, so $m$ is even and I can write $$m^2= 4k^2.$$ In the end I have the equation $$a^2 + b^2 = 4k^2/ n^2,$$ and I think this fraction is irreducible.</p>
| imranfat | 64,546 | <p>You may remember that Pythagorean triples of a right triangle are of the form a²-b², 2ab and a²+b², the latter one being the hypotenuse. Well, the second term says it all...</p>
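<p>A different and more elementary route (my own addition, not part of the answer above): odd squares are $\equiv 1 \pmod 4$, so for odd $a,b$ we get $a^2+b^2 \equiv 2 \pmod 4$, while perfect squares are $\equiv 0$ or $1 \pmod 4$; since $\sqrt{N}$ is rational only when the integer $N$ is a perfect square, irrationality follows. A quick empirical check in Python:</p>

```python
import math

def is_perfect_square(m):
    r = math.isqrt(m)
    return r * r == m

for a in range(1, 200, 2):        # odd values of a
    for b in range(1, 200, 2):    # odd values of b
        s = a * a + b * b
        assert s % 4 == 2                 # odd^2 is 1 (mod 4), so the sum is 2 (mod 4)
        assert not is_perfect_square(s)   # perfect squares are 0 or 1 (mod 4)
```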
|
4,190,492 | <p>I offer a proposition with both a proof and a counterexample. Thus, either the proof is incorrect, or the counterexample is not actually a counterexample, or both. Which is it?</p>
<p><strong>Proposition.</strong> Given a function <span class="math-container">$h(x)$</span> which is twice differentiable, strictly convex, and strictly decreasing, there does not exist a strictly increasing, twice differentiable function <span class="math-container">$g(y)$</span> such that <span class="math-container">$f(x) \equiv (g \circ h)(x)$</span> is concave.</p>
<p><strong>Proof.</strong> Suppose <span class="math-container">$g(y)$</span> exists. By the properties of concave functions and the chain rule,</p>
<p><span class="math-container">$$0 \geq f''(x) = (g \circ h)''(x) = [g'(h(x)) h'(x)]' = g'(h(x)) \underbrace{h''(x)}_{\gt 0} + {g''(h(x))} \underbrace{[h'(x)]^2}_{\gt 0}$$</span></p>
<p>For the statement to hold, we need <span class="math-container">$g'(h(x)) \leq 0 $</span> and <span class="math-container">$g''(h(x)) \leq 0$</span>. Thus <span class="math-container">$g'$</span> must be weakly decreasing (and concave), a contradiction.</p>
<p><strong>Counterexample.</strong> Consider <span class="math-container">$h(x) = \exp (-x)$</span> and <span class="math-container">$g(y) = \log y$</span>. <span class="math-container">$h$</span> is twice differentiable, convex, and strictly decreasing. <span class="math-container">$g$</span> is strictly increasing and twice differentiable. Finally, the function <span class="math-container">$f = (g \circ h)(x) = - x$</span> is linear, and therefore concave.</p>
<p>What's going on?</p>
| Theo Bendit | 248,286 | <p>In your proof, you write</p>
<blockquote>
<p>For the statement to hold, we need <span class="math-container">$g'(h(x)) \leq 0 $</span> and <span class="math-container">$g''(h(x)) \leq 0$</span>. Thus <span class="math-container">$g'$</span> must be weakly decreasing (and concave), a contradiction.</p>
</blockquote>
<p>But, it really should read</p>
<blockquote>
<p>For the statement to hold, we need <span class="math-container">$g'(h(x)) \leq 0 $</span> <strong>or</strong> <span class="math-container">$g''(h(x)) \leq 0$</span>. Thus <span class="math-container">$g'$</span> must either be weakly decreasing (a contradiction) or <span class="math-container">$g$</span> is concave. Oh well, I guess <span class="math-container">$g$</span> is concave, as per my counterexample.</p>
</blockquote>
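<p>A quick numerical confirmation of the counterexample (my own check, plain Python): with $h(x)=e^{-x}$ and $g(y)=\log y$ the composition collapses to $f(x)=-x$, so the numerical second derivative vanishes and $f$ is indeed (weakly) concave:</p>

```python
import math

h = lambda x: math.exp(-x)        # strictly convex, strictly decreasing
g = lambda y: math.log(y)         # strictly increasing, but concave (g'' < 0)
f = lambda x: g(h(x))             # the composition from the counterexample

eps = 1e-3
for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    # f reduces exactly to the linear map x -> -x
    assert math.isclose(f(x), -x, abs_tol=1e-9)
    # central second difference approximates f''(x), which should be ~0
    second_diff = (f(x + eps) - 2 * f(x) + f(x - eps)) / eps**2
    assert abs(second_diff) < 1e-6
```

<p>Here $g$ is concave, which is exactly the overlooked branch of the condition.</p>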
|
4,190,492 | <p>I offer a proposition with both a proof and a counterexample. Thus, either the proof is incorrect, or the counterexample is not actually a counterexample, or both. Which is it?</p>
<p><strong>Proposition.</strong> Given a function <span class="math-container">$h(x)$</span> which is twice differentiable, strictly convex, and strictly decreasing, there does not exist a strictly increasing, twice differentiable function <span class="math-container">$g(y)$</span> such that <span class="math-container">$f(x) \equiv (g \circ h)(x)$</span> is concave.</p>
<p><strong>Proof.</strong> Suppose <span class="math-container">$g(y)$</span> exists. By the properties of concave functions and the chain rule,</p>
<p><span class="math-container">$$0 \geq f''(x) = (g \circ h)''(x) = [g'(h(x)) h'(x)]' = g'(h(x)) \underbrace{h''(x)}_{\gt 0} + {g''(h(x))} \underbrace{[h'(x)]^2}_{\gt 0}$$</span></p>
<p>For the statement to hold, we need <span class="math-container">$g'(h(x)) \leq 0 $</span> and <span class="math-container">$g''(h(x)) \leq 0$</span>. Thus <span class="math-container">$g'$</span> must be weakly decreasing (and concave), a contradiction.</p>
<p><strong>Counterexample.</strong> Consider <span class="math-container">$h(x) = \exp (-x)$</span> and <span class="math-container">$g(y) = \log y$</span>. <span class="math-container">$h$</span> is twice differentiable, convex, and strictly decreasing. <span class="math-container">$g$</span> is strictly increasing and twice differentiable. Finally, the function <span class="math-container">$f = (g \circ h)(x) = - x$</span> is linear, and therefore concave.</p>
<p>What's going on?</p>
| Lee White | 468,437 | <p>For the second derivative of <span class="math-container">$f\left(x\right)$</span>, having both <span class="math-container">$g^\prime \left(h\left(x\right)\right) \leq 0 $</span> and <span class="math-container">$g^{\prime\prime} \left(h\left(x\right)\right) \leq 0$</span> is a sufficient condition for <span class="math-container">$f^{\prime\prime}\left(x\right) \leq 0$</span>, but not a necessary one.</p>
|
202,040 | <p>I'd like to get separate plots for the functions in a list, and I'm trying the following, which doesn't work. What is the correct way to do that?</p>
<pre><code>Table[ContourPlot3D[f, {x, -2, 2}, {y, -2, 2}, {z, -2, 2}], {f, {x + y + z + x y z == 0, x + y + z^2 + x y z^2 == 0, x + y^2 + z + x y^2 z == 0}}]
</code></pre>
| Rohit Namjoshi | 58,370 | <p>Using <code>Table</code></p>
<pre><code>Table[ContourPlot3D[
Evaluate@f, {x, -2, 2}, {y, -2, 2}, {z, -2, 2}], {f, {x + y + z + x y z == 0,
x + y + z^2 + x y z^2 == 0, x + y^2 + z + x y^2 z == 0}}]
</code></pre>
<p><a href="https://i.stack.imgur.com/Be7F0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Be7F0.png" alt="enter image description here"></a></p>
|
1,416,998 | <p>In the definition of martingales, one finds in Stroock and Varadhan (Multidimensional Diffusion Processes - page 20) the strange requirement that it be a right-continuous process.</p>
<p><a href="https://i.stack.imgur.com/0Nni7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0Nni7.png" alt="enter image description here"></a></p>
<p>However no such requirement is made in the wiki
<a href="https://en.wikipedia.org/wiki/Martingale_%28probability_theory%29" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Martingale_%28probability_theory%29</a></p>
<p><a href="https://i.stack.imgur.com/D1B7Y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D1B7Y.png" alt="enter image description here"></a></p>
<p>nor in Kallenberg (Foundations of modern probability - page 96) </p>
<p><a href="https://i.stack.imgur.com/KeK1H.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KeK1H.png" alt="enter image description here"></a></p>
<p>Isn't this strange on the part of Stroock and Varadhan? Has the notion of martingales evolved?</p>
| 5xum | 112,884 | <p>There is some slight confusion about what you really proved, but it's minor (even though, rigorously speaking, the proof is all wrong).</p>
<p>You can actually remove the first sentence "Then $ax+by=d$".</p>
<p>The argument would flow better like so:</p>
<ul>
<li>We know: by Bezout's theorem, if there exists a pair $x,y$ such that $ax+by=z$, then $d|z$</li>
<li>Suppose $(a,b)=d$ and $((a+b),(a-b))=e$</li>
<li>Then, we know that there exists a pair $u,v$ such that $e=u(a+b)+v(a-b)$</li>
<li>Then, by setting $x=u+v, y=u-v$, we see that $ax+by=e$</li>
<li>We have proven that there exists a pair $x,y$ such that $ax+by=e$, therefore, by point $1$, we know that $d|e$</li>
</ul>
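<p>A quick numerical sanity check of the chain above (an illustrative sketch only; the helper names <code>bezout</code> and <code>check</code> are made up): compute $u,v$ with $u(a+b)+v(a-b)=e$, set $x=u+v$, $y=u-v$, and confirm that $ax+by=e$, hence $d\mid e$.</p>

```python
from math import gcd

def bezout(a, b):
    """Iterative extended Euclid: return (g, s, t) with s*a + t*b == g.
    g may come out negative when b < 0; divisibility is unaffected."""
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t

def check(a, b):
    d = gcd(a, b)
    e, u, v = bezout(a + b, a - b)   # u*(a+b) + v*(a-b) == e
    x, y = u + v, u - v              # the substitution from the answer
    assert a * x + b * y == e        # e is an integer combination of a and b
    assert e % d == 0                # hence d | e
    return d, e
```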
|
69,272 | <p>By the way, does anyone know how to prove in an elementary way (i.e. expanding) that $\prod_1^n (1+a_i r)$ tends to $e^r=\sum \frac{r^k}{k!}$ as you let $\max|a_i|\to 0$ with $0\leq a_i \leq 1$ and $\sum a_i = 1$? An easy solution goes by writing the product with the exponential function so that you get the exponential of $\sum \log(1+a_i r) = \sum \int_0^1 \frac{a_i r}{(1+s a_i r)} ds$.</p>
<p>You can then integrate by parts (i.e. Taylor expand) to obtain $\sum a_i r - \sum \int_0^1 (1-s)\frac{(a_i r)^2}{(1+s a_i r)^2}\,ds$. Now, $\sum a_i r = r$ is the main term. After you take $\max|a_i|$ to be less than $0.5/|r|$, the error term is bounded in absolute value by $C \sum |a_i r|^2 \leq \max|a_i|\cdot \sum |a_i| |r|^2 \leq C |r|^2 \max |a_i|$.</p>
<p>I was hoping to find an elementary proof of this convergence by expanding the product $\prod_1^n (1+a_i r)$ and gathering terms with a common power of $r$. In particular, it would be nice to prove the convergence of this limit without the exponential function, since then the limit could be considered a definition of $e^r$. The case when all of the $a_i$ are equal is done in Rudin's "Principles of Mathematical Analysis".</p>
<p>The motivation for this problem comes from compound interest, which I described in a different thread here: <a href="https://mathoverflow.net/questions/40005/generalizing-a-problem-to-make-it-easier/69224#69224">Generalizing a problem to make it easier</a> .</p>
| Phil Isett | 7,193 | <p>Thanks, Anthony, for finding this solution. I was completely at a loss for how to handle all the indices. If you don't mind, I would like to write down one version of the argument that you've given in full detail.</p>
<p>Claim: Under the hypotheses of the question $1 = k! \sum_{i_1 < \ldots < i_k} a_{i_1} \cdots a_{i_k} + O(\max |a_i|) $ where the error is non-negative.</p>
<p>The claim is true without an error when $k = 1$, and follows from induction. If we write
$1 = (\sum a_i)^{k+1} = (\sum a_i) ( \sum a_i )^k$
The induction hypothesis allows us to write this product as
$(\sum a_i)\cdot(k! \sum_{i_1 < \ldots < i_k} a_{i_1} \cdots a_{i_k} + O(\max |a_i|) ) = (\sum a_i)\cdot (k! \sum_{i_1 < \ldots < i_k} a_{i_1} \cdots a_{i_k} ) + O(\max |a_i| ) $</p>
<p>If we now distribute out the product, we get the term we want $(k+1)! \sum_{i_1 < \ldots < i_k < i_{k+1} } a_{i_1} \cdots a_{i_{k+1} }$ from the products with no repeats and then an error coming from products with exactly one term repeated. Take whichever term is repeated and bound one copy of it in absolute value by $\max |a_i|$. Then the error is bounded by $\max |a_i| ( \sum |a_i| )^k = O(\max |a_i|)$.</p>
<p>Having this claim established and looking slightly more carefully at the dependence of the error on $k$ (the constant in the big O only grows like $C^k$), we also have prove the convergence that I was looking for (and we don't need non-negativity of the terms; just that $\sum |a_i|$ is bounded). In the non-negative case we can just observe the error is non-negative, so that the dominated convergence theorem applies (with respect to the finite measure $\frac{|r|^k}{k!}$), giving a small shortcut and a soft way to see the convergence without a rate.</p>
<p>All credit goes to Anthony Quas for the idea; I just thought the induction was a fairly clear way to get the details all down.</p>
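<p>The convergence itself is easy to see numerically (an illustrative Python sketch, not part of the argument): with equal weights the product is the classical $(1+r/n)^n$, and uneven weights with small $\max|a_i|$ give the same limit $e^r$.</p>

```python
import math
import random

def prod_term(a, r):
    """Compute prod_i (1 + a_i * r) for weights a with sum(a) == 1."""
    p = 1.0
    for ai in a:
        p *= 1.0 + ai * r
    return p

r = 0.7
n = 10**5

# Equal weights: (1 + r/n)^n, the case treated in Rudin.
equal = prod_term([1.0 / n] * n, r)

# Uneven weights with small max |a_i|, normalized to sum to 1.
rng = random.Random(0)
w = [rng.random() for _ in range(n)]
s = sum(w)
uneven = prod_term([x / s for x in w], r)

err_equal = abs(equal - math.exp(r))
err_uneven = abs(uneven - math.exp(r))
```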
|
138,800 | <h1>Background</h1>
<p>I have a block of code, reproduced at the bottom of this post, consisting of combined \$PreRead and \$PrePrint statements, that automatically formats outputs as 'input = output', and also allows easy inline combination of math and text (allowing the text to be placed either before, after, or both), as follows. [Note that one merely needs to put the text in quotes, and separate it from the math with a semicolon (;).] </p>
<pre><code>int=Integrate[x^2,x]
"Letting"; int=Integrate[x^2,x]
"We find that"; int/x; ", as expected."
</code></pre>
<blockquote>
<p>$\text{int}=\int x^2 \, dx=\frac{x^3}{3}$</p>
<p>$\color{blue}{\textit {Letting}}\>\text{int}=\int x^2 \, dx=\frac{x^3}{3}$</p>
<p>$\color{blue}{\textit {We find that}}\>\frac{int}{x} = \frac{x^2}{3}\> \color{blue}{\textit {, as expected.}}$</p>
</blockquote>
<p>[Attribution: The code is a synthesis and extension, done by MB1965 and myself (<a href="https://mathematica.stackexchange.com/questions/134653/combined-inline-printing-of-input-output-and-text-w-minimal-added-syntax/134657#134657">Combined inline printing of input, output, and text, w/ minimal added syntax</a>), of code blocks written by Simon Rochester (<a href="https://mathematica.stackexchange.com/questions/134406/would-like-input-and-output-printed-on-same-line-w-o-needing-extra-syntax">Would like input and output printed on same line, w/o needing extra syntax</a>) and Mr. Wizard (<a href="https://mathematica.stackexchange.com/questions/11961/notebook-formatting-easier-descriptions-for-equations-and-results/11987#11987">Notebook formatting - easier descriptions for equations and results?</a> ).]</p>
<h1>Issue</h1>
<p>As far as I can tell, the code works perfectly. However, I've subsequently realized that the code's input=output functionality isn't appropriate for graphical output, since it puts the graphics inline with the input (making them too small, and thus necessitating manual resizing):</p>
<pre><code>Plot[x^2, {x, -10, 10}]
</code></pre>
<p><a href="https://i.stack.imgur.com/mNTEt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mNTEt.png" alt="enter image description here"></a></p>
<h1>Objective</h1>
<p>To address the above issue, I'd like to modify the code to enable me to tell Mathematica to not implement its inline input=output functionality for specific inputs. I had in mind adding some distinguishing symbol, or combination of symbols, not used elsewhere in Mathematica, at the end of my input (e.g., a triple semicolon or colon): </p>
<pre><code>Plot[x^2, {x, -10, 10}];;;
</code></pre>
<p>In such cases, it would be nice to retain the code's ability to add text to output; here I'd like the text to be placed either above or below the graphics (depending on whether it's entered before or after the math in the input), rather than inline.</p>
<p>Secondarily, I'd also like to modify the code to be able to specify specific commands for which its inline input=output functionality is to be disabled (Plot, ListPlot, etc.).</p>
<p>I've made several attempts to achieve these modifications, but they were all unsuccessful.</p>
<p>Finally, note that I could globally deactivate the code prior to a given evaluation using:</p>
<pre><code>$PreRead = .
$PrePrint = .
</code></pre>
<p>...and then reactivate it.</p>
<p>But that's somewhat inconvenient and impractical, since I would then need to go back up to wherever I'd posted the code block to reactivate it. In addition, this would preclude me from being able to globally evaluate the entire notebook (because of the code's length, it's not practical to re-paste it following every statement for which I'd deactivated it).</p>
<hr>
<h1>Code</h1>
<pre><code>$note1 = Null;
$note2 = Null;
$note3 = Null;
$outputStyles =
<|
"Default" -> {
Blue,
15,
Italic,
FontFamily -> "Times"
},
"Before" -> {
Blue,
15,
Italic,
FontFamily -> "Times"
},
"After" -> {
Blue,
15,
Italic,
FontFamily -> "Times"
}
|>;
boxExpr[body_] :=
RowBox@{"Replace", "[", "\"thisIsJustATag\"", ";", body, ",",
"Null", "->", "\"\"", "]"};
styleNote[note_, style_] :=
Style[ToExpression@note,
Sequence @@ Lookup[$outputStyles, style, $outputStyles["Default"]]];
extractNotes[boxes_] :=
Replace[boxes, {RowBox[{note1_String?(StringMatchQ[#, "\"*\""] &),
";", body__, ";",
note2_String?(StringMatchQ[#, "\"*\""] &)}] :> ($note1 =
styleNote[note1, "Before"]; $note2 =
styleNote[note2, "After"];
boxExpr@body),
RowBox[{body__, ";",
note_String?(StringMatchQ[#, "\"*\""] &)}] :> ($note2 =
styleNote[note, "After"];
$note1 = Null;
boxExpr@body),
RowBox[{note_String?(StringMatchQ[#, "\"*\""] &), ";",
body__}] :> ($note1 = styleNote[note, "After"];
$note2 = Null;
boxExpr@body),
RowBox[{note_String?(StringMatchQ[#, "\"*\""] &),
";"}] :> ($note3 = styleNote[note, "Neither"];
$note2 = Null; $note1 = Null;
note),
e_ :> ($note1 = Null; $note2 = Null; boxExpr@e)}];
applyFormatting[out_] :=
With[{line = $Line},
HoldForm[In[line] = $placeHolder] /.
DownValues[In] /. {
$placeHolder -> out,
HoldPattern[
Replace[CompoundExpression["thisIsJustATag", expr_],
Null -> ""]] :> expr
}
/. {
HoldPattern[a_ = ""] :> a,
HoldPattern[a_ = a_] :> a,
HoldPattern[a_ = HoldForm[a_]] :> a,
HoldPattern[(c : (a_ = b_)) = b_] :> c,
HoldPattern[(a_ = b_) = c_] :> HoldForm[a = b = c]
}
];
addNotes[formatted_] :=
TraditionalForm@Switch[{$note1, $note2, $note3},
{Null, Null, Except@Null},
With[{r = $note3}, $note3 = Null; r],
{Except@Null, Except@Null, _},
With[{r1 = $note1, r2 = $note2}, $note1 = $note2 = Null;
Row[{r1, formatted, r2}, Spacer[5]]
],
{Except@Null, _, _},
With[{r = $note1}, $note1 = Null;
Row[{r, formatted}, Spacer[5]]
],
{_, Except@Null, _},
With[{r = $note2}, $note2 = Null;
Row[{formatted, r}, Spacer[5]]
],
_,
formatted
];
$PreRead = extractNotes;
$PrePrint = addNotes@*applyFormatting;
</code></pre>
<h1>Update</h1>
<p>For the convenience of the reader, here is the current code, incorporating Mr. Wizard's additions:</p>
<pre><code>$note1 = Null;
$note2 = Null;
$note3 = Null;
$outputStyles = <|
"Default" -> {Blue, 15, Italic, FontFamily -> "Times"},
"Before" -> {Blue, 15, Italic, FontFamily -> "Times"},
"After" -> {Blue, 15, Italic, FontFamily -> "Times"}|>;
boxExpr[body_] :=
RowBox@{"Replace", "[", "\"thisIsJustATag\"", ";", body, ",",
"Null", "->", "\"\"", "]"};
styleNote[note_, style_] :=
Style[ToExpression@note,
Sequence @@ Lookup[$outputStyles, style, $outputStyles["Default"]]];
extractNotes[boxes_] :=
Replace[boxes, {RowBox[{note1_String?(StringMatchQ[#, "\"*\""] &),
";", body__, ";",
note2_String?(StringMatchQ[#, "\"*\""] &)}] :> ($note1 =
styleNote[note1, "Before"]; $note2 =
styleNote[note2, "After"];
boxExpr@body),
RowBox[{body__, ";",
note_String?(StringMatchQ[#, "\"*\""] &)}] :> ($note2 =
styleNote[note, "After"];
$note1 = Null;
boxExpr@body),
RowBox[{note_String?(StringMatchQ[#, "\"*\""] &), ";",
body__}] :> ($note1 = styleNote[note, "After"];
$note2 = Null;
boxExpr@body),
RowBox[{note_String?(StringMatchQ[#, "\"*\""] &),
";"}] :> ($note3 = styleNote[note, "Neither"];
$note2 = Null; $note1 = Null;
note), e_ :> ($note1 = Null; $note2 = Null; boxExpr@e)}];
applyFormatting[out_] :=
With[{line = $Line},
HoldForm[In[line] = $placeHolder] /.
DownValues[In] /. {$placeHolder -> out,
HoldPattern[
Replace[CompoundExpression["thisIsJustATag", expr_],
Null -> ""]] :> expr} /. {HoldPattern[a_ = ""] :> a,
HoldPattern[a_ = a_] :> a, HoldPattern[a_ = HoldForm[a_]] :> a,
HoldPattern[(c : (a_ = b_)) = b_] :> c,
HoldPattern[(a_ = b_) = c_] :> HoldForm[a = b = c]}];
addNotes[formatted_] :=
TraditionalForm@
Switch[{$note1, $note2, $note3}, {Null, Null, Except@Null},
With[{r = $note3}, $note3 = Null; r], {Except@Null,
Except@Null, _},
With[{r1 = $note1, r2 = $note2}, $note1 = $note2 = Null;
Row[{r1, formatted, r2}, Spacer[5]]], {Except@Null, _, _},
With[{r = $note1}, $note1 = Null;
Row[{r, formatted}, Spacer[5]]], {_, Except@Null, _},
With[{r = $note2}, $note2 = Null;
Row[{formatted, r}, Spacer[5]]], _, formatted];
bypass = Replace[
RowBox[{b1___, RowBox[{b2___, ";;"}], ";"}] :> ($bypass = True;
RowBox[{b1, b2}])];
applyFormatting[out_] /; $bypass := Pane[out];
self : addNotes[formatted_] /; $bypass := ($bypass =.;
Unevaluated[self] /. (DownValues[addNotes] /. Row -> Column))
SetAttributes[graphicsQ, HoldFirst]
graphicsQ[_Graphics | _Graphics3D | _Graph | _Image | _Image3D] = True;
graphicsQ[Legended[_?graphicsQ, ___]] = True;
graphicsQ[{___, _?graphicsQ, ___}] = True;
applyFormatting[out_?graphicsQ] :=
Column[{# /. DownValues[In], Pane@out}] &[
HoldForm@InputForm@In@# &@$Line] /.
HoldPattern[Replace["thisIsJustATag"; expr_, Null -> ""]] :> expr
$PreRead = extractNotes@*bypass;
$PrePrint = addNotes@*applyFormatting;
</code></pre>
<p>Mr. Wizard's very nice code blocks succeed in accomplishing both my primary and secondary goals, so I have accepted his answer. But, for completeness, I should note that three corner issues remain. The first two involve the new code; the third, involving the use of the semicolon to suppress output (including that the system sometimes prints it in red), is a carry-over from the code I originally posted (interestingly, when I quit the kernel, the red semicolons revert to black):</p>
<p><a href="https://i.stack.imgur.com/8261e.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8261e.png" alt="enter image description here"></a></p>
<p>TEST CODE:</p>
<pre><code>ParametricPlot[{{2 Cos[t], 2 Sin[t]}, {2 Cos[t], Sin[t]}, {Cos[t],
2 Sin[t]}, {Cos[t], Sin[t]}}, {t, 0, 2 Pi},
PlotLegends -> "Expressions"];
ParametricPlot[{{2 Cos[t], 2 Sin[t]}, {2 Cos[t], Sin[t]}, {Cos[t],
2 Sin[t]}, {Cos[t], Sin[t]}}, {t, 0, 2 Pi},
PlotLegends -> "Expressions"]
\[Alpha] = Integrate[x^2, x] ;;;
Sin[x] // N ;;;
"Some text"; Integrate[x^2, x];
Graphics[{Thick, Green,Rectangle[{0, -1}, {2, 1}], Red, Disk[], Blue,
Circle[{2, 0}], Yellow, Polygon[{{2, 0}, {4, 1}, {4, -1}}],
Purple, Arrowheads[Large], Arrow[{{4, 3/2}, {0, 3/2}, {0, 0}}],
Black, Dashed, Line[{{-1, 0}, {4, 0}}]}];
Graphics3D[Cylinder[]];
a = Plot[x^2, {x, -10, 10}];
</code></pre>
<p><a href="https://i.stack.imgur.com/UKGhb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UKGhb.png" alt="enter image description here"></a></p>
| userrandrand | 86,543 | <p>I had a similar issue. I wanted to prevent the function in PrePrint from applying when I asked it not to. I did not use any systematic conditions where the function would be automatically suppressed (like if the expression to be evaluated is a plot or an integer).</p>
<p>My solution was to use an inert global symbol, which I will call "original" in the following.</p>
<p>Applying "original" to the expression prevents the PrePrint function from evaluating.</p>
<p>Desired scenario in code :</p>
<pre><code>expression
</code></pre>
<p>printed output: modified expression</p>
<pre><code>expression //original
</code></pre>
<p>printed output: expression</p>
<p>If we call preprintfun the original PrePrint function, then we change PrePrint to :</p>
<pre><code>$PrePrint=If[Head[#]===original,First@Level[#,1],preprintfun[#]] &;
</code></pre>
<p>I suppose one could add more conditions if one wishes to filter certain expressions by changing the If command above to Which and including the sequence Or[condition1, condition2,...], # among the arguments.</p>
|
2,091,766 | <p>Suppose $h:R \longrightarrow R$ is differentiable everywhere and $h'$ is continuous on $[0,1]$, $h(0) = -2$ and $h(1) = 1$. Show that:</p>
<p>$|h(x)|\leq \max\{|h'(t)| : t\in[0,1]\}$ for all $x\in[0,1]$</p>
<p>I attempted the problem the following way:
Since $h(x)$ is differentiable everywhere, it is also continuous everywhere. $h(0) = -2$ and $h(1) = 1$ imply that $h(x)$ must cross the $x$-axis at some point (at least once). Denote that point by $c$, so that $h(c) = 0$ for some $c\in[0,1]$.</p>
<p>$h'(x)$ continuous means that $\lim_{x\rightarrow a} h'(x) = h'(a)$, but then I am stuck and I don't see how what I have done so far can help me to obtain the desired inequality.</p>
<p>Thank you in advance!</p>
| Fred | 380,717 | <p>If $a$ and $b$ are rational, then all your sets $S$ from above contain no irrational number !</p>
<p>Your turn !</p>
|
2,018,239 | <p>I have to show, using induction, that $2^{4^n}+5$ is divisible by $21$. It is supposed to be a standard exercise, but no matter what I try, I get to a point where I have to use two more inductions.</p>
<p>For example, here is one of the things I tried:</p>
<p>Assuming that $21 |2^{4^k}+5$, we have to show that $21 |2^{4^{k+1}}+5$.</p>
<p>Now, $2^{4^{k+1}}+5=2^{4\cdot 4^k}+5=2^{4^k+3\cdot 4^k}+5=2^{4^k}2^{3\cdot 4^k}+5=2^{4^k}2^{3\cdot 4^k}+5+5\cdot 2^{3\cdot 4^k}-5\cdot 2^{3\cdot 4^k}=2^{3\cdot 4^k}(2^{4^k}+5)+5(1-2^{3\cdot 4^k})$.</p>
<p>At this point, the only way out (as I see it) is to prove (using another induction) that $21|5(1-2^{3\cdot 4^k})$. But when I do that, I get another term of this sort, and another induction.</p>
<p>I also tried proving separately that $3 |2^{4^k}+5$ and $7 |2^{4^k}+5$. The former is OK, but the latter is again a double induction.</p>
<p>Is there an easier way of doing this?</p>
<p>Thank you!</p>
<p><strong>EDIT</strong></p>
<p>By an "easier way" I still mean a way using induction, but only once (or at most twice). Maybe add and subtract something different than what I did?...</p>
<p>Just to put it all in a context: a daughter of a friend got this exercise in her very first HW assignment, after a lecture about induction which included only the most basic examples. I tried helping her, but I can't think of a solution suitable for this stage of the course. That's why I thought that there should be a trick I am missing... </p>
| Sarvesh Ravichandran Iyer | 316,409 | <p>You may have to use a few tricks here. The base case is clear.</p>
<p>Note that $2^{(4^n)} - 2^{(4^{n-1})} = 2^{(4^{n-1})}(2^{(3 \cdot 4^{n-1})} -1)$</p>
<p>So we have to prove that $2^{3 \cdot 4^{n-1}} - 1$ is a multiple of $21$. To do this, note that for $n \geq 2$, we can use modular arithmetic: since $3 \cdot 4^{n-1}$ is even, $3$ divides $(2^{3 \cdot 4^{n-1}} - 1)$ by Fermat's theorem. Similarly, since $3 \cdot 4^{n-1}$ is also a multiple of $6$, again by Fermat's theorem, $7$ divides $2^{3 \cdot 4^{n-1}} - 1$. Hence, $21$ divides it also.</p>
<p>Now, note that $2^{(4^n)} + 5 = 2^{(4^{n-1})} + 5 + (2^{(4^n)} - 2^{(4^{n-1})})$, hence assuming induction hypothesis, we have the result.</p>
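<p>For what it's worth, the statement is easy to spot-check with fast modular exponentiation (an illustrative sketch; note the claim holds from $n=1$ on, since $2^{4^0}+5=7$ is not divisible by $21$):</p>

```python
# Check 21 | 2**(4**n) + 5 using three-argument pow for modular power.
def divisible_by_21(n):
    return (pow(2, 4**n, 21) + 5) % 21 == 0

results = [divisible_by_21(n) for n in range(1, 9)]
```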
|
1,933,744 | <p>I simulated the following situation on my PC. Two persons A and B are initially at opposite ends of a sphere of radius r. Both being drunk, each can take exactly a step of 1 unit (you can define the unit; I kept it at 1 m) either along a latitude at their current location, or along a longitude. A and B are said to meet if the arc-length distance between A and B becomes less than or equal to 1 km.</p>
<p>Note: the direction of possible motion of each man is fixed w.r.t. the axis of the globe: either latitude or longitude. Assume such a coordinate system exists beforehand (just like the 2d analog on a plane, moving in x or y only and not absolutely randomly).</p>
<p>The simulation returned results which I could not comprehend fully: the average time to meet was about 270 years for a sphere of radius 100 km! Can someone shed some light on how I can proceed with proving this result? I want the expected time of meeting given the radius and step length, given that each move requires 1 sec. I tried considering a spherical cap of arc length √n after n steps, in analogy with the 2d model, but then I can't calculate the expected time. If possible please help or suggest some related articles.</p>
| Daniel Robert-Nicoud | 60,713 | <p>For now, I will give a reformulation of this problem in terms that should make it easier to attack with analytic methods (at least to get results on the asymptotic behavior) and greatly simplify simulations. In a second time, I will maybe also attempt to solve it, but I don't guarantee any kind of success.</p>
<p>My reformulation of this problem is based on the following two remarks:</p>
<ol>
<li>On spheres, it is always easier to work with angles than with distances. Therefore, we will always work on a sphere of radius $r=1$, define an angle $\epsilon$ corresponding to one unit, and say that $A$ and $B$ meet if the angle between them is less than or equal to some angle $\beta$.</li>
<li>If possible, it is better to have only one thing moving around. Therefore we will change our point of view and fix ourselves in the reference system of one of the two people, say person $B$. We will take this point to be the north pole (for the spherical coordinates we'll find ourselves in). In light of the first observation, the only variable of interest to us will be the latitude $A$ finds itself on.</li>
</ol>
<p>For the mathematical details:</p>
<p>First we have to fix our coordinate system. Let $\phi$ be the longitude (this variable will be useful for calculations, but irrelevant in the end), and $\theta$ be the latitude. We put the north pole $B$ at latitude $\theta = 0$, and the south pole is at $\theta = \pi$. We'll denote positions by coordinates $(\theta,\phi)$.</p>
<p>Notice that by spherical symmetry, $A$ and $B$ both taking a step is the same as $B$ staying put and $A$ taking two steps. Let's start by seeing what happens when $A$ takes one step. Let's say that $A$ starts in position $(\theta,\phi)$. We are only interested in the probability of $A$ landing at latitude $\theta'$. Notice that $P[A\text{ lands at }\theta'\le\theta_0]$ is given by the fraction of the circumference of the circle at angle $\epsilon$ from $A$ that lies below the parallel at $\theta_0$. To find this, I refer you to <a href="https://math.stackexchange.com/a/1941154/60713">this answer</a> by @Aretino, giving
$$P[A\text{ lands at }\theta'\le\theta_0] = 1 - \frac{1}{\pi}\arccos\left(\frac{\cos\theta' - \cos\epsilon\cos\theta}{\sin\epsilon\sin\theta}\right)$$
whenever the term in the brackets is in $[-1,1]$, and $0$ or $1$ else (depending on $\theta'$). The distribution function $f_\theta(\theta')$ giving the probability to land at $\theta'$ after one step starting at $\theta$ can then be found as usual by differentiating this probability:
\begin{align}
f_\theta(\theta') = & \frac{\partial}{\partial\theta'}\left(1 - \frac{1}{\pi}\arccos\left(\frac{\cos\theta' - \cos\epsilon\cos\theta}{\sin\epsilon\sin\theta}\right)\right)\\
= & \frac{\sin\theta'}{\sqrt{1 - \left(\frac{\cos\theta' - \cos\epsilon\cos\theta}{\sin\epsilon\sin\theta}\right)^2}}\\
= & \frac{\sin\epsilon\sin\theta\sin\theta'}{\sqrt{\cos\theta'(2\cos\epsilon\cos\theta - \cos\theta') + \sin^2\theta - \cos^2\epsilon}},
\end{align}
and $0$ outside the domain of definition of the original function. The probability of landing at $\theta'$ after <strong>two</strong> steps starting at $\theta$ is therefore given by
$$F_\theta(\theta') = \int_0^\pi f_\theta(\theta'')f_{\theta''}(\theta')d\theta''.$$
I have some doubts this can be done analytically, but maybe some approximation can give something useful.</p>
<p>Given this data, we can have a hope to be able to do something at least to find asymptotic bounds for when $\epsilon\to0$, and if not, it will at least greatly simplify simulations, as we are reduced to simulating a walk on a line (parametrized by $\theta$) with a non-uniform probability to move to nearby points.</p>
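<p>As a side note on the simulation reduction, sampling the new latitude needs only the spherical law of cosines with a uniform random azimuth (an illustrative sketch under the assumption that the step lands uniformly on the circle of angular radius $\epsilon$, which matches the quoted one-step law up to the sign convention inside the $\arccos$):</p>

```python
import math
import random

def step_latitude(theta, eps, rng):
    """New colatitude after one step of angular length eps from colatitude
    theta, via the spherical law of cosines with a uniform random azimuth."""
    phi = rng.uniform(0.0, 2.0 * math.pi)
    c = (math.cos(eps) * math.cos(theta)
         + math.sin(eps) * math.sin(theta) * math.cos(phi))
    return math.acos(max(-1.0, min(1.0, c)))  # clamp against rounding error

rng = random.Random(42)
theta, eps = 2.0, 0.05
samples = [step_latitude(theta, eps, rng) for _ in range(1000)]
max_dev = max(abs(t - theta) for t in samples)  # latitude moves at most eps
```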
|
2,464,756 | <p>When I was trying to prove a relation from solid state physics, I reached this mathematical problem. In the equation</p>
<p>$$\sum_{i=1}^Nm_ix_i=n$$</p>
<p>$m_i$ and $n$ are known integers, $N=3$, and $x_i$ are unknown integers. Also we know that the greatest common factor of $\left\{m_i\right\}$ is 1. I don't need to find the solution; I must just show/state that the answer exists.</p>
| B. Mehta | 418,148 | <p>If the greatest common divisor of $m_1, \dots, m_N$ divides $n$, then this has a solution, by <a href="https://en.wikipedia.org/wiki/B%C3%A9zout%27s_identity#For_three_or_more_integers" rel="noreferrer">Bézout's identity</a>. If not, there is no solution, since the gcd will divide the left for any choice of $x_i$, but will never divide the right.</p>
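<p>Constructively, a solution can be produced by chaining the extended Euclidean algorithm, using $\gcd(m_1,m_2,m_3)=\gcd(\gcd(m_1,m_2),m_3)$. A minimal Python sketch (the function names are illustrative):</p>

```python
def egcd(a, b):
    """Return (g, s, t) with s*a + t*b == g == gcd(a, b) for a, b >= 0."""
    if b == 0:
        return a, 1, 0
    g, s, t = egcd(b, a % b)
    return g, t, s - (a // b) * t

def bezout3(m1, m2, m3, n):
    """Find integers (x1, x2, x3) with m1*x1 + m2*x2 + m3*x3 == n,
    assuming gcd(m1, m2, m3) divides n; otherwise return None."""
    g12, s, t = egcd(m1, m2)   # s*m1 + t*m2 == g12
    g, u, v = egcd(g12, m3)    # u*g12 + v*m3 == g == gcd(m1, m2, m3)
    if n % g != 0:
        return None
    k = n // g
    return k * u * s, k * u * t, k * v

sol = bezout3(6, 10, 15, 7)    # gcd(6, 10, 15) == 1, so a solution exists
```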
|
3,794,158 | <p>I am trying to prove that: Let <span class="math-container">$(M,d)$</span> an metric space and <span class="math-container">$(x_n)$</span>,<span class="math-container">$(y_n)$</span> sequences in <span class="math-container">$M$</span> such that <span class="math-container">$d(x_n,y_n) \leq \frac{1}{n}$</span> <span class="math-container">$\forall n \in \mathbb{N}$</span>. If <span class="math-container">$(x_n)$</span> converge to <span class="math-container">$L$</span> then <span class="math-container">$(y_n)$</span> converges and <span class="math-container">$\lim{y_n} = L$</span>.</p>
<p>My attempt: If <span class="math-container">$\lim{y_n} = L_y$</span> different of <span class="math-container">$L$</span> then <span class="math-container">$\forall \epsilon > 0, y_n \in B_{\epsilon}(L_y)$</span>. But <span class="math-container">$d(x_n,y_n) \leq \frac{1}{n}$</span> say that <span class="math-container">$y_n \in B_{\epsilon}(x_n)$</span> <span class="math-container">$\forall n \in \mathbb{N}$</span>. If we take <span class="math-container">$\epsilon = \frac{1}{n}$</span> at the begin then <span class="math-container">$y_n \in B_{\frac{1}{n}}(L_y)$</span> [...]</p>
<p>I know that statement is very intuitive but from here i don't know how to conclude <span class="math-container">$L_y = L$</span>. Some one could help me?</p>
| Ben Grossmann | 81,360 | <p><strong>Hint:</strong> Our goal is the following: given an <span class="math-container">$\epsilon > 0$</span>, prove that there exists an <span class="math-container">$N$</span> such that <span class="math-container">$y_n \in B_{\epsilon}(L)$</span> whenever <span class="math-container">$n \geq N$</span>.</p>
<p>The fact that <span class="math-container">$x_n \to L$</span> means that for any new constant <span class="math-container">$\epsilon_1>0$</span>, there is an <span class="math-container">$N_1$</span> such that <span class="math-container">$x_n \in B_{\epsilon_1}(L)$</span>. The fact that <span class="math-container">$d(x_n, y_n) \leq 1/n$</span> means that for any new constant <span class="math-container">$\epsilon_2>0$</span>, there exists an <span class="math-container">$N_2$</span> such that <span class="math-container">$d(x_n,y_n) < \epsilon_2$</span> (more specifically, we can take any integer <span class="math-container">$N_2 > 1/\epsilon_2$</span>).</p>
<p>With that in mind, what <span class="math-container">$\epsilon_1,\epsilon_2$</span> can we select (in terms of <span class="math-container">$\epsilon$</span>) that will ensure <span class="math-container">$y_n \in B_\epsilon(L)$</span> for some <span class="math-container">$n$</span>? How big does <span class="math-container">$n$</span> need to be for this to happen?</p>
|
933,604 | <p>Hi can anyone solve these two questions using logs and indices</p>
<p>a.
$$4^{2x}-2^{x+1}=48$$</p>
<p>b.
$$6^{2x+1}-17\cdot 6^x+12=0$$</p>
<p>Thanks.</p>
| lab bhattacharjee | 33,337 | <p>I believe the last question to be $6^{2x+1}-17(6^x)+12=0$</p>
<p>$$\iff6(6^x)^2-17(6^x)+12=0$$</p>
<p>$$6^x=\frac{17\pm\sqrt{17^2-4\cdot6\cdot12}}{2\cdot6}=\frac{17\pm1}{12}=\frac32,\frac43$$</p>
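<p>So $x=\log_6(3/2)$ or $x=\log_6(4/3)$. A quick numerical check of both roots (illustrative only):</p>

```python
import math

def f(x):
    return 6 ** (2 * x + 1) - 17 * 6**x + 12

roots = [math.log(1.5, 6), math.log(4 / 3, 6)]  # from 6**x = 3/2 or 4/3
residuals = [abs(f(x)) for x in roots]          # both should be ~0
```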
|
2,327,273 | <p>If a tree has 5 vertices of degree 2, 3 vertices of degree 3, 4 vertices of degree 4, then how many leaves are there in that tree? </p>
<p>I know the tree has at least 12 vertices and so it must have at least 11 edges. Also the number of leaves must be odd but I could not proceed further. </p>
 | Prajwal Kansakar | 49,781 | <p>If $k$ is the number of leaves, then the total number of vertices in the tree is $12+k$ with $11+k$ edges, and the sum of the degrees is $\sum\deg(v)=(5\times 2)+(3\times 3)+(4\times 4)+(k\times 1)=35+k$. Now, by the handshaking lemma,
$$35+k=2(11+k)\Rightarrow k=13.$$</p>
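<p>The arithmetic can be spot-checked by brute force over $k$ (an illustrative sketch):</p>

```python
# Degree sequence: five 2's, three 3's, four 4's, and k leaves of degree 1.
def leaf_count():
    for k in range(200):
        vertices = 12 + k
        edges = vertices - 1                 # a tree on v vertices has v-1 edges
        degree_sum = 5 * 2 + 3 * 3 + 4 * 4 + k * 1
        if degree_sum == 2 * edges:          # handshaking lemma
            return k
    return None

k = leaf_count()
```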
|
2,979,226 | <p>Consider that you are given the following:</p>
<blockquote>
<p><span class="math-container">$$\left(x-\dfrac{2}{x^2}\right)^6$$</span></p>
</blockquote>
<p>I'm trying to evaluate the constant term. What I've done so far is given below</p>
<p><span class="math-container">$$\sum^{6}_{n = 0} \binom{6}{r}x^{6-r}\times (-2)^6 \times x^{-12}$$</span></p>
<p><span class="math-container">$$\sum^{6}_{n = 0} \binom{6}{r}x^{-6-r}\times (-2)^6 $$</span></p>
<p><span class="math-container">$$-6-r = 0 \implies r = -6$$</span></p>
<p>I got a negative number. Where did I go wrong?</p>
<p>Regards</p>
| lab bhattacharjee | 33,337 | <p>It should be <span class="math-container">$$\binom6rx^{6-r}(-2x^{-2})^r=?$$</span></p>
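<p>With the corrected general term $\binom{6}{r}(-2)^r x^{6-3r}$, the exponent vanishes at $r=2$, giving the constant term $\binom{6}{2}(-2)^2=60$. A short check (illustrative; it also compares the full expansion against the closed form at a test point):</p>

```python
from math import comb

# General term: C(6, r) * x**(6-r) * (-2 * x**-2)**r = C(6, r) * (-2)**r * x**(6-3r)
coeffs = {}
for r in range(7):
    e = 6 - 3 * r
    coeffs[e] = coeffs.get(e, 0) + comb(6, r) * (-2) ** r

constant = coeffs[0]              # exponent 6 - 3r == 0 exactly at r == 2

# Cross-check the expansion against the closed form at a test point.
x = 1.7
lhs = (x - 2 / x**2) ** 6
rhs = sum(c * x**e for e, c in coeffs.items())
```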
|
4,045,238 | <p>I was working on the problems in Mathematical Methods for Physics and Engineering by Riley,Hobson & Bence.
In Problem 2.34 (d) I'm supposed to find this integral: <span class="math-container">$$J=\int\frac{dx}{x(x^n+a^n)}.$$</span>
I used partial fractions and arrived at the form
<span class="math-container">$$J=\frac{1}{a^n}\left[\log x-\int \frac{dx}{x^n+a^n}\right]$$</span>
and now I'm stuck, I don't know how to integrate <span class="math-container">$1/(x^n+a^n)$</span>.</p>
| Ishraaq Parvez | 736,904 | <p><span class="math-container">\begin{gather*}
Let\ I=\int \frac{dx}{x\left( x^{n} +a^{n}\right)} =\int \frac{dx}{x^{n+1}} \cdotp \frac{1}{1+\frac{a^{n}}{x^{n}}}\\
Let\ 1+\frac{a^{n}}{x^{n}} =t\\
\frac{-n\cdotp a^{n}}{x^{n+1}} dx=dt\\
\frac{dx}{x^{n+1}} =\frac{-dt}{n\cdotp a^{n}}\\
I=\int \frac{-dt}{n\cdotp a^{n}} \cdotp \frac{1}{t} =\frac{-1}{n\cdotp a^{n}}\ln t=\frac{-1}{n\cdotp a^{n}}\ln\left( 1+\frac{a^{n}}{x^{n}}\right)\\
=\frac{-1}{n\cdotp a^{n}}\ln\left(\frac{x^{n} +a^{n}}{x^{n}}\right) =\frac{1}{a^{n}}\left(\ln x-\frac{1}{n}\ln\left( x^{n} +a^{n}\right)\right)\\
\end{gather*}</span>
Hope this helps!</p>
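<p>A numerical check of this antiderivative by central differences, for sample values $n=3$, $a=2$ (an illustrative sketch; the constant of integration is omitted):</p>

```python
import math

n, a = 3, 2.0

def F(x):
    """Antiderivative from the answer (integration constant omitted)."""
    return (1.0 / a**n) * (math.log(x) - math.log(x**n + a**n) / n)

def f(x):
    return 1.0 / (x * (x**n + a**n))

x0, h = 1.3, 1e-6
numeric = (F(x0 + h) - F(x0 - h)) / (2 * h)   # central difference F'(x0)
err = abs(numeric - f(x0))                    # should agree with the integrand
```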
|
4,045,238 | <p>I was working on the problems in Mathematical Methods for Physics and Engineering by Riley,Hobson & Bence.
In Problem 2.34 (d) I'm supposed to find this integral: <span class="math-container">$$J=\int\frac{dx}{x(x^n+a^n)}.$$</span>
I used partial fractions and arrived at the form
<span class="math-container">$$J=\frac{1}{a^n}\left[\log x-\int \frac{dx}{x^n+a^n}\right]$$</span>
and now I'm stuck, I don't know how to integrate <span class="math-container">$1/(x^n+a^n)$</span>.</p>
 | imranfat | 64,546 | <p>Alternatively, perform a u-sub. Let $x=1/t$. What will happen is that
(after a little algebraic simplification) you get a monomial numerator which is one degree lower than a binomial denominator consisting of a power of $t$ and a constant. Integration: a basic $\ln$ term. It's very easy. Try it out...</p>
|
4,573,566 | <p>So I have to find the bifurcation points of the system: <span class="math-container">$\dot{x}=(ax-x^3+x^5)(x-a+2)$</span>, where <span class="math-container">$a\in\mathbb{R}$</span> is a parameter.</p>
<p>Attempt:<br />
I know that a bifurcation point is a point where there is a change in the stability or the number of fixed points.
I have tried visualising the graph, and have come to the conclusion that there are: <br />
4 fixed points for <span class="math-container">$a\leq 0$</span>.<br />
6 fixed points for <span class="math-container">$0<a\leq 0.2$</span>. <br />
2 fixed points for <span class="math-container">$0.2<a<2$</span>. <br />
1 fixed point for <span class="math-container">$a=2$</span><br />
2 fixed points for <span class="math-container">$2<a$</span>.<br />
The change in stability happens at the same time as the number of fixed points changes.</p>
<p>From what I have learned, I'm pretty sure that one bifurcation point is <span class="math-container">$(a,x)=(2,0)$</span>, and I think that a transcritical bifurcation happens at this point.</p>
<p>I think there is another bifurcation point, when we go from 4 to 6 to 2 points. I just don't know exactly what that point is? <span class="math-container">$a=0$</span>? <span class="math-container">$a=0.2$</span>? It confuses me that the change seems to happen before and after the interval <span class="math-container">$0<a\leq 0.2$</span>. Normally the change should happen at a single point?</p>
<p>All help is appreciated!</p>
| MathWonk | 301,562 | <p>One extra tip that augments the prior answer. The easy way to locate the fixed points is to first use the equation <span class="math-container">$\dot x=0$</span> by setting each factor equal to zero, and then (the sneaky part) plot the solution set for each factor by expressing <span class="math-container">$a$</span> as a function of <span class="math-container">$x$</span>.
(This is much easier than solving for <span class="math-container">$x$</span> as a function of <span class="math-container">$a$</span>.)</p>
<p>In your example the linear factor gives the equation (i) <span class="math-container">$a=x+2$</span>.</p>
<p>The quintic factor gives either (ii) <span class="math-container">$a= x^2- x^4$</span> or (iii) <span class="math-container">$x=0$</span> with no restriction on <span class="math-container">$a$</span>. Plot all these solutions on a single picture and this will reveal how the number of solutions varies as you adjust the variable <span class="math-container">$a$</span>.</p>
<p>In the graph below (i) is blue, (iii) is the faint green vertical axis, and (ii) is the quartic.<a href="https://i.stack.imgur.com/HKrLI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HKrLI.png" alt="enter image description here" /></a></p>
<p>Horizontal lines correspond to setting <span class="math-container">$a$</span> constant. As you vary the height ( the value of <span class="math-container">$a$</span>), the number of solutions changes.</p>
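<p>As a numerical complement to the graphical method (a sketch, not part of the answer; the grid bounds and step are ad hoc), one can count sign changes of <span class="math-container">$\dot x$</span> on an offset grid; the half-step offset keeps simple roots such as <span class="math-container">$x=0$</span> away from the grid points:</p>

```python
def xdot(x, a):
    # right-hand side (a*x - x**3 + x**5) * (x - a + 2)
    return (a * x - x**3 + x**5) * (x - a + 2)

def count_fixed_points(a, lo=-4.0, hi=4.0, steps=80000):
    # count sign changes of xdot on a grid offset by half a step,
    # so simple roots such as x = 0 fall between grid points
    h = (hi - lo) / steps
    vals = [xdot(lo + (k + 0.5) * h, a) for k in range(steps)]
    return sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)

counts = {a: count_fixed_points(a) for a in (-1.0, 0.1, 1.0, 3.0)}
# 4 fixed points at a=-1, 6 at a=0.1, 2 at a=1, 2 at a=3
assert counts == {-1.0: 4, 0.1: 6, 1.0: 2, 3.0: 2}
```

<p>Note that this simple sign-change count only detects roots of odd multiplicity, which is why the double and triple roots at the bifurcation values themselves need the algebraic treatment above.</p>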
|
4,573,566 | <p>So I have to find the bifurcation points of the system: <span class="math-container">$\dot{x}=(ax-x^3+x^5)(x-a+2)$</span>, where <span class="math-container">$a\in\mathbb{R}$</span> is a parameter.</p>
<p>Attempt:<br />
I know that a bifurcation point is a point where there is a change in the stability or the number of fixed points.
I have tried visualising the graph, and have come to the conclusion that there are: <br />
4 fixed points for <span class="math-container">$a\leq 0$</span>.<br />
6 fixed points for <span class="math-container">$0<a\leq 0.2$</span>. <br />
2 fixed points for <span class="math-container">$0.2<a<2$</span>. <br />
1 fixed point for <span class="math-container">$a=2$</span><br />
2 fixed points for <span class="math-container">$2<a$</span>.<br />
The change in stability happens at the same time as the number of fixed points changes.</p>
<p>From what I have learned, I'm pretty sure that one bifurcation point is <span class="math-container">$(a,x)=(2,0)$</span>, and I think that a transcritical bifurcation happens at this point.</p>
<p>I think there is another bifurcation point, when we go from 4 to 6 to 2 points. I just don't know exactly what that point is? <span class="math-container">$a=0$</span>? <span class="math-container">$a=0.2$</span>? It confuses me that the change seems to happen before and after the interval <span class="math-container">$0<a\leq 0.2$</span>. Normally the change should happen at a single point?</p>
<p>All help is appreciated!</p>
| boojum | 882,145 | <p>Another way in which the problem can be addressed without referring to graphs is to consider that the function in the differential equation <span class="math-container">$ \ \dot{x} \ = \ f(x) \ $</span> is a sixth-degree polynomial for which the partial factorization is <span class="math-container">$ \ f(x) \ = \ x·( \ x - [a-2] \ )·(a - x^2 + x^4) \ \ . \ $</span> Since the coefficients are real, there can be up to six real zeroes, two of which are <span class="math-container">$ \ x \ = \ 0 \ $</span> and <span class="math-container">$ \ x \ = \ a - 2 \ \ . \ $</span> The number of remaining real zeroes depends upon the value of <span class="math-container">$ \ a \ $</span> in the biquadratic equation <span class="math-container">$ \ x^4 - x^2 + a \ = \ 0 \ \ ; \ $</span> we find that
<span class="math-container">$$ x^2 \ \ = \ \ \frac{1 \ \pm \ \sqrt{1 \ - \ 4a}}{2} \ \ , $$</span>
as also given by <strong>Gregory</strong>.</p>
<p>Since the equilibria ("fixed points") of this system are represented by real zeroes of <span class="math-container">$ \ f(x) \ \ , \ $</span> we need to examine the discriminant of the biquadratic equation. For <span class="math-container">$ \ 1 - 4a \ < \ 0 \ \Rightarrow \ a \ > \ \frac14 \ \ , \ \ x^2 \ $</span> has no real values and for <span class="math-container">$ \ a \ = \ \frac14 \ \ , \ $</span> the zeroes are <span class="math-container">$ \ x \ = \ \pm \frac{1}{\sqrt2} \ \ . \ $</span></p>
<p>For <span class="math-container">$ \ a \ < \ \frac14 \ \ , \ $</span> we must be a bit more thorough. For <span class="math-container">$ \ 0 \ < \ \sqrt{1 - 4a} \ < \ 1 \ \Rightarrow \ 0 \ < \ a \ < \ \frac14 \ \ , \ \ x^2 \ > \ 0 \ $</span> has two possible values, permitting four real zeroes of the biquadratic equation. At <span class="math-container">$ \ a \ = \ 0 \ \ , \ $</span> we have <span class="math-container">$ \ x \ = \ \pm 1 \ $</span> and <span class="math-container">$ \ x \ = \ 0 \ \ $</span> (a double zero). With <span class="math-container">$ \ \sqrt{1 - 4a} \ > \ 1 \ \Rightarrow \ a \ < \ 0 \ \ , \ \ x^2 \ $</span> is only positive in the case of <span class="math-container">$ \ x^2 \ = \ \frac{1 \ + \ \sqrt{1 \ - \ 4a}}{2} \ \ , $</span> giving us two real zeroes.</p>
<p>We have established that <span class="math-container">$ \ a \ = \ 0 \ $</span> and <span class="math-container">$ \ a \ = \ \frac14 \ \ $</span> are "special" values of this parameter, but we must also consider that the location of one of the zeroes of <span class="math-container">$ \ f(x) \ $</span> also depends upon <span class="math-container">$ \ a \ \ . \ $</span> So for <span class="math-container">$ \ a \ > \ \frac14 \ \ , \ $</span> <span class="math-container">$ \ f(x) \ $</span> has the two (single) zeroes <span class="math-container">$ \ x \ = \ 0 \ $</span> and <span class="math-container">$ \ x \ = \ a - 2 \ \ , \ $</span> <em>except</em> at <span class="math-container">$ \ a \ = \ 2 \ \ , \ $</span> when <span class="math-container">$ \ x \ = \ 0 \ $</span> becomes a <em>double</em> zero (the polynomial is <span class="math-container">$ \ x^2·(x^4 - x^2 + 2) \ \ ) \ . \ $</span> As we will see, these zeroes with multiplicity larger than one are important. For the other cases we've discussed, the <span class="math-container">$ \ a - 2 \ $</span> zero is negative and smaller than any other zero.</p>
<p>The number of equilibria (real zeroes) is thus</p>
<p><span class="math-container">$ \mathbf{ a \ > \ 2 \ \ : } \quad \quad 0 \ \ \ , \ \ \ a - 2 \ > \ 0 \ \ ; $</span></p>
<p><span class="math-container">$ \mathbf{ a \ = \ 2 \ \ : } \quad \quad 0 \ \ $</span> (double zero) ;</p>
<p><span class="math-container">$ \mathbf{ \frac14 \ < \ a \ < \ 2 \ \ : } \quad \quad a - 2 \ < \ 0 \ \ \ , \ \ \ 0 \ \ ; $</span></p>
<p><span class="math-container">$ \mathbf{ a \ = \ \frac14 \ \ : } \quad \quad -\frac74 \ \ \ , \ \ \ -\frac{1}{\sqrt2} \ \ \ , \ \ \ 0 \ \ \ , \ \ \ \frac{1}{\sqrt2} \ \ $</span> (with the second and fourth on this list being double zeroes ; the polynomial is <span class="math-container">$ \ \frac14·x·\left(x + \frac74 \right)·(2x^2 - 1)^2 \ \ ) \ \ $</span> ;</p>
<p><span class="math-container">$ \mathbf{ 0 \ < \ a \ < \ \frac14 \ \ : } \quad \quad a - 2 \ \ \ , \ \ \ -\sqrt{\frac{1 \ + \ \sqrt{1 \ - \ 4a}}{2}} \ \ \ , \ \ \ -\sqrt{\frac{1 \ - \ \sqrt{1 \ - \ 4a}}{2}} \ \ \ , \ \ \ 0 \ \ \ , \ \ \ \sqrt{\frac{1 \ - \ \sqrt{1 \ - \ 4a}}{2}} \ \ \ , \ \ \ \sqrt{\frac{1 \ + \ \sqrt{1 \ - \ 4a}}{2}} \ \ ; $</span></p>
<p><span class="math-container">$ \mathbf{ a \ = \ 0 \ \ : } \quad \quad -2 \ \ \ , \ \ \ -1 \ \ \ , \ \ \ 0 \ \ $</span> (<em>triple</em> zero) <span class="math-container">$ \ \ \ , \ \ \ 1 \ \ $</span> (the polynomial is <span class="math-container">$ \ x^3·(x+2)·(x+1)·(x-1) \ \ ) \ \ $</span> ;</p>
<p><span class="math-container">$ \mathbf{ a \ < \ 0 \ \ : } \quad \quad a - 2 \ \ \ , \ \ \ -\sqrt{\frac{1 \ + \ \sqrt{1 \ - \ 4a}}{2}} \ \ \ , \ \ \ 0 \ \ \ , \ \ \ \sqrt{\frac{1 \ + \ \sqrt{1 \ - \ 4a}}{2}} \ \ . $</span></p>
<p>What we may observe from this analysis is that the zeroes of multiplicity greater than one at critical values of <span class="math-container">$ \ a \ $</span> are the "bifurcation points" for the system. "Near" <span class="math-container">$ \ a \ = \ 2 \ \ , \ $</span> the double zero at <span class="math-container">$ \ x \ = \ 0 \ \ $</span> "splits" into <span class="math-container">$ \ 0 \ $</span> and <span class="math-container">$ \ a - 2 \ \ . \ $</span> The double zeroes that emerge at <span class="math-container">$ \ a \ = \ \frac14 \ \ $</span> each "split" into two zeroes as <span class="math-container">$ \ a \ $</span> is decreased. Finally, as <span class="math-container">$ \ a \ $</span> decreases from small positive values to zero, the zeroes <span class="math-container">$ \ \pm \sqrt{\frac{1 \ - \ \sqrt{1 \ - \ 4a}}{2}} \ $</span> "merge" with <span class="math-container">$ \ x \ = \ 0 \ \ . $</span></p>
<p>So, as you believed, there is a <strong>transcritical bifurcation</strong> at <span class="math-container">$ \ x \ = \ 0 \ $</span> for <span class="math-container">$ \ \mathbf{a \ = \ 2} \ \ . \ $</span> There is a (supercritical) <strong>pitchfork bifurcation</strong> at <span class="math-container">$ \ \mathbf{a \ = \ 0} \ \ $</span> in which the equilbrium at <span class="math-container">$ \ x \ = \ 0 \ $</span> changes character as it "splits from" or "merges with" the two nearest equilibria. Finally, there is a <strong>"saddle-node" bifurcation</strong> at <span class="math-container">$ \ \mathbf{a \ = \ \frac14} \ \ $</span> with four equilibria "splitting from" or "merging into" two and vanishing for <span class="math-container">$ \ a \ > \ \frac14 \ \ . $</span> (So this system has "something for everybody"...)</p>
<p><span class="math-container">$$ \ \ $$</span></p>
<p>Although it isn't specifically asked for in the problem, we can say a bit about the types of the equilibria. One way we can obtain this information is to differentiate the original differential equation to produce an expression for <span class="math-container">$ \ \ddot{x} \ \ , \ $</span> and determine its sign at an equilibrium point for varying values of <span class="math-container">$ \ a \ \ . \ $</span> As this is a bit cumbersome for the sixth-degree polynomial, we could instead look at the signs of <span class="math-container">$ \ \dot{x} \ $</span> on "either side" of equilibria. As this is still daunting without the use of graphs, we can look at the properties of the polynomial <span class="math-container">$ \ f(x) \ $</span> in the vicinity of bifurcation points.</p>
<p>At <span class="math-container">$ \ a \ = \ 2 \ \ , \ $</span> we have <span class="math-container">$ \ x^2·(x^4 - x^2 + 2) \ \ , \ $</span> which "opens upward" and has a global minimum at <span class="math-container">$ \ x \ = \ 0 \ \ . \ $</span>
For <span class="math-container">$ \ a \ = \ 2^{-} \ \ , \ $</span> the curve "deforms" slightly with one zero at <span class="math-container">$ \ x \ = \ a - 2 \ < \ 0 \ $</span> and the other zero at <span class="math-container">$ \ x \ = \ 0 \ \ . \ $</span> So <span class="math-container">$ \ \dot{x} \ $</span> goes from positive to negative at <span class="math-container">$ \ a - 2 \ $</span> and from negative back to positive at <span class="math-container">$ \ x \ = \ 0 \ \ , \ $</span> making <span class="math-container">$ \ x \ = \ a - 2 \ $</span> a <em>stable equilibrium</em> and <span class="math-container">$ \ 0 \ $</span> an <em>unstable</em> one. For <span class="math-container">$ \ a \ = \ 2^{+} \ \ , \ $</span> the direction of the changes in <span class="math-container">$ \ \dot{x} \ $</span> is reversed, so <span class="math-container">$ \ 0 \ $</span> becomes stable and <span class="math-container">$ \ a - 2 \ > \ 0 \ $</span> is now unstable. This confirms the transcritical character of the bifurcation at <span class="math-container">$ \ a \ = \ 2 \ \ . $</span> [At <span class="math-container">$ \ a \ = \ 2 \ $</span> then, <span class="math-container">$ \ x \ = \ 0 \ $</span> is a "saddle point" or "semi-stable" equilibrium.]</p>
<p>The next-easiest bifurcation to discuss is <span class="math-container">$ \ a \ = \ 0 \ \ . \ $</span> The associated polynomial <span class="math-container">$ \ x^3·(x+2)·(x+1)·(x-1) \ \ $</span> changes from positive to negative at <span class="math-container">$ \ x \ = \ -2 \ $</span> and the triple zero at <span class="math-container">$ \ x \ = \ 0 \ $</span> and from negative to positive at <span class="math-container">$ \ x \ = \ -1 \ $</span> and <span class="math-container">$ \ x \ = \ 1 \ \ . \ $</span> For <span class="math-container">$ \ a \ = \ 0^{+} \ \ , \ $</span> two additional zeroes appear symmetrically around <span class="math-container">$ \ x \ = \ 0 \ \ ; \ $</span> as all of the zeroes are now single, we must have the "deformed" polynomial change from negative to positive at <span class="math-container">$ \ x \ = \ 0 \ $</span> and from positive to negative at the "new" zeroes. The equilibrium at <span class="math-container">$ \ x \ = \ 0 \ $</span> <em>switches from stable to unstable</em> and the new equilibria are <em>stable</em>, while the equilbria at <span class="math-container">$ \ x \ = \ \pm 1 \ $</span> remain unstable and the one at <span class="math-container">$ \ x \ = \ -2 \ $</span> remains stable.</p>
<p>Finally, there is the bifurcation at <span class="math-container">$ \ a \ = \ \frac14 \ $</span> with associated polynomial <span class="math-container">$ \ \frac14·x·\left(x + \frac74 \right)·(2x^2 - 1)^2 \ \ . \ $</span> The curve again "opens upward" and, for <span class="math-container">$ \ a \ = \ \frac14^{-} \ \ , \ $</span> the six zeroes are all single, so we must have <span class="math-container">$ \ \dot{x} \ $</span> change from positive to negative for the first <span class="math-container">$ \ ( \ a - 2 \ ) \ \ , $</span> third, and fifth zeroes, and from negative to positive for the second, fourth <span class="math-container">$ \ ( \ 0 \ ) \ \ , $</span> and sixth. So <span class="math-container">$ \ x \ = \ a - 2 \ $</span> is stable and <span class="math-container">$ \ x \ = \ 0 \ $</span> is unstable; we also have the pairs <span class="math-container">$ \ -\sqrt{\frac{1 \ + \ \sqrt{1 \ - \ 4a}}{2}} \ \ , \ \ -\sqrt{\frac{1 \ - \ \sqrt{1 \ - \ 4a}}{2}} \ $</span> (unstable-stable) and <span class="math-container">$ \ \sqrt{\frac{1 \ - \ \sqrt{1 \ - \ 4a}}{2}} \ \ \ , \ \ \ \sqrt{\frac{1 \ + \ \sqrt{1 \ - \ 4a}}{2}} \ $</span> (stable-unstable). These pairs "merge" into saddle-points at <span class="math-container">$ \ x \ = \ \pm \frac{1}{\sqrt2} \ $</span> for <span class="math-container">$ \ a \ = \ \frac14 \ \ , \ $</span> which then disappear ("annihilate") for <span class="math-container">$ \ a \ > \ \frac14 \ \ . $</span></p>
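<p>The special factorization quoted for <span class="math-container">$a=\frac14$</span> can be double-checked mechanically with exact rational arithmetic (a sketch, not part of the answer; the helper name is ad hoc):</p>

```python
from fractions import Fraction as F

def pmul(p, q):
    # multiply polynomials given as coefficient lists, lowest degree first
    out = [F(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

a = F(1, 4)
# f(x) = x * (x - (a - 2)) * (a - x^2 + x^4)
lhs = pmul(pmul([F(0), F(1)], [2 - a, F(1)]), [a, F(0), F(-1), F(0), F(1)])
# quoted form: (1/4) * x * (x + 7/4) * (2 x^2 - 1)^2
sq = pmul([F(-1), F(0), F(2)], [F(-1), F(0), F(2)])
rhs = pmul(pmul([F(0), F(1, 4)], [F(7, 4), F(1)]), sq)
assert lhs == rhs
```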
|
3,362,000 | <p>From listing the first few terms, I suspect that the sequence is increasing, so I wanted to use mathematical induction to verify my suspicion.</p>
<p>I have assumed that <span class="math-container">$a_k<a_{k+1}$</span>, but I don't see how I can obtain <span class="math-container">$a_{k+1}<a_{k+2}$</span>, because <span class="math-container">$\frac{1}{a_k}>\frac{1}{a_{k+1}}$</span>.</p>
| J. W. Tanner | 615,567 | <p>The sequence is increasing. </p>
<p>Since <span class="math-container">$a_{n-1}>0$</span>, it follows that <span class="math-container">$\dfrac1{a_{n-1}}>0$</span>, and therefore that <span class="math-container">$a_n=a_{n-1}+\dfrac1{a_{n-1}}>a_{n-1}$</span>.</p>
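<p>A quick numerical illustration of this argument (a sketch, not part of the answer; the recursion <span class="math-container">$a_n=a_{n-1}+\frac1{a_{n-1}}$</span> is the one used above and the starting value is arbitrary):</p>

```python
# iterate a_n = a_{n-1} + 1/a_{n-1} from an arbitrary positive start
seq = [0.3]
for _ in range(50):
    seq.append(seq[-1] + 1.0 / seq[-1])

# each step adds the strictly positive term 1/a_{n-1}
assert all(x < y for x, y in zip(seq, seq[1:]))
```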
|
166,013 | <p>Ordinary (connective) complex $K$-theory is the algebraic $K$-theory of the topological ring $\mathbb{C}$ with the analytic topology. One can also study the $K$-theory of $\mathbb{C}$ with the discrete topology. Weibel, in his $K$-theory book, computes the torsion in its coefficient ring. I would like to know the torsion-free part of the homotopy groups, but
can't find this anywhere. The best language for this might be in terms of motives (without factoring out $\mathbb{A}^1$), but I don't know where to find its homotopy groups computed in this language either. Anyone know this? </p>
| Dan Ramras | 4,042 | <p>According to Corollary 22.4 here:
<a href="http://www.math.uiuc.edu/~dan/Papers/KTheoryOfFields.pdf" rel="nofollow">http://www.math.uiuc.edu/~dan/Papers/KTheoryOfFields.pdf</a>
the K-theory of an algebraically closed field is divisible. According to the "structure theorem for divisible abelian groups", discussed here:
<a href="http://homepages.math.uic.edu/~gconant/Math/Structure%20Theorem%20for%20Divisible%20Abelian%20Groups.pdf" rel="nofollow">http://homepages.math.uic.edu/~gconant/Math/Structure%20Theorem%20for%20Divisible%20Abelian%20Groups.pdf</a>
a divisible group is a direct sum of copies of $\mathbb{Q}$ and Prufer $p$-groups. (I think this theorem is discussed in L. Fuchs' book(s) on abelian group theory.) The remaining question, as far as the torsion-free part goes, seems to be the number of $\mathbb{Q}$ factors in each dimension (as a cardinal). </p>
|
3,953,681 | <p>I have a basic question but I have failed to solve it. I have the equation of a cylinder which is <span class="math-container">$y^2 + z^2 = r^2$</span> (centered on the x-axis). The parametric equation (dependent on <span class="math-container">$L$</span> and <span class="math-container">$s$</span>) is <span class="math-container">$(x,y,z) = (L, r\cos(s), r\sin(s))$</span>.</p>
<p>I would like to rotate it by a certain angle <span class="math-container">$\theta$</span> (anticlockwise). Thus I have the new axis from the rotation as: <span class="math-container">$x=x'*\cos\theta + z'*\sin\theta$</span>, <span class="math-container">$y=y'$</span> and <span class="math-container">$z=r*\sin\theta$</span>. However, when rewriting the equation of the cylinder as <span class="math-container">$(y')^2 + (-x'*\sin\theta + z'*\cos\theta)^2 = r^2$</span> and parametrizing, I get: <span class="math-container">$(x,y,z) = (L, r*\cos(s), z+x'*\tan\theta)$</span>, with <span class="math-container">$z=r*\sin\theta$</span>. When I plot this, I get an elliptic cylinder.
Does anyone know what I am doing wrong? I need such an equation because I will generate multiple cylinders later computationally.</p>
<p>I have followed previous posts such as <a href="https://math.stackexchange.com/questions/2733090/if-i-have-an-oblique-cylinder-can-i-trim-it-in-to-a-rectilinear-cylinder">If I have an oblique cylinder can I trim it in to a rectilinear cylinder?</a> but they actually obtain the elliptic cylinder.</p>
<p>Many thanks!</p>
| lab bhattacharjee | 33,337 | <p>Hint:</p>
<p><span class="math-container">$$\int\dfrac{dx}{x\sqrt{a-bx^2}}=\int\dfrac{bx\ dx}{bx^2\sqrt{a-bx^2}}$$</span></p>
<p>Let <span class="math-container">$\sqrt{a-bx^2}=u\implies du=\dfrac{-bx}{\sqrt{a-bx^2}}\,dx$</span> and <span class="math-container">$bx^2=a-u^2$</span></p>
<p>More generally for <span class="math-container">$$\int\dfrac{dx}{x\sqrt{a-bx^n}}=\int\dfrac{bx^{n-1}\ dx}{bx^n\sqrt{a-bx^n}}$$</span></p>
<p>set <span class="math-container">$\sqrt{a-bx^n}=y$</span></p>
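<p>Completing the hint for the general case (this closed form is my own completion, not part of the answer, and assumes <span class="math-container">$a>0$</span> and <span class="math-container">$0<bx^n<a$</span>): with <span class="math-container">$y=\sqrt{a-bx^n}$</span> the integral becomes <span class="math-container">$-\frac{2}{n}\int\frac{dy}{a-y^2}$</span>, giving the candidate antiderivative <span class="math-container">$F(x)=-\frac{1}{n\sqrt a}\ln\frac{\sqrt a+y}{\sqrt a-y}$</span>. A numerical spot-check by finite differences:</p>

```python
import math

def integrand(x, n, a, b):
    return 1.0 / (x * math.sqrt(a - b * x**n))

def F(x, n, a, b):
    # candidate antiderivative, assuming a > 0 and 0 < b * x**n < a
    y = math.sqrt(a - b * x**n)
    r = math.sqrt(a)
    return -math.log((r + y) / (r - y)) / (n * r)

def err(n, a, b, x, h=1e-6):
    # |central finite difference of F minus the integrand|
    return abs((F(x + h, n, a, b) - F(x - h, n, a, b)) / (2 * h)
               - integrand(x, n, a, b))

assert err(2, 4.0, 1.0, 1.2) < 1e-7
assert err(3, 5.0, 0.7, 1.1) < 1e-7
```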
|
1,574,663 | <p>I'm a first-time Calc I student with a professor who loves using $e^x$ and logarithms in questions. So, loosely, I know L'Hopital's rule states that when you have a limit that is indeterminate, you can differentiate the numerator and denominator to then solve the problem. But what do you do when no matter how much you differentiate, you just keep getting an indeterminate answer? For example, a problem like</p>
<p>$\lim _{x\to \infty }\frac{\left(e^x+e^{-x}\right)}{\left(e^x-e^{-x}\right)}$</p>
<p>When you apply L'Hopital's rule you just endlessly keep getting an indeterminate answer. With just my basic understanding of calculus, how would I go about solving a problem like that?</p>
<p>Thanks</p>
| SchrodingersCat | 278,967 | <p><strong>HINT:</strong> </p>
<p>As for your problem, divide both numerator and denominator by $e^x$. You'll get your limit as $1$.</p>
<p>In mathematics, logic, representation and arrangement play an extremely vital role. So always check that you have arranged your expression properly. Otherwise, repeated applications of several powerful and helpful theorems might fail, not only in calculus but in other mathematical topics as well.</p>
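<p>A numerical aside (not part of the hint; the function name is ad hoc): dividing through by $e^x$ also gives the numerically stable form $\frac{1+e^{-2x}}{1-e^{-2x}}$, which can be evaluated even where computing $e^x$ directly would overflow a double:</p>

```python
import math

def ratio(x):
    # (e^x + e^-x) / (e^x - e^-x) after dividing through by e^x
    t = math.exp(-2.0 * x)
    return (1.0 + t) / (1.0 - t)

assert ratio(5.0) > 1.0
assert abs(ratio(20.0) - 1.0) < 1e-12
assert abs(ratio(800.0) - 1.0) < 1e-12  # e^800 itself would overflow a float
```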
|
1,574,663 | <p>I'm a first-time Calc I student with a professor who loves using $e^x$ and logarithms in questions. So, loosely, I know L'Hopital's rule states that when you have a limit that is indeterminate, you can differentiate the numerator and denominator to then solve the problem. But what do you do when no matter how much you differentiate, you just keep getting an indeterminate answer? For example, a problem like</p>
<p>$\lim _{x\to \infty }\frac{\left(e^x+e^{-x}\right)}{\left(e^x-e^{-x}\right)}$</p>
<p>When you apply L'Hopital's rule you just endlessly keep getting an indeterminate answer. With just my basic understanding of calculus, how would I go about solving a problem like that?</p>
<p>Thanks</p>
| abel | 9,252 | <p>Why would you want to use L'Hopital's rule on this? For $x$ large and positive, $e^{-x} = \frac 1{e^x}$, which is small. Dividing numerator and denominator by $e^x$, $$\frac{e^x + e^{-x}}{e^x -e^{-x}} = \frac{1+e^{-2x}}{1-e^{-2x}} \to 1 \text{ as } x \to \infty.$$</p>
|
3,734,216 | <p>Say <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are operators on Hilbert spaces <span class="math-container">$H_A,H_B$</span> respectively. If the Hilbert spaces are finite dimensional, then I know the tensor <span class="math-container">$A\otimes B$</span> can be represented by the Kronecker product <span class="math-container">$[a_{ij}B]$</span>.</p>
<p>Question 1: Does the Kronecker product formula <span class="math-container">$[a_{ij}B]$</span> still work in infinite dimensions?</p>
<p>Question 2: If not, does it work when <span class="math-container">$H_A$</span> is finite dimension and <span class="math-container">$H_B$</span> is infinite dimensional (possibly an operator on a non-separable space)?</p>
| tomasz | 30,222 | <p><span class="math-container">$\DeclareMathOperator{\End}{End}$</span>
Yes, the Kronecker product formula works, and not just for Hilbert spaces.</p>
<p>More precisely, in the non-topological case, if <span class="math-container">$V,W$</span> are <span class="math-container">$k$</span>-vector spaces with (algebraic) bases <span class="math-container">$(v_i)_i$</span>, <span class="math-container">$(w_k)_k$</span> and <span class="math-container">$A\in \End(V)$</span>, <span class="math-container">$B\in \End(W)$</span>, then you can express <span class="math-container">$A,B$</span> using matrix coefficients <span class="math-container">$a_{i,j},b_{k,l}$</span> as <span class="math-container">$A(v_{i})=\sum_{j}a_{i,j}v_j$</span>, <span class="math-container">$B(w_k)=\sum_{l}b_{k,l}w_l$</span>, and then
<span class="math-container">$$(A\otimes B)(v_i\otimes w_k)=(Av_i)\otimes (Bw_k)=(\sum_{j}a_{i,j}v_j)\otimes(\sum_{l}b_{k,l}w_l)=\sum_{j,l} a_{i,j}b_{k,l} (v_j\otimes w_l),$$</span>
i.e. the matrix coefficients of <span class="math-container">$A\otimes B$</span> are, indeed, given by the Kronecker product (with respect to the tensor product of the bases or <span class="math-container">$V,W$</span> with respect to which we compute the matrix coefficients of <span class="math-container">$A,B$</span>).</p>
<p>Now, if <span class="math-container">$V,W$</span> are Hilbert spaces and <span class="math-container">$(v_i)_i$</span>, <span class="math-container">$(w_k)_k$</span> are their orthonormal bases, then <span class="math-container">$(v_i\otimes w_k)_{i,k}$</span> is an orthonormal basis of <span class="math-container">$V\otimes W$</span> and exactly the same computation works. The only difference is that the summation is no longer essentially finite, but only absolutely convergent. I suppose this should work the same way in any context where "matrix coefficients" make any sense.</p>
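<p>In the finite-dimensional case the matrix-coefficient computation above can be checked directly. The pure-Python sketch below (helper names are ad hoc) verifies the mixed-product identity <span class="math-container">$(A\otimes B)(v\otimes w)=(Av)\otimes(Bw)$</span> with the standard Kronecker ordering (row <span class="math-container">$i p + k$</span>, column <span class="math-container">$j q + l$</span>):</p>

```python
def kron(A, B):
    # Kronecker product: entry ((i,k),(j,l)) -> A[i][j] * B[k][l]
    m, n, p, q = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(n) for l in range(q)]
            for i in range(m) for k in range(p)]

def matvec(M, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in M]

def tensor(x, y):
    # v ⊗ w in the same basis ordering as kron
    return [xi * yj for xi in x for yj in y]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, -1]]
v, w = [2, -1], [1, 3]

lhs = matvec(kron(A, B), tensor(v, w))
rhs = tensor(matvec(A, v), matvec(B, w))
assert lhs == rhs  # both equal [0, 0, 6, 4] here
```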
|
2,010,693 | <p>How can I prove that $x_{n+1}=c+\sqrt{x_n}$, $x_1=a>0$ and $c>0$ converges?
I know that the limit (if it exists) is $L={{2c+1+\sqrt{4c+1}}\over 2}$.
I have already proved that if $x_1<L$ then $x_n<L$, so it is bounded from above, but how can I prove that if $x_1<L$ then the sequence is increasing?
I would really appreciate any hints or ideas.</p>
| Mark Viola | 218,419 | <p>Note that we have</p>
<p>$$x_{n+1}-x_n=\sqrt{x_n}-\sqrt{x_{n-1}}=\frac{x_n-x_{n-1}}{\sqrt{x_n}+\sqrt{x_{n-1}}}\tag{1}$$</p>
<p>Hence, if $c+\sqrt{x_1}>x_1$, then we see from $(1)$ by inductive reasoning that the sequence is always increasing.</p>
<p>Similarly, if $c+\sqrt{x_1}<x_1$, then the sequence is decreasing. </p>
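<p>Both cases can be illustrated numerically (a sketch, not part of the answer; the starting values are arbitrary), together with the limit $L=\frac{2c+1+\sqrt{4c+1}}{2}$ quoted in the question:</p>

```python
import math

def iterate(c, x1, steps):
    # iterate x_{n+1} = c + sqrt(x_n)
    xs = [x1]
    for _ in range(steps):
        xs.append(c + math.sqrt(xs[-1]))
    return xs

c = 1.0
L = (2 * c + 1 + math.sqrt(4 * c + 1)) / 2  # limit quoted in the question

up = iterate(c, 0.5, 60)     # x1 < L: increasing toward L
down = iterate(c, 10.0, 60)  # x1 > L: decreasing toward L

assert all(a < b for a, b in zip(up[:15], up[1:16]))
assert all(a > b for a, b in zip(down[:15], down[1:16]))
assert abs(up[-1] - L) < 1e-9 and abs(down[-1] - L) < 1e-9
```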
|
869,337 | <p>"Abstract index" and "coordinate free notations" are often submitted as alternatives to Einstein Summation notation. Could you illustrate their use using an example?</p>
<p>Here's a sum written in Einstein's notation:</p>
<p>$a_{ij}b_{kj} = a_{i}b_{k}$</p>
<p>How would you rewrite it in a modern way? </p>
| Gro-Tsen | 84,253 | <p>Even assuming $R = k$ is a field (which I will do throughout), there is no really satisfactory simple condition, but there is a name associated to the phenomenon: Tschirnhaus transformations. And we can say a few things about them.</p>
<p>Specifically, if $k$ is a field and $f,g$ are two monic polynomials of the same degree in $k$, which for convenience I will write in different variables $f \in k[X]$ and $g \in k[Y]$, a <em>Tschirnhaus transformation</em> of $f$ into $g$ is an element $u$ of the algebra $k[X]/(f)$ whose minimal polynomial is $g$ (and sometimes a polynomial $U \in k[X]$ representing $u$ is abusively called the Tschirnhaus transformation). Note that the minimal polynomial of $u$ can be algorithmically computed as the resultant in the variable $X$ of $f(X)$ and $Y - U(X)$. A Tschirnhaus transformation of $f$ into $g$ is essentially the same thing as an isomorphism between $k[Y]/(g)$ and $k[X]/(f)$, the isomorphism being represented by $A \mapsto A\circ U$ for $A \in k[Y]$. So we can also give the following criterion: a Tschirnhaus transformation $U$ of $f$ is a polynomial $U \in k[X]$ (or more rigorously, its class mod $f$) such that there exists $V \in k[Y]$ for which $V\circ U \equiv X \pmod{f}$ (the transformation is into the minimal polynomial $g$ of $u$, computed as explained). This $V$ is, obviously, called the inverse (or "converse") Tschirnhaus transformation to $U$.</p>
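<p>The minimal-polynomial computation mentioned here can also be carried out by plain linear algebra in $k[X]/(f)$ rather than via resultants. The sketch below (exact rational arithmetic; helper names are ad hoc, and it assumes $u$ generates the quotient, so the minimal polynomial has full degree) recovers the minimal polynomial $Y^2-2Y-1$ of $u=X+1$ modulo $f=X^2-2$:</p>

```python
from fractions import Fraction as F

def mulmod(p, q, f):
    # multiply p, q in k[X]/(f); f monic, coefficient lists lowest degree first
    d = len(f) - 1
    out = [F(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    for k in range(len(out) - 1, d - 1, -1):  # reduce mod f
        c = out[k]
        for j in range(d + 1):
            out[k - d + j] -= c * f[j]
    return out[:d]

def min_poly(u, f):
    # minimal polynomial of u in k[X]/(f), assuming u generates the quotient
    d = len(f) - 1
    pows, cur = [], [F(1)] + [F(0)] * (d - 1)
    for _ in range(d):
        pows.append(cur[:])
        cur = mulmod(cur, u, f)
    # solve sum_j c_j u^j = -u^d by Gauss-Jordan elimination over Q
    M = [[pows[j][i] for j in range(d)] + [-cur[i]] for i in range(d)]
    for col in range(d):
        piv = next(r for r in range(col, d) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(d):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [M[i][d] for i in range(d)] + [F(1)]  # monic, lowest degree first

f = [F(-2), F(0), F(1)]  # X^2 - 2
u = [F(1), F(1)]         # u = 1 + X
print([int(c) for c in min_poly(u, f)])  # [-1, -2, 1], i.e. g = Y^2 - 2Y - 1
```

<p>In general the minimal polynomial's degree only divides $\deg f$, so a full implementation would look for the first power of $u$ that is linearly dependent on the lower ones instead of assuming a square system.</p>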
<p>Now we can reduce to the case where $f$ and $g$ are irreducible, because:</p>
<ul>
<li><p>If $f = f_1 f_2$ where $f_1$ and $f_2$ are monic and relatively prime, then a Tschirnhaus transformation $U$ of $f$ is exactly the same thing as two Tschirnhaus transformations $U_1, U_2$ of $f_1,f_2$ such that the polynomials $g_1,g_2$ that they transform into are relatively prime ($U$ is congruent to $U_i$ mod $f_i$ and transforms $f$ into $g = g_1 g_2$). This follows easily from the Chinese remainder theorem.</p></li>
<li><p>If $f,g$ are monic, then a polynomial $U \in k[X]$ defines a Tschirnhaus transformation of $f^r$ into $g^r$ iff it defines one of $f$ into $g$.</p></li>
</ul>
<p>So if we factor $f = \prod f_i^{v_i}$ and $g = \prod g_j^{w_j}$ with the $f_i$ and $g_j$ irreducible, the polynomials $f$ and $g$ are Tschirnhaus-equivalent iff there is a bijection $i \mapsto j=\sigma(i)$ such that $f_i$ is Tschirnhaus-equivalent to $g_j$ and $w_j = v_i$.</p>
<p>Also note the following: if there is a Tschirnhaus transformation $U$ of $f$ into $g$ (monic, of same degree), and if $E$ is a splitting field of $f$ over $k$, then $E$ is also a splitting field of $g$ over $k$. Indeed, $U$ is still a Tschirnhaus transformation of $f$ into $g$ when $f,g$ are viewed as polynomials over $E$, but since $f$ is split, $g$ also has to be split.</p>
<p>So now we can assume $f$ and $g$ irreducible (of the same degree), so that $k(x) := k[X]/(f)$ and $k(y) := k[Y]/(g)$ are fields (the rupture fields of $f$ and $g$). The fact that $f$ and $g$ are Tschirnhaus-equivalent can then be expressed in a slightly simpler equivalent form: $g$ has a root in $k(x)$ (i.e., there is $U$ such that $g \circ U \equiv 0 \pmod{f}$; then $g$ is necessarily the minimal polynomial of the class $u$ of $U$).</p>
<p>Suppose moreover $f$ and $g$ are separable and $k$ has a splitting algorithm (i.e., we can algorithmically compute the factorization of a polynomial $h \in k[T]$ into irreducible factors). Then there is an algorithm for deciding whether $f$ and $g$ are Tschirnhaus-equivalent: indeed, we saw that we can assume $f$ and $g$ irreducible, and that it is then the matter of deciding whether $g$ has a root over $k(x) := k[X]/(f)$; but the latter is solved by factoring $g$ over $k(x)$, and since $f$ is separable, $k(x)$ has a splitting algorithm (see Fried & Jarden, <em>Field Arithmetic</em>, lemma 19.2.2, or Stoltenberg-Hansen & Tucker, "Computable rings and fields" in the <em>Handbook of Computability Theory</em>, theorem 3.2.4).</p>
<p>[<strong>Edit:</strong> perhaps the following criterion is simpler or more satisfactory: if $f \in k[X]$ and $g \in k[Y]$ are monic of the same degree $d$, separable and irreducible, they are Tschirnhaus-equivalent iff the quotient $k[X,Y]/(f,g) = k(x) \otimes_k k(y)$ by the ideal they both generate, and which is a product of fields, has a factor of degree $d$ over $k$. Indeed, this algebra is $k(x)[Y]/(g)$, a product of extensions of $k(x)$ which has a factor of degree $1$ over $k(x)$ — or equivalently, of degree $d$ over $k$ — iff $g$ has a root in $k(x)$. Now we can apply algorithms of (zero-dimensional) primary decomposition to $(f,g)$ to find whether it is the case.]</p>
<p>[<strong>Edit 2:</strong> Here is a more detailed algorithm to test whether $f,g$ (monic separable irreducible of same degree $d$) are Tschirnhaus-equivalent, assuming for simplicity that $k$ is infinite. First, we find $\lambda\in k$ such that $x+\lambda y$ is primitive in the sense that the monic generator $h \in k[Z]$ of the ideal $(f,g,Z-X-\lambda Y) \cap k[Z]$ (computed by algebraic elimination of $X$ and $Y$) is of degree $d^2$: since this is the case for all but finitely many $\lambda$ and $k$ is infinite, this can be found. Once this $h$ has been found, factor it over $k$: it has (at least one) factor of degree $d$ iff $f,g$ are Tschirnhaus-equivalent. If $h_0$ is such a factor, compute $(f,g,Z-X-\lambda Y, h_0) \cap k[X,Y]$, again by algebraic elimination, and more precisely, compute a Gröbner basis for the lexicographic order with $X<Y$: this basis will be of the form $f(X), Y-U(X)$ where $U$ is the desired Tschirnhaus transformation.]</p>
<p>As seen above, if $f$ and $g$ are separable, a necessary condition for $f$ and $g$ to be Tschirnhaus-equivalent is that they have the same splitting field in some fixed algebraic closure of $k$. This is not sufficient (even if $f$ and $g$ are irreducible): $X^4 - 2$ and $Y^4 + 2$ over $\mathbb{Q}$ have the same splitting field (namely $\mathbb{Q}(\sqrt{-1},\sqrt[4]{2})$), but they are not Tschirnhaus-equivalent since $\mathbb{Q}[X]/(X^4-2)$ can be embedded in $\mathbb{R}$ so it contains no root of $Y^4 + 2$. However, the criterion can be refined as follows: $f$ and $g$ (monic separable and irreducible of the same degree) are Tschirnhaus-equivalent iff they have the same splitting field $E$ and moreover, if we let $P$ and $Q$ be the subgroups of $G := \mathrm{Gal}(E/k)$ fixing a root of $f$ and $g$ respectively, then $P$ and $Q$ are conjugate in $G$. (This is an easy consequence of the fact that an isomorphism of rupture fields extends to an automorphism of the splitting field.)</p>
<p>All of this is fairly simple, and I'm not sure any of it can really be said to answer the question, but I don't think one can do any better.</p>
|
181,499 | <p>In many of the classes that I teach, I require students to learn the basics of Mathematica which we use throughout the semester to do computations and to submit homeworks (in notebook form). Some students really like this and some... not so much. </p>
<p>Since I teach in an engineering department, almost everyone already knows some programming language: <em>Matlab</em>, <em>python</em>, <em>java</em>, or <em>C</em> are the most common, though there is quite a variety. One thing that I have found pretty effective is to try and relate Mathematica formalisms, structures, and ideas to those that students already know. For example:</p>
<p><span class="math-container">$-$</span> When talking about using the <a href="https://reference.wolfram.com/language/ref/Listable.html" rel="noreferrer"><code>Listable</code></a> Attribute of functions, I compare this to Matlab's <a href="https://www.mathworks.com/help/matlab/matlab_prog/vectorization.html" rel="noreferrer">vectorization</a></p>
<p><span class="math-container">$-$</span> When talking about alternatives for loops, Mathematica's <a href="https://reference.wolfram.com/language/ref/Table.html" rel="noreferrer"><code>Table</code></a> function is analogous to python's <a href="https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions" rel="noreferrer">List Comprehensions</a>, for example, observe the similarity between</p>
<pre><code>squares = [x**2 for x in range(10)]
</code></pre>
<p>and</p>
<pre><code>squares = Table[x^2, {x, Range[10]}]
</code></pre>
<p><span class="math-container">$-$</span> Mathematica's Notebook format is analogous to <a href="https://jupyter.org/index.html" rel="noreferrer">Jupyter notebooks</a> which merge word processing, computation, and interactive presentations.</p>
<p>My question is this: What are some other analogies between Mathematica functions, expressions, and structures that might be helpful to new users in understanding "what Mathematica is thinking" or "why it works that way"?</p>
<p>Update: It seems that we have some very good answers for Matlab and for python. How about other languages? Any nice analogies for/with other popular languages?</p>
| Αλέξανδρος Ζεγγ | 12,924 | <p>When a language that does not emphasize "functional programming", e.g., Python, nevertheless has to talk about it, it usually speaks of three functions: <code>map</code>, <code>filter</code> and <code>reduce</code>. I always think comparison is a good approach to learning things, so below I share the comparison I made before.</p>
<p><a href="https://i.stack.imgur.com/sKwwf.png" rel="noreferrer"><img src="https://i.stack.imgur.com/sKwwf.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/s7PwN.png" rel="noreferrer"><img src="https://i.stack.imgur.com/s7PwN.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/uLZgL.png" rel="noreferrer"><img src="https://i.stack.imgur.com/uLZgL.png" alt="enter image description here"></a></p>
<p>Besides, <code>Function (&)</code> vs <code>lambda</code>, <code>Array, Table</code> vs "list comprehensions" (<code>Table</code> has been mentioned but <code>Range[]</code> is redundant.).</p>
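<p>To make the comparison runnable, here is a small Python version of the three idioms; the Mathematica counterparts noted in the comments are <code>Map</code>, <code>Select</code> and <code>Fold</code>:</p>

```python
from functools import reduce

xs = [1, 2, 3, 4, 5]

# Mathematica: Map[#^2 &, xs]     ~  Python: map
squares = list(map(lambda x: x**2, xs))          # [1, 4, 9, 16, 25]

# Mathematica: Select[xs, EvenQ]  ~  Python: filter
evens = list(filter(lambda x: x % 2 == 0, xs))   # [2, 4]

# Mathematica: Fold[Plus, 0, xs]  ~  Python: reduce
total = reduce(lambda acc, x: acc + x, xs, 0)    # 15

print(squares, evens, total)
```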
|
1,602,271 | <p>Can someone help me solve this differential equation with the method of undetermined coefficients?
$$ y''-2y'+y=x\sin x$$
Thanks</p>
| ultrainstinct | 177,777 | <p>So first you need to get the solution to the homogeneous ODE, and the characteristic equation is $$r^2-2r+1=0=(r-1)^2,$$ so you have two repeated real roots, you know what to do with that from there. Now onto the particular solution. Since we have a $\sin x$ multiplied by a polynomial, we have $$y_p = (Ax+B)\sin x + (Cx+D)\cos x$$</p>
<p>Now from here you have all of the materials needed to solve the problem. Take the derivative of the particular solution twice, sub in, and equate coefficients.</p>
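<p>As a sanity check of the by-hand computation, sympy can both solve the equation and verify the result; a sketch:</p>

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x, 2) - 2*y(x).diff(x) + y(x), x*sp.sin(x))
sol = sp.dsolve(ode, y(x))
print(sol)

# checkodesol substitutes the solution back into the ODE
ok, residual = sp.checkodesol(ode, sol)
print(ok)  # True
```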
|
213,872 | <p>I'm learning probability theory and I see the half-open intervals $(a,b]$ appear many times. One of theorems about Borel $\sigma$-algebra is that</p>
<blockquote>
<p>The Borel $\sigma$-algebra of ${\mathbb R}$ is generated by intervals of the form $(-\infty,a]$, where $a\in{\mathbb Q}$. </p>
</blockquote>
<p>Also, the distribution function induced by a probability $P$ on $({\mathbb R},{\mathcal B})$ is defined as
$$
F(x)=P((-\infty,x])
$$</p>
<p>Is it because for some theoretical convenience that the half-open intervals are used often in probability theory or are they of special interest?</p>
| George Frank | 30,674 | <p>I think it's because the distribution function in the discrete case is the sum of probabilities from minus infinity up to and including x; but minus infinity is not a number so that end of the interval is open, i.e., has no end point.</p>
|
3,041,656 | <p>I need some help with a proof:
Prove that any integer <span class="math-container">$n>6$</span> can be written as a sum of two coprime integers <span class="math-container">$a,b$</span>, i.e. with <span class="math-container">$\gcd(a,b)=1$</span>.</p>
<p>I tried using "Dirichlet's theorem on arithmetic progressions" but didn't have any luck arriving at an actual proof.
I mainly used arithmetic progressions with common difference <span class="math-container">$4$</span>, <span class="math-container">$(4n,4n+1,4n+2,4n+3)$</span>, but did not get far, only to the extent of specific examples, and even then <span class="math-container">$a,b$</span> weren't always coprime (and <span class="math-container">$n$</span> was also playing a role, so it wasn't <span class="math-container">$a+b$</span>, it was <span class="math-container">$an+b$</span>).</p>
<p>I would appreciate it a lot if someone could give a hand here.</p>
| user760041 | 760,041 | <p>We know that <span class="math-container">$n>6$</span>, and we need to prove that any such integer can be written as a sum of two coprimes. This has a very simple proof.
We know that any integer <span class="math-container">$n$</span> and <span class="math-container">$n-1$</span> are always coprime; that is, their gcd is <span class="math-container">$1$</span>.
So <span class="math-container">$$n = (n-1) + 1\\
= a + b,$$</span> where <span class="math-container">$a$</span> is <span class="math-container">$n-1$</span> and <span class="math-container">$b$</span> is <span class="math-container">$1$</span>.
Since <span class="math-container">$\gcd(a,b) = 1$</span>, any integer <span class="math-container">$n>1$</span> can be written as the sum of two coprimes.</p>
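<p>A quick finite sanity check of this decomposition (an illustration, not a proof):</p>

```python
from math import gcd

for n in range(7, 1000):
    a, b = n - 1, 1
    assert a + b == n and gcd(a, b) == 1
print("n = (n - 1) + 1 is a coprime decomposition for all 7 <= n < 1000")
```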
|
642,631 | <p>What is $[\mathbb{Q}(i,\sqrt{2},\sqrt{3}):\mathbb{Q}]$?</p>
<p>On the one hand, we have $[\mathbb{Q}(i,\sqrt{2},\sqrt{3}):\mathbb{Q}(i,\sqrt{2})]\cdot[\mathbb{Q}(i,\sqrt{2}):\mathbb{Q}(i)]\cdot[\mathbb{Q}(i):\mathbb{Q}]=2^3=8.$</p>
<p>On the other hand, the minimum polynomial in $\mathbb{Q}[x]$ containing $i,\sqrt{2},\sqrt{3}$ as roots is $(x^2+1)(x^2-2)(x^2-3)$, which is of degree $6$.</p>
<p>What am I misunderstanding?</p>
| Hagen von Eitzen | 39,174 | <p>The polynomial you give is not irreducible and is not the minimal polynomial of a single $\alpha$ with $\mathbb Q(\alpha)=\mathbb Q(i,\sqrt 2,\sqrt 3)$. Out of the blue I suggest that $\alpha=i+\sqrt 2+\sqrt 3$ has the desired property - try to find its minimal polynomial.</p>
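<p>Following the hint, a computer algebra system can find the minimal polynomial of $\alpha = i + \sqrt{2} + \sqrt{3}$; here is a sympy sketch (illustration only), whose degree-$8$ output confirms $[\mathbb{Q}(i,\sqrt{2},\sqrt{3}):\mathbb{Q}]=8$:</p>

```python
from sympy import I, sqrt, symbols, minimal_polynomial, degree

x = symbols('x')
alpha = I + sqrt(2) + sqrt(3)
p = minimal_polynomial(alpha, x)
print(p)                 # a degree-8 polynomial over Q
print(degree(p, x))      # 8
```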
|
203,456 | <p>Please help me prove $\log_b a\cdot\log_c b\cdot\log_a c=1$, where $a,b,c$ are positive numbers different from $1$.</p>
| Aang | 33,989 | <p>Let $\log_b a=x\implies b^x=a,$</p>
<p>$ \log_c b=y\implies c^y=b$ and</p>
<p>$\log_a c=z\implies a^z=c$</p>
<p>Now, $a^z=c\implies (b^x)^z=c\implies ((c^y)^z)^x=c\implies c^{xyz}=c\implies xyz=1$ assuming $c\neq 0,1$</p>
<p>Thus, $xyz=1\implies \log_b a\cdot\log_c b\cdot \log_a c=1$</p>
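<p>A quick floating-point check of the identity for some arbitrary sample values:</p>

```python
from math import log

a, b, c = 7.0, 3.0, 5.0   # arbitrary positive values, none equal to 1
value = log(a, b) * log(b, c) * log(c, a)
print(value)              # approximately 1
```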
|
4,164,960 | <p>I'm trying to prove <span class="math-container">$$P \implies Q \vdash \neg Q \implies\lnot P$$</span>
with natural deduction but I'm kind of stuck. I tried going from the conclusion side, which lead me to this:
<span class="math-container">$$ \frac{\frac{[P] \quad
\bot}{[\neg Q] \qquad
\neg P \qquad(\neg E)}}{\qquad\neg Q\implies \neg P \qquad (\implies E)}
$$</span></p>
<p>which is where I'm kind of stuck</p>
| Rob Arthan | 23,171 | <p>Unfortunately, (1) I don't know of a good way of drawing proof trees in MathJax and (2) there is no universally agreed set of natural deduction rules. Here is a proof presented as a sequence of labelled steps using rules similar to those presented <a href="https://en.wikipedia.org/wiki/Natural_deduction#Introduction_and_elimination" rel="nofollow noreferrer">here</a>.</p>
<p><span class="math-container">$$\begin{align*}
1 \quad& P \Rightarrow Q & \quad & \mbox{ASM - left open} \\
2 \quad& \lnot Q && \mbox{ASM - discharged at step 7} \\
3 \quad& P && \mbox{ASM - discharged at step 6} \\
4 \quad& Q && \mbox{1, 3, $\Rightarrow$-E} \\
5 \quad& \bot && \mbox{2, 4, $\lnot$-E} \\
6 \quad& \lnot P && \mbox{3, 5, $\lnot$-I} \\
7 \quad& \lnot Q \Rightarrow \lnot P && \mbox{2,6, $\Rightarrow$-I}
\end{align*}$$</span></p>
<p>Giving us <span class="math-container">$1 \vdash 7$</span>, i.e., <span class="math-container">$P \Rightarrow Q \vdash \lnot Q \Rightarrow \lnot P$</span>.</p>
<hr />
<p><strong>Here is a proof tree for the above that Graham Kemp has kindly added:</strong></p>
<p><span class="math-container">$$\sf\dfrac{\dfrac{\dfrac{\qquad\dfrac{P\to Q\quad{[P]}^2}{Q}{\small{\to}\rm E}\quad\lower{1.5ex}{{[\lnot Q]}^1}}{\bot}{\small\lnot\rm E}}{\lnot P}{\small\lnot\rm I^2}}{\lnot Q\to\lnot P\qquad}{\small{\to}\rm I^1}$$</span></p>
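<p>As a semantic sanity check of the sequent (not of the derivation itself), one can verify by truth table that every valuation satisfying <span class="math-container">$P \Rightarrow Q$</span> also satisfies <span class="math-container">$\lnot Q \Rightarrow \lnot P$</span>; a small sketch:</p>

```python
from itertools import product

def implies(p, q):
    return (not p) or q

for P, Q in product([False, True], repeat=2):
    if implies(P, Q):                 # if the valuation satisfies P => Q ...
        assert implies(not Q, not P)  # ... it also satisfies ~Q => ~P
print("P => Q semantically entails ~Q => ~P")
```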
|
4,164,960 | <p>I'm trying to prove <span class="math-container">$$P \implies Q \vdash \neg Q \implies\lnot P$$</span>
with natural deduction but I'm kind of stuck. I tried going from the conclusion side, which lead me to this:
<span class="math-container">$$ \frac{\frac{[P] \quad
\bot}{[\neg Q] \qquad
\neg P \qquad(\neg E)}}{\qquad\neg Q\implies \neg P \qquad (\implies E)}
$$</span></p>
<p>which is where I'm kind of stuck</p>
| Dan Christensen | 3,515 | <p>(Posted after previous answer was accepted 3 hours ago)</p>
<p>Same proof, but this may be a little more readable format (screenshot from my proof checker):</p>
<p><a href="https://i.stack.imgur.com/ezosz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ezosz.png" alt="enter image description here" /></a></p>
|
9,629 | <p>Are people facing a problem with LaTeX symbols not loading on MSE? I have a high-speed internet connection but I have been facing this problem since yesterday; any suggestions? It says "math processing error" if my connection is slow, but that is not the case here: I am just seeing the raw LaTeX symbols instead of the fully compiled output.</p>
| 75064 | 75,064 | <p>The timing suggests that the problem was related to the release of <a href="http://www.mathjax.org/mathjax-v2-2-now-available/" rel="nofollow">MathJax 2.2</a>: </p>
<blockquote>
<p>During the time that the files are making their way out to the CDN’s servers, there may be a mixture of files in a browser cache, and so users may need to clear their cache and restart their browser in order to get a consistent version of the files.</p>
</blockquote>
|
237,708 | <p>Does the series </p>
<p>$$\sum_{n=1}^{\infty}\log n - (\log n)^{n/(n+1)}$$</p>
<p>converge?</p>
| N. S. | 9,176 | <p>By AM-GM:</p>
<p>$$\sqrt[n+1]{ \log(n)^n} \leq \frac{n\log(n)+1}{n+1}$$</p>
<p>Thus</p>
<p>$$ \log n- (\log n)^{\frac{n}{n+1}} \geq \log(n)-\frac{n\log(n)+1}{n+1}=\frac{\log(n)-1}{n+1}$$</p>
<p>Now, Limit Comparison test tells you that since $\sum \frac{\log n}{n}$ is divergent, this series is also divergent.</p>
|
14,552 | <p>What are good examples of proofs by induction that are relatively low on algebra? Examples might include simple results about graphs.</p>
<p>My aim is to help students get a sense of the logical form of an induction proof (in particular proving a statement of the form 'if $P(k)$ then $P(k+1)$'), independent of the way one might show that in a proof about series formulae specifically.</p>
| Brendan W. Sullivan | 80 | <p>How about the <strong>Tower of Hanoi</strong> puzzle and finding the optimal number of moves? </p>
<p>This link describes the recursive solution procedure and a proof of optimality using induction.</p>
<p><a href="https://proofwiki.org/wiki/Tower_of_Hanoi" rel="nofollow noreferrer">https://proofwiki.org/wiki/Tower_of_Hanoi</a></p>
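<p>A minimal recursive solver (a sketch; the peg names are arbitrary) exhibiting the optimal move count $2^n - 1$:</p>

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Move n disks from src to dst; return the list of moves."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, dst, aux, moves)   # clear the way
        moves.append((src, dst))             # move the largest disk
        hanoi(n - 1, aux, src, dst, moves)   # restack on top of it
    return moves

for n in range(1, 6):
    print(n, len(hanoi(n)))  # move counts 1, 3, 7, 15, 31 (= 2**n - 1)
```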
|
14,552 | <p>What are good examples of proofs by induction that are relatively low on algebra? Examples might include simple results about graphs.</p>
<p>My aim is to help students get a sense of the logical form of an induction proof (in particular proving a statement of the form 'if $P(k)$ then $P(k+1)$'), independent of the way one might show that in a proof about series formulae specifically.</p>
| Steven Gubkin | 117 | <p>I am going to try the following activity as a first introduction to Mathematical Induction on Monday next week. I will let you know how it goes.</p>
<p>The implication <span class="math-container">$P(k) \implies P(k+1)$</span> lets you "hop around" the natural numbers, deciding the truth of new statements using your knowledge of the truth value of old statements. However, it is a bit too straightforward to see what all the fuss is about. The underwhelming and boring nature of the hopping (just put one foot in front of the other) doesn't really permit any play. Without play there can be no learning. So here, I first pose two more interesting rules for "hopping" which give interesting play opportunities (they are actually a puzzle). The final example gives the "obvious" induction rule, which should now feel truly obvious to the student. At this point we will formalize what we have learned as the principle of mathematical induction.</p>
<hr />
<p>Alice, Bob, and Chelle are three mathematicians. As mathematicians, they have a love of certain numbers. Also, as mathematicians, their love is quite idiosyncratic.</p>
<p>Alice loves the natural numbers 1 and 2. Also, if she loves the natural number <span class="math-container">$k$</span>, then she also loves the natural number <span class="math-container">$k+5$</span>. Which natural numbers can you be certain that Alice loves? Are there any natural numbers you can be sure she does not love? Are there any natural numbers you just do not have enough information about to decide this question?</p>
<p>Bob loves the natural number 5. If he loves the natural number <span class="math-container">$k$</span>, then he also loves the natural number <span class="math-container">$2k$</span>. Also, if he loves the natural number <span class="math-container">$j$</span>, he also loves <span class="math-container">$j-2$</span>. Which natural numbers can you be certain that Bob loves? Are there any natural numbers you can be sure he does not love? Are there any natural numbers you just do not have enough information about to decide this question?</p>
<p>Chelle loves the natural number 1. If she loves the natural number <span class="math-container">$k$</span>, then she also loves the natural number <span class="math-container">$k+1$</span>. Which natural numbers can you be certain that Chelle loves? Are there any natural numbers you can be sure she does not love? Are there any natural numbers you just do not have enough information about to decide this question?</p>
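<p>These closure puzzles can also be explored mechanically; here is a rough search sketch (the bound merely truncates the exploration and is an arbitrary choice; Chelle's rule, omitted here, clearly reaches every natural number):</p>

```python
def closure(seeds, step, bound=200):
    """Numbers provably 'loved': close the seed set under the step rules."""
    loved = set(seeds)
    frontier = list(seeds)
    while frontier:
        k = frontier.pop()
        for nxt in step(k):
            if 1 <= nxt <= bound and nxt not in loved:
                loved.add(nxt)
                frontier.append(nxt)
    return loved

alice = closure({1, 2}, lambda k: [k + 5])       # k -> k + 5
bob = closure({5}, lambda k: [2 * k, k - 2])     # k -> 2k and k -> k - 2
print(sorted(n for n in alice if n <= 20))       # [1, 2, 6, 7, 11, 12, 16, 17]
print(sorted(n for n in bob if n <= 20))         # all evens plus 1, 3, 5
```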
|
611,529 | <p>$$i^3=iii=\sqrt{-1}\sqrt{-1}\sqrt{-1}=\sqrt{(-1)(-1)(-1)}=\sqrt{-1}=i
$$</p>
<p>Please take a look at the equation above. What am I doing wrong to understand $i^3 = i$, not $-i$?</p>
| robjohn | 13,854 | <p>Since $i^2=-1$ by definition, $i^3=i^2\cdot i=-i$.</p>
<p>$\sqrt{a}\sqrt{b}=\sqrt{ab}$ is only guaranteed for positive real $a$ and $b$.</p>
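<p>The same computation in Python's complex arithmetic (illustration only):</p>

```python
i = 1j                       # Python's imaginary unit
assert i**2 == -1            # the defining relation
assert i**3 == -i            # hence i^3 = i^2 * i = -i
print(i**3)
```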
|
611,529 | <p>$$i^3=iii=\sqrt{-1}\sqrt{-1}\sqrt{-1}=\sqrt{(-1)(-1)(-1)}=\sqrt{-1}=i
$$</p>
<p>Please take a look at the equation above. What am I doing wrong to understand $i^3 = i$, not $-i$?</p>
| haughtonomous | 116,620 | <p>Someone posted earlier that it is $i \times i \times i$, which is $-1 \times i$, i.e. $-i$ (but the post seems to have been deleted!).</p>
<p>But that's all there is to it. It doesn't matter what $i$ represents; the algebra will be consistent. To verify it, substituting the value of $i$, i.e. $\sqrt{-1}$, you get</p>
<p>$\sqrt{-1} \times \sqrt{-1} \times \sqrt{-1}
= -1 \times \sqrt{-1}
= -(\sqrt{-1})
= -i.$</p>
<p>However, it may be worth mentioning that $i$ is not a real number; it is purely imaginary. A general complex number is of the form $(x + iy)$.</p>
|
3,242,921 | <p>Prove that the equation<span class="math-container">$$x^4+(a-2)x^3+(a^2-2a+4)x^2-x+1=0$$</span>
does not admit <span class="math-container">$$x=-2$$</span> as a triple root.</p>
| Servaes | 30,382 | <p>A set of vectors does <em>not</em> span the whole space if and only if it is contained in a hyperplane. So there exist sets of <span class="math-container">$2^{n-1}$</span> vectors that do not span the whole space, and every set of <span class="math-container">$2^{n-1}+1$</span> does span the whole space.</p>
<p>This agrees with your finding that every set of <span class="math-container">$2^{n-1}$</span> <em>nonzero</em> vectors spans the whole space.</p>
|
2,128,991 | <p>What kind of rule or formula does this kind of equation use?</p>
<p>For example we have:</p>
<p>$$a=e^{x}$$</p>
<p>How come it is equal to:</p>
<p>$$\ln a =x$$</p>
<p>I tried to find some kind of rule for how it works, but didn't find anything.</p>
| S.C.B. | 310,930 | <p>The rule we have is the definition of the <a href="https://en.wikipedia.org/wiki/Logarithm">logarithm</a>. The logarithm, by definition, is the inverse of the exponential function. </p>
<p>Note that by the definition of the logarithm, $$a=b^{x}$$ becomes $$\log_{b} a=x.$$
Yours is just the case where $b=e$. Note that $\log_{e} x =\ln x$. If you're curious about $\log$ and $\ln$, see more <a href="https://en.wikipedia.org/wiki/Natural_logarithm">here</a>. </p>
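<p>The inverse relationship is easy to see numerically (a floating-point sketch):</p>

```python
from math import e, log

x = 2.5
a = e ** x          # a = e^x
print(log(a))       # ln(a) recovers x, up to float rounding
```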
|
2,128,991 | <p>What kind of rule or formula does this kind of equation use?</p>
<p>For example we have:</p>
<p>$$a=e^{x}$$</p>
<p>How come it is equal to:</p>
<p>$$\ln a =x$$</p>
<p>I tried to find some kind of rule for how it works, but didn't find anything.</p>
| amWhy | 9,003 | <p>Note $\log_a x$ is the inverse of the function $a^x$. When we speak of the natural log of $x$, it is written $\ln(x)$, which is simply shorthand for $\log_e(x)$.</p>
<p>$$a = e^x$$</p>
<p>Since $\log x$, or in this case the natural log $\log_e x = \ln x$ is a strictly increasing function, we can take the $\ln$ of both sides to get:
$$\ln(a) = \ln(e^x) = x\ln(e) = x$$</p>
<p>Note that for any positive real $a \neq 1$ and any real $b$, we have $\underbrace{\log_a(a^b) = b\log_a(a)}_{\log x^y = y \log x} = b\cdot 1 = b$. </p>
|
3,715,484 | <p>As the title says, the question is how to find the radius <span class="math-container">$R$</span> of convergence of <span class="math-container">$\sum_{n=1}^{\infty}\frac{\sin n}{n} x^n$</span>. My method is the following:</p>
<p>When <span class="math-container">$x=1$</span>, it is well known that the series <span class="math-container">$\sum_{n=1}^{\infty}\frac{\sin n}{n}$</span> is convergent by Dirichlet's test, and so is <span class="math-container">$\sum_{n=1}^{\infty}(-1)^n \frac{\sin n}{n}$</span>. When <span class="math-container">$x>1$</span>, the limit <span class="math-container">$\lim_{n\to \infty} \frac{\sin n}{n} x^n $</span> does not exist. Therefore, if <span class="math-container">$x>1$</span>, the series <span class="math-container">$\sum_{n=1}^{\infty}\frac{\sin n}{n} x^n$</span> is not convergent. So <span class="math-container">$R=1$</span>. Is this solution right? Or is there any other method to calculate the radius?</p>
<p>I would appreciate it if someone could give some suggestions and comments.</p>
| zkutch | 775,801 | <p>Let's consider
<span class="math-container">$$f(x,y)= \begin{cases}
\frac{x^2y}{x^4+y^2}, & x^2+y^2 \ne 0 \\
0, & x=y=0
\end{cases}$$</span>
This function has a derivative along any line through <span class="math-container">$(0,0)$</span>, because it is <span class="math-container">$0$</span> on both axes and for points <span class="math-container">$(x, kx)$</span> it is <span class="math-container">$f(x,kx)=\frac{kx}{x^2+k^2}$</span>. But it is not even continuous there: approaching <span class="math-container">$(0,0)$</span> along points of the form <span class="math-container">$(a, a^2)$</span>, the function is constantly equal to <span class="math-container">$\frac{1}{2}$</span>.</p>
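<p>Numerically, the two kinds of paths behave as claimed (a small illustration, not a proof):</p>

```python
def f(x, y):
    return 0.0 if x == 0 and y == 0 else x * x * y / (x**4 + y * y)

for a in [0.1, 0.01, 0.001]:
    # roughly 0.5 along y = x^2, tending to 0 along y = x
    print(f(a, a * a), f(a, a))
```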
|
2,103,706 | <p>I tried to prove this by induction.</p>
<p>Base case $n=1$: $5$ vertices. I just drew a pentagon, which has $5$ vertices of degree $2$.</p>
<p>Then I assume that for $n=k$, i.e. $4k+1$ vertices, there is at least one vertex with degree $2k$. The number of edges for this graph is $\dfrac{(4k+1)(4k)}{4}=(4k+1)(k)$.</p>
<p>Then for $n=k+1$, i.e. $4k+5$ vertices, the number of edges is $\dfrac{(4k+5)(4k+4)}{4}=(4k+5)(k+1)$.</p>
<p>The graph with $4k+5$ vertices has $8k+5$ more edges than the graph with $4k+1$ vertices.</p>
<p>This is where I get stuck. Am I on the right track? How should I proceed from here?</p>
| Joffan | 206,402 | <p>The <a href="https://en.wikipedia.org/wiki/Complement_graph" rel="nofollow noreferrer">complementary graph</a> to $G$ adds all edges that are missing from the complete graph $K_{4n+1}$, and removes all existing edges. In $K_{4n+1}$ every vertex has $4n$ edges, so the complementing process changes each vertex degree from $x$ to $4n-x$.</p>
<p>The self-complementary graph must have the same <a href="https://en.wikipedia.org/wiki/Degree_(graph_theory)#Degree_sequence" rel="nofollow noreferrer">degree sequence</a> as its complement, and each vertex has its degree change from $\text{deg}(v)$ to $4n{-}\text{deg}(v)$ in the switch to the complement - so there must also be a vertex of that degree in the original graph to switch back. This corresponds to switching between an early position in the degree sequence and a late one.</p>
<p>So at the middle position $2n{+}1$ in the degree sequence, we must have a vertex that switches value with itself; that is, $\text{deg}(v) = 4n{-}\text{deg}(v)$. Which means for at least that vertex, $\text{deg}(v) = 2n$.</p>
<hr>
<p>As an aside, you might also like to consider the other $5$-vertex self-complementary graph (apart from the $5$-cycle) in any proof you attempt: (from <a href="http://mathworld.wolfram.com/Self-ComplementaryGraph.html" rel="nofollow noreferrer">http://mathworld.wolfram.com/Self-ComplementaryGraph.html</a>):</p>
<p><a href="https://i.stack.imgur.com/6soM9.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6soM9.jpg" alt="enter image description here"></a></p>
<p>Here $n=1$ of course and the middle position of the degree sequence is $3$ - and we see the sequence of $(3,3,\color{red}{2},1,1)$.</p>
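<p>For the $n=1$ case, the degree condition can also be checked by brute force; a sketch for the $5$-cycle and its complement:</p>

```python
from itertools import combinations

def degree_sequence(n, edges):
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sorted(deg, reverse=True)

c5 = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)}
complement = {e for e in combinations(range(5), 2) if e not in c5}

print(degree_sequence(5, c5))          # [2, 2, 2, 2, 2]
print(degree_sequence(5, complement))  # same sequence: self-complementary
```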
|
4,203,079 | <p>I’m trying to grasp the idea behind quotient spaces and am reading <a href="https://en.m.wikipedia.org/wiki/Quotient_space_(topology)" rel="nofollow noreferrer">this</a> wikipedia article. In the section ”Examples” they have the example of the unit square being homeomorphic to <span class="math-container">$S^2$</span> after quotienting, which I thought would be something I could use to start building my understanding of these. I’ve gone through an abstract algebra course and know what an equivalence relation is; however, I still cannot understand the idea here.</p>
<blockquote>
<p>Consider the unit square <span class="math-container">$I^2 = [0,1] × [0,1]$</span> and the equivalence relation <span class="math-container">$\sim$</span> generated by the requirement that all boundary points be equivalent, thus identifying all boundary points to a single equivalence class. Then <span class="math-container">$I^2/ \sim$</span> is homeomorphic to the sphere <span class="math-container">$S^2$</span>.</p>
</blockquote>
<p>The problem is with the sentence.</p>
<blockquote>
<p>equivalence relation <span class="math-container">$\sim$</span> generated by the requirement that all boundary points be equivalent, thus identifying all boundary points to a single equivalence class.</p>
</blockquote>
<p>What does this exactly mean? All the boundary points of <span class="math-container">$I^2$</span> are the set <span class="math-container">$\partial I^2$</span> which I guess could be denoted as <span class="math-container">$[0,1] \times \{0\} \cup \{0\} \times [0,1] \cup [0,1] \times \{1\} \cup \{1\} \times [0,1]$</span>? Also the sentence ”thus identifying all boundary points to a single equivalence class.” is somewhat confusing. Any clarification for this would be greatly appreciated.</p>
| José Carlos Santos | 446,262 | <p>The equivalence relation <span class="math-container">$\sim$</span> is this one:</p>
<ul>
<li>if <span class="math-container">$p\in I^2\setminus\partial I^2$</span>, then <span class="math-container">$p\sim q$</span> if and only if <span class="math-container">$q=p$</span>;</li>
<li>if <span class="math-container">$p\in\partial I^2$</span>, then you have <span class="math-container">$p\sim q$</span> if and only if <span class="math-container">$q\in\partial I^2$</span> too.</li>
</ul>
<p>So, there are two types of equivalence classes:</p>
<ul>
<li>those which consist of a single point from <span class="math-container">$I^2\setminus\partial I^2$</span>;</li>
<li>an equivalence class equal to <span class="math-container">$\partial I^2$</span>.</li>
</ul>
<p>So, the set <span class="math-container">$I^2/{\sim}$</span>, which is the set of all equivalence classes, is the set<span class="math-container">$$\bigl\{\{p\}\mid p\in I^2\setminus\partial I^2\bigr\}\cup\bigl\{\partial I^2\bigr\}.$$</span>The essential idea here is that, in <span class="math-container">$I^2/{\sim}$</span>, the whole boundary of <span class="math-container">$I^2$</span> consists now of a single point.</p>
|
1,528,507 | <p>So I started reading Conjecture and Proof by Miklos Laczkovich and one of the first proofs he provides is that of the irrationality of the square root of two. I am aware there are alternative proofs (one of which is geometric and another that uses the fundamental theorem of arithmetic) but I have a few questions about this one.</p>
<p>The Proof:</p>
<p>Suppose $\sqrt2 = p/q$, where $p, q$ are positive integers and let $q$ be the smallest such number. Then $2q^2=p^2$ and thus $p^2$ is even. Then $p$ itself must be even; let $p=2p_1$. Then $2q^2 = (2p_1)^2=4p_{1}^{2}$, so $q^2 = 2p_{1}^{2}$ and thus $q$ is also even. If $q=2q_1$ then $\sqrt{2} =p/q=p_{1}/q_{1}$. Since $q_{1}<q$, this contradicts the minimality of $q$. </p>
<p>My questions:</p>
<p>Why do we let $q$ be the smallest such number? I understand that this creates the contradiction at the end of the proof but I don't know why this is a fundamental need. </p>
<p>Also, if we are picking from the set of positive integers and $q$ is the smallest, wouldn't that make $q=1$ if our set is all positive integers excluding $0$, and $q=0$ if we do include $0$ in the set? So in the first case $\sqrt{2}=p$, and in the second case $\sqrt{2}$ is undefined. I am unsure of where I am going wrong here.</p>
| Henno Brandsma | 4,280 | <p>The proof assumes that you know what fractions are and how to compute with them, and also that they have more than one representation: if $\frac{p}{q}$ represents a fraction, then $\frac{2p}{2q}$ represents the <em>same</em> fraction. We use this fact to add fractions ($\frac{1}{2} + \frac{1}{4} = \frac{2}{4} + \frac{1}{4} = \frac{1+2}{4} = \frac{3}{4}$, etc.), among other things.</p>
<p>Now we make the argument very formal, if we'd like:</p>
<p>For every fraction $\frac{p}{q}$ we can consider denominators that can occur in representations of it: $D_{\frac{p}{q}} := \{ b \in \mathbb{N}: \exists a \in \mathbb{Z}: \frac{p}{q} = \frac{a}{b} \}$. This set is non-empty ($q$ is in it), so has a minimal member in $\mathbb{N}$. Now start arguing as in your proof, and you get a proper contradiction: we get a stricly smaller number <em>in $D_{\frac{p}{q}}$</em>. And $1$ is not a consideration, as it will not be in $D_{\frac{p}{q}}$ at all.</p>
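<p>A finite machine check, which of course proves nothing by itself, that no fraction with small denominator squares to $2$:</p>

```python
from fractions import Fraction

# exact rational arithmetic: no fraction p/q with q < 200 squares to 2
assert all(Fraction(p, q) ** 2 != 2
           for q in range(1, 200) for p in range(1, 2 * q))
print("no fraction p/q with q < 200 satisfies (p/q)^2 = 2")
```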
|
1,645,130 | <p>Is there any known explicit bijection between these two sets? </p>
<p>I know it can be proved that such bijection exists using two injections and Schröder–Bernstein theorem, but I wanted to know whether some explicit bijection is known. I failed to find any except ones constructed awkwardly from the Schröder–Bernstein theorem.</p>
| Hagen von Eitzen | 39,174 | <p>First note that the set $\mathcal P_{\text{fin}}(\Bbb N)$ of <em>finite</em> subsets of $\Bbb N_0$ is in bijection with $\Bbb N_0$:
$$ \begin{align}\alpha\colon \mathcal P_{\text{fin}}(\Bbb N)&\to \Bbb N\\A&\mapsto \sum_{k\in A}2^k\end{align}$$</p>
<p>Every real number $a\in[0,1)$ has a binary expansion $a=\sum_{k=0}^\infty a_k2^{-k-1} $ with $a_k\in\{0,1\}$. For those cases with two expansions we pick the one ending in zeroes. Now we map
$$ \begin{align}\beta\colon [0,1)&\to \mathcal P(\Bbb N)\\a&\mapsto \{\,k\in\Bbb N\mid a_k=0\,\}\end{align}$$
This one fails to be bijective: We leave out precisely the finite subsets of $\Bbb N$. In other words, we have a bijection
$$ \beta\colon [0,1)\to\mathcal P(\Bbb N)\setminus \mathcal P_{\text{fin}}(\Bbb N)$$
The rest is glueing and playing Hilbert's Hotel:
We have a bijection
$$ \begin{align}\gamma\colon \Bbb R&\to (0,\infty)\\x&\mapsto e^x\end{align}$$
and a bijection
$$ \begin{align}\delta\colon (0,\infty)&\to [0,\infty)\\x&\mapsto \begin{cases}x-1,&x\in\Bbb N\\x,&\text{otherwise}\end{cases}\end{align}$$
and a bijection
$$ \begin{align}\epsilon \colon [0,\infty)&\to [0,1)\\x&\mapsto\begin{cases} \frac1{1+x},&x>0\\0,&x=0\end{cases}\end{align}$$
All in all this gives us a bijection
$$ \zeta\colon \Bbb R\stackrel{\beta\circ\epsilon\circ\delta\circ\gamma}\longrightarrow \mathcal P(\Bbb N)\setminus\mathcal P_{\text{fin}}(\Bbb N)$$
To complete the construction we have to hide countably many finite sets by defining for example
$$ \begin{align}\eta \colon \Bbb R&\to \mathcal P(\Bbb N)\\x&\mapsto\begin{cases}\alpha^{-1}(x),&x\in\Bbb N\\
\zeta(x-\sqrt 2),&x=n+m\sqrt 2\text{ with }n\in\Bbb N, m\in\Bbb N_{>0}\\
\zeta(x),&\text{otherwise}\end{cases}\end{align}$$</p>
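<p>The gluing maps $\gamma$, $\delta$, $\epsilon$ are simple enough to sketch in code (a floating-point illustration only: the real construction is exact, and $\beta$ itself needs infinite binary expansions, which floats cannot represent):</p>

```python
import math

def gamma(x):        # R -> (0, oo)
    return math.exp(x)

def delta(x):        # (0, oo) -> [0, oo): shift the naturals down by one
    return x - 1 if x >= 1 and x == int(x) else x

def epsilon(x):      # [0, oo) -> [0, 1)
    return 0.0 if x == 0 else 1.0 / (1.0 + x)

def to_unit_interval(x):     # epsilon o delta o gamma : R -> [0, 1)
    return epsilon(delta(gamma(x)))

print(to_unit_interval(0.0))   # 0.0: exp(0) = 1 is a natural, shifted to 0
print(to_unit_interval(2.3))   # some point strictly between 0 and 1
```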
|
142,819 | <p>I am currently studying Serge Lang's book "Algebra", on page 25 it is proved that if $G$ is a cyclic group of order $n$, and if $d$ is a divisor of $n$, then there exists a unique subgroup $H$ of $G$ of order $d$.</p>
<p>I have trouble seeing why the proof (as explained below) settles the uniqueness part.</p>
<p>The proof (as I understand it) goes as follows: </p>
<p>First we show existence of the subgroup $H$, given any choice of a divisor $d$ of $n$. </p>
<p>So suppose $n = dm$. Obviously, one can construct a surjective homomorphism $f : \mathbb{Z} \to G$, and it is also clear that $f(m\mathbb{Z}) \subset G$ is a subgroup of $G$. The resulting isomorphism $\mathbb{Z}/m\mathbb{Z} \cong G/f(m\mathbb{Z})$ leads us to conclude that the index of $f(m\mathbb{Z})$ in $G$ is $m$ and so the order of $f(m\mathbb{Z})$ must be $d$.</p>
<p>Ok, so we have shown that a subgroup having order $d$ exists.</p>
<p>The second part is then to show uniqueness - and here is where I am lost as I don't understand why the following argument serves this end:</p>
<p>Suppose $H$ is any subgroup of order $d$. Looking at the inverse image $f^{-1}(H)$ in $\mathbb{Z}$, we know it must be of the form $k\mathbb{Z}$ for some positive integer $k$ (since all non-trivial subgroups of $\mathbb{Z}$ can be written in this form). Now $H = f(k\mathbb{Z})$ has order $d$, and $\mathbb{Z}/k\mathbb{Z} \cong G/H$, where the group on the right-hand side has order $n/d = m$. From this isomorphism we can therefore conclude that $k = m$. Here Lang ends by saying ".. and H is uniquely determined".</p>
<p>But why is this? Does he mean uniquely determined up to isomorphism? Because what I think I have shown is that any subgroup of order $d$ must be isomorphic to $m\mathbb{Z}$ - yet this gives me uniqueness only up to isomorphism... what am I missing?</p>
<p>Thanks for your help! </p>
| Manos | 11,921 | <p>Let $H, H'$ be two subgroups of $G$ of order $d$. Let $m \mathbb{Z} = f^{-1}(H)$ and $m' \mathbb{Z} = f^{-1}(H')$. Then $H = f(m \mathbb{Z})$ and $H' = f(m' \mathbb{Z})$. But $G/H, G/H'$ have the same order. Also by the canonical isomorphism given at the bottom of p. 17, $G/H$ is isomorphic to $\mathbb{Z} / m \mathbb{Z}$ and similarly $G/H'$ is isomorphic to $\mathbb{Z} / m' \mathbb{Z}$. Thus $\mathbb{Z} / m \mathbb{Z}$ has the same order with $\mathbb{Z} / m' \mathbb{Z}$, thus $m=m'$. Hence $H=f(m \mathbb{Z}) = H'$.</p>
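The statement is also easy to verify computationally for a concrete $n$. The following sketch (my own, not from the book) enumerates all subgroups of $\mathbb{Z}/n\mathbb{Z}$, using the fact that every subgroup of a cyclic group is cyclic, and confirms there is exactly one subgroup of each order $d \mid n$:

```python
def subgroups(n):
    # Every subgroup of Z/nZ is cyclic, so the cyclic subgroups <g>
    # for g = 0, ..., n-1 exhaust all subgroups.
    return {frozenset(k * g % n for k in range(n)) for g in range(n)}

n = 12
subs = subgroups(n)
divisors = [d for d in range(1, n + 1) if n % d == 0]

# exactly one subgroup for each divisor d of n, and no other orders occur
assert sorted(len(H) for H in subs) == divisors
```

Each $\langle g \rangle$ here equals the set of multiples of $\gcd(g, n)$, which is why the distinct subgroups correspond exactly to the divisors of $n$.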
|
452,653 | <p>If $f:X\rightarrow Y$ is initial in category <strong>Top</strong> then
it is easy to prove that</p>
<blockquote>
<p>(!) the topology on $X$ is the set of preimages of open sets in $Y$. </p>
</blockquote>
<p>Just construct a space $Z$ having the same underlying set as $X$ and let the set of these preimages serve as its topology. Then for $g:Z\rightarrow X$ with $x\mapsto x$ it is clear that $fg$ is continuous, so we may conclude that $g$ is continuous. Then we are done.
But now my question: </p>
<blockquote>
<p>what if we do not work in $\textbf{Top}$ but in category $\textbf{Haus}$?</p>
</blockquote>
<p>The constructed space $Z$ does not have to be Hausdorff (or am I overlooking something here?), and only if it were could the fact that $f$ is initial in $\textbf{Haus}$ be used to justify the conclusion that $g$ is an arrow in $\textbf{Haus}$.</p>
<blockquote>
<p>Is there a way out? Or - even stronger - is statement (!) not true in $\textbf{Haus}$?</p>
</blockquote>
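For statement (!) itself, the preimage construction is easy to experiment with on a finite example. The following sketch (the toy spaces and names are mine) builds the set of preimages of open sets and checks that it really is a topology on $X$ for which $f$ is continuous:

```python
def preimage(f, X, U):
    # f^{-1}(U) as a subset of X, with f given as a dict
    return frozenset(x for x in X if f[x] in U)

def is_topology(X, T):
    # For a finite collection, closure under pairwise unions and
    # intersections (plus containing ∅ and X) suffices.
    if frozenset() not in T or frozenset(X) not in T:
        return False
    return all(A & B in T and A | B in T for A in T for B in T)

X = {'a', 'b', 'c'}
Y_top = {frozenset(), frozenset({0}), frozenset({0, 1}), frozenset({0, 1, 2})}
f = {'a': 0, 'b': 1, 'c': 1}

# the initial topology on X: exactly the preimages of the open sets of Y
T = {preimage(f, X, U) for U in Y_top}
assert is_topology(X, T)
# f is continuous for T by construction
assert all(preimage(f, X, U) in T for U in Y_top)
```

Note this only illustrates the construction in $\textbf{Top}$; as the question observes, nothing forces the resulting space to be Hausdorff.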
| jimjim | 3,936 | <p>Edited: thanks to Did</p>
<p>There is no need for any further test: the terms of the series decrease monotonically to $0$ in absolute value with alternating signs, and by the alternating series test that is sufficient for the series to converge. (There was nothing regarding absolute convergence in the question, so why bother with it?)</p>
<p>Moreover, the sum $L$ of the series satisfies:</p>
<p>$$\frac{(-1)^2}{2\cdot \ln^{7/5}(2)}+\frac{(-1)^3}{3\cdot \ln^{7/5}(3)}<L<\frac{(-1)^2}{2\cdot \ln^{7/5}(2)} $$</p>
|