Columns: qid (int64), question (string), author (string), author_id (int64), answer (string)
2,152,872
<p>I am working on a problem in Artin's Algebra related to the algebraic geometry discussed in Chapter 11. The problem is number 9.2, F.Y.I.</p> <p>Here goes the problem:</p> <blockquote> <p>Let $f_1, \dots, f_r$ be complex polynomials in the variables $x_1, \dots, x_n$, let $V$ be the variety of their common zeros, and let $I$ be the ideal of the polynomial ring $R = \mathbb{C}\left [ x_1, \dots, x_n \right ]$ they generate. Define a ring homomorphism from the quotient ring $\bar{R} = R/I$ to the ring $\mathcal{R}$ of continuous, complex-valued functions on $V$.</p> </blockquote> <p>I attempted to use the correspondence theorem w.r.t. the variety of a set of polynomials, i.e. the maximal ideals bijectively correspond to the points in $V$, and we may somehow define the continuous functions there. However, I cannot come up with any further idea. Also, the term 'continuous' here seems redundant, since I expect the homomorphism will carry polynomials to polynomials.</p> <p>I appreciate your participation and will be thankful for anything from hints to a full solution.</p>
A.G.
115,996
<p>Note that if you allow loops and parallel edges then the problem is very simple: just draw 5 nodes and $1+1+2+3+3=10$ edge ends, then join the edge ends whichever way you like. This is possible as $10$ is even (if the total were odd it would be impossible). <a href="https://i.stack.imgur.com/rZ9hV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rZ9hV.png" alt="enter image description here"></a></p>
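The parity argument in the answer above is easy to sanity-check in code (a minimal sketch, assuming loops and parallel edges are allowed, so realizability reduces to the handshake condition):

```python
def multigraph_realizable(degrees):
    """A degree sequence is realizable as a multigraph with loops and
    parallel edges iff the total number of edge ends is even: each edge
    contributes exactly two edge ends."""
    return sum(degrees) % 2 == 0

# The sequence from the answer: 1 + 1 + 2 + 3 + 3 = 10 edge ends.
print(multigraph_realizable([1, 1, 2, 3, 3]))  # True: 10 is even
print(multigraph_realizable([1, 1, 2, 3, 4]))  # False: 11 is odd
```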
2,122,389
<p>The problem goes as follows: you have a parking lot with 8 parking spaces and 8 cars, of which 4 are red and 4 are white. What is the probability of:</p> <p>a) the 4 white cars being parked next to each other?</p> <p>b) the 4 white cars and the 4 red cars each being parked next to each other?</p> <p>c) red and white cars being parked alternately (red-white-red...)?</p> <p>Any help will be greatly appreciated. :-)</p>
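The three probabilities in this question can be found by brute force (a hedged sketch: it treats cars of the same colour as indistinguishable and all $\binom{8}{4}=70$ colour patterns as equally likely, which is an assumption about the intended model):

```python
from fractions import Fraction
from itertools import combinations

def parking_probabilities(n=8, k=4):
    """Enumerate all placements of k white cars among n spaces and count
    the three events from the question."""
    placements = list(combinations(range(n), k))
    total = len(placements)  # C(8, 4) = 70

    def is_block(positions):
        positions = sorted(positions)
        return positions[-1] - positions[0] == len(positions) - 1

    a = sum(1 for w in placements if is_block(w))  # whites adjacent
    b = sum(1 for w in placements
            if is_block(w) and is_block(set(range(n)) - set(w)))  # both colours in blocks
    c = sum(1 for w in placements
            if all(w[i + 1] - w[i] == 2 for i in range(k - 1)))  # alternating colours
    return (Fraction(a, total), Fraction(b, total), Fraction(c, total))

print(parking_probabilities())  # (Fraction(1, 14), Fraction(1, 35), Fraction(1, 35))
```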
Simply Beautiful Art
272,831
<p>I'm not so sure of indefinite integration, but if we add bounds, it comes out beautifully, as shown <a href="https://en.wikipedia.org/wiki/Leibniz_integral_rule#Examples_for_evaluating_a_definite_integral" rel="noreferrer">here</a>.</p> <p>Take the derivative with respect to $m$ to get</p> <p>$$\begin{align}-I'(-m)&amp;=\int_0^\pi\frac{2m-2\cos(x)}{1-2m\cos(x)+m^2}\ dx\\&amp;=\frac1m\int_0^\pi1-\frac{1-m^2}{1-2m\cos(x)+m^2}\ dx\\&amp;=\frac\pi m-\frac2m\tan^{-1}\left(\frac{1+m}{1-m}\tan(x/2)\right)\bigg|_{x=0}^{x=\pi}\\&amp;=\begin{cases}0&amp;|m|&lt;1\\\frac{2\pi}m&amp;|m|&gt;1\end{cases}\end{align}$$</p> <p>It thus becomes clear that</p> <p>$$I(-m)=\begin{cases}C_1&amp;|m|&lt;1\\2\pi\ln|m|+C_2&amp;|m|&gt;1\end{cases}$$</p> <p>By substituting in a few values, one may deduce that $C_1=C_2=0$, thus</p> <p>$$\int_0^\pi\ln(1+2m\cos(x)+m^2)\ dx=\begin{cases}0&amp;|m|&lt;1\\2\pi\ln|m|&amp;|m|&gt;1\end{cases}$$</p>
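The closed form at the end of the answer can be spot-checked numerically (a sketch using a hand-rolled composite Simpson rule; the sample values $m=0.5$ and $m=2$ and the step count are arbitrary choices):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

def I(m):
    """Numerical value of the integral of ln(1 + 2m cos x + m^2) over [0, pi]."""
    return simpson(lambda x: math.log(1 + 2 * m * math.cos(x) + m * m), 0.0, math.pi)

# |m| < 1: the integral vanishes; |m| > 1: it equals 2*pi*ln|m|.
print(abs(I(0.5)))                               # ~0
print(abs(I(2.0) - 2 * math.pi * math.log(2)))   # ~0
```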
2,122,389
Jack D'Aurizio
44,121
<p>This is a well-known problem about the <a href="https://en.wikipedia.org/wiki/Poisson_kernel" rel="nofollow noreferrer">Poisson kernel</a>.<br> Since $\log\|z\|=\text{Re}\log(z)$ and for every $n\in\mathbb{Z}$ we have $\int_{0}^{2\pi}e^{ni\theta}\,d\theta=2\pi\,\delta(n)$,</p> <p>$$\forall r\in\mathbb{R},\quad \int_{0}^{2\pi}\log\|1-r e^{i\theta}\|\,d\theta = 2\pi \log\max(1,|r|),\tag{1}$$ $$\forall r\in\mathbb{R},\quad \int_{0}^{2\pi}\log(1+r^2-2r\cos\theta)\,d\theta = 4\pi\log\max(1,|r|).\tag{2}$$</p> <p>The indefinite integral is related to the dilogarithm function, but I doubt you <em>really</em> need it.</p>
23,566
<p>I love math, and I used to be very good at it. The correct answers came fast and intuitively. I never studied, and redid the demonstrations live for the tests (sometimes inventing new ones). I was the one who answered the tricky questions in class (8 hours of math/week in high school)... You get the idea.</p> <p>As such I used to have a lot of confidence in my math abilities, and didn't think twice about saying the first idea that came to mind when answering a question.</p> <p>That was more than 10 years ago, and I (almost) haven't done any math since then. I've graduated in a scientific field that requires little of it (I prefer not to give details) and worked for some time.</p> <p>Now I'm back at school (master of statistics) and I need to do math once again. I pile mistakes upon blunders with the same confidence I used to have when I was good, which is extremely embarrassing when it happens in class.</p> <p>I feel like a tone-deaf musician and an ataxic painter at the same time.</p> <p>One factor that probably plays a role is that I've learnt math in my mother tongue, and I'm now using it in English, but I wouldn't expect that to make such a difference.</p> <p>I know that it will require practice and hard work, but I need direction.</p> <p>Any help is welcome.</p> <p>Kind regards,</p> <p>-- Mathemastov</p>
phren0logy
9,279
<p>Linear Algebra: Try MIT's OpenCourseware with the inimitable Gilbert Strang.</p> <p>Calculus: Try Calculus Made Easy, available as a free and nicely typeset PDF.</p> <p>I also second Khan Academy.</p>
23,566
Community
-1
<p>Perhaps of interest/relevance:</p> <p><a href="http://johnlawrenceaspden.blogspot.com/2011/01/effortless-superiority.html" rel="nofollow">http://johnlawrenceaspden.blogspot.com/2011/01/effortless-superiority.html</a></p>
1,022,950
<p>I was reading about linear dependence between vectors, where I came across the explanation below:</p> <hr> <p>In a rectangular xy-coordinate system every vector in the plane can be expressed in exactly one way as a linear combination of the standard unit vectors. For example, the only way to express the vector (3, 2) as a linear combination of i = (1, 0) and j = (0, 1) is</p> <blockquote> <p>(3, 2) = 3(1, 0) + 2(0, 1) = 3i + 2j ...formula(1)</p> </blockquote> <p>Suppose, however, that we were to introduce a third coordinate axis that makes an angle of 45◦ with the x-axis. The unit vector along the w-axis is</p> <blockquote> <p>w = $(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})$</p> </blockquote> <p>Whereas Formula (1) shows the only way to express the vector (3, 2) as a linear combination of i and j, there are infinitely many ways to express this vector as a linear combination of i, j, and w. Three possibilities are</p> <p>(3, 2) = 3(1, 0) + 2(0, 1) + 0$(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})$ = 3i + 2j + 0w</p> <p>(3, 2) = 2(1, 0) + (0, 1) + $\sqrt{2}$$(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})$ = 2i + j + $\sqrt{2}$w</p> <p>(3, 2) = 4(1, 0) + 3(0, 1) - $\sqrt{2}$$(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})$ = 4i + 3j - $\sqrt{2}$w</p> <hr> <p>What I did not understand is:</p> <ul> <li>How these last three expressions for (3, 2) are formed; I just did not get any of it. Maybe I am missing some elementary maths.</li> <li>How introducing another axis allows us to express any vector in <strong>infinitely many ways</strong>, and how the last three expressions prove that.</li> </ul>
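The three decompositions quoted in the question can be checked directly by evaluating each linear combination componentwise (a quick numerical sketch; `w` is the unit vector along the 45° axis from the excerpt):

```python
import math

i = (1.0, 0.0)
j = (0.0, 1.0)
w = (1 / math.sqrt(2), 1 / math.sqrt(2))  # unit vector along the 45-degree axis

def combo(ci, cj, cw):
    """Evaluate ci*i + cj*j + cw*w componentwise."""
    return tuple(ci * a + cj * b + cw * c for a, b, c in zip(i, j, w))

# Three of the infinitely many ways to write (3, 2):
for coeffs in [(3, 2, 0), (2, 1, math.sqrt(2)), (4, 3, -math.sqrt(2))]:
    x, y = combo(*coeffs)
    print(round(x, 12), round(y, 12))  # 3.0 2.0 each time
```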
Timbuc
118,527
<p>$$\left|\;\sqrt\frac{n+1}n-1\;\right|=\frac{\sqrt{n+1}-\sqrt n}{\sqrt n}=\frac1{\sqrt n\left(\sqrt{n+1}+\sqrt n\right)}$$</p> <p>Take now <strong>any</strong> $\;\epsilon&gt;0\;$ . We want to check for which $\;n$' s we have</p> <p>$$\frac1{\sqrt n\left(\sqrt{n+1}+\sqrt n\right)}&lt;\epsilon$$</p> <p>But now we can <em>estimate</em> :</p> <p>$$\frac1{\sqrt n\left(\sqrt{n+1}+\sqrt n\right)}\le\frac1{\sqrt n\sqrt n}=\frac1n$$</p> <p>So it is enough to know when</p> <p>$$\frac1n&lt;\epsilon\iff n&gt;\frac1\epsilon$$</p> <p>and we're done.</p>
184,682
<p>I have difficulties with a rather trivial topological question: </p> <p>A is a discrete subset of $\mathbb{C}$ (complex numbers) and B a compact subset of $\mathbb{C}$. Why is $A \cap B$ finite? I can see that it's true if $A \cap B$ is compact, i.e. closed and bounded, but is it obvious that $A \cap B$ is closed?</p>
André Nicolas
6,312
<p>The result is not correct, for the set $A$ of all $\frac{1}{n}$, where $n$ ranges over the positive integers, is discrete. Let $B$ be the unit disk.</p> <p>As you observed, everything would be fine if $A\cap B$ were closed. But in this case it isn't.</p>
2,664,286
<p>I am confused about the usage of the two phrases <em>for all</em> and <em>any</em>. Let us consider the example of the definition of normal subgroups: a subgroup $H$ is said to be normal if $\forall g \in G, g^{-1}Hg = H$. But if I rephrase the definition, $H$ is normal in $G$ if for any $g \in G, g^{-1}Hg = H$.</p> <p>Are these two definitions correct?</p>
Michael Rozenberg
190,319
<p>Let $Q'$ be the point symmetric to $Q$ with respect to the line $l:y=-2x+7.$</p> <p>Also, let $PQ'\cap l=\{R\}$.</p> <p>Then $R$ is the required point.</p>
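The construction can be carried out in coordinates (a sketch: the reflection formula across $ax+by+c=0$ is standard, while the sample point $Q=(3,4)$ is a hypothetical choice, since the question's actual points are not given here):

```python
def reflect(point, a, b, c):
    """Reflect a point across the line a*x + b*y + c = 0."""
    x, y = point
    d = (a * x + b * y + c) / (a * a + b * b)
    return (x - 2 * a * d, y - 2 * b * d)

# Line l: y = -2x + 7, i.e. 2x + y - 7 = 0.
a, b, c = 2.0, 1.0, -7.0

Q = (3.0, 4.0)            # hypothetical sample point
Qp = reflect(Q, a, b, c)  # Q', the mirror image of Q across l

# Reflecting twice returns the original point, and a point on l is fixed.
print(reflect(Qp, a, b, c))          # approximately (3.0, 4.0)
print(reflect((2.0, 3.0), a, b, c))  # (2, 3) lies on l, so it is unchanged
```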
674,310
<p>I am having trouble with a proof for linear algebra. Could somebody explain to me how to prove that if $A$ and $B$ are both $n\times n$ nonsingular matrices, then their product $AB$ is also nonsingular?</p> <p>A place to start would be helpful. Thank you for your time.</p>
zipirovich
127,842
<p>It depends on how far into linear algebra you are and what you can use. One possible and very short solution: a square matrix is nonsingular iff its determinant is nonzero. Now use the product property $\det(AB)=\det(A)\det(B)$.</p>
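The determinant argument is easy to illustrate for $2\times2$ matrices (a minimal pure-Python sketch with hypothetical sample matrices):

```python
def det2(M):
    """Determinant of a 2x2 matrix given as ((a, b), (c, d))."""
    (a, b), (c, d) = M
    return a * d - b * c

def mul2(A, B):
    """Product of two 2x2 matrices."""
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

A = ((1, 2), (3, 4))   # det = -2, nonsingular
B = ((2, 0), (1, 1))   # det =  2, nonsingular

# det(AB) = det(A) * det(B), hence nonzero, hence AB is nonsingular.
print(det2(mul2(A, B)), det2(A) * det2(B))  # -4 -4
```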
157,992
<p>Please help me.</p> <p>Prove that $(1, i)$ and $(1,-i)$ are characteristic vectors of $\begin{bmatrix} a &amp; b \\ -b &amp; a \end{bmatrix}$.</p> <p>I've found the characteristic polynomial $\lambda^2-2a\lambda+a^2+b^2$, and its roots are</p> <p>$\lambda_{1} = a+ib \\ \lambda_{2} = a-ib$</p> <p>But I don't know how to solve the system and find the characteristic vectors.</p> <p>Thanks so much.</p>
DonAntonio
31,254
<p>So you have to check whether $$\begin{pmatrix}a&amp;b\\\!\!\!-b&amp;a\end{pmatrix}\binom{1}{i}=\lambda\binom{1}{i}\,\,,\,\lambda=\lambda_1\,\,or\,\,\lambda_2$$and the same for the other given vector. Well, now just verify that you indeed get the above matrix equation right when you use the values you got for $\,\lambda\,$...and find out which one corresponds to which vector.</p>
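The verification described above can be done with complex arithmetic (a sketch under the hypothetical sample values $a=2$, $b=3$; any real pair works the same way):

```python
a, b = 2.0, 3.0  # hypothetical entries of the matrix [[a, b], [-b, a]]

def matvec(v):
    """Apply [[a, b], [-b, a]] to the column vector v = (v1, v2)."""
    v1, v2 = v
    return (a * v1 + b * v2, -b * v1 + a * v2)

# (1, i) pairs with eigenvalue a + ib, and (1, -i) with a - ib.
for v, lam in [((1, 1j), a + 1j * b), ((1, -1j), a - 1j * b)]:
    Mv = matvec(v)
    lv = (lam * v[0], lam * v[1])
    print(Mv == lv)  # True: v is an eigenvector with eigenvalue lam
```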
99,572
<p>One of the most useful tools in the study of convex polytopes is to move from polytopes (through their fans) to toric varieties and see how properties of the associated toric variety reflect back on the combinatorics of the polytopes. This construction requires that the polytope be rational, which is a real restriction when the polytope is general (neither simple nor simplicial). Often we would like to consider general polytopes and even polyhedral spheres (and more general objects), where the toric variety construction does not work.</p> <p>I am aware of very general constructions by M. Davis and T. Januszkiewicz (one relevant paper might be <a href="http://www.math.osu.edu/%7Edavis.12/old_papers/DJ_toric.dmj.pdf" rel="nofollow noreferrer">Convex polytopes, Coxeter orbifolds and torus actions, Duke Math. J. 62 (1991)</a>, and there are several subsequent papers). Perhaps these constructions allow you to start with arbitrary polyhedral spheres, and perhaps even in greater generality.</p> <p>I am asking for an explanation of the scope of these constructions and, in as simple terms as possible, how the construction goes.</p>
Neil Strickland
10,366
<p>The DJ construction works with a simplicial complex $K$ and a subtorus $W\leq\prod_{v\in V}S^1$ (where $V$ is the set of vertices of $K$). People tend to be interested in the case where $|K|$ is homeomorphic to a sphere, but that isn't really central to the theory. However, it is important that we have a simplicial complex rather than something with more general polyhedral structure. It is also important that we have a subtorus, which gives a sublattice $\pi_1(W)\leq\prod_{v\in V}\mathbb{Z}$, which is integral/rational information. I don't think that the DJ approach will help you get away from the rational case.</p> <p>I like to formulate the construction this way. Suppose we have a set $X$ and a subset $Y$. Given a point $x\in\prod_{v\in V}X$, we put $\text{supp}(x)=\{v:x_v\not\in Y\}$ and $K.(X,Y)=\{x:\text{supp}(x) \text{ is a simplex}\}$. The space $K.(D^2,S^1)$ is a kind of moment-angle complex, and $K.(D^2,S^1)/W$ is the space considered by Davis and Januszkiewicz; it has an action of the torus $T=\left(\prod_{v\in V}S^1\right)/W$. Generally we assume that $W$ acts freely on $K.(D^2,S^1)$. There is a fairly obvious complexification map $K.(D^2,S^1)/W\to K.(\mathbb{C},\mathbb{C}^\times)/W_{\mathbb{C}}$. Under certain conditions relating the position of $W$ to the simplices of $K$, one can check that $K$ gives rise to a fan, that the complexification map is a homeomorphism, and that both $K.(D^2,S^1)/W$ and $K.(\mathbb{C},\mathbb{C}^\times)/W_{\mathbb{C}}$ can be identified with the toric variety associated to that fan.</p>
2,601,380
<p>I'm trying to figure out the impedance of a capacitor. My textbook tells me the answer is $\frac{-i}{\omega C}$ and plugging that into the equation does work but I wanted to come up with that answer myself. So I wrote out the equation with what I know:</p> <p>$$-V_0\omega C\sin\omega t = Re\left( \frac{V_0(\cos\omega t + i\sin\omega t)}{x} \right)$$</p> <p>This is where I get stuck. I don't know how to isolate $x$ given that it is inside the $Re()$ function. Trying to get somewhere, I tried this:</p> <p>$$x = \frac{V_0(\cos\omega t + i\sin\omega t)}{-V_0\omega C\sin\omega t} = \frac{\cos\omega t}{-\omega C\sin\omega t} - \frac{i}{\omega C}$$</p> <p>Seeing $-\frac{i}{\omega C}$ makes me feel like I'm on the right track. Now I just need to figure out how to get rid of the first part of that answer. And I'm guessing that if I knew how to isolate $x$ from the first equation, that would do the trick. So how can I isolate $x$ when it is included in the $Re()$ function?</p>
Jan Eerland
226,665
<p>For a capacitor, there is the relation:</p> <p>$$\text{I}_\text{C}\left(t\right)=\text{C}\cdot\frac{\text{d}\text{V}_\text{C}\left(t\right)}{\text{d}t}\tag1$$</p> <p>Considering the voltage signal to be:</p> <p>$$\text{V}_\text{C}\left(t\right)=\text{V}_\text{p}\sin\left(\omega t\right)\tag2$$</p> <p>It follows that:</p> <p>$$\frac{\text{d}\text{V}_\text{C}\left(t\right)}{\text{d}t}=\omega\text{V}_\text{p}\cos\left(\omega t\right)\tag3$$</p> <p>And thus:</p> <p>$$\frac{\text{V}_\text{C}\left(t\right)}{\text{I}_\text{C}\left(t\right)}=\frac{\text{V}_\text{p}\sin\left(\omega t\right)}{\omega\text{C}\,\text{V}_\text{p}\cos\left(\omega t\right)}=\frac{\sin\left(\omega t\right)}{\omega\text{C}\sin\left(\omega t+\frac{\pi}{2}\right)}\tag4$$</p> <p>This says that the ratio of AC voltage amplitude to AC current amplitude across a capacitor is $\frac{1}{\omega\text{C}}$, and that the AC voltage lags the AC current across a capacitor by $90$ degrees (or the AC current leads the AC voltage across a capacitor by $90$ degrees).</p> <p>This result is commonly expressed in polar form as:</p> <p>$$\text{Z}_\text{c}=\frac{1}{\omega\text{C}}\cdot e^{-\frac{\pi}{2}\cdot\text{j}}\tag5$$</p> <p>Or, by applying Euler's formula, as:</p> <p>$$\text{Z}_\text{C}=-\text{j}\cdot\frac{1}{\omega\text{C}}=\frac{1}{\text{j}\omega\text{C}}\tag6$$</p> <p>Now for $\text{X}_\text{C}$:</p> <p>$$\text{X}_\text{C}=\left|-\text{j}\cdot\frac{1}{\omega\text{C}}\right|=\frac{1}{\omega\text{C}}\tag7$$</p> <p>Where $\omega=2\pi\text{f}$</p>
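The polar form (5) and the rectangular form (6) of the impedance agree, which is easy to confirm with Python's complex numbers (a sketch; the values $\omega=100\,\text{rad/s}$ and $C=1\,\mu\text{F}$ are hypothetical):

```python
import cmath
import math

omega, C = 100.0, 1e-6  # hypothetical angular frequency and capacitance

Z_polar = (1 / (omega * C)) * cmath.exp(-1j * math.pi / 2)  # (1/wC) * e^{-j pi/2}
Z_rect = 1 / (1j * omega * C)                               # 1/(jwC) = -j/(wC)

print(abs(Z_polar - Z_rect) < 1e-6)        # True: the two forms coincide
print(abs(Z_rect), cmath.phase(Z_rect))    # magnitude 1/(wC), phase -pi/2
```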
463,650
<p>Consider the sequence $\left \{ x_{n} \right \}$ that satisfies the condition: $$\left | x_{n+1}-x_{n} \right |&lt; \frac{1}{2^{n}} \ \ \ for\ all\ n=1,2,3,...$$ Part (1): Prove that the sequence $\left \{ x_{n} \right \}$ is convergent.</p> <p>Part (2): Does the result in part (1) hold if we only assume that $\left | x_{n+1}-x_{n} \right |&lt; \frac{1}{n} \ \ \ for\ all\ n=1,2,3,...$?</p> <p>For part (1), I proved that the sequence is Cauchy and hence it is convergent. For part (2), I feel like the sequence is not necessarily convergent. I am trying to come up with a sequence that is divergent, but satisfies the condition given in part (2). Any ideas?</p>
N. S.
9,176
<p>For part (2) you can also use $x_n = \ln(n)$.</p> <p>Note that by the MVT we have for some $c_n \in (n,n+1)$:</p> <p>$$ \frac{\ln(n+1)-\ln(n)}{n+1-n}= \frac{1}{c_n} &lt; \frac{1}{n} \,.$$</p>
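The counterexample is quick to confirm numerically (a minimal sketch): $\ln(n+1)-\ln(n)<\frac1n$ holds for every $n$, yet $x_n=\ln(n)$ diverges.

```python
import math

# ln(n+1) - ln(n) < 1/n for every n >= 1, even though ln(n) is unbounded.
ok = all(math.log(n + 1) - math.log(n) < 1 / n for n in range(1, 10_000))
print(ok)  # True
```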
160,491
<pre><code>Histogram[RandomVariate[NormalDistribution[0, 1], 200]] </code></pre> <p>How can I calculate the area under the histogram?</p>
Jack LaVigne
10,917
<p>Here is one sample of the randomly generated data</p> <pre><code>data = RandomVariate[NormalDistribution[0, 1], 200]; </code></pre> <p>and the associated histogram</p> <pre><code>Histogram[data] </code></pre> <p><img src="https://i.stack.imgur.com/q31Yg.png" alt="Mathematica graphics"></p> <p>The coordinates of the plot can be derived with <code>HistogramList</code>; we assign the result to a variable, <code>histList</code>, for later use.</p> <pre><code>histList = HistogramList[data] (* {{-(5/2), -2, -(3/2), -1, -(1/2), 0, 1/2, 1, 3/2, 2}, {6, 7, 25, 42, 43, 28, 33, 11, 5}} *) </code></pre> <p>Note that part one of <code>histList</code> has length 10 and part two has length 9.</p> <p>Part one gives the x-axis coordinates of the bin edges and part two the associated heights.</p> <p>In order to compute the area one can use either <code>Map</code> or <code>Table</code> to get the individual rectangle areas, and then apply <code>Total</code> to the list of areas.</p> <pre><code>Total@ Map[(histList[[1, # + 1]] - histList[[1, #]])*histList[[2, #]] &amp;, Range[Length@histList[[2]]]] (* 100 *) </code></pre>
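The same computation can be sketched in pure Python (a hedged analogue of the Mathematica approach above, assuming fixed-width bins; with count heights the area is simply the sample size times the bin width):

```python
import math
import random

random.seed(0)  # reproducible sample of 200 standard normals
data = [random.gauss(0.0, 1.0) for _ in range(200)]

width = 0.5  # fixed bin width, like the half-unit bins in the answer
lo = math.floor(min(data) / width) * width
nbins = math.ceil((max(data) - lo) / width)

counts = [0] * nbins
for x in data:
    counts[min(int((x - lo) // width), nbins - 1)] += 1

# Area under a count histogram = sum(height * width) = n * width.
area = sum(c * width for c in counts)
print(area)  # 100.0 (= 200 samples * 0.5 bin width)
```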
160,491
Szabolcs
12
<p>Why would you want to calculate that?</p> <p>The third argument of <code>Histogram</code> controls the meaning of the bin height. The default is <code>"Count"</code>. Thus your result for <code>Histogram[data]</code> would simply be <code>Length[data]</code>.</p>
2,633,720
<blockquote> <p>Prove by induction that $$ (k + 2)^{k + 1} \leq (k+1)^{k +2}$$ for $ k &gt; 3 .$</p> </blockquote> <p>I have been trying to solve this, but I am not getting sufficient insight.</p> <p>For example, $(k + 2)^{k + 1} = (k +2)^k (k +2) , (k+1)^{k +2}= (k+1)^k(k +1)^2.$</p> <p>$(k +2) &lt; (k +1)^2 $ but $(k+1)^k &lt; (k +2)^k$, so what I want would clearly not be immediate from using something like: if $ 0 &lt; a &lt; b, 0&lt;c&lt;d $ then $0 &lt; ac &lt; bd $. The formula is valid for $n = 4$, so if it is valid for $n = k$ I would have to use</p> <p>$ (k + 2)^{k + 1} \leq (k+1)^{k +2} $ somewhere in order to get that $ (k + 3)^{k + 2} \leq (k+2)^{k +3} $ is also valid. This seems tricky.</p> <p>I also tried expanding $(k +2)^k $ using the binomial formula and multiplying this by $(k + 2)$, and I expanded $(k+1)^k$ and multiplied it by $(k + 1)^2 $ term by term. I tried to compare these sums, but it also gets tricky. I would appreciate a hint for this problem, thanks.</p>
Akababa
87,988
<p>Try taking the log of both sides and prove that $\frac{\log x}x$ is decreasing for $x\ge e$.</p> <p>Or try to show $\left(\frac{k+1}k\right)^k\leq k$ directly, using $\binom ki\le \frac{k^i}{i!}$:</p> <p>$$(1+1/k)^k= \sum_{i=0}^k \binom ki k^{-i}\leq\sum_{i=0}^k \frac1{i!}&lt;3\leq k$$</p>
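Both the target inequality and the auxiliary bound can be checked over a range of $k$ using exact integer arithmetic (a small sketch; the upper limit 200 is an arbitrary choice):

```python
# Check (k+2)^(k+1) <= (k+1)^(k+2) for k > 3, plus the auxiliary bound
# (1 + 1/k)^k <= k, rewritten as (k+1)^k <= k^(k+1) to stay in integers.
main_ok = all((k + 2) ** (k + 1) <= (k + 1) ** (k + 2) for k in range(4, 200))
aux_ok = all((k + 1) ** k <= k ** (k + 1) for k in range(4, 200))
print(main_ok, aux_ok)  # True True
```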
1,714,654
<p>Show that a box (rectangular parallelepiped) of maximum volume $V$ with prescribed surface area is a cube. Let $$V=xyz$$ $$S=2xy + 2yz + 2zx$$ where $S$ is constant.</p> <p>Using the Lagrange method, I am stuck at $V_{xx}=0=V_{yy}=V_{zz}$ at the (only) critical point. How should I approach this?</p>
Mark Viola
218,419
<p>Recall from elementary geometry that the sine function satisfies the inequalities </p> <p>$$|\theta\cos(\theta)|\le |\sin(\theta)|\le |\theta|$$</p> <p>for $|\theta|\le \pi/2$. </p> <p>Letting $\theta =x^2-y^2$ we can write</p> <p>$$|\cos(x^2-y^2)|\le\left|\frac{\sin(x^2-y^2)}{x^2-y^2}\right|\le|1|$$</p> <p>whereupon applying the squeeze theorem and exploiting the evenness of $\frac{\sin(z)}{z}$ yields the limit</p> <p>$$\lim_{(x,y)\to(0,0)}\frac{\sin(x^2-y^2)}{x^2-y^2}=1$$</p>
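The squeeze-theorem limit can be observed numerically along a shrinking path (a sketch; the approach path $(t, t/2)$ is an arbitrary choice keeping $x^2\ne y^2$):

```python
import math

def f(x, y):
    """sin(x^2 - y^2) / (x^2 - y^2), defined wherever x^2 != y^2."""
    z = x * x - y * y
    return math.sin(z) / z

# Approach (0, 0) along the hypothetical path (t, t/2); f should tend to 1.
vals = [f(t, t / 2) for t in (0.1, 0.01, 0.001)]
print(vals)  # each successive value is closer to 1
```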
106,464
<p>I'd like to prove the following:</p> <blockquote> <p>If $\mathfrak{a} \subseteq k[x_0, \ldots, x_n]$ is a homogeneous ideal, and if $f \in k[x_0,\ldots,x_n]$ is a homogeneous polynomial with $\mathrm{deg} \ f &gt; 0$, such that $f(P) = 0 $ for all $P \in Z(\mathfrak{a})$ in $\mathbb P^n$, then $f^q \in \mathfrak{a}$ for some $ q &gt; 0$.</p> </blockquote> <p>I've been given the hint: interpret the problem in terms of the affine ($n+1$)-space whose affine coordinate ring is $k[x_0,\ldots,x_n]$ and use the usual Nullstellensatz. </p> <hr> <p>I'm not really sure what the hint means. We have the isomorphism $k[x_0,...,x_n] \cong k[x_0,...,x_n] / I(\mathbb A_k^{n+1})$ (since $I(\mathbb A^{n+1}) = I(Z(0)) = 0$). But I don't see how this is helpful at all, nor am I sure this is what the hint means.</p> <p>Any help would be greatly appreciated. Thanks</p>
azarel
20,998
<p>$\bf Hint:$ Let $\overline Z(\mathfrak a)=\{p\in \mathbb A^{n+1}:\forall g\in\mathfrak a \ (g(p)=0)\}$ (the affine variety determined by $\mathfrak a$). Note that $f$ vanishes on $\overline Z(\mathfrak a)$ and apply the standard Nullstellensatz.</p>
322,302
<p>Conjectures play an important role in the development of mathematics. Mathoverflow gives an interaction platform for mathematicians from various fields, while in general it is not always easy to keep in touch with what happens in the other fields.</p> <p><strong>Question</strong> What are the conjectures in your field, proved or disproved (counterexample found) in recent years, which are noteworthy but not so famous outside your field?</p> <p>In answering the question you are welcome to give some comment for outsiders of your field which would help them appreciate the result.</p> <p>In asking the question I have in mind by &quot;recent years&quot; something like a dozen years before now, and by a &quot;conjecture&quot; something which was known as an open problem for at least a dozen years before it was proved. I would say a result for which a Fields medal was awarded, like the proof of the <a href="https://en.wikipedia.org/wiki/Fundamental_lemma_(Langlands_program)" rel="noreferrer">fundamental lemma</a>, would not fit &quot;not so famous&quot;; on the other hand, these might not be considered strict criteria, so let us &quot;assume good will&quot; on the part of the answerer.</p>
Mare
61,949
<p>The strong no loop conjecture for quiver algebras <span class="math-container">$A$</span> states that a simple module <span class="math-container">$S$</span> with <span class="math-container">$Ext_A^1(S,S) \neq 0$</span> has infinite projective dimension. It was proven here <a href="https://www.sciencedirect.com/science/article/pii/S0001870811002714" rel="noreferrer">https://www.sciencedirect.com/science/article/pii/S0001870811002714</a> . The more general conjecture for Artin algebras is still open.</p> <p>(The result can be used to check for finite global dimension of endomorphism algebras, see for example <a href="https://mathoverflow.net/questions/320858/does-this-algebra-have-finite-global-dimension-human-vs-computer">Does this algebra have finite global dimension ? (Human vs computer)</a>.)</p>
322,302
efs
109,085
<p>Recently, Dasgupta, Kakde and Ventullo proved Gross's conjecture on the value at zero of the <span class="math-container">$p$</span>-adic <span class="math-container">$L$</span>-function constructed by Cassou-Noguès, and Deligne and Ribet. The article, <em>On the Gross-Stark Conjecture</em>, was published in Annals of Mathematics in 2018, and can be found <a href="https://services.math.duke.edu/~dasgupta/#research" rel="noreferrer">here</a>. Here is the abstract:</p> <p><em>In 1980, Gross conjectured a formula for the expected leading term at <span class="math-container">$s = 0$</span> of the Deligne-Ribet <span class="math-container">$p$</span>-adic <span class="math-container">$L$</span>-function associated to a totally even character <span class="math-container">$\psi$</span> of a totally real field <span class="math-container">$F$</span>. The conjecture states that after scaling by <span class="math-container">$L(\psi\omega^{-1},0)$</span>, this value is equal to a <span class="math-container">$p$</span>-adic regulator of units in the abelian extension of <span class="math-container">$F$</span> cut out by <span class="math-container">$\psi\omega^{-1}$</span>. In this paper, we prove Gross's conjecture.</em> </p>
322,302
user27976
27,976
<p>Kiran Kedlaya finished the proof of Deligne's conjecture (1.2.10) made in La conjecture de Weil, II, which is definitely "noteworthy", and perhaps "not so famous" compared to the original Weil conjectures.</p> <p>Colloquium talk: <em><a href="https://www.math.rutgers.edu/news-events/list-all-events/icalrepeat.detail/2019/02/06/10062/-/companions-in-etale-cohomology" rel="noreferrer">Companions in etale cohomology</a></em>.</p> <p><a href="http://kskedlaya.org/galc.shtml" rel="noreferrer">Annotated reading list for the working seminar</a> at the IAS on the proof.</p>
322,302
Denis Nardin
43,054
<p>Tyler Lawson's <a href="https://arxiv.org/abs/1703.00935" rel="noreferrer">recent proof</a> that the Brown-Peterson spectrum <span class="math-container">$BP$</span> at the prime 2 has no <span class="math-container">$E_∞$</span>-ring structure. This was later generalized at odd primes, using similar methods, by <a href="https://arxiv.org/abs/1710.09822" rel="noreferrer">Andrew Senger</a>.</p> <p>The proof proceeds via a detailed study of secondary power operations for ring spectra, which is valuable in itself.</p> <p>This result suggests that <span class="math-container">$BP$</span> should have no natural "geometric model", since such models often endow the corresponding cohomology theory with an <span class="math-container">$E_∞$</span>-ring structure.</p>
322,302
Stopple
6,756
<p>This MO question <a href="https://mathoverflow.net/questions/115447/the-riemann-zeros-and-the-heat-equation">The Riemann zeros and the heat equation</a> describes the Newman conjecture. Very briefly, a deformation parameter is introduced into an integral representation of the Riemann zeta function, creating a function of two variables which satisfies the backward heat equation. Newman made the conjecture that any infinitesimal deformation with this extra parameter destroys the Riemann hypothesis:</p> <blockquote> <p>"This new conjecture is a quantitative version of the dictum that the Riemann hypothesis, if true, is only barely so."</p> </blockquote> <p>In 2018, Tao and Rodgers were able to use the connection to PDE and posted a <a href="https://arxiv.org/abs/1801.05914" rel="noreferrer">proof of the Newman conjecture</a> on the arXiv. Last week, Alexander Dobner, another student of Tao, posted a <a href="https://arxiv.org/abs/2005.05142" rel="noreferrer">new, purely analytic</a> (and shorter) proof on the arXiv, writing "One final note we make about our proof is that it reveals that Newman's conjecture holds for completely analytic rather than arithmetic reasons.... Thus, our proof of Newman's conjecture is quite different from the proof of Rodgers and Tao which depends fundamentally on knowledge about the gaps between zeta zeros and hence on the arithmetic structure of the zeta function."</p>
322,302
Peter Shor
2,294
<p><a href="https://en.wikipedia.org/wiki/Connes_embedding_problem" rel="noreferrer">Connes' embedding conjecture</a> (from 1976) about the structure of infinite-dimensional von Neumann algebras was shown to be false in the paper</p> <ul> <li>Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, Henry Yuen, <span class="math-container">$\mathsf{MIP}^*=\mathsf{RE}$</span>, arXiv:<a href="https://arxiv.org/abs/2001.04383" rel="noreferrer">2001.04383</a>.</li> </ul> <p>This was done by a quantum computer science argument, which showed that an interactive proof system with multiple provers sharing quantum entanglement (MIP<span class="math-container">$^*$</span>) could give proofs for any recursively enumerable (RE) language. The computational power of MIP<span class="math-container">$^*$</span> was a long-standing question in complexity theory, but I don't believe anybody thought it was equal to RE until the connection with Connes' conjecture was discovered fairly recently.</p> <p>The same paper settles <a href="https://www.tau.ac.il/%7Etsirel/Research/bellopalg/main.html" rel="noreferrer">Tsirelson's problem</a>. Boris Tsirelson stated a theorem without proof in a 1993 survey paper. It was only much later that he was asked about it, and discovered that the simple proof he thought he had didn't work. He posed it as an open problem in 2006, and this paper shows that Tsirelson's statement is false.</p> <p>The connection between these problems was already known, so the linked paper only gives a direct proof of the quantum computer science result.</p>
322,302
David White
11,540
<p>The <a href="https://en.wikipedia.org/wiki/Stabilization_hypothesis" rel="nofollow noreferrer">Baez-Dolan Stabilization Hypothesis</a> was posed in 1995. It involves the relationship between weak <span class="math-container">$n$</span>-categories as <span class="math-container">$n$</span> varies. Specifically, if one has an <span class="math-container">$n+k$</span> category <span class="math-container">$C$</span> and &quot;forgets&quot; to an <span class="math-container">$n$</span>-category <span class="math-container">$D$</span> then <span class="math-container">$D$</span> has extra structure. To make sure we didn't forget anything important, we assume <span class="math-container">$C$</span> has exactly one object, one 1-cell, one 2-cell, ..., one <span class="math-container">$(k-1)$</span>-cell, and we are simply reindexing so that the <span class="math-container">$k$</span>-cells in <span class="math-container">$C$</span> become the objects of <span class="math-container">$D$</span>. For example, if <span class="math-container">$k=1$</span> then the objects in <span class="math-container">$D$</span> are the morphisms of <span class="math-container">$C$</span>, so they have a composition law making them into a monoid. If <span class="math-container">$k = 2$</span> then the objects of <span class="math-container">$D$</span> have both horizontal and vertical composition rules, so by the Eckmann-Hilton argument, the objects of <span class="math-container">$D$</span> have the structure of a commutative monoid. If <span class="math-container">$k \geq 3$</span>, you still get a commutative monoid. The forgetting process stabilized after <span class="math-container">$k \geq n+2$</span>. The stabilization hypothesis posits that this always happens, in any reasonable model of a weak <span class="math-container">$n$</span>-category. 
It was a hypothesis rather than a conjecture because there was no known model at the time of weak <span class="math-container">$n$</span>-categories.</p> <p>The stabilization hypothesis has recently been proven in many different models of weak <span class="math-container">$n$</span>-categories including:</p> <ol> <li><p>Enriched <span class="math-container">$\infty$</span>-categories (by a remark in Lurie's book (2009), then a 2013 paper of <a href="https://arxiv.org/abs/1312.3178" rel="nofollow noreferrer">Gepner and Haugseng</a>), published in Advances.</p> </li> <li><p>Charles Rezk's <span class="math-container">$\Theta_n$</span>-space model for weak <span class="math-container">$n$</span>-categories and <span class="math-container">$(m,n)$</span>-categories, by a 2015 <a href="https://arxiv.org/pdf/1511.09130" rel="nofollow noreferrer">paper of Michael Batanin</a>, published in Proceedings of the AMS.</p> </li> <li><p>Tamsamani's model, Simpson's higher Segal categories, Ara's <span class="math-container">$n$</span>-quasicategories, and various models due to Bergner and Rezk, by a 2020 <a href="https://arxiv.org/abs/2001.05432" rel="nofollow noreferrer">paper by Michael Batanin and me</a>, published in Transactions.</p> </li> </ol>
3,686,921
<blockquote> <p>Prove that for all triangles with angles <span class="math-container">$\alpha, \beta, \gamma$</span>, <span class="math-container">$$\frac{\sin\alpha}{\cos\alpha + 1} + \frac{\sin\beta}{\cos\beta + 1} + \frac{\sin\gamma}{\cos\gamma + 1} = \frac{\cos\alpha + \cos\beta + \cos\gamma + 3}{\sin\alpha + \sin\beta + \sin\gamma}$$</span></p> </blockquote> <p>Let <span class="math-container">$\tan\dfrac{\alpha}{2} = a, \tan\dfrac{\beta}{2} = b, \tan\dfrac{\gamma}{2} = c$</span>, we have that <span class="math-container">$$\dfrac{\sin\beta}{\cos\beta + 1} = \dfrac{1}{b}, \cos\beta = \dfrac{1 - b^2}{1 + b^2}, \sin\beta = \dfrac{2b}{1 + b^2}$$</span> and <span class="math-container">$bc + ca + ab = 1$</span>.</p> <p>It needs to be proven that <span class="math-container">$$\frac{1}{a} + \frac{1}{b} + \frac{1}{c} = \frac{\dfrac{1 - a^2}{1 + a^2} + \dfrac{1 - b^2}{1 + b^2} + \dfrac{1 - c^2}{1 + c^2} + 3}{\dfrac{2a}{1 + a^2} + \dfrac{2b}{1 + b^2} + \dfrac{2c}{1 + c^2}}$$</span></p> <p><span class="math-container">$$\impliedby \frac{1}{a} + \frac{1}{b} + \frac{1}{c} = \frac{\dfrac{1}{1 + a^2} + \dfrac{1}{1 + b^2} + \dfrac{1}{1 + c^2}}{\dfrac{a}{1 + a^2} + \dfrac{b}{1 + b^2} + \dfrac{c}{1 + c^2}}$$</span></p> <p><span class="math-container">$$\impliedby \left(\frac{1}{a} + \frac{1}{b} + \frac{1}{c}\right)\left(\frac{a}{1 + a^2} + \frac{b}{1 + b^2} + \frac{c}{1 + c^2}\right) = \frac{1}{1 + a^2} + \frac{1}{1 + b^2} + \frac{1}{1 + c^2}$$</span></p> <p><span class="math-container">$$\impliedby \left(\frac{a}{b} + \frac{a}{c}\right)\frac{1}{1 + a^2} + \left(\frac{b}{c} + \frac{b}{a}\right)\frac{1}{1 + b^2} + \left(\frac{c}{a} + \frac{c}{b}\right)\frac{1}{1 + c^2} = 0$$</span></p> <p><span class="math-container">$$\impliedby \frac{a(b + c)}{bc(c + a)(a + b)} + \frac{b(c + a)}{ca(a + b)(b + c)} + \frac{c(a + b)}{ab( b + c)(c + a)} = 0$$</span></p> <p><span class="math-container">$$\impliedby \frac{1 - bc}{(1 - ca)(1 - ab)} + \frac{1 - ca}{(1 - ab)(1 - bc)} + \frac{1 - 
ab}{(1 - bc)(1 - ca)} = 0$$</span></p> <p><span class="math-container">$$\impliedby (1 - bc)^2 + (1 - ca)^2 + (1 - ab)^2 = 0$$</span></p> <p><span class="math-container">$$\impliedby bc = ca = ab = 1 \impliedby bc + ca + ab = 3,$$</span> which is definitely incorrect.</p> <p>I've surmised that the correct equality is <span class="math-container">$$\frac{\sin\alpha}{\cos\alpha + 1} + \frac{\sin\beta}{\cos\beta + 1} + \frac{\sin\gamma}{\cos\gamma + 1} = \frac{\cos\alpha + \cos\beta + \cos\gamma + 1}{\sin\alpha + \sin\beta + \sin\gamma},$$</span> but then I wouldn't know what to do first.</p>
CryoDrakon
193,538
<p>Using your transformation I get a slightly different equation: <span class="math-container">$$a+b+c=\frac{\sum_{cyc}\frac{1}{1+a^2}}{\sum_{cyc}\frac{a}{1+a^2}}$$</span> <span class="math-container">$$\sum_{cyc}\frac{a(a+b+c)}{1+a^2}=\sum_{cyc}\frac{1}{1+a^2}$$</span> <span class="math-container">$$\sum_{cyc}\frac{a^2+ab+ac-1}{(a+b)(a+c)}=0$$</span> <span class="math-container">$$\sum_{cyc}(a^2-bc)(b+c)=0$$</span> <span class="math-container">$$\sum_{cyc}a^2b+a^2c-b^2c-bc^2=0$$</span> Which is true!</p>
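The final cyclic sum does vanish identically; writing it out (a check added for completeness), every monomial cancels in pairs:

```latex
\sum_{cyc}\left(a^2b+a^2c-b^2c-bc^2\right)
  = (a^2b + a^2c - b^2c - bc^2)
  + (b^2c + b^2a - c^2a - ca^2)
  + (c^2a + c^2b - a^2b - ab^2)
  = 0.
```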
3,680,536
<p>In Eric Lengyel's book, <em>Mathematics for 3D Game Programming and Computer Graphics, 3<sup>rd</sup> Edition</em>, there is a theorem that states</p> <blockquote> <p>An <span class="math-container">$n \times n$</span> matrix M is invertible if and only if the rows of M form a linearly independent set of vectors.</p> </blockquote> <p>Two proofs were provided, one for each direction of the "if and only if". I understand the first proof, but I don't fully understand the second one. I have some understanding of it, but it's not enough to fully grasp the proof. The proof, slightly modified for brevity, goes as follows:</p> <blockquote> <p>Let the rows of M be denoted by <span class="math-container">$R_{1}^{T}, R_{2}^{T}, \ldots, R_{n}^{T}$</span>. Now assume that the rows of M are a linearly independent set of vectors. We first observe that performing elementary row operations on a matrix does not alter the property of linear independence within the rows. Running through Algorithm 3.12, if step C fails, then rows <em>j</em> through <em>n</em> of the matrix at that point form a linearly dependent set since the number of columns for which the rows <span class="math-container">$R_{j}^{T}$</span> through <span class="math-container">$R_{n}^{T}$</span> have at least one nonzero entry is less than the number of rows itself. This is a contradiction, so step C of the algorithm cannot fail, and M must be invertible.</p> </blockquote> <p>Note: A screenshot from the book of Algorithm 3.12 is available at the bottom.</p> <p>What I don't fully understand is this snippet of the proof:</p> <blockquote> <p>[...] rows <em>j</em> through <em>n</em> of the matrix at that point form a linearly dependent set since the number of columns for which the rows <span class="math-container">$R_{j}^{T}$</span> through <span class="math-container">$R_{n}^{T}$</span> have at least one nonzero entry is less than the number of rows itself. 
[...]</p> </blockquote> <p>Why do the rows form a linearly dependent set based on the idea that "the number of columns for which the rows <span class="math-container">$R_{j}^{T}$</span> through <span class="math-container">$R_{n}^{T}$</span> have at least one nonzero entry is less than the number of rows itself."?</p> <hr> <h3>Appendix</h3> <p>Algorithm 3.12</p> <p><a href="https://i.stack.imgur.com/8no1T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8no1T.png" alt="Description of Algorithm 3.12"></a></p>
user790072
790,072
<p>The proof appears to be using the fact that linearly independent subsets of <span class="math-container">$\Bbb{R}^n$</span> have at most <span class="math-container">$n$</span> vectors in them. For the sake of illustration, let's say we're at step C with <span class="math-container">$j = 3$</span> in an <span class="math-container">$n \times n$</span> matrix. Then our matrix looks something like this:</p> <p><span class="math-container">$$\begin{pmatrix} 1 &amp; 0 &amp; * &amp; * &amp; \cdots &amp; * \\ 0 &amp; 1 &amp; * &amp; * &amp; \cdots &amp; * \\ 0 &amp; 0 &amp; \color{red}* &amp; * &amp; \cdots &amp; * \\ 0 &amp; 0 &amp; \color{red}* &amp; * &amp; \cdots &amp; * \\ \vdots &amp; \vdots &amp; \color{red}\vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ 0 &amp; 0 &amp; \color{red}* &amp; * &amp; \cdots &amp; * \end{pmatrix}.$$</span></p> <p>The proof then tries to find the largest absolute value out of the red <span class="math-container">$\color{red}*$</span>s, throwing an exception if every one is <span class="math-container">$0$</span>. If every one of the <span class="math-container">$\color{red}*$</span>s was <span class="math-container">$0$</span>, then the last <span class="math-container">$n - 2$</span> rows each begins with three <span class="math-container">$0$</span>s, and the <span class="math-container">$n - 2$</span> row vectors can be embedded in <span class="math-container">$\Bbb{R}^{n - 3}$</span> (by ignoring the first three <span class="math-container">$0$</span>s). There are one too many such vectors for them to be linearly independent, hence the theorem (for the <span class="math-container">$j = 3$</span> case anyway; hopefully you can see how this would generalise).</p>
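To make the failure mode of step C concrete, here is a hedged Python sketch (not from the book; only the step-C pivot search follows the algorithm's labeling) of the elimination core: it reports non-invertibility exactly when the pivot search finds no nonzero candidate, which — as the proof argues — can only happen when the rows are linearly dependent.

```python
def invertible_via_elimination(M, tol=1e-12):
    """Gauss-Jordan elimination with partial pivoting.  Returns False when
    step C fails, i.e. when column j has no nonzero entry in rows j..n-1."""
    A = [list(map(float, row)) for row in M]  # work on a copy
    n = len(A)
    for j in range(n):
        # Step C: pick the row i >= j maximizing |A[i][j]|.
        i = max(range(j, n), key=lambda r: abs(A[r][j]))
        if abs(A[i][j]) <= tol:
            return False            # rows j..n-1 must be linearly dependent
        A[j], A[i] = A[i], A[j]     # swap the pivot row into place
        for r in range(n):          # clear column j in every other row
            if r != j:
                f = A[r][j] / A[j][j]
                A[r] = [x - f * y for x, y in zip(A[r], A[j])]
    return True

assert invertible_via_elimination([[2, 1], [1, 1]])      # independent rows
assert not invertible_via_elimination([[1, 2], [2, 4]])  # row 2 = 2 * row 1
```

Running it on a rank-deficient 3×3 matrix such as `[[1, 0, 0], [0, 1, 1], [0, 2, 2]]` shows step C failing at `j = 2`, exactly the situation the quoted snippet describes.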
1,327,944
<p>I'm a hobbyist working on a mechanical sorting machine to sort Magic: the Gathering cards. I'm by no means a mathematician though, and I was wondering if you all wouldn't mind helping me out with a math puzzle to determine the best route to go with my machine.</p> <p>The average Magic: the Gathering set contains 250 unique cards. My machine will sort those cards alphabetically, placing the cards into multiple piles repeatedly to filter the cards down into the correct position. My question is, how many piles would be needed to sort 250 cards with only three passes? Here is some further info to clarify the question.</p> <p>Example: a card passes through the scanner, and the computer determines that the card's position in the alphabetized set is within the first 50 positions of the alphabetized set. 50 positions is 20 percent of a 250 card set, which is just a number I chose as an easy place to start.</p> <p>Then the next card passes through, and the scanner determines that the card's position is in the 3rd 50 positions of the alphabetized set (positions 101-150), so it places this in pile three. It keeps doing this until it has 250 cards broken down into 5 piles.</p> <p>The cards are then scanned again for further refinement until they are in the correct order. So the question is, how many piles would make it possible to sort a 250 card set in 3 passes on average? I only chose 5 piles to illustrate the question, so I hope that's not confusing.</p> <p>What is the optimal number of piles needed?</p>
Kingrames
256,266
<p>Well unless you're using exclusively cards from way back in ye oldene dayse (tm), there's a collector number on the card itself, so they're technically already sorted.</p> <p>Before I begin, let me say: Alphabetically is the absolute WORST way to sort M:tG cards. I personally sort by more effective category: white blue black red green multicolored and colorless (this is referred to as WUBRG order). Some people prefer to place colorless in front due to its ability to fit in any deck, but I feel basic lands go in the back, and the rest of colorless follows suit. After organizing by color it's more helpful to sort the rest of the cards by their converted mana cost, with mana symbols weighted more than numericals.</p> <p>My preferred method of sorting involves tossing cards into piles based on their color or color combinations or lack of color, then one by one, taking each of those piles and hand-sorting them such that the pile is sorted by CMC. (converted mana cost) If those piles are too large, I may choose to sort by card type as well, which would be creature, planeswalker, instant, sorcery, enchantment, and possibly artifact (rare circumstances, those).</p> <p>I hope this helps you out better than sorting alphabetically! It does technically use two to three passes.</p> <p>Best of luck to you while playing, and may the mana screw never find you.</p>
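On the actual counting question (a side note, not part of the answer above): the distribute-into-piles procedure the question describes is a radix sort, and with <span class="math-container">$p$</span> piles a pass multiplies the number of distinguishable orderings by <span class="math-container">$p$</span>, so 3 passes handle at most <span class="math-container">$p^3$</span> cards. For 250 cards the smallest workable pile count is therefore 7, since <span class="math-container">$7^3 = 343 \ge 250$</span> while <span class="math-container">$6^3 = 216$</span> falls short. A rough Python sketch (illustrative names), treating each card as its scanned position 0..249:

```python
import math
import random

def pile_sort(cards, piles):
    """Least-significant-digit radix sort: one pass per base-`piles` digit
    of the card's target position; the piles are picked up in order after
    each pass, so the ordering from earlier passes is preserved."""
    passes = math.ceil(math.log(len(cards), piles))  # 3 for 250 cards, 7 piles
    for p in range(passes):
        bins = [[] for _ in range(piles)]
        for pos in cards:
            bins[(pos // piles ** p) % piles].append(pos)
        cards = [pos for b in bins for pos in b]
    return cards

deck = list(range(250))
random.shuffle(deck)
assert pile_sort(deck, piles=7) == list(range(250))  # sorted in 3 passes
```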
1,849,758
<p>Does there exist a name for the class of commutative rings with identity that satisfy the following:</p> <p>For any 2 ideals $I_1,I_2$ of R,we have : $I_1 I_2= (I_1\cap I_2)(I_1+I_2) $</p> <p>I would also like to see an example of a ring not satisfying the above property.</p> <p>Thank you</p>
E.R
325,912
<p>If $R$ is a Noetherian domain with that property, then $R$ is called a Dedekind domain; see Larsen and McCarthy's <em>Multiplicative Theory of Ideals</em>, Theorem 6.20. However, see also D. D. Anderson's homepage: he has a paper with a similar title treating arbitrary commutative rings. </p>
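As for the requested example (a sketch added here, under the standard ideal arithmetic in a polynomial ring): the ring $k[x,y]$ fails the property. Taking $I_1=(x)$ and $I_2=(y)$,

```latex
I_1 I_2 = (xy), \qquad
(I_1 \cap I_2)(I_1 + I_2) = (xy)(x,y) = (x^2 y,\; x y^2),
```

and $xy \notin (x^2y, xy^2)$, since every element of that ideal has the form $xy(fx+gy)$ and $fx+gy$ has no constant term.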
4,062,242
<p>Is <span class="math-container">$$\int_1^\infty \frac{\log x}{x^2}dx$$</span> finite? How to solve this?</p>
BCLC
140,308
<p>Use integration by parts. Keep trying different parts until it works.</p> <ol> <li>This doesn't work:</li> </ol> <p><span class="math-container">$u= \frac1x$</span>, <span class="math-container">$dv=\frac{\ln(x)\,dx}{x}$</span></p> <p><span class="math-container">$du= -\frac{dx}{x^2}$</span>, <span class="math-container">$v=\frac{\ln^2(x)}{2}$</span></p> <ol start="2"> <li>This doesn't work:</li> </ol> <p><span class="math-container">$u= \frac1{x^2}$</span>, <span class="math-container">$dv=\ln(x)\,dx$</span></p> <p><span class="math-container">$du= (...)$</span>, <span class="math-container">$v=(...)$</span></p> <ol start="3"> <li>This works:</li> </ol> <p><span class="math-container">$u=\ln(x)$</span>, <span class="math-container">$dv= \frac{dx}{x^2}$</span></p> <p><span class="math-container">$du=\frac{dx}{x}$</span>, <span class="math-container">$v= -\frac1{x}$</span></p>
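Carrying the working choice through (a completion added for reference), the integral is finite and equals $1$:

```latex
\int_1^\infty \frac{\ln x}{x^2}\,dx
  = \left[-\frac{\ln x}{x}\right]_1^\infty + \int_1^\infty \frac{dx}{x^2}
  = 0 + \left[-\frac{1}{x}\right]_1^\infty
  = 1,
```

using $\lim_{x\to\infty} (\ln x)/x = 0$.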
542,454
<p>There are two tangent lines to $f(x) = \sqrt{x}$, at the points with $x$-values $a$ and $b$ respectively. </p> <p>I need to prove that $c$, the $x$-value of the point at which the two lines intersect, is equal to $\sqrt{ab}$, the geometric mean of $a$ and $b$. </p> <p>I have tried many different approaches to this question and I keep getting stuck. </p>
TBrendle
97,380
<p>The two points of tangency are $(a, \sqrt{a})$ and $(b, \sqrt{b})$. If either of $a$ or $b$ is $0$ then the result holds trivially, so assume $ab \neq 0$.</p> <p>Because the curve is a parabola, the intersection of the tangents, $(x_0, y_0)$, has $y_0=(\sqrt{a}+\sqrt{b})/2$. That's because projecting perpendicularly onto the directrix, the intersection of two tangents of a parabola will always bisect the segment between the points of tangency (draw a picture). This can be proved using high school geometry and is sometimes known as the Two-Tangent Theorem for the parabola. </p> <p>The equation for the tangent at $(a, \sqrt{a})$ is $$y- \sqrt{a}=\frac{1}{2\sqrt{a}}(x-a) . $$ This line passes through $(x_0, y_0)$ so we have $$ \frac{\sqrt{a}+\sqrt{b}}{2} - \sqrt{a}=\frac{1}{2\sqrt{a}}(x_0-a). $$ Simplifying,<br> $$ \frac{\sqrt{b} - \sqrt{a}}{2} =\frac{1}{2\sqrt{a}}(x_0-a). $$ Clearing denominators gives $$ \sqrt{ab}-a = x_0 -a, $$ and so $x_0= \sqrt{ab}$, as desired.</p> <p><strong>Notice</strong>: In fact, if $A$ and $B$ are distinct non-zero points on the curve $y=\sqrt{x}$ and $C$ is the intersection of the tangents at $A$ and $B$, then we have</p> <blockquote> <p>the $x$-coordinate of C is the <em>geometric</em> mean of the $x$-coordinates of $A$ and $B$</p> </blockquote> <p>and</p> <blockquote> <p>the $y$-coordinate of C is the <em>arithmetic</em> mean of the $y$-coordinates of $A$ and $B$.</p> </blockquote>
2,747,896
<p>I am a bit confused with cardinality at the moment. I know that the cardinality of $\mathbb{R}$ is equal to $\lvert(0,1)\rvert$, but does that mean they are equal to infinity? If not, what are they equal to?</p>
saulspatz
235,128
<p>Their cardinality is referred to as "the cardinality of the continuum." There are infinitely many different infinite cardinal numbers. In many contexts, though, it is enough to say that the cardinality is "uncountably infinite" or that the set is "uncountable." An example of a countably infinite set is the integers. </p>
3,768,198
<p>Show that <span class="math-container">$\|uv^T-wz^T\|_F^2\le \|u-w\|_2^2+\|v-z\|_2^2$</span>, assuming <span class="math-container">$u,v,w,z$</span> are all unit vectors.</p>
Ben Grossmann
81,360
<p>Another approach: note that for orthogonal matrices <span class="math-container">$U,V,$</span> we have <span class="math-container">$$ \|uv^T - wz^T\|_F^2 = \|U(uv^T - wz^T)V\|_F^2 = \|(Uu)(Vv)^T - (Uw)(Vz)^T\|_F^2. $$</span> So without loss of generality, we can assume that <span class="math-container">$u = v = (1,0,\dots,0)^T$</span>, so <span class="math-container">$uv^T$</span> is the matrix with a <span class="math-container">$1$</span> as its <span class="math-container">$1,1$</span> entry and zeros elsewhere. The left-hand side is then given by <span class="math-container">$$ \|uv^T - wz^T\|_F^2 = \|wz^T\|_F^2 + [(1 - w_1z_1)^2 - (w_1z_1)^2] \\ = (w^Tw)(z^Tz) + [1 - 2w_1 z_1] = 2 - 2w_1z_1. $$</span> The right-hand side is given by <span class="math-container">$$ \|u - w\|^2 + \|v-z\|^2 = \|w\|^2 + [(1 - w_1)^2 - w_1^2] + \|z\|^2 + [(1 - z_1)^2 - z_1^2] \\ = 2 - 2w_1 + 2 - 2z_1 = 4 - 2(w_1 + z_1), $$</span> and from there the reasoning is similar.</p> <hr /> <p>Another approach for expanding the exact expression: note that <span class="math-container">$$ M = uv^T - wz^T = \pmatrix{u &amp; w} \pmatrix{v &amp; -z}^T, $$</span> so that <span class="math-container">$$ \operatorname{tr}(MM^T) = \operatorname{tr}[\pmatrix{u &amp; w} \pmatrix{v &amp; -z}^T\pmatrix{v &amp; -z}\pmatrix{u &amp; w}^T] \\ = \operatorname{tr}[\pmatrix{v &amp; -z}^T\pmatrix{v &amp; -z}\pmatrix{u &amp; w}^T\pmatrix{u &amp; w}] \\ = \operatorname{tr}\left[\pmatrix{1 &amp; -v^Tz\\ -v^Tz &amp; 1}\pmatrix{1 &amp; u^Tw\\u^Tw &amp; 1}\right] $$</span></p>
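One way to finish both computations (an added step, in the same notation): the remaining trace evaluates to $2 - 2(u^Tw)(v^Tz)$, while the right-hand side is $\|u-w\|^2+\|v-z\|^2 = 4 - 2u^Tw - 2v^Tz$, and the gap between them is a product of nonnegative factors:

```latex
\bigl(4 - 2u^Tw - 2v^Tz\bigr) - \bigl(2 - 2(u^Tw)(v^Tz)\bigr)
  = 2\,(1 - u^Tw)(1 - v^Tz) \;\ge\; 0,
```

since Cauchy–Schwarz gives $|u^Tw| \le 1$ and $|v^Tz| \le 1$ for unit vectors; the first approach finishes the same way with $u^Tw$, $v^Tz$ replaced by $w_1$, $z_1$.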
2,059,192
<p>I was reading about Sobolev spaces and came across the notation $\dot{H}^1, \dot{H}^{-1}, \dot{H}^t$. I'm familiar with $H^1, H^{-1}, H^t$, but not the dot, and I can't find these spaces defined anywhere. Is this notation common, and could you explain it to me or point me to a reference?</p> <p>I have more or less the same question about the spaces $L_t^2,H_x^2$, and the norm $||\cdot||_{L_t^2H_x^2}$.</p>
Michał Miśkiewicz
350,803
<p>I have seen another meaning of $\dot{H}^s(\mathbb{R}^d)$ in <em>Fourier analysis and nonlinear PDEs</em> by Bahouri-Chemin-Danchin. In this book, this is called a <em>homogeneous Sobolev space</em>, defined as the set of all tempered distributions $u$ such that $\hat{u} \in L^1_{loc}$ and $$ ||u||_{\dot{H}^s}^2 := \int_{\mathbb{R}^d} |\xi|^{2s} |\hat{u}(\xi)|^2 d \xi &lt; \infty. $$ This can be compared with the Slobodeckij seminorm, and in the case $s \in \mathbb{N}$ with the usual Sobolev seminorm. For the definition of $H^s$ without a dot, you would add $1$, i.e. replace $|\xi|^{2s}$ with $(1+|\xi|^{2})^s$. </p>
102,738
<p>I imported two sets data one: </p> <pre><code>data1={{0., 5.02512*10^-10}, {0.06668, 2.99284*10^-8}, {0.13336, 3.22116*10^-8}, {0.20004, 2.58191*10^-8}, {0.26672, 1.99125*10^-7}, {0.3334, 1.21646*10^-8}, {0.40008, 3.35916*10^-7}, {0.46676, 3.79768*10^-7}, {0.53344, 1.02102*10^-7}, {0.60012, 1.17535*10^-6}, {0.6668, 1.72507*10^-7}, {0.73348, 1.23789*10^-6}, {0.80016, 1.9808*10^-6}, {0.86684, 1.39616*10^-7}, {0.93352, 4.60649*10^-6}, {1.0002, 1.39262*10^-6}, {1.06688, 3.83127*10^-6}, {1.13356, 0.0000101002}, {1.20024, 3.26005*10^-8}, {1.26692, 0.0000229263}, {1.3336, 0.0000144712}, {1.40028, 0.000020778}, {1.46696, 0.000134013}, {1.53364, 4.94753*10^-6}, {1.60032, 0.00250851}, {1.667, 0.00326501}, {1.73368, 0.0000968109}, {1.80036, 0.000207831}, {1.86704, 7.79724*10^-6}, {1.93372, 0.0000459028}, {2.0004, 0.0000321442}, {2.06708, 2.43685*10^-6}, {2.13376, 0.0000276559}, {2.20044, 3.87948*10^-6}, {2.26712, 9.62673*10^-6}, {2.3338, 0.0000130072}, {2.40048, 1.53889*10^-7}, {2.46716, 0.0000116171}, {2.53384, 3.36691*10^-6}, {2.60052, 3.53838*10^-6}, {2.6672, 8.3132*10^-6}, {2.73388, 2.36251*10^-8}, {2.80056, 6.58432*10^-6}, {2.86724, 3.33096*10^-6}, {2.93392, 1.45936*10^-6}, {3.0006, 6.35157*10^-6}, {3.06728, 2.69642*10^-7}, {3.13396, 4.25243*10^-6}, {3.20064, 3.49319*10^-6}, {3.26732, 5.50908*10^-7}, {3.334, 5.33684*10^-6}, {3.40068, 6.86369*10^-7}, {3.46736, 2.92315*10^-6}, {3.53404, 3.88476*10^-6}, {3.60072, 1.32685*10^-7}, {3.6674, 4.88858*10^-6}, {3.73408, 1.2985*10^-6}, {3.80076, 2.10915*10^-6}, {3.86744, 4.63201*10^-6}, {3.93412, 9.45702*10^-10}, {4.0008, 4.94888*10^-6}, {4.06748, 2.37468*10^-6}, {4.13416, 1.60386*10^-6}, {4.20084, 6.40728*10^-6}, {4.26752, 1.82055*10^-7}, {4.3342, 6.14228*10^-6}, {4.40088, 5.175*10^-6}, {4.46756, 1.4092*10^-6}, {4.53424, 0.000013092}}; </code></pre> <p>second:</p> <pre><code>{{0., 5.02512*10^-10}, {0.06668, 6.99284*10^-8}, {0.13336, 9.22116*10^-8}, {0.20004, 9.58191*10^-8}, {0.26672, 6.99125*10^-7}, {0.3334, 7.21646*10^-8}, 
{0.40008, 1.35916*10^-7}, {0.46676, 8.79768*10^-7}, {0.53344, 9.02102*10^-7}, {0.60012, 5.17535*10^-6}, {0.6668, 9.72507*10^-7}, {0.73348, 0.23789*10^-6}, {0.80016, 5.9808*10^-6}, {0.86684, 9.39616*10^-7}, {0.93352, 1.60649*10^-6}, {1.0002, 5.39262*10^-6}, {1.06688, 7.83127*10^-6}, {1.13356, 0.0000101002}, {1.20024, 5.26005*10^-8}, {1.26692, 0.0000229263}, {1.3336, 0.0000144712}, {1.40028, 0.000020778}, {1.46696, 0.000134013}, {1.53364, 4.94753*10^-6}, {1.60032, 0.00250851}, {1.667, 0.00326501}, {1.73368, 0.0000968109}, {1.80036, 0.000207831}, {1.86704, 7.79724*10^-6}, {1.93372, 0.0000459028}, {2.0004, 0.0000321442}, {2.06708, 7.43685*10^-6}, {2.13376, 0.0000276559}, {2.20044, 9.87948*10^-6}, {2.26712, 9.62673*10^-6}, {2.3338, 0.0000130072}, {2.40048, 1.53889*10^-7}, {2.46716, 0.0000116171}, {2.53384, 3.36691*10^-6}, {2.60052, 3.53838*10^-6}, {2.6672, 8.3132*10^-6}, {2.73388, 2.36251*10^-8}, {2.80056, 6.58432*10^-6}, {2.86724, 3.33096*10^-6}, {2.93392, 1.45936*10^-6}, {3.0006, 6.35157*10^-6}, {3.06728, 2.69642*10^-7}, {3.13396, 4.25243*10^-6}, {3.20064, 3.49319*10^-6}, {3.26732, 5.50908*10^-7}, {3.334, 5.33684*10^-6}, {3.40068, 6.86369*10^-7}, {3.46736, 2.92315*10^-6}, {3.53404, 3.88476*10^-6}, {3.60072, 1.32685*10^-7}, {3.6674, 4.88858*10^-6}, {3.73408, 1.2985*10^-6}, {3.80076, 2.10915*10^-6}, {3.86744, 4.63201*10^-6}, {3.93412, 9.45702*10^-10}, {4.0008, 4.94888*10^-6}, {4.06748, 2.37468*10^-6}, {4.13416, 1.60386*10^-6}, {4.20084, 6.40728*10^-6}, {4.26752, 1.82055*10^-7}, {4.3342, 6.14228*10^-6}, {4.40088, 5.175*10^-6}, {4.46756, 1.4092*10^-6}, {4.53424, 0.000013092}}; </code></pre> <p>The desired case is: plotting by <code>ListLogPlot</code> of two sets in one plot. But, before plotting, the second one must be multiplied by <code>100</code> for preventing of overlapping plots on each other. <code>100</code> must be multiplied to the second column of second data. 
I mean: </p> <pre><code>{0., 100*5.02512*10^-10}, {0.06668, 100*6.99284*10^-8}, {0.13336, 100*9.22116*10^-8}, {0.20004, 100*9.58191*10^-8}, {0.26672, 100*6.99125*10^-7}, {0.3334, 100*7.21646*10^-8}.......</code></pre>
WReach
142
<pre><code>SetOptions[EvaluationNotebook[] , StyleDefinitions -&gt; Notebook @ { Cell[StyleData[StyleDefinitions -&gt; "Default.nb"]] , Cell[StyleData["ItemNumbered"], CounterBoxOptions -&gt; {CounterFunction -&gt; (#*2-1&amp;)}] , Cell[StyleData["SubitemNumbered"], CounterBoxOptions -&gt; {CounterFunction -&gt; (#*2-1&amp;)}] , Cell[StyleData["SubsubitemNumbered"], CounterBoxOptions -&gt; {CounterFunction -&gt; (#*2-1&amp;)}] } ] </code></pre> <p>As pointed out by @Kuba, the same <code>CounterFunction</code> must be applied to items, sub-items and sub-sub-items to ensure that the counters at all levels match.</p> <p>Example:</p> <pre><code>Do[ If[j === 1 &amp;&amp; k === 1, CellPrint@Cell["item", "ItemNumbered"]] ; If[k === 1, CellPrint@Cell["subitem", "SubitemNumbered"]] ; CellPrint@Cell["subsubitem", "SubsubitemNumbered"] , {i, 3}, {j, 3}, {k, 2} ] </code></pre> <p><a href="https://i.stack.imgur.com/MHVSE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MHVSE.png" alt="list items"></a></p>
3,605,636
<p>Let <span class="math-container">$m_{a},m_{b},m_{c}$</span> be the lengths of the medians and <span class="math-container">$a,b,c$</span> be the lengths of the sides of a given triangle. Prove the inequality: </p> <p><span class="math-container">$$m_{a}m_{b}m_{c}\leq\frac{Rs^{2}}{2}$$</span></p> <p>Where: </p> <p><span class="math-container">$s$</span>: semiperimeter</p> <p><span class="math-container">$R$</span>: circumradius</p> <p>I know the relation: </p> <p><span class="math-container">$$m_{a}^{2}=\frac{2(b^{2}+c^{2})-a^{2}}{4}$$</span></p> <p>But when I multiply these together I don't get simple formulas!</p> <p>So, I need help finding a solution. Thanks!</p>
Quanto
686,284
<p><a href="https://i.stack.imgur.com/6VkTE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6VkTE.png" alt="enter image description here"></a></p> <p>Note that the triangles ABD and EDC are similar. Then,</p> <p><span class="math-container">$$\frac{AD}{BD}=\frac{CD}{ED}\implies \frac{m_a}{\frac a2}=\frac{\frac a2}{AE-m_a} \implies m_a^2 -AE\cdot m_a + \frac {a^2}4=0$$</span></p> <p>which, since <span class="math-container">$AE \le 2R$</span> and <span class="math-container">$a=2R\sin A$</span>, leads to</p> <p><span class="math-container">$$m_a =\frac12(AE+\sqrt{AE^2-a^2})\le \frac12\left[2R+\sqrt{(2R)^2-(2R\sin A)^2}\right] =2R\cos^2 \frac A2$$</span> Likewise, <span class="math-container">$m_b\le 2R\cos^2 \frac B2$</span> and <span class="math-container">$m_c\le 2R\cos^2 \frac C2$</span>. Together, we have</p> <p><span class="math-container">$$\begin{align} m_a m_bm_c &amp; \le \frac12R^3\left( 4\cos\frac A2\cos \frac B2\cos \frac C2\right)^2 \\ &amp; = \frac12R^3\left( 2 \cos\frac A2 \left(\cos \frac {B+C}2+\cos \frac {B-C}2 \right)\right)^2 \\ &amp; = \frac12R^3\left( 2 \cos\frac A2 \sin\frac A2+2 \sin\frac {B+C}2\cos \frac {B-C}2 \right)^2 \\ &amp; = \frac12R^3\left( \sin A + \sin B + \sin C \right)^2 \\ &amp; = \frac12R^3 \left( \frac a{2R} + \frac b{2R} + \frac c{2R}\right)^2 \\ &amp; = \frac12R \left( \frac{a + b+ c}2 \right)^2 \\ \end{align}$$</span></p> <p>Thus,</p> <p><span class="math-container">$$m_a m_bm_c \le \frac12Rs^2$$</span></p>
344,725
<p>In $\Delta ABC$, suppose that</p> <p>$$\dfrac{\sin{(\dfrac{B}{2}+C)}}{\sin^2{B}}=\dfrac{\sin{(\dfrac{C}{2}+B)}}{\sin^2{C}}$$</p> <p>Prove that $B=C$.</p> <p>I think $\sin{(\dfrac{B}{2}+C)}\sin^2{C}=\sin{(\dfrac{C}{2}+B)}\sin^2{B}$,</p> <p>then $$\sin{(\dfrac{B}{2})}\cos{C}\sin^2{C}+\cos{\dfrac{B}{2}}\sin^3C=\sin{\dfrac{C}{2}}\cos{B}\sin^2B+\cos{\dfrac{C}{2}}\sin^3B$$</p> <p>so $$(\sin{\dfrac{B}{2}}-\sin{\dfrac{C}{2}})f(B,C)=0$$</p> <p>My question: how can I prove that $f(B,C)\neq 0$?</p>
coffeemath
30,316
<p>No manipulations I tried worked, so I began to think the statement was false. So I set up the function $$f(b,c)=\sin(b/2+c)\sin^2(c),$$ noting the sides of your equation are equal iff $f(b,c)=f(c,b).$ So if $b=c$ followed from that it should be true that for example the function $$g(x)=f(x,x+.5)-f(x+.5,x)$$ should not have zeros at values of $x$ such that angles $x,x+.5$ could be angles in a triangle. However a root finder gave the zero $x_0 \approx 1.1685189447$. In the notation of the equation this is $b=x_0$ and $c=x_0+.5$, which correspond in degrees to roughly $66.95$ and $95.59$ degrees respectively, with sum about $162.55$ degrees, so that this $a,b$ could be angles in a triangle. The two sides of the original equation then both come out about $0.916865653972.$</p>
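The root-finding step is easy to reproduce. This Python sketch bisects the same $g$ (the bracket and iteration count are my choices) and then evaluates both sides of the original equation at the zero:

```python
import math

def f(b, c):
    return math.sin(b/2 + c)*math.sin(c)**2

def g(x):
    return f(x, x + 0.5) - f(x + 0.5, x)

# g(1.0) is positive and g(1.3) is negative, so bisect on [1.0, 1.3]
lo, hi = 1.0, 1.3
for _ in range(200):
    mid = (lo + hi)/2
    if g(lo)*g(mid) <= 0:
        hi = mid
    else:
        lo = mid
root = (lo + hi)/2

b, c = root, root + 0.5
lhs = math.sin(b/2 + c)/math.sin(b)**2   # left side of the original equation
rhs = math.sin(c/2 + b)/math.sin(c)**2   # right side
angle_sum = b + c
valid = angle_sum < math.pi              # b, c can be two angles of a triangle
print(root, lhs, rhs)
```

Both sides come out equal (about $0.9169$) at a root with $b+c$ well below $\pi$, confirming the counterexample.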
2,471,680
<p>I am working with a theorem and I need a reference for the above limit. Kindly guide.</p>
user387832
387,832
<p>Make use of the monotone convergence theorem.</p> <p>From Wikipedia:</p> <p><em>If a sequence of real numbers is decreasing and bounded below, then its infimum is the limit.</em></p> <p>So you'll need to show that $0$ is the infimum of the sequence and that this sequence is decreasing.</p> <p><em>Hint</em>: Show that for any $n\in \mathbb{N}, k^{n+1} \lt k^{n} $ and also that $\forall \epsilon &gt;0, \exists N\in \mathbb{N}$ such that $k^{N} \lt \epsilon.$</p>
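As a concrete illustration of the hint for a ratio $k$ strictly between $0$ and $1$ (the values $k=0.8$ and $\epsilon=10^{-6}$ are arbitrary choices):

```python
import math

k, eps = 0.8, 1e-6
seq = [k**n for n in range(1, 60)]
decreasing = all(seq[i + 1] < seq[i] for i in range(len(seq) - 1))

# an explicit N with k^N < eps, from N > log(eps)/log(k)
N = math.ceil(math.log(eps)/math.log(k))
print(N, k**N)
```

The sequence is strictly decreasing and bounded below by $0$, and for any $\epsilon$ the computed $N$ witnesses $k^N$ dropping below it, which is exactly the infimum claim in the hint.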
2,860,156
<p>Let $A\in \mathbb{M}_3(\mathbb{R})$ be a symmetric matrix whose eigen-values are $1,1$ and $3$. Express $A^{-1}$ in the form $\alpha I +\beta A$, where $\alpha, \beta \in \mathbb{R}$.</p>
InsideOut
235,392
<p>Let $\alpha,\beta\in \Bbb R$ such that $A^{-1}=\alpha I+\beta A$. Then $$A(\alpha I+\beta A)=\alpha A+ \beta A^2=I$$</p> <p>This is equivalent to saying that $\beta A^2 + \alpha A-I=0$, that is, $A$ is a solution of the equation $$p(x)=\beta x^2+\alpha x-1=0.$$ This is also equivalent to saying that the eigenvalues $1,3$ are roots of $p(x)$ (here we use that $A$, being symmetric, is diagonalizable). Let $q(x)=(x-1)(x-3)=x^2-4x+3$, then $$p(x)=-\frac13q(x)=-\frac13 x^2+\frac43 x -1.$$ Hence $\alpha=\frac43$ and $\beta=-\frac13$.</p>
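A quick concrete check (this particular symmetric $A$ with eigenvalues $1,1,3$ is my own example; its lower-right block has eigenvalues $1$ and $3$):

```python
# symmetric, eigenvalues 1, 1, 3
A = [[1, 0, 0],
     [0, 2, 1],
     [0, 1, 2]]

# claimed inverse: (4/3) I - (1/3) A
B = [[(4*(i == j) - A[i][j])/3 for j in range(3)] for i in range(3)]

# multiply A by the claimed inverse and compare with the identity
AB = [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
max_err = max(abs(AB[i][j] - (i == j)) for i in range(3) for j in range(3))
print(max_err)  # essentially 0
```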
1,081,417
<p>This is exercise number $57$ in Hugh Gordon's <em>Discrete Probability</em>. </p> <hr> <p>For $n \in \mathbb{N}$, show that</p> <p>$$\binom{\binom{n}{2}}{2}=3\binom{n+1}{4}$$</p> <hr> <p>My algebraic solution:</p> <p>$$\binom{\binom{n}{2}}{2}=3\binom{n+1}{4}$$</p> <p>$$\binom{\frac{n(n-1)}{2}}{2}=\frac{3n(n+1)(n-1)(n-2)}{4 \cdot 3 \cdot 2}$$</p> <p>$$2\left(\frac{n(n-1)}{2}\right)\left(\frac{n(n-1)}{2}-1\right)=\frac{n(n+1)(n-1)(n-2)}{2}$$</p> <p>$$2n(n-1)\frac{n^2-n-2}{2} = n(n+1)(n-1)(n-2)$$</p> <p>$$n(n-1)(n-2)(n+1)=n(n+1)(n-1)(n-2)$$</p> <p>This finishes the proof.</p> <hr> <p>I feel like this is not what the point of the exercise was; it feels like an unclean, inelegant bashing with the factorial formula for binomial coefficients. Is there a nice counting argument to show the identity? Something involving committees perhaps?</p>
Grigory M
152
<p>$\binom{\binom n2}2$ counts pairs of (distinct) 2-element subsets of $n$-element set. Union of such pair is either 4-element set (and each 4-element set is counted 3 times: there are 3 ways to divide 4-set into 2 pairs) or 3-element set (and each 3-element set is also counted 3 times). That gives $3\binom n4+3\binom n3=3\binom{n+1}4$.</p>
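Both the identity and the double count can be checked by machine (the cutoff and the choice $n=6$ below are arbitrary):

```python
from math import comb
from itertools import combinations

identity_ok = all(comb(comb(n, 2), 2) == 3*comb(n + 1, 4) for n in range(2, 12))

# double count for n = 6: classify pairs of distinct 2-subsets by union size
n = 6
pairs = list(combinations(list(combinations(range(n), 2)), 2))
by_union = {3: 0, 4: 0}
for p, q in pairs:
    by_union[len(set(p) | set(q))] += 1
print(by_union, len(pairs))  # 3*C(6,3) = 60 triples, 3*C(6,4) = 45 quadruples
```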
3,325,114
<blockquote> <p>You are on an island inhabited only by knights, who always tell the truth, and knaves, who always lie. You meet two women who live there and ask the older one,</p> <blockquote> <p>&quot;Is at least one of you a knave?&quot;</p> </blockquote> <p>She responds yes or no, but! you do not yet have enough information to determine what they were. So you then ask the younger woman,</p> <blockquote> <p>&quot;Are you two of the same type?&quot;</p> </blockquote> <p>She answers yes or no and after that you know which type each is. What type is each?</p> <ul> <li>both knight</li> <li>both knave</li> <li>older knight, younger knave</li> <li>older knave, younger knight</li> <li>not enough information</li> </ul> </blockquote> <p>I thought it should be &quot;not enough information&quot; but that seems wrong</p>
Floris Claassens
638,208
<p>Just write a truth table: You've got 4 possibilities:</p> <ul> <li>Woman 1 is a knight and woman 2 is a knight, answers are no and yes.</li> <li>Woman 1 is a knight and woman 2 is a knave, answers are yes and yes. </li> <li>Woman 1 is a knave and woman 2 is a knight, answers are no and no. </li> <li>Woman 1 is a knave and woman 2 is a knave, answers are no and no. </li> </ul> <p>This means they are both knights. If the first answer was yes, it would be clear the first woman is a knight and the second a knave, so the first answer was no. </p> <p>If the second answer was no, it would still be unclear what both women are, so the second answer was yes.</p>
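The same truth table and the two elimination steps can be run mechanically (a Python sketch; `True` stands for knight and the helper names are mine):

```python
from itertools import product

def says(is_knight, truth):
    # a knight reports the truth, a knave negates it
    return truth if is_knight else not truth

scenarios = []
for older, younger in product([True, False], repeat=2):
    a1 = says(older, (not older) or (not younger))  # "is at least one of you a knave?"
    a2 = says(younger, older == younger)            # "are you two of the same type?"
    scenarios.append((older, younger, a1, a2))

deductions = []
for ans1 in (True, False):
    pool = [s for s in scenarios if s[2] == ans1]
    if len(pool) <= 1:
        continue  # the first answer alone would already identify them
    for ans2 in (True, False):
        final = [s for s in pool if s[3] == ans2]
        if len(final) == 1:
            deductions.append(final[0][:2])
print(deductions)  # [(True, True)]: both knights
```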
3,325,114
<blockquote> <p>You are on an island inhabited only by knights, who always tell the truth, and knaves, who always lie. You meet two women who live there and ask the older one,</p> <blockquote> <p>&quot;Is at least one of you a knave?&quot;</p> </blockquote> <p>She responds yes or no, but! you do not yet have enough information to determine what they were. So you then ask the younger woman,</p> <blockquote> <p>&quot;Are you two of the same type?&quot;</p> </blockquote> <p>She answers yes or no and after that you know which type each is. What type is each?</p> <ul> <li>both knight</li> <li>both knave</li> <li>older knight, younger knave</li> <li>older knave, younger knight</li> <li>not enough information</li> </ul> </blockquote> <p>I thought it should be &quot;not enough information&quot; but that seems wrong</p>
AgentS
168,854
<p><span class="math-container">$a$</span>: older woman is a knight<br> <span class="math-container">$b$</span>: younger woman is a knight</p> <p>Behavior of the <code>older woman</code> can be represented with the boolean function <span class="math-container">$$f(a,b)=a(a'+ b') + a'(ab) = a b'$$</span> Behavior of the <code>younger woman</code> can be represented with the boolean function <span class="math-container">$$g(a,b)=b(ab+ a' b') + b'( a'b +a b' ) = a$$</span></p> <hr> <p>With the older woman the output has to be <code>no</code>, that is <span class="math-container">$f(a,b)=0$</span>, so that we can eliminate just one case <span class="math-container">$(a, b')$</span>, and the remaining possibilities are <span class="math-container">$$(a,b), (a',b), (a',b')$$</span></p> <p>Among the possibilities above, <span class="math-container">$a$</span> is true in only one case.<br> This means the younger woman has to output <code>yes</code>, that is <span class="math-container">$g(a,b)$</span> has to be <span class="math-container">$1$</span>, and then you can conclude it is <span class="math-container">$(a,b)$</span>.</p>
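Both boolean simplifications can be verified against the raw knight/knave behaviour (`True` stands for knight; the helper name is mine):

```python
def answer(is_knight, statement_is_true):
    # a knight reports the truth value, a knave negates it
    return statement_is_true if is_knight else not statement_is_true

table = []
for a in (True, False):       # a: older woman is a knight
    for b in (True, False):   # b: younger woman is a knight
        f = answer(a, (not a) or (not b))  # older's reply to "at least one knave?"
        g = answer(b, a == b)              # younger's reply to "same type?"
        table.append((a, b, f, g))
print(table)
```

Every row satisfies $f = a\,b'$ and $g = a$, as claimed.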
1,557,165
<p>Prove that $$\int_1^\infty\frac{e^x}{x (e^x+1)}dx$$ does not converge.</p> <p>How can I do that? I thought about turning it into the form of $\int_b^\infty\frac{dx}{x^a}$, but I find no easy way to get rid of the $e^x$.</p>
Enrico M.
266,764
<p>Collect $e^x$ and have</p> <p>$$\int\frac{e^x\,dx}{e^x(x + xe^{-x})} = \int \frac{dx}{x + xe^{-x}}$$</p> <p>Now substitute $e^{-x} = y$, so that $dy = -y\, dx$ and the limits of integration become $1/e$ and $0$:</p> <p>$$\int_0^{1/e} \frac{dy}{y(- \log(y))(1+y)}$$</p> <p>which has a non-integrable singularity at the endpoint $y=0$ (near $0$ the integrand behaves like $\frac{1}{y(-\log y)}$, whose integral diverges), so the integral does not converge.</p>
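Numerically, the partial integrals keep growing like $\log T$, consistent with divergence. In the sketch below (composite midpoint rule; the step counts are my choice) the integrand is rewritten as $\frac{1}{x(1+e^{-x})}$ to avoid overflow:

```python
import math

def h(x):
    # e^x/(x(e^x + 1)) rewritten as 1/(x(1 + e^{-x})) to avoid overflow
    return 1.0/(x*(1.0 + math.exp(-x)))

def integral(T, steps=200000):
    # composite midpoint rule on [1, T]
    dx = (T - 1.0)/steps
    return dx*sum(h(1.0 + (i + 0.5)*dx) for i in range(steps))

I100, I1000 = integral(100.0), integral(1000.0)
print(I100, I1000, I1000 - I100)  # the gap is close to log(1000) - log(100)
```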
1,557,165
<p>Prove that $$\int_1^\infty\frac{e^x}{x (e^x+1)}dx$$ does not converge.</p> <p>How can I do that? I thought about turning it into the form of $\int_b^\infty\frac{dx}{x^a}$, but I find no easy way to get rid of the $e^x$.</p>
Lucian
93,448
<p>Since $~\dfrac{dx}x=d~\big(\ln x\big),~$ we'll just let $x=e^t.~$ This yields $\displaystyle\int_0^\infty\frac{e^{e^t}}{e^{e^t}+1}~dt.~$ Simplifying </p> <p>both sides by $e^{e^t}$ leads to $\displaystyle\int_0^\infty\frac1{1+e^{-e^t}}~dt,~$ which, as far as I'm able to see, diverges </p> <p>as shamelessly as $\displaystyle\int_0^\infty dt,~$ since the rate at which the function $e^{-e^t}$ decreases towards </p> <p>$0$ is simply mind-blowing.</p>
187,545
<p><span class="math-container">$\DeclareMathOperator\GL{GL}\DeclareMathOperator\L{\mathfrak{L}}$</span>The free Lie algebra <span class="math-container">$\L(V)$</span> generated by an <span class="math-container">$r$</span>-dimensional vector space <span class="math-container">$V$</span> is, in the language of <a href="https://en.wikipedia.org/wiki/Free_Lie_algebra" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Free_Lie_algebra</a>, the free Lie algebra generated by any choice of basis <span class="math-container">$e_1, \ldots , e_r$</span> for the vector space <span class="math-container">$V$</span>. (Work over the field <span class="math-container">${\mathbb R}$</span> or <span class="math-container">${\mathbb C}$</span>, whichever you prefer.) It is a graded Lie algebra<br /> <span class="math-container">$$\L(V) = V \oplus \L_2 (V) \oplus \L_3 (V) \oplus \ldots .$$</span> The general linear group <span class="math-container">$\GL(V)$</span> of <span class="math-container">$V$</span> acts on <span class="math-container">$\L(V)$</span> by gradation-preserving Lie algebra automorphisms. Thus each graded piece <span class="math-container">$\L_k (V)$</span> is a finite dimensional representation space for <span class="math-container">$\GL(V)$</span>. (The `weight' of <span class="math-container">$\L_k (V)$</span> is <span class="math-container">$k$</span> in the sense that <span class="math-container">$\lambda \mathrm{Id} \in \GL(V)$</span> acts on <span class="math-container">$\L_k (V)$</span> by scalar multiplication by <span class="math-container">$\lambda^k$</span>.) 
QUESTION: How does <span class="math-container">$\L_k (V)$</span> break up into <span class="math-container">$\GL(V)$</span>-irreducibles?</p> <p>I only really know that <span class="math-container">$\L_2 (V) = \Lambda ^2 (V)$</span>, which is already irreducible.</p> <p>To start the game off, perhaps some reader out there already is familiar with <span class="math-container">$\L_3 (V)$</span> as a <span class="math-container">$\GL(V)$</span>-rep, and can tell me its irreps in terms of the Young diagrams / Schur theory involving 3 symbols?</p> <p>(My motivation arises from trying to understand some details of the subRiemannian geometry <a href="https://en.wikipedia.org/wiki/Sub-Riemannian_manifold" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Sub-Riemannian_manifold</a> of the Carnot group whose Lie algebra is the free <span class="math-container">$k$</span>-step Lie algebra, which is <span class="math-container">$\L(V)$</span>-truncated after step <span class="math-container">$k$</span>. )</p>
Daniel Barter
4,002
<p>What you are describing is the algebraic operad Lie. More details can be found <a href="https://mathoverflow.net/questions/87309/an-n-dimensional-representation-of-the-symmetric-group-s-n2"> here</a>. The Whitehouse modules are exactly what you get when you take Lie onto the other side of Schur Weyl duality. The sequence of partitions John describes in the link are the Whitehouse modules tensored with the alternating representation.</p>
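The question asks for the GL(V)-irreducible decomposition, which needs character theory; as a more modest sanity check, the dimensions of the graded pieces are pinned down by Witt's formula $\dim \mathfrak{L}_k(V)=\frac1k\sum_{d\mid k}\mu(d)\,r^{k/d}$ (a standard fact about free Lie algebras, independent of the operadic viewpoint above). A quick computation, with helper names of my own:

```python
def mobius(n):
    # Moebius function by trial division
    res, m = 1, n
    p = 2
    while p*p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0  # square factor
            res = -res
        p += 1
    if m > 1:
        res = -res
    return res

def witt(r, k):
    # dimension of the degree-k piece of the free Lie algebra on r generators
    return sum(mobius(d)*r**(k//d) for d in range(1, k + 1) if k % d == 0)//k

dims_r2 = [witt(2, k) for k in range(1, 7)]
print(dims_r2)  # [2, 1, 2, 3, 6, 9]
```

For instance $\dim \mathfrak{L}_2(V)=\binom r2$ matches $\Lambda^2(V)$, and for $r=3$ the degree-$3$ piece is $8$-dimensional.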
3,451,374
<p>Given that I have a random variable <span class="math-container">$\max\{K-X, 0\}$</span> where <span class="math-container">$K&gt;0$</span> is a constant and <span class="math-container">$X$</span> is uniformly distributed on <span class="math-container">$[-K, K]$</span>, or I guess more generally with any distribution. How does one go about finding the expectation of such random variables? Some ideas come to mind, like I think it could be <span class="math-container">$P(K-X&gt;0)\cdot(K-X)+P(K-X&lt;0)\cdot 0$</span>, but that is itself a random variable. Maybe it is obtained by taking the expectation of this one, but I can't justify that.</p>
Siddhant
687,664
<p>Hint: Consider <span class="math-container">$Y = \max(X_1,X_2)$</span>; then <span class="math-container">$P(Y \leq y) = P(X_1 \leq y,X_2 \leq y)$</span>. If <span class="math-container">$X_1$</span> and <span class="math-container">$X_2$</span> are independent then <span class="math-container">$P(Y \leq y) = P(X_1 \leq y)\cdot P(X_2 \leq y) = F_Y(y).$</span> This is the distribution function of <span class="math-container">$Y$</span>, and from it we can compute the expectation of <span class="math-container">$Y$</span>.</p>
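For the specific distribution in the question there is also a shortcut: since $X\le K$ always, $\max\{K-X,0\}=K-X$, so the expectation is $K-E[X]=K$. A Monte Carlo sketch (the sample size and seed are arbitrary) agrees:

```python
import random

K = 2.0
random.seed(0)
N = 100000
# sample X uniform on [-K, K] and average max(K - X, 0)
est = sum(max(K - random.uniform(-K, K), 0.0) for _ in range(N))/N
print(est)  # close to K = 2.0
```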
3,638,028
<p>Find <span class="math-container">$f\circ f$</span> for the function <span class="math-container">$f\colon \mathbb R^2\to \mathbb R^2$</span>, <span class="math-container">$f(x,y)=(-y,x)$</span>. I know that if <span class="math-container">$f(x,y)=(-y,x)$</span>, then <span class="math-container">$f$</span> is its inverse reflected through the origin, i.e. <span class="math-container">$f(x)=f^{-1}(-x)$</span>. If this is the case then <span class="math-container">$f\circ f$</span> <span class="math-container">$= f^{-1}(-f^{-1}(-x))$</span>. I also know that it may equal <span class="math-container">$(-x,-y)$</span> but I have no idea how <span class="math-container">$f(-y,x)=(-x,-y)$</span>. I also know that it's got something to do with vectors or scalars but I'm still stuck. I need someone to explain it in detail for me.</p> <p>I am not sure how to do this question. Could someone please help me?</p>
Henry
6,460
<p>Just factor out the <span class="math-container">$(k+1)$</span> term</p> <p>or if you prefer expand and then factor:</p> <p><span class="math-container">$$\frac {k(k+1)+2(k+1)}2=\frac {k^2+3k+2}2=\frac {(k+1)(k+2)}2$$</span></p>
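The factoring step can be spot-checked over a range of integers:

```python
# k(k+1)/2 + (k+1) = (k+1)(k+2)/2; the divisions are exact since k(k+1) is even
ok = all(k*(k + 1)//2 + (k + 1) == (k + 1)*(k + 2)//2 for k in range(200))
print(ok)  # True
```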
2,281,510
<p>Why do we replace y by x and then calculate y for calculating the inverse of a function?</p> <p>So, my teacher said that in order to find the inverse of any function, we need to replace y by x and x by y and then calculate y. The reason being inverse takes y as input and produces x as output.</p> <p>My question is-</p> <p>Why do we have to calculate y after swapping? I do not get this part.</p>
StuartMN
439,545
<p>Good question. If <span class="math-container">$y=f(x)$</span> then for each <span class="math-container">$x$</span> the function <span class="math-container">$f$</span> determines a unique <span class="math-container">$y$</span>. If there is an inverse function, then for each <span class="math-container">$y$</span> the above equation determines a unique <span class="math-container">$x$</span>, so that in principle (and in simple cases in practice) one can solve the equation for <span class="math-container">$x$</span> in terms of <span class="math-container">$y$</span>, getting <span class="math-container">$x=f^{-1}(y)$</span>, which shows for each <span class="math-container">$y$</span> how to calculate the unique <span class="math-container">$x$</span>. But now if you wanted to graph the two functions on the same graph paper (and you do, since the graph is the picture of the function), then you have to use the same independent and dependent variables in both cases.</p> <p>Traditionally "x" is used for the independent and "y" for the dependent variable, so you must switch them in the equation for the inverse. Often teachers and books, trying to program you to get the right answer, tell you to switch the <span class="math-container">$x$</span> and <span class="math-container">$y$</span> at the beginning so you don't forget, or something like that. I never liked that, because it obscures what you are doing. Even switching at the end is bad if <span class="math-container">$x$</span> and <span class="math-container">$y$</span> carry different units or geometric interpretations, like <span class="math-container">$x$</span> miles and <span class="math-container">$y$</span> pounds; then you would pick neutral new variables for both <span class="math-container">$f$</span> and <span class="math-container">$f^{-1}$</span> at the very end.</p>
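A tiny worked example may help (the function $f(x)=2x+3$ is my own choice): solving $y=2x+3$ for $x$ gives the inverse rule, and the later renaming of the variable changes nothing about the function itself.

```python
def f(x):
    return 2*x + 3

def f_inv(y):
    # obtained by solving y = 2x + 3 for x
    return (y - 3)/2

samples = [-3.5, 0.0, 1.0, 7.25]
round_trip_ok = all(f_inv(f(v)) == v and f(f_inv(v)) == v for v in samples)
print(round_trip_ok)  # True: each function undoes the other
```

Writing the same rule as `f_inv(x) = (x - 3)/2` afterwards is purely a relabeling, done so that both graphs use the same horizontal variable.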
1,840,778
<p>In rectangle $ABCD$, we have $AD = 3$ and $AB = 4$. Let $M$ be the midpoint of $\overline{AB}$, and let $X$ be the point such that $MD = MX$, $\angle MDX = 77^\circ$, and $A$ and $X$ lie on opposite sides of $\overline{DM}$. Find $\angle XCD$, in degrees. </p> <p><img src="https://i.stack.imgur.com/3TsZm.png" alt="Diagram"></p> <p>Thanks!</p>
Stefan4024
67,746
<p>Find the $\angle ADM$ from the right-angled triangle. This will help you find $\angle XDC$, as $\angle XDC = 77^{\circ} - \angle CDM = 77^{\circ} - 90^{\circ} + \angle ADM$. Then use the Sine Theorem on $\triangle DMX$ to find the length of $\overline{DX}$, and then finally use the Sine Theorem on $\triangle DXC$ to find $\angle XCD$.</p> <p>This should be your outline for calculating the value of the wanted angle. I leave the calculations to you.</p>
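Carrying out this outline numerically (the coordinates $A=(0,0)$, $B=(4,0)$, $C=(4,3)$, $D=(0,3)$ are my own choice) gives $\angle XCD = 13^\circ$. This matches the observation that $MD=MC=MX=\sqrt{13}$, so $D$, $C$, $X$ lie on a circle centred at $M$, and the inscribed angle $\angle XCD$ is half the central angle $\angle XMD=180^\circ-2\cdot 77^\circ=26^\circ$.

```python
import math

A, B, C, D = (0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)
M = (2.0, 0.0)                         # midpoint of AB
MD = math.dist(M, D)                   # sqrt(13)

# Triangle MDX is isosceles (MD = MX) with base angle 77 deg at D,
# so DX = 2*MD*cos(77 deg); X sits on the side of line DM opposite A.
t = math.radians(77)
ux, uy = (M[0] - D[0])/MD, (M[1] - D[1])/MD   # unit vector D -> M
vx = math.cos(t)*ux - math.sin(t)*uy          # rotate by +77 deg
vy = math.sin(t)*ux + math.cos(t)*uy
DX = 2*MD*math.cos(t)
X = (D[0] + DX*vx, D[1] + DX*vy)

def angle(P, Q, R):
    # angle at vertex Q of triangle PQR, in degrees
    ax, ay = P[0] - Q[0], P[1] - Q[1]
    bx, by = R[0] - Q[0], R[1] - Q[1]
    dot = ax*bx + ay*by
    return math.degrees(math.acos(dot/(math.hypot(ax, ay)*math.hypot(bx, by))))

xcd = angle(X, C, D)
print(xcd)  # approximately 13
```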
340,886
<p>Suppose $x=(x_1,x_2),y = (y_1,y_2) \in \mathbb{R}^2$. I noticed that \begin{align*} \|x\|^2 \|y\|^2 - \langle x,y \rangle^2 &amp;= x_1^2y_1^2 + x_1^2 y_2^2 + x_2^2 y_1^2 + x_2^2 y_2 ^2 - (x_1^2 y_1^2 + 2 x_1 y_1 x_2 y_2 + x_2^2 y_2^2) \\ &amp;=(x_1 y_2)^2 - 2x_1 y_2 x_2 y_1 + (x_2 y_2)^2 \\ &amp;=(x_1 y_2 - x_2 y_1)^2 \end{align*} which proves the CSB inequality in dimension two. This begs the question:</p> <blockquote> <p>If $x = (x_1,\ldots,x_n),y=(y_1,\ldots,y_n) \in \mathbb{R}^n$, is there a polynomial $p \in \mathbb{R}[x_1,\ldots,x_n;y_1,\ldots,y_n]$ such that $ \|x\|^2 \|y\|^2 - \langle x,y \rangle^2 = p^2$?</p> </blockquote>
user1551
1,551
<p>When $n\ge3$, $\|x\|^2\|y\|^2 - \langle x,y\rangle^2$ is not the square of any polynomial $p(x,y)$. Keep all entries other than $x_1$ fixed and let \begin{align*} q(x_1) &amp;= \|x\|^2\|y\|^2 - \langle x,y\rangle^2,\\ \Rightarrow q\,'(x_1) &amp;= 2(x_1 \|y\|^2 - y_1\langle x,y\rangle) \end{align*} If $q$ is a squared polynomial, some zero of $q\,'$ must be a zero of $q$. However, when $x=(x_1,1,0,0,\ldots,0)$ and $y=(1,0,1,0,0,\ldots,0)$, we have $q\,'(x_1)=2x_1$ and $q(x_1)=2(x_1^2+1)-x_1^2$. So, the only zero of $q\,'$ is $x_1=0$, but $q(0)=2\neq0$. Therefore $q$ is not a squared polynomial.</p>
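Both halves of the argument are easy to verify numerically (`gram` is my name for the quantity $\|x\|^2\|y\|^2-\langle x,y\rangle^2$):

```python
import random

def gram(x, y):
    nx = sum(t*t for t in x)
    ny = sum(t*t for t in y)
    dot = sum(a*b for a, b in zip(x, y))
    return nx*ny - dot*dot

# n = 2: the quantity is exactly (x1*y2 - x2*y1)^2
random.seed(1)
two_dim_ok = True
for _ in range(200):
    x = [random.uniform(-5, 5) for _ in range(2)]
    y = [random.uniform(-5, 5) for _ in range(2)]
    if abs(gram(x, y) - (x[0]*y[1] - x[1]*y[0])**2) > 1e-8:
        two_dim_ok = False

# the n = 3 witness: x = (t, 1, 0), y = (1, 0, 1) gives q(t) = t^2 + 2,
# whose derivative vanishes only at t = 0, yet q(0) = 2 is nonzero
q0 = gram([0, 1, 0], [1, 0, 1])
print(two_dim_ok, q0)
```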
75,900
<p>I have got a following equation: </p> <pre><code>-(c - x)/Sqrt[b^2 + (c - x)^2] + x/Sqrt[a^2 + x^2] == 0. </code></pre> <p>Trying to solve it for x, so I evaluate</p> <pre><code>Solve[-(c-x)/Sqrt[b^2+(c-x)^2]+x/Sqrt[a^2+x^2] == 0,x]. </code></pre> <p>This produces</p> <blockquote> <pre><code>{{x -&gt; (a*c)/(a - b)}, {x -&gt; (a*c)/a + b)}} </code></pre> </blockquote> <p>It is obviously wrong. Well, the second solution <code>a c/(a + b)</code> is <strong>indeed</strong> the right one, but the first <code>a c/(a - b)</code> is obviously wrong. You can check it yourself with direct substitution (lets take <code>a = 2</code>, <code>b = 3</code>, <code>c = 4</code>):</p> <pre><code>(-(c - x)/Sqrt[b^2 + (c - x)^2] + x/Sqrt[a^2 + x^2])/. x -&gt; a*c/(a - b) /. a -&gt; 2 /. b -&gt; 3 /.c -&gt; 4 </code></pre> <p>It produces <code>-(8/Sqrt[17])</code>. Not zero. So wrong.</p> <p>Now lets try the same but with a c/(a + b):</p> <pre><code>(-(c - x)/Sqrt[b^2 + (c - x)^2] + x/Sqrt[a^2 + x^2])/. x -&gt; a*c/(a + b) /. a -&gt; 2 /. b -&gt; 3 /.c -&gt; 4 </code></pre> <p>Produces zero. So what's wrong with <code>Solve</code>?</p> <p>I'm using <em>Mathematica</em> 9.</p>
chuy
237
<p>You can try using the option <a href="http://reference.wolfram.com/language/ref/MaxExtraConditions.html" rel="nofollow"><code>MaxExtraConditions</code></a> </p> <blockquote> <p>Solve gives generic solutions only. Solutions that are valid only when continuous parameters satisfy equations are removed. Additional solutions can be obtained by using nondefault settings for MaxExtraConditions. </p> </blockquote> <pre><code>Solve[-(c - x)/Sqrt[b^2 + (c - x)^2] + x/Sqrt[a^2 + x^2] == 0, x, MaxExtraConditions -&gt; 1] During evaluation of Solve::useq: The answer found by Solve contains equational condition(s) {0==-b-Sqrt[b^2],0==-a-Sqrt[a^2],0==-b-Sqrt[b^2],0==a-Sqrt[a^2],0==b-Sqrt[b^2],0==-a-Sqrt[a^2],0==b-Sqrt[b^2],0==a-Sqrt[a^2],0==(-a Sqrt[Power[&lt;&lt;2&gt;&gt;] Power[&lt;&lt;2&gt;&gt;] Plus[&lt;&lt;4&gt;&gt;]]-b Sqrt[Power[&lt;&lt;2&gt;&gt;]+Times[&lt;&lt;3&gt;&gt;]])/b,&lt;&lt;11&gt;&gt;,0==b-Sqrt[b^2],0==-b-Sqrt[b^2],0==b-Sqrt[b^2],0==-b-Sqrt[b^2],0==b-Sqrt[b^2],0==b-Sqrt[b^2],0==b-Sqrt[b^2],0==b-Sqrt[b^2],0==(Sqrt[Plus[&lt;&lt;2&gt;&gt;]^2] x-c Sqrt[x^2]+x Sqrt[x^2])/(c-x)}. A likely reason for this is that the solution set depends on branch cuts of Wolfram Language functions. 
&gt;&gt; (* {{x -&gt; ConditionalExpression[ 0, (a + Sqrt[a^2] == 0 &amp;&amp; b + Sqrt[b^2] == 0 &amp;&amp; c == 0) || (a + Sqrt[a^2] == 0 &amp;&amp; c == 0 &amp;&amp; -b + Sqrt[b^2] == 0) || (b + Sqrt[b^2] == 0 &amp;&amp; c == 0 &amp;&amp; -a + Sqrt[a^2] == 0) || (c == 0 &amp;&amp; -b + Sqrt[b^2] == 0 &amp;&amp; -a + Sqrt[a^2] == 0)]}, {x -&gt; ConditionalExpression[c/2, a == -b || a == b]}, {x -&gt; ConditionalExpression[(a c)/( a - b), (a Sqrt[(b^2 (a^2 - 2 a b + b^2 + c^2))/(a - b)^2])/b + Sqrt[a^2 + (a^2 c^2)/(a - b)^2] == 0]}, {x -&gt; ConditionalExpression[(a c)/( a + b), (a Sqrt[(b^2 (a^2 + 2 a b + b^2 + c^2))/(a + b)^2])/b - Sqrt[a^2 + (a^2 c^2)/(a + b)^2] == 0]}} *) </code></pre> <p>A simpler example:</p> <pre><code>Solve[a x^2 + b x + c == 0, x] (* {{x -&gt; (-b - Sqrt[b^2 - 4 a c])/(2 a)}, {x -&gt; (-b + Sqrt[b^2 - 4 a c])/(2 a)}} *) </code></pre> <p>The above solutions are only true if <code>a!=0</code></p> <pre><code>Solve[a x^2 + b x + c == 0, x, MaxExtraConditions -&gt; 1] (* {{x -&gt; (-b - Sqrt[b^2 - 4 a c])/(2 a)}, {x -&gt; (-b + Sqrt[b^2 - 4 a c])/(2 a)}, {x -&gt; ConditionalExpression[-(c/b), a == 0]}} *) </code></pre>
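The numerical check in the question can be reproduced outside Mathematica as well; this plain-Python sketch substitutes both candidate roots at $a=2$, $b=3$, $c=4$:

```python
import math

def F(x, a, b, c):
    return -(c - x)/math.sqrt(b**2 + (c - x)**2) + x/math.sqrt(a**2 + x**2)

a, b, c = 2.0, 3.0, 4.0
x_bad = a*c/(a - b)     # -8, the spurious "solution"
x_good = a*c/(a + b)    # 1.6, the genuine one

residual_good = F(x_good, a, b, c)
residual_bad = F(x_bad, a, b, c)
expected_bad = -8/math.sqrt(17)
print(residual_good, residual_bad)  # about 0 and about -1.9403
```

The spurious root satisfies the equation only on another branch of the square root, which is exactly what the `MaxExtraConditions` conditions express.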
3,132,380
<p>To compute this I used the fact that <span class="math-container">$S(n,2) = 2^{n-1}-1$</span> and used the recurrence relation <span class="math-container">$S(n,k) = kS(n-1,k) + S(n-1,k-1)$</span>, and used induction to get that <span class="math-container">$S(n,3)=\dfrac{3^{n-1}+1}{2}-2^{n-1}$</span>.</p> <p>But is there a quicker way to do this? Is there a way to just see plainly, as in, just counting how many ways we could put <span class="math-container">$n$</span> distinguishable balls into <span class="math-container">$3$</span> indistinguishable boxes where no box is empty? I tried but it seems difficult.</p>
Mike Earnest
177,399
<p><span class="math-container">$3^{n-1}$</span> counts ternary sequences of length <span class="math-container">$n-1$</span>, symbols in <span class="math-container">$\{0,1,2\}. $</span> </p> <p><span class="math-container">$(3^{n-1}+1)/2$</span> counts ternary sequences whose first nonzero symbol is a <span class="math-container">$1$</span>, because half of the nonzero sequences have their first nonzero symbol equal to <span class="math-container">$1$</span>.</p> <p>To translate such a sequence into a partition, scan the sequence from left to right.</p> <ul> <li><p>If symbol <span class="math-container">$i$</span> is a <span class="math-container">$0$</span>, place <span class="math-container">$i+1$</span> in the same part as <span class="math-container">$1$</span>.</p></li> <li><p>For the first <span class="math-container">$i^*$</span> that the <span class="math-container">$(i^*)^{th}$</span> symbol is a <span class="math-container">$1$</span>, place <span class="math-container">$i^*+1$</span> in a new part. <br>For subsequent <span class="math-container">$i$</span> whose <span class="math-container">$i^{th}$</span> symbol is <span class="math-container">$1$</span>, place <span class="math-container">$i+1$</span> in the same part as <span class="math-container">$i^*+1$</span>. </p></li> <li><p>For the first <span class="math-container">$i^{**}$</span> that the <span class="math-container">$(i^{**})^{th}$</span> symbol is a <span class="math-container">$2$</span>, place <span class="math-container">$i^{**}+1$</span> in a new part. <br>For subsequent <span class="math-container">$i$</span> whose <span class="math-container">$i^{th}$</span> symbol is <span class="math-container">$2$</span>, place <span class="math-container">$i+1$</span> in the same part as <span class="math-container">$i^{**}+1$</span>. </p></li> </ul> <p>However, there is a small problem. 
If the ternary string only consists of <span class="math-container">$0$</span>s and <span class="math-container">$1$</span>s, then the resulting partition will only have <span class="math-container">$2$</span> parts. These strings must be subtracted. </p>
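The resulting closed form $S(n,3)=\frac{3^{n-1}+1}{2}-2^{n-1}$ survives a brute-force comparison (count assignments of the balls to three labelled boxes with every box used, then divide by $3!$):

```python
from itertools import product

def S_n_3_brute(n):
    # n labeled balls into 3 labeled boxes, every box nonempty, then /3!
    onto = sum(1 for a in product(range(3), repeat=n) if len(set(a)) == 3)
    return onto//6

def S_n_3_formula(n):
    return (3**(n - 1) + 1)//2 - 2**(n - 1)

match = all(S_n_3_brute(n) == S_n_3_formula(n) for n in range(3, 10))
print(match)  # True
```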
1,534,675
<p>Differential quantities are defined as any variable/function tending to zero ($\lim_{x\to0} x= dx$). This is basically the smallest value that we can imagine. Doesn't this mean that there is only one smallest value that we can imagine? Doesn't the existence of another differential quantity, say $dl$, also mean that $dl$ can be smaller than $dx$? For example, consider this <a href="https://i.stack.imgur.com/Y0VJH.png" rel="nofollow noreferrer">right-angled triangle</a>.</p> <p>Doesn't it mean that $dl&gt;dh$? Or do the differential quantities differ by a differential differential amount that is negligible in first order integrations?</p> <p>EDIT: It seems I was mistaken in my understanding of limits and differentials. I am not taking down this question as it may be useful to others with the same confusion.</p>
Community
-1
<p>A <a href="https://en.wikipedia.org/wiki/Differential_of_a_function" rel="nofollow">differential</a> is not a quantity, it is not "tending to zero" and the equation $\lim_{x\to0} x= dx$ is meaningless (by the way, $\lim_{x\to0} x= 0$). An expression like $dl&gt;dh$ also has no meaning.</p>
1,358,270
<p>If we have a function $f=f(r, \theta, \phi)$, where $(r, \theta, \phi)$ are spherical coordinates on $\mathbb{R}^3$, how do we compute the gradient $\nabla f$ by using the formula $$\nabla f \cdot d\vec{r} = df ?$$ Here $\vec{r}$ is the position vector and $df=\frac{\partial f}{\partial r}dr +\frac{\partial f}{\partial \theta}d\theta+\frac{\partial f}{\partial \phi}d\phi$. </p>
Alex M.
164,025
<p>Let us split the integral into $\int \limits _{- \infty} ^{-R} + \int \limits _{-R} ^R + \int \limits _R ^\infty$, where $R&gt;1$ is large enough in order for the possible roots of the denominator to be included in $[-R, R]$. Let us first analyze the third integral, for $y &gt; R$, which is a type 1 improper integral.</p> <p>Let us show that this integral is convergent using techniques for improper integrals. Let us introduce the notation: $f \sim g$ if and only if $\lim \limits _{t \to \infty} \frac {f(t)} {g(t)} \in (0, \infty)$. By the limit comparison test, if $f$ and $g$ are bounded, integrable on $[R, R'] \; \forall R' &gt; R$ and $f \sim g$ then $\int \limits _R ^\infty f(t) \Bbb d t$ and $\int \limits _R ^\infty g(t) \Bbb d t$ have the same nature. We already have $f$; we shall construct an integrable $g$ to which we shall apply the above test.</p> <p>Note that the integrand can be successively simplified by taking $\lim \limits _{y \to \infty}$ as follows:</p> <p>$$\Big| A^{- \sqrt y \sqrt {\frac x y + \Bbb i}} \frac {\sqrt y (\frac {B_1} {\sqrt y} - \sqrt {\frac x y + \Bbb i} )} {\sqrt y ( \frac {B_2} {\sqrt y}- \sqrt {\frac x y + \Bbb i} ) \sqrt y ( \frac {B_3} {\sqrt y} + \sqrt {\frac x y + \Bbb i} )} \Big| \sim \\ \Big| A^{- \sqrt y \sqrt {\frac x y + \Bbb i}} \frac 1 {\sqrt y} \frac {- \sqrt {\Bbb i}} {-\sqrt {\Bbb i} \sqrt {\Bbb i}} \Big| = \\ \Big| A^{- \sqrt y \sqrt {\frac x y + \Bbb i}} \Big| \frac 1 {\sqrt y} = \\ \Big| A^{- \sqrt y \sqrt {\frac x y + \Bbb i} + \sqrt y \sqrt {\Bbb i}} \; A^{- \sqrt y \sqrt {\Bbb i}} \Big| \frac 1 {\sqrt y} = \\ \Big| A^{\sqrt y \frac {- \frac x y} {\sqrt {\Bbb i} + \sqrt {\frac x y + \Bbb i}}} A^{- \sqrt y \sqrt {\Bbb i}} \Big| \frac 1 {\sqrt y} \sim \\ \Big| A^{- \sqrt y \sqrt {\Bbb i}} \Big| \frac 1 {\sqrt y} = \\ \frac {A^{- \frac {\sqrt y} {\sqrt 2}}} {\sqrt y} ,$$</p> <p>and this will be our $g$. It is obviously bounded. 
To show that it is integrable on $[R, \infty)$, make the substitution $u = \frac {\sqrt y} {\sqrt 2}$ to obtain $\int \limits _{\sqrt {\frac R 2}} ^\infty \frac {A^{-u}} {\sqrt 2 u} 4 u \; \Bbb d u $ which is easily integrable and has a finite value.</p> <p>An almost identical type of reasoning is used to show that the first integral is convergent too, using $\sqrt {-y}$ where we previously used $\sqrt y$.</p> <p>Finally, let us investigate the middle integral. The only possible root of the denominator is produced by the factor $B_2 - \sqrt \mu$, giving $x = B_2 ^2, y=0$. Therefore, if $x \ne B_2 ^2$, the integrand in the middle integral is continuous on a compact interval, therefore has a finite integral, so the whole big integral is convergent.</p> <p>If, on the other hand, $x = B_2 ^2$, then the middle integral will be a type 2 improper integral, having a singularity at $B_2 ^2$. If $B_1 = B_2$ then the two quantities between parentheses will simplify and the middle integral will again be finite. If $B_1 \ne B_2$ then by the limit comparison test the middle integrand will behave like $A^{-\sqrt {B_2}} \frac {B_1 - B_2} {B_3 + B_2} \frac 1 {B_2 - \sqrt \mu}$ (replace $\mu$ by $B_2 ^2$ where possible). Therefore, we must investigate</p> <p>$$\int \limits _{-R} ^R \frac 1 {B_2 - \sqrt {B_2 ^2 + \Bbb i y}} \Bbb d y = \int \limits _{-R} ^R \frac {B_2 + \sqrt {B_2 ^2 + \Bbb i y}} {- \Bbb i y} \Bbb d y .$$</p> <p>Note that towards $y=0$ the integrand behaves like $\frac 1 y$ which is clearly not integrable. Therefore, in this case the middle integral is divergent, and so is the whole integral.</p> <p>To summarize:</p> <ul> <li>if $x \ne B_2 ^2$ the integral is convergent;</li> <li>if $x = B_2 ^2$ and $B_1 = B_2$ the integral is convergent;</li> <li>if $x = B_2 ^2$ and $B_1 \ne B_2$ the integral is divergent.</li> </ul>
4,218
<p>I could imagine a system of categorizing the questions that would work alongside the current tagging system. If you select the "homework" tag (or some special tag or option), it would give you the option to specify which textbook problem your question pertains to in terms of title/chapter/section/problem number. Maybe the site could present a list of textbooks that have already had one solution entered and the user could navigate through a hierarchical tree of problems.</p> <p>Now the users would be able to search or browse by textbook title, go to the chapter/section/problem and see if there is a solution present from someone else who has asked a question about it before.</p> <p>This feature would make the site a lot more organized and eliminate redundant questions, or at least make it possible to find all the questions related to a specific problem in a textbook by providing explicit links. Math Stackexchange could eventually turn into the authoritative source of solutions for textbook problems. I think it would make the site a little less intimidating, too, if it was easier to use without having to ask questions all the time. Searching for mathematical symbols is pretty hard to do.</p>
Martin Sleziak
8,297
<p>This might be closer to a longer comment than to an answer - but I hope it's ok to post it here anyway.</p> <hr> <p>I have been more than once in the situation that I stumbled upon a problem when reading a mathematical book. Usually I thought that I am either missing an important point in some proof or that the authors have there an omission or a mistake. I would love to be able to find out whether some reader of that book was in a similar situation before. Hence it would be very nice to have somewhere something as list of discussions concerning the various books organized by the book. (The questions/discussions would be probably very similar to the questions appearing here at MSE and sometimes at MO: Why this part of proof holds? Could you explain this step in the proof? Is this a mistake in this book?)</p> <p>Related MO thread: <a href="https://mathoverflow.net/questions/3038/errata-database/">Errata database?</a></p> <p>What I write above is to some extent similar to OP's suggestion, although he proposes to restrict this just to homework questions.</p> <hr> <p>Having said that I find idea of something like this very tempting, <strong>I do not think that MSE is the best place for creating and maintaining this type of database.</strong> Nevertheless, I believe that MSE can serve the same purpose even without additional metadata and sorting questions according to books from which their originated. It is perhaps a slightly more difficult to find the question related to the particular book. Adding metadata would made this easier, but I think <strong>amount of work which would go into it is not proportional to the profit it would bring the users.</strong></p>
288,974
<p>Alright this may be really funny but I want to know why this is wrong. We often come across identities which we prove by multiplying both the sides of the identity by a certain entity but why don't we multiply it by $0$. That way every identity will be proved in one single line. That is so stupid. I mean, by that way we may also say that $1=2=3$. I know it is wrong. But why? I mean if we can multiply both the sides by $2$ then why not by $0$. For example, consider the following trigonometric identity :</p> <p>Prove the identity : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta$</p> <h2>Usual way</h2> <p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p> <p>$\displaystyle \implies {\sin^2 \theta \over \cos^2 \theta } = {\tan^2 \theta \cos ^2 \theta \over \cos^2 \theta}$ (multiplying both the sides by $\displaystyle 1 \over \cos^2 \theta$)</p> <p>$\implies \tan ^2 \theta = \tan^2\theta$</p> <p>$\implies LHS=RHS$</p> <p>$\therefore proved$</p> <h2>Funny way</h2> <p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p> <p>$\displaystyle \implies {\sin^2 \theta \times 0} = {\tan^2 \theta \cos ^2 \theta \times 0}$ (multiplying both the sides by $0$)</p> <p>$\implies 0 = 0$</p> <p>$\therefore proved$</p> <p>Please explain why this is wrong.</p>
Todd Wilcox
41,686
<p>The first proof (unlike the second) immediately leads the reader to the following, more definite style of writing essentially the same thing: $$ \sin^2(x)=\sin^2(x)\frac{\cos^2(x)}{\cos^2(x)}=\frac{\sin^2(x)}{\cos^2(x)}\cos^2(x)=\tan^2(x)\cos^2(x)\text{.} $$ This is a more explicit demonstration of what I believe Andre Nicolas is talking about in his comment and Tunococ in his answer. In terms of reversing steps, notice that we can easily do this backwards: $$ \tan^2(x)\cos^2(x)=\frac{\sin^2(x)}{\cos^2(x)}\cos^2(x)=\sin^2(x)\text{.} $$</p>
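As a toy illustration of the reversibility point above (my own addition, not part of the original answer): scaling both sides by a nonzero constant preserves both equality and inequality and can be undone, while scaling by $0$ collapses everything to $0=0$ and proves nothing.

```python
# Multiplying an equation by a nonzero constant is reversible;
# multiplying by 0 is not, so it cannot serve as a proof step.
def scaled_equal(x, y, c):
    """Whether the scaled equation c*x == c*y holds."""
    return c * x == c * y

assert scaled_equal(3, 3, 2)        # a true equation stays true
assert not scaled_equal(3, 5, 2)    # a false equation stays false
assert scaled_equal(3, 5, 0)        # c = 0 "proves" 3 = 5 -- the invalid step
```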
288,974
<p>Alright this may be really funny but I want to know why this is wrong. We often come across identities which we prove by multiplying both the sides of the identity by a certain entity but why don't we multiply it by $0$. That way every identity will be proved in one single line. That is so stupid. I mean, by that way we may also say that $1=2=3$. I know it is wrong. But why? I mean if we can multiply both the sides by $2$ then why not by $0$. For example, consider the following trigonometric identity :</p> <p>Prove the identity : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta$</p> <h2>Usual way</h2> <p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p> <p>$\displaystyle \implies {\sin^2 \theta \over \cos^2 \theta } = {\tan^2 \theta \cos ^2 \theta \over \cos^2 \theta}$ (multiplying both the sides by $\displaystyle 1 \over \cos^2 \theta$)</p> <p>$\implies \tan ^2 \theta = \tan^2\theta$</p> <p>$\implies LHS=RHS$</p> <p>$\therefore proved$</p> <h2>Funny way</h2> <p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p> <p>$\displaystyle \implies {\sin^2 \theta \times 0} = {\tan^2 \theta \cos ^2 \theta \times 0}$ (multiplying both the sides by $0$)</p> <p>$\implies 0 = 0$</p> <p>$\therefore proved$</p> <p>Please explain why this is wrong.</p>
dhpratik
60,095
<p>The reason behind this is a basic primary-school rule: to prove <code>x = y</code> (for nonzero <code>y</code>), this condition should also hold true:</p> <pre><code>x/y = 1
</code></pre> <p>So if you multiply both sides by <code>0</code>:</p> <pre><code>x*0 = y*0
0 = 0
then lhs = rhs
</code></pre> <p>But how do you answer this: <code>lhs/rhs = 1</code>? That would require <code>0/0</code>, which is undefined.</p>
1,303,577
<p>I have started to learn about the properties of the <a href="http://en.wikipedia.org/wiki/Quadratic_residue" rel="nofollow">quadratic residues modulo n (link)</a> and reviewing the list of quadratic residues modulo $n$ $\in [1,n-1]$ I found the following possible property:</p> <blockquote> <p>(1) $\forall\ p \gt 3\in \Bbb P, \ (number\ of\ Quadratic\ Residues\ mod\ kp)=p\ when\ k\in\{2,3\}$</p> </blockquote> <p>In other words: (a) if $n$ is $2p$ or $3p$, where $p$ is a prime number greater than $3$, then the total number of the quadratic residues modulo $n$ is exactly the prime number $p$. (b) And every prime number $p$ is the number of quadratic residues modulo $2p$ and $3p$.</p> <blockquote> <p>E.g.:</p> <p>$n=22$, the list of quadratic residues is $\{1,3,4,5,9,11,12,14,15,16,20\}$, the total number is $11 \in \Bbb P$ and $22=11*2$.</p> <p>$n=33$, the list of quadratic residues is $\{1,3,4,9,12,15,16,22,25,27,31\}$, the total number is $11 \in \Bbb P$ and $33=11*3$.</p> </blockquote> <p>I did a quick Python test initially in the interval $[1,10^4]$, no counterexamples found. 
Here is the code:</p> <pre><code>def qrmn():
    from sympy import is_quad_residue
    from gmpy2 import is_prime

    def list_qrmn(n):
        lqrmn = []
        for i in range(1, n):
            if is_quad_residue(i, n):
                lqrmn.append(i)
        return lqrmn

    tested1 = 0
    tested2 = 0
    for n in range(4, 10000):
        lqrmn = list_qrmn(n)
        # Test 1
        if is_prime(len(lqrmn)):
            if n == 3*len(lqrmn) or n == 2*len(lqrmn):
                print("SUCCESS1 " + str(n) + " len " + str(len(lqrmn)) + " div " + str(int(n/len(lqrmn))))
                tested1 = tested1 + 1
        # Test 2
        if n == 3*len(lqrmn) or n == 2*len(lqrmn):
            if is_prime(len(lqrmn)):
                print("SUCCESS2 " + str(n) + " len " + str(len(lqrmn)) + " div " + str(int(n/len(lqrmn))))
                tested2 = tested2 + 1
            else:
                print("ERROR2 " + str(n) + " len " + str(len(lqrmn)) + " div " + str(int(n/len(lqrmn))))

    if tested1 == tested2:
        print("\nTEST SUCCESS: iff condition is true")
    else:
        print("\nTEST ERROR: iff condition is not true: " + str(tested1) + " " + str(tested2))

qrmn()
</code></pre> <p>I am sure this is due to a well known property of the quadratic residues modulo $n$, but my knowledge is very basic (self learner) and, reviewing online, I cannot find a property of the quadratic residues that would tell me whether this possible property is true or not.</p> <p>I would like to share the following questions with you:</p> <blockquote> <ol> <li><p>Is (1) a trivial property due to the definition of the quadratic residue modulo n?</p></li> <li><p>Is there a counterexample?</p></li> </ol> </blockquote> <p>Thank you!</p>
Asvin
68,188
<p>More generally, let $N(n)$ denote the number of quadratic residues (squares, including $0$) in $\mathbb Z/n\mathbb Z$.</p>

<p>Then for $n,m$ coprime, $N(nm) = N(n)N(m)$. This will follow from the Chinese remainder theorem ($\mathbb Z/mn\mathbb Z \cong \mathbb Z/m\mathbb Z\times \mathbb Z/n\mathbb Z$) and basic group theory. Try working through the details.</p>

<p>Example: for $p$ an odd prime, $N(p) = (p+1)/2$, since $0$ is a square; also $N(2) = |\{0,1\}| = 2$ and $N(3) = |\{0,1\}| = 2$. Hence for $n = 2p$ I get $N(2p) = 2\cdot(p+1)/2 = p+1$ rather than $p$, since I am including $0$ and the OP isn't.</p>
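A brute-force Python check of the multiplicativity claim (my addition, not part of the answer; it counts squares in $\mathbb Z/n\mathbb Z$ with $0$ included):

```python
from math import gcd

def num_squares(n):
    """Number of distinct values of x^2 mod n, 0 included."""
    return len({(x * x) % n for x in range(n)})

# N(nm) = N(n) * N(m) for coprime n, m
for n in range(2, 30):
    for m in range(2, 30):
        if gcd(n, m) == 1:
            assert num_squares(n * m) == num_squares(n) * num_squares(m)

# For an odd prime p, N(p) = (p + 1) / 2, hence N(2p) = p + 1
for p in [5, 7, 11, 13, 17]:
    assert num_squares(p) == (p + 1) // 2
    assert num_squares(2 * p) == p + 1
```

The OP's count of $11$ residues mod $22$ matches: with $0$ included it becomes $12 = p + 1$ for $p = 11$.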
1,612,353
<blockquote> <p>In how many ways out of $20$ students you can select $1$ treasurer, $1$ secretary and $3$ more representatives?</p> </blockquote> <p>I understand that for single selections I can multiply with the availability of the persons. Like for treasurer I can have $20$ options, for secretary then I have $19$ options. So I can select a secretary and a treasurer in $20\cdot 19$ ways. but for $3$ more representatives? Should I multiply up to $16$? This is exactly where I am stuck.</p>
N. F. Taussig
173,070
<p>You are correct that since the treasurer can be selected in $20$ ways and the secretary can be selected in $19$ ways, the number of ways of selecting the treasurer and secretary is $20 \cdot 19$. </p> <p>It remains to select three representatives. There are $18$ ways to pick the first representative, $17$ ways to pick the second representative, and $16$ ways to pick the third representative. However, $18 \cdot 17 \cdot 16$ overcounts the number of ways we can pick the representatives. Suppose that the three selected representatives are Amelia, Bruce, and Cynthia. Notice that the following selections produce the same representatives:</p> <p>Amelia, Bruce, Cynthia<br> Amelia, Cynthia, Bruce<br> Bruce, Amelia, Cynthia<br> Bruce, Cynthia, Amelia<br> Cynthia, Amelia, Bruce<br> Cynthia, Bruce, Amelia</p> <p>For any given selection of three particular representatives, there are three ways to pick the first representative, two ways to pick the second, and one way to pick the third, so there are $3! = 3 \cdot 2 \cdot 1 = 6$ orders in which we can pick the same three representatives. Thus, the number of ways of selecting the three representatives is $$\frac{18 \cdot 17 \cdot 16}{3 \cdot 2 \cdot 1} = \frac{18 \cdot 17 \cdot 16}{3!} \cdot \frac{15!}{15!} = \frac{18!}{3!15!} = \binom{18}{3}$$ where $$\binom{n}{k} = \frac{n!}{k!(n - k)!}$$ is the number of ways of selecting a subset of $k$ elements from a set of $n$ elements, that is, of making an unordered selection of $k$ of the $n$ elements. The number $\binom{n}{k}$ is called the <a href="https://en.wikipedia.org/wiki/Binomial_coefficient" rel="nofollow">binomial coefficient</a> since it is the coefficient of $x^{n - k}y^k$ in the binomial expansion $(x + y)^n$. </p> <p>Getting back to your question, the number of ways of selecting a committee consisting of a treasurer, secretary, and three representatives from a class of $20$ students is $$20 \cdot 19 \cdot \binom{18}{3}$$</p>
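The final count can be confirmed in a couple of lines of Python (my addition; `math.comb` needs Python 3.8+):

```python
from math import comb, factorial

# treasurer (20 ways), secretary (19 ways), then 3 unordered representatives
ways = 20 * 19 * comb(18, 3)

# comb(18, 3) is exactly 18*17*16 divided by 3!
assert comb(18, 3) == (18 * 17 * 16) // factorial(3) == 816
print(ways)  # 310080
```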
801,562
<p>We consider that $R$ is a commutative ring with $1_R$.</p>

<p>Each $c \in R^*$ (seen as a constant polynomial) divides each polynomial of $R[X]$.</p>

<p>($c \in R^*$ means that $c$ is invertible.)</p>

<p>I haven't understood it. Could you explain it to me?</p>

<p>Does it mean that if we have a polynomial $p(X) \in R[X]$, then $\frac{p}{c} \in R[X]$? If yes, why is that?</p>
Bill Dubuque
242
<p><strong>Hint</strong> $\ $ An invertible element remains invertible in every extension ring (having the same $\,\color{#c00}1)$. Therefore $\,c c' = \color{#c00}1\,$ for $\,c'\in R\,$ yields $\, f = \color{#C00}1\cdot f = (cc')f = c(c'f),\,$ so $\,c\mid f\,$ in $\,R[x].$</p>
4,491,251
<p>Per the question title, what's the easiest way to evaluate the following? <span class="math-container">$$\int_0^{\pi/6}\sec x\,dx$$</span></p> <p>You can do something like computing the derivatives of <span class="math-container">$\sec x$</span> and <span class="math-container">$\tan x$</span>, adding them up, computing the derivative of the logarithm of the absolute value of the sum of <span class="math-container">$\sec x$</span> and <span class="math-container">$\tan x$</span>, and then completing the integration-by-parts, getting the final answer of <span class="math-container">$\ln(\sqrt{3})$</span>.</p> <p>But that feels like pulling something out of thin air.</p> <p>I'm wondering if there's an easier way to compute the integral.</p>
egreg
62,967
<p>It's not difficult (and a standard exercise) to compute <span class="math-container">$$ \int\frac{2}{\sin2t}\,dt=\int\frac{\cos^2t+\sin^2t}{\sin t\cos t}\,dt= \int\Bigl(\frac{\cos t}{\sin t}+\frac{\sin t}{\cos t}\Bigr)\,dt=\log\lvert\tan t\rvert+c $$</span> How do you transform a cosine into a sine? Easy, with the complementary angle. So perform <span class="math-container">$x=\pi/2-2t$</span> and your integral becomes <span class="math-container">$$ \int_{\pi/4}^{\pi/6} -\frac{2}{\sin2t}\,dt=\Bigl[\log\tan t\Bigr]_{\pi/6}^{\pi/4}=-\log(1/\sqrt{3})=\log\sqrt{3} $$</span></p>
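A quick numerical sanity check of the result (my addition): approximate $\int_0^{\pi/6}\sec x\,dx$ by composite Simpson's rule and compare with $\log\sqrt 3$.

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

val = simpson(lambda x: 1 / math.cos(x), 0, math.pi / 6)
assert abs(val - math.log(math.sqrt(3))) < 1e-10
```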
1,249,707
<blockquote> <p>Assume <span class="math-container">$V$</span> to be a finite dimensional vector space. Define the algebraic multiplicity <span class="math-container">$am(\lambda)$</span> of an eigenvalue <span class="math-container">$\lambda$</span> of a linear operator <span class="math-container">$T:V\to V$</span> as the maximum index of the factor <span class="math-container">$(t-\lambda)$</span> appearing in the characteristic polynomial of <span class="math-container">$T$</span>. Also define <span class="math-container">$G_\lambda=\{v\in V:(T-\lambda I)^kv=0\text{ for some }k\ge 1\}$</span>. I want to show that <span class="math-container">$\dim(G_\lambda)=am(\lambda)$</span> without using Jordan Form.</p> </blockquote> <p>Sheldon Axler in &quot;Linear Algebra Done Right&quot; specifically defined the &quot;multiplicity&quot; of <span class="math-container">$\lambda$</span> as <span class="math-container">$\dim(G_\lambda)$</span>, hence I could not get any help from it. I am not very conversant with the properties of the Jordan form, hence I would like a more elementary proof. Please note that I cannot use the decomposition of <span class="math-container">$V$</span> into a direct sum of generalized eigenspaces because I will need to prove that indeed, <span class="math-container">$am(\lambda)=\dim(G_\lambda)$</span> to prove this.</p> <p>I started by assuming that <span class="math-container">$f(t)=(t-\lambda)^kp(t)$</span> where <span class="math-container">$f$</span> is the characteristic polynomial of <span class="math-container">$T$</span> and <span class="math-container">$p$</span> is any other polynomial not containing the factor <span class="math-container">$(t-\lambda)$</span>. So I will have to show that <span class="math-container">$\dim(G_\lambda)=k$</span>.</p> <p>By the Cayley-Hamilton Theorem, <span class="math-container">$f(T)=0\implies (T-\lambda I)^kp(T)=0$</span>, hence <span class="math-container">$p(T)v\in G_\lambda \forall v\in V$</span>. 
Now consider the collection <span class="math-container">$\{p(T)v,(T-\lambda I)p(T)v,...,(T-\lambda I)^{k-1}p(T)v\}$</span> for a nonzero <span class="math-container">$v\in V$</span> which I know is linearly independent (based on the previous exercise) and hence <span class="math-container">$\dim(G_\lambda)\geq k$</span>.</p> <p>How will the other direction follow?</p>
Marc van Leeuwen
18,880
<p>Don't use the Cayley-Hamilton theorem; it is less elementary than what you need. And in any case in Axler's book it (8.20) <em>follows</em> results to the effect you are asking about (8.10, 8.18). In fact Axler <em>defines</em> the (algebraic) multiplicity of <span class="math-container">$\lambda$</span> as <span class="math-container">$\dim(G_\lambda)$</span>, and <em>then</em> goes on to define the characteristic polynomial to be the product over eigenvalues<span class="math-container">$~\lambda$</span> of <span class="math-container">$(X-\lambda)^{\dim(G_\lambda)}$</span> (which is a crazy thing to do, born of <a href="http://www.axler.net/DwD.html" rel="nofollow noreferrer">irrational fear of determinants</a>, but) which makes the question you ask void of content in the context of that book.</p> <p>I suppose you know that given a <em>direct sum</em> decomposition into <em>invariant subspaces</em>, the characteristic polynomial of <span class="math-container">$T$</span> is the product of those of its restrictions to those subspaces. I will also assume you know the characteristic polynomial of the restriction of <span class="math-container">$T$</span> to <span class="math-container">$G_\lambda$</span> is <span class="math-container">$(X-\lambda)^{\dim(G_\lambda)}$</span>. Both things are quite obvious if you define the characteristic polynomial using determinants (for the second part use that the restriction has a triangular matrix on an appropriate basis). Now you will be done if you can show that <span class="math-container">$G_\lambda$</span> is a factor in a direct sum decomposition into two invariant subspaces, where (the restriction of <span class="math-container">$T$</span> to) the other factor does not have <span class="math-container">$\lambda$</span> as an eigenvalue.</p> <p>There are two approaches to proving that fact. 
The one related to the primary decomposition theorem is to write the minimal polynomial~<span class="math-container">$\mu$</span> of<span class="math-container">$~T$</span> (or any polynomial annihilating <span class="math-container">$T$</span>) as product <span class="math-container">$\mu=(X-\lambda)^dQ$</span> of a power of <span class="math-container">$X-\lambda$</span> and a factor<span class="math-container">$~Q$</span> relatively prime to it; then using Bézout coefficients <span class="math-container">$B,C$</span> of these two factors (so <span class="math-container">$1=B(X-\lambda)^d+CQ$</span>) one can find certain polynomials of <span class="math-container">$T$</span> (namely <span class="math-container">$(CQ)[X:=T]$</span> and <span class="math-container">$\def\Id{\mathrm{id}}B[X:=T](T-\lambda\Id)^d$</span>) that give projections onto the kernels of <span class="math-container">$(T-\lambda\Id)^d$</span> respectively <span class="math-container">$Q[X:=T]$</span>, and which kernels therefore form a direct sum decomposition. The kernel associated to the factor <span class="math-container">$(X-\lambda)^d$</span> is in fact <span class="math-container">$G_\lambda$</span> (that approach does not even explicitly depend on the space being finite dimensional, though having an annihilating polynomial in the first place does depend on that).</p> <p>But there is a more elementary approach: if <span class="math-container">$G_\lambda$</span> is the kernel of <span class="math-container">$(T-\lambda\Id)^d$</span> with <span class="math-container">$d=\dim(G_\lambda)$</span>, then the <em>image</em> of <span class="math-container">$(T-\lambda\Id)^d$</span> provides an invariant complementary factor. 
The intersection has dimension zero, since one would otherwise have vectors that are not annihilated by <span class="math-container">$(T-\lambda\Id)^d$</span>, but which <em>are</em> annihilated by a higher power of <span class="math-container">$T-\lambda\Id$</span>, which contradicts what you ought to know of generalised eigenspaces. But then the two subspaces are complementary by the rank-nullity theorem, and form a direct sum decomposition.</p>
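A concrete numerical illustration (my addition, not part of the answer): for a matrix with characteristic polynomial $(t-2)^2(t-3)$, the generalized eigenspace $G_2 = \ker(T-2I)^{\dim V}$ indeed has dimension $2$, matching the algebraic multiplicity.

```python
import numpy as np

# characteristic polynomial of A is (t-2)^2 (t-3)
A = np.array([[2., 1., 0.],
              [0., 2., 0.],
              [0., 0., 3.]])
n = A.shape[0]
lam = 2.0
M = np.linalg.matrix_power(A - lam * np.eye(n), n)
dim_G = n - np.linalg.matrix_rank(M)   # dim ker (A - lam I)^n
assert dim_G == 2                      # equals the algebraic multiplicity of 2
```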
4,574,692
<p>The theorem goes: Let <span class="math-container">$A_{1}, A_{2} ... \in \mathcal{A}$</span> with <span class="math-container">$A_{N}$</span> increasing to <span class="math-container">$\Omega$</span> and <span class="math-container">$\mu (A_{N}) &lt; \infty$</span> for all <span class="math-container">$N \in \mathbb{N}$</span>. For measurable <span class="math-container">$f, g: \Omega \xrightarrow{} E$</span> where <span class="math-container">$E$</span> is a metric space, define</p> <p><span class="math-container">$\tilde{d}(f, g) := \sum_{N = 1}^{\infty} \frac{2^{-N}}{1 + \mu(A_{N})} \int_{A_{N}} \text{min}\{1, d(f(\omega), g(\omega))\} d\mu$</span>.</p> <p>Then <span class="math-container">$\tilde{d}$</span> is a metric that induces convergence in measure: if <span class="math-container">$f, f_{1}, ...$</span> are measurable, then <span class="math-container">$f_{n} \xrightarrow{} f$</span> in measure iff <span class="math-container">$\tilde{d}(f, f_{n}) \xrightarrow{} 0$</span>.</p> <p>In the proof, the author defines <span class="math-container">$\tilde{d}_{N}(f, g) = \int_{A_{N}} \text{min}\{1, d(f(\omega), g(\omega))\} d\mu$</span>.</p> <p>He says <span class="math-container">$\tilde{d}(f, f_{n}) \xrightarrow{} 0 $</span> iff <span class="math-container">$\tilde{d}_{N}(f, f_{n}) \xrightarrow{} 0 $</span> for all <span class="math-container">$N$</span>. Why do we have this? First taking the infinite sum then taking the limit is the same as first taking the limit then sum? How to justify this?</p>
geetha290krm
1,064,504
<p>Hint: <span class="math-container">$$\tilde{d}(f, g) $$</span> <span class="math-container">$$ = \sum_{N = m+1}^{\infty} \frac{2^{-N}}{1 + \mu(A_{N})} \int_{A_{N}} \text{min}\{1, d(f(\omega), g(\omega))\} d\mu $$</span> <span class="math-container">$$+\sum_{N = 1}^{m} \frac{2^{-N}}{1 + \mu(A_{N})} \int_{A_{N}} \text{min}\{1, d(f(\omega), g(\omega))\} d\mu$$</span> and the first term is at most <span class="math-container">$\sum_{N = m+1}^{\infty} 2^{-N}\frac{\mu(A_N)}{1 + \mu(A_{N})}&lt;\sum_{N = m+1}^{\infty} 2^{-N}=2^{-m}$</span>, since <span class="math-container">$\text{min}\{1, d(f(\omega), g(\omega))\}\le 1$</span> gives <span class="math-container">$\int_{A_N} \text{min}\{1, d\}\, d\mu \le \mu(A_N)$</span> and <span class="math-container">$\frac{\mu(A_N)}{1+\mu(A_N)} &lt; 1$</span>. I hope you can complete the proof using this.</p>
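The geometric-tail estimate behind the hint can be checked numerically (my addition): $\sum_{N=m+1}^{\infty} 2^{-N} = 2^{-m}$, so any termwise bound by $2^{-N}$ forces the tail of $\tilde d$ below $2^{-m}$.

```python
# tail of the geometric series: sum_{N=m+1}^inf 2^{-N} equals 2^{-m}
for m in range(1, 20):
    tail = sum(2.0 ** -N for N in range(m + 1, 200))  # 200 terms ~ the full tail
    assert abs(tail - 2.0 ** -m) < 1e-12
```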
3,055,324
<p>I need some help with constructing a proof for the following statement: $ \frac{P_1 P_2}{hcf(P_1,P_2)} = lcm(P_1,P_2)$, where $P_1$ and $P_2$ are polynomials with real coefficients.</p>

<p>I know how to do the same for integers using prime factors and their exponents, but I am not sure where to go with polynomials.</p>
Bill Dubuque
242
<p>This proof works in any gcd domain. We use the <span class="math-container">$\,\overbrace{{\rm involution}\,\ x' :=\, ab/x}^{\rm\large cofactor\ duality\ \ }\ $</span> on the divisors of <span class="math-container">$\rm\:ab,\,$</span> which exposes <span class="math-container">$\rm\color{#c00}{cofactor\ reflection}$</span> <span class="math-container">$\rm\ x\mid y\color{#c00}\iff y'\mid x',\ $</span> by <span class="math-container">${\,\ \rm\dfrac{y}x = \dfrac{x'}{y'} \ }$</span> by <span class="math-container">$\rm\, \ yy'\! = ab = xx'.\, $</span> Thus</p> <p><span class="math-container">$$\begin{align}\rm c\mid\gcd(a,b)\!\iff&amp;\rm\ c\mid a,b\\[3px] \color{#c00}\iff&amp;\ \rm b',a'\mid c'\\[3px] \iff &amp;\ \rm lcm(b',a')\mid c'\\[3px] \color{#c00}\iff &amp;\ \rm c\mid lcm(b',a')' \\ {\rm Thus}\rm\quad \gcd(a,b)\, \ \cong\ \,&amp;\rm \, lcm(b',a')'\,=\ \dfrac{ab}{lcm(a,b)} \end{align}\ $$</span></p> <p>i.e. having the same set of divisors <span class="math-container">$\,c,\,$</span> they divide each other (i.e. they are associate <span class="math-container">$\cong\,)$</span></p> <p>Above the red arrows are <span class="math-container">$\rm\color{#c00}{cofactor\ reflections}$</span> and the black arrows are the <a href="https://math.stackexchange.com/a/3356212/242">definition (or universal property) of gcd and lcm</a>.</p>
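The identity $\gcd(a,b)\cdot\operatorname{lcm}(a,b)=ab$ established above can be spot-checked over the integers, one instance of a gcd domain (my addition; `math.lcm` needs Python 3.9+):

```python
from math import gcd, lcm

# gcd(a, b) * lcm(a, b) == a * b for positive integers
for a in range(1, 60):
    for b in range(1, 60):
        assert gcd(a, b) * lcm(a, b) == a * b
```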
188,947
<p>I have a rather complex looking plot which is a combination of graphics objects, generated by </p> <pre><code>data = Import["o2ld.csv"];
data2 = Import["stemld.csv"];
a1 = ListContourPlot[data, Contours -&gt; 25, Axes -&gt; False,
   PlotRangePadding -&gt; 0, Frame -&gt; False,
   ColorFunction -&gt; "DarkRainbow", PlotRange -&gt; {0.001, 100}];
a2 = ListPlot3D[data2, ClippingStyle -&gt; None, Mesh -&gt; None,
   ColorFunction -&gt; "TemperatureMap", PlotRange -&gt; {0.001, 1},
   PlotStyle -&gt; Opacity[0.33]];
level = -0.01;
gr = Graphics3D[{Texture[a1], EdgeForm[],
    Polygon[{{0, 0, level}, {200, 0, level}, {200, 200, level}, {0, 200, level}},
     VertexTextureCoordinates -&gt; {{0, 0}, {1, 0}, {1, 1}, {0, 1}}]},
   Lighting -&gt; "Neutral"];
out = Show[a2, gr, PlotRange -&gt; All, BoxRatios -&gt; {1, 1, 1},
  BoxStyle -&gt; Directive[Dashed, Black, Thin],
  ViewPoint -&gt; {-0.35, -2, 1.5},
  AxesLabel -&gt; {"x", "y", "Proportion"},
  LabelStyle -&gt; Directive[Blue, Bold],
  Ticks -&gt; {{{0, 0}, {40, 0.5}, {80, 1}, {120, 1.5}, {160, 2}, {200, 2.5}},
    {{0, 0}, {40, 0.5}, {80, 1}, {120, 1.5}, {160, 2}, {200, 2.5}},
    {0, 0.5, 1}}, ImageSize -&gt; Large]
</code></pre> <p>which produces graphics like: <a href="https://i.stack.imgur.com/WPzT8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WPzT8.png" alt="enter image description here"></a></p> <p>What I would like to do is add a bar legend to the right of the plot, based on the values from data (a1). I can create the precise bar easily enough with </p> <pre><code>vr = BarLegend[{"DarkRainbow", {0, Max[data]}}, 25];
</code></pre> <p>But as this is not a graphics object, I cannot get it to display with Show. Nor can I seem to get it working with epilog or inset. Does anyone have any idea how to include the bar to the right? 
I can included csv files for ease of recreation if required, downloadable in a RAR <a href="http://s000.tinyupload.com/download.php?file_id=89424804368267636644&amp;t=8942480436826763664489404" rel="nofollow noreferrer">here</a>.</p>
Robert Jacobson
27,662
<p>This is one of those "I'm not even sure how to ask this" kind of questions. You could be asking any (or all) of the following questions.</p> <h2>How do I add a legend to appear on a <code>ListContourPlot</code>?</h2> <p>All you need to do is add <code>PlotLegends-&gt;Automatic</code> to your <code>ListContourPlot</code>:</p> <pre><code>a1=ListContourPlot[data, Contours-&gt;25,Axes-&gt;False, PlotRangePadding-&gt;0, Frame-&gt;False,ColorFunction-&gt;"DarkRainbow", PlotRange-&gt;{0.001,100}, PlotLegends-&gt;Automatic]; </code></pre> <p><img src="https://i.stack.imgur.com/YlOto.jpg" width="400" height="400"></p> <p>The legend will be drawn on the same 3D surface as its associated plot.</p> <h2>How do I make a <code>BarLegend</code> object into a <code>Graphics</code> object?</h2> <p>It already is a <code>Graphics</code> object in <code>StandardForm</code>. From the docs:</p> <blockquote> <p>BarLegend is displayed in StandardForm as a graphics object.</p> </blockquote> <h2>How do I place a <code>Graphics</code> object in the <code>Epilog</code> of a <code>Graphics3D</code> object?</h2> <p>Use <code>Epilog</code> with <code>Inset</code> to position the object.</p> <pre><code>vr = BarLegend[{"DarkRainbow", {0, Max[data]}}, 25]; out = Show[a2, gr, PlotRange -&gt; All, BoxRatios -&gt; {1, 1, 1}, BoxStyle -&gt; Directive[Dashed, Black, Thin], ViewPoint -&gt; {-0.35, -2, 1.5}, AxesLabel -&gt; {"x", "y", "Proportion"}, LabelStyle -&gt; Directive[Blue, Bold], Ticks -&gt; {{{0, 0}, {40, 0.5}, {80, 1}, {120, 1.5}, {160, 2}, {200, 2.5}}, {{0, 0}, {40, 0.5}, {80, 1}, {120, 1.5}, {160, 2}, {200, 2.5}}, {0, 0.5, 1}}, ImageSize -&gt; Large, Epilog -&gt; Inset[vr, {Right, Center}, {Right, Center}]] </code></pre> <p><img src="https://i.stack.imgur.com/Jzy2K.jpg" width="400" height="400"></p> <p>The object will sit in front of the 3D object and will not move as the 3D object is rotated.</p> <h2>How do I place a <code>Graphics</code> object wherever I want in the scene of a 
<code>Graphics3D</code> object?</h2> <p>It <em>should</em> be the same as you did with the <code>ListContourPlot a1</code>, but the <code>ListPlot3D a2</code> disappears every time I try it. Must be a bug.</p> <pre><code>vr = BarLegend[{"DarkRainbow", {0, Max[data]}}, 25]; legendLeft = 190; legendWidth = 40; legendHeight = 200; legendDepth = 100; legend3D = Graphics3D[{EdgeForm[], {Texture[ Rasterize[vr, Background -&gt; None, ImageResolution -&gt; 200]], Polygon[{{legendLeft, legendDepth, 0}, {legendLeft + legendWidth, legendDepth, 0}, {legendLeft + legendWidth, legendDepth, legendHeight}, {legendLeft, legendDepth, legendHeight}}, VertexTextureCoordinates -&gt; {{0, 0}, {1, 0}, {1, 1}, {0, 1}}]}}]; out = Show[a2, gr, legend3D, PlotRange -&gt; All, BoxRatios -&gt; {1, 1, 1}, BoxStyle -&gt; Directive[Dashed, Black, Thin], ViewPoint -&gt; {-0.35, -2, 1.5}, AxesLabel -&gt; {"x", "y", "Proportion"}, LabelStyle -&gt; Directive[Blue, Bold], Ticks -&gt; {{{0, 0}, {40, 0.5}, {80, 1}, {120, 1.5}, {160, 2}, {200, 2.5}}, {{0, 0}, {40, 0.5}, {80, 1}, {120, 1.5}, {160, 2}, {200, 2.5}}, {0, 0.5, 1}}, ImageSize -&gt; Large] </code></pre> <p><img src="https://i.stack.imgur.com/qwaNC.jpg" width="400" height="400"></p>
1,375,365
<p>Find all polynomials for which </p> <p>What I have done so far: for $x=8$ we get $p(8)=0$; for $x=1$ we get $p(2)=0$.</p> <p>So there exists a polynomial $q(x)$ such that $p(x) = (x-2)(x-8)q(x)$.</p> <p>This is where I get stuck. How do I continue?</p> <p><strong>UPDATE</strong></p> <p>After substituting and simplifying I get $(x-4)(2ax+b)=4(x-2)(ax+b)$</p> <p>For $x = 2,8$ I get</p> <p>$x= 2 \to -8a+b=0$</p> <p>$x= 8 \to 32a+5b=0$</p> <p>which gives $a$ and $b$ equal to zero.</p>
Valentin
31,877
<p>Following the method outlined in <a href="https://math.stackexchange.com/questions/3888/find-polynomials-such-that-x-16p2x-16x-1px/4150#4150">this answer</a> we can write the original equation in the form $$\frac{\sigma p}{p} = \frac{\sigma^3 r}{r}$$ where $\sigma p(x) = p(2x)$ and $r(x)=8-x$. Using the "additive notation" (see the referenced post) we obtain $$p=\frac{\sigma^3-1}{\sigma-1}r=(\sigma^2+\sigma+1)r=(4x-8)(2x-8)(x-8)$$ unique up to a constant factor.</p>
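The displayed relation $\frac{\sigma p}{p}=\frac{\sigma^3 r}{r}$, i.e. $\frac{p(2x)}{p(x)}=\frac{r(8x)}{r(x)}$ with $r(x)=8-x$, can be spot-checked numerically for the stated solution (my addition, not part of the answer):

```python
# p from the answer; r(x) = 8 - x; check p(2x)/p(x) = r(8x)/r(x)
r = lambda x: 8 - x
p = lambda x: (4 * x - 8) * (2 * x - 8) * (x - 8)

for x in [0.5, 1.5, 3.0, 5.0, 11.0]:   # sample points avoiding zeros of p and r
    assert abs(p(2 * x) / p(x) - r(8 * x) / r(x)) < 1e-9
```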
2,946,384
<p>How to prove that any integer n which is not divisible by 2 or 3 is not divisible by 6?</p> <p>The point was to prove separately inverse, converse and contrapositive statements of the given statement: "for all integers n, if n is divisible by 6, then n is divisible by 3 and n is divisible by 2". I have the proof for converse and inverse similar to that given in comments. I have trouble only with the proof that integer not divisible by 2 or 3 is not divisible by 6. </p> <p>As I review my proof for inverse statement, I'm not sure of it as well. "For all integers n, if n is not divisible by 6, n is not divisible by 3 or n is not divisible by 2."</p> <p>n = 6*x where x in not an integer<br> n = 2*3*x<br> n/2 = 3*x and n/3 = 2*x where 2x or 3x is not an integer,<br> so n is not divisible by 2 or 3</p>
Community
-1
<p>There are a ton of mistakes here, unfortunately. The key issue is that you've got something like</p>

<p><span class="math-container">$$\frac{t + t^3}{t - t^5} = 1 + t^2 - \frac{1}{t^4} - \frac{1}{t^2}$$</span></p>

<p>where you've just mixed-and-matched all four terms. This is a (very) incorrect manipulation of the fractions. One way you can tell the two sides are unrelated is that the left hand side tends to <span class="math-container">$1$</span> as <span class="math-container">$t \to 0$</span>, while the right hand side blows up.</p>

<p>The second and third issues, as pointed out in the other answers, are that you're missing <span class="math-container">$dt/t = dx$</span> from the substitution, and that you didn't change the bounds to <span class="math-container">$[e, \infty)$</span>. </p>
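A one-line numerical check (my addition) confirms the two sides are genuinely different expressions:

```python
t = 2.0
lhs = (t + t**3) / (t - t**5)        # = 10 / (-30) = -1/3
rhs = 1 + t**2 - 1 / t**4 - 1 / t**2  # = 4.6875
assert abs(lhs - rhs) > 1             # not the same function
```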
444,486
<p>I am teaching myself real analysis, and in this particular set of lecture notes, the <a href="http://www.math.louisville.edu/~lee/RealAnalysis/IntroRealAnal-ch01.pdf" rel="nofollow">introductory chapter on set theory</a> when explaining that not all sets are countable, states as follows:</p> <blockquote> <p>If $S$ is a set, $\operatorname{card}(S) &lt; \operatorname{card}(\mathcal{P}(S))$.</p> </blockquote> <p>Can anyone tell me what this means? It is theorem 1.5.2 found on page 13 of the article.</p>
Steven Clontz
86,887
<p>One might consider <span class="math-container">$\mathbb C$</span> and <span class="math-container">$\mathbb R^2$</span> to be isomorphic in some sense, since <span class="math-container">$f(a+bi)=(a,b)$</span> defines a bijection. If we define addition in <span class="math-container">$\mathbb R^2$</span> coordinate-wise, then this (group-)isomorphism holds as <span class="math-container">$$f((a+bi)+(c+di))=(a,b)+(c,d)=(a+c,b+d)=f((a+c)+(b+d)i)$$</span></p> <p>One way you might define multiplication in <span class="math-container">$\mathbb R^2$</span> is also coordinate-wise. But no bijection <span class="math-container">$f$</span> would form a (ring-)isomorphism in that case: such an <span class="math-container">$f$</span> must send <span class="math-container">$1$</span> to the multiplicative identity, so <span class="math-container">$f(1)=(1,1)$</span> and hence <span class="math-container">$f(-1)=-f(1)=(-1,-1)$</span>; writing <span class="math-container">$f(i)=(a,b)$</span>, <span class="math-container">$$f(i^2)=f(-1)=(-1,-1)=f(ii)=f(i)f(i)=(a^2,b^2)$$</span> demonstrates a contradiction <span class="math-container">$a^2=-1$</span> for real-valued <span class="math-container">$a$</span>.</p> <p>However, as is stated elsewhere, defining multiplication by <span class="math-container">$(a,b)(c,d)=(ac-bd,\ ad+bc)$</span> does in fact allow <span class="math-container">$f$</span> to be a ring isomorphism.</p> <p>(N.B. if you have a bijection, you can always coerce it to be a ring isomorphism... provided you're flexible in how you define addition and multiplication for one of the sets in question!)</p>
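A randomized Python check (my addition) that $f(a+bi)=(a,b)$ respects the multiplication rule $(a,b)(c,d)=(ac-bd,\ ad+bc)$, i.e. that $f(zw)=f(z)f(w)$:

```python
import random

def mul(p, q):
    """Multiplication on R^2 mimicking complex multiplication."""
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

def f(z):
    return (z.real, z.imag)

rng = random.Random(0)
for _ in range(1000):
    z = complex(rng.uniform(-5, 5), rng.uniform(-5, 5))
    w = complex(rng.uniform(-5, 5), rng.uniform(-5, 5))
    expect, got = f(z * w), mul(f(z), f(w))
    assert max(abs(expect[0] - got[0]), abs(expect[1] - got[1])) < 1e-9
```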
1,054,595
<p>I have been thinking about this problem for a while and I still can't come up with a solution. Could you please point me in a direction? Here's the problem.</p> <pre><code>Let A, B be two 2x2 matrices,

    A = [ a  b ]
        [ c  d ]

where A and B belong to M2(C), and A*B - B*A = A.

Prove that for every n &gt;= 2:  A^n * B - B * A^n = n * A^n
</code></pre> <p>Any kind of help is appreciated. Thanks!</p>
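A quick numerical sanity check of the identity to prove (my addition, using a concrete $3\times 3$ pair satisfying $AB-BA=A$; the claim holds in any dimension):

```python
import numpy as np

A = np.diag([1.0, 1.0], k=1)        # 3x3 nilpotent shift matrix
B = np.diag([0.0, 1.0, 2.0])
assert np.allclose(A @ B - B @ A, A)             # hypothesis: AB - BA = A

for n in range(2, 6):
    An = np.linalg.matrix_power(A, n)
    assert np.allclose(An @ B - B @ An, n * An)  # A^n B - B A^n = n A^n
```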
DumpsterDoofus
93,655
<p>Assuming <span class="math-container">$\|\|_2$</span> refers to the Frobenius norm, we have the obvious statement <span class="math-container">$$|\text{MaxElement}(A-B)|&lt;\sqrt{\epsilon}$$</span> where <span class="math-container">$\text{MaxElement}$</span> returns the element with largest absolute value, but I'm not sure you can say anything stronger.</p> <h2>Edit 1:</h2> <p>To prove this, note that if <span class="math-container">$A-B$</span> has an element larger in absolute value than <span class="math-container">$\sqrt{\epsilon}$</span>, then</p> <p><span class="math-container">$$\|A-B\|_2=\sum_{i,j}(A-B)^2_{ij}\geq(A-B)^2_{i_\text{max}j_\text{max}}&gt;(\sqrt{\epsilon})^2=\epsilon$$</span> which contradicts <span class="math-container">$\|A-B\|_2&lt; \epsilon$</span>. Meanwhile, if all elements are zero except one which is smaller than <span class="math-container">$\sqrt{\epsilon}$</span>, then <span class="math-container">$$\|A-B\|_2=\sum_{i,j}(A-B)^2_{ij}=(A-B)^2_{i_\text{max}j_\text{max}}&lt;(\sqrt{\epsilon})^2=\epsilon$$</span> which satisfies <span class="math-container">$\|A-B\|_2&lt; \epsilon$</span>. So the bound is tight.</p>
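A randomized check of the element-wise bound (my addition), using the answer's convention that $\|D\|_2$ denotes the sum of squared entries:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.01
for _ in range(200):
    D = rng.normal(size=(4, 4))
    D *= np.sqrt(0.9 * eps / (D ** 2).sum())  # rescale so sum of squares < eps
    assert (D ** 2).sum() < eps
    assert np.abs(D).max() < np.sqrt(eps)     # every entry is below sqrt(eps)
```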
1,671,357
<p>I'm trying to solve a minimization problem whose purpose is to optimize a matrix whose square is close to another given matrix. But I can't find an effective tool to solve it.</p> <p>Here is my problem:</p> <blockquote> <p>Assume we have an unknown Q with parameters $q11, q12,q14,q21,q22,q23,q32,q33,q34,q41,q43,q44$, and a given matrix G, that is, $Q=\begin{pmatrix} q11&amp;q12 &amp;0 &amp;q14 \\q21&amp;q22&amp; q23&amp;0\\ 0&amp;q32&amp; q33&amp;q34\\ q41&amp;0&amp; q43&amp;q44\\ \end{pmatrix} $, $G=\begin{pmatrix} 0.48&amp;0.24 &amp;0.16 &amp;0.12 \\ 0.48&amp;0.24 &amp;0.16 &amp;0.12\\0.48&amp;0.24 &amp;0.16 &amp;0.12\\0.48&amp;0.24 &amp;0.16 &amp;0.12 \end{pmatrix} $,</p> <p>The problem is how to find the values of $q11, q12,q14,q21,q22,q23,q32,q33,q34,q41,q43,q44$ such that the square of $Q$ is very close to matrix $G$.</p> </blockquote> <p>I choose to minimize the Frobenius norm of their difference, that is, </p> <blockquote> <p>$ Q^* ={\arg\min}_{Q} \| Q^2-G\|_F$</p> <p>s.t. $0\leq q11, q12,q14,q21,q22,q23,q32,q33,q34,q41,q43,q44 \leq 1$,$\quad$<br> $\quad$ $q11+q12+q14=1$, $\quad$ $q21+q22+q23=1$, $\quad$ $q32+q33+q34=1$, $\quad$ $q41+q43+q44=1$.</p> </blockquote> <p>These past few days I have been struggling to find an effective tool to carry out the above optimization; can someone help me realize it?</p>
Jean Marie
305,862
<p>Nice answer, @Samrat Mukhopadhyay</p> <p>I would like to add something that can bring supplementary light to the issue.</p> <p>It is based on a property that has been overlooked, i.e., that $rank(P)=1$.</p> <p>$P=\begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix} \begin{pmatrix} 0.48 &amp; 0.24 &amp; 0.16 &amp; 0.12 \end{pmatrix}$</p> <p>Said otherwise, the Singular Value Decomposition of $P$ is </p> <p>$P=\sigma_1 U_1 V_1^T=1.14543 \begin{pmatrix} .5 \\ .5 \\ .5 \\ .5 \end{pmatrix} \begin{pmatrix} 0.8381 &amp; 0.4191 &amp; 0.2794 &amp; 0.2095 \end{pmatrix}$</p> <p>The approximants we are looking for can always be decomposed into the form </p> <p>$$A=P+\sigma_2 U_2 V_2^T+\sigma_3 U_3 V_3^T+\sigma_4 U_4 V_4^T$$</p> <p>The squared error to the target $\|A-P\|_F^2$ is thus $\sigma_2^2+\sigma_3^2+\sigma_4^2$ (F = Frobenius norm).</p> <p>This is not at all independent of the ellipse Samrat Mukhopadhyay is speaking about.</p> <p>More time and space would be necessary to detail all these aspects...</p>
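The quoted leading singular value is easy to reproduce numerically (my addition; requires NumPy):

```python
# Added check: the rank-1 matrix P = ones * v^T has a single nonzero singular
# value, sigma_1 = ||ones|| * ||v|| = 2 * sqrt(0.48^2 + 0.24^2 + 0.16^2 + 0.12^2).
import numpy as np

v = np.array([0.48, 0.24, 0.16, 0.12])
P = np.outer(np.ones(4), v)
s = np.linalg.svd(P, compute_uv=False)
print(np.round(s, 5))  # leading value is 1.14543, the rest are (numerically) 0
```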
1,671,357
<p>I'm trying to solve a minimization problem whose purpose is to optimize a matrix whose square is close to another given matrix. But I can't find an effective tool to solve it.</p> <p>Here is my problem:</p> <blockquote> <p>Assume we have an unknown Q with parameters $q11, q12,q14,q21,q22,q23,q32,q33,q34,q41,q43,q44$, and a given matrix G, that is, $Q=\begin{pmatrix} q11&amp;q12 &amp;0 &amp;q14 \\q21&amp;q22&amp; q23&amp;0\\ 0&amp;q32&amp; q33&amp;q34\\ q41&amp;0&amp; q43&amp;q44\\ \end{pmatrix} $, $G=\begin{pmatrix} 0.48&amp;0.24 &amp;0.16 &amp;0.12 \\ 0.48&amp;0.24 &amp;0.16 &amp;0.12\\0.48&amp;0.24 &amp;0.16 &amp;0.12\\0.48&amp;0.24 &amp;0.16 &amp;0.12 \end{pmatrix} $,</p> <p>The problem is how to find the values of $q11, q12,q14,q21,q22,q23,q32,q33,q34,q41,q43,q44$ such that the square of $Q$ is very close to matrix $G$.</p> </blockquote> <p>I choose to minimize the Frobenius norm of their difference, that is, </p> <blockquote> <p>$ Q^* ={\arg\min}_{Q} \| Q^2-G\|_F$</p> <p>s.t. $0\leq q11, q12,q14,q21,q22,q23,q32,q33,q34,q41,q43,q44 \leq 1$,$\quad$<br> $\quad$ $q11+q12+q14=1$, $\quad$ $q21+q22+q23=1$, $\quad$ $q32+q33+q34=1$, $\quad$ $q41+q43+q44=1$.</p> </blockquote> <p>These past few days I have been struggling to find an effective tool to carry out the above optimization; can someone help me realize it?</p>
Johan Löfberg
37,404
<p>As you ask about MATLAB, here is an implementation and solution using YALMIP (disclaimer, developed by me).</p> <p>First, set up data and model</p> <pre><code>G = repmat([.48 .24 .16 .12],4,1); Q = sdpvar(4); Q(1,3)=0; Q(3,1)=0; Q(2,4)=0; Q(4,2)=0; residual = Q*Q-G; Objective = residual(:)'*residual(:); Model = [0&lt;=Q(:)&lt;=1, sum(Q,2)==1]; </code></pre> <p>Effectively, you have a nonconvex quartic optimization problem, so it is not easy to solve. The trivial approach is to simply throw a nonlinear local solver at it, such as fmincon or ipopt. fmincon solves the problem in under 0.05s, with optimal objective $0.349$</p> <pre><code>optimize(Model,Objective,sdpsettings('solver','fmincon')) value(Q) &gt;&gt; value(Q) ans = 0.3333 0.3333 0 0.3333 0.3333 0.3333 0.3333 0 0 0.3333 0.3333 0.3333 0.3333 0 0.3333 0.3333 </code></pre> <p>Alternatively, we can attack the problem using a global solver, such as scip, baron, or the built-in global solver bmibnb (you should preferably have a fast LP solver installed for it to be efficient, such as gurobi, mosek, cplex etc., although it works with MATLAB's standard LP solver)</p> <pre><code>optimize(Model,Objective,sdpsettings('solver','bmibnb','bmibnb.maxiter',1e4)) </code></pre> <p>The solver immediately finds the global solution (which is the same as fmincon found, so the local approach actually worked well), and all time is spent in proving optimality.</p> <p>Performance can be improved significantly by sparsifying the objective</p> <pre><code>e = sdpvar(16,1); Objective = e'*e; Model = [e == residual(:), 0&lt;=Q(:)&lt;=1, sum(Q,2)==1]; optimize(Model,Objective,sdpsettings('solver','bmibnb')) </code></pre> <p>Finally, we can try a semidefinite relaxation (essentially what is referred to as a sum-of-squares approach above). Luckily, this problem can be solved using a semidefinite relaxation (i.e., the relaxation is tight, and a solution can be recovered). This turns out to be the fastest approach. 
Requires a semidefinite solver, such as Mosek, SDPT3, SeDuMi etc</p> <pre><code>residual = Q*Q-G; Objective = residual(:)'*residual(:); Model = [0&lt;=Q(:)&lt;=1, sum(Q,2)==1]; sol = optimize(Model,Objective,sdpsettings('solver','moment')) </code></pre>
3,334,031
<p>I was doing some practice problems that my professor had sent us and I have not been able to figure out one of them. The given equation is:</p> <p><span class="math-container">$-y^2dx +x^2dy = 0$</span></p> <p>He then asks us to verify that:</p> <p><span class="math-container">$ u(x, y) = \frac{1}{(x-y)^2}$</span></p> <p>is an integrating factor. </p> <p>I multiplied through to get:</p> <p><span class="math-container">$\frac{-y^2}{(x-y)^2}dx + \frac{x^2}{(x-y)^2}dy = 0$</span> </p> <p>However, the partial derivatives of these do not equal each other so I am a bit confused...</p>
MafPrivate
695,001
<p>Actually, we can set a function and try to find a formula. Though I <strong>haven't finished</strong>, I hope it will inspire you guys.</p> <p>Let a function <span class="math-container">$f_{n} \left(x\right)$</span> denote the number of ways to take <span class="math-container">$n$</span> steps from <span class="math-container">$x$</span> to <span class="math-container">$0$</span>, where each step can only be <span class="math-container">$\boxed{+1}$</span>, <span class="math-container">$\boxed{-1}$</span> or <span class="math-container">$\boxed{\times2}$</span> from the previous term. Then, we get the following definition:</p> <p><span class="math-container">$1)$</span> <span class="math-container">$f_{n} \left(x\right)=\begin{cases} 1 &amp; \text{when }n=0,x=0\\ 0 &amp; \text{when }n=0,x\ne0\\ 0 &amp; \text{when }n&lt;0\end{cases}$</span></p> <p><span class="math-container">$2)$</span> <span class="math-container">$f_{n} \left(x\right)=f_{n} \left(-x\right)$</span></p> <p><span class="math-container">$3)$</span> <span class="math-container">$f_{n+1} \left(x\right)=f_{n} \left(x+1\right)+f_{n} \left(x-1\right)+f_{n} \left(2x\right)$</span></p> <p><span class="math-container">$4)$</span> Our answer is <span class="math-container">$f_n \left(0\right)$</span></p> <p>Then, we try to generate a function <span class="math-container">$P_n \left(x\right)= \sum_{k=-\infty}^\infty f_n \left(k\right) x^k$</span>. 
However, here comes the question:</p> <p><span class="math-container">$\quad P_{n+1} \left(x\right)\\=\sum_{k=-\infty}^\infty f_{n+1} \left(k\right) x^k\\ =\sum_{k=-\infty}^\infty \left[f_{n} \left(k+1\right)+f_{n} \left(k-1\right)+f_{n} \left(2k\right)\right]x^k \\=\sum_{k=-\infty}^\infty f_{n} \left(k-1\right) x^k+\sum_{k=-\infty}^\infty f_{n} \left(k+1\right) x^k+\sum_{k=-\infty}^\infty f_{n} \left(2k\right) x^k \\=x\sum_{k=-\infty}^\infty f_{n} \left(k-1\right) x^{k-1}+\dfrac{1}{x}\sum_{k=-\infty}^\infty f_{n} \left(k+1\right) x^{k+1}+\sum_{k=-\infty}^\infty f_{n} \left(2k\right) x^k \\= \left(x+\dfrac{1}{x}\right)P_n \left(x\right)+\sum_{k=-\infty}^\infty f_{n} \left(2k\right) x^k$</span></p> <p>I can't find a way to express <span class="math-container">$f_{n} \left(2k\right)$</span> in terms of <span class="math-container">$f_{n} \left(k\right)$</span>, so I can't finish it. So, if you have any advice, please tell me in the comments. Thank you!</p>
786,827
<blockquote> <p>Two balls are chosen at random from a box containing 12 balls, numbered 1;2; : : : ;12. Let X be the larger of the two numbers obtained. Compute the PMF of X, if the sampling is done</p> <p>(a) without replacement;</p> <p>(b) with replacement</p> </blockquote> <p>I understand the numerator for both cases. In case 1, we have 1 ball x such that X=x and x-1 balls smaller than x, so we have 1(x-1) = x-1. In case two, we have x choices for a ball, then x-1 choices, so x+x-1 = 2x-1. The denominator is troubling me and it's a core probability concept I never understood</p> <p>In case 1, we have 12 options for the first choice and 11 for the second. Therefore, the total number of possibilities should be 12*11 = 132. But it's not, it's half of that, which is 66</p> <p>In case 2, we have 12 options for the first choice and 12 for the second. Total possibilities is 12*12 = 144. This is correct</p> <p>Why am I right in the 2nd case but wrong in the first?</p> <p>We had a similar question on our midterm: We have 7 unique children and 20 identical cookies. How many ways can we distribute the cookies such that each child gets one?</p> <p>I thought if each child gets one, there are 13 left. Each of those 13 cookies can go to one of 7 children, so the answer should be 7^13. After learning the stars and stripes method, I realize the correct calculation leads to 19C7. I'm wondering WHY it is that my calculation is wrong, why there are two possible calculations, and what my calculation represents. It is this concept that I never understood that is still giving me trouble. I can do PMF and density functions, but I have trouble with this simplest concept</p>
Mario Carneiro
50,776
<p>You can use a trick to make this a regular least-squares optimization, in an extension of the method for <a href="https://math.stackexchange.com/questions/213658/get-the-equation-of-a-circle-when-given-3-points">finding the center of a circle through three points</a>. Draw a perpendicular bisector for every pair of points. If the points are all on a circle, they will all intersect at a common point; since it is an approximate circle, they will slightly miss each other. The squared distance to each line is a quadratic form, so the sum of the squared distances is also a quadratic form, and minimizing this is a least-squares problem. That's the center of the circle.</p> <p>Given this, you can just find the radius as the mean of the distances of each point to the center.</p>
786,827
<blockquote> <p>Two balls are chosen at random from a box containing 12 balls, numbered 1;2; : : : ;12. Let X be the larger of the two numbers obtained. Compute the PMF of X, if the sampling is done</p> <p>(a) without replacement;</p> <p>(b) with replacement</p> </blockquote> <p>I understand the numerator for both cases. In case 1, we have 1 ball x such that X=x and x-1 balls smaller than x, so we have 1(x-1) = x-1. In case two, we have x choices for a ball, then x-1 choices, so x+x-1 = 2x-1. The denominator is troubling me and it's a core probability concept I never understood</p> <p>In case 1, we have 12 options for the first choice and 11 for the second. Therefore, the total number of possibilities should be 12*11 = 132. But it's not, it's half of that, which is 66</p> <p>In case 2, we have 12 options for the first choice and 12 for the second. Total possibilities is 12*12 = 144. This is correct</p> <p>Why am I right in the 2nd case but wrong in the first?</p> <p>We had a similar question on our midterm: We have 7 unique children and 20 identical cookies. How many ways can we distribute the cookies such that each child gets one?</p> <p>I thought if each child gets one, there are 13 left. Each of those 13 cookies can go to one of 7 children, so the answer should be 7^13. After learning the stars and stripes method, I realize the correct calculation leads to 19C7. I'm wondering WHY it is that my calculation is wrong, why there are two possible calculations, and what my calculation represents. It is this concept that I never understood that is still giving me trouble. I can do PMF and density functions, but I have trouble with this simplest concept</p>
AnonSubmitter85
33,383
<p>For each point you are given, we assume that it satisfies the following relation:</p> <p>$$ ( x_i - x_0 )^2 + ( y_i - y_0)^2 = r^2. $$</p> <p>This can be written as</p> <p>$$ \left[ \begin{array}{ccc} 1 &amp; -2x_i &amp; -2y_i \end{array} \right] \left[ \begin{array}{c} x_0^2 + y_0^2 -r^2 \\ x_0 \\ y_0 \end{array} \right] = -x_i^2 - y_i^2. $$</p> <p>Thus, using all of the $(x_i,y_i)$ pairs, we can set up an over-determined system $\mathbf{Ax}=\mathbf{b}$. The least squares solution $\bar{\mathbf{x}}$ to this system will give you your answer.</p>
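Here is a sketch of that over-determined system solved with NumPy's least squares (my addition; the circle center (1, 2) and radius 3 below are illustrative values, not from the question):

```python
# Sketch (added): build the system [1, -2x_i, -2y_i] * [x0^2+y0^2-r^2, x0, y0]^T
# = -(x_i^2 + y_i^2) from noisy samples of a circle, then recover center/radius.
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 50)
x = 1 + 3 * np.cos(t) + rng.normal(0, 0.01, 50)
y = 2 + 3 * np.sin(t) + rng.normal(0, 0.01, 50)

A = np.column_stack([np.ones_like(x), -2 * x, -2 * y])
b = -(x ** 2 + y ** 2)
(c, x0, y0), *_ = np.linalg.lstsq(A, b, rcond=None)
r = np.sqrt(x0 ** 2 + y0 ** 2 - c)   # first unknown is x0^2 + y0^2 - r^2
print(x0, y0, r)                     # should be close to 1, 2, 3
```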
4,107,920
<p>The question tells me that <span class="math-container">$P(A|B)&gt;P(A)$</span> and asks me to prove: <Br></p> <ol> <li><span class="math-container">$P(B|A)&gt;P(B)$</span> <br></li> <li><span class="math-container">$P(B^c|A)&lt;P(B^c)$</span></li> </ol> <p>In general, all I want to ask is: do I need to care that <span class="math-container">$P(B)&gt;0$</span> or <span class="math-container">$P(A)&gt;0$</span> for the conditional probabilities to be defined, and how do I handle it if I do? <br> <strong>Here's how I proved it</strong>: <br> Knowing that <span class="math-container">$P(A|B)&gt;P(A)$</span>, I know that <span class="math-container">$\frac{P(A\cap B)}{P(B)}&gt;P(A)\Rightarrow P(A\cap B)&gt;P(A)P(B)$</span>. <br></p> <ol> <li>I can write it as <span class="math-container">$P(B\cap A) &gt; P(B)P(A)$</span> and divide by <span class="math-container">$P(A)$</span> (I assumed <span class="math-container">$P(A)&gt;0$</span> because otherwise <span class="math-container">$P(B|A)$</span> would be undefined), and exactly here is my question: can I do this? If not, say <span class="math-container">$P(A)=0$</span>, how do I prove (1)? <br></li> </ol> <p>I did prove (2) also using (1), but I think that getting an answer about (1) will be enough for me to keep going. <br> Thanks in advance.</p>
tommik
791,458
<blockquote> <p>I think that getting an answer about (1) will be enough for me to keep going.</p> </blockquote> <p>What you are asking is immediate. In fact,</p> <p>If</p> <p><span class="math-container">$$\mathbb{P}[A|B]&gt;\mathbb{P}[A]$$</span></p> <p>this means that</p> <p><span class="math-container">$$\mathbb{P}[A\cap B]&gt;\mathbb{P}[A]\cdot\mathbb{P}[B]$$</span></p> <p>which is exactly the same condition as</p> <p><span class="math-container">$$\mathbb{P}[B|A]&gt;\mathbb{P}[B]$$</span></p>
1,488,388
<p><strong>The Statement of the Problem:</strong></p> <p>Let $G$ be a finite abelian group. Let $w$ be the product of all the elements in $G$. Prove that $w^2 = 1$.</p> <p><strong>Where I Am:</strong></p> <p>Well, I know that the commutator subgroup of $G$, call it $G'$, is simply the identity element, i.e. $1$. But, can I conclude from this that $\forall g \in G, g=g^{-1}$, i.e., $\forall g \in G, g^2 = gg^{-1} = 1$, which is our desired result? That just seems... strange. But, it kind of makes sense. After all, each element in $G$ has an associated inverse element (because it's a group), and because it's abelian, we can always position an element next to its inverse, i.e.</p> <p>$$ w^2 = (g_1g_1^{-1}g_2g_2^{-1}g_3g_3^{-1}\cdot \cdot \cdot g_ng_n^{-1})^2 = (1\cdot 1\cdot 1\cdot \cdot \cdot 1)^2=1.$$</p> <p>Is that all there is to it? Actually, looking at it now, I don't even need to mention the commutator subgroup, do I...</p>
Community
-1
<p>$w = g_1...g_r$, where $g_1,..., g_r$ are the elements of $G$ of order $2$ (all other elements can be paired with their inverses, or in the case of $e$, can be removed), then $w^2= g_1^2...g_r^2= e^r = e$.</p>
1,488,388
<p><strong>The Statement of the Problem:</strong></p> <p>Let $G$ be a finite abelian group. Let $w$ be the product of all the elements in $G$. Prove that $w^2 = 1$.</p> <p><strong>Where I Am:</strong></p> <p>Well, I know that the commutator subgroup of $G$, call it $G'$, is simply the identity element, i.e. $1$. But, can I conclude from this that $\forall g \in G, g=g^{-1}$, i.e., $\forall g \in G, g^2 = gg^{-1} = 1$, which is our desired result? That just seems... strange. But, it kind of makes sense. After all, each element in $G$ has an associated inverse element (because it's a group), and because it's abelian, we can always position an element next to its inverse, i.e.</p> <p>$$ w^2 = (g_1g_1^{-1}g_2g_2^{-1}g_3g_3^{-1}\cdot \cdot \cdot g_ng_n^{-1})^2 = (1\cdot 1\cdot 1\cdot \cdot \cdot 1)^2=1.$$</p> <p>Is that all there is to it? Actually, looking at it now, I don't even need to mention the commutator subgroup, do I...</p>
egreg
62,967
<p>Yes, your argument is essentially the way to go and the derived subgroup is indeed irrelevant. To make the proof more formal, you can do as follows.</p> <p>The map $g\mapsto g^{-1}$ is bijective on $G$. Writing $G=\{g_1,g_2,\dots,g_n\}$ and $$ w=g_1g_2\dots g_n $$ we have $$ w^{-1}=(g_1g_2\dots g_n)^{-1}=\color{red}{g_n^{-1}\dots g_2^{-1}g_1^{-1}} =g_1g_2\dots g_n=w $$ because the term painted red is again the list of all elements of $G$, just possibly in a different order, but commutativity of multiplication allows to reorder them.</p>
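For a concrete illustration (my addition): in the cyclic group Z_n written additively, w is the sum of all elements and the claim w^2 = 1 becomes 2w = 0 (mod n); note that w itself need not be the identity.

```python
# Added check: in Z_n (additive), w = 0 + 1 + ... + (n-1) mod n, and
# 2w = 0 mod n always holds, even though w can be nonzero (w = 2 in Z_4).
for n in range(2, 50):
    w = sum(range(n)) % n
    assert (2 * w) % n == 0

print(sum(range(4)) % 4)  # prints 2: in Z_4, w is not the identity, but w + w = 0
```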
3,710,377
<p>In <em>Postmodern Analysis</em> by Jurgen Jost, the Lebesgue integral of a step function is defined as follows:</p> <p>Suppose we have a step function <span class="math-container">$t:W\subset\mathbb{R}^d\to \mathbb{R}$</span> defined on a cube <span class="math-container">$W\subset\mathbb{R}^d$</span> given by <span class="math-container">$$ t = \sum_j c_j \mathbb{1}_{W_j} \quad (*)$$</span> where <span class="math-container">$c_j$</span> is a real number and <span class="math-container">$W_j$</span> is a cube with side-length <span class="math-container">$\ell_j&gt;0$</span> for <span class="math-container">$j=1,2,...k$</span> and if <span class="math-container">$j\neq j'$</span>, then <span class="math-container">$\text{int}(W_j)\cap \text{int}(W_{j'})=\varnothing$</span> and <span class="math-container">$\bigcup_j W_j = W$</span> (i.e. the cubes <span class="math-container">$\{W_j\}$</span> are almost disjoint and partition W, and <span class="math-container">$t$</span> is constant on the interior of each cube). Then we define the integral of <span class="math-container">$t$</span> to be <span class="math-container">$$ \int_{\mathbb{R}^d} t = \sum_{j=1}^k c_j\ell_j^d. $$</span></p> <p>I want to show that this definition is independent of the collection of cubes <span class="math-container">$\{W_j\}_{j=1}^k$</span>, as it is obvious that the function <span class="math-container">$t$</span> can have many representations of the form <span class="math-container">$(*)$</span>. Although it is "geometrically obvious in my mind's eye", I am having difficulty proving this fact. Any help is appreciated. 
</p> <p>What I have tried: Given two collections of cubes <span class="math-container">$\{W_j\}$</span> and <span class="math-container">$\{B_i\}$</span> where we can write <span class="math-container">$$ t = \sum_i a_i\mathbb{1}_{B_i} = \sum_j c_j\mathbb{1}_{W_j} $$</span> we need to show <span class="math-container">$$ \sum_i a_i k^d_i = \sum_j c_j \ell_j^d $$</span> where <span class="math-container">$k_i$</span> is the side length of the cube <span class="math-container">$B_i$</span> and <span class="math-container">$\ell_j$</span> is the side length of the cube <span class="math-container">$W_j$</span>.</p> <p>I am not sure how to formally prove this equality, in general. The other idea I have is to find some kind of canonical representation of <span class="math-container">$(*)$</span>.</p> <p>I feel as though I am missing something, because the author says this fact is obvious and gives no proof or justification. I don't want to rely on faith.</p>
Sam
496,121
<p>Suppose either f or g (WLOG f) is a bijection, and thus has an inverse <span class="math-container">$f^{-1}$</span>.</p> <p>Then <span class="math-container">$f(g(x))=x \Rightarrow g(x)=f^{-1}(x)$</span>.</p> <p>And so <span class="math-container">$g(f(x))=f^{-1}(f(x))=x$</span>.</p> <p>So if such a pair f,g were to exist, they must both be non-bijections.</p>
1,773,375
<p>I am having a problem with solving this equation.</p> <p>I've tried different ways but nothing works.</p> <p>$$y^2\frac{dy}{dx}+2xy=e^y$$</p>
MathCurious314
201,890
<p>I am not sure what you are asking for, but I am assuming that you want to find $\frac{dy}{dx}$, so</p> <p>$$y^2 \cdot \frac{dy}{dx} + 2xy=e^y$$</p> <p>$$y^2 \cdot \frac{dy}{dx}=e^y - 2xy$$</p> <p>$$\frac{dy}{dx}=\frac{e^y - 2xy}{y^2}$$</p>
1,773,375
<p>I am having a problem with solving this equation.</p> <p>I've tried different ways but nothing works.</p> <p>$$y^2\frac{dy}{dx}+2xy=e^y$$</p>
Nikunj
287,774
<p>Notice that if the equation is read with $x$ as a function of $y$, i.e. as $y^2\frac{dx}{dy}+2xy=e^y$, then the L.H.S. is an exact derivative: $$\frac{d}{dy}(xy^2)=e^y$$ $$\implies d(xy^2)=e^ydy$$ Integrating both sides, we get: $$xy^2=e^y+C$$</p>
51,509
<p>Here is a problem due to Feynman. If you take 1 divided by 243 you get 0.004115226337 .... It goes a little cockeyed after 559 when you're carrying out the decimal expansion, but it soon straightens itself out and repeats itself nicely. Now I want to see how many times it repeats itself. Does it do this indefinitely, or does it stop after a certain number of repetitions? Can you write a simple <em>Mathematica</em> program to verify one conjecture or the other?</p>
eldo
14,254
<p>For this particular case (could be easily extended):</p> <pre><code>Count[#, Max@#] &amp;[ StringLength /@ Rest@StringSplit[ToString@N[1/243, 10^6], "00"]] 37037 </code></pre> <p>With <code>10^6</code> digits after the decimal point there are <code>37037</code> repetitions.</p>
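The same count can be cross-checked without Mathematica (my addition): the repeating block of 1/243 has length 27, and 10^6 // 27 = 37037.

```python
# Added cross-check in plain Python: find the length of the repeating block of
# 1/243 by long division, then count how many full periods fit in 10**6 digits.
def period(denom):
    r, seen, pos = 1 % denom, {}, 0
    while r and r not in seen:
        seen[r] = pos
        r = (r * 10) % denom
        pos += 1
    return pos - seen[r] if r else 0

p = period(243)
print(p, 10 ** 6 // p)  # 27 37037
```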
3,888,365
<p>I have been trying to understand this limit:</p> <p><span class="math-container">$$\lim_{x \to 0}\frac{\tan(x)-\sin(x)}{x^2}$$</span></p> <p>When applying l'Hôpital's rule I arrive at the limit being <span class="math-container">$0$</span>, but when doing things organically I get an indeterminate form:</p> <p><span class="math-container">$$ \lim_{x \to 0}\frac{\tan(x)-\sin(x)}{x^2}=\lim_{x \to 0}\frac{\tan(x)}{x^2}-\frac{\sin(x)}{x^2}= \lim_{x \to 0} \frac{\sin(x)}{\cos(x)x^2}-\frac{\sin(x)}{x^2}= \lim_{x \to 0}\frac{\sin(x)}{x^2}\left(\frac{1}{\cos(x)}-1\right) $$</span></p> <p>Clearly <span class="math-container">$\lim_{x \to 0} \frac{1}{\cos(x)}=1$</span>, hence <span class="math-container">$\left(\frac{1}{\cos(x)}-1\right)\to 0$</span>, and I could well apply <span class="math-container">$\lim_{x \to 0}\frac{\sin(x)}{x}=1$</span>, but that still leaves <span class="math-container">$\lim_{x \to 0}\frac{1}{x}$</span>, which does not exist because it has different limits on <span class="math-container">$0^-$</span> and <span class="math-container">$0^+$</span>.</p> <p>Is there something I'm missing?</p>
RRL
148,510
<p>Rearrange this as <span class="math-container">$\frac{\sin x}{x} \frac{1}{\cos x}\frac{1- \cos x}{x}$</span> and use the standard limit <span class="math-container">$\frac{1- \cos x}{x} \to 0$</span> as <span class="math-container">$x \to 0$</span>.</p>
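A quick numeric look (my addition) shows the ratio shrinking like x/2 near 0, consistent with the limit being 0:

```python
# Added numeric check: (tan x - sin x) / x^2 behaves like x/2 near 0.
import math

for x in (0.1, 0.01, 0.001):
    print(x, (math.tan(x) - math.sin(x)) / x ** 2)
```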
2,384,422
<p>I'm really stuck on how to go about solving the following first order ODE; I've got little idea on how to approach it, and I'd really appreciate if someone could give me some hints and/or working for a solution so I can have a reference point on how to approach these sorts of problems.</p> <p>The following is one of many ODE's I've gotten off a problem set I found in a textbook at a library:</p> <p>$$y' = xe^{-\sin(x)} - y\cos(x)$$</p> <p>Can anyone help?</p>
velut luna
139,981
<p>The particular solution is $$y=\frac{1}{2}x^2 e^{-\sin x}$$</p> <p>Then solve for the homogeneous solution.</p> <p>Can you take it from here?</p>
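One can confirm this candidate numerically (my addition), comparing a central-difference derivative of y = x^2 e^(-sin x)/2 with the right-hand side of the ODE:

```python
# Added check: y(x) = x^2 * exp(-sin x) / 2 satisfies y' = x e^{-sin x} - y cos x.
import math

def y(x):
    return 0.5 * x * x * math.exp(-math.sin(x))

def rhs(x):
    return x * math.exp(-math.sin(x)) - y(x) * math.cos(x)

h = 1e-6
for x in (0.3, 1.0, 2.5):
    dydx = (y(x + h) - y(x - h)) / (2 * h)  # central difference
    assert abs(dydx - rhs(x)) < 1e-6
print("particular solution verified")
```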
1,630,480
<p>Assume $U\subset\mathbb{R}\times\mathbb{R}^{n}=\mathbb{R}^{n+1}$, $U$ is open and $(t_0, \bf{x}$$_0)\in U$. Assume ${\bf f} (= {\bf f}(t,{\bf x})) : U \to \mathbb{R}$ is <em>continuous</em>. Then the following is called an <em>initial value problem</em>, with <em>initial condition</em>:</p> <p>\begin{align*} \frac{d\bf{x}}{dt} &amp;= {\bf f}(t, {\bf x}),\\ {\bf x}(t_0) &amp;= {\bf x}_0. \end{align*}</p> <p>My doubt is $\bf{x}$ is a vector, so $\frac{d\bf{x}}{dt} \in \mathbb{R}^{n}$ but ${\bf f}(t, {\bf x}) \in \mathbb{R}$. Am I correct? So how can they be equal?</p> <p>Thanks for the help in advance.</p>
Alind Shukla
600,280
<p>If the number of matrices is, let us assume, <span class="math-container">$M$</span>, then the number of ways to multiply <span class="math-container">$M$</span> matrices is <span class="math-container">$$\frac{(2N)!}{(N+1)!\,N!}$$</span> where <span class="math-container">$N=M-1$</span> (this is the Catalan number <span class="math-container">$C_N$</span>).</p> <p>For example, if I have <span class="math-container">$3$</span> matrices <span class="math-container">$A,B,C$</span>, we can multiply these <span class="math-container">$3$</span> in <span class="math-container">$2$</span> ways: <span class="math-container">$(AB)C$</span> and <span class="math-container">$A(BC)$</span>.</p> <p>Here <span class="math-container">$M=3$</span>; take <span class="math-container">$N=M-1=2$</span>. Then the number of ways to multiply these <span class="math-container">$3$</span> is <span class="math-container">$\frac{(2\cdot 2)!}{(2+1)!\cdot 2!}=2$</span>.</p>
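These counts are the Catalan numbers; a short check (my addition, not part of the original answer):

```python
# Added check: the number of parenthesizations of M matrices is the Catalan
# number C_{M-1} = (2N)! / ((N+1)! N!) with N = M - 1.
from math import comb

def ways(m):
    n = m - 1
    return comb(2 * n, n) // (n + 1)

print([ways(m) for m in range(2, 7)])  # [1, 2, 5, 14, 42]
```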
536,362
<p>Let $\Sigma = \sigma(\mathcal C)$ be the $\sigma$-algebra generated by the countable collection of sets $\mathcal C \subset \mathcal{P}(X)$. How can I prove that if $\mu$ is a $\sigma$-finite measure on $(X,\Sigma)$ then $L^p(X)$ is separable for $1 \le p &lt; \infty$?</p> <p>I know that simple functions are dense in $L^p(X)$, so I would like to find a countable subset of the set of simple functions that is dense in them. Could you help me please?</p>
ncmathsadist
4,154
<p>Begin as follows. You have a countable collection of sets generating the algebra. Now take the finite intersections of all elements of this collection; it is countable. Now take the finite unions of the countable family of finite intersections. This is an algebra of sets; the smallest $\sigma$-algebra containing it is $\Sigma$; let us denote this algebra by $\mathcal{A}$.</p> <p>The simple functions $$\mathcal{S} = \left\{\sum_{k=1}^n c_k \chi_{A_k} : A_1, A_2, \dots, A_n \in \mathcal{A},\ c_1, c_2, \dots, c_n\in \mathbb{Q},\ n\in \mathbb{N}\right\}$$ constitute a countable set. </p> <p>Choose $E\in \Sigma$. For any $\epsilon &gt; 0$ you can choose $A\in \mathcal{A}$ so that $\mu(A\Delta E) &lt; \epsilon$. This says that every characteristic function is in the closure of $\mathcal{S}$. Can you continue and show that $\mathcal{S}$ is dense in $\mathcal{L}^p$?</p>
1,631,589
<p>Consider the sequence $\{\frac{x^n}{n!}\}_n$ for any number $x$.</p> <p>By choosing $m&gt;x$ and letting $n&gt;m$, show that:</p> <p>$\frac{x^n}{n!} &lt; \frac{x^n}{m^n} &lt; \frac{m^m}{(m-1)!}$</p> <p>I am using the squeeze theorem, but I am unable to establish the third inequality.</p>
Marcin Malogrosz
140,932
<p>Consider a graph $G=(V,E)$ such that $|V|=n$ and $|E|={n \choose 2}$ (i.e. any two vertices are connected). Than the LHS of your identity is the number of unordered pairs of edges. Any such pair is either determined by four vertices or by three. In the first case we first choose 4 vertices and then connect them in any of 3 ways which gives us altogether $3{n \choose 4}$ combinations. In the second case we first select the vertex which belongs to both edges and later we select the other two vertices from remaining $n-1$ vertices which gives us $n{n-1 \choose 2}$ possibilities. Summing it up we get the RHS. </p>
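The identity being counted here appears to be C(C(n,2), 2) = 3*C(n,4) + n*C(n-1,2) (inferred from the answer, since the question itself is not quoted); a quick spot-check (my addition):

```python
# Added check of the counting identity: C(C(n,2), 2) = 3*C(n,4) + n*C(n-1, 2),
# i.e. unordered pairs of edges of K_n counted two ways.
from math import comb

for n in range(2, 30):
    assert comb(comb(n, 2), 2) == 3 * comb(n, 4) + n * comb(n - 1, 2)
print("identity holds for n = 2..29")
```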
1,457,063
<p>I am utterly confused on how to solve this problem. I found a lemma that says $|A\cup B|=|A|+|B|$ is true if the two sets are disjoint which makes sense, but how do I prove the entire statement. </p>
Adam Hughes
58,831
<p>Write the disjoint unions and use your original result. That is:</p> <blockquote> <p>$$\begin{cases}A\cup B=A\setminus B\cup B\setminus A \cup A\cap B \\ A = A\setminus B \cup A\cap B\\ B=B\setminus A\cup A\cap B\end{cases}.$$</p> </blockquote> <p>Since you know that these are all disjoint, you can use the original result to write your proof as</p> <blockquote> <p>$$|A\cup B|=|A\setminus B|+|B\setminus A|+|A\cap B|=(|A\setminus B|+|A\cap B|)+(|B\setminus A|+|A\cap B|)-|A\cap B|.$$</p> </blockquote>
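Inclusion-exclusion is easy to sanity-check on concrete sets (my addition):

```python
# Added sanity check of |A ∪ B| = |A| + |B| - |A ∩ B| on random sets.
import random

random.seed(1)
for _ in range(100):
    A = set(random.sample(range(20), 8))
    B = set(random.sample(range(20), 8))
    assert len(A | B) == len(A) + len(B) - len(A & B)
print("ok")
```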
1,457,063
<p>I am utterly confused on how to solve this problem. I found a lemma that says $|A\cup B|=|A|+|B|$ is true if the two sets are disjoint which makes sense, but how do I prove the entire statement. </p>
zoli
203,663
<p>$|A|+|B|$ contains twice those elements that are contained in both sets. So, if you want to calculate the true number of elements of $A\cup B$, $|A\cup B|$ then you have to subtract the number of elements that are taken into account twice, that is, you have to subtract $|A\cap B|$ form $|A|+|B|$. As a result</p> <p>$$|A\cup B|=|A|+|B|-|A\cap B|.$$</p>
56,394
<p>Hi!</p> <p>While studying C*-algebras I found 2 different definitions for non-degenerate representations (*-homomorphisms $\pi:\mathcal{A} \rightarrow B(\mathcal{h})$ where $\mathcal{A}$ is a C*-algebra and $B(\mathcal{h})$ is the space of bounded linear operators on some Hilbert space $\mathcal{h}$):</p> <p>1) For every non-zero $\xi \in \mathcal{h}$ there exists $a \in \mathcal{A}$ such that $\pi(a)\xi \neq 0$;</p> <p>2) The set $\{\pi(a)\xi : a \in \mathcal{A}, \xi \in \mathcal{h}\}$ is dense in $\mathcal{h}$.</p> <p>Are they equivalent?</p> <p>Thanks, Alessandro</p>
Jan Jitse Venselaar
3,897
<p>Yes they are. This is Proposition I.9.2 in Theory of Operator Algebras I by Takesaki.</p> <p>Short proof:</p> <p>2) => 1): suppose $\pi(a)\xi = 0$ for all $a$. Then $(\pi(a)\eta|\xi) = 0$ for all $\eta\in h$ and $a\in\mathcal{A}$ hence $\xi=0$.</p> <p>1) => 2): Take $\xi \in h$ orthogonal to all $\pi(a) \eta$. Then from $(\xi| \pi(a^* a) \xi)= 0$ for all $a$ it follows that $\xi =0$.</p>
1,765,538
<p>If $N$ is the set of all natural numbers, $R$ is a relation on $N \times N$, defined by $(a,b) \simeq (c,d)$ iff $ad=bc$, how can I prove that $R$ is an equivalence relation ?</p>
JKnecht
298,619
<p>Hint:</p> <p>You need to prove that $R$ is reflexive, symmetric and transitive.</p> <p>I leave the first two to you. They are very straight forward.</p> <p>Now suppose $(a,b) \simeq (c,d)$ and $(c,d) \simeq (e,f)$</p> <p>Then $ad = bc$ and $cf=de$</p> <p>Thus</p> <p>$(ad)(cf)=(bc)(de)$</p> <p>and by cancelling from both sides</p> <p>$af=be$</p> <p>Accordingly $(a,b) \simeq (e,f)$ and $R$ is transitive.</p>
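Transitivity can also be spot-checked by brute force over small positive naturals (my addition; the cancellation step relies on c and d being nonzero):

```python
# Added brute-force check of transitivity: over positive naturals,
# a*d == b*c and c*f == d*e imply a*f == b*e.
from itertools import product

for a, b, c, d, e, f in product(range(1, 6), repeat=6):
    if a * d == b * c and c * f == d * e:
        assert a * f == b * e
print("transitivity verified on 1..5")
```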
2,439,111
<p>By definition, a function $ f: \mathbb{R} \rightarrow\mathbb{R}$ is linear iff</p> <ol> <li>$f(x+y)=f(x)+f(y)$ $ \forall x,y \in \mathbb{R}$ </li> <li>$ f(bx) = bf(x)$ $ \forall b,x \in \mathbb{R}$</li> </ol> <p>I am trying to prove the following statement: </p> <p>If $ f $ is a linear map defined above then $f$ has the following form: $f(x)=ax$ $ \forall x \in\mathbb{R}$.</p> <p>Could you give a suggestion about where to start from? </p>
M. Van
337,283
<p>It is difficult to give a first step without giving away the whole solution, since the solution consists of just one step: $$f(x)=x \cdot f(1)=f(1) \cdot x$$</p>
3,856,180
<p>Let <span class="math-container">$X = \{0,1\}^{\mathbb N}$</span> be the metric space. Can anyone please tell me how to define a continuous injective function from <span class="math-container">$X = \{0,1\}^{\mathbb N}$</span> to the Cantor set?</p> <p>Can anyone please give an idea?</p>
Community
-1
<p>For each <span class="math-container">$x\in X$</span>, we have a sequence of zeros and ones. Meanwhile the Cantor set is the set of all real numbers in the unit interval whose ternary expansion contains no <span class="math-container">$1$</span>'s. So the natural map would be to send a given sequence <span class="math-container">$x=(x_n)$</span> of zeros and ones to the number in the Cantor set whose ternary expansion is <span class="math-container">$\sum a_n/3^n$</span>, where <span class="math-container">$a_n=\begin{cases} 0 , x_n=0\\2, x_n=1\end{cases}$</span>.</p> <p>The map is continuous: if <span class="math-container">$X$</span> carries the discrete metric this is automatic, and for the usual product metric it follows because two sequences agreeing in their first <span class="math-container">$n$</span> coordinates map to numbers within <span class="math-container">$3^{-n}$</span> of each other.</p> <p>It remains to prove injectivity. But that's straight forward, because different sequences result in numbers with different ternary representations.</p>
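A quick computational check of this map (the truncation to finite prefixes is my simplification; exact rational arithmetic avoids rounding issues):

```python
from fractions import Fraction
from itertools import product

# A 0/1 prefix (x_1, ..., x_n) goes to sum of a_k / 3^k with a_k = 2*x_k.
def cantor_point(bits):
    return sum(Fraction(2 * b, 3**k) for k, b in enumerate(bits, start=1))

# Injectivity on all length-10 prefixes: distinct sequences of ternary
# digits 0/2 give distinct expansions, hence distinct points.
points = {bits: cantor_point(bits) for bits in product((0, 1), repeat=10)}
print(len(set(points.values())) == len(points))  # True

# Continuity modulus: prefixes agreeing in the first 5 places map to
# points within 3^(-5) of each other.
a = (0, 1, 1, 0, 1, 0, 0, 1, 1, 0)
b = (0, 1, 1, 0, 1, 1, 1, 0, 0, 1)
print(abs(cantor_point(a) - cantor_point(b)) <= Fraction(1, 3**5))  # True
```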
2,040,293
<p>I am trying to follow this tutorial: <a href="http://ctms.engin.umich.edu/CTMS/index.php?example=InvertedPendulum&amp;section=SystemModeling" rel="nofollow noreferrer">http://ctms.engin.umich.edu/CTMS/index.php?example=InvertedPendulum&amp;section=SystemModeling</a></p> <p>I am stuck on how to make a state-space representation from these transfer functions:</p> <p>$$ \frac{\Phi(s)}{U(s)} = \frac{\frac{ml}{q}s}{s^3+\frac{b(I+ml^2)}{q}s^2-\frac{(M+m)mgl}{q}s-\frac{bmgl}{q}} \\ \frac{X(s)}{U(s)} = \frac{\frac{(I+ml^2)s^2-gml}{q}}{s^4+\frac{b(I+ml^2)}{q}s^3-\frac{(M+m)mgl}{q}s^2-\frac{bmgl}{q}s} $$</p> <p>The text gives a hint "<em>The linearized equations of motion from above can also be represented in state-space form if they are rearranged into a series of first order differential equations. Since the equations are linear, they can then be put into the standard matrix form shown below.</em>"</p> <p>But I do not understand this hint. I tried to research how to reduce the order, as on <a href="http://tutorial.math.lamar.edu/Classes/DE/ReductionofOrder.aspx" rel="nofollow noreferrer">http://tutorial.math.lamar.edu/Classes/DE/ReductionofOrder.aspx</a>, but maybe you can give me more input.</p> <p>Solution: $$ \begin{bmatrix} \dot x \\ \ddot x \\ \dot \phi \\ \ddot \phi \end{bmatrix} = \begin{bmatrix} 0 &amp; 1 &amp; 0 &amp; 0 \\ 0 &amp; \frac{-(I+ml^2)b}{I(M+m)+Mml^2} &amp; \frac{m^2gl^2}{I(M+m)+Mml^2} &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 1 \\ 0 &amp; \frac{-mlb}{I(M+m)+Mml^2} &amp; \frac{mgl(M+m)}{I(M+m)+Mml^2} &amp; 0 \end{bmatrix} \begin{bmatrix} x \\ \dot x \\ \phi \\ \dot \phi \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{I+ml^2}{I(M+m)+Mml^2} \\ 0 \\ \frac{ml}{I(M+m)+Mml^2} \end{bmatrix} u $$</p> <p>$$ y = \begin{bmatrix} 1 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 1 &amp; 0 \end{bmatrix} \begin{bmatrix} x \\ \dot x \\ \phi \\ \dot \phi \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \end{bmatrix} u $$</p>
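One way to see that the quoted state-space matrices realize exactly the two transfer functions is to evaluate $C(sI-A)^{-1}B$ numerically and compare. This is my own consistency check, not part of the tutorial; the parameter values below are sample values (any positive values would do), and note that $q=(M+m)(I+ml^2)-(ml)^2$ equals the denominator $I(M+m)+Mml^2$ appearing in the matrices.

```python
import numpy as np

# Sample parameter values (assumed; any positive values work for this check).
M, m, b, l, I, g = 0.5, 0.2, 0.1, 0.3, 0.006, 9.8
q = (M + m)*(I + m*l**2) - (m*l)**2   # equals I*(M+m) + M*m*l**2

A = np.array([[0, 1, 0, 0],
              [0, -(I + m*l**2)*b/q, (m*l)**2*g/q, 0],
              [0, 0, 0, 1],
              [0, -m*l*b/q, m*g*l*(M + m)/q, 0]])
B = np.array([[0], [(I + m*l**2)/q], [0], [m*l/q]])
C = np.array([[1, 0, 0, 0],    # output x
              [0, 0, 1, 0]])   # output phi

def H(s):
    """Transfer matrix C (sI - A)^(-1) B at the complex frequency s."""
    return C @ np.linalg.solve(s*np.eye(4) - A, B)

def phi_over_u(s):
    return (m*l/q)*s / (s**3 + b*(I + m*l**2)/q*s**2
                        - (M + m)*m*g*l/q*s - b*m*g*l/q)

def x_over_u(s):
    return (((I + m*l**2)*s**2 - g*m*l)/q) / (s**4 + b*(I + m*l**2)/q*s**3
            - (M + m)*m*g*l/q*s**2 - b*m*g*l/q*s)

s0 = 2.0 + 1.0j
print(np.allclose(H(s0)[0, 0], x_over_u(s0)))    # True
print(np.allclose(H(s0)[1, 0], phi_over_u(s0)))  # True
```

The $\Phi/U$ transfer function is third order while the realization is fourth order; the extra pole at $s=0$ cancels exactly against the numerator, so the numerical values still agree.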
martini
15,379
<p>Yes it's true. For defining the power $x^m$ with general $m \in \mathbf R$ and $x \in (0,\infty)$, there are two possibilities: </p> <ol> <li><p>The direct way (using some definition of $\exp$ and $\log$ that does not use non-integral powers, e.g. the power series): We define $$ x^m := \exp(m\log x) $$ then the proof uses the known properties of $\exp$ and $\log$, giving $$ x^m x^n = \exp(m\log x)\exp(n\log x) = \exp\bigl(m\log x + n \log x\bigr) = \exp\bigl((m+n)\log x\bigr) = x^{n+m} $$ Doing it this way we have to show that for integral $n\in \mathbf N$, we get the same $x^n$ as with the classical inductive definition (this is a simple induction).</p></li> <li><p>The "step by step"-way. We extend our definition of $x^m$ along with the equation from $m \in \mathbf N$ over $m \in \mathbf Z$ and $m \in \mathbf Q$ to $m \in \mathbf R$.</p> <ul> <li>From $\mathbf N$ to $\mathbf Z$: For negative $m \in \mathbf Z \setminus \mathbf N$ define $$ x^{m} := \frac 1{x^{-m}} $$ Then (just consider all possible signs of $m$ and $n$): $x^{m+n} = x^mx^n$, for all $m,n \in \mathbf Z$.</li> <li>From $\mathbf Z$ to $\mathbf Q$: For $m \in \mathbf Q$, write $m = \frac pq$ with $p \in \mathbf Z$ and $q \in \mathbf N^\times$. Define $$ x^{m} = x^{p/q} := \sqrt[q]{x^p} $$ Then, we have for $m,n \in \mathbf Q$, $m =\frac pq$, $n = \frac st$, that \begin{align*} x^{m+n} &amp;= x^{\frac{pt + qs}{qt}}\\ &amp;= \sqrt[qt]{x^{pt+qs}}\\ &amp;= \sqrt[qt]{x^{pt}x^{qs}}\\ &amp;= \sqrt[qt]{x^{pt}}\sqrt[qt]{x^{qs}}\\ &amp;= \sqrt[q]{x^p}\sqrt[t]{x^s}\\ &amp;= x^m x^n \end{align*}</li> <li>From $\mathbf Q$ to $\mathbf R$: For $m \in \mathbf R$ and $x\ge 1$, define $$ x^m := \sup_{q\in \mathbf Q, q \le m} x^q $$ for $x \in (0,1)$, define $$ x^m := \inf_{q\in \mathbf Q, q \le m} x^q $$ Then $x^{m+n}= x^m x^n$ for all $m,n \in \mathbf R$ (that is straightforward using the definition and properties of the sup).</li> </ul></li> </ol>
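A floating-point spot check (my addition, an illustration rather than a proof) that the direct definition in route 1 reproduces integer powers and satisfies the addition law:

```python
import math

# Direct definition: x^m := exp(m log x), for x > 0.
def power(x, m):
    return math.exp(m * math.log(x))

x = 2.7
for n in range(1, 6):
    # Agrees with the classical inductive (repeated-product) definition.
    assert math.isclose(power(x, n), x**n)

for m, n in [(0.5, 1.5), (-2.25, 3.75), (math.sqrt(2), math.pi)]:
    # The addition law x^(m+n) = x^m * x^n, up to floating-point error.
    assert math.isclose(power(x, m + n), power(x, m) * power(x, n))
print("ok")
```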
2,788,498
<p>Suppose $T([a,-b])=[-x,y]$ and $T([a,b])=[x,y]$. Find a matrix $A$ such that $T(x)=Ax$ for all $x\in\mathbb{R}^2$.</p>
Lutz Lehmann
115,115
<p>As $1$ is not a root of the characteristic polynomial (of the (homogeneous) differential operator on the left side), the degree of the polynomial factor stays the same and you have to try to fit the parameters in $$ y_p(x)=(ax+b)e^x $$ to the equation.</p>
2,788,498
<p>Suppose $T([a,-b])=[-x,y]$ and $T([a,b])=[x,y]$. Find a matrix $A$ such that $T(x)=Ax$ for all $x\in\mathbb{R}^2$.</p>
User1234
235,058
<p>Hint: here is an alternate solution using the Method of Annihilators.</p> <p>$y_1=4xe^x$ is clearly annihilated by $D^2-2D+1=(D-1)^2$, where $D$ denotes the derivative operator. Hence, applying this annihilator to both sides of the original ODE, the ODE can be "rewritten" as $$(D^2+1)(D-1)^2y=(D+i)(D-i)(D-1)^2y=0$$</p> <p>Edit: As pointed out in the comments, after getting the solutions to the equation $(D^2+1)(D-1)^2y=0$, we need to check which of them actually satisfy the original equation; the calculations there are exactly as in the Method of Undetermined Coefficients.</p>
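The underlying ODE is not quoted here; from the fragments $4xe^x$ and $D^2+1$ in the two answers it appears to be $y''+y=4xe^x$, and under that assumption the undetermined coefficients in $y_p=(ax+b)e^x$ can be found symbolically:

```python
import sympy as sp

# Assumed ODE (reconstructed from the answers): y'' + y = 4x e^x.
# Solve for the undetermined coefficients in y_p = (a x + b) e^x.
x, a, b = sp.symbols('x a b')
y = (a*x + b)*sp.exp(x)
residual = sp.expand((sp.diff(y, x, 2) + y - 4*x*sp.exp(x)) / sp.exp(x))
coeffs = sp.solve([residual.coeff(x, 1), residual.coeff(x, 0)], [a, b])
print(coeffs)  # {a: 2, b: -2}

# Verify: y_p = (2x - 2) e^x indeed satisfies the assumed ODE.
yp = (2*x - 2)*sp.exp(x)
print(sp.simplify(sp.diff(yp, x, 2) + yp - 4*x*sp.exp(x)))  # 0
```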
3,575,334
<p>I am trying to show that <span class="math-container">$\int_{-b}^{b} \frac{f(N+\frac{1}{2} + it)}{e^{2\pi i(N+\frac{1}{2} + it)}-1} dt \to 0$</span> as <span class="math-container">$N \to \infty$</span> where <span class="math-container">$|f(N+1/2+it)| \le A/(1+(N+1/2)^2)$</span> for some constant <span class="math-container">$A$</span>. </p> <p>To show this, I have <span class="math-container">$|\int_{-b}^{b} \frac{f(N+\frac{1}{2} + it)}{e^{2\pi i(N+\frac{1}{2} + it)}-1} dt |\le \frac{A}{1+N^2} \int_{-b}^b \frac{1}{|e^{2\pi i(N+\frac{1}{2})} e^{-2\pi t} - 1|} dt \le \frac{A}{1+N^2} \int_{-b}^b \frac{dt}{e^{-2 \pi t} - 1}.$</span></p> <p>My proof would be complete if we have <span class="math-container">$\int_{-b}^b \frac{dt}{e^{-2 \pi t} - 1}$</span> exists. However, I don't know think it does. How can I show that the contour integral vanishes in the limit?</p>
J. W. Tanner
615,567
<p>The intersection <span class="math-container">$(A-C)\cap(C-B)$</span> is empty, </p> <p>because any element of <span class="math-container">$A-C$</span> is not in <span class="math-container">$C$</span>, and any element of <span class="math-container">$C-B$</span> is in <span class="math-container">$C$</span>, </p> <p>so no element can lie in both <span class="math-container">$A-C$</span> and <span class="math-container">$C-B$</span>.</p>
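The claim can also be brute-forced over all subsets of a small universe (my illustration; the universe $\{1,2,3\}$ is arbitrary):

```python
from itertools import combinations

# Check (A - C) & (C - B) == {} for every choice of A, B, C.
universe = [1, 2, 3]
subsets = [set(c) for r in range(4) for c in combinations(universe, r)]
empty_always = all(not ((A - C) & (C - B))
                   for A in subsets for B in subsets for C in subsets)
print(empty_always)  # True
```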
1,425,433
<p>I have problems proceeding in solving the following differential equation $$xy' + y + (y')^2 = 0.$$</p> <p>After solving for $\frac{dy}{dx}$ in the quadratic and using the substitution $u^2 = x^2 - 4y$ in the discriminant, I obtain $\frac{du}{dx} = 2 \frac{x}{u} -1$. Please how can I proceed? </p>
najayaz
169,139
<p>First make the substitution $p=y'$ to get $$px+y+p^2=0$$ Now differentiate with respect to $x$: $$p'x+2p+2pp'=0$$ $$p'=\frac{-2p}{x+2p}$$ Put $p=vx\implies p'=v+xv'$ $$v+xv'=\frac{-2v}{1+2v}$$ $$xv'=\frac{-3v-2v^2}{1+2v}$$ $$\frac{1+2v}{3v+2v^2}dv=-\frac{dx}x$$ Now integrating (via the partial fractions $\frac{1+2v}{v(2v+3)}=\frac{1/3}{v}+\frac{4/3}{2v+3}$ and multiplying through by $3$), $$\ln v+2\ln(2v+3)+3\ln x=c$$ Now put back $v=\frac px$ to get $$\ln p+2\ln(2p+3x)=c\implies p(2p+3x)^2=c$$ I have no idea how to solve that last equation but I think someone else might be able to do it.</p>
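One can at least verify symbolically (my addition) that $p(2p+3x)^2$ is a first integral, i.e. constant along solutions of $p'=-2p/(x+2p)$:

```python
import sympy as sp

# Substitute p' = -2p/(x + 2p) into d/dx [ p (2p + 3x)^2 ] and simplify.
x = sp.symbols('x')
p = sp.Function('p')(x)
first_integral = p * (2*p + 3*x)**2
derivative = sp.diff(first_integral, x).subs(sp.Derivative(p, x),
                                             -2*p / (x + 2*p))
print(sp.simplify(derivative))  # 0
```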
1,425,433
<p>I have problems proceeding in solving the following differential equation $$xy' + y + (y')^2 = 0.$$</p> <p>After solving for $\frac{dy}{dx}$ in the quadratic and using the substitution $u^2 = x^2 - 4y$ in the discriminant, I obtain $\frac{du}{dx} = 2 \frac{x}{u} -1$. Please how can I proceed? </p>
JJacquelin
108,514
<p>$$xy'+y+y'^2=0$$ Let $y'=p$: $$y=-xy'-y'^2=-xp-p^2$$ $$ \frac{dy}{dp} =-p\frac{dx}{dp}-x-2p$$ $$p=y'=\frac{dy}{dx}=\frac{dy}{dp} \frac{dp}{dx} =-p-(x+2p) \frac{dp}{dx}$$ $$2p=-(x+2p) \frac{dp}{dx}$$ $$2p\frac{dx}{dp}+x=-2p$$ The solution of this first order linear ODE is: $$x=-\frac{2}{3}p+\frac{C}{2\sqrt{p}}$$ $$y=-xp -p^2= \frac{2}{3}p^2-\frac{C}{2}\sqrt{p} -p^2 =-\frac{1}{3}p^2-\frac{C}{2}\sqrt{p}$$ Finally, the solution of $xy'+y+y'^2=0$ expressed in parametric form with parameter $p$ is: $$\begin{cases} x=-\frac{2}{3}p+ \frac{C}{2}\frac{1}{\sqrt{p} } \\ y =-\frac{1}{3}p^2-\frac{C}{2}\sqrt{p} \\ \end{cases}$$ If one wants to find the explicit function $y(x)$, the parameter $p$ has to be eliminated from the system $\left(x(p),y(p)\right)$. This is possible, but will lead to complicated equations.</p>
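A symbolic verification of this parametric solution (a sketch I added, restricting to $p>0$ so the square roots are real):

```python
import sympy as sp

# Along (x(p), y(p)) we should have dy/dx = p and x*p + y + p^2 = 0,
# which together give x*y' + y + (y')^2 = 0.
p, C = sp.symbols('p C', positive=True)
x = -sp.Rational(2, 3)*p + C / (2*sp.sqrt(p))
y = -sp.Rational(1, 3)*p**2 - C*sp.sqrt(p)/2

slope = sp.simplify(sp.diff(y, p) / sp.diff(x, p))
print(sp.simplify(slope - p))        # 0, i.e. dy/dx = p
print(sp.simplify(x*p + y + p**2))   # 0, the original ODE
```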
1,017,411
<blockquote> <p>Let <span class="math-container">$R$</span> be a commutative ring with <span class="math-container">$1$</span> and <span class="math-container">$M$</span> an <span class="math-container">$R$</span>-module. <span class="math-container">$$\varphi: \begin{cases}R &amp; \longrightarrow \text{end}_R(M) \\ a &amp; \longmapsto \lambda_a \end{cases} $$</span> is a ring isomorphism for <span class="math-container">$M=R$</span></p> </blockquote> <p><strong>My approach</strong>: First to clarify, <span class="math-container">$\lambda_a: M \to M, x \mapsto ax$</span> is the homothetic mapping.</p> <p>I managed to show that <span class="math-container">$\varphi$</span> is a ring homomorphism with <span class="math-container">$\varphi(1_R)=\lambda_{1_R}=1_{\text{end}_R(M)}=id_M$</span></p> <p>Now I am stuck with the 2nd part. I was told that the easiest way to complete this exercise is to find the inverse mapping and show that the two compositions yield the identity mapping (on the respective set).</p> <p>At some point I was given the following mapping <span class="math-container">$$\xi : \begin{cases}\text{end}_R(R) &amp; \longrightarrow R \\ \delta &amp; \longmapsto \delta (1) \end{cases} $$</span></p> <p>I still fail to understand the intuition behind this mapping, how one comes up with such an idea and, on top of that, why it works. Here are my calculations:</p> <p>Let <span class="math-container">$x \in R$</span> be arbitrary: <span class="math-container">$$\varphi(\xi(\delta(x)))=\varphi(\delta(1))=\lambda_{\delta(1)} $$</span> My <span class="math-container">$x \in R$</span> seems to have 'vanished' which is clearly a bad calculation on my end. 
So I suppose I have to do the calculation like this: <span class="math-container">$$\varphi(\xi (\delta(x)))=\varphi(\delta(1)(x))=\lambda_{\delta(1)(x)}\overset{?}=\lambda_{\delta(1)}(x)=\delta(1)x =x \delta(1) \\ = \delta(x) = \operatorname{id}_{\text{end}_R(R)}(\delta)(x)\tag{*}$$</span></p> <p>After the answers provided below I understand that the above calculation does hold, but there is clearly some magic (and with magic I mean bad mathematics performed by me) going on at the step indicated with ?. It seems that my argument is always getting 'eaten up' or ends up in places where I can no longer work with it.</p> <p>Furthermore let <span class="math-container">$a \in R$</span> be arbitrary: <span class="math-container">$$\xi(\varphi(a))=\xi(\lambda_a)=\lambda_a(1)=a\cdot 1=a=id_M(a) $$</span> Which I am okay with.</p> <p>Could someone please enlighten me with some insight regarding this exercise? Especially in the calculation marked with (*) I am hopelessly lost (because of a mapping defined by a mapping through a mapping .....)</p>
egreg
62,967
<p>My impression is that you're overcomplicating things.</p> <h3>For $a\in R$, $\lambda_a\colon M\to M$ is an $R$-endomorphism of $M$</h3> <p>This is just a simple verification.</p> <h3>$\varphi$ is a ring homomorphism</h3> <p>Indeed, $\varphi(ab)=\lambda_{ab}$ and this is the map $$ x\mapsto \lambda_{ab}(x)=(ab)x=a(bx)= \lambda_a(\lambda_b(x))=(\lambda_a\circ\lambda_b)(x) $$ so $\lambda_{ab}=\lambda_a\circ\lambda_b$ or $\varphi(ab)=\varphi(a)\circ\varphi(b)$.</p> <p>Similarly, $\lambda_{a+b}$ is the map $$ x\mapsto\lambda_{a+b}(x)=(a+b)x=ax+bx=\lambda_a(x)+\lambda_b(x) =(\lambda_a+\lambda_b)(x) $$ and so $\varphi(a+b)=\varphi(a)+\varphi(b)$.</p> <p>Of course $\varphi(1)=\lambda_1$ is the identity on $M$.</p> <h3>The kernel of $\varphi$ is the annihilator of $M$</h3> <p>Easy: $\varphi(a)=0$ if and only if $aM=0$.</p> <p>As a consequence, if $M$ is faithful, $\varphi$ is injective. Note that $R$ is faithful as $R$-module, because only $0$ annihilates $1$.</p> <h3>If $M=R$, then $\varphi$ is surjective</h3> <p>Let $f\colon R\to R$ be an $R$-endomorphism and set $a=f(1)$. Then, for $x\in R$, we have $$ f(x)=f(x1)=xf(1)=xa=ax=\lambda_a(x) $$ so $f=\lambda_a=\varphi(a)$.</p> <p>The inverse map is just $f\mapsto f(1)$.</p>
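A finite illustration of the last two steps (my example, not part of the answer): for $R=\mathbb Z/6$, brute force confirms that the $R$-module endomorphisms of $R$ are exactly the multiplication maps $\lambda_a$, with $f\mapsto f(1)$ inverting $a\mapsto\lambda_a$.

```python
from itertools import product

# Encode a map f : Z/6 -> Z/6 as the tuple (f(0), ..., f(5)).
n = 6

def is_module_endo(f):
    additive = all(f[(u + v) % n] == (f[u] + f[v]) % n
                   for u in range(n) for v in range(n))
    homogeneous = all(f[(r * u) % n] == (r * f[u]) % n
                      for r in range(n) for u in range(n))
    return additive and homogeneous

endos = [f for f in product(range(n), repeat=n) if is_module_endo(f)]
print(len(endos) == n)  # True: exactly one endomorphism lambda_a per a in R
print(all(f == tuple((f[1] * u) % n for u in range(n))  # each is lambda_{f(1)}
          for f in endos))
```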
216,421
<p>How do I go about proving this? Do I have to show total boundedness? (I don't know how to use the finiteness of the residue field, and this seems like something it might pertain to.)</p>
Makoto Kato
28,422
<p>Let $A$ be a DVR. Let $P$ be its maximal ideal.</p> <p><strong>Lemma 1</strong> $P^n/P^{n+1}$ is, as an $A$-module, isomorphic to $A/P$ for every integer $n &gt; 0$.</p> <p>Proof: Let $\pi$ be a generator of $P$. Let $\phi\colon P^n \rightarrow P^n/P^{n+1}$ be the canonical $A$-homomorphism. Let $g\colon A \rightarrow P^n$ be the $A$-homomorphism defined by $g(x) = \pi^n x$. Let $f\colon A \rightarrow P^n/P^{n+1}$ be $\phi\circ g$. Clearly $f$ is surjective. Suppose $f(x) = 0$. Since $\pi^n x \in P^{n+1}$, there exists $y \in A$ such that $\pi^n x = \pi^{n+1} y$. Hence $x = \pi y$. Hence $Ker(f) = P$. Hence $P^n/P^{n+1}$ is isomorphic to $A/P$. <strong>QED</strong></p> <p><strong>Lemma 2</strong> Suppose $A/P$ is finite. Then $A/P^n$ is finite for every integer $n &gt; 0$.</p> <p>Proof: This follows immediately from Lemma 1 and the following chain of ideals.</p> <p>$A \supset P \supset P^2 \supset \cdots \supset P^{n-1} \supset P^n$. <strong>QED</strong></p> <p><strong>Lemma 3</strong> Suppose $A/P$ is finite. Then $A$ is totally bounded in the $P$-adic topology.</p> <p>Proof: Let $n &gt; 0$ be an integer. By Lemma 2, $A/P^n$ is finite. Let $a_1, \dots, a_m$ be a complete system of representatives modulo $P^n$. Then $A = \bigcup_i (a_i + P^n)$. Hence $A$ is totally bounded. <strong>QED</strong></p> <p><strong>Proposition</strong> Suppose $A$ is complete and $A/P$ is finite. Then $A$ is compact.</p> <p>Proof: By Lemma 3, $A$ is totally bounded; since a complete, totally bounded metric space is compact, this follows immediately.</p>
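Lemma 3 can be illustrated numerically (my addition, not part of the proof): take the $3$-adic metric on $\mathbb Z$ (a dense subring of the complete DVR $\mathbb Z_3$); the $p^n$ residue classes are exactly the balls of radius $p^{-n}$, so finitely many balls of any given radius cover everything. The choices $p=3$, $n=4$ and the sample range are arbitrary.

```python
from fractions import Fraction

p, n = 3, 4

def vp(k):
    """p-adic valuation of a nonzero integer."""
    v = 0
    while k % p == 0:
        k //= p
        v += 1
    return v

def dist(a, b):
    """p-adic distance p^(-v_p(a-b))."""
    return Fraction(0) if a == b else Fraction(1, p**vp(a - b))

reps = range(p**n)            # complete system of representatives mod p^n
sample = range(-500, 500)     # a finite sample of integers
covered = all(any(dist(a, r) <= Fraction(1, p**n) for r in reps)
              for a in sample)
print(covered)  # True
```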