qid | question | author | author_id | answer |
|---|---|---|---|---|
1,857,630 | <p>Let $\mathcal{A}$ be a (not necessarily unital) commutative Banach algebra, and let
$$ M_{\mathcal{A}} = \{ \phi:\mathcal{A} \to \mathbb{C} : \phi \mbox{ is multiplicative and not trivial}\} $$
and
$$ \mathrm{Max}(\mathcal{A})=\{ I \lhd \mathcal{A} : I \mbox{ maximal} \}.$$
If $\mathcal{A}$ is unital, it is well known that there is a bijection between $M_{\mathcal{A}}$ and $\mathrm{Max}(\mathcal{A})$ sending each functional to its kernel (the inverse is given by the quotient and the Gelfand-Mazur theorem). </p>
<p>My question is, </p>
<blockquote>
<p>is this still a bijection in the non-unital case?</p>
</blockquote>
<p>I'm aware that if $\mathcal{A}$ is a commutative C*-algebra it is still a bijection. Also, restriction gives a bijection from $M_{\tilde{\mathcal{A}}} \setminus \{ \pi:\tilde{\mathcal{A}} \to \mathbb{C} \}$ to $M_{\mathcal{A}}$; but this fact doesn't seem enough to conclude the result. I haven't been able to find a source for this. </p>
<p>Thanks in advance.</p>
| Tomasz Kania | 17,929 | <p>Linear-multiplicative functionals (<em>aka</em> characters) on complex Banach algebras are automatically continuous, so their kernels are closed. (You will find a slick proof of this fact on p. 181 of Allan's and Dales' <em>Introduction to Banach Spaces and Algebras</em>.) However, in the non-unital case it may well happen that a maximal ideal is dense. </p>
<p>The right notion to look at is the notion of a maximal modular ideal. Such ideals are bijectively associated to (kernels of) characters.</p>
<p>You may also be interested in <a href="https://math.stackexchange.com/questions/859247/maximal-ideals-and-maximal-subspaces-of-normed-algebras">this thread</a>.</p>
|
1,444,216 | <p>An LP is given in the form :</p>
<p>$\max\langle c,x\rangle$ subject to $Ax=b,\ x\geq 0$.</p>
<p>I'm trying to show that any such problem can be reduced to a problem of the form </p>
<p>$\max\langle c,x\rangle$ subject to $Ax=b,\ x\geq 0,\ \displaystyle \sum_j {x_j}=1$.</p>
<p>I think I'll somehow have to modify the problem by dividing by some quantity. However, I can't quite show this. Please help.</p>
| Ittay Weiss | 30,953 | <p>The intrinsic metric structure is what the ant is using. Generally, given a metric space $X$ and a subset $S$, the ambient metric on $X$ makes $S$ a metric space, using the exact same metric. Thinking of $S\subseteq \mathbb R^2$ with the Euclidean metric on $\mathbb R^2$, where $S$ is the unit circle (only the circumference), in that metric the distance between two antipodal points is $2$. There is however a different metric, the so-called intrinsic metric. In it, the distance between two points is the length of the shortest path connecting the two points, where the path is required to stay within $S$ all the time (assuming such paths exist...). In the intrinsic metric, the distance between antipodal points on $S$ is $\pi$. </p>
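<p><em>(Added illustration, not part of the original answer: a minimal Python sketch contrasting the two metrics on a circle; the function names are my own.)</em></p>

```python
import math

def ambient_distance(theta1, theta2, r=1.0):
    # Chordal (straight-line) distance between two circle points in R^2.
    p = (r * math.cos(theta1), r * math.sin(theta1))
    q = (r * math.cos(theta2), r * math.sin(theta2))
    return math.hypot(p[0] - q[0], p[1] - q[1])

def intrinsic_distance(theta1, theta2, r=1.0):
    # Length of the shorter arc, i.e. the shortest path staying on the circle.
    d = abs(theta1 - theta2) % (2 * math.pi)
    return r * min(d, 2 * math.pi - d)

# Antipodal points on the unit circle: chordal distance 2, intrinsic distance pi.
print(ambient_distance(0.0, math.pi))    # ~2.0
print(intrinsic_distance(0.0, math.pi))  # ~3.14159...
```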
|
1,444,216 | <p>An LP is given in the form :</p>
<p>$\max\langle c,x\rangle$ subject to $Ax=b,\ x\geq 0$.</p>
<p>I'm trying to show that any such problem can be reduced to a problem of the form </p>
<p>$\max\langle c,x\rangle$ subject to $Ax=b,\ x\geq 0,\ \displaystyle \sum_j {x_j}=1$.</p>
<p>I think I'll somehow have to modify the problem by dividing by some quantity. However, I can't quite show this. Please help.</p>
| JMoravitz | 179,297 | <p>As mentioned already, the metric to use is called the <em>intrinsic metric</em>, but without having worked with it before, it might be hard to use.</p>
<p>This is a good example to start using the intrinsic metric on though.</p>
<p>Let us label the four walls, ceiling, and floor as $N,E,S,W,C,F$ respectively.</p>
<p>Suppose that the ant is currently on the western wall. You have then the following space: (<em>ignore the arrows for now</em>)</p>
<p>$$\begin{array}{|c|c|c|c|c|}
\hline
&&\exists&\color{orange}{\swarrow}\\
\hline
\color{green}{\swarrow}&&C&\color{red}{\searrow}\\
\hline
E&S&W&N&E\\
\hline
&\color{blue}{\nearrow}&F\\
\hline
&&\exists&\color{orange}{\swarrow}\\
\hline
\end{array}$$</p>
<p>where $\exists$ denotes the eastern wall, but oriented differently, as suggested by the picture.</p>
<p>Similar images can be made for the ant's current location, but they are difficult to superimpose in a 2-dimensional setting.</p>
<p>Given a starting location and an ending location, you can then find the shortest path to be the same as the shortest path on one of these pictures. For example, if you wish to begin at the bottom left of the western wall and wish to end at the top right of the northern wall, i.e. starting at the corner where the blue $\color{blue}{\nearrow}$ is pointing and ending at the corner where the red $\color{red}{\searrow}$ is pointing, you will travel from the starting corner to the midpoint of the shared edge between the western and northern wall, and then continuing along the northern wall to the final corner.</p>
<p>However, note that the red $\color{red}{\searrow}$ is pointing to the exact same point as the green $\color{green}{\swarrow}$, and so the ant could have chosen to go counterclockwise as opposed to clockwise by traveling from the starting vertex along a straight-line path to the midpoint of the shared edge between the southern and eastern wall before continuing to the top left of the east wall (which is the same as the top right of the north wall). Similarly, the additional orange arrows also point to the same spot, and so the ant could have traveled along the ceiling or floor to get to its destination.</p>
|
2,903,429 | <p>Show that the equation $$\frac{4}{3}{x^2} + \frac{4}{3}{y^2} + \frac{4}{3}{z^2} + \frac{4}{3}{xy} + \frac{4}{3}{xz} + \frac{4}{3}{yz} = 1$$ represents an ellipsoid. Find the position and lengths of its principal half-axes.</p>
<p>Workings
$$(x+y)^2+(y+z)^2+(z+x)^2=\frac32\,.$$
After pulling out the common factor $\left(\dfrac32\right)^{\frac12}$, I am not sure how to find the lengths and positions of the half-axes.</p>
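<p><em>(Added check, not part of the original post.)</em> Writing the quadratic form as $x^{T}Mx=1$, the principal half-axis lengths are $1/\sqrt{\lambda}$ for the eigenvalues $\lambda$ of $M$. The candidate eigenvectors below are an educated guess that the code then confirms:</p>

```python
import math

# Quadratic form x^T M x = 1 with M = (4/3) * [[1, 1/2, 1/2], [1/2, 1, 1/2], [1/2, 1/2, 1]].
M = [[4 / 3 * (1.0 if i == j else 0.5) for j in range(3)] for i in range(3)]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

# Candidate principal directions with expected eigenvalues.
candidates = [([1, 1, 1], 8 / 3), ([1, -1, 0], 2 / 3), ([1, 1, -2], 2 / 3)]
for v, lam in candidates:
    Mv = matvec(M, v)
    assert all(abs(Mv[i] - lam * v[i]) < 1e-9 for i in range(3))
    # Half-axis length along this direction: 1/sqrt(eigenvalue).
    print(1 / math.sqrt(lam))  # sqrt(3/8) ~ 0.612 once, sqrt(3/2) ~ 1.225 twice
```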
| Deepesh Meena | 470,829 | <p>$$\binom{n}{2}=\frac{n!}{2!(n-2)!}=\frac{n(n-1)(n-2)!}{2!(n-2)!}=\frac{n(n-1)}{2}$$</p>
<p>$$\sum_{k=1}^{j}1=j$$
$$\sum_{k=1}^{n}1=n$$
$$\sum_{k=1}^{n}1=\sum_{k=1}^{j}1+\sum_{k=j+1}^{n}1$$
$$\sum_{k=j+1}^{n}1=\sum_{k=1}^{n}1-\sum_{k=1}^{j}1=n-j$$</p>
<p>$$\sum_{j=1}^n \sum_{k=j+1}^{n} \ 1 =\sum_{j=1}^n\left(n-j\right)$$</p>
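<p><em>(Added sanity check, not part of the original answer.)</em> The identity chain above can be verified numerically:</p>

```python
from math import comb

# Check sum_{j=1}^n sum_{k=j+1}^n 1 == n(n-1)/2 == C(n,2) for small n.
for n in range(1, 20):
    double_sum = sum(1 for j in range(1, n + 1) for k in range(j + 1, n + 1))
    assert double_sum == n * (n - 1) // 2 == comb(n, 2)
print("identity verified for n = 1..19")
```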
|
754,644 | <p>This question is related to a solved problem in Gilbert Strang's 'Introduction to Linear Algebra'(Chapter 3,Question 3.6A, Page 190).</p>
<blockquote>
<p>Q) Find bases and dimensions for all four fundamental subspaces of A if you know that</p>
<p><span class="math-container">$A = \begin{bmatrix}1 & 0 & 0\\2 & 1 & 0 \\ 5 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 3 & 0 & 5\\0 & 0 & 1 & 6\\ 0 & 0 & 0 & 0\end{bmatrix} = LU = E^{-1}R$</span></p>
</blockquote>
<p>Answer given in the text:</p>
<blockquote>
<p>This matrix has pivots in columns <span class="math-container">$1$</span> and <span class="math-container">$3$</span>. Its rank is <span class="math-container">$r=2$</span>.</p>
<p>Column space : Basis <span class="math-container">$(1,2,5)$</span> and <span class="math-container">$(0,1,0)$</span> from <span class="math-container">$E^{-1}$</span></p>
</blockquote>
<p>Why does he choose the first two columns of <span class="math-container">$E^{-1}$</span> as a basis ? If anything, the pivot columns of <span class="math-container">$R$</span> are an obvious choice for basis.</p>
| yago | 141,261 | <p>If you think of the product $E^{-1}R$ as the composition of linear maps, then $E^{-1}$ acts "last" and its columns essentially determine the image of $E^{-1}R$ (of course, it also depends on $R$).</p>
<p>More generally, for any functions between any sets that you can compose (assuming $Im(g) \subset D(f)$, where $D(f)$ is the domain on which $f$ is defined), we have $Im(f \circ g) = f(Im(g)) \subset Im(f)$, because by definition</p>
<p>$Im(f \circ g) = \{ f(g(x)) \mid x \in D(g)\} = \{ f(y) \mid y \in Im(g) \} \subset \{ f(z) \mid z \in D(f) \}$</p>
<p>If $g$ is a constant function, so is $f \circ g$, and even if the image of $f$ is very large, the image of $f \circ g$ is just a singleton. On the other hand, if $g$ is the identity, then $Im(f \circ g) = Im(f)$ is maximal. Every case in between can occur.</p>
<p>More specifically, here in a linear setting, where $R : \mathbb{R}^4 \rightarrow \mathbb{R^3}$ and $E^{-1} : \mathbb{R^3} \rightarrow \mathbb{R^3}$,
$Im(E^{-1}R)=E^{-1}(Im R)=E^{-1}(span\{Re_1,Re_2,Re_3,Re_4\})$ if $(e_1,e_2,e_3,e_4)$ is the canonical basis of $\mathbb{R}^4$. If one writes $(e'_1,e'_2,e'_3)$ for the canonical basis of $\mathbb{R}^3$, then since the last entry of each column of $R$ is zero (in other words, the last row is null), $Im(R) \subset span\{e'_1,e'_2\}$ and $rank(R) \leq 2$. But the rank of $R$ is at least $2$ (the first and last columns, for example, are linearly independent), hence it is exactly $2$, and $Im(R) = span\{e'_1,e'_2\}$. </p>
<p>Finally, $Im(E^{-1}R)=span\{E^{-1}(e'_1),E^{-1}(e'_2)\}$ which are precisely the two first columns of $E^{-1}$.</p>
|
2,006,870 | <p>Is the following inequality true?</p>
<p>$\left( \sum \limits_{i=1}^\infty \sum \limits_{j=1}^\infty \sum \limits_{k=1}^\infty \sum \limits_{l=1}^\infty a_{ij}\,a_{ik}\,a_{jl}\,a_{kl} \right) \leq \left( \sum \limits_{i=1}^\infty \sum \limits_{j=1}^\infty a_{ij}^2 \right)^{1/2}\left( \sum \limits_{i=1}^\infty \sum \limits_{k=1}^\infty a_{ik}^2 \right)^{1/2}\left( \sum \limits_{j=1}^\infty \sum \limits_{l=1}^\infty a_{jl}^2 \right)^{1/2}\left( \sum \limits_{k=1}^\infty \sum \limits_{l=1}^\infty a_{kl}^2 \right)^{1/2}=\left( \sum \limits_{i=1}^\infty \sum \limits_{j=1}^\infty a_{ij}^2 \right)^2$</p>
<p>where $a_{ij}$s are real numbers.</p>
| B. Goddard | 362,009 | <p>Pick two primes $p$ and $q$ larger than $999,999,999$. Solve the system $t \equiv X \pmod{p}$, $t\equiv Y \pmod{q}$ by the Chinese Remainder Theorem. Then $t$ will be unique for each pair $(X,Y)$ modulo $pq>10^{18}.$</p>
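<p><em>(Added sketch, not part of the original answer: a concrete instance of this pairing. The primes $10^9+7$ and $10^9+9$ and the helper name are my own choices.)</em></p>

```python
# Encode a pair (X, Y) as the unique t mod p*q with t = X (mod p), t = Y (mod q).
p = 1_000_000_007  # a well-known prime larger than 999,999,999
q = 1_000_000_009  # likewise

def crt_pair(X, Y):
    # Chinese Remainder Theorem: pow(p, -1, q) is the inverse of p modulo q.
    t = X + p * ((Y - X) * pow(p, -1, q) % q)
    return t % (p * q)

t = crt_pair(123_456_789, 987_654_321)
assert t % p == 123_456_789 and t % q == 987_654_321
print(t)
```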
|
126,270 | <p>How many seven-digit numbers divisible by 11 have the sum of their digits equal to 59?</p>
<p>I am able to get the seven-digit numbers divisible by 11 </p>
<p>and </p>
<p>I am also able to get the seven-digit numbers whose sum of their digits equal to 59.</p>
<p>But I am not able to work out how to count the seven-digit numbers satisfying both conditions.</p>
<p>Thanks in advance.</p>
| Brian M. Scott | 12,042 | <p>Since I promised vikiiii that I’d answer, here’s my version.</p>
<p>A number is divisible by $11$ if and only if the alternating sum of its digits is divisible by $11$. Say that the seven-digit number is $abcdefg$, where the letters represent the individual digits; then it’s divisible by $11$ if and only if $(a+c+e+g)-(b+d+f)$ is a multiple of $11$. Let $S=a+c+e+g$ and $T=b+d+f$; then we need $S-T$ to be a multiple of $11$ and $S+T=59$.</p>
<p>Since $S+T=59$, one of $S$ and $T$ must be odd and the other even, so their difference must be odd. Thus, $S-T$ cannot be $0$ or $\pm22$. We should look for ways to make it $\pm11$ or $\pm33$. (It clearly can’t be any bigger in magnitude, since $S\le 4\cdot9=36$.)</p>
<p>Suppose that $S-T=11$; then $70=11+59=(S-T)+(S+T)=2S$, so $S=35$ and $T=24$. This is possible only if three of the digits $a,c,e,g$ are $9$ and the fourth is $8$; there’s no other way to get four digits that total $35$. For three digits to total $24$, they must average $8$, so the only possibilities are that all three are $8$, that two are $9$ and one is $6$, or that they are $7,8$, and $9$. Thus, the digits $a,c,e,g$ in that order must be $8999,9899,9989$, or $9998$, and the digits $b,d,f$ must be $888,699,969,996,789,798,879,897,978$, or $987$, for a total of $4\cdot 10=40$ numbers.</p>
<p>Now suppose that $S-T=-11$; then by similar reasoning $2S=-11+59=48$, so $S=24$, and $T=35$. But $T\le 3\cdot9=27$, so this is impossible. Similarly, $S-T$ cannot be $-33$. The only remaining case is $S-T=33$. Then $2S=33+59=92$, and $S=46$, which is again impossible. Thus, the first case contained all of the actual solutions, and there are $40$ of them.</p>
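<p><em>(Added brute-force confirmation, not part of the original answer.)</em> A direct count over all seven-digit multiples of $11$ agrees with the case analysis:</p>

```python
# Count seven-digit multiples of 11 whose digits sum to 59.
start = ((10**6 + 10) // 11) * 11  # first multiple of 11 that is >= 10^6
count = sum(1 for n in range(start, 10**7, 11) if sum(map(int, str(n))) == 59)
print(count)  # 40, matching the case analysis above
```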
|
767,282 | <blockquote>
<p><strong>Problem</strong>: Show that $f: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ with $f(x,y)=\|x-y\|_2^2$ is differentiable and compute its differential at every point in the domain of $f$.<br><strong>Note</strong>: $\| \cdot \|_2$ denotes the Euclidean norm, $\|x-y\|_2 =\sqrt{\sum\limits_{i=1}^n (x_i-y_i)^2}$</p>
</blockquote>
<p>This is an exercise I found in <strong>Zorich Analysis II</strong>. I am however in <strong>general</strong> clueless about how to show that a function is differentiable. Of course there is always the brute-force way of computing the Jacobian matrix and discussing its entries; however, that might be a little difficult with the above mapping, since $f: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$: that would be one hell of a matrix to compute. </p>
<p>I found that most people enjoy working with this definition of differentiability in $\mathbb{R}^n$ </p>
<blockquote>
<p>$f(x)=f(x_0)+A(x-x_0)+\|x-x_0\|\alpha (x)$ where $\alpha: U \to \mathbb{R}^n$ such that $\lim\limits_{x \to x_0} \alpha (x)=0$ and $A$ is a linear function.</p>
</blockquote>
<p>I understand that the above is only a slight variation of:</p>
<blockquote>
<p><strong>Def</strong>: $f$ is at $x_0$ differentiable $\iff \exists A: \mathbb{R}^m \to \mathbb{R}^n: \displaystyle \lim_{x \to x_0} \frac{f(x)-f(x_0)-A(x-x_0)}{\|x-x_0\|}=0 $</p>
</blockquote>
<p>In <strong>C.T. Michaels Analysis II</strong> the author mentions that by the definition nothing is known about the linear mapping $A$, however in most cases $A$ can easily be found by computing the Jacobian-Matrix and then verifying the definition with that Matrix. </p>
<p>I have read a couple of entries on this website where people attempt to show that a function is differentiable or not at some generic point. But to be honest it all seemed a bit like Voodoo-Magic to me.</p>
<p><strong>Question</strong>: I am not primarily interested in the solution to this exercise but in the thought process one would have to show that $f$ defined as above is differentiable:</p>
<ul>
<li>Where do you start? </li>
<li>Is there something in specific you have to notice about the function beforehand? Like symmetry, roots or anything to start discussing its differential.</li>
<li>Is it a lot of lucky guessing and in the end succeeding after a lot of failed attempts? Or is there something like a 'most natural attempt' to generally solve such exercises.</li>
</ul>
| Pedro | 23,350 | <p>Note this is the map $(x,y)\mapsto (x-y)\cdot (x-y)$, that is $(x,y)\mapsto x\cdot x-2x\cdot y+y\cdot y$. It suffices to show that each of the summands is differentiable. But two of the summands are of the form $\lVert \cdot \rVert ^2$, which we know how to differentiate, and the central summand is bilinear, so it is not too hard to obtain their derivatives. In fact, we obtain that $$Df(x,y)(a,b)=2x\cdot a-2(x\cdot b+a\cdot y)+2 b\cdot y\\=2x\cdot(a-b)-2y\cdot(a-b)=2(x-y)\cdot (a-b)$$</p>
<p>Note that if $g(x,y)=x\cdot x$, then $Dg(x,y)(a,b)=2x\cdot a$, and if $h(x,y)=y\cdot y$ then $Dh(x,y)(a,b)=2y\cdot b$. This is easily proven by noting, say, $$g(x+h,y+k)-g(x,y)=2x\cdot h+\lVert h\rVert ^2$$ $$h(x+h,y+k)-h(x,y)=2y\cdot k+\lVert k\rVert ^2$$ and that $\lVert h\rVert,\lVert k\rVert \leqslant \lVert (h, k)\rVert$.</p>
<blockquote>
<p><em>Claim</em> Let $B:\Bbb R^n\times\Bbb R^m\to\Bbb R^p$ bilinear. Then $DB(x,y)(a,b)=B(x,b)+B(a,y)$. More generally, if $T:\Bbb R^{i_1}\times\cdots \Bbb R^{i_k }\to\Bbb R^p$ multilinear, $$DT(x_1,\ldots,x_k)(a_1,\ldots,a_k)=\sum_{j=1}^k T(x_1,\ldots,a_j,\ldots,x_k)$$</p>
</blockquote>
<p><em>Proof</em> I give the proof for the bilinear case, I leave the other proof out.</p>
<p>Indeed, suppose $B:\Bbb R^n\times\Bbb R^m\to\Bbb R^p$ is bilinear. Then $$B(x+h,y+k)=B(x,y)+B(x,k)+B(h,y)+B(h,k)$$</p>
<p>Thus $$B(x_0+h,y_0+k)-B(x_0,y_0)=B(x_0,k)+B(h,y_0)+B(h,k)$$</p>
<p>Let $A:\Bbb R^n\times \Bbb R^m\to\Bbb R^p $ be the map that sends $(a,b)$ to $B(x_0,b)+B(a,y_0)$. This is <em>linear</em>, and the above gives $$B(x_0+h,y_0+k)-B(x_0,y_0)=A(h,k)+B(h,k)$$</p>
<p>It suffices to show that $B(h,k)$ is $o(\lVert (h,k)\rVert)$. Let $e_i,e_j$ be the canonical bases for $\Bbb R^n,\Bbb R^m$. If $h=(h_1,\ldots,h_n),k=(k_1,\ldots,k_m)$ then $$B(h,k)=\sum_{i,j} h_ik_j B(e_i,e_j)$$ and $|h_i|,|k_j|\leqslant \lVert (h,k)\rVert$, so if $C=\sum_{i,j}\lVert B(e_i,e_j)\rVert $ (a constant depending only on $B$ and the basis of choice)</p>
<p>$$\lVert B(h,k)\rVert\leqslant C\lVert (h,k)\rVert^2$$</p>
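<p><em>(Added numeric check, not part of the original answer.)</em> A central finite difference agrees with the claimed differential $Df(x,y)(a,b)=2(x-y)\cdot(a-b)$:</p>

```python
import random

def f(x, y):
    # f(x, y) = ||x - y||^2, written with plain Python lists as vectors.
    return sum((xi - yi) ** 2 for xi, yi in zip(x, y))

def Df(x, y, a, b):
    # Claimed differential: Df(x, y)(a, b) = 2 (x - y) . (a - b).
    return 2 * sum((xi - yi) * (ai - bi) for xi, yi, ai, bi in zip(x, y, a, b))

random.seed(0)
n, eps = 4, 1e-6
x, y, a, b = ([random.uniform(-1, 1) for _ in range(n)] for _ in range(4))

# Central difference of f along the direction (a, b) in R^n x R^n.
fp = f([xi + eps * ai for xi, ai in zip(x, a)], [yi + eps * bi for yi, bi in zip(y, b)])
fm = f([xi - eps * ai for xi, ai in zip(x, a)], [yi - eps * bi for yi, bi in zip(y, b)])
numeric = (fp - fm) / (2 * eps)
assert abs(numeric - Df(x, y, a, b)) < 1e-6
print("finite-difference check passed")
```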
|
184,142 | <p>I am currently an undergraduate math student. (In fact, a freshman.)</p>
<p>I know that abstract algebra is usually taught somewhat late in the undergraduate course, and I am curious how the study of abstract algebra at the graduate level differs from study at the undergraduate level.</p>
<p>So, things like what gets new treatment, or what is newly learned, are what I want to know.</p>
| M Turgeon | 19,379 | <p>There are several ways in which it is/can be different.</p>
<ul>
<li>There might be greater emphasis on proofs and rigor, depending on the level of the undergraduate abstract algebra.</li>
<li>The material is covered more quickly, and as such, more material can be covered.</li>
<li>As an extension of the previous point, more sophisticated topics can be discussed.</li>
</ul>
<p>As a rough idea, my graduate abstract algebra (taken at McGill University) covered the following topics, which are not necessarily covered in the undergraduate curriculum: Infinite Galois theory, Commutative algebra, Basic algebraic number theory, Homological algebra, Representation theory of finite groups.</p>
|
279,238 | <p>I want to create a regular polygon from the initial two points <span class="math-container">$A$</span>, <span class="math-container">$B$</span> and number of vertices <span class="math-container">$n$</span>,<br />
<a href="https://i.stack.imgur.com/dKr9A.png" rel="noreferrer"><img src="https://i.stack.imgur.com/dKr9A.png" alt="enter image description here" /></a></p>
<p><code>regularPolygon[{0, 0}, {1, 0}, 3]</code> gives <code>{{0, 0}, {1, 0}, {1/2, Sqrt[3]/2}}</code></p>
<p><code>regularPolygon[{x1, y1}, {x2, y2}, 4]</code> gives <code>{{x1, y1}, {x2, y2}, {x2 + y1 - y2, -x1 + x2 + y2}, {x1 + y1 - y2, -x1 + x2 + y1}}</code></p>
<p>I found a related function <a href="http://reference.wolfram.com/language/ref/CirclePoints.html" rel="noreferrer">CirclePoints</a>, but it seems unsuitable. Is there a simple way to implement such a function? Maybe iteration could be used.</p>
| cvgmt | 72,111 | <ul>
<li><code>AnglePath</code>.</li>
</ul>
<p>For two points <code>{x1,y1}</code> and <code>{x2,y2}</code>, we first locate at <code>{x2,y2}</code>, facing the direction <code>{x2,y2}-{x1,y1}</code>; then we turn to a new direction by rotating through the angle <code>2 Pi/n</code>. After <code>n</code> rotations we return to <code>{x2,y2}</code>.</p>
<pre><code>{dx, dy} = {x2, y2} - {x1, y1};
length = Sqrt[{dx, dy} . {dx, dy}];
n = 4;
pts = AnglePath[{{x2, y2}, {dx, dy}},
ConstantArray[{length, 2 Pi/n}, n]] // FullSimplify
RotateRight[Most@pts]
</code></pre>
<p><a href="https://i.stack.imgur.com/2t56n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2t56n.png" alt="enter image description here" /></a></p>
<pre><code>{dx, dy} = {x2, y2} - {x1, y1};
length = Sqrt[{dx, dy} . {dx, dy}];
n = 5;
pts = AnglePath[{{x2, y2}, {dx, dy}},
ConstantArray[{length, 2 Pi/n}, n]];
poly = Polygon[pts];
{x1, y1} = {2, 2};
{x2, y2} = {5, 5};
Graphics[{EdgeForm[Black], FaceForm[], poly, AbsolutePointSize[10],
Red, Point[{{x1, y1}}], Text[{x1, y1}, {x1, y1}, {0, 2}], Blue,
Point[{{x2, y2}}], Text[{x2, y2}, {x2, y2}, {-2, 0}]}]
</code></pre>
<p><a href="https://i.stack.imgur.com/QhNtp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QhNtp.png" alt="enter image description here" /></a></p>
<pre><code>{x1, y1} = {2, 2};
{x2, y2} = {5, 5};
{dx, dy} = {x2, y2} - {x1, y1};
length = Norm[{dx, dy}];
Graphics[
Table[{EdgeForm[RandomColor[]], FaceForm[],
Polygon[AnglePath[{{x2, y2}, {dx, dy}},
ConstantArray[{length, 2 π/n}, n]]]}, {n, 3, 8}]]
</code></pre>
<p><a href="https://i.stack.imgur.com/DbGNt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DbGNt.png" alt="enter image description here" /></a></p>
<ul>
<li><code>NestList</code> + <code>RotationMatrix</code>.</li>
</ul>
<pre><code>p[1] = {x1, y1} =N@{2, 2};
p[2] = {x2, y2} =N@{5, 5};
n = 5;
θ = (2 π)/n;
lines = NestList[{#[[2]], #[[2]] +
RotationMatrix[θ] . (#[[2]] - #[[1]])} &, {p[1], p[2]},
n];
poly = Polygon[lines[[;; , 2]]];
Graphics[{EdgeForm[Blue], FaceForm[], poly, Arrow@lines,
AbsolutePointSize[10], Red, Point[p[1]], Blue, Point[p[2]]}]
</code></pre>
<p><a href="https://i.stack.imgur.com/XqeA0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XqeA0.png" alt="enter image description here" /></a></p>
<ul>
<li>Using complex number to rotate the vector.</li>
</ul>
<pre><code>Clear["Global`*"];
z[1] = {x1, y1} . {1, I};
z[2] = {x2, y2} . {1, I};
n = 5;
θ = (2 π)/n;
ω = Exp[I*θ];
z[k_] := z[k] = z[k - 1] + (z[k - 1] - z[k - 2]) ω;
pts = Table[z[k], {k, n}] // ReIm // ComplexExpand
Block[{x1, y1, x2, y2},
{x1, y1} = {2, 2};
{x2, y2} = {5, 5};
Graphics[{RegionBoundary@Polygon[pts], AbsolutePointSize[10], Red,
Point[{x1, y1}], Blue, Point[{x2, y2}]}]]
</code></pre>
<p><a href="https://i.stack.imgur.com/KK2qE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KK2qE.png" alt="enter image description here" /></a></p>
<ul>
<li><code>FoldList</code> +<code>AngleVector</code></li>
</ul>
<pre><code>n = 5;
{dx, dy} = {x2 - x1, y2 - y1};
length = Sqrt[{dx, dy} . {dx, dy}];
pts = FoldList[AngleVector, {x2, y2},
Table[{length, ArcTan[dx, dy] + k (2 π)/n}, {k, n}]] //
FullSimplify
{x1, y1} = {2, 2};
{x2, y2} = {5, 5};
Graphics[Polygon[pts]]
</code></pre>
<p><a href="https://i.stack.imgur.com/6FQyO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6FQyO.png" alt="enter image description here" /></a></p>
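<p><em>(Added for comparison, not part of the original answer: the same edge-rotation idea as a plain Python sketch; the function name is my own.)</em></p>

```python
import math

def regular_polygon(p1, p2, n):
    # Vertices of the regular n-gon with first edge p1 -> p2, obtained by
    # rotating the current edge vector through the exterior angle 2*pi/n.
    theta = 2 * math.pi / n
    c, s = math.cos(theta), math.sin(theta)
    pts = [p1, p2]
    for _ in range(n - 2):
        (ax, ay), (bx, by) = pts[-2], pts[-1]
        dx, dy = bx - ax, by - ay
        pts.append((bx + c * dx - s * dy, by + s * dx + c * dy))
    return pts

print(regular_polygon((0.0, 0.0), (1.0, 0.0), 3))
# third vertex is (1/2, sqrt(3)/2), matching regularPolygon[{0,0},{1,0},3]
```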
|
4,112,011 | <p>Let <span class="math-container">$f:X\rightarrow Y$</span> be a map and let <span class="math-container">$T'$</span> be a topology on <span class="math-container">$Y$</span>.</p>
<p>Show that <span class="math-container">$$T=\{U\subseteq X \mid \exists V\in T' \text{ with }U=f^{-1}(V)\} $$</span> is a topology on <span class="math-container">$X$</span>.</p>
<p><span class="math-container">$T$</span> is the coarsest topology on <span class="math-container">$X$</span> such that the map <span class="math-container">$f:(X, T) \rightarrow (Y, T') $</span> is continuous.</p>
<p>We have to show:</p>
<p>1.The set <span class="math-container">$X$</span> and the empty set are elements of <span class="math-container">$T$</span> .</p>
<ol start="2">
<li><p>Any union of elements of <span class="math-container">$T$</span> belongs to <span class="math-container">$T$</span>.</p>
</li>
<li><p>Any finite intersection of elements of <span class="math-container">$T$</span> belongs to <span class="math-container">$T$</span>.</p>
</li>
</ol>
<p>To show these axioms, I have done the following:</p>
<ol>
<li><p>It holds that <span class="math-container">$X\subseteq X$</span> and <span class="math-container">$f(X)\in T'$</span> where <span class="math-container">$T'$</span> is a topology, and so we get <span class="math-container">$X\in T$</span>.</p>
<p>We also have that <span class="math-container">$\emptyset\subseteq X$</span> and <span class="math-container">$f(\emptyset)\in T'$</span> where <span class="math-container">$T'$</span> is a topology, and so we get <span class="math-container">$\emptyset\in T$</span>.</p>
</li>
<li><p>We consider the union <span class="math-container">$O = \bigcup\limits_{\alpha \in I} U_{\alpha}$</span>.</p>
</li>
</ol>
<p>This set is in <span class="math-container">$T$</span> iff the image under <span class="math-container">$f$</span> is in <span class="math-container">$T'$</span>.
So we consider <span class="math-container">$f \left( O \right)$</span>.</p>
<p>The union is well defined under images so we get <span class="math-container">\begin{equation*}f \left( O \right) = f \left( \bigcup\limits_{\alpha \in I} U_{\alpha} \right) = \bigcup\limits_{\alpha \in I} f \left( U_{\alpha} \right)\end{equation*}</span>
Since each <span class="math-container">$U_{\alpha}$</span> is in <span class="math-container">$T$</span>, the images <span class="math-container">$f\left( U_{\alpha} \right)$</span> are in <span class="math-container">$T'$</span>.</p>
<p>Since <span class="math-container">$T'$</span> is a topology, is the union again in <span class="math-container">$T'$</span> and so we get that <span class="math-container">$O$</span> in <span class="math-container">$T$</span>.</p>
<ol start="3">
<li>The respective argument of 2.for intersection.</li>
</ol>
<p>Is everything correct?</p>
| Community | -1 | <p>So, we made two corrections, and you are well on your way!</p>
<p>What is left is to prove the intersection and union properties. But they follow from the general facts:</p>
<ul>
<li><span class="math-container">$f^{-1}(A\cup B)=f^{-1}(A)\cup f^{-1}(B)$</span></li>
<li><span class="math-container">$f^{-1}(A\cap B)=f^{-1}(A)\cap f^{-1}(B)$</span></li>
</ul>
<p>Note there is another correction here: you want preimages not forward images, because you're starting with the topology <span class="math-container">$T'$</span> on <span class="math-container">$Y$</span>.</p>
<p>That <span class="math-container">$T$</span> is the coarsest topology that makes <span class="math-container">$f$</span> continuous is clear, because if you take out even one open set, the definition fails.</p>
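<p><em>(Added sanity check, not part of the original answer.)</em> The two preimage identities are easy to test on a finite example; recall that forward images satisfy the union identity but in general only $f(A\cap B)\subseteq f(A)\cap f(B)$, which is exactly why preimages are the right tool here:</p>

```python
# Check f^{-1}(A u B) = f^{-1}(A) u f^{-1}(B) and f^{-1}(A n B) = f^{-1}(A) n f^{-1}(B).
X = range(10)
f = lambda x: x % 3  # an arbitrary map X -> Y = {0, 1, 2}

def preimage(S):
    return {x for x in X if f(x) in S}

A, B = {0}, {0, 2}
assert preimage(A | B) == preimage(A) | preimage(B)
assert preimage(A & B) == preimage(A) & preimage(B)
print("preimage identities hold on this example")
```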
|
301,038 | <p>If you want to show that a sequence $(a_{n})$ in $\mathbb{R}$ is convergent, when is it sufficient to show that there is a number $b\in\mathbb{R}$ such that
$$ \liminf a_{n} \geq b \geq \limsup a_{n}$$</p>
<p>In particular, I have a situation where my sequence is bounded and I wanted to use this approach, but I'm not sure I really understand what is going on, and why (or whether) this works.</p>
<p>Thanks for any illumination!</p>
| Amr | 29,267 | <p>$$\forall n\in\mathbb{Z}^+[\inf\{a_m|m\geq n\}\leq\sup\{a_m|m\geq n\}]$$
Thus:
$$\liminf a_n\leq\limsup a_n$$</p>
<p>Now if you can show that $\limsup a_n\leq\liminf a_n$, then this inequality and the previous one show that $\liminf a_n=\limsup a_n$.</p>
<p>The last equation can be used to show that $\lim a_n$ exists.</p>
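<p><em>(Added illustration, not part of the original answer.)</em> Numerically, the tail infima and suprema of a convergent sequence squeeze together, as in this example with $a_n=(-1)^n/n$:</p>

```python
# Tail inf/sup of a_n = (-1)^n / n: both tend to 0, so liminf = limsup = lim = 0.
a = [(-1) ** n / n for n in range(1, 2001)]
for N in (1, 10, 100, 1000):
    tail = a[N - 1:]
    print(N, min(tail), max(tail))  # inf and sup of {a_m : m >= N}
```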
|
2,096,296 | <p>Recently I discovered that $\sum_{n=1}^\infty \frac{1}{n^2}=\frac{π^2}{6}$, and we know that sum of reciprocals of naturals to the first power diverges to infinity. So I was wondering just out of curiosity, whether there was a number between $1$ and $2$ where this sum of reciprocals raised to that power started diverging. Is there such a number? </p>
| Simply Beautiful Art | 272,831 | <p>We have the integral test, which gives</p>
<p>$$\int_1^\infty\frac1{x^s}\ dx\le\sum_{n=1}^\infty\frac1{n^s}<1+\int_1^\infty\frac1{x^s}\ dx$$</p>
<p>This quickly shows that the sum converges if $s>1$ and diverges for $s\le1$.</p>
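<p><em>(Added numeric check, not part of the original answer.)</em> For $s=2$ a partial sum indeed sits between the two integral bounds $\frac1{s-1}=1$ and $1+\frac1{s-1}=2$:</p>

```python
# Partial sum of sum 1/n^s for s = 2, compared with the integral-test bounds.
s = 2.0
partial = sum(1 / n ** s for n in range(1, 200_001))
lower, upper = 1 / (s - 1), 1 + 1 / (s - 1)
assert lower <= partial < upper
print(partial)  # close to pi^2/6 ~ 1.6449
```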
|
109,307 | <p><strong>Problem.</strong> Let $d=(a,b)$, $b=\beta d$ and $n>1$. If $\beta$ is an odd number, prove that $(n^a+1,n^b-1)\le 2$.</p>
<p><em>Solution (from the book).</em> Each common divisor of the numbers $n^a+1$ and $n^b-1$ has to be a divisor of their sum $n^a+n^b=n^b(n^{a-b}+1)$, that is, it has to be a common divisor of $n^b-1$ and $n^{a-b}+1$ (we assume $a>b$). Continuing in this way, we conclude that any common divisor of the numbers $n^a+1$ and $n^b-1$ has to be a divisor of the number $n^d+1$. Let $x=n^d$, so that $n^b-1=x^\beta-1$ with $\beta$ odd. Dividing $n^b-1$ by $n^d+1$ means dividing $x^\beta-1$ by $x+1$. The remainder of this division is $-2$, and this means that the numbers $n^a+1$ and $n^b-1$ cannot have a common divisor greater than 2. $\square$</p>
<p>I'm horrible at number theory, because it always seems too abstract to me, and solutions always seem kind of forced, as if the solution was written first, and then the problem.</p>
<p>So I've decided to start from the beginning of my <em>Introduction to number theory</em> book, and this problem was supposed to be an easy example of previously learned.</p>
<p>There is a bunch of things I don't understand in this proof, beginning with "it has to be a common divisor of $n^b-1$ and $n^{a-b}+1$". After that, I understand literally nothing. I wish someone could write a more detailed explanation, but less formal, so that it can include many words and explanations, and examples, too.</p>
<p>Also, are there any hints/directions you could give me that I could follow when encountering number theory problems? What am I supposed to see, what am I supposed to look for, and how deep do I have to go to prove something? Sometimes things sound pretty obvious to me, yet they are explained at length, and some things, such as the step I mentioned not understanding, are simply passed over without a word of explanation. </p>
<p>I hope I'm not asking too much.</p>
| André Nicolas | 6,312 | <p>The proof you were given is unnecessarily convoluted. In particular, the first part is totally unnecessary! We give an alternate proof in great detail. </p>
<p>We use congruence notation, because it is convenient. In a remark at the end, we show how you can avoid congruence notation, if you really want to, which you shouldn't.</p>
<p>We want to show that any (positive) common divisor $m$ of $n^a+1$ and $n^b-1$ is $\le 2$. Let $a=\alpha d$ and $b=\beta d$.</p>
<p>Because $m$ divides $n^{\alpha d}+1$, we have
$$n^{\alpha d}\equiv -1 \pmod{m}.$$
Take the $\beta$-th power of both sides. Because $\beta$ is odd, we conclude that
$$n^{\alpha\beta d}=(n^{\alpha d})^\beta \equiv (-1)^\beta\equiv -1 \pmod m.\qquad\qquad(\ast)$$
Similarly, and more easily, because $m$ divides $n^{\beta d}-1$, we have
$$n^{\beta d}\equiv 1 \pmod{m}.$$
Take the $\alpha$-th power of both sides. We conclude that
$$n^{\alpha\beta d}=(n^{\beta d})^\alpha \equiv 1^\alpha\equiv 1 \pmod m.\qquad\qquad(\ast\ast)$$</p>
<p>Compare $(\ast)$ and $(\ast\ast)$. We have simultaneously $n^{\alpha\beta d}\equiv -1 \pmod m$ and $n^{\alpha\beta d}\equiv 1 \pmod m$. It follows that $-1\equiv 1\pmod m$, that is, $2\equiv 0 \pmod m$. So we conclude that $m$ divides $2$, which is what we wanted to show.</p>
<p>Note that if $n$ is even, then $n^a+1$ and $n^b-1$ are both odd, so in that case, since their $\gcd$ divides $2$, we conclude that the $\gcd$ is actually $1$. Similarly, if $n$ is odd, then $n^a+1$ and $n^b-1$ are both even, so since their $\gcd$ divides $2$, it must be $2$.</p>
<p><strong>Remark:</strong> We show how to dispense with congruence notation. We first used it to show that since $n^{\alpha d}\equiv -1 \pmod m$, we can conclude that<br>
$(n^{\alpha d})^n \equiv -1 \pmod m$. </p>
<p>Without the congruence notation, we will prove that if $\beta$ is odd, and if $m$ divides $n^{\alpha d}+1$, then $m$ divides $(n^{\alpha d}+1)^\beta+1$. To see that nothing much is involved, let $y=n^{\alpha d}$. We want to show that if $m$ divides $y+1$, then $m$ divides $y^\beta+1$. This is easy, for if $\beta$ is odd, then
$$y^\beta+1=(y+1)(y^{\beta-1}-y^{\beta-2}+y^{\beta-3}-y^{\beta-4}+y^{\beta -5}+\cdots +1).$$</p>
<p>Showing the other thing that we needed, namely that if $m$ divides $n^{\beta d}-1$, then $m$ divides $(n^{\beta d})^\alpha-1$ is even easier, there is no alternation of plus signs and minus signs. </p>
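<p><em>(Added brute-force check, not part of the original answer.)</em> The statement, including the parity remark, can be tested for small values:</p>

```python
from math import gcd

# If beta = b/gcd(a,b) is odd, then gcd(n^a + 1, n^b - 1) is 1 for even n and 2 for odd n.
for n in range(2, 8):
    for a in range(1, 9):
        for b in range(1, 9):
            d = gcd(a, b)
            if (b // d) % 2 == 1:  # beta is odd
                g = gcd(n ** a + 1, n ** b - 1)
                assert g == (2 if n % 2 == 1 else 1)
print("checked n in 2..7 and a, b in 1..8 with beta odd")
```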
|
3,020,825 | <p>Let $A$ be a subset of <span class="math-container">$\mathbb{R}^P$</span> and <span class="math-container">$ x \in \mathbb{R}^P$</span>; define <span class="math-container">$d(x,A) = \inf \{ d(x,y) : y \in A\}$</span>. There exists a point <span class="math-container">$y_0 \in A $</span> with <span class="math-container">$d(y_0,x) = d(x,A)$</span> if </p>
<p>Choose the correct option:</p>
<p><span class="math-container">$a)$</span> <span class="math-container">$A$</span> is any closed non empty subset of <span class="math-container">$\mathbb{R}^P$</span></p>
<p><span class="math-container">$b)$</span> <span class="math-container">$A$</span> is any non empty subset of <span class="math-container">$\mathbb{R}^P$</span></p>
<p><span class="math-container">$c)$</span> <span class="math-container">$A$</span> is any non empty compact subset of <span class="math-container">$\mathbb{R}^P$</span></p>
<p><span class="math-container">$d)$</span> <span class="math-container">$A$</span> is any non empty bounded subset of <span class="math-container">$\mathbb{R}^P$</span></p>
<p>My attempt: I think option a) is correct, because if A is closed then <span class="math-container">$x \in \bar A= A,$</span> that is, <span class="math-container">$d(y_0,x) = d(x,A)=0$</span></p>
<p>I don't know about the other options; please help me.</p>
| Noble Mushtak | 307,483 | <p>Primitive recursion allows us to define <span class="math-container">$F(x+1)$</span> in terms of both <span class="math-container">$x$</span> and <span class="math-container">$F(x)$</span> like this:</p>
<p><span class="math-container">$$F(0)=0$$</span>
<span class="math-container">$$F(S(x))=g(x,F(x))$$</span></p>
<p>Now, we need to figure out what <span class="math-container">$g(x, y)$</span> is, where <span class="math-container">$y=F(x)=\lfloor\sqrt{x}\rfloor$</span> and <span class="math-container">$g(x,y)=F(S(x))=\lfloor\sqrt{x+1}\rfloor$</span>. In this case, it is pretty easy to show:</p>
<p><span class="math-container">$$\lfloor\sqrt {x}\rfloor \leq \lfloor \sqrt {x+1} \rfloor \leq \lfloor \sqrt{x}\rfloor +1\rightarrow y \leq g(x,y)\leq y+1$$</span></p>
<p>Thus, once we know <span class="math-container">$y=F(x)$</span>, we simply need to test if <span class="math-container">$\lfloor\sqrt{x+1}\rfloor$</span> is <span class="math-container">$y$</span> or <span class="math-container">$y+1$</span>. One way to do this is by testing if <span class="math-container">$\lfloor \sqrt{x+1}\rfloor \geq y+1$</span>, which, combined with the above inequality, would confirm <span class="math-container">$\lfloor\sqrt{x+1}\rfloor=y+1$</span>. Otherwise, if the inequality failed, the above inequality would confirm <span class="math-container">$\lfloor\sqrt{x+1}\rfloor=y$</span>.</p>
<p>Now, <span class="math-container">$\lfloor \sqrt{x+1}\rfloor \geq y+1$</span> can not be tested since we don't know what <span class="math-container">$\lfloor \sqrt{x+1}\rfloor$</span> is. However, if we square both sides, we can instead test <span class="math-container">$x+1 \geq (y+1)^2$</span>. In order to make this inequality more strict, since we are dealing with the integers, we can instead say <span class="math-container">$x+2 > (y+1)^2$</span>. Thus, if <span class="math-container">$x+2 > (y+1)^2$</span>, then <span class="math-container">$\lfloor \sqrt{x+1} \rfloor=g(x,y)=y+1$</span>. Otherwise, <span class="math-container">$\lfloor \sqrt{x+1} \rfloor=g(x,y)=y$</span>. Therefore, <span class="math-container">$g(x,y)$</span> is in the form of an if loop, like this:</p>
<p><span class="math-container">$$g(x,y)=IF(h(x, y), S(I_{2,2}(x, y)), I_{2,2}(x,y))$$</span></p>
<p>The first argument, <span class="math-container">$h(x, y)$</span> tests if <span class="math-container">$(y+1)^2 < x+2$</span>: Returns <span class="math-container">$0$</span> for false, and a positive number if true. The second argument is <span class="math-container">$y+1$</span> and the third argument is <span class="math-container">$y$</span>. Thus, <span class="math-container">$IF$</span> is a primitive recursive which chooses the second argument if the first argument is positive and chooses the third argument if the first argument is zero. This is pretty easy to define as a primitive recursive function:</p>
<p><span class="math-container">$$IF(0, b, c)=I_{2,2}(b, c)$$</span>
<span class="math-container">$$IF(S(y), b, c)=I_{4,3}(y, IF(y, b, c), b, c)$$</span></p>
<p>Finally, we need to define <span class="math-container">$h$</span>, a primitive recursive function that tests if <span class="math-container">$(y+1)^2 < x+2$</span>. From <a href="https://en.wikipedia.org/wiki/Primitive_recursive_function#Subtraction" rel="nofollow noreferrer">Wikipedia</a>, we have a primitive recursive function <span class="math-container">$SUB(a, b)$</span> which returns a positive number <span class="math-container">$b-a$</span> if <span class="math-container">$a < b$</span> and <span class="math-container">$0$</span> otherwise. Thus, <span class="math-container">$h(x,y)=SUB((y+1)^2,x+2)$</span>:</p>
<p><span class="math-container">$$h(x, y)=SUB(MULT(S(I_{2,2}(x,y)), S(I_{2,2}(x,y))), S(S(I_{2,1}(x,y))))$$</span></p>
<p>Here, I've used <span class="math-container">$MULT(x,y)$</span> which finds <span class="math-container">$x\cdot y$</span> in the integers, but it is well-known that multiplication is primitive recursive. Thus, we have finally defined <span class="math-container">$F(x)$</span> completely as a primitive recursive function.</p>
<p>I know this definition is pretty long and kind of confusing, but I think it goes to show that even simple functions like <span class="math-container">$\lfloor \sqrt x \rfloor$</span> can get pretty complicated when you break it down like this. Also, I never explicitly used the composition operator, but I did use the composition rule when I defined <span class="math-container">$g(x,y)$</span> and <span class="math-container">$h(x,y)$</span>, so composition is definitely present in this definition. If you have any questions, feel free to comment below and ask. In any case, I hope this explanation helps you understand primitive recursion!</p>
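<p>For what it's worth, here is a direct (informal) Python transcription of the definition above — the names <code>SUB</code>, <code>IF</code>, <code>h</code>, <code>g</code>, <code>F</code> mirror the answer, with the primitive recursion unrolled as a loop — checked against <code>math.isqrt</code>:</p>

```python
import math

def SUB(a, b):
    # truncated subtraction: b - a if a < b, else 0 (as on Wikipedia)
    return b - a if a < b else 0

def IF(cond, b, c):
    # chooses b when cond > 0 and c when cond == 0
    return b if cond > 0 else c

def h(x, y):
    # positive exactly when (y + 1)^2 < x + 2
    return SUB((y + 1) * (y + 1), x + 2)

def g(x, y):
    # F(x + 1) in terms of x and y = F(x)
    return IF(h(x, y), y + 1, y)

def F(x):
    # primitive recursion F(0) = 0, F(S(x)) = g(x, F(x)), unrolled as a loop
    y = 0
    for k in range(x):
        y = g(k, y)
    return y

assert all(F(x) == math.isqrt(x) for x in range(500))
print("F agrees with floor(sqrt(x)) on 0..499")
```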
|
1,017,320 | <p>Assume ZFC. Let $B\subseteq\mathbb R$ be a set that is not Borel-measurable. Clearly, $B$ must be uncountable, since countable sets are always Borel, being countable unions of singletons (which are closed, hence Borel).</p>
<p><strong>Question:</strong> can one conclude that $B$ necessarily has the cardinality of the continuum <em>without</em> assuming either the continuum hypothesis or the negation thereof?</p>
<p>A possibly related result is that any $\sigma$-algebra that contains infinitely many <em>sets</em> must necessarily have at least the cardinality of the continuum. This result is independent of the continuum hypothesis.</p>
| Francis Adams | 29,633 | <p>You can still have an idea of the cardinality of sets that are definable in a more general sense than being Borel. Say a set is analytic $(\bf{\Sigma}^1_1)$ if it is the continuous image of a Borel set, coanalytic $(\bf{\Pi^1_1})$ if its complement is analytic, and $\bf{\Sigma}^1_2$ if it is the continuous image of a coanalytic set. </p>
<p>As is the case with Borel sets, analytic sets have the perfect set property: either they are countable or they contain a perfect subset. Since perfect sets have size $\mathfrak{c}$, analytic sets satisfy the continuum hypothesis.</p>
<p>The result for $\bf{\Sigma}^1_2$ sets (which include coanalytic sets) is close, but not quite CH. These sets can have cardinality $\aleph_0, \aleph_1$ or $2^{\aleph_0}$, for the reason that they can be expressed as the union of $\aleph_1$ Borel sets.</p>
|
1,037,632 | <p>How to find the last 2 digits of $2014^{2001}$? What about the last 2 digits of $9^{(9^{16})}$?</p>
| Swapnil Tripathi | 117,387 | <p>Finding the last two digits is equivalent to finding</p>
<p>$2014^{2001}\pmod{100}$.</p>
<p>Note $2014\equiv 14\pmod{100}$</p>
<p>Finding this requires using $100=25\times 4$ (coprime factors)</p>
<p>Is there a common solution to the system below?</p>
<p>$x\equiv 2014^{2001}\pmod{25}\equiv 14^{2001}\pmod{25}$</p>
<p>$x\equiv 2014^{2001}\pmod{4}\equiv 2^{2001}\pmod{4}$</p>
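<p>A quick check of this approach in Python (the recombination step at the end is mine): since $\varphi(25)=20$ and $2001\equiv 1 \pmod{20}$, the first congruence gives $14$; the second gives $0$; and the common solution modulo $100$ matches direct modular exponentiation.</p>

```python
r25 = pow(14, 2001, 25)   # phi(25) = 20 and 2001 = 100*20 + 1, so this is 14
r4 = pow(2, 2001, 4)      # 2^2001 is divisible by 4, so this is 0
x = next(t for t in range(100) if t % 25 == r25 and t % 4 == r4)
print(x, pow(2014, 2001, 100))   # 64 64 -- the last two digits are 64
```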
|
15,702 | <p>One way to prove that a field $K$ has no ideals except the entire field and the trivial ideal is to note the fact that every nonzero element $x$ has an inverse. By the definition of an ideal, if a nonzero $x$ is in the ideal then $x^{-1}x$ is, because $x^{-1} \in K$. But now we have that 1 is in the ideal, and so again by the definition of an ideal we have that every element is in the ideal. Therefore it is either the entire field or trivial.</p>
<p>However, this works for any would-be ideal that has a unit; hence my question. I don't see how this coheres particularly with the idea that ideals are generalizations of things like "multiple of $n$", or that we use them to form quotient rings.</p>
<p>Can someone please explain whether this has a deeper meaning or if it's not really important? I think it might have something to do with what is written in the "motivation" section in the <a href="http://en.wikipedia.org/wiki/Ideal_%28ring_theory%29" rel="nofollow">Wikipedia article for ideals</a> but I'm not really sure.</p>
<p>Edit: I do realize that not all subsets without a unit are ideals. Sorry for the confusion.</p>
| Arturo Magidin | 742 | <p>Yes, it's true that any ideal in a ring which contains a unit must be the whole ring. (Also: every left ideal that contains a left-invertible element must be the whole ring).</p>
<p>Now, there are two items here. One is the notion of generalizing "multiples of $n$", and the second is the notion of making quotients.</p>
<p>Let's start with the first. The notion that ideals generalize things like "all multiples of $n$" in $\mathbb{Z}$ goes back all the way to Dedekind, who in fact came up with the term "ideal" (it was a counterpart to Kummer's "ideal numbers"; at the risk of touting my own horn, check out the paper with David McKinnon, <a href="http://www.jstor.org/stable/30037491" rel="nofollow noreferrer"><em>Gauss's Lemma for Number Fields</em></a>, Amer. Math. Monthly <strong>112</strong> no. 5 (2005) pp. 385-416). Dedekind was looking at rings like $\mathbb{Z}[\sqrt{-5}]$, and he wanted something along the lines of unique factorization; Kummer had done this by introducing something called "ideal numbers", which were like the primes for integers but which did not actually occur in the ring in order to explain things like $2\times 3 = (1+\sqrt{-5})(1-\sqrt{-5})$, even though none of $2$, $3$, $1+\sqrt{-5}$ and $1-\sqrt{-5}$ can be written as a product of two numbers (from $\mathbb{Z}[\sqrt{-5}]$), neither of which is $1$ or $-1$. The idea was that there were these "ideal numbers" $\alpha$, $\beta$, and $\gamma$, which had the property that $\alpha\beta=1+\sqrt{-5}$, $\alpha\gamma=1-\sqrt{-5}$, $\alpha^2 = 2$, and $\beta\gamma=3$. Instead of "inventing" these numbers, Dedekind considered a collection of all numbers which would be "the multiples of $\alpha$" <em>if</em> $\alpha$ was actually an element of $\mathbb{Z}[\sqrt{-5}]$. A necessary and sufficient condition for a collection of elements of $\mathbb{Z}[\sqrt{-5}]$ to qualify as "all multiples of $x$" (where $x$ was either an ideal number or an actual number) was that the collection be nonempty, closed under sums and differences, and that if $a$ was in the collection and $r$ was in $\mathbb{Z}[\sqrt{-5}]$, then $ra$ was in the collection. They were "ideals" because they were playing the role of ideal numbers (here, "ideal" means <em>existing in fancy or imagination</em>). 
This generalized to any ring contained in a finite extension of $\mathbb{Q}$ and containing all integral elements (roots of monic polynomials with integer coefficients), called the "number field case"; later Dedekind and Weber developed a similar theory in what is called the "function field" case. Later, when rings were abstracted and generalized from the number field and function field cases by Artin and by Noether, the conditions were extended but they were still based in the notion that the ideals corresponded to "all multiples of" an ideal number in the number field.</p>
<p>Now, there is no problem with the fact that sometimes these collections are <em>everything</em>: this happens whenever the collection satisfies the conditions and contains a unit. Some collections do, some don't. In fact, that is one way in which one can test to see if an element is a unit: if the ideal it generates is the whole ring, then it is a unit (this works in the case of commutative rings with unit, not in more general rings). This actually makes sense because <em>everything</em> is a multiple of a unit. </p>
<p>For rings like $\mathbb{Z}$, and to some extent number rings as above, <em>every</em> ideal consists exactly of "all the multiples of $x$" for some $x$ (though in the number ring, the $x$ may be an "ideal number"). For more general (commutative) rings (with unit), if you have a family of elements $x_1,\ldots,x_n$, then the elements of the ideal <em>generated</em> by $x_1,\ldots,x_n$ are exactly those elements that are "$R$-linear combinations" of $x_1,\ldots,x_n$; that is, all elements that can be expressed as $r_1x_1+\cdots+r_nx_n$ for some $r_1,\ldots,r_n\in R$; this <em>includes</em> all multiples of each $x_i$, as well as other elements. If one of the $x_i$ is a unit, then since everything is a multiple of a unit, then everything is in the ideal in question. There are other kinds of ideals (not "finitely generated", as you start dealing with more and more complicated rings).</p>
<p>Now, on to the second question. As it happens, Dedekind also noted that the conditions on the set (nonempty, closed under sums and differences, and closed under products by elements of the ring) were exactly the condition one needed in order to be able to do "congruence modulo" the multiples of $x$, so he also called them "modules" (because you could do modular arithmetic with them). This is essentially the same as talking about quotients. </p>
<p>But one can get to ideals differently if you are interested in quotients. This is essentially the same thing I did in <a href="https://math.stackexchange.com/questions/14282/why-do-we-define-quotient-groups-for-normal-subgroups-only/14315#14315">this answer</a> about normal subgroups. Suppose you want to define an equivalence relation $\sim$ on the ring $R$ so that you can make $R/\sim$ into a ring using the operations $[a]+[b]=[a+b]$ and $[a][b]=[ab]$ (where $[a]$ is the equivalence class of $a$ under $\sim$). It turns out that the only way this can work is if $\sim$ is a subring of $R\times R$ (when we consider $\sim$ as a collection of ordered pairs of elements of $R$, hence a subset of $R\times R$), and that if you let $I$ be the collection of all elements of $R$ that are equivalent to $0$, then $I$ is an ideal of $R$ and the equivalence relation $\sim$ is $a\sim b\Longleftrightarrow a-b\in I$. You can try proving the theorems in the abovementioned answer for rings instead of groups, they are essentially the same arguments.</p>
<p>Again, this has no problem with regards to having units in the ideal: this is what happens if your equivalence relation is just "everything is related to everything", or if your quotient is the one element ring; these are perfectly valid (if somewhat boring) equivalence relations and quotients.</p>
<p>But: in a commutative ring with $1$, an ideal is the whole ring <strong>if and only if</strong> it contains a unit. If there are no units, then it is not the whole ring. A ring is said to be <em>simple</em> if the only ideals are the zero ideal and the whole ring; for commutative rings, the only simple rings are the fields and the zero ring. In the noncommutative case there are simple rings that are not division rings: for example, the set of $n\times n$ matrices over any simple ring (for instance, any field) is simple for every $n$. </p>
<p>Note also that while a <em>proper</em> ideal necessarily does not contain a unit, it is not true that any subset that does not contain a unit is an ideal. The collection of all numbers that are either multiples of $2$ or of $3$ in $\mathbb{Z}$ contains no units, but is not an ideal (it is not even a subgroup). </p>
|
1,761,273 | <p>Good morning. I have a problem with this:</p>
<p>Find the maximum and minimum distances from the origin to the curve <span class="math-container">$$g\left(x,y\right)=5x^{2}+6xy+5y^{2}=8$$</span></p>
<p>I have done this:</p>
<p>Function to optimize: <span class="math-container">$f\left(x,y\right)=x^{2}+y^{2}$</span></p>
<p>Restriction: <span class="math-container">$g\left(x,y\right)=5x^{2}+6xy+5y^{2}=8
$</span></p>
<p>Applying Lagrange multipliers:
<span class="math-container">$\nabla f\left(x,y\right)=\lambda\nabla g\left(x,y\right)$</span></p>
<p>Then,
<span class="math-container">$\nabla f\left(x,y\right)=2x\hat{i}+2y\hat{j}$</span> and <span class="math-container">$\lambda\nabla g(x,y)=\lambda(2x+\frac{6}{5}y)\hat{i}+\lambda\left(2y+\frac{6}{5}x\right)\hat{j}
$</span></p>
<p>Setting up the system of equations:</p>
<p><span class="math-container">$\begin{cases}
2x=(2x+\frac{6}{5}y)\lambda\\
2y=(2y+\frac{6}{5}x)\lambda\\
x^{2}+\frac{6}{5}xy+y^{2}=\frac{8}{5}
\end{cases}$</span></p>
<p>But I have serious problem solving the system. Any suggestions?</p>
| Reynan Henry | 628,467 | <p>Divide the first equation by the second one to get rid of the $\lambda$; then you can solve for $x$ and $y$ with the third equation.</p>
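<p>To spell that out numerically (a sketch; the specific candidate points come from carrying the hint through): dividing the equations gives $y=\pm x$, and substituting into the constraint yields two candidate distances, which a polar-coordinate scan of the whole curve confirms are the extremes.</p>

```python
from math import sqrt, sin, pi, isclose

# Candidates from y = ±x substituted into 5x^2 + 6xy + 5y^2 = 8:
p_min = (sqrt(1/2), sqrt(1/2))   # y = x  branch: 16x^2 = 8, distance 1
p_max = (sqrt(2), -sqrt(2))      # y = -x branch: 4x^2 = 8,  distance 2
for x, y in (p_min, p_max):
    assert isclose(5*x*x + 6*x*y + 5*y*y, 8.0)

# Independent check: with (x, y) = (r cos t, r sin t) the constraint reads
# r^2 (5 + 3 sin 2t) = 8, so the distance is r(t) = sqrt(8 / (5 + 3 sin 2t)).
radii = [sqrt(8 / (5 + 3 * sin(2 * (2 * pi * k / 1000)))) for k in range(1000)]
print(min(radii), max(radii))   # 1.0 and 2.0
```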
|
1,430,447 | <p>sketch the graph of the integrand function and use it to help evaluate the integral.</p>
<p>Evaluate $\int_{-1}^{1} \left(|x|-1\right)\, dx$.</p>
<p><a href="https://i.stack.imgur.com/NY3Ei.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NY3Ei.png" alt="enter image description here"></a></p>
<p>I think I can evaluate the integral; for $x \geq 0$ an antiderivative is</p>
<p>$F(x) = \frac{1}{2}x^{2}-x+C$</p>
<p>but how do I sketch the graph?</p>
| nathan.j.mcdougall | 181,447 | <p>Another method would be to use the graph alone. <a href="https://i.stack.imgur.com/sh6Du.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sh6Du.png" alt="Created using Desmos"></a>
Created using <a href="http://www.desmos.com" rel="nofollow noreferrer">Desmos</a></p>
<p>From the graph, it is clear this is a triangle with base length $2$ and height $-1$. Since we know the integral is the area under the curve, and the area of a triangle is $A=\frac{1}{2}bh$, the integral is:
$$\int_{-1}^1\,(|x|-1)\;\mathrm{d}x=\frac{1}{2}(2)(-1)=\boxed{-1}$$</p>
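<p>The geometric answer is easy to sanity-check with a midpoint-rule sum (nothing here is specific to the method above):</p>

```python
# Midpoint-rule approximation of the signed area of |x| - 1 over [-1, 1].
N = 100000
h = 2.0 / N
total = sum((abs(-1.0 + (i + 0.5) * h) - 1.0) * h for i in range(N))
print(total)   # close to -1.0, the signed triangle area
```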
|
3,270,504 | <blockquote>
<p>Prove that <span class="math-container">$\log|e^z-z|\leq |z|+1$</span> where <span class="math-container">$z\in\mathbb{C}$</span> with <span class="math-container">$|z|\geq e$</span>.</p>
</blockquote>
<p><strong>Background:</strong></p>
<p>This is from a proof that <span class="math-container">$e^z-z$</span> has infinitely many zeroes. At the present stage, we have assumed, for contradiction, that <span class="math-container">$e^z-z$</span> has no zeros.</p>
<p><strong>My attempt:</strong></p>
<p>I assume that the meaning of <span class="math-container">$\log$</span> here is the principal branch of <span class="math-container">$\log$</span>.</p>
<p>We know that <span class="math-container">$|w|\in\mathbb{R} ,\ \forall w\in\mathbb{C}$</span>. Because <span class="math-container">$\log$</span> is increasing in <span class="math-container">$\mathbb{R}^+$</span> and according to the triangle inequality we get <span class="math-container">$$\log|e^z-z|\leq\log(|e^z|+|z|)$$</span> But I'm not sure how to proceed. Thanks.</p>
| Philip Hoskins | 41,421 | <p>Use the definition: <span class="math-container">$e^z = \sum_{n=0}^\infty z^n/n!$</span></p>
<p><span class="math-container">$$
\begin{align}
\vert e^z - z \vert &= \vert 1+z^2/2 + z^3/6 + \cdots \vert \\
&\leq 1+\vert z \vert ^2/2 + \vert z^3 \vert/6 + \cdots\\
&= e^{\vert z \vert} - \vert z \vert \\
&\leq e^{\vert z \vert}\\
&< e^{\vert z \vert + 1}
\end{align}
$$</span></p>
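<p>A quick numerical sanity check of the final inequality over random points with <span class="math-container">$|z|\geq e$</span> (this only tests; it proves nothing):</p>

```python
import cmath
import math
import random

random.seed(0)
for _ in range(10000):
    r = random.uniform(math.e, 50.0)                   # |z| >= e
    z = cmath.rect(r, random.uniform(0.0, 2 * math.pi))
    w = cmath.exp(z) - z
    assert math.log(abs(w)) <= abs(z) + 1.0
print("log|e^z - z| <= |z| + 1 on all samples")
```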
|
3,281,965 | <p>It is known that a way to check whether a number <span class="math-container">$n$</span> is prime, is to check for divisors of <span class="math-container">$n$</span> from <span class="math-container">$2$</span> to <span class="math-container">$\lfloor\sqrt{n}\rfloor$</span>. If we find any divisor, then <span class="math-container">$n$</span> is not prime. If we don't, then we don't need to check for divisors bigger than <span class="math-container">$\sqrt{n}$</span> (and <span class="math-container">$n$</span> is prime).</p>
<p>An "approximation" of this method would be to check for divisors the same way but from <span class="math-container">$2$</span> to <span class="math-container">$log_2n$</span>. If we find any divisor we declare that <span class="math-container">$n$</span> is not prime. If we don't find any divisor we declare that <span class="math-container">$n$</span> is prime. Of course this method will not always give the correct results. My question is:</p>
<p>If the second algorithm declares a number to be prime, what is the probability that this number is actually prime?</p>
| Peter | 82,961 | <p>For very large numbers we can estimate the probability as follows :</p>
<p>Let <span class="math-container">$N$</span> be the given number; then <span class="math-container">$M:=\frac{\ln(N)}{\ln(2)}$</span> is the limit of trial division.</p>
<p>Denote <span class="math-container">$$P:=\prod{\frac{p}{p-1}}$$</span> where <span class="math-container">$p$</span> runs over the primes up to <span class="math-container">$M$</span>. If <span class="math-container">$M$</span> itself is very large, we can approximate this by Mertens' third theorem as <span class="math-container">$$P\approx e^{\gamma}\,\ln(M)$$</span> where <span class="math-container">$\gamma$</span> is the Euler–Mascheroni constant.</p>
<p>Then, the probability that we have a prime, if no factor is found, is roughly <span class="math-container">$$\frac{P}{\ln(N)}$$</span></p>
<p>For numbers <span class="math-container">$N\approx 10^{99}$</span>, we get about <span class="math-container">$4.6$</span>%.</p>
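<p>Numerically, for <span class="math-container">$N \approx 10^{99}$</span>: sieve the primes up to <span class="math-container">$M = \lfloor \log_2 N\rfloor$</span>, take the exact product <span class="math-container">$P = \prod p/(p-1)$</span>, and divide by <span class="math-container">$\ln N$</span> (a straightforward check I added):</p>

```python
from math import log

N_log = 99 * log(10.0)            # ln(10^99)
M = int(N_log / log(2.0))         # trial-division limit, about 328

# Sieve of Eratosthenes up to M.
sieve = [True] * (M + 1)
sieve[0] = sieve[1] = False
for p in range(2, int(M ** 0.5) + 1):
    if sieve[p]:
        for q in range(p * p, M + 1, p):
            sieve[q] = False

P = 1.0
for p in range(2, M + 1):
    if sieve[p]:
        P *= p / (p - 1)

print(P / N_log)   # about 0.046, i.e. the 4.6% above
```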
|
275,526 | <p>What would be an easy example of a sequence of functions defined on a compact interval so that $f_n$ goes to $f$ pointwise but $\sup f_n$ does not go to $\sup f$?</p>
<p>I thought of the usual example we take to show that the limits in integration can't be interchanged when we only have pointwise convergence. Is this correct?</p>
<p>Does $f_n(x)=x^n$ work in this context?
Any comments or hints?</p>
| David Mitra | 18,986 | <p>Consider the functions $f_n$ defined on $[0,1]$, where $f_n$ is the function whose graph consists of the following straight line segments: from $(0,0)$ to $(1/n,1)$, from $(1/n,1)$ to $(2/n,0)$, and from $(2/n,0)$ to $(1,0)$.</p>
<p>Note that $(f_n)$ converges pointwise to the zero function on $[0,1]$.</p>
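<p>In code (the grid check is mine): each $f_n$ has supremum $1$, while at any fixed $x$ the values are eventually $0$, so $\sup f_n = 1 \not\to 0 = \sup f$.</p>

```python
def f(n, x):
    # tent: 0 at 0, up to 1 at 1/n, back to 0 at 2/n, then identically 0
    if x <= 1 / n:
        return n * x
    if x <= 2 / n:
        return 2 - n * x
    return 0.0

xs = [i / 1000 for i in range(1001)]
for n in (5, 50, 500):
    assert max(f(n, x) for x in xs) > 0.99           # sup f_n = 1
assert all(f(n, 0.3) == 0.0 for n in range(7, 100))  # pointwise limit is 0
print("sup f_n stays at 1 while f_n -> 0 pointwise")
```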
|
2,111,841 | <p>A geodesic is a line representing the shortest route between two points on a sphere, for example on the Earth treated here as a perfect sphere.
Two points on Earth at the same latitude can also be connected by an arc of the circle of constant latitude.
Differences between these two lines can be visualized with the use of
<a href="https://academo.org/demos/geodesics/" rel="nofollow noreferrer"> <strong>this Academo program</strong></a> presenting the situation in the context of the map of Earth.</p>
<p><strong>Question:</strong></p>
<ul>
<li>How to calculate <strong>the area</strong> between these two lines?</li>
</ul>
<p><em>(Assume for example that the starting point is $(\alpha, \beta_1)=(45^\circ, -120^\circ)$ and the destination $(\alpha, \beta_2)=(45^\circ, 0^\circ))$ - the arc of constant latitude $45^\circ$ has length $120^\circ$</em>.</p>
<p><a href="https://en.wikipedia.org/wiki/Spherical_trigonometry" rel="nofollow noreferrer"><strong>In Wikipedia</strong></a> <em>a formula for the area of a spherical polygon is presented, but there the polygon is bounded by arcs of geodesics. Is it possible to somehow transform these formulas of spherical geometry to the case of finding the area between a geodesic and an arc of constant latitude?</em></p>
| Andrew D. Hwang | 86,418 | <p><strong>Summary</strong>: If $A$ and $B$ lie on the latitude line at angle $0 < \alpha < \pi/2$ north of the equator on a sphere of unit radius, and at an angular separation $0 < \theta = \beta_{2} - \beta_{1} < \pi$, then the "digon" bounded by the latitude and the great circle arc $AB$ (in blue) has area
\begin{align*}
\pi - \theta \sin\alpha - 2\psi
&= \text{sum of interior angles} - \theta \sin\alpha \\
&= \pi - \theta \sin\alpha - 2\arccos\left(\frac{\sin\alpha(1 - \cos\theta)}{\sqrt{\sin^{2}\theta + \sin^{2}\alpha(1 - \cos\theta)^{2}}}\right).
\end{align*}</p>
<p><a href="https://i.stack.imgur.com/HiHk5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HiHk5.png" alt="The digon bounded by a latitude line and great circle arc"></a></p>
<hr>
<p>If $A$ and $B$ have longitude-latitude coordinates $(0, \alpha)$ and $(\theta, \alpha)$, their Cartesian coordinates (on the unit sphere) are
$$
A = (\cos\alpha, 0, \sin\alpha),\quad
B = (\cos\theta\cos\alpha, \sin\theta \cos\alpha, \sin\alpha).
$$
Let $C = (0, 0, 1)$ be the north pole, $G$ the "gore" (shaded) bounded by the spherical arcs $AC$, $BC$, and the latitude through $A$ and $B$, and $T$ the geodesic triangle with vertices $A$, $B$, and $C$.</p>
<p><strong>Lemma 1</strong>: The area of $G$ is $\theta(1 - \sin\alpha)$.</p>
<p><em>Proof</em>: The spherical zone bounded by the latitude through $A$ and $B$ and containing the north pole has height $h = 1 - \sin\alpha$ along the diameter through the north and south poles. By a <a href="http://mathworld.wolfram.com/ArchimedesHat-BoxTheorem.html" rel="nofollow noreferrer">theorem of Archimedes</a>, this zone has area $2\pi h = 2\pi(1 - \sin\alpha)$. The area of the gore $G$, which subtends an angle $\theta$ at the north pole, is
$$
(\theta/2\pi)2\pi(1 - \sin\alpha) = \theta(1 - \sin\alpha).
$$</p>
<p><strong>Lemma 2</strong>: The area of $T$ is $\theta - \pi + 2\arccos\dfrac{\sin\alpha(1 - \cos\theta)}{\sqrt{\sin^{2}\theta + \sin^{2}\alpha(1 - \cos\theta)^{2}}}$.</p>
<p><em>Proof</em>: If $\psi$ denotes the interior angle of $T$ at either $A$ or $B$, the area of $T$ is the angular defect, $\theta + 2\psi - \pi$. To calculate $\psi$, note that the unit vector $n_{1} = \frac{A \times C}{\|A \times C\|} = (0, -1, 0)$ is orthogonal to the great circle $AC$, the unit vector
$$
n_{2} = \frac{A \times B}{\|A \times B\|}
= \frac{(-\sin\theta \sin\alpha, \sin\alpha(\cos\theta - 1), \cos\alpha \sin\theta)}{\sqrt{\sin^{2}\theta + \sin^{2}\alpha(1 - \cos\theta)^{2}}}
$$
is orthogonal to the great circle $AB$, and
$$
\cos\psi = n_{1} \cdot n_{2}
= \frac{\sin\alpha(1 - \cos\theta)}{\sqrt{\sin^{2}\theta + \sin^{2}\alpha(1 - \cos\theta)^{2}}}.
$$
This completes the proof of Lemma 2.</p>
<p>The area of the digon is the difference,
\begin{align*}
A &= \theta(1 - \sin\alpha) - (\theta + 2\psi - \pi)
= \pi - \theta \sin\alpha - 2\psi \\
&= \pi - \theta \sin\alpha - 2\arccos\left(\frac{\sin\alpha(1 - \cos\theta)}{\sqrt{\sin^{2}\theta + \sin^{2}\alpha(1 - \cos\theta)^{2}}}\right).
\end{align*}</p>
<p>When $\alpha = 0$, the area vanishes for $0 < \theta < \pi$ (because the latitude through $A$ and $B$ coincides with the great circle arc), while if $\alpha$ is small and positive, the area is close to $\pi$ when $\theta = \pi$ (because $A$ and $B$ are nearly antipodal and the great circle arc passes through the north pole).</p>
<p><a href="https://i.stack.imgur.com/4CWeO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4CWeO.png" alt="Graph of the area of the digon bounded by a latitude line and great circle"></a></p>
|
1,694,991 | <p>I tried doing $u$-substitution and got $-20e$ as my final answer, but I think the correct answer is just $20$. I'm not sure what I did wrong, but probably had to do with plugging in infinity... could someone explain the process of solving this integral?</p>
| Michael Hardy | 11,667 | <p>\begin{align}
\int_0^\infty \frac x{20}e^{-x/20} \, dx & = 20 \int_0^\infty \left(\frac x{20}\right) e^{-x/20} \, \left(\frac{dx}{20}\right) \\[6pt]
& = 2\underbrace{0 \int_0^\infty u e^{-u}\,du}_\text{substitution} = \underbrace{20 \int u\,dv = 20\left( uv - \int v\,du \right)}_\text{integration by parts} \\[10pt]
& = \left[-20 u e^{-u} \vphantom{\frac 11} \right]_0^\infty - \int_0^\infty (-e^{-u})\,du = \text{etc.}
\end{align}</p>
<p>To evaluate the part in brackets, you need $\lim\limits_{u\to\infty} ue^{-u} = \lim\limits_{u\to\infty} \dfrac u {e^u}$. L'Hopital's rule handles that quickly.</p>
|
3,299,863 | <p>Let <span class="math-container">$f_{1}(x) = e^{x^5}$</span> and <span class="math-container">$f_{2}(x) = e^{x^3}$</span>. Let <span class="math-container">$g(x) = f_{1}f_{2}$</span>. Find <span class="math-container">$g^{(18)}(0)$</span>.</p>
<p>By series expansion at <span class="math-container">$x = 0$</span>:</p>
<p><span class="math-container">$f_{1}(x) = \sum_{k \ge 0} {x^{5k} \over k! }$</span> and <span class="math-container">$f_{2}(x) = \sum_{m \ge 0}{x^{3m} \over {m!}}$</span>, then</p>
<p><span class="math-container">$$g(x) = \sum_{k, m \ge 0}{x^{5k + 3m} \over {m!k!}}.$$</span></p>
<p>Substituting <span class="math-container">$5k + 3m = n$</span> we get <span class="math-container">$g(x) = \sum_{n \ge 0} \left( \sum_{5k + 3m = n}{1 \over {m!k!}} \right) x^{n} $</span>.</p>
<p>Solving diophantine equation <span class="math-container">$5k + 3m = 18$</span>, there are two ordered pairs of non - negative integers <span class="math-container">$(k, m)$</span>: <span class="math-container">$(3, 1), (0, 6)$</span>. Thus, <span class="math-container">$g^{18}(0) = 18! \left[ { {1 \over {3!1!}} + {1 \over {0!6!}}} \right].$</span></p>
<p>Is there a general method for finding <span class="math-container">$n^{th}$</span> derivative of functions <span class="math-container">$\prod_{1 \le i \le n}f_{i}$</span>? Obviously, if there are no solutions then a derivative of a function at some point will be <span class="math-container">$0$</span>. But what can be said when there are infinitely many solutions?</p>
<p><strong>UPD: 01.08.2019</strong></p>
<p>Consider function
<span class="math-container">$f(x) = e^{1 \over 1 - x}$</span>. Then by expansion at 0:
<span class="math-container">$$f(x) = e\sum_{n \ge 0} \sum_{x_{1} + 2x_{2} + \cdots = n} {{1} \over {x_{1}!x_{2}!\cdots}} x^{n},$$</span>
which gives an infinite diophantine equation. More general, it can be applied to functions of a form: <span class="math-container">$f(x)^{g(x)}.$</span> Referring to my early question, what can be said about a derivative at <span class="math-container">$x = 0$</span> of a such function?</p>
| Con | 682,304 | <p>Not sure whether this is the kind of answer that you wanted or expected.</p>
<p>You only need a topology to talk about convergence in general. To get a nice and convenient topology on non-standard sets <span class="math-container">${^*}Y$</span>, one usually takes superstructures (actually in a way such that <span class="math-container">${^*}Y$</span> is an enlargement) and defines any union of sets from <span class="math-container">${^*}\tau$</span> as open, where <span class="math-container">$\tau$</span> is a topology on the set <span class="math-container">$Y$</span>.</p>
<p>What you do in non-standard analysis (instead of non-standard topology) is rather to reformulate analytical notions like convergence in terms of other notions. For example we have that <span class="math-container">$$\lim_{n \rightarrow \infty} a_n = b$$</span> if and only if <span class="math-container">$\text{st}(a_N) = b$</span> for all <span class="math-container">$N \in
{^*}\mathbb{N}\setminus \mathbb{N}$</span>, where "<span class="math-container">$\text{st}$</span>" denotes taking the standard part.</p>
|
1,282,592 | <p>I am often asked to prove properties of regular bipartite graphs, and beyond the two parts having equal size nothing seems obvious. Are these graphs more intuitive than they first seem?</p>
<p>In particular, right now I can't work out why an r-regular bipartite graph is r-edge-colourable.</p>
<p>Thanks</p>
| Exodd | 161,426 | <p>You can reduce this problem (by induction on $r$) to finding a perfect matching, i.e. a matching that covers all the nodes. </p>
<p>Thanks to <a href="http://en.wikipedia.org/wiki/Hall%27s_marriage_theorem" rel="nofollow">Hall's marriage theorem</a>, you can prove that one exists by verifying the condition that for every subset $S$ of the nodes on the left, the number of nodes on the right connected to $S$ is greater than or equal to the cardinality of $S$, and that is easy to see thanks to the regularity of the graph. </p>
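<p>This induction can be made concrete. The sketch below is my addition, not part of the original answer: it repeatedly extracts a perfect matching with Kuhn's augmenting-path algorithm and gives each matching one colour; Hall's condition, checked as above, guarantees a perfect matching exists at every step.</p>

```python
def perfect_matching(adj, n_left, n_right):
    """adj[u] = list of right-vertices adjacent to left-vertex u."""
    match_right = [-1] * n_right          # match_right[v] = left partner of v

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    for u in range(n_left):
        assert try_augment(u, set())      # never fails for regular bipartite graphs
    return {match_right[v]: v for v in range(n_right)}   # left -> right

def edge_colour_regular_bipartite(adj, r):
    """Proper r-edge-colouring of an r-regular bipartite graph, as {(u, v): colour}."""
    adj = [list(nbrs) for nbrs in adj]    # work on a copy; matched edges get removed
    n = len(adj)
    colouring = {}
    for colour in range(r):
        matching = perfect_matching(adj, n, n)
        for u, v in matching.items():
            colouring[(u, v)] = colour
            adj[u].remove(v)              # this edge is coloured; delete it
    return colouring
```

<p>Running it on the 3-regular graph $K_{3,3}$ yields a proper 3-edge-colouring: the three edges at each vertex receive three distinct colours.</p>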
|
299,140 | <p>Is there a closed form sum of </p>
<p>$\sum_{k=0}^{\infty} \frac{x^k}{(k!)^2}$</p>
<p>It is trivial to show that it is less than $e^x$ but is there a tighter bound?</p>
<p>Thanks</p>
| Ryan O'Donnell | 658 | <p>The first way I try to solve questions like this is to "ask Maple" (or Mathematica). If you have access to, say, Maple, then you can type</p>
<p>"sum(x^k/k!^2, k=0..infinity)"</p>
<p>and it will report BesselI(0,2*sqrt(x)). [It's impossible to tell from the font, but that is "Bessel" followed by capital-i.] If you're me, that's when you search for Bessel functions on Wikipedia. </p>
<p>And furthermore, you can then type "asympt(BesselI(0,2*sqrt(x)),x)", and it will report that the leading term in the asymptotic expansion is indeed $\frac12 \frac{e^{2\sqrt{x}}}{\sqrt{\pi} x^{1/4}}$, as others have said. I'm not sure what resource will immediately explain "how Maple knew that", but at least one knows the answer at that point.</p>
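<p>A quick numerical cross-check (my addition, not from the original answer): summing the series directly and comparing with the leading asymptotic term $\frac12 \frac{e^{2\sqrt{x}}}{\sqrt{\pi}\, x^{1/4}}$ shows the ratio drifting down toward $1$ as $x$ grows.</p>

```python
import math

def series(x, terms=200):
    """Partial sum of sum_{k>=0} x^k / (k!)^2."""
    total, term = 0.0, 1.0                # term = x^k / (k!)^2
    for k in range(terms):
        total += term
        term *= x / ((k + 1) ** 2)
    return total

def asymptotic(x):
    """Leading asymptotic term of BesselI(0, 2*sqrt(x))."""
    return math.exp(2 * math.sqrt(x)) / (2 * math.sqrt(math.pi) * x ** 0.25)

for x in (10.0, 100.0, 400.0):
    print(x, series(x) / asymptotic(x))   # ratios decrease toward 1
```

<p>For instance, $x=1$ gives $\sum_k 1/(k!)^2 = I_0(2) \approx 2.27959$.</p>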
|
15,093 | <p>For example, I am confident that very few students majoring in pure mathematics can write a complete proof to the <a href="https://en.wikipedia.org/wiki/Abel%E2%80%93Ruffini_theorem" rel="noreferrer">Abel–Ruffini theorem</a> (there is no algebraic solution to general polynomial equations of degree five or higher with arbitrary coefficients) by the time of their graduation. I suspect many students with a Master's degree or Doctorate in pure mathematics could not prove this theorem either. They may know the conclusion, but may not be able to sketch an idea of the proof, let alone give a complete proof.</p>
<p>My question is: should we educate pure mathematics major students in such a way that they should know how to prove most of the classical results in mathematics such as the Abel–Ruffini theorem and the <a href="https://en.wikipedia.org/wiki/Fundamental_theorem_of_algebra" rel="noreferrer">Fundamental Theorem of Algebra</a> before getting their bachelor's degree, or at least their master's Degrees?</p>
| Nicola Ciccoli | 465 | <p>It just depends on what you mean by "knowing very well".</p>
<p>To me "knowing very well" means: if you give me 1 hour I can tell you the basic ingredients that go in the proof (and the reason why the result is relevant) and if you give me 1 full day I can sketch a reasonably detailed proof.</p>
<p>This means:</p>
<ul>
<li>I know how to fit the result in an area of math;</li>
<li>I know in which books I'd look for it (which is very personal and depends on your own background);</li>
<li>I am capable of looking back at the relevant prerequisites and recovering them quite quickly.</li>
</ul>
<p>Then yes, that's what we hope from our students. And it is already a quite high level of demand.</p>
<p>(as for the example, I got a PhD in math without ever being exposed to a proof of Abel-Ruffini, so "classical" means different things to different people. I think to grasp the proof of it I'll probably need a week) </p>
|
199,842 | <p>I understand the reasoning behind $\pi r^2$ for a circle area however I'd like to know what is wrong with the reasoning below:</p>
<p>The area of a square is like a line, the height (one dimension, length) placed several times next to each other up to the square all the way until the square length thus we have height x length for the area.</p>
<p>The area of a circle could be thought of as a line (the radius) placed next to itself enough times to make up a circle. Given that the circumference of a circle is $2 \pi r$, we would, by the same reasoning as above, have $2 \pi r^2$. Where is the problem with this reasoning?</p>
<p>Lines placed next to each other would only go straight like a rectangle so you'd have to spread them apart in one of the ends to be able to make up a circle so I believe the problem is there somewhere. Could anybody explain the issue in the reasoning above?</p>
| copper.hat | 27,978 | <p>Here is a similar approach. Split the radius into $n$ equal parts, and form concentric circles of radius $0, \frac{r}{n}, \frac{2r}{n},...,r$. Think of the cross section of an onion. Then estimate the area by unrolling each circle, approximating its area by a rectangular strip of length given by the outer radius and width $\frac{r}{n}$ and adding the lot together. Then let $n \to \infty$ to make the approximation better.</p>
<p>This gives $A \approx 2 \pi r \frac{r}{n} + 2 \pi (r-\frac{r}{n}) \frac{r}{n} + \cdots + 2 \pi \frac{1}{n} \frac{r}{n} = 2 \pi \frac{r}{n}r (1+ (1-\frac{1}{n})+ \cdots + \frac{1}{n})$, which gives the estimate $A \approx 2 \pi r^2 \frac{1+ \frac{1}{n}}{2}$. Taking limits gives $A = \pi r^2$, as desired.</p>
<p>Note that the $\frac{1}{2}$ appears because you are summing $1+ (1-\frac{1}{n})+ \cdots + \frac{1}{n}$. If you draw lines of lengths $1$, $1-\frac{1}{n}$, etc. stacked on top of each other, you see that they approximate a triangle. The area of a triangle is half that of the 'equivalent' rectangle. This explains the 'disappearing' 2 in the formula.</p>
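<p>The strip sum is easy to evaluate numerically (this snippet is my addition): its value is exactly $\pi r^2\left(1+\frac1n\right)$, so it converges to $\pi r^2$ as $n$ grows.</p>

```python
import math

def onion_area(r, n):
    """Sum of n unrolled strips: 2*pi*(outer radius)*(width r/n)."""
    return sum(2 * math.pi * (k * r / n) * (r / n) for k in range(1, n + 1))

r = 3.0
for n in (10, 100, 1000):
    print(n, onion_area(r, n), math.pi * r ** 2)
```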
|
199,842 | <p>I understand the reasoning behind $\pi r^2$ for a circle area however I'd like to know what is wrong with the reasoning below:</p>
<p>The area of a square is like a line, the height (one dimension, length) placed several times next to each other up to the square all the way until the square length thus we have height x length for the area.</p>
<p>The area of a circle could be thought of as a line (the radius) placed next to itself enough times to make up a circle. Given that the circumference of a circle is $2 \pi r$, we would, by the same reasoning as above, have $2 \pi r^2$. Where is the problem with this reasoning?</p>
<p>Lines placed next to each other would only go straight like a rectangle so you'd have to spread them apart in one of the ends to be able to make up a circle so I believe the problem is there somewhere. Could anybody explain the issue in the reasoning above?</p>
| JavaMan | 6,491 | <p>Actually, you can use your method, but you have to be a bit careful. Start with a circle of radius $r$ centered at the origin. We can think of the total area of the circle as adding the "lengths" of all the circumferences of all the circles centered at the origin which have a radius $x \leq r$:</p>
<p>$$\text{Area of Circle} = \int_0^{r} 2 \pi x~\textrm{d}x = \pi r^2$$</p>
<p>In reality, we are approximating the circumference of each circle by an annulus of small width, and then letting that width tend to zero.</p>
<p>This method works for finding the area of the square too. Suppose you have a square of side $r$ centered at the origin. Then the area of the square is the sum of the perimeters of all the squares centered at the origin with half-side $x \le r/2$ (such a square has perimeter $8x$).</p>
<p>$$
\text{Area of Square} = \int_{0}^{r/2} 8x\, dx = r^2.
$$</p>
<p>The reason this doesn't work the way you originally thought is that, in order to add "fattened" radii all around the circle, the fattened radii must overlap, and so you are overcounting.</p>
|
2,217,295 | <p>I thought about defining a function $ g(x) = \dfrac{1}{\epsilon} x^{\epsilon} - \ln(x) $ and showing that it is larger than $0$. I saw that if I substitute
$1$, I get $\dfrac{1}{\epsilon}$, which is positive, and so on. I also tried to differentiate the function and got $g'(x) = x^{\epsilon -1} - \dfrac{1}{x}$.</p>
<p>I'm stuck. Can someone help me realize how to keep going?</p>
<p><strong>edit</strong>: I also need to show that $\forall \epsilon >0 $ $\exists M >0 $ such that $\forall x>M $, $\ln x < x^{\epsilon}$. I thought about using what we proved just now: multiplying both sides by $\epsilon$ gives $ \epsilon \ln x < x^{\epsilon}$; define a new function $f(x) = x^{\epsilon} - \epsilon \ln x$, and again if I differentiate it I get $\epsilon x^{\epsilon -1} - \epsilon \dfrac{1}{x}$; factoring out the $\epsilon$ gives $\epsilon \left(x^{\epsilon -1} - \dfrac{1}{x}\right) > 0 $.<br>
The only problem is that I never use the $M$. Can someone help me, please? </p>
<p>Thanks in advance!</p>
| Zubzub | 349,735 | <p>Let $c > 0$ and $\epsilon \in ]0,1[$. We start with the trivial inequality:
$$
\begin{align}
c &< c + \frac{1}{\epsilon} &\Longleftrightarrow \\
c & < \frac{1}{\epsilon}(1 + \epsilon c) & \overset{(*)}{\implies} \\
c & < \frac{1}{\epsilon} e^{\epsilon c} & \overset{(**)}{\implies} \\
\ln(x) &< \frac{1}{\epsilon} e^{\epsilon \ln(x)} = \frac{1}{\epsilon} x^{\epsilon}
\end{align}
$$
$(*)$ since $1+x \leq e^x, \forall x$.</p>
<p>$(**)$ Setting $c = \ln(x)$ for $x > 1$</p>
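<p>A numerical spot-check of the resulting inequality $\ln(x) < \frac{1}{\epsilon}x^{\epsilon}$ (my addition, not part of the argument above):</p>

```python
import math

# The inequality should hold for every eps > 0 and every x > 1.
for eps in (0.1, 0.5, 1.0):
    for x in (1.001, 2.0, 10.0, 1e6):
        assert math.log(x) < x ** eps / eps, (eps, x)
```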
|
2,217,295 | <p>I thought about defining a function $ g(x) = \dfrac{1}{\epsilon} x^{\epsilon} - \ln(x) $ and showing that it is larger than $0$. I saw that if I substitute
$1$, I get $\dfrac{1}{\epsilon}$, which is positive, and so on. I also tried to differentiate the function and got $g'(x) = x^{\epsilon -1} - \dfrac{1}{x}$.</p>
<p>I'm stuck. Can someone help me realize how to keep going?</p>
<p><strong>edit</strong>: I also need to show that $\forall \epsilon >0 $ $\exists M >0 $ such that $\forall x>M $, $\ln x < x^{\epsilon}$. I thought about using what we proved just now: multiplying both sides by $\epsilon$ gives $ \epsilon \ln x < x^{\epsilon}$; define a new function $f(x) = x^{\epsilon} - \epsilon \ln x$, and again if I differentiate it I get $\epsilon x^{\epsilon -1} - \epsilon \dfrac{1}{x}$; factoring out the $\epsilon$ gives $\epsilon \left(x^{\epsilon -1} - \dfrac{1}{x}\right) > 0 $.<br>
The only problem is that I never use the $M$. Can someone help me, please? </p>
<p>Thanks in advance!</p>
| Nathanael Skrepek | 423,961 | <p>Alternatively, you could use L'Hôpital's rule to show
$$\lim_{x\to\infty} \frac{\ln(x)}{x^{\epsilon}} = \lim_{x\to\infty} \frac{x^{-1}}{\epsilon x^{\epsilon-1}} = \lim_{x\to\infty} \frac{1}{\epsilon} x^{-\epsilon} = 0.$$
From this it follows that $x^\epsilon > \ln(x)$ from some point on; otherwise the limit couldn't be $0$.</p>
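<p>Numerically (my addition), the ratio $\ln(x)/x^{\epsilon}$ first rises to a maximum of $\frac{1}{e\epsilon}$ at $x=e^{1/\epsilon}$ and only then decays to $0$, so the point beyond which $x^\epsilon > \ln(x)$ can be far out for small $\epsilon$:</p>

```python
import math

def ratio(x, eps=0.1):
    return math.log(x) / x ** eps

for x in (1e2, 1e6, 1e12, 1e24, 1e48):
    print(x, ratio(x))                    # rises, then decays toward 0
```

<p>With $\epsilon = 0.1$ the maximum $10/e \approx 3.68$ sits at $x = e^{10} \approx 22026$.</p>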
<p><strong>Second approach:</strong>
Use the $\exp$ function: since $\exp$ is strictly monotone, the inequality holds if and only if it still holds after applying $\exp$ to both sides:
\begin{align*}
\ln(x) < x^\epsilon \quad &\Leftrightarrow \quad \exp(\ln(x)) < \exp(x^\epsilon) \\ &\Leftrightarrow \quad 1 < \frac{\exp(x^\epsilon)}{x} = \sum_{n=0}^\infty \frac{x^{\epsilon n - 1}}{n!} =: f(x)
\end{align*}
So it only remains to show that $f(x) > 1$. For example, take $x=\lceil\frac{1}{\epsilon}\rceil!$
and show that $f$ is monotone from this point on, or at least still bigger than $1$. Showing that it is still bigger than $1$ should be easy.</p>
|
813,301 | <p>$f$ is strictly increasing and $g$ is decreasing. How to find whether $f\circ g$ and $g\circ f$ are increasing, decreasing, strictly increasing or strictly decreasing?</p>
<p>This is what I did,</p>
<p>$f \circ g=f(g(x))$</p>
<p>If we take $x_1 < x_2$,</p>
<p>$f(x_1) < f(x_2) $ and $\ g(x_1) \ge g(x_2) $</p>
<p>Assuming $f(g(x_1)) < f(g(x_2))$, then,</p>
<p>$g(x_1)<g(x_2) \implies x_2<x_1$ </p>
<p>This is a contradiction, therefore our assumption is wrong. After this, what should I assume to prove this? If I assume $f(g(x_1)) \ge f(g(x_2)) \ $, a problem occurs since $f$ is strictly increasing and the assumption has an equality possibility.</p>
<p>Or is there a more effective method than this? Can it be applied to prove the same for $g\circ f$?</p>
| Empy2 | 81,790 | <p>You have shown that $f\circ g(x_1)<f\circ g(x_2)\implies x_2<x_1$ - that is, $f\circ g$ is decreasing. </p>
<p>Yes, the same method shows that $g\circ f$ is also decreasing.</p>
|
2,265,368 | <p>Let $R$ be a commutative ring with unit and $N$ its nilradical. If $R$ is connected then $R/N$ is also connected; the quotient map induces a homeomorphism between the spectra. I'd like to see a more hands-on proof, but I'm unable to make it work. That is to say, I'd like to show that if $e^2-e\in N$ for some $e\in R$, then $e\in N$ or $e-1\in N$. After scribbling full quite a couple of pages going around in circles, any idea is welcome.</p>
| E.R | 325,912 | <p>It is well known that every idempotent modulo $N$ can be lifted to an idempotent of $R$. Thus, if $R$ has no nontrivial idempotent, then $R/N$ also has no nontrivial idempotent. (See Anderson's "Rings and Categories of Modules".)</p>
|
4,318,842 | <p>Thanks for your time.</p>
<p>I am studying Euclidean Geometry from a textbook, and it has a corollary stating that, given <span class="math-container">$r \parallel s$</span>, if $t$ intersects $r$ then it must intersect $s$. The proof it gives, however, is not very convincing. It says: "If one line can intersect only one of the two parallel lines, there is one parallel line to two non-parallel lines."</p>
<p>A proof that seems more reasonable to me: Given <span class="math-container">$r \parallel s$</span>. If $t$ intersects $r$ but does not intersect $s$, then <span class="math-container">$t \parallel s$</span>. If <span class="math-container">$r \parallel s$</span> and <span class="math-container">$t \parallel s$</span> then <span class="math-container">$r \parallel t$</span>, which is a contradiction; therefore $t$ must intersect both $r$ and $s$.</p>
<p>Is my proof correct?</p>
| Community | -1 | <p>I must assume that two lines "intersect" when they have a point in common and they don't coincide, otherwise the claim is falsified by the case <span class="math-container">$t=r$</span>.</p>
<p>As per your work, using the fact that parallelism is a transitive relation defeats the purpose of disproving the assertion that it isn't transitive (which is essentially your task).</p>
<p>If <span class="math-container">$r\parallel s$</span>, <span class="math-container">$A\in r\cap t$</span> and <span class="math-container">$t\parallel s$</span>, then <span class="math-container">$r$</span> and <span class="math-container">$t$</span> are both parallel lines to <span class="math-container">$s$</span> passing through <span class="math-container">$A$</span>. By Euclid V, this implies <span class="math-container">$r=t$</span>.</p>
|
3,836,662 | <p>I understand basic group theory. I would say that I've seen most of the standard stuff up to, say, the quotient group.</p>
<p>I feel like I've seen in more than one place the suggestion that group theory is the study of symmetries, or actions that leave something (approximately) unchanged. Unfortunately I can only find a couple sources. At 0:49 in this <a href="https://www.youtube.com/watch?v=mH0oCDa74tE" rel="nofollow noreferrer">3 Blue 1 Brown video</a>, the narrator says "[Group theory] is all about codifying the idea of symmetry." The whole video seems to be infused with the idea that every group represents the symmetry of something.</p>
<p>In <a href="https://www.youtube.com/watch?v=ihMyW7Z5SAs" rel="nofollow noreferrer">this video</a> about the Langlands Program, the presenter discusses symmetry as a lead-in to groups beginning around 33:00. I don't know if he actually describes group theory as being about the study of symmetry, but the general attitude seems pretty similar to that of the previous video.</p>
<p>This doesn't jibe with my intuition very well. I can see perfectly well that <em>part</em> of group theory has to do with symmetries: one only has to consider rotating and flipping a square to see this. But is <em>all</em> of group theory about symmetry? I feel like there must be plenty of groups that have nothing to do with symmetry. Am I wrong?</p>
| Community | -1 | <p>The equivalence relation <span class="math-container">$(h,k)\sim (h',k')\stackrel{(def.)}{\iff} hk=h'k'$</span> induces a partition of <span class="math-container">$H\times K$</span> into equivalence classes each of cardinality <span class="math-container">$|H\cap K|$</span>, and the quotient set <span class="math-container">$(H\times K)/\sim$</span> has cardinality <span class="math-container">$|HK|$</span>. Therefore, if <span class="math-container">$H$</span> and <span class="math-container">$K$</span> are finite (in particular if they are subgroups of a finite group), we get: <span class="math-container">$|H\times K|=|H||K|=|H\cap K| |HK|$</span>, whence the formula in the OP. Hereafter the details.</p>
<p>(The formula holds irrespective of <span class="math-container">$HK$</span> being a subgroup.)</p>
<hr />
<p>Let's define in <span class="math-container">$H\times K$</span> the equivalence relation: <span class="math-container">$(h,k)\sim (h',k')\stackrel{(def.)}{\iff} hk=h'k'$</span>. The equivalence class of <span class="math-container">$(h,k)$</span> is given by:</p>
<p><span class="math-container">$$[(h,k)]_\sim=\{(h',k')\in H\times K\mid h'k'=hk\} \tag 1$$</span></p>
<p>Now define the following map from any equivalence class:</p>
<p><span class="math-container">\begin{alignat*}{1}
f_{(h,k)}:[(h,k)]_\sim &\longrightarrow& H\cap K \\
(h',k')&\longmapsto& f_{(h,k)}((h',k')):=k'k^{-1} \\
\tag 2
\end{alignat*}</span></p>
<p>Note that <span class="math-container">$k'k^{-1}\in K$</span> by closure of <span class="math-container">$K$</span>, and <span class="math-container">$k'k^{-1}\in H$</span> because <span class="math-container">$k'k^{-1}=h'^{-1}h$</span> (being <span class="math-container">$(h',k')\in [(h,k)]_\sim$</span>) and by closure of <span class="math-container">$H$</span>. Therefore, indeed <span class="math-container">$k'k^{-1}\in H\cap K$</span>.</p>
<p><strong>Lemma 1</strong>. <span class="math-container">$f_{(h,k)}$</span> is bijective.</p>
<p><em>Proof</em>.</p>
<p><span class="math-container">\begin{alignat}{2}
f_{(h,k)}((h',k'))=f_{(h,k)}((h'',k'')) &\space\space\space\Longrightarrow &&k'k^{-1}=k''k^{-1} \\
&\space\space\space\Longrightarrow &&k'=k'' \\
&\stackrel{h'k'=h''k''}{\Longrightarrow} &&h'=h'' \\
&\space\space\space\Longrightarrow &&(h',k')=(h'',k'') \\
\end{alignat}</span></p>
<p>and the map is injective. Then, for every <span class="math-container">$a\in H\cap K$</span>, we get <span class="math-container">$ak\in K$</span>, <span class="math-container">$ha^{-1}\in H$</span> and <span class="math-container">$a=f_{(h,k)}((ha^{-1},ak))$</span>, and the map is surjective. <span class="math-container">$\space\space\Box$</span></p>
<p>Now define the following map from the quotient set:</p>
<p><span class="math-container">\begin{alignat}{1}
f:(H\times K)/\sim &\longrightarrow& HK \\
[(h,k)]_\sim &\longmapsto& f([(h,k)]_\sim):=hk \\
\tag 3
\end{alignat}</span></p>
<p><strong>Lemma 2</strong>. <span class="math-container">$f$</span> is well-defined and bijective.</p>
<p><em>Proof</em>.</p>
<ul>
<li>Good definition: <span class="math-container">$(h',k')\in [(h,k)]_\sim \Rightarrow f([(h',k')]_\sim)=h'k'=hk=f([(h,k)]_\sim)$</span>;</li>
<li>Injectivity: <span class="math-container">$f([(h',k')]_\sim)=f([(h,k)]_\sim) \Rightarrow h'k'=hk \Rightarrow (h',k')\in [(h,k)]_\sim \Rightarrow [(h',k')]_\sim=[(h,k)]_\sim$</span>;</li>
<li>Surjectivity: for every <span class="math-container">$ab\in HK$</span>, we get <span class="math-container">$ab=f([(a,b)]_\sim)$</span>. <span class="math-container">$\space\space\Box$</span></li>
</ul>
<p>Finally, the formula holds irrespective of <span class="math-container">$HK$</span> being a subgroup, which was never used in the proof.</p>
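<p>A finite sanity check (my addition, not part of the proof): in $S_3$, with permutations stored as tuples, take $H=\langle(0\ 1)\rangle$ and $K=\langle(1\ 2)\rangle$. Then $|H||K| = 4 = |H\cap K|\,|HK|$, and indeed $HK$ is not a subgroup here (since $4 \nmid 6$), illustrating the closing remark.</p>

```python
from itertools import product

def compose(p, q):
    """(p o q)(i) = p[q[i]] for permutations given as tuples."""
    return tuple(p[i] for i in q)

H = {(0, 1, 2), (1, 0, 2)}                # identity and the transposition (0 1)
K = {(0, 1, 2), (0, 2, 1)}                # identity and the transposition (1 2)
HK = {compose(h, k) for h, k in product(H, K)}

print(len(H) * len(K), len(H & K) * len(HK))   # both sides equal 4
```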
|
4,563,135 | <p>Denote the linear space of linear operators on the linear space <span class="math-container">$V$</span> with field <span class="math-container">$\mathbb{F}$</span> by <span class="math-container">$L(V)$</span> and the linear space of <span class="math-container">$n \times n$</span> matrices with entries in <span class="math-container">$\mathbb{R}$</span> by <span class="math-container">$\mathbb{R}^{n\times n}$</span>. Let <span class="math-container">$T:V \to V$</span> be an operator on the linear space <span class="math-container">$V$</span> over <span class="math-container">$\mathbb{R}$</span>. Let <span class="math-container">$C_T(x)=\det(x I - T)$</span> be its characteristic polynomial. The coefficients of <span class="math-container">$C_T$</span> are in <span class="math-container">$\mathbb{R}$</span> since by definition <span class="math-container">$C_T(x)=\det\left(\mathcal{M}_B^B(xI-T)\right)$</span> and all the entries of <span class="math-container">$\mathcal{M}_B^B(xI-T)$</span> are in <span class="math-container">$\mathbb{R}$</span>, where <span class="math-container">$\mathcal{M}_B^B$</span> is a linear isomorphism between <span class="math-container">$L(V)$</span> and <span class="math-container">$\mathbb{R}^{n \times n}$</span> with <span class="math-container">$B$</span> being a basis for <span class="math-container">$V$</span>. Consequently, if <span class="math-container">$\lambda \in \mathbb{C} - \mathbb{R}$</span> is a root of <span class="math-container">$C_T$</span> then <span class="math-container">$\bar\lambda$</span> is also a root of <span class="math-container">$C_T$</span> with the same algebraic multiplicity. Now, I want to show that</p>
<p><span class="math-container">$$\dim \ker (\lambda I - T)^{(m)} = \dim \ker (\bar \lambda I - T)^{(m)}, \qquad m = 1,\dots,r \tag{1}$$</span></p>
<p>where <span class="math-container">$r$</span> is the algebraic multiplicity of <span class="math-container">$\lambda$</span> and <span class="math-container">$\bar \lambda$</span>. According to the spectral decomposition theorem, we have <span class="math-container">$V = \cdots \oplus V_\lambda \oplus \cdots \oplus V_{\bar \lambda} \oplus \cdots$</span>, where <span class="math-container">$V_{\lambda} = \ker (\lambda I - T)^{(r)}$</span> and <span class="math-container">$V_{\bar \lambda} = \ker (\bar \lambda I - T)^{(r)}$</span>. Let <span class="math-container">$B=(\cdots, B_{\lambda},\cdots,B_{\bar \lambda},\cdots)$</span> be the corresponding basis of this decomposition. Equation <span class="math-container">$(1)$</span> literally means that the blocks <span class="math-container">$\mathcal{M}_{B_\lambda}^{B_{\lambda}}(T|_{V_{\lambda}})$</span> and <span class="math-container">$\mathcal{M}_{B_{\bar \lambda}}^{B_{\bar \lambda}}(T|_{V_{\bar \lambda}})$</span> have complex conjugate Jordan sub-blocks of the following form</p>
<p><span class="math-container">\begin{align}
J_{\lambda} =
\begin{bmatrix}
\lambda & 1 & \cdots & 0 \\
0 & \lambda & \ddots & \vdots \\
\vdots & \vdots & \ddots & 1 \\
0 & 0 & \cdots & \lambda
\end{bmatrix}_{d \times d}, \qquad
J_{\bar \lambda}=
\begin{bmatrix}
\bar \lambda & 1 & \cdots & 0 \\
0 & \bar \lambda & \ddots & \vdots \\
\vdots & \vdots & \ddots & 1 \\
0 & 0 & \cdots & \bar \lambda
\end{bmatrix}_{d \times d}, \qquad
J_{\bar \lambda} = \overline{J_{\lambda}}
\end{align}</span></p>
<p>or more compactly,</p>
<p><span class="math-container">$$\mathcal{M}_{B_{\bar \lambda}}^{B_{\bar \lambda}}(T|_{V_{\bar \lambda}}) = \overline{\mathcal{M}_{B_\lambda}^{B_{\lambda}}(T|_{V_{\lambda}})}$$</span>
How can I prove equation <span class="math-container">$(1)$</span>?</p>
| user8675309 | 735,806 | <p>The idea is that you can get the result by using two standard operations you are already familiar with: (i) transposition <span class="math-container">$^T$</span> (which you know doesn't change rank) and (ii) conjugate transposition <span class="math-container">$^*$</span>. These are both involutions.</p>
<p>Now consider arbitrary <span class="math-container">$B\in\mathbb C^{n\times n}$</span> where <span class="math-container">$\text{rank}\big(B\big)=r$</span> and use the fact</p>
<blockquote>
<p>a matrix has rank <span class="math-container">$r$</span> <strong>iff</strong> it has some <span class="math-container">$r\times r$</span> submatrix with
nonzero determinant and for <span class="math-container">$m\gt r$</span> all <span class="math-container">$m\times m$</span> minors are zero</p>
</blockquote>
<p>I.e. using two well chosen permutation matrices <span class="math-container">$PBP'=\begin{bmatrix} B_r &* \\ * &* \end{bmatrix}$</span> has <span class="math-container">$B_r$</span>, the leading <span class="math-container">$r\times r$</span> principal submatrix, invertible. (Note that <span class="math-container">$*$</span> denotes entries we don't care about; this nearly overloads notation for conjugate transpose but the context should make it clear.) So</p>
<p><span class="math-container">$$\text{rank}\big(B\big) =\text{rank}\big(PBP'\big) =\text{rank}\big( B_r\big) =r $$</span></p>
<p>Since <span class="math-container">$B_r$</span> is invertible we know <span class="math-container">$B_r$</span> is similar (unitarily if you like) to an upper triangular matrix with no zeros on the diagonal. Then <span class="math-container">$B_r^*$</span> is similar to a lower triangular matrix with no zeros on the diagonal so <span class="math-container">$B_r^*$</span> is invertible.</p>
<p>This tells us that <span class="math-container">$\big(PBP'\big)^*=(P')^*B^*P^*$</span> has a leading <span class="math-container">$r\times r$</span> principal submatrix with rank <span class="math-container">$r$</span>. It could conceivably have an even larger submatrix with non-zero determinant, so applying the fact quoted above:</p>
<p><span class="math-container">$$\text{rank}\big(B\big)=r\leq \text{rank}\big((P')^*B^*P^*\big)=\text{rank}\big(B^*\big)\leq \text{rank}\big(B\big)$$</span></p>
<p>where the right-hand inequality comes from re-running the argument on <span class="math-container">$B^*$</span> (i.e. making use of the involution).</p>
<p>Conclude: the conjugation map that sends <span class="math-container">$i\mapsto -i$</span> (aplied component-wise to a matrix) doesn't change rank of <span class="math-container">$B$</span>, because it can be written as <span class="math-container">$\overline B=(B^*)^T$</span></p>
<p>Or if you prefer, we can conclude with:</p>
<p><span class="math-container">\begin{align}
\text{rank}\Big(\big(\lambda I -A\big)^m\Big)
&=\text{rank}\Big(\big((\lambda I -A)^m\big)^*\Big) \\
&=\text{rank}\Big(\big((\lambda I)^* -A^*\big)^m\Big) \\
&=\text{rank}\Big(\big(\overline \lambda I -A^T\big)^m\Big) \\
&=\text{rank}\Big(\big((\overline \lambda I -A^T)^m\big)^T\Big) \\
&=\text{rank}\Big(\big( (\overline\lambda I)^T -(A^T)^T\big)^m\Big) \\
&=\text{rank}\Big(\big(\overline \lambda I -A\big)^m\Big)
\end{align}</span></p>
<p>where, the third equality holds since <span class="math-container">$A$</span> has real components.</p>
|
4,024,871 | <p>I'm looking to build a function <span class="math-container">$f:S^2 \to \mathbb R^2$</span> such that <span class="math-container">$f(x)\neq f(-x)$</span> for all <span class="math-container">$x\in S^2$</span>.</p>
<p>By the Borsuk–Ulam theorem, this function must be discontinuous. I was trying to build a not-too-complicated function, but I kept running into problems.</p>
<p>I appreciate any help.</p>
| Aryaman Maithani | 427,810 | <p>Here's an overkill using the Axiom of Choice:</p>
<p>Let <span class="math-container">$\mathcal A = \{\{\mathbf x, -\mathbf x\} \mid \mathbf x \in S^2\}$</span> be the collection of all antipodal pairs. From each set in <span class="math-container">$\mathcal A$</span>, pick exactly one element. Map that to <span class="math-container">$(1, 0)$</span> and its antipodal counterpart to <span class="math-container">$(0, 1)$</span>.</p>
|
20,470 | <p>I couldn't find a more descriptive title, but I guess an example will explain my problem.</p>
<p>I set up some customized <code>Grid</code> function including some additional functionalities which I control with custom Options. Additionally, I would like to change some of the standard <code>Grid</code> Options, e.g. always use <code>Frame->All</code>. Take the following working example:</p>
<pre><code>Options[myGrid] = {Frame -> All, "Tooltip" -> False};
myGrid[content_, opts : OptionsPattern[]] :=
Module[{con},
If[OptionValue["Tooltip"],
con = MapIndexed[Tooltip[#1, #2] &, content, {-1}],
con = content
];
Grid[con,
Sequence @@
FilterRules[{opts}~Join~Options[myGrid], Options[Grid]]
]
]
</code></pre>
<p>defining an example content:</p>
<pre><code>mat = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
</code></pre>
<p>We can test the behavior:</p>
<pre><code>myGrid[mat]
</code></pre>
<blockquote>
<p><img src="https://i.stack.imgur.com/NTsTf.png" alt="Mathematica graphics"></p>
</blockquote>
<p>The custom "Tooltip" flag works as intended. Now I want to pass an <code>Option</code> to <code>Grid</code>, that has not been explicitely set in the above <code>Options[myGrid]</code> declaration.This eventually makes it through to the Grid, but produces an error message.</p>
<pre><code>myGrid[mat, Background -> Blue]
</code></pre>
<p><img src="https://i.stack.imgur.com/M1mow.png" alt="Mathematica graphics"></p>
<p><img src="https://i.stack.imgur.com/PovKV.png" alt="Mathematica graphics"></p>
<p>To get rid of the errors I embed the Options from Grid into my custom function:</p>
<pre><code>Options[myGrid] =
Join[
{Frame -> All, "Tooltip" -> False},
Options[Grid]
];
</code></pre>
<p>Now, I can change the Grid Options without raising an error:</p>
<pre><code>myGrid[mat, Background -> Green]
</code></pre>
<p><img src="https://i.stack.imgur.com/lDbt9.png" alt="Mathematica graphics"></p>
<p>but the custom setting <code>Frame->All</code> gets lost. </p>
<pre><code>myGrid[mat, Frame -> All]
</code></pre>
<p><img src="https://i.stack.imgur.com/PfSWh.png" alt="Mathematica graphics"></p>
<p>Apparently, the default <code>Frame->None</code> setting for <code>Grid</code> overrules my custom setting. I banged my head against this problem for too long already, therefore my plea for your assistance.</p>
| Mr.Wizard | 121 | <p><a href="http://reference.wolfram.com/mathematica/ref/OptionsPattern.html" rel="noreferrer">OptionsPattern</a>:</p>
<p><img src="https://i.stack.imgur.com/8Dpvd.png" alt="Mathematica graphics"></p>
<p>Therefore declare <code>Options</code> for both <code>myGrid</code> and <code>Grid</code> as valid:</p>
<pre><code>Options[myGrid] = {Frame -> All, "Tooltip" -> False};
myGrid[content_, opts : OptionsPattern[{myGrid, Grid}]] := . . .
</code></pre>
<p>Then:</p>
<pre><code>myGrid[mat, Background -> Blue]
</code></pre>
<blockquote>
<pre><code>Grid[mat, Background -> RGBColor[0, 0, 1], Frame -> All]
</code></pre>
</blockquote>
<p>With no error message.</p>
|
366,589 | <p>how do I solve this one? </p>
<p>$$\lim_{q \to 0}\int_0^1{1\over{qx^3+1}} \, \operatorname{d}\!x$$</p>
<p>I tried substituting $t=qx^3+1$ which didn't work, and re-writing it as $1-{qx^3\over{qx^3+1}}$ and then substituting, but I didn't manage to get on. </p>
<p>Thanks in advance! </p>
| Abel | 71,157 | <p>For any $q>0$, $$\frac{1}{q+1} = \int_0^1\frac{1}{q\cdot 1+1}\,\mathrm{d}x\leq\int_0^1\frac{1}{qx^3+1}\,\mathrm{d}x \leq \int_0^1\frac{1}{q\cdot 0+1}\,\mathrm{d}x = 1.$$
For any $q\in(-1,0)$,
$$1 = \int_0^1\frac{1}{q\cdot 0+1}\,\mathrm{d}x\leq \int_0^1\frac{1}{qx^3+1}\,\mathrm{d}x\leq \int_0^1\frac{1}{q\cdot 1+1}\,\mathrm{d}x = \frac{1}{q+1}.$$</p>
<p>Thus, the limit for $q\to 0$ is $1$.</p>
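<p>A numerical illustration of the squeeze (my addition): a midpoint-rule value of the integral stays between $\frac{1}{q+1}$ and $1$ and climbs to $1$ as $q\to 0^+$.</p>

```python
def integral(q, N=100000):
    """Midpoint rule for the integral of 1/(q*x^3 + 1) over [0, 1]."""
    h = 1.0 / N
    return h * sum(1.0 / (q * ((k + 0.5) * h) ** 3 + 1.0) for k in range(N))

for q in (1.0, 0.1, 0.01, 0.001):
    val = integral(q)
    assert 1.0 / (q + 1.0) <= val <= 1.0   # the squeeze from the answer
    print(q, val)
```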
|
2,940,306 | <p>I understand that it would be n! permutations for the given number of elements, but I am not sure how to calculate it with these parameters.</p>
| Saketh Malyala | 250,220 | <p>Choose the first character. </p>
<p>Choose the last character.</p>
<p>How many ways can you order the remaining <span class="math-container">$7$</span>? </p>
<p><span class="math-container">$3 \times 3 \times 7!$</span></p>
|
1,091,549 | <p>How to compute $\int_{0}^{+\infty} \frac{dx}{e^{x+1} + e^{3-x}}$?</p>
<p>My partial solution:
$$
\int_{0}^{+\infty} \frac{dx}{e^{x+1} + e^{3-x}} = \int_{0}^{+\infty} \frac{dx}{e^{3-x}(1 + e^{2x-2})} \\
= \int_{0}^{+\infty} \frac{e^{x-3}dx}{1 + e^{2x-2}}.
$$
Thank you very much.</p>
| André Nicolas | 6,312 | <p>It is natural to let $y=e^x$. Then $dy=e^x\,dx$, so $dx=\frac{1}{y}\,dy$. Thus we want
$$\int_1^\infty \frac{1}{y}\cdot \frac{1}{e\cdot y+e^3 \cdot \frac{1}{y}}\,dy,$$
which simplifies to
$$\int_1^\infty \frac{1}{e}\cdot\frac{1}{y^2+e^2}\,dy.$$
Let $y=et$, and we are nearly finished.</p>
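<p>Carrying the substitution to the end: with $y=et$ the integral becomes $\frac{1}{e^2}\int_{1/e}^{\infty}\frac{dt}{t^2+1}=\frac{1}{e^2}\left(\frac{\pi}{2}-\arctan\frac{1}{e}\right)$. A direct quadrature of the original integrand agrees (the Simpson helper below is illustrative):</p>

```python
import math

def f(x):
    return 1.0 / (math.exp(x + 1) + math.exp(3 - x))

def simpson(g, a, b, n=4000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

numeric = simpson(f, 0.0, 50.0)   # the tail beyond x = 50 is negligible
closed = (math.pi / 2 - math.atan(1 / math.e)) / math.e**2
assert abs(numeric - closed) < 1e-6
```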
|
939,911 | <p>Question Re-phrased:</p>
<hr>
<p>I'm having a lot of trouble wrapping my head around this problem. While I've looked through similar posts, It's difficult understanding the maths because I currently have another approach stuck in my head. Is this valid?</p>
<p>Question:
Prove that an at most countable union of at most countable sets is at most countable.</p>
<p>Proof:
Let <strong>F = {A1, A2, ... , Ak, ...}</strong> be an at most countable family of sets where each <strong>$A_k \in$ F</strong> is also at most countable for <strong>$k \in N$</strong>. Define <strong>S = $ \bigcup_{A_k \in F}A_k $</strong>.</p>
<p>If every <strong>$A_k \in F $</strong> is finite, then there exists a bijection <strong>$g_k: A_k \rightarrow J_{n_k}$</strong> for every $A_k \in F$. Therefore, for some m $\in$ N, S = $J_m$ such that <strong>$m\le\sum_{k=1}^x(n_k) $</strong>. Hence, S is finite and at most countable. However, if at least one $A_k \in F$ exists such that this $A_k$~N ($A_k$ is countable), then S = N because $J_n \subset$ N for all n $\in$ N. Hence, S is countable.</p>
<p>Therefore, S must be at most countable.</p>
<hr>
<p>But this only works if F is finite. Need to rethink this.</p>
| Asaf Karagila | 622 | <p>This is not a proof. And had I been grading it, I'd have given it no points. </p>
<ol>
<li><p>At most countable includes the infinite case. All you did in your proof is to handle the case all the sets are finite.</p></li>
<li><p>Even if there is a maximal index in $F$, what difference does it make? You're just pointing out that you are going to assume there is one, even if there isn't, but then you don't use it again. As Ittay points out, this is a false statement, but you also don't use it. So why add it to the proof?</p></li>
<li><p>You haven't proved anything, you merely stated that "since those sets are all finite, their union has to be countable", <em>but that's exactly the statement you want to prove!</em></p>
<p>Instead you need to show that there exists some injection from $S$ into a set that you know is countable. For example $\Bbb N$ or $\Bbb{N\times N}$.</p></li>
<li><p>The assumption that all sets are equipotent to a subset of $\Bbb N$, therefore their union is countable is false. Consider $\{\{x\}\mid x\in\Bbb R\}$, then each set in this family is equipotent with a subset of $\Bbb N$, but their union is not.</p></li>
</ol>
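<p>The injection into $\Bbb{N\times N}$ mentioned in point 3 can be made concrete with the Cantor pairing function; a small sketch (it glosses over the choice of enumerations for the individual sets):</p>

```python
def cantor_pair(i, j):
    # The Cantor pairing function: a bijection N x N -> N,
    # enumerating pairs along diagonals.
    return (i + j) * (i + j + 1) // 2 + j

# Distinct pairs get distinct codes; indexing elements of the union by
# (which set, which element of that set) then injects the union into N.
codes = {cantor_pair(i, j) for i in range(50) for j in range(50)}
assert len(codes) == 50 * 50
```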
|
3,443,672 | <p>The equation is
<span class="math-container">$$\tan\frac{5\pi}{6} \cos x=1-\sin x$$</span>
<span class="math-container">$$\sin\frac{5\pi}{6} \cos x=\cos\frac{5\pi}{6}-\cos\frac{5\pi}{6} \sin x$$</span>
<span class="math-container">$$\sin\left(\frac{5\pi}{6}+x\right)=\cos \frac{5\pi}{6}$$</span> which looks weird to me. What am I doing wrong?</p>
| Calum Gilhooley | 213,690 | <p>One set of solutions is given by <span class="math-container">$\cos x = 0$</span> and <span class="math-container">$\sin x = 1$</span>, i.e.:
<span class="math-container">$$
x = \left(2n + \tfrac12\right)\pi.
$$</span></p>
<p>If <span class="math-container">$\cos x \ne 0$</span>, then <span class="math-container">$\sin x \ne 1$</span>, and
<span class="math-container">$$
\left(2 - \sqrt3\right)\cos x = \frac{\cos^2 x}{\left(2 + \sqrt3\right)\cos x} =
\frac{1 - \sin^2 x}{1 - \sin x} = 1 + \sin x,
$$</span>
whence, by adding and subtracting the two equations, <span class="math-container">$\cos x = \tfrac12$</span> and <span class="math-container">$\sin x = -\sqrt3\cdot\cos x = -\tfrac{\sqrt3}2$</span>, i.e.:
<span class="math-container">$$
x = \left(2n - \tfrac13\right)\pi.
$$</span></p>
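<p>Both solution families can be checked numerically. Note that the algebra above treats the coefficient as $2+\sqrt3$ (so that $(2-\sqrt3)(2+\sqrt3)=1$); the check below adopts the same assumption:</p>

```python
import math

def lhs(x):
    # Coefficient 2 + sqrt(3) is assumed, matching the algebra above.
    return (2 + math.sqrt(3)) * math.cos(x)

def rhs(x):
    return 1 - math.sin(x)

# First family:  x = (2n + 1/2)*pi  (cos x = 0,   sin x = 1)
# Second family: x = (2n - 1/3)*pi  (cos x = 1/2, sin x = -sqrt(3)/2)
for n in range(-2, 3):
    for x in ((2 * n + 0.5) * math.pi, (2 * n - 1 / 3) * math.pi):
        assert abs(lhs(x) - rhs(x)) < 1e-9
```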
|
2,013,650 | <p>I am trying to prove that
$$n^{n/2}<n!,\text{ for } n\ge2.$$
I can't really figure it out. </p>
| Chill2Macht | 327,486 | <p>$\newcommand{\Vec}{\mathsf{Vec}}$$\newcommand{\Hom}{\operatorname{Hom}}$$\newcommand{\id}{\operatorname{id}}$This is for my future reference so I never have to write out these proofs again.</p>
<blockquote>
<p><strong>Claim:</strong> $\Hom(-,-):\Vec^{op}\times \Vec \to \Vec$ is a functor.</p>
</blockquote>
<p>(a) Given $V,W \in \Vec$, we define the <em>object part</em> of $\Hom$ as follows: $$(V,W)\mapsto \Hom(V,W) $$ where $\Hom(V,W)$ is defined to be the space of all linear transformations $S: V \to W$.</p>
<p>(b) Given $(V,W) \overset{(f,g)}{\to} (V',W')$, i.e. $f: V' \to V$, $g: W \to W'$ linear transformations, we define the <em>morphism part</em> of $\Hom$ as follows: $$\Hom(f,g):S \mapsto g \circ S \circ f .$$ Note that $g \circ S \circ f \in \Hom(V',W')$, hence $\Hom(f,g): \Hom(V,W) \to \Hom(V',W')$ as required.</p>
<p>(c) We now show that $\Hom$ preserves <em>identity morphisms</em>. Let $(\id_V, \id_W)= \id_{(V,W)}: (V,W) \to (V,W)$ be the identity morphism for $(V,W)$. Then we want to show that $\Hom(\id_V,\id_W)$ is the identity morphism for $\Hom(V,W)$. So again let $S \in \Hom(V,W)$, then by definition: $$\Hom(\id_V, \id_W): S \mapsto \id_W \circ S \circ \id_V =S .$$ Thus $\Hom(\id_V,\id_W)=\id_{\Hom(V,W)}$, which is what we wanted to show.</p>
<p>(d) Given $(V,W) \overset{(f_1,g_1)}{\to} (V',W')$ and $(V',W') \overset{(f_2,g_2)}{\to} (V'',W'')$, i.e. linear transformations $f_2: V'' \to V', f_1: V' \to V, g_1: W \to W', g_2: W' \to W''$. First we compute $$\Hom((f_2,g_2)\circ(f_1, g_1) )=\Hom( f_1 \circ f_2, g_2 \circ g_1): S \mapsto (g_2 \circ g_1) \circ S \circ (f_1 \circ f_2). $$ Note that $(g_2 \circ g_1) \circ S \circ (f_1 \circ f_2): V'' \to W''$ is a linear transformation, i.e. $(g_2 \circ g_1) \circ S \circ (f_1 \circ f_2) \in \Hom(V'',W'')$ as expected and required. Secondly we compute: $$\Hom(f_2,g_2)\circ\Hom(f_1,g_1):S \mapsto g_2 \circ (g_1 \circ S \circ f_1) \circ f_2 .$$ Again, $g_2 \circ (g_1 \circ S \circ f_1) \circ f_2: V'' \to W''$ is a linear transformation, i.e. an element of $\Hom(V'',W'')$. And by associativity of function composition we clearly have $$(g_2 \circ g_1) \circ S \circ (f_1 \circ f_2) = g_2 \circ (g_1 \circ S \circ f_1) \circ f_2, $$ from which it follows that $$\Hom((f_2,g_2)\circ(f_1,g_1))=\Hom(f_2,g_2)\circ\Hom(f_1,g_1), $$ i.e. $\Hom$ does commute with <em>composition of morphisms</em> as required. Thus $\Hom$ is a functor.</p>
<blockquote>
<p><strong>Claim:</strong> $\Hom$ is a bifunctor.</p>
</blockquote>
<p>This follows from the fact that $\Hom$ is a functor whose domain is a product category, $\Vec^{op}\times \Vec$.</p>
<blockquote>
<p><strong>Claim:</strong> $\Phi:\Vec^{op}\times\Vec \to \Vec$ is a functor.</p>
</blockquote>
<p>(a) Given $V,W \in \Vec$, we define the <em>object part</em> of $\Phi$ to be: $$\Phi:(V,W)\mapsto\Hom(V\otimes W^*, \mathbb{R}).$$</p>
<p>(b) Given $(V,W) \overset{(f,g)}{\to} (V',W')$, i.e. $f: V' \to V, g: W \to W'$ linear transformations, we define the <em>morphism part</em> of $\Phi$ to be: $$\Phi(f,g): \sigma \mapsto \sigma \circ (f \otimes g^*), $$ with $\sigma \in \Hom(V \otimes W^*, \mathbb{R})$, $g^*: (W')^*=\Hom(W',\mathbb{R}) \to \Hom(W, \mathbb{R})=W^*$, $g^*: \tau \mapsto \tau \circ g\ \forall \tau\in(W')^*$, $(f \otimes g^*): V' \otimes (W')^* \to V \otimes W^*$, since $f: V' \to V$, $g: W \to W' \implies g^* : (W')^* \to W^*$ (adjoint/pre-composition is contravariant), thus $$V' \otimes (W')^* \overset{(f \otimes g^*)}{\to } V \otimes W^* \overset{\sigma}{\to} \mathbb{R}, $$ thus $\sigma \circ (f \otimes g^*): V' \otimes (W')^* \to \mathbb{R}$ as required, and thus $\Phi(f,g): \Phi(V,W)= \Hom(V \otimes W^*, \mathbb{R}) \to \Hom(V' \otimes (W')^*, \mathbb{R}) = \Phi(V',W')$ as claimed.</p>
<p>(c) We now show that $\Phi$ preserves <em>identity morphisms</em>. Let $(\id_V, \id_W)=\id_{(V,W)}$ be the identity morphism for $(V,W) \in \Vec^{op} \times \Vec$. Then we want to show that $\Phi(\id_V, \id_W)=\id_{\Phi(V,W)}=\id_{\Hom(V \otimes W^*, \mathbb{R})}$. Given $\sigma \in \Hom(V \otimes W^*, \mathbb{R})$, we have that $$\Phi(\id_V, \id_W): \sigma \mapsto \sigma \circ (\id_V \otimes \id_W^*), $$ now since $(\id_V,\id_W):(V,W)\to(V,W)$, we have that $(\id_V \otimes \id_W^*):V \otimes W^* \to V \otimes W^*$, so that $\sigma \circ (\id_V \otimes \id_W^*) \in \Hom(V \otimes W^*, \mathbb{R})$. <a href="https://en.wikipedia.org/wiki/Tensor_product#Tensor_product_of_linear_maps" rel="nofollow noreferrer">We have</a> by definition of tensor product of linear maps that for all $u \otimes \tilde{w} \in V \otimes W^*$: $$(\id_V \otimes \id_W^*)(u\otimes\tilde{w})=(\id_V(u))\otimes(\id_W^*(\tilde{w}))=u \otimes (\tilde{w} \circ \id_W) = u \otimes \tilde{w}, $$ in other words $(\id_V \otimes \id_W^*)=\id_{V \otimes W^*}$, so $\sigma \circ (\id_V \otimes \id_W^*) = \sigma \circ \id_{V \otimes W^*}= \sigma$, so in conclusion $\forall \sigma \in \Hom(V \otimes W^*, \mathbb{R}), \quad \Phi(\id_V,\id_W): \sigma \mapsto \sigma$, thus $\Phi(\id_V,\id_W)=\id_{\Hom(V \otimes W^*, \mathbb{R})} = \id_{\Phi(V,W)}$.</p>
<p>(d) We now show that $\Phi$ commutes with composition. Given $(V,W) \overset{(f_1,g_1)}{\to}(V',W') \overset{(f_2,g_2)}{\to}(V'',W'')$, i.e. linear transformations $f_2: V'' \to V'$, $f_1: V' \to V$, $g_1: W \to W'$, $g_2: W' \to W''$. First we compute $$ \Phi( (f_2,g_2)\circ(f_1,g_1) ) = \Phi( f_1 \circ f_2, g_2 \circ g_1 ): \sigma \mapsto \sigma \circ ( (f_1 \circ f_2) \otimes (g_2 \circ g_1)^* ) = \sigma \circ ( (f_1 \circ f_2) \otimes (g_1^* \circ g_2^*) ) $$ Then we compute $$ \Phi(f_2,g_2)\circ\Phi(f_1,g_1):\sigma \mapsto (\sigma \circ (f_1 \otimes g_1^*))\circ (f_2 \otimes g_2^*) = \sigma \circ (f_1 \otimes g_1^*) \circ (f_2 \otimes g_2^*). $$ Now $f_1 \circ f_2: V'' \to V$, $(g_2 \circ g_1): W \to W'' \implies (g_2 \circ g_1)^*: \Hom(W'', \mathbb{R}) \to \Hom(W, \mathbb{R})$, $(g_2 \circ g_1)^*: \tau \mapsto \tau \circ (g_2 \circ g_1) = (\tau \circ g_2)\circ g_1 = (g_1^* \circ g_2^*) (\tau)$. Furthermore $$ (f_2 \otimes g_2^*): V'' \otimes (W'')^* \to V' \otimes (W')* $$ and $$ (f_1 \otimes g_1^*): V' \otimes (W')^* \to V \otimes W^* $$ thus $$ (f_1 \otimes g_1^*) \circ (f_2 \otimes g_2^*): V'' \otimes (W'') \overset{(f_2 \otimes g_2^*)}{\to} V' \otimes (W')^* \overset{(f_1 \otimes g_1^*)}{\to} V \otimes W^* $$ in other words $$ (f_1 \otimes g_1^*) \circ (f_2 \otimes g_2^* ): V'' \otimes (W'')^* \to V \otimes W^*. $$ Similarly, since $(f_1 \circ f_2): V'' \to V$ and $g_1^* \circ g_2^*: (W'')^* \to W^*$, we have by the definition of tensor product of maps that $$ (f_1 \circ f_2) \otimes (g_1^* \circ g_2^* ): V'' \otimes (W'')^* \to V \otimes W^*, $$ hence it is at least plausible that the two maps are equal.</p>
<p>Let $(u, \tilde{w}) \in V'' \otimes (W'')^*$. Then $$ (f_1 \otimes g_1^*) \circ (f_2 \otimes g_2^*) (u, \tilde{w}) = (f_1 \otimes g_1^*)( f_2(u) \otimes g_2^*(\tilde{w}) ). $$ Now since $f_2: V'' \to V'$, we have that $f_2(u) \in V'$ and since $g_2^*: (W'')^* \to (W')^*$, we have that $g_2^*(\tilde{w}) \in (W')^*$, thus $f_2(u) \otimes g_2^*(\tilde{w}) \in V' \otimes (W')^*$, which is the domain of $(f_1 \otimes g_1^*)$, so we can apply it, using once again the definition of tensor product of maps: $$ (f_1 \otimes g_1^*)( f_2(u) \otimes g_2^*(\tilde{w}) ) = f_1(f_2(u)) \otimes g_1^*(g_2^*(\tilde{w})) = (f_1 \circ f_2)(u) \otimes (g_1^* \circ g_2^*)(\tilde{w}) = ( (f_1 \circ f_2) \otimes (g_1^* \circ g_2^*) )(u, \tilde{w}). $$ In other words, we have shown that, for all $(u,\tilde{w}) \in V'' \otimes (W'')^*$, we have $$ [(f_1 \otimes g_1^*) \circ (f_2 \otimes g_2^*)](u, \tilde{w}) = [(f_1 \circ f_2) \otimes (g_1^* \circ g_2^*)](u, \tilde{w}), $$ i.e. that $(f_1 \otimes g_1^*) \circ (f_2 \otimes g_2^*) = (f_1 \circ f_2) \otimes (g_1^* \circ g_2^*)$ (i.e. that $\circ$ and $\otimes$ play nicely with each other). Anyway this has as an immediate consequence that $$ \Phi((f_2,g_2)\circ(f_1,g_1))(\sigma) = \Phi(f_1 \circ f_2, g_2 \circ g_1)(\sigma) = \\ \sigma \circ [(f_1 \circ f_2) \otimes (g_1^* \circ g_2^*) ] = \sigma \circ [ (f_1 \otimes g_1^*) \circ (f_2 \otimes g_2^*) ] = \Phi(f_2,g_2) \circ \Phi(f_1, g_1) $$ for all $\sigma \in \Hom(V \otimes W^*, \mathbb{R})$, i.e. that $\Phi( (f_2,g_2)\circ(f_1,g_1) ) = \Phi(f_2,g_2)\circ \Phi(f_1,g_1)$. Thus $\Phi$ commutes with composition, i.e. is functorial, and thus is a functor, as claimed.</p>
<blockquote>
<p><strong>Claim:</strong> $\Phi$ is a bifunctor.</p>
</blockquote>
<p>This follows from the fact that $\Phi$ is a functor whose domain is a product category, $\Vec^{op}\times \Vec$.</p>
<blockquote>
<p><strong>Claim:</strong> There exists a natural transformation $\eta$ between $\Hom$ and $\Phi$.</p>
</blockquote>
<p>A natural transformation $\eta$ is a family of morphisms in the target category ($\Vec$) indexed by the elements of the source category ($\Vec^{op} \times \Vec$), $\eta_{(V,W)}: \Hom(V,W) \to \Phi(V,W)$, such that, given $(V,W) \overset{(f,g)}{\to} (V',W')$, i.e. linear transformations $f: V' \to V$ and $g: W \to W'$, the following relationship holds: $$ \eta_{(V',W')} \circ \Hom(f,g) = \Phi(f,g) \circ \eta_{(V,W)}. $$ Note that: $$ \begin{array}{rccl} \Hom(f,g): & \Hom(V,W) & \to & \Hom(V',W'), \\ \Phi(f,g): & \Phi(V,W) & \to & \Phi(V',W'), \\ \eta_{(V,W)}: & \Hom(V,W) & \to & \Phi(V,W), \\ \eta_{(V',W')}: & \Hom(V',W') & \to & \Phi(V',W'), \end{array} $$ in other words all of the above compositions are defined, and both the morphism $\eta_{(V',W')} \circ \Hom(f,g)$ and the morphism $\Phi(f,g)\circ \eta_{(V,W)}$ map $\Hom(V,W) \to \Phi(V',W')$.</p>
<p>We now propose the following formula for the natural transformation $\eta$, which, given an object $(V,W)$ in the source category $(\Vec^{op} \times \Vec)$ returns a morphism between the two corresponding objects in the target category $(\Vec)$ defined by the functors $\Hom$ and $\Phi$, namely a morphism $\Hom(V,W) \to \Phi(V,W)$. In other words: $$ \eta: (V,W) \mapsto [ \Hom(V,W) \to \Phi(V,W) ] . $$ Anyway, we have that $\eta_{(V,W)}$ maps between the spaces $\Hom(V,W)$ and $\Phi(V,W) = \Hom(V \otimes W^*, \mathbb{R})$ according to the formula: $$ \sigma \mapsto [ \eta_{(V,W)}(\sigma): \sum_{i,j} v_i \otimes \psi_j \mapsto \sum_{i,j} \psi_j(\sigma(v_i)) ] , $$ i.e. $\sigma: V \to W$ and $\eta_{(V,W)}(\sigma): V \otimes W^* \to \mathbb{R}$, both linear.</p>
<p>So anyway, showing that $\eta$ is a natural transformation again comes down to verifying the equality (for arbitrary $(V,W) \overset{(f,g)}{\to} (V',W')$): $$ \eta_{(V',W')} \circ \Hom(f,g) = \Phi(f,g) \circ \eta_{(V,W)}. $$ Remember that both sides map $\Hom(V,W) \to \Phi(V',W')$, so if we show that both sides evaluate to the same value when applied to an arbitrary $\sigma \in \Hom(V,W)$, we will have shown the equality between functions for arbitrary $(V,W) \overset{(f,g)}{\to} (V',W')$ and thus that $\eta$ is a natural transformation, i.e. all we have to do is show, for arbitrary $\sigma \in \Hom(V,W)$: $$ ( \eta_{(V',W')} \circ \Hom(f,g) ) (\sigma) = (\Phi(f,g) \circ \eta_{(V,W)})(\sigma). $$ Let's calculate the left-hand side first: $$ (\eta_{(V',W')} \circ \Hom(f,g))(\sigma) = \eta_{(V',W')}( \Hom(f,g)(\sigma) ) . $$ Now by definition of $\Hom(f,g): \Hom(V,W) \to \Hom(V',W')$, we have that: $$ \Hom(f,g)(\sigma) = g \circ \sigma \circ f. $$ Since $f: V' \to V, \sigma: V \to W,$ and $g: W \to W'$, we have that $g \circ \sigma \circ f: V' \to V \to W \to W'$ i.e. $g \circ \sigma \circ f: V' \to W'$, such that: $$\Hom(f,g)(\sigma)=g \circ \sigma \circ f \in \Hom(V',W'), $$ as expected and required. Also, since $\eta_{(V',W')}: \Hom(V',W') \to \Phi(V',W')$, we have as a result that $\eta_{(V',W')}(g \circ \sigma \circ f)$ is well-defined. Anyway, we have by definition of $\eta_{(V',W')}$ that: $$ \eta_{(V',W')}(g \circ \sigma \circ f) = \left[\sum_{i,j} v'_i \otimes \psi'_j \mapsto \sum_{i,j} \psi'_j((g \circ \sigma \circ f) (v'_i) ) = \sum_{i,j} \psi'_j(g(\sigma (f(v'_i)))) \right]. $$ Now let's calculate the right-hand side: $$ (\Phi(f,g) \circ \eta_{(V,W)})(\sigma) = \Phi(f,g) ( \eta_{(V,W)}(\sigma).$$ Now we have again by definition of $\eta$ that:$$ \eta_{(V,W)}(\sigma) = \left[ \sum_{i,j} v_i \otimes \psi_j \mapsto \sum_{i,j} \psi_j(\sigma(v_i)) \right] . 
$$ Allow us to recall the definition of the morphism part of $\Phi$: $$ \Phi(f,g): \tau \mapsto \tau \circ (f \otimes g^*). $$ Thus, we can now evaluate the rest of the expression on the right-hand side: $$ \Phi(f,g) \left( \left[ \sum_{i,j} v_i \otimes \psi_j \mapsto \sum_{i,j} \psi_j(\sigma(v_i)) \right] \right) = \left[ \sum_{i,j} v_i \otimes \psi_j \mapsto \sum_{i,j} \psi_j(\sigma(v_i)) \right] \circ (f \otimes g^*) $$ Allow us to pause for a moment to recall the definition of $(f \otimes g^*): V' \otimes (W')^* \to V \otimes W^*$: $$ (f \otimes g^*): v' \otimes \psi' \mapsto (f(v')) \otimes (\psi' \circ g). $$ Hence, the above equals: $$ = \left[ \sum_{i,j} v'_i \otimes \psi'_j \mapsto \sum_{i,j} (f(v'_i)) \otimes (\psi'_j \circ g) \mapsto \sum_{i,j} (\psi'_j \circ g)(\sigma (f(v_i'))) \right] $$ $$ = \left[ \sum_{i,j} v_i' \otimes \psi_j' \mapsto \sum_{i,j} \psi'_j(g(\sigma(f(v'_i)))) \right]. $$
Hence, we have shown that: $$ \eta_{(V',W')} (\Hom(f,g)(\sigma)) = \left[ \sum_{i,j} v_i' \otimes \psi_j' \mapsto \sum_{i,j} \psi_j'(g(\sigma(f(v_i')))) \right] = \Phi(f,g)(\eta_{(V,W)}(\sigma) ). $$ Since $(V,W) \overset{(f,g)}{\to}(V',W')$ and $\sigma \in \Hom(V,W)$ were arbitrary, this completes the proof that $\eta$ is in fact a natural transformation from $\Hom(-,-)$ to $\Phi(-,-)$.</p>
<blockquote>
<p><strong>Claim:</strong> There is a natural transformation from $\Phi(-,-)$ to $\Hom(-,-)$ which is inverse to $\eta$, denote it tentatively $\eta^{-1}$.</p>
</blockquote>
<p>In this claim, there are really two claims to unpack: (1) first, that there exists a natural transformation $\eta^{-1}$ taking $\Phi(-,-)$ to $\Hom(-,-)$, and (2) that $\eta(\eta^{-1}(\Phi(-,-)))=\Phi(-,-)$ and that $\eta^{-1}(\eta(\Hom(-,-)))=\Hom(-,-)$. (So perhaps really more like three claims packed into one.)</p>
<p>(1) We will start by trying to find a formula for $\eta^{-1}$ and then evaluating the first (sub-)claim.</p>
<p>We would need $\eta^{-1}$ to satisfy: $$ \eta^{-1}_{(V',W')}\circ \Phi(f,g) = \Hom(f,g) \circ \eta^{-1}_{(V,W)} $$ for arbitrary $(V,W),(V',W') \in \Vec^{op}\times \Vec, f: V' \to V, g: W \to W'$ linear.</p>
<p>Note that: $$ \begin{array}{rccl} \Phi(f,g): & \Phi(V,W) & \to & \Phi(V',W'), \\ \eta^{-1}_{(V',W')}: & \Phi(V',W') & \to & \Hom(V',W'), \\ \eta^{-1}_{(V,W)}: & \Phi(V,W) & \to & \Hom(V,W), \\ \Hom(f,g): & \Hom(V,W) & \to & \Hom(V',W'),
\end{array} $$ hence both sides of the above equation map $\Phi(V,W) \to \Hom(V',W')$.</p>
<p>Let us start with $\tau \in \Phi(V,W) = \Hom(V \otimes W^*, \mathbb{R})$. Then $\tau$ is a linear map taking tensors of the form $\sum_{i,j} v_i \otimes \psi_j$, with $v_i \in V$ and $\psi_j \in W^*$, to real numbers. Applying $\eta^{-1}_{(V,W)}$ to $\tau$, we somehow want to get out of it a map in $\Hom(V,W)$, i.e. a linear transformation from $V$ to $W$. </p>
<p>Since $W$ is finite-dimensional, we can fix for it a basis $(w_1, \dots, w_n)$. Corresponding to that let $(\psi_1, \dots, \psi_n)$ be the dual basis of $W^*$. Then, given $\tau \in \Hom(V \otimes W^*, \mathbb{R})=\Phi(V,W)$, we will define $\eta^{-1}_{(V,W)}$ by: $$ \eta^{-1}_{(V,W)}: \tau \mapsto \left[ v \mapsto \sum_{i=1}^{n} \tau(v \otimes \psi_i) \cdot w_i \right] . $$ Now we have to begin the process of checking that this is a natural transformation, i.e. that for arbitrary $\tau \in \Hom(V \otimes W^*, \mathbb{R})= \Phi(V,W)$, that: $$ \eta^{-1}_{(V',W')}(\Phi(f,g)(\tau)) = \Hom(f,g) (\eta^{-1}_{(V,W)}(\tau) ). $$</p>
<p>As before, let's start with the left-hand side first. We have that: $$ \Phi(f,g)(\tau) = \tau \circ (f \otimes g^*). $$ From this it follows that: $$ \eta^{-1}_{(V',W')}(\Phi(f,g)(\tau) )= \eta^{-1}_{(V',W')}(\tau \circ (f \otimes g^*)) = \left[ v' \mapsto \sum_{i=1}^{n} (\tau \circ (f \otimes g^*) )(v' \otimes \psi_i') \cdot w_i' \right] $$ $$ = \left[ v' \mapsto \sum_{i=1}^{n} \tau( (f \otimes g^*)(v' \otimes \psi_i') ) \cdot w_i' = \sum_{i=1}^{n} \tau( f(v') \otimes (\psi_i' \circ g) ) \cdot w_i' \right]. $$ Note that $\psi_i' \circ g: W \to W' \to \mathbb{R}$, i.e. $\psi_i' \circ g \in W^*$, and $f: V' \to V$, so $f(v') \in V$, such that $f(v') \otimes (\psi_i' \circ g ) \in V \otimes W^*$, and thus $\tau ( f(v') \otimes (\psi_i' \circ g) )$ is well-defined. Hence we have: $$ \eta^{-1}_{(V',W')}(\Phi(f,g) (\tau) ) = \left[ v' \mapsto \sum_{i=1}^{n} \tau ( f(v') \otimes (\psi_i'\circ g) ) \cdot w_i' \right], $$ which is clearly a map in $\Hom(V',W')$, as expected and required.</p>
<p>We will now calculate the right-hand side, as follows; we start with: $$\eta^{-1}_{(V,W)}(\tau) = \left[ v \mapsto \sum_{i=1}^{n} \tau(v \otimes \psi_i)\cdot w_i \right], $$ which is simply the definition of $\eta^{-1}_{(V,W)}$ as stated above. Clearly, this is in $\Hom(V,W)$ as expected and required. Now we apply the morphism $\Hom(f,g)$ to get an element in $\Hom(V',W')$, hopefully the same as before: $$\Hom(f,g)(\eta^{-1}_{(V,W)}(\tau) ) = \Hom(f,g)\left( \left[ v \mapsto \sum_{i=1}^{n} \tau(v \otimes \psi_i) \cdot w_i \right] \right) = g \circ \left[ v \mapsto \sum_{i=1}^{n} \tau(v \otimes \psi_i) \cdot w_i \right] \circ f. $$ Remembering that $f:V' \to V, f: v' \mapsto v$, we can rewrite the above as: $$ = g \circ \left[ v' \mapsto f(v') \mapsto \sum_{i=1}^{n} \tau( f(v') \otimes \psi_i ) \cdot w_i \right] = g \circ \left[ v' \mapsto \sum_{i=1}^{n} \tau( f(v') \otimes \psi_i ) \cdot w_i \right].$$ Remember now that $g: W \to W'$ is a linear transformation, hence we can distribute it over the sum to get: $$ = \left[ v' \mapsto g \left( \sum_{i=1}^{n} \tau( f(v') \otimes \psi_i ) \cdot w_i \right) \right] = \left[ v' \mapsto \sum_{i=1}^{n} \tau(f(v') \otimes \psi_i )\cdot g(w_i) \right]. $$ Remember, that since $f(v') \in V$, and $\tau: V \otimes W^* \to \mathbb{R}$ that $\tau (f(v') \otimes \psi_i) \in \mathbb{R}$, which is why $g$ distributes over it. Now let's remember that $(\psi_1, \dots, \psi_n)$ is the dual basis to $(w_1, \dots, w_n)$, and $(\psi_1',\dots, \psi_n')$ is the dual basis to $(w_1', \dots, w_n')$. Thus, if we want to claim that the expression we have just derived is equal to the expression for $\eta^{-1}_{(V',W')}(\Phi(f,g) (\tau) )$, then we must show that: $$ w_i' = g(w_i) \implies \psi_i = \psi_i' \circ g,$$ and $$\psi_i' \circ g = \psi_i \implies w_i' = g(w_i).$$ </p>
<p>Let's start with the first direction. Assume that $w_i'=g(w_i)$. Then: $$1=\psi_i'(w_i')=\psi_i'(g(w_i)) = (\psi_i' \circ g)(w_i).$$ Since $\psi_i$ is the unique element of $W^*$ such that $\psi_i(w_i)=1$, and we have shown above that $(\psi_i' \circ g)(w_i)=1$, it follows that $\psi_i = \psi_i' \circ g$, as we wanted to show.</p>
<p>Now let us show the second direction. Assume that $\psi_i' \circ g = \psi_i$. Then: $$1 = \psi_i(w_i) = (\psi_i' \circ g)(w_i) = \psi_i'(g(w_i)).$$ Since $w_i'$ is the unique element of $W'$ such that $\psi_i'(w_i')=1$, and we have shown above that $\psi_i' (g(w_i))=1$, it follows that $g(w_i)=w_i'$, which is what we wanted to show.</p>
<p>Therefore, we can in full faith and confidence interchange $g(w_i)$ with $w_i'$ whenever we interchange $\psi_i$ with $\psi_i' \circ g$. In particular, we have shown that: $$\left[ v' \mapsto \sum_{i=1}^{n} \tau(f(v') \otimes \psi_i ) \cdot g(w_i) \right] = \left[ v' \mapsto \sum_{i=1}^{n} \tau( f(v') \otimes (\psi_i' \circ g) ) \cdot w_i' \right]. $$</p>
<p>In conclusion, we have shown that: $$\eta^{-1}_{(V',W')}(\Phi(f,g)(\tau) ) = \left[ v' \mapsto \sum_{i=1}^{n} \tau( f(v') \otimes (\psi'_i \otimes g) )\cdot w_i' \right] = \Hom(f,g)(\eta^{-1}_{(V,W)}(\tau) ) .$$ Thus $\eta^{-1}$ really is a natural transformation, as had previously been claimed.</p>
<p>(2) Now, given that there is a natural transformation $\eta^{-1}$ taking $\Phi(-,-)$ to $\Hom(-,-)$, we want to see whether this natural transformation is inverse to the natural transformation $\eta$ we had defined earlier, which takes $\Hom(-,-)$ to $\Phi(-,-)$.</p>
<p>In particular, if we want to say that $\eta^{-1}$ is inverse to $\eta$, then we want to show that $\eta^{-1} \circ \eta$ is the identity natural transformation $\Hom(-,-) \to \Hom(-,-)$, and that $\eta \circ \eta^{-1}$ is the identity natural transformation $\Phi(-,-) \to \Phi(-,-)$. Thus we have to show that: $$\Hom(f,g) \circ \eta_{(V,W)}^{-1} \circ \eta_{(V,W)} = \eta^{-1}_{(V',W')} \circ \eta_{(V',W')} \circ \Hom(f,g), $$ and $$ \eta_{(V',W')} \circ \eta^{-1}_{(V',W')} \circ \Phi(f,g) = \Phi(f,g) \circ \eta_{(V,W)} \circ \eta^{-1}_{(V,W)}.$$ Note that this will follow if we can show that: $$\begin{array}{rcl} \eta_{(V,W)}^{-1} \circ \eta_{(V,W)} & = & \id_{\Hom(V,W)}, \\ \eta^{-1}_{(V',W')} \circ \eta_{(V',W')} & = & \id_{\Hom(V',W')}, \\ \eta_{(V,W)} \circ \eta^{-1}_{(V,W)} & = & \id_{\Phi(V,W)}, \\ \eta_{(V',W')} \circ \eta^{-1}_{(V',W')} & = & \id_{\Phi(V',W')}, \end{array}$$ since the identity morphisms satisfy the required commutativity relations by definition, i.e. by definition of identity morphism it holds that: $$\begin{array}{rcl} \Hom(f,g) \circ \id_{\Hom(V,W)} & = & \id_{\Hom(V',W')} \circ \Hom(f,g) \\ \id_{\Phi(V',W')} \circ \Phi(f,g) & = & \Phi(f,g) \circ \id_{\Phi(V,W)}. \end{array} $$ Note also that if we show for arbitrary $(\tilde{V}, \tilde{W}) \in \Vec^{op} \times \Vec$ that $$\begin{array}{rcl} \eta^{-1}_{(\tilde{V}, \tilde{W})} \circ \eta_{(\tilde{V}, \tilde{W}) } & = & \id_{\Hom(\tilde{V}, \tilde{W} )}, \\ \eta_{ (\tilde{V}, \tilde{W} ) } \circ \eta^{-1}_{ (\tilde{V}, \tilde{W} ) } & = & \id_{ \Phi( \tilde{V}, \tilde{W} ) }, \end{array} $$ then we will have proven all four of the above claims (since again $(\tilde{V}, \tilde{W})$ is chosen arbitrarily, and thus can stand for either pair $(V,W)$ or $(V',W')$ if we like). 
Showing the first equation: $$\eta^{-1}_{(\tilde{V}, \tilde{W})} \circ \eta_{(\tilde{V}, \tilde{W})} = \id_{\Hom(\tilde{V}, \tilde{W})}, $$ will prove the first claim, that $\eta^{-1} \circ \eta$ is the identity natural transformation $\Hom(-,-) \to \Hom(-,-)$. Showing the second equation: $$\eta_{(\tilde{V}, \tilde{W})} \circ \eta^{-1}_{(\tilde{V}, \tilde{W})} = \id_{\Phi(\tilde{V}, \tilde{W})}, $$ will prove the second claim, that $\eta \circ \eta^{-1}$ is the identity natural transformation $\Phi(-,-) \to \Phi(-,-)$.</p>
<p>Let $\sigma \in \Hom(\tilde{V}, \tilde{W})$ be arbitrary, we will now prove the first claim by showing that: $$\eta^{-1}_{(\tilde{V}, \tilde{W})}( \eta_{(\tilde{V}, \tilde{W})} (\sigma) ) = \sigma. $$ By definition of $\eta$ given above, we have that: $$ \eta_{(\tilde{V}, \tilde{W})}(\sigma) = \left[ \sum_{i,j} \tilde{v}_i \otimes \tilde{\psi}_j \mapsto \sum_{i,j} \tilde{\psi}_j(\sigma(\tilde{v}_i) ) \right]. $$ Then it follows that: $$\eta^{-1}_{(\tilde{V}, \tilde{W})} (\eta_{(\tilde{V}, \tilde{W})} (\sigma )) = \eta^{-1}_{(\tilde{V}, \tilde{W})} \left( \left[ \sum_{i,j} \tilde{v}_i \otimes \tilde{\psi}_j \mapsto \sum_{i,j} \tilde{\psi}_j ( \sigma (\tilde{v}_i) ) \right] \right), $$ using the definition of $\eta^{-1}$ given above: $$ = \left[ \tilde{v} \mapsto \sum_{k=1}^{n} \left[ \sum_{i,j} \tilde{v}_i \otimes \tilde{\psi}_j \mapsto \sum_{i,j} \tilde{\psi}_j ( \sigma (\tilde{v}_i) ) \right] (\tilde{v} \otimes \tilde{\psi}_k) \cdot \tilde{w}_k \right] = \left[ \tilde{v} \mapsto \sum_{k=1}^{n} (\tilde{\psi}_k(\sigma(\tilde{v})))\cdot \tilde{w}_k \right]. $$ So we are done as long as we can show that: $$\sigma(\tilde{v}) = \sum_{k=1}^{n} (\tilde{\psi}_k(\sigma(\tilde{v}))) \cdot \tilde{w}_k .$$ Remember that $\tilde{\psi}_{k}(\tilde{w}_{\ell})=\delta_{k,\ell}$ by definition of these quantities (where $\delta_{k,\ell}$ is the Kronecker delta). Since $(w_1, \dots, w_n)$ is a basis for $W$, we can write uniquely: $$ \sigma(\tilde{v}) = \sum_{\ell=1}^{n} a_{\ell} \cdot \tilde{w}_{\ell}, $$ for some yet to be determined quantities $a_{\ell} \in \mathbb{R}$. Then for each $1 \le k \le n$, using linearity of $\tilde{\psi}_k$: $$\tilde{\psi}_{k} (\sigma(\tilde{v})) = \tilde{\psi}_{k} \left( \sum_{\ell=1}^{n} a_{\ell} \cdot \tilde{w}_{\ell} \right) = \sum_{\ell=1}^{n} a_{\ell} \tilde{\psi}_{k}(\tilde{w}_{\ell}) = \sum_{\ell=1}^{n} a_{\ell} \delta_{k, \ell} = a_k. 
$$ Therefore, we have shown that: $$\sigma(\tilde{v}) = \sum_{\ell=1}^{n} a_{\ell} \cdot \tilde{w}_{\ell} = \sum_{k=1}^{n} a_k \cdot \tilde{w}_k = \sum_{k=1}^{n} (\tilde{\psi}_k(\sigma(\tilde{v})) )\cdot \tilde{w}_k, $$ as we had wanted to do. Then for arbitrary $\tilde{v} \in \tilde{V}$ and arbitrary $\sigma \in \Hom(\tilde{V}, \tilde{W})$: $$\eta^{-1}_{(\tilde{V},\tilde{W})} (\eta_{(\tilde{V}, \tilde{W})} (\sigma(\tilde{v})) ) = \sum_{k=1}^{n} (\tilde{\psi}_k(\sigma(\tilde{v})))\cdot \tilde{w}_k = \sigma(\tilde{v}) , $$ $$ \iff \eta^{-1}_{(\tilde{V}, \tilde{W})} ( \eta_{(\tilde{V}, \tilde{W})} (\sigma)) = \sigma \iff \eta^{-1}_{(\tilde{V}, \tilde{W})} \circ \eta_{(\tilde{V}, \tilde{W})} = \id_{\Hom(\tilde{V}, \tilde{W})}.$$ It follows that $\eta^{-1} \circ \eta$ is the identity natural transformation $\Hom(-,-) \to \Hom(-,-)$.</p>
<p>Now let $\tau \in \Phi(\tilde{V}, \tilde{W}) = \Hom(\tilde{V} \otimes (\tilde{W})^*, \mathbb{R})$ be arbitrary. We prove the second claim by showing: $$\eta_{(\tilde{V}, \tilde{W})} (\eta^{-1}_{(\tilde{V}, \tilde{W})}(\tau) ) = \tau. $$ One has: $$\eta^{-1}_{(\tilde{V}, \tilde{W})}(\tau) = \left[ \tilde{v} \mapsto \sum_{k=1}^{n} \tau(\tilde{v} \otimes \tilde{\psi}_k) \cdot \tilde{w}_k \right], $$ and thus further that: $$ \eta_{(\tilde{V}, \tilde{W})} (\eta^{-1}_{(\tilde{V}, \tilde{W})}(\tau) ) = \eta_{(\tilde{V}, \tilde{W})}\left( \left[ \tilde{v} \mapsto \sum_{k=1}^{n} \tau(\tilde{v} \otimes \tilde{\psi}_k) \cdot \tilde{w}_k \right] \right) = \left[ \sum_{i,j} \tilde{v}_i \otimes \tilde{\psi}_j \mapsto \sum_{i,j} \tilde{\psi}_j \left( \left[ \tilde{v} \mapsto \sum_{k=1}^{n} \tau(\tilde{v} \otimes \tilde{\psi}_k) \cdot
\tilde{w}_k \right] (\tilde{v}_i) \right) \right] $$ $$ = \left[ \sum_{i,j} \tilde{v}_i \otimes \tilde{\psi}_j \mapsto \sum_{i,j} \tilde{\psi}_j \left( \sum_{k=1}^{n} \tau(\tilde{v}_i \otimes \tilde{\psi}_k) \cdot \tilde{w}_k \right) \right] = \left[ \sum_{i,j} \tilde{v}_i \otimes \tilde{\psi}_j \mapsto \sum_{i,j} \sum_{k=1}^{n} \tau(\tilde{v}_i \otimes \tilde{\psi}_k ) \tilde{\psi}_j (\tilde{w}_k) \right] $$ $$ = \left[ \sum_{i,j} \tilde{v}_i \otimes \tilde{\psi}_j \mapsto \sum_{i,j} \sum_{k=1}^{n}\tau(\tilde{v}_i \otimes \tilde{\psi}_k) \delta_{jk} \right] = \left[ \sum_{i,j} \tilde{v}_i \otimes \tilde{\psi}_j \mapsto \sum_{i,j} \tau(\tilde{v}_i\otimes \tilde{\psi}_j ) \right] \in \Hom(\tilde{V} \otimes (\tilde{W})^*, \mathbb{R} )=\Phi(\tilde{V}, \tilde{W}). $$ Now by linearity of $\tau$ we actually have immediately that: $$\sum_{i,j} \tau(\tilde{v}_i \otimes \tilde{\psi}_j) = \tau\left( \sum_{i,j} \tilde{v}_i \otimes \tilde{\psi}_j \right). $$ To be even more explicit (the first equality is a definition, the second holds by linearity): $$\tau = \left[ \sum_{i,j} \tilde{v}_i \otimes \tilde{\psi}_j \mapsto \tau \left( \sum_{i,j} \tilde{v}_i \otimes \tilde{\psi}_j \right) \right] = \left[ \sum_{i,j} \tilde{v}_i \otimes \tilde{\psi}_j \mapsto \sum_{i,j} \tau(\tilde{v}_i \otimes\tilde{\psi}_j) \right]. $$ In conclusion, we have shown, for arbitrary $\tau \in \Phi(\tilde{V}, \tilde{W})$, that: $$\eta_{(\tilde{V},\tilde{W})}( \eta^{-1}_{(\tilde{V}, \tilde{W})} (\tau) ) = \tau \iff \eta_{ (\tilde{V}, \tilde{W} )}\circ \eta^{-1}_{(\tilde{V}, \tilde{W})} = \id_{\Phi(\tilde{V}, \tilde{W})}.$$ It follows that $\eta \circ \eta^{-1}$ is the identity natural transformation $\Phi(-,-) \to \Phi(-,-)$.</p>
<p>Thus not only is (1) $\eta^{-1}$ a natural transformation, but also (2) it is indeed inverse to the natural transformation $\eta$. Thus we have established the existence of a <strong>natural isomorphism</strong> between the functors $\Hom(-,-)$ and $\Phi(-,-)$.</p>
<p>In particular, we have shown that, for all $(V,W) \in \Vec^{op} \times \Vec$, we have: $$\Hom(V,W) \cong \Hom(V\otimes W^*, \mathbb{R} ), $$ in the category of finite-dimensional vector spaces, i.e. that $\Hom(V,W)$ and $\Hom(V\otimes W^*, \mathbb{R})$ are isomorphic as vector spaces for any choice of two finite-dimensional vector spaces $(V,W)$.</p>
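<p>The round trip $\eta^{-1}\circ\eta=\id$ can also be checked on a concrete example, identifying $V=\mathbb{R}^m$, $W=\mathbb{R}^n$, linear maps with matrices, and $\psi_i$ with the coordinate functionals (the code and names below are an illustrative sketch, not part of the proof):</p>

```python
def apply_map(sigma, v):
    # sigma: an n x m matrix (list of rows) for a linear map R^m -> R^n.
    return [sum(row[j] * v[j] for j in range(len(v))) for row in sigma]

def eta(sigma):
    # eta(sigma) is the functional tau with tau(v ⊗ psi_i) = psi_i(sigma(v)),
    # recorded here as a function of (v, i) against the dual basis.
    return lambda v, i: apply_map(sigma, v)[i]

def eta_inverse(tau, m, n):
    # Rebuild the matrix of sigma via sigma(v) = sum_i tau(v ⊗ psi_i) * w_i.
    basis = [[1 if k == j else 0 for k in range(m)] for j in range(m)]
    return [[tau(basis[j], i) for j in range(m)] for i in range(n)]

sigma = [[1, 2, 3], [4, 5, 6]]              # a linear map R^3 -> R^2
assert eta_inverse(eta(sigma), 3, 2) == sigma
```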
|
471,710 | <p>Why do small angle approximations only hold in radians? All the books I have say this is so but don't explain why.</p>
| Meow | 39,568 | <p>It's because this relationship
$$\lim_{x\rightarrow0}\frac{\sin(x)}{x}=1$$ (i.e. $\sin(x) \approx x$) only holds if $x$ is in radians, as, using L'Hôpital's rule,
$$\lim_{x\rightarrow0}\frac{\sin(x)}{x}=\lim_{x\rightarrow0}\frac{\frac{d}{dx}\sin(x)}{\frac{d}{dx}x}=\lim_{x\rightarrow0}\frac{d}{dx}\sin(x)=\begin{cases}
1 & \text{ if } x\text{ is in radians}\\
\frac{\pi}{180} & \text{ if } x \text{ is in degrees}
\end{cases}.$$
That is, $\frac{d}{dx}(\sin(x))=\cos(x)$ only if $x$ is in radians. The reason can be divined from this proof without words by <a href="https://mathoverflow.net/questions/8846/proofs-without-words">Stephen Gubkin</a>:</p>
<p><img src="https://i.stack.imgur.com/KWNQB.jpg" alt="enter image description here"></p>
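<p>This is easy to check numerically as well. A quick sketch in plain Python (the function names are mine) comparing the two interpretations of a small angle:</p>

```python
import math

def sin_over_x_radians(x):
    """sin(x)/x with x interpreted as radians."""
    return math.sin(x) / x

def sin_over_x_degrees(x_deg):
    """sin(x)/x with x interpreted as degrees."""
    return math.sin(math.radians(x_deg)) / x_deg

# As x -> 0, the radian ratio tends to 1, the degree ratio to pi/180:
ratio_rad = sin_over_x_radians(1e-4)
ratio_deg = sin_over_x_degrees(1e-4)
```

<p>So the "small angle approximation" $\sin(x)\approx x$ in degrees would instead read $\sin(x)\approx \frac{\pi}{180}x$.</p>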
|
2,698,555 | <p><strong>Question</strong></p>
<p>Find all the rational values of $x$ at which $y=\sqrt{x^2+x+3}$ is rational.</p>
<p><strong>My attempt</strong></p>
<p>Since we only have to find the rational values of $x$ and $y$, we can assume that
$$ x \in Q$$
$$ y \in Q$$
$$ y-x \in Q $$
Let$$ d = y-x$$
$$d=\sqrt{x^2+x+3}-x$$
$$d+x=\sqrt{x^2+x+3}$$
$$(d+x)^2=(\sqrt{x^2+x+3})^2$$
$$d^2 + x^2 + 2dx =x^2+x+3$$
$$d^2 +2dx = x +3$$
$$x = \frac{3-d^2}{2d-1}$$</p>
<p>$$d \neq \frac{1}{2}$$</p>
<p>So $x$ will be rational as long as $d \neq \frac{1}{2}$.</p>
<p>Now
$$ y = \sqrt{x^2+x+3}$$
$$ y = \sqrt{(\frac{3-d^2}{2d-1})^2 + \frac{3-d^2}{2d-1} + 3}$$
$$ y = \sqrt{\frac{(3-d^2)^2}{(2d-1)^2} + \frac{(3-d^2)(2d-1)}{(2d-1)^2} + 3\frac{(2d-1)^2}{(2d-1)^2}}$$
$$ y = \sqrt{\frac{(3-d^2)^2 + (3-d^2)(2d-1) + 3(2d-1)^2}{(2d-1)^2}} $$
$$ y = \frac{\sqrt{(3-d^2)^2 + (3-d^2)(2d-1) + 3(2d-1)^2}}{(2d-1)}$$
$$ y = \frac{\sqrt{d^4-2d^3+7d^2-6d+9}}{(2d-1)}$$</p>
<p>I know that again $d \neq \frac{1}{2}$ but I don't know what to do with the numerator. Help</p>
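<p>One observation (mine, not from the original post): since $d = y - x$, we have $y = x + d$, so $y$ is automatically rational whenever $x$ and $d$ are — there is no need to simplify the radical at all. A quick sketch with exact rational arithmetic, using the formula for $x$ derived above:</p>

```python
from fractions import Fraction

def point_from_d(d):
    """Given rational d = y - x (with d != 1/2), return the point (x, y)."""
    x = (3 - d * d) / (2 * d - 1)
    return x, x + d        # y = x + d is rational whenever x and d are

# y^2 = x^2 + x + 3 holds exactly for every admissible rational d:
for d in (Fraction(1), Fraction(2), Fraction(-3, 4), Fraction(5, 7)):
    x, y = point_from_d(d)
    assert y * y == x * x + x + 3
```

<p>For instance $d=1$ gives the point $(2,3)$, since $3=\sqrt{4+2+3}$.</p>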
| King Tut | 520,782 | <p>There are two red parts and two blue parts. Let one blue part be called $B$ and red be $R$. Radius of small circle be $r$. Now write area of big quarter circle with radius $2r$ in terms of these variables using <em>inclusion exclusion principle</em>:</p>
<p>$$2 \cdot \frac{\pi r^2}{2} -2B+2R = \frac{\pi (2r)^2}{4}\\ R=B$$</p>
<p><strong>Note:</strong> We actually <strong>do not need</strong> that area of a disk is $\pi r^2$. We can assume it to be $A$ and use similarity of figures.</p>
<p>$$2 \cdot \frac{A}{2} -2B+2R = \frac{2^2A}{4} \\ R=B$$</p>
|
2,698,555 | <p><strong>Question</strong></p>
<p>Find all the rational values of $x$ at which $y=\sqrt{x^2+x+3}$ is rational.</p>
<p><strong>My attempt</strong></p>
<p>Since we only have to find the rational values of $x$ and $y$, we can assume that
$$ x \in Q$$
$$ y \in Q$$
$$ y-x \in Q $$
Let$$ d = y-x$$
$$d=\sqrt{x^2+x+3}-x$$
$$d+x=\sqrt{x^2+x+3}$$
$$(d+x)^2=(\sqrt{x^2+x+3})^2$$
$$d^2 + x^2 + 2dx =x^2+x+3$$
$$d^2 +2dx = x +3$$
$$x = \frac{3-d^2}{2d-1}$$</p>
<p>$$d \neq \frac{1}{2}$$</p>
<p>So $x$ will be rational as long as $d \neq \frac{1}{2}$.</p>
<p>Now
$$ y = \sqrt{x^2+x+3}$$
$$ y = \sqrt{(\frac{3-d^2}{2d-1})^2 + \frac{3-d^2}{2d-1} + 3}$$
$$ y = \sqrt{\frac{(3-d^2)^2}{(2d-1)^2} + \frac{(3-d^2)(2d-1)}{(2d-1)^2} + 3\frac{(2d-1)^2}{(2d-1)^2}}$$
$$ y = \sqrt{\frac{(3-d^2)^2 + (3-d^2)(2d-1) + 3(2d-1)^2}{(2d-1)^2}} $$
$$ y = \frac{\sqrt{(3-d^2)^2 + (3-d^2)(2d-1) + 3(2d-1)^2}}{(2d-1)}$$
$$ y = \frac{\sqrt{d^4-2d^3+7d^2-6d+9}}{(2d-1)}$$</p>
<p>I know that again $d \neq \frac{1}{2}$ but I don't know what to do with the numerator. Help</p>
| CiaPan | 152,299 | <p>See the image with blue parts shifted:</p>
<p><a href="https://i.stack.imgur.com/LazQz.png" rel="noreferrer"><img src="https://i.stack.imgur.com/LazQz.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/f00Mb.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/f00Mb.gif" alt="enter image description here"></a></p>
<p>The single blue figure occupies the same fraction of its small square as the red part plus both blue parts occupy of the big square, hence for the areas
$$\frac{2\cdot blue + red}{blue}=\frac{big\ square}{small\ square}=4$$
so
$$2\cdot blue + red = 4\cdot blue$$
hence
$$red = 2\cdot blue$$
Q.E.D.</p>
<p>Without the formula for the area of a triangle, and without the formula for the area of a circle...</p>
|
2,698,555 | <p><strong>Question</strong></p>
<p>Find all the rational values of $x$ at which $y=\sqrt{x^2+x+3}$ is rational.</p>
<p><strong>My attempt</strong></p>
<p>Since we only have to find the rational values of $x$ and $y$, we can assume that
$$ x \in Q$$
$$ y \in Q$$
$$ y-x \in Q $$
Let$$ d = y-x$$
$$d=\sqrt{x^2+x+3}-x$$
$$d+x=\sqrt{x^2+x+3}$$
$$(d+x)^2=(\sqrt{x^2+x+3})^2$$
$$d^2 + x^2 + 2dx =x^2+x+3$$
$$d^2 +2dx = x +3$$
$$x = \frac{3-d^2}{2d-1}$$</p>
<p>$$d \neq \frac{1}{2}$$</p>
<p>So $x$ will be rational as long as $d \neq \frac{1}{2}$.</p>
<p>Now
$$ y = \sqrt{x^2+x+3}$$
$$ y = \sqrt{(\frac{3-d^2}{2d-1})^2 + \frac{3-d^2}{2d-1} + 3}$$
$$ y = \sqrt{\frac{(3-d^2)^2}{(2d-1)^2} + \frac{(3-d^2)(2d-1)}{(2d-1)^2} + 3\frac{(2d-1)^2}{(2d-1)^2}}$$
$$ y = \sqrt{\frac{(3-d^2)^2 + (3-d^2)(2d-1) + 3(2d-1)^2}{(2d-1)^2}} $$
$$ y = \frac{\sqrt{(3-d^2)^2 + (3-d^2)(2d-1) + 3(2d-1)^2}}{(2d-1)}$$
$$ y = \frac{\sqrt{d^4-2d^3+7d^2-6d+9}}{(2d-1)}$$</p>
<p>I know that again $d \neq \frac{1}{2}$ but I don't know what to do with the numerator. Help</p>
| Thomas Pornin | 38,580 | <p>If you forgot formulas, but remember that areas are quadratic (i.e. "double the length, quadruple the area"), then you know that a circle of radius $2r$ uses four times the area of circle of radius $r$. Here, you have one quarter of a circle of radius $2r$, and one circle of radius $r$ (in two halves), so they should cover the same area. In other words, the two halves of the small circle <em>should</em> be enough to cover the large quarter circle.</p>
<p>However, the two halves of the small circle overlap (the blue area) and thus some of the area covering power is lost. And, indeed, the big quarter-circle is partially uncovered (the red area). Therefore, the blue and red area MUST be equal.</p>
|
2,698,555 | <p><strong>Question</strong></p>
<p>Find all the rational values of $x$ at which $y=\sqrt{x^2+x+3}$ is rational.</p>
<p><strong>My attempt</strong></p>
<p>Since we only have to find the rational values of $x$ and $y$, we can assume that
$$ x \in Q$$
$$ y \in Q$$
$$ y-x \in Q $$
Let$$ d = y-x$$
$$d=\sqrt{x^2+x+3}-x$$
$$d+x=\sqrt{x^2+x+3}$$
$$(d+x)^2=(\sqrt{x^2+x+3})^2$$
$$d^2 + x^2 + 2dx =x^2+x+3$$
$$d^2 +2dx = x +3$$
$$x = \frac{3-d^2}{2d-1}$$</p>
<p>$$d \neq \frac{1}{2}$$</p>
<p>So $x$ will be rational as long as $d \neq \frac{1}{2}$.</p>
<p>Now
$$ y = \sqrt{x^2+x+3}$$
$$ y = \sqrt{(\frac{3-d^2}{2d-1})^2 + \frac{3-d^2}{2d-1} + 3}$$
$$ y = \sqrt{\frac{(3-d^2)^2}{(2d-1)^2} + \frac{(3-d^2)(2d-1)}{(2d-1)^2} + 3\frac{(2d-1)^2}{(2d-1)^2}}$$
$$ y = \sqrt{\frac{(3-d^2)^2 + (3-d^2)(2d-1) + 3(2d-1)^2}{(2d-1)^2}} $$
$$ y = \frac{\sqrt{(3-d^2)^2 + (3-d^2)(2d-1) + 3(2d-1)^2}}{(2d-1)}$$
$$ y = \frac{\sqrt{d^4-2d^3+7d^2-6d+9}}{(2d-1)}$$</p>
<p>I know that again $d \neq \frac{1}{2}$ but I don't know what to do with the numerator. Help</p>
| Community | -1 | <p>I assume the sides of the square are $2R$, and that the blue area is the area of the intersection of two half circles of radius $R$</p>
<p>$RedArea = Sa_{Big_{quartercircle}} - 2 * Sa_{Small_{halfcircles}} + intersection $</p>
<p>But </p>
<p>$Sa_{Big_{quartercircle}} = \pi R^2$</p>
<p>and:</p>
<p>$Sa_{Small_{halfcircle}} = \pi R^2\div 2$</p>
<p>So:</p>
<p>$RedArea = intersection = BlueArea$</p>
|
1,639,521 | <p>Let <span class="math-container">$f:[0,\infty)\to\mathbb{R}$</span> differentiable and suppose that <span class="math-container">$$\lim_{x\to\infty}f'(x)=L.$$</span> How can I prove that <span class="math-container">$$\lim_{x\to\infty}\frac{f(x)}{x} = L\;?$$</span></p>
<p>I have solved some similar problems using the Mean Value Theorem, and I am trying to use it again in this one, but nothing works. For example, I tried to apply the MVT in <span class="math-container">$[x, 2x]$</span> but it does not work. Some hint?</p>
| Robert Israel | 8,508 | <p>Hint: If for $x > N > 0$, $ f'(x) < L+\epsilon$, then for such $x$,
$f(x) < f(N) + (x-N) (L + \epsilon)$ and
$$\dfrac{f(x)}{x} < L + \epsilon + \dfrac{f(N)- N(L+\epsilon)}{x} $$
What's the limit of the right side as $x \to \infty$?</p>
<p>Similarly in the other direction...</p>
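<p>To see the hint in action numerically, here is a small sketch (my own choice of $f$, with $L=2$): take $f(x) = 2x + \sqrt{x}$, so $f'(x) = 2 + \frac{1}{2\sqrt{x}} \to 2$, and watch $f(x)/x$ approach $L$ even though $f(x)-Lx$ is unbounded:</p>

```python
import math

L = 2.0
f = lambda x: L * x + math.sqrt(x)    # f'(x) = L + 1/(2*sqrt(x)) -> L

# f(x)/x = L + 1/sqrt(x) approaches L as x grows:
ratios = [f(x) / x for x in (1e2, 1e4, 1e6, 1e8)]
```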
|
1,639,521 | <p>Let <span class="math-container">$f:[0,\infty)\to\mathbb{R}$</span> differentiable and suppose that <span class="math-container">$$\lim_{x\to\infty}f'(x)=L.$$</span> How can I prove that <span class="math-container">$$\lim_{x\to\infty}\frac{f(x)}{x} = L\;?$$</span></p>
<p>I have solved some similar problems using the Mean Value Theorem, and I am trying to use it again in this one, but nothing works. For example, I tried to apply the MVT in <span class="math-container">$[x, 2x]$</span> but it does not work. Some hint?</p>
| Eugene Zhang | 215,082 | <p>Since <span class="math-container">$\lim_{x\to\infty}f'(x)=L$</span>, for any <span class="math-container">$\epsilon>0$</span>, there is <span class="math-container">$M>0$</span> such that for any <span class="math-container">$x>M$</span>, there is
<span class="math-container">$$
|f'(x)-L|<\epsilon
$$</span>
For any <span class="math-container">$x>M$</span>, by Lagrange mean value theorem
<span class="math-container">$$
f(x)=f(M)+f'(\xi)(x-M)\quad\text{and}\quad f(x)-Lx=(f'(\xi)-L)(x-M)+f(M)-LM
$$</span>
Where <span class="math-container">$M<\xi<x.\:$</span> Thus
<span class="math-container">\begin{align}
\left|\frac{f(x)}{x} - L\right|&=\left|\frac{f(x) - Lx}{x}\right|
\\
&=\left|\frac{(f'(\xi) - L)(x-M)+f(M)-LM}{x}\right|
\\
&<|f'(\xi) - L|+\left|\frac{f(M)-LM}{x}\right|
\\
&<\epsilon+\left|\frac{f(M)-LM}{x}\right|
\end{align}</span>
And we have
<span class="math-container">$$
\varlimsup_{x\to\infty}\left|\frac{f(x)}{x} - L\right|\leqslant\varlimsup_{x\to\infty}\epsilon+\varlimsup_{x\to\infty}\left|\frac{f(M)-LM}{x}\right|=\epsilon
$$</span>
Since <span class="math-container">$\epsilon$</span> is arbitrary, this means
<span class="math-container">$$
\varlimsup_{x\to\infty}\left|\frac{f(x)}{x} - L\right|=0\quad\text{and}\quad\varlimsup_{x\to\infty}\left|\frac{f(x)}{x} - L\right|=\varliminf_{x\to\infty}\left|\frac{f(x)}{x} - L\right|
$$</span>
Therefore
<span class="math-container">$$
\lim_{x\to\infty}\left|\frac{f(x)}{x} - L\right|=0\quad\text{or}\quad\lim_{x\to\infty}\frac{f(x)}{x} = L
$$</span></p>
|
35,375 | <p>Good morning,
today I read that "number theory is nothing but the study of $\mathrm{Gal}(\mathbb{\bar{Q}}/\mathbb{Q})$", here: <a href="http://www.math.uconn.edu/~alozano/elliptic/finding%20points.pdf" rel="nofollow">http://www.math.uconn.edu/~alozano/elliptic/finding%20points.pdf</a>.
Can anyone give a very naive layman's explanation of what it actually means?</p>
<p>Furthermore, I have a doubt: $\bar {\mathbb{Q}}$ is the algebraic closure of $\mathbb{Q}$, and what confuses me is that
the field of rational numbers $\mathbb{Q}$ is not algebraically closed, since for $a_{1},a_{2},\dotsc,a_{n}\in \mathbb{Q}$ the polynomial $(x-a_{1})(x-a_{2})\cdots(x-a_{n})+1$ need not have a zero in $\mathbb{Q}$.</p>
<p>Then why are we considering the field extension $\bar {\mathbb{Q}}/\mathbb{Q}$ when $\mathbb{Q}$ is not algebraically closed? Won't it contradict the definition of algebraic closure?</p>
<p>But I am not getting the answer I was looking for. I want to know what is going on behind $\mathrm{Gal}(\mathbb{\bar{Q}}/\mathbb{Q})$: what do we obtain by forming $\mathbb{\bar{Q}}/\mathbb{Q}$, and what does taking $\mathrm{Gal}(\mathbb{\bar{Q}}/\mathbb{Q})$ give someone?</p>
<p>Thank you</p>
| Bill Dubuque | 242 | <p>Perhaps the most accessible introduction for a "very naive layperson" is Ash and Gross's <a href="http://press.princeton.edu/titles/8141.html" rel="noreferrer">Fearless Symmetry.</a> I recall reading many glowing reviews from non-experts, so this may be precisely the exposition that you seek. See also <a href="http://fmwww.bc.edu/gross/fearless_symmetry" rel="noreferrer">Gross's links to 10 reviews.</a> I think you will find some of these reviews quite informative.</p>
<p>You may also find of interest this excerpt from an <a href="http://math.mit.edu/%7Esheffield/interview.html" rel="noreferrer">interview with Richard Taylor.</a></p>
<blockquote>
<p>WHAT ARE YOUR MAJOR RESEARCH INTERESTS AND ACHIEVEMENTS?</p>
<p>The great problem that motivates me is to understand the absolute Galois group of the rational numbers, that is, the group of all automorphisms of the field of algebraic numbers (complex numbers which are the roots of nonzero polynomials with rational coefficients). If you like you can talk about all Galois groups of finite extensions of the rational numbers, but this is a convenient way to put them all together. It doesn't make a lot of difference, but it is technically neater to put them all together. The question that has motivated almost everything I have done is, "What's the structure of that group?" One of the great achievements of mathematicians of the first half of this century is called class field theory, and one way of seeing it is as a description of all abelian quotients of the absolute Galois group of Q, or if you like, the classification of the abelian extensions of the field of the rational numbers. That's only a very small part of this group. The group is extremely complicated, and just describing the abelian part doesn't solve the problem. For instance John Thompson proved that the monster group is a quotient group of this group in infinitely many ways.</p>
<p>There is some sort of program to understand the rest of this group, often referred to as the Langlands Program. There's a huge mass of conjectures, of which we are only beginning to scratch the surface, which tell us what the structure is. The answer is to my mind extremely surprising; it invokes extremely different objects. You start out with this algebraic structure and end up using what are called modular forms, which relate to complex analysis.</p>
<p>There seems to be an answer to this question: what's the structure? And the answer is something completely unexpected in terms of these analytic objects, and I think that's what attracts me to the subject. When there is a great connection between two different areas of mathematics, it always seems to me indicative that something interesting is going on.</p>
<p>The other thing we can see--another indication that it's a powerful theory--is that one can answer questions one might have asked anyway, before one built up the theory. Maybe, the first example was a result proved by Barry Mazur; he provided a description of the possible torsion subgroups of elliptic curves defined over the rational numbers. It was a problem that had been knocking around for some time, and it's relatively easy to state. Using these sorts of ideas, Barry was able to settle it.</p>
<p>Other examples are the proof the main conjecture of Iwasawa theory by Barry Mazur and Andrew Wiles, and the work of Dick Gross and Don Zagier on rational points on elliptic curves. And I guess finally, there's Fermat's last theorem, which Andrew Wiles solved using these ideas again. So in fact, the story of Fermat's last theorem is that this German mathematician Frey realized that if you knew enough of this correspondence between modular forms and Galois groups, there is an extraordinarily quick proof of Fermat's last theorem. And at the time he realized this, not enough was known about this correspondence. What Andrew Wiles did and Andrew and I completed was prove enough about this correspondence for Frey's argument to go through. The thing that amuses me is that it seems that history could easily have been reversed. All these things could have been proved about the relationship between modular forms and Galois groups, and then Frey could have come along and given nearly a two-line proof of Fermat's last theorem.</p>
<p>Those four [torsion points, Iwasawa theory, Gross and Zagier, Fermat] are probably the obvious big applications of these sorts of ideas. It seems to me the applications have been extraordinarily successful--at least four things that would have been recognized as important problems irrespective of this theory, problems that people had thought about before modular forms.</p>
</blockquote>
|
3,480,857 | <p>For <span class="math-container">$x \in \mathbb{R}^n$</span> we define <span class="math-container">$\Vert x \Vert _\infty := \sup_{k = 1,..,n} |x_k|$</span> (meaning that <span class="math-container">$\Vert x \Vert _\infty $</span> is the biggest component of <span class="math-container">$x$</span> according to amount)</p>
<p>How can one prove that </p>
<p><span class="math-container">$$\Vert x\Vert_\infty \leq \Vert x \Vert \leq \sqrt{n}\Vert x\Vert_\infty$$</span></p>
<p>I have seen it on <a href="https://en.wikipedia.org/wiki/Norm_%28mathematics%29#Properties" rel="nofollow noreferrer">Wikipedia</a>, but there's no proof to it.</p>
<p>I know that using Cauchy–Schwarz inequality we get for all <span class="math-container">$x\in\mathbb{R}^n$</span>
<span class="math-container">$$
\Vert x\Vert_1=
\sum\limits_{i=1}^n|x_i|=
\sum\limits_{i=1}^n|x_i|\cdot 1\leq
\left(\sum\limits_{i=1}^n|x_i|^2\right)^{1/2}\left(\sum\limits_{i=1}^n 1^2\right)^{1/2}=
\sqrt{n}\Vert x\Vert_2
$$</span></p>
<p>but that doesn't help me.</p>
| Prabath Hewasundara | 243,089 | <p>Let <span class="math-container">$\Vert x \Vert_\infty=l=\sup_\limits{i\in{1,2,...n}}\vert x_i \vert$</span></p>
<p>Note that <span class="math-container">$0 \leq l, \Vert x \Vert _2$</span>.</p>
<p>Then <span class="math-container">$l^2 \leq \sum \limits_{i=1}^n \vert x_i \vert^2 \implies \Vert x \Vert_\infty=l\leq \sqrt{\sum \limits_{i=1}^n \vert x_i \vert^2} =\Vert x\Vert _2$</span></p>
<p>Also <span class="math-container">$\sum \limits_{i=1}^n \vert x_i \vert^2 \leq \sum \limits_{i=1}^nl^2 \implies \sqrt{\sum \limits_{i=1}^n \vert x_i \vert^2} \leq l\sqrt n \implies \Vert x \Vert _2 \leq \sqrt n\Vert x \Vert_\infty $</span></p>
|
1,955,505 | <blockquote>
<p>$\sum_{n=0}^\infty \frac{n^2+3n+2}{4^n} = \frac{128}{27}$ Given hint: $(n^2+3n+2) = (n+2)(n+1)$</p>
</blockquote>
<p><strong>I've tried</strong> converting the series to a geometric one but failed with that approach and don't know other methods for normal series that help determine the actual convergence value. Help and hints are both appreciated</p>
| Dr. Sonnhard Graubner | 175,066 | <p>Prove by induction that $$\sum_{n=0}^k \frac{n^2+3n+2}{4^n}=\frac{1}{27} 4^{-k} \left(-9 k^2-51 k+2^{2 k+7}-74\right)$$</p>
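<p>Before doing the induction, the closed form can be checked with exact rational arithmetic; a quick sketch (function names are mine):</p>

```python
from fractions import Fraction

def partial_sum(k):
    """Exact partial sum of (n^2 + 3n + 2) / 4^n for n = 0..k."""
    return sum(Fraction(n * n + 3 * n + 2, 4 ** n) for n in range(k + 1))

def closed_form(k):
    """The claimed closed form (1/27) 4^{-k} (-9k^2 - 51k + 2^{2k+7} - 74)."""
    return Fraction(-9 * k * k - 51 * k + 2 ** (2 * k + 7) - 74, 27 * 4 ** k)

# Exact agreement for the first few k, and the sums approach 128/27:
assert all(partial_sum(k) == closed_form(k) for k in range(25))
gap = Fraction(128, 27) - partial_sum(25)
```

<p>Letting $k\to\infty$, the term $2^{2k+7}4^{-k}=128$ dominates and the rest vanishes, which recovers $\frac{128}{27}$.</p>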
|
1,955,505 | <blockquote>
<p>$\sum_{n=0}^\infty \frac{n^2+3n+2}{4^n} = \frac{128}{27}$ Given hint: $(n^2+3n+2) = (n+2)(n+1)$</p>
</blockquote>
<p><strong>I've tried</strong> converting the series to a geometric one but failed with that approach and don't know other methods for normal series that help determine the actual convergence value. Help and hints are both appreciated</p>
| Olivier Oloa | 118,798 | <p>One may start with the standard <strong>finite</strong> evaluation:
$$
1+x+x^2+...+x^n=\frac{1-x^{n+1}}{1-x}, \quad |x|<1. \tag1
$$ Then by differentiating $(1)$ we have
$$
1+2x+3x^2+...+nx^{n-1}=\frac{1-x^{n+1}}{(1-x)^2}+\frac{-(n+1)x^{n}}{1-x}, \quad |x|<1, \tag2
$$ by differentiating once more one gets
$$
2\times 1+3\times 2\, x+...+n\times (n-1)x^{n-2}=\frac{2-x^{n-1}\left(n+n^2 (1-x)^2+2x-nx^2\right)}{(1-x)^3},\tag3
$$ then <strong>by making</strong> $n \to +\infty$ in $(3)$, <strong>using $|x|<1$</strong>, one obtains </p>
<blockquote>
<p>$$
\sum_{n=0}^\infty(n+1)\cdot n \cdot x^n=\frac{2 x}{(1-x)^3},\quad |x|<1, \tag4
$$ </p>
</blockquote>
<p>from which, shifting the index (equivalently, dividing $(4)$ by $x$) gives $\sum_{n=0}^\infty(n+2)(n+1) x^n=\frac{2}{(1-x)^3}$, and putting $x:=\dfrac14$ yields the desired value $\frac{2}{(3/4)^3}=\frac{128}{27}.$</p>
|
1,955,505 | <blockquote>
<p>$\sum_{n=0}^\infty \frac{n^2+3n+2}{4^n} = \frac{128}{27}$ Given hint: $(n^2+3n+2) = (n+2)(n+1)$</p>
</blockquote>
<p><strong>I've tried</strong> converting the series to a geometric one but failed with that approach and don't know other methods for normal series that help determine the actual convergence value. Help and hints are both appreciated</p>
| E.H.E | 187,799 | <p>Hint:
$${\frac {1}{1-x}}=\sum _{n=0}^{\infty }x^{n}\quad {\text{ for }}|x|<1\!$$
differentiate it and multiply by $x$</p>
<p>$${\frac {x}{(1-x)^{2}}}=\sum _{n=0}^{\infty }nx^{n}$$
differentiate it again and multiply by $x$
$${\frac {x(x+1)}{(1-x)^{3}}}=\sum _{n=0}^{\infty }n^2x^{n}$$</p>
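<p>Combining these hints termwise, $\sum \frac{n^2+3n+2}{4^n} = \sum n^2x^n + 3\sum nx^n + 2\sum x^n$ at $x=\frac14$. A sketch of the check with exact rational arithmetic (variable names are mine), comparing against a truncated numeric sum:</p>

```python
from fractions import Fraction

x = Fraction(1, 4)
geom   = 1 / (1 - x)                  # sum_{n>=0} x^n
first  = x / (1 - x) ** 2             # sum_{n>=0} n x^n
second = x * (x + 1) / (1 - x) ** 3   # sum_{n>=0} n^2 x^n

# (n^2 + 3n + 2) x^n summed term by term:
total = second + 3 * first + 2 * geom
```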
|
499,496 | <p>We have the diff. eq:</p>
<p>$$ y' = \dfrac{2y}{t}$$</p>
<p>I tried to do the following:</p>
<p>$$ y' = \dfrac{1}{t} \cdot 3y$$</p>
<p>$$3y = \dfrac{1}{t} \cdot y'$$</p>
<p>$$ \dfrac{3}{2} y^2 = \ln(t)$$</p>
<p>Here I stopped because I noticed the answer had to be $c \cdot t^3$. I don't understand how to get that answer.</p>
| the_candyman | 51,370 | <p>You have to use the separability of the variables.
$$y' = \frac{dy}{dt} = \frac{2y}{t}$$
We can treat $dy$ and $dt$ as "normal" numbers, and we must move all the "$y$"s to one side and all the "$t$"s to the other.
$$\frac{dy}{2y} = \frac{dt}{t}$$
Then, we integrate!
$$\int_{y(t_0)}^{y(t)}\frac{1}{2y} dy = \int_{t_0}^{t}\frac{1}{t}dt$$
$$\frac{1}{2}(\log(y(t)) - \log(y(t_0))) = \log(t) - \log(t_0)$$
$$\log(y(t)) = 2(\log(t) - \log(t_0)) + \log(y(t_0))$$
$$y(t) = e^{2(\log(t) - \log(t_0)) + \log(y(t_0))}$$
$$y(t) = e^{2\log(t)} e^{-2\log(t_0)}e^{\log(y(t_0))}$$
$$y(t) = \left(\frac{t}{t_0}\right)^2 y(t_0) $$</p>
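<p>As a sanity check (my own sketch, with an arbitrary constant $C = y(t_0)/t_0^2$), any $y(t) = Ct^2$ does satisfy $y' = 2y/t$; here $y'$ is approximated by a central finite difference:</p>

```python
def y(t, C=3.0):
    """Candidate solution y(t) = C * t^2 (C plays the role of y(t0)/t0^2)."""
    return C * t * t

def dy_dt(t, h=1e-6):
    # central finite-difference approximation of y'(t)
    return (y(t + h) - y(t - h)) / (2 * h)

# y' - 2y/t should vanish for every t != 0:
residuals = [abs(dy_dt(t) - 2 * y(t) / t) for t in (0.5, 1.0, 2.0, 7.0)]
```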
<p>P.S. Don't know why in your second step, you have a "3" instead of a "2"...</p>
|
499,496 | <p>We have the diff. eq:</p>
<p>$$ y' = \dfrac{2y}{t}$$</p>
<p>I tried to do the following:</p>
<p>$$ y' = \dfrac{1}{t} \cdot 3y$$</p>
<p>$$3y = \dfrac{1}{t} \cdot y'$$</p>
<p>$$ \dfrac{3}{2} y^2 = \ln(t)$$</p>
<p>Here I stopped because I noticed the answer had to be $c \cdot t^3$. I don't understand how to get that answer.</p>
| Avitus | 80,800 | <p>You can write the original O.D.E (I assume $y=y(t)$ is a $C^1$ function) as follows</p>
<p>$$\frac{y'}{y}=\frac{2}{t};$$</p>
<p>integrating both sides you arrive at</p>
<p>$$\ln y=2\ln(t)+C, $$</p>
<p>with $C$ constant. Then,</p>
<p>$$y(t)=C't^2,$$</p>
<p>as $2\ln t=\ln t^2$ and $C'=e^C$.</p>
|
593,746 | <p>In general, if a random process is ergodic, does it imply that it is also stationary in any sense?</p>
| user127022 | 127,022 | <p>In the theory of Dynamical Systems, one usually only defines ergodicity for invariant measures, so the short answer could be 'yes'.</p>
<p>But, consider the following theorem (taken from Norris' Markov Chains):</p>
<p><strong>Theorem 1.10.2</strong>: let $P=(p_{ij}:\,i,j\in I)$ be an irreducible, positive recurrent transition matrix indexed by $I\times I$, where $I$ is a countable set, and let $\lambda = (\lambda_i:\,i\in I)$ be any distribution on $I$. If $(X_n:\,n\in\mathbb{Z}_+)$ is a Markov chain with initial distribution $\lambda$ and transition matrix $P$, then for any bounded function $f\colon I\to \mathbb{R}$ there is a set $\Omega_f$ with $\mathbb{P}(\Omega_f) = 1$ such that $$\frac{1}{n}\sum_{k=0}^{n-1} f\circ X_k(\omega) \to \sum_{i\in I} \pi_i f(i),\qquad \forall\omega\in \Omega_f,$$ where $\pi = (\pi_i:\,i\in I)$ is the unique invariant distribution of $P$.</p>
<p>By taking $\lambda\neq \pi$, clearly the corresponding chain is not stationary, but nevertheless the time averages converge to the "limiting space averages".</p>
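<p>A small simulation illustrating this (my own toy example, not from Norris): a two-state chain with $P = \begin{pmatrix}0.7 & 0.3\\ 0.6 & 0.4\end{pmatrix}$ has invariant distribution $\pi = (\tfrac23, \tfrac13)$. Starting from the non-stationary initial distribution $\delta_1$, the time average of $f(i)=i$ still converges to $\pi_1 = \tfrac13$:</p>

```python
import random

random.seed(42)

# Two-state chain on I = {0, 1}; irreducible and positive recurrent.
P = [[0.7, 0.3], [0.6, 0.4]]       # invariant distribution pi = (2/3, 1/3)
f = float                          # f(i) = i, so sum_i pi_i f(i) = pi_1 = 1/3

state, total, N = 1, 0.0, 200_000  # start away from stationarity
for _ in range(N):
    total += f(state)
    state = 0 if random.random() < P[state][0] else 1

time_average = total / N           # ergodic average, should be near 1/3
```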
|
3,161,874 | <p>Find the vector equation of the plane which contains points <span class="math-container">$A(0,1,1)$</span>, <span class="math-container">$B(-1,2,1)$</span> and <span class="math-container">$C(2,0,2)$</span>. </p>
<p>I did this by first finding <span class="math-container">$AB$</span> and <span class="math-container">$AC$</span> where I got <span class="math-container">$AB=(-1,1,0)$</span> and <span class="math-container">$AC=(2,-1,1)$</span>. Then I did cross product with this and I got <span class="math-container">$i+j-k$</span>. When I then used this to find equation I got <span class="math-container">$x+y-z=0$</span> but I am unsure why this is wrong. Can someone please help? </p>
| inavda | 134,481 | <p>Arthur's answer is very good, but here is another (equally valid) way of visualizing the problem:</p>
<p>Let us graph the positions of the wall, fly, and car over time. The wall doesn't move, so it is represented by a horizontal line. The car starts at some distance away from the wall but moves towards it at a constant speed until it hits the wall -- so we have a line that intersects the wall's line. Now for the interesting part:</p>
<p>The fly's path starts off as a line with greater slope than the car's line, until it hits the wall's line. The fly's speed remains the same, but it is going in the opposite direction. So now the path continues as if the wall's line was a mirror and it was reflected. When the fly hits the car, the same thing happens -- the fly's path is reflected and it continues towards the wall again (one caveat: when it bounces off the car's line, the angles of incidence and reflection are not equal so it isn't behaving exactly as light would).</p>
<p>So we can see that the fly's path continues bouncing up and down, always at the same or opposite slope.</p>
<p>The final thing to notice to grasp the intuition is that this diagram we have constructed is self-similar. If we zoom in so that the second bounce with the wall is where the first bounce used to be, we have the same exact diagram as before. I won't prove this, but if you draw it out, you can see it intuitively. Essentially, this means that no matter how close the car gets to the wall, we can zoom in and see more bounces for the fly.</p>
|
273,580 | <p>I notice that the built-in "Coordinates Tool" has a very efficient zoom tooltip displaying an enlarged portion of the image along with coordinates and row/column indices:</p>
<p><a href="https://i.stack.imgur.com/KteX5.png" rel="noreferrer"><img src="https://i.stack.imgur.com/KteX5.png" alt="screenshot" /></a></p>
<p>How to get the source code of this tool? Or <strong>how can we implement such an efficient tooltip ourselves</strong> (I want to replicate all the tooltip functionality)?</p>
| Alexey Popkov | 280 | <p>Answering the first part of my question (thanks to <a href="https://mathematica.stackexchange.com/users/36508/lukas-lang">Lukas Lang</a> for pointing me in the right direction!).</p>
<p>The source code for the <a href="http://reference.wolfram.com/language/workflow/GetCoordinatesFromAnImage.html" rel="nofollow noreferrer">Image Assistant Toolbar</a> that appears when you select <code>Image</code> in the FrontEnd is in the file</p>
<pre><code>FileNameJoin@{$InstallationDirectory, "SystemFiles", "FrontEnd", "SystemResources",
"AttachedImage2D.nb"}
</code></pre>
<p>In particular, the zooming tooltip can be invoked by evaluating</p>
<pre><code>FrontEndExecute[FrontEnd`Select2DTool["DisplayImageTooltip"]]
</code></pre>
<p>when an <code>Image</code> is selected. It appears to be implemented at a low level in the FrontEnd, so the source code is not available.</p>
<p>The option <code>CurrentValue[$FrontEnd, "DisplayImagePixels"]</code> determines whether to show the zoomed portion of the image in the tooltip. The default value for this option is <code>Automatic</code>, what is equivalent to <code>"ExploreView"</code>. With the value <code>"TooltipInfo"</code> the tooltip will be shown without the zoomed image. For displaying the zoomed image, we need to set <code>"DetailExploreView"</code>:</p>
<pre><code>CurrentValue[$FrontEnd, "DisplayImagePixels"] = "DetailExploreView";
SelectionMove[PreviousCell[], All, CellContents]
FrontEndExecute[FrontEnd`Select2DTool["DisplayImageTooltip"]]
</code></pre>
<p><a href="https://i.stack.imgur.com/0azeG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0azeG.png" alt="screenshot" /></a></p>
<p>The pixel size of the zoomed image can be controlled via <code>CurrentValue[$FrontEnd, "RasterExploreViewRange"]</code>.</p>
<p>Evaluating</p>
<pre><code>FrontEndExecute[FrontEnd`Select2DTool["GetPixelPointMarkers"]]
</code></pre>
<p>activates also the point marker creation functionality. After creating the markers, the list of their row/column indices can be obtained with</p>
<pre><code>MathLink`CallFrontEnd[FrontEnd`Value[FEPrivate`GetlImageMarkersIndicesList[NotebookSelection[]]]]
</code></pre>
<p>The makers can be removed with</p>
<pre><code>FrontEndTokenExecute["ClearPixelPointMarkers"]
</code></pre>
<hr />
<p>With the above, we can develop the following simple pixel picker interface:</p>
<pre><code>PixelPickerInterface[i_Image] :=
With[{id = CreateUUID[], id2 = CreateUUID[],
is = {{1, #}, {1, #}} &[101./CurrentValue["WindowResolution"]*72]},
With[{init := (CurrentValue[$FrontEnd, "DisplayImagePixels"] = "DetailExploreView";
 SelectionMove[First@Cells[CellTags -> id], All, CellContents,
 AutoScroll -> False];
 SelectionMove[EvaluationNotebook[], All, Graphics, AutoScroll -> False];
 SetOptions[
 First@Cells[NotebookSelection[], CellTags -> "AttachedImage2D",
 AttachedCell -> True], CellSize -> {0, 0}];
 FrontEndExecute[FrontEnd`Select2DTool["GetPixelPointMarkers"]])},
 CellPrint[
 ExpressionCell[i, "Output", CellTags -> id, CellFrameLabels -> {{None, None}, {None,
 ToBoxes@Pane[Row[{Spacer[40],
 Button["Start Image Assistant Interface", init],
 Button["Print Markers Indices",
 NotebookDelete[Cells[CellTags -> id2]]; init;
 With[{inds =
 MathLink`CallFrontEnd[
 FrontEnd`Value[
 FEPrivate`GetlImageMarkersIndicesList[NotebookSelection[]]]]},
 CellPrint@Table[
 ExpressionCell[
 Row[Prepend[
 Table[Image[
 ImageTake[i, {-sh, sh} + ind[[1, 1]], {-sh, sh} + ind[[2, 2]]],
 ImageSize -> is], {sh, {0, 1, 4, 10, 50}}],
 Labeled[{ind[[1, 1]], ind[[2, 2]]}, "row,col"]], "->"], "Echo",
 CellTags -> id2], {ind, inds}]]],
 Button["Clear Markers",
 init; FrontEndTokenExecute["ClearPixelPointMarkers"]],
 Slider[Dynamic[CurrentValue[$FrontEnd, "RasterExploreViewRange"]], {39,
 3, -1}, ImageSize -> Small],
 Dynamic[CurrentValue[$FrontEnd, "RasterExploreViewRange"]]
}], Full, Alignment -> Left]}}]]
]];
PixelPickerInterface[ExampleData[{"TestImage", "House"}]]
</code></pre>
<p><a href="https://i.stack.imgur.com/WefZy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WefZy.png" alt="screesnhot" /></a></p>
|
507,867 | <p>I saw an inequality for $n\times n$ matrices. I was wondering if the inequality is true or not?</p>
<p>Does $\det(A)>0$ imply $\det(I+A)>0$?</p>
| Asaf Karagila | 622 | <p>Consider $-I$ in $M_n(\Bbb R)$ for an even $n$.</p>
|
2,713,555 | <p>I have a question stating</p>
<blockquote>
<p>Let $A= \{ x_1 , x_2 , x_3 , \ldots , x_n \}$ be a set consisting of $n$ distinct elements. How many subsets does it have? How many proper subsets?</p>
</blockquote>
<p>My thought is that there would be subsets with $1$ element, $2$ elements, $3$ element and so on, up to $n$ elements. The number of subsets of each size would be:</p>
<p>$$\begin{array}{c|c}
\text{Subset size} & \text{no. of subsets} \\
\hline
1 & n \\ 2 & n-1 \\ 3 & n-2 \\ \vdots & \vdots \\ n-1 &n-(n-2) \\ n & n- (n-1)
\end{array}$$
From this it seems the number of subsets would be $\displaystyle \sum_{k=0}^{n-1} (n-k) $. And for proper subsets, I would just not include the subsets of size $n$ , so $\displaystyle \sum_{k=0}^{n-2} (n-k) $. Is this correct?</p>
| fleablood | 280,126 | <p>Why do you think there will be $n-(k-1)$ subsets of size $k$?</p>
<p>Let's say you have a set of $25$ elements, $\{1,2,3,\ldots, 25\}$, and you want to see how many subsets of size $2$ there are.</p>
<p>You can have $\{1,2\}, \{1,3\},\ldots, \{1, 25\}$. That's $24 = n- 1$. But those are all with the element $1$. What about subsets <em>without</em> the element $1$?</p>
<p>$\{2,3\},\ldots , \{2,25\}$ is $23$ and $\{3,4\},\ldots,\{3,25\}$ is $22$, and so on all the way down to $\{24,25\}$ being $1$.</p>
<p>So adding those up we have $24 + 23 + 22 + ... + 1 = 300$.</p>
<p>But what about subsets of size $3$? $\{1,2,3\} ... \{1,2,25\}$ is $23$ and $\{1,3,4\}$ to $\{1,3,25\}$ is $22$, so $\{1,2,3\}...\{1,24,25\}$ is $23 + 22 + ... + 1 = 276$, and $\{2,3,4\}...\{2,24,25\}$ is $22 + .... + 1 = 253$, and so on, so the total number of subsets of size $3$ is $\sum_{m=1}^{23} \sum_{i=1}^m i = \sum_{m=1}^{23} \frac {m(m+1)}2$. </p>
<p>And then to do subsets of size $4$ we get.... well, a headache.</p>
<p>A bit of thought is that choosing $k$ elements from $25$ (or $n$) is, quite <em>literally</em> choosing $k$ elements from $25$ (or $n$). So the number of subsets of size $k$ is ${n \choose k}$. Literally.</p>
<p>[This would imply that ${n \choose 3} = \sum_{i=1}^{n-2}\frac {i(i+1)}2$. Does it?]</p>
<p>So, anyway the number of proper subsets would therefore be: $\sum_{i=1}^{n-1} {n\choose i}$.</p>
<p>But... can that be simplified. You should play with that for about 14 hours or so.</p>
<p>$n =2\implies \sum{n\choose i} = 2$</p>
<p>$n = 3\implies \sum{n\choose i} = {3\choose 1 } + {3\choose 2} = 6$.</p>
<p>$n = 4 \implies \sum{n\choose i} = {4\choose 1} + {4\choose 2} + {4\choose 3} = 4 + 6 + 4 = 14$.</p>
<p>Notice anything?</p>
<p>Well, here's a suggestion. Try figuring the number of <em>all</em> subsets including the two <em>im</em>proper subsets, $\emptyset$ and $A$.</p>
<p>Then the solution is $\sum_{i=0}^n{n\choose i}$.</p>
<p>Then the number of all subsets is</p>
<p>$n = 2; \sum {n\choose i} = {2 \choose 0} + {2\choose 1} + {2\choose 2} = 1+2+1 = 4$.</p>
<p>$n =3; \sum {n\choose i} = {3\choose 0}+ {3 \choose 1} + {3\choose 2} + {3\choose 3} = 1 + 3 + 3 + 1 = 8$.</p>
<p>And $n= 4; \sum {n\choose i} = {4\choose 0} + {4\choose 1} + {4\choose 2} + {4 \choose 3} + {4\choose 4} = 1 + 4 +6+ 4 + 1 = 16$.</p>
<p>Notice anything?</p>
<p>It seems that the number of all subsets is $2^n$.</p>
<p>Why would that be? We could probably prove $\sum_{i=0}^n{n\choose i} = 2^n$ by induction.... But what does it <em>mean</em>?</p>
<p>well, consider this single sentence:</p>
<p><strong>For any of the $n$ elements of $A$, either the element is, or is not, in a specific subset of $A$.</strong></p>
<p>Think about that. If we had considered that from the beginning, the whole thing could have probably been easier.</p>
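<p>As a final sanity check (a small Python sketch added for illustration, not part of the original argument), one can enumerate the subsets of a small set directly and compare the counts against ${n \choose k}$, against $2^n$, and against the triangular-number identity above:</p>

```python
from itertools import combinations
from math import comb

n = 5
A = range(1, n + 1)

# Count the k-element subsets of A by direct enumeration.
by_size = [sum(1 for _ in combinations(A, k)) for k in range(n + 1)]

assert by_size == [comb(n, k) for k in range(n + 1)]  # C(n,k) subsets of size k
assert sum(by_size) == 2 ** n                         # 2^n subsets in total
# C(n,3) as a sum of triangular numbers i(i+1)/2, i = 1 .. n-2:
assert comb(n, 3) == sum(i * (i + 1) // 2 for i in range(1, n - 1))

print(by_size)  # [1, 5, 10, 10, 5, 1]
```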
|
2,713,555 | <p>I have a question stating</p>
<blockquote>
<p>Let $A= \{ x_1 , x_2 , x_3 , \ldots , x_n \}$ be a set consisting of $n$ distinct elements. How many subsets does it have? How many proper subsets?</p>
</blockquote>
<p>My thought is that there would be subsets with $1$ element, $2$ elements, $3$ element and so on, up to $n$ elements. The number of subsets of each size would be:</p>
<p>$$\begin{array}{c|c}
\text{Subset size} & \text{no. of subsets} \\
\hline
1 & n \\ 2 & n-1 \\ 3 & n-2 \\ \vdots & \vdots \\ n-1 &n-(n-2) \\ n & n- (n-1)
\end{array}$$
From this it seems the number of subsets would be $\displaystyle \sum_{k=0}^{n-1} (n-k) $. And for proper subsets, I would just not include the subsets of size $n$ , so $\displaystyle \sum_{k=0}^{n-2} (n-k) $. Is this correct?</p>
| Arian | 172,588 | <p>There are
$${n\choose 0},{n\choose 1},...,{n\choose n-1},{n\choose n}$$
subsets from $\{x_1,...,x_n\}$ with no element, one element,..., $(n-1)$ elements, and $n$ elements respectively. Therefore the total number of subsets is
$${n\choose 0}+{n\choose 1}+...+{n\choose n-1}+{n\choose n}=(1+1)^n=2^n$$</p>
|
1,031,304 | <p>Let $\varphi$ be a bounded, differentiable function on $\mathbb{R}$ such that $\varphi'$ is bounded and uniformly continuous on $\mathbb{R}$.</p>
<p>We want to prove that $\displaystyle\frac{\varphi(x+h)-\varphi(x)}{h}\to\varphi'(x)$ uniformly as $h\to 0$</p>
<p>I can prove $\varphi$ is uniformly continuous, but I don't know what to do with it.</p>
<p>Any hints?</p>
| mookid | 131,738 | <p><strong>Hint:</strong> use the mean value theorem.
$$
\phi(x+h) - \phi(x) - h\phi'(x) = h (\phi'(x + \theta_x h) - \phi'(x))
$$
for some $\theta_x\in(0,1)$.</p>
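<p>To see the statement in action (a numerical illustration added here, using $\varphi=\sin$, which is bounded with bounded, uniformly continuous derivative $\cos$): the supremum of the error of the difference quotient over the sampled points shrinks with $h$, uniformly in $x$.</p>

```python
import math

phi, dphi = math.sin, math.cos

# Sample points standing in for "all of R"; since |phi''| <= 1,
# the mean value theorem bounds the error by h/2 at every x.
xs = [x / 10 for x in range(-100, 101)]
for h in (1e-1, 1e-2, 1e-3):
    sup = max(abs((phi(x + h) - phi(x)) / h - dphi(x)) for x in xs)
    print(h, sup)  # sup is at most h/2
```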
|
203,212 | <p>Q1: When a question asking me to show a set consists of countably many point, do it mean infinite countably many or both infinite or finite case. </p>
<p>Q2:Also, to show that a set consist of countably many point, is it enough to show that the set is countable or i have to show somethings more?</p>
| Belgi | 21,335 | <p><strong>For 1:</strong> the common use of the word countable includes both finite and
countably infinite (the definition is that $A$ is countable if $|A|\leq\aleph_{0}$)</p>
<p><strong>For 2:</strong> yes, it is sufficient as you show this by definition </p>
|
11,882 | <p>Prove that $\lim\limits_{x \to 2} \frac{x^{2}-2x+9}{x+1}$ using an epsilon delta proof.</p>
<p>So I have most of the work done. I choose $\delta = \min\{\frac{1}{2}, y\}$,<br>
$f(x)$ factors out to $\frac{|x-3||x-2|}{|x+1|}$
But $|x-3| \lt \frac{3}{2}$ for $\delta = \frac{1}{2}$ and also $|x+1| > 5/2$ (I'll spare you the details). </p>
<p>I'm not sure how to choose my y here. If I take $\lim\limits_{x \to 2} \frac{x^{2}-2x+9}{x+1}$ < $(3/5) \delta$ How do I choose my epsilon here (replace y with this) to satisfy this properly?</p>
<p>Thanks</p>
| Zarrax | 3,035 | <p>I'm going to go out on a limb and guess that you're trying to show the limit is 3 and that $f(x) = {x^2 - 2x + 9 \over x + 1} - 3$. I suggest trying to translate what you've done into the fact that $|{x^2 - 2x + 9 \over x + 1} - 3| < {3 \over 5}|x - 2|$ whenever $|x - 2| < {1 \over 2}$. </p>
<p>This means that if you choose any $\epsilon < {1 \over 2}$, then you have that $|{x^2 - 2x + 9 \over x + 1} - 3| < {3 \over 5}\epsilon$ whenever $|x - 2| < \epsilon$. So, given $\epsilon < {1 \over 2}$, the natural choice is $\delta = {3 \over 5}\epsilon$; then $|x - 2| < \delta$ gives $|{x^2 - 2x + 9 \over x + 1} - 3| < {3 \over 5}\delta < \epsilon$. (Only small $\epsilon$ matter here; note that you had the roles of $\delta$ and $\epsilon$ reversed.)</p>
<p>Now verify that the "for every $\epsilon$ there is a $\delta$" definition is satisfied in this way. </p>
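<p>(As an added numerical spot-check, not part of the proof: sampling points with $0 < |x-2| < \frac12$ confirms the bound $|f(x)-3| < \frac35|x-2|$ used above.)</p>

```python
def f(x):
    return (x**2 - 2*x + 9) / (x + 1)

# Sample points with 0 < |x - 2| < 1/2; the bound should hold at each one.
xs = [2 + d for d in (-0.49, -0.25, -0.01, 0.01, 0.25, 0.49)]
ok = all(abs(f(x) - 3) < (3 / 5) * abs(x - 2) for x in xs)
print(ok)  # True
```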
|
4,065,868 | <p>Can anyone here recommend a low-dimensional topology textbook that contains knot theory and 3,4-manifolds?Or should I look for these subjects in separate textbooks?</p>
| Harambe | 357,206 | <p>My book preferences are different from the other answer! My preferences are the following:</p>
<p>Knot theory: Lickorish "An introduction to knot theory". You don't need much background apart from singular homology/fundamental groups.</p>
<p>3-manifolds: Saveliev "Lectures on the topology of 3-manifolds". Some of the earlier content overlaps with Lickorish, but it's also good! Again if you know some stuff about singular homology you'll be able to read this book. It's quite selective about content though - if you're interested in things like contact topology, geometrisation etc, you'll need other sources.</p>
<p>4-manifolds: Scorpan "Wild world of 4-manifolds". The previous answer also mentioned this book - I like it a lot! The earlier chapters develop theory within the algebraic topology framework, but eventually we need more tools: later on Seiberg-Witten gauge theory is introduced and a bunch of wild 4-dimensional results are proven (e.g. uncountably many smooth structures on R^4)</p>
|
4,065,868 | <p>Can anyone here recommend a low-dimensional topology textbook that contains knot theory and 3,4-manifolds?Or should I look for these subjects in separate textbooks?</p>
| Strongly Negative Amphicheiral | 881,176 | <p>In addition to what others have mentioned, I would like to add Hatcher's "Notes on Basic 3-Manifold Topology" which don't cover too much material but are in my opinion the best introduction to 3-Manifolds around.</p>
|
1,691,685 | <p>The answer for it is $$3 + \sum_{k=1}^n (3+k(k-1)2^{k-2})\frac{(-1)^k}{k!} x^k + o(x^n)$$
Well, I've tried to change every $e^x$ to $1 + x + \frac{x^2}{2!} + ... + o(x^n)$ and got nothing useful. I know how to get Maclaurin series for polynomials but I don't really know what I should do with fractions and how to get that kind of result.</p>
| OKPALA MMADUABUCHI | 98,218 | <p>First write the function as $3e^{-x}+x^2e^{-2x}$ (the factor $2^{k-2}$ in the stated answer comes from the $e^{-2x}$). Then add the Maclaurin expansions of the two terms.</p>
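<p>A coefficient check (my own sketch, added for illustration; it assumes the intended function is $3e^{-x}+x^2e^{-2x}$, the one consistent with the factor $2^{k-2}$ in the stated answer):</p>

```python
from fractions import Fraction
from math import factorial

def coeff(k):
    """Coefficient of x^k in 3 e^{-x} + x^2 e^{-2x}, built term by term."""
    c = Fraction(3 * (-1) ** k, factorial(k))             # from 3 e^{-x}
    if k >= 2:
        c += Fraction((-2) ** (k - 2), factorial(k - 2))  # from x^2 e^{-2x}
    return c

def claimed(k):
    """The closed form (3 + k(k-1) 2^(k-2)) (-1)^k / k! from the question."""
    extra = k * (k - 1) * 2 ** (k - 2) if k >= 2 else 0
    return Fraction((3 + extra) * (-1) ** k, factorial(k))

match = all(coeff(k) == claimed(k) for k in range(12))
print(match)  # True
```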
|
4,431,764 | <p>I was solving one problem in which I had to find the value of <span class="math-container">$x$</span> and at the last step my result came out to be <span class="math-container">$x^{\log_b{a}}=2$</span> but I was not able to get to the correct answer after this. Here's what I did:</p>
<p><span class="math-container">$$\begin{align}
&x^{\log_b{a}}=2\\
\Rightarrow\; & \log_b{a}\cdot\log_b{x}=\log_b{2} \rightarrow \text{Taking log to the base }b\text{ on both side}\\
\Rightarrow\; & \log_b{x}=\frac{\log_b{2}}{\log_b{a}}\\
\Rightarrow\; & \log_b{x}= \log_a{2}
\end{align}$$</span>
But as you see I was not able to proceed ahead and I tried other ways to get to the solution but I was not able to.</p>
<p>The correct answer that has been provided is <span class="math-container">$x=2^{\log_a{b}}$</span>. Can someone please help me as to how to get to this form of the answer?</p>
| Koro | 266,435 | <p>Take <span class="math-container">$\rm\log$</span> w.r.t. <span class="math-container">$\rm \, base\,2$</span> so that <span class="math-container">$$\rm\log_ba \log_2x=1\implies \log_2x=\frac 1{\log_ba}=\log_ab\implies x=2^{\log_ab}$$</span></p>
<p>The first implication assumes <span class="math-container">$\rm a\ne 1$</span>.</p>
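<p>A numerical spot-check (with bases <span class="math-container">$\rm a=5$</span>, <span class="math-container">$\rm b=7$</span> chosen purely for illustration):</p>

```python
import math

a, b = 5.0, 7.0                  # any bases other than 1 will do
x = 2 ** math.log(b, a)          # the claimed solution x = 2^(log_a b)
lhs = x ** math.log(a, b)        # evaluate x^(log_b a)
print(lhs)                       # approximately 2
```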
|
38,597 | <p>Or equivalently, if $G$ is a group, do the projective and injective dimension of $Z$ (viewed as a $ZG$-module) agree?</p>
<p>Thanks! </p>
| David White | 11,540 | <p>So, Simon Wadsley’s comment clearly answers this question but a hypothetical future user will not have his/her eye drawn to that answer. That’s why I’m posting this, which is all Simon’s idea, in the hope that the OP will come back on at some point and accept this answer (thus preventing this problem from being bumped up to the front-page by the Mathoverflowbot). If the OP is reading this, please click the check mark next to this box so it'll count as being answered. I'm giving a CW answer so I don't get rep (in line with the recommended procedure on <a href="http://mathoverflow.tqft.net/discussion/328/should-we-do-anything-if-a-question-is-answered-well-in-the-comments/#Item_0" rel="nofollow noreferrer">meta</a>)</p>
<p>The homological and cohomological dimensions of a group do NOT have to agree. As you point out, if they did agree then the projective and injective dimensions of <span class="math-container">$\mathbb{Z}$</span> as a <span class="math-container">$\mathbb{Z}[G]$</span> module would agree (this is basically just the definition of Ext). An example that this could fail, take <span class="math-container">$G$</span> to be the trivial group. Then the projective dimension of <span class="math-container">$\mathbb{Z}$</span> as a <span class="math-container">$\mathbb{Z}$</span>-module is zero because any ring is projective over itself. But the injective dimension is not zero because <span class="math-container">$\mathbb{Z}$</span> is not a divisible abelian group. Indeed, the injective dimension is 1, as can be seen from the fact that <span class="math-container">$\mathbb{Z}$</span> is a PID and hence has global dimension 1. Or you can just write down an injective resolution. Or you can read Dummit-Foote for their treatment.</p>
|
292,159 | <blockquote>
<p>Find the relation between <span class="math-container">$a, b, c, d$</span> if the roots of <span class="math-container">$ax^3+bx^2+cx+d=0$</span> are in geometric progression.</p>
<p>By considering <span class="math-container">$(\alpha+\beta)(\beta+\gamma)(\alpha+\gamma)$</span> show that the above cubic equation has two roots equal in size but opposite in sign if and only if <span class="math-container">$ad=bc$</span>.</p>
</blockquote>
<p>I can do the second part if I am given some hints on the first. I can use <span class="math-container">$\beta$</span> as the middle root and make <span class="math-container">$\alpha=\beta/r$</span> and <span class="math-container">$\gamma=r \times \beta$</span>, but haven't got anywhere so far.</p>
| lab bhattacharjee | 33,337 | <p>If $\alpha, \beta, \gamma$ are the roots of $ax^3+bx^2+cx+d=0$</p>
<p>Using <a href="http://mathworld.wolfram.com/VietasFormulas.html" rel="nofollow">Vieta's Formulas</a>, $$\alpha+\beta+\gamma=-\frac ba, \alpha\beta+\beta\gamma+\gamma\alpha=\frac ca,\alpha\beta\gamma=-\frac da$$</p>
<p>If $\alpha,\beta,\gamma$ are in Geometric Progression, $\frac\beta\alpha=\frac\gamma\beta$ is constant $=r$(say).
Clearly, $\alpha r\ne 0$</p>
<p>$\implies \beta=\alpha r, \gamma=\beta r=\alpha r^2$</p>
<p>$$\implies \alpha+\alpha r+\alpha r^2=-\frac ba\implies \alpha(1+r+r^2)=-\frac ba$$</p>
<p>$$\alpha\alpha r+\alpha r\alpha r^2+\alpha r^2\alpha=\frac ca\implies \alpha^2 r(1+r+r^2)=\frac ca$$</p>
<p>$$\alpha\alpha r\alpha r^2=-\frac da\implies \alpha^3 r^3=-\frac da$$</p>
<p>So, $$\frac{\alpha^2 r(1+r+r^2)}{\alpha(1+r+r^2)}=\frac{\frac ca}{-\frac ba}\implies \alpha r=-\frac cb \text{ if }1+r+r^2\ne0$$</p>
<p>Then, $$\left(-\frac cb\right)^3=-\frac da\implies b^3d=c^3a$$</p>
<p>If $1+r+r^2=0,$</p>
<p>$b=c=0$
and $r=\frac{-1\pm i\sqrt3}2$, i.e., $r$ is a complex cube root of $1$ </p>
<p>So, the equation reduces to $ax^3+d=0$ whose roots are $\left(-\frac da\right)^\frac13,\left(-\frac da\right)^\frac13r, \left(-\frac da\right)^\frac13r^2$.</p>
<p>So, we don't find any relationship among $a,b,c$ and $d$ here.</p>
<hr>
<p>In the 2nd case, let us find the equation whose roots are $\alpha+\beta,\beta+\gamma, \gamma+\alpha$ using the Transformation of Equations.</p>
<p>If $y=\alpha+\beta=\alpha+\beta+\gamma-\gamma=-\frac ba-\gamma\implies \gamma=-\left(y+\frac ba\right)$</p>
<p>As $\gamma$ is a root of the given equation, $-a\left(y+\frac ba\right)^3+b\left(y+\frac ba\right)^2-c\left(y+\frac ba\right)+d=0$</p>
<p>On simplification, $$ay^3+(\cdots)y^2+(\cdots)y+\frac{bc-ad}a=0$$ whose roots are $\alpha+\beta,\beta+\gamma, \gamma+\alpha$</p>
<p>Using Vieta's Formulas again, $(\alpha+\beta)(\beta+\gamma)(\gamma+\alpha)=\frac{ad-bc}{a^2}$</p>
<p>If among $\alpha,\beta,\gamma$ two have same absolute value but opposite sign, $(\alpha+\beta)(\beta+\gamma)(\gamma+\alpha)=0\implies ad=bc$ (the condition is necessary)</p>
<p>Sufficiency: Again, if $ad=bc,$ $\frac ab=\frac cd=t$(say,) so $a=bt,c=dt$</p>
<p>Then the equation becomes $$btx^3+bx^2+dtx+d=0\implies bx^2(tx+1)+d(tx+1)=0$$</p>
<p>So, $$(tx+1)(bx^2+d)=0$$ One root is $-\frac1t$. What about the others?</p>
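<p>Both conditions are easy to spot-check numerically (hypothetical cubics chosen just for illustration: roots $1,2,4$ form a GP, and roots $2,-2,3$ contain a pair equal in size and opposite in sign):</p>

```python
# Roots 1, 2, 4 (a GP with ratio 2): (x-1)(x-2)(x-4) = x^3 - 7x^2 + 14x - 8.
a, b, c, d = 1, -7, 14, -8
assert b**3 * d == c**3 * a      # the GP condition b^3 d = c^3 a

# Roots 2, -2, 3: (x-2)(x+2)(x-3) = x^3 - 3x^2 - 4x + 12.
a2, b2, c2, d2 = 1, -3, -4, 12
assert a2 * d2 == b2 * c2        # the condition ad = bc

print("both conditions hold")
```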
|
3,672,948 | <p>Consider the series
<span class="math-container">$$
S = \sum_{n=1}^\infty e^{-n^2x}
$$</span>
then I have to argue for that <span class="math-container">$S$</span> is convergent if and only if <span class="math-container">$x>0$</span>.</p>
<p>As this is if and only if I think I have to assume first that S is convergent and show that this implies that <span class="math-container">$x>0$</span> but I am not sure how to. It is easy for me see that if <span class="math-container">$x=0$</span> the series is divergent but if I were to assume that S is convergent and that for a contradiction that <span class="math-container">$x\leq 0$</span> how do I proceed? And how the other way around? </p>
<p>Do you mind helping me? </p>
| mathemagician99 | 784,628 | <p>If S is convergent then <span class="math-container">$e^{-n^2x}\to 0$</span>, which it only does if <span class="math-container">$x>0$</span>.
If <span class="math-container">$x\le 0$</span> then <span class="math-container">$-n^2x\ge 0$</span> and hence <span class="math-container">$\sum_{n\in\mathbb{N}} e^{-n^2x}\ge \sum_{n\in\mathbb{N}} e^{0}= \sum_{n\in\mathbb{N}}1=\infty$</span>. (contraposition)</p>
|
997,116 | <p>If one root of the equation $x^2 + ax + b = 0$ and $x^2 + bx + a = 0$ is common and $a \ne b$ then:</p>
<p>The options are as follows:
$$\begin{array}{ll}
(A)\quad& a + b = 0\\
(B)& a + b = -1\\
(C)& a - b = 1\\
(D)& a + b = 1
\end{array}$$</p>
<p>Idk how to solve this, please help me.</p>
| bankrip | 96,097 | <p>$x^2+ax+b=x^2+bx+a \implies (a-b)(x-1)=0$. Since $a \neq b$, $x=1$. Thus $1^2+a+b=0$. Hence, $a+b=-1$. </p>
<p>Thus, C is the answer. </p>
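<p>A quick check with concrete numbers (my own example: take $a=2$, $b=-3$, so $a+b=-1$ and $a\ne b$):</p>

```python
a, b = 2, -3                       # a + b = -1, a != b

def p(x): return x**2 + a * x + b  # roots 1 and -3
def q(x): return x**2 + b * x + a  # roots 1 and 2

print(p(1), q(1))  # 0 0 -- x = 1 is the common root
```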
|
276,725 | <p>Is there a positive number $n$ of distinct odd integers $z_1,z_2, \ldots, z_n \geq 3$ such that $\frac{1}{z_1} + \frac{1}{z_2} + \cdots + \frac{1}{z_n} = 1$?</p>
| Calvin Lin | 54,563 | <p>In 1954, it was shown by <a href="http://en.wikipedia.org/wiki/Egyptian_fraction#Open_problems">Stewart and Breusch</a> (independently) that if $\frac {p}{q} >0$ and $q$ is odd, then it can be written as the sum of finitely many reciprocals of odd numbers.</p>
<p>As a specific example,</p>
<p>$$1=\frac {1}{3} + \frac {1}{5} + \frac {1}{7} + \frac {1}{9} + \frac {1}{15} + \frac {1}{21} + \frac {1}{27} + \frac {1}{35} + \frac {1}{63} + \frac {1}{105} + \frac {1}{135}$$</p>
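<p>The example can be verified exactly with Python's <code>fractions</code> module (a check added for illustration):</p>

```python
from fractions import Fraction

denoms = [3, 5, 7, 9, 15, 21, 27, 35, 63, 105, 135]
assert all(d % 2 == 1 and d >= 3 for d in denoms)  # odd integers >= 3
assert len(set(denoms)) == len(denoms)             # all distinct

total = sum(Fraction(1, d) for d in denoms)
print(total)  # 1
```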
|
18,691 | <p>Let's say I have a folowing set of data:</p>
<ul>
<li>k = 1 : list of values </li>
<li>k = 3 : list of values </li>
<li>k = 10 : list of values</li>
</ul>
<p>I know that to make a <code>BoxWhiskerChart</code> I have to give it a list of lists of these values as data and the ks as labels. </p>
<p>How do I force the offset between the boxes for different ks to be proportional to the values of ks?</p>
<p>This is like combining <code>ListPlot</code> and <code>BoxWhiskerChart</code> - list plot gives appropriate position of boxes relative to the x-axis.</p>
| kglr | 125 | <p>Using a custom <code>ChartElementFunction</code> that translates the graphics primitives based on specified <code>x</code> values passed as metadata:</p>
<pre><code>ClearAll[ceF]
ceF[func_: "GlassBoxWhisker"][delta_] :=
ChartElementData[func][{#3[[1]] + {-delta, delta}, #[[2]]}, ##2] &
salaries = ExampleData[{"Statistics", "UniversitySalaries"}, "DataElements"];
depts = {"Mathematics", "History", "English", "Chemistry", "Law",
"Physics", "Statistics"};
data = Table[Cases[salaries, {d, _, salary_, "A"} :> salary], {d, depts}];
SeedRandom[3]
xvalues = Sort@RandomReal[100, Length@data];
delta = 2 Min[Differences[xvalues]]/5;
Show[BoxWhiskerChart[Thread[data -> xvalues], ChartStyle -> 10,
ImageSize -> 500, ChartElementFunction -> (ceF[][delta]), ChartLegends -> depts],
ListLinePlot[Transpose[{xvalues, Median /@ data}], PlotStyle -> Thick],
FrameTicks -> {{Automatic, Automatic},
{{#, Rotate[Style[#, 14, Bold, "Panel"], 90 Degree]} & /@
xvalues, {#, ""} & /@ xvalues}},
ImagePadding -> {{40, 10}, {70, 10}}]
</code></pre>
<p><a href="https://i.stack.imgur.com/SEKpK.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SEKpK.jpg" alt="enter image description here"></a></p>
|
1,166,127 | <p>I want to select an even rows of matrix. How can I show it using mathematical notations.</p>
<p>Something like this:</p>
<p>$X_{i:}$ where <code>i</code> is even.</p>
| philipp | 136,005 | <p>even indices:</p>
<p>$X_{2i}$</p>
<p>uneven indices can be expressed as</p>
<p>$X_{2i+1}$</p>
<p>Of course this changes the range for $i$.</p>
|
42,854 | <p>How much money would you have if the amount of money you started with was 5 and it increased by 5 a day for 365 days.
So January 1st you receive 5, Jan 2nd you receive 10, the third 15.. etc.
I'm wondering what the formula is</p>
| Eric Naslund | 6,075 | <p><strong>Hint:</strong>
On the 365th day, you will receive $365*5=1825$ dollars. Then the total amount of money is </p>
<p>$$5+10+15+\cdots+1820+1825.$$ Consider double this amount. That is consider $$\begin{array}{ccccccc}
5 & +10 & +15 & +\cdots & +1815 & +1820 & +1825\\
1825 & +1820 & +1815 & +\cdots & +15 & +10 & +5\end{array}$$</p>
<p>Adding up the rows we get $$1830+1830+1830+\cdots+1830+1830+1830$$</p>
<p>$$=365*1830.$$ Now take this and divide it by 2. Then we have the original sum.</p>
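<p>(The pairing argument can be checked directly; a small Python sketch added for illustration:)</p>

```python
# Direct sum 5 + 10 + ... + 1825 versus the pairing formula 365 * 1830 / 2.
direct = sum(5 * k for k in range(1, 366))
paired = 365 * 1830 // 2

print(direct, paired)  # 333975 333975
```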
<p>See also: <a href="http://en.wikipedia.org/wiki/Arithmetic_progression">Arithmetic Progression.</a></p>
<p>Hope that helps,</p>
|
1,970,103 | <p>I'm trying to show that for any countable ordinal $\alpha$, there is a subset of $(\mathbb{R},<)$ that has order type $\alpha$. In this post I'm not asking for a solution, but instead for a proof-check. If it's wrong I'll go back to the drawing board.</p>
<p>I tried using induction: assume that every $\beta<\alpha$ admits a subset of $(\mathbb{R},<)$ with order type $\beta$. Since $\alpha$ is assumed countable $\{\beta\in \text{Ord}:\beta<\alpha\}$ is countable, so we can enumerate as $\beta_0,\beta_1\dots$. Then map $f_k:\beta_k\to [k,k+1)\subset \mathbb{R}$ where $f_k$ order preserving and continuous, and $\sup f( \beta_k) = k+1$. Then $\bigcup \beta_k$ is a well-order with order type $\alpha$.</p>
<p>Does this work? If so, is it ok to claim the existence of such $f_k$'s?</p>
| DanielWainfleet | 254,665 | <p>The proof is incorrect. I think you meant to say that $\cup_kS_k$ has order-type $\alpha$ where $S_k$ is the image of $\beta_k$ under $f_k.$ But it's still wrong. For example if $\alpha=\omega + \omega=\omega \cdot 2$ then for some $k',k'',k'''$ we have $\beta_{k'}=\omega,\; \beta_{k''}=\omega +1,\; \beta_{k'''}=\omega + 2.$ Then each of the 3 intervals $[k',k'+1),\;[k'',k''+1),\;[k''',k'''+1)$ contains a subset of $\cup_kS_k$ that is order-isomorphic to $\omega,$ so the order-type of $\cup_kS_k$ is at least $\omega \cdot 3.$</p>
<p>For real numbers $a,b$ with $a<b$ let $g_{a,b}:[0,\infty) \to [a,b)$ be an order-isomorphic bijection. </p>
<p>For countable ordinal $\alpha ,$ suppose that for all $b<\alpha$ there exists an order-embedding $f_b$ of $b$ into $[0,1] .$ We show there exists an order-embedding of $\alpha$ into $[0,1].$</p>
<p>(i). If $\alpha =0 $ then $\alpha = \emptyset.$ Let $f_0=\emptyset.$ (Trivial special case.)</p>
<p>(ii). If $\alpha =c+1,$ let $f_c:c\to [0,1]$ be an order-embedding of $c$ into $[0,1] .$ Define $f_{\alpha} (x)=g_{0,1}(f_c(x))$ for $x <c,$ and $f_{\alpha }(c)=1.$ Then $f_{\alpha }$ is an order-embedding of $\alpha$ into $[0,1].$</p>
<p>(iii). If $0\ne \alpha =\cup \alpha,$ take $\{a_n: n\in \omega\}\subset \alpha$ with $a_n<a_{n+1}<\alpha$ for each $n\in \omega,$ and $\bigcup_{n\in\omega}a_n=\alpha ,$ and $a_0=0.$ </p>
<p>For each $n\in \omega$ the set $a_{n+1}$ \ $a_n$ is order-isomorphic to a (unique) ordinal $b_n,$ with $b_n<\alpha.$ So let $f_n$ be an order-embedding of $a_{n+1}$ \ $a_n$ into $[0,1].$ </p>
<p>We have $\cup_{n\in \omega} (a_{n+1}$ \ $a_n)=\alpha$ (because $a_0=0$). For each $x\in \alpha$ there is a unique $n\in \omega$ such that $x\in a_{n+1}$ \ $a_n.$</p>
<p>$$\text {For } x\in a_{n+1} \backslash a_n \text { let }\quad h_{\alpha}(x)=g_{n,n+1}(f_n(x))$$ $$\text {and }\quad f_{\alpha}(x)=g_{0,1}(h_{\alpha}(x)).$$ Then $f_{\alpha}:\alpha\to [0,1]$ is an order-embedding. </p>
|
1,970,103 | <p>I'm trying to show that for any countable ordinal $\alpha$, there is a subset of $(\mathbb{R},<)$ that has order type $\alpha$. In this post I'm not asking for a solution, but instead for a proof-check. If it's wrong I'll go back to the drawing board.</p>
<p>I tried using induction: assume that every $\beta<\alpha$ admits a subset of $(\mathbb{R},<)$ with order type $\beta$. Since $\alpha$ is assumed countable $\{\beta\in \text{Ord}:\beta<\alpha\}$ is countable, so we can enumerate as $\beta_0,\beta_1\dots$. Then map $f_k:\beta_k\to [k,k+1)\subset \mathbb{R}$ where $f_k$ order preserving and continuous, and $\sup f( \beta_k) = k+1$. Then $\bigcup \beta_k$ is a well-order with order type $\alpha$.</p>
<p>Does this work? If so, is it ok to claim the existence of such $f_k$'s?</p>
| Mirko | 188,367 | <p>Here is an alternative way of showing that every countable ordinal is order-isomorphic to a subset of the real line (or a subset of <span class="math-container">$[0,1]$</span>, below).</p>
<p>For finite ordinals this is clear (just take a finite subset of <span class="math-container">$[0,1]$</span> with the same number of points).</p>
<p>Now let <span class="math-container">$\nu=\{\gamma:\gamma<\nu\}$</span> be an infinite countable ordinal. Fix a bijection <span class="math-container">$f:\{\gamma:\gamma<\nu\}\to\Bbb N_+$</span> (where <span class="math-container">$\Bbb N_+$</span> denotes the set of all positive integers), and map each <span class="math-container">$\gamma<\nu$</span> to <span class="math-container">$\psi(\gamma):=\sum\limits_{\delta<\gamma}\frac{1}{2^{f(\delta)}}$</span>.<br />
Note that this maps <span class="math-container">$\gamma=0$</span> to <span class="math-container">$0\in[0,1]$</span> (i.e. <span class="math-container">$\psi(0)=0\ $</span>) because
<span class="math-container">$\sum\limits_{\delta<0}\frac{1}{2^{f(\delta)}}$</span> is a sum with an empty index set, and by convention it sums up to <span class="math-container">$0$</span>.<br />
Then, <span class="math-container">$\psi(1)=\frac{1}{2^{f(0)}}$</span>.<br />
Also
<span class="math-container">$\psi(2)=\frac{1}{2^{f(0)}}+\frac{1}{2^{f(1)}}\ $</span>, etc.</p>
<p>Clearly if <span class="math-container">$\beta<\gamma<\nu$</span> then <span class="math-container">$\psi(\beta)<\psi(\gamma)$</span>. Indeed, <span class="math-container">$\psi(\beta)+\frac{1}{2^{f(\beta)}}\le\psi(\gamma)$</span>.</p>
<p>Note also that if <span class="math-container">$A=\{\psi(\gamma):\gamma<\nu\}$</span>, then <span class="math-container">$\inf(A)=\min(A)=0$</span> and <span class="math-container">$\sup(A)\le1$</span>.
We have <span class="math-container">$\sup(A)=\max(A)<1$</span> only when <span class="math-container">$\nu$</span> is a successor ordinal (assuming <span class="math-container">$\nu$</span> is countably infinite, to begin with).</p>
<p>An alternative definition would be
<span class="math-container">$\varphi(\gamma):=\sum\limits_{\delta\le\gamma}\frac{1}{2^{f(\delta)}}$</span>.<br />
Then, <span class="math-container">$\varphi(0)=\frac{1}{2^{f(0)}}$</span>.
Also
<span class="math-container">$\varphi(1)=\frac{1}{2^{f(0)}}+\frac{1}{2^{f(1)}}\ $</span>, etc.<br />
But, <span class="math-container">$\psi$</span> might be a better choice, since <span class="math-container">$\psi(\gamma)=\sup\limits_{\delta<\gamma}\psi(\delta)$</span> (in <span class="math-container">$[0,1]$</span>) exactly when <span class="math-container">$\gamma$</span> is limit, i.e.
<span class="math-container">$\gamma=\sup\limits_{\delta<\gamma}\delta$</span>.</p>
<p>The above is not properly an answer to the original question, as the OP did not ask for a different approach, but only asked for a verification of the approach presented in the question. I put, nevertheless, this different approach here, for reference. It is slick (though not completely self-contained since it presumes background knowledge of sum of an infinite series), but it is also simple and, I would think, good-to-know.</p>
<p>The notation <span class="math-container">$\sum\limits_{\delta<\gamma}\frac{1}{2^{f(\delta)}}$</span> may need some extra explanation since one usually sees something like
<span class="math-container">$\sum\limits_{n=1}^\infty$</span> rather than <span class="math-container">$\sum\limits_{\delta<\gamma}$</span>, but the latter is ok too, since all terms are non-negative (so the order in which they appear won't matter) so
<span class="math-container">$\sum\limits_{\delta<\gamma}\frac{1}{2^{f(\delta)}}$</span> could be defined as <span class="math-container">$\sup$</span> of all possible sums of the form <span class="math-container">$\frac{1}{2^{f(\delta_1)}}+\cdots+\frac{1}{2^{f(\delta_k)}}$</span> where <span class="math-container">$k\ge1$</span> and <span class="math-container">$0\le\delta_1<\dots<\delta_k<\gamma$</span>.
This <span class="math-container">$\sup$</span> cannot exceed <span class="math-container">$\sum\limits_{n=1}^\infty\frac{1}{2^n}=1$</span>.</p>
<p>Yet another way to explain this, if <span class="math-container">$g=f^{-1}$</span> is the inverse bijection, <span class="math-container">$g:\Bbb N_+\to\{\gamma:\gamma<\nu\}$</span>, then for each <span class="math-container">$\gamma<\nu$</span> and all <span class="math-container">$n\ge1$</span>, one may define
<span class="math-container">$a_{n,\gamma}= \begin{cases}
\frac{1}{2^n}\ ,\ {\mathrm{ if }}\ g(n)<\gamma\\
0\ ,\ {\mathrm{ if }}\ \gamma\le g(n)<\nu
\end{cases}\ ,$</span> and then <span class="math-container">$\sum\limits_{\delta<\gamma}\frac{1}{2^{f(\delta)}}:=\sum\limits_{n=1}^\infty a_{n,\gamma}\ $</span>.<br />
Clearly <span class="math-container">$0\le\psi(\gamma)=\sum\limits_{n=1}^\infty a_{n,\gamma}<
\sum\limits_{n=1}^\infty\frac{1}{2^n}=1$</span>, for each <span class="math-container">$\gamma<\nu$</span>. Also, if <span class="math-container">$\beta<\gamma<\nu$</span> then <span class="math-container">$0\le a_{n,\beta}\le a_{n,\gamma}$</span> for all <span class="math-container">$n$</span>, and if <span class="math-container">$m=f(\beta)$</span> then
<span class="math-container">$a_{m,\beta}=0<\frac{1}{2^m}=a_{m,\gamma}$</span>
so <span class="math-container">$\sum\limits_{n=1}^\infty a_{n,\beta}<\frac{1}{2^m}+\sum\limits_{n=1}^\infty a_{n,\beta}\le\sum\limits_{n=1}^\infty a_{n,\gamma}\ .$</span></p>
|
300,154 | <p>I am considering an optimization problem of the form:
\begin{equation}
\begin{split}
f(s) &= \min_{X} \mathrm{tr}(C(s)X) \\
&\;\;\;\;\;\;\;\;\;\;\; X \ge 0, \\
&\;\;\;\;\;\;\;\;\;\;\; \mathrm{tr}(A_iX) = a_i, \;\; 1 \le i \le M,
\end{split}
\end{equation}
where the minimization is over $n\times n$ Hermitian matrices $X$. Further, $A_i$ for $1 \le i \le M$ denote some $n\times n$ Hermitian matrices which together with $a_i \in \mathbb{R}$ determine linear constraints on $X$. Finally, the matrix-valued function $C(s)$ is of the block form:
\begin{equation}
C(s) = \left( \begin{array}{cc} C_{1}(s) & 0 \\ 0 & 0\end{array} \right),
\end{equation}
where the upper left block $C_1(s)$ is of size $(n_1 + 1) \times (n_1 + 1)$ for some $n_1 < n$, and is given by:
\begin{equation}
C_1(s) = \left( \begin{array}{ccccc} I_{n_1\times n_1} & -ic \mathbb{I}_{n_1\times n_1} & \cdot & \cdot \\ i c \mathbb{I}_{n_1\times n_1} & \cdot & -i \frac{s}{2} \mathbb{I}_{n_1\times n_1} & \cdot \\ \cdot & i \frac{s}{2} \mathbb{I}_{n_1\times n_1} & \cdot & \cdot \\ \cdot & \cdot & \cdot & s^2\end{array} \right).
\end{equation}
Here, $c \in \mathbb{R}$, $I_{n_1\times n_1}$ is the $n_1 \times n_1$ matrix of ones and $\mathbb{I}_{n_1\times n_1}$ denotes the $n_1 \times n_1$ identity matrix (whereas all entries indicated by $\cdot$ vanish).</p>
<p>Can it be shown that $f(s)$ is convex?</p>
<p>If not, which further requirements has the optimization to fulfill in order to guarantee convexity of $f(s)$?</p>
| Mark L. Stone | 75,420 | <p>Yes, this is convex because the objective function and all constraints are convex.</p>
<p>The objective function is affine (linear), which is convex. The semidefinite constraint on X is convex. The trace equality constraint on X is affine (linear), and therefore is convex.</p>
|
2,936,994 | <p>There are 7 white balls in a row and a fair die (a cube with the numbers 1, 2,..., 6 on its six faces). We roll the die 7 times and paint the i-th ball black if we roll either a 5 or a 6 on the i-th roll.</p>
<p>(a) What is the expected number of black balls?</p>
<p>(b) What is the chance that there exist 6 or more consecutive black balls?</p>
<p>(c) What is the chance that there are not 4 or more consecutive white balls?</p>
<p>Our study group has answered (a) and (b) and are confident we got those correct. We are just having trouble with the last one (c). </p>
<p>(a) We reasoned that the over all experiment can be modeled by a Binomial(7,<span class="math-container">$\frac{1}{3}$</span>) where <span class="math-container">$\frac{1}{3}$</span> is the probability of painting a ball black (probability of rolling a 5 or 6). Therefore the expected value is just <span class="math-container">$7 \times \frac{1}{3} \approx 2.3$</span>.</p>
<p>(b) There are only 2 ways to get exactly 6 consecutive black balls, each with probability <span class="math-container">$(\frac{2}{3})(\frac{1}{3})^6$</span> (the remaining ball must be white), and one way to get 7 consecutive black balls, with probability <span class="math-container">$(\frac{1}{3})^7$</span>. Therefore P(6 or more consecutive black balls) <span class="math-container">$= 2 \times (\frac{2}{3})(\frac{1}{3})^6 + (\frac{1}{3})^7$</span>.</p>
<p>(c) So we decided to attempt to calculate it as 1 - P(4 or more consecutive white balls). We believe that we could possibly brute force this by literally finding the probability of every combination of 4 or more consecutive white balls. The problem is that we are studying this question to help us prep for a qualification exam and believe that there should be a non-brute force way to solve it for it to be a valid exam question. </p>
| lulu | 252,071 | <p>Here is a recursive method:</p>
<p>Let <span class="math-container">$P(n)$</span> denote the answer for a string of length <span class="math-container">$n$</span>. We remark that <span class="math-container">$$n≤3\implies P(n)=1 \quad \&\quad P(4)=1-\left(\frac 23\right)^4$$</span></p>
<p>For <span class="math-container">$n>4$</span> we remark that if a sequence is good then it must begin with exactly one of <span class="math-container">$B,WB,W^2B,W^3B$</span> and of course it must then be followed by a good sequence of shorter length. It follows that <span class="math-container">$$P(n)=\frac 13\times P(n-1)+\frac 23\times \frac 13\times P(n-2)+\left( \frac 23\right)^2\times \frac 13\times P(n-3)+\left( \frac 23\right)^3\times \frac 13\times P(n-4)$$</span></p>
<p>This is very easy to implement mechanically, maybe a bit slow to do with pencil and paper. </p>
<p>We get <span class="math-container">$$P(5)=0.736625514,\quad P(6)=0.670781893,\quad\boxed {P(7)=0.604938272}$$</span></p>
<p>Sanity Check: Let's get <span class="math-container">$P(5)$</span> by hand. The only "bad" sequences of length <span class="math-container">$5$</span> are <span class="math-container">$W^4B, BW^4, W^5$</span>. Easy to compute the probabilities of each and we get <span class="math-container">$$P(5)=1-2\times \left(\frac 23\right)^4\times \frac 13-\left( \frac 23 \right)^5=0.736625514$$</span> which matches the result obtained by the recursion.</p>
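<p>(An illustrative sketch, not part of the original answer: the recursion above is straightforward to implement, e.g. with a hypothetical helper <code>P</code>.)</p>

```python
def P(n):
    # probability that a length-n row (each ball black with prob. 1/3)
    # contains no run of 4 or more consecutive white balls
    if n <= 3:
        return 1.0
    if n == 4:
        return 1 - (2 / 3) ** 4
    # a good row starts with B, WB, W^2 B, or W^3 B,
    # followed by a good row of the remaining length
    return sum((2 / 3) ** k * (1 / 3) * P(n - 1 - k) for k in range(4))

print(P(5), P(6), P(7))  # reproduces the three values quoted above
```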
|
839,516 | <p>What is the number of binary sequences of length $n$, with no two consecutive zeros, and if starts with $0$ has to end with $1$.</p>
<p>Would appreciate suggestions and help.</p>
<p>I tried counting the total sequences and then subtracting the ones containing 2 consecutive zeros, and then subtracting the ones starting and ending with zeros - but it got messy and confusing.</p>
| André Nicolas | 6,312 | <p>Call a sequence of length $n$ <em>good</em> if it has no two consecutive $0$ (no other restriction).</p>
<p>Let $g_n$ be the number of good sequences of length $n$.</p>
<p>Now look at our more restricted collection, with the added condition that if we start with $0$ we must end with $1$. There are two types of such sequences of length $n$.</p>
<p>(i) The sequences of length $n$ that start with $1$. We can get these by taking any good sequence of length $n-1$, and putting a $1$ in front. There are therefore $g_{n-1}$ of these.</p>
<p>(ii) The sequences of length $n$ that start with $0$. Then the next entry must be $1$, and the last must be $1$. In between can be any good sequence of length $n-3$, so there are $g_{n-3}$ of these.</p>
<p>So if $a_n$ is the number of restricted condition sequences of length $n$, then
$$a_n=g_{n-1}+g_{n-3}.$$</p>
<p>Now we go after the $g_n$, a more familiar problem. It turns out that the $g_n$ are just the Fibonacci numbers. For a good sequence of length $n$ either ends with a $1$ or a $0$. If it ends with $1$, it can be produced by appending a $1$ to a good sequence of length $n-1$. If it ends with $0$, then the previous entry (if any) must be $1$, and the part before that is any good sequence of length $n-2$. Thus $g_n=g_{n-1}+g_{n-2}$.</p>
<p>Now put the pieces together. Note that $g_0=1$ and $g_1=2$.</p>
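<p>(A small sketch, added for illustration: implementing the two recurrences directly, with <code>g</code> and <code>a</code> as above.)</p>

```python
def g(n):
    # number of good sequences of length n (no two consecutive zeros)
    if n == 0:
        return 1
    if n == 1:
        return 2
    return g(n - 1) + g(n - 2)

def a(n):
    # restricted count: additionally, a sequence starting with 0 must end with 1
    return g(n - 1) + g(n - 3)

print([a(n) for n in range(3, 8)])  # → [4, 7, 11, 18, 29]
```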
|
1,973,724 | <p>Now this was the explanation on how to solve a prove in my book : </p>
<p>We will prove that the two sets complement (A∩B) and complement(A) ∪ complement(B) are equal by showing that each is a subset of the other. </p>
<p>First, we will show that complement (A∩B) ⊆ complement(A) ∪ complement(B). We do this by showing that if x is in complement (A∩B), then it must also be in complement(A) ∪ complement(B). Now suppose that x ∈ complement (A∩B). By the definition of complement, x ∉ A∩B. Using the definition of intersection, we see that the proposition ¬((x ∈ A) ∧ (x ∈ B)) is true. By applying De Morgan’s law for propositions, we see that ¬(x ∈ A) or ¬(x ∈ B). Using the definition of negation of propositions, we have x ∉ A or x ∉ B. Using the definition of the complement of a set, we see that this implies that x ∈ complement(A) or x ∈ complement(B). Consequently, by the definition of union, we see that x ∈ complement(A) ∪ complement(B). We have now shown that complement (A∩B) ⊆ complement(A) ∪ complement(B).</p>
<p>Next, we will show that complement(A) ∪ complement(B) ⊆ complement (A∩B). We do this by showing that if x is in complement(A) ∪ complement(B), then it must also be in complement (A∩B). Now suppose that x ∈ complement(A) ∪ complement(B). By the definition of union, we know that x ∈ complement(A) or x ∈ complement(B). Using the definition of complement, we see that x ∉ A or x ∉ B. Consequently, the proposition ¬(x ∈ A) ∨ ¬(x ∈ B) is true. By De Morgan’s law for propositions, we conclude that ¬((x ∈ A) ∧ (x ∈ B)) is true. By the definition of intersection, it follows that ¬(x ∈ A∩B). We now use the definition of complement to conclude that x ∈ complement (A∩B). This shows that complement(A) ∪ complement(B) ⊆ complement (A∩B). Because we have shown that each set is a subset of the other, the two sets are equal, and the identity is proved.</p>
<p>Why do I have to negate the definition of an intersection to prove that complement (A∩B) = complement(A) ∪ complement(B)?
What does the definition of the intersection have to do with this?
This is my first time working through a proof. Am I just supposed to manipulate the expression complement (A∩B) and simplify it by definitions? </p>
| user2825632 | 250,232 | <p>If you are asking for an intuitive explanation of why $(A \cap B)^c = (A^c \cup B^c)$, consider a set $X$ containing $A$ and $B$, so that $A^c = X - A$ and $B^c = X - B$. Then we see that any element in $X$ lies in exactly one of the following sets (they are all mutually exclusive):</p>
<p>$$A \cap B$$</p>
<p>$$A^c \cap B$$</p>
<p>$$A \cap B^c$$</p>
<p>$$A^c \cap B^c$$</p>
<p>Then you see that when you ask for what $(A \cap B)^c$ is, you see that it is precisely the union of the three remaining sets: $(A^c \cap B) \cup (A \cap B^c) \cup (A^c \cap B^c)$. As it turns out, we can see that $(A^c \cap B) \cup (A \cap B^c) \cup (A^c \cap B^c) = A^c \cup B^c$.</p>
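<p>(Not part of the answer above, just a quick sanity check of the identity and the partition on concrete finite sets.)</p>

```python
U = set(range(10))           # ambient set
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}
comp = lambda S: U - S       # complement relative to U

# De Morgan: complement of the intersection equals union of the complements
print(comp(A & B) == comp(A) | comp(B))  # True

# (A ∩ B)^c is the union of the three remaining cells of the partition
print(comp(A & B) == (comp(A) & B) | (A & comp(B)) | (comp(A) & comp(B)))  # True
```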
|
98,317 | <p>This comes from Artin Second Edition, page 219. Artin defined $G = \langle x,y\mid x^3, y^3, yxyxy\rangle$, and uses the Todd-Coxeter Algorithm to show that the subgroup $H = \langle y\rangle$ has index 1, and therefore $G = H$ is the cyclic group of order 3.</p>
<p>That being the case, $x$ cannot be either $y$ or $y^2$, for then the third relation would not be satisfied. So the relation $x=1$ must follow from the given relations. Is there another way of seeing this besides from the Todd-Coxeter algorithm?</p>
| Andrés E. Caicedo | 462 | <p>Let's see. We have $yxyxy=1$, so (multiplying by $y$ on the left) $y=y^2xyxy$, so (cancelling $y$ on the right) $y^2xy=x^{-1}$.</p>
<p>Also, $yxyxy=1$, so $yxyxy^2=y$, or $xyxy^2=1$. So $yxy^2=x^{-1}$. </p>
<p>It follows that $y^2xy=yxy^2$, or $yx=xy$. (So the group is Abelian.)</p>
<p>But then $1=yxyxy=x^2y^3=x^2$. Since $x^3=1$ as well, we finally conclude $x=1$.</p>
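<p>(A machine sanity check, not a proof: any pair $(x,y)$ in a concrete group satisfying the three relations generates a quotient of $G$, so the relations must force $x$ to be the identity there. A brute-force scan over $S_4$, with permutations as tuples, confirms this.)</p>

```python
from itertools import permutations

def compose(p, q):
    # (p ∘ q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

n = 4
e = tuple(range(n))
perms = list(permutations(range(n)))
cube = lambda p: compose(p, compose(p, p))

for x in perms:
    if cube(x) != e:
        continue
    for y in perms:
        if cube(y) != e:
            continue
        yxyxy = compose(y, compose(x, compose(y, compose(x, y))))
        if yxyxy == e:
            assert x == e  # the relations force x = 1
print("checked")
```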
|
573,484 | <p>Prove that if $A \setminus B = \emptyset$, then $A \subseteq B$.</p>
<p>The Venn Diagram helped me to visualize what I'm trying to show (thanks @GA316), but the book asks for a written proof (step by step) by contradiction. Sorry if I wasn't more specific at first, is just that I've had many troubles in the past with proofs, somehow I have many ideas but I can't seem to connect them to get to the final proof. </p>
<p>This is what I have so far:
$P \rightarrow Q$ is equivalent to $\neg Q \rightarrow \neg P$ contraposition (thanks @The Chaz 2.0)</p>
<p>With P: $ A \setminus B = \emptyset$ and Q: $ A \subseteq B$</p>
<p>so $ \neg Q \equiv A \not\subseteq B\ , \exists x \in A : x \notin B $</p>
<p>be $ t: t \in A \wedge t \notin B $ ...is this right?</p>
<p>as this is the definition for $A \setminus B \ne \emptyset$ ...is this right?</p>
<p>$\therefore \neg Q \rightarrow \neg P \equiv A \not\subseteq B\ \rightarrow A \setminus B \ne\emptyset$</p>
<p>I have many concerns regarding if I'm using the correct notation. I am trying to learn this by myself and have nobody else to ask.</p>
<p>Also, sorry if it took me too long to update, I just started learning about this LaTEX notation.</p>
<p>Thank you very much in advance, you guys are so nice and helpful. You made me feel very welcomed and sure I need to read more about the rules and instructions for using this site. </p>
| MarnixKlooster ReinstateMonica | 11,994 | <p>Here is a complete proof in a calculational style: we start at the most complex side, and then just expand the definitions and simplify, and see where that leads us.
\begin{align}
& A \setminus B = \emptyset \\
\equiv & \qquad \text{"basic property of $\;\emptyset\;$"} \\
& \langle \forall x :: x \not\in A \setminus B \rangle \\
\equiv & \qquad \text{"definition of $\;\setminus\;$"} \\
& \langle \forall x :: \lnot (x \in A \land x \not\in B) \rangle \\
\equiv & \qquad \text{"logic: DeMorgan -- there is not much else we can do"} \\
& \langle \forall x :: x \not\in A \lor x \in B \rangle \\
\equiv & \qquad \text{"logic: $\;\lnot P \lor Q\;$ is one of the ways to write $\;P \Rightarrow Q\;$"} \\
& \langle \forall x :: x \in A \Rightarrow x \in B \rangle \\
\equiv & \qquad \text{"definition of $\;\subseteq\;$"} \\
& A \subseteq B \\
\end{align}</p>
<p>This completes the proof. (Strictly speaking, this even proves the stronger statement $\;A \setminus B = \emptyset \;\equiv\;A \subseteq B\;$.)</p>
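<p>(An illustration, not part of the proof: since the calculation establishes the full equivalence, it can be confirmed exhaustively over all pairs of subsets of a small finite set.)</p>

```python
from itertools import combinations

U = range(5)
subsets = [set(c) for r in range(6) for c in combinations(U, r)]

for A in subsets:
    for B in subsets:
        # A \ B = ∅  ⟺  A ⊆ B
        assert (A - B == set()) == A.issubset(B)
print("verified on all", len(subsets) ** 2, "pairs")
```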
|
573,484 | <p>Prove that if $A \setminus B = \emptyset$, then $A \subseteq B$.</p>
<p>The Venn Diagram helped me to visualize what I'm trying to show (thanks @GA316), but the book asks for a written proof (step by step) by contradiction. Sorry if I wasn't more specific at first, is just that I've had many troubles in the past with proofs, somehow I have many ideas but I can't seem to connect them to get to the final proof. </p>
<p>This is what I have so far:
$P \rightarrow Q$ is equivalent to $\neg Q \rightarrow \neg P$ contraposition (thanks @The Chaz 2.0)</p>
<p>With P: $ A \setminus B = \emptyset$ and Q: $ A \subseteq B$</p>
<p>so $ \neg Q \equiv A \not\subseteq B\ , \exists x \in A : x \notin B $</p>
<p>be $ t: t \in A \wedge t \notin B $ ...is this right?</p>
<p>as this is the definition for $A \setminus B \ne \emptyset$ ...is this right?</p>
<p>$\therefore \neg Q \rightarrow \neg P \equiv A \not\subseteq B\ \rightarrow A \setminus B \ne\emptyset$</p>
<p>I have many concerns regarding if I'm using the correct notation. I am trying to learn this by myself and have nobody else to ask.</p>
<p>Also, sorry if it took me too long to update, I just started learning about this LaTEX notation.</p>
<p>Thank you very much in advance, you guys are so nice and helpful. You made me feel very welcomed and sure I need to read more about the rules and instructions for using this site. </p>
| Joshua P. Swanson | 86,777 | <p>It seems to me you could benefit from writing a proof in multiple stages. Here's an example of my own thought processes on this problem.</p>
<ol>
<li><p>Translate the math symbols into English (or Spanish, as the case may be). I literally read "If $A \setminus B = \emptyset$, then $A \subseteq B$" as "If A with the set B removed is empty, then A is a subset of B."</p></li>
<li><p>"Understand" the problem. For me with this problem I imagine myself being given A and told to remove all elements of B from it. I find I'm left with nothing, and I remark to myself that each element of A must have been in B, so A must have been contained in B.</p></li>
<li><p>Find a proof without symbols. "each element of A must have been in B"--why? Maybe the intuition in (2) is wrong and there is some element of A that's not in B. But then I wouldn't have removed it from A when I removed all elements of B, so I wouldn't have ended up with the empty set, a contradiction.</p></li>
<li><p>Translate your proof into math symbols. I'll do it sentence by sentence:</p>
<ul>
<li>"there is some element of A that's not in B."
-> "Pick $x \in A$. Suppose $x \not \in B$".</li>
<li>"But then I wouldn't have removed it from A when I removed all elements of B"
-> "But then it would be in A minus B" -> "$x \in A \setminus B$"</li>
<li>"so I wouldn't have ended up with the empty set"
-> "$A \setminus B \neq \emptyset$".</li>
<li>"a contradiction" -> "so $x \not \in B$ was false, so $x \in B$".</li>
<li>"A must have been contained in B."
-> "so $A \subset B$".</li>
</ul></li>
</ol>
<p>In all, the proof is now: "Pick $x \in A$. Suppose $x \not \in B$. Then $x \in A \setminus B$. But then $A \setminus B \neq \varnothing$, so $x \not \in B$ was false, so $x \in B$. So $A \subset B$".</p>
<p>This is pretty close to formal logic already. I'm not sure what system you're using, but here's one possible translation to more formal logic:</p>
<ol>
<li>$A \setminus B = \emptyset$</li>
<li>$x \in A \Rightarrow x \in B$:
<ol>
<li>$x \in A$</li>
<li>If $x \not \in B$:
<ol>
<li>$x \in A \setminus B$ (definition of $A \setminus B$)</li>
<li>$A \setminus B \neq \emptyset$ (definition of $\emptyset$)</li>
<li>Contradiction between 1 and 2.2.2.</li>
</ol></li>
<li>$\therefore x \in B$</li>
</ol></li>
<li>$A \subset B$ (definition of $\subset$)</li>
</ol>
|
573,484 | <p>Prove that if $A \setminus B = \emptyset$, then $A \subseteq B$.</p>
<p>The Venn Diagram helped me to visualize what I'm trying to show (thanks @GA316), but the book asks for a written proof (step by step) by contradiction. Sorry if I wasn't more specific at first, is just that I've had many troubles in the past with proofs, somehow I have many ideas but I can't seem to connect them to get to the final proof. </p>
<p>This is what I have so far:
$P \rightarrow Q$ is equivalent to $\neg Q \rightarrow \neg P$ contraposition (thanks @The Chaz 2.0)</p>
<p>With P: $ A \setminus B = \emptyset$ and Q: $ A \subseteq B$</p>
<p>so $ \neg Q \equiv A \not\subseteq B\ , \exists x \in A : x \notin B $</p>
<p>be $ t: t \in A \wedge t \notin B $ ...is this right?</p>
<p>as this is the definition for $A \setminus B \ne \emptyset$ ...is this right?</p>
<p>$\therefore \neg Q \rightarrow \neg P \equiv A \not\subseteq B\ \rightarrow A \setminus B \ne\emptyset$</p>
<p>I have many concerns regarding if I'm using the correct notation. I am trying to learn this by myself and have nobody else to ask.</p>
<p>Also, sorry if it took me too long to update, I just started learning about this LaTEX notation.</p>
<p>Thank you very much in advance, you guys are so nice and helpful. You made me feel very welcomed and sure I need to read more about the rules and instructions for using this site. </p>
| Hicham Amarir | 900,374 | <p>We assume <span class="math-container">$A \subset E$</span>, <span class="math-container">$B \subset E$</span> and <span class="math-container">$A \setminus B = \varnothing$</span></p>
<p><span class="math-container">\begin{align*}
x \in A & \implies x \in A \wedge x \in E \\
& \implies x \in A \wedge x \in B \cup \complement_E B \\
& \implies x \in A \wedge \left(x \in B \vee x \in \complement_E B\right) \\
& \implies x \in A \wedge \left(x \in B \vee x \notin B\right) \\
& \implies \left(x \in A \wedge x \in B\right) \vee \left(x \in A \wedge x \notin B\right) \\
& \implies x \in A \cap B \vee x \in A\setminus B \\
& \implies x \in \left(A \cap B\right) \cup \left(A\setminus B\right) \\
& \implies x \in \left(A \cap B\right) \cup \varnothing \\
& \implies x \in \left(A \cap B\right) \\
& \implies x \in A \wedge x \in B \\
& \implies x \in B
\end{align*}</span>
Then <span class="math-container">$A \subset B$</span></p>
|
414,400 | <p>Let $ \mathbb{Z}[i]$ denote the ring of the Gaussian integers. For which of the following values of $n$ is the quotient ring $ \mathbb{Z}[i]/n\mathbb{Z}[i]$ an integral domain?</p>
<p>$ a. 2$</p>
<p>$ b. 13$</p>
<p>$ c. 19$</p>
<p>$ d. 7$</p>
<p>I'm doubtful with the following attempt I made.</p>
<ul>
<li><p><strong>I think all 4 options are correct:</strong> It suffices to show $n\mathbb Z[i]$ is a prime ideal of $\mathbb Z[i]$ if $n$ is prime. Now $(n)=n\mathbb Z[i].$ So $n$ is a prime element of $\mathbb Z[i]\implies(n)$ is a prime ideal of $\mathbb Z[i].$</p>
<p>Let $n$ be a prime integer. Of course then $n$ is non zero and non unit. Let $n|(a+ib)(c+id).$ That's $n|(ac-bd)+i(ad+bc)\\\implies\dfrac{ac-bd}{n},\dfrac{ad+bc}{n}\in\mathbb Z\\\implies n|ac,bd,ad,bc\\\implies n|\{a~or~c\}~and~\{b~or~d\}~and~\{a~or~d\}~and~\{b~or~c\}\\\implies n\text{ divides at least $3$ of }a,b,c,d.$</p>
<p>WLG let $n|a,b\implies n|a+ib.$</p></li>
</ul>
<p><strong>Is my attempt correct?</strong></p>
| davidlowryduda | 9,754 | <p>You claim that $n$ prime $\implies (n)$ is a prime ideal in $\mathbb{Z}[i]$. Let's consider $(5)$.</p>
<p>Clearly, $5$ is prime. But $(5) = (2+i)(2-i)$, and thus $(5)$ is not a Gaussian prime.</p>
<p>I encourage you to trace your proof over on this example and see where it goes wrong.</p>
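<p>(For illustration, not part of the answer: a brute-force zero-divisor test in $\mathbb{Z}[i]/n\mathbb{Z}[i]$, representing residue classes as pairs $(a,b)$ standing for $a+bi$ with $a,b$ taken mod $n$.)</p>

```python
def is_domain(n):
    # Z[i]/nZ[i] is an integral domain iff it has no zero divisors
    nonzero = [(a, b) for a in range(n) for b in range(n) if (a, b) != (0, 0)]
    for a, b in nonzero:
        for c, d in nonzero:
            # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
            if (a * c - b * d) % n == 0 and (a * d + b * c) % n == 0:
                return False
    return True

print({n: is_domain(n) for n in (2, 13, 19, 7)})
# → {2: False, 13: False, 19: True, 7: True}
```

<p>Only $n = 19$ and $n = 7$ (the primes congruent to $3 \pmod 4$, which remain prime in $\mathbb Z[i]$) give integral domains; $2 = -i(1+i)^2$ and $13 = (3+2i)(3-2i)$ do not.</p>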
|
256,785 | <p>Let $T_0$ be the set theory axiomatized by $ZFC^-$ (that is $ZFC$ without powerset) + every set is countable + $\mathbb{V}=\mathbb{L}$. </p>
<p><strong>Question 1:</strong> Suppose $\phi$ is a sentence of set theory. Must there be a large cardinal axiom $A$ such that $\phi$ is decided by $T_0$ + "there are transitive set models of $A$ of arbitrarily high ordinal height?" </p>
<p><strong>Update:</strong> no.</p>
<p><strong>Question 2:</strong> Is there a set $T$ of $\Pi_2$ sentences, such that $ZFC^- \cup T$ is complete? </p>
<p><strong>Update:</strong> no, essentially. See my answer below.</p>
<p><strong>Questions 3 and 4 after comments.</strong></p>
<p><strong>Comments.</strong></p>
<ul>
<li>Question 1 is related to, but different from several previous questions on Mathoverflow, e.g. <a href="https://mathoverflow.net/questions/118081/nice-algebraic-statements-independent-from-zf-v-l-constructibility">Nice Algebraic Statements Independent from ZF + V=L (constructibility)</a> or <a href="https://mathoverflow.net/questions/11480/on-statements-independent-of-zfc-v-l?noredirect=1&lq=1">On statements independent of ZFC + V=L</a> or <a href="https://mathoverflow.net/questions/81190/natural-statements-independent-from-true-pi0-2-sentences?rq=1">Natural statements independent from true $\Pi^0_2$ sentences</a>. Regarding the first two---Hamkins gives a long list of examples of things independent of $\mathbb{V}= \mathbb{L}$, but it seems that my schema takes care of all of them. (Of course in those questions $ZFC$ was assumed.)</li>
<li>Moreover, Question 1 was partly motivated by Dorais' answer to the second question above, which references a question of Shelah from <a href="http://shelah.logic.at/E16/E16.html" rel="nofollow noreferrer">The Future of Set Theory</a>.</li>
<li>I'm leaving "large cardinal axiom" undefined here (so Question 1 can't be formalized), but obviously we should exclude inconsistent axioms, or things like $ZFC + $ "there are no transitive set models of ZFC."</li>
<li><p>If we pick a definition of ``large cardinal axiom", then "every set is countable" + $\mathbb{V}=\mathbb{L}$ + "there are transitive set models of $A$ of arbitrarily high ordinal height" (for each large cardinal axiom $A$) is a set of $\Pi_2$ sentences. So a negative answer to Question 2 implies a negative answer to Question 1.</p></li>
<li><p>We can ignore (recursive) large cardinal axiom schemas $\Gamma$ because we can just replace them by $ZFC_0$+"there is a transitive set model of $\Gamma$" where $ZFC_0$ is some large enough finite fragment of $ZFC$. In particular we don't have to write "$ZFC + A$". </p></li>
<li><p>$T_0$ + this axiom schema (loosely speaking) is of personal significance to me: in fact I believe it to be true. (Namely, given whatever universe of sets $\mathbb{V}$ in which we are working, it seems reasonable to suppose there is a larger universe of sets $\mathbb{W} \models \mathbb{V}=\mathbb{L}$ in which $\mathbb{V}$ is countable, or such that $\mathbb{W}$ is a model of a given large cardinal axiom $A$. Ergo,...) </p></li>
</ul>
<p>Let $\mathcal{L}_{\mbox{set}}$ be the language of set theory $\{\in\}$ and let $\mathcal{L}_1$ be $\mathcal{L}_{\mbox{set}} \cup \{P\}$, $P$ a new unary relation symbol. Let $T_1$ be $T_0$ + the axioms asserting that $P \subseteq \mbox{ON}$ is stationary (for $\in$-definable classes) and for every $\alpha \in P$, $(\mathbb{V}_\alpha, \in) \preceq (\mathbb{V}, \in)$. We insist that large cardinal axioms $A$ be sentences of $\mathcal{L}_{\mbox{set}}$.</p>
<p><strong>Question 3:</strong> Suppose $\phi$ is a sentence of set theory (i.e. of $\mathcal{L}_{\mbox{set}}$). Must there be a large cardinal axiom $A$ such that $\phi$ is decided by $T_1$ + "there are transitive set models of $A$ of arbitrarily high ordinal height?" </p>
<p><strong>Question 4:</strong> Is there a set $T$ of $\Pi_2$ sentences of set theory, such that $T_1 \cup T$ decides every sentence of set theory?</p>
| Danielle Ulrich | 26,705 | <p>Overnight the following occurred to me...</p>
<p>The answer to Question 2 is negative (with an asterisk), and so the same is true of Question 1. Namely, let $T$ be a set of $\Pi_2$ sentences with $ZFC^- \cup T$ consistent; I claim that $ZFC^- \cup T$ does not resolve whether the class of ordinals $\alpha$ such that $\mathbb{V}_\alpha \models ZFC^-$ is stationary. In particular $ZFC^- \cup T$ does not resolve the following sentence $\phi$:</p>
<p>"There is $\alpha$ such that $\mathbb{V}_\alpha \models ZFC^-$ and $\mathbb{V}_\alpha \preceq_2 \mathbb{V}$." </p>
<p>This sentence makes sense since we can define satisfaction for formulas of bounded complexity. This is a particular instance of the schema asserting "$\{\alpha: \mathbb{V}_\alpha \models ZFC^-\}$ is stationary", since the set $\{\alpha: \mathbb{V}_\alpha \preceq_2 \mathbb{V}\}$ is club.</p>
<p>We assume $ZFC^- \cup T \cup \{\phi\}$ is consistent (if not then $T$ is silly---this is the asterisk). On the other hand, $ZFC^- \cup T \cup \{\lnot \phi\}$ is consistent, since given $\mathbb{V} \models ZFC^- \cup T \cup \{\phi\}$, if we let $\alpha$ be least such that $\mathbb{V}_\alpha \models ZFC^-$ and $\mathbb{V}_\alpha \preceq_2 \mathbb{V}$, then $\mathbb{V}_\alpha \models ZFC^- \cup T \cup \{\lnot \phi\}$.</p>
<p>We can play the same game whenever $T$ is a set of $\Pi_n$ sentences, and when we replace $ZFC^-$ by any (stronger) recursive theory.</p>
<p>The truth that we would like to express, but can't, is that the class $\{\alpha: \mathbb{V}_\alpha \preceq \mathbb{V}\}$ is stationary. Since I'm the OP I feel justified in raising the bar for the question; see Question 3 and Question 4 (new).</p>
|
2,947,218 | <p>I know that if a curve <span class="math-container">$C$</span> is parametrized as <span class="math-container">$$s(t)=(x(t),y(t)),\quad t\in [a,b]$$</span>
then <span class="math-container">$$\ell(C)=\int ds=\int_a^b \sqrt{\dot x^2(t)+\dot y^2(t)}dt.\tag{*}$$</span></p>
<p>In the <a href="https://fr.wikipedia.org/wiki/Abscisse_curviligne" rel="nofollow noreferrer">french wikipedia</a>, it's written that <span class="math-container">$(*)$</span> can be written as <span class="math-container">$$ds^2=dx^2+dy^2.$$</span></p>
<p>I really don't understand the logic. Could someone explain? What is the link between <span class="math-container">$(*)$</span> and <span class="math-container">$ds^2=dx^2+dy^2$</span>? I tried <span class="math-container">$$ds^2=dx^2+dy^2\implies ds=\sqrt{dx^2+dy^2}\implies \int ds=\int\sqrt{dx^2+dy^2},$$</span>
but I can't give an interpretation of the RHS... So it should be wrong.</p>
| Ethan Bolker | 72,858 | <p>On your curve the expression
<span class="math-container">$$
dx = \dot{x}(t)dt
$$</span>
says (informally) that if you change time by the "infinitesimal" amount <span class="math-container">$dt$</span>, the resulting change <span class="math-container">$dx$</span> in <span class="math-container">$x$</span> will be given by that formula. Similarly for <span class="math-container">$dy$</span>. </p>
<p>You are interested in the amount <span class="math-container">$ds$</span> of arclength this change causes. To find that, you calculate the hypotenuse of the right triangle with sides <span class="math-container">$dx$</span> and <span class="math-container">$dy$</span>. </p>
<p>To find the length of the curve you add up those infinitesimal changes - that is, you integrate. </p>
<p>Then <span class="math-container">$$\int\sqrt{dx^2+dy^2}$$</span> just rewrites (*) informally to remind you about what the integrand means geometrically.</p>
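<p>(A numerical illustration, not part of the answer: summing the chord lengths $\sqrt{dx^2+dy^2}$ over a fine partition recovers the arclength; here for the unit circle, whose length is $2\pi$.)</p>

```python
import math

# x = cos t, y = sin t, t in [0, 2π]
N = 100_000
s = 0.0
for k in range(N):
    t0 = 2 * math.pi * k / N
    t1 = 2 * math.pi * (k + 1) / N
    dx = math.cos(t1) - math.cos(t0)   # ≈ x'(t) dt
    dy = math.sin(t1) - math.sin(t0)   # ≈ y'(t) dt
    s += math.sqrt(dx * dx + dy * dy)  # ≈ ds

print(s)  # very close to 2π ≈ 6.283185307
```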
|
318,299 | <blockquote>
<p>Let <span class="math-container">$U$</span> be an open set in <span class="math-container">$\mathbb R$</span>. Then <span class="math-container">$U$</span> is a countable union of disjoint intervals. </p>
</blockquote>
<p>This question has probably been asked. However, I am not interested in just getting the answer to it. Rather, I am interested in collecting as many different proofs of it which are as diverse as possible. A professor told me that there are many. So, I invite everyone who has seen proofs of this fact to share them with the community. I think it is a result worth knowing how to prove in many different ways and having a post that combines as many of them as possible will, no doubt, be quite useful. After two days, I will place a bounty on this question to attract as many people as possible. Of course, any comments, corrections, suggestions, links to papers/notes etc. are more than welcome.</p>
| Brian M. Scott | 12,042 | <p>Here’s one to get things started.</p>
<p>Let $U$ be a non-empty open subset of $\Bbb R$. For $x,y\in U$ define $x\sim y$ iff $\big[\min\{x,y\},\max\{x,y\}\big]\subseteq U$. It’s easily checked that $\sim$ is an equivalence relation on $U$ whose equivalence classes are pairwise disjoint open intervals in $\Bbb R$. (The term <em>interval</em> here includes unbounded intervals, i.e., rays.) Let $\mathscr{I}$ be the set of $\sim$-classes. Clearly $U=\bigcup_{I \in \mathscr{I}} I$. For each $I\in\mathscr{I}$ choose a rational $q_I\in I$; the map $\mathscr{I}\to\Bbb Q:I\mapsto q_I$ is injective, so $\mathscr{I}$ is countable.</p>
<p>A variant of the same basic idea is to let $\mathscr{I}$ be the set of open intervals that are subsets of $U$. For $I,J\in\mathscr{I}$ define $I\sim J$ iff there are $I_0=I,I_1,\dots,I_n=J\in\mathscr{I}$ such that $I_k\cap I_{k+1}\ne\varnothing$ for $k=0,\dots,n-1$. Then $\sim$ is an equivalence relation on $\mathscr{I}$. For $I\in\mathscr{I}$ let $[I]$ be the $\sim$-class of $I$. Then $\left\{\bigcup[I]:I\in\mathscr{I}\right\}$ is a decomposition of $U$ into pairwise disjoint open intervals.</p>
<p>Both of these arguments generalize to any LOTS (= Linearly Ordered Topological Space), i.e., any linearly ordered set $\langle X,\le\rangle$ with the topology generated by the subbase of open rays $(\leftarrow,x)$ and $(x,\to)$: if $U$ is a non-empty open subset of $X$, then $U$ is the union of a family of pairwise disjoint open intervals. In general the family need not be countable, of course.</p>
|
318,299 | <blockquote>
<p>Let <span class="math-container">$U$</span> be an open set in <span class="math-container">$\mathbb R$</span>. Then <span class="math-container">$U$</span> is a countable union of disjoint intervals. </p>
</blockquote>
<p>This question has probably been asked. However, I am not interested in just getting the answer to it. Rather, I am interested in collecting as many different proofs of it which are as diverse as possible. A professor told me that there are many. So, I invite everyone who has seen proofs of this fact to share them with the community. I think it is a result worth knowing how to prove in many different ways and having a post that combines as many of them as possible will, no doubt, be quite useful. After two days, I will place a bounty on this question to attract as many people as possible. Of course, any comments, corrections, suggestions, links to papers/notes etc. are more than welcome.</p>
| Yoni Rozenshein | 36,650 | <p>A variant of the usual proof with the equivalence relation, which trades the ease of constructing the intervals for the ease of proving countability (not that either is hard...):</p>
<ol>
<li>Define the same equivalence relation, but only on $\mathbb Q \cap U$:
$q_1 \sim q_2$ iff $(q_1, q_2) \subset U$ (or $(q_2, q_1) \subset U$, whichever makes sense).</li>
<li>From each equivalence class $C$, produce the open interval $(\inf C, \sup C) \subset U$ (where $\inf C$ is defined to be $-\infty$ in case $C$ is not bounded from below, and $\sup C = \infty$ in case $C$ is not bounded from above).</li>
<li>The amount of equivalence classes is clearly countable, since $\mathbb Q \cap U$ is countable.</li>
</ol>
|
318,299 | <blockquote>
<p>Let <span class="math-container">$U$</span> be an open set in <span class="math-container">$\mathbb R$</span>. Then <span class="math-container">$U$</span> is a countable union of disjoint intervals. </p>
</blockquote>
<p>This question has probably been asked. However, I am not interested in just getting the answer to it. Rather, I am interested in collecting as many different proofs of it which are as diverse as possible. A professor told me that there are many. So, I invite everyone who has seen proofs of this fact to share them with the community. I think it is a result worth knowing how to prove in many different ways and having a post that combines as many of them as possible will, no doubt, be quite useful. After two days, I will place a bounty on this question to attract as many people as possible. Of course, any comments, corrections, suggestions, links to papers/notes etc. are more than welcome.</p>
| P.. | 39,722 | <p>Let <span class="math-container">$U\subseteq\mathbb R$</span> open. It is enough to write <span class="math-container">$U$</span> as a disjoint union of open intervals.<br>
For each <span class="math-container">$x\in U$</span>, we define <span class="math-container">$\alpha_x=\inf\{\alpha\in\mathbb R:(\alpha,x+\epsilon)\subseteq U, \text{ for some }\epsilon>0\}$</span> and <span class="math-container">$\beta_x=\sup\{\beta\in\mathbb R:(\alpha_x,\beta)\subseteq U\}$</span>.</p>
<p>Then <span class="math-container">$\displaystyle U=\bigcup_{x\in U}(\alpha_x,\beta_x)$</span> where <span class="math-container">$\{(\alpha_x,\beta_x):x\in U\}$</span> is a disjoint family of open intervals.</p>
<p>The intervals appearing in the union are disjoint in the sense that every time <span class="math-container">$x,y\in U$</span> with <span class="math-container">$x<y$</span>, then either <span class="math-container">$(\alpha_x,\beta_x)=(\alpha_y,\beta_y)$</span> holds, or <span class="math-container">$(\alpha_x,\beta_x)\cap(\alpha_y,\beta_y)$</span> is empty. To see this, suppose <span class="math-container">$(\alpha_x,\beta_x)\cap(\alpha_y,\beta_y)$</span> has an element. We claim that <span class="math-container">$[x,y]\subseteq U$</span>. (For if <span class="math-container">$x<t<y$</span> with <span class="math-container">$t\not\in U$</span>, then <span class="math-container">$\beta_x\leq t\leq \alpha_y$</span>.)</p>
<p>But if <span class="math-container">$[x,y]\subseteq U$</span>, then both <span class="math-container">$\alpha_x<x$</span> and <span class="math-container">$\alpha_y<x$</span>, and both <span class="math-container">$y<\beta_x$</span> and <span class="math-container">$y<\beta_y$</span>. Hence <span class="math-container">$\alpha_x$</span> and <span class="math-container">$\alpha_y$</span> can be expressed as <span class="math-container">$$\alpha_x=\inf\{\alpha\leq x:(\alpha,x+\epsilon)\subseteq U, \text{ for some }\epsilon>0\},$$</span> <span class="math-container">$$\alpha_y=\inf\{\overline\alpha\leq x:(\overline\alpha,y+\overline\epsilon)\subseteq U, \text{ for some }\overline\epsilon>0\},$$</span> and these are the same; so then also <span class="math-container">$\beta_x$</span> and <span class="math-container">$\beta_y$</span> are the same.</p>
|
318,299 | <blockquote>
<p>Let <span class="math-container">$U$</span> be an open set in <span class="math-container">$\mathbb R$</span>. Then <span class="math-container">$U$</span> is a countable union of disjoint intervals. </p>
</blockquote>
<p>This question has probably been asked. However, I am not interested in just getting the answer to it. Rather, I am interested in collecting as many different proofs of it which are as diverse as possible. A professor told me that there are many. So, I invite everyone who has seen proofs of this fact to share them with the community. I think it is a result worth knowing how to prove in many different ways and having a post that combines as many of them as possible will, no doubt, be quite useful. After two days, I will place a bounty on this question to attract as many people as possible. Of course, any comments, corrections, suggestions, links to papers/notes etc. are more than welcome.</p>
| Community | -1 | <p>The proof that every open set is a disjoint union of countably many open intervals relies on three facts:</p>
<ul>
<li>$\Bbb R$ is locally-connected</li>
<li>$\Bbb R$ is ccc</li>
<li>The open connected sets in $\Bbb R$ are open intervals</li>
</ul>
<p>Let $U\subseteq \Bbb R$ be open. Since $\Bbb R$ is locally connected, the connected components of $U$ form a collection of disjoint, open, connected sets $\{G_\alpha\}_{\alpha\in A}$ such that $U=\bigcup_{\alpha\in A} G_\alpha$. Since $\Bbb R$ is ccc, the collection $\{G_\alpha\}$ is at most countable. Since the open connected sets in $\Bbb R$ are open intervals, $\{G_\alpha\}$ is a countable collection of disjoint, open intervals.</p>
<p>The first two facts allow us to see some generalizations. Namely any open set in a locally-connected, ccc space is a countable disjoint union of connected open sets. This applies to any Euclidean space. Although open connected subsets of Euclidean space are more complicated than open intervals, they are still relatively well-behaved.</p>
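<p>(An illustrative aside, not part of the original argument.) When $U$ happens to be presented as a finite union of open intervals, the connected-component decomposition above can be computed by a standard merge pass over the sorted intervals. A minimal Python sketch; the helper name <code>merge_components</code> is my own:</p>

```python
def merge_components(intervals):
    """Given a finite family of open intervals (a, b), return the
    connected components of their union as disjoint open intervals.
    For open intervals sorted so that a <= c, (a, b) and (c, d) merge
    only when c < b: (0, 1) and (1, 2) stay separate, since the
    point 1 is missing from the union."""
    merged = []
    for a, b in sorted(intervals):
        if merged and a < merged[-1][1]:
            # Overlap with the last component: absorb (a, b) into it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], b))
        else:
            merged.append((a, b))
    return merged

print(merge_components([(0, 1), (0.5, 2), (3, 4)]))  # [(0, 2), (3, 4)]
print(merge_components([(0, 1), (1, 2)]))            # [(0, 1), (1, 2)]
```

<p>The strict inequality in the merge test is exactly the point where openness matters: touching endpoints do not connect open intervals.</p>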
|
318,299 | <blockquote>
<p>Let <span class="math-container">$U$</span> be an open set in <span class="math-container">$\mathbb R$</span>. Then <span class="math-container">$U$</span> is a countable union of disjoint intervals. </p>
</blockquote>
<p>This question has probably been asked. However, I am not interested in just getting the answer to it. Rather, I am interested in collecting as many different proofs of it which are as diverse as possible. A professor told me that there are many. So, I invite everyone who has seen proofs of this fact to share them with the community. I think it is a result worth knowing how to prove in many different ways and having a post that combines as many of them as possible will, no doubt, be quite useful. After two days, I will place a bounty on this question to attract as many people as possible. Of course, any comments, corrections, suggestions, links to papers/notes etc. are more than welcome.</p>
| Theorist Reflectionist | 211,474 | <p>This proof is an extended version of the nice proof proposed by
<strong>Stromael</strong>. It is written for beginners who want to understand every detail, including steps that would seem logically trivial to an established mathematician.</p>
<p><span class="math-container">$ \textbf{Proof:} $</span></p>
<p>Let <span class="math-container">$U \subseteq \mathbb{R}$</span> be open and let <span class="math-container">$x \in U$</span>. Then either <span class="math-container">$x$</span> is rational or <span class="math-container">$x$</span> is irrational.</p>
<p>Suppose <span class="math-container">$x$</span> is rational, then define</p>
<p><span class="math-container">\begin{align} I_x = \bigcup\limits_{\substack{I\text{ an open interval} \\ x~\in~I~\subseteq~U}} I,\end{align}</span></p>
<p><strong>Claim</strong>: <span class="math-container">$I_x$</span> is interval, <span class="math-container">$I_x$</span> is open and <span class="math-container">$ I_x \subseteq U $</span></p>
<p><strong>Definition:</strong> An interval is a subset <span class="math-container">$ I \subseteq \mathbb{R}$</span> such that, for all <span class="math-container">$ a<c<b$</span> in <span class="math-container">$\mathbb{R}$</span>, if <span class="math-container">$ a,b \in I $</span> then <span class="math-container">$ c \in I$</span>.</p>
<p>Now, consider any <span class="math-container">$ a<c<b $</span> such that <span class="math-container">$ a,b \in I_x$</span>. We want to show that <span class="math-container">$ c \in I_x $</span>.</p>
<p>Denote <span class="math-container">$I_a $</span> to be an interval such that <span class="math-container">$ x \in I_a $</span> and <span class="math-container">$ a \in I_a $</span>. In other words <span class="math-container">$ I_a $</span> is one of the intervals from the union <span class="math-container">$ I_x $</span> that contains <span class="math-container">$a$</span>. In the same way, let <span class="math-container">$ I_b $</span> be the interval such that <span class="math-container">$ x \in I_b $</span> and <span class="math-container">$ b \in I_b $</span>.</p>
<ol>
<li><p><span class="math-container">$ c=x $</span>: If <span class="math-container">$c=x$</span> then by construction of <span class="math-container">$I_x$</span>, <span class="math-container">$ c \in I_x$</span></p>
</li>
<li><p><span class="math-container">$ c<x $</span>: If <span class="math-container">$c<x$</span> then we have that either <span class="math-container">$ a<c<x<b $</span> or <span class="math-container">$ a<c<b<x $</span>. Since <span class="math-container">$ x \in I $</span> for every open interval <span class="math-container">$I$</span> of the union <span class="math-container">$I_x$</span> (by construction of <span class="math-container">$I_x$</span> ), we have that <span class="math-container">$x \in I_a $</span> and <span class="math-container">$ x \in I_b$</span>. Since <span class="math-container">$ x \in I_a $</span> then because <span class="math-container">$ I_a $</span> is an interval <span class="math-container">$ c \in I_a$</span> and hence <span class="math-container">$ c \in I_x $</span>. And since <span class="math-container">$ x \in I_b $</span> then because <span class="math-container">$ I_b $</span> is an interval <span class="math-container">$ c \in I_b $</span> and hence <span class="math-container">$ c \in I_x $</span>. Thus, we concluded that <span class="math-container">$ c \in I_x $</span>.</p>
</li>
<li><p><span class="math-container">$ c > x $</span>: If <span class="math-container">$ c>x $</span> then we have that either <span class="math-container">$ a<x<c<b $</span> or <span class="math-container">$ x<a<c<b $</span>. Since <span class="math-container">$ x \in I $</span> for every open interval <span class="math-container">$I$</span> of the union <span class="math-container">$I_x$</span> (by construction of <span class="math-container">$I_x$</span> ), we have that <span class="math-container">$x \in I_a $</span> and <span class="math-container">$ x \in I_b$</span>. In both cases <span class="math-container">$ x<c<b $</span> holds, and since <span class="math-container">$ x, b \in I_b $</span>, the interval property of <span class="math-container">$ I_b $</span> gives <span class="math-container">$ c \in I_b $</span> and hence <span class="math-container">$ c \in I_x $</span>. Thus we conclude that <span class="math-container">$ c \in I_x $</span>.</p>
</li>
</ol>
<p>This Proves that <span class="math-container">$ I_x $</span> is an interval.</p>
<p><span class="math-container">$ I_x $</span> is open because it is union of open sets.</p>
<p><span class="math-container">$ I_x \subseteq U $</span> by construction.</p>
<p>Suppose <span class="math-container">$x$</span> is irrational. Then by openness of <span class="math-container">$ U $</span> there is <span class="math-container">$\varepsilon > 0$</span> such that <span class="math-container">$(x - \varepsilon, x + \varepsilon) \subseteq U$</span>, and by the density of the rational numbers in <span class="math-container">$\mathbb{R}$</span> there exists a rational <span class="math-container">$y \in (x - \varepsilon, x + \varepsilon) $</span>. Then by construction <span class="math-container">$ (x - \varepsilon, x + \varepsilon) \subseteq I_y $</span>. Hence <span class="math-container">$x \in I_y$</span>. So any <span class="math-container">$x \in U$</span> is in <span class="math-container">$I_q$</span> for some <span class="math-container">$q \in U \cap \mathbb{Q}$</span>, and so</p>
<p><span class="math-container">\begin{align}U \subseteq \bigcup\limits_{q~\in~U \cap~\mathbb{Q}} I_q.\end{align}</span></p>
<p>But <span class="math-container">$I_q \subseteq U$</span> for each <span class="math-container">$q \in U \cap \mathbb{Q}$</span>; thus</p>
<p><span class="math-container">\begin{align}U = \bigcup\limits_{q~\in~U \cap~\mathbb{Q}} I_q, \end{align}</span></p>
<p>which is a countable union of open intervals.</p>
<p>Now let's show that the intervals <span class="math-container">$ \{I_q \},~\ q \in U \cap \mathbb{Q}, $</span> are pairwise disjoint or equal. Suppose there are <span class="math-container">$ i, j \in U \cap \mathbb{Q} $</span> such that <span class="math-container">$ I_i \cap I_j \neq \emptyset $</span>. Then <span class="math-container">$ I_i \cup I_j $</span> is an open interval (being a union of two overlapping open intervals) which contains <span class="math-container">$ i $</span> and is contained in <span class="math-container">$ U $</span>, so by the construction of <span class="math-container">$ I_i $</span> as the union of all such intervals, <span class="math-container">$ I_i \cup I_j \subseteq I_i $</span>. Symmetrically <span class="math-container">$ I_i \cup I_j \subseteq I_j $</span>, and hence <span class="math-container">$ I_i = I_j $</span>.</p>
<p>Hence we constructed disjoint intervals <span class="math-container">$ \{I_q \} ~\ q \in U \cap \mathbb{Q} $</span> that are enumerated by rational numbers in <span class="math-container">$U$</span> and whose union is <span class="math-container">$U$</span>. Since any subset of rational numbers is countable, <span class="math-container">$ \{I_q \} ~\ q \in U \cap \mathbb{Q} $</span> is countable as well. This finishes the proof.</p>
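<p>(A computational aside of my own, not part of the proof above.) When <span class="math-container">$U$</span> is a finite union of open intervals, the set <span class="math-container">$I_x$</span> can be computed directly from its definition by starting with one interval containing <span class="math-container">$x$</span> and repeatedly absorbing every interval that overlaps the current one; the helper name <code>maximal_interval</code> is hypothetical:</p>

```python
def maximal_interval(intervals, x):
    """Return the maximal open interval I_x containing x within the
    union of the given open intervals, or None if x is not covered.
    Mimics the definition of I_x as the union of all open intervals
    I with x in I and I contained in U."""
    current = None
    for a, b in intervals:
        if a < x < b:           # start from any interval containing x
            current = (a, b)
            break
    if current is None:
        return None
    changed = True
    while changed:              # absorb overlapping intervals until stable
        changed = False
        for a, b in intervals:
            lo, hi = current
            # open intervals (a, b) and (lo, hi) intersect iff a < hi and lo < b
            if a < hi and lo < b and (a < lo or b > hi):
                current = (min(lo, a), max(hi, b))
                changed = True
    return current

print(maximal_interval([(0, 1), (0.5, 2), (3, 4)], 0.2))  # (0, 2)
```

<p>The stabilization loop is the finite analogue of taking the union over <em>all</em> intervals <span class="math-container">$I$</span> with <span class="math-container">$x\in I\subseteq U$</span>.</p>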
|
318,299 | <blockquote>
<p>Let <span class="math-container">$U$</span> be an open set in <span class="math-container">$\mathbb R$</span>. Then <span class="math-container">$U$</span> is a countable union of disjoint intervals. </p>
</blockquote>
<p>This question has probably been asked. However, I am not interested in just getting the answer to it. Rather, I am interested in collecting as many different proofs of it which are as diverse as possible. A professor told me that there are many. So, I invite everyone who has seen proofs of this fact to share them with the community. I think it is a result worth knowing how to prove in many different ways and having a post that combines as many of them as possible will, no doubt, be quite useful. After two days, I will place a bounty on this question to attract as many people as possible. Of course, any comments, corrections, suggestions, links to papers/notes etc. are more than welcome.</p>
| Nathan A.S. | 451,262 | <p>Essentially nothing differs here from the two previous responses, which rely principally on the fact that $\mathbb{R} $ is locally connected. However, I present here a proof that will hopefully feel accessible to readers with a slightly lower level of topological literacy, at the cost of appearing cumbersome to an expert. </p>
<p>Consider the connected components of $U$.
Define $U_x \subseteq U$, the connected component of $U$ containing $x$, to be the largest connected subset of $U$ which contains $x$.
Clearly by definition $U_x=U_v$ if $v \in U_x$.
Therefore if $U_a \cap U_b \neq \varnothing$ then $U_a=U_b$.
We see that $\{U_x\}_{x\in U}$ is a disjoint collection.
Also it should be clear that $\bigcup \limits_{x\in U} U_x = U$.</p>
<p>Now we show that $\forall x$ $U_x$ is open.
Let $y\in U_x \subseteq U$.
Since $U$ is open there exists $\epsilon>0$ such that $(y-\epsilon, y+\epsilon )\subseteq U$.
Sets of real numbers are connected iff they are intervals, singletons or empty.
$(y-\epsilon,y+\epsilon)$ is an interval, hence it is connected.
Its union with $U_y$ is also connected, since both sets contain $y$; therefore, since $U_y$ is the largest connected subset of $U$ containing $y$, we must have $(y-\epsilon,y+\epsilon)\subseteq U_y =U_x$.
This shows that $U_x$ is open for all $x$. </p>
<p>$U_x$ open and connected implies that $U_x$ must be an open interval. </p>
<p>Also, $\mathbb{Q}$ is dense in $\mathbb{R}$,
so $\forall x\in U$, $U_x\cap \mathbb{Q}\neq \varnothing$ and $U_x=U_q$ for some $q\in\mathbb{Q}$.
So we can write $\{U_x\}_{x\in U}=\{U_q\}_{q\in S}$ for some $S\subseteq \mathbb{Q}$.
$\mathbb{Q}$ is countable so $S$ is at most countable. </p>
<p>In conclusion, we have just shown that the union of the connected components of $U$ is a disjoint union of open intervals that equals $U$ and is at most countable. </p>
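<p>(An illustrative aside of my own, not part of this answer.) The countability step relies on picking a rational in each component. The density of $\mathbb{Q}$ can be made effective: refine a mesh of rationals with denominator $n$, doubling $n$ until some multiple of $1/n$ lands strictly inside the interval, which must happen once $1/n < b-a$. A sketch, with the helper name <code>rational_in</code> being hypothetical:</p>

```python
import math
from fractions import Fraction

def rational_in(a, b):
    """Return a rational q with a < q < b (requires a < b).
    k = floor(a*n) + 1 is the smallest integer with k/n > a, so we
    only need to check k/n < b; doubling n guarantees termination."""
    n = 1
    while True:
        k = math.floor(a * n) + 1
        q = Fraction(k, n)
        if q < b:
            return q
        n *= 2

print(rational_in(0, 1))  # 1/2
```

<p>Applying this to the endpoints of each component yields an explicit injection from the family of components into $\mathbb{Q}$.</p>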
|