Columns: qid (int64, 1 to 4.65M), question (large_string, 27 to 36.3k chars), author (large_string, 3 to 36 chars), author_id (int64, -1 to 1.16M), answer (large_string, 18 to 63k chars)
299,056
<p>Let $\overline{\mathbb{Q}}$ be the algebraic closure of $\mathbb{Q}$. Let $K = \mathbb{Q}(\sqrt{d})$ and $\overline{K}$ defined to be the algebraic closure of $K$. Is it true that $\overline{K} \cong \overline{\mathbb{Q}}$?</p>
Zach L.
43,128
<p>If $d \in \mathbb{Q}$, $K$ will be a subfield of $\overline{\mathbb{Q}}$, since $\sqrt{d} \in \overline{\mathbb{Q}}$. Since $\overline{\mathbb{Q}}$ is algebraically closed, it suffices to show that $\overline{\mathbb{Q}}$ is algebraic over $K$. If $\alpha \in \overline{\mathbb{Q}}$, choose a polynomial in $\mathbb{Q}[x]$ with $\alpha$ as a root. This polynomial also has coefficients in $K$, and so $\alpha$ is algebraic over $K$, too. Hence $\overline{\mathbb{Q}}$ is an algebraically closed field that is algebraic over $K$. These are the defining properties of an algebraic closure, so $\overline{K} = \overline{\mathbb{Q}}$. Up to isomorphism, of course.</p>
527,799
<p>In Hatcher's Algebraic Topology section 1.3, Cayley complexes are explained. The book states that we get a Cayley complex out of a Cayley graph by attaching a 2-cell to each loop. There is an example showing the Cayley complex for $\mathbb{Z}\times\mathbb{Z}$ (the fundamental group of the torus). We attach one 2-cell to each loop and we get $\mathbb{R}^{2}$ with vertical and horizontal tiling. I understand this.</p> <p>The book then says (example 1.47) that the Cayley complex of a cyclic group of order $ n $ is $n$ disks with boundaries identified. I can't for the life of me figure out where the $n$ disks come from. In the Cayley graph, we have one loop $e \to x \to x^2 \to \cdots \to x^n = e$. I guess the relation $x^n = e$ somehow generates $n$ loops, but I don't understand why.</p> <p>The next example is for $\mathbb{Z}_2*\mathbb{Z}_2$ in which two 2-cells are attached to each loop. I also don't understand why two.</p> <p>I'm looking for a canonical description of the algorithm to build Cayley complexes, and the application of the algorithm to build Cayley complexes for finite cyclic groups and $\mathbb{Z}_2*\mathbb{Z}_2$.</p> <p>Thank you.</p>
Bombyx mori
32,240
<p>The Cayley complex of $\mathbb{Z}/n\mathbb{Z}$ contains $n$ vertices and $n$ edges connected in a circle. So if you want to attach a disk, you have $n$ choices, because $(123),(231),(312)$ give three different ways of attaching the disk. For example, in the $\mathbb{R}\mathbb{P}^{2}$ case, you glue a disk along the relation $(12)$, and another one along the relation $(21)$. The resulting complex is just the universal cover - the sphere. </p> <p>This is hard to visualize for $n$ large, and as Hatcher commented the resulting complex is not a surface in general. For a picture and a more elementary explanation you may check page 52, where Hatcher constructs $X_{G}$ explicitly. That's why Hatcher says $\widetilde{X}_{G}$ consists of $n$ disks with their boundary circles identified. </p> <p>The $\mathbb{Z}_{2}*\mathbb{Z}_{2}$ case is similar, but here I think Hatcher is only trying to kill off enough relations so that the resulting space is simply connected. He observed that all the relations (or circles) near a point must pass through either of the two boundary circles, so it is enough to "fill in" four disks to "kill" the two circles. Doing this at every vertex yields his complex. </p>
2,226,150
<blockquote> <p>The following diophantine equation came up in the past paper of a Mathematics competition that I am doing soon: $$ 2(x+y)=xy+9.$$ </p> </blockquote> <p>Although I know that the solution is $(1,7)$, I am unsure how to reach this result. Clearly, the product $xy$ must be odd since $2(x+y)$ must be even, but beyond that I am unable to see anything else that I can do to solve the problem. I have also tried using the AM-GM inequality, however, it did not simplify the problem much: $$(x+y)+(x-xy+y)\le(\frac{(x+y)+(x+y-xy)}{2})^2.$$ Any help would be greatly appreciated.</p>
Sarvesh Ravichandran Iyer
316,409
<p>$2(x+y) = xy +9 \implies 2x - xy = 9 - 2y \implies x = \frac{9-2y}{2-y} = \frac{2y-9}{y-2} = 2 - \frac{5}{y-2}$.</p> <p>If $x$ is an integer, then $\frac{5}{y-2}$ is also an integer. This will tell you what $y$ can be, then what $x$ can be. Trying these out will give you the solution $y=7, x=1$ and the solution $y=3,x=-3$, which then will give you four solutions, since you can switch $x,y$ (it doesn't change the equation) and get more solutions. </p>
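<p>A short Python sketch of this divisor argument (the loop over the divisors of $5$ is purely illustrative):</p>
<pre><code>
# Enumerate integer solutions of 2(x+y) = x*y + 9 using x = 2 - 5/(y-2):
# y - 2 must be an integer divisor of 5, i.e. one of -5, -1, 1, 5.
solutions = []
for d in (-5, -1, 1, 5):
    y = d + 2
    x = 2 - 5 // d          # exact, since d divides 5
    assert 2 * (x + y) == x * y + 9
    solutions.append((x, y))
print(solutions)            # [(3, -3), (7, 1), (-3, 3), (1, 7)]
</code></pre>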
4,249,789
<p>My understanding is that the span of a set is the set of all vectors that can be obtained as linear combinations of the vectors in the original set, as shown in image #<a href="https://i.stack.imgur.com/3v6g4.png" rel="nofollow noreferrer">1</a>.</p> <p><img src="https://i.stack.imgur.com/3v6g4.png" alt="Image #1" /></p> <p>What I do not understand is how the span (or the dimension of the span) of a set consisting of the rows of a matrix is related to its rank, as shown in image #<a href="https://i.stack.imgur.com/BaGDA.png" rel="nofollow noreferrer">2</a>.</p> <p><img src="https://i.stack.imgur.com/BaGDA.png" alt="Image #2" /></p> <p>I always thought that rank is related to the basis of a set of vectors, not its span.</p> <p>Thanks</p>
Stanislav Bashkyrtsev
24,684
<p>The rank of a matrix is the number of linearly independent vectors among its rows (equivalently, among its columns). When determining the span you don't really need any other vectors: linearly dependent vectors are already in the span, since they can be formed from the linearly independent ones.</p> <p>The dimension of a span is the number of vectors in a basis for it, so it is exactly the number of linearly independent vectors of the matrix. Therefore <span class="math-container">$\dim C(A) = \operatorname{rank} A$</span>.</p> <p>And since in any matrix the number of linearly independent rows equals the number of linearly independent columns, <span class="math-container">$\dim R(A) = \operatorname{rank} A$</span> too.</p>
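<p>A small NumPy check of this relationship; the example matrix is arbitrary, chosen so that one row is a combination of the other two:</p>
<pre><code>
import numpy as np

# Third row is the sum of the first two, so only 2 of the 3 rows are independent.
A = np.array([[1., 0., 2.],
              [0., 1., 3.],
              [1., 1., 5.]])

rank = np.linalg.matrix_rank(A)

# dim of the row space and dim of the column space both equal the rank,
# and transposing does not change it.
print(rank, np.linalg.matrix_rank(A.T))   # 2 2
</code></pre>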
223,509
<p>Let $\ulcorner \cdot \urcorner$ be a fixed encoding of formulas by numbers, e.g. let $\ulcorner \varphi \urcorner$ denote the <a href="https://en.wikipedia.org/wiki/G%C3%B6del_numbering" rel="nofollow">Godel number</a> of $\varphi$. Let $T$ be a first-order arithmetic theory, e.g. PA. Let $\Phi$ be a class of closed first-order formulas e.g. $\Sigma^0_1$. Let $Tr$ be a first-order formula defining the truth predicate for the formulas in $\Phi$ in the standard model:</p> <blockquote> <p>for all $\varphi \in \Phi$, $\mathbb{N} \vDash \varphi$ iff $\mathbb{N} \vDash Tr(\ulcorner \varphi \urcorner)$</p> </blockquote> <p>E.g. we have: $\mathbb{N} \vDash Tr(\ulcorner \top \urcorner)$, $\mathbb{N} \nvDash Tr(\ulcorner \bot \urcorner)$, $\mathbb{N} \nvDash Tr(\ulcorner \exists x \ (x + 1 = 0)\urcorner)$, etc.</p> <p>Consider the following property: </p> <blockquote> <p>for all $\varphi \in \Phi$, $T \vdash Tr(\ulcorner \varphi \urcorner) \to \varphi$.</p> </blockquote> <p><strong>Does this property have a standard name?</strong></p> <hr> <p>The reflection is similar to the property above with <em>truth</em> replaced with <em>provability</em>: let $\square$ denote the provability in theory $T'$. Then reflection property is:</p> <blockquote> <p>for all $\varphi \in \Phi$, $T \vdash \square(\ulcorner \varphi \urcorner) \to \varphi$.</p> </blockquote>
Nik Weaver
23,141
<p>The law $${\rm Tr}(\ulcorner \phi\urcorner) \to \phi$$ is sometimes called the "release scheme". "T-Elim" and "T-Out" are also used, according to <a href="http://plato.stanford.edu/entries/liar-paradox/" rel="nofollow">this source</a>. The converse is "capture scheme" ("T-Intro", "T-In").</p>
2,554,380
<p>With the metric space $(X,d) : X = \Bbb R$ and $d(x,y) = |x| + |y|$ for $x\neq y$, $d(x,x) = 0$ and $A = \{0\}$. Is interior of $A$ empty? </p> <p>In usual metric it would be empty but with this metric I conclude that it is $A$ itself.</p> <p>Def: $ a \in A $ is an interior point of $A$ if $\exists \epsilon &gt; 0: B(a,\epsilon) \subset A$</p> <p>$B(a,\epsilon) = \{x \in \Bbb R | |x| + |a| &lt; \epsilon\} = \{0\}$ </p> <p>For the condition $B(a,\epsilon) \subset A$ in Def, should it be proper subset or $\{0\} \subset \{0\}$ is okay?</p>
operatorerror
210,391
<p>You want, given an $\epsilon$, a $\delta$ which will work for any points in the domain of your function. </p> <p>However, you have direct control over the distance between values in the range in terms of the distance in the domain: $|f(x)-f(y)|&lt; c|x-y|$. Hence, if $$ |x-y|&lt;\delta/c $$ then $$ |f(x)-f(y)|&lt;\delta $$ and this holds everywhere in your domain. Can you see what you should select as your delta to match any epsilon challenge?</p>
3,002,767
<p><span class="math-container">$l^p$</span> appears frequently in undergrad real analysis courses, and I wonder if there is any strong connection between <span class="math-container">$l^p$</span> and <span class="math-container">$L^p$</span> spaces (other than that they look similar).</p> <p>Here is one definition of <span class="math-container">$l^p$</span> I've seen:</p> <p><span class="math-container">$\|s\|_p=\begin{cases} \sup_{n\in\mathbb{N}}\left(\sum_{i=1}^{n}|s_i|^p\right)^{1/p}, &amp; \text{if } 1\leq p&lt;\infty\\ \sup_{n\in\mathbb{N}}|s_n|, &amp; \text{if } p=\infty \end{cases}$</span></p> <p><span class="math-container">$l^p$</span> denotes the space of real sequences <span class="math-container">$s$</span> with <span class="math-container">$\|s\|_p&lt; \infty$</span>.</p>
Nico
592,433
<p>Yes. The connection is, that both are the same kind of construction, but over different measure spaces.</p> <p>The standard definition of <span class="math-container">$l^p$</span> spaces is: <span class="math-container">$l^p = \{x: \mathbb N \to \mathbb R|\quad ||x||_p&lt;\infty\}$</span> where <span class="math-container">\begin{align*} ||x||_p = \left(\sum_{n\in\mathbb N} |x(n)|^p\right)^{1/p} \end{align*}</span></p> <p>The standard definition of <span class="math-container">$L^p$</span> spaces is <span class="math-container">$\{f: X \to \mathbb R |\quad ||f||_p &lt; \infty\}$</span> where <span class="math-container">$X$</span> is some compact subset of <span class="math-container">$\mathbb R$</span> or sometimes even <span class="math-container">$\mathbb R$</span> and <span class="math-container">\begin{align*} ||f||_p =\left( \int _X |f|^p\, d\lambda \right)^{1/p} \end{align*}</span> integration with respect to the Lebesgue measure on the real line. Look how similar they are. The connection is: Let <span class="math-container">$(\mathbb N, \mathcal P(\mathbb N), \mu)$</span> be a measure space, where <span class="math-container">$\mu$</span> is the counting measure, i.e. <span class="math-container">$\mu(\{n\})=1$</span> for all <span class="math-container">$n\in \mathbb N$</span> and <span class="math-container">$\mu$</span> <span class="math-container">$\sigma$</span>-additive. Then <span class="math-container">\begin{align*} \int_{\mathbb N} |x|^p\, d\mu = \sum_{n\in\mathbb N} |x(n)|^p \end{align*}</span> where <span class="math-container">$x: \mathbb N \to \mathbb R$</span> is some measurable function, i.e. a sequence. So they really are nearly the same thing and many measure theoretic results hold for both.</p>
878,914
<p>I'm having a problem to solve this limit.</p> <p>$$\lim_{x \to \pi/4} \frac{\tan x-1}{\sin x-\cos x}$$</p> <p>$\lim_{x \to \pi/4} \frac{\tan x-1}{\sin x-\cos x}$ = $\lim_{x \to \pi/4} \frac{\frac{\sin x}{\cos x}-1}{\sin x-\cos x}$= $\lim_{x \to \pi/4} \frac{\frac{\sin x-\cos x}{\cos x}}{\sin x-\cos x}$= $\lim_{x \to \pi/4} \frac{\frac{\frac{\sin x-\cos x}{\cos x}}{\sin x-\cos x}}{1}$ =</p> <p>numerator is : (upper*lower) = 1*$\sin x-\cos x$</p> <p>denominator is : (inner-up*inner-low) = $\cos x*(\sin x-\cos x)$. </p> <p>Which is :</p> <p>$$\lim_{x \to \pi/4} \frac{\sin x-\cos x}{(\cos x)(\sin x-\cos x)}$$</p> <p>I don't know what to do next? Any ideas?</p>
lab bhattacharjee
33,337
<p>You can safely cancel $\sin x-\cos x$: as $\displaystyle x\to\frac\pi4$ we only consider $x\ne\frac\pi4$, so $\sin x-\cos x\ne0$ even though $\sin x-\cos x\to0$.</p> <p>Things will be clearer if we write $$\lim_{x \to \pi/4} \frac{\tan x-1}{\sin x-\cos x}=\lim_{x\to\dfrac\pi4}\frac{\tan x-1}{\cos x(\tan x-1)}$$</p> <p>As $\displaystyle x\to\frac\pi4$ we have $\tan x\ne1$, so the factor $\tan x-1$ cancels and the limit equals $\displaystyle\lim_{x\to\pi/4}\frac1{\cos x}=\sqrt2$.</p>
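<p>A quick numerical sanity check in Python that the simplified limit is $1/\cos(\pi/4)=\sqrt2$ (the step sizes are arbitrary):</p>
<pre><code>
import math

for h in (1e-2, 1e-4, 1e-6):
    x = math.pi / 4 + h
    value = (math.tan(x) - 1) / (math.sin(x) - math.cos(x))
    print(value)              # approaches sqrt(2) = 1.41421356...
print(math.sqrt(2))
</code></pre>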
3,771,972
<blockquote> <p>If gcd<span class="math-container">$(m,n)=1$</span>, then <span class="math-container">$\phi(mn)=\phi(m)\phi(n)$</span>.</p> </blockquote> <p>My textbook writes that <span class="math-container">$f:\Bbb Z/_{(mn)}\Bbb Z\to\Bbb Z/_m\Bbb Z \times \Bbb Z/_n\Bbb Z$</span> given by <span class="math-container">$f([a]_{mn})=([a]_m,[a]_n)$</span> is an isomorphism.</p> <p>Up to this point, I think he is discussing the additive group.</p> <p>But when he claims that <span class="math-container">$f(U(_{mn}\Bbb Z))=U(_m\Bbb Z)\times U(_n\Bbb Z)$</span>, he is using the multiplication on <span class="math-container">$\Bbb Z/_{(mn)}\Bbb Z$</span>, and I'm confused about whether changing operations is alright or not. If he wants to use the multiplication on <span class="math-container">$\Bbb Z/_{(mn)}\Bbb Z$</span>, why doesn't he just write <span class="math-container">$f:(\Bbb Z/_{(mn)}\Bbb Z)^{\times}\to(\Bbb Z/_m\Bbb Z)^\times \times (\Bbb Z/_n\Bbb Z)^\times$</span>, even though <span class="math-container">$(\Bbb Z/_n\Bbb Z)^\times$</span> and <span class="math-container">$(\Bbb Z/_m\Bbb Z)^\times$</span> may not be groups? Hope someone can help me understand the notation here.</p>
Dietrich Burde
83,966
<p>For a ring <span class="math-container">$R$</span>, the group of units is denoted either by <span class="math-container">$U(R)$</span> or by <span class="math-container">$R^{\times}$</span>. In your example the ring is <span class="math-container">$R=\Bbb Z/m\Bbb Z$</span> for some integer <span class="math-container">$m$</span>. So <span class="math-container">$(\Bbb Z/m\Bbb Z)^{\times}$</span> is certainly a group.</p>
50,023
<p>two problems from Dugundji's book page $156$. (I don't know why the system deletes the word hi in this sentence)</p> <p>$1$. Let $X$ be a Hausdorff space. Show that:</p> <p>a) $\bigcap \{F: x \in F , F \ \textrm{closed}\} = \{x\}$.</p> <p>b) $\bigcap \{U : x \in U, U \ \textrm{open}\} = \{x\}$.</p> <p>c) The above properties are not equivalent to being Hausdorff.</p> <p>$2$. Prove every infinite Hausdorff space contains a countably infinite discrete subspace.</p> <p>My work:</p> <ol> <li>(a) So let $y \not \in \{x\}$ then $x \neq y$. Since $X$ is Hausdorff we can find disjoint open sets $U_{x}$, $V_{y}$. Define $F = X \setminus V_{y}$ then $x \in F$, $F$ is closed and $y \not \in F$, so $y \not \in \bigcap \{F: x \in F , F \ \textrm{closed}\}$.</li> </ol> <p>(b) Same as above, just note that $y \not \in U_{x}$, hence not in the intersection.</p> <p>(c) Not sure here, can we take $X$ any infinite set endowed with the cofinite topology?</p> <p>Claim: $\bigcap \{F: x \in F , F \ \textrm{closed}\} = \{x\}$. Suppose $y \not \in \{x\}$ then $y \neq x$, that is: $y \not \in \{x\}$. Note that since $X$ is a $T_{1}$ space then $F=\{x\}$ is closed and contains $x$, so $y$ is not in the intersection.</p> <p>Similarly for $(b)$. But $X$ is not Hausdorff since any two non-empty open sets intersect.</p> <p>2) How do you prove this one?</p>
Stephen J. Herschkorn
13,048
<p>Your answer to 1(a) is too complicated. Note that $\{x\}$ is in the collection whereof you are taking the intersection.</p> <p>Also, your answer to 1(c) is too complicated: The natural numbers under the cofinite topology provides a counterexample.</p> <p>For 2, I haven't thought through the details, but I think this works: First show that any infinite Hausdorff space must have a nonempty open set whose complement is infinite. Then apply the Axiom of Choice inductively to carve out neighborhoods of points.</p>
4,605,772
<p>I've been familiar with the formula of finding the <span class="math-container">$r$</span>-permutation of an <span class="math-container">$n$</span>-element set; but I've only been half-so familiar with deriving the formula. See, I know that to prove the truth of <span class="math-container">$$ P(n,r) = \frac{n!}{(n-r)!} $$</span></p> <p>It is first conventional to show that <span class="math-container">$$ P(n,r) = n \times (n-1) \times (n-2) \times \cdots \times (n-r+1) $$</span> and this is exactly where I find myself struggling.</p> <p>To construct all possible ways of choosing <span class="math-container">$r$</span> elements from an <span class="math-container">$n$</span> element set, we can reason accordingly: the first element can be chosen in <span class="math-container">$n$</span> different ways, the second element can be chosen in <span class="math-container">$(n-1)$</span> different ways ... and it is said to me that the <span class="math-container">$r$</span>th element can be chosen in <span class="math-container">$n-(r-1)$</span> different ways. But I do not see why the last one stands. I can admire the pattern in which it makes sense. Since judging by the position of the element, the first element can be represented with <span class="math-container">$n-(1-1)$</span> (because it's in the first place), then second with <span class="math-container">$n-(2-1)$</span> (because it's in the second place), the third in <span class="math-container">$n-(3-1)$</span> (because it's in the third place), and finally the <span class="math-container">$r$</span>th with <span class="math-container">$n-(r-1)$</span> (because it's in the <span class="math-container">$r$</span>th place); but this pattern hasn't been concretely proven, and hence isn't it only merely a conjecture? Could anyone provide a good impermeable proof of this? Thank you in advance.</p>
Ben Grossmann
81,360
<p>In case it's helpful (e.g. perhaps it can be modified to work using pseudoinverses), here's an approach that works if <span class="math-container">$\Gamma_j$</span> for <span class="math-container">$j = 1,\dots,n$</span> are invertible.</p> <p>First of all, note that for the block matrix <span class="math-container">$$ A = \pmatrix{1 &amp; 0_k^T\\0_k &amp; B} $$</span> <span class="math-container">$A$</span> is invertible iff <span class="math-container">$B$</span> is invertible with inverse given by <span class="math-container">$$ A^{-1} = \pmatrix{1&amp;0_k^T\\0_k&amp;B^{-1}}. $$</span> So, we can safely restrict our attention to the submatrix <span class="math-container">$$ \mathbf{D}_n = \begin{bmatrix} \mathbf{X}^\top_{1}\mathbf{X}_{1} + \boldsymbol{\Gamma}_1 &amp; \mathbf{X}^\top_{1}\mathbf{X}_{2} &amp; \cdots &amp; \mathbf{X}^\top_{1}\mathbf{X}_{n} \\ \mathbf{X}^\top_{2}\mathbf{X}_{1} &amp; \mathbf{X}^\top_{2}\mathbf{X}_{2} + \boldsymbol{\Gamma}_2 &amp;\cdots &amp; \mathbf{X}^\top_{2}\mathbf{X}_{n} \\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ \mathbf{X}^\top_{n}\mathbf{X}_{1} &amp; \mathbf{X}^\top_{n}\mathbf{X}_{2} &amp;\cdots &amp; \mathbf{X}^\top_{n}\mathbf{X}_{n} + \boldsymbol{\Gamma}_n\end{bmatrix}, \quad n=1, \ldots, N. $$</span> It is notable that we can write this matrix in the form <span class="math-container">$\mathbf D_n = \mathbf G_n + \mathbf \Xi_n^T\mathbf \Xi_n$</span>, where <span class="math-container">$$ \mathbf G_n = \operatorname{diag}(\mathbf \Gamma_1,\dots,\mathbf \Gamma_n), \quad \mathbf \Xi_n = \pmatrix{\mathbf X_1 &amp; \cdots &amp; \mathbf X_n}. $$</span> <span class="math-container">$\mathbf G_n$</span> is block-diagonal, so its inverse is easily computed as <span class="math-container">$$ \mathbf G_n^{-1} = \operatorname{diag}(\mathbf \Gamma_1^{-1},\dots,\mathbf \Gamma_n^{-1}). $$</span> From there, we could apply the <a href="https://en.wikipedia.org/wiki/Woodbury_matrix_identity" rel="nofollow noreferrer">Woodbury matrix identity</a> to get <span class="math-container">$$ \mathbf D_n^{-1} =\mathbf G_n^{-1} - \overbrace{\mathbf G_n^{-1} \mathbf \Xi^T}^{\mathbf U_n^T}\overbrace{(I + \mathbf \Xi_n \mathbf G_n^{-1}\mathbf \Xi_n^T)^{-1}}^{\mathbf Y_n}\overbrace{\mathbf \Xi_n \mathbf G_n^{-1}}^{\mathbf U_n} $$</span> The term <span class="math-container">$\mathbf U_n^T \mathbf Y_n \mathbf U_n^T$</span> requires only the computation of an <span class="math-container">$m \times m$</span> matrix. Note that we can rewrite <span class="math-container">$\mathbf U_n, \mathbf Y_n$</span> as follows. <span class="math-container">$$ \mathbf Y_n = \left(\mathbf I_m + \mathbf \Xi \mathbf G_n^{-1}\mathbf \Xi^T\right)^{-1} = \left(\mathbf I_m + \sum_{j=1}^m \mathbf X_j \mathbf \Gamma_j^{-1} \mathbf X_j^T\right)^{-1}. $$</span> Note that <span class="math-container">$\mathbf Y_n$</span> can itself be updated from <span class="math-container">$\mathbf Y_{n-1}$</span> using the Woodbury identity, which is especially useful if <span class="math-container">$k_n$</span> is small compared to <span class="math-container">$m$</span>. We have <span class="math-container">$$ \mathbf U_n = [\mathbf X_1 \mathbf \Gamma_1^{-1} \ \ \cdots \ \ \mathbf X_n \mathbf \Gamma_n^{-1}]. $$</span> Thus, we have <span class="math-container">$$ \mathbf U_n^T \mathbf Y_n \mathbf U_n^T = \left[\mathbf \Gamma_i^{-T} \mathbf X_i^T\left(\mathbf I_m + \sum_{j=1}^m \mathbf X_j \mathbf \Gamma_j^{-1} \mathbf X_j^T\right)^{-1}\mathbf X_j\mathbf \Gamma_j^{-1}\right]_{i,j = 1}^n. 
$$</span></p> <hr /> <p>We can make this work for positive semidefinite <span class="math-container">$\mathbf \Gamma_j$</span> as follows. Consider the following definitions:</p> <ul> <li>For each <span class="math-container">$j$</span>, let the matrix <span class="math-container">$L_j$</span> be such that <span class="math-container">$L_jL_j^T = \mathbf \Gamma_j$</span>. In particular, I will take <span class="math-container">$L_j$</span> to be the <a href="https://en.wikipedia.org/wiki/Square_root_of_a_matrix#Positive_semidefinite_matrices" rel="nofollow noreferrer">positive semidefinite square</a> root of <span class="math-container">$\mathbf \Gamma_j$</span> so that <span class="math-container">$L_j$</span> is symmetric.</li> <li><span class="math-container">$X = \operatorname{diag}(L_1,\dots,L_n)$</span>.</li> <li><span class="math-container">$Y = \mathbf \Xi_n^T$</span></li> <li><span class="math-container">$Z = (I - XX^+)Y$</span></li> <li><span class="math-container">$E = I - X^+Y (I - Z^+Z) F^{-1} (X^+Y)^\mathrm T$</span></li> <li><span class="math-container">$F = I + (I - Z^+Z) Y^\mathrm T (XX^\mathrm T)^+ Y (I - Z^+Z)$</span></li> </ul> <p>We can then compute <span class="math-container">$$ \mathbf C_n^{-1} = \mathbf C_n^+ = (XX^\mathrm T + YY^\mathrm T)^+ = (ZZ^\mathrm T)^+ + (I - YZ^+)^\mathrm T X^{+\mathrm T} E X^+ (I - YZ^+). $$</span></p> <p>Some simplifications:</p> <ul> <li><span class="math-container">$X^+ = \operatorname{diag}(L_1^+,\dots,L_n^+)$</span></li> <li><span class="math-container">$(XX^T)^+ = \mathbf G_n^+ = (X^+)^2 = \operatorname{diag}([L_1^+]^2,\dots,[L_n^+]^2)$</span></li> </ul>
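<p>A small NumPy sketch checking the Woodbury-based expression for <span class="math-container">$\mathbf D_n^{-1}$</span> above against a direct inverse; the block sizes and random data are illustrative only, and the dense inverses are used here purely for the check:</p>
<pre><code>
import numpy as np

rng = np.random.default_rng(0)
m, ks = 5, [2, 3, 2]                       # m rows; block sizes k_1..k_n (illustrative)

Xs = [rng.standard_normal((m, k)) for k in ks]
Gs = []                                    # random symmetric positive definite Gamma_j
for k in ks:
    M = rng.standard_normal((k, k))
    Gs.append(M @ M.T + k * np.eye(k))

Xi = np.hstack(Xs)                         # [X_1 ... X_n]
sizes = np.cumsum([0] + ks)
G = np.zeros((sizes[-1], sizes[-1]))       # block-diagonal G_n
for i, Gi in enumerate(Gs):
    G[sizes[i]:sizes[i + 1], sizes[i]:sizes[i + 1]] = Gi

D = G + Xi.T @ Xi                          # the matrix D_n
Ginv = np.linalg.inv(G)
Y = np.linalg.inv(np.eye(m) + Xi @ Ginv @ Xi.T)
D_inv = Ginv - Ginv @ Xi.T @ Y @ Xi @ Ginv # Woodbury-based inverse

print(np.allclose(D_inv, np.linalg.inv(D)))   # True
</code></pre>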
4,605,772
<p>I've been familiar with the formula of finding the <span class="math-container">$r$</span>-permutation of an <span class="math-container">$n$</span>-element set; but I've only been half-so familiar with deriving the formula. See, I know that to prove the truth of <span class="math-container">$$ P(n,r) = \frac{n!}{(n-r)!} $$</span></p> <p>It is first conventional to show that <span class="math-container">$$ P(n,r) = n \times (n-1) \times (n-2) \times \cdots \times (n-r+1) $$</span> and this is exactly where I find myself struggling.</p> <p>To construct all possible ways of choosing <span class="math-container">$r$</span> elements from an <span class="math-container">$n$</span> element set, we can reason accordingly: the first element can be chosen in <span class="math-container">$n$</span> different ways, the second element can be chosen in <span class="math-container">$(n-1)$</span> different ways ... and it is said to me that the <span class="math-container">$r$</span>th element can be chosen in <span class="math-container">$n-(r-1)$</span> different ways. But I do not see why the last one stands. I can admire the pattern in which it makes sense. Since judging by the position of the element, the first element can be represented with <span class="math-container">$n-(1-1)$</span> (because it's in the first place), then second with <span class="math-container">$n-(2-1)$</span> (because it's in the second place), the third in <span class="math-container">$n-(3-1)$</span> (because it's in the third place), and finally the <span class="math-container">$r$</span>th with <span class="math-container">$n-(r-1)$</span> (because it's in the <span class="math-container">$r$</span>th place); but this pattern hasn't been concretely proven, and hence isn't it only merely a conjecture? Could anyone provide a good impermeable proof of this? Thank you in advance.</p>
tch
352,534
<p>If it is not possible to store a matrix of the dimensions of <span class="math-container">$\mathbf{C}_n$</span>, one possibility is to use a randomized estimator for the diagonal in conjunction with a linear-system solver.</p> <p>In particular, note that for a matrix <span class="math-container">$\mathbf{A}$</span>, and random vector <span class="math-container">$\mathbf{v}$</span> with iid entries equal to <span class="math-container">$\pm 1$</span> each with probability <span class="math-container">$1/2$</span>, <span class="math-container">\begin{equation*} \mathbb{E}[\mathbf{v}\circ (\mathbf{A} \mathbf{v})] = \operatorname{diag}(\mathbf{A}). \end{equation*}</span> Here we use <span class="math-container">$\circ$</span> to denote the <a href="https://en.wikipedia.org/wiki/Hadamard_product_(matrices)" rel="nofollow noreferrer">Hadamard Product</a> which just multiplies elements entry-wise.</p> <p>If you set <span class="math-container">$\mathbf{A} = \mathbf{C}_n$</span>, then you can compute <span class="math-container">$\mathbf{A}\mathbf{v}$</span> by solving the linear system <span class="math-container">$\mathbf{C}_n \mathbf{x} = \mathbf{v}$</span>. This could be done by a Krylov subspace method which would allow you to avoid even forming <span class="math-container">$\mathbf{C}_n$</span> itself. I suspect you can take advantage of the structure of your matrix to speed up product with <span class="math-container">$\vec{C}_n$</span>. If you have some other solver of choice, you could of course use that instead.</p> <p>For a single vector <span class="math-container">$\mathbf{v}$</span>, the variance of <span class="math-container">$\mathbf{v}\circ (\mathbf{A} \mathbf{v})$</span> may be big. But you can just average a bunch of iid copies to reduce the variance. There are more clever approaches based on this general idea as well.</p> <p>Here are some references:</p> <ul> <li><a href="https://arxiv.org/pdf/2201.10684.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2201.10684.pdf</a></li> <li><a href="https://arxiv.org/pdf/2208.03268.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2208.03268.pdf</a></li> </ul>
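<p>A minimal NumPy sketch of this probe-vector diagonal estimator, here applied to the diagonal of the inverse; a small dense SPD matrix stands in for <span class="math-container">$\mathbf{C}_n$</span>, and a direct solve stands in for the Krylov solver:</p>
<pre><code>
import numpy as np

rng = np.random.default_rng(1)
n = 200
B = rng.standard_normal((n, n))
C = B @ B.T + n * np.eye(n)                # dense SPD stand-in for C_n

num_samples = 1000
est = np.zeros(n)
for _ in range(num_samples):
    v = rng.choice([-1.0, 1.0], size=n)    # Rademacher probe vector
    x = np.linalg.solve(C, v)              # stand-in for a Krylov solve of C_n x = v
    est += v * x                           # v o (C_n^{-1} v)
est /= num_samples

exact = np.diag(np.linalg.inv(C))
print(np.linalg.norm(est - exact) / np.linalg.norm(exact))  # shrinks as num_samples grows
</code></pre>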
112,263
<p>When Mathematica outputs nested lists, it just gloms one record right after the next record and it wraps around in the space available, for example:</p> <pre><code>In[10]:= FactorInteger[FromDigits["1000000000000101",Range[2,10]]] Out[10]= {{{13,1},{2521,1}},{{11,1},{251,1},{5197,1}},{{3,2},{229,1},{520981,1}},{{4751,1},{6423401,1}},{{173,1},{281,1},{2677,1},{3613,1}},{{3,1},{61,2},{425294411,1}},{{53,1},{157,1},{1697,1},{2491681,1}},{{109,1},{75403,1},{25050853,1}},{{3,1},{47,1},{157,1},{1021,1},{44244113,1}}} </code></pre> <p>What would be more convenient would be to be able to see each record on its own line. So, for example, in the example above there are 9 records, one for each of the input values 2-10. I can kind of see the separation between them by looking for the double "}}", but it is stressful. Is there an easy way to have a line break after each record, so it is more obvious where one record ends and the next begins?</p>
mgamer
19,726
<p>Maybe you are looking for something like this:</p> <pre><code>Print[#] &amp; /@ Flatten[ FactorInteger[FromDigits["1000000000000101", Range[2, 10]]], 1]; </code></pre>
3,848,517
<p>I have a conjecture in my mind regarding Arithmetic Progressions, but I can't seem to prove it. I am quite sure that the conjecture is true though.</p> <p>The conjecture is this: suppose you have an AP (arithmetic progression): <span class="math-container">$$a[n] = a[1] + (n-1)d$$</span> Now, suppose our AP satisfies the property that the sum of the first <span class="math-container">$n$</span> terms of our AP is equal to the sum of the first <span class="math-container">$m$</span> terms: <span class="math-container">$$S[n] = S[m]$$</span> but <span class="math-container">$n \neq m$</span>. I want to prove two theorems:</p> <ul> <li>The underlying AP <span class="math-container">$a[n]$</span> must be <strong>symmetric</strong> with respect to the point at which it becomes zero.</li> <li><span class="math-container">$S[n + m] = 0$</span></li> </ul> <h2>A Numerical Example</h2> <p>Consider the AP: <span class="math-container">$$a[n] = 4 - n = (3, 2, 1, 0, -1, -2, -3)$$</span> This is an AP with common difference <span class="math-container">$d = -1$</span> and first term <span class="math-container">$a[1] = 3$</span>: <a href="https://i.stack.imgur.com/NKmZF.png" rel="nofollow noreferrer">Here is the MATLAB plot of this AP</a>. As you can see in the plot, our AP is <strong>symmetric</strong> with respect to the point <span class="math-container">$n = 4$</span>: <span class="math-container">$$a[4-1] = -a[4+1] = 1$$</span> <span class="math-container">$$a[4-2] = -a[4+2] = 2$$</span> <span class="math-container">$$a[4-3] = -a[4+3] = 3$$</span></p> <p>Now, here is the sum of our AP: <span class="math-container">$$S[n] = (3,5,6,6,5,3,0)$$</span> <a href="https://i.stack.imgur.com/Ep0aB.png" rel="nofollow noreferrer">Here is the MATLAB plot of the summation</a>. You can clearly see that: <span class="math-container">$$S[1] = S[6] = 3$$</span> <span class="math-container">$$S[2] = S[5] = 5$$</span> <span class="math-container">$$S[3] = S[4] = 6$$</span></p> <p>and you can also see that: <span class="math-container">$$S[1 + 6] = S[7] = 0$$</span> <span class="math-container">$$S[2 + 5] = S[7] = 0$$</span> <span class="math-container">$$S[3 + 4] = S[7] = 0$$</span></p> <p>Can you please help me out with this problem? Any guidance will be very welcome. I am actually an Engineering student, so my Pure Math skills are not that strong.</p> <p>Thank you!</p>
Kosh
270,689
<p>The condition you are looking for is uniform convergence. If you want a counterexample, just think, on the real line, of the sequence of functions obtained by translating by <span class="math-container">$+n$</span> a given nonnegative, not identically zero function with compact support.</p>
766,694
<p>I am stuck trying to get the values for x, y, and z. I keep moving variables around but I end up getting answers like x = x or z = z and I do not think that is what I want. It's really just algebra but I seem to have forgotten that. </p> <p>(x/2) + (2z/3) = x</p> <p>(x/2) + (y/3) = y</p> <p>(2y/3) + (z/3) = z</p> <p>I initially tried finding the values for z and y from the first two equations and plugged those into the third equation, but ended up getting that x is equal to 0, and when I plug zero into the other equations to solve for the other two variables I get that x, y, and z are all equal to 0. I do not think I am tackling this correctly.</p>
Amzoti
38,839
<p>From the first equation, we have:</p> <p>$$z = \dfrac{3x}{4}$$</p> <p>From the second equation, we have:</p> <p>$$y = \dfrac{3x}{4}$$</p> <p>Substituting these into the third equation gives an identity. So, we have:</p> <p>$$z = y = \dfrac{3x}{4},~ x ~~\mbox{is a free variable}$$</p> <p>Note, you could have rewritten this system as:</p> <p>$$3 x=4 z,3 x=4 y,y=z$$</p>
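<p>A SymPy sketch confirming the one-parameter family of solutions (the variable names just mirror the question):</p>
<pre><code>
from sympy import symbols, linsolve

x, y, z = symbols('x y z')
eqs = [x/2 + 2*z/3 - x,
       x/2 + y/3 - y,
       2*y/3 + z/3 - z]

print(linsolve(eqs, (x, y, z)))   # {(4*z/3, z, z)}: z is free and y = z = 3x/4
</code></pre>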
2,282,359
<p>I am trying to calculate this limit: $$\lim_{x \to \infty} x^2(\ln x-\ln (x-1))-x$$ The answer is $1/2$ but I am trying to verify this through proper means. I have tried L'Hospital's Rule by factoring out an $x$ and putting that as $\frac{1}{x}$ in the denominator (indeterminate form) but it becomes hopeless afterwards. Also I am a little hesitant about series involving the natural log because of the restricted interval of convergence as $x$ goes to infinity. Is there a different approach to evaluating this limit? Thanks. </p>
Community
-1
<p>$$\ln x - \ln(x-1)=\ln\frac{1}{1-\frac{1}{x}}=\frac{1}{x}+\frac{1}{2}\frac{1}{x^2}+\frac{1}{3}\frac{1}{x^3}+\cdots, $$ so $$x^2(\ln x - \ln(x-1))-x=\frac{1}{2}+\frac{1}{3}\frac{1}{x}+\cdots$$ and thus $$\lim_{x\rightarrow\infty}[x^2(\ln x - \ln(x-1))-x]=\frac{1}{2}.$$</p>
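<p>A quick numerical check of the limit in Python (the sample values of $x$ are arbitrary):</p>
<pre><code>
import math

for x in (1e2, 1e3, 1e4):
    print(x * x * (math.log(x) - math.log(x - 1)) - x)   # approaches 0.5
</code></pre>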
729,415
<blockquote> <p>If $f$ has a relative extremum at $c$, then either $f'(c) = 0$ or $f'(c)$ does not exist. Can one prove the opposite?</p> </blockquote> <p>The statement is due to Fermat. It shows that one can find the relative extrema of $f$ by finding the points $c$ such that $f'(c)=0$ or $f'(c)$ doesn't exist.</p> <p>However can one show that, if $f'(c)=0$ or $f'(c)$ does not exist for some value $c$, then $f$ attains a relative extremum at $c$ ?</p>
Nitish
61,574
<p>It's <em>not</em> true.</p> <p>\begin{align} f(x)&amp;=x^3\\ f'(0)&amp;=0 \end{align}</p> <p>However, $0$ is neither a local minimum nor a local maximum. It is in fact a <a href="http://en.wikipedia.org/wiki/Saddle_point" rel="nofollow"><em>saddle point</em></a>.</p> <p>You can have points like these which are stationary (i.e. they have $0$ derivative) but aren't local extrema. Roughly speaking, these are points where the function flattens out for an instant but keeps increasing (or keeps decreasing) on both sides. </p>
729,415
<blockquote> <p>If $f$ has a relative extremum at $c$, then either $f'(c) = 0$ or $f'(c)$ does not exist. Can one prove the opposite?</p> </blockquote> <p>The statement is due to Fermat. It shows that one can find the relative extrema of $f$ by finding the points $c$ such that $f'(c)=0$ or $f'(c)$ doesn't exist.</p> <p>However can one show that, if $f'(c)=0$ or $f'(c)$ does not exist for some value $c$, then $f$ attains a relative extremum at $c$ ?</p>
Jared
138,018
<p>No, because the other way isn't true. Here's an example:</p> <p>$$ f(x) = x^3 \\ f'(x) = 3x^2 \rightarrow f'(0) = 0 $$</p> <p>But $x = 0$ isn't an extremum. The sure criterion for an extremum is: if the derivative's sign changes at a point, then you definitely have an extremum there. If the function is differentiable, then (since derivatives have the intermediate value property) this can only occur when $f'(x) = 0$. Otherwise, the sign change may happen at a point where the derivative does not exist, jumping from negative to positive or vice versa (like with $f(x) = |x|$).</p> <p><b>edit</b></p> <p>I should always be careful about making absolute statements. If the sign of the derivative changes <i>and</i> the function exists at that point, then it is an extremum. So for instance, $f(x) = \frac{1}{x^2}$: the sign of the derivative changes at $x = 0$, but the function doesn't exist at that point and thus it is not an extremum (unless you're like me and you like to consider infinities as maxima or minima, especially when they're in the middle of a graph).</p>
11,315
<p>I just answered this question:</p> <p><a href="https://math.stackexchange.com/questions/528880/boolean-formula">Boolean formula over 64 Boolean variables X</a></p> <p>By the time I had posted my answer, another user had edited the question so as to remove all the mathematical interest.</p> <p>What is going on here? I feel very demotivated after spending time crafting an answer to a question and then seeing a sensible question trashed.</p> <p>Regards,</p> <p>Rob.</p>
Dennis Meng
35,665
<p>Anyone is able to edit a question or answer; this is common across <em>all</em> stackexchange sites, not just Mathematics. Those with less than 2000 reputation must first have their edit approved.</p> <p>It's unfortunate when a question is edited to either change the question asked or otherwise make the question lose its value. However, you can suggest an edit to change it back, and explain in the edit description that you are trying to change it back after a previous edit invalidated the question in some way.</p>
950,937
<p><strong>Specifications and Data in question</strong></p> <p>1.We have a skew symmetric matrix $ A(t)_{3\times 3}= \begin{bmatrix}\,0&amp;\!-a_3(t)&amp;\,\,a_2(t)\\ \,\,a_3(t)&amp;0&amp;\!-a_1(t)\\-a_2(t)&amp;\,\,a_1(t)&amp;\,0\end{bmatrix} \tag 1$ and</p> <p>$B(t)_{3 \times 3}= -\int_{0}^{t} A(s)\ ds \tag 2$ </p> <p>$a_1(t),a_2(t),a_3(t)$ are general non matrix functions. Just used here to show the form of $A(t)$</p> <p><strong>Question</strong></p> <ol> <li><strong>Is it possible to prove $B(t)*B(t)^{'}=B(t)^{'}*B(t)$ (commutative)? if not why?</strong> </li> </ol> <p>NB: * means multiplication and $B'(t) =\frac{\text{d}B(t) }{\text{d}t} $</p>
Slade
33,433
<p>Suppose I have $n$ light switches, numbered $1$ through $n$. I'd like at least one light on, but other than that I can select any configuration. How many configurations are there?</p> <p>Now, for some $i$ with $1\leq i\leq n$, how many configurations are there with the property that switch $i$ is on, but all switches to its right are off?</p>
950,937
<p><strong>Specifications and Data in question</strong></p> <p>1.We have a skew symmetric matrix $ A(t)_{3\times 3}= \begin{bmatrix}\,0&amp;\!-a_3(t)&amp;\,\,a_2(t)\\ \,\,a_3(t)&amp;0&amp;\!-a_1(t)\\-a_2(t)&amp;\,\,a_1(t)&amp;\,0\end{bmatrix} \tag 1$ and</p> <p>$B(t)_{3 \times 3}= -\int_{0}^{t} A(s)\ ds \tag 2$ </p> <p>$a_1(t),a_2(t),a_3(t)$ are general non matrix functions. Just used here to show the form of $A(t)$</p> <p><strong>Question</strong></p> <ol> <li><strong>Is it possible to prove $B(t)*B(t)^{'}=B(t)^{'}*B(t)$ (commutative)? if not why?</strong> </li> </ol> <p>NB: * means multiplication and $B'(t) =\frac{\text{d}B(t) }{\text{d}t} $</p>
miniparser
99,402
<p>I have my own combinatorial proof for this.</p> <p>$\sum_{i=0}^{n-1}2^i=2^n-1$</p> <p>Interpreted as $2^0+2^1+2^2+\cdots+2^{n-1}$</p> <p>Each time $n\ge 1$ increases by $1$, the left-hand side grows by $2^{n-1}$, i.e. by the current total plus one, making the new total $(2^{n-1}-1)+2^{n-1}=2(2^{n-1})-1=2^n-1$.</p>
446,948
<blockquote> <p>Suppose that <span class="math-container">$X$</span> is a geometric random variable with parameter (probability of success) <span class="math-container">$p$</span>.</p> <p>Show that <span class="math-container">$\Pr(X &gt; a+b \mid X&gt;a) = \Pr(X&gt;b)$</span></p> </blockquote> <p>First I thought I'd start by calculating <span class="math-container">$\Pr(X&gt;n)$</span> where <span class="math-container">$n=a+b$</span>:</p> <p><span class="math-container">$$\Pr(X &gt; n) = p_{n+1} + p_{n+2} + \cdots = ?\tag{1}$$</span></p> <p>But I don't know how to determine the limit of equation (1). I know for an infinite geometric series starting at index zero:</p> <p><span class="math-container">$$\sum\limits_{n=0}^\infty ax^n=\cfrac{a}{1-x}\text{ for }|X|&lt;1$$</span></p> <p>But don't know what to do when index starts at <span class="math-container">$n$</span>.</p> <p>Next I thought I'd do:</p> <p><span class="math-container">$$\Pr(X &gt; a+b) \mid X &gt; a) = \cfrac{ \Pr[ (X &gt; a+b) \cap (X &gt; a)] }{\Pr(X &gt; a)}$$</span> <span class="math-container">$$=\cfrac{\Pr(X &gt; a+b)}{\Pr(X &gt; a)}$$</span></p> <p>and substitute my result from equation (1). Any help appreciated in advance. Thank you.</p>
André Nicolas
6,312
<p>There are two ways to get $\Pr(X\gt n)$, the easy way and the hard way. Since there is something to be learned from both, we do it both ways.</p> <p><strong>The hard way:</strong> The probability that $X=n+1$ is the probability of $n$ failures in a row, then success. This is $(1-p)^np$.</p> <p>Similarly, the probability that $X=n+2$ is the probability of $n+1$ failures in a row, then success. This has probability $(1-p)^{n+1}p$.</p> <p>Continue, and add up. There is a $(1-p)^n p$ "in" each term. Take it out as a common factor. So we get $$(1-p)^n p\left(1+(1-p)+(1-p)^2 +\cdots\right).$$ The geometric series inside the brackets has sum $\frac{1}{1-(1-p)}=\frac{1}{p}$.</p> <p><strong>The easy way:</strong> We have $X\gt n$ precisely if we get $n$ failures in a row, probability $(1-p)^n$.</p>
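<p>A tiny Python check of the resulting memoryless property, $\Pr(X &gt; a+b \mid X&gt;a) = \Pr(X&gt;b)$, using $\Pr(X&gt;n)=(1-p)^n$; the values of $p$, $a$, $b$ are arbitrary:</p>
<pre><code>
p, a, b = 0.3, 4, 7

def tail(n):                     # P(X greater than n) for a geometric variable
    return (1 - p) ** n

lhs = tail(a + b) / tail(a)      # P(X greater than a+b, given X greater than a)
rhs = tail(b)
print(lhs, rhs)                  # both equal 0.7**7 = 0.0823543
</code></pre>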
48,237
<p>I have found by a numerical experiment that the first such primes are $2,5,13,17,29,37,41$, but I cannot work out the general pattern.<br> Please share any ideas you have on the subject.</p>
Zarrax
3,035
<p>If you know a little group theory it's not too hard. It holds for $p = 2$ so assume $p &gt; 2$. Under multiplication, the nonzero elements of ${\mathbb Z}_p$ form a cyclic group of order $p - 1$. To say a nonzero $x$ in ${\mathbb Z}_p$ satisfies $x^2 = -1$ is the same as saying that $x$ is an element of order $4$ in this group. To see why, first note that clearly such an $x$ is of order $4$; $x \neq 1$ and $x^2 = - 1 \neq 1$, while $x^4 = 1$. For the converse, if an element $y$ is of order $4$ then $y^2$ is of order $2$. So $y^4 - 1 = (y^2 - 1)(y^2 + 1) = 0$ in ${\mathbb Z}_p$. Since $y^2$ is not $1$ it has to be $-1$.</p> <p>So the question is when this cyclic group of order $p-1$ has an element of order $4$. The orders of the elements of a cyclic group are exactly the divisors of the order. So this happens when $4$ divides $p - 1$, or equivalently when $p \equiv 1 \pmod 4$.</p>
48,237
<p>I have found by a numerical experiment that the first such primes are $2,5,13,17,29,37,41$, but I cannot work out the general pattern.<br> Please share any ideas you have on the subject.</p>
André Nicolas
6,312
<p>The following argument depends on knowing (or separately proving) <strong>Wilson's Theorem</strong>.</p> <p><strong>Theorem</strong> Let $p$ be prime. Then $(p-1)! \equiv -1 \pmod{p}$.</p> <p>Now we use Wilson's Theorem to prove the result. For the sake of concreteness, I will use $p=17$, but the same idea exactly works for all primes congruent to $1$ modulo $4$. All congruences will be modulo $17$, so $17$ will mostly not be mentioned explicitly.</p> <p>Note that $$16!= (1)(2)(3)(4)(5)(6)(7)(8)(16)(15)(14)(13)(12)(11)(10)(9).$$ (Didn't really do anything.)</p> <p>Note that $16\equiv -1$, $15\equiv -2$, $14\equiv -3$, and so on until $9 \equiv -8$. It follows that $$(15)(14)(13)(12)(11)(10)(9)\equiv (-1)(-2)(-3)(-4)(-5)(-6)(-7)(-8).$$ But the right-hand side is congruent to $(-1)^8(1)(2)(3)(4)(5)(6)(7)(8)$, and of course $(-1)^8=1$.</p> <p>It follows that $16! \equiv (8!)^2 \pmod{17}$. But by Wilson's Theorem, $16!\equiv -1 \pmod{17}$, and therefore $$(8!)^2 \equiv -1 \pmod{17}.$$</p> <p>We have found an <strong>explicit expression</strong> for an $x$ such that $x^2\equiv - \pmod{17}$.</p> <p>If $p \equiv 3 \pmod{4}$, the argument breaks down, because we end up with an <em>odd number</em> of minus signs. (Anyway, the result does not hold when $p\equiv 3\pmod 4$.)</p> <p>In general, if $p\equiv 1 \pmod 4$, and $q=(p-1)/2$, then $$(q!)^2 \equiv -1 \pmod{p}.$$</p> <p><strong>Other primes</strong>: It remains to show that the congruence is not solvable if $p \equiv 3\pmod{4}$. So suppose to the contrary that $b^2\equiv -1\pmod{p}$. Look at the numbers $b, 2b, 3b, \dots, (p-1)b$. It is easy to verify they are pairwise incongruent modulo $p$, so their product is congruent to $(p-1)!$ modulo $p$. Thus $$b^{p-1}(p-1)! \equiv (p-1)! \pmod p,$$ giving $b^{p-1}\equiv 1 \pmod p$. Let $q=(p-1)/2$. Then $b^{p-1} \equiv (-1)^q \pmod p$. But if $p \equiv 3 \pmod 4$, then $q$ is odd, and we obtain the absurd $-1 \equiv 1 \pmod{p}$. </p>
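<p>A short Python check of the explicit construction $(q!)^2 \equiv -1 \pmod p$ with $q=(p-1)/2$, for the primes $p\equiv 1\pmod 4$ listed in the question:</p>
<pre><code>
from math import factorial

for p in (5, 13, 17, 29, 37, 41):
    q = (p - 1) // 2
    print(p, factorial(q) ** 2 % p == p - 1)   # True for every p = 1 (mod 4)
</code></pre>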
3,275,966
<p>In a lottery drawing, six balls with numbers from <span class="math-container">$1$</span> to <span class="math-container">$36$</span> are drawn. A player buys a ticket and writes on it the numbers of six balls which, in his opinion, will be drawn. The player wants to buy several lottery tickets so as to be guaranteed to guess at least two numbers on at least one ticket. Is it enough to buy <span class="math-container">$12$</span> lottery tickets?</p> <p><strong>My work</strong>. The maximum number of pairs of numbers on <span class="math-container">$12$</span> tickets is <span class="math-container">$12 \binom{6}{2}=12 \cdot 15$</span>. The drawing produces <span class="math-container">$ \binom{6}{2}=15$</span> pairs of numbers. The total number of pairs of numbers is <span class="math-container">$\binom{36}{2}=18 \cdot 35$</span>. I have no idea how to solve the problem.</p>
Ross Millikan
1,827
<p><span class="math-container">$10$</span> tickets are enough. Start by buying <span class="math-container">$(1,2,3,4,5,6)$</span> and the other five blocks of six numbers. If you have not won yet there must be one number in each block of six. Now buy <span class="math-container">$(1,2,3,7,8,9),(1,2,3,10,11,12),(4,5,6,7,8,9)$</span> and <span class="math-container">$(4,5,6,10,11,12)$</span>. One of these last four must win because the number in <span class="math-container">$1-6$</span> and the number in <span class="math-container">$7-12$</span> are both in one of them. </p> <p>Thanks to David K for the correction.</p>
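<p>A brute-force Python check that these $10$ tickets always share at least two numbers with any draw; it enumerates all $\binom{36}{6}\approx 1.9$ million draws, so it takes a little while to run:</p>
<pre><code>
from itertools import combinations

blocks = [set(range(6 * i + 1, 6 * i + 7)) for i in range(6)]
extras = [{1, 2, 3, 7, 8, 9}, {1, 2, 3, 10, 11, 12},
          {4, 5, 6, 7, 8, 9}, {4, 5, 6, 10, 11, 12}]
tickets = blocks + extras

ok = all(any(len(t.intersection(draw)) &gt;= 2 for t in tickets)
         for draw in combinations(range(1, 37), 6))
print(ok)   # True: every draw shares 2 or more numbers with some ticket
</code></pre>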
1,042,285
<p>How can I prove boundedness of the sequence $$a_n=\frac{\sin (n)}{8+\sqrt{n}}$$ without using its convergence to $0$? I know that since it is convergent, it is bounded.</p>
Bumblebee
156,886
<p>$$\Big|\frac{\sin (n)}{8+\sqrt{n}}\Big|\le\Big|\frac{1}{8+\sqrt{n}}\Big|\le\dfrac{1}{9}, \quad \forall n\in\mathbb{N}. $$</p>
4,415,037
<p>I would like to prove upper and lower bounds on <span class="math-container">$|\cos(x) - \cos(y)|$</span> in terms of <span class="math-container">$|x-y|$</span>. I was able to show that <span class="math-container">$|\cos(x) - \cos(y)| \leq |x - y|$</span>. I'm stuck on the lower bound. Does anyone know how to approach this?</p> <p>Update: Over the interval <span class="math-container">$[0,\pi/2]$</span>, I was able to show that <span class="math-container">$|\cos(x) - \cos(y)| \geq \frac{2 \min(x,y)}{\pi}|x-y|$</span>. But I would like a lower bound that holds for any interval.</p>
Community
-1
<p>You might consider this a &quot;satisfactory&quot; answer instead of the obvious <span class="math-container">$0$</span> and <span class="math-container">$-|x-y|$</span>.</p> <p>What we want to achieve first here is find some <span class="math-container">$f\left|t\right|$</span> so <span class="math-container">$\left|\cos t \right|=\cos|t| \geq f\left|t\right|$</span> is a strict inequality.</p> <p>We know <span class="math-container">$\cos|t|$</span>, at it's lowest, is <span class="math-container">$0$</span> when <span class="math-container">$|t|$</span> is an odd multiple of <span class="math-container">$\dfrac{\pi}{2}$</span> i.e <span class="math-container">$\cos|t|=0 \Leftrightarrow |t|=\dfrac{(2n+1)\pi}{2}$</span> for some <span class="math-container">$n \in \mathbb{N}$</span>.</p> <p>To get a handle on <span class="math-container">$|t|$</span> we solve for <span class="math-container">$n$</span>:</p> <p><span class="math-container">$n= \dfrac{\dfrac{2|t|}{\pi}-1}{2} = \dfrac{2|t|-\pi}{2\pi}$</span></p> <p>So, in that case:</p> <p><span class="math-container">$\cos|t| \geq |t|-\dfrac{\left( 2\cdot \dfrac{2|t|-\pi}{2\pi}+1\right)\pi}{2}$</span></p> <p>Same argument can be done when <span class="math-container">$\cos|t|$</span> is at its max of <span class="math-container">$1$</span> where <span class="math-container">$|t|=n\pi$</span> for some <span class="math-container">$n \in \mathbb{N}$</span>:</p> <p><span class="math-container">$n= \dfrac{|t|}{\pi}$</span></p> <p>So, in that case:</p> <p><span class="math-container">$\cos|t| \geq |t|-\dfrac{|t|}{\pi}\pi + 1$</span></p> <p>Notice that we had to adjust for <span class="math-container">$\cos|t|=1$</span> by adding <span class="math-container">$1$</span> to ensure the inequality is strict.</p> <p>Now we choose the inequality with the smaller RHS:</p> <p><span class="math-container">$\cos|t| \geq |t|-\dfrac{\left( 2\cdot \dfrac{2|t|-\pi}{2\pi}+1\right)\pi}{2}$</span></p> <p>Finally:</p> <p><span class="math-container">$\left|\cos x -\cos y\right| \\ \geq \left|\cos x \right| -\left|\cos y \right| \\= \cos|x| - \cos|y|\\ \geq |x|-|y| - \dfrac{\left( 2\cdot \dfrac{2|x|-\pi}{2\pi}+1\right)\pi}{2} + \dfrac{\left( 2\cdot \dfrac{2|y|-\pi}{2\pi}+1\right)\pi}{2}\\ \geq -|x-y| - |x| + |y| $</span></p> <p>I know we can keep going to reach <span class="math-container">$-2|x-y|$</span> and that we could have cancelled all for <span class="math-container">$0$</span>, but let's just leave it at that!</p>
3,104,210
<p>Is there a commonly accepted notation for a vector whose first <span class="math-container">$k$</span> entries are <span class="math-container">$1$</span>'s, with <span class="math-container">$0$</span>'s afterwards? I have seen <span class="math-container">$\mathbf{e}_i$</span> for a vector with a <span class="math-container">$1$</span> in the <span class="math-container">$i$</span>th entry, and <span class="math-container">$\mathbf{e}$</span> for a vector of all ones, but not this particular case.</p>
neofelis
641,909
<p>As noted in comments, your <span class="math-container">$\phi$</span> is not a property of sets -- it's a class. You're correct when you say that Comp says that given a set, there is a subset of that set whose elements all satisfy some property. What's the property we want? "Being identical to x or being identical to y" (for some given <span class="math-container">$x$</span> and <span class="math-container">$y$</span>). We can write this as a formula with one free variable, <span class="math-container">$u$</span>, as follows: <span class="math-container">$u = x \lor u = y$</span>. We call this formula <span class="math-container">$\varphi$</span>. If <span class="math-container">$\varphi$</span> holds of <span class="math-container">$u$</span>, then <span class="math-container">$u$</span> is identical to <span class="math-container">$x$</span> or to <span class="math-container">$y$</span>, meaning <span class="math-container">$u$</span> has the property we want. Now it does indeed follow from Cmpr that when we're given a set, there's a subset of that set whose elements all satisfy the property. For the following formula is an instance of Cmpr: <span class="math-container">$$\forall x \forall w_1 \forall w_2 \exists y \forall z(z \in y \leftrightarrow (z \in x \land (u = w_1 \lor u = w_2)))$$</span> (Note we replaced <span class="math-container">$\phi$</span> in Comp not by our <span class="math-container">$\varphi$</span> but by a variation on it, in which <span class="math-container">$w_1$</span> and <span class="math-container">$w_2$</span> are also free.) And, given this, we can derive <span class="math-container">$$\forall u \exists y \forall z(z \in y \leftrightarrow (z \in u \land (z = x \lor z = y))) \tag{1}$$</span> which tells us that for any set <span class="math-container">$u$</span> there is a <span class="math-container">$y \subset u$</span> such that all elements of <span class="math-container">$y$</span> satisfy <span class="math-container">$\varphi$</span>. </p> <p>Now your informal proof is basically "Given (1), I can turn the set that Pair gives me into the set Pair# asserts exists". It's a fine informal proof. You ask how to turn it into a formal one. I'll walk you through my own thought process and, at the end, note what generalizes particularly well.</p> <p>First, we try and write down the central idea. Saying we can turn the set Pair gives us into the set whose existence Pair# asserts is saying that we can turn <em>any</em> set containing <span class="math-container">$x$</span> and <span class="math-container">$y$</span> into a set containing <em>only</em> <span class="math-container">$x$</span> and <span class="math-container">$y$</span>. It is to say that <span class="math-container">$$\forall p(x \in p \land y \in p \rightarrow \exists p' \forall u(u \in p' \leftrightarrow (u = x \lor u = y))) \tag{2}$$</span>Let's imagine we have a proof of (2). Then we can use the statement <span class="math-container">$$\forall p (\varphi \to \psi) \leftrightarrow (\exists p \varphi \to \psi) \tag{3}$$</span> which holds whenever <span class="math-container">$p$</span> is not free in <span class="math-container">$\psi$</span>, and throw some quantifiers at the formula we get, to ultimately arrive at <span class="math-container">$$\forall x \forall y \exists p(x \in p \land y \in p) \to \forall x \forall y \exists p \forall u(u \in p \leftrightarrow (u = x \lor u = y))$$</span>or in other words "Pair <span class="math-container">$\to$</span> Pair#". 
So if we can prove (2), we're done. How do we prove (2)? In the informal proof you got it from (1), which we know we can get from Cmpr. We can also get it from (1) in a formal proof. For, using the statement <span class="math-container">$$\forall x \psi \rightarrow \forall x(\varphi \to \psi) \tag{4}$$</span> we can from (1) arrive at <span class="math-container">$$\forall p(x \in p \land y \in p \rightarrow \exists y \forall z(z \in y \leftrightarrow z \in p \land (z = x \lor z = y))) \tag{5}$$</span>and from (5), we can arrive at (2). I leave this part (and writing down the full proof, which will probably involve proving (3) and (4)) to you. (It might help you to note that in axiomatic systems the best way to prove <span class="math-container">$\exists x \varphi \to \exists x \psi$</span> is usually to prove <span class="math-container">$\forall x \neg \psi \rightarrow \forall x \neg \varphi$</span> and contrapose, and that in general formal proofs use contraposition all the time.)</p> <p>What in this thought process is generally applicable? Write down the statements your informal proof uses; write down your axioms; use statements like (3) and (4) which hold generally to connect the latter to the former. At the end of this process, you have the skeleton of a proof, and you just need to fill in the blanks.</p>
651,174
<p>I've got a complex equation with 4 roots that I am solving. In my calculations it seems like I am going through hell and back to find these roots (and I'm not even sure I am doing it right) but if I let a computer calculate it, it just seems like it finds the form and then multiplies by $i$ and negative $i$. Have a look: <a href="http://www.wolframalpha.com/input/?i=%288*sqrt%283%29%29/%28z%5E4%2b8%29=i" rel="nofollow noreferrer">http://www.wolframalpha.com/input/?i=%288*sqrt%283%29%29%2F%28z%5E4%2B8%29%3Di</a></p> <p>Here's me going bald: <img src="https://i.stack.imgur.com/oFE1P.jpg" alt="enter image description here"></p>
Claude Leibovici
82,404
<p><strong>Hint</strong> </p> <p>Click the "Approximate Forms" button and think about what the numbers could be !</p>
2,623,544
<p>I have to diagonolize a matrix A \begin{bmatrix}0&amp;-3&amp;-1&amp;1\\2&amp;5&amp;1&amp;-1\\-2&amp;-3&amp;1&amp;1\\2&amp;3&amp;1&amp;1\end{bmatrix}</p> <hr> <p>I do $det(A-λ)=0$ and I get $λ_{1}=1$, $λ_{2}=2$, $λ_{3}=2$, $λ_{4}=2$</p> <p>So possible diagonal matrix looks like: \begin{bmatrix}1&amp;0&amp;0&amp;0\\0&amp;2&amp;0&amp;0\\0&amp;0&amp;2&amp;0\\0&amp;0&amp;0&amp;2\end{bmatrix}</p> <hr> <p>Then I look for eigenvectors for $λ=1$ and $λ=2$</p> <p>I get:</p> <p>For $λ=1$: $(1, -1, 1,-1)$ </p> <p>For $λ=2$: $(1, 0, 0, 2)$, $(0, 1, 0, 3)$, $(0, 0, 1, 1)$ </p> <p>I create matrix P made of eigenvectors (as columns):</p> <p>\begin{bmatrix}1&amp;1&amp;0&amp;0\\-1&amp;0&amp;1&amp;0\\1&amp;0&amp;0&amp;1\\-1&amp;2&amp;3&amp;1\end{bmatrix}</p> <hr> <p>And when checking if the diagonal matrix is correct by using formula: </p> <p>$A=P^{-1}*D*P$</p> <p>I don't get the correct answer :(</p> <p>Can someone check my steps? I tried to do that a few times but still I am not getting the correct result.</p>
Konstantin
509,087
<p>Your computations are totally correct, but you made a tiny mistake in the order of the decomposition of $A$. It has to be $A = PDP^{-1}$.</p> <p>To remind yourself of the order, think of the case where $A$ is symmetric: there one can choose an orthogonal $P$, and you immediately see that the rightmost matrix ($P^{-1}=P^\top$) contains the eigenvectors as rows, while the left matrix $P$ contains them as columns.</p>
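<p>A quick NumPy check, using the matrix and eigenvectors from the question, that $A=PDP^{-1}$ holds while the reversed order does not:</p>
<pre><code>
import numpy as np

A = np.array([[ 0, -3, -1,  1],
              [ 2,  5,  1, -1],
              [-2, -3,  1,  1],
              [ 2,  3,  1,  1]], dtype=float)
P = np.array([[ 1, 1, 0, 0],
              [-1, 0, 1, 0],
              [ 1, 0, 0, 1],
              [-1, 2, 3, 1]], dtype=float)
D = np.diag([1., 2., 2., 2.])

Pinv = np.linalg.inv(P)
print(np.allclose(P @ D @ Pinv, A))     # True
print(np.allclose(Pinv @ D @ P, A))     # False -- the order matters
</code></pre>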
2,249,929
<p>Is there a more general form for the answer to this <a href="https://math.stackexchange.com/q/1314460/438622">question</a> where a random number within any range can be generated from a source with any range, while preserving uniform distribution? </p> <p><a href="https://stackoverflow.com/q/137783/866502">This</a> question for example looks familiar and is changing a range of 1-5 to 1-7</p>
hardmath
3,111
<p>A general strategy for obtaining a uniform discrete distribution on $1$ to $N$ using an $M$-sided die is to find the least power $r$ such that $M^r\ge N$, then roll the die $r$ times. Assign radix $M$ place-values $0$ to $M-1$ to each face of the die, and compute the radix $M$ value:</p> <p>$$ m_r m_{r-1} \ldots m_1 $$</p> <p>using the place-values generated by the rolls. Add $1$ to the result, and if it is not more than $N$, accept that as the answer. Otherwise repeat the process until eventually a value between $1$ and $N$ is obtained.</p>
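<p>A Python sketch of this strategy, here instantiated for an illustrative case of simulating $1$ to $N=7$ from an $M=5$-sided die (so $r=2$ rolls per attempt):</p>
<pre><code>
import random
from collections import Counter

def uniform_from_die(N, M):
    """Return a uniform value in 1..N using rolls of a fair M-sided die."""
    r = 1
    while M ** r &lt; N:                     # least r with M^r at least N
        r += 1
    while True:
        value = 0
        for _ in range(r):
            value = value * M + random.randrange(M)   # radix-M digit from one roll
        if value &lt; N:                     # accept 0..N-1, otherwise re-roll
            return value + 1

print(Counter(uniform_from_die(7, 5) for _ in range(70000)))  # roughly 10000 of each value 1..7
</code></pre>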
1,955,509
<p>There's this exercise in Hubbard's book:</p> <blockquote> <p>Let $ h:\Bbb R \to \Bbb R $ be a $C^1$ function, periodic of period $2\pi$, and define the function $ f:\Bbb R^2 \to \Bbb R $ by $$f\begin{pmatrix}r\cos\theta\\r\sin\theta \end{pmatrix}=rh(\theta)$$</p> <p>a. Show that $f$ is a continuous real-valued function on $\Bbb R^2$.</p> <p>b. Show that $f$ is differentiable on $\Bbb R^2 - \{\mathbf 0\}$.</p> <p>c. Show that all directional derivatives of $f$ exist at $\mathbf 0$ if and only if</p> <p>$$ h(\theta) = -h(\theta + \pi) \ \text{ for all } \theta $$</p> <p>d. Show that $f$ is differentiable at $ \mathbf 0 $ if an only if $h(\theta)=a \cos \theta + b \sin \theta$ for some number $a$ and $b$. </p> </blockquote> <p>I can't find how to prove $ f $ is continuous, I tried to prove $$ \lim_{\begin{pmatrix}r\cos\theta\\r\sin\theta \end{pmatrix} \to \begin{pmatrix}s\cos\phi\\s\sin\phi \end{pmatrix}} f\begin{pmatrix}r\cos\theta\\r\sin\theta \end{pmatrix}=s\ h(\phi) $$ for all $s$ and $\phi$. But I can't do much else.</p>
user246336
246,336
<p>Here is another approach. Let us define the function </p> <p>$$\phi(x):=\sum_{n=0}^\infty (n+1)(n+2) x^n.$$</p> <p>If we differentiate we obtain</p> <p>$$\phi'(x):=\sum_{n=0}^\infty n(n+1)(n+2) x^{n-1}=\sum_{n=0}^\infty(n+1)(n+2)(n+3)x^n.$$</p> <p>Now, </p> <p>$$\phi'(x)-3\phi(x)=\sum_{n=0}^\infty n(n+1)(n+2)x^{n}=\sum_{n=0}^\infty(n+1)(n+2)(n+3)x^{n+1}=x\phi'(x)$$</p> <p>so</p> <p>$$\phi'(x)(x-1)+3\phi(x)=0.$$</p> <p>An elementary integration (with the initial condition $\phi(0)=2$) yields</p> <p>$$\phi(x)=\frac2{(1-x)^3}.$$</p> <p>Thus, the value you are seeking is $\phi(1/4)=128/27$.</p>
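<p>As a quick numerical cross-check (a throwaway Python snippet, nothing more), the partial sums of $\sum_{n\ge 0}(n+1)(n+2)/4^n$ do settle at $128/27$:</p>
<pre><code>from fractions import Fraction

s = Fraction(0)
for n in range(60):                      # partial sums of sum (n+1)(n+2)/4^n
    s += Fraction((n + 1) * (n + 2), 4 ** n)
print(float(s), 128 / 27)                # both are about 4.740740...
</code></pre>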
98,932
<p>Suppose I would like to use a method for data prediction, and that I have some empirical data (i.e., a sequence of samples of the form [time, value]). Would it be possible to know in advance, based on the data only, whether it makes sense to use a model for prediction (based on the samples, used as a training set)? </p> <p>I am asking this for the following simple reason: it is possible that the sample data is totally random and that there is no correlation at all between the samples. Hence, I would like to avoid trying to find correlations where there are none.</p>
Community
-1
<p>Why don't you omit a few data points, when working out the parameters of your model? Then see if the model correctly predicts the data points that you omitted. You'll need to decide ahead of time how accurately the model needs to predict the missing data point, in other words, what constitutes "success" of the model.</p>
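<p>As a very rough sketch of what that could look like (Python, with a stand-in least-squares line as the model — substitute whatever model you actually intend to use):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
t = np.arange(50, dtype=float)
y = 2.0 * t + rng.normal(0, 5, size=t.shape)   # replace with your [time, value] samples

train, test = slice(0, 40), slice(40, 50)       # omit the last 10 points from the fit
coeffs = np.polyfit(t[train], y[train], deg=1)  # the "model": a least-squares line
pred = np.polyval(coeffs, t[test])

rmse = np.sqrt(np.mean((pred - y[test]) ** 2))
print("hold-out RMSE:", rmse)   # decide beforehand what error level counts as success
</code></pre>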
3,409,342
<p>Let <span class="math-container">$X$</span> be a set containing <span class="math-container">$A$</span>.</p> <p>Proof: <span class="math-container">$y\in A \cup (X \setminus A) \Rightarrow y\in A$</span> or <span class="math-container">$y \in (X \setminus A)$</span></p> <p>If <span class="math-container">$y \in A$</span>, then <span class="math-container">$y \in X$</span> because <span class="math-container">$A \subset X$</span>. </p> <p>If <span class="math-container">$y \in (X \setminus A)$</span>, then <span class="math-container">$y \in X$</span> and <span class="math-container">$y \notin A$</span>. So <span class="math-container">$y \in X$</span>.</p> <p>Therefore <span class="math-container">$y \in A \cup (X \setminus A) \Rightarrow y \in X$</span>.</p> <p>Now I've proved that every element of <span class="math-container">$A \cup (X \setminus A)$</span> is also an element of <span class="math-container">$X$</span>. But how do I prove the converse?</p>
PrincessEev
597,568
<p>The reverse implication is simple enough.</p> <p>If <span class="math-container">$y \in X$</span>, then either <span class="math-container">$y \in A$</span> or <span class="math-container">$y \not \in A$</span>, since <span class="math-container">$A$</span> is a subset of <span class="math-container">$X$</span>.</p> <p>If <span class="math-container">$y \in A$</span>, then <span class="math-container">$y \in X$</span> since <span class="math-container">$A \subseteq X$</span> is given.</p> <p>If <span class="math-container">$y \not \in A$</span>, since <span class="math-container">$y \in X$</span> is also given, then <span class="math-container">$y \in X - A$</span>.</p> <p>Therefore, in either case, <span class="math-container">$y \in A \cup (X-A)$</span>, the desired conclusion.</p>
1,361,948
<p>$\frac{\partial f}{\partial x} = 2xe^{y^2-x^2}(1-x^2-y^2) = 0.$</p> <p>$\frac{\partial f}{\partial y} = 2ye^{y^2-x^2}(1+x^2+y^2) = 0.$</p> <p>So, $2xe^{y^2-x^2}(1-x^2-y^2) = 2ye^{y^2-x^2}(1+x^2+y^2)$.</p> <p>$x(1-x^2-y^2) = y(1+x^2+y^2)$</p> <p>$x-x^3-xy^2 = y + x^2y + y^3$</p> <p>Is guessing the values of the variables the only way of solving this? With $y = 0$ I can already figure out that $x = 0$, $1$ or $-1$, but it's bothering me that I had to randomly guess that. </p>
Community
-1
<p>For $t/\delta=\pi$ and ignoring the inessential $\sigma\delta$,</p> <p>$$z=\frac{1+i}{1+e^{-\pi}}$$</p> <p>and</p> <p>$$\Re(z)=\frac1{1+e^{-\pi}}.$$</p> <p>This doesn't match</p> <p>$$\frac1{1-e^{-\pi}}.$$</p> <p>So, no.</p>
3,689,051
<p>Let <span class="math-container">$(X,d_{X})$</span> be a metric space, and let <span class="math-container">$(Y,d_{Y})$</span> be another metric space. Let <span class="math-container">$f:X\to Y$</span> be a function. Then the following two statements are logically equivalent:</p> <p>(a) <span class="math-container">$f$</span> is continuous.</p> <p>(b) Whenever <span class="math-container">$V$</span> is an open set in <span class="math-container">$Y$</span>, the set <span class="math-container">$f^{-1}(V) = \{x\in X: f(x)\in V\}$</span> is an open set in <span class="math-container">$X$</span>.</p> <p>I know this problem is pretty standard, but I am not able to prove any of the two directions.</p> <p>Since I am studying real analysis at the moment (metric spaces, in fact), could someone provide a proof or at least a hint as how to prove it? It is not homework. Any comment or contributions are welcome.</p>
Steven Lu
789,699
<p>Suppose <span class="math-container">$f$</span> is continuous on <span class="math-container">$X$</span>. If <span class="math-container">$V$</span> is open in <span class="math-container">$Y$</span>, then <span class="math-container">$f^{-1}(V)$</span> is a subset of <span class="math-container">$X$</span>. Given a point <span class="math-container">$x \in f^{-1}(V)$</span>, <span class="math-container">$f(x) \in V$</span>. By continuity of <span class="math-container">$f$</span>, there is a neighborhood of <span class="math-container">$x$</span>, say <span class="math-container">$U$</span>, such that <span class="math-container">$f(U) \subseteq V$</span>, since <span class="math-container">$V$</span> is a neighborhood of <span class="math-container">$f(x)$</span>. We get <span class="math-container">$x\in U \subseteq f^{-1}(f(U))\subseteq f^{-1}(V)$</span>. Since <span class="math-container">$x\in f^{-1}(V)$</span> is arbitrary, <span class="math-container">$f^{-1}(V)$</span> is open in <span class="math-container">$X$</span>.</p> <p>Conversely, suppose the preimage of every open set is open. Given <span class="math-container">$p\in X$</span> and <span class="math-container">$\varepsilon &gt;0$</span>, the ball <span class="math-container">$B(f(p),\varepsilon) \subseteq Y$</span> is open. So <span class="math-container">$f^{-1}(B(f(p),\varepsilon))$</span> is open in <span class="math-container">$X$</span> and <span class="math-container">$p \in f^{-1}(B(f(p),\varepsilon))$</span>. Hence there is a positive number <span class="math-container">$\delta$</span> such that <span class="math-container">$B(p,\delta) \subseteq f^{-1}(B(f(p),\varepsilon))$</span>. We get <span class="math-container">$$f(B(p,\delta)) \subseteq f(f^{-1}(B(f(p),\varepsilon))) \subseteq B(f(p),\varepsilon).$$</span> Since such <span class="math-container">$p$</span> is arbitrary, <span class="math-container">$f$</span> is continuous on <span class="math-container">$X$</span>.</p>
492,392
<p>I found this math test very interesting. I would like to know how the answer is calculated.</p>
Brian M. Scott
12,042
<p>Since $49a+49b=6272$, we can divide by $49$ to find that $a+b=128$. By definition the average of $a$ and $b$ is $\frac12(a+b)$, which is $\frac12\cdot128=64$. (Of course you can do this all at once by dividing by $2\cdot49$, but the logic is probably a little clearer the way I did it.)</p>
4,095,715
<p>I know how to do these in a very tedious way using a binomial distribution, but is there a clever way to solve this without doing 31 binomial coefficients (with some equivalents)?</p>
epi163sqrt
132,007
<p>Your approach is also fine. In the following it is convenient to denote with <span class="math-container">$[x^k]$</span> the coefficient of <span class="math-container">$x^k$</span> in a series.</p> <blockquote> <p>We obtain <span class="math-container">\begin{align*} \color{blue}{[x^{12}]}&amp;\color{blue}{\left(\frac{1-x^8}{1-x^2}\right)^n}\\ &amp;=[x^{12}](1-x^8)^n\sum_{j=0}^{\infty}\binom{-n}{j}\left(-x^2\right)^j\tag{1}\\ &amp;=[x^{12}]\left(1-\binom{n}{1}x^8\right)\sum_{j=0}^{\infty}\binom{n+j-1}{j}x^{2j}\tag{2}\\ &amp;=\left([x^{12}]-n[x^4]\right)\sum_{j=0}^{\infty}\binom{n+j-1}{j}x^{2j}\tag{3}\\ &amp;\,\,\color{blue}{=\binom{n+5}{6}-n\binom{n+1}{2}}\tag{4} \end{align*}</span></p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>In (1) we use the <em><a href="https://en.wikipedia.org/wiki/Binomial_series" rel="nofollow noreferrer">binomial series expansion</a></em>.</p> </li> <li><p>In (2) we expand <span class="math-container">$(1-x^8)^n$</span> up to terms of <span class="math-container">$x^8$</span>, since other terms do not contribute. We also use the binomial identity <span class="math-container">$\binom{-p}{q}=\binom{p+q-1}{q}(-1)^q$</span>.</p> </li> <li><p>In (3) we apply the rule <span class="math-container">$[x^p]x^qA(x)=[x^{p-q}]A(x)$</span>.</p> </li> <li><p>In (4) we select the coefficients of <span class="math-container">$x^k$</span> accordingly.</p> </li> </ul>
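<p>For what it's worth, a brute-force expansion (here a small Python/SymPy sketch, using the fact that $(1-x^8)/(1-x^2)=1+x^2+x^4+x^6$) agrees with the closed form for small $n$:</p>
<pre><code>from sympy import symbols, binomial, expand

x = symbols('x')
for n in range(1, 9):
    brute = expand((1 + x**2 + x**4 + x**6) ** n).coeff(x, 12)
    closed = binomial(n + 5, 6) - n * binomial(n + 1, 2)
    print(n, brute, closed, brute == closed)     # the last column is always True
</code></pre>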
1,930,722
<p>$$\lim_{x\to 0}\frac{x}{x+x^2e^{i\frac{1}{x}}} = 1$$</p> <p>I've been trying to see it by writing:</p> <p>$$e^{\frac{1}{x^2}i} = 1+\frac{i}{x^2}-\frac{1}{x^42!}-i\frac{1}{x^63!}+\frac{1}{x^84!}+\cdots\implies$$</p> <p>$$x^2e^{\frac{1}{x^2}i} = x^2+\frac{i}{1}-\frac{1}{x^22!}-i\frac{1}{x^43!}+\frac{1}{x^64!}+\cdots$$</p> <p>but I couldn't find a way to prove that the limit goes to $1$.</p>
DonAntonio
31,254
<p>First some simplification and then assuming $\;x\in\Bbb R\;$:</p> <p>$$\frac x{x+x^2e^{i/x}}=\frac1{1+xe^{i/x}}=\frac1{1+x\left(\cos\frac1x+i\sin\frac1x\right)}\xrightarrow[x\to0]{}\frac1{1+0}=1$$</p> <p>since </p> <p>$$\left|e^{i/x}\right|=\left|\cos\frac1x+i\sin\frac1x\right|\le1\;\;\;\text{and this is then a bounded expression}$$</p>
33,311
<p>I'm (very!) new to <em>Mathematica</em>, and trying to use it plot a set in $\mathbb{R}^3$. In particular, I want to plot the set</p> <p>$$ \big\{ (x,y,z) : (x,y,z) = \frac{1}{u^2+v^2+w^2}(vw,uw,uv) \text{ for some } (u,v,w) \in \mathbb{R}^3\setminus\{0\} \big\}. $$ This is just the image of function from $\mathbb{R}^3\setminus\{0\} \to \mathbb{R}^3$. I haven't been able to make any of the plot functions display this. Any advice would be greatly appreciated.</p> <p>If it makes it easier, the set above is also $$ \big\{ (x,y,z) : (x,y,z) = \frac{1}{u^2+v^2+w^2}(vw,uw,uv) \text{ for some } (u,v,w) \in \mathbb{S}^2 \big\}. $$ Alternatively, one could plot the three sets $$ \big\{ (x,y,z) : (x,y,z) = \frac{1}{1+v^2+w^2}(vw,w,v) \text{ for some } (v,w) \in \mathbb{R}^2 \big\}, $$ $$ \big\{ (x,y,z) : (x,y,z) = \frac{1}{1 +u^2+w^2}(w,uw,u) \text{ for some } (u,w) \in \mathbb{R}^2 \big\}, $$ $$ \big\{ (x,y,z) : (x,y,z) = \frac{1}{1+u^2+v^2}(v,u,uv) \text{ for some } (u,v) \in \mathbb{R}^2 \big\}. $$</p>
Michael E2
4,999
<p>Here's a fairly simple way to fix your <code>Manipulate</code> by applying <code>Dynamic</code> to <code>ListPlot</code>.</p> <pre><code>Manipulate[
 (* Beep[]; *)
 data = function @ Range[-Pi*10., Pi*10, Pi/1000];
 Dynamic @ ListPlot[data, PlotRange -&gt; {{start, stop}, Automatic}],
 {function, {Sin, Cos, Tan}},
 {start, 1, Length[data]},
 {{stop, 300}, 1, Length[data]},
 {data, ControlType -&gt; None}]
</code></pre> <p><img src="https://i.stack.imgur.com/f9Lii.png" alt="Mathematica graphics"></p> <p>Uncomment <code>Beep[]</code> to hear when <code>data</code> is reevaluated.</p> <p>There are several questions on this site whose answers discuss using <code>Dynamic</code> for such a purpose. This one is more general than most: <a href="https://mathematica.stackexchange.com/questions/21625/using-refresh-with-trackedsymbols">Using Refresh[..] with TrackedSymbols</a></p>
313,298
<p>For example, let's say that I have some sequence $$\left\{c_n\right\} = \left\{\frac{n^2 + 10}{2n^2}\right\}$$ How can I prove that $\{c_n\}$ approaches $\frac{1}{2}$ as $n\rightarrow\infty$?</p> <p>I'm using the Buchanan textbook, but I'm not understanding their proofs at all.</p>
Alex Becker
8,173
<p>Well we want to show that for any $\epsilon&gt;0$, there is some $N\in\mathbb N$ such that for all $n&gt;N$ we have $|c_n-1/2|&lt;\epsilon$ (this is the definition of a limit). In this case we are looking for a natural number $N$ such that if $n&gt;N$ then $$\left|\frac{n^2+10}{2n^2}-\frac{1}{2}\right|=\frac{5}{n^2}&lt;\epsilon$$ We can make use of what's called the <em>Archimedean property</em>, which is that for any real number $x$ there is a natural number larger than it. To do so, note that the above equation is equivalent to $\frac{n^2}{5}&gt;\frac{1}{\epsilon}$, or $n^2&gt;\frac{5}{\epsilon}$. If we choose $N$ to be a natural number greater than $\frac{5}{\epsilon}$, then if $n&gt;N$ we have $n^2&gt;N&gt;\frac{5}{\epsilon}$ as desired. Thus $\lim\limits_{n\to\infty} c_n = \frac12$.</p> <p>To relate this to the definition of limit in Buchanan: You want to show that for any neighborhood of $1/2$, there is some $N\in\mathbb N$ such that if $n&gt;N$ then $c_n$ is in the neighborhood. Now note that any neighborhood contains an <em>open interval</em> around $1/2$, which takes the form $(1/2-\epsilon,1/2+\epsilon)$. Saying that $c_n$ is in this open interval is the same as saying that $|c_n-1/2|&lt;\epsilon$.</p>
730,253
<p>Let $x,y$ be such that $$2^x+5^y=2^y+5^x=\dfrac{7}{10}.$$</p> <p>Prove or disprove that $x=y=-1$ is the only solution of the system.</p> <p>My try: we have $$2^x-2^y=5^x-5^y.$$</p> <p>But how can I prove or disprove that $x=y$?</p>
barak manos
131,263
<p><strong>Here is my idea of proving it:</strong></p> <ol> <li><p>For $x=y=-1$, you can verify that $2^{-1}+5^{-1}=0.7$</p></li> <li><p>In order to refute any other possible solution where $x=y$:</p> <ul> <li>You can prove that $f(x)=2^x+5^x$ is monotonously increasing</li> <li>Do it by showing that $f'(x)=2^x\ln2+5^x\ln5$ is always positive</li> </ul></li> <li><p>In order to refute any other possible solution where $x \neq y$:</p> <ul> <li>You have the following two equations: <ul> <li>$5^y=0.7-2^x$</li> <li>$2^y=0.7-5^x$</li> </ul></li> <li>Write down each equation as a simple function: <ul> <li>$y=\log_5(0.7-2^x)$</li> <li>$y=\log_2(0.7-5^x)$</li> </ul></li> <li>Prove that if $x \neq -1$, then the functions are not equal: <ul> <li>Prove that if $x&gt;-1$, then $\log_2(0.7-5^x)&gt;\log_5(0.7-2^x)$</li> <li>Prove that if $x&lt;-1$, then $\log_2(0.7-5^x)&lt;\log_5(0.7-2^x)$</li> </ul></li> </ul></li> </ol> <p><strong>Here is the graph of both functions, intersecting at $(-1,-1)$</strong>:</p> <p><img src="https://i.stack.imgur.com/BZpaZ.png" alt="enter image description here"></p>
730,253
<p>Let $x,y$ be such that $$2^x+5^y=2^y+5^x=\dfrac{7}{10}.$$</p> <p>Prove or disprove that $x=y=-1$ is the only solution of the system.</p> <p>My try: we have $$2^x-2^y=5^x-5^y.$$</p> <p>But how can I prove or disprove that $x=y$?</p>
Christian Blatter
1,303
<p>Write $x=u-v$, $\&gt;y=u+v$ and put $r:={5\over2}$. Then we have to solve the equations $$2^{u-v}+5^{u+v}=5^{u-v}+2^{u+v}={7\over10}\ .$$ Dividing the first equation by $2^u$ we obtain $$(s:=)\qquad 2^{-v} +r^u 5^v = r^u 5^{-v}+ 2^v\ ,$$ or $$r^u(5^v-5^{-v})=2^v-2^{-v}\ .\tag{1}$$ Equation $(1)$ is obviously fulfilled when $v=0$ and $u$ is arbitrary. This corresponds to an arbitrary choice of $(x,y)$ on the line $x=y$ and leads together with the remaining equation to $x=y=-1$.</p> <p>But this is not all: For given $v\ne0$ equation $(1)$ determines a unique $u\in{\mathbb R}$ by means of $$r^u={2^v-2^{-v}\over 5^v-5^{-v}}\ ,\tag{2}$$ and for this value of $u$ (an even function of $v$) we then get $$s={1\over2}\bigl(2^v+2^{-v}+r^u(5^v+5^{-v})\bigr)={10^v-10^{-v}\over 5^v-5^{-v}}\ .$$ With the help of $(2)$ it follows that $$2^u s=\left({2^v-2^{-v}\over 5^v-5^{-v}}\right)^{\!\log 2/\log r}\ {10^v-10^{-v}\over 5^v-5^{-v}}=: f(v)\ .$$</p> <p><img src="https://i.stack.imgur.com/gYeNW.jpg" alt="enter image description here"></p> <p>Plotting $f(v)$ one finds that it is minimal at $v=0$ and assumes the (limiting) value $0.756463$ there, which is $&gt;{7\over10}$. It follows that there are no solutions of the original problem with $v\ne0$.</p>
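<p>The limiting value quoted above is easy to reproduce numerically (a quick Python check, nothing essential to the argument):</p>
<pre><code>import numpy as np

def f(v):
    r = 2.5
    base = (2**v - 2**(-v)) / (5**v - 5**(-v))
    return base ** (np.log(2) / np.log(r)) * (10**v - 10**(-v)) / (5**v - 5**(-v))

for v in [1.0, 0.5, 0.1, 0.01, 0.001]:
    print(v, f(v))                     # the values decrease towards the limit below

limit = (np.log(2) / np.log(5)) ** (np.log(2) / np.log(2.5)) * np.log(10) / np.log(5)
print(limit)                           # about 0.756463, indeed larger than 7/10
</code></pre>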
3,561,860
<p>Given <span class="math-container">$x_i, x_j \in [0,1]$</span>, and a payoff function <span class="math-container">$u_i(x_i, x_j) = (\theta_i + 3x_j - 4x_i)x_i$</span> if <span class="math-container">$x_j &lt; 2/3$</span>, and <span class="math-container">$= (3x_j-2)x_i$</span> if <span class="math-container">$x_j \geq 2/3$</span>. My question is I don't understand why we have the best-response function <span class="math-container">$b_i(x_j) = {1}$</span> when <span class="math-container">$x_j &gt; 2/3$</span>? </p> <p><strong>My argument.</strong> For <span class="math-container">$x_j &lt; 2/3$</span>, <span class="math-container">$\partial(u_i)/\partial(x_i) = \theta_i + 3x_j - 8x_i = 0 \iff x_i = (3x_j + \theta_i)/8$</span>. For <span class="math-container">$x_j \geq 2/3$</span>, <span class="math-container">$\partial(u_i)/\partial(x_i) = 3x_j - 2 = 0 \iff x_j = 2/3$</span> for every <span class="math-container">$x_i \in [0,1]$</span>. </p> <p>Shouldn't this imply player i's pure best response function is: <span class="math-container">$b_i(x_j) = [0, 1]$</span> if <span class="math-container">$ x_j \geq 2/3$</span>, <span class="math-container">$=(\theta_i + 3x_j)/8$</span> if <span class="math-container">$x_j &lt; 2/3$</span>?</p>
marty cohen
13,079
<p>Since they are each alternating series and eventually the terms are decreasing you can use the alternating series rule on each one. If you make your error criteria on each series to be half the desired error, then the overall error when you combine the two will be what you want.</p>
351,534
<p>It was already known to Weil that a sufficiently reasonable cohomology theory for algebraic varieties over <span class="math-container">$\mathbb{F}_p$</span> would allow for a possible solution to the Weil conjectures. </p> <p>It was also understood that such a cohomology theory could not take values in vector spaces over either the rational numbers <span class="math-container">$\mathbb{Q}$</span> or the <span class="math-container">$p$</span>-adic numbers <span class="math-container">$\mathbb{Q}_p$</span>. The classical argument that I know is that the cohomology of a supersingular elliptic curve should admit an action by a quaternion algebra structure of the endomorphism group of the elliptic curve, and representation theory rules out such an action. (Side question: Due to whom is this reasoning, and was it known to Weil's generation?)</p> <p>This reasoning does not, however, rule out the possibility of cohomology with coefficients in <span class="math-container">$\mathbb{Q}_\ell$</span> with <span class="math-container">$\ell \neq p$</span>, and indeed <span class="math-container">$\ell$</span>-adic cohomology would end up being found. </p> <p><strong>Question.</strong> Were there any other historical reasons to believe that looking at cohomology valued in <span class="math-container">$\mathbb{Q}_\ell$</span>-vector spaces would lead to something fruitful?</p> <p>N.B. This question was posted on MathSE, but I got advised to post it here instead.</p>
R. van Dobben de Bruyn
82,179
<p>I think an important motivation for the <span class="math-container">$\ell$</span>-adic theory comes from the Riemann existence theorem/the Grauert–Remmert theorem. This says that a finite (topological) covering space <span class="math-container">$Y \to X$</span> of a normal complex variety can again be equipped with an algebraic structure, which is more or less what you need to prove to obtain <span class="math-container">$$\pi_1^{\operatorname{\acute et}}(X) = \widehat{\pi_1^{\operatorname{top}}(X(\mathbf C))}.$$</span> So finite covering spaces can be detected using the étale topology (except it's not really a topology, but that's not stopping Grothendieck!).</p> <p>In particular, you expect to get a good theory of étale cohomology with <em>finite</em> coefficients. But to run the arguments that Weil was dreaming of, you need characteristic <span class="math-container">$0$</span> coefficients, so what do you do? Just take a limit!</p> <p>I think that's really the explanation of why the adic formalism enters the picture. But it doesn't quite explain why it's different at the prime <span class="math-container">$p$</span>. There are a few ways to look at this:</p> <ul> <li>Serre's argument shows that a <span class="math-container">$\mathbf Q_p$</span>-valued Weil cohomology theory cannot exist (over any ground field <span class="math-container">$k$</span> containing <span class="math-container">$\mathbf F_{p^2}$</span>; so in particular for <span class="math-container">$k = \bar{\mathbf F}_p$</span>).</li> <li>The basic results on <span class="math-container">$\ell$</span>-adic étale cohomology take the Galois cohomology of function fields of curves over algebraically closed fields as a starting point, and these behave differently at the prime <span class="math-container">$p$</span>.</li> <li>Already for elliptic curves, the <span class="math-container">$p$</span>-adic Tate module behaves a little different from the <span class="math-container">$\ell$</span>-adic one.</li> </ul> <p>In the end it doesn't really matter, because all they needed was <em>one</em> Weil cohomology theory.</p>
3,565,727
<blockquote> <p>Show that the antiderivatives of <span class="math-container">$x \mapsto e^{-x^2}$</span> are uniformly continuous in <span class="math-container">$\mathbb{R}$</span>.</p> </blockquote> <p>So we know that for a function to be uniformly continuous, for every <span class="math-container">$\varepsilon &gt; 0$</span> there has to exist a <span class="math-container">$\delta &gt; 0$</span> such that <span class="math-container">$|x-y| &lt; \delta$</span> implies <span class="math-container">$|f(x)-f(y)| &lt; \varepsilon$</span>.</p> <p>But how do we go about this, since we cannot really integrate <span class="math-container">$e^{-x^2}$</span> in closed form and we need the antiderivative?</p>
the_candyman
51,370
<p>Let <span class="math-container">$F(x) = \int_{-\infty}^{x} e^{-t^2}dt$</span> be an antiderivative of <span class="math-container">$e^{-x^2}$</span> (any other antiderivative differs from <span class="math-container">$F$</span> by a constant, so the same argument applies). </p> <p>Without loss of generality (indeed, the problem is symmetric if we exchange <span class="math-container">$x$</span> and <span class="math-container">$y$</span>), let's assume <span class="math-container">$y &gt; x$</span> and consider the following:</p> <p><span class="math-container">$$|F(y) - F(x)| = \left|\int_{-\infty}^{y} e^{-t^2}dt - \int_{-\infty}^{x} e^{-t^2}dt\right| = \left|\int_{x}^{y} e^{-t^2}dt\right| = \int_{x}^{y} e^{-t^2}dt.$$</span></p> <p>Since <span class="math-container">$e^{-t^2} \leq 1$</span>, then:</p> <p><span class="math-container">$$|F(y) - F(x)| = \int_{x}^{y} e^{-t^2}dt \leq \int_{x}^{y} dt = y-x.$$</span></p> <p>Now, for any <span class="math-container">$\varepsilon &gt; 0$</span>, choose <span class="math-container">$\delta = \varepsilon$</span>: if <span class="math-container">$|y-x| &lt; \delta$</span>, then <span class="math-container">$|F(y) - F(x)| \leq |y-x| &lt; \varepsilon.$</span></p> <p>Hence, the antiderivatives of <span class="math-container">$e^{-x^2}$</span> are uniformly continuous.</p>
85,117
<p>Let $\mathcal{A}$ and $\mathcal{B}$ be two $2$-categories and $F : \mathcal{A} \to \mathcal{B}$ be a lax $2$-functor. Given $1$-cells $(f_{i})_{0 \leq i \leq n}$ of $\mathcal{A}$ such that the composition $f_{n} \circ f_{n-1} \circ \cdots \circ f_{0}$ makes sense, this data together with the structural $2$-cells of $F$ give many paths of $2$-cells going from $F(f_{n}) \circ F(f_{n-1}) \circ \cdots \circ F(f_{0})$ to $F(f_{n} \circ f_{n-1} \circ \cdots \circ f_{0})$, for instance $$ F(f_{n}) \circ F(f_{n-1}) \circ \cdots \circ F(f_{0}) \Rightarrow F(f_{n} \circ f_{n-1}) \circ F(f_{n-2}) \circ \cdots \circ F(f_{0}) \Rightarrow \cdots $$ $$\Rightarrow F(f_{n} \circ f_{n-1} \circ \cdots \circ f_{0}) $$ and $$ F(f_{n}) \circ F(f_{n-1}) \circ \cdots \circ F(f_{0}) \Rightarrow F(f_{n}) \circ F(f_{n-1}) \circ \cdots \circ F(f_{1} \circ f_{0}) \Rightarrow \cdots $$ $$\Rightarrow F(f_{n} \circ f_{n-1} \circ \cdots \circ f_{0}) $$ which correspond to what one gets by "parenthesizing on the left" and "parenthesizing on the right" respectively. It seems to seem obvious that it follows from the definition of lax functor that the $C_{n}$ ways to parenthesize the left hand side all give the same $2$-cell $$ F(f_{n}) \circ F(f_{n-1}) \circ \cdots \circ F(f_{0}) \Rightarrow F(f_{n} \circ f_{n-1} \circ \cdots \circ f_{0}) $$ Since I need this property for a text I am writing, I would like to provide a reference. My question is the following:</p> <blockquote> <p>Where is this result rigorously stated, and where is it rigorously proved? Hopefully, the two references will be the same.</p> </blockquote> <p>Edit: I am aware that this result is "obvious". In addition, it is certainly classical, by which I mean that all the people working with lax functors use it routinely. However, if one wants to state it and prove it, the question arises as to what is the best way to state the result, which I think turns out <em>not</em> to be completely trivial. Furthermore, writing a rigorous proof certainly <em>does</em> require some work. I am sure there are some people here who have already used this result. How do they state it? To which reference do they point? Or is the reader assumed to find this fact so obvious that no one ever cares to provide a proof or a reference?</p>
Buschi Sergio
6,262
<p>For a composable couple of morphisms $g\circ f$, let</p> <p>$T(g, f): F(g)\circ F(f) \Rightarrow F(g\circ f)$ be the canonical cell.</p> <p>For a composable triple $h\circ g\circ f$, let $T_r(h, g, f):=T(T(h, g),f)$ (a short way to write $T(hg, f)\ast T(h, g)f$)</p> <p>and $T_l(h,g,f):= T(h,T(g, f))$; the coherence axiom for a lax functor says $T_r(h,g,f)=T_l(h,g,f)$.</p> <p>If we have $n$ composable morphisms $f_n\circ \ldots\circ f_1$, then for each of the various coherent (associativity) arrangements of parentheses (as in a not necessarily associative binary composition) we have a cell $F(f_n)\circ \ldots F(f_1) \to F(f_n\circ \ldots f_1)$, and we want to prove that all these cells are equal. For $n=3$ this is true. By induction we assume the assertion for $3,\ldots, n$ and denote by $T^k$ ($3\leq k\leq n$) the unique composition cell $F(f_k)\circ \ldots F(f_1) \to F(f_k\circ \ldots f_1)$. Let $T_{n+1}: F(f_{n+1})\circ \ldots F(f_1)\xrightarrow{1\circ T^n} F(f_{n+1})\circ F(f_n\circ \ldots f_1)\xrightarrow{T} F(f_{n+1}\circ f_n\circ \ldots f_1) $, and let $T': F(f_{n+1})\circ \ldots F(f_1)\to F(f_{n+1}\circ f_n\circ \ldots f_1) $ be the cell associated to some coherent arrangement of parentheses. Now, if $f_{n+1}$ is not inside a parenthesis (<em>we do not allow the total parenthesis around the whole string</em>), we have that $T'(f_{n+1}, \ldots f_1): F(f_{n+1})\circ F(f_n)\circ \ldots F(f_1)\xrightarrow{1\circ T} F(f_{n+1})\circ F(f_n\circ \ldots f_1)$ and this is $T_{n+1}$. If $f_{n+1}$ is inside a parenthesis, consider the maximal parenthesis containing $f_{n+1}$ and let $f_{n+1},\ldots f_{i+1}$ be the part of the string contained in this parenthesis; the case $i=n-1$ follows as above, </p> <p>and if $i&lt; n-1$ then </p> <p>$T'(f_{n+1}, \ldots f_1)= T(T^{n-i}(f_{n+1},\ldots f_i), T^{i}(f_i,\ldots f_1))=$</p> <p>$T(T((f_{n+1},T^{n-i-1}(f_n, \ldots f_i)), T^{i}(f_i,\ldots f_1))=$</p> <p>$T((f_{n+1},T^n(f_n, \ldots f_1))=$</p> <p>$T_{n+1}(f_{n+1}\circ \ldots f_1) $.</p> <p>This is just a translation of the classical theorem from ordinary algebra.</p>
3,770,391
<p>According to <span class="math-container">${\tt Mathematica}$</span>, the following integral converges if <span class="math-container">$\beta &lt; 1$</span>.</p> <p><span class="math-container">$$ \int_{0}^{1 - \beta}\mathrm{d}x_{1} \int_{1 -x_{\large 1}}^{1 - \beta}\mathrm{d}x_{2}\, \frac{x_{1}^{2} + x_{2}^{2}}{\left(1 - x_{1}\right)\left(1 - x_{2}\right)} $$</span></p> <p>How is this possible? For <span class="math-container">$x_{1} = 0$</span> the integration over <span class="math-container">$x_{2}$</span> hits <span class="math-container">$1$</span> on the boundary, so the denominator vanishes and hence the whole expression should diverge.</p> <p>How can this integral be convergent?</p>
Claude Leibovici
82,404
<p>Changing a little notations (and trying to show the steps), considering <span class="math-container">$$I=\int^{1-\beta}_{0}dx \int^{1-\beta}_{1-x} \frac{x^2+y^2}{(1-x)(1-y)}\,dy$$</span> <span class="math-container">$$\int \frac{x^2+y^2}{(1-x)(1-y)}\,dy=\frac{\left(x^2+1\right) \log (y-1)+\frac{1}{2} (y-1)^2+2 (y-1)}{x-1}$$</span> <span class="math-container">$$J(x)=\int^{1-\beta}_{1-x} \frac{x^2+y^2}{(1-x)(1-y)}\,dy$$</span> <span class="math-container">$$J(x)=-\frac{-2 \left(x^2+1\right) \log (-\beta )+2 \left(x^2+1\right) \log (-x)+(x-\beta ) (\beta +x-4)}{2 (x-1)}$$</span> Integrating this last one with respect to <span class="math-container">$x$</span> gives <span class="math-container">$$2 \int J(x)\,dx=-4 \text{Li}_2(x)+\log (1-x) \left(\beta ^2-4 \beta +4 \log (-\beta )-4 \log (-x)+3\right)-x (-(x+2) \log (-\beta )+(x+2) \log (-x)-5)$$</span> <span class="math-container">$$2\int^{1-\beta}_{0} J(x)\,dx= -\log (\beta -1) \left(\beta ^2-4 \beta +4 \log (\beta )+3\right)+\log (-\beta ) \left(\beta ^2-4 \beta +4 \log (\beta )+3\right)+(\beta -1) ((\beta -3) \log (\beta )-5)-4 \text{Li}_2(1-\beta )$$</span> which does not exist (at least as a real if <span class="math-container">$\beta &gt;1$</span>.</p> <p>Now, assuming <span class="math-container">$\beta &lt; 1$</span>, this reduces to <span class="math-container">$$2I=\left(\beta ^2-4 \beta +4 \log (\beta )+3\right) \log \left(\frac{\beta }{1-\beta }\right)+(\beta -1) ((\beta -3) \log (\beta )-5)-4 \text{Li}_2(1-\beta )$$</span></p>
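<p>A rough numerical probe (a Python/SciPy sketch, purely illustrative) shows what is going on: for fixed $\beta &lt; 1$ the inner integral only blows up like $\log x_1$ as $x_1\to 0$, which is still integrable, so the outer integral stays finite.</p>
<pre><code>import numpy as np
from scipy import integrate

beta = 0.5                                   # any value with 0 &lt; beta &lt; 1

def inner(x1):
    f = lambda x2: (x1**2 + x2**2) / ((1 - x1) * (1 - x2))
    val, _ = integrate.quad(f, 1 - x1, 1 - beta)
    return val

for x1 in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(x1, inner(x1))                     # magnitude grows only like log(1/x1)

for eps in [1e-2, 1e-4, 1e-6]:
    val, _ = integrate.quad(inner, eps, 1 - beta, limit=200)
    print(eps, val)                          # stabilises as eps shrinks: the integral converges
</code></pre>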
3,273,830
<p>Okay so I've started to study derivatives and there is this idea of continuity. The book says <em>"a real valued function is considered continuous at a point iff the graph of a function has no break at the point of consideration, which is so iff the values of the function at the neighbouring points are close enough to the value of the function at the given point"</em>.</p> <p>So what I don't understand is why the values of the function at the neighbouring points should be close enough to the value of the function at the given point. Isn't it enough if they are defined? Why do they have to be <em>close enough</em> to the value of the function at the given point?</p>
Community
-1
<p>Intuitively, the idea is that the closer and closer you get to <span class="math-container">$x$</span>, the closer and closer <span class="math-container">$y$</span> gets to <span class="math-container">$f(x)$</span>. A small neighborhood of <span class="math-container">$x$</span> must correspond to a small neighborhood of <span class="math-container">$f(x)$</span>.</p> <p>When there is a discontinuity, the <em>image</em> of a neighborhood of <span class="math-container">$x$</span> always includes that discontinuity and that blocks it from being as small as you want.</p>
937,443
<blockquote> <p>Evaluate $\displaystyle \int \tan^2x\sec x\,dx$</p> </blockquote> <p>I tried several methods: </p> <ul> <li>My first method was to change $\tan^2x = \sec^2x-1$ and then substitute $\sec x = t$, but it doesn't work.</li> <li>My second method was to substitute $\tan^2x = v$, $\sec x = u$. That does not work either.</li> </ul> <p>Is there any better way to solve this problem? </p>
RE60K
67,609
<p>$$I=\int\sec x\tan^2x\,dx\\=\int\sec x(\sec^2x-1)\,dx\\=\int\underbrace{\sec x}_{\mathtt u}\cdot\underbrace{\sec^2x}_{\mathtt v}\,dx-\int\sec x\,dx\\=\left(\sec x\int\sec^2 x\,dx-\int (\sec x\tan x)\tan x\,dx\right)-\int\sec x\,dx$$ $$I=\sec x \tan x-\ln|\sec x+\tan x|-I+C$$ $$I=\frac12\left(\sec x\tan x-\ln\left|\sec x+\tan x\right|\right)+C$$</p>
4,131,208
<p>Does the following converge to <span class="math-container">$0$</span> for <span class="math-container">$\theta&gt;-1/2$</span>?</p> <p><span class="math-container">$$\lim_{n\rightarrow\infty}\frac{\sum_{i=1}^ni^{3\theta}}{\left(\sum_{i=1}^ni^{2\theta}\right)^{3/2}}$$</span></p> <p>I'd like to use the comparison test but no idea what to compare it to. If <span class="math-container">$\theta=1$</span>, based on code I wrote it seems to be going to 0 but would like to know how to use math.</p>
Ninad Munshi
698,724
<p>Rearranging terms we have that</p> <p><span class="math-container">$$\frac{\sum_{i=1}^ni^{3\theta}}{\left(\sum_{i=1}^ni^{2\theta}\right)^{\frac{3}{2}}}\cdot\frac{\frac{1}{n^{3\theta}}}{\frac{1}{n^{3\theta}}}\cdot\frac{\frac{1}{n^{\frac{3}{2}}}}{\frac{1}{n^{\frac{3}{2}}}} = \frac{\sum_{i=1}^n\left(\frac{i}{n}\right)^{3\theta}\frac{1}{n}}{\left(\sum_{i=1}^n\left(\frac{i}{n}\right)^{2\theta}\frac{1}{n}\right)^{\frac{3}{2}}}\cdot\frac{1}{\sqrt{n}}$$</span></p> <p>The terms on the left converge to</p> <p><span class="math-container">$$\frac{\sum_{i=1}^n\left(\frac{i}{n}\right)^{3\theta}\frac{1}{n}}{\left(\sum_{i=1}^n\left(\frac{i}{n}\right)^{2\theta}\frac{1}{n}\right)^{\frac{3}{2}}} \longrightarrow\frac{\int_0^1x^{3\theta}dx}{\left(\int_0^1x^{2\theta}dx\right)^{\frac{3}{2}}} = \frac{(1+2\theta)^{\frac{3}{2}}}{1+3\theta}$$</span></p> <p>if and only if <span class="math-container">$\theta&gt;-\frac{1}{3}$</span>, not just <span class="math-container">$-\frac{1}{2}$</span>. Thus the overall sequence converges to <span class="math-container">$0$</span> in that case. If <span class="math-container">$-\frac{1}{2}&lt;\theta\leq-\frac{1}{3}$</span>, the denominator Riemann sum converges, and the numerator terms</p> <p><span class="math-container">$$N=\frac{1}{\sqrt{n}}\sum_{i=1}^n\left(\frac{i}{n}\right)^{3\theta}\frac{1}{n} = \frac{1}{\sqrt{n}}\sum_{i=1}^n\sqrt{\frac{n}{i}}\cdot\left(\frac{i}{n}\right)^{3\theta+\frac{1}{2}}\frac{1}{n}$$</span></p> <p><span class="math-container">$$\implies \frac{1}{\sqrt{n}}\sum_{i=1}^n\left(\frac{i}{n}\right)^{3\theta+\frac{1}{2}}\frac{1}{n} &lt; N &lt; \sum_{i=1}^n\left(\frac{i}{n}\right)^{3\theta+\frac{1}{2}}\frac{1}{n}$$</span></p> <p><span class="math-container">$$\longrightarrow 0 &lt; N &lt; \int_0^1x^{3\theta+\frac{1}{2}}dx = \frac{2}{6\theta+3}$$</span></p> <p>by the same logic above. Thus the entire expression will always converge when <span class="math-container">$\theta &gt; -\frac{1}{2}$</span>, and only possibly to something nonzero when <span class="math-container">$-\frac{1}{2} &lt; \theta \leq -\frac{1}{3}$</span>.</p>
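<p>Since you already experimented numerically for $\theta=1$, here is a compact check (Python/NumPy, just for reassurance) that also probes a value in $(-\tfrac12,-\tfrac13]$:</p>
<pre><code>import numpy as np

def ratio(theta, n):
    i = np.arange(1, n + 1, dtype=float)
    return np.sum(i ** (3 * theta)) / np.sum(i ** (2 * theta)) ** 1.5

for theta in [1.0, -0.4]:                  # -0.4 lies in (-1/2, -1/3]
    for n in [10**3, 10**5, 10**6]:
        print(theta, n, ratio(theta, n))   # the values keep shrinking as n grows
</code></pre>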
2,107,990
<p>I'm looking for suggestions for textbooks covering multivariable calculus. I am looking for two textbooks, one which covers the theory and the other which covers the computational aspects. I have already taken a (not so taught well) first course in multivariable calculus, but I'd ideally like to to keep a computational/intuitive text with me.</p> <p>I have covered a first course analysis with a focus on point set topology. After revising a bit of the theory of integration and covering Lebesgue integrals which I haven't done thus far, I'd like to move on to cover the aforementioned texts. </p> <p>My aim is the following: over the semester, I'll be enrolled in a course in advanced calculus. From what I gather, the course will cover manifolds etc. but it'll cover them from a very not so rigorous/differential geometric point of view. With that course and a course in GR, I'd like to get introduced to the rudiments of manifolds etc. With my out-of-class work on multivariable calculus, I hope to build up on the material of the courses in a few month's time -- before the term ends -- and use the material covered on both these fronts to start off with basic differential geometry. By then, I hope to have covered some algebra, topology etc. on my own as well.</p> <p>With my motivation in mind, it'd be great if you could recommend some texts.</p>
IQ WANTER
511,144
<p>I'm also a Bangladeshi, reading in class IX in DRMC. Probably the problem was:</p> <blockquote> <p>How many solution triads <span class="math-container">$(a,b,c)$</span> are there for the equation <span class="math-container">$a^2 + b^2 + c^2 - \frac{a^3 + b^3 + c^3 - 3abc}{a+b+c} = 2 + abc?$</span> Where <span class="math-container">$(a,b,c)$</span> are positive integers and <span class="math-container">$a,b,c&gt;1$</span>. </p> </blockquote> <p>After trying a lot, I've got a simple solution. Suppose that <span class="math-container">$2≤a≤b≤c$</span>.</p> <p>Since <span class="math-container">$\frac{a^3+b^3+c^3-3abc}{a+b+c}=a^2+b^2+c^2-ab-bc-ca$</span>, the equation reduces to <span class="math-container">$ab+bc+ca=2+abc$</span>.</p> <p>Note that (because <span class="math-container">$ab\le bc$</span> and <span class="math-container">$ca\le bc$</span>),</p> <p><span class="math-container">$$ab+bc+ca-2&lt;3bc$$</span></p> <p>i.e. <span class="math-container">$$abc&lt;3bc.$$</span> So <span class="math-container">$a&lt;3$</span>, hence <span class="math-container">$a=2$</span>.</p> <p>Substituting <span class="math-container">$a=2$</span> gives</p> <p><span class="math-container">$$2b+2c = 2+bc.$$</span></p> <p>Now, <span class="math-container">$2b+2c=2+bc&gt;bc$</span></p> <p>Hence, <span class="math-container">$(b-2)(c-2)&lt;4$</span>, and in fact the equation says <span class="math-container">$(b-2)(c-2)=bc-2b-2c+4=2$</span>, so <span class="math-container">$b-2=1$</span> and <span class="math-container">$c-2=2$</span>. Finally, <span class="math-container">$(a,b,c)=(2,3,4)$</span> is the only solution.</p>
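<p>A brute-force search (a short Python sketch with exact arithmetic, just to double-check the claim up to a bound) finds no other triple with $2\le a\le b\le c\le 60$:</p>
<pre><code>from fractions import Fraction

solutions = []
B = 60
for a in range(2, B + 1):
    for b in range(a, B + 1):
        for c in range(b, B + 1):
            lhs = a*a + b*b + c*c - Fraction(a**3 + b**3 + c**3 - 3*a*b*c, a + b + c)
            if lhs == 2 + a*b*c:
                solutions.append((a, b, c))
print(solutions)                 # [(2, 3, 4)]
</code></pre>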
673,928
<p>How could I solve, apart from brute force, the congruence</p> <p>$$ \frac{x(x-1)}{2} \equiv 0\ ( \text{mod} \ 2) \ ? $$</p> <p>I mean, for what values of $y$ does the equation $ x(x-1)=2y $ have integer solutions, and when, for example, is such a solution $ x=f(y) $ an EVEN number? Thanks.</p>
Peter
111,826
<p>One also should add that $|e^{j\omega t}|=1$ and therefore $\lim_{t\to\infty} e^{-(a - j\omega) t} = 0 $ provided $a&gt;0$ and $\omega\in \mathbb R$.</p>
484,281
<p>I recently got interested in mathematics after having slacked through it in highschool. Therefore I picked up the book "Algebra" by I.M. Gelfand and A.Shen</p> <p>At problem 113, the reader is asked to factor $a^3-b^3.$</p> <p>The given solution is: $$a^3-b^3 = a^3 - a^2b + a^2b -ab^2 + ab^2 -b^3 = a^2(a-b) + ab(a-b) + b^2(a-b) = (a-b)(a^2+ab+b^2)$$</p> <p>I was wondering how the second equality is derived. From what is it derived, from $a^2-b^2$? I know that the result is the difference of cubes formula, however searching for it on the internet i only get exercises where the formula is already given. Can someone please point me in the right direction?</p>
Tyler Holden
68,577
<p>The second equality is the typical trick of adding/subtracting things until you get something you want. It is usually not terribly insightful but is nonetheless standard. In general, one can factor $a^n-b^n$ as $$ a^n-b^n = (a-b)(a^{n-1} + a^{n-2}b^1 + a^{n-3}b^2 + \cdots + a^2 b^{n-3} + a b^{n-2} + b^{n-1}).$$ One way of seeing this more general result is as follows: Consider the polynomial equation $x^3-b^3$ (where I have just used $x$ instead of $a$!). Clearly $x=b$ is a root of this equation, so we should be able to factor out an $(x-b)$ term from the polynomial. Polynomial long division then gives the desired result. This can be generalized to higher $n$.</p>
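<p>If you want to see the polynomial long division mentioned above carried out mechanically, a small SymPy sketch (my notation, not Gelfand's) confirms the general pattern:</p>
<pre><code>from sympy import symbols, div, simplify

a, b = symbols('a b')
for n in range(2, 8):
    quotient, remainder = div(a**n - b**n, a - b, a)       # divide as polynomials in a
    cofactor = sum(a**(n - 1 - k) * b**k for k in range(n))
    print(n, remainder, simplify(quotient - cofactor))      # remainder 0, difference 0
</code></pre>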
4,106,575
<p>Consider a number drawn from <span class="math-container">$U(1,100)$</span>. When we make an incorrect guess, we are told whether the target number is smaller or larger. So we employ a binary search approach, where the first guess is 50. If that is not the target, then that means our next guess is going to be 25 or 75.</p> <p>What is the probability that binary search ends in 2 guesses?</p> <p>At first, I thought it was <span class="math-container">$P(B|A)P(A)$</span> where <span class="math-container">$A$</span> is the event that the first guess is wrong and <span class="math-container">$B$</span> is the event that the second guess is right. We know <span class="math-container">$P(A) = 99/100$</span>. When <span class="math-container">$A$</span> occurs, that means we would either search in the interval <span class="math-container">$[1,49]$</span> or <span class="math-container">$[51, 100]$</span>, which has 49 and 50, elements, respectively. So I believe <span class="math-container">$P(B | A) = 49/100 * 1/49 + 50/100 * 1/50 = 2/100$</span>.</p> <p>So <span class="math-container">$P(B|A)P(A) = 99/100 * 2/100 = 0.198$</span>.</p> <p>Then I second guessed myself and started wondering why is it not <span class="math-container">$2/100$</span>? If it ends in 2 guesses, it means the target number is 25 or 75. Since the target number is uniformly chosen, the probability that it being 25 or 75 is <span class="math-container">$2/100=1/50 = 0.2$</span>.</p> <p>Both of these answers seem right to me, but 1 one of them must be wrong. Which one is wrong?</p>
roddur
915,875
<p>The probability that it ends in 1 guess is 1 in 100. For it to end in 2 guesses, you must first miss the first guess and then make the second.</p> <p>Missing the first is probability 99/100. But now, your solution set is cut down to either 49 or 50 numbers (depending on whether the target was below or above 50). Given that the first guess missed, these two cases have probabilities 49/99 and 50/99, and in them the second guess is correct with probability 1/49 and 1/50 respectively.</p> <p>The answer is <span class="math-container">$\frac{99}{100}\left(\frac{49}{99}\cdot\frac{1}{49}+\frac{50}{99}\cdot\frac{1}{50}\right)=\frac{2}{100}=.02$</span>.</p>
4,106,575
<p>Consider a number drawn from <span class="math-container">$U(1,100)$</span>. When we make an incorrect guess, we are told whether the target number is smaller or larger. So we employ a binary search approach, where the first guess is 50. If that is not the target, then that means our next guess is going to be 25 or 75.</p> <p>What is the probability that binary search ends in 2 guesses?</p> <p>At first, I thought it was <span class="math-container">$P(B|A)P(A)$</span> where <span class="math-container">$A$</span> is the event that the first guess is wrong and <span class="math-container">$B$</span> is the event that the second guess is right. We know <span class="math-container">$P(A) = 99/100$</span>. When <span class="math-container">$A$</span> occurs, that means we would either search in the interval <span class="math-container">$[1,49]$</span> or <span class="math-container">$[51, 100]$</span>, which has 49 and 50, elements, respectively. So I believe <span class="math-container">$P(B | A) = 49/100 * 1/49 + 50/100 * 1/50 = 2/100$</span>.</p> <p>So <span class="math-container">$P(B|A)P(A) = 99/100 * 2/100 = 0.198$</span>.</p> <p>Then I second guessed myself and started wondering why is it not <span class="math-container">$2/100$</span>? If it ends in 2 guesses, it means the target number is 25 or 75. Since the target number is uniformly chosen, the probability that it being 25 or 75 is <span class="math-container">$2/100=1/50 = 0.2$</span>.</p> <p>Both of these answers seem right to me, but 1 one of them must be wrong. Which one is wrong?</p>
saulspatz
235,128
<p>The second calculation is correct.</p> <p>The mistake in the first calculation is that <span class="math-container">$\Pr(B|A)$</span>is not <span class="math-container">$\frac2{100}$</span> but <span class="math-container">$\frac2{99}$</span> Once we know that <span class="math-container">$50$</span> is not the number, only <span class="math-container">$99$</span> possibilities remain. If you want to do it by formula, <span class="math-container">$$\Pr(B|A)=\frac{\Pr(B\cap A)}{\Pr(A)}=\frac{\Pr(B)}{\Pr(A)}=\frac{2/100}{99/100}=\frac2{99}$$</span></p>
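<p>A quick Monte Carlo run (a throwaway Python sketch) agrees with $2/100$:</p>
<pre><code>import random

trials, hits = 200000, 0
for _ in range(trials):
    target = random.randint(1, 100)
    if target == 50:
        continue                              # the game ended on the first guess
    second = 25 if target &lt; 50 else 75        # the fixed binary-search second guess
    if target == second:
        hits += 1
print(hits / trials)                          # about 0.02
</code></pre>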
222,701
<p>Below in code are two versions. I believe plot1 might be correct but not sure why its shifted to the left and how to fix it. plot2 is copied directly from the wolfman online documentation but using my own data. It doesn't look normalized and the continuous curve is almost flat. I know something is likely wrong with both of them. Please suggest edits.</p> <p>Goal is to normalize histogram of probability density so that we have a mean at zero, standard deviation is 1, and total area under graph is 1. The graph should look similar to image:</p> <p><a href="https://i.stack.imgur.com/GNmCE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GNmCE.png" alt="a typical normalized histogram borrowed from wolfram documentation"></a></p> <pre><code>raw = {1.32, 1.37, 1.43, 1.32, 1.36, 1.33, 1.38, 1.35, 1.48, 1.28, 1.31, 1.52, 1.51, 1.33, 1.32, 1.27, 1.35, 1.40, 1.27, 1.39, 1.50, 1.31, 1.34, 1.48, 1.36, 1.33, 1.40, 1.29, 1.35, 1.36, 1.33, 1.30, 1.28, 1.32, 1.34, 1.33, 1.29, 1.34, 1.34, 1.29, 1.35, 1.52, 1.29, 1.38, 1.40, 1.28, 1.36, 1.36, 1.32, 1.62, 1.36, 1.34, 1.33, 1.33, 1.30, 1.31, 1.33, 1.32, 1.36, 1.41}; (* plot1 almost looks correct except for shifting from mean *) plotHistPDF[raw_, n_ : 3] := Block[{len, μ, σ, bin, hist, dist}, len = Length[raw]; μ = Mean[raw]; σ = StandardDeviation[raw]; hist = Histogram[raw, Automatic, "PDF"]; dist = Plot[ PDF[NormalDistribution[μ, σ], x], {x, μ - n σ, μ + n σ}, PlotRange -&gt; All, Filling -&gt; Axis]; Show[ hist, dist, PlotLabel -&gt; Style["Probabality Density Distribution", Bold, Orange], AxesLabel -&gt; {x, ρ}, ImageSize -&gt; imgsize] ] (* plot2 appears correctly placed but curve is flat. should be a bell shaped curve *) Show[ Histogram[raw, Automatic, "PDF"], Plot[PDF[NormalDistribution[0, 1], x], {x, -4, 4}] ] </code></pre>
prog9910
59,860
<p>I get this plot after following @JimB's advice. Time to learn some statistics again.</p> <p><a href="https://i.stack.imgur.com/RXT0p.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RXT0p.jpg" alt="Histogram Plot"></a></p>
1,037,621
<p>There are 20 people at a chess club on a certain day. They each find opponents and start playing. How many possibilities are there for how they are matched up, assuming that in each game it does matter who has the white pieces (and who has the black ones). </p> <p>I thought it might be $$\large2^{\frac{20(20-1)}2}$$ is this correct?</p>
Graham Kemp
135,106
<p>A set of twenty distinct items can be ordered in $20!$ ways.</p> <p>A set of $10$ distinct pairs can be ordered in $10!$ ways.</p> <p>A pair of distinct items can be ordered in $2!$ ways.</p> <p>Since we don't care about the order <em>of</em> the pairs, and only care about the the order within <em>each</em> pair, then a set of $20$ distinct items can be subdivided into an unordered set of $10$ distinct ordered pairs in $\frac{20!}{10!}$ ways.</p> <hr> <p>If it <em>doesn't</em> matter about the order within each pair then a set of $20$ distinct items can be subdivided into an unordered set of $10$ distinct unordered pairs in $\frac{20!}{10!\;2!^{10}}$ ways.</p>
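<p>For a small club the count can be checked by direct enumeration (a quick Python sketch — of course $20$ players are far too many to enumerate, but the pattern $n!/(n/2)!$ is already visible):</p>
<pre><code>from itertools import permutations
from math import factorial

def count_pairings(n):
    # all ways to split n players into an unordered set of ordered (white, black) pairs
    configs = set()
    for perm in permutations(range(n)):
        games = frozenset((perm[2*i], perm[2*i + 1]) for i in range(n // 2))
        configs.add(games)
    return len(configs)

for n in [2, 4, 6]:
    print(n, count_pairings(n), factorial(n) // factorial(n // 2))   # the two counts agree
</code></pre>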
1,037,621
<p>There are 20 people at a chess club on a certain day. They each find opponents and start playing. How many possibilities are there for how they are matched up, assuming that in each game it does matter who has the white pieces (and who has the black ones). </p> <p>I thought it might be $$\large2^{\frac{20(20-1)}2}$$ is this correct?</p>
pmunoz
693,872
<p>I would like to share how I ended up thinking about this.</p> <p>To come up with a single instance of pairings first choose who gets to play white. There are <span class="math-container">$20\choose{10}$</span> ways to pick who gets to play white. Then permute the remaining 10 players, and pair them up with the white players by order (or index).</p> <p>Example:</p> <p>Say we randomly chose 1 3 5 7 9 11 13 15 17 19 to play white.</p> <p>Then we permuted the rest and got 20 18 16 14 12 10 8 6 4 2.</p> <p>Then we have the pairings:</p> <p>{(1,20), (3,18), (5,16), (7,14), (9,12), (11,10), (13,8), (15,6), (17,4), (19,2)}</p> <p>So we end up having <span class="math-container">$20\choose{10}$</span> = <span class="math-container">$\frac{20!}{10!10!}$</span> ways to choose who plays white, and for each those we will have <span class="math-container">$10!$</span> permutations of who to pair them with, this makes it so the number of pairs is <span class="math-container">$\frac{20!}{10!10!}10! = \frac{20!}{10!}$</span>.</p>
140,459
<p>There is a long tradition of mathematicians remarking that FLT in itself is a rather isolated claim, attractive only because of its simplicity. And people often note a great thing about current proofs of FLT is their use of the modularity thesis which is just the opposite: arcane, and richly connected to a lot of results.</p> <p>But have there been uses of FLT itself? Beyond implying simple variants of itself, are there any more serious uses yet?</p> <p>I notice the discussion in <a href="https://mathoverflow.net/questions/50183/fermats-last-theorem-and-computability-theory">Fermat&#39;s Last Theorem and Computability Theory</a> concludes that one purported use is not seriously using FLT.</p>
Carlo Beenakker
11,260
<p>Do applications to physics count?</p> <p><A HREF="http://arxiv.org/abs/hep-th/9403014">Supersymmetry Breakings and Fermat's Last Theorem</A>, Hitoshi Nishino, Mod.Phys.Lett. A <strong>10</strong> (1995) 149-158.</p> <blockquote> <p>In this paper, we give the first application of Fermat's last theorem (FLT) to physical models, in particular to supersymmetric models in two or four dimensions. It is shown that FLT implies that supersymmetry is exact at some irrational number points in parameter space, while it is broken at all rational number points except for the origin. This mechanism presents a peculiar link between the FLT in number theory and the vacuum structure of supersymmetry. Previously, the only well-known connection between number theory and supersymmetry has been via topological effects, such as instantons and monopoles in supersymmetric models.</p> </blockquote>
140,459
<p>There is a long tradition of mathematicians remarking that FLT in itself is a rather isolated claim, attractive only because of its simplicity. And people often note a great thing about current proofs of FLT is their use of the modularity thesis which is just the opposite: arcane, and richly connected to a lot of results.</p> <p>But have there been uses of FLT itself? Beyond implying simple variants of itself, are there any more serious uses yet?</p> <p>I notice the discussion in <a href="https://mathoverflow.net/questions/50183/fermats-last-theorem-and-computability-theory">Fermat&#39;s Last Theorem and Computability Theory</a> concludes that one purported use is not seriously using FLT.</p>
Chandan Singh Dalawat
2,821
<p>It is perhaps an indication of the average age of today's MOers that nobody remembers the work of Hellegouarch who introduced around 1970 what is nowadays called the Frey curve precisely in order to deduce information about elliptic curves from (the then) known results about Fermat's Last Theorem.</p>
2,399,216
<p>How to find the coefficient of $x^4$ in the expansion of $(1+x+x^2)^{20}$?</p>
Franklin Pezzuti Dyer
438,055
<p>Let us treat this trinomial $$1+x+x^2$$ instead as a binomial: $$=(1+x)+x^2$$ Then, by the binomial theorem, we have $$(1+x+x^2)^{20}=\sum_{n=0}^{20} \binom{20}{n}x^{2n}(1+x)^{20-n}$$ Now notice that it in this sum, only the $n=0,1,2$ terms can have an $x^4$ term, because otherwise the $x^{2n}$ would produce some power of $x$ greater than $4$. Thus we need only to examine the terms $$\binom{20}{0}(1+x)^{20}$$ $$\binom{20}{1}x^2(1+x)^{19}$$ $$\binom{20}{2}x^4(1+x)^{18}$$ Using the binomial theorem, from the first term, we have a coefficient of $$\color{red}{\binom{20}{0}\binom{20}{4}}$$ Again using the binomial theorem, from the second term, we have a coefficient of $$\color{red}{\binom{20}{1}\binom{19}{2}}$$ and from the last term, it is rather obvious that the coefficient will be $$\color{red}{\binom{20}{2}}$$ and so the final coefficient is $$\color{red}{\binom{20}{0}\binom{20}{4}+\binom{20}{1}\binom{19}{2}+\binom{20}{2}}$$</p>
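<p>A direct expansion (here a tiny Python check, only to confirm the arithmetic) gives the same number, $8455$:</p>
<pre><code>from math import comb

coeffs = [1]                       # coefficients of (1 + x + x^2)^k, starting with k = 0
for _ in range(20):
    new = [0] * (len(coeffs) + 2)
    for i, c in enumerate(coeffs):
        new[i] += c                # times 1
        new[i + 1] += c            # times x
        new[i + 2] += c            # times x^2
    coeffs = new

print(coeffs[4])                                                          # 8455
print(comb(20, 0)*comb(20, 4) + comb(20, 1)*comb(19, 2) + comb(20, 2))    # 8455
</code></pre>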
314,236
<p>The integral $\int_{-\infty}^{\infty} e^{ix} dx$ diverges. I have read (in a wikipedia article) that the principal value of this integral vanishes: $P \int_{-\infty}^{\infty} e^{ix} dx = 0$. How can one see that?</p> <p>Thank you for your effort!</p>
Antonio Vargas
5,531
<p>The integral can be regularized using the <a href="http://en.wikipedia.org/wiki/Ces%C3%A0ro_summation#Ces.C3.A0ro_summability_of_an_integral" rel="nofollow">integral analogue of Cesàro summation</a>. By definition we have</p> <p>$$ \begin{align} \textrm{C} \int_{-\infty}^{\infty} e^{ix}\,dx &amp;= \lim_{a \to \infty} \frac{1}{a} \int_0^a \int_{-y}^{y} e^{ix}\,dx\,dy \\ &amp;= \lim_{a \to \infty} \frac{2}{a} \int_0^a \sin y\,dy \\ &amp;= \lim_{a \to \infty} \frac{2}{a} (1 - \cos a) \\ &amp;= 0. \end{align} $$</p> <p>Perhaps this is what the article was referring to?</p>
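<p>Numerically (a one-off Python check) the contrast is clear: the symmetric partial integrals $2\sin a$ keep oscillating, while their Cesàro averages $\tfrac2a(1-\cos a)$ die off:</p>
<pre><code>import numpy as np

for a in [10.0, 100.0, 1000.0, 10000.0]:
    print(a, 2 * np.sin(a), 2 * (1 - np.cos(a)) / a)   # oscillating vs. tending to 0
</code></pre>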
266,981
<p>Wikipedia presents a recursive definition of the Pfaffian of a skew-symmetric matrix as $$ \operatorname{pf}(A)=\sum_{{j=1}\atop{j\neq i}}^{2n}(-1)^{i+j+1+\theta(i-j)}a_{ij}\operatorname{pf}(A_{\hat{\imath}\hat{\jmath}}),$$ where $\theta$ is the Heaviside step function. I have failed to find a proper reference for this result. Is it standard or does it require proper citation? What should one cite?</p>
Todd Trimble
2,926
<p>To answer the question raised in the most recent edit: abelian sheaves are tensored and powered over abelian groups. </p> <p>First of all, abelian presheaves have tensors and powers that are computed <em>pointwise</em>. By "abelian presheaves" I mean the $\textbf{Ab}$-category of functors $F: C \to \textbf{Ab}$ where $C$ is a small (ordinary) category, or somewhat more generally the $\textbf{Ab}$-category of $\textbf{Ab}$-functors $F: C \to \textbf{Ab}$ where $C$ is small and $\textbf{Ab}$-enriched (we can always regard a small $\textbf{Set}$-category $C$ as a small $\textbf{Ab}$-category, by defining $C(a, b) = F\hom(a, b)$, the free abelian group generated by the set $\hom(a, b)$). If $A$ is an abelian group, then the power $F^A$ is given by the "pointwise" formula $F^A(c) = F(c)^A$, i.e., $F^A(c) = \textbf{Ab}(A, F(c))$. This is a straightforward calculation having to do with commutation of limits. We have for each $G: C \to \textbf{Ab}$ an isomorphism $\textbf{Ab}^C(G, F^A) \cong \textbf{Ab}(A, \textbf{Ab}^C(G, F))$ since </p> <p>$$\begin{array}{rcl} \textbf{Ab}^C(G, F^A) &amp; \cong &amp; \int_c \textbf{Ab}(Gc, F^A(c)) \\ &amp; \cong &amp; \int_c \textbf{Ab}(Gc, \textbf{Ab}(A, Fc)) \\ &amp; \cong &amp; \int_c \textbf{Ab}(Gc \otimes A, Fc) \\ &amp; \cong &amp; \int_c \textbf{Ab}(A, \textbf{Ab}(Gc, Fc)) \\ &amp; \cong &amp; \textbf{Ab}(A, \int_c \textbf{Ab}(Gc, Fc) \\ &amp; \cong &amp; \textbf{Ab}(A, \textbf{Ab}^C(G, F)). \end{array}$$ </p> <p>A very similar sort of calculation shows that $\textbf{Ab}$-tensors of abelian sheaves are also computed in pointwise (or objectwise) fashion. </p> <p>As for sheaves: if $F$ is an abelian sheaf (for whatever topology), then for any abelian group $A$ the presheaf $F^A$ is a sheaf. Abstractly, if $F$ is an algebra of the (enriched) sheafification monad $\sigma$ on $\textbf{Ab}^C$, then so is $F^A$. For we have a universal element $\mathbb{Z} \to \textbf{Ab}(A, \textbf{Ab}^C(F^A, F))$ and a map </p> <p>$$\textbf{Ab}^C(F^A, F) \to \textbf{Ab}^C(\sigma(F^A), \sigma(F)).$$ </p> <p>Putting these together we get $\mathbb{Z} \to \textbf{Ab}(A, \textbf{Ab}^C(\sigma(F^A), \sigma(F)) \cong \textbf{Ab}^C(\sigma(F^A), \sigma(F)^A)$, i.e., a map $\sigma(F^A) \to \sigma(F)^A$. Together with the algebra structure $\alpha: \sigma(F) \to F$, we produce an algebra structure on $F^A$ by an evident composite </p> <p>$$\sigma(F^A) \to \sigma(F)^A \stackrel{\alpha^A}{\to} F^A$$ </p> <p>where $\alpha^A$ for any $\alpha: G \to F$ is defined by exploiting maps </p> <p>$$\mathbb{Z} \stackrel{\text{canon}}{\to} \textbf{Ab}(A, \textbf{Ab}^C(G^A, G)) \stackrel{\textbf{Ab}(1, \textbf{Ab}^C(1, \alpha))}{\to} \textbf{Ab}(A, \textbf{Ab}^C(G^A, F)) \cong \textbf{Ab}^C(G^A, F^A)$$ </p> <p>(cf. similar constructions mentioned at <a href="https://mathoverflow.net/questions/263214/morphisms-of-cotensors/263215#263215">Morphisms of cotensors</a>). </p> <hr> <p>Nothing in this answer so far has much to do with the details of abelian groups or sheafification or anything like that; it's really just pure enriched category theory. 
A more general result is that if $\mathcal{V}$ is a suitable base of enrichment (complete, cocomplete, symmetric monoidal closed) and if $C$ is a small $\mathcal{V}$-category, then $\mathcal{V}^C$ has $\mathcal{V}$-powers and $\mathcal{V}$-tensors (in fact is $\mathcal{V}$- complete and cocomplete), and also if $\mathcal{C}$ is $\mathcal{V}$-complete and $\sigma$ is a $\mathcal{V}$-monad on $\mathcal{C}$, then the $\mathcal{V}$-category of algebras $\mathcal{C}^\sigma$ inherits $\mathcal{V}$-weighted limits from $\mathcal{C}$. Just as you would expect by analogy from ordinary category theory. Putting these results together covers a huge range of examples that arise in the wild. </p> <hr> <p>As for tensoring on abelian sheaves: this is a special case of (weighted) colimits on sheaves, where the recipe is well-known: sheafify the (pointwise) colimit taken in presheaves. Thus the formula for a sheaf $F$ should be given by $(A \cdot F)(c) = a(A \otimes iF(c))$, where $i$ is the full inclusion of sheaves into presheaves and $a$ is its left adjoint, reflecting presheaves back into sheaves. This construction applies more generally to <em>idempotent</em> monads $\sigma = ai$ where $i$ is a full inclusion and $a \dashv i$; however, it doesn't work for general (enriched) monads $\sigma$ (and in fact the construction of colimits in categories of algebras is a pretty big topic in its own right). But anyway, if $G$ is a sheaf, then we have </p> <p>$$\text{Sh}(A \cdot F, G) \cong \text{Sh}(a(A \otimes iF), G) \cong \textbf{Ab}^C(A \otimes iF, iG) \cong \textbf{Ab}(A, \textbf{Ab}^C(iF, iG)) \cong \textbf{Ab}(A, \text{Sh}(F, G))$$ </p> <p>where the last isomorphism invokes (enriched) full faithfulness. </p> <hr> <p>Although much of the theory of weighted limits and colimits in enriched category theory is a smooth generalization of the theory of ordinary (conical) limits and colimits from ordinary category theory, it must be said that powers and tensors add an important enriched ingredient, in the sense that </p> <ul> <li><p>Categories that are complete and cocomplete in the ordinary sense need not admit (enriched) powers and tensors, but </p></li> <li><p>If a category is complete in the ordinary sense and admits enriched powers, then it is complete in the enriched sense of admitting all weighted limits. Similarly if a category is cocomplete in the ordinary sense and admits tensors, then it is cocomplete in the enriched sense of admitting all weighted colimits. </p></li> </ul> <p>For an example of the first point: consider the inclusion of monoids $M$ into categories, taking $M$ to the usual one-object category $BM$ whose morphisms are elements of $M$. Monoids are thus $\textbf{Cat}$-enriched by the formula $\textbf{Cat}(BM, BN)$. Certainly $\textbf{Mon}$ admits all ordinary limits (they are computed as in $\textbf{Set}$). However, $\textbf{Mon}$ does not admit $\textbf{Cat}$-powers. For instance if $\mathbf{2}$ is the arrow category, then the objects of $(BM)^\mathbf{2}$ correspond to elements $m \in M$, hence have more than one object (and in fact need not be even <em>equivalent</em> as categories to monoids, when one considers that morphisms $a: m \to n$ in $(BM)^\mathbf{2}$ are elements $a \in M$ such that $am = na$). This is enough to show $\textbf{Mon}$ doesn't have copowers, because the full inclusion $B: \textbf{Mon} \to \textbf{Cat}$ reflects any weighted limit in $\textbf{Cat}$. 
</p> <p>Essentially all of this material can be found in Kelly's book, <a href="http://www.tac.mta.ca/tac/reprints/articles/10/tr10abs.html" rel="nofollow noreferrer">Basic Concepts of Enriched Category Theory</a>. </p>
72,876
<p>A quiver in representation theory is what is called in most other areas a directed graph. Does anybody know why Gabriel felt that a new name was needed for this object? I am more interested in why he might have felt graph or digraph was not a good choice of terminology than why he thought quiver is a good name. (I rather like the name myself.)</p> <p>On a related note, does anybody know why quiver representations, resp. morphisms of quiver representations, are not commonly defined as functors from the free category on the quiver to the category of finite dimensional vector spaces, resp. natural transformations? </p> <p><b>Added</b> I made this community wiki in case this will garner more responses. </p> <p>My motivation for asking this is that one of my students just defended her thesis, which involved quivers, and the Computer Scientist on the committee remarked that these are normally called directed graphs and using that term might make the thesis appeal to a wider community. Afterwards, some of us were wondering what prompted Gabriel to coin a new term for this concept.</p>
Dag Oskar Madsen
18,756
<p>Gabriel actually gave a short explanation himself in [Gabriel, Peter. Unzerlegbare Darstellungen. I. (German) Manuscripta Math. 6 (1972), 71--103]:</p> <blockquote> <p>Für einen solchen 4-Tupel schlagen wir die Bezeichnung <em>Köcher</em> vor, und nicht etwa Graph, weil letzerem Wort schon zu viele verwandte Begriffe anhaften.</p> </blockquote> <p>Attempt at translation: For such a 4-tuple we suggest the name <em>quiver</em>, rather than graph, since the latter word already has too many related concepts connected to it.</p> <p>(This is community wiki, so anyone can add a proper English translation.)</p>
4,528,823
<p>I'm trying to understand the proof that every bounded sequence has a convergent subsequence. The proof goes as follows:</p> <blockquote> <p>Let <span class="math-container">$\{a_{n}\}$</span> be a bounded sequence of real numbers. Choose <span class="math-container">$M\ge 0$</span> such that <span class="math-container">$|a_{n}|\le M$</span> for all <span class="math-container">$n \in \mathbb{N}$</span>. Define sets <span class="math-container">$$E_{n}=\overline{\{a_{j} \mid j\ge n}\}.$$</span> Then, <span class="math-container">$E_{n}\subseteq [-M, M]$</span> are a descending sequence of non-empty, closed and bounded subsets of <span class="math-container">$\mathbb{R}$</span>. The nested interval theorem then says that there exists some <span class="math-container">$a$</span> such that <span class="math-container">$$a \in \bigcap_{n\ge 1}E_{n}.$$</span> For each natural number <span class="math-container">$k$</span>, <span class="math-container">$a$</span> is a point of closure of <span class="math-container">$\{a_{j}\mid j\ge k\}$</span>. Hence, for infinitely many indices <span class="math-container">$j\ge n$</span> (shouldn't it be <span class="math-container">$j\ge k$</span>?), <span class="math-container">$$a_{j}\in (a-\frac{1}{k}, a+\frac{1}{k}).$$</span> We may therefore inductively choose a strictly increasing sequence of natural numbers <span class="math-container">$\{n_{k}\}$</span> such that <span class="math-container">$|a-a_{n_{k}}|&lt;1/k$</span> for all <span class="math-container">$k$</span>.</p> </blockquote> <p>My question is how exactly are the points in the sequence being chosen such that they're guaranteed to be strictly increasing? I'm not able to visualize how this process works.</p> <p>Can you can please explain how starting with <span class="math-container">$k=1$</span> we inductively construct the subsequence?</p>
DonAntonio
31,254
<p>For <span class="math-container">$\;k=1\;$</span> :</p> <p><span class="math-container">$$E_1=\overline{\{a_j\;|\; j\ge1\}}$$</span></p> <p>For <span class="math-container">$\;k=2\;$</span></p> <p><span class="math-container">$$E_2=\overline{\{a_j\;|\; j\ge2\}}\subset E_1$$</span></p> <p>and so on.</p>
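<p>To spell out the inductive step the question asks about (my own elaboration, not part of the quoted proof): since $a\in E_1$, i.e. $a$ is a closure point of $\{a_j \mid j\ge 1\}$, we may pick $n_1$ with $|a-a_{n_1}|&lt;1$. Suppose $n_1&lt;\dots&lt;n_{k-1}$ have already been chosen. Because $a\in E_{n_{k-1}+1}$, the point $a$ is also a closure point of the tail $\{a_j \mid j\ge n_{k-1}+1\}$, so the interval $(a-\tfrac1k,\,a+\tfrac1k)$ contains some $a_j$ with $j\ge n_{k-1}+1$; call such an index $n_k$. Then $n_k&gt;n_{k-1}$ and $|a-a_{n_k}|&lt;\tfrac1k$, which gives exactly the strictly increasing sequence of indices with $a_{n_k}\to a$ that the proof requires.</p>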
184,487
<p>For any domain $A$ let $A^\times$ be its group of units.</p> <p>Let $A$ be a noetherian domain with only finitely many prime ideals, and field of fractions $K$. </p> <p>Is the group $K^\times/A^\times$ finitely generated?</p>
t.b.
5,363
<p>I think this (good) exercise in the use of the calculus of fractions is made a bit more difficult than necessary because Gel'fand&ndash;Manin formulate the axioms in a self-dual way. Since we're dealing with (what I call) <em>right fractions</em>, it seems like a good idea to isolate the necessary parts of the axioms for this exercise. That this is possible is hinted at in <a href="http://books.google.com/books?id=pv94ATbagxEC&amp;lpg=PP1&amp;pg=PA151" rel="noreferrer">remark&nbsp;9</a> following <a href="http://books.google.com/books?id=pv94ATbagxEC&amp;lpg=PP1&amp;pg=PA148" rel="noreferrer">the lemma</a> you're trying to understand. Proofs sometimes become easier when one reduces the options one has and this is one instance, I believe.</p> <p>To get to the maths, the axioms for <em>right fractions</em> read:</p> <blockquote> <p>Let $\mathscr{C}$ be a category. A collection $\Sigma$ of morphisms in $\mathscr{C}$ is called a <em>right multiplicative system</em> (or <em>right denominator set</em>) if</p> <ul> <li><strong>[RF&nbsp;0]</strong> (non-triviality): the identity morphism of each object belongs to $\Sigma$: for all objects $C \in \mathscr{C}$ we have $1_C \in \Sigma$.</li> <li><strong>[RF&nbsp;1]</strong> (composition): if $s\colon A \to A'$ and $s'\colon A' \to A''$ are composable morphisms from $\Sigma$ then $s's\colon A \to A''$ belongs to $\Sigma$, too.</li> <li><p><strong>[RF&nbsp;2]</strong> (<a href="http://en.wikipedia.org/wiki/Ore_condition" rel="noreferrer">Ore condition</a>): Given an arbitrary morphism $f\colon A \to B$ and a morphism $t\colon B' \to B$ from $\Sigma$, there exists a commutative square</p> <p><img src="https://i.stack.imgur.com/erWuB.png" alt="Ore condition diagram"></p> <p>with $s\colon A' \to A$ from $\Sigma$. <br><strong>Note:</strong> If $f$ happens to be from $\Sigma$ we do <em>not</em> assume that we can choose $f'$ to be from $\Sigma$, however, we can say that the composition $sf' = tf$ belongs to $\Sigma$ by <strong>[RF&nbsp;1]</strong> because both $t$ and $f$ do.</p></li> <li><strong>[RF&nbsp;3]</strong> (cancellation condition): If $f,g\colon A \rightrightarrows B$ are two parallel morphisms such that there is $s\colon B \to B'$ from $\Sigma$ such that $sf = sg$, then there is $t\colon A' \to A$ from $\Sigma$ such that $ft = gt$.</li> </ul> </blockquote> <p>I use the notation $(f/s)$ with $s \in \Sigma$ to represent a <em>right fraction</em> $A \xleftarrow{s} A' \xrightarrow{f} B$ (a &ldquo;left $\Sigma$-roof&rdquo; in Gel'fand&ndash;Manin's terminology): my rationale for the name-switching in the handedness is that $(f/s)$ represents &ldquo;$f \circ s^{-1}$&rdquo; in the localized category $\mathscr{C}[\Sigma^{-1}]$, so we compose $f$ with $s^{-1}$ on the <em>right</em> to get a morphism $A \to B$ in $\mathscr{C}[\Sigma^{-1}]$.</p> <p>The above axioms are enough to prove that <em>equivalence of (right) fractions</em> is indeed an equivalence relation. The definition of equivalence of fractions as phrased in Gel'fand&ndash;Manin is somewhat confusing because it may suggest an incorrect interpretation (as witnessed in your question). Here's the diagram defining equivalence between $(f_1/s_1)$ and $(f_2/s_2)$ that is actually intended: </p> <p><img src="https://i.stack.imgur.com/9jFEL.png" alt="Equivalence of right fractions"></p> <p>Note that the vertical morphisms $\bar r_1$ and $\bar r_2$ in this diagram are denoted by $r$ and $h$ by Gel'fand&ndash;Manin and <em>neither</em> of them is assumed to belong to $\Sigma$. 
What <em>is</em> assumed to be in $\sigma$ is the composition $\bar{s} = s_1\bar{r}_1 = s_2\bar{r}_2$.</p> <p>If you read the proof of Lemma&nbsp;8&nbsp;a) of Gel'fand&ndash;Manin with this definition in mind, you'll see that they actually prove that the above relation is transitive (only using the parts of their axioms that I gave above), so I'll take that result for granted.</p> <p>Before proceeding further, a trivial but useful</p> <blockquote> <p><strong>Remark.</strong> The equivalence diagram not only tells us that $(f_1/s\vphantom{f}_1)$ and $(f_2/s\vphantom{f}_2)$ are equivalent, but that they both are equivalent to their common expansion $(\bar f/\bar s)$: the upper half exhibits $(\bar{f}/\bar{s})$ to be equivalent to $(f_1/s\vphantom{f}_1)$ and the lower half gives that $(\bar{f}/\bar{s})$ is equivalent to $(f_2/s\vphantom{f}_2)$.</p> </blockquote> <hr> <p>I think it is instructive to break your question into small and easy steps instead of proving the general well-definedness in one blow, so that's the route I'll take below. I'll give the details of most of the argument and think that it is safe to leave the rest only in outline. The direct route is of course possible, but, as you mentioned, it becomes somewhat messy.</p> <p>One crucial observation to make towards proving the well-definedness of composition in the category of fractions is that the Ore condition <strong>[RF&nbsp;2]</strong> reads $fs = tf'$, or &ldquo;$t^{-1} \circ f = f' \circ s^{-1}$&rdquo;, so it is not too surprising that we have the following:</p> <blockquote> <p><strong>Lemma.</strong> Given a wedge $A \xrightarrow{f} B \xleftarrow{t} B'$ in $\mathscr{C}$ with $t \in \Sigma$, any two right fractions $(f_1/s_1)$ and $(f_2/s_2)$ such that $tf_1 = fs_1$ and $tf_2 = fs_2$ are equivalent: &ldquo;up to equivalence of fractions there's only one Ore square over any given wedge&rdquo;.</p> </blockquote> <p>(This observation is a variant of the heuristics at the beginning of <a href="http://books.google.com/books?id=pv94ATbagxEC&amp;lpg=PP1&amp;pg=PA148" rel="noreferrer">remark&nbsp;7</a> in Gel'fand&ndash;Manin)</p> <p><em>Proof.</em> We are given the two commutative squares</p> <p><img src="https://i.stack.imgur.com/dIYrv.png" alt="two Ore squares"></p> <p>and we apply the Ore condition to the wedge $A_1 \xrightarrow{s_1} A \xleftarrow{s_2} A_2$ to get the commutative square</p> <p><img src="https://i.stack.imgur.com/mIb56.png" alt="Another Ore square"></p> <p>Observe that $s := s_1s_1' = s_2s_2' \in \Sigma$ because of the composition axiom. 
Moreover, we have $$ t f_1 s_1' = f s_1 s_1' = f s_2 s_2' = t f_2 s_2' $$ so that there is $s'\colon A'' \to A'$ such that $f_1 s_1' s' = f_2 s_2' s'$ by the cancellation axiom.</p> <p>But this means that the diagram</p> <p><img src="https://i.stack.imgur.com/o2kGs.png" alt="desired equivalence diagram"></p> <p>is commutative, proving the equivalence of $(f_1/s_1)$ and $(f_2/s_2)$.</p> <hr> <p>Finally, here's how the above leads to the desired conclusion:</p> <ol> <li><p>In order to compose $(g/t)$ with $(f/s)$ apply the Ore condition to $f$ and $t$ to get the diagram</p> <p><img src="https://i.stack.imgur.com/KiLmo.png" alt="definition of composition"></p> <p>and apply the lemma to prove that the equivalence class of the composition $(g/t) \circ (f/s)$ is independent of the chosen Ore square over $f$ and $t$.</p></li> <li><p>Replace $(f/s)$ by an expansion $(f_1/s_1) = (fr/sr)$ to see that the compositions $(g/t) \circ (f/s)$ and $(g/t) \circ (f_1/s_1)$ are equivalent, as shown by the following diagram</p> <p><img src="https://i.stack.imgur.com/mdTAg.png" alt="can replace (f/s) with an expansion"></p> <p>(this diagram proves all that you need for this case).</p></li> <li><p>Similarly, replace $(g/t)$ by an expansion $(g_1/t_1)$ to see that the compositions $(g/t) \circ (f/s)$ and $(g_1/t_1) \circ (f/s)$ are equivalent (I leave this case as an exercise).</p></li> <li><p>Combine 2. and 3. and use transitivity of equivalence of right fractions to see that the compositions $(g/t) \circ (f/s)$ and $(g_1/t_1) \circ (f_1/s_1)$ are equivalent: $(g/t) \circ (f/s) \sim (g/t) \circ (f_1/s_1) \sim (g_1/t_1) \circ (f_1/s_1)$, <em>provided</em> $(f_1/s_1)$ and $(g_1/t_1)$ are expansions of $(f/s)$ and $(g/t)$, respectively.</p></li> <li><p>Use transitivity of equivalence of fractions to see that $(f/s) \sim (f'/s')$ and $(g/t) \sim (g'/t')$ lead to $(g/t) \circ (f/s) \sim (g'/t') \circ (f'/s')$, by using 4. in order to compare $(f/s)$ and $(f'/s')$ to a common expansion $(f_1/s_1)$ as well as $(g/t)$ and $(g'/t')$ to a common expansion $(g_1/t_1)$.</p></li> </ol> <hr> <p>To finish up, let me direct you to <a href="https://math.stackexchange.com/q/89272/5363">this thread</a> for a discussion of related questions, some remarks on the significance of the &ldquo;2-out-of-3&rdquo; condition Agust&iacute; mentioned (it is called saturation in the other thread), as well as some useful links and references at the end of the answer.</p>
224,670
<p>Suppose I have data that I fit with <code>NonlinearModelFit</code>, with the fit based on two fitting parameters, <code>c1</code> and <code>c2</code>.</p> <p>When I use <code>nlm[&quot;ParameterTable&quot;] // Quiet</code> I get the following table:</p> <p><a href="https://i.stack.imgur.com/ZmQQO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZmQQO.png" alt="Image" /></a></p> <p>If I have an equation such as:</p> <p><code>eq = (2.303*((70 + 273.15)^2)*(c1/c2))/1000</code></p> <p>Is there any code (as opposed to doing it manually) I can use to calculate the value of <code>eq</code> together with the combined standard deviation based on the standard deviations of <code>c1</code> and <code>c2</code> from the table?</p> <p>To clarify, I would like to get something like <code>eq = (2.303*((70 + 273.15)^2)*(8.08318/21.1577))/1000=103.604</code>, but also the standard deviation based on the errors of <code>c1</code> and <code>c2</code>, so as to get something like <code>103.604 +- standard error</code>.</p> <p><strong>EDIT:</strong></p> <p>For reference, the fitted model comes from the equation <code>eqn</code> below:</p> <pre><code>eqn = ((log10q - Log10[qref]) == c1*(Tfp - Tfpref)/(c2 + (Tfp - Tfpref))); model = Tfp /. Solve[eqn, Tfp][[1]] // FullSimplify; const = {Tfpref -&gt; 70, qref -&gt; 10/60}; model2 = model /. (const // Rationalize) // FullSimplify; nlm = NonlinearModelFit[data, {model2, c1 &gt; 5, c2 &gt; 5}, {c1, c2}, log10q]; </code></pre> <p>where everything in <code>eqn</code> is known except the fitting parameters <code>c1</code> and <code>c2</code>.</p>
JimB
19,758
<p>If you're wanting an estimate of the standard error for <code>eq</code>, one approach is to use the Delta Method (a.k.a Propagation of Error if you're in the physical sciences).</p> <pre><code>(* Data from first example in `NonlinearModelFit` documentation *) data = {{0, 1}, {1, 0}, {3, 2}, {5, 4}, {6, 4}, {7, 5}}; nlm = NonlinearModelFit[data, Log[c1 + c2 x^2], {c1, c2}, x] eq = (2.303*((70 + 273.15)^2)*(c1/c2))/1000 (* (271.183 c1)/c2 *) eq /. nlm[&quot;BestFitParameters&quot;] (* 286.391 *) f = D[eq, {{c1, c2}}] /. nlm[&quot;BestFitParameters&quot;] (* {190.126, -200.789} *) se = (f.nlm[&quot;CovarianceMatrix&quot;].f)^0.5 (* 261.115 *) </code></pre> <p>More work but better if the desired function estimator does not have an approximate normal distribution is to use a bootstrap approach.</p> <p><strong>Addition:</strong></p> <p>I should note that the true standard error almost certainly doesn't exist as the ratio of two normals have no finite moments. However, the &quot;estimate of the standard error&quot; can (depending on the values of the distributions of the estimators) provide a reasonable confidence interval for the estimate of the ratio (such as in +/- 1.96 standard errors).</p>
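<p>Since the bootstrap is mentioned as an alternative, here is a rough sketch of how it could look for this same toy example (my own illustration, not part of the answer above; the resample count of 500 is arbitrary, and with only six data points some resampled fits may be degenerate, which is why the fit is wrapped in <code>Quiet</code>):</p> <pre><code>(* crude nonparametric bootstrap: resample the data with replacement, refit, and evaluate eq each time *) bootEq = Table[ Module[{sample, fit}, sample = RandomChoice[data, Length[data]]; fit = Quiet@NonlinearModelFit[sample, Log[c1 + c2 x^2], {c1, c2}, x]; eq /. fit[&quot;BestFitParameters&quot;]], {500}]; {Mean[bootEq], StandardDeviation[bootEq]} (* compare with the delta-method values 286.391 and 261.115 *) </code></pre> <p>The spread (or directly the quantiles) of <code>bootEq</code> then gives an interval that does not rely on the normal approximation behind the delta method.</p>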
3,288,651
<p>I have a problem understanding a proof about ideals, which states that every ideal in the integers can be generated by a single integer. And with that I realized that I also don't really understand ideals in general and the intuition behind them. </p> <p>So let me start by the definition of an ideal. For <span class="math-container">$a, b \in \mathbb{Z}$</span>, the ideal generated by <span class="math-container">$a$</span> is the set <span class="math-container">$ (a) := \{ua : u \in \mathbb{Z}\} $</span> while the ideal generated by <span class="math-container">$a$</span> and <span class="math-container">$b$</span> is the set <span class="math-container">$(a, b) := \{ua + vb : u,v \in \mathbb{Z}\}$</span>. Here comes my first question: Are those "multiples" of the generators (<span class="math-container">$u$</span> and <span class="math-container">$v$</span>) all possible integers? Or does this apply only to a specific amount of predefined integers? </p> <p>And now comes the proof in question. I added the questions in parenthesis where I had problems following: </p> <p>The lemma states that for <span class="math-container">$a, b \in \mathbb{Z}$</span> (not both 0), <span class="math-container">$ \exists d \in \mathbb{Z}: (a,b) = (d) $</span>. This means in my understanding that every ideal in the integers, no matter how many integers were used to generate it, can be generated only by a single integer. </p> <p><em>Proof</em>: The set <span class="math-container">$(a,b)$</span> must contain some positive numbers (why? The definition of the ideal doesn't state that). By the well-ordering principle, we know that those positive numbers must have a smallest positive number. Let <span class="math-container">$d$</span> be that number. Because <span class="math-container">$d \in (a,b)$</span>, every multiple of <span class="math-container">$d$</span> must also be in <span class="math-container">$(a,b) $</span> (why? Is there any definition or lemma or theorem that states that?). Therefore, we have <span class="math-container">$(d) \subseteq (a,b)$</span>. And now to prove the other side <span class="math-container">$\supseteq$</span>: For any <span class="math-container">$c \in (a,b) $</span> , <span class="math-container">$\exists q,r $</span> (are those elements of the integers or of the set <span class="math-container">$(a,b)$</span>? and do any restrictions apply to <span class="math-container">$q$</span>?) where <span class="math-container">$0 \leq r &lt; d$</span> such that <span class="math-container">$c = qd + r$</span> (as far as my understanding goes, this comes from the fact that any integer can be divided by another integer yielding a remainder). Since both <span class="math-container">$c$</span> and <span class="math-container">$d$</span> are in <span class="math-container">$(a,b)$</span>, so is <span class="math-container">$r=c−qd$</span> . Since <span class="math-container">$0≤r&lt;d$</span> and <span class="math-container">$d$</span> is (by assumption) the smallest positive element in <span class="math-container">$(a, b)$</span>, we must have <span class="math-container">$r = 0$</span>. Thus <span class="math-container">$ c = qd ∈ (d)$</span> (how did we conclude that last step?). </p> <p>Thank you for the clarifications. </p>
Bill Dubuque
242
<p>It seems you may be missing some arithmetical intuition on these ideas so we emphasize that below.</p> <blockquote> <p>For <span class="math-container">$a, b \in \mathbb{Z}$</span>, the ideal generated by <span class="math-container">$a$</span> is the set <span class="math-container">$ (a) := \{ua : u \in \mathbb{Z}\} $</span> while the ideal generated by <span class="math-container">$a$</span> and <span class="math-container">$b$</span> is the set <span class="math-container">$(a, b) := \{ua + vb : u,v \in \mathbb{Z}\}$</span>. Here comes my first question: Are those "multiples" of the generators (<span class="math-container">$u$</span> and <span class="math-container">$v$</span>) all possible integers? Or does this apply only to a specific amount of predefined integers? </p> </blockquote> <p>Yes, it means <em>for all</em> <span class="math-container">$\,u,v\in R = \Bbb Z.\,$</span> Ideals in <span class="math-container">$R$</span> generalize the set of <em>all multiples</em> of an element, or <em>all common multiples</em> of a set of elements. Such sets are closed under addition and also under multiplication by <em>all</em> elements of <span class="math-container">$R$</span>, e.g. if <span class="math-container">$\,a,b\,$</span> are common multiples of <span class="math-container">$\,c,d\,$</span> then so too are <span class="math-container">$\,ua+vb\,$</span> for <em>all</em> <span class="math-container">$\,u,v\in R.\,$</span> In particular, if an ideal <span class="math-container">$I$</span> contains <span class="math-container">$d$</span> then it contains <em>all</em> multiples of <span class="math-container">$d$</span>, therefore <span class="math-container">$\, d\in I\iff (d)\subseteq I$</span>. Our goal below is to show that every ideal <span class="math-container">$\,I\subseteq Z\,$</span> has this form, i.e. it is the set of (common) multiples of a <em>single</em> (vs. multiple) element(s). </p> <p>We do so by observing that ideals are further closed under <em>remainder</em> (mod), which yields a <em>descent</em>: given <span class="math-container">$\,d &lt; c\in I\,$</span> if <span class="math-container">$\,c\,$</span> isn't divisible by <span class="math-container">$\,d\,$</span> then it leaves a <em>nonzero</em> remainder <span class="math-container">$\,c\bmod d = c-qd\in I$</span> which is <em>smaller</em> then <span class="math-container">$d$</span>. Iterating this yields a descending sequence of positive elements eventually terminating in the least positive <span class="math-container">$d\in I,\,$</span> which must divide every <span class="math-container">$\,c\in I,\,$</span> else <span class="math-container">$\,c\bmod d\,$</span> would be smaller than <span class="math-container">$d$</span>. Let's consider your questions with these ideas in mind.</p> <blockquote> <p>The lemma states that for <span class="math-container">$a, b \in \mathbb{Z}$</span> (not both 0), <span class="math-container">$ \exists d \in \mathbb{Z}: (a,b) = (d) $</span>. This means in my understanding that every ideal in the integers, no matter how many integers were used to generate it, can be generated only by a single integer. </p> </blockquote> <p>The lemma only claims the case for ideals <span class="math-container">$\,(a,b)$</span>, but the proof works for any ideal <span class="math-container">$(0)\neq I\subseteq \Bbb Z$</span>.</p> <blockquote> <p><em>Proof</em>: The set <span class="math-container">$(a,b)$</span> must contain some positive numbers (why? 
The definition of the ideal doesn't state that). </p> </blockquote> <p>By hypothesis <span class="math-container">$I $</span> contains an element <span class="math-container">$\,i\neq 0,\,$</span> thus <span class="math-container">$\,i\,$</span> or <span class="math-container">$(-1)i\,$</span> is positive, and <span class="math-container">$(-1)i\in I$</span> since <span class="math-container">$I$</span> contains <em>all</em> multiples of <span class="math-container">$\,i$</span>. </p> <blockquote> <p>By the well-ordering principle, we know that those positive numbers must have a smallest positive number. Let <span class="math-container">$d$</span> be that number. Because <span class="math-container">$d \in (a,b)$</span>, every multiple of <span class="math-container">$d$</span> must also be in <span class="math-container">$(a,b) $</span> (why? Is there any definition or lemma or theorem that states that?). Therefore, we have <span class="math-container">$(d) \subseteq (a,b)$</span>.</p> </blockquote> <p>Because, again <span class="math-container">$\,d\in I\,\Rightarrow\, (d)\subset I,\,$</span> i.e. <span class="math-container">$I$</span> contains <em>all</em> multiples of <span class="math-container">$d$</span>.</p> <blockquote> <p>And now to prove the other side <span class="math-container">$\supseteq$</span>: For any <span class="math-container">$c \in (a,b) $</span> , <span class="math-container">$\exists q,r $</span> (are those elements of the integers or of the set <span class="math-container">$(a,b)$</span>? and do any restrictions apply to <span class="math-container">$q$</span>?) where <span class="math-container">$0 \leq r &lt; d$</span> such that <span class="math-container">$c = qd + r$</span> (as far as my understanding goes, this comes from the fact that any integer can be divided by another integer yielding a remainder). </p> </blockquote> <p>Yes, we apply the (Euclidean) integer division algorithm to divide <span class="math-container">$\,c\,$</span> by <span class="math-container">$\,d,\,$</span> with remainder <span class="math-container">$\,r$</span>.</p> <blockquote> <p>Since both <span class="math-container">$c$</span> and <span class="math-container">$d$</span> are in <span class="math-container">$(a,b)$</span>, so is <span class="math-container">$r=c−qd$</span> . Since <span class="math-container">$0≤r&lt;d$</span> and <span class="math-container">$d$</span> is (by assumption) the smallest positive element in <span class="math-container">$(a, b)$</span>, we must have <span class="math-container">$r = 0$</span>. Thus <span class="math-container">$ c = qd ∈ (d)$</span> (how did we conclude that last step?). </p> </blockquote> <p>Since <span class="math-container">$\,d\in I\,$</span> so it its multiple <span class="math-container">$\,-qd,\,$</span> so <span class="math-container">$\,c\in I\,\Rightarrow\, c-qd\in I\,$</span> since ideals are closed under addition.</p> <p>The remainder <span class="math-container">$ r &lt; d$</span> can't be positive since that contradicts the definition of <span class="math-container">$d$</span> as the least positive integer in <span class="math-container">$I.\,$</span> Thus <span class="math-container">$\, 0 = r\, [= c-qd\,]\,$</span> so <span class="math-container">$\,c = qd,\,$</span> i.e. every <span class="math-container">$\,c\in I\, $</span> is a multiple <span class="math-container">$\,d,\,$</span> so <span class="math-container">$\,I\subseteq (d)$</span>. 
Combined with the above we have <span class="math-container">$\,(d)\subseteq I \subseteq (d)\,$</span> so <span class="math-container">$\,I = (d)$</span>.</p> <p><strong>Remark</strong> <span class="math-container">$ $</span> In fact a nonzero subset <span class="math-container">$I$</span> of <span class="math-container">$\,\Bbb Z\,$</span> is an ideal <span class="math-container">$\iff I$</span> is closed under subtraction, and we can use this to simplify the proof, e.g. <a href="https://math.stackexchange.com/a/203450/242">see here</a> for this viewpoint and further conceptual insight.</p>
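<p><strong>Worked example</strong> <span class="math-container">$ $</span> (my own illustration of the descent, not part of the argument above): take $\,I = (12,20)$. Then $\,20-12 = 8\in I\,$ and $\,12-8 = 4\in I$. Since $\,4\,$ divides both $\,12\,$ and $\,20,\,$ it divides every element $\,12u+20v\in I,\,$ so no smaller positive element can occur and $\,(12,20) = (4)$. Indeed $\,4 = \gcd(12,20) = 12\cdot 2 + 20\cdot(-1),\,$ matching the general fact that the single generator is the gcd of the original generators.</p>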
69,472
<blockquote> <p><strong>Theorem 1</strong><br> If $g \in C[a,b]$ and $g(x) \in [a,b] \forall x \in [a,b]$, then $g$ has a fixed point in $[a,b].$<br> If in addition, $g&#39;(x)$ exists on $(a,b)$ and a positive constant $k &lt; 1$ exists with $$|g&#39;(x)| \leq k, \text{ for all } x \in (a, b)$$ then the fixed point in $[a,b]$ is unique. </p> <p><strong>Fixed-point Theorem</strong> Let $g \in C[a,b]$ be such that $g(x) \in [a,b]$, for all $x$ in $[a,b]$. Suppose, in addition, that $g&#39;$ exists on $(a,b)$ and that a constant $0 &lt; k &lt; 1$ exists with $$|g&#39;(x)| \leq k, \text{ for all } x \in (a, b)$$ Then, for any number $p_0$ in $[a,b]$, the sequence defined by $$p_n = g(p_{n-1}), n \geq 1$$ converges to the unique fixed point in $[a,b]$.</p> </blockquote> <p>These are two theorems that I have learned, and I'm having a hard time with this problem:</p> <blockquote> <p>Given a function $f(x)$, how can we find the interval $[a,b]$ on which fixed-point iteration will converge?</p> </blockquote> <p>Besides guess and check, I couldn't find any other way to solve this problem. I tried to link the above theorems, but it involves two variables, so I have a feeling it can't be solved algebraically. I wonder if there is a general way to find the interval of convergence rather than trial and error? Thank you.</p>
J. M. ain't a mathematician
498
<blockquote> <p>Is there a general approach to find a function to approximate a constant?</p> </blockquote> <p>For the square root case, sure. Bahman Kalantari has found <a href="http://dx.doi.org/10.1016/S0377-0427%2897%2900014-9" rel="nofollow">a "basic family" of iteration functions</a> for finding $\sqrt c$. They take the form</p> <p>$$x_{k+1}=x_k-(x^2-c)\frac{\Delta_{m-2}(x_k)}{\Delta_{m-1}(x_k)}$$</p> <p>where</p> <p>$$\Delta_m(x)=\begin{vmatrix}2x&amp;1&amp;&amp;\\x^2-c&amp;\ddots&amp;\ddots&amp;\\&amp;\ddots&amp;&amp;1\\&amp;&amp;x^2-c&amp;2x\end{vmatrix}$$</p> <p>is an $m\times m$ tridiagonal Toeplitz determinant, and $\Delta_0(x)=1$. $m$ here is the order of convergence of the iteration formula. $m=2$ for instance is the usual Newton-Raphson iteration (quadratic convergence), and $m=3$ is the <a href="http://mathworld.wolfram.com/HalleysMethod.html" rel="nofollow">Halley iteration</a> (cubic convergence).</p> <p>(Allow me two asides. First, one can evaluate $\Delta_m$ through a three-term recursion relation. Second, through a special property of Toeplitz determinants, it turns out that $\Delta_m(x)$ admits a (not too practical in this case) closed form: $\Delta_m(x)=(x^2-c)^{m/2}U_m\left(\dfrac{x}{\sqrt{x^2-c}}\right)$, where $U_m(x)$ is the <a href="http://functions.wolfram.com/Polynomials/ChebyshevU" rel="nofollow">Chebyshev polynomial of the second kind</a>.)</p> <p>Here are the first few members of the "basic family":</p> <p>$$\begin{array}{c|l}m&amp;\\\hline 2&amp;x-\frac{x^2-c}{2x}\\3&amp;x-(x^2-c)\frac{2x}{3x^2+c}\\4&amp;x-(x^2-c)\frac{(3x^2+c)}{4x(x^2+c)}\\5&amp;x-(x^2-c)\frac{4x(x^2+c)}{5x^4+10cx^2+c^2}\end{array}$$</p> <p>See Kalantari's paper for the general formula when the function $x^2-c$ is replaced by some other function $f(x)$ whose root is sought.</p> <hr> <p>This looks all rather theoretical and unwieldy, you might say. I'll put in a few <em>Mathematica</em> demonstrations for the first few members of the "basic family".</p> <p>Here's the definition for $\Delta_n(x)$ (here called <code>d[n]</code>):</p> <pre><code>d[0] = 1; d[1] = 2 x; d[n_Integer] := Det[SparseArray[{Band[{2, 1}] -&gt; x^2 - c, Band[{1, 1}] -&gt; 2x, Band[{1, 2}] -&gt; 1}, {n, n}]] </code></pre> <p>Generate the first few members of the "basic family":</p> <pre><code>Table[x - (x^2 - c) Simplify[d[n]/d[n + 1]], {n, 0, 4}] {x - (-c + x^2)/(2*x), x - (2*x*(-c + x^2))/(c + 3*x^2), x - ((-c + x^2)*(c + 3*x^2))/(4*c*x + 4*x^3), x - (4*x*(-c + x^2)*(c + x^2))/(c^2 + 10*c*x^2 + 5*x^4), x - ((-c + x^2)*(c^2 + 10*c*x^2 + 5*x^4))/(6*c^2*x + 20*c*x^3 + 6*x^5)} </code></pre> <p>Here's a demonstration of the Newton-Raphson iteration for approximating $\sqrt{10}$:</p> <pre><code>With[{c = 10}, FixedPointList[Function[x, x - (-c + x^2)/(2*x)], N[5, 100]] - Sqrt[c]]; N[#2/#1 &amp; @@@ Partition[Log[%], 2, 1], 20] {-1.7838671038748854863, 3.7925879490759233441, 2.4492570735611599421, 2.1829174906843319475, 2.0837943631820556414, 2.0402123955504591656, 2.01970990649706825, Indeterminate, Indeterminate} </code></pre> <p>Note that successive ratios of the logarithms of the error approach the value $2$, demonstrating the quadratic convergence. 
(The <code>Indeterminate</code>s in the last two slots come up because the error was effectively zero at that point.)</p> <p>For comparison, here is the iteration with <em>quartic</em> (fourth-order) convergence:</p> <pre><code>With[{c = 10}, FixedPointList[Function[x, x - ((-c + x^2)* (c + 3*x^2))/(4*c*x + 4*x^3)], N[5, 1000]] - Sqrt[10]]; N[#2/#1 &amp; @@@ Partition[Log[%], 2, 1], 20] {-6.7654728809088590549, 5.3465261050589774854, 4.2513830895422052676, 4.0591297194912047384, 4.0145670928443785744, Indeterminate, Indeterminate} </code></pre> <p>Here the convergence was a bit too fast, but the approach to the value $4$ can still be seen.</p>
69,472
<blockquote> <p><strong>Theorem 1</strong><br> If $g \in C[a,b]$ and $g(x) \in [a,b] \forall x \in [a,b]$, then $g$ has a fixed point in $[a,b].$<br> If in addition, $g&#39;(x)$ exists on $(a,b)$ and a positive constant $k &lt; 1$ exists with $$|g&#39;(x)| \leq k, \text{ for all } x \in (a, b)$$ then the fixed point in $[a,b]$ is unique. </p> <p><strong>Fixed-point Theorem</strong> Let $g \in C[a,b]$ be such that $g(x) \in [a,b]$, for all $x$ in $[a,b]$. Suppose, in addition, that $g&#39;$ exists on $(a,b)$ and that a constant $0 &lt; k &lt; 1$ exists with $$|g&#39;(x)| \leq k, \text{ for all } x \in (a, b)$$ Then, for any number $p_0$ in $[a,b]$, the sequence defined by $$p_n = g(p_{n-1}), n \geq 1$$ converges to the unique fixed point in $[a,b]$.</p> </blockquote> <p>These are two theorems that I have learned, and I'm having a hard time with this problem:</p> <blockquote> <p>Given a function $f(x)$, how can we find the interval $[a,b]$ on which fixed-point iteration will converge?</p> </blockquote> <p>Besides guess and check, I couldn't find any other way to solve this problem. I tried to link the above theorems, but it involves two variables, so I have a feeling it can't be solved algebraically. I wonder if there is a general way to find the interval of convergence rather than trial and error? Thank you.</p>
NoChance
15,180
<p>The link below shows to some extent the rationale behind the formula you have provided as well as others - See in particular the "Babylonian method"</p> <p><a href="http://en.wikipedia.org/wiki/Methods_of_computing_square_roots" rel="nofollow">Approximating square roots</a></p>
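<p>For concreteness, the "Babylonian method" described at that link is the iteration (sketched here from memory, so please see the link for the full derivation) $$x_{n+1}=\frac12\left(x_n+\frac{c}{x_n}\right),$$ which is just Newton's method applied to $x^2-c$. For example, with $c=2$ and $x_0=1$ it gives $x_1=1.5$, $x_2\approx 1.41667$, $x_3\approx 1.414216$, already matching $\sqrt2\approx 1.414214$ to five decimal places.</p>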
1,008,610
<blockquote> <p>If <span class="math-container">$a$</span> is a group element, prove that <span class="math-container">$a$</span> and <span class="math-container">$a^{-1}$</span> have the same order.</p> </blockquote> <p>I tried doing this by contradiction.</p> <p>Assume <span class="math-container">$|a|\neq|a^{-1}|$</span>.</p> <p>Let <span class="math-container">$a^n=e$</span> for some <span class="math-container">$n\in \mathbb{Z}$</span> and <span class="math-container">$(a^{-1})^m=e$</span> for some <span class="math-container">$m\in \mathbb{Z}$</span>, and we can assume that <span class="math-container">$m &lt; n$</span>.</p> <p>Then <span class="math-container">$e= e*e = (a^n)((a^{-1})^m) = a^{n-m}$</span>. However, <span class="math-container">$a^{n-m}=e$</span> implies that <span class="math-container">$n$</span> is not the order of <span class="math-container">$a$</span>, which is a contradiction and <span class="math-container">$n=m$</span>.</p> <p>But I realized this doesn’t satisfy the condition if <span class="math-container">$a$</span> has infinite order. How do I prove that piece?</p>
Asinomás
33,907
<p>Suppose $a^n=e$; then $e=(aa^{-1})^n=a^n(a^{-1})^n=e(a^{-1})^n=(a^{-1})^n$.</p> <p>Suppose $(a^{-1})^n=e$; then $e=(aa^{-1})^n=a^n(a^{-1})^n=a^ne=a^n$.</p> <p>So, $a^n=e \iff (a^{-1})^n=e$.</p> <p>In particular, $a$ and $a^{-1}$ have exactly the same set of exponents $n$ for which the power is the identity: if that set contains no positive integer, both elements have infinite order, and otherwise both orders equal its least positive element. Either way, $|a|=|a^{-1}|$.</p>
4,242,133
<p>I was having trouble with the question:</p> <blockquote> <p>Prove that <span class="math-container">$$I:=\int_0^{\infty}\frac{\ln(1+x^2)}{x^2(1+x^2)}dx=\pi\ln\big(\frac e 2\big)$$</span></p> </blockquote> <p><strong>My Attempt</strong></p> <p>Perform partial fractions <span class="math-container">$$I=\int_0^{\infty}\frac{\ln(1+x^2)}{x^2(1+x^2)}dx=\int_0^{\infty}\frac{\ln(1+x^2)}{x^2}dx-\int_0^{\infty}\frac{\ln(1+x^2)}{1+x^2}dx=$$</span> First integral <span class="math-container">$$\int_0^{\infty}\frac{\ln(1+x^2)}{x^2}dx=-\Bigg[\frac{\ln(x^2+1)}{x}\Bigg]_0^{\infty}+\int_0^{\infty}\frac{2}{x^2+1}=\pi$$</span> Second integral <span class="math-container">$$\int_0^{\infty}\frac{\ln(1+x^2)}{1+x^2}dx=$$</span> How do you solve this integral? Thank you for your time</p>
Ilovemath
956,199
<p>Let <span class="math-container">$$\int_0^{\infty} \frac{\log(1+x^2)}{1+x^2}dx= J.$$</span></p> <p>Substituting <span class="math-container">$x=\tan\theta$</span>, so that <span class="math-container">$dx=\sec^2\theta\,d\theta$</span> and <span class="math-container">$1+x^2=\sec^2\theta$</span>,</p> <p><span class="math-container">$$J=\int_0^{\frac{\pi}{2}} \log(\sec^2\theta)\,d\theta=-2\int_0^{\frac{\pi}{2}} \log\cos\theta\,d\theta.$$</span></p> <p>Hence, <span class="math-container">$$J=\pi\log 2.$$</span></p> <p><img src="https://i.stack.imgur.com/LHDlj.png" alt="Explanation of integral here" /></p>
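<p>In case the picture is not accessible, the standard symmetry argument behind the last step goes roughly as follows (a well-known computation, added only for completeness). Write $K=\int_0^{\pi/2}\log\cos\theta\,d\theta$; the substitution $\theta\mapsto\frac{\pi}{2}-\theta$ shows that also $K=\int_0^{\pi/2}\log\sin\theta\,d\theta$, so $$2K=\int_0^{\pi/2}\log(\sin\theta\cos\theta)\,d\theta=\int_0^{\pi/2}\log\frac{\sin 2\theta}{2}\,d\theta=\int_0^{\pi/2}\log\sin 2\theta\,d\theta-\frac{\pi}{2}\log 2 .$$ Substituting $u=2\theta$ gives $\int_0^{\pi/2}\log\sin 2\theta\,d\theta=\tfrac12\int_0^{\pi}\log\sin u\,du=K$, hence $K=-\tfrac{\pi}{2}\log 2$ and therefore $J=-2K=\pi\log 2$.</p>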
777,691
<p>I would like to prove if $a \mid n$ and $b \mid n$ then $a \cdot b \mid n$ for $\forall n \ge a \cdot b$ where $a, b, n \in \mathbb{Z}$</p> <p>I'm stuck.<br> $n = a \cdot k_1$<br> $n = b \cdot k_2$<br> $\therefore a \cdot k_1 = b \cdot k_2$</p> <p>EDIT: so for <a href="http://en.wikipedia.org/wiki/Fizz_buzz" rel="nofollow">fizzbuzz</a> it wouldn't make sense to check to see if a number is divisible by 15 to see if it's divisible by both 3 and 5?</p>
Eric Towers
123,905
<p>Updated: The edited version is still not true. At least two counterexamples in the comments below.</p> <p>Prior counterexample: This is not true. $12|36$ and $9|36$, but $12\cdot9 = 108 \not | 36$.</p>
777,691
<p>I would like to prove if $a \mid n$ and $b \mid n$ then $a \cdot b \mid n$ for $\forall n \ge a \cdot b$ where $a, b, n \in \mathbb{Z}$</p> <p>I'm stuck.<br> $n = a \cdot k_1$<br> $n = b \cdot k_2$<br> $\therefore a \cdot k_1 = b \cdot k_2$</p> <p>EDIT: so for <a href="http://en.wikipedia.org/wiki/Fizz_buzz" rel="nofollow">fizzbuzz</a> it wouldn't make sense to check to see if a number is divisible by 15 to see if it's divisible by both 3 and 5?</p>
Xelvonar
147,242
<p>This is false. For example, 3 | 30 and 6 | 30, but their product, 18, does not divide 30 even though $3 \times 6 &lt; 30$.</p>
1,067,762
<p>Is what I am doing below correct, please assist. </p> <p>$$\sum_{k=-\infty}^{-1}\frac{e^{kt}}{1-kt} = \sum_{k=1}^{\infty}\frac{e^{-{kt}}}{1-kt}$$ </p> <p>Is this the rule on how to "invert" the limits, and does it matter if there are imaginary numbers in the sum; or not or is it all the same with both pure real and pure imaginary summations?</p>
Ahaan S. Rungta
85,039
<p>Note that the statement is true if and only if $ \angle B + \angle C = 90^\circ $. Also, note that $$ \angle B + \angle C = \arctan \left( \sqrt{2} - 1 \right) + \arctan \left( \sqrt{2} + 1 \right). $$Now, we <a href="http://en.wikipedia.org/wiki/List_of_trigonometric_identities#Angle_sum_and_difference_identities" rel="nofollow">recall</a> that $ \arctan b + \arctan c = \arctan \left( \frac {b+c}{1-bc} \right) $. </p> <p>Using $b=\sqrt{2}-1$ and $c=\sqrt{2}+1$, can you finish from here? </p>
929,243
<p>Hey guys I just need help solving this solution here. Sorry if I didn't type the symbols correctly.</p> <p>My solution so far: $$ (¬p \vee ¬(p\wedge¬q)) \wedge (¬p \vee ¬q)≡ (¬p \vee (¬p \vee q)) \wedge (¬p \vee ¬q)≡ $$ at this point I'm stuck. Is there any way I can take care of the not-$p$ $\vee$ not-$p$?</p> <p>Thanks</p>
zeta
88,778
<p>\begin{align*} &amp;(\lnot p \lor (\lnot p \lor q)) \land (\lnot p \lor \lnot q) \\ &amp;\equiv(\lnot p \lor q) \land (\lnot p \lor \lnot q) &amp;&amp; \text{(associativity and idempotence)}\\ &amp;\equiv\lnot p \lor(q \land \lnot q) &amp;&amp; \text{(distributivity)}\\ &amp;\equiv\lnot p \lor \mathrm{F} &amp;&amp; \text{(negation law)}\\ &amp;\equiv\lnot p &amp;&amp; \text{(identity law)} \end{align*}</p>
1,387,454
<p>What is the sum of all <strong>non-real</strong>, <strong>complex roots</strong> of this equation -</p> <p>$$x^5 = 1024$$</p> <p>Also, please provide explanation about how to find sum all of non real, complex roots of any $n$ degree polynomial. Is there any way to determine number of real and non-real roots of an equation?</p> <hr> <p>Please not that I'm a high school freshman (grade 9). So please provide simple explanation. Thanks in advance!</p>
juantheron
14,311
<p>$\bf{My\; Solution:}$ Given $$x^5 = 1024 = 2^{10} = 4^5\Rightarrow x^5-4^5 = 0$$</p> <p>So $$(x^5-4^5) = (x-4)\cdot (x^4+x^3\cdot 4+x^2\cdot 4^2+x\cdot 4^3+4^4) = 0$$</p> <p>Above we use the formula $\displaystyle x^n-a^n = (x-a)\cdot (x^{n-1}+x^{n-2}\cdot a+x^{n-3}\cdot a^2+\cdots+a^{n-1})$</p> <p>So We Get $x=4\;(\bf{Real \; Root})$ and $$x^4+x^3\cdot 4+x^2\cdot 4^2+x\cdot 4^3+4^4=0$$</p> <p>So Sum of Roots in $\bf{Second}$ Equation is $\displaystyle = -\frac{4}{1} = -4$ </p> <p>Above we use the Formula $\displaystyle \bf{Sum\; of \; Roots} = -\frac{\bf{coeff.\; of \; x^3}}{\bf{coeff.\; of \; x^4}}$</p>
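<p>A quick cross-check, which also answers the general question (this remark is mine): for a polynomial $a_nx^n+a_{n-1}x^{n-1}+\cdots+a_0$, Vieta's formulas give the sum of <em>all</em> roots as $-a_{n-1}/a_n$. Here $x^5-1024$ has no $x^4$ term, so all five roots sum to $0$; subtracting the single real root $4$ leaves $$0-4=-4$$ for the sum of the non-real roots, in agreement with the computation above. In general, one computes $-a_{n-1}/a_n$ and subtracts the real roots.</p>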
1,591,278
<p>$$f(x) =\frac{ (x - 1) }{(x^2 - 1)}$$ $$g(x) = \frac{1}{(x + 1)}$$</p> <p>Is $f = g$? why?</p> <p>Solution: answer is no. I don't get it. why can't it be same when it is the alt form?? Can anyone explain this further...</p> <p>additional questions for further clarification: $$f(x) = \lim_{x\to 1}\frac{ (x^2 - 1) }{(x-1)} = infinite $$ $$f(x) = \lim_{x\to 1}\frac{ (x - 1)(x + 1) }{(x-1)} $$ $$f(x) = \lim_{x\to 1}(x+1)$$ $$ 1+1=2$$ How the function be justified that the limit can be apply to its alt form of a function which is not the same.</p>
pancini
252,495
<p>What they don't tell you until later in life is that a function is TWO things, a rule and a domain (plus the codomain).</p> <p>That said, we often don't specify the domain since it is usually obvious. In the first function, for example, the domain can by any real number EXCEPT when the denominator is zero. Thus, the domain is the real line minus $1$ and $-1$.</p> <p>The second function LOOKS the same, but the domain is the real number line minus $-1$.</p> <p>It is a small difference, but since the domain is part of what defines the function, we say that the two functions are not the same.</p> <p>However, we can of course write that $f=g$ for all $x\neq 1$.</p>
3,921,252
<p>Let <span class="math-container">$M$</span> be a connected topological <span class="math-container">$n$</span>-manifold. Note this implies <span class="math-container">$M$</span> is path-connected.</p> <p>Let <span class="math-container">$a,b \in M$</span>. Must there always exist a continuous <span class="math-container">$H:M \times [0,1] \to M$</span> such that <span class="math-container">$H(m,0)=m$</span> for all <span class="math-container">$m \in M$</span>, and <span class="math-container">$H(a,1)=b$</span>?</p> <p>We can note that this is true for <span class="math-container">$\mathbb{R}^n$</span>, by considering <span class="math-container">$(x,t) \mapsto x+t(b-a)$</span>, and it is also true for <span class="math-container">$S^1$</span> by considering <span class="math-container">$(e^{i\theta}, t)\mapsto e^{i(\theta+t(\theta_b-\theta_a))}$</span> where <span class="math-container">$a=e^{i\theta_{a}}, b=e^{i\theta_b}$</span>.</p>
user3482749
226,174
<p>For manifolds with boundary: no, you can't take boundary points to non-boundary points.</p> <p>For manifolds without boundary: yes. For 1-manifolds, this is easy. For <span class="math-container">$n &gt; 1$</span>, take a neighbourhood of a (shortest) path between your points. After removing problems (by taking a subset), that's homeomorphic to a disk, with both points in the interior. Take any homeomorphism of the disk that interchanges the points while fixing the boundary, and pass it through the homeomorphism to the neighbourhood of the path.</p>
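<p>For readers who would like a concrete model of the last step, here is one standard sketch (my own addition; the details deserve checking). On the closed unit disk $D^n$, for an interior point $p$ with $|p|&lt;1$, set $$H(x,t)=x+t\,(1-|x|)\,p .$$ Each map $H(\cdot,t)$ sends $D^n$ into itself and fixes the boundary sphere, and it is injective: $H(x,t)=H(y,t)$ forces $x-y=t(|x|-|y|)p$, so $|x-y|\le t|p|\,\big||x|-|y|\big|\le t|p|\,|x-y|&lt;|x-y|$ unless $x=y$. One then checks (e.g. via invariance of domain) that it is onto, hence a homeomorphism of $D^n$ fixing $\partial D^n$, with $H(x,0)=x$ and $H(0,1)=p$. Transporting this through a chart and extending by the identity outside gives a map of the kind the question asks for whenever $a$ and $b$ lie in a common chart; the general case follows by composing finitely many such moves along a chain of charts covering a path from $a$ to $b$.</p>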
1,063,599
<p>There is a deck made of $81$ different cards. On each card there are $4$ seeds, and each seed can have $3$ different colors, hence generating the $ 3\cdot3\cdot3\cdot3 = 81 $ cards in the deck. A tern (a set of three cards) is a winning one if, for every seed, the corresponding colors on the three cards are either all the same or all different.</p> <p>-1 How many winning terns are there in the deck?</p> <p>-2 Show that $4$ cards can contain at most one winning tern.</p> <p>-3 Draw $4$ cards from the deck. What is the probability that there is a winning tern?</p> <p>-4 We want to delete $4$ cards from the deck to get rid of as many terns as possible. How should we choose the $4$ cards?</p> <p>I have no official solution for this problem and I don't know where to start. Rather than a complete solution, I would appreciate some hints to give me a starting point for thinking about a solution.</p>
Brian M. Scott
12,042
<p>HINT: The problem is equivalent to the following one. Let $D=\{0,1,2\}^4$, the set of $4$-tuples of numbers from the set $\{0,1,2\}$. Each member of $D$ corresponds to a card; the four components of the $4$-tuple are the seeds; and $0,1$, and $2$ are the three ‘colors’. A tern is a set of three members of $D$.</p> <ul> <li>Prove that a tern $\{\langle a_1,a_2,a_3,a_4\rangle,\langle b_1,b_2,b_3,b_4\rangle,\langle c_1,c_2,c_3,c_4\rangle\}$ is winning if and only if $a_k+b_k+c_k\equiv 0\pmod3$ for $k=1,2,3,4$. If we think of the members of $D$ as vectors, then a tern $\{a,b,c\}$ is winning if and only if $a+b+c=\underline 0=\langle 0,0,0,0\rangle$, where the addition in each component is modulo $3$.</li> </ul> <p>I think that this version is a little easier to work with.</p> <ol> <li><p>To count the winning terns, show that if $a,b\in D$ and $a\ne b$, then there is exactly one $c\in D$ such that $a+b+c=\underline 0$. How many pairs of distinct elements can we choose from $D$? How many different pairs of distinct elements produce the same winning tern?</p></li> <li><p>This is a straightforward consequence of the first sentence in (1).</p></li> <li><p>Imagine drawing the $4$ cards one at a time. The first two can be anything. What’s the probability that the third doesn’t make a winning tern with the first two? Assuming that the first three cards aren’t a winning tern, what is the probability that the fourth does not complete a winning tern? It may be helpful to show that if $a,b$, and $c$ are distinct members of $D$, then the sums $a+b,a+c$, and $b+c$ are distinct as well.</p></li> <li><p>If $T_a$ is the set of winning terns containing the card $a\in D$, how many terns are in $T_a\cap T_b$ when $a\ne b$? Is $T_a\cup T_b\cup T_c$ bigger when $\{a,b,c\}$ is a winning tern, or when it’s a non-winning tern? Playing with these ideas should help you see how you need to choose the four deleted cards $a,b,c,d$ in order to make $T_a\cup T_b\cup T_c\cup T_d$ as large as possible.</p></li> </ol>
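<p>(In case it helps to check the arithmetic in step 1 once you have worked it out, here is my own sanity check; skip it if you prefer to finish it yourself: there are $\binom{81}{2}=3240$ pairs of distinct elements of $D$, each pair completes to exactly one winning tern, and each winning tern arises from $3$ of its pairs, giving $3240/3=1080$ winning terns.)</p>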
1,521,739
<p>The following is the Meyers-Serrin theorem and its proof in Evans's <em>Partial Differential Equations</em>:</p> <blockquote> <p><a href="https://i.stack.imgur.com/XnzXY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XnzXY.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/7woqQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7woqQ.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/pBIqX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pBIqX.png" alt="enter image description here"></a></p> </blockquote> <p>Could anyone explain where (for which $x\in U$) is the convolution in step 2 defined and how to get (3) from Theorem 1? </p> <blockquote> <p><a href="https://i.stack.imgur.com/NdZWL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NdZWL.png" alt="enter image description here"></a></p> </blockquote>
Brevan Ellefsen
269,764
<p>The transform equations to use for rotating any curve are $$ x' = (x \cos θ - y \sin θ)$$ $$y' =(x \sin θ + y \cos θ)$$ (These basically rotate the axes, but when we view them with static axes the graph rotates) </p> <hr> <p>In this case, we get $$x' = \frac{x+y}{\sqrt{2}}$$ $$y' = \frac{y-x}{\sqrt{2}}$$ $$$$$$$$ $$x^2+2xy+y^2+2\sqrt{2}x-2\sqrt{2}y+4=0$$ After the transformation we get: $$\Rightarrow \frac{1}{2}(x+y)^2 + (x+y)(y-x) + \frac{1}{2}(y-x)^2 + 2(x+y) - 2(y-x)+4=0$$ $$\Rightarrow x = -\frac{y^2}{2}-1$$</p> <p>This is just a horizontal parabola scaled down and shifted left by $1$; therefore, the vertex is at $(-1,0)$</p>
832,715
<blockquote> <p>Suppose that the distribution of a random variable $X$ is symmetric with respect to the point $x = 0$. If $\mathbb{E}(X^4)&gt;0$ then $Var(X)$ and $Var(X^2)$ are both positive.</p> </blockquote> <p>How is that true? I am getting $Var(X)=\mathbb{E}(X^2)$ and $Var(X^2)=\mathbb{E}(X^4)-(\mathbb{E}(X^2))^2$, but do not know why $\mathbb{E}(X^2)&gt;0$ &amp; $\mathbb{E}(X^4)&gt;(\mathbb{E}(X^2))^2.$</p>
M. Vinay
152,030
<p>We know that the variance of a random variable is $0$ if and only if it is constant (so $X = \text{E}(X)$).</p> <blockquote> <p>Proof:<br> If $X$ has constant value $c$, then $\text{E}(X) = c$ and $\text{E}(X^2) = \text{E}(c^2) = c^2$, $\Rightarrow \text{V}(X) = \text{E}(X^2) - \text{E}(X)^2 = 0$.<br> Conversely, suppose the variance, $\sigma^2 = 0$, and let the mean be $\mu$. Then by Chebyshev's inequality $\Pr[|X - \mu| \ge k] \le \dfrac{\sigma^2}{k^2} \le 0$, so that $|X - \mu| = 0 \Rightarrow X = \mu$.</p> </blockquote> <p>As $X$ is symmetric about $0$, the mean is $\text{E}(X) = 0$. But as $\text{E}(X^4) &gt; 0$, $X^4$ has <em>some</em> non-zero values, which implies that $X$ <strike>and $\not X^2$ also have</strike> has some non-zero values. Hence, $X$ <strike>and $\not X^2$ are</strike> is not constant. Thus, $\text{V}(X) &gt; 0$ <strike>and $\not{\text{V}(\not X^2)} \not &gt; \not 0$</strike>.</p> <p><em>Edit: Henry is right, this does not imply that $V(X^2) &gt; 0$</em>.</p>
145,046
<p>I'm a first year graduate student of mathematics and I have an important question. I like studying math and when I attend, a course I try to study in the best way possible, with different textbooks and moreover I try to understand the concepts rather than worry about the exams. Despite this, months after such an intense study, I forget inexorably most things that I have learned. For example if I study algebraic geometry, commutative algebra or differential geometry, In my minds remain only the main ideas at the end. Viceversa when I deal with arguments such as linear algebra, real analysis, abstract algebra or topology, so more simple subjects that I studied at first or at the second year I'm comfortable. So my question is: what should remain in the mind of a student after a one semester course? What is to learn and understand many demostrations if then one forgets them all?</p> <p>I'm sorry for my poor english.</p>
fretty
25,381
<p>I am in the same boat at the moment, I am partway through my first year as a postgrad and I feel I am forgetting many results that I used to know through lack of use/practice or full understanding.</p> <p>However, I think at this level maths is more about understanding the situation rather than studying the results. For example, I find myself thinking "so this is what person X is <strong>trying</strong> to do" and "so this is what the concepts capture" rather than "Wow, here is yet another nice theorem to remember".</p> <p>Of course studying results is important too but the bigger picture is more important I feel. You will always have access to enough material to jog your memory on any given topic. It is always easier to pick something up for the second time.</p>
1,232,036
<p>I have a midterm tomorrow, and while studying for that I saw this question, however don't have any idea how to solve it. (I could not come up with a legitimate proof. All I could do was, by putting some functions, approving what the problem claims.) I will appreciate if you can help.</p> <p>Suppose that $ h(t)$ is continuous and bounded on $(-\infty,\infty)$ and </p> <p>$$|h(t)| \leq M, \forall t\in (-\infty,\infty)$$</p> <p>Show that equation $$ y′(t) + y(t) = h(t) $$</p> <p>has one solution that is bounded on $(-\infty,\infty)$. Also, show that if $h(t)$ is a periodic function, then $y(t)$ is also periodic.</p> <p>Regards,</p> <p>Amadeus</p>
RTJ
223,807
<p>For $V=y^2$ we have $$\frac{dV}{dt}=2y(t)\frac{dy}{dt}(t)=-2y^2(t)+2y(t)h(t)\leq -y^2(t)+h^2(t)=-V(t)+h^2(t)$$ Multiplying by $e^t$ we equivalently have $$\frac{d}{dt}\left[V(t)e^t-\int_0^t{h^2(s)e^s ds}\right]\leq 0$$ If we now integrate over $[0,t]$ we obtain $$y^2(t)\leq y^2(0)e^{-t}+e^{-t}\int_0^t{e^sM^2ds}=y^2(0)e^{-t}+M^2(1-e^{-t})$$ i.e. $y$ is bounded in $[0,\infty)$. This shows that the solution is bounded in $[0,\infty)$ for all initial conditions. A bounded solution on the whole real line $(-\infty,\infty)$ is not possible for every initial condition at $t=0$. However as proved by Chappers in his answer there exists some bounded solution on the whole real line $(-\infty,\infty)$ if the initial condition is chosen appropriately. </p>
1,453,330
<p>I want to prove that this function is one-to-one</p> <p>$ f: \mathbb R \rightarrow \mathbb R$ , $$f(x) = \left\{\begin{array}{ll} 2x+1 &amp; : x \ge 0\\ x &amp; : x \lt 0 \end{array} \right.$$ And for this function</p> <p>$ f: \mathbb R \rightarrow \mathbb R$, $$f(x) = \left\{\begin{array}{ll} x^2 &amp; : x \gt0\\ mx^2 &amp; : x \le 0 \end{array} \right.$$ I want to find the values for m that this function is one-to-one</p> <p>I want the algebraic solution for both of the exercises.</p>
Maciej
241,823
<p><strong>First function</strong></p> <p>Notice that $f[\mathbb{R}_{\ge 0}]\cap f[\mathbb{R}_{&lt; 0}] = \emptyset$, so you need to show that $f|_{\mathbb{R}_{\ge 0}}$ and $f|_{\mathbb{R}_{&lt; 0}}$ are one-to-one.</p> <p><strong>Second function</strong></p> <p>Notice that if $m \ge 0$, then $f$ is not one-to-one. If $m=0$ this is obvious; if $m &gt; 0$, then $f(-1) = m = f(\left|\sqrt{m}\right|)$, and because $m &gt; 0$, $\left|\sqrt{m}\right|\in\mathbb{R}$. Of course, $\left|\sqrt{m}\right|\neq-1$, so $f$ is not a one-to-one function.</p> <p>If $m&lt;0$, notice that $f[\mathbb{R}_{\ge 0}]\cap f[\mathbb{R}_{&lt; 0}] = \emptyset$, because $f[\mathbb{R}_{\ge 0}] = \mathbb{R}_{\ge 0}$ and $f[\mathbb{R}_{&lt; 0}] = \mathbb{R}_{&lt; 0}$. So you need to prove that both $f|_{\mathbb{R}_{\ge 0}}$ and $f|_{\mathbb{R}_{&lt; 0}}$ are one-to-one.</p>
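<p>Since the question asks for the algebra explicitly, the check for the first function could be written out like this (my own elaboration of the hint above): if $x,y\ge 0$ and $f(x)=f(y)$, then $2x+1=2y+1$, so $x=y$; if $x,y&lt;0$, then $f(x)=x=y=f(y)$ directly. Finally the two ranges cannot meet, because $f(x)=2x+1\ge 1$ for $x\ge 0$ while $f(x)=x&lt;0$ for $x&lt;0$; this is exactly the statement $f[\mathbb{R}_{\ge 0}]\cap f[\mathbb{R}_{&lt; 0}]=\emptyset$. Together these give that $f$ is one-to-one on all of $\mathbb{R}$.</p>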
3,444,463
<p>Everyone knows that the shortest distance between two points is a straight line. How can we prove it mathematically? </p>
Matthew Leingang
2,785
<p>If <span class="math-container">$y$</span> is a <span class="math-container">$C^2$</span> function of <span class="math-container">$x$</span> and traces a curve between points <span class="math-container">$(a,c)$</span> and <span class="math-container">$(b,d)$</span>, then the length of the curve is <span class="math-container">$$ \ell = \int_a^b \sqrt{1+(y')^2}\,dx $$</span> This is the kind of functional that we can apply the <a href="https://en.wikipedia.org/wiki/Calculus_of_variations" rel="nofollow noreferrer">calculus of variations</a> to. According the Euler-Lagrange equations (see link), the quantity <span class="math-container">$\ell$</span> is minimized when <span class="math-container">$$ \frac{d}{dx} \frac{\partial}{\partial y'}\sqrt{1+(y')^2} = \frac{\partial}{\partial y}\sqrt{1+(y')^2} $$</span> Therefore, <span class="math-container">$$ \frac{d}{dx}\frac{y'}{\sqrt{1+(y')^2}} = 0 \implies \frac{y''}{(1+(y')^2)^{3/2}} = 0 \implies y'' = 0 $$</span> If <span class="math-container">$y'' = 0$</span>, the curve is a straight line.</p>
1,047,489
<p>let $f(x)$ be a function such that </p> <p>$$f(0) = 0$$</p> <p>$$f(1) =1$$ $$f(2) = 2$$ $$f(3) = 4$$ $$f'(x) \text{is differentiable on } \mathbb{R}$$ Prove that there is a number in the interval $(0,3)$ such that $0 &lt; f''(x)&lt;1$</p> <p>I'm really stuck. thanks</p>
FundThmCalculus
153,550
<p>I would suggest as a first attempt, curve fitting a polynomial. Let's fit to a cubic, since the general cubic equation has four parameters (and you have four known points). $$f(x) = ax^3+bx^2+cx+d$$ $$f(0) = a*0^3+b*0^2+c*0+d = 0 \rightarrow d = 0$$ $$f(1) = a*1^3+b*1^2+c*1+d = 1 \rightarrow a+b+c=1$$ $$f(2) = a*2^3+b*2^2+c*2+d = 2 \rightarrow 8a+4b+2c = 2$$ $$f(3) = a*3^3+b*3^2+c*3+d = 4 \rightarrow 27a+9b+3c = 4$$ Solving the last three equations simultaneously gives us: $$a=\frac{1}{6}, b=-\frac{1}{2}, c = \frac{4}{3}$$ Therefore, the actual cubic equation that will pass through all these points is: $$f(x)=\frac{x^3}{6}-\frac{x^2}{2}+\frac{4x}{3}$$ Since we have the actual function, let's take the derivative: $$f'(x)=\frac{x^2}{2}-x+\frac{4}{3}$$ $$f''(x)=x-1$$ Since we have the second derivative being linear, we can figure out where the point $f''(x)=0.5$ is. If that point lies inside $(0,3)$, then we have proven the claim for this function. $$f''(x)=x-1=0.5 \rightarrow x=\frac{3}{2}$$ This is inside the interval $(0,3)$, so we are good.</p> <p>Alternatively, we could argue directly from continuity and say: $$f''(0)=-1$$ $$f''(3)=2$$ Since the second derivative is continuous (a straight line), by the Intermediate Value Theorem there exists a value in $(0,3)$ such that $0&lt;f''(x)&lt;1$.</p>
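<p>If Mathematica happens to be at hand, the interpolation step can be double-checked in one line (just my own verification of the algebra above; the commented output is what I expect it to return):</p> <pre><code>InterpolatingPolynomial[{{0, 0}, {1, 1}, {2, 2}, {3, 4}}, x] // Expand (* (4 x)/3 - x^2/2 + x^3/6 *) </code></pre>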
25,413
<p>Background - I am tutoring a second year college sophomore for a class titled Single Variable Calculus, and whose curriculum looks to be similar to the AB calculus I tutor in my High School.</p> <p>We are on limits and L’Hôpital’s Rule, and I see this among the questions (note, all the worksheet questions are meant to be solved via L-H rule) - The instruction is</p> <p>&quot;Evaluate the following using L’Hôpital’s Rule&quot;</p> <p><span class="math-container">$$\lim_{x\to 0}\frac{\sin x}x= $$</span></p> <p>I recall, when subbing for a calc teacher, that this is a classic example of the use of the &quot;squeeze theorem&quot; aka &quot;sandwich theorem&quot;. Once it's proven, we'd go on to different arguments of Sine, practice a bit, then move on. It's introduced prior to L-H rule.</p> <p>Given the fast pace of my student self-studying and effort of remote teaching, I'm inclined to ignore this, and move on. My question is whether skipping over Squeeze Theorem is doing her a disservice, and should I (forgive the pun) squeeze it into our next session? At my HS, students have told me it feels like it's introduced, practiced for a few problems, but never seeing again.</p>
Steven Gubkin
117
<p>The very limit you have chosen as an example shows the necessity of the squeeze theorem.</p> <p>When deriving the fact that the derivative of sine is cosine, you first use the angle sum formula for sine, which then reduces the computation to the derivative of sine at 0. This is the limit you have chosen as an example.</p> <p>So you really have no choice: to convince someone that the derivative of sine is cosine, you need the squeeze theorem. You can, of course, just skip the justification but that defeats the point of a mathematics class for me.</p>
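<p>For reference, the reduction mentioned above, written out: using the angle sum formula, <span class="math-container">$$\frac{d}{dx}\sin x=\lim_{h\to 0}\frac{\sin(x+h)-\sin x}{h}=\lim_{h\to 0}\frac{\sin x\cos h+\cos x\sin h-\sin x}{h}=\sin x\lim_{h\to 0}\frac{\cos h-1}{h}+\cos x\lim_{h\to 0}\frac{\sin h}{h},$$</span> so everything hinges on the two limits at <span class="math-container">$0$</span>, and <span class="math-container">$\lim_{h\to 0}\frac{\sin h}{h}=1$</span> is exactly the limit on the worksheet, which is where the squeeze theorem comes in.</p>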
4,294,714
<p>The problem is as follows:</p> <p>You have an unlimited number of marbles and each marble is one of 16 different colours. You have to choose 6 marbles and order is irrelevant. How many different combinations of 6 marbles are there?</p>
gnasher729
137,175
<p>To meet the condition that A, B and C leave at different stations: A can leave at 7 stations, B at 6 and C at 5.</p> <p>There are 6 possible orderings, but only one is allowed. Therefore (7·6·5)/7^3/6 = 5/49.</p>
4,031,633
<p>Let <span class="math-container">$R$</span> be a commutative ring and <span class="math-container">$M$</span> is an <span class="math-container">$R$</span>-module. The following statement is well-known.</p> <p>If <span class="math-container">$M$</span> is finitely presented flat module, <span class="math-container">$M/J(R)M$</span> is free over <span class="math-container">$R/J(R)$</span> then <span class="math-container">$M$</span> is free.</p> <p>Here <span class="math-container">$J(R)$</span> is the (Jacobson) radical of <span class="math-container">$R$</span>. Is the statement true for finite <span class="math-container">$M$</span>? I expect that it is not, but was not able to construct a counter-examples. Clearly, in a counter-example the ring <span class="math-container">$R$</span> is not Noetherian. Also, local <span class="math-container">$R$</span> won't work, since over a local ring any finite flat module is free.</p>
Dolors
1,078,427
<p>The result can be extended to the setting of non-commutative rings:</p> <p><strong>Prop:</strong> Let <span class="math-container">$M$</span> be a finitely generated flat right module over a ring <span class="math-container">$R$</span>, and let <span class="math-container">$P$</span> be a projective module. If <span class="math-container">$M/MJ(R)$</span> is isomorphic to <span class="math-container">$P/PJ(R)$</span>, then <span class="math-container">$M$</span> is isomorphic to <span class="math-container">$P$</span>.</p> <p>For the proof you can check Proposition 7.3 in the paper &quot;Flat modules and lifting of finitely generated projective modules&quot; by Facchini, Herbera and Sakhajev (Pac. J. Math. Vol. 220, No. 1, 2005, 49-67).</p>
2,916,887
<p>The formula for Shannon entropy is as follows,</p> <p>$$\text{Entropy}(S) = - \sum_i p_i \log_2 p_i $$</p> <p>Thus, a fair six-sided die should have the entropy,</p> <p>$$- \sum_{i=1}^6 \dfrac{1}{6} \log_2 \dfrac{1}{6} = \log_2 (6) = 2.5849...$$</p> <p>However, the entropy should also correspond to the average number of questions you have to ask in order to know the outcome (as exampled in <a href="https://medium.com/udacity/shannon-entropy-information-gain-and-picking-balls-from-buckets-5810d35d54b4" rel="noreferrer">this guide</a> under the headline <em>Information Theory</em>).</p> <p>Now, trying to construct a decision tree to describe the average number of questions we have to ask in order to know the outcome of a die, this seems to be the optimal one:</p> <p><img src="https://i.stack.imgur.com/yCi7X.png" alt="Decision tree for dice"></p> <p>Looking at the average number of questions in the image, there are 3 questions in 4/6 cases and 2 questions in 2/6 cases. Thus the entropy should be:</p> <p>$$\dfrac{4}{6} \times 3 + \dfrac{2}{6} \times 2 = 2.6666...$$</p> <p>So, obviously the result for the entropy isn't the same in the two calculations. How come?</p>
metamorphy
543,769
<p>In your setting, the Shannon entropy is "just" a lower bound for the average number of questions of any decision tree (including optimal ones). These don't have to coincide. To get closer to what the Shannon entropy is, imagine an optimal decision tree identifying outcomes of throwing a die $N$ times with some large $N$ (assuming independence). The larger $N$ is, the smaller (yet nonnegative) is the difference between the "averaged" (i.e. divided by $N$) number of questions of this "compound" decision tree and the Shannon entropy of the die. (This resembles the idea behind <a href="https://en.wikipedia.org/wiki/Arithmetic_coding" rel="noreferrer">arithmetic coding</a>.)</p>
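<p>To see the convergence numerically, here is a small Python sketch (my own illustration; it relies on the standard fact that an optimal yes/no decision tree is a Huffman code): it computes the optimal average number of questions per roll when $N$ rolls are identified at once.</p>
<pre><code>import heapq
from math import log2

def optimal_average_questions(probs):
    # Expected depth of an optimal (Huffman) yes/no decision tree:
    # each merge of two subtrees adds their combined probability to the total.
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        total += a + b
        heapq.heappush(heap, a + b)
    return total

for n in (1, 2, 3, 4):
    outcomes = 6 ** n
    per_roll = optimal_average_questions([1 / outcomes] * outcomes) / n
    print(n, round(per_roll, 4))   # 2.6667, 2.6111, 2.6049, 2.6049

print(round(log2(6), 4))           # 2.585, the Shannon entropy lower bound
</code></pre>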
2,916,887
<p>The formula for Shannon entropy is as follows,</p> <p>$$\text{Entropy}(S) = - \sum_i p_i \log_2 p_i $$</p> <p>Thus, a fair six-sided die should have the entropy,</p> <p>$$- \sum_{i=1}^6 \dfrac{1}{6} \log_2 \dfrac{1}{6} = \log_2 (6) = 2.5849...$$</p> <p>However, the entropy should also correspond to the average number of questions you have to ask in order to know the outcome (as exampled in <a href="https://medium.com/udacity/shannon-entropy-information-gain-and-picking-balls-from-buckets-5810d35d54b4" rel="noreferrer">this guide</a> under the headline <em>Information Theory</em>).</p> <p>Now, trying to construct a decision tree to describe the average number of questions we have to ask in order to know the outcome of a die, this seems to be the optimal one:</p> <p><img src="https://i.stack.imgur.com/yCi7X.png" alt="Decision tree for dice"></p> <p>Looking at the average number of questions in the image, there are 3 questions in 4/6 cases and 2 questions in 2/6 cases. Thus the entropy should be:</p> <p>$$\dfrac{4}{6} \times 3 + \dfrac{2}{6} \times 2 = 2.6666...$$</p> <p>So, obviously the result for the entropy isn't the same in the two calculations. How come?</p>
celtschk
34,930
<p>To recover entropy, you have to consider a <em>sequence of</em> dice throws, and ask how many questions <em>per roll</em> you need in an optimal strategy, in the limit that the number of rolls goes to infinity. Note that each question can cover all the rolls, for example for two rolls, you could ask at some point: “Are the results in $\{16,21,22,23\}$?” (where the first digit denotes the first throw, and the second digit denotes the second throw).</p> <p>I'm too lazy to do it for 36 possibilities, so here is a simpler example: Consider a die for which each roll gives only one of three results with equal probability. Then the entropy is about $1.58496$.</p> <p>For one toss, the optimal strategy is simply to ask “was it $1$?” followed by “was it $2$?”, which on average gives $5/3 \approx 1.67$ questions.</p> <p>For two tosses, an optimal strategy would be to first ask “was it one of $\{11,12,13,21\}$?” (where the first digit gives the result of the first toss, and the second digit the result of the second toss). If the answer is “yes”, then use two questions to single out one of the four results. Otherwise, ask “was the first toss a $2$?”; if yes, then it was one of $22$ or $23$, and one question is sufficient to determine that. In the remaining case you know the first toss was $3$ and know nothing about the second, so you employ the one-toss strategy to determine the second toss.</p> <p>This strategy needs on average $29/9=3.2222$ questions, or $1.61111$ questions per toss, which is already much better, and indeed only $1.65\,\%$ worse than the value given by the entropy.</p> <p>Note that the average number of questions of the single-toss optimal strategy can differ dramatically from the entropy. For this, consider the toss of a biased coin. The entropy of this can be made arbitrarily low by making the coin sufficiently biased. But obviously there's no way you can get the result of a coin toss with less than one question.</p>
2,916,887
<p>The formula for Shannon entropy is as follows,</p> <p>$$\text{Entropy}(S) = - \sum_i p_i \log_2 p_i $$</p> <p>Thus, a fair six-sided die should have the entropy,</p> <p>$$- \sum_{i=1}^6 \dfrac{1}{6} \log_2 \dfrac{1}{6} = \log_2 (6) = 2.5849...$$</p> <p>However, the entropy should also correspond to the average number of questions you have to ask in order to know the outcome (as exampled in <a href="https://medium.com/udacity/shannon-entropy-information-gain-and-picking-balls-from-buckets-5810d35d54b4" rel="noreferrer">this guide</a> under the headline <em>Information Theory</em>).</p> <p>Now, trying to construct a decision tree to describe the average number of questions we have to ask in order to know the outcome of a die, this seems to be the optimal one:</p> <p><img src="https://i.stack.imgur.com/yCi7X.png" alt="Decision tree for dice"></p> <p>Looking at the average number of questions in the image, there are 3 questions in 4/6 cases and 2 questions in 2/6 cases. Thus the entropy should be:</p> <p>$$\dfrac{4}{6} \times 3 + \dfrac{2}{6} \times 2 = 2.6666...$$</p> <p>So, obviously the result for the entropy isn't the same in the two calculations. How come?</p>
A. Webb
59,557
<p>If you have $1$ die, there are $6$ possible outcomes. Label them 0 through 5 and express as a binary number. This takes $\lceil\log_2{6}\rceil = 3$ bits. You can always determine the 1 die with 3 questions: just ask about each bit in turn. </p> <p>If you have $10$ dice, then there are $6^{10}$ possible outcomes. Label them 0 through $6^{10}-1$ and express as a binary number. This takes $\lceil\log_2{6^{10}}\rceil = \lceil10\log_2{6}\rceil = 26$ bits. You can always determine the 10 dice with 26 questions: just ask about each bit in turn. The average is 26 questions / 10 dice = 2.6.</p> <p>If you have $100$ dice, then there are $6^{100}$ possible outcomes. Label them 0 through $6^{100}-1$ and express as a binary number. This takes $\lceil\log_2{6^{100}}\rceil = \lceil100\log_2{6}\rceil = 259$ bits. You can always determine the 100 dice with 259 questions: just ask about each bit in turn. The average is 259 questions / 100 dice = 2.59.</p> <p>If you have $1000$ dice, then there are $6^{1000}$ possible outcomes. Label them 0 through $6^{1000}-1$ and express as a binary number. This takes $\lceil\log_2{6^{1000}}\rceil = \lceil1000\log_2{6}\rceil = 2585$ bits. You can always determine the 1000 dice with 2585 questions: just ask about each bit in turn. The average is 2585 questions / 1000 dice = 2.585.</p> <p>Each order of magnitude gets you one more digit, converging toward the Shannon entropy.</p> <p>On the other hand, with the decision tree in your example, you would not converge towards splitting the outcome space in half with each question. The first question ($d_1 \in \{1,2,3\}$?) does, but then there is waste if you have to ask two questions to determine 3 remaining outcomes. The second question (given a yes to the first) could be: is either $d_1 = 1$, or $d_1 = 2$ and $d_2 \in \{1,2,3\}$? This does split the outcome space in half for multiple dice. Now you are forced to ask 3 questions to get the first die, but have gained information about the following dice. The strategy of enumerating and encoding the outcomes as above is just an extension of this idea. It doesn't pay off for a low number of dice, but does for many.</p>
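<p>The same computation in a few lines of Python (just a sketch to make the convergence visible):</p>
<pre><code>from math import ceil, log2

# Fixed-length binary labels for N dice at a time: total bits and bits per die.
for n in (1, 10, 100, 1000, 10**6):
    bits = ceil(n * log2(6))
    print(n, bits, bits / n)   # 3.0, 2.6, 2.59, 2.585, 2.584963 questions per die
</code></pre>
<p>The ratio approaches $\log_2{6} = 2.5849...$ from above.</p>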
309,234
<p>I am looking for good, detailed references for "mod $p$ lower central series".</p> <p>So far I have only found papers such as (<a href="https://core.ac.uk/download/pdf/81193793.pdf" rel="nofollow noreferrer">https://core.ac.uk/download/pdf/81193793.pdf</a>, <a href="https://www.sciencedirect.com/science/article/pii/0040938366900243" rel="nofollow noreferrer">https://www.sciencedirect.com/science/article/pii/0040938366900243</a>), which briefly mention it in the context of topology.</p> <p>Are there any good books that discuss this in detail (not necessarily related to topology)?</p> <p>Also, just to confirm, do these terms all refer to the same thing:</p> <ol> <li>mod $p$ lower central series</li> <li>lower $p$-central series</li> <li>lower exponent-$p$ central series</li> </ol> <p>I am confused by the different terminologies.</p> <p>Thanks a lot.</p>
Andy Putman
317
<p>This is a fundamental notion in the theory of pro-$p$ groups. A good reference is</p> <p>J. D. Dixon, M. P. F. du Sautoy, A. Mann, and D. Segal, <em>Analytic pro-p groups</em>, second edition, Cambridge Studies in Advanced Mathematics, 61, Cambridge University Press, Cambridge, 1999.</p>
201,358
<p>I would like to be able to extract elements from a list where indices are specified by a binary mask.</p> <p>To provide an example, I would like to have a function <code>BinaryIndex</code> doing the following:</p> <pre><code>foo = {a, b, c} mask = {True, False, True} BinaryIndex[mask, foo] (* expected output *) {a, c} </code></pre> <p>Is there such a function built-in? I would be able to come up with some implementation, but I would like to have this performance-optimized. If there is no such built-in function, what would be a good approach to make this fast? </p>
Sjoerd Smit
43,522
<p>Since this question is about performance, I'd like to add that often it's better to work with lists of integers than lists of booleans since lists of integers can be packed. For example, if you want to pick the positive elements from an array using <code>Pick</code>, you can either use <code>Positive</code> to make a boolean selector list or <code>UnitStep</code> to make an integer selector list:</p> <pre><code>list = RandomReal[{-1, 1}, 100000]; With[{sel = Positive[list]}, Pick[list, sel]; // RepeatedTiming] With[{sel = UnitStep[list]}, Pick[list, sel, 1]; // RepeatedTiming] </code></pre> <blockquote> <p>{0.0065, Null}</p> <p>{0.00099, Null}</p> </blockquote> <p>As you can see, <code>UnitStep</code> is significantly faster in the <code>Pick</code>ing step because <code>UnitStep[list]</code> is a packed array:</p> <pre><code>Developer`PackedArrayQ[list] (* True *) Developer`PackedArrayQ[Positive[list]] (* False *) Developer`PackedArrayQ[UnitStep[list]] (* True *) </code></pre>
711,168
<p>Let $V$ be an $n$-dimensional real inner product space and let $a=\lbrace v_1,v_2,\dots v_n \rbrace$ be an orthonormal basis for $V$. Let $W$ be a subspace of $V$ with orthonormal basis $B = \lbrace w_1, w_2,\dots w_k\rbrace$. Let $A$ be the matrix whose columns are the coordinate vectors $[w_1]_a, [w_2]_a,\dots, [w_k]_a$, and let $P_w$ be the orthogonal projection onto $W$. Show $[P_w]_{aa} = AA^t$.</p> <ol> <li>What I have is $P_w(x) = w$.</li> <li>$[P_w(x)]_a = [P_w]_{aa}[x]_a$</li> <li>And since $A$ is an orthogonal matrix, $AA^t=I$. I'm stuck on this step. Please help me out. Thanks.</li> </ol>
amWhy
9,003
<p>It is true that the group axioms can be "weakened" to include: </p> <ul> <li>associativity</li> <li>the existence of a <strong>left</strong> identity for all elements in the group</li> <li>for each element in the group, there exists a <strong>left</strong> inverse.</li> </ul> <p>Note that in this weakened definition we need <strong>left identity, left inverses</strong>. Similarly, we can replace both occurrences of <strong>left</strong> above by <strong>right</strong>, so it is also true that associativity plus <strong>right identity, right inverses</strong> suffices. The point is, we need the "sides" of operation of the identity and of inverses to match.</p> <p>From these "weakened" axioms, we can prove that the left (right) identity is the right (left) identity, and given element $b$ in $G$ such that there exists an element $a$ in $G$ such that $a*b = e$ it can be shown that $b*a = e$.</p>
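<p>To see how the first claim goes, here is the standard short argument (sketched): suppose $e*x = x$ for all $x$, and each $x$ has some $y$ with $y*x = e$. Given $x$, pick $y$ with $y*x = e$ and then $z$ with $z*y = e$. Then $$x*y = e*(x*y) = (z*y)*(x*y) = z*((y*x)*y) = z*(e*y) = z*y = e,$$ so every left inverse is also a right inverse; and then $$x*e = x*(y*x) = (x*y)*x = e*x = x,$$ so the left identity is also a right identity.</p>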
1,790,311
<p>Show the following equalities: $5 \mathbb{Z} +8= 5\mathbb{Z} +3= 5\mathbb{Z} +(-2)$.</p> <p>$5 \mathbb{Z} +8=\{5z_{1}+8: z_{1} \in \mathbb{Z}\}$,</p> <p>$5 \mathbb{Z} +3=\{5z_{2}+3: z_{2} \in \mathbb{Z}\}$,</p> <p>$5 \mathbb{Z} +(-2)=\{5z_{3}+(-2): z_{3} \in \mathbb{Z}\}$.</p> <p>So, how can I prove this using these definitions?</p>
Stefan Hante
42,060
<p>Let $a \in 5\mathbb Z + 8$; then there is $b\in\mathbb Z$ such that $a = 5b + 8$. Then you can do \begin{align*} a = 5b + 8 = 5(b+1) + 3, \end{align*} thus $a\in5\mathbb Z + 3$. This proves $$ 5\mathbb Z + 8 \subseteq 5\mathbb Z + 3, $$ because every element of $5\mathbb Z + 8$ is an element of $5\mathbb Z + 3$.</p> <p>With a similar trick you can show $5\mathbb Z + 3 \subseteq 5\mathbb Z + 8$, which then implies $5\mathbb Z + 3 = 5\mathbb Z + 8$.</p> <p>Your other claim is analogous.</p>
23,312
<p>What is the importance of eigenvalues/eigenvectors? </p>
SChepurin
53,883
<p>When you apply transformations to systems or objects represented by matrices and need to understand certain characteristics of those matrices, you have to calculate eigenvectors (eigenvalues).</p> <blockquote> <p>"Having an eigenvalue is an accidental property of a real matrix (since it may fail to have an eigenvalue), but every complex matrix has an eigenvalue." (Wikipedia)</p> </blockquote> <p>Eigenvalues characterize important properties of linear transformations, such as whether a system of linear equations has a unique solution or not. In many applications eigenvalues also describe physical properties of a mathematical model.</p> <p>Some important applications -</p> <ul> <li><p>Principal Components Analysis (PCA) in object/image recognition;</p></li> <li><p>Physics - stability analysis, the physics of rotating bodies;</p></li> <li><p>Market risk analysis - to determine whether a matrix is positive definite;</p></li> <li><p>PageRank from Google.</p></li> </ul>
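<p>A tiny numerical illustration of two of these uses (a Python/NumPy sketch; the matrix is just an example): the eigen-decomposition of a symmetric matrix gives the principal directions and variances used in PCA, and repeated multiplication picks out the dominant eigenvector, which is the idea behind power iteration and, loosely, PageRank.</p>
<pre><code>import numpy as np

# Eigen-decomposition of a small symmetric matrix (think: a covariance matrix in PCA).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
vals, vecs = np.linalg.eigh(A)
print(vals)   # [1. 3.] -- in PCA these are the variances along the principal axes
print(vecs)   # columns are the corresponding orthonormal eigenvectors

# Power iteration: repeated multiplication converges to the dominant eigenvector.
v = np.array([1.0, 0.0])
for _ in range(25):
    v = A @ v
    v /= np.linalg.norm(v)
print(v)      # ~[0.7071, 0.7071], the eigenvector for the largest eigenvalue (3)
</code></pre>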
2,325,421
<blockquote> <p>If $f$ is a linear function such that $f(1, 2) = 0$ and $f(2, 3) = 1$, then what is $f(x, y)$?</p> </blockquote> <p>Any help is well received.</p>
kvicente
452,277
<p>I suppose you refer to a function <em>f</em> from the real plane to the real line. Then note that <code>(1,2); (2,3)</code> is a basis for the real plane as a vector space, so any element of the plane can be represented as a linear combination of these two vectors. Then, applying linearity, you get the form of the required function.</p>
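<p>Carrying this out (a short worked completion of the idea above): write $(x,y)=\alpha(1,2)+\beta(2,3)$, i.e. $x=\alpha+2\beta$ and $y=2\alpha+3\beta$. Solving gives $\alpha=2y-3x$ and $\beta=2x-y$, so by linearity $$f(x,y)=\alpha f(1,2)+\beta f(2,3)=\alpha\cdot 0+\beta\cdot 1=2x-y,$$ which indeed satisfies $f(1,2)=0$ and $f(2,3)=1$.</p>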