Dataset schema (from the viewer): qid (int64, 1 to 4.65M), question (string, 27 to 36.3k chars), author (string, 3 to 36 chars), author_id (int64, -1 to 1.16M), answer (string, 18 to 63k chars).
403,184
<p>A (non-mathematical) friend recently asked me the following question:</p> <blockquote> <p>Does the golden ratio play any role in contemporary mathematics?</p> </blockquote> <p>I immediately replied that I never come across any mention of the golden ratio in my daily work, and would guess that this is the same for almost every other mathematician working today.</p> <p>I then began to wonder whether I was totally correct in this statement . . . which has led me to ask this question.</p> <p>My apologies if this question is unsuitable for MathOverflow. If it is, then please feel free to close it.</p>
Ian Agol
1,345
<p>There's an identity for chromatic polynomials of planar triangulations called the &quot;golden identity&quot; found by Tutte, giving a quadratic relation between the values of the chromatic polynomial at <span class="math-container">$\phi+1$</span> and <span class="math-container">$\phi+2$</span>. In a fairly <a href="https://arxiv.org/abs/1801.00502" rel="noreferrer">recent paper</a> Slava Krushkal and I extended this identity (or rather its dual for the flow polynomial of planar cubic graphs) to the Yamada polynomial, an invariant of spatial graphs. We also have a conjecture that this identity characterizes planar cubic graphs in terms of the flow polynomial.</p>
373,951
<p>Suppose we have a linearly ordered group over $\mathbb Z^n$ where the ordering goes left-to-right, i.e. when deciding if $(x_1,x_2,\dots)&lt;(y_1,y_2,\dots)$ we first check whether $x_1&lt; y_1$; if so, then $X&lt; Y$. If the first coordinates are the same, we compare $x_2&lt; y_2$, and so on.</p> <p>I believe there is an isomorphism to a subset of $\mathbb Q$ which preserves this ordering. Namely, a Gödel numbering $(x_1,x_2,\dots)\mapsto 2^{x_1}3^{x_2}\dots$ is an isomorphism which has the same ordering if we first use the 2-adic norm, then the 3-adic norm, etc.</p> <p>I think a slightly more convoluted isomorphism exists even if we're dealing with $\mathbb Q^n$. However, I'm not sure if this is true if we're dealing with $\mathbb R^n$. So I ask:</p> <blockquote> <p>For any group over $\mathbb R^n$ which uses the left-to-right ordering, is there an equivalent group over $\mathbb R$?</p> </blockquote>
Cameron Buie
28,900
<p>No. Note that an order-isomorphism will be an injective continuous function when both spaces are considered in the order topology. Consider then $\Bbb R^2$ ordered as you describe, and in particular consider the images of the connected sets $\{x\}\times\Bbb R$ of $\Bbb R^2$ under any continuous function. The continuous image of a connected set is connected, so for $f$ to be injective, we require the images of those connected sets to be disjoint, uncountable connected subsets of $\Bbb R$. The interiors of these connected sets would be disjoint open intervals or rays, but there would be uncountably many of them, which is impossible, since $\Bbb R$ has a countable dense subset.</p> <p>Incidentally, I'm not sure what your rational isomorphism is supposed to do, but I don't think it works. However, as a countable dense linear order with no endpoints, $\Bbb Q^n$ is order isomorphic to $\Bbb Q$.</p>
373,951
Pete L. Clark
299
<p>An ordered abelian group <span class="math-container">$(G,+,&lt;)$</span> is <strong>Archimedean</strong> if for all <span class="math-container">$x,y \in G$</span> with <span class="math-container">$x &gt; 0$</span>, there is a positive integer <span class="math-container">$n$</span> with <span class="math-container">$nx &gt; y$</span>.</p> <p>Famously the ordered group <span class="math-container">$(\mathbb{R},+,&lt;)$</span> is Archimedean. But for <span class="math-container">$N &gt; 1$</span>, <span class="math-container">$\mathbb{R}^N$</span> endowed with the lexicographic ordering is not: for all <span class="math-container">$n \in \mathbb{Z}^+$</span>, we have <span class="math-container">$n (0,1,0,\ldots,0) &lt; (1,0,\ldots,0)$</span>. So they cannot be order isomorphic.</p> <p>It is a theorem of Hölder that an ordered abelian group is Archimedean iff it can be embedded in <span class="math-container">$\mathbb{R}$</span>: see <span class="math-container">$\S$</span> 17.2 <a href="http://alpha.math.uga.edu/%7Epete/integral.pdf" rel="nofollow noreferrer">of these notes</a> for a proof.</p> <p>These ideas can be pushed much further: a subgroup <span class="math-container">$H$</span> of an ordered abelian group <span class="math-container">$(G,+,&lt;)$</span> is <strong>convex</strong> if for all <span class="math-container">$x &lt; y&lt; z \in G$</span>, if <span class="math-container">$x,z \in H$</span> then also <span class="math-container">$y \in H$</span>.</p> <p>The family of convex subgroups of an ordered abelian group is linearly ordered under inclusion (Proposition 17.10 of <a href="http://alpha.math.uga.edu/%7Epete/integral.pdf" rel="nofollow noreferrer"><em>loc. cit.</em></a>). So the order isomorphism type of this set is an invariant of <span class="math-container">$(G,+,&lt;)$</span>, say <span class="math-container">$r(G)$</span>.
When <span class="math-container">$r(G)$</span> is finite we identify it with a natural number and call it the <strong>rank</strong> of <span class="math-container">$G$</span>. Then:</p> <p><span class="math-container">$\bullet$</span> An ordered group has rank at most <span class="math-container">$1$</span> iff it is Archimedean (Proposition 17.11 of <a href="http://alpha.math.uga.edu/%7Epete/integral.pdf" rel="nofollow noreferrer"><em>loc. cit.</em></a>).</p> <p><span class="math-container">$\bullet$</span> The rank of <span class="math-container">$\mathbb{R}^N$</span> is <span class="math-container">$N$</span>. (Exercise! This refines the current question.)</p>
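The non-Archimedean inequality above is easy to see concretely: Python compares tuples lexicographically, which is exactly the left-to-right order on <span class="math-container">$\mathbb{R}^2$</span>, so a quick check (a sketch, not part of the original answer) confirms that no multiple of <span class="math-container">$(0,1)$</span> ever exceeds <span class="math-container">$(1,0)$</span>:

```python
# Python's tuple comparison is lexicographic, matching the order on R^2
def scale(n, v):
    return tuple(n * c for c in v)

x, y = (0.0, 1.0), (1.0, 0.0)
# n*x = (0, n) < (1, 0) for every n: the lexicographic order is not Archimedean
print(all(scale(n, x) < y for n in range(1, 10**6, 12345)))  # True
```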
2,255,192
<p>I'm going through the exercises in Georgi E Shilov's Linear Algebra book and am on chapter 1 problem 2: "Write down all the terms appearing in the determinant of order four which have a minus sign and contain $ a_{23}$"</p> <p>the answers I have arrived at are: </p> <p>$a_{11}$$a_{23}$$a_{32}$$a_{44}$</p> <p>$a_{12}$$a_{23}$$a_{34}$$a_{41}$</p> <p>$a_{14}$$a_{23}$$a_{31}$$a_{42}$</p> <p>The answers listed in the back of the book are the same except for this one below:</p> <p>$a_{44}$$a_{23}$$a_{31}$$a_{42}$</p> <p>Is that a typo with $a_{44}$?</p>
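The candidate terms can be verified mechanically from the Leibniz expansion <span class="math-container">$\det A = \sum_\sigma \operatorname{sgn}(\sigma)\, a_{1\sigma(1)}\cdots a_{4\sigma(4)}$</span>; here is a short sketch (not from the original post) that enumerates every minus-sign term containing <span class="math-container">$a_{23}$</span>:

```python
from itertools import permutations

def sign(p):
    # parity of a permutation via inversion counting
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

minus_terms = [
    "".join(f"a{i + 1}{s[i]}" for i in range(4))
    for s in permutations(range(1, 5))
    if s[1] == 3 and sign(s) == -1  # term contains a_{23} and carries a minus sign
]
print(minus_terms)  # ['a11a23a32a44', 'a12a23a34a41', 'a14a23a31a42']
```

Only the three products listed in the question appear, which supports reading the book's <span class="math-container">$a_{44}a_{23}a_{31}a_{42}$</span> as a typo for <span class="math-container">$a_{14}a_{23}a_{31}a_{42}$</span> (a row index cannot repeat within one term).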
Jed
441,147
<p>Every iteration the green area is reduced by 25%. As the iteration number tends toward infinity, the green area will approach 0. Since the white area is everything else, its area will tend toward the original area (1).</p>
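Concretely, the green area after <span class="math-container">$n$</span> iterations is <span class="math-container">$0.75^n$</span> (assuming, as in the answer, that each step removes a quarter of what remains); a quick numerical sketch:

```python
green = 1.0
for _ in range(50):
    green *= 0.75          # each iteration keeps 75% of the green area
print(green, 1.0 - green)  # green -> 0 (about 5.7e-7 after 50 steps), white -> 1
```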
101,972
<p>I have a grammatically computable function $f$, which means that a grammar $G = (V,\Sigma,P,S)$ exists, so that</p> <p>$SwS \rightarrow v \iff v = f(w)$.</p> <p>Now I have to show that, given a grammatically computable function $f$, a Turing Machine $M$ can be constructed, so that $M$ computes $f$.</p> <p>Here is my idea:</p> <ol> <li>The Turing Machine writes the string $SwS$ to its tape.</li> <li>It nondeterministically chooses one of the productions in $P$.</li> <li>For the chosen production rule, say $x \rightarrow y$, it scans the tape from left to right and nondeterministically stops at a symbol and checks whether the next $|x|$ symbols match the left side of the production rule. If that's the case, it replaces $x$ with $y$ and chooses the next production rule.</li> </ol> <p>After the computation, the tape contains the string $f(w) = v$.</p> <p><strong>Here is the question:</strong></p> <p>Because the TM is nondeterministic, there are many branches in the computation and each branch has its own "version" of the tape with its own content. But in the end, the starting string can only derive to a single string $v$, because the TM computes a function.</p> <p>Does this mean that the computation branches will all have the same tape content when they arrive at their leaves?</p>
Marc van Leeuwen
18,880
<p><a href="http://en.wikipedia.org/wiki/Polynomial_long_division" rel="nofollow">Polynomial long division</a> is the way to go, especially over a finite field, where you don't have to worry about fractional coefficients (working over, for instance, the rational numbers, these can get extremely unwieldy surprisingly soon). Over $\mathbb Z/2\mathbb Z$ you don't even have to worry about dividing coefficients at all; the only question to be answered is "to subtract or not to subtract", where as a bonus subtraction is actually the same as addition.</p> <p>Note that the Wikipedia article you refer to does not assume such a simple context, and avoids division by coefficients by doing a pseudo-division instead (for which, instead of an explosion of fractions, you can get enormous coefficients).</p>
101,972
ego
21,356
<p>$f$ corresponds to the binary number $1011$ and $g$ to $110$ if you identify $x$ with $2$. Appending a $0$ (resp. multiplication by $2$) corresponds to multiplying by $x$, and $\oplus$ (exclusive or) is addition.</p> <pre><code>1011 : 110 = 11, i.e., the quotient is x + 1
110
---
 111
 110
 ---
   1, i.e., the remainder is 1
</code></pre>
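Over $\mathbb Z/2\mathbb Z$ this long division is just shifting and XOR-ing bitmasks; a minimal sketch (not from the original answer) reproducing the computation above:

```python
def gf2_divmod(a, b):
    """Divide GF(2) polynomials encoded as integers (bit i = coefficient of x^i)."""
    q = 0
    while a.bit_length() >= b.bit_length():
        shift = a.bit_length() - b.bit_length()
        q ^= 1 << shift      # next quotient bit
        a ^= b << shift      # subtraction over GF(2) is XOR
    return q, a

q, r = gf2_divmod(0b1011, 0b110)  # (x^3 + x + 1) divided by (x^2 + x)
print(bin(q), bin(r))  # 0b11 and 0b1: quotient x + 1, remainder 1
```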
3,494,470
<p>Will all units in <span class="math-container">$\mathbb{Z}_{72}$</span> also be units (modulo <span class="math-container">$8$</span> and <span class="math-container">$9$</span>) of <span class="math-container">$\mathbb{Z}_8$</span> and <span class="math-container">$\mathbb{Z}_9$</span>?</p> <p>I think yes, because <span class="math-container">$\gcd(x,72)=1\implies\gcd(x,8)=\gcd(x,9)=1$</span>, right? Any counterexamples? Thanks beforehand.</p>
Bill Dubuque
242
<p>Yes, that is correct. More generally if <span class="math-container">$\,f(x)\,$</span> is polynomial with integer coef's and it has a root <span class="math-container">$\,x\equiv r\pmod{\!mn}\,$</span> then that root <em>persists</em> <span class="math-container">$\bmod m\,$</span> &amp; <span class="math-container">$\,n\,$</span> since congruences <a href="https://math.stackexchange.com/a/3236473/242">persist</a> mod factors of the modulus (or directly: <span class="math-container">$\,m,n\mid mn\mid f(r)).\,$</span> Specializing <span class="math-container">$\,f(x) = ax-1\,$</span> shows that if <span class="math-container">$\,a\,$</span> is a unit <span class="math-container">$\bmod mn$</span> then it persists as a unit <span class="math-container">$\!\bmod m\,$</span> &amp; <span class="math-container">$\,n.$</span></p> <p>Or, up in <span class="math-container">$\Bbb Z,\,$</span> we can view it as <em>persistence</em> of Bezout identities for <span class="math-container">$\rm\color{#c00}{factors}$</span> of gcd arguments, i.e. </p> <p><span class="math-container">$$\gcd(\color{#c00}ab,\color{#c00}mn)=1\,\Rightarrow\, abx + mny = 1\,\Rightarrow a(bx)+m(ny) = 1\,\Rightarrow\, \gcd(\color{#c00}{a,m}) = 1$$</span></p> <p>Or, in <span class="math-container">$ $</span> divisibility <span class="math-container">$ $</span> language: <span class="math-container">$\ \ (a,m)\ \mid\ (ab,mn) = 1$</span></p> <p>Or, in simpler ideal language <span class="math-container">$\ (a,m)\supseteq (ab,mn) = (1)$</span></p>
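The specialization to units is easy to spot-check numerically; a quick sketch (not part of the original answer):

```python
from math import gcd

units_72 = [a for a in range(1, 72) if gcd(a, 72) == 1]
# every unit mod 72 = 8 * 9 persists as a unit mod 8 and mod 9
print(all(gcd(a, 8) == 1 and gcd(a, 9) == 1 for a in units_72))  # True
```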
3,887,156
<p>I understand that the vertical shift is <span class="math-container">$0$</span> that is why the graph starts at <span class="math-container">$(0,0)$</span>. Also I understand that the amplitude is <span class="math-container">$3$</span> because the maximum y value is <span class="math-container">$3$</span> and the minimum y value is <span class="math-container">$-3$</span>. Last but not least the graph is inverse sine but it seems to be a period and a half. I am not sure where the <span class="math-container">$\frac {\pi}{4}$</span> comes from. I need help understanding that part. Photo of problem attached below. Thanks!</p> <p><a href="https://imgur.com/gallery/coL4IUw" rel="nofollow noreferrer">https://imgur.com/gallery/coL4IUw</a></p>
Zhooo
819,254
<p>We have <span class="math-container">$V=\operatorname{span}\{(1,1,0),(0,0,1)\}$</span>, and we know that <span class="math-container">$$ \dim\frac{\mathbb{R}^3}{V}=3-\dim V=1 $$</span></p> <p>so, if <span class="math-container">$v$</span> is a generator of the quotient, the map <span class="math-container">$v\mapsto 1$</span> extends to a linear isomorphism between <span class="math-container">$\frac{\mathbb{R}^3}{V}$</span> and <span class="math-container">$\mathbb{R}$</span>.</p>
1,903,520
<p>By generalizing the approach in <a href="https://math.stackexchange.com/questions/1903152/integral-involving-a-dilogarithm-versus-an-euler-sum">Integral involving a dilogarithm versus an Euler sum.</a> meaning by using the integral representation of the harmonic numbers and by computing a three dimensional integral over a unit cube analytically we have found the generating function of cubes of harmonic numbers. We have: \begin{eqnarray} &amp;&amp;S^{(3)}(x) := \sum\limits_{n=1}^\infty H_n^3 x^n = \frac{-18 \text{Li}_3\left(1-\frac{1}{x}\right)+6 \text{Li}_3\left(\frac{1}{x}\right)-18 \text{Li}_3(x)}{6(1-x)}+ \frac{6 \log ^3(1-x)-9 \log (x) \log ^2(1-x)+3 \left(3 \log ^2(x)+\pi ^2\right) \log (1-x)}{6(x-1)}+\frac{-\log (x) \left(2 \log ^2(x)+ 3 i \pi \log (x)+5 \pi ^2\right)}{6 (x-1)} \end{eqnarray} Clearly some of the terms on the right hand side are complex even though the whole expression is of course real. The first two terms in the first fraction on the rhs are complex and the middle term in the last fraction is complex. My question is how do I simplify the right hand side to get rid of the complex terms? </p>
Przemo
99,778
<p>By using the functional equations for the trilogarithm we simplified the result as follows: \begin{eqnarray} &amp;&amp;S^{(3)}(x)= \\ &amp;&amp;\frac{ \text{Li}_3(x)}{(1-x)}+ 3\frac{\text{Li}_3(1-x)-\zeta (3)}{(1-x)}+ \log(1-x)\frac{ \left(-2 \log ^2(1-x)+3 \log (x) \log(1-x)-\pi ^2 \right)}{2 (1-x)} \end{eqnarray}</p> <p>For a sanity check we expand each of the terms in the formula in a Taylor series about zero. We have: \begin{eqnarray} &amp;&amp;\frac{Li_3(x)}{1-x} =\\ &amp;&amp; x+\frac{9 x^2}{8}+\frac{251 x^3}{216} + O(x^4) \\ &amp;&amp;3\frac{\text{Li}_3(1-x)-\zeta (3)}{(1-x)} =\\ &amp;&amp;-\frac{\pi ^2 x}{2}+\frac{9 x^2}{4}-\frac{3}{2} x^2 \log (x)-\frac{3 \pi ^2 x^2}{4}-\frac{11 \pi ^2 x^3}{12}+4 x^3-3 x^3 \log (x) + O(x^4) \\ &amp;&amp;\log(1-x)\frac{ \left(-2 \log ^2(1-x)+3 \log (x) \log(1-x)-\pi ^2 \right)}{2 (1-x)} =\\ &amp;&amp; \frac{\pi ^2 x}{2}+\frac{3 \pi ^2 x^2}{4}+\frac{3}{2} x^2 \log (x) +\frac{11 \pi ^2 x^3}{12}+x^3+3 x^3 \log (x)+ O(x^4) \end{eqnarray}</p> <p>As we can see, the terms proportional to $\log(x)$ being present in the second and the third term exactly cancel each other. The formula is correct.</p>
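The simplified closed form can also be checked numerically against the defining series <span class="math-container">$\sum_n H_n^3 x^n$</span>; a sketch (using a plain power-series trilogarithm, valid for <span class="math-container">$|x|\le 1$</span>):

```python
from math import log, pi

def li3(z, terms=2000):
    # trilogarithm Li_3(z) = sum z^k / k^3, converges for |z| <= 1
    return sum(z**k / k**3 for k in range(1, terms + 1))

zeta3 = li3(1.0)

def S3(x):
    # the simplified closed form from the answer
    L1, Lx = log(1 - x), log(x)
    return (li3(x) / (1 - x)
            + 3 * (li3(1 - x) - zeta3) / (1 - x)
            + L1 * (-2 * L1**2 + 3 * Lx * L1 - pi**2) / (2 * (1 - x)))

# compare against the defining series sum H_n^3 x^n at x = 1/2
x, H, series = 0.5, 0.0, 0.0
for n in range(1, 300):
    H += 1.0 / n
    series += H**3 * x**n

print(abs(S3(x) - series))  # prints a small number (< 1e-6)
```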
417,181
<p>We have to prove that if the difference between two prime numbers greater than two is another prime, then that prime is $2$. It can be proved in the following way.</p> <p>1) $\text{odd}-\text{odd}=\text{even}$. Therefore the difference will always be even.</p> <p>2) The only even prime number is $2$. Therefore the difference will be $2$ if the difference between the primes is another prime.</p> <p>I am looking for more proofs of this theorem. Any help will be appreciated.</p>
mrf
19,440
<p>Assume that $\sum_{n=1}^\infty b_n$ is a divergent positive series. Then you can always find another divergent positive series $\sum_{n=1}^\infty a_n$ with the property that $$ \lim_{n\to\infty} \frac{a_n}{b_n} = 0. $$ (In your case, $b_n = \dfrac{1}{n\log n}$.)</p> <p>One way to see this is to use a theorem of Abel and Dini:</p> <blockquote> <p><strong>Theorem</strong> Assume that $\sum_{n=1}^\infty b_n$ is a divergent positive series, and let $B_n = b_1 + \cdots + b_n$ denote its partial sums. Then the series $$ \sum_{n=1}^\infty \frac{b_n}{B_n} $$ also diverges.</p> </blockquote> <p><strong>Proof</strong> First note that $$ \frac{b_{n+1}}{B_{n+1}} + \frac{b_{n+2}}{B_{n+2}} + \cdots + \frac{b_{n+k}}{B_{n+k}} \ge \frac{b_{n+1}+b_{n+2}+\cdots+b_{n+k}}{B_{n+k}} = 1 - \frac{B_n}{B_{n+k}}. $$ Since we assume that $B_n \to \infty$, for each $n$, we can choose $k_n$ such that $$ \frac{B_n}{B_{n+k_n}} &lt; \frac12, $$ i.e. $$ \frac{b_{n+1}}{B_{n+1}} + \frac{b_{n+2}}{B_{n+2}} + \cdots + \frac{b_{n+k_n}}{B_{n+k_n}} &gt; \frac12. $$ Summing blocks like these, we see that $$ \sum_{n=1}^\infty \frac{b_n}{B_n} $$ diverges.</p>
69,658
<p>Given a fiber bundle $f: E\rightarrow M$ with connected fibers we call the image $f^*(\Omega^k(M))\subset \Omega^k(E)$ the subspace of basic forms. Clearly, for any vertical vector field $X$ on $E$ we have that the interior product $i_X(f^*\omega)$ and the Lie derivative $L_X(f^*\omega)$ vanish for all $\omega \in \Omega^k(M)$. Is the converse true? That is, if $\alpha \in \Omega^k(E)$ is a form such that $i_X(\alpha)=0$ and $L_X(\alpha)=0$ for all vertical vector fields $X$ on $E$, is it true that $\alpha$ is a basic form? I believe so, but I am not sure how to prove it. Thanks for your help.</p>
Jason DeVito
331
<p>First, this can be checked locally, so we may as well assume $E = F\times M$.</p> <p>Use coordinates $x_i$ on $F$ and $y_j$ on $M$. Then a $k$-form is given by $\alpha =\sum f_{IJ} dx^I\wedge dy^J$. Here, $I = \{i_1,...,i_s\}$ and $dx^I$ means $dx^{i_1}\wedge...\wedge dx^{i_s}$, and we have $|I|+|J|=k$.</p> <p>The goal is to show that if $i_X(\alpha) = 0$ for all vertical $X$, then $f_{IJ} = 0$ whenever $I\neq \emptyset$.</p> <p>So fix any $I\neq \emptyset$. Suppose $i_1\in I$. Let $X = \frac{\partial}{\partial x^{i_1}}$, the dual vector to $dx^{i_1}$. Then $0 = i_X(\alpha)$, so $ 0 = i_X(\alpha)(\frac{\partial}{\partial x^{I-i_1}}, \frac{\partial}{\partial x^{J}}) = \pm f_{IJ}$.</p> <p>Thus the condition guarantees that $f_{IJ} = 0$ whenever $I\neq \emptyset$, so the only terms which appear in $\alpha$ are of the form $f_J dy^J$.</p> <p>The only problem now is that $f_J$ could depend on the $F$ factor. This is where the Lie derivative condition will come in.</p> <p>The Lie derivative of a form is given by Cartan's formula $L_Y(\alpha) = i_Y d\alpha + d i_Y(\alpha)$.</p> <p>Taking $Y$ vertical, this reduces to $L_Y(\alpha) = i_Y(d\alpha)$, since we've just shown that $i_Y( \alpha) = 0$.</p> <p>Now, $d\alpha = \sum \frac{\partial{f_J}}{{\partial x^i}} dx^i\wedge dy^J + \sum \frac{\partial{f_J}}{{\partial y^j}} dy^j\wedge dy^J$.</p> <p>Just as before, by utilizing a dual basis, we can show that since $0 = L_X(\alpha) = i_X(d\alpha)$, we have $\frac{\partial{f_J}}{\partial x^i} = 0$.</p> <p>But this implies that each $f_J$ only depends on the $y^j$ coordinates. It's now clear that $\alpha$ is the pullback of something, namely of $\sum f_J dy^J$, which makes sense because $f_J$ is really only a function on $M$.</p>
2,525,498
<p>I'm an undergrad, and I've been presented with the following problem:</p> <blockquote> <p>Fundamental Theorem of Arithmetic: Let $\mathbb{N}_{&gt;0}$ be the monoid of positive integers with binary operation given by ordinary multiplication, let $P$ be the set of primes in $\mathbb{N}$, let $M$ be a commutative monoid and let $g : P → M$ be a function. Prove that there is a unique monoid homomorphism $G : \mathbb{N}_{&gt;0} → M$ such that $G(p) = g(p)$ for every prime $p ∈ P$.</p> </blockquote> <p>So far, I've been able to come up with this:</p> <blockquote> <p>Let $G: \mathbb{N}_{&gt;0} \to M$ be such that $G(p) = g(p)$ for all $p \in P$. Since $G$ isn't explicitly defined for non-prime numbers, we can just say that $G(1) = e$, where $e \in M$ is the identity. Let $x, y$ be positive integers. We want to show that $G(xy) = G(x)G(y)$. </p> </blockquote> <p>Am I right in just declaring $G$ to be what I want it to be and then showing it's a monoid homomorphism? Does my logic for the identities make sense? How do I attack the last part (with $G(xy) = G(x)G(y)$)? Or am I completely wrong and I should erase what I have and start over? And what does the fundamental theorem of arithmetic have to do with any of this?</p>
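For intuition (a sketch of the idea, not the requested abstract proof): the homomorphism <span class="math-container">$G$</span> is forced by prime factorization, <span class="math-container">$G(n)=g(p_1)\cdots g(p_k)$</span> where <span class="math-container">$n=p_1\cdots p_k$</span> with repetition; that is exactly where the fundamental theorem of arithmetic enters. A small illustration, with hypothetical names, taking <span class="math-container">$M=(\mathbb N,+,0)$</span> and <span class="math-container">$g(p)=1$</span>:

```python
def extend_hom(n, g, identity, op):
    """Extend g (defined on primes) to the induced monoid map on positive integers."""
    result = identity
    p = 2
    while n > 1:
        while n % p == 0:          # apply g once per prime factor, with multiplicity
            result = op(result, g(p))
            n //= p
        p += 1
    return result

# M = (N, +, 0), g(p) = 1: G(n) counts prime factors with multiplicity
Omega = lambda n: extend_hom(n, lambda p: 1, 0, lambda a, b: a + b)
print(Omega(60), Omega(6) + Omega(10))  # 4 and 4: G(xy) = G(x)G(y)
```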
Burrrrb
322,248
<p>The function has to be monotone. Assume it is increasing; now consider $f^{-1}(1)$ and observe that $1$ is not in $(0,1)$.</p>
2,225,150
<p>I am seeking a rigorous proof of the following identity:</p> <p>$\sum_{i = 0}^{T} x_i \sum_{j = 0}^{i} y_j = \sum_{i = 0}^{T}y_i\sum_{j = i}^{T} x_j$.</p> <p>By setting some small $T$ and expanding the formulas, the result is clear to see. I am asking for help to give a formal proof of this identity, by reordering the summation.</p>
Leox
97,339
<p>Direct way: \begin{gather*} \sum_{i = 0}^{T} x_i \sum_{j = 0}^{i} y_j=x_0 y_0+x_1(y_0+y_1)+x_2(y_0+y_1+y_2)+\cdots+x_k(y_0+y_1+\cdots+y_k)+\cdots+x_T(y_0+y_1+\cdots+y_T)\\ =y_0(x_0+x_1+\cdots+x_T)+y_1(x_1+x_2+\cdots+x_T)+\cdots+y_T x_T\\ =\sum_{i=0}^T y_i \sum_{j=i}^T x_j. \end{gather*}</p>
2,225,150
Andrew D. Hwang
86,418
<p>The formal approach for all such formulas is mathematical induction. Fix sequences $(x_{i})_{i=0}^{\infty}$ and $(y_{j})_{j=0}^{\infty}$ of summands from some commutative ring (e.g., the field of real numbers). (If you're only interested in your "change of index" formula up to some fixed finite number of summands, these sequences need only be finite and "long enough", and the inductive step below will only be invoked finitely many times.)</p> <p>Recall that summation is defined recursively by $$ \sum_{i=0}^{0} x_{i} = x_{0},\qquad \sum_{i=0}^{T+1} x_{i} = \left[\sum_{i=0}^{T} x_{i}\right] + x_{T+1}. $$</p> <p>For each non-negative integer $T$, let $P(T)$ be the statement $$ \sum_{i=0}^{T} x_{i} \sum_{j=0}^{i} y_{j} = \sum_{i=0}^{T} y_{i} \sum_{j=i}^{T} x_{j}. \tag*{$P(T)$} $$ The base case reads $$ \sum_{i=0}^{0} x_{i} \sum_{j=0}^{i} y_{j} = x_{0} y_{0} = \sum_{i=0}^{0} y_{i} \sum_{j=i}^{0} x_{j}, \tag*{$P(0)$} $$ which is true. Assume inductively that $P(T)$ is true for some integer $T \geq 0$. We have \begin{align*} \sum_{i=0}^{T+1} x_{i} \sum_{j=0}^{i} y_{j} &amp;= \left[\sum_{i=0}^{T} x_{i} \sum_{j=0}^{i} y_{j}\right] + x_{T+1} \sum_{j=0}^{T+1} y_{j} &amp;&amp; \text{recursive definition of summation} \\ &amp;= \left[\sum_{i=0}^{T} y_{i} \sum_{j=i}^{T} x_{j}\right] + x_{T+1} \sum_{i=0}^{T+1} y_{i} &amp;&amp; \text{inductive hypothesis, dummy index} \\ &amp;= \sum_{i=0}^{T+1} y_{i} \sum_{j=i}^{T+1} x_{j} &amp;&amp; \text{recursive definition of summation,} \end{align*} so $P(T+1)$ is true.</p> <p>Since the base case $P(0)$ is true, and since $P(T)$ implies $P(T+1)$ for every integer $T \geq 0$, induction guarantees $P(T)$ is true for all $T \geq 0$.</p>
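The identity itself is easy to spot-check on random integer data; a quick sketch:

```python
import random

T = 9
x = [random.randint(-10, 10) for _ in range(T + 1)]
y = [random.randint(-10, 10) for _ in range(T + 1)]

# both sides count each product x_i * y_j with j <= i exactly once
lhs = sum(x[i] * sum(y[: i + 1]) for i in range(T + 1))
rhs = sum(y[i] * sum(x[i:]) for i in range(T + 1))
print(lhs == rhs)  # True
```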
4,027,071
<p>Let <span class="math-container">$\mathbf{C}$</span> be a category with products, and let <span class="math-container">$A, B, C \in \mathbf{C}$</span>. I wish to show that there exists a morphism <span class="math-container">$h: (A \times B) \times C \to A \times (B \times C)$</span> which is an isomorphism.</p> <p>I believe I can get that we have morphisms <span class="math-container">$g_1:(A \times B) \times C \to A \times (B \times C)$</span> and <span class="math-container">$g_2:A \times (B \times C) \to (A \times B) \times C$</span> and that they are the unique morphisms satisfying the universal property for products.</p> <p>However, I don't see how to show that one/either of them is an isomorphism. We need to prove that their compositions are identity morphisms on those product objects. This amounts to showing that for any objects <span class="math-container">$U, V, W, X$</span> and arrows <span class="math-container">$$ \begin{aligned} f_U&amp;: U \to (A \times B) \times C \\ f_V&amp;: (A \times B) \times C \to V \\ f_W&amp;: W \to A \times (B \times C) \\ f_X&amp;: A \times (B \times C) \to X \\ \end{aligned} $$</span> we have <span class="math-container">$$ \begin{aligned} (g_2 \circ g_1) \circ f_U &amp;= f_U \\ f_V \circ (g_2 \circ g_1) &amp;= f_V \\ (g_1 \circ g_2) \circ f_W &amp;= f_W \\ f_X \circ (g_1 \circ g_2) &amp;= f_X. \end{aligned} $$</span> The first two equalities above assert that <span class="math-container">$g_2 \circ g_1 = \mathrm{id}_{(A \times B) \times C}$</span>, and the second pair asserts that <span class="math-container">$g_1 \circ g_2 = \mathrm{id}_{A \times (B \times C)}$</span>. From here, though, I am stuck.</p>
fosco
685
<p>See 1.5 &quot;Diagram Chasing&quot; here <a href="https://compose.ioc.ee/categoryTheory2020/week3/week3.pdf" rel="nofollow noreferrer">https://compose.ioc.ee/categoryTheory2020/week3/week3.pdf</a></p>
4,027,071
Berci
41,488
<p><strong>Hint:</strong> As in the comments, prove that both <span class="math-container">$(A\times B)\times C$</span> and <span class="math-container">$A\times (B\times C)$</span> are limits for the discrete 3-point diagram <span class="math-container">$A,B,C$</span> (ternary product).</p>
1,292,490
<blockquote> <p>Let $(a_{ij})$ be a real $n \times n$ matrix satisfying</p> <ol> <li>$a_{ii} &gt; 0$ for $1 \leq i \leq n$,</li> <li>$a_{ij} \leq 0$ for $i \ne j$, $1 \leq i,j \leq n$,</li> <li>$\sum_{i=1}^{n} a_{ij} &gt; 0$ for $1 \leq j \leq n$.</li> </ol> <p>Then $\det (A) &gt; 0$.</p> </blockquote> <p>How to prove this? I have no idea.</p>
Pegah
242,413
<p>You can also use Gaussian Elimination Method. Show that in each step of elimination, elements of main diagonal stay positive. So, you will have an upper triangular matrix in which all the elements of main diagonal are positive. Now use the fact that in an upper triangular matrix, determinant is equal to the product of elements of main diagonal.</p>
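A concrete illustration of this argument (a sketch with a hypothetical matrix satisfying the three hypotheses, not from the original answer):

```python
def eliminate(A):
    """Gaussian elimination without pivoting; returns the list of pivots."""
    n = len(A)
    A = [row[:] for row in A]
    pivots = []
    for k in range(n):
        pivots.append(A[k][k])
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
    return pivots

# diagonal positive, off-diagonal nonpositive, column sums positive
A = [[3.0, -1.0, -1.0],
     [-1.0, 4.0, -2.0],
     [-1.0, -1.0, 5.0]]
pivots = eliminate(A)
det = 1.0
for p in pivots:
    det *= p
print(pivots, det)  # every pivot stays positive, so the determinant (here 42) is positive
```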
2,332,419
<p>What's the angle between the two hands of the clock when the time is 15:15? The answer I heard was $7.5$ and I really cannot understand it. Can someone help? Is it true, and why?</p>
Archis Welankar
275,884
<p>At $15\!:\!15$, i.e. quarter past $3$ o'clock, the minute hand points exactly at the $3$. The hour hand moves $0.5$ degrees per minute: $360^\circ$ corresponds to $12$ hours, i.e. $12\times 60=720$ minutes, so its speed is $360/720=0.5\ \mathrm{deg/min}$. In the $15$ minutes past three it has moved $15\times 0.5=7.5$ degrees past the $3$, hence the answer is $7.5$ degrees.</p>
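The same per-minute rates give a general formula for the angle between the hands; a small sketch (the function name is ours, not from the answer):

```python
def clock_angle(hour, minute):
    """Smaller angle in degrees between the hour and minute hands."""
    minute_deg = 6.0 * minute                       # 360 / 60 degrees per minute
    hour_deg = 30.0 * (hour % 12) + 0.5 * minute    # 360 / 12 per hour, plus drift
    diff = abs(hour_deg - minute_deg) % 360.0
    return min(diff, 360.0 - diff)

print(clock_angle(15, 15))  # 7.5
```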
3,745,551
<p>I often see people say that if you have two independent Gaussian RVs, say <span class="math-container">$X \sim \mathcal{N}(\mu_x, \sigma_x^2)$</span> and <span class="math-container">$Y \sim \mathcal{N}(\mu_y, \sigma_y^2)$</span>, then the distribution of their sum is <span class="math-container">$\mathcal{N}(\mu_x + \mu_y, \sigma_x^2 + \sigma_y^2)$</span>.</p> <p>This is only true when <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> have the same units, right? Otherwise you can't even sum them to begin with without standardization.</p> <p>e.g., if <span class="math-container">$X$</span> was some measure of distance in meters and <span class="math-container">$Y$</span> was some measure of velocity in <span class="math-container">$\frac{meters}{second}$</span>, then you can't simply just add their means and variances together. That wouldn't make sense. You'd have to standardize them first so they're both unitless before you can do the above.</p>
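Setting the units question aside, the additivity of means and variances for independent Gaussians is easy to confirm by simulation; a sketch with arbitrary illustrative parameters:

```python
import random

random.seed(0)
mu_x, s_x, mu_y, s_y = 1.0, 2.0, -3.0, 1.5
n = 200_000
sums = [random.gauss(mu_x, s_x) + random.gauss(mu_y, s_y) for _ in range(n)]

mean = sum(sums) / n
var = sum((v - mean) ** 2 for v in sums) / n
print(round(mean, 2), round(var, 2))  # close to mu_x + mu_y = -2 and s_x^2 + s_y^2 = 6.25
```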
User5678
632,875
<p>I think what you are looking for is a Multivariate Gaussian — assign each unit to a different dimension and then you have a random Gaussian vector. If there is no correlation between the two dimensions this reduces to each coming from its own Gaussian.</p>
393,467
<p>I am looking for a proof that:</p> <p>if <span class="math-container">$A_{11}A_{12}...A_{1n}$</span>; <span class="math-container">$A_{21}A_{22}...A_{2n}$</span>; <span class="math-container">$\cdots$</span>; <span class="math-container">$A_{i1}A_{i2}...A_{in}$</span>; <span class="math-container">$\cdots$</span>; <span class="math-container">$A_{m1}A_{m2}...A_{mn}$</span> are <span class="math-container">$m$</span> oriented regular polygons (<span class="math-container">$n$</span>-gons), where <span class="math-container">$n=2k$</span>, then<span class="math-container">$\DeclareMathOperator\Area{Area}$</span> <span class="math-container">$$ \begin{align*} &amp; \Area(A_{11}A_{21}...A_{m1})+\Area(A_{1\;k+1}A_{2\;k+1}...A_{m\;k+1})\\ =\ &amp; \Area(A_{12}A_{22}...A_{m2})+\Area(A_{1\;k+2}A_{2\;k+2}...A_{m\;k+2})\\=\ &amp; \cdots\\ =\ &amp; \Area(A_{1i}A_{2i}...A_{mi})+\Area(A_{1\;k+i}A_{2\;k+i}...A_{m\;k+i})\\ =\ &amp; \cdots \end{align*} $$</span></p> <p><strong>Reference:</strong></p> <ul> <li><p><a href="http://www.xente.mundo-r.com/ilarrosa/GeoGebra/AreasIg_Npoligonos.html" rel="nofollow noreferrer">Areas que suman lo mismo</a> (Spanish: &quot;areas that add up to the same&quot;)</p> </li> <li><p>I posed the case <span class="math-container">$m=4$</span> and <span class="math-container">$n=4$</span> <a href="https://math.stackexchange.com/questions/1447773/two-conjectures-of-four-squares">here</a> nearly six years ago, but it still has no proof.</p> </li> </ul>
Iiro Ullin
219,013
<p>Such inequality is impossible: consider <span class="math-container">$p(x)=1$</span>, <span class="math-container">$q(x)=1/(2\sqrt{x})$</span>, as probability densities on <span class="math-container">$(0,1)$</span>. Then <span class="math-container">$D_{KL}(p\parallel q)$</span> is finite, while <span class="math-container">$\|p-q\|_2=\infty$</span>, as <span class="math-container">$q\not\in L^2$</span>.</p> <p>The reverse direction is also impossible: take <span class="math-container">$p(x)=a e^{-ax}$</span>, <span class="math-container">$q(x)=a^2e^{-a^2x}$</span> on <span class="math-container">$(0,\infty)$</span>. Then <span class="math-container">$\|p-q\|_2\to0$</span>, while <span class="math-container">$D_{KL}(q\parallel p)=1/a+\ln a -1\to\infty$</span>, as <span class="math-container">$a\to0$</span>.</p>
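The first counterexample can be verified numerically; the script below (my own, not part of the answer) confirms that <span class="math-container">$D_{KL}(p\parallel q)=\log 2-\tfrac12$</span> is finite while the tail integral of <span class="math-container">$q^2$</span> grows without bound:

```python
from math import log, sqrt

# p(x) = 1 and q(x) = 1/(2*sqrt(x)) on (0,1): KL(p||q) = int log(2*sqrt(x)) dx
# is finite (equals log 2 - 1/2), but int q(x)^2 dx diverges near 0.
def kl_integrand(x):
    p, q = 1.0, 1.0 / (2.0 * sqrt(x))
    return p * log(p / q)

n = 200_000
kl = sum(kl_integrand((i + 0.5) / n) for i in range(n)) / n  # midpoint rule

def q_sq_tail(eps):
    # exact value of int_{eps}^{1} q(x)^2 dx = (1/4) * log(1/eps)
    return 0.25 * log(1.0 / eps)

tail_values = [q_sq_tail(10.0 ** -k) for k in (2, 4, 8)]  # strictly increasing
```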
393,467
<p>I am looking for a proof that:</p> <p>if <span class="math-container">$A_{11}A_{12}...A_{1n}$</span>; <span class="math-container">$A_{21}A_{22}...A_{2n}$</span>; <span class="math-container">$\cdots$</span>; <span class="math-container">$A_{i1}A_{i2}...A_{in}$</span>; <span class="math-container">$\cdots$</span>; <span class="math-container">$A_{m1}A_{m2}...A_{mn}$</span> are <span class="math-container">$m$</span> oriented regular polygons (<span class="math-container">$n$</span>-gons), where <span class="math-container">$n=2k$</span>, then<span class="math-container">$\DeclareMathOperator\Area{Area}$</span> <span class="math-container">$$ \begin{align*} &amp; \Area(A_{11}A_{21}...A_{m1})+\Area(A_{1\;k+1}A_{2\;k+1}...A_{m\;k+1})\\ =\ &amp; \Area(A_{12}A_{22}...A_{m2})+\Area(A_{1\;k+2}A_{2\;k+2}...A_{m\;k+2})\\=\ &amp; \cdots\\ =\ &amp; \Area(A_{1i}A_{2i}...A_{mi})+\Area(A_{1\;k+i}A_{2\;k+i}...A_{m\;k+i})\\ =\ &amp; \cdots \end{align*} $$</span></p> <p><strong>References:</strong></p> <ul> <li><p><a href="http://www.xente.mundo-r.com/ilarrosa/GeoGebra/AreasIg_Npoligonos.html" rel="nofollow noreferrer">Areas que suman lo mismo</a></p> </li> <li><p>I posed the case <span class="math-container">$m=4$</span> and <span class="math-container">$n=4$</span> <a href="https://math.stackexchange.com/questions/1447773/two-conjectures-of-four-squares">here</a> nearly six years ago, but it still has no proof.</p> </li> </ul>
Ze-Nan Li
235,487
<p>Now, I am trying to answer this question.</p> <p><strong>Proposition</strong>. <em>If <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are two probability densities, and (upper) bounded by <span class="math-container">$\tau_1$</span> and <span class="math-container">$\tau_2$</span>, respectively, then</em> <span class="math-container">$$ KL(p,q) \ge \frac{1-\log(2)}{\max(\tau_1, \tau_2)} L_2(p,q). $$</span></p> <p><em>Proof.</em> We define <span class="math-container">$\eta(x)=\frac{q(x)-p(x)}{p(x)}$</span>, and thus the KL divergence between <span class="math-container">$p$</span> and <span class="math-container">$q$</span> can be computed as follows. <span class="math-container">$$ D_{K L}(p || q)=\int_\mathcal{X} p(x) \log \left(\frac{p(x)}{q(x)} \right) d x=-\int_\mathcal{X} p(x) \log (1+\eta(x)) d x. $$</span> We define <span class="math-container">\begin{equation} A := \{x \mid \eta(x) &gt; 1\} = \{x \mid q(x)&gt;2p(x)\}, \quad B := \{x \mid \eta(x) \leq 1\} = \{x \mid q(x) \leq 2p(x)\}. \end{equation}</span> Then, we can obtain that</p> <p>(1) For <span class="math-container">$x \in A$</span>, <span class="math-container">$(1+\eta(x)) \leq e^{a\eta(x)}$</span>, where <span class="math-container">$a=\log(2)$</span>.</p> <p>(2) For <span class="math-container">$x \in B$</span>, <span class="math-container">$(1+\eta(x)) \leq e^{\eta(x)-b\eta(x)^2}$</span>, where <span class="math-container">$b=1-\log(2)$</span>.</p> <p>Note that we also have <span class="math-container">\begin{equation} \int_{\mathcal{X}} p(x) \eta(x) dx = \int_{\mathcal{X}} (q(x)-p(x)) dx = 0, \end{equation}</span> which implies that <span class="math-container">$\int_A p(x) \eta(x) dx = - \int_B p(x) \eta(x) dx$</span>. 
Putting all together, we have <span class="math-container">\begin{equation*} \begin{aligned} D_{K L}(p || q) &amp;=-\int_A p(x) \log (1+\eta(x)) d x-\int_B p(x) \log (1+\eta(x)) d x \newline &amp;\geq -a \int_A p(x) \eta(x) d x-\int_B p(x) \eta(x) d x+b \int_B p(x) \eta(x)^2 d x \newline &amp;=(1-a) \int_A p(x) \eta(x) d x+b \int_B p(x) \eta(x)^2 d x \newline &amp;=(1-\log (2))\left(\int_A|q(x)-p(x)| d x+\int_B p(x)\left(\frac{q(x)-p(x)}{p(x)}\right)^2 d x\right). \end{aligned} \end{equation*}</span> For the first summand in RHS, we have <span class="math-container">\begin{equation*} \begin{aligned} \int_A|q(x)-p(x)| d x &amp;= \int_A |\frac{q(x)-p(x)}{q(x)}|q(x) d x \newline &amp; \ge \int_A (\frac{q(x)-p(x)}{q(x)})^2 q(x) d x \newline &amp; \ge \frac{1}{\max(\tau_1, \tau_2)} \int_A (q(x)-p(x))^2 d x. \end{aligned} \end{equation*}</span> For the second summand in RHS, we have <span class="math-container">\begin{equation*} \int_B p(x)\left(\frac{q(x)-p(x)}{p(x)}\right)^2 d x \ge \frac{1}{\max(\tau_1, \tau_2)} \int_B (q(x)-p(x))^2 dx. \end{equation*}</span> Finally, we have <span class="math-container">\begin{equation*} D_{KL}(p || q) \ge \frac{1-\log(2)}{ \max(\tau_1, \tau_2)} L_2(p, q), \end{equation*}</span> which completes the proof. qed</p>
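A numerical spot check of the proposition (the example densities below are my own choice, with <span class="math-container">$L_2(p,q)=\int(q-p)^2$</span> as in the proof): take <span class="math-container">$p(x)=1$</span> and <span class="math-container">$q(x)=2x$</span> on <span class="math-container">$(0,1)$</span>, so <span class="math-container">$\tau_1=1$</span>, <span class="math-container">$\tau_2=2$</span>.

```python
from math import log

# KL(p||q) = int_0^1 log(1/(2x)) dx = 1 - log 2, and L2 = int_0^1 (1-2x)^2 dx = 1/3.
# The claimed bound is KL >= (1 - log 2)/max(tau1, tau2) * L2.
n = 200_000
xs = [(i + 0.5) / n for i in range(n)]            # midpoint rule on (0,1)
kl = sum(log(1.0 / (2.0 * x)) for x in xs) / n    # approx 1 - log(2)
l2 = sum((1.0 - 2.0 * x) ** 2 for x in xs) / n    # approx 1/3
bound = (1.0 - log(2)) / 2.0 * l2                 # max(tau1, tau2) = 2
```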
1,824,966
<p>Ok, I was asked this strange question whose concept I can't seem to grasp.</p> <blockquote> <p>Let $T$ be a linear transformation such that: $$T \langle1,-1\rangle = \langle 0,3\rangle \\ T \langle2, 3\rangle = \langle 5,1\rangle $$ Find $T$.</p> </blockquote> <p>Is there supposed to be a function out of this? A matrix of some kind? Maybe both? If so, what is it?</p>
user247327
247,327
<p>"Given T find T" makes no sense! But if you are given "$T$" (standard font) and asked to find "$\mathbf{T}$" (bold face) where the bold face has <strong>already</strong> been defined to be the "matrix associated with the linear transformation, then, yes.</p> <p>You should also understand that the matrix representing a given linear transformation depends upon what <strong>basis</strong> you are using for the vector space. Here, nothing is said about a basis but it is in $\mathbb R^2$ so I think we are to assume the "standard" basis, $\langle 1, 0\rangle$ and $\langle 0, 1\rangle$.</p> <p>Since this is $\mathbb R^2$ to $\mathbb R^2$ any such matrix is of the form $\begin{bmatrix}a &amp; b \\ c &amp; d \end{bmatrix}$. We are told that this linear transformation maps $\langle1, -1\rangle$ to $\langle0, 3\rangle$ so we must have $$\begin{bmatrix}a &amp; b \\ c &amp; d \end{bmatrix}\begin{bmatrix}1 \\ -1 \end{bmatrix}= \begin{bmatrix}a- b \\ c- d\end{bmatrix}= \begin{bmatrix}0 \\ 3\end{bmatrix}.$$</p> <p>We are also told that this linear transformation maps $\langle2, 3\rangle$ to $\langle5, 1\rangle$ so we must have $$\begin{bmatrix}a &amp; b \\ c &amp; d \end{bmatrix}\begin{bmatrix}2 \\ 3 \end{bmatrix}= \begin{bmatrix}2a+ 3b \\ 2c+ 3d\end{bmatrix}= \begin{bmatrix}5 \\ 1\end{bmatrix}.$$</p> <p>Solve the two equations $a- b= 0$ and $2a+ 3b= 5$ for $a$ and $b$. Solve the two equations $c- d= 3$ and $2c+ 3d= 1$ for $c$ and $d$.</p>
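The two small systems at the end can also be solved mechanically: writing the given input vectors as the columns of <span class="math-container">$V$</span> and the outputs as the columns of <span class="math-container">$W$</span>, we need <span class="math-container">$TV=W$</span>, so <span class="math-container">$T=WV^{-1}$</span>. A short NumPy sketch (names mine):

```python
import numpy as np

V = np.array([[1.0, 2.0],
              [-1.0, 3.0]])   # columns are the inputs (1,-1) and (2,3)
W = np.array([[0.0, 5.0],
              [3.0, 1.0]])    # columns are the outputs (0,3) and (5,1)
T = W @ np.linalg.inv(V)      # T V = W  =>  T = W V^{-1}
# this gives a = b = 1, c = 2, d = -1, i.e. T = [[1, 1], [2, -1]]
```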
37,052
<p>This is my first question with mathOverflow so I hope my etiquette is up to par here.</p> <p>My question is regarding a <span class="math-container">$3\times3$</span> magic square constructed using the la Loubère method (see <a href="http://en.wikipedia.org/wiki/Magic_square#Method_for_constructing_a_magic_square_of_odd_order" rel="nofollow noreferrer">la Loubère method</a>)</p> <p>Using the method, I have constructed a magic square and several semimagic squares (where one or both of the diagonals do not add up to a magic sum) with a program on written on my graphing calculator. After playing around with the program, I was shocked that the determinants of these <span class="math-container">$3\times3$</span> magic squares are all the same (specifically -360). Why is this so? (I am still an undergraduate so please go easy on the math :] )</p>
José Figueroa-O'Farrill
394
<p>The original reference for Wick's theorem is, not surprisingly, Wick's original 1950 paper: <a href="https://doi.org/10.1103/PhysRev.80.268" rel="nofollow noreferrer"><em>The Evaluation of the Collision Matrix</em></a> published in the Physical Review <strong>80</strong> (2) pp. 268-272. He also shows how to compute it and it is surprisingly readable 60 years on.</p> <p>Of course, depending on your background, this may be too physical. A more mathematical reference are the Bombay Lectures by Kac and Raina <em><a href="https://books.google.co.uk/books?id=SZVqYvvD8rQC" rel="nofollow noreferrer">Highest-weight representations of infinite-dimensional Lie algebras</a></em>, particularly the 5th lecture on the Bose-Fermi correspondence.</p> <p>The basic idea is to think of <span class="math-container">$\mathfrak{F}$</span> as the space of semi-infinite forms. The vacuum vector would be given by <span class="math-container">$$\Omega = f_1^* \wedge f_2^* \wedge \cdots$$</span> and <span class="math-container">$c(f_i)^*$</span> acts by wedging with <span class="math-container">$f_i^*$</span> whereas <span class="math-container">$c(f_i)$</span> acts by contracting with <span class="math-container">$f_i$</span>.</p>
3,896,345
<p>I've been studying a paper in which the author says:</p> <p>Fix <span class="math-container">$n$</span> such that <span class="math-container">$m^n \prod_{j=1}^n \frac{j}{j+\delta} &gt; 1$</span>, where <span class="math-container">$1&lt;m&lt;\infty$</span>, and <span class="math-container">$\delta &gt;0$</span>.</p> <p>I seem not to be able to show why such <span class="math-container">$n$</span> must exist. I tried rewriting it this way:</p> <p><span class="math-container">$m^n \prod_{j=1}^n \frac{j}{j+\delta} =m^n \prod_{j=1}^n (1-\frac{\delta}{j+\delta})= m^n \exp\bigg[\sum_{j=1}^n \log(1-\frac{\delta}{j+\delta})\bigg] \geq m^n\bigg[1+\sum_{j=1}^n \log(1-\frac{\delta}{j+\delta})\bigg]$</span></p> <p>but after that I'm stuck. In one part the author writes</p> <p><span class="math-container">$m^n \prod_{j=1}^n (1-\frac{\delta}{j+\delta}) \geq m^n e^{c_0} \exp\bigg[-\delta\sum_{j=1}^n \frac{1}{j+\delta}\bigg]$</span>, where <span class="math-container">$c_0$</span> is some constant,</p> <p>but then I still don't know how this guarantees that there exists <span class="math-container">$n$</span> such that <span class="math-container">$m^n \prod_{j=1}^n \frac{j}{j+\delta} &gt; 1$</span>. Does anyone have any idea on how to prove it?</p>
Mathick
846,353
<p>I think I solved this, or at least I managed to convince myself that this is true. We have that</p> <p><span class="math-container">$\prod_{j=1}^{n} \frac{j}{j+\delta} = \prod_{j=1}^{n} \left(1 - \frac{\delta}{j+\delta}\right) = \exp\left[\sum_{j=1}^{n} \log\left(1 - \frac{\delta}{j+\delta}\right) \right] \geq c_0 \exp\left[-\delta\sum_{j=1}^{n} \frac{1}{j+\delta} \right] \geq \frac{c_0}{n^{\delta}},$</span></p> <p>where <span class="math-container">$c_0&gt;0$</span> is a constant: the first inequality holds because <span class="math-container">$\log(1-x)+x=O(x^2)$</span> and <span class="math-container">$\sum_{j=1}^{\infty} \frac{\delta^2}{(j+\delta)^2}&lt;\infty$</span>, and the last one because <span class="math-container">$\sum_{j=1}^{n} \frac{1}{j+\delta} \leq \log(n)+C$</span>.</p> <p>Therefore we have that <span class="math-container">$m^n \prod_{j=1}^{n} \frac{j}{j+\delta} \geq \frac{c_0 m^n}{n^{\delta}}$</span>, and <span class="math-container">$\lim_{n \rightarrow \infty} \frac{m^n}{n^{\delta}} = \infty$</span> since <span class="math-container">$m&gt;1$</span> and <span class="math-container">$\delta&gt;0$</span>.</p> <p>In particular, there exists <span class="math-container">$n_0&gt;0$</span> such that <span class="math-container">$m^{n_0} \prod_{j=1}^{n_0} \frac{j}{j+\delta}&gt;1$</span>.</p>
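A quick computational check of the conclusion (sample values of <span class="math-container">$m$</span> and <span class="math-container">$\delta$</span> are mine): the product decays only polynomially while <span class="math-container">$m^n$</span> grows geometrically, so the running value eventually crosses <span class="math-container">$1$</span>.

```python
# Grow m^n * prod_{j<=n} j/(j+delta) term by term until it exceeds 1.
m, delta = 1.1, 2.0
value, n = 1.0, 0
while value <= 1.0:
    n += 1
    value *= m * n / (n + delta)
# loop terminates: for these parameters the crossing happens near n ~ 90
```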
3,669,937
<p>I had this problem where I had the map <span class="math-container">$\varphi: \mathbb Z[i] \to \mathbb Z/(2)$</span> where <span class="math-container">$\varphi(a+bi)=\bar{a}+\bar{b}$</span>. I had to find the kernel and prove that it is an ideal. I proved that the kernel consists of all the Gaussian integers <span class="math-container">$a+bi$</span> such that <span class="math-container">$a+b$</span> is even, but I need to find a generator of the ideal. </p>
J. W. Tanner
615,567
<p>A generator of the ideal is <span class="math-container">$1+i$</span>.</p> <p>Let <span class="math-container">$a+bi$</span> be a multiple of <span class="math-container">$1+i.$</span> </p> <p>Then <span class="math-container">$a+bi= (x+yi)(1+i)=(x-y)+(x+y)i,$</span> so <span class="math-container">$a+b=2x$</span> is even.</p> <p>On the other hand, if <span class="math-container">$a+b$</span> is even, then so is <span class="math-container">$b-a$</span>, </p> <p>so <span class="math-container">$\dfrac{a+bi}{1+i}=\dfrac{(a+bi)(1-i)}2=\dfrac{a+b}2+\dfrac{(b-a)i}2\in \mathbb Z[i]$</span>.</p>
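A small brute-force confirmation of both directions on a box of Gaussian integers (the script and the range are mine):

```python
# (a+bi)/(1+i) = ((a+b) + (b-a)i)/2, so a+bi is a Gaussian-integer multiple
# of 1+i exactly when both a+b and b-a are even.
def divisible_by_one_plus_i(a, b):
    return (a + b) % 2 == 0 and (b - a) % 2 == 0

# Check that this coincides with "a+b is even" on a -10..10 box.
checked = all(
    divisible_by_one_plus_i(a, b) == ((a + b) % 2 == 0)
    for a in range(-10, 11)
    for b in range(-10, 11)
)
```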
167,262
<p>I make a circle with a given radius as below</p> <pre><code>Ctest = Table[{0.05*Cos[Theta*Degree], 0.05*Sin[Theta*Degree]}, {Theta, 1, 360}] // N; </code></pre> <p>Here is my list of data points</p> <pre><code>pts = {{0., 0.}, {0.00493604, -0.00994539}, {0.00987001, -0.0198918}, {0.0148019, -0.0298392}, {0.0197318, -0.0397877}, {0.0246596, -0.0497372}, {0.0295853, -0.0596877}, {0.0345089, -0.0696392}, {0.0394305, -0.0795918}, {0.04435, -0.0895453}, {0.0492675, -0.0994999}, {0.0541829, -0.109456}, {0.0590962, -0.119412}, {0.0640075, -0.12937}, {0.0689166, -0.139328}, {0.0738238, -0.149288}, {0.0787288, -0.159249}, {0.0836318, -0.169211}, {0.0885327, -0.179173}, {0.0934316, -0.189137}, {0.0983284, -0.199102}, {0.103223, -0.209068}, {0.108116, -0.219034}, {0.113006, -0.229002}, {0.117895, -0.238971}, {0.122781, -0.248941}, {0.127666, -0.258912}}; </code></pre> <p>I would like to find the intersection between the circle and the list of data points, as shown in the figure below. How can I make the program do this automatically? I mean that if one day I change the radius of the circle, the program should still work.</p> <p><a href="https://i.stack.imgur.com/ckZuP.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ckZuP.jpg" alt="enter image description here"></a></p>
bill s
1,783
<p>Another way to specify the condition on a variable that it be either zero or 1 is to observe that the square of the variable must equal itself, e.g. x^2==x has only solutions x=0 and x=1. So we could add these conditions to the solution. Here is a small example:</p> <pre><code>Solve[Flatten[{2 x[1] + 3 x[2] - 2 x[3] == 3, x[#]^2 == x[#] &amp; /@ Range[3]}], x[#] &amp; /@ Range[3]] {{x[1] -&gt; 0, x[2] -&gt; 1, x[3] -&gt; 0}, {x[1] -&gt; 1, x[2] -&gt; 1, x[3] -&gt; 1}} </code></pre> <p>which are indeed the only solutions for this toy problem.</p>
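For comparison, the same toy problem can be brute-forced over 0/1 assignments (which is exactly what the <code>x^2 == x</code> constraints encode); a small Python cross-check, not part of the Mathematica answer:

```python
from itertools import product

# Enumerate all binary triples and keep those satisfying 2*x1 + 3*x2 - 2*x3 == 3.
solutions = [
    (x1, x2, x3)
    for x1, x2, x3 in product((0, 1), repeat=3)
    if 2 * x1 + 3 * x2 - 2 * x3 == 3
]
# matches the Solve output: (0,1,0) and (1,1,1)
```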
250,074
<p>How can one generate a random vector <span class="math-container">$v=[v_1, v_2, v_3]^T$</span> satisfying <span class="math-container">$\sqrt{v_1v_1^* + v_2 v_2^* + v_3 v_3^*} = 1$</span>, where <span class="math-container">$T$</span> and <span class="math-container">$*$</span> denote the transpose and complex-conjugate, respectively?</p>
mikado
36,788
<p>This is a very simple 1-liner giving a list of <code>n</code> such random vectors</p> <pre><code>sphericalrandom[n_] := Normalize /@ RandomVariate[NormalDistribution[0, 1], {n, 3}] </code></pre> <p>Note that these are uniformly distributed on the sphere, since the multivariate normal distribution is invariant under rotation (consider the covariance matrix, <code>R.I.Transpose[R] = I</code>)</p> <p>We can easily verify that the requirement is met</p> <pre><code>sphericalrandom[6] (* {{-0.277119, -0.913442, -0.298042}, {0.784793, 0.124294, -0.607166}, {0.0794014, -0.138744, 0.98714}, {0.477633, 0.578417, -0.661287}, {0.182014, -0.443811, 0.877441}, {-0.967141, -0.236544, 0.0931965}} *) # . # &amp; /@ % (* {1., 1., 1., 1., 1., 1.} *) </code></pre> <p>The question seems to request complex numbers subject to the same criteria. This is very easily done</p> <pre><code>sphericalrandomcomplex[n_] := Normalize /@ (RandomVariate[ NormalDistribution[0, 1], {n, 3, 2}] . {1, I}) </code></pre> <p>Again, the normalisation checks</p> <pre><code>sphericalrandomcomplex[6] (* {{0.0291155 + 0.299873 I, 0.118097 + 0.105762 I, 0.872673 + 0.350054 I}, {-0.476609 - 0.271261 I, 0.762756 + 0.326494 I, 0.0928805 - 0.0473263 I}, {0.482045 + 0.36627 I, -0.627184 - 0.11854 I, 0.183277 + 0.438722 I}, {-0.445669 - 0.472578 I, -0.0542457 + 0.260653 I, 0.129732 + 0.700241 I}, {0.398994 + 0.191309 I, 0.147118 - 0.353526 I, -0.455474 - 0.670913 I}, {-0.157891 + 0.0766977 I, 0.66208 + 0.28717 I, -0.261425 + 0.616465 I}} *) # . Conjugate[#] &amp; /@ % (* {1. + 0. I, 1. + 0. I, 1. + 0. I, 1. + 0. I, 1. + 0. I, 1. + 0. I} *) </code></pre>
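The same construction translates directly to Python/NumPy (my own sketch of the answer's approach, for readers outside Mathematica): normalize i.i.d. standard normal triples, relying on the same rotation-invariance argument, with the complex variant handled identically.

```python
import numpy as np

rng = np.random.default_rng(1)

def spherical_random(n):
    # n real unit vectors in R^3, uniform on the sphere
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def spherical_random_complex(n):
    # n complex vectors with sum |v_i|^2 = 1
    v = rng.normal(size=(n, 3)) + 1j * rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

real_norms = np.linalg.norm(spherical_random(6), axis=1)
complex_norms = np.linalg.norm(spherical_random_complex(6), axis=1)
```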
2,618,746
<p>The distance between two stations $X$ and $Y$ is 220 km.</p> <p>Trains $P$ and $Q$ leave station $X$ at 7 am and 8:15 am respectively at the speed of 25 km/hr and 20 km/hr respectively for journey towards $Y$.</p> <p>Train $R$ leaves station $Y$ at 11:30 am at a speed of 30 km/hr for journey towards $X$. </p> <p>When and where will $P$ be equidistant from $Q$ and $R$ ?</p>
lab bhattacharjee
33,337
<p>Let the time of equidistance be $t$ hours after $7$ AM.</p> <p>So, $Q$ will travel $20(t-5/4)=20t-25$ km,</p> <p>$P$ will travel $25t$ km,</p> <p>and $R$ will travel $30(t-9/2)$ km, hence is $220- 30(t-9/2)=355-30t$ km away from $X$.</p> <p>All three trains are on the same line, with $P$ ahead of $Q$ and behind $R$, so $P$ is equidistant from $Q$ and $R$ exactly when its distance from $X$ is the average of theirs. We need $$20t-25+355-30t=2\cdot25t,$$ which gives $t=5.5$, i.e. 12:30 PM, with $P$ at $25\cdot5.5=137.5$ km from $X$.</p>
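The numbers that fall out of this equation can be double-checked in a few lines (the final solve below is mine, the answer leaves it to the reader):

```python
# Positions measured from X, t hours after 7 AM; the equidistance condition
# 20t - 25 + 355 - 30t = 2*25t reduces to 330 = 60t.
t = 330 / 60          # 5.5 hours after 7 AM, i.e. 12:30 PM
pos_P = 25 * t        # 137.5 km from X
pos_Q = 20 * t - 25   # 85 km from X
pos_R = 355 - 30 * t  # 190 km from X
# P is 52.5 km from each of Q and R
```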
3,928,937
<p>Determine all the solutions of the congruence<br /> <span class="math-container">$x^{85} ≡ 25 \pmod{31}$</span><br /> using the index function to base <span class="math-container">$3$</span> modulo <span class="math-container">$31$</span>.<br /> It is clear to me that <span class="math-container">$3$</span> is a primitive root modulo <span class="math-container">$31$</span>, but how do I use this information in the solution?</p>
lab bhattacharjee
33,337
<p>Using the <a href="https://mathworld.wolfram.com/DiscreteLogarithm.html" rel="nofollow noreferrer">discrete logarithm</a> with respect to base <span class="math-container">$3$</span>,</p> <p><span class="math-container">$85\cdot$</span>ind<span class="math-container">$_3x\equiv2\cdot$</span>ind<span class="math-container">$_35\pmod{30}$</span></p> <p>As <span class="math-container">$85\equiv-5\pmod{30},$</span></p> <p><span class="math-container">$-5\cdot$</span>ind<span class="math-container">$_3x\equiv2\cdot$</span>ind<span class="math-container">$_35\pmod{30}\ \ \ \ (1)$</span></p> <p><span class="math-container">$3^3\equiv-4,3^5\equiv9\cdot(-4)\equiv-5\pmod{31}$</span></p> <p>As <span class="math-container">$3$</span> is a primitive root <span class="math-container">$\pmod{31}, -1\equiv3^{30/2}\pmod{31}$</span></p> <p><span class="math-container">$\implies5\equiv3^{15}\cdot3^5=3^{20}\pmod{31}$</span>, i.e. ind<span class="math-container">$_35=20$</span></p> <p>By <span class="math-container">$(1), -5\cdot$</span>ind<span class="math-container">$_3x\equiv2\cdot20\equiv-20\pmod{30}$</span></p> <p>Dividing throughout by <span class="math-container">$-5$</span> (and the modulus by <span class="math-container">$\gcd(5,30)=5$</span>),</p> <p>ind<span class="math-container">$_3x\equiv4\pmod6$</span></p> <p><span class="math-container">$\implies x\equiv3^{4+6k}\pmod{31}$</span> where <span class="math-container">$0\le4+6k\le30\iff0\le k\le4$</span></p>
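A brute-force cross-check of the final answer (my own script, not part of the original answer): the residues satisfying the congruence should be exactly <span class="math-container">$3^{4+6k} \bmod 31$</span> for <span class="math-container">$k=0,\dots,4$</span>.

```python
# Enumerate all residues mod 31 and compare with the index-function answer.
brute = sorted(x for x in range(31) if pow(x, 85, 31) == 25)
via_index = sorted(pow(3, 4 + 6 * k, 31) for k in range(5))
# both lists: [7, 14, 19, 25, 28]
```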
604,824
<p>So the puzzle is like this:</p> <blockquote> <p>An ant is out from its nest searching for food. It travels in a straight line from its nest. After this ant gets 40 ft away from the nest, suddenly a rain starts to pour and washes away all its scent trail. This ant has only the strength to travel 280 ft more; then it will starve to death. Suppose this ant's nest is a huge wall and this ant can travel in whatever curve it wants; how can this ant find its way back? </p> </blockquote> <p>I interpret it as: I start at the origin. I know that there is a straight line at distance 40 ft from the origin, but I don't know the direction. Along what parametric curve will I surely hit the line as the parameter $t$ increases, while the total arc length is less than or equal to 280 ft?</p> <p>I asked a friend of mine who is a PhD in math; he told me this is a calculus of variations problem. I wonder if I could use basic calculus stuff to solve this puzzle (I have learned ODE as well). My hunch tells me that a spiral should be used as the path, yet I am not sure what kind of spiral to use here. Any hint shall be appreciated. Thanks dudes!</p> <h3>Clarification by dfeuer</h3> <p>As some people seem to be having trouble understanding the problem description, I'll add an equivalent one that should be clear:</p> <p>Starting in the center of a circle of radius 40 ft, draw a path with the shortest possible length that intersects every line that is tangent to the circle.</p>
Will Nelson
62,773
<p>Rescale so that radius $R=1$. Assume the starting point (the center of the circle) is at $(0,0)$. Unless I've made a calculation error (I don't think so), the minimum is achieved with the following path:</p> <ul> <li><p>Straight line from $(0,0)$ to $\left(\frac{\sqrt{3}}{3},-1\right)$.</p></li> <li><p>Straight line from $\left(\frac{\sqrt{3}}{3},-1\right)$ to $\left(\frac{\sqrt{3}}{2},-\frac{1}{2}\right)$.</p></li> <li><p>Follow the circular arc counterclockwise about center $(0,0)$ from $\left(\frac{\sqrt{3}}{2},-\frac{1}{2}\right)$ to $(-1,0)$.</p></li> <li><p>Straight line from $(-1,0)$ to $(-1,-1)$.</p></li> </ul> <p>The length of each of these subpaths is $\frac{2}{3}\sqrt{3}$, $\frac{1}{3}\sqrt{3}$, $\frac{7}{6}\pi$, and $1$. The total length is $$ \frac{7}{6}\pi + \sqrt{3} + 1 $$ or for R=40, $$ 40\left(\frac{7}{6}\pi + \sqrt{3} + 1\right) \approx 255.89. $$</p> <p>I'll try to sketch a proof of this when I get a chance.</p> <hr> <p><strong>UPDATE:</strong> I'll begin to sketch a proof. Consider the unit circle $C$ centered at the origin in $\mathbb{R}^2$. Consider the line $l$ tangent to $C$ defined by $y=-1$. Fix two points $(x_+,-1)$ and $(x_-,-1)$ on $l$ such that $x_+\ge 0$ and $x_-\le 0$. Suppose $\Gamma$ is a path from $(x_+,-)$ to $(x_-,-1)$ such that the convex hull of $\Gamma$ includes $C$. Let $\Gamma'$ be any path from $(0,0)$ to $(x_+,-1)$ and then along $\Gamma$ to $(x_-,-1)$. Observe that $\Gamma'$ is an admissible solution to the problem, i.e., all tangent lines of $C$ intersect $\Gamma'$.</p> <p>It turns out that the optimal solution to the problem must be such a path $\Gamma'$. That's the hard part of the proof, but it's intuitively quite clear if you think about it. And it's not hard to find the minimum length $\Gamma'$. To do so, fix $x_-\le 0$ and $x_+\ge 0$, then solve for the minimum length $\Gamma'$ with these values for $x_-$ and $x_+$. Then vary $x_-$ and $x_+$ to solve for the minimum length path overall. 
The minimum length $\Gamma'$ for fixed $x_-$ and $x_+$ is easy to find.</p> <p>More details when I get a chance.</p>
604,824
<p>So the puzzle is like this:</p> <blockquote> <p>An ant is out from its nest searching for food. It travels in a straight line from its nest. After this ant gets 40 ft away from the nest, suddenly a rain starts to pour and washes away all its scent trail. This ant has only the strength to travel 280 ft more; then it will starve to death. Suppose this ant's nest is a huge wall and this ant can travel in whatever curve it wants; how can this ant find its way back? </p> </blockquote> <p>I interpret it as: I start at the origin. I know that there is a straight line at distance 40 ft from the origin, but I don't know the direction. Along what parametric curve will I surely hit the line as the parameter $t$ increases, while the total arc length is less than or equal to 280 ft?</p> <p>I asked a friend of mine who is a PhD in math; he told me this is a calculus of variations problem. I wonder if I could use basic calculus stuff to solve this puzzle (I have learned ODE as well). My hunch tells me that a spiral should be used as the path, yet I am not sure what kind of spiral to use here. Any hint shall be appreciated. Thanks dudes!</p> <h3>Clarification by dfeuer</h3> <p>As some people seem to be having trouble understanding the problem description, I'll add an equivalent one that should be clear:</p> <p>Starting in the center of a circle of radius 40 ft, draw a path with the shortest possible length that intersects every line that is tangent to the circle.</p>
Constructor
114,355
<h3>Historical summary</h3> <p>It is the famous problem invented by R. Bellman in 1956 [1]. It is known as <a href="http://mathworld.wolfram.com/LostinaForestProblem.html" rel="noreferrer" title="Page on &#39;Wolfram MathWorld&#39; site">'Lost in a Forest Problem'</a>:</p> <blockquote> <p>What is the best path to follow in order to escape a forest of known shape and dimensions?</p> </blockquote> <p>The subproblem for the half-plane forest with known distance from the boundary was solved by J. R. Isbell in 1957 [2]. He described the path with the length</p> <p>$$\left(\sqrt{3}+\frac{7\pi}{6}+1\right)d$$</p> <p>where $d$ is the distance from the boundary of the forest. He gave the proof in outline that his path had minimal length. The complete and detailed proof was given by H. Joris in 1980 [3]. The consideration about this problem can also be found in the book [4].</p> <p>The overview of the results on the general problem up to 2004 can be found in [5].</p> <h3>Isbell's path</h3> <p>The form of the shortest path:</p> <p><img src="https://i.stack.imgur.com/41Q3D.png" alt="The shortest path"></p> <p>As @Will Nelson wrote the shortest path consists of the 3 line segments with lengths $\frac{2}{\sqrt{3}}d$, $\frac{1}{\sqrt{3}}d$ and $d$ and the arc of the circle with radius $d$ which subtends the angle $\frac{7\pi}{6}$. The total length of the path is</p> <p>$$L_{\min}(d)=\frac{2}{\sqrt{3}}d+\frac{1}{\sqrt{3}}d+\frac{7\pi}{6}d+d=\left(\sqrt{3}+\frac{7\pi}{6}+1\right)d\approx 6.397d$$</p> <p>In our case $d=40$ so $L_{\min}\approx 255.890&lt;280$. The ant can survive!</p> <h3>References</h3> <ol> <li><p>R. Bellman, Minimization Problem, Bull. Amer. Math. Soc. 62 (1956) p. 270. 
[<a href="http://www.ams.org/journals/bull/1956-62-03/S0002-9904-1956-10021-9/S0002-9904-1956-10021-9.pdf" rel="noreferrer">Available online</a> at the <a href="http://www.ams.org/journals/bull/1956-62-03/S0002-9904-1956-10021-9/" rel="noreferrer">AMS <em>BAMS</em> archive</a>, free of charge.]</p></li> <li><p>J. R. Isbell, An optimal search pattern, Naval Res. Logist. Quart. 4 (1957) pp. 357-359. [<a href="http://onlinelibrary.wiley.com/doi/10.1002/nav.3800040409/pdf" rel="noreferrer">Available online</a> at <a href="http://onlinelibrary.wiley.com/doi/10.1002/nav.3800040409/abstract" rel="noreferrer">Wiley Online Library</a>, 35$ cost.]</p></li> <li><p>H. Joris, Le chasseur perdu dans la foret, Elem. Math. 35 (1980) pp. 1-14. [<a href="http://infoscience.epfl.ch/record/130395/files/PPN378850199_0035__0_0.pdf" rel="noreferrer">Available online</a> at the site of <a href="http://infoscience.epfl.ch/record/130395" rel="noreferrer">École polytechnique fédérale de Lausanne</a>, free of charge.]</p></li> <li><p>Z. A. Melzak, Companion to Concrete Mathematics: Mathematical Techniques and Various Applications, Wiley, New York, 1973, pp. 150-153. [I couldn't find this book online. According to @Barry Cipra <a href="http://books.google.com" rel="noreferrer">Google Books</a> let some users see a big chunk of this book with relevant pages.]</p></li> <li><p>S. R. Finch and J. E. Wetzel, Lost in a Forest, American Mathematical Monthly 111 (2004) pp. 645-654. [<a href="http://www.maa.org/sites/default/files/pdf/upload_library/22/Ford/Finch645-654.pdf" rel="noreferrer">Available online</a> at the site of <a href="http://www.maa.org/programs/maa-awards/writing-awards/lost-in-a-forest" rel="noreferrer">Mathematical Association of America</a>, free of charge.]</p></li> </ol> <p><strong>Update:</strong> Some references were added.</p>
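The survival margin is easy to confirm numerically (a small script of mine, using the segment lengths quoted above for <span class="math-container">$d=40$</span> ft):

```python
from math import pi, sqrt

# Isbell's optimal path for distance d: two line segments, a circular arc
# subtending 7*pi/6, and a final segment of length d.
d = 40.0
segments = (2 / sqrt(3) * d, 1 / sqrt(3) * d, 7 * pi / 6 * d, d)
L = (sqrt(3) + 7 * pi / 6 + 1) * d   # closed form for the total length
# L is about 255.89 ft, comfortably under the ant's 280 ft budget
```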
959,322
<p>Solve $$ \sum_{k = 1}^{ \infty} \frac{\sin 2k}{k}$$</p> <p>I first tried to use Euler's formula</p> <p>$$ \frac{1}{2i} \sum_{k = 1}^{ \infty} \frac{1}{k} \left( e^{2ik} - e^{-2ik} \right)$$</p> <p>However, to use the geometric series formula here, I must subtract the $k=0$ term, and that term is undefined because of the $1/k$. I also end up with something that diverges in my calculations, yet since $\sin 2k$ is bounded the series should not diverge.</p>
Leucippus
148,155
<p>The summation is as follows: \begin{align} S &amp;= \sum_{n=1}^{\infty} \frac{\sin(2an)}{n} = \frac{1}{2i} \, \sum_{n=1}^{\infty} \frac{ e^{2ai n} - e^{- 2ai n}}{n} \\ &amp;= - \frac{1}{2i} \left( \ln(1 - e^{2ai}) - \ln(1 - e^{-2ai}) \right) \\ &amp;= - \frac{1}{2i} \ln\left( \frac{1 - e^{2ai}}{1 - e^{- 2ai}} \right) = - \frac{1}{2i} \ln\left( - \frac{e^{ai}}{e^{-ai}} \cdot \frac{\sin(a)}{\sin(a)} \right) \\ &amp;= - \frac{1}{2i} \ln\left( - e^{2ai} \right) = - \frac{1}{2i} \left( \ln(e^{- \pi i}) + \ln(e^{2ai}) \right) \\ &amp;= - \frac{1}{2i} \left( - \pi i + 2ai \right) = \frac{ \pi - 2a}{2}. \end{align} This provides \begin{align} \sum_{n=1}^{\infty} \frac{\sin(2an)}{n} = \frac{ \pi - 2a}{2}. \end{align}</p>
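The closed form can be sanity-checked by partial sums (my own numerical check; the identity holds for <span class="math-container">$0&lt;a&lt;\pi$</span>, and the question's case is <span class="math-container">$a=1$</span>):

```python
from math import sin, pi

# Partial sum of sum_{n>=1} sin(2*a*n)/n versus the closed form (pi - 2a)/2.
a = 1.0
N = 200_000
partial = sum(sin(2 * a * n) / n for n in range(1, N + 1))
closed_form = (pi - 2 * a) / 2
```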
379,194
<p>Let $X$ be a topological space and $X^*$ be its subspace. It is stated in my textbook that if $c(A)$ represents the closure of set $A$ in $X$, then $c(A) \bigcap X^*$ is closed in $X^*$. </p> <p>A closed set is one which contains all its limit points, and a limit point of a set is a point such that every open set containing it contains a different point from the aforementioned set. </p> <p>Let $l$ be an external limit point of set $A$. Every open set containing $l$ has to contain a point in $A$; let's call it $p$. Let the open set containing $p$ and $l$ not contain any other point in $A$. I don't see why that should be a problem at all. Let the subspace $X^*$ contain $l$, but not $p$. </p> <p>$c(A) \bigcap X^*$ will contain $l$, but $l$ will not be a limit point of $A$, as there is an open set containing $l$ and no point in $A \bigcap X^*$ ($p$ is not there in $X^*$). How is $c(A) \bigcap X^*$ closed in $X^*$ then?</p>
Stefan Hansen
25,632
<p><strong>Hints</strong>: If $f$ and $g$ are differentiable at $x$, then $$ (f+g)'(x)=f'(x)+g'(x), $$ and if $a$ is a constant, then $$ (af)'(x)=a\cdot f'(x) $$ If $g$ is differentiable at $x$ and $f$ is differentiable at $g(x)$ then $$ (f\circ g)'(x)=f'(g(x))\cdot g'(x). $$ And lastly if $f(x)=x^n$ for $n\in\mathbb{N}$, then $f'(x)=n\cdot x^{n-1}$.</p> <p>These are all the necessary tools to know in order for you to find the derivative.</p>
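The rules can be spot-checked numerically on a made-up example (the function below is mine, not from the question): for <span class="math-container">$f(x)=(x^2+1)^3$</span>, the chain and power rules give <span class="math-container">$f'(x)=3(x^2+1)^2\cdot 2x$</span>.

```python
def f(x):
    return (x * x + 1) ** 3

def f_prime(x):
    # chain rule + power rule: 3*(x^2+1)^2 * 2x
    return 3 * (x * x + 1) ** 2 * (2 * x)

def numeric_derivative(g, x, h=1e-6):
    # central difference approximation
    return (g(x + h) - g(x - h)) / (2 * h)

errors = [abs(numeric_derivative(f, x) - f_prime(x)) for x in (-1.0, 0.5, 2.0)]
```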
3,623,924
<p>Trying to solve the following problem:</p> <p>Let <span class="math-container">$f(x)$</span> be a continuous real-valued function on <span class="math-container">$[0,3]$</span>. Given any <span class="math-container">$\varepsilon&gt;0$</span> prove there exists a polynomial, <span class="math-container">$p(x)$</span>, such that <span class="math-container">$\int_0^3|f(x)-p(x)|\,dx&lt;\varepsilon$</span></p> <p>This almost seems trivially true, which leads me to believe that I'm thinking about it incorrectly. If by Weierstrass theorem we know there exists a sequence of polynomials <span class="math-container">$P_n(x)$</span> in <span class="math-container">$[0,3]$</span> such that <span class="math-container">$\lim_{n \to \infty} P_n(x)=f(x)$</span>, then if we set <span class="math-container">$p(x)=\lim_{n \to \infty} P_n(x)=f(x)$</span>, then <span class="math-container">$|f(x)-p(x)|=|f(x)-f(x)|=0$</span> and therefore it is obviously true that <span class="math-container">$\int_0^3|f(x)-p(x)|\,dx&lt;\varepsilon$</span>. I'm almost certain this is not correct, so what am I doing wrong?</p>
user27182
22,020
<p>Stone–Weierstrass says that for any <span class="math-container">$\epsilon &gt; 0$</span> there is a polynomial <span class="math-container">$p(x)$</span> such that <span class="math-container">$\forall x \in [0, 3]$</span> we have <span class="math-container">$|f(x) - p(x)| &lt; \epsilon / 4$</span>. </p> <p>Then <span class="math-container">$$ \int_0^3 |f(x) - p(x)|\text{d}x \le (3 - 0) \times \sup_{x \in [0, 3]}|f(x) - p(x)| \le 3 \times \frac{\epsilon}{4} &lt; \epsilon $$</span></p>
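To see the bound in action, here is a numerical illustration (my own choices: <span class="math-container">$f=\exp$</span> and a degree-10 least-squares fit standing in for the Stone–Weierstrass polynomial):

```python
import numpy as np

# Fit a degree-10 polynomial to exp on [0, 3] and estimate the L^1 error.
f = np.exp
x = np.linspace(0.0, 3.0, 2001)
p = np.polynomial.Polynomial.fit(x, f(x), 10)   # least-squares fit, scaled domain
l1_error = np.mean(np.abs(f(x) - p(x))) * 3.0   # Riemann estimate of int_0^3 |f - p|
# the error is far below any reasonable epsilon
```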
2,311,979
<p>Let $A = (a_{i,j})_{n\times n}$ and $B = (b_{i,j})_{n\times n}$.</p> <p>$(AB) = (c_{i,j})_{n\times n}$, where $c_{i,j} = \sum_{k=1}^n a_{i,k} b_{k,j}$, so</p> <p>$(AB)^T = (c_{j,i})$, where $c_{j,i} = \sum_{k=1}^n a_{j,k}b_{k,i} $, and $B^T = b_{j,i}$ and $A^T = a_{j,i}$, so </p> <p>$B^T A^T = d_{j,i}$ where $d_{j,i} = \sum_{k=1}^n b_{j,k} a_{k,i}$, but this means that $(AB)^T \not = (B^T A^T)$, so where is the problem in this derivation?</p> <p>Edit: To be clear, let's be more precise. Let $A = (a_{x,y})_{p\times n}$ and $B = (b_{z,t})_{n\times q}$.</p> <p>So, $A^T_{n\times p} = (a_{y,x})$ and $B^T_{q\times n} = (b_{t,z})$, which implies</p> <p>$$(B^T A^T)_{i,j}^{q \times p} = \sum_{k=1}^n b_{i,k} a_{k,j},$$ and</p> <p>$(AB)_{c,d}^{p\times q} = \sum_{k=1}^n a_{c,k} b_{k,d}$, which implies $$((AB)^T)_{d,c}^{q\times p} = \sum_{k=1}^n a_{d,k} b_{k,c}.$$ Since $i,d \in \{1,...,q\}$ and $j,c \in \{1,...,p\}$, $$((AB)^T)_{d,c}^{q\times p} = \sum_{k=1}^n a_{d,k} b_{k,c} = \sum_{k=1}^n a_{i,k} b_{k,j},$$ which again concludes that $(AB)^T \not = (B^T A^T)$.</p>
Sangchul Lee
9,340
<p>You seem to know that $(i,j)$-entry of $B^T$ is $b_{j,i}$, that is probably why you are writing $B^T = (b_{j,i})$. The issue is, this notation is confusing as it is not telling you which index denotes the row and which denotes the column. I guess this is where you get confused.</p> <p>To make things clear, let us use some unconventional notation: A matrix $A$ whose $(i,j)$-entry is $a_{i,j}$ is denoted by the following function notation</p> <p>$$A = [(i,j)\mapsto a_{i,j}].$$</p> <p>So if you know that $A$ is given by $A = [\text{some function of the pair }(i, j)]$, then you simply evaluate that function at $(i,j)$ to retrieve its $(i,j)$-entry. This seemingly stupid tautology is in fact helping because the transpose of $A$ is written by $A^T = [(i,j) \mapsto a_{j,i}]$, where the role of $i$ and $j$ are now explicit. Then</p> <p>\begin{align*} \text{$(i,j)$-entry of $B^TA^T$} &amp;= \sum_{k=1}^{n} (\text{$(i,k)$-entry of $B^T$})\cdot(\text{$(k,j)$-entry of $A^T$}) \\ &amp;= \sum_{k=1}^{n} (\text{value of $(x, y)\mapsto b_{y,x}$ at $(x,y) = (i,k)$}) \\ &amp;\hspace{3em} \cdot(\text{value of $(z, t)\mapsto a_{t,z}$ at $(z,t) = (k,j)$}) \\ &amp;= \sum_{k=1}^{n} b_{k,i} a_{j,k} = \sum_{k=1}^{n} a_{j,k}b_{k,i} = \text{$(j,i)$-entry of $AB$}. \end{align*}</p> <hr> <p>If it is still not convincing, it never hurts to consider a concrete example. Consider</p> <p>$$ A = \begin{pmatrix} a_{11} &amp; a_{12} &amp; a_{13} \\ a_{21} &amp; a_{22} &amp; a_{23} \end{pmatrix}, \qquad B = \begin{pmatrix} b_{11} &amp; b_{12} \\ b_{21} &amp; b_{22} \\ b_{31} &amp; b_{32} \end{pmatrix} $$</p> <p>Then $[AB]_{11} = a_{11}b_{11} + a_{12}b_{21} + a_{13}b_{31}$ as expected. Now consider their transpose:</p> <p>$$ A^{T} = \begin{pmatrix} a_{11} &amp; a_{21} \\ a_{12} &amp; a_{22} \\ a_{13} &amp; a_{23} \end{pmatrix}, \qquad B^{T} = \begin{pmatrix} b_{11} &amp; b_{21} &amp; b_{31} \\ b_{12} &amp; b_{22} &amp; b_{32} \end{pmatrix}. 
$$</p> <p>Now the $(1,2)$-entry of the product $B^T A^T$ is given by</p> <p>\begin{align*} [B^T A^T]_{12} &amp;= [B^T]_{11}[A^T]_{12} + [B^T]_{12}[A^T]_{22} + [B^T]_{13}[A^T]_{32} \\ &amp;= b_{11}a_{21} + b_{21}a_{22} + b_{31}a_{23} \\ &amp;= a_{21}b_{11} + a_{22}b_{21} + a_{23}b_{31} \\ &amp;= [AB]_{21} \end{align*}</p>
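<p>The identity $(AB)^T = B^T A^T$ is also easy to spot-check on random integer matrices (a pure-Python sketch):</p>

```python
import random

def matmul(A, B):
    """Multiply an m x n matrix by an n x p matrix (lists of lists)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(M):
    return [list(row) for row in zip(*M)]

random.seed(0)
A = [[random.randint(-5, 5) for _ in range(3)] for _ in range(2)]  # 2 x 3
B = [[random.randint(-5, 5) for _ in range(4)] for _ in range(3)]  # 3 x 4

lhs = transpose(matmul(A, B))              # (AB)^T, a 4 x 2 matrix
rhs = matmul(transpose(B), transpose(A))   # B^T A^T, also 4 x 2
```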
1,463,881
<blockquote> <p>By considering $\sum_{r=1}^n z^{2r-1}$ where z= $\cos\theta + i\sin\theta$, show that if $\sin\theta \neq 0$, $$\sum_{r=1}^n \sin(2r-1)\theta=\frac{\sin^2n\theta}{\sin\theta}$$</p> </blockquote> <p>I couldn't solve this at first but with some hints some of you gave, I was able to come up with my own solution. Here it is:</p> <p>First, we want to consider what is given in the question, $$\begin{align} \sum_{r=1}^n z^{2r-1} &amp; =z+z^3+z^5...\\ &amp; = z + z(z^2)+z^3(z^2)+...\\ \end{align}$$ In a Geometric Progression, the sum is given by $$\sum_{r=1}^n z^{2r-1}=S_n = \frac{a (1-r^n)}{1-r}=\frac{z(1-(z^2)^n)}{1-z^2} = \frac {z-z^{2n+1}}{1-z^2}=\frac {1-z^{2n}}{z^{-1}-z} $$ Now, substitute $z = \cos \theta +i \sin \theta$</p> <p>$$\begin{align} \sum_{r=1}^n z^{2r-1} &amp; =\frac{1-(\cos (2n\theta) + i \sin(2n\theta))}{\cos\theta - i\sin\theta-(\cos\theta+i\sin\theta)} \\ &amp; = \frac{1-\cos (2n\theta) - i \sin(2n\theta)}{-2i\sin\theta} \cdot \frac{(i\sin\theta)}{(i\sin\theta)}\\ &amp; = \frac{i\sin\theta-i\sin\theta \cos (2n\theta) + \sin\theta \sin(2n\theta)}{2\sin^2\theta} \\ \end{align}$$ Equating imaginary parts and using $1-\cos(2n\theta)=2\sin^2 n\theta$, $$\sum_{r=1}^n \sin(2r-1)\theta = \frac{\sin\theta (1 - \cos(2n\theta))}{2\sin^2\theta} = \frac{1 - \cos(2n\theta)}{2\sin\theta} = \frac{\sin^2n\theta}{\sin\theta} $$</p>
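<p>The identity in the blockquote can be spot-checked numerically:</p>

```python
import math

def lhs(n, theta):
    # Left side: sum of sin((2r-1) * theta) for r = 1..n
    return sum(math.sin((2 * r - 1) * theta) for r in range(1, n + 1))

def rhs(n, theta):
    # Right side: sin^2(n*theta) / sin(theta)
    return math.sin(n * theta) ** 2 / math.sin(theta)

max_err = max(abs(lhs(n, t) - rhs(n, t))
              for n in (1, 2, 5, 10)
              for t in (0.3, 1.1, 2.7))
```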
K_P
181,185
<p>The problem should be equivalent to $x_1 +x_2+x_3+x_4+x_5=5$ with $0≤x_1≤3$ and $0≤x_2&lt;3$ and $x_3 \ge 0$. The number of nonnegative solutions in $r$ variables summing to $n$ is ${n+r-1\choose r-1}$, which gives ${9 \choose 4}=126 $ solutions without the restrictions. Applying the restrictions, we need to get rid of the instances of $x_1=4$ (there are 4 of them) and $x_1=5$ (one). Also get rid of $x_2=3$ (by the formula above it is ${2+4-1 \choose 4-1}=10$), $x_2=4$ (4 solutions) and $x_2=5$ (1 solution). Since there is no overlap we can add all these instances and subtract them from 126 to give $126-(1+4+10+1+4)=106$</p>
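<p>The count of 106 can be confirmed by brute force:</p>

```python
from itertools import product

# x1 + x2 + x3 + x4 + x5 = 5 with 0 <= x1 <= 3, 0 <= x2 < 3, x3, x4, x5 >= 0
count = sum(1
            for x1 in range(4)                      # 0..3
            for x2 in range(3)                      # 0..2
            for x3, x4, x5 in product(range(6), repeat=3)
            if x1 + x2 + x3 + x4 + x5 == 5)
```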
1,001,839
<p>$$\frac{\pi x y^2}{4}$$</p> <p>Is this function continuous? I really haven't worked with continuity with multivariable funtions before, so I am a little stumped. How would one answer such a question? </p> <p>I'm reading a bit ahead of my level, and I'm seeing all these epsilon delta things... is that what I am supposed to use? Makes very little sense.... </p>
Race Bannon
188,877
<p>You are asking if the function $f(x,y) = \frac{\pi xy^{2}}{4}$ is continuous? If you had a function of x or y alone, then it'd be easy to see that it's continuous, right? Together, they should still be continuous... I hope this agrees with your intuition. To prove that the function is continuous, you just need the definition of continuity and the definition of the limit for multivariate functions. Check this out!</p> <p><a href="http://www.math.jhu.edu/mathcourses/202/Florin_notes/notes_02-15-05.pdf" rel="nofollow">http://www.math.jhu.edu/mathcourses/202/Florin_notes/notes_02-15-05.pdf</a></p>
189,380
<p>How can I solve this ODE:</p> <p>$y(x)+Ay'(x)+Bxy'(x)+Cy''(x)+Dx^{2}y''(x)=0$</p> <p>Can you please also show the derivation.</p>
Robert Israel
8,508
<p>I don't think you'll find an "elementary" solution in general. Maple finds a rather complicated solution involving hypergeometric functions: $$\displaystyle S\, := \,y \left( x \right) ={\it \_C1}\,{\mbox{$_2$F$_1$}(1/2\,{\frac {-d+B+ \sqrt{{d}^{2}+ \left( -2\,B-4 \right) d+{B}^{2}}}{d}},1/2\,{\frac {-d+B- \sqrt{{d}^{2}+ \left( -2\,B-4 \right) d+{B}^{2}}}{d}};\,-1/2\\ \mbox{}\, \left( A-B \sqrt{-{\frac {C}{d}}} \right) {d}^{-1} \left( \sqrt{-{\frac {C}{d}}} \right) ^{-1};\,1/2\, \left( C- \sqrt{-{\frac {C}{d}}}xd \right) {C}^{-1})}+{\it \_C2}\, \left( C- \sqrt{-{\frac {C}{d}}}xd \right) ^{1/2\, \left( \left( -B+2\,d \right) \sqrt{-{\frac {C}{d}}}+A \right) {d}^{-1} \left( \sqrt{-{\frac {C}{d}}} \right) ^{-1}}{\mbox{$_2$F$_1$}(1/2\, \left( d \sqrt{-{\frac {C}{d}}}- \sqrt{{d}^{2}+ \left( -2\,B-4 \right) d+{B}^{2}} \sqrt{-{\frac {C}{d}}}+A \right) {d}^{-1} \left( \sqrt{-{\frac {C}{d}}}\\ \mbox{} \right) ^{-1},1/2\, \left( d \sqrt{-{\frac {C}{d}}}+ \sqrt{{d}^{2}+ \left( -2\,B-4 \right) d+{B}^{2}} \sqrt{-{\frac {C}{d}}}+A \right) {d}^{-1\\ \mbox{}} \left( \sqrt{-{\frac {C}{d}}} \right) ^{-1};\,1/2\, \left( \left( 4\,d-B \right) \sqrt{-{\frac {C}{d}}}+A \right) {d}^{-1} \left( \sqrt{-{\frac {C}{d}}} \right) ^{-1};\,1/2\, \left( C- \sqrt{-{\frac {C}{d}}}xd \right) {C}^{-1})} $$</p> <p>(I used $d$ instead of $D$ because $D$ has a special meaning in Maple)</p>
86,202
<p>Let $\mathcal{L},\mathcal{U}$ be invertible sheaves over a noetherian scheme $X$, where $X$ is of finite type over a noetherian ring $A$. If $\mathcal{L}$ is very ample, and $\mathcal{U}$ is generated by global sections, then $\mathcal{L} \otimes \mathcal{U}$ is very ample.</p> <p>Since $\mathcal{L}$ is very ample, there exists $n$, s.t. $i: X\mapsto \mathbb{P}^n$ is an immersion with $\mathcal{L}= i^*\mathcal{O}(1)$, and since $\mathcal{U}$ is generated by global sections, one can construct $j:X \to \mathbb{P}^m$ with $j^*\mathcal{O}(1) = \mathcal{U}$. From this I can construct the following morphism:</p> <p>$$ h: X \xrightarrow{\Delta} X\times X \xrightarrow{i\times j} \mathbb{P}^n \times \mathbb{P}^m \xrightarrow{ \operatorname{segre \ embedding}} \mathbb{P}^N $$</p> <p>I can prove $\mathcal{L}\otimes \mathcal{U } \cong h^*\mathcal{O}(1)$, and the segre embedding is a closed immersion. But I don't know whether the map $(i\times j) \circ \Delta$ is an immersion, which is suspicious to be such, especially for the $\Delta$.</p>
red_trumpet
312,406
<p>It is true that if <span class="math-container">$i: X \to Y$</span> is an immersion, and <span class="math-container">$j:X \to Z$</span> is <em>any</em> morphism (all over <span class="math-container">$S$</span>), then <span class="math-container">$(i, j): X \to Y \times_S Z$</span> is an immersion. See <a href="https://math.stackexchange.com/questions/535049/if-p-ix-rightarrow-y-ii-1-2-are-immersions-is-x-rightarrow-y-1-times-y/3170483#3170483">this answer</a> for a proof.</p>
2,623,560
<blockquote> <p>Decide if $\mathbb Z[i]/\langle i\rangle$ and $\mathbb Z$ are isomorphic, if $\mathbb Z[i]/\langle i+1\rangle$ and $\mathbb Z_2$ are isomorphic</p> </blockquote> <p>I know that in the first case if there exist such homomorphism then $f(i)=0$ (and in the second case $f(i+1)=0$), but I don't know exactly how to prove it.</p>
Pedro
178,668
<p>For the first question, note that $i$ is a unit in $\mathbb{Z}[i]$ (indeed $i\cdot(-i)=1$), so $\langle i\rangle=\mathbb{Z}[i]$ and the quotient $\mathbb{Z}[i]/\langle i\rangle$ is the zero ring, which is <em>not</em> isomorphic to $\mathbb{Z}$. For the second question you can use one of the <strong>isomorphism theorems</strong>:</p> <p>$$(A+I)/I \cong A/(A\cap I) $$</p> <p>Here, note that $2=(1+i)(1-i)\in(1+i)$, so $\mathbb{Z}\cap (i+1)=(2)$, and $\mathbb{Z}+(i+1)=\mathbb{Z}[i]$ since $i=(1+i)-1$. Then apply the theorem with $A=\mathbb{Z}\subseteq \mathbb{Z}[i]$ and $I=(i+1)$.</p>
2,623,560
<blockquote> <p>Decide if $\mathbb Z[i]/\langle i\rangle$ and $\mathbb Z$ are isomorphic, if $\mathbb Z[i]/\langle i+1\rangle$ and $\mathbb Z_2$ are isomorphic</p> </blockquote> <p>I know that in the first case if there exist such homomorphism then $f(i)=0$ (and in the second case $f(i+1)=0$), but I don't know exactly how to prove it.</p>
lhf
589
<p>Here is an answer for the second question:</p> <p>Since $2=(1+i)(1-i)$, we have $2 \in (1+i)$.</p> <p>Thus, $a+bi \equiv (a\bmod 2)+(b\bmod 2)\,i \pmod{1+i}$.</p> <p>Therefore, the classes of $\mathbb Z[i]$ mod $(1+i)$ are reduced to the classes of $0,1,i,1+i$.</p> <p>Now $0 \equiv 1+i$ and $1 \equiv i$ and so $\mathbb Z[i]$ mod $(1+i)$ has only two classes and must be $\mathbb Z_2$.</p>
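<p>The two-class structure can be checked computationally via the map $a+bi \mapsto (a+b) \bmod 2$ (a sketch; the kernel test below comes from $(a+bi)/(1+i)=(a+bi)(1-i)/2$, which is a Gaussian integer exactly when $a+b$ is even):</p>

```python
def phi(a, b):
    # Candidate quotient map Z[i] -> Z_2
    return (a + b) % 2

def in_ideal(a, b):
    # a+bi is divisible by 1+i iff a+b is even
    return (a + b) % 2 == 0

R = range(-3, 4)
# The kernel of phi is exactly the ideal (1+i)
kernel_ok = all((phi(a, b) == 0) == in_ideal(a, b) for a in R for b in R)
# phi respects addition ...
add_ok = all(phi(a + c, b + d) == (phi(a, b) + phi(c, d)) % 2
             for a in R for b in R for c in R for d in R)
# ... and multiplication: (a+bi)(c+di) = (ac-bd) + (ad+bc)i
mul_ok = all(phi(a * c - b * d, a * d + b * c) == (phi(a, b) * phi(c, d)) % 2
             for a in R for b in R for c in R for d in R)
```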
425,981
<p>Let $F$ be an infinite field such that $F^*$ is a torsion group. We know that $F^*$ is an Abelian group. So every subgroup of $F^*$ is a normal subgroup.</p> <p>My question:</p> <p>Does $F^*$ have a proper subgroup with finite index?</p>
Jyrki Lahtonen
11,619
<p>Let $F$ be the algebraic closure of a finite field. Each element of $F^*$ belongs to a finite field, so is a torsion element. On the other hand $F^*$ cannot have a subgroup of index $n&gt;1$. For if $A$ is such a subgroup, then $x^n\in A$ for all $x\in F^*$. But if $z\in F^*\setminus A$, then $z$ has an $n$th root in $F^*$ contradicting the previous sentence.</p> <hr> <p>Returning to the general case. If $F^*$ is torsion, then obviously $F$ has finite characteristic, and is algebraic over its prime field. Therefore $F$ is contained in an algebraic closure of a finite field. If $F$ itself is finite, then it obviously has finite index subgroups, but this was excluded by the OP. </p> <hr> <p>And as an example of an infinite field such that $F^*$ has a subgroup of a finite index let's try the following. Consider the nested union of extensions of $\mathbb{F}_2$ of degrees $2^n$ $$ F=\bigcup_{n\ge0}\mathbb{F}_{2^{2^n}}. $$ The union can be formed inside an algebraic closure of $\mathbb{F}_2$. For $m\ge n$ let $N^m_n:\mathbb{F}_{2^{2^m}}\to\mathbb{F}_{2^{2^n}}$ be the relative norm map. For $n\ge1$ define the groups $$ A_n=\{z\in\mathbb{F}_{2^{2^n}}\mid N^n_1(z)=1\}\le \mathbb{F}_{2^{2^n}}^*. $$ Transitivity of norm in a tower of field extensions means that $N^n_1\circ N^m_n=N^m_1$ for all $1\le n\le m$.</p> <p>Let us define $$ A=\bigcup_{n\ge1}A_n. $$ I claim that $A$ is a subgroup of $F^*$. It is obviously closed under inverses, as all the $A_n$ are groups as kernels of $N^n_1$. If $z\in A_n$ and $n&lt;m$, then $$ N^m_1(z)=N^n_1(N^m_n(z)). $$ Here $N^m_n(z)=z^{2^{m-n}}$ is just a power of $z$, so we get that also $z\in A_m$. We have seen that $A_n\le A_m$, and the claim follows from this.</p> <p>To close off this example I claim that $A$ is of index three in $F^*$. Let $\mathbb{F}_4^*=\{1,\omega,\omega^2=1+\omega\}$, where $\omega$ is a primitive third root of unity. 
Because $N^n_1(\omega)=\omega^{2^{n-1}}$ is either $\omega$ or $\omega^2$, it follows that for every element $z\in F^*$ exactly one of $z,\omega z,\omega^2 z$ belongs to the subgroup $A$. The claim follows from this.</p> <p>Note that the argument from the case of an algebraically closed field does not apply for this $F$. For example, the field $F$ does not have ninth roots of unity because those reside in the field $\mathbb{F}_{64}$, and won't be included in this tower.</p>
425,981
<p>Let $F$ be an infinite field such that $F^*$ is a torsion group. We know that $F^*$ is an Abelian group. So every subgroup of $F^*$ is a normal subgroup.</p> <p>My question:</p> <p>Does $F^*$ have a proper subgroup with finite index?</p>
DonAntonio
31,254
<p><strong>Note:</strong> This uses the very same reasoning as Jyrki's answer but with a different, perhaps slightly more groupwise, approach:</p> <p>(1) An abelian group $\,A\,$ (with multiplicative operation, to fit within our problem) is <em>divisible</em> if</p> <p>$$\forall\,g\in A\;\wedge\;\forall n\in\Bbb N\;\exists\, x\in A\;\;s.t.\;\;g=x^n$$</p> <p>(2) Any homomorphic image of a divisible group is divisible.</p> <p>(3) Nontrivial finite abelian groups can <strong>not</strong> be divisible</p> <p>(4) Divisible groups cannot have proper finite-index subgroups (this is just (2)+(3))</p> <p>(5) The multiplicative group of an algebraically closed field is divisible (every element has an $n$th root for every $n$)</p> <p>Now apply Jyrki's answer...</p>
1,554
<p>Suppose you have an incomplete Riemannian manifold with bounded sectional curvature such that its completion as a metric space is the manifold plus one additional point. Does the Riemannian manifold structure extend across the point singularity?</p> <p>(Penny Smith and I wrote a paper on this many years ago, but we had to assume that no arbitrarily short closed geodesics existed in a neighborhood of the singularity. I was never able to figure out how to get rid of this assumption and still would like someone better at Riemannian geometry than me to explain how. Or show me a counterexample.)</p> <p>EDIT: For simplicity, assume that the dimension of the manifold is greater than 2 and that in any neighborhood of the singularity, there exists a smaller punctured neighborhood of the singularity that is simply connected. In dimension 2, you have to replace this assumption by an appropriate holonomy condition. </p> <p>EDIT 2: Let's make the assumption above simpler and clearer. Assume dimension greater than 2 and that for any r > 0, there exists 0 &lt; r' &lt; r, such that the punctured geodesic ball B(p,r') \ {p} is simply connected, where p is the singular point. This precludes the possibility of an orbifold singularity.</p> <p>ADDITIONAL COMMENT: My approach to this was to construct a differentiable family of geodesic rays emanating from the singularity. Once I have this, then it is straightforward using Jacobi fields to show that this family must be naturally isomorphic to the standard unit sphere. Then using what Jost and Karcher call "almost linear coordinates", it is easy to construct a C^1 co-ordinate chart on a neighborhood of the singularity. (Read the paper. Nothing in it is hard.)</p> <p>But I was unable to build this family of geodesics without the "no small geodesic loop" assumption. To me this is an overly strong assumption that is essentially equivalent to assuming in advance that that differentiable family of geodesics exists.
So I find our result to be totally unsatisfying. I don't see why this assumption should be necessary, and I still believe there should be an easy way to show this. Or there should be a counterexample.</p> <p>I have to say, however, that I am pretty sure that I did consult one or two pretty distinguished Riemannian geometers and they were not able to provide any useful insight into this.</p>
Igor Belegradek
1,573
<p>Here is what seems to be a counterexample. Let (M,g) be a simply-connected closed Riemannian manifold. Then M times (0,infinity) with the warped product metric dr^2 + r^2 g has bounded curvature and the completion at r=0 is a point. If the metric is smooth, then M is diffeomorphic to a sphere, so any other M gives a counterexample. </p> <p>EDIT: Sorry, this does not work as curvature blows up at zero unless g has constant curvature 1.</p>
1,554
<p>Suppose you have an incomplete Riemannian manifold with bounded sectional curvature such that its completion as a metric space is the manifold plus one additional point. Does the Riemannian manifold structure extend across the point singularity?</p> <p>(Penny Smith and I wrote a paper on this many years ago, but we had to assume that no arbitrarily short closed geodesics existed in a neighborhood of the singularity. I was never able to figure out how to get rid of this assumption and still would like someone better at Riemannian geometry than me to explain how. Or show me a counterexample.)</p> <p>EDIT: For simplicity, assume that the dimension of the manifold is greater than 2 and that in any neighborhood of the singularity, there exists a smaller punctured neighborhood of the singularity that is simply connected. In dimension 2, you have to replace this assumption by an appropriate holonomy condition. </p> <p>EDIT 2: Let's make the assumption above simpler and clearer. Assume dimension greater than 2 and that for any r > 0, there exists 0 &lt; r' &lt; r, such that the punctured geodesic ball B(p,r') \ {p} is simply connected, where p is the singular point. This precludes the possibility of an orbifold singularity.</p> <p>ADDITIONAL COMMENT: My approach to this was to construct a differentiable family of geodesic rays emanating from the singularity. Once I have this, then it is straightforward using Jacobi fields to show that this family must be naturally isomorphic to the standard unit sphere. Then using what Jost and Karcher call "almost linear coordinates", it is easy to construct a C^1 co-ordinate chart on a neighborhood of the singularity. (Read the paper. Nothing in it is hard.)</p> <p>But I was unable to build this family of geodesics without the "no small geodesic loop" assumption. To me this is an overly strong assumption that is essentially equivalent to assuming in advance that that differentiable family of geodesics exists.
So I find our result to be totally unsatisfying. I don't see why this assumption should be necessary, and I still believe there should be an easy way to show this. Or there should be a counterexample.</p> <p>I have to say, however, that I am pretty sure that I did consult one or two pretty distinguished Riemannian geometers and they were not able to provide any useful insight into this.</p>
Rafe Mazzeo
888
<p>Take a cone over a finite quotient S^{2n-1}/\Gamma. The curvature is 0, but the manifold structure does not even extend. (More generally, you can take the cone over any compact Einstein manifold of dimension n-1 with Einstein constant n-2.) </p>
1,554
<p>Suppose you have an incomplete Riemannian manifold with bounded sectional curvature such that its completion as a metric space is the manifold plus one additional point. Does the Riemannian manifold structure extend across the point singularity?</p> <p>(Penny Smith and I wrote a paper on this many years ago, but we had to assume that no arbitrarily short closed geodesics existed in a neighborhood of the singularity. I was never able to figure out how to get rid of this assumption and still would like someone better at Riemannian geometry than me to explain how. Or show me a counterexample.)</p> <p>EDIT: For simplicity, assume that the dimension of the manifold is greater than 2 and that in any neighborhood of the singularity, there exists a smaller punctured neighborhood of the singularity that is simply connected. In dimension 2, you have to replace this assumption by an appropriate holonomy condition. </p> <p>EDIT 2: Let's make the assumption above simpler and clearer. Assume dimension greater than 2 and that for any r > 0, there exists 0 &lt; r' &lt; r, such that the punctured geodesic ball B(p,r') \ {p} is simply connected, where p is the singular point. This precludes the possibility of an orbifold singularity.</p> <p>ADDITIONAL COMMENT: My approach to this was to construct a differentiable family of geodesic rays emanating from the singularity. Once I have this, then it is straightforward using Jacobi fields to show that this family must be naturally isomorphic to the standard unit sphere. Then using what Jost and Karcher call "almost linear coordinates", it is easy to construct a C^1 co-ordinate chart on a neighborhood of the singularity. (Read the paper. Nothing in it is hard.)</p> <p>But I was unable to build this family of geodesics without the "no small geodesic loop" assumption. To me this is an overly strong assumption that is essentially equivalent to assuming in advance that that differentiable family of geodesics exists.
So I find our result to be totally unsatisfying. I don't see why this assumption should be necessary, and I still believe there should be an easy way to show this. Or there should be a counterexample.</p> <p>I have to say, however, that I am pretty sure that I did consult one or two pretty distinguished Riemannian geometers and they were not able to provide any useful insight into this.</p>
valeri
1,988
<p>What if you try a family of triangles, parallel to some two-dimensional direction, such that their union contains the singularity? (Like a tetrahedron for n=3.) Then their geometry (angles, sides, etc.) is controlled from "outside" the singularity, so they all have uniformly bounded curvature, including the one which contains the singularity. Then let the size go to zero. Does it mean that the tangent plane is defined at the singularity and is R^n, and so on...?</p>
1,930,401
<p>Are there any non-linear real polynomials $p(x)$ such that $e^{p(x)}$ has a closed form antiderivative? If not, is the value of $\int_{0}^{\infty}e^{p(x)}dx$ known for any $p$ with negative leading term other than $-x$ and $-x^2$?</p>
Hrhm
332,390
<p>This proof was given <a href="http://www.drking.org.uk/hexagons/misc/polymax.html" rel="nofollow noreferrer">here</a>.</p> <p>Take the polygon and arrange it so that all the vertices lie on a circle: <a href="https://i.stack.imgur.com/U4u4q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U4u4q.png" alt="enter image description here"></a></p> <p>Now, "glue" the red caps onto the side of the polygon:</p> <p><a href="https://i.stack.imgur.com/eUOjA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eUOjA.png" alt="enter image description here"></a></p> <p>Note that the perimeter around the red caps stays constant as I move the vertices. The area of the red shaded sections also stay the same. According to the isoperimetric inequality, the area of the total shape (both the red and white sections) is maximized when the polygon is cyclic. Ergo, the area of the polygon is maximized when the polygon is cyclic. </p>
3,084,934
<p>I want to prove or disprove that the Fourier transform <span class="math-container">$\mathcal F \colon (\mathcal S(\mathbb R^d), \lVert \cdot \rVert_1) \to L^1(\mathbb R^d)$</span> is unbounded, where <span class="math-container">$\lVert\cdot \rVert_1$</span> denotes the <span class="math-container">$L^1(\mathbb R^d)$</span>-norm. </p> <p>Having thought about this for a moment, I believe it is indeed unbounded. So I tried to find a sequence of Schwartz functions <span class="math-container">$(f_n)_{n\in \mathbb N} \subseteq \mathcal S(\mathbb R^d)$</span> with <span class="math-container">$\forall n: \lVert f_n \rVert_1 = 1$</span> and <span class="math-container">$$\lVert \mathcal F f_n \rVert \to +\infty.$$</span> Of course I first thought about Gaussians but couldn't quite find a suitable sequence. Any help appreciated!</p>
gt6989b
16,192
<p><strong>HINT</strong></p> <p>We have to show that for each <span class="math-container">$\epsilon &gt; 0 \ \exists \delta &gt; 0$</span> such that whenever <span class="math-container">$|x-a| &lt; \delta$</span> you have <span class="math-container">$$ \epsilon &gt; \left|\frac{1}{f(x)} - 0 \right| = \left|\frac1{f(x)}\right| \iff |f(x)| &gt; 1/\epsilon. $$</span></p> <p>Assume that as <span class="math-container">$x \to a$</span>, we have <span class="math-container">$f(x) \to \infty$</span>, in other words, that <span class="math-container">$\forall N &gt; 0 \ \exists \delta&gt;0$</span> such that <span class="math-container">$f(x) &gt; N$</span> whenever <span class="math-container">$|x - a| &lt; \delta$</span>.</p> <p>Can you see how to pick <span class="math-container">$\epsilon = \epsilon(N)$</span> to make what you want happen?</p>
456,583
<p>I was searching for a Latex symbol that indicates $A \Rightarrow B$ and $A \not\Leftarrow B$ ($B$ if not only if $A$, $B$ ifnf $A$). I thought of using $A \Leftrightarrow B$ with the left arrow tick <code>&lt;</code> crossed out. Since I did not find such a symbol:</p> <p>Is there a Latex symbol for this?</p> <p>How common or understandable is this symbol?</p> <p>If it isn't common: How easily is it confused with the symbol $\not\Leftrightarrow$?</p> <hr> <p>Update: </p> <p>I need it for a sequence $A$ ifnf $B$ ifnf $C$ ifnf $D$, which I find more understandable than $A \Leftarrow B \Leftarrow C \Leftarrow D$ and $A \not\Rightarrow B \not\Rightarrow C \not\Rightarrow D$.</p> <p>Of course I will prove both directions.</p>
Andreas Blass
48,510
<p>When people assert implications, they often implicitly involve universal quantification. For example, "if $n$ is a prime number greater than $2$, then $n$ is odd" really means "for all integers $n$, if $\dots$." When one denies an implication, one includes the universal quantifier in the denial, so it becomes an existential quantifier. For example, if someone says that "$n$ is odd" doesn't imply "$n$ is a prime number greater than $2$", he normally means to deny that "for all $n$, if $n$ is odd then $n$ is a prime number greater than $2$"; equivalently, he means to assert that "there exists an odd $n$ that is not a prime number greater than $2$." (Recall from propositional logic that the negation of $B\implies A$ is equivalent to $B\land\neg A$.) So your proposed combined connective, for implication in one direction and denial of implication in the other direction, will implicitly quantify the variables partly with universal quantifiers and partly with existential ones. This looks to me like a recipe for confusion and therefore well worth avoiding.</p> <p>If, by good fortune, your statements $A$ and $B$ don't involve variables, so these quantification issues don't arise, then there is a fairly easy answer to your question. As I said above, the negation of $B\implies A$ is equivalent to $B\land\neg A$. Furthermore, this formula already implies that $A\implies B$, so $$ (A\implies B)\land\neg(B\implies A) $$ is equivalent to $B\land\neg A$. But remember, this use of propositional logic is legitimate only if your $A$ and $B$ don't involve any variables that are implicitly quantified in your implications.</p>
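<p>The propositional equivalence in the last display, $(A\implies B)\land\neg(B\implies A) \equiv B\land\neg A$, is a four-row truth-table check:</p>

```python
from itertools import product

def implies(p, q):
    # Material implication: p => q is (not p) or q
    return (not p) or q

# Verify the equivalence on all four truth assignments
equiv = all((implies(A, B) and not implies(B, A)) == (B and not A)
            for A, B in product((False, True), repeat=2))
```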
34,487
<p>A few years ago Lance Fortnow listed his favorite theorems in complexity theory: <a href="http://blog.computationalcomplexity.org/2005/12/favorite-theorems-first-decade-recap.html" rel="nofollow">(1965-1974)</a> <a href="http://blog.computationalcomplexity.org/2006/12/favorite-theorems-second-decade-recap.html" rel="nofollow">(1975-1984)</a> <a href="http://eccc.hpi-web.de/eccc-reports/1994/TR94-021/index.html" rel="nofollow">(1985-1994)</a> <a href="http://blog.computationalcomplexity.org/2004/12/favorite-theorems-recap.html" rel="nofollow">(1995-2004)</a> But he restricted himself (check the third one) and his last post is now 6 years old. An updated and more comprehensive list can be helpful.</p> <blockquote> <p>What are the most important results (and papers) in complexity theory that every one should know? What are your favorites?</p> </blockquote>
Kevin H. Lin
83
<p>Well I guess after Cook, Karp's paper "Reducibility among combinatorial problems" is the second most obligatory and canonical thing to mention. This paper was the first to demonstrate to the world the diversity and ubiquity of NP-complete problems.</p>
3,702,094
<p>For a school project for chemistry I use systems of ODEs to calculate the concentrations of specific chemicals over time. Now I am wondering if </p> <p><span class="math-container">$$ \frac{dX}{dt} =X(t) $$</span></p> <p>is the same as </p> <p><span class="math-container">$$ X(t)=e^t . $$</span> </p> <p>As far as I know, this should be correct, because the derivative of <span class="math-container">$ e^t $</span> is the same as the current value. Can anyone confirm that this is correct (or not)?</p> <p>I already searched for it on the internet but can't really find any articles about this. Thanks!</p>
Community
-1
<p>It is true that <span class="math-container">$$X(t)=e^t$$</span> is a solution of the differential equation <span class="math-container">$X'(t)=X(t)$</span>. But we know from the theory that the solution must be a family of functions, depending on an arbitrary constant.</p> <p>The usual way to solve this separable equation is by writing</p> <p><span class="math-container">$$\frac{dX}X=dt$$</span></p> <p>and by indefinite integration,</p> <p><span class="math-container">$$\log X=t+c$$</span> or <span class="math-container">$$X=e^{t+c}=Ce^t.$$</span></p> <hr> <p>As for all functions <span class="math-container">$f$</span>, <span class="math-container">$(Cf(t))'=Cf'(t)$</span> (the differentiation operator is linear), you could have inferred that all <span class="math-container">$Ce^t$</span> are solutions of <span class="math-container">$X'(t)=X(t)$</span>. But this does not guarantee yet that it is the most general solution.</p>
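<p>For the chemistry application this is easy to check numerically: every function of the form $X(t)=Ce^t$ satisfies $X'(t)=X(t)$ (a finite-difference spot check, with a few sample values of $C$ and $t$):</p>

```python
import math

def X(t, C):
    # Candidate solution X(t) = C * e^t
    return C * math.exp(t)

h = 1e-6
# Central-difference derivative should equal the function value itself
ode_ok = all(
    abs((X(t + h, C) - X(t - h, C)) / (2 * h) - X(t, C)) < 1e-4
    for C in (0.5, 1.0, 3.7)
    for t in (0.0, 1.0, 2.5)
)
```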
2,547,488
<p>We have always been taught that a function assigns to every element in the domain a single <em>unique</em> element in the range. If a rule of assignment assigns to one element in the domain more than one element in the range then it isn't a function.</p> <p>Now in Munkres' <em>Topology</em>, on page 107, it says:</p> <p>"If $f:X \rightarrow Y$ maps all of $X$ into the single point $y_0$ of $Y$, then $f$ is continuous."</p> <p>But we cannot speak of the continuity of $f$ unless $f^{-1}$ is defined, because the topological definition of continuity (in Munkres and other textbooks) states that $f:X \rightarrow Y$ is continuous if for every open subset $V$ of $Y$, the set $f^{-1}(V)$ is open in $X$. </p> <p>How then can the constant function above be continuous when $f^{-1}$ maps a single $y_0$ to ALL $x \in X$? How is it a function? Or is the elementary definition of "function" relaxed in topology?</p>
sti9111
233,046
<p>$f^{-1}(U)=\{x\in X \mid f(x)\in U\}$; here we do not need the inverse function $f^{-1}$ to exist, we only use the definition of the pre-image of a set under $f$. And of course any constant function is continuous, because given any open set $U$ in $Y$ we have two possibilities: if $y_0\in U$ then $f^{-1}(U)=X$, which is open, and if $y_0\notin U$ then $f^{-1}(U)=\emptyset$, which is open. </p>
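<p>The set-theoretic point is easy to illustrate: the preimage is defined for any function, with no inverse required (a toy sketch with a three-point domain):</p>

```python
def preimage(f, domain, U):
    """f^{-1}(U) = {x in domain : f(x) in U}; f need not be invertible."""
    return {x for x in domain if f(x) in U}

X = {1, 2, 3}
f = lambda x: "y0"                     # the constant function x -> y0

hits = preimage(f, X, {"y0", "z"})     # any set containing y0 pulls back to all of X
misses = preimage(f, X, {"z"})         # any set missing y0 pulls back to the empty set
```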
1,783,200
<p>Prove or disprove the following statement:</p> <p><strong>Statement.</strong> <em>Does continuity in each variable, when the other variables are fixed, imply continuity?</em> More clearly, prove or disprove the following problem:</p> <p>Let $\displaystyle f:\left[ a,b \right]\times \left[ c,d \right]\to \mathbb{R}$ be a function for which:</p> <ul> <li>For every $\displaystyle {{x}_{0}}\in \left[ a,b \right]$, $\displaystyle f\left( {{x}_{0}},y \right)$ is continuous on $\displaystyle \left[ c,d \right]$ with respect to the variable $ \displaystyle y$.</li> <li>For every $ \displaystyle {{y}_{0}}\in \left[ c,d \right]$, $ \displaystyle f\left( x,{{y}_{0}} \right)$ is continuous on $ \displaystyle \left[ a,b \right]$ with respect to the variable $\displaystyle x$.</li> </ul> <p>Then $\displaystyle f\left( x,y \right)$ is continuous on $ \displaystyle \left[ a,b \right]\times \left[ c,d \right]$?</p> <p><a href="https://hongnguyenquanba.wordpress.com/2016/05/12/problem-6/" rel="nofollow">https://hongnguyenquanba.wordpress.com/2016/05/12/problem-6/</a></p>
Quản Bá Hồng Nguyễn
286,880
<p>I have built a counter-example for this statement. Because the building process is quite complicated (it has images too). So I post my <em>counter example</em> in the following link: <a href="https://hongnguyenquanba.wordpress.com/2016/05/12/problem-6/" rel="nofollow">https://hongnguyenquanba.wordpress.com/2016/05/12/problem-6/</a></p>
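For reference, a standard counterexample (not necessarily the one behind the link) is $f(x,y)=\frac{xy}{x^2+y^2}$ with $f(0,0)=0$: every slice with one variable fixed is continuous, yet along the diagonal $f(t,t)=\tfrac12$ for $t\neq 0$, so $f$ is not continuous at the origin. A quick numerical check:

```python
def f(x, y):
    # separately continuous in each variable, yet discontinuous at (0,0)
    return 0.0 if (x, y) == (0, 0) else x * y / (x * x + y * y)

for t in (1e-1, 1e-4, 1e-8):
    assert f(t, 0.0) == 0.0 and f(0.0, t) == 0.0   # slices through the axes vanish
    assert abs(f(t, t) - 0.5) < 1e-12              # but f(t,t) = 1/2, not f(0,0) = 0
```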
69,476
<p>Hello everybody !</p> <p>I was reading a book on geometry which taught me that one could compute the volume of a simplex through the determinant of a matrix, and I thought (I'm becoming a worse computer scientist each day) that if the result is exact this may not be the computationally fastest way possible to do it.</p> <p>Hence, the following problem : if you are given a polynomial in one (or many) variables $\alpha_1 x^1 + \dots + \alpha_n x^n$, what is the cheapest way (in terms of operations) to evaluate it ?</p> <p>Indeed, if you know that your polynomial is $(x-1)^{1024}$, you can do much, much better than computing all the different powers of $x$ and multiply them by their corresponding factor.</p> <p>However, this is not a problem of factorization, as knowing that the polynomial is equal to $(x-1)^{1024} + (x-2)^{1023}$ is also much better than the naive evaluation.</p> <p>Of course, multiplication and addition all have different costs on a computer, but I would be quite glad to understand how to minimize the "total number of operations" (additions + multiplications) for a start ! I had no idea how to look for the corresponding literature, and so I am asking for your help on this one :-)</p> <p>Thank you !</p> <p>Nathann</p> <p>P.S. : <em>I am actually looking for a way, given a polynomial, to obtain a sequence of addition/multiplication that would be optimal to evaluate it. This sequence would of course only work for <strong>THIS</strong> polynomial and no other. It may involve working for hours to find out the optimal sequence corresponding to this polynomial, so that it may be evaluated many times cheaply later on.</em></p>
paperclip optimizer
472,708
<p>If I understood your question correctly, you are willing to do an arbitrarily large amount of precomputing on your polynomial in order to make the evaluation process at run time as fast as possible.</p> <p>In other words, you want to find a some way of making your polynomial evaluation &quot;sparse&quot; in some sense.</p> <p>My first intuition is that this is not possible for an <em>arbitrary</em> polynomial, i.e. the set of polynomials for which a &quot;sparsification&quot; is possible is a set of measure zero. Which is not to say it is not possible for <em>certain</em> specific polynomials.</p> <p>In general, &quot;sparseness&quot; usually indicates some sort of underlying mathematical structure, which suggests you should attempt to understand said structure first.</p> <p>Otherwise I believe the problem to be NP-complete for the general case. I will edit this answer with a proof if I can think of one.</p>
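As a baseline for "cheapest way to evaluate": without any precomputation at all, Horner's rule already evaluates a degree-$n$ polynomial with $n$ multiplications and $n$ additions, versus roughly twice as many multiplications for the naive power-by-power method. A small sketch:

```python
def horner(coeffs, x):
    """Evaluate coeffs[0] + coeffs[1]*x + ... + coeffs[n]*x**n via Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = acc * x + c        # one multiplication, one addition per coefficient
    return acc

def naive(coeffs, x):
    return sum(c * x**i for i, c in enumerate(coeffs))

coeffs = [1, -2, 0, 3, 5]        # 1 - 2x + 3x^3 + 5x^4
for x in (-3, 0, 1, 2, 10):
    assert horner(coeffs, x) == naive(coeffs, x)
```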
1,336,937
<p>I think: <em>A function $f$, as long as it is measurable, though Lebesgue integrable or not, always has Lebesgue integral on any domain $E$.</em></p> <p>However Royden &amp; Fitzpatrick’s book "Real Analysis" (4th ed) seems to say implicitly that “a function could be integrable without being Lebesgue measurable”. In particular, theorem 7 page 103 says: </p> <p><strong>“If function $f$ is bounded on set $E$ of finite measure, then $f$ is Lebesgue integrable over $E$ if and only if $f$ is measurable”.</strong> </p> <p>The book spends a half page to prove the direction “$f$ is integrable implies $f$ is measurable”! Even the book “Real Analysis: Measure Theory, Integration, And Hilbert Spaces” of Elias M. Stein &amp; Rami Shakarchi does the same job!</p> <p>This makes me think there is possibly a function that is not bounded, not measurable but Lebesgue integrable on a set of infinite measure?</p> <p>=== Update: Read the answer of smnoren and me below about the motivation behind the approaches to define Lebesgue integrals. Final conclusion: The starting statement above is still true and doesn't contradict with the approach of Royden and Stein.</p>
Tyr Curtis
209,743
<p>Let $A\subset [0,1]$ be a nonmeasurable set. Then consider the function $f(x)=1$, if $x\in A$ and $f(x)=-1$, if $x \in [0,1]\setminus A$ and zero everywhere else. Then clearly $f$ is not measurable, but $|f(x)|=\chi_{[0,1]}(x)$, so it's integrable.</p>
856,334
<p>the problem is based on this picture. <img src="https://i.stack.imgur.com/51fmQ.jpg" alt="enter image description here"></p> <p>At the beginning, say $t=0$, we have a circle whose center $P$ is at the point $(0,r)$, and $r_0=1$ is the initial radius of this circle. $AB$ is a vector which has an angle $\theta$ from the $x$ axis, and this vector $AB$ will always lie on the tangent of the circle (the side close to $O$). We suppose that $A_0=(x_0,y_0)$ and the initial value of $\theta$ is $\theta_0$. We also have $|AP|=l=2$. The angle between $PA$ and the $x$ axis is denoted $\varphi$. The vector $AB$ moves on the tangent of the circle with a velocity $-x$, so $A$ will get closer and closer to $O$. When the distance $AP=l&lt;2r$, the radius becomes smaller, and then we have a new $P,r$, but the new $r$ should satisfy $AP=2r$. </p> <p>The question is : when $A\to O,r\to 0$, what is the limit of $\theta$?</p> <p>I tried to use the differential method to get the result, but didn't make it. I know that the angle $\angle PAQ$ is always $\pi/6$ and $\dot{l}=2\dot{r}$. I don't know how to continue. Thanks very much for your help.</p>
MvG
35,416
<h1>Some analysis</h1> <p>I choose $x$ as a free parameter, and $y(x)$ as a function to describe the movement. You want $l=2r$, which you can rewrite as</p> <p>\begin{align*} x^2+(y-r)^2&amp;=(2r)^2\\ x^2+y^2-2yr+r^2&amp;=4r^2\\ 3r^2+2yr-(x^2+y^2)&amp;=0\\ r&amp;=\frac{-y+\sqrt{3x^2+4y^2}}{3} \end{align*}</p> <p>The other solution of the quadratic equation would result in $r&lt;0$ and can therefore be omitted. Now you know that</p> <p>$$\overrightarrow{PA}=\begin{pmatrix}x\\y-r\end{pmatrix} =\frac13\begin{pmatrix}3x\\4y-\sqrt{3x^2+4y^2}\end{pmatrix}$$</p> <p>Rotate that right by $60°$ and scale it by $6$ and you get</p> <p>$$12\overrightarrow{PQ}= \begin{pmatrix} 1&amp;\sqrt3\\-\sqrt3&amp;1 \end{pmatrix}\cdot3\overrightarrow{PA}= \begin{pmatrix} 3x+\sqrt3\left(4y-\sqrt{3x^2 + 4y^2}\right)\\ -3\sqrt{3} x + 4y - \sqrt{3x^2 + 4y^2} \end{pmatrix} $$</p> <p>That vector has to be orthogonal to the direction of the movement, namely</p> <p>$$\begin{pmatrix}1\\y'\end{pmatrix}$$</p> <p>so their dot product must be zero. This will give you a relation between $y(x)$ and $y'(x)$:</p> <p>$$ y'=\frac{\mathrm dy}{\mathrm dx}= -\frac{3x+\sqrt3\left(4y-\sqrt{3x^2 + 4y^2}\right)} {-3\sqrt{3} x + 4y - \sqrt{3x^2 + 4y^2}} $$</p> <h1>Interpreting this formula</h1> <p>Now I don't have enough experience to solve this beast analytically (if that is at all possible), and for a numeric solution we would need precise starting conditions. But a look at the slope field might be instructive:</p> <p><img src="https://i.stack.imgur.com/YuIUI.png" alt="Slope field"></p> <p>That does look reasonable to me.</p> <h1>Convergence claim</h1> <p>One thing worth noting is that the whole problem is scale invariant (at least if you are in the regime where the circle shrinks with the approaching point). If you multiply all lengths with a common factor, the slope remains the same. 
So if the slope converges towards the end, it means that the final direction must be a fixpoint, so starting from that direction at <em>any</em> distance will cause movement in a straight line towards the origin. But there is only one direction which matches movement directly to the origin in the initial situation, and that is horizontal. That's because the tangent at the origin is horizontal.</p> <p>So I'd say the slope converges towards horizontal eventually, no matter the initial conditions. In the limit, $\theta=0$.</p> <h1>Numerical experiments</h1> <p>The convergence is very slow towards the end, though. In the following animation, I'm using the distance $OA$ as a scale factor for the speed, and towards the end the change in the slope of the line $AB$ is hardly perceptible any more. So I can see how numeric experiments would indicate a non-zero limit. Nevertheless, the apparent final slope cannot be the real final slope, because if it were, then slopes along that line would have to point directly towards the origin, which they do not.</p> <p>Also note that the convergence only appears slow in this parametrization with adaptive speed. If you use fixed speed, you will reach the point where $x=0$ in finite time. But obtaining precise numeric results for this case is hard, which is the reason why I consider this adaptiveness and the resulting slow convergence to be reasonable.</p> <p><img src="https://i.stack.imgur.com/x1cO3.gif" alt="Animation"></p> <h1>Change in angle</h1> <p>In a comment, you asked about characterizing the convergence in terms of a Lyapunov exponent. For that we'd need to fix a reasonable parameter space. Since the problem is scale invariant, I'd use the angle which $OA$ makes against horizontal as the sole description of phase space. Let's call that angle $\alpha$, with $\tan\alpha=y/x$. For the time dependence, I'll consider a point at unit distance from the origin, traveling at unit speed towards $Q$. 
This formulation models the rescaling I used above. Now consider the following differential equation:</p> <p>\begin{align*} \frac{\cos(\alpha+\mathrm d\alpha)}{\sin(\alpha+\mathrm d\alpha)} = \frac{\cos\alpha-\mathrm d\alpha\sin\alpha}{\sin\alpha+\mathrm d\alpha\cos\alpha} = \frac{x-y\,\mathrm d\alpha}{y+x\,\mathrm d\alpha} &amp;= \frac{x+\mathrm dt\,\Delta x}{y+\mathrm dt\,\Delta y} \\ (x-y\,\mathrm d\alpha)(y+\mathrm dt\,\Delta y) &amp;= (y+x\,\mathrm d\alpha)(x+\mathrm dt\,\Delta x) \\ x\,\Delta y\,\mathrm dt-y^2\,\mathrm d\alpha &amp;= y\,\Delta x\,\mathrm dt+x^2\,\mathrm d\alpha \\ (x^2+y^2)\mathrm d\alpha = \mathrm d\alpha &amp;= (x\,\Delta y-y\,\Delta x)\mathrm dt \\ \dot\alpha = \frac{\mathrm d\alpha}{\mathrm dt} &amp;= x\,\Delta y-y\,\Delta x \end{align*}</p> <p>Here I assume $x=\cos\alpha,y=\sin\alpha$ and also assume $(\Delta x,\Delta y)$ to be a unit length vector pointing from $A$ to $Q$:</p> <p>$$\begin{pmatrix}\Delta x\\\Delta y\end{pmatrix}= \frac{-1}{\sqrt{1+y'^2}}\begin{pmatrix}1\\y'\end{pmatrix}$$</p> <p>at least for the case where $y'&gt;0$. For $y'&lt;0$ we want the opposite sign; those are the parts where we move down and right. But for the simpler case of $y'&lt;0$ we get</p> <p>\begin{align*} \dot\alpha= \frac{\mathrm d\alpha}{\mathrm dt}&amp;= \frac{y-xy'}{\sqrt{1+y'^2}} \end{align*}</p> <p>Fixing the sign over the whole range, you get the following as a plot of the change in angle (or rather its negative, for better placement of labels):</p> <p><img src="https://i.stack.imgur.com/hlU7s.png" alt="Change in angle"></p> <p>Or here a close-up of the last degree:</p> <p><img src="https://i.stack.imgur.com/ADsgN.png" alt="Close-up"></p> <p>As you can see, the smaller the angle, the smaller its change will be. 
But the change will always be negative, so the tangent will always move towards horizontal.</p> <p>Now that I think about it, it will be difficult to express this in terms of a (global) Lyapunov exponent: On the one hand there is a high change rate for large angles, whereas for arbitrarily small angles, the change rate can become arbitrarily slow as well. But I've not much experience dealing with Lyapunov exponents, so perhaps you can find a better formulation for this.</p> <p>Looking at these graphs, it should be possible to show that $\dot\alpha&lt;0$ for $\alpha\in(0°,90°]$, thereby demonstrating the convergence</p> <p>$$\lim_{t\to\infty}\alpha(t)=0$$</p> <p>which in turn implies $\theta=0$ and $\varphi=-\frac\pi6=-30°$ in that limit.</p> <h1>Constant speed</h1> <p>Also please remember that the slow convergence is a result of adaptive step width. If you move at uniform speed, e.g. from $x_0=y_0=1$, then angle as a function of time will look like this:</p> <p><img src="https://i.stack.imgur.com/MyYUB.png" alt="Angle as function of time"></p> <p>You can obtain such a graph with reasonable precision if you do the adaptive step width approach, but keep track of the curve length so far and use that as the $t$ coordinate for the data point. This will give you a lot of points on the near vertical axis at the far right.</p> <p>The near-vertical slope there will tell you that the difference between correctly representing the convergence and overshooting the $x=0$ situation is <em>very</em> slim.</p>
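The claim $\dot\alpha&lt;0$ on $(0°,90°]$ can at least be spot-checked numerically from the formulas above. A small sketch (my own; the sign flip for $y'&lt;0$ follows the remark in the derivation):

```python
import math

def slope(x, y):
    """dy/dx from the orthogonality condition derived above."""
    s = math.sqrt(3*x*x + 4*y*y)
    num = 3*x + math.sqrt(3)*(4*y - s)
    den = -3*math.sqrt(3)*x + 4*y - s
    return -num/den

def alpha_dot(alpha):
    """Rate of change of the angle of OA for a unit-speed point at unit distance."""
    x, y = math.cos(alpha), math.sin(alpha)
    yp = slope(x, y)
    norm = math.sqrt(1 + yp*yp)
    sgn = -1.0 if yp > 0 else 1.0            # unit step toward Q, sign as derived
    dx, dy = sgn/norm, sgn*yp/norm
    return x*dy - y*dx

# the angle always decreases, consistent with theta -> 0
for a in (0.05, 0.5, 1.0, 1.5, math.pi/2):
    assert alpha_dot(a) < 0
```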
1,640,733
<p>I think it is true that any power of a logarithm, no matter how big, will eventually grow slower than a linear function with positive slope.</p> <p>Is it true that for any exponent $m&gt;0$ (no matter how big we make $m$), the function $f(x)$ $$f(x)=(\ln x)^m$$</p> <p>will eventually always be less than $g(x) = x$ ?</p> <p>I am pretty sure this is true because I have tried large values of $m$ and it always eventually slows down. But I don't know how to prove it.</p>
DanielWainfleet
254,665
<p>No calculus required. Take logs to any base $b=1+r$ with $r&gt;0.$ Then for $x&gt;(1+r)^2$ we have $\log_b x&gt;2.$ So for $x&gt;(1+r)^2$ let $n_x$ be the positive integer such that $$n_x\leq \log_bx&lt;n_x+1.$$ We have then $b^{n_x}\leq x&lt;b^{n_x+1}$. And since $n_x\geq 2$ and $r&gt;0$, we have, by the Binomial Theorem, $$x\geq b^{n_x}=(1+r)^{n_x}=1+r n_x+r^2 \binom {n_x}{2}+...&gt;r^2 \binom {n_x}{2}&gt;r^2(n_x-1)^2/2.$$ So we have $x&gt; r^2(n_x-1)^2/2$, equivalently $1+\sqrt {2 x}/r&gt;n_x.$ But since $\log_bx&lt;n_x+1$ we have $\log_bx&lt;\sqrt {2x}/r+2$ and we conclude that $\lim _{x\to \infty}(\log_bx)/x=0.$ As shown in the first part of the answer by Clement C., this is sufficient to imply $\lim_{x\to \infty}(\log_bx)^m/x=0$ for any $m&gt;0$.</p>
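One can also see this numerically without overflow by comparing in the log domain: $(\ln x)^m/x = \exp\bigl(m\ln L - L\bigr)$ with $L=\ln x$. A small sketch (my own, with $m=100$):

```python
import math

def log_ratio(m, L):
    # log of (ln x)^m / x, written in terms of L = ln x
    return m * math.log(L) - L

m = 100
r = [log_ratio(m, 10.0**k) for k in range(3, 8)]   # L = ln x from 1e3 to 1e7
assert all(a > b for a, b in zip(r, r[1:]))        # strictly decreasing
assert r[-1] < 0                                   # so (ln x)^m < x eventually
```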
88,122
<p>For the easiest case, assume that $L/E$ is Galois and $E/K$ is Galois. Under what conditions can we conclude that $L/K$ is Galois? I guess the general case can be a bit tricky, but are there some "sufficiently general" cases that are interesting and for which the question can be answered?</p> <p>EDIT: Since Jyrki's reply seems to suggest that there is no general criterion on the groups. Can we say something if we put criterions on the fields? Assume say that $K=\mathbb{Q}$ or $K=\mathbb{Q}_p$?</p>
Lior B-S
26,713
<p>If every $K$-automorphism of $E$ can be extended to a $K$-automorphism of $L$, then $L/K$ is Galois. </p>
191,175
<p>How to calculate the limit of $(n+1)^{\frac{1}{n}}$ as $n\to\infty$?</p> <p>I know how to prove that $n^{\frac{1}{n}}\to 1$ and $n^{\frac{1}{n}}&lt;(n+1)^{\frac{1}{n}}$. What is the other inequality that might solve the problem?</p>
Jonathan
37,832
<p>With $$y=\lim_{n\to\infty} (n+1)^{1/n},$$ consider, using continuity of $\ln$, $$\ln y=\lim_{n\to\infty} \frac{1}{n}\ln(n+1)=0.$$ This tells you that your limit is $1$.</p> <p>Alternately, $$n^{1/n}&lt;n^{1/n}\left(1+\frac{1}{n}\right)^{1/n}&lt;n^{1/n}\left(1+\frac{1}{n}\right),$$ where the middle guy is your expression.</p>
191,175
<p>How to calculate the limit of $(n+1)^{\frac{1}{n}}$ as $n\to\infty$?</p> <p>I know how to prove that $n^{\frac{1}{n}}\to 1$ and $n^{\frac{1}{n}}&lt;(n+1)^{\frac{1}{n}}$. What is the other inequality that might solve the problem?</p>
Martin Argerami
22,857
<p>For the other inequality, you could use $$ (n+1)^{\frac1n}\leq (2n)^{\frac1n}=2^{\frac1n}\,n^{\frac1n}. $$</p>
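Numerically (just a sanity check of the squeeze, not part of the proof), $(n+1)^{1/n}$ visibly decreases toward $1$:

```python
vals = [(n + 1) ** (1.0 / n) for n in (10, 100, 10000, 1000000)]
assert all(a > b for a, b in zip(vals, vals[1:]))  # decreasing toward the limit
assert all(v > 1.0 for v in vals)                  # squeezed from below by 1
assert abs(vals[-1] - 1.0) < 1e-4
```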
3,780,575
<p>We know that if <span class="math-container">$f$</span> is continuous on [a,b] and <span class="math-container">$f:[a,b] \to \mathbb{R}$</span>, then there exists <span class="math-container">$c \in [a,b]$</span> with <span class="math-container">$f(c)(b-a) = \int_a^bf(x)dx$</span></p> <p>If we change ''f is continuous on [a,b]'' to ''f is Riemann integrable'', does the mean value theorem for integrals still hold? If not, can you give me a counter-example?</p> <p>I know that the first Mean-Value Theorem for Riemann-Stieltjes integrals does not require continuity, but that is still different from this statement.</p> <p>reference:<a href="http://mathonline.wikidot.com/the-first-mean-value-theorem-for-riemann-stieltjes-integrals" rel="nofollow noreferrer">http://mathonline.wikidot.com/the-first-mean-value-theorem-for-riemann-stieltjes-integrals</a></p>
user69608
793,349
<p>Found another solution:</p> <p>we have <span class="math-container">$a^2=b(c+b)$</span></p> <p>A triangle of smallest perimeter means <span class="math-container">$\gcd(a,b,c)=1$</span></p> <p>In fact <span class="math-container">$\gcd(b,c)=1$</span>, since any common factor of <span class="math-container">$b,c$</span> would be a factor of <span class="math-container">$a$</span> as well.</p> <p>A perfect square <span class="math-container">$a^2$</span> is being expressed as the product of two relatively prime integers <span class="math-container">$b$</span> and <span class="math-container">$b+c$</span>.</p> <p>It must be the case that both <span class="math-container">$b$</span> and <span class="math-container">$b + c$</span> are perfect squares. Thus for some integers <span class="math-container">$m$</span> and <span class="math-container">$n$</span> with <span class="math-container">$\gcd(m,n)= 1$</span> we have</p> <p><span class="math-container">$b=m^2$</span> and <span class="math-container">$b+c=n^2$</span>, <span class="math-container">$a=mn$</span></p> <p><span class="math-container">$2\cos B=\frac{n}{m}=\frac{a}{b}$</span></p> <p>As <span class="math-container">$\angle C &gt;\frac{\pi}{2} \Rightarrow 0&lt;\angle B&lt;\frac{\pi}{6}$</span></p> <p><span class="math-container">$\Rightarrow \sqrt{3}&lt;2\cos B=\frac{n}{m}&lt;2$</span></p> <p>The smallest values of <span class="math-container">$(m, n)$</span> that satisfy the above mentioned conditions are <span class="math-container">$4$</span> and <span class="math-container">$7$</span> <span class="math-container">$\blacksquare$</span></p>
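A quick check of this construction (with $m=4$, $n=7$ the answer implies $b=16$, $c=33$, $a=28$, perimeter $77$; my own verification script):

```python
import math

m, n = 4, 7
b = m * m            # 16
c = n * n - b        # 33, since b + c = n^2
a = m * n            # 28

assert a * a == b * (b + c)                   # a^2 = b(c+b)
assert math.gcd(a, math.gcd(b, c)) == 1       # primitive triangle
assert math.sqrt(3) < n / m < 2               # sqrt(3) < 2 cos B < 2
assert a + b > c and a + c > b and b + c > a  # triangle inequality
assert a * a + b * b < c * c                  # cos C < 0, so angle C is obtuse
assert a + b + c == 77
```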
1,169,336
<p>Using the formal definition of convergence, Prove that $\lim\limits_{n \to \infty} \frac{3n^2+5n}{4n^2 +2} = \frac{3}{4}$.</p> <p>Workings:</p> <p>If $n$ is large enough, $3n^2 + 5n$ behaves like $3n^2$</p> <p>If $n$ is large enough $4n^2 + 2$ behaves like $4n^2$</p> <p>More formally we can find $a,b$ such that $\frac{3n^2+5n}{4n^2 +2} \leq \frac{a}{b} \frac{3n^2}{4n^2}$</p> <p>For $n\geq 2$ we have $3n^2 + 5n \leq 2\cdot 3n^2$.</p> <p>For $n \geq 0$ we have $4n^2 + 2 \geq \frac{1}{2}4n^2$</p> <p>So for $ n \geq \max\{0,2\} = 2$ we have:</p> <p>$\frac{3n^2+5n}{4n^2 +2} \leq \frac{2 \cdot 3n^2}{\frac{1}{2}4n^2} = \frac{3}{4}$ </p> <p>To make $\frac{3}{4}$ less than $\epsilon$:</p> <p>$\frac{3}{4} &lt; \epsilon$, $\frac{3}{\epsilon} &lt; 4$</p> <p>Take $N = \frac{3}{\epsilon}$ </p> <p>Proof:</p> <p>Suppose that $\epsilon &gt; 0$</p> <p>Let $N = \max\{2,\frac{3}{\epsilon}\}$</p> <p>For any $n \geq N$, we have that $n &gt; \frac{3}{\epsilon}$ and $n&gt;2$, therefore</p> <p>$3n^2 + 5n \leq 6n^2$ and $4n^2 + 2 \geq 2n^2$</p> <p>Then for any $n \geq N$ we have</p> <p>$|s_n - L| = \left|\frac{3n^2 + 5n}{4n^2 + 2} - \frac{3}{4}\right|$</p> <p>$ = \frac{3n^2 + 5n}{4n^2 + 2} - \frac{3}{4}$</p> <p>$ = \frac{10n-3}{8n^2+4}$</p> <p>Now I'm not sure on what to do. Any help will be appreciated.</p>
AAkash
219,951
<p>It is simple:</p> <p>$$ \lim_{n \to \infty} \dfrac{3 + \dfrac{5}{n}}{4 + \dfrac{2}{n^2}}$$</p> <p>$$ = \dfrac{3}{4}$$</p>
1,169,336
<p>Using the formal definition of convergence, Prove that $\lim\limits_{n \to \infty} \frac{3n^2+5n}{4n^2 +2} = \frac{3}{4}$.</p> <p>Workings:</p> <p>If $n$ is large enough, $3n^2 + 5n$ behaves like $3n^2$</p> <p>If $n$ is large enough $4n^2 + 2$ behaves like $4n^2$</p> <p>More formally we can find $a,b$ such that $\frac{3n^2+5n}{4n^2 +2} \leq \frac{a}{b} \frac{3n^2}{4n^2}$</p> <p>For $n\geq 2$ we have $3n^2 + 5n \leq 2\cdot 3n^2$.</p> <p>For $n \geq 0$ we have $4n^2 + 2 \geq \frac{1}{2}4n^2$</p> <p>So for $ n \geq \max\{0,2\} = 2$ we have:</p> <p>$\frac{3n^2+5n}{4n^2 +2} \leq \frac{2 \cdot 3n^2}{\frac{1}{2}4n^2} = \frac{3}{4}$ </p> <p>To make $\frac{3}{4}$ less than $\epsilon$:</p> <p>$\frac{3}{4} &lt; \epsilon$, $\frac{3}{\epsilon} &lt; 4$</p> <p>Take $N = \frac{3}{\epsilon}$ </p> <p>Proof:</p> <p>Suppose that $\epsilon &gt; 0$</p> <p>Let $N = \max\{2,\frac{3}{\epsilon}\}$</p> <p>For any $n \geq N$, we have that $n &gt; \frac{3}{\epsilon}$ and $n&gt;2$, therefore</p> <p>$3n^2 + 5n \leq 6n^2$ and $4n^2 + 2 \geq 2n^2$</p> <p>Then for any $n \geq N$ we have</p> <p>$|s_n - L| = \left|\frac{3n^2 + 5n}{4n^2 + 2} - \frac{3}{4}\right|$</p> <p>$ = \frac{3n^2 + 5n}{4n^2 + 2} - \frac{3}{4}$</p> <p>$ = \frac{10n-3}{8n^2+4}$</p> <p>Now I'm not sure on what to do. Any help will be appreciated.</p>
Timbuc
118,527
<p>Perhaps simpler:</p> <p>With the Squeeze Theorem:</p> <p>$$\frac34\xleftarrow[x\to\infty]{}\frac{3n^2}{4n^2}\le\frac{3n^2+5n}{4n^2+2}\le\frac{3n^2+5n}{4n^2}=\frac34+\frac54\frac1{n}\xrightarrow[n\to\infty]{}\frac34+0=\frac34$$</p> <p>With arithmetic of limits:</p> <p>$$\frac{3n^2+5n}{4n^2+2}=\frac{3n^2+5n}{4n^2+2}\cdot\frac{\frac1{n^2}}{\frac1{n^2}}=\frac{3+\frac5n}{4+\frac2{n^2}}\xrightarrow[n\to\infty]{}\frac{3+0}{4+0}=\frac34$$</p>
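To tie this back to the formal definition: since $\bigl|s_n-\tfrac34\bigr| = \frac{10n-3}{8n^2+4} &lt; \frac{10n}{8n^2} = \frac{5}{4n}$, any $N &gt; \frac{5}{4\varepsilon}$ works. A quick check (the bound $5/(4n)$ is my own, not from the answers above):

```python
import math

def s(n):
    return (3 * n * n + 5 * n) / (4 * n * n + 2)

# the closed form of the error term used in the question
assert abs((s(7) - 0.75) - (10 * 7 - 3) / (8 * 49 + 4)) < 1e-15

for eps in (0.1, 0.01, 0.001):
    N = math.floor(5 / (4 * eps)) + 1    # any integer N > 5/(4*eps) works
    for n in range(N, N + 200):
        assert abs(s(n) - 0.75) < eps
```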
2,280,243
<blockquote> <p>A tribonacci sequence is a sequence of numbers such that each term from the fourth onward is the sum of the previous three terms. The first three terms in a tribonacci sequence are called its <em>seeds</em>. For example, if the three seeds of a tribonacci sequence are $1,2$, and $3$, its 4th term is $6$<br> ($1+2+3$), and the next is $11$ ($2+3+6$).</p> </blockquote> <p>Find the smallest 5-digit term in a tribonacci sequence if the seeds are $6,19,22$</p> <p>I'm having trouble with this. I don't know where to start. The formula for the tribonacci sequence in relation to its seeds is $$u_{n+3} = u_{n} + u_{n+1} + u_{n+2}$$ This tribonacci formula holds for all integers $n$. But that's all I know how to work out. And just in case it helps, the next few numbers in the sequence mentioned in the question are $47,88,157,292$. Is there some shortcut to it, because I need to show some working out and having two pages full of addition doesn't sound very easy to mark, does it?</p>
Stefan Gruenwald
149,416
<p>Here is a little Python program for your particular tribonacci sequence. You first seed your list with the start values (in your case 6, 19, 22). Then you load your variables with them (a, b, c) and you define how the variables will be updated: a will be the sum of a+b+c in the next iteration, b will be set to the old value of a, and c will be set to the old value of b. Then the process repeats (I have set the repetition to 40 times, but it can be anything). The results are stored in a list called "result". At the end you read them out numerically.</p> <pre><code>result = [6, 19, 22]
a, b, c = result[-1], result[-2], result[-3]
for i in range(40):
    a, b, c = a + b + c, a, b
    result.append(a)
for e, f in enumerate(result, 1):
    print(e, f)
</code></pre> <p>You get the following result:</p> <p>1 6</p> <p>2 19</p> <p>3 22</p> <p>4 47</p> <p>5 88</p> <p>6 157</p> <p>7 292</p> <p>8 537</p> <p>9 986</p> <p>10 1815</p> <p>11 3338</p> <p>12 6139</p> <p>13 11292</p> <p>...</p>
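If you only want the final answer to the question asked, a minimal sketch that stops at the first term with the required number of digits (assuming the seeds themselves are smaller):

```python
def smallest_term_with_digits(seeds, digits):
    a, b, c = seeds
    while c < 10 ** (digits - 1):     # shift the window until a term is big enough
        a, b, c = b, c, a + b + c
    return c

# for seeds 6, 19, 22 the smallest 5-digit term is 11292
assert smallest_term_with_digits((6, 19, 22), 5) == 11292
```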
52,299
<p>Hello everybody.</p> <p>I'm looking for an "easy" example of a (non-zero) holomorphic function $f$ with almost everywhere vanishing radial boundary limits: $\lim\limits_{r \rightarrow 1-} f(re^{i\phi})=0$.</p> <p>Does anyone know such an example.</p> <p>Best CJ</p>
David Hansen
1,464
<p>According to a footnote in the famous Hardy-Ramanujan paper "Asymptotic formulae in combinatory analysis", the function $f(q)=\prod_{n=1}^{\infty}\frac{1}{1-q^n}$ vanishes like $f(re^{i\theta})=o((1-r)^{1/4-\varepsilon})$ for almost all $\theta$. No proof is given, though I can't imagine Hardy would have made a statement like this without a proof in his pocket.</p> <p><strong>Edit:</strong> This isn't actually hard to guess at. By Euler's pentagonal number theorem, we have $f(q)^{-1}=\sum_{n\in \mathbf{Z}}(-1)^{n}q^{n(3n-1)/2}$, so Plancherel gives</p> <p>$\int_{0}^{2\pi}|f(re^{i\theta})|^{-2}d\theta=2\pi\sum_{n\in \mathbf{Z}}r^{n(3n-1)} \sim 2 \pi^{3/2}3^{-1/2}(1-r)^{-1/2}.$</p>
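Euler's pentagonal number theorem used here is easy to check numerically; a small sketch (my own, truncating the product at degree 25):

```python
# Multiply out prod_{n=1}^{N} (1 - q^n), truncated at degree N.
N = 25
coeffs = [0] * (N + 1)
coeffs[0] = 1
for n in range(1, N + 1):
    for i in range(N - n, -1, -1):     # in-place multiplication by (1 - q^n)
        coeffs[i + n] -= coeffs[i]

# Nonzero coefficients sit at generalized pentagonal numbers k(3k-1)/2,
# with sign (-1)^k, exactly as the theorem states.
expected = {}
for k in range(-5, 6):
    g = k * (3 * k - 1) // 2
    if 0 <= g <= N:
        expected[g] = (-1) ** abs(k)

assert all(coeffs[i] == expected.get(i, 0) for i in range(N + 1))
```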
201,163
<p>I have a data set that contains data of the form (x0, y0, f1, f2, i1, i2, i3). The (x0, y0) are the coordinates, while the values f1 and f2 are real numbers (i1, i2, i3 correspond to some integers which are used as indices). The data can be downloaded <a href="http://www.mediafire.com/file/0xtxn8rggjorhdj/basins_%2528L4%2529.out/file" rel="nofollow noreferrer">here</a>.</p> <p>Now I plot the (x0, y0) coordinates of the data with i2 = 4, where each point is colored according to the value of f1. </p> <p><a href="https://i.stack.imgur.com/GSsy9.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GSsy9.jpg" alt="enter image description here"></a></p> <p>As you can see, there are missing points. The data set with all the missing points can be found <a href="http://www.mediafire.com/file/obqpo2pfq56sa24/data_LGs.dat/file" rel="nofollow noreferrer">here</a></p> <p><a href="https://i.stack.imgur.com/owMyc.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/owMyc.jpg" alt="enter image description here"></a> </p> <p>Now, how can I use the original data with the f1 and f2 values, so as to interpolate and predict the f1 and f2 values of the missing points? Any suggestions? </p>
kickert
54,320
<p>The <code>Predict</code> function can provide you the information you need.</p> <p>Start by importing your data into Mathematica. For me, it was easiest to change the file extensions to <code>.txt</code> and use <code>SemanticImport</code>.</p> <pre><code>rawdata = SemanticImport["basins_(L4).txt"] // Normal; missing = SemanticImport["data_LGs.txt"] // Normal; </code></pre> <p>Then pull out the subset that with i2=4.</p> <pre><code>subset = Select[rawdata, #[[6]] == 4 &amp;] </code></pre> <p>You can now thread your (x0, y0) values to the f1 values:</p> <pre><code>f1aidata = Thread[subset[[All, 1 ;; 2]] -&gt; subset[[All, 3]]] </code></pre> <p>At this point you have some choices to make around the Method and Performance Goals you use for the <code>Predict</code> function. We could go deep in the weeds on this, but I created some training and test data and ran through all the options and found <code>GradientBoostedTrees</code> was the best compromise between quality and computational time.</p> <pre><code>f1predictor = Predict[f1aidata, Method -&gt; "GradientBoostedTrees", PerformanceGoal -&gt; "Quality"] </code></pre> <p>With the Predictor you just created, you can run the missing data through it.</p> <pre><code>f1outputs = f1predictor[#] &amp; /@ missing; </code></pre> <p>Then combine the inputs and outputs and <code>Join</code> the lists</p> <pre><code>f1missingresults = Append[Transpose[missing], f1outputs] // Transpose; combinedresults = Join[subset[[All, 1 ;; 3]], f1missingresults]; </code></pre> <p>Using a <code>ListDensityPlot</code>, you get this:</p> <pre><code>ListDensityPlot[combinedresults, ColorFunction -&gt; "TemperatureMap"] </code></pre> <p><a href="https://i.stack.imgur.com/QQjht.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QQjht.png" alt="enter image description here"></a></p> <p>Looking at the <code>ListPointPlot3D</code> you can see it isn't perfect, but it is very close.</p> <pre><code>ListPointPlot3D[combinedresults] 
</code></pre> <p><a href="https://i.stack.imgur.com/4Djj8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4Djj8.png" alt="enter image description here"></a></p> <p>If you want to use this for f2, then follow the same process, pulling your data from <code>subset[[All,{1,2,4}]]</code> and creating a new predictor.</p>
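If Mathematica is not at hand, a much cruder fallback for the same gap-filling task is inverse-distance weighting in plain Python (purely illustrative, with synthetic data; it will not match the quality of <code>GradientBoostedTrees</code>):

```python
def idw(points, values, q, power=2.0):
    """Inverse-distance-weighted estimate of the field at query point q."""
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d2 = (x - q[0]) ** 2 + (y - q[1]) ** 2
        if d2 == 0.0:
            return v                    # query coincides with a data point
        w = d2 ** (-power / 2.0)        # weight ~ 1 / distance^power
        num += w * v
        den += w
    return num / den

# synthetic data on the plane v = x + y, easy to sanity-check
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
vals = [x + y for x, y in pts]
assert idw(pts, vals, (0, 0)) == 0
assert abs(idw(pts, vals, (0.5, 0.5)) - 1.0) < 1e-12
```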
4,056,273
<h2><a href="https://gateoverflow.in/357468/gate-cse-2021-set-1-ga-question-9" rel="nofollow noreferrer">GATE CSE 2021 Set 1 | GA Question: 9</a></h2> <hr /> <p>Given below are two statements 1 and 2, and two conclusions I and II</p> <ul> <li><strong>Statement 1:</strong> All bacteria are microorganisms.</li> <li><strong>Statement 2:</strong> All pathogens are microorganisms.</li> <li><strong>Conclusion I:</strong> Some pathogens are bacteria.</li> <li><strong>Conclusion II:</strong> All pathogens are not bacteria.</li> </ul> <p>Based on the above statements and conclusions, which one of the following options is logically CORRECT?</p> <ul> <li><strong>(A)</strong> Only conclusion I is correct</li> <li><strong>(B)</strong> Only conclusion II is correct</li> <li><strong>(C)</strong> Either conclusion I or II is correct</li> <li><strong>(D)</strong> Neither conclusion I nor II is correct</li> </ul> <hr /> <h2>My attempt:</h2> <p>I have to find counterexamples: <a href="https://i.stack.imgur.com/0lYp6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0lYp6.png" alt="enter image description here" /></a></p> <p><strong>Case-1</strong> is a counterexample to Conclusion II, because all pathogens are bacteria in this diagram. Conclusion I would be true in this case, because "all" implies "some". (So, Conclusion I is true and II is false.)</p> <p><strong>Case-2</strong> is a counterexample to Conclusion I, because no pathogens are bacteria. Conclusion II would be true in this case. (So, Conclusion I is false and II is true.)</p> <p><strong>Case-3:</strong> Conclusion I is true, and Conclusion II is false as some pathogens are <em>already</em> bacteria.</p> <p><strong>Case-4:</strong> Conclusion I is true, and Conclusion II is false as some pathogens are <em>already</em> bacteria.</p> <p>So, I found a counterexample for each conclusion. 
<strong>Hence, option (D) is true (official key given by GATE 2021).</strong></p> <p><em>I've read this method for finding counterexamples for such questions.</em></p> <hr /> <p>My doubt here is that the two given conclusions are negations of each other. That is,</p> <blockquote> <p>∃x(p(x) ∧ b(x)) = ¬∀x (p(x) → ¬b(x))</p> <p>∀x (p(x) → ¬b(x)) = ¬∃x(p(x) ∧ b(x))</p> </blockquote> <p><a href="https://gateoverflow.in/blog/12492/questions-to-be-debated-for-gate-cse-2021" rel="nofollow noreferrer">People are debating this question</a> and saying that the <a href="https://gate.iitb.ac.in/" rel="nofollow noreferrer">correct answer</a> should be option <strong>(C) Either conclusion I or II is correct</strong>.</p> <hr /> <p>An almost identical question appeared in the GATE EC paper, with the same logic as above.</p> <h2><a href="https://ec.gateoverflow.in/1617/gate-ec-2021-ga-question-6" rel="nofollow noreferrer">GATE EC 2021 | GA Question: 6</a> :</h2> <p>Given below are two statements and two conclusions.</p> <ul> <li><p><strong>Statement 1:</strong> All purple are green.</p> </li> <li><p><strong>Statement 2:</strong> All black are green.</p> </li> <li><p><strong>Conclusion I:</strong> Some black are purple.</p> </li> <li><p><strong>Conclusion II:</strong> No black is purple.</p> </li> </ul> <p>Based on the above statements and conclusions, which one of the following options is logically CORRECT?</p> <ul> <li><strong>(A)</strong> Only conclusion I is correct</li> <li><strong>(B)</strong> Only conclusion II is correct</li> <li><strong>(C)</strong> Either conclusion I or II is correct</li> <li><strong>(D)</strong> Both conclusion I and II are correct</li> </ul> <p>Here, as in my attempt above, we can find counterexamples for both the conclusions, which by my method would point to option <strong>(D)</strong> Both conclusion I and II are correct.</p> <p>But this time, the official key given is <strong>(C)</strong> Either conclusion I or II is correct.</p> <hr /> <p>Note these are two different branches of Engineering 
(Computer Science and Engineering, Electronics and Communication Engineering). Professors who designed these questions for GATE 2021, should not be same person (but, could be as these questions are from General Aptitude which is common (but not same questions) in all branches of GATE Papers).</p> <hr /> <p>My Question</p> <blockquote> <p>What is correct approach to solve such questions and which answer keys are correct which are wrong ? If both keys are correct respective their questions then what is possible difference between these questions ?</p> </blockquote> <p>Thank you,</p> <hr /> <h2>Update :</h2> <p>Final answer keys given by GATE official :</p> <ul> <li>For first question (GATE CSE 2021 Set 1 | GA Question: 9) : Both <strong>(C) or (D)</strong></li> <li>For second question (GATE EC 2021 | GA Question: 6) : Only <strong>(C)</strong></li> </ul> <blockquote> <p>Now, why they have given option (D) as answer for the first question !?</p> </blockquote>
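<p>To make the debate concrete, one can brute-force every finite model of the two statements in a few lines of Python (a sketch; here "All pathogens are not bacteria" is read as "no pathogen is a bacterium"):</p>

```python
from itertools import product

# Each "type" of individual is a triple (bacterium, pathogen, microorganism).
# Statements 1 and 2 restrict which types can occur at all: B => M and P => M.
allowed = [t for t in product([0, 1], repeat=3)
           if (not t[0] or t[2]) and (not t[1] or t[2])]

def concl_I(model):   # Conclusion I: some pathogens are bacteria
    return any(b and p for b, p, m in model)

def concl_II(model):  # Conclusion II: no pathogen is a bacterium
    return all(not (b and p) for b, p, m in model)

# Every subset of allowed types is a model of the two statements.
models = [[t for t, keep in zip(allowed, bits) if keep]
          for bits in product([0, 1], repeat=len(allowed))]

always_I = all(concl_I(m) for m in models)
always_II = all(concl_II(m) for m in models)
always_I_or_II = all(concl_I(m) or concl_II(m) for m in models)
print(always_I, always_II, always_I_or_II)  # False False True
```

<p>Since Conclusion II is the exact negation of Conclusion I in every model, neither conclusion holds in all models (supporting option (D)), yet "I or II" holds in every model (supporting option (C)); this is precisely the ambiguity being debated.</p>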
JJacquelin
108,514
<p><span class="math-container">$$\frac{d \beta}{dt}=\frac{c}{R(\beta(t))+c_0}\quad;\quad R(\beta)=\frac{1}{\sqrt{1-\big(e \cos(\beta)\big)^2}}$$</span></p> <p><span class="math-container">$$\left(\frac{1}{\sqrt{1-\big(e \cos(\beta)\big)^2}}+c_0\right)\frac{d \beta}{dt}=c$$</span></p> <p><span class="math-container">$$\int \frac{d\beta}{\sqrt{1-\big(e \cos(\beta)\big)^2}}+c_0\beta=c(t-t_0)$$</span></p> <p><span class="math-container">$$t=t_0+\frac{1}{c}\left(\frac{1}{\sqrt{1-e^2}}F\left(\beta\:\Bigg|\:{\frac{e^2}{e^2-1}} \right)+c_0\beta \right)$$</span> <span class="math-container">$F$</span> is the elliptic integral of the first kind. With initial condition <span class="math-container">$\beta(0)=\beta_0$</span> :</p> <p><span class="math-container">$$\boxed{t(\beta)=\frac{1}{c\:\sqrt{1-e^2}}\left( F\left(\beta\:\Bigg|\:{\frac{e^2}{e^2-1}}\right)-F\left(\beta_0\:\Bigg|\:{\frac{e^2}{e^2-1}}\right) \right)+\frac{c_0}{c}(\beta-\beta_0)}$$</span> This is the solution <span class="math-container">$t$</span> as a function of <span class="math-container">$\beta$</span>.</p> <p>The inverse function <span class="math-container">$\beta(t)$</span> cannot be written with a finite number of available standard functions. Numerical calculus is recommended.</p>
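<p>For anyone who wants actual numbers, here is a quick pure-Python sketch: the elliptic integral is replaced by direct quadrature of the defining integral, and the inverse function β(t) is obtained by bisection (the constants <code>E</code>, <code>C</code>, <code>C0</code> are arbitrary illustrative values with <code>E</code> &lt; 1, and β(0) = 0 is assumed):</p>

```python
import math

# Hypothetical constants for illustration (E < 1 keeps the square root real):
E, C, C0 = 0.5, 1.0, 0.2

def t_of_beta(beta, beta0=0.0, n=2000):
    """t(beta) as above, with the elliptic integral replaced by a Simpson-rule
    evaluation of the integral of dB / sqrt(1 - (E cos B)^2) from beta0 to beta."""
    h = (beta - beta0) / n  # n is even
    f = lambda b: 1.0 / math.sqrt(1.0 - (E * math.cos(b)) ** 2)
    s = f(beta0) + f(beta)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(beta0 + i * h)
    return (s * h / 3.0 + C0 * (beta - beta0)) / C

def beta_of_t(t, lo=0.0, hi=50.0, tol=1e-9):
    """Invert t(beta) by bisection; valid because t(beta) is strictly increasing."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if t_of_beta(mid) < t else (lo, mid)
    return 0.5 * (lo + hi)

beta_rt = beta_of_t(t_of_beta(1.3))
print(beta_rt)  # round trip recovers 1.3
```
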
4,056,273
<h2><a href="https://gateoverflow.in/357468/gate-cse-2021-set-1-ga-question-9" rel="nofollow noreferrer">GATE CSE 2021 Set 1 | GA Question: 9</a></h2> <hr /> <p>Given below are two statements 1 and 2, and two conclusions I and II</p> <ul> <li><strong>Statement 1:</strong> All bacteria are microorganisms.</li> <li><strong>Statement 2:</strong> All pathogens are microorganisms.</li> <li><strong>Conclusion I:</strong> Some pathogens are bacteria.</li> <li><strong>Conclusion II:</strong> All pathogens are not bacteria.</li> </ul> <p>Based on the above statements and conclusions, which one of the following options is logically CORRECT?</p> <ul> <li><strong>(A)</strong> Only conclusion I is correct</li> <li><strong>(B)</strong> Only conclusion II is correct</li> <li><strong>(C)</strong> Either conclusion I or II is correct</li> <li><strong>(D)</strong> Neither conclusion I nor II is correct</li> </ul> <hr /> <h2>My attempt:</h2> <p>I have to find counterexamples: <a href="https://i.stack.imgur.com/0lYp6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0lYp6.png" alt="enter image description here" /></a></p> <p><strong>Case-1</strong> is a counterexample to Conclusion II, because all pathogens are bacteria in this diagram. Conclusion I would be true in this case, because "all" implies "some". (So, Conclusion I is true and II is false.)</p> <p><strong>Case-2</strong> is a counterexample to Conclusion I, because no pathogens are bacteria. Conclusion II would be true in this case. (So, Conclusion I is false and II is true.)</p> <p><strong>Case-3:</strong> Conclusion I is true, and Conclusion II is false as some pathogens are <em>already</em> bacteria.</p> <p><strong>Case-4:</strong> Conclusion I is true, and Conclusion II is false as some pathogens are <em>already</em> bacteria.</p> <p>So, I found counterexamples for both Conclusions. 
<strong>Hence, option (D) is true (the official key given by GATE 2021).</strong></p> <p><em>I've read this method of finding counterexamples for such questions.</em></p> <hr /> <p>My doubt here is that the two given Conclusions are negations of each other. That is,</p> <blockquote> <p>∃x(p(x) ∧ b(x)) = ¬∀x (p(x) → ¬b(x))</p> <p>∀x (p(x) → ¬b(x)) = ¬∃x(p(x) ∧ b(x))</p> </blockquote> <p><a href="https://gateoverflow.in/blog/12492/questions-to-be-debated-for-gate-cse-2021" rel="nofollow noreferrer">People are debating this question</a> and saying that the <a href="https://gate.iitb.ac.in/" rel="nofollow noreferrer">correct answer</a> should be option <strong>(C) Either conclusion I or II is correct</strong>.</p> <hr /> <p>An almost identical question appeared in the GATE EC branch, with the same logic as above.</p> <h2><a href="https://ec.gateoverflow.in/1617/gate-ec-2021-ga-question-6" rel="nofollow noreferrer">GATE EC 2021 | GA Question: 6</a> :</h2> <p>Given below are two statements and two conclusions.</p> <ul> <li><p><strong>Statement 1:</strong> All purple are green.</p> </li> <li><p><strong>Statement 2:</strong> All black are green.</p> </li> <li><p><strong>Conclusion I:</strong> Some black are purple.</p> </li> <li><p><strong>Conclusion II:</strong> No black is purple.</p> </li> </ul> <p>Based on the above statements and conclusions, which one of the following options is logically CORRECT?</p> <ul> <li><strong>(A)</strong> Only conclusion I is correct</li> <li><strong>(B)</strong> Only conclusion II is correct</li> <li><strong>(C)</strong> Either conclusion I or II is correct</li> <li><strong>(D)</strong> Both conclusion I and II are correct</li> </ul> <p>Here, as in my attempt above, we can find counterexamples for both conclusions, so by the same reasoning the answer would be <strong>(D)</strong> Both conclusion I and II are correct.</p> <p>But this time the official key given is <strong>(C)</strong> Either conclusion I or II is correct.</p> <hr /> <p>Note these are two different branches of Engineering 
(Computer Science and Engineering, Electronics and Communication Engineering). The professors who designed these questions for GATE 2021 should not be the same person (but could be, as these questions are from General Aptitude, which is common, though not with the same questions, across all branches of GATE papers).</p> <hr /> <p>My Question</p> <blockquote> <p>What is the correct approach to solving such questions, and which answer keys are correct and which are wrong? If both keys are correct for their respective questions, then what is the possible difference between these questions?</p> </blockquote> <p>Thank you,</p> <hr /> <h2>Update :</h2> <p>Final answer keys given by the GATE officials:</p> <ul> <li>For the first question (GATE CSE 2021 Set 1 | GA Question: 9) : Both <strong>(C) or (D)</strong></li> <li>For the second question (GATE EC 2021 | GA Question: 6) : Only <strong>(C)</strong></li> </ul> <blockquote> <p>Now, why have they given option (D) as an answer for the first question?!</p> </blockquote>
Eugene
726,796
<p><span class="math-container">$$ \begin{aligned} \dot{\beta}{(\theta)} &amp;= \frac{c}{R(\beta{(\theta)}) + c_0} \Leftrightarrow \\ &amp;\Leftrightarrow \left(R(\beta{(\theta)}) + c_0\right)\dot{\beta}{(\theta)} = c \Leftrightarrow \\ &amp;\Leftrightarrow \int\limits_0^\theta\left(R(\beta{(\tau)}) + c_0\right)\dot{\beta}{(\tau)}d\tau = c\int\limits_0^\theta d\tau = c\theta \end{aligned} $$</span></p> <p>Now, we have to evaluate the integral</p> <p><span class="math-container">$$ \begin{aligned} \int\limits_0^\theta\left(R(\beta{(\tau)}) + c_0\right)\dot{\beta}{(\tau)}d\tau &amp;= \left|u = \beta{(\tau)} \Rightarrow du = \dot{\beta}(\tau)d\tau\right| = \\ &amp;= \int\limits_{\beta{(0)}}^{\beta{(\theta)}}\left(R(u) + c_0\right)du = \\ &amp;= \int\limits_{\beta{(0)}}^{\beta{(\theta)}}\frac{du}{\sqrt{1-\left(e\cos{u}\right)^2}} + c_0(\beta{(\theta)} - \beta{(0)}) \end{aligned} $$</span></p>
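<p>Assuming <span class="math-container">$|e|&lt;1$</span>, the remaining integral is an incomplete elliptic integral of the first kind, which recovers the result in the other answer:</p> <p><span class="math-container">$$\int\limits_{\beta{(0)}}^{\beta{(\theta)}}\frac{du}{\sqrt{1-\left(e\cos{u}\right)^2}}=\frac{1}{\sqrt{1-e^2}}\left(F\left(\beta{(\theta)}\:\Bigg|\:\frac{e^2}{e^2-1}\right)-F\left(\beta{(0)}\:\Bigg|\:\frac{e^2}{e^2-1}\right)\right)$$</span></p> <p>so <span class="math-container">$\beta(\theta)$</span> is determined only implicitly; a closed-form inverse is not available.</p>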
939,868
<p>There are a lot of math journals with "Acta" in the title, for instance, Acta Mathematica, Acta Arithmetica, etc. Would you explain what "acta" means?</p>
Lucian
93,448
<p>Acta is the plural of actum, a Latin word referring to $($official$)$ papers or documents.</p>
113,963
<p>While the common approach to algebraic groups is via representable functors, it seems that there is no such approach for differential algebraic groups (those defined by differential polynomials). Neither the book by E. Kolchin nor the texts by Ph. J. Cassidy contain anything like this — they work only with the groups of points over differential fields (and, naturally, don't say the words "group of points").</p> <p>Concerning difference algebraic groups, i.e. those defined by polynomials with some fixed endomorphism (also, I don't like the ambiguity with the notion of "difference algebraic equation"), there is no systematic treatment at all, although some of these groups are intensively studied (twisted groups of Lie type being an example).</p> <p>So the question is: is there really no modern (scheme-like) exposition of the subject? If so, why?</p>
anonymous
21,885
<p>A functorial-schematic approach to differential/difference algebraic groups is surely possible. Why this is not to be found in the literature is probably due to historical reasons. The major bulk of results in differential and difference algebra was obtained in a time when algebraic geometry in the style of Weil's "Foundations of algebraic geometry" was in vogue. In differential/difference algebra there never really emerged someone of the format of Grothendieck who was willing to rewrite all the foundations. (It is not only lack of motivation; there are also some mathematically highly non-trivial issues here.) Moreover, some people in the field seem to doubt the value of such an effort, and model theorists, who always had a strong impact on the field, are also keeping the universal domain alive. So even nowadays, Kolchin's book remains the ultimate reference for differential algebra and differential algebraic geometry.</p> <p>However, I guess if Kolchin had lived ten years later, he would have written his book in Grothendieckian style and we would not be discussing this topic here. It is also not the case that people in differential and difference algebra are ignorant of the scheme-theoretic developments. For example, J. Kovacic has rewritten and quite beautifully clarified Kolchin's Galois theory of strongly normal differential field extensions. For difference equations, in my opinion, Hrushovski's proof of the twisted Lang-Weil estimates is also a manifesto for the necessity and superiority of the scheme-theoretic approach. Finally, this article <a href="http://arxiv.org/abs/1111.7285">http://arxiv.org/abs/1111.7285</a> contains a Tannakian approach to differential/difference algebraic groups using scheme-theoretic language.</p>
2,354,004
<p>I'm struggling with the following sum:</p> <p>$$\sum_{n=0}^\infty \frac{n!}{(2n)!}$$</p> <p>I know that the final result will use the error function, but will not use any other non-elementary functions. I'm fairly sure that it doesn't telescope, and I'm not even sure how to get $\operatorname {erf}$ out of that.</p> <p>Can somebody please give me a hint? No full answers, please.</p>
J.G.
56,861
<p>Here's a hint. Write the fraction as $\int_0^\infty\frac{x^n}{(2n)!}e^{-x}dx$. The sum is then $\int_0^\infty\cosh\sqrt{x}e^{-x} \, dx=\int_0^\infty 2y\cosh y e^{-y^2} \, dy$. Can you take it from there?</p>
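<p>Still no full answer, but a numeric sanity check (pure Python, no special functions) that the partial sums of the series agree with the integral in the hint:</p>

```python
import math

# Partial sum of  sum_{n>=0} n!/(2n)!  (the terms decay super-fast):
total = sum(math.factorial(n) / math.factorial(2 * n) for n in range(30))

# The hint's integral, truncated at y = 12 (the integrand behaves like
# y * exp(-y^2 + y) and is negligible long before that) and evaluated
# with the composite Simpson rule:
def f(y):
    return 2.0 * y * math.cosh(y) * math.exp(-y * y)

a, b, n = 0.0, 12.0, 20000  # n must be even for Simpson's rule
h = (b - a) / n
s = f(a) + f(b)
for i in range(1, n):
    s += (4 if i % 2 else 2) * f(a + i * h)
integral = s * h / 3.0

print(total, integral)  # both ≈ 1.5922965
```
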
150,180
<p>I am trying to read Gross's paper on Heegner points and it seems ambiguous to me on some points:</p> <p>Gross (page 87) said that $Y=Y_{0}(N)$ is the open modular curve over $\mathbb{Q}$ which classifies ordered pairs $(E,E^{'})$ of elliptic curves together with a cyclic isogeny $E\rightarrow E^{'}$ of degree $N$. Gross uses at some steps the cyclic isogeny between two elliptic curves over $\mathbb{C}$. One of the books that I have read to understand the theory of modular curves is "A First Course in Modular Forms", written by Fred Diamond and Jerry Shurman. </p> <p>Theorem 1.5.1 (page 38)</p> <p>Let $N$ be a positive integer. </p> <p>(a) The moduli space for $\Gamma_{0}(N)$ is $$S_{0}(N)=\{[E_{\tau},\langle1/N+\Lambda_{\tau}\rangle]:\tau\in H\},$$ where $H$ is the upper half plane. Two points $[E_{\tau},\langle1/N+\Lambda_{\tau}\rangle]$ and $[E_{\tau^{'}},\langle1/N+\Lambda_{\tau^{'}}\rangle]$ are equal if and only if $\Gamma_{0}(N)\tau=\Gamma_{0}(N)\tau^{'}$. Thus there is a bijection $$\psi:S_{0}(N)\rightarrow Y_{0}(N), [\mathbb{C}/\Lambda_{\tau},\langle1/N+\Lambda_{\tau}\rangle]\mapsto \Gamma_{0}(N)\tau.$$</p> <p>So, how can we make the link between the equivalence classes of the enhanced elliptic curves for $\Gamma_{0}(N)$ (= the equivalence classes of $(E,C)$ where $E$ is a complex elliptic curve and $C$ is a cyclic subgroup of $E$ of order $N$) defined in Diamond/Shurman's book and the cyclic isogenies $E\rightarrow E^{'}$ of degree $N$ used by Gross?</p> <p>I also ask if there is any other paper which explains the theory of Heegner points explicitly. </p> <p>I have looked at Darmon's notes and the Gross-Zagier paper "Heegner points and derivatives of L-series" and it seems that both were influenced by Gross's paper! Is there any other paper which explains Heegner points explicitly and independently of Gross's paper?</p> <p>(I keep this post open for any further questions about Gross's paper and I apologise for any mistakes in my English.)</p> <p>Thank you.</p>
Anton Fonarev
10,941
<p>Under the Noetherian hypothesis, the category of coherent sheaves is the smallest abelian subcategory (say, in the category of $\mathcal{O}_X$-modules) containing all line bundles. I don't have a reference in mind, but any standard algebraic geometry text should work (Hartshorne, Liu, Vakil's notes, Stacks project, etc.)</p> <p><strong>UPD.</strong> As for the question of getting rid of the ambient category, this is a tough (and, probably, not very natural) one. You still have to keep track of some structure. In particular, vector bundles form an exact category and one may take its abelian hull (this is some kind of an adjoint functor, though you have to deal with 2-categories). This has appeared in some work of Keller. Probably, you still get the right answer, but I can't say anything more precise.</p>
3,599,893
<p>I had this idea to build a model of Earth in Minecraft. In this game, everything is built on a 2D plane of infinite length and width. But I wanted to make a world such that someone exploring it could think that they could possibly be walking on a very large sphere. (Stretching or shrinking of different places is OK.) </p> <p>What I first thought about doing was building a finite rectangular model of the world like a Mercator projection, and tessellating this model infinitely throughout the plane. </p> <p><a href="https://i.stack.imgur.com/bzdjA.png" rel="noreferrer"><img src="https://i.stack.imgur.com/bzdjA.png" alt="enter image description here"></a></p> <p>Someone starting in the US could swim eastwards in a straight line across the Atlantic, walk across Africa and Asia, continue through the Pacific and return to the US. This would certainly create a sense of 3D-ness. However, if you travel north from the North Pole, you would wind up immediately at the South Pole. That wouldn't be right.</p> <p>After thinking about it, I hypothesized that an explorer of this model might conclude that they were walking on a donut-shaped world, since that would be the shape of a map where the left was looped around to the right (making a cylinder), and then the top was looped to the bottom. For some reason, by simply tessellating the map, I was creating a hole in the world.</p> <p>Anyway, to solve this issue, I thought about where one ends up after travelling north from various parts of the world. Going north from Canada, and continuing to go in that direction, you end up in Russia and you face south. The opposite is true as well: going north from Russia, you end up in Canada pointing south. Thus, I started to modify the tessellation to properly connect opposing parts of Earth at the poles. </p> <p>When going north of a map of Earth, the next (duplicate) map would have to be rotated 180 degrees to reflect the fact that one is facing south after traversing the North Pole. 
This was OK. However, to properly connect everything, the map also had to be <em>flipped</em> about the vertical axis. On a globe, if Alice starts east of Bob and they together walk North and cross the North Pole, Alice still remains east of Bob. So, going north from a map, the next map must be flipped to preserve the east/west directions that would have been otherwise rotated into the wrong direction.</p> <p><a href="https://i.stack.imgur.com/U5n9t.png" rel="noreferrer"><img src="https://i.stack.imgur.com/U5n9t.png" alt="enter image description here"></a></p> <p>Now the situation is hopeless. After an explorer walks across the North Pole in this Minecraft world, he finds himself in a mirrored world. If the world were completely flat, it would feel as if walking North will take you from the outside of a 3D object to its inside.</p> <p>Although I now think that it is impossible to trick an explorer walking on infinite plane into thinking he is on a sphere-like world, a part of me remains unconvinced. Is it really impossible? Also, how come a naive tessellation introduces a hole? And finally, if an explorer were to roam the world where crossing a pole flips everything, what would he conclude the shape of the world to be?</p>
Daniel R. Collins
266,243
<p>Not really a full answer, but elaboration on the OP's "I was creating a hole in the world" observation: It's been known since antiquity that one can make a <a href="https://en.wikipedia.org/wiki/Stereographic_projection" rel="nofollow noreferrer">stereographic projection</a> of a plane onto sphere, missing one single point. This is used notably in complex analysis in the construction of the <a href="https://en.wikipedia.org/wiki/Riemann_sphere" rel="nofollow noreferrer">Riemann sphere</a>; by taking the complex plane and adding a single "point at infinity", one has a structure equivalent to a sphere.</p> <p><a href="https://i.stack.imgur.com/ze3RG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ze3RG.png" alt="Riemann sphere"></a></p>
3,599,893
<p>I had this idea to build a model of Earth in Minecraft. In this game, everything is built on a 2D plane of infinite length and width. But I wanted to make a world such that someone exploring it could think that they could possibly be walking on a very large sphere. (Stretching or shrinking of different places is OK.) </p> <p>What I first thought about doing was building a finite rectangular model of the world like a Mercator projection, and tessellating this model infinitely throughout the plane. </p> <p><a href="https://i.stack.imgur.com/bzdjA.png" rel="noreferrer"><img src="https://i.stack.imgur.com/bzdjA.png" alt="enter image description here"></a></p> <p>Someone starting in the US could swim eastwards in a straight line across the Atlantic, walk across Africa and Asia, continue through the Pacific and return to the US. This would certainly create a sense of 3D-ness. However, if you travel north from the North Pole, you would wind up immediately at the South Pole. That wouldn't be right.</p> <p>After thinking about it, I hypothesized that an explorer of this model might conclude that they were walking on a donut-shaped world, since that would be the shape of a map where the left was looped around to the right (making a cylinder), and then the top was looped to the bottom. For some reason, by simply tessellating the map, I was creating a hole in the world.</p> <p>Anyway, to solve this issue, I thought about where one ends up after travelling north from various parts of the world. Going north from Canada, and continuing to go in that direction, you end up in Russia and you face south. The opposite is true as well: going north from Russia, you end up in Canada pointing south. Thus, I started to modify the tessellation to properly connect opposing parts of Earth at the poles. </p> <p>When going north of a map of Earth, the next (duplicate) map would have to be rotated 180 degrees to reflect the fact that one is facing south after traversing the North Pole. 
This was OK. However, to properly connect everything, the map also had to be <em>flipped</em> about the vertical axis. On a globe, if Alice starts east of Bob and they together walk North and cross the North Pole, Alice still remains east of Bob. So, going north from a map, the next map must be flipped to preserve the east/west directions that would have been otherwise rotated into the wrong direction.</p> <p><a href="https://i.stack.imgur.com/U5n9t.png" rel="noreferrer"><img src="https://i.stack.imgur.com/U5n9t.png" alt="enter image description here"></a></p> <p>Now the situation is hopeless. After an explorer walks across the North Pole in this Minecraft world, he finds himself in a mirrored world. If the world were completely flat, it would feel as if walking North will take you from the outside of a 3D object to its inside.</p> <p>Although I now think that it is impossible to trick an explorer walking on infinite plane into thinking he is on a sphere-like world, a part of me remains unconvinced. Is it really impossible? Also, how come a naive tessellation introduces a hole? And finally, if an explorer were to roam the world where crossing a pole flips everything, what would he conclude the shape of the world to be?</p>
Tanner Swett
13,524
<p>I'd like to add another visual which complements <a href="https://math.stackexchange.com/a/3600741/13524">James K's answer</a>.</p> <p>What you need is this:</p> <p><a href="https://i.stack.imgur.com/5MyN4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5MyN4.png" alt="A plain white square pillow. The pillow is made from two squares of fabric whose edges are sewn together."></a></p> <p>Topologically, the above pillow is just a sphere. It would be pretty easy to take a globe and put it on this pillow; you'd need to stretch and shrink things a bit, but not too severely.</p> <p>Once you've done that, it's not too hard to cover the plane with copies of the pillow. Start with an infinite pile of globe pillows, and use a seam ripper to separate each pillow into the top half and the bottom half. Place one top half in the center of an infinite table, then surround it with 4 bottom halves, so that their edges match up the same way as on the original pillow. Then surround these bottom halves with 8 top halves, matched appropriately, and so on forever.</p> <p>As your explorer walks across the quilt that you've created, they will have a very hard time distinguishing the quilt from the original pillow. They'll only notice a difference if they come across one of the corners, and if they're astute enough to somehow notice that, standing at the corner, they are now immediately surrounded by two copies of the world, not one.</p> <p>To see what this would actually look like, look at the image in <a href="https://math.stackexchange.com/a/3600741/13524">James K's answer</a> again. One side of the pillow would have the northern hemisphere on it, and the other side would have the southern hemisphere. The equator would, of course, lie along the seam of the pillow. 
The orientation of the map has been chosen so that one corner is in the middle of the Atlantic Ocean, one is in the middle of the Indian Ocean, and the other two are in the Pacific Ocean.</p> <p>That image shows that if you take two copies of the northern hemisphere and two copies of the southern hemisphere, you can form them into a square which tiles the plane in the usual way. The two copies of each hemisphere are 180 degree rotations of each other.</p>
1,617,239
<blockquote> <p><strong>Problem</strong></p> <p>Find the area of the cone <span class="math-container">$z=\sqrt{2x^2+2y^2}$</span> inscribed in the sphere <span class="math-container">$x^2+y^2+z^2=12^2$</span>.</p> </blockquote> <p>I think I have to solve this via the surface integral</p> <p><span class="math-container">$$\iint_S dS=\iint_T \|\Sigma_u\times \Sigma_v\| \, du \, dv$$</span></p> <p>Where <span class="math-container">$\Sigma$</span> is a parametrization of the cone and <span class="math-container">$T=\operatorname {dom}\Sigma$</span>.</p> <p>Now</p> <p><span class="math-container">$$\Sigma(u,v):=(u,v,\sqrt{2u^2+2v^2})$$</span></p> <p>Should work, but I have to figure out the domain, as <span class="math-container">$z\geq 0$</span> (and <span class="math-container">$0$</span> is achieved by <span class="math-container">$z$</span>), we get that the domain has the form <span class="math-container">$[0,a]\times [0,b]$</span> (as <span class="math-container">$u=v=0$</span> is the only way to get <span class="math-container">$z=0$</span>).</p> <p>But I'm having problems getting the <span class="math-container">$a,b$</span>: If I look for the intersection of the cone and the sphere I get a circle: <span class="math-container">$x^2+y^2=4\cdot 12$</span> and I'm not sure what to do next.</p> <p>Could someone give a thorough explanation of how to solve this problem from the start? I'm pretty lost.</p> <hr /> <p>Also, on this <a href="http://tutorial.math.lamar.edu/Classes/CalcIII/SurfaceArea.aspx" rel="nofollow noreferrer">site</a></p> <p>I found the following formula, if <span class="math-container">$z=f(x,y)$</span></p> <p><span class="math-container">$$S=\iint_D\sqrt{f_x^2+f_y^2+1} \, dA$$</span></p> <p>Then <span class="math-container">$S$</span> is the area of the surface <span class="math-container">$R=\operatorname{Im}_D(f)$</span>.</p> <p>Where does this formula come from? (Does it relate to surface integrals?)</p>
Thomas Rasberry
265,575
<p>You could try to project the (double) cone and the sphere onto the $xz$-plane via substituting $y=0$, leaving you with the question of simply rotating the line $z=\sqrt{2}x$ around the $z$ axis to generate the (double) cone (where would this intersect with the circle the sphere would project on the $xz$-plane?). You can then proceed with the typical surface area of solid of rotation formulas. </p>
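<p>Whichever route you take, the value can be sanity-checked numerically with the surface-integral formula quoted in the question: for this cone the integrand is constant, and the question already derived the intersection circle <code>x^2 + y^2 = 48</code>. A quick midpoint-rule sketch:</p>

```python
import math

# For f(x, y) = sqrt(2x^2 + 2y^2):  f_x^2 + f_y^2 + 1 = 2 + 1 = 3 away from
# the origin, so the surface-area integrand is the constant sqrt(3).
# The cone meets the sphere where x^2 + y^2 = 48 (from the question).
R = math.sqrt(48)

# Midpoint rule in polar coordinates:  S = int_0^{2 pi} int_0^R sqrt(3) r dr dtheta
n = 10_000
dr = R / n
S = 2 * math.pi * math.sqrt(3) * sum((i + 0.5) * dr * dr for i in range(n))

print(S)  # ≈ 261.19, i.e. 48 * sqrt(3) * pi
```
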
636,246
<blockquote> <p>Let $g(x)=x^2\sin(1/x)$ if $x \neq 0$ and $g(0)=0$. If $\{r_i\}$ is an enumeration of all rational numbers in $[0,1]$, define $$ f(x)=\sum_{n=1}^\infty \frac{g(x-r_n)}{n^2} $$ Show that $f:[0,1] \rightarrow \mathbb R$ is differentiable at each point of $[0,1]$ but $f'(x)$ is discontinuous at each $r_n$. Is it possible that the set of all discontinuity points of $f'$ is precisely $\{r_n\}$?</p> </blockquote> <p>I'm not seeing how this function works. I could not even differentiate it. Do I need to fix some $n$ to work with? And then look at the discontinuity of $f'(x)$? Can anyone give me any tips? I do not know how to work with this exercise and do not know where to start.</p>
David Mitra
18,986
<p>A standard result from analysis is: </p> <p>Let $I$ be a bounded interval of $\Bbb R$ and let $(f_n)$ be a sequence of functions on $I$ to $\Bbb R$. Suppose there exists $x_0\in I$ such that $(f_n(x_0))$ converges, and that the sequence $(f_n')$ of derivatives exists on $I$ and converges uniformly on $I$ to a function $g$.</p> <p>Then the sequence $(f_n)$ converges uniformly on $I$ to a function $f$ that has a derivative at every point of $I$ and $f'=g$. (c.f. Bartle and Sherbert, <em>Introduction to Real Analysis</em>, Theorem 8.2.3)</p> <p><br> On to your problem: </p> <p>The salient facts concerning $g$ are:</p> <p>$\ \ \ $1) $g$ is indeed differentiable everywhere (use the definition of derivative for $g'(0)$).</p> <p>$\ \ \ $2) $g'$ is continuous at every $x\ne 0$.</p> <p>$\ \ \ $3) $g'$ is not continuous at $x=0$, as it oscillates between $1$ and $-1$ as $x$ approaches $0$.</p> <p>$\ \ \ $4) $g'$ is bounded.</p> <p><br></p> <p>Warning: a mostly complete solution follows.</p> <p>You can apply the $M$-test to the series $\sum\limits_{n=1}^\infty {g'(x-r_n)\over n^2}$ (use the fact that the family $g'(x-r_n)$ has a common bound) to show that the above result applies to your problem. This will show $f$ is differentiable at every point in $[0,1]$ and that $f'(x)=\sum\limits_{n=1}^\infty {g'(x-r_n)\over n^2}$.</p> <p>To show $f'$ is not continuous at any $r_n$, fix an $r_n$. Choose $N$ so that the tail $\sum\limits_{n=N}^\infty {g'(x-r_n)\over n^2}$ is uniformly small (less than $1/2^{n+1}$, say).</p> <p>Break the sum for $f'$ into three parts: the small tail, the $n^{\rm th}$-term, and the rest. 
Now use the fact that $g'(x-r_n)$ oscillates from $1$ to $-1$ about $r_n$ and that the $g'(x-r_i)$, $i=1,\ldots,n-1, n+1,\ldots, N$ are continuous at $r_n$ (note there is a $\delta&gt;0$ so that $(r_n-\delta, r_n+\delta)$ excludes every $r_i$, $i=1,\ldots, n-1, n+1,\ldots N$) to show that $f'$ oscillates with positive amplitude about $r_n$.</p> <p>For the last part, you may use the following result:</p> <p>Let $(h_n)$ be a sequence of functions defined on an interval $I$ that converges uniformly to a function $h$ on $I$ and let $x_0\in I$. If each $h_n$ is continuous at $x_0$, then $h$ is also continuous at $x_0$.</p> <p>So, here, if $\alpha\in[0,1]$ is irrational, then each $g_n(x)={g'(x-r_n)\over n^2}$ is continuous at $\alpha$. It follows from the above then that $f'$ is continuous at $\alpha$.</p>
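<p>The behaviour of $g'$ that drives the whole argument is easy to see numerically (a small illustration, not part of the proof):</p>

```python
import math

def g_prime(x):
    # derivative of g(x) = x^2 sin(1/x) for x != 0; g'(0) = 0 by definition
    if x == 0.0:
        return 0.0
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

# Along x_k = 1/(k*pi) we have cos(1/x) = (-1)^k while 2x sin(1/x) -> 0,
# so g' oscillates between values near +1 and -1 as x -> 0.
vals = [g_prime(1 / (k * math.pi)) for k in range(1, 8)]
print(vals)  # alternates: ≈ +1, -1, +1, ...
```
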
10,942
<p>I heard about it sometime somewhere and want to read about it now, but I can't recall what the name is:</p> <p>Start with $a_1 = \ldots =a_n=1$. Choose a number $i$ between 1 and $n$, where $i$ is chosen with probability $a_i/(a_1+ \ldots + a_n)$. If $i_0$ is the number chosen, increase $a_{i_0}$ by 1, then choose another number in the same way, and so on indefinitely.</p>
Wok
2,380
<p>Thanks for the pointer, Shai Covo. The name of this specific process is <a href="http://en.wikipedia.org/wiki/Preferential_attachment" rel="nofollow">preferential attachment</a>.</p>
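<p>For the record, the urn process from the question is a few lines to simulate (a sketch; parameters are arbitrary):</p>

```python
import random

def preferential_attachment(n, steps, seed=0):
    """Start with a_1 = ... = a_n = 1; repeatedly pick index i with
    probability a_i / sum(a) and increment a_i."""
    rng = random.Random(seed)
    a = [1] * n
    for _ in range(steps):
        i = rng.choices(range(n), weights=a)[0]
        a[i] += 1
    return a

counts = preferential_attachment(5, 1000)
print(counts)  # the counts sum to 5 + 1000 = 1005
```

<p>Runs of this quickly show the "rich get richer" behaviour: one index tends to absorb a disproportionate share of the increments.</p>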
302,061
<p>Can you say how to find the number of non-abelian groups of order $n$?</p> <p>Suppose $n$ is 24; then from the structure theorem of finite abelian groups we know that there are 3 abelian groups. But what can you say about the number of non-abelian groups of order 24?</p> <p>The following link is a list of the number of groups of order $n$: <a href="http://oeis.org/wiki/Number_of_groups_of_order_n" rel="nofollow">http://oeis.org/wiki/Number_of_groups_of_order_n</a> . But here also they do not mention how to find the number of non-abelian groups of order $n$.</p>
DonAntonio
31,254
<p>It can be pretty messy to do this, but using semidirect products you can get quite some answers.</p> <p>In your example, $\,24=2^3\cdot 3\,$ and from the Sylow Theorems and some element counting either the Sylow $\,2-$ subgroup or the $\,3-$subgroup <em>must</em> be normal. From here you obtain some action by automorphisms of <em>the other</em> Sylow subgroup (i.e., the one that's not necessarily normal) on the normal one (i.e., a homomorphism to the normal one's automorphism group), and from here we can construct semidirect products.</p> <p>Of course, the above cannot properly be developed on this site and by this means, so you better grab some good group theory books and read it there.</p>
152,620
<p>The following question is from Golan's linear algebra book. I have posted a solution in the answers. </p> <p><strong>Problem:</strong> Let $F$ be a field and let $V$ be the vector subspace of $F[x]$ consisting of all polynomials of degree at most 2. Let $\alpha:V\rightarrow F[x]$ be a linear transformation satisfying</p> <p>$\alpha(1)=x$</p> <p>$\alpha(x+1)=x^5+x^3$</p> <p>$\alpha(x^2+x+1)=x^4-x^2+1$.</p> <p>Determine $\alpha(x^2-x)$.</p>
André Nicolas
6,312
<p>Your calculation looks fine. Equivalently, but with rather less work, note that $$x^2-x=(x^2+x+1)-2(x+1)+1,$$ so we can compute $\alpha(x^2-x)$ directly by linearity, without first computing $\alpha$ at the standard basis elements.</p>
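<p>The linearity computation can also be checked mechanically; here is a small sketch in Python, with polynomials represented as coefficient lists (index = degree):</p>

```python
# Polynomials as coefficient lists, index = degree.
def add(p, q, scale=1):
    """Return p + scale*q, padding to the longer length."""
    r = [0] * max(len(p), len(q))
    for i, c in enumerate(p):
        r[i] += c
    for i, c in enumerate(q):
        r[i] += scale * c
    return r

alpha_1 = [0, 1]                # alpha(1)       = x
alpha_x1 = [0, 0, 0, 1, 0, 1]   # alpha(x+1)     = x^5 + x^3
alpha_x2x1 = [1, 0, -1, 0, 1]   # alpha(x^2+x+1) = x^4 - x^2 + 1

# x^2 - x = (x^2+x+1) - 2(x+1) + 1, so by linearity:
res = add(add(alpha_x2x1, alpha_x1, scale=-2), alpha_1)
print(res)  # [1, 1, -1, -2, 1, -2], i.e. -2x^5 + x^4 - 2x^3 - x^2 + x + 1
```
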
56,082
<p>Suppose I have a nested list such as,</p> <pre><code>{{{A, B}, {A, D}}, {{C, D}, {A, A}, {H, A}}, {{A, H}}} </code></pre> <p>Where the elements of interest are,</p> <blockquote> <pre><code>{{A, B}, {A, D}} {{C, D}, {A, A}, {H, A}} {{A, H}} </code></pre> </blockquote> <p>How would I use select to pick up only elements that contain two or more <code>A</code>s in the first part of their sub-elements. In this example I would want the following as an output,</p> <blockquote> <pre><code>{{A,B},{A,D}} </code></pre> </blockquote>
Chris Degnen
363
<pre><code>list = {{{A, B}, {A, D}}, {{C, D}, {A, A}, {H, A}}, {{A, H}}}; If[Count[First /@ #, A] &gt;= 2, #, ## &amp;[]] &amp; /@ list </code></pre> <blockquote> <p>{{{A, B}, {A, D}}}</p> </blockquote>
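Not Mathematica, but for readers who want to cross-check the selection logic, here is the same filter as a hypothetical Python sketch (the symbols are replaced by strings):

```python
data = [[("A", "B"), ("A", "D")],
        [("C", "D"), ("A", "A"), ("H", "A")],
        [("A", "H")]]

# keep sublists whose pairs have "A" as first element at least twice
selected = [grp for grp in data
            if sum(1 for first, _ in grp if first == "A") >= 2]
```

Only the first sublist qualifies, matching the Mathematica output above.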
2,994,970
<p>As far as I know, the notion of "a function on the circle" can be understood in either of the following (equivalent) ways:</p> <ol> <li>A function defined on <span class="math-container">$\mathbb{R}$</span> that is <span class="math-container">$2\pi$</span>-periodic.</li> <li>A function defined on <span class="math-container">$[a,b]$</span> with <span class="math-container">$b-a=2\pi$</span> and <span class="math-container">$f(a)=f(b)$</span>, so that we can extend it to a <span class="math-container">$2\pi$</span>-periodic function.</li> </ol> <p>My problem is that I get confused every time the author (of the book I have been reading) uses the phrase "integrable on the circle". By "a function that is integrable on the circle", should I understand:</p> <ol> <li>A function that is integrable on every interval of length <span class="math-container">$2\pi$</span>, as in <a href="http://math.uchicago.edu/~may/REU2017/REUPapers/Xue.pdf" rel="nofollow noreferrer">http://math.uchicago.edu/~may/REU2017/REUPapers/Xue.pdf</a> (page 1), or</li> <li>A function that first has to be a function on the circle, and is then integrable on some interval of length <span class="math-container">$2\pi$</span> (because the integral then takes the same value over any interval of length <span class="math-container">$2\pi$</span>), <a href="https://i.stack.imgur.com/w7dKa.png" rel="nofollow noreferrer">like this</a>?</li> </ol> <p>I ask because of <a href="https://i.stack.imgur.com/2n0qT.png" rel="nofollow noreferrer">passage 1</a> and <a href="https://i.stack.imgur.com/3MZH4.png" rel="nofollow noreferrer">passage 2</a>. So do I need a function taking the same value at the endpoints of every interval of length <span class="math-container">$2\pi$</span>? By the way, the author treats the integral in the Riemann sense, if it helps.</p>
Hans Lundmark
1,242
<p>Actually the two conditions are equivalent (unless you require the functions to be continuous).</p> <p>In the second case, you could equally well say that <span class="math-container">$f$</span> should be defined on <span class="math-container">$[a,b)$</span> to begin with, and forget about the requirement <span class="math-container">$f(a)=f(b)$</span>, because it will be satisfied by definition when you extend the function to a <span class="math-container">$2\pi$</span>-periodic function on <span class="math-container">$\mathbb{R}$</span>. And then it should be clear that this is the same thing as the first case.</p> <p>But to require a function to be “continuous on the circle” means that the <span class="math-container">$2\pi$</span>-periodic extension must be continuous on <span class="math-container">$\mathbb{R}$</span>. This is of course not necessarily true if you extend a function which is continuous on <span class="math-container">$[a,b)$</span>. But if <span class="math-container">$f$</span> is continuous on <span class="math-container">$[a,b]$</span> and satisfies <span class="math-container">$f(a)=f(b)$</span>, then the extension will be continuous (and conversely).</p> <p>Anyway, the value at a single point doesn't affect integrals, and a function need not be continuous in order to be integrable or to have a Fourier series, so perhaps one shouldn't worry too much about continuity when only talking about what “a function on the circle” means. Of course, when studying convergence of the Fourier series, it's interesting to talk about continuity, but that's a later question.</p>
3,070,258
<p>In a partial differential equations course (the Laplace equation), I am trying to solve a Laplace-equation problem using the separation of variables method.</p> <p>I usually use the rule: if <span class="math-container">$e^{2 \sqrt{k} b} = 1$</span>, then <span class="math-container">$2\sqrt{k} b = 2ni\pi$</span>. </p> <p>Now in my problem I have <span class="math-container">$e^{2 \sqrt{k}\pi} = 1$</span>. Can I use the same rule, which would lead to cancelling the <span class="math-container">$\pi$</span>?</p>
user3482749
226,174
<p>Yes, indeed: the result that you quote (after correcting the missing/surplus <span class="math-container">$2$</span>) is true for all complex values of <span class="math-container">$b$</span>, and so, if <span class="math-container">$e^{2\sqrt{k}\pi} = 1$</span>, then you have <span class="math-container">$2\sqrt{k}\pi = 2ni\pi$</span>, and hence <span class="math-container">$\sqrt{k} = ni$</span>, for some integer <span class="math-container">$n$</span>. </p>
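A quick numeric sanity check of this (my addition, not part of the answer): with $\sqrt{k}=ni$, i.e. $k=-n^2$, the exponential is indeed $1$:

```python
import cmath

# with sqrt(k) = n*i (i.e. k = -n^2), e^{2 sqrt(k) pi} = e^{2 pi n i} = 1
values = [cmath.exp(2 * complex(0, n) * cmath.pi) for n in range(1, 6)]
```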
231,479
<p>Is there a function that can create hexagonal grid?</p> <p>We have square grid graph, where we can specify <code>m*n</code> dimensions:</p> <pre><code>GridGraph[{m, n}] </code></pre> <p>We have triangular grid graph (which works only for argument <code>n</code> up to 10 - for unknown reason):</p> <pre><code>GraphData[{&quot;TriangularGrid&quot;, n}, &quot;Graph&quot;] </code></pre> <p>I can not find a function that would generate a hexagonal grid graph. I would like it like it is with <code>GridGraph</code> something like <code>HexagonalGridGraph[{m,n,o}]</code> where <code>m,n,o</code> are dimensions <code>m*n*o</code> of planar graph - or other way said - &quot;lengths&quot; of the sides of the graph.</p> <p>I can make my own code, I am asking just in case there already exist implemented function.</p> <p><strong>UPDATE:</strong></p> <p>What I mean by <code>m*n*o</code> hexagonal grid is for example this <code>3*5*7</code> hexagonal grid:</p> <p><a href="https://i.stack.imgur.com/r8yTS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/r8yTS.png" alt="enter image description here" /></a></p> <p>My code for producing it is very long and cumbersome so I will not upload it unless I can make it simpler.</p>
azerbajdzan
53,172
<p>Here is my generalization of the code from link provided by @LouisB:</p> <pre><code>HexagonalGridGraph2[{wide1_Integer?Positive, wide2_Integer?Positive, wide3_Integer?Positive}, opts : OptionsPattern[Graph]] := Module[{cells, edges, vertices}, cells = Flatten[Table[ CirclePoints[{Sqrt[3] (1 j + k - 2 ) + Sqrt[3] (1 j + l - 2 ), 3 k - 2 - 3 l}, {2, \[Pi]/2}, 6], {j, wide1}, {k, wide2}, {l, wide3}], 2]; edges = Union[Sort /@ Flatten[Partition[#, 2, 1, 1] &amp; /@ cells, 1]]; vertices = Union[Flatten[edges, 1]]; IndexGraph[ Graph[UndirectedEdge @@@ edges, opts, VertexCoordinates -&gt; Thread[vertices -&gt; vertices]]]] </code></pre> <p>And here are some examples:</p> <pre><code>Sort /@ Tuples[Range[4], {3}] // Union; Partition[ Rasterize /@ (HexagonalGridGraph2[#, PlotLabel -&gt; #, ImageSize -&gt; {100, 100}] &amp; /@ %), 5]; ImageAssemble[%] </code></pre> <p><a href="https://i.stack.imgur.com/WZyM0.png" rel="noreferrer"><img src="https://i.stack.imgur.com/WZyM0.png" alt="enter image description here" /></a></p>
231,479
<p>Is there a function that can create hexagonal grid?</p> <p>We have square grid graph, where we can specify <code>m*n</code> dimensions:</p> <pre><code>GridGraph[{m, n}] </code></pre> <p>We have triangular grid graph (which works only for argument <code>n</code> up to 10 - for unknown reason):</p> <pre><code>GraphData[{&quot;TriangularGrid&quot;, n}, &quot;Graph&quot;] </code></pre> <p>I can not find a function that would generate a hexagonal grid graph. I would like it like it is with <code>GridGraph</code> something like <code>HexagonalGridGraph[{m,n,o}]</code> where <code>m,n,o</code> are dimensions <code>m*n*o</code> of planar graph - or other way said - &quot;lengths&quot; of the sides of the graph.</p> <p>I can make my own code, I am asking just in case there already exist implemented function.</p> <p><strong>UPDATE:</strong></p> <p>What I mean by <code>m*n*o</code> hexagonal grid is for example this <code>3*5*7</code> hexagonal grid:</p> <p><a href="https://i.stack.imgur.com/r8yTS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/r8yTS.png" alt="enter image description here" /></a></p> <p>My code for producing it is very long and cumbersome so I will not upload it unless I can make it simpler.</p>
Anton Antonov
34,008
<p>There is a resource function that makes hexagonal graphs: <a href="https://resources.wolframcloud.com/FunctionRepository/resources/HexagonalGridGraph" rel="nofollow noreferrer"><code>HexagonalGridGraph</code></a>. (Contributed by WRI.)</p> <p><a href="https://i.stack.imgur.com/gGunX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gGunX.png" alt="enter image description here" /></a></p>
4,575,771
<p>I need to show that <span class="math-container">$\int_0^1 (1+t^2)^{\frac 7 2} dt &lt; \frac 7 2 $</span>. I've checked numerically that this is true, but I haven't been able to prove it.</p> <p>I've tried trigonometric substitutions. Let <span class="math-container">$\tan u= t$</span>, so <span class="math-container">$t=1$</span> corresponds to <span class="math-container">$u=\frac\pi4$</span>:</p> <p><span class="math-container">$$\int_0^1 (1+t^2)^{\frac 7 2} dt = \int_0^{\frac\pi4} (1+\tan^2 u )^{\frac 9 2} du = \int_0^{\frac\pi4} \sec^9 u \ du = \int_0^{\frac\pi4} \sec^{10} u \cos u\ du = \int_0^{\frac\pi4} \frac {\cos u}{(1-\sin^2 u)^5} du$$</span> Now let <span class="math-container">$\sin u = w$</span>. Then: <span class="math-container">$$\int_0^{\frac\pi4} \frac {\cos u}{(1-\sin^2 u)^5} du = \int_0^{\frac{\sqrt 2}{2}} \frac {1}{(1-w^2)^5} dw.$$</span> This last integral is solvable using partial fraction decomposition, but even after going through all the work required I'm not really sure how to compare the result with <span class="math-container">$\frac 7 2$</span>.</p>
River Li
584,414
<p><em>Alternative proof</em>:</p> <p>Clearly, we have <span class="math-container">$\sqrt{1 + t^2} \le 1 + t^2/2$</span>. Thus, we have <span class="math-container">$$(1 + t^2)^{7/2} \le (1 + t^2)^3 (1 + t^2/2) = \frac12t^8 + \frac52 t^6 + \frac92 t^4 + \frac72 t^2 + 1.$$</span></p> <p>Thus, we have <span class="math-container">\begin{align*} \int_0^1 (1 + t^2)^{7/2} \, \mathrm{d} t &amp;\le \int_0^1 \left( \frac12t^8 + \frac52 t^6 + \frac92 t^4 + \frac72 t^2 + 1\right)\,\mathrm{d} t \\ &amp;= \frac{1}{18} + \frac{5}{14} + \frac{9}{10} + \frac{7}{6} + 1\\ &amp; &lt; \frac72. \end{align*}</span></p> <p>We are done.</p>
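A numeric cross-check of the bound (my addition): the exact value of the right-hand side is $\frac{1}{18}+\frac{5}{14}+\frac{9}{10}+\frac{7}{6}+1=\frac{1096}{315}\approx 3.479<\frac72$, and a midpoint-rule approximation of the integral sits below it (midpoint underestimates a convex integrand):

```python
from fractions import Fraction

# exact value of the term-by-term upper bound
bound = Fraction(1, 18) + Fraction(5, 14) + Fraction(9, 10) + Fraction(7, 6) + 1

# midpoint-rule approximation of the integral of (1 + t^2)^(7/2) on [0, 1]
n = 100_000
integral = sum((1 + ((i + 0.5) / n) ** 2) ** 3.5 for i in range(n)) / n
```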
4,237,342
<p>I am a researcher and encountered the following challenging function in my work:</p> <p><span class="math-container">$$f(S)=\sum_{k=1}^{S-1}(\ln (S)-\ln (k))^2 \bigg [ \frac{1}{(S-k)^2}+\frac{1}{(S+k)^2} \bigg ]$$</span></p> <p>I am only interested in the first term of the asymptotic expansion of this function as <span class="math-container">$S\to+\infty$</span>. Matlab simulations suggest that it is asymptotically equivalent to:</p> <p><span class="math-container">$$\frac{a}{S}$$</span></p> <p>In other words, simply a positive scalar divided by the parameter <span class="math-container">$S$</span>.</p> <p>Do you have any idea how to compute this term?</p> <p>Thank you so much.</p>
user247327
247,327
<p>Evaluate the first term, where <span class="math-container">$k=1$</span>. That should be easy: <span class="math-container">$(\ln(S)- \ln(1))^2\left[\frac{1}{(S-1)^2}+ \frac{1}{(S+1)^2}\right]$</span> <span class="math-container">$= (\ln(S))^2\left[\frac{(S+1)^2+ (S-1)^2}{(S+1)^2(S-1)^2}\right]$</span> <span class="math-container">$= (\ln(S))^2\left[\frac{2S^2+2}{(S+1)^2(S-1)^2}\right]$</span> <span class="math-container">$= \frac{2(S^2+1)(\ln(S))^2}{(S^2-1)^2}$</span></p> <p>You can multiply out the denominator if you like but that is not really useful.</p>
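Beyond the single term, here is a quick numeric check (my addition, and only a heuristic) of the question's claim that $f(S)\approx a/S$. A Riemann-sum argument suggests $a=\int_0^1(\ln u)^2\big[(1-u)^{-2}+(1+u)^{-2}\big]\,du=\frac{\pi^2}{3}+\frac{\pi^2}{6}=\frac{\pi^2}{2}\approx 4.93$, but treat that value as an unverified guess; the code only checks that $S\,f(S)$ stabilizes:

```python
import math

def f(S):
    """Direct evaluation of the question's sum."""
    return sum((math.log(S) - math.log(k)) ** 2
               * (1 / (S - k) ** 2 + 1 / (S + k) ** 2)
               for k in range(1, S))

# if f(S) ~ a/S, then S*f(S) should hover near the constant a
scaled = {S: S * f(S) for S in (500, 1000, 2000)}
```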
2,232,060
<p>$f(x) = \sqrt[3]{1+ \sqrt[3]x}$ </p> <p>I have to compute the first- and second-order derivatives.</p> <p>$f'(x) = \frac{1}{9x^\frac 23(1+x^\frac 13)^\frac 23}$ is what I get after differentiating once. </p> <p>Now the teacher's assistant is doing $some$ $magic$ by setting </p> <p>$f(u) = \frac{1}{u^\frac 23}$</p> <p>$u=g(x)=x(1+{x^\frac 13}) = x + x^\frac 43$ &lt;- where did she get that first $x$ from? To me it doesn't make sense, since she already has the $\frac{1}{9x^\frac 23}$ as part of $u$, so the $x$ goes there. </p> <p>$f''(x)= \frac{1}{9} \cdot f'(u) \cdot g'(x)$ &lt;- is how the second differentiation continues. Can someone please explain how you would get the second-order derivative?</p> <p>Is there some kind of rule I am missing? </p>
Fabian Schn.
432,082
<p>If I get you right, we have to find $n,a,b$ so that the conditions </p> <ul> <li>$12 \mid a+b $</li> <li>$n \mid ab $</li> <li>$12 \mid a $</li> <li>$12 \mid b $</li> </ul> <p>hold true.</p> <p>Well, if you are searching for the smallest possible values, then you might be able to simply check the first numbers with a simple, however inefficient, computer program. This is a Python example:</p> <pre><code>import itertools

for i in itertools.product(range(1, 100), range(1, 100), range(1, 100)):
    n, a, b = i
    if ((a + b) % 12 == 0) and ((a * b) % n == 0):
        if (a % 12 == 0) and (b % 12 == 0):
            print(n, a, b)
</code></pre> <p>The results are:</p> <ul> <li>1 12 12</li> <li>1 12 24</li> <li>1 12 36</li> <li>1 12 48</li> <li>1 12 60</li> <li>1 12 72</li> <li>...</li> </ul>
4,062,987
<p>I was reading this question here: <a href="https://math.stackexchange.com/questions/1230688/what-are-the-semisimple-mathbbz-modules">What are the semisimple $\mathbb{Z}$-modules?</a> and I understood everything except why we need <span class="math-container">$\alpha_p$</span> copies here <span class="math-container">$$ \bigoplus_{\text{$p$ prime}}(\mathbb{Z}/p\mathbb{Z})^{(\alpha_p)} $$</span> And not just <span class="math-container">$$ \bigoplus_{\text{$p$ prime}}\mathbb{Z}/p\mathbb{Z} $$</span></p> <p>Could anyone explain this to me please?</p>
PierreCarre
639,238
<p>You can start by noting that the sequence is decreasing. In fact, <span class="math-container">$$ \dfrac{a_{n+1}}{a_n} = \dfrac{2n+1}{2n+2} &lt; 1 \Rightarrow a_{n+1}&lt; a_n. $$</span></p> <p>Also, the sequence is clearly bounded. Since it is decreasing and positive, we have that <span class="math-container">$0 &lt; a_n \leq a_1$</span>.</p> <p>Finally, being monotonic and bounded, the sequence is convergent.</p>
1,936,043
<p>I would like to prove that the sequence $n^{(-1)^{n}}$ is divergent. </p> <p>My thoughts: I know $(-1)^n$ is divergent, but is $n$ raised to the power of a divergent sequence necessarily divergent? I am not sure how to give a proper proof. Please help!</p>
Dr. Sonnhard Graubner
175,066
<p>Note that $$(-1)^{n}=-1$$ if $n$ is odd and $$(-1)^{n}=1$$ if $n$ is even. So the odd-indexed subsequence is $n^{-1}=\frac1n\to 0$, while the even-indexed subsequence is $n^{1}=n\to\infty$. A sequence with two subsequences of different limiting behaviour cannot converge, so the sequence diverges.</p>
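A quick numeric illustration of the two subsequences (my addition):

```python
# n^{(-1)^n}: odd n gives 1/n, even n gives n
odd_terms = [n ** ((-1) ** n) for n in (1, 3, 101, 1001)]    # tends to 0
even_terms = [n ** ((-1) ** n) for n in (2, 4, 100, 1000)]   # tends to infinity
```

The odd terms shrink toward $0$ while the even terms grow without bound, so no single limit exists.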
2,130,836
<p>My question is really simple: </p> <p>Let $E$ be a vector space and $A_r(E)$ be the vector space of the alternating $r$-linear maps $\varphi:E\times\ldots \times E\to \mathbb R$. If $v_1,\ldots,v_r$ are linearly independent vectors. Can we get $\omega\in A_r(E)$ such that $\omega(v_1\ldots,v_r)\neq 0$? Is the converse true?</p>
WafflesTasty
70,877
<p>I was looking for this as well, and eventually figured it out myself. So here's my solution for future reference. The short answer is, <span class="math-container">$2^n - 1$</span> never divides <span class="math-container">$3^n - 1$</span>. Here's the proof, making use of the Jacobi symbol.</p> <p>Assume <span class="math-container">$2^n - 1 \mid 3^n - 1$</span>. If <span class="math-container">$n = 2k$</span> is even, then <span class="math-container">$2^n - 1 = 4^k - 1 \equiv 0 \bmod 3$</span>. Consequently, <span class="math-container">$3$</span> must also divide <span class="math-container">$3^n - 1$</span>, which is a contradiction. At the very least, we can already assume <span class="math-container">$n = 2k + 1$</span> is odd. Next, since <span class="math-container">$3^n \equiv 1 \bmod 2^n - 1$</span>, from the properties of the Jacobi-symbol it follows that </p> <p><span class="math-container">\begin{equation} 1 = (\frac{1}{2^n - 1}) = (\frac{3^n}{2^n - 1}) = (\frac{3^{2k}}{2^n - 1}) \cdot (\frac{3}{2^n - 1}) = (\frac{3}{2^n - 1}) \end{equation}</span></p> <p>However, using Jacobi's law of reciprocity we also know</p> <p><span class="math-container">\begin{equation} (\frac{2^n - 1}{3}) = (\frac{3}{2^n - 1}) \cdot (\frac{2^n - 1}{3}) = (-1)^{\frac{3 - 1}{2}\frac{2^n - 2}{2}} = (-1)^{2^{n - 1} - 1} = -1 \end{equation}</span></p> <p>The only quadratic non-residue <span class="math-container">$\bmod 3$</span> is <span class="math-container">$2$</span>, therefore <span class="math-container">$2^n - 1 \equiv 2 \bmod 3$</span> or alternatively <span class="math-container">$2^n \equiv 0 \bmod 3$</span>. Since this implies <span class="math-container">$3$</span> divides <span class="math-container">$2^n$</span>, we again arrive at a contradiction.</p>
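A brute-force confirmation for small exponents (my addition; note $n=1$ is the trivial exception, since $2^1-1=1$ divides everything, and the proof's sign computation only bites for $n\ge 2$):

```python
# check that 2^n - 1 never divides 3^n - 1 for 2 <= n <= 200
counterexamples = [n for n in range(2, 201) if (3 ** n - 1) % (2 ** n - 1) == 0]
```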
63,671
<p>The problem is:</p> <p>For infinite independent Bernoulli trials, prove that the total number of successful trials $N$ have the following property:</p> <p>$$ [N &lt; \infty] = \bigcup\limits_{n=1}^{\infty}\,[N \le n] $$</p> <p>Actually this is just part of bigger problem in a book, and the equation is given as an obvious fact and as a hint without any explanation. What does the equation exactly mean? I guess square brace means set, but what's the definition of $[N &lt; \infty]$?</p>
Pablo
15,860
<p>Maybe we don't have to do thermal average of n and we need : $$n = \frac{1}{{z^{ - 1} e^{\varepsilon /kT} - 1}} = \sum\limits_{n = 1}^\infty {\left( {ze^{ - \varepsilon /kT} } \right)^l } $$ so: $$\sum\limits_{} {&#39;... = \sum\limits_{i,j,k = 1}^\infty {\sum\limits_{} &#39; } } z^{ijk} \frac{{e^{ - \frac{{\hbar ^2 k_a^2j }}{{2m}}} e^{ - \frac{{\hbar ^2 k_\gamma ^2k }}{{2m}}} e^{ - \frac{{\hbar ^2 k_\lambda ^2l }}{{2m}}} }}{{\frac{1}{2}\left( {k_a^2 + k_b^2 - k_\gamma ^2 - k_\lambda ^2 } \right)}} $$ This explain the factor $z^{ijk}$ and the sum. How can we continue?Now we have to transform the sum' in a integral. The density of state depends of $kdk$ so i think we have to evaluate: $$\int\limits_0^\infty {\frac{{e^{ - \frac{{\hbar ^2 k_a^2j }}{{2m}}} e^{ - \frac{{\hbar ^2 k_\gamma ^2k }}{{2m}}} e^{ - \frac{{\hbar ^2 k_\lambda ^2l }}{{2m}}} }}{{\frac{1}{2}\left( {k_a^2 + k_b^2 - k_\gamma ^2 - k_\lambda ^2 } \right)}}k_a k_b k_\gamma k_\lambda } dk_a dk_b dk_\gamma dk_\lambda $$ We have to evaluate two integrals, i think.</p>
63,671
<p>The problem is:</p> <p>For infinite independent Bernoulli trials, prove that the total number of successful trials $N$ have the following property:</p> <p>$$ [N &lt; \infty] = \bigcup\limits_{n=1}^{\infty}\,[N \le n] $$</p> <p>Actually this is just part of bigger problem in a book, and the equation is given as an obvious fact and as a hint without any explanation. What does the equation exactly mean? I guess square brace means set, but what's the definition of $[N &lt; \infty]$?</p>
Pablo
15,860
<p>I have tried to solve this integral: $$F\left( {a,b,c,d} \right) = \int {\frac{{e^{ - \frac{{\hbar ^2 }}{{2m}}\left( {ak_a^2 + bk_b^2 + ck_\gamma ^2 + dk_\lambda ^2 } \right)} }}{{\left( {k_a^2 + k_b^2 - k_\gamma ^2 - k_\lambda ^2 } \right)}}k_a k_b k_\gamma k_\lambda dk_a dk_b dk_\gamma dk_\lambda } $$ from which we can calculate our integral putting $b=0$. We have: $$\frac{{\partial F}}{{\partial a}} + \frac{{\partial F}}{{\partial b}} - \frac{{\partial F}}{{\partial c}} - \frac{{\partial F}}{{\partial d}} \propto \frac{1}{{abcd}} $$ Now we put: $$i=a+b+c+d$$ $$l=a+b-c-d$$ $$m=a-b$$ $$n=c-d$$ So: $$\frac{\partial }{{\partial a}} = \frac{\partial }{{\partial i}} + \frac{\partial }{{\partial l}} + \frac{\partial }{{\partial m}} $$ $$\frac{\partial }{{\partial b}} = \frac{\partial }{{\partial i}} + \frac{\partial }{{\partial l}} - \frac{\partial }{{\partial m}} $$ $$\frac{\partial }{{\partial c}} = \frac{\partial }{{\partial i}} - \frac{\partial }{{\partial l}} + \frac{\partial }{{\partial n}} $$ $$\frac{\partial }{{\partial d}} = \frac{\partial }{{\partial i}} - \frac{\partial }{{\partial l}} - \frac{\partial }{{\partial n}} $$ $$\frac{\partial }{{\partial a}} + \frac{\partial }{{\partial b}} - \frac{\partial }{{\partial c}} - \frac{\partial }{{\partial d}} = 4\frac{\partial }{{\partial l}} $$ and: $$a=(i+l+2m)/4$$ $$b=(i+l-2m)/4$$ $$c=(i-l+2n)/4$$ $$d=(i-l-2n)/4$$ and the equation becomes: $$\frac{{\partial F&#39;}}{{\partial l}} \propto \frac{1}{{\left( {\frac{{i + l}}{2} + m} \right)\left( {\frac{{i + l}}{2} - m} \right)\left( {\frac{{i - l}}{2} + n} \right)\left( {\frac{{i - l}}{2} - n} \right)}} $$ The integration can be evaluated with derive, and the substitutions gives an expression divergent if we put $b=0$. What is wrong?</p>
2,868,047
<p>My question is in relation to a problem I am trying to solve <a href="https://math.stackexchange.com/questions/2867002/finding-mathbbpygx">here</a>. If $g(.)$ is a monotonically increasing function and $a &lt;b$, is it always true that $a&lt;g(a)&lt;g(b)&lt;b$? Why or why not?</p>
Jaroslaw Matlak
389,592
<p><strong>Hint</strong></p> <p>You can compute the probability of not getting a triple. The number $f_n$ of possibilities of not getting triples in $n$ throws is: $$f_1 = \binom{6}{1}\\ f_2 = \binom{6}{2}2!+\binom{6}{1}\\ f_3=\binom{6}{3}3!+\binom{6}{1}\binom{5}{1}\frac{3!}{2!}\\ f_4=\binom{6}{4}4!+\binom{6}{1}\binom{5}{2}\frac{4!}{2!}+\binom{6}{2}\frac{4!}{2!2!}\\ f_5=\binom{6}{5}5!+\binom{6}{1}\binom{5}{3}\frac{5!}{2!}+\binom{6}{2}\binom{4}{1}\frac{5!}{2!2!}\\ f_6=\binom{6}{6}6!+\binom{6}{1}\binom{5}{4}\frac{6!}{2!}+\binom{6}{2}\binom{4}{2}\frac{6!}{2!2!}+\binom{6}{3}\frac{6!}{2!2!2!}\\ f_7=\binom{6}{1}\frac{7!}{2!}+\binom{6}{2}\binom{4}{3}\frac{7!}{2!2!}+\binom{6}{3}\binom{3}{1}\frac{7!}{2!2!2!}\\ f_8=\binom{6}{2}\frac{8!}{2!2!}+\binom{6}{3}\binom{3}{2}\frac{8!}{2!2!2!}+\binom{6}{4}\frac{8!}{2!2!2!2!}\\ f_9=\binom{6}{3}\frac{9!}{2!2!2!}+\binom{6}{4}\binom{2}{1}\frac{9!}{2!2!2!2!}\\ f_{10} = \binom{6}{4}\frac{10!}{2!2!2!2!}+\binom{6}{5}\frac{10!}{2!2!2!2!2!}\\ f_{11}=\binom{6}{5}\frac{11!}{2!2!2!2!2!}\\ f_{12}=\binom{6}{6}\frac{12!}{2!2!2!2!2!2!}$$ For $k\geq 13$ we have $f_k=0$ (at most two copies of each of the six values fit in a triple-free sequence, so its length is at most 12).</p> <p>The number of all possible throws $\omega_n$ is: $$\omega_n=6^n$$ Now you can compute the probability of not getting a triple: $$p_n=\frac{f_n}{\omega_n}$$ and the probability of getting a triple: $$q_n=1-p_n$$</p>
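The hint's formulas can be verified by brute force for small $n$ — here is a sketch (my addition) that counts sequences of $n$ die rolls in which no value appears three or more times:

```python
from collections import Counter
from itertools import product

def no_triple_count(n):
    """Count length-n sequences over 6 faces where every face appears at most twice."""
    return sum(1 for seq in product(range(6), repeat=n)
               if max(Counter(seq).values()) <= 2)

brute = [no_triple_count(n) for n in range(1, 6)]
formula = [6, 36, 210, 1170, 6120]   # f_1 .. f_5 evaluated from the hint's formulas
```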
451,063
<p>Alright so I am having the following issue: I want to figure out how to find the fourier coefficients of the following function: $$D(X)=\frac {a'(x)} {1+a'(x)^2}$$</p> <p>Where $a(x)$ is an arbitrary function. I already have a model for finding the fourier coefficients for $a(x)$ and $a'(x)$:</p> <pre><code>fc = fft(a) / Nfft; fc = fftshift(fc); % fft of a(x) fc = conj(fc); % sign correction aprimec = -i * [0:Dim2-1] .* fc; % fc of derivative (definition) </code></pre> <p>The equation I am given to use is: $$f_m=\frac 1 N \sum^Nf_ie^{+im2\pi x}$$</p> <p>Which confuses me because of the $f_i$. So does any one have any suggestions? </p> <p>Additionally, I do not know how to define d</p> <pre><code>d = diff(a)/(1+diff(a)^2); </code></pre> <p>I do not think that this would work because doesn't <code>diff(x)</code> just take the difference between two consecutive components in the vector?</p> <p>I would greatly appreciate any help. Thanks!</p>
AlexR
86,940
<p>You can try to approximate $D$ using the <code>ifft</code> of $a'$, additionally your formula seems to be $$ \hat{f}_m = \frac{1}{N} \sum_{j=0}^{N-1} f_j \omega_N^{mj} $$ where $\omega_N = \exp(\frac{2\pi i}{N})$.</p>
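To make the formula concrete (my addition — a pure-Python sketch rather than MATLAB): the $f_j$ in the asker's equation are simply the samples of $D$ on the grid. Here is the test case $a(x)=\sin x$, where $a'(x)=\cos x$ is known exactly:

```python
import cmath
import math

N = 32
xs = [2 * math.pi * j / N for j in range(N)]
a_prime = [math.cos(x) for x in xs]              # a(x) = sin(x) => a'(x) = cos(x)
D = [ap / (1 + ap * ap) for ap in a_prime]       # D(x) = a'(x) / (1 + a'(x)^2)

# forward DFT: c_m = (1/N) * sum_j D_j * exp(-2*pi*i*m*j/N)
c = [sum(D[j] * cmath.exp(-2j * math.pi * m * j / N) for j in range(N)) / N
     for m in range(N)]

# inverse transform reconstructs the samples exactly
recon = [sum(c[m] * cmath.exp(2j * math.pi * m * j / N) for m in range(N))
         for j in range(N)]
```

The coefficients `c` are what `fft(D)/N` would give in MATLAB, up to 1-based indexing and `fftshift` ordering conventions.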
1,603,323
<p>If <span class="math-container">$A$</span> is a positive definite matrix can it be concluded that the kernel of <span class="math-container">$A$</span> is <span class="math-container">$\{0\}$</span>? </p> <p>pf: R.T.P <span class="math-container">$\ker A = 0$</span>. Suppose not, i.e., there exists some <span class="math-container">$x\in\ker A$</span> s.t <span class="math-container">$x\neq 0$</span>, then <span class="math-container">$$Ax = 0\;\Longrightarrow x^T Ax = 0$$</span> which is a contradiction by definition of positive definite. Therefore <span class="math-container">$\ker A=\{0\}$</span>.</p>
Hanno
316,749
<p><span class="math-container">$\color{blue}{\text{user}1551}$</span>'s comment as answer.</p> <p>If <span class="math-container">$x\in\ker A$</span>, then <span class="math-container">$Ax=0\,$</span> and in turn <span class="math-container">$\,x^TAx=0$</span>. As <span class="math-container">$A$</span> is positive definite, we get <span class="math-container">$\,x=0$</span>.<br> Therefore <span class="math-container">$\ker A=\{0\}\,$</span>.</p>
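A tiny numeric illustration (my addition) with the positive definite matrix $A=\begin{pmatrix}2&1\\1&2\end{pmatrix}$: the quadratic form is positive on every nonzero vector, and the kernel is trivial (nonzero determinant):

```python
A = [[2, 1], [1, 2]]

def quad_form(x):
    """Compute x^T A x for a 2-vector x."""
    y = [A[0][0] * x[0] + A[0][1] * x[1],
         A[1][0] * x[0] + A[1][1] * x[1]]
    return x[0] * y[0] + x[1] * y[1]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]     # nonzero det <=> trivial kernel
samples = [quad_form((u, v)) for u in range(-3, 4) for v in range(-3, 4)
           if (u, v) != (0, 0)]
```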
19,356
<p>So I was wondering: are there any general differences in the nature of "what every mathematician should know" over the last 50-60 years? I'm not just talking of small changes where new results are added on to old ones, but fundamental shifts in the nature of the knowledge and skills that people are expected to acquire during or before graduate school.</p> <p>To give an example (which others may disagree with), one secular (here, secular means "trend over time") change seems to be that mathematicians today are expected to feel a lot more comfortable with picking up a new abstraction, or a new abstract formulation of an existing idea, even if the process of abstraction lies outside that person's domain of expertise. For example, even somebody who knows little of category theory would not be expected to bolt if confronted with an interpretation of a subject in his/her field in terms of some new categories, replete with objects, morphisms, functors, and natural transformations. Similarly, people would not blink much at a new algebraic structure that behaves like groups or rings but is a little different.</p> <p>My sense would be that the expectations and abilities in this regard have improved over the last 50-60 years, partly because of the development of "abstract nonsense" subjects including category theory, first-order logic, model theory, universal algebra etc., and partly because of the increasing level of abstraction and the need for connecting frameworks and ideas even in the rest of mathematics. 
I don't really know much about how mathematics was taught thirty years ago, but I surmised the above by comparing highly accomplished professional mathematicians who probably went to graduate school thirty years ago against today's graduate students.</p> <p>Some other guesses:</p> <ol> <li>Today, people are expected to have a lot more of a quick idea of a larger number of subjects, and less of an in-depth understanding of "Big Proofs" in areas outside their subdomain of expertise. Basically, the Great Books or Great Proofs approach to learning may be declining. The rapid increase in availability of books, journals, and information via the Internet (along with the existence of tools such as Math Overflow) may be making it more profitable to know a bit of everything rather than master big theorems outside one's area of specialization.</li> <li>Also, probably a thorough grasp of multiple languages may be becoming less necessary, particularly for people who are using English as their primary research language. Two reasons: first, a lot of materials earlier available only in non-English languages are now available as English translations, and second, translation tools are much more widely available and easy-to-use, reducing the gains from mastery of multiple languages.</li> </ol> <p>These are all just conjectures. Contradictory information and ideas about other possible secular trends would be much appreciated.</p> <p>NOTE: This might be too soft for Math Overflow! Moderators, please feel free to close it if so.</p>
Deane Yang
613
<p>I advise against using MathOverflow as a guide to what most young mathematicians do or ought or learn. The last time I saw such a strong bias towards "abstract nonsense" was when I was a graduate student at Harvard (in the early 80's), where if you wanted to do differential geometry rather than derived categories, you felt like a second class citizen.</p> <p>I do agree with Steve Huntsman that any math Ph.D. student should devote at least some time towards developing some skills in the practical use of mathematics, including some programming. The fact is that most Ph.D.'s do not end up in a research university, so if you want to have more options than teaching at a lower tier school, these practical skills are extremely useful. You can definitely develop them later, but getting at least some feel for what's involved is very helpful.</p> <p>Beyond that, there are many, many directions to head in, and each one has its own requirements on what you need to know. Today, a certain facility with abstraction can be quite useful, but it is not essential. Knowing a lot of different things also makes it a lot easier to interact with a broader range of mathematicians. This can be extremely useful to your own research, because you will stumble onto unexpected connections and intersections with work that seems completely unrelated.</p> <p>Most of us are unable to learn everything we want to, so we have to make choices on what we're going to focus on. This is difficult to do, but developing the proper judgement for this is one of the most important stages of becoming a research mathematician. You can't just follow someone else's advice; you have to learn to figure it out, based on all the different and conflicting views you'll get.</p>
19,356
<p>So I was wondering: are there any general differences in the nature of "what every mathematician should know" over the last 50-60 years? I'm not just talking of small changes where new results are added on to old ones, but fundamental shifts in the nature of the knowledge and skills that people are expected to acquire during or before graduate school.</p> <p>To give an example (which others may disagree with), one secular (here, secular means "trend over time") change seems to be that mathematicians today are expected to feel a lot more comfortable with picking up a new abstraction, or a new abstract formulation of an existing idea, even if the process of abstraction lies outside that person's domain of expertise. For example, even somebody who knows little of category theory would not be expected to bolt if confronted with an interpretation of a subject in his/her field in terms of some new categories, replete with objects, morphisms, functors, and natural transformations. Similarly, people would not blink much at a new algebraic structure that behaves like groups or rings but is a little different.</p> <p>My sense would be that the expectations and abilities in this regard have improved over the last 50-60 years, partly because of the development of "abstract nonsense" subjects including category theory, first-order logic, model theory, universal algebra etc., and partly because of the increasing level of abstraction and the need for connecting frameworks and ideas even in the rest of mathematics. 
I don't really know much about how mathematics was taught thirty years ago, but I surmised the above by comparing highly accomplished professional mathematicians who probably went to graduate school thirty years ago against today's graduate students.</p> <p>Some other guesses:</p> <ol> <li>Today, people are expected to have a lot more of a quick idea of a larger number of subjects, and less of an in-depth understanding of "Big Proofs" in areas outside their subdomain of expertise. Basically, the Great Books or Great Proofs approach to learning may be declining. The rapid increase in availability of books, journals, and information via the Internet (along with the existence of tools such as Math Overflow) may be making it more profitable to know a bit of everything rather than master big theorems outside one's area of specialization.</li> <li>Also, probably a thorough grasp of multiple languages may be becoming less necessary, particularly for people who are using English as their primary research language. Two reasons: first, a lot of materials earlier available only in non-English languages are now available as English translations, and second, translation tools are much more widely available and easy-to-use, reducing the gains from mastery of multiple languages.</li> </ol> <p>These are all just conjectures. Contradictory information and ideas about other possible secular trends would be much appreciated.</p> <p>NOTE: This might be too soft for Math Overflow! Moderators, please feel free to close it if so.</p>
Gerald Edgar
454
<p>In Littlewood's <em>Miscellany</em> there is an essay "A Mathematical Education" where he describes the situation before 1907.</p>
2,835,474
<p>What is linear about a linear combination of things? In linear algebra, the "things" we are dealing with are usually vectors, and the linear combination gives the span of the vectors. Or it could be a linear combination of variables and functions. But why not just call it a combination? Why is the term "linear" included? What is so "linear" about it?</p>
wjmccann
426,335
<p>When we want to talk about things being linear we are restricting ourselves to only two operations:</p> <ol> <li>We can add things together</li> <li>We can scale things</li> </ol> <p>We call these operations linear because, well, they operate on a line! When you scale a vector, the new vector is a "further out" version of the original on the same line. When we add two vectors together, we are scaling the vector and changing its angle a bit. </p> <p><strong>Notice that nothing is curvy!</strong></p> <p>So when we talk about a linear combination, we say that you can create a combination from a set of vectors $\{v_1,v_2,\dots,v_n\}$ using only those two operations</p> <p>$$ v_{new} = c_1v_1 + c_2v_2 + \dots + c_nv_n $$</p>
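<p>As a small illustration (my addition, not part of the original answer; the variable names are made up), a linear combination in NumPy is built from exactly those two operations, scaling and adding:</p>

```python
import numpy as np

# Two basis vectors in the plane
v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])

# Scale each vector, then add the results: a linear combination
c1, c2 = 3.0, -2.0
v_new = c1 * v1 + c2 * v2

print(v_new)  # entries 3.0 and -2.0
```

<p>No operation here ever bends anything: every intermediate result stays on a line through the origin, which is the point of the answer above.</p>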
2,520,044
<p>$$\lim_{x\to2}{\frac{\sqrt{3x-2}-\sqrt{5x-6}}{\sqrt{2x-1}-\sqrt{x+1}}}$$</p> <p>Evaluate the limit.</p> <p>Thanks for any help</p>
Michael Rozenberg
190,319
<p>$$\lim_{x\to2}{\frac{\sqrt{3x-2}-\sqrt{5x-6}}{\sqrt{2x-1}-\sqrt{x+1}}}=\lim_{x\to2}{\frac{(3x-2-(5x-6))(\sqrt{2x-1}+\sqrt{x+1})}{(2x-1-(x+1))(\sqrt{3x-2}+\sqrt{5x-6})}}=$$ $$=\lim_{x\to2}{\frac{-2(x-2)(\sqrt{2x-1}+\sqrt{x+1})}{(x-2)(\sqrt{3x-2}+\sqrt{5x-6})}}=-2\cdot\frac{\sqrt3+\sqrt3}{2+2}=-\sqrt3$$</p>
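<p>As a quick numeric sanity check (my addition, not part of the original answer), evaluating the expression at points near $x=2$ shows the values approaching $-\sqrt3\approx-1.732$:</p>

```python
import math

def f(x):
    # The original expression; it is 0/0 at x = 2, so evaluate only nearby
    return (math.sqrt(3*x - 2) - math.sqrt(5*x - 6)) / (math.sqrt(2*x - 1) - math.sqrt(x + 1))

# Approach x = 2 from both sides; both sequences tend to -sqrt(3)
for h in (1e-2, 1e-4, 1e-6):
    print(f(2 + h), f(2 - h))
```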
1,159
<p>A lot of times I see theorems stated for local rings, but usually they are also true for "graded local rings", i.e., graded rings with a unique homogeneous maximal ideal (like the polynomial ring). For example, the Hilbert syzygy theorem, the Auslander-Buchsbaum formula, statements related to local cohomology, etc.</p> <p>But it's not entirely clear to me how tight this analogy is. I certainly don't expect all statements about local rings to extend to graded local rings, so I'd like to know about some "pitfalls" in case I ever decide to make an "oh yes, this obviously extends" fallacy. What are some examples of statements which are true for local rings whose graded analogues are not necessarily true? Or another related question: what kind of intuition should I have when I want to conclude that statements have graded versions?</p> <p>There is a notion of "generalized local ring" due to Goto and Watanabe which includes graded local rings and local rings: a positively graded ring that is finitely generated as an algebra over its zeroth degree part, and its zeroth degree part is a local ring, so one possibility is just to see if this weaker definition is enough to prove the statement. Of course the trouble comes when the proofs cite other sources, and become unmanageable to trace back to first principles.</p>
Jan Weidner
2,837
<p>I will try to provide some geometric intuition for why there should be an analogy between local rings and graded rings with a unique homogeneous maximal ideal. Maybe this also helps to guess whether a statement true for local rings should still hold in the graded case. A graded $k$-algebra can be thought of as an affine space with a $k^*$-action. Homogeneous prime ideals correspond to invariant closed subvarieties. So algebras of this sort correspond to spaces with exactly one fixed point. For example, in the case of the polynomial ring this is $k^n$ with the obvious action of the multiplicative group, and $0$ is the fixed point. From this example we see that the action can be used to "contract" the space to the fixed point. Hence the local nature of the space.</p>