3,962,514
<p>This is just curiosity / a personal exercise.</p> <p><a href="https://what3words.com/" rel="nofollow noreferrer">What3Words</a> allocates every 3m x 3m square on the Earth a unique set of 3 words. I tried to work out how many words are required, but got a bit stuck.</p> <p><span class="math-container">$$ \text{Area} = 510 \times 10^6\ \mathrm{km}^2 = 5.1 \times 10^{14}\ \mathrm{m}^2 \approx 5.4 \times 10^{14}\ \mathrm{m}^2 $$</span></p> <p>(rounding up to make the next step easier!)</p> <p>And so there are ~ <span class="math-container">$6\times10^{13}$</span> 3m x 3m squares.</p> <p>I <em>assumed</em> I could use the equation for the number of combinations to find the number of words needed:</p> <p><span class="math-container">$$ _nC_r = \frac{n!}{r! (n - r)!} $$</span></p> <p>where <span class="math-container">$r$</span> is 3, and the total number of combinations is the number of squares: <span class="math-container">$6\times10^{13}$</span></p> <p><span class="math-container">$$ 6\times10^{13} = \frac{n!}{3! (n - 3)!} $$</span> <span class="math-container">$$ 6\times10^{13} = \frac{(n)(n-1)(n-2)(n-3)!}{3! (n - 3)!} $$</span> <span class="math-container">$$ n^3 - 3n^2 + 2n - (36\times10^{13}) = 0 $$</span></p> <p>... and then I can't work out the first factor to use to solve the cubic equation. I'm not sure I've ever had to solve a cubic equation with a non-integer factor, and none of the tutorials I've found have helped.</p> <p>(And my stats are also not good enough for me to be convinced this is the correct equation to use anyway!)</p> <p>Any hints as to the next step would be appreciated.</p>
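A quick numeric sketch (an editor's addition, not from the post): rather than factoring the cubic exactly, one can search for the smallest $n$ with $\binom{n}{3}$ at least the number of squares, using the $6\times10^{13}$ estimate above.

```python
# Back-of-envelope check: smallest n with C(n, 3) >= number of 3m x 3m squares.
from math import comb

squares = 6 * 10**13                 # ~5.4e14 m^2 of surface / 9 m^2 per square

n = round((6 * squares) ** (1 / 3))  # n(n-1)(n-2) ~ n^3 = 6 * C(n, 3)
while comb(n, 3) < squares:
    n += 1
while comb(n - 1, 3) >= squares:
    n -= 1

print(n)  # about 71,000 words suffice for unordered triples of distinct words
```

(If ordered triples with repetition were allowed instead, only $n^3 \ge 6\times10^{13}$, i.e. roughly $39{,}000$ words, would be needed.)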
CrackedBauxite
867,032
<p>You need to choose an eigenvector that is also in the column space of the matrix <span class="math-container">$A-\lambda I$</span>. In this case, by looking at the matrix you have and at your eigenvector basis, one sees that an eigenvector that will work is <span class="math-container">$\begin{bmatrix}1\\0\\-2\end{bmatrix}-2\begin{bmatrix}0\\1\\0\end{bmatrix}=\begin{bmatrix}1\\-2\\-2\end{bmatrix}$</span>.</p> <p>Hence, you have two eigenvectors <span class="math-container">$ v_1=\begin{bmatrix}1\\0\\-2\end{bmatrix}$</span> and <span class="math-container">$v_2 = \begin{bmatrix}1\\-2\\-2\end{bmatrix}$</span>. Define <span class="math-container">$v_3 = \begin{bmatrix}0\\0\\1\end{bmatrix}$</span> and note that <span class="math-container">$(A-I)v_3 =v_2$</span>. Now using <span class="math-container">$T=[v_1,v_2,v_3]$</span> we will get the Jordan form.</p> <p>Note that if we just wanted the Jordan form <span class="math-container">$J$</span>, without caring about what T is, then it is sufficient to know that A has <span class="math-container">$2$</span> eigenvectors. For then, as A is a <span class="math-container">$3\times3$</span> matrix the only possibility for the matrix <span class="math-container">$T$</span> is that it contains <span class="math-container">$2$</span> eigenvectors and one generalized eigenvector, which is mapped to one of the eigenvectors under <span class="math-container">$A-I$</span>. Hence, we will get two Jordan blocks, one of size <span class="math-container">$1\times 1$</span> and one of size <span class="math-container">$2\times 2$</span>. So up to ordering of the blocks <span class="math-container">$J=\begin{bmatrix}1 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; 1 \\ 0 &amp; 0 &amp; 1\end{bmatrix}$</span>.</p>
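A minimal SymPy sketch (not part of the original answer) checking the construction; since the matrix $A$ from the problem is not reproduced here, it is rebuilt from the given $T$ and $J$ via $A = TJT^{-1}$:

```python
import sympy as sp

v1 = sp.Matrix([1, 0, -2])   # eigenvector
v2 = sp.Matrix([1, -2, -2])  # eigenvector
v3 = sp.Matrix([0, 0, 1])    # generalized eigenvector
T = sp.Matrix.hstack(v1, v2, v3)
J = sp.Matrix([[1, 0, 0],
               [0, 1, 1],
               [0, 0, 1]])

A = T * J * T.inv()          # reconstruct a matrix with this Jordan form

I3 = sp.eye(3)
assert (A - I3) * v1 == sp.zeros(3, 1)  # A v1 = v1
assert (A - I3) * v2 == sp.zeros(3, 1)  # A v2 = v2
assert (A - I3) * v3 == v2              # v3 maps to v2 under A - I
```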
184,699
<p>First, we make the following observation: let $X: M \rightarrow TM $ be a vector field on a smooth manifold. Taking the contraction with respect to $X$ twice gives zero, i.e. $$ i_X \circ i_{X} =0.$$ Is there any "name" for the corresponding "homology" group that one can define (Kernel mod image)? Has this "homology" group been studied by others (there are plenty of questions that one can ask........is it isomorphic to anything more familiar etc etc). </p> <p>Similarly, a dual observation is as follows: Let $\alpha$ be a one form; taking the wedge product with $\alpha$ twice gives us zero. One can again define kernel mod image. Does that give anything "interesting"? </p> <p>If people have investigated these questions, I would like to know a few references. </p> <p>My purpose for asking the "name" of the (co)homology group is so that I can make a google search using the name. I was unable to do that, since I do not know of any key words under this topic (or if at all it is a topic).</p>
Donu Arapura
4,144
<p>Actually, I have seen something like this used before in the paper <em>Holomorphic vector fields and Kaehler manifolds</em>, Invent. Math. (1973), by Carrell and Lieberman. They prove that if a compact Kähler manifold admits a holomorphic vector field $v$ with nonempty zero locus $Z(v)$, then the Hodge numbers vanish for $|p-q|&gt; \dim Z(v)$. The key idea is to analyze the complex of sheaves $$\ldots \Omega^p\stackrel{i_v}{\to}\Omega^{p-1}\stackrel{i_v}{\to}\ldots $$ which is really a Koszul complex. The complex given by wedging by a $1$-form is also interesting, and it has been studied on the same class of manifolds by Green and Lazarsfeld, <em>Deformation theory, generic vanishing theorems…</em>, Invent. Math. (1987).</p> <p>(I deleted this soon after I wrote it, but perhaps it does give a sort of answer.)</p>
184,699
Vladimir Dotsenko
1,306
<p>These constructions certainly are very recognisable and meaningful in the algebraic context, that is if you think of algebraic differential forms on an affine algebraic variety. Namely, the corresponding complexes both implement versions of the Koszul complex for a sequence of elements $f_1,\ldots,f_n$ in $R$, where $R$ is the ring of functions on $M$, and $n=\dim M$. Indeed, the algebra of differential forms $\Omega^\bullet(M)$ is the exterior algebra $\Lambda_R^\bullet(R\,dx_1\oplus\ldots\oplus R\,dx_n)$, and for the given vector field $X=f_1\partial_1+\cdots+f_n\partial_n$, the differential $i_X$ on $\Omega^\bullet(M)$ is the $R$-algebra (super)derivation taking $dx_i$ to $f_i$, which essentially is a commonly used definition of the Koszul complex. For the $1$-form $\omega=f_1\,dx_1+\cdots+f_n\,dx_n$, the differential $\omega\wedge$ on $\Omega^\bullet(M)$, which, as we remember, is $\Lambda_R^\bullet(R\,dx_1\oplus\ldots\oplus R\,dx_n)$, is dual to the differential corresponding to $i_X$ under the isomorphism of $R$-modules $$\mathop{\mathrm{Hom}}\nolimits_R(\Lambda_R^\bullet(R\,dx_1\oplus\ldots\oplus R\,dx_n),R)\simeq \Lambda_R^\bullet(R\,dx_1\oplus\ldots\oplus R\,dx_n).$$</p> <p>In any case, the Koszul complex is acyclic if and only if $f_1,\ldots,f_n$ define a complete intersection (this I think also holds in the analytic case; I have no intuition about the smooth one). There are many references, e.g. <em>Principles of Algebraic Geometry</em> by Griffiths &amp; Harris, or <em>Commutative Algebra</em> by Bourbaki.</p>
507,062
<p>I need your help with a lifelong problem I have always had. There are two things in life I hate most: 1) weddings and 2) long division. For the life of me, I hate division. I am horrible at it. I can multiply large numbers in my head, but I couldn’t divide if my life depended on it. When the division is 2 digits over 1 or 2 digits, it’s not a major issue. But when I divide anything 3+ digits over a 2 digit number, it becomes an issue.</p> <p>I am trying to find a way to divide that appeals to my abilities to multiply. I am trying to find another way to approach a division problem than the typical long division approach.</p> <p>For example: 876/2. Simple enough; Ans: 438. This isn’t the problem; finding the new approach is. I am trying to break the dividend into single digits. $$8=2(4)+0$$ $$7=2(3)+1$$ $$6=2(3)+0$$</p> <p>My problem is: how does this approach get me to 438? The numbers in the parentheses get me to 433 with a summed remainder of 1. If I carry the one and put it on the first digit of my number, 3, I get 434.</p> <p>In typical long division, the remainder is carried over. So $$8=2(4)+0$$ $$7=2(3)+1 $$(the remainder 1 is carried down) $$16=2(8)+0$$</p> <p>Is there a way to do long division by doing the division individually on each number and then summing the remainder?</p>
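For contrast, here is a small sketch (an editor's addition; the function name is made up) of the carry-the-remainder scheme described above, which shows why each digit's remainder must be carried into the next digit rather than summed at the end:

```python
def divide_by_digits(n, d):
    """Long division digit by digit: each step divides (carry*10 + digit) by d."""
    quotient_digits = []
    carry = 0
    for digit in map(int, str(n)):
        value = carry * 10 + digit
        quotient_digits.append(value // d)
        carry = value % d          # the remainder is carried, not summed
    return int("".join(map(str, quotient_digits))), carry

print(divide_by_digits(876, 2))  # (438, 0)
```

Summing the remainders instead of carrying them fails because a remainder of 1 in the tens place is worth 10, not 1, once it reaches the units place.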
Lisa
81,933
<p>Yes.</p> <p>In response to your edit: you squared both sides of the equation to correct your post. However, this doesn't make a difference; it's still valid. Since you squared both sides of the equation (applied the same operation to both), you didn't effectively change anything.</p> <p>The solution is therefore:</p> <ol> <li><p>take the square root</p></li> <li><p>multiply both sides by $(x^{2}-4)$ and divide both sides by $5x$.</p></li> <li><p>square both sides ($1^2=1$)</p></li> </ol>
507,062
StasK
97,144
<p>If you are solving for $x$, it's OK as long as in the end you check whether the expression you divided by, $5x$, is non-zero.</p> <p>In general, you would have to follow through with everything assuming all transformations are legal, and then check for weak spots like division by zero ($5x=0$ or $x^2-4=0$, or square roots or logs of negative numbers), and modify your set of solutions accordingly by excluding some solutions that would lead to a log of a negative number, or adding some solutions that could have led to a zero on both sides of the equation.</p>
507,062
Cameron Buie
28,900
<p>As others have noted, it is valid so long as $x\ne0,$ since otherwise you're dividing by $0$. Let me offer another approach, using the difference of squares formula $a^2-b^2=(a-b)(a+b)$, together with the fact that $(-c)^2=c^2.$ The following, then, are equivalent:</p> <p>$$\left(\frac{x^2+6}{x^2-4}\right)^2=\left(\frac{5x}{4-x^2}\right)^2$$</p> <p>$$\left(\frac{x^2+6}{x^2-4}\right)^2=\left(\frac{5x}{x^2-4}\right)^2$$</p> <p>$$\left(\frac{x^2+6}{x^2-4}\right)^2-\left(\frac{5x}{x^2-4}\right)^2=0$$</p> <p>$$\left(\frac{x^2+6}{x^2-4}-\frac{5x}{x^2-4}\right)\left(\frac{x^2+6}{x^2-4}+\frac{5x}{x^2-4}\right)=0$$</p> <p>$$\frac{x^2-5x+6}{x^2-4}\cdot\frac{x^2+5x+6}{x^2-4}=0.$$</p> <p>Now, note that <em>none</em> of the above equations make sense when $x=\pm 2,$ since $x^2-4=(x-2)(x+2).$ So, we must assume that $x\neq 2$ and $x\neq-2$. At that point, the last equation is readily equivalent to $$(x^2-5x+6)(x^2+5x+6)=0,$$ which is true if and only if $x^2-5x+6=0$ or $x^2+5x+6=0$. These two quadratic equations give the solutions $x=\pm 2,\pm3,$ and so since we had to rule out $x=\pm 2,$ we get $x=\pm 3$.</p>
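A quick SymPy check of the endgame above (an editor's addition): solve the quartic product and discard the roots where $x^2-4$ vanishes.

```python
import sympy as sp

x = sp.symbols('x')
roots = sp.solve((x**2 - 5*x + 6) * (x**2 + 5*x + 6), x)

# x = ±2 must be excluded: the original denominators x^2 - 4 vanish there
solutions = sorted(r for r in roots if r**2 != 4)
print(solutions)  # [-3, 3]
```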
2,970,787
<blockquote> <p>Find <span class="math-container">$\lim_{x\to -\infty} \frac{(x-1)^2}{x+1}$</span></p> </blockquote> <p>If I divide the whole expression by the maximum power, i.e. <span class="math-container">$x^2$</span>, I get <span class="math-container">$$\lim_{x\to -\infty} \frac{(1-\frac1x)^2}{\frac1x+\frac{1}{x^2}}$$</span> The numerator tends to <span class="math-container">$1$</span>, the denominator tends to <span class="math-container">$0$</span>.</p> <p>So I get the answer as <span class="math-container">$+\infty$</span>.</p> <p>But when I plot the graph it tends to <span class="math-container">$-\infty$</span>.</p> <p>What am I missing here? Can someone give me the precise steps that I should write in such a case? Thank you very much!</p> <p>NOTE: I cannot use L'Hôpital for finding this limit.</p>
mfl
148,513
<p><strong>Hint</strong></p> <p>If <span class="math-container">$x&lt;-1$</span> then <span class="math-container">$$\frac1x+\frac {1}{x^2}&lt;0.$$</span></p>
2,970,787
B. Goddard
362,009
<p>First, I think it's better to divide top and bottom by the maximum power <em>of the bottom</em>. In this case, by <span class="math-container">$x$</span>, to get</p> <p><span class="math-container">$$\lim_{x\to -\infty} \frac{\frac{1}{x}(x-1)^2}{1+\frac{1}{x}} = \lim_{x\to -\infty} \frac{\frac{1}{x}(x^2-2x+1)}{1+\frac{1}{x}}$$</span></p> <p><span class="math-container">$$\lim_{x\to -\infty} \frac{x-2+\frac{1}{x}}{1+\frac{1}{x}}.$$</span></p> <p>Now you can see that the top goes to minus infinity and the bottom goes to <span class="math-container">$1$</span>.</p>
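Numerically (an editor's sketch), sampling the original expression at increasingly negative $x$ shows it decreasing roughly like $x-2$:

```python
# Sample (x-1)^2 / (x+1) as x -> -infinity
def f(x):
    return (x - 1) ** 2 / (x + 1)

for x in (-1e2, -1e4, -1e6):
    print(x, f(x))  # values head to minus infinity, roughly like x - 2
```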
341,202
<p>The divisibility rule for $3$ is well-known: if you add up the digits of $n$ and the sum is divisible by $3$, then $n$ is divisible by three. This is quite helpful for determining if really large numbers are multiples of three, because we can recursively apply this rule:</p> <p>$$1212582439 \rightarrow 37 \rightarrow 10\rightarrow 1 \implies 3\not\mid 1212582439$$ $$124524 \rightarrow 18 \rightarrow 9 \implies 3\mid 124524$$</p> <p>This works for as many numbers as I've tried. However, I'm not sure how this may be <em>proven</em>. Thus, my question is:</p> <blockquote> <p>Given a positive integer $n$ and that $3\mid\text{(the sum of the digits of $n$)}$, how may we prove that $3\mid n$?</p> </blockquote>
Ross Millikan
1,827
<p>Hint: take the number apart into digits. Each digit $d$ represents $d \cdot 10^n$ for some $n$. What is the remainder when you divide $10^n$ by $3$? (Think about $10^n-1$ what does it look like?)</p>
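The hint can be checked mechanically (an editor's sketch):

```python
# 10^n leaves remainder 1 mod 3, because 10^n - 1 is a run of n nines
for n in range(1, 10):
    assert 10**n % 3 == 1
    assert str(10**n - 1) == "9" * n
```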
341,202
Community
-1
<p>$1$. First prove that $3 \mid 10^n - 1$ (By noting that, $10^n - 1 = (10-1)(10^{n-1} + 10^{n-2} + \cdots + 1)$).</p> <p>$2$. Now any number can be written in decimal expansion as $$a = a_n 10^n + a_{n-1} 10^{n-1} + \cdots + a_1 10^1 + a_0$$</p> <p>$3$. Note that $a_k 10^k = a_k + a_k (10^k-1)$. Hence, $$a = \overbrace{(a_n + a_{n-1} + \cdots + a_0)}^{b} + \underbrace{\left(a_n (10^n-1) + a_{n-1} (10^{n-1}-1) + \cdots + a_1 (10^1-1) \right)}_{c}$$</p> <p>$4$. We have $a=b+c$ and $3 \mid c$. Now conclude that $3 \mid a \iff 3 \mid b$.</p>
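Steps $3$ and $4$ can be verified exhaustively for small numbers (an editor's sketch):

```python
def digit_sum(a):
    return sum(int(d) for d in str(a))

# a = b + c with b the digit sum; c is always a multiple of 3 (in fact of 9),
# so 3 | a if and only if 3 | b
for a in range(1, 10_000):
    b = digit_sum(a)
    c = a - b
    assert c % 9 == 0
    assert (a % 3 == 0) == (b % 3 == 0)
```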
341,202
mathandy
53,784
<p>There's likely a more elegant proof, but you got me thinking about this, so here's what I came up with (this is for two digits, but it generalizes easily).</p> <p>If some number $10a+b$ (for $a,b$ integers) is divisible by three, then there is some integer $k$ such that $10a+b = 3k$.</p> <p>This means that</p> <p>$10a+10b = 3k+9b$ and thus</p> <p>$10(a+b) = 3(k+3b)$,</p> <p>so since $3$ doesn't divide $10$, it must divide $a+b$.</p> <p>For the other direction:</p> <p>If $a+b = 3k$, then</p> <p>$10a+b = 3k+9a = 3(k+3a)$.</p>
341,202
DeepSea
101,504
<p><strong>Divisibility by $3$ rule</strong>: $ 3\mid \overline {a_1a_2...a_n} \iff 3\mid (a_1+a_2+...+a_n)$, where $a_1,a_2,\ldots,a_n$ are digits in $\{0,1,2,...,9\}$.</p> <p><strong>Proof:</strong> $\overline{a_1a_2...a_n} = a_1\cdot 10^{n-1} + a_2\cdot 10^{n-2} + ... + a_{n-1}\cdot 10 + a_n = a_1\cdot (9+1)^{n-1} + a_2\cdot (9+1)^{n-2} +...+ a_{n-1}\cdot (9+1) + a_n \equiv a_1+a_2+...+a_n\pmod 3$. </p> <p><strong>Example:</strong> $3 \mid 4,722$ because $4+7+2+2 = 15$, and $1+5 = 6$ is a multiple of $3$. Check: $4,722 = 1,574 \times 3$.</p>
341,202
Bill Dubuque
242
<p>Since you tagged it <a href="/questions/tagged/abstract-algebra" class="post-tag" title="show questions tagged &#39;abstract-algebra&#39;" rel="tag">abstract-algebra</a> let's use a little algebra to help clarify the essence of the matter.</p> <p><span class="math-container">$\!\!\begin{align}\rm\ {\bf Hint}\ \ \ mod\ 3\!:\,\ \color{#C00}{10\equiv 1}\:\Rightarrow\ &amp;\rm \overbrace{\rm n = d_k \color{#C00}{10}^k+\cdots+d_1 \color{#C00}{10} + d_0}^{\textstyle \text{$\rm n\,$ in radix $\,10\,$ (decimal)}} \equiv\, f(10) \\[.2em] &amp;\rm \ \ \equiv\ d_k\,\color{#C00}1^k \:\!+\cdots +\ d_1\,\color{#C00}1\:\!+d_0\equiv\, f(1)\\[.2em] &amp;\ \ \equiv\ \text{digit sum of $\,\rm n$}\end{align}$</span></p> <p><strong>Or,</strong> <span class="math-container">$ $</span> let <span class="math-container">$\rm\:f = \,$</span> above <strong>polynomial</strong> (in <span class="math-container">$10),\:$</span> so <span class="math-container">$\rm\ n = f(10),\:$</span> and <span class="math-container">$\rm\ f(1) =\,$</span> digit sum of <span class="math-container">$\rm\,n.$</span></p> <p><a href="http://en.wikipedia.org/wiki/Factor_theorem" rel="nofollow noreferrer">Factor Theorem</a> <span class="math-container">$\rm\:\Rightarrow\: 3\mid 10\!-\!1\mid f(10)\!-f(1)\,$</span> <span class="math-container">$\Rightarrow$</span> <span class="math-container">$\rm\, f(10) = f(1) + 3k,\ $</span> so <span class="math-container">$\rm\ 3\mid f(10)\!\iff\!3\mid f(1)$</span></p> <p><strong>Remark</strong> <span class="math-container">$ $</span> The same holds true if we replace <span class="math-container">$\,3\,$</span> by <span class="math-container">$\,9\,$</span> since, too, <span class="math-container">$\rm\:10\equiv 1\ \ (mod\ 9),\:$</span> i.e. 
<span class="math-container">$\rm\:9\mid 10\!-\!1.\ $</span> This leads to the <a href="http://en.wikipedia.org/wiki/Casting_out_nines" rel="nofollow noreferrer">casting out nines</a> divisibility test, on which <a href="https://math.stackexchange.com/search?tab=newest&amp;q=user%3a242%20casting%20">much has been written here.</a></p> <p>The <strong>key point</strong> is that radix representation is a <strong>polynomial</strong> function of the radix, and polynomials (being compositions of sums and products) are <em>compatible</em> with modular arithmetic, i.e. <span class="math-container">$\rm \bmod m\!:\ a\equiv b\Rightarrow f(a)\equiv f(b),\,$</span> see the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a>.</p> <p>Note also that such modular &quot;casting&quot; methods are more powerful than <a href="https://math.stackexchange.com/a/2989299/242">classical divisibility tests</a> like <span class="math-container">$\,7\mid 10b+a\iff 7\mid b-2a\,$</span> because the modular methods yield further info (the remainder), which allows us to perform further arithmetic on these values (e.g. to help check computations). See also the <a href="https://math.stackexchange.com/a/2063944/242">universal divisibility test</a>.</p>
2,987,994
<p>I'm looking for definite integrals that are solvable using the method of differentiation under the integral sign (also called the Feynman Trick) in order to practice using this technique.</p> <p>Does anyone know of any good ones to tackle?</p>
Henry Lee
541,220
<p>You can try the most famous one, which is: <span class="math-container">$$\int_0^\infty\frac{\sin(x)}{x}dx$$</span> Good luck!</p>
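SymPy can confirm the target value (an editor's addition); the Feynman-trick route itself introduces the parameter $I(t)=\int_0^\infty e^{-tx}\frac{\sin x}{x}\,dx$ and differentiates under the integral sign to get $I'(t)=-\frac{1}{1+t^2}$.

```python
import sympy as sp

x = sp.symbols('x')

# The Dirichlet integral evaluated symbolically
val = sp.integrate(sp.sin(x) / x, (x, 0, sp.oo))
print(val)  # pi/2
```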
2,987,994
<p>I'm looking for definite integrals that are solvable using the method of differentiation under the integral sign (also called the Feynman Trick) in order to practice using this technique.</p> <p>Does anyone know of any good ones to tackle?</p>
Zacky
515,527
<p>Since this became quite popular I will mention here about an <a href="https://zackyzz.github.io/feynman.html" rel="nofollow noreferrer">introduction to Feynman's trick</a> that I wrote recently. It also contains some exercises that are solvable using this technique.</p> <p>My goal there is to give some ideas on how to introduce a new parameter as well as to describe some heuristics that I tend to follow when using Feynman's trick, hoping that it can serve as a good starting point.</p> <hr /> <p>In case you are already familiar with Feynman's trick or prefer to evaluate some integrals directly, here is a brief list:</p> <p><a href="https://math.stackexchange.com/q/3026362/515527"><span class="math-container">$$I_1=\int_0^\frac{\pi}{2} \ln(\sec^2x +\tan^4x)dx$$</span></a> <a href="https://math.stackexchange.com/q/3205906/515527"><span class="math-container">$$I_2=\int_0^\infty \frac{\ln\left({1+x+x^2}\right)}{1+x^2}dx$$</span></a> <a href="https://math.stackexchange.com/q/3007234/515527"><span class="math-container">$$I_3=\int_0^\frac{\pi}{2}\ln(2+\tan^2x)dx$$</span></a> <a href="https://math.stackexchange.com/q/2974517/515527"><span class="math-container">$$I_4=\int_0^\infty \frac{x-\sin x}{x^3(x^2+4)} dx$$</span></a> <a href="https://math.stackexchange.com/q/2876690/515527"><span class="math-container">$$I_5=\int_0^\frac{\pi}{2}\arcsin\left(\frac{\sin x}{\sqrt 2}\right)dx$$</span></a> <a href="https://math.stackexchange.com/q/3044687/515527"><span class="math-container">$$I_6=\int_0^\frac{\pi}{2} \ln\left(\frac{2+\sin x}{2-\sin x}\right)dx$$</span></a> <a href="https://math.stackexchange.com/q/3024896/515527"><span class="math-container">$$I_7=\int_0^\frac{\pi}{2} \frac{\arctan(\sin x)}{\sin x}dx $$</span></a> <a href="https://math.stackexchange.com/q/3247341/515527"><span class="math-container">$$I_8=\int_0^1 \frac{\ln(1+x^3)}{1+x^2}dx $$</span></a> <a href="https://math.stackexchange.com/a/4412090/515527"><span class="math-container">$$I_9=\int_0^{\infty} 
\frac{x^{4/5}-x^{2/3}}{\ln(x)(1+x^2)}dx$$</span></a> <a href="https://math.stackexchange.com/q/3201868/515527"><span class="math-container">$$I_{10}=\int_{0}^{1} \int_{0}^{1} \frac{1}{(1+x y) \ln (x y)}dxdy$$</span></a> <a href="https://math.stackexchange.com/q/3002309/515527"><span class="math-container">$$I_{11}=\int_0^1\frac{\ln(1+x-x^2)}{x}dx$$</span></a> <a href="https://math.stackexchange.com/q/4412674/515527"><span class="math-container">$$I_{12}=\int_0^1 \frac{\ln(1-x+x^2)}{x(1-x)}dx$$</span></a> <a href="https://math.stackexchange.com/q/3143523/515527"><span class="math-container">$$I_{13}=\int_0^\infty \log \left(1-2\frac{\cos 2\theta}{x^2}+\frac{1}{x^4} \right)dx$$</span></a> <a href="https://math.stackexchange.com/q/3328822/515527"><span class="math-container">$$I_{14}=\int_0^{\infty} \exp\left(-\left(4x+\frac{9}{x}\right)\right) \sqrt{x}dx$$</span></a> <a href="https://math.stackexchange.com/q/3778663/515527"><span class="math-container">$$I_{15}=\int_0^\frac{\pi}{2}\arctan\left(\frac{2\sin x}{2\cos x -1}\right)\frac{\sin\left(\frac{x}{2}\right)}{\sqrt{\cos x}}dx$$</span></a> <a href="https://math.stackexchange.com/q/3284527/515527"><span class="math-container">$$I_{16}=\int_0^1\int_0^1 \frac{ x\ln x\ln y}{(1-xy)\ln(xy)}dxdy$$</span></a> <a href="https://math.stackexchange.com/a/4511882/515527"><span class="math-container">$$I_{17}=\int_1^{2}\frac{\cosh^{-1} x}{\sqrt{4-x^2}}dx$$</span></a> <a href="https://math.stackexchange.com/q/3583334/515527"><span class="math-container">$$I_{18}=\int_0^t\frac{1}{\sqrt{x^3}} \exp\left({-\frac{(a-bx)^2}{2x}}\right) dx$$</span></a> <a href="https://math.stackexchange.com/q/3303483/515527"><span class="math-container">$$I_{19}=\int_{0}^{\frac{\pi}4} \ln(\sin{x}+\cos{x}+\sqrt{\sin(2x)})dx$$</span></a> <a href="https://math.stackexchange.com/q/3357105/515527"><span class="math-container">$$I_{20}=\int_0^1 \frac{\ln^2 x\ln(1+x)}{1+x^2}dx$$</span></a></p>
1,131,970
<p>Let $I$ be a proper ideal of a polynomial ring $A$ and $x \in A$ an irreducible element.</p> <p>In a theorem of commutative algebra I want to use the fact that, under these hypotheses, the following equality holds: $$\sqrt{(I,x^k)}=\sqrt{(\sqrt{I},x)}$$</p> <p>The assertion seems to be true; does anyone have a counterexample or a proof? </p> <p>Thank you.</p>
orangeskid
168,051
<p>If you know a bit about the field $\mathbb{Q}_p$ and its subring $\mathbb{Z}_p$, things become easier, at least formally.</p> <p>Say $x_0^2 = a \mod p^k$. Well, let's assume in fact that the exponent of $p$ in $x_0^2-a$ is larger than the exponent of $p$ in $a$. </p> <p>$$x^2_0 = a + \delta'$$ with $e_p(\delta') &gt; e_p(a)$.</p> <p>Then $$\delta = \frac{\delta'}{a}$$</p> <p>has </p> <p>$$e_p(\delta) &gt;0$$</p> <p>and we get $$x_0^2 = a(1+\delta)$$</p> <p>Fact: the element</p> <p>$$x = x_0\cdot (1+\delta)^{-1/2}$$</p> <p>satisfies</p> <p>$$x^2 = a$$</p> <p>where $(1+\delta)^{-1/2}$ is the sum of a series</p> <p>$$(1+\delta)^{-1/2} = 1 +\frac{-1}{2} \delta + \frac{\frac{-1}{2}( \frac{-1}{2}-1)}{2} \delta^2 + \cdots = \sum_{k\ge 0}\binom{-1/2}{k}\delta^k$$</p> <p>(since $p$ is odd and $e_p(\delta) &gt;0$ this series is convergent; see also <a href="http://en.wikipedia.org/wiki/Binomial_theorem#Newton.27s_generalised_binomial_theorem" rel="nofollow">http://en.wikipedia.org/wiki/Binomial_theorem#Newton.27s_generalised_binomial_theorem</a>)</p> <p>$\bf{Added:}$</p> <p>@Lubin pointed out (many thanks!) a neat way to rescale the relative error $\delta$ to $4 \delta$ instead, since the series $(1+4 X)^{\frac{1}{2}}$, as a series in $X$, has all integer coefficients. Indeed, recall the generating function for the Catalan numbers $$\frac{1- \sqrt{1 - 4 X}}{2X} = \sum_{n\ge 0} C_n X^n$$ This rescaling imposes no extra conditions if $p$ is odd, and explains the case $p=2$. In general one can take<br> $$\frac{1 - ( 1- r^2 X)^{\frac{1}{r}}}{r X}= \sum C_{r,n} X^n$$ again a series with all integer coefficients if $r$ is an integer (<a href="http://arxiv.org/abs/1410.5880" rel="nofollow">generalized Catalan numbers</a>). This would take care of $r$th roots. </p>
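The same lifting can be done computationally (an editor's sketch): a Hensel/Newton iteration plays the role of the binomial series, improving a square root of $a$ modulo growing powers of an odd prime $p$.

```python
# Hensel lifting sketch: assumes p is odd and x0 is a simple root (p does not
# divide 2*x0), so 2*x0 is invertible at every modulus.
def hensel_sqrt(x0, a, p, k):
    """Lift a solution x0 of x^2 = a (mod p) to a solution mod p^k."""
    x, m = x0, 1
    while m < k:
        m = min(2 * m, k)                # precision doubles each Newton step
        mod = p ** m
        inv = pow(2 * x, -1, mod)        # modular inverse of 2x
        x = (x - (x * x - a) * inv) % mod
    return x

# 2 is a quadratic residue mod 7 (3^2 = 9 = 2 mod 7); lift to mod 7^5
r = hensel_sqrt(3, 2, 7, 5)
print(r, r * r % 7**5)
```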
1,810,055
<p>I want to find the Galois groups of the following polynomials over $\mathbb{Q}$. The specific problems I am having are finding the roots of the first polynomial and dealing with a degree $6$ polynomial.</p> <blockquote> <p>$X^3-3X+1$</p> </blockquote> <p>Do we first need to find its roots, then construct a splitting field $L$, then calculate $Gal(L/\mathbb{Q})$?</p> <p>I am having difficulties finding roots. If we let the reduced cubic be: $U^2+qU-\frac{p^3}{27}=U^2+U+\frac{27}{27}=U^2+U+1$. The roots of this are: $U=\frac{-1 \pm \sqrt{-3}}{2}$</p> <p>How do we use this to find the roots of the cubic?</p> <p>Once I can decompose the polynomial, I know that the Galois group will be $\{e\}, Z_2, A_3$ or $S_3$, depending on the degree of the splitting field and how many linear factors there are.</p> <blockquote> <p>$(X^3-2)(X^2+3)$</p> </blockquote> <p>I have never encountered finding the Galois group of a degree $6$ polynomial, but I am guessing that since it is factorised this eases things somewhat.</p> <p>Let $f(X)=(X^3-2)(X^2+3)=(X-\sqrt[3]{2})(X^2+aX+b)(X-\sqrt{-3})(X+\sqrt{-3})$</p> <p>I am not sure how to find the coefficients of $X^2+aX+b$. Is it irreducible? </p> <p>Let $L$ be the splitting field of $f(X)$ over $\mathbb{Q}$; then (assuming $X^2+aX+b$ is irreducible) $L=\mathbb{Q}(\sqrt[3]{2}, \sqrt{-3})$. </p> <p>If this is true, what would $[\mathbb{Q}(\sqrt[3]{2}, \sqrt{-3}) : \mathbb{Q}]$ be?</p> <p>I think this degree would be the order of the Galois group, so it could narrow down to one of $S_3, S_4, A_3, A_4...$ etc.</p>
msm
340,064
<p>For the second one, you should think about those polynomials separately. In that case, it's pretty easy to see that the splitting field of $(x^3 - 2)(x^2+3)$ is $\mathbb{Q}(\sqrt[3]{2} , \zeta_3, \sqrt{-3})$.</p> <p>Since $\zeta_3 = \frac{-1 \pm \sqrt{-3}}{2}$, this is the same as $\mathbb{Q}(\sqrt[3]{2} , \sqrt{-3})$. Since $[\mathbb{Q}(\sqrt[3]{2}): \mathbb{Q}] = 3$ and $[\mathbb{Q}(\sqrt{-3}) : \mathbb{Q}] = 2$, and $(2,3) =1$, we get $[\mathbb{Q}(\sqrt[3]{2} , \sqrt{-3}) :\mathbb{Q}] = 6$. </p> <p>So, $Gal(\mathbb{Q}(\sqrt[3]{2} , \sqrt{-3})/\mathbb{Q})$ is either $\mathbb{Z}/6\mathbb{Z}$ or $S_3$. But, since the subfield $\mathbb{Q}(\sqrt[3]{2})$ is not Galois over $\mathbb{Q}$, the Galois group cannot be abelian, and so it is $S_3$. </p>
188,150
<p>I'm reading the paper <em>Loop groups and twisted K-theory I</em> by Freed, Hopkins, and Teleman. They give some examples of computing (twisted) K groups using the Mayer-Vietoris sequence. </p> <p>I'm a bit confused with some of their computations, for instance $S^3$ (their example 1.4 in the first section). They take subsets $U_+ = S^3 \backslash(0,0,0,-1)$ and $U_- = S^3 \backslash(0,0,0,1)$ and then they say that $K^0(U_\pm) \simeq \mathbb Z$. I don't understand where this comes from since $U_\pm$ are non-compact so I believe $K^0(U_\pm)$ should be the reduced $K^0$ of a 1-point compactification. The compactifications of these spaces are $S^3$ so shouldn't $K^0(U_\pm) = \tilde {K^0}(S^3) = 0$? However, it seems like if you replace $U_\pm$ by shrinking it a bit to make it closed, these computations work out. </p> <p>So I'm wondering what the exact statement of Mayer-Vietoris is for $K$-theory (specifically, what type of covers you can take) or if Freed, Hopkins, and Teleman are using a different definition of $K^0$ for which $K^0(U_\pm)$ is indeed $\mathbb Z$. Any references would also be appreciated since I couldn't find much in the literature about a Mayer-Vietoris sequence for $K$-theory.</p>
Neil Strickland
10,366
<p>Freed, Hopkins and Teleman will be using the homotopical definition $K^0(X)=[X,\mathbb{Z}\times BU]$. For many spaces $X$ this is the same as the Grothendieck group of vector bundles on $X$; in particular this holds if $X$ is compact Hausdorff, or if it is a finite-dimensional CW complex. This definition is visibly homotopy invariant, and the spaces $U_{\pm}$ are contractible, so $K^0(U_{\pm})=K^0(\text{point})=\mathbb{Z}$.</p> <p>For a noncompact manifold $M$ we can also consider $\widetilde{K}^0(M\cup\{\infty\})$. This is the Grothendieck group of vector bundles on $M$ with a specified trivialisation outside of a compact set. This is also interesting, but different.</p> <p>With the homotopical version of $K$-theory, you get a Mayer-Vietoris sequence for $K^0(A\cup B)$ whenever the map from the homotopy pushout of $A\xleftarrow{}A\cap B\to B$ to $A\cup B$ is a homotopy equivalence. Provided that all the relevant spaces have the homotopy type of CW complexes, this essentially means that you get a Mayer-Vietoris sequence in $K$-theory iff you get one in ordinary singular cohomology. In particular, this will certainly work if $A$ and $B$ are open subsets of a manifold. </p>
2,372,698
<p>If a function $f(x)$ is continuous on the closed interval $\left[ a,b \right]$, then it is bounded on this interval. The proof I have for this theorem is: </p> <p>Since $f$ is continuous on $\left[ a,b \right]$, pick an arbitrary point $c$ on this interval.</p> <p>$\implies \forall \epsilon &gt; 0, \exists \delta(\epsilon,c) &gt; 0$ s.t. $\left| x-c \right| &lt; \delta \implies \left| f(x)-f(c) \right| &lt; \epsilon$<br> $-\epsilon &lt; f(x)-f(c) &lt; \epsilon$<br> $f(c)-\epsilon &lt; f(x) &lt; f(c)+\epsilon$ </p> <p>Take $M = \left| f(c) \right| \in \mathbb{R}^{+}$<br> $\forall M \in \mathbb{R}$, $f(x) &lt; M$ </p> <p>$\therefore f(x)$ is bounded</p> <p>Is there something missing from this proof?<br> Because I couldn't really understand it.</p>
Sahiba Arora
266,110
<p>Let $f(x)=x$ on $[0,1]$ and $c=\frac12$. Then $M=|f(c)|=\frac12$. But $|f(x)| \not &lt;M$ for all $x \in [0,1]$ (for instance, $|f(1)|&gt;M$)</p>
3,662,466
<p>Given a sequence with the terms </p> <p><span class="math-container">$$ a_{n}=\left\{\begin{array}{ll} n, &amp; \text { if } n \text { even } \\ \frac{1}{n}, &amp; \text { if } n \text { odd } \end{array}\right. $$</span></p> <p>Prove <span class="math-container">$\limsup _{n \rightarrow \infty} a_{n} = \infty$</span>.</p> <p>I would like to have some help. Intuitively this makes sense: the supremum of the set of terms <span class="math-container">$a_n$</span> is <span class="math-container">$\infty$</span>, because there are infinitely many even natural numbers. I can also explain this a bit more thoroughly verbally. But is it possible to make an <span class="math-container">$\epsilon$</span>-proof? I have a good grasp of <span class="math-container">$\epsilon$</span>-proofs when it comes to convergence of sequences, but I do not have any experience with the limit superior.</p> <p>Please help,</p> <p>Kind regards</p>
QuantumSpace
661,543
<p>Recall that <span class="math-container">$\limsup a_n$</span> is the largest limit of a subsequence. But the even-indexed subsequence <span class="math-container">$(a_{2m})_{m=1}^\infty=(2m)_{m=1}^\infty$</span> tends to <span class="math-container">$+\infty$</span>. Thus we must have <span class="math-container">$\limsup_n a_n=+\infty$</span>.</p> <p>Alternatively, </p> <p><span class="math-container">$$\limsup_n a_n = \inf_{n\ge 1} \sup_{k\ge n} a_k =\inf_{n\ge 1}\,(+\infty)=+\infty,$$</span></p> <p>since every tail contains arbitrarily large even terms, so each tail supremum is <span class="math-container">$+\infty$</span>.</p> <p>Yet another alternative,</p> <p><span class="math-container">$$\limsup_n a_n = \lim_{n\to \infty} \sup\{a_k:k\geq n\}=\lim_{n\to \infty} +\infty=+\infty$$</span></p> <p>Exercise using the same reasoning: Show <span class="math-container">$\liminf_n a_n=0$</span>.</p>
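<p>A tiny Python illustration (helper names are ours) of why every tail supremum is infinite: for any bound <span class="math-container">$B$</span> there is an even index beyond it whose term exceeds <span class="math-container">$B$</span>:</p>

```python
def a(n):
    """The sequence: a_n = n for even n, 1/n for odd n."""
    return n if n % 2 == 0 else 1.0 / n

def exceeds(B):
    """Exhibit an even index n with a(n) > B, so sup{a_k : k >= m} > B for every m <= n."""
    n = 2 * (int(B) + 1)  # an even index with a(n) = n > B
    return a(n) > B
```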
300,753
<p>Revision in response to early comments. Users of set theory need an <em>implementation</em> (in case "model" means something different) of the axioms. I would expect something like this:</p> <blockquote> <p>An <em>implementation</em> consists of a "collection-of-elements" $X$, and a relation (logical pairing) $E:X\times X\to \{0,1\}$. A logical function $h:X\to\{0,1\}$ is a <em>set</em> if it is of the form $x\mapsto E[x,a]$ for some $a\in X$. Sets are required to satisfy the following axioms: ....</p> </blockquote> <p>The background "collection-of-elements" needs some properties to even get started. For instance "of the form $x\mapsto E[x,a]$" needs first-order quantification. Mathematical standards of precision seem to require <em>some</em> discussion, but so far I haven't seen anything like this. The first version of this question got answers like $X$ is "the domain of discourse" (philosophy??), "everything" (naive set theory?) "a set" (circular), and "type theory" (a postponement, not a solution). Is this a missing definition? Taking it seriously seems to give a rather fruitful perspective. </p>
Timothy Chow
3,106
<p>I'm not sure I understand your question, since at first it sounds like you're thinking of $\in$ as a multi-valued function that sends a set $A$ to an element $x$ of $A$, but then I would expect you to be asking about the <i>range</i> of such a function rather than its <i>domain</i>. I'll assume that you are, loosely speaking, asking about where all those elements of sets come from.</p> <p>Mathematics as commonly practiced is <i>atomic</i> in the following sense: When we define something, such as a group, we typically think of the ground set of the group as comprising "things" or "atoms." The identity of these atoms is left vague, since after all, we want to allow them to be anything&mdash;numbers, matrices, functions, formal sums, etc. All that matters is that they have some kind of tangible identity.</p> <p>In particular, most of us have a vague feeling that these atoms are <i>distinct from sets</i>. Of course it is possible for an atom to itself be a set, since we can form sets of sets, but intuitively, most of us feel that there is a distinction between atoms and sets. Therefore we may come to axiomatic set theory with a tacit expectation that it will formalize atoms as well as sets.</p> <p>Though this can be done, the most common axiomatic set theories <i>are not atomic</i>. In particular, in ZFC, there are no atoms that are distinct from sets. <b>Everything is a set.</b> If you need some atoms, then you have to build them out of sets, starting with the empty set and working your way up. This is a little unintuitive and takes some getting used to. But once you get used to it, it has technical advantages. Most notably, you don't have to fuss with two different "kinds" of things (atoms and sets); you only ever have to deal with one kind of thing. 
Experience shows that everything you would want to do with atoms can also be done with sets standing in for the atoms.</p> <p>I hope this explains why the axioms about atoms that you seem to be expecting to see in ZFC are absent.</p>
2,763,974
<blockquote> <p>Find $\displaystyle \lim_{(x,y)\to(0,0)} x^2\sin(\frac{1}{xy}) $ if it exists, and find $\displaystyle\lim_{x\to 0}(\lim_{y\to 0} x^2\sin(\frac{1}{xy}) ), \displaystyle\lim_{y\to 0}(\lim_{x\to 0} x^2\sin(\frac{1}{xy}) )$ if they exist.</p> </blockquote> <p>Hey everyone. I've tried using the squeeze theorem and found $0 \le |x^2\sin(\frac{1}{xy})| \le |x^2|\cdot 1 \xrightarrow{x\to0} 0 $ and so the "double" limit exists and equals zero. Now, I know $\lim_{x\to 0}\sin(\frac{1}{ax})$ diverges, so both $\lim_{y\to 0}(\lim_{x\to 0} x^2\sin(\frac{1}{xy}) )$ and $\lim_{x\to 0}(\lim_{y\to 0} x^2\sin(\frac{1}{xy}) )$ do not exist(?) </p> <p>I don't think I understand multi-variable limits, I would love your help on this basic one so I can understand better. Thanks in advance :) </p>
The Phenotype
514,183
<p>$0 \le |x^2\sin(\frac{1}{xy})| \le |x^2| \xrightarrow{x\to0} 0$ means indeed that for any trajectory of $(x,y)$ going to $(0,0)$ we have that $x^2\sin(\frac{1}{xy})$ approaches $0$.</p> <hr> <p>Note that just by taking any $y\neq 0$ fixed we cannot just conclude that it would not approach $0$, since eventually you also want $y$ to approach $0$. Indeed we can see that $y\neq 0$ fixed is a trajectory that does not intersect or approach $(0,0)$.</p>
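<p>A quick numerical probe (illustrative code, not part of the proof) of the squeeze bound along several trajectories into the origin:</p>

```python
import math

def f(x, y):
    return x * x * math.sin(1.0 / (x * y))

# three sample trajectories approaching (0, 0)
paths = [lambda t: (t, t), lambda t: (t, t ** 2), lambda t: (t ** 2, t)]
final_values = []
for path in paths:
    for t in (0.1, 0.01, 0.001):
        x, y = path(t)
        # the squeeze bound |f(x, y)| <= x^2 holds at every sample
        assert abs(f(x, y)) <= x * x * (1 + 1e-12)
    final_values.append(abs(f(*path(1e-4))))
```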
285,591
<p>If $$\lim_{x\to\infty} f(x) = \infty$$ what can be said about the rate at which $$\int_1^\infty f(x) \,dx$$ approaches infinity if $f(x) \geq 1$ for all values of $x$?</p>
Benjamin Dickman
37,122
<p>Is this all the information given? If so, not much.</p> <p>$f(x)$ could be $|x| + 1$ or $e^x + 1$. These are two pretty different functions, in terms of the rate at which their integrals grow. You would need more information to say much of anything at all.</p>
285,591
<p>If $$\lim_{x\to\infty} f(x) = \infty$$ what can be said about the rate at which $$\int_1^\infty f(x) \,dx$$ approaches infinity if $f(x) \geq 1$ for all values of $x$?</p>
Eric Naslund
6,075
<p>All that can be said is that $$\frac{\int_1^x f(y)dy}{x}\rightarrow \infty.$$ No better lower bound can be given, and nothing can be said about the rate at which this goes to infinity since nothing is given about $f$. Indeed, you can construct $f$ so that this ratio goes to infinity as slowly, or as quickly as desired.</p>
506,397
<p>I would like to know the condition for a random variable <span class="math-container">$Y$</span> in order to make <span class="math-container">$\mathbb{E}[\max\{X_1+Y,X_2\}] &gt; \mathbb{E}[\max\{X_1, X_2\}]$</span>, where <span class="math-container">$X_1$</span> and <span class="math-container">$X_2$</span> are iid.</p> <p>Any help would be appreciated.</p> <h3>Comment by OP incorporated by dfeuer</h3> <p>I tried to use the upper and lower bounds of the highest order statistics for inid and iid random variables to solve the problem, but they are not tight. The brute force might be applying the convolution on the sum term, then the cdf of the highest order statistic for inid random variables.</p>
karakfa
14,900
<p>Start with $\max(x,y) = \frac12(x+y+|x-y|)$. Then</p> <p>$\mathbb{E}[\max\{X_1+Y,X_2\}]=\frac12\left(\mathbb{E}[X_1+Y+X_2] + \mathbb{E}\bigl[|X_1+Y-X_2|\bigr]\right) $</p> <p>$\mathbb{E}[\max\{X_1,X_2\}]=\frac12\left(\mathbb{E}[X_1+X_2] + \mathbb{E}\bigl[|X_1-X_2|\bigr]\right) $</p> <p>Your condition will be satisfied exactly when $\mathbb{E}[Y] + \mathbb{E}\bigl[|X_1+Y-X_2|\bigr] \gt \mathbb{E}\bigl[|X_1-X_2|\bigr]$</p>
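<p>The identity and the resulting condition are easy to check by Monte Carlo (the distributions below are an arbitrary illustration, not implied by the question):</p>

```python
import random

def max_via_abs(x, y):
    # max(x, y) = (x + y + |x - y|) / 2
    return 0.5 * (x + y + abs(x - y))

random.seed(0)
samples = [(random.gauss(0, 1), random.gauss(0, 1), random.gauss(0.5, 1))
           for _ in range(10_000)]
n = len(samples)

lhs = sum(max(x1 + y, x2) for x1, x2, y in samples) / n
rhs = sum(max(x1, x2) for x1, x2, y in samples) / n
# the derived condition: E[Y] + E|X1 + Y - X2| - E|X1 - X2|
cond = sum(y + abs(x1 + y - x2) - abs(x1 - x2) for x1, x2, y in samples) / n
```

<p>Per sample, <code>max(x1 + y, x2) - max(x1, x2)</code> equals half of the integrand of <code>cond</code>, so the two estimates agree up to rounding, and <code>cond &gt; 0</code> goes together with <code>lhs &gt; rhs</code>.</p>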
985,103
<p>The set $\{u_{1},u_{2},\cdots,u_{6}\}$ is a basis for a subspace $\mathcal{M}$ of $\mathbb{F}^{m}$ if and only if $\{u_{1}+u_{2},u_{2}+u_{3},\cdots,u_{6}+u_{1}\}$ is also a basis for $\mathcal{M}$. So far I have that the two bases are just rearranged sums of each other, but I don't know where else to go with it.</p>
Kim Jong Un
136,641
<p>You don't have enough info to infer that $\{b_n\}$ converges.</p> <p>Let $M&gt;0$ be an upperbound of $\{b_n\}$. For a given $\epsilon&gt;0$, there is $N&gt;0$ such that for $n\geq N$, we have $|a_n/b_n|&lt;\frac{\epsilon}{M}$. Then, for all such $n$, $$ \frac{\epsilon}{M}&gt;|a_n/b_n|=a_n/b_n\geq a_n/M\implies0&lt;a_n&lt;\epsilon\implies|a_n|&lt;\epsilon. $$</p>
3,572,967
<p>Can I ask how to solve this type of equation:</p> <blockquote> <p><span class="math-container">$$\log_{yz} \left(\frac{x^2+4}{4\sqrt{yz}}\right)+\log_{zx}\left(\frac{y^2+4}{4\sqrt{zx}}\right)+\log_{xy}\left(\frac{z^2+4}{4\sqrt{xy}}\right)=0$$</span></p> </blockquote> <p>It is given that <span class="math-container">$x,y,z&gt;1$</span>. Which properties of the logarithm have to be used?</p> <p>I know that <span class="math-container">$\log_a b=\log b/\log a$</span>, so <span class="math-container">$\log_{yz}((x^2+4)/(4\sqrt{yz}))=\log ((x^2+4)/(4\sqrt{yz}))/\log (yz)$</span></p> <p>and <span class="math-container">$\log_a (b/c)=\log_a b-\log_a c$</span>, so <span class="math-container">$\log_{yz}((x^2+4)/(4\sqrt{yz}))=\log_{yz} (x^2+4)-\log_{yz} (4\sqrt{yz})$</span></p> <p>And how to solve this type of equation in general?</p>
Michael Rozenberg
190,319
<p>By AM-GM <span class="math-container">$$0=\sum_{cyc}\log_{yz}\frac{x^2+4}{4\sqrt{yz}}\geq\sum_{cyc}\log_{yz}\frac{2\sqrt{x^2\cdot4}}{4\sqrt{yz}}=\sum_{cyc}\left(\log_{yz}x-\frac{1}{2}\right)=$$</span> <span class="math-container">$$=\sum_{cyc}\frac{2\ln{x}-\ln{y}-\ln{z}}{2(\ln{y}+\ln{z})}=\frac{1}{2}\sum_{cyc}\frac{\ln{x}-\ln{y}-(\ln{z}-\ln{x})}{\ln{y}+\ln{z}}=$$</span> <span class="math-container">$$=\frac{1}{2}\sum_{cyc}(\ln{x}-\ln{y})\left(\frac{1}{\ln{y}+\ln{z}}-\frac{1}{\ln{z}+\ln{x}}\right)=$$</span> <span class="math-container">$$=\sum_{cyc}\frac{(\ln{x}-\ln{y})^2}{2(\ln{y}+\ln{z})(\ln{x}+\ln{z})}\geq0.$$</span> The equality occurs only for <span class="math-container">$x=y=z=2$</span> and <span class="math-container">$\ln{x}=\ln{y}=\ln{z},$</span> </p> <p>which gives that <span class="math-container">$\{(2,2,2)\}$</span> is an answer.</p>
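<p>A numeric spot-check of the equality case, and of strict positivity away from it (helper names are ours; the bases <span class="math-container">$yz, zx, xy$</span> exceed <span class="math-container">$1$</span> since <span class="math-container">$x,y,z&gt;1$</span>):</p>

```python
import math

def term(x, y, z):
    """log base yz of (x^2 + 4) / (4 * sqrt(yz))."""
    base = y * z
    return math.log((x * x + 4) / (4 * math.sqrt(base)), base)

def lhs(x, y, z):
    """The cyclic sum on the left-hand side of the equation."""
    return term(x, y, z) + term(y, z, x) + term(z, x, y)
```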
1,814,216
<p>I was trying to show that $\sin(x)$ is non-zero for integers $x$ other than zero and I thought that this result might emerge as a corollary if I managed to show that the result in question is true. </p> <p>I think it's possible to demonstrate this by looking at the power series expansion of $\sin(x)$ and assuming that we don't know anything about the existence of $\pi$. </p> <p>All of the answers below insist that the proposition '$\exists p,q \in \mathbb{Z}, \sin(p)=\sin(q)$' -where $\sin(x)$ is the power series representation-is undecidable without using the properties of $\pi$. If so this is a truly wonderful conjecture and I would like to be provided with a proof. Until then, I insist that methods for analyzing infinite series from analysis should suffice to show that the proposition is false. </p> <hr> <p><strong>Note:</strong> In a previous version of this post the question said "bijective" instead of "injective". Some of the answers below have answered the first version of this post.</p>
Brian Fitzpatrick
56,960
<p>Recall the following facts.</p> <p><strong>Fact 1.</strong> $\sin t=0$ if and only if $t=k\pi$ for some $k\in\Bbb Z$</p> <p><strong>Fact 2.</strong> $\cos t=0$ if and only if $t=k\pi+\frac{\pi}{2}$ for some $k\in\Bbb Z$</p> <p>Also, recall the identity $$ \sin u-\sin v=2\,\cos\left(\frac{u+v}{2}\right)\sin\left(\frac{u-v}{2}\right) $$</p> <p>Thus $\sin u=\sin v$ if and only if $\frac{u+v}{2}=k\pi+\frac{\pi}{2}$ or $\frac{u-v}{2}=k\pi$ for some $k\in\Bbb Z$, that is, if and only if $u+v=(2k+1)\pi$ or $u-v=2k\pi$ for some $k\in\Bbb Z$. But the values \begin{align*} (2k+1)\pi &amp;&amp; 2k\pi \end{align*} are <a href="https://math.stackexchange.com/questions/45104/prove-that-the-product-of-a-rational-and-irrational-number-is-irrational">necessarily irrational</a> whenever the integer multiplier is nonzero, which holds for every $k$ in the first case and for every $k\neq0$ in the second. Hence for $u,v\in\Bbb Z$ with $u\neq v$, the integer $u+v$ can never equal $(2k+1)\pi$, and the nonzero integer $u-v$ can never equal $2k\pi$, so $\sin u\neq\sin v$.</p>
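<p>Since the statement concerns integer inputs, a finite brute-force check is also easy (a spot-check, not a replacement for the proof, which covers all integers):</p>

```python
import math

# sin takes pairwise distinct values at the integers -200..200
N = 200
values = [math.sin(n) for n in range(-N, N + 1)]
min_gap = min(abs(u - v)
              for i, u in enumerate(values)
              for v in values[i + 1:])
```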
3,527,004
<p>As stated in the title, I want <span class="math-container">$f(x)=\frac{1}{x^2}$</span> to be expanded as a series with powers of <span class="math-container">$(x+2)$</span>. </p> <p>Let <span class="math-container">$u=x+2$</span>. Then <span class="math-container">$f(x)=\frac{1}{x^2}=\frac{1}{(u-2)^2}$</span></p> <p>Note that <span class="math-container">$$\int \frac{1}{(u-2)^2}du=\int (u-2)^{-2}du=-\frac{1}{u-2} + C$$</span></p> <p>Therefore, <span class="math-container">$\frac{d}{du} (-\frac{1}{u-2})= \frac{1}{x^2}$</span> and</p> <p><span class="math-container">$$\frac{d}{du} (-\frac{1}{u-2})= \frac{d}{du} (-\frac{1}{-2(1-\frac{u}{2})})=\frac{d}{du}(\frac{1}{2} \frac{1}{1-\frac{u}{2}})=\frac{d}{du} \Bigg( \frac{1}{2} \sum_{n=0}^\infty \bigg(\frac{u}{2}\bigg)^n\Bigg)$$</span></p> <p><span class="math-container">$$= \frac{d}{du} \Bigg(\sum_{n=0}^\infty \frac{u^n}{2^{n+1}}\Bigg)= \frac{d}{dx} \Bigg(\sum_{n=0}^\infty \frac{(x+2)^n}{2^{n+1}}\Bigg)= \sum_{n=0}^\infty \frac{d}{dx} \bigg(\frac{(x+2)^n}{2^{n+1}}\bigg)=$$</span></p> <p><span class="math-container">$$\sum_{n=0}^\infty \frac{n}{2^{n+1}} (x+2)^{n-1}$$</span></p> <p>From this we can conclude that </p> <p><span class="math-container">$$f(x)=\frac{1}{x^2}=\sum_{n=0}^\infty \frac{n}{2^{n+1}} (x+2)^{n-1}$$</span></p> <p>Is this solution correct?</p>
José Carlos Santos
446,262
<p>No, it is not, since what you got is <em>not</em> a power series (see what you get if you put <span class="math-container">$n=0$</span>).</p> <p>Use the fact that<span class="math-container">\begin{align}-\frac1x&amp;=\frac1{2-(x+2)}\\&amp;=\frac12\cdot\frac1{1-\frac{x+2}2}\\&amp;=\sum_{n=0}^\infty\frac{(x+2)^n}{2^{n+1}},\end{align}</span>and, differentiating term by term, you will get that the answer is<span class="math-container">$$\frac1{x^2}=\frac{\mathrm d}{\mathrm dx}\left(-\frac1x\right)=\sum_{n=0}^\infty\frac{(n+1)}{2^{n+2}}(x+2)^n.$$</span></p>
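<p>The series at the end of the answer is easy to test numerically near the expansion point (illustrative code; the series converges for <span class="math-container">$|x+2|&lt;2$</span>):</p>

```python
def series(x, terms=60):
    """Partial sum of sum_{n>=0} (n+1) / 2**(n+2) * (x+2)**n."""
    u = x + 2.0
    return sum((n + 1) / 2 ** (n + 2) * u ** n for n in range(terms))
```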
1,694,159
<p>I am prepping for my mid semester exam, and came across the following question:</p> <blockquote> <p>Find the closed form for the sum $\sum_{j=k}^n (-1)^{j+k}\binom{n}{j}\binom{j}{k}$, using the assumption that $k = 0, 1,...n$ and $n$ can be any natural number.</p> </blockquote> <p>So what I have done is to note the fact that $$\binom{n}{j}\binom{j}{k}= \frac{n!}{j!(n-j)!}\frac{j!}{k!(j-k)!}=\frac{n!}{(n-j)!\ k!\ (j-k)!}$$</p> <p>Then we can write the summation as $$\sum_{j=k}^n {(-1)^{j+k}\binom{n}{j}\binom{j}{k}}= \sum_{j=k}^n (-1)^{j+k} \frac{n!}{(n-j)!\ k!\ (j-k)!} = \frac{n!}{k!} \sum_{j=k}^n (-1)^{j+k} \frac{1}{(n-j)!\ (j-k)!} $$</p> <p>I tried to let $m=j-k$: $$\frac{n!}{k!} \sum_{m=0}^{n-k} (-1)^{m+2k} \frac{1}{(n-m-k)!\ m!}=\frac{n!}{k!} \sum_{m=0}^{n-k} (-1)^{m} \frac{1}{(n-m-k)!\ m!}$$</p> <p>But I am not sure how to proceed next. Any help would be highly appreciated!</p>
Brian M. Scott
12,042
<p>The computational argument is easier and quicker, but for the record there is also a combinatorial argument.</p> <p>As usual let $[n]=\{1,\ldots,n\}$; we’ll count the $k$-element subsets of $[n]$ that contain every element of $[n]$ in two ways.</p> <p>First, it’s clear that there is such a set if and only if $k=n$, and in that case there is exactly one, $[n]$ itself.</p> <p>On the other hand, for each $i\in[n]$ let $\mathscr{A}_i$ be the family of $k$-element subsets of $[n]$ that do not contain $i$. If $I$ is any non-empty subset of $[n]$, there are clearly $\binom{n-|I|}k$ $k$-element subsets of $[n]\setminus I$, i.e.,</p> <p>$$\left|\bigcap_{i\in I}\mathscr{A}_i\right|=\binom{n-|I|}k\;.$$</p> <p>It follows from the <a href="https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle#Statement" rel="nofollow">inclusion-exclusion principle</a> that</p> <p>$$\begin{align*} \left|\bigcup_{i\in[n]}\mathscr{A}_i\right|&amp;=\sum_{\varnothing\ne I\subseteq[n]}(-1)^{|I|+1}\left|\bigcap_{i\in I}\mathscr{A}_i\right|\\ &amp;=\sum_{\varnothing\ne I\subseteq[n]}(-1)^{|I|+1}\binom{n-|I|}k\\ &amp;=\sum_{j=1}^n(-1)^{j+1}\binom{n}j\binom{n-j}k\;, \end{align*}$$</p> <p>since $[n]$ has $\binom{n}j$ subsets $I$ of cardinality $j$. Now $\bigcup_{i\in[n]}\mathscr{A}_i$ is the collection of $k$-element subsets of $[n]$ that do <em>not</em> contain every element of $[n]$, so $[n]$ has </p> <p>$$\begin{align*} \binom{n}k-\left|\bigcup_{i\in[n]}\mathscr{A}_i\right|&amp;=\binom{n}k-\sum_{j=1}^n(-1)^{j+1}\binom{n}j\binom{n-j}k\\ &amp;=\binom{n}k+\sum_{j=1}^n(-1)^j\binom{n}j\binom{n-j}k\\ &amp;=\sum_{j=0}^n(-1)^j\binom{n}j\binom{n-j}k\\ &amp;=\sum_{j=0}^n(-1)^j\binom{n}{n-j}\binom{n-j}k\\ &amp;\overset{(1)}=\sum_{j=0}^n(-1)^{n-j}\binom{n}j\binom{j}k\\ &amp;\overset{(2)}=(-1)^{n+k}\sum_{j=0}^n(-1)^{j+k}\binom{n}j\binom{j}k\\ &amp;=(-1)^{n+k}\sum_{j=k}^n(-1)^{j+k}\binom{n}j\binom{j}k\;. \end{align*}$$</p> <p>$k$-element subsets that <em>do</em> contain every element of $[n]$. 
(In step $(1)$ I replaced $n-j$ with $j$ everywhere, in step $(2)$ I used the fact that $(-1)^{-r}=(-1)^r$ for any integer $r$, and the $k$ terms that I threw away in the last step were all $0$ anyway.) Thus,</p> <p>$$(-1)^{n+k}\sum_{j=k}^n(-1)^{j+k}\binom{n}j\binom{j}k=\begin{cases} 1,&amp;\text{if }k=n\\ 0,&amp;\text{otherwise}\;, \end{cases}$$</p> <p>and since $(-1)^{n+k}=1$ when $k=n$, we have</p> <p>$$\sum_{j=k}^n(-1)^{j+k}\binom{n}j\binom{j}k=\begin{cases} 1,&amp;\text{if }k=n\\ 0,&amp;\text{otherwise}\;. \end{cases}$$</p>
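<p>The identity is also easy to confirm by brute force for small parameters (a check, not a proof):</p>

```python
from math import comb

def alternating_sum(n, k):
    """sum_{j=k}^{n} (-1)^(j+k) * C(n, j) * C(j, k)."""
    return sum((-1) ** (j + k) * comb(n, j) * comb(j, k)
               for j in range(k, n + 1))
```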
281,294
<p>Let $G$ be a finite abelian group.</p> <p>Is there a field $K$, and an elliptic curve $E$ over $K$ such that $E(K)_{tor} \cong G$?</p>
Daniel Litt
6,950
<p>Even better, there exists an elliptic curve $E$ over a number field $K$, such that for any group of the form $\mathbb{Z}/n\mathbb{Z}\times \mathbb{Z}/mn\mathbb{Z}$, there is a finite extension $K'/K$ such that $$E(K')_{\text{tors}}\simeq\mathbb{Z}/n\mathbb{Z}\times \mathbb{Z}/mn\mathbb{Z}.$$ Indeed, take any $(E,K)$ such that the natural Galois representation on the total Tate module of $E$, $$\rho: \text{Gal}(\bar K/K)\to GL_2(\hat{\mathbb{Z}})$$ is surjective (such curves exist by e.g. <a href="http://www.math.cornell.edu/~zywina/papers/MaximalGalois.pdf" rel="noreferrer">this paper of Zywina</a>). Now choose a subgroup $$\mathbb{Z}/n\mathbb{Z}\times \mathbb{Z}/mn\mathbb{Z}\subset E(\bar K)_{\text{tors}}$$ and let $K'$ be the field fixed by its stabilizer under $\rho$. By the surjectivity of $\rho$, the desired subgroup is all that is fixed, and we're done.</p>
3,682,900
<p>I have a hard time solving this one. I'm sure there is trick that should be used but if so, I can't spot it.</p> <p><span class="math-container">$$(3\cdot4^{-x+2}-48)\cdot(2^x-16)\leqslant0$$</span></p> <p>Here is what I get but I'm anything but confident about this:</p> <p><span class="math-container">$$3\cdot(2^{-2x+4}-16)\cdot(2^x-16)\leqslant0$$</span> <span class="math-container">$$(2^{-2x+4}-2^4)\cdot(2^x-2^4)\leqslant0$$</span> <span class="math-container">$$2^{-x+4}-2^{x+4}-2^{-2x+8}+2^8\leqslant0$$</span> <span class="math-container">$$2^{-x+4}+2^8\leqslant2^{x+4}+2^{-2x+8}$$</span></p> <p>So far, I'm already not 100% sure but then, I'm not sure at all:</p> <p><span class="math-container">$$(-x+4)\cdot \ln(2)+8\cdot \ln(2)\leqslant(x+4)\cdot \ln(2)+(-2x+8)\cdot \ln(2)$$</span></p> <p>This is nonsense, can someone correct me please? Thanks.</p>
hamam_Abdallah
369,188
<p>If we put <span class="math-container">$t=2^x&gt;0$</span>, the inequality becomes</p> <p><span class="math-container">$$48\left(\frac{1}{t^2}-1\right)(t-16)\le 0$$</span></p> <p>which, since <span class="math-container">$t^2&gt;0$</span>, is equivalent to <span class="math-container">$$(1-t)(1+t)(t-16)\le 0$$</span></p> <p>and, since <span class="math-container">$1+t&gt;0$</span>, to <span class="math-container">$$(1-t)(t-16)\le 0$$</span></p> <p>thus <span class="math-container">$$2^x\le 1 \quad\text{or}\quad 2^x \ge2^4$$</span> so the answer is <span class="math-container">$$x\le 0 \quad\text{or}\quad x \ge 4$$</span></p>
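<p>The solution set is easy to double-check numerically at sample points (the boundary points <span class="math-container">$x=0$</span> and <span class="math-container">$x=4$</span> give equality):</p>

```python
def holds(x):
    """The original inequality (3*4^(2-x) - 48)(2^x - 16) <= 0."""
    return (3 * 4 ** (2 - x) - 48) * (2 ** x - 16) <= 0

inside = [-5.0, -1.0, 0.0, 4.0, 6.5, 10.0]  # points with x <= 0 or x >= 4
outside = [0.5, 1.0, 2.0, 3.0, 3.9]         # points with 0 < x < 4
```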
1,400,399
<p>Here is an indefinite integral that is similar to an integral I wanna propose for a contest. Apart from using CAS, do you see any very easy way of calculating it?</p> <p>$$\int \frac{1+2x +3 x^2}{\left(2+x+x^2+x^3\right) \sqrt{1+\sqrt{2+x+x^2+x^3}}} \, dx$$</p> <p><strong>EDIT:</strong> It's a part from the generalization </p> <p>$$\int \frac{1+2x +3 x^2+\cdots n x^{n-1}}{\left(2+x+x^2+\cdots+ x^n\right) \sqrt{1\pm\sqrt{2+x+x^2+\cdots +x^n}}} \, dx$$</p> <p><strong>Supplementary question:</strong> How would you calculate the following integral using the generalization above? Would you prefer another way?</p> <p>$$\int_0^{1/2} \frac{1}{\left(x^2-3 x+2\right)\sqrt{\sqrt{\frac{x-2}{x-1}}+1} } \, dx$$</p> <p>As a note, the generalization like the one you see above and slightly modified versions can be wisely used for calculating very hard integrals.</p>
lab bhattacharjee
33,337
<p>HINT: </p> <p>As $\dfrac{d(2+x+x^2+x^3)}{dx}=1+2x+3x^2,$</p> <p>let $\sqrt{1+\sqrt{2+x+x^2+x^3}}=u\implies 2+x+x^2+x^3=(u^2-1)^2$</p> <p>and $(1+2x+3x^2)\,dx=2(u^2-1)\cdot 2u\ du=4u(u^2-1)\,du$</p> <p>Now use <a href="http://mathworld.wolfram.com/PartialFractionDecomposition.html">Partial Fraction Decomposition</a>, </p> <p>$\dfrac1{(u^2-1)^2}=\dfrac A{u-1}+\dfrac B{(u-1)^2}+\dfrac C{u+1}+\dfrac D{(u+1)^2}$</p>
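<p>Since $2+x+x^2+x^3=(u^2-1)^2$, differentiating both sides gives $(1+2x+3x^2)\,dx=4u(u^2-1)\,du$; a finite-difference check of both relations in Python (illustrative names):</p>

```python
import math

def P(x):
    """P(x) = 2 + x + x^2 + x^3."""
    return 2 + x + x * x + x ** 3

def u_of(x):
    """The substitution u = sqrt(1 + sqrt(P(x)))."""
    return math.sqrt(1 + math.sqrt(P(x)))

x0 = 1.0
u0 = u_of(x0)
recovered = (u0 * u0 - 1) ** 2          # should recover P(x0)

h = 1e-6                                 # central finite difference for du/dx
du_dx = (u_of(x0 + h) - u_of(x0 - h)) / (2 * h)
lhs_diff = 1 + 2 * x0 + 3 * x0 * x0      # (1 + 2x + 3x^2) at x0
rhs_diff = 4 * u0 * (u0 * u0 - 1) * du_dx
```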
3,417,227
<p><strong>Problem</strong>:</p> <p>Let <span class="math-container">$f : \Bbb R \to \Bbb R$</span> be a differentiable function such that <span class="math-container">$f(0) = 0$</span>, compute </p> <p><span class="math-container">$$\lim_{r\to 0^{+}} \iint_{x^2 + y^2 \leq r^2} {3 \over 2\pi r^3} f(\sqrt{x^2+y^2})\, dx\, dy$$</span></p> <p><strong>Progress</strong>:</p> <p>I figure the solution to this is likely 0 because the domain is approaching 0, so intuitively there cannot be any resulting volume. I got as far as converting the problem into polar coordinates, which makes it look much cleaner, but I was unsure how to integrate the product <span class="math-container">$f(\rho)\rho \,d\rho$</span>. Integration by parts did not give a useful result. Could someone please demonstrate how to obtain that integral, or present another solution?</p>
Gabriel Palau
619,715
<p>Write the radius in the limit as <span class="math-container">$R$</span>; note that the <span class="math-container">$r^3$</span> in the density is this same fixed radius, not the polar integration variable. Passing to polar coordinates,</p> <p><span class="math-container">$$\iint_{x^2 + y^2 \leq R^2} \frac{3}{2\pi R^{3}} f(\sqrt{x^2+y^2})\, dx\,dy =\frac{3}{2\pi R^{3}}\int_{0}^{2\pi}\!\!\int_{0}^{R} f(\rho)\,\rho\, d\rho\, d\theta =\frac{3}{R^{3}}\int_{0}^{R} f(\rho)\,\rho\, d\rho.$$</span></p> <p>Since <span class="math-container">$f$</span> is differentiable at <span class="math-container">$0$</span> and <span class="math-container">$f(0)=0$</span>, we have <span class="math-container">$f(\rho)=f'(0)\rho+o(\rho)$</span> as <span class="math-container">$\rho\to 0$</span>, so</p> <p><span class="math-container">$$\frac{3}{R^{3}}\int_{0}^{R} f(\rho)\,\rho\, d\rho =\frac{3}{R^{3}}\left(f'(0)\,\frac{R^{3}}{3}+\int_{0}^{R} o(\rho)\,\rho\, d\rho\right) \xrightarrow[R\to 0^{+}]{} f'(0).$$</span></p> <p>(Alternatively, apply L'Hôpital's rule to <span class="math-container">$\frac{3\int_{0}^{R} f(\rho)\rho\, d\rho}{R^{3}}$</span>: the ratio of derivatives is <span class="math-container">$\frac{3f(R)R}{3R^{2}}=\frac{f(R)}{R}\to f'(0)$</span>.) So the limit equals <span class="math-container">$f'(0)$</span>; in particular, it is <span class="math-container">$0$</span> exactly when <span class="math-container">$f'(0)=0$</span>.</p>
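<p>A numerical sanity check of this averaged integral (midpoint rule; code is illustrative): with <span class="math-container">$f=\sin$</span>, for which <span class="math-container">$f(0)=0$</span> and <span class="math-container">$f'(0)=1$</span>, the estimates of <span class="math-container">$\frac{3}{R^3}\int_0^R f(\rho)\rho\,d\rho$</span> settle near <span class="math-container">$1$</span> as <span class="math-container">$R\to 0^+$</span>:</p>

```python
import math

def averaged(f, r, steps=2000):
    """Midpoint-rule estimate of (3 / r^3) * integral_0^r f(rho) * rho d(rho)."""
    h = r / steps
    total = sum(f((i + 0.5) * h) * (i + 0.5) * h for i in range(steps)) * h
    return 3.0 * total / r ** 3

estimates = [averaged(math.sin, r) for r in (0.1, 0.01, 0.001)]
```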
770,430
<p>How to find the value of $X$?</p> <p>If $X=\frac{1}{1001}+\frac{1}{1002}+\frac{1}{1003}+\cdots+\frac{1}{3001}$</p>
Lucian
93,448
<blockquote> <p><em>How to find the value of X ?</em></p> </blockquote> <p>You don't. All you can do is to approximate it with $\ln3000-\ln1000=\ln\dfrac{3000}{1000}=\ln3$.</p>
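<p>Both the approximation and the fact that the exact value is an unwieldy fraction are easy to verify in Python (the reduced denominator already has hundreds of digits):</p>

```python
import math
from fractions import Fraction

# float estimate of X = 1/1001 + 1/1002 + ... + 1/3001
float_sum = sum(1.0 / n for n in range(1001, 3002))

# exact rational value; its reduced denominator is enormous
exact = sum(Fraction(1, n) for n in range(1001, 3002))
```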
770,430
<p>How to find the value of $X$?</p> <p>If $X=\frac{1}{1001}+\frac{1}{1002}+\frac{1}{1003}+\cdots+\frac{1}{3001}$</p>
pi37
46,271
<p>According to Wolfram Alpha, the exact answer is $\frac{p}{q}$, where p=75328031318485390324661526737425033504382783697307257420576924210199013865675494588453357597732611836759615279262897804228616138318504937188371116285762401242413015264220245625479603089419098916457166245209822607501071955143499155953588750190801459033569902371944986875832208531524214828639697643189965117705382654989852370599753740878627879853483787696654677713060906781015657974936128342382392285649376707442839841179748088751298833747872798844076570494507780225749022929010768355428164437169302405535535364154990753024705481864063860017478507389655897233586797811430528294182048959446223923598196249442135891217435883947315129246902187725246250598067420776140788952327658325678502701284402300815156540371036195063003627486059021552907734502862589610721915821193440593974954781712319953123170734301914497590285511175026077731937589721512275001092521417536559660349265700268430040798622819564401017864757739795100181374338527553865490907433079559312179804056893247878014980769034106784211416486427494110137143167726890741012450902524382016709863255621914636546018196139306200607120112035754193101797513239991492135601146320147050994872715861596583104447145670782396748805189295043563824059380013910741630415788589094785297982336752288051949876466345284300334533818775485735929036146857606273476737333144687 and 
q=68566531279903628912686455689707762035187218980388472531818931221958143227040137485566325964619669558509920485829503840213156491378105416097859140787570909163265838858706060495340080601871261988532528073722857103208163591171453249496963221949000399595906940217890999758063758045600496397218639237756801379116714936907189264418517317848373281952463530695109614931068018579566062545346842001231262506076974738267939850559454231717656259370454715943739489846432220408266436405743383253335947738008816316420815936026338093013782076698663690993643987282074025771028584395877711978236168705319634952035546058102196465266094685597850839130112444421743346563438994002058641939513027378913267407309287523130290409441356981921372056659086236100173594357097198287699815700073687104112287650768268409060993927723692727795317967782792493532735160560983640097780774026793902802329162609514356813613404373046193497169872443197790118543274296291015758965419813085403237331785330094743458034277833761189510285871016868053278321608864897615096982508710403676706674326523034795706489260558438180338079964279882266757471688021586869860522419765557336317312645948642787671563307725326223661077615937980880502844240344055840043277923862912310720505496149062384179496213136193385018508630698975487803093967780513084297517697280000.</p> <p>You're not going to find a nicer way to express the exact answer in closed form.</p>
49,544
<p>In reading section 2.2, page 14 of <a href="http://www.gaussianprocess.org/gpml/chapters/" rel="nofollow noreferrer">this book</a>, I came across the term &quot;singular distribution&quot;.</p> <p>Apparently, a multivariate Gaussian distribution is singular if and only if its covariance matrix is singular. One way (the only way?) the covariance matrix can be singular is if one of the diagonal entries is zero. A Gaussian distribution approaches a Dirac delta as the variance goes to zero. Is the Dirac distribution an example of a singular distribution? I don't think so, since <a href="http://en.wikipedia.org/wiki/Singular_distribution" rel="nofollow noreferrer">Wikipedia</a> says the Lebesgue integral of the density function of a singular distribution is zero.</p>
Shai Covo
2,810
<p>It turns out that there are two common definitions for singular distribution; see <a href="https://encyclopediaofmath.org/wiki/Singular_distribution" rel="nofollow noreferrer">this article</a> (Singular distribution - Springer Online Reference Works).</p> <p>According to one definition, it is a probability distribution on <span class="math-container">$\mathbb{R}^n$</span> concentrated on a set of Lebesgue measure zero and giving probability zero to every one-point set. This is the case in the Wikipedia article you linked. Sometimes, a singular distribution is defined without the latter requirement; under this definition every discrete distribution is singular (with respect to Lebesgue measure). In any case, a singular distribution does not have a probability density function. (Distributions with probability density functions are called absolutely continuous.) As for the case of a multivariate Gaussian distribution, if the covariance matrix is singular, then the distribution is continuous singular (hence has no density, and gives probability zero to every one-point set).</p>
396,088
<p>Let <span class="math-container">$K$</span> be a field and let <span class="math-container">$\Lambda_{1}$</span> and <span class="math-container">$\Lambda_{2}$</span> be two finite-dimensional <span class="math-container">$K$</span>-algebras with Jacobson radicals <span class="math-container">$J_{1}$</span> and <span class="math-container">$J_{2}$</span> respectively. How to show or where can I find the proof of the following statement?</p> <blockquote> <p><span class="math-container">$\Lambda_{1} / J_{1} \otimes_{K} \Lambda_{2} / J_{2}$</span> is always semisimple if <span class="math-container">$K$</span> is perfect or if <span class="math-container">$\Lambda_{1}$</span> and <span class="math-container">$\Lambda_{2}$</span> are path algebras of quivers factored by admissible ideals.</p> </blockquote> <p>Thank you.</p>
Benjamin Steinberg
15,934
<p>Here is an alternate version of @Mare's answer. First recall that a <span class="math-container">$K$</span>-algebra <span class="math-container">$A$</span> is separable if it is semisimple under all base extensions; its enough to check over an algebraic closure of <span class="math-container">$K$</span>.</p> <p>Let us write <span class="math-container">$L\otimes_K A$</span> as <span class="math-container">$A^L$</span> for a <span class="math-container">$K$</span>-algebra <span class="math-container">$A$</span> and field extension <span class="math-container">$L/K$</span>.</p> <p>Any semisimple algebra over a perfect field is separable. By Wedderburn-Artin, it suffices to show that if <span class="math-container">$D$</span> is a finite dimensional division algebra over <span class="math-container">$K$</span>, then <span class="math-container">$D^{\overline K}=\overline{K}\otimes_K D$</span> is semisimple, where <span class="math-container">$\overline{K}$</span> is an algebraic closure of <span class="math-container">$K$</span>. Let <span class="math-container">$F$</span> be the center of <span class="math-container">$D$</span>; then <span class="math-container">$F/K$</span> is a finite field extension and hence separable because <span class="math-container">$K$</span> is perfect. Then <span class="math-container">$D^{\overline K}\cong (\overline K\otimes_K F)\otimes_F D$</span>. But since <span class="math-container">$F/K$</span> is separable, basic field theory says <span class="math-container">$\overline K\otimes_K F\cong \overline K^{[F:K]}$</span>. Therefore, <span class="math-container">$D^{\overline K}\cong (\overline K\otimes_F D)^{[F:K]}$</span>. But <span class="math-container">$D$</span> is central simple over <span class="math-container">$F$</span> and so by a basic result in the theory of central simple algebras, <span class="math-container">$\overline K\otimes_F D\cong M_n(\overline K)$</span> where <span class="math-container">$n^2=[D:F]$</span>. 
Thus <span class="math-container">$D^{\overline K}$</span> is semisimple.</p> <p>Also, if <span class="math-container">$A$</span> is a split semisimple <span class="math-container">$K$</span>-algebra (so isomorphic to a direct product of matrix algebras over a field), then <span class="math-container">$A$</span> is separable. In particular, if <span class="math-container">$\Lambda = KQ/I$</span> where <span class="math-container">$Q$</span> is a quiver and <span class="math-container">$I$</span> is an admissible ideal, then <span class="math-container">$\Lambda/J(\Lambda)\cong K^{|Q_0|}$</span> and hence is a separable <span class="math-container">$K$</span>-algebra.</p> <p>Thus your question really boils down to proving that if <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are separable <span class="math-container">$K$</span>-algebras, then so is <span class="math-container">$A\otimes_K B$</span>. It is enough to show that <span class="math-container">$(A\otimes_K B)^{\overline K}$</span> is semisimple where <span class="math-container">$\overline K$</span> is an algebraic closure. But <span class="math-container">$(A\otimes_K B)^{\overline K}\cong A^{\overline K}\otimes_{\overline K}B^{\overline K}$</span> as they both have the same universal property. The right hand side is a tensor product of direct sums of matrix algebras over <span class="math-container">$\overline{K}$</span> and hence is a direct sum of matrix algebras over <span class="math-container">$\overline{K}$</span> (as <span class="math-container">$M_n(\overline K)\otimes_{\overline K} M_m(\overline {K})\cong M_{mn}(\overline K)$</span>) and thus semisimple. Therefore, <span class="math-container">$A\otimes_K B$</span> is separable and hence semisimple.</p>
1,829,342
<p>So I know that $\sum_{i\geq 0}{n \choose 2i}=2^{n-1}=\sum_{i\geq 0}{n \choose 2i-1}$. However, I need formulas for $\sum_{i\geq 0}i{n \choose 2i}$ and $\sum_{i\geq 0}i{n \choose 2i-1}$. Can anyone point me to a formula with proof for these two sums? My searches thus far have only turned up those first two sums without the $i$ coefficient in the summand. Thanks!</p>
Brian M. Scott
12,042
<p>Here’s a solution not using generating functions. Let </p> <p>$$a_n=\sum_kk\binom{n}{2k}\;,$$</p> <p>the first of your two sums. Suppose that you have a pool of players numbered $1$ through $n$; then $k\binom{n}{2k}$ is the number of ways to choose $2k$ players from the pool to form a team and then designate one of the lowest-numbered $k$ on the team as the captain. Thus, $a_n$ is the number of ways to pick a team with an even number of members and designate one member of the lower-numbered half of the team to be the captain. Note that $k=0$ contributes $0$ to $a_n$, so we may as well consider only $k\ge 1$.</p> <p>We can choose team and captain in a different way, however. We first pick the player who will be the highest-numbered player in the lower half; if that player’s number is $\ell$, we must have $1\le\ell\le n-1$. For some $k$ between $1$ and $n-\ell$ inclusive we then pick $k$ players numbered above $\ell$ and $k-1$ players numbered below $\ell$. Finally, we pick one of the $k$ chosen players numbered $\ell$ or lower to be the captain. For a given $\ell$ and $k$ this can be done in $k\binom{n-\ell}k\binom{\ell-1}{k-1}$ different ways. Thus,</p> <p>$$\begin{align*} a_n&amp;=\sum_{\ell=1}^{n-1}\sum_{k=1}^{n-\ell}k\binom{n-\ell}{k}\binom{\ell-1}{k-1}\\ &amp;=\sum_{\ell=1}^{n-1}(n-\ell)\sum_k\binom{n-1-\ell}{k}\binom{\ell-1}k\tag{1}\\ &amp;=\sum_{\ell=1}^{n-1}(n-\ell)\sum_k\binom{n-1-\ell}{k}\binom{\ell-1}{\ell-1-k}\\ &amp;=\sum_{\ell=1}^{n-1}(n-\ell)\binom{n-2}{\ell-1}\tag{2}\\ &amp;=\sum_{\ell=0}^{n-2}(n-1-\ell)\binom{n-2}\ell\\ &amp;=(n-1)2^{n-2}-\sum_\ell\ell\binom{n-2}\ell\\ &amp;=(n-1)2^{n-2}-(n-2)2^{n-3}\\ &amp;=n2^{n-3}\;. \end{align*}$$</p> <p>To get $(1)$ I used the identity $m\binom{n}m=n\binom{n-1}{m-1}$ and shifted the index $k$ by one; there’s no need to specify limits on the inner summation, because examination shows that it’s over all values of $k$ that yield non-zero terms. 
$(2)$ follows from the <a href="https://en.wikipedia.org/wiki/Vandermonde&#39;s_identity" rel="nofollow">Vandermonde identity</a>.</p> <p>The formula $a_n=n2^{n-3}$ is valid for $n\ge 2$, and clearly $a_0=a_1=0$.</p> <p>Your second sum is</p> <p>$$b_n=\sum_{k\ge 0}k\binom{n}{2k-1}=\sum_{k\ge 1}k\binom{n}{2k-1}=\sum_{k\ge 0}(k+1)\binom{n}{2k+1}\;.$$</p> <p>This corresponds to choosing an odd number $2k+1$ of players for your team and naming a captain from the lowest-numbered $k+1$ members of the team. The alternative calculation is almost the same as before: $\ell$ is the number of the player in the middle of the team when it’s arranged by number, which can be anything from $1$ to $n$ inclusive, and</p> <p>$$\begin{align*} b_n&amp;=\sum_{\ell=1}^n\sum_{k=0}^{n-\ell}(k+1)\binom{n-\ell}{k}\binom{\ell-1}k\\ &amp;=\sum_{\ell=1}^n\sum_{k=0}^{n-\ell}k\binom{n-\ell}k\binom{\ell-1}k+\sum_{\ell=1}^n\sum_{k=0}^{n-\ell}\binom{n-\ell}k\binom{\ell-1}k\\ &amp;=\sum_{\ell=1}^n(n-\ell)\sum_k\binom{n-1-\ell}{k-1}\binom{\ell-1}k+\sum_{\ell=1}^n\binom{n-1}{\ell-1}\\ &amp;=\sum_{\ell=1}^n(n-\ell)\sum_k\binom{n-1-\ell}k\binom{\ell-1}{k+1}+2^{n-1}\\ &amp;=\sum_{\ell=1}^n(n-\ell)\sum_k\binom{n-1-\ell}k\binom{\ell-1}{\ell-2-k}+2^{n-1}\\ &amp;=\sum_{\ell=1}^n(n-\ell)\binom{n-2}{\ell-2}+2^{n-1}\\ &amp;=n\sum_\ell\binom{n-2}\ell-\sum_{\ell=1}^n\ell\binom{n-2}{\ell-2}+2^{n-1}\\ &amp;=n2^{n-2}-\sum_\ell(\ell+2)\binom{n-2}\ell+2^{n-1}\\ &amp;=n2^{n-2}-2\sum_\ell\binom{n-2}\ell-\sum_\ell\ell\binom{n-2}\ell+2^{n-1}\\ &amp;=n2^{n-2}-2^{n-1}-(n-2)\sum_\ell\binom{n-3}{\ell-1}+2^{n-1}\\ &amp;=n2^{n-2}-(n-2)2^{n-3}\\ &amp;=(n+2)2^{n-3}\;, \end{align*}$$</p> <p>valid for $n\ge 2$. Clearly $b_1=1$.</p> <p>As a matter of possible interest, these sequences are essentially <a href="https://oeis.org/A001792" rel="nofollow">OEIS A001792</a> and <a href="https://oeis.org/A045623" rel="nofollow">OEIS A045623</a>, though with different starting points in each case. 
Thus, $a_n$ turns out to be (among many other things) the number of parts in all compositions of $n-1$, and $b_n$ to be the number of ones in all compositions of $n$. Many other interpretations and many references can be found at the OEIS links.</p> <p>It’s also not hard to verify that $\sum_{k=1}^nb_k=a_{n+1}$ for $n\ge 1$.</p>
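<p>A quick brute-force check of the closed forms (my addition, not part of the original answer):</p>

```python
from math import comb

# a_n = sum_k k*C(n,2k) and b_n = sum_k k*C(n,2k-1), compared with the
# closed forms n*2^(n-3) and (n+2)*2^(n-3) derived above (n >= 3 so
# that 2^(n-3) is an integer).
def a(n):
    return sum(k * comb(n, 2 * k) for k in range(n // 2 + 1))

def b(n):
    return sum(k * comb(n, 2 * k - 1) for k in range(1, n // 2 + 2))

for n in range(3, 16):
    assert a(n) == n * 2 ** (n - 3)
    assert b(n) == (n + 2) * 2 ** (n - 3)

# the closing remark: sum_{k=1}^n b_k = a_{n+1}
for n in range(1, 16):
    assert sum(b(k) for k in range(1, n + 1)) == a(n + 1)
```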
2,198,454
<p>My professor's solution to this is as follows: "Create a 2 x 2 matrix with the first row (corresponding to $X=0$) summing to $P(X=0)=1-p$, the second row summing to $P(X=1)=p$, the first column ($Y=0$) summing to $P(Y=0)=1-q$ and the second column summing to $P(Y=1)=q$. We want to maximize the sum of the diagonal which is $P(X=Y)$. Since $p &gt; q$, the first diagonal entry can be at most $1 − p$; the second can be at most $q$. If we write these in we can fill out the rest of the table (below) to get the desired coupling."</p> <p><a href="https://i.stack.imgur.com/8jqJp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8jqJp.png" alt="enter image description here"></a></p> <p>The bit I'm confused about is why $p&gt;q$ implies the first diagonal entry can be at most $1-p$? Can someone explain this please? If we want to maximise the sum of the main diagonal then isn't $1-q+p$ better than $1-p+q$ because $p&gt;q \implies 1-p+q&lt;1$ but $1-q+p&gt;1$?</p>
Misha Lavrov
383,078
<p>The first diagonal entry is the probability that $X=Y=0$, and we know that both of the following statements are true:</p> <ul> <li>Since $\Pr[X=Y=0] \le \Pr[X=0]$, it is at most $1-p$.</li> <li>Since $\Pr[X=Y=0] \le \Pr[Y=0]$, it is at most $1-q$. </li> </ul> <p>However, $p&gt;q$, so the first constraint is stronger, and we can forget about the second constraint.</p> <p>Similarly, the second diagonal entry is the probability that $X=Y=1$, and we know that both of the following statements are true:</p> <ul> <li>Since $\Pr[X=Y=1] \le \Pr[X=1]$, it is at most $p$.</li> <li>Since $\Pr[X=Y=1] \le \Pr[Y=1]$, it is at most $q$. </li> </ul> <p>However, $p&gt;q$, so the second constraint is stronger, and we can forget about the first constraint.</p> <p>This shows that $\Pr[X=Y]$ is at most $1-p+q$. Algebraically, $$\Pr[X=Y] \le \Pr[X=Y=0] + \Pr[X=Y=1] \le (1-p) + q.$$ We need to show that this upper bound is also a lower bound, and that's what the table does.</p>
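<p>For concreteness, here is a small sketch (my own illustration, with arbitrary values $p=7/10$, $q=2/5$ satisfying $p&gt;q$) that builds the table from the argument and verifies the marginals and $\Pr[X=Y]$:</p>

```python
from fractions import Fraction

# hypothetical values with p > q; Fractions keep the check exact
p, q = Fraction(7, 10), Fraction(2, 5)

# joint table from the proof: the diagonal is pushed to its upper
# bounds (1-p and q), and the off-diagonal entries are then forced by
# the marginals P(X=0)=1-p, P(X=1)=p, P(Y=0)=1-q, P(Y=1)=q
joint = {
    (0, 0): 1 - p, (0, 1): Fraction(0),
    (1, 0): p - q, (1, 1): q,
}

assert joint[(0, 0)] + joint[(0, 1)] == 1 - p      # row X=0
assert joint[(1, 0)] + joint[(1, 1)] == p          # row X=1
assert joint[(0, 0)] + joint[(1, 0)] == 1 - q      # column Y=0
assert joint[(0, 1)] + joint[(1, 1)] == q          # column Y=1
assert joint[(0, 0)] + joint[(1, 1)] == 1 - p + q  # P(X=Y) attains the bound
```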
2,539,888
<p>I have a polynomial $x^4+x+1 \in \mathbb{Z}_2[x]$ and I want to construct an extension field of $\mathbb{Z}_2$ that includes the roots of that polynomial. So is this the right approach?</p> <p>Let $E$ be the extension field. $$E= \mathbb{Z}_2[x] / \langle x^4+x+1\rangle\,?$$</p> <p>If so, how do I find the roots of this polynomial? And what is the order of the extension field?</p>
Dr. Sonnhard Graubner
175,066
<p>I have your term factorized in the form $$-(c-b)^2 (2 b+c) (b+2 c)$$</p>
2,924,380
<p><span class="math-container">$\sum_{k=0}^{n}{k\binom{n}{k}}=n2^{n-1}$</span></p> <p><span class="math-container">$n2^{n-1} = \frac{n}{2}2^{n} = \frac{n}{2}(1+1)^n = \frac{n}{2}\sum_{k=0}^{n}{\binom{n}{k}}$</span></p> <p>That's all I got so far, I don't know how to proceed</p>
Sri-Amirthan Theivendran
302,692
<p><strong>Two approaches:</strong></p> <p><strong>First Approach:</strong> Consider <span class="math-container">$(1+x)^n=\sum_{k=0}^n \binom{n}{k}x^k$</span>, differentiate both sides with respect to <span class="math-container">$x$</span> and substitute <span class="math-container">$x=1$</span>.</p> <p><strong>Second Approach:</strong> Use the identity <span class="math-container">$k\binom{n}{k}=n\binom{n-1}{k-1}$</span> for <span class="math-container">$k\ge 1$</span> and apply the result that <span class="math-container">$\sum_{k=0}^n \binom{n}{k}=2^n$</span></p>
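<p>Both approaches can be sanity-checked by brute force (my addition, not part of the answer):</p>

```python
from math import comb

# identity behind the second approach: k*C(n,k) = n*C(n-1,k-1)
for n in range(1, 15):
    for k in range(1, n + 1):
        assert k * comb(n, k) == n * comb(n - 1, k - 1)

# the sum itself: sum_k k*C(n,k) = n*2^(n-1)
for n in range(1, 15):
    assert sum(k * comb(n, k) for k in range(n + 1)) == n * 2 ** (n - 1)
```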
2,924,380
<p><span class="math-container">$\sum_{k=0}^{n}{k\binom{n}{k}}=n2^{n-1}$</span></p> <p><span class="math-container">$n2^{n-1} = \frac{n}{2}2^{n} = \frac{n}{2}(1+1)^n = \frac{n}{2}\sum_{k=0}^{n}{\binom{n}{k}}$</span></p> <p>That's all I got so far, I don't know how to proceed</p>
Quiver
564,698
<p>I want to give another way of doing it, but all the other answers are very good (I really like the differentiation one). Mine is going to be more of simple arithmetic and change of indices.</p> <p>Note that the binomial coefficient can be written as <span class="math-container">$${n\choose k} = \frac{n!}{k!(n-k)!}$$</span> and that <span class="math-container">$$\frac{k}{k!} = \frac{k}{k(k-1)!} = \frac{1}{(k-1)!}$$</span> If we substitute back into the sum <span class="math-container">$$\sum_{k=0}^n k{n\choose k} = \sum_{k=0}^n\frac{k}{k!}\frac{n!}{(n-k)!} = \sum_{k=0}^n\frac{n!}{(k-1)!(n-k)!}$$</span> by making a new index <span class="math-container">$m =k-1$</span>, what we get in the sum is <span class="math-container">$$\sum_{m=0}^{n-1}\frac{n!}{m!(n-(m+1))!} = \sum_{m=0}^{n-1}n\frac{(n-1)!}{m!((n-1)-m)!}$$</span></p> <p>Using now the binomial theorem, that states <span class="math-container">$$(a+b)^n=\sum_{m=0}^{n}{n\choose m}a^{n-m}b^m$$</span> setting <span class="math-container">$a=b=1$</span>, we get <span class="math-container">$$2^n = \sum_{m=0}^{n}{n\choose m}$$</span></p> <p>That's pretty similar to what we got earlier, setting <span class="math-container">$j=n-1$</span> <span class="math-container">$$\sum_{m=0}^{n-1}n\frac{(n-1)!}{m!((n-1)-m)!} = n\sum_{m=0}^j\frac{j!}{m!(j-m)!} = n\sum_{m=0}^j{j\choose m} = n2^j = n2^{n-1}$$</span></p>
2,425,337
<p>What would be an example of a real valued sequence $\{a_{n}\}_{n=1}^{\infty}$ such that $$\frac{a_{n}}{a_{n+1}} = 1 + \frac{1}{n} + \frac{p}{n \ln n} + O\left(\frac{1}{n \ln^{2}n}\right)\ ?$$</p>
robjohn
13,854
<p>$$ f_n(x)=\frac{\alpha\,}{1+\left(\frac{2x}{1-2x}\right)^n} $$ For $\alpha=1$ and $n=1,2,3,4,5$, we get <a href="https://i.stack.imgur.com/B5WoM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B5WoM.png" alt="enter image description here"></a></p>
676,171
<blockquote> <p>Prove that if $F$ is a field, every proper prime ideal of $F[X]$ is maximal.</p> </blockquote> <p>Should I be using the theorem that says an ideal $M$ of a commutative ring $R$ is maximal iff $R/M$ is a field? Any suggestions on this would be appreciated. </p>
ricardio
124,869
<p>If $F$ is a field, $F[x]$ is a Euclidean domain (just the normal division of polynomials you learn in high school). Thus $F[x]$ is a PID. In a PID, every nonzero prime ideal is maximal.</p>
3,294,402
<p>Prove that it is impossible for three consecutive squares to sum to another perfect square. I have tried for the three numbers <span class="math-container">$x-1$</span>, <span class="math-container">$x$</span>, and <span class="math-container">$x+1$</span>.</p>
John Omielan
602,049
<p>As you stated, you let the <span class="math-container">$3$</span> consecutive numbers be <span class="math-container">$x-1$</span>, <span class="math-container">$x$</span>, and <span class="math-container">$x+1$</span>. This will give you a sum of their squares to be <span class="math-container">$3x^2 + 2$</span>. Consider any integer <span class="math-container">$n$</span> and <span class="math-container">$r = 0,1$</span> or <span class="math-container">$2$</span>. Then <span class="math-container">$(3n+r)^2 = 9n^2 + 6nr + r^2$</span>, so the possible remainders when divided by <span class="math-container">$3$</span> are just <span class="math-container">$r^2$</span>, i.e., <span class="math-container">$0$</span>, <span class="math-container">$1$</span>, plus <span class="math-container">$4$</span> which has a remainder of <span class="math-container">$1$</span> also. Thus, all perfect squares have a remainder of either <span class="math-container">$0$</span> or <span class="math-container">$1$</span> when divided by <span class="math-container">$3$</span>, but this sum has a remainder of <span class="math-container">$2$</span>. Thus, it cannot be a perfect square.</p> <p>In general, you should try to handle these types of questions by checking the remainders (sometimes called congruences in higher math) of various small integers to see if you can find any particular pattern, such as determine anything which doesn't fit.</p>
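<p>The remainder argument is easy to check by machine (my addition):</p>

```python
from math import isqrt

# perfect squares leave remainder 0 or 1 when divided by 3 ...
assert {(x * x) % 3 for x in range(1000)} == {0, 1}

# ... while (x-1)^2 + x^2 + (x+1)^2 = 3x^2 + 2 leaves remainder 2,
# so the sum is never a perfect square
for x in range(1, 10_000):
    s = (x - 1) ** 2 + x ** 2 + (x + 1) ** 2
    assert s == 3 * x * x + 2
    assert s % 3 == 2
    assert isqrt(s) ** 2 != s
```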
3,294,402
<p>Prove that it is impossible for three consecutive squares to sum to another perfect square. I have tried for the three numbers <span class="math-container">$x-1$</span>, <span class="math-container">$x$</span>, and <span class="math-container">$x+1$</span>.</p>
James Arathoon
448,397
<p>I will attempt a proof by contradiction.</p> <p>Assume that the given sum of three consecutive squares sums to the square <span class="math-container">$(x+n)^2$</span>, <span class="math-container">$n$</span> being a positive integer like <span class="math-container">$x$</span>. Thus the statement or proposition we conjecture to be true is</p> <p><span class="math-container">$$(x-1)^2+x^2+(x+1)^2=(x+n)^2\tag{1}$$</span></p> <p>which simplifies to <span class="math-container">$$2x^2+2=2nx+n^2\tag{2}$$</span></p> <p>Since the left hand side of equation (2) is even, it immediately follows that <span class="math-container">$n$</span> must be even. Thus let <span class="math-container">$n=2m$</span>, giving</p> <p><span class="math-container">$$x^2+1=2mx+2m^2\tag{3}$$</span></p> <p>Now from (3) we can immediately say that <span class="math-container">$x$</span> must be odd. Thus let <span class="math-container">$x=2u-1$</span>, where <span class="math-container">$u$</span> is any non-zero positive integer. Substituting for <span class="math-container">$x$</span> on the left hand side, (3) then simplifies to</p> <p><span class="math-container">$$2u^2-2u+1=mx+m^2\tag{4}$$</span></p> <p>The left hand side of (4) is always odd, therefore <span class="math-container">$m$</span> cannot be even. However, if <span class="math-container">$m$</span> is odd with <span class="math-container">$x$</span> odd as required above, <span class="math-container">$mx+m^2$</span> will always be even. [The result of multiplying two odd whole numbers together is always odd, as can be proved, for example, by algebraically multiplying <span class="math-container">$(2u-1)$</span> and <span class="math-container">$(2v-1)$</span>.] </p> <p>Therefore we have reached a contradiction, as <span class="math-container">$m$</span> cannot be both odd and even at the same time, and so the original conjecture (1) must be false.</p>
2,782,726
<p>I'm reading through some lecture notes to prepare myself for analysis next semester and stumbled along the following exercises: </p> <p>a) Prove that $\lim_{x\to0} f(x)=b$ is equivalent to the statement $\lim_{x\to0} f(x^3)=b$.</p> <p>b) Give an example of a map where $\lim_{x\to0} f(x^2)$ exists, but $\lim_{x\to0} f(x)$ does not. </p> <p>for b) I was thinking about the following piecewise function: </p> <p>$f(x)=\begin{cases} -1 &amp; x &lt; 0 \\ 1 &amp; x \geq0 \end{cases}$</p> <p>is this a good example?</p> <p>for (a), I don't have any concrete tools to work with, I can't write down any explicit $\epsilon$ or $\delta$, so what can I do?</p>
user
505,767
<p>Yes, your example for point b) is a good one.</p> <p>For a) the property holds because the function $x^3$ is continuous with continuous inverse $x^{1/3}$, and $x^3 \to 0$ as $x\to 0$. Yes, we can prove that by the $\epsilon$-$\delta$ definition.</p> <p>Refer also to <a href="https://math.stackexchange.com/questions/167926/formal-basis-for-variable-substitution-in-limits?noredirect=1&amp;lq=1">Formal basis for variable substitution in limits</a>.</p>
2,782,726
<p>I'm reading through some lecture notes to prepare myself for analysis next semester and stumbled along the following exercises: </p> <p>a) Prove that $\lim_{x\to0} f(x)=b$ is equivalent to the statement $\lim_{x\to0} f(x^3)=b$.</p> <p>b) Give an example of a map where $\lim_{x\to0} f(x^2)$ exists, but $\lim_{x\to0} f(x)$ does not. </p> <p>for b) I was thinking about the following piecewise function: </p> <p>$f(x)=\begin{cases} -1 &amp; x &lt; 0 \\ 1 &amp; x \geq0 \end{cases}$</p> <p>is this a good example?</p> <p>for (a), I don't have any concrete tools to work with, I can't write down any explicit $\epsilon$ or $\delta$, so what can I do?</p>
Community
-1
<p>Suppose that for every $\epsilon&gt;0$ there is a $\delta&gt;0$ such that whenever $|x-0|&lt; \delta$, it holds that $|f(x)-b|&lt;\epsilon$. With this in mind we can make the case for $x^3$.</p> <p>We use the fact that $x\mapsto x^3$ is a bijection of $\mathbb{R}$, so that</p> <p>for any $x \in \mathbb{R}$, $p(x)$ is true $\iff$ for any $x^3 \in \mathbb{R}$, $p(x^3)$ is true.</p> <p>Since $x^3$ tends to $0$ as $x$ tends to zero, we can replace the original limit by $\lim_{x^3\to 0} f(x^3)$: after substituting $x$ by $x^3$ we get</p> <p>for every $\epsilon&gt;0$, whenever $|x^3-0|&lt; \delta$, it holds that $|f(x^3)-b|&lt;\epsilon$. QED.</p> <p>An alternative way would be via the substitution rule for limits:</p> <p>Let $\lim_{y \to 0} f(y)=b$, and notice that if we choose $y=x^3$, so that $\lim_{x \to 0}y= \lim_{x\to 0} x^3=0$, then by substitution $\lim_{x\to 0} f(x^3)=b$.</p>
180,839
<p>Is there any software which can be used for computing Thurston's unit ball (for second homology of 3-manifolds) of link complements? In particular can I do that with SnapPy?</p> <p>PS: even a table for Thurston's ball of two component links would be helpful for me.</p>
Igor Rivin
11,142
<p>I am not aware of any implementation. The best known algorithm is <a href="http://arxiv.org/abs/0706.0673" rel="noreferrer">Cooper and Tillmann's</a>, the closest (which is not very) to a table is in <a href="http://www.math.harvard.edu/~ctm/papers/home/text/papers/alex/alex.pdf" rel="noreferrer">Curt McMullen's classical paper.</a></p>
408,344
<p>I'm making this post to ask for a reference about combinatorics: I'm a PhD student in representation theory/algebraic geometry. My background is mostly in algebra and geometry (and also mostly theoretical unfortunately).</p> <p>Right now I'm interested in the cohomology of quiver and character varieties and their links with cohomological Hall algebras, quantum groups and the character ring of <span class="math-container">$\operatorname{GL}(n,\mathbb{F}_q)$</span>. There's a lot of interesting combinatorics going on, but I really know very little about it. Especially regarding MacDonald polynomials etc.</p> <p>Outside of the classic book by Macdonald &quot;Symmetric functions and Hall polynomials&quot; what could be a good reference to get into this area of combinatorics? Ideally I would like a book or notes with strong link to representation theory/cohomology theories etc</p>
Libli
37,214
<p>M. Haiman &quot;Notes on Macdonald polynomials and the geometry of the Hilbert scheme of points on <span class="math-container">$\mathbb{P}^2$</span>&quot;. By one of the greatest specialists of interactions between combinatorics and algebraic geometry.</p>
408,344
<p>I'm making this post to ask for a reference about combinatorics: I'm a PhD student in representation theory/algebraic geometry. My background is mostly in algebra and geometry (and also mostly theoretical unfortunately).</p> <p>Right now I'm interested in the cohomology of quiver and character varieties and their links with cohomological Hall algebras, quantum groups and the character ring of <span class="math-container">$\operatorname{GL}(n,\mathbb{F}_q)$</span>. There's a lot of interesting combinatorics going on, but I really know very little about it. Especially regarding MacDonald polynomials etc.</p> <p>Outside of the classic book by Macdonald &quot;Symmetric functions and Hall polynomials&quot; what could be a good reference to get into this area of combinatorics? Ideally I would like a book or notes with strong link to representation theory/cohomology theories etc</p>
Per Alexandersson
1,056
<p>I am not that strong on the representation-theory side, but know more about the combinatorics side. If you want to get an overview of the symmetric functions and the associated combinatorics (crystal bases, RSK etc), then one starting point (with references!) is <a href="https://www.symmetricfunctions.com" rel="noreferrer">www.symmetricfunctions.com</a>.</p> <p>I am the admin for this site, so all errors and issues are completely my fault :)</p>
3,195,618
<p>Prove that the topological space <span class="math-container">$ \mathbb{R^2} $</span> with the dictionary order topology is first countable, but not second countable.</p> <p>I am a bit stuck. Some hints would help. For first countability I am having trouble finding a local base for each <span class="math-container">$ (x,y) \in \mathbb{R^2}$</span>. For second countability, can I for example look at the first quadrant and write it as <span class="math-container">$ \cup_{x \in \mathbb{R^{+}} } ((x,0), (x, + \infty )) $</span>, which is a disjoint union of uncountably many uncountable sets, so there can't be a countable base?</p>
JJacquelin
108,514
<p><span class="math-container">$$\frac{d^2y}{dx^2} + \frac{2}{4x} \frac{dy}{dx} + \frac{9}{4x} y = 0 \text{ with transformation } t = \sqrt{x}$$</span> <span class="math-container">$\frac{dt}{dx}=\frac{1}{2\sqrt{x}}=\frac{1}{2t}$</span></p> <p><span class="math-container">$\frac{dy}{dx}=\frac{dy}{dt}\frac{dt}{dx}=\frac{1}{2t}\frac{dy}{dt}$</span></p> <p><span class="math-container">$\frac{d^2y}{dx^2}= (\frac{d}{dt}(\frac{dy}{dx}))\frac{dt}{dx}=(\frac{d}{dt}(\frac{1}{2t}\frac{dy}{dt}) )\frac{1}{2t}=-\frac{1}{4t^3}\frac{dy}{dt}+\frac{1}{4t^2}\frac{d^2y}{dt^2}$</span> <span class="math-container">$$(-\frac{1}{4t^3}\frac{dy}{dt}+\frac{1}{4t^2}\frac{d^2y}{dt^2})+\frac{2}{4t^2}(\frac{1}{2t}\frac{dy}{dt})+\frac{9}{4t^2}y=0$$</span> <span class="math-container">$$\frac{d^2y}{dt^2}+9y=0$$</span> <span class="math-container">$$y=c_1\cos(3t)+c_2\sin(3t)$$</span> <span class="math-container">$$y=c_1\cos(3\sqrt{x})+c_2\sin(3\sqrt{x})$$</span></p>
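<p>As a numerical sanity check (my addition), central finite differences confirm that $y=\cos(3\sqrt{x})$ and $y=\sin(3\sqrt{x})$ satisfy the original equation:</p>

```python
import math

def residual(y, x, h=1e-4):
    """Finite-difference residual of y'' + (2/(4x)) y' + (9/(4x)) y."""
    d1 = (y(x + h) - y(x - h)) / (2 * h)        # central approx. of y'
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2  # central approx. of y''
    return d2 + (2 / (4 * x)) * d1 + (9 / (4 * x)) * y(x)

for y in (lambda x: math.cos(3 * math.sqrt(x)),
          lambda x: math.sin(3 * math.sqrt(x))):
    for x in (0.5, 1.0, 2.0, 5.0):
        assert abs(residual(y, x)) < 1e-5
```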
237,446
<p>I find to difficult to evaluate with $$\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right )$$ I tried to use the fact, that $$\frac{1}{1-n} \geqslant \ln(n)\geqslant 1+n$$ what gives $$\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right ) \geqslant \lim_{n\rightarrow\infty} n(1-\sqrt[n]{1+n}) =\lim_{n\rightarrow\infty}n *\lim_{n\rightarrow\infty}(1-\sqrt[n]{1+n})$$ $$(1-\sqrt[n]{1+n})\rightarrow -1\Rightarrow\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right )\rightarrow-\infty$$Is it correct? If not, what do I wrong?</p>
André Nicolas
6,312
<p><strong>Hint:</strong> We look at the behaviour of $$x\left(1-\sqrt[x]{\log x}\right)$$ for large $x$. Rewrite the expression as $$\frac{1-e^{\frac{\log\log x}{x}} }{\frac{1}{x}}.$$ Top and bottom both approach $0$ as $x\to\infty$, so the conditions for using L'Hospital's Rule hold. The rest is a calculation. </p>
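<p>Completing the calculation shows the limit is $-\infty$; in fact the expression behaves like $-\log\log x$, so it diverges very slowly. A numeric check (my addition):</p>

```python
import math

def f(n):
    # n * (1 - (ln n)^(1/n))
    return n * (1 - math.log(n) ** (1 / n))

# the expression is negative and decreasing along n = 10^2, 10^4, ...
vals = [f(10 ** k) for k in (2, 4, 6, 8)]
assert all(v < 0 for v in vals)
assert all(a > b for a, b in zip(vals, vals[1:]))

# asymptotics: f(n) is close to -log(log(n)) for large n
assert abs(f(10 ** 8) + math.log(math.log(10 ** 8))) < 1e-3
```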
3,255,644
<p>I need help understanding the definitions and context for a homework question:</p> <blockquote> <p>Consider a 3 by 7 matrix A over GF(2) containing distinct columns. The row space C of A is the subspace over GF(2) generated by the 3 rows. (Extra note: This is a “simplex” code [7,3] with generator matrix A. It is closely related to a certain “Hamming” code [7,4].)</p> </blockquote> <p>Would the above mean that, for instance I have a matrix that has unique columns and elements in GF(2):</p> <p><span class="math-container">$A= \begin{matrix} 0 &amp; 0 &amp; 0&amp; 0&amp; 1 &amp; 1 &amp;1\\ 0 &amp; 0 &amp; 1&amp; 1&amp; 0 &amp; 0 &amp;1\\ 0 &amp; 1 &amp; 0&amp; 1&amp; 0 &amp; 1 &amp;0 \end{matrix} $</span></p> <p>the row space would then be:</p> <p><span class="math-container">$C=[0000111, 0011001, 0101010]$</span></p> <p>The next parts of the questions needs me to know about the</p> <blockquote> <p>weight distribution of C, weight of a vector, distance between words</p> </blockquote> <p>Could anyone explain what those words mean in this context?</p>
ArsenBerk
505,611
<p>The row space is the set of vectors generated by the row vectors of <span class="math-container">$A$</span>, i.e. all linear combinations of <span class="math-container">$0000111, 0011001, 0101010$</span>.</p> <p>The distance between words <span class="math-container">$u$</span> and <span class="math-container">$v$</span>, generally denoted by <span class="math-container">$d(u,v)$</span>, is the number of indexes at which <span class="math-container">$u$</span> and <span class="math-container">$v$</span> differ. For instance, <span class="math-container">$d(0000111, 0011001) = 4$</span> since these two words agree only in their first, second and last indexes.</p> <p>The weight of a vector <span class="math-container">$u$</span>, generally denoted by <span class="math-container">$w(u)$</span>, is the number of non-zero indexes of <span class="math-container">$u$</span>, that is, <span class="math-container">$d(u,\bar{0})$</span> where <span class="math-container">$\bar{0}$</span> is the zero vector.</p> <p>The weight distribution of a code <span class="math-container">$C$</span> can be thought of as an ordered <span class="math-container">$(n+1)$</span>-tuple <span class="math-container">$W$</span>, where <span class="math-container">$n$</span> is the length of a codeword. Here, the <span class="math-container">$i^{th}$</span> index of <span class="math-container">$W$</span> denotes the number of codewords of weight <span class="math-container">$i$</span> in <span class="math-container">$C$</span>. Also note that <span class="math-container">$W_0 = 1$</span>, since zero is always a codeword and the only codeword with weight <span class="math-container">$0$</span>.</p>
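<p>To make these definitions concrete for the matrix in the question, here is a short enumeration (my own illustration) of all $2^3=8$ codewords of $C$ and their weights:</p>

```python
from itertools import product
from collections import Counter

rows = [(0, 0, 0, 0, 1, 1, 1), (0, 0, 1, 1, 0, 0, 1), (0, 1, 0, 1, 0, 1, 0)]

def xor(u, v):               # addition over GF(2), coordinatewise
    return tuple(a ^ b for a, b in zip(u, v))

def weight(u):               # w(u): number of non-zero indexes
    return sum(u)

def distance(u, v):          # d(u,v): number of differing indexes
    return weight(xor(u, v))

# row space C: all GF(2)-linear combinations of the three rows
code = set()
for coeffs in product((0, 1), repeat=3):
    w = (0,) * 7
    for c, row in zip(coeffs, rows):
        if c:
            w = xor(w, row)
    code.add(w)

assert len(code) == 8        # the three rows are linearly independent
# weight distribution: one word of weight 0, four of weight 3, three of weight 4
assert Counter(weight(u) for u in code) == Counter({0: 1, 3: 4, 4: 3})
# in a linear code, d(u,v) = w(u + v)
assert distance(rows[0], rows[1]) == weight(xor(rows[0], rows[1])) == 4
```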
3,547,995
<p>I've been trying to figure out the way to solve this for a while now, and I'm hoping someone could point me in the right direction to find the answer (or show me how to solve this).</p> <p>The problem I'm having is with this expression: <span class="math-container">$(2i-2)^{38}$</span> and I need to evaluate it using de Moivre's formula (<span class="math-container">$i$</span> is the imaginary unit, in case that wasn't clear). Now obviously I know it would be stupid to expand right away because it would turn into an extremely long expression.</p> <p>The farthest I got with it is <span class="math-container">$2(i-1)^{38}$</span> and <span class="math-container">$(2\cdot(-1)^{1/2}-2)^{38}$</span>.</p> <p>I'm hoping I'm headed in the right direction but I'm stuck. Could someone please show me the way to solve this?</p>
Pythagoras
701,578
<p><strong>Hint</strong>: <span class="math-container">$(2i-2)^{38}=2^{38}(i-1)^{38}$</span> and <span class="math-container">$(i-1)^2=-2i$</span>. So de Moivre’s formula is not required.</p>
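<p>Carrying the hint through: $(i-1)^{38}=\big((i-1)^2\big)^{19}=(-2i)^{19}=2^{19}i$, so $(2i-2)^{38}=2^{38}\cdot 2^{19}i=2^{57}i$. A numeric cross-check (my addition):</p>

```python
import cmath

z = 2j - 2

# the hint: (i-1)^2 = -2i, so (2i-2)^38 = 2^38 * ((i-1)^2)^19
lhs = z ** 38
rhs = 2 ** 38 * (-2j) ** 19
assert abs(lhs - rhs) / abs(rhs) < 1e-9

# de Moivre directly: |z| = 2*sqrt(2), arg z = 3*pi/4,
# and 38 * (3*pi/4) is congruent to pi/2 modulo 2*pi
r, theta = cmath.polar(z)
moivre = r ** 38 * cmath.exp(1j * 38 * theta)
assert abs(lhs - moivre) / abs(lhs) < 1e-9

# both equal 2^57 * i
assert abs(lhs - 2 ** 57 * 1j) / 2 ** 57 < 1e-9
```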
1,902,878
<p>If $a^x=bc$, $b^y=ca$ and $c^z=ab$, prove that: $xyz=x+y+z+2$.</p> <p>My Approach; Here,</p> <p>$$a^x=bc$$ $$a={bc}^{\frac {1}{x}}$$</p> <p>and,</p> <p>$$b={ca}^{\frac {1}{y}}$$ $$c={ab}^{\frac {1}{z}}$$</p> <p>I got stopped from here. Please help me to continue </p>
Aryabhatta
406,086
<p>Notice here a quite simple approach: multiply the three given equations together.</p> <p>$a^x\cdot b^y\cdot c^z=(bc)\cdot(ca)\cdot(ab)$, so $a^x\cdot b^y\cdot c^z=a^2\cdot b^2\cdot c^2$.</p> <p>Comparing the corresponding exponents gives $x=y=z=2$.</p> <p>Now, L.H.S. $=xyz =2\times 2\times 2=8$</p> <p>R.H.S. $=x+y+z+2=2+2+2+2=8$. Proved.</p>
1,336,869
<p>Does every mod p have at least one element with a non-identical inverse?</p> <p>I very much suspect this is true, but how can I prove it? For example, in mod 5, some elements have inverses that are not themselves ${2,3}$ and some have themselves as inverses ${1,4}$. Am I assured that every prime $p\gt 2$ will have at least one element that is not its own inverse (almost certainly yes)? How do I prove that?</p>
egreg
62,967
<p>An element which is its own inverse modulo $p$ is represented by an integer $x$ such that $x^2\equiv 1\pmod{p}$, that is, $p\mid (x^2-1)$ which means $$ p\mid x-1 \quad\text{or}\quad p\mid x+1 $$ In other words, either $x\equiv 1\pmod{p}$ or $x\equiv p-1\pmod{p}$.</p> <p>So, as soon as $p&gt;3$, there are elements like the ones you're looking for.</p>
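<p>A quick empirical check of this (my addition): enumerating the self-inverse residues modulo several primes shows they are exactly $1$ and $p-1$, so for $p\gt 3$ there are always elements that are not their own inverse:</p>

```python
def self_inverse_residues(p):
    # residues x in {1, ..., p-1} with x*x ≡ 1 (mod p)
    return [x for x in range(1, p) if (x * x) % p == 1]
```
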
2,886,675
<p>I suspect the following is exactly true (for positive $\alpha$)</p> <p>\begin{equation} \sum_{n=1}^\infty e^{- \alpha n^2 }= \frac{1}{2} \sqrt { \frac{ \pi}{ \alpha} } \end{equation}</p> <p>If the above is exactly true, then I would like to know a proof of it. I accept that showing a particular limit is exactly true may be far more difficult than just applying a general theorem to show that the limit exists. Also, as the result involves $\pi$, this makes me think the proof could well be a long one, BUT … ?</p> <p>To give some context, the above series crops up in calculating the 'One Particle Translational Partition Function' for the quantum mechanical 'Particle In A Box'.</p>
Jack D'Aurizio
44,121
<p>It is not an exact equality. By the <a href="https://en.wikipedia.org/wiki/Poisson_summation_formula" rel="nofollow noreferrer">Poisson summation formula</a>, assuming $\alpha&gt;0$,</p> <p>$$ \sum_{n\in\mathbb{Z}}e^{-\alpha n^2} = \sqrt{\frac{\pi}{\alpha}}\sum_{n\in\mathbb{Z}}e^{-\pi^2 n^2 / \alpha} \tag{1}$$ hence by parity $$ \sum_{n\geq 1}e^{-\alpha n^2} = -\frac{1}{2}+\sqrt{\frac{\pi}{\alpha}}\left(\frac{1}{2}+\sum_{n\geq 1}e^{-\pi^2 n^2 / \alpha} \right).\tag{2} $$ This lemma is usually exploited in the proof of the reflection formula for the Riemann $\zeta$ function.</p>
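<p>Equation $(2)$ is easy to verify numerically (a check I have added; both sums converge rapidly, so the truncation error is negligible):</p>

```python
import math

def theta_lhs(alpha, terms=200):
    # left-hand side of (2): sum over n >= 1 of exp(-alpha n^2)
    return sum(math.exp(-alpha * n * n) for n in range(1, terms + 1))

def theta_rhs(alpha, terms=200):
    # right-hand side of (2): -1/2 + sqrt(pi/alpha) * (1/2 + sum exp(-pi^2 n^2 / alpha))
    tail = sum(math.exp(-math.pi ** 2 * n * n / alpha) for n in range(1, terms + 1))
    return -0.5 + math.sqrt(math.pi / alpha) * (0.5 + tail)
```
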
417,064
<p>Let T be a totally ordered set that is <strong>finite</strong>. Does it follow that the minimum and maximum of T exist? Since T is finite, I believe there exists a minimal element of T. From that, it might be possible to show that the minimal element is the minimum, but I am not quite sure whether it is the right approach. </p>
Cameron Buie
28,900
<p>Yes, so long as $T$ is nonempty. Since $T$ is totally ordered, then minimal is equivalent to minimum (one direction is easy, the other follows by totality/comparability). Similarly for maximal and maximum.</p>
379,669
<p>So I was exploring some math the other day... and I came across the following neat identity:</p> <p>Given $y$ is a function of $x$ ($y(x)$) and $$ y = 1 + \frac{\mathrm{d}}{\mathrm{d}x} \left(1 + \frac{\mathrm{d}}{\mathrm{d}x} \left(1 + \frac{\mathrm{d}}{\mathrm{d}x} \left(1 + \frac{\mathrm{d}}{\mathrm{d}x} \left(1 + \frac{\mathrm{d}}{\mathrm{d}x} \left( \cdots \right) \right) \right) \right) \right) \text{ (repeated differential)} $$</p> <p>then we can solve this equation as follows: $$ y - 1 = \frac{\mathrm{d}}{\mathrm{d}x} \left(1 + \frac{\mathrm{d}}{\mathrm{d}x} \left( \cdots \right) \right) \iff \int y - 1 \, \mathrm{d} x = 1 + \frac{\mathrm{d}}{\mathrm{d}x} \left( 1 + \frac{\mathrm{d}}{\mathrm{d}x} \left( \cdots \right) \right) $$ $$ \implies \int y - 1 \, \mathrm{d} x = y \iff y - 1 = \frac{\mathrm{d} y }{ \mathrm{d} x} $$</p> <p>So</p> <p>$$ \ln \left( y - 1 \right) = x + C \iff y = Ce^x + 1 $$</p> <p>This problem reminded me a lot of nested radical expressions such as: $$ x = 1 + \sqrt{1 + \sqrt{ 1 + \sqrt{ \cdots }}} \iff x - 1 = \sqrt{1 + \sqrt{ 1 + \sqrt{ \cdots }}} $$ $$ \implies (x - 1)^2 = x \iff x^2 - 3x + 1 = 0 $$</p> <p>and so</p> <p>$$ x = \frac{3}{2} + \frac{\sqrt{5}}{2} $$</p> <p>This reminded me of the Ramanujan nested radical, which is:</p> <p>$$ x = 0 + \sqrt{ 1 + 2 \sqrt{ 1 + 3 \sqrt{1 + 4 \sqrt{ \cdots }}}} $$</p> <p>whose solution cannot be found by simple series manipulations but requires knowledge of a general formula found by algebraically manipulating the binomial theorem...</p> <p>This made me curious...</p> <p>say $y$ is a function of $x$ ($y(x)$) and</p> <p>$$ y = 0 + \frac{\mathrm{d}}{\mathrm{d}x} \left(1 + 2\frac{\mathrm{d}}{\mathrm{d}x} \left(1 + 3\frac{\mathrm{d}}{\mathrm{d}x} \left(1 + 4\frac{\mathrm{d}}{\mathrm{d}x} \left(1 + 5\frac{\mathrm{d}}{\mathrm{d}x} \left( \cdots \right) \right) \right) \right) \right) $$</p> <p>What would the solution come out to be?</p>
Sidharth Ghoshal
58,294
<p>Quite a bit of time having passed, I thought about this some more!</p> <p>If we cut our original expression at a finite depth, then (as pointed out by Aryabhata) we have</p> <p>$$y = n!\frac{d^n y}{dx^n} $$</p> <p>An exponential ansatz $y=e^{\lambda x}$ gives $n!\,\lambda^n=1$, so the solutions include $$ y= C_1 e^{\omega_1 x/\sqrt[n]{n!}} +C_2 e^{\omega_2 x/\sqrt[n]{n!}}+\dots+C_n e^{\omega_n x/\sqrt[n]{n!}} $$</p> <p>for all possible $n$th roots of unity $\omega_1,\dots,\omega_n$.</p> <p>Now consider Stirling's approximation of $n!$, which gives the bound</p> <p>$$n! \le e n^{n + \frac{1}{2}} e^{-n} $$</p> <p>(Note that for $n \ge 1$ both sides are greater than or equal to $1$.) Hence</p> <p>$$ \sqrt[n]{n!} \le \sqrt[n]{e n^{n + \frac{1}{2}} e^{-n}} = e^{\frac{1}{n}}\, n^{1 + \frac{1}{2n}}\, e^{-1} $$</p> <p>which yields</p> <p>$$ \sqrt[n]{n!} = O(n) $$</p> <p>As $n$ tends to infinity, so does $\sqrt[n]{n!}$, and therefore the exponents $\omega_k/\sqrt[n]{n!}$ all shrink towards $0$.</p> <p>But before I throw out any hope for this, I would like to point out that as $n$ gets larger, we essentially obtain a basis for functions in terms of complex exponentials. I'm curious if there is a way to pick the constants $C_i$ such that for a given $n$, you can model the function as closely as possible, and then determine in the limit what the $C_i$ need to be (probably very degenerate piecewise constant functions).</p> <p>If there is such a scheme, it may very well be that ANY function satisfies this differential equation, assuming the set of exponentials gets closer and closer to forming a basis for all functions (which in the limit as $n$ goes to infinity means it does indeed form a basis). The tricky part is defining "how well" a basis works: i.e. if it doesn't cover every case, how do we say that it covers more cases, or relatively more cases, than its predecessor?</p>
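<p>A small numerical illustration of this growth (my addition): $\sqrt[n]{n!}$ does satisfy the Stirling-type bound quoted above, and in fact $\sqrt[n]{n!}/n \to 1/e$:</p>

```python
import math

def nth_root_factorial(n):
    # n!^(1/n) computed stably via log-gamma: exp(log(n!)/n)
    return math.exp(math.lgamma(n + 1) / n)
```
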
2,021,217
<p>I tried to solve this limit $$\lim_{x\to 1} \left(\frac{x}{x-1}-\frac{1}{\ln x}\right)$$ and, without thinking, I thought the result was 1. But, using Wolfram to verify, I noticed that the limit is $1/2$. </p> <p>How can I solve it without l'Hôpital/series/integration, just with known limits (link in comments), the squeeze theorem, or other basic theorems?</p> <p>I don't have a clue about what I can do, because known limits can't be used ($\ln x = x-1$) without considering the "error" that we don't exactly know!</p>
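<p>(A numerical check, added here since it is cheap: evaluating the expression on both sides of $x=1$ does suggest the limit $1/2$ rather than $1$.)</p>

```python
import math

def g(x):
    # the expression whose limit at x -> 1 is sought
    return x / (x - 1) - 1 / math.log(x)

# approach 1 from the right and from the left
right = [g(1 + 10 ** -k) for k in range(2, 6)]
left = [g(1 - 10 ** -k) for k in range(2, 6)]
```
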
marco2013
79,890
<p>Let $E=(\mathbb{R}/\mathbb{Z})^6$. Let $x=(\overline{a},\overline{b},\overline{c},\overline{d},\overline{e},\overline{f})\in E$. Let $F=\mathbb{N}x$.</p> <p>$\overline{F}$ is a subgroup of $E$.</p> <p>$a,b,c,d,e,f$ are irrational, so there exist $(u_n)\in\mathbb{N}^{\mathbb{N}}$ and $(\delta_a,\delta_b,\delta_c,\delta_d,\delta_e,\delta_f)\in \{0,1\}^6$ such that $u_nx \in E \mapsto 0_E$ with $u_nx=(\overline{a_n},\overline{b_n},\overline{c_n},\overline{d_n},\overline{e_n},\overline{f_n})$ and $0&lt;a_n&lt;1,0&lt;b_n&lt;1,0&lt;c_n&lt;1,0&lt;d_n&lt;1 ,0&lt;e_n&lt;1 ,0&lt;f_n&lt;1$.</p> <p>and $(a_n,b_n,c_n,d_n,e_n,f_n)\in[0,1]^6 \mapsto (\delta_a,\delta_b,\delta_c,\delta_d,\delta_e,\delta_f)$.</p> <p>So there exists $(v_n)\in\mathbb{N}^{\mathbb{N}}$ such that $v_nx \in E \mapsto 0_E$ with $v_nx=(\overline{a'_n},\overline{b'_n},\overline{c'_n},\overline{d'_n},\overline{e'_n},\overline{f'_n})$ and $0&lt;a'_n&lt;1,0&lt;b'_n&lt;1,0&lt;c'_n&lt;1,0&lt;d'_n&lt;1 ,0&lt;e'_n&lt;1 ,0&lt;f'_n&lt;1$.</p> <p>and $(a'_n,b'_n,c'_n,d'_n,e'_n,f'_n)\in[0,1]^6 \mapsto (1-\delta_a,1-\delta_b,1-\delta_c,1-\delta_d,1-\delta_e,1-\delta_f)$</p> <p>So $\lfloor au_n\rfloor \lfloor bu_n\rfloor \lfloor cu_n\rfloor=(au_n-a_n) (bu_n-b_n)(cu_n-c_n)=abc(u_n)^3-abc_n(u_n)^2-acb_n(u_n)^2-bca_n(u_n)^2+\dots$</p> <p>So, as already shown by <strong>dxiv</strong>, $$\lim\frac{\lfloor au_n\rfloor \lfloor bu_n\rfloor \lfloor cu_n\rfloor}{u_n^3}=abc$$</p> <p>So $abc=def$ because $\lfloor au_n\rfloor \lfloor bu_n\rfloor \lfloor cu_n\rfloor=\lfloor du_n\rfloor \lfloor eu_n\rfloor \lfloor fu_n\rfloor$</p> <p>And $$\lim \frac{\lfloor au_n\rfloor \lfloor bu_n\rfloor \lfloor cu_n\rfloor-abc(u_n)^3}{u_n^2}=-ab\delta_c-ac\delta_b -bc\delta_a$$</p> <p>We have also $$\lim \frac{\lfloor av_n\rfloor \lfloor bv_n\rfloor \lfloor cv_n\rfloor-abc(v_n)^3}{v_n^2}=-ab(1-\delta_c)-ac(1-\delta_b) -bc(1-\delta_a)$$</p> <p>So $$ab+ac+bc=-\lim \frac{\lfloor au_n\rfloor \lfloor bu_n\rfloor \lfloor cu_n\rfloor-abc(u_n)^3}{u_n^2}-\lim
\frac{\lfloor av_n\rfloor \lfloor bv_n\rfloor \lfloor cv_n\rfloor-abc(v_n)^3}{v_n^2}$$</p> <p>So $ab+ac+bc=de+df+ef$.</p> <p><strong>Lemma</strong>: We show $\overline{F}=\overline{\mathbb{N}x}$ is a subgroup. For all $k \in \mathbb{N}$, let the compact $A_k=kx+\overline{\mathbb{N}x}\subset \overline{\mathbb{N}x}$. And for all $k$, $0\in A_k$. So $0 \in G=\cap_k A_k$. $x+G\subset G$. So $\mathbb{N}x+G\subset G$. </p> <p>$G$ is compact so $\overline{\mathbb{N}x}\subset G$.</p> <p>As $G \subset A_0=\overline{\mathbb{N}x}$, we have $G=\overline{\mathbb{N}x}$.</p> <p>As $0 \in\overline{\mathbb{N}x}=G \subset A_1=x+\overline{\mathbb{N}x}$, we have $-x\in \overline{\mathbb{N}x}$. So $ \overline{\mathbb{N}x}= \overline{\mathbb{Z}x}$</p> <p><strong>Remark</strong>: for all $m\in\mathbb{N}$, we can choose $(\delta_a,\delta_b,\dots,\delta_f)=(\{ma\},\{mb\},\dots,\{mf\})\in \overline{F}$, and $(u_n)$ such that $u_n x\mapsto \delta$. So we have $ab\{mc\}+ac\{mb\}+bc\{ma\}=de\{mf\}+df\{me\}+ef\{md\}$ for all $m\in\mathbb{N}$.</p>
3,991,351
<p>As stated in the title.</p> <p>Any arbitrary function can be expressed as <span class="math-container">$$f(x)=\frac{a_0}{2}+\sum^{\infty}_{n=1}(a_n\cos(nx)+b_n\sin(nx)) \tag{1}$$</span> where <span class="math-container">$$a_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\cos(nx)dx \tag{2}$$</span> <span class="math-container">$$b_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\sin(nx)dx \tag{3}$$</span></p> <hr /> <p>I am trying to show that (1)-(3) are consistent with <span class="math-container">$f(x)=e^{ix}=\cos(x)+i\sin(x)$</span>.</p> <p>However, for <span class="math-container">$a_n=\frac{1}{\pi}\int^{\pi}_{-\pi}e^{ix}\cos(nx)dx$</span> I got <span class="math-container">$a_n=0$</span>, similarly, <span class="math-container">$b_n=0$</span>.</p> <p>What's gone wrong? Are the integral limits <span class="math-container">$\int_{-\pi}^{\pi}$</span> not valid in general, does it require modifications for non-periodic functions?</p>
GEdgar
442
<p>Let's try this definition: <span class="math-container">$f(x) = e^{ix}$</span> is the unique differentiable function <span class="math-container">$f : \mathbb R \to \mathbb C$</span> that satisfies <span class="math-container">$$ f'(x) = if(x),\qquad f(0)=1. \tag4$$</span> Then let's try to evaluate the required integrals <span class="math-container">$(2),(3)$</span> using integration by parts.<br /> <em>We will do this only using <span class="math-container">$(4)$</span>. No complex exponentials appear in what I do below, so I will not accidentally use other properties of <span class="math-container">$e^{ix}$</span></em></p> <p>Start with <span class="math-container">$(2)$</span>, integrate by parts, use <span class="math-container">$(4)$</span> on the result. We get <span class="math-container">$$ a_n = \frac{-i}{n}\;b_n \tag5$$</span> Start with <span class="math-container">$(3)$</span> in the same way. We get <span class="math-container">$$ b_n = \frac{i}{n}\;a_n + \frac{(-1)^n}{\pi n}(f(-\pi)-f(\pi)) \tag6$$</span> Solving the system <span class="math-container">$(5),(6)$</span>, we get (in case <span class="math-container">$n \ne 1$</span>) <span class="math-container">$$ a_n = \frac{i n (-1)^n}{\pi(n^2-1)}\;(f(\pi)-f(-\pi)) \\ b_n = \frac{n(-1)^n}{\pi(n^2-1)}\;(f(-\pi)-f(\pi)) $$</span></p> <p>Unless we can prove <span class="math-container">$f(\pi) = f(-\pi)$</span> from the definition <span class="math-container">$(4)$</span>, it seems we will not be able to complete this.</p> <p>Now consider <span class="math-container">$n=1$</span>. We get equations <span class="math-container">$$ b_1 = ia_1 \\ a_1 = a_1 +\frac{i}{\pi}\;(f(\pi)-f(-\pi)) $$</span> so we conclude <span class="math-container">$f(\pi)-f(-\pi) = 0$</span>. This tells us <span class="math-container">$a_n = b_n = 0$</span> for <span class="math-container">$n \ne 1$</span>. Write <span class="math-container">$A = a_1$</span>. 
Putting this all into <span class="math-container">$(1)$</span> we have <span class="math-container">$$ f(x) = A\big(\cos(x) + i\sin(x)\big) \tag7$$</span> Finally, we evaluate the constant <span class="math-container">$A$</span>. For that we use <span class="math-container">$f(0)=1$</span> in <span class="math-container">$(7)$</span> to get <span class="math-container">$A=1$</span>.</p>
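<p>(A numerical cross-check I have added: computing $(2)$ and $(3)$ for $f(x)=e^{ix}$ by quadrature gives $a_1=1$, $b_1=i$ and all other coefficients $0$ — consistent with $(7)$ and $A=1$, and showing that the coefficients are not all zero, contrary to the computation in the question.)</p>

```python
import cmath
import math

def fourier_coeffs(f, n, samples=2000):
    # midpoint rule on [-pi, pi] for (2) and (3); for smooth periodic
    # integrands this is essentially exact
    h = 2 * math.pi / samples
    a = b = 0.0
    for k in range(samples):
        x = -math.pi + (k + 0.5) * h
        a += f(x) * math.cos(n * x) * h
        b += f(x) * math.sin(n * x) * h
    return a / math.pi, b / math.pi

f = lambda x: cmath.exp(1j * x)
a1, b1 = fourier_coeffs(f, 1)
a2, b2 = fourier_coeffs(f, 2)
```
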
3,354,684
<p>I was trying to prove the following inequality:</p> <p><span class="math-container">$$|\sin n\theta| \leq n\sin \theta \ \text{for all n=1,2,3... and } \ 0&lt;\theta&lt;π $$</span></p> <p>I succeeded in proving this via induction, but I didn't get a "feel" for the proof. Are there other proofs of this inequality?</p>
Conrad
298,272
<p>Not sure that this is what you want, but a neat way to do it is noticing that if <span class="math-container">$0 &lt; \theta &lt; \pi$</span>:</p> <p><span class="math-container">$|1+e^{2i\theta}+...+e^{2i(n-1)\theta}|=\frac{|\sin (n\theta)|}{\sin (\theta)}$</span> and then use the triangle inequality on LHS</p>
88,319
<p>I've had the same effect in Mathematica 9 and 10.</p> <p>I'm trying to color a 3D Plot with another function, let's call it colorFun ( it should highlight the areas where the colorFun is above a certain threshold), but ColorFunction seems to use the wrong coordinates.</p> <p>Horribly colored minimal example</p> <pre><code>colorFun := Function[{x, y},If[x &lt; y, Red, Blue]] Plot3D[Evaluate[x^2+y^2],{x,0,1},{y,0,2},ColorFunction-&gt;colorFun] </code></pre> <p><img src="https://i.stack.imgur.com/ecaTd.png" alt="Holy cow it&#39;s ugly!" title="sooo ugly!"></p> <p>Note that x and y have different intervals plotted, so the divide should not be through the middle. Similar things happen if you change the colorFun to something like y&lt;0.5 . It seems that the ColorFunction is not using the same coordinates as the function, but rather a kind of normalized version, always going from 0 to 1.</p> <p>Is this a bug, or is Mathematica beating my ability to understand computers again?</p>
GeckoGeorge
26,921
<p>And immediately after submitting, I found the answer <a href="https://mathematica.stackexchange.com/questions/58951/set-the-scale-in-colorfunction">here</a>:</p> <p>Using ColorFunctionScaling->False</p> <pre><code>Plot3D[Evaluate[x^2+y^2],{x,0,1},{y,0,2},ColorFunction-&gt;colorFun,ColorFunctionScaling-&gt;False] </code></pre> <p>gives the correct coloring. Sorry to bother you all, and thanks for listening!</p> <p>I decided to keep the question and answer it myself for others who might do the same searches I did, since finding the answer was kinda random for me.</p>
552,395
<p>I have $f(x)=(2x,e^x)$. What does the notation $Df(\frac{\partial}{\partial x})$ mean?</p> <p>Certainly $Df(x)=(2,e^x)$, but how can I replace $x$ with $\frac{\partial}{\partial x}$?</p> <p>In particular, how can I make sense of $e^{\frac{\partial}{\partial x}}$?</p>
dezign
32,937
<p>In your case $f$ is a map from $\mathbb{R}^1$ to $\mathbb{R}^2$, so at any point $x\in \mathbb{R}^1$, $Df$ is a linear map from $T\mathbb{R}^1_x$ to $T\mathbb{R}^2_{f(x)}$, and is given by the $2\times 1$ matrix $(2,e^x)^T$. To clarify the notation, $\frac{\partial}{\partial x}$ is notation for the unit tangent vector in the tangent space at $x$ (pointing in the positive direction), so if $u$ and $v$ are your coordinates on $\mathbb{R}^2$ we have $Df\left( \frac{\partial}{\partial x}\right)=2\frac{\partial}{\partial u}+e^x\frac{\partial}{\partial v}$ (where $\frac{\partial}{\partial u}$ and $\frac{\partial}{\partial v}$ are the unit tangent vectors in the positive $u$ and $v$ directions, which are a basis for the tangent space at $f(x)$).</p>
4,364,421
<p>Are all solutions of the equation <span class="math-container">$x^2-4My^2=K^2$</span> multiples of <span class="math-container">$K$</span>? I am considering <span class="math-container">$M$</span> not a perfect square. All my tests in Python suggest it is true, but...</p> <p>My code:</p> <pre><code>for x in range(1, 8000):
    for y in range(1, 8000):
        if (x*x - 20*y*y) == 36:
            print(x, y)
</code></pre> <p>Result: <span class="math-container">$ 54, 12\\ 966, 216$</span></p> <p>All multiples of <span class="math-container">$6$</span>.</p> <p>Can anyone give an example of a solution coprime (except for a factor of <span class="math-container">$2$</span>) with <span class="math-container">$K$</span>?</p>
Thomas Andrews
7,933
<p>For the case <span class="math-container">$M=5, K=6,$</span> all solutions have <span class="math-container">$x$</span> divisible by <span class="math-container">$6.$</span></p> <p>Modulo <span class="math-container">$2,$</span> you'd get <span class="math-container">$x^2\equiv 0\pmod{2},$</span> so <span class="math-container">$x$</span> is even.</p> <p>Modulo <span class="math-container">$3,$</span> you get <span class="math-container">$x^2+y^2\equiv 0\pmod 3.$</span> A little work shows that this implies <span class="math-container">$x$</span> must be divisible by <span class="math-container">$3.$</span></p> <hr /> <p>In fact, this shows that if <span class="math-container">$x^2-20y^2$</span> is divisible by <span class="math-container">$6$</span> then <span class="math-container">$x$</span> is divisible by <span class="math-container">$6.$</span> It really has nothing to do with the &quot;square&quot; in <span class="math-container">$6^2.$</span></p> <hr /> <p>If <span class="math-container">$x^2-4My^2$</span> is divisible by <span class="math-container">$2,$</span> then <span class="math-container">$x$</span> will be, of course.</p>
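<p>(A slightly larger brute-force search than the one in the question, added as confirmation — every solution found has $x$ divisible by $6$:)</p>

```python
import math

solutions = []
for y in range(1, 5000):
    x2 = 36 + 20 * y * y
    x = math.isqrt(x2)      # exact integer square root
    if x * x == x2:
        solutions.append((x, y))
```
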
4,027,889
<p>I'm trying to graph <span class="math-container">$f(x,y)=\ln(x)-y$</span>, however, I am not sure how as all of my tools are refusing to graph it.</p> <p>Can you please help me?</p> <p>Thanks</p>
VIVID
752,069
<p>You cannot draw it on the <span class="math-container">$Oxy$</span>-plane. You have two input values <span class="math-container">$x$</span> and <span class="math-container">$y$</span> and so you need one more axis to represent the value of <span class="math-container">$f$</span>. Below is the graph drawn via GeoGebra:</p> <p><a href="https://i.stack.imgur.com/rd3D0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rd3D0.png" alt="enter image description here" /></a></p>
121,897
<p>I want to check whether a user input the function with all the specified variables or not. For that, I chose to replace the variables with some values and check whether the result is a number or not via a Do loop. I am thinking there might be a more elegant way of doing it, such as <a href="http://reference.wolfram.com/language/ref/ReplaceList.html" rel="nofollow"><code>ReplaceList</code></a>, but it is not working the way I want. </p> <p>Let's assume </p> <pre><code> u = z^2 Sin[π x] + z^4 Cos[π y] + I y^6 Cos[2 π y] + w; (*and user give variables as *) vas = {x, y, z, w}; (* I need to check if all the variables are in the function *) Do[ u = u /. vas[[i]] -&gt; 1.1; (* 1.1 is within where the function is going to get \ evaluated *) If[i == 4, numc9 = NumericQ[u]; Print[numc9];]; (* if numc9 False either there infinity or one of \ the variables in the list is not present in the function or function \ has extra variable(s) *) Print[u]; , {i, 4}] </code></pre> <p>Is there a more elegant way of doing it?</p> <p><strong>EDIT I</strong></p> <p>After @Mr.Wizard's answer I realized that my question was not covering everything I wanted. @Mr.Wizard's answer works if I am only checking that all the variables are present in u. However, at the same time I want to check that there are no extra variables in u, because at the end I want to evaluate u using vas, and if u has an extra variable I won't get a value at the end. </p> <p>For example:</p> <pre><code> u = z^2 Sin[π x] + z^4 Cos[π y] + I y^6 Cos[2 π y] + w + z^p; vas = {z, x, y, p}; </code></pre> <p>The Level and FreeQ commands give all the variables in the function u. After that, you check whether all the variables in vas are present in this list of variables coming from Level or FreeQ, and in the example above they are. </p> <p>In this situation @J.M.'s undocumented command does what I need. Otherwise I will need to stick with my Do loop.</p>
kglr
125
<p>Another <em>undocumented</em> function: <code>Internal`LiterallyOccurringQ</code>:</p> <pre><code>And @@ (Internal`LiterallyOccurringQ[u, #] &amp; /@ vas) </code></pre> <blockquote> <p>True</p> </blockquote> <pre><code>And @@ (Internal`LiterallyOccurringQ[u, #] &amp; /@ {x, y, t}) </code></pre> <blockquote> <p>False</p> </blockquote>
381,036
<p>I must show that $f(x)=p{\sqrt{x}}$ , $p&gt;0$ is continuous on the interval [0,1). </p> <p>I'm not sure how I show that a function is continuous on an interval, as opposed to at a particular point. </p>
Ben
33,679
<p>Firstly consider the case where $x_0 \in (0,1)$; then: $$ |f(x)-f(x_0)| =|p{(\sqrt{x}-\sqrt{x_0})}|= |p|\lvert{x-x_0 \over \sqrt{x} +\sqrt{x_0}}\rvert = |p|{|x-x_0| \over \sqrt{x} +\sqrt{x_0}} \tag{1}$$</p> <p>We want to show that this gets small as we move toward $x_0$. Take the distance of $x$ from $x_0$ to be less than $\epsilon_0$, and we have:</p> <p>$$|x-x_0|&lt; \epsilon_0$$ $$x_0-\epsilon_0 &lt; x &lt; \epsilon_0 + x_0$$ $$\sqrt{x_0-\epsilon_0} &lt; \sqrt{x} \tag{*}$$ $$\sqrt{x_0-\epsilon_0} +\sqrt{x_0} &lt; \sqrt{x} +\sqrt{x_0}$$ $${1 \over \sqrt{x} +\sqrt{x_0}}&lt;{1 \over\sqrt{x_0-\epsilon_0} +\sqrt{x_0} }$$ $${|p||x-x_0| \over \sqrt{x} +\sqrt{x_0}}&lt;{|p||x-x_0| \over\sqrt{x_0-\epsilon_0} +\sqrt{x_0} }$$</p> <p>And so we have bounded $(1)$. Also note that the step in $(*)$ is justified as $x_0$ is inside an $\textit{open}$ interval, and so an $\epsilon_0$ exists so that the quantity $x_0-\epsilon_0&gt;0$.</p> <p>Then we want ${|p||x-x_0| \over\sqrt{x_0-\epsilon_0} +\sqrt{x_0} }&lt;\epsilon$ so we start in reverse, assuming what we want, and then work back up the chain of reasoning to show it is true.</p> <p>$${|p||x-x_0| \over\sqrt{x_0-\epsilon_0} +\sqrt{x_0} }&lt;\epsilon \quad \Leftrightarrow$$ $$|p||x-x_0|&lt;\epsilon (\sqrt{x_0-\epsilon_0} +\sqrt{x_0} ) \quad \Leftrightarrow$$ $$|x-x_0| &lt; {\epsilon(\sqrt{x_0-\epsilon_0} +\sqrt{x_0})\over |p|} $$</p> <p>So by taking $|x-x_0|&lt; \min({\epsilon(\sqrt{x_0-\epsilon_0} +\sqrt{x_0})\over |p|} , \epsilon_0)$ we have that $|f(x)-f(x_0)|&lt;\epsilon$, and it is continuous for all $x_0 \in (0,1)$.</p> <p>Now for the case of continuity at $0$: clearly this can only be continuity from the right, as $f$ is not defined for negative values. So we wish to show that $|p \sqrt{x} - 0|=|p\sqrt{x}| = |p|\sqrt{x}&lt; \epsilon$ for $x $ such that $0&lt;x&lt;\delta$.
Again assume what you want, and we can work backwards.</p> <p>$$|p|\sqrt{x}&lt;\epsilon$$ $$\sqrt{x}&lt;{\epsilon \over |p|}$$ $$x&lt;({\epsilon \over |p|})^2$$</p> <p>So take $\delta = ({\epsilon \over |p|})^2$ and you're done :-).</p>
2,546,497
<p>987x ≡ 610 (mod 1597)</p> <p>Is this the correct way of applying Fermat's little theorem to linear congruences? Does it make any sense? If not, could someone advise a bit?</p> <p>Since gcd(987,1597)=1</p> <p>-> 987^(1597-1) ≡ 1 (mod 1597)</p> <p>-> 987^1596 ≡ 1 (mod 1597)</p> <pre><code>610 ≡ 610 (mod 1597)
987^1596 * 610 ≡ 610 (mod 1597)
987 * 987^1595 * 610 ≡ 610 (mod 1597)
-&gt; x0 = 987^1595 * 610
-&gt; [987^1595 * 610]127
</code></pre>
Green
357,732
<p>It's because all the 1's are indistinguishable. For example, if you have 1001 and the 1's and 0's were distinguishable, you have $4!$ options, but since they aren't you have $4!$ divided by $2! * 2!$ (which is equivalent to 4 choose 2). </p> <p>As a result, to find out the total number of ways to have r 1's in a string of length n, you have to find the total way to choose r positions out of the n total to assign 1 to.</p>
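<p>(A brute-force confirmation of the count, which I have added:)</p>

```python
from itertools import product
from math import comb, factorial

def count_strings(n, r):
    # binary strings of length n with exactly r ones, counted directly
    return sum(1 for bits in product((0, 1), repeat=n) if sum(bits) == r)
```
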
3,693,735
<p><span class="math-container">$X \not= \emptyset$</span>, <span class="math-container">$Y \not= \emptyset$</span>, and <span class="math-container">$(X,T)$</span> and <span class="math-container">$(Y,V)$</span> are topological spaces. Let the function <span class="math-container">$f:X \rightarrow Y$</span> be a homeomorphism and let <span class="math-container">$A \subseteq X$</span>. If <span class="math-container">$x \in X $</span> is an accumulation point of <span class="math-container">$A$</span>, show that <span class="math-container">$f(x)$</span> is an accumulation point of <span class="math-container">$f(A)$</span>.</p>
Community
-1
<p>I think that a solution based solely on the Class Equation can be given, as follows.</p> <p>For your group, the Class Equation reads (see <em>e.g.</em> <a href="https://math.stackexchange.com/q/4185478/943729">here</a>):</p> <p><span class="math-container">$$pq=1+k_pp+k_qq \tag 1$$</span></p> <p>where <span class="math-container">$k_i$</span> are the number of conjugacy classes of size <span class="math-container">$i$</span>. Now, you have exactly <span class="math-container">$k_pp$</span> elements of order <span class="math-container">$q$</span> (they are the ones in the conjugacy classes of size <span class="math-container">$p$</span>)<span class="math-container">$^\dagger$</span>. Since each subgroup of order <span class="math-container">$q$</span> contributes <span class="math-container">$q-1$</span> elements of order <span class="math-container">$q$</span>, and two subgroups of order <span class="math-container">$q$</span> intersect trivially, then <span class="math-container">$k_pp=l(q-1)$</span> for some positive integer <span class="math-container">$l$</span> such that <span class="math-container">$p\mid l$</span> (because <span class="math-container">$p\nmid q-1$</span>). 
Therefore <span class="math-container">$(1)$</span> yields:</p> <p><span class="math-container">$$pq=1+l'p(q-1)+k_qq \tag 2$$</span></p> <p>for some positive integer <span class="math-container">$l'$</span>; but then <span class="math-container">$p\mid 1+k_qq$</span>, namely <span class="math-container">$1+k_qq=np$</span> for some positive integer <span class="math-container">$n$</span>, and finally <span class="math-container">$(2)$</span> yields:</p> <p><span class="math-container">$$q=n+l'(q-1) \tag 3$$</span></p> <p>which holds <em>for arbitrary</em> <span class="math-container">$q$</span> if and only if <span class="math-container">$l'=1$</span> <em>and <span class="math-container">$n=1$</span></em>, whence in particular <span class="math-container">$1+k_qq=p$</span> or, equivalently, <span class="math-container">$p\equiv 1 \pmod q$</span>.</p> <p>Incidentally, this provides also an answer to <a href="https://math.stackexchange.com/q/1373734/943729">this other question</a>, where the roles of <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are swapped, being there <span class="math-container">$p&lt;q$</span>. In fact, the number of conjugacy classes is given by:</p> <p><span class="math-container">\begin{alignat}{1} |G/\sim_{conj}| &amp;= 1+k_p+k_q \\ &amp;= 1+(q-1)+\frac{p-1}{q} \\ &amp;= q+\frac{p-1}{q} \\ \end{alignat}</span></p> <hr /> <p><span class="math-container">$^\dagger$</span> An element lies in a conjugacy class of size <span class="math-container">$p$</span> (respectively, <span class="math-container">$q$</span>) if and only if its order is <span class="math-container">$q$</span> (respectively, <span class="math-container">$p$</span>). This claim follows from the Orbit-Stabilizer Theorem and the fact that, for <span class="math-container">$g\in G\setminus\{e\}$</span>, it is <span class="math-container">$\langle g\rangle=C_G(g)$</span>.</p>
1,771,920
<p>Okay so here's the question </p> <blockquote> <p>Seventy percent of the light aircraft that disappear while in flight in a certain country are subsequently discovered. Of the aircraft that are discovered, 60% have an emergency locator, whereas 90% of the aircraft not discovered do not have such a locator. Suppose that a light aircraft has disappeared. If it has an emergency locator, what is the probability that it will be discovered?</p> </blockquote> <p>Anndd here's <strong>my</strong> answer</p> <p><a href="https://i.stack.imgur.com/IyadD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IyadD.png" alt="answer"></a></p> <p>The answer to this question was, however, given as 93%. I don't understand how they got that answer and I was pretty confident in my solution. Can someone either tell me the answer given in the text is incorrect or what's wrong with my solution?</p> <p>Thanks so much!</p>
Luis Vera
178,730
<p>Intuitively, you can think of $P(E \mid D)$ in the numerator as "given that you are already inside the $70 \%$ that has been discovered, what is the probability that inside that $70 \% $ it does have an emergency locator?" so you don't take $.7$ into account in the calculation, you only take $.6$.</p> <p>You can also use the formula $$P(D \mid E) = \frac {P( D \cap E)}{P(E)}= \frac {(.7)(.6)}{(.7)(.6)+(.3)(.1)}$$</p>
1,771,920
<p>Okay so here's the question </p> <blockquote> <p>Seventy percent of the light aircraft that disappear while in flight in a certain country are subsequently discovered. Of the aircraft that are discovered, 60% have an emergency locator, whereas 90% of the aircraft not discovered do not have such a locator. Suppose that a light aircraft has disappeared. If it has an emergency locator, what is the probability that it will be discovered?</p> </blockquote> <p>Anndd here's <strong>my</strong> answer</p> <p><a href="https://i.stack.imgur.com/IyadD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IyadD.png" alt="answer"></a></p> <p>The answer to this question was, however, given as 93%. I don't understand how they got that answer and I was pretty confident in my solution. Can someone either tell me the answer given in the text is incorrect or what's wrong with my solution?</p> <p>Thanks so much!</p>
heropup
118,193
<p>Let's use a frequency table of a deterministic cohort of light aircraft that disappear. Suppose there are $N = 100$ such aircraft. As $70\%$ of these are discovered, this means $70$ aircraft belong in the group $D$, indicating that they are subsequently discovered, and $30$ aircraft belong in the group $\bar D$, indicating they are not discovered.</p> <p>Among the $70$ discovered aircraft, $60\%$ have an emergency locator, so $$D \cap L = (70)(0.6) = 42$$ where $L$ represents the event that an aircraft has an emergency locator. Thus there are $$D \cap \bar L = (70)(0.4) = 28$$ that were discovered but had no emergency locator.</p> <p>Similarly, among the $30$ undiscovered aircraft, $$\bar D \cap \bar L = (30)(0.9) = 27$$ had no emergency locator; and $$\bar D \cap L = (30)(0.1) = 3$$ had an emergency locator.</p> <p>We summarize the above in the following table:</p> <p>$$\begin{array}{c|c|c|c} &amp; L &amp; \bar L &amp; \\ \hline D &amp; 42 &amp; 28 &amp; 70 \\ \hline \bar D &amp; 3 &amp; 27 &amp; 30 \\ \hline &amp; 45 &amp; 55 &amp; 100 \end{array}$$ </p> <p>Therefore, given that an aircraft has an emergency locator--that is to say, is one of the $45$ aircraft in column $L$--the number of discovered aircraft is $42$, thus the proportion of such aircraft is $42/45 \approx 0.933$. </p> <hr> <p>In the language of probability, where $D$ and $L$ are events, we are given $$\Pr[D] = 0.7, \quad \Pr[L \mid D] = 0.6, \quad \Pr[\bar L \mid \bar D] = 0.9,$$ and we wish to compute $$\Pr[D \mid L] = \frac{\Pr[L \mid D]\Pr[D]}{\Pr[L]}.$$ Then $$\Pr[L] = \Pr[L \mid D]\Pr[D] + \Pr[L \mid \bar D]\Pr[\bar D] = (0.6)(0.7) + (1 - 0.9)(1 - 0.7) = 0.45,$$ and $$\Pr[D \mid L] = \frac{(0.6)(0.7)}{0.45} = \frac{42}{45} \approx 0.933,$$ as we found using the deterministic cohort above.</p>
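<p>(The same computation in a few lines of Python, added as a check:)</p>

```python
p_D = 0.7                 # P(discovered)
p_L_given_D = 0.6         # P(locator | discovered)
p_L_given_notD = 1 - 0.9  # P(locator | not discovered)

# law of total probability, then Bayes' rule
p_L = p_L_given_D * p_D + p_L_given_notD * (1 - p_D)
p_D_given_L = p_L_given_D * p_D / p_L
```
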
1,919,880
<p>Let $B=\begin{bmatrix} 0 &amp; 0 &amp; 0 &amp; -8 \\ 1&amp;0&amp;0 &amp; 16\\ 0 &amp;1&amp;0&amp; -14\\ 0&amp;0&amp;1&amp;6 \end{bmatrix}$</p> <p>Consider $B$ as a real matrix. Find its Jordan form. So, the characteristic polynomial for $B$ is $(x^2-2x+2)(x-2)^2$. Suppose $B$ represents $T$ in the standard basis. By the primary decomposition theorem, we have that $V$ is the direct sum of $K=Ker(T^2-2T+2I)$ and $L=Ker[(T-2)^2]$. $L$ has dimension 2 (direct computation here), so $K$ should have dimension 2, too. Defining an operator $N$ over $Null(T-2)^2$ as $N=T-2$, we have that $N$ is nilpotent (it "dies" when raised to the power two), and also $[T]_B=[N]_B+2[I]_B$, for every basis of $L$. In particular, there exists a "cyclic basis for $N$" in $L$, i.e., a basis for which $N$ can be represented as \begin{bmatrix} 0 &amp; 0 \\ 1 &amp; 0 \\ \end{bmatrix} And then, the matrix of $T$ with respect to that same basis is</p> <p>\begin{bmatrix} 2 &amp; 0 \\ 1 &amp; 2 \\ \end{bmatrix}</p> <p>Since the rest can't be factored into linear factors, I guess we can just try to find a basis for $K$ and the matrix will look something like this:</p> <p>$D=\begin{bmatrix} 2 &amp; 0 &amp; 0 &amp; 0 \\ 1&amp;2&amp;0 &amp; 0\\ 0 &amp;0&amp;*&amp; *\\ 0&amp;0&amp;*&amp;* \end{bmatrix}$</p> <p>The problem is, my friends, that when I compute $B^2-2B+2I$ through WA, and then row reduce it, it says that it is equivalent to the identity matrix, so I can't extract any basis for $K$. What's wrong here?</p>
Sarvesh Ravichandran Iyer
316,409
<p>You are getting something wrong. The point is that the eigenvalues of this matrix are $2$, $1+i$ and $1-i$, of which the eigenvalue $2$ has algebraic multiplicity $2$. Since $B$ is a companion matrix, its minimal polynomial equals its characteristic polynomial, so the eigenvalue $2$ contributes a genuine $2\times 2$ Jordan block, and the (complex) Jordan canonical form $J$ looks like: $$ \begin{pmatrix} 2 &amp; 1 &amp; 0 &amp; 0 \\ 0 &amp; 2 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 1-i &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 1+i \end{pmatrix} $$ This is the Jordan form of the matrix. Now, all you need to do is find the following: one eigenvector for $1+i$, one eigenvector for $1-i$, and for the eigenvalue $2$ one eigenvector together with one generalized eigenvector (the eigenspace of $2$ is only one-dimensional, since $B-2I$ has rank $3$). </p> <p>For this, you need to solve the equations: </p> <p>$B\vec{x}=(1+i)\vec{x}$</p> <p>$B\vec{x}=(1-i)\vec{x}$</p> <p>$(B-2I)\vec{v}=\vec{0}$, and then $(B-2I)\vec{w}=\vec{v}$</p> <p>Adjoining $\vec{v},\vec{w}$ and the two complex eigenvectors as columns, in the same order as the blocks of $J$, gives the transformation matrix $S$ such that $B=SJS^{-1}$ (no normalization is needed).</p>
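<p>One can verify the Jordan structure with SymPy (a quick check only; in particular the rank computation confirms that the eigenspace of $2$ is one-dimensional, so a genuine $2\times 2$ block occurs):</p>

```python
from sympy import Matrix, eye

B = Matrix([[0, 0, 0, -8],
            [1, 0, 0, 16],
            [0, 1, 0, -14],
            [0, 0, 1, 6]])

S, J = B.jordan_form()        # B = S J S^(-1)
print(J)                      # one 2x2 block for 2, 1x1 blocks for 1-I and 1+I
print((B - 2*eye(4)).rank())  # 3, so the eigenspace of 2 is one-dimensional
```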
2,339,521
<p>In a game, each trial has two possible outcomes, success or failure. Two trials $H$ and $K$ are carried out. The probability of success for trial $H$ is $x$, and the probability of success for trial $K$ is $2/5$ if trial $H$ is a success, and $x/2$ if trial $H$ is a failure. Given that the probability of trials $H$ and $K$ ending with exactly one success is $1/5$, determine the value of $x$. </p> <p>I approached this question by setting $(1-x)(x/2) = 1/5$, which did not give the answer at all. What am I missing?</p>
G Tony Jacobs
92,129
<p>There are two ways to obtain exactly one success: Success on $H$ and failure on $K$, or failure on $H$ and success on $K$. The probability of the first option is $(x)(3/5)$, and the probability of the second option is $(1-x)(x/2)$. The sum of those two probabilities should be $1/5$.</p> <p>Does that help?</p>
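<p>For what it's worth, solving the resulting equation with SymPy (a quick check, nothing more) gives the roots directly:</p>

```python
from sympy import symbols, Eq, solve, Rational

x = symbols('x')
# exactly one success: H succeeds & K fails, or H fails & K succeeds
eq = Eq(x*Rational(3, 5) + (1 - x)*(x/2), Rational(1, 5))
sols = solve(eq, x)
print(sols)  # the roots are 1/5 and 2
```

Only $x = 1/5$ lies in $[0,1]$, so that is the answer.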
202,379
<p>Suppose for some constants $\alpha,\beta,\gamma$ that we're given the following ODE: $$\alpha y''+\beta xy'+\gamma y=0.$$ Now, I know how to find the general solution for $y(x)$ if any of $\alpha,\beta,\gamma$ should turn out to be $0$, but I've just ended up with the ODE $$2y''+xy'+y=0.$$ Can anybody give me the first (few) step(s) of a general procedure one can use for such ODEs?</p>
Ayman Hourieh
4,583
<p>Write $xy'+y$ as $(xy)'$ and integrate to get: $$ y' + \frac{x}{2}y = c_1 $$</p> <p>Which can be solved using the integrating factor $\exp\left(\int \frac{x}{2} \, dx\right) = \exp\left(\frac{x^2}{4}\right)$. The solution cannot be written in terms of elementary functions though:</p> <p>\begin{align*} \exp\left(\frac{x^2}{4}\right)y' + \dfrac{x}{2}\exp\left(\frac{x^2}{4}\right)y &amp;= c_1 \exp\left(\frac{x^2}{4}\right) \\ \exp\left(\frac{x^2}{4}\right)y &amp;= c_1 \int \exp\left(\frac{x^2}{4}\right) \, dx + c_2 \end{align*}</p> <p>Thus: $$ y = c_1 \exp\left(-\frac{x^2}{4}\right) \int \exp\left(\frac{x^2}{4}\right) \, dx + c_2\exp\left(-\frac{x^2}{4}\right) $$</p> <p>This only works if $\beta = \gamma$, but it does work for the ODE you have.</p>
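<p>As a sanity check, SymPy confirms that this general solution (with the non-elementary integral left unevaluated) satisfies the original ODE $2y''+xy'+y=0$:</p>

```python
from sympy import symbols, exp, diff, Integral

x, t, c1, c2 = symbols('x t c1 c2')

# the non-elementary antiderivative of exp(x^2/4), kept unevaluated
F = Integral(exp(t**2/4), (t, 0, x))
y = c1*exp(-x**2/4)*F + c2*exp(-x**2/4)

residual = (2*diff(y, x, 2) + x*diff(y, x) + y).expand()
print(residual)  # 0
```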
701,241
<p><code>¬(p∨q)∧(p∨r)</code> Does this mean the negation of both <code>(p∨q)</code> and <code>(p∨r)</code>, or just of <code>(p∨q)</code>? If it were just <code>p∨q</code>, it would make more sense to me for the negation to be inside the brackets, like <code>(¬p∨q)</code>, but maybe that's just the programmer in me. I have also seen <code>(¬p∨¬q)</code>; does that mean the same as <code>¬(p∨q)</code>? It starts to get rather confusing.</p> <p>Does it have to do with the order of evaluation, for example looking at <code>(p∨q)</code> first and then its negation?</p>
homegrown
125,659
<p>Firstly, the $\neg$ is applied only to $(p\lor q).$ If it was intended for the whole statement to be negated, then it would look like $\neg[(p\lor q)\land (p\lor r)].$ Now, for your second question, $\neg(p\lor q)$ is <strong>not</strong> logically equivalent to $(\neg p\lor \neg q).$ According to De Morgan's Law, $\neg(p\lor q)\iff(\neg p\land\neg q),$ and $\neg(p\land q)\iff(\neg p\lor\neg q).$ To check if this is really the case, let's look at the truth table: $$\begin{array}{c|c|c|c|c|c|c} p&amp;q&amp;(p\lor q)&amp;\neg(p\lor q)&amp;\neg p&amp;\neg q&amp;(\neg p\land\neg q)\\ \hline T&amp;T&amp;T&amp;F&amp;F&amp;F&amp;F\\ T&amp;F&amp;T&amp;F&amp;F&amp;T&amp;F\\ F&amp;T&amp;T&amp;F&amp;T&amp;F&amp;F\\ F&amp;F&amp;F&amp;T&amp;T&amp;T&amp;T\end{array}$$ As you can see all the entries in the $\neg(p\lor q)$ column equal all the entries in the $(\neg p\land\neg q)$ column. This means that $\neg(p\lor q)\iff(\neg p\land\neg q).$ For more info on De Morgan's Law, here is a <a href="http://en.wikipedia.org/wiki/De_Morgan%27s_laws" rel="nofollow">link</a>.</p>
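<p>If it helps, the truth-table check above can also be done by brute force in a couple of lines of Python:</p>

```python
from itertools import product

rows = list(product([True, False], repeat=2))

# De Morgan: not(p or q) <=> (not p) and (not q), on every row of the truth table
demorgan_ok = all((not (p or q)) == ((not p) and (not q)) for p, q in rows)

# but not(p or q) is NOT the same as (not p) or (not q): they differ on some row
differs = any((not (p or q)) != ((not p) or (not q)) for p, q in rows)

print(demorgan_ok, differs)  # True True
```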
578,961
<p>Let $\mathbf{T}=[\mathbf{t}_1,\dots,\mathbf{t}_d]$ be an $m\times d$ matrix with $\mathbf{t}_i$ as its linearly independent columns. Also I assume $d&lt;\min(m,n)$. Let $\mathbf{H}$ be an $n\times m$ matrix. Let $\mathbf{W}$ be an $n \times n$ positive definite matrix. For $i=1,\dots,d$, let me define the matrices \begin{align} \mathbf{B}_i&amp;=\mathbf{H}\mathbf{t}_i\mathbf{t}_i^H\mathbf{H}^H \\ \mathbf{C}_i&amp;=\mathbf{W}+\sum_{k\neq i}\mathbf{H}\mathbf{t}_k\mathbf{t}_k^H\mathbf{H}^H \\ \mathbf{D}_i&amp;=\mathbf{C}_i^{-1/2}\mathbf{B}_i\mathbf{C}_i^{-1/2} \end{align} Here $\mathbf{A}^H$, $\mathbf{A}^{-1}$, $\mathbf{A}^{1/2}$ denote the Hermitian transpose, inverse, and Cholesky factor (or square root) of the matrix $\mathbf{A}$, respectively. </p> <p>Now all $\mathbf{D}_i$'s are rank-one matrices since each $\mathbf{B}_i$ is also a rank-one matrix. So they have one non-zero eigenvalue each. Let the non-zero eigenvalue of $\mathbf{D}_i$ be $\alpha_i$ for all $i\in\{1,\dots,d\}$.</p> <p><strong>CLAIM:</strong> $\alpha_1,\dots,\alpha_d$ are also the eigenvalues of $\mathbf{T}^H\mathbf{H}^H\mathbf{W}^{-1}\mathbf{H}\mathbf{T}$</p> <p>Is it true? If so, how do I prove it? It is becoming really difficult for me. </p>
user1551
1,551
<p>Let $u_i=W^{-1/2}Ht_i$ and $U=\pmatrix{u_1&amp;u_2&amp;\cdots&amp;u_d}\in M_{n\times d}(\mathbb{C})$. Then $\alpha_i=\|C_i^{-1/2}Ht_i\|^2=u_i^H(W+UU^H - u_iu_i^H)^{-1}u_i$ and $T^HH^HW^{-1}HT=U^HU$. In this formulation, since $U^HU$ depends solely on $U$ but $\alpha_i$ depends on both $U$ and $W$, there is no reason to believe that $\alpha_i$ is an eigenvalue of $U^HU$.</p>
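<p>A concrete numerical counterexample (my own small instance, taking $m=n=3$, $d=2$, and $\mathbf H$, $\mathbf W$ both equal to the identity) illustrates this:</p>

```python
import numpy as np

# Hypothetical small instance: m = n = 3, d = 2, H = W = I
t1 = np.array([1.0, 0.0, 0.0])
t2 = np.array([1.0, 1.0, 0.0])
T = np.column_stack([t1, t2])

C1 = np.eye(3) + np.outer(t2, t2)      # C_1 = W + H t_2 t_2^H H^H
alpha1 = t1 @ np.linalg.solve(C1, t1)  # nonzero eigenvalue of the rank-one D_1
evs = np.linalg.eigvalsh(T.T @ T)      # spectrum of T^H H^H W^{-1} H T

print(alpha1)        # 0.666...
print(np.sort(evs))  # [0.381..., 2.618...]
```

Here $\alpha_1 = 2/3$, while the spectrum of $\mathbf T^H\mathbf T$ is $\{(3\pm\sqrt5)/2\}$, so $\alpha_1$ is not an eigenvalue.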
3,956,828
<p>I can find the nth integral of <span class="math-container">$\ln(z)$</span> as follows: <span class="math-container">\begin{aligned} \left(\frac d{dz}\right)^{-n}\ln(z)&amp;=\frac1{\Gamma(n)}\int\limits_0^z(z-t)^{n-1}\ln(t)dt\\ &amp;=\frac1{n!}\left[\int\limits_0^z\frac1t(z-t)^ndt-z^n\ln(0)\right]\\ &amp;=\frac1{n!}\left[\int\limits_0^1\frac{z^nt^n-z^n+z^n}{1-t}dt-z^n\ln(0)\right]\\ &amp;=\frac{\ln(z)-H_n}{n!}z^n, \end{aligned}</span> but I can't get very far with <span class="math-container">$\ln(1+z/k)$</span>. I was able to come up with this but it is only a conjecture: <span class="math-container">\begin{aligned} \frac1{\Gamma(n)}\int\limits_0^z(z-t)^{n-1}\ln\left(1+\frac tk\right)dt&amp;=\frac{\ln\left(1+\frac{z}{k}\right)-H_n}{n!}(k+z)^n+\sum ^{n}_{i=0}\frac{H_{i}}{i!(n-i)!}z^{n-i}k^i\\ &amp;=\frac1{n!}\ln\left( 1+\frac{z}{k}\right)( k+z)^{n} -\sum ^{n}_{i=0}\frac{H_{n} -H_{i}}{i!( n-i) !} z^{n-i} k^{i}. \end{aligned}</span> I'm only using <span class="math-container">$k,n\in\mathbb{N}$</span> and <span class="math-container">$z\in\mathbb{R}$</span> for now. Any help proving this would be appreciated.</p>
Claude Leibovici
82,404
<p><em>This is not a proof</em></p> <p>Consider <span class="math-container">$$f_1=\int_0^z \log \left(1+\frac{z}{k}\right)\,dz\qquad \text{and} \qquad f_{n}=\int_0^z f_{n-1}\,dz$$</span> Define <span class="math-container">$$a_n=n\, a_{n-1} \qquad \text{with} \qquad a_1=1$$</span> and <span class="math-container">$$g_n=\frac{a_n} z \left(f_n-\frac{(k+z)^n }{n!}\log \left(1+\frac{z}{k}\right)\right)+k^{n-1}$$</span></p> <p>The terms <span class="math-container">$g_n$</span> appear to be <span class="math-container">$$g_n=-\frac { z\,P_{n-2}} {\text{lcm}(1,2,\cdots,n)} $$</span> where the polynomials <span class="math-container">$P$</span> are homogeneous in <span class="math-container">$k$</span>, <span class="math-container">$z$</span>, all coefficients being positive. None of these polynomials can be factored.</p> <p>Tested up to extremely large values of <span class="math-container">$n$</span>, your conjecture is verified.</p> <p>In any manner, if <span class="math-container">$$I_n=\frac1{\Gamma(n)}\int\limits_0^z(z-t)^{n-1}\log\left(1+\frac tk\right)dt$$</span> <span class="math-container">$$I_n=\frac{z^{n+1} }{n (n+1) (z+k) \Gamma(n)}\,\, _2F_1\left(1,n+1;n+2;\frac{z}{z+k}\right)$$</span></p>
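<p>One can also test the conjecture directly in Python, comparing the integral (evaluated here by composite Simpson quadrature) against the conjectured closed form for a few small <span class="math-container">$n$</span> (a numerical check only, not a proof):</p>

```python
import math

def simpson(f, a, b, m=2000):  # composite Simpson's rule, m even subintervals
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i*h) for i in range(1, m))
    return s * h / 3

def H(n):  # harmonic number H_n
    return sum(1.0/j for j in range(1, n + 1))

def lhs(n, k, z):  # (1/Gamma(n)) * integral_0^z (z-t)^(n-1) log(1+t/k) dt
    f = lambda t: (z - t)**(n - 1) * math.log(1 + t/k)
    return simpson(f, 0.0, z) / math.factorial(n - 1)

def rhs(n, k, z):  # the conjectured closed form
    s = sum((H(n) - H(i)) / (math.factorial(i) * math.factorial(n - i)) * z**(n - i) * k**i
            for i in range(n + 1))
    return math.log(1 + z/k) * (k + z)**n / math.factorial(n) - s

max_err = max(abs(lhs(n, 1.0, 0.7) - rhs(n, 1.0, 0.7)) for n in (1, 2, 3, 4))
print(max_err)  # essentially zero (quadrature error only)
```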
542,808
<p>I had a little back and forth with my logic professor earlier today about proving a number is irrational. I proposed that 1 + an irrational number is always irrational, thus if I could prove that 1 + irrational number is irrational, then it stood to reason that was also proving that the number in question was irrational.</p> <p>E.g. $\sqrt2 + 1$ can be expressed as a continued fraction, and by looking at the fraction, one can see that $\sqrt2 + 1$ is irrational. I suggested that because of this, $\sqrt2$ is also irrational.</p> <p>My professor said this is not always true, but I can't think of an example that suggests this.</p> <p>If $x+1$ is irrational, is $x$ always irrational? </p> <p>Actually, a better question is: if $x$ is irrational, is $x+n$ irrational, provided $n$ is a rational number?</p>
Shaurya Gupta
102,986
<p>Let $x$ be rational and $y$ be irrational, and assume for contradiction that $x + y$ is rational.<br> Then $(x+y) - x$ is also rational, since the difference of two rationals is always rational.<br> But $(x+y) - x = y$, so $y$ is rational, which is a contradiction. Hence $x+y$ must be irrational. Proved!</p>
2,530,788
<p>$x + y + z = 0$;</p> <p>$x^2 + y^2 + z^2 = 1$;</p> <p>$x^3 + y^3 + z^3 = 0$;</p> <p>I understand that there are multiple solutions, which are the permutations of $(\sqrt{ 2 }/2, 0, -\sqrt{2}/2).$</p> <p>How do I go about solving it? I have tried the usual brute-force Gaussian elimination and Cramer's rule, and I still can't get the answer.</p> <p>Would appreciate it if someone could provide me with an algorithm and/or the steps.</p> <p>Thank you very much!!</p>
Michael Rozenberg
190,319
<p>We can use Vieta's formulas.</p> <p>Indeed, $$0=(x+y+z)^2=1+2(xy+xz+yz),$$ which gives $$xy+xz+yz=-\frac{1}{2}.$$ Also, since $$(x+y+z)^3=x^3+y^3+z^3+3(xy+xz+yz)(x+y+z)-3xyz,$$ we obtain $$xyz=0,$$ which gives that $x$, $y$ and $z$ are roots of the equation: $$t^3-\frac{1}{2}t=0.$$</p>
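<p>A quick SymPy check confirms that the roots of $t^3-\frac{1}{2}t=0$, namely $0$ and $\pm\frac{\sqrt2}{2}$, indeed satisfy all three original equations:</p>

```python
from sympy import sqrt, simplify, Integer

x, y, z = sqrt(2)/2, Integer(0), -sqrt(2)/2

print(simplify(x + y + z))                        # 0
print(simplify(x**2 + y**2 + z**2))               # 1
print(simplify(x**3 + y**3 + z**3))               # 0
print([simplify(t**3 - t/2) for t in (x, y, z)])  # [0, 0, 0]
```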
1,416,275
<p>I've just begun the study of linear functionals and the dual basis. And this book I'm reading says the dual space $V^{*}$ may be identified with the space of row vectors. This notion seems very important, but I'm having trouble understanding it. Here is the text:</p> <blockquote> <p>Let $\sigma$ be an element of the dual space $V^{*}$, i.e. a linear map $\sigma: V \rightarrow K$. Choose a basis for $V$, say the usual basis, then $\sigma$ is represented by a matrix $[\sigma]$. However, such a matrix $[\sigma]$ is a row vector. Also, the map $\sigma \rightarrow [\sigma]$ is a vector space isomorphism.</p> <p>On the other hand, any row vector $\phi = (a_1, \ldots, a_n)$ defines a linear functional $\phi: V \rightarrow K$ by \begin{align*} \phi(x_1, \ldots, x_n) = (a_1, \ldots, a_n) \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \end{align*} or simply $\phi(x_1, \ldots, x_n) = a_1 x_1 + a_2 x_2 + \ldots + a_n x_n$.</p> </blockquote> <p>The author speaks of the matrix representation $[\sigma]$, but he doesn't really explain it. Why is this matrix a row vector? Also, the second part of the text: is this merely a definition? Why does he claim $\phi(x_1, \ldots, x_n) = a_1 x_1 + \ldots + a_n x_n$? The output of a linear functional is supposed to be a scalar, and not a vector? And this is clearly a linear combination of vectors...</p> <p>Maybe some of the advanced mathematicians here could give me some examples, because I can't get my head around this at the moment. </p>
uniquesolution
265,735
<p>The textbook chooses to define the action of the dual space as multiplication of row and column vectors. In this approach, an element in $V$ is a column vector, i.e., a matrix of order $n\times 1$, whereas the elements of the dual space $V^{*}$ are row vectors, i.e., matrices of order $1\times n$. So the action of the dual space is given by matrix multiplication: a $1\times n$ matrix times an $n\times 1$ matrix, which - as the textbook must be assuming that you already know - gives a matrix of order $1\times 1$, which is nothing but a scalar. Do not despair, however. Just think of it as the scalar product of the vector $(a_i)$ with $(x_i)$ and you'll be fine.</p>
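<p>In concrete terms (a small NumPy illustration with made-up numbers):</p>

```python
import numpy as np

a = np.array([2.0, -1.0, 3.0])  # a row vector, i.e. an element of V*
v = np.array([1.0, 4.0, 0.5])   # a column vector, i.e. an element of V

# The pairing is just matrix multiplication (1 x n)(n x 1) -> scalar,
# which is exactly the dot product a_1 x_1 + ... + a_n x_n.
phi_v = a @ v
print(phi_v)  # 2*1 + (-1)*4 + 3*0.5 = -0.5
```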
3,751,780
<p>Given positive real numbers <span class="math-container">$a, b, c$</span> with <span class="math-container">$ab + bc + ca = 1.$</span> Prove that <span class="math-container">$$ \sqrt{a^{2} + 1} + \sqrt{b^{2} + 1} + \sqrt{c^{2} + 1}\leq 2(a+b+c).$$</span></p> <p>I have no idea to prove this inequality.</p>
Z Ahmed
671,540
<p>Note that <span class="math-container">$$\sqrt{a^2+1}= \sqrt{a^2+ab+bc+ca}=\sqrt{(a+b)(a+c)}\le \frac{(2a+b+c)}{2}~~\text{AM-GM}\,.$$</span> Addling three similar results we prove that <span class="math-container">$$\sqrt{a^2+1}+\sqrt{b^2+1}+\sqrt{c^2+1}\le \frac{4(a+b+c)}{2}=2(a+b+c)\,.$$</span></p>
1,072,524
<p>Suppose $X$ is an integral scheme. I would like to show that the restriction maps $res_{U,V} : O_X(U) \rightarrow O_X(V)$ is an inclusion so long as $V$ is not empty. I was wondering if someone could give me some assistance with this. This is exercise 5.2 I on Ravi Vakil's notes. Thank you!</p>
Ayman Hourieh
4,583
<p>Let $X$ be an integral scheme. Let $\xi$ be its generic point. If we show that the canonical map $\mathcal O_X(U) \to \mathcal O_\xi$ is injective, we're done. Since $U$ can be covered by affine open subsets, we can assume that $U = \operatorname{Spec} A$ is affine. Now the map $\mathcal O_X(U) \to \mathcal O_\xi$ corresponds to the canonical map $A \to \operatorname{Frac}(A)$, where $\operatorname{Frac}(A)$ is the quotient field of $A$. This map is clearly injective as desired.</p> <p>I've left some details out. Hopefully you'll be able to fill them in.</p>
228,651
<p>When testing to determine the convergence or divergence of series with positive terms, there's a common way by comparing them with other series which we already know converge or diverge.</p> <p>My question is, how do we choose the proper to-be-compared series? I hope to get some detailed <strong>methodology</strong> about this. I am a bit confused - do I have to even rely on my intuition?</p> <p>For instance, how do I choose a comparison series for this given one below:</p> <p>$$\sum_{n=2}^\infty\frac{1}{n\ln n}$$</p>
Scorpio19891119
40,071
<p>Usually, you can choose the highest-order term of the numerator and of the denominator. So in your problem, for the numerator choose $1$, whose order is $0$, and for the denominator choose $n$; the candidate comparison sequence is then $1/n$.</p> <p>Be careful, though: this is only a heuristic for picking a candidate, and for this particular series it is inconclusive. Since $\frac{1}{n\ln n} \le \frac{1}{n}$ and $\sum 1/n$ diverges, the direct comparison goes the wrong way, and the limit comparison gives $\lim_{n\to\infty} \frac{1/(n\ln n)}{1/n} = 0$, which decides nothing. Here the integral test (or Cauchy condensation) settles it: $\int_2^\infty \frac{dx}{x\ln x} = \lim_{t\to\infty}\left(\ln\ln t - \ln\ln 2\right) = \infty$, so $\sum_{n=2}^\infty \frac{1}{n\ln n}$ diverges.</p>
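<p>To see numerically how slowly this particular series grows (it diverges like $\ln\ln N$, by the integral test), one can compute partial sums:</p>

```python
import math

def S(N):  # partial sum of 1/(n ln n) from n = 2 to N
    return sum(1.0 / (n * math.log(n)) for n in range(2, N + 1))

for N in (10**3, 10**4, 10**5, 10**6):
    print(N, S(N), math.log(math.log(N)))  # partial sums track ln(ln N) + const
```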
80,056
<p>I am toying with the idea of using slides (Beamer package) in a third year math course I will teach next semester. As this would be my first attempt at this, I would like to gather ideas about the possible pros and cons of doing this. </p> <p>Obviously, slides make it possible to produce and show clear graphs/pictures (which will be particularly advantageous in the course I will teach) and doing all sorts of things with them; things that are difficult to carry out on a board. On the other hand, there seems to be something about writing slowly with a chalk on a board that makes it easier to follow and understand (I have to add the disclaimer that here I am just relying on only a few reports I have from students and colleagues).</p> <p>I would very much like to hear about your experiences with using slides in the classroom, possible pitfalls that you may have noticed, and ways you have found to optimize this.</p> <p>I am aware that the question may be a bit like the one with Japanese chalks, once discussed here, but I think as using slides in the classroom is becoming more and more common in many other fields, it may be helpful to probe the advantages and disadvantages of using them in math classrooms as well.</p>
Will Jagy
3,324
<p>I have a story in the middle. I hurt my right shoulder over time, by 1994 it was simply too much to write on a blackboard, at least overhead. So, pre-Beamer, I wrote up these slides on transparencies with colored pens. These were unusually well-prepared lessons for me, I had everything worked out, it was all clearly my work, and I still had plenty of blank slides on which to write new material when needed. That is the hardest I have ever worked on course preparation. </p> <p>They did have class questionnaires, sent to administration and never seen by me, later the chairman told me how very much the students hated the slides. They were never fond of me but I think that was a separate item, the slides made it worse than it would have been...I suppose my question now is, would things have been different if I also gave each student photocopies of the slides for that day? </p>
80,056
<p>I am toying with the idea of using slides (Beamer package) in a third year math course I will teach next semester. As this would be my first attempt at this, I would like to gather ideas about the possible pros and cons of doing this. </p> <p>Obviously, slides make it possible to produce and show clear graphs/pictures (which will be particularly advantageous in the course I will teach) and doing all sorts of things with them; things that are difficult to carry out on a board. On the other hand, there seems to be something about writing slowly with a chalk on a board that makes it easier to follow and understand (I have to add the disclaimer that here I am just relying on only a few reports I have from students and colleagues).</p> <p>I would very much like to hear about your experiences with using slides in the classroom, possible pitfalls that you may have noticed, and ways you have found to optimize this.</p> <p>I am aware that the question may be a bit like the one with Japanese chalks, once discussed here, but I think as using slides in the classroom is becoming more and more common in many other fields, it may be helpful to probe the advantages and disadvantages of using them in math classrooms as well.</p>
Christopher Perez
4,566
<p>Most of the non-mathematics courses I've taken in college were done with lecture slides, and I have to say that there are a number of advantages and disadvantages to them that actually amount to more disadvantages if you were to do the same in math. The one obvious advantage is that the slides can be posted online, but the problem with this is that it encourages students to skip class. Even those who don't skip class won't take notes (and are sometimes even encouraged to not take notes by the professors), and this would not be good in a math class, because many people feel that copying down proofs from lecture is best way to get a better understanding of them. Also, when you have lecture notes, you can sometimes get nonsense like <a href="http://www.its.caltech.edu/~cperez/IST4.pdf" rel="nofollow">this</a>. Anyways, back to your point. If your main concern is displaying graphics, you could possibly just use slides for graphics. If you can lecture in a room with a projector screen that doesn't obscure the blackboards, that would be ideal for this. Alternatively you can distribute handouts at the beginning with graphics that you will be referencing.</p>
466,576
<p>Some conventional math notations seem arbitrary to English speakers but were mnemonic to non-English speakers who started them. To give a simple example, Z is the symbol for integers because of the German word <em>Zahl(en)</em> 'number(s)'. What are some more examples?</p>
Brian M. Scott
12,042
<p>In topology the letter $F$ is commonly used to denote a closed set, from French <a href="http://fr.wikipedia.org/wiki/Ferm%C3%A9_%28topologie%29" rel="nofollow"><em>fermé</em></a> 'closed [set]'. The common use of $K$ to denote a compact set is probably from German <em>kompakt</em>, as in <a href="http://de.wikipedia.org/wiki/Kompakte_Menge" rel="nofollow"><em>kompakte Menge</em></a> 'compact set' and <a href="http://de.wikipedia.org/wiki/Kompakter_Raum" rel="nofollow"><em>kompakter Raum</em></a> 'compact space'. The common use of $k$ to denote an arbitrary field is probably from German <a href="http://de.wikipedia.org/wiki/K%C3%B6rper_%28Algebra%29" rel="nofollow"><em>Körper</em></a> 'field'. The common use of $G$ for an open set is probably from German <em>Gebiet</em> 'region', though as a mathematical term it <a href="http://de.wikipedia.org/wiki/Gebiet_%28Mathematik%29" rel="nofollow">now means</a> 'non-empty, connected, open set'. The notation $G_\delta$-<em>set</em> for the intersection of countably many open sets combines this $G$ with $\delta$ for German <a href="http://de.wikipedia.org/wiki/Schnittmenge#Durchschnitt_.28Schnittmenge.2C_Schnitt.29" rel="nofollow"><em>Durchschnitt</em></a> 'intersection'. Presumably $F_\sigma$-<em>set</em> for the union of countably many closed sets is from the $F$ above and $\sigma$ for French <em>somme</em> 'sum'. The $T$ in the names of the separation axioms $T_1,T_2$, etc. is from German <a href="http://de.wikipedia.org/wiki/Trennungsaxiom" rel="nofollow"><em>Trennungsaxiom</em></a> 'separation axiom'.</p>
466,576
<p>Some conventional math notations seem arbitrary to English speakers but were mnemonic to non-English speakers who started them. To give a simple example, Z is the symbol for integers because of the German word <em>Zahl(en)</em> 'number(s)'. What are some more examples?</p>
MartianInvader
8,309
<p>The etymology of the $\sin$ function has a colorful history - it comes from <em>sinus</em>, the Latin word for... well, bosom. This was due to a mistranslation from Arabic text in the 12th century: The word <em>jaib</em> means bosom, and since Arabic is written without short vowels, it was written essentially as <em>jb</em>. But <em>jb</em> was also the spelling of <em>jiba</em>, which was a transliteration of the Sanskrit word for chord (the mathematical chord, i.e. a line passing through a circle, half the length of which is the sine of the angle from the center of the circle).</p>
466,576
<p>Some conventional math notations seem arbitrary to English speakers but were mnemonic to non-English speakers who started them. To give a simple example, Z is the symbol for integers because of the German word <em>Zahl(en)</em> 'number(s)'. What are some more examples?</p>
MJD
25,554
<p>The center of a group $G$ is denoted $Z(G)$. The $Z$ is for “Zentrum”, which is the German word for ‘center’.</p>
466,576
<p>Some conventional math notations seem arbitrary to English speakers but were mnemonic to non-English speakers who started them. To give a simple example, Z is the symbol for integers because of the German word <em>Zahl(en)</em> 'number(s)'. What are some more examples?</p>
charlotte
195,474
<p>Most of the good examples have been mentioned already.</p> <ul> <li>$+$ probably comes from Latin <em>et</em>, "and"</li> <li>$\tau$ is sometimes used for the golden ratio, from Greek <em>tomos</em>, "section" (same root as <em>atom</em>).</li> <li>$\sinh$ from Latin <em>sinus hyperbolus</em> (and so on)</li> <li>$\in$ from Latin <em>est</em>.</li> <li>$M$ for an arbitrary set from German <em>Menge</em>, crowd.</li> <li>$\mathrm E[x]$ for expectation is from German <em>Erwartung</em> or French <em>espérance</em>.</li> <li>$\aleph$ (the cardinality of the countable numbers) is maybe from Hebrew <em>ein sof</em>, infinity, or maybe just because it's the first letter of the Hebrew abjad.</li> <li>$m$ for slope may be from Latin <em>modulus</em>, "quantity", or French <em>monter</em>, "to climb".</li> <li>$e$ may be from German <em>Einheit</em>, "unity", because $\ln e = 1$. </li> <li>$c$ for the speed of light from Latin <em>celeritas</em>, "swiftness".</li> </ul> <p>If you want to be pedantic, you could point out that there are a large number of mnemonics from other languages which also happen to work in English. For example </p> <ul> <li>$\mathbb{Q}$, $\mathbb{R}$ from German <em>Quotient</em>, <em>reell</em></li> <li>$\pi$ from Greek <em>perimitron</em> or from Latin <em>peripheriam</em></li> <li>$I$ from French <em>intensité de courant</em></li> <li>$\%$ comes from Italian <em>per cento</em>, "per hundred"</li> <li>$x$ is used for variables because it's uncommon in French</li> </ul> <p>You might also count all symbols for the SI prefixes (atto-, yotta-, etc.) except the ones which have become productive in English (micro-, mega-, etc.) But that would be far too pedantic, even for me.</p>
2,432,817
<p>Let $X$ and $Y$ be topological spaces and let $A \subseteq X$ be a subspace of $X$. Suppose $A$ is homeomorphic to some subspace $B \subseteq Y$ of $Y$. Let $f$ explicitly denote this homeomorphism.</p> <p>If $f : A \to B$ is a homeomorphism, does $f$ extend to a homeomorphism between $\text{Cl}_X(A)$ and $\text{Cl}_Y(B)$, i.e. does there exist a $g : Cl_X(A) \to Cl_Y(B)$ such that $g|_{A} = f$?</p> <p>More generally if $A$ and $B$ are homeomorphic, does that imply that $Cl_X(A)$ and $Cl_Y(B)$ are homeomorphic? </p> <hr> <p>If $X$ and $Y$ are homeomorphic, then this is true, since it is a well-known theorem that $f[Cl_X(A)] = Cl_Y(f[A]) = Cl_Y(B)$, if $f : X \to Y$ is a homeomorphism. </p> <p>However I can't seem to come up with a counterexample for my question, since I assume the implication is false.</p> <hr> <p><em>Edit :</em> I know a counterexample that comes from CW Complexes, where if $(X, \xi)$ is a CW-Complex, then $\xi$ is a collection of open cells $e$, which are topological spaces homeomorphic to $\mathbb{B}^n$ the open unit ball in $\mathbb{R}^n$. Each $e \subseteq X$ is a subspace of a Hausdorff space $X$.</p> <p>Also a closed cell $\bar{e}$ is a topological space homeomorphic to the closed unit ball $\mathbb{\bar{B}}^n \subseteq \mathbb{R}^n$</p> <p>Now in this example $Y = \mathbb{R}^n$. It is also known that for any $e \in \xi$, $Cl_X(e) \neq \bar{e} \cong {\mathbb{\bar{B}}^n} = Cl_Y(\mathbb{{B}}^n \cong e) $</p> <p>Hence $e$ and $\mathbb{B}^n$ are homeomorphic, however $Cl_X(e)$ is not homeomorphic to $Cl_Y(\mathbb{B}^n) = \mathbb{\bar{B}}^n$, since $Cl_X(e) \neq \bar{e}$</p> <hr> <p>But using the above example feels like bringing a gun to a knife fight, are there any simpler counterexamples?</p>
Henno Brandsma
4,280
<p>$A=S^1\setminus \{p\} \subseteq X= S^1$ (for any $p \in S^1$) is homeomorphic to $B=(0,1) \subseteq Y = [0,1]$. But their respective closures $X$ and $Y$ are not.</p> <p>More trivially: $A = (0,1) \subseteq X=\mathbb{R}$ and $B = Y = \mathbb{R}$, where $\overline{B} = B$ but $\overline{A}$ becomes compact.</p>
2,573,572
<p>Here is the expression to take the derivative of. $$C = \frac{1}{2}\sum_j (y_j - a_j^L)^2$$</p> <p>Here is the result. $$\frac{\partial C}{\partial a_j^L} = 2(a_j^L-y_j)$$</p> <p>Multiplying by 2, then again by the derivative of the inside (-1) seems reasonable, but what happened to the summation?</p>
user
505,767
<p>The derivative is with respect to a single component $a_j^L$, so all the other terms of the sum have zero derivative and drop out. Note also that with the factor $\frac{1}{2}$ in front of the cost, the $2$ from the chain rule cancels, giving $\frac{\partial C}{\partial a_j^L} = a_j^L - y_j$; the stated $2(a_j^L-y_j)$ only arises if the $\frac12$ is omitted.</p>
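<p>A quick SymPy check of the componentwise derivative (with the $\frac12$ included, the chain-rule $2$ cancels):</p>

```python
from sympy import symbols, Rational, diff

a = symbols('a1:4')  # a1, a2, a3
y = symbols('y1:4')  # y1, y2, y3
C = Rational(1, 2) * sum((yj - aj)**2 for yj, aj in zip(y, a))

grads = [diff(C, aj) for aj in a]
print(grads)  # [a1 - y1, a2 - y2, a3 - y3]
```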
2,864,992
<p>It started with someone asking, as an exercise, whether the negation of</p> <pre><code>2 is a rational number </code></pre> <p>is</p> <pre><code>2 is an irrational number </code></pre> <p>Their argument is that this is incorrect if we include complex numbers, because 2 might be complex and not irrational. </p> <p>My argument is that since the first proposition is always true (2 could not be anything but a rational number), any false sentence could serve as a negation of the first proposition, including the sentence given above. They said what I do is not "negation" in their sense.</p> <p>So my question is: is it true that every false statement is a negation of a true statement? </p>
Somos
438,089
<p>Let $\ f(x) := \log_2(1-x) + \sum_{n=0}^\infty x^{2^n}, \ $ $\ g(x) := f(e^{-x}) = \log_2(1-e^{-x}) + \sum_{n=0}^\infty e^{-x2^n}, \ $ and $\ a_k := g(2^{-k}) = b_k + \sum_{n=0}^\infty e^{-2^{n-k}} \ $ where $\ b_k := \log_2(1-e^{-2^{-k}}) \approx -k - 2^{-1-k}/\log(2). \ $ Now $\ \sum_{n=0}^\infty e^{-2^{n-k}} = \sum_{n=1}^k e^{-2^{-n}} + B \ $ where $\ B := \sum_{n=0}^\infty e^{-2^n} \approx 0.521865938459879089046726. \ $ But $ c_k :=\! -k \!+\! \sum_{n=1}^k e^{-2^{-n}}\! = \sum_{n=1}^k \big(e^{-2^{-n}}\!-\!1 \big) \ $ and $\ c_k \to C \ $ where $\ C \approx -0.8546133208927. \ $ Finally, $\ \lim_{x\to 1^-} f(x) = \lim_{x\to 0^+} g(x) = \lim_{k\to\infty} a_k = B+C \approx -0.3327473824328992250.\ $ The digits of $1$ minus this number is <a href="http://oeis.org/A158468" rel="nofollow noreferrer">OEIS sequence A158468</a>. $f(\exp(-2^{-30})) \approx -0.3327473822.$</p> <p>EDIT: Unfortunately, it seems that the function $\ f(x) \ $ oscillates as it gets close to $1$ from below. That is, $\ g(2^{-x}) \ $ approaches a period $1$ function with mean value $\ 1/2 - \gamma/\log(2) \ $ with oscillations of magnitude $\ \approx 1.57315\times 10^{-6} \ $ as Michael shows. Thus, the limit does <strong>not</strong> exist. It was obvious that the infinite sum in $\ f(x) \ $ has radius of convergence $1.$ What was <strong>not</strong> obvious was the limiting behavior as $\ x\to 1^-. \ $ We now know that the series has a logarithmic singularity and $\ f(x) \ $ is what remains. That $\ f(x) \ $ has interesting oscillatory behavior is nice information.</p>
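<p>The limiting behaviour is easy to probe numerically; here is a rough Python sketch of $g(2^{-k})$ for a few $k$ (the oscillation of magnitude $\approx 1.57\times 10^{-6}$ is below the coarse tolerance used here):</p>

```python
import math

def g(x):
    # g(x) = f(e^{-x}) = log2(1 - e^{-x}) + sum_{n>=0} e^{-x 2^n}
    tail = sum(math.exp(-x * 2.0**n) for n in range(0, 120))
    return math.log2(-math.expm1(-x)) + tail

vals = [g(2.0**-k) for k in (10, 15, 20)]
print(vals)  # each close to -0.33274738..., up to tiny period-1 oscillations
```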
90,673
<p>Let $M$ be a closed orientable irreducible 3-manifold and let $T$ be a non-separating torus in $M$. We cut $M$ along $T$ and glue two solid tori along the two boundary tori, getting a new closed 3-manifold $M_1$ (we require that $M_1$ also be irreducible). Now, if there exists a non-separating torus $T_1$ in $M_1$, we continue the above process and get a new closed 3-manifold $M_2$ ...</p> <p>My question is: can you find an $M$ and choose suitable $T_i$, gluing the solid tori suitably, so that the process goes on infinitely?</p> <p>Or can you prove that it is impossible to find such an example? (For example, from $M$ to $M_1$, some invariant of 3-manifolds decreases strictly.)</p> <hr> <p>After Kevin's example, I added the condition "$M_1$ also is irreducible". This condition is natural in the original field (for this question): 3-manifolds with Anosov flows. </p>
Bruno Martelli
6,205
<p>The answer is "no", although it seems that a homological argument is not enough as Kevin's and Bin's examples show. I describe an argument which uses geometrization.</p> <p>There is a quantity which decreases strictly at each operation. It is crucial to suppose that both $M$ and $M_1$ are irreducible. The quantity for an irreducible manifold $M$ is the triple $(\|M\|, n(M), s(M))$ of real numbers, where</p> <ul> <li> $\|M\|$ is Gromov's norm (i.e. the sum of the volumes of the hyperbolic pieces in the JSJ decomposition of $M$)</li> <li> $n(M)$ is the number of tori in the JSJ decomposition of $M$</li> <li> $s(M)$ is the sum of the $-\chi(S)$ for each Seifert piece of the decomposition with some base surface $S$.</li> </ul> <p>Triples $(\|M\|, n(M), s(M))$ are ordered lexicographically. I prove below that the surgery you describe (cut along a non-separating torus and glue two solid tori) indeed strictly decreases this quantity, supposing that the resulting manifold $M_1$ is still irreducible (this hypothesis is important). The result then follows because Gromov norms of 3-manifolds form a well-ordered set.</p> <p>Let $T$ be the torus you cut. If it is adjacent to at least one hyperbolic piece, the filled manifold $M_1$ has strictly smaller Gromov norm thanks to Thurston's Dehn filling theorem. It remains to consider the case where $T$ is adjacent to two (possibly coinciding) Seifert pieces and the Gromov norm does not decrease. If $T$ is a torus of the JSJ then $n(M)$ decreases. If $T$ is a torus inside a Seifert piece, then $s(M)$ decreases. You use here the following fact: the hypothesis that $M_1$ is irreducible ensures that the meridians of your Dehn-filled tori are not fiber-parallel, and hence only add some (possibly non-singular) fiber at the adjacent Seifert pieces. Therefore the JSJ of the new manifold is easily controlled by the JSJ of the old manifold.</p> <p>It is possible that this argument extends to the relaxed case where you only suppose $T$ to be incompressible (and $M$, $M_1$ are any manifolds).</p>
1,170,088
<blockquote> <p>In a group of $10$ people, $60\%$ have brown eyes. Two people are to be selected at random from the group. What is the probability that neither person selected will have brown eyes?</p> </blockquote> <p>How do I do this problem? $6$ people have brown eyes and $4$ people don't.</p> <p>The number of ways to order the people who do not have brown eyes is:</p> <p>$$4 \cdot 3 \cdot 2 \cdot 1 = 24$$</p> <p>What do I do next?</p>
Stupid Man
219,196
<p>EDIT: I was overcounting in my previous solution.</p> <p>Probability of an event = number of favourable outcomes / total number of outcomes.</p> <p>Here the total number of outcomes is the number of ways to choose $2$ people out of $10$, which is $\binom{10}{2}$.</p> <p>The number of favourable outcomes is the number of ways to choose one non-brown-eyed person times the number of ways to choose another non-brown-eyed person, divided by $2$: that is, $\binom 4 1 \binom 3 1 / 2$.</p> <p>We divide by two because the order of selection does not matter.</p> <p>So our final probability is $$\frac{\binom 4 1 \binom 3 1}{2\binom{10}{2}} = \frac{6}{45} = \frac{2}{15}.$$</p>
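For readers who want to double-check the arithmetic, here is a small brute-force verification (my addition, not part of the original answer) that enumerates every possible pair:

```python
from fractions import Fraction
from itertools import combinations

# Model the group: 6 people with brown eyes (True), 4 without (False).
people = [True] * 6 + [False] * 4

pairs = list(combinations(range(10), 2))  # every way to select 2 of the 10 people
favourable = [p for p in pairs if not people[p[0]] and not people[p[1]]]

# len(pairs) is C(10,2) = 45 and len(favourable) is C(4,2) = 6
probability = Fraction(len(favourable), len(pairs))
```

The enumeration agrees with the closed form $\binom 4 1 \binom 3 1 / (2\binom{10}{2}) = 6/45 = 2/15$.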
272,114
<p>Yesterday, my uncle asked me this question:</p> <blockquote> <p>Prove that $x = 2$ is the unique solution to $3^x + 4^x = 5^x$ where $x \in \mathbb{R}$.</p> </blockquote> <p>How can we do this? Note that this is not a diophantine equation since $x \in \mathbb{R}$ if you are thinking about Fermat's Last Theorem.</p>
jim
44,551
<p>Dividing both sides by $5^x$, the equation is equivalent to $f(x) = 0$, where</p> <p>$$f(x) = \left(\dfrac{3}{5}\right)^x + \left(\dfrac{4}{5}\right)^x -1.$$</p> <p>Since both $(3/5)^x$ and $(4/5)^x$ are strictly decreasing,</p> <p>$$f^ \prime(x) &lt; 0\;\forall x \in \mathbb R.\tag{1}$$</p> <p>$f(2) =0$. If $f(x)$ had two zeros, then by Rolle's theorem $f^\prime(x)$ would have a zero, contradicting $(1)$.</p>
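As a quick numerical illustration of this monotonicity argument (my own sketch, not part of the original answer):

```python
def f(x):
    # f(x) = (3/5)^x + (4/5)^x - 1; its zeros are exactly the solutions of 3^x + 4^x = 5^x
    return 0.6 ** x + 0.8 ** x - 1.0

# f vanishes at x = 2 (up to floating-point rounding) ...
assert abs(f(2)) < 1e-12

# ... and is strictly decreasing on a grid of sample points,
# consistent with f'(x) < 0 for all real x
xs = [x / 2 for x in range(-8, 9)]  # -4.0, -3.5, ..., 4.0
values = [f(x) for x in xs]
assert all(a > b for a, b in zip(values, values[1:]))
```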
272,114
<p>Yesterday, my uncle asked me this question:</p> <blockquote> <p>Prove that $x = 2$ is the unique solution to $3^x + 4^x = 5^x$ where $x \in \mathbb{R}$.</p> </blockquote> <p>How can we do this? Note that this is not a diophantine equation since $x \in \mathbb{R}$ if you are thinking about Fermat's Last Theorem.</p>
Community
-1
<p>Let $f(x)=5^x-4^x-3^x$. Then $f(2)=0$.</p> <p>If $k&gt;0$, then</p> <p>$f(2+k)=f(2+k)-f(2)$</p> <p>$=25(5^k-1)-16(4^k-1)-9(3^k-1)$</p> <p>$&gt;25(5^k-1)-16(5^k-1)-9(5^k-1)=0$,</p> <p>using $3^k-1&lt;5^k-1$ and $4^k-1&lt;5^k-1$ for $k&gt;0$.</p> <p>If $k&lt;0$, then</p> <p>$f(2+k)=f(2+k)-f(2)$</p> <p>$=25(5^k-1)-16(4^k-1)-9(3^k-1)$</p> <p>$&lt;25(5^k-1)-16(5^k-1)-9(5^k-1)=0$,</p> <p>since those inequalities reverse for $k&lt;0$.</p>
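The two comparison inequalities can also be spot-checked numerically; this sketch (my addition, not part of the original answer) evaluates the difference at a few sample values of $k$:

```python
def gap(k):
    # gap(k) = f(2+k) = 25*(5^k - 1) - 16*(4^k - 1) - 9*(3^k - 1)
    return 25 * (5 ** k - 1) - 16 * (4 ** k - 1) - 9 * (3 ** k - 1)

# Positive for k > 0 and negative for k < 0, so x = 2 is the only root of f
for k in (0.5, 1, 2, 3):
    assert gap(k) > 0
for k in (-3, -2, -1, -0.5):
    assert gap(k) < 0
```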
2,203,066
<p>The definition I have is the following:</p> <blockquote> <p>A vector space V is said to be <strong>finite-dimensional</strong> if there is a finite set of vectors in V that spans V and is said to be <strong>infinite-dimensional</strong> if no such set exists.</p> </blockquote> <p>However, with this definition I can't determine whether the vector space $\mathbb{R}^3$ is finite-dimensional or infinite-dimensional (I am assuming that it is finite since the dimension of $\mathbb{R}^3$ is $3$)</p> <p>Going with my thought process, though, I know that $(1,0,0),(0,1,0),(0,0,1)$ spans $\mathbb{R}^3$. However we can also check that $(2,0,0),(0,2,0),(0,0,2)$ spans $\mathbb{R}^3$. Also note that $(3,0,0),(0,3,0),(0,0,3)$ spans $\mathbb{R}^3$. This process could be continued over and over to show that there are infinitely many vectors that span $\mathbb{R}^3$. </p> <p>Wouldn't this mean that $\mathbb{R}^3$ is infinite-dimensional? Because there isn't a finite number of vectors that span $\mathbb{R}^3$. (Again I want to say this isn't the case and that there is something I am overlooking.) </p>
Christopher Pompetzki
429,330
<p>A vector space is called finite-dimensional if <em>some</em> list of vectors in it spans the space; by definition, every list has finite length. The definition asks only that at least one finite spanning set exists, not that there be only finitely many spanning sets. Since $(1,0,0),(0,1,0),(0,0,1)$ is a finite list that spans $\mathbb{R}^3$, the space $\mathbb{R}^3$ is finite-dimensional, even though (as you observed) there are infinitely many different spanning sets. For every positive integer $n,$ $\mathbb{F}^n$ is a finite-dimensional vector space.</p>
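To make the distinction concrete, here is a tiny Python sketch (my addition, with an illustrative target vector) showing that just one of the asker's lists, on its own, already spans $\mathbb{R}^3$:

```python
# Any (a, b, c) in R^3 is a combination of the FINITE list (2,0,0), (0,2,0), (0,0,2):
#   (a, b, c) = (a/2)*(2,0,0) + (b/2)*(0,2,0) + (c/2)*(0,0,2)
def combine(coeffs, vectors):
    # Form the linear combination sum(coeffs[j] * vectors[j]) componentwise.
    return tuple(sum(t * v[i] for t, v in zip(coeffs, vectors)) for i in range(3))

vectors = [(2, 0, 0), (0, 2, 0), (0, 0, 2)]
target = (7.0, -3.0, 0.5)          # an arbitrary example vector
coeffs = [t / 2 for t in target]   # the required coefficients
assert combine(coeffs, vectors) == target
```

The existence of infinitely many other spanning lists does not change this; finite-dimensionality only asks for one finite list that spans.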