| qid | question | author | author_id | answer |
|---|---|---|---|---|
3,786,553 | <p>I was trying to follow the logic in a similar question (<a href="https://math.stackexchange.com/questions/643352/probability-number-comes-up-before-another">Probability number comes up before another</a>), but I can't seem to get it to work out.</p>
<p>Some craps games have a Repeater bet. You can bet on rolling aces twice before rolling a 7, rolling 3 three times before 7, etc. The patent for this game (<a href="https://patents.google.com/patent/US20140138911" rel="nofollow noreferrer">https://patents.google.com/patent/US20140138911</a>) says the odds for aces twice before 7 is 48:1. The wizard of odds (<a href="https://wizardofodds.com/games/craps/appendix/5/" rel="nofollow noreferrer">https://wizardofodds.com/games/craps/appendix/5/</a>) says the probability is 0.020408 (which is 1/49).</p>
<p>I tried calculating this by multiplying the odds of the two events 1/36 for rolling aces and (1/36)/((1/36)+(1/6)) for rolling aces before 7. I got (1/36)*((1/36)/((1/36)+(1/6))) = 0.003968253968253969 which is like 1/252.</p>
<p>I'm obviously missing something, but can't see what.</p>
<p>Edit: ...sorry... after typing this up I figured it out. The bet has to be made, and then aces has to roll before 7 twice. So if 7 rolls before the first aces, the bet loses; I was wrong to use 1/36 for the first aces.</p>
<p>((1/36)/((1/36)+(1/6)))*((1/36)/((1/36)+(1/6)))
0.020408163265306128</p>
<p>I still don't understand why one says 48:1 when it's 1/49.</p>
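<p>For what it's worth, odds of 48:1 against mean 48 losing units for every 1 winning unit, i.e. a win probability of 1/(48+1) = 1/49, so the patent and the Wizard of Odds agree. Here is a quick sanity check of the corrected calculation (a sketch, not part of the original post):</p>

```python
import random

# Exact: P(aces before 7) = (1/36) / (1/36 + 6/36) = 1/7, so the
# Repeater bet (aces twice before a 7) wins with probability (1/7)^2 = 1/49.
p_exact = (1 / 7) ** 2

def simulate(trials=200_000, seed=1):
    """Monte Carlo estimate of P(roll 2-2 twice before any 7)."""
    random.seed(seed)
    wins = 0
    for _ in range(trials):
        aces = 0
        while True:
            roll = random.randint(1, 6) + random.randint(1, 6)
            if roll == 2:                # aces
                aces += 1
                if aces == 2:
                    wins += 1
                    break
            elif roll == 7:              # a 7 before the second aces: bet loses
                break
    return wins / trials

p_sim = simulate()
```

<p>The simulated frequency lands within sampling error of 1/49 ≈ 0.020408.</p>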
| peter.petrov | 116,591 | <p><span class="math-container">$d(2+x) = d(x)$</span> so there's no problem here.</p>
<p>This integral is just <span class="math-container">$\int_{0}^{2} x ~d(x)$</span></p>
<p>Do you know how to compute this one?</p>
|
876,896 | <p>What are the poles of a polynomial? Are they the same as the roots?</p>
| paw88789 | 147,810 | <p>A (nonconstant) polynomial may be considered to have a pole at infinity.</p>
|
876,896 | <p>What are the poles of a polynomial? Are they the same as the roots?</p>
| Jason Knapp | 8,454 | <p>Brief answer: a pole refers to the location of a special type of discontinuity, in fact a special sort of singularity. Polynomials on the real line or complex plane are continuous, and thus do not have any poles in the real or complex numbers. </p>
<p>However we have a very similar object in the complex case called a pole at infinity. Essentially a function $f(z)$ has a pole at infinity if $f(1/z)$ has a pole at $0$. Any nonconstant polynomial has a pole at infinity.</p>
<p>Depending on your experience, the page <a href="http://en.wikipedia.org/wiki/Pole_(complex_analysis)" rel="nofollow">http://en.wikipedia.org/wiki/Pole_(complex_analysis)</a> might help you or be a bit too much.</p>
|
3,862,182 | <p>I encountered this question, and I am unsure how to answer it.</p>
<p>When <span class="math-container">$P(x)$</span> is divided by <span class="math-container">$x - 4$</span>, the remainder is <span class="math-container">$13$</span>, and when <span class="math-container">$P(x)$</span> is divided by <span class="math-container">$x + 3$</span>, the remainder is <span class="math-container">$-1$</span>. Find the remainder when <span class="math-container">$P(x)$</span> is divided by <span class="math-container">$x^2 - x - 12$</span>.</p>
<p>How would I proceed? Thank you in advance!</p>
| Lion Heart | 809,481 | <p><span class="math-container">$P(x)=(x^2-x-12)Q(x)+ax+b$</span></p>
<p><span class="math-container">$P(4)=4a+b=13$</span></p>
<p><span class="math-container">$P(-3)=-3a+b=-1$</span></p>
<p><span class="math-container">$a=2, b=5$</span></p>
<p><span class="math-container">$P(x)=(x^2-x-12)Q(x)+2x+5$</span></p>
<p><span class="math-container">$R(x)=2x+5$</span></p>
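<p>The system in this answer is small enough to check mechanically; the quotient Q below is an arbitrary placeholder (the remainder does not depend on it):</p>

```python
# Solve 4a + b = 13 and -3a + b = -1: subtracting the equations gives 7a = 14.
a = (13 - (-1)) // (4 - (-3))
b = 13 - 4 * a

# Any P of the form (x^2 - x - 12) Q(x) + a x + b satisfies both
# remainder conditions; Q(x) = x + 1 is an arbitrary stand-in.
def P(x, Q=lambda x: x + 1):
    return (x**2 - x - 12) * Q(x) + a * x + b
```

<p>Since <span class="math-container">$x^2-x-12=(x-4)(x+3)$</span>, the remainder theorem says <span class="math-container">$P(4)$</span> and <span class="math-container">$P(-3)$</span> recover the two given remainders.</p>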
|
3,106,696 | <p>I am confused about the notation used when writing down the solutions for x and y in quadratic equations.
For example, in <span class="math-container">$x^2+2x-15=0$</span>, do I write:</p>
<p><span class="math-container">$x=-5$</span> AND <span class="math-container">$x=3$</span></p>
<p>or is it</p>
<p><span class="math-container">$x=-5$</span> OR <span class="math-container">$x=3$</span></p>
<p>Which is it, and why? I thought that because x can only equal one of the values when you substitute it in, it would be OR; however, there are sometimes 2 roots of a quadratic, so is it more correct to use AND? What about the value of <span class="math-container">$y$</span>: is it the same?</p>
<p>Thanks in advance</p>
| lulu | 252,071 | <p>Let <span class="math-container">$p$</span> be a prime dividing <span class="math-container">$\gcd(f(n+1),f(n))$</span></p>
<p>We note that <span class="math-container">$$f(n+1)-f(n)=2(n+1)$$</span> so <span class="math-container">$p$</span> must divide <span class="math-container">$2(n+1)$</span></p>
<p>Since all the <span class="math-container">$f(n)$</span> are odd we must have <span class="math-container">$p\,|\,n+1$</span>.</p>
<p>But <span class="math-container">$(n+1)^2=f(n)+n$</span> so <span class="math-container">$p$</span> must divide <span class="math-container">$(n+1)^2-f(n)=n$</span>.</p>
<p>Thus <span class="math-container">$p$</span> divides both <span class="math-container">$n$</span> and <span class="math-container">$n+1$</span>, a contradiction.</p>
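<p>The function <span class="math-container">$f$</span> is never written out above, but the two identities force <span class="math-container">$f(n)=n^2+n+1$</span> (an inference from this answer, not stated in it). A brute-force check of the identities and the conclusion:</p>

```python
from math import gcd

def f(n):
    # inferred from f(n+1) - f(n) = 2(n+1) and (n+1)^2 = f(n) + n
    return n * n + n + 1

for n in range(1, 2000):
    assert f(n + 1) - f(n) == 2 * (n + 1)   # difference identity
    assert (n + 1) ** 2 == f(n) + n          # second identity
    assert f(n) % 2 == 1                     # every f(n) is odd
    assert gcd(f(n + 1), f(n)) == 1          # consecutive values are coprime
```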
|
3,834,796 | <p>I found a really interesting question which is as follows:
Prove that the value of
<span class="math-container">$$\sum^{7}_{k=0}[({7\choose k}/{14\choose k})*\sum^{14}_{r=k}{r\choose k}{14\choose r}] = 6^7$$</span></p>
<p>my approach:</p>
<p>I tried to simplify the innermost sum, and also tried simplifying by using
<span class="math-container">${n\choose k}=n!/k!(n-k)!$</span>, however I can't get a hold of this one.</p>
<p>My guess is that the summation simplifies into a standard series but I can't say for sure.
Kindly help me out.</p>
| cosmo5 | 818,799 | <p>Using <span class="math-container">$${14 \choose r}{r \choose k} = {14 \choose k}{14-k \choose r-k}$$</span> the given sum reduces to</p>
<p><span class="math-container">$$
\begin{align*}
& \sum_{k=0}^7 {7 \choose k} \bigg\{\sum^{14}_{r=k} {14-k \choose r-k} \bigg\} \\
& = \sum_{k=0}^7 {7 \choose k} \{2^{14-k}\} \\
& = 2^{7} \times \sum_{k=0}^7 {7 \choose k} 2^{7-k} \\
& = 2^{7}\times(2+1)^{7} \\
& = 6^7
\end{align*}
$$</span></p>
<p><strong>Edit:</strong> As pointed out by @ElliotYu, the outer bound should be from <span class="math-container">$0$</span> to <span class="math-container">$7$</span>.</p>
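<p>The identity can also be verified directly with exact rational arithmetic (a quick check, not part of the original answer):</p>

```python
from math import comb
from fractions import Fraction

total = sum(
    Fraction(comb(7, k), comb(14, k))
    * sum(comb(r, k) * comb(14, r) for r in range(k, 15))
    for k in range(8)        # outer sum from k = 0 to 7, per the edit
)
# total is an exact Fraction equal to 6^7 = 279936
```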
|
3,834,796 | <p>I found a really interesting question which is as follows:
Prove that the value of
<span class="math-container">$$\sum^{7}_{k=0}[({7\choose k}/{14\choose k})*\sum^{14}_{r=k}{r\choose k}{14\choose r}] = 6^7$$</span></p>
<p>my approach:</p>
<p>I tried to simplify the innermost sum, and also tried simplifying by using
<span class="math-container">${n\choose k}=n!/k!(n-k)!$</span>, however I can't get a hold of this one.</p>
<p>My guess is that the summation simplifies into a standard series but I can't say for sure.
Kindly help me out.</p>
| Elliot Yu | 165,060 | <p><strike>First off I don't think your sum is quite right. The bounds on the outer sum should be <span class="math-container">$k=0$</span> to <span class="math-container">$7$</span>, I believe, otherwise the value isn't <span class="math-container">$6^7$</span>.</strike> (Question now corrected)</p>
<p>You are on the right track that rewriting the binomial coefficients in terms of factorials will help. Though the factors inside the sum over <span class="math-container">$r$</span> won't simplify much by themselves. The solution is to bring the factor <span class="math-container">$1/\binom{14}{k}$</span> into the second sum. This gives us
<span class="math-container">$$
\left.\frac{r!}{k!(r-k)!}\frac{14!}{r!(14-r)!}\right/\frac{14!}{k!(14-k)!} = \frac{(14-k)!}{(r-k)!(14-r)!}\ .
$$</span>
This can be recognized as <span class="math-container">$\binom{14-k}{r-k}$</span>.
Since the inner sum is from <span class="math-container">$r = k$</span> to <span class="math-container">$14$</span>, we can let <span class="math-container">$t = r-k$</span> and change the bounds to <span class="math-container">$0$</span> and <span class="math-container">$14-k$</span>. This turns the inner sum into
<span class="math-container">$$
\sum_{t=0}^{14-k} \binom{14-k}{t} = 2^{14-k}\ .
$$</span>
The outer sum can now be evaluated,
<span class="math-container">$$
\sum_{k=0}^7 \binom{7}{k} 2^{14-k} = 2^7\sum_{k=0}^{7}\binom{7}{k} 2^{7-k} = 2^7(1+2)^7 = 6^7\ .
$$</span></p>
|
2,705,980 | <p>I have the following problem:
\begin{cases}
y(x) =\left(\dfrac14\right)\left(\dfrac{\mathrm dy}{\mathrm dx}\right)^2 \\
y(0)=0
\end{cases}
Which can be written as:</p>
<p>$$ \pm 2\sqrt{y} = \frac{dy}{dx} $$</p>
<p>I then take the positive case and treat it as an autonomous, separable ODE. I get $f(x)=x^2$ as my solution.</p>
<p>In order to solve this problem, I have to divide each side of the equation by $\sqrt{y}$, i.e. multiply by $\frac{1}{\sqrt{y}}$. But since the solution to this IVP is $y(x)=x^2$, zero is in the image of $f(x)$. So at that particular point $1/\sqrt{y}$ is not defined, yet the <strong>solution</strong> is defined at $y =0$.</p>
<p>In fact, $y(x)= 0$ for all x is another solution. But aside from this solution the non-trivial solution is defined at zero also.</p>
<p>So is it wrong to multiply across by $1/\sqrt{y}$? And if so, how else do I approach this question?</p>
| Martín-Blas Pérez Pinilla | 98,199 | <p>By the <a href="https://en.wikipedia.org/wiki/Rank%E2%80%93nullity_theorem" rel="nofollow noreferrer">rank-nullity theorem:</a> rank = $\dim$ of image $= n\implies\dim\ker A =0$.</p>
|
2,190,551 | <p>How can I find the degrees of freedom of an $n \times n$ real orthogonal matrix?</p>
<p>I have tried to proceed by the principle of induction, but I fail. Please tell me the right way to proceed.</p>
<p>Thank you in advance.</p>
| jnez71 | 295,791 | <p>For an $n \times n$ matrix with column vectors $v_1 \dots v_n$, we say that it is orthogonal if,
$$
\langle v_i, v_j \rangle = 0,\ \ \ \forall (i,j)\ \Big{|}\ i\neq j
$$</p>
<p>Since the inner product is commutative, the condition above gives us <a href="https://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_%E2%8B%AF" rel="noreferrer">this many</a> unique constraint equations:
$$\sum_{k=1}^{n-1} k = \frac{n(n-1)}{2}$$</p>
<p>The matrix has $n^2$ elements, so in the orthogonal case,
$$ \text{dof}_\text{orthogonal} = n^2 - \frac{n(n-1)}{2}$$</p>
<p>If we also require that the matrix is ortho<em>normal</em>, then there are $n$ more constraints of the form,
$$
\langle v_i, v_i \rangle = 1,\ \ \ \forall i
$$</p>
<p>and so in the orthonormal case,
\begin{align}
\text{dof}_\text{orthonormal} &= n^2 - \frac{n(n-1)}{2} - n\\
&= \frac{n(n-1)}{2}
\end{align}</p>
<p>This proves that $2 \times 2$ orthonormal matrices have 1 degree of freedom and that $3 \times 3$ orthonormal matrices have 3 degrees of freedom, which agrees with our intuition about <a href="https://en.wikipedia.org/wiki/Rotation_matrix" rel="noreferrer">rotations</a>.</p>
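<p>The constraint counting can be cross-checked numerically: the dimension of the set $\{M : M^\top M = I\}$ is $n^2$ minus the rank of the constraint Jacobian. A sketch assuming NumPy is available:</p>

```python
import numpy as np

def orthonormal_dof(n, seed=0):
    """n^2 free entries minus the rank of the Jacobian of the map
    M -> upper triangle of (M^T M - I), evaluated at a random
    orthonormal matrix via finite differences."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthonormal point

    def g(M):
        # the n(n+1)/2 independent constraint values
        return (M.T @ M - np.eye(n))[np.triu_indices(n)]

    h = 1e-6
    J = np.empty((n * (n + 1) // 2, n * n))
    for j in range(n * n):
        E = np.zeros(n * n)
        E[j] = h
        J[:, j] = (g(Q + E.reshape(n, n)) - g(Q)) / h
    return n * n - np.linalg.matrix_rank(J, tol=1e-4)
```

<p>This returns 1 for $n=2$, 3 for $n=3$, and $n(n-1)/2$ in general, matching the formula above.</p>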
|
3,749,892 | <p>I'll just upload a Brilliant page first. Check this out please.</p>
<p><a href="https://i.stack.imgur.com/yj9Yc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yj9Yc.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/7iPh2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7iPh2.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/eQhNi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eQhNi.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/Y4bx5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y4bx5.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/GeXPg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GeXPg.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/9YHK6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9YHK6.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/axvNX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/axvNX.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/LaNBr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LaNBr.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/iizF1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iizF1.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/lELer.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lELer.png" alt="enter image description here" /></a></p>
<p>Now, what I don't understand are three parts.</p>
<ol>
<li><p>I get it when they say it's important to have a multiplicative inverse there: you need a multiplicative inverse to return to 1. But how does that guarantee a cycle comes back to where it started when the start is not 1? Say you start at 13 or wherever and keep multiplying by some number: how can you be sure the cycle comes back to 13 just because the multiplier has an inverse modulo the modulus?</p>
</li>
<li><p>Again, they said that this whole process is related to the multiplicative inverse, and that this is why the cycles whose numbers are relatively prime to the modulus all have the same length. This is the most confusing one to me. Firstly, I don't even get why they quietly change the subject from "the number being multiplied is relatively prime to the modulus" to "the numbers in the cycles are relatively prime to the modulus". Furthermore, how can you confidently say that multiplying the integers of one cycle by some fixed integer will give you a unique integer from the other cycle?</p>
</li>
<li><p>They say it is all about being relatively prime to the modulus or not. But as you can see in the picture below, I think there's more to it than that. <a href="https://i.stack.imgur.com/IFl8x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IFl8x.png" alt="enter image description here" /></a>
Look: the red cycle has numbers that are not relatively prime to the modulus, 14, and it still has the same length and the same shape as the black one.</p>
</li>
</ol>
| Angela Pretorius | 15,624 | <p>The integers which are coprime to <span class="math-container">$n$</span> form a group under multiplication modulo <span class="math-container">$n$</span>. Denote this group by <span class="math-container">$G$</span>.</p>
<p>Let <span class="math-container">$H$</span> be the group of powers of <span class="math-container">$a$</span> modulo <span class="math-container">$n$</span> for some <span class="math-container">$a$</span> coprime to <span class="math-container">$n$</span>.</p>
<p>The multiplicative cycles in <span class="math-container">$G$</span> are the cosets of <span class="math-container">$H$</span> in <span class="math-container">$G$</span>.</p>
<p>Let <span class="math-container">$g_1H$</span>, <span class="math-container">$g_2H$</span> be two such cosets. These two cosets must contain the same number of elements because <span class="math-container">$f(x)=g_2g_1^{-1}x$</span> is a bijection from <span class="math-container">$g_1H$</span> to <span class="math-container">$g_2H$</span>.</p>
<hr />
<p>Your example in #3 can be generalized as follows.</p>
<p>Let <span class="math-container">$a$</span> be coprime to <span class="math-container">$n$</span>, <span class="math-container">$q$</span> be a divisor of <span class="math-container">$n$</span> and <span class="math-container">$a$</span> be coprime to <span class="math-container">$q$</span>. Then the cycle <span class="math-container">$a^k$</span> and the cycle <span class="math-container">$qa^k$</span> are of the same length; <span class="math-container">$f(x)=qx$</span> is a bijection because <span class="math-container">$qa^k$</span> will always be a multiple of <span class="math-container">$q$</span> modulo <span class="math-container">$n$</span> and so <span class="math-container">$f^{-1}(x)=x/q$</span> is well-defined.</p>
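<p>Both claims are easy to observe computationally. The modulus 14 is taken from the question; the multiplier <span class="math-container">$a=3$</span> is my own choice for illustration:</p>

```python
from math import gcd

def cycle(start, a, n):
    """Orbit of `start` under x -> a*x (mod n). Requires gcd(a, n) = 1,
    so that multiplication by a is a permutation and the orbit closes."""
    out, x = [start], (start * a) % n
    while x != start:
        out.append(x)
        x = (x * a) % n
    return out

n, a = 14, 3
units = [u for u in range(1, n) if gcd(u, n) == 1]
unit_lengths = {len(cycle(u, a, n)) for u in units}   # cosets: one common length

even_cycle = cycle(2, a, n)   # a "red cycle" of multiples of q = 2
```

<p>All cycles of units share one length, and the cycle through 2 (numbers not coprime to 14) turns out to have that same length too, just as in the question's picture.</p>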
|
3,749,892 | <p>I'll just upload a Brilliant page first. Check this out please.</p>
<p><a href="https://i.stack.imgur.com/yj9Yc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yj9Yc.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/7iPh2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7iPh2.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/eQhNi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eQhNi.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/Y4bx5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y4bx5.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/GeXPg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GeXPg.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/9YHK6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9YHK6.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/axvNX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/axvNX.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/LaNBr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LaNBr.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/iizF1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iizF1.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/lELer.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lELer.png" alt="enter image description here" /></a></p>
<p>Now, what I don't understand are three parts.</p>
<ol>
<li><p>I get it when they say it's important to have a multiplicative inverse there: you need a multiplicative inverse to return to 1. But how does that guarantee a cycle comes back to where it started when the start is not 1? Say you start at 13 or wherever and keep multiplying by some number: how can you be sure the cycle comes back to 13 just because the multiplier has an inverse modulo the modulus?</p>
</li>
<li><p>Again, they said that this whole process is related to the multiplicative inverse, and that this is why the cycles whose numbers are relatively prime to the modulus all have the same length. This is the most confusing one to me. Firstly, I don't even get why they quietly change the subject from "the number being multiplied is relatively prime to the modulus" to "the numbers in the cycles are relatively prime to the modulus". Furthermore, how can you confidently say that multiplying the integers of one cycle by some fixed integer will give you a unique integer from the other cycle?</p>
</li>
<li><p>They say it is all about being relatively prime to the modulus or not. But as you can see in the picture below, I think there's more to it than that. <a href="https://i.stack.imgur.com/IFl8x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IFl8x.png" alt="enter image description here" /></a>
Look: the red cycle has numbers that are not relatively prime to the modulus, 14, and it still has the same length and the same shape as the black one.</p>
</li>
</ol>
| Pablo Carneiro Elias | 272,929 | <p>If <span class="math-container">$a$</span> and <span class="math-container">$m$</span> are coprimes, then, <span class="math-container">$m \nmid a^i$</span>.</p>
<p>Since all remainders modulo <span class="math-container">$m$</span> belong to the set <span class="math-container">$\{0,...,m-1\}$</span>, by the pigeonhole principle there exist <span class="math-container">$i<j$</span> such that <span class="math-container">$a^i \equiv a^j (\textrm{mod}\ m)$</span>.</p>
<p><span class="math-container">$a^i \equiv a^j (\textrm{mod}\ m) \implies a^i(a^{(j-i)}-1) \equiv 0$</span>.</p>
<p>But <span class="math-container">$\textrm{gcd}(a,m) = 1 \implies m \nmid a^i \implies m \mid a^{(j-i)}-1$</span>.
So <span class="math-container">$a^{(j-i)}-1 \equiv 0 (\textrm{mod}\ m) \implies a^{(j-i)} \equiv 1 (\textrm{mod}\ m)$</span>. This means <span class="math-container">$k=(j-i)$</span> brings the cycle back to 1. So if <span class="math-container">$a$</span> and <span class="math-container">$m$</span> are coprime, the inverse will always be reached and the cycle will be closed.</p>
<p>If you start on 1 and reach 1, you must restart the cycle.</p>
<p>This proves that for any <span class="math-container">$a,m$</span> coprime (including the coprimes of <span class="math-container">$m$</span> in <span class="math-container">$\{1,...,m-1\}$</span>), there exists <span class="math-container">$a^{-1}$</span>. Now, since each coprime to <span class="math-container">$m$</span> has an inverse, you can explicitly build an invertible function from any cycle to any other, by first going back to <span class="math-container">$1$</span> and then moving in either direction, or the other way around. So you can build an injective function from cycle 1 (<span class="math-container">$c_1$</span>) to cycle 2 (<span class="math-container">$c_2$</span>), <span class="math-container">$c_1 \to c_2$</span>. But you can do the same from <span class="math-container">$c_2$</span> to <span class="math-container">$c_1$</span>. Because of this, by the <a href="https://en.wikipedia.org/wiki/Schr%C3%B6der%E2%80%93Bernstein_theorem" rel="nofollow noreferrer">Schröder–Bernstein theorem</a>, you can guarantee that the two cycles have the same number of elements.</p>
|
4,499,058 | <blockquote>
<p>Let <span class="math-container">$A:L_2[0,1]\to L_2[0,1]$</span> be defined by<span class="math-container">$$Ax(t)=\int \limits _0^1t\left (s-\frac{1}{2}\right )x(s)\,ds\quad \forall t\in [0,1].$$</span>Compute the adjoint and the norm of <span class="math-container">$A$</span></p>
</blockquote>
<p>This is my progress so far:</p>
<p>First, I think that <span class="math-container">$\|A\|=\dfrac{1}{2}$</span>; I have only shown that:</p>
<p>Let <span class="math-container">$x\in L_2[0,1]$</span> then<span class="math-container">$$\|Ax(t)\|_2=\left (\int \limits _0^1t\left (s-\frac{1}{2}\right )x(s)\,ds\right )^{1/2}\leq \frac{1}{2}\|x\|_2.$$</span>Is this correct?<br>
For the other inequality I try to find <span class="math-container">$x$</span> with norm <span class="math-container">$1$</span> such that <span class="math-container">$\|Ax\|=\dfrac{1}{2}$</span>.<br>
Any hint or help for this step or for computing the adjoint will be greatly appreciated.</p>
| Abhijeet Vats | 426,261 | <p>Not quite. I'll tell you what you did wrong first and try to point you in the right direction. My solution will be presented later and you can use that as a reference.</p>
<p>So, your calculation of <span class="math-container">$\left|\left|Ax\right|\right|_2$</span> is incorrect. You haven't used the correct expression. We have the map:
<span class="math-container">$$t \mapsto Ax(t) := t \int_{0}^{1} (s-\frac{1}{2})x(s) \ ds$$</span>
The correct formula for the <span class="math-container">$L^2$</span>-norm of <span class="math-container">$Ax$</span> is:
<span class="math-container">$$\left|\left|Ax\right|\right|_2 = \left(\int_{0}^{1} t^2 \left|\int_{0}^{1} \left(s-\frac{1}{2}\right)x(s) \ ds \right|^2 \ dt \right)^{\frac{1}{2}}$$</span></p>
<p>Start with this and begin simplifying. You might possibly have to use the Cauchy-Schwarz Inequality along the way and justify why that gives you the best possible estimate. As for the adjoint, you need to start by playing around with the following equation:
<span class="math-container">$$\langle Tf,g \rangle = \langle f,T^{\star} g \rangle$$</span></p>
<p>The adjoint is defined by this and if you play around with it, you'll get an expression for what the adjoint should be.</p>
<hr />
<p><strong>Solution:</strong></p>
<p>Observe that:
<span class="math-container">$$\left|\left|Ax\right|\right|_2^2 = \frac{1}{3} \left|\int_{0}^{1} \left(s-\frac{1}{2} \right) x(s) \ ds \right|^2$$</span>
By the Cauchy-Schwarz Inequality, we have that:
<span class="math-container">$$\left|\int_{0}^{1} \left(s-\frac{1}{2}\right) x(s) \ ds \right| \leq \left(\int_{0}^{1} \left(s-\frac{1}{2} \right)^2 \ ds \right)^{\frac{1}{2}} \left|\left|x\right|\right|_2 = \frac{1}{2\sqrt{3}} \left|\left|x\right|\right|_2$$</span>
I've left the calculation of the integral to you, that should be pretty easy. Therefore, it follows that:
<span class="math-container">$$\left|\left|Ax\right|\right|_2^2 \leq \frac{1}{3} \cdot \frac{1}{12} \cdot \left|\left|x\right|\right|_2^2$$</span>
Hence, it follows that:
<span class="math-container">$$\left|\left|Ax\right|\right|_2 \leq \frac{1}{6} \left|\left|x\right|\right|_2$$</span>
as was desired. Now, I claim that <span class="math-container">$\left|\left|A\right|\right| = \frac{1}{6}$</span>. To check that this is the case, let <span class="math-container">$x(s) = s-\frac{1}{2}$</span>. Then:
<span class="math-container">$$\left|\left|x\right|\right|_2 = \left(\int_{0}^{1} \left(s-\frac{1}{2} \right)^2 \ ds \right)^{\frac{1}{2}} = \frac{1}{2\sqrt{3}}$$</span>
Now, define <span class="math-container">$x_0(s) = 2\sqrt{3} (s-\frac{1}{2})$</span>. This has <span class="math-container">$L^2$</span>-norm equal to <span class="math-container">$1$</span> based on the calculation we just did. Observe that:
<span class="math-container">$$Ax_0(t) = t \int_{0}^{1} 2\sqrt{3}(s-\frac{1}{2})^2 \ ds = \frac{t}{2\sqrt{3}}$$</span>
Therefore, we have that:
<span class="math-container">$$\left|\left|Ax_0 \right|\right|_2 = \left(\int_{0}^{1} \frac{t^2}{12} \ dt \right)^{\frac{1}{2}} = \frac{1}{6}$$</span>
In other words, we knew that <span class="math-container">$\left|\left|A\right|\right| \leq \frac{1}{6}$</span> and it turns out that this map actually "achieves" the bound of <span class="math-container">$\frac{1}{6}$</span>. It follows that <span class="math-container">$\left|\left|A\right|\right| = \frac{1}{6}$</span>.</p>
<p>Next, let's calculate the adjoint. For this, let's simplify our notation. Let <span class="math-container">$k(s,t) = t(s-\frac{1}{2})$</span>. Then:
<span class="math-container">$$Ax(t) = \int_{0}^{1} k(s,t) x(s) \ ds$$</span>
Therefore, we have that:
<span class="math-container">$$\langle Ax, y \rangle = \int_{0}^{1} \overline{Ax(t)} y(t) \ dt = \int_{0}^{1} \int_{0}^{1} k(s,t) \overline{x(s)} y(t) \ ds \ dt$$</span>
<span class="math-container">$$\langle Ax,y \rangle = \int_{0}^{1} \overline{x(s)} \int_{0}^{1} k(s,t) y(t) \ dt \ ds = \langle x, Ay \rangle $$</span></p>
<p>This actually proves that the operator is symmetric. Now, it follows that the operator is self-adjoint and we have that:
<span class="math-container">$$A^{\star} = A$$</span></p>
<p>as was desired.</p>
<p><strong>Edit:</strong></p>
<p>So, the calculation above for the adjoint is incorrect. The idea behind it is correct, but what I derived towards the end isn't. Have a look at Ryszard's answer if you want the correct derivation; I might correct mine later. Everything else should be quite fine.</p>
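<p>The value <span class="math-container">$\frac16$</span> can be cross-checked by discretizing <span class="math-container">$A$</span> on a grid and taking the top singular value (a numerical sketch assuming NumPy, with midpoint quadrature). The same matrix also confirms the Edit's point that <span class="math-container">$A$</span> is not symmetric:</p>

```python
import numpy as np

N = 500
s = (np.arange(N) + 0.5) / N       # midpoint quadrature nodes on [0, 1]
K = np.outer(s, s - 0.5)           # kernel k(t, s) = t (s - 1/2); rows index t
B = K / N                          # L2-weighted discretization of A
op_norm = np.linalg.svd(B, compute_uv=False)[0]
# op_norm converges to 1/6 as N grows, and B is visibly not symmetric
```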
|
313,025 | <p>I have two problems asking for proofs of limits: </p>
<blockquote>
<p>Prove the following limit: <br/>$$\sup_{x\ge 0}\ x e^{x^2}\int_x^\infty e^{-t^2} \, dt={1\over 2}.$$</p>
</blockquote>
<p>and, </p>
<blockquote>
<p>Prove the following limit: <br/>$$\sup_{x\gt 0}\ x\int_0^\infty {e^{-px}\over {p+1}} \, dp=1.$$</p>
</blockquote>
<p>I feel that these two problems are of the same kind. Would anyone please help me with one of them, so that I may figure out the other one? Many thanks! </p>
| André Nicolas | 6,312 | <p>For the first problem, use integration by parts, followed by crude estimation. </p>
<p>Let $u=\frac{1}{t}$ and $dv=te^{-t^2}\,dt$. Then $du=-\frac{1}{t^2}\,dt$ and we can take $v=-\frac{1}{2}e^{-t^2}$. Thus our integral is equal to
$$\frac{1}{x}\cdot \frac{1}{2}e^{-x^2}-\int_x^\infty \frac{1}{2t^2}e^{-t^2}\,dt.$$
Multiplying $\frac{1}{x}\cdot\frac{1}{2}e^{-x^2}$ by $xe^{x^2}$ gives us the main term. It remains to check that $\int_x^\infty \frac{1}{2t^2}e^{-t^2}\,dt$ is small enough that multiplying by $xe^{x^2}$ gives a small result. </p>
<p>We have
$$\int_x^\infty \frac{1}{2t^2}e^{-t^2}\,dt \lt \frac{1}{2x^2}\int_x^\infty e^{-t^2}\,dt.$$
We can bound $\int_x^\infty e^{-t^2}\,dt$ by integrating from $x$ to $x+1$, and from $x+1$ to $\infty$. The integral from $x$ to $x+1$ is $\lt e^{-x^2}$. </p>
<p>For $x+1$ to $\infty$, make the change of variable $t=s+1$. We want
$\int_{s=x}^\infty e^{-s^2-2s-1}\,ds$, which is less than $e^{-x^2}\int_x^\infty e^{-2s-1}\,ds$. The remaining integral is bounded by a constant. </p>
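<p>The bound can also be watched numerically: with <span class="math-container">$\int_x^\infty e^{-t^2}\,dt=\frac{\sqrt\pi}{2}\operatorname{erfc}(x)$</span>, the function increases toward <span class="math-container">$\frac12$</span> without reaching it, so the supremum is the limit. A quick check (not part of the original answer):</p>

```python
from math import erfc, exp, pi, sqrt

def g(x):
    # x * e^{x^2} * (integral of e^{-t^2} from x to infinity), via erfc
    return x * exp(x * x) * (sqrt(pi) / 2) * erfc(x)

vals = [g(x) for x in (1, 2, 5, 10, 20)]
# values increase monotonically toward 1/2 while staying strictly below it
```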
|
313,025 | <p>I have two problems asking for proofs of limits: </p>
<blockquote>
<p>Prove the following limit: <br/>$$\sup_{x\ge 0}\ x e^{x^2}\int_x^\infty e^{-t^2} \, dt={1\over 2}.$$</p>
</blockquote>
<p>and, </p>
<blockquote>
<p>Prove the following limit: <br/>$$\sup_{x\gt 0}\ x\int_0^\infty {e^{-px}\over {p+1}} \, dp=1.$$</p>
</blockquote>
<p>I feel that these two problems are of the same kind. Would anyone please help me with one of them, so that I may figure out the other one? Many thanks! </p>
| Mhenni Benghorbal | 35,472 | <p>Let</p>
<p>$$ f(x)= x e^{x^2}\int_x^\infty e^{-t^2}\,dt = x e^{x^2}g(x), \qquad g(x):=\int_x^\infty e^{-t^2}\,dt.$$ </p>
<p>We can see that $ f(0)=0 $ and $f(x)>0,\,\, \forall x>0$. Taking the limit as $x$ goes to infinity and using L'Hôpital's rule and the Leibniz integral rule yields</p>
<p>$$ \lim_{ x\to \infty } xe^{x^2}g(x) = \lim _{x\to \infty} \frac{g(x)}{\frac{1}{xe^{x^2}}}=\lim_{x \to \infty} \frac{g'(x)}{\frac{1}{(xe^{x^2})'}}=\lim_{x \to \infty} \frac{-e^{-x^2}}{{-{\frac {{{\rm e}^{-{x}^{2}}} \left( 2\,{x}^{2}+1 \right) }{{x}^{2}}}}} =\frac{1}{2}. $$ </p>
|
184,772 | <p>Is there any way to write this as a <strong>function</strong> that uses <strong>Block[ ]</strong> and a <strong>Do[ ]</strong> loop, instead of my top-level code?</p>
<p>Here is my code:</p>
<pre><code>(* m = Maximum members of "list" *)
list = {{12, 9, 10, 5}, {3, 7, 18, 6}, {1, 2, 3, 3},
{4, 5, 6, 2}, {1, 13, 1, 1}};
m = {};
Do[
AppendTo[m, Max[list[[All, i]]]];
, {i, 1, Length[list[[1]]]}];
m
(*{12,13,18,6}*)
</code></pre>
| ZaMoC | 46,583 | <p>If you want to use your code, try</p>
<pre><code>F[list_] := Block[{m = {}},
  Do[AppendTo[m, Max[list[[All, i]]]], {i, Length[list[[1]]]}];
  m]
F[list]
</code></pre>
<p>Otherwise, you can use this shorter function:</p>
<pre><code>F[list_]:=Max/@Transpose@list
</code></pre>
|
2,750,931 | <p>I'm in the process of exploring bra-ket notation. In it, I often find operators in the form $\lvert a\rangle\langle b\rvert$, which can be thought of as multiplying a column vector $a$ by a row vector $b$.</p>
<p>This strikes me as a construction which should probably have a name that I can research to understand the properties of matrices formed this way, but I'm having trouble finding sources that name such matrices.</p>
<p>What is it called when a matrix can be decomposed into a column vector times a row vector? I'd like to look up the properties of such a matrix.</p>
<p>$$M=\begin{pmatrix}
a_0 \\
a_1 \\
\vdots \\
a_n \\
\end{pmatrix}
\begin{pmatrix}
b_0 & b_1 & \ldots & b_n \\
\end{pmatrix}$$</p>
| Pietro Paparella | 414,530 | <p>It can be shown that a matrix $M$ has rank equal to one if and only if $M = ab^\top$, where $a$ and $b$ are nonzero column vectors with complex entries, so the matrices you are thinking of can be referred to as rank-one matrices.</p>
<p>The product $ab^\top$ is also known as the outer product whereas the product $a^\top b$ is known as the inner product of $a$ and $b$.</p>
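<p>In code, the rank-one property and the way $\lvert a\rangle\langle b\rvert$ acts are immediate (a sketch assuming NumPy):</p>

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -1.0, 0.5])

M = np.outer(a, b)               # outer product a b^T, i.e. |a><b|
r = np.linalg.matrix_rank(M)     # = 1 for any nonzero a, b

# |a><b| acts on x by taking an inner product: M x = a (b . x)
x = np.array([0.5, 1.5, -2.0])
lhs, rhs = M @ x, a * (b @ x)
# also, trace(a b^T) equals the inner product a . b
```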
|
2,247,498 | <p>Imagine a circle of radius R in 3D space with a line l running through its center C in a direction perpendicular to the plane of the circle. Basically, like the axle of a wheel. </p>
<p>From a given point P that is not on the circle or on l, a ray extends to intersect both l and the circle. What would be the equations used to find the intersection points the ray make with the circle and l? You are given the coordinates of C and P, the radius R and the orientation of l.</p>
<p>I am trying to model looking from a point P onto a wheel-and-axis shape and find, from the point of view P, the point on the edge of the circle that would appear to intersect its axis. Of course it doesn't actually intersect, but that is how this 3d structure would appear in a 2d image if a camera were situated at point P.</p>
| Futurologist | 357,211 | <p>No need to rotate anything; it is a matter of very simple geometry which, if you follow it through, gives you a very simple explicit algorithm for computing the points you need. </p>
<p>Assume the orientation of the line $l$ is given by a vector $\vec{v}$. Then the circle, call it $k$, has given center $C$ and radius $r$ and lies in the plane $\beta$ passing through $C$ and orthogonal to $\vec{v}$. Denote by $s$ the ray through point $P$ that intersects both the line $l$ and the circle $k$. Then, since $s$ intersects $l$, the two together determine a plane $\alpha$, which is orthogonal to the plane of the circle $\beta$ and is transverse to the circle itself. The ray $s$ lies in this plane $\alpha$ and at the same time intersects $k$, so the ray $s$ passes through a point of intersection of $k$ and $\alpha$. So all you have to do is find the intersection points of the plane $\alpha$ and the circle $k$ (technically, you have two solutions of your problem). It becomes even simpler when you notice that the planes $\alpha$ and $\beta$ (the one of the circle $k$) intersect in a common line $l_C$ that passes through the circle center $C$, that is $l_C = \alpha \cap \beta$. Therefore the intersection points between $k$ and $\alpha$ are in fact the intersection points of $l_C$ and $k$. In other words, the two points you are looking for are exactly $l_C \cap k$, which, by the way, are the two points on $l_C$ at a distance $r$ from point $C \in l_C$ (on either side of $C$ on $l_C$). </p>
<p>All of the above observations prompt the following algorithm:</p>
<ol>
<li><p>If the dot product $\big(\vec{v} \cdot \vec{CP}\big) < 0$ then set $\vec{v} := -\vec{v}$. This way we make sure both vectors $\vec{v}$ and $\vec{CP}$ are in the same half space with respect to the plane $\beta$ defined by the circle (recall $\beta$ is orthogonal to $\vec{v}$).</p></li>
<li><p>Define vector $$\vec{n} = \vec{v} \times \vec{CP}$$ (cross product) which is orthogonal to $\alpha$, and thus orthogonal to $l_C$.</p></li>
<li><p>Then define vector $$\vec{w} = \vec{v} \times \vec{n} = \vec{v} \times\big(\vec{v} \times \vec{CP}\big)$$ and normalize it to $$\vec{u} = \frac{\vec{w}}{|\vec{w}|} = \frac{ \vec{v} \times\big(\vec{v} \times \vec{CP}\big)}{| \vec{v} \times\big(\vec{v} \times \vec{CP}\big)|}$$ Vector $\vec{u}$ is parallel to line $l_C$ because $l_C$ is orthogonal to both vectors $\vec{v}$ and $\vec{n}$, and vector $\vec{u}$ is also orthogonal to both of them (cross product of the two).</p></li>
<li><p>If point $O$ is the origin of the coordinate system, a point $X$ lies on the line $l_C$ if and only if
$$\vec{OX}= \vec{OC} + t \, \vec{u}$$ </p></li>
<li><p>The two intersection points $Q_1$ and $Q_2$ you are looking for are
$$\vec{OQ_1}= \vec{OC} + r \, \vec{u}$$
$$\vec{OQ_2}= \vec{OC} - r \, \vec{u}$$</p></li>
</ol>
<p>If I am not wrong, according to the way I have deliberately defined the relative location of the vectors, point $Q_1$ should be "behind" the line $l$ and point $Q_2$ "in front" when looking from point $P$. For further reference I will use the notations $R_1 = PQ_1 \cap l$ and $R_2 = PQ_2 \cap l$.</p>
<ol start="6">
<li><p>For $i=1,2$ calculate the vectors $$\vec{Q_iP} = \vec{CP} - (-1)^{i} \, r \, \vec{u} \,\,\, \text{ and } \,\,\, |Q_iP| = \sqrt{\big(\vec{Q_iP} \cdot \vec{Q_iP} \big)}$$ i.e. the latter is a dot product and then square root. The former equality holds because $$\vec{Q_iP} = \vec{OP} - \vec{OQ_i} = \vec{CP} - \vec{CQ_i} = \vec{CP} - (-1)^i \, r \, \vec{u}$$</p></li>
<li><p>For $i=1,2$ calculate $\cos(\alpha_i) = \cos\big( \angle \, CQ_iP\big)$ by calculating the dot products $$\cos(\alpha_i) = (-1)^i\, \frac{\big(\, \vec{u} \cdot\vec{Q_iP} \,\big)}{|{Q_iP}|}$$ </p></li>
<li><p>For $i=1,2$ calculate $$\vec{OR_i} = \vec{OQ_i} + \left(\,\,\frac{r}{|Q_iP| \,\cos(\alpha_i)}\,\right) \, \vec{Q_iP}$$ where by construction $R_1$ is in between $Q_1$ and $P$ (i.e. $Q_1$ is behind the line $l$ when looking from point $P$) while $R_2$ is outside the straight segment formed by $Q_2$ and $P$.</p></li>
</ol>
<p>If you manage to write a computer implementation of this algorithm, let me know if it works or not. If not, I will look up what needs to be corrected.</p>
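<p>Taking up the invitation above, here is a minimal NumPy sketch of steps 1-5 (the function name and the test configuration are my own; steps 6-8 for the axis points $R_i$ are omitted):</p>

```python
import numpy as np

def circle_edge_points(C, P, v, r):
    """Steps 1-5 above: return Q1 (behind the axis line l as seen from P)
    and Q2 (in front), the intersections of the circle k with the plane
    alpha through P and the axis.  Assumes P does not lie on l."""
    C, P, v = (np.asarray(t, dtype=float) for t in (C, P, v))
    CP = P - C
    if np.dot(v, CP) < 0:           # step 1: orient v into P's half space
        v = -v
    n = np.cross(v, CP)             # step 2: normal of the plane alpha
    w = np.cross(v, n)              # step 3: direction of l_C = alpha ∩ beta
    u = w / np.linalg.norm(w)
    return C + r * u, C - r * u     # step 5: the two candidate edge points

# Concrete check: circle of radius 2 in the xy-plane, axis along z,
# viewpoint off to the side.
C = np.array([0.0, 0.0, 0.0])
v = np.array([0.0, 0.0, 1.0])
P = np.array([5.0, 0.0, 3.0])
Q1, Q2 = circle_edge_points(C, P, v, 2.0)
print(Q1, Q2)
```

<p>In this configuration the two returned points come out at $(\pm 2, 0, 0)$: both lie on the circle and in the plane containing $P$ and the axis, as the construction requires.</p>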
|
586,112 | <p>Consider the following statement: $\{\{\varnothing\}\} \subset \{\{\varnothing\},\{\varnothing\}\}$</p>
<p>I think the above statement is false: $\{\{\varnothing\}\}$ is a subset of $\{\{\varnothing\},\{\varnothing\}\}$, but to be a proper subset there must be some element in $\{\{\varnothing\},\{\varnothing\}\}$ which is not in $\{\{\varnothing\}\}$. Since this is not the case here, the statement is false.</p>
<p>Is my explanation and answer right or not?</p>
| Stefan Smith | 55,689 | <p>An antiderivative of a function $f$ is <em>one</em> function $F$ whose derivative is $f$. The indefinite integral of $f$ is the set of <em>all</em> antiderivatives of $f$. If $f$ and $F$ are as described just now, the indefinite integral of $f$ has the form $\{F+c \mid c\in \mathbb{R}\}$. Usually people don't bother with the set-builder notation, and write things such as "$\int \cos(x)\,dx = \sin(x)+C$". </p>
<p>This is what I was taught. One of the other answers here is completely different. I did some Googling, and, to my surprise, Wikipedia defines an indefinite integral as a single function. I found a link at <a href="http://people.hofstra.edu/stefan_waner/realworld/tutorials4/frames6_1.html" rel="noreferrer">http://people.hofstra.edu/stefan_waner/realworld/tutorials4/frames6_1.html</a> that agrees with my answer. I don't know if there is any consensus in the math community about which answer is correct. </p>
|
159,585 | <p>This is a kind of a plain question, but I just can't get something.</p>
<p>For the congruence and a prime number $p$: $(p+5)(p-1) \equiv 0\pmod {16}$.</p>
<p>How come that, in addition to the solutions
$$\begin{align*}
p &\equiv 11\pmod{16}\\
p &\equiv 1\pmod {16}
\end{align*}$$
we also have
$$\begin{align*}
p &\equiv 9\pmod {16}\\
p &\equiv 3\pmod {16}\ ?
\end{align*}$$</p>
<p>Where do the last two come from? Are there always 4 solutions? I can see that they satisfy the equation, but how can I calculate them?</p>
<p>Thanks</p>
| Bill Dubuque | 242 | <p>In the Theorem below put $\rm\:p\!=\!2,\ c=-5,\ d = 1\:$ to deduce that $\rm\:mod\ 16,\ (x\!+\!5)(x\!-\!1)\:$ has roots $\rm\:x \equiv -5,1\pmod 8,\:$ which are $\rm\:x,x\!+\!8 \equiv -5,1,3,9\pmod{16}.$ It has $\rm\,4\ (\!vs.\,2)$ roots by $\rm\:x\!+\!5 \equiv x\!-\!1\pmod{2},\:$ so both are divisible by $2$, so the other need be divisible only by $\rm8\ (\!vs. 16).$ Choosing larger primes $\rm\:p\:$ yields a quadratic with as many roots as you desire. These matters are much clearer $\rm\:p$-adically, e.g. google <a href="http://en.wikipedia.org/wiki/Hensel%27s_lemma" rel="nofollow">Hensel's Lemma.</a></p>
<p><strong>Theorem</strong> $\ $ If prime $\rm\:p\:|\:c\!-\!d\:$ but $\rm\:p^2\nmid c\!-\!d\:$ then $\rm\:(x\!-\!c)(x\!-\!d)\:$ has $\rm\:2\!\;p\:$ roots mod $\rm\:p^4,\:$ namely $\rm\:x \equiv c+j\,p^3\:$ and $\rm\: x\equiv d+j\,p^3,\:$ for $\rm\:0\le j \le p\!-\!1.$</p>
<p><strong>Proof</strong> $\ $ Note $\rm\: a = x\!-\!d,\ b = x\!-\!c\:$ satisfy the hypotheses of the Lemma below, thus we deduce $\rm\:p^4\:|\:(x\!-\!c)(x\!-\!d)\iff p^3\:|\:x\!-\!c\:$ or $\rm\:p^3\:|\:x\!-\!d,\:$ i.e. $\rm\:x\equiv c,d\pmod{p^3}.\:$ This yields the claimed roots $\rm\:mod\,\ p^4,\:$ which are all distinct since $\rm\:c+jp^3\equiv d+kp^3\:$ $\Rightarrow$ $\rm\:p^4\:|\:c\!-\!d+(j\!-\!k)p^3\:$ $\Rightarrow$ $\rm\: p^3\:|\:c\!-\!d,\:$ contra hypothesis. $\quad$ <strong>QED</strong></p>
<p><strong>Lemma</strong> $\ $ If prime $\rm\:p\:|\:a\!-\!b\:$ but $\rm\:p^2\nmid a\!-\!b\:$ then $\rm\:p^4\:|\:ab\iff p^3\:|\:a\ $ or $\rm\:p^3\:|\:b.$</p>
<p><strong>Proof</strong> $\rm\,\ (\Rightarrow) \ \ p\:|\:ab\:\Rightarrow\:p\:|\:a\:$ or $\rm\:p\:|\:b,\:$ so $\rm\:p\:|\:a,b\:$ by $\rm\:a\equiv b\pmod p.$ But not $\rm\:p^2\:|\:a,b\:$ else $\rm\:p^2\:|\:a\!-\!b.\:$ So one of $\rm\:a,b\:$ is not divisible by $\rm\:p^2,\:$ hence the other is divisible by $\rm\:p^3.$ </p>
<p>$(\Leftarrow)\ \ $ As above, $\rm\:p\:|\:a,b.\:$ Since one is divisible by $\rm\:p^3,\:$ then $\rm\:p^4\:$ divides their product. $\ \ $ <strong>QED</strong></p>
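<p>The predicted roots can be confirmed by brute force (a short Python sketch):</p>

```python
# Enumerate the roots of (x+5)(x-1) ≡ 0 (mod 16) directly.  The theorem
# above (with p = 2, c = -5, d = 1) predicts the residues -5, 1 (mod 8),
# i.e. exactly {1, 3, 9, 11} mod 16.
roots = sorted(x for x in range(16) if (x + 5) * (x - 1) % 16 == 0)
print(roots)  # [1, 3, 9, 11]
```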
|
1,839,693 | <p>I think it must be true. Yet I have no rigorous proof for that. So, what I need to prove is that "group being simple" is invariant under isomorphism.</p>
<p>That if $G \cong H$, then either both are simple groups, or both are not simple.</p>
| Eric Wofsey | 86,856 | <p>Hint: Let $f:H\to G$ be an isomorphism and let $K\subseteq H$ be a normal subgroup. What can you say about $f(K)\subseteq G$?</p>
|
341,602 | <p>Let <span class="math-container">$L$</span> be a semisimple Lie algebra and let <span class="math-container">$(V,\varphi)$</span> be a finite-dimensional <span class="math-container">$L$</span>-module representation. Our main goal is to prove that <span class="math-container">$\varphi$</span> is completely reducible.
Consider an <span class="math-container">$L$</span>-submodule <span class="math-container">$W$</span> of <span class="math-container">$V$</span> of codimension one, and let <span class="math-container">$0 \longrightarrow W \longrightarrow V \longrightarrow F \longrightarrow 0$</span> be an exact sequence (where <span class="math-container">$F$</span> is an <span class="math-container">$L$</span>-module). From the book of James Humphreys called "<a href="https://link.springer.com/book/10.1007%2F978-1-4612-6398-2" rel="nofollow noreferrer">Introduction to Lie Algebras and Representation Theory</a>", I have understood the following steps:</p>
<ol>
<li>We take another proper submodule of <span class="math-container">$W$</span> denoted by <span class="math-container">$W'$</span> such that the exact sequence <span class="math-container">$0 \longrightarrow W/W' \longrightarrow V/W' \longrightarrow F \longrightarrow 0$</span> splits, so there exists a one dimensional <span class="math-container">$L$</span>-submodule of <span class="math-container">$V/W'$</span> (say <span class="math-container">$\tilde{W}/W'$</span>) complementary to <span class="math-container">$W/W'$</span>.</li>
<li>We proceed by induction on the dimension of <span class="math-container">$W$</span>, so we get an exact sequence <span class="math-container">$0 \longrightarrow W' \longrightarrow \tilde{W} \longrightarrow F \longrightarrow 0$</span> which splits. It follows easily that <span class="math-container">$V=W \oplus X$</span>, where <span class="math-container">$X$</span> is a submodule complementary to <span class="math-container">$W'$</span> in <span class="math-container">$\tilde{W}$</span>.</li>
<li>We suppose that <span class="math-container">$W$</span> is irreducible, so we may use Schur's lemma on <span class="math-container">$c \vert_{W}$</span> to say that <span class="math-container">$\operatorname{Ker} c$</span> is an <span class="math-container">$L$</span>-submodule of <span class="math-container">$V$</span>, where <span class="math-container">$c$</span> is the endomorphism of <span class="math-container">$V$</span> defined in 6.2.</li>
</ol>
<p>The other parts of the proof are very hard, and I didn't understand them. Can someone help me to figure out those parts? If there is another comprehensible method, can someone share it with us?</p>
| Jim Humphreys | 4,231 | <p>Maybe it would clarify matters if I gave a little more background, in community wiki format.</p>
<p>The basic idea of this algebraic proof goes back to a short paper by Richard Brauer (1936) in German in <em>Mathematische Zeitschrift</em>, an interesting time in his life when he had been expelled from his professorship in Berlin and took up a position in Toronto. (This particular paper has no marginal additions, as do some other papers; I bought a used copy of his collected papers in three volumes which used to belong to him.)</p>
<p>Brauer's proof is of course less natural than the original proof by Weyl, but it applies to all semisimple Lie algebras over an algebraically closed field of characteristic 0 and may be the simplest algebraic proof. An attempt was made to simplify the proof, in a more recent textbook by K. Erdmann and her recent student M. Wildon <em><a href="https://link.springer.com/book/10.1007%2F1-84628-490-2" rel="nofollow noreferrer">Introduction to Lie algebras</a></em> (<a href="https://mathscinet.ams.org/mathscinet-getitem?mr=2218355" rel="nofollow noreferrer">MSN</a>), published by Springer in 2006 as an undergraduate text. This result is not essential for their further work but occurs in Theorem 9.16 with an unconvincing proof. This was acknowledged by them in a list of errata, and affects Exercises 9.15 (cf. 9.16) as well perhaps as Lemma 10.7. (A good exercise is to track down the specific fault in the proof.) The book itself is perhaps an attractive alternative to mine, having a more leisurely pace and more examples but covering much less material in a similar number of pages.</p>
<p>The Brauer method is used in Bourbaki's Chapter 1 (of their <a href="https://link.springer.com/book/10.1007/978-3-540-35337-9" rel="nofollow noreferrer">Groupes de Lie et algèbres de Lie</a>) as well as Jacobson's 1962 book <a href="https://mathscinet.ams.org/mathscinet-getitem?mr=143793" rel="nofollow noreferrer">Lie algebras</a> (now in a Dover reprint). But as stated above, Weyl's 1925 proof is more natural in the Lie group context, imitating the finite group method.</p>
|
112,320 | <p>What is the number of strings of length $235$ which can be made from the letters A, B, and C, such that the number of A's is always odd, the number of B's is greater than $10$ and less than $45$, and the number of C's is always even?</p>
<p>What I can think of is </p>
<p>$$\left(\binom{235}{235} - \left\lfloor235 - \frac{235}2\right\rfloor\right) \binom{235}{35} \binom {235}{ \lfloor 235/2\rfloor}\;.$$</p>
<p>Thanks</p>
| Ross Millikan | 1,827 | <p>Hint: I would do it with a program. You have 44*2*2 allowable states: the number of B's, the parity of A's, and the parity of C's. Write the recurrence relations and note that the empty string has even C's and no B's.</p>
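<p>Following the hint to do it with a program, here is one way (a sketch: it uses a direct multinomial sum rather than the suggested state recurrence, and cross-checks against brute force on a small case):</p>

```python
from math import comb
from itertools import product

def count(n, a_ok, b_ok, c_ok):
    # Number of length-n strings over {A, B, C} whose letter counts satisfy
    # the given predicates: a sum of multinomial coefficients n!/(a! b! c!).
    total = 0
    for a in range(n + 1):
        if not a_ok(a):
            continue
        for b in range(n + 1 - a):
            c = n - a - b
            if b_ok(b) and c_ok(c):
                total += comb(n, a) * comb(n - a, b)
    return total

a_ok = lambda a: a % 2 == 1      # odd number of A's
b_ok = lambda b: 10 < b < 45     # B's strictly between 10 and 45
c_ok = lambda c: c % 2 == 0      # even number of C's
answer = count(235, a_ok, b_ok, c_ok)

# Sanity check on a tiny case by brute force (n = 6, B-range shrunk so it
# can actually be met at this length).
small_b = lambda b: 1 <= b <= 3
brute = sum(1 for s in product("ABC", repeat=6)
            if a_ok(s.count("A")) and small_b(s.count("B")) and c_ok(s.count("C")))
assert brute == count(6, a_ok, small_b, c_ok)
print(answer)
```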
|
4,821 | <p>A quick bit of motivation: recently a question I answered quite a while ago ( <a href="https://math.stackexchange.com/questions/22437/combining-two-3d-rotations/178957">Combining Two 3D Rotations</a> ) picked up another (IMHO rather poor) answer. While it was downvoted by someone else and I strongly concur with their opinion, I haven't downvoted it myself because I'm leery of any perception of 'competitive' downvoting on questions that I've already answered; in general I tend to be <em>very</em> stingy with downvotes (certainly more than I probably should), but this seems like a particularly thorny case.</p>
<p>What I'm wondering is whether this is a reasonable concern (or reasonable approach) on my part; do people concur that this is something to be worried about from an ethical perspective, or should a bad answer be downvoted regardless of whether it might be abstractly 'beneficial' to myself to do so?</p>
| Rick Decker | 36,993 | <p>Consider the "symmetric" situation: would you hesitate to upvote another--good--answer after you had already provided one of your own? I certainly wouldn't. I put quotes around "symmetric" because it's not completely clear to me that the two positions are indeed equivalent. My feeling is that they are, especially since I can't see how two answers "compete" in any meaningful way.</p>
|
4,155,900 | <p>I'd like someone to verify my sketch proof of the below exercise 3.3.7 from Abbott's <em>Understanding Analysis</em>. If it's incorrect, could you hint/point at the correct approach to the proof? Thanks!</p>
<blockquote>
<p>Prove that the sum <span class="math-container">$C+C=\{x+y:x,y\in C\}$</span> is equal to the closed interval <span class="math-container">$[0,2]$</span>.</p>
</blockquote>
<blockquote>
<p>Because <span class="math-container">$C\subseteq [0,1], C+C \subseteq [0,2]$</span>, so we only need to prove the reverse implication. Given <span class="math-container">$s \in [0,2]$</span>, we must find two elements <span class="math-container">$x,y \in C$</span>, satisfying <span class="math-container">$x + y = s$</span>.</p>
</blockquote>
<blockquote>
<p>(a) Show that there exists <span class="math-container">$x_1,y_1 \in C_1$</span>, for which <span class="math-container">$x_1 + y_1 = s$</span>. Show in general that, for an arbitrary <span class="math-container">$n \in \mathbf{N}$</span>, we can always find <span class="math-container">$x_n,y_n \in C_n$</span> for which <span class="math-container">$x_n + y_n = s$</span>.</p>
</blockquote>
<p><em><strong>Proof.</strong></em></p>
<p>(a) Let <span class="math-container">$s \in [0,2]$</span>. Then, at least one of (1) <span class="math-container">$s \in \left[0,\frac{1}{3}\right]$</span> (2) <span class="math-container">$s \in \left[\frac{1}{3},\frac{4}{3}\right]$</span> (3) <span class="math-container">$s \in \left[\frac{4}{3},2\right]$</span> holds. If (1) holds, then we pick <span class="math-container">$x_1,y_1 \in \left[0,\frac{1}{3}\right]$</span>. If (2) holds, we pick <span class="math-container">$x_1 \in \left[0,\frac{1}{3}\right]$</span>, <span class="math-container">$y_1 \in \left[\frac{2}{3},1\right]$</span>. If (3) holds, we may pick <span class="math-container">$x_1,y_1 \in \left[\frac{2}{3},1\right]$</span>.</p>
<p>Now, <span class="math-container">$C_n$</span> is given by</p>
<p><span class="math-container">$$
\left[0,\frac{1}{3^n}\right]\cup\left[\frac{2}{3^n},\frac{3}{3^n}\right]\cup\left[\frac{4}{3^n},\frac{5}{3^n}\right]\cup \ldots \cup \left[\frac{3^n-1}{3^n},1\right]
$$</span></p>
<p>If <span class="math-container">$x_n$</span> belongs to at least one of the above sets, then</p>
<p><span class="math-container">\begin{align*}
x_n \in \bigcup_{k}\left[\frac{2k}{3^n},\frac{2k+1}{3^n}\right]
\end{align*}</span></p>
<p>where <span class="math-container">$k \in \{0,1,2\ldots,\frac{3^n-1}{2}\}$</span>. Suppose, <span class="math-container">$y_n$</span> belongs to one of the above sets, then</p>
<p><span class="math-container">\begin{align*}
y_n \in \bigcup_{l}\left[\frac{2l}{3^n},\frac{2l+1}{3^n}\right]
\end{align*}</span></p>
<p>where <span class="math-container">$l \in \{0,1,2\ldots,\frac{3^n-1}{2}\}$</span>.</p>
<p>By definition, <span class="math-container">$x_n + y_n$</span> belongs to</p>
<p><span class="math-container">\begin{align*}
\bigcup_{k,l}\left[\frac{2k+2l}{3^n},\frac{2k+2l+2}{3^n}\right]=\left[0,\frac{2}{3^n}\right]\cup \left[\frac{2}{3^n},\frac{4}{3^n}\right] \cup \ldots \cup \left[\frac{2\cdot(3^n - 1)}{3^n},2\right]=[0,2]
\end{align*}</span></p>
<blockquote>
<p>(b) Keeping in mind that the sequences <span class="math-container">$(x_n)$</span> and <span class="math-container">$(y_n)$</span> do not necessarily converge, show how they can nevertheless be used to produce the desired <span class="math-container">$x$</span> and <span class="math-container">$y$</span> in <span class="math-container">$C$</span> satisfying <span class="math-container">$x + y = s$</span>.</p>
</blockquote>
<p><span class="math-container">$\star$</span> Any hints on how to proceed here would be super-helpful!</p>
| Zerox | 737,359 | <p>Note that <span class="math-container">$\{x_n\}$</span> is bounded, so there is a subsequence <span class="math-container">$\{x_{n_k}\}$</span> converging to some <span class="math-container">$x \in [0,1]$</span>. Use contradiction to show that <span class="math-container">$x \in C$</span>: If not, then <span class="math-container">$x$</span> belongs to some <span class="math-container">$[0,1] \backslash C_k$</span> (this is an open set) but <span class="math-container">$\forall n \ge k+1, x_n \notin [0,1] \backslash C_k$</span>, contradicting the convergence. The corresponding <span class="math-container">$\{y_{n_k}\}$</span> sequence also converges to some <span class="math-container">$y$</span> because <span class="math-container">$x_{n_k}+y_{n_k}=s$</span>. Again <span class="math-container">$y$</span> can be shown to lie in <span class="math-container">$C$</span>.</p>
|
4,155,900 | <p>I'd like someone to verify my sketch proof of the below exercise 3.3.7 from Abbott's <em>Understanding Analysis</em>. If it's incorrect, could you hint/point at the correct approach to the proof? Thanks!</p>
<blockquote>
<p>Prove that the sum <span class="math-container">$C+C=\{x+y:x,y\in C\}$</span> is equal to the closed interval <span class="math-container">$[0,2]$</span>.</p>
</blockquote>
<blockquote>
<p>Because <span class="math-container">$C\subseteq [0,1], C+C \subseteq [0,2]$</span>, so we only need to prove the reverse implication. Given <span class="math-container">$s \in [0,2]$</span>, we must find two elements <span class="math-container">$x,y \in C$</span>, satisfying <span class="math-container">$x + y = s$</span>.</p>
</blockquote>
<blockquote>
<p>(a) Show that there exists <span class="math-container">$x_1,y_1 \in C_1$</span>, for which <span class="math-container">$x_1 + y_1 = s$</span>. Show in general that, for an arbitrary <span class="math-container">$n \in \mathbf{N}$</span>, we can always find <span class="math-container">$x_n,y_n \in C_n$</span> for which <span class="math-container">$x_n + y_n = s$</span>.</p>
</blockquote>
<p><em><strong>Proof.</strong></em></p>
<p>(a) Let <span class="math-container">$s \in [0,2]$</span>. Then, at least one of (1) <span class="math-container">$s \in \left[0,\frac{1}{3}\right]$</span> (2) <span class="math-container">$s \in \left[\frac{1}{3},\frac{4}{3}\right]$</span> (3) <span class="math-container">$s \in \left[\frac{4}{3},2\right]$</span> holds. If (1) holds, then we pick <span class="math-container">$x_1,y_1 \in \left[0,\frac{1}{3}\right]$</span>. If (2) holds, we pick <span class="math-container">$x_1 \in \left[0,\frac{1}{3}\right]$</span>, <span class="math-container">$y_1 \in \left[\frac{2}{3},1\right]$</span>. If (3) holds, we may pick <span class="math-container">$x_1,y_1 \in \left[\frac{2}{3},1\right]$</span>.</p>
<p>Now, <span class="math-container">$C_n$</span> is given by</p>
<p><span class="math-container">$$
\left[0,\frac{1}{3^n}\right]\cup\left[\frac{2}{3^n},\frac{3}{3^n}\right]\cup\left[\frac{4}{3^n},\frac{5}{3^n}\right]\cup \ldots \cup \left[\frac{3^n-1}{3^n},1\right]
$$</span></p>
<p>If <span class="math-container">$x_n$</span> belongs to at least one of the above sets, then</p>
<p><span class="math-container">\begin{align*}
x_n \in \bigcup_{k}\left[\frac{2k}{3^n},\frac{2k+1}{3^n}\right]
\end{align*}</span></p>
<p>where <span class="math-container">$k \in \{0,1,2\ldots,\frac{3^n-1}{2}\}$</span>. Suppose, <span class="math-container">$y_n$</span> belongs to one of the above sets, then</p>
<p><span class="math-container">\begin{align*}
y_n \in \bigcup_{l}\left[\frac{2l}{3^n},\frac{2l+1}{3^n}\right]
\end{align*}</span></p>
<p>where <span class="math-container">$l \in \{0,1,2\ldots,\frac{3^n-1}{2}\}$</span>.</p>
<p>By definition, <span class="math-container">$x_n + y_n$</span> belongs to</p>
<p><span class="math-container">\begin{align*}
\bigcup_{k,l}\left[\frac{2k+2l}{3^n},\frac{2k+2l+2}{3^n}\right]=\left[0,\frac{2}{3^n}\right]\cup \left[\frac{2}{3^n},\frac{4}{3^n}\right] \cup \ldots \cup \left[\frac{2\cdot(3^n - 1)}{3^n},2\right]=[0,2]
\end{align*}</span></p>
<blockquote>
<p>(b) Keeping in mind that the sequences <span class="math-container">$(x_n)$</span> and <span class="math-container">$(y_n)$</span> do not necessarily converge, show how they can nevertheless be used to produce the desired <span class="math-container">$x$</span> and <span class="math-container">$y$</span> in <span class="math-container">$C$</span> satisfying <span class="math-container">$x + y = s$</span>.</p>
</blockquote>
<p><span class="math-container">$\star$</span> Any hints on how to proceed here would be super-helpful!</p>
| dan_fulea | 550,003 | <p>Here is - in essence - the same argument, written differently. Let <span class="math-container">$C$</span> be the <a href="https://en.wikipedia.org/wiki/Cantor_set" rel="nofollow noreferrer">Cantor set</a>. Each element <span class="math-container">$a$</span> in <span class="math-container">$C$</span> can be written in base <span class="math-container">$\color{blue}3$</span> in the form
<span class="math-container">$$
\begin{aligned}
a
&
= 0,a_1a_2a_3\dots_{\color{blue}{(3)}}
=\frac 13a_1+\frac 1{3^2}a_2+\frac 1{3^3}a_3 + \dots \ ,\qquad a_1,a_2,a_3\dots\in\{0,2\}\ ,\text{ so}
\\
b:=\frac a2
&= 0,b_1b_2b_3\dots_{\color{blue}{(3)}}
=\frac 13b_1+\frac 1{3^2}b_2+\frac 1{3^3}b_3 + \dots \ ,\qquad b_1,b_2,b_3\dots\in\{0,1\}\ .
\end{aligned}
$$</span>
So <span class="math-container">$b\in\frac 12C=:D$</span>.</p>
<p>Fix <span class="math-container">$x\in[0,2]$</span>, associate <span class="math-container">$y=\frac 12x$</span>. We need to show that it can be written as the sum of two elements in <span class="math-container">$D$</span>. Consider the digits of <span class="math-container">$y$</span> in base <span class="math-container">$3$</span>. For each digit <span class="math-container">$y_k\in\{0,1,2\}$</span> write it as <span class="math-container">$y_k=b_k+b'_k$</span> according to <span class="math-container">$0=0+0$</span>, <span class="math-container">$1=1+0$</span> (or <span class="math-container">$1=0+1$</span>, here we have a choice, but let it be always the one...), and <span class="math-container">$2=1+1$</span>. Build <span class="math-container">$b,b'\in D$</span> with these digits. Then <span class="math-container">$y=b+b'$</span>.</p>
<p>(Note: <span class="math-container">$y=1$</span> can be written as <span class="math-container">$0,22222\dots$</span> in base three, and the above procedure applies.)</p>
|
4,393,925 | <p>Is the integral of <span class="math-container">$\tan(x)\,\mathrm{d}x$</span> being equal to the negative <span class="math-container">$\ln$</span> of the absolute value of <span class="math-container">$\cos(x)$</span> the same as it being equal to the <span class="math-container">$\ln$</span> of the absolute value of <span class="math-container">$\sec(x)$</span>?
<span class="math-container">$$
\int\tan(x)\,\mathrm{d}x = -\ln\left|\cos(x)\right| + C \equiv \int\tan(x)\,\mathrm{d}x = \ln\left|\sec(x)\right|+C
$$</span></p>
| EvPlaysPiano | 1,031,001 | <p>So, when dealing with logarithms, you can use the following property: <span class="math-container">$\ln(\frac{1}x) = -\ln(x)$</span>. This also applies when working with reciprocal trig functions, such as cos and sec; so <span class="math-container">$-\ln|\cos(x)|$</span> is the same thing as <span class="math-container">$\ln|\frac{1}{\cos (x)}|$</span>, and we know that <span class="math-container">$\frac{1}{\cos (x)} = \sec(x)$</span>.</p>
|
714,000 | <p>Let's say:</p>
<p>Let $X = \{x_1, x_2, x_3, \ldots \}$ be a set of real numbers in the range $(R_1, R_2)$, and let $m$ be the mean of $X$.</p>
<p>If I have to increase the mean of the set $X$ by $3$, each number in the set can be increased by $3$.
But how can I increase the mean of $X$ by $3$ by changing only a subset of $X$? Is there any mathematical relation for this?</p>
| Semsem | 117,040 | <p>The sum of the <span class="math-container">$n$</span> elements is <span class="math-container">$s$</span>, and so <span class="math-container">$$s=nm$$</span> Now we want a new mean <span class="math-container">$M=m+3$</span>, so the new sum <span class="math-container">$S$</span> should be <span class="math-container">$$S=nM=nm+3n=s+3n$$</span> Thus you have to add <span class="math-container">$y_i$</span> to every element <span class="math-container">$x_i$</span> such that
<span class="math-container">$$\sum y_i=3n$$</span></p>
<h1>Added</h1>
<ul>
<li><p>you can construct another set <span class="math-container">$y_i$</span> of elements with mean <span class="math-container">$3$</span></p>
</li>
<li><p>you can add <span class="math-container">$3$</span> to every element</p>
</li>
<li><p>you can add <span class="math-container">$3n$</span> to one element</p>
</li>
</ul>
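<p>A quick numerical illustration of the last option (Python; the sample values are arbitrary):</p>

```python
# Raise the mean of a set by 3 while changing only a subset: the total
# increment over the changed elements must be 3n.  Here the whole 3n is
# put on a single element.
x = [2.0, 4.0, 6.0, 8.0]
n = len(x)
mean_before = sum(x) / n          # 5.0

x[0] += 3 * n                     # change one element by 3n = 12
mean_after = sum(x) / n

print(mean_before, mean_after)    # 5.0 8.0
```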
|
366,724 | <p>Suppose $D$ is an integral domain and that $\phi$ is a nonconstant function from $D$ to
the nonnegative integers such that $\phi(xy) = \phi(x)\phi(y)$. If $x$ is a unit in $D$, show that
$\phi(x) = 1$.</p>
| Math Gems | 75,092 | <p><strong>Hint</strong> $\, $ By multiplicativity, $\rm\:\phi\:$ preserves $1$ and divisibility, so it preserves divisors of $1$ (= units).</p>
<p>$\rm(1)\quad \phi\ $ preserves $\:1\!:\ $ apply $\rm\:\phi\:$ to $\rm\:1^2 = 1\ $ to deduce $\rm\ \phi(1) = 1.$</p>
<p>$\rm(2)\quad \phi\ $ preserves divisibility: $\rm\,\ a\mid c\:\Rightarrow\:ab=c\:\Rightarrow\:\phi(a)\,\phi(b)=\phi(c)\:\Rightarrow\:\phi(a)\mid \phi(c)$</p>
<p>$\rm(3)\quad \phi\ $ preserves units: $\rm\,\ u\ $ unit $\rm\:\Rightarrow\:u\mid 1\,\ \smash{\stackrel{(2)}{\Rightarrow}}\,\ \phi(u)\mid \phi(1)\ \smash{\stackrel{(1)}{=}}\ 1\:\Rightarrow\:\phi(u)\:$ unit</p>
|
717,980 | <p>In a certain 2-player game, the winner is determined by rolling a single 6-sided die in turn, until a
6 is shown, at which point the game ends immediately. Now, suppose that k dice are now rolled simultaneously by each player on his turn, and the
first player to obtain a total of k (or more) 6’s, accumulated over all his throws, wins the
game. (For example, if k = 3, then player 1 will throw 3 dice, and keep track of any 6’s that
show up. If player 1 did not get all 6’s then player 2 will do the same. Assuming that player
1 gets another turn, he will again throw 3 dice, and any 6’s that show up will be added to
his previous total). Compute the expected number of turns that will be needed to complete the game.</p>
<p>I've analysed this problem as follows: The problem can be modeled by a negative binomial distribution with probability $p=\frac{1}{6}$. Now, X is a random variable representing the number of dice being thrown. I need to find the tail probability $Pr[X\geq k]$ (the complement of the <a href="http://en.wikipedia.org/wiki/Cumulative_distribution_function">cdf</a>), and then find the expectation via the tail-sum formula $E[X] = \sum_{k=1}^\infty Pr[X\geq k]$. The problem here is that the cdf of a negative binomial distribution is a regularized beta function and this is quite messy to deal with. I'm wondering whether there is another way to approach this problem that wouldn't involve that? </p>
| JSS | 136,447 | <p>It might be easier to approach this problem with generating functions.</p>
<p>When you roll a die, you have 1/6 chance to roll a 6, and a 5/6 chance to roll any other face. Let a represent rolling a 6, and let b represent rolling any other face. </p>
<p>Rolling a single die, the outcomes may be represented as $({a\over 6} + {5b\over 6})$ </p>
<p>For k dice, we have $p(a)=({a\over 6} + {5b\over 6})^k$</p>
<p>Each monomial of the expansion will be of the form $a^{r_1}b^{r_2}$, such that $r_1+r_2=k$. The coefficient gives the probability of obtaining exactly $r_1$ 6's and $r_2$ not-6's. Since we don't earn any points when we don't roll a 6, we can ignore these outcomes by setting $b=1$.</p>
<p>$p(a)=({a\over 6} + {5\over 6})^k$</p>
<p>We can obtain the expected number of 6's <em>per turn</em> by first differentiating with respect to a, and then evaluating it at a=1. If we want to score w points to win, we divide w by the expected points per turn to obtain the expected number of personal turns a player should need before winning.</p>
<p>If you meant how many <em>cumulative</em> turns we should expect to complete the game: if each player expects $t$ turns to win, and one player goes first, presumably the game should take $2t-1$ turns total to complete. Alternately, manipulate the generating function to account for expected cumulative totals.</p>
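As a quick sanity check of the derivative recipe above (my own sketch, not part of the original answer): the expected number of 6's per turn with $k$ dice is $k/6$, whether read off from $\frac{d}{da}\left(\frac{a}{6}+\frac{5}{6}\right)^k$ at $a=1$ or summed directly from the binomial pmf.

```python
from fractions import Fraction
from math import comb

def expected_sixes(k):
    # E[number of 6's in one turn of k dice], summed from the binomial pmf;
    # it equals d/da (a/6 + 5/6)^k evaluated at a = 1, i.e. k/6
    p = Fraction(1, 6)
    return sum(r * comb(k, r) * p**r * (1 - p)**(k - r) for r in range(k + 1))

print(expected_sixes(3))  # 1/2: with 3 dice a player expects half a point per turn
```

So with $k=3$ dice and a target of $w$ points, a player expects about $w/(k/6)=2w$ personal turns.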
|
696,869 | <p>Question:
Show that if a square matrix $A$ satisfies the equation $A^2 + 2A + I = 0$, then $A$ must be invertible.</p>
<p>My work: Based on the section I read, I will treat I to be an identity matrix, which is a $1 \times 1$ matrix with a $1$ or as an square matrix with main diagonal is all ones and the rest is zero. I will also treat the $O$ as a zero matrix, which is a matrix with all zeros.</p>
<p>So the question wants me to show that the square matrix $A$ will make the following equation true. Okay so I pick $A$ to be $[-1]$, a $1 \times 1$ matrix with a $-1$ inside. This was out of pure luck.</p>
<p>This makes $A^2 = [1]$. This makes $2A = [-2]$. The identity matrix is $[1]$.</p>
<p>$1 + -2 + 1 = 0$. I satisfied the equation with my choice of $A$ which makes my choice of the matrix $A$ an invertible matrix.</p>
<p>I know matrix $A *$ the inverse of $A$ is the identity matrix.</p>
<p>$[-1] * inverse = [1]$. So the inverse has to be $[-1]$.</p>
<p>So the inverse of $A$ is $A$. </p>
<p>It looks right mathematically speaking. </p>
<p>Can anyone tell me how they would pick the square matrix $A$? I picked mine out of pure luck. </p>
| Max | 130,322 | <p>(an alternative to your approach)</p>
<p>if $0=A^{2}+2A+I=\left(A+I\right)\left(A-I\right)$you have that the set $\left\{+1,-1\right\}$ contains all the eigenvalues of $A$. thus $0$ is not an eigenvalue of $A$ and this is equivalent to being invertible.</p>
<p>Edit:</p>
<p>as LutzL has commented correctly, $A-I$ has to be replaced by $A+I$ and thus $\left\{-1,+1\right\}$ by $\left\{-1\right\}$. The arguments still work.</p>
|
696,869 | <p>Question:
Show that if a square matrix $A$ satisfies the equation $A^2 + 2A + I = 0$, then $A$ must be invertible.</p>
<p>My work: Based on the section I read, I will treat I to be an identity matrix, which is a $1 \times 1$ matrix with a $1$ or as an square matrix with main diagonal is all ones and the rest is zero. I will also treat the $O$ as a zero matrix, which is a matrix with all zeros.</p>
<p>So the question wants me to show that the square matrix $A$ will make the following equation true. Okay so I pick $A$ to be $[-1]$, a $1 \times 1$ matrix with a $-1$ inside. This was out of pure luck.</p>
<p>This makes $A^2 = [1]$. This makes $2A = [-2]$. The identity matrix is $[1]$.</p>
<p>$1 + -2 + 1 = 0$. I satisfied the equation with my choice of $A$ which makes my choice of the matrix $A$ an invertible matrix.</p>
<p>I know matrix $A *$ the inverse of $A$ is the identity matrix.</p>
<p>$[-1] * inverse = [1]$. So the inverse has to be $[-1]$.</p>
<p>So the inverse of $A$ is $A$. </p>
<p>It looks right mathematically speaking. </p>
<p>Can anyone tell me how they would pick the square matrix $A$? I picked mine out of pure luck. </p>
| nmasanta | 623,924 | <p>Here you have to show <strong><span class="math-container">$A$</span> is invertible</strong>. For this only you have to show <span class="math-container">$ det (A) \neq 0$</span>. </p>
<p>Now since <span class="math-container">$A$</span> satisfies the polynomial equation <span class="math-container">$ x^{2} + 2x + 1 = 0$</span> (as <span class="math-container">$ A^{2} + 2 A + I = 0$</span> is given), which contains non-zero constant term <span class="math-container">$1$</span> so <span class="math-container">$ det (A) = 1 \neq 0$</span>. Therefore <span class="math-container">$A$</span> has an inverse. [Q.E.D]</p>
<p>** Now if you have to find <span class="math-container">$A^{-1}$</span>, then you can proceed as follows</p>
<p><span class="math-container">$A^{2} +2 A + I = 0 $</span> .......(1)</p>
<p>Since <em>A has an inverse</em>, so multiplying both side of equation (1) by <span class="math-container">$A^{-1}$</span> we have</p>
<p><span class="math-container">$A^{-1} (A^{2} + 2 A + I) = A^{-1}. 0 \implies A^{-1} . A^{2} +2 A^{-1}. A + A^{-1} .I=0 \implies A + 2I + A^{-1}=0 \implies A^{-1} = -A-2I $</span></p>
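As an illustration (my own example, not part of the original answer): the non-diagonal matrix $A$ below satisfies $A^{2}+2A+I=0$, and $-A-2I$ is indeed its inverse. The helper `matmul` is just for the demo.

```python
def matmul(X, Y):
    # product of two 2x2 matrices given as nested lists
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[-1, 1], [0, -1]]                  # satisfies (A + I)^2 = 0
I2 = [[1, 0], [0, 1]]

A2 = matmul(A, A)
lhs = [[A2[i][j] + 2 * A[i][j] + I2[i][j] for j in range(2)] for i in range(2)]
assert lhs == [[0, 0], [0, 0]]          # A^2 + 2A + I = 0

Ainv = [[-A[i][j] - 2 * I2[i][j] for j in range(2)] for i in range(2)]
assert matmul(A, Ainv) == I2            # A * (-A - 2I) = I
print("inverse:", Ainv)                 # [[-1, -1], [0, -1]]
```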
|
4,220,518 | <p>A point is moving along the curve <span class="math-container">$y = x^2$</span> with unit speed. What is the magnitude of its acceleration at the point <span class="math-container">$(1/2, 1/4)$</span>?</p>
<p>My approach : I use the parametric equation <span class="math-container">$s(t) = (ct, c^2t^2)$</span>, then <span class="math-container">$v(t) = s'(t) = (c, 2c^2t)$</span> and <span class="math-container">$a(t) = v'(t) = (0, 2c^2)$</span>. Now the point <span class="math-container">$(1/2, 1/4)$</span> is reached at time <span class="math-container">$t = \frac{1}{2c}$</span>, so <span class="math-container">$v(\frac{1}{2c}) = (c, c)$</span>. Now the unit speed condition gives us <span class="math-container">$\sqrt{c^2 + c^2} = 1 \implies c = \frac{1}{\sqrt{2}}$</span>. So the magnitude of acceleration is <span class="math-container">$2c^2 = 1$</span>.</p>
<p>But the answer is <span class="math-container">$\frac{1}{\sqrt{2}}$</span>. Can someone help me what is wrong in my approach.</p>
| David Quinn | 187,299 | <p>You can use Cartesian coordinates for this.
You have <span class="math-container">$$y=x^2\implies \dot{y}=2x\dot{x}$$</span>
Therefore at <span class="math-container">$x=\frac12$</span>, <span class="math-container">$\dot{y}=\dot{x}$</span></p>
<p>For constant speed we have <span class="math-container">$${\dot{x}}^2+{\dot{y}}^2=1$$</span>
Hence at <span class="math-container">$x=\frac12$</span>, <span class="math-container">$\dot{y}=\dot{x}=\frac{1}{\sqrt{2}}$</span></p>
<p>Differentiating the equation of the curve with respect to time again, we have <span class="math-container">$$\ddot{y}=2{\dot{x}}^2+2x\ddot{x}$$</span></p>
<p>Therefore at <span class="math-container">$x=\frac12$</span>, we have <span class="math-container">$$\ddot{y}=1+\ddot{x}$$</span></p>
<p>Finally, differentiating the constant speed condition gives <span class="math-container">$$2\dot{x}\ddot{x}+2\dot{y}\ddot{y}=0\implies \ddot{x}=-\ddot{y}$$</span></p>
<p>So <span class="math-container">$\ddot{y}=\frac12$</span> and <span class="math-container">$\ddot{x}=-\frac12$</span> and hence the result.</p>
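The relations above can be verified numerically at $x=\tfrac12$ (a sketch; only the equations from this answer are used):

```python
from math import isclose, sqrt

x = 0.5
xd = yd = 1 / sqrt(2)          # unit speed forces y' = x' = 1/sqrt(2) at x = 1/2
ydd, xdd = 0.5, -0.5           # the accelerations found above

assert isclose(yd, 2 * x * xd)                 # y = x^2 differentiated once
assert isclose(xd**2 + yd**2, 1.0)             # unit speed
assert isclose(ydd, 2 * xd**2 + 2 * x * xdd)   # y = x^2 differentiated twice
assert isclose(xd * xdd + yd * ydd, 0.0, abs_tol=1e-12)  # speed stays constant
print(sqrt(xdd**2 + ydd**2))   # 0.7071... = 1/sqrt(2)
```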
|
1,041,226 | <p>I need to prove the following: ${n\choose m-1}+{n\choose m}={n+1\choose m}$, $1\leq m\leq n$.</p>
<p>With the definition: ${n\choose m}= \left\{
\begin{array}{ll}
\frac{n!}{m!(n-m)!} & \textrm{für \(m\leq n\)} \\
0 & \textrm{für \(m>n\)}
\end{array}
\right.$</p>
<p>and $n,m\in\mathbb{N}$.</p>
<p>I'm not really used to calculations with factorials and can't make much sense from it...</p>
| ajotatxe | 132,456 | <p>$$\begin{align}
\binom n{m-1}+\binom nm&=\frac{n!}{(m-1)!(n-m+1)!}+\frac{n!}{m!(n-m)!}\\
&=\frac{n!m+n!(n-m+1)}{m!(n-m+1)!}\\
&=\frac{n!(m+n-m+1)}{m!(n-m+1)!}\\
&=\frac{n!(n+1)}{m!(n-m+1)!}\\
&=\frac{(n+1)!}{m!(n-m+1)!}=\binom{n+1}m
\end{align}$$</p>
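The identity is also easy to spot-check by brute force (a quick sketch using Python's built-in binomial):

```python
from math import comb

# Pascal's rule: C(n, m-1) + C(n, m) = C(n+1, m) for all 1 <= m <= n
assert all(comb(n, m - 1) + comb(n, m) == comb(n + 1, m)
           for n in range(1, 40) for m in range(1, n + 1))
print(comb(5, 2) + comb(5, 3), comb(6, 3))   # 20 20
```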
|
1,041,226 | <p>I need to prove the following: ${n\choose m-1}+{n\choose m}={n+1\choose m}$, $1\leq m\leq n$.</p>
<p>With the definition: ${n\choose m}= \left\{
\begin{array}{ll}
\frac{n!}{m!(n-m)!} & \textrm{für \(m\leq n\)} \\
0 & \textrm{für \(m>n\)}
\end{array}
\right.$</p>
<p>and $n,m\in\mathbb{N}$.</p>
<p>I'm not really used to calculations with factorials and can't make much sense from it...</p>
| Community | -1 | <p>Simpler still: divide the numerator and the denominator by $\frac{n!}{m!\,(n-m+1)!}$, which cancels $n!$, $m!$ and $(n-m+1)!$ all at once:
$$\frac{{n\choose m-1}+{n\choose m}}{n+1\choose m}=\frac{\frac{n!}{(m-1)!(n-m+1)!}+\frac{n!}{m!(n-m)!}}{\frac{(n+1)!}{m!(n-m+1)!}}=
\frac{m+(n-m+1)}{n+1}=1.$$</p>
|
1,517,502 | <p>Point P$(x, y)$ moves in such a way that its distance from the point $(3, 5)$ is proportional to its distance from the point $(-2, 4)$. Find the locus of P if the origin is a point on the locus.</p>
<p><strong>Answer</strong>:</p>
<p>$$(x-3)^2 + (y-5)^2 = (x+2)^2 + (y-4)^2$$
or, $$10x+2y-14=0$$
or, $$5x+y-7=0$$</p>
<p>but answer given is $$7x^2+7y^2+128x-36y=0$$</p>
| SchrodingersCat | 278,967 | <p>$$(x-3)^2 + (y-5)^2 = k\left[ (x+2)^2 + (y-4)^2 \right]$$ where $k$ is a constant</p>
<p>Now $(0,0)$ lies on the locus. <br>
Therefore $$9+25=k(4+16) \Rightarrow k=\frac{34}{20} = \frac{17}{10}$$</p>
<p>Using this value of $k$ in the equation, we get
$$(x-3)^2 + (y-5)^2 = \frac{17}{10}\left[ (x+2)^2 + (y-4)^2 \right]$$
$$10\left[(x-3)^2 + (y-5)^2 \right]= 17\left[ (x+2)^2 + (y-4)^2 \right]$$
$$10(x^2-6x+9+y^2-10y+25)=17(x^2+4x+4+y^2-8y+16)$$
$$7x^2+7y^2+128x-36y=0$$</p>
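One can confirm the expansion by checking that the two sides agree identically at arbitrary sample points (a numerical sketch; the sample points are mine):

```python
from math import isclose

def lhs(x, y):
    # 10 * dist((x,y),(3,5))^2 - 17 * dist((x,y),(-2,4))^2
    return 10 * ((x - 3)**2 + (y - 5)**2) - 17 * ((x + 2)**2 + (y - 4)**2)

def rhs(x, y):
    # the published locus, with a sign flip from moving all terms to one side
    return -(7 * x**2 + 7 * y**2 + 128 * x - 36 * y)

for x, y in [(0.0, 0.0), (1.5, -2.0), (-3.0, 4.25), (10.0, 7.0)]:
    assert isclose(lhs(x, y), rhs(x, y))
assert lhs(0.0, 0.0) == 0.0   # the origin does lie on the locus
```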
|
1,517,502 | <p>Point P$(x, y)$ moves in such a way that its distance from the point $(3, 5)$ is proportional to its distance from the point $(-2, 4)$. Find the locus of P if the origin is a point on the locus.</p>
<p><strong>Answer</strong>:</p>
<p>$$(x-3)^2 + (y-5)^2 = (x+2)^2 + (y-4)^2$$
or, $$10x+2y-14=0$$
or, $$5x+y-7=0$$</p>
<p>but answer given is $$7x^2+7y^2+128x-36y=0$$</p>
| Piquito | 219,998 | <p>The same answer as Aniket's. You have a circle centered at the point $(-\frac{64}{7},\frac{18}{7})$ with radius $\frac{2\sqrt{5\cdot13\cdot17}}{7}$.</p>
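Completing the square confirms these values exactly (a sketch in exact rational arithmetic):

```python
from fractions import Fraction as F

# 7x^2 + 7y^2 + 128x - 36y = 0, divided through by 7, squares completed
cx, cy = -F(128, 7) / 2, F(36, 7) / 2   # center coordinates
r2 = cx**2 + cy**2                      # radius^2 from completing the square

assert (cx, cy) == (F(-64, 7), F(18, 7))
assert r2 == F(4 * 5 * 13 * 17, 7**2)   # so r = 2*sqrt(5*13*17)/7
```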
|
1,032,926 | <p>I'm trying to understand this proof of the following Lemma, that I found in an article on Existence of Eigenvalues and Eigenvectors, but I don't understand the following step:</p>
<p><em>Let $V$ be a finite-dimensional complex vector space, $v\in V$ and $c\gt 0$. Since for every $v\in V\setminus \{ 0 \}$ and $k\in\mathbb C$ we have $||T(v) - kv||\ge c||v||$, then for every $k\in\mathbb C$ the operator $T - kI$ is one-to-one.</em></p>
<p>Why does this guarantee injectivity of $T-kI$?</p>
| Martín-Blas Pérez Pinilla | 98,199 | <p>$$T(v) - kv =0\implies 0=\|T(v) - kv\| \ge c \|v\|\implies v=0.$$</p>
|
<p>As the title says, if I have a list:</p>
<pre><code>{"", "", "", "2$70", ""}
</code></pre>
<p>I will expect:</p>
<pre><code>{"", "", "", "2$70", "2$70"}
</code></pre>
<p>If I have</p>
<pre><code>{"", "", "", "3$71", "", "2$72", ""}
</code></pre>
<p>then:</p>
<pre><code>{"", "", "", "3$71", "3$71", "2$72", "2$72"}
</code></pre>
<p>And </p>
<pre><code>{"", "", "", "3$71", "","", "2$72", ""}
</code></pre>
<p>should give </p>
<pre><code>{"", "", "", "3$71", "3$71", "", "2$72", "2$72"}
</code></pre>
<p>This is my try:</p>
<pre><code>{"", "", "", "2$70", ""} /. {p : Except["", String], ""} :> {p, p}
</code></pre>
<p>But I don't know why it doesn't work; my pattern-matching skills are poor. Can anybody give some advice?</p>
| Carl Woll | 45,431 | <p>In place modification of a list works well here:</p>
<pre><code>fc[list_] := Module[{out=list},
With[{empty = Pick[Range[2,Length@list], Rest@list,""]},
out[[empty]]=out[[empty-1]]
];
out
]
</code></pre>
<p>A comparison with Mr Wizard's fn:</p>
<pre><code>data = RandomChoice[{"","","","a","b"}, 10^5];
r1 = fn[data]; //AbsoluteTiming
r2 = fc[data]; //AbsoluteTiming
r1 === r2
</code></pre>
<blockquote>
<p>{0.049858,Null}</p>
<p>{0.01092,Null}</p>
<p>True</p>
</blockquote>
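For readers outside Mathematica, here is a rough Python transliteration of the same idea (my own sketch, not part of the original answer). Like the vectorized part assignment above, it reads the fill values from the original list, so a run of empty strings is filled only one step, matching the examples in the question:

```python
def fill_once(xs):
    # copy xs[i-1] into position i wherever xs[i] == ""; values are read
    # from the ORIGINAL list, so consecutive empties advance only one step
    out = list(xs)
    for i in range(1, len(out)):
        if out[i] == "":
            out[i] = xs[i - 1]
    return out

print(fill_once(["", "", "", "3$71", "", "", "2$72", ""]))
# ['', '', '', '3$71', '3$71', '', '2$72', '2$72']
```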
|
51,096 | <p>Is it possible to have a countable infinite number of countable infinite sets such that no two sets share an element and their union is the positive integers?</p>
| Asaf Karagila | 622 | <p>Easily.</p>
<p>Let $P=\{2,3,5,7,11,\ldots\}$ be the set of all prime numbers. For each prime $p\in P$ let $A_p = \{p^n\mid n\in\mathbb N, n\neq 0\}$, and take $A_1 = \{n\in\mathbb N\mid n \text{ has at least two different prime divisors}\}\cup\{0,1\}$ (note that $0$ and $1$ must be added by hand, as they are not prime powers).</p>
<p>The sets are pairwise disjoint, and every natural number appears in exactly one of them.</p>
<p>On the other hand, it is possible to show that you cannot split $\mathbb N$ into more than a countable number of disjoint non-empty sets.</p>
<p>Suppose $A_i$ for $i\in I$, for some indices set $I$, is a partition of $\mathbb N$ to disjoint sets. Map each $i\in I$ to the minimal $n\in A_i$.</p>
<p>Since $A_i\cap A_j=\varnothing$ we have that the minimal element of $A_i$ cannot be in $A_j$, let alone its minimal element. Since every $A_i$ is non-empty then it <em>has</em> a minimal element. Therefore we have an injective mapping from $I$ into $\mathbb N$ and thus $I$ is countable (countably infinite, or finite in this case).</p>
<hr>
<p>Another fine example is to take your favourite bijection between $\mathbb N$ and $\mathbb N\times\mathbb N$, call it $f$. Now take $A_n=f^{-1}[\{n\}\times\mathbb N]$, then $A_n\cap A_m=\varnothing$ for $n\neq m$ and easily enough this is a partition as you like. This can be generalized to any cardinal number such that $|A|=|A\times A|$.</p>
|
184,361 | <p>I'm doing some exercises on Apostol's calculus, on the floor function. Now, he doesn't give an explicit definition of $[x]$, so I'm going with this one:</p>
<blockquote>
<p><strong>DEFINITION</strong> Given $x\in \Bbb R$, the integer part of $x$ is the unique $z\in \Bbb Z$ such that $$z\leq x < z+1$$ and we denote it by $[x]$.</p>
</blockquote>
<p>Now he asks to prove some basic things about it, such as: if $n\in \Bbb Z$, then $[x+n]=[x]+n$</p>
<p>So I proved it like this: Let $z=[x+n]$ and $z'=[x]$. Then we have that</p>
<p>$$z\leq x+n<z+1$$</p>
<p>$$z'\leq x<z'+1$$</p>
<p>Then $$z'+n\leq x+n<z'+n+1$$</p>
<p>But since $z'$ is an integer, so is $z'+n$. Since $z$ is unique, it must be that $z'+n=z$.</p>
<p>However, this doesn't seem to get me anywhere to prove that
$$\left[ {2x} \right] = \left[ x \right] + \left[ {x + \frac{1}{2}} \right]$$</p>
<p>in and in general that </p>
<p>$$\left[ {nx} \right] = \sum\limits_{k = 0}^{n - 1} {\left[ {x + \frac{k}{n}} \right]} $$</p>
<p>Obviously one could do an informal proof thinking about "the carries", but that's not the idea, let alone how tedious it would be. Maybe there is some easier or clearer characterization of $[x]$ in terms of $x$ to work this out.</p>
<p>Another property is
$$[-x]=\begin{cases}-[x]\text{ ; if }x\in \Bbb Z \cr-[x]-1 \text{ ; otherwise}\end{cases}$$</p>
<p>I argue: if $x\in\Bbb Z$, it is clear $[x]=x$. Then $-[x]=-x$, and $-[x]\in \Bbb Z$ so $[-[x]]=-[x]=[-x]$. For the other, I guess one could say:</p>
<p>$$n \leqslant x < n + 1 \Rightarrow - n - 1 < x \leqslant -n$$</p>
<p>and since $x$ is not an integer, this should be the same as $$ - n - 1 \leqslant -x < -n$$</p>
<p>$$ - n - 1 \leqslant -x < (-n-1)+1$$</p>
<p>So $[-x]=-[x]-1$</p>
| Bill Dubuque | 242 | <p>Both sides are equal since they count the same set: the RHS counts naturals <span class="math-container">$\rm\:\le n\:x\:$</span>. The LHS counts them in a unique mod <span class="math-container">$\rm\ n\ $</span> representation, <span class="math-container">$\:$</span> viz. <span class="math-container">$\rm\ \: j \:\le\: x+k/n\: \iff \ j\:n-k \:\le\: n\:x\:,\ \ j>0 \le k < n\:$</span>.</p>
<p><strong>REMARK</strong> <span class="math-container">$\:$</span> That every natural has a unique representation of form <span class="math-container">$\rm \: j\:n-k \ \ \:$</span> where <span class="math-container">$\rm\ \ \: j>0 \le k < n\ \ \ $</span> is simply a slight variant of the <strong>Division Algorithm</strong> where one utilizes negative (vs. positive) remainders.<span class="math-container">$\ \ $</span> To derive this negative form, simply perform the following transformation on the positive remainder form <span class="math-container">$\rm\ q\: n + r\ \to\ (q+1)\:n + r-n\ $</span> if <span class="math-container">$\rm\ r\ne 0\:$</span>, i.e. inc the quotient, dec the remainder by the dividend.</p>
<p>Thus the result is equivalent to the Division Algorithm, whose normal proof is indeed by <em>induction</em>. One could give a direct inductive proof of the result if, instead of invoking the Division Algorithm by name, one unwinds or inlines this inductive proof directly into the proof of the result - much as the same way that the classic <a href="http://www.math.uiuc.edu/%7Edan/ShortProofs/uf.html" rel="noreferrer">Lindenmann - Zermelo direct proof of unique factorization of naturals</a> inlines a division / Euclidean algorithm based descent proof of the fundamental Prime Divisor property <span class="math-container">$\rm\ p|ab\ \Rightarrow\ p|a\ \ or\ \ p|b\:$</span>.</p>
|
70,143 | <p>Is there a good way to find the fan and polytope of the blow-up of $\mathbb{P}^3$ along the union of two invariant intersecting lines?</p>
<p>Everything I find in the literature is for blow-ups along smooth invariant centers.</p>
<p>Thanks!</p>
| Sasha | 4,428 | <p>If you are interested not in the toric picture, but in the structure of the variety itself, the following approach is very useful. Note that the ideal of the pair of lines has the following resolution on $P^3$:
$$
0 \to O(-3) \to O(-2) \oplus O(-1) \to I \to 0.
$$
Hence the graded algebra $\oplus I^k$ is the quotient of the symmetric algebra $\oplus S^k(O(-2) \oplus O(-1))$. This means that the blowup is a subscheme in $Proj(\oplus S^k(O(-2) \oplus O(-1))) = P_{P^3}(O(2) \oplus O(1))$. Moreover, the equation of the blowup in this projectivization of the vector bundle is given by the embedding $O(-3) \to O(-2) \oplus O(-1)$ in the resolution above. Namely, the map $O(-3) \to O(-2) \oplus O(-1)$ can be thought of as a global section of the line bundle $L\otimes \pi^*O(3)$, where $L$ is the Grothendieck relative $O(1)$ bundle on the projectivization and $\pi:P_{P^3}(O(2) \oplus O(1)) \to P^3$
is the projection. </p>
<p>So, we conclude that the blowup is the zero locus of a section of a line bundle $L\otimes \pi^*O(3)$ on $P_{P^3}(O(2) \oplus O(1))$.</p>
|
70,143 | <p>Is there a good way to find the fan and polytope of the blow-up of $\mathbb{P}^3$ along the union of two invariant intersecting lines?</p>
<p>Everything I find in the literature is for blow-ups along smooth invariant centers.</p>
<p>Thanks!</p>
| Allen Knutson | 391 | <p>Since you already know how to blow up along either ${\mathbb P}^1$ individually, we can concentrate on what's happening nearby the intersection. Which means we can work affinely.</p>
<p>Then the polyhedron for ${\mathbb A}^3$ is the octant $({\mathbb R}_{\geq 0})^3$, and your two lines correspond to two of the three edges.
To blow them up, take a carpenter's plane to those two edges, and shave them off <em>exactly the same amount.</em> The result will have an edge connecting two vertices, one of degree 3, one of degree 4 (the isolated singularity on the blowup). </p>
<p>In coordinates, the polyhedron is {$(x,y,z) : x,y,z \geq 0, x+y \geq 1, x+z \geq 1$}.</p>
<p>If you want the blowup of ${\mathbb P}^3$ not ${\mathbb A}^3$, also impose $x+y+z \leq N$ for some large $N$ (three will do). </p>
|
609,245 | <p>Is it always possible to extend a linearly independent set to a basis in infinite dimensional vector space?</p>
<p>I was proving with the following argument:
If $S$ is a linearly independent set and it spans the vector space, we are done; otherwise we keep adding elements so that the resulting set remains linearly independent, until it spans the vector space. But the problem is: how can we guarantee that the process will stop?</p>
| Quimey | 10,443 | <p>Let $V$ be a vector space, $S\subseteq V$ a linearly independent subset and $\mathcal{A}=\{T\subseteq V: S\subseteq T \text{ and } T \text{ is linearly independent}\}$. It is easy to see that any chain in $\mathcal{A}$ has an upper bound in $\mathcal{A}$ (we can take the union). Then, it follows from Zorn's lemma that $\mathcal{A}$ has a maximal element $R$. If $\langle R\rangle\neq V$ then we can consider $R\cup\{v\}$ for some $v\notin \langle R\rangle$ and we obtain an element of $\mathcal{A}$ which is greater than a maximal element. The contradiction comes from our assumption that $\langle R\rangle\neq V$. So, we must have $\langle R\rangle = V$.</p>
|
609,245 | <p>Is it always possible to extend a linearly independent set to a basis in infinite dimensional vector space?</p>
<p>I was proving with the following argument:
If $S$ is a linearly independent set and it spans the vector space, we are done; otherwise we keep adding elements so that the resulting set remains linearly independent, until it spans the vector space. But the problem is: how can we guarantee that the process will stop?</p>
| K. Carrillo | 544,243 | <p>Let $V$ be a vector space and $W\leq V$ a subspace of $V$ with basis $Y$. Considering the quotient space $V/W$, by Zorn's lemma we can obtain a basis of $V/W$; denote it $\overline{S}$, where $S\subseteq V$ is a set of representatives.</p>
<p>If $v\in V$, then $\overline{v}=\beta_1\overline{\alpha_1}+\cdots+\beta_k\overline{\alpha_k}$
where $\alpha_i\in S$, so $v-(\beta_1\alpha_1+\cdots+\beta_k\alpha_k)\in W$ and therefore $v=\beta_1\alpha_1+\cdots+\beta_k\alpha_k+\gamma_1\zeta_1+\cdots+\gamma_r\zeta_r$, where $\zeta_i\in Y$. Thus $S$ (yes, without the bar) and $Y$ together generate $V$.</p>
<p>Finally, with the same notation as above, if $\{{\alpha_1,\cdots,\alpha_k}\}\subseteq S$ and $\{{\zeta_1,\cdots,\zeta_r}\}\subseteq Y$, the equation $$\beta_1\alpha_1+\cdots+\beta_k\alpha_k+\gamma_1\zeta_1+\cdots+\gamma_r\zeta_r=0$$ forces every scalar to be zero. Indeed, the equation implies that $$\beta_1\alpha_1+\cdots+\beta_k\alpha_k\in W,$$ so $\beta_1\overline{\alpha_1}+\cdots+\beta_k\overline{\alpha_k}=\overline{0}$ and it follows that each $\beta_i=0$; then by linear independence of $\{{\zeta_1,\cdots,\zeta_r}\}$ each $\gamma_j=0$. Thus $\{{\alpha_1,\cdots,\alpha_k,\zeta_1,\cdots,\zeta_r}\}$ is linearly independent, and so $Y$ can be extended to a basis of $V$.</p>
<p>P.S.: I'm sorry, English is not my mother tongue.</p>
|
2,698,553 | <p>Is the natural ring morphism $\mathbb{C}\otimes_{\mathbb{Z}}\mathbb{C}\to\mathbb{C}\otimes_{\mathbb{Q}}\mathbb{C}$ an isomorphism?</p>
<p>In other words, is there a $\mathbb Z$-linear map $f:\mathbb{C}\otimes_{\mathbb{Q}}\mathbb{C}\to\mathbb{C}\otimes_{\mathbb{Z}}\mathbb{C}$ such that
$$
f(z\otimes w)=z\otimes w
$$
for all $z,w\in\mathbb{C}$? (Note that the two occurrences of $z\otimes w$ in the above display have different meanings.)</p>
| Angina Seng | 436,618 | <p>If $U$ and $V$ are vector spaces over $\Bbb Q$, then $U\otimes_\Bbb Z V$
and $U\otimes_\Bbb Q V$ are always isomorphic. To see this,
$$U\otimes_\Bbb Z V\cong U\otimes_\Bbb Q\Bbb Q\otimes_\Bbb Z V
\cong U\otimes_\Bbb Q V.$$
At the last step we use $\Bbb Q\otimes_\Bbb Z\Bbb Q\cong\Bbb Q$, together with
the fact that $V$ has a basis as a $\Bbb Q$-vector space and
that tensor products commute with direct sums.</p>
|
1,216,302 | <p>I am looking for some sequence of random variables $(X_n)$ such that </p>
<p>$$ \lim_{n \rightarrow \infty} E(X_n^2) = 0 $$</p>
<p>but such that the following <strong>almost sure</strong> convergence does <strong>NOT</strong> hold:</p>
<p>$$ \frac{S_n - E(S_n)}{n} \rightarrow 0$$</p>
<p>where the $S_n$ are the partial sums of the $X_n$.</p>
<p>Note: for any such sequence the convergence in probability will <em>always</em> hold; if the random variables are not correlated, so will the convergence almost surely. In particular, any counterexample must consist of correlated random variables.</p>
<p>Many thanks for your help.</p>
| D. Thomine | 20,413 | <p>Here is an algorithm which gives you such a sequence. Let us work on the probabilised space $[0,1)$ with the Lebesgue measure.</p>
<p>For all $0 \leq k < n$, let $I_{k,n} := [k/n, (k+1)/n)$. Fix $\varepsilon \in (0,1)$</p>
<p>Start from $n = 1$, $k=0$, time $N=0$.</p>
<p>If $S_N < \varepsilon N$ on $I_{k,n}$, take $X_N = 1_{I_{k,n}}$. </p>
<p>Else : </p>
<ul>
<li><p>if $k < n-1$ : increment $k$ by $1$.</p></li>
<li><p>if $k = n-1$ : increment $n$ by $1$, put $k=0$.</p></li>
</ul>
<p>Rinse and repeat, incrementing $N$ by $1$.</p>
<p>Now, for all $k,n$, we only need a finite time before $S_N \geq \varepsilon N$ on $I_{k,n}$ (the times at which these conditions are satisfied successively grow exponentially, though). Hence we will eventually increment $k$, and then $n$. Since any point in $[0,1)$ is in infinitely many $I_{k,n}$, that means that almost surely, $S_N \geq \varepsilon N$ for infinitely many $N$.</p>
<p>On the other hand, $\mathbb{E} (X_N) = \mathbb{E} (X_N^2)$ will converge to $0$. Hence, $\mathbb{E} (S_N)$ grows sub-linearly, so that almost surely, $S_N - \mathbb{E} (S_N) \geq \varepsilon N/2$ for infinitely many $N$s.</p>
|
1,636,207 | <p>I understand the basics of Cartesian products, but I'm not sure how to handle a set inside of a set like $C = \{\{1,2\}\}$. Do I simply include the set as an element, or do I break it down?</p>
<p>If I use it as an element I think it would be something like this:</p>
<p>$$\{(1,\{1,2\}), (2,\{1,2\})\}$$</p>
<p>If I were to break $C = \{\{1,2\}\}$ further, I'm not sure how I would implement that, so I'm guessing what I did above is correct, but I want to make sure.</p>
| MPW | 113,214 | <p>That's right. Just proceed as you normally would. $B$ has two elements and $C$ has one element. Pair them up (it may help to write "$A$" in place of "$\{1,2\}$" temporarily):</p>
<p>$$A\times B = \{(1,A), (2,A)\} = \{(1,\{1,2\}), (2,\{1,2\})\}$$</p>
|
3,035,228 | <p>Show that two cardioids <span class="math-container">$r=a(1+\cos\theta)$</span> and <span class="math-container">$r=a(1-\cos\theta)$</span> are at right angles.</p>
<hr>
<p><span class="math-container">$\frac{dr}{d\theta}=-a\sin\theta$</span> for the first curve and <span class="math-container">$\frac{dr}{d\theta}=a\sin\theta$</span> for the second curve but i dont know how to prove them perpendicular.</p>
| Jean Marie | 305,862 | <p><strong>Solution 1 :</strong> take a look at the following figure :</p>
<p><a href="https://i.stack.imgur.com/P2aND.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P2aND.jpg" alt="enter image description here"></a></p>
<p>Representing the two cardioids we are working on and 2 circles described by <span class="math-container">$z=\frac12(1+e^{i \theta})$</span> and <span class="math-container">$z=\frac12(i+ie^{i \theta})$</span>.</p>
<p>The 2 circles are orthogonal ; the cardioids being the images of these circles by conformal transformation <span class="math-container">$z \to z^2$</span> (red circle gives magenta cardioid, cyan circle gives blue cardioid), the orthogonality is preserved.</p>
<p><strong>Solution 2 :</strong> Sometimes, one of the best methods for studying a polar curve <span class="math-container">$r=r(\theta)$</span> is to come back to its equivalent <em>parametric</em> representation under the form </p>
<p><span class="math-container">$$x=r(\theta)\cos \theta, \ \ y=r(\theta)\sin(\theta)\tag{1}$$</span></p>
<p>In particular, (1) makes it easy to obtain the slope of the tangent to a curve $r=r(\theta)$ (the tangent of the angle it makes with the horizontal axis):</p>
<p>$$\frac{dy}{dx} = \frac{dy(\theta)}{d\theta}/\frac{dx(\theta)}{d\theta}= \frac{r'(\theta)\sin \theta+r(\theta)\cos(\theta)}{r'(\theta)\cos \theta-r(\theta)\sin(\theta)}=\frac{r'(\theta)\tan \theta+r(\theta)}{r'(\theta)-r(\theta)\tan(\theta)} .\tag{2}$$</p>
<p>Besides, the condition for orthogonality of two curves at a certain intersection point <span class="math-container">$I$</span> is that the product of the slopes of their tangents at this point is <span class="math-container">$-1$</span>, i.e.,</p>
<p><span class="math-container">$$\frac{dy_1}{dx} \frac{dy_2}{dx}=-1$$</span></p>
<p>I think you now have all the ingredients to finish. </p>
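Concretely, away from the pole the two curves cross where $\cos\theta=0$; plugging $\theta=\pi/2$ into the slope formula gives tangent slopes $1$ and $-1$, whose product is $-1$ (a numerical sketch of my own):

```python
from math import sin, cos, pi

def slope(r, rp, t):
    # slope of the tangent to a polar curve r(theta) at theta = t
    return (rp * sin(t) + r * cos(t)) / (rp * cos(t) - r * sin(t))

a, t = 1.0, pi / 2                              # intersection: cos(theta) = 0
m1 = slope(a * (1 + cos(t)), -a * sin(t), t)    # r = a(1 + cos t), r' = -a sin t
m2 = slope(a * (1 - cos(t)),  a * sin(t), t)    # r = a(1 - cos t), r' =  a sin t
assert abs(m1 * m2 + 1) < 1e-9                  # perpendicular tangents
```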
|
1,333,994 | <p>We have a function $f: \mathbb{R} \to \mathbb{R}$ defined as</p>
<p>$$\begin{cases} x; \ \ x \notin \mathbb{Q} \\ \frac{m}{2n+1}; \ \ x=\frac{m}{n}, m\in \mathbb{Z}, n \in \mathbb{N} \ \ \ \text{$m$ and $n$ are coprimes} \end{cases}.$$</p>
<p>Find where $f$ is continuous</p>
| user26486 | 107,671 | <p>$$\log_{3}(1/2)+\log_{1/2}(3)=\log_3(1/2)+\frac{\log_3(3)}{\log_3(1/2)}=$$</p>
<p>$$=\log_{3}(1/2)+\frac{1}{\log_3(1/2)}$$</p>
<p>$$=-\left(\left(\sqrt{-\log_{3}(1/2)}-\sqrt{-\frac{1}{\log_3(1/2)}}\right)^2+2\right)\le -2$$</p>
<p>Equality can't occur, because $\sqrt{-\log_{3}(1/2)}\neq \sqrt{-\frac{1}{\log_3(1/2)}}$.</p>
<p>Expressed differently:</p>
<p>$$\left(-\log_{3}(1/2)\right)+\left(-\log_{1/2}(3)\right)=\left(-\log_{3}(1/2)\right)+\left(-\frac{1}{\log_3(1/2)}\right)$$</p>
<p>$$\ge 2\sqrt{\left(-\log_3(1/2)\right)\left(-\frac{1}{\log_3(1/2)}\right)}=2$$</p>
<p>You may call it AM-GM, but it's the trivial inequality $a+b\ge 2\sqrt{ab}$, i.e. $\left(\sqrt{a}-\sqrt{b}\right)^2\ge 0$, for $a,b\ge 0$. Equality won't occur, since $\left(-\log_3(1/2)\right)\neq -\frac{1}{\log_3(1/2)}$.</p>
|
345,766 | <p>I'm trying to calculate this limit expression:</p>
<p>$$ \lim_{s \to \infty} \frac{ab + (ab)^2 + ... (ab)^s}{1 +ab + (ab)^2 + ... (ab)^s} $$</p>
<p>Both the numerator and denominator should converge, since $0 \leq a, b \leq 1$, but I don't know if that helps. My guess would be to use L'Hopital's rule and take the derivative with respect to $s$, which gives me:</p>
<p>$$ \lim_{s \to \infty} \frac{s (ab)^{s-1}}{s (ab)^{s-1}} $$</p>
<p>but this still gives me the non-expression $\frac{\infty}{\infty}$ as the solution, and applying L'Hopital's rule repeatedly doesn't change that. My second guess would be to divide by some multiple of $ab$ and therefore simplify the expression, but I'm not sure how that would help, if at all. </p>
<p>Furthermore, the solution in the tutorial I'm working through is listed as $ab$, but if I evaluate the expression that results from L'Hopital's rule, I get $1$ (obviously). </p>
| lab bhattacharjee | 33,337 | <p>If $ab=1,$</p>
<p>$$ \lim_{s \to \infty} \frac{ab + (ab)^2 + ... (ab)^s}{1 +ab + (ab)^2 + ... (ab)^s}= \lim_{s \to \infty} \frac{s}{s+1}=\lim_{s \to \infty} \frac1{1+\frac1s}=1$$</p>
<p>If $ab\ne1, $</p>
<p>$$\lim_{s \to \infty} \frac{ab + (ab)^2 + ... (ab)^s}{1 +ab + (ab)^2 + ... (ab)^s}$$</p>
<p>$$=\lim_{s \to \infty} \frac{(ab)^{s+1}-ab}{(ab)^{s+1}-1}$$</p>
<p>If $|ab|<1, \lim_{s \to \infty}(ab)^s=0$ then $$\lim_{s \to \infty} \frac{(ab)^{s+1}-ab}{(ab)^{s+1}-1}=ab$$</p>
<p>Similarly if $|ab|>1,\lim_{s \to \infty}\frac1{(ab)^s}=0$</p>
<p>then $$\lim_{s \to \infty} \frac{(ab)^{s+1}-ab}{(ab)^{s+1}-1}=\lim_{s \to \infty} \frac{1-\frac1{(ab)^s}}{1-\frac1{(ab)^{s+1}}}=1$$</p>
|
3,337,475 | <p>This is definitely the most difficult integral that I've ever seen.
Of course, I'm not able to solve this.
Could you help me?</p>
<p><span class="math-container">$$\int { \sin { x\cos { x } \cosh { \left( \ln { \sqrt { \frac { 1 }{ 1-\sin { x } } } +\tanh ^{ -1 }{ \left( \sin x \right) +\tanh ^{ -1 }{ \left( \cos { x } \right) } } } \right) } } dx } $$</span></p>
| Ninad Munshi | 698,724 | <p>Using the fact that <span class="math-container">$\cosh t = \frac{1}{\sqrt{1-\tanh^2 t}}$</span> and <span class="math-container">$\sinh t = \frac{\tanh t}{\sqrt{1-\tanh^2 t}} $</span> <span class="math-container">$$\cosh(a+b+c) = \cosh a \cosh b \cosh c + \sinh a \sinh b \cosh c + \sinh a \cosh b \sinh c + \cosh a \sinh b \sinh c$$</span>
the integrand simplifies to
<span class="math-container">$$\int dx \left[\cosh\left(\log\left(\frac{1}{\sqrt{1-\sin x}}\right)\right)\left(1+\sin x \cos x\right) + \sinh\left(\log\left(\frac{1}{\sqrt{1-\sin x}}\right)\right)\left(\sin x + \cos x \right) \right]$$</span>
Next we'll substitute <span class="math-container">$x = 2z+\frac{\pi}{2}$</span>:
<span class="math-container">$$\int 2dz \left[\cosh\left(\log\left(\frac{1}{\sqrt{1-\cos 2z}}\right)\right)\left(1-\sin 2z \cos 2z\right) + \sinh\left(\log\left(\frac{1}{\sqrt{1-\cos 2z}}\right)\right)\left(\cos 2z - \sin 2z \right) \right]$$</span>
<span class="math-container">$$ = \frac{1}{\sqrt2}\int dz (2\sin z + \csc z )(1-\sin 2z \cos 2z)-(2\sin z - \csc z )(\cos 2z - \sin 2z)$$</span>
<span class="math-container">$$ = \frac{1}{\sqrt2}\int dz\left[4\sin^2 z \cos z(1-\cos 2z) - 2\cos z(1+2\cos 2z)+2\sin z(1-\cos 2z)+ \csc z(1+\cos 2z) \right]$$</span>
<span class="math-container">$$ =\frac{1}{\sqrt{2}} \int(8\sin^4 z + 4 \sin^2 z - 4)\cos z - (4\cos^2 z - 2)\sin z + 2 \csc z \, dz$$</span>
<span class="math-container">$$= \sqrt{2}\left(\frac{4}{5}\sin^5 z + \frac{2}{3}\sin^3 z - 2 \sin z + \frac{2}{3}\cos^3 z - \cos z - \log|\csc z + \cot z|\right)$$</span>
Therefore our final answer is
<span class="math-container">$$\sqrt{2}\left(\frac{4}{5}\sin^5 \left(\frac{x}{2}-\frac{\pi}{4}\right) + \frac{2}{3}\sin^3 \left(\frac{x}{2}-\frac{\pi}{4}\right) - 2 \sin \left(\frac{x}{2}-\frac{\pi}{4}\right) + \frac{2}{3}\cos^3 \left(\frac{x}{2}-\frac{\pi}{4}\right) - \cos \left(\frac{x}{2}-\frac{\pi}{4}\right) - \log{\left|\csc \left(\frac{x}{2}-\frac{\pi}{4}\right) + \cot \left(\frac{x}{2}-\frac{\pi}{4}\right)\right|}\right) + C$$</span></p>
<p>Edit: We can simplify this a bit further with some trig shenanigans. Applying the angle subtraction formulas, we get:
<span class="math-container">$$\frac{1}{5}\left(\sin \left(\frac{x}{2}\right)-\cos\left(\frac{x}{2}\right)\right)^5 + \frac{1}{3}\left(\sin \left(\frac{x}{2}\right)-\cos\left(\frac{x}{2}\right)\right)^3 - 2 \left(\sin \left(\frac{x}{2}\right)-\cos\left(\frac{x}{2}\right)\right) + \frac{1}{3}\left(\sin \left(\frac{x}{2}\right)+\cos\left(\frac{x}{2}\right)\right)^3 - \left(\sin \left(\frac{x}{2}\right)+\cos\left(\frac{x}{2}\right)\right) - \sqrt{2}\log{\left|\frac{\sqrt{2}+\cos\left(\frac{x}{2}\right)+\sin\left(\frac{x}{2}\right)}{\sin\left(\frac{x}{2}\right)-\cos\left(\frac{x}{2}\right)}\right|} + C$$</span>
<span class="math-container">$$=\frac{1}{5}\left(\sin \left(\frac{x}{2}\right)-\cos\left(\frac{x}{2}\right)\right)^5 + \frac{2}{3}\sin \left(\frac{x}{2}\right)\left(\sin^2 \left(\frac{x}{2}\right)+3\cos^2\left(\frac{x}{2}\right)\right) - 3 \sin \left(\frac{x}{2}\right) + \cos\left(\frac{x}{2}\right) - \sqrt{2}\log{\left|\frac{\sqrt{2}+\cos\left(\frac{x}{2}\right)+\sin\left(\frac{x}{2}\right)}{\sin\left(\frac{x}{2}\right)-\cos\left(\frac{x}{2}\right)}\right|} + C$$</span>
<span class="math-container">$$=\frac{1}{5}\left(\sin \left(\frac{x}{2}\right)-\cos\left(\frac{x}{2}\right)\right)^5 + \frac{2}{3}\sin \left(\frac{x}{2}\right)\cos x - \frac{5}{3} \sin \left(\frac{x}{2}\right) + \cos\left(\frac{x}{2}\right) - \frac{1}{\sqrt{2}}\log{\left(\frac{3+2\sqrt{2}(\cos\left(\frac{x}{2}\right)+\sin\left(\frac{x}{2}\right))+\sin x}{1-\sin x}\right)} + C$$</span>
And I think I will stop there.</p>
|
543,938 | <p>Can anyone share a link to a proof of this?
$$\binom{p-1}{j}\equiv(-1)^j \pmod p$$ for prime $p$.</p>
| bof | 97,206 | <p>Since $p$ is prime, $\displaystyle\binom pj\equiv0\pmod p$ for $0\lt j\lt p$. </p>
<p>By Pascal's identity, for $0\lt j\lt p$ we have $$\binom{p-1}j=\binom pj-\binom{p-1}{j-1}\equiv-\binom{p-1}{j-1}\pmod p.$$Since $\displaystyle\binom{p-1}0=1$, it follows by induction that $\displaystyle\binom{p-1}j\equiv(-1)^j\pmod p$ </p>
<p>for $0\le j\le p-1$.</p>
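<p>The congruence is easy to spot-check numerically for small primes (the helper name below is mine):</p>

```python
from math import comb

def holds_for(p):
    # check C(p-1, j) ≡ (-1)^j (mod p) for 0 <= j <= p-1
    return all(comb(p - 1, j) % p == (-1) ** j % p for j in range(p))

print([p for p in [2, 3, 5, 7, 11, 101] if holds_for(p)])
print(holds_for(8))  # the congruence fails for a composite modulus such as 8
```

<p>Note that primality is essential: the step <span class="math-container">$\binom pj\equiv0\pmod p$</span> breaks down for composite moduli.</p>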
|
524,686 | <p>Let X be a uniform random variable on [0,1], and let $Y=\tan\left (\pi \left(X-\frac{1}{2}\right)\right)$. Calculate E(Y) if it exists. </p>
<p>After doing some research into this problem, I have discovered that Y has a Cauchy distribution (although I do not know how to prove this); therefore, E(Y) does not exist.</p>
<p>Also, I know that if I can show the improper integral does not absolutely converge - i.e., that $\int_{0}^{1}\left|\tan\left(\pi\left(x-\frac{1}{2}\right)\right)\right|dx$ diverges - I can show that E(Y) does not exist.</p>
<p>The problem is that I do not know how to evaluate this integral. Could someone please enlighten me on how to do so? Thanks in advance.</p>
| Zz'Rot | 51,747 | <p>My intuition would be, since $\tan$ is a periodic function, you can first consider a single cycle. (In this case, maybe $x\in(0,1)$.) Since you have an infinite number of cycles, you can deduce that the integral diverges.</p>
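<p>The divergence can also be checked directly: since an antiderivative of <span class="math-container">$\tan$</span> is <span class="math-container">$-\ln|\cos|$</span>, the truncated integral over <span class="math-container">$(\varepsilon, 1-\varepsilon)$</span> equals <span class="math-container">$\frac{2}{\pi}\ln\frac{1}{\sin \pi\varepsilon}$</span>, which grows without bound as <span class="math-container">$\varepsilon\to 0$</span>. A numerical sketch (all names are mine):</p>

```python
from math import pi, tan, sin, log

def truncated(eps, n=100000):
    # midpoint rule for the integral of |tan(pi*(x - 1/2))| over (eps, 1 - eps)
    h = (1 - 2 * eps) / n
    return h * sum(abs(tan(pi * (eps + (i + 0.5) * h - 0.5))) for i in range(n))

for eps in (1e-1, 1e-2, 1e-3):
    exact = (2 / pi) * (-log(sin(pi * eps)))  # closed form of the truncated integral
    print(eps, round(truncated(eps), 4), round(exact, 4))
```

<p>The values grow like <span class="math-container">$\frac{2}{\pi}\ln\frac{1}{\varepsilon}$</span>, so the absolute integral diverges and E(Y) does not exist.</p>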
|
1,334,680 | <p>How to apply principle of inclusion-exclusion to this problem?</p>
<blockquote>
<p>Eight people enter an elevator at the first floor. The elevator
discharges passengers on each successive floor until it empties on the
fifth floor. How many different ways can this happen</p>
</blockquote>
<p>The people are <strong>distinguishable</strong>.</p>
| wythagoras | 236,048 | <p>On the bottom you have axioms (things that are assumed to be true) and definitions. In the case of set theory, these might be the axioms of ZFC and the definitions that explain them. PA or KP might be another possibility.</p>
<p>We will need another informal system (like English) to build the lowest axioms. But English is not a formal system. We can easily reach the paradox: <em>The smallest ordinal which is not definable using [logic system]</em> is definable using English. And it must necessarily exist, since there are only countably many definitions and uncountably many countable ordinals. Therefore English must stand on top of all formal systems, thus it can not be a formal system itself. </p>
<hr>
<p>This is <em>what I think</em> to be a reasonable axiomatization and definition of logic (Yes, I just made this up). Comments on this are more than welcome. </p>
<p><strong>Axiom 1.</strong> Any proposition $P$ has either value 0 or value 1. </p>
<p><strong>Definition 1.</strong> $\neg P$ has value 1 if $P$ has value 0, $\neg P$ has value 0 if $P$ has value 1.</p>
<p><strong>Definition 2,3,4,5,6.</strong> $P \wedge Q$, $P \vee Q$, $P \implies Q$, $P \iff Q$, $x \in S$. You know their definition.</p>
<p><strong>Definition 7.</strong> The proposition $\forall x \in S: P(x)$ is true if P(x) is true for all $x \in S$.</p>
<p><strong>Definition 8.</strong> The proposition $\exists x \in S: P(x)$ is true if P(x) is true for some $x \in S$.</p>
<p>Using these tools we can formulate the axioms of ZFC. </p>
|
1,334,680 | <p>How to apply principle of inclusion-exclusion to this problem?</p>
<blockquote>
<p>Eight people enter an elevator at the first floor. The elevator
discharges passengers on each successive floor until it empties on the
fifth floor. How many different ways can this happen?</p>
</blockquote>
<p>The people are <strong>distinguishable</strong>.</p>
| user21820 | 21,820 | <p>Most set theories, such as ZFC, require an underlying knowledge of first-order logic formulas (as strings of symbols). This means that they require acceptance of facts of string manipulations (which is essentially equivalent to accepting arithmetic on natural numbers!) First-order logic does not require set theory, but if you want to prove something <strong>about</strong> first-order logic, you need some stronger framework, often called a meta theory/system. Set theory is one such stronger framework, but it is not the only possible one. One could also use a higher-order logic, or some form of type theory, both of which need not have anything to do with sets.</p>
<p>The circularity comes only if you say that you can <strong>justify</strong> the use of first-order logic or set theory or whatever other formal system by proving certain properties about them, because in most cases you would be using a stronger meta system to prove such meta theorems, which <strong>begs the question</strong>. However, if you use a <strong>weaker</strong> meta system to prove some meta theorems about stronger systems, then you might consider that justification more reasonable, and this is indeed done in the field called Reverse Mathematics.</p>
<p>Consistency of a formal system has always been the worry. If a formal system is inconsistent, then anything can be proven in it and so it becomes useless. One might hope that we can use a weaker system to prove that a stronger system is consistent, so that if we are convinced of the consistency of the weaker system, we can be convinced of the consistency of the stronger one. However, as Godel's incompleteness theorems show, this is impossible if we have arithmetic on the naturals.</p>
<p>So the issue dives straight into philosophy, because any proof in any formal system will already be a finite sequence of symbols from a finite alphabet of size at least two, so simply <strong>talking about</strong> a proof requires understanding finite sequences, which (almost) requires natural numbers to model. This means that any meta system powerful enough to talk about proofs and 'useful' enough for us to prove meta theorems in it (If you are a Platonist, you could have a formal system that simply has all truths as axioms. It is completely useless.) will be able to do something equivalent to arithmetic on the naturals and hence suffer from incompleteness.</p>
<p>There are two main parts to the 'circularity' in Mathematics (which is in fact a sociohistorical construct). The first is the understanding of logic, including the conditional and equality. If you do not understand what "if" means, no one can explain it to you because any purported explanation will be circular. Likewise for "same". (There are many types of equality that philosophy talks about.) The second is the understanding of the arithmetic on the natural numbers including induction. This boils down to the understanding of "repeat". If you do not know the meaning of "repeat" or "again" or other forms, no explanation can pin it down.</p>
<p>Now there arises the interesting question of how we could learn these basic undefinable concepts in the first place. We do so because we have an innate ability to recognize similarity in function. When people use words in some ways consistently, we can (unconsciously) learn the functions of those words by seeing how they are used and abstracting out the similarities in the contexts, word order, grammatical structure and so on. So we learn the meaning of "same" and things like that automatically.</p>
<p>I want to add a bit about the term "mathematics" itself. What we today call "mathematics" is a product of not just our observations of the world we live in, but also historical and social factors. If the world were different, we will not develop the same mathematics. But in the world we do live in, we cannot avoid the fact that there is no non-circular way to explain some fundamental aspects of the mathematics that we have developed, including equality and repetition and conditionals as I mentioned above, even though these are based on the real world. We can only explain them to another person via a shared experiential understanding of the real world.</p>
|
4,189,614 | <p>I am trying to answer the following question...</p>
<blockquote>
<p>Consider a wall made of brick <span class="math-container">$10$</span> centimeters thick, which separates a room in a house from the outside. The room is kept at <span class="math-container">$20$</span> degrees. Initially the outside temperature is <span class="math-container">$10$</span> degrees and the temperature in the wall has reached steady state. Then there is a sudden cold snap and the outside temperature drops to <span class="math-container">$-10$</span> degrees. Find the temperature in the wall as a function of position and time.</p>
</blockquote>
<p>I am okay executing the separation of variables technique, but I can't really reason through how to model this scenario. The solution manual states that the Initial/Boundary Value Problem is...<span class="math-container">$$u_t = ku_{xx},\ u(0,\ t) = 20,\ u(10,\ t) = -10,\ u(x,\ 0) = 20 - x$$</span></p>
<p>This question comes in a section before higher dimensional heat equations are introduced, but to me, it seems that this should be modeled as a three-dimensional heat equation, because thin walls are two-dimensional, and the bricks are prescribed thickness. How can I intuitively reason through this word problem to model it correctly?</p>
| Michał Miśkiewicz | 350,803 | <p>Following user89699's answer, let me phrase the same thought differently.</p>
<hr />
<p>In principle, you're right. The wall is 3-dimensional, so the 3-dimensional heat flow would be our first choice. For simplicity, let us assume that the wall stretches infinitely in directions <span class="math-container">$y$</span> and <span class="math-container">$z$</span>, and has thickness <span class="math-container">$10$</span> in direction <span class="math-container">$x$</span>. Then the temperature function <span class="math-container">$v$</span> solves
<span class="math-container">$$
v_t = k \cdot (v_{xx}+v_{yy}+v_{zz}),\ v(0,y,z,t) = 20,\ v(10,y,z,t) = -10,\ v(x,y,z,0) = 20 - x.
$$</span>
But because of the symmetry of the setup, we expect that the solution is independent of <span class="math-container">$y$</span> and <span class="math-container">$z$</span>. In other words, the heat only flows orthogonally through the wall (in direction <span class="math-container">$x$</span>). Formally, this observation can be rephrased as follows. If <span class="math-container">$u$</span> solves the 1-dimensional heat equation
<span class="math-container">$$
u_t = k \cdot u_{xx},\ u(0,t) = 20,\ u(10,t) = -10,\ u(x,0) = 20 - x
$$</span>
then the function <span class="math-container">$v(x,y,z,t) := u(x,t)$</span> solves our 3-dimensional problem.</p>
<hr />
<p>Of course, the wall is not infinite, but this is why we call it <em>modeling</em>: the reduction to 1-dimensional case is not 100% precise, but it's useful. In reality, the thickness of <span class="math-container">$10$</span> cm is relatively small compared to the other two dimensions, so we expect the error resulting from our idealization to be small.</p>
<hr />
<p>As requested in the comment - a few words about the initial conditions. Before the temperature drop, we had another solution <span class="math-container">$w$</span> with boundary conditions <span class="math-container">$w(0,y,z,t) = 20$</span> and <span class="math-container">$w(10,y,z,t) = 10$</span>. Moreover, we assume this was a stationary state, which means that <span class="math-container">$w_t \equiv 0$</span>.</p>
<p>Again, we look for a symmetric solution, so the problem reduces to the following 1-dimensional problem
<span class="math-container">$$
0 = k \cdot u_{0xx},\ u_0(0) = 20,\ u_0(10) = 10.
$$</span>
The condition <span class="math-container">$u_{0xx}=0$</span> tells us that <span class="math-container">$u_0$</span> is a linear function, and so <span class="math-container">$u_0(x)=20-x$</span>. This function then serves as the initial condition after the temperature drop.</p>
<p>The steady state assumption is yet another idealization. In reality, we should rather solve the heat equation with boundary conditions <span class="math-container">$20$</span> and <span class="math-container">$10$</span> and take <span class="math-container">$w(x,y,z,T)$</span> (with some large time <span class="math-container">$T>0$</span>) as the initial condition in the next step. However, since <span class="math-container">$w(\cdot,T)$</span> converges to a harmonic function (as <span class="math-container">$T \to \infty$</span>), we can assume the resulting error is negligible.</p>
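<p>The 1-dimensional model is also easy to check numerically. A minimal explicit finite-difference sketch (the discretization choices and <span class="math-container">$k=1$</span> are my own assumptions): starting from <span class="math-container">$u(x,0)=20-x$</span> with the new outside temperature, the wall relaxes to the new steady state <span class="math-container">$20-3x$</span>.</p>

```python
# Explicit finite differences for u_t = k*u_xx on [0, 10]
# with u(0,t) = 20, u(10,t) = -10, u(x,0) = 20 - x.
k, L, nx = 1.0, 10.0, 101
dx = L / (nx - 1)
dt = 0.4 * dx * dx / k            # stability requires k*dt/dx^2 <= 1/2
r = k * dt / (dx * dx)
x = [i * dx for i in range(nx)]
u = [20.0 - xi for xi in x]       # old steady state ...
u[-1] = -10.0                     # ... with the new outside temperature
for _ in range(30000):            # integrate to t = 30000*dt = 120
    u = [u[0]] + [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
                  for i in range(1, nx - 1)] + [u[-1]]
gap = max(abs(u[i] - (20.0 - 3.0 * x[i])) for i in range(nx))
print(gap)  # tends to 0: the new steady state is the linear profile 20 - 3x
```

<p>The slowest Fourier mode decays like <span class="math-container">$e^{-k\pi^2 t/L^2}$</span>, which matches the observed convergence.</p>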
|
471,145 | <p>I'm reading Proofs from the Book, and I ran into following theorem:</p>
<p>Suppose all roots of polynomial $x^n + a_{n-1}x^{n-1} + \dots + a_0$ are real. Then the roots are contained in the interval:</p>
<p>$$ - \frac{a_{n-1}}{n} \pm \frac{n-1}{n}
\sqrt{a_{n-1}^2 - \frac{2n}{n-1} a_{n-2} } $$</p>
<p>So, if you know that all the roots of polynomial are real, you can get an interval that contains them just looking at the first two coefficients.</p>
<p>I'm interested in other theorems/tricks that let you figure out interesting things about a polynomial just by ''eyeing'' it. Especially if they are surprising!</p>
| Daniel Fischer | 83,702 | <p>For an open set $U \subset \mathbb{C}$, there always are $f \in \mathcal{O}(U)$ which cannot be continued analytically across any part of the boundary $\partial U$.</p>
<p>That is an easy consequence of the (general) Weierstraß product theorem, which I quote here from Rudin (Real and Complex Analysis; Thm 15.11):</p>
<blockquote>
<p>Let $\Omega$ be an open set in $S^2$, $\Omega \neq S^2$. Suppose $A \subset \Omega$ and $A$ has no limit point in $\Omega$. With each $\alpha \in A$ associate a positive integer $m(\alpha)$. Then there exists an $f \in H(\Omega)$ all of whose zeros are in $A$, and such that $f$ has a zero of order $m(\alpha)$ at each $\alpha \in A$.</p>
</blockquote>
<p>With that, all one needs is the existence of a set $A$ contained in the annulus $K = \{ z : r < \lvert z\rvert < R\}$ such that $A$ has no limit point in $K$, and every boundary point of $K$ is a limit point of $A$. Then, choosing all multiplicities as $1$ for simplicity, one obtains a function $f\not\equiv 0$ such that every boundary point is a limit of zeros of $f$, hence $f$ cannot be analytically continued across any part of the boundary with isolated singularities, since such a continuation would vanish on a non-discrete set without vanishing identically, contradicting the identity theorem.</p>
<p>For the special case of annuli, we can construct such a function relatively explicitly.</p>
<p>For $n \in \mathbb{Z}^+$, let $\alpha_n = \left(1 - \frac{1}{(n+1)^2}\right)e^{in}$. Then $(\alpha_n)_{n \in \mathbb{Z}^+}$ is a sequence in the unit disk $\mathbb{D}$ whose limit points are exactly the points on the unit circle $\partial \mathbb{D}$. (That no other points are limit points is immediate, that every point on the unit circle is a limit point follows from the fact that $A_k := \{e^{in} : n \geqslant k \}$ is dense in the unit circle for all $k\in \mathbb{Z}^+$.)</p>
<p>Then consider the Blaschke product</p>
<p>$$B(z) := \prod_{n=1}^\infty \frac{\alpha_n - z}{1 - \overline{\alpha_n}z}\frac{\lvert \alpha_n\rvert}{\alpha_n}.$$</p>
<p>Since $\sum (1 - \lvert\alpha_n\rvert) = \pi^2/6 - 1 < \infty$, $B \in H^\infty(\mathbb{D})$ and $B$ has simple zeros in each $\alpha_n$ and no other zeros, by the general theory of Blaschke products.</p>
<p>Now, if $0 < r < R < \infty$, the function</p>
<p>$$f(z) = B\left(\frac{z}{R}\right)\cdot B\left(\frac{r}{z}\right)$$</p>
<p>is holomorphic on the annulus and has simple zeros [$R\cdot\alpha_k = r/\alpha_m$ can't happen, because $e^{i(k+m)} \neq 1$ for $k,\,m \in \mathbb{Z}^+$, so the zeros remain simple] only in the points $R\cdot\alpha_n$ and $r/\alpha_n$ for $n$ sufficiently large, so that $R\lvert\alpha_n\rvert > r$ resp. $r/\lvert\alpha_n\rvert < R$. Every point on $\lvert z\rvert = R$ is a limit point of the $R\cdot\alpha_n$, and every point on $\lvert z\rvert = r$ is a limit point of the $r/\alpha_n$, so every boundary point of the annulus is a limit point of zeros of $f$.</p>
<p>If $r = 0$, then $0$ is trivially an isolated singularity of each $g \in \mathcal{O}(K)$, and if $R = \infty$, then $\infty$ is trivially an isolated singularity of each $g \in \mathcal{O}(K)$. If $r = 0$ and $R = \infty$, then each boundary point of $K$ is an isolated singularity of all $g \in \mathcal{O}(K)$, if only one of these equations hold then drop the corresponding $B(z/R)$ or $B(r/z)$ from $f$ to obtain a function that cannot be analytically continued (with isolated singularities) across any part of the boundary.</p>
|
220,736 | <p>I have reduced a problem I'm working on to something resembling a graph theory problem, and my limited intuition tells me that it's not so esoteric that only I could have ever considered it. <strong>I'm looking to see if someone knows of any related work.</strong> Here's the problem:</p>
<hr>
<p>Given a roadway map (directed graph) and a set of sensor activations that reports how many vehicles were on each edge at a given time, generate as many sets of routes (i.e. source edge, sink edge, and start-time) that would explain the given sensor activations as possible.</p>
<hr>
<p>Here is a trivial approach: Create a source and sink for each edge, and generate/consume as much traffic as is needed to satisfy the count at that edge for that time-slice.</p>
<p>The trivial case is useless to me, as I'm trying to study the dependencies across multiple intersections and multiple time-slices. What I need is a <em>likely</em> explanation, where the routes resemble the kind of routes that actual drivers would choose, and where the distribution of trip lengths also makes sense.</p>
<p>If I could generate all such explanations, or at least a great number of them, I could then treat picking the "likely" one as a separate problem.</p>
<p>Is there an algorithm that might be applicable here?</p>
| Igor Rivin | 11,142 | <p>The area is certainly the same for all smooth convex curves and small $\epsilon$ - your polygonal curve is a good way to see why that might be true. For large $\epsilon,$ it is not clear what the question means...</p>
|
1,072,656 | <p>I am building a website which will run on the equation specified below. I am in pre-algebra and do not have any idea how to go about this equation. My friends say it is a system of equations but I don't know how to solve those, and no one I know seems to know how to do them with exponents. I was hoping that people on this site could tell me the answer to the problem. If you could explain how to do it, not just for me but for people in my situation, that might make the question better for everyone else looking into it. Thank you for your answer!</p>
<p>Here is the system of equations:</p>
<p>$xy^5=8000$</p>
<p>$(xy^4)-(xy^3)=5000$</p>
| turkeyhundt | 115,823 | <p>The first equation tells you that $x=\frac{8000}{y^5}$. Substituting that into the second equation you have $\frac{8000}{y^5}y^4-\frac{8000}{y^5}y^3=5000$ $$\frac{8000}{y}-\frac{8000}{y^2}=5000$$ Multiplying both sides by $\frac{y^2}{1000}$: $$8y-8=5y^2\\5y^2-8y+8=0$$</p>
<p>But that gives imaginary roots, so either they never intersect or I did something wrong... Lemme check.</p>
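<p>Nothing went wrong: the discriminant of <span class="math-container">$5y^2-8y+8$</span> is <span class="math-container">$64-160=-96\lt 0$</span>, so the curves never intersect over the reals. A quick check with Python's <code>cmath</code> shows the complex roots do satisfy both original equations:</p>

```python
import cmath

disc = (-8) ** 2 - 4 * 5 * 8          # -96 < 0: no real roots
for sign in (1, -1):
    y = (8 + sign * cmath.sqrt(disc)) / 10
    x = 8000 / y ** 5
    # residuals of the two original equations (should be ~0)
    print(abs(x * y ** 5 - 8000), abs(x * y ** 4 - x * y ** 3 - 5000))
```

<p>So the system has only complex solutions.</p>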
|
102,661 | <blockquote>
<p>$n$ people attend the same meeting, what is the chance that two people share the same birthday? Given the first $b$ birthdays, the probability the next person doesn't share a birthday with any that went before is $(365-b)/365$. The probability that none share the same birthday is the following: $\prod_{b=0}^{n-1}\frac{365-b}{365}$. How many people would have to attend a meeting so that there is at least a $50$% chance that two people share a birthday?</p>
</blockquote>
<p>So I set $\prod_{b=0}^{n-1}\frac{365-b}{365}=.5$ and from there I manipulated some algebra to get
$\frac{365!}{(365-n)!\,365^{n}}=.5\iff (365-n)!\,365^{n}=365!/.5=.....$ </p>
<p>There has to be an easier way of simplifying this. </p>
| yiyi | 23,662 | <p>$\displaystyle{p(n) = 1 - \left(\frac{364}{365}\right)^{C(n,2)} = 1 - \left(\frac{364}{365}\right)^{n(n-1)/2} }$</p>
<p>Sorry if my latex is not right. </p>
<p>The big trick with most prob questions is to ask what is the prob if it doesn't happen. </p>
<p>So you take 1 (total sample space) - P(not your birthday) = P(share your birthday) </p>
<p>Because P(A) = 1- not(P(A)). </p>
<p>so there are 364 days that are not your birthday, and the total number of days is 365. Thus 364/365 is the prob of not(P(A)). </p>
<p>Now the C(n,2) comes from the pairs (two people having the same birthday) </p>
<p>so your answer is $\left(\frac{364}{365}\right)^{n(n-1)/2} = .5$</p>
<p>solve for n, and from Maple 22.98452752. </p>
|
102,661 | <blockquote>
<p>$n$ people attend the same meeting, what is the chance that two people share the same birthday? Given the first $b$ birthdays, the probability the next person doesn't share a birthday with any that went before is $(365-b)/365$. The probability that none share the same birthday is the following: $\prod_{b=0}^{n-1}\frac{365-b}{365}$. How many people would have to attend a meeting so that there is at least a $50$% chance that two people share a birthday?</p>
</blockquote>
<p>So I set $\prod_{b=0}^{n-1}\frac{365-b}{365}=.5$ and from there I manipulated some algebra to get
$\frac{365!}{(365-n)!\,365^{n}}=.5\iff (365-n)!\,365^{n}=365!/.5=.....$ </p>
<p>There has to be an easier way of simplifying this. </p>
| Karatuğ Ozan Bircan | 12,686 | <p>Paul Halmos asked this question in his "automathography", <em>I Want to Be a Mathematician</em>, and solved it as follows:</p>
<blockquote>
<p>In other words, the problem amounts to this: find the smallest <span class="math-container">$n$</span> for which <span class="math-container">$$\prod_{k=0}^{n-1} \left(1-\frac{k}{365}\right) \lt \frac{1}{2}.$$</span></p>
<p>The indicated product is dominated by</p>
<p><span class="math-container">$$\left(\frac{1}{n} \sum_{k=0}^{n-1} \left(1-\frac{k}{365}\right)\right)^n \lt \left(\frac{1}{n} \int_0^n \left(1-\frac{x}{365}\right)\mathrm dx\right)^n = \left(1- \frac{n}{730}\right)^n \lt e^{-n^2/730}.$$</span></p>
<p>The last term is less than <span class="math-container">$1/2$</span> if and only if <span class="math-container">$n \gt \sqrt{730 \log 2} \approx 22.6.$</span></p>
<p>Hence <span class="math-container">$n=23$</span>.</p>
</blockquote>
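<p>The estimate can be confirmed by evaluating the exact product directly; a short Python sketch finds the smallest such <span class="math-container">$n$</span>:</p>

```python
def p_no_match(n):
    # probability that n people all have distinct birthdays
    p = 1.0
    for b in range(n):
        p *= (365 - b) / 365
    return p

n = 1
while p_no_match(n) >= 0.5:
    n += 1
print(n, round(p_no_match(n), 4))  # 23 people suffice; p_no_match(23) ≈ 0.4927
```

<p>So the exact computation agrees with Halmos's estimate of <span class="math-container">$n=23$</span>.</p>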
|
1,530,406 | <p>How to multiply Roman numerals? I need an algorithm of multiplication of numbers written in Roman numbers. Help me please. </p>
| ByronGeorge | 656,816 | <p>I don't like any on-line solutions to Numeral multiplication so I made one up.</p>
<p>The only thing you need to remember when multiplying is V x V = XXV; all other multiple combinations are column shifts.</p>
<p>Example: XXXVII multiplied by XVIII</p>
<p>XXXVII across the top is multiplied by each individual numeral of XVIII down the left column, then each column is totalled.</p>
<p><span class="math-container">$$
\begin{matrix}
&&&& XXX & V & II \\
X && CCC & L & XX \\
V &&& LLL & XX & V \\
&&&&& VV \\
I &&&& XXX & V & II \\
I &&&& XXX & V & II \\
I &&&& XXX & V & II \\
---- & ---- & ---- & ---- & ---- & ---- & ---- \\
Carry & D & CCC & LLL & XXX & V \\
: & = & = & = & = & = & = \\
Total & D & C & L & X & V & I \\
\end{matrix}
$$</span></p>
<p>E.g. V (left) x II (top) = VV; carried to the V column, VxV = XXV and VxXXX = LLL.
There are 6 I's, which total VI; the V is carried.
6 V's + that V carry = V + XXX carried</p>
<p>I have resisted the urge to use decimals to convert numerals at each stage, that's your task.</p>
<p>The final result is DCLXVI, a single occurrence of all numerals below M</p>
<p>A revelation perhaps?</p>
<p>All this only takes seconds on an Abacus and my division method is equally different</p>
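<p>For checking results, a conversion-based sketch in Python (the decimal route the author deliberately avoids; all names are mine) reproduces the table's product:</p>

```python
SYMBOLS = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
           (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
           (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
DIGITS = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(s):
    total, prev = 0, 0
    for ch in reversed(s):      # a digit preceding a larger one is subtracted
        v = DIGITS[ch]
        total += v if v >= prev else -v
        prev = v
    return total

def int_to_roman(n):
    out = []
    for v, sym in SYMBOLS:      # greedy: largest symbol first
        while n >= v:
            out.append(sym)
            n -= v
    return "".join(out)

def roman_mul(a, b):
    return int_to_roman(roman_to_int(a) * roman_to_int(b))

print(roman_mul("XXXVII", "XVIII"))  # DCLXVI, matching the worked example
```

<p>This is only a cross-check; the column method above works entirely inside the numeral system.</p>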
|
2,327,675 | <p>Using the GPU Gems article <a href="https://developer.nvidia.com/gpugems/GPUGems/gpugems_ch01.html" rel="nofollow noreferrer" title="Effective Water Simulation From Physical Models">Effective Water Simulation From Physical Models</a>, I have implemented Gerstner waves in UE4. I have built the function both on the GPU for the tessellated mesh displacement and in code for the purpose of sampling the height of the waves. The issue I have run into is that the points, by the nature of the formula, move away from their original positions, so when using the Gerstner function to get the height of a point, the displaced point is not necessarily the point you are sampling, and it therefore returns an incorrect height for that coordinate.</p>
<p>So somehow I need to solve for the point in the Gerstner function that actually lines up with the world position. I've taught myself enough to implement the waves, but I'm stuck on this problem, though I suspect there is some sort of matrix or way to take the inputs, apply the cos values and use it as a lookup for the actual point to sample. Any ideas and help are greatly appreciated.</p>
<p>For Reference the Gerstner Wave Formula</p>
<p><a href="https://i.stack.imgur.com/am1bR.jpg" rel="nofollow noreferrer">Formula Image</a></p>
<p><code>Q = Slope<br>
L = WaveLength<br>
A = Amplitude<br>
D = Vector2 Direction<br>
x = x world coordinate<br>
y = y world coordinate<br>
t = time<br>
Where Qi = Q/wi * A * NumWaves<br>
Where wi = 2pi/L<br>
Where phase = Speed * 2pi/L</code></p>
| Tyrendel | 1,098,410 | <p>Solution that I use:
<span class="math-container">$$
y=\frac{a}{2}\sin\left(\lambda x-a\cos\left(\lambda x-a\cos\left(...\right)\right)\right)
$$</span>
with:</p>
<ul>
<li><span class="math-container">$a$</span>: wave effect strength</li>
<li><span class="math-container">$p$</span>: wavelength</li>
<li><span class="math-container">$\lambda$</span>: wave number, <span class="math-container">$\lambda=\frac{2\pi}{p}$</span></li>
</ul>
<hr />
<p>The post by Daniel Wells refers to a desmos graph using the formula
<span class="math-container">$$
\begin{cases}
y=ha\sin\left(w\left(x-a\frac{p}{\color{red}{8}}\cos\left(w\left(x-a\frac{p}{\color{red}{8}}\cos\left(...\right)\right)\right)\right)\right)\\
w=\frac{\color{red}{2\pi}}{p}
\end{cases}
$$</span></p>
<p>I don't get the use of the number 8 instead of 2π. It seems to lead to a beautiful curve in fewer steps but gives smooth results even for the maximum wave effect (a=1), when the wave is supposed to be spiky.</p>
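<p>Treating the infinite nesting in the first formula as a fixed-point iteration gives a direct implementation; this is my own transcription (function and parameter names are mine), with a finite iteration depth. The update is a contraction for <span class="math-container">$a\lt 1$</span>, since the inner map has derivative bounded by <span class="math-container">$a$</span>:</p>

```python
from math import pi, sin, cos

def wave_height(x, a, p, depth=10):
    """y = (a/2) * sin(lam*x - a*cos(lam*x - a*cos(...))), lam = 2*pi/p."""
    lam = 2 * pi / p
    inner = 0.0
    for _ in range(depth):              # unrolls the nested cos terms
        inner = a * cos(lam * x - inner)
    return 0.5 * a * sin(lam * x - inner)

print(round(wave_height(1.3, 0.8, 4.0), 6))
```

<p>The result is periodic with wavelength <span class="math-container">$p$</span> and bounded by <span class="math-container">$a/2$</span>, as expected of a height profile.</p>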
|
1,903,333 | <p>Let $G$ be a group. Prove that $Z(G)$ (the center of $G$) is always nonempty.</p>
<p>Can anyone give me a solution to this theoretical problem? I have just started learning group theory and I am very interested in this branch of mathematics.</p>
| Probabilitytheoryapprentice | 361,972 | <p>Hint: Think of the neutral element^^</p>
|
2,992,454 | <p>Prove :</p>
<blockquote>
<p><span class="math-container">$f : (a,b) \to \mathbb{R} $</span> is convex, then <span class="math-container">$f$</span> is bounded on every closed subinterval of <span class="math-container">$(a,b)$</span></p>
</blockquote>
<p>where <span class="math-container">$f$</span> is convex if <span class="math-container">$f(\lambda x + (1-\lambda)y) \le \lambda f(x) + (1-\lambda)f(y), \forall x,y \in (a,b), \forall \lambda \in [0,1]$</span></p>
<hr>
<h2>Try</h2>
<h3><span class="math-container">$f$</span> is bounded above</h3>
<p>Let <span class="math-container">$J = [\alpha, \beta] \subset (a,b)$</span>. </p>
<p><span class="math-container">$\forall c \in [\alpha, \beta]$</span>, <span class="math-container">$\exists \lambda_0 \in [0,1]$</span> s.t. <span class="math-container">$c = \lambda_0 \alpha + (1-\lambda_0) \beta$</span>, and</p>
<p><span class="math-container">$$
f(c) = f(\lambda_0 \alpha + (1-\lambda_0) \beta) \le \lambda_0 f(\alpha) + (1-\lambda_0) f(\beta)
$$</span></p>
<p>thus, <span class="math-container">$f$</span> is bounded above on <span class="math-container">$[\alpha, \beta]$</span></p>
<p>But I'm stuck at how I should proceed to prove that <span class="math-container">$f$</span> is bounded below.</p>
| Community | -1 | <p>One way could be to first prove that <span class="math-container">$f$</span> is continuous on <span class="math-container">$(a,b)$</span>. Then, we know that <span class="math-container">$f$</span> is bounded on every closed subinterval of <span class="math-container">$(a,b)$</span>.</p>
<p>A proof of this assertion can be found in Rudin's <em>Real and Complex Analysis</em>: this is Theorem 3.2 in the third edition. (On closer inspection, this is indeed the content of @HagenvonEitzen's answer).</p>
|
1,480,511 | <p>I have included a screenshot of the problem I am working on for context. T is a transformation. What is meant by Tf? Is it equivalent to T(f) or does it mean T times f?</p>
<p><a href="https://i.stack.imgur.com/LtRS1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LtRS1.png" alt="enter image description here"></a></p>
| Vincenzo Zaccaro | 269,380 | <p>Suppose that $a, b$ are coprime numbers. Note that $3 \mid b^2$, so $3 \mid b$. If $3 \mid b$ then $3 \mid a$, absurd!</p>
|
168,118 | <p>I'm attempting to differentiate an equation in the form</p>
<pre><code>D[Sqrt[(2*(((a*b*c+Pi*d*e^2+Pi*f*g^2+h*i*j+Pi*k*l^2)/(a*b*c+Pi*d*e^2+Pi*f*g^2))-1)*m)/(n^2 - o^2)/p], a]
</code></pre>
<p>in order to do an error propagation analysis. So I need to differentiate it against a, against b, against c and so on.</p>
<p>All my values will be positive (they reflect physical dimensions of my design) and the value of n will always be greater than o.</p>
<p>Is there a way to define all my variables as real and positive before I differentiate? And to define n greater than o?</p>
<p>The best I've found is <code>$Assumptions = _Symbol \[Element] Reals</code> but this only gets me part of the way.</p>
| Bob Hanlon | 9,362 | <pre><code>assume = (And @@
Thread[{a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p} > 0]) && n > o;
</code></pre>
<p>Note that when you state that a variable is positive then it is automatically also real. And for <a href="http://reference.wolfram.com/language/ref/$Assumptions.html" rel="nofollow noreferrer"><code>$Assumptions</code></a> or <a href="http://reference.wolfram.com/language/ref/Assuming.html" rel="nofollow noreferrer"><code>Assuming</code></a> to have an effect, you must use a function that takes the option <a href="http://reference.wolfram.com/language/ref/Assumptions.html" rel="nofollow noreferrer"><code>Assumptions</code></a> (e.g., <a href="http://reference.wolfram.com/language/ref/Simplify.html" rel="nofollow noreferrer"><code>Simplify</code></a>).</p>
<pre><code>Assuming[assume, Element[a, Reals] // Simplify]
(* True *)
expr = Assuming[assume,
D[Sqrt[(2*(((a*b*c + Pi*d*e^2 + Pi*f*g^2 + h*i*j + Pi*k*l^2)/(a*b*c +
Pi*d*e^2 + Pi*f*g^2)) - 1)*m)/(n^2 - o^2)/p], a] // Simplify]
</code></pre>
<p><a href="https://i.stack.imgur.com/7Cxzo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7Cxzo.png" alt="enter image description here"></a></p>
|
2,106,662 | <p>I'm trying to show that the barycentric coordinate of excenter of triangle ABC, where BC=a, AC=b, and AB=c, and excenter opposite vertex A is Ia, is Ia=(-a:b:c). I've gotten to the point where after a lot of ratio bashing I have that it's (ab/(b+c)):CP:BP, where P is the incenter, but I have no idea where to go from there. What would be the proof?</p>
| dxiv | 291,201 | <p>Hint: written in matrix form:</p>
<p>$$
\begin{pmatrix}
a_{n+1} \\ b_{n+1}
\end{pmatrix}
= \begin{pmatrix}
2 & -1 \\ 1 & 4
\end{pmatrix}
\begin{pmatrix}
a_{n} \\ b_{n}
\end{pmatrix}
$$</p>
<p>Let $A_n = \begin{pmatrix}
a_{n+1} \\ b_{n+1}
\end{pmatrix}
\,$, $A = \begin{pmatrix}
2 & -1 \\ 1 & 4
\end{pmatrix}\,$ then $A_n = A^n \, A_0\,$ so the problem reduces to calculating $A^n$.</p>
<p>The <a href="https://en.wikipedia.org/wiki/Jordan_normal_form" rel="nofollow noreferrer">Jordan normal form</a> $A=P\,J\,P^{-1}$ where $P$ is invertible and $J$ is upper triangular gives:</p>
<p>$$
P = \begin{pmatrix}
-1 & 1 \\ 1 & 0
\end{pmatrix}\,, \quad J = \begin{pmatrix}
3 & 1 \\ 0 & 3
\end{pmatrix}\,, \quad P^{-1} = \begin{pmatrix}
0 & 1 \\ 1 & 1
\end{pmatrix}
$$</p>
<p>Since $A^n = P\,J^n\,P^{-1}\,$ the last step is to calculate $J^n\,$. Let: </p>
<p>$$J = \begin{pmatrix}
3 & 1 \\ 0 & 3
\end{pmatrix}
= \begin{pmatrix}
3 & 0 \\ 0 & 3
\end{pmatrix} + \begin{pmatrix}
0 & 1 \\ 0 & 0
\end{pmatrix} = J_0 + J_1
$$</p>
<p>Note that $J_0, J_1$ commute, and $J_1^2 = O\,$ (the zero matrix) so:</p>
<p>$$
\require{cancel}
J^n = (J_0+J_1)^n = J_0^n + \binom{n}{1} J_0^{n-1} J_1 + \cancel{\binom{n}{2} J_0^{n-2} J_1^2 + \cdots} = J_0^n + n J_0^{n-1} J_1
$$</p>
<p>Finally, putting everything back together:</p>
<p>$$
A_n = P\,\left(J_0^n + n J_0^{n-1} J_1\right)\,P^{-1}\,A_0 \quad\quad \left( \text{where of course} \;\; J_0^n = \begin{pmatrix}
3^n & 0 \\ 0 & 3^n
\end{pmatrix}\,\right)
$$</p>
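<p>The closed form can be sanity-checked numerically (a sketch I added; exact integer arithmetic, no external libraries):</p>

```python
# Exact check that A^n = P (J0^n + n J0^(n-1) J1) P^(-1) for
# A = [[2, -1], [1, 4]], using plain integer 2x2 matrix arithmetic.

def mat_mul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(X, n):
    """Naive repeated multiplication; fine for small n."""
    R = [[1, 0], [0, 1]]
    for _ in range(n):
        R = mat_mul(R, X)
    return R

A = [[2, -1], [1, 4]]
P = [[-1, 1], [1, 0]]
P_inv = [[0, 1], [1, 1]]

def closed_form(n):
    # J0^n + n J0^(n-1) J1 = [[3^n, n 3^(n-1)], [0, 3^n]]
    Jn = [[3**n, n * 3**(n - 1)], [0, 3**n]]
    return mat_mul(mat_mul(P, Jn), P_inv)

for n in range(1, 8):
    assert mat_pow(A, n) == closed_form(n)
```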
|
853,308 | <p>I want to calculate the expected number of steps (transitions) needed for absorption. So the transition probability matrix $P$ has exactly one (let's say it is the first one) column with a $1$ and the rest of that column $0$ as entries.</p>
<p>$P = \begin{bmatrix} 1 & * & \cdots & * \\ 0 & \vdots & \ddots & \vdots \\ \vdots & & & \\ 0 & & & \end{bmatrix} \qquad s_0 = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \\0 \\ \vdots \\ \\ \end{bmatrix}$</p>
<p>How can I now find the expected (<i>mean</i>) number of steps needed for the absorption for a given initial state $s_0$?</p>
<p>EDIT: An explicit example here:</p>
<p>$P = \begin{bmatrix} 1 & 0.1 & 0.8 \\ 0 & 0.7 & 0.2 \\ 0 & 0.2 & 0 \end{bmatrix}
\qquad s_0 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \implies s_1 = \begin{bmatrix} 0.8 \\ 0.2 \\ 0 \end{bmatrix} \implies s_2 = \begin{bmatrix} 0.82 \\ 0.14 \\ 0.04 \end{bmatrix} \ldots $</p>
| Asaf Karagila | 622 | <p>Functions are only intuitive if you think about $f(x)=x^2+1$ or $f(x,y)=\ln x+e^y$ or so on. But how do you describe in an intuitive manner <strong>every</strong> function from $\Bbb R$ to $\Bbb R$? There are more than you can possibly imagine. How would you describe intuitively a function between two sets which you can't describe? There are sets which are neither intuitive, nor obvious. And one would expect that some functions would have these sets as domains.</p>
<p>Intuition can be misleading, and intuitive objects without a formal definition may cause mistakes. Look at the history of the definition of a function. It was believed that all functions are piecewise continuous, but that wasn't true; and that all continuous functions are differentiable almost everywhere, but that's not true either; and as time progressed we learned that in fact the "intuitive" part of the functions make up but a minute and negligible part of all things which are functions. (The reason is that intuition changes between people, and from time to time; and there is a slippery slope here: if the existence of this object is intuitively clear, then that object must exists, and so on, until you get somewhere that you have no intuition about.)</p>
<p>So instead, we have a formal definition of a function. And then there are less mistakes.</p>
<p>Of course one doesn't have to take a set theoretic definition of a function. This just one way to model the notion of a function using sets. One can use other means to do it. Using sets however, one can model almost all mathematical things, and the fact that one can model functions using sets is important for that.</p>
|
3,460 | <p>I asked the question "<a href="https://mathoverflow.net/questions/284824/averaging-2-omegan-over-a-region">Averaging $2^{\omega(n)}$ over a region</a>" because this is a necessary step in a research paper I am writing. The answer is detailed and does exactly what I need, and it would be convenient to directly cite the result. However, the author of the answer is anonymous... how would one deal with such a situation? I could of course very easily just reproduce the argument in my paper, but that would be academically dishonest.</p>
| Count Iblis | 52,954 | <p><a href="http://countiblis.blogspot.nl/2005/12/newcombs-paradox-and-conscious.html" rel="nofollow noreferrer">This blog posting</a> by me was cited in <a href="https://arxiv.org/abs/math/0608592" rel="nofollow noreferrer">this article</a> on page 12, footnote 2. </p>
|
2,108,558 | <p>We are started Linear programming problem question. Questions provided in examples are easy. And in exercise starting questions are easy to solve. As we have given conditions to form equations and solve them.</p>
<p>But this question little difficult.</p>
<p>Question - </p>
<p><strong>An aeroplane can carry a maximum of 200 passengers. A profit of Rs. 1000 is made on each executive class ticket and a profit of Rs. 600 is made on each economy class ticket. The airline reserves at least 20 seats for executive class. However at least 4 times as many passengers prefer to travel by economy class than by the executive class. Determine how many tickets of each type must be sold in order to maximize the profit for the airline. What is the maximum profit?</strong></p>
<p>I tried it as let x passenger travel in executive class then 200-x travel in economy class.</p>
<p>But after then I am confused what to do. Can anyone please provide solution for this with explanation.</p>
| DonAntonio | 31,254 | <p>A matrix over <em>any</em> field is non-diagonalizable iff its minimal polynomial is divisible by a linear factor to a power $\;\ge2\;$ , so $\;A\;$ is non diagonalizable iff</p>
<p>$$\min_A(x)=(x-a_1)^{n_1}g(x)\;,\;\;2\le n_1\in\Bbb N$$</p>
<p>and thus we can take $\;Q(x)=(x-a_1)g(x)\;$ and, of course, $\;Q(A)^2=0\;$ </p>
|
2,928,849 | <p>I have a problem understanding the following:</p>
<p><span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are independent vatiables with
<span class="math-container">$$P(X = i) = P(Y = i) = \frac{1}{2^i}. \quad i = 1, 2, \cdots$$</span>
Now the book says<span class="math-container">$ P(X< i)= \dfrac{1}{2^i}$</span>. I really do not get that… Would be nice if you could help maybe.</p>
<p>Best<br>
KingDingeling</p>
| Narendra | 583,106 | <p><span class="math-container">$$P(X < i)+P(X \geq i) = 1 $$</span>
<span class="math-container">$P(X \geq i) = 1-P(X < i)$</span></p>
<p><span class="math-container">$P(X<i) = P(X=1)+P(X=2)+\cdots+P(X=i-1)$</span></p>
<p><span class="math-container">$= \dfrac{1}{2^1} +\dfrac{1}{2^2}+\cdots+\dfrac{1}{2^{i-1}}$</span>. The series is a geometric progression. Calculating its value, you get</p>
<p><span class="math-container">$=\dfrac{1}{2}\cdot \dfrac{1-(\dfrac{1}{2})^{i-1}}{1-\dfrac{1}{2}}$</span></p>
<p><span class="math-container">$= 1-(\dfrac{1}{2})^{i-1}$</span>. Solving for <span class="math-container">$P(X \geq i)$</span>, you get</p>
<p><span class="math-container">$P(X \geq i) = 1-P(X < i) = 1-[1-(\dfrac{1}{2})^{i-1}] = (\dfrac{1}{2})^{i-1} = \dfrac{1}{2^{i-1}}$</span></p>
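<p>A quick numerical confirmation of the closed form (my addition, not part of the original answer):</p>

```python
from fractions import Fraction

# P(X >= i) = sum_{k >= i} (1/2)^k should equal 1/2^(i-1); compare a
# far-truncated tail sum (exact rational arithmetic) with the closed form.

def tail_prob(i, terms=200):
    return sum(Fraction(1, 2**k) for k in range(i, i + terms))

for i in range(1, 10):
    closed = Fraction(1, 2**(i - 1))
    # the truncation error is 1/2^(i + 199), far below the tolerance
    assert abs(tail_prob(i) - closed) < Fraction(1, 2**150)
```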
|
1,303,274 | <p>Define a sequence {$\ x_n$} recursively by</p>
<p>$$ x_{n+1} =
\sqrt{2 x_n -1}, \quad x_0=a, \quad\text{where } a>1
$$
Prove that {$\ x_n$} is strictly decreasing. I'm not sure where to start.</p>
| Paolo Leonetti | 45,736 | <p>If it converges to $\ell$ then $|s_n-\ell|<\varepsilon$, once $\varepsilon>0$ is fixed, and $n$ is sufficiently large. Here $s_n$ is exactly the $n$-th partial sum.</p>
<p>But if you set, for example, $\ell=1/10$, then the above inequality cannot be satisfied even for $n$ enough large.</p>
|
3,044,318 | <p><span class="math-container">$$\frac{e^{z^2}}{z^{2n+1}}$$</span>
Am I right that limit as z approaches infinity does not exist? So its residue at infinity is equal to <span class="math-container">$c_{-1}$</span> of Laurent series. How am I supposed to get Laurent series of this function? Where is it centered? What range?</p>
<p>So <span class="math-container">$$e^{z^2}z^{-2n-1}$$</span> should somehow get to <span class="math-container">$$\sum_{k=-\infty}^{+\infty}{A_k(z-z_0)^k}$$</span></p>
| José Carlos Santos | 446,262 | <p>The residue at infinity of an analytic function <span class="math-container">$f$</span> is the residue at <span class="math-container">$0$</span> of <span class="math-container">$\frac{-1}{z^2}f\left(\frac1z\right)$</span>. In the case of the function that you mentioned, it's the residue at <span class="math-container">$0$</span> of<span class="math-container">$$\frac{-1}{z^2}\times z^{2n+1}e^{\frac1{z^2}},$$</span>which is easy to compute, since<span class="math-container">$$e^{\frac1{z^2}}=1+\frac1{z^2}+\frac1{2!z^4}+\cdots$$</span></p>
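<p>Reading off the coefficient of $z^{-1}$ in $-z^{2n-1}e^{1/z^2}$ gives $-1/n!$. As a numerical cross-check (my addition), the residue can be approximated by a contour integral over the unit circle:</p>

```python
import cmath
import math

# Residue at infinity of e^(z^2)/z^(2n+1) = residue at 0 of
# g(z) = -(1/z^2) z^(2n+1) e^(1/z^2) = -z^(2n-1) e^(1/z^2), i.e. -1/n!.
# Approximate Res_0 g = (1/(2 pi i)) * integral of g over |z| = 1 by a
# trapezoidal rule (spectrally accurate for periodic integrands).

def residue_at_zero(g, samples=4096):
    total = 0j
    for k in range(samples):
        z = cmath.exp(2j * math.pi * k / samples)
        total += g(z) * z    # the i dtheta factors cancel against 1/(2 pi i)
    return total / samples

for n in range(1, 5):
    g = lambda z, n=n: -z**(2 * n - 1) * cmath.exp(1 / z**2)
    assert abs(residue_at_zero(g) - (-1 / math.factorial(n))) < 1e-9
```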
|
3,875,643 | <p>I am studying the nonlinear ordinary differential equation</p>
<p><span class="math-container">$$\frac{d^2y}{dx^2}=\frac{1}{y}-\frac{x}{y^2}\frac{dy}{dx}$$</span></p>
<p>I have entered this equation into two different math software packages, and they produce different answers.</p>
<p>software 1:</p>
<p><span class="math-container">$$0=c_2-\ln(x)-\frac{1}{2}\ln\left(-\frac{c_1y}{x}-\frac{y^2}{x^2}+1\right)-\frac{c_1}{\sqrt{-c_1^2+4}}\tan^{-1}\left(\frac{c_1+\frac{2y}{x}}{\sqrt{-c_1^2+4}}\right)$$</span></p>
<p>software 2:</p>
<p><span class="math-container">$$0=-c_2-\ln(x)-\frac{c_1}{\sqrt{c_1^2+4}}\tanh^{-1}\left(\frac{c_1x+2y}{x\sqrt{c_1^2+4}}\right)-\frac{1}{2}\ln\left(\frac{c_1xy-x^2+y^2}{x^2}\right)$$</span></p>
<p>I have not attempted to verify the solution from software 1 yet, but have done some work on software 2.</p>
<p>I first used software 2 to try to solve for y, to substitute the expression for y directly into the ordinary differential equation. The result was the following:</p>
<p><a href="https://i.stack.imgur.com/EAc5w.png" rel="nofollow noreferrer">I believe that this output is ambiguous, since there are essentially two equations that are supposed to be equated to zero</a></p>
<p>I am not sure if it is possible to solve for y, and hence to check the validity of this solution using this method.</p>
<p>I then did some reading on the internet, and it was suggested to, in this case, take the second implicit derivative with respect to x, then simplify.</p>
<p>I tried to do this with math software 2, and the result was, after simplifying:</p>
<p><span class="math-container">$$\frac{d^2y}{dx^2}=\frac{c_1xy-x^2+y^2}{y^3}$$</span></p>
<p>I did some hand calculations, and it seems that software 2 simplifies the result before calculating the next derivative, even without using the simplify command.</p>
<p>Considering this, I used the software to take the first derivative implicitly, then wrote out the equation in full, put that equation into a different form than the software output, and calculated the second derivative implicitly by hand, treating derivatives as functions of x for operations such as the product rule.</p>
<p>The equation I calculated did not match the original differential equation.</p>
<p>Software 2 has a function called odetest, which is supposed to verify that a function is a solution to an ordinary differential equation. If you use odetest on this solution, the returned result is zero, implying that the function is a solution.</p>
<p>The problem is that odetest does not show steps. I contacted the company and asked to see the steps for this calculation, but they would not provide the steps.</p>
<p>Are there any other ways to verify implicit solutions to an ordinary differential equation?</p>
| user577215664 | 475,762 | <p><span class="math-container">$$\frac{d^2y}{dx^2}=\frac{1}{y}-\frac{x}{y^2}\frac{dy}{dx}$$</span>
You can check the solution just rewrite the DE as:
<span class="math-container">$$\frac{d^2y}{dx^2}=(x)'\frac{1}{y}+x \left (\frac{1}{y}\right )'$$</span>
Since <span class="math-container">$(fg)'=f'g+fg'$</span> we have:
<span class="math-container">$$y''= \left (\frac xy \right )'$$</span>
<span class="math-container">$$y'=\left (\frac xy \right) +C$$</span></p>
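<p>The rewrite rests on the product rule, $\left(\frac{x}{y}\right)'=\frac{1}{y}-\frac{x}{y^2}y'$. A finite-difference spot check of that step (my addition; $y(x)=x^2+1$ is just a convenient sample function):</p>

```python
# Check (x/y)' = 1/y - (x/y^2) y' numerically, with the arbitrary
# sample function y(x) = x^2 + 1 (so y'(x) = 2x).

def y(x):
    return x * x + 1

def dy(x):
    return 2 * x

def g(x):
    return x / y(x)   # the quantity being differentiated

x0, h = 1.3, 1e-6
numeric = (g(x0 + h) - g(x0 - h)) / (2 * h)        # central difference
analytic = 1 / y(x0) - x0 * dy(x0) / y(x0) ** 2    # the rewritten form

assert abs(numeric - analytic) < 1e-8
```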
|
962,573 | <p>I have just learned Fermat's little theorem.</p>
<p>That is,</p>
<blockquote>
<p>If $p$ is a prime and $\gcd(a,p)=1$, then $a^{p-1} \equiv 1 \mod p$</p>
</blockquote>
<p>Well, there's nothing more explanation on this theorem in my book.</p>
<p>And there are exercises of this kind</p>
<blockquote>
<p>If $\gcd(a,42)=1$, then show that $a^6\equiv 1 \mod 168$</p>
</blockquote>
<p>I don't have any idea how to approach this problem.</p>
<p>The only thing I know is that $168=3\cdot 7\cdot 8$.</p>
<p>Hence, I get $[a^2\equiv 1 \mod 3]$ and $[a^6\equiv 1 \mod 7]$ and $[a\equiv 1\mod 2]$.</p>
<p>That's it.</p>
<p>I don't know what to do next.</p>
<p>Please help.</p>
| davidlowryduda | 9,754 | <p>Fermat's Little Theorem gives that $a \equiv 1 \pmod 2$, $a^6 \equiv 1 \pmod 3$, and $a^6 \equiv 1 \pmod 7$ immediately.</p>
<p>If $a \equiv 1 \pmod 2$, then $a \equiv \pm 1, \pm 3 \pmod 8$. In either of these, it's immediate that $a^2 \equiv 1 \pmod 8$, and so $a^6 \equiv 1 \pmod 8$.</p>
<p>Now by the Chinese Remainder Theorem, since $a^6 \equiv 1$ in each of the three relatively prime moduli, $a^6 \equiv 1 \pmod{8\cdot 3\cdot 7}$. $\diamondsuit$</p>
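<p>Since residues mod $168$ cover all cases, the claim also yields to brute force (a check I added, not part of the argument):</p>

```python
from math import gcd

# Exhaustive check over one full period: for every a in 1..168 with
# gcd(a, 42) = 1, we have a^6 congruent to 1 (mod 168).  There are
# 168 * (1/2) * (2/3) * (6/7) = 48 such residues.
witnesses = [a for a in range(1, 169) if gcd(a, 42) == 1]
assert len(witnesses) == 48
assert all(pow(a, 6, 168) == 1 for a in witnesses)
```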
|
320,355 | <p>Show that $$\nabla\cdot (\nabla f\times \nabla h)=0,$$
where $f = f(x,y,z)$ and $h = h(x,y,z)$.</p>
<p>I have tried but I just keep getting a mess that I cannot simplify. I also need to show that </p>
<p>$$\nabla \cdot (\nabla f \times r) = 0$$</p>
<p>using the first result.</p>
<p>Thanks in advance for any help</p>
| Sangchul Lee | 9,340 | <p>Introducing the Levi-Civita symbol, we have</p>
<p>\begin{align*}
\nabla \cdot (\nabla f \times \nabla h)
&= \epsilon^{ijk} \frac{\partial}{\partial x^{i}} \left( \frac{\partial f}{\partial x^{j}} \frac{\partial h}{\partial x^{k}} \right) \\
&= \epsilon^{ijk} \left( \frac{\partial^2 f}{\partial x^{i} \partial x^{j}} \frac{\partial h}{\partial x^{k}} + \frac{\partial f}{\partial x^{j}} \frac{\partial^2 h}{\partial x^{k} \partial x^{i}} \right)
\end{align*}</p>
<p>But since $\epsilon^{ijk}$ is anti-symmetric and $\partial^2 / \partial x^{i}\partial x^{j}$ is symmetric,</p>
<p>$$ \epsilon^{ijk} \frac{\partial f^2}{\partial x^{i} \partial x^{j}} \frac{\partial h}{\partial x^{k}}
= -\epsilon^{jik} \frac{\partial f^2}{\partial x^{j} \partial x^{i}} \frac{\partial h}{\partial x^{k}}, $$ </p>
<p>this quantity vanishes as it is identical to its negative. Similar argument applies to the second term, yielding</p>
<p>$$ \nabla \cdot (\nabla f \times \nabla h) = 0. $$</p>
<p>Finally, note that $\mathrm{r} = \nabla (\frac{1}{2} r^2) $. Then the second identity follows.</p>
<p>I guess this solution is equivalent to Muphrid's answer, but I am not sure as I know nothing on the Hodge dual (which replaces Levi-Civita pseudotensor in context-free description).</p>
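<p>The vanishing of an antisymmetric-symmetric contraction can also be spot-checked numerically (a sketch I added; $S$ and $v$ are arbitrary):</p>

```python
import random

def eps(i, j, k):
    """Levi-Civita symbol on indices 0, 1, 2: the product below is
    +2, -2 or 0, so halving gives +1, -1 or 0."""
    return (i - j) * (j - k) * (k - i) // 2

random.seed(0)
S = [[0.0] * 3 for _ in range(3)]        # symmetric in its two indices
for i in range(3):
    for j in range(i, 3):
        S[i][j] = S[j][i] = random.random()
v = [random.random() for _ in range(3)]  # arbitrary vector

total = sum(eps(i, j, k) * S[i][j] * v[k]
            for i in range(3) for j in range(3) for k in range(3))
assert abs(total) < 1e-12   # antisymmetric x symmetric contracts to 0
```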
|
609,770 | <p>We have an empty container and $n$ cups of water and $m$ empty cups. Suppose we want to find out how many ways we can add the cups of water to the bucket and remove them with the empty cups. You can use each cup once but the cups are unique. </p>
<p>The question: In how many ways can you perform this operation.</p>
<p>Example: Let's take $n = 3$ and $m = 2$.</p>
<p>For the first step we can only add water to the bucket so we have 3 choices.
For the second step we can both add another cup or remove a cup of water.
So for the first 2 steps we have $3\times(5-1) = 12$ possibilities.
For the third step it gets more difficult because this step depends on the previous step.
There are two scenarios after the second step. 1: The bucket either contains 2 cups of water or 2: The bucket contains no water at all.</p>
<p>1) We can both add or subtract a cup of water
2) We have to add a cup of water</p>
<p>So after step 3 we have $3\times(2\cdot3 + 2\cdot2) = 30$ combinations.</p>
<p>etc.</p>
<p>I hope I stated this question clearly enough since this is my first post. This is not a homework assignment just personal curiosity. </p>
| Michael Hardy | 11,667 | <p>Setting $\varepsilon$ to something doesn't make sense. You need to take $\varepsilon$ to be given, and find a value of $\delta$ that's small enough.</p>
<p>Continuity should not say $\exists c\in(0,1]$ etc., where $c$ is in the role you put it in. Rather, continuity <b>at the point $c$</b> should be defined by what comes after that.</p>
<p>Uniform continuity says
$$
\forall\varepsilon>0\ \exists\delta>0\ \forall x\in(0,1]\ \forall y\in(0,1]\ \left(\text{if }|x-y|<\delta\text{ then }\left|\frac1x-\frac1y\right|<\varepsilon\right).
$$</p>
<p>Lack of uniform continuity is the negation of that:
$$
\text{Not }\forall\varepsilon>0\ \exists\delta>0\ \forall x\in(0,1]\ \forall y\in(0,1]\ \left(\text{if }|x-y|<\delta\text{ then }\left|\frac1x-\frac1y\right|<\varepsilon\right). \tag 1
$$</p>
<p>The way to negate $\forall\varepsilon>0\ \cdots\cdots$ to by a de-Morganesque law that says $\left(\text{not }\forall\varepsilon>0\ \cdots\cdots\right)$ is the same as $(\exists\varepsilon>0\ \text{not }\cdots\cdots)$, and similarly when "not" moves past $\forall$, then that transforms to $\exists$. So $(1)$ becomes
$$
\exists\varepsilon>0\text{ not }\exists\delta>0\ \forall x\in(0,1]\ \forall y\in(0,1]\ \left(\text{if }|x-y|<\delta\text{ then }\left|\frac1x-\frac1y\right|<\varepsilon\right) \tag 2
$$
and that becomes
$$
\exists\varepsilon>0\ \forall\delta>0\text{ not }\forall x\in(0,1]\ \forall y\in(0,1]\ \left(\text{if }|x-y|<\delta\text{ then }\left|\frac1x-\frac1y\right|<\varepsilon\right) \tag 3
$$
and that becomes
$$
\exists\varepsilon>0\ \forall\delta>0\ \exists x\in(0,1]\text{ not } \forall y\in(0,1]\ \left(\text{if }|x-y|<\delta\text{ then }\left|\frac1x-\frac1y\right|<\varepsilon\right) \tag 4
$$
and that becomes
$$
\exists\varepsilon>0\ \forall\delta>0\ \exists x\in(0,1]\ \exists y\in(0,1]\text{ not } \left(\text{if }|x-y|<\delta\text{ then }\left|\frac1x-\frac1y\right|<\varepsilon\right) \tag 5
$$
and that becomes
$$
\exists\varepsilon>0\ \forall\delta>0\ \exists x\in(0,1]\ \exists y\in(0,1] \left(|x-y|<\delta\text{ and not }\left|\frac1x-\frac1y\right|<\varepsilon\right) \tag 6
$$
and finally that becomes
$$
\exists\varepsilon>0\ \forall\delta>0\ \exists x\in(0,1]\ \exists y\in(0,1] \left(|x-y|<\delta\text{ and }\left|\frac1x-\frac1y\right|\ge\varepsilon\right). \tag 7
$$</p>
<p>To show that such a value of $\varepsilon$ exists, it is enough to show that $\varepsilon=1$ will serve. You need to find $x$ and $y$ closer to each other than $\delta$ but having reciprocals differing by more than $1$. It is enough to make both $x$ and $y$ smaller than $\delta$ and then exploit the fact that there's a vertical asymptote at $0$ to make $x$ and $y$ far apart, by pushing one of them closer to $0$.</p>
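<p>Concretely, with $\varepsilon=1$: given any $\delta>0$, the pair $x=\min(\delta,1)/2$, $y=x/2$ works, since $|x-y|=x/2<\delta$ while $|1/x-1/y|=1/x\ge2$. A quick check of this witness (my addition):</p>

```python
# f(x) = 1/x on (0, 1]; epsilon = 1 witnesses the failure of uniform
# continuity: for each delta, x and y below are closer than delta, yet
# their reciprocals differ by at least 1 (in fact by 1/x >= 2).

eps = 1.0
for delta in [1.0, 0.1, 1e-3, 1e-6, 1e-9]:
    x = min(delta, 1.0) / 2
    y = x / 2
    assert 0 < y < x <= 1
    assert abs(x - y) < delta
    assert abs(1 / x - 1 / y) >= eps
```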
|
1,142,631 | <p>A sequence of probability measures $\mu_n$ is said to be tight if for each $\epsilon$ there exists a finite interval $(a,b]$ such that $\mu_n((a,b])>1-\epsilon$ for all $n$.</p>
<p>With this information, prove that if $\sup_n\int f$ $d\mu_n<\infty$ for a nonnegative $f$ such that $f(x)\rightarrow\infty$ as $x\rightarrow\pm\infty$ then $\{\mu_n\}$ is tight.</p>
<p>This is the first I've worked with tight probability measure sequences so I'm not sure how to prove this. Any thoughts?</p>
| Davide Giraudo | 9,849 | <p>Note that for a fixed $t_0$, we have
$$\mu_n(\mathbf R\setminus [-t_0,t_0])\cdot\inf_{x:|x|\geqslant t_0 } f(x)\leqslant \int_\mathbf R f(x) \mathrm d\mu_n,$$
hence defining $M:=\sup_n\int_\mathbf R f(x) \mathrm d\mu_n$, it follows that
$$\mu_n(\mathbf R\setminus [-t_0,t_0])\cdot\inf_{x:|x|\geqslant t_0 } f(x)\leqslant M.$$
It remains to choose a good $t_0$ for a given $\varepsilon$.</p>
|
3,976,572 | <p>Suppose <span class="math-container">$f(x)= x^3+2x^2+3x+3$</span> and has roots <span class="math-container">$a , b ,c$</span>.
Then find the value of
<span class="math-container">$\left(\frac{a}{a+1}\right)^{3}+\left(\frac{b}{b+1}\right)^{3}+\left(\frac{c}{c+1}\right)^{3}$</span>.</p>
<p>My Approach :
I constructed a new polynomial <span class="math-container">$g(x) = f\left(\frac{x^{\frac{1}{3}}}{1-x^{\frac{1}{3}}}\right)$</span> and then used the Vieta's formula for the sum of roots taken one at a time to solve the sum.</p>
<p>But then I realised that I won't be able to do so as <span class="math-container">$g(x)$</span> is not a polynomial anymore.
Can anyone help me please.</p>
| See Hai | 646,705 | <p>We shall make use of the following well-known identity:
<span class="math-container">$$x^3+y^3+z^3-3xyz=(x+y+z)(x^2+y^2+z^2-xy-yz-xz).$$</span>
Now, using Vieta's Formula, <span class="math-container">$a+b+c=-2, ab+ac+bc=3$</span>, and <span class="math-container">$abc=-3$</span>.
Thus, by direct expansion, we have that
<span class="math-container">$(a+1)(b+1)(c+1)=-1.$</span></p>
<p><span class="math-container">\begin{align}
\dfrac{a}{a+1}+\dfrac{b}{b+1} + \dfrac{c}{c+1} &= 3- \left(\dfrac{1}{a+1} +
\dfrac{1}{b+1} + \dfrac{1}{c+1} \right) \\
& = 3- \dfrac{(b+1)(c+1) + (a+1)(c+1) + (b+1)(a+1)}{-1} \\
& = 3 + bc+b+c+1 + ac+ a + c+1 + ab+ a +b+1 \\
& = 5.
\end{align}</span></p>
<p><span class="math-container">$$\dfrac{a}{a+1} \cdot \dfrac{b}{b+1} \cdot \dfrac{c}{c+1} = \dfrac{abc}{-1}=3.$$</span></p>
<p><span class="math-container">\begin{align}
\dfrac{a}{a+1}\cdot \dfrac{b}{b+1} + \dfrac{a}{a+1}\cdot \dfrac{c}{c+1} + \dfrac{b}{b+1}\cdot \dfrac{c}{c+1} & = \dfrac{3(c+1)}{c} + \dfrac{3(b+1)}{b} + \dfrac{3(a+1)}{a} \\
& = 3 \left(3+ \dfrac{1}{c} + \dfrac{1}{b} + \dfrac{1}{a} \right) \\
& = 3 \left(3 + \dfrac{ab+bc+ca}{abc}\right) \\
& = 6.
\end{align}</span></p>
<p>Let <span class="math-container">$A=\dfrac{a}{a+1}, B=\dfrac{b}{b+1} , C=\dfrac{c}{c+1}$</span>. Then,
<span class="math-container">\begin{align}
A^3 + B^3 + C^3 & = (A+B+C)(A^2+B^2+C^2-AB-BC-CA)+3ABC \\
& = 5 \left((A+B+C)^2-2(AB+BC+CA)-AB-BC-CA \right) + 3ABC \\
& = 5 (25 -3(6))+3(3) \\
& = \mathbf{44}.
\end{align}</span></p>
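<p>The final step uses the power-sum identity $A^3+B^3+C^3=e_1^3-3e_1e_2+3e_3$ with $e_1=5$, $e_2=6$, $e_3=3$; this can be checked in isolation (my addition; the random samples are arbitrary):</p>

```python
import random

# Identity: A^3 + B^3 + C^3 = e1^3 - 3 e1 e2 + 3 e3, with
# e1 = A+B+C, e2 = AB+BC+CA, e3 = ABC.  Check on random triples,
# then plug in e1 = 5, e2 = 6, e3 = 3 from the computations above.

random.seed(1)
for _ in range(100):
    A, B, C = (random.uniform(-5, 5) for _ in range(3))
    e1, e2, e3 = A + B + C, A * B + B * C + C * A, A * B * C
    lhs = A**3 + B**3 + C**3
    rhs = e1**3 - 3 * e1 * e2 + 3 * e3
    assert abs(lhs - rhs) < 1e-8 * (1 + abs(lhs))

answer = 5**3 - 3 * 5 * 6 + 3 * 3
assert answer == 44
```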
|
55,232 | <p>I'm looking for a concise way to show this:
$$\sum_{n=1}^{\infty}\frac{n}{10^n} = \sum_{n=1}^{\infty}\left(\left(\sum_{m=0}^{\infty}{10^{-m}}\right){10^{-n}}\right)$$
With this goal in mind:
$$\sum_{n=1}^{\infty}\left(\left(\sum_{m=0}^{\infty}{10^{-m}}\right){10^{-n}}\right) =
\sum_{n=1}^{\infty}\left(\left(\frac{10}{9}\right){10^{-n}}\right) = \frac{10}{81}$$</p>
<p>So far I've been looking at it by replacing $n$ in the LHS with $(\sum_{m=1}^{n}1)$ like this:
$$\sum_{n=1}^{\infty}\left(\left(\sum_{m=1}^{n}{1}\right){10^{-n}}\right) = \sum_{n=1}^{\infty}\left(\left(\sum_{m=0}^{\infty}{10^{-m}}\right){10^{-n}}\right)$$</p>
<p>And here I hit a particularly uncreative brick wall. This equation is obvious to me in a common sense way - I could easily demonstrate it by writing out the RHS as a huge addition problem and showing that the LHS just has the digit columns added ahead of time - but I don't know what to do in between for a proof.</p>
| Qiaochu Yuan | 232 | <p>The RHS can be written $\displaystyle \sum_{n \ge 1, m \ge 0} \frac{1}{10^{n+m}}$. Collect all terms with the same value of $n+m$. To justify this rigorously you just need to know that the series converges absolutely, which should be clear. </p>
|
55,232 | <p>I'm looking for a concise way to show this:
$$\sum_{n=1}^{\infty}\frac{n}{10^n} = \sum_{n=1}^{\infty}\left(\left(\sum_{m=0}^{\infty}{10^{-m}}\right){10^{-n}}\right)$$
With this goal in mind:
$$\sum_{n=1}^{\infty}\left(\left(\sum_{m=0}^{\infty}{10^{-m}}\right){10^{-n}}\right) =
\sum_{n=1}^{\infty}\left(\left(\frac{10}{9}\right){10^{-n}}\right) = \frac{10}{81}$$</p>
<p>So far I've been looking at it by replacing $n$ in the LHS with $(\sum_{m=1}^{n}1)$ like this:
$$\sum_{n=1}^{\infty}\left(\left(\sum_{m=1}^{n}{1}\right){10^{-n}}\right) = \sum_{n=1}^{\infty}\left(\left(\sum_{m=0}^{\infty}{10^{-m}}\right){10^{-n}}\right)$$</p>
<p>And here I hit a particularly uncreative brick wall. This equation is obvious to me in a common sense way - I could easily demonstrate it by writing out the RHS as a huge addition problem and showing that the LHS just has the digit columns added ahead of time - but I don't know what to do in between for a proof.</p>
| Gerry Myerson | 8,269 | <p>Writing $x$ for $1/10$ we have $$\sum_{n=1}^{\infty}nx^n=\sum_{n=1}^{\infty}\sum_{m=1}^nx^n=\sum_{m=1}^{\infty}\sum_{n=m}^{\infty}x^n=\sum_{m=1}^{\infty}x^m\sum_{n=m}^{\infty}x^{n-m}=\sum_{m=1}^{\infty}x^m\sum_{n=0}^{\infty}x^n$$ and the last sum is your right-hand side (except that my $m$ is your $n$, and vice versa). As Qiaochu notes, absolute convergence is necessary for the interchange-of-summations step. </p>
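<p>Both routes land on $10/81$; a numerical check with exact rational arithmetic (my addition):</p>

```python
from fractions import Fraction

x = Fraction(1, 10)
N = 60                         # truncation; the tails are below 10^-55

single = sum(n * x**n for n in range(1, N))
double = sum(x**n * sum(x**m for m in range(0, N)) for n in range(1, N))
target = Fraction(10, 81)

assert abs(single - target) < Fraction(1, 10**40)
assert abs(double - target) < Fraction(1, 10**40)
```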
|
4,536,320 | <p>Let <span class="math-container">$R$</span> be a ring and <span class="math-container">$A \subseteq R$</span> be finite, say <span class="math-container">$A = \{a\}$</span>. The set <span class="math-container">$$RaR = \{ras\:\: : r,s\in R\}$$</span> Why is this not necessarily closed under addition?</p>
<p>Take <span class="math-container">$r_1as_1$</span> and <span class="math-container">$r_2as_2$</span>. Is it because the ring must be commutative in order to guarantee that we can add these two elements and still stay within <span class="math-container">$RaR$</span>?</p>
<p>Even though I see why commutativity will make this possible, I'm still a bit confused because the very definition of <span class="math-container">$RaR$</span> includes all possible combinations of elements of the form <span class="math-container">$ras$</span>.</p>
| N. F. Taussig | 173,070 | <p>Your formulation of the problem is not correct. Observe that if <span class="math-container">$m = 18$</span> and <span class="math-container">$a = 20$</span>, then <span class="math-container">$m + a = 38 > 20$</span>. Also, since we want <span class="math-container">$\frac{m + a}{2} \in \{1, 2, 3, \ldots, 20\}$</span>, we cannot permit <span class="math-container">$m = a = 0$</span>.</p>
<p>A <span class="math-container">$2$</span>-entries table is a table with, say, the possible values of <span class="math-container">$m$</span> on the horizontal axis and the possible values of <span class="math-container">$a$</span> on the vertical axis. You could then write <span class="math-container">$(a, m)$</span> as the entry in the <span class="math-container">$m$</span>th column and <span class="math-container">$a$</span>th row. The entries you seek are those in which <span class="math-container">$m + a$</span> is even and <span class="math-container">$m + a \neq 0$</span>.</p>
<p>However, it is not necessary to build such a table. You have correctly concluded that the average of the mathematics and algorithmics scores will be an integer if and only if the sum of the scores is even.</p>
<p>When is the sum of two integers even? The two addends must have the same parity, meaning they are both even or both odd.</p>
<p>In how many ordered pairs <span class="math-container">$(m, a)$</span> are both entries odd? In how many such pairs is <span class="math-container">$\frac{m + a}{2} \in \{1, 2, 3, \ldots, 20\}$</span>?</p>
<p>In how many ordered pairs <span class="math-container">$(m, a)$</span> are both entries even? In how many such pairs is <span class="math-container">$\frac{m + a}{2} \in \{1, 2, 3, \ldots, 20\}$</span>?</p>
<p>Can you proceed?</p>
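<p>To finish the count concretely (my addition, assuming, as the setup suggests, that each score ranges over $\{0,1,\dots,20\}$), brute force agrees with the parity count:</p>

```python
# Ordered pairs (m, a), scores in {0, ..., 20} (an assumed range), whose
# average (m + a)/2 is an integer in {1, ..., 20}.

brute = sum(1 for m in range(21) for a in range(21)
            if (m + a) % 2 == 0 and 1 <= (m + a) // 2 <= 20)

both_odd = 10 * 10        # m, a both in {1, 3, ..., 19}
both_even = 11 * 11 - 1   # m, a both in {0, 2, ..., 20}, minus (0, 0)
assert brute == both_odd + both_even == 220
```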
|
520,285 | <p>I've been going through these representation theory <a href="http://math.berkeley.edu/~serganov/math252/notes1.pdf" rel="nofollow">lecture notes</a>, and I've found the ''hungry knights'' problem very interesting.
Do you know any interesting problems similar to that one?</p>
<p>To define ''similar'': problems which have a recreational, puzzle-like taste and you can solve them using representation theory/group theory.
One example I know is <a href="http://qchu.wordpress.com/2009/06/13/gila-i-group-actions-and-equivalence-relations/" rel="nofollow">this series</a> of blog posts by Qiaochu Yuan applying group theory to few basic combinatorial questions about colorings.</p>
| Calvin Lin | 54,563 | <p>A similar one is a Russian Olympiad problem about 7 dwarfs sitting around a table drinking wine. Each of them have a wine cup in front of them. In turn, they split the wine in their glass into 6 equal portions and distribute it out. After a round of distribution, they found that they have the same amount of wine as at the start. How much wine did each dwarf have?</p>
|
587,198 | <p>I am having problems with this question, it would be wonderful if someone can help.</p>
<p>Given that $f(x)= x^2 + x - 3$</p>
<p>1) Find $f(x + h)$</p>
<p>2) Then express $f(x+h)-f(x)$ in its simplest form</p>
<p>3) Deduce $\lim\limits_{h->0}\dfrac{f(x+h)-f(x)}{h}$</p>
<p>Thanks for the help, i was stuck on the second part.</p>
| Tim Ratigan | 79,602 | <p>As Stefan Smith kindly pointed out, $a=b=0$ is a trivial solution in the integers. But what if $a,b\in \Bbb N$?</p>
<p>Assume $2a^2=b^2$. Then $2|b^2$, which implies (by unique prime factorization) that $2|b$. Therefore we can write $b^2=4k^2$ for some $k\in\Bbb N$. Then $2a^2=4k^2\Longrightarrow 2k^2=a^2$. This inductive step begins a process of <a href="http://en.wikipedia.org/wiki/Proof_by_infinite_descent" rel="nofollow">infinite descent</a> which ultimately proves that $a$ and $b$ cannot be natural numbers, and therefore $\sqrt 2=\frac{b}{a}$ cannot have rational solutions (this takes note of the fact that the canonical form for $\sqrt 2$ should be greater than $0$).</p>
|
191,673 | <p>If I input:</p>
<pre><code>data = RandomVariate[ProbabilityDistribution[x/8, {x, 0, 4}], 10];
{EstimatedDistribution[data, ProbabilityDistribution[x/8, {x, 0, θ}],
ParameterEstimator -> "MaximumLikelihood"], data}
</code></pre>
<p>Mathematica returns:</p>
<pre><code>{ProbabilityDistribution[\[FormalX]/
8, {\[FormalX], 0, 4.99291}], {3.8921, 2.93817, 2.07761, 1.12473,
3.96292, 1.20091, 2.86696, 1.52381, 2.43073, 3.13515}}
</code></pre>
<p>I think the mle for a sample from this distribution is the maximum of the sample. What is Mathematica computing here? In other words, why is Mathematica returning 4.99291?</p>
<p>From <a href="https://mathematica.stackexchange.com/questions/191673/what-is-the-maximum-likelihood-estimator-of-a-distribution-with-pdf-fx-x-8#comment499117_191674">a comment</a>:</p>
<blockquote>
<p>I just now restarted Mathematica and I am getting the same bad results. I am using version 9.</p>
</blockquote>
| Michael E2 | 4,999 | <p>The log-likelihood of the OP's (non-normalized) distribution is piecewise constant, so any value for <code>θ</code> that occurs in a certain interval is "correct," at least from the point of view of maximizing the expression returned by <code>LogLikelihood[]</code>. As <a href="https://mathematica.stackexchange.com/questions/191673/what-is-the-maximum-likelihood-estimator-of-a-distribution-with-pdf-fx-x-8#comment499132_191673">@wolfies points out</a>, the distribution is not valid, so while <em>Mathematica</em> is willing to compute with it, it's unclear what the meaning of the result is from a probability/statistics point of view.</p>
<p>While V11.3 and V9 return different answers, both are equally correct. One might quibble that in V11.3, <code>EstimatedDistribution</code> has been reprogrammed to return the boundary value <code>3.96292</code> in the piecewise log-likelihood function, which is technically not in the open interval over which the function attains its maximum. In V9, my guess is that the initial estimate for <code>θ</code> is greater than <code>3.96292</code>, where the Jacobian is zero and the search stops.</p>
<pre><code>data = {3.8921`, 2.93817`, 2.07761`, 1.12473`, 3.96292`, 1.20091`,
2.86696`, 1.52381`, 2.43073`, 3.13515`};
dist = ProbabilityDistribution[\[FormalX]/8, {\[FormalX], 0, θ}];
LogLikelihood[dist, data]
</code></pre>
<p><img src="https://i.stack.imgur.com/Kbs6M.png" alt="Mathematica graphics"></p>
<p>Remark. Perhaps one should be using the normalized distribution:</p>
<pre><code>ProbabilityDistribution[\[FormalX]/8, {\[FormalX], 0, θ}, Method -> "Normalize"]
</code></pre>
<p>This does depend on a parameter <code>θ</code>, which could be estimated.</p>
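<p>For comparison, a minimal sketch in plain Python (not Wolfram code) of the log-likelihood for the <em>normalized</em> density $2x/\theta^2$ on $(0,\theta)$, using the sample above; since the likelihood is zero for $\theta &lt; \max(\text{data})$ and decreasing beyond it, the MLE is the sample maximum:</p>

```python
import math

# Sample from the question; the normalized density on (0, theta)
# proportional to x is 2x/theta^2.
data = [3.8921, 2.93817, 2.07761, 1.12473, 3.96292,
        1.20091, 2.86696, 1.52381, 2.43073, 3.13515]

def log_likelihood(theta):
    # zero likelihood if any observation falls outside (0, theta)
    if theta < max(data):
        return -math.inf
    return sum(math.log(2 * x) - 2 * math.log(theta) for x in data)

# scan a grid starting at max(data); the maximizer is the first grid point
thetas = [max(data) + 0.01 * k for k in range(200)]
mle = max(thetas, key=log_likelihood)
print(mle)  # 3.96292, i.e. max(data)
```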
|
2,870,956 | <p>Assume that f is a non-negative real function, and let $a>0$ be a real number.</p>
<p>Define $I_a(f)$ to be</p>
<p>$I_a(f)$=$\frac{1}{a}\int_{0}^{a} f(x) dx$ </p>
<p>We now assume that $\lim_{x\rightarrow \infty} f(x)=A$ exists.</p>
<p>Now I want to prove whether $\lim_{a\rightarrow \infty} I_a(f)=A$ is true or not. I have concluded that this is not always true.</p>
<p>My approach has been to construct the following counterexample $f(x)=A+\frac
{1}{x}$, it is easily seen that $\lim_{x\rightarrow \infty} f(x)=A$.</p>
<p>By integrating the chosen function I get that </p>
<p>$\lim_{a\rightarrow \infty}I_a(f)=A+\lim_{a\rightarrow \infty}\frac{1}{a}\int_{0}^a \frac{1}{x}dx = A+\lim_{a\rightarrow \infty} [\frac{\log(a)}{a}-(-\infty)]\rightarrow \infty$. </p>
<p>Therefore I concluded that $\lim_{x\rightarrow \infty} f(x)=A$ does not in general imply that $\lim_{a\rightarrow \infty}I_a(f)=A$. </p>
<p>I am of course unsure whether my calculations are correct, because one could also write $\log(0)$ as the limit of $\log(\epsilon)$ as $\epsilon\rightarrow 0^+$, and then, after dividing by $a$ and letting $a\rightarrow \infty$, conclude that the whole expression with the integral of $\frac{1}{x}$ goes to zero. In that case, it might be true that $\lim_{x\rightarrow \infty} f(x)=A$ implies that $\lim_{a\rightarrow \infty}I_a(f)=A$. </p>
<p>Does anybody have an idea to this?</p>
| mfl | 148,513 | <p>Assume $f$ is continuous on $[0,\infty)$ and $\lim_{x\to \infty} f(x)=L.$ Then it is $\lim_a I_a(f)=L.$ Indeed, we have</p>
<p>$$\lim_{a\to \infty} \dfrac{\int_0^a f(t)dt}{a}=\lim_{a\to \infty} \dfrac{f(a)}{1}=\lim_{a\to \infty}f(a) =L,$$ where we have used the Fundamental theorem of calculus and L'Hôpital's rule.</p>
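<p>A numeric illustration (an added sketch, using $f(x)=\arctan(x)$ with $L=\pi/2$ as an example):</p>

```python
import math

def average(f, a, n=100_000):
    # midpoint rule for (1/a) * integral_0^a f(x) dx
    h = a / n
    return sum(f((k + 0.5) * h) for k in range(n)) * h / a

# f(x) = arctan(x) tends to pi/2, and so do its running averages
vals = [average(math.atan, a) for a in (10, 100, 1000)]
print(vals)  # increasing toward pi/2 ~ 1.5708
```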
|
2,870,956 | <p>Assume that f is a non-negative real function, and let $a>0$ be a real number.</p>
<p>Define $I_a(f)$ to be</p>
<p>$I_a(f)$=$\frac{1}{a}\int_{0}^{a} f(x) dx$ </p>
<p>We now assume that $\lim_{x\rightarrow \infty} f(x)=A$ exists.</p>
<p>Now I want to prove whether $\lim_{a\rightarrow \infty} I_a(f)=A$ is true or not. I have concluded that this is not always true.</p>
<p>My approach has been to construct the following counterexample $f(x)=A+\frac
{1}{x}$, it is easily seen that $\lim_{x\rightarrow \infty} f(x)=A$.</p>
<p>By integrating the chosen function I get that </p>
<p>$\lim_{a\rightarrow \infty}I_a(f)=A+\lim_{a\rightarrow \infty}\frac{1}{a}\int_{0}^a \frac{1}{x}dx = A+\lim_{a\rightarrow \infty} [\frac{\log(a)}{a}-(-\infty)]\rightarrow \infty$. </p>
<p>Therefore I concluded that $\lim_{x\rightarrow \infty} f(x)=A$ does not in general imply that $\lim_{a\rightarrow \infty}I_a(f)=A$. </p>
<p>I am of course unsure whether my calculations are correct, because one could also write $\log(0)$ as the limit of $\log(\epsilon)$ as $\epsilon\rightarrow 0^+$, and then, after dividing by $a$ and letting $a\rightarrow \infty$, conclude that the whole expression with the integral of $\frac{1}{x}$ goes to zero. In that case, it might be true that $\lim_{x\rightarrow \infty} f(x)=A$ implies that $\lim_{a\rightarrow \infty}I_a(f)=A$. </p>
<p>Does anybody have an idea to this?</p>
| James Yang | 481,378 | <p>Your counter-example works because $f$ is not integrable on $(0,a)$ for some $a$. If $f$ is integrable on $(0,a)$ for every $a$, the statement is true.</p>
<p>WLOG, assume $\lim\limits_{x\to \infty} f(x) = 0$. Otherwise, since $f$ is integrable, we may show that $\frac{1}{a}\int_0^a (f(x)-A) dx \to 0$. For every $\epsilon >0$, there exists $M > 0$ such that $\forall x\geq M$, $|f(x)| < \epsilon$. Then for $a \geq M$,</p>
<p>$$\left|\frac{1}{a} \int_0^a f(x) dx \right|
= \left| \frac{1}{a}\int_0^M f(x) dx + \frac{1}{a}\int_M^a f(x) dx \right|
< \frac{1}{a}\left| \int_0^M f(x)dx \right| + \frac{a-M}{a}\epsilon
$$</p>
<p>Take $a\to \infty$ first then $\epsilon \to 0$.</p>
|
2,665,723 | <p>Find value of $k$ for which the equation $\sqrt{3z+3}-\sqrt{3z-9}=\sqrt{2z+k}$ has no solution.</p>
<p>My attempted solution: $\sqrt{3z+3}-\sqrt{3z-9}=\sqrt{2z+k}......(1)$</p>
<p>$\displaystyle \sqrt{3z+3}+\sqrt{3z-9}=\frac{12}{\sqrt{2z+k}}........(2)$</p>
<p>$\displaystyle 2\sqrt{3z+3}=\frac{12}{\sqrt{2z+k}}+\sqrt{2z+k}=\frac{12+2z+k}{\sqrt{2z+k}}$</p>
<p>$\displaystyle 2\sqrt{6z^2+6z+3kz+3k}=12+2z+k$</p>
<p>$\displaystyle 4(6z^2+6z+3kz+3k)=144+4z^2+k^2+48z+4kz+24k$</p>
<p>Please help me continue from this point.</p>
| Angina Seng | 436,618 | <p>What you have there is <strong>not</strong> $\sum_{n=0}^{99}n(n+1)$ but rather
$\sum_{n=1}^{50}2n(2n-1)$. Using standard formulae,
$$\sum_{n=1}^{50}2n(2n-1)
=4\sum_{n=1}^{50}n^2-2\sum_{n=1}^{50}n
=\frac23(50\times 51\times 101)-50\times51=169150.
$$</p>
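<p>A quick check of the arithmetic:</p>

```python
# sum_{n=1}^{50} 2n(2n-1) versus the closed form (2/3)*50*51*101 - 50*51
s = sum(2 * n * (2 * n - 1) for n in range(1, 51))
closed_form = 2 * 50 * 51 * 101 // 3 - 50 * 51
print(s, closed_form)  # 169150 169150
```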
|
125,503 | <p>Completeness Properties of $\mathbb{R}$: Least Upper Bound Property, Monotone Convergence Theorem, Nested Intervals Theorem, Bolzano Weierstrass Theorem, Cauchy Criterion.</p>
<p>Archimedean Property: $\forall x\in \mathbb{R}\forall \epsilon >0\exists n\in \mathbb{N}:n\epsilon >x$</p>
<p>I can show that LUB implies the Archimedean Property but what about the rest of these properties? Please provide proofs (even hints) or counterexamples. </p>
<p>EDIT: It was shown by Isaac Solomon that the Bolzano-Weierstrass implies the Archimedean Property.</p>
| Elchanan Solomon | 647 | <p>For Bolzano-Weierstrass $\to$ Archimedean Property: Take $\epsilon > 0$, and consider the sequence $\{n\epsilon: n \in \mathbb{N}\}$. If the Archimedean Property fails, then this sequence is bounded, so that it has a convergent subsequence. However, it is easy to see that this sequence does not have a convergent subsequence.</p>
<hr>
<p>For Monotone Convergence Theorem $\to$ Archimedean Property: Taking the same sequence $\{n\epsilon: n \in \mathbb{N}\}$, it is easy to see that it is monotone. If the Archimedean Property fails, then this sequence is bounded, so by the MCT, it would have a finite limit, which is not the case.</p>
|
131,741 | <p>Take the following example <code>Dataset</code>:</p>
<pre><code>data = Table[Association["a" -> i, "b" -> i^2, "c" -> i^3], {i, 4}] // Dataset
</code></pre>
<p><img src="https://i.stack.imgur.com/PZSgO.png" alt="Mathematica graphics"></p>
<p>Picking out two of the three columns is done this way:</p>
<pre><code>data[All, {"a", "b"}]
</code></pre>
<p><img src="https://i.stack.imgur.com/LG3jU.png" alt="Mathematica graphics"></p>
<p>Now instead of just returning the "a" and "b" columns I want to map the functions f and h to their elements, respectively, and still drop "c". Based on the previous result and the documentation of <code>Dataset</code> I hoped the following would do that:</p>
<pre><code>data[All, {"a" -> f, "b" -> h}]
</code></pre>
<p><img src="https://i.stack.imgur.com/Wvwml.png" alt="Mathematica graphics"></p>
<p>As you can see, the behavior is not like the one before. Although the functions are mapped as desired, the unmentioned column "c" still remains in the data.</p>
<p>Do I really need one of the following (clumsy looking) alternatives</p>
<pre><code>data[All, {"a" -> f, "b" -> h}][All, {"a", "b"}]
data[Query[All, {"a", "b"}], {"a" -> f, "b" -> h}]
Query[All, {"a", "b"}]@data[All, {"a" -> f, "b" -> h}]
</code></pre>
<p>to get: </p>
<p><img src="https://i.stack.imgur.com/q62un.png" alt="Mathematica graphics"></p>
<p>or is there a more elegant solution?</p>
| yode | 21,532 | <p>Since nobody mentioned it, I will give a version with <code>KeyTake</code></p>
<pre><code>data[All /* KeyTake[{"a", "b"}], {"a" -> f, "b" -> h}]
</code></pre>
<p><img src="https://i.stack.imgur.com/WtVGN.png" alt=""> </p>
<p>Of course we can also do</p>
<pre><code>data[All /* Map[Take[#, 2] &], {"a" -> f, "b" -> h}]
</code></pre>
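<p>For readers without Mathematica, the same "apply $f$ to column a, $h$ to column b, and drop c" query can be imitated in plain Python over a list of dicts (an analogue sketch, not the Wolfram API; <code>f</code> and <code>h</code> below are arbitrary stand-ins):</p>

```python
# Rows of the example Dataset as plain dicts
data = [{"a": i, "b": i ** 2, "c": i ** 3} for i in range(1, 5)]

def query(rows, column_funcs):
    # keep only the listed columns, applying the paired function to each
    return [{k: fn(row[k]) for k, fn in column_funcs.items()} for row in rows]

def f(x):          # stand-in for the f of the question
    return x + 100

def h(x):          # stand-in for the h of the question
    return -x

result = query(data, {"a": f, "b": h})
print(result)  # column "c" is gone; f and h applied to "a" and "b"
```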
|
2,963,324 | <p>I want to prove that <span class="math-container">$a \equiv b\;(\text{mod} \;n)$</span> is an equivalence relation. Would it be OK to write the following?</p>
<p>Reflexive as, for all <span class="math-container">$a$</span>, <span class="math-container">$a \equiv a\;(\text{mod} \;n)$</span></p>
<p>Symmetric as, <span class="math-container">$a \equiv b\;(\text{mod} \;n)$</span> which implies <span class="math-container">$b \equiv a\;(\text{mod} \;n)$</span></p>
<p>Transitive as if <span class="math-container">$a \equiv b\;(\text{mod} \;n)$</span> and <span class="math-container">$b \equiv c\;(\text{mod} \;n)$</span> this implies <span class="math-container">$a \equiv c\;(\text{mod} \;n)$</span></p>
<p>I don't know if this has actually proved equivalence. Also, the set on which this relation acts was not specified, so for reflexivity is it OK to say "for all $a$"? Thanks.</p>
| José Carlos Santos | 446,262 | <p>No, it is not ok. You just <em>claimed</em> that all three assertions are true without a trace of evidence. Therefore, you proved nothing.</p>
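<p>An exhaustive check over a small range (an added sketch; it confirms the three properties hold there, but is of course not a proof):</p>

```python
def cong(a, b, n):
    # definition: a == b (mod n) iff n divides a - b
    return (a - b) % n == 0

n = 7
R = range(-20, 21)

reflexive = all(cong(a, a, n) for a in R)
symmetric = all(cong(b, a, n) for a in R for b in R if cong(a, b, n))
transitive = all(cong(a, c, n)
                 for a in R for b in R for c in R
                 if cong(a, b, n) and cong(b, c, n))
print(reflexive, symmetric, transitive)  # True True True
```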
|
2,498,123 | <blockquote>
<p>Given a $2 \times 2$ matrix $B$ that satisfies $B^2=3B-2I$, find the eigenvalues of $B$.</p>
</blockquote>
<p>My attempt: </p>
<p>Let $v$ be an eigenvector for $B$, and $\lambda$ its corresponding eigenvalue. Also, let $T$ be the linear transformation (not that this is exactly necessary for the question, but I added it in for my understanding). Therefore, </p>
<p>$$T(v) = Bv = \lambda v$$</p>
<p>Now I'm unsure how to incorporate this information into the quadratic equation given above, since my matrix / vector arithmetic isn't extremely solid. Thanks!</p>
| amsmath | 487,169 | <p>Define $p(x) = (x-1)(x-2)$. You have $p(B) = 0$. Hence, the minimal polynomial of $B$ divides $p$. So, your eigenvalues are in $\{1,2\}$.</p>
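<p>A concrete check (an added sketch): any $2\times 2$ matrix with trace $3$ and determinant $2$, such as the one below, satisfies $B^2=3B-2I$ by Cayley–Hamilton, and its eigenvalues are $1$ and $2$:</p>

```python
import math

# B has trace 3 and determinant 2, so its characteristic polynomial
# is t^2 - 3t + 2 = (t-1)(t-2).
B = [[3, 1], [-2, 0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B2 = matmul(B, B)
rhs = [[3 * B[i][j] - (2 if i == j else 0) for j in range(2)]
       for i in range(2)]                      # 3B - 2I

tr = B[0][0] + B[1][1]
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
disc = math.sqrt(tr * tr - 4 * det)
eigenvalues = sorted(((tr - disc) / 2, (tr + disc) / 2))

print(B2 == rhs, eigenvalues)  # True [1.0, 2.0]
```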
|
3,118 | <p>Can anyone help me out here? Can't seem to find the right rules of divisibility to show this:</p>
<p>If $a \mid m$ and $(a + 1) \mid m$, then $a(a + 1) \mid m$.</p>
| Bill Dubuque | 242 | <blockquote>
<p>If <span class="math-container">$\rm\,\ a\mid m,\ a\!+\!1\mid m\ \,$</span> then it follows that <span class="math-container">$\rm\ \, \color{#90f}{a(a\!+\!1)\mid m}$</span></p>
</blockquote>
<p><span class="math-container">${\bf Proof}\rm\quad\displaystyle \frac{m}{a},\; \frac{m}{a+1}\in\mathbb{Z} \ \,\Rightarrow\,\ \frac{m}{a} - \frac{m}{a\!+\!1} \; = \;\color{#90f}{\frac{m}{a(a\!+\!1)} \in \mathbb Z}.\quad$</span> <strong>QED</strong></p>
<p><span class="math-container">${\bf Remark}\rm\ \, \text{More generally, if }\, \color{#c00}{n = bc \:\!-\:\! ad} \;$</span> is a linear combination of <span class="math-container">$\rm\, a, b\, $</span> then</p>
<p><span class="math-container">$\rm\text{we have}\quad\,\ \displaystyle \frac{m}{a},\; \frac{m}{b}\in\mathbb{Z} \;\;\Rightarrow\;\; \frac{m}{a}\frac{\color{#c00}{bc}}{b} - \frac{\color{#c00}{ad}}{a}\frac{m}{b} = \frac{m\:\!\color{#c00}n}{a\:\!b} \in \mathbb Z$</span></p>
<p>By Bezout, <span class="math-container">$\rm\, \color{#c00}{n = \gcd(a,b)}\, $</span> is the <em>least</em> positive linear combination, so the above yields</p>
<p><span class="math-container">$\rm\qquad\qquad a,b\mid m \;\Rightarrow\; ab\mid m\;gcd(a,b) \;\Rightarrow\; \mathfrak{m}_{a,b}\!\mid m\ \ $</span> for <span class="math-container">$\ \ \rm \mathfrak{m}_{a,b} := \dfrac{ab}{\gcd(a,b)}$</span></p>
<p>i.e. <span class="math-container">$ $</span> every common multiple <span class="math-container">$\rm\, m\,$</span> of <span class="math-container">$\,\rm a,b\,$</span> is a multiple of <span class="math-container">$\;\rm \mathfrak{m}_{a,b},\,$</span> so <span class="math-container">$\rm\, \color{#0a0}{\mathfrak{m}_{a,b}\le m}.\,$</span> But <span class="math-container">$\rm\,\mathfrak{m}_{a,b}\,$</span> is also a <em>common</em> multiple, i.e. <span class="math-container">$\rm\ a,b\mid \mathfrak{m}_{a,b}\,$</span> viz. <span class="math-container">$\displaystyle \,\rm \frac{\mathfrak{m}_{a,b}}{a} = \;\frac{a}{a}\frac{b}{gcd(a,b)}\in\mathbb Z\,$</span> <span class="math-container">$\,\Rightarrow\,$</span> <span class="math-container">$\rm\, a\mid \frak{m}_{a,b},\,$</span> and <span class="math-container">$\,\rm b\mid \mathfrak{m}_{a,b}\,$</span> by symmetry. Thus <span class="math-container">$\,\rm \mathfrak{m}_{a,b} = lcm(a,b)\,$</span> is the <span class="math-container">$\rm\color{#0a0}{least}$</span> common multiple of <span class="math-container">$\rm\,a,b.\,$</span> In fact we have proved the stronger statement that it is a common multiple that is <em>divisibility-least</em>, i.e. it divides every common multiple. This is the general definition of LCM in an arbitrary domain (ring without zero-divisors), i.e. we have the following <strong>universal</strong> dual definitions of LCM and GCD, which essentially says that LCM & GCD are <span class="math-container">$\,\sup\,$</span> & <span class="math-container">$\,\inf\,$</span> in the poset induced by divisibility order <span class="math-container">$\,a\preceq b\!\iff\! a\mid b$</span>.</p>
<p><strong>Definition</strong> of <strong>LCM</strong> <span class="math-container">$\ \ $</span> If <span class="math-container">$\quad\rm a,b\mid c\,\iff\; d\mid c \ \ \,$</span> then <span class="math-container">$\rm\ d\approx lcm(a,b)$</span><br />
compare: <span class="math-container">$\, $</span> <strong>Def</strong> of <span class="math-container">$\rm\,\cap\ \ \,$</span> If <span class="math-container">$\rm\ \ \ a,b\supset c\iff d\supset c\,\ $</span> then <span class="math-container">$\,\ \rm d = a\cap b$</span></p>
<p><strong>Definition</strong> of <strong>GCD</strong> <span class="math-container">$\ \ $</span> If <span class="math-container">$\quad\rm c\mid a,b \;\iff\; c\mid d \,\ $</span> then <span class="math-container">$\,\ \rm d \approx \gcd(a,b)$</span><br />
compare: <span class="math-container">$\, $</span> <strong>Def</strong> of <span class="math-container">$\rm\,\cup\ \ \,$</span> If <span class="math-container">$\rm\ \ \ c\supset a,b\iff c\supset d\,\ $</span> then <span class="math-container">$\,\ \rm d = a\cup b$</span></p>
<p>Note <span class="math-container">$\;\rm a,b\mid [a,b] \;$</span> follows by putting <span class="math-container">$\;\rm c = [a,b] \;$</span> in the definition. <span class="math-container">$ $</span> Dually <span class="math-container">$\;\rm (a,b)\mid a,b$</span>.</p>
<p>Above <span class="math-container">$\rm\,d\approx e\,$</span> means <span class="math-container">$\rm\,d,e\,$</span> are associate, i.e. <span class="math-container">$\rm\,d\mid e\mid d\,$</span> (equivalently <span class="math-container">$\rm\,d = u\!\: e\,$</span> for <span class="math-container">$\,\rm u\,$</span> a unit = invertible). In general domains gcds are defined only up to associates (unit multiples), but we can often <em>normalize</em> to rid such unit factors, e.g. normalizing the gcd to be <span class="math-container">$\ge 0$</span> in <span class="math-container">$\Bbb Z,\,$</span> and making it monic for polynomials over a field, e.g. see <a href="https://math.stackexchange.com/a/2939562/242">here</a> and <a href="https://math.stackexchange.com/a/3009888/242">here</a>.</p>
<p>Such universal definitions enable slick unified proofs of both arrow directions, e.g.</p>
<p><strong>Theorem</strong> <span class="math-container">$\rm\;\; (a,b) = ab/[a,b] \;\;$</span> if <span class="math-container">$\;\rm\ [a,b] \;$</span> exists.</p>
<p><strong>Proof</strong>: <span class="math-container">$\rm\quad d\mid a,b \iff a,b\mid ab/d \iff [a,b]\mid ab/d \iff\ d\mid ab/[a,b] \quad$</span> <strong>QED</strong></p>
<p>The conciseness of the proof arises by exploiting to the hilt the <span class="math-container">$\:\!(\!\!\iff\!\!)\:\!$</span> definition of LCM, GCD. Implicit in the above proof is an innate cofactor duality. Brought to the fore, it clarifies LCM, GCD duality (analogous to DeMorgan's Laws), e.g. see <a href="https://math.stackexchange.com/a/717775/242">here</a> and <a href="https://math.stackexchange.com/a/2127538/242">here.</a></p>
<p>By the theorem, GCDs exist if LCMs exist. But common multiples clearly comprise an ideal, being closed under subtraction and multiplication by any ring element. Hence in a PID the generator of an ideal of common multiples is clearly an LCM. In Euclidean domains this can be proved directly by a simple descent, e.g. in <span class="math-container">$\:\mathbb Z \;$</span> we have the following <a href="https://math.stackexchange.com/a/2322544/242">high-school level proof</a> of the existence of LCMs (and, hence, of GCDs), after noting the set <span class="math-container">$\rm M$</span> of common multiples of <span class="math-container">$\rm a,b$</span> is closed under subtraction and contains <span class="math-container">$\:\rm ab \ne 0\:$</span>:</p>
<p><strong>Lemma</strong> <span class="math-container">$\ $</span> If <span class="math-container">$\;\rm M\subset\mathbb Z \;$</span> is closed under subtraction and <span class="math-container">$\rm M$</span> contains a nonzero element <span class="math-container">$\rm\,k,\,$</span> then <span class="math-container">$\rm M \:$</span> has a positive element and the least such positive element of <span class="math-container">$\;\rm M$</span> divides every element.</p>
<p><strong>Proof</strong> <span class="math-container">$\, $</span> Note <span class="math-container">$\rm\, k-k = 0\in M\,\Rightarrow\, 0-k = -k\in M, \;$</span> therefore <span class="math-container">$\rm M$</span> contains a positive element. Let <span class="math-container">$\rm\, m\,$</span> be the least positive element in <span class="math-container">$\rm\, M.\,$</span> Since <span class="math-container">$\,\rm m\mid n \iff m\mid -n, \;$</span> if some <span class="math-container">$\rm\, n\in M\,$</span> is not divisible by <span class="math-container">$\,\rm m\,$</span> then we
may assume that <span class="math-container">$\,\rm n > 0,\,$</span> and the least such. Then <span class="math-container">$\rm\,M\,$</span> contains <span class="math-container">$\rm\, n-m > 0\,$</span> also not divisible by <span class="math-container">$\rm m,\,$</span> and smaller than <span class="math-container">$\rm n$</span>, contra leastness of <span class="math-container">$\,\rm n.\ \ $</span> <strong>QED</strong></p>
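<p>A quick exhaustive check of both the special case and the general lcm statement (an added sketch over small ranges):</p>

```python
from math import gcd

# a | m and (a+1) | m  =>  a(a+1) | m
ok_consecutive = all(m % (a * (a + 1)) == 0
                     for a in range(1, 30)
                     for m in range(1, 2000)
                     if m % a == 0 and m % (a + 1) == 0)

# every common multiple of a and b is a multiple of ab/gcd(a,b)
ok_lcm = all(m % (a * b // gcd(a, b)) == 0
             for a in range(1, 20)
             for b in range(1, 20)
             for m in range(1, 500)
             if m % a == 0 and m % b == 0)
print(ok_consecutive, ok_lcm)  # True True
```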
|
805,341 | <p>Need a bit of help with this question. </p>
<p>We're given two invertible square $n\times n$ matrices $A$ and $B$ with entries in the reals.</p>
<p>We have to show that $AB$ is also invertible and then express $(AB)^{-1}$ in terms of $A$ and $B$. </p>
<p>I've managed to get the first part out. </p>
<p>Since $A$ is invertible, $Det(A) \neq 0$. Similarly for $B$, $Det(B) \neq 0$.</p>
<p>Also, from the properties of determinants, $Det(A)Det(B) = Det(AB)$.
Hence $Det(AB) \neq 0$, and so $AB$ is invertible. </p>
<p>It's the second part that I need help with. Thanks. </p>
| Andreas Caranti | 58,401 | <p>Assuming you mean $(AB)^{-1}$, there's a well-known simile that can help you finding the answer.</p>
<p>Think of $A$ and $B$ representing actions, like $A$ means putting on socks, $B$ means putting on shoes.</p>
<p>In the morning you do $AB$, socks first, then shoes.</p>
<p>In the evening you need to undo this, that is, do $(AB)^{-1}$. You will need to take off the socks, which is $A^{-1}$, and take off the shoes, which is $B^{-1}$. But what do you do first, that is, do you do $A^{-1} B^{-1}$ or $B^{-1} A^{-1}$?</p>
<p>Once you've got the right idea, just compute the product to see it is correct.</p>
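<p>Once the order is decided, it is easy to verify on a concrete example (an added sketch with an arbitrary pair of invertible $2\times 2$ matrices):</p>

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    # inverse of a 2x2 matrix via the adjugate formula
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

A = [[2, 1], [1, 1]]
B = [[1, 3], [0, 1]]

lhs = inv2(matmul(A, B))               # (AB)^{-1}
rhs = matmul(inv2(B), inv2(A))         # shoes off first, then socks
print(lhs, rhs)  # both [[4.0, -7.0], [-1.0, 2.0]]
```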
|
4,317 | <p>For computing the present worth of an infinite sequence of equally spaced payments $(n^{2})$ I had the need to evaluate</p>
<p>$$\displaystyle\sum_{n=1}^{\infty}\frac{n^{2}}{x^{n}}=\dfrac{x(x+1)}{(x-1)^{3}}\qquad x>1.$$</p>
<p>The method I used was based on the geometric series $\displaystyle\sum_{n=1}^{\infty}x^{n}=\dfrac{x}{1-x}$ differentiating each side followed by a multiplication by $x$, differentiating a second time and multiplying again by $x$. There is at least a second (more difficult) method that is to compute the series partial sums and letting $n$ go to infinity.</p>
<p><strong>Question:</strong> Is there a closed form for</p>
<p>$$\displaystyle\sum_{n=1}^{\infty }\dfrac{n^{p}}{x^{n}}\qquad x>1,p\in\mathbb{Z}^{+}\quad ?$$</p>
<p>What is the sketch of its proof in case it exists?</p>
| Qiaochu Yuan | 232 | <p>The general closed form is</p>
<p>$$\displaystyle \sum_{k=1}^{\infty} k^n x^k = \frac{1}{(1 - x)^{n+1}} \left( \sum_{m=0}^{n} A(n, m) x^{m+1} \right)$$</p>
<p>where $A(n, m)$ are the <a href="http://en.wikipedia.org/wiki/Eulerian_number">Eulerian numbers</a>. When I have time I will edit with a few more details. If you only want the answer for a particular small value of $n$ then see Section 3 of <a href="http://web.mit.edu/~qchu/Public/TopicsInGF.pdf">my notes on generating functions</a>. I will also mention that for a particular value of $n$ one can deduce the answer by using the identity</p>
<p>$$\displaystyle \sum_{k=0}^{\infty} {k+n \choose n} x^k = \frac{1}{(1 - x)^{n+1}}$$</p>
<p>and writing $k^n$ as a linear combination of the polynomials ${k+r \choose r}$ (in $k$), for example using a finite difference table.</p>
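<p>A numeric check of the identity for small $n$ (an added sketch; the Eulerian numbers are generated from their standard recurrence, and the left-hand series is truncated):</p>

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def eulerian(n, m):
    # A(n, m) = (m+1) A(n-1, m) + (n-m) A(n-1, m-1), with A(0, 0) = 1
    if n == 0:
        return 1 if m == 0 else 0
    if m < 0 or m > n:
        return 0
    return (m + 1) * eulerian(n - 1, m) + (n - m) * eulerian(n - 1, m - 1)

def closed_form(n, x):
    num = sum(eulerian(n, m) * x ** (m + 1) for m in range(n + 1))
    return num / (1 - x) ** (n + 1)

def truncated_series(n, x, terms=400):
    return sum(k ** n * x ** k for k in range(1, terms + 1))

x = 0.5
checks = [abs(closed_form(n, x) - truncated_series(n, x)) for n in range(1, 6)]
print(checks)  # all tiny; e.g. closed_form(2, 0.5) = 6.0
```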
|
4,317 | <p>For computing the present worth of an infinite sequence of equally spaced payments $(n^{2})$ I had the need to evaluate</p>
<p>$$\displaystyle\sum_{n=1}^{\infty}\frac{n^{2}}{x^{n}}=\dfrac{x(x+1)}{(x-1)^{3}}\qquad x>1.$$</p>
<p>The method I used was based on the geometric series $\displaystyle\sum_{n=1}^{\infty}x^{n}=\dfrac{x}{1-x}$ differentiating each side followed by a multiplication by $x$, differentiating a second time and multiplying again by $x$. There is at least a second (more difficult) method that is to compute the series partial sums and letting $n$ go to infinity.</p>
<p><strong>Question:</strong> Is there a closed form for</p>
<p>$$\displaystyle\sum_{n=1}^{\infty }\dfrac{n^{p}}{x^{n}}\qquad x>1,p\in\mathbb{Z}^{+}\quad ?$$</p>
<p>What is the sketch of its proof in case it exists?</p>
| Arin Chaudhuri | 404 | <p>According to <a href="http://mathworld.wolfram.com/GeometricDistribution.html" rel="nofollow">MathWorld</a>, the $k^{th}$ moment of the geometric distribution with parameter $p$, which is $ \sum_n p (1-p)^n n^k $, can be expressed in terms of the polylogarithm function.</p>
|
2,413,368 | <p>I am to show that if $ w = z + \frac{c}{z} $ and $ |z| = 1 $, then $w$ traces an ellipse, and I must find its equation.</p>
<p>Previously, I have solved transformation questions by finding the modulus of the transformation in either the form $ w = f(z) $ or $ z = f(w) $. However, I think the part stumping me here is that the transformation has both $z$ and $z^{-1}$ in it.</p>
<p>Attempt with $w = f(z)$:
$$ w = x+iy + \frac{c}{x+iy} $$
$$= x+iy + \frac{c(x-iy)}{x^2 + y^2} $$
$$= x+iy+c(x-iy)$$
[as $|z| = 1 \implies x^2 + y^2 = 1^2$]</p>
<p>However, from here I am unable to work out how to proceed.</p>
<p>I also tried to find $z=f(w)$:
$$ z^2 - zw + c = 0 $$
$$ (z - \frac{w}{2})^2 + c - \frac{w^2}{4} = 0 $$
But I cannot see how this is of any use either.</p>
| Donald Splutterwit | 404,247 | <p>$z$ is on the unit circle; let $z=e^{i \theta}$ so
\begin{eqnarray*}
w= (1+c) \cos( \theta) +i (1-c) \sin(\theta)
\end{eqnarray*}
which gives $ x= (1+c) \cos( \theta) , y= (1-c) \sin(\theta)$ and considered in cartesian coordinates
\begin{eqnarray*}
\frac{x^2}{ (1+c)^2} + \frac{y^2}{ (1-c)^2} =1 .
\end{eqnarray*}</p>
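<p>A numeric check (an added sketch with the sample value $c=0.5$):</p>

```python
import cmath

# For z on the unit circle, w = z + c/z should satisfy
# x^2/(1+c)^2 + y^2/(1-c)^2 = 1 (here c != +-1).
c = 0.5
max_err = 0.0
for k in range(360):
    z = cmath.exp(1j * cmath.pi * k / 180)   # z on the unit circle
    w = z + c / z
    lhs = w.real ** 2 / (1 + c) ** 2 + w.imag ** 2 / (1 - c) ** 2
    max_err = max(max_err, abs(lhs - 1))
print(max_err)  # on the order of machine precision
```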
|
1,439,920 | <blockquote>
<p>So, the question is:<br>
Calculate the probability that 10 dice give more than 2 6s.</p>
</blockquote>
<p>I've calculated that the probability for throwing 3 6s is 1/216.</p>
<p>And by that logic: 1/216 + 1/216 + .. + 1/216 = 10/216.</p>
<p>But I've been told that this isn't the proper way to set it up.</p>
<p>Anyone having a good way to calculate this?</p>
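<p>One way (a sketch, assuming ten independent fair dice, so that the number of 6s is binomial):</p>

```python
from math import comb

def pmf(k, n=10, p=1 / 6):
    # binomial probability of exactly k sixes among n dice
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

p_more_than_two = 1 - sum(pmf(k) for k in range(3))
p_direct = sum(pmf(k) for k in range(3, 11))
print(p_more_than_two, p_direct)  # ~0.2248 either way
```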
| chandu1729 | 64,736 | <p>It's clear that $x=0$ is one of the roots. Hence, if we prove there are at least 2 zeros of $ f(x) := x^4-1102x^3-2015$, we are done.</p>
<p>Observe, $f(0) < 0$ and $f(-2) > 0 $, so from Intermediate Value Theorem there exists at least one root between $-2$ and $0$. </p>
<p>Now, let's say there is exactly one real root of $f$, which means that there are 3 non-real complex roots of $f$. This cannot be, as complex roots occur in conjugate pairs. Hence, there are at least 2 real roots of $f=0$.</p>
|
1,439,920 | <blockquote>
<p>So, the question is:<br>
Calculate the probability that 10 dice give more than 2 6s.</p>
</blockquote>
<p>I've calculated that the probability for throwing 3 6s is 1/216.</p>
<p>And by that logic: 1/216 + 1/216 + .. + 1/216 = 10/216.</p>
<p>But I've been told that this isn't the proper way to set it up.</p>
<p>Anyone having a good way to calculate this?</p>
| NoChance | 15,180 | <p>You can use <a href="https://en.wikipedia.org/wiki/Descartes%27_rule_of_signs" rel="nofollow">Descartes' rule of signs</a> to tell you the number of real roots, as long as you are not interested in the value of each.</p>
<p>First observe that $x=0$ is a root for:</p>
<p>$f(x)=x^5 − 1102x^4 − 2015x$</p>
<p>Second, count positive real roots by counting sign changes in $f(x)$: we have (+-)(--), that is, 1 sign change, indicating 1 positive root.</p>
<p>Third, count negative real roots by counting sign changes in $f(-x)$, where:
$f(-x)=-x^5 - 1102x^4 + 2015x$ </p>
<p>Here we have the signs (--)(-+), so we have 1 negative root.</p>
<p>From the above, we have 3 real roots for $f(x)$.</p>
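<p>Counting the sign changes mechanically (an added sketch):</p>

```python
def sign_changes(coeffs):
    # count sign changes among the nonzero coefficients
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

# f(x) = x^5 - 1102 x^4 - 2015 x, highest degree first
f = [1, -1102, 0, 0, -2015, 0]
# f(-x) = -x^5 - 1102 x^4 + 2015 x
f_neg = [-1, -1102, 0, 0, 2015, 0]

pos = sign_changes(f)      # bound on positive real roots
neg = sign_changes(f_neg)  # bound on negative real roots
print(pos, neg)  # 1 1  ->  1 positive, 1 negative, plus the root x = 0
```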
|