| qid | question | author | author_id | answer |
|---|---|---|---|---|
1,902,188 | <p>$k\in\mathbb{N}$ </p>
<p>The inverse of the sum $$b_k:=\sum\limits_{j=1}^k (-1)^{k-j}\binom{k}{j} j^{\,k} a_j$$ is obviously
$$a_k=\sum\limits_{j=1}^k \binom{k-1}{j-1}\frac{b_j}{k^j}$$ . </p>
<p>How can one prove it (in a clear manner)? </p>
<p>Thanks in advance.</p>
<hr>
<p>Background of the question: </p>
<p>We have $$\sum\limits_{k=1}^\infty \frac{b_k}{k!}\int\limits_0^\infty \left(\frac{t}{e^t-1}\right)^k dt =\sum\limits_{k=1}^\infty \frac{a_k}{k}$$ with $\,\displaystyle b_k:=\sum\limits_{j=1}^k (-1)^{k-j}\binom{k}{j}j^{\,k}a_j $. </p>
<p>Note:</p>
<p>A special case is $\displaystyle a_k:=\frac{1}{k^n}$ with $n\in\mathbb{N}$ and therefore $\,\displaystyle b_k=\sum\limits_{j=1}^k (-1)^{k-j}\binom{k}{j}j^{\,k-n}$ (see Stirling numbers of the second kind)
$$\sum\limits_{k=1}^n \frac{b_k}{k!}\int\limits_0^\infty \left(\frac{t}{e^t-1}\right)^k dt =\zeta(n+1)$$ and the inverse equation can be found in <a href="https://math.stackexchange.com/questions/1873839/a-formula-for-int-limits-0-infty-fracxex-1n-dx">A formula for $\int\limits_0^\infty (\frac{x}{e^x-1})^n dx$</a> . </p>
| Batominovski | 72,152 | <p>In this proof, the binomial identity
$$\binom{m}{n}\,\binom{n}{s}=\binom{m}{s}\,\binom{m-s}{n-s}$$
for all integers $m,n,s$ with $0\leq s\leq n\leq m$ is used frequently, without being specifically mentioned. A particular case of importance is when $s=1$, where it is given by
$$n\,\binom{m}{n}=m\,\binom{m-1}{n-1}\,.$$</p>
<p>First, rewrite
$$b_k=k\,\sum_{j=1}^{k}\,(-1)^{k-j}\,\binom{k-1}{j-1}\,j^{k-1}\,a_j\,.$$
Then,
$$\sum_{k=1}^l\,\binom{l-1}{k-1}\,\frac{b_k}{l^k}=\sum_{k=1}^l\,\binom{l-1}{k-1}\,\frac{k}{l^k}\,\sum_{j=1}^k\,(-1)^{k-j}\,\binom{k-1}{j-1}\,j^{k-1}\,a_j\,.$$
Thus,
$$\begin{align}
\sum_{k=1}^l\,\binom{l-1}{k-1}\,\frac{b_k}{l^k}&=\sum_{j=1}^l\,\frac{a_j}{j}\,\sum_{k=j}^l\,(-1)^{k-j}\,\binom{l-1}{k-1}\,\binom{k-1}{j-1}\,k\left(\frac{j}{l}\right)^k
\\
&=\sum_{j=1}^l\,\frac{a_j}{j}\,\binom{l-1}{j-1}\,\sum_{k=j}^l\,(-1)^{k-j}\,\binom{l-j}{k-j}\,k\left(\frac{j}{l}\right)^k\,.
\end{align}$$
Let $r:=k-j$. We have
$$\sum_{k=1}^l\,\binom{l-1}{k-1}\,\frac{b_k}{l^k}=\sum_{j=1}^l\,\frac{a_j}{j}\,\binom{l-1}{j-1}\,\left(\frac{j}{l}\right)^j\,\sum_{r=0}^{l-j}\,(-1)^r\,\binom{l-j}{r}\,(r+j)\,\left(\frac{j}{l}\right)^{r}\,.\tag{*}$$</p>
<p>Now, if $j=l$, then
$$\sum_{r=0}^{l-j}\,(-1)^r\,\binom{l-j}{r}\,(r+j)\,\left(\frac{j}{l}\right)^{r}=l\,.$$
If $j<l$, then
$$\begin{align}
\sum_{r=0}^{l-j}\,(-1)^r\,\binom{l-j}{r}\,r\,\left(\frac{j}{l}\right)^{r}&=-(l-j)\left(\frac{j}{l}\right)\,\sum_{r=1}^{l-j}\,(-1)^{r-1}\,\binom{l-j-1}{r-1}\,\left(\frac{j}{l}\right)^{r-1}
\\&=-j\left(1-\frac{j}{l}\right)\,\left(1-\frac{j}{l}\right)^{l-j-1}=-j\left(1-\frac{j}{l}\right)^{l-j}
\end{align}$$
and
$$\sum_{r=0}^{l-j}\,(-1)^r\,\binom{l-j}{r}\,j\left(\frac{j}{l}\right)^r=j\left(1-\frac{j}{l}\right)^{l-j}\,.$$
Consequently,
$$\sum_{r=0}^{l-j}\,(-1)^r\,\binom{l-j}{r}\,(r+j)\,\left(\frac{j}{l}\right)^{r}=\begin{cases}
0\,,&\text{if }j<l\,,\\
l\,,&\text{if }j=l\,.
\end{cases}$$
From (*),
$$\sum_{k=1}^l\,\binom{l-1}{k-1}\,\frac{b_k}{l^k}=\frac{a_l}{l}\,\binom{l-1}{l-1}\,\left(\frac{l}{l}\right)^l\,l=a_l\,.$$</p>
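<p>Though not part of the proof, the inversion pair can be sanity-checked with exact rational arithmetic; this Python sketch uses the arbitrary sample values $a_j = j/(j+1)$:</p>

```python
from fractions import Fraction
from math import comb

# Sanity check of the inversion pair (illustration, not part of the proof):
# pick arbitrary rational a_1..a_N, form b_k as in the question, and verify
# that the claimed inverse sum recovers every a_l exactly.
N = 6
a = [None] + [Fraction(j, j + 1) for j in range(1, N + 1)]  # a[1..N], arbitrary

b = [None] + [
    sum((-1) ** (k - j) * comb(k, j) * Fraction(j) ** k * a[j] for j in range(1, k + 1))
    for k in range(1, N + 1)
]

ok = all(
    sum(comb(l - 1, k - 1) * b[k] / Fraction(l) ** k for k in range(1, l + 1)) == a[l]
    for l in range(1, N + 1)
)
print("inversion verified:", ok)
```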
|
3,362,115 | <blockquote>
<p>Find the maximum value of <span class="math-container">$y/x$</span> if it satisfies <span class="math-container">$(x-5)^2+(y-4)^2=6$</span>.</p>
</blockquote>
<p>Geometrically, this is finding the slope of the tangent from the origin to the circle. Other than solving this equation with <span class="math-container">$x^2+y^2=35$</span>, I cannot see any synthetic geometry solution. Thanks!</p>
| Quanto | 686,284 | <p>Let <span class="math-container">$a$</span> be the angle between the center line (from origin to circle center) and the <span class="math-container">$x$</span>-axis,</p>
<p><span class="math-container">$$ \tan a = \frac 45$$</span></p>
<p>Let <span class="math-container">$b$</span> be the angle between the center line and the tangent line,</p>
<p><span class="math-container">$$ \tan b= \frac{\sqrt{6}}{\sqrt{4^2+5^2-6}}= \frac{\sqrt{6}}{\sqrt{35}}$$</span></p>
<p>The maximum value <span class="math-container">$y/x$</span> is given by</p>
<p><span class="math-container">$$ \tan(a+b) = \frac{\tan a+\tan b}{1-\tan a \tan b}= \frac{20+\sqrt{210}}{19}$$</span></p>
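<p>The closed form can also be cross-checked numerically by dense sampling of the circle (an illustration, not a second proof):</p>

```python
import math

# Parametrize the circle (x-5)^2 + (y-4)^2 = 6 and maximize y/x by sampling;
# compare against the closed form (20 + sqrt(210))/19.
r = math.sqrt(6)
best = max(
    (4 + r * math.sin(2 * math.pi * i / 200000)) /
    (5 + r * math.cos(2 * math.pi * i / 200000))
    for i in range(200000)
)
closed_form = (20 + math.sqrt(210)) / 19
print(best, closed_form)
```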
|
436,172 | <p>Let $a,b,c,d$ be real numbers. Show that
$$2\sqrt{a^2+c^2}+\sqrt{a^2+c^2+3(b^2+d^2)-2\sqrt{3}(ab+cd)}+\sqrt{a^2+c^2+3(b^2+d^2)+2\sqrt{3}(ab+cd)}\ge6\sqrt{|ad-bc|}$$</p>
<p>This problem was created by China's famous mathematician <a href="http://en.wikipedia.org/wiki/Hua_Luogeng" rel="nofollow">Hua Luogeng</a> when he was a child. The known solutions are rather ugly; I think this inequality has nicer methods. Thank you.</p>
| Aang | 33,989 | <p>HINT: </p>
<p>$$2\sqrt{a^2+c^2}+\sqrt{a^2+c^2+3(b^2+d^2)-2\sqrt{3}(ab+cd)}+\sqrt{a^2+c^2+3(b^2+d^2)+2\sqrt{3}(ab+cd)}$$ $$=2\sqrt{a^2+c^2}+\sqrt{(a-\sqrt{3}b)^2+(c-\sqrt{3}d)^2}+\sqrt{(a+\sqrt{3}b)^2+(c+\sqrt{3}d)^2}$$ $$\geq 2\sqrt{2|ac|}+\sqrt{2|(a-\sqrt{3}b)(c-\sqrt{3}d)|}+\sqrt{2|(a+\sqrt{3}b)(c+\sqrt{3}d)|}$$</p>
<p>Now,
$$\sqrt{2|(a-\sqrt{3}b)(c-\sqrt{3}d)|}+\sqrt{2|(a+\sqrt{3}b)(c+\sqrt{3}d)|}\geq\sqrt{2(|(a-\sqrt{3}b)(c-\sqrt{3}d)|+|(a+\sqrt{3}b)(c+\sqrt{3}d)|)}$$ </p>
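<p>The full inequality can be stress-tested numerically; the following Python sketch is random sampling only, not a proof:</p>

```python
import math
import random

# Random numerical stress test (illustration, not a proof) of the inequality
# for many real quadruples (a, b, c, d).
random.seed(0)

def lhs(a, b, c, d):
    s3 = math.sqrt(3)
    return (2 * math.hypot(a, c)
            + math.hypot(a - s3 * b, c - s3 * d)
            + math.hypot(a + s3 * b, c + s3 * d))

ok = True
for _ in range(10000):
    a, b, c, d = (random.uniform(-10, 10) for _ in range(4))
    if lhs(a, b, c, d) + 1e-9 < 6 * math.sqrt(abs(a * d - b * c)):
        ok = False
print("inequality held in all trials:", ok)
```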
|
10,615 | <p>The tag <a href="https://math.stackexchange.com/questions/tagged/summation" class="post-tag" title="show questions tagged 'summation'" rel="tag">summation</a> was created less than a year ago, see <a href="https://math.meta.stackexchange.com/questions/6324/summation-tag-for-finite-and-formal-summations">"summation" tag for finite and formal summations</a></p>
<p>Before that, posts about finite sums were usually tagged as <a href="https://math.stackexchange.com/questions/tagged/sequences-and-series" class="post-tag" title="show questions tagged 'sequences-and-series'" rel="tag">sequences-and-series</a>. Since then, new questions have usually been correctly tagged as <a href="https://math.stackexchange.com/questions/tagged/summation" class="post-tag" title="show questions tagged 'summation'" rel="tag">summation</a>, and some of the older questions have been retagged too.</p>
<p>The <a href="https://math.stackexchange.com/tags/summation/info">tag-excerpt and tag-wiki for summation</a> say:</p>
<blockquote>
<p>Use <a href="https://math.stackexchange.com/questions/tagged/sequences-and-series" class="post-tag" title="show questions tagged 'sequences-and-series'" rel="tag">sequences-and-series</a> for sums of infinite series and questions of convergence; use <a href="https://math.stackexchange.com/questions/tagged/summation" class="post-tag" title="show questions tagged 'summation'" rel="tag">summation</a> for questions about finite sums and simplification of expressions involving sums.</p>
</blockquote>
<p>Based on this it seems that a typical question on finite sums should be tagged only <a href="https://math.stackexchange.com/questions/tagged/summation" class="post-tag" title="show questions tagged 'summation'" rel="tag">summation</a> and not <a href="https://math.stackexchange.com/questions/tagged/sequences-and-series" class="post-tag" title="show questions tagged 'sequences-and-series'" rel="tag">sequences-and-series</a>. (Of course, there might be exceptions where both tags are appropriate.) Yet we have <a href="https://math.stackexchange.com/questions/tagged/summation+sequences-and-series">many questions</a> tagged with both these tags.</p>
<p>I have to say that I am usually careful about removing tags that the OP has chosen, especially if I am not entirely sure. (So some of the occurrences of both tags are due to my retags.)</p>
<p>I would like to ask for the opinion of the community about this combination of tags. Perhaps this will encourage more people (including me) to use the tags correctly. And if most users disagree with the guide given in the tag-wiki, we can change the tag-wiki.</p>
<blockquote>
<ul>
<li>Should the tag <a href="https://math.stackexchange.com/questions/tagged/sequences-and-series" class="post-tag" title="show questions tagged 'sequences-and-series'" rel="tag">sequences-and-series</a> be removed from questions about finite sums, so that only the tag <a href="https://math.stackexchange.com/questions/tagged/summation" class="post-tag" title="show questions tagged 'summation'" rel="tag">summation</a> is used from this pair of tags?</li>
</ul>
</blockquote>
| MJD | 25,554 | <p>When I created <a href="https://math.stackexchange.com/questions/tagged/summation" class="post-tag" title="show questions tagged 'summation'" rel="tag">summation</a>, it was with the idea that questions about series are usually concerned with issues of convergence, and that <a href="https://math.stackexchange.com/questions/tagged/summation" class="post-tag" title="show questions tagged 'summation'" rel="tag">summation</a> was a distinct tag for something quite different. A question such as </p>
<blockquote>
<p>What is a closed form for $$\sum_{i=0}^n i^2?$$</p>
</blockquote>
<p>is about a certain formal, algebraic identity. It has a completely different character from:</p>
<blockquote>
<p>What is a closed form for $$\sum_{i=1}^\infty i^{-2}? $$</p>
</blockquote>
<p>which is about convergence properties of the real numbers. The answer to the first question is an expression, expressing a particular function of $n$. The answer to the second question is a number, or a proof that the indicated series diverges. Someone who said the second summation was divergent would be wrong. But someone who said the first summation was divergent would be “not even wrong”, but deeply confused about what the question was.</p>
<p>Only the first question should be tagged <a href="https://math.stackexchange.com/questions/tagged/summation" class="post-tag" title="show questions tagged 'summation'" rel="tag">summation</a>; only the second one should be tagged <a href="https://math.stackexchange.com/questions/tagged/sequences-and-series" class="post-tag" title="show questions tagged 'sequences-and-series'" rel="tag">sequences-and-series</a>. Summations, such as the first example, have nothing at all to do with sequences, and it makes no sense to bundle sequences into their tag. Series, such as the second example, <em>are</em> sequences, and it makes sense to tag them together.</p>
<p>I tried to convey this dichotomy when I wrote the tag descriptions for <a href="https://math.stackexchange.com/questions/tagged/summation" class="post-tag" title="show questions tagged 'summation'" rel="tag">summation</a>. Unfortunately, the tag descriptions and tag wiki are very hard to find and to navigate.</p>
<p>In my opinion, <strong>it should be very rare that a question should receive both tags</strong>. I cannot recall any example for which both tags would be appropriate.</p>
|
3,190,435 | <blockquote>
<p>A coin has probability <span class="math-container">$p$</span> of landing on heads and probability <span class="math-container">$q=1-p$</span> of landing on tails. It is flipped repeatedly until at least one head and one tail have been flipped.<br>
a) Find the expected number of flips needed.</p>
</blockquote>
<p>This is not part of a homework assignment. I am studying for a final and don't understand the professors solutions. Since this is clearly geometric, I would think the solution would be:
<span class="math-container">$$E(N)=\sum_{n=1}^{\infty}np^{n-1}q+\sum_{n=1}^{\infty}nq^{n-1}p=\frac{1}{q}+\frac{1}{p}.$$</span>
However, I am completely wrong. The answer is
<span class="math-container">$$E(N)=p\left(1+\frac{1}{q}\right)+q\left(1+\frac{1}{p}\right).$$</span>
For example, suppose we flip heads first. Then we have
<span class="math-container">$$E(N\mid H)=p+p\sum_{n=1}^{\infty}np^{n-1}q.$$</span>
I am not sure why this makes sense. I am not entirely sure why we have an added <span class="math-container">$1$</span> and a factored <span class="math-container">$p$</span>, <span class="math-container">$q$</span>. Could someone carefully explain why it makes sense that this is the right answer?</p>
| Peter Foreman | 631,494 | <p>If you get a head with probability <span class="math-container">$p$</span> then the expected number of throws is <span class="math-container">$1+E(X)$</span> where <span class="math-container">$X$</span> is a geometric distribution requiring a tail to be thrown with probability <span class="math-container">$q$</span> so <span class="math-container">$1+E(X)=1+\frac1q$</span>. Similarly if you throw a tail with probability <span class="math-container">$q$</span> then the expected number of throws is <span class="math-container">$1+E(Y)$</span> where <span class="math-container">$Y$</span> is a geometric distribution requiring a head to be thrown with probability <span class="math-container">$p$</span> so <span class="math-container">$1+E(Y)=1+\frac1p$</span>. This means that the overall expected number of throws is
<span class="math-container">$$p\left(1+\frac1q\right)+q\left(1+\frac1p\right)$$</span>
because there is a probability <span class="math-container">$p$</span> that the expected number of throws is given by <span class="math-container">$1+E(X)$</span> and probability <span class="math-container">$q$</span> that it is given by <span class="math-container">$1+E(Y)$</span>.</p>
|
3,190,435 | <blockquote>
<p>A coin has probability <span class="math-container">$p$</span> of landing on heads and probability <span class="math-container">$q=1-p$</span> of landing on tails. It is flipped repeatedly until at least one head and one tail have been flipped.<br>
a) Find the expected number of flips needed.</p>
</blockquote>
<p>This is not part of a homework assignment. I am studying for a final and don't understand the professors solutions. Since this is clearly geometric, I would think the solution would be:
<span class="math-container">$$E(N)=\sum_{n=1}^{\infty}np^{n-1}q+\sum_{n=1}^{\infty}nq^{n-1}p=\frac{1}{q}+\frac{1}{p}.$$</span>
However, I am completely wrong. The answer is
<span class="math-container">$$E(N)=p\left(1+\frac{1}{q}\right)+q\left(1+\frac{1}{p}\right).$$</span>
For example, suppose we flip heads first. Then we have
<span class="math-container">$$E(N\mid H)=p+p\sum_{n=1}^{\infty}np^{n-1}q.$$</span>
I am not sure why this makes sense. I am not entirely sure why we have an added <span class="math-container">$1$</span> and a factored <span class="math-container">$p$</span>, <span class="math-container">$q$</span>. Could someone carefully explain why it makes sense that this is the right answer?</p>
| leonbloy | 312 | <p>Let <span class="math-container">$X$</span> be the time of the first head, and <span class="math-container">$Y$</span> the time of the first tail, and <span class="math-container">$W$</span> the first time when a head and a tail has been flipped.</p>
<p>You are right in assuming that <span class="math-container">$E[X]=\frac{1}{p}$</span> and <span class="math-container">$E[Y]=\frac{1}{q}$</span>, But you are wrong in assuming that <span class="math-container">$W=X+Y$</span>, that's simply not true, actually <span class="math-container">$W=\max(X,Y)$</span>.</p>
<p>A possible approach. Let <span class="math-container">$A$</span> be the indicator variable of the event: "first coin was a head" (hence <span class="math-container">$X=1$</span>).</p>
<p>Then use <span class="math-container">$$E[W]=E[E[W | A ]] = P(A=1) E[W|A=1]+P(A=0) E[W|A=0]=\\=p(E[Y]+1)+q(E[X]+1)$$</span></p>
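<p>A Monte Carlo sanity check of the closed form (illustration only; $p = 0.3$ below is an arbitrary sample value):</p>

```python
import random

# Simulate flips until both a head and a tail have appeared, and compare the
# average number of flips with p*(1 + 1/q) + q*(1 + 1/p).
random.seed(1)
p = 0.3
q = 1 - p

def flips_until_both():
    seen = set()
    n = 0
    while len(seen) < 2:
        seen.add(random.random() < p)  # True = head, False = tail
        n += 1
    return n

trials = 100000
estimate = sum(flips_until_both() for _ in range(trials)) / trials
exact = p * (1 + 1 / q) + q * (1 + 1 / p)
print(estimate, exact)
```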
|
78,478 | <blockquote>
<p>Prove that $\frac{1}{n} \sum_{k=2}^n \frac{1}{\log k}$ converges to $0.$</p>
</blockquote>
<p>Okay, seriously, it's like this question is mocking me. I know it converges to $0$. I can feel it in my blood. I even proved it was Cauchy, but then realized that didn't tell me what the limit <em>was</em>. I've been working on this for an hour, so can one of you math geniuses help me?</p>
<p>Thanks!</p>
| André Nicolas | 6,312 | <p>Logarithm to any base is equal to a constant times logarithm to the base $2$. So let's work with base $2$. For convenience, write $\log n$ for $\log_2 n$.</p>
<p>We use a mild variant of the usual proof that $\sum \frac{1}{n}$ diverges.</p>
<p>Note that $\log 2$ and $\log 3$ are both $\ge 1$; $\log 4$, $\log 5$, $\log 6$, and $\log 7$ are all $\ge 2$; $\log 8$, $\log 9$, and so on up to $\log 15$ are all $\ge 3$, and so on. So we have:
$$\frac{1}{\log 2}+\frac{1}{\log 3} \le \frac{1}{1}+\frac{1}{1}=2.$$
$$\frac{1}{\log 4}+\frac{1}{\log 5}+\frac{1}{\log 6}+\frac{1}{\log 7}\le \frac{1}{2}+\frac{1}{2}+\frac{1}{2}+\frac{1}{2}=2.$$</p>
<p>In the same way we can see that, for any $m \ge 1$, the block from $2^m$ to $2^{m+1}-1$ has $2^m$ terms, each at most $\frac{1}{m}$, so
$$\frac{1}{\log (2^{m})}+\frac{1}{\log (2^{m}+1)}+\cdots+\frac{1}{\log (2^{m+1}-1)}\le \frac{2^m}{m}.$$
Summing the blocks, we conclude that
$$\sum_{k=2}^{2^{m+1}-1} \frac{1}{\log k} \le \sum_{j=1}^{m}\frac{2^j}{j}\le \frac{4\cdot 2^{m}}{m}.$$
The last inequality is proved by induction: it holds for $m=1$ and $m=2$, and if it holds for some $m\ge 2$, then
$$\sum_{j=1}^{m+1}\frac{2^j}{j}\le \frac{4\cdot 2^m}{m}+\frac{2^{m+1}}{m+1}\le \frac{4\cdot 2^{m+1}}{m+1},$$
since $\frac{4}{m}\le \frac{6}{m+1}$ for $m\ge 2$.</p>
<p>Now suppose that $2^{m} \le n \le 2^{m+1}-1$.
Then
$$\frac{1}{n}\sum_{k=2}^n \frac{1}{\log k}\le \frac{1}{2^m}\sum_{k=2}^{2^{m+1}-1} \frac{1}{\log k} \le \frac{4}{m}.$$
Since $m\to\infty$ as $n\to\infty$, the averages tend to $0$. </p>
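<p>As a numerical illustration (not a proof), the averages can be computed directly and watched as they decay:</p>

```python
import math

# Compute (1/n) * sum_{k=2}^n 1/log2(k) for n running through powers of 2;
# the averages decrease slowly toward 0.
def avg(n):
    return sum(1 / math.log2(k) for k in range(2, n + 1)) / n

vals = [avg(2 ** m) for m in (6, 10, 14, 18)]
print(vals)
```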
|
352,849 | <p>I have to show that $\lim \limits_{n\rightarrow\infty}\frac{n!}{(2n)!}=0$ </p>
<hr>
<p>I am not sure if this is correct, but I did it like this:
$(2n)!=(2n)\cdot(2n-1)\cdot(2n-2)\cdot ...\cdot(2n-(n-1))\cdot (n!)$ so I have $$\displaystyle \frac{1}{(2n)\cdot(2n-1)\cdot(2n-2)\cdot ...\cdot(2n-(n-1))}$$ and $$\lim \limits_{n\rightarrow \infty}\frac{1}{(2n)\cdot(2n-1)\cdot(2n-2)\cdot ...\cdot(2n-(n-1))}=0$$ is this correct ? If not why ?</p>
| Community | -1 | <p><strong>Hint</strong></p>
<p>$$0\leq\frac{n!}{(2n)!}\leq\frac{1}{n}$$</p>
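<p>The hint's squeeze can be spot-checked for small $n$; a short Python sketch:</p>

```python
from math import factorial

# Illustration of the squeeze: n!/(2n)! is sandwiched between 0 and 1/n,
# checked here for n = 1..19.
ok = all(
    0 <= factorial(n) / factorial(2 * n) <= 1 / n
    for n in range(1, 20)
)
print(ok)
```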
|
1,285,941 | <p>I have a question for which I couldn't find any clue.
The question is</p>
<p>$$\frac{1}{1\cdot 2}+\frac{1\cdot3}{1\cdot2\cdot3\cdot4}+\frac{1\cdot3\cdot5}{1\cdot2\cdot3\cdot4\cdot5\cdot6}+\cdots$$</p>
<p>I could get the general term as $t_n=\frac{1\cdot3\cdot5\cdot7\cdots(2n-1)}{1\cdot2\cdot3\cdot4\cdot5\cdot6\cdots2n}$</p>
<p>I have also tried it to form the sequence in the telescopic form.But couldn't get. Any hint will be appreciated.</p>
| Alex Fok | 223,498 | <p>Each term reduces to $\frac{(\frac{1}{2})^n}{n!}$, since $1\cdot2\cdot3\cdots 2n=2^n\,n!\,\bigl(1\cdot3\cdot5\cdots(2n-1)\bigr)$. By invoking the Taylor series expansion of $e^x$ at $x=\frac{1}{2}$, the answer is $e^{1/2}-1$.</p>
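<p>A numeric check (illustration only): the partial sums of the original series, with terms $\frac{1\cdot3\cdot5\cdots(2n-1)}{(2n)!}$, approach $e^{1/2}-1$:</p>

```python
import math

# Accumulate the series using the original (unsimplified) terms.
odd_prod = 1.0   # running product 1*3*5*...*(2n-1)
fact = 1.0       # running value of (2n)!
total = 0.0
for n in range(1, 20):
    odd_prod *= (2 * n - 1)
    fact *= (2 * n - 1) * (2 * n)  # grow (2n-2)! up to (2n)!
    total += odd_prod / fact
print(total, math.exp(0.5) - 1)
```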
|
1,521,720 | <p>I'm working through a book on logic and wanted to check one of my steps in a derivation I'm working out. Given the two quantifier negation equalities:</p>
<ol>
<li><p>$\lnot\exists P(x) = \forall\lnot P(x)$</p></li>
<li><p>$\exists\lnot P(x) = \lnot\forall P(x)$</p></li>
</ol>
<p>I'm trying to derive (2) from (1). I'm not asking for the solution if I have it wrong -- I think I've got it, but I want to find out if my last step is valid.</p>
<p>Negating both sides of (1):</p>
<p>$\lnot\lnot\exists P(x) = \lnot\forall\lnot P(x)$</p>
<p>Then removing the double negation, you get:</p>
<p>$\exists P(x) = \lnot\forall\lnot P(x)$</p>
<p>Now, it's not at all clear to me that algebraically, you can just slap a negation of both sides <em>inside the quantifiers</em> to get to (2), preserving correctness. But I notice that this equation has the same "shape" as (2). Intuitively, it seems to be saying the same thing, because you could define a predicate $R(x) = \lnot P(x)$ and obtain:</p>
<p>$\exists\lnot R(x) = \lnot\forall R(x)$</p>
<p>...but I don't like "intuitively". This step feels like a hand-wavy leap of faith -- is it valid? If so, is there a name for it?</p>
| Paul Sinclair | 258,282 | <p>Yes, it is valid. The $P$ refers to an arbitrary predicate. That means that it can be anything, including $\lnot P'$ for some predicate $P'$. That is exactly what you did.</p>
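<p>The equivalence can also be checked exhaustively on a small finite domain; this is an illustration only, since quantifier laws over arbitrary domains need the logical argument, not enumeration:</p>

```python
from itertools import product

# For every predicate P on a 4-element domain (encoded as a tuple of booleans),
# "exists x. not P(x)" coincides with "not forall x. P(x)".
domain = range(4)
ok = all(
    any(not P[x] for x in domain) == (not all(P[x] for x in domain))
    for P in product([False, True], repeat=len(domain))
)
print(ok)
```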
|
184,824 | <p>I have two piecewise functions</p>
<pre><code>equ1 = Piecewise[{{0.524324 + 0.0376478x, 0.639464 <= x <= 0.839322}}]
equ2 = Piecewise[{{-0.506432 + 1.48068x, 0.658914 <= x <= 0.77085}}]
</code></pre>
<p>Now, I am trying to solve <code>equ1 == equ2</code>.</p>
<p>Firstly I tried <code>FindRoot</code>: </p>
<pre><code>FindRoot[equ1 == equ2, x]
</code></pre>
<p>But the output is <code>x = 0</code>. I can only get the correct answer by setting the search starting point to <code>0.7</code>. How can I get the answer directly, without setting a starting point? </p>
<p>Secondly, I tried <code>Reduce</code>: </p>
<pre><code>Reduce[equ1 == equ2, x]
</code></pre>
<p>However, an error appears. The good news is that <code>Reduce</code> does provide the correct answer for my equation. The error is: </p>
<pre><code>Reduce::ratnz: Reduce was unable to solve the system with inexact coefficients. The answer was obtained by solving a corresponding exact system and numericizing the result.
</code></pre>
<p>Is there another way to solve these two piecewise functions? </p>
| Alex Trounev | 58,388 | <pre><code>equ1 = Rationalize[
Piecewise[{{0.524324 + 0.0376478 x, 0.639464 <= x <= 0.839322}}]]
equ2 = Rationalize[
Piecewise[{{-0.506432 + 1.48068 x, 0.658914 <= x <= 0.77085}}]]
Reduce[equ1 == equ2, x]
x == 0.714299 || x < 79933/125000 || x > 419661/500000
</code></pre>
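<p>As a cross-check outside Mathematica (a sketch; the coefficients are copied from the question): on the overlap of the two <code>Piecewise</code> domains both branches are plain lines, so the crossing point solves a single linear equation.</p>

```python
# Solve 0.524324 + 0.0376478 x == -0.506432 + 1.48068 x and confirm the
# root lies inside the overlap [0.658914, 0.77085] of the two domains.
x = (0.524324 + 0.506432) / (1.48068 - 0.0376478)
in_overlap = 0.658914 <= x <= 0.77085
print(x, in_overlap)
```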
|
1,938,253 | <p>Consider the following limit:$$\lim_{x \to\infty}\frac{2+2x+\sin(2x)}{(2x+\sin(2x))e^{\sin(x)}}$$</p>
<p>If we apply L'Hospital's rule, then we get:</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/slrYm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/slrYm.png" alt="Blockquote"></a></p>
</blockquote>
<p>This is because the function $e^{-\sin(x)}$ is bounded and $\frac{4\cos(x)}{2x+4\cos(x)+\sin(2x)}e^{-\sin(x)}$ tends to $0$. </p>
<p>However, it is also stated that: </p>
<blockquote>
<p><a href="https://i.stack.imgur.com/gqxqU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gqxqU.png" alt="Blockquote"></a></p>
</blockquote>
<p>Furthermore (referring to the application of L'Hospital's rule), it is stated in conclusion that:</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/M98KC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/M98KC.png" alt="Blockquote"></a></p>
</blockquote>
| Christian Blatter | 1,303 | <p>The proof of Hôpital's rule rests on the following form of the mean value theorem: If $f$ and $g$ both are continuous on $[a,b]$ and differentiable in the interior $\ ]a,b[\ $, and if $g'(t)\ne0$ for all $t\in\ ]a,b[\ $, then there is a $\tau\in\ ]a,b[\ $ such that
$${f(b)-f(a)\over g(b)-g(a)}={f'(\tau)\over g'(\tau)}\ .$$
The essential "technical assumption" $g'(t)\ne0$ is violated in the example at hand.</p>
<p>It would be nice to have a simpler example where it becomes intuitively clear why things can go wrong in such a case.</p>
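<p>The failure can be seen numerically along two sample sequences (an illustration; large $n$ stands in for $x\to\infty$): along $x=2\pi n$ the quotient approaches $1$, while along $x=2\pi n+\pi/2$ it approaches $1/e$, so the limit does not exist even though the formal computation suggests $0$.</p>

```python
import math

# Evaluate the quotient along two sequences tending to infinity.
def f(x):
    return (2 + 2 * x + math.sin(2 * x)) / (
        (2 * x + math.sin(2 * x)) * math.exp(math.sin(x))
    )

n = 10 ** 6
a = f(2 * math.pi * n)            # close to 1
b = f(2 * math.pi * n + math.pi / 2)  # close to 1/e
print(a, b)
```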
|
57,195 | <p>Let $f$ be a morphism of schemes $f: (X,\mathcal{O}_X)\to (Y,\mathcal{O}_Y)$, and $\mathcal{F},\mathcal{G}$ be sheaves of $\mathcal{O}_Y$-modules. I am trying to prove (I do NOT claim this to be true):</p>
<p>$f^{\ast}\mathcal{F}\otimes_{\mathcal{O}_X}f^{\ast}\mathcal{G}\cong f^{\ast}(\mathcal{F}\otimes_{\mathcal{O}_Y}\mathcal{G})$</p>
<p>By the definition of $f^{*}$, and the property of the tensor product, one can check that this boils down to proving: $\quad f^{-1} \mathcal{F} \otimes_{f^{-1}\mathcal{O}_Y} f^{-1}\mathcal{G} \cong f^{-1}(\mathcal{F} \otimes_{\mathcal{O}_Y}\mathcal{G})$. However, I cannot continue this bare-hands computation at the present stage. For one thing, $f^{-1}$ and $\otimes$ both require sheafification, and thus I get a composition of two sheafification objects; for another, I know nothing about good properties of stalks under $f^{-1}$.</p>
<p>I guess the computation may be dirty, but I appreciate any insight on handling the problem.</p>
| Georges Elencwajg | 3,217 | <p>Yes, we have $f^{\ast}\mathcal{F}\otimes_{\mathcal{O}_X}f^{\ast}\mathcal{G}\cong f^{\ast}(\mathcal{F}\otimes_{\mathcal{O}_Y}\mathcal{G})\quad$</p>
<p>And, yes, this results from the <em>isomorphism</em> $\alpha: f^{-1} \mathcal{F} \otimes_{f^{-1}\mathcal{O}_Y} f^{-1}\mathcal{G} \overset {\sim}{\longrightarrow} f^{-1}(\mathcal{F} \otimes_{\mathcal{O}_Y}\mathcal{G})$<br>
And, no, the computation is <em>not</em> dirty!</p>
<p>To prove that the natural map $\alpha$ is an isomorphism, it is enough to look at the stalks.
The morphism $\alpha_x: (f^{-1} \mathcal{F} \otimes_{f^{-1}\mathcal{O}_Y} f^{-1}\mathcal{G})_x \to (f^{-1}(\mathcal{F} \otimes_{\mathcal{O}_Y}\mathcal{G}))_x$ is indeed an isomorphism because of the following two general results (which do <em>not</em> involve schemes):</p>
<p><strong>Fact 1:</strong> For any continuous map $f:X\to Y$ of topological spaces and any sheaf $\mathcal E$ on $Y$, we have for every $x\in X$ a canonical isomorphism $(f^{-1} \mathcal E)_x=\mathcal E _{f(x)}$ </p>
<p><strong>Fact 2:</strong> Given a sheaf of rings $\mathcal A$ and sheaves $\mathcal C, \mathcal D$ of
$\mathcal A$-Modules on the topological space $X$, we have for every $x\in X$ a natural isomorphism
$(\mathcal C \otimes _{\mathcal A}\mathcal D)_x=\mathcal C_x \otimes _{\mathcal A_x}\mathcal D_x$.<br>
[Of course, in the discussion at hand $\mathcal A$ is $f^{-1}\mathcal{O}_Y$]</p>
|
27,271 | <p>A few days ago I recall finding a visual calendar of my sign-in history on Math Stack Exchange. It looked like an ordinary monthly calendar, where the days were white if I hadn't signed in, and green if I had. (I found it when I wondering how close I've been in the past to getting the "fanatic" badge.)</p>
<p>However, I can no longer find this page/pop-up.</p>
<p>I thought it might have been a feature of Stack Exchange as a whole, but I was unable to find an answer on Meta Stack Exchange.</p>
| Asaf Karagila | 622 | <p>When you click your own profile, the default tab is the activity tab. You should go to the "Profile" tab, where you can see the "visited $x$ days, $y$ consecutive" text. Click this text to show the calendar.</p>
<p><a href="https://i.stack.imgur.com/J8Gmx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J8Gmx.png" alt="enter image description here"></a></p>
|
2,186,076 | <p><strong>Question.</strong></p>
<blockquote>
<p>If $p$ is a polynomial of degree $n$ with $p(\alpha)=0$, what do we know of the polynomial $q$ (with degree $n-1$) such that the numbers $(q^k(\alpha))_{k=1}^n$ contain all of the zeroes of $p$?</p>
</blockquote>
<p>Here I denote $q(q(\cdots q(\alpha)))=q^k(\alpha)$.</p>
<p><hr>
<strong>Notes.</strong></p>
<p>We know for a fact such $q$ exists, since there always exists a polynomial of degree at most $n-1$ through $n$ given points. $q$ is not unique, however, since there are multiple permutations we can put the zeroes in.
<hr>
<strong>Examples.</strong></p>
<p>For linear $p$ (write $p(x)=a_1x+a_0$), this is obvious; $q(x)=\tfrac{-a_0}{a_1}=\alpha$ suffices. If $p(\alpha)=0$, then $q(\alpha)=\alpha$ indeed are all the zeroes of $p$.</p>
<p>If $p$ is quadratic, write $p(x)=a_2x^2+a_1x+a_0$, and have $p(\alpha)=0$ again; now $q(x)=\frac{-a_1}{a_2}-x$.</p>
<p>If $p$ is cubic, write $p(x)=a_3x^3+a_2x^2+a_1x+a_0$. This is where I get stuck, since the roots of cubic equations aren't expressions that are easy to work with.
<hr>
<strong>Attempts.</strong></p>
<p>First I see (denote the (not necessarily real) zeroes by $z_1,z_2,\cdots,z_n$) that $z_1+\cdots+z_n=\frac{-a_{n-1}}{a_n}$ and $z_1z_2\cdots z_n=\frac{(-1)^na_0}{a_n}$. We can produce similar expressions for the other coefficients, but I doubt this is useful; they're not even solvable for $n>4$. We also have (given $z_1$)
$$-a_nz_1^n=a_{n-1}z_1^{n-1}+\cdots+a_1z_1+a_0$$
with which we can reduce every expression of degree $n$ or larger in $z_1$ to an expression of degree $n-1$ or smaller.</p>
<p>For $n=3$ (let's do some specific examples), we could write $q(x)=b_2x^2+b_1x+b_0$, and take for example $p(x)=x^3-x-1$. Then, if $\alpha$ is a zero of $p$, then $\alpha^3=\alpha+1$, and so $q(\alpha)^3=q(\alpha)+1$, which is</p>
<p>$$(b_2\alpha^2+b_1\alpha+b_0)^3=b_2\alpha^2+b_1\alpha+b_0+1$$</p>
<p>working out the constant terms gives $b_2^3+3b_2^2b_1+b_1^3+b_0^3+6b_0b_1b_2=b_0+1$ which isn't very useful either.</p>
<p>Please, enlighten me. Has there been done work on this subject, am I missing something obvious, or perhaps you see something that I missed?</p>
| lhf | 589 | <p>Let $\alpha=\alpha_1, \alpha_2, \ldots, \alpha_m$ be the distinct roots of $p$.</p>
<p>Choose a permutation $\sigma$ of $1,2,\dots,m$ without fixed points. For instance, an $m$-cycle such as $(12\cdots m)$.</p>
<p>Let $q$ be the unique polynomial such that $q(\alpha_i)=\alpha_{\sigma(i)}$. That will work, but won't have degree $n-1$.</p>
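<p>A worked instance of this construction (the sample polynomial $p(x)=(x-1)(x-2)(x-3)$ below is a hypothetical choice for illustration): its roots are $1,2,3$; take the fixed-point-free permutation $1\to2\to3\to1$ and let $q$ be the quadratic through $(1,2)$, $(2,3)$, $(3,1)$. Iterating $q$ from the root $\alpha=1$ then visits every root of $p$.</p>

```python
# Build q by Lagrange interpolation and iterate it from alpha = 1.

def lagrange(points):
    # Return the interpolating polynomial through the (x_i, y_i) pairs,
    # as a callable.
    def q(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return q

q = lagrange([(1, 2), (2, 3), (3, 1)])

alpha = 1.0
orbit = []
for _ in range(3):
    alpha = q(alpha)
    orbit.append(round(alpha, 6))
print(orbit)  # the iterates hit every root of p
```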
|
3,464,342 | <blockquote>
<p>For <span class="math-container">$j\in \mathbb{N}$</span> let <span class="math-container">$$M_j=\{f\in L^2([0,1]):\int_0^1 |f|^2 dx \leq j\}$$</span>
(a) Establish that <span class="math-container">$L^2([0,1])=\cup_{j\in \mathbb{N}}M_j$</span>.</p>
<p>(b) Show that each <span class="math-container">$M_j$</span> is a closed subset in <span class="math-container">$L^1([0,1])$</span>.</p>
<p>(c) Show that the interior of each <span class="math-container">$M_j$</span> in the norm topology of <span class="math-container">$L^1([0,1])$</span> is empty.</p>
<p>(d) From (a)-(c) it appears that <span class="math-container">$L^2([0,1])$</span> is the countable union of sets with empty interior. Explain why this does not contradict Baire's theorem.</p>
</blockquote>
<p>I believe (a) is obvious.</p>
<p>For (b) I need to show if <span class="math-container">$\int |f_n|^2 \leq j$</span> for <span class="math-container">$j\in \mathbb{N}$</span> and <span class="math-container">$\int |f_n-f| \to 0$</span> then also <span class="math-container">$\int |f|^2\leq j$</span>. To relate <span class="math-container">$L^2$</span> and <span class="math-container">$L^1$</span> I was thinking of using Cauchy-Schwarz and saying <span class="math-container">$\int |f_n-f|\leq (\int |f_n-f|^2)^{\frac12}$</span> but I need the inequality to go the other way.</p>
<p>For (c) I assume there exists an <span class="math-container">$M_j$</span> such that <span class="math-container">$O\in M_j$</span> where <span class="math-container">$O$</span> is open. Then there exists <span class="math-container">$f \in M_j$</span> and a sequence <span class="math-container">$g_j \in M_j$</span> such that <span class="math-container">$\int |f-g_j|<\epsilon$</span>. I somehow want to obtain a contradiction.</p>
<p>I found this helpful post <a href="https://math.stackexchange.com/questions/2560595/set-with-empty-interior-in-l10-1">Set with empty interior in $L^1([0,1])$</a> but there <span class="math-container">$f\in L^1([0,1])$</span> in the definition of <span class="math-container">$M_j$</span> so I am not sure if I can use it. There they argue if <span class="math-container">$M_j \ni f_k\to f$</span> in <span class="math-container">$L^1$</span> then for some subsequence <span class="math-container">$f_{k_n}\to f$</span> a.e. Then by Fatou <span class="math-container">$\int |f|^2\leq \lim \inf \int |f_{k_n}|^2\leq j$</span> so <span class="math-container">$f\in M_j$</span>. Wouldn't this argument show <span class="math-container">$M_j$</span> is closed in any <span class="math-container">$L^p$</span>-space then as we can always extract an almost everywhere converging subsequence?</p>
<p>For (d) is the problem that <span class="math-container">$M_j$</span> is closed and of empty interior in <span class="math-container">$L^1([0,1])$</span> but it is defined as a subset of <span class="math-container">$L^2([0,1])$</span>?</p>
| Jochen | 38,982 | <p>You can prove (b) without touching any integral: <span class="math-container">$L^2[0,1]$</span> is reflexive (as a Hilbert space) so that the closed balls <span class="math-container">$M_j$</span> are weakly compact. The inclusion <span class="math-container">$i:L^2[0,1]\hookrightarrow L^1[0,1]$</span> is continuous and hence weakly continuous (just by <em>abstract nonsense</em>) so that <span class="math-container">$M_j=i(M_j)$</span> is weakly compact in <span class="math-container">$L^1[0,1]$</span>, hence weakly closed and thus also norm closed.</p>
|
1,425,907 | <p>Show that $$1+e^{-j\theta} =2e^{-j\theta/2}*\cos{\frac{\theta}2}$$</p>
<p>I know of the Euler equation: $e^{j\theta}=\cos(\theta)+j\sin(\theta)$ but am unsure how to simply show that the above are equal.</p>
| Asinomás | 33,907 | <p>You can use Gauss's formula. It tells you the sum of an arithmetic sequence $a_1+a_2+\dots a_n$ is $\frac{(a_1+a_n)n}{2}$.</p>
<p>In this case we get that the sum of the first $n$ terms is $\frac{(-12+(-12+9(n-1)))n}{2}=\frac{(9n-9-24)n}{2}=\frac{(9n-33)n}{2}$. This is because the first term is $-12$ and the $n$'th term is $-12+(n-1)9$.</p>
<p>We want $\frac{(9n-33)n}{2}=363$ so $(9n-33)n=726\iff 9n^2-33n-726=0$.</p>
<p>This quadratic has roots $\frac{-22}{3}$ and $11$, so $11$ is your answer.</p>
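<p>A quick numeric sanity check of the argument above — a minimal Python sketch of mine (the helper name <code>partial_sum</code> is illustrative, not part of the original answer):</p>

```python
# Sum of the first n terms of the arithmetic sequence starting at -12
# with common difference 9, via Gauss's formula n*(a_1 + a_n)/2.
def partial_sum(n, a1=-12, d=9):
    an = a1 + (n - 1) * d          # the n-th term
    return (a1 + an) * n // 2      # always an integer here

# Find the n whose partial sum equals 363.
n = next(k for k in range(1, 100) if partial_sum(k) == 363)
print(n)  # -> 11
```

<p>This agrees with the positive root <code>n = 11</code> of the quadratic <code>9n^2 - 33n - 726 = 0</code>.</p>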
|
1,425,907 | <p>Show that $$1+e^{-j\theta} =2e^{-j\theta/2}*\cos{\frac{\theta}2}$$</p>
<p>I know of the Euler equation: $e^{j\theta}=\cos(\theta)+j\sin(\theta)$ but am unsure how to simply show that the above are equal.</p>
| Mahie | 270,124 | <p>$S_n={n\over 2}(2a_1+(n-1)d)$</p>
<p>from the question,</p>
<p>$a_1= -12$ (i.e first term) and the common difference $d= (-3)-(-12) = 9$
and as sum to get is $363$ that will be $S_n$.</p>
<p>$S_n= 363$.
Substitute every term in the above formula,
$$363= {n\over 2}[2(-12)+(n-1)(9)]$$
$$363= {n\over 2}[-24+9n-9]$$
Solving the equation gives $n= {-22\over 3}$ or $n=11$; since $n$ must be a positive integer, the answer is $n=11$. :) </p>
|
617,927 | <p>Find the taylor expansion of $\sin(x+1)\sin(x+2)$ at $x_0=-1$, up to order $5$.</p>
<p><strong>Taylor Series</strong></p>
<p>$$f(x)=f(a)+(x-a)f'(a)+\frac{(x-a)^2}{2!}f''(a)+...+\frac{(x-a)^r}{r!}f^{(r)}(a)+...$$</p>
<p>I've got my first term...</p>
<p>$f(a) = \sin(-1+1)\sin(-1+2)=\sin(0)\sin(1)=0$</p>
<p>Now, I've calculated $f'(x)=\sin(x+1)\cos(x+2)+\sin(x+2)\cos(x+1)$</p>
<p>So that $f'(-1) = \sin(1) = 0.8414709848$</p>
<p>This means my second term would be $(x+1)(0.8414709848).$</p>
<p>But, this doesn't seem to be nice and neat like the other expansions I have done and I can't figure out what I've done wrong.</p>
<p>Merry Christmas and thanks in advance.</p>
| Claude Leibovici | 82,404 | <p><strong>HINT</strong></p>
<p>Build the Taylor series for each term up to order 5 and multiply them, ignoring terms $(x+1)^m $with $m > 5$. </p>
<p>Leave coefficients $\cos(1)$ and $\sin(1)$, without computing them. </p>
<p>I suppose you are able to continue from here. If not, just post. Merry Xmas.</p>
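<p>To illustrate the hint numerically — a Python sketch of my own (not part of the hint itself) that multiplies the two series in <span class="math-container">$u=x+1$</span>, keeping terms up to order 5:</p>

```python
import math

s1, c1 = math.sin(1.0), math.cos(1.0)

# Coefficients in u^0..u^5 of sin(u) and of
# sin(u+1) = sin(1)*cos(u) + cos(1)*sin(u).
sin_u  = [0.0, 1.0, 0.0, -1/6, 0.0, 1/120]
sin_u1 = [s1, c1, -s1/2, -c1/6, s1/24, c1/120]

# Multiply the polynomials, discarding terms of order > 5.
prod = [0.0] * 6
for i, a in enumerate(sin_u):
    for j, b in enumerate(sin_u1):
        if i + j <= 5:
            prod[i + j] += a * b

# Spot-check against sin(x+1)*sin(x+2) itself near x = -1, i.e. u near 0.
u = 0.001
approx = sum(c * u**k for k, c in enumerate(prod))
exact = math.sin(u) * math.sin(u + 1)
print(abs(approx - exact) < 1e-12)  # -> True
```

<p>Note that <code>prod[1]</code> comes out as <span class="math-container">$\sin(1)$</span>, matching the coefficient <span class="math-container">$f'(-1)=\sin(1)$</span> computed in the question.</p>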
|
833,827 | <p>I am trying to refresh on algorithm analysis. I am looking for a refresher on summation formulas.<br>
E.g.<br>
I can derive the $$\sum_{i = 0}^{N-1}i$$ to be $N(N-1)/2$, but I am rusty on the more complex ones, e.g. something like $$\sum_{i = 0}^{N-1}{\sum_{j = i+1}^{N-1}\sum_{k=j+1}^{N-1}}$$<br>
Is there a good refresher material for this?<br>
In my example my result of the inner most loop is:<br>
$$N(N-1)(N-2)/2$$</p>
<p>which is wrong though </p>
<p><strong>UPDATE</strong><br>
The sums I am describing are basically representing the following algorithm: </p>
<pre><code>for (i = 0; i < n; i++) {
for( j = i+1; j < n; j++) {
        for (k = j + 1; k < n; k++) {
//code
}
}
}
</code></pre>
<p>This algorithm is <code>O(N^3)</code> according to all textbooks by definition of its structure. I am not sure why the answers are giving me an <code>O(N^4)</code></p>
| Claude Leibovici | 82,404 | <p>For the problem in your post, I suppose that what you want to compute is $$\sum_{i = 0}^{N-1}{\sum_{j = i+1}^{N-1}\sum_{k=j+1}^{N-1}}k$$ For the most inner loop $$\sum_{k=j+1}^{N-1}k=\frac{1}{2} (N-j-1) (N+j)$$ So, for the middle loop $$\sum_{j=i+1}^{N-1}\frac{1}{2} (N-j-1) (N+j)=\frac{1}{6} (N-i-1) (N-i-2) (2 N+i)$$ and finally for the outer loop $$\sum_{i=0}^{N-1}\frac{1}{6} (N-i-1) (N-i-2) (2 N+i)=\frac{1}{24} (N-2) (N-1) N (3 N-1)$$ These results have been obtained using Faulhaber's formulas which give the sums of powers of positive integers. You must take into account the fact that, except the first one, the loops do not start at $0$ (this being particularly crucial for the most inner loop).</p>
<p>I hope and wish that I did not introduce any typo.</p>
<p>Concerning a refresher, I suggest you google "sums of powers of positive integers".</p>
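<p>A brute-force check of the final closed form — a short Python sketch of mine (function names are illustrative):</p>

```python
# Compare the triple sum computed directly with the closed form
# (N-2)(N-1)N(3N-1)/24 derived above.
def triple_sum(N):
    return sum(k for i in range(N)
                 for j in range(i + 1, N)
                 for k in range(j + 1, N))

def closed_form(N):
    return (N - 2) * (N - 1) * N * (3 * N - 1) // 24

assert all(triple_sum(N) == closed_form(N) for N in range(1, 15))
print(closed_form(10))  # -> 870
```
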
|
833,827 | <p>I am trying to refresh on algorithm analysis. I am looking for a refresher on summation formulas.<br>
E.g.<br>
I can derive the $$\sum_{i = 0}^{N-1}i$$ to be N(N-1)/2 but I am rusty on the and more complex e.g. something like $$\sum_{i = 0}^{N-1}{\sum_{j = i+1}^{N-1}\sum_{k=j+1}^{N-1}}$$<br>
Is there a good refresher material for this?<br>
In my example my result of the inner most loop is:<br>
$$N(N-1)(N-2)/2$$</p>
<p>which is wrong though </p>
<p><strong>UPDATE</strong><br>
The sums I am describing are basically representing the following algorithm: </p>
<pre><code>for (i = 0; i < n; i++) {
for( j = i+1; j < n; j++) {
        for (k = j + 1; k < n; k++) {
//code
}
}
}
</code></pre>
<p>This algorithm is <code>O(N^3)</code> according to all textbooks by definition of its structure. I am not sure why the answers are giving me an <code>O(N^4)</code></p>
| Perry Elliott-Iverson | 125,899 | <p>Here's a bit more detailed solution. Knowing the following three summations will help:</p>
<p>$$\sum_{i=0}^{N} i = \frac{N(N+1)}{2}$$</p>
<p>$$\sum_{i=0}^{N} i^2 = \frac{N(N+1)(2N+1)}{6}$$</p>
<p>$$\sum_{i=0}^{N} i^3 = \frac{N^2(N+1)^2}{4}$$</p>
<p>For the innermost sum:</p>
<p>$$\sum_{k=j+1}^{N-1}k = \sum_{k=0}^{N-1}k - \sum_{k=0}^{j}k = \frac{N(N-1)}{2} - \frac{j(j+1)}{2} = \frac{1}{2}(N(N-1) - j^2 - j)$$</p>
<p>For the middle sum:</p>
<p>\begin{align} \\
\sum_{j = i+1}^{N-1}\sum_{k=j+1}^{N-1}k &= \frac{1}{2}\sum_{j = i+1}^{N-1} (N(N-1) - j^2 - j) \\
&= \frac{1}{2}\left(N(N-1)(N-1-(i+1)+1) - \sum_{j = i+1}^{N-1}(j^2+j)\right) \\
&= \frac{1}{2}\left(N(N-1)(N-i-1) - \sum_{j = i+1}^{N-1}j^2 - \sum_{j = i+1}^{N-1}j\right)\\
&= \frac{1}{2}\left(N(N-1)^2 - N(N-1)i - \left(\sum_{j = 0}^{N-1}j^2 - \sum_{j = 0}^{i}j^2\right) - \left(\sum_{j = 0}^{N-1}j - \sum_{j = 0}^{i}j\right)\right)\\
&= \frac{1}{2}\left(N(N-1)^2 - N(N-1)i - \left(\frac{N(N-1)(2N-1)}{6} - \frac{i(i+1)(2i+1)}{6}\right) - \left(\frac{N(N-1)}{2} - \frac{i(i+1)}{2}\right)\right)\\
&= \frac{1}{12}\left(6N(N-1)^2 - N(N-1)(2N-1) - 3N(N-1) - 6N(N-1)i + i(i+1)(2i+1) + 3i(i+1)\right)\\
&= \frac{1}{12}\left(4N(N-1)(N-2) - 6N(N-1)i + 2i^3 + 6i^2 + 4i\right)\\
&= \frac{1}{6}\left(2N(N-1)(N-2) - 3N(N-1)i + i^3 + 3i^2 + 2i\right)\\
\end{align}</p>
<p>And finally the outermost sum:</p>
<p>\begin{align} \\
\sum_{i = 0}^{N-1}\sum_{j = i+1}^{N-1}\sum_{k=j+1}^{N-1}k &= \frac{1}{6}\sum_{i = 0}^{N-1}\left(2N(N-1)(N-2) - 3N(N-1)i + i^3 + 3i^2 + 2i\right)\\ \\
&= \frac{1}{6}\left(2N^2(N-1)(N-2) - 3N(N-1)\sum_{i = 0}^{N-1}i + \sum_{i = 0}^{N-1}i^3 + 3\sum_{i = 0}^{N-1}i^2 + 2\sum_{i = 0}^{N-1}i\right)\\
&= \frac{1}{6}\left(2N^2(N-1)(N-2) - 3N(N-1)\frac{N(N-1)}{2} + \frac{N^2(N-1)^2}{4} + 3\frac{N(N-1)(2N-1)}{6} + 2\frac{N(N-1)}{2}\right)\\
&= \frac{N(N-1)}{6}\left(2N(N-2) - \frac{3N(N-1)}{2} + \frac{N(N-1)}{4} + \frac{(2N-1)}{2} + 1\right)\\
&= \frac{N(N-1)}{24}\left(8N(N-2) - 6N(N-1) + N(N-1) + 2(2N-1) + 4\right)\\
&= \frac{N(N-1)}{24}(3N^2-7N+2)\\
&= \frac{N(N-1)(N-2)(3N-1)}{24}\\
\end{align}</p>
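<p>The intermediate closed form for the middle sum can also be checked directly — a Python sketch of my own (assuming integer <span class="math-container">$0 \le i \le N-1$</span>):</p>

```python
# Verify: sum_{j=i+1}^{N-1} sum_{k=j+1}^{N-1} k
#       = (2N(N-1)(N-2) - 3N(N-1)i + i^3 + 3i^2 + 2i) / 6
def middle_brute(N, i):
    return sum(k for j in range(i + 1, N) for k in range(j + 1, N))

def middle_closed(N, i):
    return (2*N*(N-1)*(N-2) - 3*N*(N-1)*i + i**3 + 3*i**2 + 2*i) // 6

assert all(middle_brute(N, i) == middle_closed(N, i)
           for N in range(1, 12) for i in range(N))
print(middle_closed(5, 0))  # -> 20
```
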
|
3,735,798 | <blockquote>
<p><strong>QUESTION:</strong> Given a square <span class="math-container">$ABCD$</span> with two consecutive vertices, say <span class="math-container">$A$</span> and <span class="math-container">$B$</span> on the positive <span class="math-container">$x$</span>-axis and positive <span class="math-container">$y$</span>-axis respectively. Suppose the other vertex <span class="math-container">$C$</span> lying in the first quadrant has coordinates <span class="math-container">$(u , v)$</span>. Then find the area of the square <span class="math-container">$ABCD$</span> in terms of <span class="math-container">$u$</span> and <span class="math-container">$v$</span>.</p>
</blockquote>
<hr>
<p><strong>MY APPROACH:</strong> I was trying to solve it out using complex numbers, but I need a minor help. I have assumed <span class="math-container">$A$</span> to be <span class="math-container">$(x_1+0i)$</span>, <span class="math-container">$B$</span> to be <span class="math-container">$(0+y_2i)$</span> and <span class="math-container">$C$</span> is <span class="math-container">$(u+vi)$</span>. We know that multiplying a point by <span class="math-container">$i$</span> basically rotates it by <span class="math-container">$90°$</span>, <strong>about the origin</strong>. Here, <span class="math-container">$C$</span> is nothing but the reflection of <span class="math-container">$A$</span> about the line <span class="math-container">$BD$</span>. So if I can somehow rotate <span class="math-container">$A$</span> about <span class="math-container">$B$</span> by <span class="math-container">$90°$</span> then we will get <span class="math-container">$x_1$</span> and <span class="math-container">$y_2$</span> in terms of <span class="math-container">$u$</span> and <span class="math-container">$v$</span>.
This is where I am stuck. How to rotate a point with respect to another?</p>
<blockquote>
<p>Note that this question has been asked before. But I want to know how to solve it using complex numbers..</p>
</blockquote>
<p>Any answers, possibly with a diagram will be much helpful..</p>
<p>Thank you so much..</p>
| Théophile | 26,091 | <p>You can think of changing your frame of reference so that you're rotating around <span class="math-container">$B$</span>. Look at the vector <span class="math-container">$BA = x - yi$</span>. Then <span class="math-container">$BC = (BA)i = y + xi$</span>.</p>
<p>In other words, <span class="math-container">$C = B + BC = (yi) + (y + xi) = y + (x+y)i$</span>. Therefore <span class="math-container">$u = y$</span> and <span class="math-container">$v = x+y$</span>.</p>
<p>You can calculate the area of the square in terms of <span class="math-container">$x$</span> and <span class="math-container">$y$</span>, then convert that to <span class="math-container">$u$</span> and <span class="math-container">$v$</span>.</p>
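<p>A small numeric illustration of the rotation with Python's built-in complex type (the concrete values <span class="math-container">$x=3$</span>, <span class="math-container">$y=4$</span> are just an example of mine):</p>

```python
# Rotate the vector BA by 90 degrees counterclockwise (multiply by i)
# to get BC, then recover C = B + BC = y + (x+y)i.
x, y = 3.0, 4.0
A, B = complex(x, 0.0), complex(0.0, y)
BA = A - B            # x - yi
BC = BA * 1j          # 90-degree counterclockwise rotation
C = B + BC
print(C)              # -> (4+7j), i.e. u = y = 4 and v = x + y = 7
print(abs(BA) ** 2)   # -> 25.0, the area of the square
```
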
|
1,675 | <p>This is a follow-up to <a href="https://mathoverflow.net/questions/1039/explicit-direct-summands-in-the-decomposition-theorem">this post</a> on the Decomposition Theorem. Hopefully, this will also invite some discussion about the theorem and perverse sheaves in general.</p>
<p>My question is how does one use the Decomposition Theorem in practice? Is there any way to pin down the subvarieties and local systems that appear in the decomposition. For example, how do you compute intesection homology complexes using this theorem? Does anyone have a link to a source with worked out examples?</p>
<p>Another related question: What is the deep part of the theorem? Is it the fact that the pushforward of a perverse sheaf is isomorphic to its perverse hypercohomology? Is it the fact that these pieces are semisimple? Or are these both hard statements? And what is so special about algebraic varieties?</p>
| Andreas Holmstrom | 349 | <p>There is a winter school on the decomposition theorem in Freiburg, Germany, 22-26 Feb 2010. <a href="http://home.mathematik.uni-freiburg.de/kebekus/FebSchool/" rel="nofollow">Link</a>.</p>
|
1,560,050 | <p>I want to solve the homogenous part of a stretched string problem where $y=y(x)$.</p>
<p>$$y'' + y = 0$$</p>
<p>with the boundary conditions such that: $y(0)=y(\pi/2)=0$</p>
<p>The differential equation gives rise to a solution of the form:
$$y = a \cos(x) + b \sin(x)$$</p>
<p>But when applying the boundary conditions I end up with only trivial solution ($a=b=0$).</p>
<p>Have I made a mistake or does these B.C only lead to $a=b=0$?</p>
| bartgol | 33,868 | <p>As other people said, the only solution to the problem <em>as it is written now</em> is the trivial one. But perhaps you misread the exercise and the boundary conditions are $y(0) = y(\pi) = 0$ or $y(0) = y(2\pi)=0$? In that case you will have non-trivial solutions.</p>
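<p>One can also see this as a <span class="math-container">$2\times 2$</span> linear system for <span class="math-container">$(a,b)$</span> — a small numeric sketch of mine:</p>

```python
import math

# Boundary conditions y(0) = 0 and y(pi/2) = 0 applied to
# y = a*cos(x) + b*sin(x) give the linear system M @ (a, b) = 0.
M = [[math.cos(0.0), math.sin(0.0)],
     [math.cos(math.pi / 2), math.sin(math.pi / 2)]]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(det)  # -> 1.0; a nonzero determinant forces a = b = 0
```

<p>With the boundary conditions changed to <span class="math-container">$y(0)=y(\pi)=0$</span>, the corresponding determinant vanishes and nontrivial solutions appear, as noted above.</p>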
|
1,125,839 | <p>I'm having a bit of trouble proving the following:
$$\sum_{k=1}^n k {n \choose k } = n\cdot 2^{n-1}$$</p>
<p>I always seem to get to the line: $2^{n-1} + 1 = 2^n$ which I know is untrue.</p>
<p>Could anyone help me prove this? </p>
| Adam Hughes | 58,831 | <p>No such subgroup exists. Pro-<span class="math-container">$p$</span> groups with the second definition (i.e. inverse limit of discrete, finite <span class="math-container">$p$</span>-groups) can be easily shown to be equivalent to the first definition:</p>
<p><span class="math-container">$$G/N\cong P$$</span></p>
<p>where <span class="math-container">$N$</span> is open, normal and <span class="math-container">$P$</span> is a finite <span class="math-container">$p$</span>-group.</p>
<p>How so? Since <span class="math-container">$N$</span> is normal it is the union of some open subgroups from a basis of open normal subgroups around the identity, however we know that if</p>
<p><span class="math-container">$$G=\varprojlim_{i\in I} G_i$$</span></p>
<p>with each <span class="math-container">$G_i$</span> a discrete, finite <span class="math-container">$p$</span>-group, then such a basis is given by</p>
<p><span class="math-container">$$U_i=\pi_i^{-1}(G_i)$$</span></p>
<p>If</p>
<p><span class="math-container">$$N=\bigcup_{i\in J\subseteq I} U_i$$</span></p>
<p>select any <span class="math-container">$i_0\in J$</span> then we have a surjective homomorphism:</p>
<p><span class="math-container">$$G/U_{i_0}\to G/N$$</span></p>
<p>with kernel <span class="math-container">$N/ U_{i_0}$</span>, hence <span class="math-container">$G/N$</span> is the homomorphic image of a finite <span class="math-container">$p$</span>-group, and is hence a <span class="math-container">$p$</span>-group.</p>
<p><strong>(Edit)</strong> For I think, following the comments, I should include the nitty gritty of the reduction to the closed case since there's enough confusion to merit it. Throughout we use <span class="math-container">$|\cdot|$</span> for the order of an element and of a subgroup, understood in the generalized sense of profinite groups (i.e. supernatural numbers)</p>
<blockquote>
<p>The basic idea: just use the Lagrange theorem for profinite groups.</p>
<p>The problem: indices are only defined for <strong>closed</strong> subgroups.</p>
</blockquote>
<p>If <span class="math-container">$|A|=p^nm$</span> with <span class="math-container">$(p,m)=1$</span> and select <span class="math-container">$a\in A$</span> such that <span class="math-container">$|a|\big| m$</span>, which is possible by Cauchy's theorem. Name the projection map <span class="math-container">$\pi:G\to G/N\cong A$</span> (first isomorphism theorem) is surjective, we may select a lift <span class="math-container">$\stackrel{\sim}{a}\in G$</span>, and by definition <span class="math-container">$\left|\pi\left(\stackrel{\sim}{a}\right)\right|=|a|$</span>, we have just changed our context from <span class="math-container">$A$</span> to a subgroup of order coprime to <span class="math-container">$A$</span>--namely <span class="math-container">$\langle a\rangle$</span>--so that we may assume that <span class="math-container">$p\not\big| |A|$</span> rather than the weaker condition <span class="math-container">$|A|\ne p^n$</span>. Denote by <span class="math-container">$H$</span> the closure of <span class="math-container">$\langle\stackrel{\sim}{a}\rangle$</span> in <span class="math-container">$G$</span>. Then <span class="math-container">$|\stackrel{\sim}{a}|=m$</span> in <span class="math-container">$H/H\cap N$</span>, which is finite because <span class="math-container">$N\cap H$</span> is clearly relatively open in <span class="math-container">$H$</span>.</p>
<p>Since <span class="math-container">$H$</span> is closed, <span class="math-container">$m\big||H|$</span>, and by Lagrange <span class="math-container">$|H|\; \bigg|\; |G|$</span>. However <span class="math-container">$(m,p)=1$</span> and the only prime dividing <span class="math-container">$|G|$</span> is <span class="math-container">$p$</span>, hence <span class="math-container">$m=1$</span>, so that <span class="math-container">$\langle a\rangle\le A$</span> is the trivial subgroup and <span class="math-container">$|A|=p^n$</span>.</p>
|
1,125,839 | <p>I'm having a bit of trouble proving the following:
$$\sum_{k=1}^n k {n \choose k } = n\cdot 2^{n-1}$$</p>
<p>I always seem to get to the line: $2^{n-1} + 1 = 2^n$ which I know is untrue.</p>
<p>Could anyone help me prove this? </p>
| user 59363 | 192,084 | <p>Look at Proposition 4.2.3 in the book by Ribes-Zalesskii, the whole section 4.2. will be interesting for you. Edit: the question is answered affirmatively there.</p>
|
3,872,033 | <p>I recently came across the following interesting problem.</p>
<p>Let <span class="math-container">$x_1,\cdots,x_n$</span> be i.i.d standard Gaussian variables. How to calculate the probability distribution of the sum of their absoulte value, i.e., how to calculate
<span class="math-container">$$\mathbb{P}(|x_1|+\cdots+|x_n|\leq nt).$$</span>
Here I use <span class="math-container">$nt$</span> instead of <span class="math-container">$t$</span> for sake of possible concise formula.</p>
<p>I cannot find out the exact value. However, a practical lower bound is also good. Here practical means the ratio between the exact value and lower bound is independent of <span class="math-container">$n$</span> and <span class="math-container">$t$</span>, and is small.</p>
<p>Thanks very much!</p>
| JimmyK4542 | 155,509 | <p>The moment generating function of a standard Gaussian R.V. <span class="math-container">$x$</span> is <span class="math-container">$\mathbb{E}[e^{tx}] = e^{t^2/2}$</span> for all <span class="math-container">$t \in \mathbb{R}$</span>.</p>
<p>Since <span class="math-container">$x_k$</span> is a standard Gaussian (and thus has a symmetric distribution), for any <span class="math-container">$t \in \mathbb{R}$</span>, the moment generating function of <span class="math-container">$|x_k|$</span> can be bounded by<span class="math-container">$$\mathbb{E}\left[e^{t|x_k|}\right] \le \mathbb{E}\left[e^{t|x_k|} + e^{-t|x_k|}\right] = \mathbb{E}\left[e^{tx_k} + e^{-tx_k}\right] = \mathbb{E}\left[e^{tx_k}\right] + \mathbb{E}\left[e^{-tx_k}\right] = 2\mathbb{E}\left[e^{tx_k}\right] = 2e^{t^2/2}.$$</span></p>
<p>Now, we simply repeat the usual derivation for Chernoff bounds. For any <span class="math-container">$\lambda > 0$</span>, we have
<span class="math-container">\begin{align*}
\mathbb{P}\left\{\dfrac{1}{n}\sum_{k = 1}^{n}|x_k| \ge t\right\} &= \mathbb{P}\left\{\exp\left(\dfrac{\lambda}{n}\sum_{k = 1}^{n}|x_k|\right) \ge e^{\lambda t}\right\} & \text{since} \ y \to e^{\lambda y} \ \text{is increasing}
\\
&\le e^{-\lambda t}\mathbb{E}\left[\exp\left(\dfrac{\lambda}{n}\sum_{k = 1}^{n}|x_k|\right)\right] & \text{Markov's Inequality}
\\
&= e^{-\lambda t}\mathbb{E}\left[\prod_{k = 1}^{n}e^{\tfrac{\lambda}{n}|x_k|}\right]
\\
&= e^{-\lambda t}\prod_{k = 1}^{n}\mathbb{E}\left[e^{\tfrac{\lambda}{n}|x_k|}\right] & \text{Since} \ x_1,\ldots,x_n \ \text{are independent}
\\
&\le e^{-\lambda t}\prod_{k = 1}^{n}2e^{\lambda^2/(2n^2)} & \text{use the mgf bound above}
\\
&= 2^ne^{\lambda^2/(2n)-\lambda t}
\end{align*}</span></p>
<p>Now take <span class="math-container">$\lambda = nt$</span> (which minimizes the upper bound) to get <span class="math-container">$$\mathbb{P}\left\{\dfrac{1}{n}\sum_{k = 1}^{n}|x_k| \ge t\right\} \le 2^ne^{-nt^2/2} \quad \text{for all} \quad t > 0.$$</span></p>
<p>EDIT: I just realized this is equivalent to using a union bound over the <span class="math-container">$2^n$</span> events of the form <span class="math-container">$\dfrac{1}{n}\displaystyle\sum_{k = 1}^{n}\epsilon_kx_k \ge t$</span> where <span class="math-container">$\epsilon_1,\ldots,\epsilon_k \in \{-1,1\}$</span>, and then applying the usual Gaussian tail bound.</p>
|
134,444 | <p>I have the following code that determines when the second business day of each month is (given a start and end date). I have a few If statements I would like to replace with functional programming.</p>
<pre><code>getAccrualDates[fromDate_List,toDate_List]:=
(
today = fromDate;
projectionDate =toDate;
(*If the projected date is before a payroll, we don't want to grab any business days from that month. This will prevent the case where we try an grab the second business day of the month (i.e. payroll day), but it not existing*)
If[dateBeforePayrollQ[projectionDate],projectionDate=DatePlus[{projectionDate[[1]],projectionDate[[2]],1},{-1,"Day"}],projectionDate];
(*Determine if today is before the payroll date of the current month. If it is, include all business days of the current month. Otherwise, create a list of business days starting with next month*)
If[dateBeforePayrollQ[fromDate],allBusinessDays = Take[DateList[#],3]&/@DayRange[DateObject[firstDayOfThisMonth],DateObject[projectionDate],"BusinessDay"];,allBusinessDays = Take[DateList[#],3]&/@DayRange[DateObject[firstDayOfNextMonth],DateObject[projectionDate],"BusinessDay"];];
(*Partition data by month*)
groupedBusinessDays = Flatten[GatherBy[Flatten[GatherBy[allBusinessDays,First],1],#[[2]]&],0];
(*Find all first payrolls between today and projectionDate*)
payrollDays = #[[2]]&/@groupedBusinessDays
)
</code></pre>
<p>Helper function below:</p>
<pre><code>(*Create a helper function that returns true if the given date is before payroll, false otherwise*)
dateBeforePayrollQ[theDate_List]:=
(
firstDayOfThisMonth = {theDate[[1]],theDate[[2]],1};
firstDayOfNextMonth = DatePlus[firstDayOfThisMonth,{1,"Month"}];
businessDaysOfThisMonth = Take[DateList[#],3]&/@DayRange[DateObject[firstDayOfThisMonth],DateObject[DatePlus[firstDayOfNextMonth,{-1,"Day"}]],"BusinessDay"];
If[businessDaysOfThisMonth[[2]][[3]]<= theDate[[3]],False,True]
)
</code></pre>
<p>Does anyone have any suggestions/better ways to accomplish this?</p>
<pre><code>getAccrualDates[{2016,9,1},{2017,1,4}]
(*{{2016,9,2},{2016,10,4},{2016,11,2},{2016,12,2},{2017,1,4}}*)
</code></pre>
| J. M.'s persistent exhaustion | 50 | <p>Pretty cute problem. Let's generate the required dates first:</p>
<pre><code>firstDays = NestList[DatePlus[#, {1, "Month"}] &, DateObject[{2016, 9, 1}], 4];
</code></pre>
<p>From there:</p>
<pre><code>getAccrualDates[date_DateObject] := NestWhile[DatePlus[#, 1] &, date,
! (DayMatchQ[#1, "BusinessDay"] && DayMatchQ[#2, "BusinessDay"]) &, 2]
Take[DateList[getAccrualDates[#]], 3] & /@ firstDays
{{2016, 9, 2}, {2016, 10, 4}, {2016, 11, 2}, {2016, 12, 2}, {2017, 1, 4}}
</code></pre>
|
871,581 | <p>I am trying to prove the identity below to help with the simplification of another function that I'm investigating as it doesn't appear to be a standard trig identity.</p>
<p>$$
\tan\left(x\right) + \tan\left( y \right) = \frac{{\sin\left( {x + y} \right)}}{{\cos\left( x \right)\cos\left( y \right)}}
$$</p>
<p>Any assistance gratefully appreciated.</p>
| Mario Krenn | 157,257 | <p>You are asking about a proof of the identity
$$
\tan\left(x\right) + \tan\left( y \right) = \frac{{\sin\left( {x + y} \right)}}{{\cos\left( x \right)\cos\left( y \right)}}
$$</p>
<p>Using $\tan(x)=\frac{\sin(x)}{\cos(x)}$, we get
$$\tan\left(x\right) + \tan\left( y \right) = \frac{\sin(x)}{\cos(x)} + \frac{\sin(y)}{\cos(y)}\\
=\frac{\sin(x)\cdot \cos(y) + \sin(y)\cdot \cos(x)}{\cos(x)\cdot \cos(y)}$$</p>
<p>Using the identity $\sin(x+y)=\sin(x)\cdot \cos(y) + \sin(y)\cdot \cos(x)$ gives you the answer of your question.</p>
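<p>A quick numeric spot-check of the identity (a sketch I would add; the points are sampled away from the poles of <span class="math-container">$\tan$</span>):</p>

```python
import math
import random

# Check tan(x) + tan(y) == sin(x+y) / (cos(x) cos(y)) at random points.
random.seed(1)
for _ in range(100):
    x, y = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
    lhs = math.tan(x) + math.tan(y)
    rhs = math.sin(x + y) / (math.cos(x) * math.cos(y))
    assert math.isclose(lhs, rhs, rel_tol=1e-9, abs_tol=1e-9)
print("identity holds at all sampled points")
```
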
|
4,545,300 | <p>Find the number of all <span class="math-container">$n$</span>, <span class="math-container">$1 \leq n \leq 25$</span> such that <span class="math-container">$n^2+15n+122$</span> is divisible by 6.</p>
<p><strong>My attempt</strong>. We know that:
<span class="math-container">\begin{align*}
n^2+15n+122 & \equiv n^2+3n+2 \pmod{6}
\end{align*}</span>
But <span class="math-container">$n^2+3n+2=(n+1)(n+2)$</span>, then <span class="math-container">$n^2+15n+122 \equiv (n+1)(n+2) \pmod{6}$</span>, now we have</p>
<p><span class="math-container">\begin{align*}
n(n^2+15n+122) & \equiv n(n+1)(n+2)\pmod{6} \\
n^3+15n^2+122n & \equiv 0 \pmod{6}
\end{align*}</span>
I have done this and I think I have complicated the problem even more.</p>
| John Douma | 69,810 | <p>You were almost there with your first line. You have <span class="math-container">$n^2+3n+2=(n+1)(n+2)\text{ mod 6}$</span></p>
<p><span class="math-container">$(n+1)$</span> and <span class="math-container">$(n+2)$</span> are two consecutive numbers so one of them is even. That gives you that this polynomial is divisible by <span class="math-container">$2$</span> for all <span class="math-container">$n$</span>.</p>
<p>If <span class="math-container">$n$</span> is either congruent to <span class="math-container">$1$</span> or <span class="math-container">$2$</span> mod <span class="math-container">$3$</span> then <span class="math-container">$(n+2)$</span> or <span class="math-container">$(n+1)$</span>, respectively, is divisible by <span class="math-container">$3$</span>. Therefore all non-multiples of <span class="math-container">$3$</span> are solutions to this problem.</p>
|
514 | <p>I just came back from my Number Theory course, and during the lecture there was mention of the Collatz Conjecture.</p>
<p>I'm sure that everyone here is familiar with it; it describes an operation on a natural number – <span class="math-container">$n/2$</span> if it is even, <span class="math-container">$3n+1$</span> if it is odd.</p>
<p>The conjecture states that if this operation is repeated, all numbers will eventually wind up at <span class="math-container">$1$</span> (or rather, in an infinite loop of <span class="math-container">$1-4-2-1-4-2-1$</span>).</p>
<p>I fired up Python and ran a quick test on this for all numbers up to <span class="math-container">$5.76 \times 10^{18}$</span> (using the powers of cloud computing and dynamic programming magic). Which is millions of millions of millions. And all of them eventually ended up at <span class="math-container">$1$</span>.</p>
<p>Surely I am close to testing every natural number? How many natural numbers could there be? Surely not much more than millions of millions of millions. (I kid.)</p>
<p>I explained this to my friend, who told me, "Why would numbers suddenly get different at a certain point? Wouldn't they all be expected to behave the same?"</p>
<p>To which I said, "No, you are wrong! In fact, I am sure there are many conjectures which have been disproved by counterexamples that are extremely large!"</p>
<p>And he said, "It is my conjecture that there are none! (and if any, they are rare)".</p>
<p>Please help me, smart math people. Can you provide a counterexample to his conjecture? Perhaps, more convincingly, several? I've only managed to find one! (Polya's conjecture). One, out of the many thousands (I presume) of conjectures. It's also one that is hard to explain the finer points to the layman. Are there any more famous or accessible examples?</p>
| BlueRaja - Danny Pflughoeft | 136 | <p>I heard this story from <a href="http://www.ams.sunysb.edu/~estie/estie.html" rel="noreferrer">Professor Estie Arkin</a> at Stony Brook <strike><em>(sorry, I don't know what conjecture she was talking about)</em></strike>:</p>
<blockquote>
<p>For weeks we tried to prove the conjecture (without success) while we left a computer running looking for counter-examples. One morning we came in to find the computer screen flashing: <em>"Counter-example found"</em>. We all thought that there must have been a bug in the algorithm, but sure enough, it was a valid counter-example.</p>
<p>I tell this story to my students to emphasize that <em>"proof by lack of counter-example"</em> is not a proof at all!</p>
</blockquote>
<hr>
<p><strong>[Edit]</strong> Here was the response from Estie:</p>
<blockquote>
<p>It is mentioned in our paper:<br>
<a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.57.5143&rep=rep1&type=pdf" rel="noreferrer">Hamiltonian Triangulations for Fast Rendering</a><br>
<sub><em>E.M. Arkin, M. Held, J.S.B. Mitchell, S.S. Skiena (1994). Algorithms -- ESA'94, Springer-Verlag, LNCS 855, J. van Leeuwen (ed.), pp. 36-47; Utrecht, The Netherlands, Sep 26-28, 1994.</em></sub> </p>
<p>Specifically section 4 of the paper, that gives an example of a set of points that does not have a so-called <em>"sequential triangulation"</em>.</p>
<p>The person who wrote the code I talked about is Martin Held.</p>
</blockquote>
|
1,114,554 | <p>Consider the following sets:</p>
<blockquote>
<blockquote>
<p><span class="math-container">$A=$</span> set of sequence of real nos.</p>
</blockquote>
</blockquote>
<blockquote>
<blockquote>
<p><span class="math-container">$B=$</span> set of sequence of positive real nos</p>
</blockquote>
</blockquote>
<blockquote>
<blockquote>
<p><span class="math-container">$C=$</span> <span class="math-container">$\mathbb R$</span></p>
</blockquote>
</blockquote>
<blockquote>
<blockquote>
<p><span class="math-container">$D= C[0,1]$</span></p>
</blockquote>
</blockquote>
<p>then prove that cardinality of <span class="math-container">$A$</span>,<span class="math-container">$B$</span>,<span class="math-container">$C$</span>,<span class="math-container">$D$</span> are same. I think there we can construct a map from <span class="math-container">$\Bbb R$</span> to <span class="math-container">$C[0,1]$</span> by associating each point of <span class="math-container">$\Bbb R$</span> to its constant function. Then this is bijective map so <span class="math-container">$C$</span> ,<span class="math-container">$D$</span> have same cardinality.</p>
| Learnmore | 294,365 | <p>$A$ has cardinality $|\mathbb R^{\mathbb N}|=c$ </p>
<p>Check that $B$ also has the same cardinality as $A$, and that $\mathbb R$ has cardinality $c$</p>
|
1,114,554 | <p>Consider the following sets:</p>
<blockquote>
<blockquote>
<p><span class="math-container">$A=$</span> set of sequence of real nos.</p>
</blockquote>
</blockquote>
<blockquote>
<blockquote>
<p><span class="math-container">$B=$</span> set of sequence of positive real nos</p>
</blockquote>
</blockquote>
<blockquote>
<blockquote>
<p><span class="math-container">$C=$</span> <span class="math-container">$\mathbb R$</span></p>
</blockquote>
</blockquote>
<blockquote>
<blockquote>
<p><span class="math-container">$D= C[0,1]$</span></p>
</blockquote>
</blockquote>
<p>then prove that cardinality of <span class="math-container">$A$</span>,<span class="math-container">$B$</span>,<span class="math-container">$C$</span>,<span class="math-container">$D$</span> are same. I think there we can construct a map from <span class="math-container">$\Bbb R$</span> to <span class="math-container">$C[0,1]$</span> by associating each point of <span class="math-container">$\Bbb R$</span> to its constant function. Then this is bijective map so <span class="math-container">$C$</span> ,<span class="math-container">$D$</span> have same cardinality.</p>
| String | 94,971 | <p>For $x\in[0,1]$ define $f_n(x)$ as the number formed by first writing the decimal expansion of $x=0.x_1x_2...$ (note that $1=0.999...$) and then counting through the digits but resetting the counter whenever we reach a new "maximal count" like this
$$
\begin{array}{c}
digits:&x_1&x_2&x_3&x_4&x_5&x_6&x_7&x_8&x_9&x_{10}&x_{11}&x_{12}&x_{13}&x_{14}&x_{15}&...\\
counter:&1&1&2&1&2&3&1&2&3&4&1&2&3&4&5&...
\end{array}
$$
and finally forming the number $f_n(x)\in[0,1]$ that has all the digits of $x$ for which the counter was $n$. This maps each $x\in[0,1]$ to an infinite sequence $\{f_n(x)\}_{n=1}^\infty$ in $[0,1]$. Note that $g(x)=\{f_n(x)\}_{n=1}^\infty$ is surjective since from $\{f_n(x)\}_{n=1}^\infty$ we can easily construct $x\in[0,1]$ mapping to it. It is now easy to see that
$$
h(x)=
\begin{cases}
\frac{1}{x}-1&x\in(0,1]\\
&\\
0&x=0
\end{cases}
$$
satisfies $h([0,1])=\mathbb R^+$. So $x\longmapsto\{h(f_n(x))\}_{n=1}^\infty$ defines a surjection from $D$ to $B$.</p>
<hr>
<p>With a little more work, it should be easy to construct functions mapping $[0,1]$ to $[-1,1]$ and then to $\mathbb R$, the first being a linear map, the second exploiting the ideas from $h$ above, and then combining those with $f_n(x)$ we quickly have a surjection from $D$ to $A$.</p>
<hr>
<p>To sum up, I just proved that $|D|\geq|B|$ and outlined that $|D|\geq|A|$. If you can show $|B|\geq|D|$ and $|A|\geq|C|\geq|D|$ you can conclude $|A|=|B|=|C|=|D|$ by combining results.</p>
|
4,002 | <p>I'm trying to obtain a series of points on the unit sphere with a somewhat homogeneous distribution, by minimizing a function depending on distances (I took $\exp(-d)$). My points are represented by spherical angles $\theta$ and $\phi$, starting by choosing equidistributed random vectors:</p>
<pre><code>pts = Apply[{2 π #1, ArcCos[2 #2 - 1]} &, RandomReal[1, {100, 2}], 1];
</code></pre>
<p>The energy function is defined first:</p>
<pre><code>energy[p_] := Module[{cart},
cart = Apply[{Sin[#1]*Cos[#2], Sin[#1]*Sin[#2], Cos[#1]} &, p, 1];
Total[Outer[Exp[-Norm[#1 - #2]] &, cart, cart, 1], 2]
]
</code></pre>
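For cross-checking numbers outside Mathematica, here is a rough Python port of the same energy function (my own sketch; it mirrors the code above, including the constant contribution of the self-pairs in the double sum):

```python
import math

def energy(pts):
    """Sum of Exp[-Norm[p_i - p_j]] over all ordered pairs (i, j), i == j
    included, matching Total[Outer[...], 2] above; pts holds angle pairs
    in the same convention as the Mathematica code."""
    cart = [(math.sin(a) * math.cos(b),
             math.sin(a) * math.sin(b),
             math.cos(a)) for a, b in pts]
    return sum(math.exp(-math.dist(p, q)) for p in cart for q in cart)

# Sanity check with the two poles: the self-pairs give 2*exp(0) and the two
# cross pairs give 2*exp(-2), so the total is 2 + 2*exp(-2).
print(energy([(0.0, 0.0), (math.pi, 0.0)]))
```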
<p>But now, I can’t manage to get the right routine for minimization. I tried <code>FindMinimum</code>, which does local minimization from a given starting point, which is what I want. But it should operate on an expression of literal variables, so I'm kind of screwed:</p>
<pre><code>FindMinimum[energy[p], {p, pts}]
</code></pre>
<p>This gives the following errors:</p>
<pre><code>Outer::normal: Nonatomic expression expected at position 2 in Outer[Exp[-Norm[#1-Slot[<<1>>]]]&,p,p,1]. >>
FindMinimum::nrnum: The function value […] is not a real number at {p} = […] >>
</code></pre>
<p>The above obviously doesn't work, but I don't think it's wise to introduce a series of 200 literal variables. There has to be another way, hasn't there? Or is there an efficient way of introducing a lot of variables?</p>
| acl | 16 | <p>I was planning to get to this later but I am not sure I'll have time today. To force it to work I can do</p>
<pre><code>pts = Apply[{2 \[Pi] #1, ArcCos[2 #2 - 1]} &, RandomReal[1, {10, 2}],
1];
energy[p_] :=
Module[{cart},
cart = Apply[{Sin[#1]*Cos[#2], Sin[#1]*Sin[#2], Cos[#1]} &, p, 1];
Total[Outer[Exp[-Norm[#1 - #2]] &, cart, cart, 1], 2]]
fvars = Table[{x[i], y[i]}, {i, 1, Length@pts}];
NMinimize[energy[fvars], Flatten@fvars]
(*{32.2548, {x[1] \[Rule] -2.0039, y[1] \[Rule] -0.778507,
x[2] \[Rule] -1.13768, y[2] \[Rule] -1.5639, x[3] \[Rule] 1.13769,
y[3] \[Rule] 0.00689281, x[4] \[Rule] 2.00391,
y[4] \[Rule] -0.778497, x[5] \[Rule] -2.00391,
y[5] \[Rule] 0.792293, x[6] \[Rule] -9.32847\[Times]10^-6,
y[6] \[Rule] 1.20086, x[7] \[Rule] 1.1377, y[7] \[Rule] -1.5639,
x[8] \[Rule] 3.14158, y[8] \[Rule] 2.10272, x[9] \[Rule] 2.0039,
y[9] \[Rule] 0.792295, x[10] \[Rule] -1.13769,
y[10] \[Rule] 0.00689875}}
*)
</code></pre>
<p>But I wouldn't do it this way. </p>
|
3,595,622 | <p><strong>Problem: Give an example of a linear continuum which is not the real line <span class="math-container">$\mathbb{R}$</span>, nor
topologically equivalent to a subspace of <span class="math-container">$\mathbb{R}$</span>.</strong></p>
<p><strong>Definition of Linear Continuum:</strong> Let X be a linearly ordered set with order <. We say that X is a linear continuum iff it satisfies the following two axioms:</p>
<p>(1) LUB: X has the least upper bound property.
(2) Betweenness: for all <span class="math-container">$x < y$</span> in <span class="math-container">$X$</span>, there exists <span class="math-container">$z \in X$</span> such that <span class="math-container">$x < z < y$</span>.</p>
<p>This is how I did it, not sure whether its accurate or not. </p>
<p>I tried to prove that <span class="math-container">$I \times I$</span> is not connected under the subspace topology of <span class="math-container">$\mathbb{R}^{2}$</span> with the dictionary order. Since <span class="math-container">$\{x\} \times I$</span> is open in <span class="math-container">$I \times I$</span> for each <span class="math-container">$x \in I$</span>, we can take, say, <span class="math-container">$\{a\} \times [a,b]$</span> and <span class="math-container">$\{y\} \times I$</span> with <span class="math-container">$y \in [a,b]$</span>, and say that <span class="math-container">$(\{a\} \times [a,b]) \cup (\{y\} \times I) = I \times I$</span> and <span class="math-container">$(\{a\} \times [a,b]) \cap (\{y\} \times I) = \emptyset$</span>. Hence it is not connected under the subspace topology of <span class="math-container">$\mathbb{R}^{2}$</span>, but it is still a linear continuum.</p>
<p>I was trying to come up with something else but unfortunately I couldn't get a better example. Need help from someone on this. Appreciate your time and patience. </p>
| freakish | 340,986 | <p><span class="math-container">$I\times I$</span> is clearly (path)connected under subspace topology of <span class="math-container">$\mathbb{R}^2$</span>. You incorrectly claim that <span class="math-container">$\{x\}\times I$</span> is open in <span class="math-container">$I\times I$</span>, it is not.</p>
<p>Unless you mean that you consider <span class="math-container">$I\times I$</span> with the topology induced by the dictionary order. However even under dictionary topology <span class="math-container">$\{x\}\times I$</span> is not open. And in fact <span class="math-container">$I\times I$</span> is connected, although not path connected, <a href="https://math.stackexchange.com/questions/980647/is-the-lexicographic-order-topology-on-the-unit-square-connected-path-connected">see here</a>. Therefore <span class="math-container">$I\times I$</span> cannot be homeomorphic to a subset of <span class="math-container">$\mathbb{R}$</span>, since for subsets of <span class="math-container">$\mathbb{R}$</span> connectedness and path connectedness coincide.</p>
<p>For a different example you can consider any well ordered set <span class="math-container">$\omega$</span> which is big enough, i.e. its cardinality is strictly greater than that of <span class="math-container">$\mathbb{R}$</span>. Then you take <span class="math-container">$\omega\times [0,1)$</span> with the dictionary order. This space is a linear continuum and cannot be a subspace of <span class="math-container">$\mathbb{R}$</span> regardless of topology, simply because it is bigger than <span class="math-container">$\mathbb{R}$</span>.</p>
|
443,578 | <blockquote>
<p>Is the limit
$$
e^{-x}\sum_{n=0}^N \frac{(-1)^n}{n!}x^n\to e^{-2x} \quad \text{as } \ N\to\infty \tag1
$$
uniform on $[0,+\infty)$? </p>
</blockquote>
<p>Numerically this appears to be true: see the difference of two sides in (1) for $N=10$ and $N=100$ plotted below. But the convergence is very slow (<strike>logarithmic</strike> error $\approx N^{-1/2}$ as shown by Antonio Vargas in his answer). In particular, putting $e^{-0.9x}$ and $e^{-1.9x}$ in (1) clearly makes convergence non-uniform. </p>
<p>One difficulty here is that the Taylor remainder formula is effective only up to $x\approx N/e$, and the maximum of the difference is at $x\approx N$.</p>
<p><img src="https://i.stack.imgur.com/Vuxmg.png" alt="N=10"></p>
<p><img src="https://i.stack.imgur.com/d0LHA.png" alt="enter image description here"></p>
<p>The question is inspired by an attempt to find an alternative proof of <a href="https://math.stackexchange.com/q/386807/">the fact that for every $\epsilon>0$ there is a polynomial $p$ such that $|f(x)-e^{-x}p(x)|<\epsilon$ for all $x\in[0,\infty)$</a>. </p>
| Antonio Vargas | 5,531 | <p>Thanks, this was a fun problem.</p>
<p>From the integral representation</p>
<p>$$
\sum_{k=0}^{n} \frac{x^k}{k!} = \frac{1}{n!} \int_0^\infty (x+t)^n e^{-t} \,dt \tag1
$$</p>
<p>we can derive the expression</p>
<p>$$
e^{-x} \sum_{k=0}^{n} \frac{(-x)^k}{k!} = e^{-2x} - \frac{e^{-2x} (-x)^{n+1}}{n!} \int_0^1 t^n e^{xt}\,dt. \tag2
$$</p>
<p>Now</p>
<p>$$
\int_0^1 t^n e^{xt}\,dt \leq e^x \int_0^1 t^n\,dt = \frac{e^x}{n+1}, \tag3
$$</p>
<p>so that</p>
<p>$$
\begin{align}
\left|\frac{e^{-2x} (-x)^{n+1}}{n!} \int_0^1 t^n e^{xt}\,dt\right| &\leq \frac{e^{-x} x^{n+1}}{(n+1)!} \\
&\leq \frac{e^{-n-1} (n+1)^{n+1}}{(n+1)!} \\
&\sim \frac{1}{\sqrt{2\pi n}}
\end{align} \tag4
$$</p>
<p>by Stirling's formula.</p>
<hr>
<p>Added by <em>40 votes</em> for those interested in the derivation of (2) from (1):
$$
\sum_{k=0}^{n} \frac{(-x)^k}{k!} = \frac{1}{n!} \int_0^x (t-x)^n e^{-t} \,dt +
\frac{1}{n!} \int_x^\infty (t-x)^n e^{-t} \,dt \tag{A}
$$
Substitute $u=t-x$ in the second integral on the right of (A):
$$\frac{1}{n!}\int_x^\infty (t-x)^n e^{-t} \,dt
=\frac{1}{n!}\int_0^\infty u^n e^{-u-x} \,du = e^{-x}
\tag{B}$$
Substitute $u=1-t/x$ in the first integral on the right of (A), noting that $(t-x)^n=(-x)^n u^n$ and $dt=(-x)du$:
$$\frac{1}{n!} \int_0^x (t-x)^n e^{-t} \,dt
= -\frac{(-x)^{n+1}}{n!} \int_0^1 u^n e^{xu-x} \,du = -\frac{e^{-x}(-x)^{n+1}}{n!} \int_0^1 u^n e^{xu} \,du \tag{C}
$$
Adding (B) and (C) and multiplying by $e^{-x}$, identity (2) follows. </p>
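As a side note (my own numerical check, not part of the proof), one can verify that the maximal deviation in (1) of the question indeed stays below the Stirling-sized estimate $1/\sqrt{2\pi n}$ from (4):

```python
import math

def partial(x, n):
    """e^{-x} * sum_{k=0}^{n} (-x)^k / k!, with the terms built incrementally."""
    s, term = 0.0, 1.0
    for k in range(n + 1):
        s += term
        term *= -x / (k + 1)
    return math.exp(-x) * s

n = 200
# The bound e^{-x} x^{n+1}/(n+1)! peaks at x = n + 1, so scan a grid around it.
err = max(abs(partial(x, n) - math.exp(-2 * x))
          for x in (i * 0.5 for i in range(4 * n)))
print(err, 1 / math.sqrt(2 * math.pi * n))  # the error stays below the bound
```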
|
2,309,123 | <p>This is a 2 part question.</p>
<ol>
<li><p>I have been studying a particular matrix group $G \le GL(n,\mathbb R)$ with $n \ge 3$ and I managed to show elements of my group $A \in G$ have the block structure
$$
A = \left(
\begin{array}{cc}
O(3) & 0 \\
A_{21} & A_{22}
\end{array}
\right)
$$
Now $A_{22}$ must be invertible since I started with $GL(n, \mathbb R)$. <strong>So is it true that this matrix is an element of the group $G = O(3) \times GL(n-3,\mathbb R)$?</strong> I'm not sure how to account for the "extra" elements $A_{21}$ when writing $G$ as a direct product of groups.</p></li>
<li><p>I have a function
$$
f:\mathbb R^n \rightarrow \mathbb R
$$
which happens to be invariant under the action of my group $G$. In other words, $f(Ax) = f(x)$ for each $A \in G$. There is a comment in a thread <a href="https://mathoverflow.net/questions/166197/is-group-theory-useful-in-any-way-to-optimization">here</a> which says if I want to minimize my function, I only need to search for solutions in the quotient space $\mathbb R^n / G$. I am having trouble understanding what this quotient space looks like - can anyone provide some intuition how to go about visualizing this quotient space?</p></li>
</ol>
| Travis Willse | 155,629 | <p>(1) We may write the matrix group as a semidirect product
$$(\textrm{O}(3) \times \textrm{GL}(n - 3, \Bbb R)) \leftthreetimes \textrm{M}((n - 3) \times 3, \Bbb R) .$$
Explicitly, the homomorphism $\phi : \textrm{O}(3) \times \textrm{GL}(n - 3, \Bbb R) \to \textrm{Aut}(\textrm{M}((n - 3) \times 3, \Bbb R))$ defining this product is $\phi(A, B)(C) = B C A^{-1}$.</p>
<p>(2) I'll assume you're working with the action given by restricting the standard action of $\textrm{GL}(n, \Bbb R)$ on $\Bbb R^n$ to $G$. It is useful to write this action in a block decomposition, as
$$\pmatrix{A & 0 \\ C & B} \cdot \pmatrix{{\bf x} \\ {\bf y}} = \pmatrix{A {\bf x} \\ C {\bf x} + B {\bf y}}.$$</p>
<p>Since $\textrm{O}(3)$ acts transitively on each sphere centered at the origin in $\Bbb R^3$, we may use an element of the form $\pmatrix{A & 0 \\ 0 & I_{n - 3}} \in G$ to map any element $({\bf x}, {\bf y}) \in \Bbb R^n$ to the element $((\lambda, 0, 0), {\bf y})$, $\lambda := |{\bf x}|$. </p>
<p>Similarly, since the orbits of $\textrm{GL}(n - 3, \Bbb R)$ on $\Bbb R^{n - 3}$ are the zero singleton and its complement, we may use an element of the form $\pmatrix{I_3 & 0 \\ 0 & B}$ to then bring an element of $\Bbb R^n$ to one of the form $((\lambda, 0, 0), (\epsilon, 0, \ldots, 0))$, where $\epsilon \in \{0, 1\}$.</p>
<p>If $\lambda \neq 0$, the element with $A = I_3$, $B = I_{n - 3}$ and $C = -\epsilon\lambda^{-1} E_{11}$ (where $E_{11}$ is the $(n - 3) \times 3$ matrix with $(1, 1)$ entry $1$ and all other entries $0$) maps $((\lambda, 0, 0), (\epsilon, 0, \ldots, 0))$ to $((\lambda, 0, 0), {\bf 0})$. Thus, any $G$-orbit contains an element of one of the following forms:
$$((\lambda, 0, 0), {\bf 0}), \quad \lambda \geq 0 \textrm{;} \qquad ({\bf 0}, (1, 0, \ldots, 0)) .$$
On the other hand the $G$-action preserves the length of the projection of an element of $\Bbb R^n$ onto its first three components, and the $G$-orbit of $0$ is $\{0\}$, so the above list parameterizes the $G$-orbits, that is, no two elements of the list are in the same $G$-orbit.</p>
|
1,401,898 | <p>I need a test for primality that I apply to $2^{255}-19$ (which is claimed to be prime) and certify to be correct with the ACL2 theorem prover. This means that I must be able to code the test in Common LISP, run it on this case in a reasonable period of time (I'd be happy if it ran in a day), and write a proof of correctness of the test that is simple enough to be mechanized in the ACL2 logic.</p>
| DanaJ | 117,584 | <p>It takes my C+GMP code under 0.4s to do a BLS75 theorem 5 proof on the number, so this seems like the easiest option. This involves finding some small factors of p-1, checking conditions, then verifying primality of each factor (note you don't need to factor p-1 completely). This example has lots of small factors, and the large resulting prime minus 1 factors easily, and so does the next one, with the final value being small enough to check with deterministic M-R or BPSW.</p>
<p>You may be able to use one of the earlier theorems, e.g. 3, which is slightly easier. I didn't check whether it could trivially do the whole chain by itself.</p>
<p>Using ECPP plus n-1 and n+1 finishes in about 0.01 seconds on my laptop, but that's a lot more coding. The BLS75-T3 or -T5 would be much easier to code.</p>
<p>BLS75: <a href="http://www.ams.org/journals/mcom/1975-29-130/S0025-5718-1975-0384673-1/S0025-5718-1975-0384673-1.pdf" rel="nofollow">http://www.ams.org/journals/mcom/1975-29-130/S0025-5718-1975-0384673-1/S0025-5718-1975-0384673-1.pdf</a></p>
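As a quick sanity check (a strong-pseudoprime test only, nothing like the certified proof the question asks for), Miller-Rabin is a few lines in any bignum-capable language; here is a Python sketch:

```python
def is_strong_probable_prime(n, bases=(2, 3, 5, 7, 11, 13)):
    """Miller-Rabin strong-pseudoprime test to the given bases."""
    if n < 2:
        return False
    for q in (2, 3, 5, 7, 11, 13):   # trial-divide by the base primes first
        if n % q == 0:
            return n == q
    d, s = n - 1, 0                  # write n - 1 = d * 2^s with d odd
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False             # a witnesses that n is composite
    return True

p = 2**255 - 19
print(is_strong_probable_prime(p))      # True
print(is_strong_probable_prime(p - 1))  # False (even)
```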
|
2,621,932 | <p><strong>Question:</strong> In a chess match, there are 16 contestants. Every player has to play each other player (like a round-robin). The player with the most wins/points wins the tournament.</p>
<p>a) How many games must be played until there is a victor? </p>
<p>b) If every player has to team up with each other player to play doubles chess, how many games must now be played until one of the teams is a victor? </p>
<p><strong>My Attempt:</strong></p>
<p>a) Each of the 16 players would have to play 15 other people, but Player 1 vs Player 16 is the same as Player 16 vs Player 1. Hence, $(16*15)/2$</p>
<p>b) No idea </p>
<p><strong>Official Answer:</strong> </p>
<p>a) ${}^{16}C_2$</p>
<p>b) ${}^{16}C_2/2$</p>
<p><strong>My Problem:</strong></p>
<p>a) ${}^{16}C_2$ is the same as my answer; however, I thought a combination would count how many different ways you can choose 2 people out of 16 people. The question asks how many games have to be played, so why does the number of games that have to be played equal the number of ways you can choose 2 people out of 16? </p>
<p>b) No idea</p>
| Especially Lime | 341,019 | <p>a) If I choose any two people there has to be one game between them, so the number of games is the same as the number of ways to choose two people. (The reason you use combination instead of permutation here is that it doesn't matter who is black and who is white.)</p>
<p>b) I can't see how the answer given is plausible for any interpretation of the question. </p>
<p>If the intended meaning was that every player teams up with <strong>one</strong> other player and a round-robin between the teams is played, there would be $8$ contestants (the teams) and so by the same argument as (a) you need ${}^8C_2={}^{16/2}C_2$ games. </p>
<p>On the other hand, if they really mean <strong>all possible teams</strong> are competing, there are ${}^{16}C_2$ possible teams, and each has to play each other, so it looks like you need ${}^{{}^{16}C_2}C_2$. However, this includes games where there is one player on both teams, which doesn't make much sense, so perhaps a better interpretation is that every team has to play every other team that neither of its players are on. This would give ${}^{16}C_2\times{}^{14}C_2/2$ games, because for each team there are ${}^{14}C_2$ teams they can play taken from the other $14$ players, and then you can swap the two teams to get the same game.</p>
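For what it's worth, the counts under the various interpretations above are easy to check numerically (my own sanity check, not part of the answer):

```python
from math import comb

print(comb(16, 2))                     # 120 games in the round-robin of (a)
print(comb(16, 2) // 2)                # 60, the official answer to (b)
print(comb(8, 2))                      # 28, if the 16 players form 8 fixed teams
print(comb(16, 2) * comb(14, 2) // 2)  # 5460, if all disjoint teams play
```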
|
125,610 | <p>I have a question about sets. I need to prove that: $$X \cap (Y - Z) = (X \cap Y) - (X \cap Z)$$</p>
<p>Now, I tried to prove that from both sides of the equation but had no luck.</p>
<p>For example, I tried to do something like this: $$X \cap (Y - Z) = X \cap (Y \cap Z')$$ but now I don't know how to continue.</p>
<p>From the other side of the equation I tried to do something like this: $$(X \cap Y) - (X \cap Z) = (X \cap Y) \cap (X \cap Z)' = (X \cap Y) \cap (X' \cup Z')$$ and from here I don't know what to do again.</p>
<p>I will be glad to hear how I should continue from here and what I did wrong. Thanks in advance.</p>
| Community | -1 | <p><strong>Hint</strong></p>
<ul>
<li><p>$A \cap (B \cup C)=(A \cap B )\cup(A \cap C)$</p></li>
<li><p>$A \cap A'= \varnothing$</p></li>
</ul>
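Before writing out the algebraic proof, the identity can be brute-force checked over all subsets of a small universe (my own sanity check, not part of the hint):

```python
from itertools import combinations

U = range(4)
subsets = [set(c) for r in range(len(U) + 1) for c in combinations(U, r)]

# Check X ∩ (Y − Z) = (X ∩ Y) − (X ∩ Z) for every triple of subsets.
ok = all((X & (Y - Z)) == ((X & Y) - (X & Z))
         for X in subsets for Y in subsets for Z in subsets)
print(ok)  # True
```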
|
16,802 | <p>In an attempt to squeeze more plots and controls into the limited space for a demo UI, I am trying to remove any extra white spaces I see.</p>
<p>I am not sure what options to use to reduce the amount of space between the ticks labels and the actual text that represent the labels on the axes.</p>
<p>Here is a small <code>Plot</code> example using <code>Frame->True</code> (I put an outside <code>Frame</code> as well, just for illustration, it is not part of the problem here)</p>
<pre><code>Framed[
Plot[Sin[x], {x, -Pi, Pi},
Frame -> True,
FrameLabel -> {{Sin[x], None}, {x,
Row[{"This is a plot of ", Sin[x]}]}},
ImagePadding -> {{55, 10}, {35, 20}},
ImageMargins -> 0,
FrameTicksStyle -> 10,
RotateLabel -> False
],
FrameMargins -> 0
]
</code></pre>
<p><img src="https://i.stack.imgur.com/JRzf3.png" alt="Mathematica graphics"></p>
<p>Is there an option or method to control this distance?</p>
<p>Notice that <code>ImagePadding</code> affects distance below the frame label, and not between the frame label and the ticks. Hence changing <code>ImagePadding</code> will not help here.</p>
<p>Depending on the plot and other things, this space can be more than it should be. The above is just a small example I made up. Here is a small part of a UI, and I think the space between the <code>t(sec)</code> and the ticks is too large. I'd like to reduce it by few pixels. I also might like to push the top label down closer to the plot by few pixels also.</p>
<p><img src="https://i.stack.imgur.com/jruyf.png" alt="Mathematica graphics"></p>
<p>I am Using V9 on windows.</p>
<p><strong>update 12/22/12</strong></p>
<p>Using the <code>Labeled</code> solution by @kguler below works well; one just needs to be a little careful with the typesetting of the labels. <code>Plot</code> automatically typesets labels as <code>Text</code> in <code>TraditionalForm</code>, which is a nice feature. To get the same look when using <code>Labeled</code>, one must apply <code>TraditionalForm</code> and <code>Text</code> manually. </p>
<p>Here is an example to show the difference:</p>
<p><strong>1)</strong> Labeled used just with <code>TraditionalForm</code>. The left one uses <code>Plot</code> and the right one uses <code>Labeled</code> with <code>TraditionalForm</code>. Notice the difference in how labels look.</p>
<pre><code>Grid[{
{
Plot[Sin[x], {x, -Pi, Pi}, Frame -> True,
FrameLabel -> {{Sin[x], None}, {x, E Tan[x] Sin[x]}},
ImageSize -> 300, FrameTicksStyle -> 10, FrameStyle -> 16,RotateLabel -> False],
Labeled[
Plot[Sin[x], {x, -Pi, Pi}, Frame -> True, ImageSize -> 300],
TraditionalForm /@ {Sin[x], x, E Tan[x] Sin[x]}, {Left, Bottom, Top},
Spacings -> {0, 0, 0}, LabelStyle -> "Panel"]
}
}, Frame -> All]
</code></pre>
<p><img src="https://i.stack.imgur.com/MXRlR.png" alt="Mathematica graphics"></p>
<p><strong>2)</strong> Now we do the same, just need to add <code>Text</code> to get the same result as <code>Plot</code>.</p>
<pre><code>Grid[{
{
Plot[Sin[x], {x, -Pi, Pi}, Frame -> True, FrameTicksStyle -> 10,
FrameStyle -> 16,
FrameLabel -> {{Sin[x], None}, {x, E Tan[x] Sin[x]}},
ImageSize -> 300, RotateLabel -> False],
Labeled[
Plot[Sin[x], {x, -Pi, Pi}, Frame -> True, ImageSize -> 300],
Text /@ TraditionalForm /@ {Sin[x], x, E Tan[x] Sin[x]}, {Left,
Bottom, Top}, Spacings -> {0, 0, 0}, LabelStyle -> "Panel"]
}
}, Frame -> All]
</code></pre>
<p><img src="https://i.stack.imgur.com/CHhZF.png" alt="Mathematica graphics"></p>
<p><strong>Update 12/22/12 (2)</strong></p>
<p>There is a big problem with controlling the spacing. </p>
<p><code>Labeled</code> spacing only seems to work for horizontal and vertical spacing taken together. </p>
<p>That is, one can't control the spacing on each side of the plot separately. Here is an example, where I tried to move the bottom axis label up; this ended up moving the top label down as well, which is not what I want.</p>
<pre><code>Labeled[Plot[Sin[x], {x, -Pi, Pi}, Frame -> True, ImageSize -> 300],
Text /@ TraditionalForm /@ {Sin[x], x, E Tan[x] Sin[x]}, {Left,
Bottom, Top}, Spacings -> {-.2, -0.7}]
</code></pre>
<p><img src="https://i.stack.imgur.com/q0Wdo.png" alt="Mathematica graphics"></p>
<p>I will see if there is a way to control each side's spacing on its own. Trying <code>Spacings -> {-0.2, {-0.7, 0}}</code> does not work; it seems to take the zero in this case and ignore the <code>-0.7</code>.</p>
<p>This gives the same result as above:</p>
<pre><code>Labeled[Plot[Sin[x], {x, -Pi, Pi}, Frame -> True, ImageSize -> 300],
Text /@ TraditionalForm /@ {Sin[x], x, E Tan[x] Sin[x]}, {Left,
Bottom, Top}, Spacings -> {-.2, -0.7, .0}]
</code></pre>
<p><img src="https://i.stack.imgur.com/6JCgC.png" alt="Mathematica graphics"></p>
<p>P.S. There might be a way to specify the spacing for each side with some tricky syntax. I have not figured it out yet. Still trying things:
<a href="http://reference.wolfram.com/mathematica/ref/Spacings.html" rel="noreferrer">http://reference.wolfram.com/mathematica/ref/Spacings.html</a></p>
<p><strong>update 12/22/12 (3)</strong>
Using a combination of <code>ImagePadding</code> and <code>Spacings</code> should have worked, but for some reason the top label is now cut off. Please see the screenshot. Using V9 on Windows.</p>
<p><img src="https://i.stack.imgur.com/kbT3G.png" alt="enter image description here"></p>
<p>Note: The above seems to be related to the issue reported here:
<a href="https://mathematica.stackexchange.com/questions/16248/some-graphics-output-do-not-fully-render-on-the-screen-until-an-extra-click-is-m">some Graphics output do not fully render on the screen until an extra click is made into the notebook</a></p>
<p>Need an extra click inside the notebook. Then label become un-chopped !</p>
| Szabolcs | 12 | <p>Another possible (partial) solution is adding negative space. This can be accomplished by putting the label in <a href="http://reference.wolfram.com/language/ref/Framed.html" rel="noreferrer"><code>Framed</code></a> and setting negative frame margins on one or more sides.</p>
<p>Try this:</p>
<pre><code>Manipulate[
Framed[Plot[Sin[x], {x, -Pi, Pi}, Frame -> True,
FrameLabel -> {{Sin[x], None},
(* HERE'S THE FRAMED LABEL: *)
{Framed[x, FrameStyle -> None,
FrameMargins -> {{0, 0}, {0, -space}},
ContentPadding -> False],
Row[{"This is a plot of ", Sin[x]}]}},
ImagePadding -> {{55, 10}, {35, 20}}, ImageMargins -> 0,
FrameTicksStyle -> 10, RotateLabel -> False], FrameMargins -> 0],
{space, 0, 20}
]
</code></pre>
<p>Adding too much negative space will cause part of the label to be cut off.</p>
<p>Here's a convenience function to shift labels up (positive value) or down (negative value):</p>
<pre><code>shift[space_][x_] :=
Framed[x, FrameStyle -> None,
FrameMargins -> {{0, 0}, If[space > 0, {0, -space}, {space, 0}]},
ContentPadding -> False]
</code></pre>
<p>Use it as <code>shift[amount][label]</code> and beware of part of the label getting cut off when the shift is not tiny!</p>
|
307,264 | <p>A professor of mine has suggested that I look at this theorem and find a problem related to it to explain in a future class.
I found an understandable proof in "Linear Operators" by Dunford-Schwartz and I think I have studied it, so now I know how to prove Brouwer's theorem.
Now I am thinking of some interesting related problem I could try to solve. Do you have any suggestion of something not too hard (I am a second-year undergraduate student!)?</p>
<p>Thank you very much! </p>
<p>EDIT: I put PDE in the tags, as this professor is mainly interested in this area, so if you have any idea on that, then even better (I guess most of the applications to PDEs will be very hard though!).</p>
| Artem | 29,547 | <p>You can prove Nash's theorem that every symmetric game with two players has a mixed strategy Nash equilibrium. This can be done using differential equations (ordinary though) and Brouwer's theorem. A very accessible exposition is given in <a href="http://rads.stackoverflow.com/amzn/click/0691142750" rel="nofollow">this book</a>.</p>
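As a warm-up observation (mine, not from the answer): in one dimension Brouwer's theorem reduces to the intermediate value theorem, and a fixed point of a continuous $f:[0,1]\to[0,1]$ can even be located by bisection on $g(x)=f(x)-x$. A small Python illustration with an arbitrary example map:

```python
import math

def fixed_point(f, lo=0.0, hi=1.0, tol=1e-12):
    """Bisection on g(x) = f(x) - x; g(lo) >= 0 >= g(hi) because f maps
    [lo, hi] into itself, so a root (fixed point of f) must exist."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) - mid >= 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

f = lambda x: math.cos(x)   # maps [0, 1] into [cos 1, 1], a subset of [0, 1]
x = fixed_point(f)
print(x, abs(f(x) - x))     # x is about 0.739085, residual essentially 0
```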
|
3,755,288 | <p>I'm trying to solve this:</p>
<blockquote>
<p>Which of the following is the closest to the value of this integral?</p>
<p><span class="math-container">$$\int_{0}^{1}\sqrt {1 + \frac{1}{3x}} \ dx$$</span></p>
<p>(A) 1</p>
<p>(B) 1.2</p>
<p>(C) 1.6</p>
<p>(D) 2</p>
<p>(E) The integral doesn't converge.</p>
</blockquote>
<p>I've found a lower bound by manually calculating <span class="math-container">$\int_{0}^{1} \sqrt{1+\frac{1}{3}} \ dx \approx 1.1547$</span>. This eliminates option (A). I also see no reason why the integral shouldn't converge. However, to pick an option out of (B), (C) and (D) I need to find an upper bound too. Ideas? Please note that I'm not supposed to use a calculator to solve this.</p>
<p>From <strong>GRE problem sets by UChicago</strong></p>
| Barry Cipra | 86,747 | <p>Starting from</p>
<p><span class="math-container">$$\int_0^1\sqrt{1+{1\over3x}}\,dx=2\int_0^1\sqrt{t^2+{1\over3}}\,dt$$</span></p>
<p>(from the substitution <span class="math-container">$x=t^2$</span>) as in Yves Daoust's answer, integration by parts gives</p>
<p><span class="math-container">$$\int_0^1\sqrt{t^2+{1\over3}}\,dt=t\sqrt{t^2+{1\over3}}\Big|_0^1-\int_0^1{t^2\over\sqrt{t^2+{1\over3}}}\,dt={2\over\sqrt3}-\int_0^1{t^2+{1\over3}-{1\over3}\over\sqrt{t^2+{1\over3}}}\,dt$$</span></p>
<p>hence</p>
<p><span class="math-container">$$2\int_0^1\sqrt{t^2+{1\over3}}\,dt={2\over\sqrt3}+{1\over3}\int_0^1{dt\over\sqrt{t^2+{1\over3}}}={2\over\sqrt3}+{1\over\sqrt3}\int_0^1{dt\over\sqrt{3t^2+1}}$$</span></p>
<p>Since <span class="math-container">$1\le\sqrt{3t^2+1}\le2$</span> for <span class="math-container">$0\le t\le1$</span>, we have</p>
<p><span class="math-container">$${1\over2}\le\int_0^1{dt\over\sqrt{3t^2+1}}\le1$$</span></p>
<p>Thus</p>
<p><span class="math-container">$${2\over\sqrt3}+{1\over2\sqrt3}\le2\int_0^1\sqrt{t^2+{1\over3}}\,dt\le{2\over\sqrt3}+{1\over\sqrt3}$$</span></p>
<p>Now</p>
<p><span class="math-container">$${2\over\sqrt3}+{1\over2\sqrt3}={5\sqrt3\over6}=\sqrt{75\over36}\gt\sqrt2\gt1.4$$</span></p>
<p>and</p>
<p><span class="math-container">$${2\over\sqrt3}+{1\over\sqrt3}=\sqrt3\lt\sqrt{3.24}=1.8$$</span></p>
<p>Consequently</p>
<p><span class="math-container">$$1.4\lt\int_0^1\sqrt{1+{1\over3x}}\,dx\lt1.8$$</span></p>
<p>and thus (C) is the correct answer.</p>
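A quick numeric cross-check (mine; it uses the same substitution $x=t^2$ as above to remove the singularity at $0$) confirms the value is about $1.59$, in agreement with (C):

```python
import math

# After x = t^2 the integral becomes 2 * Int_0^1 sqrt(t^2 + 1/3) dt,
# which has no singularity; the midpoint rule then converges quickly.
N = 100_000
h = 1.0 / N
val = 2.0 * h * sum(math.sqrt(((i + 0.5) * h) ** 2 + 1.0 / 3.0) for i in range(N))
print(val)  # about 1.5937
```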
|
1,182,844 | <p>I have completed Velleman's book 'How to Prove It' and have also worked through Apostol Vol. 1.
I have messed about with many rigorous single-variable calculus textbooks, e.g., Apostol, Spivak, Courant, Lang, etc. I had started working through Lang's 'Calculus of Several Variables' but put it aside to do a book like Edwards's 'Advanced Calculus: A Differential Forms Approach.' I now see that book to be a waste of time, because it is extremely 'hand-wavy'. I can't stand hand-wavy mathematics in physics texts, and I will certainly not tolerate the same from an actual math book. Therefore I am back to my starting point: I have finished linear algebra via Lang and have been perusing Hubbard and Hubbard's book and Artin. I do not like that Hubbard and Hubbard is so wordy; I don't have the patience for extremely long-winded explanations of trivial facts just to get to the meat.</p>
<p>I would really like to work through Spivak's 'Calculus on Manifolds'; the problem is that I need to know whether I can do without a book like Lang's 'Calculus of Several Variables'.
My goal is to get to manifolds and skip 'vector calculus', but I do not want to shortchange myself on computation if Spivak's book would leave me in that state.</p>
<p>I need help determining whether it is worth the time to work through Lang's book or whether I can just skip it. I want to be able to apply forms, etc., to physics, although I am a math major.</p>
| Neal | 20,569 | <p>First: Why not? Math is best learned nonlinearly. Read Spivak and then, when you get confused, go look up the gap in your knowledge in another book.</p>
<p>Second: You will not understand manifolds if you do not have a thorough grasp of multivariable calculus. Manifolds exist as one generalization of multivariable calculus and provide a geometric interpretation of many notions from it. For example, a vector field is a generalization of a directional derivative; if you haven't thoroughly understood directional derivatives, you will have trouble understanding vector fields.</p>
<p>Third: Don't close yourself off to different perspectives because you, at this point in your life, don't like how it's presented or don't appreciate the lack of rigor. Some of the most beautiful and informative mathematical texts are written from a deep intuitive perspective (I am thinking of Thurston's notes here). In less formal communication, mathematicians often communicate non-rigorously, with the understanding that their audience can, at their leisure, fill in the formal gaps in the argument.</p>
|
61,509 | <p>I'm trying to read Elias Stein's "Singular Integrals" book, and in the beginning of the second chapter, he states two results classifying bounded linear operators that commute (on $L^1$ and $L^2$ respectively).</p>
<p>The first one reads:</p>
<p>Let $T: L^1(\mathbb{R}^n) \to L^1(\mathbb{R}^n)$ be a bounded linear transformation. Then $T$ commutes with translations if and only if there exists a measure $\mu$ in the dual of $C_0(\mathbb{R}^n)$ (continuous functions vanishing at infinity), s.t. $T(f) = f \ast \mu$ for every $f \in L^1(\mathbb{R}^n)$. It is also true that $\|T\|=\|\mu\|$. </p>
<p>The second one says:</p>
<p>Let $T:L^2(\mathbb{R}^n) \to L^2(\mathbb{R}^n)$ be bounded and linear. Then $T$ commutes with translations if and only if there exists a bounded measurable function $m(y)$ so that $\widehat{Tf}(y) = m(y) \hat{f}(y)$ for all $f \in L^2(\mathbb{R}^n)$. It is also true that $\|T\|=\|m\|_\infty$. </p>
<p>I was wondering if anyone had a reference to a proof of these two results or could explain why they are true. </p>
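A discrete analogue (my own illustration, not from Stein's book) makes the second statement concrete: on $\mathbb{Z}_n$ an operator commuting with cyclic shifts acts as a circular convolution, and the DFT turns it into multiplication by a bounded sequence $m$. A pure-Python check:

```python
import cmath

n = 8
g = [0.5, 0.25, 0.0, 0.125, 0.0, 0.0, 0.0, 0.125]  # convolution kernel
f = [1.0, 2.0, 0.0, -1.0, 3.0, 0.5, 0.0, 1.5]      # arbitrary input

def conv(f, g):   # T f = f * g (circular convolution)
    return [sum(f[j] * g[(i - j) % n] for j in range(n)) for i in range(n)]

def shift(f, k):  # (tau_k f)(i) = f(i - k)
    return [f[(i - k) % n] for i in range(n)]

def dft(f):
    return [sum(f[j] * cmath.exp(-2j * cmath.pi * i * j / n) for j in range(n))
            for i in range(n)]

# T commutes with every translation ...
commutes = all(
    max(abs(a - b) for a, b in zip(conv(shift(f, k), g),
                                   shift(conv(f, g), k))) < 1e-12
    for k in range(n))

# ... and on the Fourier side T is multiplication by m = DFT(g).
m, Ff, FTf = dft(g), dft(f), dft(conv(f, g))
diagonal = max(abs(FTf[i] - m[i] * Ff[i]) for i in range(n)) < 1e-10
print(commutes, diagonal)  # True True
```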
| ibnAbu | 334,224 | <p>Suppose a bounded linear operator <span class="math-container">$T: L^2 \to L^2$</span> commutes with translations.</p>
<p>A known result from the Riesz representation theorem (<a href="https://math.stackexchange.com/questions/2809632/proof-that-every-bounded-linear-operator-between-hilbert-spaces-has-an-adjoint">see here</a>): <span class="math-container">$\int Tf(s)u(s)\, ds=\int f(s)T^*u(s)\,ds$</span>, written as <span class="math-container">$\langle Tf,u\rangle=\langle f,T^*u\rangle$</span>, where <span class="math-container">$T^*$</span> is the adjoint operator of <span class="math-container">$T$</span> and <span class="math-container">$u \in L^2$</span>.</p>
<p>now for <span class="math-container">$f(s)$</span> translated by <span class="math-container">$x$</span> ,<span class="math-container">$f(s+x)$</span></p>
<p>we have <span class="math-container">$\int Tf(s+x)u(s)ds=\int f(s+x)T^*u(s)ds$</span></p>
<p>Now let <span class="math-container">$u_{\epsilon}(s)=\phi_\epsilon(s)=\phi(s/\epsilon)\epsilon^{-1}$</span>, where
<span class="math-container">$\phi(s)$</span> is a normalized Gaussian with zero mean.</p>
<p><span class="math-container">$\int Tf(s+x)u_{\epsilon}(-s) ds=\int Tf(s+x)u_{\epsilon}(s)ds=\int f(s+x)T^*u_{\epsilon}(s)ds$</span></p>
<p><span class="math-container">$H(s)=Tf(s)$</span> , by translation commutation <span class="math-container">$H(s+x)=Tf(s+x)$</span></p>
<p><span class="math-container">$R(-s)=T^*u_{\epsilon}(s)$</span></p>
<p>we have <span class="math-container">$\lim_{\epsilon \to 0} H* u_{\epsilon}=\lim_{\epsilon \to 0} f*R$</span></p>
<p><span class="math-container">$\lim_{\epsilon \to 0} H*u_{\epsilon}=H(x)$</span> ae</p>
<p>for <span class="math-container">$u \in L^1$</span> function, we have the following theorem regarding fourier transform:</p>
<p><span class="math-container">$v(x)=g*u=\int g(s)u(x-s) ds$</span>.
For <span class="math-container">$g \in L^2$</span>, define <span class="math-container">$\hat{g}(z)=\lim_{n \to \infty}\int_{-n}^{n}g(x)e^{2i\pi x z} dx$</span>; the limit exists in the sense of <span class="math-container">$L^2$</span>. If <span class="math-container">$u \in L^1$</span> and <span class="math-container">$\int |g(s)||u(x-s)| ds \le P(x)$</span>, where <span class="math-container">$P$</span> is a non-negative integrable function in <span class="math-container">$L^2$</span>, then <span class="math-container">$\hat{v}=\hat{g}\hat{u}$</span>.</p>
<p>Proof: let <span class="math-container">$g_n=g1_{[-n,n]}$</span> and <span class="math-container">$v_n=g_n*u$</span>;
it is known that <span class="math-container">$\hat{v_n}=\hat{g_n}\hat{u}$</span>.
By the dominated convergence theorem <span class="math-container">$\lim_{n \to \infty} \|v-v_n\|^2=0$</span>,
and by the Plancherel theorem <span class="math-container">$\lim_{n \to \infty} \|v-v_n\|^2= \lim_{n \to \infty} \|\hat{v}-\hat{v_n}\|^2=0$</span>.
This implies <span class="math-container">$\lim_{n \to \infty} \hat{v_n}=\hat{v}$</span>.</p>
<p>Now from our equation, when <span class="math-container">$u$</span> is a normalized Gaussian we have <span class="math-container">$|H|* |u_{\epsilon}| \le C\,MH(x)$</span>, where <span class="math-container">$MH(x)$</span> is the Hardy–Littlewood maximal function (and <span class="math-container">$\|MH\|_{2} \le C'\,\|H\|_{2}$</span> by the maximal inequality);
using Fourier transform <span class="math-container">$\hat{H}(z)\hat{u_{\epsilon}}(z)=\hat{f}(z)G(z)$</span> where <span class="math-container">$G(z)=\hat{R}$</span></p>
<p><span class="math-container">$\lim_{\epsilon \to 0}\hat{ u_{\epsilon}}=1$</span></p>
<p><span class="math-container">$\lim_{\epsilon \to 0}G=m(z)$</span></p>
<p>consequently we have <span class="math-container">$\widehat{Tf}(z)=m(z)\hat{f}(z)$</span>.</p>
|
925,586 | <p>Determine if the relation : $$x \sim y \iff |y-x| \text{ is an integer multiple of } 3$$</p>
<p>is an equivalence one.</p>
<p>Now, I think this is an equivalence relation but I am having troubles formally proving the transitivity.</p>
<p>Any help?</p>
| André Nicolas | 6,312 | <p>Suppose that $x\sim y$ and $y\sim z$. </p>
<p>Then $3$ divides $|x-y|$. It follows that $3$ divides $x-y$.</p>
<p>Similarly, $3$ divides $y-z$.</p>
<p>So $3$ divides $(x-y)+(y-z)$. It follows that $3$ divides $x-z$, and therefore $3$ divides $|x-z|$. We conclude that $x\sim z$.</p>
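<p>As an illustrative check on a sample of integers (not a proof, and the helper name <code>related</code> is ours), one can verify all three equivalence-relation properties:</p>

```python
# Illustrative check on integers (not a proof): x ~ y iff |y - x| is a
# multiple of 3.
def related(x, y):
    return abs(y - x) % 3 == 0

sample = range(-10, 11)

# Reflexivity: x ~ x, since |x - x| = 0 = 3 * 0
assert all(related(x, x) for x in sample)

# Symmetry: |y - x| = |x - y|
assert all(related(y, x) for x in sample for y in sample if related(x, y))

# Transitivity, mirroring the argument 3 | (x - y) + (y - z) = x - z
assert all(related(x, z)
           for x in sample for y in sample for z in sample
           if related(x, y) and related(y, z))
```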
|
925,586 | <p>Determine if the relation : $$x \sim y \iff |y-x| \text{ is an integer multiple of } 3$$</p>
<p>is an equivalence one.</p>
<p>Now, I think this is an equivalence relation but I am having troubles formally proving the transitivity.</p>
<p>Any help?</p>
| AsdrubalBeltran | 62,547 | <p>Hint: note that $|x-z|=|x-y+y-z|$.</p>
|
1,342,747 | <p>I am studying H. L. Royden's Real Analysis which includes some introduction to Measure Theory; and I encountered $(a,\infty]$ instead of $(a,\infty)$ for the first time! </p>
<p>What is the difference between $(a,\infty)$ and $(a,\infty]$?</p>
| triple_sec | 87,778 | <p>\begin{align*}
(a,\infty)=&\,\{x\in\mathbb R\,|\,x>a\},\\
(a,\infty]=&\,\{x\in\mathbb R\,|\,x>a\}\cup\{\infty\}.
\end{align*}
The latter set includes an extra point termed “positive infinity.” Note that it is <em>not</em> a real number, but in certain areas of mathematics, especially in measure theory, it is useful to extend $\mathbb R$ by this single point to denote a quantity greater than all real numbers. Treating $\infty$ as though it were a number satisfying $\infty>x$ for all $x\in\mathbb R$, the interval notation becomes quite intuitive.</p>
|
4,421,529 | <p><strong>Question:</strong> Let <span class="math-container">$n > 0$</span>. How can I find a function <span class="math-container">$f:\mathbb{N}\rightarrow\mathbb{R}^+$</span> such that
<span class="math-container">$$
\lim_{n\to\infty} \frac{f(n)^2}{n} \log \left(\frac{f(n)}{n}\right) = L
$$</span>
with <span class="math-container">$0<L<\infty$</span>?</p>
<p><strong>Background</strong>: The term above appears in my research on subexponential bounds for binary words containing a limited number of ones. I have been able to eliminate all other terms, but I am stuck with this one.</p>
<p><strong>What I tried so far:</strong> I applied L'Hôpital's rule to get
<span class="math-container">$$
\lim_{n\to\infty} \frac{\log\left(\frac{f(n)}{n}\right)}{\frac{-n}{f(n)^2}} = \lim_{n\to\infty} \frac{\frac{f'(n)}{f(n)}-\frac{1}{n}}{\frac{1}{f(n)^2}-\frac{2nf'(n)}{f(n)^3}}
$$</span></p>
<p>which got rid of the <span class="math-container">$\log()$</span>. Since the limit should be finite, it seems to me that <span class="math-container">$\lim_{n\to\infty} \frac{f(n)}{\sqrt{n}} < \infty$</span>, but I haven't been able to come up with an <span class="math-container">$f(n)$</span> that doesn't lead to <span class="math-container">$L=0$</span>.</p>
| Adam Rubinson | 29,156 | <p>I'll assume that <span class="math-container">$f$</span> is a function which can output real values.</p>
<p>For each <span class="math-container">$n\in\mathbb{N},$</span> define <span class="math-container">$g_n:[n,\infty)\to\mathbb{R};\ g_n(x) = \frac{x^2}{n} \log\left(\frac{x}{n}\right).$</span></p>
<p>One can readily check that <span class="math-container">$g_n(x)$</span> is a continuous, increasing function in <span class="math-container">$x$</span>, with range <span class="math-container">$[0,\infty).$</span></p>
<p>Therefore (by IVT), for each <span class="math-container">$n,\ \exists\ x_n\in [n,\infty)\ $</span> such that <span class="math-container">$ g_n(x_n)= \frac{{x_n}^2}{n} \log\left(\frac{x_n}{n}\right) = L.$</span></p>
<p>Let <span class="math-container">$f(n) = x_n$</span> for each <span class="math-container">$n\in\mathbb{N},$</span> and we have completed our construction of <span class="math-container">$f.$</span></p>
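<p>The construction can be illustrated numerically: for each <span class="math-container">$n$</span>, bisection finds the <span class="math-container">$x_n \in [n,\infty)$</span> with <span class="math-container">$g_n(x_n) = L$</span>. The names <code>g</code> and <code>solve_xn</code> below are ours; this is a sketch of the construction, not part of the proof:</p>

```python
import math

def g(n, x):
    # g_n(x) = (x^2 / n) * log(x / n), increasing on [n, oo) with g_n(n) = 0
    return (x * x / n) * math.log(x / n)

def solve_xn(n, L, tol=1e-12):
    lo, hi = float(n), 2.0 * n
    while g(n, hi) < L:          # double until the root is bracketed
        hi *= 2.0
    while hi - lo > tol * n:     # bisection on the increasing function g_n
        mid = 0.5 * (lo + hi)
        if g(n, mid) < L:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

L = 1.0
for n in (1, 10, 100, 1000):
    xn = solve_xn(n, L)
    assert n <= xn and abs(g(n, xn) - L) < 1e-8   # g_n(x_n) = L by construction
```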
|
1,669,096 | <p>How do I show that <span class="math-container">$\ell^{ \infty}$</span> is a normed linear space,
where the norm on <span class="math-container">$\ell^{ \infty}$</span> is defined as <span class="math-container">$$\|\{a_n\}_{n=1}^{\infty}\|_{\ell^\infty}=\sup_{k \geq 1} |a_k|?$$</span>
There are three properties that I need to check in order for this to be a normed linear space. Nonnegativity, positive homogeneity, and the triangle inequality.</p>
<p>I am having trouble working with the supremum. Any ideas will be greatly appreciated, thanks.</p>
| Fred | 380,717 | <p>Let <span class="math-container">$(a_n),(b_n) \in \ell^\infty$</span> Then we have for each <span class="math-container">$k$</span>:</p>
<p><span class="math-container">$$|a_k+b_k| \le |a_k|+|b_k| \le ||(a_n)||_\infty+||(b_n)||_\infty.$$</span></p>
<p>This shows that <span class="math-container">$(a_n+b_n) \in \ell^\infty$</span> and that</p>
<p><span class="math-container">$$||(a_n+b_n)||_\infty \le ||(a_n)||_\infty+||(b_n)||_\infty.$$</span></p>
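<p>A quick numerical sanity check of the norm properties (illustration only, on truncated sequences; the helper name <code>sup_norm</code> is ours):</p>

```python
import random

def sup_norm(seq):
    # finite truncation of the ||.||_infty norm
    return max(abs(t) for t in seq)

random.seed(0)
a = [random.uniform(-5, 5) for _ in range(1000)]
b = [random.uniform(-5, 5) for _ in range(1000)]

# nonnegativity, positive homogeneity, triangle inequality
assert sup_norm(a) >= 0
assert abs(sup_norm([3.0 * t for t in a]) - 3.0 * sup_norm(a)) < 1e-12
assert sup_norm([x + y for x, y in zip(a, b)]) <= sup_norm(a) + sup_norm(b)
```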
|
4,219,360 | <p>I was having some problems understanding how he found <span class="math-container">$\gamma(t)$</span> from the given <span class="math-container">$\Sigma$</span> and i was hoping someone could explain to me how if that is ok</p>
<p>So the problem goes like this:</p>
<p>Given the vector field <span class="math-container">$F(x, y, z) = (z, x, y)$</span>,
compute the flux of the curl of <span class="math-container">$F$</span> through the surface
<span class="math-container">$$\Sigma = \left\{(x, y, z) \in \mathbb{R}^3 : z = xy,\ x^2 + y^2 \leq 1\right\}$$</span></p>
<p>oriented so that the normal versor points upward.</p>
<p>So what the professor did was first he computed <span class="math-container">$\gamma(t)$</span> using parametrization and he immediately writes</p>
<p><span class="math-container">$\gamma(t)=(\cos t,\ \sin t,\ \cos t \sin t)$</span> with <span class="math-container">$t\in[0,2\pi]$</span>, and from here he finds <span class="math-container">$\gamma'$</span> and from there he computes</p>
<p><span class="math-container">$\int_\Sigma \operatorname{rot} F \cdot n \, d\sigma = \int_0^{2\pi} F(\gamma(t)) \cdot \gamma'(t)\, dt$</span></p>
<p>and from there on it's history; I can do the rest myself.</p>
<p>But what I couldn't understand is how he got <span class="math-container">$\gamma$</span>.</p>
| Adi | 578,455 | <p><span class="math-container">$\mathbb{E}[X^Y] = \sum_{y=0}^\infty\mathbb{E}[X^Y|Y=y]\mathbb{P}(Y=y)$</span></p>
<p><span class="math-container">$ = \sum_{y=0}^\infty\mathbb{E}[X^y] {\mu^y e^{-\mu} \over y!}$</span></p>
<p><span class="math-container">$ = e^{-\mu} \left( 1 + \mathbb{E}[X]\mu + \mathbb{E}[X^2] {\mu^2 \over 2!} + \mathbb{E}[X^3] {\mu^3 \over 3!} + \ldots\right)$</span></p>
<p>You are given that <span class="math-container">$\mathbb{E}[e^{tX}] = {\lambda \over \lambda - t}$</span></p>
<p>Write <span class="math-container">$\mathbb{E}[e^{\mu X}] = \mathbb{E}[1 + \mu X + {(\mu X)^2 \over 2!} + {(\mu X)^3 \over 3!} + \ldots]$</span></p>
<p><span class="math-container">$ = 1 + \mathbb{E}[X]\mu + \mathbb{E}[X^2] {\mu^2 \over 2!} + \mathbb{E}[X^3] {\mu^3 \over 3!} + \ldots = {\lambda \over \lambda - \mu}$</span></p>
<p>Plugging this back into the first equation you get:</p>
<p><span class="math-container">$\mathbb{E}[X^Y] = e^{-\mu} {\lambda \over \lambda - \mu}$</span></p>
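<p>A numerical check of this identity (assuming, as the MGF <span class="math-container">$\lambda/(\lambda-t)$</span> suggests, that <span class="math-container">$X$</span> is exponential with rate <span class="math-container">$\lambda$</span>, so <span class="math-container">$\mathbb{E}[X^y] = y!/\lambda^y$</span>, and that <span class="math-container">$\lambda > \mu$</span> so the series converges):</p>

```python
import math

lam, mu = 3.0, 1.0            # need lam > mu for convergence

# E[X^y] = y!/lam^y for X ~ Exp(rate lam); the y! cancels the Poisson 1/y!,
# leaving a geometric series in mu/lam.
series = sum((mu / lam) ** y * math.exp(-mu) for y in range(200))
closed_form = math.exp(-mu) * lam / (lam - mu)
assert abs(series - closed_form) < 1e-12
```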
|
3,084,479 | <p><span class="math-container">$h\in \mathbb{R}$</span>, because we have defined the Trigonometric Functions only on <span class="math-container">$\mathbb{R}$</span> so far.</p>
<p>I have a look at <span class="math-container">$e^{ih}=\sum_{k=0}^{\infty}\frac{(ih)^k}{k!}=1+ih-\frac{h^2}{2}+....$</span> </p>
<p><strong>How can one describe the nth term of the sum?</strong></p>
<p>Then I look at <span class="math-container">$\frac{e^{ih}-1}{h}=\frac{(1-1)}{h}+i-\frac{h}{2}+...=i-\frac{h}{2}+....$</span> </p>
<p><strong>Again how can I describe that the nth term of the sum?</strong> </p>
<p>Because <span class="math-container">$\frac{e^{ih}-1}{h}=\sum_{k=1}^{\infty}\frac{\frac{(ih)^k}{h}}{k!}<\sum_{k=0}^{\infty}\frac{\frac{(ih)^k}{h}}{k!}=\sum_{k=1}^{\infty}\frac{(ih^{-1})^k}{k!}=e^{ih^{-1}}$</span></p>
<p>and <span class="math-container">$ih^{-1}$</span> is a complex number and the exponential-series converges absolutely for all Elements in <span class="math-container">$\mathbb{C}$</span>, I have found a convergent majorant. And I can apply the properties of Limits on <span class="math-container">$\frac{e^{ih}-1}{h}\forall, h\in \mathbb{R}$</span>.</p>
<p><strong>How can I now prove formally (i.e by chosing an explicit <span class="math-container">$\delta$</span>) that</strong> </p>
<p><span class="math-container">$$\forall_{\epsilon>0}\,\exists_{\delta>0}\,\forall_{h\in\mathbb{R}}:\ 0<|h|<\delta\Longrightarrow \left|\frac{e^{ih}-1}{h}-i\right|<\epsilon$$</span></p>
<p><strong>I am also seeking advice on how to argue in such cases more intuitively (i.e. by not always giving an explicit <span class="math-container">$\delta$</span>).</strong></p>
| Calvin Khor | 80,734 | <p>Sketch that follows the spirit of your approach, rather than using trigonometry etc. A useful technical tool:</p>
<blockquote>
<p>Theorem. Suppose the continuous functions <span class="math-container">$f_n=f_n(h)$</span> taking values in <span class="math-container">$\mathbb C$</span> are such that <span class="math-container">$\sum_{n=0}^\infty f_n(h) := \lim_{N\to\infty} \sum_{n=0}^N f_n(h)$</span> converges absolutely uniformly on some interval <span class="math-container">$h\in [-a,a]$</span> (i.e. <span class="math-container">$\sum_{n=0}^\infty \|f_n\|_{\infty} < \infty $</span>). Then
<span class="math-container">$$ \lim_{h\to 0} \sum_{n=0}^\infty f_n(h) = \sum_{n=0}^\infty f_n(0)$$</span></p>
</blockquote>
<p>Sketch proof of theorem: Recall that the uniform limit of continuous functions is continuous. (see <a href="https://math.stackexchange.com/questions/2164642/proof-of-uniform-limit-of-continuous-functions">Proof of uniform limit of Continuous Functions</a> or <a href="https://en.wikipedia.org/wiki/Uniform_limit_theorem" rel="nofollow noreferrer">Wikipedia</a> ). The <span class="math-container">$N$</span>th partial sums <span class="math-container">$F_N(h) := \sum_{n=0}^N f_n(h)$</span> are continuous, and they converge pointwise on <span class="math-container">$[-a,a]$</span> to <span class="math-container">$F(h):=\sum_{n=0}^\infty f_n(h)$</span>, and the absolutely uniformly convergent assumption implies that this convergence is uniform on <span class="math-container">$[-a,a]$</span>. Hence, <span class="math-container">$F$</span> is continuous at all points in <span class="math-container">$[-a,a]$</span>, and in particular at <span class="math-container">$0$</span>. The result follows.</p>
<hr>
<p>Application: <span class="math-container">$$\frac{e^{ih}-1}{h}\\= \frac{(\sum_{n=0}^\infty (ih)^n/n! )- 1} {h} \\= \frac{\sum_{n=1}^\infty (ih)^n/n! } {h} \\= \sum_{n=1}^\infty \frac{i(ih)^{n-1}}{n!} \\= \sum_{m=0}^\infty \frac{i(ih)^{m}}{(m+1)!} $$</span>
After you verify we can apply the theorem, this yields <span class="math-container">$\lim_{h\to 0 } \frac{e^{ih}-1}{h} = i+0+0+\dots = i$</span>.</p>
<hr>
<p>PS make sure you realise that all infinite sums are defined in terms of limits in <span class="math-container">$\mathbb C$</span> (e.g. <span class="math-container">$a_n \to a \in\mathbb C$</span> iff <span class="math-container">$|a_n - a| \to 0$</span>)</p>
<p>PPS this works to find the derivative at an arbitrary point of any function expressed as (for example) a Taylor series. </p>
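<p>PPPS a quick numerical sanity check of the limit (illustration only):</p>

```python
import cmath

for h in (1e-1, 1e-3, 1e-5):
    q = (cmath.exp(1j * h) - 1) / h
    # q = i - h/2 + O(h^2), so |q - i| should be roughly h/2
    assert abs(q - 1j) < h
```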
|
2,809,090 | <p>The stochastic vector $(X,Y)$ has a continuous distribution with pdf:
$$f(x,y) =
\begin{cases}
xe^{-x(y+1)} & \text{if $x,y>0$} \\[2ex]
0 & \text{otherwise}
\end{cases}$$<br>
Define $Z:=XY$.<br>
I would like to know what exactly $XY$ is. It seems to me that the function $f(x,y)$ has but one output, so what does $XY$ mean here? I feel like I'm missing something obvious. Thanks in advance.</p>
| fleablood | 280,126 | <p>$\frac{1}{6}\cos(\frac{1}{2}x)\cos(\frac{3}{2}x)-\frac{1}{2}\cos^2(\frac{1}{2}x)+\frac{1}{2}\sin^2(\frac{1}{2}x)+\frac{1}{6}\sin(\frac{1}{2}x)\sin(\frac{3}{2}x)$</p>
<p>Gosh, that's ugly. </p>
<p>Let's just throw rocks at it until it either falls apart or lumbers off. </p>
<p>First, let's replace $\frac 12 x$ with $y$.</p>
<p>$\frac{1}{6}\cos(y)\cos(3y)-\frac{1}{2}\cos^2(y)+\frac{1}{2}\sin^2(y)+\frac{1}{6}\sin(y)\sin(3y)$</p>
<p>Now we have a case of two $\cos$ terms multiplied together. How can we reduce those to a single cosine? We also have two $\sin$ terms multiplied together.</p>
<p>We know that $\cos (A \pm B) = \cos A\cos B \mp \sin A\sin B$.</p>
<p>And we have $\frac 16[\cos(y)\cos(3y) - \sin(y)\sin(3y)]$. So that is $\frac 16[\cos (y - 3y)] = \frac 16\cos (-2y)$. </p>
<p>Now we could just as easily have done $\frac 16[\cos (3y - y) ] =\frac 16 \cos 2y = \frac 16 \cos (-2y)$.</p>
<p>So we have:</p>
<p>$\frac{1}{6}\cos 2y-\frac{1}{2}\cos^2(y)+\frac{1}{2}\sin^2(y)$</p>
<p>Now we have $\frac 12(-\cos^2 y + \sin^2 y)$</p>
<p>If we had $\cos^2 y + \sin^2 y$ we could just add them together to get $1$. And we could do $-\cos^2 y + \sin^2 y = 1 - 2\cos^2 y$, but that won't really help us.</p>
<p>But note:</p>
<p>$\cos(A+B) = \cos A \cos B - \sin A \sin B$ so $\cos(A + A) = \cos A\cos A - \sin A\sin A = \cos^2 A - \sin^2 A$ which is <em>exactly</em> our case.</p>
<p>So $\frac 12(-\cos^2 y + \sin^2 y) = -\frac 12(\cos^2 y - \sin^2 y) = -\frac 12 \cos 2y$.</p>
<p>So we have </p>
<p>$\frac{1}{6}\cos 2y-\frac{1}{2}\cos(2y)= -\frac 13 \cos 2y = -\frac 13 \cos x$</p>
<p>====</p>
<p>That involved knowing our trig identities forward <em>and backwards</em>.</p>
<p>What if we only know them forward? That is, what if we don't have the inspiration to see that pattern?</p>
<p>Well, we could do brute force going forward.</p>
<p>$N = \frac{1}{6}\cos(\frac{1}{2}x)\cos(\frac{3}{2}x)-\frac{1}{2}\cos^2(\frac{1}{2}x)+\frac{1}{2}\sin^2(\frac{1}{2}x)+\frac{1}{6}\sin(\frac{1}{2}x)\sin(\frac{3}{2}x)$. Let $y = \frac 12 x$ so</p>
<p>$6N = \cos y\cos 3y - 3\cos^2 y + 3\sin^2 y + \sin y \sin 3y$</p>
<p>Now we know the identities <em>forward</em>....</p>
<p>$\cos 3y = \cos (y + 2y) = \cos y\cos 2y - \sin y \sin 2y$ and $\sin 3y = \sin(y + 2y) = \sin y\cos 2y + \cos y\sin 2y$.</p>
<p>So we have </p>
<p>$\cos y\cos 3y - 3\cos^2 y + 3\sin^2 y + \sin y \sin 3y=$</p>
<p>$\cos y(\cos y\cos 2y - \sin y \sin 2y)- 3\cos^2 y + 3\sin^2 y+\sin y( \sin y\cos 2y + \cos y\sin 2y)=$</p>
<p>$\cos^2 y \cos 2y - \cos y\sin y\sin 2y - 3\cos^2 y + 3\sin^2 y+ \sin^2 y\cos 2y + \cos y \sin y \sin 2y=$</p>
<p>$cos^2 y \cos 2y+ \sin^2 y\cos 2y- 3\cos^2 y + 3\sin^2 y- \cos y\sin y\sin 2y+ \cos y \sin y \sin 2y=$.</p>
<p>$(\cos^2 y + sin^2 y)\cos 2y - 3\cos^2 y + 3\sin^2 y =$</p>
<p>$\cos 2y - 3\cos^2 y + 3\sin^2 y$.</p>
<p>And we have $\cos 2y = \cos (y+y)= \cos^2 y - \sin^2 y$ so</p>
<p>$\cos 2y - 3\cos^2 y + 3\sin^2 y=$</p>
<p>$\cos^2 y - \sin^2 y- 3\cos^2 y + 3\sin^2 y=$</p>
<p>$-2\cos^2 y + 2\sin^2 y$.</p>
<p>Oh, okay, I guess we <em>do</em> need to know our identities backwards. But as we <em>just</em> did $\cos 2y = \cos^2 y - \sin^2 y$ it should still be fresh in our minds.</p>
<p>$-2\cos^2 y + 2\sin^2 y= -2\cos(2y)$</p>
<p>So we have $6N = -2\cos(2y)$ so</p>
<p>$N = -\frac 13 \cos x$.</p>
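<p>And a quick numerical spot check that the simplification is right (illustration only):</p>

```python
import math

def original(x):
    # the expression we started from
    return (math.cos(x / 2) * math.cos(3 * x / 2) / 6
            - math.cos(x / 2) ** 2 / 2
            + math.sin(x / 2) ** 2 / 2
            + math.sin(x / 2) * math.sin(3 * x / 2) / 6)

for x in (0.0, 0.7, 1.3, 2.9, -4.2):
    assert abs(original(x) - (-math.cos(x) / 3)) < 1e-12
```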
|
1,382,587 | <p>In $\mathbb R^3$, let $C$ be the circle in the $xy$-plane with radius $2$ and the origin as the center, i.e., </p>
<p>$$C= \Big\{ \big(x,y,z\big) \in \mathbb R^3 \mid x^2+y^2=4, \ z=0\Big\}.$$</p>
<p>Let $\Omega$ consist of all points $(x,y,z) \in \mathbb R^3$ whose distance to $C$ is at most $1$. Compute
$$\int_\Omega \left|\,x\,\right|\,dx\,dy\,dz$$</p>
<p>So, with the help of erichan (see below), I now know that the volume is a solid torus. But I am having trouble setting up the integration bounds. As erichan had suggested, we consider a union of unit-spheres, all centered on points of the radius-$2$ circle. Using spherical coordinates, I have this integral set up:</p>
<p>$$\int_0^\pi\int_0^{2\pi}\int_0^1
\Big(\big|\,\left(r\cos \theta+2\right)\sin\phi \,\big| \,r^2 \sin\phi \Big) \, dr\, d\theta \,d\phi,$$</p>
<p>Where I parameterized the solid torus as:
$$
\begin{aligned}
x &= (r\cos \theta +2)\sin \phi \\
y &= (r\sin \theta +2)\sin\phi \\
z &= r\cos \phi
\end{aligned}
$$
with Jacobian as $r^2\sin \phi$.</p>
<p>Is my setup ok? I shifted $x$ and $y$ by two units.
I'm not so sure about the parametrization of $z$ (should I leave it as it is normally?).
And should I change the Jacobian factor?</p>
<p>I welcome any answers to this problem – I had previously requested just hints.
I am wondering whether there is a simpler way to compute this integral, using symmetry of the torus.</p>
<p>Thanks,</p>
| izœc | 83,639 | <p>So, you know it is a torus: then, your parametrization should be
$$
\begin{cases}
x = \big(\, 2 + r \cos \theta \, \big) \cos \phi \\
y = \big(\, 2 + r \cos \theta \, \big) \sin \phi \\
z = r \sin \theta
\end{cases}
$$
where $r \in [0,1]$ and $\theta, \phi \in [0, 2 \pi)$, and with jacobian
$$
\big|\,J\,\big|
=
\left|
\begin{vmatrix}
\frac{\partial x}{\partial \theta} & \frac{\partial x}{\partial \phi} & \frac{\partial x}{\partial r} \\
%
\frac{\partial y}{\partial \theta} & \frac{\partial y}{\partial \phi} & \frac{\partial y}{\partial r} \\
%
\frac{\partial z}{\partial \theta} & \frac{\partial z}{\partial \phi} & \frac{\partial z}{\partial r} \\
\end{vmatrix}
\right|
=
\cdots
=
r \,\left( \,2 + r \cos \theta\,\right)
.
$$
Thus your integral becomes:
$$
\int_0 ^1 \int_0 ^{2 \pi} \int_0 ^{2 \pi} \big|
\left(\,2 + r \cos \theta\,\right) \cos \phi \big| \, r \, ( \,2 + r \cos \theta\,) \, d \phi \, d \theta \, dr
$$
$$
=
\int_0 ^1 \int_0 ^{2 \pi} r\, \left( \,2 + r \cos \theta \,\right)^2 \, d \theta \, d r
\int_0 ^{2 \pi} \big| \cos \phi \big| \, d \phi
$$
which simplifies to
$$
\left[ \int_0 ^{2 \pi} \big| \cos \phi \big| \, d \phi \right] \cdot
\left[ 4 \int_0 ^1 r \, dr \int_0 ^{2 \pi} d \theta +
4 \int_0 ^1 r^2 \, dr \int_0 ^{2 \pi} \cos \theta \, d \theta +
\int_0 ^1 r^3 \, dr \int_0 ^{2 \pi} \cos ^2 \theta \, d \theta
\right]
$$
which, while tedious, is rather trivial to compute.</p>
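<p>For a sanity check: the bracketed terms come to <span class="math-container">$4\pi + 0 + \pi/4$</span> and <span class="math-container">$\int_0^{2\pi}|\cos\phi|\,d\phi = 4$</span>, giving <span class="math-container">$17\pi$</span> in total. A midpoint-rule evaluation (illustration only) agrees:</p>

```python
import math

def midpoint(f, a, b, n):
    # composite midpoint rule on [a, b] with n subintervals
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

phi_part = midpoint(lambda p: abs(math.cos(p)), 0.0, 2 * math.pi, 20000)
r_theta_part = midpoint(
    lambda r: midpoint(lambda t: r * (2 + r * math.cos(t)) ** 2,
                       0.0, 2 * math.pi, 400),
    0.0, 1.0, 400)

assert abs(phi_part - 4.0) < 1e-5
assert abs(phi_part * r_theta_part - 17 * math.pi) < 1e-3
```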
|
2,878,553 | <p>I was trying to think about what $\pi$ actually is. There are a lot of ways to get $\pi$ for example $4(1-\frac{1}{3}+\frac{1}{5}-\cdots)$.</p>
<p>But there is no one way to define it. </p>
<p>On the other hand a fraction like $\frac{1}{2}$ also has multiple definitions e.g. $\frac{2}{4}$ or $\frac{3}{6}$. Although we generally take the simplest case as the definition. But it is really the class of all pairs $(n,2n)$. We might say that a tuple of the form $(n,2n)$ has the property of half-ness. Two fractions that both have this half-ness property can be set equal. </p>
<p>In the same way, there is not one algorithm to define $\pi$. So $\pi$ must be the class of all algorithms that define it(?). Or the class of all mathematical expressions that define it.</p>
<p>We might say that an expression $4\sum_{n=0}^\infty (-1)^n\frac{1}{2n+1}$ has the property of $\pi$-ness and if two expressions both have this property then they can be set equal.</p>
<p>The difficulty I see is that for some expressions, it might be unknown whether they have the property of $\pi$-ness. </p>
<p>Also, if this definition was true we could not write that anything is equal to $\pi$ as $\pi$ is just a property. How would we write this? Perhaps we would write that the expression belongs to the set of expressions that have the $\pi$-ness property. Perhaps:</p>
<p>$$\text{Expression}\left\{4\sum_{n=0}^\infty (-1)^n\frac{1}{2n+1} \right\}\subset \pi$$</p>
<p>And then this would work for other irrational numbers too:</p>
<p>$$ \text{Expression}\left\{\sum_{n=0}^\infty \frac{1}{n!}\right\} \subset e$$</p>
<p>But then this is not very convenient if we want to express the numerical value of one of these algorithms we would have to write something like:</p>
<p>$$\text{Expression}\left\{x\right\} \subset \pi \implies x \approx 3.14159$$</p>
<p>The alternative would simply be to have $\pi$ set equal to one of the expressions with the $\pi$-ness property but this seems a bit like cheating.</p>
| Fimpellizzeri | 173,410 | <p>This is a philosophical musing more so than a question.
It asks less about the nature of $\pi$ than about the nature of numbers themselves.</p>
<p>I would say the general stance is that we don't particularly care what numbers are.
Do you use a <a href="https://en.wikipedia.org/wiki/Set-theoretic_definition_of_natural_numbers" rel="nofollow noreferrer">set-theoretic definition</a>?
Do you build upon it with Cauchy sequences?
Are they Dedekind cuts?<br>
Regardless of the answer, what we are most concerned is with their properties and behavior, more so than which particular construction gave rise to the objects that exhibit these properties.</p>
<hr>
<p>In this sense, $\pi$ is a real number (however you've come to define it), and it exhibits properties that make it equal to the result of various other expressions, including the series you include in your OP.</p>
|
1,937,826 | <p>Ok, this seems obvious to me, but how would one prove it?</p>
<p>Let $<f(t),g(t)>$ and $<h(t),p(t)>$ be parametrized arcs in the cartesian plane. If $f,g,h,p$ are all continuous and the arcs don't intersect, then there will be a line segment between the two that realizes the shortest distance. Prove this segment is normal to both arcs.</p>
<p>Is this proof nontrivial? It seems so obvious, but I am not sure how it would be done.</p>
| Fimpellizzeri | 173,410 | <p>This is not true; consider when the shortest line segment between them contains one of an arc's endpoints.</p>
|
2,883,370 | <p>If I want to determine whether a sequence $\{a_n\}$ is bounded above for all $n \in \Bbb{N}$, is it enough to find a sequence that is at least as large as $a_n$ termwise, and show that it converges and is therefore bounded? For example:</p>
<p>For all $n \in \Bbb{N}$, let</p>
<p>$$
a_n = \frac{1}{n+1} + \frac{1}{n+2} + ...+\frac{1}{2n}\\
\frac{1}{n+1} + \frac{1}{n+2} + ...+\frac{1}{2n} \leq \frac{1}{n} + \frac{1}{n} + ...+\frac{1}{n} = n\cdot\frac{1}{n}=1
$$
and since $\lim\limits_{n\to\infty}1 = 1$, then 1 must be an upper bound for $a_n$. Is this correct? Thanks.</p>
| Fred | 380,717 | <p>Yes, it is correct, we have $a_n \le 1$ for all $n$.</p>
|
4,435,088 | <p>In an <span class="math-container">$8×8$</span> table one of the square is colored black and all the others are white . Prove that one cannot make all the boxes white by recoloring the rows and columns . "Recoloring" is the operation of changing the color of all boxes in a row or in a column .</p>
<p>This is a problem taken from Mathematical Circles by Dimitri Fomin , Sergey Genkin and IIa Itenberg.</p>
<p>My solution goes like this:</p>
<blockquote>
<p>We have a square colored black while all other squares are white. So, in order to make it white we must first recolor its row or its column. Now, if we recolor that row or column, the number of black squares will increase and its parity will still be odd, as we now have <span class="math-container">$7$</span> black squares in that particular row or column. If we recolor that row or column again, it reverts to the initial configuration. So, we must now recolor each of the rows or columns containing one of those <span class="math-container">$7$</span> black squares. But this also results in an odd number of black squares, since we then have <span class="math-container">$7-1+7=13$</span> black squares in total. So, after any sequence of transformations we are left with an odd number of black squares. If all the squares were colored white, we would have <span class="math-container">$0$</span> black squares. This has even parity. So, it is not possible to have all squares colored white.</p>
</blockquote>
<p>I want to verify whether my proof is valid. There is a duplicate link about this question, but it asks for verification of a different proof; I want to know whether this one is valid.</p>
<p>Also, suppose we have a $3×3$ square like the one given in the figure.<a href="https://i.stack.imgur.com/yfXmxm.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yfXmxm.jpg" alt="enter image description here" /></a></p>
<p>Can we solve this using the same above reasoning?</p>
| Milten | 620,957 | <p>For the general <span class="math-container">$N\times M$</span> case, your approach works exactly when both <span class="math-container">$N$</span> and <span class="math-container">$M$</span> are even. In other cases (for <span class="math-container">$N,M\ge2$</span>), you can reduce to the <span class="math-container">$2\times2$</span> case by considering only the corners of the grid, or indeed the corners of any rectangle contained in the grid. That is, you can't clear the board if you can't even clear the corners.</p>
<p>In fact, this insight gives an immediate and powerful necessary condition on the initial configuration: <strong>Every (orthogonal) rectangle in the grid must have an even number of corners initially black.</strong> I think this is sufficient as well? But I don't know.</p>
<hr />
<p>EDIT: Indeed, the condition in bold above is sufficient, as can be seen by a simple greedy solving algorithm:</p>
<p>Clear the first row (so that it’s all white) using column moves. The rectangle condition is conserved after any moves, and it follows that all rows are either all black or all white now, which is clearly a solvable state.</p>
|
2,418,916 | <blockquote>
<p>Find how many terms there are in this geometric sequence:</p>
<p><span class="math-container">$-1, 2, -4, 8, ..., -16777216$</span></p>
</blockquote>
<p>My attempt:</p>
<p><span class="math-container">$a_k=a.r^{k-1}$</span></p>
<p>And in this sequence:</p>
<p><span class="math-container">$a=-1$</span>, <span class="math-container">$r=-2$</span></p>
<p>So</p>
<p><span class="math-container">$a_k=(-1){(-2)}^{k-1}$</span></p>
<p><span class="math-container">$-16777216=(-1){(-2)}^{k-1}$</span></p>
<p><span class="math-container">$16777216={(-2)}^{k-1}$</span></p>
<p><span class="math-container">$log(16777216)=log({(-2)}^{k-1})$</span></p>
<p><span class="math-container">$log(16777216)=(k-1)log{(-2)}$</span></p>
<p><span class="math-container">$k-1={{log(16777216)} \over {log{(-2)}}}$</span></p>
<p>But <span class="math-container">$-2$</span> is negative, and the logarithm is not defined for negative numbers. So what can I do?</p>
<p>Thanks</p>
| Lee Mosher | 26,501 | <p>Hint: Since the sign is causing you trouble, get rid of it. The number of terms in this sequence is the same as the number of terms in the sequence
$$1,2,4,8,...,16777216
$$</p>
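<p>Following the hint, a short computation (illustration only) counts the terms:</p>

```python
# count terms of 1, 2, 4, ..., 16777216, i.e. find k with 2^(k-1) = 16777216
terms, t = 0, 1
while t <= 16777216:
    terms += 1
    t *= 2
assert terms == 25                      # 16777216 = 2**24, so k - 1 = 24

# the signed sequence -1, 2, -4, ... hits -16777216 at the same index
assert (-1) * (-2) ** (25 - 1) == -16777216
```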
|
61,106 | <p>Let <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> be Poisson random variables with means <span class="math-container">$\lambda$</span> and <span class="math-container">$1$</span>, respectively. The difference of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> is a <a href="http://en.wikipedia.org/wiki/Skellam_distribution" rel="nofollow noreferrer">Skellam random variable</a>, with probability density function
<span class="math-container">$$\mathbb P(X - Y = k) = \mathrm e^{-\lambda - 1} \lambda^{k/2} I_k(2\sqrt{\lambda}) =: S(\lambda, k),$$</span>
where <span class="math-container">$I_k$</span> denotes the <a href="http://en.wikipedia.org/wiki/Bessel_function#Modified_Bessel_functions_:_I.CE.B1.2C_K.CE.B1" rel="nofollow noreferrer">modified Bessel function of the first kind</a>. Let <span class="math-container">$F(\lambda)$</span> denote the probability that <span class="math-container">$X$</span> is larger than <span class="math-container">$Y$</span>: <span class="math-container">$$F(\lambda) := \mathbb P(X > Y) = \sum_{k=1}^{\infty} S(\lambda, k) = \mathrm e^{-\lambda - 1} \sum_{k=1}^\infty \lambda^{k/2} I_k(2\sqrt{\lambda}).$$</span> According to Mathematica, the graph of the function <span class="math-container">$F$</span> looks like</p>
<p><img src="https://i.stack.imgur.com/m0mPi.png"><br/><sub>(source: <a href="https://cims.nyu.edu/~lagatta/F.png" rel="nofollow noreferrer">nyu.edu</a>)</sub><br/></p>
<p>My questions:</p><ul><li>Is there a closed-form expression for the function <span class="math-container">$F$</span>?</li><li>If not, what are <span class="math-container">$\lim_{\lambda \to 0} F'(\lambda)$</span> and <span class="math-container">$F'(1)$</span>? What is the asymptotic behavior as <span class="math-container">$\lambda \to \infty$</span>?</li></ul>
| Suvrit | 8,430 | <p>An alternative way to see the $1/e$ is as follows.</p>
<p>Let $x=2\sqrt{\lambda}$. <a href="http://en.wikipedia.org/wiki/Bessel_function#Asymptotic_forms" rel="nofollow">Recall that for</a> small argument $0 < x \ll \sqrt{k+1}$ we have</p>
<p>$$I_k(x) \approx \frac{1}{\Gamma(k+1)}(x/2)^k$$</p>
<p>Using this, we see that
$$\sum_{k \ge 0} \lambda^{k/2}I_k(2\sqrt{\lambda}) \approx \sum_{k\ge 0}\frac{(x/2)^{2k}}{\Gamma(k+1)}.$$</p>
<p>This sum is nothing but $e^{x^2/4} = e^\lambda$. Multiplying by $e^{-\lambda-1}$, we obtain the said $1/e$ approximation. (Note that the sum here starts at $k=0$, so it approximates $\mathbb P(X \ge Y)$ rather than $F$ itself; the difference is the single term $S(\lambda,0)$.)</p>
<p>Perhaps better approximations to $F$ can be obtained in a similar vein.</p>
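<p>As a quick numerical sanity check of this approximation, the Python sketch below (illustrative only; it works directly with the Poisson pmfs rather than the Bessel form) confirms that $\mathbb P(X \ge Y)$ approaches $1/e$ as $\lambda \to 0$:</p>

```python
import math

def poisson_pmf(lam, k):
    # P(N = k) for N ~ Poisson(lam)
    return math.exp(-lam) * lam**k / math.factorial(k)

def p_x_ge_y(lam, nmax=60):
    # P(X >= Y) for X ~ Poisson(lam), Y ~ Poisson(1), by direct summation
    return sum(poisson_pmf(lam, i) * poisson_pmf(1.0, j)
               for i in range(nmax) for j in range(i + 1))

# as lam -> 0, P(X >= Y) -> P(Y = 0) = 1/e
approx_small = p_x_ge_y(0.01)
```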
|
3,742,456 | <blockquote>
<p>If <span class="math-container">$ABC$</span> is a right angled triangle at <span class="math-container">$C$</span> then prove that <span class="math-container">$a^n+b^n<c^n$</span> for all <span class="math-container">$n>2$</span></p>
</blockquote>
<p>This is an olympiad book problem. I know that <span class="math-container">$a+b>c$</span> and <span class="math-container">$a^2+b^2=c^2$</span>, <span class="math-container">$a^n+b^n≠c^n$</span> for all <span class="math-container">$n>2$</span>, and <span class="math-container">$(a+b)^n>c^n$</span>, but I can't find a way to put these together and prove the above; maybe I am not hitting on the right approach. I would appreciate any help.</p>
| uniquesolution | 265,735 | <p>If <span class="math-container">$n>2$</span>, the unit ball of the norm <span class="math-container">$\|(x,y)\|_2=(|x|^2+|y|^2)^{1/2}$</span> is contained in the unit ball of <span class="math-container">$\|(x,y)\|_n=(|x|^n+|y|^n)^{1/n}$</span>, and there are only four common points: <span class="math-container">$(\pm 1,0)$</span> and <span class="math-container">$(0,\pm 1)$</span>. Therefore, if <span class="math-container">$(a/c)^2+(b/c)^2=1$</span>,
then the point <span class="math-container">$(a/c,b/c)$</span> lies in the <em>interior</em> of the unit ball of <span class="math-container">$\|\cdot\|_n$</span>, because <span class="math-container">$a,b,c$</span> are all non-zero.</p>
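<p>The inequality can also be sanity-checked numerically (a small Python sketch, not part of the proof):</p>

```python
import math, random

random.seed(0)
for _ in range(200):
    # random legs of a right triangle; c is the hypotenuse
    a = random.uniform(0.1, 10.0)
    b = random.uniform(0.1, 10.0)
    c = math.hypot(a, b)          # so a^2 + b^2 = c^2
    for n in [3, 4, 5, 10]:
        assert a**n + b**n < c**n
```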
|
789,458 | <p>If one day we finally prove the normality of $\pi $, would we be able to say that we have ourselves a sure-fire <em>truly random</em> number generator?</p>
| vadim123 | 73,324 | <p>Tomorrow's lottery numbers are random. Yesterday's are not. Pi is like the latter.</p>
<p>If you're not convinced, we can take a bet on yesterday's numbers. :-)</p>
|
3,054,898 | <h3>Problem</h3>
<p>Evaluate <span class="math-container">$$\int_0^{2\pi}(t-\sin t)(1-\cos t)^2{\rm d}t.$$</span></p>
<h3>Comment</h3>
<p>It's very complicated to compute the integral applying normal method. I obtain the result resorting to the skillful formula</p>
<blockquote>
<p><span class="math-container">$$\int_0^{2\pi}xf(\cos x){\rm d}x=\pi\int_0^{2\pi}f(\sin x){\rm d}x,$$</span>
where <span class="math-container">$f(x) \in C[-1,1].$</span></p>
</blockquote>
<p><span class="math-container">\begin{align*}
\require{begingroup}
\begingroup
\newcommand{\dd}{\;{\rm d}}\int_0^{2\pi} (t-\sin t)(1-\cos t)^2 \dd t
&= \int_0^{2\pi} t(1-\cos t)^2 \dd t - \int_0^{2\pi} \sin t(1-\cos t)^2 \dd t \\
&= \pi\int_0^{2\pi} (1-\sin t)^2 \dd t - \int_0^{2\pi} (1-\cos t)^2 \dd (1-\cos t) \\
&= \pi\int_0^{2\pi} \left(\frac32-\frac12\cos2t-2\sin t\right) \dd t - \left[\frac13(1-\cos t)^3\right]_0^{2\pi}\\
&= \pi\left[\frac32t-\frac14\sin2t+2\cos t\right]_0^{2\pi}\\
&= 3\pi^2
\endgroup
\end{align*}</span></p>
<p><strong>But any other solution?</strong></p>
| yavar | 621,272 | <p><span class="math-container">\begin{align}
&\int_0^{2\pi}(t-\sin t)(1-\cos t)^2{\rm d}t=\int_0^{2\pi}t+t{\cos }^2 t-2t\cos t-\sin t -\sin t{\cos }^2 t+\sin 2t{\rm d}t\\
&=\left[\frac{1}{2}t^2+\frac{1}{2}t^2+\frac{1}{4}t\sin 2t-\frac{1}{4}t^2+\frac{1}{8}\cos 2t-2t\sin t-2\cos t+\cos t+\frac{1}{3}{\cos }^3t-\frac{1}{2}\cos 2t\right]_0^{2\pi} \\
&=\left[\frac{3}{4}t^2+\frac{1}{4}t\sin 2t-\frac{3}{8}\cos 2t-2t\sin t-\cos t+\frac{1}{3}{\cos }^3 t\right]_0^{2\pi}\\
&=3\pi ^2
\end{align}</span></p>
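<p>As a numerical cross-check, a composite Simpson's rule agrees with $3\pi^2$ (an illustrative Python sketch):</p>

```python
import math

def integrand(t):
    return (t - math.sin(t)) * (1 - math.cos(t))**2

def simpson(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

val = simpson(integrand, 0.0, 2 * math.pi)
assert abs(val - 3 * math.pi**2) < 1e-8
```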
|
264,745 | <p>When I was learning statistics I noticed that a lot of things in the textbook I was using were phrased in vague terms of "this is a function of that" e.g. a statistic is a function of a sample from a distribution. I realized that while I know the definition of a function as a relation and I have an intuitive notion of what "function of" means, it's unclear to me how you transform this into a rigorous definition of "function of". So what is the actual definition of "function of"?</p>
| paul garrett | 12,291 | <p>There certainly is a discrepancy between the formal set-theoretic definition ("giving" a function by giving its graph), and the informal use. Another important aspect of the informal use of "function" in practice is to ascertain when one thing $y$ is <em>not</em> "a function of" another thing $x$, which ordinarily means that "when $x$ changes", but everything else is "kept constant", $y$ does not change. A synonymous phrase is "$y$ does not depend on $x$".</p>
<p>How to ascertain whether $y$ "depends on/is a function of" $x$? There is no universal algorithm, and unless the relationship or lack thereof is described adequately, even specific examples are not resolvable. This is especially true of physical measurements, where correlation and causality are not always easy to distinguish.</p>
<p>In purely mathematical situations, often there is some difficulty in "finding" a thing $y$, and one is interested in being able to use "the same $y$" while other things in the environment/context vary. Giving upper bounds or lower bounds or counting something... with an outcome independent of, that is, <em>not</em> a function of, some other thing $x$... is a simpler story. It is not always obvious whether or not this is possible, so it is reasonable to ask the question.</p>
<p>In introductory physical science and engineering discussions, it is typically mathematically useful insofar as it <em>simplifies</em> things to <em>assume</em> (tentatively? heuristically? as a good approximation?) that one thing is independent of another, that is, "is not a function of". The archetype for this is a situation in which one will differentiate implicitly, but, if everything depends on all parameters, a uselessly complicated expression comes out. Using some experimental/physical sense about the physical realities often allows a practically useful approximation by declaring that this doesn't depend on that.</p>
|
264,745 | <p>When I was learning statistics I noticed that a lot of things in the textbook I was using were phrased in vague terms of "this is a function of that" e.g. a statistic is a function of a sample from a distribution. I realized that while I know the definition of a function as a relation and I have an intuitive notion of what "function of" means, it's unclear to me how you transform this into a rigorous definition of "function of". So what is the actual definition of "function of"?</p>
| nigel | 49,330 | <p>Let $A$ and $B$ be sets. A relation between $A$ and $B$ is some set $S \subseteq A \times B$. A <em>function on $A$</em> is a relation between $A$ and $B$ where $B$ is an arbitrary set (call this relation $S \subseteq A \times B$), and if $(a,b) \in S$ and $(a,c) \in S$, then $b=c$.</p>
<p>For example, if we say $f$ is a function of time, and we take time to be any non-negative real number, then we have that $f$ is a subset of $\mathbb{R}_{\geq 0} \times A$ where $A$ is some arbitrary set.</p>
|
4,650,979 | <blockquote>
<p>Solve the following recurrence equation: <span class="math-container">$T(n) = T(n-2)+n^2$</span>, having <span class="math-container">$T(0)=1$</span>, <span class="math-container">$T(1)=5$</span>.</p>
</blockquote>
<p>I need to solve this equation but when I get to the particular solution with <span class="math-container">$n^2$</span> some of the terms I need cancel out and it's kind of impossible to find the constants at that point.</p>
<p>Here is the way I'm doing it, using <span class="math-container">$an^2+bn+c$</span> to replace <span class="math-container">$T(n)$</span>;
<span class="math-container">$$\begin{split}
0&=an^2+bn+c - a(n-2)^2-b(n-2)-c-n^2\\
&=an^2+bn- a(n-2)^2-b(n-2)+n^2\\
&=an^2+bn- a(n^2-4n+4)-b(n-2)-n^2\\
&=an^2+bn- an^2+4an-4a-bn-2b-n^2\\ &=4an-4a-2b-n^2\\
&=n(4a-n)+(-4a+2b)\\
\end{split}$$</span>
But beyond that, I cannot find any way to find the constants.</p>
| Fnacool | 318,321 | <p><strong>You need to solve for even and odd separately</strong>, but first let's find the value of the <strong>sum of squares of all nonnegative integers from <span class="math-container">$0$</span> to <span class="math-container">$m$</span></strong>, a quantity we denote by <span class="math-container">$S(m)$</span> and which we will need later.</p>
<p>We make an assumption that <span class="math-container">$m\to S(m)$</span> is a polynomial of degree <span class="math-container">$3$</span> (which I won't justify here, and you can prove the formula by induction after we find it). Observing that <span class="math-container">$S(0)=0$</span>, we have <span class="math-container">$S(m) = c_3 m^3 + c_2 m^2+ c_1 m$</span>. Since <span class="math-container">$S(1)=1,S(2)=5$</span> and <span class="math-container">$S(3)=14$</span>, we also have<br />
<span class="math-container">\begin{align*} c_3+c_2+c_1 &= 1\\
8*c_3+4*c_2+2*c_1 &= 5\\
27*c_3+9*c_2+3*c_1 &= 14\end{align*}</span><br />
The solution to this system of equations is <span class="math-container">$c_3=\frac 13, c_2=\frac 12, c_1=\frac16$</span>. That is</p>
<p><span class="math-container">$$S(m) = \frac{2m^3+3m^2+m}{6}= \frac{m(m+1)(2m+1)}{6}.$$</span></p>
<p>Now to the problem:</p>
<p><span class="math-container">$$T(2)-T(0)=2^2, T(4)-T(2)=4^2,\dots,T(2m)-T(2m-2)= (2m)^2.$$</span>
Adding up,</p>
<p><span class="math-container">$$T(2m)-T(0)= \underset{\mbox{even}}{\underbrace{2^2+ \dots + (2m)^2}}= 2^2(1^2+\dots+m^2) = 4S(m).$$</span></p>
<p>That is <span class="math-container">$$T(2m)= T(0)+ \frac{2m(m+1)(2m+1)}{3}.$$</span></p>
<p>Similarly
<span class="math-container">$$T(3)-T(1)=3^2,\dots, T(2m+1)-T(2m-1)=(2m+1)^2,$$</span></p>
<p>Therefore</p>
<p><span class="math-container">$$T(2m+1)-T(1) = \underset{\mbox{odd}}{\underbrace{3^2+5^2+ \dots + (2m+1)^2}} = S(2m+1) - 1^2- (2^2+\dots + (2m)^2)=S(2m+1)-1 - 4S(m).$$</span></p>
<p>I'll leave simplifying this last expression to you.</p>
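<p>As a sanity check, both closed forms can be compared against the recurrence directly (a Python sketch; here $S(m)=m(m+1)(2m+1)/6$ is the standard sum of squares from $0$ to $m$):</p>

```python
def S(m):
    # sum of squares 0^2 + 1^2 + ... + m^2
    return m * (m + 1) * (2 * m + 1) // 6

def T_rec(n):
    # T(0)=1, T(1)=5, T(n)=T(n-2)+n^2
    T = [1, 5]
    for k in range(2, n + 1):
        T.append(T[k - 2] + k * k)
    return T[n]

for m in range(0, 30):
    assert T_rec(2 * m) == 1 + 4 * S(m)                         # even case
    assert T_rec(2 * m + 1) == 5 + S(2 * m + 1) - 1 - 4 * S(m)  # odd case
```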
|
2,203,008 | <p>Consider the quadratic program</p>
<blockquote>
<p><span class="math-container">$ \min$</span> <span class="math-container">$ f(x) $</span></p>
<p><span class="math-container">$ \text{s.t.} \space Ax=c$</span></p>
</blockquote>
<p>Prove that <span class="math-container">$ x^* $</span> is a local minimum if and only if it is a global minimum. No convexity is assumed. The back direction is trivial.</p>
<p>Could anyone help with proof? Thanks!</p>
| Dinoman | 829,651 | <p>Now copper.hat's answer is already very good. However, I aim to answer this question with different notations, just hope to explain it in the plainest language.</p>
<p>First, the set of all points satisfying <span class="math-container">$Ax=c$</span> is called the <em>feasible region</em> of this optimization problem, and every point in this set is called a <em>feasible point</em>. In particular, we want to show that a feasible point is a local minimum <strong>iff</strong> it is a global minimum.</p>
<p>Assume <span class="math-container">$x^*$</span> is a local minimum, and let <span class="math-container">$y$</span> be another feasible point. Setting <span class="math-container">$d=y-x^*$</span>, the vector <span class="math-container">$d$</span> is a feasible direction.</p>
<p>Now consider <span class="math-container">$Ad$</span>: since <span class="math-container">$Ay = c$</span> and <span class="math-container">$Ax^* = c$</span>, we have <span class="math-container">$Ad = A(y-x^*)=0$</span>, so <span class="math-container">$x^*+ad$</span> is feasible for every scalar <span class="math-container">$a$</span>. Since <span class="math-container">$x^*$</span> is a local minimum, <span class="math-container">$f(x^*+ad)\geq f(x^*)$</span> for all sufficiently small <span class="math-container">$a$</span>.</p>
<p>Observe that <span class="math-container">$\frac{f(x^*+ad)-f(x^*)}{a} \geq 0$</span> as <span class="math-container">$a \to 0^+$</span>, which gives <span class="math-container">$\nabla f(x^*)^T d\geq 0$</span>.</p>
<p>Another key observation is <span class="math-container">$\frac{f(x^*+ad)-2f(x^*)+f(x^*-ad)}{a^2}\geq 0$</span> (note that <span class="math-container">$-d$</span> is also a feasible direction), which gives
<span class="math-container">$d^T \nabla^2 f(x^*) d\geq 0$</span>. Since <span class="math-container">$f$</span> is quadratic, its Hessian is constant, so <span class="math-container">$d^T \nabla^2 f(x^*+\theta d) d = d^T \nabla^2 f(x^*) d \geq 0$</span> for every <span class="math-container">$\theta$</span>.</p>
<p>By Taylor's Theorem, <span class="math-container">$f(y) = f(x^*+d) = f(x^*) +\nabla f(x^*)^T d +\frac{1}{2} d^T \nabla^2 f(x^*+\theta d) d$</span> for some <span class="math-container">$\theta \in (0,1)$</span>. By the two observations above, <span class="math-container">$f(y) \geq f(x^*)$</span>, so <span class="math-container">$x^*$</span> is a global minimum.</p>
|
311,153 | <p>I'm trying to solve the following definite integral:
$$\int_{1-x^2}^{1+x^2}{\ln(t^2)\ dt}$$
I'm not sure whether I can use Barrow's rule here, or whether I have to use the fundamental theorem of integral calculus. How can I solve it?</p>
| Santosh Linkha | 2,199 | <p>$\displaystyle \log t^2 = 2 \log t, $ so $ \displaystyle \int_{1-x^2}^{1+x^2}{\ln(t^2)·dt} = \int_{1-x^2}^{1+x^2}{2\ln(t)·dt}$ and $\displaystyle \int \log(t) dt = t (\log(t) -1) + c$</p>
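<p>A quick numerical confirmation of this antiderivative (an illustrative Python sketch; note that it needs $|x|<1$ so the lower limit $1-x^2$ stays positive):</p>

```python
import math

def F(t):
    # antiderivative of ln(t^2) = 2*ln(t), valid for t > 0
    return 2 * t * (math.log(t) - 1)

x = 0.5                       # need |x| < 1 so that 1 - x^2 > 0
a, b = 1 - x**2, 1 + x**2
closed = F(b) - F(a)

# midpoint-rule comparison with the closed form
N = 200000
h = (b - a) / N
midpoint = h * sum(math.log((a + (i + 0.5) * h)**2) for i in range(N))
assert abs(closed - midpoint) < 1e-8
```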
|
1,292,889 | <p>I've read the paper <a href="http://web.mit.edu/leozhou/www/gauss.pdf" rel="noreferrer">Least square fitting of a Gaussian function to a histogram</a> by Leo Zhou on how to perform a Least Square Fitting of a gaussian function to a histogram.</p>
<p>The Gaussian function used to fit the data is:
$$f(y)=A\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$</p>
<p>However, the method described in the paper doesn't work if the dataset has a vertical offset (i.e. all the points are shifted on the Y-axis by some amount $K$).</p>
<p>I was wondering how to perform LSF (or any other kind of fitting) to estimate the parameters $A$, $\mu$, $\sigma$ and $K$ of the function</p>
<p>$$f(y)=A\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)+K$$</p>
<p>(without prior knowledge of the parameter $K$, of course).</p>
| Bryson of Heraclea | 122,828 | <p>Let us assume that $K$ is known. Then you would have to fit your transposed data $y_i'=y_i-K$ to a Gaussian curve. Using the terminology of the paper you cited, the error of the fit would be
$$
\chi^2=\sum_{i=1}^N\frac{(\ln y_i'-(ax_i^2+bx_i+c))^2}{\sigma_{\ln y'}}
$$
The important thing to note is that $\chi^2$ depends explicitly on $K$, since $a,b,c$ are solutions of a system whose right-hand side depends on $K$. One could calculate its expression analytically, but it would be too complicated to write down here. All you have to do is minimize the error with respect to $K$, i.e. find the points where $d\chi^2/dK=0$. This is a transcendental equation, which could be solved numerically, but computationally a quick and dirty solution would be to plot $\chi^2(K)$ with respect to $K$, to see where it is minimized. The most reasonable range for $K$ to look for would be around the minimal value of your samples $y_i$.
|
1,292,889 | <p>I've read the paper <a href="http://web.mit.edu/leozhou/www/gauss.pdf" rel="noreferrer">Least square fitting of a Gaussian function to a histogram</a> by Leo Zhou on how to perform a Least Square Fitting of a gaussian function to a histogram.</p>
<p>The Gaussian function used to fit the data is:
$$f(y)=A\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$</p>
<p>However, the method described in the paper doesn't work if the dataset has a vertical offset (i.e. all the points are shifted on the Y-axis by some amount $K$).</p>
<p>I was wondering how to perform LSF (or any other kind of fitting) to estimate the parameters $A$, $\mu$, $\sigma$ and $K$ of the function</p>
<p>$$f(y)=A\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)+K$$</p>
<p>(without prior knowledge of the parameter $K$, of course).</p>
| Claude Leibovici | 82,404 | <p>In the same spirit as Bryson of Heraclea's answer, consider that $K$ is fixed at a given value. Then, for the given $K$, apply the method given in the paper to get the remaining parameters $A$, $\mu$, $\sigma$ (which are all implicit functions of $K$). Now, compute the corresponding $y$'s and $$SSQ(K)=\sum_{i=1}^N (y_i^{calc}-y_i^{exp})^2$$ Try different values of $K$ until you see a minimum of $SSQ(K)$. When you have one value of $K_m$ such that $$SSQ(K_{m-1})\gt SSQ(K_m)$$ and $$SSQ(K_m) \lt SSQ(K_{m+1})$$ use this value of $K_m$ and the associated values of $A_m$, $\mu_m$, $\sigma_m$ as starting values for a rigorous nonlinear regression.</p>
<p>This kind of procedure is extremely useful when one (even two or three) parameter make the model impossible to linearize.</p>
<p>Do not forget that when you linearize a model, you <strong>must</strong> run the nonlinear regression since what is measured is $y$ and not any of its possible transforms.</p>
<p>For illustration purposes, using the data from JJacquelin for different values of $K$, I obtained $$SSQ(0)=4.577$$ $$SSQ(1)=3.373$$ $$SSQ(2)=1.832$$ $$SSQ(3)=0.780$$ $$SSQ(4)=3.930$$ Based on these values, using spline interpolation, the minimum should occur for $K=2.89252$.</p>
<p>More advanced would be to solve the implicit equation $$\frac{dSSQ(K)}{dK}=0$$ using Newton method with numerical derivatives (first and second orders).</p>
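<p>The $K$-scan is easy to prototype. The sketch below (Python, with hypothetical noiseless synthetic data whose true offset is $K=2$; the quadratic fit to $\ln(y-K)$ uses hand-rolled normal equations, purely for illustration) recovers the offset as the grid minimizer of $SSQ(K)$:</p>

```python
import math

def fit_quadratic(xs, zs):
    # least-squares fit z ~ a*x^2 + b*x + c via the 3x3 normal equations (Cramer's rule)
    s = [sum(x**k for x in xs) for k in range(5)]
    t = [sum(z * x**k for x, z in zip(xs, zs)) for k in range(3)]
    M = [[s[4], s[3], s[2]], [s[3], s[2], s[1]], [s[2], s[1], s[0]]]
    v = [t[2], t[1], t[0]]

    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    D = det(M)
    coeffs = []
    for col in range(3):
        m = [row[:] for row in M]
        for i in range(3):
            m[i][col] = v[i]
        coeffs.append(det(m) / D)
    return coeffs  # a, b, c

# synthetic, noiseless data with a known vertical offset (illustration only)
A_true, mu_true, sigma_true, K_true = 5.0, 1.0, 0.8, 2.0
xs = [i * 0.1 - 2.0 for i in range(41)]
ys = [A_true * math.exp(-(x - mu_true)**2 / (2 * sigma_true**2)) + K_true
      for x in xs]

def ssq(K):
    # for a fixed K, linearize by fitting a quadratic to ln(y - K),
    # then measure the residual sum of squares back in y-space
    pts = [(x, y - K) for x, y in zip(xs, ys) if y - K > 1e-9]
    a, b, c = fit_quadratic([p[0] for p in pts], [math.log(p[1]) for p in pts])
    return sum((K + math.exp(a * x * x + b * x + c) - y)**2
               for x, y in zip(xs, ys))

grid = [1.0, 1.5, 2.0, 2.5, 3.0]
best_K = min(grid, key=ssq)   # the scan picks out the true offset
```

<p>With the true $K$, $\ln(y-K)$ is exactly quadratic in $x$, so $SSQ(K_{true})$ is essentially zero while all other grid values give a strictly positive residual; in practice one would refine the grid or switch to a full nonlinear fit afterwards.</p>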
|
1,379,513 | <p>A hot dog stand has 12 different toppings available. How many different kinds of hot dogs can be made, assuming the order of the toppings does not make a difference? I believe the correct answer is 882050, with the maximum varieties per number of toppings selected being 665280 when there are six toppings. I am also not sure about how to create a formula that would arrive at this result.</p>
| fred | 254,551 | <p>For each topping you can choose to include it or not include it. This results in $2^{12}=4096$ different kinds of hot dogs. </p>
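<p>Equivalently, one can enumerate all topping subsets directly (a small Python illustration):</p>

```python
from itertools import combinations

toppings = range(12)
# every hot dog corresponds to one subset of toppings, of any size r
subsets = [c for r in range(13) for c in combinations(toppings, r)]
assert len(subsets) == 2**12 == 4096
```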
|
1,379,513 | <p>A hot dog stand has 12 different toppings available. How many different kinds of hot dogs can be made, assuming the order of the toppings does not make a difference? I believe the correct answer is 882050, with the maximum varieties per number of toppings selected being 665280 when there are six toppings. I am also not sure about how to create a formula that would arrive at this result.</p>
| GOTO 0 | 29,669 | <p>The vendor can make uncountably many different kinds of hot dogs but varying the amount of one single topping, if the amount can be expressed as a real number (e.g. weight).</p>
|
389,460 | <blockquote>
<p>For $n>0$ let $A(n) = \underbrace{111 \ldots 11}_{n}$. Prove that if $A(n)$ is divisible by a prime number $p>3$, then $\gcd(n, p-1) > 1$.</p>
</blockquote>
<p>It is no huge discovery that if $n$ is even, then $2$ is a common divisor of $n$ and $p-1$, thus the implication holds. I don't know how to justify the general case though, so I would appreciate some hints.</p>
| lab bhattacharjee | 33,337 | <p>If $p(>3)$ divides $\underbrace{111 \ldots 11}_n, p$ divides $\underbrace{999\ldots 99}_n\implies p$ divides $(10^n-1)$ </p>
<p>$\implies 10^n\equiv1\pmod p\implies ord_p{10}$ divides $n$</p>
<p>Again, using Fermat's Little Theorem, $10^{p-1}\equiv1\pmod p\implies ord_p{10}$ divides $p-1$</p>
<p>$\implies ord_p{10}$ divides $(n,p-1)$</p>
<p>If $(n,p-1)=1,ord_p{10}$ divides $1\implies ord_p{10}=1\implies 10^1\equiv1\pmod p\implies p$ divides $9$ which is impossible as $p>3$</p>
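<p>The statement is easy to test empirically over small primes (an illustrative Python sketch, with $A(n)=(10^n-1)/9$):</p>

```python
import math

def repunit(n):
    # A(n) = 111...1 with n ones
    return (10**n - 1) // 9

checked = 0
for p in [7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43]:  # primes > 3 (and != 5)
    for n in range(1, 50):
        if repunit(n) % p == 0:
            assert math.gcd(n, p - 1) > 1
            checked += 1
assert checked > 0
```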
|
4,107,232 | <h2>Problem</h2>
<p>Robert is playing a game with numbers. If he has the number <span class="math-container">$x$</span>, then in the next move, he can do one of the following:</p>
<ul>
<li>Replace <span class="math-container">$x$</span> by <span class="math-container">$\lceil{\frac{x^2}{2}}\rceil$</span></li>
<li>Replace <span class="math-container">$x$</span> by <span class="math-container">$\lfloor{\frac{x}{3}}\rfloor$</span></li>
<li>Replace <span class="math-container">$x$</span> by <span class="math-container">$9x+2$</span></li>
</ul>
<p>He starts with the number <span class="math-container">$0$</span>. How many integers less than or equal to <span class="math-container">$7000$</span> can he achieve using the above functions?</p>
<p><em>[It is permitted to pass through numbers greater than <span class="math-container">$7000$</span> along the way to achieving the desired numbers.]</em></p>
<h2>My Approach</h2>
<p>Call the functions <span class="math-container">$f_1,f_2,f_3$</span> respectively. <span class="math-container">$2$</span> is easily achievable from <span class="math-container">$0$</span> (using <span class="math-container">$f_3$</span>). I've found that all the integers from <span class="math-container">$0$</span> to <span class="math-container">$10$</span> are achievable (Though we achieve them in a long way). The numbers get messy when we get ahead further. I can't prove that any number is unachievable. I've noticed that base-<span class="math-container">$3$</span> numbers can help for <span class="math-container">$f_2$</span> and <span class="math-container">$f_3$</span>.</p>
<hr />
<p>How can I make further progress?</p>
<p><strong>Update:</strong> Mr. Mike showed that all integers are <em>achievable</em> by this process through codes. Mr. Calvin also gave a partial proof for that. So, a complete proof is needed currently.</p>
| Mike Earnest | 177,399 | <p>Too long for a comment, but with the help of a computer, I found that all numbers in <span class="math-container">$\{1,\dots,7000\}$</span> are indeed reachable. I verified this with the following Python code, which uses a priority queue to find reachable numbers. The best priority ordering I found was to try the <span class="math-container">$\lfloor x/3\rfloor$</span> operation first, then to try the <span class="math-container">$\lceil x^2/2\rceil$</span> operation, and resorting to <span class="math-container">$9x+2$</span> last.</p>
<p><a href="https://tio.run/##bVBBbsIwELz7FXvDBjdxwgEVlUp9Qs4RQgY2JFJip46hQlXfnnqJUYXonmzP7Mx4@quvrVmOY@VsBzXq/hOarrfO3y79eajldLI9Y167E/oBNjCg506bE/JMwkqpTAhmzt0uMnYtVj7QAqIYGxBN3CnVVrAiXEqupBJbxr7qpkV42n0HtWYQ5nB2Dg2JxRi8EGW2vWHe7vTxGNUiMU2XQkKIxePDfJ7DAjKRpjkBuXyd3zUXkItJqbIOeO8a6xp/DQku2EoKJaAx0WaKQ9NUBIGxnlD63R9GQy9J2OAk8IDcO@WF/N/ukR6N3qYi1/AA3gqYKkscdvaCz340T9W@bCBjLLgbz2cfbUuMPbqBPvOdySRJJNn9gHYIDvWh1vsWk5kYx18" rel="noreferrer" title="Python 3 – Try It Online">Try it online!</a></p>
<pre><code>from heapq import heappush, heappop
targets = set(range(1, 7001))
num_targets_left = 7000
seen = set([0])
Q = [(0,0)]
while num_targets_left > 0:
current = heappop(Q)[1]
to_add = [(0,current//3), (1,(current**2 + 1)//2), (2,9*current + 2)]
for (priority_level, num) in to_add:
if num not in seen:
seen.add(num)
heappush(Q, (priority_level, num))
if num <= 7000:
targets.remove(num)
num_targets_left -= 1
print('All numbers in {1,...,7000} are reachable.')
</code></pre>
<p>I found the the hardest number was <span class="math-container">$6121$</span>. The path that led to it below. Here, <code>third x 7</code> means you do this <span class="math-container">$\lfloor x/3\rfloor$</span> operation <span class="math-container">$7$</span> times.</p>
<pre><code>Number Next operation(s)
----------------------------------
0 9x + 2
2 9x + 2
20 half-square
200 third x 3
7 half-square
25 third x 1
8 half-square
32 third x 1
10 half-square
50 third x 1
16 half-square
128 third x 2
14 half-square
98 half-square
4802 third x 5
19 half-square
181 half-square
16381 third x 4
202 half-square
20402 third x 4
251 half-square
31501 third x 4
388 half-square
75272 third x 6
103 half-square
5305 third x 3
196 half-square
19208 third x 2
2134 half-square
2276978 third x 5
9370 half-square
43898450 third x 9
2230 half-square
2486450 third x 6
3410 half-square
5814050 third x 6
7975 half-square
31800313 third x 7
14540 half-square
105705800 third x 8
16111 half-square
129782161 third x 9
6593 half-square
21733825 third x 7
9937 half-square
49371985 third x 8
7525 half-square
28312813 third x 8
4315 half-square
9309613 third x 6
12770 half-square
81536450 third x 8
12427 half-square
77215165 third x 8
11768 half-square
69242912 third x 8
10553 half-square
55682905 third x 7
25460 half-square
324105800 third x 9
16466 half-square
135564578 third x 8
20662 half-square
213459122 third x 8
32534 half-square
529230578 third x 9
26887 half-square
361455385 third x 10
6121
</code></pre>
|
71,608 | <p>Consider the following question:</p>
<p>Is there a family $\mathcal{F}$ of subsets of $\aleph_\omega$ that satisfies the following properties?</p>
<p>(1) $|\mathcal{F}|=\aleph_\omega$</p>
<p>(2) For all $A\in \mathcal{F}$, $|A|<\aleph_\omega$</p>
<p>(3) For all $B\subset \aleph_\omega$, if $|B|<\aleph_\omega$, then there exists some $B'\in \mathcal{F}$ such that $B\subset B'$.</p>
<p>I am not sure if there is anything special about $\aleph_\omega$, but this was the example that came up. </p>
<p>Any help?</p>
| saf | 16,826 | <p>a family F as above <strong>of minimal size</strong> satisfies <span class="math-container">$|F|=cov(\aleph_\omega,\aleph_\omega,\aleph_\omega,2)=pp(\aleph_\omega)=cof([\aleph_\omega]^\omega)\ge \aleph_{\omega+1}$</span>.</p>
<p>Edit: following Ali's suggestion, here are more details.
<span class="math-container">$$cov(\lambda,\theta,\kappa,\sigma):=\min\{\, |\mathcal F| : \mathcal F\subseteq [\lambda]^{<\theta} \text{ s.t. } \forall A\in [\lambda]^{<\kappa}\,\exists\mathcal A\in[\mathcal F]^{<\sigma}\,(A\subseteq\bigcup\mathcal A)\,\}$$</span>
The definition is due to Shelah, of course. Note that <span class="math-container">$cov(\lambda,\kappa,\kappa,2)=cf([\lambda]^{<\kappa},\subseteq)$</span>. The definition of <span class="math-container">$pp$</span>, may be found in several places; for a crush treatment, see for example: <a href="http://papers.assafrinot.com/?num=5" rel="nofollow noreferrer">http://papers.assafrinot.com/?num=5</a> .
Let <span class="math-container">$\lambda$</span> denote a singular cardinal. It is always the case that <span class="math-container">$\lambda^+\le pp(\lambda)\le cov(\lambda^+,\lambda,cf(\lambda)^+,2)\le cf([\lambda]^{cf(\lambda)},\subseteq)$</span>. Now, consider the preceding three cardinal invariants (of <span class="math-container">$\lambda$</span>). Shelah proved that if <span class="math-container">$\lambda$</span> is the <strong>least</strong> (singular) cardinal for which any of the three is greater than <span class="math-container">$\lambda^+$</span>, then all three are equal. In particular, the opening equation that I gave (concerning <span class="math-container">$\aleph_\omega$</span>) holds. See [Sh:E12] for pointers to Shelah's works on ``pp VS. cov''.</p>
<p>Edit2: The definition of <span class="math-container">$cov$</span> may also be found here: <a href="http://papers.assafrinot.com/?num=1" rel="nofollow noreferrer">http://papers.assafrinot.com/?num=1</a> . See (the proof of) Lemma 3.4 from there, and the subsequent works on this subject:</p>
<ol>
<li><a href="https://projecteuclid.org/journals/journal-of-symbolic-logic/volume-71/issue-4/Bounds-for-covering-numbers/10.2178/jsl/1164060456.short" rel="nofollow noreferrer">Link</a></li>
<li><a href="http://journals.impan.pl/cgi-bin/doi?fm205-1-3" rel="nofollow noreferrer">http://journals.impan.pl/cgi-bin/doi?fm205-1-3</a></li>
</ol>
|
1,987,026 | <p>So I know to get a probability like $P(2\leq X\leq 4)$, you simply do $P(X\leq4) - P(X\leq1)$, but when there is a question like $P(2<X<4)$ what am I supposed to do? </p>
<p>This isn't limited to probabilities between two values: I also don't know what to do for something like $P(X<2)$. So far all our examples have used "greater/less than or equal to"; for $P(X\leq 2)$ you just take the cumulative distribution up to $2$, but what do I do if that $2$ is not included in the range?</p>
| DavidF | 126,754 | <p>If $X$ is a continuous random variable then $\mathbb{P}(X \leq c) = \mathbb{P}(X < c)$, for $c$ some constant. This is because the cumulative probability is given by the integral, letting $f_X$ be the probability density function of $X$,
\begin{equation*}
\mathbb{P}(X \leq c) = F_X(c) = \int^c_{-\infty} f_X(t)\,dt
\end{equation*}</p>
<p>If you're familiar with integral calculus then it should be clear why there's no difference between integrating over the interval $(-\infty, c]$ and $(-\infty, c)$. </p>
<p>If $X$ is a discrete random variable then we can write our cumulative probability as (suppose $X$ can't be negative for simplicity)
\begin{align*}
\mathbb{P}(X \leq c) &= \mathbb{P}(X = 0 \text{ or } X = 1 \text{ or } X = 2 ~\cdots~ X = c) \\
&= \mathbb{P}(X = 0) + \mathbb{P}(X = 1) + \mathbb{P}(X = 2) + \cdots + \mathbb{P}(X = c)
\end{align*}</p>
<p>whereas the probability $\mathbb{P}(X < c)$ is
\begin{align*}
\mathbb{P}(X < c) &= \mathbb{P}(X = 0 \text{ or } X = 1 \text{ or } X = 2 ~\cdots~ X = c - 1) \\
&= \mathbb{P}(X = 0) + \mathbb{P}(X = 1) + \mathbb{P}(X = 2) + \cdots + \mathbb{P}(X = c - 1)
\end{align*}</p>
<p>So, supposing $X$ is integer-valued and nonnegative as before,
\begin{align*}
\mathbb{P}(X \leq 2) &= \mathbb{P}(X = 0) + \mathbb{P}(X = 1) + \mathbb{P}(X = 2) \\
\mathbb{P}(X < 2) &= \mathbb{P}(X = 0) + \mathbb{P}(X = 1)
\end{align*}</p>
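<p>To make the discrete case concrete, here is a tiny Python illustration with a made-up pmf (the specific probabilities are hypothetical):</p>

```python
# a hypothetical pmf for an integer-valued random variable X
pmf = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}

def cdf(c):
    # P(X <= c)
    return sum(p for k, p in pmf.items() if k <= c)

p_le_2 = cdf(2)               # P(X <= 2) = 0.1 + 0.2 + 0.3
p_lt_2 = cdf(1)               # P(X < 2) = P(X <= 1) for integer-valued X
p_between = cdf(3) - cdf(2)   # P(2 < X < 4) = P(X = 3)
```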
|
1,465,229 | <p>At p. 388 of Calculus, Spivak gives a formula:</p>
<p>$$\frac{1}{1+t^2} = 1 - t^2 + t^4 - ... + (-1)^nt^{2n} + \frac{(-1)^{n+1}t^{2n+2}}{1+t^2}$$</p>
<p>Which can be integrated to find $\arctan(x)$.</p>
<p>I don't understand where this formula comes from, but I found it up to $(-1)^nt^{2n}$ by considering the geometric series for $\frac{1}{1-x}$ and replacing $x$ by $-x^2$ to get the series for $\frac{1}{1+x^2}$. I don't see the term $\frac{(-1)^{n+1}x^{2n+2}}{1+x^2}$ though, because the series I got this way is $\frac{1}{1+x^2} = \sum_{n=0}^{\infty}(-1)^nx^{2n}$.</p>
| mrf | 19,440 | <p>This is a finite geometric sum:</p>
<p>$$
\sum_{k=0}^n (-1)^k t^{2k} = \sum_{k=0}^n (-t^2)^k = \frac{1-(-1)^{n+1}t^{2n+2}}{1-(-t^2)}
$$</p>
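<p>The identity can be verified exactly with rational arithmetic (an illustrative Python sketch):</p>

```python
from fractions import Fraction

# check the finite geometric sum identity for several exact rational t and small n
for n in range(7):
    for t in [Fraction(1, 2), Fraction(3, 4), Fraction(2), Fraction(-5, 3)]:
        lhs = sum((-t**2)**k for k in range(n + 1))
        rhs = (1 - (-1)**(n + 1) * t**(2 * n + 2)) / (1 - (-t**2))
        assert lhs == rhs
```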
|
3,055,272 | <p><strong>Background</strong></p>
<p>A connected graph has an <em>Eulerian circuit</em> if and only if every vertex has even degree. </p>
<p>I am thinking about a certain classification of connected graphs where, for a connected graph <span class="math-container">$G$</span>, every <a href="https://en.wikipedia.org/wiki/Cut_(graph_theory)" rel="nofollow noreferrer">cut</a> splits (intersects) an <em>even</em> number of edges.</p>
<p>For example, while the following graph does not have an Eulerian circuit, the displayed cut 'splits an <em>even</em> number of edges':</p>
<p><a href="https://i.stack.imgur.com/eS6vOs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eS6vOs.png" alt="Graph cut"></a></p>
<blockquote>
<p>Claim: A connected graph <span class="math-container">$G$</span> has an Eulerian circuit iff every cut of <span class="math-container">$G$</span> splits an even number of edges.</p>
</blockquote>
<p>Proof attempt:</p>
<p>(<span class="math-container">$\rightarrow$</span>) If <span class="math-container">$G$</span> has an Eulerian circuit then ... ?</p>
<p>(<span class="math-container">$\leftarrow$</span>) Suppose every cut splits an even number of edges. Then we can make a cut around each individual vertex that will split an even number of edges. Hence the degree of each vertex is even. Therefore <span class="math-container">$G$</span> has an Eulerian circuit. </p>
<p>I'm lost in the forwards direction of the proof. Any hint would be appreciated. </p>
| Misha Lavrov | 383,078 | <p>Given any cut and any Eulerian circuit, the circuit has to cross from one side of the cut to another an even number of times, since it starts and ends on the same side of the cut.</p>
<p>Since the Eulerian circuit takes each edge once, the number of edges split by the cut is even.</p>
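<p>This can be checked exhaustively on small examples (a Python sketch; the even-degree example is two triangles sharing a vertex):</p>

```python
def cut_size(edges, S):
    # number of edges with exactly one endpoint in S
    return sum((u in S) != (v in S) for u, v in edges)

# every vertex has even degree, so this graph has an Eulerian circuit
eulerian = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)]
for mask in range(1, 2**5 - 1):            # all nontrivial cuts on 5 vertices
    S = {v for v in range(5) if mask >> v & 1}
    assert cut_size(eulerian, S) % 2 == 0

# a path has odd-degree endpoints, and indeed some cut splits an odd number of edges
path = [(0, 1), (1, 2)]
assert cut_size(path, {0}) % 2 == 1
```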
|
1,150,805 | <p>An unfair 3-sided die is rolled twice. The probability of rolling a 3 is $0.5$, the probability of rolling a 1 is $0.25$, and the probability of rolling a 2 is $0.25$. Let $X$ be the outcome of the first roll and $Y$ the outcome of the second.</p>
<ul>
<li><p>Find the Joint Distribution of $X$ and $Y$ in a Table.</p>
<p>The outcome of $X = \{1,2,3\}$.</p>
<p>The outcome of $Y = \{1,2,3\}$.</p>
<p>Would I just make a table of all the roll possibilities?</p></li>
<li><p>Find the Probability $\mathrm{P}(X+Y \geq 5)$.</p>
<p>The only roll that will make this is a 3 or a 2.
Should I just take the same of every possible roll to find this probability?</p></li>
</ul>
| Milo Brandt | 174,927 | <p>One can show that:
$$P(A\cup B)=\{a\cup b : (a,b)\in P(A)\times P(B)\}$$
which is a recursive definition (very clearly if we take $B$ to be a singleton disjoint from $A$). It states that to get the power set of a union $A\cup B$, we choose any pair of a subset $a\subseteq A$ and $b\subseteq B$ and take their union. In particular, for disjoint $A$ and $B$, we can show, using the above definition, that:
$$|P(A\cup B)| = |P(A)|\cdot |P(B)|$$
implying that the size of a power set is exponential in the size of the original set. Computing the power set of a singleton set $\{s\}$ tells you the base of the exponent.</p>
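<p>The recursive identity translates directly into code; a rough Python sketch (the function name is arbitrary) that builds <span class="math-container">$P(A)$</span> by repeatedly splitting off a singleton:</p>

```python
def power_set(s):
    """Build P(A ∪ {x}) as {a ∪ b : a ∈ P(A), b ∈ P({x})}, recursively."""
    s = list(s)
    if not s:
        return [frozenset()]                          # P(∅) = {∅}
    rest = power_set(s[1:])                           # P(A)
    singleton = [frozenset(), frozenset({s[0]})]      # P({x})
    return [a | b for b in singleton for a in rest]

subsets = power_set({1, 2, 3})
```

Each recursion step doubles the count, matching <span class="math-container">$|P(A\cup B)|=|P(A)|\cdot|P(B)|$</span> for disjoint sets.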
|
127,493 | <p>How many numbers less than $k$ contain the digit $3$?
For instance:</p>
<p>How many numbers contain the digit $3$ in the following list?</p>
<pre><code>Table[n, {n, 33}]
</code></pre>
<p>$\lbrace 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, \
20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33\rbrace$</p>
<p>I tried: </p>
<pre><code>numbers[k_] := Count[Table[n, {n, k}], 3]
</code></pre>
<p>but it doesn't work.</p>
<p>Then I want to find the limit</p>
<pre><code>Limit[numbers[k]/k, k -> Infinity]
</code></pre>
<p>( <a href="https://www.youtube.com/watch?v=UfEiJJGv4CE" rel="nofollow">See Numberphile video here.</a>)</p>
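<p>The count the question is after can be sketched in Python (a rough sketch; the function name is arbitrary). The hits up to 33 are 3, 13, 23, 30, 31, 32, 33, and below <span class="math-container">$10^d$</span> exactly <span class="math-container">$10^d-9^d$</span> positive integers contain a 3 (the other <span class="math-container">$9^d-1$</span> avoid the digit entirely), so the fraction tends to <span class="math-container">$1-(9/10)^d \to 1$</span>:</p>

```python
def count_with_digit_3(k):
    """How many of 1..k contain the digit 3 in base 10."""
    return sum('3' in str(n) for n in range(1, k + 1))

hits_to_33 = count_with_digit_3(33)   # 3, 13, 23, 30, 31, 32, 33
```
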
| Michael E2 | 4,999 | <p>Well, it can be done:</p>
<pre><code>int = (x^2 + 2 x + 1 + (3 x + 1) Sqrt[x + Log[x]]) /
(x Sqrt[x + Log[x]] (x + Sqrt[x + Log[x]]));
dy = (1 + x)/(x Sqrt[x + Log[x]]);
Integrate[int - dy, x] + Integrate[dy, x]
(* 2 Sqrt[x + Log[x]] + 2 Log[x + Sqrt[x + Log[x]]] *)
</code></pre>
|
1,685,523 | <p>What is the domain of $f(x)=x^x$ ? </p>
<p>I used Wolfram alpha where it is said that the domain is all positive real numbers. Isn't $(-1)^{(-1)} = -1$ ? Why does the domain not include negative real numbers as well?</p>
<p>I also checked the graph and it's visible only for $x>0$. Can someone help me clarify this?</p>
| Emilio Novati | 187,568 | <p>Write:</p>
<p>$$y=x^x=e^{x\log x}$$ </p>
<p>If we want $y \in \mathbb{R}$ we must have $\log x \in \mathbb{R}$ and this is done only if $x> 0$</p>
<p>This is the usual definition for the function $y=f(x)=x^x$ for $x \in \mathbb{R}$, that gives $(0,+\infty)$ as the domain.</p>
<hr>
<p>If we want $x\in \mathbb{Q}$ then we can define the function as:
$$
y=f(x)=x^x=\left( \frac{m}{n}\right)^{\frac{m}{n}} \iff y=\sqrt[n]{x^m} \iff y^n=\left(\frac{m}{n}\right)^m
$$</p>
<p>If we define $0^0=1$, this is a real number if $n=2k+1$ for some $k\in \mathbb{Z}$, so the domain of the function can be:
$$
\{q\in \mathbb{Q}|q=\frac{m}{2k+1}\quad , \quad m,k \in \mathbb{Z} \}
$$</p>
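<p>A quick numerical illustration of the same point, sketched in Python (the behaviour shown for <code>**</code> and <code>math.pow</code> is the standard library's):</p>

```python
import math

assert (-1) ** (-1) == -1            # isolated points like x = -1 do give reals
assert math.isclose(2.0 ** 2.0, 4.0)

# but for a generic negative x there is no real value of x**x:
try:
    math.pow(-0.5, -0.5)             # math.pow works over the reals only
    real_result = True
except ValueError:
    real_result = False

z = (-0.5) ** (-0.5)                 # Python's ** falls back to a complex number
```
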
|
3,822,042 | <p>For any function <span class="math-container">$f : X \rightarrow Y$</span> and any subset A of Y, define
<span class="math-container">$$f^{-1}(A) = \{x \in X: f(x) \in A\}$$</span> Let <span class="math-container">$A^c$</span> denote the complement of A in Y. For subsets <span class="math-container">$A_1,A_2$</span> of Y, consider the following statements:</p>
<p>(i) <span class="math-container">$ f^{-1} (A^c_1 \bigcap A^c_2) = (f^{-1}(A_1))^c \bigcup (f^{-1}(A_2))^c $</span></p>
<p>(ii) If <span class="math-container">$ f^{-1} (A_1) = f^{-1} (A_2)$</span> then <span class="math-container">$A_1 = A_2 $</span></p>
<p>Then which of the above statements are always true?</p>
<p>My effort: The first statement can not be true unless <span class="math-container">$A_1 = A_2$</span>. So that's not always true. For the 2nd statement, let x = <span class="math-container">$ f^{-1}(A_1) = f^{-1}(A_2)$</span>, then <span class="math-container">$f(x) = A_1 = A_2$</span>. Since f is a function, f can not have duplicate values of f(x) for the same value of x. That's what I read in books. E.g. the function <span class="math-container">$y^2 = x$</span>, is actually two functions, <span class="math-container">$y=+\sqrt x$</span> and <span class="math-container">$y=-\sqrt x$</span>, since, for same x, there are two values of y. So, my answer is, (ii) is always true, (i) is not always true.</p>
<p>But the answer given is, neither (i) nor (ii) is always true. Any pointers on where my understanding is incorrect, is highly appreciated.</p>
| user2661923 | 464,411 | <p>There is an (arguably) better way, but it is somewhat convoluted.</p>
<p>There are 7 choices for the blue card and 5 choices for the red card, for a total of 35 possible blue x red combinations.</p>
<p>For each of the 35 combinations, either there is a unique satisfying green card, or there isn't. So all you have to do is identify which of the 35 combinations permit no satisfying green card.</p>
<p>A blue x red combination will permit a satisfying green combination if and only if blue + red is in the set <span class="math-container">$\{9, 10, 11, 12, 13\}.$</span></p>
<p>So all you have to do is create a chart of the various possible blue cards. With each such blue card, how many red cards will not permit a satisfying green card.</p>
<p>The chart should look like this: <br>
Blue # : # Red cards that force dis-satisfaction <br>
3 : 2 <br>
4 : 1 <br>
5 : 0 <br>
6 : 1 <br>
7 : 2 <br>
8 : 3 <br>
9 : 4 <br></p>
<p>Adding up the # of dis-satisfying possibilities (i.e. 2nd column) from the above chart gives 13. <br>
35 - 13 = 22.</p>
<p>Note, that although it appears that the chart was manually drawn, all you really have to do is notice that:</p>
<p>(a)<br>
When blue = 5, every red allows satisfaction.</p>
<p>(b) <br>
As blue increases or decreases from 5, the # of reds that force dis-satisfaction increase by 1.</p>
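<p>The chart can be cross-checked by brute force. The card ranges are not restated in the answer, so the Python sketch below assumes blue runs over 3-9 (7 choices) and red over 4-8 (5 choices), which is what the chart implies:</p>

```python
blue_cards = range(3, 10)   # assumed: the 7 blue values 3..9
red_cards = range(4, 9)     # assumed: the 5 red values 4..8

satisfying = sum(1 for b in blue_cards for r in red_cards
                 if 9 <= b + r <= 13)          # a green card exists
dissatisfying = len(blue_cards) * len(red_cards) - satisfying
```
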
|
1,627,713 | <p>This is maybe math $101$ question:</p>
<p>Let $z_1=1+i$.</p>
<p>I know that $r=\sqrt 2$ and $\theta=\arctan(1/1)=\pi/4$ so $$z_1=\color{blue}{\sqrt 2e^{i\pi/4}} .$$</p>
<p>But now if I take a look at</p>
<p>$z_2=-1-i$,</p>
<p>I know that $r=\sqrt 2$ and $\theta=\arctan(-1/-1)=\pi/4$ so $$z_2=\color{blue}{\sqrt 2e^{i\pi/4}}.$$</p>
<p>But $z_2$ should be equal to $$\color{red}{\sqrt 2 e^{5i\pi/4}} .$$</p>
<p>Why in $z_2$ should I add $\pi$ in the power of the exponent?</p>
| Michael Albanese | 39,599 | <p><strong>(Abstract) Hint:</strong> Given two points on this grid, how would you construct a path along the grid between them?</p>
<p><a href="https://i.stack.imgur.com/Oaj56.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Oaj56.gif" alt="enter image description here"></a></p>
|
827,154 | <p>I need help with the definition of "within 1":</p>
<ul>
<li><p>If $x = 8$ and $y = 7$, then $x$ is "within 1" of $y$. </p></li>
<li><p>If $x = 8$ and $y = 9$, then $x$ is "within 1" of $y$.</p></li>
<li><p>If $x = 8$ and $y = 8$, is $x$ still "within 1" of $y$?</p></li>
</ul>
<p>It's my understanding that this would still be true, but I'm being asked for something to back up my assumption, so I guess I'm looking for a second opinion.</p>
| please delete me | 153,520 | <p>In this case, it probably means that the absolute value of the difference between the two numbers does not exceed $1$. Hence $8$ is within $1$ of $8$. Note that this expression does not occur frequently in mathematical literature.</p>
|
827,154 | <p>I need help with the definition of "within 1":</p>
<ul>
<li><p>If $x = 8$ and $y = 7$, then $x$ is "within 1" of $y$. </p></li>
<li><p>If $x = 8$ and $y = 9$, then $x$ is "within 1" of $y$.</p></li>
<li><p>If $x = 8$ and $y = 8$, is $x$ still "within 1" of $y$?</p></li>
</ul>
<p>It's my understanding that this would still be true, but I'm being asked for something to back up my assumption, so I guess I'm looking for a second opinion.</p>
| Community | -1 | <p>"Within $x$" refers to $\pm x$. Hence, given a number $y$, the numbers within $x$ of $y$ are elements of the set
$$Z=\{z\mid y-x\leq z\leq y+ x\}$$
Quite obviously, since $y-x\leq y\leq y+x$, $y\in Z$.</p>
|
118,298 | <p>I'm trying to work through "Elements of Functional Languages" by Martin Henson. On p. 17 he says:</p>
<blockquote>
<p>$v$ occurs free in $v$, $(\lambda v.v)v$, $vw$ and $(\lambda w.v)$ but not in $\lambda v.v$ or in $\lambda v.w$. And $v$ occurs bound in $\lambda v.v$ and in $(\lambda v.v)v$ but not in $v$, $vw$ or $(\lambda w.v)$. Note that free and bound variables are not opposites. A variable may occur free and bound in the same expression.</p>
</blockquote>
<p>Can someone explain what this means?</p>
| Thomas Andrews | 7,933 | <p>In the more normal mathematical world, we might say something like:</p>
<blockquote>
<p>Let $f(x)=x^n$.</p>
</blockquote>
<p>In that case, $x$ is a bound variable. If we later wrote:</p>
<blockquote>
<p>Let $g(y)=y^n$</p>
</blockquote>
<p>Then $g$ would be the same as $f$.</p>
<p>On the other hand, if we said:</p>
<blockquote>
<p>Let $h(x)=x^m$</p>
</blockquote>
<p>We cannot assert that $h$ is the same as $f$, because $m$ and $n$ are "free" variables.</p>
<p>In $\lambda$-calculus, you have to be careful about which variables are free in an expression. After a while it becomes natural.</p>
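<p>The bookkeeping becomes mechanical once written down; here is a rough Python sketch (the tuple encoding of terms is an assumption of the sketch, not standard notation) that computes the free variables of a term:</p>

```python
def free_vars(term):
    """Free variables of a lambda term.

    Encoding: a variable is a string, an application is ('app', t1, t2),
    and an abstraction λv.t is ('lam', 'v', t).
    """
    if isinstance(term, str):
        return {term}
    if term[0] == 'app':
        return free_vars(term[1]) | free_vars(term[2])
    if term[0] == 'lam':
        return free_vars(term[2]) - {term[1]}   # binder removes v
    raise ValueError(f'bad term: {term!r}')

identity = ('lam', 'v', 'v')            # λv.v : v occurs only bound
redex = ('app', identity, 'v')          # (λv.v)v : v occurs free AND bound
```
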
|
3,903,016 | <p>Why is
<span class="math-container">$$
\sum_{n=1}^{\infty} \left( \frac{1}{(10n-9)^2} + \frac{1}{(10n-1)^2} \right) = \frac{\pi^2}{50} \frac{1}{1- \cos \frac{\pi}{5}}
$$</span>
?</p>
<p>Each form of
<span class="math-container">$$
\sum_{n=1}^{\infty} \frac{1}{(10n-9)^2}
$$</span>
and
<span class="math-container">$$
\sum_{n=1}^{\infty} \frac{1}{(10n-1)^2}
$$</span></p>
<p>couldn't be calculated but the sum of them has a closed form</p>
| Jack D'Aurizio | 44,121 | <p>The <span class="math-container">$\Gamma$</span> function fulfills the symmetry relation
<span class="math-container">$$ \Gamma(z)\Gamma(1-z) = \frac{\pi}{\sin(\pi z)} $$</span>
hence by applying <span class="math-container">$\frac{d}{dz}\left(\log(\cdot)\right)$</span> to both sides
<span class="math-container">$$\psi(z)-\psi(1-z) =-\pi\cot(\pi z) $$</span>
and by differentiating once again
<span class="math-container">$$ \psi'(z)+\psi'(1-z) = \frac{\pi^2}{\sin^2(\pi z)}. \tag{1}$$</span>
Since
<span class="math-container">$$ \psi'(z) = \sum_{n\geq 0}\frac{1}{(n+z)^2}\tag{2} $$</span>
equations <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span> with <span class="math-container">$z=\frac{1}{10}$</span> prove the claim.</p>
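<p>The closed form can be cross-checked against partial sums, whose tail is <span class="math-container">$O(1/N)$</span>; a plain Python sketch (the truncation point is arbitrary):</p>

```python
import math

N = 200_000
partial = sum(1 / (10*n - 9)**2 + 1 / (10*n - 1)**2 for n in range(1, N + 1))
closed = math.pi**2 / 50 / (1 - math.cos(math.pi / 5))
```
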
|
3,631,587 | <p>Assume that our samples are high dimensional points (i.e., d is large) and we use
PCA to reduce it to k = 10 dimensions. After this step, we found that all the 10 new dimensions have
continuous values (e.g., in other words, each feature in the transformed dimension is not from discrete
domain, but rather, continuous domain). Describe in detail, how we can now use parametric method to
train our model to do classification. In particular, discuss how we can compute the correlation matrix
estimation, and when a new point arrive, what procedure we need to do so to make a classification
prediction (assume in general, we have K > 2 classes).</p>
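<p>One standard parametric route is Gaussian discriminant analysis: estimate a per-class mean and covariance from the training scores, then assign a new point to the class maximizing the log-likelihood plus log-prior. A minimal Python sketch, simplified to diagonal covariances (the full covariance/correlation-matrix estimate the question mentions would replace the per-feature variances); the toy data and names are assumptions of the sketch:</p>

```python
import math
from collections import defaultdict

def fit(points, labels):
    """Per-class mean/variance/prior estimates (diagonal-covariance Gaussian)."""
    by_class = defaultdict(list)
    for p, c in zip(points, labels):
        by_class[c].append(p)
    model = {}
    for c, pts in by_class.items():
        d = len(pts[0])
        mean = [sum(p[i] for p in pts) / len(pts) for i in range(d)]
        var = [sum((p[i] - mean[i]) ** 2 for p in pts) / len(pts) + 1e-9
               for i in range(d)]                    # small floor avoids zero variance
        model[c] = (mean, var, len(pts) / len(points))
    return model

def predict(model, x):
    def loglik(mean, var, prior):
        return math.log(prior) + sum(
            -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
            for xi, m, v in zip(x, mean, var))
    return max(model, key=lambda c: loglik(*model[c]))

# toy 2-D data standing in for the k = 10 PCA scores
train = [(0.0, 0.1), (0.2, -0.1), (5.0, 5.2), (4.8, 5.1)]
labels = ['a', 'a', 'b', 'b']
model = fit(train, labels)
```
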
| Community | -1 | <p><span class="math-container">$$a=\pi r^2,c=2\pi r\to c=\sqrt{4\pi^2r^2}=\sqrt{4\pi a}$$</span></p>
<p>and</p>
<p><span class="math-container">$$\sqrt{4\pi\cdot9\pi}=6\pi.$$</span></p>
<hr>
<p>Also</p>
<p><span class="math-container">$$a=\frac{c^2}{4\pi}$$</span> and
<span class="math-container">$$r=\sqrt{\frac a\pi}=\frac c{2\pi}.$$</span></p>
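<p>A one-line numeric check of <span class="math-container">$c=\sqrt{4\pi a}$</span>, sketched in Python:</p>

```python
import math

def circumference_from_area(a):
    # from a = pi*r**2 and c = 2*pi*r: c = sqrt(4*pi*a)
    return math.sqrt(4 * math.pi * a)

c = circumference_from_area(9 * math.pi)   # area 9*pi should give c = 6*pi
```
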
|
134,574 | <p>$a^{p-1} \equiv 1 \pmod p$</p>
<p>Why do Carmichael numbers prevent Fermat's Little Theorem from being a guaranteed test of primality? Fermat's Little Theorem works for any $a$ such that $1\le a\lt p$, where $p$ is a prime number. Carmichael numbers only work for $a$'s coprime to $N$ (where $N$ is the modulus). Doesn't this mean that for some non-coprime $a$ the Carmichael number will fail the test? Therefore if every $a$ is tested, a Carmichael number wouldn't pass.</p>
| davidlowryduda | 9,754 | <p>If the goal of the Fermat Primality test is to guarantee that a number is prime, then testing against all possible $a$ is no better than simply trying to divide our number by all primes.</p>
<p>In particular, if it were easy to find a number that is not coprime to $n$, then it's easy to factor $n$ and so we wouldn't need to use any sort of Fermat primality.</p>
<p>But you are correct. Notably, if $p \mid n$, then doing the test $\bmod p$ would yield $0$.</p>
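<p>A quick Python illustration of the point with the smallest Carmichael number (a sketch): <span class="math-container">$561 = 3\cdot 11\cdot 17$</span> passes the Fermat test for every base coprime to it, yet a base sharing a factor, such as <span class="math-container">$a=3$</span>, exposes it.</p>

```python
from math import gcd

n = 561  # = 3 * 11 * 17, the smallest Carmichael number

# passes a^(n-1) ≡ 1 (mod n) for every base coprime to n ...
coprime_pass = all(pow(a, n - 1, n) == 1 for a in range(2, n) if gcd(a, n) == 1)

# ... but a base sharing a factor with n gives the game away
witness = pow(3, n - 1, n)
```
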
|
2,793,983 | <p>For example I find myself wanting to write $x$ is an element of the integers from $1$ to $50$,</p>
<p>Is this the quickest way? </p>
<p>$x\in \left[ 1,50\right] \cap \mathbb{N} $</p>
<p>Also is this standard on here? $\mathbb{N} = \{0, 1, 2,\dotsc \}$,
$\mathbb{ℤ}_+ = \{1, 2, \dotsc \}$.</p>
| Especially Lime | 341,019 | <p>For the specific case that you start at $1$, it is fairly standard in combinatorics to write $[n]$ for $\{1,\ldots,n\}$, so $x\in[50]$ would work. This doesn't really help for other ranges, though - you could write $x\in[50]\setminus[10]$, but you probably shouldn't :)</p>
<p>To answer your other question, I prefer $\mathbb N$ to be $\{0,1,\ldots\}$ and $\mathbb Z_+$ to be $\{1,2,\ldots\}$, but there is no consensus on the first, and it's probably safer to write $\mathbb N_0$, which is unambiguous. Even $\mathbb Z_+$ could be misinterpreted, but I think when writing in English it's standard that this does not include $0$ (when writing in French, I'd expect the standard to be different, but I have no first-hand knowledge of this).</p>
|
2,793,983 | <p>For example I find myself wanting to write $x$ is an element of the integers from $1$ to $50$,</p>
<p>Is this the quickest way? </p>
<p>$x\in \left[ 1,50\right] \cap \mathbb{N} $</p>
<p>Also is this standard on here? $\mathbb{N} = \{0, 1, 2,\dotsc \}$,
$\mathbb{ℤ}_+ = \{1, 2, \dotsc \}$.</p>
| Eric Towers | 123,905 | <p>I do wonder why so many people believe convoluted notation is better than plainly writing what you mean.</p>
<p>"Let $x \in \mathbb{N}$ with $1 \leq x \leq 50$."</p>
<p>The twin purposes of notation are clarity and precision. Use of new or rare notation subverts both. Excessive density subverts clarity. Use of a single natural language word for exactly its meaning is both clear and precise.</p>
|
4,483,507 | <p>How can I change <span class="math-container">$\dfrac{-(3-\sqrt{3})}{(3+\sqrt{3})}$</span> to <span class="math-container">$\dfrac{1-\sqrt{3}}{1+\sqrt{3}}$</span>?</p>
<p>Background:</p>
<p>I tried solving <span class="math-container">$\tan(345°)$</span> with the trigonometric angle <em><strong>sum/difference</strong></em> identity.</p>
<p>I used <span class="math-container">$\tan(45°-30°)$</span> to find <span class="math-container">$\tan(15°)$</span>.</p>
<p>I then used <span class="math-container">$\tan(360°-15°)$</span> to get <span class="math-container">$\tan(345°)$</span>.</p>
<p>I used the identity:
<span class="math-container">$$\tan(\theta \pm \phi)=
\frac{\tan\theta \pm \tan\phi}{1 \mp \tan\theta\,\tan\phi}$$</span></p>
<p>My answer was <span class="math-container">$\frac{-(3-√3)}{(3+√3)}$</span> but the answer given was <span class="math-container">$\frac{(1-√3)}{(1+√3)}$</span>.</p>
<p>I used a calculator to find out these are one and the same.</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/mfP7E.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mfP7E.jpg" alt="enter image description here" /></a></p>
</blockquote>
<p>However, without using a calculator, how would I be able to transform my answer into the given answer?</p>
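<p>One route is to factor <span class="math-container">$\sqrt 3$</span> out of numerator and denominator: <span class="math-container">$-\frac{3-\sqrt 3}{3+\sqrt 3}=-\frac{\sqrt 3(\sqrt 3-1)}{\sqrt 3(\sqrt 3+1)}=\frac{1-\sqrt 3}{1+\sqrt 3}$</span>. A quick numeric cross-check in Python (a sketch):</p>

```python
from math import tan, radians, sqrt, isclose

a = -(3 - sqrt(3)) / (3 + sqrt(3))        # my answer
b = (1 - sqrt(3)) / (1 + sqrt(3))         # the book's answer
```
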
| Guillermo García Sáez | 696,501 | <p>Yes, that's perfect. Another approach would be to use that the preimage of a closed set under a continuous function is closed. Just take <span class="math-container">$f(x,y)=x^2+y^2$</span>: since <span class="math-container">$\mathbb{S}^1=f^{-1}(\{1\})$</span> is closed, its complement (your set <span class="math-container">$S$</span>) is open.</p>
|
1,627,619 | <p>Could anyone please check my solution to the following problem?</p>
<blockquote>
<p><strong>Problem:</strong> Let $f(x, y) = (x^2 + y^2)e^{-(x^2 + y^2)}$. Find global extrema of $f$ on $M = {\mathbf R}^2$.</p>
</blockquote>
<p><strong>Proposed solution:</strong> Taking partial derivatives of $f$, we conclude that critical points are $[0,0]$ and points of the unit circle $C = \{[x,y] \in {\mathbf R}^2:\ x^2 + y^2 = 1\big\}$.</p>
<p>We can reason immediately that the global minimum is attained at $[0,0]$ as the function is nonnegative. We observe that the value of $f$ on $C$ is $e^{-1}$.</p>
<p>To prove that $f$ attains global maximum on $C$, we let $r := x^2 + y^2$. We observe that for any two points $[x_1,y_1]$ and $[x_2,y_2]$, $r_1 = r_2$ (ie. the function is constant on circles). Now let $r \to \infty$. Then
$re^{-r} \to 0$. From the definition of limit, it follows that for any $\varepsilon > 0$, we find $\delta > 0$ such that</p>
<p>$$\forall r \in P(\infty, \delta) = ({1 \over \delta}, \infty): re^{-r} < \varepsilon.$$</p>
<p>Let $\varepsilon = (2e)^{-1}$. Then there's $\delta$ from the definition above and we know that for $r \in ({1 \over \delta}, \infty)$, the value of f is less than the value of f on $C$. Restricting ourselves to the compact set</p>
<p>$$C' = \big\{[x, y] \in {\mathbf R}^2:\ x^2 + y^2 \le {1 \over \delta}\big\},$$</p>
<p>we can now argue that $f$ on $C'$ is indeed maximized on $C$, as it is a continuous function on a compact set, it's value around the boundary is at most $(2e)^{-1}$ and all critical points have been considered.</p>
<p>Therefore, the maximum value of $f$ is attained on $C$ with respect to $M$ as well.</p>
| πr8 | 302,863 | <p>$$e^x\ge1+x \quad \forall x\in\mathbb{R}$$</p>
<p>(this is well-known / can be established all sorts of ways. Note that equality holds $\iff x=0$) </p>
<p>Take $x=d-1$ to see </p>
<p>$$e^{d-1}\ge d\implies de^{-d}\le e^{-1}\implies(x^2+y^2)e^{-(x^2+y^2)}\le e^{-1}$$</p>
<p>Chase back the fact that the equality holds when $x=0\iff d=1 \iff x^2+y^2=1$ to see that this maximum is attained, and that it is attained only on the unit circle.</p>
<p>Minimum is trivial for the reasons you detail - the function is non-negative, and $0$ only at the origin, so the global minimum is $0$ and achieved at $(0,0)$.</p>
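<p>A crude grid search agrees with this bound; a Python sketch (the grid and its resolution are arbitrary):</p>

```python
import math

def f(x, y):
    d = x*x + y*y
    return d * math.exp(-d)

grid = [i / 50 for i in range(-150, 151)]       # [-3, 3] in steps of 0.02
max_seen = max(f(x, y) for x in grid for y in grid)
# the sampled maximum is 1/e, hit at grid points on the unit circle like (1, 0)
```
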
|
915,631 | <blockquote>
<p>Let <span class="math-container">$V$</span> be the vector space of <span class="math-container">$n\times n$</span> matrices over <span class="math-container">$K$</span> under addition and let the linear operator <span class="math-container">$f$</span> be given by <span class="math-container">$f(A)=A^{T}$</span>, where <span class="math-container">$A^T$</span> denotes the transpose of matrix <span class="math-container">$A$</span>.</p>
<ol>
<li><p>Show that <span class="math-container">$\pm 1$</span> are the only eigenvalues of <span class="math-container">$f$</span>.</p>
</li>
<li><p>Suppose the characteristic of <span class="math-container">$K$</span> is not <span class="math-container">$2$</span>. What are the dimensions of the eigenspaces <span class="math-container">$E(1)$</span> and <span class="math-container">$E(-1)$</span>?</p>
</li>
</ol>
</blockquote>
<p>Eigenvalues are found from the equation <span class="math-container">$f(v)=\lambda v$</span>, which here gives <span class="math-container">$v^T=\lambda v$</span>. I don't understand how this shows <span class="math-container">$\lambda =\pm 1$</span>.</p>
<p>I know that <span class="math-container">$\dim E(1)+\dim E(-1)=n$</span>; would this suffice for the second part?</p>
| zcn | 115,654 | <p>If $\lambda$ is an eigenvalue of $f$, then $f(A) = \lambda A$ for some $A \ne 0$. Then $A = (A^T)^T = f(f(A)) = f(\lambda A) = \lambda^2 A$, so $\lambda^2 = 1$ (since $A \ne 0$), hence $\lambda = \pm 1$. </p>
<p>Eigenspaces corresponding to distinct eigenvalues always intersect trivially, so $E(1) \cap E(-1) = 0$. On the other hand, $E(1)$ consists of symmetric matrices ($A^T = A$), and $E(-1)$ consists of skew-symmetric matrices ($A^T = -A$), and in characteristic $\ne 2$, every square matrix $A$ is a sum of a symmetric and a skew-symmetric matrix - explicitly $A = \frac{1}{2}(A + A^T) + \frac{1}{2}(A - A^T)$. Thus $\text{Mat}_{n \times n} = E(1) \oplus E(-1)$. Now $E(1)$ has dimension $n+1 \choose 2$ (as entries in the upper triangular block can be chosen arbitrarily), so $E(-1)$ has dimension $n^2 - {n+1 \choose 2} = {n \choose 2}$.</p>
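<p>The decomposition is easy to check numerically; a small Python sketch over a <span class="math-container">$3\times 3$</span> integer example (names illustrative):</p>

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
At = transpose(A)
n = len(A)

# A = (A + A^T)/2 + (A - A^T)/2: symmetric part plus skew-symmetric part
sym  = [[(A[i][j] + At[i][j]) / 2 for j in range(n)] for i in range(n)]
skew = [[(A[i][j] - At[i][j]) / 2 for j in range(n)] for i in range(n)]
```
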
|
3,126,936 | <p>Numbers between <span class="math-container">$1 - 1000$</span> which leave no remainder when divided by <span class="math-container">$4$</span> and divided by <span class="math-container">$6$</span> but not by <span class="math-container">$21$</span>?</p>
<p>I tried <span class="math-container">$$\left\lfloor\frac{1000}{12}\right\rfloor = 83, \qquad 83 - \left\lfloor\frac{83}{21}\right\rfloor = 83-3 = 80$$</span></p>
<p>Am I correct? Can someone please explain to me how it works?</p>
| N. F. Taussig | 173,070 | <p>If a number is divisible by both <span class="math-container">$4$</span> and <span class="math-container">$6$</span>, then it is divisible by <span class="math-container">$\operatorname{lcm}(4, 6) = 12$</span>. The number of multiples of <span class="math-container">$12$</span> that are at most <span class="math-container">$1000$</span> is
<span class="math-container">$$\left\lfloor \frac{1000}{12} \right\rfloor = 83$$</span>
where <span class="math-container">$\lfloor x \rfloor$</span> is the greatest integer less than <span class="math-container">$x$</span>.</p>
<p>From these, we must subtract those numbers that are also divisible by <span class="math-container">$21$</span>. Those numbers are divisible by <span class="math-container">$\operatorname{lcm}(4, 6, 21) = \operatorname{lcm}(12, 21) = 84$</span>. The number of multiples of <span class="math-container">$84$</span> that are at most <span class="math-container">$1000$</span> is
<span class="math-container">$$\left\lfloor \frac{1000}{84} \right\rfloor = 11$$</span>
Hence, the number of positive integers less than or equal to <span class="math-container">$1000$</span> that are divisible by both <span class="math-container">$4$</span> and <span class="math-container">$6$</span> but not divisible by <span class="math-container">$21$</span> is
<span class="math-container">$$\left\lfloor \frac{1000}{12} \right\rfloor - \left\lfloor \frac{1000}{84} \right\rfloor = 83 - 11 = 72$$</span></p>
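<p>A brute-force cross-check of the count in Python (a sketch):</p>

```python
# count 1..1000 divisible by both 4 and 6 (i.e. by 12) but not by 21
brute = sum(1 for n in range(1, 1001) if n % 12 == 0 and n % 21 != 0)
formula = 1000 // 12 - 1000 // 84     # inclusion-exclusion via lcm(12, 21) = 84
```
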
|
3,929,089 | <p>In the construction of <span class="math-container">$\operatorname{Frac}(R)$</span>, where <span class="math-container">$R$</span> is a domain, we define a partition on <span class="math-container">$R \times R^\times$</span> where <span class="math-container">$R^\times:= R \setminus \{0\}$</span>. which in turn becomes a field containing <span class="math-container">$R$</span>.</p>
<p>Taking <span class="math-container">$R:=\mathbb{Q}[x]$</span>, can we tell that <span class="math-container">$\frac{x^2+1}{x-1} \in \mathbb{Q}(x)$</span>? As <span class="math-container">$x-1$</span> is not the zero-polynomial it is a member of <span class="math-container">$\mathbb{Q}[x]^\times$</span>, hence <span class="math-container">$\frac{x^2+1}{x-1}$</span> should be a member of <span class="math-container">$\mathbb{Q}(x)$</span>. But this is undefined when 1 is plugged in <span class="math-container">$x$</span>.</p>
<p>Where am I going wrong?</p>
| Moosh | 837,933 | <p>The simple answer is that elements of <span class="math-container">$\mathbb{Q}(x)$</span> are formal rational functions. Strictly speaking, the values they take on at certain points are irrelevant to them. Their evaluation maps need not be defined for all elements in the underlying field.
Another example: <span class="math-container">$\frac{1}{x}$</span>, which is the multiplicative inverse of <span class="math-container">$x$</span>, is not defined as a function at <span class="math-container">$x=0$</span>. That's fine because, the way we are looking at it, <span class="math-container">$\frac{1}{x}$</span> is not actually a function; it's really just the formal inverse of <span class="math-container">$x$</span>.</p>
<p>There is a similar intuition here to how <span class="math-container">$x^2+x+1\in\mathbb{F}_2[x]$</span> is not the same thing as <span class="math-container">$1\in\mathbb{F}_2[x]$</span> even though the evaluation maps are the same.</p>
<p><span class="math-container">$x$</span> is just something that is transcendental over <span class="math-container">$\mathbb{Q}$</span>. We think of the elements of <span class="math-container">$\mathbb{Q}(x)$</span> as being rational functions, but really we shouldn't be considering any of their properties as functions in the context of algebra. It's helpful to realise that <span class="math-container">$\mathbb{Q}(x)$</span> is isomorphic to <span class="math-container">$\mathbb{Q}(\pi)$</span>.</p>
|
2,650,182 | <ol>
<li>Between every two distinct real numbers, there is a rational number </li>
</ol>
<p>Answer: There is no rational numbers between two non-distinct real numbers.</p>
<ol start="2">
<li>For all natural numbers $n ∈ N, \sqrt n$ is either a natural number or an
irrational number</li>
</ol>
<p>Answer: For all natural numbers $n$, $\sqrt n$ is either not a natural number or not an irrational number.</p>
<ol start="3">
<li>Given any real number $x ∈ R$, there exists $n ∈ N$ satisfying $n>x$.</li>
</ol>
<p>Answer: ??</p>
<p>Can someone tell me what is the general way to look at these things. </p>
| Mauro ALLEGRANZA | 108,274 | <p>In order to negate:</p>
<blockquote>
<p>1) "Between every two distinct real numbers, there is a rational number",</p>
</blockquote>
<p>we have to assert that "There are two distinct real numbers such that there is <strong>no</strong> rational number between them".</p>
<p>It may help to formalize the statements with quantifiers:</p>
<blockquote>
<p>$\forall r_1, r_2 \in \mathbb R \ (r_1 < r_2 \to \exists q \in \mathbb Q \ (r_1 < q < r_2))$.</p>
</blockquote>
<p>The "procedure" to get the correct negation is simply to put the negation sign in front and then "move it inside" with the equivalences:</p>
<blockquote>
<p>$\lnot \forall$ is equiv to $\exists \lnot$ and $\lnot \exists$ is equiv to$\forall \lnot$.</p>
</blockquote>
<p>Thus, from:</p>
<blockquote>
<p>$\lnot \ [\forall r_1, r_2 \in \mathbb R \ (r_1 < r_2 \to \exists q \in \mathbb Q \ (r_1 < q < r_2))],$</p>
</blockquote>
<p>we get in the first step:</p>
<blockquote>
<p>$\exists r_1, r_2 \in \mathbb R \ \lnot (r_1 < r_2 \to \exists q \in \mathbb Q \ (r_1 < q < r_2)).$</p>
</blockquote>
<p>The next step is to use the propositional equivalence between $\lnot (p \to q)$ and $(p \land \lnot q)$, to get:</p>
<blockquote>
<p>$\exists r_1, r_2 \in \mathbb R \ (r_1 < r_2 \land \lnot \exists q \in \mathbb Q \ (r_1 < q < r_2)).$</p>
</blockquote>
<hr>
<p>For:</p>
<blockquote>
<p>3) "Given any $x \in \mathbb R$, there exists $n \in \mathbb N$ satisfying $(n > x)$",</p>
</blockquote>
<p>we have for the negated statement:</p>
<blockquote>
<p>$\lnot \forall x \in \mathbb R \ \exists n \in \mathbb N \ (n > x)$.</p>
</blockquote>
<p>Applying the above equivalences we get:</p>
<blockquote>
<p>$\exists x \in \mathbb R \ \forall n \in \mathbb N \ (n \le x)$.</p>
</blockquote>
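<p>On a finite domain these quantifier rules can even be machine-checked; a small Python sketch with <code>all</code> playing <span class="math-container">$\forall$</span> and <code>any</code> playing <span class="math-container">$\exists$</span> (the toy domains are arbitrary):</p>

```python
xs, ns = range(10), range(5)          # toy finite stand-ins for R and N
P = lambda x, n: n > x

# not(forall x. exists n. P)  is equivalent to  exists x. forall n. not P
lhs = not all(any(P(x, n) for n in ns) for x in xs)
rhs = any(all(not P(x, n) for n in ns) for x in xs)
```
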
|
19,325 | <p>I'm looking for a simple way to define mathematics to primary/elementary school teachers and explain some of the confusion children have.</p>
<p>I'm hoping some Algebraist could help me properly state the following:</p>
<blockquote>
<p>A number in and of itself has no true meaning. True in the sense that it relates to an existing object within our world. The question we need to ask is how do we teach children meaning if that meaning is not at first grounded within something concrete.</p>
</blockquote>
<blockquote>
<p>Numbers in and of themselves represent abstract notions and in the pure study of mathematics we study mostly patterns: the various patterns that emerge from these abstract notions and the various means through which some relation is developed or expressed between them. Meaning that between the value 1 and 2 for instance there is no relation except when explicitly defined for example as some additive operation, in general an addition of multiples of the unit element.</p>
</blockquote>
| Ashish Shukla | 12,610 | <p>I teach kids, small one 5-6 years to 11-12 years. So I have realised that a kid's life itself is not bound to anything concrete. They are best receptacles of Knowledge. But they are not good at reproducing it. See kids are capable of learning anything however abstract, but the trick is that teacher should be visualising it within. As long as it is "real" in the teacher's imagination, kids will learn it. If it is obscure, abstract, they will not be able to describe what they saw/learned but they learned. They do look into your soul. Personal experience.</p>
<p>As far as numbers is concerned I don't know whether this is the answer but this is what I did. I realised that there is a path to Knowledge and one can choose any, but then there is "THE KNOWLEDGE" ITSELF without ANYTHING.
So the usual path is from "things" to "Numbers". "What is this?", "An apple Teacher!", "No, it's ONE APPLE!". I thought I'll take kids from Numbers to things. So I had to Explain NUMBER ITSELF without any "thing".
I told them that a number is nothing but a series of empty places, each place has a name(One's, Two's) etc. and each place has a value(1, 10) etc. So what does each place contains? A NUMBER. << Right here I explain to them difference between number and numerals. So each place contains ONLY ONE numeral. So Numeral 2 at 10's place means that this Number contains 2 tens in it. When this series of numerals is attached to a "thing" or property of a thing, then it is known as a NUMBER.
So Numerals tell about a "number" and a "number" tells us about things.</p>
<p>And thankfully kids get it...</p>
|
1,915,782 | <p>I'm attempting to teach myself some vector calculus before starting university next month in hope of getting my head around some of the concepts as I can foresee this being a weak topic for me.</p>
<p>I have been 'learning' from some online lecture notes related to my course. The notes talk about line integrals but as far as I understand say little on how to evaluate them and only gives one quick example in the form below that I didn't find terribly useful. As a result I'm not entirely sure how to evaluate the line integral below and so I would ask that someone answer the below question, but if possible perhaps give more detail than would usually be necessary, talking through each step with a specific emphasis on the difference between evaluating (i) and (ii), thank you.</p>
<blockquote>
<p>Evaluate explicitly the line integral $\int(y$ $dx+x$ $dy+dz)$ along
(i) the straight path from the origin to $x=y=z=1$ and (ii) the
parabolic path given parametrically by $x = t,y = t,z = t^2$ from $t=0$ to
$t=1$.</p>
</blockquote>
<p>Any help is appreciated.</p>
<p>Thank you.</p>
| Bobbie D | 317,218 | <p>The path $C$ is parametrized as $\mathbf r(t) = t\mathbf x + t\mathbf y + t\mathbf z$ for $t\in[0,1]$. Then you use the line integral formula</p>
<p>$$\int_{C} \mathbf f\cdot d\mathbf r = \int_{t_0}^{t_1} \mathbf f(\mathbf r)\cdot \mathbf r'(t)\ dt$$</p>
<p>In this case $\mathbf f(\mathbf r)\cdot \mathbf r'(t) = (t\mathbf x + t\mathbf y + 1\mathbf z)\cdot(1\mathbf x + 1\mathbf y + 1\mathbf z) = 2t+1$.</p>
<p>Thus your line integral reduces to the Riemann integral</p>
<p>$$\int_0^1 [2t+1]\ dt$$</p>
<p>which is pretty easy to evaluate.</p>
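<p>Part (ii) works the same way: with <span class="math-container">$x=t$</span>, <span class="math-container">$y=t$</span>, <span class="math-container">$z=t^2$</span> the pullback is <span class="math-container">$y\,dx+x\,dy+dz=(t+t+2t)\,dt=4t\,dt$</span>, which also integrates to <span class="math-container">$2$</span>; this is no accident, since the field is <span class="math-container">$\nabla(xy+z)$</span>. A numeric Python sketch of both (midpoint rule; step count arbitrary):</p>

```python
def integrate(g, a, b, steps=10_000):
    h = (b - a) / steps
    return h * sum(g(a + (i + 0.5) * h) for i in range(steps))  # midpoint rule

straight = integrate(lambda t: 2*t + 1, 0.0, 1.0)   # path (t, t, t)
parabolic = integrate(lambda t: 4*t, 0.0, 1.0)      # path (t, t, t**2)
```
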
|
56,847 | <p>What are the angles formed at the center of a tetrahedron if you draw lines to the vertices?</p>
<p>I'm trying to make these:</p>
<p><img src="https://i.stack.imgur.com/FRUi8.jpg" alt="caltrop"> </p>
<p>I need to know what angles to bend the metal.</p>
| huzaifa abedeen | 845,875 | <p>Watch <a href="https://youtu.be/2UTr46btzaY" rel="nofollow noreferrer">this Khan Academy video</a> of the tetrahedral bond angle proof. The video gives a mathematical proof of the bond angles in methane (a tetrahedral molecule):</p>
<p><a href="https://i.stack.imgur.com/JSUio.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JSUio.jpg" alt="enter image description here" /></a></p>
<p>We have the tetrahedron structure, for example methane, which is a three-dimensional structure shown above. (It's very difficult to see tetrahedral geometry on a two-dimensional, But it's much easier to see it over here on the right with the three- dimensional representation of methane.)</p>
<p>The bond angle of sp3 hybridized carbon in methane is 109.5 degrees, so you could say that this angle is the same all the way around. orienting the tetrahedron in this way allows us to find the bond angle that we are going for.</p>
<p>And we don't know what bond angle yet, but we can figure out this angle right here. So I'm going to call this theta for this triangle that's formed. And I know that this distance down here is positive/negative square root of 2, and then we go up 1 on the y-axis and then 0 on the z-axis. And if I want to find my bond angle in here, those angles have to add up to equal 180 degrees.</p>
|