3,224,745
<p>Naive evaluation of <span class="math-container">$\sqrt{a + x} - \sqrt{a}$</span> when <span class="math-container">$|a| \gg |x|$</span> suffers from catastrophic cancellation and loss of significance.</p> <p>WolframAlpha gives the Taylor series for <span class="math-container">$\sqrt{a+x}-\sqrt{a}$</span> as: <span class="math-container">$$\frac{x}{2 \sqrt{a}} - \frac{x^2}{8 a^{3/2}} + \frac{x^3}{16 a^{5/2}} - \frac{5 x^4}{128 a^{7/2}} + \frac{7 x^5}{256 a^{9/2}} + O(x^6)$$</span> which (I think) equals: <span class="math-container">$$\sqrt{a} \left( \frac{1}{2} \left(\frac{x}{a}\right) - \frac{1}{8} \left(\frac{x}{a}\right)^2 + \frac{1}{16} \left(\frac{x}{a}\right)^3 - \frac{5}{128} \left(\frac{x}{a}\right)^4 + \frac{7}{256} \left(\frac{x}{a}\right)^5 + O\left(\left(\frac{x}{a}\right)^6\right) \right)$$</span></p> <p>How quickly do the coefficients decrease?</p> <p>How many terms are needed to reach <span class="math-container">$53$</span> bits of accuracy (IEEE <code>double</code> precision) in the result given that <span class="math-container">$10^{-300} &lt; \left|\frac{x}{a}\right| &lt; 1$</span> is known?</p> <p>Alternatively, what are the threshold values of <span class="math-container">$\left|\frac{x}{a}\right|$</span> where the number of terms changes?</p> <p>What about rounding errors, assuming each value is stored in <code>double</code> precision?</p>
Picaud Vincent
450,342
<p>To avoid cancellation error, the first thing to do is to write:</p> <p><span class="math-container">$$ \sqrt{a+x}-\sqrt{a}=\frac{x}{\sqrt{a+x}+\sqrt{a}}=\sqrt{a}\frac{x}{a} \frac{1}{1+\sqrt{1+\frac{x}{a}}} $$</span></p> <p>then, with <span class="math-container">$y=\frac{x}{a}$</span>, you must approximate the function <span class="math-container">$$ \sqrt{a}\frac{y}{1+\sqrt{1+y}} $$</span> for <span class="math-container">$y\in[10^{-300},1]$</span>. This function has nothing pathological and IMHO can be computed in a straightforward way.</p> <p>If you really want to use a Taylor series for <span class="math-container">$y\sim 0$</span> <span class="math-container">$$ \sqrt{a}\frac{y}{1+\sqrt{1+y}}=\sqrt{a}(\frac{y}{2}-\frac{y^2}{8}+\frac{y^3}{16}-\frac{5 y^4}{128}+\frac{7 y^5}{256}+O\left(y^6\right)) $$</span> I assume that the series is alternating, hence the error term <span class="math-container">$e$</span> is bounded by <span class="math-container">$|e|&lt;\sqrt{a}\frac{7y^5}{256}$</span>. For instance, if you want <span class="math-container">$|e|&lt;10^{-q}$</span> you can use the Taylor series for <span class="math-container">$0\le y \le y_*$</span> where <span class="math-container">$y_*$</span> is such that <span class="math-container">$$\sqrt{a}\frac{7y_*^5}{256}&lt;10^{-q}$$</span> which gives <span class="math-container">$$y_*&lt;10^{-q/5}(\frac{256}{7\sqrt{a}})^{1/5}$$</span></p> <p><strong>Example:</strong> with <span class="math-container">$q=5$</span>, <span class="math-container">$a=3$</span></p> <p>We get <span class="math-container">$y_*&lt;0.184042$</span>. </p> <p>That means that you can use the Taylor series for <span class="math-container">$ y_*=\frac{x_*}{a}&lt;0.184042$</span>, hence <span class="math-container">$x_*&lt;3\times 0.184042 \approx 0.552125$</span>.</p> <p>Let's try with <span class="math-container">$x=0.55$</span>. 
</p> <p>With the initial formula we find: <span class="math-container">$$ \sqrt{a+x}-\sqrt{a}\approx 0.152094 $$</span></p> <p>With the Taylor series, with <span class="math-container">$y=\frac{0.55}{3}$</span>, we get <span class="math-container">$$ \sqrt{a}(\frac{y}{2}-\frac{y^2}{8}+\frac{y^3}{16}-\frac{5 y^4}{128}+\frac{7 y^5}{256})\approx 0.152095 $$</span></p> <p>We see that the error <span class="math-container">$|e|=|0.152094-0.152095|\approx 1.17957\times 10^{-6}$</span> is less than <span class="math-container">$10^{-q}=10^{-5}$</span>, as expected.</p>
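As a numerical sanity check of the worked example above, here is a short Python sketch (the helper names `stable` and `taylor5` are ours, not from the answer) comparing the rewritten formula against the five-term Taylor series at $a=3$, $x=0.55$:

```python
import math

def stable(a, x):
    # Rewritten form x / (sqrt(a+x) + sqrt(a)): no subtractive cancellation.
    return x / (math.sqrt(a + x) + math.sqrt(a))

def taylor5(a, x):
    # Five-term Taylor series in y = x/a, as in the answer.
    y = x / a
    return math.sqrt(a) * (y/2 - y**2/8 + y**3/16 - 5*y**4/128 + 7*y**5/256)

a, x = 3.0, 0.55
err = abs(stable(a, x) - taylor5(a, x))
print(stable(a, x), taylor5(a, x), err)  # error stays below the 1e-5 target
```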
3,224,745
<p>Naive evaluation of <span class="math-container">$\sqrt{a + x} - \sqrt{a}$</span> when <span class="math-container">$|a| \gg |x|$</span> suffers from catastrophic cancellation and loss of significance.</p> <p>WolframAlpha gives the Taylor series for <span class="math-container">$\sqrt{a+x}-\sqrt{a}$</span> as: <span class="math-container">$$\frac{x}{2 \sqrt{a}} - \frac{x^2}{8 a^{3/2}} + \frac{x^3}{16 a^{5/2}} - \frac{5 x^4}{128 a^{7/2}} + \frac{7 x^5}{256 a^{9/2}} + O(x^6)$$</span> which (I think) equals: <span class="math-container">$$\sqrt{a} \left( \frac{1}{2} \left(\frac{x}{a}\right) - \frac{1}{8} \left(\frac{x}{a}\right)^2 + \frac{1}{16} \left(\frac{x}{a}\right)^3 - \frac{5}{128} \left(\frac{x}{a}\right)^4 + \frac{7}{256} \left(\frac{x}{a}\right)^5 + O\left(\left(\frac{x}{a}\right)^6\right) \right)$$</span></p> <p>How quickly do the coefficients decrease?</p> <p>How many terms are needed to reach <span class="math-container">$53$</span> bits of accuracy (IEEE <code>double</code> precision) in the result given that <span class="math-container">$10^{-300} &lt; \left|\frac{x}{a}\right| &lt; 1$</span> is known?</p> <p>Alternatively, what are the threshold values of <span class="math-container">$\left|\frac{x}{a}\right|$</span> where the number of terms changes?</p> <p>What about rounding errors, assuming each value is stored in <code>double</code> precision?</p>
Claude
209,286
<p>It was pointed out by Robert Israel that the series does badly when <span class="math-container">$|x| \approx |a|$</span>, but in that case the loss of significance of the naive evaluation is small.</p> <p>It was also suggested by Winther (and a since-deleted answer) to rewrite as <span class="math-container">$$\frac{x}{\sqrt{a+x}+\sqrt{a}}$$</span> The series for the denominator is similar to the series in the question, only with a leading constant term. This means that when <span class="math-container">$\left|\frac{x}{a}\right|$</span> is small enough, the addition of terms eventually becomes insignificant in <code>double</code> arithmetic.</p> <p>If <span class="math-container">$\left|\frac{x}{a}\right| &lt; 2^{-52}$</span>, <span class="math-container">$1$</span> term is sufficient. Otherwise</p> <p>If <span class="math-container">$\left|\frac{x}{a}\right| &lt; 2^{-25}$</span>, <span class="math-container">$2$</span> terms are sufficient. Otherwise</p> <p>If <span class="math-container">$\left|\frac{x}{a}\right| &lt; 2^{-16}$</span>, <span class="math-container">$3$</span> terms are sufficient. Otherwise</p> <p>If <span class="math-container">$\left|\frac{x}{a}\right| &lt; 2^{-11}$</span>, <span class="math-container">$4$</span> terms are sufficient. Otherwise</p> <p>If <span class="math-container">$\left|\frac{x}{a}\right| &lt; 2^{-9}$</span>, <span class="math-container">$5$</span> terms are sufficient. Otherwise</p> <p>If <span class="math-container">$\left|\frac{x}{a}\right| &gt; 2^{-9}$</span>, the loss of precision in the addition <span class="math-container">$a + x$</span> is relatively small.</p> <p>But in fact, perhaps <span class="math-container">$$\frac{x}{\sqrt{a + x} + \sqrt{a}}$$</span> evaluated in <code>double</code> precision is good enough for all <span class="math-container">$|x| &lt;&lt; |a|$</span> and series are unnecessary?</p>
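To illustrate the closing question, a small Python experiment (our own sketch, with a 50-digit `decimal` value as the reference) compares the naive difference with $x/(\sqrt{a+x}+\sqrt{a})$ when $|x| \ll |a|$:

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 50  # 50-digit reference arithmetic

def naive(a, x):
    return math.sqrt(a + x) - math.sqrt(a)   # cancels badly for |x| << |a|

def stable(a, x):
    return x / (math.sqrt(a + x) + math.sqrt(a))

def reference(a, x):
    # High-precision "ground truth" via Decimal square roots.
    da, dx = Decimal(a), Decimal(x)
    return float((da + dx).sqrt() - da.sqrt())

a, x = 1.0e10, 1.0e-3
ref = reference(a, x)
print(abs(naive(a, x) - ref) / ref)   # noticeable relative error
print(abs(stable(a, x) - ref) / ref)  # near machine epsilon
```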
316,699
<p>If $A,B,C$ are sets, then we all know that $A\setminus (B\cap C)= (A\setminus B)\cup (A\setminus C)$. So by induction $$A\setminus\bigcap_{i=1}^nB_i=\bigcup_{i=1}^n (A\setminus B_i)$$ for all $n\in\mathbb N$.</p> <p>Now if $I$ is an uncountable set and $\{B_i\}_{i\in I}$ is a family of sets, is it true that: $$A\setminus\bigcap_{i\in I}B_i=\bigcup_{i\in I} (A\setminus B_i)\,\,\,?$$</p> <p>If the answer to the above question will be "NO", what can we say if $I$ is countable?</p>
Stefan Hansen
25,632
<p>Note that $$ A\setminus \bigcap_{i\in I}B_i=A\cap \left(\bigcap_{i\in I}B_i\right)^c, $$ where $^c$ denotes the complement. Using <a href="http://en.wikipedia.org/wiki/De_Morgan%27s_laws" rel="nofollow">De Morgan's laws</a> (which hold for a general $I$) we get $$ A\cap \left(\bigcap_{i\in I}B_i\right)^c =A\cap \left(\bigcup_{i\in I}B_i^c\right)=\bigcup_{i \in I}\left(A\cap B_i^c\right). $$ Now note that $A\cap B_i^c =A\setminus B_i$ and we are done.</p>
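The identity is easy to sanity-check for a finite family with Python's built-in set operations (a finite illustration only; the point of the answer is that De Morgan's laws hold for an arbitrary index set $I$):

```python
A = set(range(20))
family = [set(range(i, 15)) for i in range(5)]  # B_0, ..., B_4

lhs = A - set.intersection(*family)        # A \ (intersection of the B_i)
rhs = set.union(*(A - B for B in family))  # union of the A \ B_i
print(lhs == rhs)  # True
```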
2,613,410
<blockquote> <p>What is the value of <span class="math-container">$2x+3y$</span> if</p> <p><span class="math-container">$x+y=6$</span> &amp; <span class="math-container">$x^2+3xy+2y=60$</span> ?</p> </blockquote> <p>My trial: from the given conditions, substitute <span class="math-container">$y=6-x$</span> in <span class="math-container">$x^2+3xy+2y=60$</span>: <span class="math-container">$$x^2+3x(6-x)+2(6-x)=60$$</span> <span class="math-container">$$x^2-8x+24=0$$</span> <span class="math-container">$$x=\frac{8\pm\sqrt{8^2-4(1)(24)}}{2(1)}=4\pm2i\sqrt2$$</span> this gives us <span class="math-container">$y=2\mp2i\sqrt2$</span> we now have <span class="math-container">$x=4+2i\sqrt2, y=2-2i\sqrt2$</span> or <span class="math-container">$x=4-2i\sqrt2, y=2+2i\sqrt2$</span></p> <p>Substituting these values I got <span class="math-container">$2x+3y=14-2i\sqrt2$</span> or <span class="math-container">$$2x+3y=14+2i\sqrt2$$</span></p> <p>But my book suggests that <span class="math-container">$2x+3y$</span> should be a real value that I couldn't get. Can somebody please help me solve this problem? Is there any mistake in the question?</p> <p>Thank you.</p>
user
505,767
<p>It seems correct to me</p> <p>$$x^2+3x(6-x)+2(6-x)=60\iff x^2+18x-3x^2+12-2x-60=0$$ $$\iff-2x^2+16x-48=0\iff x^2-8x+24=0\implies x=4\pm2i\sqrt2$$</p>
815,432
<p>I have sometimes seen notations like $a\equiv b\pmod c$. How do we define the notation? Have I understood correctly that $c$ must be an element of some ring or does the notation work in magmas in general?</p>
C-star-W-star
79,762
<p>You certainly need a ring, since a magma has only one binary operation while the modulus expression involves two: $$a\equiv b\pmod{c}:\iff \exists k:\; a=b+k\cdot c$$</p>
1,011,718
<p>$P1 : \sin(x/y)$.</p> <p>I tried using $y=mx$. $f$ becomes $\sin(1/m)$, so the limit doesn't exist. But it is too easy. Am I right?</p> <p>$P2 : F = x^2\log(x^2+y^2)$</p> <p><img src="https://i.stack.imgur.com/pEaSs.jpg" alt="enter image description here"></p> <p><img src="https://i.stack.imgur.com/CEI2K.jpg" alt="Attempt"></p> <p>So the function can be defined as $0$ at the origin in order to be continuous. Am I right in these? I'm learning this stuff on my own.</p>
Alen
22,372
<p>Substituting $\left( {x,y} \right) = \left( {r\sin \phi , r\cos \phi } \right) = \varphi \left( {r,\phi } \right)$ gives $f\left( {x,y} \right) = f\left( {\varphi \left( {r,\phi } \right)} \right) = {\sin ^2}\left( \phi \right){r^2}\ln {r^2}$.</p> <p>If the limit $\mathop {\lim }\limits_{r \to 0} {\sin ^2}\left( \phi \right){r^2}\ln {r^2}$ exists uniformly in $\phi$, so does the limit in question. Since ${\sin ^2}\left( \phi \right) \le 1$, we have $\mathop {\lim }\limits_{r \to 0} {\sin ^2}\left( \phi \right){r^2}\ln {r^2} = 0$ uniformly, which answers the question.</p>
546,572
<p><img src="https://i.stack.imgur.com/aJ2t5.jpg" alt="enter image description here"></p> <p>Could anyone tell me how to solve 9b and 10? I've been thinking for five hours, I really need help.</p>
GA316
72,257
<p>Hint: the union of two discrete sets need not be discrete.</p>
2,196,413
<blockquote> <p>Let $R$ be a commutative ring. Denote by $R^*$ the group of invertible elements (this is a group w.r.t multiplication.) Suppose $R^*\cong \mathbb{Z}$. I need to show that $1+1=0$ in $R$.</p> </blockquote> <p>I have no clue about why such statement should be true. I don't even have an example for a ring that satisfies these assumptions, so I'd be glad to see one.</p> <p>Hints (or partial solutions) will be welcomed. Thank you!</p>
Hanno
81,567
<p><em>Hint:</em> If $R^{\times}\cong{\mathbb Z}$, then in particular it has no nontrivial torsion; on the other hand, there's $-1\in R^{\times}$.</p>
4,154,298
<p>Suppose that <span class="math-container">$f(x, y)$</span> given by <span class="math-container">$\sum_{i=0}^{a}\sum_{j=0}^{b}c_{i,j}x^iy^j$</span> is a polynomial in two variables with real coefficients such that among its coefficients there is a non-zero one. Prove that there is a point <span class="math-container">$(x_0, y_0) \in \mathbb{R}^2$</span> such that <span class="math-container">$f(x_0, y_0)\neq 0$</span>.</p> <p>So basically this is a non-zero polynomial, i.e. a polynomial with at least one non-zero coefficient. I do not understand how one can prove such a statement; I lack the intuition for it. Moreover, I am not finding any properties like this on the Wikipedia page. I think I am lacking some real analysis basics. Could anyone please provide any hints or direct me towards something helpful for this question?</p>
Community
-1
<p>If you want to work ultraformally, a sequence is a function from <span class="math-container">$\mathbb{N}$</span> to <span class="math-container">$\mathbb{R},$</span> and so you cannot just start indexing at 1--you have a new object, not a sequence. Your alternative definition is too broad, since it captures a notion of &quot;eventual&quot; subsequences, which is a different concept. Also, it doesn't apply to the case you have, since it is again defined for two sequences starting at 0, not for two generic sequences.</p> <p>In everyday life, nobody will be formal enough to tell you that <span class="math-container">$(2n-2)_{n=1}^{\infty}$</span> and <span class="math-container">$(2n)_{n=0}^{\infty}$</span> are different types of objects, although if you go down to details they are.</p>
2,414,492
<p>Check the convergence of $$\sum_{k=0}^\infty{2^{-\sqrt{k}}}$$ I have tried all other tests (ratio test, integral test, root test, etc.) but none of them got me anywhere. Pretty sure the way to do it is to check the convergence by comparison, but not sure how.</p>
Clement C.
75,808
<p>Note that $$2^{-\sqrt{k}} = e^{-\sqrt{k}\ln 2}$$ while $$\frac{1}{k^2} = e^{-2\ln k}.$$ Since $\sqrt{k} \ln 2 &gt; 2\ln k$ for $k$ big enough,* you can conclude by comparison with the $p$-series $\sum_{k=1}^\infty \frac{1}{k^2}$.</p> <blockquote> <p>*One can check that the strict inequality holds for all $k\geq 257$; at $k=256$ the two sides are equal, since $\ln 256 = 8\ln 2$.</p> </blockquote>
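A quick Python check of the footnote (our own verification script): at $k=256$ the two exponents agree exactly, and the strict inequality holds from $k=257$ on, so $2^{-\sqrt{k}} \le 1/k^2$ from $k=256$ and the comparison test applies:

```python
import math

# At k = 256 the exponents coincide: sqrt(256)*ln 2 = 16*ln 2 = 2*ln 256.
print(math.sqrt(256) * math.log(2), 2 * math.log(256))

# Strict inequality sqrt(k)*ln 2 > 2*ln k for k >= 257 (spot-checked up to 10^5).
ok = all(math.sqrt(k) * math.log(2) > 2 * math.log(k) for k in range(257, 100_000))
print(ok)
```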
2,732,202
<p>$Z_1, Z_2, Z_3,\ldots$ are independent and identically distributed random variables such that $E(Z_i)^- &lt; \infty$ and $E(Z_i)^+ = \infty$. Prove that $$\frac {Z_1+Z_2+Z_3+\cdots+Z_n} n \to \infty$$ almost surely.</p> <p>What do $E(Z_i)^+$ and $E(Z_i)^-$ mean? I believe it is integrating from negative infinity to zero and from zero to positive infinity for $-$ and $+$ respectively. But the LLN won't apply here since the expectation doesn't exist, i.e. $E|Z_i|=\infty$.</p> <p>P.S. another post I made is similar but different: there it is $E(Z_i)^+$, which stands for the max (and the min for the minus sign).</p>
Michael Hardy
11,667
<p>The positive part of a random variable $X$ is the random variable $X^+ = \begin{cases} 0 &amp; \text{if } X&lt;0, \\ X &amp; \text{if } X\ge 0. \end{cases}$</p> <p>The negative part $X^-$ is defined similarly. If $F(x) = \Pr(X\le x)$ then you have $\displaystyle\operatorname E(X^+) = \int_0^\infty x\, dF(x)$ and $\displaystyle\operatorname E(X^-) = \int_{-\infty}^0 x \, dF(x).$ If the distribution is one that has a density function $f$ then these are the same as $\displaystyle \int_a^b xf(x)\,dx$ for $(a,b) = (0,\infty) \text{ or } (-\infty,0)$ respectively.</p> <p>The law of large numbers cannot be used in the usual way if at all. This calls for a different sort of proof.</p>
1,121,354
<p>I need help understanding the following solution for the given problem. </p> <p>The problem is as follows: Given a field $F$, the set of all formal power series $p(t)=a_0+a_1 t+a_2 t^2 + \ldots$ with $a_i \in F$ forms a ring $F[[t]]$. Determine the ideals of the ring.</p> <p>The solution: Let $I$ be an ideal and $p \in I$ such that the number $a := \min\{i|a_i \neq 0\}$ is minimal. We claim $I=(t^a).$ First, $p=t^aq$ for some unit $q$, hence $(t^a) \subset I$. Conversely, any $r \in I$ has its first nonzero coefficient at degree $\geq a$, hence $r = t^a s$ for some $s \in F[[t]]$, and so $r \in (t^a)$.</p> <p>My questions: Why the claim $I=(t^a)$? Why does $q$ have to be a unit? What does "first nonzero coefficient at degree $\geq a$" mean? And I don't understand the last part of the proof!</p>
orangeskid
168,051
<p>Let $$p(t) = a_0 + a_1 t + a_2 t^2 + \cdots \\ q(t) = b_0 + b_1 t + b_2 t^2 + \cdots $$ Then we have $$p(t) \cdot q(t) = c_0 + c_1 t + c_2 t^2 + \cdots $$ where \begin{eqnarray} c_0 &amp;=&amp; a_0 b_0 \\ c_1 &amp;=&amp;a_0 b_1 + a_1 b_0 \\ c_2 &amp;=&amp; a_0 b_2 + a_1 b_1 + a_2 b_0\\ c_3 &amp;=&amp; a_0 b_3 + a_1 b_2 + a_2 b_1 + a_3 b_0\\ &amp;\ldots \ldots \end{eqnarray} Therefore $p(t) \cdot q(t) =1$ if and only if we have the infinite sequence of equalities: \begin{eqnarray} 1 &amp;=&amp; a_0 b_0 \\ 0 &amp;=&amp;a_0 b_1 + a_1 b_0 \\ 0 &amp;=&amp; a_0 b_2 + a_1 b_1 + a_2 b_0\\ 0 &amp;=&amp; a_0 b_3 + a_1 b_2 + a_2 b_1 + a_3 b_0\\ &amp;\ldots \ldots \end{eqnarray} Therefore, if $p(t)$ has an inverse $q(t)$ then $a_0 \cdot b_0=1$ and so $a_0$ is invertible. Conversely, if $a_0$ is invertible then in the above system we can solve inductively for $b_0$, $b_1$, $b_2$, $\ldots $ and therefore $p(t)$ is invertible.</p> <p>For example: \begin{eqnarray} b_0 &amp;=&amp; \frac{1}{a_0} \\ b_1 &amp;=&amp; - \frac{a_1}{a_0^2}\\ b_2 &amp;=&amp; - \frac{a_2}{a_0^2} + \frac{a_1^2}{a_0^3}\\ b_3 &amp;=&amp; -\frac{a_3}{a_0^2}+ 2 \frac{a_1 a_2}{a_0^3}- \frac{a_1^3}{a_0^4}\\ \ldots \ldots \end{eqnarray} Let us define the order $o(p(t))$ of a power series $p(t)= \sum_n a_n t^n $ as follows: $o(p(t)) = \min \{ n \ | \ a_n \ne 0\}$ if $p(t) \ne 0$ and $o(0) = \infty$. </p> <p>From the above we have $p(t)$ invertible if and only if $o(p(t))=0$. Moreover, for any $p(t) \ne 0$ we have$$p(t) = t^{o(p(t))} \cdot \bar p(t)$$ with $\bar p(t)$ invertible. $\tiny{ \text{(a prime factor decomposition )}}$ It is easy to check that $o(p(t)\cdot q(t) ) = o(p(t))+ o(q(t))$ and $p(t) \mid q(t)$ $\tiny{\text{(divides )}}$ if and only if $o(p(t)) \le o (q(t))$ $\tiny{ \text{(like for numbers) }}$</p> <p>Let $I$ be a nonzero ideal. Let $d$ be the smallest order of nonzero elements in $I$.$\tiny{\text{(any set of natural numbers has a smallest element)}}$ Let $p(t) = t^d \cdot \bar p(t) \in I$. 
For any other $q(t)$ in $I$ we have $q(t) = t^e \cdot \bar q(t)$ with $e \ge d$ and so $t^d \mid q(t)$. We conclude that $I \subset (t^d)$. Moreover, $t^d = \frac{1}{\bar p(t)} \cdot p(t) \in I$. Therefore $I = (t^d)$. </p> <p>Hence all the ideals of $k[[t]]$ are $0$ and $(t^d)$, $d\ge 0$. </p>
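The inductive solution for $b_0, b_1, b_2, \ldots$ described above can be carried out mechanically. Here is a small Python sketch (the function name `inverse_coeffs` is ours) using exact rational arithmetic:

```python
from fractions import Fraction

def inverse_coeffs(a, n):
    """First n coefficients of 1/p(t), where p(t) = sum a[i] t^i and a[0] != 0."""
    b = [Fraction(1) / a[0]]  # b_0 = 1/a_0
    for k in range(1, n):
        # 0 = sum_{i=0}^{k} a_i b_{k-i}  =>  solve for b_k
        s = sum(a[i] * b[k - i] for i in range(1, min(k, len(a) - 1) + 1))
        b.append(-s / a[0])
    return b

# p(t) = 1 + t has inverse 1 - t + t^2 - t^3 + ...
print(inverse_coeffs([Fraction(1), Fraction(1)], 5))
```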
3,755,355
<p>I wanted to prove that every group of order <span class="math-container">$4$</span> is isomorphic to <span class="math-container">$\mathbb{Z}_{4}$</span> or to the Klein group. I also wanted to prove that every group of order <span class="math-container">$6$</span> is isomorphic to <span class="math-container">$\mathbb{Z}_{6}$</span> or <span class="math-container">$S_{3}$</span>.</p> <ol> <li><p>For the first one I tried to prove that <span class="math-container">$H$</span> (an arbitrary group of order 4) is cyclic or the Klein group, because if <span class="math-container">$H$</span> is cyclic I can prove that a cyclic group of order <span class="math-container">$n$</span> is isomorphic to <span class="math-container">$\mathbb{Z}_{n}$</span>. Because <span class="math-container">$H$</span> has order <span class="math-container">$4$</span> it's only possible for elements in <span class="math-container">$H$</span> to have order <span class="math-container">$1$</span>, <span class="math-container">$2$</span>, <span class="math-container">$4$</span> (Lagrange). Say that <span class="math-container">$H$</span> is not cyclic. Then all the elements need to have order <span class="math-container">$1$</span> or <span class="math-container">$2$</span>. Not all the elements can have order <span class="math-container">$1$</span> so there must be one element of order <span class="math-container">$2$</span>. Say that <span class="math-container">$b$</span> is an element with order <span class="math-container">$2$</span>. Then take <span class="math-container">$c$</span> to be an element other than the unit element and <span class="math-container">$b$</span>. Then <span class="math-container">$H=\{e, b, c, bc \}$</span>, so <span class="math-container">$c$</span> must have order <span class="math-container">$2$</span> because otherwise <span class="math-container">$H$</span> would have an order bigger than <span class="math-container">$4$</span>. 
This is the Klein group.</p> </li> <li><p>I wanted to do the second one analogously but I can't make a proper proof out of it.</p> </li> </ol> <p>Can someone help and correct me? (I'm so sorry for my English mistakes but I'm really trying.)</p>
Community
-1
<p>Let <span class="math-container">$G$</span> be a group of order 4. Assume for contradiction that it is not abelian, i.e. we have elements <span class="math-container">$a,b$</span> that do not commute: then <span class="math-container">$1, a, b, ab, ba$</span> are 5 distinct elements.</p> <p>Therefore <span class="math-container">$G$</span> is abelian, and by the structure theorem for abelian groups it must be isomorphic to one of <span class="math-container">$C_4$</span> or <span class="math-container">$C_2 \times C_2$</span>.</p>
279,277
<p>I have been told multiple times that the logarithmic function is the inverse of the exponential function and vice versa. My question is; what are the implications of this? How can we see that they're the inverse of each other in basic math (so their graphed functions, derivatives, etc.)?</p>
rschwieb
29,335
<p>It means that $e^x$ is a <a href="http://en.wikipedia.org/wiki/Bijection" rel="nofollow noreferrer">bijection</a> from $\Bbb R$ onto $(0, \infty)$, and that $\ln(x)$ is a bijection of $(0,\infty)$ onto $\Bbb R$. In other words, both functions pair up points in their domain and range: roughly speaking, "nothing is left out", and "no two pairs overlap".</p> <p>The inverse properties say that: $e^{\ln(x)}=x$ for all $x\in (0,\infty)$ and $\ln(e^x)=x$ for all $x\in \Bbb R$. These properties are useful for solving equations, for one thing. If you see $e^x=4$, then by applying $\ln$ to both sides:</p> <p>$$ \ln(e^x)=\ln(4) $$ and by that cancellation, the left hand side is just $x$, so it is now solved for $x$.</p> <p>Graphically you can check that they are inverses. If you graph both $e^x$ and $\ln(x)$ on the same axes, then you will observe that they are reflections of each other across the line $y=x$. This is the case for all pairs of mutually inverse functions. </p> <p>That graphical reflection translates, in symbols, to: if $(x_0,y_0)$ is on the graph of $e^x$, that means $e^x$ sends $x_0$ to $y_0$. Since the functions are inverses, that means $\ln(x)$ sends $y_0$ to $x_0$. Therefore, $(y_0,x_0)$ is on the graph of $\ln(x)$. The act of switching the coordinates of the ordered pairs produces the reflection across $y=x$.</p> <p>There is one relationship you can derive about the derivatives. Since $f(f^{-1}(x))=x$, differentiating both sides (using the chain rule on the left) says that $f'(f^{-1}(x))\cdot (f^{-1})'(x)=1$. Solving for $(f^{-1})'(x)$ you get that:</p> <p>$$ (f^{-1})'(x)=\frac{1}{f'(f^{-1}(x))} $$</p> <p>Using $f(x)=e^x$ and knowing that $f'=f$ for this function, you get:</p> <p>$$ \frac{d\ln(x)}{dx}=\frac{1}{e^{\ln(x)}}=\frac{1}{x} $$</p> <p>There are a lot of details and rigor I have not mentioned, but I hope this gives you a bit of a flavor of what is going on. 
Don't use this as an excuse not to look at your text!</p> <hr> <p>PS: I interpreted "the exponential" to mean $f(x)=e^x$, but of course everything above (with some care, especially in the case of the derivative) can be changed over to the case of $f(x)=a^x$ and $f^{-1}(x)=\log_a(x)$</p>
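The round-trip identities and the inverse-function derivative rule above can be checked numerically. A brief Python sketch (illustrative only; the central-difference step size `h` is our choice):

```python
import math

# Cancellation identities: exp and ln undo each other.
for v in [0.5, 1.0, 4.0]:
    assert math.isclose(math.exp(math.log(v)), v)
    assert math.isclose(math.log(math.exp(v)), v)

# Derivative of ln via (f^{-1})'(x) = 1 / f'(f^{-1}(x)) with f = exp, f' = f:
# d ln(x)/dx should equal 1 / e^{ln x} = 1/x.
x, h = 2.0, 1e-6
numeric = (math.log(x + h) - math.log(x - h)) / (2 * h)  # central difference
print(numeric, 1 / x)
```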
279,277
<p>I have been told multiple times that the logarithmic function is the inverse of the exponential function and vice versa. My question is; what are the implications of this? How can we see that they're the inverse of each other in basic math (so their graphed functions, derivatives, etc.)?</p>
amWhy
9,003
<p>Just to generalize: </p> <p>A function $g: Y \to X$ is an inverse function of $\;f: X \to Y\;$ (and vice versa) if and only if $$\;g \circ f = id_X\; \text { and}\;\,f\circ g = id_Y:\;$$ that is, if and only if $\;g(f(x)) = x\;$ for all $x \in X$ and $\;f(g(y)) = y\,$ for all $\,y \in Y$.</p>
4,269,414
<p>In <a href="https://math.stackexchange.com/questions/550764/quotient-space-of-s1-is-homeomorphic-to-s1">this post</a> someone suggested:</p> <p>&quot;<span class="math-container">$z\mapsto z^2$</span>&quot;</p> <p>where both <span class="math-container">$z$</span> and <span class="math-container">$z^2$</span> are in <span class="math-container">$\mathbb{S^1}$</span> (unless I missed something).</p> <p>Does this mean that they suggested: <span class="math-container">$z^2 = (z_1^2, z_2^2)$</span>?</p> <p>If not, what is it?</p>
principal-ideal-domain
131,887
<p>As already pointed out in the comments, <span class="math-container">$z^2$</span> means <span class="math-container">$z\cdot z$</span>.</p> <p>You suggested <span class="math-container">$(z_1^2,z_2^2)$</span>. That's not how the multiplication in <span class="math-container">$\mathbb C$</span> works. It's actually <span class="math-container">$(x^2-y^2,2xy)$</span> if <span class="math-container">$z=(x,y)=x+iy$</span>.</p>
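A quick check in Python's complex arithmetic confirms both the componentwise formula and the fact that squaring maps $\mathbb{S}^1$ to itself (the sample angle `0.7` is arbitrary):

```python
import cmath

z = cmath.exp(1j * 0.7)   # a point on the unit circle S^1
w = z * z                 # z^2 means complex multiplication

x, y = z.real, z.imag
print(cmath.isclose(w, complex(x*x - y*y, 2*x*y)))  # True: (x^2 - y^2, 2xy)
print(abs(w))                                       # still on the unit circle
```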
2,911,187
<p>Lines (same angle space between) radiating outward from a point and intersecting a line:</p> <p><a href="https://i.stack.imgur.com/52HY4.png" rel="noreferrer"><img src="https://i.stack.imgur.com/52HY4.png" alt="Intersection Point Density Distribution"></a></p> <p>This is the density distribution of the points on the line:</p> <p><a href="https://i.stack.imgur.com/CqbWF.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/CqbWF.jpg" alt="Plot line Graph"></a></p> <p><sup>I used a Python script to calculate this. The angular interval is 0.01 degrees. X = tan(deg) rounded to the nearest tenth, so many points will have the same x. Y is the number of points for its X. You can see on the plot that ~5,700 points were between -0.05 and 0.05 on the line.</sup></p> <p>What is this graph curve called? What's the function equation?</p>
TurlocTheRed
397,318
<p>This is also known, especially among us physicists, as a Lorentz distribution.</p> <p>We know that every point on the line has the same vertical distance from the source point. Let's call this $l_0$. We also know the ratio of the horizontal component of the distance to the vertical component is $\tan(\theta)$, where $\theta$ goes from $-\frac{\pi}{2}$ to $\frac{\pi}{2}$ in the increments given. So the horizontal distance is $x=l_0\tan(\theta)$.</p> <p>To find the densities, first take the derivative of both sides, getting $dx=l_0\sec^2(\theta)\,d\theta$. From trig, we know $\sec^2(\theta)=1+\tan^2(\theta)$, and from above, we know that $\tan(\theta)$ is $x\over{l_0}$. So we can isolate $d\theta$ to get</p> <p>$${dx \over {l_0 \left( 1+\left({x}\over{l_0} \right)^2 \right)}} =d\theta$$</p> <p>We know $\theta$ is uniformly distributed, so we can divide both sides by $\pi$. Now the probability of a given $\theta$ falling between $\theta$ and $\theta+d \theta$ is equal to the function of $x$ on the left. </p>
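The derivation can be checked by simulation. The following Python sketch (our reconstruction, not the asker's script) draws uniform angles, projects them onto the line at distance $l_0 = 1$, and compares the hit fraction near $0$ with the Cauchy/Lorentz prediction $\frac{1}{\pi}\left(\arctan(b/l_0) - \arctan(a/l_0)\right)$:

```python
import math
import random

random.seed(0)
l0, n = 1.0, 100_000

# Uniform angles in (-pi/2, pi/2) projected to x = l0 * tan(theta).
xs = [l0 * math.tan(random.uniform(-math.pi / 2, math.pi / 2)) for _ in range(n)]

# Fraction of hits in [-0.05, 0.05] vs. the Cauchy CDF prediction.
hit = sum(-0.05 <= x <= 0.05 for x in xs) / n
pred = (math.atan(0.05 / l0) - math.atan(-0.05 / l0)) / math.pi
print(hit, pred)  # the two should be close
```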
33,387
<p>I was told the following "Theorem": Let $y^{2} =x^{3} + Ax^{2} +Bx$ be a nonsingular cubic curve with $A,B \in \mathbb{Z}$. Then the rank $r$ of this curve satisfies</p> <p>$r \leq \nu (A^{2} -4B) +\nu(B) -1$</p> <p>where $\nu(n)$ is the number of distinct positive prime divisors of $n$.</p> <p>I can not find a name for this theorem or a reference, and I am wondering if it is a well known result, or if it is even true. Has anyone seen this result or have a suggestion on where I can find a reference. Thank you.</p>
Adrián Barquero
900
<p>I think this is a <a href="https://planetmath.org/BoundForTheRankOfAnEllipticCurve" rel="nofollow noreferrer">reference</a>.</p>
3,270,725
<p>Hello everyone I read on my notes this proposition: </p> <p>Given a field <span class="math-container">$K$</span> and <span class="math-container">$R=K[T]$</span>, let <span class="math-container">$M$</span> be a (left) finitely generated <span class="math-container">$R$</span>-module; then <span class="math-container">$M$</span> is a torsion module if and only if <span class="math-container">$\dim_K(M)&lt;\infty$</span>.</p> <p>Since it has already been stated that <span class="math-container">$M$</span> is finitely generated, <span class="math-container">$\dim_K(M)$</span> must be something different from the number of generators of <span class="math-container">$M$</span>, then my question is: what does <span class="math-container">$\dim_K(M)$</span> mean?</p>
Dr. Sonnhard Graubner
175,066
<p>It is <span class="math-container">$$\frac{x}{\pi}+\frac{1}{2}\notin \mathbb{Z}$$</span> and <span class="math-container">$$-\frac{\pi}{2}&lt;x-2\pi n&lt;0$$</span> or <span class="math-container">$$\frac{x}{\pi}+\frac{1}{2}\notin \mathbb{Z}$$</span> and <span class="math-container">$$\frac{\pi}{2}&lt;x-2\pi n&lt;\pi$$</span> where <span class="math-container">$n$</span> is an integer.</p>
2,009,557
<p>I am pretty sure this question has something to do with the Least Common Multiple. </p> <ul> <li>I was thinking that the proof was that every number either is or isn't a multiple of $3, 5$, and $8\left(3 + 5\right)$.</li> <li>If it isn't a multiple of $3,5$, or $8$, great. You have nothing to prove.</li> <li>But if it is divisible by one of them, I couldn't find a general proof that showed that it wouldn't be divisible by another one. Say $15$, it is divisible by $3$ and $5$, but not $8$.</li> </ul>
Joffan
206,402
<p>A formal induction: Any integer $k&gt;7$ can be expressed as $3a+5b = k$, with $a,b\ge 0$.</p> <p>Base cases:</p> <ul> <li>$k=8 \implies a=1,b=1$</li> <li>$k=9 \implies a=3,b=0$</li> <li>$k=10 \implies a=0,b=2$</li> </ul> <p>Induction: </p> <p>Assume that the statement holds for all $k&lt;m$, and $m&gt;10$. In particular since $m-3&gt;7$, then statement holds for $n := m-3$. Now set $a_m = a_n +1, b_m = b_n$ and:</p> <p>$$3 a_m + 5b_m = 3 (a_n+1) + 5b_n = 3+ 3a_n + 5b_n = 3+n = m$$</p> <p>Showing that the statement also holds for $m$ and the induction is complete.</p>
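The induction can be confirmed by brute force. A short Python sketch (the helper name `rep` is ours):

```python
def rep(k):
    """Return (a, b) with 3*a + 5*b == k and a, b >= 0, or None if impossible."""
    for b in range(k // 5 + 1):
        if (k - 5 * b) % 3 == 0:
            return ((k - 5 * b) // 3, b)
    return None

print(rep(8), rep(9), rep(10))  # base cases: (1, 1) (3, 0) (0, 2)
print(all(rep(k) is not None for k in range(8, 1000)))
print(rep(7))  # None: 7 is not representable
```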
1,098,614
<p>Newbie question: Is pulling out factors and dividing the same? Please explain the difference in the examples below.</p> <p>Example:</p> <p>$$2x-4 \to 2(x-2)$$</p> <p>Is it the same as </p> <p>$$4x^2 - 8x + 1 = 0$$</p> <p>$$\frac{4x^2 - 8x}{4} = -\frac{1}{4} \implies x^2 - 2x = -\frac{1}{4}$$</p> <p>Thanks</p>
mez
59,360
<p>I worked things out and have the following conclusion.</p> <p>Abelian groups are the same as $\mathbb{Z}$-modules, therefore sheaves of abelian groups are the same as $\underline{\mathbb{Z}}^{pre}$-modules. However, due to the fact that sheaves have the identity and gluability property, this $\underline{\mathbb{Z}}^{pre}$ action can be canonically extended to a $\underline{\mathbb{Z}}$ action, the way <a href="https://math.stackexchange.com/questions/1098613/sheafs-of-abelian-groups-are-the-same-as-underline-mathbbz-modules#comment2241830_1098613">@hoot</a> described, where $\underline{\mathbb{Z}}^{pre}$ is identified as the constant functions inside $\underline{\mathbb{Z}}$.</p> <p>This can also be explained from the adjunction of embedding and sheafification as <a href="https://math.stackexchange.com/questions/1098613/sheafs-of-abelian-groups-are-the-same-as-underline-mathbbz-modules#comment2239625_1098613">@Zhen Lin</a> described. The fact that the sheafification of the constant presheaf is the constant sheaf implies that a $\underline{\mathbb{Z}}^{pre}$-module extends uniquely to a $\underline{\mathbb{Z}}$-module.</p>
1,098,614
<p>Newbie question: Is pulling out factors and dividing the same? Please explain the difference in the examples below.</p> <p>Example:</p> <p>$$2x-4 \to 2(x-2)$$</p> <p>Is it the same as </p> <p>$$4x^2 - 8x + 1 = 0$$</p> <p>$$\frac{4x^2 - 8x}{4} = -\frac{1}{4} \implies x^2 - 2x = -\frac{1}{4}$$</p> <p>Thanks</p>
Andrew Tawfeek
<p>This note of Vakil took me a while to unpack, so after bumping into this question I felt obliged to include what eventually helped me see the natural structure.</p> <p>Let's take <span class="math-container">$\mathscr{F}$</span> to be a <span class="math-container">$\underline{\mathbb{Z}}$</span>-module and <span class="math-container">$U$</span> some open set in <span class="math-container">$X$</span>. As @Hoot indirectly mentioned through their comment, it is not hard to see that <span class="math-container">$\mathcal{U}= \{f^{-1}(n)\}_{n\in\mathbb{Z}}$</span> forms an open cover, and it is arguably the best choice of cover, as <span class="math-container">$f$</span> is constant on each element. Since <span class="math-container">$\mathscr{F}$</span> is a <span class="math-container">$\underline{\mathbb{Z}}$</span>-module, we have for each <span class="math-container">$n \in \mathbb{Z}$</span> that</p> <p><a href="https://i.stack.imgur.com/p6r21.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p6r21.png" alt="enter image description here"></a></p> <p>commutes, hence it follows that</p> <p><a href="https://i.stack.imgur.com/yhlQL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yhlQL.png" alt="enter image description here"></a></p> <p>Using then the property that <span class="math-container">$\mathscr{F}$</span> is a sheaf, we can glue together each of the pieces of <span class="math-container">$f \cdot \alpha$</span> that are distributed throughout <span class="math-container">$\{\mathscr{F}(f^{-1}(n))\}_{n\in\mathbb{Z}}$</span> to get it back. 
</p> <p>Stated neatly: take <span class="math-container">$\mathcal{U}$</span> above to be our cover, <span class="math-container">$f_k = k \cdot \alpha|_{f^{-1}(k)} \in \mathscr{F}(f^{-1}(k))$</span>, and note that for <span class="math-container">$n,m \in \mathbb{Z}$</span> we clearly have <span class="math-container">\begin{align*} m=n &amp;\implies f^{-1}(n) \cap f^{-1}(m) = f^{-1}(n) = f^{-1}(m) \\ m \neq n &amp;\implies f^{-1}(n) \cap f^{-1}(m) = \emptyset \end{align*}</span> thus in both cases we trivially obtain that for all <span class="math-container">$n,m \in \mathbb{Z}$</span> <span class="math-container">$$\text{res}_{f^{-1}(n),f^{-1}(n) \cap f^{-1}(m)} f_n = \text{res}_{f^{-1}(n),f^{-1}(n) \cap f^{-1}(m)} f_m$$</span> (recall that <span class="math-container">$\mathscr{F}(\emptyset)$</span> is a singleton). Hence by the conclusion of the gluability axiom, we can essentially "glue" our <span class="math-container">$f_i$</span>'s all together to obtain an <span class="math-container">$F$</span> that restricts to each <span class="math-container">$f_i$</span> (i.e. we get the existence of such an <span class="math-container">$F$</span>). The uniqueness of this <span class="math-container">$F$</span> follows from the identity axiom on <span class="math-container">$\mathscr{F}$</span>. This resulting <span class="math-container">$F$</span> is what we then define to be <span class="math-container">$f\cdot \alpha$</span>.</p>
<p>The question first requires me to prove the identity <span class="math-container">$$\sqrt{\frac{1- \sin x}{1+ \sin x}}=\sec x- \tan x, -90^\circ &lt; x &lt; 90^\circ$$</span> I am able to prove this. The second part says “Explain why <span class="math-container">$x$</span> must be acute for the identity to be true”. I don’t see why <span class="math-container">$x$</span> must be acute for the identity to hold true. Wouldn’t it suffice for it to lie in either the 1st or 4th quadrant? Eg <span class="math-container">$x=330^\circ$</span>. </p>
lab bhattacharjee
<p>For <span class="math-container">$1+\sin x\ne0$</span></p> <p><span class="math-container">$$f^2(x)=\dfrac{1-\sin x}{1+\sin x}=\dfrac{(1-\sin x)^2}{\cos^2x}$$</span></p> <p><span class="math-container">$$f(x)=\dfrac{|1-\sin x|}{|\cos x|}=\dfrac{1-\sin x}{|\cos x|}$$</span> as <span class="math-container">$1-\sin x\ge0$</span></p> <p>Now <span class="math-container">$|\cos x|=+\cos x$</span> iff <span class="math-container">$\cos x\ge0$</span> i.e., if <span class="math-container">$x$</span> lies in the first or in the fourth quadrant</p> <p>Else <span class="math-container">$|\cos x|=-\cos x$</span></p> <p>Hence <span class="math-container">$f(x)=\dfrac{1-\sin x}{\cos x}=\sec x-\tan x$</span> precisely when <span class="math-container">$\cos x&gt;0$</span>, i.e. when <span class="math-container">$x$</span> lies in the first or fourth quadrant; acuteness is sufficient but not strictly necessary.</p>
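As a numerical illustration of this case analysis (plain Python, with sample angles chosen arbitrarily), the identity holds where $\cos x&gt;0$ and fails where $\cos x&lt;0$:

```python
import math

def lhs(x):  # sqrt((1 - sin x) / (1 + sin x))
    return math.sqrt((1 - math.sin(x)) / (1 + math.sin(x)))

def rhs(x):  # sec x - tan x
    return 1 / math.cos(x) - math.tan(x)

# identity holds when cos x > 0 (first/fourth quadrant) ...
for deg in (10, 45, 80, -30, 330):
    x = math.radians(deg)
    assert abs(lhs(x) - rhs(x)) < 1e-12

# ... and fails when cos x < 0 (e.g. second quadrant)
x = math.radians(110)
assert abs(lhs(x) - rhs(x)) > 1e-6
```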
<p>So I have an object that moves in a straight line with initial velocity <span class="math-container">$v_0$</span> and starting position <span class="math-container">$x_0$</span>. I can give it constant acceleration <span class="math-container">$a$</span> over a fixed time interval <span class="math-container">$t$</span>. Now what I need is that when the time interval ends this object should stop exactly at a point <span class="math-container">$x_1$</span> with its velocity being equal to <span class="math-container">$0$</span>. I need to find the acceleration <span class="math-container">$a$</span> that I can give it in order for that to happen. </p> <p>The way I see it, we've got a system of equations: <span class="math-container">$$ 0 = v_0 + a t $$</span> <span class="math-container">$$ x_1 = x_0 + v_0 t + \frac {a t^2} {2} $$</span> </p> <p>I have only one unknown, which is <span class="math-container">$a$</span>. </p> <p>Let's get <span class="math-container">$a$</span> from the first equation: <span class="math-container">$$ a = \frac { - v_0 } { t } $$</span> </p> <p>And put it into the second one: <span class="math-container">$$ x_1 = x_0 + v_0 t + \frac { - v_0 t } {2} $$</span> </p> <p>Now let's express the initial velocity (<span class="math-container">$v_0$</span>) from that equation: <span class="math-container">$$ x_1 - x_0 = v_0 t + \frac { - v_0 t } {2} $$</span> <span class="math-container">$$ \frac { x_1 - x_0 } { t } = v_0 + \frac { - v_0 } {2} $$</span> <span class="math-container">$$ \frac { 2 ( x_1 - x_0 ) } { t } = 2 v_0 - v_0 $$</span> <span class="math-container">$$ v_0 = \frac { 2 ( x_1 - x_0 ) } { t } $$</span> </p> <p>And put it back into the equation for acceleration: <span class="math-container">$$ a = \frac { - v_0 } { t } $$</span> <span class="math-container">$$ a = \frac { - \frac { 2 ( x_1 - x_0 ) } { t } } { t } $$</span> <span class="math-container">$$ a = - \frac { 2 ( x_1 - x_0 ) } { t^2 } $$</span> </p> <p>So we got an acceleration that I need to apply to an object over a time interval <span class="math-container">$t$</span>, so that it would stop at <span class="math-container">$x_1$</span> with velocity <span class="math-container">$0$</span>, right? </p> <p>But it doesn't work! </p> <p>Because it doesn't depend on the initial velocity at all! So if my object is flying at 2 m/s then I would need to apply the same acceleration as if it was flying 100 m/s, or 1000 m/s? How come? </p> <p>Where am I going wrong? This all seems mathematically sound... Am I setting the wrong premises? Interpreting the results in the wrong way? </p> <p>I really need it for my project, and I've been trying to solve this for weeks, studying different aspects of maths that might help me, but I just can't do it :( </p> <p>But this looks so simple! And yet I just can't do it. 11 years of school seem so useless right now... </p> <p>Help please </p>
Arturo Magidin
<p>I will use <span class="math-container">$t_0$</span> rather than <span class="math-container">$t$</span>, since this is also a fixed quantity.</p> <p>What you are doing doesn't work for arbitrary <span class="math-container">$t_0$</span>, <span class="math-container">$x_0$</span>, <span class="math-container">$x_1$</span>, and <span class="math-container">$v_0$</span>. </p> <p>Since your only unknown is supposed to be <span class="math-container">$a$</span>, from the first equation you get <span class="math-container">$$a = -\frac{v_0}{t_0}$$</span> From the second equation you get <span class="math-container">$$a = \frac{2(x_1-x_0-v_0t_0)}{t_0^2}$$</span> Thus, for a solution to exist, you must have <span class="math-container">$$-\frac{v_0}{t_0} = \frac{2(x_1-x_0-v_0t_0)}{t_0^2}$$</span> or <span class="math-container">$$v_0t_0 = 2(x_1-x_0).$$</span> If this does not hold, then there is no solution.</p> <p>Conversely, if <span class="math-container">$v_0t_0=2(x_1-x_0)$</span>, then your solution is <span class="math-container">$a=-\frac{2(x_1-x_0)}{t_0^2} = -\frac{v_0}{t_0}.$</span> So the solution only exists for a specific value of <span class="math-container">$v_0$</span> (given the distance and time), and then the acceleration does depend on the initial velocity.</p> <p>Alternatively, you can fix any <em>three</em> of <span class="math-container">$v_0$</span>, <span class="math-container">$x_0$</span>, <span class="math-container">$t_0$</span>, and <span class="math-container">$x_1$</span>, and then solve for the remaining unknown and <span class="math-container">$a$</span>; but in general you cannot arbitrarily specify all four quantities.</p>
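To make this concrete, here is a small check with made-up values satisfying the constraint $v_0t_0=2(x_1-x_0)$ (the numbers are illustrative, not from the question):

```python
x0, x1, t0 = 0.0, 10.0, 4.0       # hypothetical values

v0 = 2 * (x1 - x0) / t0           # the only initial velocity that works: 5.0
a = -v0 / t0                      # the required acceleration: -1.25

# verify both kinematic equations at t = t0
x_final = x0 + v0 * t0 + 0.5 * a * t0**2
v_final = v0 + a * t0
assert abs(x_final - x1) < 1e-12  # stops exactly at x1
assert abs(v_final) < 1e-12       # with velocity 0
```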
<p><a href="https://i.stack.imgur.com/d65g2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d65g2.png" alt="enter image description here"></a></p> <p>Any idea where the missing $3^{k+2}$ comes from? (Sorry for the format, this thing didn't allow me to post images.)</p>
mrnovice
<p>Here's the inductive step:</p> <p>We have that:</p> <p>$1\cdot 3 + 2\cdot 3^2 +... + n\cdot 3^n = \frac{(2n-1)3^{n+1}+3}{4}$</p> <p>So now consider:</p> <p>$1\cdot 3 + 2\cdot 3^2 +... + n\cdot 3^n + (n+1)\cdot3^{n+1} = \frac{(2n-1)3^{n+1}+3}{4} + (n+1)3^{n+1}$</p> <p>$= \frac{(2n-1)3^{n+1}+3+4(n+1)3^{n+1}}{4} = \frac{6n\cdot 3^{n+1}+3\cdot 3^{n+1} +3}{4} = \frac{3^{n+2}(2n+1)+3}{4}$ as required.</p> <p>Hence true $\forall n\in \mathbb{Z^{+}}$</p>
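The closed form can also be checked numerically for small $n$ with a throwaway script (not part of the induction itself):

```python
def lhs(n):
    """Sum 1*3 + 2*3^2 + ... + n*3^n directly."""
    return sum(i * 3**i for i in range(1, n + 1))

def rhs(n):
    """The claimed closed form ((2n-1)*3^(n+1) + 3) / 4, exactly in integers."""
    return ((2 * n - 1) * 3**(n + 1) + 3) // 4

for n in range(1, 50):
    assert lhs(n) == rhs(n)
```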
<blockquote> <p>Show that for <span class="math-container">$x,y,z\in\mathbb{Z}$</span>, if <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are coprime and <span class="math-container">$z$</span> is nonzero, then <span class="math-container">$\exists n\in\mathbb{Z}$</span> such that <span class="math-container">$z$</span> and <span class="math-container">$y+xn$</span> are coprime.</p> </blockquote> <p>Not sure where to start on this one. I understand that coprime indicates that their GCD is 1, but I am somewhat confused how to proceed.</p>
C.S.
<p>$\textbf{Exercise}$ Let $(a,b)=1$ and $c&gt;0$. Prove that there is an integer $x$ such that $(a+bx,c)=1$.</p> <p>$\textbf{Solution.}$ Let $p_{1},p_{2},\cdots,p_{m}$ be the primes which appear in the prime factorization of $b$. Then since $(a,b)=1$, we have $(a,p_{i})=1$ for all $i$. If the prime factorization of $c$ contains only primes from the set $\{p_{1},p_{2},\ldots,p_{m}\}$, then our required integer is $x=0$, since $(a,p_i)=1$ for each $i$ implies $(a,c)=1$. Now suppose that $c$ contains extra primes $q_{1},q_{2},\ldots,q_{n}$. That is, $c$ is of the form $$c=\prod p_{i}q_j \quad \text{or} \quad c =\prod_{j=1}^{n} q_{j}$$ then we want to find an integer $x$ such that $(a+bx,p_i)=1$ for all $i$ and $(a+bx,q_j)=1$ for all $j$. It is clear that $(a+bx,p_i)=(a,p_i)=1$. So basically we want to find an integer $x$ such that $(a+bx,q_j)=1$ for all $j$. We know that $(q_{j}+1,q_j)=1$ always, so we need $x$ such that $bx+a=q_{j}+1\equiv 1\pmod{q_j}$ for all $j$, that is $bx\equiv 1-a\pmod{q_j}$ for all $j$. Since $(b,q_j)=1$ for all $j$, $b$ has an inverse and so $x=(1-a)b^{-1}\pmod{q_j}$. Now the system of congruences \begin{align*} x &amp;\equiv (1-a)b^{-1}\pmod{q_1}\\ x &amp;\equiv (1-a)b^{-1}\pmod{q_2}\\ &amp; \cdot\\ &amp; \cdot\\ x&amp;\equiv (1-a)b^{-1}\pmod{q_n} \end{align*} has a solution by the <em>Chinese Remainder Theorem</em>, and that solution is our required $x$.</p> <p>$\textbf{Remark.}$ If we assume <a href="http://mathworld.wolfram.com/DirichletsTheorem.html" rel="nofollow">Dirichlet's Theorem</a> then this problem can be solved as follows: Since $(a,b)=1$, the set $\{a+bx \ : x \in \mathbb{Z}\}$ contains infinitely many primes. Since $c$ is fixed, it has only finitely many prime factors. Let $P$ be the largest prime factor of $c$. Now choose $x$ large enough so that $a+bx$ is a prime which is greater than $P$. Then for that $x$, $(a+bx,c)=1$.</p>
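For intuition, a brute-force search (bypassing the CRT construction above; the helper name is made up) confirms that a suitable $x$ turns up quickly in examples:

```python
from math import gcd

def find_x(a, b, c):
    """Smallest x >= 0 with gcd(a + b*x, c) == 1 (assumes gcd(a, b) == 1, c != 0)."""
    x = 0
    while gcd(a + b * x, abs(c)) != 1:
        x += 1
    return x

# Example: a = 3, b = 4 are coprime, c = 30.
# gcd(3, 30) = 3, so x = 0 fails; x = 1 gives 7, coprime to 30.
assert gcd(3, 4) == 1
x = find_x(3, 4, 30)
assert gcd(3 + 4 * x, 30) == 1
```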
<p>I encountered a question regarding orthogonal projection. I only have some clues to approach this questions and am not sure whether my thoughts can be used to answer the question. Could anyone check my thoughts and correct it or figure out a better solution, if needed? Thanks in advance.</p> <p>Suppose $W \subseteq ℝ^6$ is a subspace with basis ${\begin{bmatrix}1\\5\\-1\\6\\787\\0\end{bmatrix},\begin{bmatrix}-34\\4\\4\\4\\4\\8\end{bmatrix},\begin{bmatrix}2\\3\\4\\5\\6\\2\end{bmatrix}, \begin{bmatrix}-5\\123\\-4\\12\\-3\\1\end{bmatrix}}$, and let $P : ℝ^{\color{red}6}\toℝ^{\color{red}6}$ be the orthogonal projection onto $W$. The following do not require very much computation. <ol> <li>What is the rank of $P$? How do you know?</li> <strong>My thought: the rank will be 4. As a vector in $ℝ^6$ will be projected onto a 4-dimensional subspace, 4 linearly independent columns are needed.</strong> <li></p> <p>What is the dimension of the $1$-eigenspace of $P$? How do you know?</li> <strong>My thought: the dimension will be 4, too. As $P$ will transform a vector in $ℝ^6$ into the 4-dimensional subspace, the eigenvector should also be on the subspace first so that some linear combinations of the 4 vectors that form $W$ can transform the input vector into the vector itself. Therefore the vector can be represented by W and the dim of the eigenspace is 4.</strong> <li></p> <p>What is the dimension of the null space of $P$? How do you know?</li> <strong>My thought: the dim of null space will be 2. As the dimension is 6 and the rank is 4, the dim of null space will therefore be 2. 
However, I got stuck here as I tried to imagine some form of $Ax=b$ that transforms a vector $x$ into another vector $b$: what can $A$ be so that the $x$ in $ℝ^6$ is transformed to a vector $b$ that is the projection of $x$ onto $W$?</strong> <li></p> <p>Explain why $P$ must be similar to a diagonal matrix, and find a diagonal matrix it is similar to (note: you are <i>not</i> being asked to find an invertible matrix $Q$ so that $P = QDQ^{-1}$).</p> <p><strong>I have no clues for this one at all; could anyone give me the solution or a hint to this one as well?</strong> </ol></p> <p>Thanks a lot! ^_^ </p>
user
<p><strong>HINT</strong> </p> <p>As you noted,</p> <p>if $v \in W \implies Pv=1\cdot v$</p> <p>if $u \in W^{\perp} \implies Pu=0\cdot u$</p> <p>Since $\mathbb{R}^6 = W \oplus W^{\perp}$, concatenating a basis of $W$ with a basis of $W^{\perp}$ gives a basis of eigenvectors, so $P$ is diagonalizable. Thus, since $\dim(W)=4$ and $\dim(W^{\perp})=2$, we have that $P$ is similar to $diag(1,1,1,1,0,0)$.</p>
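A numerical check with the basis from the question (using numpy; this only illustrates the hint, it is not needed for the argument):

```python
import numpy as np

# columns of B span W (the four basis vectors from the question)
B = np.array([[1, 5, -1, 6, 787, 0],
              [-34, 4, 4, 4, 4, 8],
              [2, 3, 4, 5, 6, 2],
              [-5, 123, -4, 12, -3, 1]], dtype=float).T

# standard formula for the orthogonal projection onto the column space of B
P = B @ np.linalg.inv(B.T @ B) @ B.T

assert np.linalg.matrix_rank(P) == 4                # rank 4
assert np.allclose(P @ P, P, atol=1e-6)             # idempotent
eigvals = np.sort(np.linalg.eigvalsh(P))
assert np.allclose(eigvals, [0, 0, 1, 1, 1, 1], atol=1e-6)  # diag(1,1,1,1,0,0)
```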
<p>I was working through the proof of the open mapping theorem in Walter Rudin's <em>Real and Complex Analysis</em> and got stuck at one point. Suppose $X$ and $Y$ are Banach spaces and $T$ is a bounded linear operator between them which is $\textbf{onto}$. We want to prove $$T(U) \supset \delta V$$ where $U$ is the open unit ball in $X$ and $\delta V = \{ y \in Y : \|y\| &lt; \delta\}$.</p> <p>Proof- For any $y \in Y$, since the map is onto, there exists an $x \in X$ such that $Tx = y$. It is also clear that if $\|x\| &lt; k$, then $y \in T(kU)$ for any $k$. Clearly $$Y = \underset{k \in \mathbb{N}}{\cup} T(kU) $$ But as $Y$ is complete, by the Baire category theorem it can't be written as a countable union of nowhere dense sets. So there exists at least one $k$ such that $T(kU)$ is not nowhere dense. This means $$(\overline{T(kU)})^0 \ne \emptyset$$ i.e. the closure of $T(kU)$ has non-empty interior. Let $W$ be an open set contained in the closure of $T(kU)$. Now every $w \in W$ satisfies $w \in \overline{T(kU)}$, so every point of $W$ is the limit of a sequence $\{Tx_i\}$ with $x_i \in kU$. Let us now fix $W$ and $k$.</p> <p>Now choose $y_0 \in W$ and $\eta &gt; 0$ so that $y_0+y \in W$ whenever $\|y\| &lt; \eta$. This can be done as $W$ is an open set, so every point of it has a neighborhood contained in $W$. Now as $y_0 , y_0+y \in W$, by the above paragraph there exist sequences $\{x_i'\}$ and $\{x_i''\}$ in $kU$ such that $$T(x_i') \to y_0 \qquad T(x_i'') \to y_0+y \quad \text{as } i \to \infty$$ Set $x_i = x_i''-x_i'$. Then clearly $$\|x_i\| \leq \|x_i'\| + \|x_i''\| &lt; 2k$$ and $T(x_i) \to y$. This holds for every $y$ with $\|y\|&lt; \eta$. 
</p> <p>Now it is written that the linearity of $T$ shows that the following is true for $\delta = \dfrac{\eta}{2k}$: </p> <p>To each $y \in Y$ and to each $\epsilon &gt; 0$ there corresponds an $x \in X$ such that $$\|x\| \leq \delta^{-1}\|y\| \quad \text{and} \quad \|Tx-y\| &lt; \epsilon \quad (1)$$ How does this follow?</p> <p>This proof is given in Walter Rudin, 3rd edition, on page 112.</p>
Jack Tiger Lam
<p>$\displaystyle \large \sec{\theta} + \tan{\theta} \equiv \frac{(\cos{\frac{\theta}{2}} + \sin{\frac{\theta}{2}})^2}{\cos^2{\frac{\theta}{2}} - \sin^2{\frac{\theta}{2}}} \equiv \frac{1+\tan{\frac{\theta}{2}}}{1-\tan{\frac{\theta}{2}}} \equiv \tan{\left( \frac{\theta}{2} + \frac{\pi}{4} \right)}$</p> <p>The rest is trivial.</p>
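A quick floating-point check of this chain of identities (plain Python, arbitrary sample angles in $(-\pi/2,\pi/2)$):

```python
import math

def lhs(t):  # sec(t) + tan(t)
    return 1 / math.cos(t) + math.tan(t)

def rhs(t):  # tan(t/2 + pi/4)
    return math.tan(t / 2 + math.pi / 4)

for t in (-1.0, -0.3, 0.2, 0.7, 1.2):
    assert abs(lhs(t) - rhs(t)) < 1e-10
```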
<p>My questions are motivated by the following exercise:</p> <blockquote> <p>Consider the eigenvalue problem $$ \int_{-\infty}^{+\infty}e^{-|x|-|y|}u(y)dy=\lambda u(x), x\in{\Bbb R}.\tag{*} $$ Show that the spectrum consists purely of eigenvalues. </p> </blockquote> <p>Let $A:L^2({\Bbb R})\to L^2({\Bbb R})$ be a linear operator such that $$(Au)(x)=\int_{\Bbb R}k(x,y)u(y)dy$$ where $k(x,y)=e^{-|x|-|y|}$. Then $A$ is self-adjoint, since $\overline{k(x,y)}=k(y,x)$. Thus $\sigma(A)\subset{\Bbb R}$.</p> <blockquote> <p>My first <strong>question</strong>: is $A$ invertible?</p> </blockquote> <p>$A$ is a Hilbert-Schmidt operator since $k\in L^2(\Bbb{R}^2)$ and thus $A$ is compact. Then the answer should be NO since $L^2({\Bbb R})$ is an infinite-dimensional Hilbert space. It follows that $\lambda=0$ must be an eigenvalue of $A$ according to the conclusion in the exercise. But $\operatorname{ker}(A)=\{0\}$ which implies that $\lambda=0$ is not an eigenvalue of $A$. </p> <blockquote> <p>My second <strong>question</strong>: what mistake do I make above?</p> </blockquote>
Álvaro Lozano-Robledo
<p>The statement is false, even if you restrict yourself to completely multiplicative arithmetic functions. </p> <p>Let $f(n): \mathbb{N} \to \mathbb{Q}$ be a function defined by $f(n)=\frac{1}{2^e}$, where $$n=p_1^{e_1}p_2^{e_2}\cdots p_r^{e_r}$$ is a factorization of $n$ into prime powers (i.e., each $p_i$ is a prime, $p_i\neq p_j$ unless $i=j$, and $e_i\geq 1$), and put $e=\sum_{i=1}^r e_i$. In other words, $f(p^e)=\frac{1}{2^e}$ and we extend this to a multiplicative function on $\mathbb{N}$.</p> <p>Then </p> <ul> <li><p>$f(n)$ is completely multiplicative,</p></li> <li><p>$f(p^e)=\frac{1}{2^e}\to 0$ as $e\to \infty$, and</p></li> <li><p>$f(n)=\frac{1}{2}$ infinitely often (for each $n$ prime, and there are infinitely many of those!).</p></li> </ul> <p>Hence $\lim_{n\to \infty} f(n)\neq 0$. In fact the limit does not exist.</p>
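The counterexample is easy to tabulate; the function below computes $f(n)=1/2^{e}$ by trial-division factor counting (illustrative code, not from the answer):

```python
from fractions import Fraction

def f(n):
    """f(n) = 1 / 2**e, where e counts prime factors of n with multiplicity."""
    e, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            e += 1
        d += 1
    if n > 1:
        e += 1
    return Fraction(1, 2**e)

assert f(12) == Fraction(1, 8)                 # 12 = 2^2 * 3, so e = 3
assert all(f(p) == Fraction(1, 2) for p in (2, 3, 5, 7, 101))  # f = 1/2 at primes
assert f(2**10) == Fraction(1, 1024)           # f(p^e) -> 0 as e -> infinity
```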
<p>I am trying to solve a differential equation: <span class="math-container">$$\frac{d f}{d\theta} = \frac{1}{c}(\text{max}(\sin\theta, 0) - f^4)~,$$</span> subject to a periodic boundary condition, which would imply <span class="math-container">$f(0)=f(2\pi)$</span> and <span class="math-container">$f'(0)= f'(2\pi)$</span>. To solve this numerically, I have set up an equation: <span class="math-container">$$f_i = f_{i-1}+\frac{1}{c}(\theta_i-\theta_{i-1})\left(\max(\sin\theta_i,0)-f_{i-1}^4\right)~.$$</span> Now, I want to solve this for specific grids. Suppose I set up my grid points in <span class="math-container">$\theta = (0, 2\pi)$</span> to be <span class="math-container">$n$</span> equally spaced floats. Then I have a small Python program which calculates <span class="math-container">$f$</span> for each grid point in <span class="math-container">$\theta$</span>. Here is the program:</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt

n = 100
m = 500
a = np.linspace(0.01, 2*np.pi, n)
b = np.linspace(0.01, 2*np.pi, m)
arr = np.sin(a)
arr1 = np.sin(b)
index = np.where(arr&lt;0)
index1 = np.where(arr1&lt;0)
arr[index] = 0
arr1[index1] = 0
epsilon = 0.03
final_arr_l = np.ones(arr1.size)
final_arr_n = np.ones(arr.size)
for j in range(1, 100*arr.size):
    i = j % arr.size
    step = final_arr_n[i-1] + 1./epsilon*2*np.pi/n*(arr[i] - final_arr_n[i-1]**4)
    if step &gt;= 0:
        final_arr_n[i] = step
    else:
        final_arr_n[i] = 0.2*final_arr_n[i-1]
for j in range(1, 100*arr1.size):
    i = j % arr1.size
    final_arr_l[i] = final_arr_l[i-1] + 1./epsilon*2*np.pi/m*(arr1[i] - final_arr_l[i-1]**4)
plt.plot(b, final_arr_l)
plt.plot(a, final_arr_n)
plt.grid()
plt.show()
</code></pre> <p>My major problem is for small <span class="math-container">$c$</span>: in the above case, when <span class="math-container">$c=0.03$</span>, the numerical solution does not converge to a reasonable value (it is highly oscillatory) if I choose <span class="math-container">$N$</span> to be not so large. 
The main reason for that is that since <span class="math-container">$\frac{1}{c}(\theta_i-\theta_{i-1})&gt;1$</span>, <span class="math-container">$f$</span> tends to be driven to negative infinity when <span class="math-container">$N$</span> is not so large, i.e. when <span class="math-container">$\theta_i-\theta_{i-1}$</span> is not so small. Here is an example with <span class="math-container">$c=0.03$</span> showing the behaviour for <span class="math-container">$N=100$</span> versus <span class="math-container">$N=500$</span>. In my code, I have applied an ad hoc criterion for small <span class="math-container">$N$</span> to avoid divergences:</p> <pre><code>step = final_arr_n[i-1] + 1./epsilon*2*np.pi/n*(max(np.sin(a[i]), 0) - final_arr_n[i-1]**4)
if step &gt;= 0:
    final_arr_n[i] = step
else:
    final_arr_n[i] = 0.2*final_arr_n[i-1]
</code></pre> <p><strong>What I would like to know: is there any good mathematical trick to solve this numerical equation with not-so-large <span class="math-container">$N$</span> and still make it converge for small <span class="math-container">$c$</span>?</strong></p> <p><a href="https://i.stack.imgur.com/sKtE9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sKtE9.png" alt="enter image description here"></a></p>
jlandercy
<p>If you are not told to do it all by yourself, I would suggest you use the powerful <code>scipy</code> package (especially the <a href="https://docs.scipy.org/doc/scipy-1.2.1/reference/tutorial/integrate.html" rel="nofollow noreferrer"><code>integrate</code></a> subpackage), which exposes many useful objects and methods to <a href="https://docs.scipy.org/doc/scipy/reference/integrate.html#solving-initial-value-problems-for-ode-systems" rel="nofollow noreferrer">solve ODEs</a>.</p> <pre><code>import numpy as np
from scipy import integrate
import matplotlib.pyplot as plt
</code></pre> <p>First define your model:</p> <pre><code>def model(t, y, c=0.03):
    return (np.max([np.sin(t), 0]) - y**4)/c
</code></pre> <p>Then choose and instantiate the ODE solver of your choice (here I have chosen the <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.BDF.html#scipy.integrate.BDF" rel="nofollow noreferrer">BDF</a> solver):</p> <pre><code>t0 = 0
tmax = 10
y0 = np.array([0.35])  # You should compute the boundary condition more rigorously
ode = integrate.BDF(model, t0, y0, tmax)
</code></pre> <p>The <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html#scipy.integrate.solve_ivp" rel="nofollow noreferrer">new API</a> of the ODE solvers allows the user to control the integration step by step:</p> <pre><code>t, y = [], []
while ode.status == 'running':
    ode.step()  # Perform one integration step
    # You can perform all desired checks here...
    # The object contains all information about the step performed and the current state!
    t.append(ode.t)
    y.append(ode.y)
ode.status  # finished
</code></pre> <p>Notice the <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.odeint.html#scipy.integrate.odeint" rel="nofollow noreferrer">old API</a> is still present, but gives less control over the integration process:</p> <pre><code>t2 = np.linspace(0, tmax, 100)
sol = integrate.odeint(model, y0, t2, tfirst=True)
</code></pre> <p>It now requires the switch <code>tfirst</code> set to true because <code>scipy</code> swapped the variable positions in the model signature when creating the new API.</p> <p>Both results agree and seem to converge for the given setup:</p> <pre><code>fig, axe = plt.subplots()
axe.plot(t, y, label="BDF")
axe.plot(t2, sol, '+', label="odeint")
axe.set_title(r"ODE: $\frac{d f}{d\theta} = \frac{1}{c}(\max(\sin\theta, 0) - f^4)$")
axe.set_xlabel("$t$")
axe.set_ylabel("$y(t)$")
axe.set_ylim([0, 1.2])
axe.legend()
axe.grid()
</code></pre> <p><a href="https://i.stack.imgur.com/sIgut.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sIgut.png" alt="enter image description here"></a></p> <p>Solving an ODE numerically is about choosing a suitable integration method (stable, convergent) and setting up its parameters well.</p> <p>I have observed that <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.RK45.html#scipy.integrate.RK45" rel="nofollow noreferrer">RK45</a> also performs well for this problem and requires fewer steps than BDF for your setup. It is up to you to choose the solver which suits you best.</p>
Lutz Lehmann
<p>How to solve this (perhaps a little more complicated than necessary) with the tools of python <code>scipy.integrate</code> I demonstrated in <a href="https://math.stackexchange.com/q/3185707/115115">How to numerically set up to solve this differential equation?</a></p> <hr> <p>If you want to stay with the simplicity of a one-stage method, expand the step as <span class="math-container">$f(t+s)=f(t)+h(s)$</span> where <span class="math-container">$t$</span> is constant and <span class="math-container">$s$</span> the variable, so that <span class="math-container">$$ εh'(s)=εf'(t+s)=g(t+s)-f(t)^4-4f(t)^3h(s)-6f(t)^2h(s)^2-... $$</span> The factor linear in <span class="math-container">$h$</span> can be moved to and integrated into the left side by an exponential integrating factor. The remaining terms are quadratic or of higher degree in <span class="math-container">$h(Δt)\simΔt$</span> and thus do not influence the order of the resulting exponential-Euler method. <span class="math-container">\begin{align} ε\left(e^{4f(t)^3s/ε}h(s)\right)'&amp;=e^{4f(t)^3s/ε}\left(g(t+s)-f(t)^4-6f(t)^2h(s)^2-...\right) \\ \implies h(Δt)&amp;\approx h(0)e^{-4f(t)^3Δt/ε}+\frac{1-e^{-4f(t)^3Δt/ε}}{4f(t)^3}\left(g(t)-f(t)^4\right) \\ \implies f(t+Δt)&amp;\approx f(t)+\frac{1-e^{-4f(t)^3Δt/ε}}{4f(t)^3}\left(g(t)-f(t)^4\right) \end{align}</span></p> <p>This can be implemented as</p> <pre><code>eps = 0.03

def step(t, f, dt):
    # exponential Euler step
    g = max(0, np.sin(t))
    f3 = 4*f**3
    ef = np.exp(-f3*dt/eps)
    return f + (1-ef)/f3*(g - f**4)

# plot the equilibrium curve f(t)**4 = max(0, sin(t))
x = np.linspace(0, np.pi, 150)
plt.plot(x, np.sin(x)**0.25, c="lightgray", lw=5)
plt.plot(2*np.pi - x, 0*x, c="lightgray", lw=5)

for N in [500, 100, 50]:
    a0, a1 = 0, eps/2
    t = np.linspace(0, 2*np.pi, N+1)
    dt = t[1] - t[0]
    while abs(a0 - a1) &gt; 1e-6:
        # Aitken delta-squared method to accelerate the fixed-point iteration
        f = a0 = a1
        for n in range(N): f = step(t[n], f, dt)
        a1 = f
        if abs(a1 - a0) &lt; 1e-12: break
        for n in range(N): f = step(t[n], f, dt)
        a2 = f
        a1 = a0 - (a1 - a0)**2/(a2 + a0 - 2*a1)
    # produce the function table for the numerical solution
    f = np.zeros_like(t)
    f[0] = a1
    for n in range(N): f[n+1] = step(t[n], f[n], dt)
    plt.plot(t, f, "-o", lw=2, ms=2 + 200.0/N, label="N=%4d" % N)
plt.grid(); plt.legend(); plt.show()
</code></pre> <p>and gives the plot</p> <p><a href="https://i.stack.imgur.com/JRPQI.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JRPQI.png" alt="enter image description here"></a></p> <p>showing stability even for <span class="math-container">$N=50$</span>. The errors for smaller <span class="math-container">$N$</span> look more chaotic due to the higher non-linearity of the method.</p>
<p>Simplify $$\frac{3x}{x+2} - \frac{4x}{2-x} - \frac{2x-1}{x^2-4}$$</p> <ol> <li><p>First I expanded $x²-4$ into $(x+2)(x-2)$. There are 3 denominators. </p></li> <li><p>So I multiplied the numerators into: $$\frac{3x(x+2)(2-x)}{(x+2)(x-2)(2-x)} - \frac{4x(x+2)(x-2)}{(x+2)(x-2)(2-x)} - \frac{(2x-1)(2-x)}{(x+2)(x-2)(2-x)} $$</p></li> </ol> <p>I then tried 2 different approaches:</p> <ol> <li>Calculated it without eliminating the denominator into: $$\frac{-6x²-5x+2}{(x+2)(x-2)(2-x)}$$</li> <li>Calculated it by multiplying it out to: $$\frac{-6x+2x²+2}{(x+2)(x-2)(2-x)}$$</li> </ol> <p>I can't seem to simplify them further and so they seem incorrect. Something I missed? Help! </p>
Count Iblis
155,436
<p>$$\int_{0}^{1}x^{\frac{5}{2}}\sqrt{1+x}dx$$</p> <p>Substitute $x = t^2$:</p> <p>$$2\int_{0}^{1}t^6\sqrt{1+t^2}dt$$</p> <p>Substitute $t = \sinh(u)$, so that $\sqrt{1+t^2}\,dt = \cosh^2(u)\,du = \left(1+\sinh^2(u)\right)du$:</p> <p>$$2\int_{0}^{\operatorname{arcsinh}(1)}\left[\sinh^8(u)+\sinh^6(u)\right]du$$</p>
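As a numerical sanity check (my addition, not part of the original computation), a midpoint Riemann sum confirms that each substitution preserves the value of the integral; note that $\cosh^2(u)=1+\sinh^2(u)$, so the integrand after the hyperbolic substitution is $\sinh^8(u)+\sinh^6(u)$:

```python
import math

def midpoint(f, a, b, n=50000):
    # midpoint Riemann sum approximation of the integral of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

I0 = midpoint(lambda x: x**2.5 * math.sqrt(1 + x), 0.0, 1.0)
I1 = 2 * midpoint(lambda t: t**6 * math.sqrt(1 + t**2), 0.0, 1.0)
I2 = 2 * midpoint(lambda u: math.sinh(u)**8 + math.sinh(u)**6,
                  0.0, math.asinh(1.0))

print(I0, I1, I2)  # all three agree to many decimal places
```

All three values coincide up to the discretization error of the quadrature, confirming the chain of substitutions.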
3,190,594
<p>From Rick Durrett's book <em>Probability: Theory and Examples</em>:</p> <blockquote> <p>We define the conditional expectation of <span class="math-container">$X$</span> given <span class="math-container">$\mathcal{G}$</span>, <span class="math-container">$E(X | \mathcal{G})$</span> to be any random variable <span class="math-container">$Y$</span> that has</p> <p>(1) <span class="math-container">$Y \in \mathcal{G}, \text { i.e., is } \mathcal{G} \text { measurable }$</span></p> <p>(2) <span class="math-container">$\text {for all } A \in \mathcal{G}, \int_{A} X d P=\int_{A} Y d P$</span></p> </blockquote> <p>And in other materials I found:</p> <blockquote> <p>Let <span class="math-container">$(\Omega, \mathscr{F}, P)$</span> be a probability space and let <span class="math-container">$\mathscr{G}$</span> be a σ−algebra contained in <span class="math-container">$\mathscr{F}$</span>. For any real random variable <span class="math-container">$X \in L^{1}(\Omega, \mathscr{F}, P)$</span>, <span class="math-container">$\operatorname{define} E(X | \mathscr{G})$</span> to be the unique random variable <span class="math-container">$Z \in L^{1}(\Omega, \mathscr{G}, P)$</span> such that for every bounded <span class="math-container">$\mathscr{G}-\text { measurable }$</span> random variable <span class="math-container">$Y$</span>, <span class="math-container">$$E(X Y)=E(Z Y)$$</span></p> </blockquote>
Davide Giraudo
9,849
<p>The difference between the two definitions is that in the first one, we need to do the test that <span class="math-container">$\mathbb E\left[XY\right]=\mathbb E\left[ZY\right]$</span> only when <span class="math-container">$Y$</span> has the form <span class="math-container">$\mathbf 1_A$</span> for all <span class="math-container">$A\in\mathcal G$</span> whereas in the second definition, this should be done for all the bounded <span class="math-container">$\mathcal G$</span>-measurable functions. </p> <p>All we need is the following fact:</p> <blockquote> <p>Let <span class="math-container">$X$</span> be an integrable random variable on a probability space <span class="math-container">$\left(\Omega,\mathcal F,\mathbb P\right)$</span> and let <span class="math-container">$\mathcal G$</span> be a sub-<span class="math-container">$\sigma$</span>-algebra of <span class="math-container">$\mathcal F$</span>. Assume that for all <span class="math-container">$A\in\mathcal G$</span>, the equality <span class="math-container">$\mathbb E\left[X\mathbf 1_A\right]=0$</span>. Then for each <span class="math-container">$\mathcal G$</span>-measurable bounded function <span class="math-container">$Y$</span>, <span class="math-container">$\mathbb E\left[XY\right]=0$</span>.</p> </blockquote> <p>We can use the fact that a bounded <span class="math-container">$\mathcal G$</span>-measurable function can be approximated in the uniform norm by a linear combination of indicator functions.</p>
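The approximation step can be made explicit (a standard argument sketched here for completeness; the truncation at level <span class="math-container">$n$</span> is my own notation). For bounded <span class="math-container">$\mathcal G$</span>-measurable <span class="math-container">$Y$</span> with <span class="math-container">$|Y|\le M$</span>, set

```latex
Y_n=\sum_{k=-\lceil nM\rceil}^{\lceil nM\rceil}\frac{k}{n}\,
     \mathbf 1_{\{k/n\le Y<(k+1)/n\}},
\qquad \|Y-Y_n\|_\infty\le\frac1n,
\qquad\text{so}\qquad
\bigl|\mathbb E[XY]\bigr|
  =\bigl|\mathbb E[X(Y-Y_n)]\bigr|
  \le\frac1n\,\mathbb E|X|\xrightarrow[n\to\infty]{}0.
```

Here each <span class="math-container">$Y_n$</span> is a finite linear combination of indicators of sets in <span class="math-container">$\mathcal G$</span>, so <span class="math-container">$\mathbb E[XY_n]=0$</span> by linearity and the hypothesis.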
2,827,970
<p>I'm aware this result (and the standard/obvious proof) is considered basic and while I've accepted and used it numerous times in the past, I'm starting to question its validity, or rather that said proof doesn't subtly require a form of the AC. (Disclaimer: it's been some time since I've looked at set theory.)</p> <p>I found the same question answered here using the same method I'd used in the past. (<a href="https://math.stackexchange.com/questions/305597/why-are-infinite-cardinals-limit-ordinals">Why are infinite cardinals limit ordinals?</a>) My posting here is more concerned with the apparent assumptions required to use this argument.</p> <p>The (apparent) issue here is that it presupposes that $\omega$ is the least infinite ordinal, i.e. a subset of any infinite ordinal $\alpha$. Now a countable choice argument could easily prove this, but this shouldn't be necessary, one would think.</p> <p>I've considered this: a simple (natural) induction argument showing that every natural number is an element of any infinite ordinal. This would show $\omega\subseteq \alpha$, or that $\omega$ is the least infinite ordinal. Does this seem right?</p> <p>Thank you </p> <p>To clarify: I'm trying to show that $\omega$ is the least infinite ordinal. Evidently some define it this way, (there is clearly some least infinite ordinal) but the definition of $\omega$ I'm working with involves defining inductive classes and taking their intersection, which is then axiomatized as a set. (These two definitions should be equivalent)</p>
hmakholm left over Monica
14,366
<p>Hopefully you already know that $\omega$ (defined as the intersection of all inductive sets) is an ordinal in the first place.</p> <p>Hopefully you also know that the ordinals are totally ordered by $\in$ -- that is, for ordinals $\alpha,\beta$ we always have either $\alpha\in\beta$ or $\alpha=\beta$ or $\beta\in\alpha$.</p> <p>What is an <strong>infinite</strong> ordinal? Usually "finite" is <strong>defined</strong> to mean, "equinumerous with an element of $\omega$". So if $\alpha$ is an ordinal that is <em>not</em> finite, then in particular it is not <em>itself</em> an element of $\omega$. Therefore we have either $\omega=\alpha$ or $\omega\in\alpha$. In either case we have $\omega\subseteq\alpha$.</p>
2,540,007
<p>I have the following question. Find the matrix representation of the transormation $T:\mathbb{R}^3\to\mathbb{R}^3$ that rotates any vector by $\theta=\frac{\pi}{6}$ along the vector $v=(1,1,1)$.</p> <p>A hint is given to find the rotation matrix about the $z-axis$ by $\frac{\pi}{6}$which is $$ \begin{bmatrix} \frac{\sqrt{3}}{2} &amp;\frac{-1}{2} &amp; 0 \\ \frac{1}{2} &amp; \frac{\sqrt{3}}{2} &amp; 0 \\ 0 &amp; 0 &amp;1 \end{bmatrix} $$ and then find an orthogonal basis for $\mathbb{R}^3$ that has $v$ as one of its vector then finally rewrite the above matrix using the new basis and that should be my answer.</p> <p>My questions are </p> <ol> <li>Do I pick two random vectors and join them to $v$ and use the Gram-Schmidt process to get an orthogonal basis.</li> <li>Why does that work?</li> </ol>
Gribouillis
398,505
<p>To understand why it works, let's use an analogy: imagine the linear transformation $T$ as a database that contains answers to a set of questions: a question is any vector $u$, the answer to question $u$ is the vector $T \left(u\right)$.</p> <p>Questions need to be asked in a certain language, as concrete vectors need to be given by their coordinates in a certain basis. That's where matrices intervene. Matrices are machines that take a question in some input language and return the answer in some output language, often, but not necessarily the same language.</p> <p>Suppose that we have an old machine $E$ that takes questions in english and returns answers in english. We can use it to create a new machine that takes question in chinese and returns answers in chinese, it works in three steps:</p> <ol> <li>Translate the question from chinese to english.</li> <li>Get the answer in english by using the $E$ machine.</li> <li>Translate the answer from english to chinese.</li> </ol> <p>In terms of vectors and matrices it becomes</p> <ol> <li>Transform the new basis coordinates of vector $u$ to the old basis.</li> <li>Compute $T \left(u\right)$ by using the matrix in the old basis.</li> <li>Transform the old basis coordinates of $T \left(u\right)$ in the new basis.</li> </ol> <p>The matrix that translate the question from chinese to english is called the transition matrix. The matrix that translates the answer from english to chinese is simply the inverse of the transition matrix.</p> <p>The columns of the transition matrix are the coordinates of the new basis vectors in the old basis. 
The columns of the inverse of the transition matrix are the coordinates of the old basis vectors in the new basis.</p> <p>In your case, the new basis is the standard basis of $\mathbb{R}^{3}$, and the old basis is an orthonormal basis in which the matrix of $T$ is the matrix you wrote above.</p> <p>Looking at the vectors given in @LeeMosher's answer, the old basis can be</p> <p>$$\left(\frac{w}{\left\|w\right\|} , \frac{u}{\left\|u\right\|} , \frac{v}{\left\|v\right\|}\right)$$</p> <p>The ordering matters because if you change the orientation, the rotation will have angle $-\pi/6$ instead of $\pi/6$.</p>
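Here is a small pure-Python sketch of the whole recipe (the particular basis vectors and helper functions are my own choices, not from the answer): build an orthonormal basis whose third vector points along $v=(1,1,1)$, write the $z$-axis rotation matrix in that basis, and conjugate back to the standard basis.

```python
import math

def normalize(u):
    n = math.sqrt(sum(c * c for c in u))
    return [c / n for c in u]

def matmul(A, B):
    # product of two 3x3 matrices given as lists of rows
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Orthonormal basis (w1, w2, w3) with w3 along v = (1, 1, 1);
# w1 and w2 are orthogonal to v and to each other (checked by dot products),
# ordered so the basis is right-handed: w1 x w2 = w3.
w1 = normalize([1, -1, 0])
w2 = normalize([1, 1, -2])
w3 = normalize([1, 1, 1])

# Transition matrix P: its columns are the old (rotation-adapted) basis vectors
P = [[w1[i], w2[i], w3[i]] for i in range(3)]
Pt = [[P[j][i] for j in range(3)] for i in range(3)]  # inverse = transpose

th = math.pi / 6
Rz = [[math.cos(th), -math.sin(th), 0],
      [math.sin(th),  math.cos(th), 0],
      [0, 0, 1]]

R = matmul(matmul(P, Rz), Pt)  # rotation about v, in the standard basis
```

Because the basis is orthonormal, the inverse of the transition matrix is simply its transpose; swapping `w1` and `w2` would flip the orientation and yield the rotation by $-\pi/6$ instead.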
1,597,638
<p>I noticed while studying integration that $\int \sqrt{1-x^2} \mathrm dx $ has a relatively simple antiderivative found by doing a trigonometric substitution. </p> <p>On the other hand, $\int \sqrt{1-x^3} \mathrm dx $ can only be expressed with elliptic integrals (according to WA). The same thing occured when $a=4, 5, 6, 7...$ etc. I was wondering if there exists a proof that $\sqrt{1-x^a} $ cannot be integrated without hypergeometric or elliptic integrals when $a&gt;2$.</p>
Sanwar
290,140
<p>You can write, $443642_{b_1} = 4*b_1^5+4*b_1^4+3*b_1^3+6*b_1^2+4*b_1+2$ and $53818_{b_2} = 5*b_2^4+3*b_2^3+8*b_2^2+b_2+8$. Now set the two expressions equal. You may need another restriction on $b_1$ and $b_2$.</p>
1,597,638
<p>I noticed while studying integration that $\int \sqrt{1-x^2} \mathrm dx $ has a relatively simple antiderivative found by doing a trigonometric substitution. </p> <p>On the other hand, $\int \sqrt{1-x^3} \mathrm dx $ can only be expressed with elliptic integrals (according to WA). The same thing occured when $a=4, 5, 6, 7...$ etc. I was wondering if there exists a proof that $\sqrt{1-x^a} $ cannot be integrated without hypergeometric or elliptic integrals when $a&gt;2$.</p>
peter.petrov
116,591
<p>An obvious observation here is that $b_1 \ge 7$ and $b_2 \ge 9$ (because the first number contains the digit 6, and the second one contains the digit 8). </p> <p>One possible solution is $b_1 = 7$ and $b_2 = 11$. I wrote a small program to find this. </p> <p>As far as I know there's no general method for solving such problems (manually, without a computer). You just have to make some observations (most probably number theoretic ones), and come up with some ingenious findings which lead you to the solution. There may be some general method though. I am just not aware of one. </p>
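A brute-force search of the kind mentioned (my own sketch; the answer's actual program is not shown) only needs a function that evaluates a digit string in a given base, plus the lower bounds on the bases noted above:

```python
def value(digits, b):
    # evaluate a digit list, most significant digit first, in base b
    v = 0
    for d in digits:
        v = v * b + d
    return v

d1 = [4, 4, 3, 6, 4, 2]   # 443642 in base b1, so b1 >= 7
d2 = [5, 3, 8, 1, 8]      # 53818 in base b2, so b2 >= 9

solutions = [(b1, b2)
             for b1 in range(7, 50)
             for b2 in range(9, 50)
             if value(d1, b1) == value(d2, b2)]
print(solutions)  # contains (7, 11): both sides equal 78185
```

The search range 50 is an arbitrary cutoff of mine; the pair $(7, 11)$ appears because $443642_7 = 53818_{11} = 78185$.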
1,597,638
<p>I noticed while studying integration that $\int \sqrt{1-x^2} \mathrm dx $ has a relatively simple antiderivative found by doing a trigonometric substitution. </p> <p>On the other hand, $\int \sqrt{1-x^3} \mathrm dx $ can only be expressed with elliptic integrals (according to WA). The same thing occured when $a=4, 5, 6, 7...$ etc. I was wondering if there exists a proof that $\sqrt{1-x^a} $ cannot be integrated without hypergeometric or elliptic integrals when $a&gt;2$.</p>
Piquito
219,998
<p>Actually your problem can be in general very hard to solve because, in your particular case, one has the following diophantine equation of degree 5 with two unknowns:</p> <p>$$4x^5+4x^4+3x^3+6x^2+4x+2=5y^4+3y^3+8y^2+y+8$$</p> <p>whose solution is $$(x,y)=(b_1,b_2)=(7,11)$$ (<em>in both sides you have the number $78 185$</em>).</p>
1,597,638
<p>I noticed while studying integration that $\int \sqrt{1-x^2} \mathrm dx $ has a relatively simple antiderivative found by doing a trigonometric substitution. </p> <p>On the other hand, $\int \sqrt{1-x^3} \mathrm dx $ can only be expressed with elliptic integrals (according to WA). The same thing occured when $a=4, 5, 6, 7...$ etc. I was wondering if there exists a proof that $\sqrt{1-x^a} $ cannot be integrated without hypergeometric or elliptic integrals when $a&gt;2$.</p>
j215c228
283,345
<p>If you know that there is an integer solution, why not just use the rational zeros theorem to find the zeros of both polynomials on both sides of the equation? Your list would be fairly short and you could quickly find an answer.</p>
1,791,673
<p>I was wondering about this, just now, because I was trying to write something like:<br> $880$ is not greater than $950$. <br> I am wondering this because there is a 'not equal to': $\not=$ <br> Not equal to is an accepted mathematical symbol - so would this be acceptable: $\not&gt;$? <br> I was searching around but I couldn't find any qualified sites that would point me in that direction.</p> <p><br> So, I would like to know if there are symbols for, not greater, less than, less than or equal to, greater than or equal to x. </p> <p>Thanks for your help and time! </p>
mvw
86,776
<p>I would probably use $880 \le 950$, as order is defined for integers.</p>
1,791,673
<p>I was wondering about this, just now, because I was trying to write something like:<br> $880$ is not greater than $950$. <br> I am wondering this because there is a 'not equal to': $\not=$ <br> Not equal to is an accepted mathematical symbol - so would this be acceptable: $\not&gt;$? <br> I was searching around but I couldn't find any qualified sites that would point me in that direction.</p> <p><br> So, I would like to know if there are symbols for, not greater, less than, less than or equal to, greater than or equal to x. </p> <p>Thanks for your help and time! </p>
GEdgar
442
<p>To answer the question, yes. $$ a \nless b\\ a \ngtr b\\ a \nleq b\qquad a \nleqq b\qquad a \nleqslant b\\ a \ngeq b\qquad a \ngeqq b\qquad a \ngeqslant b $$ and so on for many other mathematical relations $$ a \nleftarrow b\\ a \nLeftarrow b\\ A \nsupseteqq B\\ A \nvdash \phi\qquad A \nVdash \phi\\ \nexists x $$</p>
177,515
<p>From <a href="http://mitpress.mit.edu/algorithms/" rel="nofollow">Cormen et all</a>:</p> <blockquote> <p>The elements of a matrix or vectors are numbers from a number system, such as the real numbers , the complex numbers , or integers modulo a prime .</p> </blockquote> <p>What do they mean by <strong>integers modulo a prime</strong> ? I thought real numbers and complex numbers together make up all the elements of a matrix . Why did they put this additional one ?</p>
Community
-1
<blockquote> <p>Why did they put this additional one ?</p> </blockquote> <p>They did not "put" an additional one. These are all <em>examples</em> of different number systems: that is, a set of numbers along with the operations you can do on them.</p> <blockquote> <p>I thought real numbers and complex numbers together make up all the elements of a matrix.</p> </blockquote> <p>No, we can have more than just real matrices and complex matrices. Check Andrea's answer.</p> <p>Also, you may want to check the Wikipedia pages for <a href="http://en.wikipedia.org/wiki/Real_number" rel="nofollow">real numbers</a>, <a href="http://en.wikipedia.org/wiki/Complex_number" rel="nofollow">complex numbers</a>, <a href="http://en.wikipedia.org/wiki/Integer" rel="nofollow">integers</a>, <a href="http://en.wikipedia.org/wiki/Rational_number" rel="nofollow">rational numbers</a> and <a href="http://en.wikipedia.org/wiki/Multiplicative_group_of_integers_modulo_n" rel="nofollow">integers modulo a prime</a>.</p> <blockquote> <p>What do they mean by integers modulo a prime ?</p> </blockquote> <p>The integers modulo a prime are often denoted $\mathbb{Z}/p \Bbb{Z}$ where $p$ is a prime. They form a <a href="http://en.wikipedia.org/wiki/Finite_field" rel="nofollow">finite field</a>, but this won't be helpful for you now. <em>For now,</em> you can think of them as the numbers in the range $\{0, 1, 2, \ldots, p-1 \},$ with all the arithmetic ($+,-,\times,\div$) performed modulo the prime.</p> <p>They have many applications, e.g. in cryptography and coding theory. That's why we often need to consider matrices with such entries. On the other hand, engineering and physics deal with real &amp; complex numbers, so we need matrices over the reals and complex numbers. Different domains, different applications.</p> <p>If you want to know more, you can read the Wikipage on <a href="http://en.wikipedia.org/wiki/Modular_arithmetic" rel="nofollow">modular arithmetic</a>.</p>
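As a concrete illustration (a sketch of mine, not from the answer): arithmetic modulo the prime $p=7$. Division works because every nonzero element has an inverse, computed here via Fermat's little theorem, $b^{-1} \equiv b^{p-2} \pmod p$:

```python
p = 7  # a prime modulus

add = lambda a, b: (a + b) % p
sub = lambda a, b: (a - b) % p
mul = lambda a, b: (a * b) % p

def div(a, b):
    # requires b != 0 mod p; the inverse exists because p is prime
    # (Fermat's little theorem: b**(p-2) is the inverse of b mod p)
    return (a * pow(b, p - 2, p)) % p

print(add(5, 4))  # 2, since 9 = 7 + 2
print(div(3, 5))  # 2, since 5 * 2 = 10 ≡ 3 (mod 7)
```

Matrices over this number system use exactly these four operations entrywise, in place of real or complex arithmetic.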
3,115,090
<p>This was supposedly an easy limit, and it is suspiciously similar to a Riemann sum, but I can't quite figure out for what function. </p> <p><span class="math-container">$$\lim_{n\to\infty}{\frac{1}{n} {\sum_{k=3}^{n}{\frac{3}{k^2-k-2}}}}$$</span></p> <p>Well, even the fact that <span class="math-container">$\frac{3}{k^2-k-2} = \frac{1}{k-1}-\frac{1}{k+2}$</span> doesn't seem to simplify the problem. I thought this would be a telescoping sum, but it's clearly not. </p> <p>Is that a Riemann sum at all? </p>
Yanko
426,577
<p><span class="math-container">$$\sum_{k=3}^n \frac{3}{k^2-k-2} = \sum_{k=3}^n \frac{1}{k-2}- \sum_{k=3}^n \frac{1}{k+1}$$</span></p> <p>You (and I) were mistaken before, see @Romeo 's answer.</p> <p>Notice that <span class="math-container">$$\sum_{k=3}^n \frac{1}{k-2}=\sum_{k=0}^{n-3} \frac{1}{k+1}$$</span></p> <p>Insert above you get <span class="math-container">$$\sum_{k=3}^n \frac{3}{k^2-k-2} = \sum_{k=0}^{n-3} \frac{1}{k+1} - \sum_{k=3}^n \frac{1}{k+1} = \sum_{k=0}^2 \frac{1}{k+1} - \sum_{k=n-2}^{n} \frac{1}{k+1}= $$</span><span class="math-container">$$=1+\frac{1}{2}+\frac{1}{3} - \frac{1}{n-1}-\frac{1}{n}-\frac{1}{n+1}$$</span></p> <p>Of course this argument requires <span class="math-container">$n\geq 3$</span>.</p>
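The telescoping identity can be verified exactly with rational arithmetic (a quick sketch of mine, not part of the answer):

```python
from fractions import Fraction

def partial_sum(n):
    # sum_{k=3}^{n} 3 / (k^2 - k - 2), computed exactly
    return sum(Fraction(3, k * k - k - 2) for k in range(3, n + 1))

def closed_form(n):
    # 1 + 1/2 + 1/3 - 1/(n-1) - 1/n - 1/(n+1)
    return (Fraction(11, 6)
            - Fraction(1, n - 1) - Fraction(1, n) - Fraction(1, n + 1))

for n in range(3, 40):
    assert partial_sum(n) == closed_form(n)
```

Since the partial sums stay bounded (they increase toward $11/6$), dividing by $n$ shows the original limit is $0$.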
1,988,731
<p>Can one of you math this for me?</p> <p>You've got a deck of 88 cards. Let's call the first card "1", the next "2", and so on.</p> <p>What are the odds of pulling "1", "2", and "3" out of the deck, in that order?</p>
Blake M
383,537
<p>I'm new to this website, so take my answer with a grain of salt. </p> <p>It should be 1/88 * 1/87 * 1/86 = 1/(88*87*86) = 1/658416, or about 1.52*10^-6. </p> <p>My thought is that exactly one card of the 88 is the desired "1". With that card drawn, the deck has 87 remaining, and again only one card is the "2". The same logic follows for the "3". </p> <p>Because a probability is the number of favorable outcomes divided by the number of possible outcomes, and the probabilities of the successive draws are multiplied to determine the overall probability, I used the equation above. </p> <p>A simple example: what are the chances I draw one particular card from a deck of four unique cards? 1/4 or 0.25 or 25%. </p>
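The multiplication rule can be double-checked by exhaustive enumeration on a smaller deck, where listing every ordered draw is cheap (deck size 8 here is my arbitrary choice, not from the answer):

```python
from itertools import permutations
from fractions import Fraction

n = 8  # a small deck so enumeration is feasible
draws = list(permutations(range(1, n + 1), 3))  # all ordered 3-card draws
hits = sum(1 for d in draws if d == (1, 2, 3))  # draws that are exactly 1,2,3

prob = Fraction(hits, len(draws))
assert prob == Fraction(1, n * (n - 1) * (n - 2))  # matches 1/(n(n-1)(n-2))
# for the 88-card deck the same formula gives 1/(88*87*86) = 1/658416
```

Exactly one of the $n(n-1)(n-2)$ ordered draws is $(1,2,3)$, which is the counting argument behind the answer.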
4,016,133
<p>I am recently exploring set-theoretical postulates that contradict GCH. One particularly interesting one is the proposition &quot;<span class="math-container">$2^\kappa$</span> is singular for each infinite cardinal <span class="math-container">$\kappa$</span>&quot;. However I could not prove that this proposition is at all consistent with ZFC.</p> <p>It seems that a slightly weakened form &quot;<span class="math-container">$2^\kappa$</span> is singular for each infinite <em>regular</em> cardinal <span class="math-container">$\kappa$</span>&quot; is indeed consistent with ZFC, since the function <span class="math-container">$\kappa\mapsto\aleph_{\kappa^+}$</span> satisfies the conditions of Easton's theorem and there therefore exists a model where <span class="math-container">$2^\kappa=\aleph_{\kappa^+}$</span> for each regular <span class="math-container">$\kappa$</span>. Can this result also be extended to all singular cardinals?</p>
Jason Zesheng Chen
599,883
<p>Lemma 3.4 in the following <a href="https://arxiv.org/abs/1502.07470" rel="nofollow noreferrer">paper</a> by Golshani &amp; Hayut implies that this is consistent relative to the existence of a strong cardinal.</p> <p><em>Golshani, Mohammad; Hayut, Yair</em>, <a href="http://dx.doi.org/10.1017/jsl.2016.21" rel="nofollow noreferrer"><strong>On Foreman’s maximality principle</strong></a>, J. Symb. Log. 81, No. 4, 1344-1356 (2016). <a href="https://zbmath.org/?q=an:1387.03037" rel="nofollow noreferrer">ZBL1387.03037</a>.</p>
100,526
<p>It is known that for a Poisson process the inter-arrival time is exponentially distributed. My question, which may be nonsense, is this. Suppose you want to experimentally evaluate the distribution of inter-arrival time (not necessarily of a Poisson process). You measure the differences in the arrival time of consecutive customers, for example. But the problem is that the inter-arrival time depends on the (absolute) time the proceeding customer arrived. If it arrived earlier, the current inter-arrival would be different. So, does it make sense to measure inter-arrival times and build the distribution with those samples?</p>
Bill Cook
16,423
<p>Suppose we have a Fourier expansion for an eigenfunction $u(x)=a_0+\sum\limits_{k=1}^\infty \left(a_k\sin(kx)+b_k\cos(kx)\right)$. Then $\int_0^{2\pi} u(t)\cos(t)\,dt=b_1\pi$</p> <p>So in order to have $A[u] = \lambda u$ we would need $b_1\pi\sin(x)=\lambda u(x)$. So $\lambda u(x)$ has nothing but $\sin(x)$ appearing in its expansion. In particular, there is no $\cos(x)$ term and thus $b_1=0$. Therefore, we must have $\lambda u(x)=0$ so $\lambda=0$ if $u(x)\not=0$. So (among functions with a Fourier expansion), $0$ is the only possible eigenvalue (with any non-zero function having a Fourier expansion with $b_1=0$ as eigenfunctions).</p> <p><b>Edit:</b> Well, there you go. I feel very silly. <a href="http://en.wikipedia.org/wiki/Fourier_series#Convergence" rel="nofollow">Wikipedia says</a>:</p> <blockquote> <p>Theorem. If $f \in L^2([−\pi, \pi])$, then the Fourier series converges to $f$ in $L^2([−\pi, \pi])$.</p> </blockquote> <p>So I guess every such $u \in L^2([0,2\pi])$ has a convergent Fourier series. Thus my proof works for all functions in question.</p> <p>I really don't know anything about Fourier series! :P</p>
100,526
<p>It is known that for a Poisson process the inter-arrival time is exponentially distributed. My question, which may be nonsense, is this. Suppose you want to experimentally evaluate the distribution of inter-arrival time (not necessarily of a Poisson process). You measure the differences in the arrival time of consecutive customers, for example. But the problem is that the inter-arrival time depends on the (absolute) time the proceeding customer arrived. If it arrived earlier, the current inter-arrival would be different. So, does it make sense to measure inter-arrival times and build the distribution with those samples?</p>
paul garrett
12,291
<p>It may be worth adding that some of the funny behavior of this operator (explicated in Bill Cook's answer) is due to the fact that, in $L^2[0,1]$, say, the integral computes the <em>projection</em> to the one-dimensional space of scalar multiples of $\cos(y)$. Then the coefficient is used to multiply the function $\sin(x)$. So this operator is a rank-one operator of an explicit sort.</p> <p>Edit: Indeed, as Yemon Choi notes, since $\sin$ and $\cos$ are orthogonal, "of course" the square of the operator is $0$: the operator projects everything to multiples of $\sin$, then "rotates by 90 degrees" to $\cos$, so repeating the projection immediately gives $0$.</p>
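The rank-one behavior, and in particular $A^2=0$, can be seen numerically (my own discretization of the operator $A[u](x)=\sin(x)\int_0^{2\pi}u(t)\cos(t)\,dt$ with a midpoint Riemann sum; the operator itself is the one described in the answers above):

```python
import math

N = 20000
ts = [2 * math.pi * (i + 0.5) / N for i in range(N)]  # midpoint grid
dt = 2 * math.pi / N

def A(u):
    # A[u](x) = sin(x) * integral_0^{2pi} u(t) cos(t) dt, via Riemann sum
    c = sum(u(t) * math.cos(t) for t in ts) * dt
    return lambda x: math.sin(x) * c

u = lambda x: 1 + 2 * math.sin(x) + 3 * math.cos(x)  # here b_1 = 3
Au = A(u)     # projects onto the cos component, then rotates it to sin
AAu = A(Au)   # applying A again gives (numerically) the zero function

print(Au(1.0), 3 * math.pi * math.sin(1.0))  # close: Au = 3*pi*sin
print(AAu(1.0))                              # ~0, since sin and cos are orthogonal
```

The second application vanishes because $A[u]$ is a multiple of $\sin$, whose projection onto $\cos$ over $[0,2\pi]$ is zero, exactly the "rotate by 90 degrees, then project again" picture.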
25,414
<p>I'm running into some problems with generating a persistent HSQLDB, and during some troubleshooting I came upon the following behavior.</p> <pre><code>Needs["DatabaseLink`"]
tc = OpenSQLConnection[
  JDBC["hsqldb", ToFileName[Directory[], "temp"]], Username -&gt; "sa"]
CloseSQLConnection[tc]
</code></pre> <p>The above code generates three files (which will be located in <code>Directory[]</code>). Despite the fact that the connection is closed, two of the files (temp.lck and temp.log) cannot be deleted until the Mathematica kernel has been shut down. Is this 'normal' behavior? </p>
Albert Retey
169
<p><code>"DatabaseLink`"</code> connections in Mathematica are done via <strong>Java</strong>, and it is the <strong>Java</strong> virtual machine which actually holds the file locks. To get rid of those locks you can e.g. use:</p> <pre><code>Needs["JLink`"]; UninstallJava[]; </code></pre> <p>After that you should be able to delete the files (you can do that from Mathematica, if so desired, with <code>DeleteFile</code>). There might be ways to remove those locks without closing the <strong>JVM</strong>, with appropriate <strong>Java</strong> calls to the underlying <strong>HSQL</strong> libraries, but that I don't know (and have no time to explore). </p> <p>I think it is an oversight (=bug) in the implementation of the <strong>HSQL</strong> part of <code>"DatabaseLink`"</code> that a <code>CloseSQLConnection</code> doesn't remove these file locks, so you might want to contact WRI and see what they say...</p>
4,463
<p>It seems that most authors use the phrase "elementary number theory" to mean "number theory that doesn't use complex variable techniques in proofs." </p> <p>I have two closely related questions.</p> <ol> <li>Is my understanding of the usage of "elementary" correct?</li> <li>It appears that advanced techniques from other areas (e.g. algebra) are allowed, just not complex variables. Are there historical reasons for why complex analysis singled out as a tool to avoid? </li> </ol> <p>NB: I'm asking about how "elementary" usually <strong>is</strong> defined and why, not how it <strong>should be</strong> defined.</p>
las3rjock
19
<p>Wikipedia has a definition of <a href="http://en.wikipedia.org/wiki/Number_theory#Elementary_number_theory" rel="nofollow">elementary number theory</a>, but I don't know how well accepted it is.</p>
4,463
<p>It seems that most authors use the phrase "elementary number theory" to mean "number theory that doesn't use complex variable techniques in proofs." </p> <p>I have two closely related questions.</p> <ol> <li>Is my understanding of the usage of "elementary" correct?</li> <li>It appears that advanced techniques from other areas (e.g. algebra) are allowed, just not complex variables. Are there historical reasons for why complex analysis singled out as a tool to avoid? </li> </ol> <p>NB: I'm asking about how "elementary" usually <strong>is</strong> defined and why, not how it <strong>should be</strong> defined.</p>
engelbrekt
3,304
<p>Elementary number theory is better defined by its focus of interest than by its methods of proof. For this reason, I rather like to think of it as <em>classical number theory.</em> It deals with integers, rationals, congruences and Diophantine equations within a framework recognizable to eighteenth-century number theorists. Algebraic number theory does not qualify because of its level of abstraction, even though algebraic numbers were sometimes applied to particular problems in number theory before the nineteenth century. Analytic number theory is not only distinguished by the use of complex and harmonic analysis (for many problems these are by no means indispensable), but even more by the modern emphasis on counting the number of solutions to number theoretical problems approximately. In the eighteenth century they also liked to count the number of solutions when they could, but they wanted exact answers, which severely limited the range of counting problems that they could solve. It is true that Dirichlet brought analysis into number theory, but he also counted the number of divisors of integers approximately by averaging, and Gauss before him had done the same for class numbers and genera of binary quadratic forms, as one can see from remarks in article 301 in the Disquisitiones Arithmeticae (but he never published his proofs). And Legendre in 1808 published an approximation to the counting function $\pi(x)$ of the primes, which was the start of the line of development that led to the Prime Number Theorem (Gauss had also found an approximation, but this was published only in 1863 in his collected works). The systematic acceptance of approximate answers in number theory really is a nineteenth-century development. It is obvious from his interests and techniques that Euler could have found many such results if he had wanted. 
In 1838 when Dirichlet wrote his first paper on the approximate average number of divisors, the cupboard was so bare that he could only cite the remarks of Gauss in article 301 and Stirling's and allied formulas (!) as prior work to motivate his own.</p> <p>There is a field that was absolutely central to number theory from its earliest days but for which elementary tools no longer suffice by themselves - Diophantine analysis. Actually, this transition of Diophantine analysis from elementary to non-elementary status began in the nineteenth century with the use of algebraic number theory, and gathered force in the twentieth century. But of course, there is also a huge amount of Diophantine analysis by classical techniques from the nineteenth and twentieth centuries.</p>
194,421
<p>This is homework. The problem was also stated this way: </p> <p>Let A be a dense subset of $\mathbb{R}$ and let x$\in\mathbb{R}$. Prove that there exists a decreasing sequence $(a_k)$ in A that converges to x.</p> <p>I know:</p> <p>A dense in $\mathbb{R}$ $\Rightarrow$ every point in $\mathbb{R}$ is either in A or a limit point of A.</p> <p>If x is a limit point of A, then there is a sequence in A that converges to x. </p> <p>What if $x\in A$?</p> <p>Also, how can I know if the sequence is increasing or decreasing?</p>
Ross Millikan
1,827
<p>Hint: You have to construct your sequence so it is decreasing. Let $a_0$ be a point in $A$ greater than $x$. How do you know there is one? Then let $a_1$ be a point in $A$ in $(x,a_0)$. How do you know there is one? Then keep going. You also have to make sure the intervals shrink to zero length.</p>
194,421
<p>This is homework. The problem was also stated this way: </p> <p>Let A be a dense subset of $\mathbb{R}$ and let x$\in\mathbb{R}$. Prove that there exists a decreasing sequence $(a_k)$ in A that converges to x.</p> <p>I know:</p> <p>A dense in $\mathbb{R}$ $\Rightarrow$ every point in $\mathbb{R}$ is either in A or a limit point of A.</p> <p>If x is a limit point of A, then there is a sequence in A that converges to x. </p> <p>What if $x\in A$?</p> <p>Also, how can I know if the sequence is increasing or decreasing?</p>
William
13,579
<p>Recall that $A \subset \mathbb{R}$ is dense if and only if every nonempty open subset of $\mathbb{R}$ contains an element of $A$. </p> <p>Let $x \in \mathbb{R}$. We will define a decreasing sequence in $A$ converging to $x$ as follows: Consider the open interval $(x + 2^{-1}, x + 2(2^{-1}))$. Since $A$ is dense, $A$ intersects this interval. Choose $a_1 \in A \cap (x + 2^{-1}, x + 2(2^{-1}))$. By recursion, suppose $a_1, ..., a_n$ have been chosen. Consider the interval $(x + 2^{-(n+1)}, x + 2(2^{-(n+1)}))$. Again since $A$ is dense, there exists an $a_{n + 1} \in A \cap (x + 2^{-(n+1)}, x + 2(2^{-(n+1)}))$; note that $a_{n+1} &lt; x + 2^{-n} &lt; a_n$. </p> <p>The sequence $(a_k)$ constructed in this way is decreasing and converges to $x$. </p>
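For the concrete dense set $A=\mathbb{Q}$, the construction can be imitated in code (a sketch of mine; `pick` is a hypothetical helper that returns some rational strictly inside an interval, playing the role of the density assumption):

```python
import math
from fractions import Fraction

x = Fraction(1, 3)  # the target point; A = Q, the rationals, is dense in R

def pick(lo, hi):
    # return a rational strictly inside (lo, hi);
    # terminates once 1/q < hi - lo, because then floor(lo*q)+1 over q fits
    q = 1
    while True:
        n = math.floor(lo * q) + 1  # least integer n with n/q > lo
        if Fraction(n, q) < hi:
            return Fraction(n, q)
        q *= 2

# choose a_k in (x + 2^{-k}, x + 2^{-(k-1)}): strictly decreasing, -> x
seq = [pick(x + Fraction(1, 2**k), x + Fraction(1, 2**(k - 1)))
       for k in range(1, 15)]
```

Each term lies below the previous interval's lower endpoint, so the sequence decreases, and the interval widths $2^{-k}$ force convergence to $x$.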
239,688
<p>Suppose I have a complete and cocomplete category $\mathscr{C}$ with two sets of maps $I,J$ that are the candidates for generating (trivial) cofibrations on a model structure on $\mathscr{C}$. The only property I'm left to check in order to apply the recognition principle for cofibrantly generated model categories is that pushouts of maps in $J$ are weak equivalences.</p> <p>In this framework, does the cube lemma hold already? </p> <p>More precisely, if I have two spans $B_i \leftarrow A_i \rightarrow C_i$ for $i \in \{1,2\}$, where all the objects are cofibrant and $A_i \rightarrow B_i$ is a cofibration $\forall \ i \in \{1,2\}$, plus I'm given a pointwise weak equivalence of spans, is the induced map $$B_1 \coprod_{A_1} C_1 \to B_2 \coprod_{A_2} C_2$$ a weak equivalence as well?</p> <p>Thanks in advance for any hint/reply.</p>
Karol Szumiło
12,547
<p>This is not an answer in a full generality, but it is certainly not the case if the domains of generating acyclic cofibrations are cofibrant. (I have a hard time thinking of an example of a model category where this is not true, but there are probably some.)</p> <p>Assuming that the "Cube Lemma" holds (it's more often called the Gluing Lemma), take $A_2 \to B_2$ to be any generating acyclic cofibration and let $A_2 \to C_2$ be a morphism to a cofibrant object. Set $A_1 = B_1 = A_2$ and $C_1 = C_2$ and fill the cube with obvious maps. Then the Gluing Lemma says that the pushout of $A_2 \to B_2$ along $A_2 \to C_2$ is a weak equivalence.</p>
239,688
David White
11,540
<p>If I understand your situation correctly, the answer is yes, the cube lemma holds, but you cannot use that to get a full model structure. Your situation often arises when trying to transfer a model structure from a cofibrantly generated model category $M$, along an adjunction $F:M\leftrightarrows N: U$, to a bicomplete category $N$, e.g. to a category $N$ of algebras over a monad. Check out Lemma 2.3 in Schwede-Shipley <em>Algebras and Modules in monoidal model categories</em>. For them also, the only difficulty is proving that the pushout is a trivial cofibration in $N$ (defined via $J$-cell) is a weak equivalence. The good news is that, even without this condition, you can probably prove whatever you want. Specifically, the structure on $N$ may be that of a <em>semi-model structure</em>. Have a look at Definition 2.3 in <a href="http://hss.ulb.uni-bonn.de/2001/0241/0241.pdf" rel="nofollow">Spitzweck's thesis</a>. Note that on page 14 he remarks that the cube lemma holds in this setting. Indeed, Karol's answer is spot on: in a semi-model category, pushouts of spans $A\gets B \to C$, where $B \to C$ is a trivial cofibration and $A$ is cofibrant, yield trivial cofibrations.</p> <p>Even if your $N$ is not arising via a transfer along an adjunction, it may still be a semi-model category as defined by Benoit Fresse in 12.2.1 of <a href="http://club.pdmi.ras.ru/~topology/books/fresse.pdf" rel="nofollow">this book</a>. Again, the cube lemma holds. In general, any statement about model categories holds for semi-model categories if you restrict to the subcategory of cofibrant objects, or cofibrantly replace everything in sight. Since the cube lemma is already about cofibrant objects, there is no problem.</p> <p>Unfortunately, it does NOT follow that every semi-model category is a model category. An example is the category of non-reduced symmetric operads in $M = Ch(\mathbb{F}_2)$. 
As a category of algebras over a $\Sigma$-cofibrant colored operad $P$, it is a semi-model category, by Spitzweck's Theorem or by Fresse 12.2.A applied to the transfer from the category of collections $\prod_{n \in \mathbb{N}} M^{\Sigma_n}$. However, it is not a full model structure: the pushout of $P(0)\to P(K)$, where $K$ is an acyclic chain complex $C$ in level zero and 0 otherwise, along the map $P(0)\to Com$ to the terminal non-reduced symmetric operad, is not a weak equivalence, because it introduces a summand of $(C\otimes C) / \Sigma_2$.</p> <p>My advice: stick with the semi-model structure and use it to prove whatever you need. This was the approach taken by Spitzweck, by Goerss-Hopkins in their obstruction theory paper, by Fresse, by Hovey in his paper <em>Monoidal Model Categories</em>, and in many of my papers. A semi-model structure is good enough.</p>
1,202,661
<p>Let's consider the sum $$\sum_{i=4t+2} {\binom{m}{i}}.$$ </p> <p>It's equivalent to $\sum_{s}{\binom{m}{4s+2}}$, but I got stuck here.</p> <p>How does one evaluate such sums? For instance, it's not so hard to calculate $$\sum_{n}{\binom{m}{2n}}$$ (because we know how to write down the generating function which keeps $a_{2n}$ and resets $a_{2n+1}$ to $0$, given $\sum_{n \ge 0}{a_{n}z^{n}}$), but I got into trouble with more complicated problems such as the one presented above. </p> <p>Any help would be appreciated.</p>
Jack D'Aurizio
44,121
<p>Consider that, by the discrete Fourier transform:</p> <p>$$ 4\cdot\mathbb{1}_{n\equiv 0\pmod{4}} = 1^n+(-1)^n+i^n+(-i)^n \tag{1}$$ hence:</p> <p>$$ \mathbb{1}_{n\equiv 2\pmod{4}} = \frac{1}{4}\left(1^n + (-1)^n - (-i)^n - i^n\right)\tag{2} $$ and: $$\begin{eqnarray*} \sum_{n\equiv 2\pmod{4}}\binom{m}{n} &amp;=&amp; \frac{1}{4}\sum_{n=0}^{m}\binom{m}{n}\left(1^n+(-1)^n-i^n-(-i)^n\right)\\ &amp;=&amp; \frac{1}{4}\left(2^m-(1+i)^m-(1-i)^m\right).\tag{3}\end{eqnarray*}$$</p>
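The closed form in $(3)$ is easy to sanity-check numerically; a quick script (mine, not from the answer) comparing it against the direct sum:

```python
from math import comb

def direct(m):
    # sum of C(m, n) over n ≡ 2 (mod 4), computed term by term
    return sum(comb(m, n) for n in range(2, m + 1, 4))

def closed(m):
    # (1/4) * (2^m - (1+i)^m - (1-i)^m); the imaginary parts cancel
    z = 2**m - (1 + 1j)**m - (1 - 1j)**m
    return round(z.real / 4)

for m in range(2, 40):
    assert direct(m) == closed(m)
print(direct(10), closed(10))   # → 256 256
```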
1,202,661
hyperkahler
188,593
<p>The following solution is perhaps the simplest one.</p> <p>Let $a$ be a primitive complex fourth root of unity, so $a^{4}=1$ (e.g. $a=i$). The generating function of the binomial coefficients is $A(s)=(1+s)^{m}=\sum_{i}\binom{m}{i}s^{i}$. Averaging over the fourth roots of unity kills every coefficient whose index is not a multiple of $4$: $$B(s)=A(s)+A(as)+A(a^{2}s)+A(a^{3}s)=4\sum_{4\mid i}\binom{m}{i}s^{i},$$ so $$\sum_{4\mid i}\binom{m}{i}=\frac{B(1)}{4}=\frac{1}{4}\left(2^{m}+(1+a)^{m}+(1+a^{2})^{m}+(1+a^{3})^{m}\right).$$ Since we know how to evaluate the sum over $i=4q$, the same device (with the weights shifted accordingly) handles the sum over $i=4q+2$.</p>
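To see the factor of $4$ concretely, here is a quick check of mine (with $a=i$) that averaging the generating function over the fourth roots of unity at $s=1$ really picks out the multiples of $4$:

```python
from math import comb

def sum_mult4(m):
    # direct sum of C(m, i) over i ≡ 0 (mod 4)
    return sum(comb(m, i) for i in range(0, m + 1, 4))

def via_roots(m):
    a = 1j                                       # primitive fourth root of unity
    z = sum((1 + a**k)**m for k in range(4))     # this is B(1)
    return round(z.real / 4)

for m in range(1, 40):
    assert sum_mult4(m) == via_roots(m)
```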
744,377
<p>I just don't understand how to complete $\epsilon - N$ proofs. I don't know what my goal is or why they prove what they do. I have asked two questions on here in the past, but I simply don't 'get it'.</p> <p>So first we fix $\epsilon \gt 0$ and we want to find $N \in \mathbb{N}$ such that for all $n \geq N$ we have $|a_n - a| \lt \epsilon$. We then reduce $|a_n - a|$ to simplest form and move everything apart from $n$ to the other side of the inequality, and we let the floor of all this ($Eq$) equal $N$.</p> <p>Back to $n \geq N = \lfloor{Eq}\rfloor$: then we have $|a_n - a| \leq |a_{N} - a| \lt \epsilon$.</p> <p>With some ending statement, e.g.: "If $n \geq N = Eq$, then $|a_n - a| \lt \epsilon$", this meaning $\lim \limits_{n \to \infty} a_n = a$.</p> <p>Is this what is meant to be done? Am I just trying to prove $|a_n - a| \lt \epsilon$ by subbing in a created $N$ in place of $n$, where created $N$ is some $k \epsilon$ where $k$ is just some divisor or multiple?</p> <p>What is the simplest way to think of this problem?</p>
Tom Collinge
98,230
<p>What is being considered is whether an infinite sequence of terms $(a_1, a_2, a_3, \dots, a_n, \dots)$ converges to some value $a$. If it does, then you can say that in the limit as $n$ tends to infinity $a_n$ tends to $a$, or in mathematical notation, $\lim \limits_{n \to \infty} a_n = a$.</p> <p>So what does "convergence" mean? It means that as you move further along in the sequence the terms get closer and closer to $a$. </p> <p>How do you express mathematically that the terms are getting closer and closer to $a$? You consider by how much they differ from $a$, i.e. the value of $|a_n - a|$, and you would like to see this value getting smaller.</p> <p>How do you express mathematically that you are moving further along the sequence? You look at increasing values of $n$.</p> <p>So to express that as you move further along in the sequence the terms get closer and closer to $a$, you say that given some value of $\epsilon &gt; 0$ (which though it is not stated is assumed to be as small as we like) there exists $N$ (and again it isn't stated but it is assumed that $N$ can be as large as necessary) such that for all $n &gt; N$ we have $|a_n - a| &lt; \epsilon$.</p> <p>To apply the concept directly to a sequence, you normally start from an equation giving the $n^{th}$ term and use it to calculate a value of $N$ that works for some given value of $\epsilon$. For example, take the sequence $(1, 1/2, 1/3, \dots, 1/n, \dots)$, i.e. $a_n = 1/n$, which you would think converges to zero. To prove that it does, consider that given $\epsilon &gt;0$ take $N &gt; 1/\epsilon$ and then for any $n&gt;N$ we have $a_n&lt; \epsilon$, which completes the proof.</p>
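The closing example ($a_n = 1/n$, limit $0$) can be played out in code: given $\epsilon$, produce an $N$, then confirm the defining condition. This is only illustrative (we can check finitely many $n$, not all of them), and `find_N` is my name for the recipe:

```python
import math

def find_N(eps):
    # any integer N > 1/eps works for the sequence a_n = 1/n
    return math.floor(1 / eps) + 1

for eps in (0.5, 0.25, 1e-3, 1e-6):
    N = find_N(eps)
    # sample indices past N and check |a_n - 0| < eps
    assert all(abs(1 / n - 0) < eps for n in range(N + 1, N + 1000))
```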
4,154,025
<p>In a set of lecture notes, I have the following result:</p> <blockquote> <p><strong>Theorem</strong>. Let <span class="math-container">$X_n$</span> be random variables on <span class="math-container">$(\Omega, \mathcal{F}, \mathbb{P})$</span> with values in a Polish metric space <span class="math-container">$S$</span>. Suppose <span class="math-container">$X = (X_n)_{n \geq 1}$</span> is a stationary sequence. Then <span class="math-container">$X$</span> is ergodic if and only if for any bounded Borel measurable function <span class="math-container">$g: S^p \to \mathbb{R}$</span> with <span class="math-container">$p \geq 1$</span> an arbitrary integer, <span class="math-container">$$\dfrac{1}{n}\sum_{m=0}^{n-1}g(X_{m+1}, \dots, X_{m+p}) \overset{a.s.}{\to} \mathbb{E}[g(X_1, \dots, X_p)]\text{.}$$</span></p> </blockquote> <p>Note that <span class="math-container">$\overset{a.s.}{\to}$</span> denotes almost sure convergence as <span class="math-container">$n \to \infty$</span>.</p> <p>I have been trying to find this result in the 20-30 measure-theoretic probability books I have to no avail, as well as <em>An Introduction to Ergodic Theory</em> by Walters. Does anyone know of a textbook where I can find this result? 
I would strongly prefer a reference with a proof, but would be willing to take those without as well.</p> <p><strong>Edit</strong>: Adding definitions as requested.</p> <p>Given <span class="math-container">$X$</span> above, it is ergodic if for any invariant set <span class="math-container">$A \in \mathcal{F}$</span>, <span class="math-container">$\mathbb{P}(A) \in \{0, 1\}$</span>.</p> <p>By &quot;invariant set,&quot; we say a set <span class="math-container">$A \in \mathcal{F}$</span> is invariant with respect to <span class="math-container">$X$</span> if for some <span class="math-container">$B \in \mathcal{B}(\mathbb{R}^{\infty})$</span> (<span class="math-container">$\mathcal{B}(\mathbb{R}^{\infty})$</span> denoting the Borel <span class="math-container">$\sigma$</span>-algebra generated by <span class="math-container">$\mathbb{R}^{\infty}$</span>), <span class="math-container">$A = \{(X_n, X_{n+1}, X_{n+2}, \dots)\} \in B$</span> for all <span class="math-container">$n \geq 1$</span>.</p> <p>[I suspect that <span class="math-container">$S^{\infty}$</span> should be used in place of <span class="math-container">$\mathbb{R}^{\infty}$</span> in the above definitions and that <span class="math-container">$\in$</span> should be <span class="math-container">$\subset$</span>, but that's how they are presented in the lecture notes.]</p> <p><strong>Edit 2</strong>: I found this claim in some other sources, though not in great detail. It would be nice to find a textbook.</p> <ul> <li>Last sentence of <a href="http://www.columbia.edu/%7Eks20/6712-14/6712-14-Notes-Ergodic.pdf" rel="nofollow noreferrer">http://www.columbia.edu/~ks20/6712-14/6712-14-Notes-Ergodic.pdf</a></li> <li><a href="https://onlinelibrary.wiley.com/doi/pdf/10.1002/9780470670057.app1" rel="nofollow noreferrer">Appendix A</a> of <em>GARCH Models: Structure, Statistical Inference and Financial Applications</em> uses the theorem above as the definition of an ergodic stationary process. 
This passage cites Billingsley (1995), which I assume is <em>Probability and Measure</em> - but I know that this theorem is not in there.</li> </ul>
Clarinetist
81,560
<p>This result is provided, without proof, as Theorem 5.6(e) of <em>A First Course in Stochastic Processes</em>, 2nd ed., by Karlin and Taylor (1975).</p> <p>I don't have this book, but <em>Stationary and Related Stochastic Processes: Sample Function Properties and Their Applications</em> by Cramer and Leadbetter (2004) might have some information there. I will edit this post if I find out whether or not this result is mentioned in there.</p>
192,394
<p>I'm re-reading some material from Apostol's Calculus. He asks to prove that, if $f$ is such that, for any $x,y\in[a,b]$ we have</p> <p>$$|f(x)-f(y)|\leq|x-y|$$</p> <p>then:</p> <p>$(i)$ $f$ is continuous in $[a,b]$</p> <p>$(ii)$ For any $c$ in the interval,</p> <p>$$\left|\int_a^b f(x)dx-(b-a)f(c)\right|\leq\frac{(b-a)^2}{2}$$</p> <p>The proof for the first part is easy, and I omit it. I'm interested in the second one.</p> <p>We can write that as</p> <p>$$\left| {\int_a^b f (x)dx - \int_a^b f (c)dx} \right| \leqslant \frac{{{{(b - a)}^2}}}{2}$$</p> <p>Or $$\left| {\int_a^b {\left( {f(x) - f(c)} \right)dx} } \right| \leqslant \frac{{{{(b - a)}^2}}}{2}$$</p> <p>Now, it is not hard to show that</p> <p>$$\left| {\int_a^b {\left( {f(x) - f(c)} \right)dx} } \right| \leqslant \int_a^b {\left| {f(x) - f(c)} \right|dx} $$</p> <p>By hypothesis, we have</p> <p>$$\left| {f(x) - f(c)} \right| \leqslant \left| {x - c} \right|$$</p> <p>so that</p> <p>$$\left| {\int_a^b {\left( {f(x) - f(c)} \right)dx} } \right| \leqslant \int_a^b {\left| {f(x) - f(c)} \right|dx} \leqslant \int\limits_a^b {\left| {x - c} \right|dx} $$</p> <p>The last term integrates as follows:</p> <p>$$\int\limits_a^b {\left| {x - c} \right|dx} = - \int\limits_a^c {\left( {x - c} \right)dx} + \int\limits_c^b {\left( {x - c} \right)dx} = \frac{{{{\left( {b - c} \right)}^2} + {{\left( {a - c} \right)}^2}}}{2}$$</p> <p>How can I reconcile that with $$\frac{{{{\left( {b - a} \right)}^2}}}{2}?$$</p> <p>I'd like to know what happens in the general case</p> <p>$$|f(x)-f(y)|\leq \lambda |x-y|$$ too.</p>
user29999
29,999
<p>Consider $f(c) = \dfrac{(b-c)^2+ (a-c)^2}{2}$. Then $f'(c) = (c-a)+(c-b)$. Hence, $f$ decreases on $\left[a, \dfrac{a+b}{2}\right]$ and increases on $\left[\dfrac{a+b}{2}, b\right]$. Hence $\dfrac{(b-a)^2}{2}=f(a)\ge f(c)$ for $a\le c\le \dfrac{a+b}{2}$, and $f(c) \le f(b)=\dfrac{(b-a)^2}{2}$ for $\dfrac{a+b}{2}\le c \le b$. Namely, $f(c) \le \dfrac{(b-a)^2}{2}$ for all $c\in[a,b]$.</p>
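The endpoint-maximum claim is also easy to spot-check numerically (my snippet, with arbitrarily chosen $a$ and $b$):

```python
# spot-check that ((b-c)^2 + (a-c)^2)/2 <= (b-a)^2/2 for c in [a, b]
a, b = -1.3, 2.7
bound = (b - a)**2 / 2
for k in range(1001):
    c = a + (b - a) * k / 1000          # sample c across [a, b]
    f = ((b - c)**2 + (a - c)**2) / 2
    assert f <= bound + 1e-12
```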
192,394
copper.hat
27,978
<p>I like pictures...</p> <p><img src="https://i.stack.imgur.com/GIjJf.png" alt="enter image description here"></p> <p>$\frac{(c-a)^2}{2} + \frac{(b-c)^2}{2} = m(T_1)+m(T_2) \leq \frac{m(R_1)+m(R_2)}{2} \leq \frac{1}{2} (b-a) \max(c-a,b-c) \leq \frac{1}{2} (b-a)^2$.</p>
2,889,075
<p>Let $A$ be an infinite subset of $\mathbb R$ that is bounded above and let $u=\sup A$. Show that there exists an increasing sequence $ (x_n) $ with $x_n \in A $ for all $n\in \mathbb N$ such that $u = \lim_{n\rightarrow\infty} x_n$.</p> <p>If $u$ is in $A$ then the proof is trivial. If $u$ does not belong to $A$ then for any $ \epsilon &gt; 0$ there exists an $ x_1$ in $A$ such that $ u-\epsilon &lt; x_1&lt;u$. By the density theorem there exists an $r_1$ between $x_1$ and $u$; since $r_1$ is not an upper bound, we will find an $x_2$ in $A$ such that $ u-\epsilon &lt; x_1 &lt; r_1&lt; x_2&lt;u$. Continuing this way we will get a monotone increasing sequence, and then applying the monotone convergence theorem we will get the desired result.</p> <p>I want to know whether I am right or wrong.</p>
Mohammad Riazi-Kermani
514,496
<p>I am not convinced that your sequence converges to $u$.</p> <p>You have picked an epsilon and formed a sequence between $u-\epsilon $ and $u$.</p> <p>How do you know that the sequence converges to $u$? </p> <p>We know the sequence will converge to a number $l$ such that $u-\epsilon &lt;l\le u$, but how do we know that $l=u$?</p> <p>Why don't you pick the terms of your sequence between $u-1/n$ and $u$? </p>
167,812
<p>I call a profinite group $G$ <strong><em>Noetherian</em></strong> if every ascending chain of closed subgroups is eventually stable. A standard argument shows that every closed subgroup of a Noetherian profinite group is finitely generated.</p> <p>A profinite group $G$ is called <strong><em>just-infinite</em></strong> if every nontrivial $M \lhd_c G$ is open.</p> <p>Let $K$ be a profinite Noetherian just-infinite group. Must $K$ be the profinite completion of some residually finite group $R$?</p>
Yiftach Barnea
5,034
<p>It is a completely open problem whether every Noetherian pro-$p$ group has finite rank, i.e., whether there is a bound on the number of generators of closed subgroups, which is equivalent to being $p$-adic analytic. Since one can classify all just-infinite $p$-adic analytic pro-$p$ groups, if indeed every Noetherian pro-$p$ group is $p$-adic analytic, then all one needs to do for pro-$p$ groups is to go over the list. My gut feeling would be that the answer for $p$-adic analytic pro-$p$ groups is yes, but I am not 100% sure. </p>
4,422,824
<p><strong>Edit: This question involves derivatives, please read my prior work!</strong></p> <p>This question has me stumped.</p> <blockquote> <p>A car company wants to ensure its newest model can stop in less than 450 ft when traveling at 60 mph. If we assume constant deceleration, find the value of deceleration that accomplishes this.</p> </blockquote> <p>First, from the instructions, I believe...</p> <p><span class="math-container">$f'(x)=(5280/60)-ax=88-ax$</span></p> <p><span class="math-container">$f''(x)=-a$</span></p> <p>I also believe <span class="math-container">$f(x)={\int}f'(x)dx=88x-a{\int}x=88x-a\frac{x^2}{2}$</span>, because <code>a</code> is known to be constant.</p> <p>Where I'm lost is what comes next. I can compute <code>a</code> and <code>x</code> in terms of each other at <code>f(x)=450</code>, but this doesn't seem to get me closer to the answer. Neither does the fact that <span class="math-container">$f^{-1}(450)=x$</span>. What am I missing here? Thank you!</p>
John Douma
69,810
<p>Since acceleration is constant we get that <span class="math-container">$$\frac{dv}{dt}=a\implies v = at+v_0$$</span> where <span class="math-container">$v_0$</span> is the initial velocity. In our case we are starting at <span class="math-container">$60\text{ mph}$</span> which is <span class="math-container">$88\text{ ft/s}$</span>. To go from <span class="math-container">$88\text{ ft/s}$</span> to <span class="math-container">$0\text{ ft/s}$</span> requires <span class="math-container">$t=-\frac{88}{a}\text{ seconds}$</span>.</p> <p>From our expression for velocity we get that <span class="math-container">$$\frac{dx}{dt}=at+v_0\implies x=\frac{1}{2}at^2+v_0t+x_0$$</span> where <span class="math-container">$x_0$</span> is our initial position. We are trying to stop in <span class="math-container">$450\text{ ft}$</span> so we can let <span class="math-container">$x_0=0$</span> and <span class="math-container">$x=450$</span>. Plugging everything in we get <span class="math-container">$$450=\frac{1}{2}\frac{88^{2}}{a}-\frac{88^2}{a}=-\frac{1}{2}\frac{88^2}{a}\implies a=-\frac{88^2}{900}$$</span></p>
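The computation above can be replayed in code (units in feet and seconds); the tolerance here is mine, to absorb floating-point rounding:

```python
v0 = 88.0                  # 60 mph = 88 ft/s
a = -v0**2 / 900           # the deceleration just derived, ≈ -8.60 ft/s^2

t_stop = -v0 / a           # time until v = a*t + v0 hits zero, ≈ 10.23 s
x_stop = 0.5 * a * t_stop**2 + v0 * t_stop

assert abs(x_stop - 450.0) < 1e-9   # the car stops in exactly 450 ft
```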
164,002
<p>When I am reading a mathematical textbook, I tend to skip most of the exercises. Generally I don't like exercises, particularly artificial ones. Instead, I concentrate on understanding proofs of theorems, propositions, lemmas, etc.</p> <p>Sometimes I try to prove a theorem before reading the proof. Sometimes I try to find a different proof. Sometimes I try to find an example or a counter-example. Sometimes I try to generalize a theorem. Sometimes I come up with a question and I try to answer it. </p> <p>I think those are good "exercises" for me.</p> <p><strong>EDIT</strong> What I think is a very good "exercise" is as follows:</p> <p>(1) Try to prove a theorem before reading the proof.</p> <p>(2) If you have no idea how to prove it, take <strong>a brief look</strong> at the proof.</p> <p>(3) Continue to try to prove it.</p> <p>(4) When you are stuck, take <strong>a brief look</strong> at the proof.</p> <p>(5) Repeat (3) and (4) until you come up with a proof.</p> <p><strong>EDIT</strong> Another method I recommend rather than doing "homework type" exercises: Try to write a "textbook" on the subject. You don't have to write a real one. I tried to do this on Galois theory. Actually I posted "lecture notes" on Galois theory on an internet mathematics forum. I believe my knowledge and skill on the subject greatly increased.</p> <p>For example, I found <a href="https://math.stackexchange.com/questions/131757/a-proof-of-the-normal-basis-theorem-of-a-cyclic-extension-field">this</a> while I was writing "lecture notes" on Galois theory. I could also prove that any profinite group is a Galois group. This fact was mentioned in Neukirch's algebraic number theory. I found later that Bourbaki had this problem as an exercise. I don't understand its hint, though. Later I found someone wrote a paper on this problem. I made other small "discoveries" during the course. I was planning to write a "lecture note" on Grothendieck's Galois theory. 
This is an attractive plan, but has not yet been started.</p> <p><strong>EDIT</strong> If you want to have exercises, why not produce them yourself? When you are learning a subject, you naturally come up with questions. Some of these can be good exercises. At least you have the motivation not given by others. It is not homework. For example, I came up with the following question when I was learning algebraic geometry. I found that this was a good problem.</p> <p>Let $k$ be a field. Let $A$ be a finitely generated commutative algebra over $k$. Let $\mathbb{P}^n = Proj(k[X_0, ... X_n])$. Determine $Hom_k(Spec(A), \mathbb{P}^n)$.</p> <p>As I wrote, trying to find examples or counter-examples can be good exercises, too. For example, <a href="https://math.stackexchange.com/questions/133790/an-example-of-noncommutative-division-algebra-over-q-other-than-quaternion-alg">this</a> is a good exercise in the theory of division algebras.</p> <p><strong>EDIT</strong> Let me show you another example of self-exercises. I encountered the following problem when I was writing a "lecture note" on Galois theory.</p> <p>Let $K$ be a field. Let $K_{sep}$ be a separable algebraic closure of $K$. Let $G$ be the Galois group of $K_{sep}/K$.</p> <p>Let $A$ be a finite dimensional algebra over $K$. If $A$ is isomorphic to a product of fields each of which is separable over $K$, $A$ is called a finite etale algebra. Let $FinEt(K)$ be the category of finite etale algebra over $K$.</p> <p>Let $X$ be a finite set. Suppose $G$ acts on $X$ continuously. $X$ is called a finite $G$-set. Let $FinSets(G)$ be the category of finite $G$-sets.</p> <p><em>Then $FinEt(K)$ is anti-equivalent to $FinSets(G)$.</em></p> <p>This is a zero-dimensional version of the main theorem of Grothendieck's Galois theory. You can find the proof elsewhere, but I recommend you to prove it yourself. It's not difficult and it's a good exercise of Galois theory. 
<em>Hint</em>: Reduce it to the case that $A$ is a finite separable extension of $K$ and $X$ is a finite transitive $G$-set.</p> <p><strong>EDIT</strong> If you think this is too broad a question, you are free to add suitable conditions. This is a soft question.</p>
Russ
34,654
<p>Of course this is entirely subjective and depends on your intelligence and memory. I'd suspect most people on this site are high in both areas, at least in logic/mathematics.</p> <p>Understanding theorems and working through them like you explained is a very good way to understand material, especially if you can keep it in mind when needed. Exercises are usually repetitious, but they can also give you some context in which you can/should apply the theorems. </p> <p>So, no, going through exercises isn't a requirement, but going through some is a good idea, both to confirm you understand the material and to help commit that material to memory. That said, I wouldn't suggest doing the first few exercises (in most textbooks they're the easiest); rather, pick a few in the middle, or some whose answer or method of solution isn't obvious to you. Usually the ones toward the end of a section are either tough, long-winded, or both. Doing some of those might be worth it, but some might just be long and drawn out and ultimately not worth the time.</p> <p>This is just my experience with math textbooks, but I've only gotten to the undergraduate level, so how true this is at higher levels I don't know.</p>
3,197,683
<p>Here is the theorem that I need to prove</p> <blockquote> <p>For <span class="math-container">$K = \mathbb{Q}[\sqrt{D}]$</span> we have</p> <p><span class="math-container">$$\begin{align}O_K = \begin{cases} \mathbb{Z}[\sqrt{D}] &amp; D \equiv 2, 3 \mod 4\\ \mathbb{Z}\left[\frac{1 + \sqrt{D}}{2}\right] &amp; D \equiv 1 \mod 4 \end{cases} \end{align}$$</span></p> </blockquote> <p>The theorem we need to use is this one that can be found in any generic number theory textbook.</p> <blockquote> <p>an element <span class="math-container">$\alpha\in K$</span> is an algebraic integer if and only if its minimal polynomial has coefficients in <span class="math-container">$\mathbb{Z}$</span>.</p> </blockquote> <p>I tried many avenues of attack but it is extremely hard to prove. How do I prove it?</p>
mathematics2x2life
79,043
<p>There are lots of approaches depending on how much you assume. Assuming one only knows traces and norms and your theorem, this should be a 'low brow' approach that gets there without any other knowledge assumptions.</p> <p>Let <span class="math-container">$K=\mathbb{Q}(\sqrt{d})$</span>, where <span class="math-container">$d\neq 1$</span> is a squarefree integer. We want to find <span class="math-container">$\mathcal{O}_K$</span>. If <span class="math-container">$\alpha=a+b\sqrt{d} \in \mathcal{O}_K$</span>, where <span class="math-container">$a,b \in \mathbb{Q}$</span>, we know that <span class="math-container">$\text{Nm}_{K/\mathbb{Q}}(\alpha),\text{Tr}_{K/\mathbb{Q}}(\alpha) \in \mathbb{Z}$</span>. Hence, <span class="math-container">$a^2-db^2,2a \in \mathbb{Z}$</span>. Multiplying <span class="math-container">$a^2-db^2$</span> by 4, we obtain <span class="math-container">$(2a)^2-d(2b)^2,2a \in \mathbb{Z}$</span>; since <span class="math-container">$d$</span> is squarefree, this forces <span class="math-container">$2b \in \mathbb{Z}$</span> as well. </p> <p>Therefore, <span class="math-container">$2\mathcal{O}_K \subseteq \mathbb{Z}[\sqrt{d}]=\{a+b\sqrt{d} \colon a,b \in \mathbb{Z}\}$</span>. Then we have an inclusion of abelian groups <span class="math-container">$$ \mathbb{Z}[\sqrt{d}] \subseteq \mathcal{O}_K \subseteq \dfrac{1}{2}\, \mathbb{Z}[\sqrt{d}] $$</span> The quotient <span class="math-container">$\frac{1}{2}\mathbb{Z}[\sqrt{d}]/\mathbb{Z}[\sqrt{d}]$</span> is a group of order 4 with coset representatives: 0, <span class="math-container">$\frac{1}{2}$</span>, <span class="math-container">$\frac{\sqrt{d}}{2}$</span>, and <span class="math-container">$\frac{1+\sqrt{d}}{2}$</span>. </p> <p>In order to determine <span class="math-container">$\mathcal{O}_K$</span>, we need to determine which of these representatives are algebraic integers. Clearly, <span class="math-container">$0 \in \mathcal{O}_K$</span> and <span class="math-container">$\frac{1}{2} \notin \mathcal{O}_K$</span>. 
The minimal polynomial of <span class="math-container">$\frac{\sqrt{d}}{2}$</span> is <span class="math-container">$x^2 - \frac{d}{4}$</span>---which is not in <span class="math-container">$\mathbb{Z}[x]$</span> as <span class="math-container">$d$</span> is squarefree. Hence, <span class="math-container">$\frac{\sqrt{d}}{2} \notin \mathcal{O}_K$</span>. Finally, the minimal polynomial of <span class="math-container">$\frac{1+\sqrt{d}}{2}$</span> is <span class="math-container">$$ \left(x - \dfrac{1+\sqrt{d}}{2}\right) \left(x - \dfrac{1-\sqrt{d}}{2}\right)= x^2-x+ \dfrac{1-d}{4}. $$</span> This lies in <span class="math-container">$\mathbb{Z}[x]$</span> exactly when <span class="math-container">$4 \mid 1-d$</span>. [That is, <span class="math-container">$\frac{1+\sqrt{d}}{2} \in \mathcal{O}_K$</span> if and only if <span class="math-container">$d \equiv 1 \mod 4$</span>.] Therefore, <span class="math-container">$$ \mathcal{O}_K= \begin{cases} \mathbb{Z}[\sqrt{d}], &amp; d \not\equiv 1 \mod 4 \\ \mathbb{Z}\left[\frac{1+\sqrt{d}}{2}\right], &amp; d \equiv 1 \mod 4. \end{cases} $$</span></p>
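A quick numerical sanity check of the two cases (my addition; floating point, so only approximate): for squarefree $d \equiv 1 \pmod 4$, the element $\alpha = \frac{1+\sqrt d}{2}$ should satisfy the monic integer polynomial $x^2 - x + \frac{1-d}{4}$, while for $d \equiv 2, 3 \pmod 4$ the constant term fails to be an integer.

```python
import math

# d ≡ 1 (mod 4): alpha = (1 + sqrt(d))/2 satisfies x^2 - x + (1 - d)/4 = 0
for d in (5, 13, 17, -3, -7):
    assert d % 4 == 1
    alpha = (1 + math.sqrt(d)) / 2 if d > 0 else complex(1, math.sqrt(-d)) / 2
    val = alpha**2 - alpha + (1 - d) / 4
    assert abs(val) < 1e-9          # alpha is an algebraic integer

# d ≡ 2, 3 (mod 4): the would-be constant term (1 - d)/4 is not an integer
for d in (2, 3, 6, 7, -1, -2):
    assert (1 - d) % 4 != 0
```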
1,914,686
<p>A fleet of nine taxis is dispatched to three airports, in such a way that three go to airport A, five go to airport B and one goes to airport C.</p> <p>Exactly three taxis are in need of repair. What is the probability that every airport receives one of the taxis requiring repair?</p> <p>My method: the total number of ways is (9!)/(3!*5!*1!) = 504 (this is correct).</p> <p>Now you have three groups, and in every group one position is already filled. So: ....R, ..R, R. So the number of ways to distribute the rest is (6!)/(2!*4!) = 15.</p> <p>But according to my professor's answer model, it should be 90. So I assume he says the number of ways of distributing these 3 broken cabs over the 3 spots is 3*2*1. But to me this seems wrong? The broken taxis don't differ, so R(1)R(2)R(3) is the same as R(2)R(1)R(3). If you are standing at airport A, it doesn't matter which of the broken cabs stands there?</p>
BruceET
221,800
<p><strong>Comment:</strong> Please see my first comment above. </p> <p>Below is a simulation in R statistical software of a million performances of the experiment, in which 1's denote bad taxis and 0's good ones. The simulated answer should be correct to two or three places.</p> <pre><code>m = 10^6; txi = c(1,1,1,0,0,0,0,0,0) a1 = a2 = a3 = numeric(m) for (i in 1:m) { prm = sample(txi, 9) a1[i] = sum(prm[1:3]) # nr bad taxis at Airport 1 a2[i] = sum(prm[4:8]) # etc a3[i] = sum(prm[9]) } mean(a1==1 &amp; a2==1 &amp; a3==1) ## 0.179161 90/504 ## 0.1785714 </code></pre> <p>It seems your professor is correct this time. You should think carefully about what kind of outcomes are included among the 504. Perhaps @Henry's Answer (+1) to a similar, simpler problem has given you a clue how to do that.</p>
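For comparison, the exact value — treating the taxis as distinguishable, which is what makes the count of 90 correct — can be written out directly (a Python translation of the counting argument, not from the answer):

```python
from math import factorial

# total assignments of 9 distinct taxis into groups of sizes 3, 5, 1
total = factorial(9) // (factorial(3) * factorial(5) * factorial(1))

# favorable: send the 3 distinct bad taxis one to each airport (3! ways),
# then fill the remaining 2, 4, 0 slots with the 6 good taxis
favorable = factorial(3) * (factorial(6) // (factorial(2) * factorial(4)))

assert total == 504 and favorable == 90
print(favorable / total)   # ≈ 0.1786, matching the simulation
```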
82,765
<p><strong>Bug introduced in 9.0 and persisting through 12.2</strong></p> <hr /> <p>I get the following output with a fresh Mathematica (ver 10.0.2.0 on Mac) session</p> <pre><code>FullSimplify[Exp[-100*(i-0.5)^2]] (* 0. *) Simplify[Exp[-100*(i-0.5)^2]] (* E^(-100. (-0.5+i)^2) *) </code></pre> <p><code>FullSimplify</code> seems to be a bit overambitious and kills the expression completely. Is there anything that explains this behavior or is this simply a bug?</p> <p>As suggested in the comments, I did an additional test:</p> <pre><code>FullSimplify[Exp[-100*(i-1/2)^2]] (* E^(-25 (1-2 i)^2) *) </code></pre> <p>Apparently, the float point math causes the problem.</p>
Daniel Lichtblau
51
<p>[Too long for a comment.]</p> <p>I very much doubt this will be "fixed" in any general way, and in fact am not convinced it is "broken" (in any general way). <code>(Full)Simplify</code> has to rely on any number of methods that manipulate rational/trig functions. These are all based on exact methods that have, of necessity, been adapted (more to the point, coopted) for use in the realm of approximate coefficients. With bignums and significance arithmetic these changes generally do alright. With machine numbers there is not a chance that all heuristics will meet all needs at all times. The things I have tried to optimize for include</p> <p>(1) Avoid crashes</p> <p>(2) Avoid "small" residual expressions wherein coefficients are on the order of a machine precision ULP (because these really mess up zero testing and related things that rely on a decision procedure, hence will mess up <code>Series</code>, <code>Limit</code>, <code>Integrate</code>...)</p> <p>(3) Try to get cancellations to be sensible e.g. for <code>Together</code>.</p> <p>What this means is that smallish machine precision values might get chopped inside code that is primarily based on exact methods. Frankly, I'm happy we've managed to push the algorithms as far as we have in terms of handling approximate input. The literature on "symbolic-numeric manipulation" is not so optimistic, the best methods are not too fast, there are tolerances to be set or deduced, etc.</p>
203,614
<p>I have two matrices </p> <p><span class="math-container">$$ A=\begin{pmatrix} a &amp; 0 &amp; 0 \\ 0 &amp; b &amp; 0 \\ 0 &amp; 0 &amp; c \end{pmatrix} \quad \text{ and } \quad B=\begin{pmatrix} d &amp; e &amp; f \\ d &amp; e &amp; f \\ d &amp; e &amp; f \end{pmatrix} $$</span></p> <p>In reality mine are more like 1000 x 1000 matrices but the only thing that is important for now is that the left matrix is diagonal and the right one has one row that repeats itself.</p> <p>Obviously the eigenvalues of the left matrix are its diagonal components. I want to create a new matrix C</p> <p><span class="math-container">$$C = A+B=\begin{pmatrix} a &amp; 0 &amp; 0 \\0 &amp; b &amp; 0 \\0 &amp; 0 &amp; c \end{pmatrix}+\begin{pmatrix} d &amp; e &amp; f \\d &amp; e &amp; f \\d &amp; e &amp; f \end{pmatrix}=\begin{pmatrix} a+d &amp; e &amp; f \\d &amp; b+e &amp; f \\d &amp; e &amp; c+f \end{pmatrix}$$</span></p> <p>I am now wondering how the eigenvalues of this new matrix C are related to the eigenvalues of the diagonal matrix A. Can I use an argument that uses row reduction in order to relate the eigenvalues of both matrices? </p> <p>The reason why I am asking is that my 1000 x 1000 matrix (implemented in mathematica) that is described as above gives me almost the same eigenvalues as the corresponding diagonal matrix (only a few eigenvalues differ) and I really cannot think of any reason why that should be the case.</p> <p>EDIT:</p> <p>I implemented a simple code in mathematica to illustrate what I mean. 
One can see that every eigenvalue of the diagonal matrix A appears in C:</p> <pre><code>dim = 50;
A = DiagonalMatrix[Flatten[RandomInteger[{0, 10}, {1, dim}]]];
mat = RandomReal[{0, 100}, {1, dim}];
B = ArrayFlatten[ConstantArray[{mat}, dim]];
c = A + B;
Abs[Eigenvalues[A]]
Round[Abs[Eigenvalues[c]], 0.01]

(*{10, 10, 10, 10, 10, 10, 9, 9, 9, 9, 9, 9, 8, 8, 8, 8, 7, 7, 7, 7, 7, 6, 6, 6, 6, 5, 5, 5, 5, 5, 4, 4, 4, 4, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 1, 1, 1, 0, 0, 0}*)

(*{2084.89, 10., 10., 10., 10., 10., 9.71, 9., 9., 9., 9., 9., 8.54, 8., 8., 8., 7.72, 7., 7., 7., 7., 6.61, 6., 6., 6., 5.44, 5., 5., 5., 5., 4.29, 4., 4., 4., 3.51, 3., 3., 3., 3., 2.28, 2., 2., 2., 2., 1.21, 1., 1., 0.33, 0., 0.}*)
</code></pre>
AccidentalFourierTransform
34,893
<p>The reason is that your second matrix is a <strong>rank-one update</strong> of your first matrix: <span class="math-container">$$ B\equiv uv^t $$</span> where <span class="math-container">$u=(1,1,1)$</span> and <span class="math-container">$v=(d,e,f)$</span>. Therefore, the new eigenvalues are typically a small perturbation of the old ones, and there are some known formulas for special cases. See e.g. <a href="https://www.cs.vu.nl/~ran/LectureBerlijn2010.pdf" rel="noreferrer">these lectures</a> or the references in <a href="https://mathoverflow.net/q/143375/106114">this math.OF post</a>.</p>
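<p>The factorization behind those secular-equation formulas is the matrix determinant lemma, $\det(A+uv^t)=\det(A)\,(1+v^tA^{-1}u)$; applied to $A-\lambda I$ with diagonal $A$, it exhibits the characteristic polynomial of the rank-one update explicitly. A small self-contained numerical check (pure Python; the numbers are illustrative):</p>

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows (cofactor expansion)."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def rank_one_update_det(diag, u, v):
    """det(diag(a) + u v^T) via the matrix determinant lemma:
    det(A + u v^T) = det(A) * (1 + v^T A^{-1} u), for invertible A."""
    det_a = math.prod(diag)
    correction = 1.0 + sum(ui * vi / a for a, ui, vi in zip(diag, u, v))
    return det_a * correction

diag = [1.0, 2.0, 3.0]        # the diagonal matrix A
u = [1.0, 1.0, 1.0]           # all-ones vector: B = u v^T has one repeated row
v = [0.3, -0.2, 0.5]          # the repeated row (d, e, f)

# Build A + u v^T explicitly and compare the two determinant computations.
m = [[(diag[i] if i == j else 0.0) + u[i] * v[j] for j in range(3)]
     for i in range(3)]
assert abs(det3(m) - rank_one_update_det(diag, u, v)) < 1e-12
```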
2,830,718
<p>I want to estimate how many red balls are in a box. Red, yellow, and blue balls could be in the box, but I don't know how many of each are in the box.</p> <p>What I did was randomly draw 10 balls from the box, and I learned that there was no red ball.</p> <p>(Edit: Assume the number of balls in the box is a known finite number. Let's say 10,000.)</p> <p>Can I say the probability of having at least one red ball is $$\left(\frac13\right)^{10}= 0.00169\ \%\ ?$$ (3 possible outcomes, and 10 observations)</p> <p>Since I don't know the distribution of the colors of the balls, I'm not sure whether this inference is reasonable or not. Thanks!</p>
Green.H
456,457
<p>No, you can't say that. </p> <blockquote> <p>Infinite case: </p> </blockquote> <p>The reason is that you don't know how many balls there are in the box in total. For instance, it may be that the box contains an infinite number of balls (since we are talking about a mathematical box, I can make this assumption), $\frac{9}{10}$ of which are red. In this case, the probability of drawing at least $1$ red ball out of $10$ will be different from what you calculated.</p> <blockquote> <p>Finite case:</p> </blockquote> <p>A similar argument applies. If it happens that the true distribution of balls is such that the share of red balls is $9/10$, then your above calculation is not correct.</p>
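<p>To see concretely how much the unknown composition matters, one can compute the no-red probability under a hypergeometric model for a couple of hypothetical compositions (a Python sketch; the compositions are made up for illustration):</p>

```python
from math import comb

def p_no_red(total, red, draws=10):
    """Hypergeometric probability of seeing no red ball in `draws` draws
    without replacement from `total` balls, `red` of them red."""
    return comb(total - red, draws) / comb(total, draws)

# With 10,000 balls, the answer depends entirely on the unknown composition:
assert p_no_red(10_000, 100) > 0.9      # 1% red: "no red in 10 draws" is likely
assert p_no_red(10_000, 9_000) < 1e-9   # 90% red: essentially impossible
```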
4,765
<p>I have a grid made up of overlapping <span class="math-container">$3\times 3$</span> squares like so:</p> <p><img src="https://i.stack.imgur.com/BaY9s.png" alt="Grid"></p> <p>The numbers on the grid indicate the number of overlapping squares. Given that we know the maximum number of overlapping squares (<span class="math-container">$9$</span> at the middle), and the size of the squares (<span class="math-container">$3\times 3$</span>), is there a simple way to calculate the rest of the number of overlaps?</p> <p>e.g. I know the maximum number of overlaps is <span class="math-container">$9$</span> at point <span class="math-container">$(2,2)$</span> and the square size is <span class="math-container">$3\times 3$</span> . So given point <span class="math-container">$(3,2)$</span> how can I calculate that there are <span class="math-container">$6$</span> overlaps at that point?</p>
Community
-1
<p>While searching on Google, I found this JSTOR link: <a href="http://www.jstor.org/stable/2688681">http://www.jstor.org/stable/2688681</a>, which answers the question in an intricate way.</p>
4,765
<p>I have a grid made up of overlapping <span class="math-container">$3\times 3$</span> squares like so:</p> <p><img src="https://i.stack.imgur.com/BaY9s.png" alt="Grid"></p> <p>The numbers on the grid indicate the number of overlapping squares. Given that we know the maximum number of overlapping squares (<span class="math-container">$9$</span> at the middle), and the size of the squares (<span class="math-container">$3\times 3$</span>), is there a simple way to calculate the rest of the number of overlaps?</p> <p>e.g. I know the maximum number of overlaps is <span class="math-container">$9$</span> at point <span class="math-container">$(2,2)$</span> and the square size is <span class="math-container">$3\times 3$</span> . So given point <span class="math-container">$(3,2)$</span> how can I calculate that there are <span class="math-container">$6$</span> overlaps at that point?</p>
Singh
83,768
<p>An elementary proof of the fact that the set $\{n+\pi k : n,k\in \Bbb{Z}\}$ is dense in the reals amounts to showing that the subgroup $\Bbb{Z}+\pi\Bbb{Z}$ is dense in the additive group of the real line. See Theorem 0.2 in the following for the details of the proof:</p> <p><a href="https://docs.google.com/viewer?a=v&amp;pid=sites&amp;srcid=ZGVmYXVsdGRvbWFpbnxzb251bWF0aHMyfGd4OjhhZTM3MmVkMWJiN2UzMA" rel="nofollow noreferrer">https://docs.google.com/viewer?a=v&amp;pid=sites&amp;srcid=ZGVmYXVsdGRvbWFpbnxzb251bWF0aHMyfGd4OjhhZTM3MmVkMWJiN2UzMA</a></p>
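<p>The pigeonhole step at the heart of such density proofs can be illustrated numerically: among the fractional parts of $\pi, 2\pi, \dots, N\pi$, some two must lie within $1/(N-1)$ of each other, and the minimal gap keeps shrinking as $N$ grows (a Python sketch):</p>

```python
import math

def min_gap(N):
    """Smallest gap between consecutive sorted fractional parts of k*pi,
    k = 1..N.  With N points in [0, 1), pigeonhole forces a gap < 1/(N-1)."""
    parts = sorted((k * math.pi) % 1.0 for k in range(1, N + 1))
    return min(b - a for a, b in zip(parts, parts[1:]))

assert min_gap(100) < 1 / 99         # some two multiples of pi are close mod 1
assert min_gap(1000) < min_gap(100)  # and the gaps keep shrinking
```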
815,868
<blockquote> <p>Consider the following system describing pendulum</p> <p><span class="math-container">$$\begin{align} &amp; \frac{dx}{dt} = y, \\ &amp; \frac{dy}{dt} = − \sin x. \end{align}$$</span></p> <p>I need to classify all critical points of the system.</p> </blockquote> <p>All critical points are of the form <span class="math-container">$(k\pi,0)$</span> for any <span class="math-container">$k \in \mathbb{Z}$</span></p> <p>I know the Hamiltonian of the system <span class="math-container">$\displaystyle H(x,y)= \frac{y^2}2 -\cos x $</span>, but I'm not sure if this can help in any way.</p>
Winther
147,873
<p>Consider a small perturbation about the critical points $(x,y) = (0,n\pi)$ for $n\in\mathbb{Z}$, i.e. we take</p> <p>$$x = n\pi + \delta x,~~~~y = \delta y$$</p> <p>Then the dynamical system becomes, to first order in the perturbations,</p> <p>$$\dot{\delta x} = \delta y~~~~\text{and}~~~~\dot{\delta y} = (-1)^{n+1}\delta x$$ which also implies $\ddot{\delta x} = \dot{\delta y} = (-1)^{n+1}\delta x$. If $n$ is odd then $\ddot{\delta x} = \delta x \implies \delta x \propto e^{\pm t}$ so perturbations grow exponentially and the fixpoint is unstable. If $n$ is even then $\ddot{\delta x} = -\delta x \implies \delta x \propto \sin(t+\phi)$ and $\delta y \propto \cos(t + \phi)$ so the size of the perturbations do not grow in time.</p> <p>If you prefer a more physical approach: the system describes a particle rolling without friction in a potential $V(x) = -\cos(x)$ (since $\ddot{x} = -\frac{dV(x)}{dx}$), see figure below for a plot of the potential $V(x)$.</p> <p>$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$<a href="https://i.stack.imgur.com/vv4up.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vv4up.png" alt="enter image description here"></a></p> <p>When $n$ is odd we are at a peak of the potential so a slight perturbation will make the particle roll away from the fixpoint. When $n$ is even we are at the bottom of the potential so a slight perturbation away from the fixpoint will just make the particle oscillate around it (and since there is no friction the oscillations will not die out).</p>
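<p>The classification above can be double-checked by linearizing at the critical points: the Jacobian of $(y, -\sin x)$ at $(k\pi, 0)$ is $\begin{pmatrix} 0 &amp; 1 \\ -\cos k\pi &amp; 0\end{pmatrix}$, whose eigenvalues satisfy $\lambda^2 = -\cos(k\pi)$. A small Python sketch of this check (the labels are only a shorthand for the linear analysis):</p>

```python
import math

def classify(k):
    """Linearize the pendulum system (x' = y, y' = -sin x) at the critical
    point (k*pi, 0).  The Jacobian there is [[0, 1], [-cos(k*pi), 0]],
    so its eigenvalues satisfy lam^2 = -cos(k*pi)."""
    lam_squared = -math.cos(k * math.pi)
    if lam_squared > 0:
        # real eigenvalues +/- sqrt(lam_squared): a saddle, unstable
        return "unstable (saddle)"
    # purely imaginary eigenvalues: a center, (linearly) stable
    return "center (stable)"

assert classify(0) == "center (stable)"     # even multiples of pi
assert classify(1) == "unstable (saddle)"   # odd multiples of pi
```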
1,390,093
<p>Let $G$ be act on $\Gamma$ with a fundamental domain $T$ where $T$ is tree. We construct <em>tree of groups</em> $(\mathcal{G},T)$ with the following structure: $$\text{for every } v\in V(T),\,\,G_v=\operatorname{Stab}_G(v) $$</p> <p>$$\text{for every } e\in E(T),\,\,G_e=\operatorname{Stab}_G(e) $$</p> <p>Assume that $G_T$ is the direct limit of the system $(\mathcal{G},T)$. With using the universal property of the definition of the direct limit, we get the map $\phi\colon G_T\mapsto G$.</p> <p>Now my question is:</p> <p>If $\Gamma$ is connected, then why can we conclude that $\phi$ is a surjective map?</p>
Daniel Valenzuela
156,302
<p>This is precisely the statement of Lemma 4 in Chapter 4.1 of Serre's Trees book. In terms of your specific problem, the lemma is a little too general, and you can set $Y=T$. However, the previous Lemma 2, which is being generalized, is too specific. This book is a beautiful read and strongly recommended! </p> <p>Note that this is an answer to your question in terms of Bass-Serre theory. That is also why Lee Mosher's answer is incorrect, since in his example there <em>exists no</em> fundamental domain.</p>
3,372,832
<blockquote> <p><strong>9)</strong> Is <span class="math-container">$$ \sum_{n=1}^\infty \delta_n \tag{7.10.1} $$</span> a well-defined distribution? Note, to be a well-defined distribution, its action on any test function should be a finite number. Provide an example of a function <span class="math-container">$f(x)$</span> whose derivative in the sense of distributions is <span class="math-container">$(7.10.1)$</span></p> </blockquote> <p>Hello, I want to find a distribution whose distributional derivative is the summation of the delta functions (<span class="math-container">$\delta_1$</span> to <span class="math-container">$\delta_k$</span>). I find that the distributional derivative of the sum of shifted Heaviside functions <span class="math-container">$H(x-a)$</span> is equal to the sum of the delta functions. However, I have trouble establishing the convergence of the sum of shifted Heaviside functions in the sense of distributions. If I can establish this convergence, then, by the theorem, the derivative of the limit is also the limit of the sum of the delta functions in the sense of distributions.</p>
Aloizio Macedo
59,234
<p>For a rundown on your reasoning, read after the horizontal rule below. First, let me prove what he mentions.</p> <p>Following up on Rudin's notation, to see why <span class="math-container">$\Omega$</span> is the unique component which can be unbounded, suppose that there is another, say <span class="math-container">$\Omega'$</span>. Since <span class="math-container">$\Omega'$</span> is unbounded, there is some <span class="math-container">$x$</span> which belongs to <span class="math-container">$\Omega'$</span> and also <span class="math-container">$D^c$</span>. Therefore, <span class="math-container">$\Omega'$</span> and <span class="math-container">$\Omega$</span> have a point in common. Since they are connected components, they must coincide.</p> <p>Note that nowhere in this argument do we conclude that there is only one other component, and neither does Rudin claim that. This is part of the Jordan Curve Theorem, as you imply. </p> <hr> <blockquote> <p>Note that <span class="math-container">$\gamma^*$</span> is compact, hence <span class="math-container">$\gamma^* \subset D$</span>.</p> </blockquote> <p>Yes, it lies in a bounded disk. This is true in general for any compact set inside a metric space <span class="math-container">$M$</span>, and there is no need for Lebesgue number lemma: given a compact set <span class="math-container">$K$</span> and a point <span class="math-container">$p \in M$</span>, there is an open (or closed) ball centered on <span class="math-container">$p$</span> big enough that contains <span class="math-container">$K$</span>. Just consider the collection of open balls of radius <span class="math-container">$n$</span> centered at <span class="math-container">$x$</span> and apply the definition to see why this holds. (And take the closure of the maximum of the resulting radii after you apply it, if you want to have a closed ball in the end.) 
</p> <blockquote> <p>whose complement <span class="math-container">$D^c$</span> is connected (since the complement is now path-connected?)</p> </blockquote> <p>Yes, that is a valid way to justify it.</p> <blockquote> <p>thus <span class="math-container">$D^c$</span> lies in some component of <span class="math-container">$\Omega$</span> (how do we know <span class="math-container">$D^c$</span> isn't the component (maximal connected subset?).</p> </blockquote> <p>It could be the entire component. (Just take <span class="math-container">$\gamma$</span> to be a circle, and <span class="math-container">$D$</span> to be the disk determined by it.) So we don't know that, but it is irrelevant. </p> <blockquote> <p>This shows <span class="math-container">$\Omega$</span> has precisely one unbounded component (since <span class="math-container">$D^c$</span> is also unbounded).</p> </blockquote> <p>Unfortunately, nothing that you mentioned makes this conclusion directly valid. (At least, if what Rudin mentioned doesn't make it valid, then what you didn't also does not.)</p>
2,594,829
<p>I'm having trouble knowing when my ansatz is wrong. For example, if my ansatz for $y''-2y'+y=2\cos{x}$ is $y_p=a\cos{x}+b\sin{x},$ I get nowhere. How can I make a correct ansatz, and are there any general rules to determine a proper ansatz?</p> <p><strong>Note:</strong> I know one can solve this using Euler's formula and all that, but that's not the point here. So please no posts that have nothing to do with the method of undetermined coefficients.</p>
Alekos Robotis
252,284
<p>If you set $y_p=a\cos x+b\sin x$, we have that $y_p'=-a\sin x+b\cos x$ and $y_p''=-a\cos x-b\sin x.$ Then, we have that $$ y_p''-2y_p'+y_p=-a\cos x-b\sin x+2a\sin x-2b\cos x+a\cos x+b\sin x=2\cos x.$$ Collecting the coefficients, we see that $-2b-a+a=2$ and $2a-b+b=0$ so that $b=-1$ and $a=0$. Thus, the function $y=-\sin x$ solves the differential equation.</p>
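<p>The resulting particular solution can be sanity-checked numerically by plugging $y=-\sin x$ back into the left-hand side (a short Python sketch; the sample points are arbitrary):</p>

```python
import math

# y = -sin x gives y' = -cos x and y'' = sin x, so
# y'' - 2y' + y = sin x + 2 cos x - sin x = 2 cos x, as required.
for x in [0.0, 0.7, 1.3, 2.9]:       # arbitrary sample points
    y, yp, ypp = -math.sin(x), -math.cos(x), math.sin(x)
    assert math.isclose(ypp - 2 * yp + y, 2 * math.cos(x),
                        rel_tol=1e-9, abs_tol=1e-9)
```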
2,594,829
<p>I'm having trouble knowing when my ansatz is wrong. For example, if my ansatz for $y''-2y'+y=2\cos{x}$ is $y_p=a\cos{x}+b\sin{x},$ I get nowhere. How can I make a correct ansatz, and are there any general rules to determine a proper ansatz?</p> <p><strong>Note:</strong> I know one can solve this using Euler's formula and all that, but that's not the point here. So please no posts that have nothing to do with the method of undetermined coefficients.</p>
Dr. Sonnhard Graubner
175,066
<p>we have $$y'(x)=-a\sin(x)+b\cos(x)$$ $$y''(x)=-a\cos(x)-b\sin(x)$$ then $$-a\cos(x)-b\sin(x)-2(-a\sin(x)+b\cos(x))+a\cos(x)+b\sin(x)=2\cos(x)$$ from here we get $$\cos(x)(-a-2b+a)+\sin(x)(-b+2a+b)=2\cos(x)$$ thus $$b=-1$$ and $$a=0$$</p>
4,342,737
<p>My question: If you throw a die 5 times, what is the expected value of the square of the median of the 5 results?</p> <p>A slightly modified question would be: If you throw a die 5 times, what is the expected value of the median? The answer would be 3.5 by symmetry.</p> <p>For the square, it seems that symmetry does not hold anymore. Is there a &quot;smart&quot; way to solve this problem?</p> <p>If there isn't a smart way to solve the problem, is there a smart way to estimate the answer?</p>
Avraham
91,378
<p>This is small enough that it can be calculated explicitly through some code. Creating the matrix of throws is inefficient, but it should be easy to see how the universe of possibilities is spanned.</p> <pre><code>throws &lt;- matrix(double(0), ncol = 5, nrow = 6^5)
throws[, 5] &lt;- rep(1:6, each = 6^0, times = 6^4)
throws[, 4] &lt;- rep(1:6, each = 6^1, times = 6^3)
throws[, 3] &lt;- rep(1:6, each = 6^2, times = 6^2)
throws[, 2] &lt;- rep(1:6, each = 6^3, times = 6^1)
throws[, 1] &lt;- rep(1:6, each = 6^4, times = 6^0)
thrMed &lt;- apply(throws, 1L, median)

mean(thrMed)
[1] 3.5

mean(thrMed ^ 2)
[1] 13.62346
</code></pre> <p>As stated in the comments, this is the expected squared median for five throws of dice. The symmetry no longer holds since <span class="math-container">$4^2 - 3.5^2 &gt; 3.5^2 - 3^2$</span> and so on. Therefore the expected squared median is larger than the expected median squared.</p>
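<p>The same full enumeration can be done with exact rational arithmetic, which identifies the decimal above as exactly $2207/162 \approx 13.6235$ (a Python sketch using fractions):</p>

```python
from fractions import Fraction
from itertools import product
from statistics import median

total = Fraction(0)      # running sum of medians
total_sq = Fraction(0)   # running sum of squared medians
n = 0
for throw in product(range(1, 7), repeat=5):  # all 6^5 = 7776 outcomes
    m = median(throw)    # odd sample size, so the median is a die face
    total += m
    total_sq += m * m
    n += 1

assert total / n == Fraction(7, 2)            # E[median] = 3.5, by symmetry
assert total_sq / n == Fraction(2207, 162)    # E[median^2], about 13.6235
```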
91,700
<p>Suppose that $A,C$ are $C^*$-algebras and $\phi:A \to C$ is a completely positive, orthogonality-preserving linear map. (Orthogonality preserving means: if $a,b \in A$ satisfy $ab=0$ then $\phi(a)\phi(b) = 0$.) Then:</p> <p>(i) For any $a,b,c \in A$, $$ \phi\left(ab\right)\phi\left(c\right) = \phi\left(a\right)\phi\left(bc\right) $$ (in the special case that $A$ is unital, this is equivalent to $\phi\left(a\right)\phi\left(b\right) = \phi\left(1\right)\phi\left(ab\right)$ for any $a,b \in A$);</p> <p>(ii) For any $a,b \in A$, $$ \left\| \phi\left(ab\right) \right\| \leq \|a\|\cdot\left\|\phi\left(b\right)\right\|; $$</p> <p>(iii) If $A$ is unital and simple, then for any $a \in A$, $$ \left\| \phi\left(a\right) \right\| = \|a\|\cdot\left\|\phi\left(1\right)\right\|. $$</p> <p>In fact, there is a rich structure theorem about completely positive, orthogonality-preserving maps (in the literature, they are called "order zero" instead of "orthogonality-preserving" ), Theorem 2.3 of Winter, Zacharias, "Completely positive maps of order zero," Münster J. Math, 2009 (see also Corollary 3.1); and I can prove these statements easily using the structure theorem. But, my question is: can we prove any of the facts above directly (without appealing to this structure theorem)?</p> <p>(I am intentionally not restating the structure theorem here because my question is about not using it.)</p>
NC Wong
121,944
<p>For a proof of fact (i), "ϕ(ab)ϕ(c)=ϕ(a)ϕ(bc) and ϕ(a)ϕ(b)=ϕ(1)ϕ(ab)", see M.A. Chebotar, W.-F. Ke, P.-H. Lee, N.-C. Wong, "Mappings preserving zero products", Studia Math., 155 (1) (2003), pp. 77-94, MR1961162 (2003m:47066).</p> <p>It is really surprising (at least to me) that few operator algebra people know of the notion of zero product preserving maps, on which a few dozen papers have been published since the 1980s.</p>
1,478,038
<p>Polyhedra, or three-dimensional analogues of polygons, were studied by Euler, who observed that if one lets $f$ be the number of faces of a polyhedron, $n$ be the number of solid angles and $e$ be the number of joints where two faces come together side by side, then $n-e+f=2$.</p> <p>It was later seen that a serious defect in this definition (and in the proof supplied by Euler) is that it is not at all clear what a polyhedron is in the first place. For example, if we consider a cube nested within another cube as a polyhedron, then $n-e+f=4$, a counterexample to Euler's result.</p> <p>What is the modern definition of a polyhedron that complies with Euler's result?</p>
postmortes
65,078
<p>The N in question is <span class="math-container">$\cal{N}$</span>, which is a calligraphic N that <span class="math-container">$\LaTeX$</span> produces. You can type it yourself by putting <code><span class="math-container">$\cal{N}$</span></code> in your question or answer[1]. As others have also noted, it denotes a normal distribution with specified mean and variance. For pronouncing it, when you're first learning about it I'd recommend saying <span class="math-container">$b \sim \cal{N}(\mu,\sigma^2)$</span> as "b is normally distributed with mean <span class="math-container">$\mu$</span> and variance <span class="math-container">$\sigma^2$</span>", as it will reinforce the meaning and slow your thinking down to give your brain time to understand what that implies.</p> <p>Later on I'd probably say "b is N of <span class="math-container">$\mu$</span> and <span class="math-container">$\sigma^2$</span>" unless explaining it to someone else. Notation is there to save you time for things you already understand.</p> <p>It does have a Unicode code point (U+1D4A9, MATHEMATICAL SCRIPT CAPITAL N), though in practice calligraphic math symbols are more of a font thing than a Unicode thing.</p> <p>[1]: You might want to experiment with putting all the different alphabet letters in calligraphic font as well to see what they look like.</p>
187,395
<p>I can't find my dumb mistake.</p> <p>I'm figuring the definite integral from first principles of $2x+3$ with limits $x=1$ to $x=4$. No big deal! But for some reason I can't find where my arithmetic went screwy. (Maybe because it's 2:46am @_@).</p> <p>so </p> <p>$\delta x=\frac{3}{n}$ and $x_i^*=\frac{3i}{n}$</p> <p>where $x_i^*$ is the right end point of each rectangle under the curve.</p> <p>So the sum of the areas of the $n$ rectangles is</p> <p>$\Sigma_{i=1}^n f(\delta xi)\delta x$</p> <p>$=\Sigma_{i=1}^n f(\frac{3i}{n})\frac{3}{n}$</p> <p>$=\Sigma_{i=1}^n (2(\frac{3i}{n})+3)\frac{3}{n}$</p> <p>$=\frac{3}{n}\Sigma_{i=1}^n (2(\frac{3i}{n})+3)$</p> <p>$=\frac{3}{n}\Sigma_{i=1}^n ((\frac{6i}{n})+3)$</p> <p>$=\frac{3}{n} (\frac{6}{n}\Sigma_{i=1}^ni+ 3\Sigma_{i=1}^n1)$</p> <p>$=\frac{3}{n} (\frac{6}{n}\frac{n(n+1)}{2}+ 3n)$</p> <p>$=\frac{18}{n}\frac{(n+1)}{2}+ 9$</p> <p>$=\frac{9(n+1)}{n}+ 9$</p> <p>$\lim_{n\to\infty} \frac{9(n+1)}{n}+ 9 = 18$</p> <p>But the correct answer is 24. </p>
The_Sympathizer
11,172
<p>The problem lies at the very beginning. You took</p> <p>$$x^{*}_i = \frac{3i}{n}$$.</p> <p>But this is wrong. Remember, you are looking for the right endpoints of the partitioning of $[1, 4]$. So at $i = n$, this should equal 4... but it doesn't. And at $i = 1$, this should equal $1 + \frac{3}{n}$... but it doesn't. You want</p> <p>$$x^{*}_i = 1 + \frac{3i}{n}$$.</p> <p>Try it now. It should work.</p> <p>(See, the <i>right</i> endpoint of the first subinterval will be $1 + \delta x$, as the <i>left</i> endpoint is 1 and the width of the subinterval is $\delta x$. The <i>right</i> endpoint of the second subinterval will then be $1 + 2 \delta x$, and so forth, up to the last subinterval, with right endpoint $1 + n \delta x = 4$.)</p>
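<p>With the corrected endpoints $x^{*}_i = 1 + \frac{3i}{n}$, the Riemann sum works out to exactly $24 + \frac{9}{n}$, which tends to $24$. A quick Python check of this closed form:</p>

```python
def right_riemann_sum(f, a, b, n):
    """Right-endpoint Riemann sum of f on [a, b] with n subintervals."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(1, n + 1)) * dx

f = lambda x: 2 * x + 3
# The corrected endpoints 1 + 3i/n give exactly 24 + 9/n:
assert abs(right_riemann_sum(f, 1, 4, 1000) - (24 + 9 / 1000)) < 1e-9
```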
2,972,085
<p><a href="https://i.stack.imgur.com/pcOfx.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/pcOfx.jpg" alt="enter image description here"></a></p> <p>My friend showed me the diagram above, and asked me:</p> <p>"What is the area of the BLACK circle with a radius of 1 of the BLUE circle?"</p> <p>So I solved it with an algebraic method. <span class="math-container">$$$$</span></p> <p>Let the center of the <span class="math-container">$\color{black}{BLACK}$</span> circle be <span class="math-container">$(0,0)$</span>.</p> <p>We can set</p> <p><span class="math-container">$x^2 + (y-R)^2 = R^2$</span>, where <span class="math-container">$R$</span> means the radius of the <span class="math-container">$\color{red}{RED}$</span> circle.</p> <p><span class="math-container">$(x-p)^2 + (y-r)^2 = r^2 $</span>, where <span class="math-container">$(p,r)$</span> means the center of the <span class="math-container">$\color{blue}{BLUE}$</span> circle. <span class="math-container">$$$$</span> These imply</p> <p><span class="math-container">$ 2R=r+ \sqrt{p^2 + r^2}$</span></p> <p><span class="math-container">$p^2 + (R-r)^2 = (R+r)^2 $</span></p> <p>So, </p> <p><span class="math-container">$ 2r=R$</span></p> <p><span class="math-container">$$$$</span></p> <p>But he wants not an algebraic but a <strong>geometrical method</strong>.</p> <p>How can I show <span class="math-container">$ 2r=R$</span> with a <strong>geometrical method</strong>?</p> <p>Really thank you.</p> <p><span class="math-container">$$$$</span></p> <p>(Actually, I constructed the diagram with an algebraic method, </p> <p>but I'd like to know how to construct it with a geometrical method.)</p>
David K
139,123
<p>Based on the figure, one can make some inspired guesswork.</p> <p>Construct a rectangle <span class="math-container">$ABCD$</span> with side <span class="math-container">$AB$</span> of length <span class="math-container">$1$</span> and diagonal <span class="math-container">$AC$</span> of length <span class="math-container">$3.$</span> Extend <span class="math-container">$AB$</span> to <span class="math-container">$E$</span> so that <span class="math-container">$B$</span> lies between <span class="math-container">$A$</span> and <span class="math-container">$E$</span> and <span class="math-container">$AE = 2.$</span></p> <p>Construct a circle of radius <span class="math-container">$4$</span> about <span class="math-container">$A,$</span> a circle of radius <span class="math-container">$2$</span> about <span class="math-container">$E,$</span> and a circle of radius <span class="math-container">$1$</span> about <span class="math-container">$C.$</span></p> <p>Confirm that the circles about <span class="math-container">$A$</span> and <span class="math-container">$E$</span> are internally tangent, the circles about <span class="math-container">$A$</span> and <span class="math-container">$C$</span> are internally tangent, and the circles about <span class="math-container">$C$</span> and <span class="math-container">$E$</span> are externally tangent. Extend the side <span class="math-container">$AD$</span> to a diameter of the circle about <span class="math-container">$A$</span> and confirm that this diameter is tangent to both the circle about <span class="math-container">$C$</span> and the circle about <span class="math-container">$E.$</span></p> <p>Therefore the figure composed of these three circles and this diameter is congruent to the given figure, and the circles about <span class="math-container">$A,$</span> <span class="math-container">$C,$</span> and <span class="math-container">$E$</span> correspond respectively to the black, blue, and red circles.</p>
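<p>The stated tangencies can be confirmed with coordinates (a Python sketch, placing $A$ at the origin and $B=(1,0)$, so that $AC=3$ forces $BC=2\sqrt 2$):</p>

```python
from math import dist, isclose, sqrt

# Place A at the origin with B = (1, 0); AC = 3 forces BC = sqrt(9 - 1).
A = (0.0, 0.0)
B = (1.0, 0.0)
C = (1.0, 2 * sqrt(2))
E = (2.0, 0.0)               # AB extended so that AE = 2

# Internal tangency: distance between centers = difference of the radii.
assert isclose(dist(A, E), 4 - 2)    # circle(A, 4) and circle(E, 2)
assert isclose(dist(A, C), 4 - 1)    # circle(A, 4) and circle(C, 1)
# External tangency: distance between centers = sum of the radii.
assert isclose(dist(C, E), 1 + 2)    # circle(C, 1) and circle(E, 2)
# The line through A and D (the y-axis) is tangent to both smaller circles:
assert isclose(abs(C[0]), 1) and isclose(abs(E[0]), 2)
```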
2,294,997
<p>How to prove that $\displaystyle 0.02&lt;\int_0^1 \frac{x^7}{(e^x+e^{-x})\sqrt{1+x^2}}dx&lt;0.05$? I tried to use mean value theorems, but I failed.</p>
John Hughes
114,036
<p>Hint: $e^x + e^{-x}$, on $[0, 1]$, is no greater than $3.1$. And $\sqrt{1 + x^2}$ is no greater than $\sqrt{2}$. So your integrand is no less than $$ \frac{x^7}{3.1 \sqrt{2}} \ge \frac{x^7}{4.4} $$</p> <p>Now integrate. </p>
2,294,997
<p>How to prove that $\displaystyle 0,02&lt;\int_0^1 \frac{x^7}{(e^x+e^{-x})\sqrt{1+x^2}}dx&lt;0,05$? I tried to use mean value theorems, but i failed.</p>
Arpan1729
444,208
<p>$e^x+e^{-x}=2\cosh x\geq 2+x^2$ (from the Taylor series of $\cosh x$), and $\sqrt{1+x^2}\geq 1$.</p> <p>So the integral is less than the integral of $\frac{x^7}{2+x^2}$ from $0$ to $1$, which equals $\frac53-4\ln\frac32\approx 0.0448&lt;0.05$.</p> <p>$e^x+e^{-x}$ is less than $3.1$ on $(0,1]$, and $\sqrt{1+x^2}$ is at most $\sqrt{2}$. So the integrand is greater than $x^7/(3.1\sqrt{2})$.</p> <p>Now integrate: the integral is greater than $\frac{1}{8\cdot 3.1\sqrt{2}}\approx 0.0285&gt;0.02$.</p>
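<p>A numerical evaluation of the integral (composite Simpson's rule in plain Python) confirms that it does lie strictly between the two bounds:</p>

```python
import math

def f(x):
    return x**7 / ((math.exp(x) + math.exp(-x)) * math.sqrt(1 + x**2))

def simpson(g, a, b, n=1000):
    """Composite Simpson's rule with an even number n of subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(g(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

val = simpson(f, 0.0, 1.0)
assert 0.02 < val < 0.05   # the integral lies between the two bounds
```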
964,372
<p>I have a general question.</p> <p>If there is an invertible matrix and I multiply it by other matrices which are invertible, will the result also be an invertible matrix?</p> <p>My intuition says it is, but I'm not sure how to prove it.</p> <p>Any ideas? Thanks.</p>
Marko Riedel
44,883
<p>For future reference here is a derivation using combinatorial species. The species under consideration is $$\mathfrak{P}(\mathfrak{P}_{=1}(\mathcal{Z})+\mathfrak{P}_{=2}(\mathcal{Z})).$$ This gives the exponential generating function $$G(z) = \exp\left(\frac{z}{1!} + \frac{z^2}{2!}\right) = \exp\left(z+\frac{z^2}{2}\right).$$ Differentiating we have $$G'(z) = \exp\left(z+\frac{z^2}{2}\right) (1+z) = G(z) (1+z).$$ Extracting coefficients we obtain $$n! [z^n] G'(z) = A_{n+1} = n! [z^n] G(z) (1+z) = A_n + n! [z^n] z G(z) \\= A_n + n! [z^{n-1}] G(z) = A_n + n! \frac{A_{n-1}}{(n-1)!} = A_n + n A_{n-1}.$$</p> <p>This finally yields $$A_{n+1} = A_n + n A_{n-1}$$ which is the result we were trying to prove.</p> <p><strong>Remark.</strong> What we have here is a special case or rather restriction of the species $$\mathfrak{P}(\mathcal{U}\mathfrak{P}_{\ge 1}(\mathcal{Z}))$$ which yields the generating function of the Stirling numbers of the second kind $$G(z, u) = \exp(u(\exp(z)-1))$$ where $${n\brace k} = n! [z^n] \frac{(\exp(z)-1)^k}{k!}.$$</p>
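<p>The recurrence can be checked against a direct count: the species $\mathfrak{P}(\mathfrak{P}_{=1}(\mathcal{Z})+\mathfrak{P}_{=2}(\mathcal{Z}))$ counts involutions (partitions of an $n$-set into singletons and pairs), for which $A_n=\sum_k \frac{n!}{k!\,(n-2k)!\,2^k}$. A Python sketch of the comparison:</p>

```python
from math import factorial

def involutions_formula(n):
    """Number of involutions of an n-set: choose k disjoint transpositions,
    n!/(k! (n-2k)! 2^k), summed over k."""
    return sum(factorial(n) // (factorial(k) * factorial(n - 2 * k) * 2**k)
               for k in range(n // 2 + 1))

# The recurrence A_{n+1} = A_n + n A_{n-1} derived from G'(z) = G(z)(1+z):
a = [1, 1]                     # A_0 = A_1 = 1
for n in range(1, 10):
    a.append(a[n] + n * a[n - 1])

assert a[:8] == [involutions_formula(n) for n in range(8)]
```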
2,886,460
<blockquote> <p>Let $\omega$ be a complex number such that $\omega^5 = 1$ and $\omega \neq 1$. Find $$\frac{\omega}{1 + \omega^2} + \frac{\omega^2}{1 + \omega^4} + \frac{\omega^3}{1 + \omega} + \frac{\omega^4}{1 + \omega^3}.$$</p> </blockquote> <p>I have tried combining the first and third terms &amp; first and last terms. Here is what I have so far:</p> <blockquote> <p>\begin{align*} \frac{\omega}{1 + \omega^2} + \frac{\omega^2}{1 + \omega^4} + \frac{\omega^3}{1 + \omega} + \frac{\omega^4}{1 + \omega^3} &amp;= \frac{\omega}{1 + \omega^2} + \frac{\omega^4}{1 + \omega^3} + \frac{\omega^2}{1 + \omega^4} + \frac{\omega^3}{1 + \omega} \\ &amp;= \dfrac{\omega(1+\omega^3) + \omega^4(1+\omega^2)}{(1+\omega^2)(1+\omega^3)} + \dfrac{\omega^2(1+\omega) + \omega^3(1+\omega^4)}{(1+\omega^4)(1+\omega)} \\ &amp;= \dfrac{\omega + 2\omega^4 +\omega^6}{1+\omega^2 + \omega^3 + \omega^5} + \dfrac{\omega^2 + 2\omega^3 + \omega^7}{1+\omega + \omega^4 + \omega^5} \\ &amp;= \dfrac{2\omega + 2\omega^4}{2+\omega^2 + \omega^3} + \dfrac{2\omega^2 + 2\omega^3}{2+\omega+\omega^4} \end{align*}</p> </blockquote> <p>OR</p> <blockquote> <p>\begin{align*} \frac{\omega}{1 + \omega^2} + \frac{\omega^2}{1 + \omega^4} + \frac{\omega^3}{1 + \omega} + \frac{\omega^4}{1 + \omega^3} &amp;= \frac{\omega}{1 + \omega^2} + \frac{\omega^3}{1 + \omega} + \frac{\omega^4}{1 + \omega^3} + \frac{\omega^2}{1 + \omega^4} \\ &amp;= \dfrac{\omega(1+\omega) + \omega^3(1+\omega^2)}{(1+\omega)(1+\omega^2)} + \dfrac{\omega^2(1+\omega^3) + \omega^4(1+\omega^4)}{(1+\omega^3)(1+\omega^4)} \\ &amp;= \dfrac{\omega + \omega^2 + \omega^3 + \omega^5}{1+\omega + \omega^2 + \omega^3} + \dfrac{\omega^2 + \omega^4 + \omega^5 + \omega^8}{1 + \omega^3 + \omega^4 + \omega^7} \\ &amp;= \dfrac{2\omega+\omega^2+\omega^3}{1+\omega+\omega^2+\omega^4} + \dfrac{1+\omega+\omega^2+\omega^4}{1+2\omega^3+\omega^4} \end{align*}</p> </blockquote>
nonuser
463,553
<p>Expand the 2nd and 4th fractions by $\omega$ and $\omega^2$ respectively (using $\omega^5=1$): $$\frac{\omega}{1 + \omega^2} + \frac{\omega^2}{1 + \omega^4} + \frac{\omega^3}{1 + \omega} + \frac{\omega^4}{1 + \omega^3}=\frac{\omega}{1 + \omega^2} + \frac{\omega^3}{\omega+ 1} + \frac{\omega^3}{1 + \omega} + \frac{\omega}{\omega^2+1}$$</p> <p>$$=2\frac{\omega}{1 + \omega^2} + 2\frac{\omega^3}{\omega+ 1} $$ $$=2\frac{\omega^2+\omega + \omega^3+1}{(\omega^2+1)(\omega+ 1)}=2 $$</p>
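A quick numerical check of the value $2$, for every non-trivial 5th root of unity:

```python
import cmath

# Check the sum for each non-trivial 5th root of unity w = e^(2*pi*i*k/5).
for k in range(1, 5):
    w = cmath.exp(2j * cmath.pi * k / 5)
    s = (w / (1 + w**2) + w**2 / (1 + w**4)
         + w**3 / (1 + w) + w**4 / (1 + w**3))
    assert abs(s - 2) < 1e-9
```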
3,059,857
<p>We are supposed to use this formula, for which I can't find any explanation anywhere, and our teacher didn't explain anything, so if anyone could help me I would appreciate it. </p> <p><span class="math-container">$ x = A + k \times 2\pi$</span></p> <p>and</p> <p><span class="math-container">$x = \pi - A + k \times 2\pi$</span> </p> <p>where <span class="math-container">$k$</span> is supposed to be an arbitrary integer, and A in this case is <span class="math-container">$\frac{11}{9}\pi$</span> </p>
hmakholm left over Monica
14,366
<p>The first step is always the same:</p> <h1>DRAW A DIAGRAM!</h1> <p>It doesn't need to be a particularly precise diagram -- we're not going to measure anything on it, just get an overview of how things fit together. Here's a rough graph of <span class="math-container">$\sin(x)$</span> with <span class="math-container">$x=\frac{11}{9}\pi$</span> marked:</p> <p><img src="https://i.stack.imgur.com/H9fOb.png" alt="graph of sin(x)"></p> <p><span class="math-container">$\frac{11}{9}\pi$</span> is a bit more than <span class="math-container">$\pi$</span>, so the value of <span class="math-container">$\sin \frac{11}{9}\pi$</span> is somewhere between <span class="math-container">$-1$</span> and <span class="math-container">$0$</span>. We need to find all the <span class="math-container">$x$</span> that have that value as their sine.</p> <p><img src="https://i.stack.imgur.com/law7V.png" alt="graph with solutions marked"></p> <p><span class="math-container">$x=\frac{11}{9}\pi$</span> itself is certainly one solution. And we know that the sine repeats itself with a period of <span class="math-container">$2\pi$</span>, so <span class="math-container">$x=\frac{11}{9}\pi+2\pi$</span> is another solution, and <span class="math-container">$\frac{11}{9}\pi+2\pi+2\pi$</span> is yet another one, and so forth, and to the other side <span class="math-container">$\frac{11}{9}\pi-2\pi = -\frac{7}{9}\pi$</span> is also a solution, and <em>so</em> forth. This gives us <span class="math-container">$$ x = \frac{11}{9}\pi + 2k\pi, k\in\mathbb Z, $$</span> the points indicated in green here:</p> <p><img src="https://i.stack.imgur.com/aB6yq.png" alt="graph with descending points marked"></p> <p>These are all the points where the sine curve passes <em>down</em> through the red horizontal line. We're still missing the ones where the curve crosses back <em>up</em> again, but we can figure out where they are because the sine curve is mirror-symmetric about each of its troughs and crests.
So the first <em>up</em> crossing after <span class="math-container">$x=0$</span> is just as far <em>left</em> of <span class="math-container">$2\pi$</span> as our known <em>down</em> crossing is <em>right</em> of <span class="math-container">$\pi$</span>, and we get for this point <span class="math-container">$$ x = 2\pi - (\frac{11}{9}\pi - \pi) = \frac{16}{9}\pi $$</span> This point also has copies at regular <span class="math-container">$2\pi$</span> intervals to the right, so this gives the blue solutions <span class="math-container">$$ x = \frac{16}{9}\pi + 2k\pi, k\in\mathbb Z $$</span></p> <p><img src="https://i.stack.imgur.com/krufx.png" alt="diagram with even more markings"></p> <p>The <strong>complete</strong> solution is <span class="math-container">$$ x = \begin{cases} \frac{11}{9}\pi + 2k\pi &amp; k\in\mathbb Z \\ \frac{16}9 \pi + 2k\pi &amp; k \in\mathbb Z \end{cases} $$</span></p>
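Both solution families can be checked numerically; a short Python sketch (the range of $k$ is an arbitrary choice):

```python
import math

target = math.sin(11 * math.pi / 9)

# Both families hit the same sine value for every integer k tried.
for k in range(-3, 4):
    x_down = 11 * math.pi / 9 + 2 * k * math.pi   # "down" crossings
    x_up = 16 * math.pi / 9 + 2 * k * math.pi     # mirrored "up" crossings
    assert abs(math.sin(x_down) - target) < 1e-9
    assert abs(math.sin(x_up) - target) < 1e-9
```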
4,255,430
<p>I know that <strong>if</strong> both of the limits <span class="math-container">$$ \lim_{x\to a} f(x) \quad\text{and}\quad \lim_{x\to a} g(x) $$</span> exist (so they are both equal to real numbers), then <span class="math-container">$$ \lim_{x\to a} f(x) + g(x) = \lim_{x\to a} f(x) + \lim_{x\to a} g(x) $$</span> One could also use the difference or the product instead of the sum. Implied in this is that the limit of the sum of the two functions exists. It is easy to see that the converse of this is not true. That is, the limit of the sum may exist without the limits of the two functions existing.</p> <p>I can also see how this can be extended to include the cases where, say, <span class="math-container">$\lim_{x\to a} f(x) = \infty$</span> (so it doesn't exist) and <span class="math-container">$\lim_{x\to a} g(x)$</span> exists (if we understand that <span class="math-container">$\infty + b = \infty$</span> for all <span class="math-container">$b\in \mathbb{R}$</span>).</p> <p>I see also that people will use this property without first arguing that the two limits exist. For example <span class="math-container">$$ \lim_{x\to 1} x + x^2 = \lim_{x\to 1} x + \lim_{x\to 1} x^2 = 1 + 1 = 2 $$</span> That is, one possibly ought to first state that <span class="math-container">$\lim_{x\to 1} x$</span> and <span class="math-container">$\lim_{x\to 1} x^2$</span> both exist before using the rule. But in all the examples of limits that I have seen, this is never a problem. That is, I don't remember ever seeing a computation of a limit where one uses the rule assuming that the two limits exist, but where one of the limits does not in fact exist, so that one ends up with an (incorrect) answer.</p> <p><strong>Finally, here is my question: What is an example of a limit where one falsely uses the rule <span class="math-container">$$ \lim_{x\to a} f(x) + g(x) = \lim_{x\to a} f(x) + \lim_{x\to a} g(x) $$</span></strong></p>
Macavity
58,320
<p>To use inequalities to find minima, we need the equality condition to be satisfied as well, hence in this case we need <span class="math-container">$x^2=8x=\frac{64}{x^3}$</span>, which doesn't have a solution.</p> <p>If you have a &quot;lucky guess&quot; for which <span class="math-container">$x$</span> the minimum occurs (<em>and you know you have the minimum value somehow</em>), we can split the terms into equal components and then apply AM-GM. If not, there is still a way - consider the more general version:</p> <p><span class="math-container">$$x^2 + \alpha \; \frac{8x}\alpha + \beta\; \frac{64}{\beta x^3} \geqslant (1+\alpha+\beta) \left(x^2\cdot \left(\frac{8x}\alpha\right)^\alpha \cdot \left( \frac{64}{\beta x^3}\right)^\beta\right)^{\frac1{1+\alpha+\beta}} \tag{1}$$</span> <em>While it is not necessary to write the above inequality out, I am reproducing it for clarity.</em></p> <p>Now for using this inequality for minimum, the RHS must be a constant, i.e. <span class="math-container">$x-$</span>free, so we need <span class="math-container">$\alpha = 3\beta-2$</span> considering the exponents of <span class="math-container">$x$</span>.</p> <p>Further, for the equality condition we must have some <span class="math-container">$x$</span> for which <span class="math-container">$$x^2 = \frac{8x}\alpha = \frac{64}{\beta x^3} \iff x = \frac{8}{3\beta-2} = \frac{64(3\beta-2)^4}{\beta \cdot 8^4}$$</span> The last equality gives <span class="math-container">$(3\beta-2)^5=2^9\beta$</span> which can be solved (<em>well, by observation, or trying one's luck at rational roots</em>) for <span class="math-container">$\beta=2$</span>. After doing all that, you can write out (<span class="math-container">$1$</span>) with the correct <span class="math-container">$\alpha, \beta$</span> values for the desired AM-GM.</p>
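As a numeric cross-check, assume the objective is $f(x)=x^2+8x+\frac{64}{x^3}$ for $x>0$, the function appearing in the general inequality above. With $\beta=2$ and $\alpha=3\beta-2=4$, the equality point is $x=8/\alpha=2$ and the AM-GM bound evaluates to $7\cdot 16384^{1/7}=28$:

```python
def f(x):
    # Assumed objective, matching the general AM-GM setup above.
    return x**2 + 8*x + 64 / x**3

# The equality point x = 2 attains the bound 7 * 16384**(1/7) = 28.
assert abs(f(2) - 28) < 1e-12
assert abs(7 * 16384 ** (1 / 7) - 28) < 1e-9

# f stays at or above 28 on a grid of positive x.
assert min(f(0.5 + 0.01 * i) for i in range(1000)) >= 28 - 1e-9
```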
112,437
<p>I am working on a personal project involving a CloudDeploy[ ] that reads data off a Google Doc and then works with it. Ideally, the Google Doc is either a text document or a spreadsheet which contains a single string, which is what I want Mathematica to read as input. <a href="https://docs.google.com/document/d/17m1JfjEbrna7e9INZv-FXZQ9yQJd8d1Uu2LFEPyT_ZI/edit?usp=sharing">Example here.</a></p> <p>The kind of stuff I would like to do to the string is very simple, but I am stuck at the very beginning. This:</p> <pre><code>string = Import[theURLstring]; </code></pre> <p>obviously fails miserably. Can someone help?</p> <p><strong>More details</strong><br> - I looked at <a href="https://mathematica.stackexchange.com/questions/23983/error-messages-in-importing-data-file/23988#23988">this past question</a>, but it didn't help me.<br> - The reason I want to use Google Doc rather than a databin, say, is that I want my friends to add to the string without them needing to know basically any technical details. Basically I want it to be as easy as: "open the doc, add to the document, save, close" on the input side, and "open this webpage to see the results" on the output side.<br> - a document on Dropbox would be OK, but only if there is no way of doing it on Google.<br> - I also don't have much technical internet knowledge, so bear with me if I am asking the impossible or if this question is very simple.</p>
chuy
237
<p>This seems to work:</p> <pre><code>URLFetch["https://docs.google.com/document/export?format=txt&amp;\ id=17m1JfjEbrna7e9INZv-FXZQ9yQJd8d1Uu2LFEPyT_ZI&amp;token=\ AC4w5VhkHSfLIe2xvAUWQC9XHb1lAmM7Xw%3A1460476462625"] (* "When the soil were finally depleted of all nutrients, we \ realised that paper isn't edible. Or is it? Now all paper is \ digestible, we eat the wrapping of our consumer products and instead \ of recycling paperwork, we eat it. This is thanks to GM bacteria \ that can digest cellulose into human nutrients. When someone finally \ found a way to make the bacteria produce human microdoses of cocaine \ without killing itself in the process, scientist and governments \ decided it was for the best of society, GDP growth (and balance \ sheets of the corporations backing them) to put the bacteria straight \ into our guts, monthly. This was legally OK because nobody is being \ sold cocaine. Now workdays are up to 12 hours and sleeptime down to \ 4. When dating, we do the socialising at coffee shops, which sell \ cannabis, to take the edge off the day." *) </code></pre> <p>I got the URL by starting the download process and looking at the HTTP headers that were sent (you can see these in the Developer Tools for your browser). </p> <p>Might be better to present this in pieces:</p> <pre><code>url = "https://docs.google.com/document/export"; URLFetch[url, "Parameters" -&gt; {"format" -&gt; "txt", "id" -&gt; "17m1JfjEbrna7e9INZv-FXZQ9yQJd8d1Uu2LFEPyT_ZI", "token" -&gt; "AC4w5VhkHSfLIe2xvAUWQC9XHb1lAmM7Xw:1460476462625"}] </code></pre> <p>or you can save to a file with:</p> <pre><code>URLSave[ URLBuild[url, {"format" -&gt; "txt", "id" -&gt; "17m1JfjEbrna7e9INZv-FXZQ9yQJd8d1Uu2LFEPyT_ZI", "token" -&gt; "AC4w5VhkHSfLIe2xvAUWQC9XHb1lAmM7Xw:1460476462625"}], FileNameJoin[{$TemporaryDirectory, "gDoc"}]] (* "C:\\Users\\user1\\AppData\\Local\\Temp\\gDoc" *) </code></pre>
2,174,413
<p>Prove by induction that </p> <p>$$x_n=10^{(3n+2)} + 4(-1)^n\text{ is divisible by 52, when } n\in \mathbb N $$</p> <p>So far I did it like this:</p> <p>$$\text{for } n=0:$$ $$10^2+4=104$$ $$104/2=52$$ <br> $$\text{Let's assume that:}$$ $$x_n=10^{(3n+2)} + 4(-1)^n=52k$$ $$\text{or equivalently}$$ $$4(-1)^n=52k-10^{3n+2}$$</p> <p><br> $$\text{for } n+1:$$ $$\text{after some transformations I get something like this:}$$ $$52k=10^{3n+3}$$ <br> But I'm sure that I did the last step wrong. Actually I don't know when the proof is done; if you would help me I would be thankful.</p>
Arnaldo
391,612
<p>For $n+1$ you have:</p> <p>$$10^{(3n+2)+3}+4(-1)^{n+1}=10^3\cdot10^{3n+2}+4(-1)^n\cdot(-1)=\\ =10^3\cdot[52k-4(-1)^n]-4(-1)^n=10^3\cdot52k-(-1)^n(4004)$$</p> <p>but $4004=52\cdot 77$, then</p> <p>$$10^{(3n+2)+3}+4(-1)^{n+1}=52[10^3k-77(-1)^n]$$</p>
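The divisibility claim, and the constant $4004=52\cdot 77$ used in the step, can be verified directly in Python, where integer arithmetic is exact:

```python
# Direct verification of divisibility for the first 50 values of n.
for n in range(50):
    assert (10**(3*n + 2) + 4 * (-1)**n) % 52 == 0

# The constant that makes the inductive step work:
assert 4004 == 52 * 77
```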
2,174,413
<p>Prove by induction that </p> <p>$$x_n=10^{(3n+2)} + 4(-1)^n\text{ is divisible by 52, when } n\in \mathbb N $$</p> <p>So far I did it like this:</p> <p>$$\text{for } n=0:$$ $$10^2+4=104$$ $$104/2=52$$ <br> $$\text{Let's assume that:}$$ $$x_n=10^{(3n+2)} + 4(-1)^n=52k$$ $$\text{or equivalently}$$ $$4(-1)^n=52k-10^{3n+2}$$</p> <p><br> $$\text{for } n+1:$$ $$\text{after some transformations I get something like this:}$$ $$52k=10^{3n+3}$$ <br> But I'm sure that I did the last step wrong. Actually I don't know when the proof is done; if you would help me I would be thankful.</p>
Bill Dubuque
242
<p>$\begin{align}{\bf Hint}\qquad\qquad {\rm mod}\,\ 52\!:\qquad \color{#0a0}{10^{\Large 3}} (\color{#0a0}{-4})\, &amp;\equiv\, (\color{#0a0}{-4})(\color{#0a0}{-1})\\[.3em] 10^{\Large 3n+2} &amp;\equiv\, (\color{}{{-}4})(-1)^{\Large n}\qquad\! {\rm i.e.}\ \ P(n) \\[.3em] {\rm scale\ prior\ by\ 10^{\Large 3}} \Rightarrow\ \ \color{}{10^{\Large 3}}10^{\Large 3n+2} &amp;\equiv\ \color{#0a0}{10^{\Large 3}}\,(\color{#0a0}{{-}4})(-1)^{\Large n} \\[.3em] \Rightarrow\ 10^{\Large 3(\color{#c00}{n+1})+2}\! &amp;\equiv (\color{#0a0}{-4})(\color{#0a0}{-1})(-1)^{\Large n}\\[.3em] &amp;\equiv (-4)(-1)^{\Large \color{#c00}{n+1}}\ \ \ {\rm i.e.}\ \ P(\color{#c00}{n\!+\!1})\\ \end{align}$</p> <p><strong>Remark</strong> $\ $ More generally the same method shows</p> <p>$$\begin{align} a^{\Large 2}&amp;=\, b\\ a^{\Large3}b\, &amp;=\, c\ (= ab^2)\\ \Rightarrow\ a^{\Large 3n+2} &amp;=\, b\, c^{\large n}\end{align}$$</p> <p>The OP is the special case $\ a,b,c = 10,-4,-1\,$ in $\,\Bbb Z_{52} = $ the integers mod $52$.</p>
2,661,210
<p>Let $a_{1}, \dots, a_{n}$ be real numbers, not all zero; let $b_{1},\dots, b_{n}$ be real numbers; let $\sum_{1}^{n}b_{i} \neq 0$. Then do there exist real numbers $w_{1},\dots, w_{n} &gt; 0$ such that $$ \frac{\sum_{1}^{n}w_{i}a_{i}}{\sum_{1}^{n}w_{i}b_{i}} &gt; \frac{\sum_{1}^{n}a_{i}}{\sum_{1}^{n}b_{i}}? $$ Some function theory results seem relevant, but it seems that such a result is not in my current set of working knowledge. </p>
Sangchul Lee
9,340
<p>Let $U = \{\mathrm{w} \in \mathbb{R}_+^n : \langle \mathrm{w}, \mathrm{b} \rangle \neq 0 \}$ and define $f : U \to \mathbb{R}$ by</p> <p>$$ f(\mathrm{w}) = \frac{\langle \mathrm{w}, \mathrm{a} \rangle}{\langle \mathrm{w}, \mathrm{b} \rangle}. $$</p> <p>We know that $U$ is open and $\mathbf{1} = (1,\cdots,1) \in U$. Let us assume that $\mathrm{a}$ and $\mathrm{b}$ are not parallel. Then</p> <p>$$ \left. \frac{\partial f}{\partial w_i} \right|_{\mathrm{w}=\mathbf{1}} = \frac{\langle \mathbf{1}, \mathrm{b} \rangle a_i - \langle \mathbf{1}, \mathrm{a} \rangle b_i}{\langle \mathbf{1}, \mathrm{b} \rangle^2} $$</p> <p>The assumption tells us that not all $\partial f / \partial w_i$ vanish. So $\nabla f(\mathbf{1})$ is non-zero. Therefore</p> <p>$$ f( \mathbf{1} + \delta \nabla f(\mathbf{1})) = f(\mathbf{1}) + \| \nabla f (\mathbf{1}) \|^2 \delta + \mathcal{O}(\delta^2) \quad \text{as } \delta \downarrow 0 $$</p> <p>and by taking sufficiently small $\delta &gt; 0$ we can find $\mathrm{w} \in U$ with $f(\mathrm{w}) &gt; f(\mathbf{1})$.</p>
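A small numeric illustration of the gradient step (the vectors $\mathrm a$, $\mathrm b$ below are arbitrary non-parallel choices, not from the question):

```python
a = [1.0, 2.0, 3.0]
b = [2.0, 1.0, 1.0]          # sum(b) != 0 and b not parallel to a

def f(w):
    num = sum(wi * ai for wi, ai in zip(w, a))
    den = sum(wi * bi for wi, bi in zip(w, b))
    return num / den

one = [1.0, 1.0, 1.0]
Sa, Sb = sum(a), sum(b)
# Gradient of f at the all-ones vector, as computed above.
grad = [(Sb * ai - Sa * bi) / Sb**2 for ai, bi in zip(a, b)]

delta = 1e-4
w = [1.0 + delta * g for g in grad]  # small step along the gradient
assert all(wi > 0 for wi in w)       # still in the positive orthant
assert f(w) > f(one)                 # the ratio strictly increased
```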
3,479,765
<p>I am attempting to do this using Cauchy's integral theorem and formula. However, I am unable to determine whether a singularity exists at all, which I would need before applying either of those two techniques or any other theorem. </p>
user247327
247,327
<p>Multiply both sides by <span class="math-container">$x^a$</span> and <span class="math-container">$(x+ c)^b$</span> to get <span class="math-container">$$1= d_a(x+ c)^b+ d_{a-1}x(x+ c)^b+ \cdots+ d_1x^{a-1}(x+ c)^b+ e_bx^a+ e_{b-1}x^a(x+ c)+ \cdots+e_1x^a(x+ c)^{b-1}~.$$</span></p> <p>Taking <span class="math-container">$~x= 0~$</span> gives immediately <span class="math-container">$~1= c^b~d_a~$</span> so <span class="math-container">$~d_a= 1/c^b~$</span>. Taking <span class="math-container">$~x= -c~$</span> immediately gives <span class="math-container">$~1= e_b~(-c)^a~$</span> so <span class="math-container">$~e_b= 1/(-c)^a~$</span>.</p> <p>That leaves <span class="math-container">$~a-1+ b-1= a+ b- 2~$</span> values to be determined. There are no more "easy" equations, but taking <span class="math-container">$~a+ b- 2~$</span> different values for <span class="math-container">$~x~$</span> will give <span class="math-container">$~a+ b- 2~$</span> equations for the <span class="math-container">$~a+ b- 2~$</span> unknown values.</p>
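A concrete instance makes the two substitutions easy to check: for $a=b=2$, $c=1$, the decomposition is $\frac1{x^2(x+1)^2}=\frac{d_2}{x^2}+\frac{d_1}{x}+\frac{e_2}{(x+1)^2}+\frac{e_1}{x+1}$ with $d_2=1$ and $e_2=1$ from the two substitutions above; the remaining $d_1=-2$, $e_1=2$ come from matching two more values of $x$. A quick numeric verification:

```python
# Coefficients for a = b = 2, c = 1 (d1, e1 found by matching two x values).
d2, d1, e2, e1 = 1.0, -2.0, 1.0, 2.0

for x in [0.5, 1.0, 2.0, 3.0, -0.5]:
    lhs = 1 / (x**2 * (x + 1)**2)
    rhs = d2 / x**2 + d1 / x + e2 / (x + 1)**2 + e1 / (x + 1)
    assert abs(lhs - rhs) < 1e-9
```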
1,893,168
<p>$$\lim_{x\to 0} {\ln(\cos x)\over \sin^2x} = ?$$</p> <p>I can solve this by using L'Hopital's rule, but how would I do this without it?</p>
Gordon
169,372
<p>You can use the important limits: $\lim_{x\rightarrow 0}\frac{\sin x}{x} =1$ and $\lim_{x\rightarrow 0}(1+x)^{\frac{1}{x}}=e$ (i.e., $\lim_{x\rightarrow 0} \frac{\ln(1+x)}{x}=1$). Then \begin{align*} \lim_{x\rightarrow 0}\frac{\ln \cos x}{\sin^2 x} &amp;= \lim_{x\rightarrow 0}\frac{\cos x -1}{x^2}\\ &amp;=\lim_{x\rightarrow 0}\frac{-2\sin^2 \frac{x}{2}}{x^2}\\ &amp;=-\frac{1}{2}. \end{align*}</p>
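The value $-\frac12$ is easy to confirm numerically:

```python
import math

# ln(cos x)/sin(x)^2 approaches -1/2 as x -> 0.
for x in [0.1, 0.01, 0.001]:
    val = math.log(math.cos(x)) / math.sin(x) ** 2
    assert abs(val - (-0.5)) < x  # the error shrinks with x
```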
2,834,864
<p>Is it safe to assume that if $a\equiv b \pmod {35 =5\times7}$</p> <p>then $a\equiv b\pmod 5$ is also true?</p>
Angina Seng
436,618
<p>If $A=xH$, with $H$ a subgroup of $G$, then $H=\{b^{-1}a:a,b\in A\}$. So the coset $A$ determines the subgroup $H$.</p>
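A tiny concrete check in the additive group $\mathbb Z_{12}$ (the specific $H$ and $x$ are my own example):

```python
# If A = x + H is a coset of H in Z_12 (written additively), then
# {(-b) + a : a, b in A} recovers H, so the coset determines the subgroup.
H = {0, 4, 8}                       # a subgroup of Z_12
x = 5
A = {(x + h) % 12 for h in H}       # the coset x + H

recovered = {(a - b) % 12 for a in A for b in A}
assert recovered == H
```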
2,668,839
<blockquote> <p>Finding range of $$f(x)=\frac{\sin^2 x+4\sin x+5}{2\sin^2 x+8\sin x+8}$$</p> </blockquote> <p>Try: put $\sin x=t$, so that $-1\leq t\leq 1$</p> <p>So $$y=\frac{t^2+4t+5}{2t^2+8t+8}$$</p> <p>$$2yt^2+8yt+8y=t^2+4t+5$$</p> <p>$$(2y-1)t^2+4(2y-1)t+(8y-5)=0$$</p> <p>For real roots, $D\geq 0$</p> <p>So $$16(2y-1)^2-4(2y-1)(8y-5)\geq 0$$</p> <p>$$4(2y-1)^2-(2y-1)(8y-5)\geq 0$$</p> <p>$y\geq 0.5$</p> <p>Could someone help me see where I went wrong? Thanks</p>
mathlove
78,967
<p>MrYouMath has already provided a good answer.</p> <p>This answer uses <em>your method</em>.</p> <p>You already have a quadratic equation on $t$ $$(2y-1)t^2+4(2y-1)t+(8y-5)=0\tag1$$ where $y\not=\frac 12$.</p> <p>Note here that we want to find $y$ such that $(1)$ has at least one real solution $t$ satisfying $-1\le t\le 1$. </p> <p>It seems that you've missed the condition $-1\le t\le 1$.</p> <p>Let $f(t)$ be the LHS of $(1)$.</p> <p>Then, since the vertex of the parabola $Y=f(t)$ is on $t=-2$, we have $$f(-1)f(1)\le 0,$$ i.e. $$\frac 59\le y\le 1$$</p>
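The range $\left[\frac59, 1\right]$ can be confirmed by sampling $t\in[-1,1]$:

```python
def y(t):
    return (t**2 + 4*t + 5) / (2*t**2 + 8*t + 8)

# Sample t on a fine grid over [-1, 1] (endpoints included).
vals = [y(-1 + 2 * i / 10000) for i in range(10001)]
assert abs(min(vals) - 5/9) < 1e-9   # minimum at t = 1
assert abs(max(vals) - 1) < 1e-9     # maximum at t = -1
```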
960,010
<p>Two sides of a triangle are 15cm and 20cm long respectively. $A)$ How fast is the third side increasing if the angle between the given sides is 60 degrees and is increasing at the rate of $2^\circ/sec$? $B)$ How fast is the area increasing?</p> <p>$A)$ I used $c^2=a^2+b^2-2ab\cos(\theta)$ so I got the missing side $c=28.72$; is this right? Then I get confused with the implicit differentiation, where $2c\frac{dc}{dt} = 2a\frac{da}{dt} + 2b\frac{db}{dt} - 2\cos(\theta)\left(a\frac{db}{dt}+b\frac{da}{dt}\right)$. I know that 60 degrees is $\frac{\pi}{3}$ in radians, but I keep on getting the wrong answer; they said it's supposed to be $\frac{dc}{dt}=0.5$. I don't know what to substitute for $\frac{da}{dt}$ and $\frac{db}{dt}$; is it the $2^\circ/sec$? I'm so confused.</p> <p>$B)$ I also get confused here on what formula to use.</p>
Narasimham
95,860
<p>Assuming sides $a,b$ do not change in length (no constraints given), differentiate</p> <p>$$c^2 = a^2+b^2-2ab\cos\theta$$</p> <p>$$ 2 c \dot c = 2 a b \sin \theta \, \omega $$</p> <p>$$ \dot c = \dfrac{a b}{c} \sin \theta \, \omega $$</p> <p>Similarly for the area $A$:</p> <p>$$ A = \frac12 a b \sin \theta $$</p> <p>$$ \dot A = \frac{ab}2 \cos \theta\,\omega. $$ </p>
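Plugging in the question's numbers ($a=15$, $b=20$, $\theta=60^\circ$, $\omega=2^\circ/\text{sec}$) reproduces the quoted $dc/dt\approx 0.5$ and gives the area rate as well; a short Python check:

```python
import math

a, b = 15.0, 20.0
theta = math.radians(60)
omega = math.radians(2)                 # 2 degrees/sec in rad/sec

c = math.sqrt(a**2 + b**2 - 2 * a * b * math.cos(theta))
dc_dt = (a * b / c) * math.sin(theta) * omega
dA_dt = 0.5 * a * b * math.cos(theta) * omega

assert abs(c - math.sqrt(325)) < 1e-12
assert abs(dc_dt - 0.5) < 0.01          # about 0.503 cm/s
assert abs(dA_dt - 75 * math.pi / 90) < 1e-9  # about 2.618 cm^2/s
```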
163,468
<p>I have a <code>Graph</code>, and I want to group some of its vertices into communities. <code>CommunityGraphPlot</code> uses a force-directed layout, and the result doesn't look like the original graph after I apply <code>CommunityGraphPlot</code>. I don't want the vertices of the same community to be pulled close together just so that the community border line can be drawn easily. I want the vertices to stay in their positions as they were in the graph. Just drawing even some rectangles or any other shape over them will be fine for me. </p> <pre><code>CompleteGraph[8, VertexLabels -&gt; "Name"] </code></pre> <p><a href="https://i.stack.imgur.com/rsgsN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rsgsN.png" alt="Beautiful"></a></p> <pre><code>CommunityGraphPlot[ CompleteGraph[8, VertexLabels -&gt; "Name"], {{1, 2, 3}, {4, 5}, {4, 8}}] </code></pre> <p><a href="https://i.stack.imgur.com/bU4Mk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bU4Mk.png" alt="Just breaks the beauty"></a></p> <p>Given the complete graph above, <code>CommunityGraphPlot</code> just breaks its beauty; it doesn't even look like it was once a complete graph. However, with this set of vertices it could have drawn the regions easily while keeping the vertices in their positions.</p>
kglr
125
<p><strong>Update:</strong> Using <code>GraphComputation`GraphCommunitiesPlotDump`generateBlobs</code>, ... well, to generate blobs:</p> <pre><code>ClearAll[blobF, fC] fC[coords_, size_: .04] := Module[{}, CommunityGraphPlot[Graph[{}], {}]; FilledCurve @ BSplineCurve[GraphComputation`GraphCommunitiesPlotDump`generateBlobs[# &amp;, {coords}, size][[2, 1]], SplineClosed -&gt; True]] blobF[g_, cols_, coms_, size_: .04] := Thread[{cols, EdgeForm[{Gray, Thin}], Opacity[.25], fC[PropertyValue[{g, #}, VertexCoordinates] &amp; /@ #, size] &amp; /@ coms}]; {cliques, colors} = {{{1, 2, 3}, {4, 5}, {6, 7, 8}}, {Red, Green , Blue}}; cg = CompleteGraph[8]; SetProperty[cg, {Epilog -&gt; blobF[cg, colors, cliques], VertexLabels -&gt; "Name", ImagePadding -&gt; 15}] </code></pre> <p><a href="https://i.stack.imgur.com/iaq0C.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iaq0C.png" alt="enter image description here"></a></p> <pre><code>SeedRandom[1] Row[{#, SetProperty[#, {Epilog -&gt; blobF[#, colors, cliques], ImagePadding -&gt; 15}]} &amp;@ RandomGraph[{20, 15}, ImageSize -&gt; 300, VertexLabels -&gt; "Name"]] </code></pre> <p><a href="https://i.stack.imgur.com/MAbka.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MAbka.png" alt="enter image description here"></a></p> <p><strong>Original answer:</strong></p> <pre><code>{cliques, colors} = {{{1, 2, 3}, {4, 5}, {6, 7, 8}}, {Red, Green , Blue}}; lines = Thread[{colors, CapForm["Round"], JoinForm["Round"], Opacity[.5], AbsoluteThickness[20], Line[PropertyValue[{cg, #}, VertexCoordinates] &amp; /@ #] &amp; /@ cliques}]; SetProperty[CompleteGraph[8], {Epilog -&gt; lines, VertexLabels -&gt; "Name", ImagePadding -&gt; 15}] </code></pre> <p><a href="https://i.stack.imgur.com/yzyho.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yzyho.png" alt="enter image description here"></a></p> <p>Or highlight <code>Subgraph</code>s:</p> <pre><code>cg = CompleteGraph[8]; Fold[HighlightGraph[##] 
&amp;, cg, (Style[Subgraph[cg, #[[1]]], #[[2]], CapForm["Round"], JoinForm["Round"], AbsoluteThickness[20]] &amp; /@ Transpose[{cliques, colors}])] </code></pre> <p><a href="https://i.stack.imgur.com/itMBF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/itMBF.png" alt="enter image description here"></a></p>
557,426
<p>I have 5 ring oscillators whose frequencies are f1, f2, ..., f5. Each ring oscillator (RO) has 5 inverters. For each RO, I just randomly pick 3 inverters out of 5 inverters. For example, in RO1, I pick inverter 1,3,5 (Notation: RO1(1,3,5)). So I have the following:</p> <p>RO1(1,3,5) (I call this the configuration of RO1)</p> <p>RO2(1,2,4) (configuration of RO2)</p> <p>RO3(2,4,5)</p> <p>RO4(1,4,5)</p> <p>RO5(2,3,4)</p> <p>Let's say f1 &lt; f3 &lt; f4 &lt; f5 &lt; f2 (I call this a rank). So for this particular rank, I have:</p> <p>{RO1(1,3,5), RO3(2,4,5), RO4(1,4,5), RO5(2,3,4), RO2(1,2,4)}</p> <p>For a different rank, let's say f1 &lt; f2 &lt; f3 &lt; f4 &lt; f5, I will have:</p> <p>{RO1(1,3,5), RO2(1,2,4), RO3(2,4,5), RO4(1,4,5), RO5(2,3,4)}</p> <p>So as you can see, the rank affects how things are arranged.</p> <p>One trivial encoding is:</p> <p>{RO1(1,3,5), RO3(2,4,5), RO4(1,4,5), RO5(2,3,4), RO2(1,2,4)} = 000...00 (all zeros)</p> <p>{RO1(1,2,5), RO3(2,4,5), RO4(1,4,5), RO5(2,3,4), RO2(1,2,4)} = 000...01 (all zeros with a single 1)</p> <p>etc ... (repeat the whole process until we exhaust all combinations for all configurations for all ranks).</p> <p>This trivial encoding is <strong><em>NOT efficient</em></strong> because I need to store the entire mapping table in memory so that later on, I can use it for encoding.</p> <p>My question is: Is there a better way to carry out the encoding without having access to the entire mapping table? </p> <p><em><strong>Clarification:</strong></em> For 5 ROs, there are 5! different possible ranks. So I will need ceiling(log(5!))=7 bits (log here is base-2) to encode 5! different ranks.</p> <p>Each RO has (5 choose 3) different possible configurations. So for 5 ROs, there are (5 choose 3)^5=10^5 possible configuration combinations. 
Therefore, I will need ceiling(log(10^5))=17 bits to encode all possible combinations.</p> <p>So let me re-state my question in a clearer way: Given a specific rank of 5 ROs and the configurations of 5 ROs, how can I efficiently encode those information into 24 bits?</p>
Snowball
24,875
<p>Yes, there is. One such encoding follows.</p> <p>To encode the <em>rank</em>, write out the order of the frequencies as bytes. For example, $f_1 &lt; f_3 &lt; f_4 &lt; f_5 &lt; f_2$ becomes</p> <p>$$ 00000001\ 00000011\ 00000100\ 00000101\ 00000010. $$</p> <p>To encode the <em>configuration</em> of each RO, put a $1$ in the $i^\text{th}$ position if inverter $i$ is one of the 3 inverters you picked, and $0$'s elsewhere. For example,</p> <ul> <li>$\operatorname{RO}_1(1,3,5)$ becomes $10101000$,</li> <li>$\operatorname{RO}_2(1,2,4)$ becomes $11010000$.</li> </ul> <p>Now stick the bytes together end-to-end, so that you have a 10 byte sequence. Using your example of $(\operatorname{RO}_1(1,3,5), \operatorname{RO}_3(2,4,5), \operatorname{RO}_4(1,4,5), \operatorname{RO}_5(2,3,4), \operatorname{RO}_2(1,2,4))$, you'll get the following encoding: (ignore the line break)</p> <p>$$ 00000001\ 00000011\ 00000100\ 00000101\ 00000010\\ 10101000\ 11010000\ 01011000\ 10011000\ 01110000. $$</p> <p>Of course, your choice of encoding will depend on what you're doing with the data. Hopefully this was enough to get you started.</p>
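For completeness, here is one concrete way to hit the 24-bit figure from the clarification: index the rank among the $5!=120$ permutations (7 bits) and the five configurations among the $\binom53=10$ subsets each ($10^5$ combinations, 17 bits). This is only a sketch; the names and the mixed-radix packing are my own choices:

```python
from itertools import combinations, permutations

CONFIGS = list(combinations(range(1, 6), 3))   # the 10 possible 3-subsets
PERMS = list(permutations(range(1, 6)))        # the 120 possible ranks

def encode(rank, configs):
    """rank: tuple like (1,3,4,5,2); configs: five 3-element tuples."""
    r = PERMS.index(tuple(rank))               # 0..119   -> 7 bits
    c = 0
    for cfg in configs:                        # 0..10^5-1 -> 17 bits
        c = c * 10 + CONFIGS.index(tuple(sorted(cfg)))
    return r * 10**5 + c                       # < 120 * 10^5 < 2^24

def decode(code):
    r, c = divmod(code, 10**5)
    configs = []
    for _ in range(5):
        c, i = divmod(c, 10)
        configs.insert(0, CONFIGS[i])
    return PERMS[r], configs

# Example from the question: f1 < f3 < f4 < f5 < f2 with the listed configs.
rank = (1, 3, 4, 5, 2)
cfgs = [(1, 3, 5), (1, 2, 4), (2, 4, 5), (1, 4, 5), (2, 3, 4)]
code = encode(rank, cfgs)
assert code < 2**24                            # fits in 24 bits
assert decode(code) == (rank, cfgs)
```

No mapping table is stored; both directions are computed from the combinatorial indices.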
2,801,162
<p>I need help to understand how to compute this kind of limit:</p> <p>$\lim_{(x,y)\rightarrow (0,0)}\ xy\log(\lvert x\rvert+\lvert y\rvert)$</p> <p>I think we can use the squeeze theorem, but I don't know how to bound the function so that I can use the theorem. If I suppose $0 \lt \sqrt{x^2+y^2} \lt 1$, then I can start from</p> <p>$ 0 \le \lvert f(x,y)\rvert = \lvert xy\log(\lvert x\rvert+\lvert y\rvert)\rvert$</p> <p>but I'm struggling from there. Thanks in advance for the help.</p>
Clement C.
75,808
<p>Use the AM-GM inequality: $$ 0\leq 2\sqrt{\lvert xy\rvert} \leq |x|+|y| \tag{1} $$ from which $$ 0\leq 2\sqrt{\lvert xy\rvert}|\log(|x|+|y|)| \leq (|x|+|y|)|\log(|x|+|y|)|\tag{2} $$ Setting $t(x,y)\stackrel{\rm def}{=} |x|+|y|$, we have $\lim_{(x,y)\to(0,0)} t(x,y)=0$ <em>(why?)</em> and therefore since $\lim_{t\to 0}t\log t = 0$ we get by the Squeeze theorem</p> <p>$$ \lim_{(x,y)\to(0,0)}\sqrt{\lvert xy\rvert}|\log(|x|+|y|)| = 0\tag{3} $$ and also $$\lim_{(x,y)\to(0,0)}\sqrt{\lvert xy\rvert} = 0 \tag{4}$$ also by (1) and the Squeeze theorem. Can you conclude?</p>
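A numeric probe of the conclusion along one path into the origin (the path is an arbitrary choice; the limit statement itself is the two-dimensional one proved above):

```python
import math

def f(x, y):
    return x * y * math.log(abs(x) + abs(y))

# |f| shrinks monotonically along the diagonal path (10^-k, 10^-k).
vals = [abs(f(10.0**-k, 10.0**-k)) for k in range(1, 8)]
assert all(later < earlier for earlier, later in zip(vals, vals[1:]))
assert vals[-1] < 1e-12
```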
3,248,552
<p>Imagine we want to use Theon's ladder to approximate <span class="math-container">$\sqrt{3}$</span>. The appropriate expressions are <span class="math-container">$$x_n=x_{n-1}+y_{n-1}$$</span></p> <p><span class="math-container">$$y_n=x_n+2x_{n-1}$$</span></p> <p>Rungs 6 through 10 in the approximation of <span class="math-container">$\sqrt{3}$</span> are </p> <p><span class="math-container">$\{\{208, 120\},\{568, 328\}, \{1552, 896\}, \{4240, 2448\}, \{11584, 6688\}\}$</span></p> <p>a) Compute the two values in rung 11 of the ladder.</p> <p>I'm assuming that all I need to do is plug into the formula. So:</p> <p><span class="math-container">$x_{11}=x_{10}+y_{10}$</span></p> <p><span class="math-container">$x_{11}=6688+11584=18272$</span></p> <p><span class="math-container">$y_{11}=x_{11}+2x_{10}$</span></p> <p><span class="math-container">$y_{11}=18272+2(6688)=31648$</span></p> <p>Is this correct? Part b is really what I am struggling with. </p> <p>b) The figure below shows five rectangles whose dimensions correspond to rungs 6 through 10 above. That is, the lower left corner of each is at (0,0), while the upper right corners are at <span class="math-container">$(208,120),(568, 328),...,(11584,6688)$</span>. Are any of these rectangles similar to each other? Explain, briefly, your reasoning. </p> <p>All I can think is that 6688 and 120 have a gcd of 8, and the gcd of 11584 and 208 is 16. Not really sure how to articulate that this helps with the similarity of the rectangles. Thanks for the help.</p> <p><a href="https://i.stack.imgur.com/X0MOH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X0MOH.png" alt="enter image description here"></a></p>
Hw Chu
507,264
<p>The idea is that, if you can show that the ratios of the rectangles are getting closer to <span class="math-container">$\sqrt 3$</span>, then none of them should be similar to each other. So you need to have some way to track <span class="math-container">$\frac{y_n}{x_n}$</span> in each iteration.</p> <p>If you try some related expressions, you might discover that tracking <span class="math-container">$y_n^2 - 3x_n^2$</span> actually helps you a bit. Playing with the iteration relations you will get:</p> <p><span class="math-container">$$ y_n^2 - 3x_n^2 = -2(y_{n-1}^2 - 3x_{n-1}^2). $$</span></p> <p>From this, <span class="math-container">$$ \left|\left(\frac{y_n}{x_n}\right)^2-3\right| = 2\left(\frac{x_{n-1}}{x_n}\right)^2\left|\left(\frac{y_{n-1}}{x_{n-1}}\right)^2-3\right|. $$</span></p> <p>This leads to our assertion. Can you finish up the arguments?</p>
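The whole argument, including the OP's part (a), can be checked in a few lines of Python, iterating the ladder from rung 1 $=(x,y)=(1,1)$ (which reproduces the given rungs 6 through 10; note the question lists each pair with $y$ before $x$):

```python
import math

x, y = 1, 1                         # rung 1
rungs = [(x, y)]
for _ in range(10):
    x, y = x + y, 3 * x + y         # y_n = x_n + 2x_{n-1} = 3x_{n-1} + y_{n-1}
    rungs.append((x, y))

assert rungs[5] == (120, 208)       # rung 6 matches the given data
assert rungs[10] == (18272, 31648)  # rung 11, part (a)

# y^2 - 3x^2 only flips sign and doubles, so (y/x)^2 - 3 is a different
# nonzero number at every rung: no two of the rectangles are similar.
d = [yy * yy - 3 * xx * xx for xx, yy in rungs]
assert all(d[i + 1] == -2 * d[i] for i in range(len(d) - 1))
assert abs(rungs[10][1] / rungs[10][0] - math.sqrt(3)) < 1e-5
```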